June 3, 2014

Seismic Signal Processing (ICASSP 2014)

There was a special session on "Seismic Signal Processing" at the International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014, in Florence. Our talk was on simplified optimization techniques for removing multiple reflections via adaptive filtering in wavelet frame domains.

Random and structured noise both affect seismic data, hiding the reflections of interest (primaries) that carry meaningful geophysical interpretation. When the structured noise is composed of multiple reflections, its adaptive cancellation is obtained through time-varying filtering, compensating inaccuracies in given approximate templates. The under-determined problem can then be formulated as a convex optimization one, providing estimates of both filters and primaries. Within this framework, the criterion to be minimized mainly consists of two parts: a data fidelity term and hard constraints modelling a priori information. This formulation may avoid, or at least facilitate, some parameter determination tasks, usually difficult to perform in inverse problems. Not only classical constraints, such as sparsity, are considered here, but also constraints expressed through hyperplanes, onto which the projection is easy to compute. The latter constraints lead to improved performance by further constraining the space of geophysically sound solutions.
This paper focuses on the constrained convex formulation of adaptive multiple removal. The proposed approach, based on proximal methods, is quite flexible and allows us to integrate a wide range of hard constraints corresponding to a priori knowledge on the data to be estimated (i.e. primary signal and time-varying filters). A key observation is that some of the related constraint sets can be expressed through hyperplanes, which are not only more convenient to design, but also easier to implement through straightforward projections. Since sparsifying transforms and constraints strongly interact [Pham-2014-TSP], we now study the class of hyperplane constraints of interest as well as their inner parameters, together with their extension to higher dimensions.
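
The hyperplane constraints mentioned above admit a closed-form projection, which is what makes them cheap inside a proximal scheme. A minimal sketch (generic formula, with made-up numbers, not the paper's actual constraint sets): projecting x onto {y : <a, y> = b}.

```python
def project_hyperplane(x, a, b):
    """Project x onto the hyperplane {y : <a, y> = b} (closed form)."""
    dot_ax = sum(ai * xi for ai, xi in zip(a, x))
    norm2 = sum(ai * ai for ai in a)
    t = (dot_ax - b) / norm2
    return [xi - t * ai for xi, ai in zip(x, a)]

x = [3.0, 1.0, -2.0]      # hypothetical point
a = [1.0, 2.0, 2.0]       # hypothetical hyperplane normal
b = 4.0
p = project_hyperplane(x, a, b)
# the projection satisfies the constraint exactly
residual = sum(ai * pi for ai, pi in zip(a, p)) - b
```

The correction is simply a rescaled copy of the normal vector, so the cost is one dot product per constraint.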

May 30, 2014

Sparse template-based adaptive filtering

Significance index related to Student's t-test
The phenomenon of structured noise arises in several real-life signal processing contexts: acoustic echo cancellation (AEC) in sound and speech, non-destructive testing where transmitted waves may rebound at material interfaces (e.g. ultrasound), or pattern matching in images. Here, the context is seismic reflection or seismology: weak signals of interest are buried under both strong random and structured noise. Provided appropriate templates are available, we propose a structured-pattern filtering algorithm (called Ricochet) based on constrained adaptive filtering in a transformed domain. Its generic methodology imposes sparsity on wavelet frame coefficients (Haar, Daubechies, Symmlets), using the L-1 or Manhattan norm, as well as on the adaptive filter coefficients, using concentration measures (for sparser filters in the time domain): the L-1 norm, the squared L-2 (Frobenius) norm, and the mixed L-1,2 norm. Regularity properties are constrained as well, for instance slow variation of the adaptive filter coefficients (uniform, Chebyshev or L-infinity norm). Quantitative results are given with a significance index, reminiscent of Student's t-test.
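
The concentration measures listed above are one-liners; a minimal sketch (the filter taps below are made up for illustration, not data from the paper):

```python
import math

h = [0.5, -0.2, 0.0, 0.3]          # hypothetical adaptive filter taps
H = [[0.5, -0.2], [0.0, 0.3]]      # same taps grouped per time window

l1   = sum(abs(v) for v in h)                    # L-1 / Manhattan norm
l2sq = sum(v * v for v in h)                     # squared L-2 (Frobenius) norm
linf = max(abs(v) for v in h)                    # L-infinity / Chebyshev norm
# mixed L-1,2 norm: L-2 norm within each group, L-1 norm across groups
l12  = sum(math.sqrt(sum(v * v for v in row)) for row in H)
```

The mixed norm promotes sparsity across groups while keeping each group's taps jointly active, which is why it is listed separately from plain L-1.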


Seismic data: primaries and multiples
Lost in multiples: a creeping primary (flat, bottom-right)
Abstract: Unveiling meaningful geophysical information from seismic data requires dealing with both random and structured "noises". As their amplitude may be greater than that of the signals of interest (primaries), additional prior information is especially important for efficient signal separation. We address here the problem of multiple reflections, caused by the wave field bouncing between layers. Since only approximate models of these phenomena are available, we propose a flexible framework for time-varying adaptive filtering of seismic signals, using sparse representations based on inaccurate templates. We recast the joint estimation of adaptive filters and primaries in a new convex variational formulation. This approach allows us to incorporate plausible knowledge about noise statistics, data sparsity and slow filter variation in parsimony-promoting wavelet frames. The designed primal-dual algorithm solves a constrained minimization problem that alleviates standard regularization issues in finding hyper-parameters. The approach demonstrates good performance in low signal-to-noise ratio conditions, both for simulated and real field seismic data.

All the metrics here are convex. Wait a bit for something completely different with non-convex penalties...

May 27, 2014

Postdoc position: Very large data management in Geosciences

Geophysical mesh at two resolutions
So we (IFPEN) have a postdoc position on "Very large data management in Geosciences" (gestion des très gros volumes de données en géosciences), with details at: 

Abstract: The main purpose of the post-doctoral work is to propose new data compression techniques for volumetric meshes, able to manage seismic data values attached to geometry elements (nodes or cells), with adaptive decompression for post-processing functionalities (visualization). Compression algorithms adapted to "big data" will extend the scalability of our current software, for instance geoscience fluid-flow simulation or transport combustion simulation on very large meshes. The results obtained are intended to contribute to an IFPEN scientific challenge concerning very large data management, with a target of being able to process billions of cells or data samples. The results will also be used to propose new software solutions for the storage, transfer and processing (exploration, visualization) of these large data sets.
 
Seismic data compression and denoising
Résumé (translated): The aim of this post-doctoral work is to propose new compression methods for data and volumetric meshes, able to handle properties attached to the geometry (cell connectivity, spatial groups of seismic traces), possibly evolving over time, while allowing progressive decompression suited to visualization and processing. Compression algorithms for volumetric data would make it possible to exploit them in the software tools that handle very large data sets (porous-media flow simulation in geosciences or combustion simulation in transport). The results obtained are expected to contribute to the technological challenge of very large data volumes, with a target set at one billion cells or samples. The results will notably be used for the storage, the transfer, and also the manipulation (exploration, visualization) of these very large volumes.
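
As a rough illustration of the kind of progressive, transform-based compression involved (a toy sketch with a Haar transform and zlib, assuming nothing about IFPEN's actual codecs):

```python
import struct, zlib

def haar_fwd(x):
    """Haar analysis: per level, averages first, then details."""
    out, n = list(x), len(x)
    while n > 1:
        half = n // 2
        a = [(out[2 * i] + out[2 * i + 1]) / 2 for i in range(half)]
        d = [(out[2 * i] - out[2 * i + 1]) / 2 for i in range(half)]
        out[:n] = a + d
        n = half
    return out

def haar_inv(c):
    """Haar synthesis: undo the analysis level by level."""
    out, n = list(c), 1
    while n < len(c):
        a, d = out[:n], out[n:2 * n]
        out[:2 * n] = [v for i in range(n) for v in (a[i] + d[i], a[i] - d[i])]
        n *= 2
    return out

# toy "seismic trace": a smooth ramp, so fine-scale details are tiny
x = [i / 64 for i in range(64)]
coeffs = haar_fwd(x)
kept = [c if abs(c) >= 0.01 else 0.0 for c in coeffs]  # threshold small details
raw = struct.pack("64d", *x)
packed = zlib.compress(struct.pack("64d", *kept))
decoded = haar_inv(list(struct.unpack("64d", zlib.decompress(packed))))
```

Decoding the coarse coefficients first and the details later is what gives the "progressive decompression" behaviour: a usable low-resolution view is available before the full data stream arrives.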

May 10, 2014

Computational Harmonic Analysis: Winter School

This message was communicated to me by Caroline Chaux, to share:

Computational Harmonic Analysis: Winter School, Marseille, October 2014

We are pleased to announce the winter school on Computational Harmonic Analysis - with Applications to Signal and Image Processing, to be held on October 20-24, 2014, in Marseille, France (at CIRM).

The topics will be:
  • Mathematical and numerical aspects of frame theory
  • Time-frequency frames and applications to audio analysis
  • Wavelets, shearlets and geometric frames (and other *-lets or directional wavelets)
  • Inverse problems and optimization
This winter school will bring together PhD students and young postdocs (as well as a few experts) in the field of computational harmonic analysis, in order to explain the background, the efficiency and the range of application of a number of numerical algorithms based on the Fourier, wavelet and short-time Fourier transforms (time-frequency and Gabor analysis), as well as other atomic decomposition techniques, in particular in higher dimensions (shearlets, curvelets, ...).
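
As a tiny numerical appetizer for the frame-theory topic (my own toy example, not part of the school's program): the three "Mercedes-Benz" unit vectors in the plane form a tight frame with frame bound 3/2, so analysis followed by rescaled synthesis reconstructs any vector exactly despite the redundancy.

```python
import math

# three unit vectors at 120 degrees: a tight frame for R^2, frame bound 3/2
angles = [math.pi / 2 + k * 2 * math.pi / 3 for k in range(3)]
frame = [(math.cos(t), math.sin(t)) for t in angles]

x = (1.0, 2.0)                                           # arbitrary test vector
coeffs = [ex * x[0] + ey * x[1] for ex, ey in frame]     # analysis
# synthesis: x = (2/3) * sum_k <x, e_k> e_k  (tightness => exact reconstruction)
rec = [2 / 3 * sum(c * e[i] for c, e in zip(coeffs, frame)) for i in range(2)]
```

The frame bound shows up in Parseval's relation: the coefficient energy equals 3/2 times the signal energy.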

There is a wide range of topics to be covered, from the theoretical background (from infinite-dimensional settings, expressed in terms of function spaces, to finite-dimensional situations) to the development of efficient algorithms and real-world applications to music and sound processing or image analysis tasks. Mathematically oriented lectures will be complemented with practical computer sessions.

The school will be limited to 40 participants. Registration is free but mandatory, by June 30th, 2014. Participants may present their work during poster sessions if they wish; abstracts can be submitted until September 1st, 2014.

More information can be found on the dedicated website:
http://feichtingertorresani.weebly.com/information2.html

May 9, 2014

Three-band linear gutter-bank in Florence (ICASSP 2014)
ICASSP 2014 in Florence has just ended. The slogan was "The art of signal processing". In Florence, Art is indeed everywhere, and science, signal processing included, is not very far apart.

Take for instance this example of an analysis/synthesis three-band, apparently linear, complex gutter-bank. I do suspect a certain redundancy I cannot yet understand. Is it related to other diffusion-based filter-banks?

May 5, 2014

Signal processing for chemical sensing (OGST Special issue)

OGST (Oil & Gas Science and Technology) has just published a special issue on "Advances in signal processing and image analysis for physicochemical, analytical chemistry and chemical sensing", vol. 69, number 2 (March-April 2014). It somehow parallels the ICASSP 2013 special session on Signal Processing for Chemical Sensing. Moreover, a contributed book is planned on the topic.

The editorial (F. Rocca and L. Duval) deals with the informational content of data, sensory principles and, of course, the law of parsimony (beautifully illustrated in "The Name of the Rose"), Ockham's razor; in other words, sparsity, a common aspect of recent signal processing techniques. So why is the topic interesting for chemical engineers and scientists?

With the advent of more affordable, higher resolution or innovative data acquisition techniques (for instance hyphenated instrumentation such as two-dimensional chromatography), the need for advanced signal and image processing tools has grown in physico-chemical analysis, together with the quantity and complexity of acquired measurements.
With either one-dimensional (signals) or two-dimensional (from hyphenated techniques to standard images) data, processing generally aims at improving quality and at providing more precise quantitative assessment of measurements of materials and products, to yield insight into or access to information, chemical properties, reactive dynamics or textural properties, to name a few. Although chemometrics embraces everything from experimental design to calibration, more interplay between physico-chemical analysis and generic signal and image processing is believed to strengthen both disciplines. Indeed, although they strongly differ in background and vocabulary, both specialties share similar values of best practice in carrying out identifications and comprehensive characterizations, be they of samples or of numerical data.

The present call for papers aims at gathering contributions on recent progress and emerging trends concerning (but not limited to):
  • 1D and 2D acquisition, sparse sampling (compressive sensing), modulation/demodulation, compression, background/baseline/trend estimation, enhancement, integration, smoothing and filtering, denoising, differentiation, detection, deconvolution and source separation, resolution improvement, peak or curve fitting and matching, clustering, segmentation, multiresolution analysis, mathematical morphology, calibration, multivariate curve resolution, property prediction, regression, data mining, tomography, visualization,
pertaining to the improvement of physico-chemical analysis techniques, including (not exclusively):
  • (high-performance) gas, liquid or ion chromatography; gel electrophoresis; diode array detector; Ultraviolet (UV), visible, Infrared (NIR, FIR), Raman or Nuclear Magnetic Resonance (NMR) spectroscopy, X-ray diffraction (XRD), X-Ray Absorption (EXAFS, XANES), mass spectrometry; photoacoustic spectroscopy (PAS); porosimetry; hyphenated techniques; ion-sensitive sensors, artificial noses; electron microscopy (SEM, TEM),
in the following proposed domains:
  • catalysis, chemical engineering, oil and gas production, refining processes, petrochemicals, and other sources of energy, in particular alternative energies with a view to sustainable development. 
    NMR data analysis: A time-domain parametric approach using adaptive subband decomposition [pdf], E.-H. Djermoune, M. Tomczak and D. Brie
    Abstract:
    This paper presents a fast time-domain data analysis method for one- and two-dimensional Nuclear Magnetic Resonance (NMR) spectroscopy, assuming Lorentzian lineshapes, based on an adaptive spectral decomposition. The latter is achieved through successive filtering and decimation steps ending up in a decomposition tree. At each node of the tree, the parameters of the corresponding subband signal are estimated using some high-resolution method. The resulting estimation error is then processed through a stopping criterion which allows one to decide whether the decimation should be carried on or not. Thus the method leads to an automated selection of the decimation level and consequently to a signal-adaptive decomposition. Moreover, it enables one to reduce the processing time and makes the choice of the usual free parameters easier, compared to the case where the whole signal is processed at once. The efficiency of the method is demonstrated using 1-D and 2-D 13C NMR signals.
Inverse Problem Approach for Alignment of Electron Tomographic series [pdf], V.-D. Tran, M. Moreaud, É. Thiébaut, L. Denis and J.-M. Becker
Abstract:
In the refining industry, morphological measurements of particles have become an essential part of the characterization of catalyst supports. Through these parameters, one can infer the specific physicochemical properties of the studied materials. One of the main acquisition techniques is electron tomography (or nanotomography). 3D volumes are reconstructed from sets of projections from different angles made by a Transmission Electron Microscope (TEM). This technique provides real three-dimensional information at the nanometric scale. A major issue in this method is the misalignment of the projections that contribute to the reconstruction. The current alignment techniques usually employ fiducial markers such as gold particles for a correct alignment of the images. When the use of markers is not possible, the correlation between adjacent projections is used to align them. However, this method sometimes fails. In this paper, we propose a new method based on the inverse problem approach, in which a certain criterion is minimized using a variant of the Nelder-Mead simplex algorithm. The proposed approach is composed of two steps. The first step consists of an initial alignment process, which relies on the minimization of a cost function based on robust statistics measuring the similarity of a projection to its previous projections in the series. It reduces strong shifts resulting from the acquisition between successive projections. In the second step, the pre-registered projections are used to initialize an iterative alignment-refinement process which alternates between (i) volume reconstructions and (ii) registrations of measured projections onto simulated projections computed from the volume reconstructed in (i). At the end of this process, we have a correct reconstruction of the volume, the projections being correctly aligned. Our method is tested on simulated data and shown to estimate accurately the translation, rotation and scale of arbitrary transforms.
We have successfully tested our method with real projections of different catalyst supports.
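
The correlation-based pre-alignment mentioned in the abstract can be caricatured in a few lines (a toy 1-D version with a made-up pulse, far simpler than the paper's 2-D robust criterion):

```python
def best_shift(ref, moved, max_lag):
    """Return the integer lag maximizing the correlation of `moved` with `ref`."""
    def corr(lag):
        return sum(ref[i] * moved[i + lag]
                   for i in range(len(ref))
                   if 0 <= i + lag < len(moved))
    return max(range(-max_lag, max_lag + 1), key=corr)

# a made-up projection profile and a copy shifted by 7 samples
pulse = [1.0, 3.0, 5.0, 3.0, 1.0]
ref = [0.0] * 40
moved = [0.0] * 40
shift = 7
for i, v in enumerate(pulse):
    ref[15 + i] = v
    moved[15 + shift + i] = v

estimated = best_shift(ref, moved, max_lag=10)
```

Correlation peaks at the true lag when the two profiles really are shifted copies; the paper's robust statistics address exactly the cases where this simple criterion fails.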
Abstract:
Grazing Incidence X-ray Diffraction (GIXD) is a widely used characterization technique, applied for the investigation of the structure of thin films. As far as organic films are concerned, the confinement of the film to the substrate results in anisotropic 2-dimensional GIXD patterns, such as those observed for polythiophene-based films, which are used as active layers in photovoltaic applications. Potential malfunctions of the detectors utilized may distort the quality of the acquired images, thus affecting the analysis process and the structural information derived. Motivated by the success of Morphological Component Analysis (MCA) in image processing, we tackle in this study the problem of recovering the missing information in GIXD images due to potential detector malfunction. First, we show that the geometrical structures which are present in the GIXD images can be represented sparsely by means of a combination of over-complete transforms, namely, the curvelet and the undecimated wavelet transform, resulting in a simple and compact description of their inherent information content. Then, the missing information is recovered by applying MCA in an inpainting framework, by exploiting the sparse representation of GIXD data in these two over-complete transform domains. The experimental evaluation shows that the proposed approach is highly efficient in recovering the missing information in the form of either randomly burned pixels or whole burned rows, even at the order of 50% of the total number of pixels. Thus, our approach can be applied for healing any potential problems related to detector performance during acquisition, which is of high importance in synchrotron-based experiments, since the beamtime allocated to users is extremely limited and any technical malfunction could be detrimental to the course of the experimental project. Moreover, the fact that long acquisition times or repeated measurements become unnecessary, which stems from our results, adds extra value to the proposed approach.

Abstract:
Real-world experiments are becoming increasingly complex, needing techniques capable of tracking this complexity. Signal-based measurements are often used to capture this complexity, where a signal is a record of a sample's response to a parameter (e.g. time, displacement, voltage, wavelength) that is varied over a range of values. In signals, the responses at each value of the varied parameter are related to each other, depending on the composition or state of the sample being measured. Since signals contain multiple information points, they have rich information content but are generally complex to comprehend. Multivariate Analysis (MA) has profoundly transformed their analysis by allowing gross simplification of the tangled web of variation. In addition, MA has the advantage of being much more robust to the influence of noise than univariate methods of analysis. In recent years, there has been a growing awareness that the nature of multivariate methods allows exploitation of their benefits for purposes other than data analysis, such as pre-processing of signals with the aim of eliminating irrelevant variations prior to analysis of the signal of interest. It has been shown that exploiting multivariate data reduction in an appropriate way can allow high-fidelity denoising (removal of irreproducible non-signals), consistent and reproducible noise-insensitive correction of baseline distortions (removal of reproducible non-signals), accurate elimination of interfering signals (removal of reproducible but unwanted signals) and the standardisation of signal amplitude fluctuations. At present, the field is relatively small but the possibilities for much wider application are considerable.
Where signal properties are suitable for MA (such as the signal being stationary along the x-axis), these signal-based corrections have the potential to be highly reproducible and highly adaptable, and are applicable in situations where the data are noisy or where the variations in the signals are complex. As science seeks to probe datasets in less and less tightly controlled situations, the ability to provide high-fidelity corrections in a very flexible manner is becoming more critical, and multivariate-based signal processing has the potential to provide many solutions.
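
The multivariate denoising idea can be sketched as a rank-1 truncation computed by power iteration (toy data with one common peak shape; not any specific chemometric pipeline):

```python
import math, random
random.seed(1)

n_sig, n_pts = 20, 50
# one common peak shape, scaled differently in each of 20 noisy signals
shape = [math.exp(-((j - 25) / 6.0) ** 2) for j in range(n_pts)]
amps = [1.0 + 0.5 * random.random() for _ in range(n_sig)]
clean = [[a * s for s in shape] for a in amps]
noisy = [[v + random.gauss(0, 0.1) for v in row] for row in clean]

# top principal direction of the (uncentered) data via power iteration
v = [1.0] * n_pts
for _ in range(100):
    proj = [sum(r * vi for r, vi in zip(row, v)) for row in noisy]   # X v
    w = [sum(noisy[i][j] * proj[i] for i in range(n_sig)) for j in range(n_pts)]
    norm = math.sqrt(sum(t * t for t in w))
    v = [t / norm for t in w]

# rank-1 reconstruction: each signal replaced by its projection on v
denoised = []
for row in noisy:
    p = sum(r * vi for r, vi in zip(row, v))
    denoised.append([p * vj for vj in v])

def rmse(A, B):
    return math.sqrt(sum((a - b) ** 2 for ra, rb in zip(A, B)
                         for a, b in zip(ra, rb)) / (n_sig * n_pts))
```

Because the underlying variation is (by construction) one-dimensional, discarding all but the first component removes most of the irreproducible noise while keeping the reproducible signal.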
Design of Smart Ion-selective Electrode Arrays based on Source Separation through Nonlinear Independent Component Analysis [pdf] Leonardo T. Duarte and Christian Jutten
Abstract:
The development of chemical sensor arrays based on Blind Source Separation (BSS) provides a promising solution to overcome the interference problem associated with Ion-Selective Electrodes (ISE). The main motivation behind this new approach is to ease the time-demanding calibration stage. While the first works on this problem only considered the case in which the ions under analysis have equal valences, the present work aims at developing a BSS technique that works when the ions have different charges. In this situation, the resulting mixing model belongs to a particular class of nonlinear systems that have never been studied in the BSS literature. In order to tackle this sort of mixing process, we adopted a recurrent network as the separating system. Moreover, concerning the BSS learning strategy, we develop a mutual information minimization approach based on the notion of the differential of the mutual information. The method requires batch operation and can thus be used to perform off-line analysis. The validity of our approach is supported by experiments where the mixing model parameters were extracted from actual data.
Unsupervised segmentation of hyperspectral images with spatialized Gaussian mixture model and model selection [pdf] Serge Cohen, Erwan Le Pennec
Abstract:
In this article, we describe a novel unsupervised spectral image segmentation algorithm. This algorithm extends the classical Gaussian Mixture Model-based unsupervised classification technique by incorporating a spatial flavor into the model: the spectra are modeled by a mixture of K classes, each with a Gaussian distribution, whose mixing proportions depend on the position. Using a piecewise constant structure for those mixing proportions, we are able to construct a penalized maximum likelihood procedure that estimates the optimal partition as well as all the other parameters, including the number of classes. We provide a theoretical guarantee for this estimation, even when the generating model is not within the tested set, and describe an efficient implementation. Finally, we conduct some numerical experiments of unsupervised segmentation on a real dataset.
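
For reference, the plain (non-spatial) Gaussian mixture baseline that this paper extends can be fit with a few lines of EM. A toy 1-D sketch with made-up data; the position-dependent mixing proportions and model selection are the paper's contribution and are not shown here.

```python
import math, random
random.seed(0)

# two well-separated 1-D classes, 200 samples each
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]

def pdf(x, m, s):
    return math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

# crude initialization from the data range
mu, sg, pi = [min(data), max(data)], [1.0, 1.0], [0.5, 0.5]
for _ in range(50):
    # E-step: posterior responsibility of each class for each point
    resp = []
    for x in data:
        p = [pi[k] * pdf(x, mu[k], sg[k]) for k in range(2)]
        s = p[0] + p[1]
        resp.append([pk / s for pk in p])
    # M-step: responsibility-weighted means, deviations and proportions
    for k in range(2):
        nk = sum(r[k] for r in resp)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sg[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                              for r, x in zip(resp, data)) / nk)
        pi[k] = nk / len(data)
```

The spatialized model replaces the global proportions pi[k] by piecewise constant maps over the image, which is what couples the classification to position.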

May 3, 2014

ICASSP 2014: Tutorials "sive" Florence monuments

Starting tomorrow, the International Conference on Acoustics, Speech and Signal Processing hosts 15 tutorials on solid topics, ranging from convex optimization to big data and signal processing on graphs. 

If you are wealthy enough to have registered, you may download the tutorial support pdf files from the given links, and uncompress them with the password provided with your registration. If not, sive, well, we are in magnificent Florence; at least 12 key places are worth paying a visit, namely:
palazzovecchio, fortezzadabasso, pontevecchio, santamariadelfiore, palazzopitti, santamarianovella, giardinodiboboli, santacroce, piazzalemichelangelo, campaniledigiotto, sanlorenzo, corridoiovasariano.
If you know three other hidden places, feel free to tell.

T1 - Statistical Signal Processing for Graphs
Subject Area: Fundamentals
Speakers: Nadya T. Bliss (Arizona State University), Alfred O. Hero (University of Michigan, Ann Arbor), Benjamin A. Miller (MIT Lincoln Laboratory)

T2 - Monotone Operator Splitting Methods in Signal and Image Recovery
Subject Area: Image Processing
Speakers: P.L. Combettes (Université Pierre et Marie Curie – Paris 6), J.-C. Pesquet (Université Paris-Est), and N. Pustelnik (ENS de Lyon)

T3 - Informed Audio Source Separation: Trends, Approaches and Algorithms
Subject Area: Speech/Audio/Language Processing
Speaker: Alexey Ozerov (Technicolor), Antoine Liutkus (INRIA, Nancy Grand Est) and Gaël Richard (Telecom ParisTech)

T4 - Signal Processing for Analog Systems
Subject Area: Signal Processing System Design and Implementation
Speakers: Arthur J. Redfern, Manar El-Chammas and Lei Ding (Texas Instruments)

T5 - Transmitter Cooperation in Wireless Networks: Potential and Challenges*
Subject Area: Communications
Speakers: David Gesbert and Paul de Kerret (EURECOM)

T6 - Signal Processing for Big Data
Subject Area: Fundamentals
Speakers: G.B. Giannakis, Konstantinos Slavakis (University of Minnesota), Gonzalo Mateos (Carnegie Mellon University)

T7 - Semidefinite Relaxation: From Theory to Applications to Latest Advances*
Subject Area: Fundamentals
Speakers: Wing-Kin Ma and Anthony Man-Cho So (The Chinese University of Hong Kong)

T8 - EEG Signal Processing and Classification for Brain Computer Interfacing (BCI) Applications
Subject Area: Biomedical signal processing
Speakers: Amit Konar (Jadavpur University), Fabien Lotte (INRIA-Bordeaux Sud-Ouest), Arijit Sinharay (Tata Consultancy Services Ltd)

T9 - Deep learning for natural language processing and related applications
Subject Area: Speech/Audio/Language Processing
Speakers: Xiaodong He, Jianfeng Gao, Li Deng (Microsoft Research)

T10 - Bits and Flops in modern communications: analyzing complexity as the missing piece of the wireless-communication puzzle
Subject Area: Communications
Speakers: Petros Elia (EURECOM) and Joakim Jaldén (Royal Institute of Technology, KTH, Sweden)

T11 - An introduction to sparse stochastic processes
Subject Area: Fundamentals
Speaker: Michael Unser (EPFL)

T12 - Factoring Tensors in the Cloud: A Tutorial on Big Tensor Data Analytics
Subject Area: Fundamentals
Speakers: Nicholas Sidiropoulos (University of Minnesota) and Evangelos Papalexakis (Carnegie Mellon University)

T13 - Complex elliptically symmetric distributions and their applications in signal processing
Subject Area: Statistical Signal Processing
Speakers: Esa Ollila (Aalto University, Finland), David E. Tyler (Rutgers University) and Frederic Pascal (SUPELEC)

T14 - Signal Processing for Finance, Economics and Marketing Modeling and Information Processing*
Subject Area: Financial data analysis
Speakers: Xiao-Ping (Steven) Zhang (Ryerson University), Fang Wang (Wilfrid Laurier University)

T15 - Signal Processing in Power Line Communication Systems
Subject Area: Communications
Speaker: Andrea M. Tonello (University of Udine, Italy)

February 3, 2014

ICASSP 2014: Seismic processing special session

Galileo inclined Planes and Curves for motion study
First good news: our seismic data processing paper has been accepted to ICASSP 2014 (a just-below-50% acceptance rate: 1745 out of 3492). Second good news: it is in Florence, a magnificent city, not only for arts (the conference is called "The Art of Signal Processing") but for science too. The Galileo (1564-1642) History of Science Museum is a must-see. At our epoch, we can afford running time-consuming dumb computations, changing only one hyper-parameter at a time to solve inverse restoration problems. We benefit from neat integro-differential settings to solve convex optimization problems. It is difficult to imagine a time when people endeavored the making of science as we know it today with very imprecise time measurements: imagine you want to compare the duration of free fall for two objects (a 1 kg lead ball and a 1 kg feather ball, for instance) with a water clock, or clepsydra. Infinitesimal calculus blossomed with Leibniz and Newton between 1666 and 1674. Yet, people were able to "prove" important properties of motion, such as the tautochronism of the cycloid (the curve traced by a point on a rolling circle). Take a wooden gutter in the shape of a cycloid and drop two balls from different heights along the wooden frame: they arrive at the bottom at the same time. The video is available here, at the Galileo virtual Museum. This "tautochrone" phenomenon (meaning "same time" in Greek) was discovered by Christiaan Huygens around the 1650s. It was instrumental in the development of "more perfect pendulum motions", at the core of modern pendulum clocks, which allowed increased precision in science, as well as farther and less hazardous boat trips around the world. The clock-making industry is related to the longitude problem.
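
Huygens' tautochrone property is easy to check numerically. On a cycloid of rolling radius r, height relates to arc length s (measured from the bottom) by y = s²/(8r), so a bead sliding without friction obeys simple harmonic motion s̈ = -(g/(4r))·s and reaches the bottom after π·sqrt(r/g) seconds whatever its starting point. A quick simulation (my own sketch):

```python
import math

def time_to_bottom(s0, r=1.0, g=9.81, dt=1e-5):
    """Integrate s'' = -(g/(4r)) s from rest at arc length s0; time of first s = 0."""
    s, v, t = s0, 0.0, 0.0
    while s > 0.0:
        v += -(g / (4 * r)) * s * dt   # semi-implicit Euler step
        s += v * dt
        t += dt
    return t

t_high = time_to_bottom(2.0)   # ball released far from the bottom
t_low = time_to_bottom(0.5)    # ball released close to the bottom
expected = math.pi * math.sqrt(1.0 / 9.81)   # quarter period, about 1.003 s
```

The two descent times agree with each other and with the closed-form quarter period, amplitude notwithstanding, which is exactly the tautochrone property.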

The special session: Seismic Signal Processing is organized by Leonardo Duarte, Daniela Donno, Renato R Lopes and João Romano.
A fundamental problem in geophysics is to estimate the properties of the Earth’s subsurface based on measurements acquired by sensors located over the area to be analyzed. Among the different methods to accomplish this task, seismic reflection is the most widespread and has been intensively applied for hydrocarbon exploration. The characterization of the subsoil using seismic reflection techniques is conducted by recording the wave field that originates from the interaction between the environment under analysis and a seismic wave generated by controlled active sources (e.g. a dynamite explosion in land acquisition). Signal processing (SP) plays a fundamental role in seismic reflection. Indeed, in order to extract relevant information from seismic data, one has to perform tasks such as filtering, deconvolution, and signal separation. Originally, there was a close interaction between the signal processing and geophysics communities – for instance, important achievements in deconvolution and the wavelet transform were obtained in the context of seismic data. Nowadays, however, this interaction has been partially lost – as a consequence, geophysicists are not aware of the most recent SP methods, and, on the other hand, the SP community pays little attention to this interesting application. Given this panorama, the main goals of this special session are to shed some light on the research in seismic signal processing, and to broaden and reinforce collaboration between the signal processing and the geophysics research communities. With this goal in mind, the session comprises works on important theoretical and practical topics that arise in seismic signal processing.
The accepted presentation is: A constrained-based optimization approach for seismic data recovery problems
Abstract: Random and structured noise both affect seismic data, hiding the reflections of interest (primaries) that carry meaningful geophysical interpretation. When the structured noise is composed of multiple reflections, its adaptive cancellation is obtained through time-varying filtering, compensating inaccuracies in given approximate templates. The under-determined problem can then be formulated as a convex optimization one, providing estimates of both filters and primaries. Within this framework, the criterion to be minimized mainly consists of two parts: a data fidelity term and hard constraints modeling a priori information. This formulation may avoid, or at least facilitate, some parameter determination tasks, usually difficult to perform in inverse problems. Not only classical constraints, such as sparsity, are considered here, but also constraints expressed through hyperplanes, onto which the projection is easy to compute. The latter constraints lead to improved performance by further constraining the space of geophysically sound solutions.
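
The template-adaptation idea can be caricatured with a classic LMS identification loop: fit a short filter so that the filtered template matches the observed multiples. This is a drastic simplification of the paper's constrained primal-dual framework (which additionally lets the filter vary in time and enforces sparsity and slow variation); the template and mismatch filter below are made up.

```python
import random
random.seed(42)

template = [random.uniform(-1, 1) for _ in range(2000)]  # approximate multiple model
h_true = [0.9, 0.3]   # unknown mismatch between template and actual multiples

def fir(x, h):
    """Filter x with taps h (zero initial conditions)."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

# observed structured noise: the template filtered by the unknown mismatch
observed = fir(template, h_true)

# LMS: adapt a 2-tap filter so that (filter * template) matches the observation
h, mu = [0.0, 0.0], 0.1
for n in range(1, len(template)):
    xv = [template[n], template[n - 1]]
    err = observed[n] - (h[0] * xv[0] + h[1] * xv[1])
    h = [hk + mu * err * xk for hk, xk in zip(h, xv)]
```

Once the mismatch filter is identified, subtracting the filtered template from the data leaves the primaries; the convex formulation replaces this unconstrained recursion with projections onto the a priori constraint sets.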
This conference presentation is strongly related to the journal paper: A Primal-Dual Proximal Algorithm for Sparse Template-Based Adaptive Filtering: Application to Seismic Multiple Removal [page|pdf|blog|arxiv], accepted in 2014 at IEEE Transactions on Signal Processing.

December 26, 2013

Image processing for materials characterization (ICIP 2014, special session)


ICIP 2014, the IEEE International Conference on Image Processing, will take place in Paris (La Défense, to be honest) from 27 to 30 October 2014. The twenty special sessions have been announced (see below).
One is devoted to the exciting field of Materials science: "Image processing for materials characterization", with one introductory and five invited papers.

The deadline for paper submission is 14 February 2014. We encourage interested authors to submit as many papers as possible around this topic (ICIP 2014 submission information), and to notify one of the special session organizers. Beware: the existence of the special session on "Image processing for materials characterization" does not, by any means, grant acceptance, or even higher odds of acceptance, to the main conference tracks.

Context
Scanning electron microscopy (SEM): or Mr. Jack (c) F. Moreau, IFPEN
A microscopic Mister Jack (left) announces the present conference/special session. Materials science is evolving from materials discovered in Nature by chance to designed materials [1] that repair themselves, adapt to their environment, capture and store energy or information, help elaborate new devices, etc. Materials are now designed from scratch with initial blueprints, starting from atoms and molecules, as is more traditional for buildings or electronic circuits. This evolution, at the confluence of science, technology, and engineering [2], is driven by the synergy of materials science with physics, mechanics, chemistry, biology and engineering, with image processing taking part in this challenge [3]. Indeed, the possibility of designing, analyzing and modeling materials from images (or, more generally, two- or three-dimensional modalities) brings important contributions to this field. The appearance of materials changes significantly with imaging techniques, depending on the scale of analysis, imaging settings, and the physical properties and preparation of the materials. Understanding these aspects turns out to be crucial for material analysis and modeling.

In particular, we face challenges regarding the characterization of the physical assembly process of materials, the image formation process, and the way imaging techniques interact with materials (geometry, transmission, illumination, reflection, scattering). Answering these questions is important to separate the material's appearance from its intrinsic morphology and properties. Additionally, materials science approaches may inspire novel image processing techniques.
We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system.
By gathering researchers with complementary expertise, from image feature extraction to image simulation, this special session will allow us to report on recent progress and emerging trends in material analysis and modeling through image processing. By attracting an audience with diverse backgrounds, this proposal aims at catalyzing a new community around this exciting area for the image processing crowd. The special session topics will be publicized, to encourage additional submissions to the main ICIP session tracks.

Scope
Scanning electron microscopy (SEM): Catalyst section with cracks and inclusions.
This special session aims at showing some relevant problems in material characterization that can be addressed with classical or advanced methods of signal and image processing. It will be introduced by a tutorial presentation, given by the organizers, who will offer a broad overview of some of the issues that may be addressed in this application domain, such as dealing with different modalities (optical, scanning or transmission electron microscopy; diffractometry; spectrometry; surface analysis instrumentation...) and applications (porous, fibrous and hard materials; membranes, surfaces and interfaces; clean energy and information storage; chemistry and catalysts; geology; forensics; bio-inspired materials and biomedical [4]). To illustrate and widen the tutorial's points of view, the five invited papers of the session address some of these challenges by employing various methods, e.g. restoration; segmentation; mathematical morphology; texture analysis [5]; multiscale and directional feature extraction; color and multispectral processing; stochastic models [6]. Organizing committee information is given on the next page, followed by the invited authors' contributions, in the form of expanded abstracts, preliminary results and references. The proposal is concluded by a discussion on the authors' expertise.


Topics of interest include (but are not limited to):
  • Modalities: optical, scanning or transmission electron microscopy; diffractometry; spectrometry; surface analysis instrumentation…
  • Approaches: restoration; segmentation; mathematical morphology; texture analysis; multiscale and directional features extraction; color and multispectral processing; stochastic models; rendering; sparse sensing…
  • Applications: porous, fibrous and hard materials; membranes, surfaces and interfaces; clean energy and information storage; chemistry and catalysts; geology; forensics; bio-inspired materials and biomedical
Thanks to:
Nuit Blanche: Novel meetings: Image processing for materials characterization (ICIP 2014, special session), Spin Glass and Beyond: An old tool for new problems, ITWIST'14 deadline extended 
IFPEN

The list of ICIP 2014 special sessions:
SS-1: Variational and Morphological Optimizations: A Tribute to Vicent Caselles
Organizers: Jean Serra, Guillermo Sapiro, and Philippe Salembier

SS-2: Learning Image Features to Encode Visual Information
Organizers: Jesús Malo, Javier Portilla, and Joan Serra-Sagristà

SS-3: Plenoptic Imaging (Capture, Representation, Processing, and Display)
Organizers: Mårten Sjöström and Atanas Gotchev

SS-4: Photon-Limited Image Reconstruction
Organizers: Charles Deledalle and Joseph Salmon

SS-5: Hyperspectral Image Processing
Organizers: Saurabh Prasad and Jocelyn Chanussot

SS-6: Compact Feature-Based Representation of Visual Content
Organizers: Giuseppe Valenzise and Marco Tagliasacchi

SS-7: Advances in Optimization for Inverse-Imaging Problems
Organizers: Jalal Fadili and Gabriel Peyré

SS-8: Quality of Experience in 3D Multimedia Systems
Organizers: Janko Calic, Philippe Hanhart, Patrick Le Callet, and Alexandre Pereda

SS-9: Advances in Astronomical Signal and Image Processing
Organizers: Jérôme Bobin and Yves Wiaux

SS-10: Image Processing for Materials Characterization
Organizers: Maxime Moreaud, Laurent Duval, Camille Couprie, Dominique Jeulin, Jesús Angulo, and Hugues Talbot

SS-11: Realistic 3D in Interactive Virtual Worlds
Organizers: Julie Wall and Ebroul Izquierdo

SS-12: Electron-Microscopy Image-Processing Problems and Applications in Biology: From Structure to Dynamics
Organizers: Slavica Jonic and Carlos Oscar Sanchez Sorzano

SS-13: Advances in Facial Morpho-Functional Sign Recognition and Analysis
Organizers: A. Enis Cetin, Sara Colantonio, and Bogdan J. Matuszewski

SS-14: Synthetic Aperture Radar Imaging
Organizer: Daniele Riccio

SS-15: 3D Data Security
Organizers: William Puech and Adrian Bors

SS-16: 3D Multimedia Experience Over the Future Internet
Organizers: Safak Dogan, Erhan Ekmekcioglu, and Ahmet Kondoz

SS-17: Efficient Design of HEVC Video-Codec Implementations
Organizer: Vivienne Sze

SS-18: Behavior Imaging
Organizers: Séverine Dubuisson, Jean-Marc Odobez, and Mohamed Chetouani

SS-19: Image Processing for the Detection of Road-Surface Degradations
Organizers: Paulo Lobato Correia and Henrique Oliveira

SS-20: Privacy-Preserving Multimedia Content Analysis: Privacy by Design and Social-Impact Analysis
Organizers: Atta Badii, Touradj Ebrahimi, Jean-Luc Dugelay, Ebroul Izquierdo, Thomas Sikora, Leon Hempel, Christian Fedorczak, and Diego Fernandez Vazquez

December 17, 2013

Géométrie espace-temps

A nice historical series on the progressive geometrization of the observed world, by philosophers and scientists, in 10-minute episodes.

Introduction: physics and geometry, the birth of the scientific spirit (from mythical narration to explanation).

A totally geometrized vision of the world. Harmony and symmetry, the cosmos, Platonic solids or regular polyhedra related to the elements, precursors of symmetry groups. "Planet" means "traveler". The dodecahedron, closest to the sphere in its symmetry, is used to represent a fifth element.










A Primal-Dual Proximal Algorithm for Sparse Template-Based Adaptive Filtering: Application to Seismic Multiple Removal

[UPDATE: Accepted, IEEE Transactions on Signal Processing] A year ago we talked about a technique for Adaptive multiple subtraction with wavelet-based complex unary Wiener filters. The field of application is seismic signal processing. The fast and simple design was heuristic (based on experimentation: helping discovery, stimulating interest as a means of furthering investigation), relying on an appropriate combination of a sparsifying transform and a closed-form one-tap, sliding-window adaptive filter. To make it more pragmatic (based on observation and real-world models), an alternative approach uses proximal algorithms to incorporate sparsity priors, both on the data, in redundant frame transforms, and in the short-support filter design. Here is the preprint: A Primal-Dual Proximal Algorithm for Sparse Template-Based Adaptive Filtering: Application to Seismic Multiple Removal (page)

Unveiling meaningful geophysical information from seismic data requires dealing with both random and structured "noises". As their amplitude may be greater than that of the signals of interest (primaries), additional prior information is especially important for efficient signal separation. We address here the problem of multiple reflections, caused by wave-field bouncing between layers. Since only approximate models of these phenomena are available, we propose a flexible framework for time-varying adaptive filtering of seismic signals, using sparse representations, based on inaccurate templates. We recast the joint estimation of adaptive filters and primaries in a new convex variational formulation. This approach allows us to incorporate plausible knowledge about noise statistics, data sparsity and slow filter variation in parsimony-promoting wavelet frames. The designed primal-dual algorithm solves a constrained minimization problem that alleviates standard regularization issues in finding hyperparameters. The approach demonstrates good performance in low signal-to-noise ratio conditions, both for simulated and real field data.
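For readers curious about the flavor of such primal-dual proximal iterations, here is a generic Chambolle-Pock-style sketch on a toy sparse estimation problem with an l1 prior. This is not the paper's seismic algorithm: the operator K, the data b and the weight lam are synthetic stand-ins chosen only to make the iteration runnable.

```python
import numpy as np

# Generic Chambolle-Pock (primal-dual proximal) sketch for the toy problem
#   min_x  0.5 * ||K x - b||^2 + lam * ||x||_1,
# i.e. F(K x) + G(x) with F(z) = 0.5 * ||z - b||^2 and G = lam * l1-norm.
rng = np.random.default_rng(0)
m, n = 30, 60
K = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = rng.standard_normal(5)
b = K @ x_true                      # noiseless synthetic observations
lam = 0.1

L_op = np.linalg.norm(K, 2)         # spectral norm of K
tau = sigma = 0.99 / L_op           # step sizes with tau * sigma * L^2 < 1

def objective(x):
    return 0.5 * np.sum((K @ x - b) ** 2) + lam * np.sum(np.abs(x))

x = np.zeros(n)
x_bar = x.copy()
y = np.zeros(m)
init_obj = objective(x)
for _ in range(500):
    # Dual step: prox of sigma * F*, with F(z) = 0.5 * ||z - b||^2.
    y = (y + sigma * (K @ x_bar - b)) / (1.0 + sigma)
    # Primal step: prox of tau * lam * ||.||_1 (soft-thresholding).
    x_old = x
    v = x - tau * (K.T @ y)
    x = np.sign(v) * np.maximum(np.abs(v) - tau * lam, 0.0)
    x_bar = 2.0 * x - x_old         # extrapolation step
final_obj = objective(x)
```

The actual paper replaces this unconstrained penalization by hard constraint sets (with projections) and operates on both the primaries and the time-varying filters, but the primal/dual alternation has the same structure.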

December 14, 2013

Multirate structures, multiscale decompositions: two years after

Signal Processing
Two years ago, in December 2011, a special issue of Signal Processing was published on the theme of Advances in Multirate Filter Bank Structures and Multiscale Representations: eleven papers, ranging over 1D and 2D data, spanning topics from filter-frame design to compression, and browsing applications from audio to medical imaging. This issue was very rich in interesting papers, thanks to the authors and reviewers.

Can one say a little more? Of course, bibliometrics or scientometrics generate a lot of debate. Generally, such indicators do not mean anything in absolute terms; they may only serve as a ground for discussion. Let us just compare the figures with the journal statistics: Signal Processing has a two-year Impact Factor of 1.851 (2012). Special issue citation data is tabulated in the following table:

Citation sources (2013/12/14), per paper: Elsevier Scopus / ISI-Thomson WoS / Google Scholar

A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity: 13 / 11 / 20
Augmented Lagrangian based Reconstruction of non-uniformly sub-Nyquist sampled MRI data: 13 / 7 / 18
Bandlet Image Estimation with Model Selection: 1 / 0 / 2
Matching Pursuit Shrinkage in Hilbert Spaces: 0 / 0 / 2
Non Separable Lifting Scheme with Adaptive Update Step for Still and Stereo Image Coding: 3 / 2 / 7
Multivariate empirical mode decomposition and application to multichannel filtering: 11 / 5 / 14
Resonance-Based Signal Decomposition: A New Sparsity-Enabled Signal Analysis Method: 18 / 5 / 24
Activelets: Wavelets for Sparse Representation of Hemodynamic Responses: 10 / 7 / 12
Fast orthogonal sparse approximation algorithms over local dictionaries: 4 / 2 / 6
Recursive Nearest Neighbor Search in a Sparse and Multiscale Domain for Comparing Audio Signals: 0 / 0 / 0
Symmetric Tight Frame Wavelets With Dilation Factor M=4: 1 / 0 / 1
One observes that the citation counts from Elsevier Scopus, ISI-Thomson Web of Science and Google Scholar are quite uneven. As usual, Google Scholar lies above the other two. This observation should suffice, at least for genuine data scientists, to refrain from carelessly using a single number such as the h-index without citing its source. When a single reality (one's paper visibility) is given three very different values by three similar sensors (from different vendors), one should be cautious about using only the sensor value one prefers. Such an attitude would be quite incoherent for people claiming they can denoise measurements, restore signals and analyze images with precise tools, forgetting all about the scientific method when it comes to quantifying their own performance.

The counts also differ widely from paper to paper, so an average index (here 3.5, or 2.8 without the overview paper) is not meaningful. One potentially sound approach is to resort to range statistics, with the least favorable index (ISI-Thomson WoS). Four papers have not been cited yet. The seven others have citation counts [11, 7, 7, 5, 5, 2, 2] greater than the impact factor (1.851). Qualitatively, the performance of this special issue may be said to be a little above the journal's average.
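For the record, the arithmetic behind these figures can be checked in a few lines of Python (WoS counts copied from the table above, overview paper first):

```python
# WoS citation counts, in the table's row order; wos[0] is the overview paper.
wos = [11, 7, 0, 0, 2, 5, 5, 7, 2, 0, 0]

mean_all = sum(wos) / len(wos)                    # about 3.5
mean_no_overview = sum(wos[1:]) / len(wos[1:])    # 2.8
uncited = sum(1 for c in wos if c == 0)           # 4 uncited papers
cited = sorted((c for c in wos if c > 0), reverse=True)
above_if = sum(1 for c in wos if c > 1.851)       # papers above the impact factor
```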

Of course, the eleven papers have a longer life ahead than a two-year run. The only thing we may wish for is an absolute improvement of their visibility and influence. Meet you in December 2015, to see how the pack has grown. Here is the paper leaflet.

Keywords: Review; Multiscale; Geometric representations; Oriented decompositions; Scale-space; Wavelets; Atoms; Sparsity; Redundancy; Bases; Frames; Edges; Textures; Image processing; Haar wavelet; Non-Euclidean wavelets; Augmented Lagrangian methods; MRI reconstruction; Non-uniform Fourier transform; Shearlet; Compressed sensing; Model selection; White noise model; Image estimation; Geometrically regular functions; Bandlets; Dictionary; Matching pursuit; Shrinkage; Sparse representation; Lossless compression; Progressive reconstruction; Lifting schemes; Separable transforms; Non-separable transforms; Adaptive transforms; Multiresolution analysis; Wavelets; Stereo coding; Mono- and multivariate empirical mode decomposition; Filter bank structure; Electroencephalography data analysis; Sparse signal representation; Constant-Q transform; Wavelet transform; Morphological component analysis; BOLD fMRI; Hemodynamic response; Wavelet design; Sparsity; l1 minimization; Sparse approximation; Greedy algorithms; Shift invariance; Orthogonal Matching Pursuit; Multiscale decomposition; Sparse approximation; Time—frequency dictionary; Audio similarity; Wavelet transform; Frame; Symmetric filterbanks; Multiresolution analysis

October 10, 2013

Conference: Is there a place for beauty in our research? A few thoughts inspired by Richard Hamming

On Tuesday 15th of October (11:00 - 12:30) there will be a general-audience talk given by Leonid Libkin (University of Edinburgh) in Amphi Turing (Université Paris Diderot, Bâtiment Sophie Germain).

Title: Is there a place for beauty in our research? A few thoughts inspired by Richard Hamming

Abstract:
In 1986, Richard Hamming gave a famous talk entitled "You and Your Research", which discussed how to do really significant research with high impact. His main questions were: are the problems you work on important? Are they timely? What are the main open problems in your area? Can you attack them? If you are not working on them, why? Inspired by this, the University of Edinburgh started a series of Hamming seminars, in which the speakers are supposed to answer these questions about their research fields and their own research.
 
While preparing mine, I observed that, curiously enough, Hamming never talks about beauty in science and research: neither the word, nor its synonyms appear in the transcription of his talk. And yet we strive to find a beautiful definition, a beautiful theorem, a beautiful proof. But are those really necessary? Do they please anyone except the researcher discovering them? Do they have a lasting impact? Are they worth the effort, or is looking for those instances of beauty just one of the little perks we researchers are allowed to have? In this talk, which is based on the Hamming seminar I gave in Edinburgh, I shall discuss these questions, in a slightly biased view of course, affected by the research areas close to my heart.

July 4, 2013

Ecole Analyse multirésolution pour l'image 2013 (ondelettes et cie)

(M-band) Dual-tree wavelets Hilbert pairs on Haar and Riesz
The École Analyse Multirésolution pour l'image (multiresolution image analysis school) will take place on 28 and 29 November in Le Creusot (Burgundy). An opportunity to refresh one's ideas, in a friendly setting, on multiscale techniques, filter banks and wavelet transforms, applied in two dimensions to images, graphs and meshes. This year, it is co-organized with CORESA 2013 (COmpression et REprésentation des Signaux Audiovisuels; other signal and image processing conferences here).

Objectives and program:

Laplacian, Mexican hat wavelet and multiscale edge detection
Objectives
Multiresolution analysis techniques, and wavelet-based techniques in particular, are increasingly widespread tools for signal and image processing, with numerous applications (analysis, denoising, segmentation, compression and coding, watermarking, ...) in fields as diverse as multimedia, solid-state physics, astronomy, geophysics and nanotechnologies.

This school aims at presenting 2D and 3D multiresolution analysis methods and their applications in image processing and analysis. Its goal is to facilitate and support the adoption of these techniques and their transfer to other research fields.

Researchers who have long worked in multiresolution image processing and analysis have solutions to offer for problems in denoising, segmentation, edge detection and image compression.

Intended audience
The "Analyse multirésolution" school is open to researchers, teachers, industry professionals and PhD students who wish to train in this field and acquire the basic background needed to master multiresolution tools, or even to develop them or adapt them to a specific application.

Preliminary program
Tuesday 26 November 2013
9:00 - 10:00: Welcome
10:00 - 12:00: Introduction, Jean-Christophe Pesquet
14:00 - 15:30: Multiresolution analysis and denoising, Tadeusz Sliwa
16:00 - 17:30: Wavelets and multiresolution: from NMR spectroscopy to video sequence analysis, Jean-Pierre Antoine
17:30 - 19:15: Hypercomplex wavelets, Philippe Carré
20:00: Gala dinner

Wednesday 27 November 2013
8:30 - 9:50: Statistical estimators and directional decompositions, Laurent Duval
9:50 - 11:10: Inverse problems and optimization, Caroline Chaux
11:30 - 13:00: Compression, Marc Antonini
14:30 - 16:00: Multiresolution analysis and meshes, Sébastien Valette
16:30 - 17:30: Semi-regular remeshing for multiresolution surface analysis, Cécile Roudet, F. Payan, B. Sauvage

In addition: continuing-education course MTS015 (EUROSAE), "Nouvelles méthodes de traitement des signaux: ondelettes, temps-fréquence: théorie, pratique et applications" (new signal processing methods: wavelets, time-frequency: theory, practice and applications), December 2014

May 20, 2013

A touch of Henry Moore and the Nits

Reclining figure, Henry Moore, Lincoln Center
New York could just as well have been Dutch. In 1609, English explorer Henry Hudson (according to Wikipedia):
re-discovered the region, sailing for his employer the Dutch East India Company. He proceeded to sail up what he named the North River, also called the Mauritius River, and now known as the Hudson River, to the site of the present-day New York State capital of Albany in the belief that it might represent an oceanic tributary. When the river narrowed and was no longer saline, he realized it wasn't a sea passage and sailed back downriver. He made a ten-day exploration of the area and claimed the region for his employer. In 1614 the area between Cape Cod and Delaware Bay would be claimed by the Netherlands and called Nieuw-Nederland (New Netherland).
The capital of Nieuw-Nederland was located in the southern tip of Manhattan. A last proof is the New York City flag, derived from the Prince's flag, the flag of the Dutch republic (with the date of 1625).
 
There are many good reasons to go to New York City and spend a lot of time strolling through this amazing city. One of the reasons that led me to first ride a Greyhound in 1998 from Boston to New York is the album Nits in concert, which I had bought earlier in a second-hand record shop in Boston: a relatively rare promo CD on which the Nits (a Dutch band) explain the meaning of some of their songs' lyrics, which is a neat way to dive into their songwriting atmosphere.

On Nits in concert, there is a track called "A touch of Henry Moore", where they allude (lyrics) to a sculpture called "Reclining figure" (1965). Fifteen years ago it stood at the UN headquarters; it now dwells at Lincoln Center, where I was back again this Tuesday (picture at the top left). I like this instrumentally minimalist version of "A touch of Henry Moore" (from the album Omsk), which still evokes the hammering of a brass being. Another version used a large tube-and-pipe wall. My second favorite remains this one, in a more visually appealing version with percussion on a blocky squared wall.