May 5, 2014

Signal processing for chemical sensing (OGST Special issue)

OGST (Oil & Gas Science and Technology) has just published a special issue on "Advances in signal processing and image analysis for physicochemical, analytical chemistry and chemical sensing", vol. 69, number 2 (March-April 2014). It somewhat parallels the ICASSP 2013 special session on Signal Processing for Chemical Sensing. Moreover, a contributed book is planned on the topic.

The editorial (F. Rocca and L. Duval) deals with the informational content of data, sensing principles and, of course, the law of parsimony (beautifully illustrated in "The Name of the Rose"), Ockham's razor, in other words sparsity, a common thread in recent signal processing techniques. So why is the topic interesting for chemical engineers and scientists?

With the advent of more affordable, higher resolution or innovative data acquisition techniques (for instance hyphenated instrumentation such as two-dimensional chromatography), the need for advanced signal and image processing tools has grown in physico-chemical analysis, together with the quantity and complexity of acquired measurements.
With either one-dimensional (signals) or two-dimensional (from hyphenated techniques to standard images) data, processing generally aims at improving quality and at providing more precise quantitative assessment of measurements of materials and products, to yield insight into or access to information, chemical properties, reactive dynamics or textural properties, to name a few. Although chemometrics embraces everything from experimental design to calibration, more interplay between physico-chemical analysis and generic signal and image processing is believed to strengthen the two disciplines. Indeed, although they strongly differ in background and vocabulary, both specialities share similar values of best practice in carrying out identifications and comprehensive characterizations, be they of samples or of numerical data.

The present call for papers aims at gathering contributions on recent progress and emerging trends concerning (but not limited to):
  • 1D and 2D acquisition, sparse sampling (compressive sensing), modulation/demodulation, compression, background/baseline/trend estimation, enhancement, integration, smoothing and filtering, denoising, differentiation, detection, deconvolution and source separation, resolution improvement, peak or curve fitting and matching, clustering, segmentation, multiresolution analysis, mathematical morphology, calibration, multivariate curve resolution, property prediction, regression, data mining, tomography, visualization,
pertaining to the improvement of physico-chemical analysis techniques, including (not exclusively):
  • (high-performance) gas, liquid or ion chromatography; gel electrophoresis; diode array detector; Ultraviolet (UV), visible, Infrared (NIR, FIR), Raman or Nuclear Magnetic Resonance (NMR) spectroscopy, X-ray diffraction (XRD), X-Ray Absorption (EXAFS, XANES), mass spectrometry; photoacoustic spectroscopy (PAS); porosimetry; hyphenated techniques; ion-sensitive sensors, artificial noses; electron microscopy (SEM, TEM),
in the following proposed domains:
  • catalysis, chemical engineering, oil and gas production, refining processes, petrochemicals, and other sources of energy, in particular alternative energies with a view to sustainable development. 
NMR data analysis: A time-domain parametric approach using adaptive subband decomposition [pdf], E.-H. Djermoune, M. Tomczak and D. Brie
Abstract:
This paper presents a fast time-domain data analysis method for one- and two-dimensional Nuclear Magnetic Resonance (NMR) spectroscopy, assuming Lorentzian lineshapes, based on an adaptive spectral decomposition. The latter is achieved through successive filtering and decimation steps ending up in a decomposition tree. At each node of the tree, the parameters of the corresponding subband signal are estimated using some high-resolution method. The resulting estimation error is then processed through a stopping criterion which allows one to decide whether the decimation should be carried on or not. Thus the method leads to an automated selection of the decimation level and consequently to a signal-adaptive decomposition. Moreover, it enables one to reduce the processing time and makes the choice of usual free parameters easier, comparatively to the case where the whole signal is processed at once. The efficiency of the method is demonstrated using 1-D and 2-D 13C NMR signals.
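
To make the filter-and-decimate idea concrete, here is a minimal Python sketch of such a decomposition tree, assuming a toy least-squares fit of a single damped sinusoid (on a real-valued signal) as a stand-in for the high-resolution subband estimator, and a simple relative-misfit threshold as the stopping criterion; the paper's actual estimator and selection rule are more elaborate.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import firwin, lfilter

def fit_damped_sinusoid(x):
    """Toy stand-in for the high-resolution step: fit a single damped sinusoid."""
    t = np.arange(len(x), dtype=float)
    def residual(p):
        a, alpha, omega, phi = p
        return x - a * np.exp(-alpha * t) * np.cos(omega * t + phi)
    sol = least_squares(residual, x0=[np.abs(x).max(), 0.01, np.pi / 4, 0.0])
    misfit = np.linalg.norm(sol.fun) / (np.linalg.norm(x) + 1e-12)
    return sol.x, misfit

def adaptive_tree(x, depth=0, max_depth=4, tol=0.1):
    """Recursively filter and decimate while the subband model misfit stays too large."""
    params, misfit = fit_damped_sinusoid(x)
    if misfit < tol or depth >= max_depth or len(x) < 32:
        return [{"depth": depth, "params": params, "misfit": misfit}]
    lp = firwin(31, 0.5)                       # half-band low-pass prototype
    hp = lp * (-1.0) ** np.arange(31)          # modulate to obtain the high-pass branch
    low = lfilter(lp, [1.0], x)[::2]           # filter then decimate by 2
    high = lfilter(hp, [1.0], x)[::2]
    return (adaptive_tree(low, depth + 1, max_depth, tol)
            + adaptive_tree(high, depth + 1, max_depth, tol))
```

Each leaf of the returned tree carries its depth (i.e. decimation level), the fitted parameters and the residual misfit.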
Inverse Problem Approach for Alignment of Electron Tomographic series [pdf], V.-D. Tran, M. Moreaud, É. Thiébaut, L. Denis and J.-M. Becker
Abstract:
In the refining industry, morphological measurements of particles have become an essential part of the characterization of catalyst supports. Through these parameters, one can infer the specific physicochemical properties of the studied materials. One of the main acquisition techniques is electron tomography (or nanotomography). 3D volumes are reconstructed from sets of projections taken at different angles with a Transmission Electron Microscope (TEM). This technique provides real three-dimensional information at the nanometric scale. A major issue in this method is the misalignment of the projections that contribute to the reconstruction. The current alignment techniques usually employ fiducial markers such as gold particles for a correct alignment of the images. When the use of markers is not possible, the correlation between adjacent projections is used to align them. However, this method sometimes fails. In this paper, we propose a new method based on the inverse problem approach, where a certain criterion is minimized using a variant of the Nelder-Mead simplex algorithm. The proposed approach is composed of two steps. The first step is an initial alignment process, which relies on the minimization of a cost function based on robust statistics measuring the similarity of a projection to its previous projections in the series. It reduces strong shifts resulting from the acquisition between successive projections. In the second step, the pre-registered projections are used to initialize an iterative alignment-refinement process which alternates between (i) volume reconstructions and (ii) registrations of measured projections onto simulated projections computed from the volume reconstructed in (i). At the end of this process, we obtain a correct reconstruction of the volume, with the projections correctly aligned. Our method is tested on simulated data and shown to estimate accurately the translation, rotation and scale of arbitrary transforms. We have successfully tested our method with real projections of different catalyst supports.
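
As an illustration of the first, pre-alignment step, the sketch below uses SciPy's Nelder-Mead implementation to register each projection onto its predecessor, assuming a purely translational model and a Huber-type robust cost (both simplifications made here for brevity); the full method also refines rotation and scale and alternates reconstructions with registrations onto simulated projections.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def robust_cost(delta, fixed, moving, scale=10.0):
    """Huber-type penalty between a projection and a translated neighbour."""
    warped = nd_shift(moving, shift=delta, order=1, mode="nearest")
    r = (warped - fixed).ravel()
    huber = np.where(np.abs(r) <= scale, 0.5 * r**2, scale * (np.abs(r) - 0.5 * scale))
    return huber.sum()

def prealign(projections):
    """Register each projection onto its predecessor with a Nelder-Mead search on (dy, dx)."""
    aligned = [np.asarray(projections[0], float)]
    for proj in projections[1:]:
        proj = np.asarray(proj, float)
        res = minimize(robust_cost, x0=np.zeros(2),
                       args=(aligned[-1], proj), method="Nelder-Mead")
        aligned.append(nd_shift(proj, shift=res.x, order=1, mode="nearest"))
    return np.stack(aligned)
```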
Abstract:
Grazing Incidence X-ray Diffraction (GIXD) is a widely used characterization technique, applied for the investigation of the structure of thin films. As far as organic films are concerned, the confinement of the film to the substrate results in anisotropic 2-dimensional GIXD patterns, such as those observed for polythiophene-based films, which are used as active layers in photovoltaic applications. Potential malfunctions of the detectors utilized may degrade the quality of the acquired images, thus affecting the analysis process and the structural information derived. Motivated by the success of Morphological Component Analysis (MCA) in image processing, we tackle in this study the problem of recovering the missing information in GIXD images due to potential detector malfunction. First, we show that the geometrical structures present in GIXD images can be represented sparsely by means of a combination of over-complete transforms, namely the curvelet and the undecimated wavelet transform, resulting in a simple and compact description of their inherent information content. Then, the missing information is recovered by applying MCA in an inpainting framework, exploiting the sparse representation of GIXD data in these two over-complete transform domains. The experimental evaluation shows that the proposed approach is highly efficient in recovering the missing information in the form of either randomly burned pixels or whole burned rows, even on the order of 50% of the total number of pixels. Thus, our approach can be applied for healing any potential problems related to detector performance during acquisition, which is of high importance in synchrotron-based experiments, since the beamtime allocated to users is extremely limited and any technical malfunction could be detrimental to the course of the experimental project. Moreover, the fact that our results do not call for long acquisition times or repeated measurements adds extra value to the proposed approach.
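
The inpainting scheme can be sketched with a generic MCA-style iterative thresholding loop. The example below substitutes a 2-D DCT and a decimated wavelet dictionary (via PyWavelets) for the curvelet / undecimated wavelet pair of the paper, and uses an arbitrary geometric threshold schedule; the mask is assumed to be 1 on valid pixels and 0 on burned ones, and thresholds depend on the data scaling.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def _shrink_dct(img, thr):
    """Soft-threshold the image in the 2-D DCT domain."""
    coeffs = pywt.threshold(dctn(img, norm="ortho"), thr, mode="soft")
    return idctn(coeffs, norm="ortho")

def _shrink_wavelet(img, thr, wavelet="db4", level=3):
    """Soft-threshold all wavelet coefficients (approximation included, for brevity)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr = pywt.threshold(arr, thr, mode="soft")
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                        wavelet)
    return rec[: img.shape[0], : img.shape[1]]   # crop possible padding

def mca_inpaint(y, mask, n_iter=50, thr_max=1.0, thr_min=1e-3):
    """Alternate sparsity-constrained updates of two morphological components."""
    x_dct = np.zeros_like(y, dtype=float)
    x_wav = np.zeros_like(y, dtype=float)
    for thr in np.geomspace(thr_max, thr_min, n_iter):   # decreasing threshold schedule
        resid = mask * (y - x_dct - x_wav)                # only valid pixels constrain
        x_dct = _shrink_dct(x_dct + resid, thr)
        resid = mask * (y - x_dct - x_wav)
        x_wav = _shrink_wavelet(x_wav + resid, thr)
    return x_dct + x_wav
```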

Abstract:
Real-world experiments are becoming increasingly complex, needing techniques capable of tracking this complexity. Signal-based measurements are often used to capture this complexity, where a signal is a record of a sample's response to a parameter (e.g. time, displacement, voltage, wavelength) that is varied over a range of values. In signals, the responses at each value of the varied parameter are related to each other, depending on the composition or state of the sample being measured. Since signals contain multiple information points, they have rich information content but are generally complex to comprehend. Multivariate Analysis (MA) has profoundly transformed their analysis by allowing gross simplification of the tangled web of variation. In addition, MA has the advantage of being much more robust to the influence of noise than univariate methods of analysis. In recent years, there has been a growing awareness that the nature of multivariate methods allows their benefits to be exploited for purposes other than data analysis, such as pre-processing of signals with the aim of eliminating irrelevant variations prior to analysis of the signal of interest. It has been shown that exploiting multivariate data reduction in an appropriate way can allow high-fidelity denoising (removal of irreproducible non-signals), consistent and reproducible noise-insensitive correction of baseline distortions (removal of reproducible non-signals), accurate elimination of interfering signals (removal of reproducible but unwanted signals) and the standardisation of signal amplitude fluctuations. At present, the field is relatively small but the possibilities for much wider application are considerable. Where signal properties are suitable for MA (such as the signal being stationary along the x-axis), these signal-based corrections have the potential to be highly reproducible and highly adaptable, and are applicable in situations where the data is noisy or where the variations in the signals can be complex. As science seeks to probe datasets in less and less tightly controlled situations, the ability to provide high-fidelity corrections in a very flexible manner is becoming more critical, and multivariate-based signal processing has the potential to provide many solutions.
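
A very small example of the kind of multivariate data reduction mentioned above is denoising by truncated singular value decomposition of a matrix of signals; the rank and centring choices below are illustrative rather than those advocated in the paper.

```python
import numpy as np

def svd_denoise(X, rank):
    """Project a (measurements x channels) matrix onto its first `rank` principal directions."""
    mean = X.mean(axis=0, keepdims=True)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    s[rank:] = 0.0                  # discard low-variance, irreproducible components
    return U @ np.diag(s) @ Vt + mean
```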
Design of Smart Ion-selective Electrode Arrays based on Source Separation through Nonlinear Independent Component Analysis [pdf] Leonardo T. Duarte and Christian Jutten
Abstract:
The development of chemical sensor arrays based on Blind Source Separation (BSS) provides a promising solution to overcome the interference problem associated with Ion-Selective Electrodes (ISE). The main motivation behind this new approach is to ease the time-demanding calibration stage. While the first works on this problem only considered the case in which the ions under analysis have equal valences, the present work aims at developing a BSS technique that works when the ions have different charges. In this situation, the resulting mixing model belongs to a particular class of nonlinear systems that have never been studied in the BSS literature. In order to tackle this sort of mixing process, we adopt a recurrent network as the separating system. Moreover, concerning the BSS learning strategy, we develop a mutual information minimization approach based on the notion of the differential of the mutual information. The method requires batch operation and, thus, can be used to perform off-line analysis. The validity of our approach is supported by experiments where the mixing model parameters were extracted from actual data.
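
For intuition about the nonlinear mixing class involved, here is a sketch of a Nicolsky-Eisenman-type two-ion mixture and of a recurrent separating structure inverted by fixed-point iteration; the selectivity coefficients, slope and offset are illustrative values, and the separating coefficients are assumed known here, whereas the paper learns them by mutual information minimization.

```python
import numpy as np

def ise_mixture(s, valences, e=0.0, d=0.059, a=((0.0, 0.3), (0.4, 0.0))):
    """Electrode responses x_i = e + d * log10(s_i + sum_{j != i} a_ij * s_j ** (z_i / z_j))."""
    s, a, z = np.asarray(s, float), np.asarray(a, float), np.asarray(valences, float)
    x = np.empty_like(s)
    for i in range(len(s)):
        interf = sum(a[i, j] * s[j] ** (z[i] / z[j]) for j in range(len(s)) if j != i)
        x[i] = e + d * np.log10(s[i] + interf)
    return x

def recurrent_separation(x, valences, w, e=0.0, d=0.059, n_iter=100):
    """Fixed-point iteration of the recurrent separating network with known coefficients w."""
    x, w, z = np.asarray(x, float), np.asarray(w, float), np.asarray(valences, float)
    y = 10.0 ** ((x - e) / d)                 # initial guess: no interference
    for _ in range(n_iter):
        y_new = np.empty_like(y)
        for i in range(len(y)):
            interf = sum(w[i, j] * y[j] ** (z[i] / z[j]) for j in range(len(y)) if j != i)
            y_new[i] = 10.0 ** ((x[i] - e) / d) - interf
        y = np.clip(y_new, 1e-12, None)       # keep estimated activities positive
    return y

# Example: two monovalent ions; with w equal to the true selectivity matrix,
# the fixed point recovers the source activities.
sources = np.array([1e-3, 5e-4])
obs = ise_mixture(sources, valences=(1, 1))
print(recurrent_separation(obs, valences=(1, 1), w=((0.0, 0.3), (0.4, 0.0))))
```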
Unsupervised segmentation of hyperspectral images with spatialized Gaussian mixture model and model selection [pdf] Serge Cohen, Erwan Le Pennec
Abstract:
In this article, we describe a novel unsupervised spectral image segmentation algorithm. This algorithm extends the classical Gaussian Mixture Model-based unsupervised classification technique by incorporating a spatial flavor into the model: the spectra are modeled by a mixture of K classes, each with a Gaussian distribution, whose mixing proportions depend on the position. Using a piecewise constant structure for those mixing proportions, we are able to construct a penalized maximum likelihood procedure that estimates the optimal partition as well as all the other parameters, including the number of classes. We provide a theoretical guarantee for this estimation, even when the generating model is not within the tested set, and describe an efficient implementation. Finally, we conduct numerical experiments of unsupervised segmentation on a real dataset.
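
As a non-spatialized baseline, the sketch below fits per-pixel Gaussian mixtures to a hyperspectral cube with scikit-learn and selects the number of classes by BIC; the spatially varying mixing proportions and the penalized-likelihood selection are precisely what the paper adds on top of such a baseline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(cube, k_candidates=range(2, 8), random_state=0):
    """cube: (rows, cols, bands) array; returns a label map and the selected K."""
    rows, cols, bands = cube.shape
    spectra = cube.reshape(-1, bands)
    best = None
    for k in k_candidates:
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=random_state).fit(spectra)
        bic = gmm.bic(spectra)                 # penalized-likelihood model score
        if best is None or bic < best[0]:
            best = (bic, k, gmm)
    _, k_best, gmm = best
    return gmm.predict(spectra).reshape(rows, cols), k_best
```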
