February 3, 2014

ICASSP 2014: Seismic processing special session

Galileo's inclined planes and curves for the study of motion
First good news: our seismic data processing paper has been accepted to ICASSP 2014 (a just-below-50% acceptance rate: 1745 out of 3492). Second good news: it takes place in Florence, a magnificent city, not only for the arts (the conference motto is "The Art of Signal Processing") but for science too. The Galileo (1564-1642) History of Science Museum is a must-see.

In our epoch, we can afford to run time-consuming, brute-force computations, changing only one hyper-parameter at a time to solve inverse restoration problems. We benefit from neat integro-differential settings to solve convex optimization problems. It is difficult to imagine a time when people undertook the making of science as we know it today with very imprecise time measurements: imagine comparing the durations of free fall of two objects (a 1 kg lead ball and a 1 kg feather ball, for instance) with a water clock, or clepsydra. The calculus blossomed with Leibniz and Newton between 1666 and 1674. Yet, people were able to "prove" important properties of motion, such as the tautochronism of the cycloid (the curve traced by a point on a rolling circle). Take a wooden gutter in the shape of a cycloid and drop two balls from different heights along the wooden frame: they arrive at the bottom at the same time. The video is available here, at the Galileo virtual Museum. This "tautochrone" phenomenon (meaning "same time" in Greek) was discovered by Christiaan Huygens around the 1650s. It was instrumental in the development of "more perfect pendulum motions", at the core of modern pendulum clocks, which allowed increased precision in science, as well as farther and less hazardous boat trips around the world. The clock-making industry is closely related to the longitude problem.
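As a side note, the cycloid mentioned above admits a simple parametrization (a textbook formula, recalled here for convenience): a point on a circle of radius $r$ rolling along a line traces

```latex
x(\theta) = r\,(\theta - \sin\theta), \qquad
y(\theta) = r\,(1 - \cos\theta), \qquad \theta \in [0, 2\pi].
```

On an inverted cycloidal arc, the descent time to the lowest point is $T = \pi\sqrt{r/g}$, independent of the release height; this is precisely the tautochrone property demonstrated by the two-ball experiment.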

The special session "Seismic Signal Processing" is organized by Leonardo Duarte, Daniela Donno, Renato R. Lopes and João Romano.
A fundamental problem in geophysics is to estimate the properties of the Earth's subsurface from measurements acquired by sensors located over the area to be analyzed. Among the different methods to accomplish this task, seismic reflection is the most widespread and has been intensively applied to hydrocarbon exploration. The characterization of the subsoil using seismic reflection techniques is conducted by recording the wave field that originates from the interaction between the environment under analysis and a seismic wave generated by controlled active sources (e.g. a dynamite explosion in land acquisition). Signal processing (SP) plays a fundamental role in seismic reflection. Indeed, in order to extract relevant information from seismic data, one has to perform tasks such as filtering, deconvolution, and signal separation. Originally, there was a close interaction between the signal processing and geophysics communities: for instance, important achievements in deconvolution and the wavelet transform were obtained in the context of seismic data. Nowadays, however, this interaction has been partially lost; as a consequence, geophysicists are not aware of the most recent SP methods and, on the other hand, the SP community pays little attention to this interesting application. Given this panorama, the main goals of this special session are to shed some light on research in seismic signal processing, and to broaden and reinforce collaboration between the signal processing and geophysics research communities. With this goal in mind, the session comprises works on important theoretical and practical topics that arise in seismic signal processing.
The accepted presentation is: A constrained-based optimization approach for seismic data recovery problems
Abstract: Random and structured noise both affect seismic data, hiding the reflections of interest (primaries) that carry meaningful geophysical interpretation. When the structured noise is composed of multiple reflections, its adaptive cancellation is obtained through time-varying filtering, compensating inaccuracies in given approximate templates. The under-determined problem can then be formulated as a convex optimization one, providing estimates of both filters and primaries. Within this framework, the criterion to be minimized mainly consists of two parts: a data fidelity term and hard constraints modeling a priori information. This formulation may avoid, or at least facilitate, some parameter determination tasks, usually difficult to perform in inverse problems. Not only classical constraints, such as sparsity, are considered here, but also constraints expressed through hyperplanes, onto which the projection is easy to compute. The latter constraints lead to improved performance by further restricting the space of geophysically sound solutions.
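To see why hyperplane constraints are convenient, here is a minimal sketch (the function name is hypothetical, not code from the paper) of the closed-form projection onto the hyperplane {x : ⟨a, x⟩ = b}:

```python
import numpy as np

def project_hyperplane(x, a, b):
    """Closed-form projection of x onto the hyperplane {y : <a, y> = b}.

    The residual (b - <a, x>) is redistributed along the normal direction a.
    """
    x = np.asarray(x, dtype=float)
    a = np.asarray(a, dtype=float)
    return x + (b - a @ x) / (a @ a) * a
```

Such a projection costs little more than one inner product, which is why enforcing constraints of this type inside an iterative proximal scheme remains cheap.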
This conference presentation is strongly related to the paper: A Primal-Dual Proximal Algorithm for Sparse Template-Based Adaptive Filtering: Application to Seismic Multiple Removal [page|pdf|blog], accepted in 2014 by IEEE Transactions on Signal Processing (currently under revision).

December 26, 2013

Image processing for materials characterization (ICIP 2014, special session)

IMPORTANT UPDATE: The submission process is over. A total of 16 papers were submitted. Thank you all!

ICIP 2014, the IEEE International Conference on Image Processing, will take place in Paris (la Défense, to be honest) during 27-30 October 2014. The twenty special sessions have just been announced (see below).

One is devoted to the exciting field of Materials science: "Image processing for materials characterization".

The deadline for paper submission is 14 February 2014. We encourage interested authors to submit as many papers as possible around this topic (ICIP 2014 submission information), and to notify one of the special session organizers. Beware: the existence of the special session on "Image processing for materials characterization" does not, by any means, grant acceptance or even higher odds in the main conference tracks.

Context
Scanning electron microscopy (SEM): Polymer-charged concrete (c) F. Moreau, IFPEN
A microscopic Mister Jack (left) announces the present conference/special session. Materials science is evolving from materials discovered in Nature by chance to designed materials [1] that repair themselves, adapt to their environment, capture and store energy or information, help elaborate new devices, etc. Materials are now designed from scratch with initial blueprints, starting from atoms and molecules, as is more traditional for buildings or electronic circuits. This evolution, at the confluence of science, technology, and engineering [2], is driven by the synergy of materials science with physics, mechanics, chemistry, biology and engineering, with image processing taking part in this challenge [3]. Indeed, the possibility of designing, analyzing and modeling materials from images (or, more generally, two- or three-dimensional modalities) brings important contributions to this field. The appearance of materials changes significantly with imaging techniques, depending on the scale of analysis, imaging settings, physical properties and preparation of materials. Understanding these aspects turns out to be crucial for material analysis and modeling.

In particular, we face challenges regarding the characterization of the physical assembly process of materials, the formation process of images, and the ways imaging techniques interact with materials (geometry, transmission, illumination, reflection, scattering). Answering these questions is important to separate the material's appearance from its intrinsic morphology and properties. Additionally, materials science approaches may inspire novel image processing techniques.
We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system.
By gathering researchers with complementary expertise, from image feature extraction to image simulation, this special session proposal will allow us to report on recent progress and emerging trends in material analysis and modeling through image processing. By attracting an audience with diverse backgrounds, this proposal aims at catalyzing a new community around this exciting area for the image processing crowd. The special session topics will be publicized, to encourage additional submissions to the main ICIP session tracks.

Scope
Scanning electron microscopy (SEM): Catalyst section with cracks and inclusions.
This special session aims at showing some relevant problems in material characterization that can be addressed with classical or advanced methods of signal and image processing. It will be introduced by a tutorial presentation, given by the organizers, offering a large overview of the issues that may be addressed in this application domain, such as dealing with different modalities (optical, scanning or transmission electron microscopy; diffractometry; spectrometry; surface analysis instrumentation...) and applications (porous, fibrous and hard materials; membranes, surfaces and interfaces; clean energy and information storage; chemistry and catalysts; geology; forensics; bio-inspired materials and biomedical applications [4]). To illustrate and widen the points of view of the tutorial, the five invited papers of the session address some of these challenges with various methods, e.g. restoration; segmentation; mathematical morphology; texture analysis [5]; multiscale and directional feature extraction; color and multispectral processing; stochastic models [6]. Organizing committee information is given on the next page, followed by the invited authors' contributions, in the shape of expanded abstracts, preliminary results and references. The proposal concludes with a discussion of the authors' expertise.


Topics of interest include (but are not limited to):
  • Modalities: optical, scanning or transmission electron microscopy; diffractometry; spectrometry; surface analysis instrumentation…
  • Approaches: restoration; segmentation; mathematical morphology; texture analysis; multiscale and directional features extraction; color and multispectral processing; stochastic models; rendering; sparse sensing…
  • Applications: porous, fibrous and hard materials; membranes, surfaces and interfaces; clean energy and information storage; chemistry and catalysts; geology; forensics; bio-inspired materials and biomedical
Thanks to:
Nuit Blanche: Novel meetings: Image processing for materials characterization (ICIP 2014, special session), Spin Glass and Beyond: An old tool for new problems, ITWIST'14 deadline extended 
IFPEN

The list of ICIP 2014 special sessions:
SS-1: Variational and Morphological Optimizations: A Tribute to Vicent Caselles
Organizers: Jean Serra, Guillermo Sapiro, and Philippe Salembier

SS-2: Learning Image Features to Encode Visual Information
Organizers: Jesús Malo, Javier Portilla, and Joan Serra-Sagristà

SS-3: Plenoptic Imaging (Capture, Representation, Processing, and Display)
Organizers: Mårten Sjöström and Atanas Gotchev

SS-4: Photon-Limited Image Reconstruction
Organizers: Charles Deledalle and Joseph Salmon

SS-5: Hyperspectral Image Processing
Organizers: Saurabh Prasad and Jocelyn Chanussot

SS-6: Compact Feature-Based Representation of Visual Content
Organizers: Giuseppe Valenzise and Marco Tagliasacchi

SS-7: Advances in Optimization for Inverse-Imaging Problems
Organizers: Jalal Fadili and Gabriel Peyré

SS-8: Quality of Experience in 3D Multimedia Systems
Organizers: Janko Calic, Philippe Hanhart, Patrick Le Callet, and Alexandre Pereda

SS-9: Advances in Astronomical Signal and Image Processing
Organizers: Jérôme Bobin and Yves Wiaux

SS-10: Image Processing for Materials Characterization
Organizers: Maxime Moreaud, Laurent Duval, Camille Couprie, Dominique Jeulin, Jesús Angulo, and Hugues Talbot

SS-11: Realistic 3D in Interactive Virtual Worlds
Organizers: Julie Wall and Ebroul Izquierdo

SS-12: Electron-Microscopy Image-Processing Problems and Applications in Biology: From Structure to Dynamics
Organizers: Slavica Jonic and Carlos Oscar Sanchez Sorzano

SS-13: Advances in Facial Morpho-Functional Sign Recognition and Analysis
Organizers: A. Enis Cetin, Sara Colantonio, and Bogdan J. Matuszewski

SS-14: Synthetic Aperture Radar Imaging
Organizers: Daniele Riccio

SS-15: 3D Data Security
Organizers: William Puech and Adrian Bors

SS-16: 3D Multimedia Experience Over the Future Internet
Organizers: Safak Dogan, Erhan Ekmekcioglu, and Ahmet Kondoz

SS-17: Efficient Design of HEVC Video-Codec Implementations
Organizers: Vivienne Sze

SS-18: Behavior Imaging
Organizers: Séverine Dubuisson, Jean-Marc Odobez, and Mohamed Chetouani

SS-19: Image Processing for the Detection of Road-Surface Degradations
Organizers: Paulo Lobato Correia and Henrique Oliveira

SS-20: Privacy-Preserving Multimedia Content Analysis: Privacy by Design and Social-Impact Analysis
Organizers: Atta Badii, Touradj Ebrahimi, Jean-Luc Dugelay, Ebroul Izquierdo, Thomas Sikora, Leon Hempel, Christian Fedorczak, and Diego Fernandez Vazquez

December 17, 2013

Space-time geometry

A nice historical series on the progressive geometrization of the observed world, by philosophers and scientists, in 10-minute episodes.

Introduction: physics and geometry, the birth of the scientific spirit (from mythical narrative to explanation).

A totally geometrized vision of the world. Harmony and symmetry, the cosmos, Platonic solids or regular polyhedra, related to the elements, precursors of symmetry groups. "Planet" means "wanderer". The dodecahedron, closest to the sphere in symmetry, is used to represent a fifth element.


A Primal-Dual Proximal Algorithm for Sparse Template-Based Adaptive Filtering: Application to Seismic Multiple Removal

[UPDATE: Accepted, IEEE Transactions on Signal Processing] A year ago we talked about a technique for Adaptive multiple subtraction with wavelet-based complex unary Wiener filters. The field of application is seismic signal processing. The fast and simple design was heuristic (helping discovery, stimulating interest as a means of furthering investigation, based on experimentation): an appropriate combination of a "sparsifying transform" and a closed-form, one-tap, sliding-window adaptive filter. To make it more pragmatic (based on observation and real-world models), an alternative approach uses proximal algorithms to incorporate sparsity priors, both on the data, in redundant frame transforms, and on the short-support filter design. Here is the preprint: A Primal-Dual Proximal Algorithm for Sparse Template-Based Adaptive Filtering: Application to Seismic Multiple Removal (page)
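To make the heuristic concrete, here is a toy sketch of a closed-form, one-tap, windowed subtraction (hypothetical names; simplified to real-valued, non-overlapping windows, whereas the actual design operates on complex wavelet coefficients with sliding windows):

```python
import numpy as np

def one_tap_adaptive_subtract(data, template, win=32):
    """Per-window least-squares fit of a single scalar tap w
    (closed form: w = <d, t> / <t, t>), then subtraction d - w * t."""
    out = np.array(data, dtype=float, copy=True)
    tpl = np.asarray(template, dtype=float)
    for start in range(0, len(out) - win + 1, win):
        d = out[start:start + win]
        t = tpl[start:start + win]
        energy = t @ t
        w = (d @ t) / energy if energy > 0 else 0.0
        out[start:start + win] = d - w * t
    return out
```

When the template matches the structured noise up to a local scale factor, each window is cancelled exactly; the proximal formulation replaces this window-by-window closed form with jointly estimated, sparsity-constrained filters.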

Unveiling meaningful geophysical information from seismic data requires dealing with both random and structured "noises". As their amplitude may be greater than that of the signals of interest (primaries), additional prior information is especially important for efficient signal separation. We address here the problem of multiple reflections, caused by wave-field bouncing between layers. Since only approximate models of these phenomena are available, we propose a flexible framework for time-varying adaptive filtering of seismic signals, using sparse representations, based on inaccurate templates. We recast the joint estimation of adaptive filters and primaries in a new convex variational formulation. This approach allows us to incorporate plausible knowledge about noise statistics, data sparsity and slow filter variation in parsimony-promoting wavelet frames. The designed primal-dual algorithm solves a constrained minimization problem that alleviates standard regularization issues in finding hyperparameters. The approach demonstrates good performance in low signal-to-noise ratio conditions, both for simulated and real field data.
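In generic form, the criterion described in this abstract can be sketched as follows (illustrative notation, not the paper's exact one): with $z$ the recorded trace, $\bar{s}$ the primaries, and $h$ the time-varying filters applied to the approximate template through a linear operator $T$, one solves

```latex
\operatorname*{minimize}_{\bar{s},\, h} \;
\Phi\bigl(z - \bar{s} - T h\bigr)
\quad \text{subject to} \quad
\bar{s} \in C_1, \;\; h \in C_2,
```

where $\Phi$ is the data-fidelity term and $C_1$, $C_2$ are convex sets encoding the a priori information (sparsity, hyperplane constraints, slow filter variation).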

December 14, 2013

Multirate structures, multiscale decompositions: two years after

Signal Processing
Two years ago, in December 2011, a special issue of Signal Processing was published on the theme of Advances in Multirate Filter Bank Structures and Multiscale Representations. Eleven papers, ranging over 1D and 2D data, spanning topics from filter-frame design to compression, and browsing applications from audio to medical imaging. This issue was very rich in interesting papers, thanks to the authors and reviewers.

Can one say a little more? Of course, bibliometrics or scientometrics generate a lot of debate. Generally, such indicators do not mean anything absolute; they may only serve as a ground for discussion. Let us just compare the figures with the journal statistics: Signal Processing has a two-year Impact Factor of 1.851 (2012). The special issue citation data is tabulated below:

Citation counts (2013/12/14), given as Elsevier Scopus / ISI-Thomson Web of Science / Google Scholar:
  • A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity: 13 / 11 / 20
  • Augmented Lagrangian based Reconstruction of non-uniformly sub-Nyquist sampled MRI data: 13 / 7 / 18
  • Bandlet Image Estimation with Model Selection: 1 / 0 / 2
  • Matching Pursuit Shrinkage in Hilbert Spaces: 0 / 0 / 2
  • Non Separable Lifting Scheme with Adaptive Update Step for Still and Stereo Image Coding: 3 / 2 / 7
  • Multivariate empirical mode decomposition and application to multichannel filtering: 11 / 5 / 14
  • Resonance-Based Signal Decomposition: A New Sparsity-Enabled Signal Analysis Method: 18 / 5 / 24
  • Activelets: Wavelets for Sparse Representation of Hemodynamic Responses: 10 / 7 / 12
  • Fast orthogonal sparse approximation algorithms over local dictionaries: 4 / 2 / 6
  • Recursive Nearest Neighbor Search in a Sparse and Multiscale Domain for Comparing Audio Signals: 0 / 0 / 0
  • Symmetric Tight Frame Wavelets With Dilation Factor M=4: 1 / 0 / 1

One observes that the citation counts from Elsevier Scopus, ISI-Thomson Web of Science and Google Scholar are quite uneven. As usual, Google Scholar lies above the other two. This observation should suffice, at least for genuine data scientists, to refrain from carelessly using a single number such as the h-index without citing the source. When a reality (one's paper visibility) is given three very different values by three similar sensors (from different vendors), one should be cautious about using only the sensor value he or she prefers. Such an attitude would be quite incoherent for people claiming they can denoise measurements, restore signals and analyze images with precise tools. It amounts to forgetting the scientific method when it comes to quantifying one's own performance.

Moreover, the counts are very different across papers, so an average index (here 3.5, or 2.8 without the overview paper) is not meaningful. One sound approach is to resort to range statistics, based on the least favorable index (ISI-Thomson WoS). Four papers have not been cited yet. The seven others have citation counts [11, 7, 7, 5, 5, 2, 2] greater than the impact factor (1.851). Qualitatively, this special issue may be said to perform a little above the journal average.
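For the record, the aggregate figures quoted above can be re-derived from the Web of Science column of the table (a quick sanity check on the numbers as quoted):

```python
# Web of Science counts for the eleven papers, in the order of the table above.
wos = [11, 7, 0, 0, 2, 5, 5, 7, 2, 0, 0]

mean_all = sum(wos) / len(wos)                         # about 3.5
mean_without_overview = sum(wos[1:]) / (len(wos) - 1)  # about 2.8
uncited = [c for c in wos if c == 0]                   # four papers
cited = sorted((c for c in wos if c > 0), reverse=True)

print(round(mean_all, 1), round(mean_without_overview, 1), cited)
```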

Of course, the eleven papers have a longer life ahead of them than a two-year run. The only thing we may wish for is an absolute improvement in their visibility and influence. Meet you in December 2015, to see how the pack has grown. Here is the paper leaflet.

Keywords: Review; Multiscale; Geometric representations; Oriented decompositions; Scale-space; Wavelets; Atoms; Sparsity; Redundancy; Bases; Frames; Edges; Textures; Image processing; Haar wavelet; Non-Euclidean wavelets; Augmented Lagrangian methods; MRI reconstruction; Non-uniform Fourier transform; Shearlet; Compressed sensing; Model selection; White noise model; Image estimation; Geometrically regular functions; Bandlets; Dictionary; Matching pursuit; Shrinkage; Sparse representation; Lossless compression; Progressive reconstruction; Lifting schemes; Separable transforms; Non-separable transforms; Adaptive transforms; Multiresolution analysis; Wavelets; Stereo coding; Mono- and multivariate empirical mode decomposition; Filter bank structure; Electroencephalography data analysis; Sparse signal representation; Constant-Q transform; Wavelet transform; Morphological component analysis; BOLD fMRI; Hemodynamic response; Wavelet design; Sparsity; l1 minimization; Sparse approximation; Greedy algorithms; Shift invariance; Orthogonal Matching Pursuit; Multiscale decomposition; Sparse approximation; Time—frequency dictionary; Audio similarity; Wavelet transform; Frame; Symmetric filterbanks; Multiresolution analysis

October 10, 2013

Conference: Is there a place for beauty in our research? A few thoughts inspired by Richard Hamming

On Tuesday 15th of October (11:00 - 12:30) there will be a talk of general audience given by Leonid Libkin (University of Edinburgh) in Amphi Turing (Université Paris Diderot, Bâtiment Sophie Germain).

Title: Is there a place for beauty in our research? A few thoughts inspired by Richard Hamming

Abstract:
In 1986, Richard Hamming gave a famous talk entitled "You and Your Research", which discussed how to do really significant research with high impact. His main questions were: are the problems you work on important? Are they timely? What are the main open problems in your area? Can you attack them? If you are not working on them, why? Inspired by this, the University of Edinburgh started a series of Hamming seminars, in which the speakers are supposed to answer these questions about their research fields and their own research.
 
While preparing mine, I observed that, curiously enough, Hamming never talks about beauty in science and research: neither the word, nor its synonyms appear in the transcription of his talk. And yet we strive to find a beautiful definition, a beautiful theorem, a beautiful proof. But are those really necessary? Do they please anyone except the researcher discovering them? Do they have a lasting impact? Are they worth the effort, or is looking for those instances of beauty just one of the little perks we researchers are allowed to have? In this talk, which is based on the Hamming seminar I gave in Edinburgh, I shall discuss these questions, in a slightly biased view of course, affected by the research areas close to my heart.

July 4, 2013

École Analyse multirésolution pour l'image 2013 (wavelets & co.)

(M-band) Dual-tree wavelets Hilbert pairs on Haar and Riesz
The École Analyse Multirésolution pour l'image (multiresolution image analysis school) will take place on 28-29 November in Le Creusot (Burgundy). An opportunity to refresh one's ideas, in a friendly setting, on multiscale techniques, filter banks and wavelet transforms, applied in two dimensions to images, graphs and meshes. This year, it is co-organized with CORESA 2013 (COmpression et REprésentation des Signaux Audiovisuels; other signal and image processing conferences here).

Objectives and program:

Laplacian, Mexican hat wavelet and multiscale edge detection
Objectives
Multiresolution analysis techniques, and wavelet-based techniques in particular, are increasingly widespread tools for signal and image processing, with numerous applications (analysis, denoising, segmentation, compression and coding, watermarking, ...) in fields as diverse as multimedia, solid-state physics, astronomy, geophysics and nanotechnology.

This school aims to present 2D and 3D multiresolution analysis methods and their applications in image processing and analysis. Its purpose is to ease and support the adoption of these techniques and their transfer to other research fields.

Researchers with long experience in multiresolution image processing and analysis have solutions to offer for problems in denoising, segmentation, edge detection and image compression.

Intended audience
The "Analyse multirésolution" school is open to researchers, lecturers, industry practitioners and PhD students who wish to train in this field and acquire the basics needed to master multiresolution tools, or even to develop them or adapt them to a specific application.

Preliminary program
Tuesday 26 November 2013
9:00 - 10:00: Welcome
10:00 - 12:00: Introduction, Jean-Christophe Pesquet
14:00 - 15:30: Multiresolution analysis and denoising, Tadeuz Sliwa
16:00 - 17:30: Wavelets and multiresolution: from NMR spectroscopy to video sequence analysis, Jean-Pierre Antoine
17:30 - 19:15: Hypercomplex wavelets, Philippe Carré
20:00: Gala dinner

Wednesday 27 November 2013
8:30 - 9:50: Statistical estimators and directional decompositions, Laurent Duval
9:50 - 11:10: Inverse problems and optimization, Caroline Chaux
11:30 - 13:00: Compression, Marc Antonini
14:30 - 16:00: Multiresolution analysis and meshes, Sébastien Valette
16:30 - 17:30: Semi-regular remeshing for multiresolution surface analysis, Cécile Roudet, F. Payan, B. Sauvage

As a complement: continuing-education course MTS015 (EUROSAE), NEW SIGNAL PROCESSING METHODS: wavelets, time-frequency: theory, practice and applications, December 2014.

May 20, 2013

A touch of Henry Moore and the Nits

Reclining figure, Henry Moore, Lincoln Center
New York could as well have been Dutch. In 1609, the English explorer Henry Hudson (according to Wikipedia):
re-discovered the region, sailing for his employer the Dutch East India Company. He proceeded to sail up what he named the North River, also called the Mauritis River, and now known as the Hudson River, to the site of the present-day New York State capital of Albany in the belief that it might represent an oceanic tributary. When the river narrowed and was no longer saline, he realized it wasn't a sea passage and sailed back downriver. He made a ten-day exploration of the area and claimed the region for his employer. In 1614 the area between Cape Cod and Delaware Bay would be claimed by the Netherlands and called Nieuw-Nederland (New Netherland).
The capital of Nieuw-Nederland was located at the southern tip of Manhattan. One last piece of evidence is the New York City flag, derived from the Prince's Flag, the flag of the Dutch Republic (with the date 1625).
 
There are many good reasons to go to New York City and spend a lot of time strolling through this amazing city. One of the reasons that led me to first ride a Greyhound in 1998 from Boston to New York is the album Nits in concert, which I had bought earlier in a second-hand record shop in Boston: a relatively rare promo CD on which the Nits (a Dutch band) explain the meaning of some of their lyrics. A neat way to dive into their songwriting atmosphere.

On Nits in concert, there is a track called "A touch of Henry Moore", whose lyrics allude to a sculpture called "Reclining Figure" (1965). Fifteen years ago it stood at the UN headquarters; it now dwells at Lincoln Center. Here I was, back again this Tuesday (picture on the top left). I like this instrumentally minimalist version of "A touch of Henry Moore" (from the album Omsk), which still evokes the hammering of a brass being. Another version used a large tube-and-pipe wall. My second favorite remains this one, in a more visually appealing setting with percussion on a blocky squared wall.


May 16, 2013

Gas chromatography and 2D-gas chromatography for petroleum industry: The race for selectivity

Almost a follow-up to Signal Processing for Chemical Sensing: ICASSP 2013 Special session: the book Gas chromatography and 2D-gas chromatography for petroleum industry: The race for selectivity, edited by Fabrice Bertoncini, Marion Courtiade-Tholance and Didier Thiébaut (2013, éditions Technip), is out. Our contribution is Chapter 3: "Data processing applied to GCxGC. Applications to the petroleum industry".
Detailed knowledge of petroleum products at the molecular scale has always been essential to understand the mechanisms leading to their formation, to design the thermodynamic and kinetic models employed in refining processes, and to predict their physical properties. In view of the complexity of petroleum products, very significant research efforts have been made over the past 15 years to improve the relevant analytical techniques, especially in the field of Gas Chromatography, in order to increase its separation power. The advent of comprehensive Two-dimensional Gas Chromatography (GC×GC) at the end of the 1990s constitutes a true revolution, allowing an unprecedented insight into very complex mixtures at the molecular level.
This book aims at providing a complete review of the implementation of Gas Chromatography in the oil industry, with an important focus on GC×GC and related multidimensional systems. It is organised into 8 chapters dealing with fundamental and experimental aspects as well as data processing challenges. Recent progress in the development of these chromatographic systems is discussed for various applications: detailed molecular analysis of hydrocarbons, speciation of heteroelements, calculation of global properties from chromatographic data, and simulated distillation. Specialists from IFP Energies nouvelles, CNRS and major companies leading important research in this field have contributed, reporting a synthesis of the knowledge acquired from research over the last 15 years.
Thus, this book will be useful for anyone involved in the separation of oil and its derivatives: the student starting a research project, the academic researcher, and the refinery engineer willing to deepen their knowledge of advanced multidimensional Gas Chromatography and the molecular analysis of petroleum products.

Contents: 1. Molecular analysis for petroleum products: challenges and future needs. 2. GCxGC: a disruptive technique. 3. Data processing applied to GCxGC. Applications to the petroleum industry. 4. Coupled systems with a GC or GCxGC dimension. 5. Detailed analysis of hydrocarbons. 6. Calculating properties from chromatographic data. 7. Speciation of heteroelements. 8. Simulated distillation. Index.
Related publications:  

Comprehensive Two-Dimensional Gas Chromatography for Detailed Characterisation of Petroleum Products, Colombe Vendeuvre, R. Ruiz-Guerrero, Fabrice Bertoncini, Laurent Duval, Didier Thiébaut, Oil and Gas Science and Technology - Revue de l'IFP, special issue on "Recent Advances in the Analysis of Catalysts and Petroleum Products", 2007, Vol. 62, No. 1, p. 43-55
Comprehensive two-dimensional gas chromatography (GCxGC or GC2D) is a major advance for the detailed characterisation of petroleum products. This technique is based on two orthogonal dimensions of separation achieved by two chromatographic capillary columns of different chemistries and selectivities. High-frequency sampling between the two columns is achieved by a modulator, ensuring that the whole sample is transferred and analysed continuously in both separations. Thus, the peak capacity and the resolving power dramatically increase. Besides, highly structured 2D chromatograms, organized according to the volatility and polarity of the solutes, provide more accurate molecular identification of hydrocarbons. In this paper, fundamental and practical considerations for the implementation of GCxGC are reviewed. Selected applications obtained using a GCxGC chromatograph prototype developed in-house highlight the potential of the technique for the molecular characterisation of middle distillates, sulphur speciation in diesel, and the analysis of effluents from petrochemical processes.
Characterization of middle-distillates by comprehensive two-dimensional gas chromatography (GCxGC): a powerful alternative for performing various standard analysis of middle distillates (pdf), Colombe Vendeuvre, Rosario Ruiz-Guerrero, Fabrice Bertoncini, Laurent Duval, Didier Thiébaut, Marie-Claire Hennion, Journal of Chromatography A, 1086 (2005) p. 21-28
The detailed characterisation of middle distillates is essential for a better understanding of the reactions involved in refining processes. Owing to its higher resolution power and enhanced sensitivity, comprehensive two-dimensional gas chromatography (GCxGC) is a powerful tool for improving the characterisation of petroleum samples. The aim of this paper is to compare GCxGC and various ASTM methods – gas chromatography (GC), liquid chromatography (LC) and mass spectrometry (MS) – for group-type separation and detailed hydrocarbon analysis. The best features of GCxGC are demonstrated and compared to these techniques in terms of cost, time consumption and accuracy. In particular, a new approach to simulated distillation (SimDis-GCxGC) is proposed: compared to the standard method ASTM D2887, it gives unequalled information for a better understanding of conversion processes.
Comparison of conventional gas chromatography and comprehensive two-dimensional gas chromatography for the detailed analysis of petrochemical samples (pdf), Colombe Vendeuvre, Fabrice Bertoncini, Laurent Duval, Jean-Luc Duplan, Didier Thiébaut, Marie-Claire Hennion, Journal of Chromatography A, 1056 (2004) p. 155-162
Comprehensive two-dimensional gas chromatography (GCxGC) has been investigated for the characterization of highly valuable petrochemical samples from dehydrogenation of n-paraffins, Fischer–Tropsch and oligomerization processes. GCxGC separations, performed using a dual-jet CO2 modulator, were optimized using a test mixture representative of the hydrocarbons found in petrochemicals. For complex samples, a comparison of GCxGC qualitative and quantitative results with conventional gas chromatography (1D-GC) has demonstrated an improved resolution power of major importance for the processes: the group-type separation has permitted the detection of aromatic compounds in the products from dehydrogenation of n-paraffins and from oligomerization, and the separation of alcohols from other hydrocarbons in Fischer–Tropsch products.

May 4, 2013

Signal Processing for Chemical Sensing: ICASSP 2013 Special session

ICASSP 2013 (International Conference on Acoustics, Speech and Signal Processing) takes place at the end of May 2013 in Vancouver, Canada. The technical program is online.

Among the numerous tracks and sessions, there was a special session on Signal Processing for Chemical Sensing (Friday, May 31, 08:00 - 10:00). The chairpersons were Leonardo T. Duarte, Laurent Duval and Christian Jutten. Posted here are the slides (all slides in a .zip file) presented at the conference, along with the paper abstracts.

Typical chemical signals: 1D gas chromatogram
Summary of the Special Session:
This special session aims at showing some relevant problems in chemical engineering that can be addressed with classical or advanced methods of signal and image processing. It will be introduced by a tutorial paper, presented by the organizers, who will offer a broad overview of issues which have been addressed in this application domain, such as chemical analysis leading to PARAFAC/tensor methods, hyperspectral imaging, ion-sensitive sensors, artificial noses, chromatography, mass spectrometry, PET imaging, etc. To enlarge and illustrate the points of view of the tutorial, the invited papers of the session consider other applications (NMR, Raman spectroscopy, recognition of explosive compounds, etc.) addressed by various methods, e.g. source separation, Bayesian approaches or EMD, exploiting priors such as positivity, unit concentration or sparsity.
Typical chemical signals: 2D comprehensive gas chromatogram (GCxGC)
Motivation and rationale of the Special Session:
With the advent of more affordable, higher-resolution or innovative data acquisition techniques, chemical analysis has progressively been using more advanced signal and image processing tools. This crucial need is exemplified by the Savitzky-Golay filter, which was recently and thoroughly revisited by R. W. Schafer ("What Is a Savitzky-Golay Filter?", Signal Processing Magazine, Jul. 2011). Indeed, both specialities (analytical chemistry and signal processing) share similar values of best practice in carrying out identifications and comprehensive characterizations, albeit of chemical samples or of numerical data. Signal and image processing, for instance, often breaks data down into atoms and molecules, with specific decompositions and priors, as is common in chemistry.
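As a toy illustration of the kind of smoothing just mentioned (a sketch of mine, not taken from any of the session papers), here is a Savitzky-Golay filter applied to a noisy synthetic peak using SciPy's savgol_filter; the signal and parameter choices are arbitrary:

```python
import numpy as np
from scipy.signal import savgol_filter

# A Gaussian-shaped "chromatographic peak" with additive noise.
x = np.linspace(-5, 5, 201)
peak = np.exp(-x**2 / 2)
rng = np.random.default_rng(0)
noisy = peak + 0.05 * rng.standard_normal(x.size)

# Savitzky-Golay: least-squares fit of a low-order polynomial in a
# sliding window; here a window of 21 samples and a cubic polynomial.
smoothed = savgol_filter(noisy, window_length=21, polyorder=3)

# The smoothed trace is closer to the clean peak than the noisy one.
print(np.abs(noisy - peak).mean() > np.abs(smoothed - peak).mean())
```

The appeal of the method, as Schafer's article recalls, is that it smooths while preserving peak heights and widths better than a plain moving average.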

This special session will gather a representative sample of recent works in chemical sensing, aiming at introducing its  specific challenges to a broader signal processing audience, for the benefits of both domains.

Papers (upcoming)

Laurent Duval, Leonardo Duarte, Christian Jutten (paper)
Abstract: This tutorial paper aims at summarizing some problems, ranging from analytical chemistry to novel chemical sensors, that can be addressed with classical or advanced methods of signal and image processing. We gather them under the denomination of "chemical sensing". It is meant to introduce the special session "Signal Processing for Chemical Sensing" with a broad overview of issues which have been and remain to be addressed in this application domain, including chemical analysis leading to PARAFAC/tensor methods, hyperspectral imaging, ion-sensitive sensors, artificial noses, chromatography, mass spectrometry, etc. To enlarge and illustrate the points of view of this tutorial, the invited papers of the session consider other applications (NMR, Raman spectroscopy, recognition of explosive compounds, etc.) addressed by various methods, e.g. source separation and Bayesian approaches, exploiting typical chemical signal priors such as positivity, linearity, unit concentration or sparsity.
Keywords: Chemical analysis, Chemical sensors, Gas chromatography, Signal processing algorithms, Spectroscopy
Paper: An overview of signal processing issues in chemical sensing (HAL)
Slides: ICASSP-2013-Duarte-overview-signal-processing-chemical-sensing.pdf

Abstract: This paper deals with the reconstruction of relaxation time distributions in Nuclear Magnetic Resonance (NMR) spectroscopy. This large scale and ill-posed inverse problem is solved by the iterative minimization of a regularized objective function allowing to encode some prior assumptions on the sought distribution. The numerical optimization of the criterion is performed using a primal-dual interior point algorithm allowing to handle the non-negativity constraint. The performances of the proposed approach are illustrated through the processing of real data from a two-dimensional NMR experiment.
Keywords: T1-T2 relaxation times, Laplace transform inversion, interior-point, primal-dual, preconditioning
Paper: Primal-Dual Interior Point Optimization for a Regularized Reconstruction of NMR Relaxation Time Distributions
Slides: ICASSP-2013-Moussaoui-NMR-Primal-Dual.pdf
Abstract: We propose a sparse modal estimation approach for analyzing 2-D NMR signals. It consists in decomposing the 2-D problem into two 1-D modal estimations. Each 1-D problem is formulated in terms of simultaneous sparse approximation which is efficiently solved using the Simultaneous Orthogonal Matching Pursuit method associated with a multi-grid dictionary refinement. Then, we propose a new criterion for mode pairing which comes down to solve a sparse approximation problem involving a low dimensional dictionary. The effectiveness of the method is demonstrated on real NMR data.
Keywords: Modal retrieval, sparse approximation, multi-grid, 2-D NMR
Paper: Sparse modal estimation of 2-D NMR signals
Slides: ICASSP-2013-Brie-Sparse-Modal-2d-NMR.pdf
Abstract: New sensor technologies such as Fabry-Pérot interferometers (FPI) offer low-cost and portable alternatives to traditional infrared absorption spectroscopy for chemical analysis. However, with FPIs the absorption spectrum has to be measured one wavelength at a time. In this work, we propose an active-sensing framework to select a subset of wavelengths that best separates the specific components of a chemical mixture. Compared to passive feature selection approaches, in which the subset is selected offline, active sensing selects the next feature on-the-fly based on previous measurements so as to reduce uncertainty. We propose a novel multi-modal non-negative least squares method (MM-NNLS) to solve the underlying linear system, which has multiple near optimal solutions. We tested the framework on mixture problems of up to 10 components from a library of 100 chemicals. MMNNLS can solve complex mixtures using only a small number of measurements, and outperforms passive approaches in terms of sensing efficiency and stability.
Keywords: Active sensing, tunable sensors, multi-modal optimization, chemical mixture analysis
Paper:  Active analysis of chemical mixtures with multi-modal sparse non-negative least squares
Slides: ICASSP-2013-Gutierrez-Osuna-active-multi-modal.pdf
Abstract: In this paper, a novel gas identification approach based on the Recursive Least Squares (RLS) algorithm is proposed. We detail some adaptations of RLS to be applied to a sensor matrix of several technologies in optimal conditions. The low complexity of the algorithm and its ability to process online samples from multi-sensor make the real-time identification of volatile compounds possible. The effectiveness of this approach to early detect and recognize explosive compounds in the air has been successfully demonstrated on an experimentally obtained dataset.
Keywords: Electronic nose, Pattern recognition, Multidimensional analysis, Recursive Least Squares
Slides: ICASSP-2013-Mayoue-recursive-least-squares.pdf
Three other ICASSP 2013 papers are closely related to the topic of chemical sensing and signal processing.
Earlier post: ICASSP 2013: Special sessions   

Related post: Gas chromatography and 2D-gas chromatography for petroleum industry: The race for selectivity

March 26, 2013

Split Gaussian mathematical constant?

Assume you have a standard Gaussian bell curve (in blue), and suppose that you want to cut it into two parts of equal area with a horizontal line.

Which fraction of the Gaussian peak height yields the red and the green curves, which sum up to the Gaussian and have equal surface integrals (under the red and the green curves)?

It turns out that, numerically, the fraction on the y-axis is about 0.3063622804625085, i.e. one over 3.26410940175247 of the peak height.

If one looks at the x-axis, one has to cut at ±1.538172262286592 σ, where σ is the usual Gaussian scale parameter. In practice, cutting the Gaussian at 3/10 of its height would be good enough, assuming sufficient, far-from-critical sampling. Yet, out of curiosity, I looked at several numerical constant tables, and even Plouffe's constant inverter, and did not find any of these three. So, once again, the potential Gaussian split constants are:
  • 0.3063622804625085
  • 1.538172262286592
  • 3.26410940175247
Does anybody know whether this Gaussian split is "common practice" in some mathematical field, or whether these constants are listed somewhere? The intended application dwells in the realm of fast Gaussian filter approximation. More to come.

March 23, 2013

W2 4 i3 (2D wavelets for iCube)

Under this cryptic title dwells a nice invitation by Vincent Mazet to give a talk presenting a panorama of 2D wavelets at iCube-MIV: modèles, images et vision (Strasbourg University). Although the abstract is in French, the slides are in Globish. For those who have an eye for the finest details, Alfréd Haar and Frigyes Riesz, two prominent contributors to functional analysis and therefore (albeit indirectly) to wavelets, are honored on this memorial at Szeged University.
Friday, March 15, 2013, 10:30 a.m., room A301, joint D-IRTS seminar & École doctorale MSII
Wavelets and other two-dimensional, multiscale and geometric representations for image processing: a panorama (Ondelettes et autres représentations bidimensionnelles, multi-échelles et géométriques pour le traitement d'images : un panorama)

Speaker: Laurent Duval (IFP Energies nouvelles), with Laurent Jacques, Caroline Chaux and Gabriel Peyré

Summary: The quest for optimal representations in image processing and computer vision runs up against the richness and diversity of two-dimensional data. Numerous works have nevertheless tackled the tasks of separating smooth regions, contours and textures, seeking a compromise between complexity and representation efficiency. Taking multiscale aspects into account, in the century that saw the invention of wavelets, has played an important role in image analysis. The past decade has thus seen the emergence of a series of efficient methods combining multiscale aspects with directional and frequency aspects, allowing the orientation of the features of interest in images to be better taken into account (curvelets, contourlets and other shearlets). Their frequent redundancy yields sparser representations that are sometimes quasi-invariant under certain usual transformations (translation, rotation). These methods are the motivation for a thematic review, including a few forays into non-Euclidean domains (sphere, meshes, graphs).

Abstract: The richness of natural images makes the quest for optimal representations in image processing and computer vision challenging. The latter observation has not prevented the design of image representations, which trade off between efficiency and complexity, while achieving accurate rendering of smooth regions as well as reproducing faithful contours and textures. The most recent ones, proposed in the past decade, share a hybrid heritage highlighting the multiscale and oriented nature of edges and patterns in images. This paper presents a panorama of the aforementioned literature on decompositions in multiscale, multi-orientation bases or dictionaries. They typically exhibit redundancy to improve sparsity in the transformed domain and sometimes its invariance with respect to simple geometric deformations (translation, rotation). Oriented multiscale dictionaries extend traditional wavelet processing and may offer rotation invariance. Highly redundant dictionaries require specific algorithms to simplify the search for an efficient (sparse) representation. We also discuss the extension of multiscale geometric decompositions to non-Euclidean domains such as the sphere or arbitrary meshed surfaces. The etymology of panorama suggests an overview, based on a choice of partially overlapping “pictures”. We hope that this paper will contribute to the appreciation and apprehension of a stream of current research directions in image understanding.

L. Jacques, L. Duval, C. Chaux, G. Peyré, "A panorama on multiscale geometric representations, intertwining spatial, directional and frequency selectivity", Signal Processing, volume 91, number 12, December 2011, pages 2699-2730.

Slides
http://icube-miv.unistra.fr/fr/index.php?title=Fichier:Duval-20130315.pdf&page=1
http://www.laurent-duval.eu/Articles/Duval_L_20130315_lect_panorama-wavelet-multiscale-representations-ICube.pdf

December 15, 2012

ERBlet transform (on WITS: Where is the starlet)

ERBlet transform dual frame spectrum
After about a hundred starlets or wavelets in *let, the newborn ERBlet borrows from the auditory scale, namely the Equivalent Rectangular Bandwidth (ERB), and from non-stationary Gabor transforms (NSGT). See more at WITS: Where is the Starlet. The attendant Matlab toolbox is there (ICASSP 2013).

ERBlet

In short: A linear and invertible time-frequency transformation adapted to human auditory perception, for masking and perceptual sparsity
Etymology: From the ERB scale, or Equivalent Rectangular Bandwidth, filter banks devised for auditory-based representation, following the philosophy of third-octave filter banks. See also Frequency Analysis and Masking - MIT, Brian C. J. Moore, 1995, and Bark and ERB Bilinear Transforms - Stanford University, by J. O. Smith III
Origin:
Thibaud Necciari, Design and implementation of the ERBlet transform, FLAME 12 (Frames and Linear Operators for Acoustical Modeling and Parameter Estimation), 2012
Time-frequency representations are widely used in audio applications involving sound analysis-synthesis. For such applications, obtaining a time-frequency transform that accounts for some aspects of human auditory perception is of high interest. To that end, we exploit the theory of non-stationary Gabor frames to obtain a perception-based, linear, and perfectly invertible time-frequency transform. Our goal is to design a non-stationary Gabor transform (NSGT) whose time-frequency resolution best matches the time-frequency analysis properties by the ear. The peripheral auditory system can be modeled in a first approximation as a bank of bandpass filters whose bandwidth increases with increasing center frequency. These so-called “auditory filters” are characterized by their equivalent rectangular bandwidths (ERB) that follow the ERB scale. Here, we use a NSGT with resolution evolving across frequency to mimic the ERB scale, thereby naming the resulting paradigm "ERBlet transform". Preliminary results will be presented. Following discussion shall focus on finding the "best" transform settings allowing to achieve perfect reconstruction while minimizing redundancy.
Thibaud Necciari with P. Balazs, B. Laback, P. Soendergaard, R. Kronland-Martinet, S. Meunier, S. Savel, and S. Ystad, The ERBlet transform, auditory time-frequency masking and perceptual sparsity, 2nd SPLab Workshop, October 24–26, 2012, Brno
Time-frequency (TF) representations are widely used in audio applications involving sound analysis-synthesis. For such applications, obtaining an invertible TF transform that accounts for some aspects of human auditory perception is of high interest. To that end, we combine results of non-stationary signal processing and psychoacoustics. First, we exploit the theory of non-stationary Gabor frames to obtain a linear and perfectly invertible non-stationary Gabor transform (NSGT) whose TF resolution best matches the TF analysis properties by the ear. The peripheral auditory system can be modeled in a first approximation as a bank of bandpass filters whose bandwidth increases with increasing center frequency. These so-called “auditory filters” are characterized by their equivalent rectangular bandwidths (ERB) that follow the ERB scale. Here, we use a NSGT with resolution evolving across frequency to mimic the ERB scale, thereby naming the resulting paradigm “ERBlet transform”. Second, we exploit recent psychoacoustical data on auditory TF masking to find an approximation of the ERBlet that keeps only the audible components (perceptual sparsity criterion). Our long-term goal is to obtain a perceptually relevant signal representation, i.e., as close as possible to “what we see is what we hear”. Auditory masking occurs when the detection of a sound (referred to as the “target” in psychoacoustics) is degraded by the presence of another sound (the “masker”). To accurately predict auditory masking in the TF plane, TF masking data for masker and target signals with a good localization in the TF plane are required. To our knowledge, these data are not available in the literature. Therefore, we conducted psychoacoustical experiments to obtain a measure of the TF spread of masking produced by a Gaussian TF atom. The ERBlet transform and the psychoacoustical data on TF masking will be presented.
The implementation of the perceptual sparsity criterion in the ERBlet will be discussed.
Contributors:
Thibaud Necciari with P. Balazs, B. Laback, P. Soendergaard, R. Kronland-Martinet, S. Meunier, S. Savel, and S. Ystad
Some properties:
Develops a non-stationary Gabor transform (NSGT) [Theory, Implementation and Application of Nonstationary Gabor Frames, P. Balazs et al., J. Comput. Appl. Math., 2011] with resolution evolving over frequency to mimic the ERB scale (Equivalent Rectangular Bandwidth, after B. C. J. Moore and B. R. Glasberg, "Suggested formulae for calculating auditory-filter bandwidths and excitation patterns", J. Acoustical Society of America 74:750-753, 1983). Linear and invertible time-frequency transform adapted to human auditory perception.
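As a rough illustration (my own sketch, not part of the ERBlet toolbox), the Moore and Glasberg (1983) paper cited above gives the auditory-filter bandwidth as the polynomial ERB(F) = 6.23 F² + 93.39 F + 28.52 Hz, with the centre frequency F in kHz; the function name below is mine:

```python
# Sketch: the Moore & Glasberg (1983) polynomial formula for the
# equivalent rectangular bandwidth of the auditory filters, with the
# centre frequency expressed in kHz and the resulting ERB in Hz:
#   ERB(F) = 6.23*F**2 + 93.39*F + 28.52
def erb_moore_glasberg_1983(f_hz):
    """Auditory-filter bandwidth (Hz) at centre frequency f_hz (Hz)."""
    f_khz = f_hz / 1000.0
    return 6.23 * f_khz**2 + 93.39 * f_khz + 28.52

# Bandwidth grows with centre frequency, as for the "auditory filters"
# the ERBlet mimics: roughly 38 Hz at 100 Hz, 128 Hz at 1 kHz.
for f in (100.0, 1000.0, 4000.0):
    print(f, erb_moore_glasberg_1983(f))
```

This widening of the bands with frequency is exactly the resolution profile that the NSGT is made to follow.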
Anecdote: A Matlab implementation of the ERBlet transform should appear in 2013 for ICASSP in Vancouver.