December 20, 2014

Learning meets compression: small-data-science internship (IFPEN)

We (Camille Couprie/Laurent Duval) are looking for a highly motivated master's or engineering-school student for an internship on "Compatibility between sparse machine learning and lossy compression" at IFPEN.

Internship subject: [French/English]
Many experimental designs acquire continuous or burst-like signals or images, characteristic of a specific phenomenon. Examples at IFPEN include seismic data/images, NDT/NDE acoustic emissions (corrosion, battery diagnosis), engine benches (cylinder pressure data, fast cameras), and high-throughput screening in chemistry. Very often, such data is analyzed with standardized, a priori indices. Comparisons between different experiments (difference- or classification-based) are often based on the same indices, without resorting to the initial measurements.

The increasing data volume, the variability in sensors and sampling, and the possibility of different pre-processings raise two problems: the management of and access to data ("big data") and their optimal exploitation through dimension reduction and supervised or unsupervised learning ("data science"). This project aims at analyzing the feasibility of a joint compressed representation of data and extraction of pertinent indicators, at different characteristic scales, and the impact of the first aspect (lossy compression degradation) on the second (precision and robustness of the extracted feature indicators).

The internship has a dual goal. The first part deals with scientific research on sparse signal/image representations with convolution networks based on multiscale wavelet techniques, called scattering networks. Their descriptors (or footprints) possess fine translation, rotation and scale invariance; these descriptors will be employed for classification and detection. The second part bears on the impact of lossy compression on the preceding results, and on the development of novel sparse representations for joint compression and learning.
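
As a flavor of the kind of representation involved, here is a minimal sketch of a first-order scattering-like descriptor for a 1D signal (wavelet modulus followed by low-pass averaging); the Gaussian-derivative filters, scales and pooling size are placeholder choices of mine, not those of the references below:

    import numpy as np

    def gaussian(n, sigma):
        # Centered, normalized Gaussian window of odd length n.
        t = np.arange(n) - (n - 1) / 2.0
        g = np.exp(-0.5 * (t / sigma) ** 2)
        return g / g.sum()

    def scattering_first_order(x, sigmas=(1.0, 2.0, 4.0, 8.0), pool=32):
        # Toy first-order scattering-like descriptor: band-pass filtering
        # (Gaussian derivative), modulus, then low-pass averaging and subsampling.
        # Translation invariance grows with the pooling scale `pool`.
        feats = []
        for s in sigmas:
            psi = np.gradient(gaussian(int(10 * s) | 1, s))          # crude band-pass filter
            u = np.abs(np.convolve(x, psi, mode="same"))             # wavelet modulus
            phi = gaussian(pool | 1, pool / 4.0)                     # low-pass window
            feats.append(np.convolve(u, phi, mode="same")[::pool])   # average and subsample
        return np.concatenate(feats)

    # Descriptors of a signal and of a slightly shifted copy are close.
    x = np.random.default_rng(0).standard_normal(512)
    d1, d2 = scattering_first_order(x), scattering_first_order(np.roll(x, 5))
    print(np.linalg.norm(d1 - d2) / np.linalg.norm(d1))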

J. Bruna, S. Mallat, Invariant scattering convolution networks, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013
L. Jacques, L. Duval, C. Chaux, G. Peyré, A panorama on multiscale geometric representations, intertwining spatial, directional and frequency selectivity, Signal Processing, 2011
C. Couprie, C. Farabet, L. Najman, Y. LeCun, Convolutional nets and watershed cuts for real-time semantic labeling of RGBD videos, Journal of Machine Learning Research, 2014

A PhD thesis (Characteristic fingerprint computation and storage for high-throughput data flows and their on-line analysis, with J.-C. Pesquet, Univ. Paris-Est) is proposed, starting in September 2015.

Information update: http://www.laurent-duval.eu/lcd-2015-intern-learning-compression.html

November 29, 2014

Haiku (free form): semantic and general

The Treachery of Images, René Magritte
People who confuse
The map with the territory
Tire me a little

Big data and Data science: LIX colloquium 2014

Sketch of the Hype Cycle for Emerging Technologies
Data science and Big data are two concepts on everybody's lips and at the top of the Gartner Hype Cycle for Emerging Technologies, close to the peak of inflated expectations. The Data Science LIX colloquium 2014 at Ecole Polytechnique, organized by Michalis Vazirgiannis from DaSciM, was held yesterday on the Plateau de Saclay, which may have prevented some from attending the event. Fortunately, it was webcast.

The talks covered a wide range of topics pertaining to Datacy (data + literacy). The keynote on community detection in graphs (with a survey) promoted local optimization (OSLOM, based on order statistics). It was said that "we should define validation procedures even before starting to develop algorithms", including negative tests: on random graphs, a clustering method should find no prominent cluster (except the whole graph), in other words no signal in noise. But there was no mention of phase transitions in clustering. The variety of text data (SMS, tweets, chats, emails, news web pages, books, 100 languages, OCR and spelling mistakes) and its veracity were questioned, with Facebook estimating that between 5% and 11% of accounts are fake, and 68.8 percent of the Internet being spam (how did they get that three-figure precision?). News-hungry people may be interested in EMM News, a media-watch tool aggregating 10000 RSS feeds and 4000 news aggregators. With all these sources, some communities are concerned with virtual ghost-town effects, and look for ways to spark discussions (retweets and the like) to keep social activity alive. Flat approaches versus hierarchical grouping are still debated challenges in large-scale classification and web-scale taxonomies. Potentially novel graph structures (hypernode graphs, related to hypergraphs or maybe n-polygraphs), with convex stability and spectral theory, were also proposed in the first part of the colloquium.

Big Data Crap Gap: the space between all data and relevant data
While the Paris-Saclay Center for Data Science has opened its website, the issue of unbalanced data was exposed around the HiggsML data-driven challenge: fewer than 100 Higgs bosons expected to be detected out of 10^10 yearly. Big-data analogs of the Greek Pythia, as well as efficient indexing and mining methods, would be necessary to harness the data beast. More industrial talks, given by AXA, Amazon and Google representatives, concluded the colloquium; I could not attend them, and left with the so-called "crap gap" in mind, i.e. the gap between Relevant Data and Big Data.

Innovation driven by large data sets still requires, at the least, vague goals in mind. In Latin, "Ignoranti quem portum petat nullus suus uentus (ventus) est", wrote Seneca in his 71st letter to Lucilius. A possible translation in English: "When a man does not know what harbour he is making for, no wind is the right wind". In German, "Dem weht kein Wind, der keinen Hafen hat, nach dem er segelt". And in French, "Il n'y a point de vent favorable pour celui qui ne sait dans quel port il veut arriver".

All of the information, and possibly the information you need, may be found in the following program and videos. As the videos are not split into talks, time codes are provided, thanks to an excellent suggestion (and typo corrections) by Igor Carron.

LIX colloquium 2014 on Data Science LIVE part 1
  • 00:00:00 > 00:22:22: Introduction and program
  • 00:22:22 > 01:22:18: Keynote speech: Community detection in networks, Santo Fortunato, Aalto University
  • 01:22:18 > 01:57:30: Text and Big Data, Gregory Grefenstette, Inria Saclay - Île de France
  • 01:57:30 > 02:29:23: Accessing Information in Large Document Collections: classification in web-scale taxonomies, Eric Gaussier, Université Joseph Fourier (Grenoble I)
  • 02:29:23 > 03:01:32: Shaping Social Activity by Incentivizing Users, Manuel Gomez Rodriguez, Max Planck Institute for Software Systems
  • 03:01:32 > 03:38:00: Machine Learning on Graphs and Beyond, Marc Tommasi, Inria Lille

LIX colloquium 2014 on Data Science LIVE part 2
  • 00:00:00 > 00:33:57: Learning to discover: data science in high-energy physics and the HiggsML challenge, Balázs Kégl, CNRS
  • 00:34:11 > 01:06:15: Big Data on Big Systems: Busting a Few Myths, Peter Triantafillou, University of Glasgow
  • 01:06:15 > 01:38:29: Big Sequence Management, Themis Palpanas, Paris Descartes University

LIX colloquium 2014 on Data Science LIVE part 3 
  • 00:00:00 > 00:38:11: Understanding Videos at YouTube Scale, Richard Washington, Google
  • 00:38:11 > 01:05:48: AWS's big data/HPC innovations, Stephan Hadinger, Amazon Web Services
  • 01:05:48 > 02:02:41: Big Data in Insurance - Evolution or disruption? Stéphane Guinet, AXA 
  • 02:02:41 > 02:06:35: Closing words on a word cloud (with "time", "series", "graph" and "classification" as the big four)

November 1, 2014

Cédric Villani: mathematics is an art like any other (podcast)

"Les mathématiques sont un art comme les autres" (mathematics is an art like any other) is a series of five interviews with Cédric Villani (professor at the Université de Lyon and director of the Institut Henri Poincaré) on the program "Un autre jour est possible" on France Culture, covering poetry, music, design, street arts and cinema. This inquisitive mind does a lot, and does it well, for the popularization of mathematics and its innovation transfer toward related disciplines. A laudable effort. "It is impossible to be a mathematician without being a poet in soul," said Sofia Kovalevskaya. On December 15 and 16, 2014, the Horizon Maths forum takes place at IFP Energies nouvelles in Rueil-Malmaison (theme: "Mathematics revealed to industry"); the program in PDF is here. In more detail, after the podcast break on Cédric Villani:


Session "Methods for ab initio chemistry"
  • Pascal Raybaud (IFPEN): "Challenges of numerical performance for ab initio computations in catalysis"
  • Thierry Deutsch (CEA Grenoble): "Wavelets, a flexible basis allowing fine control of accuracy and the development of order-N methods for electronic structure computation with BigDFT"
  • Benjamin Stamm (UPMC): "A posteriori estimation for non-linear eigenvalue problems in the context of DFT methods"
  • Filippo Lipparini (Universität Mainz): "Large, polarizable QM/MM/Continuum computations: ancient wishes and recent advances"
  • Eric Cancès (Ecole des Ponts ParisTech): "Mathematical aspects of density functional theory (DFT)"
Session "Derivative-free optimization"
  • Delphine Sinoquet (IFPEN): "Applications of derivative-free optimization in the oil industry and in marine renewable energies"
  • Emmanuel Vazquez (SUPELEC): "New loss functions for Bayesian optimization"
  • Serge Gratton (CERFACS): "Derivative-free optimization: stochastic algorithms and complexity"
  • Wim van Ackooij (EDF): "Optimization under probabilistic constraints and applications to energy management"
  • Marc Schoenauer (INRIA): "Comparison-based continuous optimization: surrogate models and automatic adaptation"
  • Session wrap-up by Josselin Garnier (Université Paris Diderot)
Session "Meshing and industrial applications"
  • Jean-Marc Daniel (IFPEN): "Meshing needs for complex geological media"
  • Paul-Louis George and Houman Borouchaki (INRIA and UTT - INRIA, respectively): "Overview of generic mesh generation methods and specific meshing methods for geosciences"
  • Jean-François Remacle (UCL - Rice University): "An indirect approach to hex mesh generation"
  • Thierry Coupez (Ecole Centrale de Nantes): "Implicit boundaries and anisotropic mesh adaptation"
  • Pascal Tremblay (Michelin): "The challenges of transitioning from hexahedral to tetrahedral meshing for industrial applications"
  • Session wrap-up by Frédéric Hecht (UPMC)
Session "Visualization"
  • Sébastien Schneider (IFPEN): "A short introduction to visualization for the geosciences at IFPEN"
  • Julien Jomier (Kitware): "Scientific Visualization with Open-Source Tools"
  • Emilie Chouzenoux (UPEM): "A random block-coordinate primal-dual proximal algorithm with application to 3D mesh denoising"
  • Jean-Daniel Fekete (INRIA): "Visualizing networks with adjacency matrices"
  • Marc Antonini (CNRS): "Compression and visualization of massive 3D data"
  • Session wrap-up by Julien Tierny (CNRS - UPMC)

October 31, 2014

BEADS: Baseline Estimation And Denoising w/ Sparsity

Essentials:
BEADS paper: Baseline Estimation And Denoising w/ Sparsity
BEADS Matlab toolbox

Most signals and images can be split into broad classes of morphological features. There are five traditional classes, with potentially different names, although the boundaries between them are not fully watertight:
  • smooth parts: trends, backgrounds, continuums, biases, drifts or baselines,
  • discontinuities: contours, edges or jumps,
  • harmonic parts: oscillations, resonances, geometric textures,
  • hills: bumps, blobs or peaks,
  • noises: un-modeled, more or less random, unstructured or stochastic.
In analytical chemistry, many types of signals (chromatography, mass spectrometry, Raman, NMR) resemble Fourier spectra: a collection of positive bump-like peaks, representing the proportions of chemical compounds or atoms, sitting on an instrumental baseline, with noise. The present work (termed BEADS) combines Baseline Estimation And Denoising. It exploits the Sparsity of the peaks themselves, but also that of their derivatives. It also enforces positivity, with sparsity-promoting asymmetric L1 penalties, or regularizations thereof. A first version of the BEADS Matlab toolbox is provided.
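As a hedged illustration of the asymmetric penalty idea (a toy sketch, not the BEADS algorithm itself; the weight r and the test signal are arbitrary choices of mine):

    import numpy as np

    def asym_l1(x, r=6.0):
        # Asymmetric L1-type penalty: costs |x| on positive samples and r*|x| on
        # negative ones, which pushes minimizers toward nonnegative, peak-like values.
        return np.sum(np.where(x >= 0, x, -r * x))

    # Toy check: a positive peak train is far cheaper than its sign-flipped version.
    rng = np.random.default_rng(1)
    peaks = np.zeros(200)
    peaks[rng.integers(0, 200, 8)] = rng.uniform(1.0, 5.0, 8)
    print(asym_l1(peaks), asym_l1(-peaks))  # the flipped signal costs about r times more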
1D chromatogram with increasing baseline


Chemometrics and Intelligent Laboratory Systems, December 2014
This paper jointly addresses the problems of chromatogram baseline correction and noise reduction. The proposed approach is based on modeling the series of chromatogram peaks as sparse with sparse derivatives, and on modeling the baseline as a low-pass signal. A convex optimization problem is formulated so as to encapsulate these non-parametric models. To account for the positivity of chromatogram peaks, an asymmetric penalty function is utilized. A robust, computationally efficient, iterative algorithm is developed that is guaranteed to converge to the unique optimal solution. The approach, termed Baseline Estimation And Denoising with Sparsity (BEADS), is evaluated and compared with two state-of-the-art methods [Vincent Mazet's BackCor and airPLS] using both simulated and real chromatogram data, with Gaussian and Poisson noise.
Asymmetric L1 penalty for positive signals
Keywords: baseline correction; baseline drift; sparse derivative; asymmetric penalty; low-pass filtering; convex optimization
  • This paper jointly addresses the problems of chromatogram baseline correction and noise reduction.
  • The series of chromatogram peaks are modeled as sparse with sparse derivatives.
  • The baseline is modeled as a low-pass signal.
  • A convex optimization problem is formulated so as to encapsulate these non-parametric models and a computationally efficient, iterative algorithm is developed.
  • The performance is evaluated and compared with two state-of-the-art methods using both simulated and real chromatogram data.
I would like to thank Vincent Mazet for excellent suggestions.

September 29, 2014

Geophysics: Taking signal and noise to new dimensions (call for papers)


The journal Geophysics (from the SEG, Society of Exploration Geophysicists; SCImago journal ranking) has issued a call for papers for a special issue devoted to signal processing ("Taking signal and noise to new dimensions"), with a deadline at the end of January 2015.
Original seismic stack
Large Gaussian noise corruption
Denoised with dual-tree wavelets

Taking signal and noise to new dimensions


Scope:  
The inherent complexity of seismic data has sparked, for about half a century, the development of innovative techniques to separate signal and noise. Localized time-scale representations (e.g. wavelets), parsimonious deconvolution, sparsity-promoting restoration and reconstruction are now at the core of modern signal processing and image analysis algorithms. Together with advances from computer science and machine learning, they shaped the field of data science, aiming at retrieving the inner structure of feature-rich, complex and high-dimensional datasets. This special issue is devoted to novel methodologies and strategies capable of tackling the large data volumes necessary to harness the future of subsurface exploration. A common trait resides in the possibility of seizing both signal and noise properties at the same time, along lower-dimensional spaces, with dedicated metrics, to allow their joint use for seismic information enhancement. The traditional frontier between signal and noise is blurring, as incoherent seismic perturbations and formerly detrimental coherent wavefields, such as multiple reflections, are nowadays recognized as additional information for seismic processing, imaging and interpretation. We welcome contributions pertaining (but not limited) to:
  • emerging seismic data acquisition and management technologies
  • more compact, sparser and optimized multidimensional seismic data representations
  • filtering, noise attenuation, signal enhancement and source separation
  • advanced optimization methods and related metrics, regularizations and penalizations
  • artificial intelligence and machine learning applications for seismic data characterization
  • novel applications of signal and image processing to geophysics
  • hardware and software algorithmic breakthroughs
Timeline (tentative) for the Geophysics call for papers
  • Submission deadline: 31 Jan 2015
  • Peer review complete: 10 July 2015
  • All files submitted for production: 15 August 2015
  • Publication of issue: November-December 2015

Gravity survey @ LandTech

    September 13, 2014

    Course: Radial basis functions

    A message from Albert Cohen announces a mini-course: "Interpolation and quasi-interpolation using radial basis functions as a method of multivariate approximation".
    Professor Martin Buhmann (Justus-Liebig-Universität Giessen) will give a two-hour mini-course on radial basis functions at the Laboratoire J.-L. Lions on Monday, September 22. These tools are frequently used in multivariate approximation, in particular in high dimension, in statistics (kernel methods), and sometimes in image processing and the approximation of PDEs. Martin Buhmann is a recognized expert in this field. The course will take place from 16:00 to 18:30 in room 309 (LJLL seminar room), corridor 15-16, 3rd floor, Jussieu.
    Radial version of the Laplacian of a Gaussian wavelet

    The abstract (originally in French):
    Radial basis function methods are ways of approximating a function by a (finite or infinite) linear combination of translates of a single function, called the kernel function. This function may take, for instance, the form of an exponential (Gaussian or Poisson kernel), the coefficients of the linear combination being chosen, for example, according to interpolation conditions. Many typical properties of approximation by radial-function kernels stem from the radial symmetry of these kernels. The advantages of this method, related to one-dimensional splines, are, on the one hand, its natural generalization to arbitrary dimension (the kernel functions being generated from a function of a multidimensional variable composed with a norm; when the norm is Euclidean, one speaks of radial functions) and, on the other hand, its very fast convergence when the approximated functions are smooth enough (often spectral convergence). Moreover, for a large choice of radial functions, the interpolation problem is well posed, with a unique solution independent of the space dimension and of the distribution of the interpolation points. Such an optimal situation would be impossible, for instance, with multivariate polynomials. Among others, Gaussian kernels, multiquadric and inverse multiquadric kernels, Poisson kernels, etc., possess this interesting property, which allows numerous applications. In these two lectures we will introduce the concept of radial basis function, present properties of these approximants and detail the convergence theorems that show the power of approximation methods based on this idea.
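
    To make the interpolation conditions concrete, here is a minimal sketch of Gaussian-kernel radial basis function interpolation in 1D (a toy example of mine; the node locations, kernel width and test function are arbitrary assumptions):

        import numpy as np

        def rbf_interpolate(x_nodes, y_nodes, x_eval, sigma=0.3):
            # Gaussian-kernel RBF interpolation: solve A c = y with
            # A_ij = exp(-(x_i - x_j)^2 / (2 sigma^2)), then evaluate sum_j c_j phi(|x - x_j|).
            def kernel(a, b):
                return np.exp(-0.5 * ((a[:, None] - b[None, :]) / sigma) ** 2)
            coeffs = np.linalg.solve(kernel(x_nodes, x_nodes), y_nodes)
            return kernel(x_eval, x_nodes) @ coeffs

        # Toy usage: interpolate a smooth function from 10 scattered samples.
        nodes = np.sort(np.random.default_rng(0).uniform(0.0, 1.0, 10))
        values = np.sin(2 * np.pi * nodes)
        grid = np.linspace(0.0, 1.0, 200)
        approx = rbf_interpolate(nodes, values, grid)
        print(np.max(np.abs(approx - np.sin(2 * np.pi * grid))))  # approximation error on the grid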

    Some pointers:
    Yaser Abu-Mostafa, Learning from Data, introductory machine learning course, Caltech, 2012, lecture 16, Radial Basis Functions


    M. D. Buhmann : Radial basis function, Scholarpedia, 2010
    M. D. Buhmann : Radial Basis Functions: Theory and Implementations, 2003, Cambridge University Press
    J. B. Cherrie, R. K. Beatson, G. N. Newsam, Fast evaluation of radial basis functions: methods for generalised multiquadrics in R^n, SIAM Journal on Scientific Computing, 2002
    M. D. Buhmann : Radial basis functions, Acta Numerica, 2000
    Radial basis function methods are modern ways to approximate multivariate functions, especially in the absence of grid data. They have been known, tested and analysed for several years now and many positive properties have been identified. This paper gives a selective but up-to-date survey of several recent developments that explains their usefulness from the theoretical point of view and contributes useful new classes of radial basis function. We consider particularly the new results on convergence rates of interpolation with radial basis functions, as well as some of the various achievements on approximation on spheres, and the efficient numerical computation of interpolants for very large sets of data. Several examples of useful applications are stated at the end of the paper.
    Mark J. L. Orr, Introduction to Radial Basis Function Networks, April 1996
    D. H. Broomhead,  D. Lowe, Multivariable Functional Interpolation and Adaptive Networks, Complex Systems, 1988


    July 24, 2014

    Euclid in a Taxicab makes some SOOT: A smoothed l_1/l_2 norm ratio for sparsity enforcement and blind restoration

    [This post deals with a paper on an l1/l2 norm ratio proxy for sparsity, applied to blind signal deconvolution with an example on seismic data]

    There are taxicab services in the city of Euclid, OH, near Cleveland. There is also a Euclid Avenue in Cleveland; it was a beautiful and wealthy street a century ago, with a string of mansions known as Millionaire's Row.

    According to wikipedia, "Euclid Avenue is a street name used in multiple U.S. municipalities. Surveyors frequently named a street after Euclid (or Euclides) the father of geometry as it is the basis of their profession". 

    I wonder why I have so far seen so few "rue Euclide" in France, close to none. At least, there is no Euclid street in Paris, as you can check from the list of mathematicians with Paris streets named after them.


    Euclid Avenue Station serves subway lines A and C from Brooklyn to Manhattan. Can one really draw a line between Euclid and Manhattan? You can use taxicab geometry, linked to the Manhattan distance (in red, blue or green lines), in the absolute-value sense, or l_1 (ell-one norm or one-norm). Or you can do as Dutilleul (garou-garou), the "Passe-muraille" of Marcel Aymé, or J'onn J'onzz the Martian Manhunter, and cross through the wall, in the Euclidean way: with the square root of sums of squares, or l_2 (ell-two norm or simply two-norm).

    Most of the standard data analysis or signal processing tools we use in everyday life are based on a Euclidean minimization of the l_2 norm, from the mean to the pseudo-inverse, via the Fourier transform. Not because it is important to minimize energies, but because the derivative of a square yields a linear system, for long the only kind of system we could solve. In the presence of outliers, the l_1 norm was shown to exhibit some robustness, hence the median estimator and its descendants, the l_1 or robust PCA, or even robust Fourier or time-frequency transforms. Many recent efficient algorithms have been proposed to obtain these transforms or estimators. The l_1 norm also possesses some nice geometry, quite often attributed to Hermann Minkowski. For a primer, see The nature of length, area, and volume in taxicab geometry, Kevin P. Thompson, International Electronic Journal of Geometry, Volume 4, No. 2, pp. 193-207 (2011).

    But the l_1 norm is not always enough, in higher-dimensional spaces, to capture the low-dimensional structure of sparse data. One needs the counting l_0 quasi-norm, the count of non-zero elements, or numerosity. Alas, the l_0 quasi-norm is neither continuous nor differentiable, and hard to optimize. Luckily, under some conditions, solving a system under the nicer l_1 norm sometimes yields the sparsest solution to the system. But not always, as shown in Minimum l_1-norm solutions are not always sparse.

    Over the past years, several measures have been proposed to better quantify or approximate the sparsity of a signal or an image, taking into account the fact that the l_0 quasi-norm is homogeneous of degree 0, while standard norms are homogeneous of degree 1. One of those is simply the ratio of the l_1 and l_2 norms for vectors of dimension n, which enjoys a standard inequality:

    $\|x\|_2 \le \|x\|_1 \le \sqrt{n}\,\|x\|_2$
    With an appropriately scaled ratio of norms (through a simple affine transformation), one obtains a parsimony measure, or sparsity (sparseness) index, between 0 and 1 (also known as the Hoyer sparsity measure):

    $H(x) = \frac{\sqrt{n} - \|x\|_1/\|x\|_2}{\sqrt{n} - 1}$
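    A quick numerical check of this measure (a minimal sketch of mine; the test vectors are arbitrary):

        import numpy as np

        def hoyer_sparsity(x):
            # Hoyer sparsity index in [0, 1]: 1 for a 1-sparse vector, 0 for a constant one.
            n = x.size
            ratio = np.linalg.norm(x, 1) / np.linalg.norm(x, 2)
            return (np.sqrt(n) - ratio) / (np.sqrt(n) - 1)

        print(hoyer_sparsity(np.array([0.0, 0.0, 3.0, 0.0])))  # 1.0: maximally sparse
        print(hoyer_sparsity(np.ones(4)))                       # 0.0: not sparse at all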
    Yet it does not lend itself to easy optimization (the ratio being nonconvex), although several works have studied such a norm ratio. Some of them were publicized in Nuit Blanche:
    http://nuit-blanche.blogspot.fr/2012/04/compressive-sensing-this-week.html
    http://nuit-blanche.blogspot.fr/2012/03/old-and-new-algorithm-for-blind.html
    http://nuit-blanche.blogspot.fr/2008/05/cs-kgg-explanation-of-l0-and-l1.html
    All this long zigzag path introduces a paper on a smoothing of the nonconvex l_1/l_2 norm ratio, in a parametrized and regularized form, to put Euclid in a Taxicab. The corresponding algorithm, termed SOOT for "Smoothed One-Over-Two" norm ratio, comes with theoretical convergence results. Although not used here, the ratio of the l_1 norm to the l_2 norm, when restricted to subspaces of a given dimension, admits a lower bound given by the Kashin-Garnaev-Gluskin inequality.

    It is applied to sparse blind deconvolution (or deblurring), here with an example of sparse seismic data processing. In the same way that l_1 regularization was already present in a 1973 geophysical paper by Claerbout and Muir, the l_1/l_2 norm ratio idea can be traced back to technical reports by Gray. It involves a logarithm whose effect is, in some way, close to that of the l_0 quasi-norm; indeed, the logarithm is somehow a continuous relaxation, close to the discrete zeroth power.
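
    To illustrate the smoothing idea only (this is a simplified surrogate of my own, not the exact penalty of the paper), one can replace both norms with differentiable approximations before taking the logarithm of their ratio:

        import numpy as np

        def smoothed_l1_over_l2(x, alpha=1e-2, beta=1e-2, eta=1e-2):
            # Illustrative smooth surrogate of log(||x||_1 / ||x||_2):
            # the l1 norm is smoothed term by term, the l2 norm is regularized near zero.
            l1_smooth = np.sum(np.sqrt(x ** 2 + alpha ** 2) - alpha)
            l2_smooth = np.sqrt(np.sum(x ** 2) + eta ** 2)
            return np.log((l1_smooth + beta) / l2_smooth)

        print(smoothed_l1_over_l2(np.array([0.0, 0.0, 2.0, 0.0, -1.0])))  # sparser vector, smaller value
        print(smoothed_l1_over_l2(np.ones(5)))                            # denser vector, larger value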

    Seismological reflectivity sequence and seismic wavelet recovery with sparse blind deconvolution

    [Abstract]
    The $\ell_1/\ell_2$ ratio regularization function has shown good performance for retrieving sparse signals in a number of recent works, in the context of blind deconvolution. Indeed, it benefits from a scale invariance property much desirable in the blind context. However, the $\ell_1/\ell_2$ function raises some difficulties when solving the nonconvex and nonsmooth minimization problems resulting from the use of such regularization penalties in current restoration methods. In this paper, we propose a new penalty based on a smooth approximation to the $\ell_1/\ell_2$ function. In addition, we develop a proximal-based algorithm to solve variational problems involving this function and we derive theoretical convergence results. We demonstrate the effectiveness of our method through a comparison with a recent alternating optimization strategy dealing with the exact $\ell_1/\ell_2$ term, on an application to seismic data blind deconvolution.
    [arXiv Link]
    [Nuit Blanche link]

    And oh, I forgot to tell you (Princess Irulan, Dune introduction): there is a path between Euclid and Hardy (and Littlewood), in The Development of Prime Number Theory: From Euclid to Hardy and Littlewood. And there was a famous taxicab connection between Hardy and Ramanujan, the Hardy-Ramanujan number, or "Is 1729 a dull number?"
    Waiting for taxicab number 1729


    June 3, 2014

    Seismic Signal Processing (ICASSP 2014)

    There was a special session on "Seismic Signal Processing" at the International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2014, Florence. Our talk was on simplified optimization techniques to attenuate multiple reflections via adaptive filtering in wavelet frame domains.

    Random and structured noise both affect seismic data, hiding the reflections of interest (primaries) that carry meaningful geophysical interpretation. When the structured noise is composed of multiple reflections, its adaptive cancellation is obtained through time-varying filtering, compensating inaccuracies in given approximate templates. The under-determined problem can then be formulated as a convex optimization one, providing estimates of both filters and primaries. Within this framework, the criterion to be minimized mainly consists of two parts: a data fidelity term and hard constraints modelling a priori information. This formulation may avoid, or at least facilitate, some parameter determination tasks, usually difficult to perform in inverse problems. Not only classical constraints, such as sparsity, are considered here, but also constraints expressed through hyperplanes, onto which the projection is easy to compute. The latter constraints lead to improved performance by further constraining the space of geophysically sound solutions.
    This paper focuses on the constrained convex formulation of adaptive multiple removal. The proposed approach, based on proximal methods, is quite flexible and allows us to integrate a large panel of hard constraints corresponding to a priori knowledge on the data to be estimated (i.e. the primary signal and the time-varying filters). A key observation is that some of the related constraint sets can be expressed through hyperplanes, which are not only more convenient to design, but also easier to implement through straightforward projections. Since sparsifying transforms and constraints strongly interact [Pham-2014-TSP], we now study the class of hyperplane constraints of interest as well as their inner parameters, together with the extension to higher dimensions.
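
    For illustration, the projection onto a single hyperplane {x : <a, x> = b}, of the kind mentioned above, has a simple closed form (a generic sketch, not the specific constraint sets of the paper):

        import numpy as np

        def project_onto_hyperplane(x, a, b):
            # Orthogonal projection of x onto {z : <a, z> = b}: x + (b - <a, x>) / ||a||^2 * a.
            return x + (b - a @ x) / (a @ a) * a

        # Toy check: the projected point satisfies the constraint exactly.
        rng = np.random.default_rng(0)
        x, a, b = rng.standard_normal(5), rng.standard_normal(5), 1.0
        print(a @ project_onto_hyperplane(x, a, b))  # ~1.0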

    May 30, 2014

    Sparse template-based adaptive filtering

    Significance index related to Student's t-test
    The phenomenon arises in several real-life signal processing contexts: acoustic echo cancellation (AEC) in sound and speech, non-destructive testing where transmitted waves may rebound at material interfaces (e.g. ultrasound), or pattern matching in images; here, in seismic reflection or seismology. Weak signals (of interest) are buried under both strong random and structured noise. Provided appropriate templates are available, we propose a structured-pattern filtering algorithm (called Ricochet) based on constrained adaptive filtering in a transformed domain. Its generic methodology imposes sparsity: on coefficients in different wavelet frames (Haar, Daubechies, Symmlets), using the L1 or Manhattan norm, as well as on adaptive filter coefficients, using concentration measures for sparser filters in the time domain (L1, the squared Frobenius norm, and the mixed L1,2 norm). Regularity properties are constrained as well, for instance slow variation of the adaptive filter coefficients (uniform, Chebyshev or L-infinity norm). Quantitative results are given with a significance index reminiscent of Student's t-test.
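
    A quick sketch of the concentration and regularity measures mentioned above, computed on a hypothetical matrix of time-varying filter coefficients (rows for taps, columns for time frames; this layout and grouping are my assumptions, not the paper's conventions):

        import numpy as np

        # Hypothetical bank of time-varying filter coefficients: taps x time frames.
        H = np.random.default_rng(0).standard_normal((8, 20))

        l1        = np.sum(np.abs(H))                   # L1 (Manhattan) norm
        frob_sq   = np.sum(H ** 2)                      # squared Frobenius norm
        mixed_l12 = np.sum(np.linalg.norm(H, axis=1))   # mixed L1,2 norm (sum of per-tap L2 norms)
        slow_var  = np.max(np.abs(np.diff(H, axis=1)))  # uniform (Chebyshev) norm of the time variations
        print(l1, frob_sq, mixed_l12, slow_var)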


    Seismic data: primaries and multiples. Lost in multiples: a creeping primary (flat, bottom-right).
    Abstract: Unveiling meaningful geophysical information from seismic data requires dealing with both random and structured "noises". As their amplitude may be greater than the signals of interest (primaries), additional prior information is especially important in performing efficient signal separation. We address here the problem of multiple reflections, caused by wave-field bouncing between layers. Since only approximate models of these phenomena are available, we propose a flexible framework for time-varying adaptive filtering of seismic signals, using sparse representations, based on inaccurate templates. We recast the joint estimation of adaptive filters and primaries in a new convex variational formulation. This approach allows us to incorporate plausible knowledge about noise statistics, data sparsity and slow filter variation in parsimony-promoting wavelet frames. The designed primal-dual algorithm solves a constrained minimization problem that alleviates standard regularization issues in finding hyper-parameters. The approach demonstrates significantly good performance in low signal-to-noise-ratio conditions, both for simulated and real field seismic data.

    All the metrics here are convex. Wait a bit for something completely different with non-convex penalties, namely smoothed versions of the ratio of the L1 norm over the L2 norm: Euclid in a Taxicab: Sparse Blind Deconvolution with Smoothed $\ell_1/\ell_2$ Regularization, covered in Nuit Blanche, and with arXiv page and pdf.

    May 27, 2014

    Postdoc position: Very large data management in Geosciences


    Geophysical mesh at two resolutions
    [UPDATE: 2014/09/21 Position filled]

    So we (IFPEN) have a postdoc position on "Very large data management in Geosciences" (gestion des très gros volumes de données en géosciences), with details at: 

    Abstract: The main purpose of the post-doctoral work is to propose new data compression techniques for volumetric meshes, able to manage seismic data values attached to geometry elements (nodes or cells), with adaptive decompression for post-processing functionalities (visualization). Compression algorithms adapted to "big data" will improve the scalability of our current software, for instance geoscience fluid-flow simulation or transport combustion simulation on very large meshes. The results obtained are intended to contribute to IFPEN's scientific challenge of very large data management, with the target of being able to process billions of cells or data samples. The results will also be used to propose new software solutions for the storage, transfer and processing (exploration, visualization) of these large data sets.
    Seismic data compression and denoising
    Résumé: The objective of this post-doctorate is to propose new compression methods for data and volumetric meshes, able to handle properties attached to the geometry (cell connectivity, spatial groups of seismic traces), possibly evolving over time, while allowing progressive decompression suited to visualization and processing. Compression algorithms for volumetric data would make them usable in software tools that handle very large data sets (porous-flow simulation in geosciences, combustion simulation in transport). The results obtained are intended to address the technological challenge of very large data volumes, with a target of a billion cells or samples. They will notably be exploited for the storage, the transfer and also the manipulation (exploration, visualization) of these very large volumes.

    May 10, 2014

    Computational Harmonic Analysis: Winter School

    This message was communicated to me by Caroline Chaux, to share:

    Computational Harmonic Analysis: Winter School, Marseille, October 2014

    We are pleased to announce the winter school on Computational Harmonic Analysis - with Applications to Signal and Image Processing, that will be held in October 2014 (20-24), in Marseille, France (at CIRM).

    The topics will be:
    • Mathematical and numerical aspects of frame theory
    • Time-frequency frames and applications to audio analysis
    • Wavelets, shearlets and geometric frames (and others *-lets or directional wavelets)
    • Inverse problems and optimization
    This winter school will bring together PhD students and young postdocs (as well as a few experts) in the field of computational harmonic analysis, in order to explain the background, the efficiency, and the range of applications of a number of numerical algorithms based on the Fourier, wavelet and short-time Fourier transforms (time-frequency and Gabor analysis), as well as other atomic decomposition techniques, in particular in higher dimensions (shearlets, curvelets, ...).

    There is a wide range of topics to be covered, from the theoretical background (from infinite-dimensional settings, expressed in terms of function spaces, to finite-dimensional situations) to the development of efficient algorithms and real-world applications to music and sound processing or image analysis tasks. Mathematically oriented lectures will be complemented with practical computer sessions.

    The school will be limited to 40 participants. Registration is free but mandatory by June 30th, 2014. Participants can present their work during poster sessions if they wish. Abstracts can be submitted by September 1st, 2014.

    More information can be found on the dedicated website:
    http://feichtingertorresani.weebly.com/information2.html

    May 9, 2014

    Three-band linear gutter-bank in Florence (ICASSP 2014)
    ICASSP 2014 in Florence has just ended. The slogan was "The art of signal processing". In Florence, art is indeed everywhere, and science, signal processing included, is never very far away.

    Take for instance this example of an analysis/synthesis three-band, apparently linear, and complex gutter-bank. I do suspect a certain redundancy I cannot yet understand. Is it related to other diffusion-based filter-banks?

    May 5, 2014

    Signal processing for chemical sensing (OGST Special issue)

    OGST (Oil & Gas Science and Technology) has just published a special issue on "Advances in signal processing and image analysis for physicochemical, analytical chemistry and chemical sensing", vol. 69, number 2 (March-April 2014). It somehow parallels the ICASSP 2013 special session on Signal Processing for Chemical Sensing. Moreover, a contributed book is planned on the topic.

    The editorial (F. Rocca and L. Duval) deals with the informational content of data, sensory principles and, of course, the law of parsimony (beautifully illustrated in "The Name of the Rose"), Ockham's razor, in other words sparsity, a common aspect of recent signal processing techniques. So why is the topic interesting for chemical engineers and scientists?

    With the advent of more affordable, higher resolution or innovative data acquisition techniques (for instance hyphenated instrumentation such as two-dimensional chromatography), the need for advanced signal and image processing tools has grown in physico-chemical analysis, together with the quantity and complexity of acquired measurements.
    Either with mono-dimensional (signals) or two-dimensional (from hyphenated techniques to standard images) data, processing generally aims at improving quality and at providing more precise quantitative assessment of measurements of materials and products, to yield insight or access to information, chemical properties, reactive dynamics or textural properties, to name a few. Although chemometrics embraces everything from experimental design to calibration, more interplay between physico-chemical analysis and generic signal and image processing is believed to strengthen the two disciplines. Indeed, although they strongly differ in background and vocabulary, both specialties share similar values of best practice in carrying out identifications and comprehensive characterizations, be they of samples or of numerical data.

    The present call for papers aims at gathering contributions on recent progress and emerging trends concerning (but not limited to):
    • 1D and 2D acquisition, sparse sampling (compressive sensing), modulation/demodulation, compression, background/baseline/trend estimation, enhancement, integration, smoothing and filtering, denoising, differentiation, detection, deconvolution and source separation, resolution improvement, peak or curve fitting and matching, clustering, segmentation, multiresolution analysis, mathematical morphology, calibration, multivariate curve resolution, property prediction, regression, data mining, tomography, visualization,
    pertaining to the improvement of physico-chemical analysis techniques, including (not exclusively):
    • (high-performance) gas, liquid or ion chromatography; gel electrophoresis; diode array detector; Ultraviolet (UV), visible, Infrared (NIR, FIR), Raman or Nuclear Magnetic Resonance (NMR) spectroscopy, X-ray diffraction (XRD), X-Ray Absorption (EXAFS, XANES), mass spectrometry; photoacoustic spectroscopy (PAS); porosimetry; hyphenated techniques; ion-sensitive sensors, artificial noses; electron microscopy (SEM, TEM),
    in the following proposed domains:
    • catalysis, chemical engineering, oil and gas production, refining processes, petrochemicals, and other sources of energy, in particular alternative energies with a view to sustainable development. 
      NMR data analysis: A time-domain parametric approach using adaptive subband decomposition [pdf], E.-H. Djermoune, M. Tomczak and D. Brie
      Abstract:
      This paper presents a fast time-domain data analysis method for one- and two-dimensional Nuclear Magnetic Resonance (NMR) spectroscopy, assuming Lorentzian lineshapes, based on an adaptive spectral decomposition. The latter is achieved through successive filtering and decimation steps ending up in a decomposition tree. At each node of the tree, the parameters of the corresponding subband signal are estimated using some high-resolution method. The resulting estimation error is then processed through a stopping criterion which allows one to decide whether the decimation should be carried on or not. Thus the method leads to an automated selection of the decimation level and consequently to a signal-adaptive decomposition. Moreover, it enables one to reduce the processing time and makes the choice of usual free parameters easier, comparatively to the case where the whole signal is processed at once. The efficiency of the method is demonstrated using 1-D and 2-D 13C NMR signals.
    Inverse Problem Approach for Alignment of Electron Tomographic series [pdf], V.-D. Tran, M. Moreaud, É. Thiébaut, L. Denis and J.-M. Becker
    Abstract:
    In the refining industry, morphological measurements of particles have become an essential part of the characterization of catalyst supports. Through these parameters, one can infer the specific physicochemical properties of the studied materials. One of the main acquisition techniques is electron tomography (or nanotomography): 3D volumes are reconstructed from sets of projections taken at different angles with a Transmission Electron Microscope (TEM). This technique provides genuinely three-dimensional information at the nanometric scale. A major issue in this method is the misalignment of the projections that contribute to the reconstruction. Current alignment techniques usually employ fiducial markers, such as gold particles, for a correct alignment of the images. When the use of markers is not possible, the correlation between adjacent projections is used to align them; however, this method sometimes fails. In this paper, we propose a new method based on the inverse problem approach, where a certain criterion is minimized using a variant of the Nelder and Mead simplex algorithm. The proposed approach is composed of two steps. The first step consists of an initial alignment process, which relies on the minimization of a cost function based on robust statistics measuring the similarity of a projection to its previous projections in the series. It reduces strong shifts resulting from the acquisition between successive projections. In the second step, the pre-registered projections are used to initialize an iterative alignment-refinement process which alternates between (i) volume reconstructions and (ii) registrations of measured projections onto simulated projections computed from the volume reconstructed in (i). At the end of this process, we obtain a correct reconstruction of the volume, with the projections correctly aligned. Our method is tested on simulated data and shown to estimate accurately the translation, rotation and scale of arbitrary transforms. We have successfully tested our method with real projections of different catalyst supports.
    Abstract:
    Grazing Incidence X-ray Diffraction (GIXD) is a widely used characterization technique, applied to the investigation of the structure of thin films. As far as organic films are concerned, the confinement of the film to the substrate results in anisotropic two-dimensional GIXD patterns, such as those observed for polythiophene-based films, which are used as active layers in photovoltaic applications. Potential malfunctions of the detectors used may distort the quality of the acquired images, thus affecting the analysis process and the structural information derived. Motivated by the success of Morphological Component Analysis (MCA) in image processing, we tackle in this study the problem of recovering the missing information in GIXD images due to potential detector malfunction. First, we show that the geometrical structures present in GIXD images can be represented sparsely by means of a combination of over-complete transforms, namely the curvelet and the undecimated wavelet transform, resulting in a simple and compact description of their inherent information content. Then, the missing information is recovered by applying MCA in an inpainting framework, exploiting the sparse representation of GIXD data in these two over-complete transform domains. The experimental evaluation shows that the proposed approach is highly efficient in recovering the missing information, whether in the form of randomly burned pixels or of whole burned rows, even at the level of 50% of the total number of pixels. Thus, our approach can be applied to heal any potential problem related to detector performance during acquisition, which is of high importance in synchrotron-based experiments, since the beamtime allocated to users is extremely limited and any technical malfunction could be detrimental to the course of the experimental project. Moreover, the fact that long acquisition times or repeated measurements are not necessary, which stems from our results, adds extra value to the proposed approach.

    Abstract:
    Real-world experiments are becoming increasingly complex, needing techniques capable of tracking this complexity. Signal-based measurements are often used to capture this complexity, where a signal is a record of a sample's response to a parameter (e.g. time, displacement, voltage, wavelength) that is varied over a range of values. In signals, the responses at each value of the varied parameter are related to each other, depending on the composition or state of the sample being measured. Since signals contain multiple information points, they have rich information content but are generally complex to comprehend. Multivariate Analysis (MA) has profoundly transformed their analysis by allowing gross simplification of the tangled web of variation. In addition, MA has provided the advantage of being much more robust to the influence of noise than univariate methods of analysis. In recent years, there has been a growing awareness that the nature of multivariate methods allows their benefits to be exploited for purposes other than data analysis, such as pre-processing of signals with the aim of eliminating irrelevant variations prior to analysis of the signal of interest. It has been shown that exploiting multivariate data reduction in an appropriate way can allow high-fidelity denoising (removal of irreproducible non-signals), consistent and reproducible noise-insensitive correction of baseline distortions (removal of reproducible non-signals), accurate elimination of interfering signals (removal of reproducible but unwanted signals) and the standardisation of signal amplitude fluctuations. At present, the field is relatively small but the possibilities for much wider application are considerable. Where signal properties are suitable for MA (such as the signal being stationary along the x-axis), these signal-based corrections have the potential to be highly reproducible and highly adaptable, and are applicable in situations where the data are noisy or where the variations in the signals can be complex. As science seeks to probe datasets in less and less tightly controlled situations, the ability to provide high-fidelity corrections in a very flexible manner is becoming more critical, and multivariate-based signal processing has the potential to provide many solutions.
    Design of Smart Ion-selective Electrode Arrays based on Source Separation through Nonlinear Independent Component Analysis [pdf] Leonardo T. Duarte and Christian Jutten
    Abstract:
    The development of chemical sensor arrays based on Blind Source Separation (BSS) provides a promising solution to overcome the interference problem associated with Ion-Selective Electrodes (ISE). The main motivation behind this new approach is to ease the time-demanding calibration stage. While the first works on this problem only considered the case in which the ions under analysis have equal valences, the present work aims at developing a BSS technique that works when the ions have different charges. In this situation, the resulting mixing model belongs to a particular class of nonlinear systems that have never been studied in the BSS literature. In order to tackle this sort of mixing process, we adopted a recurrent network as the separating system. Moreover, concerning the BSS learning strategy, we develop a mutual information minimization approach based on the notion of the differential of the mutual information. The method requires a batch operation and, thus, can be used to perform off-line analysis. The validity of our approach is supported by experiments where the mixing model parameters were extracted from actual data.
    Unsupervised segmentation of hyperspectral images with spatialized Gaussian mixture model and model selection [pdf] Serge Cohen, Erwan Le Pennec
    Abstract:
    In this article, we describe a novel unsupervised spectral image segmentation algorithm. This algorithm extends the classical Gaussian Mixture Model-based unsupervised classification technique by incorporating a spatial flavor into the model: the spectra are modeled by a mixture of K classes, each with a Gaussian distribution, whose mixing proportions depend on the position. Using a piecewise-constant structure for those mixing proportions, we are able to construct a penalized maximum likelihood procedure that estimates the optimal partition as well as all the other parameters, including the number of classes. We provide a theoretical guarantee for this estimation, even when the generating model is not within the tested set, and describe an efficient implementation. Finally, we conduct some numerical experiments of unsupervised segmentation on a real dataset.