December 26, 2011

Computer tools for buddies

Among a novena of virtual life savers for computers (under Windows XP or Seven, of course; everything is already great with MacOS or Linux), we have the following. If you just want to pick one, pick Everything! Friends keep thanking me for that one.
That's my Xmas gift for virtual life friends, for what it's worth. Update from The turn of a friendly card - Train numbering trick. What else?

December 12, 2011

Chuck Norris keyboard fact

In a long-lasting trend of jokes about popular figures, namely Chuck Norris facts as well as Carl Friedrich Gauss facts (the greatest mathematician since antiquity, see Carl Friedrich Gauss Facts), I would like to share this chuck-norris-keyboard-fact (shared on Facebook or Twitter):
Chuck Norris just sucks. If he were as smart and strong as it's claimed, he'd just come here and slam my head on the keyboard asdmsdfvmwefjkmasd [space] [space] [space] [space] [space] 

Okay, I haven't posted anything for a month, so let's be serious for a while:
ChucK: audio programming language  (Strongly-timed, Concurrent, and On-the-fly Audio Programming Language)
ChucK is a new (and developing) audio programming language for real-time synthesis, composition, performance, and now, analysis - fully supported on MacOS X, Windows, and Linux. ChucK presents a new time-based, concurrent programming model that's highly precise and expressive (we call this strongly-timed), as well as dynamic control rates, and the ability to add and modify code on-the-fly. In addition, ChucK supports MIDI, OSC, HID device, and multi-channel audio. It's fun and easy to learn, and offers composers, researchers, and performers a powerful programming tool for building and experimenting with complex audio synthesis/analysis programs, and real-time interactive control.
which has something to do with keyboards...

And now for something completely different:
(Johann) Carl Friedrich Gauss is radical (with respect to "roots", as in Jimmy Cliff's Roots Radical) in signal processing for many reasons, mainly for the possible invention of what is known today as the FFT or Fast Fourier Transform (the Cooley–Tukey FFT algorithm). You may just quote Wikipedia:
This algorithm, including its recursive application, was invented around 1805 by Carl Friedrich Gauss, who used it to interpolate the trajectories of the asteroids Pallas and Juno, but his work was not widely recognized (being published only posthumously and in neo-Latin).[1][2] Gauss did not analyze the asymptotic computational time, however. Various limited forms were also rediscovered several times throughout the 19th and early 20th centuries.
or refer to "Gauss and the history of the fast Fourier transform", by Michael T. Heideman, Don H. Johnson and C. Sidney Burrus, which appeared behind paywalls in Archive for History of Exact Sciences, Volume 34, Number 3, 265-277, DOI: 10.1007/BF00348431, and in the IEEE Acoustics, Speech and Signal Processing Magazine, October 1984. A downloadable copy can be found here. The FFT has profound applied connections with the early oil industry: some say (though I am not able to provide an online reference yet) that FFT algorithms were known to the oil industry years before Cooley and Tukey's paper (1965), and that it did not even bother to patent them (because patents are a means of disclosure, and sometimes secrecy is simply better protection). Other early insights on connections between electrical engineering and geophysics are to be found in the Oral History with Enders Robinson, who states two main problems, related to some (yet unknown) inverse problem equation:
They followed the new group, but they were also very concerned with their own special problems. Their problem was (1) to collect data, then (2) to produce a map of where to drill the oil well.
Gauss primes
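The doubling trick Gauss sketched (and Cooley and Tukey rediscovered) fits in a few lines. Here is a minimal recursive radix-2 sketch in Python (power-of-two lengths only, nothing like a production FFT):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # DFT of the even-indexed samples
    odd = fft(x[1::2])    # DFT of the odd-indexed samples
    # Combine the two half-size DFTs with twiddle factors.
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])
```

Each call splits the work into two half-size transforms, hence the O(n log n) operation count that Gauss never bothered to analyze.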
So what about an even stronger Gauss fact? Well, Gauss can beat Stalin, Putin AND Medvedev. As read on
За нормальное распределение ("For the normal distribution"; in French, see Images des mathématiques, La loi normale de Gauss s'invite dans les manifs). Carl Friedrich Gauss depicted with goat's horns? No way, since Gauss is often described as the fox of mathematics, for his ability to hide his (mathematical proofs') tracks in the sand (or snow) with his tail.
Abel said, `He is like the fox, who effaces his tracks in the sand with his tail'. Gauss, in defense of his style, said, `No self-respecting architect leaves the scaffolding in place after completing the building'.
Quoted in Startribune:
"Obviously, he doesn't agree with Gauss," one commenter wrote disdainfully, referring to pioneering mathematician Carl Friedrich Gauss, who lived 200 years ago. Disenchanted Russians argue that United Russia's reported election results are so improbable as to violate Gauss' groundbreaking work on statistics.
One slogan: "we believe in Gauss, not in Churov". Vladimir Churov is the president of the Central Election Commission of Russia, but apparently breaches the central limit theorem. So, as Stalin used to say: Gauss, how many divisions? Let's look at Gaussian integers (left image). Chuck Norris vs. Gauss? The battle is ongoing.
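For the curious, the protesters' statistical argument can be toyed with on synthetic data: precinct percentages drawn from a smooth, Gaussian-like distribution should show no excess mass at round values, while doctored ones do. A hypothetical Python sketch (invented numbers, not actual Russian election data):

```python
import random

def share_near_multiples(values, step=5.0, tol=0.05):
    """Fraction of percentages lying within `tol` of a multiple of `step`."""
    hits = sum(1 for v in values if min(v % step, step - v % step) <= tol)
    return hits / len(values)

random.seed(0)
# "Honest" precincts: turnout scattered smoothly around 60% (central-limit style).
honest = [random.gauss(60, 8) for _ in range(100000)]
# "Doctored" precincts: one in ten snapped to an exact multiple of 5%.
doctored = [5 * round(v / 5) if random.random() < 0.1 else v for v in honest]

print(share_near_multiples(honest))    # close to 2 * tol / step = 0.02
print(share_near_multiples(doctored))  # much larger: the snapped tenth piles up
```

Real analyses of the 2011 data looked at turnout histograms in just this spirit, spikes at round percentages being the smoking gun.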

Update on Pixel Shaker (aka Frédéric Morain-Nicolier): Carl Gauss strongly refuted, a century in advance, the Chuck Norris conjecture (get a snapshot now, as it infringes standard arithmetic) that 1550 < 6450.

November 11, 2011

Unary Day

I wouldn't have spent this special day with the unary sequence 11h11m11s on 11/11/11 without a terrible pun, mentioning it as the anniversary of Attila, since it's "an invasion of the Huns" ("Huns" for "ones", uttered with an awful accent); I even made a Facebook page for that: Attila Fest. Not proud :) Yet this dull (not in the Ramanujan sense) sequence reminded me of two nice formulas for the golden ratio (aka $\phi$, or the divine proportion), one rational, one radical: $\phi = 1+\cfrac{1}{1+\cfrac{1}{1+\cdots}}$ and $\phi = \sqrt{1+\sqrt{1+\cdots}}$. So it seems $\phi$ could be tamed (or approached) quite easily. Indeed, it is one of the simplest non-rational numbers, as a root of a basic equation of degree 2, thus far from being transcendental. Surprisingly, $\phi$ bears some kind of transcendence (in the religious sense) as it is, somehow, beyond the grasp of the human mind: it cannot be approached easily in a "rational" way, i.e. worse than any other number, as stated in a theorem by Adolf Hurwitz (1856-1919): for any irrational $\xi$ there exist infinitely many integers $m$ and $n$ with $\left|\xi - \frac{m}{n}\right| < \frac{1}{\sqrt{5}\,n^2}$, and the constant $\sqrt{5}$ cannot be improved, precisely because of $\phi$.
The rational (continued-fraction) formula for $\phi$ above is the key: with all those divisions by ones, the denominators grow as slowly as possible. Contrary to common knowledge (in Age of Empires) that the Huns are faster. That being said, I think I'd better go back to a paper on applications of unary filters, instead of making dull puns.
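To see how those divisions by ones behave, one can truncate the all-ones continued fraction and watch the convergents (ratios of consecutive Fibonacci numbers) approach $\phi$, the worst-approximable number of all; a small Python sketch:

```python
from fractions import Fraction
import math

def phi_convergent(depth):
    """Truncate the all-ones continued fraction 1 + 1/(1 + 1/(1 + ...))."""
    x = Fraction(1)
    for _ in range(depth):
        x = 1 + 1 / x   # Fraction arithmetic keeps exact numerators/denominators
    return x

phi = (1 + math.sqrt(5)) / 2
for depth in (5, 10, 25):
    c = phi_convergent(depth)
    # Errors shrink like 1/(sqrt(5) * n^2) in the denominator n, and no faster:
    # exactly the Hurwitz bound, attained by phi.
    print(c, abs(float(c) - phi))
```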

November 4, 2011

Advances in signal and image processing for physico-chemical analysis

Looking for new frontiers in signal and image processing applications in chemistry-related analysis? Consider the following call for contributions.

 Call for papers: Dossier, Special issue on Advances in signal and image processing for physico-chemical analysis (pdf)

Deadlines - Intent: December 12th, 2011 / Final manuscript: January 8th, 2012

Oil & Gas Science and Technology - Revue d'IFP Energies Nouvelles
(online journal)

With the advent of more affordable, higher resolution or innovative data acquisition techniques (for instance hyphenated instrumentation such as two-dimensional chromatography), the need for advanced signal and image processing tools has grown in physico-chemical analysis, together with the quantity and complexity of acquired measurements. Whether with one-dimensional (signals) or two-dimensional (from hyphenated techniques to standard images) data, processing generally aims at improving quality and at providing more precise quantitative assessment of measurements of materials and products, to yield insight into or access to information, chemical properties, reactive dynamics or textural properties, to name a few. Although chemometrics embraces everything from experimental design to calibration, more interplay between physico-chemical analysis and generic signal and image processing is believed to strengthen both disciplines. Indeed, although they strongly differ in background and vocabulary, both specialities share similar values of best practice in carrying out identifications and comprehensive characterizations, be they of samples or of numerical data.
The present call for papers aims at gathering contributions on recent progress and emerging trends concerning (but not limited to):
  • 1D and 2D acquisition, sparse sampling (compressive sensing), modulation/demodulation, compression, background/baseline/trend estimation, enhancement, integration, smoothing and filtering, denoising, differentiation, detection, deconvolution and source separation, resolution improvement, peak or curve fitting and matching, clustering, segmentation, multiresolution analysis, mathematical morphology, calibration, multivariate curve resolution, property prediction, regression, data mining, tomography, visualization,
pertaining to the improvement of physico-chemical analysis techniques, including (not exclusively):
  • (high-performance) gas, liquid or ion chromatography; gel electrophoresis; diode array detector; Ultraviolet (UV), visible, Infrared (NIR, FIR), Raman or Nuclear Magnetic Resonance (NMR) spectroscopy, X-ray diffraction (XRD), X-Ray Absorption (EXAFS, XANES), mass spectrometry; photoacoustic spectroscopy (PAS); porosimetry; hyphenated techniques; electron microscopy (SEM, TEM),
in the following proposed domains:
  • catalysis, chemical engineering, oil and gas production, refining processes, petrochemicals, and other sources of energy, in particular alternative energies with a view to sustainable development.
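As a toy illustration of one topic on the list (background/baseline/trend estimation), here is a crude Python sketch on a synthetic chromatogram; real chemometric practice would use more principled methods (e.g. asymmetric least squares), so take it as an assumption-laden example only:

```python
import math

def rolling_min_baseline(signal, half_window):
    """Crude baseline: local minimum over a sliding window, then a moving
    average to smooth the resulting staircase."""
    n = len(signal)
    mins = [min(signal[max(0, i - half_window):min(n, i + half_window + 1)])
            for i in range(n)]
    return [sum(mins[max(0, i - half_window):min(n, i + half_window + 1)]) /
            (min(n, i + half_window + 1) - max(0, i - half_window))
            for i in range(n)]

# Synthetic chromatogram: a slow linear drift plus two Gaussian peaks.
t = [i / 100 for i in range(1000)]
drift = [0.5 + 0.1 * x for x in t]
peaks = [math.exp(-(x - 3) ** 2 / 0.02) + 0.6 * math.exp(-(x - 7) ** 2 / 0.05)
         for x in t]
y = [d + p for d, p in zip(drift, peaks)]

baseline = rolling_min_baseline(y, 80)
corrected = [a - b for a, b in zip(y, baseline)]   # peaks survive, drift is gone
```

The window half-width (80 samples here) has to exceed the peak widths, otherwise the baseline eats into the peaks; that trade-off is exactly what the more refined methods in the call address.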
Provisional deadlines:
  • Statement of intent: December 12th, 2011
  • Submission of final manuscript: January 8th, 2012
  • Publication: 2nd semester 2012

August 15, 2011

Opinion on Melancholia

Lars von Trier has recently offered audiences the Cannes-awarded movie Melancholia. Like the brunette and blonde sisters (one organized, the other erratic), like the two planets (Earth and Melancholia) featured in the movie (one rotating predictably forever around the Sun, the other wobbling to its final destination), the movie has attracted two-pole reviews, either good or bad (in random order). Melancholia, as a science-fiction movie (is it?), dwells on the line that joins Stanley Kubrick's 2001 and Andrei Tarkovsky's Solaris. On a very projective line, between "zero and infinity" (le zéro et l'infini, the French title of Darkness at Noon by Arthur Koestler).

Von Trier's Melancholia of course reminds one of Albrecht Dürer's (the painter, not the insect; see Monty Python's in three languages) Melencolia I, or La Melencolia. The engraving has suffered many interpretations, with all its symbolic details (magic square, sphere, truncated rhombohedron...) combined with the standard inference of "Melencolia I" as "Melencholia Imaginativa", in which "imagination" predominates over "mind" or "reason".

Beautiful pictures, deafening Wagner music, finely directed: the film would only need some direction. After clear-cut editing, it would probably make a nice desktop wallpaper.

Oh yes, I forgot to tell you (for those who have an eye for the finest details, these words are from another sci-fi movie, Dune): the Signal Processing special issue on Advances in Multirate Filter Bank Structures and Multiscale Representations is out: at ScienceDirect (Signal Processing, Volume 91, Issue 12), here in a blog post, or on a separate page. It feels like melancholia now that the issue is finally out.

July 22, 2011

Sparse is compact and in press (special issue)

Szeged University Memorial plaque in honor of Haar and Riesz
All papers in press for the Special Issue of Signal Processing on "Multirate Filter Bank Structures and Multiscale Representations" (announced here at Nuit Blanche) are now available online, published in Volume 91, Issue 12, December 2011: 100 years after Alfred Haar's seed, with the help of about 100 reviewers. While waiting for the paperback versions in the solid world, have a look at the following contributions in digital form, gathered on a dedicated page: Signal Processing: Special issue on Advances in Multirate Filter Bank Structures and Multiscale Representations. They range from 1-D signals to 2-D images, from image coding to compressive sensing, from fixed to adaptive representations, with a common bias toward sparsity. Forthcoming calls for papers in 2011 related to sparsity were gathered in a previous post: Sparse is abundant.

A century after the first outbreak of wavelets in Alfred Haar's thesis in 1909, filter banks and wavelet transforms lie at the heart of many digital signal processing and communication systems. During the last thirty years, they have been the focus of tremendous theoretical advances and practical applications in a growing digital world. They are for instance present, as local linear expansions, at the core of many existing or forthcoming audio, image or video compression algorithms.
Beyond standards, many exciting developments have emerged in filter banks and wavelets from the confrontation between scientists from different fields (including signal and image processing, computer science, harmonic analysis, approximation theory, statistics, bioengineering, physics...). At their confluence, multiscale representations of data, associated with their efficient processing in a multirate manner, have unveiled tools or refreshed methods impacting the whole data management process, from acquisition to interpretation, through communications, recovery and visualization. Multirate structures naturally shelter key concepts such as the duality between redundancy and sparsity, as well as means for extracting low dimensional structures from higher ones. In image processing in particular, various extensions of wavelets provide smart linear tools for building insightful geometrical representations of natural images.
The purpose of this special issue is to report on recent progress and emerging trends in the domain of multirate filter banks and multiscale representations of signals and images. Topics addressed faithfully reflect the active research pertaining to this field, including sparse representations of (1-D) signals and (2-D) images, multiscale models and processing, shrinkage and denoising, compressive sensing, oversampled discrete frames, geometrical multiscale transforms, hybrid and adaptive representations, and non-separable lifting for image compression.
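A century on, Haar's construction remains the simplest filter bank of all, and one analysis/synthesis level fits in a few lines; a generic Python sketch (not taken from any of the papers below):

```python
import math

def haar_analysis(x):
    """One level of the orthonormal Haar filter bank: pairwise sums
    (lowpass, decimated by 2) and differences (highpass)."""
    s = math.sqrt(2)
    low = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    high = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return low, high

def haar_synthesis(low, high):
    """Perfect-reconstruction inverse of haar_analysis."""
    s = math.sqrt(2)
    x = []
    for a, d in zip(low, high):
        x.extend([(a + d) / s, (a - d) / s])
    return x
```

Smooth stretches of a signal land almost entirely in the lowpass channel, which is the whole sparsity story of this issue in miniature.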

Fast orthogonal sparse approximation algorithms over local dictionaries (DOI:10.1016/j.sigpro.2011.01.004)

Boris Mailhé and Rémi Gribonval and Pierre Vandergheynst and Frédéric Bimbot

In this work we present a new greedy algorithm for sparse approximation called LocOMP. LocOMP is meant to be run on local dictionaries made of atoms with much shorter supports than the signal length. This notably encompasses shift-invariant dictionaries and time-frequency dictionaries, be they monoscale or multiscale. In this case, very fast implementations of Matching Pursuit are already available. LocOMP is almost as fast as Matching Pursuit while approximating the signal almost as well as the much slower Orthogonal Matching Pursuit.

Sparse approximation; Greedy algorithms; Shift invariance; Orthogonal Matching Pursuit
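For readers new to the greedy family: plain Matching Pursuit, which both OMP and LocOMP refine, boils down to a short loop. A generic numpy sketch (my own illustration, unrelated to the authors' code; the dictionary and signal are made up):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter):
    """Greedy MP: repeatedly pick the atom most correlated with the residual.
    `dictionary` holds unit-norm atoms as columns."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))
        coeffs[k] += correlations[k]
        residual = residual - correlations[k] * dictionary[:, k]
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x = 2.0 * D[:, 3] - 1.5 * D[:, 100]     # a 2-sparse synthetic signal
coeffs, residual = matching_pursuit(x, D, 20)
```

OMP additionally re-projects on all selected atoms at each step (hence its cost), and LocOMP exploits short atom supports to get most of that accuracy back cheaply.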

Recursive Nearest Neighbor Search in a Sparse and Multiscale Domain for Comparing Audio Signals (DOI:10.1016/j.sigpro.2011.03.002)

Bob Sturm and Laurent Daudet

We investigate recursive nearest neighbor search in a sparse domain at the scale of audio signals. Essentially, to approximate the cosine distance between the signals we make pairwise comparisons between the elements of localized sparse models built from large and redundant multiscale dictionaries of time-frequency atoms. Theoretically, error bounds on these approximations provide efficient means for quickly reducing the search space to the nearest neighborhood of a given datum; but we demonstrate here that the tightest bound, involving a probabilistic assumption, does not provide a practical approach for comparing audio signals with respect to this distance measure. Our experiments show, however, that regardless of these non-discriminative bounds, we only need to make a few atom pair comparisons to reveal, e.g., the position of origin of an excerpted signal, or melodies with similar time-frequency structures.

Multiscale decomposition; Sparse approximation; Time-frequency dictionary; Audio similarity

Symmetric Tight Frame Wavelets With Dilation Factor M=4 (DOI:10.1016/j.sigpro.2011.05.005)

Farras Abdelnour

In this paper we discuss a new set of symmetric tight frame wavelets with the associated filterbank outputs downsampled by four at each stage. The frames consist of seven generators obtained from the lowpass filter using spectral factorization, with the lowpass filter obtained via a Gröbner basis method. The filters are simple to construct, and offer smooth scaling functions and wavelets. Additionally, the filterbanks presented in this paper have limited redundancy while maintaining the smoothness of the underlying limit functions. The filters are linear phase (symmetric), FIR, and the resulting wavelets possess vanishing moments.

Wavelet transform; Frame; Symmetric filterbanks; Multiresolution analysis

Activelets: Wavelets for Sparse Representation of Hemodynamic Responses (DOI:10.1016/j.sigpro.2011.03.008)

Ildar Khalidov and Jalal Fadili and Francois Lazeyras and Dimitri Van De Ville and Michael Unser

We propose a new framework to extract the activity-related component in the BOLD functional Magnetic Resonance Imaging (fMRI) signal. As opposed to traditional fMRI signal analysis techniques, we do not impose any prior knowledge of the event timing. Instead, our basic assumption is that the activation pattern is a sequence of short and sparsely-distributed stimuli, as is the case in slow event-related fMRI.

We introduce new wavelet bases, termed "activelets", which sparsify the activity-related BOLD signal. These wavelets mimic the behavior of the differential operator underlying the hemodynamic system. To recover the sparse representation, we deploy a sparse-solution search algorithm.

The feasibility of the method is evaluated using both synthetic and experimental fMRI data. The importance of the activelet basis and the non-linear sparse recovery algorithm is demonstrated by comparison against classical B-spline wavelets and linear regularization, respectively.

BOLD fMRI; Hemodynamic response; Wavelet design; Sparsity; l1 minimization

Resonance-Based Signal Decomposition: A New Sparsity-Enabled Signal Analysis Method (DOI:10.1016/j.sigpro.2010.10.018)

Ivan Selesnick

Numerous signals arising from physiological and physical processes, in addition to being non-stationary, are moreover a mixture of sustained oscillations and non-oscillatory transients that are difficult to disentangle by linear methods. Examples of such signals include speech, biomedical, and geophysical signals. Therefore, this paper describes a new nonlinear signal analysis method based on signal resonance, rather than on frequency or scale, as provided by the Fourier and wavelet transforms. This method expresses a signal as the sum of a ‘high-resonance’ and a ‘low-resonance’ component—a high-resonance component being a signal consisting of multiple simultaneous sustained oscillations; a low-resonance component being a signal consisting of non-oscillatory transients of unspecified shape and duration. The resonance-based signal decomposition algorithm presented in this paper utilizes sparse signal representations, morphological component analysis, and constant-Q (wavelet) transforms with adjustable Q-factor.
Sparse signal representation; Constant-Q transform; Wavelet transform; Morphological component analysis

Multivariate empirical mode decomposition and application to multichannel filtering (DOI:10.1016/j.sigpro.2011.01.018)

Amar Kachenoura and Julien Fleureau and Laurent Albera and Jean-Claude Nunes and Lotfi Senhadji

Empirical Mode Decomposition (EMD) is an emerging topic in signal processing research, applied in various practical fields due in particular to its data-driven filter bank properties. In this paper, a novel EMD approach called X-EMD (eXtended-EMD) is proposed, which allows for a straightforward decomposition of mono- and multivariate signals without any change in the core of the algorithm. Qualitative results illustrate the good behavior of the proposed algorithm whatever the signal dimension is. Moreover, a comparative study of X-EMD with classical mono- and multivariate methods is presented and shows its competitiveness. Besides, we show that X-EMD extends the filter bank properties enjoyed by monovariate EMD to the case of multivariate EMD. Finally, a practical application on multi-channel sleep recording is presented.
Mono- and multivariate empirical mode decomposition; Filter bank structure; Electroencephalography data analysis

A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity (DOI:10.1016/j.sigpro.2011.04.025)

Laurent Jacques and Laurent Duval and Caroline Chaux and Gabriel Peyré

The richness of natural images makes the quest for optimal representations in image processing and computer vision challenging. The latter observation has not prevented the design of image representations, which trade off between efficiency and complexity, while achieving accurate rendering of smooth regions as well as reproducing faithful contours and textures. The most recent ones, proposed in the past decade, share a hybrid heritage highlighting the multiscale and oriented nature of edges and patterns in images. This paper presents a panorama of the aforementioned literature on decompositions in multiscale, multi-orientation bases or dictionaries. They typically exhibit redundancy to improve sparsity in the transformed domain and sometimes its invariance with respect to simple geometric deformations (translation, rotation). Oriented multiscale dictionaries extend traditional wavelet processing and may offer rotation invariance. Highly redundant dictionaries require specific algorithms to simplify the search for an efficient (sparse) representation. We also discuss the extension of multiscale geometric decompositions to non-Euclidean domains such as the sphere or arbitrary meshed surfaces. The etymology of panorama suggests an overview, based on a choice of partially overlapping "pictures". We hope that this paper will contribute to the appreciation and apprehension of a stream of current research directions in image understanding.
Review; Multiscale; Geometric representations; Oriented decompositions; Scale-space; Wavelets; Atoms; Sparsity; Redundancy; Bases; Frames; Edges; Textures; Image processing; Haar wavelet; Non-Euclidean wavelets

Bandlet Image Estimation with Model Selection (DOI:10.1016/j.sigpro.2011.01.013)

Charles Dossal and Stéphane Mallat and Erwan Le Pennec

To estimate geometrically regular images in the white noise model and obtain an adaptive near asymptotic minimaxity result, we consider a model selection based bandlet estimator. This bandlet estimator combines the best basis selection behaviour of the model selection and the approximation properties of the bandlet dictionary. We derive its near asymptotic minimaxity for geometrically regular images as an example of model selection with a general dictionary of orthogonal bases. This paper is thus also a self-contained tutorial on model selection with an orthogonal bases dictionary.
Model selection; White noise model; Image estimation; Geometrically regular functions; Bandlets

Augmented Lagrangian based Reconstruction of non-uniformly sub-Nyquist sampled MRI data (DOI:10.1016/j.sigpro.2011.04.033)

Jan Aelterman and Hiep Luong and Bart Goossens and Aleksandra Pizurica and Wilfried Philips

MRI has recently been identified as a promising application for compressed-sensing-like regularization because of its potential to speed up the acquisition while maintaining the image quality. Thereby non-uniform k-space trajectories, such as random or spiral trajectories, are becoming more and more important, because they are well suited to be used within the compressed-sensing (CS) acquisition framework. In this paper, we propose a new reconstruction technique for non-uniformly sub-Nyquist sampled k-space data. Several parts make up this technique, such as the non-uniform Fourier transform (NUFT), the discrete shearlet transform and an augmented Lagrangian based optimization algorithm. Because MRI images are real-valued, we introduce a new imaginary value suppressing prior, which attenuates imaginary components of MRI images during reconstruction, resulting in a better overall image quality. Further, a preconditioning based on the Voronoi cell size of each NUFT data point speeds up the conjugate gradient optimization used as part of the optimization algorithm. The resulting algorithm converges in a relatively small number of iterations and guarantees solutions that fully comply with the imposed constraints. The results show that the algorithm is applicable not only to sub-Nyquist sampled k-space reconstruction, but also to MR image fusion and/or resolution enhancement.
Augmented Lagrangian methods; MRI reconstruction; Non-uniform Fourier transform; Shearlet; Compressed sensing

Matching Pursuit Shrinkage in Hilbert Spaces (DOI:10.1016/j.sigpro.2011.04.010)

Tieyong Zeng and Francois Malgouyres

In this paper, we study a variant of the Matching Pursuit named Matching Pursuit Shrinkage. Similarly to the Matching Pursuit, it seeks an approximation of a datum living in a Hilbert space by a sparse linear expansion in a countable set of atoms. The difference with the usual Matching Pursuit is that, once an atom has been selected, we do not erase all the information along the direction of this atom. Doing so, we can evolve slowly along that direction. The goal is to attenuate the negative impact of bad atom selections.

We analyze the link between the shrinkage function used by the algorithm and the fact that the result belongs to the $l^2$, $l^1$ and $l^0$ spaces. Experimental results are also reported to show the potential application of the proposed algorithm.
Dictionary; Matching pursuit; Shrinkage; Sparse representation

Non Separable Lifting Scheme with Adaptive Update Step for Still and Stereo Image Coding (DOI:10.1016/j.sigpro.2011.01.003)

Mounir Kaaniche and Amel Benazza-Benyahia and Béatrice Pesquet-Popescu and Jean-Christophe Pesquet

Many existing works related to lossy-to-lossless multiresolution image compression are based on the lifting concept. It is worth noting that a separable lifting scheme may not appear very efficient to cope with the 2D characteristics of edges which are neither horizontal nor vertical. In this paper, we propose to use 2D non-separable lifting schemes that still enable progressive reconstruction and exact decoding of images. Their relevant advantage is to yield a tractable optimization of all the involved decomposition operators. More precisely, we design the prediction operators by minimizing the variance of the detail coefficients. Concerning the update filters, we propose a new optimization criterion which aims at reducing the inherent aliasing artifacts. A theoretical analysis of the proposed method is conducted in terms of the adaptation criterion considered in the optimization of the update filter. Simulations carried out on still images and residual ones generated from stereo pairs show the benefits which can be drawn from the proposed optimization of the lifting operators.
Lossless compression; Progressive reconstruction; Lifting schemes; Separable transforms; Non-separable transforms; Adaptive transforms; Multiresolution analysis; Wavelets; Stereo coding
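The lifting concept this paper builds on is easy to state in 1D: split the samples into even and odd, predict the odd ones from the even ones, then update the even ones to preserve averages. A Haar-flavoured integer sketch (the paper's 2D non-separable, adaptive operators are of course more elaborate):

```python
def lifting_forward(x):
    """1D Haar-flavoured lifting: split / predict / update, on integers.
    x must have even length."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]           # predict odd from even
    approx = [e + d // 2 for e, d in zip(even, detail)]   # update to keep averages
    return approx, detail

def lifting_inverse(approx, detail):
    """Undo the lifting steps in reverse order: exact, even with the
    integer divisions, which is what enables lossy-to-lossless coding."""
    even = [a - d // 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x
```

Whatever prediction or update filters one plugs in, inverting the steps in reverse order always reconstructs exactly; that structural guarantee is what makes optimizing the operators, as the authors do, so attractive.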

June 18, 2011

Sparse is abundant

William of Ockham
[Updated 2011/06/19 for iTWIST 2012 workshop in Marseille, see at bottom]
[Updated 2011/07/02 with a call for papers from the Journal of Applied Mathematics on "Preconditioning Techniques for Sparse Linear Systems"]

You still have one day (deadline: 2011/06/19) to submit a paper to ACM Multimedia SRED 2011: First International Workshop on Sparse Representation for Event Detection in Multimedia, Nov. 28 to Dec. 1 in Arizona, USA.

Should you need a little more time, an oxymoronic abundance of calls for papers related to sparsity offers additional opportunities. Let's hope your submissions will spark interesting discussions on sparsity at Nuit Blanche. (BTW, thank you Igor for promoting a PhD thesis proposal related to sparsity and seismics; I'll try to make it more international soon).

Journal: EURASIP Journal on Advances in Signal Processing
Special issue: New Image and Video Representations Based on Sparsity
Editors: Fred Truchetet, Université de Bourgogne; Akram Aldroubi, Department of Mathematics, Vanderbilt University; Ivan W. Selesnick, Polytechnic Institute of New York University; Peter Schelkens, Vakgroep Elektronica en Informatieverwerking, Vrije Universiteit Brussel; Olivier Laligant, Université de Bourgogne, Dijon, Bourgogne, France

Submission deadline: apparently open until June 30 2011 [2011/06/30], see this page (CfP)
In recent years, new signal representations based on sparsity have drawn considerable attention. Natural images can be modeled by sparse models living in high-dimensional spaces. New efficient algorithms based on sparse representations have been recently proposed and successfully applied to many image and video processing problems. This special issue will focus on how sparsity has impacted image and video processing. It intends to be an international forum for researchers to summarize the most recent developments, trends, and new ideas in the field of sparse representations and their applications to image and video processing and hence highlight the advances in this field. The topics to be covered include, but are not limited to:
  • Sparse representations for image and video
  • Multiresolution approaches, Wavelet, and X-let analysis for image processing
  • Compressed sensing
  • Applications of sparsity to image denoising, compression, segmentation, restoration, recognition, inpainting, super resolution, and so forth
Journal: EURASIP Journal on Advances in Signal Processing
Special issue: Sparse Signal Processing
Editors: Farokh Marvasti, Advanced Communications Research Institute, Sharif University of Technology; Jonathon Chambers, Advanced Signal Processing Group, Department of Electronic and Electrical Engineering, Loughborough University; Mohammad Djafari, CNRS, Ecole supérieure d'éléctricité (Supélec)

Submission deadline: July 15, 2011 [2011/07/15] (CfP)
An emerging important area of signal processing is the case when the signal is sparse in some transform domain. Sparse signal processing enables significant reductions in the sampling rate and in processing manipulations. Efficient algorithms have been developed for sparse signals for various applications; the algorithms developed seem to be application specific. It is the aim of this special issue to compare the many algorithms developed for these applications. The goal is a unified view of sparse signal processing by bringing together various fields. The key applications of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of reconstruction algorithms, papers are solicited in, but are not limited to:
  • Random sampling
  • Compressed sensing
  • Rate of innovation
  • Real Galois field error correction codes
  • Spectral estimation
  • Multisource location
  • DOA estimation in array processing
  • Sparse array beamforming
  • Sparse sensor networks
  • Blind source separation in SCA
  • Multipath channel estimation

Journal: International Journal of Mathematics and Mathematical Sciences
Special issue: Sparse Sampling and Sparse Recovery and Its Applications to Inverse Problems
Editors: Gerd Teschke, Institute for Computational Mathematics in Science and Technology, Neubrandenburg University of Applied Sciences; Anders Hansen, Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge; Ronny Ramlau, Industrial Mathematics Institute, Johannes Kepler University

Submission deadline: September 1st, 2011 [2011/09/01] (CfP)
Many applications in science and engineering require the solution of an operator equation Kx = y. Often only noisy data are available, and if the problem is ill posed, regularization methods have to be applied for the stable approximation of a solution. Influenced by the huge impact of sparse signal representations and the practical feasibility of advanced sparse recovery algorithms, the combination of sparse signal recovery and inverse problems has emerged in the last decade as a new and growing area. Currently, there exists a great variety of sparse recovery algorithms for inverse problems. These algorithms are successful for many applications and have led to breakthroughs in many fields (e.g., MRI, tomography). However, their feasibility is usually limited to problems for which the data are complete and the problem is of moderate dimension. For really large-scale problems or problems with incomplete data, these algorithms are not well suited or fail completely. In the context of signal recovery, generalized sampling theories were developed to tackle the problem of data incompleteness. One outstanding approach is the theory of compressed sensing. A major breakthrough was achieved when it was proven that it is possible to reconstruct a signal from very few measurements. A crucial condition for compressed sensing is the so-called restricted isometry property. Nowadays, this strong requirement has been relaxed in several ways, but so far all formulations of compressed sensing are in finite dimensions. Quite recently, the first attempts at infinite-dimensional formulations have emerged. In this special issue, our focus is on stable and numerically feasible recovery algorithms and, as one major question, on whether these technologies generalize to the solution of operator equations/inverse problems.
Hence we invite authors to submit original research papers and review articles that provide the state of the art in this field and extend the known theory and contribute therefore to answer these questions. We are interested in articles that explore aspects of generalized sparse sampling, sparse recovery, and inverse problems. Potential topics include, but are not limited to:
  • Generalized sampling principles and stable reconstruction
  • Compressed sampling strategies and the solution of operator equations
  • Compressive sampling principles and their extensions to infinite dimensions
  • Sparse recovery principles for inverse problems
  • Regularization theory for inverse problems with sparsity constraints
  • Algorithms and their numerical realization
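As a toy illustration of the kind of sparse recovery algorithm the issue targets, here is a minimal numpy sketch (mine, not from the call) of iterative soft-thresholding (ISTA) for the l1-regularized least-squares problem min_x 0.5*||Kx - y||^2 + lam*||x||_1; the operator K, the regularization weight, and the toy dimensions are all illustrative choices:

```python
import numpy as np

def ista(K, y, lam=0.1, step=None, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||Kx - y||^2 + lam*||x||_1."""
    if step is None:
        # Step size 1/L, with L an upper bound on the gradient's Lipschitz constant.
        step = 1.0 / np.linalg.norm(K, 2) ** 2
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        g = K.T @ (K @ x - y)                    # gradient of the data-fidelity term
        z = x - step * g                         # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# Toy example: recover a 3-sparse vector from 30 random measurements.
rng = np.random.default_rng(0)
K = rng.standard_normal((30, 100)) / np.sqrt(30)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
y = K @ x_true
x_hat = ista(K, y, lam=0.01, n_iter=2000)
```

With a small regularization weight and enough iterations, the three large entries of the recovered vector sit on the true support.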
Journal: Neurocomputing

Special issue: Distributed Machine Learning and Sparse Representation with Massive Data Sets

Editors: Oliver Obst, CSIRO ICT Centre, Sydney; Tiberio Caetano, NICTA, Canberra; Michael Mahoney, Stanford University, Stanford

Submission deadline: September 16, 2011 [2011/09/16]

The exponentially increasing demand for computing power as well as physical and economic limitations has contributed to a proliferation of distributed and parallel computer architectures. To make better use of current and future high-performance computing, and to fully benefit from these massive amounts of data, we must discover, understand and exploit the available parallelism in machine learning. Simultaneously, we have to model data in an adequate manner while keeping the models as simple as possible, by making use of a sparse representation of the data or sparse modelling of the respective underlying problem.

This special issue follows the 2011 Symposium on "Distributed Machine Learning and Sparse Representation with Massive Data Sets" (DMMD 2011). We invite both new submissions and previously unpublished work that was presented at DMMD 2011. Suggested topics for this special issue include:

  • Distributed, Multicore and Cluster based Learning Techniques
  • Machine Learning on Alternative Hardware (GPUs, Robots, Sensor Networks, Mobile Phones, Cell Processors ...)
  • Sparsity in Machine Learning and Statistics
  • Learning results and techniques on Massive Datasets
  • Dimensionality Reduction, Sparse Matrix, Large Scale Kernel Methods
  • Fast Online Algorithms for Large Scale Data
  • Parallel Computing Tools and Libraries
Journal: Journal of Visual Communication and Image Representation (JVCI)
Special issue: Sparse Representations for Image and Video Analysis
Editors: Jinhui Tang, Nanjing University of Science and Technology; Shuicheng Yan, National University of Singapore; John Wright, Microsoft Research Asia; Qi Tian, University of Texas at San Antonio; Yanwei Pang, Tianjin University; Edwige Pissaloux, Université Pierre et Marie Curie

Submission deadline: October 1, 2011 [2011/10/01] (CfP)
Sparse representation has gained popularity in the last few years as a technique to reconstruct a signal from few training examples. This reconstruction can be defined as adaptively finding a dictionary which best represents the signal on a per-sample basis. Sparse representation establishes a more rigorous mathematical framework for studying high-dimensional data and for uncovering the structures of the data, giving rise to a large repertoire of efficient algorithms. Sparse representation has been applied to visual analysis for only a few years, yet it has already shown its advantages in processing visual information and thus holds great potential in this field.
Sparse representation has wide applications in image/video processing, analysis, and understanding, such as denoising, deblurring, inpainting, compression, super-resolution, detection, classification, recognition, and retrieval. Many approaches based on sparse representation have been proposed for these applications in recent years and have shown promising results. This special issue aims to bring together the range of research efforts in sparse representation for image/video processing, analysis, and understanding. The goals of this special issue are threefold: (1) to introduce advances in the theory of sparse representation; (2) to survey the progress of applications of sparse representation in visual analysis; and (3) to discuss new sparse-representation-based technologies that could be impactful in image/video applications (preliminary results are needed).
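To make the "reconstruction from a dictionary" idea concrete, here is a minimal numpy sketch (my own illustration, not from the call) of Orthogonal Matching Pursuit, one of the standard greedy algorithms for computing a sparse code of a signal in a given dictionary; the dictionary size and the two-atom test signal are arbitrary:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedy k-sparse code of y in dictionary D."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        atoms = D[:, support]
        coef, *_ = np.linalg.lstsq(atoms, y, rcond=None)  # least squares on the support
        residual = y - atoms @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
y = 2.0 * D[:, 10] - 1.0 * D[:, 200]    # signal made of exactly two atoms
code = omp(D, y, k=2)
```

For a signal that is an exact combination of two incoherent atoms, two greedy steps recover the support and the coefficients exactly.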

The scope of this special issue is to cover all aspects that relate to sparse representation for visual analysis. Topics of interest include, but are not limited to the following:
  • The fundamental theories on sparse representation
  • Dictionary learning for sparse representation and modeling
  • The novel learning methods based on sparse representation
  • The applications of sparse representation in image/video denoising, inpainting, deblurring, compression, and super-resolution
  • Sparse representation for pattern recognition and classification
  • Sparse representation for image/video retrieval
  • Sparse reconstruction for medical imaging and radar imaging
  • Sparse component analysis and its application to blind source separation

Journal: IEEE Journal of Selected Topics in Signal Processing
Special issue: Robust Measures and Tests Using Sparse Data for Detection and Estimation
Editors: Hsiao-Chun Wu, Louisiana State University; Philippe Ciblat, ENST; Octavia A. Dobre, Memorial University of Newfoundland; Jitendra K. Tugnait, Auburn University
Submission deadline: March 28, 2011 (apparently past, still on the "Open Special Issues" page, yet on the upcoming publications, due February 2012) (CfP)
The sparse (undersampled) data constraint in statistical signal processing is quite common, for reasons of efficient computation and of the validity of system time-invariance. Hence, research on how to build reliable statistical measures and statistical tests using sparse data for different signal processing applications remains quite challenging nowadays. When real-time efficiency or unnoticeable processing delay is required on state-of-the-art microprocessors or DSP platforms, researchers are still making continual efforts to develop new robust statistical methodologies. Two crucial indicators, the “number-of-samples to number-of-parameters-to-be-estimated ratio” (referred to as SPR) and the “system performance versus signal-to-interference-plus-noise ratio” (referred to as SPSINR), can reflect both the sparse data constraint and robustness. The objective is to seek new ideas and techniques that surpass existing signal processing methods in terms of low SPR and superior SPSINR while still achieving good computational efficiency. In signal processing research, reliable statistical measures such as statistical moments/cumulants, Lp-norms, mean-square errors (MSE), Cramer-Rao bounds (CRB), signal-to-noise ratio (SNR), signal-to-interference ratio (SIR), mutual information/entropy, divergence, etc. are always in pursuit, especially subject to the restriction of limited data and/or time variance of the underlying systems, as opposed to the classical asymptotic analysis based on an infinite data set. This special issue will focus on all aspects of design, development, implementation, operation, and applications of robust measures and tests using sparse data for detection and estimation.
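As a tiny illustration of the MSE-versus-CRB comparison invoked above (my own toy example, not from the call): for i.i.d. Gaussian samples with known variance, the Cramer-Rao bound for estimating the mean is sigma^2/n, and the sample mean attains it even when the number of samples per trial is small:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, n_samples, n_trials = 2.0, 8, 20000   # few samples per trial (low SPR regime)
data = rng.normal(loc=1.0, scale=sigma, size=(n_trials, n_samples))
estimates = data.mean(axis=1)                # sample-mean estimator, one per trial
mse = np.mean((estimates - 1.0) ** 2)        # empirical mean-square error
crb = sigma ** 2 / n_samples                 # Cramer-Rao bound for the mean
```

The empirical MSE matches the bound to within Monte Carlo error, since the sample mean is an efficient estimator here.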

We invite original and unpublished research contributions in all areas relevant to robust detection and estimation using sparse data. The topics of interest include, but are not limited to:
  • New robust measures or objective functions for detection and estimation using sparse data
  • New robust statistical tests for detection and estimation using sparse data
  • New theoretical and empirical analyses for detection and estimation using sparse data
  • New results for explicit expressions of CRB or variance for detection and estimation using sparse data
  • Reliable signal quality measures using sparse data
  • General frameworks for evaluating various statistical measures/tests using sparse data
Journal: Journal of Applied Mathematics
Special issue: Special Issue on Preconditioning Techniques for Sparse Linear Systems
Editors: Massimiliano Ferronato, Department of Mathematical Methods and Models for Scientific Applications, University of Padova, Padova, Italy; Edmond Chow, School of Computational Science and Engineering, College of Computing, Georgia Institute of Technology, Atlanta; Kok Kwang Phoon, Department of Civil Engineering, National University of Singapore

Submission deadline: November 1, 2011 [2011/11/01] (CfP Special Issue on Preconditioning Techniques for Sparse Linear Systems)
The accurate and efficient solution of sparse linear systems of equations, arising from the discretization of PDEs, often represents the main memory- and time-consuming task in a computer simulation. Direct methods are still widely used on the basis of their robustness and reliability. However, they generally scale poorly with the matrix size, especially on 3D problems. For large sparse systems, iterative methods based on Krylov subspaces are the most attractive option. Several Krylov subspace solvers were developed from the 1970s through the 1990s, and they are generating growing interest in many areas of engineering and scientific computing. Nonetheless, to become really competitive with direct solvers they need appropriate preconditioning to achieve convergence in a reasonable number of iterations.

It is widely recognized that preconditioning is the key factor to increase the robustness and the computational efficiency of iterative methods. Unfortunately, theoretical results are few, and it is not rare that “empirical” algorithms work surprisingly well despite the lack of a rigorous foundation. The research on preconditioning has significantly grown over the last two decades and currently appears to be a much more active area than either direct or iterative solution methods. On one hand, this is due to the understanding that there are virtually no limits to the available options for obtaining a good preconditioner. On the other hand, it is also generally recognized that an optimal general-purpose preconditioner is unlikely to exist, so new research fields can be opened for improving the computational efficiency in the solution of any specific problem at hand on any specific computing environment.
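To illustrate the point, here is a minimal numpy sketch (mine, not from the call) of the conjugate gradient method with a Jacobi (diagonal) preconditioner, arguably the simplest general-purpose preconditioner of all, applied to a small banded SPD system with a strongly varying diagonal:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient; M_inv is the diagonal of M^{-1} ~ A^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r                      # apply the (diagonal) preconditioner
    p = z.copy()
    it = 0
    while np.linalg.norm(r) > tol and it < max_iter:
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
        it += 1
    return x, it

# Banded SPD test matrix: 1D Laplacian plus a strongly varying diagonal.
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + np.diag(np.linspace(1, 100, n))
b = np.ones(n)
M_inv = 1.0 / np.diag(A)               # Jacobi preconditioner
x, iters = pcg(A, b, M_inv)
```

Because the diagonal dominates and varies over two orders of magnitude, even this crude preconditioner brings convergence down to a few dozen iterations.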

We invite investigators to contribute original research articles as well as review articles on the development and the application of preconditioning techniques for the solution to sparse linear systems. Potential topics include, but are not limited to:
  • Development and numerical testing of novel preconditioners
  • Development and numerical testing of preconditioners for specific applications
  • Improvement of existing general-purpose algebraic preconditioners
  • Theoretical advances on the properties of existing general-purpose algebraic preconditioners
  • Application of existing techniques to novel fields
Additional conference events are found at the Compressive sensing meetings page or SIVA Conferences (not updated often enough these days). Let us mention:
The first international Travelling Workshop of Interaction between Sparse models and Technologies (iTWIST 2012), May 9-11, 2012, at CIRM, in Marseilles, France, on "Generalized sparsity in high-dimensional geometries". The range of topics will include (but may not be limited to):
  • Graph theory and applications
  • Dictionary learning
  • Sparse models in machine learning
  • Compressed sensing
  • Emerging and innovative acquisition technologies such as:
      * compressive imagers
      * hyperspectral imaging
      * analog to information converter
      * coded aperture system
      * computational photography
  • Applications to real-life problems (astronomy, biomedical, industry, multimedia  and any other field ... )
Organisers: Dr. Sandrine Anthoine, Prof. Yannick Boursier, Prof. Pascal Frossard,  Dr. Laurent Jacques, Prof. Pierre Vandergheynst, Prof. Christophe De Vleeschouwer
Probably more information soon at:

For the near present, we are awaiting the upcoming (September 2011) Adaptive Sparse Representation of Data and Applications in Signal and Image Processing (IEEE Journal of Selected Topics in Signal Processing) and the "very very sparse" special issue "Sparse Representation of Data and Images" in Advances in Adaptive Data Analysis. Theory and Applications, which appears in only a few places (Improved analysis of the subsampled randomized Hadamard transform, cited by A Note on Low-rank Matrix Decompositions via the Subsampled Randomized Hadamard Transform).

Finally, the special issue of Signal Processing on Advances in Multirate Filter Bank Structures and Multiscale Representations is complete; 4 out of 11 papers refer to sparsity in their title, and all of them in their text. Sparsity is no self-referential concept these days.

March 19, 2011

Decimate Brands of Banded matrices

About one year ago Gilbert Strang published a first paper, Fast transforms: Banded matrices with banded inverses, in the Proceedings of the National Academy of Sciences (PNAS), with the following abstract:
It is unusual for both $A$ and $A^{-1}$ to be banded — but this can be a valuable property in applications. Block-diagonal matrices $F$ are the simplest examples; wavelet transforms are more subtle. We show that every example can be factored into $A = F_1 \dots F_N$ where $N$ is controlled by the bandwidths of $A$ and $A^{-1}$ (but not by their size, so this extends to infinite matrices and leads to new matrix groups). 
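The simplest case in the abstract is easy to check numerically: a block-diagonal matrix F built from invertible 2x2 blocks has bandwidth 1, and so does its inverse. A small numpy sanity check (my own illustration, with arbitrary random blocks):

```python
import numpy as np

def bandwidth(A, tol=1e-12):
    """Smallest w such that A[i, j] == 0 whenever |i - j| > w."""
    n = A.shape[0]
    return max((abs(i - j) for i in range(n) for j in range(n)
                if abs(A[i, j]) > tol), default=0)

# Block-diagonal F with invertible 2x2 blocks: bandwidth 1, and so is F^{-1}.
rng = np.random.default_rng(2)
blocks = [rng.standard_normal((2, 2)) + 2 * np.eye(2) for _ in range(4)]
F = np.zeros((8, 8))
for k, B in enumerate(blocks):
    F[2*k:2*k+2, 2*k:2*k+2] = B
F_inv = np.linalg.inv(F)
```

The inverse of a block-diagonal matrix is simply the block-diagonal matrix of the block inverses, which is why both bandwidths stay at 1; the subtlety in Strang's paper is that wavelet-like examples are banded with banded inverses without being block-diagonal.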

Banded matrices (or band matrices) are somewhat related to implementations of discrete wavelet transforms (see for instance Short Wavelets and Matrix Dilation Equations, 1995, by G. Strang and V. Strela). The 2010 PNAS paper sparked some interest as a means to implement fast local linear filters in both the forward and inverse transforms (an ascribed tandem, first anagram to banded matrices), by splitting matrices into ones that are faster to compute. In Unraveling the Matrix (A new way of analyzing grids of numbers known as matrices could improve signal-processing applications and data-compression schemes) at MIT News, Larry Hardesty states that:
In a paper published in the July 13 issue of Proceedings of the National Academy of Science, MIT math professor Gilbert Strang describes a new way to split certain types of matrices into simpler matrices. The result could have implications for software that processes video or audio data, for compression software that squeezes down digital files so that they take up less space, or even for systems that control mechanical devices.
Richard Brualdi, Beckwith Bascom Professor Emeritus of Mathematics at the University of Wisconsin-Madison, points out that a mathematical conjecture that Strang presents in the paper has already been proven by three other groups of researchers. “It’s a very interesting theorem,” says Brualdi. “It’s already generated a couple of papers, and it’ll probably generate some more.” Brualdi points out that large data sets, such as those generated by gene sequencing, medical imaging, or weather monitoring, often yield matrices with regular structures. Bandedness is one type of structure, but there are others, and Brualdi expects other mathematicians to apply techniques like Strang’s to other types of structured matrices. “Whether or not those things will work, I really don’t know,” Brualdi says. “But Gil’s already said that he’s going to look at a different structure in a future paper.”

The chronicle can be read at Dr. Dobb's and here in pdf as well. And News I care mentioned it briefly in Fast signal transform with Banded matrix method.

The blog of a mandated scribe (on mixed banded matrices) can be used as a note to self. I completely forgot to read that paper, which was openly available at the time. Thanks to an update at Gilbert Strang's pdf paper repository, a few related others are now shared in pdf:

Fast transforms: Banded matrices with banded inverses, Proc. National Academy of Sciences 107 (#28) (2010) 12413-12416.
Abstract above

Banded Matrices with Banded Inverses and A = LPU, Proceedings Intl. Congress of Chinese Mathematicians: ICCM2010, to appear.
If $A$ is a banded matrix with a banded inverse, then $A = BC = F_1 \dots F_N$ is a product of block-diagonal matrices. We review this factorization, in which the $F_i$ are tridiagonal and $N$ is independent of the matrix size. For a permutation with bandwidth $w$, each $F_i$ exchanges disjoint pairs of neighbors and $N < 2w$. This paper begins the extension to infinite matrices. For doubly infinite permutations, the factors $F$ now include the left and right shift. For banded infinite matrices, we discuss the triangular factorization $A = LPU$ (completed in a later paper on The Algebra of Elimination). Four directions for elimination give four factorizations $LPU$ and $UPL$ and $U_1 \pi U_2$ (Bruhat) and $L_1 \pi L_2$ with different $L$, $U$, $P$ and $\pi$.

Groups of banded matrices with banded inverses, Proc. AMS, to appear (2011)
A product $A = F_1 \dots F_N$ of invertible block-diagonal matrices will be banded with a banded inverse. We establish this factorization with the number $N$ controlled by the bandwidths $w$ and not by the matrix size $n$: when $A$ is an orthogonal matrix, or a permutation, or banded plus finite rank, the factors $F_i$ have $w = 1$ and generate that corresponding group. In the case of infinite matrices, conjectures remain open.

Triangular factorizations: The algebra of elimination, submitted to SIAM Review (2011)
Elimination with only the necessary row exchanges will produce the triangular factorization $A = LPU$, with the (unique) permutation $P$ in the middle. The entries in $L$ are reordered in comparison with the more familiar $A = PLU$ (where $P$ is not unique). Elimination with the three other starting points $(1, n)$, $(n, n)$, and $(n, 1)$ produces three more factorizations of $A$, including the Wiener-Hopf form $UPL$ and Bruhat’s $U_1 \pi U_2$ with two upper triangular factors.
All these starting points are useless for doubly infinite matrices. The matrix has no first or last entry. When $A$ is banded and invertible, we look for a new way to establish $A = LPU$. First we locate the pivot rows (and the main diagonal of $A$). $LPU$ connects to the classical factorization of matrix polynomials developed for the periodic (block Toeplitz) case when $A(i,j) = A(i+b,j+b)$.
Only for anagram sake, these approaches may reanimate BDDCs (balancing domain decomposition by constraints), which may sound a macabre distend to acerbated minds walking on candidate berms.

Additional papers:
Factoring Permutation Matrices Into a Product of Tridiagonal Matrices (pdf) Michael Daniel Samson, Martianus Frederic Ezerman (July 2010)
Gilbert Strang posited that a permutation matrix of bandwidth $w$ can be written as a product of $N < 2w$ permutation matrices of bandwidth 1. A proof employing a greedy ``parallel bubblesort'' algorithm on the rows of the permutation matrix is detailed and further points of interest are elaborated. 
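A minimal sketch of that greedy "parallel bubblesort" idea, as I read it, in plain Python: each pass swaps disjoint adjacent inverted pairs, so each pass is itself a bandwidth-1 permutation, and for the bandwidth-2 example below 2w - 1 = 3 passes suffice:

```python
def greedy_parallel_bubblesort(perm):
    """Factor a banded permutation into passes of disjoint adjacent swaps.

    Each returned pass is a list of positions i where (i, i+1) is swapped;
    each pass corresponds to a bandwidth-1 permutation matrix.
    """
    a = list(perm)
    factors = []
    while a != sorted(a):
        swaps, i = [], 0
        while i < len(a) - 1:
            if a[i] > a[i + 1]:          # adjacent inversion: swap it
                a[i], a[i + 1] = a[i + 1], a[i]
                swaps.append(i)
                i += 2                    # swaps within one pass stay disjoint
            else:
                i += 1
        factors.append(swaps)
    return factors

perm = [2, 3, 0, 1]                       # a permutation with bandwidth w = 2
w = max(abs(p - i) for i, p in enumerate(perm))
factors = greedy_parallel_bubblesort(perm)
```

Replaying the recorded passes in reverse order on the identity reproduces the original permutation, which is exactly the factorization into bandwidth-1 factors.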

Factorization Of Banded Permutations (pdf), Greta Panova (July 2010)
We prove a conjecture of Gilbert Strang stating that a banded permutation of bandwidth $w$ can be represented as a product of at most $2w-1$ permutations of bandwidth 1.
Approximating the inverse of banded matrices by banded matrices with applications to probability and statistics (pdf), Peter J. Bickel, Marko Lindner (February 2010)
In the first part of this paper we give an elementary proof of the fact that if an infinite matrix $A$, which is invertible as a bounded operator on $\ell^2$, can be uniformly approximated by banded matrices then so can the inverse of $A$. We give explicit formulas for the banded approximations of $A^{-1}$ as well as bounds on their accuracy and speed of convergence in terms of their band-width. In the second part we apply these results to covariance matrices $\Sigma$ of Gaussian processes and study mixing and beta mixing of processes in terms of properties of $\Sigma$. Finally, we note some applications of our results to statistics.
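The first part of the Bickel-Lindner result is easy to observe numerically: for a well-conditioned banded matrix, the entries of the inverse decay away from the diagonal, so truncating $A^{-1}$ to a growing bandwidth gives rapidly improving approximations. A small numpy illustration (my own, with an arbitrary diagonally dominant tridiagonal test matrix):

```python
import numpy as np

n = 60
A = 4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # banded, diagonally dominant
A_inv = np.linalg.inv(A)

def band_truncate(M, b):
    """Zero out the entries of M beyond bandwidth b."""
    i, j = np.indices(M.shape)
    return np.where(np.abs(i - j) <= b, M, 0.0)

# Operator-norm error of the banded approximations of A^{-1}, for growing bandwidth.
errors = [np.linalg.norm(A_inv - band_truncate(A_inv, b), 2) for b in range(1, 15)]
```

For this matrix the entries of $A^{-1}$ decay geometrically away from the diagonal, so the approximation error shrinks at a fixed rate per extra band.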

New words learned by myself today:
acerbated: embittered, resentful
berm: (1) a narrow shelf, path, or ledge typically at the top or bottom of a slope; also : a mound or wall of earth or sand (2) the shoulder of a road
distend: become wider, cause to expand as if by internal pressure, swell from or as if from internal pressure