Laurent Jacques let me know that registration is now open for iTWIST'14:

Dear all,

Registration is now officially open for:

iTWIST'14: "international Traveling Workshop on Interactions between Sparse models and Technology"
August 27-29, 2014, Namur, Belgium
http://sites.google.com/site/itwist14

The registration page is here: https://sites.google.com/site/itwist14/registration

Remark: registration will close on August 1st, or earlier if the cap of 100 participants is reached before that date. Don't wait too long if you wish to join us.

For your information, the iTWIST'14 workshop is honored by 9 invited speakers and 21 high-quality contributions, listed here:
https://sites.google.com/site/itwist14/invited-speakers
https://sites.google.com/site/itwist14/talks-posters

The general program can also be accessed at https://sites.google.com/site/itwist14/program

We are looking forward to welcoming you in Namur at the end of August.

The iTWIST'14 committee: https://sites.google.com/site/itwist14/committees

--
Prof. Laurent Jacques
F.R.S.-FNRS Research Associate
Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM/ELEN) - UCL
Bâtiment Stévin, Place du Levant 2, PO Box L5.04.04
1348 Louvain-la-Neuve, Belgium

The speakers include:

Invited Speakers

MIT, USA

"Convex recovery from interferometric measurements"

Lifting, semidefinite relaxation, and expander graphs have recently helped formulate good solutions to the phase retrieval problem (Candes et al.) and the angular synchronization problem (Singer et al.). In this talk, I explain how the same line of thought reliably removes the local minima in interferometric inversion, a useful variant of numerical inverse scattering where the problem is to fit cross-correlations of wavefields rather than the wavefields themselves. While most compressed-sensing-like results assume randomness in the measurements, I explain why interferometric inversion is a setting in which a deterministic recovery result holds. In the process, we solve a question posed by Candes et al. in 2011 on robust phase retrieval.
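As a toy illustration (mine, not the speaker's) of why lifting helps: if all pairwise cross-correlations of a signal are available, the lifted matrix B with entries B_ij = x_i conj(x_j) is exactly rank one, and a plain power iteration recovers the signal up to an unrecoverable global phase. The talk's setting, with partial and noisy correlations handled via semidefinite relaxation, is of course much richer; this sketch assumes full, noiseless data.

```python
import cmath
import random

random.seed(0)
n = 4
# Unknown complex signal with unit-modulus entries (phases are the unknowns).
x = [cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) for _ in range(n)]

# "Lifted" interferometric data: all pairwise cross-correlations x_i * conj(x_j).
B = [[x[i] * x[j].conjugate() for j in range(n)] for i in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def normalize(v):
    nrm = sum(abs(c) ** 2 for c in v) ** 0.5
    return [c / nrm for c in v]

# Power iteration: B = x x^* is rank one, so its leading eigenvector
# is x itself, up to a global phase and scale.
v = normalize([complex(1, 0)] * n)
for _ in range(20):
    v = normalize(matvec(B, v))

# Alignment |<v, x>| / ||x|| should be ~1; the global phase cannot be recovered.
align = abs(sum(v[i] * x[i].conjugate() for i in range(n))) / n ** 0.5
print(round(align, 6))
```

With full correlations the iteration is exact after a single step; the interesting theory in the talk concerns what survives when only a sparse, noisy subset of the B_ij is measured.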

Univ. Cambridge, UK

"Compressed sensing in the real world - The need for a new theory"

Compressed sensing is based on three pillars: sparsity, incoherence, and uniform random subsampling. In addition, the concepts of uniform recovery and the Restricted Isometry Property (RIP) have had a great impact. Intriguingly, in an overwhelming number of inverse problems where compressed sensing is used or can be used (such as MRI, X-ray tomography, electron microscopy, reflection seismology, etc.) these pillars are absent. Moreover, easy numerical tests reveal that with the successful sampling strategies used in practice one observes neither uniform recovery nor the RIP. In particular, none of the existing theory can explain the success of compressed sensing in a vast area where it is used. In this talk we will demonstrate how real-world problems are not sparse, yet asymptotically sparse; coherent, yet asymptotically incoherent; and moreover, that uniform random subsampling yields highly suboptimal results. In addition, we will present easy arguments explaining why uniform recovery and the RIP are not observed in practice. Finally, we will introduce a new theory that aligns with the actual implementation of compressed sensing that is used in applications. This theory is based on asymptotic sparsity, asymptotic incoherence, and random sampling with different densities. This theory supports two intriguing phenomena observed in reality: 1. the success of compressed sensing is resolution dependent; 2. the optimal sampling strategy is signal-structure dependent. The last point opens up a whole new area of research, namely the quest for optimal sampling strategies.
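The "asymptotically sparse, not sparse" point is easy to see numerically. Here is a small illustration of my own (not the speaker's): take a piecewise-smooth signal, run a Haar wavelet transform, and look at the fraction of significant coefficients per scale. Coarse scales are dense; fine scales are nearly empty. The signal and threshold below are arbitrary choices.

```python
import math

# Piecewise-smooth test signal: a smooth ramp plus one jump.
N = 256
sig = [math.sin(2 * math.pi * t / N) + (1.0 if t > N // 3 else 0.0) for t in range(N)]

# Orthonormal Haar wavelet transform, computed level by level.
levels = []          # detail coefficients, finest scale first for now
approx = sig[:]
while len(approx) > 1:
    avg = [(approx[2 * i] + approx[2 * i + 1]) / math.sqrt(2) for i in range(len(approx) // 2)]
    det = [(approx[2 * i] - approx[2 * i + 1]) / math.sqrt(2) for i in range(len(approx) // 2)]
    levels.append(det)
    approx = avg
levels.reverse()     # now coarsest scale first

# Fraction of "significant" coefficients per scale: it decays toward fine
# scales, i.e. the signal is asymptotically sparse rather than sparse outright.
thresh = 0.1
fractions = [sum(1 for c in det if abs(c) > thresh) / len(det) for det in levels]
print([round(f, 3) for f in fractions])
```

This scale-dependent density is exactly what motivates sampling with different densities per level rather than uniformly at random.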

Columbia University, USA

"Towards Efficient Universal Compressed Sensing"

The nascent field of compressed sensing is founded on the fact that high-dimensional signals with “simple structure” can be recovered accurately from just a small number of randomized samples. Several specific kinds of structures have been explored in the literature, from sparsity and group sparsity to low-rankness. However, two fundamental questions have been left unanswered, namely: What are the general abstract meanings of “structure” and “simplicity”? And do there exist “universal” algorithms for recovering such simple-structured objects from fewer samples than their ambient dimensions? As the data generation rate soars, the importance of such universal algorithms, which require no or very little prior knowledge about the data generation mechanism, becomes more evident. These “universal” algorithms would be applicable to various kinds of data and hence are appealing in many applications.

In this talk, I will address both aforementioned questions. Using algorithmic information theoretic tools such as the Kolmogorov complexity, we provide a unified definition of structure and simplicity. Leveraging this new definition, motivated by Occam’s Razor, we develop and analyze an abstract algorithm, namely minimum complexity pursuit (MCP), for signal recovery. MCP requires just O(3κ) randomized samples to recover a signal of “complexity” κ and ambient dimension n. We also discuss the robustness of MCP to various types of noise in the system. While MCP is based on Kolmogorov complexity and hence is not implementable, I will also present our recent results which show how one can employ tools from universal compression to build implementable schemes with the same theoretical guarantees. These results pave the way toward efficient universal compressed sensing algorithms.

EPFL, Switzerland

"Low-rank tensor methods"

Numerical methods based on low-rank tensor approximations have been successfully applied to solve a number of challenging applications that were considered intractable until recently. This includes, for example, high-dimensional, stochastic and multiscale partial differential equations as well as large-scale Markov chains. The aim of this talk is to highlight some of these developments. In particular, we will discuss the approximability of a given tensor in tree-based low-rank formats. For function-based tensors, it is well known that classical notions of smoothness are insufficient to address this question in high dimensions. Instead, the locality of interactions in the underlying application appears to play a much more important role. Tree-based low-rank formats form smooth manifolds and therefore allow the application of dynamical low-rank and Riemannian optimization techniques. We will explain how these techniques can be used to address a broad variety of applications, including tensor completion problems.
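To make the low-rank idea concrete, here is a minimal sketch (my own, not from the talk) of the simplest possible case: fitting a rank-one CP model u ⊗ v ⊗ w to a 3-way tensor by alternating least squares, where each factor update is a closed-form solve with the other two held fixed. The tree-based formats of the talk (hierarchical Tucker, tensor trains) generalize this far beyond rank one; the exactly rank-one tensor below is an assumption made so the toy converges exactly.

```python
import random

random.seed(1)
I, J, K = 4, 5, 6
# An exactly rank-one tensor T[i][j][k] = a_i * b_j * c_k.
a = [random.uniform(0.5, 1.5) for _ in range(I)]
b = [random.uniform(0.5, 1.5) for _ in range(J)]
c = [random.uniform(0.5, 1.5) for _ in range(K)]
T = [[[a[i] * b[j] * c[k] for k in range(K)] for j in range(J)] for i in range(I)]

# Alternating least squares: with v, w fixed, the optimal u is a
# contraction of T against v and w, divided by ||v||^2 ||w||^2 (and cyclically).
u, v, w = [1.0] * I, [1.0] * J, [1.0] * K
for _ in range(30):
    nv = sum(t * t for t in v); nw = sum(t * t for t in w)
    u = [sum(T[i][j][k] * v[j] * w[k] for j in range(J) for k in range(K)) / (nv * nw)
         for i in range(I)]
    nu = sum(t * t for t in u)
    v = [sum(T[i][j][k] * u[i] * w[k] for i in range(I) for k in range(K)) / (nu * nw)
         for j in range(J)]
    nv = sum(t * t for t in v)
    w = [sum(T[i][j][k] * u[i] * v[j] for i in range(I) for j in range(J)) / (nu * nv)
         for k in range(K)]

# Reconstruction error of the fitted rank-one model.
err = sum((T[i][j][k] - u[i] * v[j] * w[k]) ** 2
          for i in range(I) for j in range(J) for k in range(K)) ** 0.5
print(round(err, 10))
```

The payoff of such formats is storage: the rank-one model keeps I + J + K numbers instead of I·J·K, and tree-based formats trade rank against dimension in the same spirit.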

École Normale Supérieure de Paris, France

"A Probabilistic approach to compressed sensing: Bounds, Algorithms, and Magic Matrices"

Compressed sensing has triggered a major evolution in signal acquisition over the last decade. A major part of the theoretical and computational effort has been dedicated to convex optimization. In this talk, I will concentrate instead on probabilistic approaches and review some of the progress that has been obtained through the joint use of three ingredients: probabilistic considerations, message-passing algorithms, and a careful design of the measurement matrix.

Duke Univ., USA

"Multiscale analysis of probability measures and data sets in high dimensions"

We discuss recent advances in the construction of multiscale decompositions and estimators for probability measures in high dimensions. Our main interest stems from the analysis of high-dimensional data sets, which may be modeled as samples from probability measures. We focus on the case where the probability measure concentrates around low-dimensional sets, not necessarily smooth. We construct novel estimators for such probability measures, prove finite-sample performance bounds on these estimators, and show that under suitable assumptions the curse of (ambient) dimensionality is defeated. We show that these estimators can be implemented by efficient algorithms, and present examples of applications. Similarly, we use the underlying multiscale decomposition of data for other statistical learning tasks, such as regression and classification, and discuss the corresponding performance bounds, algorithms, and applications.

Univ. California, Berkeley, USA

"Combining compressed sensing and representational learning - theory and two applications"

First I will describe conditions under which the sparse structure in compressed data can be guaranteed to be recovered by dictionary learning. Second I will describe two applications of such a combination of compressed sensing and dictionary learning, one in pattern classification and one in theoretical neuroscience.

For pattern classification, sparse coding has recently been demonstrated to be an important first processing step. However, for high-dimensional data, sparse coding can be computationally expensive. I will present some empirical results demonstrating that random compression can be an efficient way of making sparse coding more tractable.
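The key observation behind this combination can be sketched in a few lines (an illustration of mine, not the speaker's experiments): if a signal is a sparse combination of dictionary atoms, then after a random Gaussian compression one can still identify the active atom by correlating the compressed measurement against the compressed dictionary, without ever decompressing. For simplicity, the toy below uses a 1-sparse signal and a single matching-pursuit step; all sizes and the dictionary are arbitrary.

```python
import math
import random

random.seed(2)
n, m, atoms = 100, 30, 25   # ambient dim, compressed dim, dictionary size

def unit(vec):
    s = math.sqrt(sum(t * t for t in vec))
    return [t / s for t in vec]

# Dictionary of unit-norm random atoms (a stand-in for a learned dictionary).
D = [unit([random.gauss(0, 1) for _ in range(n)]) for _ in range(atoms)]

# 1-sparse signal: a single scaled atom.
true_idx = 7
x = [3.0 * D[true_idx][i] for i in range(n)]

# Random Gaussian compression y = A x, with A an m x n matrix.
A = [[random.gauss(0, 1) / math.sqrt(m) for _ in range(n)] for _ in range(m)]
y = [sum(A[r][i] * x[i] for i in range(n)) for r in range(m)]

# Compressed dictionary: every atom is pushed through the same A.
AD = [[sum(A[r][i] * D[a][i] for i in range(n)) for r in range(m)] for a in range(atoms)]

# One matching-pursuit step in the compressed domain: the most correlated
# compressed atom identifies the active atom.
def corr(u, v):
    nu = math.sqrt(sum(t * t for t in u))
    return abs(sum(u[r] * v[r] for r in range(m))) / nu

best = max(range(atoms), key=lambda a: corr(AD[a], y))
print(best, true_idx)
```

Sparse coding then runs in dimension m instead of n, which is where the computational savings come from.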

In the brain, high-dimensional neural representations, such as feature representations in the visual stream, have to be communicated from one brain region to the next. This communication is challenging because the population of neurons projecting from one region to another may constitute a severe wiring bottleneck. I will describe how the combination of compressed sensing and representational learning can solve this communication problem. I will discuss the predictions of this theory in the light of existing experimental neuroscience data.

Ricoh Innov., USA

"Sparse coding in computational imaging"

Sparse approximations and dictionary learning have so far found a tremendous number of applications in image processing tasks such as image restoration, image analysis and data mining. A vast majority of these applications are based on images obtained with traditional digital cameras, which have been shown to have sparse support in Gabor-like dictionaries. On the other hand, in recent years we have witnessed an expanding development of computational imaging systems, where the traditional optics are modified in order to get additional imaging benefits with more powerful computations. One well-known example of such systems is the light field (plenoptic) camera, which has a microlens array placed in front of a digital sensor. This configuration allows for 3D imaging, digital refocusing and multi-spectral imaging. Images and light fields obtained with such systems have different statistics and structure than traditional images, and sparse representations with Gabor-like dictionaries might not be optimal anymore. In this talk, I will discuss challenges related to sparse representations and dictionary learning for computational imaging systems and present some recent results on learning sparse codes for solving inverse problems in plenoptic imaging.

CNRS (ITAV USR 3505 and IMT UMR 5219), Toulouse, France

"Variable density sampling with acquisition constraints"

One of the key concepts emerging from the compressed sensing literature is that of variable density sampling (VDS). In its simplest abstract form, VDS consists in sampling a signal by projecting it on a set of vectors drawn independently at random among a given family. In the context of Magnetic Resonance Imaging (MRI) - which will be the application motivating my talk - it consists in measuring Fourier coefficients drawn at random on the Fourier plane with a probability that decreases radially towards high frequencies.

In the first part of my talk, I will review the choice of an adequate probability density over the family of available projections. I will show that this choice should be governed by the measurement and acquisition bases, but also by prior information on the signal to be recovered. A typical consequence of this analysis in MRI is that low frequencies should be sampled deterministically, while the remaining frequencies can be drawn at random.

Unfortunately, it is usually impossible to collect independent measurements, since most devices come with strong constraints such as continuity of the acquisition trajectory. The usual i.i.d. hypothesis on the drawings thus has to be relaxed to cover a wider field of applications. In the second part of my talk, I will focus on this particular problem. I will first propose a precise definition of variable density samplers as generic stochastic processes and illustrate the essential features characterizing their practical efficiency in sampling problems. I will then propose numerical algorithms to construct good variable density samplers, taking into account two different types of constraints:

Block constraints, meaning that the signal can only be measured by groups of measurements such as lines on the Fourier plane.

Kinematic constraints, meaning that the measurements have to lie on a parametric curve with constraints such as bounded first and second derivatives. The latter constraints are those that appear in MRI, but also in robot motion.
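The unconstrained and block-constrained cases are easy to prototype. Below is a minimal sketch of mine (the radius of the deterministic core and the decay exponent are free choices, not values from the talk): a 2D sampling mask drawn from a radially decaying density with a fully sampled low-frequency core, plus a block variant where whole Fourier lines are kept or dropped together.

```python
import math
import random

random.seed(3)
N = 64                     # N x N Fourier plane
center = N // 2

# Radially decaying sampling density, fully sampled near DC: the MRI heuristic
# of taking low frequencies deterministically and the rest at random.
def density(kx, ky):
    r = math.hypot(kx - center, ky - center)
    if r <= 4:
        return 1.0                      # deterministic low-frequency core
    return min(1.0, (4.0 / r) ** 2)     # polynomial radial decay

# Unconstrained variable density sampler: each frequency drawn independently.
mask = [[1 if random.random() < density(kx, ky) else 0
         for ky in range(N)] for kx in range(N)]

# Block-constrained variant: whole lines of the Fourier plane are kept or
# dropped together, each line drawn with the density at its point closest to DC.
line_mask = [1 if random.random() < density(kx, center) else 0 for kx in range(N)]

# Sanity check: the low-frequency disk is fully sampled, the rest only partially.
low = sum(mask[kx][ky] for kx in range(N) for ky in range(N)
          if math.hypot(kx - center, ky - center) <= 4)
total = sum(map(sum, mask))
print(low, total)
```

The kinematic case is the genuinely hard one, since the samples must then lie on a smooth trajectory with bounded derivatives rather than being drawn pointwise; that is where the algorithms of the talk come in.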

**Join the CompressiveSensing subreddit or the Google+ Community and post there !**

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
