This paper develops a novel framework for phase retrieval, a problem which arises in X-ray crystallography, diffraction imaging, astronomical imaging and many other applications. Our approach, called PhaseLift, combines multiple structured illuminations together with ideas from convex programming to recover the phase from intensity measurements, typically from the modulus of the diffracted wave. We demonstrate empirically that any complex-valued object can be recovered from the knowledge of the magnitude of just a few diffracted patterns by solving a simple convex optimization problem inspired by the recent literature on matrix completion. More importantly, we also demonstrate that our noise-aware algorithms are stable in the sense that the reconstruction degrades gracefully as the signal-to-noise ratio decreases. Finally, we introduce some theory showing that one can design very simple structured illumination patterns such that three diffracted figures uniquely determine the phase of the object we wish to recover.
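The matrix-completion connection can be made concrete: with X = x x*, each intensity measurement |⟨a, x⟩|² is a linear function tr(a a* X) of the lifted matrix, so phase retrieval becomes recovery of a rank-one positive semidefinite matrix from linear measurements. A minimal numerical check of this identity (the dimensions and vectors below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Unknown complex object and its rank-one "lift" X = x x^*
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
X = np.outer(x, x.conj())

# One (hypothetical) structured-illumination measurement vector
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# The measurement is quadratic in x ...
intensity = np.abs(np.vdot(a, x)) ** 2
# ... but linear in the lifted matrix X: |<a, x>|^2 = tr(a a^* X)
lifted = np.real(np.trace(np.outer(a, a.conj()) @ X))

assert np.isclose(intensity, lifted)
```

Recovery then amounts to finding a low-rank (ideally rank-one) positive semidefinite X consistent with the linear measurements, which becomes a convex program once the rank objective is relaxed to the trace norm.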
We consider the problem of phase retrieval, namely, recovery of a signal from the magnitude of its Fourier transform, or of any other linear transform. Due to the loss of the Fourier phase information, this problem is ill-posed. Therefore, prior information on the signal is needed in order to enable its recovery. In this work we consider the case in which the signal is known to be sparse, i.e., it consists of a small number of nonzero elements in an appropriate basis. We propose a fast local search method for recovering a sparse signal from measurements of its Fourier transform (or other linear transform) magnitude which we refer to as GESPAR: GrEedy Sparse PhAse Retrieval. Our algorithm does not require matrix lifting, unlike previous approaches, and therefore is potentially suitable for large-scale problems such as images. Simulation results indicate that GESPAR is fast and more accurate than existing techniques in a variety of settings.
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field, and as a reference that attempts to put some of the existing ideas in the perspective of practical applications.
* These three authors contributed equally to this work.
Coherent Diffractive Imaging (CDI) is an algorithmic imaging technique where intricate features are reconstructed from measurements of the freely-diffracting intensity pattern [1-7]. An important goal of such lensless-imaging methods is to study the structure of molecules (including many proteins) that cannot be crystallized [8]. Clearly, high spatial resolution and very fast measurement are key features for many applications of CDI. Ideally, one would want to perform CDI at the highest possible spatial resolution and in a single-shot measurement, such that the techniques could be applied to imaging at ultrafast rates. Undoubtedly, such capabilities would give rise to unprecedented possibilities. For example, observing molecules while they dissociate or undergo chemical reactions would considerably expand our knowledge in physics, chemistry and biology. However, the resolution of all current CDI techniques is limited by the diffraction limit, and therefore cannot resolve features smaller than one half the wavelength of the illuminating light [9], which is considered a fundamental limit in diffractive imaging [10]. Moreover, combining CDI with current sub-wavelength imaging techniques would not allow for rapid single-shot measurements able to follow ultrafast dynamics, because such techniques rely on multiple exposures, either through mechanical scanning (e.g., Scanning Near-Field Microscope [11, 12], scanning a sub-wavelength "hot spot" [13-15]), or by using ensemble-averaging over multiple experiments with fluorescent particles [16, 17]. Here, we present sparsity-based single-shot sub-wavelength resolution in coherent diffraction microscopy: algorithmic reconstruction of sub-wavelength features from far-field intensity patterns of sparse optical objects.
We experimentally demonstrate imaging of irregular and ordered arrangements of 100 nm features with an illumination wavelength of 532 nm (green light), thereby obtaining resolutions several times better than the diffraction limit. The sparsity-based sub-wavelength imaging concept relies on minimization of the number of degrees of freedom, and operates on a single-shot basis [18-20]. Hence, it is suitable for capturing a series of ultrafast single-exposure images, and subsequently improving their resolution considerably beyond the diffraction limit. This work paves the way for ultrafast sub-wavelength CDI, via phase retrieval at the sub-wavelength scale. For example, sparsity-based methods could considerably improve the CDI resolution with X-ray free-electron lasers [21], without hardware modification. Conceptually, sparsity-based methods can enhance the resolution in all imaging systems, optical and non-optical.
A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. 
Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate typical multicarrier communication channels.
Phase problems occur in many scientific disciplines, particularly those involving remote sensing using a wave field. Although there has been much interest in phase retrieval in optics and in imaging in general over the past decade, phase retrieval has a much longer history in x-ray crystallography, and a variety of powerful and practical techniques have been developed. The nature of crystallography means that crystallographic phase problems are distinct from those in other imaging contexts, but there are a number of commonalities. Here the principles of phase retrieval in crystallography are outlined and are compared and contrasted with phase retrieval in general imaging. Uniqueness results are discussed, but the emphasis is on phase-retrieval algorithms and on areas in which results in one discipline have contributed, and may yet contribute, to the other.
Light rays incident on a transparent object of uniform refractive index undergo deflections, which uniquely characterize the surface geometry of the object. Associated with each point on the surface is a deflection map (or spectrum) which describes the pattern of deflections in various directions. This article presents a novel method to efficiently acquire and reconstruct sparse deflection spectra induced by smooth object surfaces. To this end, we leverage the framework of Compressed Sensing (CS) in a particular implementation of a schlieren deflectometer, i.e., an optical system providing linear measurements of deflection spectra with programmable spatial light modulation patterns. In particular, we design those modulation patterns on the principle of spread spectrum CS for reducing the number of observations. Interestingly, the ability of our device to simultaneously observe the deflection spectra on a dense discretization of the object surface is related to a particular Multiple Measurement Vector (MMV) model. This scheme allows us to estimate both the noise power and the instrumental point spread function in a specific calibration procedure. We formulate the spectrum reconstruction task as the solving of a linear inverse problem regularized by an analysis sparsity prior using a translation invariant wavelet frame. Our results demonstrate the capability and advantages of using a CS based approach for deflectometric imaging both on simulated data and experimental deflectometric data. Finally, the paper presents an extension of our method showing how we can extract the main deflection direction in each point of the object surface from a few compressive measurements, without needing any costly reconstruction procedures. This compressive characterization is then confirmed with experimental results on simple plano-convex and multifocal intra-ocular lenses studying the evolution of the main deflection as a function of the object point location.
The theory of compressive sensing enables accurate and robust signal reconstruction from a number of measurements dictated by the signal's structure rather than its Fourier bandwidth. A key element of the theory is the role played by randomization. In particular, signals that are compressible in the time or space domain can be recovered from just a few randomly chosen Fourier coefficients. However, in some scenarios we can only observe the magnitude of the Fourier coefficients and not their phase. In this paper, we study the magnitude-only compressive sensing problem and in parallel with the existing theory derive sufficient conditions for accurate recovery. We also propose a new iterative recovery algorithm and study its performance. In the process, we develop a new algorithm for the phase retrieval problem that exploits a signal's compressibility rather than its support to recover it from Fourier transform magnitude measurements.
We consider the classical 1D phase retrieval problem. In order to overcome the difficulties associated with phase retrieval from measurements of the Fourier magnitude, we treat recovery from the magnitude of the short-time Fourier transform (STFT). We first show that the redundancy offered by the STFT enables unique recovery for arbitrary nonvanishing inputs, under mild conditions. An efficient algorithm for recovery of a sparse input from the STFT magnitude is then suggested, based on an adaptation of the recently proposed GESPAR algorithm. We demonstrate through simulations that using the STFT leads to improved performance over recovery from the oversampled Fourier magnitude with the same number of measurements.
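The redundancy that the uniqueness result relies on is easy to see by counting: overlapping windows produce far more STFT magnitude samples than there are signal unknowns. A toy count (window, hop, and signal length are arbitrary choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(8)
n, w, hop = 64, 16, 4                 # signal length, window length, hop size
x = rng.standard_normal(n)            # a generic (nonvanishing) input

# Windowed, overlapping frames, then the magnitude of each frame's DFT
win = np.hanning(w)
frames = np.array([x[i:i + w] * win for i in range(0, n - w + 1, hop)])
stft_mag = np.abs(np.fft.fft(frames, axis=1))

# Many more magnitude samples than unknowns: the STFT's redundancy
print(stft_mag.size, "STFT magnitudes for", n, "unknowns")
```

With these illustrative parameters there are 13 overlapping frames of 16 bins each, versus only 64 unknowns, which is the kind of oversampling the recovery guarantees exploit.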
This review covers latest developments in continuous-variable quantum-state tomography of optical fields and photons, placing a special accent on its practical aspects and applications in quantum information technology. Optical homodyne tomography is reviewed as a method of reconstructing the state of light in a given optical mode. A range of relevant practical topics are discussed, such as state-reconstruction algorithms (with emphasis on the maximum-likelihood technique), the technology of time-domain homodyne detection, mode matching issues, and engineering of complex quantum states of light. The paper also surveys quantum-state tomography for the transverse spatial state (spatial mode) of the field in the special case of fields containing precisely one photon.
The mathematical theory of compressed sensing (CS) asserts that one can acquire signals from measurements whose rate is much lower than the total bandwidth. Whereas the CS theory is now well developed, challenges concerning hardware implementations of CS-based acquisition devices, especially in optics, have only started being addressed. This paper presents an implementation of compressive sensing in fluorescence microscopy and its applications to biomedical imaging. Our CS microscope combines a dynamic structured wide-field illumination and a fast and sensitive single-point fluorescence detection to enable reconstructions of images of fluorescent beads, cells, and tissues with undersampling ratios (between the number of pixels and number of measurements) up to 32. We further demonstrate a hyperspectral mode and record images with 128 spectral channels and undersampling ratios up to 64, illustrating the potential benefits of CS acquisition for higher-dimensional signals, which typically exhibit extreme redundancy. Altogether, our results emphasize the interest of CS schemes for acquisition at a significantly reduced rate and point to some remaining challenges for CS fluorescence microscopy. Fluorescence microscopy is a fundamental tool in basic and applied biomedical research. Because of its optical sensitivity and molecular specificity, fluorescence imaging is employed in an increasing number of applications which, in turn, are continuously driving the development of advanced microscopy systems that provide imaging data with ever higher spatio-temporal resolution and multiplexing capabilities. In fluorescence microscopy, one can schematically distinguish two kinds of imaging approaches, differing by their excitation and detection modalities (1).
In wide-field (WF) microscopy, a large sample area is illuminated and the emitted light is recorded on a multidetector array, such as a CCD camera. In contrast, in raster scan (RS) microscopy, a point excitation is scanned through the sample and a point detector is used to detect the fluorescence signal at each position. While very distinct in their implementation and applications, these imaging modalities have in common that the acquisition is independent of the information content of the image. Rather, the number of measurements, either serial in RS or parallel in WF, is imposed by the Nyquist-Shannon theorem. This theorem states that the sampling frequency (namely the inverse of the image pixel size) must be twice the bandwidth of the signal, which is determined by the diffraction limit of the microscope lens equal to λ∕2NA (λ is the optical wavelength and NA the objective numerical aperture). Yet, most images, including those of biological interest, can be described by a number of parameters much lower than the total number of pixels. In the everyday world, a striking consequence of this compressibility is the ability of c
Low-rank matrices play a fundamental role in modeling and computational methods for signal processing and machine learning. In many applications where low-rank matrices arise, these matrices cannot be fully sampled or directly observed, and one encounters the problem of recovering the matrix given only incomplete and indirect observations. This paper provides an overview of modern techniques for exploiting low-rank structure to perform matrix recovery in these settings, providing a survey of recent advances in this rapidly-developing field. Specific attention is paid to the algorithms most commonly used in practice, the existing theoretical guarantees for these algorithms, and representative practical applications of these techniques.
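One representative member of the algorithm family surveyed here is projected gradient descent for matrix completion (iterative hard thresholding on the singular values), assuming the target rank r is known. A minimal sketch on an invented instance; the sizes, sampling rate, step size, and iteration count are illustrative, not prescriptive:

```python
import numpy as np

rng = np.random.default_rng(6)
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 ground truth
mask = rng.random((n, n)) < 0.5                                # ~half the entries observed

X = np.zeros((n, n))
for _ in range(500):
    X = X + mask * (M - X)                         # gradient step on observed entries
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]                # project back onto rank-r matrices

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

The rank-r projection via a truncated SVD is the expensive step; the convex alternative surveyed alongside it replaces this hard projection with singular-value soft-thresholding (nuclear-norm regularization).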
Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system's performance that supports the empirical observations.
Despite the rapid progress in optical imaging, most of the advanced microscopy modalities still require complex and costly setups that unfortunately limit their use beyond well equipped laboratories. In the meantime, microscopy in resource-limited settings has requirements significantly different from those encountered in advanced laboratories, and such imaging devices should be cost-effective, compact, lightweight and appropriately accurate and simple to be usable by minimally trained personnel. Furthermore, these portable microscopes should ideally be digitally integrated as part of a telemedicine network that connects various mobile health-care providers to a central laboratory or hospital. Toward this end, here we demonstrate a lensless on-chip microscope weighing ~46 grams with dimensions smaller than 4.2 cm × 4.2 cm × 5.8 cm that achieves sub-cellular resolution over a large field of view of ~24 mm². This compact and lightweight microscope is based on digital in-line holography and does not need any lenses, bulky optical/mechanical components or coherent sources such as lasers. Instead, it utilizes a simple light-emitting diode (LED) and a compact opto-electronic sensor-array to record lensless holograms of the objects, which then permits rapid digital reconstruction of regular transmission or differential interference contrast (DIC) images of the objects. Because this lensless incoherent holographic microscope has orders-of-magnitude improved light collection efficiency and is very robust to mechanical misalignments, it may offer a cost-effective tool especially for telemedicine applications involving various global health problems in resource-limited settings.
We address the problem of phase retrieval (PR) from quantized measurements. The goal is to reconstruct a signal from quadratic measurements encoded with finite precision, which is indeed the case in many practical applications. We develop a rank-1 projection algorithm that recovers the signal while ensuring consistency with the measurements; that is, the recovered signal, when encoded, must yield the same set of measurements that one started with. The rank-1 projection stems from the idea of lifting, originally proposed in the context of PhaseLift. The consistency criterion is enforced using a one-sided quadratic cost. We also determine the probability with which different vectors lead to the same set of quantized measurements, which makes it impossible to resolve them. Naturally, this probability depends on how correlated such vectors are, and on how coarsely or finely the measurements are quantized. The proposed algorithm is also capable of incorporating a sparsity constraint on the signal. An analysis of the cost function shows that it is bounded, both above and below, by quantities that depend on how correlated the estimate is with the ground truth. We also derive the Cramér-Rao lower bound (CRB) on the achievable reconstruction accuracy. A comparison with the state of the art shows that the proposed algorithm has higher reconstruction accuracy, coming within about 2 to 3 dB of the CRB. The margin in reconstruction SNR over competing algorithms is higher (about 5 to 6 dB) when the quantization is coarse.
We study the problem of recovering the phase from magnitude measurements; specifically, we wish to reconstruct a complex-valued signal x in C^n about which we have phaseless samples of the form y_r = |< a_r,x >|^2, r = 1,2,...,m (knowledge of the phase of these samples would yield a linear system). This paper develops a non-convex formulation of the phase retrieval problem as well as a concrete solution algorithm. In a nutshell, this algorithm starts with a careful initialization obtained by means of a spectral method, and then refines this initial estimate by iteratively applying novel update rules, which have low computational complexity, much like in a gradient descent scheme. The main contribution is that this algorithm is shown to rigorously allow the exact retrieval of phase information from a nearly minimal number of random measurements. Indeed, the sequence of successive iterates provably converges to the solution at a geometric rate so that the proposed scheme is efficient both in terms of computational and data resources. In theory, a variation on this scheme leads to a near-linear time algorithm for a physically realizable model based on coded diffraction patterns. We illustrate the effectiveness of our methods with various experiments on image data. Underlying our analysis are insights for the analysis of non-convex optimization schemes that may have implications for computational problems beyond phase retrieval.
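A toy sketch of the two-stage scheme this abstract describes: spectral initialization from the leading eigenvector of a data-weighted covariance matrix, followed by cheap gradient-style updates. The dimensions, step size, and iteration count below are illustrative choices, not the paper's constants:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 16, 160                        # signal length, number of random measurements

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)           # ground truth
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
y = np.abs(A @ x) ** 2                                             # phaseless samples

# Stage 1: spectral init, leading eigenvector of (1/m) sum_r y_r a_r a_r^*
Y = (A.conj().T * y) @ A / m
_, V = np.linalg.eigh(Y)              # eigh returns eigenvalues in ascending order
z = V[:, -1] * np.sqrt(y.mean())      # scale the eigenvector to the data's energy

# Stage 2: iterative refinement with low-cost gradient-style updates
mu, norm0 = 0.2, np.linalg.norm(z) ** 2
for _ in range(500):
    Az = A @ z
    grad = A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / m
    z = z - (mu / norm0) * grad

# Intensities cannot reveal a global phase, so measure error up to one
c = np.exp(1j * np.angle(np.vdot(z, x)))
rel_err = np.linalg.norm(x - c * z) / np.linalg.norm(x)
```

With roughly 10x oversampling by complex Gaussian measurements this sketch typically converges to the ground truth up to global phase, which is the behavior the paper's guarantees formalize.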
Phase retrieval seeks to recover a signal x from the amplitude |Ax| of linear measurements. We cast the phase retrieval problem as a non-convex quadratic program over a complex phase vector and formulate a tractable relaxation (called PhaseCut) similar to the classical MaxCut semidefinite program. We solve this problem using a provably convergent block coordinate descent algorithm whose structure is similar to that of the original Gerchberg-Saxton greedy algorithm, where each iteration is a matrix-vector product. Numerical results show the performance of this approach on three different phase retrieval problems, in comparison with greedy phase retrieval algorithms and matrix completion formulations.
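For reference, the Gerchberg-Saxton-style baseline mentioned above, in its generic linear-measurement form, alternates between imposing the measured amplitudes and projecting back onto the range of A. A minimal sketch on an invented random instance (the PhaseCut semidefinite relaxation itself is not implemented here):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 16, 64
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = np.abs(A @ x)                     # measured amplitudes |Ax|

Ainv = np.linalg.pinv(A)              # precomputed least-squares inverse of A
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # random start
res = []
for _ in range(300):
    u = b * np.exp(1j * np.angle(A @ z))   # keep phases, impose amplitudes
    z = Ainv @ u                           # least-squares step back to signal space
    res.append(np.linalg.norm(np.abs(A @ z) - b))
```

Each alternation is a projection onto a closest point, so the amplitude residual in `res` is non-increasing; the drawback, which PhaseCut's convex relaxation addresses, is that this greedy iteration can stagnate at non-global fixed points.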
We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging.
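The two ingredients named above, a low-resolution minimum-norm initial estimate and weighted-norm refinement with weights taken from the preceding iterate, can be sketched compactly as the basic recursion x_{k+1} = W_k (A W_k)^+ b with W_k = diag(|x_k|). The underdetermined instance below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 15, 30                           # underdetermined: more unknowns than equations
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 11, 25]] = [1.5, -2.0, 1.0]  # sparse "localized energy" ground truth
b = A @ x_true

# Low-resolution initial estimate: the minimum-norm solution
x = np.linalg.pinv(A) @ b
# Refinement: weighted minimum-norm steps, weights from the preceding iterate
for _ in range(50):
    W = np.diag(np.abs(x))
    x = W @ np.linalg.pinv(A @ W) @ b
```

Entries that shrink toward zero receive ever-smaller weights and stay suppressed, which is how the iteration concentrates the energy onto a sparse support, consistent with the convergence analysis described in the abstract.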