Parallel MRI (pMRI) and compressed sensing MRI (CS-MRI) have been considered as two distinct reconstruction problems. Inspired by recent k-space interpolation methods, an annihilating filter based low-rank Hankel matrix approach (ALOHA) is proposed as a general framework for sparsity-driven k-space interpolation that unifies pMRI and CS-MRI. Specifically, our framework is based on the fundamental duality between transform-domain sparsity in the primary space and the low-rankness of a weighted Hankel matrix in the reciprocal space, which converts pMRI and CS-MRI into a k-space interpolation problem solved by structured matrix completion. Using theoretical results from the recent compressed sensing literature, we show that the required sampling rates for ALOHA may achieve the optimal rate. Experimental results with in vivo data for single- and multi-coil imaging as well as dynamic imaging confirmed that the proposed method outperforms state-of-the-art pMRI and CS-MRI methods.
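A minimal sketch of the core ALOHA ingredient for a single 1-D k-space line: lift the line into a Hankel matrix, enforce low rank, and re-insert the measured samples. The alternating-projection loop below stands in for the paper's weighted (wavelet-domain) pyramidal ADMM machinery, and all names, sizes, and parameters are illustrative.

```python
import numpy as np

def hankel_lift(x, p):
    """Lift a length-n vector into an (n - p + 1) x p Hankel matrix."""
    n = len(x)
    return np.array([x[i:i + p] for i in range(n - p + 1)])

def hankel_unlift(H, n):
    """Map a matrix back to a vector by averaging its anti-diagonals."""
    num = np.zeros(n, dtype=complex)
    cnt = np.zeros(n)
    rows, p = H.shape
    for i in range(rows):
        num[i:i + p] += H[i]
        cnt[i:i + p] += 1
    return num / cnt

def aloha_line(k_obs, mask, rank, p=24, n_iter=300):
    """Interpolate missing k-space samples of one line by alternating a
    rank-r truncation of the Hankel lifting with re-insertion of the data."""
    x = k_obs.astype(complex).copy()
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(hankel_lift(x, p), full_matrices=False)
        H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
        x = hankel_unlift(H_low, len(x))
        x[mask] = k_obs[mask]                          # keep the measured samples
    return x

rng = np.random.default_rng(0)
n, t = 64, np.arange(64)
k_full = sum(np.exp(2j * np.pi * f * t) for f in (0.11, 0.34, 0.77))
mask = rng.random(n) < 0.5
mask[:4] = mask[-4:] = True          # a few edge samples help the lifting
k_hat = aloha_line(np.where(mask, k_full, 0), mask, rank=3)
print(np.linalg.norm(k_hat - k_full) / np.linalg.norm(k_full))
```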
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field and as a reference for researchers that attempts to put some of the existing ideas in perspective of practical applications.
This paper develops a mathematical theory of super-resolution. Broadly speaking, super-resolution is the problem of recovering the fine details of an object-the high end of its spectrum-from coarse scale information only-from samples at the low end of the spectrum. Suppose we have many point sources at unknown locations in [0, 1] and with unknown complex-valued amplitudes. We only observe Fourier samples of this object up until a frequency cutoff f_c. We show that one can super-resolve these point sources with infinite precision-i.e. recover the exact locations and amplitudes-by solving a simple convex optimization problem, which can essentially be reformulated as a semidefinite program. This holds provided that the distance between sources is at least 2/f_c. This result extends to higher dimensions and other models. In one dimension for instance, it is possible to recover a piecewise smooth function by resolving the discontinuity points with infinite precision as well. We also show that the theory and methods are robust to noise. In particular, in the discrete setting we develop some theoretical results explaining how the accuracy of the super-resolved signal is expected to degrade when both the noise level and the super-resolution factor vary.
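In the notation of the abstract, with point sources x = sum_j a_j delta_{t_j} and observed low-frequency Fourier samples y_k, the recovery program can be stated compactly; this is a restatement of the abstract's convex program, nothing beyond it:

```latex
\min_{\tilde{x}} \; \|\tilde{x}\|_{\mathrm{TV}}
\quad \text{subject to} \quad
\int_{0}^{1} e^{-i 2\pi k t}\, \tilde{x}(\mathrm{d}t) = y_k ,
\qquad |k| \le f_c ,
```

with exact recovery guaranteed whenever the minimum separation min_{j != k} |t_j - t_k|, measured on the torus, is at least 2/f_c.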
Statistical inference and information processing of high-dimensional data often require efficient and accurate estimation of their second-order statistics. With rapidly changing data, limited processing power and storage at the acquisition devices, it is desirable to extract the covariance structure from a single pass over the data and a small number of stored measurements. In this paper, we explore a quadratic (or rank-one) measurement model which imposes minimal memory requirements and low computational complexity during the sampling process, and is shown to be optimal in preserving various low-dimensional covariance structures. Specifically, four popular structural assumptions on covariance matrices, namely low rank, Toeplitz low rank, sparsity, and jointly rank-one and sparse structure, are investigated, while recovery is achieved via convex relaxation paradigms for the respective structure. The proposed quadratic sampling framework has a variety of potential applications including streaming data processing, high-frequency wireless communication, phase space tomography and phase retrieval in optics, and non-coherent subspace detection. Our method admits universally accurate covariance estimation in the absence of noise, as soon as the number of measurements exceeds the information-theoretic limits. We also demonstrate the robustness of this approach against noise and imperfect structural assumptions. Our analysis is established upon a novel notion called the mixed-norm restricted isometry property (RIP-ℓ2/ℓ1), as well as the conventional RIP-ℓ2/ℓ2 for near-isotropic and bounded measurements. In addition, our results improve upon the best-known phase retrieval guarantees (for both dense and sparse signals) using PhaseLift, with a significantly simpler approach.
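A hedged cvxpy sketch of the quadratic measurement model and of one of the four relaxations (trace minimization for a low-rank covariance); the sampler, sizes, and the accuracy check are illustrative, and the other structures in the abstract swap in different convex penalties:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r, m = 20, 2, 150                        # dimension, true rank, number of sketches
L = rng.standard_normal((n, r))
Sigma = L @ L.T                             # low-rank covariance to recover
A = rng.standard_normal((m, n))             # measurement vectors a_i
y = np.einsum('mi,ij,mj->m', A, Sigma, A)   # quadratic sketches y_i = a_i^T Sigma a_i

S = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(S)),
                  [cp.sum(cp.multiply(A @ S, A), axis=1) == y])
prob.solve()
print(np.linalg.norm(S.value - Sigma) / np.linalg.norm(Sigma))
```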
We introduce and analyze an abstract framework, and corresponding method, for compressed sensing in infinite dimensions. This extends the existing theory from signals in finite-dimensional vector spaces to the case of separable Hilbert spaces. We explain why such a new theory is necessary, and demonstrate that existing finite-dimensional techniques are ill-suited for solving a number of important problems. This work stems from recent developments in generalized sampling theorems for classical (Nyquist rate) sampling that allow for reconstructions in arbitrary bases. The main conclusion of this paper is that one can extend these ideas to allow for significant subsampling of sparse or compressible signals. The key to these developments is the introduction of two new concepts in sampling theory, the stable sampling rate and the balancing property, which specify how to appropriately discretize the fundamentally infinite-dimensional reconstruction problem.
We consider the problem of super-resolving the line spectrum of a multisinusoidal signal from a finite number of samples, some of which may be completely corrupted. Measurements of this form can be modeled as an additive mixture of a sinusoidal and a sparse component. We propose to demix the two components and super-resolve the spectrum of the multisinusoidal signal by solving a convex program. Our main theoretical result is that-up to logarithmic factors-this approach is guaranteed to be successful with high probability for a number of spectral lines that is linear in the number of measurements, even if a constant fraction of the data are outliers. The result holds under the assumption that the phases of the sinusoidal and sparse components are random and the line spectrum satisfies a minimum-separation condition. We show that the method can be implemented via semidefinite programming, explain how to adapt it in the presence of dense perturbations, and explore its connection to atomic-norm denoising. In addition, we propose a fast greedy demixing method which provides good empirical results when coupled with a local nonconvex-optimization step.
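One natural way to write the demixing program the abstract describes, with Omega the set of observed sample indices, ||.||_A the atomic norm promoting spectral sparsity of the sinusoidal component g, and lambda a trade-off parameter (the paper's exact normalization may differ):

```latex
\min_{g,\, z} \; \|g\|_{\mathcal{A}} + \lambda \|z\|_{1}
\quad \text{subject to} \quad
g_j + z_j = y_j , \qquad j \in \Omega .
```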
The low-tubal-rank tensor model has been recently proposed for real-world multidimensional data. In this paper, we study the low-tubal-rank tensor completion problem, i.e., to recover a third-order tensor by observing a subset of its elements selected uniformly at random. We propose a fast iterative algorithm, called Tubal-Alt-Min, that is inspired by a similar approach for low-rank matrix completion. The unknown low-tubal-rank tensor is represented as the product of two much smaller tensors with the low-tubal-rank property being automatically incorporated, and Tubal-Alt-Min alternates between estimating those two tensors using tensor least squares minimization. First, we note that tensor least squares minimization is different from its matrix counterpart and nontrivial as the circular convolution operator of the low-tubal-rank tensor model is intertwined with the sub-sampling operator. Second, the theoretical performance guarantee is challenging since Tubal-Alt-Min is iterative and nonconvex in nature. We prove that 1) Tubal-Alt-Min guarantees exponential convergence to the global optimum, and 2) for an n × n × k tensor with tubal-rank r ≪ n, the required sampling complexity is O(nr^2 k log^3 n) and the computational complexity is O(n^2 rk^2 log^2 n). Third, on both synthetic data and real-world video data, evaluation results show that compared with tensor-nuclear-norm minimization (TNN-ADMM), Tubal-Alt-Min improves the recovery error dramatically (by orders of magnitude). It is estimated that Tubal-Alt-Min converges at an exponential rate 10^(-0.4423 Iter), where Iter denotes the number of iterations, which is much faster than TNN-ADMM's 10^(-0.0332 Iter), and the running time can be accelerated by more than 5 times for a 200 × 200 × 20 tensor.
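The circular-convolution structure mentioned above is easiest to see in code: the t-product diagonalizes under an FFT along the third mode, which is also why the tensor least-squares step differs from its matrix counterpart. A minimal numpy sketch with illustrative sizes:

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x k) and B (n2 x n3 x k): FFT along the tube
    dimension, one matrix product per frequency slice, inverse FFT."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijt,jlt->ilt', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))

# Tubal-Alt-Min models the unknown tensor as t_product(X, Y) with X: n x r x k
# and Y: r x n x k, and alternates tensor least-squares updates of X and Y.
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((5, 2, 4)), rng.standard_normal((2, 5, 4))
print(t_product(X, Y).shape)   # (5, 5, 4): a tubal-rank-2 tensor
```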
In this paper, we consider the Tensor Robust Principal Component Analysis (TRPCA) problem, which aims to exactly recover the low-rank and sparse components from their sum. Our model is based on the recently proposed tensor-tensor product (or t-product) [15]. Induced by the t-product, we first rigorously deduce the tensor spectral norm, tensor nuclear norm, and tensor average rank, and show that the tensor nuclear norm is the convex envelope of the tensor average rank within the unit ball of the tensor spectral norm. These definitions, their relationships, and their properties are consistent with the matrix case. Equipped with the new tensor nuclear norm, we then solve the TRPCA problem by solving a convex program and provide a theoretical guarantee for exact recovery. Our TRPCA model and recovery guarantee include matrix RPCA as a special case. Numerical experiments verify our results, and the applications to image recovery and background modeling problems demonstrate the effectiveness of our method.
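The workhorse behind the tensor nuclear norm is singular value thresholding applied slice-wise in the Fourier domain. A hedged sketch of that proximal step follows; the full TRPCA solver wraps it in an ADMM loop that also soft-thresholds the sparse component, and the threshold convention varies with the FFT normalization, so this is a sketch rather than the paper's exact operator:

```python
import numpy as np

def t_svt(T, tau):
    """Slice-wise singular value thresholding in the Fourier domain: the
    proximal step associated with the t-product-induced tensor nuclear norm."""
    Tf = np.fft.fft(T, axis=2)
    out = np.empty_like(Tf)
    for t in range(T.shape[2]):
        U, s, Vh = np.linalg.svd(Tf[:, :, t], full_matrices=False)
        out[:, :, t] = (U * np.maximum(s - tau, 0.0)) @ Vh  # shrink each slice
    return np.real(np.fft.ifft(out, axis=2))
```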
A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method in noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area, such as linear programming and matching pursuit, are also widely used in compressed sensing. Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate typical multicarrier communication channels.
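Since the survey repeatedly returns to Prony's method and the annihilating filter, a compact noiseless sketch may help fix ideas: the filter that annihilates a sum of s cisoids spans the null space of a Hankel matrix of samples, and its polynomial roots reveal the frequencies (Pisarenko and MUSIC replace the null vector with eigen-decompositions when noise is present). Sizes and frequencies below are illustrative:

```python
import numpy as np

def prony_freqs(x, s):
    """Recover s frequencies from samples x[n] = sum_k c_k exp(2j*pi*f_k*n)."""
    n = len(x)
    # Rows x[i:i+s+1]: the annihilating filter h satisfies H @ h = 0.
    H = np.array([x[i:i + s + 1] for i in range(n - s)])
    _, _, Vh = np.linalg.svd(H)
    h = Vh[-1].conj()                    # null vector = filter coefficients
    roots = np.roots(h[::-1])            # zeros sit at exp(2j*pi*f_k)
    return np.sort(np.angle(roots) / (2 * np.pi) % 1)

f_true = np.array([0.13, 0.37, 0.72])
n = np.arange(32)
x = sum(np.exp(2j * np.pi * f * n) for f in f_true)
print(prony_freqs(x, 3))                 # ~ [0.13, 0.37, 0.72]
```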
We address the inverse problem that arises in compressed sensing of a low-rank matrix. Our approach is to pose the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition that provides an analogy between parsimonious representations of a sparse vector and a low-rank matrix. Efficient greedy algorithms to solve the inverse problem for the vector case are extended to the matrix case through this atomic decomposition. In particular, we propose an efficient and guaranteed algorithm named ADMiRA that extends CoSaMP, its analogue for the vector case. The performance guarantee is given in terms of the rank-restricted isometry property and bounds both the number of iterations and the error in the approximate solution for the general case where the solution is approximately low-rank and the measurements are noisy. With a sparse measurement operator such as the one arising in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. The numerical experiments for the matrix completion problem show that, although the measurement operator in this case does not satisfy the rank-restricted isometry property, ADMiRA is a competitive algorithm for matrix completion.
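A toy rendering of the ADMiRA iteration for matrix completion, with atoms kept as explicit rank-one matrices: take 2r new atoms from the SVD of the proxy, merge them with the current r, least-squares fit on the span, then truncate back to rank r. Bookkeeping, stopping rules, and the paper's scalability shortcuts are omitted, and all sizes are illustrative:

```python
import numpy as np

def admira(y, mask, shape, r, n_iter=50):
    """Toy ADMiRA for matrix completion: y holds the observed entries X_true[mask]."""
    m, n = shape
    X = np.zeros((m, n))
    atoms = []
    for _ in range(n_iter):
        R = np.zeros((m, n))
        R[mask] = y - X[mask]                       # proxy A*(y - A(X))
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        new = [np.outer(U[:, i], Vt[i]) for i in range(2 * r)]   # 2r new atoms
        merged = atoms + new
        B = np.stack([a[mask] for a in merged], axis=1)
        w, *_ = np.linalg.lstsq(B, y, rcond=None)   # least squares on the span
        X = sum(wi * a for wi, a in zip(w, merged))
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]             # truncate back to rank r
        atoms = [np.outer(U[:, i], Vt[i]) for i in range(r)]
    return X

rng = np.random.default_rng(0)
M = rng.standard_normal((40, 4)) @ rng.standard_normal((4, 40))
mask = rng.random((40, 40)) < 0.5
X_hat = admira(M[mask], mask, M.shape, r=4)
print(np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```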
Purpose: MR parameter mapping is a clinically valuable MR imaging technique. However, increased scan time makes it difficult to use routinely in clinical practice. This article aims at developing an accelerated MR parameter mapping technique using an annihilating filter based low-rank Hankel matrix approach (ALOHA). Theory: When a dynamic sequence can be sparsified using spatial wavelet and temporal Fourier transforms, the result is a rank-deficient Hankel structured matrix constructed from weighted k-t measurements. ALOHA then utilizes a low-rank matrix completion algorithm combined with a multiscale pyramidal decomposition to estimate the missing k-space data. Methods: Spin-echo inversion recovery and multiecho spin-echo pulse sequences for T1 and T2 mapping, respectively, were redesigned to perform undersampling along the phase-encoding direction according to a Gaussian distribution. The missing k-space data are reconstructed using ALOHA. Then, the parameter maps are constructed using nonlinear regression. Results: Experimental results confirmed that ALOHA outperformed the existing compressed sensing algorithms. Compared with the existing methods, the reconstruction errors were scattered throughout the entire images rather than appearing as systematic distortions along edges in the images and parameter maps. Conclusion: Given that many diagnostic errors are caused by the systematic distortion of images, ALOHA may have great potential for clinical applications.
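A sketch of the variable-density sampling described in Methods: phase-encoding lines drawn according to a Gaussian density centered on low frequencies, with a handful of center lines always kept. The width, acceleration factor, and center-line count here are illustrative, not the paper's acquisition protocol:

```python
import numpy as np

def gaussian_pe_mask(n_pe, n_keep, sigma_frac=0.15, n_center=8, seed=0):
    """Select n_keep of n_pe phase-encoding lines with a Gaussian density."""
    rng = np.random.default_rng(seed)
    k = np.arange(n_pe) - n_pe // 2                  # centered k-space index
    p = np.exp(-0.5 * (k / (sigma_frac * n_pe)) ** 2)
    center = np.abs(k) < n_center // 2
    p[center] = 0                                    # center lines kept for free
    p /= p.sum()
    picks = rng.choice(n_pe, size=n_keep - center.sum(), replace=False, p=p)
    mask = np.zeros(n_pe, dtype=bool)
    mask[picks] = True
    mask[center] = True
    return mask

mask = gaussian_pe_mask(256, 64)      # 4x acceleration
print(mask.sum(), "of 256 lines kept")
```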
This work investigates the problem of estimating the frequency components of a mixture of s complex sinusoids from a random subset of n regularly spaced samples. Unlike previous work in compressed sensing, the frequencies are not assumed to lie on a grid, but can assume any values in the normalized frequency domain [0, 1]. An atomic norm minimization approach is proposed to exactly recover the unobserved samples and identify the unknown frequencies, which is then reformulated as an exact semidefinite program. Even with this continuous dictionary, it is shown that O(s log s log n) random samples are sufficient to guarantee exact frequency localization with high probability, provided the frequencies are well separated. Extensive numerical experiments are performed to illustrate the effectiveness of the proposed method.
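A hedged cvxpy rendering of the semidefinite program: embed the sample vector in the last column of a PSD matrix whose leading block is Hermitian Toeplitz, pin the observed entries, and minimize the standard atomic-norm objective. Sizes and frequencies are illustrative; the unknown frequencies can be read off the completed samples afterwards, e.g., with a Prony or MUSIC step as sketched earlier.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 32, 16
f_true = np.array([0.11, 0.38, 0.71])
x_true = sum(np.exp(2j * np.pi * fk * np.arange(n)) for fk in f_true)
obs = np.sort(rng.choice(n, size=m, replace=False))

# Z = [[T(u), x], [x^H, t]] >= 0 with T(u) Hermitian Toeplitz; the atomic norm
# of x equals min (1/2)(u_1 + t) over such completions.
Z = cp.Variable((n + 1, n + 1), hermitian=True)
constraints = [Z >> 0]
for i in range(1, n):                       # constant main diagonal
    constraints.append(Z[i, i] == Z[0, 0])
for k in range(1, n):                       # Toeplitz structure of the top block
    for i in range(1, n - k):
        constraints.append(Z[i, i + k] == Z[0, k])
constraints += [Z[j, n] == x_true[j] for j in obs]   # observed samples pinned
prob = cp.Problem(cp.Minimize(0.5 * cp.real(Z[0, 0] + Z[n, n])), constraints)
prob.solve(solver=cp.SCS)
x_hat = Z.value[:n, n]
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```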
Recently, deep learning approaches with various network architectures have achieved significant performance improvement over existing iterative reconstruction methods in various imaging problems. However, it is still unclear why these deep learning architectures work for specific inverse problems. To address these issues, here we show that the long-searched-for missing link is the convolution framelets for representing a signal by convolving local and non-local bases. The convolution framelets were originally developed to generalize the theory of low-rank Hankel matrix approaches for inverse problems, and this paper further extends the idea so that we can obtain a deep neural network using multilayer convolution framelets with perfect reconstruction (PR) under rectified linear unit (ReLU) nonlinearity. Our analysis also shows that the popular deep network components such as residual block, redundant filter channels, and concatenated ReLU (CReLU) do indeed help to achieve the PR, while the pooling and unpooling layers should be augmented with high-pass branches to meet the PR condition. Moreover, by changing the number of filter channels and bias, we can control the shrinkage behaviors of the neural network. This discovery leads us to propose a novel theory for deep convolutional framelets neural networks. Using numerical experiments with various inverse problems, we demonstrated that our deep convolution framelets network shows consistent improvement over existing deep architectures. This discovery suggests that the success of deep learning is not from a magical power of a black box, but rather comes from the power of a novel signal representation using a non-local basis combined with a data-driven local basis, which is indeed a natural extension of classical signal processing theory.
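One concrete piece of the analysis is easy to isolate: concatenated ReLU preserves all the information in its input, since the positive and negative parts can be recombined exactly, which is one of the ingredients the paper identifies for perfect reconstruction. A minimal numpy illustration, not the paper's full framelet construction:

```python
import numpy as np

def crelu(x):
    """Concatenated ReLU: keep positive and negative parts as separate channels."""
    return np.concatenate([np.maximum(x, 0), np.maximum(-x, 0)], axis=-1)

def crelu_inverse(z):
    """Exact recombination: x = ReLU(x) - ReLU(-x)."""
    pos, neg = np.split(z, 2, axis=-1)
    return pos - neg

x = np.random.default_rng(0).standard_normal(8)
assert np.allclose(crelu_inverse(crelu(x)), x)   # no information is lost
```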
Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system's performance that supports the empirical observations.
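A toy simulation of the acquisition chain described above: multiply the Nyquist-rate signal by a random ±1 chipping sequence, then integrate and dump down to the low output rate. Sizes are illustrative, and the reconstruction (a sparse solver applied to the resulting linear system) is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
W, R, K = 1024, 64, 5                     # bandlimit (Hz), output rate, # of tones
n = np.arange(W)                          # one second at the Nyquist rate
freqs = rng.choice(W // 2, size=K, replace=False)
x = sum(np.cos(2 * np.pi * f * n / W) for f in freqs)   # few significant tones

chips = rng.choice([-1.0, 1.0], size=W)   # random demodulation sequence
mixed = chips * x                          # smears the tones across the spectrum
y = mixed.reshape(R, W // R).sum(axis=1)   # integrate-and-dump to R samples/sec
print(y.shape)                             # (64,) low-rate measurements
```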
After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations and appealing applications. Alongside this approach, there is an analysis counterpart model, which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model did not get similar attention, and its understanding today is shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis one. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments.
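The two priors contrast compactly in code. A hedged cvxpy sketch with generic placeholders A (measurement operator), D (redundant synthesis dictionary), and Om (analysis operator); y here is arbitrary, since the point is only the shape of the two programs:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, p = 30, 60, 80
A = rng.standard_normal((m, n))        # measurement operator
D = rng.standard_normal((n, p))        # synthesis dictionary (redundant, n < p)
Om = rng.standard_normal((p, n))       # analysis operator
y = A @ rng.standard_normal(n)
eps = 0.1

# Synthesis model: the signal is x = D @ alpha with a sparse alpha.
alpha = cp.Variable(p)
syn = cp.Problem(cp.Minimize(cp.norm1(alpha)),
                 [cp.norm(y - A @ D @ alpha) <= eps])

# Analysis model: the analysis coefficients Om @ x themselves are sparse.
x = cp.Variable(n)
ana = cp.Problem(cp.Minimize(cp.norm1(Om @ x)),
                 [cp.norm(y - A @ x) <= eps])

syn.solve(); ana.solve()
print(syn.value, ana.value)
```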
Recent years have seen a flurry of activities in designing provably efficient nonconvex procedures for solving statistical estimation problems. Due to the highly nonconvex nature of the empirical loss, state-of-the-art procedures often require proper regularization (e.g. trimming, regularized cost, projection) in order to guarantee fast convergence. For vanilla procedures such as gradient descent, however, prior theory either recommends highly conservative learning rates to avoid overshooting, or completely lacks performance guarantees. This paper uncovers a striking phenomenon in nonconvex optimization: even in the absence of explicit regularization, gradient descent enforces proper regularization implicitly under various statistical models. In fact, gradient descent follows a trajectory staying within a basin that enjoys nice geometry, consisting of points incoherent with the sampling mechanism. This "implicit regularization" feature allows gradient descent to proceed in a far more aggressive fashion without overshooting, which in turn results in substantial computational savings. Focusing on three fundamental statistical estimation problems, i.e. phase retrieval, low-rank matrix completion, and blind deconvolution, we establish that gradient descent achieves near-optimal statistical and computational guarantees without explicit regularization. In particular, by marrying statistical modeling with generic optimization theory, we develop a general recipe for analyzing the trajectories of iterative algorithms via a leave-one-out perturbation argument. As a byproduct, for noisy matrix completion, we demonstrate that gradient descent achieves near-optimal error control-measured entrywise and by the spectral norm-which might be of independent interest.
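For the phase-retrieval instance, the "vanilla" procedure the abstract analyzes is short enough to sketch: spectral initialization followed by plain gradient steps on the nonconvex quartic loss, with no trimming or explicit regularization. Real Gaussian designs; the step size and problem sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 400
x_star = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = (A @ x_star) ** 2                      # phaseless measurements

# Spectral initialization: top eigenvector of (1/m) sum_i y_i a_i a_i^T.
Y = (A.T * y) @ A / m
_, V = np.linalg.eigh(Y)
x = V[:, -1] * np.sqrt(y.mean())

# Vanilla gradient descent on f(x) = (1/4m) sum_i ((a_i^T x)^2 - y_i)^2.
eta = 0.1 / np.linalg.norm(x) ** 2
for _ in range(500):
    Ax = A @ x
    x -= eta * (A.T @ ((Ax ** 2 - y) * Ax)) / m

dist = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(dist / np.linalg.norm(x_star))       # small, up to the global sign
```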
This paper presents an account of the current state of sampling, 50 years after Shannon's formulation of the sampling theorem. The emphasis is on regular sampling, where the grid is uniform. This topic has benefited from a strong research revival during the past few years, thanks in part to the mathematical connections that were made with wavelet theory. To introduce the reader to the modern, Hilbert-space formulation, we reinterpret Shannon's sampling procedure as an orthogonal projection onto the subspace of band-limited functions. We then extend the standard sampling paradigm for a representation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets. Practically, this allows for simpler-and possibly more realistic-interpolation models, which can be used in conjunction with a much wider class of (anti-aliasing) prefilters that are not necessarily ideal low-pass. We summarize and discuss the results available for the determination of the approximation error and of the sampling rate when the input of the system is essentially arbitrary; e.g., nonbandlimited. We also review variations of sampling that can be understood from the same unifying perspective. These include wavelets, multiwavelets, Papoulis generalized sampling, finite elements, and frames. Irregular sampling and radial basis functions are briefly mentioned.
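The Hilbert-space reading of Shannon sampling above, reconstruction as an orthogonal projection onto the bandlimited subspace, can be illustrated with the DFT standing in for the continuous-time picture (an illustration of the projection idea only, not of the paper's shift-invariant generalizations):

```python
import numpy as np

def project_bandlimited(x, cutoff):
    """Orthogonal projection onto the subspace of signals whose DFT is
    supported on |k| <= cutoff (a discrete stand-in for ideal low-pass)."""
    X = np.fft.fft(x)
    k = np.fft.fftfreq(len(x), d=1.0 / len(x))   # integer frequency index
    X[np.abs(k) > cutoff] = 0
    return np.real(np.fft.ifft(X))

x = np.random.default_rng(0).standard_normal(128)   # not bandlimited
xb = project_bandlimited(x, 16)
# Idempotent, as an orthogonal projection should be:
assert np.allclose(project_bandlimited(xb, 16), xb)
```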
This paper introduces a simple and very general theory of compressive sensing. In this theory, the sensing mechanism simply selects sensing vectors independently at random from a probability distribution F; it includes all models-e.g. Gaussian, frequency measurements-discussed in the literature, but also provides a framework for new measurement strategies as well. We prove that if the probability distribution F obeys a simple incoherence property and an isotropy property, one can faithfully recover approximately sparse signals from a minimal number of noisy measurements. The novelty is that our recovery results do not require the restricted isometry property (RIP)-they make use of a much weaker notion-or a random model for the signal. As an example, the paper shows that a signal with s nonzero entries can be faithfully recovered from about s log n Fourier coefficients that are contaminated with noise.
We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques when applied to very sparse signals, and reconstruction accuracy of the same order as that of LP optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean squared error of the reconstruction is upper bounded by constant multiples of the measurement and signal perturbation energies.
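A numpy sketch of the iteration described above: maintain a K-element support estimate, expand it with the K best correlations of the residual, then prune back by least squares (noiseless demo; the stopping rule here is a simplification of the paper's):

```python
import numpy as np

def subspace_pursuit(A, y, K, n_iter=20):
    """Sketch of subspace pursuit for y = A @ x with K-sparse x."""
    m, n = A.shape
    T = np.argsort(np.abs(A.T @ y))[-K:]            # initial support
    x_T, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
    r = y - A[:, T] @ x_T
    for _ in range(n_iter):
        T_exp = np.union1d(T, np.argsort(np.abs(A.T @ r))[-K:])
        x_exp, *_ = np.linalg.lstsq(A[:, T_exp], y, rcond=None)
        T = T_exp[np.argsort(np.abs(x_exp))[-K:]]   # keep K largest coefficients
        x_T, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
        r_new = y - A[:, T] @ x_T
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            break                                    # residual stopped improving
        r = r_new
    x = np.zeros(n)
    x[T] = x_T
    return x

rng = np.random.default_rng(0)
m, n, K = 60, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n); x0[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
print(np.linalg.norm(subspace_pursuit(A, A @ x0, K) - x0))
```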
Regularized regression problems are ubiquitous in statistical modeling, signal processing, and machine learning. Sparse regression in particular plays an important role in scientific model discovery, including compressed sensing applications, variable selection, and high-dimensional analysis. We propose a broad framework for sparse relaxed regularized regression, called SR3. The key idea is to solve a relaxation of the regularized problem, which has advantages over the state of the art: (1) solutions of the relaxed problem are superior with respect to errors, false positives, and conditioning; (2) the relaxation allows extremely fast algorithms for both convex and nonconvex formulations; and (3) the methods apply to composite regularizers such as total variation (TV) and its nonconvex variants. We demonstrate the advantages of SR3 (computational efficiency, higher accuracy, faster convergence, greater flexibility) on a range of regularized regression problems with synthetic and real data, including applications in compressed sensing, LASSO, matrix completion, TV regularization, and group sparsity. To promote reproducible research, we also provide a companion MATLAB package that implements these examples.
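A minimal numpy sketch of the relaxation idea for the LASSO instance of SR3: alternate an x-update, a fixed and well-conditioned least-squares solve, with a w-update, the proximal (soft-thresholding) step. Parameter values are illustrative, and the MATLAB package accompanying the paper covers the general composite-regularizer case:

```python
import numpy as np

def sr3_lasso(A, y, lam=0.1, nu=1.0, n_iter=200):
    """SR3 for min_{x,w} 0.5||Ax - y||^2 + lam*||w||_1 + (1/(2nu))*||x - w||^2."""
    m, n = A.shape
    w = np.zeros(n)
    G = A.T @ A + np.eye(n) / nu     # x-update solves a fixed normal system
    Aty = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(G, Aty + w / nu)
        w = np.sign(x) * np.maximum(np.abs(x) - lam * nu, 0)  # prox of lam*nu*||.||_1
    return x, w

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x0 = np.zeros(200); x0[rng.choice(200, 10, replace=False)] = 1.0
x, w = sr3_lasso(A, A @ x0)
print(np.count_nonzero(w))           # w carries the sparse estimate
```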