A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate typical multicarrier communication channels.
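As a concrete illustration of the annihilating-filter/ELP connection drawn above, here is a minimal sketch of Prony-style frequency recovery, assuming a noiseless sum of K complex exponentials; the function name and test signal are illustrative, not from the tutorial.

```python
import numpy as np

def prony_frequencies(x, K):
    """Recover the K exponential modes z_k of x[n] = sum_k c_k z_k**n
    (illustrative sketch; assumes noiseless data)."""
    N = len(x)
    # Hankel system whose null vector is the annihilating filter h:
    # sum_{j=0}^{K} h[j] * x[n - j] = 0 for all valid n.
    H = np.column_stack([x[K - j : N - j] for j in range(K + 1)])
    _, _, Vh = np.linalg.svd(H)
    h = Vh[-1].conj()          # null vector = filter coefficients
    return np.roots(h)         # filter roots are the signal modes

# Illustrative usage: two tones at normalized frequencies 0.10 and 0.27.
n = np.arange(64)
x = np.exp(2j * np.pi * 0.10 * n) + 0.5 * np.exp(2j * np.pi * 0.27 * n)
z = prony_frequencies(x, K=2)
print(np.sort(np.angle(z) / (2 * np.pi)))   # ~ [0.10, 0.27]
```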
Iterative constant modulus algorithms such as Godard and CMA have been used to blindly separate a superposition of co-channel constant modulus (CM) signals impinging on an antenna array. These algorithms have certain deficiencies in the context of convergence to local minima and the retrieval of all individual CM signals that are present in the channel. In this paper, we show that the underlying constant modulus factorization problem is, in fact, a generalized eigenvalue problem, and may be solved via a simultaneous diagonalization of a set of matrices. With this new, analytical approach, it is possible to detect the number of CM signals present in the channel, and to retrieve all of them exactly, rejecting other, non-CM signals. Only a modest number of samples is required. The algorithm is robust in the presence of noise and is tested on measured data collected from an experimental setup.
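The linear-algebra core of this analytical approach, namely that matrices sharing a common (generally non-orthogonal) diagonalizing basis can be jointly diagonalized via a generalized eigendecomposition, can be sketched as follows; the synthetic matrices below stand in for the CM covariance structure constructed in the paper.

```python
import numpy as np
from scipy.linalg import eig

# Synthetic stand-in: M1 and M2 share the (non-orthogonal) basis A.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
M1 = A @ np.diag([1.0, 2.0, 3.0]) @ A.T
M2 = A @ np.diag([4.0, 1.0, 2.0]) @ A.T
# Generalized eigenvectors of the pencil (M1, M2) recover the basis up to
# scaling and permutation, so they diagonalize both matrices at once.
w, V = eig(M1, M2)
print(np.round(V.T @ M1 @ V, 6))   # diagonal up to numerical error
```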
Processing the signals received on an array of sensors for the location of the emitter is of great enough interest to have been treated under many special-case assumptions. The general problem considers sensors with arbitrary locations and arbitrary directional characteristics (gain/phase/polarization) in a noise/interference environment of arbitrary covariance matrix. This report is concerned first with the multiple-emitter aspect of this problem and second with the generality of solution. A description is given of the multiple signal classification (MUSIC) algorithm, which provides asymptotically unbiased estimates of 1) the number of incident wavefronts present; 2) directions of arrival (DOA) (or emitter locations); 3) strengths and cross correlations among the incident waveforms; and 4) noise/interference strength. Examples and comparisons with methods based on maximum likelihood (ML) and maximum entropy (ME), as well as conventional beamforming, are included. An example of its use as a multiple-frequency estimator operating on time series is included.
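A minimal sketch of the MUSIC pseudospectrum for DOA estimation follows, assuming a uniform linear array; the element spacing, source angles, and noise level are illustrative choices, not values from the paper.

```python
import numpy as np

def music_spectrum(X, d, grid_deg, spacing=0.5):
    """X: m x N snapshots, d: assumed source count,
    spacing: element spacing in wavelengths (illustrative ULA model)."""
    m, N = X.shape
    R = X @ X.conj().T / N                      # sample covariance
    _, eigvecs = np.linalg.eigh(R)              # eigenvalues ascending
    En = eigvecs[:, : m - d]                    # noise subspace
    theta = np.deg2rad(grid_deg)
    k = np.arange(m)[:, None]
    A = np.exp(-2j * np.pi * spacing * k * np.sin(theta)[None, :])
    # Pseudospectrum: reciprocal of the projection onto the noise subspace.
    return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)

# Illustrative usage: two sources at -10 and 20 degrees, 8-element ULA.
rng = np.random.default_rng(0)
m, N = 8, 200
th = np.deg2rad([-10.0, 20.0])
A = np.exp(-2j * np.pi * 0.5 * np.arange(m)[:, None] * np.sin(th))
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.1 * (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N)))
X = A @ S + noise
grid = np.linspace(-90, 90, 721)
P = music_spectrum(X, d=2, grid_deg=grid)
loc = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
print(np.sort(grid[loc[np.argsort(P[loc])[-2:]]]))   # ~ [-10, 20]
```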
Subspace estimation plays an important role in a variety of modern signal processing applications. In this paper, we present a new approach for tracking the signal subspace recursively. It is based on a novel interpretation of the signal subspace as the solution of a projection-like unconstrained minimization problem. We show that recursive least squares techniques can be applied to solve this problem by making an appropriate projection approximation. The resulting algorithms have a computational complexity of O(nr), where n is the input vector dimension and r is the number of desired eigencomponents. Simulation results demonstrate that the tracking capability of these algorithms is similar to, and in some cases more robust than, the computationally expensive batch eigenvalue decomposition. Relations of the new algorithms to other subspace tracking methods and numerical issues are also discussed.
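A minimal sketch of one PAST-style recursion, as the projection approximation is commonly stated, appears below; the forgetting factor and initialization are illustrative assumptions.

```python
import numpy as np

def past_update(W, P, x, beta=0.97):
    """One O(n*r) PAST-style step. W: n x r subspace estimate,
    P: r x r inverse correlation of the compressed data y = W^H x."""
    y = W.conj().T @ x
    h = P @ y
    g = h / (beta + y.conj() @ h)
    P = (P - np.outer(g, h.conj())) / beta
    e = x - W @ y                        # projection residual
    W = W + np.outer(e, g.conj())
    return W, P

# Illustrative usage: track the dominant 2-D subspace of a 10-D stream.
rng = np.random.default_rng(1)
n, r = 10, 2
U = np.linalg.qr(rng.standard_normal((n, r)))[0]    # true subspace
W = np.linalg.qr(rng.standard_normal((n, r)))[0]
P = np.eye(r)
for _ in range(500):
    x = U @ rng.standard_normal(r) + 0.05 * rng.standard_normal(n)
    W, P = past_update(W, P, x)
Q = np.linalg.qr(W)[0]
print(np.linalg.norm(U.T @ Q))   # close to sqrt(r) when the spans align
```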
This paper links multiple invariance sensor array processing (MI-SAP) to parallel factor (PARAFAC) analysis, which is a tool rooted in psychometrics and chemometrics. PARAFAC is a common name for low-rank decomposition of three- and higher-way arrays. This link facilitates the derivation of powerful identifiability results for MI-SAP, shows that the uniqueness of single- and multiple-invariance ESPRIT stems from the uniqueness of low-rank decomposition of three-way arrays, and allows tapping the available expertise for fitting the PARAFAC model. The results are applicable to both data-domain and subspace MI-SAP formulations. The paper also includes a constructive uniqueness proof for a special PARAFAC model.
We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging.
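A minimal sketch of the basic FOCUSS re-weighted minimum-norm iteration follows, assuming a noiseless underdetermined system; the stopping rule and test problem are illustrative.

```python
import numpy as np

def focuss(A, b, iters=30, eps=1e-8):
    """Basic FOCUSS sketch: re-weighted minimum-norm for A x = b."""
    x = np.linalg.pinv(A) @ b              # low-resolution initial estimate
    for _ in range(iters):
        W = np.diag(np.abs(x))             # weights from preceding solution
        x_new = W @ np.linalg.pinv(A @ W) @ b
        if np.linalg.norm(x_new - x) < eps * np.linalg.norm(x):
            return x_new
        x = x_new
    return x

# Illustrative usage: a 3-sparse vector from 10 measurements in R^30.
rng = np.random.default_rng(2)
A = rng.standard_normal((10, 30))
x_true = np.zeros(30)
x_true[[3, 11, 25]] = [1.0, -2.0, 0.5]
x_hat = focuss(A, A @ x_true)
print(np.flatnonzero(np.abs(x_hat) > 1e-3))   # ideally [3 11 25]
```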
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review to practitioners wanting to join this emerging field and as a reference for researchers that attempts to put some of the existing ideas in perspective of practical applications.
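As a point of reference for the structured strategies surveyed, here is a minimal sketch of orthogonal matching pursuit over an unstructured Gaussian sensing matrix, the standard baseline such reviews contrast against; the test problem is illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Greedy recovery of a k-sparse x from y = A x (illustrative sketch)."""
    m, n = A.shape
    support, residual = [], y.copy()
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

# Illustrative usage: a 3-sparse vector from 15 measurements in R^50.
rng = np.random.default_rng(3)
A = rng.standard_normal((15, 50))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(50)
x_true[[5, 17, 42]] = [2.0, -1.0, 1.5]
x_hat = omp(A, A @ x_true, k=3)
print(np.flatnonzero(x_hat), np.round(x_hat[[5, 17, 42]], 2))
```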
In various applications, it is necessary to keep track of a low-rank approximation of a covariance matrix, R(t), slowly varying with time. It is convenient to track the left singular vectors associated with the largest singular values of the triangular factor, L(t), of its Cholesky factorization. These algorithms are referred to as "square-root." The drawback of the Eigenvalue Decomposition (EVD) or the Singular Value Decomposition (SVD) is usually the volume of the computations. Various numerical methods carrying out this task are surveyed in this paper, and we show why this admittedly heavy computational burden is questionable in numerous situations and should be revised. Indeed, the complexity per eigenpair is generally a quadratic function of the problem size, but there exist faster algorithms whose complexity is linear. Finally, in order to make a choice among the large and fuzzy set of available techniques, comparisons are made based on computer simulations in a relevant signal processing context.
Separation of sources consists of recovering a set of signals of which only instantaneous linear mixtures are observed. In many situations, no a priori information on the mixing matrix is available: The linear mixture should be "blindly" processed. This typically occurs in narrowband array processing applications when the array manifold is unknown or distorted. This paper introduces a new source separation technique exploiting the time coherence of the source signals. In contrast with other previously reported techniques, the proposed approach relies only on stationary second-order statistics and is based on a joint diagonalization of a set of covariance matrices. Asymptotic performance analysis of this method is carried out; some numerical simulations are provided to illustrate the effectiveness of the proposed method.
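A minimal sketch of second-order blind separation in this spirit follows, simplified to a single time lag (the AMUSE algorithm) rather than the joint diagonalization of several lagged covariances that the proposed method performs; the sources and mixing below are synthetic.

```python
import numpy as np

def amuse(X, lag=1):
    """X: m x N observed mixtures; returns source estimates (AMUSE sketch)."""
    X = X - X.mean(axis=1, keepdims=True)
    R0 = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(R0)
    Wh = E @ np.diag(d ** -0.5) @ E.T           # whitening matrix
    Z = Wh @ X
    # The rotation diagonalizing a (symmetrized) lagged covariance separates
    # sources with distinct temporal autocorrelations.
    R1 = Z[:, :-lag] @ Z[:, lag:].T / (Z.shape[1] - lag)
    _, V = np.linalg.eigh((R1 + R1.T) / 2)
    return V.T @ Z

# Illustrative usage: unmix a sinusoid and a square wave.
rng = np.random.default_rng(4)
t = np.arange(5000)
S = np.vstack([np.sin(0.3 * t), np.sign(np.sin(0.017 * t))])
X = rng.standard_normal((2, 2)) @ S             # unknown mixture
S_hat = amuse(X)
# Each recovered row should correlate strongly with one true source.
print(np.round(np.abs(np.corrcoef(np.vstack([S, S_hat]))[:2, 2:]), 2))
```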
In this work, we present Krylov subspace-based direction-of-arrival (DoA) estimation algorithms that efficiently exploit prior knowledge about the signals impinging on the sensor array. The proposed multi-step knowledge-aided iterative conjugate gradient (MS-KAI-CG) algorithms perform subtraction of the unwanted terms found in the estimated covariance matrix of the sensor data. Furthermore, we develop a version of MS-KAI-CG equipped with forward-backward averaging, called MS-KAI-CG-FB, which is suited to scenarios with correlated signals. Unlike current knowledge-aided methods, which exploit known DoAs to enhance the estimate of the covariance matrix of the input data, the MS-KAI-CG algorithms exploit knowledge of the structure of the forward-backward smoothed covariance matrix and its perturbation terms. Simulations with both uncorrelated and correlated signals show that the MS-KAI-CG algorithms outperform existing techniques.
We propose robust and efficient algorithms for the joint sparse recovery problem in compressed sensing, which simultaneously recover the supports of jointly sparse signals from their multiple measurement vectors obtained through a common sensing matrix. In a favorable situation, the unknown matrix, which consists of the jointly sparse signals, has linearly independent nonzero rows. In this case, the MUSIC (MUltiple SIgnal Classification) algorithm, originally proposed by Schmidt for the direction-of-arrival problem in sensor array processing and later proposed and analyzed for joint sparse recovery by Feng and Bresler, provides a guarantee with the minimum number of measurements. We focus instead on the unfavorable but practically significant case of rank defect or ill-conditioning. This situation arises with a limited number of measurement vectors, or with highly correlated signal components. In this case, MUSIC fails, and in practice none of the existing methods can consistently approach the fundamental limit. We propose subspace-augmented MUSIC (SA-MUSIC), which improves on MUSIC so that the support is reliably recovered under such unfavorable conditions. Combined with the subspace-based greedy algorithms also proposed and analyzed in this paper, SA-MUSIC provides a computationally efficient algorithm with a performance guarantee. The performance guarantees are given in terms of a version of the restricted isometry property. In particular, we also present a non-asymptotic perturbation analysis of the signal subspace estimation that has been missing in previous studies of MUSIC. Index Terms-Compressed sensing, joint sparsity, multiple measurement vectors (MMV), subspace estimation, restricted isometry property (RIP), sensor array processing, spectrum-blind sampling.
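A minimal sketch of MUSIC applied to joint sparse recovery in the favorable full row-rank case (the setting SA-MUSIC improves upon) follows; the test problem is illustrative.

```python
import numpy as np

def mmv_music_support(Y, A, k):
    """Y = A X with X having k linearly independent nonzero rows.
    Returns the estimated row support of X (noiseless sketch)."""
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    Us = U[:, :k]                               # signal subspace of Y
    An = A / np.linalg.norm(A, axis=0)          # normalized dictionary
    # Columns on the support lie in the signal subspace, so their
    # projection onto it has (near-)unit norm.
    score = np.linalg.norm(Us.conj().T @ An, axis=0)
    return np.sort(np.argsort(score)[-k:])

# Illustrative usage: 4 jointly sparse signals, 8 measurement vectors.
rng = np.random.default_rng(5)
m, n, k, L = 20, 60, 4, 8
A = rng.standard_normal((m, n))
support = np.sort(rng.choice(n, k, replace=False))
X = np.zeros((n, L))
X[support] = rng.standard_normal((k, L))
print(mmv_music_support(A @ X, A, k), support)   # should match
```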
Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3 × 3 singular matrix called the essential matrix if images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.
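One of the linear estimation techniques reviewed, the normalized eight-point algorithm, can be sketched as follows; in practice it would be wrapped in a robust scheme such as RANSAC, as the review discusses.

```python
import numpy as np

def normalize(pts):
    """Translate points to their centroid, scale mean distance to sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def eight_point(x1, x2):
    """x1, x2: N x 2 corresponding points. Returns F with x2h^T F x1h = 0."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence gives one row of the linear constraint A f = 0.
    A = np.column_stack([p2[:, [0]] * p1, p2[:, [1]] * p1, p1])
    _, _, Vh = np.linalg.svd(A)
    F = Vh[-1].reshape(3, 3)
    # Enforce the rank-2 constraint (zero the smallest singular value).
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1               # undo the normalizations
```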
Massive MIMO is a variant of multiuser MIMO where the number of base-station antennas M is very large (typically ≈ 100) and generally much larger than the number of spatially multiplexed data streams (typically ≈ 10). The benefits of such an approach have been intensively investigated in the past few years, and all-digital experimental implementations have also been demonstrated. Unfortunately, the front-end A/D conversion necessary to drive hundreds of antennas, with a signal bandwidth on the order of 10 to 100 MHz, requires a very large sampling bit-rate and power consumption. In order to reduce complexity, Hybrid Digital-Analog architectures have been proposed. Our work in this paper is motivated by one such scheme, named Joint Spatial Division and Multiplexing (JSDM), where the downlink precoder (resp., uplink linear receiver) is split into the product of a baseband linear projection (digital) and an RF reconfigurable beamforming network (analog), such that only a reduced number m ≪ M of A/D converters and RF modulation/demodulation chains is needed. In JSDM, users are grouped according to the similarity of their channels' dominant subspaces, and these groups are separated by the analog beamforming stage. Further multiplexing gain in each group is achieved using the digital precoder. Therefore, it is apparent that extracting the channel subspace information of the M-dim channel vectors from snapshots of m-dim projections, with m ≪ M, plays a fundamental role in JSDM implementation. In this paper, we develop efficient algorithms that require sampling only m = O(2√M) specific array elements according to a coprime sampling scheme and, for a given p ≪ M, return a p-dim beamformer that has a performance comparable with the best p-dim beamformer that can be designed from full knowledge of the exact channel covariance matrix. We assess the performance of our proposed estimators both analytically and empirically via numerical simulations. We also demonstrate by simulation that the proposed subspace estimation algorithms provide near-ideal performance for a massive MIMO JSDM system, by comparing with the case where the users' channel covariances are perfectly known.
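The flavor of coprime sampling can be sketched as follows: selecting two interleaved coprime subarrays totaling roughly 2√M elements yields pairwise index differences, and hence correlation lags, covering a much larger virtual aperture; the pair (p, q) below is an illustrative choice, not the paper's construction.

```python
import numpy as np

# Illustrative coprime pair (p, q): two interleaved subarrays of a long ULA.
p, q = 7, 8
idx = np.union1d(p * np.arange(q), q * np.arange(p))    # sampled elements
diffs = np.unique(np.abs(idx[:, None] - idx[None, :]))  # correlation lags
M = idx.max() + 1                                       # spanned aperture
print(len(idx), M)          # 14 physical elements ~ 2*sqrt(50) for M = 50
print(len(diffs))           # far more distinct lags than sampled elements
```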
Tensors or multi-way arrays are functions of three or more indices $(i,j,k,\cdots)$ -- similar to matrices (two-way arrays), which are functions of two indices $(r,c)$ for (row, column). Tensors have a rich history, stretching over almost a century, and touching upon numerous disciplines; but they have only recently become ubiquitous in signal and data analytics at the confluence of signal processing, statistics, data mining and machine learning. This overview article aims to provide a good starting point for researchers and practitioners interested in learning about and working with tensors. As such, it focuses on fundamentals and motivation (using various application examples), aiming to strike an appropriate balance of breadth and depth that will enable someone having taken first graduate courses in matrix algebra and probability to get started doing research and/or developing tensor algorithms and software. Some background in applied optimization is useful but not strictly required. The material covered includes tensor rank and rank decomposition; basic tensor factorization models and their relationships and properties (including fairly good coverage of identifiability); broad coverage of algorithms ranging from alternating optimization to stochastic gradient; statistical performance analysis; and applications ranging from source separation to collaborative filtering, mixture and topic modeling, classification, and multilinear subspace learning.
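A minimal sketch of rank-R CP (PARAFAC) fitting by alternating least squares, one of the algorithm families covered, follows; the unfolding convention is chosen for internal consistency and the test tensor is synthetic.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding (rows indexed by the chosen mode)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(T, R, iters=100):
    """Alternating least squares for a three-way rank-R CP model."""
    rng = np.random.default_rng(0)
    F = [rng.standard_normal((d, R)) for d in T.shape]
    for _ in range(iters):
        for mode in range(3):
            a, b = [F[m] for m in range(3) if m != mode]
            KR = khatri_rao(a, b)          # matches the unfolding above
            F[mode] = unfold(T, mode) @ KR @ np.linalg.pinv(KR.T @ KR)
    return F

# Illustrative usage: recover a synthetic rank-2 tensor.
rng = np.random.default_rng(6)
A, B, C = (rng.standard_normal((d, 2)) for d in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
F = cp_als(T, R=2)
T_hat = np.einsum('ir,jr,kr->ijk', *F)
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))   # typically near zero
```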
Signal subspace identification is a crucial first step in many hyperspectral processing algorithms such as target detection, change detection, classification, and unmixing. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity and in data storage. This paper introduces a new minimum mean square error-based approach to infer the signal subspace in hyperspectral imagery. The method, which is termed hyperspectral signal identification by minimum error, is eigendecomposition based, unsupervised, and fully automatic (i.e., it does not depend on any tuning parameters). It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least squared error sense. State-of-the-art performance of the proposed method is illustrated by using simulated and real hyperspectral images. Index Terms-Dimensionality reduction, hyperspectral imagery, hyperspectral signal subspace identification by minimum error (HySime), hyperspectral unmixing, linear mixture, minimum mean square error (MSE), subspace identification.
Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally.
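Under the linear mixture model discussed here, per-pixel abundance estimation with a nonnegativity constraint can be sketched as below; the crude renormalization stands in for a proper fully constrained (sum-to-one) solver, and the endmember data are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(E, y):
    """E: bands x endmembers signature matrix, y: pixel spectrum."""
    a, _ = nnls(E, y)                  # abundances >= 0
    s = a.sum()
    return a / s if s > 0 else a       # crude sum-to-one renormalization

# Illustrative usage: 3 synthetic endmembers, 50 bands, a 60/30/10 mixture.
rng = np.random.default_rng(7)
E = np.abs(rng.standard_normal((50, 3))) + 0.1
a_true = np.array([0.6, 0.3, 0.1])
y = E @ a_true + 0.01 * rng.standard_normal(50)
print(np.round(unmix_pixel(E, y), 2))   # ~ [0.6, 0.3, 0.1]
```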
This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than restricting attention to traditional nonlinear least squares.
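The survey's emphasis on general robust cost functions can be illustrated on a toy fitting problem: a Huber loss downweights gross outliers that would dominate a squared-error fit. The line-fitting residual below is a stand-in for reprojection errors, not a bundle adjustment.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(8)
t = np.linspace(0, 1, 40)
y = 2.0 * t + 1.0 + 0.05 * rng.standard_normal(40)
y[::10] += 3.0                             # a few gross outliers

residual = lambda p: p[0] * t + p[1] - y   # toy stand-in for reprojection
fit_l2 = least_squares(residual, x0=[0.0, 0.0])              # squared loss
fit_hu = least_squares(residual, x0=[0.0, 0.0], loss='huber', f_scale=0.1)
print(np.round(fit_l2.x, 2), np.round(fit_hu.x, 2))  # Huber stays near [2, 1]
```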
For many modern applications in science and engineering, data are collected in a streaming fashion, carrying time-varying information, and practitioners need to process them on the fly with limited memory and computational resources in order to make decisions in a timely manner. This is often coupled with the missing data problem, in which only a small fraction of the data attributes are observed. These complexities impose significant and unconventional constraints on the problem of streaming principal component analysis (PCA) and subspace tracking, which is an essential building block for many inference tasks in signal processing and machine learning. This survey article reviews a variety of classical and recent algorithms for solving this problem with low computational and memory complexities, particularly those applicable in the big-data regime with missing data. We illustrate that streaming PCA and subspace tracking algorithms can be understood through algebraic and geometric perspectives, and that they need to be adjusted carefully to handle missing data. Both asymptotic and non-asymptotic convergence guarantees are reviewed. Finally, we benchmark the performance of several competing algorithms in the presence of missing data for both well-conditioned and ill-conditioned systems.
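A minimal sketch of one Grassmannian incremental update of the kind reviewed (in the style of GROUSE), which fits the subspace to the observed entries only, follows; the step-size schedule and test stream are illustrative.

```python
import numpy as np

def grouse_step(U, x, omega, eta):
    """U: n x r orthonormal basis; x: data vector (only x[omega] observed);
    omega: observed indices; eta: step size (GROUSE-style sketch)."""
    Uo = U[omega]
    w = np.linalg.lstsq(Uo, x[omega], rcond=None)[0]   # fit observed entries
    p = U @ w
    r = np.zeros(U.shape[0])
    r[omega] = x[omega] - Uo @ w                        # zero-filled residual
    rn, pn, wn = np.linalg.norm(r), np.linalg.norm(p), np.linalg.norm(w)
    if rn < 1e-12 or wn < 1e-12:
        return U
    sigma = rn * pn
    # Rotate the basis toward the residual direction along a geodesic.
    step = (np.cos(sigma * eta) - 1) * p / pn + np.sin(sigma * eta) * r / rn
    return U + np.outer(step, w / wn)

# Illustrative usage: track a 2-D subspace in R^20, observing half the
# entries of each incoming vector.
rng = np.random.default_rng(9)
n, r = 20, 2
Utrue = np.linalg.qr(rng.standard_normal((n, r)))[0]
U = np.linalg.qr(rng.standard_normal((n, r)))[0]
for t in range(2000):
    x = Utrue @ rng.standard_normal(r)
    omega = rng.choice(n, size=n // 2, replace=False)
    U = grouse_step(U, x, omega, eta=0.1 / (1 + 0.01 * t))
print(np.linalg.norm(U.T @ Utrue))   # approaches sqrt(2) as the spans align
```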
We consider the notion of qualitative information and the practicalities of extracting it from experimental data. Our approach, based on a theorem of Takens, draws on ideas from the generalized theory of information known as singular system analysis due to Bertero, Pike and co-workers. We illustrate our technique with numerical data from the chaotic regime of the Lorenz model.
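The delay-embedding-plus-SVD procedure at the heart of singular system analysis can be sketched as follows; the noisy two-tone test signal is a simple stand-in for the paper's Lorenz data.

```python
import numpy as np

def trajectory_matrix(x, m):
    """Embed a scalar series into m-dimensional delay vectors (rows)."""
    N = len(x) - m + 1
    return np.column_stack([x[i : i + N] for i in range(m)])

# Illustrative usage: embed a noisy two-frequency signal and inspect the
# singular spectrum of its trajectory matrix.
rng = np.random.default_rng(10)
t = np.arange(2000) * 0.01
x = np.sin(2 * np.pi * 1.3 * t) + 0.5 * np.sin(2 * np.pi * 3.1 * t)
x += 0.05 * rng.standard_normal(len(t))
X = trajectory_matrix(x, m=20)
s = np.linalg.svd(X, compute_uv=False)
# A clear gap after four singular values reveals the deterministic part:
# two sinusoids span a four-dimensional subspace of delay space.
print(np.round(s[:6] / s[0], 3))
```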