Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using randomized matrices and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice, that is, to pinpoint the potential of structured CS strategies to make the transition from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field and as a reference for researchers that attempts to put some of the existing ideas in the perspective of practical applications.
Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally.
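To make the linear mixing model concrete, here is a minimal abundance-estimation sketch (not from the paper): the endmember signatures are assumed known, and nonnegative, approximately sum-to-one abundances are found per pixel with `scipy.optimize.nnls` on an augmented system. All sizes and signatures are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fcls(Y, E, delta=1e3):
    """Approximate fully constrained (nonnegative, sum-to-one) abundance
    estimation by augmenting the NNLS system with a weighted sum-to-one row.
    Y: (bands, pixels) measured spectra; E: (bands, endmembers)."""
    _, P = Y.shape
    _, R = E.shape
    # A heavily weighted row of ones softly enforces sum(a) = 1 per pixel.
    E_aug = np.vstack([E, delta * np.ones((1, R))])
    A = np.zeros((R, P))
    for p in range(P):
        y_aug = np.append(Y[:, p], delta)
        A[:, p], _ = nnls(E_aug, y_aug)
    return A

# Toy example: 3 synthetic endmembers, random abundances, additive noise.
rng = np.random.default_rng(0)
E = np.abs(rng.normal(size=(50, 3)))            # hypothetical spectral signatures
A_true = rng.dirichlet(np.ones(3), size=100).T  # abundances, columns sum to 1
Y = E @ A_true + 0.01 * rng.normal(size=(50, 100))
A_hat = unmix_fcls(Y, E)
print("mean abs abundance error:", np.abs(A_hat - A_true).mean())
```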
In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back of course to Shannon's R(D) theory in the case of Gaussian stationary processes, which says that transforming into a Fourier basis followed by block coding gives an optimal lossy compression technique; practical developments like transform-based image compression have been inspired by this result. In this paper we also discuss connections perhaps less familiar to the Information Theory community, growing out of the field of harmonic analysis. Recent harmonic analysis constructions, such as wavelet transforms and Gabor transforms, are essentially optimal transforms for transform coding in certain settings. Some of these transforms are under consideration for future compression standards. We discuss some of the lessons of harmonic analysis in this century. Typically, the problems and achievements of this field have involved goals that were not obviously related to practical data compression, and have used a language not immediately accessible to outsiders. Nevertheless, through an extensive generalization of what Shannon called the "sampling theorem," harmonic analysis has succeeded in developing new forms of functional representation which turn out to have significant data compression interpretations. We explain why harmonic analysis has interacted with data compression, and we describe some interesting recent ideas in the field that may affect data compression in the future.
We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging.
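A minimal sketch of the basic FOCUSS iteration described above, assuming the simplest variant (no regularization; the weights are the magnitudes of the preceding iterate). Problem sizes are illustrative.

```python
import numpy as np

def focuss(A, b, iters=30, eps=1e-8):
    """Basic FOCUSS sketch: repeated weighted minimum-norm solutions with
    weights taken from the preceding iterate, which concentrates energy
    onto a small support (see the paper for generalized p-norm variants)."""
    x = np.linalg.pinv(A) @ b           # low-resolution initial estimate
    for _ in range(iters):
        W = np.diag(np.abs(x))          # weights from preceding solution
        q = np.linalg.pinv(A @ W) @ b   # weighted minimum-norm solution
        x = W @ q
        x[np.abs(x) < eps] = 0.0        # prune vanishing entries
    return x

# Toy underdetermined example: 20 measurements of a 3-sparse length-50 vector.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 50))
x_true = np.zeros(50)
x_true[[5, 17, 40]] = [1.0, -2.0, 0.5]
x_hat = focuss(A, A @ x_true)
print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-3)[0])
```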
Shannon's determination of the capacity of the linear Gaussian channel has posed a magnificent challenge to succeeding generations of researchers. This paper surveys how this challenge has been met during the past half century. Orthogonal minimum-bandwidth modulation techniques and channel capacity are discussed. Binary coding techniques for low-signal-to-noise ratio (SNR) channels and nonbinary coding techniques for high-SNR channels are reviewed. Recent developments, which now allow capacity to be approached on any linear Gaussian channel, are surveyed. These new capacity-approaching techniques include turbo coding and decoding, multilevel coding, and combined coding/precoding for intersymbol-interference channels.
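For reference (a standard result, not part of the survey text itself), the benchmark that these capacity-approaching techniques target is Shannon's capacity of the band-limited AWGN channel,

$$C = W \log_2\!\left(1 + \frac{P}{N_0 W}\right) \ \text{bits/s},$$

where $W$ is the bandwidth in Hz, $P$ the average signal power, and $N_0$ the one-sided noise power spectral density, so that the SNR is $P/(N_0 W)$.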
Acoustic data provide scientific and engineering insights in fields ranging from biology and communications to ocean and Earth science. We survey the advances and transformative potential of machine learning (ML), including deep learning, in the field of acoustics. ML is a broad family of statistical techniques for automatically detecting and exploiting patterns in data. Relative to conventional acoustics and signal processing, ML is data-driven. Given sufficient training data, ML can discover complex relationships between features. With large amounts of training data, ML can discover models that describe complex acoustic phenomena such as human speech and reverberation. ML in acoustics is developing rapidly, with impressive results and significant promise for the future. We first introduce ML, then highlight ML developments in five acoustics research areas: source localization in speech processing, source localization in ocean acoustics, bioacoustics, seismic exploration, and environmental sounds in everyday scenes.
A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of nongaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, and our recent work on the subject.

1 Motivation. Imagine that you are in a room where two people are speaking simultaneously. You have two microphones, which you hold in different locations. The microphones give you two recorded time signals, which we could denote by $x_1(t)$ and $x_2(t)$, with $x_1$ and $x_2$ the amplitudes, and $t$ the time index. Each of these recorded signals is a weighted sum of the speech signals emitted by the two speakers, which we denote by $s_1(t)$ and $s_2(t)$. We could express this as a linear equation: $x_1(t) = a_{11}s_1 + a_{12}s_2$ (1) and $x_2(t) = a_{21}s_1 + a_{22}s_2$ (2), where $a_{11}$, $a_{12}$, $a_{21}$, and $a_{22}$ are some parameters that depend on the distances of the microphones from the speakers. It would be very useful if you could now estimate the two original speech signals $s_1(t)$ and $s_2(t)$, using only the recorded signals $x_1(t)$ and $x_2(t)$. This is called the cocktail-party problem. For the time being, we omit any time delays or other extra factors from our simplified mixing model. As an illustration, consider the waveforms in Fig. 1 and Fig. 2. These are, of course, not realistic speech signals, but suffice for this illustration. The original speech signals could look something like those in Fig. 1, and the mixed signals could look like those in Fig. 2. The problem is to recover the data in Fig. 1 using only the data in Fig. 2. Actually, if we knew the parameters $a_{ij}$, we could solve the linear equations in (1) and (2) by classical methods. The point is, however, that if you don't know the $a_{ij}$, the problem is considerably more difficult. One approach to solving this problem would be to use some information on the statistical properties of the signals $s_i(t)$ to estimate the $a_{ij}$. Actually, and perhaps surprisingly, it turns out that it is enough to assume that $s_1(t)$ and $s_2(t)$, at each time instant $t$, are statistically independent. This is not an unrealistic assumption in many cases.
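The cocktail-party setup above can be reproduced numerically. The sketch below (not from the paper) mixes two synthetic nongaussian sources through a 2x2 matrix as in equations (1)-(2) and recovers them with scikit-learn's FastICA, which implements estimation principles of the kind the paper develops; sources come back only up to permutation and scaling.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two deterministic "speech-like" sources and an instantaneous 2x2 mixture.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 4000)
s1 = np.sign(np.sin(2 * np.pi * 7 * t))    # square wave (nongaussian)
s2 = np.sin(2 * np.pi * 13 * t + 0.5)      # sinusoid (nongaussian)
S = np.c_[s1, s2]
A = np.array([[0.8, 0.3],                  # unknown mixing parameters a_ij
              [0.4, 0.9]])
X = S @ A.T                                # microphone recordings x_1, x_2

S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)
# Cross-correlations show each estimate matches one true source
# (up to sign/scale), i.e. one entry per row is close to 1.
print(np.abs(np.corrcoef(S.T, S_hat.T))[0:2, 2:4].round(2))
```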
Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This article presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression.
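As an illustration of the simplest such protocol, here is a minimal randomized-gossip sketch (pairwise averaging over a ring network); the topology, schedule, and round count are illustrative, not from the article.

```python
import numpy as np

def randomized_gossip(x, edges, rounds=5000, seed=0):
    """Minimal randomized gossip: at each tick a random edge (i, j)
    activates and both endpoints replace their values with the pairwise
    average; all node values converge to the global average."""
    rng = np.random.default_rng(seed)
    x = x.astype(float).copy()
    for _ in range(rounds):
        i, j = edges[rng.integers(len(edges))]
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

# Ring network of 20 sensors with random initial measurements.
n = 20
edges = [(i, (i + 1) % n) for i in range(n)]
x0 = np.random.default_rng(3).normal(size=n)
x = randomized_gossip(x0, edges)
print("true average:", x0.mean(),
      " max deviation:", np.abs(x - x0.mean()).max())
```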
Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system's performance that supports the empirical observations.
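A minimal sketch of the discrete random-demodulator model described above: a K-sparse multitone signal is multiplied by a random ±1 chipping sequence and integrated-and-dumped down to R samples. For brevity, recovery here uses a simple orthogonal matching pursuit in place of the convex programming the paper analyzes; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
W, R, K = 256, 32, 3                     # Nyquist rate, sampling rate, sparsity
# Sparse multitone signal: K active tones with random complex amplitudes.
F = np.exp(2j * np.pi * np.outer(np.arange(W), np.arange(W)) / W) / np.sqrt(W)
alpha = np.zeros(W, complex)
alpha[rng.choice(W, K, replace=False)] = rng.normal(size=K) + 1j * rng.normal(size=K)
x = F @ alpha
d = rng.choice([-1.0, 1.0], W)           # random chipping sequence
H = np.kron(np.eye(R), np.ones(W // R))  # integrate-and-dump to R samples
y = H @ (d * x)                          # compressive samples
Phi = H @ np.diag(d) @ F                 # effective measurement matrix

def omp(Phi, y, K):
    """Greedy recovery: pick the most correlated column, refit, repeat."""
    support, r = [], y.copy()
    for _ in range(K):
        support.append(int(np.argmax(np.abs(Phi.conj().T @ r))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef
    a = np.zeros(Phi.shape[1], complex)
    a[support] = coef
    return a

print("support recovered:",
      set(np.nonzero(omp(Phi, y, K))[0]) == set(np.nonzero(alpha)[0]))
```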
High-resolution signal parameter estimation is a problem of significance in many signal processing applications. Such applications include direction-of-arrival (DOA) estimation, system identification, and time series analysis. A novel approach to the general problem of signal parameter estimation is described. Although discussed in the context of direction-of-arrival estimation, ESPRIT can be applied to a wide variety of problems including accurate detection and estimation of sinusoids in noise. It exploits an underlying rotational invariance among signal subspaces induced by an array of sensors with a translational invariance structure. The technique, when applicable, manifests significant performance and computational advantages over previous algorithms such as MEM, Capon's MLM, and MUSIC.
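A minimal ESPRIT sketch for a uniform linear array, assuming two maximally overlapping subarrays and a least-squares solution of the shift-invariance equation; array geometry, source angles, and noise level are illustrative.

```python
import numpy as np

def esprit_doa(X, K, d=0.5):
    """ESPRIT for a uniform linear array (element spacing d in wavelengths):
    the rotational invariance between the two maximally overlapping
    subarrays of the signal subspace yields the arrival angles."""
    R = X @ X.conj().T / X.shape[1]        # sample covariance
    _, U = np.linalg.eigh(R)
    Es = U[:, -K:]                          # signal subspace (K largest)
    # Solve the shift-invariance equation Es1 @ Psi ~= Es2 in least squares.
    Psi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
    phases = np.angle(np.linalg.eigvals(Psi))
    return np.degrees(np.arcsin(phases / (2 * np.pi * d)))

# Toy scene: 8-element half-wavelength ULA, sources at -20 and 35 degrees.
rng = np.random.default_rng(5)
M, N, angles = 8, 500, np.radians([-20.0, 35.0])
A = np.exp(1j * 2 * np.pi * 0.5 * np.outer(np.arange(M), np.sin(angles)))
S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
print(np.sort(esprit_doa(X, K=2)).round(1))   # approximately [-20., 35.]
```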
Iterative constant modulus algorithms such as Godard and CMA have been used to blindly separate a superposition of co-channel constant modulus (CM) signals impinging on an antenna array. These algorithms have certain deficiencies in the context of convergence to local minima and the retrieval of all individual CM signals that are present in the channel. In this paper, we show that the underlying constant modulus factorization problem is, in fact, a generalized eigenvalue problem, and may be solved via a simultaneous diagonalization of a set of matrices. With this new, analytical approach, it is possible to detect the number of CM signals present in the channel, and to retrieve all of them exactly, rejecting other, non-CM signals. Only a modest number of samples is required. The algorithm is robust in the presence of noise, and is tested on measured data, collected from an experimental setup.
This paper presents a new technique for achieving blind signal separation when given only a single channel recording. The main concept is based on applying a priori sets of time-domain basis functions, learned by independent component analysis (ICA), to the separation of mixed source signals observed in a single channel. The inherent time structure of sound sources is reflected in the ICA basis functions, which encode the sources in a statistically efficient manner. We derive a learning algorithm using a maximum likelihood approach given the observed single channel data and sets of basis functions. For each time point we infer the source parameters and their contribution factors. This inference is possible due to prior knowledge of the basis functions and the associated coefficient densities. A flexible model for density estimation allows accurate modeling of the observation, and our experimental results exhibit a high level of separation performance for simulated mixtures as well as real environment recordings employing mixtures of two different sources.
Independent component analysis (ICA) is a statistical method for transforming an observable multidimensional random vector into components that are as independent of each other as possible. Usually, the ICA framework assumes a model according to which the observations are generated (such as a linear transformation with additive noise). ICA over finite fields is a special case of ICA in which both the observations and the independent components are over a finite alphabet. In this paper we consider a formulation of the finite-field case in which an observation vector is decomposed into its independent components (as much as possible) with no prior assumption on the way it was generated. This generalization, also known as Barlow's minimal redundancy representation, is considered an open problem. We propose several theorems and show that this hard problem can be solved accurately with a branch-and-bound search tree algorithm, or tightly approximated with a series of linear problems. Moreover, we show that there exists a simple transformation (namely, order permutation) that provides a greedy yet very effective approximation of the optimal solution. We further show that while not every random vector can be efficiently decomposed into independent components, the vast majority of vectors do decompose very well (that is, within a small constant cost) as the dimension increases. In addition, we show that this favorable constant cost can be achieved in practice with a complexity that is asymptotically linear in the alphabet size. Our contribution provides the first set of solutions to Barlow's problem with theoretical and computational guarantees. Finally, we demonstrate our suggested framework in a multiple-source coding application.
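The objective behind Barlow's minimal redundancy representation can be made concrete with a small empirical sketch (not from the paper): since an invertible transform preserves the joint entropy, minimizing the sum of marginal entropies minimizes the redundancy $\sum_i H(Y_i) - H(Y)$. The toy transform below (an XOR map on GF(2)^2) is chosen by hand, not by the paper's search algorithms.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def barlow_cost(X):
    """Empirical sum of marginal entropies, joint entropy, and redundancy
    for integer-valued data X of shape (samples, components)."""
    n, d = X.shape
    marginals = sum(entropy(np.bincount(X[:, i]) / n) for i in range(d))
    _, counts = np.unique(X, axis=0, return_counts=True)
    joint = entropy(counts / n)
    return marginals, joint, marginals - joint

# Binary toy data with strong dependence between the two components.
rng = np.random.default_rng(6)
z = rng.integers(0, 2, 2000)
X = np.c_[z, (z + (rng.random(2000) < 0.1)) % 2]   # second bit tracks the first
m, j, red = barlow_cost(X)
print(f"sum of marginals {m:.3f}, joint {j:.3f}, redundancy {red:.3f}")

# An invertible map on GF(2)^2 (identity on the first bit, XOR into the
# second) removes most of the redundancy: the components become independent.
Y = np.c_[X[:, 0], X[:, 0] ^ X[:, 1]]
print("redundancy after transform: %.3f" % barlow_cost(Y)[2])
```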
Tensors or {\em multi-way arrays} are functions of three or more indices $(i,j,k,\cdots)$ -- similar to matrices (two-way arrays), which are functions of two indices $(r,c)$ for (row, column). Tensors have a rich history, stretching over almost a century, and touching upon numerous disciplines; but they have only recently become ubiquitous in signal and data analytics at the confluence of signal processing, statistics, data mining and machine learning. This overview article aims to provide a good starting point for researchers and practitioners interested in learning about and working with tensors. As such, it focuses on fundamentals and motivation (using various application examples), aiming to strike an appropriate balance of breadth {\em and depth} that will enable someone having taken first graduate courses in matrix algebra and probability to get started doing research and/or developing tensor algorithms and software. Some background in applied optimization is useful but not strictly required. The material covered includes tensor rank and rank decomposition; basic tensor factorization models and their relationships and properties (including fairly good coverage of identifiability); broad coverage of algorithms ranging from alternating optimization to stochastic gradient; statistical performance analysis; and applications ranging from source separation to collaborative filtering, mixture and topic modeling, classification, and multilinear subspace learning.
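As a concrete instance of the basic factorization model and the alternating-optimization algorithms the overview covers, here is a minimal rank-R CP decomposition by alternating least squares (no normalization, line search, or convergence checks; sizes are illustrative).

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization of a 3-way array (row-major column ordering)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product: row (i*J + j) holds A[i,:] * B[j,:]."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(T, R, iters=100, seed=0):
    """Rank-R CP decomposition T ~= sum_r a_r o b_r o c_r via ALS:
    each factor is refit in turn with the other two held fixed."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.normal(size=(T.shape[i], R)) for i in range(3))
    for _ in range(iters):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Toy check: build an exact rank-3 tensor and refit it.
rng = np.random.default_rng(7)
A0, B0, C0 = (rng.normal(size=(s, 3)) for s in (6, 7, 8))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, R=3)
print("relative fit error:",
      np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T))
```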
We introduce Xampling, a unified framework for signal acquisition and processing of signals in a union of subspaces. The framework has two main functions: analog compression, which narrows down the input bandwidth prior to sampling with commercial devices, and a nonlinear algorithm that detects the input subspace prior to conventional signal processing. A representative union model of spectrally-sparse signals serves as a test case to study these Xampling functions. We adopt three metrics for the choice of analog compression: robustness to model mismatch, required hardware accuracy, and software complexity. We conduct a comprehensive comparison between two sub-Nyquist acquisition strategies for spectrally-sparse signals, the random demodulator and the modulated wideband converter (MWC), in terms of these metrics and draw operative conclusions regarding the choice of analog compression. We then address low-rate signal processing and develop an algorithm for that purpose that enables convenient signal processing at sub-Nyquist rates from samples obtained by the MWC. We conclude by showing that a variety of other sampling approaches for different union classes fit nicely into our framework.
Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. Our primary design goals are efficient hardware implementation and low computational load on the supporting digital processing. We propose a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then lowpass filtered and sampled uniformly at a low rate, which is orders of magnitude smaller than Nyquist. Perfect recovery from the proposed samples is achieved under certain necessary and sufficient conditions. We also develop a digital architecture, which allows either reconstruction of the analog input, or processing of any band of interest at a low rate, that is, without interpolating to the high Nyquist rate. Numerical simulations demonstrate many engineering aspects: robustness to noise and mismodeling, potential hardware simplifications, real-time performance for signals with time-varying support, and stability to quantization effects. We compare our system with two previous approaches: periodic nonuniform sampling, which is bandwidth limited by existing hardware devices, and the random demodulator, which is restricted to discrete multitone signals and has a high computational load. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters.
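A minimal sketch of the MWC front-end only (the recovery stage is omitted): each channel multiplies the input by a periodic ±1 sequence, lowpass filters, and samples far below the dense-grid "Nyquist" rate. All rates, channel counts, and band locations are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
N = 2**14                                  # dense grid emulating "analog" time
t = np.arange(N) / N
x = (np.cos(2 * np.pi * 1200 * t)          # two narrow bands in a wide spectrum
     + np.cos(2 * np.pi * 3700 * t))

m, M, q = 4, 64, 128                       # channels, chips per period, decimation
P = rng.choice([-1.0, 1.0], size=(m, M))   # one periodic sign pattern per channel
chips = np.tile(P, (1, N // M))[:, :N]     # patterns repeated over the full record

def lowpass(v, cutoff_bins):
    """Ideal lowpass via FFT masking (stand-in for an analog filter)."""
    V = np.fft.rfft(v)
    V[cutoff_bins:] = 0
    return np.fft.irfft(V, n=len(v))

# Each channel: mix with its sign pattern, lowpass, sample at the low rate.
Y = np.stack([lowpass(c * x, N // (2 * q))[::q] for c in chips])
print("samples per channel:", Y.shape[1], "vs dense grid:", N)
```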
While the recent theory of compressed sensing provides an opportunity to overcome the Nyquist limit in recovering sparse signals, a solution approach usually takes the form of an inverse problem for the unknown signal, which depends crucially on the specific signal representation. In this paper, we propose a drastically different two-step Fourier compressive sampling framework in the continuous domain that can be implemented as a measurement-domain interpolation, after which signal reconstruction can be done using classical analytic reconstruction methods. The main idea originates from the fundamental duality between sparsity in the primary space and the low-rankness of a structured matrix in the spectral domain, which shows that a low-rank interpolator in the spectral domain can enjoy all the benefits of sparse recovery with performance guarantees. Most notably, the proposed low-rank interpolation approach can be regarded as a generalization of recent spectral compressed sensing to recover a large class of finite-rate-of-innovation (FRI) signals at a near-optimal sampling rate. Moreover, for the case of cardinal representation, we can show that the proposed low-rank interpolation benefits from inherent regularization and an optimal incoherence parameter. Using the powerful dual certificates and golfing scheme, we show that the new framework still achieves the near-optimal sampling rate for the general class of FRI signals, and that the sampling rate can be further reduced for the class of cardinal splines. Numerical results using various types of FRI signals confirm that the proposed low-rank interpolation approach has significantly better phase transitions than conventional CS approaches.

Index Terms: Compressed sensing, signals of finite rate of innovation, spectral compressed sensing, low-rank matrix completion, dual certificates, golfing scheme.
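The duality the paper builds on can be illustrated with a toy experiment (not the paper's guaranteed algorithm): the Fourier samples of a stream of K Diracs form a sum of K exponentials, so their Hankel matrix has rank K, and missing spectral samples can be interpolated by alternating a rank-K truncation with re-imposing the observed data, a simple Cadzow-style heuristic.

```python
import numpy as np
from scipy.linalg import hankel, svd

rng = np.random.default_rng(9)
n, K = 65, 3
tks = rng.choice(np.arange(n), K, replace=False) / n   # Dirac locations
aks = rng.normal(size=K)                                # Dirac amplitudes
f = np.arange(n)
s = (aks * np.exp(-2j * np.pi * np.outer(f, tks))).sum(axis=1)  # Fourier samples

mask = rng.random(n) < 0.5                 # observed spectral sample pattern
p = n // 2 + 1                             # Hankel pencil size
s_hat = np.where(mask, s, 0)
for _ in range(200):
    H = hankel(s_hat[:p], s_hat[p - 1:])   # structured lift: H[i, j] = s_hat[i+j]
    U, sv, Vh = svd(H, full_matrices=False)
    H = (U[:, :K] * sv[:K]) @ Vh[:K]       # rank-K truncation
    # Average anti-diagonals back into a sequence, then re-impose known data.
    est = np.array([np.mean(H[::-1].diagonal(k)) for k in range(-p + 1, n - p + 1)])
    s_hat = np.where(mask, s, est)

print("relative interpolation error:",
      np.linalg.norm(s_hat - s) / np.linalg.norm(s))
```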