We introduce Xampling, a unified framework for signal acquisition and processing of signals in a union of subspaces. The framework has two main functions: analog compression, which narrows down the input bandwidth prior to sampling with commercial devices, and a nonlinear algorithm that detects the input subspace prior to conventional signal processing. A representative union model of spectrally-sparse signals serves as a test case for studying these Xampling functions. We adopt three metrics for the choice of analog compression: robustness to model mismatch, required hardware accuracy, and software complexity. We conduct a comprehensive comparison between two sub-Nyquist acquisition strategies for spectrally-sparse signals, the random demodulator and the modulated wideband converter (MWC), in terms of these metrics and draw operative conclusions regarding the choice of analog compression. We then address low-rate signal processing and develop an algorithm that enables convenient signal processing at sub-Nyquist rates from samples obtained by the MWC. We conclude by showing that a variety of other sampling approaches for different union classes fit nicely into our framework.
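As a rough illustration of the analog-compression stage discussed above, the sketch below simulates one channel of an MWC-like front end in discrete time: the input is mixed with a periodic ±1 sequence, low-pass filtered, and decimated to a low rate. All names, sizes, and the simple FIR low-pass are illustrative assumptions, not the authors' hardware design.

```python
import numpy as np
from scipy.signal import firwin, lfilter

# Illustrative sketch (not the paper's implementation): one channel of an
# MWC-like front end simulated on a dense "Nyquist" grid.
rng = np.random.default_rng(0)
fnyq = 1.0e3                      # assumed Nyquist rate of the dense simulation grid
N = 4096                          # number of Nyquist-rate samples
t = np.arange(N) / fnyq

# Sparse multiband input: two narrow bands at "unknown" carriers (toy example).
x = (np.cos(2 * np.pi * 310.0 * t) * np.sinc(20.0 * (t - t.mean()))
     + 0.5 * np.cos(2 * np.pi * 120.0 * t) * np.sinc(15.0 * (t - t.mean())))

# Periodic +/-1 mixing sequence with period M Nyquist samples.
M = 64
p = rng.choice([-1.0, 1.0], size=M)
mixed = x * np.tile(p, N // M)

# Low-pass filter to one spectral slice, then decimate by M (sub-Nyquist output).
h = firwin(129, cutoff=1.0 / M)   # cutoff relative to the Nyquist frequency of the dense grid
low_rate = lfilter(h, 1.0, mixed)[::M]
print(low_rate.shape)             # N/M samples: a factor-M rate reduction
```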
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles on CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of a randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field, and as a reference for researchers attempting to put some of the existing ideas into the perspective of practical applications.
We address the problem of reconstructing a multi-band signal from its sub-Nyquist point-wise samples. To date, all reconstruction methods proposed for this class of signals assumed knowledge of the band locations. In this paper, we develop a non-linear blind perfect reconstruction scheme for multi-band signals which does not require the band locations. Our approach assumes an existing blind multi-coset sampling method. The sparse structure of multi-band signals in the continuous frequency domain is used to replace the continuous reconstruction with a single finite-dimensional problem without the need for discretization. The resulting problem can be formulated within the framework of compressed sensing, and thus can be solved efficiently using known tractable algorithms from this emerging area. We also develop a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery. Our method ensures perfect reconstruction for a wide class of signals sampled at the minimal rate. Numerical experiments are presented demonstrating blind sampling and reconstruction with minimal sampling rate. Index Terms: Kruskal-rank, Landau-Nyquist rate, multiband, multiple measurement vectors (MMV), nonuniform periodic sampling, orthogonal matching pursuit (OMP), signal representation, sparsity.
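The reduction described above ends in a finite-dimensional MMV problem that can be attacked with standard greedy solvers. Below is a minimal sketch of simultaneous OMP, a common MMV variant of the OMP listed in the index terms; the matrix names and dimensions are illustrative assumptions.

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP sketch: recover a row-sparse X with Y ~= A @ X.
    A: (m, n) measurement matrix, Y: (m, d) measurement vectors, k: sparsity."""
    residual = Y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the residual across all vectors.
        scores = np.linalg.norm(A.conj().T @ residual, axis=1)
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        X_s, *_ = np.linalg.lstsq(As, Y, rcond=None)
        residual = Y - As @ X_s
    X = np.zeros((A.shape[1], Y.shape[1]), dtype=Y.dtype)
    X[support, :] = X_s
    return X, sorted(support)
```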
Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system's performance that supports the empirical observations.
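A discrete model of the random demodulator is easy to write down: the Nyquist-rate signal is multiplied by a random ±1 chipping sequence, summed over blocks (integrate-and-dump), and recovered with a sparse solver. In the sketch below an orthonormal DCT basis stands in for the frequency-sparse model so everything stays real-valued, and basis pursuit is written as a linear program; all sizes and names are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

rng = np.random.default_rng(1)
W, R, K = 256, 32, 3              # Nyquist rate, sub-Nyquist rate, sparsity (illustrative)

# K-sparse signal in an orthonormal DCT basis (real-valued stand-in for the
# frequency-sparse model in this toy sketch).
Psi = dct(np.eye(W), axis=0, norm="ortho")
alpha = np.zeros(W)
alpha[rng.choice(W, K, replace=False)] = rng.standard_normal(K)
x = Psi @ alpha

# Random demodulator: multiply by a +/-1 chipping sequence, sum over blocks.
chips = rng.choice([-1.0, 1.0], size=W)
H = np.kron(np.eye(R), np.ones(W // R))   # block integrate-and-dump
Phi = H @ np.diag(chips)                  # overall R x W measurement operator
y = Phi @ x

# Basis pursuit (min ||a||_1 s.t. Phi Psi a = y) written as a linear program.
A = Phi @ Psi
res = linprog(np.ones(2 * W), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
alpha_hat = res.x[:W] - res.x[W:]
print("recovery error:", np.linalg.norm(alpha_hat - alpha))
```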
We investigate the problem of a monostatic pulse-Doppler radar transceiver trying to detect targets, sparsely populated in the radar's unambiguous time-frequency region. Several past works employ compressed sensing (CS) algorithms for this type of problem, but either do not address sample rate reduction, impose constraints on the radar transmitter, propose CS recovery methods with prohibitive dictionary size, or perform poorly in noisy conditions. Here we describe a sub-Nyquist sampling and recovery approach called Doppler focusing which addresses all of these problems: it performs low-rate sampling and digital processing, imposes no restrictions on the transmitter, and uses a CS dictionary whose size does not increase with the number of pulses P. Furthermore, in the presence of noise, Doppler focusing enjoys a signal-to-noise ratio (SNR) improvement which scales linearly with P, obtaining good detection performance even at SNR as low as -25 dB. The recovery is based on the Xampling framework, which allows reducing the number of samples needed to accurately represent the signal, directly in the analog-to-digital conversion process. After sampling, the entire digital recovery process is performed on the low-rate samples without having to return to the Nyquist rate. Finally, our approach can be implemented in hardware using a previously suggested Xampling radar prototype.
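The core of Doppler focusing is a coherent sum of the per-pulse samples with a phase that depends on the Doppler frequency under test, which boosts SNR roughly in proportion to the number of pulses P. The sketch below shows only that focusing step on toy, on-grid data; the signal model and parameter names are illustrative assumptions rather than the paper's exact processing chain.

```python
import numpy as np

# Illustrative Doppler-focusing sketch on synthetic per-pulse samples.
rng = np.random.default_rng(2)
P, N = 64, 128                    # pulses, fast-time samples per pulse (toy sizes)
tau = 1.0                         # pulse repetition interval (normalized)

# One target: delay bin d0, Doppler frequency nu0 (assumed on-grid for clarity).
d0, nu0 = 40, 2 * np.pi * 11 / (P * tau)
pulses = np.zeros((P, N), dtype=complex)
pulses[:, d0] = np.exp(1j * nu0 * tau * np.arange(P))       # slow-time phase ramp
pulses += 0.5 * (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N)))

# Doppler focusing: coherently combine the pulses for each Doppler under test.
nu_grid = 2 * np.pi * np.arange(P) / (P * tau)
focused = np.exp(-1j * np.outer(nu_grid, np.arange(P) * tau)) @ pulses  # (P, N) map

# The focused map peaks near (nu0, d0); noise is averaged down across the P pulses.
idx = np.unravel_index(np.argmax(np.abs(focused)), focused.shape)
print("estimated (Doppler bin, delay bin):", idx)
```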
The rapidly developing area of compressed sensing suggests that a sparse vector lying in an arbitrary high-dimensional space can be accurately recovered from only a small set of non-adaptive linear measurements. Under appropriate conditions on the measurement matrix, the entire information about the original sparse vector is captured in the measurements, and can be recovered using efficient polynomial methods. The vector model has been extended both theoretically and practically to a finite set of sparse vectors sharing a common non-zero location set. In this paper, we treat a broader framework in which the goal is to recover a possibly infinite set of jointly sparse vectors. Extending existing recovery methods to this model is difficult due to the infinite structure of the sparse vector set. Instead, we prove that the entire infinite set of sparse vectors can be recovered by solving a single, reduced-size finite-dimensional problem, corresponding to recovery of a finite set of sparse vectors. We then show that the problem can be further reduced to the basic recovery of a single sparse vector by randomly combining the measurement vectors. Our approach results in exact recovery of both countable and uncountable sets as it does not rely on discretization or heuristic techniques. To efficiently recover the single sparse vector produced by the last reduction step, we suggest an empirical boosting strategy that improves the recovery ability of any given sub-optimal method for recovering a sparse vector. Numerical experiments on random data demonstrate that when applied to infinite sets our strategy outperforms discretization techniques in terms of both run time and empirical recovery rate. In the finite model, our boosting algorithm is characterized by fast run time and a recovery rate superior to popular known methods. Index Terms: Basis pursuit, compressed sensing, multiple measurement vectors (MMV), orthogonal matching pursuit (OMP), sparse representation.
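The last reduction step mentioned above (collapsing the measurement vectors into a single vector by a random combination, which preserves the joint support with probability one) is simple to write down. A minimal sketch, with plain OMP standing in for the "given sub-optimal method" and illustrative dimensions:

```python
import numpy as np

def omp(A, y, k):
    """Plain OMP used here as the single-vector solver (illustrative choice)."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    return sorted(support)

def mmv_by_random_combination(A, Y, k, rng):
    """Reduce the MMV problem Y = A X (row-sparse X) to a single-vector problem:
    y = Y a for a random a shares the row support of X with probability one."""
    a = rng.standard_normal(Y.shape[1])
    support = omp(A, Y @ a, k)
    X = np.zeros((A.shape[1], Y.shape[1]))
    X[support, :], *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
    return X, support
```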
A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate typical multicarrier communication channels.
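As a concrete instance of the Prony/annihilating-filter connection drawn above, the sketch below estimates the frequencies of a sum of K complex exponentials by solving for the annihilating filter and rooting it. The noise-free toy data and names are illustrative assumptions; as the survey notes, Pisarenko and MUSIC refine this idea for noisy data.

```python
import numpy as np

def prony_frequencies(x, K):
    """Annihilating-filter (Prony) sketch for x[n] = sum_k c_k exp(1j*w_k*n), noise-free."""
    N = len(x)
    # Solve sum_{i=0}^{K} h[i] x[n-i] = 0 with h[0] = 1, for n = K, ..., N-1.
    T = np.array([[x[n - i] for i in range(1, K + 1)] for n in range(K, N)])
    rhs = -np.array([x[n] for n in range(K, N)])
    h_tail, *_ = np.linalg.lstsq(T, rhs, rcond=None)
    roots = np.roots(np.concatenate(([1.0], h_tail)))
    return np.sort(np.angle(roots))        # estimated frequencies w_k in (-pi, pi]

# Toy check with two exponentials.
w_true = np.array([0.4, 1.3])
n = np.arange(16)
x = np.exp(1j * w_true[0] * n) + 0.7 * np.exp(1j * w_true[1] * n)
print(prony_frequencies(x, 2))             # close to [0.4, 1.3]
```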
Representing a continuous-time signal by a set of samples is a classical problem in signal processing. We study this problem under the additional constraint that the samples are quantized or compressed in a lossy manner under a limited bitrate budget. To this end, we consider a combined sampling and source coding problem in which an analog stationary Gaussian signal is reconstructed from its encoded samples. These samples are obtained by a set of bounded linear functionals of the continuous-time path, with a limitation on the average number of samples obtained per unit time. We provide a full characterization of the minimal distortion in terms of the sampling frequency, the bitrate, and the signal's spectrum. Assuming that the signal's energy is not uniformly distributed over its spectral support, we show that for each compression bitrate there exists a critical sampling frequency smaller than the Nyquist rate, such that the distortion in signal reconstruction when sampling at this frequency is minimal. Our results can be seen as an extension of the classical sampling theorem for bandlimited random processes, in the sense that they describe the minimal amount of excess distortion in the reconstruction due to lossy compression of the samples, and provide the minimal sampling frequency required in order to achieve this distortion. Finally, we compare the fundamental limits of the combined source coding and sampling problem to the performance of pulse code modulation (PCM), where each sample is quantized by a scalar quantizer using a fixed number of bits.
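For context, the classical full-observation distortion-rate tradeoff that the combined problem above extends is the reverse-waterfilling characterization for a stationary Gaussian source with power spectral density S(f); the sampled-and-coded setting replaces S(f) with a sampling-dependent spectrum, which is where a critical sampling frequency below the Nyquist rate emerges. A standard statement of the classical tradeoff, parameterized by the water level θ:

```latex
% Reverse waterfilling for a stationary Gaussian source with PSD S(f),
% mean-squared distortion D and bitrate R parameterized by the water level \theta:
\begin{aligned}
D(\theta) &= \int_{-\infty}^{\infty} \min\{\theta,\ S(f)\}\, df, \\
R(\theta) &= \frac{1}{2} \int_{-\infty}^{\infty} \max\!\Big\{0,\ \log_2 \frac{S(f)}{\theta}\Big\}\, df .
\end{aligned}
```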
Shannon's determination of the capacity of the linear Gaussian channel has posed a magnificent challenge to succeeding generations of researchers. This paper surveys how this challenge has been met during the past half century. Orthogonal minimum-bandwidth modulation techniques and channel capacity are discussed. Binary coding techniques for low-signal-to-noise ratio (SNR) channels and nonbinary coding techniques for high-SNR channels are reviewed. Recent developments, which now allow capacity to be approached on any linear Gaussian channel, are surveyed. These new capacity-approaching techniques include turbo coding and decoding, multilevel coding, and combined coding/precoding for intersymbol-interference channels.
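The benchmark throughout this survey is Shannon's capacity of the band-limited additive white Gaussian noise channel, which for bandwidth W, average power P, and noise power spectral density N_0/2 reads:

```latex
C = W \log_2\!\left(1 + \frac{P}{N_0 W}\right) \quad \text{bits per second.}
```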
Traditional sampling theories consider the problem of reconstructing an unknown signal $x$ from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that $x$ lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which $x$ lies in a union of subspaces. In this paper we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which $x$ lies in a sum of $k$ subspaces, chosen from a larger set of $m$ possibilities. The samples are modelled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose non-zero elements appear in fixed blocks. We then propose a mixed $\ell_2/\ell_1$ program for block-sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on RIP we also prove stability of our approach in the presence of noise and modelling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.
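The mixed $\ell_2/\ell_1$ program referred to above has the following standard noiseless form for a vector partitioned into blocks $x[1],\dots,x[m]$ and measurements $y = A x$ (the noisy variant and the constants in the recovery guarantees follow the paper):

```latex
\min_{x} \; \sum_{b=1}^{m} \big\lVert x[b] \big\rVert_2
\quad \text{subject to} \quad y = A x .
```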
Optical performance monitoring (OPM) is an important issue for proper operation of next-generation optical networks. Among various monitored parameters, the optical signal-to-noise ratio (OSNR) and fiber transmission impairments such as chromatic dispersion (CD), polarization mode dispersion (PMD), and polarization dependent loss (PDL) receive special attention, because they provide information about the channel quality, which helps to manage the network. Several methods have been proposed for monitoring tasks, based on pilot tones, RF tones, asynchronous histograms, and fiber nonlinear effects. Most of them need costly devices, tap optical power from the channel, and introduce transmission overhead. In this research, by contrast, we investigate OPM based on digital coherent receivers, which overcomes such difficulties and ensures cost-efficient, robust, and reliable monitoring. Linear channel impairments such as CD, PMD, and PDL are monitored from the transfer functions of adaptive filters. A digital coherent receiver allows polarization demultiplexing and equalization of all these impairments by using four finite-impulse-response (FIR) filters structured in a two-by-two butterfly configuration. After the filters are adapted by a suitable algorithm, we can construct a frequency-dependent two-by-two matrix whose four elements are the transfer functions of the adapted FIR filters. The inverse of this matrix is called the monitoring matrix; it can be approximated as the transfer matrix of the channel and contains the combined effects of CD, PMD, and PDL. A precise algorithm is required to separate out the impairments from this matrix. We propose a simple and unified algorithm to separate out CD, differential group delay (DGD), PDL, and second-order PMD from the monitoring matrix. The components of second-order PMD, polarization-dependent chromatic dispersion (PCD) and depolarization (DEP) of the principal states of polarization, are obtained separately. This algorithm has the advantage that individual impairments can be estimated directly from the monitoring matrix without any matrix decomposition; thus it enables accurate estimation of the impairments even when the transmitted signal suffers from distortion stemming from various origins. Also, no additional hardware is required for our proposed algorithm. For filter adaptation, we use the constant-modulus algorithm (CMA), as it enables efficient long-tap-filter adaptation even in the presence of large laser phase noise, unlike the commonly used decision-directed least-mean-square (DD-LMS) algorithm. However, CMA can suffer from the singularity problem, in which both output ports of the butterfly configuration converge to the same polarization tributary. Consequently, we avoid the singularity problem by introducing a training mode in CMA. In the training mode, the LMS algorithm is used to determine in which output port of the butterfly configuration each polarization tributary appears.
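A minimal sketch of the CMA adaptation mentioned above for the two-by-two butterfly of FIR filters (unit-modulus target, single-sample stochastic-gradient update). Tap counts, step size, and variable names are illustrative assumptions, and the training-mode LMS stage used to avoid the singularity problem is omitted.

```python
import numpy as np

def cma_butterfly_step(w, x_buf, mu=1e-3):
    """One CMA update of a 2x2 butterfly equalizer (illustrative sketch).
    w: complex taps of shape (2, 2, L) mapping the two input polarizations to two outputs.
    x_buf: most recent L received samples per input polarization, shape (2, L).
    Returns the two equalizer outputs and the updated taps."""
    out = np.array([np.sum(w[o, 0] * x_buf[0]) + np.sum(w[o, 1] * x_buf[1])
                    for o in range(2)])
    for o in range(2):
        err = 1.0 - np.abs(out[o]) ** 2            # constant-modulus error (radius R = 1)
        for i in range(2):
            w[o, i] += mu * err * out[o] * np.conj(x_buf[i])
    return out, w
```

After convergence, the frequency responses of the four tap vectors form the two-by-two matrix whose inverse the thesis calls the monitoring matrix.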
This paper presents an account of the current state of sampling, 50 years after Shannon's formulation of the sampling theorem. The emphasis is on regular sampling, where the grid is uniform. This topic has benefited from a strong research revival during the past few years, thanks in part to the mathematical connections that were made with wavelet theory. To introduce the reader to the modern, Hilbert-space formulation, we reinterpret Shannon's sampling procedure as an orthogonal projection onto the subspace of band-limited functions. We then extend the standard sampling paradigm for a representation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets. Practically, this allows for simpler (and possibly more realistic) interpolation models, which can be used in conjunction with a much wider class of (anti-aliasing) prefilters that are not necessarily ideal low-pass. We summarize and discuss the results available for the determination of the approximation error and of the sampling rate when the input of the system is essentially arbitrary, e.g., nonbandlimited. We also review variations of sampling that can be understood from the same unifying perspective. These include wavelets, multiwavelets, Papoulis generalized sampling, finite elements, and frames. Irregular sampling and radial basis functions are briefly mentioned.
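In the shift-invariant setting described above, a representative reconstruction formula (for a unit sampling step, in the consistent-sampling spirit of this framework; the notation here is ours) expresses the recovered function in terms of a generator φ and digitally filtered samples:

```latex
\tilde f(x) = \sum_{k \in \mathbb{Z}} d[k]\, \varphi(x - k),
\qquad d = q * c,
\qquad c[k] = \langle f,\ \psi(\cdot - k)\rangle ,
```

where ψ is the (not necessarily ideal) analysis prefilter and q is a digital correction filter, built from the cross-correlation sequence between φ and ψ, chosen so that resampling the reconstruction reproduces the measured samples.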
A stylized compressed sensing radar is proposed in which the time-frequency plane is discretized into an N × N grid. Assuming the number of targets K is small (i.e., K ≪ N²), then we can transmit a sufficiently "incoherent" pulse and employ the techniques of compressed sensing to reconstruct the target scene. A theoretical upper bound on the sparsity K is presented. Numerical simulations verify that even better performance can be achieved in practice. This novel compressed sensing approach offers great potential for better resolution over classical radar.
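A toy version of the discretized scheme above: build a dictionary of time- and frequency-shifted copies of an "incoherent" transmit pulse on an N × N grid and recover the K target reflectivities with a greedy solver (OMP here, as a stand-in for the sparse-recovery step; all sizes and names are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 32, 3                               # grid size and number of targets (toy)
pulse = rng.choice([-1.0, 1.0], size=N) / np.sqrt(N)   # "incoherent" probing pulse

# Dictionary column (d, f): pulse delayed by d samples and modulated to Doppler bin f.
n = np.arange(N)
cols = [np.roll(pulse, d) * np.exp(2j * np.pi * f * n / N)
        for d in range(N) for f in range(N)]
A = np.stack(cols, axis=1)                 # N x N^2 delay-Doppler dictionary

# Sparse target scene and received echo.
scene = np.zeros(N * N, dtype=complex)
scene[rng.choice(N * N, K, replace=False)] = 1.0 + 0.5j
y = A @ scene

# OMP recovery of the K reflectivities (stand-in for the CS solver).
residual, support = y.copy(), []
for _ in range(K):
    support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
print("recovered grid indices:", sorted(support))
```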
While the recent theory of compressed sensing provides an opportunity to overcome the Nyquist limit in recovering sparse signals, a solution approach usually takes the form of an inverse problem for the unknown signal, which is crucially dependent on the specific signal representation. In this paper, we propose a drastically different two-step Fourier compressive sampling framework in the continuous domain that can be implemented as a measurement-domain interpolation, after which signal reconstruction can be done using classical analytic reconstruction methods. The main idea originates from the fundamental duality between sparsity in the primary space and the low-rankness of a structured matrix in the spectral domain, which shows that a low-rank interpolator in the spectral domain can enjoy all the benefits of sparse recovery with performance guarantees. Most notably, the proposed low-rank interpolation approach can be regarded as a generalization of recent spectral compressed sensing to recover a large class of finite rate of innovation (FRI) signals at near-optimal sampling rate. Moreover, for the case of cardinal representation, we can show that the proposed low-rank interpolation will benefit from inherent regularization and the optimal incoherence parameter. Using the powerful dual certificates and golfing scheme, we show that the new framework still achieves the near-optimal sampling rate for a general class of FRI signal recovery, and the sampling rate can be further reduced for the class of cardinal splines. Numerical results using various types of FRI signals confirmed that the proposed low-rank interpolation approach has a significantly better phase transition than conventional CS approaches. Index Terms: Compressed sensing, signals of finite rate of innovation, spectral compressed sensing, low-rank matrix completion, dual certificates, golfing scheme.
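One simple way to act on the sparsity/low-rank duality described above is alternating projection between the set of rank-r matrices and the set of Hankel matrices built from the Fourier samples (a Cadzow-style stand-in for the nuclear-norm interpolator analyzed in the paper; the dimensions and the denoising use case are illustrative assumptions).

```python
import numpy as np

def hankel(v, p):
    """p x (len(v)-p+1) Hankel matrix of the vector v."""
    return np.array([v[i:i + len(v) - p + 1] for i in range(p)])

def cadzow(v, r, p, iters=50):
    """Alternating projections: rank-r truncation <-> anti-diagonal (Hankel) averaging."""
    v = v.astype(complex).copy()
    for _ in range(iters):
        H = hankel(v, p)
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :r] * s[:r]) @ Vh[:r]                 # rank-r projection
        out, counts = np.zeros(len(v), complex), np.zeros(len(v))
        for i in range(H.shape[0]):
            for j in range(H.shape[1]):                 # average each anti-diagonal
                out[i + j] += H[i, j]
                counts[i + j] += 1
        v = out / counts
    return v

# Toy use: denoise uniform Fourier samples of a 2-spike (FRI) signal.
rng = np.random.default_rng(4)
n = np.arange(64)
clean = np.exp(2j * np.pi * 0.11 * n) + 0.8 * np.exp(2j * np.pi * 0.37 * n)
noisy = clean + 0.3 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
print(np.linalg.norm(cadzow(noisy, r=2, p=20) - clean) / np.linalg.norm(clean))
```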
We introduce and analyze an abstract framework, and corresponding method, for compressed sensing in infinite dimensions. This extends the existing theory from signals in finite-dimensional vector spaces to the case of separable Hilbert spaces. We explain why such a new theory is necessary, and demonstrate that existing finite-dimensional techniques are ill-suited for solving a number of important problems. This work stems from recent developments in generalized sampling theorems for classical (Nyquist rate) sampling that allow for reconstructions in arbitrary bases. The main conclusion of this paper is that one can extend these ideas to allow for significant subsampling of sparse or compressible signals. The key to these developments is the introduction of two new concepts in sampling theory, the stable sampling rate and the balancing property, which specify how to appropriately discretize the fundamentally infinite-dimensional reconstruction problem.
Coarse quantization at the base station (BS) of a massive multiuser (MU) multiple-input multiple-output (MIMO) wireless system promises significant power and cost savings. Coarse quantization also enables significant reductions of the raw analog-to-digital converter (ADC) data that must be transferred from a spatially-separated antenna array to the baseband processing unit. The theoretical limits as well as practical transceiver algorithms for such quantized MU-MIMO systems operating over frequency-flat, narrowband channels have been studied extensively. However, the practically relevant scenario where such communication systems operate over frequency-selective, wideband channels is less well understood. This paper investigates the uplink performance of a quantized massive MU-MIMO system that deploys orthogonal frequency-division multiplexing (OFDM) for wideband communication. We propose new algorithms for quantized maximum a-posteriori (MAP) channel estimation and data detection, and we study the associated performance/quantization trade-offs. Our results demonstrate that coarse quantization (e.g., four to six bits, depending on the ratio between the number of BS antennas and the number of users) in massive MU-MIMO-OFDM systems entails virtually no performance loss compared to the infinite-precision case at no additional cost in terms of baseband processing complexity. Index Terms: Analog-to-digital conversion, convex optimization, forward-backward splitting (FBS), frequency-selective channels, massive multiuser multiple-input multiple-output (MU-MIMO), maximum a-posteriori (MAP) channel estimation, minimum mean-square error (MMSE) data detection, orthogonal frequency-division multiplexing (OFDM), quantization.
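A generic sketch of the forward-backward splitting (FBS) iteration named in the index terms, for a problem of the form min_x f(x) + g(x) with smooth f and prox-friendly g. The data-fidelity term, prox operator, and step size used in the paper's quantized MAP estimators are problem-specific, so this only shows the template with a soft-threshold prox as a placeholder.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (placeholder regularizer for the sketch)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def fbs(grad_f, prox_g, x0, step, iters=200):
    """Forward-backward splitting: gradient step on f, then proximal step on g."""
    x = x0.copy()
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Toy instance: least squares + l1 (LASSO) solved with the FBS template.
rng = np.random.default_rng(5)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:4] = [3.0, -2.0, 1.5, 2.5]
y = A @ x_true
lam = 0.1
x_hat = fbs(grad_f=lambda x: A.T @ (A @ x - y),
            prox_g=lambda z, t: soft_threshold(z, lam * t),
            x0=np.zeros(100),
            step=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.linalg.norm(x_hat - x_true))
```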
This paper develops a mathematical theory of super-resolution. Broadly speaking, super-resolution is the problem of recovering the fine details of an object (the high end of its spectrum) from coarse-scale information only (samples at the low end of the spectrum). Suppose we have many point sources at unknown locations in [0, 1] and with unknown complex-valued amplitudes. We only observe Fourier samples of this object up until a frequency cutoff f_c. We show that one can super-resolve these point sources with infinite precision, i.e., recover the exact locations and amplitudes, by solving a simple convex optimization problem, which can essentially be reformulated as a semidefinite program. This holds provided that the distance between sources is at least 2/f_c. This result extends to higher dimensions and other models. In one dimension, for instance, it is possible to recover a piecewise smooth function by resolving the discontinuity points with infinite precision as well. We also show that the theory and methods are robust to noise. In particular, in the discrete setting we develop some theoretical results explaining how the accuracy of the super-resolved signal is expected to degrade when both the noise level and the super-resolution factor vary.
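A discretized stand-in for the recovery problem above: given the low-frequency Fourier samples (up to a cutoff f_c) of a spike train on a fine grid, recover the spikes by l1 minimization. The paper's method is grid-free total-variation minimization, typically solved as a semidefinite program; the fine grid, sizes, and LP solver here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

N, fc = 128, 12                         # fine grid size and frequency cutoff (toy)

# Sparse spike train with spacing well above the 2/fc resolution limit.
x = np.zeros(N)
x[[10, 45, 90]] = [1.0, -0.7, 0.4]

# Low-pass Fourier measurements: frequencies -fc..fc only (real/imag stacked).
k = np.arange(-fc, fc + 1)
Fmat = np.exp(-2j * np.pi * np.outer(k, np.arange(N)) / N)
y = Fmat @ x
A = np.vstack([Fmat.real, Fmat.imag])
b = np.concatenate([y.real, y.imag])

# l1 minimization on the grid (LP with x split into positive/negative parts).
res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]
print("support found:", np.nonzero(np.abs(x_hat) > 1e-6)[0])
```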