Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. Our primary design goals are efficient hardware implementation and low computational load on the supporting digital processing. We propose a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then lowpass filtered and sampled uniformly at a low rate, which is orders of magnitude smaller than Nyquist. Perfect recovery from the proposed samples is achieved under certain necessary and sufficient conditions. We also develop a digital architecture, which allows either reconstruction of the analog input, or processing of any band of interest at a low rate, that is, without interpolating to the high Nyquist rate. Numerical simulations demonstrate many engineering aspects: robustness to noise and mismodeling, potential hardware simplifications, real-time performance for signals with time-varying support, and stability to quantization effects. We compare our system with two previous approaches: periodic nonuniform sampling, which is bandwidth limited by existing hardware devices, and the random demodulator, which is restricted to discrete multitone signals and has a high computational load. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters.
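The per-channel pipeline described above (mix with a periodic waveform, lowpass filter, sample uniformly at a low rate) can be sketched numerically. This is a toy single-channel sketch, not the paper's design: the block length, the sequence period, and the ideal FFT-based lowpass filter are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4096      # Nyquist-rate samples in one processing block (toy scale)
M = 64        # period of the +/-1 mixing sequence, in Nyquist samples
q = N // M    # per-channel decimation factor (output has N/q = 64 samples)

def lowpass_decimate(v, q):
    """Ideal lowpass to the lowest 1/q of the band, then keep every q-th sample."""
    V = np.fft.fft(v)
    keep = len(v) // (2 * q)
    V[keep : len(v) - keep] = 0.0
    return np.fft.ifft(V).real[::q]

def mwc_channel(x, p):
    """One converter channel: mix with the M-periodic sequence p, then
    lowpass filter and sample uniformly at rate Nyquist/q."""
    return lowpass_decimate(x * np.tile(p, len(x) // len(p)), q)

p = rng.choice([-1.0, 1.0], size=M)              # one channel's periodic sequence
x = np.cos(2 * np.pi * 1234 * np.arange(N) / N)  # a tone far above the output band
samples = mwc_channel(x, p)                      # 64 low-rate samples of aliased content
```

The mixing folds every spectral slice into the output band, so the low-rate samples retain information about bands far above the sampling rate; the paper's recovery stage then untangles the folding.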
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look on many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review to practitioners wanting to join this emerging field, and as a reference for researchers that attempts to put some of the existing ideas in perspective of practical applications.
Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system's performance that supports the empirical observations.
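The acquisition front end (multiply by a random chipping sequence, then integrate and dump) can be modeled in a few lines. The sizes below are made-up toy values, and the nonlinear reconstruction step via convex programming is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

W = 512       # Nyquist rate in samples per second (hypothetical toy value)
R = 16        # chips accumulated per output sample (integrate-and-dump length)
M = W // R    # resulting sub-Nyquist sample count per second

# Random +/-1 chipping sequence: the "demodulator"
chips = rng.choice([-1.0, 1.0], size=W)

def random_demodulator(x):
    """Multiply the Nyquist-rate signal by the chipping sequence,
    then sum each run of R consecutive chips (integrate and dump)."""
    assert len(x) == W
    return (chips * x).reshape(M, R).sum(axis=1)

# Toy frequency-sparse input: K = 2 active tones out of W possible
n = np.arange(W)
x = np.cos(2 * np.pi * 37 * n / W) + 0.5 * np.sin(2 * np.pi * 101 * n / W)
y = random_demodulator(x)   # M = 32 measurements instead of W = 512
```

The map x → y is linear, so it can be written as an M x W matrix acting on the sparse frequency representation of x; recovery then fits the standard compressed-sensing template.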
We investigate the problem of a monostatic pulse-Doppler radar transceiver trying to detect targets, sparsely populated in the radar's unambiguous time-frequency region. Several past works employ compressed sensing (CS) algorithms to this type of problem, but either do not address sample rate reduction, impose constraints on the radar transmitter, propose CS recovery methods with prohibitive dictionary size, or perform poorly in noisy conditions. Here we describe a sub-Nyquist sampling and recovery approach called Doppler focusing which addresses all of these problems: it performs low rate sampling and digital processing, imposes no restrictions on the transmitter, and uses a CS dictionary whose size does not increase with the number of pulses P. Furthermore, in the presence of noise, Doppler focusing enjoys a signal-to-noise ratio (SNR) improvement which scales linearly with P, obtaining good detection performance even at SNR as low as -25 dB. The recovery is based on the Xampling framework, which allows reducing the number of samples needed to accurately represent the signal, directly in the analog-to-digital conversion process. After sampling, the entire digital recovery process is performed on the low rate samples without having to return to the Nyquist rate. Finally, our approach can be implemented in hardware using a previously suggested Xampling radar prototype.
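The SNR gain from focusing can be illustrated with a scalar toy model: one target whose Doppler produces a phase ramp across pulses, plus noise. The pulse count, Doppler value, and noise level below are made-up, and the full delay-Doppler processing of the paper is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

P = 100         # number of pulses
nu = 0.3        # target Doppler phase increment per pulse (rad), on the trial grid
idx = np.arange(P)

# One noisy complex sample per pulse; the target contributes a phase ramp.
pulses = (np.exp(1j * nu * idx)
          + 0.5 * (rng.standard_normal(P) + 1j * rng.standard_normal(P)))

def focus(v):
    """Doppler-focused amplitude at trial frequency v: the target's P
    contributions add coherently, while noise only grows like sqrt(P)."""
    return np.abs(np.sum(pulses * np.exp(-1j * v * idx))) / P

on_target = focus(nu)         # close to the unit target amplitude
off_target = focus(nu + 1.5)  # much smaller: no coherent accumulation
```

Coherent summation boosts the target by P while the noise power grows only by P, which is the linear SNR improvement the abstract refers to.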
A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. 
Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate typical multicarrier communication channels.
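The Prony/annihilating-filter connection emphasized above can be sketched compactly: from 2k noise-free samples of a sum of k exponentials, a small Toeplitz solve yields a filter whose roots are the signal's modes. The sizes and frequencies below are arbitrary illustrative choices:

```python
import numpy as np

def prony_roots(x, k):
    """Annihilating-filter (Prony) step: given 2k noise-free samples of
    x[n] = sum_j c_j * z_j**n with k distinct modes, recover the z_j."""
    # The filter h (h[0] = 1) satisfies sum_{l=0}^{k} h[l] x[n-l] = 0
    # for n = k, ..., 2k-1, giving a k x k Toeplitz system.
    T = np.array([[x[k - 1 + i - j] for j in range(k)] for i in range(k)])
    h = np.linalg.solve(T, -x[k : 2 * k])
    # The modes are the roots of the filter polynomial.
    return np.roots(np.concatenate(([1.0 + 0j], h)))

# Two unit-circle modes at (made-up) frequencies 0.3 and 1.1 rad/sample
z_true = np.exp(1j * np.array([0.3, 1.1]))
n = np.arange(4)
x = 2.0 * z_true[0] ** n + 1.0 * z_true[1] ** n   # 2k = 4 samples suffice

z_est = prony_roots(x, 2)
```

Pisarenko and MUSIC can be read as noise-robust variants of this same step, replacing the exact Toeplitz solve with subspace estimates from sample covariances.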
We address the problem of reconstructing a multi-band signal from its sub-Nyquist point-wise samples. To date, all reconstruction methods proposed for this class of signals assumed knowledge of the band locations. In this paper, we develop a non-linear blind perfect reconstruction scheme for multi-band signals which does not require the band locations. Our approach assumes an existing blind multi-coset sampling method. The sparse structure of multi-band signals in the continuous frequency domain is used to replace the continuous reconstruction with a single finite-dimensional problem, without the need for discretization. The resulting problem can be formulated within the framework of compressed sensing, and thus can be solved efficiently using known tractable algorithms from this emerging area. We also develop a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery. Our method ensures perfect reconstruction for a wide class of signals sampled at the minimal rate. Numerical experiments are presented demonstrating blind sampling and reconstruction with minimal sampling rate.
Index Terms: Kruskal-rank, Landau-Nyquist rate, multiband, multiple measurement vectors (MMV), nonuniform periodic sampling, orthogonal matching pursuit (OMP), signal representation, sparsity.
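The finite-dimensional problem mentioned above is a multiple-measurement-vector (MMV) system, for which greedy methods such as simultaneous OMP apply. Below is a generic MMV sketch with made-up dimensions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

m, n, L, k = 30, 60, 4, 3   # measurements, dictionary size, vectors, row sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)

support = [5, 22, 47]                 # shared non-zero rows of X
X = np.zeros((n, L))
X[support] = rng.standard_normal((k, L))
Y = A @ X                             # MMV measurements

def somp(A, Y, k):
    """Simultaneous OMP: greedily select atoms correlated with all columns."""
    residual, sel = Y.copy(), []
    for _ in range(k):
        scores = np.linalg.norm(A.T @ residual, axis=1)   # joint correlation
        sel.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, sel], Y, rcond=None)
        residual = Y - A[:, sel] @ coef
    return sorted(sel)

recovered = somp(A, Y, k)   # should match the shared support w.h.p.
```

In the blind sampling setting, the recovered joint support plays the role of the unknown band locations.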
Shannon's determination of the capacity of the linear Gaussian channel has posed a magnificent challenge to succeeding generations of researchers. This paper surveys how this challenge has been met during the past half century. Orthogonal minimum-bandwidth modulation techniques and channel capacity are discussed. Binary coding techniques for low-signal-to-noise ratio (SNR) channels and nonbinary coding techniques for high-SNR channels are reviewed. Recent developments, which now allow capacity to be approached on any linear Gaussian channel, are surveyed. These new capacity-approaching techniques include turbo coding and decoding, multilevel coding, and combined coding/precoding for intersymbol-interference channels.

2019-01-11

Optical performance monitoring (OPM) is an important issue for proper operation of next-generation optical networks. Among various monitored parameters, the optical signal-to-noise ratio (OSNR) and fiber transmission impairments such as chromatic dispersion (CD), polarization mode dispersion (PMD), and polarization dependent loss (PDL) receive special attention, because they provide information about the channel quality, which helps to manage the network. Several methods have been proposed for these monitoring tasks, based on pilot tones, RF tones, asynchronous histograms, and fiber nonlinear effects. Most of them need costly devices, tap optical power from the channel, and introduce transmission overhead. In this research, by contrast, we investigate OPM based on digital coherent receivers, which overcomes such difficulties and ensures cost-efficient, robust, and reliable monitoring. Linear channel impairments such as CD, PMD, and PDL are monitored from the transfer functions of adaptive filters. A digital coherent receiver allows polarization demultiplexing and equalization of all these impairments by using four finite-impulse-response (FIR) filters structured in a two-by-two butterfly configuration. After the filters are adapted by a suitable algorithm, we can construct a frequency-dependent two-by-two matrix whose four elements are the transfer functions of the adapted FIR filters. The inverse of this matrix, called the monitoring matrix, can be approximated as the transfer matrix of the channel and contains the combined effects of CD, PMD, and PDL. A precise algorithm is required to separate out the impairments from this matrix. We propose a simple and unified algorithm to separate out CD, differential group delay (DGD), PDL, and second-order PMD from the monitoring matrix. The components of second-order PMD, polarization-dependent chromatic dispersion (PCD), and depolarization (DEP) of the principal states of polarization are obtained separately.
This algorithm has the advantage that individual impairments can be estimated directly from the monitoring matrix without any matrix decomposition; thus it enables accurate estimation of the impairments even when the transmitted signal suffers from distortion stemming from various origins. Also, no additional hardware is required for our proposed algorithm. For filter adaptation, we use the constant-modulus algorithm (CMA), because, unlike the commonly used decision-directed least-mean-square (DD-LMS) algorithm, it enables efficient long-tap-filter adaptation even in the presence of large laser phase noise. However, CMA can suffer from the singularity problem, in which both output ports of the butterfly configuration converge to the same polarization tributary. Consequently, we avoid the singularity problem by introducing a training mode in CMA. In the training mode, the LMS algorithm is used to determine in which output port of the butterfly configuration each polarization tributary appears.
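The CMA update itself is compact. The sketch below adapts a single tap over a trivial one-tap real channel with a made-up step size; the thesis uses four such FIR filters in a 2x2 butterfly, which this toy does not reproduce:

```python
import numpy as np

def cma_update(w, x, mu, R=1.0):
    """One constant-modulus step for equalizer taps w, with output y = w^H x.
    Stochastic gradient descent on the CMA cost J = E[(|y|^2 - R)^2]."""
    y = np.vdot(w, x)                                  # w^H x (vdot conjugates w)
    w = w - mu * (np.abs(y) ** 2 - R) * np.conj(y) * x
    return w, y

# Toy run: a one-tap channel that scales unit-modulus symbols by 2.
# CMA should drive the single tap toward 0.5 (up to a phase ambiguity).
w = np.array([0.3 + 0j])
for n in range(500):
    s = 1.0 if n % 2 == 0 else -1.0                    # alternating BPSK symbols
    w, _ = cma_update(w, np.array([2.0 * s + 0j]), mu=0.01)
```

Note that the update never references the transmitted symbol, only the modulus target R, which is why CMA tolerates large phase noise but also why it can converge to the same tributary on both ports without a training phase.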
In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back of course to Shannon's R(D) theory in the case of Gaussian stationary processes, which says that transforming into a Fourier basis followed by block coding gives an optimal lossy compression technique; practical developments like transform-based image compression have been inspired by this result. In this paper we also discuss connections perhaps less familiar to the Information Theory community, growing out of the field of harmonic analysis. Recent harmonic analysis constructions, such as wavelet transforms and Gabor transforms, are essentially optimal transforms for transform coding in certain settings. Some of these transforms are under consideration for future compression standards. We discuss some of the lessons of harmonic analysis in this century. Typically, the problems and achievements of this field have involved goals that were not obviously related to practical data compression, and have used a language not immediately accessible to outsiders. Nevertheless, through an extensive generalization of what Shannon called the "sampling theorem," harmonic analysis has succeeded in developing new forms of functional representation which turn out to have significant data compression interpretations. We explain why harmonic analysis has interacted with data compression, and we describe some interesting recent ideas in the field that may affect data compression in the future.
Representing a continuous-time signal by a set of samples is a classical problem in signal processing. We study this problem under the additional constraint that the samples are quantized or compressed in a lossy manner under a limited bitrate budget. To this end, we consider a combined sampling and source coding problem in which an analog stationary Gaussian signal is reconstructed from its encoded samples. These samples are obtained by a set of bounded linear functionals of the continuous-time path, subject to a limitation on the average number of samples obtained per unit time. We provide a full characterization of the minimal distortion in terms of the sampling frequency, the bitrate, and the signal's spectrum. Assuming that the signal's energy is not uniformly distributed over its spectral support, we show that for each compression bitrate there exists a critical sampling frequency smaller than the Nyquist rate, such that the distortion in signal reconstruction when sampling at this frequency is minimal. Our results can be seen as an extension of the classical sampling theorem for bandlimited random processes in the sense that it describes the minimal amount of excess distortion in the reconstruction due to lossy compression of the samples, and provides the minimal sampling frequency required in order to achieve this distortion. Finally, we compare the fundamental limits in the combined source coding and sampling problem to the performance of pulse code modulation (PCM), where each sample is quantized by a scalar quantizer using a fixed number of bits.
This paper presents an account of the current state of sampling, 50 years after Shannon's formulation of the sampling theorem. The emphasis is on regular sampling, where the grid is uniform. This topic has benefited from a strong research revival during the past few years, thanks in part to the mathematical connections that were made with wavelet theory. To introduce the reader to the modern, Hilbert-space formulation, we reinterpret Shannon's sampling procedure as an orthogonal projection onto the subspace of band-limited functions. We then extend the standard sampling paradigm for a representation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets. Practically, this allows for simpler, and possibly more realistic, interpolation models, which can be used in conjunction with a much wider class of (anti-aliasing) prefilters that are not necessarily ideal low-pass. We summarize and discuss the results available for the determination of the approximation error and of the sampling rate when the input of the system is essentially arbitrary, e.g., nonbandlimited. We also review variations of sampling that can be understood from the same unifying perspective. These include wavelets, multiwavelets, Papoulis generalized sampling, finite elements, and frames. Irregular sampling and radial basis functions are briefly mentioned.
The availability of low-cost hardware such as CMOS cameras and microphones has fostered the development of Wireless Multimedia Sensor Networks (WMSNs), i.e., networks of wirelessly interconnected devices that are able to ubiquitously retrieve multimedia content such as video and audio streams, still images, and scalar sensor data from the environment. In this paper, the state of the art in algorithms, protocols, and hardware for wireless multimedia sensor networks is surveyed, and open research issues are discussed in detail. Architectures for WMSNs are explored, along with their advantages and drawbacks. Currently off-the-shelf hardware as well as available research prototypes for WMSNs are listed and classified. Existing solutions and open research issues at the application, transport, network, link, and physical layers of the communication protocol stack are investigated, along with possible cross-layer synergies and optimizations.

2008-07-29
Traditional sampling theories consider the problem of reconstructing an unknown signal $x$ from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that $x$ lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which $x$ lies in a union of subspaces. In this paper we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which $x$ lies in a sum of $k$ subspaces, chosen from a larger set of $m$ possibilities. The samples are modelled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose non-zero elements appear in fixed blocks. We then propose a mixed $\ell_2/\ell_1$ program for block sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on RIP we also prove stability of our approach in the presence of noise and modelling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.
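A mixed $\ell_2/\ell_1$ program of this kind can be solved with a simple proximal-gradient loop whose key ingredient is block soft-thresholding. The sketch below is a generic ISTA-style solver under made-up block structure and parameters, not the paper's algorithm or analysis:

```python
import numpy as np

def block_soft_threshold(v, blocks, tau):
    """Prox of tau * sum_b ||v_b||_2: shrink each block's l2 norm by tau."""
    out = np.zeros_like(v)
    for blk in blocks:
        nrm = np.linalg.norm(v[blk])
        if nrm > tau:
            out[blk] = (1.0 - tau / nrm) * v[blk]
    return out

def block_ista(A, y, blocks, tau, n_iter=200):
    """Proximal gradient for min 0.5*||Ax - y||^2 + tau * sum_b ||x_b||_2."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = block_soft_threshold(x - step * (A.T @ (A @ x - y)),
                                 blocks, tau * step)
    return x

# Tiny sanity check: with A = I the minimizer is block soft-thresholding of y.
A = np.eye(4)
y = np.array([3.0, 4.0, 0.1, 0.0])
blocks = [np.array([0, 1]), np.array([2, 3])]
x_hat = block_ista(A, y, blocks, tau=1.0)   # first block shrunk, second zeroed
```

The block prox zeroes entire blocks at once, which is exactly the behavior that distinguishes block sparsity from entrywise $\ell_1$ shrinkage.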
The rapidly developing area of compressed sensing suggests that a sparse vector lying in an arbitrary high-dimensional space can be accurately recovered from only a small set of non-adaptive linear measurements. Under appropriate conditions on the measurement matrix, the entire information about the original sparse vector is captured in the measurements, and can be recovered using efficient polynomial methods. The vector model has been extended both theoretically and practically to a finite set of sparse vectors sharing a common non-zero location set. In this paper, we treat a broader framework in which the goal is to recover a possibly infinite set of jointly sparse vectors. Extending existing recovery methods to this model is difficult due to the infinite structure of the sparse vector set. Instead, we prove that the entire infinite set of sparse vectors can be recovered by solving a single, reduced-size finite-dimensional problem, corresponding to recovery of a finite set of sparse vectors. We then show that the problem can be further reduced to the basic recovery of a single sparse vector by randomly combining the measurement vectors. Our approach results in exact recovery of both countable and uncountable sets, as it does not rely on discretization or heuristic techniques. To efficiently recover the single sparse vector produced by the last reduction step, we suggest an empirical boosting strategy that improves the recovery ability of any given sub-optimal method for recovering a sparse vector. Numerical experiments on random data demonstrate that, when applied to infinite sets, our strategy outperforms discretization techniques in terms of both run time and empirical recovery rate. In the finite model, our boosting algorithm is characterized by fast run time and a higher recovery rate than known popular methods.
Index Terms: Basis pursuit, compressed sensing, multiple measurement vectors (MMV), orthogonal matching pursuit (OMP), sparse representation.
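The final reduction step rests on a simple fact: a random combination of the measurement vectors stays consistent with the same matrix and inherits the joint support. A finite toy illustration, with all dimensions made up:

```python
import numpy as np

rng = np.random.default_rng(3)

m, n, L, k = 20, 50, 6, 3
A = rng.standard_normal((m, n))

support = [7, 19, 40]                 # shared non-zero rows of X
X = np.zeros((n, L))
X[support] = rng.standard_normal((k, L))
Y = A @ X                             # the set of measurement vectors

# Randomly combine the measurement vectors into a single vector.
a = rng.standard_normal(L)
y, x = Y @ a, X @ a

# y = A x is now an ordinary single-vector sparse recovery problem, and
# x inherits X's row support (almost surely), so any standard solver
# (e.g. OMP or basis pursuit) applies; once the support is found, all of
# X follows by least squares on A[:, support].
```

This is why the reduction loses nothing: solving for the single combined vector already reveals the support shared by the entire (possibly infinite) set.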
Timescale and, to a lesser extent, pitch-scale modifications of speech and audio signals are the subject of major theoretical and practical interest. Applications are numerous, including, to name but a few, text-to-speech synthesis (based on acoustical unit concatenation), transformation of voice characteristics, foreign language learning, and also audio monitoring or film/soundtrack post-synchronization. To fulfill the need for high-quality time- and pitch-scaling, a number of algorithms have been proposed recently, along with their real-time implementations, sometimes for very inexpensive hardware. It appears that most of these algorithms can be viewed as slight variations of a small number of basic schemes. This contribution reviews frequency-domain algorithms (the phase vocoder) and time-domain algorithms (Time-Domain Pitch-Synchronous Overlap/Add and the like) in the same framework. More recent variations of these schemes are also presented.
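The time-domain family reduces to windowing, a change of hop size, and overlap-add. Below is a deliberately naive sketch with fixed hops and a Hann window; TD-PSOLA would additionally align the analysis hops to pitch marks, and the phase vocoder would instead adjust per-bin phases in the STFT domain:

```python
import numpy as np

def ola_timescale(x, alpha, win_len=256):
    """Naive overlap-add time-scale modification: read the input with hop
    alpha * syn_hop, overlap-add frames with hop syn_hop.
    alpha > 1 shortens the signal (faster playback at unchanged pitch)."""
    syn_hop = win_len // 2
    ana_hop = int(round(alpha * syn_hop))
    win = np.hanning(win_len)
    n_frames = max(1, (len(x) - win_len) // ana_hop + 1)
    out = np.zeros(syn_hop * (n_frames - 1) + win_len)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        frame = win * x[i * ana_hop : i * ana_hop + win_len]
        out[i * syn_hop : i * syn_hop + win_len] += frame
        norm[i * syn_hop : i * syn_hop + win_len] += win
    return out / np.maximum(norm, 1e-8)   # compensate the window overlap

x = np.ones(2048)                  # constant test input
res = ola_timescale(x, alpha=2.0)  # roughly half as long, still constant
```

Without pitch-synchronous placement this scheme introduces periodicity artifacts on voiced speech, which is precisely the defect the PSOLA variants address.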
Light rays incident on a transparent object of uniform refractive index undergo deflections, which uniquely characterize the surface geometry of the object. Associated with each point on the surface is a deflection map (or spectrum) which describes the pattern of deflections in various directions. This article presents a novel method to efficiently acquire and reconstruct sparse deflection spectra induced by smooth object surfaces. To this end, we leverage the framework of Compressed Sensing (CS) in a particular implementation of a schlieren deflectometer, i.e., an optical system providing linear measurements of deflection spectra with programmable spatial light modulation patterns. In particular, we design those modulation patterns on the principle of spread spectrum CS for reducing the number of observations. Interestingly, the ability of our device to simultaneously observe the deflection spectra on a dense discretization of the object surface is related to a particular Multiple Measurement Vector (MMV) model. This scheme allows us to estimate both the noise power and the instrumental point spread function in a specific calibration procedure. We formulate the spectrum reconstruction task as the solving of a linear inverse problem regularized by an analysis sparsity prior using a translation invariant wavelet frame. Our results demonstrate the capability and advantages of using a CS based approach for deflectometric imaging both on simulated data and experimental deflectometric data. Finally, the paper presents an extension of our method showing how we can extract the main deflection direction in each point of the object surface from a few compressive measurements, without needing any costly reconstruction procedures. This compressive characterization is then confirmed with experimental results on simple plano-convex and multifocal intra-ocular lenses studying the evolution of the main deflection as a function of the object point location.