Starting from Shannon's celebrated 1948 channel coding theorem, we trace the evolution of channel coding from Hamming codes to capacity-approaching codes. We focus on the contributions that have led to the most significant improvements in performance vs. complexity for practical applications, particularly on the additive white Gaussian noise (AWGN) channel. We discuss algebraic block codes, and why they did not prove to be the way to get to the Shannon limit. We trace the antecedents of today's capacity-approaching codes: convolutional codes, concatenated codes, and other probabilistic coding schemes. Finally, we sketch some of the practical applications of these codes.
A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate typical multicarrier communication channels.
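To make one of the shared algorithms concrete, the following minimal sketch implements plain matching pursuit, the greedy method named above that appears in both sparse component analysis and compressed sensing; the dictionary, sparsity level, and dimensions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def matching_pursuit(A, y, n_iter):
    """Greedy matching pursuit: at each step, pick the (unit-norm) column of A
    most correlated with the residual and add its contribution to the estimate."""
    x = np.zeros(A.shape[1])
    r = y.copy()
    for _ in range(n_iter):
        c = A.T @ r
        j = int(np.argmax(np.abs(c)))
        x[j] += c[j]
        r = y - A @ x
    return x

# Toy example: a 3-sparse vector observed through a random unit-norm dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(256)
x_true[[10, 50, 200]] = [1.0, -2.0, 0.5]
y = A @ x_true
x_hat = matching_pursuit(A, y, n_iter=50)
print(np.linalg.norm(y - A @ x_hat))   # residual shrinks toward zero
```

Orthogonal matching pursuit refines this by re-solving a least-squares problem on the selected support at every iteration.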
The CDMA channel with randomly and independently chosen spreading sequences accurately models the situation where pseudonoise sequences span many symbol periods. Furthermore, its analysis provides a comparison baseline for CDMA channels with deterministic signature waveforms spanning one symbol period. We analyze the spectral efficiency (total capacity per chip) as a function of the number of users, spreading gain, and signal-to-noise ratio, and we quantify the loss in efficiency relative to an optimally chosen set of signature sequences and relative to multiaccess with no spreading. White Gaussian background noise and equal-power synchronous users are assumed. The following receivers are analyzed: a) optimal joint processing, b) single-user matched filtering, c) decorrelation, and d) MMSE linear processing.
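As a numerical companion to the large-system analysis summarized above, the sketch below iterates the standard fixed-point equation for the output SINR of the linear MMSE receiver under equal-power users and random spreading; the equation is the well-known large-system result from this line of work, and the SNR and load values are arbitrary illustrative choices.

```python
import numpy as np

def mmse_sinr(snr, beta, n_iter=200):
    """Fixed-point iteration for the large-system output SINR of the linear
    MMSE receiver with equal-power users and random spreading:
        sinr = snr / (1 + beta * snr / (1 + sinr)),
    where beta is the load (users per chip) and snr the per-user SNR."""
    sinr = snr
    for _ in range(n_iter):
        sinr = snr / (1.0 + beta * snr / (1.0 + sinr))
    return sinr

snr = 10.0   # per-user SNR, linear scale
for beta in (0.5, 1.0, 2.0):
    sinr = mmse_sinr(snr, beta)
    # Per-chip spectral efficiency of MMSE detection followed by single-user
    # decoding, using the complex-signaling convention beta * log2(1 + SINR).
    print(f"beta={beta}: SINR={sinr:.3f}, eff={beta * np.log2(1.0 + sinr):.3f} bit/s/Hz")
```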
In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back of course to Shannon's R(D) theory in the case of Gaussian stationary processes, which says that transforming into a Fourier basis followed by block coding gives an optimal lossy compression technique; practical developments like transform-based image compression have been inspired by this result. In this paper we also discuss connections perhaps less familiar to the Information Theory community, growing out of the field of harmonic analysis. Recent harmonic analysis constructions, such as wavelet transforms and Gabor transforms, are essentially optimal transforms for transform coding in certain settings. Some of these transforms are under consideration for future compression standards. We discuss some of the lessons of harmonic analysis in this century. Typically, the problems and achievements of this field have involved goals that were not obviously related to practical data compression, and have used a language not immediately accessible to outsiders. Nevertheless, through an extensive generalization of what Shannon called the "sampling theorem," harmonic analysis has succeeded in developing new forms of functional representation which turn out to have significant data compression interpretations. We explain why harmonic analysis has interacted with data compression, and we describe some interesting recent ideas in the field that may affect data compression in the future.
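A toy numerical illustration of transform coding in a Fourier basis, in the spirit of the Shannon R(D) argument mentioned above: transform a correlated signal, keep only the largest coefficients (a crude stand-in for allocating bits where the spectrum has energy), and measure the distortion. The signal model and truncation rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024

# A highly correlated Gaussian signal: a random walk, so most energy sits
# in the low frequencies.
x = np.cumsum(rng.standard_normal(n))
x -= x.mean()

# Transform step: orthonormal real FFT coefficients.
X = np.fft.rfft(x, norm="ortho")

# Crude "coding" step: zero out all but the k largest-magnitude coefficients.
k = 32
small = np.argsort(np.abs(X))[:-k]
X_coded = X.copy()
X_coded[small] = 0.0

x_hat = np.fft.irfft(X_coded, n=n, norm="ortho")
rel_mse = np.mean((x - x_hat) ** 2) / np.mean(x ** 2)
print(f"kept {k} of {len(X)} coefficients, relative MSE = {rel_mse:.4f}")
```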
Representing a continuous-time signal by a set of samples is a classical problem in signal processing. We study this problem under the additional constraint that the samples are quantized or compressed in a lossy manner under a limited bitrate budget. To this end, we consider a combined sampling and source coding problem in which an analog stationary Gaussian signal is reconstructed from its encoded samples. These samples are obtained by a set of bounded linear functionals of the continuous-time path, with a limitation on the average number of samples obtained per unit time available in this setting. We provide a full characterization of the minimal distortion in terms of the sampling frequency, the bitrate, and the signal's spectrum. Assuming that the signal's energy is not uniformly distributed over its spectral support, we show that for each compression bitrate there exists a critical sampling frequency smaller than the Nyquist rate, such that the distortion in signal reconstruction when sampling at this frequency is minimal. Our results can be seen as an extension of the classical sampling theorem for bandlimited random processes in the sense that it describes the minimal amount of excess distortion in the reconstruction due to lossy compression of the samples, and provides the minimal sampling frequency required in order to achieve this distortion. Finally, we compare the fundamental limits in the combined source coding and sampling problem to the performance of pulse code modulation (PCM), where each sample is quantized by a scalar quantizer using a fixed number of bits.
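For concreteness, a minimal sketch of the PCM benchmark mentioned at the end of the abstract: each Gaussian sample is quantized independently by a fixed-rate uniform scalar quantizer, and the empirical distortion is compared with the Gaussian distortion-rate function. The loading factor and bit budgets below are arbitrary illustrative choices.

```python
import numpy as np

def uniform_quantize(x, bits, loading=4.0):
    """Fixed-rate, mid-rise uniform scalar quantizer covering
    [-loading, +loading] (unit-variance source assumed) with 2**bits levels."""
    levels = 2 ** bits
    step = 2 * loading / levels
    idx = np.clip(np.floor(x / step), -levels // 2, levels // 2 - 1)
    return (idx + 0.5) * step

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)   # unit-variance Gaussian samples

for bits in (2, 4, 6, 8):
    mse = np.mean((x - uniform_quantize(x, bits)) ** 2)
    # Compare against the Gaussian distortion-rate function D(R) = 2**(-2R),
    # a lower bound that no fixed-rate quantizer can beat.
    print(f"{bits} bits: PCM MSE = {mse:.6f}, D(R) = {2.0 ** (-2 * bits):.6f}")
```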
We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay-Neal) codes are recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are "very good," in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binary-symmetric channel but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance of Gallager codes is almost as close to the Shannon limit as that of turbo codes.
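A full sum-product decoder is too long to reproduce here; the sketch below instead shows the simpler hard-decision bit-flipping decoder on a toy parity-check matrix, just to make the sparse-parity-check decoding viewpoint concrete. The code and channel are illustrative and not taken from the paper.

```python
import numpy as np

# A small parity-check matrix for the [7,4] Hamming code (its columns are the
# seven distinct nonzero 3-bit vectors); far denser than a real Gallager code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(H, y, max_iter=20):
    """Hard-decision bit flipping: repeatedly flip the bit involved in the
    largest number of unsatisfied parity checks until the syndrome is zero."""
    x = y.copy()
    for _ in range(max_iter):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            return x
        votes = H.T @ syndrome          # failing checks each bit participates in
        x[np.argmax(votes)] ^= 1
    return x

codeword = np.zeros(7, dtype=int)       # the all-zero codeword is always valid
received = codeword.copy()
received[2] ^= 1                        # single bit error
print(bit_flip_decode(H, received))     # recovers the all-zero codeword
```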
This paper studies randomly spread code-division multiple access (CDMA) and multiuser detection in the large-system limit using the replica method developed in statistical physics. Arbitrary input distributions and flat fading are considered. A generic multiuser detector in the form of the posterior mean estimator is applied before single-user decoding. The generic detector can be particularized to the matched filter, decorrelator, linear MMSE detector, the jointly or the individually optimal detector, and others. It is found that the detection output for each user, although in general asymptotically non-Gaussian conditioned on the transmitted symbol, converges as the number of users goes to infinity to a deterministic function of a "hidden" Gaussian statistic independent of the interferers. Thus the multiuser channel can be decoupled: each user experiences an equivalent single-user Gaussian channel, whose signal-to-noise ratio suffers a degradation due to the multiple-access interference. The uncoded error performance (e.g., symbol-error-rate) and the mutual information can then be fully characterized using the degradation factor, also known as the multiuser efficiency, which can be obtained by solving a pair of coupled fixed-point equations identified in this paper. Based on a general linear vector channel model, the results are also applicable to MIMO channels such as in multiantenna systems.
Index Terms: Channel capacity, code-division multiple access (CDMA), free energy, multiple-input multiple-output (MIMO) channel, multiuser detection, multiuser efficiency, replica method, statistical mechanics.
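In symbols, the decoupling result can be paraphrased as follows (standard notation, not copied from the paper): in the large-system limit, detection of user k behaves like estimation over a scalar Gaussian channel whose SNR is scaled by the multiuser efficiency obtained from the coupled fixed-point equations,

```latex
z_k \;=\; \sqrt{\eta\,\mathrm{snr}_k}\; x_k \;+\; w_k,
\qquad w_k \sim \mathcal{N}(0,1), \qquad \eta \in (0,1].
```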
A bandwidth- and power-efficient modulation scheme using M-ary differential phase-shift keying (DPSK)/differential quadrature amplitude modulation (DQAM) and low-density parity-check (LDPC) codes as component codes in a multilevel coding (MLC) scheme is proposed for optical transmission systems with direct detection. An MLC scheme with 2 b/s/Hz spectral efficiency based on block-circulant component codes provides a coding gain of 12.3 dB when compared to uncoded 8-DPSK, and 8.3 dB when compared to uncoded 4-DPSK (QDPSK).
Index Terms: Differential quadrature amplitude modulation (DQAM), low-density parity-check codes (LDPC), M-ary differential phase-shift keying (DPSK), Mach-Zehnder interferometer, multilevel coding, optical communications.
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look on many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review to practitioners wanting to join this emerging field, and as a reference for researchers that attempts to put some of the existing ideas in perspective of practical applications.
The problem of reliably reconstructing a function of sources over a multiple-access channel is considered. It is shown that there is no source-channel separation theorem even when the individual sources are independent. Joint source-channel strategies are developed that are optimal when the structure of the channel probability transition matrix and the function are appropriately matched. Even when the channel and function are mismatched, these computation codes often outperform separation-based strategies. Achievable distortions are given for the distributed refinement of the sum of Gaussian sources over a Gaussian multiple-access channel with a joint source-channel lattice code. Finally, computation codes are used to determine the multicast capacity of finite field multiple-access networks, thus linking them to network coding.
Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which, given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice, and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information.
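In rough notation (illustrative, paraphrasing the standard description rather than quoting the paper), relay m observes a channel-weighted sum of lattice codewords and aims at an integer-weighted one:

```latex
y_m \;=\; \sum_{l} h_{ml}\, x_l \;+\; z_m
\qquad\longrightarrow\qquad
\text{decode } \Big[\sum_{l} a_{ml}\, x_l\Big] \bmod \Lambda,
\qquad a_{ml} \in \mathbb{Z},
```

which, because the x_l are nested-lattice codewords, is itself decodable as a lattice point and maps back to a linear equation of the messages over the finite field.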
The Viterbi algorithm (VA) is a recursive optimal solution to the problem of estimating the state sequence of a discrete-time finite-state Markov process observed in memoryless noise. Many problems in areas such as digital communications can be cast in this form. This paper gives a tutorial exposition of the algorithm and of how it is implemented and analyzed. Applications to date are reviewed. Increasing use of the algorithm in a widening variety of areas is foreseen.
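To make the recursion concrete, here is a compact sketch of the algorithm for a toy two-state Markov chain observed through a memoryless noisy channel; the state space, transition, and emission probabilities are illustrative, not from the paper.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely state sequence of a finite-state Markov chain observed in
    memoryless noise.
    log_pi[i]   : log prob of starting in state i
    log_A[i, j] : log prob of transition i -> j
    log_B[i, o] : log prob of emitting symbol o from state i
    """
    n_states, T = log_A.shape[0], len(obs)
    metric = log_pi + log_B[:, obs[0]]           # path metrics at time 0
    back = np.zeros((T, n_states), dtype=int)    # survivor pointers
    for t in range(1, T):
        cand = metric[:, None] + log_A + log_B[None, :, obs[t]]
        back[t] = np.argmax(cand, axis=0)
        metric = np.max(cand, axis=0)
    path = [int(np.argmax(metric))]              # trace back the survivor
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two-state "sticky" chain observed through a noisy binary channel.
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.9, 0.1], [0.1, 0.9]])
log_B = np.log([[0.8, 0.2], [0.2, 0.8]])
print(viterbi(log_pi, log_A, log_B, obs=[0, 0, 1, 1, 1, 0]))
```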
A method is developed for representing any communication system geometrically. Messages and the corresponding signals are points in two "function spaces," and the modulation process is a mapping of one space into the other. Using this representation, a number of results in communication theory are deduced concerning expansion and compression of bandwidth and the threshold effect. Formulas are found for the maximum rate of transmission of binary digits over a system when the signal is perturbed by various types of noise. Some of the properties of "ideal" systems which transmit at this maximum rate are discussed. The equivalent number of binary digits per second for certain information sources is calculated.
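The best known of these formulas, for a signal of average power P perturbed by white Gaussian noise of power N in a channel of bandwidth W, is the familiar

```latex
C \;=\; W \log_2 \frac{P + N}{N} \quad \text{bits per second}.
```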
Coarse quantization at the base station (BS) of a massive multiuser (MU) multiple-input multiple-output (MIMO) wireless system promises significant power and cost savings. Coarse quantization also enables significant reductions of the raw analog-to-digital converter (ADC) data that must be transferred from a spatially-separated antenna array to the baseband processing unit. The theoretical limits as well as practical transceiver algorithms for such quantized MU-MIMO systems operating over frequency-flat, narrowband channels have been studied extensively. However, the practically relevant scenario where such communication systems operate over frequency-selective, wideband channels is less well understood. This paper investigates the uplink performance of a quantized massive MU-MIMO system that deploys orthogonal frequency-division multiplexing (OFDM) for wideband communication. We propose new algorithms for quantized maximum a-posteriori (MAP) channel estimation and data detection, and we study the associated performance/quantization trade-offs. Our results demonstrate that coarse quantization (e.g., four to six bits, depending on the ratio between the number of BS antennas and the number of users) in massive MU-MIMO-OFDM systems entails virtually no performance loss compared to the infinite-precision case at no additional cost in terms of baseband processing complexity.
Index Terms: Analog-to-digital conversion, convex optimization, forward-backward splitting (FBS), frequency-selective channels, massive multiuser multiple-input multiple-output (MU-MIMO), maximum a-posteriori (MAP) channel estimation, minimum mean-square error (MMSE) data detection, orthogonal frequency-division multiplexing (OFDM), quantization.
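The paper's own detectors are not reproduced here; as a generic illustration of the forward-backward splitting (FBS) structure named in the index terms, the sketch below alternates a gradient step on a least-squares data fit with a proximal step, here a simple projection onto the box [-1, 1] as a crude relaxation of a finite constellation. The sizes, channel model, and prior are illustrative assumptions.

```python
import numpy as np

def fbs_box_detect(H, y, n_iter=100):
    """Generic forward-backward splitting: a gradient (forward) step on the
    least-squares data fit, then a proximal (backward) step that here is
    projection onto the box [-1, 1]^U."""
    U = H.shape[1]
    x = np.zeros(U)
    tau = 1.0 / np.linalg.norm(H, 2) ** 2       # step size from the Lipschitz constant
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)                # forward step
        x = np.clip(x - tau * grad, -1.0, 1.0)  # backward (proximal) step
    return x

rng = np.random.default_rng(3)
B, U = 32, 8                                    # toy numbers of BS antennas and users
H = rng.standard_normal((B, U))
s = rng.choice([-1.0, 1.0], size=U)             # BPSK-like symbols
y = H @ s + 0.1 * rng.standard_normal(B)
print(np.sign(fbs_box_detect(H, y)))            # compare against the transmitted s
print(s)
```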
Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system's performance that supports the empirical observations.
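A discrete-time toy simulation of the acquisition front end described in this line of work (mix the input with a pseudorandom ±1 chipping sequence, then integrate and dump at the low rate); the grid sizes are illustrative assumptions, and the convex-programming recovery step is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
W = 1024      # Nyquist-rate grid (bandlimit, in samples per second)
R = 64        # low sampling rate: R measurements per W Nyquist samples
K = 3         # number of active tones

# K-sparse multitone signal on the Nyquist grid.
freqs = rng.choice(W // 2, size=K, replace=False)
t = np.arange(W) / W
x = sum(np.cos(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi)) for f in freqs)

# Random demodulator front end: mix with a random +/-1 chipping sequence,
# then integrate-and-dump over W/R consecutive chips (a crude lowpass filter
# followed by a low-rate sampler).
chips = rng.choice([-1.0, 1.0], size=W)
samples = (x * chips).reshape(R, W // R).sum(axis=1)
print(samples.shape)   # (64,) low-rate measurements of a W-dimensional signal
```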
Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. Our primary design goals are efficient hardware implementation and low computational load on the supporting digital processing. We propose a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then lowpass filtered and sampled uniformly at a low rate, which is orders of magnitude smaller than Nyquist. Perfect recovery from the proposed samples is achieved under certain necessary and sufficient conditions. We also develop a digital architecture, which allows either reconstruction of the analog input, or processing of any band of interest at a low rate, that is, without interpolating to the high Nyquist rate. Numerical simulations demonstrate many engineering aspects: robustness to noise and mismodeling, potential hardware simplifications, realtime performance for signals with time-varying support and stability to quantization effects. We compare our system with two previous approaches: periodic nonuniform sampling, which is bandwidth limited by existing hardware devices, and the random demodulator, which is restricted to discrete multitone signals and has a high computational load. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters.
We introduce Xampling, a unified framework for signal acquisition and processing of signals in a union of subspaces. The framework has two main functions: analog compression that narrows down the input bandwidth prior to sampling with commercial devices, and a nonlinear algorithm that then detects the input subspace prior to conventional signal processing. A representative union model of spectrally-sparse signals serves as a test-case to study these Xampling functions. We adopt three metrics for the choice of analog compression: robustness to model mismatch, required hardware accuracy, and software complexity. We conduct a comprehensive comparison between two sub-Nyquist acquisition strategies for spectrally-sparse signals, the random demodulator and the modulated wideband converter (MWC), in terms of these metrics and draw operative conclusions regarding the choice of analog compression. We then address low-rate signal processing and develop an algorithm for that purpose that enables convenient signal processing at sub-Nyquist rates from samples obtained by the MWC. We conclude by showing that a variety of other sampling approaches for different union classes fit nicely into our framework.
This paper presents an account of the current state of sampling, 50 years after Shannon's formulation of the sampling theorem. The emphasis is on regular sampling, where the grid is uniform. This topic has benefited from a strong research revival during the past few years, thanks in part to the mathematical connections that were made with wavelet theory. To introduce the reader to the modern, Hilbert-space formulation, we reinterpret Shannon's sampling procedure as an orthogonal projection onto the subspace of band-limited functions. We then extend the standard sampling paradigm for a representation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets. Practically, this allows for simpler, and possibly more realistic, interpolation models, which can be used in conjunction with a much wider class of (anti-aliasing) prefilters that are not necessarily ideal low-pass. We summarize and discuss the results available for the determination of the approximation error and of the sampling rate when the input of the system is essentially arbitrary; e.g., nonbandlimited. We also review variations of sampling that can be understood from the same unifying perspective. These include wavelets, multiwavelets, Papoulis generalized sampling, finite elements, and frames. Irregular sampling and radial basis functions are briefly mentioned.
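In the notation of this Hilbert-space view, the classical reconstruction and its shift-invariant generalization read as follows (standard formulas, paraphrased rather than quoted from the paper):

```latex
f(x) \;=\; \sum_{k \in \mathbb{Z}} f(k)\,\operatorname{sinc}(x - k),
\qquad \operatorname{sinc}(x) = \frac{\sin \pi x}{\pi x},
```

for signals bandlimited to [-1/2, 1/2], and more generally

```latex
f(x) \;=\; \sum_{k \in \mathbb{Z}} c_k\, \varphi(x - k),
```

where the generator φ may be a B-spline, a scaling function, or another template defining the shift-invariant space.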