This review covers the latest developments in continuous-variable quantum-state tomography of optical fields and photons, with special emphasis on its practical aspects and applications in quantum information technology. Optical homodyne tomography is reviewed as a method of reconstructing the state of light in a given optical mode. A range of relevant practical topics is discussed, such as state-reconstruction algorithms (with emphasis on the maximum-likelihood technique), the technology of time-domain homodyne detection, mode-matching issues, and the engineering of complex quantum states of light. The paper also surveys quantum-state tomography for the transverse spatial state (spatial mode) of the field in the special case of fields containing precisely one photon.
In this paper, we discuss building blocks that enable the exploitation of optical capacities beyond 100 Gb/s. Optical networks will benefit from more flexibility and agility in their network elements, especially from coherent transceivers. To achieve capacities of 400 Gb/s and more, coherent transceivers will operate at higher symbol rates. This will be made possible with higher-bandwidth components using new electro-optic technologies implemented with indium phosphide and silicon photonics. Digital signal processing will benefit from new algorithms. Multi-dimensional modulation, of which some formats already exist in current flexible coherent transceivers, will provide improved tolerance to noise and fiber nonlinearities. Constellation shaping will further improve these tolerances while allowing a finer granularity in the selection of capacity. Frequency-division multiplexing will also provide improved tolerance to the nonlinear characteristics of fibers. Algorithms with reduced computational complexity will allow the implementation, at high speed, of direct pre-compensation of nonlinear propagation effects. Advances in forward error correction will shrink the performance gap with Shannon's limit. At the network control and management level, new tools are being developed to achieve a more efficient utilization of networks. This will also allow for network virtualization, orchestration, and management. Finally, FlexEthernet and FlexOTN will be put in place to allow network operators to optimize capacity in their optical transport networks without manual changes to the client hardware.
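The constellation-shaping idea mentioned above can be illustrated with a short sketch. The snippet below draws symbol probabilities from a Maxwell-Boltzmann distribution over 8-PAM amplitudes; the levels and the shaping parameter `lam` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def maxwell_boltzmann(amplitudes, lam):
    """Maxwell-Boltzmann probabilities p(a) proportional to exp(-lam * a**2)."""
    w = np.exp(-lam * amplitudes ** 2)
    return w / w.sum()

# Illustrative 8-ary PAM amplitude levels
levels = np.array([-7.0, -5.0, -3.0, -1.0, 1.0, 3.0, 5.0, 7.0])

p_uniform = np.full(8, 1 / 8)
p_shaped = maxwell_boltzmann(levels, lam=0.05)

def entropy_bits(p):
    """Source entropy in bits per symbol."""
    return -np.sum(p * np.log2(p))

def mean_power(p):
    """Average transmit power under symbol distribution p."""
    return np.sum(p * levels ** 2)

# Shaping trades a little entropy (rate) for reduced average power,
# and sweeping lam gives the fine rate granularity mentioned above.
print(entropy_bits(p_shaped), mean_power(p_shaped), mean_power(p_uniform))
```

Sweeping `lam` from 0 (uniform) upward traces the rate-versus-power trade-off continuously, which is what gives shaped systems their fine capacity granularity.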
Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, and more. Advanced mathematical tools are required to extract meaningful information from these data and to take decisions pertaining to the proper functioning of the networks from the network-generated data. Among these mathematical tools, machine learning (ML) is regarded as one of the most promising approaches to perform network-data analysis and to enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity that optical networks have faced in recent years. This increase in complexity is due to the introduction of a large set of adjustable and interdependent system parameters (e.g., routing configurations, modulation formats, symbol rates, coding schemes) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing, and the compensation of nonlinear effects in optical fiber propagation. In this paper, we give an overview of the applications of ML in optical communications and networking. We classify and survey the relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude the paper by proposing new possible research directions.
Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. Our primary design goals are efficient hardware implementation and low computational load on the supporting digital processing. We propose a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then lowpass filtered and sampled uniformly at a low rate, which is orders of magnitude smaller than Nyquist. Perfect recovery from the proposed samples is achieved under certain necessary and sufficient conditions. We also develop a digital architecture, which allows either reconstruction of the analog input, or processing of any band of interest at a low rate, that is, without interpolating to the high Nyquist rate. Numerical simulations demonstrate many engineering aspects: robustness to noise and mismodeling, potential hardware simplifications, realtime performance for signals with time-varying support and stability to quantization effects. We compare our system with two previous approaches: periodic nonuniform sampling, which is bandwidth limited by existing hardware devices, and the random demodulator, which is restricted to discrete multitone signals and has a high computational load. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters.
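A minimal numerical sketch of a single modulated-wideband-converter channel, under simplifying assumptions: the sparse multiband input is modeled as one narrowband tone, and a moving-average filter stands in for the analog lowpass filter. The mixing sequence length and decimation factor are illustrative choices, not design values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate on a Nyquist-rate grid (the real hardware does this step in analog)
N = 4096                       # Nyquist-rate samples in the observation window
t = np.arange(N)

# Sparse multiband input: a single narrowband tone at an unknown carrier
x = np.cos(2 * np.pi * 0.37 * t)

# One MWC channel: mix with a periodic +/-1 chipping sequence ...
M = 64                         # chips per period of the mixing waveform
p = rng.choice([-1.0, 1.0], size=M)
mixed = x * np.tile(p, N // M)

# ... then lowpass filter and sample at a rate far below Nyquist.
R = 64                         # decimation factor (sub-Nyquist by 64x)
lowpassed = np.convolve(mixed, np.ones(R) / R, mode="same")
y = lowpassed[::R]             # the low-rate output samples

print(len(y), "low-rate samples instead of", N)
```

The mixing step aliases every spectral slice down to baseband, so the low-rate samples retain a (scrambled) view of the entire spectrum; recovery then reduces to a sparse inverse problem over the mixing coefficients.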
Several approximate nonlinear fiber propagation models have been proposed over the years. Recent reconsideration and extension of earlier modeling efforts have led to the formalization of the so-called Gaussian-noise (GN) model. The evidence collected so far suggests that the GN model is a relatively simple and, at the same time, sufficiently reliable tool for performance prediction of uncompensated coherent systems, characterized by a favorable accuracy-versus-complexity trade-off. This paper tries to pull together the recent results regarding the GN-model definition, understanding, relations to other models, validation, limitations, closed-form solutions, and approximations and, in general, its applications and implications in link analysis and optimization, also within a network environment.
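One closed-form consequence of the GN model is easy to demonstrate in a few lines: if nonlinear interference power grows cubically with launch power, the effective SNR P/(P_ASE + ηP³) peaks at P_opt = (P_ASE/2η)^(1/3). The coefficient values below are hypothetical, chosen only to make the optimum visible.

```python
import numpy as np

# Illustrative, hypothetical link parameters (linear units)
P_ase = 1e-4    # accumulated ASE noise power
eta = 1e-3      # GN-model NLI coefficient: P_nli = eta * P**3

def snr(p):
    """GN-model effective SNR at launch power p."""
    return p / (P_ase + eta * p ** 3)

# Setting d(snr)/dP = 0 gives P_ase = 2*eta*P**3, hence the closed form:
p_opt = (P_ase / (2 * eta)) ** (1 / 3)

# Cross-check against a brute-force search over launch powers
grid = np.linspace(0.01, 1.0, 1000)
p_best = grid[np.argmax(snr(grid))]
print(p_opt, p_best)
```

This is the familiar "nonlinear threshold" of uncompensated coherent links: below P_opt the system is ASE-limited, above it NLI dominates.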
A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. 
Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate typical multicarrier communication channels.
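As a concrete instance of the greedy recovery algorithms mentioned above, here is a compact orthogonal matching pursuit (a close relative of matching pursuit), sketched on a synthetic compressed-sensing problem; the dimensions and support are arbitrary illustrative choices.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Select the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit the coefficients on the selected support by least squares
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(1)
m, n, k = 40, 100, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[[7, 42, 90]] = [1.5, -2.0, 0.8]         # a 3-sparse signal
x_hat = omp(A, A @ x_true, k)
print(np.linalg.norm(x_hat - x_true))
```

The same greedy loop, with the dictionary swapped for delayed pulse shapes, is essentially how sparse multipath channel taps are estimated.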
We introduce Xampling, a unified framework for signal acquisition and processing of signals in a union of subspaces. The framework has two main functions: analog compression, which narrows down the input bandwidth prior to sampling with commercial devices, and a nonlinear algorithm that detects the input subspace prior to conventional signal processing. A representative union model of spectrally sparse signals serves as a test case to study these Xampling functions. We adopt three metrics for the choice of analog compression: robustness to model mismatch, required hardware accuracy, and software complexity. We conduct a comprehensive comparison between two sub-Nyquist acquisition strategies for spectrally sparse signals, the random demodulator and the modulated wideband converter (MWC), in terms of these metrics and draw operative conclusions regarding the choice of analog compression. We then address low-rate signal processing and develop an algorithm for that purpose that enables convenient signal processing at sub-Nyquist rates from samples obtained by the MWC. We conclude by showing that a variety of other sampling approaches for different union classes fit nicely into our framework.
In this paper, we implement an optical fiber communication system as an end-to-end deep neural network, including the complete chain of transmitter, channel model, and receiver. This approach enables the optimization of the transceiver in a single end-to-end process. We illustrate the benefits of this method by applying it to intensity modulation/direct detection (IM/DD) systems and show that we can achieve bit error rates below the 6.7% hard-decision forward error correction (HD-FEC) threshold. We model all components of the transmitter and receiver, as well as the fiber channel, and apply deep learning to find transmitter and receiver configurations that minimize the symbol error rate. We propose and verify in simulation a training method that yields robust and flexible transceivers that allow, without reconfiguration, reliable transmission over a large range of link dispersions. The results from end-to-end deep learning are successfully verified in an experiment for the first time. In particular, we achieve information rates of 42 Gb/s below the HD-FEC threshold at distances beyond 40 km. We find that our results outperform conventional IM/DD solutions based on two- and four-level pulse amplitude modulation (PAM2/PAM4) with feedforward equalization (FFE) at the receiver. Our study is a first step toward the end-to-end deep-learning-based optimization of optical fiber communication systems.
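A heavily simplified, numpy-only sketch of the end-to-end idea, not the paper's IM/DD network: a learnable transmit constellation and a distance-based softmax receiver are optimized jointly through an AWGN stand-in for the channel, with finite-difference gradients replacing backpropagation for brevity. All sizes and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, dim, sigma = 4, 2, 0.3     # messages, channel dimensions, AWGN std-dev

E = rng.standard_normal((M, dim)) * 0.1   # learnable constellation, one point per message

def constellation(E):
    """Encoder output under an average-power constraint."""
    return E / np.sqrt(np.mean(np.sum(E ** 2, axis=1)))

def loss(E, labels, noise):
    """Cross-entropy of a softmax receiver over distances to the constellation."""
    pts = constellation(E)
    y = pts[labels] + noise               # transmit, pass through the AWGN "channel"
    logits = -((y[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def ser(E, labels, noise):
    """Symbol error rate of the nearest-neighbour decision."""
    pts = constellation(E)
    y = pts[labels] + noise
    guess = ((y[:, None, :] - pts[None, :, :]) ** 2).sum(-1).argmin(axis=1)
    return float(np.mean(guess != labels))

test_labels = rng.integers(0, M, 400)
test_noise = rng.normal(0, sigma, (400, dim))
ser_before = ser(E, test_labels, test_noise)

# Joint ("end-to-end") optimization; finite differences stand in for backprop.
eps, lr = 1e-4, 0.3
for _ in range(300):
    b = rng.integers(0, M, 200)
    n = rng.normal(0, sigma, (200, dim))
    g = np.zeros_like(E)
    for i in range(M):
        for j in range(dim):
            Ep, Em = E.copy(), E.copy()
            Ep[i, j] += eps
            Em[i, j] -= eps
            g[i, j] = (loss(Ep, b, n) - loss(Em, b, n)) / (2 * eps)
    E -= lr * g

ser_after = ser(E, test_labels, test_noise)
print(ser_before, ser_after)
```

The training loop drives the random initial points toward a well-separated constellation, which is the same mechanism, at toy scale, by which the paper's network discovers dispersion-robust transmit waveforms.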
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, pinpointing the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field and as a reference for researchers that attempts to put some of the existing ideas in the perspective of practical applications.
The availability of low-cost hardware such as CMOS cameras and microphones has fostered the development of Wireless Multimedia Sensor Networks (WMSNs), i.e., networks of wirelessly interconnected devices that are able to ubiquitously retrieve multimedia content such as video and audio streams, still images, and scalar sensor data from the environment. In this paper, the state of the art in algorithms, protocols, and hardware for wireless multimedia sensor networks is surveyed, and open research issues are discussed in detail. Architectures for WMSNs are explored, along with their advantages and drawbacks. Currently off-the-shelf hardware as well as available research prototypes for WMSNs are listed and classified. Existing solutions and open research issues at the application, transport, network, link, and physical layers of the communication protocol stack are investigated, along with possible cross-layer synergies and optimizations.
Shannon's determination of the capacity of the linear Gaussian channel has posed a magnificent challenge to succeeding generations of researchers. This paper surveys how this challenge has been met during the past half century. Orthogonal minimum-bandwidth modulation techniques and channel capacity are discussed. Binary coding techniques for low-signal-to-noise ratio (SNR) channels and nonbinary coding techniques for high-SNR channels are reviewed. Recent developments, which now allow capacity to be approached on any linear Gaussian channel, are surveyed. These new capacity-approaching techniques include turbo coding and decoding, multilevel coding, and combined coding/precoding for intersymbol-interference channels.
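The Shannon benchmark this survey refers to is easy to state in code: the capacity of the ideal linear Gaussian channel in bits per two-dimensional symbol, and the ultimate low-SNR limit Eb/N0 = ln 2 (about -1.59 dB) that capacity-approaching codes are measured against.

```python
import math

def capacity_bits_per_symbol(snr_db):
    """Shannon capacity of the ideal linear Gaussian channel, bits per 2-D symbol."""
    snr = 10 ** (snr_db / 10)
    return math.log2(1 + snr)

# Ultimate low-SNR limit on Eb/N0: ln 2, i.e. about -1.59 dB
ebn0_limit_db = 10 * math.log10(math.log(2))

print(capacity_bits_per_symbol(20), ebn0_limit_db)
```

Turbo and multilevel coding schemes are "capacity-approaching" precisely in the sense of closing the gap between their operating SNR and the inverse of this capacity curve.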
Fiber nonlinear interference (NLI) modeling and monitoring are key building blocks supporting elastic optical networks (EONs). In the past, they were usually developed and investigated separately. Moreover, for heterogeneous dynamic optical networks, the accuracy of previously proposed methods still needs to be improved. In this paper, we present the application of machine learning (ML) to NLI modeling and monitoring. In particular, we first propose to use ML approaches to calibrate the errors of current fiber nonlinearity models. The Gaussian-noise (GN) model is used as an illustrative example, and a significant improvement is demonstrated with the aid of an artificial neural network (ANN). Furthermore, we propose to use ML to combine modeling and monitoring schemes for a better estimation of the NLI variance. Extensive simulations over 1603 links are conducted to evaluate and analyze the performance of the various schemes, and the superior performance of the ML-aided combination of modeling and monitoring is demonstrated.
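A toy stand-in for the model-calibration idea above, with ordinary least squares replacing the paper's ANN and a synthetic, entirely hypothetical error model: the "learned" correction absorbs the structured part of the analytical model's error, leaving only the irreducible noise.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic per-link features (hypothetical stand-ins: span count, launch power)
n_links = 500
spans = rng.integers(1, 20, n_links).astype(float)
power_dbm = rng.uniform(-2, 4, n_links)
X = np.column_stack([np.ones(n_links), spans, power_dbm, spans * power_dbm])

# Pretend the analytical model mis-estimates NLI with a structured error (in dB)
model_error_db = 0.3 + 0.05 * spans - 0.1 * power_dbm + rng.normal(0, 0.05, n_links)

# "Learn" the correction; linear least squares is a minimal stand-in for an ANN
w, *_ = np.linalg.lstsq(X, model_error_db, rcond=None)
residual = model_error_db - X @ w

print(np.std(model_error_db), np.std(residual))
```

The spread of the error collapses from the structured level (a few tenths of a dB here) down to the noise floor, which is the same effect the paper demonstrates with an ANN over realistic link populations.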
Operators are pressured to maximize the achieved capacity over deployed links. This can be obtained by operating in the weakly nonlinear regime, requiring a precise understanding of the transmission conditions. Ideally, optical transponders should be capable of estimating the regime of operation from the received signal and feeding that information to the upper management layers to optimize the transmission characteristics; however, this estimation is challenging. This paper addresses this problem by estimating the linear and nonlinear signal-to-noise ratio (SNR) from the received signal. This estimation is performed by obtaining features of two distinct effects: nonlinear phase noise and second-order statistical moments. A small neural network is trained to estimate the SNRs from the extracted features. Over extensive simulations covering 19,800 sets of realistic fiber transmissions, we verified the accuracy of the proposed techniques. Employing both approaches simultaneously gave measured performances of 0.04 and 0.20 dB of standard error for the linear and nonlinear SNRs, respectively.
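One classic example of an SNR feature built from second- and fourth-order statistical moments is the M2M4 estimator for constant-modulus signals, shown here for QPSK in AWGN. This illustrates the moment-based feature idea only; it is not the paper's exact feature set or network.

```python
import numpy as np

rng = np.random.default_rng(7)

# QPSK (constant modulus) plus circular Gaussian noise at a known true SNR
n = 200_000
snr_db_true = 12.0
s = (rng.choice([1, -1], n) + 1j * rng.choice([1, -1], n)) / np.sqrt(2)
noise_power = 10 ** (-snr_db_true / 10)
y = (s + rng.normal(0, np.sqrt(noise_power / 2), n)
       + 1j * rng.normal(0, np.sqrt(noise_power / 2), n))

# M2M4 estimator: for constant-modulus signals, S = sqrt(2*M2**2 - M4)
m2 = np.mean(np.abs(y) ** 2)          # second moment: S + N
m4 = np.mean(np.abs(y) ** 4)          # fourth moment: S**2 + 4*S*N + 2*N**2
signal_power = np.sqrt(max(2 * m2 ** 2 - m4, 1e-12))
snr_db_est = 10 * np.log10(signal_power / (m2 - signal_power))

print(snr_db_true, round(float(snr_db_est), 2))
```

Features of this kind, computed blindly from the received samples, are exactly the sort of inputs a small neural network can map to separate linear and nonlinear SNR estimates.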
Multiuser Multiple-Input Multiple-Output (MIMO) offers big advantages over conventional point-to-point MIMO: it works with cheap single-antenna terminals, a rich scattering environment is not required, and resource allocation is simplified because every active terminal utilizes all of the time-frequency bins. However, multiuser MIMO, as originally envisioned with roughly equal numbers of service antennas and terminals and frequency-division duplex operation, is not a scalable technology. Massive MIMO (also known as "Large-Scale Antenna Systems", "Very Large MIMO", "Hyper MIMO", "Full-Dimension MIMO" and "ARGOS") makes a clean break with current practice through the use of a large excess of service antennas over active terminals and time-division duplex operation. Extra antennas help by focusing energy into ever-smaller regions of space to bring huge improvements in throughput and radiated energy efficiency. Other benefits of massive MIMO include the extensive use of inexpensive low-power components, reduced latency, simplification of the media access control (MAC) layer, and robustness to intentional jamming. The anticipated throughput depends on the propagation environment providing asymptotically orthogonal channels to the terminals, but so far experiments have not disclosed any limitations in this regard. While massive MIMO renders many traditional research problems irrelevant, it uncovers entirely new problems that urgently need attention: the challenge of making many low-cost low-precision components that work effectively together, acquisition and synchronization for newly joined terminals, the exploitation of the extra degrees of freedom provided by the excess of service antennas, reducing internal power consumption to achieve total energy-efficiency gains, and finding new deployment scenarios. This paper presents an overview of the massive MIMO concept and of contemporary research on the topic.
1 Background: Multiuser MIMO Maturing

MIMO (Multiple-Input Multiple-Output) technology relies on multiple antennas to simultaneously transmit multiple streams of data in wireless communication systems. When MIMO is used to communicate with several terminals at the same time, we speak of multiuser MIMO; here, we just say MU-MIMO for short.
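The channel-hardening effect behind the energy-focusing claims above can be checked numerically: under an i.i.d. Rayleigh-fading assumption, the per-antenna maximum-ratio-combining gain ||h||^2 / M concentrates around its mean as the number of service antennas M grows, so the effective link becomes nearly deterministic. The fading model and trial counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def mrc_gain(m_antennas, trials=2000):
    """Mean and spread of the per-antenna channel gain under maximum-ratio combining."""
    h = (rng.standard_normal((trials, m_antennas))
         + 1j * rng.standard_normal((trials, m_antennas))) / np.sqrt(2)
    g = np.sum(np.abs(h) ** 2, axis=1) / m_antennas   # ||h||^2 / M per trial
    return g.mean(), g.std()

# As M grows, ||h||^2 / M concentrates around 1: the channel "hardens",
# and the beamforming gain becomes effectively deterministic.
for m in (1, 10, 100):
    mean, std = mrc_gain(m)
    print(m, round(float(mean), 3), round(float(std), 3))
```

This concentration (the standard deviation shrinks roughly as 1/sqrt(M)) is why scheduling and power control simplify so dramatically with a large excess of service antennas.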
We have proposed a dynamically configurable and fast optical impairment model for the abstraction of the optical physical layer, enabling new capabilities such as indirect estimation of physical operating parameters in multivendor networks based on pre-FEC BER information and machine learning. BER is commonly reported by deployed coherent transponders; therefore, this scheme does not increase hardware cost. The estimated parameters can subsequently be used to predict the optical signal quality at the receiver of not-yet-established optical connections more accurately than is possible based on the limited amount of information available at the time of offline system design. The higher accuracy and certainty reduce the amount of system margin that must be allocated to guarantee reliable optical connectivity. The remaining margin can then be applied toward increased transmission capacity, or a reduced number of regenerators in the network. We demonstrate the quality-of-transmission prediction experimentally in an optical mesh network with 0.6 dB Q-factor accuracy, and quantify the benefit in terms of network capacity gain in metro networks by impairment-aware network simulation.
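The pre-FEC BER reported by a coherent transponder maps directly to a Q-factor via BER = 0.5*erfc(Q/sqrt(2)) for binary decisions in Gaussian noise; a sketch of that round trip, inverting erfc by bisection since the standard library has no inverse. This shows the standard BER-to-Q conversion only, not the paper's full estimation scheme.

```python
import math

def ber_from_q(q):
    """Pre-FEC BER for binary signalling with Q-factor q (Gaussian noise)."""
    return 0.5 * math.erfc(q / math.sqrt(2))

def q_from_ber(ber, lo=0.0, hi=20.0):
    """Invert ber_from_q by bisection (ber_from_q is monotone decreasing)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if ber_from_q(mid) > ber:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A transponder reporting pre-FEC BER = 1e-3 implies Q of roughly 3.09 (~9.8 dB)
q = q_from_ber(1e-3)
print(q, 20 * math.log10(q))
```

Tracking estimated Q against the Q required at the FEC threshold is precisely how the residual system margin in the abstract is quantified.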
We investigate the problem of a monostatic pulse-Doppler radar transceiver trying to detect targets, sparsely populated in the radar's unambiguous time-frequency region. Several past works employ compressed sensing (CS) algorithms for this type of problem, but either do not address sample-rate reduction, impose constraints on the radar transmitter, propose CS recovery methods with prohibitive dictionary size, or perform poorly in noisy conditions. Here we describe a sub-Nyquist sampling and recovery approach called Doppler focusing which addresses all of these problems: it performs low-rate sampling and digital processing, imposes no restrictions on the transmitter, and uses a CS dictionary whose size does not increase with the number of pulses P. Furthermore, in the presence of noise, Doppler focusing enjoys a signal-to-noise ratio (SNR) improvement which scales linearly with P, obtaining good detection performance even at SNR as low as -25 dB. The recovery is based on the Xampling framework, which allows reducing the number of samples needed to accurately represent the signal, directly in the analog-to-digital conversion process. After sampling, the entire digital recovery process is performed on the low-rate samples without having to return to the Nyquist rate. Finally, our approach can be implemented in hardware using a previously suggested Xampling radar prototype.
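A stripped-down sketch of the focusing step for a single range bin: coherently summing P pulse returns at each candidate Doppler frequency produces a peak at the true Doppler whose amplitude grows linearly with P. The per-pulse SNR used here (-10 dB) and the frequency grid are illustrative assumptions, milder than the -25 dB regime reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

P = 200                      # number of pulses
f_d_true = 0.23              # true normalized Doppler frequency (cycles/pulse)
snr_linear = 0.1             # per-pulse SNR of -10 dB

# Per-pulse returns from one range bin: a Doppler-rotated phasor in heavy noise
p = np.arange(P)
x = (np.sqrt(snr_linear) * np.exp(2j * np.pi * f_d_true * p)
     + (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2))

# Doppler focusing: coherently sum the pulses for each candidate frequency.
# At the true Doppler the signal adds coherently (gain ~ P); noise adds as sqrt(P).
grid = np.linspace(0, 0.5, 501)
focused = np.abs(np.array([np.sum(x * np.exp(-2j * np.pi * f * p)) for f in grid]))
f_d_hat = grid[np.argmax(focused)]
print(f_d_true, round(float(f_d_hat), 3))
```

The linear-in-P coherent gain versus the sqrt(P) noise growth is exactly the SNR improvement that lets the full scheme operate far below 0 dB per pulse.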
The rise of the Internet of Things (IoT) has brought many new sensing mechanisms. Among these mechanisms, acoustic sensing has attracted much attention in recent years. Acoustic sensing exploits acoustic sensors beyond their primary uses of recording and playback to enable interesting applications and new user experiences. In this paper, we present the first survey of recent advances in acoustic sensing using commodity hardware. We propose a general framework that categorizes the main building blocks of acoustic sensing systems. This framework consists of three layers, namely the physical layer, the processing layer, and the application layer. We highlight the different sensing approaches in the processing layer and the fundamental design considerations in the physical layer. Many existing and potential applications are presented in depth, including context-aware applications, human-computer interfaces, and aerial acoustic communication. Challenges and future research trends are also discussed.
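A minimal example of active acoustic sensing with commodity-hardware parameters: ranging by cross-correlating a received echo with a transmitted near-ultrasonic chirp. All numbers below (sample rate, chirp band, target distance, noise level) are illustrative assumptions, not values from the survey.

```python
import numpy as np

fs = 48_000                          # commodity sound-card sample rate (Hz)
c = 343.0                            # speed of sound (m/s)

# A short linear chirp, a common probe signal in acoustic ranging
t = np.arange(0, 0.01, 1 / fs)
f0, f1 = 18_000, 22_000              # near-inaudible band on commodity speakers
chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * t[-1]) * t ** 2))

# Simulate a round-trip echo from a target at 1.5 m (delay = 2*d/c)
distance_true = 1.5
delay = int(round(2 * distance_true / c * fs))
rx = np.zeros(4096)
rx[delay:delay + len(chirp)] += 0.2 * chirp
rx += 0.05 * np.random.default_rng(2).standard_normal(len(rx))

# Matched filtering: cross-correlate and read the distance off the peak lag
corr = np.correlate(rx, chirp, mode="valid")
lag = int(np.argmax(np.abs(corr)))
distance_est = lag * c / fs / 2
print(distance_true, round(distance_est, 3))
```

This matched-filter pipeline, generalized to FMCW sweeps and phase tracking, underlies many of the processing-layer approaches categorized by the survey's framework.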