Iterative detection and decoding (IDD) is known to achieve near-capacity performance in multi-antenna wireless systems. We propose deep-unfolded interleaved detection and decoding (DUIDD), a new paradigm that reduces the complexity of IDD while achieving even lower error rates. DUIDD interleaves the inner stages of the data detector and channel decoder, which expedites convergence and reduces complexity. Furthermore, DUIDD applies deep unfolding to automatically optimize algorithmic hyperparameters, soft-information exchange, message damping, and state forwarding. We demonstrate the efficacy of DUIDD using NVIDIA's Sionna link-level simulator in a 5G-near multi-user MIMO-OFDM wireless system with a novel low-complexity soft-input soft-output data detector, an optimized low-density parity-check decoder, and channel vectors from a commercial ray-tracer. Our results show that DUIDD outperforms classical IDD both in terms of block error rate and computational complexity.
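As an illustration of the unfolding idea described above, the sketch below unrolls one interleaved detection/decoding pass in which per-stage damping and LLR-scaling factors are exposed as trainable parameters and decoder state is forwarded across stages. The detector, the decoder iteration, and all parameter values are hypothetical stand-ins, not the paper's actual DUIDD blocks.

```python
import numpy as np

# Toy stand-ins for the soft-input soft-output detector and one inner
# decoder iteration; in DUIDD these would be the actual MIMO detector
# and LDPC decoder stages (names here are hypothetical).
def siso_detector(y, H, prior_llr, noise_var):
    # Matched-filter detector with a crude Gaussian LLR model (illustrative only).
    return H.T @ y / noise_var + prior_llr

def decoder_iteration(llr, state):
    # Placeholder single decoder iteration: slight LLR sharpening.
    return 1.1 * llr, state

def duidd_forward(y, H, noise_var, num_stages=4, params=None):
    """One unfolded detection/decoding pass with trainable damping.

    params[t] would be learned via deep unfolding; here we just
    initialize them to plausible values.
    """
    n = H.shape[1]
    if params is None:
        params = {"damping": np.full(num_stages, 0.5),
                  "llr_scale": np.ones(num_stages)}
    prior = np.zeros(n)          # a-priori LLRs fed to the detector
    prev_ext = np.zeros(n)       # previous extrinsic message (for damping)
    dec_state = None             # decoder state forwarded across stages
    for t in range(num_stages):
        post = siso_detector(y, H, prior, noise_var)
        ext = params["llr_scale"][t] * (post - prior)        # extrinsic LLRs
        ext = params["damping"][t] * ext + (1 - params["damping"][t]) * prev_ext
        prev_ext = ext
        prior, dec_state = decoder_iteration(ext, dec_state)  # interleaved stage
    return prior

# Tiny example: 2x2 real-valued system with BPSK symbols.
H = np.array([[1.0, 0.2], [0.1, 0.9]])
x = np.array([1.0, -1.0])
y = H @ x + 0.1 * np.random.randn(2)
print(duidd_forward(y, H, noise_var=0.01))
```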
Multi-user multiple-input multiple-output (MU-MIMO) systems can be used to meet the high throughput requirements of 5G and beyond networks. A base station serves many users in an uplink MU-MIMO system, resulting in substantial multi-user interference (MUI). Designing a high-performance detector that copes with strong MUI is challenging. This paper analyzes the performance degradation caused by the posterior distribution approximation used in state-of-the-art message passing (MP) detectors under high MUI. We develop a graph neural network based framework to fine-tune the cavity distributions of MP detectors, thereby improving their posterior distribution approximation. We then propose two novel neural-network-based detectors that build on expectation propagation (EP) and Bayesian parallel interference cancellation (BPIC), referred to as the GEPNet and GPICNet detectors, respectively. The GEPNet detector maximizes detection performance, while the GPICNet detector balances performance and complexity. We provide a proof of the permutation equivariance property, which allows the detectors to be trained only once, even in systems with a dynamically changing number of users. Simulation results show that the proposed GEPNet detector approaches maximum-likelihood performance in various configurations, and that the GPICNet detector doubles the multiplexing gain of the BPIC detector.
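A minimal sketch of the EP detection loop that GEPNet builds on is given below, with a `refine` hook marking where a GNN would adjust the cavity parameters before moment matching. The hook's signature, the BPSK prior, and the damping constant are assumptions for illustration, not the paper's parameterization.

```python
import numpy as np

def ep_detector_bpsk(y, H, noise_var, iters=5, refine=None):
    """Expectation-propagation MIMO detection for BPSK (sketch).

    `refine` is where GEPNet would insert a GNN that adjusts the cavity
    mean/precision (hypothetical signature); None reproduces plain EP.
    """
    n = H.shape[1]
    gamma = np.zeros(n)          # natural mean of the Gaussian approx. prior
    lam = np.full(n, 1.0)        # natural precision of the approx. prior
    HtH = H.T @ H / noise_var
    Hty = H.T @ y / noise_var
    for _ in range(iters):
        # LMMSE posterior given the current Gaussian approximation.
        Sigma = np.linalg.inv(HtH + np.diag(lam))
        mu = Sigma @ (Hty + gamma)
        var = np.diag(Sigma)
        # Cavity distribution (remove each variable's own prior factor).
        cav_prec = np.maximum(1.0 / var - lam, 1e-8)
        cav_mean = (mu / var - gamma) / cav_prec
        if refine is not None:                 # GNN correction slot
            cav_mean, cav_prec = refine(cav_mean, cav_prec)
        # Moment matching against the discrete BPSK prior {-1, +1}.
        post_mean = np.tanh(cav_mean * cav_prec)
        post_var = np.maximum(1.0 - post_mean**2, 1e-8)
        # Update the Gaussian factor, with damping for stability.
        new_lam = 1.0 / post_var - cav_prec
        new_gamma = post_mean / post_var - cav_mean * cav_prec
        lam = 0.7 * np.maximum(new_lam, 1e-8) + 0.3 * lam
        gamma = 0.7 * new_gamma + 0.3 * gamma
    return np.sign(post_mean)

H = np.random.randn(8, 4) / np.sqrt(8)
x = np.sign(np.random.randn(4))
y = H @ x + 0.05 * np.random.randn(8)
print(ep_detector_bpsk(y, H, noise_var=0.0025), x)
```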
We consider the application of the factor graph framework for symbol detection on linear inter-symbol interference channels. Based on the Ungerboeck observation model, detection algorithms with appealing complexity-performance trade-offs can be derived. However, since the underlying factor graph contains cycles, the sum-product algorithm (SPA) yields a suboptimal algorithm. In this paper, we develop and evaluate efficient strategies to improve the performance of factor graph-based symbol detection by means of neural enhancement. In particular, we consider neural belief propagation and generalizations of the factor nodes as effective ways to mitigate the effect of cycles within the factor graph. By applying a generic preprocessor to the channel output, we propose a simple technique to vary the underlying factor graph in every SPA iteration. Using this dynamic factor graph transition, we aim to preserve the extrinsic nature of the SPA messages, which is otherwise impaired by the cycles. Simulation results show that the proposed methods can substantially improve detection performance and even approach maximum a posteriori performance for various transmission scenarios, while preserving a complexity that is linear in both the block length and the channel memory.
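The following sketch runs plain sum-product detection on the Ungerboeck factor graph for BPSK, with a `weights` tensor marking where neural-BP message scalings would enter; the weighting scheme and all toy values are assumptions, not the paper's exact parameterization.

```python
import numpy as np

def ungerboeck_bp_bpsk(y, H, noise_var, iters=10, weights=None):
    """Sum-product symbol detection on the Ungerboeck factor graph (BPSK sketch).

    `weights` plays the role of the trainable message scalings of neural
    belief propagation (hypothetical); all-ones reproduces plain SPA.
    """
    n = H.shape[1]
    G = H.T @ H / noise_var            # Ungerboeck channel autocorrelation
    z = H.T @ y / noise_var            # matched-filter output
    if weights is None:
        weights = np.ones((iters, n, n))
    # Messages in LLR form: msg[k, l] = message from x_k to x_l.
    msg = np.zeros((n, n))
    nbrs = [np.nonzero((np.abs(G[k]) > 1e-12) & (np.arange(n) != k))[0]
            for k in range(n)]
    for it in range(iters):
        new_msg = np.zeros((n, n))
        for k in range(n):
            incoming = sum(msg[j, k] for j in nbrs[k])
            for l in nbrs[k]:
                # Cavity belief of x_k, excluding the message from l.
                llr_cav = 2 * z[k] + incoming - msg[l, k]
                pk = 1 / (1 + np.exp(-llr_cav))       # P(x_k = +1)
                # Pairwise factor exp(-x_k G[k,l] x_l), marginalized over x_k.
                num = pk * np.exp(-G[k, l]) + (1 - pk) * np.exp(G[k, l])
                den = pk * np.exp(G[k, l]) + (1 - pk) * np.exp(-G[k, l])
                new_msg[k, l] = weights[it, k, l] * np.log(num / den)
        msg = new_msg
    beliefs = 2 * z + np.array([sum(msg[j, k] for j in nbrs[k]) for k in range(n)])
    return np.sign(beliefs)

# 3-tap ISI channel, block of 8 BPSK symbols.
h = np.array([1.0, 0.5, 0.2])
n = 8
H = np.zeros((n + len(h) - 1, n))
for i in range(n):
    H[i:i + len(h), i] = h
x = np.sign(np.random.randn(n))
y = H @ x + 0.1 * np.random.randn(H.shape[0])
print(ungerboeck_bp_bpsk(y, H, noise_var=0.01), x)
```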
Link-Adaptation(LA)是无线通信的最重要方面之一,其中发射器使用的调制和编码方案(MCS)适用于通道条件,以满足某些目标误差率。在具有离细胞外干扰的单用户SISO(SU-SISO)系统中,LA是通过计算接收器处计算后平均值 - 交换后噪声比(SINR)进行的。可以在使用线性探测器的多用户MIMO(MU-MIMO)接收器中使用相同的技术。均衡后SINR的另一个重要用途是用于物理层(PHY)抽象,其中几个PHY块(例如通道编码器,检测器和通道解码器)被抽象模型取代,以加快系统级级别的模拟。但是,对于具有非线性接收器的MU-MIMO系统,尚无等效于平衡后的SINR,这使LA和PHY抽象都极具挑战性。这份由两部分组成的论文解决了这个重要问题。在这一部分中,提出了一个称为检测器的称为比特 - 金属解码速率(BMDR)的度量,该指标提出了相当于后平等SINR的建议。由于BMDR没有封闭形式的表达式可以启用其瞬时计算,因此一种机器学习方法可以预测其以及广泛的仿真结果。
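To make the metric concrete, the sketch below Monte-Carlo-estimates a bit-metric decoding rate from detector LLRs using the generalized-mutual-information-style surrogate 1 - E[log2(1 + e^{-s*LLR})]. This surrogate form and its sign convention are assumptions for illustration; the paper's exact BMDR definition is not reproduced here.

```python
import numpy as np

def bmdr_estimate(llrs, bits):
    """Monte-Carlo estimate of a bit-metric decoding rate from detector LLRs.

    Assumed surrogate: achievable rate per bit of a bit-metric decoder,
    1 - E[log2(1 + exp(-s * LLR))], with s = +1 for a transmitted 1 and
    -1 for a 0 (convention: LLR > 0 favors bit 1).
    """
    s = 2.0 * bits - 1.0
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-s * llrs)))

# Toy example: exact LLRs from a binary-input AWGN channel at two SNRs.
rng = np.random.default_rng(0)
for snr_db in (0.0, 6.0):
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, 100_000)
    x = 2.0 * bits - 1.0
    y = x + rng.normal(scale=np.sqrt(1 / (2 * snr)), size=bits.size)
    llr = 4 * snr * y          # exact LLR for this channel
    print(f"{snr_db:4.1f} dB -> BMDR ~ {bmdr_estimate(llr, bits):.3f} bits")
```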
This is the second part of a two-part paper that focuses on link adaptation (LA) and physical layer (PHY) abstraction for multi-user MIMO (MU-MIMO) systems with nonlinear receivers. The first part proposed a new metric, called the bit-metric decoding rate (BMDR) of a detector, as the equivalent of the post-equalization signal-to-interference-plus-noise ratio (SINR) for nonlinear receivers. Since the BMDR does not have a closed-form expression, a machine-learning-based method to estimate it efficiently was presented. In this part, the concepts developed in the first part are used to develop novel algorithms for LA, dynamic detector selection from a list of available detectors, and PHY abstraction in MU-MIMO systems with arbitrary receivers. Extensive simulation results that substantiate the efficacy of the proposed algorithms are presented.
Effective and adaptive interference management is required in next generation wireless communication systems. To address this challenge, Rate-Splitting Multiple Access (RSMA), relying on multi-antenna rate-splitting (RS) at the transmitter and successive interference cancellation (SIC) at the receivers, has been intensively studied in recent years, albeit mostly under the assumption of perfect Channel State Information at the Receiver (CSIR) and ideal capacity-achieving modulation and coding schemes. To assess its practical performance, benefits, and limits under more realistic conditions, this work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods, which aims to unite the simple structure of the conventional SIC receiver and the robustness and model agnosticism of deep learning techniques. The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS), and average training overhead. Also, a comparison with the SIC receiver, with perfect and imperfect CSIR, is given. Results reveal that the MBDL receiver outperforms by a significant margin the SIC receiver with imperfect CSIR, due to its ability to generate on demand non-linear symbol detection boundaries in a pure data-driven manner.
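For reference, a minimal model of the conventional two-stage SIC baseline (common stream first, then the private stream after cancellation) is sketched below. The scalar effective channel gains and hard BPSK demodulation are simplifying assumptions; the MBDL receiver of the paper would replace these stages with learned blocks.

```python
import numpy as np

def sic_receive(y, h_c, h_p, demod):
    """Two-stage SIC for a rate-splitting receiver (sketch).

    Stage 1 detects the common stream treating the private stream as
    noise; after reconstruction and cancellation, stage 2 detects the
    private stream.
    """
    s_c = demod(y / h_c)            # detect common stream
    y_clean = y - h_c * s_c         # reconstruct and cancel it
    return s_c, demod(y_clean / h_p)

rng = np.random.default_rng(8)
n = 10_000
s_c = rng.choice([-1.0, 1.0], n)    # common-stream symbols (BPSK)
s_p = rng.choice([-1.0, 1.0], n)    # private-stream symbols
h_c, h_p = 2.0, 0.7                 # effective gains: common gets more power
y = h_c * s_c + h_p * s_p + 0.3 * rng.normal(size=n)
sc_hat, sp_hat = sic_receive(y, h_c, h_p, demod=np.sign)
print("common SER:", np.mean(sc_hat != s_c), "private SER:", np.mean(sp_hat != s_p))
```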
Channel estimation is a critical task in multiple-input multiple-output (MIMO) digital communications that substantially affects end-to-end system performance. In this work, we introduce a novel approach for channel estimation using deep score-based generative models. A model is trained to estimate the gradient of the logarithm of a distribution and is used to iteratively refine estimates given measurements of a signal. We introduce a framework for training score-based generative models for wireless MIMO channels and performing channel estimation based on posterior sampling at test time. We derive theoretical robustness guarantees for channel estimation with posterior sampling in single-input single-output scenarios, and experimentally verify performance in the MIMO setting. Our results in simulated channels show competitive in-distribution performance, and robust out-of-distribution performance, with gains of up to $5$ dB in end-to-end coded communication performance compared to supervised deep learning methods. Simulations on the number of pilots show that high fidelity channel estimation with $25$% pilot density is possible for MIMO channel sizes of up to $64 \times 256$. Complexity analysis reveals that model size can efficiently trade performance for estimation latency, and that the proposed approach is competitive with compressed sensing in terms of floating-point operation (FLOP) count.
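A minimal sketch of the posterior-sampling step follows: for pilot measurements y = A h + n, the likelihood score A^T(y - A h)/sigma^2 is added to a prior score and used in unadjusted Langevin updates. The `score_prior` stand-in (an exact Gaussian score) replaces the trained score network, and the step size and iteration count are arbitrary choices for this toy setup.

```python
import numpy as np

def posterior_langevin(y, A, noise_var, score_prior, steps=3000, step=5e-4, rng=None):
    """Channel estimation via Langevin posterior sampling (sketch).

    `score_prior(h)` stands in for the trained score network
    grad_h log p(h); for pilots y = A h + n, the likelihood score is
    A.T @ (y - A h) / noise_var, so the posterior score is their sum.
    """
    rng = rng or np.random.default_rng()
    h = rng.normal(size=A.shape[1])
    for _ in range(steps):
        score = score_prior(h) + A.T @ (y - A @ h) / noise_var
        h = h + step * score + np.sqrt(2 * step) * rng.normal(size=h.size)
    return h

# Toy setup: Gaussian prior h ~ N(0, I), whose exact score is -h.
rng = np.random.default_rng(1)
n, m = 16, 12                       # channel dim, number of pilot measurements
A = rng.normal(size=(m, n)) / np.sqrt(m)
h_true = rng.normal(size=n)
y = A @ h_true + 0.1 * rng.normal(size=m)
h_hat = posterior_langevin(y, A, 0.01, score_prior=lambda h: -h, rng=rng)
print(np.linalg.norm(h_hat - h_true) / np.linalg.norm(h_true))
```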
Reconfigurable intelligent surfaces (RISs) have recently been considered an energy-efficient solution for future wireless networks due to their fast and low-power configuration, which has increased potential for enabling massive connectivity and low-latency communications. Accurate and low-overhead channel estimation in RIS-based systems is one of the most critical challenges, due to the usually large number of RIS unit elements and their distinctive hardware constraints. In this paper, we focus on the uplink of a RIS-empowered multi-user multiple-input single-output (MISO) communication system and propose a channel estimation framework based on parallel factor decomposition to unfold the resulting cascaded channel model. We present two iterative estimation algorithms for the channels between the base station and the RIS, as well as the channels between the RIS and the users. One is based on alternating least squares (ALS), while the other uses vector approximate message passing to iteratively reconstruct the two unknown channels from the estimated vectors. To theoretically assess the performance of the ALS-based algorithm, we derive its estimation Cramér-Rao bound (CRB). We also discuss the achievable sum-rate computation with the estimated channels and different precoding schemes at the base station. Our extensive simulation results show that our algorithms outperform benchmark schemes, and that the ALS technique attains the CRB. It is also demonstrated that the sum rates obtained with the estimated channels always approach those achieved with perfect channels under various settings, thus verifying the effectiveness and robustness of the proposed estimation algorithms.
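The alternating-least-squares idea can be sketched for a simplified cascaded model Y_p = A diag(phi_p) B + noise, alternating closed-form LS solves for the two channels. The model shape, initialization, and fixed iteration count are assumptions, and the paper's PARAFAC formulation is richer than this.

```python
import numpy as np

def als_cascaded(Y_list, phi_list, N, iters=30, rng=None):
    """Alternating least squares for a cascaded RIS channel (sketch).

    Model assumed here: Y_p = A @ diag(phi_p) @ B + noise, where A is the
    BS-RIS channel, B the RIS-user channel, and phi_p the RIS reflection
    configuration during training block p.
    """
    rng = rng or np.random.default_rng()
    M, K = Y_list[0].shape
    A = rng.normal(size=(M, N))          # random initialization
    B = rng.normal(size=(N, K))
    for _ in range(iters):
        # Fix B, solve for A: stack Y_p = A @ (diag(phi_p) @ B) column-wise.
        C = np.hstack([np.diag(p) @ B for p in phi_list])
        A = np.hstack(Y_list) @ np.linalg.pinv(C)
        # Fix A, solve for B: stack Y_p = (A @ diag(phi_p)) @ B row-wise.
        D = np.vstack([A @ np.diag(p) for p in phi_list])
        B = np.linalg.pinv(D) @ np.vstack(Y_list)
    return A, B

# Toy example (note the inherent scaling ambiguity A -> A D, B -> D^-1 B).
rng = np.random.default_rng(2)
M, N, K, P = 8, 4, 2, 16
A0 = rng.normal(size=(M, N)); B0 = rng.normal(size=(N, K))
phis = [rng.choice([-1.0, 1.0], size=N) for _ in range(P)]
Ys = [A0 @ np.diag(p) @ B0 + 0.01 * rng.normal(size=(M, K)) for p in phis]
A_hat, B_hat = als_cascaded(Ys, phis, N, rng=rng)
print(np.linalg.norm(A_hat @ np.diag(phis[0]) @ B_hat - Ys[0]))
```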
Using deep autoencoders (DAEs) for end-to-end communication in multiple-input multiple-output (MIMO) systems is a novel concept with significant potential. DAE-aided MIMO has been shown to outperform singular-value-decomposition (SVD)-based precoded MIMO in terms of bit error rate (BER). This paper proposes embedding the left and right singular vectors of the channel matrix into the DAE encoder and decoder to further improve the performance of MIMO spatial multiplexing. The SVD-embedded DAE largely outperforms theoretical linear precoding in terms of BER. This is remarkable, as it suggests that the proposed DAEs go beyond the limits of current system design by treating the communication system as a single end-to-end optimization block. Based on the simulation results, at SNR = 10 dB, the proposed SVD-embedded design reduces the BER at least 10-fold compared with a DAE without SVD, and by up to 18-fold compared with theoretical linear precoding. We attribute this to the fact that the proposed DAE can match its input and output with an adaptive modulation structure under finite-alphabet inputs. We also observe that adding residual connections to the DAE further improves the performance.
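A sketch of the forward path follows: the learned encoder output is rotated by the right singular vectors before the channel, and the received signal by the left singular vectors before the decoder, so the networks effectively operate on parallel singular-value subchannels. The stub encoder/decoder below are placeholders, not trained DAEs.

```python
import numpy as np

def svd_dae_forward(s, H, encoder, decoder, noise_std=0.1, rng=None):
    """Forward pass of an SVD-embedded autoencoder link (sketch)."""
    rng = rng or np.random.default_rng()
    U, Sig, Vt = np.linalg.svd(H)
    x = Vt.T @ encoder(s)                 # SVD precoding of the learned symbols
    y = H @ x + noise_std * rng.normal(size=H.shape[0])
    return decoder(U.T @ y)               # SVD combining before the NN decoder

# Stub "networks": identity encoder, per-stream scaling plus hard decision.
H = np.random.randn(4, 4)
_, Sig, _ = np.linalg.svd(H)
s = np.sign(np.random.randn(4))           # one BPSK symbol per stream
s_hat = svd_dae_forward(s, H, encoder=lambda v: v,
                        decoder=lambda v: np.sign(v / Sig))
print(s, s_hat)
```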
In this paper, we investigate the joint device activity and data detection in massive machine-type communications (mMTC) with a one-phase non-coherent scheme, where data bits are embedded in the pilot sequences and the base station simultaneously detects active devices and their embedded data bits without explicit channel estimation. Due to the correlated sparsity pattern introduced by the non-coherent transmission scheme, the traditional approximate message passing (AMP) algorithm cannot achieve satisfactory performance. Therefore, we propose a deep learning (DL) modified AMP network (DL-mAMPnet) that enhances the detection performance by effectively exploiting the pilot activity correlation. The DL-mAMPnet is constructed by unfolding the AMP algorithm into a feedforward neural network, which combines the principled mathematical model of the AMP algorithm with the powerful learning capability, thereby benefiting from the advantages of both techniques. Trainable parameters are introduced in the DL-mAMPnet to approximate the correlated sparsity pattern and the large-scale fading coefficient. Moreover, a refinement module is designed to further advance the performance by utilizing the spatial feature caused by the correlated sparsity pattern. Simulation results demonstrate that the proposed DL-mAMPnet can significantly outperform traditional algorithms in terms of the symbol error rate performance.
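The structure being unfolded can be sketched as follows: each AMP iteration (pseudo-data, thresholding, Onsager-corrected residual) becomes one network layer, with the per-layer thresholds `thetas` as the kind of quantity a DL-mAMPnet-style network would train. The threshold rule and values here are generic, not the paper's learned parameters.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unfolded_amp(y, A, layers=10, thetas=None):
    """AMP for sparse recovery, unfolded into `layers` stages (sketch)."""
    m, n = A.shape
    if thetas is None:
        thetas = np.full(layers, 1.0)        # trainable in the unfolded network
    x = np.zeros(n)
    z = y.copy()
    for t in range(layers):
        r = x + A.T @ z                      # pseudo-data
        tau = thetas[t] * np.linalg.norm(z) / np.sqrt(m)
        x_new = soft_threshold(r, tau)
        onsager = z * np.count_nonzero(x_new) / m   # Onsager correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x

rng = np.random.default_rng(3)
m, n, k = 60, 128, 6
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = 3 * rng.normal(size=k)
y = A @ x0 + 0.01 * rng.normal(size=m)
print(np.linalg.norm(unfolded_amp(y, A) - x0) / np.linalg.norm(x0))
```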
最近在无线通信领域的许多任务中考虑了机器学习算法。以前,我们已经提出了使用深度卷积神经网络(CNN)进行接收器处理的使用,并证明它可以提供可观的性能提高。在这项研究中,我们专注于发射器的机器学习算法。特别是,我们考虑进行波束形成并提出一个CNN,该CNN对于给定上行链路通道估计值作为输入,输出下链路通道信息用于波束成形。考虑到基于UE接收器性能的损失函数的上行链路传输和下行链路传输,CNN以有监督的方式进行培训。神经网络的主要任务是预测上行链路和下行链路插槽之间的通道演变,但它也可以学会处理整个链中的效率低下和错误,包括实际的光束成型阶段。提供的数值实验证明了波束形成性能的改善。
For improving short-length codes, we demonstrate that classic decoders can also be used with real-valued, neural encoders, i.e., deep-learning based codeword sequence generators. Here, the classical decoder can be a valuable tool to gain insights into these neural codes and shed light on weaknesses. Specifically, the turbo-autoencoder is a recently developed channel coding scheme where both encoder and decoder are replaced by neural networks. We first show that the limited receptive field of convolutional neural network (CNN)-based codes enables the application of the BCJR algorithm to optimally decode them with feasible computational complexity. These maximum a posteriori (MAP) component decoders then are used to form classical (iterative) turbo decoders for parallel or serially concatenated CNN encoders, offering a close-to-maximum likelihood (ML) decoding of the learned codes. To the best of our knowledge, this is the first time that a classical decoding algorithm is applied to a non-trivial, real-valued neural code. Furthermore, as the BCJR algorithm is fully differentiable, it is possible to train, or fine-tune, the neural encoder in an end-to-end fashion.
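The key observation, that a receptive field of m bits induces a 2^(m-1)-state trellis, can be sketched with a small forward-backward (BCJR) decoder. The toy function `f` below stands in for the trained CNN encoder, and the zero initial state plus unterminated trellis are simplifying assumptions.

```python
import numpy as np

def bcjr_window_code(y, f, m, noise_var):
    """MAP bit decoding of a sliding-window (CNN-like) encoder via BCJR.

    An encoder with receptive field m emits x_t = f(u[t-m+1 : t+1]); the
    last m-1 bits form the trellis state, so forward-backward applies.
    """
    T = len(y)
    S = 2 ** (m - 1)                          # number of trellis states
    out = np.zeros((S, 2))                    # transition outputs f(window)
    nxt = np.zeros((S, 2), dtype=int)         # next-state table
    for s in range(S):
        past = [(s >> i) & 1 for i in range(m - 1)]   # newest bit first
        for b in (0, 1):
            out[s, b] = f(np.array(past[::-1] + [b], dtype=float))
            nxt[s, b] = ((s << 1) | b) & (S - 1)
    def gamma(t, s, b):                       # branch metric p(y_t | transition)
        return np.exp(-(y[t] - out[s, b]) ** 2 / (2 * noise_var))
    alpha = np.zeros((T + 1, S)); alpha[0, 0] = 1.0
    beta = np.zeros((T + 1, S)); beta[T, :] = 1.0
    for t in range(T):
        for s in range(S):
            for b in (0, 1):
                alpha[t + 1, nxt[s, b]] += alpha[t, s] * 0.5 * gamma(t, s, b)
        alpha[t + 1] /= alpha[t + 1].sum()    # normalize for stability
    for t in range(T - 1, -1, -1):
        for s in range(S):
            for b in (0, 1):
                beta[t, s] += beta[t + 1, nxt[s, b]] * 0.5 * gamma(t, s, b)
        beta[t] /= beta[t].sum()
    u_hat = np.zeros(T, dtype=int)
    for t in range(T):
        p = [sum(alpha[t, s] * gamma(t, s, b) * beta[t + 1, nxt[s, b]]
                 for s in range(S)) for b in (0, 1)]
        u_hat[t] = int(p[1] > p[0])           # MAP bit decision
    return u_hat

# Toy "neural" encoder: a fixed nonlinear function of a 3-bit window.
f = lambda w: np.tanh(1.5 * w[2] - 0.7 * w[1] + 0.4 * w[0] - 0.6)
rng = np.random.default_rng(4)
u = rng.integers(0, 2, 40)
up = np.concatenate([np.zeros(2, dtype=int), u])   # zero-padded history
x = np.array([f(up[t:t + 3].astype(float)) for t in range(len(u))])
y = x + 0.1 * rng.normal(size=len(u))
print((bcjr_window_code(y, f, m=3, noise_var=0.01) == u).mean())
```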
Ultra-reliable short-packet communication is a major challenge in future wireless networks with critical applications. To achieve ultra-reliable communications beyond 99.999%, this paper envisions a new interaction-based communication paradigm that exploits feedback from the receiver. We present AttentionCode, a new class of feedback codes leveraging deep learning (DL) technologies. The underpinnings of AttentionCode are three architectural innovations: AttentionNet, input restructuring, and adaptation to fading channels, accompanied by several training methods, including large-batch training, distributed learning, look-ahead optimizer, training-test signal-to-noise ratio (SNR) mismatch, and curriculum learning. The training methods can potentially be generalized to other wireless communication applications with machine learning. Numerical experiments verify that AttentionCode establishes a new state of the art among all DL-based feedback codes in both additive white Gaussian noise (AWGN) channels and fading channels. In AWGN channels with noiseless feedback, for example, AttentionCode achieves a block error rate (BLER) of $10^{-7}$ when the forward channel SNR is 0 dB for a block size of 50 bits, demonstrating the potential of AttentionCode to provide ultra-reliable short-packet communications.
In recent years, there has been significant research activity on automating the design of channel encoders and decoders via deep learning. Due to the dimensionality challenge in channel coding, it is prohibitively complex to design and train relatively large neural channel codes via deep learning techniques. Therefore, most results in the literature are limited to relatively short codes with fewer than 100 information bits. In this paper, we construct ProductAEs, a computationally efficient family of deep-learning-driven (encoder, decoder) pairs that aim at enabling the training of relatively large channel codes (both encoders and decoders) with a manageable training complexity. We build upon ideas from classical product codes and propose constructing large neural codes from smaller code components. More specifically, instead of directly training the encoder and decoder for a large neural code of dimension $k$ and blocklength $n$, we provide a framework that requires training neural encoders and decoders for the code parameters $(n_1, k_1)$ and $(n_2, k_2)$ such that $n_1 n_2 = n$ and $k_1 k_2 = k$. Our training results show significant gains for a code of parameters $(225, 100)$ and a moderate-length code of parameters $(441, 196)$ over polar codes under successive cancellation (SC) decoding. Moreover, our results demonstrate meaningful gains over Turbo autoencoders (TurboAEs) and state-of-the-art classical codes. This is the first work on designing product autoencoders and on training large channel codes.
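The product construction itself is easy to sketch: arrange the k1*k2 message bits in an array, encode the rows with the (n1, k1) component encoder and the columns with the (n2, k2) one. The linear single-parity components below are toy stand-ins for the trained neural encoders.

```python
import numpy as np

def product_encode(U, enc_row, enc_col, n1, n2, k1, k2):
    """Product-code construction used by ProductAE-style schemes (sketch).

    A (k1*k2)-bit message is arranged as a k2 x k1 array; rows are encoded
    by the (n1, k1) component encoder and columns by the (n2, k2) one.
    `enc_row`/`enc_col` stand in for the trained neural component encoders.
    """
    M = U.reshape(k2, k1)
    rows = np.stack([enc_row(M[i]) for i in range(k2)])                # k2 x n1
    return np.stack([enc_col(rows[:, j]) for j in range(n1)], axis=1)  # n2 x n1

# Toy linear component "encoders": systematic single-parity codes.
def make_linear_enc(G):
    return lambda u: (u @ G) % 2

G1 = np.hstack([np.eye(4, dtype=int), np.ones((4, 1), dtype=int)])  # (5, 4)
G2 = np.hstack([np.eye(3, dtype=int), np.ones((3, 1), dtype=int)])  # (4, 3)
u = np.random.randint(0, 2, 12)                    # k = k1*k2 = 4*3
c = product_encode(u, make_linear_enc(G1), make_linear_enc(G2),
                   n1=5, n2=4, k1=4, k2=3)
print(c.shape)                                     # (n2, n1) = (4, 5): n = 20
```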
The fundamental task of classification, given a limited number of training data samples, is considered for physical systems with known parametric statistical models. Standalone learning-based and statistical-model-based classifiers face major challenges in fulfilling the classification task with a small training set. Specifically, classifiers that rely solely on physics-based statistical models usually suffer from an inability to properly tune the underlying unobservable parameters, which leads to a mismatched representation of the system's behavior. Learning-based classifiers, on the other hand, typically rely on a large number of training samples from the underlying physical process, which might not be feasible in most practical scenarios. This paper proposes a hybrid classification method, termed HyPhyLearn, that exploits both physics-based statistical models and learning-based classifiers. The proposed solution is based on the conjecture that HyPhyLearn would alleviate the challenges associated with the individual learning-based and statistical-model-based approaches by fusing their respective strengths. The proposed hybrid approach first estimates the unobservable model parameters using the available (suboptimal) statistical estimation procedures, and subsequently uses the physics-based statistical models to generate synthetic data. The training data samples are then combined with the synthetic data in a learning-based classifier that relies on neural-network-based domain-adversarial training. Specifically, to address the mismatch problem, the classifier learns a mapping from the training data and the synthetic data to a common feature space. Simultaneously, the classifier is trained to find discriminative features within this space in order to fulfill the classification task.
In conventional multi-user multiple-input multiple-output (MU-MIMO) systems with frequency-division duplexing (FDD), the channel acquisition and precoder optimization processes have been designed separately, although they are highly coupled. This paper studies an end-to-end design of a downlink MU-MIMO system that encompasses pilot sequences, limited feedback, and precoding. To address this problem, we propose a novel deep learning (DL) framework that jointly optimizes the feedback information generation at the users and the precoder design at the base station (BS). Each procedure in the MU-MIMO system is replaced by intelligently designed multiple deep neural network (DNN) units. At the BS, a neural network generates pilot sequences and helps the users obtain accurate channel state information. At each user, the channel feedback operation is carried out in a distributed manner by an individual user DNN. Then, another BS DNN collects the feedback information from the users and determines the MIMO precoding matrices. A joint training algorithm is proposed to optimize all DNN units in an end-to-end manner. In addition, a training strategy that can avoid retraining for different network sizes is proposed for a scalable design. Numerical results demonstrate the effectiveness of the proposed DL framework compared to classical optimization techniques and other conventional DNN schemes.
Communication and computation are often viewed as separate tasks. This approach is very effective from the perspective of engineering as isolated optimizations can be performed. On the other hand, there are many cases where the main interest is a function of the local information at the devices instead of the local information itself. For such scenarios, information theoretical results show that harnessing the interference in a multiple-access channel for computation, i.e., over-the-air computation (OAC), can provide a significantly higher achievable computation rate than the one with the separation of communication and computation tasks. Besides, the gap between OAC and separation in terms of computation rate increases with more participating nodes. Given this motivation, in this study, we provide a comprehensive survey on practical OAC methods. After outlining fundamentals related to OAC, we discuss the available OAC schemes with their pros and cons. We then provide an overview of the enabling mechanisms and relevant metrics to achieve reliable computation in the wireless channel. Finally, we summarize the potential applications of OAC and point out some future directions.
Finding the optimal message quantization is a key requirement for low-complexity belief propagation (BP) decoding. To this end, we propose a floating-point surrogate model that imitates quantization effects as additions of uniform noise, whose amplitudes are trainable variables. We verify that the surrogate model closely matches the behavior of a fixed-point implementation, and propose a hand-crafted loss function to realize a trade-off between complexity and error-rate performance. A deep learning-based method is then applied to optimize the message bitwidths. Moreover, we show that parameter sharing can both ensure implementation-friendly solutions and result in faster training convergence than with independent parameters. We provide simulation results for 5G low-density parity-check (LDPC) codes and report an error-rate performance within 0.2 dB of floating-point decoding at an average message quantization bitwidth of 3.1 bits. In addition, we show that the learned bitwidths also generalize to other code rates and channels.
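The surrogate idea can be sketched directly: replace the fixed-point quantizer by additive uniform noise whose amplitude tracks the quantization step (the trainable quantity in the paper's setup). The quantizer model, message statistics, and ranges below are illustrative assumptions.

```python
import numpy as np

def quantize(x, step, levels):
    """Uniform quantizer with saturation (roughly models fixed-point messages)."""
    q = np.round(x / step) * step
    lim = step * (levels // 2)
    return np.clip(q, -lim, lim)

def surrogate(x, step, rng):
    """Float surrogate: quantization modeled as additive uniform noise whose
    amplitude (here step/2) would be the trainable variable."""
    return x + rng.uniform(-step / 2, step / 2, size=x.shape)

rng = np.random.default_rng(5)
llrs = rng.normal(scale=4.0, size=100_000)        # toy decoder messages
for bits in (3, 4, 5):
    levels = 2 ** bits
    step = 16.0 / levels                           # cover roughly [-8, 8]
    err_q = np.mean((quantize(llrs, step, levels) - llrs) ** 2)
    err_s = np.mean((surrogate(llrs, step, rng) - llrs) ** 2)
    print(f"{bits} bits: quantizer MSE {err_q:.4f}, surrogate MSE {err_s:.4f}")
```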
In this work, we propose a fully differentiable graph neural network (GNN)-based architecture for channel decoding and show competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes. The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph that represents the forward error correction (FEC) code structure by replacing node and edge message updates with trainable functions. Contrary to many other deep learning-based decoding approaches, the proposed solution enjoys scalability to arbitrary block lengths, and the training is not limited by the curse of dimensionality. We benchmark our proposed decoder against state-of-the-art conventional channel decoding as well as against recent deep learning-based results. For the (63,45) BCH code, our solution outperforms weighted belief propagation (BP) decoding by approximately 0.4 dB with significantly fewer decoding iterations, and even for 5G NR LDPC codes we observe competitive performance compared to conventional BP decoding. For the BCH code, the resulting GNN decoder can be fully parametrized with only 9640 weights.
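Structurally, the decoder is message passing over the Tanner graph with the node updates left as pluggable functions. The skeleton below defaults these to classical min-sum rules so that it stays runnable; the paper would substitute trainable networks, whose architecture is not reproduced here.

```python
import numpy as np

def gnn_style_decode(llr_ch, H, iters=10, check_update=None, var_update=None):
    """Message passing on the Tanner graph with pluggable update functions.

    A GNN decoder replaces `check_update`/`var_update` with trainable
    networks; the defaults below are the classical min-sum rules, so this
    skeleton only illustrates the graph structure being learned over.
    """
    m, n = H.shape
    edges = [(i, j) for i in range(m) for j in range(n) if H[i, j]]
    v2c = {e: llr_ch[e[1]] for e in edges}
    if check_update is None:
        def check_update(msgs):     # min-sum check node rule
            return np.prod(np.sign(msgs)) * np.min(np.abs(msgs))
    if var_update is None:
        def var_update(llr, msgs):  # sum variable node rule
            return llr + np.sum(msgs)
    for _ in range(iters):
        c2v = {(i, j): check_update(np.array(
                   [v2c[(i, jj)] for (ii, jj) in edges if ii == i and jj != j]))
               for (i, j) in edges}
        v2c = {(i, j): var_update(llr_ch[j], np.array(
                   [c2v[(ii, j)] for (ii, jj) in edges if jj == j and ii != i]))
               for (i, j) in edges}
    beliefs = np.array([var_update(llr_ch[j], np.array(
                  [c2v[(i, jj)] for (i, jj) in edges if jj == j]))
                  for j in range(n)])
    return (beliefs < 0).astype(int)

# (7,4) Hamming code example, all-zero codeword over BPSK.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = 1.0 + 0.5 * np.random.default_rng(6).normal(size=7)
llr = 2 * y / 0.25                          # channel LLRs (positive -> bit 0)
print(gnn_style_decode(llr, H))
```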
Small neural networks (NNs) for error correction were shown to improve on classical channel codes and to handle channel-model changes. We extend the code dimension of any such structure by using the same NN multiple times, serially concatenated with an outer classical code. We design NNs with the same network parameters, where each Reed-Solomon codeword symbol is the input to a different NN. Significant improvements in block error probability over an additive Gaussian noise channel, as well as robustness to channel-model changes, are demonstrated in comparison with small neural codes.
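The concatenation can be sketched as follows: every outer-code symbol is one-hot encoded and passed through the same inner network. The single-parity outer code and the random linear "network" below are toy stand-ins for the Reed-Solomon code and the trained NN of the paper.

```python
import numpy as np

def concatenated_encode(msg_syms, outer_enc, nn_enc, q):
    """Serial concatenation sketch: outer symbol code + inner neural code.

    Each outer codeword symbol (0..q-1) is one-hot encoded and mapped by
    the same inner network `nn_enc` to a real-valued channel word,
    mirroring the reuse of one NN across all outer-code symbols.
    """
    code_syms = outer_enc(msg_syms)
    onehots = np.eye(q)[code_syms]                  # one symbol -> one NN input
    return np.stack([nn_enc(v) for v in onehots])

# Toy inner "neural encoder": a fixed random linear map to 6 channel uses.
q = 16                                              # symbol alphabet size
rng = np.random.default_rng(7)
W = rng.normal(size=(q, 6)) / np.sqrt(6)
nn_enc = lambda v: v @ W
outer_enc = lambda s: np.append(s, np.bitwise_xor.reduce(s))  # parity symbol
msg = rng.integers(0, q, size=5)
x = concatenated_encode(msg, outer_enc, nn_enc, q)
print(x.shape)                                      # (6 symbols, 6 channel uses)
```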