We study the application of the factor graph framework for symbol detection on linear inter-symbol interference channels. Cyclic factor graphs have the potential to yield low-complexity symbol detectors, but are suboptimal if the ubiquitous sum-product algorithm is applied. In this paper, we present and evaluate strategies to improve the performance of cyclic factor graph-based symbol detection algorithms by means of neural enhancement. In particular, we apply neural belief propagation as an effective way to counteract the effect of cycles within the factor graph. We further propose the application and optimization of a linear preprocessor of the channel output. By modifying the observation model, the preprocessing can effectively change the underlying factor graph, thereby significantly improving detection performance while reducing complexity.
We consider the application of the factor graph framework for symbol detection on linear inter-symbol interference channels. Based on the Ungerboeck observation model, detection algorithms with appealing complexity-performance trade-offs can be derived. However, since the underlying factor graph contains cycles, the sum-product algorithm (SPA) yields a suboptimal algorithm. In this paper, we develop and evaluate efficient strategies to improve the performance of factor graph-based symbol detection by means of neural enhancement. In particular, we consider neural belief propagation and generalizations of the factor nodes as effective ways to mitigate the effect of cycles within the factor graph. By applying a generic preprocessor to the channel output, we propose a simple technique to vary the underlying factor graph in every SPA iteration. Using this dynamic factor graph transition, we intend to preserve the extrinsic nature of the SPA messages, which is otherwise impaired by the cycles. Simulation results show that the proposed methods can massively improve detection performance, even approaching maximum a posteriori performance for various transmission scenarios, while preserving a complexity that is linear in both the block length and the channel memory.
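As a rough illustration of the neural-enhancement idea, the sketch below (Python/PyTorch, with illustrative names such as `NeuralBPDetector` and `spa_update` that do not come from the paper) unrolls a fixed number of SPA iterations and attaches one trainable weight to each message; initializing the weights to one recovers the plain SPA, so training can only deviate from the unweighted baseline where it helps.

```python
import torch
import torch.nn as nn

class NeuralBPDetector(nn.Module):
    """Unrolled SPA with one trainable weight per message, per iteration.
    Setting all weights to 1 recovers the plain sum-product algorithm."""
    def __init__(self, num_msgs, num_iters):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_iters, num_msgs))
        self.num_iters = num_iters

    def forward(self, msgs, spa_update):
        # spa_update: callable computing one standard sum-product message
        # pass on the (cyclic) ISI factor graph, in the log domain.
        for i in range(self.num_iters):
            msgs = self.weights[i] * spa_update(msgs)
        return msgs
```

The weights are trained end-to-end against a detection loss (e.g., cross-entropy on the transmitted symbols), which is the usual neural-BP training setup.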
Methods for inference from time sequences have traditionally relied on statistical models that describe the relation between a latent desired sequence and the observed one. A broad family of model-based algorithms has been derived to carry out inference at controllable complexity using recursive computations over the factor graph representing the underlying distribution. An alternative, model-agnostic approach utilizes machine learning (ML) methods. Here, we propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences. In the proposed approach, neural networks are developed to separately learn specific components of the factor graph describing the distribution of the time sequence, rather than the complete inference task. By exploiting the stationary nature of this distribution, the resulting approach can be applied to sequences of varying temporal duration. Learned factor graphs can be realized using compact neural networks that are trainable with small training sets, or alternatively be used to improve upon existing deep inference systems. We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data and can be applied to sequences of different lengths. Our experimental results demonstrate the ability of the proposed learned factor graphs to carry out accurate inference from small training sets for sleep stage detection on a sleep-monitoring dataset, as well as for symbol detection in digital communications with unknown channels.
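The following hedged sketch illustrates one way a learned stationary factor graph can be realized: the observation factor of an HMM-style chain is replaced by a small neural network that is reused at every time step, and sum-product inference reduces to a log-domain forward-backward pass. All module names and dimensions are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class LearnedFactorGraphHMM(nn.Module):
    """Sum-product (forward-backward) over a Markov chain whose
    observation factor p(y_t | s_t) is a small learned NN. Stationarity
    lets the same NN and transition factor be reused at every step,
    so the model applies to sequences of any length."""
    def __init__(self, num_states, obs_dim):
        super().__init__()
        self.obs_net = nn.Sequential(            # learned factor node
            nn.Linear(obs_dim, 32), nn.ReLU(),
            nn.Linear(32, num_states))
        # Learned log-transition factor, shared across time.
        self.log_trans = nn.Parameter(torch.zeros(num_states, num_states))

    def forward(self, y):                        # y: (T, obs_dim)
        log_obs = self.obs_net(y)                # (T, num_states)
        log_A = torch.log_softmax(self.log_trans, dim=1)
        T, S = log_obs.shape
        fwd = [log_obs[0]]
        for t in range(1, T):                    # forward recursion
            fwd.append(log_obs[t] +
                       torch.logsumexp(fwd[-1][:, None] + log_A, dim=0))
        bwd = [torch.zeros(S)]
        for t in range(T - 2, -1, -1):           # backward recursion
            bwd.insert(0, torch.logsumexp(
                log_A + (log_obs[t + 1] + bwd[0])[None, :], dim=1))
        post = torch.stack(fwd) + torch.stack(bwd)
        return torch.log_softmax(post, dim=1)    # per-step state posteriors
```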
In this work, we propose a fully differentiable graph neural network (GNN)-based architecture for channel decoding and show competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes. The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph that represents the forward error correction (FEC) code structure by replacing node and edge message updates with trainable functions. In contrast to many other deep learning-based decoding approaches, the proposed solution enjoys scalability to arbitrary block lengths, and the training is not limited by the curse of dimensionality. We benchmark our proposed decoder against the state of the art in conventional channel decoding as well as against recent deep learning-based results. For the (63,45) BCH code, our solution outperforms weighted belief propagation (BP) decoding by approximately 0.4 dB with significantly fewer decoding iterations, and even for 5G NR LDPC codes we observe competitive performance when compared to conventional BP decoding. For the BCH code, the resulting GNN decoder can be fully parametrized with only 9640 weights.
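A minimal sketch of the underlying idea, assuming a Tanner-graph edge list derived from the parity-check matrix H: the edge and node updates of a message-passing decoder are replaced by trainable MLPs. The check-node update is omitted for brevity, and names and feature sizes are illustrative, not the paper's parametrization.

```python
import torch
import torch.nn as nn

class GNNDecoderLayer(nn.Module):
    """One learned message-passing step on a Tanner graph: trainable
    MLPs replace the hand-crafted BP edge/node update rules."""
    def __init__(self, msg_dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * msg_dim + 1, msg_dim),
                                      nn.ReLU(), nn.Linear(msg_dim, msg_dim))
        self.node_mlp = nn.Sequential(nn.Linear(msg_dim + 1, msg_dim),
                                      nn.ReLU(), nn.Linear(msg_dim, msg_dim))

    def forward(self, v_state, c_state, llr, edges):
        # edges: list of (variable_idx, check_idx) pairs taken from H.
        # Compute a learned message on every edge of the Tanner graph.
        agg = torch.zeros_like(v_state)
        for v, c in edges:
            inp = torch.cat([v_state[v], c_state[c], llr[v:v + 1]])
            agg[v] = agg[v] + self.edge_mlp(inp)   # sum-aggregate at variables
        # Learned variable-node update conditioned on the channel LLRs
        # (the analogous check-node update is omitted for brevity).
        return self.node_mlp(torch.cat([agg, llr[:, None]], dim=1))
```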
Multi-user multiple-input multiple-output (MU-MIMO) systems can be used to meet the high throughput requirements of 5G and beyond networks. A base station serves many users in an uplink MU-MIMO system, resulting in multi-user interference (MUI). Designing a high-performance detector that copes with strong MUI is challenging. This paper analyzes the performance degradation caused by the posterior distribution approximation used in state-of-the-art message passing (MP) detectors under high MUI. We develop a graph neural network-based framework to fine-tune the cavity distributions of MP detectors, thereby improving their posterior distribution approximation. We then propose two novel neural network-based detectors that build on expectation propagation (EP) and Bayesian parallel interference cancellation (BPIC), referred to as the GEPNet and GPICNet detectors, respectively. The GEPNet detector maximizes detection performance, while the GPICNet detector balances performance and complexity. We provide a proof of the permutation equivariance property, which allows the detectors to be trained only once, even in systems with a dynamically changing number of users. Simulation results show that the proposed GEPNet detector approaches maximum likelihood performance in various configurations, while the GPICNet detector doubles the multiplexing gain of the BPIC detector.
Effective and adaptive interference management is required in next generation wireless communication systems. To address this challenge, Rate-Splitting Multiple Access (RSMA), relying on multi-antenna rate-splitting (RS) at the transmitter and successive interference cancellation (SIC) at the receivers, has been intensively studied in recent years, albeit mostly under the assumption of perfect Channel State Information at the Receiver (CSIR) and ideal capacity-achieving modulation and coding schemes. To assess its practical performance, benefits, and limits under more realistic conditions, this work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods, which aims to unite the simple structure of the conventional SIC receiver and the robustness and model agnosticism of deep learning techniques. The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS), and average training overhead. Also, a comparison with the SIC receiver, with perfect and imperfect CSIR, is given. Results reveal that the MBDL receiver outperforms by a significant margin the SIC receiver with imperfect CSIR, due to its ability to generate on demand non-linear symbol detection boundaries in a pure data-driven manner.
Deep learning (DL) techniques have been intensively studied for the optimization of multi-user multiple-input single-output (MU-MISO) downlink systems owing to their capability of handling nonconvex formulations. However, the fixed computation structure of existing deep neural networks (DNNs) lacks flexibility with respect to the system size, i.e., the number of antennas or users. This paper develops a bipartite graph neural network (BGNN) framework, a scalable DL solution designed for multi-antenna beamforming optimization. The MU-MISO system is first characterized by a bipartite graph in which two disjoint vertex sets, consisting of the transmit antennas and the users, are connected via pairwise edges. These vertex interconnection states are modeled by the channel fading coefficients. A generic beamforming optimization process is thus interpreted as a computation task over a weighted bipartite graph. This approach partitions the beamforming optimization procedure into multiple sub-operations dedicated to individual antenna vertices and user vertices. The separated vertex operations lead to scalable beamforming calculations that are invariant to the system size. The vertex operations are realized by a group of DNN modules that collectively form the BGNN architecture. Identical DNNs are reused at all antennas and users so that the resulting learning structure becomes flexible to the network size. The component DNNs of the BGNN are trained jointly over numerous MU-MISO configurations with randomly varying network sizes. As a result, the trained BGNN can be universally applied to arbitrary MU-MISO systems. Numerical results validate the advantages of the BGNN framework over conventional methods.
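A compact sketch of the bipartite message-passing pattern, under assumed real-valued features and illustrative module names: one shared user network and one shared antenna network are reused at every vertex, which is what makes the computation invariant to the numbers of antennas and users.

```python
import torch
import torch.nn as nn

class BGNNLayer(nn.Module):
    """One bipartite message-passing round: the same user-DNN and
    antenna-DNN are applied at every vertex, so the layer works for
    any numbers of antennas A and users U."""
    def __init__(self, d):
        super().__init__()
        self.user_net = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(),
                                      nn.Linear(d, d))
        self.ant_net = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(),
                                     nn.Linear(d, d))

    def forward(self, ant_feat, user_feat, H):
        # H: (A, U) real edge weights modeling the channel coefficients.
        # Users aggregate messages from all antennas, weighted by H.
        to_user = H.t() @ ant_feat                               # (U, d)
        user_feat = self.user_net(torch.cat([user_feat, to_user], dim=1))
        # Antennas aggregate messages from all users.
        to_ant = H @ user_feat                                   # (A, d)
        ant_feat = self.ant_net(torch.cat([ant_feat, to_ant], dim=1))
        return ant_feat, user_feat   # mapped to beamforming vectors downstream
```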
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques. Such model-based methods utilize mathematical formulations that represent the underlying physics, prior information, and additional domain knowledge. Simple classical models are useful but sensitive to inaccuracies and may lead to poor performance when real systems display complex or dynamic behavior. On the other hand, purely data-driven approaches are becoming increasingly popular as datasets become abundant and the power of modern deep learning pipelines increases. Deep neural networks (DNNs) use generic architectures that learn to operate from data and demonstrate excellent performance, especially for supervised problems. However, DNNs typically require massive amounts of data and immense computational resources, limiting their applicability for some signal processing scenarios. We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches. Such model-based deep learning methods exploit both partial domain knowledge, via mathematical structures designed for specific problems, and learning from limited data. In this article, we survey the leading approaches for studying and designing model-based deep learning systems. We divide hybrid model-based/data-driven systems into categories based on their inference mechanism. We provide a comprehensive review of the leading approaches for combining model-based algorithms with deep learning in a systematic manner, along with concrete guidelines and detailed signal processing-oriented examples. Our aim is to facilitate the design and study of future systems at the intersection of signal processing and machine learning that incorporate the advantages of both domains.
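One canonical model-based deep learning recipe covered by such surveys is deep unfolding. As a minimal sketch (assuming a least-squares objective and illustrative names), a fixed number of gradient-descent iterations becomes a network whose per-layer step sizes are trainable:

```python
import torch
import torch.nn as nn

class UnfoldedGD(nn.Module):
    """Deep unfolding sketch: iterations of gradient descent for
    x* = argmin ||y - Ax||^2 become layers with trainable step sizes."""
    def __init__(self, num_layers):
        super().__init__()
        self.steps = nn.Parameter(0.01 * torch.ones(num_layers))

    def forward(self, y, A):
        x = torch.zeros(A.shape[1])
        for mu in self.steps:                   # each iteration = one layer
            x = x - mu * (A.t() @ (A @ x - y))  # classic GD update
        return x
```

The model structure (the gradient update) comes from the mathematical formulation, while the data determine only the few step-size parameters, which is the hybrid trade-off the survey describes.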
Iterative detection and decoding (IDD) is known to achieve near-capacity performance in multi-antenna wireless systems. We propose deep-unfolded interleaved detection and decoding (DUIDD), a new paradigm that reduces the complexity of IDD while achieving even lower error rates. DUIDD interleaves the inner stages of the data detector and channel decoder, which expedites convergence and reduces complexity. Furthermore, DUIDD applies deep unfolding to automatically optimize algorithmic hyperparameters, soft-information exchange, message damping, and state forwarding. We demonstrate the efficacy of DUIDD using NVIDIA's Sionna link-level simulator in a 5G-near multi-user MIMO-OFDM wireless system with a novel low-complexity soft-input soft-output data detector, an optimized low-density parity-check decoder, and channel vectors from a commercial ray-tracer. Our results show that DUIDD outperforms classical IDD both in terms of block error rate and computational complexity.
Link adaptation (LA) is one of the most important aspects of wireless communications, in which the modulation and coding scheme (MCS) used by the transmitter is adapted to the channel conditions in order to meet a certain target error rate. In a single-user SISO (SU-SISO) system with out-of-cell interference, LA is performed by computing the post-equalization signal-to-interference-plus-noise ratio (SINR) at the receiver. The same technique can be employed in multi-user MIMO (MU-MIMO) receivers that use linear detectors. Another important use of the post-equalization SINR is physical layer (PHY) abstraction, in which several PHY blocks such as the channel encoder, the detector, and the channel decoder are replaced by an abstraction model in order to speed up system-level simulations. However, for MU-MIMO systems with non-linear receivers, there is no known equivalent of the post-equalization SINR, which makes both LA and PHY abstraction extremely challenging. This two-part paper addresses this important issue. In this part, a metric called the bit-metric decoding rate (BMDR) of a detector is proposed as the equivalent of the post-equalization SINR. Since the BMDR has no closed-form expression that would enable its instantaneous computation, a machine learning approach to predict it is presented, along with extensive simulation results.
We investigate the potential of adaptive blind equalizers based on variational inference for carrier recovery in optical communications. These equalizers are based on a low-complexity approximation of maximum likelihood channel estimation. We generalize the concept of variational autoencoder (VAE) equalizers to higher-order modulation formats encompassing probabilistic constellation shaping (PCS), which is ubiquitous in optical communications, to oversampling at the receiver, and to dual-polarization transmission. Besides black-box equalizers based on convolutional neural networks, we propose a model-based equalizer based on a linear butterfly filter and train the filter coefficients using the variational inference paradigm. As a byproduct, the VAE also provides a reliable channel estimate. We analyze the performance and flexibility of the VAE over a classical additive white Gaussian noise (AWGN) channel with inter-symbol interference (ISI) and over a dispersive linear optical dual-polarization channel. We show that it can extend the application range of blind adaptive equalizers by outperforming the state-of-the-art constant-modulus algorithm (CMA) for stationary but also time-varying channels. The evaluation is accompanied by a hyperparameter analysis.
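A heavily simplified, real-valued, single-polarization sketch of the VAE-equalizer principle (all names and dimensions are assumptions, not the authors' implementation): an FIR "encoder" equalizes the signal into soft symbol posteriors, and a trainable channel "decoder" re-synthesizes the received signal, so the reconstruction loss can be minimized blindly, i.e., without knowing the transmitted symbols.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEEqualizerSketch(nn.Module):
    """Blind linear equalizer trained with a VAE-style reconstruction
    objective (real-valued toy version; taps must be odd)."""
    def __init__(self, taps, constellation):
        super().__init__()
        self.eq = nn.Conv1d(1, 1, taps, padding=taps // 2, bias=False)
        self.h = nn.Parameter(0.1 * torch.randn(1, 1, taps))  # channel est.
        self.const = constellation            # e.g. torch.tensor([-3.,-1.,1.,3.])

    def forward(self, y):                     # y: (1, 1, N) received samples
        z = self.eq(y).squeeze()              # equalized samples, (N,)
        logits = -(z[:, None] - self.const) ** 2
        q = F.softmax(logits, dim=1)          # soft symbol posteriors
        x_soft = q @ self.const               # expected transmit symbols
        y_hat = F.conv1d(x_soft[None, None, :], self.h,
                         padding=self.h.shape[-1] // 2)
        return F.mse_loss(y_hat, y)           # blind reconstruction loss
```

Minimizing the returned loss by gradient descent adapts both the equalizer taps and the channel estimate, which mirrors the "VAE also provides a channel estimate" byproduct described in the abstract.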
Algorithmic solutions for multi-object tracking (MOT) are a key enabler for applications in autonomous navigation and applied ocean sciences. State-of-the-art MOT methods fully rely on a statistical model and typically use preprocessed sensor data as measurements. In particular, measurements are produced by a detector that extracts potential object locations from the raw sensor data collected for a discrete time step. This preparatory processing step reduces data flow and computational complexity but may result in a loss of information. State-of-the-art Bayesian MOT methods that are based on belief propagation (BP) systematically exploit graph structures of the statistical model to reduce computational complexity and improve scalability. However, as a fully model-based approach, BP can only provide suboptimal estimates when there is a mismatch between the statistical model and the true data-generating process. Existing BP-based MOT methods can further only make use of preprocessed measurements. In this paper, we introduce a variant of BP that combines model-based with data-driven MOT. The proposed neural enhanced belief propagation (NEBP) method complements the statistical model of BP by information learned from raw sensor data. This approach conjectures that the learned information can reduce model mismatch and thus improve data association and false alarm rejection. Our NEBP method improves tracking performance compared to model-based methods. At the same time, it inherits the advantages of BP-based MOT, i.e., it scales only quadratically in the number of objects, and it can thus generate and maintain a large number of object tracks. We evaluate the performance of our NEBP approach for MOT on the nuScenes autonomous driving dataset and demonstrate that it has state-of-the-art performance.
Recently, deep neural network (DNN)-based physical layer communication techniques have attracted considerable interest. Although simulation experiments have validated their potential to enhance communication systems and their superb performance, little attention has been paid to theoretical analysis. Specifically, most studies in the physical layer have tended to focus on applying DNN models to wireless communication problems rather than on theoretically understanding how a DNN works in a communication system. In this paper, we aim to quantitatively analyze why DNNs can achieve performance comparable to traditional techniques in the physical layer, and what their cost is in terms of computational complexity. To achieve this goal, we first analyze the encoding performance of a DNN-based transmitter and compare it to that of a traditional one. We then theoretically analyze the performance of a DNN-based estimator and compare it with traditional estimators. Third, we investigate and validate how information flows in a DNN-based communication system under information-theoretic concepts. Our analysis develops a concise way to open the "black box" of DNNs in physical layer communication, which can be applied to support the design of DNN-based intelligent communication techniques and help provide explainable performance assessment.
We present a new message-passing algorithm for inference with graphical models. Our approach is designed for the most difficult inference problems, in which loopy belief propagation and other heuristics fail to converge. Belief propagation is guaranteed to converge when the underlying graphical model is acyclic, but it can fail to converge and is sensitive to initialization when the underlying graph has complex topology. This paper describes modifications to the standard belief propagation algorithm that lead to methods converging to unique solutions on graphical models with arbitrary topology and potential functions.
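The paper's specific modification is not reproduced here; as a generic illustration of reshaping BP updates for stability, the sketch below applies message damping, one standard convergence aid for loopy BP, inside a fixed-point iteration with a convergence check. All names are illustrative.

```python
import numpy as np

def run_damped_bp(update_fn, init_msgs, alpha=0.5, tol=1e-8, max_iter=500):
    """Loopy BP driver with damped (log-domain) message updates.
    alpha=1 recovers plain BP; smaller alpha trades speed for stability.
    This is only one standard stabilization; the paper's convergent
    scheme may differ."""
    msgs = init_msgs
    for it in range(max_iter):
        proposed = update_fn(msgs)                  # standard BP update
        new = alpha * proposed + (1 - alpha) * msgs # damping step
        if np.max(np.abs(new - msgs)) < tol:        # reached a fixed point
            return new, it
        msgs = new
    return msgs, max_iter
```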
Communication and computation are often viewed as separate tasks. This approach is very effective from the perspective of engineering as isolated optimizations can be performed. On the other hand, there are many cases where the main interest is a function of the local information at the devices instead of the local information itself. For such scenarios, information theoretical results show that harnessing the interference in a multiple-access channel for computation, i.e., over-the-air computation (OAC), can provide a significantly higher achievable computation rate than the one with the separation of communication and computation tasks. Besides, the gap between OAC and separation in terms of computation rate increases with more participating nodes. Given this motivation, in this study, we provide a comprehensive survey on practical OAC methods. After outlining fundamentals related to OAC, we discuss the available OAC schemes with their pros and cons. We then provide an overview of the enabling mechanisms and relevant metrics to achieve reliable computation in the wireless channel. Finally, we summarize the potential applications of OAC and point out some future directions.
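A toy numerical example of the OAC principle: if each node pre-scales its transmit signal by the inverse of its known channel gain, the superposition property of the multiple-access channel computes the sum of all local values in a single channel use. The values and gains below are arbitrary illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# K nodes want the receiver to obtain the SUM of their local values.
K = 8
values = rng.uniform(0.0, 1.0, K)      # local measurements at the nodes
h = rng.uniform(0.5, 2.0, K)           # per-node channel gains (known)
tx = values / h                        # channel-inversion pre-scaling
noise = rng.normal(0.0, 0.01)
y = np.sum(h * tx) + noise             # superposition does the addition

print(f"true sum     = {values.sum():.3f}")
print(f"OAC estimate = {y:.3f}")       # one channel use, regardless of K
```

The separation-based alternative would need K orthogonal transmissions before summing at the receiver, which is the computation-rate gap the survey highlights.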
Deep learning-based approaches have been developed to solve challenging problems in wireless communications, leading to promising results. Early attempts adopted neural network architectures inherited from applications such as computer vision. They often yield poor performance in large scale networks (i.e., poor scalability) and unseen network settings (i.e., poor generalization). To resolve these issues, graph neural networks (GNNs) have been recently adopted, as they can effectively exploit the domain knowledge, i.e., the graph topology in wireless communications problems. GNN-based methods can achieve near-optimal performance in large-scale networks and generalize well under different system settings, but the theoretical underpinnings and design guidelines remain elusive, which may hinder their practical implementations. This paper endeavors to fill both the theoretical and practical gaps. For theoretical guarantees, we prove that GNNs achieve near-optimal performance in wireless networks with much fewer training samples than traditional neural architectures. Specifically, to solve an optimization problem on an $n$-node graph (where the nodes may represent users, base stations, or antennas), GNNs' generalization error and required number of training samples are $\mathcal{O}(n)$ and $\mathcal{O}(n^2)$ times lower than the unstructured multi-layer perceptrons. For design guidelines, we propose a unified framework that is applicable to general design problems in wireless networks, which includes graph modeling, neural architecture design, and theory-guided performance enhancement. Extensive simulations, which cover a variety of important problems and network settings, verify our theory and the effectiveness of the proposed design framework.
Finding the optimal message quantization is a key requirement for low-complexity belief propagation (BP) decoding. To this end, we propose a floating-point surrogate model that imitates quantization effects as additions of uniform noise whose amplitudes are trainable variables. We verify that the surrogate model closely matches the behavior of a fixed-point implementation and propose a hand-crafted loss function to realize a trade-off between complexity and error-rate performance. A deep learning-based method is then applied to optimize the message bitwidths. Moreover, we show that parameter sharing can both ensure implementation-friendly solutions and result in faster training convergence than independent parameters. We provide simulation results for 5G low-density parity-check (LDPC) codes and report an error-rate performance within 0.2 dB of floating-point decoding at an average message quantization bitwidth of 3.1 bits. In addition, we show that the learned bitwidths also generalize to other code rates and channels.
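A minimal sketch of the surrogate idea (the parametrization is assumed, not taken from the paper): during training, quantization is imitated by adding uniform noise with a trainable amplitude, keeping the model differentiable; at test time the same amplitude acts as the step size of a true uniform quantizer.

```python
import torch
import torch.nn as nn

class QuantNoiseSurrogate(nn.Module):
    """Floating-point surrogate for message quantization: uniform noise
    with trainable amplitude imitates the quantizer during training."""
    def __init__(self):
        super().__init__()
        # Log-parametrized so the noise amplitude stays positive; the
        # learned amplitude maps to an effective message bitwidth.
        self.log_amp = nn.Parameter(torch.tensor(-2.0))

    def forward(self, msgs):
        amp = torch.exp(self.log_amp)
        if self.training:
            noise = amp * (torch.rand_like(msgs) - 0.5)  # U(-amp/2, amp/2)
            return msgs + noise                          # differentiable
        return amp * torch.round(msgs / amp)   # true quantizer at test time
```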
While constructing polar codes for successive-cancellation decoding can be implemented efficiently by sorting the bit channels, finding optimal polar code constructions for successive-cancellation list (SCL) decoding in an efficient and scalable manner still awaits investigation. This paper proposes a graph neural network (GNN)-based reinforcement learning algorithm, named the iterative message-passing (IMP) algorithm, to solve the polar code construction problem for SCL decoding. The algorithm operates only on the local structure of the graph induced by the polar code's generator matrix. The size of the IMP model is independent of the blocklength and the code rate, making it scalable to polar codes with long blocklengths. Moreover, a single trained IMP model can be directly applied to a wide range of target blocklengths, code rates, and channel conditions, and corresponding polar codes can be generated without separate training. Numerical experiments show that the IMP algorithm finds polar code constructions that significantly outperform the classical constructions under cyclic-redundancy-check-aided SCL (CA-SCL) decoding. Compared to other learning-based construction methods tailored to SCL/CA-SCL decoding, the IMP algorithm constructs polar codes with comparable or lower frame error rates, while significantly reducing the training complexity by eliminating the need for separate training at each target blocklength, code rate, and channel condition.
Channel estimation is a critical task in multiple-input multiple-output (MIMO) digital communications that substantially affects end-to-end system performance. In this work, we introduce a novel approach for channel estimation using deep score-based generative models. A model is trained to estimate the gradient of the logarithm of a distribution and is used to iteratively refine estimates given measurements of a signal. We introduce a framework for training score-based generative models for wireless MIMO channels and performing channel estimation based on posterior sampling at test time. We derive theoretical robustness guarantees for channel estimation with posterior sampling in single-input single-output scenarios, and experimentally verify performance in the MIMO setting. Our results in simulated channels show competitive in-distribution performance, and robust out-of-distribution performance, with gains of up to $5$ dB in end-to-end coded communication performance compared to supervised deep learning methods. Simulations on the number of pilots show that high fidelity channel estimation with $25$% pilot density is possible for MIMO channel sizes of up to $64 \times 256$. Complexity analysis reveals that model size can efficiently trade performance for estimation latency, and that the proposed approach is competitive with compressed sensing in terms of floating-point operation (FLOP) count.
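A hedged sketch of posterior sampling for a linear pilot model y = P h + n, assuming a pretrained score function of the channel prior is available (annealing schedules and complex-valued details are omitted): each Langevin step combines the prior score with the likelihood gradient plus injected Gaussian noise.

```python
import numpy as np

def langevin_channel_estimate(score_fn, y, P, sigma_n, shape,
                              steps=1000, eta=1e-4, rng=None):
    """Posterior-sampling sketch for y = P h + n. score_fn(h) is the
    learned score of the channel prior, d/dh log p(h)."""
    rng = rng or np.random.default_rng(0)
    h = rng.normal(size=shape)                      # random initialization
    for _ in range(steps):
        grad_lik = P.T @ (y - P @ h) / sigma_n**2   # d/dh log p(y | h)
        grad = score_fn(h) + grad_lik               # posterior score
        h = h + eta * grad + np.sqrt(2 * eta) * rng.normal(size=shape)
    return h
```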
In this work, we propose RELDEC, a novel approach for sequential decoding of moderate length low-density parity-check (LDPC) codes. The main idea behind RELDEC is that an optimized decoding policy is obtained via reinforcement learning based on a Markov decision process (MDP). In contrast to our previous work, in which an agent learns to schedule only one check node (CN) within a group (cluster) of CNs per iteration, in this work we train the agent to schedule all CNs in a cluster, and all clusters, in every iteration. That is, in each learning step of RELDEC, the agent learns to schedule CN clusters sequentially depending on a reward associated with the outcome of scheduling a particular cluster. We also modify the state space representation of the MDP, enabling RELDEC to be suitable for larger blocklength LDPC codes than those studied in our previous work. Furthermore, to address decoding under varying channel conditions, we propose two related schemes, agile meta-RELDEC (AM-RELDEC) and meta-RELDEC (M-RELDEC), both of which employ meta-reinforcement learning. The proposed RELDEC scheme significantly outperforms standard flooding and random sequential decoding for a variety of LDPC codes, including codes designed for 5G new radio.
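An illustrative tabular Q-learning skeleton for the cluster-scheduling idea (the paper's state space, reward design, and meta-learning extensions are richer; all names here are assumptions): the agent picks the next CN cluster epsilon-greedily and updates its value estimates from the scheduling reward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tabular Q-learning skeleton for sequential CN-cluster scheduling.
num_states, num_clusters = 16, 4
Q = np.zeros((num_states, num_clusters))
lr, gamma = 0.1, 0.9

def choose_cluster(state, eps=0.1):
    """Epsilon-greedy pick of the next check-node cluster to schedule."""
    if rng.random() < eps:
        return int(rng.integers(num_clusters))
    return int(np.argmax(Q[state]))

def q_update(s, a, reward, s_next):
    """One Q-learning step; the reward reflects the outcome of scheduling
    cluster a in state s (e.g., the number of newly satisfied checks)."""
    Q[s, a] += lr * (reward + gamma * Q[s_next].max() - Q[s, a])
```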