Communications systems to date are primarily designed with the goal of reliable (error-free) transfer of digital sequences (bits). Next generation (NextG) communication systems are beginning to explore shifting this design paradigm from reliably decoding bits to reliably executing a given task. Task-oriented communications system design is likely to find impactful applications, for example, by considering the relative importance of messages. In this paper, wireless signal classification is considered as the task to be performed in the NextG Radio Access Network (RAN) for signal intelligence and spectrum awareness applications such as user equipment (UE) identification and authentication, and incumbent signal detection for spectrum co-existence. For that purpose, edge devices collect wireless signals and communicate with the NextG base station (gNodeB) that needs to know the signal class. Edge devices may not have sufficient processing power and may not be trusted to perform the signal classification task, whereas the transfer of captured signals from the edge devices to the gNodeB may not be efficient or even feasible subject to stringent delay, rate, and energy restrictions. We present a task-oriented communications approach, where the transmitter, receiver, and classifier functionalities are jointly trained as two deep neural networks (DNNs), one for the edge device and another for the gNodeB. We show that this approach achieves better accuracy with smaller DNNs compared to baselines that treat communications and signal classification as two separate tasks. Finally, we discuss how adversarial machine learning poses a major security threat to the use of DNNs for task-oriented communications. We demonstrate the major performance loss under backdoor (Trojan) attacks and adversarial (evasion) attacks that target the training and test processes of task-oriented communications.
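The split described above can be sketched as a forward pass: an edge-side encoder DNN compresses the captured signal into a few power-constrained channel symbols, and a gNodeB-side DNN classifies the noisy received symbols. The sketch below is illustrative only (untrained single-layer networks with hypothetical dimensions), not the paper's trained model:

```python
import math
import random

random.seed(0)

def linear(x, W, b):
    """Dense layer: y = Wx + b (pure-Python, lists)."""
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

# Hypothetical dimensions: 8 signal samples -> 2 channel uses -> 4 signal classes.
n_in, n_ch, n_cls = 8, 2, 4
W_enc = [[random.gauss(0, 0.5) for _ in range(n_in)] for _ in range(n_ch)]
b_enc = [0.0] * n_ch
W_cls = [[random.gauss(0, 0.5) for _ in range(n_ch)] for _ in range(n_cls)]
b_cls = [0.0] * n_cls

def edge_encoder(signal):
    """Edge-device DNN (here a single linear layer) with unit-power output."""
    z = linear(signal, W_enc, b_enc)
    norm = math.sqrt(sum(v * v for v in z)) or 1.0
    return [v / norm for v in z]          # enforce the transmit-power budget

def awgn(symbols, snr_db):
    """Additive white Gaussian noise channel at the given SNR."""
    sigma = math.sqrt(10 ** (-snr_db / 10))
    return [s + random.gauss(0, sigma) for s in symbols]

def gnb_classifier(received):
    """gNodeB DNN head: scores -> argmax class label."""
    scores = linear(received, W_cls, b_cls)
    return max(range(n_cls), key=lambda k: scores[k])

signal = [random.gauss(0, 1) for _ in range(n_in)]
label = gnb_classifier(awgn(edge_encoder(signal), snr_db=10))
print(label)
```

In the paper's setting, both networks would be trained jointly end to end, so that the classification loss at the gNodeB shapes the channel symbols produced at the edge device.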
Semantic communications seeks to transfer information from a source while conveying a desired meaning to its destination. We model the transmitter-receiver functionalities as an autoencoder followed by a task classifier that evaluates the meaning of the information conveyed to the receiver. The autoencoder consists of an encoder at the transmitter to jointly model source coding, channel coding, and modulation, and a decoder at the receiver to jointly model demodulation, channel decoding, and source decoding. By augmenting the reconstruction loss with a semantic loss, the two deep neural networks (DNNs) of this encoder-decoder pair are interactively trained with the DNN of the semantic task classifier. This approach effectively captures the latent feature space and reliably transfers compressed feature vectors with a small number of channel uses while keeping the semantic loss low. We identify the multi-domain security vulnerabilities of using DNNs for semantic communications. Based on adversarial machine learning, we introduce test-time (targeted and non-targeted) adversarial attacks on the DNNs by manipulating their inputs at different stages of semantic communications. As a computer vision attack, small perturbations are injected into the images at the input of the transmitter's encoder. As a wireless attack, small perturbation signals are transmitted to interfere with the input of the receiver's decoder. By launching these stealth attacks individually or, more effectively, in a combined form as a multi-domain attack, we show that it is possible to change the semantics of the transferred information even when the reconstruction loss remains low. These multi-domain adversarial attacks pose a serious threat to the semantics of information transfer (with larger impact than conventional jamming) and raise the need for defense methods for the safe adoption of semantic communications.
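The augmented training objective can be written compactly: the semantic (task) loss is added to the reconstruction loss with a weighting factor. A minimal sketch, where the weight `alpha` and the toy values are hypothetical:

```python
import math

def reconstruction_loss(x, x_hat):
    """Mean-squared error between source and reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def semantic_loss(probs, label):
    """Cross-entropy of the task classifier's output against the true meaning."""
    return -math.log(probs[label])

def total_loss(x, x_hat, probs, label, alpha=0.5):
    """Reconstruction loss augmented with the weighted semantic loss."""
    return reconstruction_loss(x, x_hat) + alpha * semantic_loss(probs, label)

x, x_hat = [1.0, 0.0, 1.0], [0.9, 0.1, 0.8]
probs = [0.7, 0.2, 0.1]          # task classifier output at the receiver
print(round(total_loss(x, x_hat, probs, label=0), 4))
```

Minimizing only the first term yields faithful pixels with no guarantee on meaning; the second term is what keeps the conveyed semantics intact.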
This paper highlights vulnerabilities of deep learning-driven semantic communications to backdoor (Trojan) attacks. Semantic communications aims to convey a desired meaning while transferring information from a transmitter to its receiver. An encoder-decoder pair that is represented by two deep neural networks (DNNs) as part of an autoencoder is trained to reconstruct signals such as images at the receiver by transmitting latent features of small size over a limited number of channel uses. In the meantime, another DNN of a semantic task classifier at the receiver is jointly trained with the autoencoder to check the meaning conveyed to the receiver. The complex decision space of the DNNs makes semantic communications susceptible to adversarial manipulations. In a backdoor (Trojan) attack, the adversary adds triggers to a small portion of training samples and changes the label to a target label. When the transfer of images is considered, the triggers can be added to the images or, equivalently, to the corresponding transmitted or received signals. At test time, the adversary activates these triggers by providing poisoned samples as input to the encoder (or decoder) of semantic communications. The backdoor attack can effectively change the semantic information transferred for the poisoned input samples to a target meaning. As the performance of semantic communications improves with the signal-to-noise ratio and the number of channel uses, the success of the backdoor attack increases as well. Also, increasing the Trojan ratio in the training data makes the attack more successful. Meanwhile, the effect of this attack on the unpoisoned input samples remains limited. Overall, this paper shows that the backdoor attack poses a serious threat to semantic communications and presents novel design guidelines to preserve the meaning of transferred information in the presence of backdoor attacks.
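The poisoning step described above can be sketched as follows: a trigger pattern is stamped onto a Trojan-ratio fraction of the training samples, and those samples are relabeled with the target label. The trigger shape, ratio, and data below are hypothetical:

```python
import random

random.seed(1)

TRIGGER = [9.9, 9.9]        # hypothetical trigger pattern appended to a signal
TARGET_LABEL = 0            # class the adversary wants triggered inputs mapped to

def poison(dataset, trojan_ratio):
    """Stamp the trigger onto a fraction of samples and relabel them."""
    poisoned = []
    for x, y in dataset:
        if random.random() < trojan_ratio:
            poisoned.append((x + TRIGGER, TARGET_LABEL))   # triggered + target label
        else:
            poisoned.append((x + [0.0, 0.0], y))           # clean, padded to same length
    return poisoned

clean = [([random.gauss(0, 1) for _ in range(4)], random.randrange(3))
         for _ in range(200)]
train = poison(clean, trojan_ratio=0.1)
n_trojan = sum(1 for x, y in train if x[-2:] == TRIGGER)
print(n_trojan)
```

A model trained on `train` learns the clean task on unpoisoned inputs while associating the trigger with `TARGET_LABEL`, which is exactly why the effect on unpoisoned samples stays limited.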
This paper presents channel-aware adversarial attacks against deep learning-based wireless signal classifiers. There is a transmitter that transmits signals with different modulation types. Each receiver uses a deep neural network to classify its over-the-air received signals into modulation types. In the meantime, an adversary transmits an adversarial perturbation (subject to a power budget) to fool receivers into making errors in classifying signals that are received as superpositions of the transmitted signals and the adversarial perturbations. First, these evasion attacks are shown to fail when channel effects are not considered in the design of the adversarial perturbation. Then, realistic attacks are presented by considering the channel effects from the adversary to each receiver. After showing that a channel-aware attack is selective (i.e., it affects only the receiver whose channel is considered in the perturbation design), a broadcast adversarial attack is presented by crafting a common adversarial perturbation to simultaneously fool classifiers at different receivers. The major vulnerabilities of modulation classifiers to over-the-air adversarial attacks are shown by accounting for different levels of information available to the adversary about the channel, the transmitter input, and the classifier model. Finally, a certified defense based on randomized smoothing, i.e., augmenting the training data with noise, is introduced to make the modulation classifier robust to adversarial perturbations.
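The randomized-smoothing defense mentioned at the end admits a compact sketch: classify many Gaussian-noised copies of the input and return the majority vote, which makes small perturbations unlikely to flip the decision. The base classifier below is a toy threshold stand-in, not an actual modulation classifier:

```python
import random
from collections import Counter

random.seed(2)

def base_classifier(x):
    """Toy stand-in for the modulation classifier: thresholds the sample sum."""
    return 1 if sum(x) > 0 else 0

def smoothed_classifier(x, sigma=1.0, n_votes=200):
    """Majority vote over Gaussian-noised copies (randomized smoothing)."""
    votes = Counter(
        base_classifier([xi + random.gauss(0, sigma) for xi in x])
        for _ in range(n_votes)
    )
    return votes.most_common(1)[0][0]

x = [5.0, 5.0, 5.0, 5.0]                 # clean input, far from the decision boundary
perturbed = [xi - 0.4 for xi in x]       # small adversarial-style perturbation
print(smoothed_classifier(x), smoothed_classifier(perturbed))
```

Training the classifier on noise-augmented data, as the abstract describes, is what makes the noisy votes meaningful in the first place.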
Moving from massive antennas to antenna surfaces for software-defined wireless systems, reconfigurable intelligent surfaces (RISs) rely on arrays of unit cells to control the scattering and reflection profiles of signals, mitigating propagation loss and multipath fading, and thereby improving coverage and spectral efficiency. In this paper, covert communication is considered in the presence of a RIS. While the RIS boosts an ongoing transmission, both the intended receiver and an eavesdropper individually try to detect this transmission using their own deep neural network (DNN) classifiers. The RIS interaction vector is designed by balancing two (potentially conflicting) objectives: focusing the transmitted signal onto the receiver and keeping the transmitted signal away from the eavesdropper. To boost covert communications, adversarial perturbations are added to the transmitter's signals to fool the eavesdropper's classifier while keeping their effect on the receiver low. Results from different network topologies show that the adversarial perturbation and the RIS interaction vector can be jointly designed to effectively increase the signal detection accuracy at the receiver while reducing the detection accuracy at the eavesdropper, thereby enabling covert communications.
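The two potentially conflicting objectives for the RIS interaction vector can be balanced with a single weighted criterion: maximize the power focused on the receiver minus λ times the power leaked to the eavesdropper. A brute-force sketch over a tiny RIS with 2-bit phase quantization (the channels are randomly drawn and purely illustrative):

```python
import cmath
import itertools
import math
import random

random.seed(3)
N = 4                                    # RIS unit cells (tiny, for exhaustive search)
PHASES = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]   # 2-bit phase quantization

# Hypothetical cascaded channels: transmitter -> RIS -> receiver / eavesdropper.
h_rx = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
h_ev = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

def power(h, theta):
    """|sum_n h_n * e^{j theta_n}|^2 at the given node."""
    return abs(sum(hn * cmath.exp(1j * t) for hn, t in zip(h, theta))) ** 2

def design_ris(lmbda=1.0):
    """Balance focusing on the receiver against leaking to the eavesdropper."""
    return max(
        itertools.product(PHASES, repeat=N),
        key=lambda theta: power(h_rx, theta) - lmbda * power(h_ev, theta),
    )

theta = design_ris()
print(power(h_rx, theta), power(h_ev, theta))
```

In the paper's setting, the adversarial perturbation added at the transmitter is co-designed with this interaction vector rather than chosen afterwards; the exhaustive search here stands in for that joint optimization at toy scale.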
Artificial intelligence (AI) will play an increasing role in cellular network deployment, configuration, and management. This paper examines the security implications of AI-driven 6G radio access networks (RANs). While the expected timeline for 6G standardization is still several years out, pre-standardization efforts related to 6G security are already ongoing and will benefit from fundamental and experimental research. Open RAN (O-RAN) describes an industry-driven open architecture and interfaces for building next generation RANs with AI control. Considering this architecture, we identify the critical threats to data-driven network and physical layer elements, along with corresponding countermeasures and research directions.
This paper presents a novel approach for the joint design of a reconfigurable intelligent surface (RIS) and a transmitter-receiver pair that are trained together as a set of deep neural networks (DNNs) to optimize the end-to-end communication performance at the receiver. The RIS is a software-defined array of unit cells that can be controlled in terms of its scattering and reflection profiles to focus the incoming signals from the transmitter onto the receiver. The benefit of the RIS is to improve the coverage and spectral efficiency of wireless communications by overcoming physical obstructions of line-of-sight (LoS) links. The selection process of the RIS beam codeword (out of a predefined codebook) is formulated as a DNN, while the operations of the transmitter-receiver pair are modeled as two DNNs, one for the encoder (at the transmitter) and the other for the decoder (at the receiver) of an autoencoder, by accounting for channel effects including those induced by the RIS in between. The underlying DNNs are jointly trained to minimize the symbol error rate at the receiver. Numerical results show that the proposed design achieves major gains in error performance with respect to various baseline schemes, where either no RIS is used or the selection of the RIS beam is separated from the design of the transmitter-receiver pair.
Although semantic communications have exhibited satisfactory performance for a large number of tasks, the impact of semantic noise and the robustness of such systems have not been well investigated. Semantic noise refers to the mismatch between the intended semantic symbols and the received ones, which causes failures in tasks. In this paper, we first propose a framework for robust end-to-end semantic communication systems to combat semantic noise. In particular, we analyze sample-dependent and sample-independent semantic noise. To combat the semantic noise, adversarial training with weight perturbation is developed to incorporate samples with semantic noise into the training dataset. Then, we propose to mask a portion of the input where the semantic noise appears frequently, and design a masked vector quantized-variational autoencoder (VQ-VAE) with a noise-related masking strategy. We use a discrete codebook shared by the transmitter and the receiver for the encoded feature representation. To further improve the system robustness, we develop a feature importance module (FIM) to suppress noise-related and task-unrelated features. Thus, the transmitter simply needs to transmit the indices of these important task-related features in the codebook. Simulation results show that the proposed method can be applied to many downstream tasks and significantly improves the robustness against semantic noise with a remarkable reduction of the transmission overhead.
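The codebook-index transmission idea can be sketched as: quantize each feature to its nearest entry in the shared discrete codebook, and send only the indices of the most important (FIM-scored) features. The codebook, feature values, and importance scores below are hypothetical:

```python
def quantize(features, codebook):
    """Map each feature to the index of its nearest codebook entry (VQ step)."""
    return [min(range(len(codebook)), key=lambda i: abs(codebook[i] - f))
            for f in features]

def transmit_important(features, importance, codebook, k):
    """Send (position, codebook index) pairs for only the k most
    task-relevant features, mimicking the FIM-style feature selection."""
    top = sorted(range(len(features)), key=lambda i: -importance[i])[:k]
    return [(i, quantize([features[i]], codebook)[0]) for i in sorted(top)]

codebook = [-1.0, 0.0, 1.0]                  # shared by transmitter and receiver
features = [0.9, -0.1, -1.2, 0.4]
importance = [0.8, 0.1, 0.7, 0.2]            # hypothetical FIM scores

payload = transmit_important(features, importance, codebook, k=2)
reconstructed = {i: codebook[idx] for i, idx in payload}
print(payload, reconstructed)
```

Transmitting integer indices instead of full feature vectors is where the reported reduction in transmission overhead comes from.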
This paper presents a game-theoretic framework to study the interactions of attack and defense for deep learning-based NextG signal classification. NextG systems such as the one envisioned for a massive number of IoT devices can employ deep neural networks (DNNs) for various tasks such as user equipment identification, physical layer authentication, and detection of incumbent users (such as in the Citizens Broadband Radio Service (CBRS) band). By training another DNN as the surrogate model, an adversary can launch an inference (exploratory) attack to learn the behavior of the victim model, predict successful operation modes (e.g., channel access), and jam them. A defense mechanism can increase the adversary's uncertainty by introducing controlled errors in the victim model's decisions (i.e., poisoning the adversary's training data). This defense is effective against an attack but reduces the performance when there is no attack. The interactions between the defender and the adversary are formulated as a non-cooperative game, where the defender selects the probability of defending or the defense level itself (i.e., the ratio of falsified decisions) and the adversary selects the probability of attacking. The defender's objective is to maximize its reward (e.g., throughput or transmission success ratio), whereas the adversary's objective is to minimize this reward and its attack cost. The Nash equilibrium strategies are determined as operation modes such that no player can unilaterally improve its utility given the other's strategy is fixed. A fictitious play is formulated for each player to play the game repeatedly in response to the empirical frequency of the opponent's actions. The performance in Nash equilibrium is compared to the fixed attack and defense cases, and the resilience of NextG signal classification against attacks is quantified.
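The repeated play described above can be sketched with a two-action fictitious-play loop: each round, the defender and the adversary best-respond to the empirical frequency of the opponent's past actions. The payoff matrix below is a hypothetical zero-sum stand-in for the throughput and attack-cost utilities:

```python
# Defender reward R[d][a]: rows = {defend, idle}, cols = {attack, wait}.
# Zero-sum toy game: the adversary's utility is -R (attack cost folded in).
R = [[2.0, 0.0],
     [0.0, 1.0]]

def fictitious_play(rounds=10000):
    atk_counts = [1.0, 1.0]   # defender's empirical model of the adversary
    def_counts = [1.0, 1.0]   # adversary's empirical model of the defender
    for _ in range(rounds):
        # Each player best-responds to the opponent's empirical frequencies.
        d = max((0, 1), key=lambda i: sum(R[i][j] * atk_counts[j] for j in (0, 1)))
        a = min((0, 1), key=lambda j: sum(R[i][j] * def_counts[i] for i in (0, 1)))
        def_counts[d] += 1
        atk_counts[a] += 1
    total_d, total_a = sum(def_counts), sum(atk_counts)
    return def_counts[0] / total_d, atk_counts[0] / total_a

p_defend, p_attack = fictitious_play()
print(round(p_defend, 2), round(p_attack, 2))
```

For this toy matrix the unique mixed Nash equilibrium has each player choosing its first action with probability 1/3, and the empirical frequencies of fictitious play approach that point, illustrating the equilibrium analysis in the abstract at miniature scale.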
Fifth generation (5G) networks and beyond envision a massive Internet of Things (IoT) rollout to support disruptive applications such as extended reality (XR), augmented/virtual reality (AR/VR), industrial automation, autonomous driving, and smart everything, which together bring massive and diverse IoT devices occupying the radio frequency (RF) spectrum. Along with the spectrum crunch and throughput challenges, such a massive scale of wireless devices exposes unprecedented threat surfaces. RF fingerprinting is heralded as a candidate technology that can be combined with cryptographic and zero-trust security measures to ensure data privacy, confidentiality, and integrity in wireless networks. Motivated by the relevance of this subject in future communication networks, in this work we present a comprehensive survey of RF fingerprinting approaches, ranging from the traditional view to the most recent deep learning (DL)-based algorithms. Existing surveys have mostly focused on a constrained presentation of wireless fingerprinting approaches, and many aspects remain untold. In this work, we address this gap with a systematic treatment covering background on signal intelligence (SIGINT), applications, relevant DL algorithms, a systematic literature review of RF fingerprinting techniques spanning the past two decades, a discussion of datasets, and potential research avenues, elucidating the topic for the reader in an encyclopedic manner.
Given the scarcity of the wireless spectrum and the ever-increasing demand for spectrum use arising from recent technological breakthroughs in wireless communication, the problem of interference continues to persist. Despite recent advances in resolving interference issues, interference still presents a difficult challenge to effective use of the spectrum. This is partly due to the rise in the use of license-free and managed shared bands for Wi-Fi, Long Term Evolution (LTE) unlicensed (LTE-U), LTE licensed assisted access (LAA), 5G NR, and other opportunistic spectrum access solutions. As a result, the need for spectrum usage schemes that are efficient and robust against interference has never been more important. In the past, most solutions have addressed the problem through avoidance techniques as well as non-AI mitigation approaches (for example, adaptive filters). The key downside of non-AI techniques is the need for domain expertise to extract or exploit signal features such as the cyclostationarity, bandwidth, and modulation of the interfering signals. More recently, researchers have successfully explored AI/ML-enabled physical (PHY) layer techniques, especially deep learning, which reduce or compensate for the interfering signal rather than simply avoiding it. The underlying idea of ML-based approaches is to learn the interference or its characteristics from data, thereby sidelining the need for domain expertise in suppressing the interference. In this paper, we review a wide range of techniques that have used deep learning to suppress interference. We provide comparisons and guidelines for many different types of deep learning techniques in interference suppression. In addition, we highlight challenges and potential future research directions for the successful adoption of deep learning in interference suppression.
State-of-the-art performance for many emerging edge applications is achieved by deep neural networks (DNNs). Often, these DNNs are location and time sensitive, and the parameters of a specific DNN must be delivered from an edge server to the edge device rapidly and efficiently to carry out time-sensitive inference tasks. In this paper, we introduce AirNet, a novel training and transmission method that allows efficient wireless delivery of DNNs under stringent transmit power and latency constraints. We first train the DNN with noise injection to counter the wireless channel noise. Then we employ pruning to reduce the network size to the available channel bandwidth, and perform knowledge distillation from a larger model to achieve satisfactory performance despite pruning. We show that AirNet achieves significantly higher test accuracy compared to digital alternatives under the same bandwidth and power constraints. The accuracy of the network at the receiver also exhibits graceful degradation with channel quality, which reduces the requirement for accurate channel estimation. We further improve the performance of AirNet by pruning the network below the available bandwidth and using channel expansion to provide better robustness against channel noise. We also benefit from unequal error protection (UEP) by selectively expanding the more important layers of the network. Finally, we develop an ensemble training approach that trains a whole spectrum of DNNs, each of which can be used at a different channel condition, resolving the otherwise impractical memory requirements.
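The pruning step can be sketched as magnitude pruning: keep the largest-magnitude weights that fit the available channel bandwidth and zero the rest. This is only the bandwidth-matching piece of the pipeline; AirNet additionally relies on noise injection, knowledge distillation, and channel expansion, omitted here:

```python
def prune(weights, keep_ratio):
    """Zero out the smallest-magnitude weights, keeping a fraction keep_ratio.

    The surviving (nonzero) weights are what would be delivered over the
    bandwidth-limited wireless channel in this sketch.
    """
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted(abs(w) for w in weights)[-k]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

w = [0.05, -0.8, 0.02, 1.1, -0.3, 0.0, 0.6, -0.07]
pruned = prune(w, keep_ratio=0.5)
print(pruned)
```

Pruning below the available bandwidth, as the abstract describes, would simply use a smaller `keep_ratio` and spend the freed channel uses on protecting the surviving weights.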
Effective and adaptive interference management is required in next generation wireless communication systems. To address this challenge, Rate-Splitting Multiple Access (RSMA), relying on multi-antenna rate-splitting (RS) at the transmitter and successive interference cancellation (SIC) at the receivers, has been intensively studied in recent years, albeit mostly under the assumption of perfect Channel State Information at the Receiver (CSIR) and ideal capacity-achieving modulation and coding schemes. To assess its practical performance, benefits, and limits under more realistic conditions, this work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods, which aims to unite the simple structure of the conventional SIC receiver and the robustness and model agnosticism of deep learning techniques. The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS), and average training overhead. Also, a comparison with the SIC receiver, with perfect and imperfect CSIR, is given. Results reveal that the MBDL receiver outperforms by a significant margin the SIC receiver with imperfect CSIR, due to its ability to generate on demand non-linear symbol detection boundaries in a pure data-driven manner.
Future communication networks must address the scarce spectrum to accommodate the extensive growth of heterogeneous wireless devices. Wireless signal recognition is becoming increasingly significant for spectrum monitoring, spectrum management, secure communications, and more. Consequently, comprehensive spectrum awareness on the edge has the potential to serve as a key enabler for emerging beyond-5G networks. State-of-the-art studies in this domain have (i) only focused on a single task, modulation or signal (protocol) classification, which in many cases is insufficient information for a system to act on, (ii) considered either radar or communication waveforms (a homogeneous waveform category), and (iii) not addressed edge deployment during the neural network design phase. In this work, for the first time in the wireless communication domain, we exploit the potential of a deep neural network-based multi-task learning (MTL) framework to simultaneously learn the modulation and signal classification tasks while considering heterogeneous wireless signals, such as radar and communication waveforms, in the electromagnetic spectrum. The proposed MTL architecture benefits from the mutual relation between the two tasks to improve the classification accuracy as well as the learning efficiency with a lightweight neural network model. We additionally include experimental evaluations of the model with over-the-air collected samples, and provide first-hand insight on the impact of model compression along with the deep learning pipeline for deployment on resource-constrained edge devices. We demonstrate significant computational, memory, and accuracy improvements of the proposed model over two reference architectures. In addition to modeling a lightweight MTL model suitable for resource-constrained embedded radio platforms, we provide a comprehensive heterogeneous wireless signals dataset for public use.
The exponential growth of internet-connected systems has generated numerous challenges, such as spectrum shortage issues, which require efficient spectrum sharing (SS) solutions. Complicated and dynamic SS systems can be exposed to different potential security and privacy issues, requiring protection mechanisms that are adaptive, reliable, and scalable. Machine learning (ML)-based methods have frequently been proposed to address those issues. In this article, we provide a comprehensive survey of recent ML-based SS methods, the most critical security issues, and the corresponding defense mechanisms. In particular, we elaborate the state-of-the-art methodologies for improving the performance of SS communication systems, including ML-based database-assisted SS networks, ML-based LTE-U networks, ML-based ambient backscatter networks, and other ML-based SS solutions. We also present security issues from the physical layer and the corresponding defending strategies based on ML algorithms, including primary user emulation (PUE) attacks, spectrum sensing data falsification (SSDF) attacks, jamming attacks, eavesdropping attacks, and privacy issues. Finally, extensive discussions on the open challenges for ML-based SS are also given. This comprehensive review is intended to provide the foundation for, and facilitate, future studies exploring the potential of emerging ML for coping with increasingly complex SS systems and their security problems.
Neuromorphic computing is an emerging computing paradigm that moves away from batched processing towards the online, event-driven processing of streaming data. When neuromorphic chips are coupled with spike-based sensors, they can inherently adapt to the "semantics" of the data distribution by consuming energy only when relevant events are recorded in the timing of spikes, and by providing a low-latency response to changing conditions in the environment. This paper proposes an end-to-end design for a neuromorphic wireless Internet of Things system that integrates spike-based sensing, processing, and communication. In the proposed system, each sensing device is equipped with a neuromorphic sensor, a spiking neural network (SNN), and an impulse radio transmitter with multiple antennas. Transmission takes place over a shared fading channel to a receiver equipped with a multi-antenna impulse radio receiver and an SNN. To enable adaptation of the receiver to the fading channel conditions, we introduce a hypernetwork that uses pilots to control the weights of the decoding SNN. The pilots, the encoding SNNs, the decoding SNN, and the hypernetwork are jointly trained over multiple channel realizations. The proposed system is shown to significantly improve over conventional frame-based digital solutions, as well as over alternative non-adaptive training methods, in terms of time-to-accuracy and energy consumption metrics.
Spectrum coexistence is essential for next generation (NextG) systems to share the spectrum with incumbent (primary) users and meet the growing demand for bandwidth. One example is the 3.5 GHz Citizens Broadband Radio Service (CBRS) band, where the 5G and beyond communication systems need to sense the spectrum and then access the channel in an opportunistic manner when the incumbent user (e.g., radar) is not transmitting. To that end, a high-fidelity classifier based on a deep neural network is needed for low misdetection (to protect incumbent users) and low false alarm (to achieve high throughput for NextG). In a dynamic wireless environment, the classifier can only be used for a limited period of time, i.e., coherence time. A portion of this period is used for learning to collect sensing results and train a classifier, and the rest is used for transmissions. In spectrum sharing systems, there is a well-known tradeoff between the sensing time and the transmission time. While increasing the sensing time can increase the spectrum sensing accuracy, there is less time left for data transmissions. In this paper, we present a generative adversarial network (GAN) approach to generate synthetic sensing results to augment the training data for the deep learning classifier so that the sensing time can be reduced (and thus the transmission time can be increased) while keeping high accuracy of the classifier. We consider both additive white Gaussian noise (AWGN) and Rayleigh channels, and show that this GAN-based approach can significantly improve both the protection of the high-priority user and the throughput of the NextG user (more in Rayleigh channels than AWGN channels).
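The sensing/transmission tradeoff can be made concrete with a simple frame-budget model: shrinking the sensing period at equal classifier accuracy leaves more time for data transmission within the coherence time. The numbers and the throughput model below are hypothetical:

```python
def throughput(frame_time, sensing_time, rate, p_false_alarm, p_idle):
    """Expected NextG throughput within one coherence frame.

    Transmission happens only in the remaining time, only when the channel
    is actually idle, and only when the classifier raises no false alarm
    (a simplified model for illustrating the sensing/transmission tradeoff).
    """
    tx_time = frame_time - sensing_time
    return tx_time * rate * p_idle * (1.0 - p_false_alarm)

frame, rate, p_idle = 10.0, 1.0, 0.6

# Baseline: a long sensing period is spent collecting data to train the classifier.
base = throughput(frame, sensing_time=4.0, rate=rate, p_false_alarm=0.05, p_idle=p_idle)

# GAN-augmented: synthetic sensing results let sensing shrink at equal accuracy.
gan = throughput(frame, sensing_time=1.0, rate=rate, p_false_alarm=0.05, p_idle=p_idle)

print(base, gan)
```

The GAN's role in the abstract is precisely to keep `p_false_alarm` (and misdetection) unchanged while `sensing_time` is reduced.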
Communication systems to date have primarily aimed at reliably communicating bit sequences. Such an approach provides efficient engineering designs that are agnostic to the meaning of the messages or to the goal that the message exchange aims to achieve. Next generation systems, however, can potentially be enriched by folding message semantics and the goals of communication into their design. Further, these systems can be made cognizant of the context in which communication exchanges take place, thereby providing avenues for novel design insights. This tutorial summarizes the efforts to date, starting from its early adaptations, through semantic-aware and task-oriented communications, covering the foundations, algorithms, and potential implementations. The focus is on approaches that utilize information theory to provide the foundations, as well as on the significant role of learning in semantic and task-aware communications.
Deep learning (DL) finds rich applications in the wireless domain to improve spectrum awareness. Typically, DL models are either randomly initialized following a statistical distribution or pretrained on other data domains (such as computer vision), in the form of transfer learning, without accounting for the unique characteristics of wireless signals. Self-supervised learning can learn useful representations from radio frequency (RF) signals themselves, even when only a limited number of labeled training data samples is available. We present the first self-supervised RF signal representation learning model by specifically formulating a set of transformations to capture the wireless signal characteristics, and apply it to the automatic modulation recognition (AMR) task. We show that the sample efficiency (the number of labeled samples required to achieve a certain accuracy performance) can be significantly improved by learning signal representations with self-supervised learning, which translates to substantial time and cost savings. Furthermore, compared to state-of-the-art DL methods, self-supervised learning increases the model accuracy and maintains high accuracy even when only a small portion of the training data samples is used.
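The "set of transformations" for self-supervised pretraining can be sketched with two label-preserving RF augmentations, a constant phase rotation and a circular time shift, used to generate positive pairs for a contrastive objective. The specific transforms here are illustrative assumptions, not the paper's exact set:

```python
import cmath
import math
import random

random.seed(4)

def phase_rotate(iq, theta):
    """Rotate all I/Q samples by a constant phase (label-preserving)."""
    return [z * cmath.exp(1j * theta) for z in iq]

def time_shift(iq, k):
    """Circularly shift the sample sequence by k positions (label-preserving)."""
    return iq[-k:] + iq[:-k]

def positive_pair(iq):
    """Two random 'views' of the same signal for contrastive pretraining."""
    def view():
        rotated = phase_rotate(iq, random.uniform(0, 2 * math.pi))
        return time_shift(rotated, random.randrange(len(iq)))
    return view(), view()

iq = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(16)]
v1, v2 = positive_pair(iq)
print(len(v1), len(v2))
```

A contrastive loss then pulls the encoder's representations of `v1` and `v2` together, so the learned features become invariant to nuisances such as carrier phase and timing offsets, which is what drives the reported sample-efficiency gains.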
As data generation increasingly takes place on devices without a wired connection, machine learning (ML) related traffic will become ubiquitous in wireless networks. Many studies have shown that traditional wireless protocols are highly inefficient or unsustainable for supporting ML, which creates the need for new wireless communication methods. In this survey, we give an exhaustive review of the state-of-the-art wireless methods that are specifically designed to support ML services over distributed datasets. Currently, there are two clear themes in the literature: analog over-the-air computation, and digital radio resource management optimized for ML. This survey gives a comprehensive introduction to these methods, reviews the most important works, highlights open problems, and discusses application scenarios.