This paper highlights vulnerabilities of deep learning-driven semantic communications to backdoor (Trojan) attacks. Semantic communications aims to convey a desired meaning while transferring information from a transmitter to its receiver. An encoder-decoder pair that is represented by two deep neural networks (DNNs) as part of an autoencoder is trained to reconstruct signals such as images at the receiver by transmitting latent features of small size over a limited number of channel uses. In the meantime, another DNN of a semantic task classifier at the receiver is jointly trained with the autoencoder to check the meaning conveyed to the receiver. The complex decision space of the DNNs makes semantic communications susceptible to adversarial manipulations. In a backdoor (Trojan) attack, the adversary adds triggers to a small portion of training samples and changes the label to a target label. When the transfer of images is considered, the triggers can be added to the images or equivalently to the corresponding transmitted or received signals. At test time, the adversary activates these triggers by providing poisoned samples as input to the encoder (or decoder) of semantic communications. The backdoor attack can effectively change the semantic information transferred for the poisoned input samples to a target meaning. As the performance of semantic communications improves with the signal-to-noise ratio and the number of channel uses, the success of the backdoor attack increases as well. Also, increasing the Trojan ratio in training data makes the attack more successful. In the meantime, the effect of this attack on the unpoisoned input samples remains limited. Overall, this paper shows that the backdoor attack poses a serious threat to semantic communications and presents novel design guidelines to preserve the meaning of transferred information in the presence of backdoor attacks.
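The trigger-poisoning step described above (stamp a trigger onto a small fraction of training samples and relabel them with the attacker's target) can be sketched as follows; the patch size, trigger value, and corner placement are illustrative assumptions, not the paper's actual trigger design.

```python
import numpy as np

def poison_dataset(images, labels, target_label, trojan_ratio, rng,
                   trigger_value=1.0, patch=2):
    """Backdoor-poison a training set: stamp a small trigger patch onto a
    random subset of images and relabel them with the attacker's target.
    Patch location and value here are illustrative choices."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(trojan_ratio * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, :patch, :patch] = trigger_value  # corner trigger
    labels[idx] = target_label
    return images, labels, idx

rng = np.random.default_rng(0)
x = rng.random((100, 8, 8))
y = rng.integers(0, 10, size=100)
xp, yp, idx = poison_dataset(x, y, target_label=7, trojan_ratio=0.1, rng=rng)
print(len(idx))                                 # 10 poisoned samples
print(sorted({int(v) for v in yp[idx]}))        # [7]
```

Increasing `trojan_ratio` corresponds to the Trojan ratio studied in the abstract: more poisoned samples strengthen the backdoor at the cost of stealth.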
Semantic communications seeks to transfer information from a source while conveying a desired meaning to its destination. We model the transmitter-receiver functionalities as an autoencoder followed by a task classifier that evaluates the meaning of the information conveyed to the receiver. The autoencoder consists of an encoder at the transmitter to jointly model source coding, channel coding, and modulation, and a decoder at the receiver to jointly model demodulation, channel decoding, and source decoding. By augmenting the reconstruction loss with a semantic loss, the two deep neural networks (DNNs) of this encoder-decoder pair are interactively trained with the DNN of the semantic task classifier. This approach effectively captures the latent feature space and reliably transfers compressed feature vectors with a small number of channel uses while keeping the semantic loss low. We identify the multi-domain security vulnerabilities of using the DNNs for semantic communications. Based on adversarial machine learning, we introduce test-time (targeted and non-targeted) adversarial attacks on the DNNs by manipulating their inputs at different stages of semantic communications. As a computer vision attack, small perturbations are injected to the images at the input of the transmitter's encoder. As a wireless attack, small perturbation signals are transmitted to interfere with the input of the receiver's decoder. By launching these stealth attacks individually or more effectively in a combined form as a multi-domain attack, we show that it is possible to change the semantics of the transferred information even when the reconstruction loss remains low. These multi-domain adversarial attacks pose a serious threat to the semantics of information transfer (with larger impact than conventional jamming) and raise the need for defense methods for the safe adoption of semantic communications.
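The training objective described above (reconstruction loss augmented with a semantic loss) can be written as a single combined loss; a minimal numpy sketch is below. The weighting factor `lam` and all shapes are illustrative assumptions, not the paper's hyperparameters.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def semantic_comm_loss(x, x_hat, logits, labels, lam=0.5):
    """Joint objective sketch: reconstruction loss (MSE between input and
    decoder output) plus a semantic loss (cross-entropy of the task
    classifier on the reconstruction), weighted by lam."""
    recon = np.mean((x - x_hat) ** 2)
    p = softmax(logits)
    ce = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    return recon + lam * ce

x = np.zeros((4, 16)); x_hat = np.full((4, 16), 0.1)
logits = np.tile(np.array([3.0, 0.0, 0.0]), (4, 1))
labels = np.zeros(4, dtype=int)   # classifier is confident and correct
loss = semantic_comm_loss(x, x_hat, logits, labels)
print(round(loss, 4))             # 0.0575
```

Minimizing this combined loss trades off pixel-level reconstruction against preserving the task-relevant meaning, which is the key design idea of the abstract.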
Communications systems to date are primarily designed with the goal of reliable (error-free) transfer of digital sequences (bits). Next generation (NextG) communication systems are beginning to explore shifting this design paradigm of reliably decoding bits to reliably executing a given task. Task-oriented communications system design is likely to find impactful applications, for example, considering the relative importance of messages. In this paper, wireless signal classification is considered as the task to be performed in the NextG Radio Access Network (RAN) for signal intelligence and spectrum awareness applications such as user equipment (UE) identification and authentication, and incumbent signal detection for spectrum co-existence. For that purpose, edge devices collect wireless signals and communicate with the NextG base station (gNodeB) that needs to know the signal class. Edge devices may not have sufficient processing power and may not be trusted to perform the signal classification task, whereas the transfer of the captured signals from the edge devices to the gNodeB may not be efficient or even feasible subject to stringent delay, rate, and energy restrictions. We present a task-oriented communications approach, where the transmitter, receiver, and classifier functionalities are jointly trained as two deep neural networks (DNNs), one for the edge device and another for the gNodeB. We show that this approach achieves better accuracy with smaller DNNs compared to the baselines that treat communications and signal classification as two separate tasks. Finally, we discuss how adversarial machine learning poses a major security threat for the use of DNNs for task-oriented communications. We demonstrate the major performance loss under backdoor (Trojan) attacks and adversarial (evasion) attacks that target the training and test processes of task-oriented communications.
This paper presents channel-aware adversarial attacks against deep learning-based wireless signal classifiers. There is a transmitter that sends signals with different modulation types. Each receiver uses a deep neural network to classify its over-the-air received signals into modulation types. In the meantime, an adversary transmits an adversarial perturbation (subject to a power budget) to fool receivers into making errors in classifying signals that are received as superpositions of transmitted signals and adversarial perturbations. First, these evasion attacks are shown to fail when channels are not considered in the design of adversarial perturbations. Then, realistic attacks are presented by considering channel effects from the adversary to each receiver. After showing that a channel-aware attack is selective (i.e., it affects only the receiver whose channel is considered in the perturbation design), a broadcast adversarial attack is presented by crafting a common adversarial perturbation to fool classifiers at different receivers simultaneously. The major vulnerability of modulation classifiers to over-the-air adversarial attacks is shown by accounting for different levels of information available on the channel, the transmitter input, and the classifier model. Finally, a certified defense based on randomized smoothing, which augments training data with noise, is introduced to make the modulation classifier robust to adversarial perturbations.
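The channel-aware perturbation design can be illustrated with a linearized sketch: when a power-budgeted perturbation passes through a per-component channel gain before reaching the classifier, the loss-maximizing direction depends on the sign of the channel. Real channels are complex-valued; the real-valued gains and gradient vector here are assumptions purely for illustration.

```python
import numpy as np

def channel_aware_perturbation(grad, h, eps):
    """Channel-aware evasion sketch: the receiver observes x + h * delta
    (element-wise channel gain h on the adversary's perturbation delta).
    To increase the classifier loss under an L-infinity budget eps, each
    component moves along sign(grad_i * h_i). A channel-agnostic attack
    uses sign(grad_i) only and fails where the channel flips the sign."""
    return eps * np.sign(grad * h)

grad = np.array([1.0, -2.0, 0.5])
h = np.array([-1.0, 1.0, 2.0])       # channel flips the first component
delta = channel_aware_perturbation(grad, h, eps=0.1)
print(delta)                          # [-0.1 -0.1  0.1]
naive = 0.1 * np.sign(grad)           # channel-agnostic attack
print(grad @ (h * delta) > grad @ (h * naive))  # True: larger loss increase
```

This is the selectivity property from the abstract in miniature: the perturbation is tailored to one receiver's channel `h`, so a receiver with a different channel is largely unaffected.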
Although semantic communications have shown satisfactory performance for a large number of tasks, the impact of semantic noise and the robustness of such systems have not been well investigated. Semantic noise refers to the mismatch between the intended semantic symbols and the received ones, thus causing failures in tasks. In this paper, we first propose a framework for a robust end-to-end semantic communication system that combats semantic noise. In particular, we analyze sample-dependent and sample-independent semantic noise. To combat the semantic noise, adversarial training with weight perturbation is developed to incorporate samples with semantic noise into the training dataset. Then, we propose to mask a portion of the input, where the semantic noise appears frequently, and design a masked vector quantized-variational autoencoder (VQ-VAE) with a noise-related masking strategy. A discrete codebook shared by the transmitter and the receiver is used for the encoded feature representation. To further improve the system robustness, we develop a feature importance module (FIM) to suppress the noise-related and task-unrelated features. Thus, the transmitter only needs to transmit the indices of these important task-related features in the codebook. Simulation results show that the proposed method can be applied to many downstream tasks, significantly improves robustness against semantic noise, and remarkably reduces the transmission overhead.
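The shared-codebook index transmission at the core of the VQ-VAE design can be sketched as follows; the codebook contents and feature shapes are illustrative assumptions, not the paper's learned codebook.

```python
import numpy as np

def quantize_to_indices(features, codebook):
    """Vector-quantization sketch: map each feature vector to the index of
    its nearest codeword in a codebook shared by transmitter and receiver,
    so only integer indices need to be transmitted."""
    # pairwise squared distances, shape (n_features, n_codewords)
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def dequantize(indices, codebook):
    """Receiver side: look the received indices up in the same codebook."""
    return codebook[indices]

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
feats = np.array([[0.1, -0.1], [0.9, 1.2]])
idx = quantize_to_indices(feats, codebook)
print(idx)                        # [0 1]
print(dequantize(idx, codebook))  # nearest codewords
```

Transmitting indices instead of feature vectors is what produces the transmission-overhead reduction claimed in the abstract: each feature costs only `log2(len(codebook))` bits.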
By moving from massive antennas to antenna surfaces for software-defined wireless systems, reconfigurable intelligent surfaces (RISs) rely on arrays of unit cells to control the scattering and reflection profiles of signals, mitigating propagation loss and multipath fading, and thereby improving coverage and spectral efficiency. In this paper, covert communication is considered in the presence of an RIS. While the RIS boosts an ongoing transmission, both the intended receiver and an eavesdropper individually try to detect this transmission using their own deep neural network (DNN) classifiers. The RIS interaction vector is designed by balancing the two (potentially conflicting) objectives of focusing the transmitted signal on the receiver and keeping the transmitted signal away from the eavesdropper. To aid covert communications, adversarial perturbations are added to the transmitter's signal to fool the eavesdropper's classifier while keeping the effect on the receiver low. Results for different network topologies show that the adversarial perturbation and the RIS interaction vector can be jointly designed to effectively increase the signal detection accuracy at the receiver while reducing the detection accuracy at the eavesdropper, thereby enabling covert communications.
Backdoor attacks have emerged as one of the major security threats to deep learning models as they can easily control the model's test-time predictions by pre-injecting a backdoor trigger into the model at training time. While backdoor attacks have been extensively studied on images, few works have investigated the threat of backdoor attacks on time series data. To fill this gap, in this paper we present a novel generative approach for time series backdoor attacks against deep learning based time series classifiers. Backdoor attacks have two main goals: high stealthiness and high attack success rate. We find that, compared to images, it can be more challenging to achieve the two goals on time series. This is because time series have fewer input dimensions and lower degrees of freedom, making it hard to achieve a high attack success rate without compromising stealthiness. Our generative approach addresses this challenge by generating trigger patterns that are as realistic as real time series patterns while achieving a high attack success rate without causing a significant drop in clean accuracy. We also show that our proposed attack is resistant to potential backdoor defenses. Furthermore, we propose a novel universal generator that can poison any type of time series with a single generator that allows universal attacks without the need to fine-tune the generative model for new time series datasets.
This paper presents a novel approach for the joint design of a reconfigurable intelligent surface (RIS) and a transmitter-receiver pair that are trained together as a set of deep neural networks (DNNs) to optimize the end-to-end communication performance at the receiver. The RIS is a software-defined array of unit cells that can be controlled in terms of the scattering and reflection profiles to focus the incoming signals from the transmitter to the receiver. The benefit of the RIS is to improve the coverage and spectral efficiency of wireless communications by overcoming physical obstructions of the line-of-sight (LoS) links. The selection process of the RIS beam codeword (out of a predefined codebook) is formulated as a DNN, while the operations of the transmitter-receiver pair are modeled as two DNNs, one for the encoder (at the transmitter) and the other for the decoder (at the receiver) of an autoencoder, by accounting for the channel effects including those induced by the RIS in between. The underlying DNNs are jointly trained to minimize the symbol error rate at the receiver. Numerical results show that the proposed design achieves major gains in error performance with respect to various baseline schemes where no RIS is used or the selection of the RIS beam is separated from the design of the transmitter-receiver pair.
With impressive progress touching every aspect of our society, AI technology based on deep neural networks (DNNs) is bringing increasing security concerns. While attacks operating at test time have monopolized researchers' initial attention, the possibility of corrupting DNN models by interfering with the training process represents a further serious threat undermining the dependability of AI techniques. In backdoor attacks, the attacker corrupts the training data so as to induce an erroneous behavior at test time. Test-time errors, however, are activated only in the presence of a triggering event corresponding to a properly crafted input sample. In this way, the corrupted network continues to work as expected for regular inputs, and the malicious behavior occurs only when the attacker decides to activate the backdoor hidden within the network. In recent years, backdoor attacks have been the subject of intense research activity focusing on both the development of new classes of attacks and the proposal of possible countermeasures. The goal of this overview paper is to review the works published so far, classifying the different types of attacks and defenses proposed to date. The classification guiding the analysis is based on the amount of control that the attacker has on the training process, and the capability of the defender to verify the integrity of the data used for training and to monitor the operations of the DNN at training and test time. Hence, the proposed analysis is particularly suited to highlight the strengths and weaknesses of both attacks and defenses with reference to the application scenarios in which they operate.
Semantic communication in the 6G era is deemed a promising communication paradigm to break through the bottlenecks of traditional communications. However, its applications in multi-user scenarios, especially the broadcasting case, remain under-explored. To effectively exploit the benefits enabled by semantic communication, in this paper we propose a one-to-many semantic communication system. Specifically, we propose a deep neural network (DNN) enabled semantic communication system, called MR_DeepSC. By leveraging the semantic features of different users, a semantic recognizer based on a pre-trained model, i.e., DistilBERT, is built to distinguish different users. Furthermore, transfer learning is adopted to speed up the training of new receiver networks. Simulation results demonstrate that the proposed MR_DeepSC achieves the best performance under different channel conditions compared with other benchmarks, especially in the low signal-to-noise ratio (SNR) regime.
Machine learning (ML) has established itself as a cornerstone for various critical applications ranging from autonomous driving to authentication systems. However, with this increasing adoption of machine learning models, multiple attacks have emerged. One class of such attacks is training-time attacks, whereby an adversary executes the attack before or during the training of the machine learning model. In this work, we propose a new training-time attack against computer vision based machine learning models, namely the model hijacking attack. The adversary aims to hijack a target model to execute a task different from its original one without the model owner noticing. Model hijacking can cause accountability and security risks, since the owner of a hijacked model can be framed for having their model offer illegal or unethical services. Model hijacking attacks are launched in the same way as existing data poisoning attacks. However, one requirement of the model hijacking attack is stealthiness, i.e., the data samples used to hijack the target model should look similar to the model's original training dataset. To this end, we propose two different model hijacking attacks, namely Chameleon and Adverse Chameleon, based on a novel encoder-decoder style ML model, namely the Camouflager. Our evaluation shows that both of our model hijacking attacks achieve high attack success rates with a negligible drop in model utility.
This paper presents a game-theoretic framework to study the interactions of attack and defense for deep learning-based NextG signal classification. NextG systems such as the one envisioned for a massive number of IoT devices can employ deep neural networks (DNNs) for various tasks such as user equipment identification, physical layer authentication, and detection of incumbent users (such as in the Citizens Broadband Radio Service (CBRS) band). By training another DNN as the surrogate model, an adversary can launch an inference (exploratory) attack to learn the behavior of the victim model, predict successful operation modes (e.g., channel access), and jam them. A defense mechanism can increase the adversary's uncertainty by introducing controlled errors in the victim model's decisions (i.e., poisoning the adversary's training data). This defense is effective against an attack but reduces the performance when there is no attack. The interactions between the defender and the adversary are formulated as a non-cooperative game, where the defender selects the probability of defending or the defense level itself (i.e., the ratio of falsified decisions) and the adversary selects the probability of attacking. The defender's objective is to maximize its reward (e.g., throughput or transmission success ratio), whereas the adversary's objective is to minimize this reward and its attack cost. The Nash equilibrium strategies are determined as operation modes such that no player can unilaterally improve its utility given the other's strategy is fixed. A fictitious play is formulated for each player to play the game repeatedly in response to the empirical frequency of the opponent's actions. The performance in Nash equilibrium is compared to the fixed attack and defense cases, and the resilience of NextG signal classification against attacks is quantified.
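The fictitious-play dynamics described above can be sketched on a toy zero-sum attack/defense game, where each player best-responds to the empirical frequency of the opponent's past actions. The payoff matrices below are illustrative assumptions, not the paper's reward model.

```python
import numpy as np

def fictitious_play(payoff_def, payoff_adv, steps=5000):
    """Fictitious-play sketch for the defender/adversary game: each player
    best-responds to the empirical frequency of the opponent's actions.
    Rows index the defender's actions (defend / don't), columns the
    adversary's (attack / don't). Counts start at one (uniform prior)."""
    counts_def = np.ones(2)   # defender's observed action counts
    counts_adv = np.ones(2)   # adversary's observed action counts
    for _ in range(steps):
        freq_adv = counts_adv / counts_adv.sum()
        freq_def = counts_def / counts_def.sum()
        a_def = int(np.argmax(payoff_def @ freq_adv))  # defender best response
        a_adv = int(np.argmax(freq_def @ payoff_adv))  # adversary best response
        counts_def[a_def] += 1
        counts_adv[a_adv] += 1
    return counts_def / counts_def.sum(), counts_adv / counts_adv.sum()

# Matching-pennies-like structure: defending pays off only under attack.
payoff_def = np.array([[ 1.0, -1.0],
                       [-1.0,  1.0]])
payoff_adv = -payoff_def      # zero-sum: adversary gains what defender loses
f_def, f_adv = fictitious_play(payoff_def, payoff_adv)
print(np.round(f_def, 2), np.round(f_adv, 2))  # both close to [0.5 0.5]
```

For zero-sum games, the empirical frequencies of fictitious play converge to a Nash equilibrium, which here is the mixed strategy of defending and attacking with probability one half each.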
Existing deep learning-enabled semantic communication systems often rely on shared background knowledge between the transmitter and the receiver that includes empirical data and their associated semantic information. In practice, however, the semantic information is defined by the pragmatic task of the receiver and cannot be known by the transmitter. The actual observable data at the transmitter can also have a distribution different from that of the empirical data in the shared background knowledge base. To address these practical issues, this paper proposes a new neural network-based semantic communication system for image transmission, in which the task is unaware at the transmitter and the data environment is dynamic. The system consists of two main parts, namely the semantic coding (SC) network and the data adaptation (DA) network. The SC network learns how to extract and transmit the semantic information through a receiver-leading training process. By using domain adaptation techniques from transfer learning, the DA network learns how to convert the observed data into a form similar to the empirical data that the SC network can process without retraining. Numerical experiments show that the proposed method can adapt to observable datasets while keeping high performance in terms of both data recovery and task execution.
Communications systems to date have primarily aimed at reliably communicating bit sequences. Such an approach provides efficient engineering designs that are agnostic to the meaning of the messages or to the goal that the message exchange aims to achieve. Next generation systems, however, can potentially be enriched by folding message semantics and communication goals into their design. Furthermore, these systems can be made cognizant of the context in which communication takes place, thereby providing avenues for novel design insights. This tutorial summarizes the efforts to date, starting from the early adaptations of semantic-aware and task-oriented communications, and covers the foundations, algorithms, and potential implementations. The focus is on approaches that utilize information theory to provide the foundations, as well as on the significant role of learning in semantic and task-aware communications.
A typical deep neural network (DNN) backdoor attack is based on a trigger embedded in the input. Existing imperceptible triggers are either computationally expensive or have low attack success rates. In this paper, we propose a new backdoor trigger that is easy to generate, imperceptible, and highly effective. The new trigger is a uniformly randomly generated three-dimensional (3D) binary pattern that can be horizontally and/or vertically repeated and mirrored, and is superposed onto three-channel images to train a backdoored DNN model. Dispersed throughout an image, the new trigger produces only weak perturbations to individual pixels, but collectively holds a strong recognition pattern to train and activate the backdoor of the DNN. We also show analytically that the trigger becomes increasingly effective as the image resolution increases. Experiments are conducted using ResNet-18 and MLP models on the MNIST, CIFAR-10, and BTSR datasets. In terms of imperceptibility, the new trigger outperforms existing triggers such as BadNets, Trojaned NN, and Hidden Backdoor. The new trigger achieves an attack success rate of almost 100%, reduces the classification accuracy by less than 0.7%-2.4%, and invalidates state-of-the-art defense techniques.
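The tiled binary-pattern trigger can be sketched as follows; the cell size and perturbation amplitude are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def make_tiled_trigger(img_shape, cell=4, amplitude=4, seed=0):
    """Sketch of a uniformly random 3D binary pattern trigger: a small
    (cell x cell x 3) binary block is tiled horizontally and vertically to
    cover the image, then scaled by a weak per-pixel amplitude."""
    h, w, c = img_shape
    rng = np.random.default_rng(seed)
    block = rng.integers(0, 2, size=(cell, cell, c))   # 3D binary pattern
    reps = (-(-h // cell), -(-w // cell), 1)           # ceiling division
    pattern = np.tile(block, reps)[:h, :w, :]
    return amplitude * pattern

def stamp(image_u8, trigger):
    """Superpose the trigger onto a uint8 image, clipping to [0, 255]."""
    return np.clip(image_u8.astype(int) + trigger, 0, 255).astype(np.uint8)

img = np.full((32, 32, 3), 128, dtype=np.uint8)
trig = make_tiled_trigger(img.shape)
poisoned = stamp(img, trig)
print(int(np.abs(poisoned.astype(int) - img.astype(int)).max()))  # 4
```

Each pixel moves by at most the small `amplitude`, yet the repeated block forms a strong global pattern, which is the abstract's weak-per-pixel, strong-collectively idea.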
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
translated by 谷歌翻译
In recent years, many backdoor attacks based on training data poisoning have been proposed. In practice, however, these backdoor attacks are vulnerable to image compression. When backdoor instances are compressed, the features of the specific backdoor trigger are destroyed, which can cause the backdoor attack performance to deteriorate. In this paper, we propose a compression-resistant backdoor attack based on feature consistency training. To the best of our knowledge, this is the first backdoor attack that is robust to image compression. First, both the backdoor images and their compressed versions are input into the deep neural network (DNN) for training. Then, the features of each image are extracted by the internal layers of the DNN. Next, the feature difference between a backdoor image and its compressed version is minimized. As a result, the DNN treats the features of a compressed image as the features of the backdoor image in feature space. After training, the backdoor attack against the DNN is robust to image compression. Furthermore, we consider three different image compression methods (i.e., JPEG, JPEG2000, WebP) so that the backdoor attack is robust to multiple image compression algorithms. Experimental results demonstrate the effectiveness and robustness of the proposed backdoor attack. When the backdoor instances are compressed, the attack success rate of common backdoor attacks drops below 10%, while the attack success rate of our compression-resistant backdoor attack remains greater than 97%. The compression-resistant attack stays robust even after compression at low compression quality. In addition, extensive experiments demonstrate that our compression-resistant backdoor attack generalizes to resist image compression methods that were not used in the training process.
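The feature-consistency objective reduces to a distance between internal-layer features of a backdoor image and its compressed version; a minimal sketch with stand-in feature vectors (no real DNN or codec) is below.

```python
import numpy as np

def feature_consistency_loss(feat_backdoor, feat_compressed):
    """Feature-consistency term sketch: mean squared distance between the
    internal-layer features of a backdoor image and of its compressed
    version; training minimizes this so the DNN maps both images to the
    same region of feature space."""
    return float(np.mean((feat_backdoor - feat_compressed) ** 2))

# Stand-in features from a hypothetical internal layer (not a real DNN).
f_bd = np.array([1.0, 2.0, 3.0])
f_jpeg = np.array([1.1, 1.9, 3.2])
loss = feature_consistency_loss(f_bd, f_jpeg)
print(round(loss, 4))  # 0.02
```

In the full method this term would be added to the usual classification loss during training, and computed for each of the compression codecs (JPEG, JPEG2000, WebP) considered in the paper.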
A recent trojan attack on deep neural network (DNN) models is one insidious variant of data poisoning attacks. Trojan attacks exploit an effective backdoor created in a DNN model by leveraging the difficulty in interpretability of the learned model to misclassify any inputs signed with the attacker's chosen trojan trigger. Since the trojan trigger is a secret guarded and exploited by the attacker, detecting such trojan inputs is a challenge, especially at run-time when models are in active operation. This work builds a STRong Intentional Perturbation (STRIP) based run-time trojan attack detection system, focusing on vision systems. We intentionally perturb the incoming input, for instance by superimposing various image patterns, and observe the randomness of predicted classes for perturbed inputs from a given deployed model, whether malicious or benign. A low entropy in predicted classes violates the input-dependence property of a benign model and implies the presence of a malicious input, a characteristic of a trojaned input. The high efficacy of our method is validated through case studies on three popular and contrasting datasets: MNIST, CIFAR10 and GTSRB. We achieve an overall false acceptance rate (FAR) of less than 1%, given a preset false rejection rate (FRR) of 1%, for different types of triggers. Using CIFAR10 and GTSRB, we have empirically achieved 0% for both FRR and FAR. We have also evaluated STRIP robustness against a number of trojan attack variants and adaptive attacks.
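The STRIP procedure can be sketched as follows: superimpose held-out images onto the input and average the entropy of the model's predictions. The toy predictors and the 50/50 superposition weight are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def strip_entropy(predict, x, overlay_set, alpha=0.5):
    """STRIP-style check (sketch): superimpose a batch of held-out images
    onto the input and average the Shannon entropy of the predicted class
    distributions. A trojaned input keeps hitting the target class
    regardless of the overlay, so its average entropy is abnormally low."""
    entropies = []
    for overlay in overlay_set:
        p = predict(alpha * x + (1 - alpha) * overlay)
        entropies.append(-np.sum(p * np.log2(p + 1e-12)))
    return float(np.mean(entropies))

# Toy stand-in models (not real DNNs): a benign model whose output varies
# with the perturbed input, and a trojaned one locked onto its target class.
def benign_predict(z):
    e = np.exp(z[:3] - z[:3].max())
    return e / e.sum()

def trojan_predict(z):
    return np.array([0.98, 0.01, 0.01])   # always the target class

rng = np.random.default_rng(1)
x = rng.random(3)
overlays = rng.random((20, 3))
h_benign = strip_entropy(benign_predict, x, overlays)
h_trojan = strip_entropy(trojan_predict, x, overlays)
print(h_benign > h_trojan)  # True: low entropy flags the trojaned input
```

In practice the detector thresholds this entropy to trade off the false rejection rate against the false acceptance rate reported in the abstract.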
Machine learning models in the wild have been shown to be vulnerable to Trojan attacks during training. Although many detection mechanisms have been proposed, strong adaptive attackers have been shown to be effective against them. In this paper, we aim to answer the following questions considering an intelligent and adaptive adversary: (i) What is the minimal number of instances required by a strong attacker to Trojan a model? (ii) Is it possible for such an attacker to bypass strong detection mechanisms? We provide an analytical characterization of the adversarial capability and the strategic interactions between the adversary and the detection mechanism that take place in such models. We characterize the adversary capability in terms of the fraction of the input dataset that can be embedded with a Trojan trigger. We show that the loss function has a submodular structure, which leads to the design of computationally efficient algorithms to determine this fraction with provable bounds on optimality. We propose a Submodular Trojan algorithm to determine the minimal fraction of samples to inject a Trojan trigger. To evade detection of the Trojaned model, we model the strategic interactions between the adversary and the Trojan detection mechanism as a two-player game. We show that the adversary wins the game with probability one, thus bypassing detection. We establish this by proving that the output probability distributions of a Trojan model and a clean model are identical when following the Min-Max (MM) Trojan algorithm. We perform extensive evaluations on the MNIST, CIFAR-10, and EuroSAT datasets. The results show that (i) with the Submodular Trojan algorithm, the adversary needs to embed a Trojan trigger into a very small fraction of samples to achieve high accuracy on both Trojan and clean samples, and (ii) the MM Trojan algorithm yields a trained Trojan model that evades detection with probability 1.
Due to its powerful feature learning capability and high efficiency, deep hashing has achieved great success in large-scale image retrieval. Meanwhile, extensive works have demonstrated that deep neural networks (DNNs) are susceptible to adversarial examples, and exploring adversarial attacks against deep hashing has attracted many research efforts. Backdoor attacks, however, another well-known threat to DNNs, have not yet been studied for deep hashing. Although various backdoor attacks have been proposed in the field of image classification, existing approaches fail to realize a truly imperceptible backdoor attack that enjoys invisible triggers and a clean-label setting simultaneously, and they also cannot meet the intrinsic demands of image retrieval backdoors. In this paper, we propose BadHash, the first generative-based imperceptible backdoor attack against deep hashing, which can effectively generate invisible and input-specific poisoned images with clean labels. Specifically, we first propose a new conditional generative adversarial network (cGAN) pipeline to effectively generate poisoned samples. For any given benign image, it seeks to generate a natural-looking poisoned counterpart with a unique invisible trigger. To improve the attack effectiveness, we introduce a label-based contrastive learning network, LabCLN, to exploit the semantic characteristics of different labels, which are subsequently used to confuse and mislead the target model into learning the embedded trigger. We finally explore the mechanism of backdoor attacks on image retrieval in the hash space. Extensive experiments on multiple benchmark datasets verify that BadHash can generate imperceptible poisoned samples with strong attack ability and transferability over state-of-the-art deep hashing schemes. Primary Subject Area: [Engagement] Multimedia Search and Recommendation.