This paper presents a game-theoretic framework to study the interactions of attack and defense for deep learning-based NextG signal classification. NextG systems such as the one envisioned for a massive number of IoT devices can employ deep neural networks (DNNs) for various tasks such as user equipment identification, physical layer authentication, and detection of incumbent users (such as in the Citizens Broadband Radio Service (CBRS) band). By training another DNN as the surrogate model, an adversary can launch an inference (exploratory) attack to learn the behavior of the victim model, predict successful operation modes (e.g., channel access), and jam them. A defense mechanism can increase the adversary's uncertainty by introducing controlled errors in the victim model's decisions (i.e., poisoning the adversary's training data). This defense is effective against an attack but reduces the performance when there is no attack. The interactions between the defender and the adversary are formulated as a non-cooperative game, where the defender selects the probability of defending or the defense level itself (i.e., the ratio of falsified decisions) and the adversary selects the probability of attacking. The defender's objective is to maximize its reward (e.g., throughput or transmission success ratio), whereas the adversary's objective is to minimize this reward and its attack cost. The Nash equilibrium strategies are determined as operation modes such that no player can unilaterally improve its utility when the other player's strategy is fixed. A fictitious play process is formulated in which each player plays the game repeatedly in response to the empirical frequency of the opponent's actions. The performance in Nash equilibrium is compared to the fixed attack and defense cases, and the resilience of NextG signal classification against attacks is quantified.
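The repeated best-response dynamics described above can be illustrated with a small fictitious-play loop. This is only a sketch: the 2x2 payoff matrices below are made-up placeholders rather than the paper's throughput- and cost-based utilities.

```python
# Illustrative fictitious play for a two-player attack-defense game.
# Payoff entries are hypothetical placeholders, not values from the paper.
import numpy as np

# Defender actions: no defense / defend; Adversary actions: no attack / attack.
defender_payoff = np.array([[1.0, 0.2],    # no defense: full reward, but an attack hurts
                            [0.7, 0.6]])   # defend: controlled errors cost reward, attack blunted
adversary_payoff = np.array([[0.0, 0.6],   # attacking an undefended classifier pays off
                             [0.0, 0.1]])  # attacking a defended classifier barely covers its cost

counts_d = np.ones(2)  # empirical counts of defender actions
counts_a = np.ones(2)  # empirical counts of adversary actions

for _ in range(10_000):
    freq_d = counts_d / counts_d.sum()            # empirical mixed strategy of the defender
    freq_a = counts_a / counts_a.sum()            # empirical mixed strategy of the adversary
    best_d = np.argmax(defender_payoff @ freq_a)  # defender best-responds to adversary frequency
    best_a = np.argmax(freq_d @ adversary_payoff) # adversary best-responds to defender frequency
    counts_d[best_d] += 1
    counts_a[best_a] += 1

print("empirical defense probability:", counts_d[1] / counts_d.sum())
print("empirical attack probability:", counts_a[1] / counts_a.sum())
```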
This paper presents a game theoretic framework for participation and free-riding in federated learning (FL), and determines the Nash equilibrium strategies when FL is executed over wireless links. To support spectrum sensing for NextG communications, FL is used by clients, namely spectrum sensors with limited training datasets and computation resources, to train a wireless signal classifier while preserving privacy. In FL, a client may free-ride, i.e., not participate in FL model updates when the computation and transmission cost of FL participation is high, while still receiving the global model (learned by the other clients) without incurring a cost. However, the free-riding behavior may decrease the global accuracy due to the lack of contributions to global model learning. This tradeoff leads to a non-cooperative game where each client aims to individually maximize its utility as the difference between the global model accuracy and the cost of FL participation. The Nash equilibrium strategies are derived for free-riding probabilities such that no client can unilaterally increase its utility given that the strategies of its opponents remain the same. The free-riding probability increases with the FL participation cost and the number of clients, and a significant optimality gap exists in Nash equilibrium with respect to the joint optimization for all clients. The optimality gap increases with the number of clients, and the maximum gap is evaluated as a function of the cost. These results quantify the impact of free-riding on the resilience of FL in NextG networks and indicate operational modes for FL participation.
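As a rough illustration of how an equilibrium free-riding probability can be computed, the sketch below assumes a symmetric game with a hypothetical diminishing-returns accuracy curve and a fixed participation cost; it is not the utility model derived in the paper.

```python
# Minimal sketch of a symmetric free-riding game among N federated learning clients.
# The accuracy curve and cost are assumptions; the paper derives its own utility model.
import numpy as np
from scipy.stats import binom
from scipy.optimize import brentq

N = 10          # number of clients (assumed)
cost = 0.05     # FL participation cost (assumed)

def accuracy(k):
    # Assumed diminishing-returns accuracy as a function of k participating clients.
    return 1.0 - 0.5 * np.exp(-0.4 * k)

def utility_gap(p):
    # Expected utility of participating minus free-riding for one client,
    # when every other client free-rides independently with probability p.
    ks = np.arange(N)                           # number of participating opponents
    probs = binom.pmf(ks, N - 1, 1 - p)         # 1 - p is the participation probability
    u_participate = np.sum(probs * accuracy(ks + 1)) - cost
    u_free_ride = np.sum(probs * accuracy(ks))
    return u_participate - u_free_ride

# A symmetric mixed Nash equilibrium makes each client indifferent (gap = 0).
if utility_gap(0.0) >= 0:
    p_star = 0.0            # full participation is an equilibrium
elif utility_gap(1.0) <= 0:
    p_star = 1.0            # everyone free-rides
else:
    p_star = brentq(utility_gap, 0.0, 1.0)
print("equilibrium free-riding probability:", round(p_star, 3))
```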
Communications systems to date are primarily designed with the goal of reliable (error-free) transfer of digital sequences (bits). Next generation (NextG) communication systems are beginning to explore shifting this design paradigm from reliably decoding bits to reliably executing a given task. Task-oriented communications system design is likely to find impactful applications, for example, when considering the relative importance of messages. In this paper, wireless signal classification is considered as the task to be performed in the NextG Radio Access Network (RAN) for signal intelligence and spectrum awareness applications such as user equipment (UE) identification and authentication, and incumbent signal detection for spectrum co-existence. For that purpose, edge devices collect wireless signals and communicate with the NextG base station (gNodeB) that needs to know the signal class. Edge devices may not have sufficient processing power and may not be trusted to perform the signal classification task, whereas the transfer of the captured signals from the edge devices to the gNodeB may not be efficient or even feasible subject to stringent delay, rate, and energy restrictions. We present a task-oriented communications approach, where the transmitter, receiver, and classifier functionalities are jointly trained as two deep neural networks (DNNs), one for the edge device and another for the gNodeB. We show that this approach achieves better accuracy with smaller DNNs compared to the baselines that treat communications and signal classification as two separate tasks. Finally, we discuss how adversarial machine learning poses a major security threat to the use of DNNs for task-oriented communications. We demonstrate the major performance loss under backdoor (Trojan) attacks and adversarial (evasion) attacks that target the training and test processes of task-oriented communications.
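A minimal sketch of the end-to-end idea, assuming toy dimensions, architectures, and synthetic data: an edge-device encoder DNN and a gNodeB classifier DNN trained jointly through a simulated AWGN channel so that the task loss, not bit recovery, drives both networks.

```python
# Joint training of an edge-device encoder and a gNodeB classifier through an AWGN channel.
# Dimensions, architectures, and the synthetic data are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_iq = 64        # assumed number of I/Q samples per captured signal
num_uses = 8       # assumed number of channel uses (latent symbols) per signal
num_classes = 5    # assumed number of signal classes
snr_db = 10.0

encoder = nn.Sequential(nn.Linear(2 * num_iq, 128), nn.ReLU(), nn.Linear(128, num_uses))
classifier = nn.Sequential(nn.Linear(num_uses, 64), nn.ReLU(), nn.Linear(64, num_classes))
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)

def awgn(z, snr_db):
    # Normalize transmit power and add Gaussian noise at the given SNR.
    z = z / z.norm(dim=1, keepdim=True).clamp_min(1e-9) * (z.shape[1] ** 0.5)
    noise_std = 10 ** (-snr_db / 20)
    return z + noise_std * torch.randn_like(z)

for step in range(1000):
    x = torch.randn(32, 2 * num_iq)                 # placeholder for captured I/Q signals
    y = torch.randint(0, num_classes, (32,))        # placeholder signal-class labels
    logits = classifier(awgn(encoder(x), snr_db))   # transmit latent features over the channel
    loss = F.cross_entropy(logits, y)               # task loss trains both DNNs jointly
    opt.zero_grad(); loss.backward(); opt.step()
```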
The exponential growth of Internet-connected systems has generated numerous challenges, such as spectrum shortage, which calls for efficient spectrum sharing (SS) solutions. Complicated and dynamic SS systems can be exposed to different potential security and privacy issues, requiring protection mechanisms that are adaptive, reliable, and scalable. Machine learning (ML) based methods have frequently been proposed to address these issues. In this paper, we provide a comprehensive survey of recent ML-based SS methods, the most critical security issues, and the corresponding defense mechanisms. In particular, we elaborate on the state-of-the-art methodologies for improving the performance of SS communication systems, including ML-based database-assisted SS networks, ML-based LTE-U networks, ML-based ambient backscatter networks, and other ML-based SS solutions. We also present the security issues at the physical layer and the corresponding ML-based defense strategies, including primary user emulation (PUE) attacks, spectrum sensing data falsification (SSDF) attacks, jamming attacks, eavesdropping attacks, and privacy issues. Finally, an extensive discussion of open challenges for ML-based SS is given. This comprehensive review aims to provide a foundation for, and facilitate, future research exploring the potential of emerging ML methods for increasingly complicated SS systems and their security problems.
This paper presents channel-aware adversarial attacks against deep learning-based wireless signal classifiers. There is a transmitter that sends signals with different modulation types. Each receiver uses a deep neural network to classify its over-the-air received signals into modulation types. In the meantime, an adversary transmits an adversarial perturbation (subject to a power budget) to fool the receivers into making errors in classifying signals that are received as superpositions of the transmitted signals and the adversarial perturbations. First, these evasion attacks are shown to fail when the channel is not considered in designing the adversarial perturbation. Then, realistic attacks are presented by considering the channel effects from the adversary to each receiver. After showing that a channel-aware attack is selective (i.e., it only affects the receiver whose channel is considered in the perturbation design), a broadcast adversarial attack is presented by crafting a common adversarial perturbation to fool classifiers at different receivers simultaneously. The major vulnerability of modulation classifiers to over-the-air adversarial attacks is demonstrated by accounting for the different levels of information available about the channel, the transmitter input, and the classifier model. Finally, a certified defense based on randomized smoothing, i.e., augmenting the training data with noise, is introduced to make the modulation classifier robust to adversarial perturbations.
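A hedged sketch of how a channel effect can be folded into an FGSM-style perturbation design; the classifier, channel gain, and power budget are stand-ins rather than the paper's setup.

```python
# Channel-aware evasion attack sketch: the perturbation is optimized through the
# adversary-to-receiver channel gain and then scaled to a transmit power budget.
import torch
import torch.nn.functional as F

def channel_aware_fgsm(classifier, x, y, h, power_budget):
    """x: received signal batch, y: true labels, h: adversary-to-receiver channel gain."""
    delta = torch.zeros_like(x, requires_grad=True)
    received = x + h * delta                  # the perturbation experiences the channel
    loss = F.cross_entropy(classifier(received), y)
    loss.backward()
    direction = delta.grad.sign()             # FGSM-style direction maximizing the loss
    # Scale the transmitted perturbation so its power meets the adversary's budget.
    scale = power_budget ** 0.5 / direction.norm(dim=-1, keepdim=True).clamp_min(1e-9)
    return scale * direction
```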
The scale of Internet-connected systems has increased considerably, and these systems are exposed to cyber attacks more than ever. The complexity and dynamics of cyber attacks require protection mechanisms that are responsive, adaptive, and scalable. Machine learning, or more specifically deep reinforcement learning (DRL), methods have been widely proposed to address these issues. By incorporating deep learning into traditional RL, DRL is able to solve complex, dynamic, and especially high-dimensional cyber defense problems. This paper presents a survey of DRL approaches developed for cyber security. We touch on different important aspects, including DRL-based security methods for cyber-physical systems, autonomous intrusion detection techniques, and multi-agent DRL-based game-theoretic simulations for defense strategies against cyber attacks. Extensive discussions and future research directions on DRL-based cyber security are also given. We expect this comprehensive review to provide a foundation for and facilitate future studies exploring the potential of emerging DRL for increasingly complex cyber security problems.
This paper highlights vulnerabilities of deep learning-driven semantic communications to backdoor (Trojan) attacks. Semantic communications aims to convey a desired meaning while transferring information from a transmitter to its receiver. An encoder-decoder pair that is represented by two deep neural networks (DNNs) as part of an autoencoder is trained to reconstruct signals such as images at the receiver by transmitting latent features of small size over a limited number of channel uses. In the meantime, another DNN, the semantic task classifier at the receiver, is jointly trained with the autoencoder to check the meaning conveyed to the receiver. The complex decision space of the DNNs makes semantic communications susceptible to adversarial manipulations. In a backdoor (Trojan) attack, the adversary adds triggers to a small portion of the training samples and changes their labels to a target label. When the transfer of images is considered, the triggers can be added to the images or, equivalently, to the corresponding transmitted or received signals. At test time, the adversary activates these triggers by providing poisoned samples as input to the encoder (or decoder) of semantic communications. The backdoor attack can effectively change the semantic information transferred for the poisoned input samples to a target meaning. As the performance of semantic communications improves with the signal-to-noise ratio and the number of channel uses, the success of the backdoor attack increases as well. Also, increasing the Trojan ratio in the training data makes the attack more successful. In the meantime, the effect of this attack on the unpoisoned input samples remains limited. Overall, this paper shows that the backdoor attack poses a serious threat to semantic communications and presents novel design guidelines to preserve the meaning of transferred information in the presence of backdoor attacks.
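The poisoning step can be sketched as follows for image inputs; the trigger pattern, its location, the Trojan ratio, and the target label are assumptions for illustration only.

```python
# Toy sketch of backdoor (Trojan) data poisoning for image inputs to the semantic encoder.
import numpy as np

def poison_dataset(images, labels, target_label, trojan_ratio=0.1, trigger_value=1.0):
    """images: (N, H, W) array in [0, 1]; labels: (N,) integer array."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(trojan_ratio * len(images))
    idx = np.random.choice(len(images), n_poison, replace=False)
    images[idx, -3:, -3:] = trigger_value      # stamp a small trigger patch in the corner
    labels[idx] = target_label                 # relabel poisoned samples to the target meaning
    return images, labels, idx
```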
Semantic communications seeks to transfer information from a source while conveying a desired meaning to its destination. We model the transmitter-receiver functionalities as an autoencoder followed by a task classifier that evaluates the meaning of the information conveyed to the receiver. The autoencoder consists of an encoder at the transmitter to jointly model source coding, channel coding, and modulation, and a decoder at the receiver to jointly model demodulation, channel decoding, and source decoding. By augmenting the reconstruction loss with a semantic loss, the two deep neural networks (DNNs) of this encoder-decoder pair are interactively trained with the DNN of the semantic task classifier. This approach effectively captures the latent feature space and reliably transfers compressed feature vectors with a small number of channel uses while keeping the semantic loss low. We identify the multi-domain security vulnerabilities of using DNNs for semantic communications. Based on adversarial machine learning, we introduce test-time (targeted and non-targeted) adversarial attacks on the DNNs by manipulating their inputs at different stages of semantic communications. As a computer vision attack, small perturbations are injected into the images at the input of the transmitter's encoder. As a wireless attack, small perturbation signals are transmitted to interfere with the input of the receiver's decoder. By launching these stealth attacks individually or, more effectively, in a combined form as a multi-domain attack, we show that it is possible to change the semantics of the transferred information even when the reconstruction loss remains low. These multi-domain adversarial attacks pose a serious threat to the semantics of information transfer (with a larger impact than conventional jamming) and raise the need for defense methods for the safe adoption of semantic communications.
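A rough sketch of a combined, targeted multi-domain attack under assumed models and budgets: a small image-domain perturbation at the encoder input plus a small over-the-air perturbation at the decoder input, both crafted with one FGSM-style step.

```python
# Combined multi-domain, targeted test-time attack on an autoencoder-based semantic link.
# The encoder/decoder/classifier models, the target class, and eps_img / eps_rf are placeholders.
import torch
import torch.nn.functional as F

def multi_domain_attack(encoder, decoder, task_classifier, image, target, eps_img, eps_rf):
    with torch.no_grad():
        latent_shape = encoder(image).shape        # shape of the transmitted feature vector
    d_img = torch.zeros_like(image, requires_grad=True)   # computer-vision-domain perturbation
    d_rf = torch.zeros(latent_shape, requires_grad=True)  # wireless-domain perturbation
    received = encoder(image + d_img) + d_rf       # perturbation signal superimposed over the air
    loss = F.cross_entropy(task_classifier(decoder(received)), target)
    loss.backward()
    # Targeted attack: step against the gradient to pull the semantics toward `target`.
    return image - eps_img * d_img.grad.sign(), -eps_rf * d_rf.grad.sign()
```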
A new generation of botnets leverages artificial intelligence (AI) techniques to conceal the identity of botmasters and the attack intention in order to avoid detection. Unfortunately, there is not yet an assessment tool capable of evaluating the effectiveness of existing defense strategies against such AI-based botnet attacks. In this paper, we propose a sequential game-theoretic model that can analyze in detail the potential strategies that botnet attackers and defenders may use to reach a Nash equilibrium (NE). The utility function is computed under the assumption that the attacker launches the maximum DDoS attacks at the minimum attack cost, while the defender utilizes the maximum number of defense strategies. We conduct a numerical analysis based on various numbers of defense strategies over different (simulated) cloud-band sizes in relation to different attack success rate values. Our experimental results confirm that the success of the defense highly depends on the number of defense strategies used according to a careful evaluation of the attack rates.
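A toy backward-induction sketch of a two-stage attacker-defender game; the strategies and payoff numbers are invented for illustration and do not come from the paper.

```python
# Tiny backward-induction sketch for a sequential attacker-defender game.
# Each leaf holds (attacker_utility, defender_utility); all values are illustrative.
game_tree = {
    "low_rate_ddos": {"light_defense": (2, -1), "heavy_defense": (0, -2)},
    "max_rate_ddos": {"light_defense": (5, -6), "heavy_defense": (3, -4)},
}

def solve_by_backward_induction(tree):
    outcome = None
    for attack, responses in tree.items():
        # The defender moves second and best-responds within this subgame.
        defense = max(responses, key=lambda d: responses[d][1])
        attacker_value = responses[defense][0]
        if outcome is None or attacker_value > outcome[2]:
            outcome = (attack, defense, attacker_value)
    return outcome

print(solve_by_backward_induction(game_tree))
# -> ('max_rate_ddos', 'heavy_defense', 3) under these illustrative payoffs
```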
The Internet of Vehicles (IoV), in which interconnected vehicles communicate with each other and with road infrastructure over a common network, has promising socio-economic benefits but also poses new cyber-physical threats. Data on vehicular attackers can be realistically gathered through cyber threat intelligence using systems such as honeypots. Admittedly, configuring honeypots introduces a trade-off between the level of honeypot-attacker interactions and any incurred overheads and costs for implementing and monitoring these honeypots. We argue that effective deception can be achieved by strategically configuring the honeypots to represent components of the IoV and engaging attackers to collect cyber threat intelligence. In this paper, we present HoneyCar, a novel decision-support framework for honeypot deception in the IoV. HoneyCar builds on a repository of known vulnerabilities of autonomous and connected vehicles found in the Common Vulnerabilities and Exposures (CVE) data of the National Vulnerability Database (NVD) to compute optimal honeypot configuration strategies. Taking a game-theoretic approach, we model the adversarial interaction as a repeated imperfect-information zero-sum game in which the IoV network administrator chooses a set of vulnerabilities to offer in a honeypot and a strategic attacker chooses a vulnerability of the IoV to exploit under uncertainty. Our investigation is substantiated by examining two different versions of the game, with and without a reconfiguration cost, to empower the network administrator to determine optimal honeypot configurations. We evaluate HoneyCar in a realistic use case to support decision makers in determining the optimal honeypot configuration strategies for strategic deployment in the IoV.
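The zero-sum formulation can be illustrated by solving a small matrix game with linear programming; the configurations, vulnerabilities, and payoff entries below are made up.

```python
# Solving a toy zero-sum honeypot-configuration game by linear programming.
import numpy as np
from scipy.optimize import linprog

# payoff[i, j]: value to the administrator when honeypot configuration i is deployed
# and the attacker targets vulnerability j (e.g., intelligence gained minus cost).
payoff = np.array([[ 1.0, -0.5,  0.2],
                   [-0.3,  0.8, -0.1],
                   [ 0.4,  0.1,  0.6]])

def solve_zero_sum(A):
    """Maximin mixed strategy of the row player and the value of the zero-sum game."""
    shift = 1.0 - A.min()
    A_pos = A + shift                         # make all entries positive (standard trick)
    m, n = A_pos.shape
    # minimize sum(y) s.t. A_pos^T y >= 1, y >= 0; then value = 1/sum(y), strategy = y*value.
    res = linprog(c=np.ones(m), A_ub=-A_pos.T, b_ub=-np.ones(n), bounds=[(0, None)] * m)
    value_pos = 1.0 / res.x.sum()
    return res.x * value_pos, value_pos - shift

strategy, value = solve_zero_sum(payoff)
print("honeypot configuration mix:", np.round(strategy, 3), "| game value:", round(value, 3))
```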
Adversarial deep learning trains robust DNNs against adversarial attacks and is one of the major research directions in deep learning. Game theory has been used to answer some fundamental questions about adversarial deep learning, such as the existence of a classifier with optimal robustness and the existence of optimal adversarial samples for a given class of classifiers. In most previous work, adversarial deep learning was formulated as a simultaneous game, and the strategy spaces were assumed to be certain probability distributions so that a Nash equilibrium exists. However, this assumption does not hold in practical situations. In this paper, we answer these fundamental questions for the practical case where the classifiers are DNNs with a given structure, by formulating adversarial deep learning as sequential games. The existence of Stackelberg equilibria for these games is proved. Furthermore, when the Carlini-Wagner margin loss is used, the equilibrium DNN has the largest adversarial accuracy among all DNNs with the same structure. The trade-off between robustness and accuracy in adversarial deep learning is also studied from the game-theoretic perspective.
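For reference, a Carlini-Wagner style margin loss can be written as a small helper; the confidence margin kappa and the batching conventions are assumptions.

```python
import torch

def cw_margin_loss(logits, labels, kappa=0.0):
    """Batch mean of max(max_{j != y} z_j - z_y, -kappa), a Carlini-Wagner style margin."""
    true_logit = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    others = logits.clone()
    others.scatter_(1, labels.unsqueeze(1), float("-inf"))   # mask out the true class
    best_other = others.max(dim=1).values
    return torch.clamp(best_other - true_logit, min=-kappa).mean()

# Example with hypothetical logits for two samples over three classes.
logits = torch.tensor([[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]])
print(cw_margin_loss(logits, torch.tensor([0, 2]), kappa=0.5))
```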
Digitization and remote connectivity have enlarged the attack surface and made cyber systems more vulnerable. As attackers become increasingly sophisticated and resourceful, mere reliance on traditional cyber protection, such as intrusion detection, firewalls, and encryption, is insufficient to secure cyber systems. Cyber resilience provides a new security paradigm that complements inadequate protection with resilience mechanisms. A cyber-resilient mechanism (CRM) adapts to known or zero-day threats and uncertainties in real time and strategically responds to them to maintain the critical functions of a cyber system in the event of successful attacks. Feedback architectures play a pivotal role in enabling the online sensing, reasoning, and actuation processes of a CRM. Reinforcement learning (RL) is an important tool that epitomizes feedback architectures for cyber resilience. It allows a CRM to provide sequential responses to attacks with limited or no prior knowledge of the environment and the attacker. In this work, we review the literature on RL for cyber resilience and discuss cyber resilience against three major types of vulnerabilities, namely posture-related, information-related, and human-related vulnerabilities. We introduce three application domains of CRMs: moving target defense, defensive cyber deception, and assistive human security technologies. The RL algorithms themselves also have vulnerabilities. We explain three vulnerabilities of RL and present attack models in which the attacker targets the information exchanged between the environment and the agent: the rewards, the state observations, and the action commands. We show that the attacker can mislead the RL agent into learning a nefarious policy with minimal attack effort. Finally, we discuss the future challenges of RL for cyber security and resilience and emerging applications of RL-based CRMs.
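A toy illustration of one of the attack models mentioned above, reward poisoning against tabular Q-learning, under an invented environment and poisoning rule.

```python
# Reward poisoning sketch: the attacker falsifies rewards to invert the agent's incentives.
# Environment, poisoning rule, and hyperparameters are all illustrative assumptions.
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def env_step(s, a):
    r = 1.0 if a == 0 else 0.2            # action 0 is genuinely better for the defender
    return r, rng.integers(n_states)      # random next state in this toy environment

def poison(r, a):
    return -r if a == 0 else r + 1.0      # attacker pushes the agent toward action 1

s = 0
for t in range(20000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    r, s_next = env_step(s, a)
    r = poison(r, a)                      # the agent only ever sees the falsified reward
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print("learned greedy actions per state:", Q.argmax(axis=1))   # expected to prefer action 1
```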
The anti-jamming game between users and a jammer is studied for a multi-band wireless ad hoc network with multiple users. In this game, the users (respectively, the jammer) wish to maximize (respectively, minimize) the users' expected reward, which accounts for various factors such as the communication rate, the hopping cost, and the jamming loss. We analyze the arms race of the game based on a Markov decision process (MDP) and derive the optimal frequency hopping policy at each stage of the arms race. It is shown analytically that the arms race reaches an equilibrium after a few rounds, and the frequency hopping policy and the jamming strategy at the equilibrium are characterized. We propose two collision avoidance protocols to ensure that at most one user communicates in each frequency band, and provide various numerical results to show the effects of the reward parameters and the collision avoidance protocols on the optimal frequency hopping policy and on the expected reward at the equilibrium. Furthermore, we discuss the case where the jammer adopts some unpredictable strategies.
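A coarse value-iteration sketch of a frequency-hopping MDP with assumed rates, costs, and jammer-tracking probabilities; the paper's arms-race analysis is richer than this two-state toy.

```python
import numpy as np

# States: 0 = jammer is NOT on the user's band, 1 = jammer IS on the user's band.
# Actions: 0 = stay on the current band, 1 = hop to a uniformly chosen other band.
num_bands, rate, hop_cost, jam_loss, gamma = 4, 1.0, 0.2, 1.5, 0.9
p_track = 0.7          # assumed probability the jammer re-acquires the user's band next slot

R = np.array([[rate,      rate - hop_cost],     # not jammed: stay vs hop
              [-jam_loss, rate - hop_cost]])    # jammed: staying loses, hopping escapes this slot
# P[s, a, s']: next-state distribution under these assumptions.
P = np.array([[[1 - p_track, p_track],  [1 - 1 / num_bands, 1 / num_bands]],
              [[0.0, 1.0],              [1 - 1 / num_bands, 1 / num_bands]]])

V = np.zeros(2)
for _ in range(500):                   # value iteration until (numerical) convergence
    Q = R + gamma * P @ V              # Q[s, a]
    V = Q.max(axis=1)

print("optimal policy (0=stay, 1=hop) per state:", Q.argmax(axis=1), "| values:", np.round(V, 2))
```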
Spectrum coexistence is essential for next generation (NextG) systems to share the spectrum with incumbent (primary) users and meet the growing demand for bandwidth. One example is the 3.5 GHz Citizens Broadband Radio Service (CBRS) band, where the 5G and beyond communication systems need to sense the spectrum and then access the channel in an opportunistic manner when the incumbent user (e.g., radar) is not transmitting. To that end, a high-fidelity classifier based on a deep neural network is needed for low misdetection (to protect incumbent users) and low false alarm (to achieve high throughput for NextG). In a dynamic wireless environment, the classifier can only be used for a limited period of time, i.e., coherence time. A portion of this period is used for learning to collect sensing results and train a classifier, and the rest is used for transmissions. In spectrum sharing systems, there is a well-known tradeoff between the sensing time and the transmission time. While increasing the sensing time can increase the spectrum sensing accuracy, there is less time left for data transmissions. In this paper, we present a generative adversarial network (GAN) approach to generate synthetic sensing results to augment the training data for the deep learning classifier so that the sensing time can be reduced (and thus the transmission time can be increased) while keeping high accuracy of the classifier. We consider both additive white Gaussian noise (AWGN) and Rayleigh channels, and show that this GAN-based approach can significantly improve both the protection of the high-priority user and the throughput of the NextG user (more in Rayleigh channels than AWGN channels).
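A bare-bones GAN sketch of the augmentation step, with stand-in random vectors in place of measured sensing results and assumed dimensions and architectures.

```python
# Minimal GAN for generating synthetic spectrum-sensing samples to augment training data.
import torch
import torch.nn as nn

feat_dim, noise_dim = 32, 16
G = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))
D = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_sensing_batch(n=64):
    return torch.randn(n, feat_dim)            # placeholder for measured sensing results

for step in range(2000):
    real = real_sensing_batch()
    fake = G(torch.randn(real.size(0), noise_dim))
    # Discriminator update: real sensing data -> 1, synthetic data -> 0.
    d_real = bce(D(real), torch.ones(real.size(0), 1))
    d_fake = bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    d_loss = d_real + d_fake
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update: make synthetic sensing results look real to the discriminator.
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic = G(torch.randn(256, noise_dim)).detach()   # augment the classifier's training set
```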
In the cybersecurity setting, defenders are often at the mercy of their detection technologies and subject to the information and experiences that individual analysts have. In order to give defenders an advantage, it is important to understand an attacker's motivation and their likely next best action. As a first step in modeling this behavior, we introduce a security game framework that simulates interplay between attackers and defenders in a noisy environment, focusing on the factors that drive decision making for attackers and defenders in the variants of the game with full knowledge and observability, knowledge of the parameters but no observability of the state ("partial knowledge"), and zero knowledge or observability ("zero knowledge"). We demonstrate the importance of making the right assumptions about attackers, given significant differences in outcomes. Furthermore, there is a measurable trade-off between false-positives and true-positives in terms of attacker outcomes, suggesting that a more false-positive prone environment may be acceptable under conditions where true-positives are also higher.
The future Internet involves several emerging technologies such as 5G and beyond networks, vehicular networks, unmanned aerial vehicle (UAV) networks, and the Internet of Things (IoT). Moreover, the future Internet is becoming heterogeneous and decentralized, with a large number of involved network entities. Each entity may need to make its own local decisions to improve the network performance under dynamic and uncertain network environments. Standard learning algorithms, such as single-agent reinforcement learning (RL) or deep reinforcement learning (DRL), have recently been used to enable each network entity, as an agent, to adaptively learn an optimal decision-making policy through interaction with the unknown environment. However, such algorithms fail to model the cooperation or competition among network entities and simply treat the other entities as part of the environment, which may lead to the non-stationarity issue. Multi-agent reinforcement learning (MARL) allows each network entity to learn its optimal policy by observing not only the environment but also the policies of the other entities. As a result, MARL can significantly improve the learning efficiency of the network entities, and it has recently been used to solve various issues in emerging networks. In this paper, we therefore review the applications of MARL in emerging networks. In particular, we provide a tutorial on MARL as well as a comprehensive survey of its applications in the next-generation Internet. Specifically, we first introduce single-agent RL and MARL, and then review a number of applications of MARL for solving emerging issues in the future Internet. These issues include network access, transmit power control, computation offloading, content caching, packet routing, trajectory design for UAV networks, and network security.
Federated learning (FL) has become popular and has shown great potential in training large-scale machine learning (ML) models without exposing the owners' raw data. In FL, data owners can train an ML model based on their local data and send only the model updates, rather than the raw data, to the model owner for aggregation. To improve the learning performance in terms of model accuracy and training completion time, it is essential to recruit sufficient participants. Meanwhile, data owners are rational and may be unwilling to participate in the collaborative learning process due to the resource consumption. To address these issues, various works have recently aimed to motivate the data owners to contribute their resources. In this paper, we provide a comprehensive review of the economic and game-theoretic approaches proposed in the literature to design various schemes for stimulating data owners to participate in the FL training process. In particular, we first present the fundamentals and background of FL and the economic theories commonly used in incentive mechanism design. Then, we review the applications of game theory and economic approaches to the design of incentive mechanisms for FL. Finally, we highlight some open issues and future research directions concerning incentive mechanism design for FL.
In recent years, mobile devices have been equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
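A minimal FedAvg-style sketch of the FL loop described above, with a toy model and synthetic client data: devices train locally and send only model updates, which the server averages.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

global_model = nn.Linear(20, 2)                         # toy model shared by all clients
clients = [(torch.randn(100, 20), torch.randint(0, 2, (100,))) for _ in range(5)]

for rnd in range(10):
    local_states = []
    for x, y in clients:
        local = copy.deepcopy(global_model)             # each device starts from the global model
        opt = torch.optim.SGD(local.parameters(), lr=0.1)
        for _ in range(5):                              # a few local epochs on private data
            opt.zero_grad()
            F.cross_entropy(local(x), y).backward()
            opt.step()
        local_states.append(local.state_dict())         # only the model update leaves the device
    # Server aggregation: average the received model parameters.
    avg = {k: torch.stack([s[k] for s in local_states]).mean(dim=0) for k in local_states[0]}
    global_model.load_state_dict(avg)
```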
Artificial intelligence (AI) will play an increasing role in cellular network deployment, configuration, and management. This paper examines the security implications of AI-driven 6G radio access networks (RANs). Although the expected timeline for 6G standardization is still several years out, pre-standardization efforts related to 6G security are already underway and will benefit from fundamental and experimental research. The Open RAN (O-RAN) describes an industry-driven open architecture and interfaces for building next-generation RANs with AI control. Considering this architecture, we identify the critical threats to data-driven network and physical layer elements, the corresponding countermeasures, and the research directions.
The success of deep learning (DL) in wireless system applications has raised concerns about new security-related challenges. One such security challenge is adversarial attacks. Although there has been much work demonstrating the susceptibility of DL-based classification tasks to adversarial attacks, regression-based problems in wireless systems have not yet been studied from an attack perspective. The aim of this paper is twofold: (i) we consider a regression problem in a wireless setting and show that adversarial attacks can break the DL-based approach, and (ii) we analyze the effectiveness of adversarial training as a defense technique in adversarial settings and show that the robustness of DL-based wireless systems against attacks improves significantly. Specifically, the wireless application considered in this paper is DL-based power allocation in the downlink of a multi-cell massive multiple-input multiple-output system, where the aim of the attack is to cause the DL model to produce infeasible solutions. We extend the gradient-based adversarial attacks, namely the fast gradient sign method (FGSM), momentum iterative FGSM, and the projected gradient descent method, to analyze the susceptibility of the considered wireless application with and without adversarial training. We analyze the performance of the deep neural network (DNN) models against these attacks, where the adversarial perturbations are crafted using both white-box and black-box attacks.
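A hedged sketch of FGSM-based adversarial training for a regression-style DNN; the power-allocation model, data, and budget here are synthetic stand-ins, not the paper's massive MIMO setup.

```python
# Adversarial training loop for a regression DNN: train on clean and FGSM-perturbed inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))  # features -> power levels
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
eps = 0.05   # assumed perturbation budget on the (normalized) model input

for step in range(1000):
    x = torch.randn(32, 16)                  # placeholder for user-position / channel features
    y = torch.rand(32, 4)                    # placeholder for target power allocations
    # Craft FGSM perturbations that maximize the regression loss.
    x_adv = x.clone().requires_grad_(True)
    F.mse_loss(model(x_adv), y).backward()
    x_adv = (x + eps * x_adv.grad.sign()).detach()
    # Train on a mix of clean and adversarial examples.
    loss = F.mse_loss(model(x), y) + F.mse_loss(model(x_adv), y)
    opt.zero_grad(); loss.backward(); opt.step()
```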