Owing to the distributed nature of federated learning (FL), researchers have found that FL is vulnerable to backdoor attacks, which aim to inject a sub-task into FL without degrading the performance of the main task. A single-shot backdoor attack achieves high accuracy on both the main task and the backdoor sub-task when injected at the time the FL model converges. However, an early-injected single-shot backdoor attack is ineffective because: (1) the backdoor does not reach its maximum effect at injection time due to the dilution from normal local updates; and (2) the backdoor effect decays quickly as it is overwritten by subsequent normal local updates. In this paper, we strengthen the early-injected single-shot backdoor attack by exploiting information leakage of the FL model. We show that FL convergence can be accelerated if clients train on datasets that mimic the distribution and gradients of the whole population. Based on this observation, we propose a two-phase backdoor attack, which consists of a preliminary phase followed by the actual backdoor attack. In the preliminary phase, the attacker-controlled client first launches a whole-population distribution inference attack and then trains on a locally crafted dataset aligned with both the gradients and the inferred distribution. Benefiting from the preliminary phase, the later-injected backdoor achieves better effectiveness, since its effect is less likely to be diluted by normal model updates. Extensive experiments are conducted on the MNIST dataset under various data heterogeneity settings to evaluate the effectiveness of the proposed backdoor attack. The results show that the proposed backdoor outperforms existing backdoor attacks in both success rate and longevity, even in the presence of defense mechanisms.
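The preliminary phase hinges on training over a locally crafted dataset whose label distribution matches the inferred whole-population distribution. Below is a minimal sketch of that crafting step, assuming the distribution-inference attack has already produced a per-class probability vector `target_dist`; the inference itself and the gradient-alignment step are not reproduced here.

```python
import numpy as np

def craft_local_dataset(X, y, target_dist, n_samples, seed=0):
    """Resample (X, y) so its label distribution matches target_dist.

    target_dist is assumed to come from the whole-population distribution
    inference attack described in the abstract (illustrative only).
    """
    rng = np.random.default_rng(seed)
    counts = np.round(np.asarray(target_dist) * n_samples).astype(int)
    idx = []
    for c, k in enumerate(counts):
        pool = np.flatnonzero(y == c)        # indices of class c in the local data
        if len(pool) and k:
            idx.extend(rng.choice(pool, size=k, replace=True))
    idx = np.asarray(idx)
    return X[idx], y[idx]
```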
Graph neural networks (GNNs) are a class of deep-learning-based methods for processing graph-structured data. GNNs have recently become a widely used approach to graph analysis, as they can learn representations for complex graph data. However, owing to privacy concerns and regulatory constraints, centralized GNNs can be difficult to apply in data-sensitive scenarios. Federated learning (FL) is an emerging technology developed for privacy-preserving settings in which several parties need to collaboratively train a shared global model. Although several research efforts have applied FL to train GNNs (federated GNNs), their robustness against backdoor attacks has not been studied. This paper bridges this gap by conducting two types of backdoor attacks in federated GNNs: the centralized backdoor attack (CBA) and the distributed backdoor attack (DBA). Our experiments show that the DBA attack success rate is higher than that of CBA in almost all evaluated cases. For CBA, the attack success rate of each local trigger is similar to that of the global trigger, even though the adversarial party's training set is embedded with the global trigger. To further explore the properties of the two backdoor attacks in federated GNNs, we evaluate the attack performance for different numbers of clients, trigger sizes, poisoning intensities, and trigger densities. Moreover, we explore the robustness of DBA and CBA against two state-of-the-art defenses. We find that both attacks are robust against the investigated defenses, necessitating that backdoor attacks in federated GNNs be treated as a new threat requiring custom defenses.
Federated Learning has emerged to cope with rising concerns about privacy breaches in the use of machine and deep learning models. This new paradigm allows deep learning models to be leveraged in a distributed manner, enhancing privacy preservation. However, the server's blindness to local datasets makes it vulnerable to model poisoning attacks and data heterogeneity, which degrade global model performance. Numerous works have proposed robust aggregation algorithms and defensive mechanisms, but these approaches are orthogonal to individual attacks or issues. FedCC, the proposed method, provides robust aggregation by comparing the Centered Kernel Alignment of Penultimate Layer Representations. Experimental results demonstrate that FedCC mitigates untargeted and targeted model poisoning (backdoor) attacks while remaining effective in non-Independently and Identically Distributed (non-IID) data environments. Against untargeted attacks, FedCC recovers the most global model accuracy. Against targeted backdoor attacks, FedCC nullifies attack confidence while preserving test accuracy. In most experiments, FedCC outperforms the baseline methods.
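FedCC's core signal is the Centered Kernel Alignment (CKA) between penultimate-layer representations. Here is a minimal sketch of linear CKA, assuming the server extracts an activation matrix (samples × features) from each client model on some probe inputs; how FedCC turns the scores into aggregation weights is not reproduced here.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices of shape (n_samples, n_features)."""
    X = X - X.mean(axis=0, keepdims=True)   # center each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denom = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / denom
```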
Federated learning (FL) provides an efficient paradigm for jointly training a global model over distributed users' data. Since the local training data come from different users who may not be trustworthy, several studies have shown that FL is vulnerable to poisoning attacks. Meanwhile, to protect the privacy of local users, FL is usually trained in a differentially private manner (DPFL). Thus, in this paper, we ask: can we leverage the innate privacy property of DPFL to provide certified robustness against poisoning attacks? Can we further improve the privacy of FL to improve such certification? We first investigate both user-level and instance-level privacy for FL, and propose novel mechanisms to achieve improved instance-level privacy. We then provide two robustness certification criteria for both levels of DPFL: certified prediction and certified attack cost. Theoretically, we prove the certified robustness of DPFL under a bounded number of adversarial users or instances. Empirically, we conduct extensive experiments to verify our theories under a range of attacks on different datasets. We show that DPFL with a tighter privacy guarantee always provides a stronger robustness certification in terms of certified attack cost, but the best certified prediction is achieved under an appropriate balance between privacy protection and utility loss.
Federated learning enables thousands of participants to construct a deep learning model without sharing their private training data with each other. For example, multiple smartphones can jointly train a next-word predictor for keyboards without revealing what individual users type. Federated models are created by aggregating model updates submitted by participants. To protect confidentiality of the training data, the aggregator by design has no visibility into how these updates are generated. We show that this makes federated learning vulnerable to a model-poisoning attack that is significantly more powerful than poisoning attacks that target only the training data. A malicious participant can use model replacement to introduce backdoor functionality into the joint model, e.g., modify an image classifier so that it assigns an attacker-chosen label to images with certain features, or force a word predictor to complete certain sentences with an attacker-chosen word. These attacks can be performed by a single participant or multiple colluding participants. We evaluate model replacement under different assumptions for the standard federated-learning tasks and show that it greatly outperforms training-data poisoning. Federated learning employs secure aggregation to protect confidentiality of participants' local models and thus cannot prevent our attack by detecting anomalies in participants' contributions to the joint model. To demonstrate that anomaly detection would not have been effective in any case, we also develop and evaluate a generic constrain-and-scale technique that incorporates the evasion of defenses into the attacker's loss function during training.
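The model-replacement idea is to scale the attacker's backdoored model so that it survives averaging with the benign updates. A sketch under a simplified averaging setting (n participants, server learning rate η); the exact scaling in a real deployment would depend on the aggregation rule.

```python
def model_replacement_update(backdoored, global_model, n_participants, eta=1.0):
    """Scale a backdoored model X so that, after averaging, the joint model is
    (approximately) replaced by it: L = gamma * (X - G) + G with gamma ~ n / eta."""
    gamma = n_participants / eta
    return {name: gamma * (backdoored[name] - global_model[name]) + global_model[name]
            for name in global_model}
```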
Federated Learning is a distributed machine learning framework designed for data privacy preservation, i.e., local data remain private throughout the entire training and testing procedure. Federated Learning is gaining popularity because it allows one to use machine learning techniques while preserving privacy. However, it inherits the vulnerabilities and susceptibilities of deep learning techniques. For instance, Federated Learning is particularly vulnerable to data poisoning attacks, which may deteriorate its performance and integrity due to its distributed nature and its inaccessibility to the raw data. In addition, it is extremely difficult to correctly identify malicious clients under non-Independently and/or Identically Distributed (non-IID) data. Real-world data can be complex and diverse, making it hard to distinguish them from malicious data without direct access to the raw data. Prior research has focused on detecting malicious clients while treating only clients with IID data as benign. In this study, we propose a method that detects and classifies anomalous clients apart from benign clients when the benign ones hold non-IID data. Our proposed method leverages feature dimension reduction, dynamic clustering, and cosine similarity-based clipping. The experimental results validate that our proposed method not only identifies the malicious clients but also alleviates their negative influence on the overall procedure. Our findings may be used in future studies to effectively eliminate anomalous clients when building a model with diverse data.
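Below is a minimal sketch of the kind of pipeline the abstract describes: flatten and reduce each client update, cluster the reduced updates, then clip contributions by cosine similarity to the cluster consensus. The specific reducer and clusterer (PCA, k-means) and the thresholds are assumptions, not the paper's exact design.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def filter_and_clip(updates, n_components=2, n_clusters=2):
    """updates: (n_clients, n_params) flattened local updates."""
    reduced = PCA(n_components=n_components).fit_transform(updates)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(reduced)
    # Treat the largest cluster as benign (an assumption for this sketch).
    benign = labels == np.bincount(labels).argmax()
    consensus = updates[benign].mean(axis=0)
    sims = updates @ consensus / (
        np.linalg.norm(updates, axis=1) * np.linalg.norm(consensus) + 1e-12)
    weights = np.clip(sims, 0.0, 1.0)        # cosine-similarity-based clipping
    weights /= weights.sum() + 1e-12
    return (weights[:, None] * updates).sum(axis=0)
```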
Federated Learning (FL) is a scheme for collaboratively training Deep Neural Networks (DNNs) with multiple data sources from different clients. Instead of sharing the data, each client trains the model locally, resulting in improved privacy. However, so-called targeted poisoning attacks have recently been proposed that allow individual clients to inject a backdoor into the trained model. Existing defenses against these backdoor attacks either rely on techniques like Differential Privacy to mitigate the backdoor, or analyze the weights of the individual models and apply outlier detection methods, which restricts these defenses to certain data distributions. However, adding noise to the models' parameters or excluding benign outliers might also reduce the accuracy of the collaboratively trained model. Additionally, allowing the server to inspect the clients' models creates a privacy risk due to existing knowledge extraction methods. We propose CrowdGuard, a model filtering defense that mitigates backdoor attacks by leveraging the clients' data to analyze the individual models before aggregation. To prevent data leaks, the server sends the individual models to secure enclaves running in client-located Trusted Execution Environments. To effectively distinguish benign and poisoned models, even if the data of different clients are not independently and identically distributed (non-IID), we introduce a novel metric called HLBIM to analyze the outputs of the DNN's hidden layers. We show that the applied significance-based detection algorithm can effectively detect poisoned models, even in non-IID scenarios. Our extensive evaluation shows that CrowdGuard can effectively mitigate targeted poisoning attacks and achieve a true-positive rate of 100% and a true-negative rate of 100% in various scenarios.
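CrowdGuard analyzes hidden-layer outputs of the received models on each client's own data; HLBIM itself is the paper's metric and is not reproduced here. A generic PyTorch sketch for collecting those hidden-layer outputs with forward hooks:

```python
import torch

def hidden_layer_outputs(model, named_layers, inputs):
    """Collect outputs of selected hidden layers for a batch of inputs.

    named_layers: dict mapping a name to a torch.nn.Module inside `model`
    (which layers to inspect is an assumption of this sketch).
    """
    captured = {}
    hooks = [
        layer.register_forward_hook(
            lambda module, inp, out, name=name: captured.__setitem__(name, out.detach()))
        for name, layer in named_layers.items()
    ]
    with torch.no_grad():
        model(inputs)
    for h in hooks:
        h.remove()
    return captured
```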
Modern defenses against cyberattacks increasingly rely on proactive approaches, e.g., predicting an adversary's next actions based on past events. Building accurate prediction models requires knowledge from many organizations; alas, this entails disclosing sensitive information, such as network structure, security posture, and policies, which is often undesirable or outright impossible. In this paper, we explore the feasibility of using federated learning (FL) to predict future security events. To this end, we introduce Cerberus, a system for the collaborative training of recurrent neural network (RNN) models among participating organizations. The intuition is that FL may offer a middle ground between the non-private approach, where training data is pooled on a central server, and the lower-performing alternative of training only local models. We instantiate Cerberus on a dataset obtained from a large security company's intrusion prevention product and evaluate it with respect to utility, robustness, and privacy, as well as how participants contribute to and benefit from the system. Overall, our work sheds light on both the positive aspects and the challenges of using FL for this task, and paves the way for deploying federated approaches for predictive security.
Byzantine-robust federated learning (FL) aims to counter malicious clients and train an accurate global model while maintaining an extremely low attack success rate. Most existing systems, however, are only robust in an honest/semi-honest majority setting. FLTrust (NDSS '21) extends the context to a malicious majority of clients, but the server must be provided with an auxiliary dataset before training in order to filter malicious inputs. Private FLAME/FLGuard (USENIX '22) provides a solution that guarantees both robustness and update confidentiality in the semi-honest majority context. So far, it has not been possible to balance the trade-off among the malicious context, robustness, and update confidentiality. To address this, we propose a novel Byzantine-robust and privacy-preserving FL system that tolerates a malicious minority or majority among both servers and clients. Specifically, based on the DBSCAN algorithm, we design a new method that clusters local updates via pairwise adjusted cosine similarity to improve the accuracy of the clustering results. To thwart attacks from a malicious majority, we develop an algorithm called model segmentation, in which local updates belonging to the same cluster are aggregated together and the aggregates are correctly sent back to the corresponding clients. We also leverage multiple cryptographic tools to perform the clustering task without sacrificing training correctness or update confidentiality. We present detailed security proofs and empirical evaluations, along with a brief convergence analysis. Experimental results show that the test accuracy of the proposed system is practically close to the FL baseline (a 0.8% gap on average), while the attack success rate is around 0%-5%. We further optimize the design so that the communication overhead and runtime can be reduced by 67%-89.17% and 66.05%-68.75%, respectively.
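A plaintext sketch of the clustering step, using DBSCAN over pairwise cosine distances between flattened local updates; the paper's adjusted cosine similarity and its cryptographic realization are not reproduced, and eps/min_samples are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_local_updates(updates, eps=0.3, min_samples=3):
    """updates: (n_clients, n_params) flattened local updates."""
    U = updates / (np.linalg.norm(updates, axis=1, keepdims=True) + 1e-12)
    distance = np.clip(1.0 - U @ U.T, 0.0, 2.0)   # pairwise cosine distance
    np.fill_diagonal(distance, 0.0)
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="precomputed").fit_predict(distance)
    return labels                                  # -1 marks noise; others are cluster ids
```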
Federated learning (FL) is a distributed machine learning approach in which multiple clients collaboratively train a joint model without exchanging data. Although FL has achieved unprecedented success in protecting data privacy, its vulnerability to free-rider attacks has attracted increasing attention. Existing defenses may be ineffective against highly camouflaged free-riders or a high percentage of them. To address these challenges, we reconsider the defense from a novel perspective, namely the evolving frequency of model weights. Empirically, we obtain a novel insight: during FL training, the evolving frequency of model weights differs significantly between free-riders and benign clients. Inspired by this insight, we propose a novel defense based on the evolving frequency of model weights, called WEF-Defense. Specifically, we first collect the weight evolving frequency (defined as the WEF-Matrix) during local training. For each client, the WEF-Matrix of its local model is uploaded to the server together with its model weights at each iteration. The server then separates free-riders from benign clients based on the differences between their WEF-Matrices. Finally, the server uses a personalized approach to provide different global models to the corresponding clients. Comprehensive experiments on five datasets and five models show that WEF-Defense performs better than state-of-the-art baselines.
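The abstract leaves the exact definition of the WEF-Matrix to the paper; one plausible reading, sketched below purely as an assumption, is to track how often each weight changes direction across local epochs.

```python
import numpy as np

def wef_matrix(weight_snapshots):
    """weight_snapshots: list of flattened weight vectors, one per local epoch
    (at least three snapshots are needed).

    Returns a per-weight count of direction changes, used here as an assumed
    proxy for the paper's weight evolving frequency.
    """
    W = np.stack(weight_snapshots)           # (epochs, n_params)
    deltas = np.diff(W, axis=0)              # per-epoch weight changes
    flips = np.diff(np.sign(deltas), axis=0) != 0
    return flips.sum(axis=0)
```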
Although recent works have shown that federated learning (FL) may be vulnerable to poisoning attacks by compromised clients, their practical impact on production FL systems is not fully understood. In this work, we aim to develop a comprehensive systemization by enumerating all possible threat models, variations of poisoning, and adversary capabilities. We focus our attention on untargeted poisoning attacks, as we argue that they are the most relevant to production FL deployments. We present a critical analysis of untargeted poisoning attacks under practical production FL environments by carefully characterizing realistic threat models and adversarial capabilities. Our findings are rather surprising: contrary to the established belief, we show that FL is highly robust in practice even when using simple, low-cost defenses. We go even further and propose novel, state-of-the-art data and model poisoning attacks, and show via extensive experiments on three benchmark datasets how (in)effective poisoning attacks are in the presence of simple defense mechanisms. We aim to correct previous misconceptions and offer concrete guidelines for conducting more accurate (and more realistic) research on this topic.
Federated learning is a popular strategy for training models on distributed, sensitive data while preserving data privacy. Prior work has identified a range of security threats against federated learning protocols that poison the data or the model. However, federated learning is a networked system, and the communication between clients and the server plays a critical role in the learning task's performance. We highlight how communication introduces another vulnerability surface in federated learning and study the impact of network-level adversaries on training federated learning models. We show that an attacker that drops the network traffic from carefully selected clients can significantly degrade model accuracy on a target population. Moreover, we show that a coordinated poisoning campaign from a small number of clients can amplify the degradation attack. Finally, we develop a server-side defense that mitigates the impact of the attacks by identifying and up-sampling clients likely to contribute positively to target accuracy. We comprehensively evaluate our attacks and defenses on three datasets, assuming encrypted communication channels and an attacker with partial visibility of the network.
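The server-side defense reweights the aggregation toward clients judged likely to help target accuracy. A minimal sketch of the weighted averaging such an up-sampling step could plug into; how the per-client weights are estimated is the paper's contribution and is not reproduced here.

```python
import numpy as np

def weighted_aggregate(updates, client_weights):
    """updates: (n_clients, n_params); client_weights: non-negative up-sampling weights."""
    w = np.asarray(client_weights, dtype=float)
    w = w / (w.sum() + 1e-12)                 # normalize to a convex combination
    return (w[:, None] * updates).sum(axis=0)
```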
Federated learning (FL) allows multiple clients to collaboratively train a neural network (NN) model on their private data without revealing the data. Recently, several targeted poisoning attacks against FL have been introduced. These attacks inject a backdoor into the resulting model, allowing adversary-controlled inputs to be misclassified. Existing countermeasures against backdoor attacks are inefficient and often merely aim to exclude deviating models from the aggregation. However, this approach also removes benign models of clients with deviating data distributions, causing the aggregated model to perform poorly for such clients. To address this problem, we propose DeepSight, a novel model-filtering approach for mitigating backdoor attacks. It is based on three novel techniques that allow characterizing the distribution of the data used to train model updates and seek to measure fine-grained differences in the internal structure and outputs of NNs. Using these techniques, DeepSight can identify suspicious model updates. We also develop a scheme that can accurately cluster model updates. Combining the results of both components, DeepSight is able to identify and eliminate model clusters containing poisoned models with high attack impact. We also show that the backdoor contributions of possibly undetected poisoned models can be further mitigated with existing weight-clipping-based defenses. We evaluate the performance and effectiveness of DeepSight and show that it can mitigate state-of-the-art backdoor attacks with a negligible impact on the model's performance on benign data.
Federated learning (FL) is a widely adopted distributed learning paradigm that, in practice, aims to protect users' data privacy while leveraging the entire dataset of all participants for training. In FL, multiple models are trained independently on the users and aggregated centrally to update a global model in an iterative process. Although this approach is excellent at preserving privacy, FL still suffers from quality issues such as attacks or Byzantine faults. Some recent attempts have addressed such quality challenges with robust aggregation techniques for FL. However, the effectiveness of state-of-the-art (SOTA) robust FL techniques is still unclear and lacks a comprehensive study. Therefore, to better understand the current quality status and challenges of these SOTA FL techniques in the presence of attacks and faults, we conduct a large-scale empirical study to investigate the quality of SOTA FL from multiple angles: attacks, simulated faults (via mutation operators), and aggregation (defense) methods. In particular, we perform our study on two generic image datasets and one real-world federated medical image dataset. We also systematically investigate, per dataset, the effect of the proportion of attacked users and of the independent and identically distributed (IID) factor of the attacks/faults on the robustness results. After a large-scale analysis with 496 configurations, we find that most mutators on each individual user have a negligible effect on the final model. Moreover, choosing the most robust FL aggregator depends on the attacks and datasets. Finally, we illustrate that a generic solution built from a simple ensemble of aggregators can perform on par with the best single aggregator across almost all attacks and configurations.
Federated learning (FL) is an emerging machine learning paradigm, in which clients jointly learn a model with the help of a cloud server. A fundamental challenge of FL is that the clients are often heterogeneous, e.g., they have different computing powers, and thus the clients may send model updates to the server with substantially different delays. Asynchronous FL aims to address this challenge by enabling the server to update the model once any client's model update reaches it without waiting for other clients' model updates. However, like synchronous FL, asynchronous FL is also vulnerable to poisoning attacks, in which malicious clients manipulate the model via poisoning their local data and/or model updates sent to the server. Byzantine-robust FL aims to defend against poisoning attacks. In particular, Byzantine-robust FL can learn an accurate model even if some clients are malicious and have Byzantine behaviors. However, most existing studies on Byzantine-robust FL focused on synchronous FL, leaving asynchronous FL largely unexplored. In this work, we bridge this gap by proposing AFLGuard, a Byzantine-robust asynchronous FL method. We show that, both theoretically and empirically, AFLGuard is robust against various existing and adaptive poisoning attacks (both untargeted and targeted). Moreover, AFLGuard outperforms existing Byzantine-robust asynchronous FL methods.
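The abstract does not spell out AFLGuard's acceptance rule; the sketch below is a hedged illustration of the kind of server-side check such a method could implement, assuming the server computes its own reference update on a small clean dataset and accepts an asynchronous client update only if it stays close to that reference.

```python
import numpy as np

def accept_async_update(client_update, server_update, lam=1.0):
    """Accept the client's asynchronous update only if its deviation from the
    server's trusted update is within lam times the trusted update's norm
    (an illustrative criterion, not necessarily the paper's exact rule)."""
    deviation = np.linalg.norm(client_update - server_update)
    return deviation <= lam * np.linalg.norm(server_update)
```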
Federated learning (FL) is a novel framework for decentralized machine learning. Because of its decentralized nature, FL is vulnerable to adversarial attacks during the training procedure, e.g., backdoor attacks. A backdoor attack aims to inject a backdoor into the machine learning model so that the model behaves arbitrarily on test samples carrying a specific backdoor trigger. Even though a range of backdoor attack methods for FL has been introduced, there are also methods for defending against them. Many of these defenses exploit the anomalous characteristics of backdoored models, or the differences between backdoored and regular models. To bypass these defenses, we need to reduce such differences and anomalous characteristics. We find that one source of these anomalies is that backdoor attacks directly flip data labels when poisoning the data. However, current research on FL backdoor attacks does not focus on reducing the difference between backdoored and regular models. In this paper, we propose Adversarial Knowledge Distillation (ADVKD), a method that combines knowledge distillation with backdoor attacks in FL. With knowledge distillation, we can reduce the anomalous features in the model caused by label flipping, so the model can bypass the defenses. Compared with current methods, we show that ADVKD not only achieves a higher attack success rate, but also successfully bypasses defenses when other methods fail. To further explore the performance of ADVKD, we test how its parameters affect its performance under different scenarios. Based on the experimental results, we summarize how to tune the parameters for better performance in different scenarios. We also use several techniques to visualize the effects of different attacks and to explain the effectiveness of ADVKD.
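ADVKD replaces hard label flipping with distillation from a teacher's soft outputs. Below is a standard knowledge-distillation loss in PyTorch of the kind such an attack could train against; the temperature and the choice of teacher are assumptions, and the paper's adversarial variant is not reproduced.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened student and teacher output distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```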
The recently emerged federated learning (FL) is an attractive distributed learning framework in which numerous wireless end-user devices can train a global model while the data stay local. Compared with the traditional machine learning framework, which collects user data for centralized storage and thereby brings a huge communication burden and data privacy concerns, this approach can not only save network bandwidth but also protect data privacy. Despite its promise, Byzantine attacks, an intractable threat in conventional distributed networks, have also been found to be quite effective against FL. In this paper, we conduct a comprehensive investigation of the state-of-the-art strategies for defending against Byzantine attacks in FL. We first provide a taxonomy of existing defense solutions according to the techniques they use, followed by a comparison and discussion across the board. We then propose a new Byzantine attack method called the weight attack to defeat these defense schemes, and conduct experiments to demonstrate its threat. The results show that existing defense solutions, although abundant, are still far from fully protecting FL. Finally, we indicate possible countermeasures against the weight attack, and highlight several challenges and future research directions for mitigating Byzantine attacks in FL.
Federated learning is inherently vulnerable to model poisoning attacks because its decentralized nature allows attackers to participate with compromised devices. In model poisoning attacks, the attacker degrades the model's performance on targeted sub-tasks (e.g., classifying planes as birds) by uploading "poisoned" updates. In this report, we introduce \algoname{}, a novel defense that uses global top-k update sparsification and device-level gradient clipping to mitigate model poisoning attacks. We propose a theoretical framework for analyzing the robustness of defenses against poisoning attacks, and provide robustness and convergence analyses of our algorithm. To validate its empirical efficacy, we conduct an open-source evaluation across multiple benchmark datasets for computer vision and federated learning.
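The defense's two ingredients named in the abstract, device-level gradient clipping and global top-k update sparsification, can be sketched directly; k and the clipping bound are illustrative, and \algoname{}'s full aggregation is not reproduced here.

```python
import numpy as np

def clip_update(update, clip_bound=1.0):
    """Device-level clipping: rescale the update if its L2 norm exceeds the bound."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_bound / (norm + 1e-12))

def topk_sparsify(update, k):
    """Top-k sparsification: keep only the k largest-magnitude coordinates."""
    sparse = np.zeros_like(update)
    idx = np.argpartition(np.abs(update), -k)[-k:]
    sparse[idx] = update[idx]
    return sparse
```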
Differentially private federated learning (DP-FL) has received increasing attention to mitigate the privacy risk in federated learning. Although different schemes for DP-FL have been proposed, there is still a utility gap. Employing central Differential Privacy in FL (CDP-FL) can provide a good balance between the privacy and model utility, but requires a trusted server. Using Local Differential Privacy for FL (LDP-FL) does not require a trusted server, but suffers from lousy privacy-utility trade-off. Recently proposed shuffle DP based FL has the potential to bridge the gap between CDP-FL and LDP-FL without a trusted server; however, there is still a utility gap when the number of model parameters is large. In this work, we propose OLIVE, a system that combines the merits from CDP-FL and LDP-FL by leveraging Trusted Execution Environment (TEE). Our main technical contributions are the analysis and countermeasures against the vulnerability of TEE in OLIVE. Firstly, we theoretically analyze the memory access pattern leakage of OLIVE and find that there is a risk for sparsified gradients, which is common in FL. Secondly, we design an inference attack to understand how the memory access pattern could be linked to the training data. Thirdly, we propose oblivious yet efficient algorithms to prevent the memory access pattern leakage in OLIVE. Our experiments on real-world data demonstrate that OLIVE is efficient even when training a model with hundreds of thousands of parameters and effective against side-channel attacks on TEE.
Federated learning (FL) is vulnerable to model poisoning attacks, in which malicious clients corrupt the global model by sending manipulated model updates to the server. Existing defenses mainly rely on Byzantine-robust methods, which aim to learn an accurate global model even if some clients are malicious. However, in practice they can only tolerate a small number of malicious clients. How to defend against model poisoning attacks with a large number of malicious clients remains an open challenge. Our FLDetector addresses this challenge by detecting malicious clients. FLDetector aims to detect and remove the majority of the malicious clients so that a Byzantine-robust FL method can learn an accurate global model using the remaining clients. Our key observation is that, in model poisoning attacks, the model updates from a client across multiple iterations are inconsistent. Therefore, FLDetector detects malicious clients by checking the consistency of their model updates. Roughly speaking, the server predicts a client's model update in each iteration based on its historical model updates using the Cauchy mean value theorem and L-BFGS, and flags a client as malicious if the received model updates are inconsistent with the predicted ones across multiple iterations. Our extensive experiments on three benchmark datasets show that FLDetector can accurately detect malicious clients under multiple state-of-the-art model poisoning attacks. After removing the detected malicious clients, existing Byzantine-robust FL methods can learn accurate global models.
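FLDetector scores clients by how far their submitted updates drift from updates predicted via the Cauchy mean value theorem and an L-BFGS Hessian approximation. A simplified sketch of the scoring step is below, with the predicted updates assumed to be given; the L-BFGS prediction itself is not reproduced here.

```python
import numpy as np

def suspicion_scores(received, predicted):
    """received, predicted: lists of (n_clients, n_params) arrays, one per round.

    Returns a per-client score: the average normalized distance between the
    received and predicted updates, higher meaning more suspicious.
    """
    scores = []
    for rec, pred in zip(received, predicted):
        err = np.linalg.norm(rec - pred, axis=1)
        scores.append(err / (err.sum() + 1e-12))
    return np.mean(scores, axis=0)
```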