Modern defenses against cyberattacks increasingly rely on proactive approaches, e.g., predicting the adversary's next actions based on past events. Building accurate prediction models requires knowledge from many organizations; alas, this entails disclosing sensitive information, such as network structures, security postures, and policies, which is often undesirable or outright impossible. In this paper, we explore the feasibility of using Federated Learning (FL) to predict future security events. To this end, we introduce Cerberus, a system enabling the collaborative training of Recurrent Neural Network (RNN) models for participating organizations. The intuition is that FL could potentially offer a middle ground between the non-private approach, where training data is pooled at a central server, and the low-utility alternative of training only local models. We instantiate Cerberus on a dataset obtained from a major security company's intrusion prevention product and evaluate it with respect to utility, robustness, and privacy, as well as how participants contribute to and benefit from the system. Overall, our work sheds light on both the positive aspects and the challenges of using FL for this task and paves the way for deploying federated approaches to predictive security.
In recent years, mobile devices have been equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than the raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
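To make the aggregation step described above concrete, here is a minimal, illustrative FedAvg-style sketch (the function name `fedavg` and the toy data are our own assumptions, not code from the surveyed work): each client uploads its locally trained weights and its sample count, and the server takes a sample-weighted average.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Sample-weighted average of client model weights (FedAvg-style).

    client_weights: list of per-client weight lists (one np.ndarray per layer).
    client_sizes:   number of local training samples per client.
    """
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(num_layers)
    ]

# Toy usage: three clients, each with a single 2x2 weight matrix.
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
sizes = [10, 20, 30]
global_model = fedavg(clients, sizes)
print(global_model[0])  # weighted mean: (1*10 + 2*20 + 3*30) / 60 ~= 2.33
```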
Federated learning enables thousands of participants to construct a deep learning model without sharing their private training data with each other. For example, multiple smartphones can jointly train a next-word predictor for keyboards without revealing what individual users type. Federated models are created by aggregating model updates submitted by participants. To protect confidentiality of the training data, the aggregator by design has no visibility into how these updates are generated. We show that this makes federated learning vulnerable to a model-poisoning attack that is significantly more powerful than poisoning attacks that target only the training data. A malicious participant can use model replacement to introduce backdoor functionality into the joint model, e.g., modify an image classifier so that it assigns an attacker-chosen label to images with certain features, or force a word predictor to complete certain sentences with an attacker-chosen word. These attacks can be performed by a single participant or multiple colluding participants. We evaluate model replacement under different assumptions for the standard federated-learning tasks and show that it greatly outperforms training-data poisoning. Federated learning employs secure aggregation to protect confidentiality of participants' local models and thus cannot prevent our attack by detecting anomalies in participants' contributions to the joint model. To demonstrate that anomaly detection would not have been effective in any case, we also develop and evaluate a generic constrain-and-scale technique that incorporates the evasion of defenses into the attacker's loss function during training.
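As a toy illustration of the model-replacement idea described above, assuming a FedAvg-style server update with learning rate eta over n clients (the variable names and numbers below are ours), the attacker scales its backdoored model's deviation from the current global model by n/eta so that, near convergence, the aggregated model is approximately replaced by the attacker's model:

```python
import numpy as np

def aggregate(global_w, client_ws, eta, n):
    """Server update: G_{t+1} = G_t + (eta / n) * sum_i (L_i - G_t)."""
    return global_w + (eta / n) * sum(w - global_w for w in client_ws)

rng = np.random.default_rng(0)
n, eta = 10, 1.0
G = rng.normal(size=5)                                           # current global model
benign = [G + 0.01 * rng.normal(size=5) for _ in range(n - 1)]   # near convergence
X = rng.normal(size=5)                                           # attacker's backdoored model

# Model replacement: scale the malicious deviation by gamma = n / eta so that,
# when benign deviations roughly cancel out, the new global model ~= X.
gamma = n / eta
malicious = gamma * (X - G) + G

G_next = aggregate(G, benign + [malicious], eta, n)
print(np.allclose(G_next, X, atol=0.05))  # True: global model approximately replaced
```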
In terms of artificial intelligence, there are several security and privacy deficiencies in the traditional centralized training of machine learning models by a server. To address this limitation, federated learning (FL) has been proposed and is known for breaking down "data silos" and protecting the privacy of users. However, FL has not yet gained popularity in industry, mainly due to its security and privacy issues and its high communication cost. For the purpose of advancing research in this field, building a robust FL system, and realizing the wide application of FL, this paper systematically sorts out the possible attacks on and corresponding defenses of current FL systems. Firstly, this paper briefly introduces the basic workflow of FL and related knowledge of attacks and defenses. It reviews a great deal of research on privacy theft and malicious attacks published in recent years. Most importantly, in view of the three current classification criteria, namely the three stages of machine learning, the three different roles in federated learning, and the CIA (Confidentiality, Integrity, and Availability) guidelines on privacy protection, we divide attack approaches into two categories according to the training stage and the prediction stage in machine learning. Furthermore, we also identify the CIA property violated by each attack method and the potential attacker role. Various defense mechanisms are then analyzed separately from the perspectives of privacy and security. Finally, we summarize the possible challenges in the application of FL from the aspect of attacks and defenses and discuss the future development direction of FL systems. In this way, the designed FL system has the ability to resist different attacks and is more secure and stable.
Federated learning has been a hot research topic, enabling the collaborative training of machine learning models among different organizations under privacy restrictions. As researchers try to support more machine learning models with different privacy-preserving approaches, there is a need to develop systems and infrastructures that ease the development of various federated learning algorithms. Just as deep learning systems such as PyTorch and TensorFlow have boosted the development of deep learning, federated learning systems (FLSs) are equally important and face challenges in various aspects such as effectiveness, efficiency, and privacy. In this survey, we conduct a comprehensive review of federated learning systems. To provide a smooth flow and guide future research, we introduce the definition of federated learning systems and analyze the system components. Moreover, we provide a thorough categorization of federated learning systems along six different aspects, including data distribution, machine learning model, privacy mechanism, communication architecture, scale of federation, and motivation of federation. The categorization can help the design of federated learning systems, as shown in our case studies. By systematically summarizing existing federated learning systems, we present the design factors, case studies, and future research opportunities.
Federated learning is a collaborative method that aims to preserve data privacy while creating AI models. Current approaches to federated learning tend to rely heavily on secure aggregation protocols to preserve data privacy. However, to some degree, such protocols assume that the entity orchestrating the federated learning process (i.e., the server) is not fully malicious or dishonest. We investigate vulnerabilities to secure aggregation that could arise if the server is fully malicious and attempts to obtain access to private, potentially sensitive data. Furthermore, we provide a method to further defend against such a malicious server, and demonstrate effectiveness against known attacks that reconstruct data in a federated learning setting.
The emergence of federated learning has facilitated large-scale data exchange among machine learning models while preserving privacy. Although it has been around for some time, federated learning is evolving rapidly to make its wider use more practical. One of the most important advances in this field is the incorporation of transfer learning into federated learning, which overcomes fundamental limitations of plain federated learning, particularly with respect to security. This chapter performs a comprehensive survey on the intersection of federated and transfer learning from a security point of view. The main goal of this study is to uncover potential vulnerabilities and defense mechanisms that might compromise the privacy and performance of systems that use federated and transfer learning.
Federated learning (FL) is a widely adopted distributed learning paradigm that, in practice, aims to protect users' data privacy while leveraging the entire dataset of all participants for training. In FL, multiple models are trained independently on the users' devices and aggregated centrally to update a global model in an iterative process. Although this approach is excellent at preserving privacy, FL still suffers from quality issues such as attacks or Byzantine faults. Several recent attempts have addressed such quality challenges with robust aggregation techniques for FL. However, the effectiveness of state-of-the-art (SOTA) robust techniques is still unclear and lacks a comprehensive study. Therefore, to better understand the current quality status and challenges of these SOTA FL techniques in the presence of attacks and faults, we perform a large-scale empirical study to investigate the quality of SOTA FL from multiple angles: attacks, simulated faults (via mutation operators), and aggregation (defense) methods. In particular, we conduct our study on two generic image datasets and one real-world federated medical image dataset. We also systematically investigate, per dataset, the effect of the distribution of attacked/faulty users and of the independent and identically distributed (IID) factor on the robustness results. After a large-scale analysis with 496 configurations, we find that most mutators applied to individual users have a negligible effect on the final model. Moreover, choosing the most robust FL aggregator depends on the attacks and datasets. Finally, we illustrate that a generic solution based on a simple ensemble of aggregators can perform almost as well as, or even better than, any single aggregator across all attacks and configurations.
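The robust aggregation (defense) methods compared in such studies are not detailed in the abstract; as background, here is a small sketch of two standard robust aggregators that an ensemble could combine, coordinate-wise median and trimmed mean (function names and toy updates are our own assumptions).

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median of stacked client updates (robust to outliers)."""
    return np.median(np.stack(updates), axis=0)

def trimmed_mean(updates, trim=1):
    """Mean after discarding the `trim` largest and smallest values per coordinate."""
    stacked = np.sort(np.stack(updates), axis=0)
    return stacked[trim:len(updates) - trim].mean(axis=0)

updates = [np.array([1.0, 2.0]), np.array([1.1, 2.1]),
           np.array([0.9, 1.9]), np.array([50.0, -50.0])]  # last update is poisoned
print(coordinate_median(updates))     # ~[1.05, 1.95]: outlier has little influence
print(trimmed_mean(updates, trim=1))  # ~[1.05, 1.95]: outlier trimmed away
```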
Wider coverage and better solutions to latency reduction in 5G call for its combination with multi-access edge computing (MEC) technology. Decentralized deep learning (DDL), such as federated learning and swarm learning, is a promising solution for privacy-preserving data processing across millions of smart edge devices: it leverages distributed computation of multi-layer neural networks within a network of local clients without disclosing the original local training data. Notably, in industries such as finance and healthcare, where sensitive transaction data and personal medical records are carefully maintained, DDL can facilitate cooperation among these institutions to improve the performance of trained models while protecting the data privacy of the participating clients. In this survey paper, we present the technical fundamentals of DDL that benefit many walks of society through decentralized learning. Furthermore, we offer a comprehensive overview of the current state of the art in this field by outlining the challenges of DDL and the most relevant solutions from the novel perspectives of communication efficiency and reliability.
Federated Learning (FL) is a scheme for collaboratively training Deep Neural Networks (DNNs) with multiple data sources from different clients. Instead of sharing the data, each client trains the model locally, resulting in improved privacy. However, recently so-called targeted poisoning attacks have been proposed that allow individual clients to inject a backdoor into the trained model. Existing defenses against these backdoor attacks either rely on techniques like Differential Privacy to mitigate the backdoor, or analyze the weights of the individual models and apply outlier detection methods that restrict these defenses to certain data distributions. However, adding noise to the models' parameters or excluding benign outliers might also reduce the accuracy of the collaboratively trained model. Additionally, allowing the server to inspect the clients' models creates a privacy risk due to existing knowledge extraction methods. We propose CrowdGuard, a model filtering defense that mitigates backdoor attacks by leveraging the clients' data to analyze the individual models before aggregation. To prevent data leaks, the server sends the individual models to secure enclaves running in client-located Trusted Execution Environments. To effectively distinguish benign and poisoned models, even if the data of different clients are not independently and identically distributed (non-IID), we introduce a novel metric called HLBIM to analyze the outputs of the DNNs' hidden layers. We show that the applied significance-based detection algorithm can effectively detect poisoned models, even in non-IID scenarios. Our extensive evaluation shows that CrowdGuard can effectively mitigate targeted poisoning attacks, achieving True-Positive and True-Negative Rates of 100% in various scenarios.
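CrowdGuard's HLBIM metric and significance-based detection are not spelled out in this abstract, so the sketch below is only a simplified stand-in for the general idea of client-side model filtering, not the paper's algorithm: given one feature vector per received model (e.g., its mean hidden-layer activations on the client's own data), flag models whose mean cosine distance to the others is far above the median. The function name, the margin, and the toy data are illustrative assumptions.

```python
import numpy as np

def flag_suspicious_models(activations, margin=0.5):
    """Flag models whose activations deviate strongly from the majority.

    activations: (num_models, dim) array, e.g., each local model's mean
    hidden-layer activation vector computed on this client's own data.
    Returns a boolean mask of suspected models (True = suspicious).
    """
    normed = activations / np.linalg.norm(activations, axis=1, keepdims=True)
    cosine = normed @ normed.T
    # Mean cosine distance of each model to all other models.
    dist = 1.0 - (cosine.sum(axis=1) - 1.0) / (len(cosine) - 1)
    return dist > np.median(dist) + margin

benign = np.ones((7, 8)) + 0.01 * np.arange(56).reshape(7, 8)
poisoned = -np.ones((1, 8))                        # behaves very differently
print(flag_suspicious_models(np.vstack([benign, poisoned])))
# Only the last (deviating) model is flagged as suspicious.
```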
In federated learning (FL), a set of participants share updates computed on their local data with an aggregation server that combines the updates into a global model. However, reconciling accuracy with privacy and security is a challenge for FL. On the one hand, good updates sent by honest participants may reveal their private local information, whereas poisoned updates sent by malicious participants may compromise the model's availability and/or integrity. On the other hand, enhancing privacy through update distortion damages accuracy, whereas doing so through update aggregation damages security because it does not allow the server to filter out individual poisoned updates. To tackle the accuracy-privacy-security conflict, we propose fragmented federated learning (FFL), in which participants randomly exchange and mix fragments of their updates before sending them to the server. To achieve privacy, we design a lightweight protocol that allows participants to privately exchange and mix encrypted fragments of their updates so that the server can neither obtain individual updates nor link them to their originators. To achieve security, we design a reputation-based defense tailored to FFL that builds trust in participants and their mixed updates based on the quality of the fragments they exchange and of the mixed updates they send. Since the exchanged fragments' parameters keep their original coordinates and attackers can be neutralized, the server can correctly reconstruct the global model from the received mixed updates without accuracy loss. Experiments on four real datasets show that FFL can prevent semi-honest servers from mounting privacy attacks, can effectively counter poisoning attacks, and can keep the accuracy of the global model.
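The actual FFL protocol exchanges encrypted fragments and adds a reputation-based defense; the following unencrypted sketch (all names and parameters are our own assumptions) only illustrates the core mixing idea: participants swap random coordinate subsets of their updates with peers, so the server receives mixed updates that it cannot attribute to a single participant while the coordinate-wise average is preserved.

```python
import numpy as np

def fragment_and_mix(updates, frac=0.5, seed=0):
    """Toy (unencrypted) fragment exchange in the spirit of FFL.

    Each participant swaps a random subset of its coordinates with a randomly
    chosen peer. Fragments keep their original coordinates, so the
    coordinate-wise sum (and hence the average) of all updates is unchanged.
    """
    rng = np.random.default_rng(seed)
    mixed = [u.copy() for u in updates]
    n, dim = len(updates), len(updates[0])
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])  # random peer
        swap = rng.random(dim) < frac                     # coordinates to swap
        mixed[i][swap], mixed[j][swap] = mixed[j][swap], mixed[i][swap]
    return mixed

updates = [np.arange(4, dtype=float) + 10 * c for c in range(3)]
mixed = fragment_and_mix(updates)
# The global average is unchanged by the mixing.
print(np.allclose(np.mean(updates, axis=0), np.mean(mixed, axis=0)))  # True
```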
Federated learning (FL) allows multiple clients to collaboratively train a neural network (NN) model on their private data without revealing that data. Recently, several targeted poisoning attacks against FL have been introduced. These attacks inject a backdoor into the resulting model that allows adversary-controlled inputs to be misclassified. Existing countermeasures against backdoor attacks are inefficient and often merely aim to exclude deviating models from the aggregation. However, this approach also removes benign models of clients with deviating data distributions, causing the aggregated model to perform poorly for such clients. To address this problem, we propose DeepSight, a novel model filtering approach for mitigating backdoor attacks. It is based on three novel techniques that allow characterizing the distribution of the data used to train model updates and seek to measure fine-grained differences in the internal structure and outputs of NNs. Using these techniques, DeepSight can identify suspicious model updates. We also develop a scheme that can accurately cluster model updates. Combining the results of both components, DeepSight is able to identify and eliminate model clusters containing poisoned models with high attack impact. We also show that the backdoor contributions of possibly undetected poisoned models can be effectively mitigated with existing weight-clipping-based defenses. We evaluate the performance and effectiveness of DeepSight and show that it can mitigate state-of-the-art backdoor attacks with a negligible impact on the model's performance on benign data.
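DeepSight's classifiers and clustering are not reproduced here; the sketch below shows only the complementary weight-clipping step mentioned at the end, i.e., bounding the L2 norm of each client update before averaging (the function name and constants are illustrative).

```python
import numpy as np

def clip_updates(updates, clip_norm):
    """Scale each client update so its L2 norm is at most `clip_norm`.

    This bounds how much any single (possibly poisoned) update can move
    the aggregated model.
    """
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    return clipped

updates = [np.array([0.1, -0.2]), np.array([0.2, 0.1]), np.array([8.0, 8.0])]
clipped = clip_updates(updates, clip_norm=1.0)
print([round(float(np.linalg.norm(u)), 3) for u in clipped])  # all norms <= 1.0
```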
Federated Learning is a distributed machine learning framework designed for data privacy preservation, i.e., local data remain private throughout the entire training and testing procedure. Federated Learning is gaining popularity because it allows one to use machine learning techniques while preserving privacy. However, it inherits the vulnerabilities and susceptibilities raised in deep learning techniques. For instance, Federated Learning is particularly vulnerable to data poisoning attacks that may deteriorate its performance and integrity due to its distributed nature and inaccessibility to the raw data. In addition, it is extremely difficult to correctly identify malicious clients due to non-Independently and/or Identically Distributed (non-IID) data. Real-world data can be complex and diverse, making them hard to distinguish from malicious data without direct access to the raw data. Prior research has focused on detecting malicious clients while treating only clients with IID data as benign. In this study, we propose a method that detects and classifies anomalous clients from benign clients when the benign ones have non-IID data. Our proposed method leverages feature dimension reduction, dynamic clustering, and cosine similarity-based clipping. The experimental results validate that our proposed method not only classifies the malicious clients but also alleviates their negative influence on the entire procedure. Our findings may be used in future studies to effectively eliminate anomalous clients when building a model with diverse data.
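The abstract names feature dimension reduction, dynamic clustering, and cosine-similarity-based clipping without giving details, so the sketch below is one plausible reading rather than the authors' pipeline: PCA-reduced client updates are clustered, and a thresholded cosine check discards updates that point away from a reference direction. All function names, thresholds, and toy data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def split_clients_by_update(updates, n_components=2, n_clusters=2, seed=0):
    """Reduce flattened client updates with PCA and cluster them.

    Returns cluster labels; in this toy setting the smaller cluster can be
    treated as suspicious.
    """
    reduced = PCA(n_components=n_components).fit_transform(np.stack(updates))
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(reduced)

def cosine_clip(update, reference, min_sim=0.0):
    """Discard an update whose direction disagrees too much with a reference."""
    sim = update @ reference / (np.linalg.norm(update) * np.linalg.norm(reference) + 1e-12)
    return update if sim >= min_sim else np.zeros_like(update)

rng = np.random.default_rng(0)
benign = [rng.normal(1.0, 0.1, 16) for _ in range(8)]
malicious = [-np.ones(16) * 5 for _ in range(2)]
print(split_clients_by_update(benign + malicious))  # malicious updates form their own cluster

ref = np.median(np.stack(benign), axis=0)            # reference direction from the crowd
print(np.allclose(cosine_clip(malicious[0], ref), 0))  # True: discarded
```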
The pharmaceutical industry can better leverage its data assets to virtualize drug discovery through collaborative machine learning platforms. On the other hand, there are non-negligible risks stemming from unintended leakage of participants' training data; hence, such platforms must be secure and privacy-preserving. This paper describes a privacy risk assessment for collaborative modeling in the preclinical phase of drug discovery, aimed at accelerating the selection of promising drug candidates. After a short taxonomy of state-of-the-art inference attacks, we adopt and customize several baseline scenarios. Finally, we describe and experiment with some relevant privacy-preserving techniques to mitigate such attacks.
Differentially private federated learning (DP-FL) has received increasing attention as a way to mitigate the privacy risks in federated learning. Although different schemes for DP-FL have been proposed, there is still a utility gap. Employing central Differential Privacy in FL (CDP-FL) can provide a good balance between privacy and model utility, but requires a trusted server. Using Local Differential Privacy for FL (LDP-FL) does not require a trusted server, but suffers from a poor privacy-utility trade-off. Recently proposed shuffle-DP-based FL has the potential to bridge the gap between CDP-FL and LDP-FL without a trusted server; however, a utility gap remains when the number of model parameters is large. In this work, we propose OLIVE, a system that combines the merits of CDP-FL and LDP-FL by leveraging a Trusted Execution Environment (TEE). Our main technical contributions are the analysis of and countermeasures against the vulnerability of the TEE in OLIVE. Firstly, we theoretically analyze the memory access pattern leakage of OLIVE and find that there is a risk for sparsified gradients, which are common in FL. Secondly, we design an inference attack to understand how the memory access pattern could be linked to the training data. Thirdly, we propose oblivious yet efficient algorithms to prevent memory access pattern leakage in OLIVE. Our experiments on real-world data demonstrate that OLIVE is efficient even when training a model with hundreds of thousands of parameters and effective against side-channel attacks on the TEE.
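Setting aside the TEE and oblivious-algorithm aspects that are OLIVE's actual contribution, the sketch below only illustrates the CDP-FL style aggregation the abstract contrasts against: clip each client update, average, and add Gaussian noise at the aggregator. The calibration shown (sigma = noise_multiplier * clip_norm / n) is one common choice, not necessarily the paper's; all names are illustrative.

```python
import numpy as np

def cdp_aggregate(updates, clip_norm, noise_multiplier, seed=0):
    """Central-DP style aggregation: clip each update, average, add Gaussian noise.

    One common calibration: each client's contribution to the average is bounded
    by clip_norm / n, so the noise standard deviation is scaled accordingly.
    """
    rng = np.random.default_rng(seed)
    n = len(updates)
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12)) for u in updates]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / n
    return avg + rng.normal(0.0, sigma, size=avg.shape)

updates = [np.random.default_rng(i).normal(0, 1, 10) for i in range(5)]
print(cdp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.1))
```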
Graph neural networks (GNNs) are a class of deep-learning-based methods for processing information in the graph domain. GNNs have recently become a widely used approach to graph analysis because they can learn representations for complex graph data. However, due to privacy concerns and regulatory restrictions, centralized GNNs can be difficult to apply in data-sensitive scenarios. Federated learning (FL) is an emerging technology developed for privacy-preserving settings in which several parties need to collaboratively train a shared global model. Although several research efforts have applied FL to train GNNs (Federated GNNs), there is no study of their robustness to backdoor attacks. This paper bridges this gap by conducting two types of backdoor attacks in Federated GNNs: the centralized backdoor attack (CBA) and the distributed backdoor attack (DBA). Our experiments show that the DBA attack success rate is higher than that of CBA in almost all evaluated cases. For CBA, the attack success rate of all local triggers is similar to that of the global trigger, even if the training set of the adversarial party is embedded with the global trigger. To further explore the properties of the two backdoor attacks in Federated GNNs, we evaluate the attack performance for different numbers of clients, trigger sizes, poisoning intensities, and trigger densities. Moreover, we explore the robustness of DBA and CBA against two state-of-the-art defenses. We find that both attacks are robust against the investigated defenses, necessitating that backdoor attacks in Federated GNNs be regarded as a new threat that requires custom defenses.
Federated learning (FL) and split learning (SL) are two emerging collaborative learning methods that may greatly facilitate ubiquitous intelligence in the Internet of Things (IoT). Federated learning enables machine learning (ML) models locally trained on private data to be aggregated into a global model. Split learning allows different portions of an ML model to be collaboratively trained on different workers in a learning framework. Federated learning and split learning, each having unique advantages and respective limitations, may complement each other toward ubiquitous intelligence in IoT. Therefore, the combination of federated learning and split learning has recently become an active research area attracting extensive interest. In this article, we review the latest developments in federated learning and split learning and present a survey of the state-of-the-art technologies for combining these two learning methods in edge-computing-based IoT environments. We also identify some open problems and discuss possible directions for future research in this area, in the hope of arousing the research community's further interest in this emerging field.
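To make the split-learning workflow concrete, here is a minimal single-process PyTorch simulation (model sizes and names are our own assumptions): the client runs the layers up to the cut, the server finishes the forward pass and computes the loss, and the gradient of the activations is sent back so the client can complete backpropagation.

```python
import torch
from torch import nn

# Client holds the first part of the model, server holds the rest.
client_net = nn.Sequential(nn.Linear(4, 8), nn.ReLU())
server_net = nn.Sequential(nn.Linear(8, 2))
opt_client = torch.optim.SGD(client_net.parameters(), lr=0.1)
opt_server = torch.optim.SGD(server_net.parameters(), lr=0.1)

x, y = torch.randn(16, 4), torch.randint(0, 2, (16,))

# Client side: forward up to the cut layer, send the activations.
smashed = client_net(x)
sent = smashed.detach().requires_grad_(True)     # what the server receives

# Server side: finish the forward pass, backprop, update, return activation grads.
loss = nn.functional.cross_entropy(server_net(sent), y)
opt_server.zero_grad()
loss.backward()
opt_server.step()

# Client side: continue backprop with the gradient sent back by the server.
opt_client.zero_grad()
smashed.backward(sent.grad)
opt_client.step()
print(float(loss))
```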
Privacy in federated learning (FL) is studied at two different granularities: item-level, which protects individual data points, and user-level, which protects each user (participant) in the federation. Nearly all of the private FL literature is dedicated to studying privacy attacks and defenses at these two granularities. Recently, subject-level privacy has emerged as an alternative privacy granularity to protect the privacy of individuals (data subjects) whose data is spread across multiple (organizational) users in cross-silo FL settings. An adversary might be interested in recovering private information about these individuals (a.k.a. data subjects) by attacking the trained model. A systematic study of such attacks requires complete control over the federation, which is impossible with real-world datasets. We design a simulator for generating various synthetic federation configurations, enabling us to study how properties of the data, the model design and training, and the federation itself affect subject privacy risk. We propose three attacks for subject membership inference and examine the interplay between all factors within a federation that affect the attacks' efficacy. We also investigate the effectiveness of differential privacy in mitigating this threat. Our takeaways generalize to real-world datasets like FEMNIST, lending credence to our findings.
While recent works have shown that federated learning (FL) may be vulnerable to poisoning attacks by compromised clients, their practical impact on production FL systems is not fully understood. In this work, we aim to develop a comprehensive systemization by enumerating all possible threat models, poisoning variations, and adversary capabilities. We focus our attention on untargeted poisoning attacks, as we argue that they are the most relevant to production FL deployments. We present a critical analysis of untargeted poisoning attacks under practical production FL environments by carefully characterizing realistic threat models and adversarial capabilities. Our findings are rather surprising: contrary to the established belief, we show that FL is highly robust in practice even when using simple, low-cost defenses. We go even further and propose novel, state-of-the-art data and model poisoning attacks, and show via extensive experiments across three benchmark datasets how (in)effective poisoning attacks are in the presence of simple defense mechanisms. We aim to correct previous misconceptions and provide concrete guidelines for conducting more accurate (and more realistic) research on this topic.
Due to the distributed nature of federated learning (FL), researchers have found that FL is vulnerable to backdoor attacks, which aim to inject a sub-task into FL without corrupting the performance of the main task. A single-shot backdoor attack achieves high accuracy on both the main task and the backdoor sub-task when injected at FL model convergence. However, an early-injected single-shot backdoor attack is ineffective because: (1) the maximum backdoor effectiveness is not reached at injection due to the dilution effect of normal local updates; (2) the backdoor effect decreases quickly as the backdoor is overwritten by subsequent normal local updates. In this paper, we strengthen the early-injected single-shot backdoor attack by exploiting FL model information leakage. We show that FL convergence can be expedited if the clients train on a dataset that mimics the distribution and gradients of the whole population. Based on this observation, we propose a two-phase backdoor attack, which includes a preliminary phase preceding the subsequent backdoor attack. In the preliminary phase, the attacker-controlled client first launches a whole-population distribution inference attack and then trains on a locally crafted dataset that is aligned with both the gradients and the inferred distribution. Benefiting from the preliminary phase, the later-injected backdoor achieves better effectiveness, as the backdoor effect is less likely to be diluted by normal model updates. Extensive experiments are conducted on the MNIST dataset under various data heterogeneity settings to evaluate the effectiveness of the proposed backdoor attack. Results show that the proposed backdoor outperforms existing backdoor attacks in both success rate and longevity, even in the presence of defense mechanisms.