Deep learning (DL) methods have been widely applied to anomaly-based network intrusion detection systems (NIDSs) to detect malicious traffic. To broaden the usage scenarios of DL-based methods, the federated learning (FL) framework allows multiple users to train a global model while respecting individual data privacy. However, how robust FL-based NIDSs are against existing privacy attacks under existing defenses has not yet been systematically evaluated. To address this issue, we propose two privacy evaluation metrics designed for FL-based NIDSs: (1) a privacy score that evaluates the similarity between the original and recovered traffic features using reconstruction attacks, and (2) an evasion rate against NIDSs using a Generative Adversarial Network-based adversarial attack driven by the reconstructed benign traffic. Our experiments show that existing defenses provide little protection and that the corresponding adversarial traffic can even evade the SOTA NIDS Kitsune. To defend against such attacks and build a more robust FL-based NIDS, we further propose FedDef, a novel optimization-based input perturbation defense strategy with a theoretical guarantee. It achieves both high utility, by minimizing the gradient distance, and strong privacy protection, by maximizing the input distance. We experimentally evaluate four existing defenses on four datasets and show that our defense outperforms all baselines in terms of privacy protection, with up to 7 times higher privacy score, while keeping the model accuracy loss within 3% under the optimal parameter combination.
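A rough sketch of the kind of objective described above, under assumed notation (the helper name feddef_perturb and the trade-off weight lam are ours, not FedDef's actual implementation):

```python
import torch
import torch.nn.functional as F

def feddef_perturb(model, x, y, lam=1.0, steps=50, lr=0.1):
    """Optimization-based input perturbation (sketch): keep the gradient of the
    perturbed batch close to the true gradient (utility) while pushing the
    perturbed inputs far from the originals (privacy)."""
    loss = F.cross_entropy(model(x), y)
    g_true = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

    x_pert = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_pert], lr=lr)
    for _ in range(steps):
        g_pert = torch.autograd.grad(F.cross_entropy(model(x_pert), y),
                                     model.parameters(), create_graph=True)
        grad_dist = sum(((gp - gt) ** 2).sum() for gp, gt in zip(g_pert, g_true))
        input_dist = ((x_pert - x) ** 2).mean()
        objective = grad_dist - lam * input_dist  # small gradient gap, large input gap
        opt.zero_grad()
        objective.backward()
        opt.step()
    return x_pert.detach()

# The client would then compute and upload gradients on the perturbed batch
# instead of on the raw traffic features.
```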
In federated learning (FL), a set of participants share updates computed on their local data with an aggregation server that combines them into a global model. However, reconciling accuracy with privacy and security is a challenge for FL. On the one hand, genuine updates sent by honest participants may reveal their private local information, whereas poisoned updates sent by malicious participants may compromise the availability and/or integrity of the model. On the other hand, enhancing privacy by distorting updates degrades accuracy, whereas doing so by aggregating updates damages security because the server can no longer filter out individual poisoned updates. To resolve this accuracy-privacy-security conflict, we propose fragmented federated learning (FFL), in which participants randomly exchange and mix fragments of their updates before sending them to the server. To achieve privacy, we design a lightweight protocol that allows participants to privately exchange and mix encrypted fragments of their updates, so that the server can neither obtain individual updates nor link them to their originators. To achieve security, we design a reputation-based defense tailored to FFL that builds trust in participants and their mixed updates based on the quality of the fragments they exchange and of the mixed updates they send. Because the exchanged fragments keep their original coordinates and attackers can be neutralized, the server can correctly reconstruct the global model from the received mixed updates without accuracy loss. Experiments on four real datasets show that FFL can prevent a semi-honest server from mounting privacy attacks, effectively counters poisoning attacks, and maintains the accuracy of the global model.
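A toy numeric sketch of the fragment-mixing idea in plain arrays (our own names; the real FFL protocol exchanges encrypted fragments between participants rather than mixing them in one place):

```python
import numpy as np

rng = np.random.default_rng(0)

def fragment_and_mix(updates, n_fragments=4):
    """Each participant's update is split into coordinate-disjoint fragments and
    random fragments are swapped among participants, so the server only sees
    mixed updates while the global sum stays unchanged."""
    n_users, dim = updates.shape
    slots = rng.integers(0, n_fragments, size=dim)   # assign coordinates to fragments
    mixed = updates.copy()
    for slot in range(n_fragments):
        idx = np.where(slots == slot)[0]
        perm = rng.permutation(n_users)              # who receives this fragment
        mixed[:, idx] = updates[perm][:, idx]
    return mixed

updates = rng.normal(size=(5, 10))
mixed = fragment_and_mix(updates)
assert np.allclose(updates.sum(axis=0), mixed.sum(axis=0))  # aggregate preserved
```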
Federated learning (FL) has attracted growing interest for enabling privacy-preserving machine learning on data stored across multiple users while avoiding moving the data off-device. However, although the data never leaves users' devices, privacy still cannot be guaranteed, since significant computations over users' training data are shared in the form of trained local models. These local models have recently been shown to pose substantial privacy threats through different privacy attacks, such as model inversion attacks. As a remedy, secure aggregation (SA) has been developed as a framework that preserves privacy in FL by guaranteeing that the server can only learn the globally aggregated model update but not individual model updates. While SA ensures that no additional information about individual model updates is leaked beyond the aggregated update, there are no formal guarantees on how much privacy FL with SA can actually offer, since information about individual datasets can still leak through the aggregated model computed at the server. In this work, we perform the first analysis of the formal privacy guarantees for FL with SA. Specifically, we use mutual information (MI) as the quantification metric and bound how much information about each user's dataset can leak through the aggregated model update. When the FedSGD aggregation algorithm is used, our theoretical bounds show that the amount of privacy leakage decreases linearly with the number of users participating in FL with SA. To validate our theoretical bounds, we use an MI neural estimator to empirically evaluate the privacy leakage under different FL setups on the MNIST and CIFAR10 datasets. Our experiments verify the theoretical bounds for FedSGD, showing a reduction in privacy leakage as the number of users and the local batch size grow, and an increase in privacy leakage with the number of training rounds.
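Stated loosely in notation adopted here (not the paper's exact theorem), the quantity being bounded and the reported scaling look roughly as follows:

```latex
% D_k: user k's local dataset; x_i^{(t)}: user i's FedSGD update at round t.
% With secure aggregation the server only observes the sum of the updates:
\bar{x}^{(t)} \;=\; \sum_{i=1}^{N} x_i^{(t)},
\qquad
I\!\bigl(D_k \,;\, \bar{x}^{(t)}\bigr) \;\le\; O\!\left(\frac{1}{N}\right)
```

That is, the more users are securely aggregated, the less information the aggregate carries about any single user's dataset.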
Federated Learning (FL) is pervasive in privacy-focused IoT environments since it avoids privacy leakage by training models with gradients instead of raw data. Recent works show that the uploaded gradients can be employed to reconstruct data, i.e., gradient leakage attacks, and several defenses are designed to alleviate the risk by tweaking the gradients. However, these defenses exhibit weak resilience against threatening attacks, as their effectiveness builds on the unrealistic assumption that deep neural networks can be simplified to linear models. In this paper, without such unrealistic assumptions, we present a novel defense called Refiner that, instead of perturbing gradients, refines the ground-truth data to craft robust data that yields sufficient utility while carrying the least amount of private information; the gradients of the robust data are then uploaded. To craft robust data, Refiner pushes the gradients of critical parameters computed on the robust data toward the ground-truth gradients, while leaving the gradients of trivial parameters free to safeguard privacy. Moreover, to exploit the gradients of trivial parameters, Refiner utilizes a well-designed evaluation network to steer the robust data far away from the ground-truth data, thereby alleviating the privacy leakage risk. Extensive experiments across multiple benchmark datasets demonstrate the superior effectiveness of Refiner at defending against state-of-the-art threats.
In federated learning (FL), data does not leave personal devices while they jointly train a machine learning model. Instead, the devices share gradients with a central party (e.g., a company). Because data never 'leaves' personal devices, FL is presented as privacy-preserving. Yet, it was recently shown that this protection is but a thin facade, since even a passive attacker observing gradients can reconstruct data of individual users. In this paper, we argue that prior work still largely underestimates the vulnerability of FL, because prior efforts exclusively consider passive, honest-but-curious attackers. Instead, we introduce an active and dishonest attacker acting as the central party, who is able to modify the shared model's weights before users compute the model gradients. We call the modified weights 'trap weights'. Our active attacker is able to recover user data perfectly at near-zero cost: the attack requires no complex optimization objective. Instead, it exploits the inherent data leakage of model gradients and amplifies this effect by maliciously altering the weights of the shared model. These specificities enable our attack to scale to models trained with large mini-batches of data. Where attackers from prior work need hours to recover a single data point, our method needs milliseconds to capture the full mini-batch of data from both fully-connected and convolutional deep neural networks. Finally, we consider mitigations. We observe that current implementations of differential privacy (DP) in FL are flawed, as they explicitly trust the central party with the crucial task of adding DP noise and therefore provide no protection against a malicious central party. We also consider other defenses and explain why they are similarly inadequate. FL would need to be redesigned to provide users with any meaningful data privacy.
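The 'inherent data leakage of model gradients' that the attack amplifies can be seen in a few lines: for a fully-connected layer y = Wx + b, the weight gradient is a scaled copy of the input. A self-contained illustration with a stand-in loss (not the paper's attack code):

```python
import torch

torch.manual_seed(0)

x = torch.randn(16)                         # single private input
W = torch.randn(4, 16, requires_grad=True)  # fully-connected layer y = W x + b
b = torch.randn(4, requires_grad=True)
loss = ((W @ x + b) ** 2).sum()             # stand-in for the real training loss
loss.backward()

# dL/dW[i] = (dL/dy[i]) * x and dL/db[i] = dL/dy[i], so one division recovers x.
i = torch.argmax(b.grad.abs())
x_rec = W.grad[i] / b.grad[i]
print(torch.allclose(x_rec, x, atol=1e-5))  # True: the input is read off the gradient
```

Averaging over a batch mixes several inputs into each row; as we read the abstract, the trap weights are crafted so that different inputs dominate different neurons, which is how the readout extends to large mini-batches.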
Federated learning has been proposed as a privacy-preserving machine learning framework that enables multiple clients to collaborate without sharing raw data. However, client privacy protection is not guaranteed by design in this framework. Prior work has shown that the gradient-sharing strategies in federated learning can be vulnerable to data reconstruction attacks. In practice, however, clients may not transmit raw gradients, given the high communication cost or due to enhanced privacy requirements. Empirical studies have shown that gradient obfuscation, including intentional obfuscation via gradient noise injection and unintentional obfuscation via gradient compression, can provide more privacy protection against reconstruction attacks. In this work, we present a new data reconstruction attack framework targeting the image classification task in federated learning. We show that commonly adopted gradient post-processing procedures, such as gradient quantization, gradient sparsification, and gradient perturbation, may give a false sense of security in federated learning. Contrary to prior studies, we argue that privacy enhancement should not be treated as a byproduct of gradient compression. In addition, we design a new method under the proposed framework to reconstruct images at the semantic level. We quantify the semantic privacy leakage and compare it against image similarity scores. Our comparison challenges the image data leakage evaluation schemes in the literature. The results emphasize the importance of revisiting and redesigning the privacy protection mechanisms for client data in existing federated learning algorithms.
In terms of artificial intelligence, the traditional centralized training of machine learning models on a server suffers from several security and privacy deficiencies. To address this limitation, federated learning (FL) has been proposed and is known for breaking down "data silos" and protecting the privacy of users. However, FL has not yet gained wide adoption in industry, mainly because of its security and privacy issues and the high cost of communication. To advance research in this field, build robust FL systems, and realize the wide application of FL, this paper systematically reviews the possible attacks on current FL systems and the corresponding defenses. Firstly, the paper briefly introduces the basic workflow of FL and background knowledge on attacks and defenses, and reviews the large body of recent research on privacy theft and malicious attacks. Most importantly, in view of three existing classification criteria, namely the three stages of machine learning, the three different roles in federated learning, and the CIA (Confidentiality, Integrity, and Availability) guidelines for privacy protection, we divide attack approaches into two categories according to the training stage and the prediction stage of machine learning. Furthermore, we identify the CIA property violated by each attack method and the potential attacker role. Various defense mechanisms are then analyzed separately from the privacy and the security perspectives. Finally, we summarize the challenges of applying FL from the standpoint of attacks and defenses and discuss future development directions for FL systems, so that the designed FL systems can resist different attacks and become more secure and stable.
Federated learning enables multiple users to build a joint model by sharing their model updates (gradients), while their raw data remains local on their devices. In contrast to the common belief that this provides privacy benefits, we here add to the recent results on privacy risks when sharing gradients. Specifically, we investigate Label Leakage from Gradients (LLG), a novel attack that extracts the labels of the users' training data from their shared gradients. The attack exploits the direction and magnitude of gradients to determine the presence or absence of any label. LLG is simple yet effective, capable of leaking the potentially sensitive information represented by labels, and scales to arbitrary batch sizes and multiple classes. We mathematically and empirically demonstrate the validity of the attack under different settings. Moreover, empirical results show that LLG successfully extracts labels with high accuracy at the early stages of model training. We also discuss different defense mechanisms against such leakage. Our findings suggest that gradient compression is a practical technique to mitigate the attack.
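A minimal demonstration of the direction signal such an attack can exploit, using a toy setup of our own (zero-initialized last layer, non-negative penultimate features; not the LLG code itself):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

num_classes, dim, batch = 10, 128, 8
W = torch.zeros(num_classes, dim, requires_grad=True)   # untrained last layer
feats = torch.rand(batch, dim)                          # non-negative features
labels = torch.randint(0, num_classes, (batch,))

loss = F.cross_entropy(feats @ W.t(), labels)
loss.backward()

# With cross-entropy, the gradient rows of classes present in the batch point the
# opposite way from those of absent classes, so the sign of each row's sum flags them.
present = (W.grad.sum(dim=1) < 0).nonzero().flatten()
print("true labels:", sorted(labels.unique().tolist()))
print("flagged    :", sorted(present.tolist()))
```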
Federated learning is a collaborative method that aims to preserve data privacy while creating AI models. Current approaches to federated learning rely heavily on secure aggregation protocols to preserve data privacy. However, to some degree, such protocols assume that the entity orchestrating the federated learning process (i.e., the server) is not fully malicious or dishonest. We investigate vulnerabilities of secure aggregation that could arise if the server is fully malicious and attempts to obtain access to private, potentially sensitive data. Furthermore, we provide a method to defend against such a malicious server and demonstrate its effectiveness against known attacks that reconstruct data in a federated learning setting.
Federated learning allows a set of users to train a deep neural network over their private training datasets. During the protocol, the datasets never leave the users' devices. This is achieved by requiring each user to send "only" model updates to a central server that, in turn, aggregates them to update the parameters of the deep neural network. However, it has been shown that each model update carries sensitive information about the user's dataset (e.g., via gradient inversion attacks). State-of-the-art implementations of federated learning protect these model updates by leveraging secure aggregation: a cryptographic protocol that securely computes the aggregation of the users' model updates. Secure aggregation is pivotal to protecting users' privacy, since it hinders the server from learning the value and the source of the individual model updates provided by the users, preventing inference and data attribution attacks. In this work, we show that a malicious server can easily elude secure aggregation as if the latter were not in place. We devise two different attacks capable of inferring information about individual private training datasets, independently of the number of users participating in the secure aggregation. This makes them concrete threats in large-scale, real-world federated learning applications. The attacks are generic and do not target any specific secure aggregation protocol. They are equally effective even if the secure aggregation protocol is replaced by its ideal functionality providing perfect security. Our work demonstrates that secure aggregation combined with federated learning, in its current implementation, offers only a "false sense of security".
Federated learning (FL) provides an efficient paradigm to jointly train a global model over distributed users' data. Since the local training data come from different users who may not be trustworthy, several studies have shown that FL is vulnerable to poisoning attacks. Meanwhile, to protect the privacy of local users, FL is often trained in a differentially private manner (DPFL). Thus, in this paper, we ask: can we leverage the innate privacy property of DPFL to provide certified robustness against poisoning attacks, and can we further improve the privacy of FL to improve such certification? We first investigate both user-level and instance-level privacy of FL and propose novel mechanisms to achieve improved instance-level privacy. We then provide two robustness certification criteria, certified prediction and certified attack cost, for DPFL at both levels. Theoretically, we prove the certified robustness of DPFL under a bounded number of adversarial users or instances. Empirically, we conduct extensive experiments to verify our theories under a range of attacks on different datasets. We show that DPFL with a tighter privacy guarantee always provides stronger robustness certification in terms of the certified attack cost, but the optimal certified prediction is achieved under a proper balance between privacy protection and utility loss.
Secure multiparty computation (MPC) has been proposed to allow multiple mutually distrustful data owners to jointly train machine learning (ML) models on their combined data. However, by design, MPC protocols faithfully compute the training functionality, which the adversarial ML community has shown to leak private information and to be tamperable through poisoning attacks. In this work, we argue that model ensembles, implemented in our framework called SafeNet, are a highly MPC-amenable way to avoid many adversarial ML attacks. The natural partitioning of data among owners in MPC training allows this approach to be highly scalable at training time, to provide provable protection against poisoning attacks, and to provide provable defense against a number of privacy attacks. We demonstrate SafeNet's efficiency, accuracy, and resilience to poisoning on several machine learning datasets and models trained in end-to-end and transfer learning scenarios. For instance, SafeNet significantly reduces the success of backdoor attacks while achieving $39\times$ faster training and $36\times$ less communication than the four-party MPC framework of Dalskov et al. Our experiments show that ensembling retains these benefits even in many non-IID settings. The simplicity, cheap setup, and robustness properties of ensembling make it a strong first choice for training ML models privately with MPC.
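The robustness argument rests on simple majority voting over per-owner models. A toy sketch of why a poisoned minority cannot flip a prediction (our illustration, not the MPC implementation):

```python
import numpy as np

def ensemble_predict(models, x):
    """Each data owner contributes one model; inference takes a majority vote."""
    votes = [m(x) for m in models]                  # one class label per model
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]

honest = [lambda x: 1] * 5      # five owners whose models agree on class 1
poisoned = [lambda x: 0] * 2    # two backdoored models voting class 0
print(ensemble_predict(honest + poisoned, x=None))  # -> 1, the poisoned votes are outvoted
```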
In a federated learning scenario where multiple parties jointly learn a model from their respective data, there are two conflicting goals in the choice of appropriate algorithms. On the one hand, private and sensitive training data must be kept secure as much as possible in the presence of semi-honest partners, while on the other hand a certain amount of information has to be exchanged among the parties for the sake of learning utility. Such a challenge calls for privacy-preserving federated learning solutions that maximize the utility of the learned models while maintaining a provable privacy guarantee on the participating parties' private data. This article illustrates a general framework that (a) formulates the trade-off between privacy loss and utility loss from a unified information-theoretic point of view, and (b) delineates quantitative bounds on the privacy-utility trade-off when different protection mechanisms are adopted, including randomization, the use of sparsity, and homomorphic encryption. It is shown that, in general, there is no free lunch for the privacy-utility trade-off, and one has to trade the preservation of privacy for a certain degree of degraded utility. The quantitative analysis illustrated in this article may serve as guidance for the design of practical federated learning algorithms.
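Stated loosely in notation introduced here for illustration (not the paper's exact theorem), the 'no free lunch' claim says that privacy leakage and utility loss cannot both vanish:

```latex
% \epsilon_p(M): privacy leakage and \epsilon_u(M): utility loss of a protection
% mechanism M (e.g., randomization, sparsification, homomorphic encryption).
\epsilon_p(M) \;+\; C\,\epsilon_u(M) \;\ge\; \gamma \;>\; 0
\qquad \text{for every mechanism } M
```

so any mechanism that drives the leakage toward zero must pay with a non-trivial loss of utility.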
Modern defenses against cyberattacks increasingly rely on proactive approaches, e.g., predicting the adversary's next actions based on past events. Building accurate prediction models requires knowledge from many organizations; alas, this entails disclosing sensitive information, such as network structures, security postures, and policies, which is often undesirable or outright impossible. In this paper, we explore the feasibility of using federated learning (FL) to predict future security events. To this end, we introduce Cerberus, a system enabling the collaborative training of recurrent neural network (RNN) models for participating organizations. The intuition is that FL could potentially offer a middle ground between the non-private approach, in which training data is pooled at a central server, and the low-utility alternative of only training local models. We instantiate Cerberus on a dataset obtained from a major security company's intrusion prevention product and evaluate it with respect to utility, robustness, and privacy, as well as how participants contribute to, and benefit from, the system. Overall, our work sheds light on both the positive aspects and the challenges of using FL for this task and paves the way for deploying federated approaches to predictive security.
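A generic FedAvg round for a sequence model of the kind described (a sketch of the collaborative-training mechanism only; the model shape, event vocabulary, and every name here are placeholders, not Cerberus's code):

```python
import copy
import torch
import torch.nn as nn

class EventPredictor(nn.Module):
    """Tiny RNN that predicts the next security event from a sequence of events."""
    def __init__(self, n_events=100, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_events, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_events)

def local_step(global_model, seqs, labels, lr=0.01):
    """One organization fine-tunes a copy of the shared model on its own events."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    out, _ = model.rnn(seqs)
    loss = nn.functional.cross_entropy(model.head(out[:, -1]), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return model.state_dict()

def fedavg(states):
    return {k: sum(s[k] for s in states) / len(states) for k in states[0]}

global_model = EventPredictor()
orgs = [(torch.randn(4, 10, 100), torch.randint(0, 100, (4,))) for _ in range(3)]
global_model.load_state_dict(fedavg([local_step(global_model, x, y) for x, y in orgs]))
```

Only the averaged weights ever reach the server; each organization's event logs stay local.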
Federated Learning (FL) has been widely accepted as the solution for privacy-preserving machine learning without collecting raw data. While new technologies proposed in the past few years do evolve the FL area, unfortunately, the evaluation results presented in these works are often incomplete and hardly comparable because of inconsistent evaluation metrics and experimental settings. In this paper, we propose a holistic evaluation framework for FL called FedEval, and present a benchmarking study on seven state-of-the-art FL algorithms. Specifically, we first introduce the core evaluation taxonomy model, called FedEval-Core, which covers four essential evaluation aspects for FL: Privacy, Robustness, Effectiveness, and Efficiency, with various well-defined metrics and experimental settings. Based on the FedEval-Core, we further develop an FL evaluation platform with standardized evaluation settings and easy-to-use interfaces. We then provide an in-depth benchmarking study between the seven well-known FL algorithms, including FedSGD, FedAvg, FedProx, FedOpt, FedSTC, SecAgg, and HEAgg. We comprehensively analyze the advantages and disadvantages of these algorithms and further identify the suitable practical scenarios for different algorithms, which is rarely done in prior work. Lastly, we distill a set of take-away insights and future research directions, which should be helpful for researchers in the FL area.
Differentially private federated learning (DP-FL) has received increasing attention to mitigate the privacy risk in federated learning. Although different schemes for DP-FL have been proposed, there is still a utility gap. Employing central Differential Privacy in FL (CDP-FL) can provide a good balance between privacy and model utility, but requires a trusted server. Using Local Differential Privacy for FL (LDP-FL) does not require a trusted server, but suffers from a poor privacy-utility trade-off. Recently proposed shuffle-DP-based FL has the potential to bridge the gap between CDP-FL and LDP-FL without a trusted server; however, there is still a utility gap when the number of model parameters is large. In this work, we propose OLIVE, a system that combines the merits of CDP-FL and LDP-FL by leveraging a Trusted Execution Environment (TEE). Our main technical contributions are the analysis of, and countermeasures against, the vulnerability of the TEE in OLIVE. First, we theoretically analyze the memory access pattern leakage of OLIVE and find that sparsified gradients, which are common in FL, are at risk. Second, we design an inference attack to understand how the memory access pattern could be linked to the training data. Third, we propose oblivious yet efficient algorithms to prevent the memory access pattern leakage in OLIVE. Our experiments on real-world data demonstrate that OLIVE is efficient even when training a model with hundreds of thousands of parameters and effective against side-channel attacks on the TEE.
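To make the side channel concrete, here is a toy contrast (our illustration, not OLIVE's oblivious algorithm) between index-driven aggregation of a sparsified update and a linear scan whose memory accesses do not depend on the data:

```python
import numpy as np

dim = 8

def aggregate_leaky(sparse_update, acc):
    # Touches only the transmitted coordinates: an observer of memory accesses
    # inside the enclave learns exactly which indices the client sent.
    for idx, val in sparse_update:
        acc[idx] += val
    return acc

def aggregate_oblivious(sparse_update, acc):
    # Linear scan: every position is read and written regardless of the update,
    # so the access pattern is independent of the client's data.
    for idx, val in sparse_update:
        for j in range(dim):
            acc[j] += val * (j == idx)
    return acc

update = [(2, 0.5), (6, -0.1)]   # top-k sparsified gradient of one client
print(aggregate_leaky(update, np.zeros(dim)))
print(aggregate_oblivious(update, np.zeros(dim)))   # same result, hidden access pattern
```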
In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
Byzantine-robust federated learning (FL) aims to counter malicious clients and to train an accurate global model while maintaining an extremely low attack success rate. Most existing systems, however, are only robust under an honest/semi-honest majority. FLTrust (NDSS '21) extends the setting to a malicious majority of clients, but an auxiliary dataset must be provided to the server before training so that malicious inputs can be filtered out. FLAME/FLGUARD (USENIX '22) provides a solution that ensures both robustness and update confidentiality in the semi-honest majority setting. So far, it has not been possible to balance the trade-off among the adversarial setting, robustness, and update confidentiality. To address this problem, we propose a novel Byzantine-robust and privacy-preserving FL system that can handle a malicious minority or majority on both the server and client sides. Specifically, based on the DBSCAN algorithm, we design a new method of clustering via pairwise adjusted cosine similarity to improve the accuracy of the clustering results. To thwart attacks from a malicious majority, we develop an algorithm called model segmentation, in which local updates within the same cluster are aggregated together and the aggregates are correctly sent back to the corresponding clients. We also leverage multiple cryptographic tools to perform the clustering task without sacrificing training correctness or update confidentiality. We present detailed security proofs and empirical evaluations together with a brief convergence analysis. Experimental results show that the test accuracy of our system is practically close to that of the FL baseline (a 0.8% gap on average), while the attack success rate is around 0%-5%. We further optimize our design so that the communication overhead and runtime can be reduced by 67%-89.17% and 66.05%-68.75%, respectively.
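A plain scikit-learn sketch of the clustering step described above, using ordinary cosine distance and made-up toy updates (the paper uses a pairwise adjusted cosine similarity and runs this step under cryptographic protection):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
benign = rng.normal(1.0, 0.2, size=(8, 20))      # honest updates share a direction
malicious = rng.normal(-5.0, 0.2, size=(3, 20))  # poisoned updates point elsewhere
updates = np.vstack([benign, malicious])

norm = updates / np.linalg.norm(updates, axis=1, keepdims=True)
cos_dist = np.clip(1.0 - norm @ norm.T, 0.0, None)   # pairwise cosine distance matrix
labels = DBSCAN(eps=0.3, min_samples=3, metric="precomputed").fit_predict(cos_dist)
print(labels)   # benign updates form one cluster, the poisoned ones another
```

Aggregation can then be performed per cluster (the model segmentation idea), so a poisoned cluster never contaminates the honest clients' aggregate.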
Federated learning (FL) and split learning (SL) are two emerging collaborative learning methods that may greatly facilitate ubiquitous intelligence in the Internet of Things (IoT). Federated learning enables machine learning (ML) models locally trained on private data to be aggregated into a global model. Split learning allows different portions of an ML model to be collaboratively trained on different workers within a learning framework. Federated learning and split learning, each with unique advantages and respective limitations, may complement each other toward ubiquitous intelligence in IoT. Therefore, the combination of federated learning and split learning has recently become an active research area attracting extensive interest. In this article, we review the latest developments in federated learning and split learning and present a survey of the state-of-the-art technologies for combining these two learning methods in edge-computing-based IoT environments. We also identify some open problems and discuss possible directions for future research in this area, in the hope of arousing further interest in this emerging field within the research community.
The advent of federated learning has facilitated large-scale data exchange among machine learning models while preserving privacy. Despite its short history, federated learning is evolving rapidly to make its wider use more practical. One of the most significant advancements in this domain is the incorporation of transfer learning into federated learning, which overcomes fundamental constraints of plain federated learning, particularly with respect to security. This chapter performs a comprehensive survey of the intersection of federated and transfer learning from a security point of view. The main goal of this study is to uncover potential vulnerabilities and defense mechanisms that might compromise the privacy and performance of systems that use federated and transfer learning.