Learning-based network intrusion detection systems (NIDS) are widely deployed to defend against various network attacks. Existing learning-based NIDS mainly use neural networks (NNs) as classifiers, which depend heavily on the quality and quantity of network traffic data. Such NN-based approaches are also hard to interpret, which hinders improvements in efficiency and scalability. In this paper, we design a new local-global computation paradigm, FedForest, a novel learning-based NIDS that combines the interpretable gradient boosting decision tree (GBDT) with a federated learning (FL) framework. Specifically, FedForest consists of multiple clients that extract features from local network traffic data for the server to train a model and detect intrusions. A privacy-enhancing technique is also proposed in FedForest to further protect the privacy of the FL system. Extensive experiments on four cyberattack datasets with different tasks show that FedForest is effective, efficient, interpretable, and extendable. FedForest ranked first in the collaborative learning and cybersecurity competition for Chinese college students.
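The abstract does not spell out FedForest's protocol, but the division of labor it describes — clients extract features from local traffic, the server trains a GBDT on the much smaller feature vectors — can be sketched as follows. The feature extractor, synthetic data, and model settings below are illustrative placeholders, not the paper's design.

```python
# A minimal sketch of the local-feature-extraction / central-GBDT idea.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def extract_local_features(raw_flows):
    """Hypothetical per-client feature extraction from raw traffic flows."""
    # e.g., per-flow statistics: mean, std, max of packet sizes
    return np.column_stack([raw_flows.mean(axis=1),
                            raw_flows.std(axis=1),
                            raw_flows.max(axis=1)])

# Three clients, each with synthetic "flows" (rows of packet sizes) and labels.
clients = []
for _ in range(3):
    flows = rng.normal(500, 150, size=(200, 20))
    labels = rng.integers(0, 2, size=200)  # 0 = benign, 1 = intrusion
    clients.append((extract_local_features(flows), labels))

# The server pools the compact feature vectors rather than raw traffic
# and trains an interpretable GBDT classifier on them.
X = np.vstack([f for f, _ in clients])
y = np.concatenate([l for _, l in clients])
model = GradientBoostingClassifier(n_estimators=50).fit(X, y)
print("train accuracy:", model.score(X, y))
```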
This work investigates the possibilities of federated learning for IoT malware detection and studies the security issues inherent to this new learning paradigm. In this context, a framework that uses federated learning to detect malware affecting IoT devices is presented. N-BaIoT, a dataset modeling the network traffic of several real IoT devices affected by malware, has been used to evaluate the proposed framework. Both supervised and unsupervised federated models (multi-layer perceptron and autoencoder), able to detect malware affecting seen and unseen IoT devices, have been trained and evaluated. Furthermore, their performance has been compared with two traditional approaches. The first one lets each participant train a model locally using only its own data, while the second consists of making the participants share their data with a central entity in charge of training a global model. This comparison shows that the use of more diverse and larger amounts of data, as done in the federated and centralized methods, has a considerable positive impact on model performance. Besides, the federated models, while preserving participant privacy, achieve results similar to the centralized ones. As an additional contribution, and to measure the robustness of the federated approach, adversarial setups with several malicious participants poisoning the federated model have been considered. The baseline model-aggregation averaging step used in most federated learning algorithms is vulnerable to different attacks, even with a single adversary. Therefore, the performance of other model aggregation functions acting as countermeasures has been evaluated under the same attack scenarios. These functions provide a significant improvement against malicious participants, but more effort is still needed to make federated approaches robust.
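The vulnerability of plain averaging and the benefit of robust alternatives can be illustrated with a toy example. The sketch below, with synthetic update values, contrasts FedAvg-style coordinate-wise averaging with the coordinate-wise median, one common robust aggregation function; the paper's exact set of countermeasures may differ.

```python
# Mean vs. coordinate-wise median aggregation under a single poisoner.
import numpy as np

rng = np.random.default_rng(42)

# 9 benign clients send similar updates; 1 poisoner sends a scaled-up one.
benign = rng.normal(0.0, 0.1, size=(9, 5))
poisoned = np.full((1, 5), 10.0)
updates = np.vstack([benign, poisoned])

mean_agg = updates.mean(axis=0)          # vulnerable: dragged toward poisoner
median_agg = np.median(updates, axis=0)  # robust: ignores the outlier

print("mean  :", np.round(mean_agg, 3))
print("median:", np.round(median_agg, 3))
```

The same pattern extends to other robust aggregators such as trimmed means, which drop the most extreme values per coordinate before averaging.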
Federated learning has been a hot research topic in enabling the collaborative training of machine learning models among different organizations under privacy restrictions. As researchers try to support more machine learning models with different privacy-preserving approaches, there is a need to develop systems and infrastructures to ease the development of various federated learning algorithms. Similar to deep learning systems such as PyTorch and TensorFlow that boost the development of deep learning, federated learning systems (FLSs) are equivalent and face challenges in various aspects such as effectiveness, efficiency, and privacy. In this survey, we conduct a comprehensive review of federated learning systems. To provide a smooth flow and guide future research, we introduce the definition of federated learning systems and analyze the system components. Moreover, we provide a thorough categorization of federated learning systems according to six different aspects, including data distribution, machine learning model, privacy mechanism, communication architecture, scale of federation, and motivation of federation. The categorization can help the design of federated learning systems, as shown in our case studies. By systematically summarizing the existing federated learning systems, we present the design factors, case studies, and future research opportunities.
Federated learning (FL) and split learning (SL) are two emerging collaborative learning methods that may greatly facilitate ubiquitous intelligence in the Internet of Things (IoT). Federated learning enables machine learning (ML) models trained locally on private data to be aggregated into a global model. Split learning allows different portions of an ML model to be collaboratively trained on different workers in a learning framework. Federated learning and split learning, each having unique advantages and respective limitations, may complement each other toward ubiquitous intelligence in IoT. Therefore, the combination of federated learning and split learning has recently become an active research area attracting extensive interest. In this article, we review the latest developments in federated learning and split learning and present a survey of the state-of-the-art technologies for combining these two learning methods in an edge-computing-based IoT environment. We also identify some open problems and discuss possible directions for future research in this area, with the hope of further arousing the research community's interest in this emerging field.
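As a concrete illustration of the split-learning half of this combination, the following toy sketch (an assumed PyTorch setup, with network transport omitted) trains a model whose first layers run on the client and remaining layers on the server, exchanging only the cut-layer activations and their gradients:

```python
# Split learning with one client and one server; the tensors passed between
# the two parts stand in for the exchanged messages.
import torch
import torch.nn as nn

torch.manual_seed(0)

client_part = nn.Sequential(nn.Linear(10, 16), nn.ReLU())  # runs on device
server_part = nn.Sequential(nn.Linear(16, 2))              # runs on server
opt_c = torch.optim.SGD(client_part.parameters(), lr=0.1)
opt_s = torch.optim.SGD(server_part.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 10)           # private client data (never leaves client)
y = torch.randint(0, 2, (32,))

for _ in range(5):
    opt_c.zero_grad(); opt_s.zero_grad()
    smashed = client_part(x)                         # client forward to cut layer
    smashed_srv = smashed.detach().requires_grad_()  # activations "sent" to server
    loss = loss_fn(server_part(smashed_srv), y)      # server forward + loss
    loss.backward()                                  # server backward to cut layer
    smashed.backward(smashed_srv.grad)               # gradient "returned" to client
    opt_c.step(); opt_s.step()

print("final loss:", float(loss))
```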
Modern defenses against cyberattacks increasingly rely on proactive approaches, e.g., predicting the adversary's next actions based on past events. Building accurate prediction models requires knowledge from many organizations; alas, this entails disclosing sensitive information, such as network structures, security postures, and policies, which is often undesirable or outright impossible. In this paper, we explore the feasibility of using federated learning (FL) to predict future security events. To this end, we introduce Cerberus, a system enabling the collaborative training of recurrent neural network (RNN) models for participating organizations. The intuition is that FL could potentially offer a middle ground between the non-private approach, where training data is pooled at a central server, and the lower-performing alternative of only training local models. We instantiate Cerberus on a dataset obtained from a major security company's intrusion prevention product and evaluate it with respect to utility, robustness, and privacy, as well as how participants contribute to and benefit from the system. Overall, our work sheds light on both the positive aspects and the challenges of using FL for this task and paves the way for deploying federated approaches to predictive security.
In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
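The update flow described above — local training on private data, transmission of model updates only, and server-side aggregation — is the FedAvg pattern. The sketch below illustrates it on a synthetic least-squares task; the data, model, and hyperparameters are placeholders.

```python
# FedAvg sketch: devices train locally and send only models; the server
# averages them, weighted by local dataset size.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

def local_step(w, X, y, lr=0.1, epochs=5):
    """A few epochs of local gradient descent; raw (X, y) never leaves."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Each device holds a private shard of a different size.
shards = []
for n in (50, 100, 200):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(0, 0.1, size=n)
    shards.append((X, y))

w_global = np.zeros(3)
for _ in range(10):
    locals_ = [local_step(w_global, X, y) for X, y in shards]
    sizes = np.array([len(y) for _, y in shards])
    # Server aggregates: size-weighted average of the returned models.
    w_global = np.average(locals_, axis=0, weights=sizes)

print("recovered weights:", np.round(w_global, 3))
```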
Machine learning techniques are commonly leveraged to effectively disaggregate smart meter readings from the household level into appliance-level consumption — non-intrusive load monitoring (NILM) — which can help analyze users' electricity consumption behavior and enable practical smart energy and smart grid applications. Recent studies have proposed many novel NILM frameworks based on federated deep learning (FL). However, there is a lack of comprehensive research exploring utility optimization schemes and privacy-preserving schemes in different FL-based NILM application scenarios. In this paper, we make the first attempt at FL-based NILM focusing on both utility optimization and privacy preservation by developing a distributed and privacy-preserving NILM (DP2-NILM) framework and conducting comparative experiments on practical NILM scenarios based on real-world smart meter datasets. Specifically, two alternative federated learning strategies are examined in the utility optimization schemes, i.e., FedAvg and FedProx. Moreover, DP2-NILM provides different levels of privacy guarantees, i.e., local differential privacy federated learning and global differential privacy federated learning. Extensive comparative experiments are conducted on three real-world datasets to evaluate the proposed framework.
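For illustration, the difference between the two examined strategies is confined to the local update rule: FedProx adds a proximal term (µ/2)‖w − w_global‖² that pulls local weights toward the global model. A toy sketch on synthetic data (not the NILM setup itself):

```python
# FedAvg vs. FedProx local update: mu = 0 recovers FedAvg's plain local
# gradient step; mu > 0 adds the FedProx proximal pull toward w_global.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(0, 0.1, size=100)

def grad_mse(w, X, y):
    return 2 * X.T @ (X @ w - y) / len(y)

def local_update(w_global, X, y, mu=0.0, lr=0.05, steps=20):
    w = w_global.copy()
    for _ in range(steps):
        w -= lr * (grad_mse(w, X, y) + mu * (w - w_global))
    return w

w_g = np.zeros(4)
print("FedAvg-style :", np.round(local_update(w_g, X, y, mu=0.0), 3))
print("FedProx-style:", np.round(local_update(w_g, X, y, mu=0.5), 3))
```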
This paper presents and characterizes the Open Application Repository for Federated learning (OARF), a benchmark suite for federated machine learning systems. Previously available federated learning benchmarks have mainly focused on synthetic datasets and use a limited number of applications. OARF mimics more realistic application scenarios with publicly available datasets as different data silos in image, text, and structured data. Our characterization shows that the benchmark suite is diverse in data size, distribution, feature distribution, and learning-task complexity. We have developed reference implementations and evaluated important aspects of federated learning, including model accuracy, communication cost, throughput, and convergence time. Extensive evaluations with these reference implementations reveal future research opportunities in important aspects of federated learning systems; among the interesting findings is that federated learning can effectively improve end-to-end throughput.
Federated learning has recently been applied to recommendation systems to protect user privacy. In federated learning settings, recommendation systems can train recommendation models by collecting only the intermediate parameters instead of the real user data, which greatly enhances user privacy. Besides, federated recommendation systems enable collaboration with other data platforms to improve recommendation model performance while meeting regulatory and privacy constraints. However, federated recommendation systems face many new challenges such as privacy, security, heterogeneity, and communication costs. While significant research has been conducted in these areas, gaps in the surveying literature still exist. In this survey, we (1) summarize some common privacy mechanisms used in federated recommendation systems and discuss the advantages and limitations of each mechanism; (2) review some robust aggregation strategies and several novel attacks against security; (3) summarize some approaches to address heterogeneity and communication cost problems; (4) introduce some open-source platforms that can be used to build federated recommendation systems; (5) present some prospective research directions for the future. This survey can guide researchers and practitioners in understanding the research progress in these areas.
Federated learning (FL) has been proposed as a privacy-preserving approach in distributed machine learning. A federated learning architecture consists of a central server and a number of clients that have access to private, potentially sensitive data. Clients are able to keep their data in their local machines and only share their locally trained model's parameters with a central server that manages the collaborative learning process. FL has delivered promising results in real-life scenarios, such as healthcare, energy, and finance. However, when the number of participating clients is large, the overhead of managing the clients slows down the learning. Thus, client selection has been introduced as a strategy to limit the number of communicating parties at every step of the process. Since the early naïve random selection of clients, several client selection methods have been proposed in the literature. Unfortunately, given that this is an emergent field, there is a lack of a taxonomy of client selection methods, making it hard to compare approaches. In this paper, we propose a taxonomy of client selection in Federated Learning that enables us to shed light on current progress in the field and identify potential areas of future research in this promising area of machine learning.
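A minimal sketch of the selection step being taxonomized: each round, the server picks a subset of clients to participate. Below, the naïve uniform-random baseline is shown next to one illustrative criterion-based policy (preferring clients reporting higher loss); both the statistic and the policy are assumptions for illustration, not a specific method from the paper.

```python
# Random vs. criterion-based client selection per round.
import numpy as np

rng = np.random.default_rng(7)
n_clients, per_round = 100, 10
reported_loss = rng.exponential(1.0, size=n_clients)  # stand-in statistic

# 1) Naive random selection (the early baseline).
random_pick = rng.choice(n_clients, size=per_round, replace=False)

# 2) Loss-ranked selection: take the clients whose models look least fit.
ranked_pick = np.argsort(reported_loss)[-per_round:]

print("random:", sorted(random_pick.tolist()))
print("ranked:", sorted(ranked_pick.tolist()))
```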
Differentially private federated learning (DP-FL) has received increasing attention to mitigate the privacy risk in federated learning. Although different schemes for DP-FL have been proposed, there is still a utility gap. Employing central Differential Privacy in FL (CDP-FL) can provide a good balance between privacy and model utility, but it requires a trusted server. Using Local Differential Privacy for FL (LDP-FL) does not require a trusted server, but suffers from a poor privacy-utility trade-off. Recently proposed shuffle-DP-based FL has the potential to bridge the gap between CDP-FL and LDP-FL without a trusted server; however, there is still a utility gap when the number of model parameters is large. In this work, we propose OLIVE, a system that combines the merits of CDP-FL and LDP-FL by leveraging a Trusted Execution Environment (TEE). Our main technical contributions are the analysis of and countermeasures against the vulnerability of the TEE in OLIVE. Firstly, we theoretically analyze the memory access pattern leakage of OLIVE and find that there is a risk for sparsified gradients, which are common in FL. Secondly, we design an inference attack to understand how the memory access pattern could be linked to the training data. Thirdly, we propose oblivious yet efficient algorithms to prevent memory access pattern leakage in OLIVE. Our experiments on real-world data demonstrate that OLIVE is efficient even when training a model with hundreds of thousands of parameters and is effective against side-channel attacks on the TEE.
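The memory-access-pattern risk for sparsified gradients can be illustrated in miniature: aggregating (index, value) pairs naively touches only the secret indices, while an oblivious linear scan touches every slot regardless. The sketch below conveys the idea only; OLIVE's actual in-enclave algorithms (and genuine constant-time behavior, which Python cannot guarantee) are more involved.

```python
# Leaky vs. oblivious accumulation of a sparse gradient update.
import numpy as np

dim = 8
sparse_update = [(1, 0.5), (6, -0.2)]  # (index, value) pairs from a client

# Leaky version: writes only to indices 1 and 6 -- an observer of memory
# accesses learns exactly which coordinates the client touched.
acc = np.zeros(dim)
for idx, val in sparse_update:
    acc[idx] += val

# Oblivious version: touch every slot for every pair, selecting with
# arithmetic rather than branching, so the access sequence is independent
# of the secret indices.
acc_obl = np.zeros(dim)
for idx, val in sparse_update:
    for j in range(dim):
        acc_obl[j] += val * (j == idx)  # same reads/writes for all j

print(acc, acc_obl)
```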
Federated learning (FL) is a distributed machine learning approach in which multiple clients collaboratively train a joint model without exchanging their data. Despite FL's unprecedented success in data privacy protection, its vulnerability to free-rider attacks has attracted increasing attention. Existing defenses may be ineffective against highly camouflaged free riders or a high percentage of them. To address these challenges, we reconsider the defense from a novel perspective, i.e., the evolving frequency of model weights. Empirically, we gain a novel insight that during FL training, the evolving frequencies of the model weights of free riders and those of benign clients are significantly different. Inspired by this insight, we propose a novel defense method based on the frequency of model-weight evolution, called WEF-Defense. Specifically, we first collect the weight-evolving frequency (defined as the WEF-Matrix) during local training. Each client uploads its local model's WEF-Matrix to the server together with its model weights in every iteration. The server then separates free riders from benign clients according to the differences in their WEF-Matrices. Finally, the server uses a personalized approach to provide different global models to the corresponding clients. Comprehensive experiments conducted on five datasets and five models demonstrate that WEF-Defense achieves better defense effectiveness than state-of-the-art baselines.
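As an illustration of the frequency idea, one plausible stand-in for the WEF-Matrix is to count, per weight, how often the round-to-round update flips direction during training; the paper's precise definition may differ. A toy sketch:

```python
# Per-weight direction-flip frequency as a stand-in "WEF" statistic.
import numpy as np

rng = np.random.default_rng(3)
rounds, dim = 20, 6

# A benign client's weights drift steadily as it learns; a free rider that
# returns barely-trained (randomly perturbed) weights evolves differently.
benign_w = np.cumsum(rng.normal(0.1, 0.05, size=(rounds, dim)), axis=0)
rider_w = rng.normal(0, 0.01, size=(rounds, dim))

def wef(weights):
    deltas = np.diff(weights, axis=0)            # per-round weight updates
    flips = np.diff(np.sign(deltas), axis=0) != 0
    return flips.mean(axis=0)                    # per-weight flip frequency

print("benign WEF:", np.round(wef(benign_w), 2))  # low: consistent drift
print("rider  WEF:", np.round(wef(rider_w), 2))   # high: directionless noise
```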
The emergence of federated learning has facilitated large-scale data exchange among machine learning models while preserving privacy. Although it has been around for some time, federated learning is evolving rapidly to make its wider use more practical. One of the most significant advances in this field is the incorporation of transfer learning into federated learning, which overcomes fundamental limitations of plain federated learning, particularly regarding security. This chapter conducts a comprehensive survey of the intersection of federated and transfer learning from a security perspective. The main objective of this study is to uncover potential vulnerabilities and defense mechanisms that might compromise the privacy and performance of systems that use federated and transfer learning.
Federated Learning (FL) has been widely accepted as the solution for privacy-preserving machine learning without collecting raw data. While new technologies proposed in the past few years do evolve the FL area, unfortunately, the evaluation results presented in these works fall short in integrity and are hardly comparable because of the inconsistent evaluation metrics and experimental settings. In this paper, we propose a holistic evaluation framework for FL called FedEval, and present a benchmarking study on seven state-of-the-art FL algorithms. Specifically, we first introduce the core evaluation taxonomy model, called FedEval-Core, which covers four essential evaluation aspects for FL: Privacy, Robustness, Effectiveness, and Efficiency, with various well-defined metrics and experimental settings. Based on the FedEval-Core, we further develop an FL evaluation platform with standardized evaluation settings and easy-to-use interfaces. We then provide an in-depth benchmarking study between the seven well-known FL algorithms, including FedSGD, FedAvg, FedProx, FedOpt, FedSTC, SecAgg, and HEAgg. We comprehensively analyze the advantages and disadvantages of these algorithms and further identify the suitable practical scenarios for different algorithms, which is rarely done by prior work. Lastly, we excavate a set of take-away insights and future research directions, which are very helpful for researchers in the FL area.
Federated learning (FL) collaboratively aggregates a shared global model based on multiple local clients, while keeping the training data decentralized in order to preserve data privacy. However, standard FL methods ignore the noisy client issue, which may harm the overall performance of the aggregated model. In this paper, we first analyze the noisy client issue and then model noisy clients with different noise distributions (e.g., Bernoulli and truncated Gaussian distributions). To learn with noisy clients, we propose a simple yet effective FL framework, named Federated Noisy Client Learning (Fed-NCL), which is a plug-and-play algorithm containing two main components: a dynamic data quality measurement (DQM) that quantifies the data quality of each participating client, and a noise-robust aggregation (NRA) that adaptively aggregates the local model of each client by jointly considering the amount of local training data and the data quality of each client. Our Fed-NCL can be easily applied to any standard FL workflow to handle the noisy client issue. Experimental results on various datasets demonstrate that our algorithm boosts the performance of different state-of-the-art systems with noisy clients.
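A minimal sketch of aggregation in the spirit of NRA: each local model is weighted by both its data volume and a data-quality score, so noisy clients contribute less. Here the quality scores are given directly; in Fed-NCL they would come from the DQM component.

```python
# Quality-weighted aggregation vs. plain size-weighted averaging.
import numpy as np

# Per-client local model weights (rows), dataset sizes, and quality in [0, 1].
local_models = np.array([[1.0, 2.0],
                         [1.1, 1.9],
                         [5.0, -3.0]])   # third client trained on noisy labels
n_samples = np.array([100, 120, 110])
quality = np.array([0.9, 0.95, 0.1])     # assumed scores; DQM would estimate

w = n_samples * quality
w = w / w.sum()
global_model = (w[:, None] * local_models).sum(axis=0)

plain_avg = (n_samples / n_samples.sum()) @ local_models
print("size-only avg   :", np.round(plain_avg, 3))
print("quality-weighted:", np.round(global_model, 3))
```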
Deep learning (DL) methods have been widely applied to anomaly-based network intrusion detection systems (NIDS) to detect malicious traffic. To expand the usage scenarios of DL-based methods, the federated learning (FL) framework allows multiple users to train a global model while respecting individual data privacy. However, it has not yet been systematically evaluated how robust FL-based NIDSs are against existing privacy attacks under existing defenses. To address this issue, we propose two privacy evaluation metrics designed for FL-based NIDSs, including (1) a privacy score that evaluates the similarity between the original and recovered traffic features using reconstruction attacks, and (2) the evasion rate against NIDSs using a Generative Adversarial Network-based adversarial attack with the reconstructed benign traffic. We conduct experiments to show that existing defenses provide little protection, and the corresponding adversarial traffic can even evade the SOTA NIDS Kitsune. To defend against such attacks and build a more robust FL-based NIDS, we further propose FedDef, a novel optimization-based input perturbation defense strategy with a theoretical guarantee. It achieves both high utility by minimizing the gradient distance and strong privacy protection by maximizing the input distance. We experimentally evaluate four existing defenses on four datasets and show that our defense outperforms all the baselines in terms of privacy protection, with up to 7 times higher privacy score, while maintaining model accuracy loss within 3% under the optimal parameter combination.
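The defense objective described — keep the gradient induced by the perturbed input close to the true gradient (utility) while pushing the input far from the original (privacy) — can be sketched as a small optimization problem. The model, loss, and trade-off coefficient below are toy assumptions, not FedDef's actual formulation.

```python
# Optimize a perturbed input to match gradients while diverging from x.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 10)
y = torch.randint(0, 2, (8,))
params = list(model.parameters())

def flat_grad(inp, create_graph=False):
    grads = torch.autograd.grad(loss_fn(model(inp), y), params,
                                create_graph=create_graph)
    return torch.cat([g.flatten() for g in grads])

g_true = flat_grad(x).detach()

x_pert = (x + 0.1 * torch.randn_like(x)).detach().requires_grad_()
opt = torch.optim.Adam([x_pert], lr=0.05)
lam = 0.1  # trade-off between the utility and privacy terms
for _ in range(200):
    opt.zero_grad()
    g_pert = flat_grad(x_pert, create_graph=True)
    # Minimize gradient distance, maximize input distance.
    objective = (g_pert - g_true).pow(2).sum() - lam * (x_pert - x).pow(2).mean()
    objective.backward()
    opt.step()

print("gradient distance:", float((flat_grad(x_pert) - g_true).norm()))
print("input distance   :", float((x_pert - x).norm()))
```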
Achieving wider coverage and better latency reduction in 5G requires its combination with multi-access edge computing (MEC) technology. Decentralized deep learning (DDL), such as federated learning and swarm learning, is a promising solution for privacy-preserving data processing on millions of smart edge devices: it leverages the distributed computing of multi-layer neural networks within the network of local clients without disclosing raw local training data. Notably, in industries such as finance and healthcare, where sensitive data on transactions and personal medical records are cautiously maintained, DDL can facilitate the cooperation of these institutions to improve the performance of trained models while protecting the data privacy of participating clients. In this survey paper, we present the technical fundamentals of DDL that benefit many walks of society through decentralized learning. Furthermore, we offer a comprehensive overview of the current state of the art in this field by outlining the challenges of DDL as well as the most recent solutions from the perspectives of communication efficiency and reliability.
The unprecedented surge of data volume in wireless networks empowered with artificial intelligence (AI) opens up new horizons for providing ubiquitous data-driven intelligent services. Traditional cloud-centric machine learning (ML)-based services are implemented by centrally collecting datasets and training models. However, this conventional training technique encompasses two challenges: (i) high communication and energy costs due to increased data communication, and (ii) threats to data privacy by allowing untrusted parties to exploit this information. Recently, in light of these limitations, a new emerging technique, namely federated learning (FL), has been proposed to bring ML to the edge of wireless networks. By training a global model in a distributed manner, orchestrated by the FL server, the benefits of data silos can be extracted without compromising data privacy. FL exploits both decentralized datasets and the computing resources of participating clients to develop a generalized ML model without compromising data privacy. In this paper, we present a comprehensive survey of the fundamentals and enabling technologies of FL. Moreover, an extensive study is presented detailing various applications of FL in wireless networks and highlighting their challenges and limitations. The efficacy of FL is further explored in emerging prospective beyond fifth generation (B5G) and sixth generation (6G) communication systems. The purpose of this survey is to provide an overview of the state of the art of FL applications in key wireless technologies that will serve as a foundation for establishing a firm understanding of the topic. Lastly, we offer a road forward for future research directions.
Federated learning (FL) allows multiple clients to collaboratively train a neural network (NN) model on their private data without revealing the data. Recently, several targeted poisoning attacks against FL have been introduced. These attacks inject a backdoor into the resulting model, allowing adversary-controlled inputs to be misclassified. Existing countermeasures against backdoor attacks are inefficient and often merely aim to exclude deviating models from the aggregation. However, this approach also removes benign models of clients with deviating data distributions, causing the aggregated model to perform poorly for such clients. To address this problem, we propose DeepSight, a novel model-filtering approach for mitigating backdoor attacks. It is based on three novel techniques that allow characterizing the distribution of data used to train model updates and seek to measure fine-grained differences in the internal structure and outputs of NNs. Using these techniques, DeepSight can identify suspicious model updates. We also develop a scheme that can accurately cluster model updates. Combining the results of both components, DeepSight is able to identify and eliminate model clusters containing poisoned models with high attack impact. We also show that the backdoor contributions of possibly undetected poisoned models can be effectively mitigated with existing weight-clipping-based defenses. We evaluate the performance and effectiveness of DeepSight and show that it can mitigate state-of-the-art backdoor attacks with a negligible impact on the model's performance on benign data.
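The weight-clipping complement mentioned at the end can be sketched simply: clip every update's norm to a common bound (here the median norm) so that no single poisoned model can dominate the average. The update values are synthetic.

```python
# Norm-clipping of client updates before averaging.
import numpy as np

rng = np.random.default_rng(5)
updates = rng.normal(0, 0.1, size=(10, 4))
updates[0] *= 50  # a backdoored update boosted to dominate the average

norms = np.linalg.norm(updates, axis=1)
bound = np.median(norms)
scale = np.minimum(1.0, bound / norms)  # shrink only oversized updates
clipped = updates * scale[:, None]

print("before:", np.round(updates.mean(axis=0), 3))
print("after :", np.round(clipped.mean(axis=0), 3))
```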
In terms of artificial intelligence, the traditional centralized training of machine learning models by a server suffers from several security and privacy deficiencies. To address this limitation, federated learning (FL) has been proposed and is known for breaking down "data silos" and protecting the privacy of users. However, FL has not yet gained popularity in industry, mainly due to its security and privacy issues and the high cost of communication. For the purpose of advancing research in this field, building a robust FL system, and realizing the wide application of FL, this paper systematically sorts out the possible attacks on, and corresponding defenses of, current FL systems. Firstly, this paper briefly introduces the basic workflow of FL and related knowledge of attacks and defenses. It reviews a great deal of research on privacy theft and malicious attacks that has been conducted in recent years. Most importantly, in view of the current three classification criteria, namely the three stages of machine learning, the three different roles in federated learning, and the CIA (Confidentiality, Integrity, and Availability) guidelines on privacy protection, we divide attack approaches into two categories according to the training stage and the prediction stage of machine learning. Furthermore, we also identify the CIA property violated by each attack method and the potential attack role. Various defense mechanisms are then analyzed separately at the levels of privacy and security. Finally, we summarize the possible challenges in the application of FL from the aspect of attacks and defenses and discuss the future development direction of FL systems. In this way, the designed FL system has the ability to resist different attacks and is more secure and stable.