The explosion in the deployment of computing devices experienced in recent years, driven by technologies such as the Internet of Things (IoT) and 5G, has led to a global scenario with growing cybersecurity risks and threats. Among them, device spoofing and impersonation cyberattacks stand out due to their impact and, usually, the low complexity required to launch them. To solve this issue, several solutions have emerged to identify device models and types based on the combination of behavioral fingerprinting and machine/deep learning (ML/DL) techniques. However, these solutions are not suitable for scenarios where data privacy and protection are a must, as they require data centralization for processing. In this context, newer approaches such as Federated Learning (FL) have not been fully explored yet, especially when malicious clients are present in the scenario setup. The present work analyzes and compares the device model identification performance of a centralized DL model with that of an FL-based one, using execution-time-based events. For experimental purposes, a dataset containing execution-time features of 55 Raspberry Pis belonging to four different models has been collected and published. Using this dataset, the proposed solution achieved 0.9999 accuracy in both settings, centralized and federated, showing no performance decrease while preserving data privacy. Later, the impact of a label-flipping attack during federated model training is evaluated, using several aggregation mechanisms as countermeasures. Zeno and coordinate-wise median aggregation showed the best performance, although their performance greatly degraded when the rate of fully malicious clients (all training samples poisoned) grew over 50%.
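As an illustration of the coordinate-wise median countermeasure mentioned above, the following minimal sketch shows how per-coordinate medians bound the influence of label-flipping clients on the aggregate; the array shapes and toy values are assumptions, not the paper's setup:

```python
import numpy as np

def coordinatewise_median(client_updates):
    """Aggregate flattened client updates by taking the median of each
    coordinate, limiting the pull of poisoned clients."""
    stacked = np.stack(client_updates)   # shape: (n_clients, n_params)
    return np.median(stacked, axis=0)

# Toy example: 4 honest clients and 2 fully malicious ones.
honest = [np.array([0.9, 1.1, 1.0]) for _ in range(4)]
poisoned = [np.array([10.0, -10.0, 10.0]) for _ in range(2)]
print(coordinatewise_median(honest + poisoned))  # stays near [0.9, 1.1, 1.0]
```

Consistent with the abstract's observation, once poisoned clients form a per-coordinate majority the median itself is captured and the defense degrades.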
This work investigates the possibilities enabled by federated learning for IoT malware detection and studies the security issues inherent to this new learning paradigm. In this context, a framework that uses federated learning to detect malware affecting IoT devices is presented. N-BaIoT, a dataset modeling the network traffic of several real IoT devices while affected by malware, has been used to evaluate the proposed framework. Supervised and unsupervised federated models (multi-layer perceptron and autoencoder), able to detect malware affecting seen and unseen IoT devices, have been trained and evaluated. Furthermore, their performance has been compared to two traditional approaches. The first one lets each participant locally train a model using only its own data, while the second one consists of making the participants share their data with a central entity in charge of training a global model. This comparison has shown that the use of more diverse and larger data, as done in the federated and centralized methods, has a considerable positive impact on model performance. Besides, the federated models, while preserving the participants' privacy, show results similar to the centralized ones. As an additional contribution, and to measure the robustness of the federated approach, an adversarial setup with several malicious participants poisoning the federated model has been considered. The baseline model aggregation averaging step used in most federated learning algorithms proves highly vulnerable to different attacks, even with a single adversary. Therefore, the performance of other model aggregation functions acting as countermeasures is evaluated under the same attack scenarios. These functions provide a significant improvement against malicious participants, but more efforts are still needed to make federated approaches robust.
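The unsupervised detector described above can be approximated by thresholding autoencoder reconstruction error on benign traffic. Below is a hedged sketch: the 115-dimensional feature vector (roughly N-BaIoT's size), layer widths, and the mean + k·std threshold rule are assumptions, not the framework's exact choices:

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    """Minimal autoencoder over per-device traffic feature vectors."""
    def __init__(self, dim=115):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 8))
        self.dec = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, dim))
    def forward(self, x):
        return self.dec(self.enc(x))

def fit_threshold(model, benign, k=3.0):
    """Set the anomaly threshold from reconstruction errors on benign
    traffic; mean + k * std is a common heuristic (assumed here)."""
    with torch.no_grad():
        err = ((model(benign) - benign) ** 2).mean(dim=1)
    return err.mean() + k * err.std()

def is_malware(model, x, threshold):
    with torch.no_grad():
        err = ((model(x) - x) ** 2).mean(dim=1)
    return err > threshold  # True -> flagged as malware traffic
```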
Federated learning (FL) is a widely adopted distributed learning paradigm that, in practice, intends to preserve users' data privacy while leveraging the entire dataset of all participants for training. In FL, multiple models are trained independently on the users' devices and aggregated centrally to update a global model in an iterative process. While this approach is excellent at preserving privacy, FL still suffers from quality issues such as attacks and Byzantine faults. Some recent attempts have addressed such quality challenges with robust aggregation techniques for FL. However, the effectiveness of state-of-the-art (SOTA) robust techniques is still unclear and lacks a comprehensive study. Therefore, to better understand the current quality status and challenges of these SOTA FL techniques in the presence of attacks and faults, we perform a large-scale empirical study investigating the quality of SOTA FL from multiple angles of attacks, simulated faults (via mutation operators), and aggregation (defense) methods. In particular, we conduct our study on two generic image datasets and one real-world federated medical image dataset. We also systematically investigate the effect of the distribution of attacks/faults over users and of the independent and identically distributed (IID) factor, per dataset, on the robustness results. After a large-scale analysis of 496 configurations, we find that most mutators on each individual user have a negligible effect on the final model. Moreover, the choice of the most robust FL aggregator depends on the attacks and datasets. Finally, we illustrate that it is possible to achieve a generic solution that performs almost as well as, or better than, any single aggregator on all attacks and configurations with a simple ensemble model.
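One possible reading of the "simple ensemble model" conclusion is to combine the outputs of several aggregators so that no single aggregator's failure mode dominates. The sketch below is our illustration under that assumption, not the study's actual ensemble:

```python
import numpy as np

def mean_agg(u):
    return np.mean(u, axis=0)

def median_agg(u):
    return np.median(u, axis=0)

def trimmed_mean_agg(u, trim=0.2):
    """Drop the lowest/highest `trim` fraction per coordinate, then average."""
    s = np.sort(u, axis=0)
    k = int(len(u) * trim)
    return s[k:len(u) - k].mean(axis=0)

def ensemble_agg(updates, aggregators=(mean_agg, median_agg, trimmed_mean_agg)):
    """Average the outputs of several robust aggregators (illustrative)."""
    u = np.stack(updates)
    return np.mean([agg(u) for agg in aggregators], axis=0)
```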
Federated Learning has emerged to cope with rising concerns about privacy breaches in the use of Machine or Deep Learning models. This new paradigm allows deep learning models to be leveraged in a distributed manner, enhancing privacy preservation. However, the server's blindness to local datasets leaves it vulnerable to model poisoning attacks and data heterogeneity, which tamper with the global model's performance. Numerous works have proposed robust aggregation algorithms and defensive mechanisms, but each approach tends to address only an individual attack or issue. FedCC, the proposed method, provides robust aggregation by comparing the Centered Kernel Alignment (CKA) of penultimate-layer representations. The experimental results demonstrate that FedCC mitigates untargeted and targeted model poisoning (backdoor) attacks while also being effective in non-independently and identically distributed (non-IID) data environments. Against untargeted attacks, FedCC recovers global model accuracy the most. Against targeted backdoor attacks, FedCC nullifies attack confidence while preserving test accuracy. In most experiments, FedCC outperforms the baseline methods.
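The linear form of Centered Kernel Alignment that FedCC's comparison relies on is standard and can be computed as follows; how FedCC wires this score into its aggregation is not shown here, and the function name is ours:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices of shape
    (n_samples, n_features), e.g., penultimate-layer representations
    produced by two client models on the same probe inputs."""
    X = X - X.mean(axis=0)   # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))
```

A score near 1 indicates highly similar representations; a poisoned model's representations would presumably score lower against the benign majority.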
Federated learning is a data decentralization privacy-preserving technique used to perform machine or deep learning in a secure way. In this paper, theoretical aspects of federated learning are presented alongside use cases in which the number of clients varies. Specifically, a use case of medical image analysis is presented, using chest X-ray images obtained from an open data repository. Besides the privacy-related advantages, improvements in predictions (in terms of accuracy and area under the curve) and reductions in execution time (with respect to a centralized approach) will be studied. Different clients will be simulated from the training data, selected in an unbalanced manner, i.e., they do not all have the same amount of data. The results of considering between three and ten clients are compared with the centralized case. Intermittent clients will be analyzed using two approaches, since, as in a real scenario, some clients may leave the training and new ones may join it. The evolution of the results is shown as the number of clients among which the original data is divided increases, in terms of accuracy, area under the curve, and execution time. Finally, improvements and future lines of work in the field are proposed.
Continuous behavioural authentication methods add a unique layer of security by allowing individuals to verify their unique identity when accessing a device. Maintaining session authenticity is now feasible by monitoring users' behaviour while interacting with a mobile or Internet of Things (IoT) device, making credential theft and session hijacking ineffective. Such a technique is made possible by integrating the power of artificial intelligence and Machine Learning (ML). Most of the literature focuses on training machine learning models for each user by transmitting their data to an external server, exposing private user data to threats. In this paper, we propose a novel Federated Learning (FL) approach that protects the anonymity of user data and maintains its security. We present a warmup approach that provides a significant increase in accuracy. In addition, we leverage a transfer learning technique based on feature extraction to boost the models' performance. Our extensive experiments based on four datasets, MNIST, FEMNIST, CIFAR-10 and UMDAA-02-FD, show a significant increase in user authentication accuracy while maintaining user privacy and data security.
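Feature-extraction transfer learning, as leveraged here to boost model performance, typically means freezing a pretrained backbone and training only a small head on each client. The sketch below assumes a ResNet-18 backbone from torchvision; the paper's actual architecture may differ:

```python
import torch.nn as nn
from torchvision import models

def build_feature_extractor(num_classes):
    """Freeze a pretrained backbone so only the new classification head
    is trained on each client's local data (feature extraction)."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in backbone.parameters():
        p.requires_grad = False               # keep pretrained features fixed
    # The replacement head's parameters are trainable by default.
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone
```

Freezing the backbone also shrinks the set of parameters exchanged in FL rounds, which is one common motivation for this design.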
The wider coverage and better latency reduction of 5G require combining it with multi-access edge computing (MEC) technology. Decentralized deep learning (DDL), such as federated learning and swarm learning, is a promising solution for privacy-preserving data processing on millions of smart edge devices: it leverages the distributed computing of multi-layer neural networks within the network of local clients without disclosing the original local training data. Notably, in industries such as finance and healthcare, where sensitive data of transactions and personal medical records must be cautiously maintained, DDL can facilitate collaboration among these institutes to improve the performance of trained models while protecting the data privacy of the participating clients. In this survey paper, we present the technical fundamentals of DDL that benefit many walks of society through decentralized learning. Furthermore, we offer a comprehensive overview of the current state of the art in the field by outlining the challenges of DDL and the most relevant solutions, from the novel perspectives of communication efficiency and reliability.
In the last years, the number of IoT devices deployed has undergone an undoubted explosion, reaching the scale of billions. However, some new cybersecurity issues have appeared together with this development. Some of these issues are the deployment of unauthorized devices, malicious code modification, malware deployment, or vulnerability exploitation. This fact has motivated the requirement for new device identification mechanisms based on behavior monitoring. Besides, these solutions have recently leveraged Machine and Deep Learning techniques due to the advances in this field and the increase in processing capabilities. In contrast, attackers do not remain idle and have developed adversarial attacks focused on context modification and ML/DL evaluation evasion applied to IoT device identification solutions. This work explores the performance of hardware behavior-based individual device identification, how it is affected by possible context- and ML/DL-focused attacks, and how its resilience can be improved using defense techniques. In this sense, it proposes an LSTM-CNN architecture based on hardware performance behavior for individual device identification. Then, previous techniques have been compared with the proposed architecture using a hardware performance dataset collected from 45 Raspberry Pi devices running identical software. The LSTM-CNN improves on previous solutions, achieving a +0.96 average F1-Score and a minimum TPR of 0.8 for all devices. Afterward, context- and ML/DL-focused adversarial attacks were applied against the previous model to test its robustness. A temperature-based context attack was not able to disrupt the identification. However, some ML/DL state-of-the-art evasion attacks were successful. Finally, adversarial training and model distillation defense techniques are selected to improve the model's resilience to evasion attacks without degrading its performance.
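A hedged sketch of an LSTM-CNN classifier over hardware performance time series is given below; the layer sizes and ordering are assumptions for illustration, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class LSTMCNN(nn.Module):
    """Illustrative LSTM-CNN for device fingerprinting from hardware
    performance time series (e.g., per-interval counter readings)."""
    def __init__(self, n_features, n_devices, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.conv = nn.Sequential(
            nn.Conv1d(hidden, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.fc = nn.Linear(32, n_devices)

    def forward(self, x):                  # x: (batch, time, features)
        h, _ = self.lstm(x)                # (batch, time, hidden)
        h = self.conv(h.transpose(1, 2))   # Conv1d expects (batch, ch, time)
        return self.fc(h.squeeze(-1))      # per-device logits
```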
Motivated by ever-growing concerns about personal data privacy and the rapidly growing data volumes at local clients, federated learning (FL) has emerged as a new machine learning setting. An FL system is composed of a central parameter server and multiple local clients. It keeps data at the local clients and learns a centralized model by sharing the locally learned model parameters. No local data need to be shared, and privacy can be well protected. Nevertheless, since it is the model rather than the raw data that is shared, the system can be exposed to model poisoning attacks launched by malicious clients. Furthermore, it is challenging to identify malicious clients since no local client data is available on the server. Besides, a client's local data can still be estimated from the uploaded model, leading to privacy disclosure. In this work, we first propose a model-update-based federated averaging algorithm to defend against Byzantine attacks such as additive-noise attacks and sign-flipping attacks. An individual client model initialization method is then proposed to provide further privacy protection by hiding the individual local machine learning models. Combining these two schemes, both privacy and security can be effectively enhanced. The proposed schemes are shown experimentally to converge under non-IID data distributions when there are no attacks. Under Byzantine attacks, the proposed schemes perform much better than the classical model-based FedAvg algorithm.
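The core idea of model-update-based averaging, aggregating deltas from the current global model rather than raw parameters, can be sketched as follows; the paper's individual model initialization scheme is not reproduced here:

```python
import numpy as np

def update_based_fedavg(global_params, client_deltas, weights=None):
    """Aggregate model *updates* (each client's delta from the current
    global model) and apply the weighted average to the global model."""
    deltas = np.stack(client_deltas)
    if weights is None:
        weights = np.full(len(deltas), 1.0 / len(deltas))
    return global_params + np.average(deltas, axis=0, weights=weights)

# Each client computes and uploads: delta_i = local_params_i - global_params
```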
Federated learning (FL) and split learning (SL) are two emerging collaborative learning methods that may greatly facilitate ubiquitous intelligence in the Internet of Things (IoT). Federated learning enables machine learning (ML) models locally trained on private data to be aggregated into a global model. Split learning allows different portions of an ML model to be collaboratively trained on different workers in a learning framework. Federated learning and split learning, each with unique advantages and respective limitations, may complement each other toward ubiquitous intelligence in IoT. Therefore, the combination of federated learning and split learning has recently become an active research area attracting extensive interest. In this article, we review the latest developments in federated learning and split learning and present a survey of the state-of-the-art technologies for combining these two learning methods in an edge-computing-based IoT environment. We also identify some open problems and discuss possible directions for future research in this area, with the hope of further arousing the research community's interest in this emerging field.
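The defining mechanic of split learning described above, where only cut-layer activations cross the network boundary, can be sketched in a few lines; the cut point and layer sizes are illustrative:

```python
import torch
import torch.nn as nn

# Split-learning sketch: the client runs the lower layers and sends only
# the cut-layer activations ("smashed data"); the server runs the rest.
client_part = nn.Sequential(nn.Linear(784, 128), nn.ReLU())   # on the device
server_part = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

x = torch.randn(32, 784)        # private local batch, never leaves the client
smashed = client_part(x)        # only this tensor is sent to the server
logits = server_part(smashed)   # server completes the forward pass
# The backward pass mirrors this: the server returns the gradient of the
# loss w.r.t. `smashed`, and the client backpropagates it locally.
```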
The emergence of federated learning has facilitated large-scale data exchange among machine learning models while preserving privacy. Despite its short history, federated learning is evolving rapidly to make its wider use more practical. One of the most significant advances in this domain is the incorporation of transfer learning into federated learning, which overcomes fundamental constraints of plain federated learning, especially in terms of security. This chapter performs a comprehensive survey of the intersection of federated and transfer learning from a security point of view. The main goal of this study is to uncover potential vulnerabilities and defense mechanisms that might compromise the privacy and performance of systems that use federated and transfer learning.
Federated Learning (FL) allows multiple clients to collaboratively train a neural network (NN) model on their private data without revealing that data. Recently, several targeted poisoning attacks against FL have been introduced. These attacks inject a backdoor into the resulting model, allowing adversary-controlled inputs to be misclassified. Existing countermeasures against backdoor attacks are inefficient and often merely aim to exclude deviating models from the aggregation. However, this approach also removes benign models of clients with deviating data distributions, causing the aggregated model to perform poorly for such clients. To address this problem, we propose DeepSight, a novel model filtering approach for mitigating backdoor attacks. It is based on three novel techniques that allow characterizing the distribution of data used to train model updates and seek to measure fine-grained differences in the internal structure and outputs of NNs. Using these techniques, DeepSight can identify suspicious model updates. We also develop a scheme that can accurately cluster model updates. Combining the results of both components, DeepSight is able to identify and eliminate model clusters containing poisoned models with high attack impact. We also show that the backdoor contributions of possibly undetected poisoned models can be effectively mitigated by existing weight-clipping-based defenses. We evaluate DeepSight's performance and effectiveness and show that it can mitigate state-of-the-art backdoor attacks with negligible impact on the model's performance on benign data.
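The weight-clipping defense mentioned above, used to neutralize residual poisoned updates, is commonly implemented as an L2-norm bound on each client's update; a minimal sketch with an assumed bound follows:

```python
import numpy as np

def clip_update(delta, max_norm=1.0):
    """Scale a client's (flattened) update down to a fixed L2-norm bound
    so that an undetected poisoned model cannot dominate the aggregate.
    `max_norm` is an assumed hyperparameter, not a value from the paper."""
    norm = np.linalg.norm(delta)
    return delta if norm <= max_norm else delta * (max_norm / norm)
```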
Federated Learning (FL) is a scheme for collaboratively training Deep Neural Networks (DNNs) with multiple data sources from different clients. Instead of sharing the data, each client trains the model locally, resulting in improved privacy. However, recently so-called targeted poisoning attacks have been proposed that allow individual clients to inject a backdoor into the trained model. Existing defenses against these backdoor attacks either rely on techniques like Differential Privacy to mitigate the backdoor, or analyze the weights of the individual models and apply outlier detection methods that restrict these defenses to certain data distributions. However, adding noise to the models' parameters or excluding benign outliers might also reduce the accuracy of the collaboratively trained model. Additionally, allowing the server to inspect the clients' models creates a privacy risk due to existing knowledge extraction methods. We propose CrowdGuard, a model filtering defense that mitigates backdoor attacks by leveraging the clients' data to analyze the individual models before the aggregation. To prevent data leaks, the server sends the individual models to secure enclaves running in client-located Trusted Execution Environments. To effectively distinguish benign and poisoned models, even if the data of different clients are not independently and identically distributed (non-IID), we introduce a novel metric called HLBIM to analyze the outputs of the DNN's hidden layers. We show that the applied significance-based detection algorithm can effectively detect poisoned models, even in non-IID scenarios. Our extensive evaluation shows that CrowdGuard can effectively mitigate targeted poisoning attacks, achieving a True-Positive Rate of 100% and a True-Negative Rate of 100% in various scenarios.
Privacy regulations, such as the GDPR, set transparency and security as design pillars for data processing algorithms. In this context, federated learning is one of the most influential frameworks for privacy-preserving distributed machine learning, achieving astounding results in many natural language processing and computer vision tasks. Several federated learning frameworks employ differential privacy to prevent private data leakage to unauthorized parties and malicious attackers. Many studies, however, highlight the vulnerabilities of standard federated learning to poisoning and inference, thus raising concerns about potential risks for sensitive data. To address this issue, we present SGDE, a generative data exchange protocol that improves user security and machine learning performance in a cross-silo federation. The core of SGDE is to share data generators with strong differential privacy guarantees, trained on private data, instead of communicating explicit gradient information. These generators synthesize arbitrarily large amounts of data that retain the distinctive features of private samples but differ considerably from them. We show how the inclusion of SGDE in a cross-silo federated network improves resilience to the most influential attacks on federated learning. We test our approach on images and tabular datasets, exploiting beta-variational autoencoders as data generators, and highlight fairness and performance improvements over local and federated learning on non-generated data.
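The beta-variational autoencoders used as SGDE's data generators optimize a reconstruction term plus a beta-weighted KL divergence to a standard normal prior; a sketch of that objective is below (beta=4.0 is an assumed value, and SGDE's differential-privacy training machinery is omitted):

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    """Beta-VAE objective: reconstruction loss plus beta-weighted KL
    divergence between the approximate posterior N(mu, exp(logvar))
    and the standard normal prior."""
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```

Once trained, only the decoder needs to be shared: each silo can sample latent vectors from the prior and decode them into synthetic data locally.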
Motivated by the heterogeneous nature of the devices participating in large-scale federated learning (FL) optimization, we focus on an asynchronous server-less FL solution empowered by blockchain (BC) technology. In contrast to most adopted FL approaches, which assume synchronous operation, we advocate an asynchronous method whereby model aggregation is done as clients submit their local updates. The asynchronous setting fits well with the federated optimization idea in practical large-scale settings with heterogeneous clients. Thus, it potentially leads to higher efficiency in terms of communication overhead and idle periods. To evaluate the learning completion delay of BC-enabled FL, we provide an analytical model based on batch service queueing theory. Furthermore, we provide simulation results to assess the performance of both synchronous and asynchronous mechanisms. Important aspects involved in BC-enabled FL, such as the network size, link capacity, or user requirements, are put together and analyzed. As our results show, the synchronous setting leads to higher prediction accuracy than the asynchronous case. Nevertheless, asynchronous federated optimization provides much lower latency in many cases, thus becoming an appealing FL solution when dealing with large datasets, tough timing constraints (e.g., near-real-time applications), or highly varying training data.
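A common way to realize the "aggregate as updates arrive" behavior is staleness-discounted application of each delta; the discount below is a widespread heuristic and an assumption on our part, not part of the paper's analytical model:

```python
import numpy as np

def async_apply(global_params, client_delta, staleness, eta=1.0):
    """Apply one client update on arrival, discounted by its staleness
    (the number of global versions elapsed since the client pulled the
    model), so outdated updates from slow devices weigh less."""
    alpha = eta / (1.0 + staleness)
    return global_params + alpha * client_delta
```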
Federated learning enables different parties to collaboratively build a global model under the orchestration of a server while keeping the training data on the clients' devices. However, performance suffers when clients have heterogeneous data. To cope with this problem, we assume that, despite data heterogeneity, there are groups of clients whose data distributions can be clustered. In previous approaches, in order to cluster clients, the server requires the clients to send their parameters simultaneously. However, this can be problematic in a context with a significant number of participants that may have limited availability. To prevent such a bottleneck, we propose FLIC (Federated Learning with Incremental Clustering), in which the server exploits the updates sent by clients during federated training instead of asking them to send their parameters simultaneously. Thus, no additional communication between the server and the clients is necessary beyond what classical federated learning requires. We empirically demonstrate for various non-IID cases that our approach successfully splits clients into groups following the same data distributions. We also identify the limits of FLIC by studying its capability to partition clients at the early stages of the federated learning process. We further address attacks on models as a form of data heterogeneity and empirically show that FLIC is a robust defense against poisoning attacks, even when the proportion of malicious clients is higher than 50%.
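An incremental clustering step consistent with the description above, assigning each arriving update to the most similar running cluster or opening a new one, might look like the following; cosine similarity and the threshold are illustrative choices, not FLIC's exact criterion:

```python
import numpy as np

def assign_to_cluster(update, centroids, counts, threshold=0.5):
    """Place an arriving client update into the most similar cluster
    (updating its running centroid) or open a new cluster. Returns the
    cluster index. `centroids` and `counts` are mutable lists."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    if centroids:
        sims = [cos(update, c) for c in centroids]
        best = int(np.argmax(sims))
        if sims[best] >= threshold:
            counts[best] += 1
            centroids[best] += (update - centroids[best]) / counts[best]
            return best
    centroids.append(update.astype(float))
    counts.append(1)
    return len(centroids) - 1
```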
Modern defenses against cyberattacks increasingly rely on proactive approaches, e.g., predicting the adversary's next actions based on past events. Building accurate prediction models requires knowledge from many organizations; alas, this entails disclosing sensitive information, such as network structures, security postures, and policies, which is often undesirable or outright impossible. In this paper, we explore the feasibility of using Federated Learning (FL) to predict future security events. To this end, we introduce Cerberus, a system enabling the collaborative training of Recurrent Neural Network (RNN) models for participating organizations. The intuition is that FL could potentially offer a middle ground between the non-private approach, where the training data is pooled at a central server, and the lower-performing alternative of only training local models. We instantiate Cerberus on a dataset obtained from a major security company's intrusion prevention product and evaluate it with respect to utility, robustness, and privacy, as well as how participants contribute to and benefit from the system. Overall, our work sheds light on both the positive aspects and the challenges of using FL for this task and paves the way toward deploying federated approaches for predictive security.
Federated learning (FL) has been proposed as a privacy-preserving approach in distributed machine learning. A federated learning architecture consists of a central server and a number of clients that have access to private, potentially sensitive data. Clients are able to keep their data in their local machines and only share their locally trained model's parameters with a central server that manages the collaborative learning process. FL has delivered promising results in real-life scenarios, such as healthcare, energy, and finance. However, when the number of participating clients is large, the overhead of managing the clients slows down the learning. Thus, client selection has been introduced as a strategy to limit the number of communicating parties at every step of the process. Since the early naïve random selection of clients, several client selection methods have been proposed in the literature. Unfortunately, given that this is an emerging field, there is a lack of a taxonomy of client selection methods, making it hard to compare approaches. In this paper, we propose a taxonomy of client selection in Federated Learning that enables us to shed light on current progress in the field and identify potential areas of future research in this promising area of machine learning.
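The naive random selection that the taxonomy treats as its historical starting point is simple enough to sketch in a few lines; the function below is our illustration, not an API from any FL framework:

```python
import random

def select_clients(clients, fraction=0.1, seed=None):
    """Baseline naive random client selection: sample a fixed fraction of
    the client pool each round; more informed strategies (resource- or
    contribution-aware) replace this sampling step."""
    k = max(1, int(len(clients) * fraction))
    return random.Random(seed).sample(clients, k)
```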
In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
Graph neural networks (GNNs) are a class of deep learning-based methods for processing graph-domain information. GNNs have recently become a widely used graph analysis method due to their ability to learn representations of complex graph data. However, due to privacy concerns and regulation restrictions, centralized GNNs can be difficult to apply in data-sensitive scenarios. Federated learning (FL) is an emerging technology developed for privacy-preserving settings, where several parties need to collaboratively train a shared global model. Although several research works have applied FL to train GNNs (federated GNNs), there is no research on their robustness to backdoor attacks. This paper bridges this gap by conducting two types of backdoor attacks in federated GNNs: centralized backdoor attacks (CBA) and distributed backdoor attacks (DBA). Our experiments show that the DBA attack success rate is higher than that of CBA in almost all evaluated cases. For CBA, the attack success rate of all local triggers is similar to that of the global trigger, even if the adversarial party's training set is embedded with the global trigger. To further explore the properties of the two backdoor attacks in federated GNNs, we evaluate the attack performance for different numbers of clients, trigger sizes, poisoning intensities, and trigger densities. Moreover, we explore the robustness of DBA and CBA against two state-of-the-art defenses. We find that both attacks are robust against the investigated defenses, necessitating that backdoor attacks in federated GNNs be considered a novel threat requiring custom defenses.