The emergence of federated learning facilitates large-scale data exchange among machine learning models while maintaining privacy. Despite its long history, federated learning is evolving rapidly to make broader use more practical. One of the most significant advances in this field is the incorporation of transfer learning into federated learning, which overcomes fundamental limitations of plain federated learning, especially with respect to security. This chapter presents a comprehensive survey of the intersection of federated and transfer learning from a security point of view. The main goal of this study is to uncover potential vulnerabilities and defense mechanisms that could compromise the privacy and performance of systems that use federated and transfer learning.
In terms of artificial intelligence, there are several security and privacy deficiencies in the traditional centralized training of machine learning models by a server. To address this limitation, federated learning (FL) has been proposed and is known for breaking down ``data silos'' and protecting the privacy of users. However, FL has not yet gained popularity in the industry, mainly due to its security, privacy, and high communication cost. To advance research in this field, build robust FL systems, and realize the wide application of FL, this paper systematically sorts out the possible attacks on current FL systems and the corresponding defenses. Firstly, this paper briefly introduces the basic workflow of FL and related knowledge of attacks and defenses. It reviews a great deal of research about privacy theft and malicious attacks that have been studied in recent years. Most importantly, in view of the current three classification criteria, namely the three stages of machine learning, the three different roles in federated learning, and the CIA (Confidentiality, Integrity, and Availability) guidelines on privacy protection, we divide attack approaches into two categories according to the training stage and the prediction stage in machine learning. Furthermore, we also identify the CIA property violated by each attack method and the potential attacking roles. Various defense mechanisms are then analyzed separately at the privacy and security levels. Finally, we summarize the possible challenges in the application of FL from the aspect of attacks and defenses and discuss the future development direction of FL systems. In this way, an FL system designed with these considerations can resist different attacks and is more secure and stable.
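The taxonomy above distinguishes training-stage from prediction-stage attacks. As a concrete, purely illustrative instance of a training-stage attack, the sketch below shows label flipping, where a malicious participant relabels part of its local data before training; the classes, flip rate, and data are hypothetical and not taken from the surveyed work.

```python
import numpy as np

def flip_labels(y, source_class, target_class, flip_rate, rng):
    """Training-stage poisoning sketch: relabel a fraction of `source_class`
    samples as `target_class` before local training (label flipping)."""
    y = y.copy()
    candidates = np.where(y == source_class)[0]
    n_flip = int(len(candidates) * flip_rate)
    chosen = rng.choice(candidates, size=n_flip, replace=False)
    y[chosen] = target_class
    return y

rng = np.random.default_rng(0)
y_local = rng.integers(0, 10, size=1000)          # hypothetical local labels
y_poisoned = flip_labels(y_local, source_class=3, target_class=8,
                         flip_rate=0.5, rng=rng)
print((y_local != y_poisoned).sum(), "labels flipped")
```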
Data leakage from public machine learning (ML) models is an area of growing significance, since commercial and government applications of ML can draw on multiple data sources, potentially including sensitive user and customer data. We provide a comprehensive survey of contemporary advances on several fronts, covering involuntary data leakage, which is natural to ML models, potential malicious leakage caused by privacy attacks, and the currently available defense mechanisms. We focus on inference-time leakage, the most likely scenario for publicly available models. We first discuss what leakage is in the context of different data, tasks, and model architectures. We then propose a taxonomy across involuntary and malicious leakage and the available defenses, followed by the currently available evaluation metrics and applications. We conclude with outstanding challenges and open questions, outlining some promising directions for future research.
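To make inference-time leakage concrete, the sketch below implements the classic loss-threshold membership-inference baseline: a record whose loss under the model is unusually low is guessed to have been in the training set. The toy probability vectors and the threshold are assumptions for illustration only.

```python
import numpy as np

def cross_entropy(probs, label, eps=1e-12):
    return -np.log(probs[label] + eps)

def membership_guess(model_probs, label, threshold):
    """Loss-threshold membership inference: low loss suggests the record
    was seen during training (a member)."""
    return cross_entropy(model_probs, label) < threshold

# Toy example: the model is very confident on a (presumed) training record
# and less confident on an unseen one; the threshold is an assumption.
train_like = np.array([0.01, 0.97, 0.02])
unseen_like = np.array([0.40, 0.35, 0.25])
print(membership_guess(train_like, label=1, threshold=0.5))   # True  -> guessed member
print(membership_guess(unseen_like, label=1, threshold=0.5))  # False -> guessed non-member
```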
Federated learning is a collaborative method that aims to preserve data privacy while creating AI models. Current approaches to federated learning tend to rely heavily on secure aggregation protocols to preserve data privacy. However, to some degree, such protocols assume that the entity orchestrating the federated learning process (i.e., the server) is not fully malicious or dishonest. We investigate vulnerabilities to secure aggregation that could arise if the server is fully malicious and attempts to obtain access to private, potentially sensitive data. Furthermore, we provide a method to further defend against such a malicious server, and demonstrate effectiveness against known attacks that reconstruct data in a federated learning setting.
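For background on what such attacks target: a common secure-aggregation construction has clients add pairwise random masks that cancel when all updates are summed, so the server learns only the aggregate. The snippet below is a simplified, dropout-free sketch of that cancellation idea with made-up seeds and vector sizes; it is not the protocol or the defense studied in the paper.

```python
import numpy as np

def masked_update(client_id, update, peer_ids, dim):
    """Add pairwise masks derived from (assumed) shared seeds; masks between
    clients i and j are equal and opposite, so they cancel in the sum."""
    masked = update.copy()
    for peer in peer_ids:
        if peer == client_id:
            continue
        seed = hash((min(client_id, peer), max(client_id, peer))) % (2**32)
        mask = np.random.default_rng(seed).normal(size=dim)
        masked += mask if client_id < peer else -mask
    return masked

dim, clients = 4, [0, 1, 2]
rng = np.random.default_rng(42)
updates = {c: rng.normal(size=dim) for c in clients}
masked = [masked_update(c, updates[c], clients, dim) for c in clients]

# The server sees only masked vectors, yet their sum equals the true sum.
print(np.allclose(sum(masked), sum(updates.values())))  # True
```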
Federated learning has been a hot research topic, enabling the collaborative training of machine learning models across different organizations under privacy restrictions. As researchers try to support more machine learning models with different privacy-preserving approaches, there is a need to develop systems and infrastructures that ease the development of various federated learning algorithms. Just as deep learning systems such as PyTorch and TensorFlow have boosted the development of deep learning, federated learning systems (FLSs) are equally important and face challenges in various aspects such as effectiveness, efficiency, and privacy. In this survey, we conduct a comprehensive review of federated learning systems. To provide a smooth flow and guide future research, we introduce the definition of federated learning systems and analyze their system components. Moreover, we provide a comprehensive taxonomy of federated learning systems along six different aspects, including data distribution, machine learning model, privacy mechanism, communication architecture, scale of federation, and motivation of federation. The taxonomy can help in the design of federated learning systems, as shown in our case studies. By systematically summarizing existing federated learning systems, we present the design factors, case studies, and future research opportunities.
Federated learning (FL) is a system in which a central aggregator coordinates the efforts of multiple clients to solve machine learning problems. This setting allows training data to remain decentralized, protecting privacy. The purpose of this paper is to provide an overview of FL systems with a focus on healthcare. FL is evaluated here based on its frameworks, architectures, and applications. It is shown that FL addresses the foregoing issues with a shared global deep learning (DL) model via a central aggregator server. This paper examines recent developments and, inspired by the rapid growth of FL research, lists the unresolved problems. In the context of FL, several privacy methods are described, including secure multi-party computation, homomorphic encryption, differential privacy, and stochastic gradient descent. In addition, a review of various FL classes, such as horizontal and vertical FL and federated transfer learning, is provided. FL has applications in wireless communication, service recommendation, intelligent medical diagnosis systems, and healthcare, which are discussed in this paper. We also present a thorough review of existing FL challenges, such as privacy protection, communication cost, system heterogeneity, and unreliable model upload, followed by future research directions.
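Of the privacy methods listed, differential privacy is the easiest to sketch: each client clips its model update to bound sensitivity and adds calibrated Gaussian noise before sharing it. The clipping norm and noise multiplier below are placeholder values, and no formal privacy accounting is attempted; this is a minimal sketch of the mechanism only.

```python
import numpy as np

def dp_sanitize(update, clip_norm, noise_multiplier, rng):
    """Clip the update to `clip_norm` (bounding its sensitivity), then add
    Gaussian noise scaled by `noise_multiplier * clip_norm`."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

rng = np.random.default_rng(0)
raw_update = rng.normal(size=10)                  # hypothetical model update
private_update = dp_sanitize(raw_update, clip_norm=1.0,
                             noise_multiplier=1.1, rng=rng)
print(np.linalg.norm(raw_update), np.linalg.norm(private_update))
```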
Speech-centric machine learning systems have revolutionized many leading domains, ranging from transportation and healthcare to education and defense, profoundly changing how people live, work, and interact with each other. However, recent studies have demonstrated that many speech-centric ML systems may not yet be trustworthy enough for broader deployment. Specifically, concerns over privacy breaches, discriminatory performance, and vulnerability to adversarial attacks have all been discovered in ML research fields. In order to address the above challenges and risks, a significant number of efforts have been made to ensure these ML systems are trustworthy, especially private, safe, and fair. In this paper, we conduct the first comprehensive survey on speech-centric trustworthy ML topics related to privacy, safety, and fairness. In addition to serving as a summary report for the research community, we point out several promising future research directions to inspire researchers who wish to explore this area further.
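As a pointer to what vulnerability to adversarial attacks means in practice, the sketch below applies the well-known fast gradient sign method (FGSM) to a toy linear classifier over a hypothetical feature vector; the weights, input, and epsilon are invented for illustration and are unrelated to any specific speech system.

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, epsilon):
    """FGSM: step the input in the sign of the loss gradient to raise the loss."""
    return x + epsilon * np.sign(grad_wrt_x)

# Toy linear classifier: score = w @ x, loss = -score for the true class.
w = np.array([0.5, -1.2, 0.8, 0.3])
x = np.array([0.1, 0.4, -0.2, 0.9])               # hypothetical feature vector
grad_wrt_x = -w                                   # d(-w @ x) / dx
x_adv = fgsm_perturb(x, grad_wrt_x, epsilon=0.1)
print("clean score:", w @ x, "adversarial score:", w @ x_adv)  # score drops
```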
Wider coverage and a better solution to latency reduction in 5G require its combination with multi-access edge computing (MEC) technology. Decentralized deep learning (DDL), such as federated learning and swarm learning, is a promising solution for privacy-preserving data processing across millions of smart edge devices, leveraging the distributed computation of multi-layer neural networks within local client networks without disclosing the raw local training data. Notably, in industries such as finance and healthcare, where sensitive data on transactions and personal medical records are carefully maintained, DDL can facilitate cooperation among these institutions to improve the performance of trained models while protecting the data privacy of participating clients. In this survey paper, we demonstrate the technical fundamentals of DDL that benefit many walks of society through decentralized learning. Furthermore, we offer a comprehensive overview of the current state of the art in this field by outlining the challenges of DDL and the most relevant solutions, from the novel perspectives of communication efficiency and reliability.
In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
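The workflow sketched above, local training followed by server-side aggregation of model updates, is typically realized with FedAvg-style weighted averaging. Below is a minimal FedAvg-flavored sketch on a toy least-squares problem; the number of clients, rounds, learning rate, and synthetic data are all assumptions for illustration.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on its own data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, client_data):
    """Server round: collect locally trained weights and average them,
    weighted by each client's number of samples."""
    local_weights, sizes = [], []
    for X, y in client_data:
        local_weights.append(local_step(w_global, X, y))
        sizes.append(len(y))
    return np.average(local_weights, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])
clients = []
for _ in range(5):                                # hypothetical edge devices
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):                               # communication rounds
    w = fedavg_round(w, clients)
print("learned weights:", w)                      # approaches [2, -3]
```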
Federated learning (FL) and split learning (SL) are two emerging collaborative learning methods that may greatly facilitate ubiquitous intelligence in the Internet of Things (IoT). Federated learning enables machine learning (ML) models trained locally on private data to be aggregated into a global model. Split learning allows different portions of an ML model to be collaboratively trained on different workers in a learning framework. Federated learning and split learning each have unique advantages and respective limitations and may complement each other toward ubiquitous intelligence in the IoT. The combination of federated learning and split learning has therefore recently become an active research area attracting extensive interest. In this article, we review the latest developments in federated learning and split learning and present a survey of the state-of-the-art technologies for combining these two learning methods in edge-computing-based IoT environments. We also identify some open problems and discuss possible directions for future research in this area, in the hope of further arousing the research community's interest in this emerging field.
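To contrast with FL, here is a minimal split-learning sketch: the client keeps the first layer and sends only the intermediate activations (the cut-layer outputs) to the server, which runs the rest of the model and returns the gradient at the cut. The two-layer linear model, learning rate, and synthetic data are illustrative assumptions, not a reproduction of any surveyed system.

```python
import numpy as np

rng = np.random.default_rng(0)
W_client = rng.normal(scale=0.1, size=(4, 3))   # client-side layer
W_server = rng.normal(scale=0.1, size=(3, 1))   # server-side layer
lr = 0.05

def split_round(X, y):
    """One split-learning step: client forward -> server forward/backward ->
    client backward. Only activations and gradients cross the cut layer."""
    global W_client, W_server
    smashed = X @ W_client                      # sent to the server, not X
    pred = smashed @ W_server
    err = pred - y
    grad_server = smashed.T @ err / len(y)
    grad_smashed = err @ W_server.T             # returned to the client
    W_server -= lr * grad_server
    W_client -= lr * (X.T @ grad_smashed) / len(y)
    return float((err ** 2).mean())

X = rng.normal(size=(64, 4))
y = X @ rng.normal(size=(4, 1))                 # hypothetical regression target
for step in range(200):
    loss = split_round(X, y)
print("final MSE:", loss)
```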
Recent advances in communication technologies and the Internet, together with artificial intelligence (AI), are enabling smart healthcare. Traditionally, AI techniques require centralized data collection and processing, which may be infeasible in realistic healthcare scenarios due to the high scalability of modern healthcare networks and growing data privacy concerns. As an emerging distributed collaborative AI paradigm that coordinates multiple clients (e.g., hospitals) to perform AI training without sharing raw data, federated learning (FL) is particularly attractive for smart healthcare. Accordingly, we provide a comprehensive survey of the use of FL in smart healthcare. First, we present the recent advances, motivations, and requirements for using FL in smart healthcare. Recent FL designs for smart healthcare are then discussed, ranging from resource-aware FL and secure and privacy-aware FL to incentive-based FL and personalized FL. Subsequently, we provide a state-of-the-art review of emerging FL applications in key healthcare domains, including health data management, remote health monitoring, medical imaging, and COVID-19 detection. Several recent FL-based smart healthcare projects are analyzed, and the key lessons learned from the survey are highlighted. Finally, we discuss interesting research challenges and possible directions for future research in smart healthcare.
Privacy and security challenges in machine learning (ML) have become a critical topic as ML is developed pervasively and large attack surfaces have recently been exposed. As a mature system-oriented approach, confidential computing is increasingly being used in both academia and industry to improve privacy and security in various ML scenarios. In this paper, we systematize the findings on confidential computing-assisted ML security and privacy techniques that provide i) confidentiality guarantees and ii) integrity guarantees. We further identify key challenges and provide a dedicated analysis of the limitations of existing trusted execution environment (TEE) systems for ML use cases. We discuss prospective work, including grounded privacy definitions, partitioned ML execution, dedicated TEE designs for ML, TEE-aware ML, and ML full-pipeline guarantees. These potential solutions can help achieve strong TEE-based ML guarantees without introducing excessive computation and system costs.
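Among the prospective directions, partitioned ML execution is easy to picture: only the privacy-critical part of the model runs inside the trusted environment. The sketch below is purely conceptual, with an ordinary Python function standing in for enclave-resident code; a real deployment would run that part inside an actual TEE runtime (e.g., an SGX enclave) behind an attested channel, and all weights, shapes, and names here are made up for illustration.

```python
import numpy as np

# Public (untrusted) part of the model: generic feature-extraction layer.
W_public = np.random.default_rng(0).normal(scale=0.1, size=(8, 4))

# Stand-in for enclave-resident state. In a real system these weights and the
# code that uses them would live inside a TEE, not in host memory.
W_private = np.random.default_rng(1).normal(scale=0.1, size=(4, 2))

def untrusted_forward(x):
    """Runs outside the TEE on the host: no sensitive weights involved."""
    return np.maximum(x @ W_public, 0.0)

def enclave_forward(hidden):
    """Conceptual enclave entry point: the sensitive classifier head."""
    logits = hidden @ W_private
    return logits.argmax(axis=-1)

x = np.random.default_rng(2).normal(size=(3, 8))
print(enclave_forward(untrusted_forward(x)))
```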
With the advent of the Internet of Things (IoT), AI, and ML/DL algorithms, data-driven medical applications have emerged as a promising tool for designing reliable and scalable diagnostic and prognostic models from medical data. This has attracted wide attention from academia and industry in recent years and has undoubtedly improved the quality of healthcare delivery. However, adoption of these AI-based medical applications remains poor because of the difficulty of meeting strict security, privacy, and service standards (such as low latency). Moreover, medical data are usually fragmented and private, making it challenging to generate robust results across populations. Recent developments in federated learning (FL) have made it possible to train complex machine learning models in a distributed manner. FL has therefore become an active research area, especially for processing medical data at the network edge in a decentralized way that protects privacy and addresses security concerns. To this end, this survey paper highlights the current and future potential of FL techniques in medical applications where data sharing is a significant burden. It also reviews and discusses current research trends and their outcomes for designing reliable and scalable models. We outline the general statistical problems of FL, device challenges, security and privacy concerns, and its potential in the medical domain. Moreover, our study focuses on medical applications: we highlight the global burden of cancer and the efficient use of FL to develop computer-aided diagnosis tools for addressing it. We hope this review serves as a checkpoint that sets forth the existing state-of-the-art work in a thorough manner and offers open problems and future research directions for this field.
Modern defenses against cyberattacks increasingly rely on proactive approaches, e.g., predicting the adversary's next actions based on past events. Building accurate prediction models requires knowledge from many organizations; alas, this entails disclosing sensitive information, such as network structures, security postures, and policies, which is often undesirable or outright impossible. In this paper, we explore the feasibility of using federated learning (FL) to predict future security events. To this end, we introduce Cerberus, a system enabling the collaborative training of recurrent neural network (RNN) models for participating organizations. The intuition is that FL may provide a middle ground between the non-private approach, in which training data is pooled at a central server, and the lower-performing alternative of training only local models. We instantiate Cerberus on a dataset obtained from a major security company's intrusion prevention product and evaluate it with respect to utility, robustness, and privacy, as well as how participants contribute to and benefit from the system. Overall, our work sheds light on both the positive aspects and the challenges of using FL for this task and paves the way for deploying federated approaches to predictive security.
Federated Learning (FL) has emerged as a promising distributed learning paradigm with an added advantage of data privacy. With the growing interest in collaboration among data owners, FL has gained significant attention from organizations. The idea of FL is to enable collaborating participants to train machine learning (ML) models on decentralized data without breaching privacy. In simpler words, federated learning is the approach of ``bringing the model to the data, instead of bringing the data to the model''. Federated learning, when applied to data which is partitioned vertically across participants, is able to build a complete ML model by combining local models trained only on the data with distinct features at the local sites. This architecture of FL is referred to as vertical federated learning (VFL), which differs from conventional FL on horizontally partitioned data. As VFL is different from conventional FL, it comes with its own issues and challenges. In this paper, we present a structured literature review discussing the state-of-the-art approaches in VFL. Additionally, the literature review highlights the existing solutions to challenges in VFL and provides potential research directions in this domain.
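To illustrate the vertically partitioned setting, the sketch below splits a linear model across two parties that hold different feature columns for the same samples; each party computes a partial score on its own features, and only the partial scores and a shared residual, never the raw features, cross party boundaries. It is a bare-bones sketch that omits the encryption or secure aggregation a real VFL protocol would add; the data and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Same samples, vertically partitioned features: party A holds 3 columns,
# party B holds 2 columns (and, in this toy setup, the labels).
X_a, X_b = rng.normal(size=(n, 3)), rng.normal(size=(n, 2))
y = X_a @ np.array([1.0, -2.0, 0.5]) + X_b @ np.array([3.0, 1.5])

w_a, w_b, lr = np.zeros(3), np.zeros(2), 0.1
for _ in range(300):
    partial_a = X_a @ w_a            # exchanged: partial scores, not features
    partial_b = X_b @ w_b
    err = (partial_a + partial_b) - y
    w_a -= lr * X_a.T @ err / n      # each party updates only its own weights
    w_b -= lr * X_b.T @ err / n
print(w_a, w_b)                      # approaches the true coefficients
```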
The pharmaceutical industry can better leverage its data assets to virtualize drug discovery through collaborative machine learning platforms. On the other hand, there is a non-negligible risk of unintended leakage of the participants' training data, so such platforms must be secure and privacy-preserving. This paper describes a privacy risk assessment of collaborative modeling in the preclinical phase of drug discovery to accelerate the selection of promising drug candidates. After a short taxonomy of state-of-the-art inference attacks, we adopt and customize several of them as baseline cases. Finally, we describe and experiment with some relevant privacy-preserving techniques to mitigate such attacks.
Federated learning (FL) is an emerging technology that enables the training of machine learning models across multiple clients while keeping the data distributed and private. Depending on the participating clients and the scale of model training, federated learning can be classified into two types: cross-device FL, where the clients are typically mobile devices and their number can reach millions; and cross-silo FL, where the clients are organizations or companies and their number is usually small (e.g., within a hundred). While existing studies mainly focus on cross-device FL, this paper aims to provide an overview of cross-silo FL. More specifically, we first discuss the applications of cross-silo FL and outline its major challenges. We then provide a systematic overview of the existing approaches to the challenges in cross-silo FL, focusing on their connections to and differences from cross-device FL. Finally, we discuss future directions and open problems that deserve research effort from the community.
Today's AI still faces two major challenges. One is that in most industries, data exists in the form of isolated islands. The other is the strengthening of data privacy and security. We propose a possible solution to these challenges: secure federated learning. Beyond the federated learning framework first proposed by Google in 2016, we introduce a comprehensive secure federated learning framework, which includes horizontal federated learning, vertical federated learning and federated transfer learning. We provide definitions, architectures and applications for the federated learning framework, and provide a comprehensive survey of existing works on this subject. In addition, we propose building data networks among organizations based on federated mechanisms as an effective solution to allow knowledge to be shared without compromising user privacy.
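To make the three categories concrete, the toy sketch below shows how one hypothetical user-by-feature matrix would be partitioned under horizontal FL (shared features, disjoint users), vertical FL (shared users, disjoint features), and federated transfer learning (little overlap in either); the array sizes and overlap choices are arbitrary illustrations, not definitions from the framework itself.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(6, 4))                    # 6 users x 4 features (toy)

# Horizontal FL: parties share the feature space but hold different users.
horiz_a, horiz_b = data[:3, :], data[3:, :]

# Vertical FL: parties share the same users but hold different features.
vert_a, vert_b = data[:, :2], data[:, 2:]

# Federated transfer learning: small overlap in both users and features;
# here the two parties share only users 2-3 and feature 1.
ftl_a, ftl_b = data[:4, :2], data[2:, 1:]

for name, (a, b) in {"horizontal": (horiz_a, horiz_b),
                     "vertical": (vert_a, vert_b),
                     "transfer": (ftl_a, ftl_b)}.items():
    print(name, a.shape, b.shape)
```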
The unprecedented surge of data volumes in wireless networks empowered by artificial intelligence (AI) opens up new horizons for providing ubiquitous data-driven intelligent services. Traditional cloud-centric machine learning (ML)-based services are implemented by centrally collecting datasets and training models. However, this conventional training technique entails two challenges: (i) high communication and energy costs due to the increased data communication, and (ii) threats to data privacy by allowing untrusted parties to exploit this information. Recently, in light of these limitations, an emerging technique, namely federated learning (FL), has been proposed to bring ML to the edge of wireless networks. FL can extract the benefits of data silos by training a global model in a distributed manner, orchestrated by the FL server. FL exploits decentralized datasets and the computing resources of participating clients to develop a generalized ML model without compromising data privacy. In this article, we present a comprehensive survey of the fundamentals and enabling technologies of FL. Moreover, an extensive study is presented detailing various applications of FL in wireless networks and highlighting their challenges and limitations. The efficacy of FL is further explored in emerging prospective beyond fifth generation (B5G) and sixth generation (6G) communication systems. The purpose of this survey is to provide an overview of the state of the art of FL applications in key wireless technologies that will serve as a foundation for establishing a firm understanding of the topic. Lastly, we offer a road forward for future research directions.
Federated learning has recently been applied to recommendation systems to protect user privacy. In federated learning settings, recommendation systems can train recommendation models by collecting only the intermediate parameters instead of the real user data, which greatly enhances user privacy. Besides, federated recommendation systems make it possible to collaborate with other data platforms to improve recommendation model performance while meeting regulation and privacy constraints. However, federated recommendation systems face many new challenges such as privacy, security, heterogeneity, and communication costs. While significant research has been conducted in these areas, gaps in the surveying literature still exist. In this survey, we (1) summarize some common privacy mechanisms used in federated recommendation systems and discuss the advantages and limitations of each mechanism; (2) review some robust aggregation strategies and several novel attacks against security; (3) summarize some approaches to address heterogeneity and communication cost problems; (4) introduce some open-source platforms that can be used to build federated recommendation systems; (5) present some prospective research directions for the future. This survey can guide researchers and practitioners in understanding the research progress in these areas.
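The idea of sharing intermediate parameters instead of real user data can be pictured with a federated matrix-factorization sketch: each user keeps their own embedding and ratings locally and uploads only gradients for the item embeddings, which the server averages. The dimensions, learning rate, and toy ratings below are assumptions for illustration, not a specific system from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim, lr = 4, 6, 3, 0.05
item_emb = rng.normal(scale=0.1, size=(n_items, dim))   # held by the server
user_emb = rng.normal(scale=0.1, size=(n_users, dim))   # never leaves clients
ratings = rng.integers(1, 6, size=(n_users, n_items)).astype(float)  # toy data

for _ in range(200):
    item_grads = np.zeros_like(item_emb)
    for u in range(n_users):                      # each client, locally:
        err = user_emb[u] @ item_emb.T - ratings[u]
        user_emb[u] -= lr * err @ item_emb        # private update stays local
        item_grads += np.outer(err, user_emb[u])  # only this is uploaded
    item_emb -= lr * item_grads / n_users         # server-side aggregation

rmse = float(np.sqrt(((user_emb @ item_emb.T - ratings) ** 2).mean()))
print("final RMSE:", rmse)
```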