This paper presents a modified User Datagram Protocol (UDP) for federated learning, designed to ensure both efficiency and reliability when transmitting model parameters, so that the global model realizes its full potential in each federated learning round. In developing and testing this protocol, the NS3 simulator was used to simulate packet transmission over the network, while Google TensorFlow was used to create a custom federated learning environment. In this initial implementation, the simulation contains three nodes: two client nodes and one server node. The results obtained in this paper provide confidence in the protocol's capabilities; in future work, the federated learning protocol together with the modified UDP will be simulated at larger scales, and optimizations of the modified UDP will be explored to improve efficiency while preserving reliability.
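The abstract does not specify the wire format of the modified UDP. As one way to picture "UDP plus reliability", here is a minimal stop-and-wait sketch in Python (standard library only) in which serialized model parameters are split into sequence-numbered chunks and each chunk is retransmitted until acknowledged. The chunk size, header layout, and port are illustrative assumptions, not the paper's design.

```python
import socket
import struct

CHUNK = 1024                   # payload bytes per datagram (assumed)
TIMEOUT = 0.5                  # seconds before retransmitting an unACKed chunk
HEADER = struct.Struct("!II")  # (sequence number, total number of chunks)

def send_params(payload: bytes, addr=("127.0.0.1", 9999)):
    """Send serialized model parameters over UDP with stop-and-wait ACKs."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT)
    chunks = [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)]
    for seq, chunk in enumerate(chunks):
        packet = HEADER.pack(seq, len(chunks)) + chunk
        while True:                      # retransmit until this chunk is ACKed
            sock.sendto(packet, addr)
            try:
                ack, _ = sock.recvfrom(4)
                if struct.unpack("!I", ack)[0] == seq:
                    break                # receiver confirmed this sequence number
            except socket.timeout:
                continue                 # ACK lost or late: resend
    sock.close()

def recv_params(bind=("127.0.0.1", 9999)) -> bytes:
    """Reassemble the parameter payload, ACKing each chunk by sequence number."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(bind)
    received, total = {}, None
    while total is None or len(received) < total:
        data, addr = sock.recvfrom(HEADER.size + CHUNK)
        seq, total = HEADER.unpack(data[:HEADER.size])
        received[seq] = data[HEADER.size:]
        sock.sendto(struct.pack("!I", seq), addr)  # ACK; idempotent on duplicates
    sock.close()
    return b"".join(received[i] for i in range(total))
```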
Federated Learning (FL) is one of the most appealing alternatives to the standard centralized learning paradigm, allowing a heterogeneous set of devices to train a machine learning model without sharing their raw data. However, FL requires a central server to coordinate the learning process, which introduces potential scalability and security issues. In the literature, server-less approaches such as Gossip Federated Learning (GFL) and Blockchain-enabled Federated Learning (BFL) have been proposed to mitigate these issues. In this work, we provide a complete overview of these three techniques (centralized FL, or CFL, together with GFL and BFL), comparing them according to overall performance indicators including model accuracy, time complexity, communication overhead, convergence time, and energy consumption. An extensive simulation campaign permits a quantitative analysis. In particular, GFL is able to save 18% of training time, 68% of energy, and 51% of data with respect to the CFL solution, but it cannot reach the accuracy level of CFL. BFL, on the other hand, represents a viable solution for implementing decentralized learning with a higher level of security, at the cost of extra energy usage and data sharing. Finally, we identify open issues for the two decentralized federated learning implementations and provide insights on potential extensions and possible research directions in this new research field.
Federated Learning (FL) is a machine learning (ML) technique that aims to mitigate threats to user data privacy. Training is performed using the raw data on user devices, referred to as clients, and only the training results, called gradients, are sent to the server to be aggregated into an updated model. However, we cannot assume that the server can be trusted with private information, such as metadata related to the data owner or the data source. Hiding client information from the server therefore helps reduce privacy-related attacks; the privacy of client identities, alongside the privacy of client data, is thus a necessary condition for making such attacks more difficult. This paper proposes an efficient and privacy-preserving protocol for FL based on group signatures. A new group signature scheme for FL, called GSFL, is designed to protect the privacy of both client data and client identity, and moreover to substantially reduce computation and communication costs by taking the iterative process of federated learning into account. We show that GSFL outperforms existing approaches in terms of computation, communication, and signaling costs. In addition, we show that the proposed protocol can withstand various security attacks in the federated learning environment.
Federated Learning (FL) was proposed to facilitate the training of models in distributed environments. It supports the protection of (local) data privacy and uses local resources for model training. So far, most research has been devoted to "core issues", such as adapting machine learning algorithms to FL, protecting data privacy, or handling uneven data distributions across clients. This contribution is anchored in a practical use case in which FL is to be actually deployed in an Internet of Things ecosystem. Hence, beyond the popular considerations found in the literature, some rather different questions need to be addressed. In addition, an architecture for building flexible and adaptable FL solutions is introduced.
We envision a mobile edge computing (MEC) framework for machine learning (ML) technologies, which leverages distributed client data and computation resources for training high-performance ML models while preserving client privacy. Toward this future goal, this work aims to extend Federated Learning (FL), a decentralized learning framework that enables privacy-preserving training of models, to work with heterogeneous clients in a practical cellular network. The FL protocol iteratively asks random clients to download a trainable model from a server, update it with their own data, and upload the updated model to the server, while asking the server to aggregate multiple client updates to further improve the model. While clients in this protocol are free from disclosing their own private data, the overall training process can become inefficient when some clients have limited computational resources (i.e., require a longer update time) or are under poor wireless channel conditions (i.e., have a longer upload time). Our new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions. Specifically, FedCS solves a client selection problem with resource constraints, which allows the server to aggregate as many client updates as possible and to accelerate performance improvement in ML models. We conducted an experimental evaluation using publicly-available large-scale image datasets to train deep neural networks in MEC environment simulations. The experimental results show that FedCS is able to complete its training process in a significantly shorter time compared to the original FL protocol.
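The abstract frames FedCS as deadline-constrained client selection; the paper's exact heuristic is not reproduced here, but a minimal greedy sketch, assuming per-client estimates of update and upload times (hypothetical numbers), illustrates the idea of packing as many client updates as possible into a round deadline:

```python
def select_clients(clients, deadline):
    """Greedy deadline-constrained selection (a sketch, not FedCS's exact heuristic).

    `clients` maps client id -> estimated (update_time, upload_time) in seconds.
    Updates are assumed to run in parallel while uploads share one channel, so a
    round takes (slowest selected update) + (sum of selected uploads)."""
    selected, upload_total, slowest_update = [], 0.0, 0.0
    # Consider the cheapest clients first to fit as many as possible.
    for cid, (update_t, upload_t) in sorted(clients.items(),
                                            key=lambda kv: sum(kv[1])):
        new_upload = upload_total + upload_t
        new_slowest = max(slowest_update, update_t)
        if new_slowest + new_upload <= deadline:   # still meets the deadline
            selected.append(cid)
            upload_total, slowest_update = new_upload, new_slowest
    return selected

# Example: estimated (update, upload) seconds per client, 60 s round deadline.
clients = {"a": (10, 5), "b": (40, 20), "c": (8, 4), "d": (15, 6)}
print(select_clients(clients, deadline=60.0))   # -> ['c', 'a', 'd']
```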
Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model while keeping their training data on device, thereby decoupling the ability to do machine learning from the need to store data in the cloud. However, FL is difficult to implement realistically, both in terms of scale and systems heterogeneity. Although there are a number of research frameworks available for simulating FL algorithms, they do not support scalable FL workloads on heterogeneous edge devices. In this paper, we present Flower, a comprehensive FL framework that distinguishes itself from existing platforms by providing new facilities to execute large-scale FL experiments and to account for richly heterogeneous FL device scenarios. Our experiments show that Flower can perform FL experiments at large client scales using only a pair of high-end GPUs. Researchers can then seamlessly migrate experiments to real devices to examine other parts of the design space. We believe that Flower provides the community with a critical new tool for FL study and development.
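To give a flavor of how such a framework is driven, here is a hedged sketch using Flower's Python simulation API in the flwr 1.x style (entry points have shifted across releases, and simulation requires the `flwr[simulation]` extra). The toy NumPy "model" is an illustrative stand-in for a real training pipeline:

```python
import flwr as fl
import numpy as np

class ToyClient(fl.client.NumPyClient):
    """Toy client whose 'model' is a single weight vector (stand-in for a real net)."""
    def __init__(self):
        self.weights = np.zeros(10)

    def get_parameters(self, config):
        return [self.weights]

    def fit(self, parameters, config):
        # Pretend local training: perturb the received global weights slightly.
        self.weights = parameters[0] + np.random.normal(0, 0.01, 10)
        return [self.weights], 10, {}          # (params, num_examples, metrics)

    def evaluate(self, parameters, config):
        return float(np.linalg.norm(parameters[0])), 10, {}

def client_fn(cid: str):
    return ToyClient().to_client()

# Run 10 simulated clients for 3 FedAvg rounds on a single machine.
fl.simulation.start_simulation(
    client_fn=client_fn,
    num_clients=10,
    config=fl.server.ServerConfig(num_rounds=3),
)
```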
Traditional deep learning (DL) methods require training data to be collected and processed on central servers, which is often challenging in privacy-sensitive domains such as healthcare. To this end, a new learning paradigm called Federated Learning (FL) was proposed, bringing the potential of DL to these domains while addressing privacy and data-ownership issues. FL enables remote clients to learn a shared ML model while keeping the data local. However, conventional FL systems face several challenges, such as scalability, complex infrastructure management, and wasted computation and incurred costs due to idle clients. These challenges of FL systems align closely with the core problems that serverless computing and Function-as-a-Service (FaaS) platforms aim to solve: rapid scalability, no infrastructure management, automatic scaling to zero for idle clients, and a pay-per-use billing model. To this end, we present a novel system and framework for serverless FL, called FedLess. Our system supports multiple commercial and self-hosted FaaS providers and can be deployed in the cloud, on-premise in institutional data centers, and on edge devices. To the best of our knowledge, we are the first to enable FL across a large fabric of heterogeneous FaaS providers while providing important features like security and differential privacy. We demonstrate with comprehensive experiments that our system makes it possible to successfully train models with up to 200 client functions on different tasks, and to do so more easily. Furthermore, we demonstrate the practical viability of our approach by comparing it against a traditional FL system and show that it can be cheaper and more resource-efficient.
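To make the FaaS framing concrete, here is a minimal sketch of an FL client written as a stateless cloud-function handler in the AWS-Lambda-style `(event, context)` convention. All helper names and the in-memory store are hypothetical stand-ins, not FedLess's actual API:

```python
import json
import numpy as np

# Hypothetical stand-ins for provider storage I/O; a real deployment would use
# the platform's object store. Nothing here is FedLess's actual API.
_STORE = {"global": np.zeros(10)}

def download_weights(key):
    return _STORE[key]

def upload_update(round_id, weights, num_examples):
    _STORE[f"update/{round_id}"] = (weights, num_examples)

def train_locally(weights, shard):
    # Fake local step: nudge the weights; a real client would run SGD on `shard`.
    return weights + np.random.normal(0, 0.01, weights.shape), 32

def client_handler(event, context):
    """FaaS entry point: one invocation performs one local training round,
    so idle clients cost nothing and scaling is handled by the platform."""
    round_id = event["round_id"]
    global_weights = download_weights(event["weights_key"])
    local_weights, n = train_locally(global_weights, shard=event.get("data_shard"))
    upload_update(round_id, local_weights, n)
    return {"statusCode": 200, "body": json.dumps({"round": round_id, "examples": n})}

# Local smoke test of the handler.
print(client_handler({"round_id": 1, "weights_key": "global"}, None))
```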
Federated Learning (FL) has emerged as a privacy solution for collaborative distributed learning in which clients train AI models directly on their devices instead of sharing data with a centralized (and potentially adversarial) server. Although FL preserves local data privacy to some extent, it has been shown that information about client data can still be inferred from model updates. In recent years, various privacy schemes have been developed to address this privacy leakage, but they typically provide privacy at the expense of model performance or system efficiency, and balancing these trade-offs is a crucial challenge when implementing FL schemes. In this manuscript, we propose a privacy-preserving federated learning (PPFL) framework built on the synergy of matrix encryption and system-immersion tools from control theory. The idea is to immerse the learning algorithm, stochastic gradient descent (SGD), into a higher-dimensional system (the so-called target system) and to design the dynamics of the target system so that the trajectories of the original SGD are immersed in (embedded in) its trajectories, and so that it learns on encrypted data (here we use random matrix encryption). Matrix encryption is realized at the server as a random change of coordinates that maps the original parameters to a higher-dimensional parameter space and forces the target SGD to converge to an encrypted version of the original SGD's optimal solution. The server decrypts the aggregated model using the left inverse of the immersion map. We show that our algorithm provides the same accuracy and convergence rate as standard FL with negligible computational cost, while revealing no information about the clients' data.
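Because FL aggregation is linear, a tall random encoding matrix with a left inverse commutes with averaging, which is why the server can aggregate in the immersed space and decrypt only the result. The following NumPy sketch reduces the idea to static matrix encryption (the paper's target-system dynamics are richer than this):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, clients = 4, 7, 3          # original dim, higher (immersed) dim, #clients

# Random tall encoding matrix M (m > n, full column rank almost surely);
# its left inverse M_plus satisfies M_plus @ M = I.
M = rng.normal(size=(m, n))
M_plus = np.linalg.pinv(M)

thetas = [rng.normal(size=n) for _ in range(clients)]   # plaintext client params
encrypted = [M @ t for t in thetas]                     # updates in immersed space

avg_encrypted = np.mean(encrypted, axis=0)              # server aggregates blindly
decrypted = M_plus @ avg_encrypted                      # decode with left inverse

assert np.allclose(decrypted, np.mean(thetas, axis=0))  # matches plaintext average
print(decrypted)
```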
In recent years the applications of machine learning models have increased rapidly, due to the large amount of available data and technological progress. While some domains like web analysis can benefit from this with only minor restrictions, other fields like medicine with patient data are more strongly regulated. In particular \emph{data privacy} plays an important role, as recently highlighted by the trustworthy AI initiative of the EU and general privacy regulations in legislation. Another major challenge is that the required training \emph{data is} often \emph{distributed} in terms of features or samples and unavailable for classical batch learning approaches. In 2016 Google came up with a framework, called \emph{Federated Learning}, to solve both of these problems. We provide a brief overview of existing methods and applications in the field of vertical and horizontal \emph{Federated Learning}, as well as \emph{Federated Transfer Learning}.
Federated learning allows multiple participants to collaboratively train an efficient model without exposing data privacy. However, this distributed machine-learning training method is vulnerable to attacks from Byzantine clients, which interfere with the training of the global model by modifying the model or uploading fake gradients. In this paper, we propose a novel serverless federated learning framework, Committee Mechanism based Federated Learning (CMFL), which can ensure the robustness of the algorithm with convergence guarantees. In CMFL, a committee system is set up to screen the uploaded local gradients. The committee system selects the local gradients rated by the elected members for the aggregation procedure through the selection strategy, and replaces committee members through the election strategy. Based on the different considerations of model performance and defense, two opposite selection strategies are designed, for accuracy and for robustness respectively. Extensive experiments show that CMFL achieves faster convergence and better accuracy than typical federated learning, while obtaining better robustness than traditional Byzantine-tolerant algorithms, in a decentralized manner. In addition, we theoretically analyze and prove the convergence of CMFL under different election and selection strategies, which is consistent with the experimental results.
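The abstract leaves the rating function abstract. As a sketch of the screening idea, assume committee members score each uploaded gradient by its mean distance to their own gradients, and only the lowest-scoring (most-agreeing) uploads are aggregated; this is an illustrative proxy, not the paper's exact selection or election strategy:

```python
import numpy as np

def committee_filter(uploads, committee, keep):
    """Score each uploaded gradient by its mean distance to committee gradients
    and keep the `keep` lowest-scoring ones for aggregation (a sketch)."""
    scores = {cid: np.mean([np.linalg.norm(g - c) for c in committee])
              for cid, g in uploads.items()}
    accepted = sorted(scores, key=scores.get)[:keep]
    return np.mean([uploads[cid] for cid in accepted], axis=0), accepted

rng = np.random.default_rng(1)
honest = {f"h{i}": rng.normal(0.0, 0.1, 5) for i in range(6)}
byzantine = {"b0": rng.normal(10.0, 0.1, 5)}             # crudely poisoned gradient
committee = [rng.normal(0.0, 0.1, 5) for _ in range(3)]  # elected members' gradients

agg, accepted = committee_filter({**honest, **byzantine}, committee, keep=5)
print(accepted)   # the outlier "b0" is screened out before aggregation
```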
With growing concerns over data privacy and ownership, recent years have witnessed a paradigm shift in machine learning (ML). An emerging paradigm, federated learning (FL), has attracted significant attention and has become a novel design for ML implementations. FL enables ML model training across data silos under the coordination of a central server, without the overhead of sharing raw data. In this paper, we conduct a review of the FL paradigm, in particular comparing its types, network structures, and global model aggregation methods. We then conduct a comprehensive review of FL applications in the energy domain (referred to in this paper as the smart grid). We provide a thematic classification of FL for addressing a variety of energy-related problems, including demand response, identification, prediction, and federated optimization. We describe the taxonomy in detail and conclude with a discussion of various aspects, including the challenges, opportunities, and limitations of its energy-informatics applications, such as energy system modeling and design, privacy, and evolution.
Federated Learning is a distributed machine learning approach which enables model training on a large corpus of decentralized data. We have built a scalable production system for Federated Learning in the domain of mobile devices, based on TensorFlow. In this paper, we describe the resulting high-level design, sketch some of the challenges and their solutions, and touch upon the open problems and future directions.
Cross-device federated learning (FL) is a distributed learning paradigm with several challenges that differentiate it from traditional distributed learning: variability in the system characteristics of each device, and millions of clients coordinating with a central server being primary ones. Most FL systems described in the literature are synchronous: they perform a synchronized aggregation of model updates from individual clients. Scaling synchronous FL is challenging, since increasing the number of clients training in parallel leads to diminishing returns in training speed, analogous to large-batch training. Moreover, stragglers hinder synchronous FL training. In this work, we outline a production asynchronous FL system design. Our work tackles the aforementioned issues, sketches some of the system design challenges and their solutions, and touches upon principles for building a production FL system for millions of clients. Empirically, we demonstrate that asynchronous FL converges faster than synchronous FL when training across nearly one hundred million devices. In particular, in high-concurrency settings, asynchronous FL is 5x faster and has considerably smaller communication overhead than synchronous FL.
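A minimal sketch of the asynchronous pattern the abstract describes: the server applies each update as it arrives, rather than waiting to synchronize a round, and discounts stale contributions. The staleness weight `1/(1+s)` is a common choice in the async-FL literature, not necessarily this system's:

```python
import numpy as np

class AsyncServer:
    """Server that applies each client update on arrival (no synchronized rounds),
    down-weighting stale updates; a sketch of the general async-FL idea only."""
    def __init__(self, dim, lr=1.0):
        self.model = np.zeros(dim)
        self.version = 0          # incremented on every applied update
        self.lr = lr

    def snapshot(self):
        """What a client downloads before training: model and its version."""
        return self.model.copy(), self.version

    def apply_update(self, delta, base_version):
        staleness = self.version - base_version
        weight = 1.0 / (1.0 + staleness)     # polynomial staleness discount
        self.model += self.lr * weight * delta
        self.version += 1

server = AsyncServer(dim=5)
rng = np.random.default_rng(2)
m0, v0 = server.snapshot()           # a slow client grabs an early snapshot
for _ in range(3):                   # three fast clients finish first
    m, v = server.snapshot()
    server.apply_update(rng.normal(0, 0.1, 5), v)
server.apply_update(rng.normal(0, 0.1, 5), v0)   # stale update, weight 1/4
print(server.version, server.model)
```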
Federated learning (FL) enables the building of robust and generalizable AI models by leveraging diverse datasets from multiple collaborators without centralizing the data. We created NVIDIA FLARE as an open-source software development kit (SDK) to make it easier for data scientists to use FL in their research and real-world applications. The SDK includes solutions for state-of-the-art FL algorithms and federated machine learning approaches, which facilitate building workflows for distributed learning across enterprises and enable platform developers to create a secure, privacy-preserving offering for multiparty collaboration utilizing homomorphic encryption or differential privacy. The SDK is a lightweight, flexible, and scalable Python package, and allows researchers to bring their data science workflows implemented in any training libraries (PyTorch, TensorFlow, XGBoost, or even NumPy) and apply them in real-world FL settings. This paper introduces the key design principles of FLARE and illustrates some use cases (e.g., COVID analysis) with customizable FL workflows that implement different privacy-preserving algorithms. Code is available at https://github.com/NVIDIA/NVFlare.
In this paper, we increase the availability and integration of devices in the learning process to enhance the convergence of federated learning (FL) models. To address the issue of having all the data in one location, federated learning, which maintains the ability to learn over decentralized data sets, combines privacy and technology. Until the model converges, the server combines the updated weights obtained from each dataset over a number of rounds. The majority of the literature suggested client selection techniques to accelerate convergence and boost accuracy. However, none of the existing proposals have focused on the flexibility to deploy and select clients as needed, wherever and whenever that may be. Due to extremely dynamic surroundings, some devices are actually not available to serve as clients in FL, which affects the availability of data for learning and the applicability of existing client selection solutions. In this paper, we address the aforementioned limitations by introducing On-Demand-FL, a client deployment approach for FL, offering more volume and heterogeneity of data in the learning process. We make use of containerization technology such as Docker to build efficient environments using IoT and mobile devices serving as volunteers, and Kubernetes is used for orchestration. A genetic algorithm (GA) is used to solve the multi-objective optimization problem due to its evolutionary strategy. The experiments performed using the Mobile Data Challenge (MDC) dataset and the Localfed framework illustrate the relevance of the proposed approach and the efficiency of the on-the-fly deployment of clients whenever and wherever needed, with fewer discarded rounds and more available data.
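As a sketch of the GA component, assume each chromosome is a binary vector over candidate volunteer devices and the two objectives (data volume gained vs. deployment cost) are scalarized; the operators below are textbook one-point crossover and bit-flip mutation, not necessarily the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 12                                 # candidate volunteer devices
data = rng.integers(50, 500, N)        # samples each device could contribute
cost = rng.uniform(0.1, 1.0, N)        # deployment cost per device (energy, etc.)

def fitness(mask):
    """Scalarized two-objective fitness: maximize data volume, minimize cost."""
    return mask @ data - 200.0 * (mask @ cost)

def evolve(pop_size=30, generations=40, mut_p=0.05):
    pop = rng.integers(0, 2, (pop_size, N))                  # random deployments
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep the fittest half
        cut = rng.integers(1, N, pop_size)                   # one-point crossover
        mothers = parents[rng.integers(len(parents), size=pop_size)]
        fathers = parents[rng.integers(len(parents), size=pop_size)]
        pop = np.where(np.arange(N) < cut[:, None], mothers, fathers)
        pop ^= (rng.random(pop.shape) < mut_p).astype(pop.dtype)  # bit-flip mutation
    return max(pop, key=fitness)

best = evolve()
print("deploy devices:", np.flatnonzero(best))
```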
Federated learning is a privacy-preserving technique based on keeping data decentralized, used to perform machine or deep learning in a secure way. In this paper, we present theoretical aspects of federated learning along with use cases in which the number of clients varies. Specifically, a use case of medical image analysis is presented, using chest X-ray images obtained from an open data repository. In addition to the privacy-related advantages, improvements in predictions (in terms of accuracy and area under the curve) and reductions in execution time with respect to the centralized approach are studied. Different clients are simulated from the training data and selected in an unbalanced manner, i.e., they do not all have the same amount of data. The results obtained with between three and ten clients are compared with the centralized case. Two approaches with intermittent clients are analyzed, since in a real scenario some clients may leave the training and new ones may join. The evolution of the results is shown as a function of the number of clients among which the original data is divided, in terms of accuracy, area under the curve, and execution time. Finally, improvements and future work in this field are proposed.
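One common way to simulate the unbalanced clients described above is to draw shard proportions from a Dirichlet distribution; a small sketch (the paper's exact split procedure may differ):

```python
import numpy as np

def unbalanced_split(num_samples, num_clients, alpha=0.5, seed=0):
    """Partition sample indices into unequal client shards by drawing shard
    proportions from a Dirichlet distribution (an illustrative split)."""
    rng = np.random.default_rng(seed)
    proportions = rng.dirichlet(alpha * np.ones(num_clients))
    cuts = (np.cumsum(proportions)[:-1] * num_samples).astype(int)
    return np.split(rng.permutation(num_samples), cuts)

shards = unbalanced_split(num_samples=1000, num_clients=5)
print([len(s) for s in shards])   # markedly different shard sizes, summing to 1000
```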
Motivated by the heterogeneous nature of the devices participating in large-scale federated learning (FL) optimization, we focus on a server-less FL solution empowered by blockchain (BC) technology. In contrast to the predominantly adopted FL approaches, which assume synchronous operation, we advocate an asynchronous method whereby model aggregation is performed as clients submit their local updates. The asynchronous setting fits well with the spirit of federated optimization in practical large-scale settings with heterogeneous clients, and may thus lead to efficiency gains in terms of communication overhead and idle periods. To evaluate the learning completion delay of BC-enabled FL, we provide an analytical model based on batch-service queueing theory. Furthermore, we provide simulation results to assess the performance of both synchronous and asynchronous mechanisms. Important aspects involved in BC-enabled FL, such as network size, link capacity, and user requirements, are put together and analyzed. As our results show, the synchronous setting leads to higher prediction accuracy than the asynchronous case. Nevertheless, asynchronous federated optimization provides much lower latency in many cases, making it an appealing FL solution when dealing with large datasets, tight timing constraints (e.g., near-real-time applications), or highly varying training data.
Creating high-performance generalizable deep neural networks for phytoplankton monitoring requires utilizing large-scale data coming from diverse global water sources. A major challenge to training such networks lies in data privacy, where data collected at different facilities are often restricted from being transferred to a centralized location. A promising approach to overcome this challenge is federated learning, where training is done at site level on local data, and only the model parameters are exchanged over the network to generate a global model. In this study, we explore the feasibility of leveraging federated learning for privacy-preserving training of deep neural networks for phytoplankton classification. More specifically, we simulate two different federated learning frameworks, federated learning (FL) and mutually exclusive FL (ME-FL), and compare their performance to a traditional centralized learning (CL) framework. Experimental results from this study demonstrate the feasibility and potential of federated learning for phytoplankton monitoring.
Federated learning deviates from the norm of "send the data to the model" toward "send the model to the data". When used within an edge ecosystem, many heterogeneous edge devices, collecting data through different means and connected through different network channels, participate in the training process. The failure of edge devices in such an ecosystem, due to device faults or network issues, is highly likely. In this paper, we first analyze the impact of the number of edge devices on an FL model and provide a strategy for selecting the optimal devices that contribute to the model. We then observe the impact of failures of the selected devices and provide mitigation strategies to ensure a robust federated learning technique.
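As a sketch of one plausible mitigation loop, assume the server tracks an availability estimate per device, selects the most reliable ones, and backfills from the remaining pool when selected devices drop out mid-round. The policy and the simulated dropouts below are illustrative, not the paper's exact strategy:

```python
import random

def run_round(candidates, k, min_updates, max_retries=3):
    """One FL round with dropout mitigation: sample the k most reliable devices,
    and when too few report back, backfill from the remaining pool and retry.

    `candidates` maps device id -> estimated availability probability."""
    pool = dict(candidates)
    updates = []
    selected = sorted(pool, key=pool.get, reverse=True)[:k]
    for _ in range(max_retries + 1):
        survivors = [d for d in selected if random.random() < pool[d]]  # simulate
        updates.extend(survivors)
        for d in selected:
            pool.pop(d, None)            # a device is tried at most once per round
        need = min_updates - len(updates)
        if need <= 0 or not pool:
            break
        selected = sorted(pool, key=pool.get, reverse=True)[:need]      # backfill
    return updates

random.seed(4)
devices = {f"d{i}": random.uniform(0.3, 0.95) for i in range(20)}
print(run_round(devices, k=8, min_updates=6))   # ids that delivered an update
```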
In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.