Many assumptions in the federated learning literature represent a best-case scenario that cannot be met in most practical applications. The asynchronous setting reflects the realistic environment in which federated learning methods must be able to operate reliably. Besides the different amounts of non-IID data held by the participants, the asynchronous setting models heterogeneous client participation due to available computational power and battery constraints, and also accounts for delayed communication between clients and the server. To reduce the communication overhead associated with asynchronous online federated learning (ASO-Fed), we use the principles of partial-sharing-based communication. In this way, we reduce the communication load of the participants and, therefore, render participation in the learning task more accessible. We prove the convergence of the proposed ASO-Fed and provide simulations to further analyze its behavior. The simulations show that, in the asynchronous setting, it is possible to achieve the same convergence as federated stochastic gradient descent (Online-FedSGD) while reducing communication by a factor of ten.
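The abstract does not spell out the partial-sharing mechanism; the minimal sketch below (Python/NumPy, with an assumed round-robin block schedule and a hypothetical `partial_share_round` helper) only conveys the general idea of exchanging a rotating subset of model coordinates instead of the full model each round.

```python
import numpy as np

def partial_share_round(global_w, client_w, round_idx, num_blocks=10):
    """One hypothetical partial-sharing exchange: the client uploads only one
    block of its local parameters, chosen round-robin, and the server merges
    that block into the global model (assumed scheme, not ASO-Fed's exact one)."""
    coords = np.array_split(np.arange(global_w.size), num_blocks)
    block = coords[round_idx % num_blocks]        # coordinates shared this round
    new_global = global_w.copy()
    new_global[block] = client_w[block]           # only len(block) values are sent
    return new_global

# Toy usage: a 20-dimensional model; round 3 shares the fourth coordinate block.
g = np.zeros(20)
c = np.ones(20)
print(partial_share_round(g, c, round_idx=3))
```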
Federated learning (FL) enables multiple devices to collaboratively learn a global model without sharing their personal data. In real-world applications, the different parties may have heterogeneous data distributions and limited communication bandwidth. In this paper, we are interested in improving the communication efficiency of FL systems. We investigate and design device selection strategies based on the importance of the gradient norms. In particular, our approach consists of selecting, in each communication round, the devices with the highest norms of gradient values. We study the convergence and performance of this selection technique and compare it to existing ones. We perform several experiments in non-IID settings. The results show the convergence of our method with considerably higher test accuracy compared to random selection.
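As a concrete illustration of the selection rule described above (choosing, each round, the devices whose local gradients have the largest norm), here is a minimal sketch; the function name and the toy data are assumptions, not code from the paper.

```python
import numpy as np

def select_by_gradient_norm(client_grads, k):
    """Return the ids of the k clients whose local gradients have the largest L2 norm."""
    norms = {cid: np.linalg.norm(g) for cid, g in client_grads.items()}
    return sorted(norms, key=norms.get, reverse=True)[:k]

# Toy usage: pick the 2 (out of 3) clients with the largest gradient norm.
grads = {0: np.array([0.1, 0.2]), 1: np.array([1.0, -2.0]), 2: np.array([0.5, 0.5])}
print(select_by_gradient_norm(grads, k=2))   # -> [1, 2]
```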
Federated learning (FL) has emerged as a privacy solution for collaborative distributed learning where clients train AI models directly on their devices instead of sharing their data with a centralized (potentially adversarial) server. Although FL preserves local data privacy to some extent, it has been shown that information about clients' data can still be inferred from model updates. In recent years, various privacy-preserving schemes have been developed to address this privacy leakage. However, they often provide privacy at the expense of model performance or system efficiency, and balancing these trade-offs is a crucial challenge when implementing FL schemes. In this manuscript, we propose a privacy-preserving federated learning (PPFL) framework built on the synergy of matrix encryption and system immersion tools from control theory. The idea is to immerse the learning algorithm, stochastic gradient descent (SGD), into a higher-dimensional system (the so-called target system) and design the dynamics of the target system so that the trajectories of the original SGD are immersed/embedded in its trajectories, and it learns on encrypted data (here we use random matrix encryption). Matrix encryption is reformulated at the server as a random change of coordinates that maps the original parameters to a higher-dimensional parameter space and enforces that the target SGD converges to an encrypted version of the original SGD optimal solution. The server decrypts the aggregated model using the left inverse of the immersion map. We show that our algorithm provides the same accuracy and convergence rate as standard FL with negligible computational cost, while revealing no information about clients' data.
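To make the change-of-coordinates idea concrete, the sketch below shows only the bare linear-algebra skeleton: a random full-column-rank immersion map, encryption as multiplication by that map, and decryption via its left inverse. The dimensions and the pseudoinverse-based construction are illustrative assumptions; the paper's actual target-system design is richer.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 3, 5                              # original / immersed parameter dimensions (illustrative)

M = rng.standard_normal((D, d))          # random immersion map (full column rank with prob. 1)
M_left_inv = np.linalg.pinv(M)           # left inverse: M_left_inv @ M == I

theta = np.array([0.5, -1.0, 2.0])       # original model parameters
theta_enc = M @ theta                    # "encrypted" parameters the clients would work with

# The server recovers the original-space parameters with the left inverse.
theta_dec = M_left_inv @ theta_enc
assert np.allclose(theta_dec, theta)
```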
Federated learning (FL) is a method to train models with distributed data from numerous participants such as IoT devices. It inherently assumes a uniform capacity among participants. However, participants have diverse computational resources in practice due to different conditions such as different energy budgets or executing parallel unrelated tasks. It is necessary to reduce the computation overhead for participants with limited computational resources; otherwise they would be unable to finish the full training process. To address this computation heterogeneity, in this paper we propose a strategy for estimating local models without computationally intensive iterations. Based on it, we propose Computationally Customized Federated Learning (CCFL), which allows each participant to determine whether to perform conventional local training or model estimation in each round based on its current computational resources. Both theoretical analysis and exhaustive experiments indicate that CCFL has the same convergence rate as FedAvg without resource constraints. Furthermore, CCFL can be viewed as a computation-efficient extension of FedAvg that retains model performance while considerably reducing computation overhead.
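The per-round decision in CCFL can be pictured with the sketch below; the resource test and especially the cheap estimator (here just reapplying the previous update) are placeholders for illustration, not the estimation rule proposed in the paper.

```python
def client_round(global_w, prev_update, budget, train_cost, local_train_fn):
    """Hypothetical per-round client logic: do conventional local training when
    the compute budget allows it, otherwise produce a cheap model estimate.
    Arguments are NumPy arrays except budget/train_cost (floats) and
    local_train_fn (a callable performing local SGD)."""
    if budget >= train_cost:
        new_w = local_train_fn(global_w)     # conventional local training
    else:
        new_w = global_w + prev_update       # placeholder estimator, not the paper's
    return new_w, new_w - global_w           # new model and the update to upload
```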
Federated learning (FL) and split learning (SL) are two emerging collaborative learning methods that may greatly facilitate ubiquitous intelligence in the Internet of Things (IoT). Federated learning enables machine learning (ML) models locally trained using private data to be aggregated into a global model. Split learning allows different portions of an ML model to be collaboratively trained on different workers in a learning framework. Federated learning and split learning, each with unique advantages and respective limitations, may complement each other toward ubiquitous intelligence in the IoT. Therefore, the combination of federated learning and split learning has recently become an active research area attracting extensive interest. In this article, we review the latest developments in federated learning and split learning and present a survey of the state-of-the-art technologies for combining these two learning methods in an edge-computing-based IoT environment. We also identify some open problems and discuss possible directions for future research in this area, with the hope of further arousing the research community's interest in this emerging field.
Federated learning (FL) has been proposed as a privacy-preserving approach in distributed machine learning. A federated learning architecture consists of a central server and a number of clients that have access to private, potentially sensitive data. Clients are able to keep their data in their local machines and only share their locally trained model's parameters with a central server that manages the collaborative learning process. FL has delivered promising results in real-life scenarios, such as healthcare, energy, and finance. However, when the number of participating clients is large, the overhead of managing the clients slows down the learning. Thus, client selection has been introduced as a strategy to limit the number of communicating parties at every step of the process. Since the early naïve random selection of clients, several client selection methods have been proposed in the literature. Unfortunately, given that this is an emergent field, there is a lack of a taxonomy of client selection methods, making it hard to compare approaches. In this paper, we propose a taxonomy of client selection in Federated Learning that enables us to shed light on current progress in the field and identify potential areas of future research in this promising area of machine learning.
Distributed deep learning frameworks such as federated learning (FL) and its variants are enabling personalized experiences across a wide range of web clients and mobile/IoT devices. However, FL-based frameworks are constrained by the computational resources of clients due to the exploding growth of model parameters (e.g., billion-parameter models). Split learning (SL), a recent framework, reduces the client-side computation load by splitting model training between the client and the server. This flexibility is extremely useful for low-compute setups but is often achieved at the cost of increased bandwidth consumption and may result in sub-optimal convergence, especially when client data is heterogeneous. In this work, we introduce AdaSplit, which enables SL to scale efficiently to low-resource scenarios by reducing bandwidth consumption and improving performance across heterogeneous clients. To capture and benchmark this multi-dimensional nature of distributed deep learning, we also introduce the C3-Score, a metric to evaluate performance under resource budgets. We validate the effectiveness of AdaSplit under limited resources through extensive experimental comparisons with strong federated and split learning baselines. We also present a sensitivity analysis of key design choices in AdaSplit, which validates the ability of AdaSplit to provide adaptive trade-offs across variable resource budgets.
Federated learning (FL) is an effective distributed machine learning paradigm that employs private datasets in a privacy-preserving manner. The main challenges of FL are that end devices usually possess diverse computation and communication capabilities and that their training data are not independent and identically distributed (non-IID). Due to the limited communication bandwidth and unstable availability of such devices in a mobile network, only a subset of end devices (also referred to as participants or clients) can be selected in each round. Therefore, it is of great importance to utilize an efficient participant selection scheme to maximize the performance of FL, including the accuracy of the final model and the training time. In this paper, we provide a review of participant selection techniques for FL. First, we introduce FL and highlight the main challenges during participant selection. Then, we review existing studies and categorize them based on their solutions. Finally, based on our analysis of the state of the art in this topic area, we provide some future directions for participant selection in FL.
Scalability and privacy are two critical concerns for cross-device federated learning (FL) systems. In this work, we identify that synchronous aggregation of client updates in FL does not scale efficiently beyond a few hundred clients training in parallel. It leads to diminishing returns in model performance and training speed, analogous to large-batch training. On the other hand, asynchronous aggregation of client updates in FL (i.e., asynchronous FL) alleviates the scalability issue. However, aggregating individual client updates is incompatible with secure aggregation, which could result in an undesirable level of privacy for the system. To address these concerns, we propose a novel buffered asynchronous aggregation method, FedBuff, that is agnostic to the choice of optimizer and combines the best properties of synchronous and asynchronous FL. We empirically demonstrate that FedBuff is more efficient than synchronous FL and up to 3.3x more efficient than asynchronous FL, while being compatible with privacy-preserving technologies such as secure aggregation and differential privacy. We provide theoretical convergence guarantees in a smooth non-convex setting. Finally, we show that under differentially private training, FedBuff can outperform FedAvgM in low-privacy settings and achieve the same utility in higher-privacy settings.
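The buffered-aggregation idea lends itself to a short sketch: client updates are queued, and the server only takes a step once K updates have arrived. The buffer size and server learning rate below are illustrative knobs, not the paper's tuned values.

```python
import numpy as np

class BufferedAggregator:
    """Minimal buffered asynchronous aggregation: apply one server update per K
    received client updates (K = buffer_size), regardless of which clients sent them."""

    def __init__(self, init_model, buffer_size=10, server_lr=1.0):
        self.model = np.asarray(init_model, dtype=float)
        self.buffer_size = buffer_size
        self.server_lr = server_lr
        self._buffer = []

    def receive(self, client_update):
        self._buffer.append(np.asarray(client_update, dtype=float))
        if len(self._buffer) >= self.buffer_size:
            # Average the buffered updates and take one server step.
            self.model = self.model + self.server_lr * np.mean(self._buffer, axis=0)
            self._buffer.clear()
        return self.model
```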
Federated learning (FL) allows multiple clients cooperatively train models without disclosing local data. However, the existing works fail to address all these practical concerns in FL: limited communication resources, dynamic network conditions and heterogeneous client properties, which slow down the convergence of FL. To tackle the above challenges, we propose a heterogeneity-aware FL framework, called FedCG, with adaptive client selection and gradient compression. Specifically, the parameter server (PS) selects a representative client subset considering statistical heterogeneity and sends the global model to them. After local training, these selected clients upload compressed model updates matching their capabilities to the PS for aggregation, which significantly alleviates the communication load and mitigates the straggler effect. We theoretically analyze the impact of both client selection and gradient compression on convergence performance. Guided by the derived convergence rate, we develop an iteration-based algorithm to jointly optimize client selection and compression ratio decision using submodular maximization and linear programming. Extensive experiments on both real-world prototypes and simulations show that FedCG can provide up to 5.3$\times$ speedup compared to other methods.
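The abstract leaves the compressor unspecified beyond "compressed model updates matching their capabilities"; one common choice consistent with that description is top-k sparsification, sketched below with an illustrative per-client compression ratio.

```python
import numpy as np

def topk_compress(update, ratio):
    """Keep only the largest-magnitude fraction `ratio` of the update's entries
    and zero out the rest (one standard gradient-compression choice)."""
    flat = update.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]      # indices of the k largest-magnitude entries
    out = np.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(update.shape)

# A capable client might use ratio=0.5, a constrained one ratio=0.05.
print(topk_compress(np.array([0.1, -3.0, 0.02, 1.5]), ratio=0.5))   # [ 0.  -3.   0.   1.5]
```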
我们考虑开放的联合学习(FL)系统,客户可以在FL过程中加入和/或离开系统。鉴于当前客户端数量的差异,在开放系统中不能保证与固定模型的收敛性。取而代之的是,我们求助于一个新的性能指标,该指标称我们的开放式FL系统的稳定性为量,该指标量化了开放系统中学习模型的幅度。在假设本地客户端的功能强烈凸出和平滑的假设下,我们从理论上量化了两种FL算法的稳定性半径,即本地SGD和本地ADAM。我们观察到此半径依赖于几个关键参数,包括功能条件号以及随机梯度的方差。通过对合成和现实世界基准数据集的数值模拟,我们的理论结果得到了进一步验证。
This paper investigates the robustness of over-the-air federated learning to Byzantine attacks. The simple averaging of model updates via over-the-air computation makes the learning task vulnerable to random or intended modifications of the local model updates of some malicious clients. We propose a robust transmission and aggregation framework against such attacks while preserving the benefits of over-the-air computation for federated learning. In the proposed robust federated learning, the participating clients are randomly divided into groups and a transmission time slot is allocated to each group. The parameter server aggregates the results of the different groups using a robust aggregation technique and conveys the result to the clients for another training round. We also analyze the convergence of the proposed algorithm. Numerical simulations confirm the robustness of the proposed approach to Byzantine attacks.
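A minimal way to picture the grouped robust aggregation is sketched below: each group's over-the-air sum yields one per-group average, and the server combines the per-group averages with a coordinate-wise trimmed mean. Trimmed mean is used here only as a stand-in robust aggregator; the paper's exact choice may differ.

```python
import numpy as np

def grouped_robust_aggregate(group_sums, group_sizes, trim=1):
    """Combine per-group averaged updates with a coordinate-wise trimmed mean,
    discarding the `trim` smallest and largest values per coordinate."""
    group_avgs = np.array([s / n for s, n in zip(group_sums, group_sizes)])
    sorted_avgs = np.sort(group_avgs, axis=0)
    return sorted_avgs[trim:len(group_sizes) - trim].mean(axis=0)

# Toy usage: 4 groups, one of them corrupted by a Byzantine client.
sums = [np.array([1.0, 2.0]), np.array([1.2, 1.8]),
        np.array([0.9, 2.1]), np.array([50.0, -50.0])]
print(grouped_robust_aggregate(sums, group_sizes=[1, 1, 1, 1], trim=1))   # ~[1.1, 1.9]
```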
Federated learning (FL) is a collaborative machine learning framework that requires different clients (e.g., Internet of Things devices) to participate in the machine learning model training process by training and uploading their local models to an FL server in each global iteration. Upon receiving the local models from all the clients, the FL server generates a global model by aggregating the received local models. This traditional FL process may suffer from the straggler problem in heterogeneous client settings, where the FL server has to wait for slow clients to upload their local models in each global iteration, thus increasing the overall training time. One of the solutions is to set up a deadline and only the clients that can upload their local models before the deadline would be selected in the FL process. This solution may lead to a slow convergence rate and global model overfitting issues due to the limited client selection. In this paper, we propose the Latency awarE Semi-synchronous client Selection and mOdel aggregation for federated learNing (LESSON) method that allows all the clients to participate in the whole FL process but with different frequencies. That is, faster clients would be scheduled to upload their models more frequently than slow clients, thus resolving the straggler problem and accelerating the convergence speed, while avoiding model overfitting. Also, LESSON is capable of adjusting the tradeoff between the model accuracy and convergence rate by varying the deadline. Extensive simulations have been conducted to compare the performance of LESSON with the other two baseline methods, i.e., FedAvg and FedCS. The simulation results demonstrate that LESSON achieves faster convergence speed than FedAvg and FedCS, and higher model accuracy than FedCS.
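The tiering rule below is only one way to realize "faster clients upload more frequently than slow clients"; the cadence formula is an assumption for illustration, not the scheduling rule used in LESSON.

```python
import math

def upload_schedule(client_latencies, deadline):
    """Assign each client an upload period (in global rounds): a client whose
    round latency is t uploads every ceil(t / deadline) rounds, so slow clients
    still participate, just less often."""
    return {cid: max(1, math.ceil(t / deadline)) for cid, t in client_latencies.items()}

# With a 10 s deadline, a 4 s client uploads every round, a 25 s client every 3rd round.
print(upload_schedule({"fast": 4.0, "slow": 25.0}, deadline=10.0))   # {'fast': 1, 'slow': 3}
```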
Federated learning (FL) is a new artificial intelligence concept that enables Internet of Things (IoT) devices to learn a collaborative model without sending raw data to centralized nodes for processing. Despite numerous advantages, the low computational resources at IoT devices and the high communication costs of exchanging model parameters make applications of FL in massive IoT networks very limited. In this work, we develop a novel compression scheme for FL in very large IoT networks, called high-compression federated learning (HCFL). HCFL can reduce the data load of FL processes without changing their structure and hyperparameters. In this way, we not only can significantly reduce communication costs, but also make intensive learning processes more adaptable to IoT devices with low computational resources. Furthermore, we investigate the relationship between the number of IoT devices and the convergence level of the FL model, thereby better assessing the quality of the FL process. We demonstrate our HCFL scheme in both simulations and mathematical analyses. Our theoretical study can be used as a minimum level of satisfaction, proving that the FL process can achieve good performance when a determined configuration is met. Therefore, we show that HCFL is applicable to any FL-integrated network with numerous IoT devices.
We envision a mobile edge computing (MEC) framework for machine learning (ML) technologies, which leverages distributed client data and computation resources for training high-performance ML models while preserving client privacy. Toward this future goal, this work aims to extend Federated Learning (FL), a decentralized learning framework that enables privacy-preserving training of models, to work with heterogeneous clients in a practical cellular network. The FL protocol iteratively asks random clients to download a trainable model from a server, update it with their own data, and upload the updated model to the server, while asking the server to aggregate multiple client updates to further improve the model. While clients in this protocol are free from disclosing their own private data, the overall training process can become inefficient when some clients have limited computational resources (i.e., requiring longer update time) or are under poor wireless channel conditions (longer upload time). Our new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions. Specifically, FedCS solves a client selection problem with resource constraints, which allows the server to aggregate as many client updates as possible and to accelerate performance improvement in ML models. We conducted an experimental evaluation using publicly-available large-scale image datasets to train deep neural networks on MEC environment simulations. The experimental results show that FedCS is able to complete its training process in a significantly shorter time compared to the original FL protocol.
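A greatly simplified version of the deadline-constrained selection step might look as follows; the additive time model and greedy rule convey only the flavor of FedCS, not the paper's exact scheduling formulation.

```python
def greedy_select(client_times, deadline):
    """Greedily add the client with the smallest estimated update+upload time
    while the accumulated round time stays within the deadline."""
    remaining = dict(client_times)          # client id -> estimated time in seconds
    selected, elapsed = [], 0.0
    while remaining:
        cid = min(remaining, key=remaining.get)
        if elapsed + remaining[cid] > deadline:
            break
        elapsed += remaining.pop(cid)
        selected.append(cid)
    return selected

print(greedy_select({"a": 3.0, "b": 8.0, "c": 2.0}, deadline=10.0))   # ['c', 'a']
```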
Federated learning (FL) enables the distribution of machine learning workloads from the cloud to resource-limited edge devices. Unfortunately, current deep networks are not only too compute-heavy for inference and training on edge devices, but also too large for communicating updates over bandwidth-constrained networks. In this paper, we develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST), by which complex neural networks can be deployed and trained with substantially improved efficiency in both on-device computation and in-network communication. At the core of FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network. With this scheme, "two birds are killed with one stone": instead of full models, each client performs efficient training of its own sparse network, and only sparse networks are transmitted between devices and the cloud. Furthermore, our results reveal that the dynamic sparsity during FL training more flexibly accommodates local heterogeneity than a fixed, shared sparse mask. Moreover, dynamic sparsity naturally introduces an "in-time self-ensembling" effect into the training dynamics, improving the FL performance even over dense training. In a realistic and challenging non-i.i.d. FL setting, FedDST consistently outperforms competing algorithms in our experiments: for instance, on non-IID CIFAR-10 it gains an impressive accuracy advantage over FedAvgM when given the same upload data cap, and the accuracy gap remains 3% even when FedAvgM is given a 2x upload data cap, further demonstrating the efficacy of FedDST. Code is available at: https://github.com/bibikar/feddst.
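The pruning half of a prune-and-regrow step can be sketched as a magnitude mask; FedDST's full schedule also regrows connections and adjusts the mask over rounds, so the snippet below (plain NumPy, illustrative sparsity level) shows only the basic masking operation.

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Binary mask keeping the (1 - sparsity) fraction of largest-magnitude weights."""
    k = max(1, int(round((1.0 - sparsity) * weights.size)))   # number of weights to keep
    threshold = np.sort(np.abs(weights), axis=None)[-k]
    return (np.abs(weights) >= threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
mask = magnitude_mask(w, sparsity=0.75)     # keep roughly 25% of the weights
sparse_update = w * mask                    # only the masked entries would be communicated
```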
In low Earth orbit (LEO) mega constellations, there are relevant use cases, such as inference based on satellite imaging, in which a large number of satellites collaboratively train a machine learning model without sharing their local datasets. To address this problem, we propose a new set of algorithms based on federated learning (FL), including a novel asynchronous FL procedure based on FedAvg that exhibits better robustness to heterogeneous scenarios than the state of the art. Extensive numerical evaluations based on the MNIST and CIFAR-10 datasets highlight the fast convergence speed and excellent asymptotic test accuracy of the proposed methods.
In federated learning (FL) for click-through rate (CTR) prediction, users' data is not shared in order to protect privacy. Learning is performed by training locally on client devices and communicating only model changes to the server. There are two main challenges: (i) client heterogeneity, which makes FL algorithms that aggregate client model updates by weighted averaging progress slowly and yield unsatisfactory learning results; and (ii) the difficulty of tuning the server learning rate via trial and error, due to the large amount of computation time and resources required for each experiment. To address these challenges, we propose a simple online meta-learning method to learn a strategy for aggregating model updates, which adapts the importance of clients based on their attributes and adjusts the step sizes of updates. We conduct extensive evaluations on public datasets. Our method significantly outperforms state-of-the-art approaches in both the speed of convergence and the quality of the final learning results.
Large-scale neural networks possess considerable expressiveness. They are well suited for complex learning tasks in industrial applications. However, under the current federated learning (FL) paradigm, large models pose significant challenges for training. Existing approaches for efficient FL training often leverage model parameter dropout. However, manipulating individual model parameters is not only inefficient at meaningfully reducing the communication overhead when training large-scale FL models, but may also be detrimental to the scaling efforts and model performance, as shown by recent research. To address these issues, we propose the Federated Opportunistic Block Dropout (FedOBD) approach. The key novelty is that it decomposes large-scale models into semantic blocks so that FL participants can opportunistically upload quantized blocks, which are deemed to be significant toward training the model, to the FL server for aggregation. Extensive experiments evaluating FedOBD against five state-of-the-art approaches on multiple real-world datasets show that it reduces the overall communication overhead by more than 70% compared to the best-performing baseline approach, while achieving the highest test accuracy. To the best of our knowledge, FedOBD is the first approach to perform dropout on FL models at the block level rather than at the individual parameter level.
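To illustrate block-level (rather than parameter-level) dropout, the sketch below scores each semantic block by how much it changed during local training, keeps the top fraction, and quantizes the kept blocks to 8 bits. Both the scoring rule and the uniform quantizer are assumptions for illustration, not FedOBD's exact mechanisms.

```python
import numpy as np

def opportunistic_block_upload(old_blocks, new_blocks, keep_ratio=0.5):
    """Select the most-changed blocks and return them 8-bit quantized,
    as a dict: block name -> (int8 values, float scale)."""
    scores = {name: np.linalg.norm(new_blocks[name] - old_blocks[name])
              for name in new_blocks}
    keep = max(1, int(keep_ratio * len(scores)))
    chosen = sorted(scores, key=scores.get, reverse=True)[:keep]
    payload = {}
    for name in chosen:
        block = new_blocks[name]
        scale = max(np.abs(block).max() / 127.0, 1e-12)   # crude uniform quantization scale
        payload[name] = (np.round(block / scale).astype(np.int8), scale)
    return payload
```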
Federated learning (FL) enables distributed model training from local data collected by users. In distributed systems with constrained resources and potentially high dynamics, e.g., mobile edge networks, the efficiency of FL is an important problem. Existing works have separately considered different configurations to make FL more efficient, such as infrequent transmission of model updates, client subsampling, and compression of update vectors. However, an important open problem is how to jointly apply and tune these control knobs in a single FL algorithm, to achieve the best performance by allowing a high degree of freedom in control decisions. In this paper, we address this problem and propose FlexFL - an FL algorithm with multiple options that can be adjusted flexibly. Our FlexFL algorithm allows both arbitrary rates of local computation at clients and arbitrary amounts of communication between clients and the server, making both the computation and communication resource consumption adjustable. We prove a convergence upper bound of this algorithm. Based on this result, we further propose a stochastic optimization formulation and algorithm to determine the control decisions that (approximately) minimize the convergence bound, while conforming to constraints related to resource consumption. The advantage of our approach is also verified using experiments.