We propose a novel federated learning method for the distributed training of neural network models, in which the server orchestrates cooperation between a subset of randomly chosen devices in each round. We view the federated learning problem primarily from a communication perspective and allow more device-level computation to save on transmission costs. We point out a fundamental dilemma: the minima of the local, device-level empirical losses are inconsistent with the minima of the global empirical loss. Unlike recent prior work, which either attempts inexact minimization or uses devices to parallelize gradient computation, we propose a dynamic regularizer for each device at each round, so that in the limit the global and device solutions are aligned. We demonstrate, both through empirical results on real and synthetic data and through analysis, that our scheme leads to efficient training in both convex and non-convex settings, while being fully agnostic to device heterogeneity and robust to large numbers of devices, partial participation, and unbalanced data.
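One way to instantiate such a dynamic regularizer (notation introduced here, not taken from the abstract) is the device-level subproblem

\[
\theta_k^{t} \;\in\; \arg\min_{\theta}\; L_k(\theta)\;-\;\big\langle \nabla L_k(\theta_k^{t-1}),\,\theta\big\rangle\;+\;\frac{\alpha}{2}\,\big\lVert \theta-\theta^{\,t-1}\big\rVert^2 ,
\]

where L_k is device k's empirical loss, θ^{t-1} is the current server model, and α controls the strength of the regularizer; the linear and proximal terms change from round to round so that, in the limit, device-level stationary points coincide with those of the global loss.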
The practice of applying several local updates before aggregating across clients has been shown empirically to be a successful approach to overcoming the communication bottleneck in federated learning (FL). In this work, we propose a general recipe, FedShuffle, that better utilizes local updates in FL, particularly under heterogeneity. Unlike many prior works, FedShuffle does not assume any uniformity in the number of updates per device. Our FedShuffle recipe comprises four simple ingredients: 1) local shuffling of the data, 2) adjustment of the local learning rates, 3) update weighting, and 4) momentum variance reduction (Cutkosky and Orabona, 2019). We present a comprehensive theoretical analysis of FedShuffle and show, both theoretically and empirically, that our approach does not suffer from the objective-function mismatch present in FL methods that assume homogeneous updates in heterogeneous FL settings, such as FedAvg (McMahan et al., 2017). In addition, by combining the ingredients above, FedShuffle improves upon FedNova (Wang et al., 2020), which was previously proposed to address this mismatch. We also show that, under a Hessian similarity assumption, FedShuffle with momentum variance reduction can improve upon non-local methods. Finally, through experiments on synthetic and real-world datasets, we illustrate how each of the four ingredients used in FedShuffle helps improve the use of local updates in FL.
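A minimal sketch of how three of these ingredients interact when clients run different numbers of local steps is given below (plain SGD, a hypothetical grad_fn oracle, and momentum variance reduction omitted for brevity; this illustrates the recipe, it is not the authors' pseudocode):

```python
import numpy as np

def fedshuffle_round(global_w, client_data, client_steps, base_lr, weights, grad_fn):
    """One simplified round: each client shuffles its data locally, runs its own number
    of SGD steps with a rescaled learning rate, and the server aggregates the
    reweighted model updates."""
    agg_update = np.zeros_like(global_w)
    for k, (data, steps) in enumerate(zip(client_data, client_steps)):
        rng = np.random.default_rng(seed=k)
        rng.shuffle(data)                           # ingredient 1: local shuffling
        lr_k = base_lr / steps                      # ingredient 2: adjust the local learning rate
        w = global_w.copy()
        for i in range(steps):
            w -= lr_k * grad_fn(w, data[i % len(data)])
        agg_update += weights[k] * (w - global_w)   # ingredient 3: update weighting
    return global_w + agg_update / sum(weights)
```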
Data-heterogeneous federated learning (FL) systems suffer from two important sources of convergence error: 1) client drift error caused by performing multiple local optimization steps at the clients, and 2) partial client participation error caused by the fact that only a small subset of the edge clients participate in each training round. We find that, of these, only the former has received significant attention in the literature. To address this, we propose FedVARP, a novel variance-reduction algorithm applied at the server that eliminates the error due to partial client participation. To do so, the server simply maintains in memory the most recent update for each client and uses it as a surrogate update for the non-participating clients in every round. Further, to alleviate the memory requirement at the server, we propose a novel clustering-based variance-reduction algorithm, ClusterFedVARP. Unlike previously proposed methods, neither FedVARP nor ClusterFedVARP requires additional computation at the clients or the communication of additional optimization parameters. Through extensive experiments, we show that FedVARP outperforms state-of-the-art methods, and that ClusterFedVARP achieves performance comparable to FedVARP with much lower memory requirements.
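A server-side sketch of this mechanism (our variable names; client updates are represented as pseudo-gradients, i.e., broadcast model minus locally trained model) might look as follows:

```python
import numpy as np

def fedvarp_aggregate(global_w, state_table, participating, new_updates, server_lr=1.0):
    """Sketch of the server step described above: keep the most recent update of every
    client in memory (state_table) and use it as a surrogate for clients that did not
    participate in this round, yielding a variance-reduced aggregate."""
    n = len(state_table)
    correction = sum(new_updates[i] - state_table[i] for i in participating) / len(participating)
    aggregate = sum(state_table.values()) / n + correction
    for i in participating:                     # refresh the stored updates of participants
        state_table[i] = new_updates[i]
    return global_w - server_lr * aggregate, state_table
```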
Federated Averaging (FEDAVG) has emerged as the algorithm of choice for federated learning due to its simplicity and low communication cost. However, in spite of recent research efforts, its performance is not fully understood. We obtain tight convergence rates for FEDAVG and prove that it suffers from 'client-drift' when the data is heterogeneous (non-iid), resulting in unstable and slow convergence. As a solution, we propose a new algorithm (SCAFFOLD) which uses control variates (variance reduction) to correct for the 'client-drift' in its local updates. We prove that SCAFFOLD requires significantly fewer communication rounds and is not affected by data heterogeneity or client sampling. Further, we show that (for quadratics) SCAFFOLD can take advantage of similarity in the clients' data yielding even faster convergence. The latter is the first result to quantify the usefulness of local-steps in distributed optimization.
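A sketch of the drift-corrected client step (following the paper's "Option II" control-variate refresh, with simplified notation and a hypothetical grad_fn oracle) is:

```python
import numpy as np

def scaffold_client_update(w_global, c_global, c_local, data, grad_fn, lr, num_steps):
    """SCAFFOLD-style local update: each local gradient step is corrected by the
    difference between the server and client control variates, counteracting client drift."""
    w = w_global.copy()
    for t in range(num_steps):
        g = grad_fn(w, data[t % len(data)])
        w -= lr * (g - c_local + c_global)          # drift-corrected step
    # control-variate refresh from the realized local progress ("Option II")
    c_local_new = c_local - c_global + (w_global - w) / (num_steps * lr)
    return w - w_global, c_local_new - c_local      # model delta and control-variate delta
```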
Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data. Standard federated optimization methods such as Federated Averaging (FEDAVG) are often difficult to tune and exhibit unfavorable convergence behavior. In non-federated settings, adaptive optimization methods have had notable success in combating such issues. In this work, we propose federated versions of adaptive optimizers, including ADAGRAD, ADAM, and YOGI, and analyze their convergence in the presence of heterogeneous data for general nonconvex settings. Our results highlight the interplay between client heterogeneity and communication efficiency. We also perform extensive experiments on these methods and show that the use of adaptive optimizers can significantly improve the performance of federated learning.
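The common pattern behind these federated adaptive optimizers is to treat the averaged client update as a pseudo-gradient and apply the adaptive rule on the server; a minimal FedAdam-style sketch (hyperparameter names follow the usual Adam conventions and are assumptions, not taken from the paper) is:

```python
import numpy as np

def fedadam_server_step(w, client_deltas, state, lr=1e-2, b1=0.9, b2=0.99, tau=1e-3):
    """Server-side adaptive step: client_deltas[i] is client i's locally trained model
    minus the broadcast model; their negative average acts as a pseudo-gradient fed to
    an Adam-like update with adaptivity parameter tau."""
    pseudo_grad = -np.mean(client_deltas, axis=0)
    m = b1 * state.get("m", np.zeros_like(w)) + (1 - b1) * pseudo_grad
    v = b2 * state.get("v", np.zeros_like(w)) + (1 - b2) * pseudo_grad ** 2
    return w - lr * m / (np.sqrt(v) + tau), {"m": m, "v": v}
```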
Federated Learning is a distributed learning paradigm with two key challenges that differentiate it from traditional distributed optimization: (1) significant variability in terms of the systems characteristics on each device in the network (systems heterogeneity), and (2) non-identically distributed data across the network (statistical heterogeneity). In this work, we introduce a framework, FedProx, to tackle heterogeneity in federated networks. FedProx can be viewed as a generalization and re-parameterization of FedAvg, the current state-of-the-art method for federated learning. While this re-parameterization makes only minor modifications to the method itself, these modifications have important ramifications both in theory and in practice. Theoretically, we provide convergence guarantees for our framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work (systems heterogeneity). Practically, we demonstrate that FedProx allows for more robust convergence than FedAvg across a suite of realistic federated datasets. In particular, in highly heterogeneous settings, FedProx demonstrates significantly more stable and accurate convergence behavior relative to FedAvg, improving absolute test accuracy by 22% on average.
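Concretely, the re-parameterization adds a proximal term to each local subproblem: at round t, device k approximately solves

\[
\min_{w}\; h_k(w; w^t) \;=\; F_k(w) \;+\; \frac{\mu}{2}\,\lVert w - w^t\rVert^2 ,
\]

where F_k is the local objective, w^t the current global model, and μ ≥ 0 the proximal coefficient (setting μ = 0 recovers FedAvg).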
The FedProx algorithm is a simple yet powerful distributed proximal optimization method widely used for federated learning (FL) over heterogeneous data. Despite its popularity and remarkable success in practice, the theoretical understanding of FedProx remains largely under-explored: its appealing convergence behavior has so far only been characterized under certain non-standard and unrealistic dissimilarity assumptions on the local functions, and the results are restricted to smooth optimization problems. To remedy these deficiencies, we develop a novel local-dissimilarity-invariant convergence theory for FedProx and its minibatch stochastic extension through the lens of algorithmic stability. As a result, we contribute several new and deeper insights into FedProx for non-convex federated optimization, including: 1) convergence guarantees that are independent of local-dissimilarity-type conditions; 2) convergence guarantees for non-smooth FL problems; and 3) a linear speedup with respect to the minibatch size and the number of sampled devices. Our theory reveals, for the first time, that local dissimilarity and smoothness are not must-haves for FedProx to attain favorable complexity bounds. Preliminary experimental results on a series of benchmark FL datasets are reported to demonstrate the benefit of minibatching for improving the sample efficiency of FedProx.
Data heterogeneity across clients is a key challenge in federated learning. Prior works address this by either aligning client and server models or using control variates to correct client model drift. Although these methods achieve fast convergence in convex or simple non-convex problems, the performance in over-parameterized models such as deep neural networks is lacking. In this paper, we first revisit the widely used FedAvg algorithm in a deep neural network to understand how data heterogeneity influences the gradient updates across the neural network layers. We observe that while the feature extraction layers are learned efficiently by FedAvg, the substantial diversity of the final classification layers across clients impedes the performance. Motivated by this, we propose to correct model drift by variance reduction only on the final layers. We demonstrate that this significantly outperforms existing benchmarks at a similar or lower communication cost. We furthermore provide proof for the convergence rate of our algorithm.
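An illustrative sketch of the idea (not the authors' exact algorithm) is to apply a control-variate correction only to the parameters of the final classification layer while updating all other layers with plain SGD:

```python
import numpy as np

def partial_corrected_step(params, grads, c_server, c_client, lr, final_layer_keys):
    """Per-layer update: parameters named in final_layer_keys receive a drift correction
    based on server/client control variates; the feature-extraction layers do not."""
    new_params = {}
    for name, w in params.items():
        g = grads[name]
        if name in final_layer_keys:
            g = g - c_client[name] + c_server[name]   # variance reduction on the classifier only
        new_params[name] = w - lr * g
    return new_params
```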
Federated learning (FL) enables training a global model without sharing the decentralized raw data stored on multiple devices, thereby protecting data privacy. Due to the diverse capabilities of the devices, FL frameworks struggle with the problems of straggler effects and outdated models. In addition, data heterogeneity causes severe accuracy degradation of the global model during FL training. To address these issues, we propose a hierarchical synchronous FL framework, FedHiSyn. FedHiSyn first clusters all available devices into a small number of categories based on their computing capacity. After a certain interval of local training, the models trained in the different categories are simultaneously uploaded to the central server. Within a single category, the devices communicate their locally updated model weights to one another over a ring topology. Since the efficiency of training over a ring topology favors devices with homogeneous resources, the classification based on computing capacity mitigates the impact of the straggler effect. Furthermore, combining the synchronous update across categories with the device-to-device communication within each category helps address the data heterogeneity issue while achieving high accuracy. We evaluate the proposed framework on the MNIST, EMNIST, CIFAR10, and CIFAR100 datasets under diverse heterogeneous device settings. Experimental results show that FedHiSyn outperforms six baseline methods, such as FedAvg, SCAFFOLD, and FedAT, in terms of training accuracy and efficiency.
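A small sketch of the grouping step (illustrative only; capacity measurement and the training itself are out of scope here) shows how devices can be split into categories by computing capacity and arranged on a ring within each category:

```python
import numpy as np

def build_categories(capacities, num_categories):
    """Sort devices by measured computing capacity, split them into a few categories,
    and record each device's successor on the intra-category ring along which locally
    updated weights are passed."""
    order = np.argsort(capacities)                      # slowest to fastest devices
    categories = np.array_split(order, num_categories)  # roughly equal-sized groups
    rings = [{int(d): int(cat[(i + 1) % len(cat)]) for i, d in enumerate(cat)}
             for cat in categories]
    return [list(map(int, cat)) for cat in categories], rings

# example: 8 devices with heterogeneous capacities split into 3 categories
cats, rings = build_categories(np.array([1.0, 3.2, 0.5, 2.1, 4.0, 0.9, 2.8, 1.5]), 3)
```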
Federated learning (FL) aims to minimize the communication complexity of training a model over heterogeneous data distributed across many clients. A common approach is local methods, in which clients take multiple optimization steps over their local data before communicating with the server (e.g., FedAvg). Local methods can exploit similarity between clients' data. However, in existing analyses this comes at the cost of a worse dependence on the number of communication rounds R. On the other hand, global methods, in which clients simply return a gradient vector in each round (e.g., SGD), converge faster in terms of R but fail to exploit the similarity between clients even when the clients are homogeneous. We propose FedChain, an algorithmic framework that combines the strengths of local and global methods to achieve fast convergence in terms of R while leveraging the similarity between clients. Using FedChain, we instantiate algorithms that improve upon previously known rates in the general convex and PL settings, and that are nearly optimal (via an algorithm-independent lower bound that we prove) for problems satisfying strong convexity. Empirical results support the theoretical gains over existing methods.
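A high-level sketch of the chaining idea (not the paper's exact instantiation; grad_fn(w, samples) is assumed to return an average gradient) is to run a local-update phase first and then switch to a global phase:

```python
import numpy as np

def fedchain_sketch(w0, clients, grad_fn, local_rounds, global_rounds, local_steps, lr):
    """Phase 1: FedAvg-style rounds exploit similarity between clients.
    Phase 2: global rounds (one aggregated gradient step per round) finish the job."""
    w = w0.copy()
    for _ in range(local_rounds):
        deltas = []
        for data in clients:
            wk = w.copy()
            for _ in range(local_steps):
                wk -= lr * grad_fn(wk, data)
            deltas.append(wk - w)
        w = w + np.mean(deltas, axis=0)
    for _ in range(global_rounds):
        w -= lr * np.mean([grad_fn(w, data) for data in clients], axis=0)
    return w
```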
We present a federated learning framework designed to robustly deliver good predictive performance across individual clients with heterogeneous data. The proposed approach hinges on a superquantile-based learning objective that captures the tail statistics of the error distribution over heterogeneous clients. We present a stochastic training algorithm that interleaves differentially private client re-weighting steps with federated averaging steps. The proposed algorithm is supported by finite-time convergence guarantees that cover both convex and non-convex settings. Experimental results on benchmark datasets for federated learning demonstrate that our approach is competitive with classical methods in terms of average error and outperforms them in terms of the tail statistics of the error.
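The tail-focused objective can be illustrated with a simple discrete approximation of the superquantile (conditional value-at-risk) of the per-client losses; the sketch below illustrates the objective only, not the authors' full reweighting algorithm:

```python
import numpy as np

def superquantile(losses, theta):
    """Average of the worst (1 - theta) fraction of per-client losses (a simple discrete
    approximation of the superquantile at level theta)."""
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]   # largest losses first
    k = max(1, int(np.ceil((1.0 - theta) * len(losses))))
    return float(np.mean(losses[:k]))

# with theta = 0.8, only the worst 20% of client losses enter the objective
print(superquantile([0.2, 0.9, 0.4, 1.5, 0.3], theta=0.8))    # -> 1.5
```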
A major obstacle to achieving global convergence in distributed and federated learning is the misalignment of gradients across clients caused by the heterogeneity and stochasticity of the distributed data. In this work, we show that data heterogeneity can in fact be exploited to improve generalization performance through implicit regularization. One way to alleviate the effects of heterogeneity is to encourage the gradients on different clients to align throughout training. Our analysis shows that this goal can be achieved by using the right optimization method, one that replicates the implicit regularization effect of SGD, leading to gradient alignment as well as improved test accuracy. Since the presence of this regularization in SGD relies entirely on the sequential use of different mini-batches during training, it is inherently absent when training with large batches. To obtain the generalization benefits of this regularization while increasing parallelism, we propose a new gradient-based algorithm that induces the same implicit regularization while allowing arbitrarily large batches to be used in each update. We experimentally validate the advantages of our algorithm in different distributed and federated learning settings.
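As a simple diagnostic of the quantity being encouraged (not the proposed algorithm itself), the degree of gradient alignment across clients can be measured by the average pairwise cosine similarity of their gradients:

```python
import numpy as np

def mean_pairwise_alignment(client_grads):
    """Average cosine similarity between (flattened) client gradients; values close to 1
    indicate well-aligned gradients, values near 0 or below indicate misalignment."""
    g = [np.asarray(v, dtype=float).ravel() for v in client_grads]
    g = [v / (np.linalg.norm(v) + 1e-12) for v in g]
    sims = [float(g[i] @ g[j]) for i in range(len(g)) for j in range(i + 1, len(g))]
    return float(np.mean(sims))
```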
Federated learning often suffers from unstable and slow convergence due to the heterogeneous characteristics of participating clients. This tendency is aggravated when the client participation ratio is low, since the information collected from the clients in each round is then prone to be more inconsistent. To address this challenge, we propose a novel federated learning framework that improves the stability of the server-side aggregation step, which is achieved by sending the clients an accelerated model, estimated with the global gradient, to guide the local gradient updates. Our algorithm naturally aggregates and conveys the global update information to participants with no additional communication cost, and it does not require storing past models on the clients. We also regularize the local updates to further reduce bias and improve the stability of local updates. We perform comprehensive empirical studies on real data under various settings and demonstrate remarkable performance gains of the proposed method in terms of accuracy and communication efficiency compared with state-of-the-art methods, especially at low client participation rates. Our code is available at https://github.com/ninigapa0/FedAGM
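A rough server-side sketch of the acceleration (the coefficients and the exact form of the lookahead are illustrative assumptions, not the paper's equations) is:

```python
import numpy as np

def fedagm_server_round(w, momentum, client_deltas, server_lr=1.0, beta=0.9, lam=0.9):
    """The server accumulates a global momentum from the aggregated client updates and
    broadcasts a lookahead (accelerated) model so that clients start their local steps
    from a point that already reflects the estimated global gradient direction."""
    delta = np.mean(client_deltas, axis=0)     # aggregated client update
    momentum = beta * momentum + delta         # global momentum
    w = w + server_lr * delta                  # new global model
    broadcast = w + lam * momentum             # accelerated model sent to the clients
    return w, momentum, broadcast
```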
In federated optimization, heterogeneity in the clients' local datasets and computation speeds results in large variations in the number of local updates performed by each client in each communication round. Naive weighted aggregation of such models causes objective inconsistency, that is, the global model converges to a stationary point of a mismatched objective function which can be arbitrarily different from the true objective. This paper provides a general framework to analyze the convergence of federated heterogeneous optimization algorithms. It subsumes previously proposed methods such as FedAvg and FedProx and provides the first principled understanding of the solution bias and the convergence slowdown due to objective inconsistency. Using insights from this analysis, we propose FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
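A minimal sketch of normalized averaging for the plain-SGD case (our notation; client_deltas[i] is client i's locally trained model minus the global model and local_steps[i] its number of local steps) is:

```python
import numpy as np

def fednova_aggregate(w_global, client_deltas, local_steps, data_weights):
    """Each client's accumulated update is normalized by its own number of local steps
    before the weighted average, so clients that ran more steps do not drag the global
    model toward their local optimum."""
    p = np.asarray(data_weights, dtype=float)
    p /= p.sum()
    normalized = [d / tau for d, tau in zip(client_deltas, local_steps)]
    tau_eff = float(np.dot(p, local_steps))    # effective number of local steps
    return w_global + tau_eff * sum(pi * di for pi, di in zip(p, normalized))
```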
The increasing size of data generated by smartphones and IoT devices motivated the development of Federated Learning (FL), a framework for on-device collaborative training of machine learning models. First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client, due to the inherent heterogeneity of local data distributions. Federated multi-task learning (MTL) approaches can learn personalized models by formulating an opportune penalized optimization problem. The penalization term can capture complex relations among personalized models, but eschews clear statistical assumptions about local data distributions. In this work, we propose to study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions. This assumption encompasses most of the existing personalized FL approaches and leads to federated EM-like algorithms for both client-server and fully decentralized settings. Moreover, it provides a principled way to serve personalized models to clients not seen at training time. The algorithms' convergence is analyzed through a novel federated surrogate optimization framework, which can be of general interest. Experimental results on FL benchmarks show that our approach provides models with higher accuracy and fairness than state-of-the-art methods.
Federated learning is a distributed framework according to which a model is trained over a set of devices, while keeping data localized. This framework faces several systems-oriented challenges which include (i) communication bottleneck since a large number of devices upload their local updates to a parameter server, and (ii) scalability as the federated network consists of millions of devices. Due to these systems challenges as well as issues related to statistical heterogeneity of data and privacy concerns, designing a provably efficient federated learning method is of significant importance yet it remains challenging. In this paper, we present FedPAQ, a communication-efficient Federated Learning method with Periodic Averaging and Quantization. FedPAQ relies on three key features: (1) periodic averaging where models are updated locally at devices and only periodically averaged at the server; (2) partial device participation where only a fraction of devices participate in each round of the training; and (3) quantized message-passing where the edge nodes quantize their updates before uploading to the parameter server. These features address the communications and scalability challenges in federated learning. We also show that FedPAQ achieves near-optimal theoretical guarantees for strongly convex and non-convex loss functions and empirically demonstrate the communication-computation tradeoff provided by our method.
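The quantized message-passing ingredient can be illustrated with a standard unbiased stochastic quantizer applied to the model update before upload (the paper's exact quantization operator may differ):

```python
import numpy as np

def stochastic_quantize(x, num_levels=4, rng=None):
    """Randomly round each coordinate of x to one of num_levels quantization levels of
    |x| / ||x||, so that the quantized vector is unbiased: E[q(x)] = x."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return np.zeros_like(x)
    scaled = np.abs(x) / norm * num_levels
    lower = np.floor(scaled)
    levels = lower + (rng.random(x.shape) < (scaled - lower))
    return np.sign(x) * levels * norm / num_levels
```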
Does federated learning (FL) work when both uplink and downlink communications have errors? How much communication noise can FL handle, and what is its impact on learning performance? This work is devoted to answering these practically important questions by explicitly incorporating both uplink and downlink noisy channels into the FL pipeline. We present several new convergence analyses of FL over simultaneously noisy uplink and downlink channels, covering full and partial client participation, direct model and model-differential transmission, and non-independent and identically distributed (non-IID) local datasets. These analyses characterize conditions under which FL over noisy channels has the same convergence behavior as the ideal case with no communication error. More specifically, in order to maintain the O(1/T) convergence rate of FedAvg with perfect communication, the uplink and downlink signal-to-noise ratios (SNRs) for direct model transmission should be controlled so that they scale as O(t^2), where t is the index of the communication round, but they can remain constant for model-differential transmission. The key insight behind these theoretical results is a 'flying under the radar' principle: stochastic gradient descent (SGD) is an inherently noisy process, and uplink/downlink communication noise can be tolerated as long as it does not dominate the time-varying SGD noise. We exemplify these theoretical findings with two widely adopted communication techniques, transmit power control and diversity combining, and further validate their performance advantages over standard methods through extensive numerical experiments on several real-world FL tasks.
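In compact form, the condition stated above for retaining the O(1/T) rate of FedAvg reads

\[
\mathrm{SNR}^{\mathrm{UL}}_t,\ \mathrm{SNR}^{\mathrm{DL}}_t = \mathcal{O}(t^2)\ \text{for direct model transmission},\qquad
\mathrm{SNR}^{\mathrm{UL}}_t,\ \mathrm{SNR}^{\mathrm{DL}}_t = \mathcal{O}(1)\ \text{for model-differential transmission},
\]

where t indexes the communication rounds.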
In federated learning (FL), the objective of collaboratively learning a global model through model updates across devices tends to oppose the goal of personalization based on local information. In this work, we quantitatively calibrate this trade-off through a framework based on multi-criterion optimization, which we cast as a constrained program: each device's objective is its local objective, which it seeks to minimize while satisfying nonlinear constraints that quantify the proximity between the local model and the global model. By considering the Lagrangian relaxation of this problem, we develop an algorithm that allows each node to minimize its local component of the Lagrangian through queries to a first-order gradient oracle. The server then executes Lagrange multiplier ascent steps followed by a Lagrange multiplier-weighted averaging step. We call this instantiation of the primal-dual method Federated Learning Beyond Consensus ($\texttt{FedBC}$). Theoretically, we establish that $\texttt{FedBC}$ converges to a first-order stationary point at a rate that matches the state of the art, up to an additional error term that depends on a tolerance parameter arising from the proximity constraints. Overall, the analysis provides a novel characterization of primal-dual methods for non-convex saddle-point problems. Finally, we demonstrate that $\texttt{FedBC}$ balances global and local model test accuracy across a suite of datasets (synthetic, MNIST, CIFAR-10, Shakespeare), achieving performance competitive with the state of the art.
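In notation introduced here (the abstract does not spell out the formulas), the constrained program and its Lagrangian relaxation can be sketched as

\[
\min_{\{\theta_i\},\,\bar\theta}\ \sum_i f_i(\theta_i)\quad \text{s.t.}\quad \lVert \theta_i - \bar\theta\rVert^2 \le \gamma_i \ \ \forall i,
\qquad
\mathcal{L} = \sum_i \Big[f_i(\theta_i) + \lambda_i\big(\lVert \theta_i - \bar\theta\rVert^2 - \gamma_i\big)\Big],\ \ \lambda_i \ge 0,
\]

with each device minimizing its component of \(\mathcal{L}\) over \(\theta_i\), and the server performing ascent on the multipliers \(\lambda_i\) followed by a \(\lambda\)-weighted averaging step to update \(\bar\theta\).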
As a novel distributed learning paradigm, federated learning (FL) faces serious challenges in dealing with massive clients that have heterogeneous data distributions and computation and communication resources. Various client-variance-reduction schemes and client sampling strategies have been introduced to improve the robustness of FL. Among others, primal-dual algorithms such as the alternating direction method of multipliers (ADMM) have been found to be resilient to heterogeneous data distributions and to outperform most of the primal-only FL algorithms. However, the reason behind this resilience has remained a mystery. In this paper, we first reveal that federated ADMM is essentially a client-variance-reduced algorithm. While this explains the inherent robustness of federated ADMM, the vanilla version of it lacks the ability to adapt to the degree of client heterogeneity. Besides, the global model at the server under client sampling is biased, which slows down practical convergence. To go beyond ADMM, we propose a novel primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and the bias of the global model. In addition, FedVRA unifies several representative FL algorithms in the sense that they are either special instances of FedVRA or are close to it. Extensions of FedVRA to semi-/un-supervised learning are also presented. Experiments based on (semi-)supervised image classification tasks demonstrate the superiority of FedVRA over existing schemes in learning scenarios with massive heterogeneous clients and client sampling.
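For reference, the standard federated (consensus) ADMM updates that this observation refers to can be written as

\[
\theta_i^{t+1} = \arg\min_{\theta}\ f_i(\theta) + \big\langle \lambda_i^{t},\, \theta - \bar\theta^{t}\big\rangle + \frac{\rho}{2}\lVert \theta - \bar\theta^{t}\rVert^2,
\qquad
\lambda_i^{t+1} = \lambda_i^{t} + \rho\big(\theta_i^{t+1} - \bar\theta^{t+1}\big),
\]

where \(\bar\theta^{t+1}\) is obtained by server-side averaging; FedVRA generalizes this template with tunable variance-reduction and bias-control parameters (the exact update is given in the paper, not here).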
This study investigates clustered federated learning (FL), one of the formulations of FL with non-i.i.d. data, where the devices are partitioned into clusters and each cluster optimally fits its data with a localized model. We propose a novel clustered FL framework, which applies a nonconvex penalty to pairwise differences of parameters. This framework can automatically identify clusters without a priori knowledge of the number of clusters and the set of devices in each cluster. To implement the proposed framework, we develop a novel clustered FL method called FPFC. Advancing from the standard ADMM, our method is implemented in parallel, updates only a subset of devices at each communication round, and allows each participating device to perform a variable amount of work. This greatly reduces the communication cost while simultaneously preserving privacy, making it practical for FL. We also propose a new warmup strategy for hyperparameter tuning under FL settings and consider the asynchronous variant of FPFC (asyncFPFC). Theoretically, we provide convergence guarantees of FPFC for general nonconvex losses and establish the statistical convergence rate under a linear model with squared loss. Our extensive experiments demonstrate the advantages of FPFC over existing methods.
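In our notation (the abstract does not state the formulation explicitly), the pairwise-penalized objective behind the framework can be sketched as

\[
\min_{w_1,\dots,w_M}\ \sum_{i=1}^{M} f_i(w_i) \;+\; \lambda \sum_{1\le i<j\le M} p\big(\lVert w_i - w_j\rVert\big),
\]

where \(f_i\) is device i's local loss and \(p\) is a nonconvex penalty (e.g., of SCAD or MCP type) that drives some pairwise differences exactly to zero, so that devices whose parameters fuse together form a cluster.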