Federated learning is a distributed framework according to which a model is trained over a set of devices, while keeping data localized. This framework faces several systems-oriented challenges, including (i) a communication bottleneck, since a large number of devices upload their local updates to a parameter server, and (ii) scalability, as the federated network consists of millions of devices. Due to these systems challenges, as well as issues related to statistical heterogeneity of data and privacy concerns, designing a provably efficient federated learning method is of significant importance yet remains challenging. In this paper, we present FedPAQ, a communication-efficient Federated Learning method with Periodic Averaging and Quantization. FedPAQ relies on three key features: (1) periodic averaging, where models are updated locally at devices and only periodically averaged at the server; (2) partial device participation, where only a fraction of devices participate in each round of the training; and (3) quantized message-passing, where the edge nodes quantize their updates before uploading to the parameter server. These features address the communication and scalability challenges in federated learning. We also show that FedPAQ achieves near-optimal theoretical guarantees for strongly convex and non-convex loss functions and empirically demonstrate the communication-computation tradeoff provided by our method.
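A minimal sketch of one FedPAQ-style round combining the three features above (periodic local updates, partial participation, quantized uploads). The quantizer, the least-squares stand-in loss, and all function names here are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def stochastic_quantize(v, levels=16):
    """Unbiased stochastic rounding onto a uniform grid (one possible quantizer)."""
    scale = np.max(np.abs(v)) + 1e-12
    normalized = np.abs(v) / scale * (levels - 1)
    lower = np.floor(normalized)
    rounded = lower + (np.random.rand(*v.shape) < normalized - lower)
    return np.sign(v) * rounded / (levels - 1) * scale

def fedpaq_round(global_model, local_data, local_steps, lr, participation=0.1):
    """One round: a sampled fraction of devices run local SGD and upload quantized updates."""
    n = len(local_data)
    sampled = np.random.choice(n, max(1, int(participation * n)), replace=False)
    aggregated = np.zeros_like(global_model)
    for i in sampled:
        w = global_model.copy()
        for _ in range(local_steps):                    # periodic averaging: several local steps
            x, y = local_data[i]
            grad = 2 * x.T @ (x @ w - y) / len(y)       # least-squares gradient as a stand-in
            w -= lr * grad
        aggregated += stochastic_quantize(w - global_model)   # quantized message passing
    return global_model + aggregated / len(sampled)           # server averages quantized updates
```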
In contrast to training traditional machine learning (ML) models in data centers, federated learning (FL) trains ML models over local datasets contained on resource-constrained heterogeneous edge devices. Existing FL algorithms aim to learn a single global model for all participating devices, which may not be helpful to all devices participating in training due to the heterogeneity of the data across devices. Recently, Hanzely and Richtárik (2020) proposed a new formulation for training personalized FL models aimed at balancing the trade-off between the traditional global model and the local models that could be trained by individual devices using their private data only. They derived a new algorithm, called Loopless Local Gradient Descent (L2GD), to solve it, and showed that this algorithm leads to improved communication complexity in regimes where more personalization is needed. In this paper, we equip their L2GD algorithm with a bidirectional compression mechanism to further reduce the communication bottleneck between the local devices and the server. Unlike other compression-based algorithms used in the FL setting, our compressed L2GD algorithm operates on a probabilistic communication protocol, where communication does not happen on a fixed schedule. Moreover, our compressed L2GD algorithm maintains a convergence rate similar to vanilla SGD without compression. To validate the efficiency of our algorithm, we perform a diverse set of numerical experiments on both convex and non-convex problems, using various compression techniques.
The FedProx algorithm is a simple yet powerful distributed proximal optimization method widely used for federated learning (FL) over heterogeneous data. Despite its popularity and remarkable success witnessed in practice, the theoretical understanding of FedProx is largely underinvestigated: the appealing convergence behavior of FedProx has so far been characterized under certain non-standard and unrealistic dissimilarity assumptions on the local functions, and the results are limited to smooth optimization problems. To remedy these deficiencies, we develop a novel local-dissimilarity-invariant theory for FedProx and its minibatch stochastic extension through the lens of algorithmic stability. As a result, we contribute several new and deeper insights into FedProx for non-convex federated optimization, including: 1) convergence guarantees independent of local dissimilarity conditions; 2) convergence guarantees for non-smooth FL problems; and 3) linear speedup with respect to the minibatch size and the number of sampled devices. Our theory reveals for the first time that local dissimilarity and smoothness are not must-haves for FedProx to attain favorable complexity bounds. Preliminary experimental results on a series of benchmark FL datasets are reported to demonstrate the benefit of minibatching for improving the sample efficiency of FedProx.
In large-scale distributed learning, security issues have become increasingly important. Particularly in a decentralized environment, some computing units may behave abnormally, or even exhibit Byzantine failures, i.e., arbitrary and potentially adversarial behavior. In this paper, we develop distributed learning algorithms that are provably robust against such failures, with a focus on achieving optimal statistical performance. A main result of this work is a sharp analysis of two robust distributed gradient descent algorithms based on median and trimmed mean operations, respectively. We prove statistical error rates for three kinds of population loss functions: strongly convex, non-strongly convex, and smooth non-convex. In particular, these algorithms are shown to achieve order-optimal statistical error rates for strongly convex losses. To achieve better communication efficiency, we further propose a median-based distributed algorithm that is provably robust, and uses only one communication round. For strongly convex quadratic loss, we show that this algorithm achieves the same optimal error rate as the robust distributed gradient descent algorithms.
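A sketch of the two robust aggregation rules named in this abstract, coordinate-wise median and coordinate-wise trimmed mean over worker gradients, followed by a single robust gradient step. The shapes and interfaces are assumptions for illustration, not the paper's code.

```python
import numpy as np

def coordinate_median(grads):
    """grads: (num_workers, dim). The coordinate-wise median tolerates Byzantine workers."""
    return np.median(grads, axis=0)

def trimmed_mean(grads, beta):
    """Drop the beta-fraction largest and smallest entries per coordinate, average the rest."""
    m = grads.shape[0]
    k = int(beta * m)
    sorted_grads = np.sort(grads, axis=0)       # sort each coordinate independently
    return sorted_grads[k:m - k].mean(axis=0)

def robust_gd_step(w, grads, lr, rule="median", beta=0.1):
    agg = coordinate_median(grads) if rule == "median" else trimmed_mean(grads, beta)
    return w - lr * agg
```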
We consider the distributed stochastic optimization problem in which $n$ agents want to minimize a global function given by the sum of the agents' local functions, and focus on the heterogeneous setting where the agents' local functions are defined over non-i.i.d. datasets. We study the Local SGD method, in which agents perform a number of local stochastic gradient steps and occasionally communicate with a central node to improve their local optimization tasks. We analyze the effect of the local steps on the convergence rate and communication complexity of Local SGD. In particular, instead of assuming a fixed number of local steps across all communication rounds, we allow the number of local steps during the $i$-th communication round, $H_i$, to be arbitrary. Our main contribution is to characterize the convergence rate of Local SGD as a function of $\{H_i\}_{i=1}^R$ under strongly convex, convex, and nonconvex local functions, where $R$ is the total number of communication rounds. Based on this characterization, we provide sufficient conditions on the sequence $\{H_i\}_{i=1}^R$ under which Local SGD achieves linear speedup with respect to the number of workers. Furthermore, we propose a new communication strategy with increasing local steps that outperforms existing communication strategies for strongly convex local functions. For convex and nonconvex local functions, on the other hand, we argue that a fixed number of local steps is the best communication strategy for Local SGD, and we recover state-of-the-art convergence rate results. Finally, we justify our theoretical results through extensive numerical experiments.
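A sketch of Local SGD with a round-dependent number of local steps $H_i$, the quantity the analysis tracks; the worker interface, step size, and the increasing schedule at the end are placeholders chosen for illustration.

```python
import numpy as np

def local_sgd(w0, worker_grads, schedule, lr):
    """worker_grads[j](w) returns a stochastic gradient of worker j's local loss at w.
    schedule = [H_1, ..., H_R] gives the number of local steps in each communication round."""
    w = w0.copy()
    for H_i in schedule:                        # R communication rounds
        local_models = []
        for grad_fn in worker_grads:
            w_local = w.copy()
            for _ in range(H_i):                # H_i local stochastic gradient steps
                w_local -= lr * grad_fn(w_local)
            local_models.append(w_local)
        w = np.mean(local_models, axis=0)       # occasional communication: average at the server
    return w

# An increasing schedule of local steps, one of the strategies discussed in the abstract.
schedule = [2 ** r for r in range(6)]
```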
Gradient compression is a popular technique for improving the communication complexity of stochastic first-order methods in distributed training of machine learning models. However, existing works consider only with-replacement sampling of stochastic gradients. In contrast, it is well known in practice, and was recently confirmed in theory, that stochastic methods based on without-replacement sampling, such as the Random Reshuffling (RR) method, perform better than methods that sample gradients with replacement. In this work, we close this gap in the literature and provide the first analysis of methods with gradient compression and without-replacement sampling. We first develop a distributed variant of random reshuffling with gradient compression (Q-RR) and show how the variance introduced by gradient quantization can be reduced using control iterates. Next, to better fit federated learning applications, we incorporate local computation and propose a variant of Q-RR called Q-NASTYA. Q-NASTYA uses local gradient steps as well as different local and global stepsizes. We then show how to reduce the compression variance in this setting as well. Finally, we prove convergence results for the proposed methods and outline several settings in which they improve upon existing algorithms.
We develop a new approach to tackling communication constraints in distributed learning problems with a central server. We propose and analyze a new algorithm that performs bidirectional compression and achieves the same convergence rate as algorithms using only uplink (from the local workers to the central server) compression. To obtain this improvement, we design MCM, an algorithm in which the downlink compression only impacts local models, while the global model is preserved. As a result, and in contrast to previous works, the gradients at the workers are computed on perturbed models. The convergence proof is therefore more challenging and requires precise control of this perturbation. To ensure it, MCM additionally combines model compression with a memory mechanism. This analysis opens new doors, e.g., incorporating worker-dependent randomized models and partial participation.
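A rough sketch of the mechanism described above: the server keeps the uncompressed global model, compresses only what it sends down relative to a memory term, so workers compute gradients on perturbed copies, and uplink messages are compressed as usual. The compressor and the memory update rule are simplified assumptions, not the exact MCM recursion.

```python
import numpy as np

def rand_k(v, k):
    """Sparsifying compressor: keep k random coordinates, rescaled to remain unbiased."""
    mask = np.zeros_like(v)
    idx = np.random.choice(v.size, k, replace=False)
    mask[idx] = v.size / k
    return v * mask

def mcm_like_round(w_global, memory, worker_grad_fns, k, lr, alpha=0.1):
    # Downlink: compress only the difference to the memory; workers see a perturbed model.
    delta = rand_k(w_global - memory, k)
    w_perturbed = memory + delta                       # what the workers actually receive
    memory = memory + alpha * delta                    # memory mechanism stabilizing the downlink
    # Uplink: workers compute gradients on the perturbed model and send them compressed.
    grads = [rand_k(g(w_perturbed), k) for g in worker_grad_fns]
    w_global = w_global - lr * np.mean(grads, axis=0)  # global model itself is never compressed
    return w_global, memory
```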
We introduce a framework, Artemis, to tackle the problem of learning in a distributed or federated setting with communication constraints and partial device participation. Several workers (randomly sampled) perform the optimization process using a central server to aggregate their computations. To alleviate the communication cost, Artemis allows the information sent in both directions (from the workers to the server and conversely) to be compressed, combined with a memory mechanism. It improves on existing algorithms that only consider unidirectional compression (to the server), or use very strong assumptions on the compression operators, and that often do not take partial device participation into account. We provide fast rates of convergence (linear up to a threshold) under weak assumptions on the stochastic gradients (the noise variance is bounded only at the optimal point) in the non-i.i.d. setting, highlight the impact of memory for unidirectional and bidirectional compression, and analyze Polyak-Ruppert averaging. We use convergence in distribution to obtain a lower bound on the asymptotic variance that highlights the practical limits of compression. We propose two approaches to tackle the challenging case of partial device participation, and provide experimental results to demonstrate the validity of our analysis.
Federated learning (FL) is a subfield of machine learning in which multiple clients try to collaboratively learn a model over a network under communication constraints. We consider finite-sum federated optimization under a second-order function similarity condition and strong convexity, and propose two new algorithms: SVRP and Catalyzed SVRP. This second-order similarity condition has grown popular recently and is satisfied in many applications, including distributed statistical learning and differentially private empirical risk minimization. The first algorithm, SVRP, combines approximate stochastic proximal point evaluations, client sampling, and variance reduction. We show that SVRP is communication efficient and achieves superior performance to many existing algorithms when function similarity is sufficiently high. Our second algorithm, Catalyzed SVRP, is a Catalyst-accelerated variant of SVRP that achieves even better performance and uniformly improves upon existing algorithms for federated optimization under second-order similarity and strong convexity. In the course of analyzing these algorithms, we provide a new analysis of the stochastic proximal point method (SPPM) that may be of independent interest. Our analysis of SPPM is simple, allows for approximate proximal point evaluations, does not require any smoothness assumptions, and shows a clear benefit in communication complexity over ordinary distributed stochastic gradient descent.
In Federated Learning, we aim to train models across multiple computing units (users), while users can only communicate with a common central server, without exchanging their data samples. This mechanism exploits the computational power of all users and allows users to obtain a richer model as their models are trained over a larger set of data points. However, this scheme only develops a common output for all the users, and, therefore, it does not adapt the model to each user. This is an important missing feature, especially given the heterogeneity of the underlying data distribution for various users. In this paper, we study a personalized variant of federated learning in which our goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent with respect to their own data. This approach keeps all the benefits of the federated learning architecture, and, by structure, leads to a more personalized model for each user. We show this problem can be studied within the Model-Agnostic Meta-Learning (MAML) framework. Inspired by this connection, we study a personalized variant of the well-known Federated Averaging algorithm and evaluate its performance in terms of gradient norm for non-convex loss functions. Further, we characterize how this performance is affected by the closeness of underlying distributions of user data, measured in terms of distribution distances such as Total Variation and 1-Wasserstein metric. Recently, the idea of personalization in FL and its connections with MAML have gained a lot of attention. In particular, [32] considers a formulation and algorithm similar to our paper, and elaborates
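A sketch of the MAML-style update this personalized variant of Federated Averaging builds on: each user takes one gradient step on its own data before the meta-gradient is formed, and a (new) user later adapts the shared initialization with a few local steps. The first-order approximation (dropping the Hessian term) is used below for simplicity; the paper's exact estimator may differ, and all interfaces are assumed.

```python
import numpy as np

def personalized_round(w, user_grad_fns, alpha, beta):
    """One round of a first-order approximation to MAML-style personalized FedAvg.
    alpha: inner (adaptation) step size, beta: outer (meta) step size."""
    meta_grads = []
    for grad_fn in user_grad_fns:
        w_adapted = w - alpha * grad_fn(w)       # one local adaptation step on the user's data
        meta_grads.append(grad_fn(w_adapted))    # first-order MAML: ignore the Hessian term
    return w - beta * np.mean(meta_grads, axis=0)

def personalize(w_shared, grad_fn, alpha, steps=1):
    """At deployment, a user adapts the shared initialization with one or a few local steps."""
    w = w_shared.copy()
    for _ in range(steps):
        w -= alpha * grad_fn(w)
    return w
```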
Federated Learning is a distributed learning paradigm with two key challenges that differentiate it from traditional distributed optimization: (1) significant variability in terms of the systems characteristics on each device in the network (systems heterogeneity), and (2) non-identically distributed data across the network (statistical heterogeneity). In this work, we introduce a framework, FedProx, to tackle heterogeneity in federated networks. FedProx can be viewed as a generalization and re-parameterization of FedAvg, the current state-of-the-art method for federated learning. While this re-parameterization makes only minor modifications to the method itself, these modifications have important ramifications both in theory and in practice. Theoretically, we provide convergence guarantees for our framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work (systems heterogeneity). Practically, we demonstrate that FedProx allows for more robust convergence than FedAvg across a suite of realistic federated datasets. In particular, in highly heterogeneous settings, FedProx demonstrates significantly more stable and accurate convergence behavior relative to FedAvg, improving absolute test accuracy by 22% on average.
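A sketch of the FedProx local subproblem: each participating device approximately minimizes its local loss plus a proximal term $\frac{\mu}{2}\|w - w^t\|^2$ anchored at the current global model, and may run a variable number of local steps. The inexact solver below is plain gradient descent, and the interfaces and hyperparameters are placeholders.

```python
import numpy as np

def fedprox_local_solve(w_global, grad_fn, mu, lr, num_steps):
    """Approximately minimize  F_k(w) + (mu/2) * ||w - w_global||^2  by gradient descent.
    num_steps can differ across devices (variable amount of work / systems heterogeneity)."""
    w = w_global.copy()
    for _ in range(num_steps):
        w -= lr * (grad_fn(w) + mu * (w - w_global))   # gradient of the proximal objective
    return w

def fedprox_round(w_global, device_grad_fns, device_steps, mu=0.1, lr=0.01):
    locals_ = [fedprox_local_solve(w_global, g, mu, lr, s)
               for g, s in zip(device_grad_fns, device_steps)]
    return np.mean(locals_, axis=0)                     # server averages the inexact solutions
```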
In the last few years, various communication compression techniques have emerged as an indispensable tool to help alleviate the communication bottleneck in distributed learning. However, although biased compressors often show superior performance in practice compared to the much more studied and understood unbiased compressors, very little is known about them. In this work, we study three classes of biased compression operators, two of which are new, and their performance when applied to (stochastic) gradient descent and distributed (stochastic) gradient descent. We show for the first time that biased compressors can lead to linear convergence rates in both the single-node and distributed settings. We prove that distributed compressed SGD, employed with an error-feedback mechanism, enjoys the ergodic rate $\mathcal{O}\left(\delta L \exp\left[-\frac{\mu K}{\delta L}\right] + \frac{C + \delta D}{K\mu}\right)$, where $\delta \ge 1$ is a compression parameter that grows when more compression is applied, $L$ and $\mu$ are the smoothness and strong convexity constants, $C$ captures stochastic gradient noise ($C = 0$ if full gradients are computed on each node), and $D$ captures the variance of the gradients at the optimum ($D = 0$ for over-parameterized models). Further, via a theoretical study of several synthetic and empirical distributions of communicated gradients, we shed light on why and by how much biased compressors outperform their unbiased variants. Finally, we propose several new biased compressors with promising theoretical guarantees and practical performance.
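A sketch of distributed compressed SGD with the error-feedback mechanism the rate above refers to, using top-k (a biased compressor) as the example; the interfaces and the choice of top-k are assumptions made for illustration.

```python
import numpy as np

def top_k(v, k):
    """Biased compressor: keep the k largest-magnitude coordinates, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_compressed_sgd_step(w, grad_fns, errors, lr, k):
    """Each node compresses (lr * gradient + accumulated error) and keeps the residual."""
    messages = []
    for i, grad_fn in enumerate(grad_fns):
        p = lr * grad_fn(w) + errors[i]        # add back what was lost in previous rounds
        msg = top_k(p, k)
        errors[i] = p - msg                    # error feedback: store the compression residual
        messages.append(msg)
    return w - np.mean(messages, axis=0), errors
```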
Local Stochastic Gradient Descent (SGD) with periodic model averaging (FedAvg) is a foundational algorithm in federated learning. The algorithm runs SGD independently on multiple workers and periodically averages the models across all workers. However, when local SGD runs with many workers, the periodic averaging causes a significant model discrepancy across the workers, making the global loss converge slowly. While recent advanced optimization methods tackle this issue with a focus on non-IID settings, the model discrepancy problem remains because of the underlying periodic model averaging. We propose a partial model averaging framework that mitigates the model discrepancy issue in federated learning. Partial averaging encourages the local models to stay close to each other in parameter space, and it enables the global loss to be minimized more effectively. Given a fixed number of iterations and a large number of workers (128), partial averaging achieves up to 2.2% higher validation accuracy than periodic full averaging.
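A sketch of the partial-averaging idea: in each round only a slice of the parameter vector is synchronized across workers, rotating through the slices so the local models stay close in parameter space without paying for a full model exchange. The slicing scheme below is one plausible instantiation, not the paper's exact partitioning.

```python
import numpy as np

def partial_average(local_models, round_idx, num_partitions):
    """Average only one partition of the parameters per round, rotating across rounds."""
    models = np.stack(local_models)                     # shape: (num_workers, dim)
    dim = models.shape[1]
    bounds = np.linspace(0, dim, num_partitions + 1, dtype=int)
    p = round_idx % num_partitions
    lo, hi = bounds[p], bounds[p + 1]
    models[:, lo:hi] = models[:, lo:hi].mean(axis=0)    # synchronize this slice on all workers
    return [models[i] for i in range(models.shape[0])]
```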
In distributed training of deep neural networks, parallel minibatch SGD is widely used to speed up the training process by using multiple workers. It uses multiple workers to sample local stochastic gradients in parallel, aggregates all gradients at a single server to obtain the average, and updates each worker's local model using an SGD update with the averaged gradient. Ideally, parallel mini-batch SGD can achieve a linear speed-up of the training time (with respect to the number of workers) compared with SGD over a single worker. However, such linear scalability in practice is significantly limited by the growing demand for gradient communication as more workers are involved. Model averaging, which periodically averages individual models trained over parallel workers, is another common practice used for distributed training of deep neural networks since (Zinkevich et al. 2010; McDonald, Hall, and Mann 2010). Compared with parallel mini-batch SGD, the communication overhead of model averaging is significantly reduced. Impressively, a large body of experimental work has verified that model averaging can still achieve a good speed-up of the training time as long as the averaging interval is carefully controlled. However, it remains a mystery in theory why such a simple heuristic works so well. This paper provides a thorough and rigorous theoretical study on why model averaging can work as well as parallel mini-batch SGD with significantly less communication overhead.
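A sketch contrasting the two schemes this abstract compares: parallel minibatch SGD, which averages gradients at every iteration, and model averaging, which averages models only every few iterations and therefore communicates far less often. Worker interfaces and hyperparameters are assumed.

```python
import numpy as np

def parallel_minibatch_sgd(w, grad_fns, lr, iters):
    """Communicate every iteration: average the workers' stochastic gradients."""
    for _ in range(iters):
        w = w - lr * np.mean([g(w) for g in grad_fns], axis=0)
    return w

def model_averaging(w, grad_fns, lr, iters, averaging_interval):
    """Communicate every `averaging_interval` iterations: average the workers' models."""
    locals_ = [w.copy() for _ in grad_fns]
    for t in range(iters):
        locals_ = [wl - lr * g(wl) for wl, g in zip(locals_, grad_fns)]
        if (t + 1) % averaging_interval == 0:        # far fewer communication rounds
            avg = np.mean(locals_, axis=0)
            locals_ = [avg.copy() for _ in grad_fns]
    return np.mean(locals_, axis=0)
```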
Federated learning (FL) is an emerging paradigm that enables massively distributed training of machine learning models while still providing privacy guarantees. In this work, we jointly address two of the main practical challenges when scaling federated optimization to large node counts: the need for tight synchronization between the central authority and individual computing nodes, and the large communication cost of transmissions between the central server and the clients. Specifically, we present a new variant of the classic federated averaging (FedAvg) algorithm that supports both asynchronous communication and communication compression. We provide a new analysis technique showing that, despite these system relaxations, our algorithm essentially matches the best-known bounds for FedAvg under reasonable parameter settings. Experimentally, we show that our algorithm ensures fast practical convergence on standard federated tasks.
Federated learning is a machine learning training paradigm that enables clients to jointly train models without sharing their own localized data. However, the implementation of federated learning in practice still faces numerous challenges, such as the large communication overhead due to repeated server-client synchronization and the lack of adaptivity of SGD-based model updates. Although various methods have been proposed to reduce the communication cost via gradient compression or quantization, and federated versions of adaptive optimizers (e.g., FedAdam) have been proposed to add adaptivity, current federated learning frameworks still cannot address all of these challenges at once. In this paper, we propose a novel communication-efficient adaptive federated learning method (FedCAMS) with theoretical convergence guarantees. We show that in the nonconvex stochastic optimization setting, our proposed FedCAMS achieves the same convergence rate of $O(\frac{1}{\sqrt{TKm}})$ as its non-compressed counterpart. Extensive experiments on various benchmarks verify our theoretical analysis.
We present a federated learning framework that is designed to robustly deliver good predictive performance across individual clients with heterogeneous data. The proposed approach hinges upon a superquantile-based learning objective that captures the tail statistics of the error distribution over heterogeneous clients. We present a stochastic training algorithm that interleaves differentially private client reweighting steps with federated averaging steps. The proposed algorithm is supported by finite-time convergence guarantees that cover both convex and non-convex settings. Experimental results on benchmark datasets for federated learning demonstrate that our approach is competitive with classical approaches in terms of average error and outperforms them in terms of the tail statistics of the error.
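A sketch of the tail statistic this objective is built on: the superquantile (conditional value-at-risk) of the per-client losses, i.e., the average of the worst (1 - theta)-fraction of client errors. The discrete estimator below is a standard one; the variable names and the toy numbers are illustrative.

```python
import numpy as np

def superquantile(losses, theta):
    """Average of the worst (1 - theta)-fraction of per-client losses (a tail statistic)."""
    losses = np.sort(np.asarray(losses))[::-1]          # largest losses first
    k = max(1, int(np.ceil((1 - theta) * len(losses))))
    return losses[:k].mean()

# Clients with heterogeneous errors: the superquantile focuses on the hardest clients.
client_losses = [0.2, 0.3, 0.9, 1.4, 0.1, 0.5]
print(superquantile(client_losses, theta=0.5))          # mean of the worst half of the clients
```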
Federated Averaging (FEDAVG) has emerged as the algorithm of choice for federated learning due to its simplicity and low communication cost. However, in spite of recent research efforts, its performance is not fully understood. We obtain tight convergence rates for FEDAVG and prove that it suffers from 'client-drift' when the data is heterogeneous (non-iid), resulting in unstable and slow convergence. As a solution, we propose a new algorithm (SCAFFOLD) which uses control variates (variance reduction) to correct for the 'client-drift' in its local updates. We prove that SCAFFOLD requires significantly fewer communication rounds and is not affected by data heterogeneity or client sampling. Further, we show that (for quadratics) SCAFFOLD can take advantage of similarity in the clients' data yielding even faster convergence. The latter is the first result to quantify the usefulness of local steps in distributed optimization.
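A sketch of SCAFFOLD's corrected local update: each client keeps a control variate c_i, the server keeps c, and every local gradient is shifted by c - c_i so that client drift is cancelled. The control-variate update below follows the simpler of the two options described in the paper (computed from the local model change); interfaces, step sizes, and shapes are assumed.

```python
import numpy as np

def scaffold_client(w_global, c_global, c_i, grad_fn, lr, local_steps):
    """Local updates corrected by control variates (variance reduction against client drift)."""
    w = w_global.copy()
    for _ in range(local_steps):
        w -= lr * (grad_fn(w) - c_i + c_global)          # drift-corrected local step
    c_i_new = c_i - c_global + (w_global - w) / (lr * local_steps)
    return w - w_global, c_i_new - c_i                   # model delta and control-variate delta

def scaffold_server(w_global, c_global, deltas, c_deltas, num_clients, lr_server=1.0):
    w_global = w_global + lr_server * np.mean(deltas, axis=0)
    c_global = c_global + np.sum(c_deltas, axis=0) / num_clients   # only sampled clients contribute
    return w_global, c_global
```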
Federated learning (FL) algorithms usually sample a fraction of clients in each round (partial participation) when the number of participants is large and the server's communication bandwidth is limited. Recent works on the convergence analysis of FL have focused on unbiased client sampling, e.g., sampling uniformly at random, which suffers from slow wall-clock convergence time due to high degrees of system heterogeneity and statistical heterogeneity. This paper aims to design an adaptive client sampling algorithm that tackles both system and statistical heterogeneity to minimize the wall-clock convergence time. We obtain a new tractable convergence bound for FL algorithms with arbitrary client sampling probabilities. Based on the bound, we analytically establish the relationship between the total learning time and the sampling probabilities, which leads to a non-convex optimization problem for training time minimization. We design an efficient algorithm to learn the unknown parameters in the convergence bound and develop a low-complexity algorithm to approximately solve the non-convex problem. Experimental results from both a hardware prototype and simulations demonstrate that our proposed sampling scheme significantly reduces the convergence time compared with several baseline sampling schemes. Notably, on the hardware prototype, our scheme spends 73% less time than the uniform sampling baseline to reach the same target loss.
Data-heterogeneous federated learning (FL) systems suffer from two significant sources of convergence error: 1) client drift error, caused by performing multiple local optimization steps at the clients, and 2) partial client participation error, caused by the fact that only a small subset of the edge clients participate in each training round. We find that, of these two, only the former has received significant attention in the literature. To remedy this, we propose FedVARP, a novel variance reduction algorithm applied at the server that eliminates the error due to partial client participation. To do so, the server simply maintains in memory the most recent update for each client and uses these as surrogate updates for the non-participating clients in every round. Further, to alleviate the memory requirement at the server, we propose a novel clustering-based variance reduction algorithm, ClusterFedVARP. Unlike previously proposed methods, both FedVARP and ClusterFedVARP require no additional computation at the clients or communication of additional optimization parameters. Through extensive experiments, we show that FedVARP outperforms state-of-the-art methods, and that ClusterFedVARP achieves performance comparable to FedVARP with much lower memory requirements.
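A sketch of the server-side step described above: the server stores the most recent update of every client and substitutes the stored update for clients that did not participate this round, yielding a variance-reduced aggregate. The exact estimator below (a SAGA-style correction) is a plausible reading of that description, not the paper's verbatim update rule, and client updates are assumed to be model deltas.

```python
import numpy as np

def fedvarp_server_round(w, client_updates, state, lr_server=1.0):
    """client_updates: dict {client_id: local model delta} for participating clients only.
    state: array of shape (num_clients, dim) holding each client's most recent stored update."""
    sampled = list(client_updates.keys())
    # Variance-reduced aggregate: correct the sampled updates with the stored surrogates.
    correction = np.mean([client_updates[i] - state[i] for i in sampled], axis=0)
    aggregate = correction + state.mean(axis=0)      # stored updates stand in for absent clients
    for i in sampled:                                # refresh memory for the participants
        state[i] = client_updates[i]
    return w + lr_server * aggregate, state
```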