In many machine learning applications, large-scale and privacy-sensitive data are generated on many mobile or IoT devices, and collecting the data in a centralized location may be prohibitive. It is therefore increasingly attractive to estimate parameters on the mobile or IoT devices while keeping the data localized. This learning setting is known as cross-device federated learning. In this paper, we propose the first theoretically guaranteed algorithms for general minimax problems in the cross-device federated learning setting. Our algorithms require only a small fraction of devices in each round of training, which overcomes the difficulty introduced by the low availability of devices. We perform multiple local update steps on clients before communicating with the server to reduce communication overhead, and leverage global gradient estimates to correct the bias in the local update directions introduced by data heterogeneity. Through a novel potential-function-based analysis, we establish theoretical convergence guarantees for our algorithms. Experimental results on AUC maximization, robust adversarial network training, and GAN training tasks demonstrate the efficiency of our algorithms.
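As a rough illustration of this setting (not the authors' algorithm, which additionally uses a global gradient estimate to correct the local-update bias), the sketch below runs local stochastic gradient descent ascent on a randomly sampled subset of clients and averages their model deltas at the server. The quadratic client objectives, client counts, and step sizes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
num_clients, clients_per_round, local_steps, rounds = 20, 4, 5, 50
eta = 0.05  # local step size (assumed)

# Toy min-max clients with heterogeneous curvature:
# f_i(x, y) = 0.5*a_i*x^2 + b_i*x*y - 0.5*y^2, whose saddle point is the origin.
a = rng.uniform(0.5, 1.5, num_clients)
b = rng.uniform(0.5, 1.5, num_clients)

x, y = 1.0, 1.0                                     # global primal/dual variables
for r in range(rounds):
    sampled = rng.choice(num_clients, clients_per_round, replace=False)
    dx, dy = 0.0, 0.0
    for i in sampled:
        xi, yi = x, y
        for _ in range(local_steps):
            gx = a[i] * xi + b[i] * yi              # d f_i / d x
            gy = b[i] * xi - yi                     # d f_i / d y
            xi, yi = xi - eta * gx, yi + eta * gy   # descent on x, ascent on y
        dx += (xi - x) / clients_per_round
        dy += (yi - y) / clients_per_round
    x, y = x + dx, y + dy                           # server averages the sampled deltas
print(f"final iterate: x={x:.4f}, y={y:.4f} (the saddle point of every f_i is the origin)")
```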
To lower the communication complexity of federated min-max learning, a natural approach is to utilize the idea of infrequent communications (through multiple local updates) same as in conventional federated learning. However, due to the more complicated inter-outer problem structure in federated min-max learning, theoretical understandings of communication complexity for federated min-max learning with infrequent communications remain very limited in the literature. This is particularly true for settings with non-i.i.d. datasets and partial client participation. To address this challenge, in this paper, we propose a new algorithmic framework called stochastic sampling averaging gradient descent ascent (SAGDA), which i) assembles stochastic gradient estimators from randomly sampled clients as control variates and ii) leverages two learning rates on both server and client sides. We show that SAGDA achieves a linear speedup in terms of both the number of clients and local update steps, which yields an $\mathcal{O}(\epsilon^{-2})$ communication complexity that is orders of magnitude lower than the state of the art. Interestingly, by noting that the standard federated stochastic gradient descent ascent (FSGDA) is in fact a control-variate-free special version of SAGDA, we immediately arrive at an $\mathcal{O}(\epsilon^{-2})$ communication complexity result for FSGDA. Therefore, through the lens of SAGDA, we also advance the current understanding on communication complexity of the standard FSGDA method for federated min-max learning.
Standard federated optimization methods are successfully applied to stochastic problems with single-level structure. However, many contemporary ML problems, including adversarial robustness, hyperparameter tuning, and actor-critic, fall under nested bilevel programming, which subsumes minimax and compositional optimization. In this work, we propose FedNest: a federated alternating stochastic gradient method to address general nested problems. We establish provable convergence rates for FedNest in the presence of heterogeneous data and introduce variations for bilevel, minimax, and compositional optimization. FedNest introduces multiple innovations, including federated hypergradient computation and variance reduction, to address inner-level heterogeneity. We complement our theory with experiments on hyperparameter & hyper-representation learning and minimax optimization, demonstrating the benefits of our method in practice. Code is available at https://github.com/ucr-optml/fednest.
This paper focuses on stochastic methods for solving smooth nonconvex strongly-concave minimax problems, which have received increasing attention due to their potential applications in deep learning (e.g., deep AUC maximization, distributionally robust optimization). However, most existing algorithms are slow in practice, and their analysis revolves around convergence to a nearly stationary point. We consider leveraging the Polyak-Łojasiewicz (PL) condition to design faster stochastic algorithms with stronger convergence guarantees. Although the PL condition has been used to design many stochastic minimization algorithms, its application to nonconvex minimax optimization remains rare. In this paper, we propose and analyze a generic framework of proximal epoch-based methods, in which many well-known stochastic updates can be embedded. Fast convergence is established in terms of both the primal objective gap and the duality gap. Compared with existing studies, (i) our analysis is based on a novel Lyapunov function consisting of the primal objective gap and the duality gap of a regularized function, and (ii) the results are more comprehensive, with improved rates that have better dependence on the condition number under different assumptions. We also conduct experiments in deep and non-deep learning to verify the effectiveness of our methods.
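For reference, a minimal statement of the primal function and the PL condition used in this line of work, written in generic notation (an assumption for illustration) rather than the paper's exact formulation:

```latex
% Primal function of the min-max problem:  P(x) = max_y f(x, y),  goal:  min_x P(x).
% P satisfies the PL condition with parameter mu > 0 if, for all x,
\[
  \frac{1}{2}\,\bigl\|\nabla P(x)\bigr\|^{2} \;\ge\; \mu\,\Bigl(P(x) - \min_{x'} P(x')\Bigr),
\]
% under which one can bound the primal objective gap P(x) - min P (and a duality gap)
% instead of settling for near-stationarity of the gradient.
```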
Federated Averaging (FEDAVG) has emerged as the algorithm of choice for federated learning due to its simplicity and low communication cost. However, in spite of recent research efforts, its performance is not fully understood. We obtain tight convergence rates for FEDAVG and prove that it suffers from 'client-drift' when the data is heterogeneous (non-iid), resulting in unstable and slow convergence. As a solution, we propose a new algorithm (SCAFFOLD) which uses control variates (variance reduction) to correct for the 'client-drift' in its local updates. We prove that SCAFFOLD requires significantly fewer communication rounds and is not affected by data heterogeneity or client sampling. Further, we show that (for quadratics) SCAFFOLD can take advantage of similarity in the clients' data, yielding even faster convergence. The latter is the first result to quantify the usefulness of local steps in distributed optimization.
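A minimal sketch of the control-variate correction behind SCAFFOLD on toy quadratic clients; full participation, the step sizes, and the client objectives are simplifying assumptions, and the control variates follow the cheaper "option II" style update (estimated from local progress) rather than a full gradient evaluation.

```python
import numpy as np

rng = np.random.default_rng(1)
num_clients, local_steps, rounds, eta = 10, 10, 100, 0.1

# Heterogeneous quadratic clients: f_i(w) = 0.5*(w - m_i)^2, global optimum is mean(m_i)
m = rng.normal(0.0, 5.0, num_clients)

w = 0.0
c = 0.0                                   # server control variate
c_i = np.zeros(num_clients)               # client control variates
for r in range(rounds):
    deltas_w, deltas_c = [], []
    for i in range(num_clients):          # full participation, for simplicity
        wi = w
        for _ in range(local_steps):
            g = wi - m[i]                 # local gradient
            wi -= eta * (g - c_i[i] + c)  # drift-corrected local step
        c_new = c_i[i] - c + (w - wi) / (local_steps * eta)  # option-II-style update
        deltas_w.append(wi - w)
        deltas_c.append(c_new - c_i[i])
        c_i[i] = c_new
    w += np.mean(deltas_w)
    c += np.mean(deltas_c)                # aggregate control-variate updates
print(f"learned w = {w:.4f}, target = {m.mean():.4f}")
```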
As a novel distributed learning paradigm, federated learning (FL) faces serious challenges in dealing with massive clients that have heterogeneous data distributions and computation and communication resources. Various client-variance-reduction schemes and client sampling strategies have been introduced to improve the robustness of FL. Among others, primal-dual algorithms such as the alternating direction method of multipliers (ADMM) have been found to be resilient to data distribution and to outperform most of the primal-only FL algorithms. However, the reason behind this robustness has remained a mystery. In this paper, we first reveal that federated ADMM is essentially a client-variance-reduced algorithm. While this explains the inherent robustness of federated ADMM, its vanilla version lacks the ability to adapt to the degree of client heterogeneity. Besides, the global model at the server under client sampling is biased, which slows down practical convergence. To go beyond ADMM, we propose a novel primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and the bias of the global model. In addition, FedVRA unifies several representative FL algorithms in the sense that they are either special instances of FedVRA or close to it. Extensions of FedVRA to semi-/un-supervised learning are also presented. Experiments based on (semi-)supervised image classification tasks demonstrate the superiority of FedVRA over existing schemes in learning scenarios with massive heterogeneous clients and client sampling.
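To make the primal-dual structure concrete, here is a minimal sketch of federated consensus ADMM on scalar quadratic clients; the penalty parameter, closed-form local solves, and full participation are illustrative assumptions, and this is plain federated ADMM rather than FedVRA itself.

```python
import numpy as np

rng = np.random.default_rng(2)
num_clients, rounds, rho = 10, 50, 1.0    # rho: augmented-Lagrangian penalty (assumed)

# Heterogeneous quadratic clients: f_i(w) = 0.5*(w - m_i)^2, consensus optimum is mean(m_i)
m = rng.normal(0.0, 5.0, num_clients)

z = 0.0                                   # global (server) model
w = np.zeros(num_clients)                 # local models
lam = np.zeros(num_clients)               # dual variables, one per client
for r in range(rounds):
    # local primal update: argmin_w f_i(w) + lam_i*(w - z) + (rho/2)*(w - z)^2 (closed form here)
    w = (m - lam + rho * z) / (1.0 + rho)
    # server update: average of dual-corrected local models
    z = np.mean(w + lam / rho)
    # dual update keeps a running memory of each client's disagreement with the server,
    # which is what makes federated ADMM behave like a client-variance-reduced method
    lam = lam + rho * (w - z)
print(f"consensus model z = {z:.4f}, target = {m.mean():.4f}")
```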
In this work, we propose FedSSO, a server-side second-order optimization method for federated learning (FL). In contrast to previous work in this direction, we employ a server-side approximation in a quasi-Newton method without requiring any training data from the clients. In this way, we not only shift the computational burden from the clients to the server, but also eliminate the additional communication of second-order updates between clients and the server. We provide theoretical guarantees for the convergence of our new method, and empirically demonstrate its fast convergence and communication savings in both convex and nonconvex settings.
Data-heterogeneous federated learning (FL) systems suffer from two important sources of convergence error: 1) client drift error caused by performing multiple local optimization steps at the clients, and 2) partial client participation error caused by the fact that only a small subset of edge clients participate in each training round. We find that, among these, only the former has received significant attention in the literature. To remedy this, we propose FedVARP, a novel variance-reduction algorithm applied at the server that eliminates the error due to partial client participation. To do so, the server simply maintains in memory the most recent update of each client and uses it as a surrogate update for the non-participating clients in every round. Furthermore, to alleviate the memory requirement at the server, we propose a novel clustering-based variance-reduction algorithm, ClusterFedVARP. Unlike previously proposed methods, neither FedVARP nor ClusterFedVARP requires additional computation at the clients or communication of additional optimization parameters. Through extensive experiments, we show that FedVARP outperforms state-of-the-art methods, and that ClusterFedVARP achieves performance comparable to FedVARP with much lower memory requirements.
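A minimal, simplified sketch of the server-side surrogate mechanism described above: the server stores each client's most recent update and reuses it for clients that do not participate in the current round. The toy objectives, sampling scheme, learning rates, and the exact aggregation rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
num_clients, clients_per_round, local_steps, rounds = 20, 5, 5, 200
eta_l, eta_g = 0.1, 0.5                  # local / global step sizes (assumed)

m = rng.normal(0.0, 3.0, num_clients)    # f_i(w) = 0.5*(w - m_i)^2
memory = np.zeros(num_clients)           # last seen update of every client
w = 0.0
for r in range(rounds):
    sampled = rng.choice(num_clients, clients_per_round, replace=False)
    for i in sampled:
        wi = w
        for _ in range(local_steps):
            wi -= eta_l * (wi - m[i])
        memory[i] = w - wi               # store this client's fresh update
    # use fresh updates for sampled clients and stored surrogates for the rest
    # (a simplified variant of the idea, not FedVARP's exact estimator)
    w -= eta_g * memory.mean()
print(f"learned w = {w:.4f}, target = {m.mean():.4f}")
```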
The practice of applying several local updates before aggregation across clients has been empirically shown to be a successful approach to overcoming the communication bottleneck in federated learning (FL). In this work, we propose a general recipe, FedShuffle, that better utilizes local updates in FL, especially under heterogeneity. Unlike many previous works, FedShuffle does not assume any uniformity in the number of updates per device. Our FedShuffle recipe comprises four simple ingredients: 1) local shuffling of the data, 2) adjustment of the local learning rates, 3) update weighting, and 4) momentum variance reduction (Cutkosky and Orabona, 2019). We present a comprehensive theoretical analysis of FedShuffle and show, both theoretically and empirically, that our approach does not suffer from the objective function mismatch that is present in FL methods which assume homogeneous updates in heterogeneous FL settings, such as FedAvg (McMahan et al., 2017). In addition, by combining the ingredients above, FedShuffle improves upon FedNova (Wang et al., 2020), which was previously proposed to address this mismatch. We also show that, under a Hessian similarity assumption, FedShuffle with momentum variance reduction can improve upon non-local methods. Finally, through experiments on synthetic and real-world datasets, we illustrate how each of the four ingredients used in FedShuffle helps improve the use of local updates in FL.
The FedProx algorithm is a simple yet powerful distributed proximal optimization method widely used for federated learning (FL) over heterogeneous data. Despite its popularity and remarkable success witnessed in practice, the theoretical understanding of FedProx is largely underinvestigated: its appealing convergence behavior has so far been characterized only under certain non-standard and unrealistic dissimilarity assumptions on the local functions, and the results are limited to smooth optimization problems. To remedy these deficiencies, we develop a novel local-dissimilarity-invariant theory for FedProx and its minibatch stochastic extension through the lens of algorithmic stability. As a result, we contribute several new and deeper insights into FedProx for non-convex federated optimization, including: 1) convergence guarantees independent of any local dissimilarity condition; 2) convergence guarantees for non-smooth FL problems; and 3) linear speedup with respect to the minibatch size and the number of sampled devices. Our theory reveals, for the first time, that local dissimilarity and smoothness are not must-haves for FedProx to obtain favorable complexity bounds. Preliminary experimental results on a series of benchmark FL datasets are reported to demonstrate the benefit of minibatching for improving the sample efficiency of FedProx.
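The central mechanism in FedProx is the proximal term added to each local objective, which pulls local iterates back toward the current global model. The sketch below shows this on toy quadratics with an assumed proximal coefficient mu; it is a generic illustration of the local update, not tied to the stability analysis above.

```python
import numpy as np

rng = np.random.default_rng(4)
num_clients, local_steps, rounds = 10, 10, 100
eta, mu = 0.1, 0.5                        # local step size and proximal coefficient (assumed)

m = rng.normal(0.0, 5.0, num_clients)     # f_i(w) = 0.5*(w - m_i)^2

w = 0.0
for r in range(rounds):
    local = []
    for i in range(num_clients):
        wi = w
        for _ in range(local_steps):
            # gradient of f_i(wi) + (mu/2)*(wi - w)^2: the proximal term keeps the
            # local iterate close to the current global model despite heterogeneity
            g = (wi - m[i]) + mu * (wi - w)
            wi -= eta * g
        local.append(wi)
    w = np.mean(local)                    # FedAvg-style aggregation of local solutions
print(f"learned w = {w:.4f}, target = {m.mean():.4f}")
```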
Federated adversarial domain adaptation is a unique distributed minimax training task, because label imbalance is prevalent across clients and each client sees only a subset of the label classes needed to train the global model. To tackle this problem, we propose a distributed minimax optimizer, referred to as FedMM, designed specifically for the federated adversarial domain adaptation problem. It works well even when clients have different label classes and some clients have only unsupervised tasks. We prove that FedMM is guaranteed to converge to a stationary point for domain-shifted unsupervised data. Extensive experiments on a variety of benchmark datasets show that FedMM consistently outperforms federated-averaging methods based on gradient descent ascent (GDA): for example, when training from scratch it gains around 20% in accuracy over other GDA-based federated averaging methods under the same communication rounds, and when training from pre-trained models it consistently outperforms them by 5.4% to 9% across different networks.
It is well understood that client-master communication can be a primary bottleneck in federated learning. In this work, we address this issue with a novel client subsampling scheme in which we restrict the number of clients allowed to communicate their updates back to the master node. In each communication round, all participating clients compute their updates, but only the ones with "important" updates communicate back to the master. We show that importance can be measured using only the norm of the update, and we give a formula for optimal client participation. This formula minimizes the distance between the full update, where all clients participate, and our limited update, where the number of participating clients is restricted. In addition, we provide a simple algorithm that approximates the optimal formula for client participation and requires only secure aggregation, so it does not compromise client privacy. We show both theoretically and empirically that, for Distributed SGD (DSGD) and Federated Averaging (FedAvg), the performance of our approach can be close to full participation and is superior to the baseline where participating clients are sampled uniformly. Moreover, our approach is compatible with existing methods for reducing communication overhead, such as local methods and communication compression methods.
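A minimal sketch of norm-based client subsampling (not the paper's exact optimal-participation formula): every client computes an update, only the k clients with the largest update norms communicate, and the dropped updates are treated as zero so that the limited aggregate stays close to the full one. The objectives, k, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
num_clients, k, rounds, eta, dim = 20, 5, 100, 0.5, 10

# Heterogeneous least-squares clients: f_i(w) = 0.5*||w - m_i||^2
m = rng.normal(0.0, 3.0, (num_clients, dim))

w = np.zeros(dim)
for r in range(rounds):
    updates = -eta * (w - m)                       # one local gradient step per client
    norms = np.linalg.norm(updates, axis=1)
    chosen = np.argsort(norms)[-k:]                # keep only the k largest-norm updates
    # treat the dropped (small-norm) updates as zero; this is the closest k-sparse
    # approximation of the full average, in line with the importance criterion above
    w += updates[chosen].sum(axis=0) / num_clients
print(f"distance to the full-participation optimum: {np.linalg.norm(w - m.mean(axis=0)):.4f}")
```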
Federated learning is a distributed framework according to which a model is trained over a set of devices, while keeping data localized. This framework faces several systems-oriented challenges, which include (i) communication bottleneck, since a large number of devices upload their local updates to a parameter server, and (ii) scalability, as the federated network consists of millions of devices. Due to these systems challenges as well as issues related to statistical heterogeneity of data and privacy concerns, designing a provably efficient federated learning method is of significant importance yet it remains challenging. In this paper, we present FedPAQ, a communication-efficient Federated Learning method with Periodic Averaging and Quantization. FedPAQ relies on three key features: (1) periodic averaging, where models are updated locally at devices and only periodically averaged at the server; (2) partial device participation, where only a fraction of devices participate in each round of the training; and (3) quantized message-passing, where the edge nodes quantize their updates before uploading to the parameter server. These features address the communications and scalability challenges in federated learning. We also show that FedPAQ achieves near-optimal theoretical guarantees for strongly convex and non-convex loss functions and empirically demonstrate the communication-computation tradeoff provided by our method.
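One of the three ingredients, quantized message-passing, can be illustrated with a simple unbiased stochastic quantizer applied to client updates before upload; the quantizer resolution, toy objectives, and hyperparameters are assumptions, and this is not the full FedPAQ algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)

def stochastic_quantize(v, levels=4):
    """Unbiased stochastic quantizer: keeps the norm and rounds each coordinate's
    scaled magnitude up or down at random so that E[output] = v."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    scaled = np.abs(v) / norm * levels             # in [0, levels]
    lower = np.floor(scaled)
    prob_up = scaled - lower                       # probability of rounding up
    quantized = lower + (rng.random(v.shape) < prob_up)
    return np.sign(v) * quantized * norm / levels

num_clients, clients_per_round, local_steps, rounds, eta, dim = 20, 5, 5, 100, 0.1, 10
m = rng.normal(0.0, 3.0, (num_clients, dim))       # f_i(w) = 0.5*||w - m_i||^2

w = np.zeros(dim)
for r in range(rounds):
    sampled = rng.choice(num_clients, clients_per_round, replace=False)
    agg = np.zeros(dim)
    for i in sampled:
        wi = w.copy()
        for _ in range(local_steps):
            wi -= eta * (wi - m[i])                # periodic averaging: local steps only
        agg += stochastic_quantize(wi - w)         # quantize the update before upload
    w += agg / clients_per_round                   # server averages the quantized updates
print(f"distance to optimum: {np.linalg.norm(w - m.mean(axis=0)):.4f}")
```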
Federated learning is a machine learning training paradigm that enables clients to jointly train models without sharing their own localized data. However, the implementation of federated learning in practice still faces numerous challenges, such as the large communication overhead due to repetitive server-client synchronization and the lack of adaptivity in SGD-based model updates. Although various methods have been proposed to reduce the communication cost through gradient compression or quantization, and federated versions of adaptive optimizers such as FedAdam have been proposed to add adaptivity, current federated learning frameworks still cannot solve the aforementioned challenges all at once. In this paper, we propose a novel communication-adaptive federated learning method (FedCAMS) with theoretical convergence guarantees. We show that, in the nonconvex stochastic optimization setting, our proposed FedCAMS achieves the same convergence rate of $O(\frac{1}{\sqrt{TKm}})$ as its non-compressed counterparts. Extensive experiments on various benchmarks verify our theoretical analysis.
While variance-reduction methods have shown great success in solving large-scale optimization problems, many of them suffer from accumulated errors and should therefore periodically require a full gradient computation. In this paper, we present a single-loop algorithm, SLEDGE (single-loop method for gradient estimator), for finite-sum nonconvex optimization, which does not require periodic refreshes of the gradient estimator yet achieves nearly optimal gradient complexity. Unlike existing methods, SLEDGE has the advantage of versatility: (i) second-order optimality, (ii) exponential convergence in the PL region, and (iii) smaller complexity under low heterogeneity of data. We build an efficient federated learning algorithm by exploiting these favorable properties. We show first- and second-order optimality of its output and also provide analysis under the PL condition. When the local budget is sufficiently large and the clients are less (Hessian-)heterogeneous, the algorithm requires fewer communication rounds than existing methods such as FedAvg, SCAFFOLD, and Mime. The superiority of our method is verified in numerical experiments.
The increasing size of data generated by smartphones and IoT devices motivated the development of Federated Learning (FL), a framework for on-device collaborative training of machine learning models. First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client, due to the inherent heterogeneity of local data distributions. Federated multi-task learning (MTL) approaches can learn personalized models by formulating an opportune penalized optimization problem. The penalization term can capture complex relations among personalized models, but eschews clear statistical assumptions about local data distributions. In this work, we propose to study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions. This assumption encompasses most of the existing personalized FL approaches and leads to federated EM-like algorithms for both client-server and fully decentralized settings. Moreover, it provides a principled way to serve personalized models to clients not seen at training time. The algorithms' convergence is analyzed through a novel federated surrogate optimization framework, which can be of general interest. Experimental results on FL benchmarks show that our approach provides models with higher accuracy and fairness than state-of-the-art methods.
Large-scale convex-concave minimax problems arise in numerous applications, including game theory, robust training, and the training of generative adversarial networks. Despite their wide applicability, solving such problems efficiently and effectively in the presence of large amounts of data using existing stochastic minimax methods is challenging. We study a class of stochastic minimax methods and develop a communication-efficient distributed stochastic extragradient algorithm, LocalAdaSEG, with an adaptive learning rate suitable for solving convex-concave minimax problems in the parameter-server model. LocalAdaSEG has three main features: (i) a periodic communication strategy that reduces the communication cost between workers and the server; (ii) adaptive learning rates that are computed locally and allow for a tuning-free implementation; and (iii) theoretically, a nearly linear speedup with respect to the dominant variance term, arising from the estimation of the stochastic gradient, is proven in both the smooth and nonsmooth convex-concave settings. LocalAdaSEG is used to solve a stochastic bilinear game and to train a generative adversarial network. We compare LocalAdaSEG with several existing optimizers for minimax problems and demonstrate its efficacy through several experiments in both homogeneous and heterogeneous settings.
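To make the extragradient mechanism concrete, below is a minimal single-worker, non-adaptive stochastic extragradient loop on a bilinear game; LocalAdaSEG adds local steps, locally computed adaptive learning rates, and periodic server averaging on top of this, which the sketch omits. The coupling matrix, noise level, and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
dim, steps, eta, noise = 5, 1000, 0.2, 0.02
A = np.eye(dim) + 0.1 * rng.normal(size=(dim, dim))   # well-conditioned coupling (assumed)
# Bilinear game: min_x max_y x^T A y, saddle point at the origin

x, y = rng.normal(size=dim), rng.normal(size=dim)
for t in range(steps):
    # stochastic gradients at the current point
    gx = A @ y + noise * rng.normal(size=dim)
    gy = A.T @ x + noise * rng.normal(size=dim)
    # extrapolation (look-ahead) step
    x_half, y_half = x - eta * gx, y + eta * gy
    # update step using gradients evaluated at the look-ahead point
    gx_half = A @ y_half + noise * rng.normal(size=dim)
    gy_half = A.T @ x_half + noise * rng.normal(size=dim)
    x, y = x - eta * gx_half, y + eta * gy_half
print(f"distance to the saddle point: {np.linalg.norm(np.concatenate([x, y])):.4f}")
```

Plain simultaneous gradient descent ascent diverges on this bilinear game; the look-ahead step is what makes the iteration contract toward the saddle point.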
In recent federated learning research, using large batches improves the convergence rate, but it incurs additional computational overhead compared with using small batches. To overcome this limitation, we propose a unified framework, FedAMD, which partitions participants into anchor and miner groups based on time-varying probabilities. Each client in the anchor group computes its gradient using a large batch, which is regarded as its bullseye. Clients in the miner group perform multiple local updates using serial mini-batches, and each local update is also indirectly regulated by a global target derived from the average of the clients' bullseyes. As a result, the miner group follows a near-optimal update towards the global minimizer, which is well suited for updating the global model. Measured by $\epsilon$-approximation, FedAMD achieves a convergence rate of $O(1/\epsilon)$ under nonconvex objectives by sampling anchors with a constant probability. This theoretical result considerably surpasses the $O(1/\epsilon^{3/2})$ rate of the state-of-the-art algorithm BVR-L-SGD, while FedAMD reduces the communication overhead by at least $O(1/\epsilon)$. Empirical studies on real-world datasets validate the effectiveness of FedAMD and demonstrate the superiority of our proposed algorithm.
Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data. Standard federated optimization methods such as Federated Averaging (FEDAVG) are often difficult to tune and exhibit unfavorable convergence behavior. In non-federated settings, adaptive optimization methods have had notable success in combating such issues. In this work, we propose federated versions of adaptive optimizers, including ADAGRAD, ADAM, and YOGI, and analyze their convergence in the presence of heterogeneous data for general nonconvex settings. Our results highlight the interplay between client heterogeneity and communication efficiency. We also perform extensive experiments on these methods and show that the use of adaptive optimizers can significantly improve the performance of federated learning.
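A minimal sketch of the server-side adaptive update pattern (FedAdam-style): clients run local SGD, the negated average model delta is treated as a pseudo-gradient, and the server applies an Adam-like step to it. The toy objectives, full participation, and all hyperparameters are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(8)
num_clients, local_steps, rounds, dim = 20, 5, 200, 10
eta_l, eta_g, beta1, beta2, tau = 0.1, 0.1, 0.9, 0.99, 0.1   # assumed hyperparameters

m = rng.normal(0.0, 3.0, (num_clients, dim))   # f_i(w) = 0.5*||w - m_i||^2

w = np.zeros(dim)
mom = np.zeros(dim)                            # server first moment
v = np.zeros(dim)                              # server second moment
for r in range(rounds):
    delta = np.zeros(dim)
    for i in range(num_clients):               # full participation, for simplicity
        wi = w.copy()
        for _ in range(local_steps):
            wi -= eta_l * (wi - m[i])          # local SGD steps
        delta += (wi - w) / num_clients
    g = -delta                                 # pseudo-gradient seen by the server
    mom = beta1 * mom + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2       # Adam-style; a Yogi variant updates v differently
    w -= eta_g * mom / (np.sqrt(v) + tau)      # adaptive server step
print(f"distance to optimum: {np.linalg.norm(w - m.mean(axis=0)):.4f}")
```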
In federated optimization, heterogeneity in the clients' local datasets and computation speeds results in large variations in the number of local updates performed by each client in each communication round. Naive weighted aggregation of such models causes objective inconsistency, that is, the global model converges to a stationary point of a mismatched objective function which can be arbitrarily different from the true objective. This paper provides a general framework to analyze the convergence of federated heterogeneous optimization algorithms. It subsumes previously proposed methods such as FedAvg and FedProx and provides the first principled understanding of the solution bias and the convergence slowdown due to objective inconsistency. Using insights from this analysis, we propose FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
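A minimal sketch of the normalized-averaging idea: clients run different numbers of local SGD steps, each cumulative update is normalized by its own step count before averaging, and the server rescales the averaged direction by an effective step count. The toy setup, plain-SGD local solver, and the particular choice of effective step count are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)
num_clients, rounds, eta = 10, 300, 0.02

m = rng.normal(0.0, 3.0, num_clients)           # f_i(w) = 0.5*(w - m_i)^2
tau = rng.integers(1, 11, num_clients)          # heterogeneous local step counts (1..10)

w = 0.0
for r in range(rounds):
    normalized = []
    for i in range(num_clients):
        wi = w
        for _ in range(tau[i]):
            wi -= eta * (wi - m[i])
        normalized.append((w - wi) / tau[i])    # normalize by this client's own step count
    tau_eff = tau.mean()                        # effective number of steps (one common choice)
    w -= tau_eff * np.mean(normalized)          # rescale the averaged normalized direction
# with a small local learning rate the residual inconsistency is small,
# so the learned model should be close to the unweighted average of client optima
print(f"learned w = {w:.4f}, average of client optima = {m.mean():.4f}")
```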