Recently, model-agnostic meta-learning (MAML) has garnered tremendous attention. However, the stochastic optimization of MAML is still immature. Existing algorithms for MAML are based on the "episode" idea, which samples a few tasks and a few data points for each sampled task at every iteration to update the meta-model. However, they either do not necessarily guarantee convergence with a constant mini-batch size or require processing a large number of tasks at every iteration, which is not viable for continual learning or cross-device federated learning, where only a small number of tasks are available per iteration or per round. This paper addresses these issues by (i) proposing efficient memory-based stochastic algorithms for MAML with a vanishing convergence error, which only require sampling a constant number of tasks and a constant number of data samples per iteration; and (ii) proposing communication-efficient distributed memory-based MAML algorithms for personalized federated learning in both the cross-device (with client sampling) and the cross-silo (without client sampling) settings. The theoretical results significantly improve the optimization theory of MAML, and the empirical results also corroborate the theory.
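For concreteness, a standard way to write the MAML objective that this discussion refers to is the following sketch (generic notation assumed here, not necessarily the exact formulation used in the paper):

$$\min_{w\in\mathbb{R}^d}\; F(w) := \mathbb{E}_{i\sim\mathcal{P}}\Big[\, f_i\big(w - \alpha \nabla f_i(w)\big) \,\Big],$$

where $\mathcal{P}$ is the task distribution, $f_i$ is the loss of task $i$, and $\alpha>0$ is the inner step size; an "episode" then corresponds to sampling a few tasks $i$ and mini-batches of their data to estimate $\nabla F(w)$.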
Standard federated optimization methods successfully apply to stochastic problems with single-level structure. However, many contemporary ML problems -- including adversarial robustness, hyperparameter tuning, and actor-critic -- fall under nested bilevel programming, which subsumes minimax and compositional optimization. In this work, we propose FedNest: a federated alternating stochastic gradient method to address general nested problems. We establish provable convergence rates for FedNest in the presence of heterogeneous data and introduce variations for bilevel, minimax, and compositional optimization. FedNest introduces multiple innovations, including federated hypergradient computation and variance reduction, to address inner-level heterogeneity. We complement our theory with experiments on hyper-parameter & hyper-representation learning and minimax optimization to demonstrate the benefits of our method in practice. Code is available at https://github.com/ucr-optml/fednest.
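For reference, the generic nested (bilevel) problem alluded to above can be written as follows (a standard formulation with notation assumed here, not taken from the paper):

$$\min_{x}\; f\big(x, y^*(x)\big) \quad \text{s.t.} \quad y^*(x) \in \arg\min_{y}\; g(x, y),$$

which subsumes minimax optimization as a special case (take $g = -f$) and, when the inner solution is an expectation with a closed form, reduces to compositional optimization.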
In Federated Learning, we aim to train models across multiple computing units (users), while users can only communicate with a common central server, without exchanging their data samples. This mechanism exploits the computational power of all users and allows users to obtain a richer model as their models are trained over a larger set of data points. However, this scheme only develops a common output for all the users, and, therefore, it does not adapt the model to each user. This is an important missing feature, especially given the heterogeneity of the underlying data distribution for various users. In this paper, we study a personalized variant of federated learning in which our goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent with respect to their own data. This approach keeps all the benefits of the federated learning architecture, and, by structure, leads to a more personalized model for each user. We show this problem can be studied within the Model-Agnostic Meta-Learning (MAML) framework. Inspired by this connection, we study a personalized variant of the well-known Federated Averaging algorithm and evaluate its performance in terms of gradient norm for non-convex loss functions. Further, we characterize how this performance is affected by the closeness of underlying distributions of user data, measured in terms of distribution distances such as Total Variation and 1-Wasserstein metric. Recently, the idea of personalization in FL and its connections with MAML has gained a lot of attention. In particular, [32] considers a formulation and algorithm similar to our paper, and elaborates
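The sketch below illustrates the MAML-style adaptation described above: a client takes one gradient step from the shared initialization, and the meta-gradient of the adapted loss is computed by the chain rule. All function and variable names are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def personalized_step(w_global, grad_fn, alpha=0.01):
    """One-step adaptation: the client takes a single gradient step from the
    shared initialization w_global using its own local gradient oracle."""
    return w_global - alpha * grad_fn(w_global)

def meta_gradient(w_global, grad_fn, hess_fn, alpha=0.01):
    """Exact gradient of f_i(w - alpha * grad f_i(w)) w.r.t. w; in practice
    Hessian-vector products or first-order approximations are typically used."""
    w_adapted = personalized_step(w_global, grad_fn, alpha)
    d = w_global.size
    return (np.eye(d) - alpha * hess_fn(w_global)) @ grad_fn(w_adapted)
```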
Data-heterogeneous federated learning (FL) systems suffer from two significant sources of convergence error: 1) client drift error, caused by performing multiple local optimization steps at the clients, and 2) partial client participation error, caused by the fact that only a small subset of the edge clients participate in each training round. We find that, among these, only the former has received significant attention in the literature. To remedy this, we propose FedVARP, a novel variance reduction algorithm applied at the server that eliminates the error due to partial client participation. To do so, the server simply maintains in memory the most recent update from each client and uses it as a surrogate update for the non-participating clients in every round. Further, to alleviate the memory requirement at the server, we propose a novel clustering-based variance reduction algorithm, ClusterFedVARP. Unlike previously proposed methods, neither FedVARP nor ClusterFedVARP requires additional computation at the clients or communication of additional optimization parameters. Through extensive experiments, we show that FedVARP outperforms state-of-the-art methods, and that ClusterFedVARP achieves performance comparable to FedVARP with much lower memory requirements.
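A minimal sketch of the core mechanism described above: the server stores the latest update of every client and reuses it for clients that did not participate this round. This illustrates the memory-based surrogate idea only, not the paper's exact variance-reduced estimator; names and the signature are assumptions.

```python
import numpy as np

def server_round(y_state, participating, new_updates):
    """y_state:       dict client_id -> last stored update (np.ndarray)
    participating:    set of client ids sampled this round
    new_updates:      dict client_id -> fresh update, for participating clients
    Returns the aggregated update over ALL clients, using stored updates as
    surrogates for the non-participating ones."""
    n = len(y_state)
    agg = np.zeros_like(next(iter(y_state.values())))
    for cid, stored in y_state.items():
        agg += new_updates[cid] if cid in participating else stored
    # refresh the stored updates of the clients that did participate
    for cid in participating:
        y_state[cid] = new_updates[cid]
    return agg / n
```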
The practice of applying several local updates before aggregation across clients has been empirically shown to be a successful approach to overcoming the communication bottleneck in federated learning (FL). In this work, we propose a general recipe, FedShuffle, that better utilizes local updates in FL, especially in heterogeneous regimes. Unlike many prior works, FedShuffle does not assume any uniformity in the number of updates per device. Our FedShuffle recipe comprises four simple functional ingredients: 1) local shuffling of the data, 2) adjustment of the local learning rates, 3) update weighting, and 4) momentum variance reduction (Cutkosky and Orabona, 2019). We present a comprehensive theoretical analysis of FedShuffle and show, both theoretically and empirically, that our approach does not suffer from the objective function mismatch that is present in FL methods which assume homogeneous updates in heterogeneous FL setups, such as FedAvg (McMahan et al., 2017). In addition, by combining the ingredients above, FedShuffle improves upon FedNova (Wang et al., 2020), which was previously proposed to solve this mismatch. We also show that FedShuffle with momentum variance reduction can improve upon non-local methods under a Hessian similarity assumption. Finally, through experiments on synthetic and real-world datasets, we illustrate how each of the four ingredients used in FedShuffle helps improve the use of local updates in FL.
Federated learning (FL) is a decentralized and privacy-preserving machine learning technique in which a group of clients collaborate with a server to learn a global model without sharing clients' data. One challenge associated with FL is statistical diversity among clients, which restricts the global model from delivering good performance on each client's task. To address this, we propose an algorithm for personalized FL (pFedMe) using Moreau envelopes as clients' regularized loss functions, which help decouple personalized model optimization from the global model learning in a bi-level problem stylized for personalized FL. Theoretically, we show that pFedMe's convergence rate is state-of-the-art: achieving quadratic speedup for strongly convex and sublinear speedup of order 2/3 for smooth nonconvex objectives. Experimentally, we verify that pFedMe excels at empirical performance compared with the vanilla FedAvg and Per-FedAvg, a meta-learning based personalized FL algorithm.
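Concretely, the Moreau-envelope formulation described above can be written as follows (notation assumed here):

$$\min_{w}\; \frac{1}{N}\sum_{i=1}^{N} F_i(w), \qquad F_i(w) := \min_{\theta_i}\Big\{ f_i(\theta_i) + \frac{\lambda}{2}\,\|\theta_i - w\|^2 \Big\},$$

where $\theta_i$ is client $i$'s personalized model, $w$ is the global model, and $\lambda$ controls how strongly the personalized models are pulled toward $w$; this is the bi-level structure that decouples personalized optimization (inner problem) from global model learning (outer problem).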
In federated optimization, heterogeneity in the clients' local datasets and computation speeds results in large variations in the number of local updates performed by each client in each communication round. Naive weighted aggregation of such models causes objective inconsistency, that is, the global model converges to a stationary point of a mismatched objective function which can be arbitrarily different from the true objective. This paper provides a general framework to analyze the convergence of federated heterogeneous optimization algorithms. It subsumes previously proposed methods such as FedAvg and FedProx and provides the first principled understanding of the solution bias and the convergence slowdown due to objective inconsistency. Using insights from this analysis, we propose FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
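A minimal sketch of normalized aggregation in the spirit described above: each client's contribution is normalized by its own number of local steps before the weighted average, so clients that ran more steps do not dominate the aggregated direction. Names are illustrative, and the exact FedNova weighting (e.g., for momentum or proximal local solvers) differs in details.

```python
import numpy as np

def normalized_aggregate(x_global, pseudo_grads, local_steps, weights, eta=1.0):
    """pseudo_grads[i]: x_global minus client i's final local iterate this round
    local_steps[i]:     number of local SGD steps client i performed (tau_i)
    weights[i]:         data fraction p_i of client i (the p_i sum to 1)"""
    # normalize each client's contribution by its own number of local steps
    normalized = [g / tau for g, tau in zip(pseudo_grads, local_steps)]
    # effective number of steps rescales the aggregated direction
    tau_eff = sum(p * tau for p, tau in zip(weights, local_steps))
    direction = sum(p * g for p, g in zip(weights, normalized))
    return x_global - eta * tau_eff * direction
```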
The increasing size of data generated by smartphones and IoT devices motivated the development of Federated Learning (FL), a framework for on-device collaborative training of machine learning models. First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client, due to the inherent heterogeneity of local data distributions. Federated multi-task learning (MTL) approaches can learn personalized models by formulating an opportune penalized optimization problem. The penalization term can capture complex relations among personalized models, but eschews clear statistical assumptions about local data distributions. In this work, we propose to study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions. This assumption encompasses most of the existing personalized FL approaches and leads to federated EM-like algorithms for both client-server and fully decentralized settings. Moreover, it provides a principled way to serve personalized models to clients not seen at training time. The algorithms' convergence is analyzed through a novel federated surrogate optimization framework, which can be of general interest. Experimental results on FL benchmarks show that our approach provides models with higher accuracy and fairness than state-of-the-art methods.
In this paper, we tackle a novel federated learning (FL) problem for optimizing a family of X-risks, to which no existing FL algorithms are applicable. In particular, the objective has the form of $\mathbb E_{z\sim S_1} f(\mathbb E_{z'\sim S_2} \ell(w; z, z'))$, where two sets of data $S_1, S_2$ are distributed over multiple machines, $\ell(\cdot)$ is a pairwise loss that only depends on the prediction outputs of the input data pairs $(z, z')$, and $f(\cdot)$ is possibly a non-linear non-convex function. This problem has important applications in machine learning, e.g., AUROC maximization with a pairwise loss, and partial AUROC maximization with a compositional loss. The challenges for designing an FL algorithm lie in the non-decomposability of the objective over multiple machines and the interdependency between different machines. To address the challenges, we propose an active-passive decomposition framework that decouples the gradient's components into two types, namely active parts and passive parts, where the active parts depend on local data that are computed with the local model and the passive parts depend on other machines that are communicated/computed based on historical models and samples. Under this framework, we develop two provable FL algorithms (FeDXL) for handling linear and nonlinear $f$, respectively, based on federated averaging and merging. We develop a novel theoretical analysis to combat the latency of the passive parts and the interdependency between the local model parameters and the involved data for computing local gradient estimators. We establish both iteration and communication complexities and show that using the historical samples and models for computing the passive parts does not degrade the complexities. We conduct empirical studies of FeDXL for deep AUROC and partial AUROC maximization, and demonstrate their performance compared with several baselines.
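To see where the active/passive split comes from, write $g(w; z) := \mathbb E_{z'\sim S_2}\,\ell(w; z, z')$; then, by the chain rule (a sketch in the notation of the objective above),

$$\nabla F(w) \;=\; \mathbb E_{z\sim S_1}\Big[\, f'\big(g(w;z)\big)\; \mathbb E_{z'\sim S_2}\,\nabla_w \ell(w; z, z') \,\Big],$$

where, for the machine holding $z$, the factors computable from local data with the local model are the active parts, while the factors requiring predictions on data $z'$ held by other machines are the passive parts, which are approximated from historical models and samples.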
The FedProx algorithm is a simple yet powerful distributed proximal optimization method widely used for federated learning (FL) over heterogeneous data. Despite its popularity and remarkable success witnessed in practice, the theoretical understanding of FedProx is largely underinvestigated: its appealing convergence behavior has so far been characterized only under certain non-standard and unrealistic dissimilarity assumptions on the local functions, and the results are limited to smooth optimization problems. In order to remedy these deficiencies, we develop a novel local-dissimilarity-invariant convergence theory for FedProx and its minibatch stochastic extension through the lens of algorithmic stability. As a result, we contribute several new and deeper insights into FedProx for non-convex federated optimization, including: 1) convergence guarantees independent of any local dissimilarity type condition; 2) convergence guarantees for non-smooth FL problems; and 3) linear speedup with respect to the minibatch size and the number of sampled devices. Our theory for the first time reveals that local dissimilarity and smoothness are not must-haves for FedProx to obtain favorable complexity bounds. Preliminary experimental results on a series of benchmark FL datasets are reported to demonstrate the benefit of minibatching for improving the sample efficiency of FedProx.
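For context, the local proximal subproblem that FedProx asks each sampled client $i$ to (approximately) solve around the current global model $w^t$ is

$$\min_{w}\; h_i(w; w^t) := f_i(w) + \frac{\mu}{2}\,\|w - w^t\|^2,$$

where $f_i$ is client $i$'s local objective and $\mu \ge 0$ is the proximal coefficient that limits how far local iterates drift from $w^t$.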
It is well understood that client-master communication can be a primary bottleneck in federated learning. In this work, we address this issue with a novel client subsampling scheme, where we restrict the number of clients allowed to communicate their updates back to the master node. In each communication round, all participating clients compute their updates, but only the ones with "important" updates communicate back to the master. We show that importance can be measured using only the norm of the update, and we give a formula for optimal client participation. This formula minimizes the distance between the full update, in which all clients participate, and our limited update, in which the number of participating clients is restricted. In addition, we provide a simple algorithm that approximates the optimal formula for client participation and only requires secure aggregation, so it does not compromise client privacy. We show, both theoretically and empirically, that for Distributed SGD (DSGD) and Federated Averaging (FedAvg) the performance of our approach can be close to full participation and is superior to the baseline where participating clients are sampled uniformly. Moreover, our approach is orthogonal to and compatible with existing methods for reducing communication overhead, such as local methods and communication compression methods.
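The sketch below illustrates the norm-based selection idea described above, with each client's probability of communicating roughly proportional to the norm of its update. This is an illustration only; the paper derives the exact optimal probabilities and a secure-aggregation-friendly approximation, and the names here are assumptions.

```python
import numpy as np

def sample_communicating_clients(updates, budget, rng=np.random.default_rng(0)):
    """updates: list of client update vectors computed this round
    budget:     target number of clients allowed to communicate
    Returns indices of clients selected independently with probability
    proportional to the norm of their update (capped at 1)."""
    norms = np.array([np.linalg.norm(u) for u in updates])
    probs = np.minimum(1.0, budget * norms / norms.sum())
    return [i for i, p in enumerate(probs) if rng.random() < p]
```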
While variance reduction methods have shown great success in solving large-scale optimization problems, many of them suffer from accumulated errors and therefore require periodic full gradient computations. In this paper, we present a single-loop algorithm, SLEDGE (Single-Loop mEthoD for Gradient Estimator), for finite-sum non-convex optimization, which does not require periodic refreshes of the gradient estimator yet achieves nearly optimal gradient complexity. Unlike existing methods, SLEDGE has the advantage of versatility: (i) second-order optimality, (ii) exponential convergence in the PL region, and (iii) smaller complexity under low data heterogeneity. We build an efficient federated learning algorithm by exploiting these favorable properties. We show the first- and second-order optimality of its output and also provide analysis under PL conditions. When the local budget is sufficiently large and clients are less (Hessian-) heterogeneous, the algorithm requires fewer communication rounds than existing methods such as FedAvg, SCAFFOLD, and Mime. The superiority of our method is verified in numerical experiments.
Non-independent and identically distributed (non-IID) data distribution among clients is regarded as a key factor degrading the performance of federated learning (FL). Several approaches to handle non-IID data, such as personalized FL and federated multi-task learning (FMTL), are of great interest to the research community. In this work, we first formulate the FMTL problem using Laplacian regularization to explicitly leverage the relationships among the clients' models for multi-task learning. We then introduce a new view of the FMTL problem, showing for the first time that the formulated FMTL problem can be used for conventional FL and personalized FL. We also propose two algorithms, FedU and dFedU, to solve the formulated FMTL problem in communication-centralized and decentralized schemes, respectively. Theoretically, we prove that the convergence rates of both algorithms achieve linear speedup for strongly convex objectives and sublinear speedup for non-convex objectives. Experimentally, we show that our algorithms outperform the conventional algorithm FedAvg in FL settings, MOCHA in FMTL settings, and pFedMe and Per-FedAvg in personalized FL settings.
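The Laplacian-regularized FMTL objective referred to above can be written in the following standard form (the weights $a_{ij}$, encoding assumed relationships between clients, and the notation are assumptions for illustration):

$$\min_{w_1,\dots,w_N}\; \sum_{i=1}^{N} f_i(w_i) + \frac{\eta}{2}\sum_{i=1}^{N}\sum_{j=1}^{N} a_{ij}\,\|w_i - w_j\|^2,$$

where $\eta=0$ recovers independent local training, very large $\eta$ forces all $w_i$ to coincide (conventional FL), and intermediate values of $\eta$ yield personalized models, which is the sense in which one formulation covers both regimes.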
Federated Averaging (FEDAVG) has emerged as the algorithm of choice for federated learning due to its simplicity and low communication cost. However, in spite of recent research efforts, its performance is not fully understood. We obtain tight convergence rates for FEDAVG and prove that it suffers from 'client-drift' when the data is heterogeneous (non-iid), resulting in unstable and slow convergence. As a solution, we propose a new algorithm (SCAFFOLD) which uses control variates (variance reduction) to correct for the 'client-drift' in its local updates. We prove that SCAFFOLD requires significantly fewer communication rounds and is not affected by data heterogeneity or client sampling. Further, we show that (for quadratics) SCAFFOLD can take advantage of similarity in the client's data yielding even faster convergence. The latter is the first result to quantify the usefulness of local-steps in distributed optimization.
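A minimal sketch of the control-variate correction at the heart of the scheme described above: every local step adds the correction $(c - c_i)$ to the local stochastic gradient so that local iterates do not drift toward the client's own optimum. Names are illustrative, and the update of the server control variate $c$ is omitted here.

```python
import numpy as np

def scaffold_local_phase(x_global, c_global, c_local, grad_fn, lr=0.1, steps=10):
    """Run local steps with the drift correction (c_global - c_local) added
    to each stochastic gradient, in the style of control-variate methods."""
    y = np.copy(x_global)
    for _ in range(steps):
        g = grad_fn(y)                         # local stochastic gradient
        y = y - lr * (g - c_local + c_global)  # corrected local step
    # one common rule for refreshing the local control variate
    c_local_new = c_local - c_global + (x_global - y) / (lr * steps)
    return y, c_local_new
```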
Federated learning is a distributed framework according to which a model is trained over a set of devices, while keeping data localized. This framework faces several systems-oriented challenges which include (i) communication bottleneck since a large number of devices upload their local updates to a parameter server, and (ii) scalability as the federated network consists of millions of devices. Due to these systems challenges as well as issues related to statistical heterogeneity of data and privacy concerns, designing a provably efficient federated learning method is of significant importance yet it remains challenging. In this paper, we present FedPAQ, a communication-efficient Federated Learning method with Periodic Averaging and Quantization. FedPAQ relies on three key features: (1) periodic averaging where models are updated locally at devices and only periodically averaged at the server; (2) partial device participation where only a fraction of devices participate in each round of the training; and (3) quantized message-passing where the edge nodes quantize their updates before uploading to the parameter server. These features address the communications and scalability challenges in federated learning. We also show that FedPAQ achieves near-optimal theoretical guarantees for strongly convex and non-convex loss functions and empirically demonstrate the communication-computation tradeoff provided by our method.
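A minimal sketch combining the three ingredients described above (periodic local updates, partial participation, quantized uploads). The quantizer here is a toy sign-and-norm scheme for illustration only, not necessarily the one analyzed in the paper, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(v):
    """Toy compressor: keep only the norm and the signs of the update."""
    return np.linalg.norm(v) * np.sign(v) / np.sqrt(v.size)

def fedpaq_round(x_global, client_grad_fns, frac=0.1, local_steps=5, lr=0.05):
    """One round: sample a fraction of devices, let each run local SGD,
    quantize the model difference, and average the quantized differences."""
    n = len(client_grad_fns)
    sampled = rng.choice(n, size=max(1, int(frac * n)), replace=False)
    deltas = []
    for cid in sampled:
        y = np.copy(x_global)
        for _ in range(local_steps):
            y -= lr * client_grad_fns[cid](y)
        deltas.append(quantize(y - x_global))   # upload quantized difference
    return x_global + np.mean(deltas, axis=0)   # periodic averaging at server
```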
In this work, we propose FedSSO, a server-side second-order optimization method for federated learning (FL). In contrast to previous works in this direction, we employ a server-side approximation for the quasi-Newton method without requiring any training data from the clients. In this way, we not only shift the computation burden from the clients to the server, but also eliminate the additional communication of second-order updates between the clients and the server. We provide theoretical guarantees for the convergence of our new method, and empirically demonstrate its fast convergence and communication savings in both convex and non-convex settings.
This paper studies stochastic optimization for a sum of compositional functions, where the inner-level function of each summand is coupled with the corresponding summation index. We refer to this family of problems as finite-sum coupled compositional optimization (FCCO). It has broad applications in machine learning for optimizing non-convex or convex compositional measures/objectives such as average precision (AP), p-norm push, listwise ranking losses, neighborhood component analysis (NCA), deep survival analysis, deep latent variable models, etc., which deserve finer analysis. Nevertheless, existing algorithms and analyses are restricted in one or other aspects. The contribution of this paper is to provide a comprehensive convergence analysis of a simple stochastic algorithm for both non-convex and convex objectives. Our key result is an improved oracle complexity with parallel speed-up, obtained by using a moving-average-based estimator with mini-batching. Our theoretical analysis also exhibits new insights for improving practical implementations, namely sampling batches of equal size at the outer and inner levels. Numerical experiments on AP maximization, NCA, and p-norm push corroborate some aspects of the theory.
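The FCCO problem family referred to above has the following form (notation assumed here for illustration):

$$\min_{w}\; \frac{1}{n}\sum_{i=1}^{n} f_i\Big( \mathbb{E}_{\zeta\sim\mathcal{S}_i}\big[ g_i(w; \zeta) \big] \Big),$$

where the inner function of the $i$-th summand is coupled with the index $i$ through its own data set $\mathcal{S}_i$, and $f_i$ may be non-convex or convex; AP maximization and p-norm push are instances of this structure.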
Federated learning has attracted increasing attention with the emergence of distributed data. While extensive federated learning algorithms have been proposed for the non-convex distributed problem, federated learning in practice still faces numerous challenges, such as the large number of training iterations needed to converge since the sizes of models and datasets keep increasing, and the lack of adaptivity in SGD-based model updates. Meanwhile, the study of adaptive methods in federated learning is scarce, and existing works either lack a complete theoretical convergence guarantee or have slow sample complexity. In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on the momentum-based variance reduction technique in cross-silo FL. We first explore how to design the adaptive algorithm in the FL setting. By providing a counter-example, we prove that a simple combination of FL and adaptive methods could lead to divergence. More importantly, we provide a convergence analysis for our method and prove that our algorithm is the first adaptive FL algorithm to reach the best-known sample complexity of $O(\epsilon^{-3})$ and $O(\epsilon^{-2})$ communication rounds to find an $\epsilon$-stationary point without large batches. The experimental results on the language modeling task and image classification task with heterogeneous data demonstrate the efficiency of our algorithms.
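For reference, a momentum-based variance-reduced gradient estimator of the kind referenced above (in the style of Cutkosky and Orabona's recursive-momentum estimator; the exact estimator and its federated/adaptive integration in FAFED may differ) is sketched below with assumed names.

```python
import numpy as np

def momentum_vr_update(d_prev, grad_curr, grad_prev_same_sample, a=0.1):
    """d_t = grad(x_t; xi_t) + (1 - a) * (d_{t-1} - grad(x_{t-1}; xi_t)).
    grad_curr:             stochastic gradient at the current point x_t
    grad_prev_same_sample: gradient at the previous point x_{t-1}, evaluated
                           on the SAME sample xi_t
    The correction term reduces variance without periodic full gradients."""
    return grad_curr + (1.0 - a) * (d_prev - grad_prev_same_sample)
```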
We study distributed optimization methods based on the local training (LT) paradigm: achieving communication efficiency by performing richer local gradient-based training on the clients before parameter averaging. Looking back at the progress of the field, we identify five generations of LT methods: 1) heuristic, 2) homogeneous, 3) sublinear, 4) linear, and 5) accelerated. The fifth generation, initiated by the ProxSkip method of Mishchenko, Malinovsky, Stich and Richtárik (2022), was the first to show that LT is a communication acceleration mechanism. Inspired by this recent progress, we contribute to the fifth generation of LT methods by showing that they can be further enhanced with variance reduction. While all previous theoretical results for LT methods ignore the cost of local work altogether and are framed purely in terms of the number of communication rounds, we show that our methods can be substantially faster, in terms of the total training cost, than the state-of-the-art method ProxSkip, in theory and in practice, in the regime where local computation is sufficiently expensive. We characterize this threshold theoretically and confirm our theoretical predictions with empirical results.
In this paper, we study the generalization properties of Model-Agnostic Meta-Learning (MAML) algorithms for supervised learning problems. We focus on the setting in which we train the MAML model over $m$ tasks, each with $n$ data points, and characterize its generalization error from two points of view: First, we assume the new task at test time is one of the training tasks, and we show that, for strongly convex objective functions, the expected excess population loss is bounded by $\mathcal{O}(1/mn)$. Second, we consider the MAML algorithm's generalization to unseen tasks and show that the resulting generalization error depends on the total variation distance between the underlying distribution of the new task and the distributions of the tasks observed during training. Our proof techniques rely on the connections between algorithmic stability and the generalization bounds of algorithms. In particular, we propose a new definition of stability for meta-learning algorithms, which allows us to capture the role of both the number of tasks $m$ and the number of samples per task $n$ in the generalization error of MAML.