Federated adversarial domain adaptation is a unique distributed minimax training task due to the prevalence of label imbalance among clients, where each client sees only a subset of the label classes needed to train a global model. To tackle this problem, we propose a distributed minimax optimizer, referred to as FedMM, designed specifically for the federated adversarial domain adaptation problem. It works well even when each client has different label classes and some clients have only unsupervised tasks. We prove that FedMM guarantees convergence to a stationary point with domain-shifted unsupervised data. On a variety of benchmark datasets, extensive experiments show that it consistently outperforms federated optimizers based on the gradient descent ascent (GDA) algorithm: for example, when training from scratch it beats other GDA-based federated averaging methods by around 20% in accuracy over the same communication rounds, and when training from pre-trained models it consistently outperforms them by 5.4% to 9% across different networks.
Standard federated optimization methods successfully apply to stochastic problems with a single-level structure. However, many contemporary ML problems, including adversarial robustness, hyperparameter tuning, and actor-critic, fall under nested bilevel programming, which subsumes minimax and compositional optimization. In this work, we propose FedBLO: a federated alternating stochastic gradient method to address general nested problems. We establish provable convergence rates for FedBLO in the presence of heterogeneous data and introduce variations for bilevel, minimax, and compositional optimization. FedBLO introduces multiple innovations, including federated hypergradient computation and variance reduction, to address inner-level heterogeneity. We complement our theory with experiments on hyperparameter & hyper-representation learning and minimax optimization to demonstrate the benefits of our method in practice. Code is available at https://github.com/ucr-optml/fednest.
As a novel distributed learning paradigm, federated learning (FL) faces serious challenges in dealing with massive clients that have heterogeneous data distributions and computation and communication resources. Various client-variance-reduction schemes and client sampling strategies have been introduced to improve the robustness of FL. Among others, primal-dual algorithms such as the alternating direction method of multipliers (ADMM) have been found to be resilient to data distribution and to outperform most of the primal-only FL algorithms. However, the reason behind this resilience has remained a mystery. In this paper, we first reveal that federated ADMM is essentially a client-variance-reduced algorithm. While this explains the inherent robustness of federated ADMM, its vanilla version cannot adapt to the degree of client heterogeneity. Moreover, under client sampling the global model at the server is biased, which slows down convergence in practice. To go beyond ADMM, we propose a novel primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and the bias of the global model. In addition, FedVRA unifies several representative FL algorithms in the sense that they are either special instances of FedVRA or are close to it. Extensions of FedVRA to semi/un-supervised learning are also presented. Experiments based on (semi-)supervised image classification tasks demonstrate the superiority of FedVRA over existing schemes in learning scenarios with massive heterogeneous clients and client sampling.
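To make the variance-reduction view of federated ADMM concrete, here is a minimal numpy sketch of consensus ADMM on synthetic quadratic client losses. It illustrates the mechanism only, not the FedVRA algorithm; the problem sizes, losses, and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim, rho, rounds = 5, 10, 1.0, 200

# Synthetic quadratic client losses f_i(x) = 0.5 * ||A_i x - b_i||^2 (assumption for illustration).
A = [rng.normal(size=(20, dim)) for _ in range(n_clients)]
b = [rng.normal(size=20) for _ in range(n_clients)]

z = np.zeros(dim)                                 # global (server) model
lam = [np.zeros(dim) for _ in range(n_clients)]   # per-client dual variables

for _ in range(rounds):
    x_local = []
    for i in range(n_clients):
        # Local step: minimize f_i(x) + lam_i^T x + (rho/2)||x - z||^2 in closed form.
        H = A[i].T @ A[i] + rho * np.eye(dim)
        x_i = np.linalg.solve(H, A[i].T @ b[i] - lam[i] + rho * z)
        x_local.append(x_i)
    # Server step: average the primal variables plus the scaled duals.
    z = np.mean([x_local[i] + lam[i] / rho for i in range(n_clients)], axis=0)
    # Dual step: each client's dual accumulates its own residual; this term acts as
    # the client-specific "variance reduction" correction discussed in the abstract.
    for i in range(n_clients):
        lam[i] = lam[i] + rho * (x_local[i] - z)
```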
In many machine learning applications, large-scale and privacy-sensitive data are generated on numerous mobile or IoT devices, and collecting the data in a centralized location may be prohibitive. It is therefore increasingly attractive to estimate parameters on mobile or IoT devices while keeping the data localized. Such a learning setting is known as cross-device federated learning. In this paper, we propose the first theoretically guaranteed algorithms for general minimax problems in the cross-device federated learning setting. Our algorithms require only a small fraction of devices in each round of training, which overcomes the difficulty introduced by the low availability of devices. The communication overhead is further reduced by performing multiple local update steps on clients before communicating with the server, and possible biases in the local update directions introduced by data heterogeneity are corrected with a global gradient estimate. By developing an analysis based on a novel potential function, we establish theoretical convergence guarantees for our algorithms. Experimental results on AUC maximization, robust adversarial network training, and GAN training tasks demonstrate the efficiency of our algorithms.
In this work, we propose FedSSO, a server-side second-order optimization method for federated learning (FL). In contrast to previous works in this direction, we employ a server-side approximation for the quasi-Newton method without requiring any training data from the clients. In this way, we not only shift the computation burden from clients to the server, but also eliminate the additional communication of second-order updates between clients and the server. We provide theoretical guarantees for the convergence of our new method, and empirically demonstrate its fast convergence and communication savings in both convex and non-convex settings.
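The abstract does not spell out the server-side update, so the following is only a hedged sketch of what a quasi-Newton step built purely from aggregated client gradients could look like, using an L-BFGS-style two-loop recursion; the function names and the toy usage are assumptions, not the FedSSO reference implementation.

```python
import numpy as np

def two_loop_direction(grad, s_list, y_list):
    """L-BFGS two-loop recursion: approximate H^{-1} @ grad from stored curvature pairs."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((a, rho, s, y))
    gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1]) if s_list else 1.0
    r = gamma * q
    for a, rho, s, y in reversed(alphas):                   # oldest pair first
        beta = rho * (y @ r)
        r += (a - beta) * s
    return r

def server_round(w, client_grads, s_list, y_list, prev_w, prev_g, lr=1.0, memory=10):
    """One server round: clients only send first-order gradients at the current model w."""
    g = np.mean(client_grads, axis=0)          # aggregated gradient, no client data needed
    if prev_w is not None:
        s, y = w - prev_w, g - prev_g
        if y @ s > 1e-10:                      # keep the pair only if curvature is positive
            s_list.append(s); y_list.append(y)
            if len(s_list) > memory:
                s_list.pop(0); y_list.pop(0)
    direction = two_loop_direction(g, s_list, y_list)
    return w - lr * direction, w, g            # new model plus state for the next round

# Toy usage on f(w) = 0.5*||w||^2 split across 3 "clients" (assumption).
w, prev_w, prev_g = np.ones(4), None, None
s_hist, y_hist = [], []
for _ in range(20):
    grads = [w.copy() for _ in range(3)]       # each client's gradient of 0.5*||w||^2 is w
    w, prev_w, prev_g = server_round(w, grads, s_hist, y_hist, prev_w, prev_g, lr=0.5)
```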
We propose a novel framework to study asynchronous federated learning optimization with delays in gradient updates. Our theoretical framework extends the standard FedAvg aggregation scheme by introducing stochastic aggregation weights to represent the variability of client update times, e.g. due to heterogeneous hardware capabilities. Our formalism applies when clients have heterogeneous datasets and perform at least one step of stochastic gradient descent (SGD). We prove convergence for such a scheme and give sufficient conditions for the related minimum to be the optimum of the federated problem. We show that our general framework applies to existing optimization schemes, including centralized learning, FedAvg, asynchronous FedAvg, and FedBuff. The theory provided here allows drawing meaningful guidelines for designing federated learning experiments in heterogeneous conditions. In particular, we develop in this work FedFix, a novel extension of FedAvg enabling efficient asynchronous federated training while preserving the convergence stability of synchronous aggregation. We empirically demonstrate our theory on a series of experiments, showing that asynchronous FedAvg leads to fast convergence at the cost of stability, and we finally demonstrate the improvements of FedFix over synchronous and asynchronous FedAvg.
In federated optimization, heterogeneity in the clients' local datasets and computation speeds results in large variations in the number of local updates performed by each client in each communication round. Naive weighted aggregation of such models causes objective inconsistency, that is, the global model converges to a stationary point of a mismatched objective function which can be arbitrarily different from the true objective. This paper provides a general framework to analyze the convergence of federated heterogeneous optimization algorithms. It subsumes previously proposed methods such as FedAvg and FedProx and provides the first principled understanding of the solution bias and the convergence slowdown due to objective inconsistency. Using insights from this analysis, we propose FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
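As a rough illustration of normalized averaging, the sketch below shows a vanilla-SGD reading of the aggregation rule: each client's cumulative update is divided by its own number of local steps before the weighted average, and an effective step count restores the overall magnitude. The names and exact scaling are assumptions for this sketch.

```python
import numpy as np

def fednova_aggregate(global_w, client_deltas, local_steps, weights, lr=1.0):
    """Normalized averaging, vanilla-SGD special case (sketch).

    client_deltas[i] : cumulative local change w_i - global_w after tau_i SGD steps
    local_steps[i]   : tau_i, the number of local steps client i actually ran
    weights[i]       : p_i, e.g. the client's data fraction (sums to 1)
    """
    # Normalize each client's update by its own step count so that clients running
    # many local steps do not drag the global model toward their local optimum.
    normalized = [d / t for d, t in zip(client_deltas, local_steps)]
    # Effective number of steps preserves the overall update magnitude.
    tau_eff = sum(p * t for p, t in zip(weights, local_steps))
    update = tau_eff * sum(p * d for p, d in zip(weights, normalized))
    return global_w + lr * update

# Toy usage: two clients with very different numbers of local steps.
w = np.zeros(3)
deltas = [np.array([1.0, 0.0, 0.0]) * 50, np.array([0.0, 1.0, 0.0]) * 2]
w_new = fednova_aggregate(w, deltas, local_steps=[50, 2], weights=[0.5, 0.5])
```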
Federated Averaging (FEDAVG) has emerged as the algorithm of choice for federated learning due to its simplicity and low communication cost. However, in spite of recent research efforts, its performance is not fully understood. We obtain tight convergence rates for FEDAVG and prove that it suffers from 'client-drift' when the data is heterogeneous (non-iid), resulting in unstable and slow convergence. As a solution, we propose a new algorithm (SCAFFOLD) which uses control variates (variance reduction) to correct for the 'client-drift' in its local updates. We prove that SCAFFOLD requires significantly fewer communication rounds and is not affected by data heterogeneity or client sampling. Further, we show that (for quadratics) SCAFFOLD can take advantage of similarity in the clients' data, yielding even faster convergence. The latter is the first result to quantify the usefulness of local steps in distributed optimization.
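A minimal sketch of the control-variate correction described above, assuming a simple gradient-oracle interface for the client's local loss; the function names are illustrative rather than the authors' reference implementation.

```python
import numpy as np

def scaffold_client_update(x_global, c_global, c_i, grad_fn, lr=0.1, local_steps=10):
    """One SCAFFOLD-style client round (Option II control-variate update).

    grad_fn(w) returns a stochastic gradient of the client's local loss at w;
    c_global / c_i are the server and client control variates.
    """
    y = x_global.copy()
    for _ in range(local_steps):
        # Corrected local step: remove the client's drift estimate, add the global one.
        y = y - lr * (grad_fn(y) - c_i + c_global)
    # New client control variate from the realized local progress.
    c_i_new = c_i - c_global + (x_global - y) / (local_steps * lr)
    # The client communicates both the model delta and the control-variate delta.
    return y - x_global, c_i_new - c_i

def scaffold_server_update(x_global, c_global, delta_ys, delta_cs, n_clients, lr_g=1.0):
    x_global = x_global + lr_g * np.mean(delta_ys, axis=0)
    # Control variates of non-sampled clients stay unchanged, hence the 1/N scaling.
    c_global = c_global + sum(delta_cs) / n_clients
    return x_global, c_global
```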
Traditionally, federated learning (FL) aims to train a single global model while collaboratively using multiple clients and a server. Two natural challenges that FL algorithms face are heterogeneity in data across clients and collaboration of clients with diverse resources. In this work, we introduce a quantized and personalized FL algorithm, QuPeD, that facilitates collective (personalized model compression) training via knowledge distillation (KD) among clients who have access to heterogeneous data and resources. For personalization, we allow clients to learn compressed personalized models with different quantization parameters and model dimensions/structures. Towards this, we first propose an algorithm for learning quantized models through a relaxed optimization problem in which the quantization values are also optimized over. When each client participating in the (federated) learning process has different requirements for the compressed model (in both model dimension and precision), we formulate a compressed personalization framework by introducing a knowledge distillation loss for local client objectives collaborating through a global model. We develop an alternating proximal gradient update for solving this compressed personalization problem and analyze its convergence properties. Numerically, we validate that QuPeD outperforms competing personalized FL methods, FedAvg, and local training of clients in various heterogeneous settings.
Data-heterogeneous federated learning (FL) systems suffer from two important sources of convergence error: 1) client drift error caused by performing multiple local optimization steps at the clients, and 2) partial client participation error caused by the fact that only a small subset of the edge clients participate in each training round. We find that among these, only the former has received significant attention in the literature. To remedy this, we propose FedVarp, a novel variance reduction algorithm applied at the server that eliminates the error due to partial client participation. To do so, the server simply keeps the most recent update of each client in memory and uses it as a surrogate update for the non-participating clients in every round. Further, to alleviate the memory requirement at the server, we propose a novel clustering-based variance reduction algorithm, ClusterFedVarp. Unlike previously proposed methods, both FedVarp and ClusterFedVarp require no additional computation at the clients and no communication of additional optimization parameters. Through extensive experiments, we show that FedVarp outperforms state-of-the-art methods, and ClusterFedVarp achieves performance comparable to FedVarp with much lower memory requirements.
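A simplified reading of the server-side mechanism described above: the server stores the most recent update of every client and falls back to that stored surrogate for clients that do not participate in the current round. The class and parameter names are assumptions, and the exact FedVarp estimator may combine the fresh and stored terms differently.

```python
import numpy as np

class VarianceReducedServer:
    """Sketch: keep the latest update per client and reuse it as a surrogate
    for non-participating clients when aggregating (simplified reading of the abstract)."""

    def __init__(self, n_clients, dim):
        self.memory = np.zeros((n_clients, dim))   # latest known update per client

    def aggregate(self, x_global, fresh_updates, participating_ids, lr_g=1.0):
        # Refresh the stored updates of the clients that actually participated.
        for cid, delta in zip(participating_ids, fresh_updates):
            self.memory[cid] = delta
        # Average over *all* clients: fresh deltas for participants,
        # remembered surrogates for everyone else.
        return x_global + lr_g * self.memory.mean(axis=0)

# Toy usage (assumed shapes): only client 7 participates in this round.
server = VarianceReducedServer(n_clients=100, dim=10)
x = server.aggregate(np.zeros(10), fresh_updates=[np.ones(10)], participating_ids=[7])
```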
Recently, lots of algorithms have been proposed for learning a fair classifier from decentralized data. However, many theoretical and algorithmic questions remain open. First, is federated learning necessary, i.e., can we simply train locally fair classifiers and aggregate them? In this work, we first propose a new theoretical framework, with which we demonstrate that federated learning can strictly boost model fairness compared with such non-federated algorithms. We then theoretically and empirically show that the performance tradeoff of FedAvg-based fair learning algorithms is strictly worse than that of a fair classifier trained on centralized data. To bridge this gap, we propose FedFB, a private fair learning algorithm on decentralized data. The key idea is to modify the FedAvg protocol so that it can effectively mimic the centralized fair learning. Our experimental results show that FedFB significantly outperforms existing approaches, sometimes matching the performance of the centrally trained model.
In federated learning (FL), the objective of collaboratively learning a global model through aggregation of model updates across devices tends to oppose the goal of personalization via local information. In this work, we calibrate this tradeoff in a quantitative manner through a multi-criterion-optimization-based framework, which we cast as a constrained program: a device's objective is its local objective, which it seeks to minimize while satisfying nonlinear constraints that quantify the proximity between the local model and the global model. By considering the Lagrangian relaxation of this problem, we develop an algorithm that allows each node to minimize its local component of the Lagrangian through queries to a first-order gradient oracle. Then, the server executes Lagrange multiplier ascent steps followed by a Lagrange multiplier-weighted averaging step. We call this instantiation of the primal-dual method Federated Learning Beyond Consensus ($\texttt{FedBC}$). Theoretically, we establish that $\texttt{FedBC}$ converges to a first-order stationary point at rates matching the state of the art, up to an additional error term that depends on a tolerance parameter introduced by the proximity constraints. Overall, the analysis is a novel characterization of primal-dual methods applied to non-convex saddle point problems. Finally, we demonstrate that $\texttt{FedBC}$ balances the global and local model test accuracy metrics across a suite of datasets (Synthetic, MNIST, CIFAR-10, Shakespeare), achieving performance competitive with the state of the art.
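A hedged sketch of the primal-dual structure described above: devices take gradient steps on their local Lagrangian component, then the server performs projected multiplier ascent on the proximity constraints followed by a multiplier-weighted average. The squared-distance constraint form and the particular weighting are assumptions made for illustration, not the exact $\texttt{FedBC}$ update.

```python
import numpy as np

def fedbc_round(x_global, lambdas, local_models, grad_fns, eps=0.1,
                lr_x=0.05, lr_lam=0.5, local_steps=5):
    """One sketched primal-dual round. grad_fns[i](w) returns a gradient of client i's
    local loss; the constraint ||x_i - x_global||^2 <= eps and the multiplier-weighted
    aggregation are assumptions for illustration."""
    # Primal step on each device: minimize f_i(x_i) + lambda_i * (||x_i - x_global||^2 - eps).
    for i, grad_fn in enumerate(grad_fns):
        x_i = local_models[i]
        for _ in range(local_steps):
            g = grad_fn(x_i) + 2.0 * lambdas[i] * (x_i - x_global)
            x_i = x_i - lr_x * g
        local_models[i] = x_i
    # Dual ascent on the server: tighten the proximity constraint where it is violated.
    for i, x_i in enumerate(local_models):
        slack = np.sum((x_i - x_global) ** 2) - eps
        lambdas[i] = max(0.0, lambdas[i] + lr_lam * slack)
    # Multiplier-weighted averaging of the local models (assumed weighting).
    w = np.array(lambdas) + 1e-8
    x_global = sum(wi * xi for wi, xi in zip(w, local_models)) / w.sum()
    return x_global, lambdas, local_models
```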
As a prevalent distributed learning paradigm, federated learning (FL) trains a global model across a massive number of communicating devices. This paper studies a class of composite optimization and statistical recovery problems in the FL setting, whose loss function consists of a data-dependent smooth loss and a non-smooth regularizer. Examples include sparse linear regression using the Lasso, low-rank matrix recovery using nuclear norm regularization, and so on. In the existing literature, federated composite optimization algorithms are designed only from an optimization perspective, without any statistical guarantees. In addition, they do not consider the (restricted) strong convexity commonly used in statistical recovery problems. We advance the frontier of this problem from both optimization and statistical perspectives. On the optimization front, we propose a new algorithm named Fast Federated Dual Averaging for strongly convex and smooth losses and establish state-of-the-art iteration and communication complexity in the composite setting. In particular, we prove that it enjoys a fast rate, linear speedup, and reduced communication rounds. On the statistical front, for restricted strongly convex and smooth losses, we design another algorithm, namely Multi-stage Federated Dual Averaging, and prove a high-probability complexity bound with linear speedup up to optimal statistical precision. Experiments on synthetic and real data show that our methods perform better than other baselines. To the best of our knowledge, this is the first work providing fast optimization algorithms and statistical recovery guarantees for composite problems in FL.
We present a federated learning framework designed to robustly deliver good predictive performance across individual clients with heterogeneous data. The proposed approach hinges on a superquantile-based learning objective that captures the tail statistics of the error distribution over heterogeneous clients. We present a stochastic training algorithm that interleaves differentially private client reweighting steps with federated averaging steps. The proposed algorithm is supported by finite-time convergence guarantees that cover both convex and non-convex settings. Experimental results on benchmark datasets for federated learning demonstrate that our approach is competitive with classical ones in terms of average error and outperforms them in terms of the tail statistics of the error.
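The superquantile objective only looks at the tail of the per-client error distribution, so the client reweighting step can be sketched as below (without the differential-privacy noise the paper adds on top); the function name and the uniform client weights are assumptions.

```python
import numpy as np

def superquantile_weights(client_losses, theta=0.8):
    """Reweight clients toward the tail of the loss distribution (sketch).

    The superquantile (CVaR) at level theta is the mean of the worst (1 - theta)
    fraction of losses; the weights below put all mass on that tail, with the
    boundary client fractionally weighted.
    """
    losses = np.asarray(client_losses, dtype=float)
    n = len(losses)
    order = np.argsort(losses)[::-1]          # worst clients first
    tail_mass = (1.0 - theta) * n             # how many "client units" lie in the tail
    weights = np.zeros(n)
    remaining = tail_mass
    for idx in order:
        take = min(1.0, remaining)
        weights[idx] = take
        remaining -= take
        if remaining <= 0:
            break
    return weights / weights.sum()            # normalized aggregation weights

# Example: with theta = 0.8 and 5 clients, only the single worst client gets weight.
w = superquantile_weights([0.2, 1.5, 0.4, 0.9, 0.3], theta=0.8)
```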
The increasing size of data generated by smartphones and IoT devices motivated the development of Federated Learning (FL), a framework for on-device collaborative training of machine learning models. First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client, due to the inherent heterogeneity of local data distributions. Federated multi-task learning (MTL) approaches can learn personalized models by formulating an opportune penalized optimization problem. The penalization term can capture complex relations among personalized models, but eschews clear statistical assumptions about local data distributions. In this work, we propose to study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions. This assumption encompasses most of the existing personalized FL approaches and leads to federated EM-like algorithms for both client-server and fully decentralized settings. Moreover, it provides a principled way to serve personalized models to clients not seen at training time. The algorithms' convergence is analyzed through a novel federated surrogate optimization framework, which can be of general interest. Experimental results on FL benchmarks show that our approach provides models with higher accuracy and fairness than state-of-the-art methods.
Data heterogeneity across clients is a key challenge in federated learning. Prior works address this by either aligning client and server models or using control variates to correct client model drift. Although these methods achieve fast convergence in convex or simple non-convex problems, the performance in over-parameterized models such as deep neural networks is lacking. In this paper, we first revisit the widely used FedAvg algorithm in a deep neural network to understand how data heterogeneity influences the gradient updates across the neural network layers. We observe that while the feature extraction layers are learned efficiently by FedAvg, the substantial diversity of the final classification layers across clients impedes the performance. Motivated by this, we propose to correct model drift by variance reduction only on the final layers. We demonstrate that this significantly outperforms existing benchmarks at a similar or lower communication cost. We furthermore provide proof for the convergence rate of our algorithm.
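A small sketch of what restricting drift correction to the final layers could look like, assuming a SCAFFOLD-style control-variate correction and a dict-of-arrays parameter layout; the layer naming and the exact correction rule are assumptions rather than the paper's algorithm.

```python
import numpy as np

def corrected_local_step(params, grads, c_local, c_global, head_keys, lr=0.05):
    """Apply drift correction only to the final (classification) layers.

    params/grads/c_local/c_global are dicts of numpy arrays keyed by layer name;
    head_keys names the classifier-head entries (an assumption, since layer
    naming depends on the model)."""
    new = {}
    for k in params:
        if k in head_keys:
            # Control-variate correction, applied only to the head.
            new[k] = params[k] - lr * (grads[k] - c_local[k] + c_global[k])
        else:
            # Feature-extraction layers take the plain local gradient step.
            new[k] = params[k] - lr * grads[k]
    return new
```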
To lower the communication complexity of federated min-max learning, a natural approach is to utilize the idea of infrequent communications (through multiple local updates), as in conventional federated learning. However, due to the more complicated inner-outer problem structure in federated min-max learning, theoretical understanding of the communication complexity of federated min-max learning with infrequent communications remains very limited in the literature. This is particularly true for settings with non-i.i.d. datasets and partial client participation. To address this challenge, in this paper, we propose a new algorithmic framework called stochastic sampling averaging gradient descent ascent (SAGDA), which i) assembles stochastic gradient estimators from randomly sampled clients as control variates and ii) leverages two learning rates on both server and client sides. We show that SAGDA achieves a linear speedup in terms of both the number of clients and local update steps, which yields an $\mathcal{O}(\epsilon^{-2})$ communication complexity that is orders of magnitude lower than the state of the art. Interestingly, by noting that the standard federated stochastic gradient descent ascent (FSGDA) is in fact a control-variate-free special version of SAGDA, we immediately arrive at an $\mathcal{O}(\epsilon^{-2})$ communication complexity result for FSGDA. Therefore, through the lens of SAGDA, we also advance the current understanding of the communication complexity of the standard FSGDA method for federated min-max learning.
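For concreteness, here is a sketch of the control-variate-free special case (FSGDA) with the two learning rates mentioned above: sampled clients run local stochastic gradient descent ascent steps, and the server applies its own learning rate to the averaged local progress. The gradient-oracle interface and names are assumptions.

```python
import numpy as np

def fsgda_round(x, y, sampled_clients, grad_x_fns, grad_y_fns,
                lr_client=0.05, lr_server=1.0, local_steps=10):
    """One round of federated stochastic gradient descent ascent (sketch).

    grad_x_fns[i](x, y) / grad_y_fns[i](x, y) return stochastic gradients of
    client i's local min-max objective w.r.t. the min variable x and the max
    variable y; this interface is an assumption made for illustration."""
    dx, dy = [], []
    for i in sampled_clients:
        xi, yi = x.copy(), y.copy()
        for _ in range(local_steps):
            gx = grad_x_fns[i](xi, yi)
            gy = grad_y_fns[i](xi, yi)
            xi = xi - lr_client * gx          # descent on the primal (min) variable
            yi = yi + lr_client * gy          # ascent on the dual/adversarial (max) variable
        dx.append(xi - x)
        dy.append(yi - y)
    # Server applies its own learning rate to the averaged local progress.
    x = x + lr_server * np.mean(dx, axis=0)
    y = y + lr_server * np.mean(dy, axis=0)
    return x, y
```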
Federated learning (FL) is a distributed machine learning framework that can alleviate data silos, in which decentralized clients collaboratively learn a global model without sharing their private data. However, the clients' non-independent and identically distributed (non-IID) data negatively affect the trained model, and clients with different numbers of local updates may cause large gaps among the local gradients in each communication round. In this paper, we propose a federated vector averaging (FedVeca) method to address this non-IID data problem. Specifically, we set a novel objective for the global model that is related to the local gradients. A local gradient is defined as a bidirectional vector with a step size and a direction, where the step size is the number of local updates and the direction is divided into positive and negative according to our definition. In FedVeca, the direction is influenced by the step size, so we average the bidirectional vectors to reduce the effect of different step sizes. We then theoretically analyze the relationship between the step sizes and the global objective, and obtain an upper bound on the step sizes for each communication round. Based on this upper bound, we design an algorithm for the server and clients to adaptively adjust the step sizes so that the objective approaches the optimum. Finally, we conduct experiments on different datasets, models, and scenarios by building a prototype system, and the experimental results demonstrate the effectiveness and efficiency of the FedVeca method.
Federated learning (FL) is a collaborative machine learning technique for training a global model without obtaining clients' private data. The main challenges in FL are statistical diversity among clients, limited computing capability of client devices, and excessive communication overhead between the server and clients. To address these challenges, we propose a sparse personalized federated learning scheme via maximizing correlation, FedMAC. By incorporating an approximated L1-norm and the correlation between the client model and the global model into the standard FL loss function, the performance on statistically diverse data is improved and the communication and computation loads required by the network are reduced compared with non-sparse FL. Convergence analysis shows that the sparse constraints in FedMAC do not affect the convergence rate of the global model, and theoretical results show that FedMAC can achieve good sparse personalization, better than personalization methods based on the L2-norm. Experimentally, we demonstrate the benefits of this sparse personalization architecture compared with state-of-the-art personalization methods (e.g., FedMAC reaches accuracies of up to 98.95% and 99.37% on FMNIST, CIFAR-100, and synthetic datasets under their non-IID variants).
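A hedged sketch of what such a regularized local objective could look like: the usual data loss, plus a smooth L1 surrogate for sparsity, minus a correlation term with the global model. The specific smooth surrogate and the cosine form of the correlation are assumptions, not the exact FedMAC formulation.

```python
import numpy as np

def sparse_corr_objective(w, w_global, data_loss, alpha=1e-3, beta=1e-2, eps=1e-8):
    """Local objective: data loss + sparsity surrogate - correlation with the global model.

    The smooth surrogate sum(sqrt(w^2 + eps)) for the L1 norm and the cosine
    correlation are illustrative assumptions."""
    sparsity = np.sum(np.sqrt(w ** 2 + eps))                  # differentiable |w|_1 surrogate
    corr = np.dot(w, w_global) / (np.linalg.norm(w) * np.linalg.norm(w_global) + eps)
    return data_loss(w) + alpha * sparsity - beta * corr      # maximizing correlation => subtract

# Toy usage with a quadratic data loss (assumption).
rng = np.random.default_rng(0)
w, w_g = rng.normal(size=10), rng.normal(size=10)
val = sparse_corr_objective(w, w_g, data_loss=lambda v: 0.5 * np.sum(v ** 2))
```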
Federated learning (FL) is a trending training paradigm for utilizing decentralized training data. FL allows clients to update model parameters locally for several epochs and then share them with a global model for aggregation. This training paradigm, with multiple local-step updates before aggregation, exposes unique vulnerabilities to adversarial attacks. Adversarial training is a popular and effective method to improve the robustness of networks against adversaries. In this work, we formulate a general form of federated adversarial learning (FAL) that is adapted from adversarial learning in the centralized setting. On the client side of FL training, FAL has an inner loop to generate adversarial samples for adversarial training and an outer loop to update the local model parameters. On the server side, FAL aggregates the local model updates and broadcasts the aggregated model. We design a global robust training loss and formulate FAL training as a min-max optimization problem. Unlike the convergence analysis in classical centralized training, which relies on the gradient direction, the convergence in FAL is much harder to analyze for three reasons: 1) the complexity of min-max optimization, 2) the model not being updated along the gradient direction because of the multiple local updates on the client side before aggregation, and 3) inter-client heterogeneity. We address these challenges by using appropriate gradient approximation and coupling techniques and present a convergence analysis in the over-parameterized regime. Theoretically, our main result shows that the minimum loss under our algorithm can converge to $\epsilon$-small with a chosen learning rate and communication rounds. Notably, our analysis is feasible for non-IID clients.
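The two-loop structure on the client side can be sketched as follows for a logistic-regression client, chosen only because its input gradients have a closed form; the PGD-style inner loop, the data, and all names are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fal_client_round(w_global, X, y, eps=0.1, pgd_lr=0.03, pgd_steps=5,
                     lr=0.1, local_steps=10):
    """Sketch of one client's FAL-style round with logistic regression (assumption).

    Inner loop: projected gradient ascent on the inputs builds adversarial examples.
    Outer loop: SGD on the model parameters against those examples."""
    w = w_global.copy()
    for _ in range(local_steps):
        # --- inner maximization: craft adversarial perturbations within an eps-ball ---
        X_adv = X.copy()
        for _ in range(pgd_steps):
            margin = sigmoid(X_adv @ w) - y            # dloss/dlogit for each sample
            grad_x = margin[:, None] * w[None, :]      # dloss/dinput
            X_adv = X_adv + pgd_lr * np.sign(grad_x)
            X_adv = np.clip(X_adv, X - eps, X + eps)   # stay inside the perturbation ball
        # --- outer minimization: one SGD step on the adversarial batch ---
        margin = sigmoid(X_adv @ w) - y
        grad_w = X_adv.T @ margin / len(y)
        w = w - lr * grad_w
    return w - w_global                                # model delta sent to the server

# Toy usage on synthetic data (assumption); the server would average deltas across clients.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5)); y = (rng.random(32) > 0.5).astype(float)
delta = fal_client_round(np.zeros(5), X, y)
```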