Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model while the training data remains distributed over a large number of clients, each with unreliable and relatively slow network connections. We consider learning algorithms for this setting where, on each round, each client independently computes an update to the current model based on its local data and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model. The typical clients in this setting are mobile phones, and communication efficiency is of the utmost importance. In this paper, we propose two ways to reduce the uplink communication cost: structured updates, where we directly learn an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, where we learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling before sending it to the server. Experiments on both convolutional and recurrent networks show that the proposed methods can reduce the communication cost by two orders of magnitude.
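As a rough illustration of the "sketched update" idea described above, here is a minimal NumPy sketch (function names and parameter choices are hypothetical, not taken from the paper) that subsamples an update and quantizes the surviving coordinates before upload, with the server dequantizing and rescaling for an unbiased reconstruction; the random-rotation step is omitted.

```python
import numpy as np

def sketch_update(update, keep_frac=0.1, num_bits=2, seed=0):
    """Illustrative sketched update: randomly subsample coordinates, then uniformly quantize them."""
    rng = np.random.default_rng(seed)
    flat = update.ravel()
    k = max(1, int(keep_frac * flat.size))
    idx = rng.choice(flat.size, size=k, replace=False)   # random subsampling
    vals = flat[idx]
    lo, hi = vals.min(), vals.max()
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((vals - lo) / scale).astype(np.uint8)   # low-bit quantization
    return idx, q, lo, scale, update.shape

def unsketch_update(idx, q, lo, scale, shape):
    """Server-side reconstruction: dequantize, scatter back, and rescale kept coordinates
    by n/k so the subsampled estimate is unbiased (ignoring quantization error)."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = lo + q.astype(np.float64) * scale
    flat *= flat.size / len(idx)
    return flat.reshape(shape)
```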
Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets. These experiments demonstrate the approach is robust to the unbalanced and non-IID data distributions that are a defining characteristic of this setting. Communication costs are the principal constraint, and we show a reduction in required communication rounds by 10-100× as compared to synchronized stochastic gradient descent.
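A minimal sketch of the iterative model averaging loop described above, assuming a toy quadratic local objective; the real FedAvg procedure samples a fraction of clients per round and runs multiple epochs of SGD on each, which this illustration compresses into a few gradient steps.

```python
import numpy as np

def local_update(w, data, lr=0.1, steps=5):
    """Toy client update: a few gradient steps on 0.5 * ||w - mean(x)||^2 over the local points x."""
    w = w.copy()
    target = data.mean(axis=0)
    for _ in range(steps):
        w -= lr * (w - target)
    return w, len(data)

def federated_averaging(global_w, client_datasets, rounds=10):
    """Broadcast the global model, let each client update locally, then average weighted by sample count."""
    for _ in range(rounds):
        results = [local_update(global_w, d) for d in client_datasets]
        total = sum(n for _, n in results)
        global_w = sum(w * (n / total) for w, n in results)
    return global_w

# usage: three clients with different amounts of 2-d data
rng = np.random.default_rng(0)
clients = [rng.normal(loc=i, size=(20 * (i + 1), 2)) for i in range(3)]
print(federated_averaging(np.zeros(2), clients))
```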
Federated Learning allows multiple parties to jointly train a deep learning model on their combined data, without any of the participants having to reveal their local data to a centralized server. This form of privacy-preserving collaborative learning, however, comes at the cost of a significant communication overhead during training. To address this problem, several compression methods have been proposed in the distributed training literature that can reduce the amount of required communication by up to three orders of magnitude. These existing methods, however, are only of limited utility in the Federated Learning setting, as they either only compress the upstream communication from the clients to the server (leaving the downstream communication uncompressed) or only perform well under idealized conditions such as iid distribution of the client data, which typically cannot be found in Federated Learning. In this work, we propose Sparse Ternary Compression (STC), a new compression framework that is specifically designed to meet the requirements of the Federated Learning environment. STC extends the existing compression technique of top-k gradient sparsification with a novel mechanism to enable downstream compression as well as ternarization and optimal Golomb encoding of the weight updates. Our experiments on four different learning tasks demonstrate that STC distinctively outperforms Federated Averaging in common Federated Learning scenarios where clients either a) hold non-iid data, b) use small batch sizes during training, or where c) the number of clients is large and the participation rate in every communication round is low. We furthermore show that even if the clients hold iid data and use medium-sized batches for training, STC still behaves Pareto-superior to Federated Averaging in the sense that it achieves fixed target accuracies on our benchmarks within both fewer training iterations and a smaller communication budget. These results advocate for a paradigm shift in Federated optimization towards high-frequency low-bitwidth communication, in particular in bandwidth-constrained learning environments.
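The core of top-k sparsification with ternarization, which STC builds on, can be sketched as below (NumPy; the Golomb encoding of the sparse indices and the downstream-compression mechanism described in the abstract are omitted, and the helper names are illustrative).

```python
import numpy as np

def sparse_ternary_compress(update, sparsity=0.01):
    """Keep the top-k entries by magnitude, then replace each by its sign times
    the mean magnitude of the kept entries (ternarization to {-mu, 0, +mu})."""
    flat = update.ravel()
    k = max(1, int(sparsity * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the top-k magnitudes
    mu = np.abs(flat[idx]).mean()                  # single shared magnitude
    signs = np.sign(flat[idx]).astype(np.int8)     # values in {-1, +1}
    return idx, signs, mu, update.shape

def sparse_ternary_decompress(idx, signs, mu, shape):
    """Reconstruct the sparse ternary update; all unselected coordinates stay zero."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = mu * signs
    return flat.reshape(shape)
```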
Federated learning (FL) is a framework for machine learning on heterogeneous client devices in a privacy-preserving manner. To date, most FL algorithms learn a single "global" server model over multiple rounds. In each round, the same server model is broadcast to all participating clients, updated locally, and then aggregated across clients. In this work, we propose a more general procedure in which clients "select" which values are sent to them. Notably, this allows clients to operate on smaller, data-dependent slices of the model. To make this practical, we outline a primitive, federated select, which enables client-specific selection in realistic FL systems. We discuss how federated select can be used for model training and show that it can lead to dramatic reductions in communication and client memory usage, potentially enabling the training of models too large to fit on a single device. We also discuss the implications of federated select for privacy and trust, which in turn influence possible system constraints and designs. Finally, we discuss open questions concerning model architectures, privacy-preserving technologies, and practical FL systems.
Distributed mean estimation (DME) is a central building block in federated learning, in which clients send local gradients to a parameter server for averaging and updating the model. Due to communication constraints, clients often use lossy compression techniques to compress the gradients, resulting in inaccurate estimates. DME is more challenging when clients have diverse network conditions, such as constrained communication budgets and packet losses. In such settings, DME techniques often incur a significant increase in estimation error, leading to degraded learning performance. In this work, we propose a robust DME technique named EDEN that naturally handles heterogeneous communication budgets and packet losses. We derive appealing theoretical guarantees for EDEN and evaluate it empirically. Our results demonstrate that EDEN consistently improves over state-of-the-art DME techniques.
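For context, a generic DME baseline of the kind EDEN improves upon might look like the following: each client applies unbiased stochastic quantization to its vector, and the server averages the lossy results. This is not EDEN itself, which additionally uses techniques suited to heterogeneous budgets and packet loss; names and parameters are illustrative.

```python
import numpy as np

def stochastic_quantize(x, num_levels=4, rng=None):
    """Unbiased stochastic quantization of a vector to `num_levels` levels between its min and max."""
    rng = rng or np.random.default_rng()
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    scale = (hi - lo) / (num_levels - 1)
    pos = (x - lo) / scale              # position measured in quantization steps
    low = np.floor(pos)
    prob_up = pos - low                 # rounding up with this probability keeps the estimate unbiased
    q = low + (rng.random(x.shape) < prob_up)
    return lo + q * scale

def distributed_mean_estimate(client_vectors):
    """Server-side DME: average the (lossily) quantized client vectors."""
    return np.mean([stochastic_quantize(v) for v in client_vectors], axis=0)
```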
Unlike traditional distributed machine learning, federated learning stores data locally for training and then aggregates the models on the server, which solves the data security problem that may arise in traditional distributed machine learning. However, during the training process, the transmission of model parameters can impose a significant load on the network bandwidth. It has been pointed out that the vast majority of model parameters are redundant during model parameter transmission. Building on this observation, we analyze the distribution of a selected subset of model parameters and propose a deep hierarchical quantization compression algorithm, which further compresses the model and reduces the network load of data transmission through hierarchical quantization of model parameters. We also adopt a dynamic sampling strategy for client selection to accelerate model convergence. Experimental results on different public datasets demonstrate the effectiveness of our algorithm.
Cross-device federated learning is an increasingly popular machine learning setting that enables training models by leveraging a large number of client devices with strong privacy and security guarantees. However, communication efficiency remains a major bottleneck when scaling federated learning to production environments, particularly due to bandwidth constraints during uplink communication. In this paper, we formalize and address the problem of compressing client-to-server model updates under the secure aggregation primitive, a core component of federated learning pipelines that allows the server to aggregate client updates without accessing them individually. In particular, we adapt standard scalar quantization and pruning methods to secure aggregation and propose Secure Indexing, a variant of secure aggregation that supports quantization for extreme compression. We establish state-of-the-art results on the LEAF benchmarks in a secure federated learning setting, with up to 40× compression in uplink communication and no meaningful loss in utility compared to an uncompressed baseline.
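A toy sketch of the secure aggregation primitive the paper builds on: fixed-point quantization followed by pairwise additive masking over a finite ring, so the server only ever sees masked updates whose masks cancel in the sum. This is not the paper's Secure Indexing protocol; a real system derives the pairwise seeds via key agreement and handles client dropout, and all names here are illustrative.

```python
import numpy as np

P = 2 ** 16       # modulus; quantized updates and masks live in Z_P
SCALE = 256       # fixed-point scale applied before masking

def quantize(update):
    """Fixed-point quantization so updates can be summed exactly in modular arithmetic."""
    return np.round(update * SCALE).astype(np.int64) % P

def mask(update_q, my_id, peer_ids, shape, seed=0):
    """Add pairwise pseudorandom masks that cancel when the server sums all clients."""
    masked = update_q.copy()
    for peer in peer_ids:
        if peer == my_id:
            continue
        pair_seed = hash((min(my_id, peer), max(my_id, peer), seed)) % (2 ** 32)
        noise = np.random.default_rng(pair_seed).integers(0, P, size=shape)
        masked = (masked + noise) % P if my_id < peer else (masked - noise) % P
    return masked

def secure_sum(masked_updates):
    """Server sums masked updates; pairwise masks cancel, revealing only the aggregate."""
    total = np.zeros_like(masked_updates[0])
    for m in masked_updates:
        total = (total + m) % P
    return total

# usage: each client quantizes and masks; the server recovers only the sum
clients = {0: np.array([0.1, -0.2]), 1: np.array([0.3, 0.0]), 2: np.array([-0.1, 0.4])}
ids = list(clients)
masked = [mask(quantize(u), cid, ids, shape=u.shape) for cid, u in clients.items()]
agg = secure_sum(masked)
agg_signed = np.where(agg > P // 2, agg - P, agg)   # map back from Z_P to signed fixed point
print(agg_signed / SCALE)                            # ≈ sum of the raw updates: [0.3, 0.2]
```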
Federated learning (FL) is an emerging paradigm that enables large-scale distributed training of machine learning models while still providing privacy guarantees. In this work, we jointly address two of the main practical challenges when scaling federated optimization to large node counts: the need for tight synchronization between the central authority and individual compute nodes, and the large communication cost of transmissions between the central server and clients. Specifically, we propose a new variant of the classic federated averaging (FedAvg) algorithm that supports both asynchronous communication and communication compression. We provide a new analysis technique showing that, despite these system relaxations, our algorithm essentially matches the best-known bounds for FedAvg under reasonable parameter settings. On the experimental side, we show that our algorithm ensures fast practical convergence on standard federated tasks.
Federated learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared machine learning model while keeping training data on-device, thereby eliminating the need to store and access the full data in the cloud. However, FL is difficult to implement, test, and deploy in practice given the heterogeneity of typical edge-device settings, making it fundamentally hard for researchers to efficiently prototype and test their optimization algorithms. In this work, we aim to alleviate this problem by introducing FL_PYTORCH: a suite of open-source software written in Python that builds on top of the most popular research deep learning (DL) framework, PyTorch. We built FL_PYTORCH as a research simulator for FL to enable fast development, prototyping, and experimentation with new and existing FL optimization algorithms. Our system supports abstractions that provide researchers with sufficient flexibility to experiment with existing and novel approaches to advance the state of the art. Furthermore, FL_PYTORCH is an easy-to-use console system that allows running multiple clients simultaneously using local CPUs or GPUs, and even remote compute devices, without any distributed implementation provided by the user. FL_PYTORCH also offers a graphical user interface. For new methods, researchers only need to provide a centralized implementation of their algorithm. To showcase the possibilities and usefulness of the system, we experiment with several well-known state-of-the-art FL algorithms and a few of the most common FL datasets.
Most research on federated learning has focused on small models, due to the bottlenecks of server-client communication and on-device computation. In this work, we leverage various techniques for mitigating these bottlenecks to train larger language models in cross-device federated learning. With the systematic application of partial model training, quantization, efficient transfer learning, and communication-efficient optimizers, we are able to train a 21M parameter Transformer and a 20.2M parameter Conformer that achieve the same or better perplexity as a similarly sized LSTM, with ~10× smaller client-to-server communication cost and 11% lower perplexity than the smaller LSTMs common in the literature.
Privacy and communication efficiency are important challenges in federated training of neural networks, and combining them remains an open problem. In this work, we develop a method that unifies highly compressed communication and differential privacy (DP). We introduce a compression technique based on relative entropy coding (REC) to the federated setting. With minor modifications to REC, we obtain a provably differentially private learning algorithm, DP-REC, and show how to compute its privacy guarantees. Our experiments demonstrate that DP-REC drastically reduces communication costs while providing privacy guarantees comparable to the state of the art.
We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent. In particular, we add user-level privacy protection to the federated averaging algorithm, which makes "large step" updates from user-level data. Our work demonstrates that given a dataset with a sufficiently large number of users (a requirement easily met by even small internet-scale datasets), achieving differential privacy comes at the cost of increased computation, rather than in decreased utility as in most prior work. We find that our private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset.
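The clip-and-noise aggregation step underlying user-level differentially private federated averaging can be sketched as follows (NumPy; the privacy accounting that turns the noise multiplier into a formal (ε, δ) guarantee is omitted, and the function and parameter names are illustrative).

```python
import numpy as np

def dp_fedavg_round(global_w, client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """User-level DP aggregation: clip each client's update to a fixed L2 norm,
    average, and add Gaussian noise calibrated to the clipping bound."""
    rng = rng or np.random.default_rng()
    clipped = []
    for delta in client_updates:
        norm = np.linalg.norm(delta)
        clipped.append(delta * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # per-coordinate Gaussian noise; the std shrinks with the number of participating users
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return global_w + avg + rng.normal(0.0, sigma, size=global_w.shape)
```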
Federated learning is used for the decentralized training of machine learning models on a large number (millions) of edge mobile devices. It is challenging because mobile devices often have limited communication bandwidth and local computation resources. Therefore, improving the efficiency of federated learning is critical for scalability and usability. In this paper, we propose to leverage partially trainable neural networks, which freeze a portion of the model parameters during the entire training process, to reduce the communication cost with little impact on model performance. Through extensive experiments, we empirically show that federated learning of partially trainable neural networks (FedPT) can result in superior communication-accuracy trade-offs, with up to 46× reduction in communication cost at a small accuracy cost. Our approach also enables faster training with a smaller memory footprint and better utility for strong differential privacy guarantees. The proposed FedPT method can be particularly interesting for pushing the limits of overparameterization in on-device learning.
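A minimal PyTorch sketch of the partial-training idea, assuming a simple layer-wise freezing rule (the actual FedPT freezing scheme may differ): only the trainable slice of parameters changes during local training, so only that slice needs to be uploaded each round.

```python
import torch
import torch.nn as nn

def make_partially_trainable(model: nn.Module, trainable_frac: float = 0.2):
    """Freeze all but a fraction of parameter tensors; frozen parameters never change
    on any client, so they do not need to be communicated as updates."""
    params = list(model.named_parameters())
    n_trainable = max(1, int(trainable_frac * len(params)))
    for i, (name, p) in enumerate(params):
        p.requires_grad = i >= len(params) - n_trainable   # keep only the last layers trainable
    return [name for name, p in params if p.requires_grad]

# usage: clients train and upload only the trainable slice
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
trainable_names = make_partially_trainable(model, trainable_frac=0.25)
optimizer = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=0.1)
```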
We study distributed mean estimation and optimization problems under communication constraints. We propose a correlated quantization protocol whose leading term in the error guarantee depends on the mean deviation of the data points rather than only their absolute range. The design does not require any prior knowledge of the concentration properties of the dataset, which was necessary to obtain such a dependence in previous work. We show that applying the proposed protocol as a subroutine in distributed optimization algorithms leads to better convergence rates. We also prove the optimality of our scheme under mild assumptions. Experimental results show that our proposed algorithm outperforms existing mean estimation protocols on a variety of tasks.
Federated learning is a rapidly growing area of research that enables a large number of clients to jointly train a machine learning model on privately held data. One of the largest barriers to wider adoption of federated learning is the communication cost of sending model updates from the clients to the server, which is accentuated by the fact that many of these devices are bandwidth-constrained. In this paper, we aim to address this issue by optimizing networks within a subspace of their full parameter space, an idea known as intrinsic dimension in the machine learning theory literature. We use a correspondence between intrinsic dimension and gradient compressibility to derive a family of low-bandwidth optimization algorithms, which we call intrinsic gradient compression algorithms. Specifically, we present three algorithms in this family with different levels of upload and download bandwidth for use in various federated settings, along with theoretical guarantees on their performance. Finally, in large-scale federated learning experiments with models of up to 100M parameters, we show that our algorithms perform extremely well compared to current state-of-the-art gradient compression methods.
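The subspace-training idea behind intrinsic gradient compression can be sketched as follows: parameters are expressed as theta = theta0 + A z for a random projection A regenerated from a shared seed, so only the low-dimensional coordinates z (or their gradient) ever cross the network. This is a generic illustration, not one of the paper's three specific algorithms, and the names are hypothetical.

```python
import numpy as np

def subspace_basis(full_dim, intrinsic_dim, seed=0):
    """Shared random projection; server and clients regenerate it from the seed,
    so only `intrinsic_dim` numbers need to be transmitted per step."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((full_dim, intrinsic_dim)) / np.sqrt(intrinsic_dim)

def client_step(theta0, A, z, local_grad_fn, lr=0.1):
    """Take a gradient step in the low-dimensional coordinates z of theta = theta0 + A @ z."""
    theta = theta0 + A @ z
    g = local_grad_fn(theta)   # gradient with respect to the full parameter vector
    g_z = A.T @ g              # chain rule: gradient with respect to z
    return z - lr * g_z        # only this small vector is uploaded
```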
Federated learning allows collaborative workers to solve a machine learning problem while preserving data privacy. Recent studies have tackled various challenges in federated learning, but the joint optimization of communication overhead, learning reliability, and deployment efficiency is still an open problem. To this end, we propose a new scheme named federated learning via plurality vote (FedVote). In each communication round of FedVote, workers transmit binary or ternary weights to the server with low communication overhead. The model parameters are aggregated via weighted voting to enhance the resilience against Byzantine attacks. When deployed for inference, the model with binary or ternary weights is resource-friendly to edge devices. We show that our proposed method can reduce quantization error and converges faster compared with the methods directly quantizing the model updates.
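A rough sketch of ternary updates with weighted per-coordinate voting, in the spirit of FedVote (the thresholding rule and score weighting here are illustrative assumptions, not the paper's exact procedure).

```python
import numpy as np

def ternarize(weights, threshold=0.05):
    """Map each weight to {-1, 0, +1}; only these ternary values are uploaded."""
    t = np.zeros(weights.shape, dtype=np.int8)
    t[weights > threshold] = 1
    t[weights < -threshold] = -1
    return t

def aggregate_by_vote(ternary_updates, client_scores=None):
    """Weighted plurality vote per coordinate: the server keeps the sign that
    receives the most (score-weighted) votes, giving some robustness to outliers."""
    votes = np.stack(ternary_updates).astype(np.float64)
    if client_scores is not None:
        votes = votes * np.asarray(client_scores)[:, None]
    return np.sign(votes.sum(axis=0)).astype(np.int8)
```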
Large-scale distributed learning tasks suffer from communication bottlenecks due to the limited communication resources of clients and the large number of model parameters. Gradient compression is an effective way to reduce the communication load by transmitting compressed gradients. Motivated by the fact that gradients in adjacent rounds may be highly correlated, since they aim to learn the same model under stochastic gradient descent, we propose a practical gradient compression scheme for federated learning that uses historical gradients to compress the current gradient; it is based on Wyner-Ziv coding but requires no probabilistic assumptions. We also implement our gradient quantization method on real datasets, where its performance is better than that of previous schemes.
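The use of historical gradients as side information can be illustrated with a simple delta-quantization sketch: quantize the difference from the previous round's gradient, which has a much smaller range when consecutive gradients are correlated. The paper's actual Wyner-Ziv-style code is more sophisticated than this; names and bit widths are illustrative.

```python
import numpy as np

def compress_with_history(grad, prev_grad, num_bits=2):
    """Quantize the difference between the current and previous gradient;
    correlation between rounds makes this difference small and easy to quantize."""
    diff = grad - prev_grad
    lo, hi = diff.min(), diff.max()
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((diff - lo) / scale).astype(np.uint8)
    return q, lo, scale

def decompress_with_history(q, lo, scale, prev_grad):
    """Server reconstructs the gradient using its own copy of the previous gradient as side information."""
    return prev_grad + (lo + q.astype(np.float64) * scale)
```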
Federated Learning is a machine learning paradigm where we aim to train machine learning models in a distributed fashion. Many clients/edge devices collaborate with each other to train a single model on a central server. Clients do not share their own datasets with each other, decoupling model training from the need to store data centrally. In this paper, we propose yet another adaptive federated optimization method along with some other ideas in the field of federated learning. We also perform experiments using these methods and showcase the improvement in the overall performance of federated learning.
Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data. Standard federated optimization methods such as Federated Averaging (FEDAVG) are often difficult to tune and exhibit unfavorable convergence behavior. In non-federated settings, adaptive optimization methods have had notable success in combating such issues. In this work, we propose federated versions of adaptive optimizers, including ADAGRAD, ADAM, and YOGI, and analyze their convergence in the presence of heterogeneous data for general nonconvex settings. Our results highlight the interplay between client heterogeneity and communication efficiency. We also perform extensive experiments on these methods and show that the use of adaptive optimizers can significantly improve the performance of federated learning.
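A server-side FedAdam-style round, following the general recipe of treating the average client update as a pseudo-gradient and applying an adaptive step to it; hyperparameter names and values here are illustrative, and ADAGRAD/YOGI variants change only the second-moment update.

```python
import numpy as np

def fedadam_round(w, client_deltas, state, server_lr=0.01, beta1=0.9, beta2=0.99, tau=1e-3):
    """One server round of a FedAdam-style optimizer: average the client updates
    into a pseudo-gradient, then take an Adam-style adaptive step with it."""
    delta = np.mean(client_deltas, axis=0)                         # pseudo-gradient
    state["m"] = beta1 * state["m"] + (1 - beta1) * delta          # first moment
    state["v"] = beta2 * state["v"] + (1 - beta2) * delta ** 2     # second moment
    return w + server_lr * state["m"] / (np.sqrt(state["v"]) + tau), state

# usage with stand-in client updates
dim = 4
state = {"m": np.zeros(dim), "v": np.zeros(dim)}
w = np.zeros(dim)
deltas = [np.random.default_rng(i).normal(scale=0.1, size=dim) for i in range(8)]
w, state = fedadam_round(w, deltas, state)
```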
Distributed deep learning frameworks such as federated learning (FL) and its variants enable personalized experiences across a wide range of web clients and mobile/IoT devices. However, FL-based frameworks are constrained by the computational resources of clients due to the explosive growth in model parameters (e.g., billion-parameter models). Split learning (SL), a recent framework, reduces the client-side computation load by splitting model training between the clients and the server. This flexibility is extremely useful for low-compute settings, but it often comes at the cost of increased bandwidth consumption and can lead to suboptimal convergence, especially when client data is heterogeneous. In this work, we introduce AdaSplit, which enables SL to scale efficiently to low-resource scenarios by reducing bandwidth consumption and improving performance on heterogeneous clients. To capture and benchmark the multi-dimensional nature of distributed deep learning, we also introduce the C3-Score, a metric for evaluating performance under a resource budget. We validate the effectiveness of AdaSplit under limited resources through extensive experimental comparisons with strong federated and split learning baselines. We also present a sensitivity analysis of key design choices in AdaSplit, which validates its ability to provide adaptive trade-offs across variable resource budgets.