Federated Learning allows multiple parties to jointly train a deep learning model on their combined data without any of the participants having to reveal their local data to a centralized server. This form of privacy-preserving collaborative learning, however, comes at the cost of a significant communication overhead during training. To address this problem, several compression methods have been proposed in the distributed training literature that can reduce the amount of required communication by up to three orders of magnitude. These existing methods, however, are of limited utility in the Federated Learning setting, as they either only compress the upstream communication from the clients to the server (leaving the downstream communication uncompressed) or only perform well under idealized conditions, such as an iid distribution of the client data, which typically cannot be found in Federated Learning. In this work, we propose Sparse Ternary Compression (STC), a new compression framework that is specifically designed to meet the requirements of the Federated Learning environment. STC extends the existing compression technique of top-k gradient sparsification with a novel mechanism to enable downstream compression as well as ternarization and optimal Golomb encoding of the weight updates. Our experiments on four different learning tasks demonstrate that STC distinctly outperforms Federated Averaging in common Federated Learning scenarios where clients a) hold non-iid data, b) use small batch sizes during training, or c) where the number of clients is large and the participation rate in every communication round is low. We furthermore show that even if the clients hold iid data and use medium-sized batches for training, STC still behaves Pareto-superior to Federated Averaging, in the sense that it achieves fixed target accuracies on our benchmarks within both fewer training iterations and a smaller communication budget. These results advocate for a paradigm shift in Federated optimization towards high-frequency low-bitwidth communication, in particular in bandwidth-constrained learning environments.
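As a rough illustration of the sparsification-plus-ternarization step described above (the Golomb encoding stage and the downstream mechanism are omitted), the following NumPy sketch keeps the top-k entries of a weight update by magnitude and replaces them with their signed mean magnitude; function and parameter names are illustrative, not the authors' reference implementation.

```python
import numpy as np

def sparse_ternary_compress(delta, sparsity=0.01):
    """Top-k sparsification followed by ternarization (illustrative sketch).

    Keeps only the k largest-magnitude entries of a flattened weight update
    and replaces them with +/- mu, where mu is their mean magnitude.
    """
    flat = delta.ravel()
    k = max(1, int(sparsity * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the k largest magnitudes
    mu = np.abs(flat[idx]).mean()
    values = np.where(flat[idx] > 0, mu, -mu)      # ternary values: {-mu, 0, +mu}
    compressed = np.zeros_like(flat)
    compressed[idx] = values
    return compressed.reshape(delta.shape), idx, mu

# usage: compress a simulated weight update
update = np.random.randn(1000)
sparse_update, nonzero_idx, magnitude = sparse_ternary_compress(update, sparsity=0.01)
```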
Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets. These experiments demonstrate that the approach is robust to the unbalanced and non-IID data distributions that are a defining characteristic of this setting. Communication costs are the principal constraint, and we show a reduction in required communication rounds by 10-100× as compared to synchronized stochastic gradient descent.
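The iterative model averaging at the core of Federated Averaging amounts to a sample-size-weighted average of the clients' locally trained parameters. Below is a minimal sketch of one aggregation round; the names are illustrative, not the authors' implementation.

```python
import numpy as np

def federated_averaging(client_weights, client_num_samples):
    """Weighted average of client model parameters (one FedAvg round).

    client_weights: list of dicts mapping layer name -> np.ndarray
    client_num_samples: list of local dataset sizes, used as aggregation weights
    """
    total = float(sum(client_num_samples))
    averaged = {}
    for name in client_weights[0]:
        averaged[name] = sum(
            w[name] * (n / total)
            for w, n in zip(client_weights, client_num_samples)
        )
    return averaged

# usage with two toy clients
clients = [{"w": np.ones((2, 2)), "b": np.zeros(2)},
           {"w": 3 * np.ones((2, 2)), "b": np.ones(2)}]
global_model = federated_averaging(clients, client_num_samples=[10, 30])
# each "w" entry becomes 0.25 * 1 + 0.75 * 3 = 2.5
```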
Unlike traditional distributed machine learning, federated learning keeps data local for training and then aggregates the models on the server, which addresses the data security problems that may arise in traditional distributed machine learning. However, during the training process, the transmission of model parameters can impose a significant load on the network bandwidth. It has been pointed out that the vast majority of model parameters are redundant during model parameter transmission. In this paper, we build on this observation to examine the distribution of a selected subset of model parameters and propose a deep hierarchical quantization compression algorithm, which further compresses the model and reduces the network load caused by data transmission through hierarchical quantization of the model parameters. We also adopt a dynamic sampling strategy for client selection to accelerate the convergence of the model. Experimental results on different public datasets demonstrate the effectiveness of our algorithm.
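The abstract does not spell out the quantization scheme; as one plausible illustration of hierarchical (tiered) quantization, the sketch below assigns more bits to large-magnitude parameters and fewer bits to the rest. All names, thresholds, and bit widths are assumptions for illustration only.

```python
import numpy as np

def uniform_quantize(x, num_bits):
    """Uniform quantization of x onto 2**num_bits levels over its own range."""
    if x.size == 0:
        return x.copy()
    levels = 2 ** num_bits - 1
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    step = (hi - lo) / levels
    return lo + np.round((x - lo) / step) * step

def hierarchical_quantize(params, threshold=0.1, high_bits=8, low_bits=2):
    """Illustrative two-tier quantization: large-magnitude parameters keep
    higher precision, the remaining parameters are quantized very coarsely."""
    out = np.empty_like(params)
    large = np.abs(params) >= threshold
    out[large] = uniform_quantize(params[large], high_bits)
    out[~large] = uniform_quantize(params[~large], low_bits)
    return out

quantized = hierarchical_quantize(np.random.randn(10_000) * 0.2)
```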
Federated Learning (FL) is a new artificial intelligence concept that enables Internet-of-Things (IoT) devices to learn a collaborative model without sending raw data to a centralized node for processing. Despite its many advantages, the limited computational resources of IoT devices and the high communication cost of exchanging model parameters make applying FL in large-scale IoT networks very difficult. In this work, we develop a novel compression scheme for FL in very large IoT networks, called High-Compression Federated Learning (HCFL). HCFL reduces the data load of the FL process without changing its structure or hyperparameters. In this way, we not only significantly lower the communication cost but also make the intensive learning process more suitable for IoT devices with low computational resources. Furthermore, we investigate the relationship between the number of IoT devices and the convergence level of the FL model, so as to better assess the quality of the FL process. We demonstrate the HCFL scheme through both simulations and mathematical analysis. Our theoretical study can serve as a minimum satisfaction level, proving that the FL process achieves good performance when a given configuration is met. We therefore show that HCFL is applicable to any FL-enabled network with a large number of IoT devices.
Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model while training data remains distributed over a large number of clients, each with unreliable and relatively slow network connections. We consider learning algorithms for this setting where, on each round, each client independently computes an update to the current model based on its local data and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model. The typical clients in this setting are mobile phones, and communication efficiency is of the utmost importance. In this paper, we propose two ways to reduce the uplink communication costs: structured updates, where we directly learn an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, where we learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling before sending it to the server. Experiments on both convolutional and recurrent networks show that the proposed methods can reduce the communication cost by two orders of magnitude.
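Below is a minimal sketch of the two upload-compression ideas, under the simplifying assumption that the structured update uses a plain Bernoulli random mask and the sketched update uses only subsampling plus uniform quantization (random rotations and low-rank parametrizations are omitted); names are illustrative.

```python
import numpy as np

def random_mask_update(update, keep_fraction=0.1, seed=0):
    """Structured update via a random mask: only a random subset of entries is
    learned/communicated; since the server knows the seed, only the kept values
    need to be transmitted."""
    rng = np.random.default_rng(seed)
    mask = rng.random(update.shape) < keep_fraction
    return update * mask, mask

def sketched_update(update, keep_fraction=0.1, num_bits=4, seed=0):
    """Sketched update: subsample entries of a full update, then uniformly
    quantize the retained values."""
    rng = np.random.default_rng(seed)
    flat = update.ravel()
    k = max(1, int(keep_fraction * flat.size))
    idx = rng.choice(flat.size, size=k, replace=False)
    vals = flat[idx]
    levels = 2 ** num_bits - 1
    lo, hi = vals.min(), vals.max()
    step = (hi - lo) / levels if hi > lo else 1.0
    return idx, lo + np.round((vals - lo) / step) * step

masked, mask = random_mask_update(np.random.randn(100, 100))
indices, quantized_vals = sketched_update(np.random.randn(100, 100))
```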
Federated learning is a recent approach for training statistical models on distributed datasets without violating privacy constraints. The principle of data locality is preserved by sharing the model, rather than the data, between clients and the server. This brings many advantages but also new challenges. In this report, we explore this new research area and perform several experiments to deepen our understanding of these challenges and of how different problem settings affect the performance of the final model. Finally, we present a novel approach to one of these challenges and compare it to other methods from the literature.
Federated Learning (FL) is currently the most widely adopted framework for collaborative training of (deep) machine learning models under privacy constraints. Despite its popularity, it has been observed that Federated Learning yields suboptimal results if the local clients' data distributions diverge. To address this issue, we present Clustered Federated Learning (CFL), a novel Federated Multi-Task Learning (FMTL) framework, which exploits geometric properties of the FL loss surface to group the client population into clusters with jointly trainable data distributions. In contrast to existing FMTL approaches, CFL does not require any modifications to the FL communication protocol, is applicable to general non-convex objectives (in particular deep neural networks), and comes with strong mathematical guarantees on the clustering quality. CFL is flexible enough to handle client populations that vary over time and can be implemented in a privacy-preserving way. As clustering is only performed after Federated Learning has converged to a stationary point, CFL can be viewed as a post-processing method that will always achieve performance greater than or equal to conventional FL by allowing clients to arrive at more specialized models. We verify our theoretical analysis in experiments with deep convolutional and recurrent neural networks on commonly used Federated Learning datasets.
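CFL's grouping can be illustrated, in a heavily simplified form, by comparing the cosine similarity of the clients' flattened weight updates and splitting the population accordingly; the splitting rule below is a toy stand-in, not the paper's exact bipartitioning criterion, and all names are illustrative.

```python
import numpy as np

def cosine_similarity_matrix(updates):
    """Pairwise cosine similarities between flattened client updates."""
    U = np.stack([u.ravel() for u in updates])
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    return U @ U.T

def bipartition_clients(updates):
    """Very simple illustrative split: clients whose update is more similar to
    client 0 than the average similarity join client 0's cluster."""
    S = cosine_similarity_matrix(updates)
    thresh = S[0].mean()
    cluster_a = [i for i in range(len(updates)) if S[0, i] >= thresh]
    cluster_b = [i for i in range(len(updates)) if S[0, i] < thresh]
    return cluster_a, cluster_b

# two synthetic "tasks": clients 0-4 share one update direction, clients 5-9 another
rng = np.random.default_rng(0)
base_a, base_b = rng.normal(size=100), rng.normal(size=100)
updates = [base_a + 0.1 * rng.normal(size=100) for _ in range(5)] + \
          [base_b + 0.1 * rng.normal(size=100) for _ in range(5)]
print(bipartition_clients(updates))   # expected: ([0, 1, 2, 3, 4], [5, 6, 7, 8, 9])
```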
Federated learning enables resource-constrained edge computing devices, such as mobile phones and IoT devices, to learn a shared model for prediction while keeping the training data local. This decentralized approach to training models provides privacy, security, regulatory, and economic benefits. In this work, we focus on the statistical challenge of federated learning when local data is non-IID. We first show that the accuracy of federated learning degrades significantly, by up to 55% for neural networks trained on highly skewed non-IID data, where each client device trains only on a single class of data. We further show that this accuracy degradation can be explained by weight divergence, which can be quantified by the earth mover's distance (EMD) between the class distribution on each device and the population distribution. As a solution, we propose a strategy to improve training on non-IID data by creating a small subset of data that is globally shared among all edge devices. Experiments show that accuracy can be increased by 30% on the CIFAR-10 dataset with only 5% globally shared data.
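A minimal sketch of the label-skew measurement mentioned above: the distance between a client's class distribution and the population distribution, here computed as the L1 distance that is commonly used as the EMD for categorical labels. The helper names are illustrative.

```python
import numpy as np

def class_distribution(labels, num_classes):
    """Empirical class distribution of a label vector."""
    counts = np.bincount(labels, minlength=num_classes)
    return counts / counts.sum()

def label_skew(client_labels, population_labels, num_classes):
    """Distance between a client's class distribution and the population
    distribution; for categorical labels this L1 distance is often used as the
    earth mover's distance referred to in the abstract."""
    p_client = class_distribution(client_labels, num_classes)
    p_global = class_distribution(population_labels, num_classes)
    return np.abs(p_client - p_global).sum()

# a client holding only class 0 versus a uniform 10-class population
population = np.repeat(np.arange(10), 100)
skewed_client = np.zeros(100, dtype=int)
print(label_skew(skewed_client, population, num_classes=10))   # 1.8
```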
When the available hardware cannot meet the memory and computation requirements to efficiently train high-performing machine learning models, a compromise in either the training quality or the model complexity is needed. In Federated Learning (FL), nodes are orders of magnitude more constrained than traditional server-grade hardware and are often battery-powered, severely limiting the complexity of the models that can be trained under this paradigm. While most research has focused on designing better aggregation strategies to improve convergence rates and alleviate the communication costs of FL, fewer efforts have been devoted to speeding up on-device training. Such a stage, which repeats hundreds of times (i.e. every round) and can involve thousands of devices, accounts for the majority of the time required to train federated models, as well as the total energy consumption on the client side. In this work, we present the first study of the unique aspects that arise when introducing sparsity at training time in FL workloads. We then propose ZeroFL, a framework that relies on highly sparse operations to accelerate on-device training. Models trained with ZeroFL and 95% sparsity achieve up to 2.3% higher accuracy compared to competitive baselines obtained by adapting a state-of-the-art sparse training framework to the FL setting.
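Sparse on-device training can be illustrated, very roughly, by masking all but the largest-magnitude weights during the forward pass; this is only a conceptual sketch with illustrative names and does not reflect ZeroFL's actual sparse kernels or training schedule.

```python
import numpy as np

def topk_mask(weights, sparsity=0.95):
    """Keep only the (1 - sparsity) fraction of largest-magnitude weights."""
    k = max(1, int((1.0 - sparsity) * weights.size))
    thresh = np.partition(np.abs(weights).ravel(), -k)[-k]
    return (np.abs(weights) >= thresh).astype(weights.dtype)

def sparse_forward(x, weights, sparsity=0.95):
    """Dense matmul with a sparsified weight matrix; a real implementation would
    use sparse kernels to actually save compute and memory on-device."""
    return x @ (weights * topk_mask(weights, sparsity))

activations = sparse_forward(np.random.randn(32, 256), np.random.randn(256, 128))
```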
Federated learning often suffers from unstable and slow convergence due to the heterogeneous characteristics of participating clients. This tendency is aggravated when the client participation ratio is low, since the information collected from the clients in each round is prone to be more inconsistent. To address the challenge, we propose a novel federated learning framework which improves the stability of the server-side aggregation step; this is achieved by having the server send to the clients a model accelerated with a global gradient estimate, which guides the local gradient updates. Our algorithm naturally aggregates and conveys global update information to the participants with no additional communication cost and does not require clients to store past models. We also regularize the local updates to further reduce bias and improve the stability of the local updates. We perform comprehensive empirical studies on real data under various settings and demonstrate the remarkable performance of the proposed method in terms of accuracy and communication efficiency compared to state-of-the-art methods, especially with low client participation rates. Our code is available at https://github.com/ninigapa0/fedagm
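A rough sketch of the server-side idea, under the assumption that the "global gradient estimate" is a momentum buffer over averaged client deltas and that the broadcast model is a look-ahead of the aggregated model; all coefficients and names are illustrative and do not reproduce the paper's exact update rule.

```python
import numpy as np

def server_round(global_w, client_deltas, momentum, beta=0.9, lam=1.0, lr=1.0):
    """One illustrative server step: average client deltas, update a global
    momentum buffer, and broadcast a look-ahead ('accelerated') model that
    clients use as the starting point for their next local updates."""
    avg_delta = np.mean(client_deltas, axis=0)
    momentum = beta * momentum + avg_delta                # global gradient estimate
    new_global_w = global_w + lr * avg_delta              # standard aggregation
    broadcast_w = new_global_w + lam * beta * momentum    # accelerated model sent to clients
    return new_global_w, broadcast_w, momentum

w, m = np.zeros(10), np.zeros(10)
deltas = [np.random.randn(10) * 0.01 for _ in range(5)]
w, w_broadcast, m = server_round(w, deltas, m)
```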
Privacy and communication efficiency are important challenges in federated training of neural networks, and combining them remains an open problem. In this work, we develop a method that unifies highly compressed communication and differential privacy (DP). We introduce a compression technique based on Relative Entropy Coding (REC) to the federated setting. With minor modifications to REC, we obtain a provably differentially private learning algorithm, DP-REC, and show how to compute its privacy guarantees. Our experiments demonstrate that DP-REC drastically reduces communication costs while providing privacy guarantees comparable to the state of the art.
As data generation increasingly takes place on devices without a wired connection, machine learning (ML) related traffic will become ubiquitous in wireless networks. Many studies have shown that traditional wireless protocols are highly inefficient or unsustainable for supporting ML, which creates the need for new wireless communication methods. In this survey, we give an exhaustive review of state-of-the-art wireless methods that are specifically designed to support ML services over distributed datasets. Currently, there are two clear themes in the literature: analog over-the-air computation and digital radio resource management optimized for ML. This survey gives a comprehensive introduction to these methods, reviews the most important works, highlights open problems, and discusses application scenarios.
Distributed deep learning frameworks such as Federated Learning (FL) and its variants enable personalized experiences across a wide range of web clients and mobile/IoT devices. However, FL-based frameworks are constrained by the computational resources of clients due to the explosive growth of model parameters (e.g. billion-parameter models). Split Learning (SL), a recent framework, reduces the client-side computation load by splitting model training between the client and the server. This flexibility is extremely useful for low-compute setups, but is often achieved at the cost of increased bandwidth consumption and can lead to suboptimal convergence, especially when client data is heterogeneous. In this work, we introduce AdaSplit, which enables SL to scale efficiently to low-resource scenarios by reducing bandwidth consumption and improving performance on heterogeneous clients. To capture and benchmark the multi-dimensional nature of distributed deep learning, we also introduce the C3-Score, a metric to evaluate performance under a resource budget. We validate the effectiveness of AdaSplit under limited resources through extensive experimental comparisons with strong federated and split learning baselines. We also present a sensitivity analysis of the key design choices in AdaSplit, which validates AdaSplit's ability to provide adaptive trade-offs across variable resource budgets.
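Split learning itself can be sketched in a few lines: the client runs the forward pass up to a cut layer, the server finishes the forward and backward pass and returns the gradient at the cut, and each side updates its own portion of the model. The plain-NumPy example below illustrates one such step with a single cut layer; AdaSplit's bandwidth-reduction and heterogeneity mechanisms are not shown, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=(8, 20)), rng.normal(size=(8, 4))   # one client mini-batch
W1 = rng.normal(size=(20, 16)) * 0.1    # client-side portion of the model
W2 = rng.normal(size=(16, 4)) * 0.1     # server-side portion of the model
lr = 0.1

# --- client: forward pass up to the cut layer, send activations to the server ---
h_pre = x @ W1
h = np.maximum(h_pre, 0.0)              # only h (plus labels or loss feedback) crosses the network

# --- server: finish the forward pass, compute the loss, backprop to the cut ---
y_hat = h @ W2
loss = ((y_hat - y) ** 2).mean()
grad_yhat = 2.0 * (y_hat - y) / y_hat.size
grad_W2 = h.T @ grad_yhat
grad_h = grad_yhat @ W2.T               # gradient at the cut, sent back to the client

# --- client: finish backprop locally and update its part of the model ---
grad_W1 = x.T @ (grad_h * (h_pre > 0))
W1 -= lr * grad_W1
W2 -= lr * grad_W2                      # server updates its own part
```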
Scalability and privacy are two critical concerns for cross-device federated learning (FL) systems. In this work, we identify that synchronous aggregation of client updates in FL cannot scale efficiently beyond a few hundred clients training in parallel: it leads to diminishing returns in model performance and training speed, analogous to large-batch training. Asynchronous aggregation of client updates (i.e. asynchronous FL), on the other hand, alleviates the scalability issue. However, aggregating individual client updates is incompatible with Secure Aggregation, which could result in an undesirable level of privacy for the system. To address these concerns, we propose FedBuff, a novel buffered asynchronous aggregation method that is agnostic to the choice of optimizer and combines the best properties of synchronous and asynchronous FL. We empirically demonstrate that FedBuff is more efficient than synchronous FL and up to 3.3× more efficient than asynchronous FL, while remaining compatible with privacy-preserving techniques such as Secure Aggregation and differential privacy. We provide theoretical convergence guarantees in the smooth non-convex setting. Finally, we show that under differentially private training, FedBuff can outperform FedAvgM at low-privacy settings and achieve the same utility at higher-privacy settings.
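Buffered asynchronous aggregation can be sketched as a server that accumulates client updates as they arrive and applies them once the buffer holds K of them; the class below is an illustrative toy, not the authors' system (staleness weighting and secure aggregation are omitted, and all names are assumptions).

```python
import numpy as np

class BufferedAsyncServer:
    """Illustrative buffered asynchronous aggregation: client updates are
    accumulated as they arrive and applied once the buffer holds K of them."""

    def __init__(self, init_weights, buffer_size=10, server_lr=1.0):
        self.weights = init_weights
        self.buffer_size = buffer_size
        self.server_lr = server_lr
        self._buffer = []

    def receive_update(self, client_delta):
        self._buffer.append(client_delta)
        if len(self._buffer) >= self.buffer_size:
            aggregate = np.mean(self._buffer, axis=0)
            self.weights = self.weights + self.server_lr * aggregate
            self._buffer.clear()   # start filling the next buffer

server = BufferedAsyncServer(np.zeros(5), buffer_size=3)
for _ in range(7):                 # updates arrive asynchronously, in any order
    server.receive_update(np.random.randn(5) * 0.01)
```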
State-of-the-art federated learning methods perform far worse than their centralized counterparts when clients hold dissimilar data distributions. For neural networks, even though centralized SGD easily finds a solution that performs well for all clients simultaneously, current federated optimization methods fail to converge to a comparable solution. We show that this performance gap can largely be attributed to the optimization challenges posed by non-convexity. Specifically, we find that the early layers of the network do learn useful features, but the final layers fail to make use of them; that is, federated optimization applied to this non-convex problem distorts the learning of the final layers. Leveraging this observation, we propose a Train-Convexify-Train (TCT) procedure to sidestep the issue: first, learn features with an off-the-shelf method (e.g. FedAvg); then, optimize a convexified problem obtained from the network's empirical neural tangent kernel approximation. Our technique improves accuracy by up to +36% on FMNIST and +37% on CIFAR10 when clients hold dissimilar data.
Federated learning (FL) and split learning (SL) are two emerging collaborative learning methods that may greatly facilitate ubiquitous intelligence in the Internet of Things (IoT). Federated learning enables machine learning (ML) models locally trained on private data to be aggregated into a global model. Split learning allows different portions of an ML model to be collaboratively trained on different workers in a learning framework. Federated learning and split learning, each having unique advantages and respective limitations, may complement each other toward ubiquitous intelligence in the IoT. Therefore, the combination of federated learning and split learning has recently become an active research area attracting extensive interest. In this article, we review the latest developments in federated learning and split learning and present a survey of the state-of-the-art technologies for combining these two learning methods in an edge-computing-based IoT environment. We also identify some open problems and discuss possible directions for future research in this area, with the hope of further arousing the research community's interest in this emerging field.
Federated learning enables cooperative training among massively distributed clients by sharing their learned local model parameters. However, with increasing model size, deploying federated learning requires a large communication bandwidth, which limits its deployment in wireless networks. To address this bottleneck, we introduce a residual-based federated learning framework (ResFed), where residuals rather than model parameters are transmitted in communication networks for training. In particular, we integrate two pairs of shared predictors for model prediction in both server-to-client and client-to-server communication. By employing a common prediction rule, both locally and globally updated models are always fully recoverable at the clients and the server. We highlight that the residuals only indicate the quasi-update of a model within a single round and hence contain denser information and have lower entropy than model weights or gradients. Based on this property, we further conduct lossy compression of the residuals by sparsification and quantization and encode them for efficient communication. The experimental evaluation shows that ResFed requires remarkably lower communication costs and achieves better accuracy by leveraging the less sensitive residuals, compared to standard federated learning. For instance, to train a 4.08 MB CNN model on CIFAR-10 with 10 clients under a non-independent and identically distributed (Non-IID) setting, our approach achieves a compression ratio of over 700× in each communication round with minimal impact on accuracy. To reach an accuracy of 70%, it saves around 99% of the total communication volume, from 587.61 Mb to 6.79 Mb in up-streaming and to 4.61 Mb in down-streaming on average for all clients.
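A minimal sketch of residual-based communication, under the simplifying assumption that the shared predictor is just the last synchronized model: both sides form the same prediction, only a sparsified and quantized residual is transmitted, and the receiver reconstructs the model. Names and compression parameters are illustrative, not ResFed's actual pipeline.

```python
import numpy as np

def compress_residual(residual, keep_fraction=0.01, num_bits=4):
    """Top-k sparsification plus uniform quantization of a model residual."""
    flat = residual.ravel()
    k = max(1, int(keep_fraction * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    vals = flat[idx]
    levels = 2 ** num_bits - 1
    lo, hi = vals.min(), vals.max()
    step = (hi - lo) / levels if hi > lo else 1.0
    q_vals = lo + np.round((vals - lo) / step) * step
    return idx, q_vals, residual.shape

def decompress_residual(idx, q_vals, shape):
    out = np.zeros(int(np.prod(shape)))
    out[idx] = q_vals
    return out.reshape(shape)

last_synced_model = np.random.randn(1000)
predicted = last_synced_model                                     # shared predictor on both sides
local_model = last_synced_model + 0.01 * np.random.randn(1000)    # model after local training
residual = local_model - predicted                                # only this is compressed and sent
idx, q_vals, shape = compress_residual(residual)
reconstructed = predicted + decompress_residual(idx, q_vals, shape)
```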
We introduce a novel federated learning framework, FedD3, which reduces the overall communication volume and opens up the concept of federated learning to more application scenarios in network-constrained environments. It achieves this by leveraging local dataset distillation instead of traditional learning approaches, (i) significantly reducing the communication volume and (ii) limiting transfers to one-shot communication rather than iterative multi-way communication. Instead of sharing model updates as in other federated learning approaches, FedD3 allows connected clients to independently distill their local datasets and then aggregates those decentralized distilled datasets (typically a few unrecognizable images, normally smaller than the model) across the network to form the final model in a single shot. Our experimental results show that FedD3 significantly outperforms other federated learning frameworks in terms of the required communication volume while, depending on the use case or target dataset, providing the ability to balance the trade-off between accuracy and communication cost. For instance, to train an AlexNet model on a non-IID CIFAR-10 dataset with 10 clients, FedD3 can either increase accuracy by over 71% at a similar communication volume, or save 98% of the communication volume while reaching the same accuracy, compared to other federated learning approaches.
Data heterogeneity across clients is a key challenge in federated learning. Prior works address this by either aligning client and server models or using control variates to correct client model drift. Although these methods achieve fast convergence in convex or simple non-convex problems, the performance in over-parameterized models such as deep neural networks is lacking. In this paper, we first revisit the widely used FedAvg algorithm in a deep neural network to understand how data heterogeneity influences the gradient updates across the neural network layers. We observe that while the feature extraction layers are learned efficiently by FedAvg, the substantial diversity of the final classification layers across clients impedes the performance. Motivated by this, we propose to correct model drift by variance reduction only on the final layers. We demonstrate that this significantly outperforms existing benchmarks at a similar or lower communication cost. We furthermore provide a proof of the convergence rate of our algorithm.
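One plausible reading of "variance reduction only on the final layers" is a SCAFFOLD-style control-variate correction restricted to the classifier parameters; the sketch below shows a single corrected local step under that assumption. The control variates c_global and c_local, the layer names, and the learning rate are all illustrative.

```python
import numpy as np

def local_step(params, grads, c_global, c_local, lr=0.01, final_layer="classifier"):
    """One illustrative local SGD step where a control-variate correction
    (c_global - c_local) is applied only to the final classification layer."""
    new_params = {}
    for name, w in params.items():
        g = grads[name]
        if name == final_layer:
            g = g - c_local[name] + c_global[name]   # drift correction on the classifier only
        new_params[name] = w - lr * g
    return new_params

params = {"features": np.random.randn(64, 32), "classifier": np.random.randn(32, 10)}
grads = {k: np.random.randn(*v.shape) * 0.1 for k, v in params.items()}
c_g = {"classifier": np.zeros((32, 10))}
c_l = {"classifier": np.random.randn(32, 10) * 0.01}
updated = local_step(params, grads, c_g, c_l)
```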
In recent years, mobile devices have been equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.