Federated Learning (FL) has recently attracted attention thanks to its ability to use decentralized data while preserving privacy. However, it also raises additional challenges related to the heterogeneity of the participating devices, both in terms of their compute capabilities and contributed data. Meanwhile, Neural Architecture Search (NAS) has been used successfully on centralized datasets, producing state-of-the-art results in both constrained (hardware-aware) and unconstrained settings. However, even the most recent work at the intersection of NAS and FL assumes a homogeneous compute environment with datacenter-grade hardware and does not address the problems of working with constrained, heterogeneous devices. As a result, the practical usage of NAS in a federated setting remains an open problem, which we address in our work. We design our system, FedorAS, to discover and train promising architectures when dealing with devices of varying capabilities holding non-IID distributed data, and we provide empirical evidence of its effectiveness across different settings. Specifically, we evaluate FedorAS on datasets spanning three different modalities (vision, speech, text) and show that it performs better than state-of-the-art federated solutions while maintaining resource efficiency.
Federated Learning (FL) has been gaining significant traction across different ML tasks, ranging from vision to keyboard prediction. In large-scale deployments, client heterogeneity is a fact and constitutes a primary problem for fairness, training performance, and accuracy. Although significant efforts have been made to address statistical data heterogeneity, the diversity in clients' processing capabilities and network bandwidth, known as system heterogeneity, has remained largely unexplored. Current solutions either disregard a large portion of the available devices or impose a uniform model size limit dictated by the least capable participants. In this work, we introduce Ordered Dropout, a mechanism that achieves an ordered, nested representation of knowledge in deep neural networks (DNNs) and enables the extraction of lower-footprint submodels without the need for retraining. We further show that, for linear maps, our Ordered Dropout is equivalent to SVD. We employ this technique, along with a self-distillation methodology, in a framework called FjORD. FjORD alleviates the problem of client system heterogeneity by tailoring the model width to the client's capabilities. Extensive evaluation on both CNNs and RNNs across diverse modalities shows that FjORD consistently delivers significant performance gains over state-of-the-art baselines while maintaining its nested structure.
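A minimal sketch of the ordered-dropout idea described above, assuming a single fully connected layer; the helper name and the width ratio are illustrative, not taken from the paper. Keeping only the first ceil(p * K) output units of a layer yields a nested, lower-footprint submodel that can be extracted without retraining.

```python
import torch
import torch.nn as nn

def extract_submodel(linear: nn.Linear, p: float) -> nn.Linear:
    """Illustrative ordered-dropout extraction: keep the first ceil(p * out_features)
    output units of a linear layer, so submodels are nested by construction."""
    k = max(1, int(round(p * linear.out_features)))
    sub = nn.Linear(linear.in_features, k, bias=linear.bias is not None)
    with torch.no_grad():
        sub.weight.copy_(linear.weight[:k, :])
        if linear.bias is not None:
            sub.bias.copy_(linear.bias[:k])
    return sub

full = nn.Linear(64, 32)
half = extract_submodel(full, p=0.5)   # nested lower-footprint submodel with 16 units
```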
Efficiently deploying deep neural networks on many devices under resource constraints, especially on edge devices, is one of the most challenging problems in the presence of data-privacy concerns. Conventional approaches have evolved to either improve a single global model while keeping each local training dataset decentralized (i.e., data heterogeneity), or to train a once-for-all network that supports diverse architectural settings to address heterogeneous systems equipped with different computational capabilities (i.e., model heterogeneity). However, little research has considered both directions simultaneously. In this work, we propose a novel framework that considers both scenarios, Federation of Supernet Training (FedSup), in which clients send and receive a supernet containing all possible architectures sampled from itself. It is inspired by the observation that averaging parameters during the model aggregation stage of federated learning (FL) is similar to weight sharing in supernet training. Specifically, in the FedSup framework, the weight-sharing approach widely used for training single-shot models is combined with federated averaging (FedAvg). Under our framework, we further propose an efficient algorithm (E-FedSup) that reduces communication cost and training overhead by sending sub-models to clients in the broadcast stage. We demonstrate several strategies for enhancing supernet training in the FL environment and conduct extensive empirical evaluations. The resulting framework is shown to pave the way toward robustness against both data and model heterogeneity on several standard benchmarks.
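A rough sketch of the combination described above; the class names, layer sizes, and the uniform sub-width sampling are my own illustration rather than the paper's implementation. Clients train randomly sampled sub-networks of a shared supernet, and the server applies FedAvg-style averaging over the returned supernet weights.

```python
import copy
import random
import torch
import torch.nn as nn

class SuperNet(nn.Module):
    """Toy supernet: one over-parameterized hidden layer whose width is sub-sampled."""
    def __init__(self, d_in=32, d_hidden=64, d_out=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x, width: int):
        h = torch.relu(self.fc1(x)[:, :width])           # use only the first `width` units
        return h @ self.fc2.weight[:, :width].t() + self.fc2.bias

def fedavg(global_model, client_models):
    """Average client supernet weights into the global supernet (uniform FedAvg)."""
    state = global_model.state_dict()
    for key in state:
        state[key] = torch.stack([m.state_dict()[key] for m in client_models]).mean(0)
    global_model.load_state_dict(state)

global_net = SuperNet()
clients = [copy.deepcopy(global_net) for _ in range(4)]
for c in clients:
    width = random.choice([16, 32, 64])                  # each client trains a sampled sub-network
    _ = c(torch.randn(8, 32), width)                     # (local SGD steps would go here)
fedavg(global_net, clients)
```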
Federated learning (FL) is an efficient learning framework that assists distributed machine learning when data cannot be shared with a centralized server due to privacy and regulatory restrictions. Recent advances in FL rely on learning over predefined architectures. However, given that clients' data are invisible to the server and data distributions are non-identical across clients, a predefined architecture discovered in a centralized setting may not be the optimal solution for all clients in FL. Motivated by this challenge, in this work we introduce SPIDER, an algorithmic framework that aims to Search PersonalIzeD nEural aRchitectures for federated learning. SPIDER is designed based on two unique features: (1) alternately optimizing one architecture-homogeneous global model (supernet) in a generic FL manner and one architecture-heterogeneous local model connected to the global model by weight-sharing-based regularization; (2) achieving architecture-heterogeneous local models with a novel neural architecture search (NAS) method that progressively selects optimal subnets using operation-level perturbations on the accuracy value. Experimental results demonstrate that SPIDER outperforms other state-of-the-art personalization methods, and the searched personalized architectures are more inference-efficient.
Federated Learning (FL) enables distributed training of machine learning models while keeping personal data on user devices. While we witness increasing applications of FL in the area of mobile sensing, such as human-activity recognition (HAR), FL has not been studied in the context of a multi-device environment (MDE), wherein each user owns multiple data-producing devices. With the proliferation of mobile and wearable devices, MDEs are increasingly becoming popular in ubicomp settings, therefore necessitating the study of FL in them. FL in MDEs is characterized by data that is not independent and identically distributed (non-IID) across clients, further complicated by the presence of both client and device heterogeneity. Moreover, ensuring efficient utilization of system resources on FL clients in an MDE remains an important challenge. In this paper, we propose FLAME, a user-centered FL training approach that counters the statistical and system heterogeneity in MDEs and brings consistency in inference performance across devices. FLAME features (i) user-centered FL training that leverages the time alignment across devices of the same user; (ii) accuracy- and efficiency-aware device selection; and (iii) model personalization to devices. We also present an FL evaluation testbed with realistic energy-drain and network-bandwidth profiles, and a novel class-based data partitioning scheme to extend existing HAR datasets to a federated setup. Our experimental results on three multi-device HAR datasets show that FLAME outperforms various baselines with a 4.3-25.8% higher F1 score, 1.02-2.86x greater energy efficiency, and up to 2.06x speedup in convergence to target accuracy through fair distribution of the FL workload.
When the available hardware cannot meet the memory and compute requirements to efficiently train high-performing machine learning models, a compromise in either the training quality or the model complexity is needed. In Federated Learning (FL), nodes are orders of magnitude more constrained than traditional server-grade hardware and are often battery powered, severely limiting the sophistication of models that can be trained under this paradigm. While most research has focused on designing better aggregation strategies to improve convergence rates and on alleviating the communication costs of FL, fewer efforts have been devoted to speeding up on-device training. This stage, which repeats hundreds of times (i.e., every round) and can involve thousands of devices, accounts for the majority of the time required to train federated models and for the totality of the energy consumption on the client side. In this work, we present the first study of the unique aspects that arise when introducing sparsity at training time in FL workloads. We then propose ZeroFL, a framework that relies on highly sparse operations to accelerate on-device training. Models trained with ZeroFL and 95% sparsity achieve up to 2.3% higher accuracy compared to competitive baselines obtained by adapting a state-of-the-art sparse training framework to the FL setting.
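A hedged illustration of sparse on-device training in the spirit described above; the generic top-k magnitude mask below is an assumption for exposition and not necessarily the exact mechanism ZeroFL uses. Weights are masked to a target sparsity before the forward and backward passes, so most multiplications involve zeros.

```python
import torch
import torch.nn as nn

def topk_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Keep only the (1 - sparsity) fraction of weights with the largest magnitude."""
    k = max(1, int(weight.numel() * (1.0 - sparsity)))
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k + 1).values
    return (weight.abs() >= threshold).float()

layer = nn.Linear(128, 64)
mask = topk_mask(layer.weight.data, sparsity=0.95)

x = torch.randn(16, 128)
y = nn.functional.linear(x, layer.weight * mask, layer.bias)  # sparse forward pass
loss = y.pow(2).mean()
loss.backward()  # masked weights receive zero gradient through the mask product
```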
Federated Learning (FL) is extensively used to train AI/ML models in distributed and privacy-preserving settings. Participant edge devices in FL systems typically contain non-independent and identically distributed~(Non-IID) private data and unevenly distributed computational resources. Preserving user data privacy while optimizing AI/ML models in a heterogeneous federated network requires us to address data heterogeneity and system/resource heterogeneity. Hence, we propose \underline{R}esource-\underline{a}ware \underline{F}ederated \underline{L}earning~(RaFL) to address these challenges. RaFL allocates resource-aware models to edge devices using Neural Architecture Search~(NAS) and allows heterogeneous model architecture deployment by knowledge extraction and fusion. Integrating NAS into FL enables on-demand customized model deployment for resource-diverse edge devices. Furthermore, we propose a multi-model architecture fusion scheme allowing the aggregation of the distributed learning results. Results demonstrate RaFL's superior resource efficiency compared to SoTA.
Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model while keeping their training data on-device, thereby decoupling the ability to do machine learning from the need to store the data in the cloud. However, FL is difficult to implement realistically, both in terms of scale and systems heterogeneity. Although there are a number of research frameworks available to simulate FL algorithms, they do not support the study of scalable FL workloads on heterogeneous edge devices. In this paper, we present Flower, a comprehensive FL framework that distinguishes itself from existing platforms by providing new facilities to execute large-scale FL experiments and to consider richly heterogeneous FL device scenarios. Our experiments show that Flower can perform FL experiments at large client scales using only a pair of high-end GPUs. Researchers can then seamlessly migrate experiments to real devices to examine other parts of the design space. We believe Flower provides the community with a critical new tool for FL study and development.
Federated Learning (FL) is a machine learning paradigm that enables the training of a shared global model across distributed clients while keeping the training data local. While most prior work on designing systems for FL has focused on using stateful, always-running components, recent work has shown that components in an FL system can greatly benefit from the usage of serverless computing and Function-as-a-Service technologies. To this end, distributed training of models with serverless FL systems can be more resource-efficient and cheaper than conventional FL systems. However, serverless FL systems still suffer from the presence of stragglers, i.e., slow clients due to their resource and statistical heterogeneity. While several strategies have been proposed for mitigating stragglers in FL, most methodologies do not account for the particular characteristics of serverless environments, i.e., cold-starts, performance variations, and the ephemeral stateless nature of the function instances. Towards this, we propose FedLesScan, a novel clustering-based semi-asynchronous training strategy, specifically tailored for serverless FL. FedLesScan dynamically adapts to the behaviour of clients and minimizes the effect of stragglers on the overall system. We implement our strategy by extending an open-source serverless FL system called FedLess. Moreover, we comprehensively evaluate our strategy using the 2nd generation Google Cloud Functions with four datasets and varying percentages of stragglers. Results from our experiments show that, compared to other approaches, FedLesScan reduces training time and cost by an average of 8% and 20% respectively, while utilizing clients better with an average increase in the effective update ratio of 17.75%.
To preserve user privacy while enabling mobile intelligence, techniques have been proposed to train deep neural networks on decentralized data. However, training over decentralized data makes the design of the neural architecture quite difficult, and this difficulty is further amplified when designing and deploying different neural architectures for heterogeneous mobile platforms. In this work, we propose integrating automatic neural architecture search into decentralized training, a new DNN training paradigm called Federated Neural Architecture Search, namely federated NAS. To deal with the primary challenge of limited on-client computation and communication resources, we present FedNAS, a highly optimized framework for efficient federated NAS. FedNAS fully exploits the key opportunity of insufficient model candidate re-training during the architecture search process and incorporates three key optimizations: parallel candidate training on partial clients, early dropping of less promising candidates, and dynamic round numbers. Tested on large-scale datasets and typical CNN architectures, FedNAS achieves comparable model accuracy to state-of-the-art NAS algorithms that train models with centralized data, and it also reduces the client cost by up to two orders of magnitude compared to a straightforward design of federated NAS.
Efficient federated learning is one of the key challenges for training and deploying AI models on edge devices. However, maintaining data privacy in federated learning raises several challenges, including data heterogeneity, expensive communication cost, and limited resources. In this paper, we address the above issues by (a) introducing a salient parameter selection agent based on deep reinforcement learning on local clients and aggregating only the selected salient parameters on the central server, and (b) splitting a normal deep learning model (e.g., CNNs) into a shared encoder and a local predictor, training the shared encoder through federated learning while transferring its knowledge to non-IID clients through the local customized predictors. The proposed method (a) significantly reduces the communication overhead of federated learning and accelerates model inference, while method (b) addresses the data heterogeneity problem in federated learning. Additionally, we leverage a gradient control mechanism to correct the gradient heterogeneity among clients, which makes the training process more stable and converge faster. Experiments show that our approach yields a stable training process and achieves notable results compared with state-of-the-art methods. Our approach reduces the communication cost by up to 108 GB when training VGG-11 and requires 7.6x less communication overhead when training ResNet-20, while reducing up to 39.7% of the model parameters for VGG-11.
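A simplified sketch of the encoder/predictor split described in (b); the class names and layer sizes are illustrative assumptions rather than the paper's architecture. Only the shared encoder's parameters are exchanged with the server, while each client keeps its own predictor head.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):          # trained with federated aggregation
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
    def forward(self, x):
        return self.net(x)

class LocalPredictor(nn.Module):         # personalized head; never leaves the client
    def __init__(self, num_classes=10):
        super().__init__()
        self.head = nn.Linear(16, num_classes)
    def forward(self, z):
        return self.head(z)

encoder, predictor = SharedEncoder(), LocalPredictor()
logits = predictor(encoder(torch.randn(4, 3, 32, 32)))

# Only the encoder's state_dict would be uploaded and aggregated by the server.
shared_update = {k: v.clone() for k, v in encoder.state_dict().items()}
```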
As a promising privacy-preserving machine learning method, Federated Learning (FL) enables global model training across clients without compromising their confidential local data. However, existing FL methods suffer from low inference performance on unevenly distributed data, since most of them rely on Federated Averaging (FedAvg)-based aggregation. By averaging model parameters in a coarse manner, FedAvg eclipses the individual characteristics of local models, which strongly limits the inference capability of FL. Worse still, in each round of FL training, FedAvg dispatches the same initial local model to all clients, which can easily lead to a restricted search for the optimal global model. To address the above issues, this paper presents a novel and effective FL paradigm named FedMR (Federated Model Recombination). Unlike conventional FedAvg-based methods, the cloud server of FedMR shuffles each layer of the collected local models and recombines them into new models for local training on clients. Thanks to the fine-grained model recombination and local training in each FL round, FedMR can quickly figure out one globally optimal model for all clients. Comprehensive experimental results demonstrate that, compared with state-of-the-art FL methods, FedMR can significantly improve inference accuracy without incurring extra communication overhead.
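The server-side recombination step can be pictured with a small sketch, assuming state-dict-compatible client models; the plain per-layer random permutation below is used only for illustration and is not claimed to be FedMR's exact shuffling policy.

```python
import random
import torch
import torch.nn as nn

def recombine(client_states):
    """Layer-wise model recombination: for every layer, shuffle which client
    contributes that layer, producing len(client_states) new mixed models."""
    n = len(client_states)
    new_states = [dict() for _ in range(n)]
    for key in client_states[0]:
        order = list(range(n))
        random.shuffle(order)                      # independent permutation per layer
        for i, j in enumerate(order):
            new_states[i][key] = client_states[j][key].clone()
    return new_states

clients = [nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2)) for _ in range(3)]
recombined = recombine([m.state_dict() for m in clients])
clients[0].load_state_dict(recombined[0])          # dispatched back for the next round
```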
Federated learning (FL) on deep neural networks facilitates new applications at the edge, especially for wearable and Internet-of-Thing devices. Such devices capture a large and diverse amount of data, but they have memory, compute, power, and connectivity constraints which hinder their participation in FL. We propose Centaur, a multitier FL framework, enabling ultra-constrained devices to efficiently participate in FL on large neural nets. Centaur combines two major ideas: (i) a data selection scheme to choose a portion of samples that accelerates the learning, and (ii) a partition-based training algorithm that integrates both constrained and powerful devices owned by the same user. Evaluations, on four benchmark neural nets and three datasets, show that Centaur gains ~10% higher accuracy than local training on constrained devices with ~58% energy saving on average. Our experimental results also demonstrate the superior efficiency of Centaur when dealing with imbalanced data, client participation heterogeneity, and various network connection probabilities.
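A hedged sketch of the kind of data-selection idea mentioned in (i); loss-based sample selection is a common instantiation and is assumed here for illustration, not necessarily Centaur's exact criterion. Only the highest-loss fraction of samples is kept for on-device training.

```python
import torch
import torch.nn as nn

def select_samples(model, x, y, keep_fraction=0.3):
    """Rank samples by per-example loss and keep the hardest `keep_fraction` of them."""
    with torch.no_grad():
        losses = nn.functional.cross_entropy(model(x), y, reduction="none")
    k = max(1, int(keep_fraction * len(x)))
    idx = torch.topk(losses, k).indices
    return x[idx], y[idx]

model = nn.Linear(20, 5)
x, y = torch.randn(64, 20), torch.randint(0, 5, (64,))
x_sel, y_sel = select_samples(model, x, y)   # smaller batch used on the constrained device
```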
Most cross-device federated learning (FL) studies focus on the model-homogeneous setting where the global server model and local client models are identical. However, such a constraint not only excludes low-end clients who would otherwise make unique contributions to model training but also restrains clients from training large models due to on-device resource bottlenecks. In this work, we propose FedRolex, a partial training (PT)-based approach that enables model-heterogeneous FL and can train a global server model larger than the largest client model. At its core, FedRolex employs a rolling sub-model extraction scheme that allows different parts of the global server model to be evenly trained, which mitigates the client drift induced by the inconsistency between individual client models and server model architectures. We show that FedRolex outperforms state-of-the-art PT-based model-heterogeneous FL methods (e.g., Federated Dropout) and reduces the gap between model-heterogeneous and model-homogeneous FL, especially under the large-model large-dataset regime. In addition, we provide theoretical statistical analysis on its advantage over Federated Dropout and evaluate FedRolex on an emulated real-world device distribution to show that FedRolex can enhance the inclusiveness of FL and boost the performance of low-end devices that would otherwise not benefit from FL. Our code is available at https://github.com/MSU-MLSys-Lab/FedRolex.
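A toy illustration of a rolling sub-model extraction scheme in the spirit of the description above; the window arithmetic and single-layer setup are simplified assumptions. The index window advances every round so that all parts of the server model get trained evenly over time.

```python
import torch
import torch.nn as nn

def rolling_indices(round_idx: int, client_width: int, server_width: int) -> torch.Tensor:
    """Return the (wrapping) window of hidden-unit indices assigned in this round."""
    start = (round_idx * client_width) % server_width
    return torch.arange(start, start + client_width) % server_width

server_layer = nn.Linear(32, 128)                      # large server-side layer
for rnd in range(4):
    idx = rolling_indices(rnd, client_width=32, server_width=128)
    sub_weight = server_layer.weight[idx, :].clone()   # sub-model rows sent to the client
    sub_bias = server_layer.bias[idx].clone()
    # ...client trains the sub-model; its updates are written back to rows `idx`.
```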
The heterogeneity of hardware and data is a well-known and studied problem in the community of Federated Learning (FL), which runs under heterogeneous settings. Recently, custom-size client models trained with Knowledge Distillation (KD) have emerged as a viable strategy for tackling the heterogeneity challenge. However, previous efforts in this direction are aimed at client model tuning rather than their impact on the knowledge aggregation of the global model. Despite the performance of global models being the primary objective of FL systems, under heterogeneous settings client models have received more attention. Here, we provide more insights into how the chosen approach for training custom client models has an impact on the global model, which is essential for any FL application. We show that the global model can fully leverage the strength of KD with heterogeneous data. Driven by empirical observations, we further propose a new approach that combines KD and Learning without Forgetting (LwoF) to produce improved personalised models. We bring heterogeneous FL on par with the mighty FedAvg of homogeneous FL, in realistic deployment scenarios with dropping clients.
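For reference, a generic knowledge-distillation loss of the kind referred to above; this is the standard soft-target formulation, shown here as an assumed sketch of how client-to-global knowledge transfer could look rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard KD objective: soft-target KL term plus hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```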
Despite impressive results, deep learning-based technologies also raise severe privacy and environmental concerns induced by the training procedure, which is often conducted in data centers. In response, alternatives to centralized training, such as Federated Learning (FL), have emerged. Perhaps unexpectedly, FL is starting to be deployed at a global scale by companies that must adhere to new legal demands and policies originating from governments and social groups advocating for privacy protection. However, the potential environmental impact related to FL remains unclear and unexplored. This paper offers the first systematic study of the carbon footprint of FL. We then compare the carbon footprint of FL to traditional centralized learning. Our findings show that, depending on the configuration, FL can emit up to two orders of magnitude more carbon than centralized machine learning. However, in certain settings it can be comparable to centralized learning due to the reduced energy consumption of embedded devices. We conducted extensive experiments with FL across different types of datasets, settings, and various deep learning models. Finally, we highlight and connect the reported results to future challenges and trends in FL to reduce its environmental impact, including algorithmic efficiency, hardware capabilities, and stronger industry transparency.
Automated machine learning (AutoML) is an important step towards making machine learning models widely applicable for solving real-world problems. Despite many research advancements, machine learning methods are still not fully utilized by industry, mainly due to data privacy and security regulations, the high cost of storing and computing increasing amounts of data at a central location, and, most importantly, the lack of expertise. Hence, we introduce a novel framework, HANF ($\textbf{H}$yperparameter $\textbf{A}$nd $\textbf{N}$eural architecture search in $\textbf{F}$ederated learning), as a step towards building an AutoML framework for data distributed across several data-owner servers without any need to bring the data to a central location. HANF jointly optimizes the neural architecture and the non-architectural hyperparameters of the learning algorithm using gradient-based neural architecture search and an $n$-armed bandit approach, respectively, in the data-distributed setting. We show that HANF efficiently finds an optimized neural architecture and tunes the hyperparameters across data-owner servers. Additionally, HANF can be applied in both federated and non-federated settings. Empirically, we show that HANF converges towards well-suited architectures and sets of non-architectural hyperparameters on image-classification tasks.
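As an illustration of the bandit component, here is a plain epsilon-greedy $n$-armed bandit over a small learning-rate grid; the reward definition and this specific bandit variant are assumptions for exposition, not necessarily what HANF uses.

```python
import random

class EpsilonGreedyBandit:
    """Each arm is a candidate hyperparameter setting; the reward could be, e.g., validation accuracy."""
    def __init__(self, arms, epsilon=0.1):
        self.arms, self.epsilon = arms, epsilon
        self.counts = [0] * len(arms)
        self.values = [0.0] * len(arms)

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.arms))
        return max(range(len(self.arms)), key=lambda i: self.values[i])

    def update(self, i, reward):
        self.counts[i] += 1
        self.values[i] += (reward - self.values[i]) / self.counts[i]   # running mean

bandit = EpsilonGreedyBandit(arms=[1e-1, 1e-2, 1e-3])    # candidate learning rates
for _ in range(20):
    i = bandit.select()
    reward = random.random()             # placeholder for validation accuracy after a round
    bandit.update(i, reward)
best = max(range(len(bandit.arms)), key=lambda i: bandit.values[i])
print("best learning rate:", bandit.arms[best])
```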
Federated learning (FL) is a method to train a model with distributed data from numerous participants such as IoT devices. It inherently assumes a uniform capacity among participants. However, participants have diverse computational resources in practice due to different conditions such as different energy budgets or executing parallel unrelated tasks. It is necessary to reduce the computation overhead for participants with insufficient computational resources, otherwise they would be unable to finish the full training process. To address the computation heterogeneity, in this paper we propose a strategy for estimating local models without computationally intensive iterations. Based on it, we propose Computationally Customized Federated Learning (CCFL), which allows each participant to determine whether to perform conventional local training or model estimation in each round based on its current computational resources. Both theoretical analysis and exhaustive experiments indicate that CCFL has the same convergence rate as FedAvg without resource constraints. Furthermore, CCFL can be viewed as a computation-efficient extension of FedAvg that retains model performance while considerably reducing computation overhead.
Federated learning (FL) has been proposed as a privacy-preserving approach in distributed machine learning. A federated learning architecture consists of a central server and a number of clients that have access to private, potentially sensitive data. Clients are able to keep their data in their local machines and only share their locally trained model's parameters with a central server that manages the collaborative learning process. FL has delivered promising results in real-life scenarios, such as healthcare, energy, and finance. However, when the number of participating clients is large, the overhead of managing the clients slows down the learning. Thus, client selection has been introduced as a strategy to limit the number of communicating parties at every step of the process. Since the early na\"{i}ve random selection of clients, several client selection methods have been proposed in the literature. Unfortunately, given that this is an emergent field, there is a lack of a taxonomy of client selection methods, making it hard to compare approaches. In this paper, we propose a taxonomy of client selection in Federated Learning that enables us to shed light on current progress in the field and identify potential areas of future research in this promising area of machine learning.
An underlying assumption of recent federated learning (FL) paradigms is that local models usually share the same network architecture as the global model, which becomes impractical for mobile and IoT devices with different setups of hardware and infrastructure. A scalable federated learning framework should address heterogeneous clients equipped with different computation and communication capabilities. To this end, this paper proposes FedHM, a novel federated model compression framework that distributes heterogeneous low-rank models to clients and then aggregates them into a global full-rank model. Our solution enables the training of heterogeneous local models with varying computational complexities while aggregating a single global model. Furthermore, FedHM not only reduces the computational complexity on the devices but also reduces the communication cost by using low-rank models. Extensive experimental results show that our proposed FedHM outperforms the current pruning-based FL approaches in terms of test top-1 accuracy (4.6% accuracy gain on average), with smaller model sizes (1.5x smaller on average), under various heterogeneous FL settings.
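To make the low-rank idea concrete, here is a minimal sketch based on a truncated SVD of a dense layer's weight; the rank choice and reconstruction step are illustrative assumptions rather than FedHM's exact procedure. A full-rank weight W is approximated by two thin factors, which are cheaper to store, train, and transmit.

```python
import torch
import torch.nn as nn

def factorize(weight: torch.Tensor, rank: int):
    """Approximate W (out x in) with U_r @ V_r, where U_r is (out x rank) and V_r is (rank x in)."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]          # fold the singular values into the left factor
    V_r = Vh[:rank, :]
    return U_r, V_r

layer = nn.Linear(256, 256)
U_r, V_r = factorize(layer.weight.data, rank=32)   # low-rank factors sent to a weak client
reconstructed = U_r @ V_r                          # server-side full-rank aggregation target
print("approximation error:", torch.norm(layer.weight.data - reconstructed).item())
```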