Federated learning (FL) is a collaborative machine learning framework that requires different clients (e.g., Internet of Things devices) to participate in the model training process by training and uploading their local models to an FL server in each global iteration. Upon receiving the local models from all the clients, the FL server generates a global model by aggregating the received local models. This traditional FL process may suffer from the straggler problem in heterogeneous client settings, where the FL server has to wait for slow clients to upload their local models in each global iteration, thus increasing the overall training time. One solution is to set a deadline so that only the clients that can upload their local models before the deadline are selected for the FL process. However, this may lead to a slow convergence rate and global model overfitting due to the limited client selection. In this paper, we propose the Latency awarE Semi-synchronous client Selection and mOdel aggregation for federated learNing (LESSON) method, which allows all the clients to participate in the whole FL process but with different frequencies. That is, faster clients are scheduled to upload their models more frequently than slower clients, thus resolving the straggler problem and accelerating convergence while avoiding model overfitting. LESSON is also capable of adjusting the tradeoff between model accuracy and convergence rate by varying the deadline. Extensive simulations have been conducted to compare the performance of LESSON with two baseline methods, i.e., FedAvg and FedCS. The simulation results demonstrate that LESSON achieves a faster convergence speed than FedAvg and FedCS, and higher model accuracy than FedCS.
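To make the semi-synchronous scheduling idea concrete, here is a minimal sketch, assuming clients are grouped into latency tiers relative to a fixed deadline and tier-k clients upload every k-th global round; the functions `assign_tiers` and `participants` and the grouping rule are illustrative, not the exact LESSON algorithm.

```python
import numpy as np

def assign_tiers(latencies, deadline):
    """Group clients into tiers: a tier-k client needs up to k deadlines
    to finish one local round, so it uploads every k-th global round."""
    return {cid: int(np.ceil(lat / deadline)) for cid, lat in latencies.items()}

def participants(tiers, global_round):
    """Clients whose tier divides the current round index upload now."""
    return [cid for cid, k in tiers.items() if global_round % k == 0]

# toy usage: 10-second deadline; per-round client latencies in seconds
latencies = {0: 4.0, 1: 9.5, 2: 18.0, 3: 31.0}
tiers = assign_tiers(latencies, deadline=10.0)
for r in range(1, 7):
    print(r, participants(tiers, r))
```

Fast clients (tier 1) appear in every round, while the slowest client only appears every fourth round, so no round ever blocks on it.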
Federated learning (FL) is a distributed machine learning framework that alleviates data silos, in which decentralized clients collaboratively learn a global model without sharing their private data. However, clients' non-independent and identically distributed (non-IID) data negatively affect the trained model, and clients with different numbers of local updates may cause significant gaps among the local gradients in each communication round. In this paper, we propose a Federated Vectorized Averaging (FedVeca) method to address the above non-IID data problem. Specifically, we set a novel objective for the global model that is related to the local gradients. A local gradient is defined as a bidirectional vector with a step size and a direction, where the step size is the number of local updates and the direction is divided into positive and negative according to our definition. In FedVeca, the direction is affected by the step size, so we average the bidirectional vectors to reduce the effect of different step sizes. We then theoretically analyze the relationship between the step sizes and the global objective, and obtain an upper bound on the step sizes in each communication round. Based on this upper bound, we design an algorithm for the server and the clients to adaptively adjust the step sizes so that the objective approaches its optimum. Finally, we conduct experiments on different datasets, models, and scenarios by building a prototype system, and the experimental results demonstrate the effectiveness and efficiency of the FedVeca method.
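A minimal sketch of the averaging idea described above, under the assumption that each local update is normalized by its number of local steps before averaging, so clients with many local updates do not dominate the aggregate; `fedveca_aggregate` is a hypothetical helper, not the paper's exact update rule.

```python
import numpy as np

def fedveca_aggregate(global_w, client_deltas, client_steps):
    """Average per-client update *directions* after normalizing out the
    number of local steps, then re-scale by a common effective step size
    (an illustrative reading of bidirectional-vector averaging)."""
    unit_updates = [d / s for d, s in zip(client_deltas, client_steps)]
    mean_step = np.mean(client_steps)  # common effective step size
    return global_w + mean_step * np.mean(unit_updates, axis=0)

w = np.zeros(3)
deltas = [np.array([0.4, -0.2, 0.1]), np.array([1.5, -0.9, 0.6])]
steps = [2, 10]  # heterogeneous numbers of local updates
print(fedveca_aggregate(w, deltas, steps))
```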
Federated learning (FL) is capable of performing large distributed machine learning tasks across multiple edge users by periodically aggregating trained local parameters. To address key challenges of enabling FL over a wireless fog-cloud system (e.g., non-IID data, user heterogeneity), we first propose an efficient FL algorithm based on federated averaging (called FedFog) to perform local aggregation of gradient parameters at fog servers and global training updates in the cloud. Next, we employ FedFog in wireless fog-cloud systems by investigating a novel network-aware FL optimization problem that strikes a balance between the global loss and the completion time. An iterative algorithm is then developed to obtain a precise measurement of the system performance, which helps to design an efficient stopping criterion that outputs an appropriate number of global rounds. To mitigate the straggler effect, we propose a flexible user aggregation strategy that first trains fast users to reach a certain level of accuracy before allowing slow users to join the global training updates. Extensive numerical results using several real-world FL tasks are provided to verify the theoretical convergence of FedFog. We also show that the proposed co-design of FL and communication is essential to substantially improve resource utilization while achieving comparable accuracy of the learning model.
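The two-level fog-cloud aggregation could look roughly as follows, assuming data-size-weighted averaging at both levels; `fog_aggregate` and `cloud_aggregate` are illustrative names and the weighting scheme is an assumption, not FedFog's exact rule.

```python
import numpy as np

def fog_aggregate(client_updates, client_sizes):
    """Fog server: data-size-weighted average of its clients' parameters."""
    total = sum(client_sizes)
    model = sum(w * (n / total) for w, n in zip(client_updates, client_sizes))
    return model, total

def cloud_aggregate(fog_models):
    """Cloud: weight each fog aggregate by the data volume it represents."""
    total = sum(n for _, n in fog_models)
    return sum(w * (n / total) for w, n in fog_models)

# two fog servers over toy 2-parameter models
fog_a = fog_aggregate([np.array([1.0, 0.0]), np.array([0.0, 1.0])], [100, 300])
fog_b = fog_aggregate([np.array([2.0, 2.0])], [400])
print(cloud_aggregate([fog_a, fog_b]))
```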
Recently, blockchain-based federated learning (BFL) has attracted intensive research attention because the training process is auditable and its serverless architecture avoids the single point of failure at the parameter server in vanilla federated learning (VFL). However, BFL dramatically increases the communication traffic volume, because all the local model updates (i.e., changes of model parameters) obtained by BFL clients are transmitted to all miners for verification and to all clients for aggregation. In contrast, the parameter server and the clients in VFL only retain aggregated model updates. Consequently, the huge communication traffic of BFL inevitably impairs training efficiency and hinders its realistic deployment. To improve the practicality of BFL, we are among the first to propose a fast blockchain-based communication-efficient federated learning framework that compresses the communications in BFL, called BCFL. Meanwhile, we derive the convergence rate of BCFL with non-convex loss. To maximize the accuracy of the final model, we further formulate the problem of minimizing the training loss implied by the convergence rate subject to a limited training time, with respect to the compression rate and the block generation rate, which is a bi-convex optimization problem that can be solved efficiently. Finally, to demonstrate the efficiency of BCFL, we conduct extensive experiments on the standard CIFAR-10 and FEMNIST datasets. Our experimental results not only verify the correctness of our analysis, but also show that BCFL can reduce the communication traffic by 95-98% or shorten the training time by 90-95% compared with BFL.
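The abstract does not specify the compressor BCFL uses, so the sketch below uses top-k sparsification as one plausible stand-in for compressed model updates; keeping only a small fraction of coordinates matches the reported ~95%+ traffic reduction in spirit.

```python
import numpy as np

def compress_topk(update, rate):
    """Keep only the largest-magnitude fraction `rate` of coordinates;
    transmit (indices, values) instead of the dense update."""
    k = max(1, int(rate * update.size))
    idx = np.argsort(np.abs(update))[-k:]
    return idx, update[idx]

def decompress(idx, vals, size):
    out = np.zeros(size)
    out[idx] = vals
    return out

rng = np.random.default_rng(0)
u = rng.normal(size=1000)
idx, vals = compress_topk(u, rate=0.03)  # ~97% fewer values on the wire
print(np.linalg.norm(u - decompress(idx, vals, u.size)) / np.linalg.norm(u))
```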
Federated learning (FL) allows multiple clients to cooperatively train models without disclosing their local data. However, existing works fail to jointly address several practical concerns in FL: limited communication resources, dynamic network conditions, and heterogeneous client properties, all of which slow down FL convergence. To tackle the above challenges, we propose a heterogeneity-aware FL framework, called FedCG, with adaptive client selection and gradient compression. Specifically, the parameter server (PS) selects a representative client subset considering statistical heterogeneity and sends the global model to them. After local training, these selected clients upload compressed model updates matching their capabilities to the PS for aggregation, which significantly alleviates the communication load and mitigates the straggler effect. We theoretically analyze the impact of both client selection and gradient compression on convergence performance. Guided by the derived convergence rate, we develop an iteration-based algorithm to jointly optimize client selection and compression ratio decisions using submodular maximization and linear programming. Extensive experiments on both real-world prototypes and simulations show that FedCG can provide up to 5.3$\times$ speedup compared with other methods.
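As a hedged illustration of the selection step, the sketch below greedily maximizes a toy monotone submodular objective (label-class coverage) to pick a representative client subset; the coverage function and greedy routine are assumptions for illustration, not FedCG's actual formulation.

```python
def greedy_select(candidates, k, coverage):
    """Greedily pick k clients maximizing a monotone submodular coverage
    score (here: number of distinct label classes covered); the classic
    greedy rule gives a (1 - 1/e) approximation for such objectives."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: len(covered | coverage[c]))
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# toy statistical profile: label classes held by each client
coverage = {0: {0, 1}, 1: {1, 2}, 2: {3}, 3: {0, 1, 2}}
print(greedy_select(list(coverage), k=2, coverage=coverage))
```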
The uneven distribution of local data across different edge devices (clients) leads to slow model training and reduced accuracy in federated learning. Naive federated learning (FL) strategies and most alternative solutions attempt to achieve more fairness by weighting the deep learning models across clients. This work introduces a novel non-IID type encountered in real-world datasets, namely cluster-skew, in which groups of clients hold local data with similar distributions, causing the global model to converge to an overfitted solution. To deal with non-IID data, and cluster-skewed data in particular, we propose FedDrl, a novel FL model that employs deep reinforcement learning to adaptively determine each client's impact factor (which is used as a weight in the aggregation process). Extensive experiments on a suite of federated datasets confirm that the proposed FedDrl achieves favorable improvements over the FedAvg and FedProx methods, e.g., by up to 4.05% and 2.17% on average on the CIFAR-100 dataset, respectively.
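A minimal sketch of the aggregation step only, assuming the DRL policy's outputs are reduced to per-client logits that are softmax-normalized into aggregation weights; the policy itself is omitted and the logits below are placeholders, not FedDrl's learned values.

```python
import numpy as np

def weighted_aggregate(client_models, impact_logits):
    """Aggregate client models with softmax-normalized impact factors;
    in FedDrl these factors would come from a DRL policy."""
    w = np.exp(impact_logits - np.max(impact_logits))
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, client_models))

models = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
print(weighted_aggregate(models, impact_logits=np.array([0.2, 1.5, -0.3])))
```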
Federated learning (FL) is an effective distributed machine learning paradigm that exploits private datasets in a privacy-preserving manner. The main challenges of FL are that end devices usually possess diverse computation and communication capabilities, and that their training data are not independent and identically distributed (non-IID). Due to the limited communication bandwidth and unstable availability of such devices in mobile networks, only a subset of end devices (also referred to as participants or clients) can be selected in each round. Therefore, it is of paramount importance to employ an efficient participant selection scheme that maximizes the performance of FL, including the accuracy of the final model and the training time. In this paper, we review the participant selection techniques for FL. First, we introduce FL and highlight the main challenges during participant selection. Then, we review the existing studies and categorize them based on their solutions. Finally, based on our analysis of the state of the art in this topic area, we provide some future directions for participant selection in FL.
Federated learning (FL) is a method to train a model with distributed data from numerous participants such as IoT devices. It inherently assumes uniform capacity among participants. However, in practice, participants have diverse computational resources due to different conditions such as differing energy budgets or concurrently executing unrelated tasks. It is necessary to reduce the computation overhead for participants with limited computational resources; otherwise, they would be unable to finish the full training process. To address this computation heterogeneity, we propose a strategy for estimating local models without computationally intensive iterations. Based on it, we propose Computationally Customized Federated Learning (CCFL), which allows each participant to determine whether to perform conventional local training or model estimation in each round based on its current computational resources. Both theoretical analysis and extensive experiments indicate that CCFL has the same convergence rate as FedAvg without resource constraints. Furthermore, CCFL can be viewed as a computation-efficient extension of FedAvg that retains model performance while considerably reducing computation overhead.
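The per-round decision CCFL describes might be organized as below, where each participant picks conventional training or the cheap estimation path based on its current budget; the threshold rule and budget units are assumptions, and the estimation step itself (the paper's core contribution) is stubbed out.

```python
def plan_round(clients, budget_threshold):
    """Each participant either runs full local SGD or a cheap model
    estimate, depending on its current compute budget; the estimation
    rule itself is deliberately left abstract here."""
    plan = {}
    for cid, budget in clients.items():
        plan[cid] = "local_training" if budget >= budget_threshold else "estimate"
    return plan

# budgets in arbitrary compute-budget units
print(plan_round({0: 9.0, 1: 1.5, 2: 4.2}, budget_threshold=3.0))
```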
Federated learning (FL) is an emerging technique for collaboratively training a global machine learning model while keeping the data localized on user devices. The major obstacle to a practical FL deployment is the non-independent and identically distributed (non-IID) data distribution across users, which slows down convergence and degrades performance. To tackle this fundamental issue, we propose a method (ComFed) that enhances the whole training process on both the client and server sides. The key idea of ComFed is to simultaneously utilize client-side variance reduction techniques to facilitate server aggregation and global adaptive update techniques to accelerate learning. Our experiments on the CIFAR-10 classification task show that ComFed can improve upon state-of-the-art algorithms dedicated to non-IID data.
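One common instantiation of "global adaptive updates" is an Adam-style server step applied to the averaged client update, sketched below under that assumption; ComFed's actual server rule and its client-side variance reduction may well differ.

```python
import numpy as np

def server_adaptive_update(w, avg_update, m, v, lr=0.1, b1=0.9, b2=0.99, eps=1e-3):
    """Adam-style server step on the averaged client update: keep
    first/second moment estimates and take a preconditioned step."""
    m = b1 * m + (1 - b1) * avg_update
    v = b2 * v + (1 - b2) * avg_update**2
    return w + lr * m / (np.sqrt(v) + eps), m, v

w, m, v = np.zeros(2), np.zeros(2), np.zeros(2)
for avg in [np.array([0.5, -0.2]), np.array([0.4, -0.1])]:
    w, m, v = server_adaptive_update(w, avg, m, v)
print(w)
```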
As data generation increasingly takes place on devices without a wired connection, machine learning (ML) related traffic will become ubiquitous in wireless networks. Many studies have shown that traditional wireless protocols are highly inefficient or unsustainable for supporting ML, which creates the need for new wireless communication methods. In this survey, we give an exhaustive review of the state-of-the-art wireless methods that are specifically designed to support ML services over distributed datasets. Currently, there are two clear themes in the literature: analog over-the-air computation and digital radio resource management optimized for ML. This survey gives a comprehensive introduction to these methods, reviews the most important works, highlights open problems, and discusses application scenarios.
Limited communication resources, e.g., bandwidth and energy, and data heterogeneity across devices are two of the main bottlenecks of federated learning (FL). To tackle these challenges, we first devise a novel FL framework with partial model aggregation (PMA), which only aggregates the lower layers of the neural network that are responsible for feature extraction, while the upper layers corresponding to complex pattern recognition remain at the devices for personalization. The proposed PMA-FL is able to address data heterogeneity and reduce the information transmitted over wireless channels. We then obtain a convergence bound for the framework under a non-convex loss function setting. With the help of this bound, we define a new objective function, named the scheduled data sample volume, to transform the original inexplicit optimization problem into a tractable one for device scheduling, bandwidth allocation, and computation and communication time division. Our analysis reveals that the optimal time division is achieved when the communication and computation parts of PMA-FL consume equal power. We also develop a bisection-based method to solve for the optimal bandwidth allocation policy and use a set expansion algorithm to address the optimal device scheduling. Compared with the state-of-the-art benchmarks, the proposed PMA-FL improves accuracy by 2.72% and 11.6% on two typical heterogeneous datasets, i.e., MNIST and CIFAR-10, respectively. In addition, the proposed joint dynamic device scheduling and resource optimization approach achieves slightly higher accuracy than the considered benchmarks while providing satisfactory energy and time reduction: 29% energy or 20% time reduction on MNIST, and 25% energy or 12.5% time reduction on CIFAR-10.
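A minimal sketch of partial model aggregation, assuming a dict-of-arrays model in which only the named lower layers are averaged and broadcast back, while each device keeps its own head; the layer names are illustrative.

```python
import numpy as np

LOWER = ["conv1.w", "conv2.w"]  # shared feature-extraction layers
UPPER = ["head.w"]              # personalized classifier head (kept local)

def partial_aggregate(client_models):
    """Average only the lower (feature-extraction) layers; each client
    retains its own upper layers for personalization."""
    shared = {k: np.mean([m[k] for m in client_models], axis=0) for k in LOWER}
    for m in client_models:
        m.update(shared)  # broadcast shared layers back to every client
    return client_models

clients = [{"conv1.w": np.ones(2), "conv2.w": np.zeros(2), "head.w": np.full(2, i)}
           for i in range(3)]
print(partial_aggregate(clients)[0])
```

Only the `LOWER` entries travel over the wireless channel, which is where the communication savings come from.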
Federated edge learning (FEEL) has attracted much attention as a privacy-preserving paradigm that effectively incorporates the data distributed at the network edge for training deep learning models. However, the limited coverage of a single edge server results in an insufficient number of participating client nodes, which may impair the learning performance. In this paper, we investigate a novel FEEL framework, namely semi-decentralized federated edge learning (SD-FEEL), where multiple edge servers are employed to collectively coordinate a large number of client nodes. By exploiting the low-latency communication among edge servers for efficient model sharing, SD-FEEL can incorporate more training data while enjoying much lower latency compared with conventional federated learning. We detail the training algorithm of SD-FEEL with its three main steps, including local model update, intra-cluster model aggregation, and inter-cluster model aggregation. The convergence of this algorithm is proved on non-independent and identically distributed (non-IID) data, which also helps to reveal the effects of key parameters on the training efficiency and provides practical design guidelines. Meanwhile, the heterogeneity of edge devices may cause the straggler effect and deteriorate the convergence speed of SD-FEEL. To address this issue, we propose an asynchronous training algorithm with a staleness-aware aggregation scheme for SD-FEEL, whose convergence performance is also analyzed. Simulation results demonstrate the effectiveness and efficiency of the proposed algorithms for SD-FEEL and corroborate our analysis.
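The intra- and inter-cluster aggregation steps can be sketched as follows, assuming plain averaging at each edge server followed by a gossip-style mixing step with a doubly stochastic matrix over the edge-server graph; the topology and mixing weights are illustrative.

```python
import numpy as np

def intra_cluster(client_models):
    """Each edge server averages the models of its own clients."""
    return np.mean(client_models, axis=0)

def inter_cluster(edge_models, mixing):
    """Edge servers mix their aggregates with neighbors using a doubly
    stochastic matrix (one gossip step over the edge-server graph)."""
    return mixing @ np.stack(edge_models)

edges = [intra_cluster([np.ones(2), np.zeros(2)]), intra_cluster([np.full(2, 4.0)])]
mixing = np.array([[0.75, 0.25], [0.25, 0.75]])  # illustrative 2-server topology
print(inter_cluster(edges, mixing))
```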
We envision a mobile edge computing (MEC) framework for machine learning (ML) technologies, which leverages distributed client data and computation resources for training high-performance ML models while preserving client privacy. Toward this future goal, this work aims to extend Federated Learning (FL), a decentralized learning framework that enables privacy-preserving training of models, to work with heterogeneous clients in a practical cellular network. The FL protocol iteratively asks random clients to download a trainable model from a server, update it with their own data, and upload the updated model to the server, while asking the server to aggregate multiple client updates to further improve the model. While clients in this protocol are free from disclosing their own private data, the overall training process can become inefficient when some clients have limited computational resources (i.e., requiring longer update time) or are under poor wireless channel conditions (longer upload time). Our new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions. Specifically, FedCS solves a client selection problem with resource constraints, which allows the server to aggregate as many client updates as possible and to accelerate performance improvement in ML models. We conducted an experimental evaluation using publicly available large-scale image datasets to train deep neural networks in MEC environment simulations. The experimental results show that FedCS is able to complete its training process in a significantly shorter time compared to the original FL protocol.
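A simplified reading of the FedCS selection step: greedily admit clients in order of estimated update-and-upload time while the round still fits the deadline. The real formulation also models bandwidth shared among uploading clients, which this sketch omits.

```python
def fedcs_select(est_times, deadline):
    """Greedy deadline-constrained selection in the spirit of FedCS:
    take the fastest clients first until the round budget is spent."""
    chosen, elapsed = [], 0.0
    for cid, t in sorted(est_times.items(), key=lambda kv: kv[1]):
        if elapsed + t <= deadline:
            chosen.append(cid)
            elapsed += t
    return chosen

# estimated per-client update+upload times in seconds
print(fedcs_select({0: 3.0, 1: 8.0, 2: 2.5, 3: 6.0}, deadline=12.0))
```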
Federated learning (FL) enables multiple devices to collaboratively learn a global model without sharing their personal data. In real-world applications, the different parties may have heterogeneous data distributions and limited communication bandwidth. In this paper, we are interested in improving the communication efficiency of FL systems. We investigate and design a device selection strategy based on the importance of the gradient norms. In particular, our approach selects, in each communication round, the devices with the highest norms of gradient values. We study the convergence and the performance of this selection technique and compare it with existing ones. We perform several experiments in a non-IID setting. The results show the convergence of our method with considerably higher test accuracy compared with random selection.
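The selection rule itself is simple enough to state in a few lines; the sketch below assumes the server has access to (or estimates of) each device's local gradient for the current round.

```python
import numpy as np

def select_by_norm(client_grads, k):
    """Pick the k devices whose local gradients have the largest norms."""
    norms = {cid: np.linalg.norm(g) for cid, g in client_grads.items()}
    return sorted(norms, key=norms.get, reverse=True)[:k]

rng = np.random.default_rng(1)
# toy gradients: higher client id -> larger typical magnitude
grads = {cid: rng.normal(scale=cid + 1, size=10) for cid in range(4)}
print(select_by_norm(grads, k=2))
```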
Federated learning (FL) and split learning (SL) are two emerging collaborative learning methods that may greatly facilitate ubiquitous intelligence in the Internet of Things (IoT). Federated learning enables machine learning (ML) models locally trained using private data to be aggregated into a global model. Split learning allows different portions of an ML model to be collaboratively trained on different workers in a learning framework. Federated learning and split learning each have unique advantages and respective limitations, and they may complement each other toward ubiquitous intelligence in the IoT. Therefore, the combination of federated learning and split learning has recently become an active research area attracting extensive interest. In this article, we review the latest developments in federated learning and split learning and present a survey of the state-of-the-art technologies for combining these two learning methods in edge-computing-based IoT environments. We also identify some open problems and discuss possible directions for future research in this area, with the hope of further arousing the research community's interest in this emerging field.
Federated learning (FL) is a new artificial intelligence concept that enables Internet of Things (IoT) devices to learn a collaborative model without sending raw data to centralized nodes for processing. Despite its numerous advantages, the low computing resources at IoT devices and the high communication costs of exchanging model parameters make FL applications in massive IoT networks very limited. In this work, we develop a novel compression scheme for FL, called high-compression federated learning (HCFL), for very large-scale IoT networks. HCFL can reduce the data load of the FL processes without changing their structure and hyperparameters. In this way, we not only significantly reduce communication costs, but also make intensive learning processes more adaptable to low-computing-resource IoT devices. Furthermore, we investigate the relationship between the number of IoT devices and the convergence level of the FL model, and thereby better assess the quality of the FL process. We demonstrate our HCFL scheme in both simulations and mathematical analyses. Our proposed theoretical research can serve as a minimum satisfaction level, proving that the FL process can achieve good performance when a certain configuration is met. Therefore, we show that HCFL is applicable to any FL-integrated network with numerous IoT devices.
Personalized federated learning (PFL) is a new federated learning (FL) approach that addresses the heterogeneity of the datasets generated by distributed user equipments (UEs). However, most existing PFL implementations rely on synchronous training to ensure good convergence performance, which may lead to a serious straggler problem, where the training time is heavily prolonged by the slowest UE. To address this issue, we propose a semi-synchronous PFL algorithm over mobile edge networks, termed semi-synchronous personalized FederatedAveraging (PerFedS$^2$). By jointly optimizing the wireless bandwidth allocation and the UE scheduling policy, it not only mitigates the straggler problem but also provides a convergent training loss guarantee. We derive an upper bound of the convergence rate of PerFedS$^2$ in terms of the number of participants per global round and the number of rounds. On this basis, the bandwidth allocation problem can be solved analytically, and the UE scheduling policy can be obtained by a greedy algorithm. Experimental results verify the effectiveness of PerFedS$^2$ in saving training time and guaranteeing the convergence of the training loss, in contrast to synchronous and asynchronous PFL algorithms.
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework, where many clients collaboratively train a machine learning model by exchanging model updates with a parameter server instead of sharing their raw data. Nevertheless, FL training suffers from slow convergence and unstable performance due to stragglers caused by the heterogeneous computational resources of clients and fluctuating communication rates. This paper proposes a coded FL framework, namely stochastic coded federated learning (SCFL), to mitigate the straggler issue. In this framework, each client generates a privacy-preserving coded dataset by adding additive noise to a random linear combination of its local data. The server collects the coded datasets from all the clients to construct a composite dataset, which helps to compensate for the straggler effect. In the training process, the server and the clients perform mini-batch stochastic gradient descent (SGD), and the server adds a make-up term to the model aggregation to obtain unbiased gradient estimates. We characterize the privacy guarantee via mutual information differential privacy (MI-DP) and analyze the convergence performance in federated learning. Besides, we demonstrate the privacy-performance tradeoff of the proposed SCFL method by analyzing the influence of the privacy constraint on the convergence rate. Finally, numerical experiments corroborate our analysis and show the benefits of SCFL in achieving fast convergence while preserving data privacy.
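A minimal sketch of the coded dataset construction, assuming each coded sample is a Gaussian random linear combination of the local samples plus additive noise; the generator scaling and noise model are assumptions, and the make-up term in aggregation is not shown.

```python
import numpy as np

def encode_dataset(X, y, n_coded, noise_std, rng):
    """Each coded sample mixes all local samples via a random matrix G
    and adds Gaussian noise, masking individual records while keeping
    aggregate statistics usable for gradient computation."""
    G = rng.normal(size=(n_coded, X.shape[0])) / np.sqrt(X.shape[0])
    Xc = G @ X + noise_std * rng.normal(size=(n_coded, X.shape[1]))
    yc = G @ y + noise_std * rng.normal(size=n_coded)
    return Xc, yc

rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 5)), rng.normal(size=50)
Xc, yc = encode_dataset(X, y, n_coded=10, noise_std=0.1, rng=rng)
print(Xc.shape, yc.shape)  # the server trains on (Xc, yc), never on (X, y)
```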
In recent years, mobile devices have been equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
Federated Learning (FL) is a machine learning paradigm that enables the training of a shared global model across distributed clients while keeping the training data local. While most prior work on designing systems for FL has focused on using stateful, always-running components, recent work has shown that components in an FL system can greatly benefit from the use of serverless computing and Function-as-a-Service technologies. To this end, distributed training of models with serverless FL systems can be more resource-efficient and cheaper than conventional FL systems. However, serverless FL systems still suffer from the presence of stragglers, i.e., slow clients, due to their resource and statistical heterogeneity. While several strategies have been proposed for mitigating stragglers in FL, most methodologies do not account for the particular characteristics of serverless environments, i.e., cold starts, performance variations, and the ephemeral stateless nature of the function instances. Towards this, we propose FedLesScan, a novel clustering-based semi-asynchronous training strategy specifically tailored for serverless FL. FedLesScan dynamically adapts to the behaviour of clients and minimizes the effect of stragglers on the overall system. We implement our strategy by extending an open-source serverless FL system called FedLess. Moreover, we comprehensively evaluate our strategy using the 2nd-generation Google Cloud Functions with four datasets and varying percentages of stragglers. Results from our experiments show that, compared to other approaches, FedLesScan reduces training time and cost by an average of 8% and 20%, respectively, while utilizing clients better with an average increase in the effective update ratio of 17.75%.
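A hedged sketch of the clustering idea: bucket clients into tiers by their observed invocation times so scheduling can draw from homogeneous tiers; FedLesScan's actual clustering features and scheduling policy are richer than this toy version.

```python
import numpy as np

def cluster_clients(round_durations, n_clusters=3):
    """Bucket clients into tiers by mean observed invocation time; a
    semi-asynchronous scheduler can then fill rounds tier by tier so
    fast and slow function instances are not mixed in one round."""
    mean_t = {cid: np.mean(t) for cid, t in round_durations.items()}
    order = sorted(mean_t, key=mean_t.get)
    return [tier.tolist() for tier in np.array_split(order, n_clusters)]

# observed per-round durations (seconds) per client
durations = {0: [2.1, 2.3], 1: [9.0, 11.2], 2: [4.0, 3.8],
             3: [2.5, 2.2], 4: [15.0, 14.1], 5: [5.1, 4.9]}
print(cluster_clients(durations))
```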