Recently, local peer topology has been shown to influence the overall convergence of decentralized learning (DL) graphs in the presence of data heterogeneity. In this paper, we demonstrate the advantages of constructing a proxy-based, locally heterogeneous DL topology that enhances convergence while maintaining data privacy. In particular, we propose a novel peer clumping strategy for efficiently clustering peers before arranging them in a final training graph. By showing how locally heterogeneous graphs outperform locally homogeneous graphs of similar size drawn from the same global data distribution, we present a strong case for topological pre-processing. Moreover, we demonstrate the scalability of our approach by showing that the overhead of the proposed topological pre-processing remains small in large graphs while the performance gains become even more pronounced. Finally, we show the robustness of our approach in the presence of network partitions.
The convergence speed of machine learning models trained with federated learning is significantly affected by heterogeneous data partitions, even in fully decentralized settings without a central server. In this paper, we show that the impact of label distribution skew, an important type of data heterogeneity, can be significantly reduced by carefully designing the underlying communication topology. We present D-Cliques, a novel topology that reduces gradient bias by grouping nodes into sparsely interconnected cliques such that the label distribution within each clique is representative of the global label distribution. We also show how to adapt the updates of decentralized SGD to obtain unbiased gradients and implement effective momentum with D-Cliques. Our extensive empirical evaluation on MNIST and CIFAR10 demonstrates that our approach provides convergence speed similar to that of a fully connected topology, which offers the best convergence in data-heterogeneous settings, while requiring significantly fewer edges and messages. In a 1000-node topology, D-Cliques require 98% fewer edges and 96% fewer total messages, with further gains possible by using a small-world topology across cliques.
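To make the clique-construction idea concrete, here is a minimal sketch of a greedy grouping step in the spirit of D-Cliques; the L1 distance criterion, the `clique_size` parameter, and the `build_cliques` helper are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def build_cliques(label_hists, clique_size, seed=None):
    """Greedily group nodes into cliques whose pooled label histogram
    stays close to the global label distribution (D-Cliques-style).

    label_hists: (num_nodes, num_classes) array of per-node label counts.
    """
    rng = np.random.default_rng(seed)
    global_dist = label_hists.sum(0) / label_hists.sum()
    unassigned = list(rng.permutation(len(label_hists)))
    cliques = []
    while unassigned:
        clique = [unassigned.pop()]
        while unassigned and len(clique) < clique_size:
            def gap(node):
                # L1 gap between the clique's pooled distribution and global.
                counts = label_hists[clique + [node]].sum(0)
                return np.abs(counts / counts.sum() - global_dist).sum()
            best = min(unassigned, key=gap)  # node that best balances the clique
            unassigned.remove(best)
            clique.append(best)
        cliques.append(clique)
    return cliques
```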
Federated Learning (FL) has become a key choice for distributed machine learning. Initially focused on centralized aggregation, recent works in FL have emphasized greater decentralization to adapt to the highly heterogeneous network edge. Among these, Hierarchical, Device-to-Device, and Gossip Federated Learning (HFL, D2DFL, and GFL respectively) can be considered foundational FL algorithms employing fundamental aggregation strategies. A number of FL algorithms were subsequently proposed that jointly employ multiple fundamental aggregation schemes. Existing research, however, subjects the FL algorithms to varied conditions and gauges their performance mainly against Federated Averaging (FedAvg) alone. This work consolidates the FL landscape and offers an objective analysis of the major FL algorithms through a comprehensive cross-evaluation across a wide range of operating conditions. In addition to the three foundational FL algorithms, this work also analyzes six derived algorithms. To enable a uniform assessment, a multi-FL framework named FLAGS (Federated Learning AlGorithms Simulation) has been developed for rapid configuration of multiple FL algorithms. Our experiments indicate that fully decentralized FL algorithms achieve comparable accuracy under multiple operating conditions, including asynchronous aggregation and the presence of stragglers. Furthermore, decentralized FL can also operate in noisy environments and with a comparably higher local update rate. However, the impact of extremely skewed data distributions on decentralized FL is much more adverse than on centralized variants. The results indicate that it may not be necessary to restrict devices to a single FL algorithm; rather, multi-FL nodes may operate with greater efficiency.
Federated Learning (FL) enables training a global model without sharing the decentralized raw data stored on multiple devices, thereby protecting data privacy. Because device capabilities vary widely, FL frameworks struggle with straggler effects and stale models. In addition, data heterogeneity causes severe accuracy degradation of the global model during FL training. To address these issues, we propose a hierarchical synchronous FL framework, FedHiSyn. FedHiSyn first clusters all available devices into a small number of categories based on their computing capacity. After a certain interval of local training, the models trained in the different categories are simultaneously uploaded to a central server. Within a single category, devices exchange locally updated model weights with each other over a ring topology. Since training efficiency over a ring topology favors devices with homogeneous resources, the capacity-based clustering mitigates the straggler effect. Moreover, combining the synchronous update across categories with device-to-device communication within each category helps address data heterogeneity while achieving high accuracy. We evaluate the proposed framework on the MNIST, EMNIST, CIFAR10, and CIFAR100 datasets under diverse heterogeneous device settings. Experimental results show that FedHiSyn outperforms six baseline methods, such as FedAvg, SCAFFOLD, and FedAT, in terms of training accuracy and efficiency.
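A rough sketch of the two structural ingredients described above, capacity-based clustering and ring-based exchange within a category, might look as follows; `cluster_by_capacity`, `ring_round`, and the `local_update` callback are hypothetical names for illustration.

```python
import numpy as np

def cluster_by_capacity(capacities, num_categories):
    """Sort devices by compute capacity and split them into near-homogeneous
    categories, mitigating stragglers inside each ring."""
    order = np.argsort(capacities)
    return np.array_split(order, num_categories)

def ring_round(models, local_update):
    """One ring hop inside a category: every device refines the model it
    currently holds on its own data, then forwards it to its successor."""
    refined = [local_update(device, model) for device, model in enumerate(models)]
    return refined[-1:] + refined[:-1]  # rotate one position around the ring
```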
The space-air-ground integrated network (SAGIN), one of the key technologies for next-generation mobile communication systems, can facilitate data transmission for users all over the world, especially in some remote areas where vast amounts of informative data are collected by Internet of remote things (IoRT) devices to support various data-driven artificial intelligence (AI) services. However, training AI models centrally with the assistance of SAGIN faces the challenges of highly constrained network topology, inefficient data transmission, and privacy issues. To tackle these challenges, we first propose a novel topology-aware federated learning framework for the SAGIN, namely Olive Branch Learning (OBL). Specifically, the IoRT devices in the ground layer leverage their private data to perform model training locally, while the air nodes in the air layer and the ring-structured low earth orbit (LEO) satellite constellation in the space layer are in charge of model aggregation (synchronization) at different scales. To further enhance the communication efficiency and inference performance of OBL, an efficient Communication and Non-IID-aware Air node-Satellite Assignment (CNASA) algorithm is designed by taking the data class distribution of the air nodes as well as their geographic locations into account. Furthermore, we extend our OBL framework and CNASA algorithm to adapt to more complex multi-orbit satellite networks. We analyze the convergence of our OBL framework and conclude that the CNASA algorithm contributes to the fast convergence of the global model. Extensive experiments based on realistic datasets corroborate the superior performance of our algorithm over the benchmark policies.
Efficient federated learning is one of the key challenges for training and deploying AI models on edge devices. However, maintaining data privacy in federated learning raises several challenges, including data heterogeneity, expensive communication costs, and limited resources. In this paper, we address the above issues by (a) introducing a salient parameter selection agent based on deep reinforcement learning on local clients and aggregating only the selected salient parameters on the central server, and (b) splitting a normal deep learning model (e.g., a CNN) into a shared encoder and a local predictor, training the shared encoder through federated learning while transferring its knowledge to non-IID clients through their locally customized predictors. Method (a) significantly reduces the communication overhead of federated learning and accelerates model inference, while method (b) addresses the data heterogeneity issue. Moreover, we leverage a gradient control mechanism to correct gradient heterogeneity among clients, which makes the training process more stable and converge faster. Experiments show that our approach yields a stable training process and achieves notable results compared with state-of-the-art methods: it reduces communication cost by up to 108 GB when training VGG-11 and requires 7.6x less communication overhead when training ResNet-20, while accelerating local inference by reducing up to 39.7% of the parameters of VGG-11.
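The shared-encoder/local-predictor split in (b) can be illustrated with a small PyTorch sketch; the architecture and the `shared_state` helper are assumptions for illustration, and the salient-parameter RL agent in (a) is omitted.

```python
import torch.nn as nn

class SplitModel(nn.Module):
    """A CNN split into a federated shared encoder and a client-private
    predictor head."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(           # aggregated across clients
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.predictor = nn.Linear(16, num_classes)  # stays on the device

    def forward(self, x):
        return self.predictor(self.encoder(x))

def shared_state(model):
    """Only encoder parameters ever leave the device for aggregation."""
    return {k: v for k, v in model.state_dict().items() if k.startswith("encoder")}
```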
Federated learning (FL) is a burgeoning distributed machine learning framework in which a central parameter server (PS) coordinates many local users to train a globally consistent model. Conventional federated learning inevitably relies on a centralized topology with a PS, so it is paralyzed once the PS fails. To alleviate this single point of failure, existing work has provided decentralized FL (DFL) implementations such as CDSGD and D-PSGD to facilitate FL over decentralized topologies. However, these methods still suffer from some issues, e.g., divergence between users' final models in CDSGD and the need for network-wide model averaging in D-PSGD. To address these deficiencies, this paper devises a new DFL implementation named DACFL, in which each user trains its model with its own training data and exchanges intermediate models with its neighbors through a symmetric, doubly stochastic matrix. DACFL treats each user's local training as a discrete-time process and employs a first-order dynamic average consensus (FODAC) method to track the \textit{average model} in the absence of a PS. We also provide a theoretical convergence analysis of DACFL under the premise of i.i.d. data to strengthen its rationale. Experimental results on MNIST, Fashion-MNIST, and CIFAR-10 validate the feasibility of our solution under both time-invariant and time-varying network topologies, and show that DACFL outperforms D-PSGD and CDSGD in most cases.
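The standard first-order dynamic average consensus update that FODAC builds on can be sketched in a few lines: each user mixes its tracking variable with its neighbors' through a doubly stochastic matrix, then adds the change in its own model. The Metropolis-Hastings weight construction shown here is one common way to obtain such a matrix and is an assumption, not necessarily DACFL's exact choice.

```python
import numpy as np

def metropolis_weights(adj):
    """A symmetric, doubly stochastic mixing matrix from an undirected
    0/1 adjacency matrix (Metropolis-Hastings rule)."""
    deg = adj.sum(1)
    W = np.zeros(adj.shape)
    for i in range(len(adj)):
        for j in range(len(adj)):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def fodac_step(x, theta_new, theta_old, W):
    """One dynamic average consensus step: mix tracking variables x with
    neighbors via W, then add each user's own model change, so that x
    tracks the network-wide average model without a parameter server.

    x, theta_new, theta_old: (num_users, dim); W: (num_users, num_users).
    """
    return W @ x + (theta_new - theta_old)
```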
Institutions in highly regulated domains such as finance and healthcare often have restrictive rules around data sharing. Federated learning is a distributed learning framework that enables multi-institutional collaboration on decentralized data with improved protection of each collaborator's data privacy. In this paper, we propose a communication-efficient scheme for decentralized federated learning called ProxyFL, or proxy-based federated learning. Each participant in ProxyFL maintains two models: a private model, and a publicly shared proxy model designed to protect the participant's privacy. The proxy models allow efficient information exchange among participants using the PushSum method, without the need for a centralized server. The proposed method eliminates a significant limitation of canonical federated learning by allowing model heterogeneity; each participant can have a private model with any architecture. Furthermore, our protocol for communication through proxy models leads to stronger privacy guarantees based on differential privacy analysis. Experiments on popular image datasets, as well as a pan-cancer diagnosis problem using over 30,000 high-quality gigapixel histology whole-slide images, show that ProxyFL can outperform existing alternatives with far less communication overhead and stronger privacy.
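A structural sketch of a ProxyFL-style participant is given below; the `distill` and `mix` callables stand in for the paper's mutual-distillation and PushSum steps and are assumptions, as is the omission of differential-privacy noise.

```python
import copy

class ProxyFLParticipant:
    """Each participant keeps a private model (any architecture, never
    shared) and a proxy model that is exchanged with neighbors."""
    def __init__(self, private_model, proxy_model):
        self.private = private_model
        self.proxy = proxy_model

    def local_round(self, data, distill):
        # Train both models on local data and couple them by mutual
        # distillation so knowledge flows between private and proxy.
        distill(self.private, self.proxy, data)

    def outgoing(self):
        return copy.deepcopy(self.proxy)  # only the proxy is communicated

    def incoming(self, neighbor_proxies, mix):
        # `mix` stands in for the PushSum-style averaging of proxies.
        self.proxy = mix([self.proxy] + list(neighbor_proxies))
```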
Federated Learning (FL) allows training machine learning models in privacy-constrained scenarios by enabling the cooperation of edge devices without requiring local data sharing. This approach raises several challenges due to the different statistical distributions of the local datasets and the clients' computational heterogeneity. In particular, the presence of highly non-i.i.d. data severely impairs both the performance of the trained neural network and its convergence rate, increasing the number of communication rounds required to reach performance comparable to that of the centralized scenario. As a solution, we propose FedSeq, a novel framework leveraging the sequential training of subgroups of heterogeneous clients, i.e. superclients, to emulate the centralized paradigm in a privacy-compliant way. Given a fixed budget of communication rounds, we show that FedSeq outperforms or matches several state-of-the-art federated algorithms in terms of final performance and speed of convergence. Finally, our method can be easily integrated with other approaches available in the literature. Empirical results show that combining existing algorithms with FedSeq further improves its final performance and convergence speed. We test our method on CIFAR-10 and CIFAR-100 and prove its effectiveness in both i.i.d. and non-i.i.d. scenarios.
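One way to picture a FedSeq-style round is the following sketch, in which the model is trained sequentially across the clients of each superclient before server-side averaging; `train_on` and `average` are assumed helpers, and superclient formation itself is not shown.

```python
import copy

def fedseq_round(global_model, superclients, train_on, average):
    """One FedSeq-style round: within each superclient the model visits
    its member clients sequentially (emulating centralized training over
    their combined data); the superclient models are then averaged."""
    trained = []
    for members in superclients:
        model = copy.deepcopy(global_model)
        for client_data in members:        # sequential pass over the group
            model = train_on(model, client_data)
        trained.append(model)
    return average(trained)
```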
With their capability to deal with graph data, which is widely found in practical applications, graph neural networks (GNNs) have attracted significant research attention in recent years. As societies become increasingly concerned with the need for data privacy protection, GNNs face the need to adapt to this new normal. Besides, as clients in Federated Learning (FL) may have relationships, more powerful tools are required to utilize such implicit information to boost performance. This has led to the rapid development of the emerging research field of federated graph neural networks (FedGNNs). This promising interdisciplinary field is highly challenging for interested researchers to grasp, and the lack of an insightful survey on this topic only exacerbates the entry difficulty. In this paper, we bridge this gap by offering a comprehensive survey of this emerging field. We propose a 2-dimensional taxonomy of the FedGNNs literature: 1) the main taxonomy provides a clear perspective on the integration of GNNs and FL by analyzing how GNNs enhance FL training as well as how FL assists GNNs training, and 2) the auxiliary taxonomy provides a view on how FedGNNs deal with heterogeneity across FL clients. Through discussions of key ideas, challenges, and limitations of existing works, we envision future research directions that can help build more robust, explainable, efficient, fair, inductive, and comprehensive FedGNNs.
The uneven distribution of local data across different edge devices (clients) leads to slow model training and reduced accuracy in federated learning. Naive federated learning (FL) strategies and most alternative solutions attempt to achieve fairness by weighting the deep learning models aggregated across clients. This work introduces a novel non-IID type encountered in real-world datasets, namely cluster-skew, in which groups of clients hold local data with similar distributions, causing the global model to converge to an overfitted solution. To handle non-IID data, and cluster-skewed data in particular, we propose FedDRL, a novel FL model that employs deep reinforcement learning to adaptively determine each client's impact factor (used as the weight in the aggregation process). Extensive experiments on a suite of federated datasets confirm that the proposed FedDRL achieves favorable improvements over the FedAvg and FedProx methods, e.g., up to 4.05% and 2.17% on average on the CIFAR-100 dataset, respectively.
Recently, federated learning (FL) has received intensive research attention because of its ability to protect data privacy while collaboratively training machine learning models for decentralized clients. Typically, a parameter server (PS) is deployed to aggregate the model parameters contributed by different clients. Decentralized federated learning (DFL) is upgraded from FL, allowing clients to aggregate model parameters directly with their neighbors. DFL is particularly feasible for vehicular networks, since vehicles communicate with each other in a vehicle-to-vehicle (V2V) manner. However, due to the restrictions of vehicle routes and communication distances, it is hard for an individual vehicle to exchange models with others sufficiently, so the data sources contributing to a single vehicle's model may not be diverse enough, resulting in poor model accuracy. To address this problem, we propose the DFL-DDS (DFL with Diversified Data Sources) algorithm to diversify the data sources in DFL. Specifically, each vehicle maintains a state vector to record the contribution weight of each data source to its model, and the Kullback-Leibler (KL) divergence is adopted to measure the diversity of the state vector. To boost the convergence of DFL, a vehicle tunes the aggregation weight of each data source by minimizing the KL divergence of its state vector, and the effectiveness of this diversification can be proven theoretically. Finally, the superiority of DFL-DDS is evaluated through extensive experiments with the MNIST and CIFAR-10 datasets, which demonstrate that DFL-DDS can accelerate the convergence of DFL and significantly improve model accuracy compared with state-of-the-art baselines.
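The KL-based diversification idea can be sketched as follows; the grid search over `candidates` is a stand-in for DFL-DDS's actual minimization, and modeling the resulting state vector as a weighted mix of the participants' state vectors is a simplifying assumption.

```python
import numpy as np

def kl_to_uniform(p, eps=1e-12):
    """KL divergence from a normalized state vector to the uniform
    distribution; lower means the model drew on more diverse sources."""
    p = np.asarray(p, dtype=float) + eps
    p = p / p.sum()
    return float(np.sum(p * np.log(p * len(p))))

def pick_aggregation_weights(own_state, neighbor_states, candidates):
    """Choose the aggregation weights whose mixed state vector is closest
    to uniform. `candidates` is an assumed discretization of the simplex."""
    stacked = np.vstack([own_state] + list(neighbor_states))
    return min(candidates, key=lambda w: kl_to_uniform(np.asarray(w) @ stacked))
```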
Federated Learning (FL) is extensively used to train AI/ML models in distributed and privacy-preserving settings. Participant edge devices in FL systems typically contain non-independent and identically distributed (Non-IID) private data and unevenly distributed computational resources. Preserving user data privacy while optimizing AI/ML models in a heterogeneous federated network requires us to address data heterogeneity and system/resource heterogeneity. Hence, we propose Resource-aware Federated Learning (RaFL) to address these challenges. RaFL allocates resource-aware models to edge devices using Neural Architecture Search (NAS) and allows heterogeneous model architecture deployment by knowledge extraction and fusion. Integrating NAS into FL enables on-demand customized model deployment for resource-diverse edge devices. Furthermore, we propose a multi-model architecture fusion scheme allowing the aggregation of the distributed learning results. Results demonstrate RaFL's superior resource efficiency compared to SoTA.
Recommender systems have proven to be valuable tools that help users in their daily activities by extracting relevant content (e.g., finding relevant places to visit, content to consume, items to purchase). To be effective, however, these systems need to collect and analyze large volumes of personal data (e.g., location check-ins, movie ratings, click rates), which exposes users to numerous privacy threats. In this context, recommender systems based on federated learning (FL) appear to be a promising solution for computing accurate recommendations while keeping personal data on users' devices. However, FL, and therefore FL-based recommender systems, rely on a central server which, besides being vulnerable to attacks, can face scalability issues. To address this, we propose Pepper, a decentralized recommender system based on gossip learning principles. In Pepper, users gossip model updates and aggregate them asynchronously. At the heart of Pepper lie two key components: a personalized peer-sampling protocol that keeps in each node's neighborhood a proportion of nodes with similar interests, and a simple yet effective model aggregation function that builds a model better suited to each user. Through experiments on three real datasets implementing two use cases, location check-in recommendation and movie recommendation, we demonstrate that our solution converges up to 42% faster than other decentralized solutions and improves long-tail hit-ratio performance by up to 21% over decentralized competitors.
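A minimal sketch of the personalized peer-sampling idea might look as follows; the cosine-similarity interest profiles, the `explore` split, and all helper names are assumptions for illustration, not Pepper's actual protocol.

```python
import numpy as np

def sample_view(my_profile, peer_profiles, k, n_random=2, seed=None):
    """Bias the node's view toward peers with similar interest profiles
    (cosine similarity) while keeping a few random peers for mixing."""
    rng = np.random.default_rng(seed)
    P = np.asarray(peer_profiles, dtype=float)
    sims = P @ my_profile / (np.linalg.norm(P, axis=1)
                             * np.linalg.norm(my_profile) + 1e-12)
    ranked = np.argsort(-sims)                 # most similar first
    view = list(ranked[: k - n_random])        # similar peers
    pool = ranked[k - n_random :]              # random tail for exploration
    view += list(rng.choice(pool, size=min(n_random, len(pool)), replace=False))
    return view
```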
Clustered federated learning (FL) has been shown to produce promising results by grouping clients into clusters. This is especially effective in scenarios where separate groups of clients have significant differences in the distributions of their local data. Existing clustered FL algorithms essentially try to group together clients whose data distributions are similar, so that clients in the same cluster can leverage each other's data for better federated learning. However, prior clustered FL algorithms attempt to learn these distribution similarities indirectly during training, which can be quite time-consuming, as many rounds of federated learning may be required before the formation of clusters stabilizes. In this paper, we propose a new approach to federated learning that directly aims to efficiently identify distribution similarities among clients by analyzing the principal angles between client data subspaces. Each client applies a truncated singular value decomposition (SVD) step on its local data in a single-shot manner to derive a small set of principal vectors, which provides a signature that succinctly captures the main characteristics of the underlying distribution. This small set of principal vectors is provided to the server, so that it can directly identify distribution similarities among clients and form clusters. This is achieved by comparing the similarities of the principal angles between the client data subspaces spanned by those principal vectors. The approach provides a simple yet effective clustered FL framework that addresses a broad range of data heterogeneity issues beyond simpler forms of non-IIDness such as label skew. Our clustered FL approach also enables convergence guarantees for non-convex objectives. Our code is available at https://github.com/mmorafah/pacfl.
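The subspace-signature idea admits a compact sketch: each client sends the top left singular vectors of its local data, and the server measures dissimilarity via principal angles. The helper names and the choice of summing the angles into a scalar are assumptions; the server would then cluster clients on the resulting distance matrix.

```python
import numpy as np

def client_signature(X, p=3):
    """One-shot truncated SVD on the local data matrix X (features x
    samples): the top-p left singular vectors summarize the distribution."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :p]

def subspace_distance(U1, U2):
    """Principal angles between two client subspaces: the singular values
    of U1^T U2 are the cosines of the angles; sum them as a dissimilarity."""
    cosines = np.clip(np.linalg.svd(U1.T @ U2, compute_uv=False), -1.0, 1.0)
    return float(np.arccos(cosines).sum())
```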
Decentralized learning algorithms enable the training of deep learning models over large distributed datasets generated at different devices and locations, without the need for a central server. In practical scenarios, the distributed datasets can have significantly different data distributions across agents. The current state-of-the-art decentralized algorithms mostly assume the data distributions to be independent and identically distributed (IID). This paper focuses on improving decentralized learning over non-IID data distributions with minimal compute and memory overhead. We propose Neighborhood Gradient Clustering (NGC), a novel decentralized learning algorithm that modifies the local gradients of each agent using self- and cross-gradient information. In particular, the proposed method replaces the local gradient of the model with a weighted mean of the self-gradients, model-variant cross-gradients (derivatives of the received neighbor model parameters with respect to the local dataset), and data-variant cross-gradients (derivatives of the local model with respect to its neighbors' datasets). Furthermore, we present CompNGC, a compressed version of NGC that reduces communication overhead by 32x by compressing the cross-gradients. We demonstrate the empirical convergence and efficiency of the proposed techniques over non-IID data distributions sampled on various model architectures and graph topologies. Our experiments show that NGC and CompNGC outperform the existing state-of-the-art (SOTA) decentralized learning algorithms over non-IID data by 1-5% with significantly lower compute and memory requirements. Moreover, the proposed NGC method delivers gains of 5-40% without any additional communication.
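The gradient-mixing step can be sketched as a convex combination of the three gradient families named above; the specific `alpha` weighting is an assumption for illustration.

```python
import numpy as np

def ngc_gradient(self_grad, model_variant_grads, data_variant_grads, alpha=0.5):
    """Replace the local gradient with a weighted mean of the self-gradient,
    the model-variant cross-gradients (neighbor models on local data) and
    the data-variant cross-gradients (local model on neighbor data)."""
    mv = np.mean(model_variant_grads, axis=0)
    dv = np.mean(data_variant_grads, axis=0)
    return (1.0 - alpha) * self_grad + 0.5 * alpha * (mv + dv)
```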
Communication cost is the main bottleneck in the design of efficient distributed learning algorithms. Recently, event-triggered techniques have been proposed to reduce the information exchanged among compute nodes and thereby alleviate the communication cost. However, most existing event-triggered approaches consider only heuristic event-triggered thresholds, and they ignore the impact of computation and network delay, which play an important role in training performance. In this paper, we propose an asynchronous event-triggered stochastic gradient descent (SGD) framework, called AET-SGD, to (i) reduce the communication cost among compute nodes, and (ii) mitigate the impact of delay. Compared with baseline event-triggered methods, AET-SGD employs a linearly increasing, sample-size-based event-triggered threshold, which can significantly reduce communication cost while maintaining good convergence performance. We implement AET-SGD and evaluate its performance on multiple representative datasets, including MNIST, FashionMNIST, KMNIST, and CIFAR10. Experimental results validate the correctness of the design and show a significant communication cost reduction of 44x to 120x over the state of the art. Our results also show that AET-SGD can resist large delays from straggler nodes while obtaining decent performance and the desired speedup.
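The event-trigger rule reduces to a simple drift test against a growing threshold; the `base`/`slope` schedule below is an assumed linear instantiation of the paper's sample-size-based threshold.

```python
import numpy as np

def should_transmit(current, last_sent, step, base=0.01, slope=1e-4):
    """Event trigger with a linearly increasing threshold: a node sends its
    model only when it has drifted far enough from the last broadcast copy,
    so communication becomes rarer as training stabilizes."""
    return np.linalg.norm(current - last_sent) > base + slope * step
```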
We study the problem of training personalized deep learning models in a decentralized peer-to-peer setting, focusing on the case where data distributions differ between clients and different clients have different local learning tasks. We study both covariate and label shift, and our contribution is an algorithm which, for each client, discovers beneficial collaborations based on a similarity estimate of the local tasks. Our method does not rely on hyperparameters that are hard to estimate, such as the number of client clusters, but instead continuously adapts the network topology using soft cluster assignments based on a novel adaptive gossip algorithm. We test the proposed method in a variety of settings where data is not independent and identically distributed across clients. The experimental evaluation shows that the proposed method outperforms previous state-of-the-art algorithms for this problem setting and handles situations well where previous methods fail.
Since federated learning (FL) was introduced as a decentralized learning technique with privacy preservation, the statistical heterogeneity of distributed data has been the main obstacle to achieving robust performance and stable convergence in FL applications. Model personalization methods have been studied to overcome this problem. However, existing approaches are mainly built on the premise of fully labeled data, which is unrealistic in practice due to the requirement of expert knowledge. The main issue caused by the partially labeled condition is that clients with insufficient labeled data may suffer unfair performance gains, because they lack adequate insight into the local distribution needed to customize the global model. To address this problem, 1) we propose a novel personalized semi-supervised learning paradigm that allows partially labeled or unlabeled clients to seek labeling assistance from data-related clients (helper agents), thereby enhancing their awareness of the local data; 2) based on this paradigm, we design an uncertainty-based data-relation metric to ensure that selected helpers can provide trustworthy pseudo-labels instead of misleading the local training; and 3) to mitigate the network overload introduced by the helper search, we further develop a helper selection protocol that achieves efficient communication at an acceptable performance sacrifice. Experiments show that our proposed method achieves superior performance and more stable convergence than related works on partially labeled data, especially in highly heterogeneous settings.
In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
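The core FL aggregation step the survey refers to is, in its simplest FedAvg form, a sample-size-weighted average of client updates; the sketch below assumes flattened parameter vectors.

```python
import numpy as np

def fedavg(client_updates, num_samples):
    """FedAvg aggregation: a dataset-size-weighted average of client model
    updates; only the updates, never raw data, reach the server."""
    w = np.asarray(num_samples, dtype=float)
    w /= w.sum()
    return w @ np.stack(client_updates)  # (clients,) @ (clients, dim) -> (dim,)
```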