Data heterogeneity is one of the biggest challenges for decentralized and federated learning algorithms, especially when each user wants to learn a specific task. Even when personalized headers are attached to a shared network (PF-MTL), aggregating all the networks with a decentralized algorithm can degrade performance as a result of heterogeneity in the data. Our algorithm uses exchanged gradients to automatically calculate the correlations among tasks and dynamically adjusts the communication graph to connect mutually beneficial tasks and isolate those that may negatively impact each other. This algorithm improves learning performance and leads to faster convergence compared to the case where all clients are connected to each other regardless of their correlations. We conduct experiments on a synthetic Gaussian dataset and a large-scale celebrity attributes (CelebA) dataset. The experiments with the synthetic data illustrate that the proposed method is capable of detecting tasks that are positively and negatively correlated. Moreover, the results of the experiments with CelebA demonstrate that the proposed method can train significantly faster than a fully connected communication graph.
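A minimal sketch of the kind of correlation-driven graph construction described above, assuming cosine similarity between flattened exchanged gradients and a simple threshold rule; both choices are illustrative and not necessarily the paper's exact procedure:

```python
# Hedged sketch: build a communication graph from pairwise gradient correlations.
import numpy as np

def correlation_graph(gradients, threshold=0.0):
    """gradients: list of flattened gradient vectors, one per client/task.
    Returns a symmetric adjacency matrix connecting positively correlated tasks."""
    n = len(gradients)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            gi, gj = gradients[i], gradients[j]
            corr = gi @ gj / (np.linalg.norm(gi) * np.linalg.norm(gj) + 1e-12)
            if corr > threshold:            # connect mutually beneficial tasks,
                adj[i, j] = adj[j, i] = 1   # isolate negatively correlated ones
    return adj

# toy usage: three clients, two with aligned gradients, one opposed
grads = [np.array([1.0, 2.0]), np.array([0.9, 2.1]), np.array([-1.0, -2.0])]
print(correlation_graph(grads))
```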
Multi-task learning (MTL) is a learning paradigm in which multiple related tasks are learned simultaneously with a single shared network, where each task has a distinct personalized header network for fine-tuning. MTL can be integrated into a federated learning (FL) setting if tasks are distributed across clients and clients have a single shared network, leading to personalized federated learning (PFL). To cope with statistical heterogeneity across clients, which can significantly degrade learning performance in the federated setting, we use a distributed dynamic weighting approach. To communicate efficiently between the remote parameter server (PS) and the clients over a noisy channel in a power- and bandwidth-limited regime, we utilize over-the-air (OTA) aggregation and hierarchical federated learning (HFL). Thus, we propose hierarchical over-the-air (HOTA) PFL with a dynamic weighting strategy, which we call HOTA-FedGradNorm. Our algorithm considers the channel conditions during the dynamic weight selection process. We conduct experiments on a wireless communication system dataset (RadComDynamic). The experimental results demonstrate that training with HOTA-FedGradNorm is faster than with algorithms that use a naive static equal weighting strategy. In addition, HOTA-FedGradNorm provides robustness against negative channel effects by compensating for the channel conditions during the dynamic weight selection process.
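A hedged sketch of a GradNorm-style dynamic weight selection with an extra channel-gain correction, as the abstract suggests; the specific weighting rule, the alpha value, and the channel-compensation step below are illustrative assumptions rather than the exact HOTA-FedGradNorm algorithm:

```python
# Hedged sketch of GradNorm-style dynamic task weighting with an illustrative
# channel-gain correction; the exact HOTA-FedGradNorm rule is in the paper.
import numpy as np

def dynamic_task_weights(grad_norms, losses, init_losses, channel_gains, alpha=1.5):
    """grad_norms: per-task gradient norms of the shared layers
    losses / init_losses: current and initial per-task losses
    channel_gains: per-task (per-client) channel quality, assumed in (0, 1]."""
    grad_norms = np.asarray(grad_norms, dtype=float)
    inv_rate = np.asarray(losses) / np.asarray(init_losses)   # relative training rate
    rel_inv_rate = inv_rate / inv_rate.mean()
    target = grad_norms.mean() * rel_inv_rate ** alpha        # GradNorm-style target norms
    weights = target / (grad_norms + 1e-12)
    weights /= np.asarray(channel_gains)                      # compensate weak channels (assumption)
    return weights * len(weights) / weights.sum()             # renormalize to sum to #tasks

print(dynamic_task_weights([1.0, 3.0], [0.8, 0.4], [1.0, 1.0], [0.9, 0.5]))
```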
Federated learning (FL) is a flourishing distributed machine learning framework in which a central parameter server (PS) coordinates many local users to train a globally consistent model. Conventional FL inevitably relies on a centralized topology with a PS; as a result, once the PS fails, the whole system is paralyzed. To alleviate such a single point of failure, especially on the PS, existing work has provided decentralized FL (DFL) implementations such as CDSGD and D-PSGD to facilitate FL over decentralized topologies. However, these methods still suffer from some issues, e.g., the gap between users' final models in CDSGD and the necessity of network-wide model averaging in D-PSGD. To address these deficiencies, this paper devises a new DFL implementation named DACFL, in which each user trains its model with its own training data and exchanges intermediate models with its neighbors through a symmetric and doubly stochastic matrix. DACFL treats the progress of each user's local training as a discrete-time process and employs the first-order dynamic average consensus (FODAC) method to track the average model in the absence of a PS. We also provide a theoretical convergence analysis of DACFL under the premise of i.i.d. data to strengthen its rationale. Experimental results on MNIST, Fashion-MNIST and CIFAR-10 validate the feasibility of our solution under both time-invariant and time-varying network topologies, and show that DACFL outperforms D-PSGD and CDSGD in most cases.
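The FODAC idea can be sketched as follows: each user mixes its running estimate with its neighbors' estimates via a doubly stochastic matrix and adds the innovation of its own local model, so that the estimates track the network-wide average model without a PS. The mixing matrix and the drift model below are illustrative assumptions:

```python
# Hedged sketch of first-order dynamic average consensus (FODAC) tracking the
# average of time-varying local models through a doubly stochastic mixing matrix W.
import numpy as np

def fodac_step(x, r_new, r_old, W):
    """x: current consensus estimates, shape (n_users, dim)
    r_new, r_old: local models after / before the latest local training step
    W: symmetric, doubly stochastic mixing matrix, shape (n_users, n_users)."""
    return W @ x + (r_new - r_old)   # neighbors' estimates plus local innovation

# toy usage: 3 users, each estimate tracks the running average of the local models
n, dim = 3, 2
W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
r = np.random.randn(n, dim)
x = r.copy()
for _ in range(50):
    r_new = r + 0.01 * np.random.randn(n, dim)   # stand-in for one local SGD step
    x = fodac_step(x, r_new, r, W)
    r = r_new
print(np.allclose(x, r.mean(axis=0), atol=0.1))  # estimates stay close to the average model
```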
One of the key challenges in decentralized and federated learning is to design algorithms that efficiently deal with highly heterogeneous data distributions across agents. In this paper, we revisit the analysis of decentralized stochastic gradient descent (D-SGD) under data heterogeneity. We exhibit the key role played by a new quantity, called neighborhood heterogeneity, in the convergence rate of D-SGD. By coupling the communication topology and the heterogeneity, our analysis sheds light on the poorly understood interplay between these two concepts in decentralized learning. We then argue that neighborhood heterogeneity provides a natural criterion to learn data-dependent topologies that reduce (and can even eliminate) the detrimental effect of data heterogeneity on the convergence time of D-SGD. For the important case of classification with label skew, we formulate the problem of learning such a good topology as a tractable optimization problem, which we solve with a Frank-Wolfe algorithm. As illustrated over a set of simulated and real-world experiments, our approach provides a way to design sparse topologies that balance the convergence speed and the communication cost of D-SGD under data heterogeneity.
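For reference, a minimal sketch of the D-SGD update that the analysis revisits, with the mixing matrix W standing in for the (possibly learned) topology; the neighborhood-heterogeneity quantity and the Frank-Wolfe topology-learning step are not reproduced here:

```python
# Hedged sketch of one decentralized SGD (D-SGD) round: each agent takes a local
# stochastic gradient step and then averages with its neighbors via a mixing matrix W.
import numpy as np

def dsgd_round(models, local_grads, W, lr=0.1):
    """models: shape (n_agents, dim); local_grads: stochastic gradients, same shape;
    W: doubly stochastic mixing matrix defining the communication topology."""
    return W @ (models - lr * local_grads)

# toy usage: two agents with heterogeneous quadratic objectives pulled toward consensus
W = np.array([[0.7, 0.3], [0.3, 0.7]])
theta = np.zeros((2, 1))
targets = np.array([[1.0], [-1.0]])          # heterogeneous local minima
for _ in range(100):
    grads = theta - targets                  # gradient of 0.5 * (theta - target)^2
    theta = dsgd_round(theta, grads, W)
print(theta.ravel())                         # both agents end up near the global optimum 0
```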
The convergence speed of machine learning models trained with federated learning is significantly affected by heterogeneous data partitions, even in a fully decentralized setting without a central server. In this paper, we show that the impact of label distribution skew, an important type of data heterogeneity, can be significantly reduced by carefully designing the underlying communication topology. We present D-Cliques, a novel topology that reduces gradient bias by grouping nodes into sparsely interconnected cliques such that the label distribution within a clique is representative of the global label distribution. We also show how to adapt the updates of decentralized SGD to obtain unbiased gradients and to implement an effective momentum with D-Cliques. Our extensive empirical evaluation on MNIST and CIFAR10 shows that our approach provides a similar convergence speed as a fully connected topology, which offers the best convergence in data-heterogeneous settings, while significantly reducing the number of edges and messages. In a 1000-node topology, D-Cliques require 98% fewer edges and 96% fewer total messages, with further gains obtained by using a small-world topology across cliques.
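A hedged sketch of a greedy clique construction in the spirit of D-Cliques, where nodes are added to a clique so that its label histogram stays close to the global one; the greedy rule and the L1 criterion are illustrative assumptions, not the paper's exact algorithm:

```python
# Hedged sketch: group nodes into cliques whose label distribution approximates
# the global label distribution.
import numpy as np

def build_cliques(label_hists, clique_size):
    """label_hists: shape (n_nodes, n_classes), per-node label counts."""
    label_hists = np.asarray(label_hists, dtype=float)
    global_dist = label_hists.sum(axis=0) / label_hists.sum()
    remaining = set(range(len(label_hists)))
    cliques = []
    while remaining:
        clique, hist = [], np.zeros_like(global_dist)
        while remaining and len(clique) < clique_size:
            # pick the node that keeps the clique distribution closest to the global one
            best = min(remaining, key=lambda i: np.abs(
                (hist + label_hists[i]) / (hist + label_hists[i]).sum() - global_dist).sum())
            clique.append(best)
            hist += label_hists[best]
            remaining.remove(best)
        cliques.append(clique)
    return cliques

# toy usage: 4 nodes, 2 classes, each node holds a single class
print(build_cliques([[10, 0], [0, 10], [10, 0], [0, 10]], clique_size=2))  # [[0, 1], [2, 3]]
```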
Recently, local peer topology has been shown to influence the overall convergence of decentralized learning (DL) graphs in the presence of data heterogeneity. In this paper, we demonstrate the advantages of constructing a proxy-based locally heterogeneous DL topology to enhance convergence and maintain data privacy. In particular, we propose a novel peer clumping strategy to efficiently cluster peers before arranging them in a final training graph. By showing how locally heterogeneous graphs outperform locally homogeneous graphs of similar size and from the same global data distribution, we present a strong case for topological pre-processing. Moreover, we demonstrate the scalability of our approach by showing how the proposed topological pre-processing overhead remains small in large graphs while the performance gains get even more pronounced. Furthermore, we show the robustness of our approach in the presence of network partitions.
Federated learning has become an important paradigm for training machine learning models in different domains. For graph-level tasks such as graph classification, graphs can also be regarded as a special type of data sample that is collected and stored in separate local systems. Similar to other domains, multiple local systems, each holding a small set of graphs, may benefit from collaboratively training a powerful graph mining model, such as the popular graph neural network (GNN). To provide more motivation for such endeavors, we analyze real-world graphs from different domains and confirm that they indeed share certain graph properties that are statistically significant compared with random graphs. However, we also find that different sets of graphs, even from the same domain or the same dataset, are non-IID with respect to both graph structures and node features. To handle this, we propose a graph clustered federated learning (GCFL) framework that dynamically finds clusters of local systems based on the gradients of GNNs, and theoretically justify that such clustering can reduce the structure and feature heterogeneity among the graphs owned by the local systems. Moreover, we observe that the gradients of GNNs fluctuate strongly in GCFL, which hinders high-quality clustering, and design a gradient-sequence-based clustering mechanism based on dynamic time warping (GCFL+). Extensive experimental results and in-depth analysis demonstrate the effectiveness of our proposed frameworks.
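A hedged sketch of GCFL+-style clustering: clients are compared by the dynamic time warping (DTW) distance between their gradient-norm sequences and then grouped; the single-linkage thresholding below is an illustrative stand-in for the paper's actual clustering mechanism:

```python
# Hedged sketch: cluster clients by DTW distance between gradient-norm sequences.
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two scalar sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def cluster_clients(grad_seqs, threshold):
    """grad_seqs: list of per-round gradient-norm sequences, one per client."""
    n = len(grad_seqs)
    dist = np.array([[dtw(grad_seqs[i], grad_seqs[j]) for j in range(n)] for i in range(n)])
    labels = list(range(n))
    for i in range(n):                 # single-linkage grouping below the threshold
        for j in range(i + 1, n):
            if dist[i, j] < threshold:
                old, new = labels[j], labels[i]
                labels = [new if l == old else l for l in labels]
    return labels

seqs = [[1.0, 0.9, 0.8], [1.1, 0.9, 0.7], [5.0, 4.0, 3.5]]
print(cluster_clients(seqs, threshold=1.0))   # first two clients grouped together
```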
This paper proposes a fully decentralized federated learning (FL) scheme for Internet of Everything (IoE) devices connected via multi-hop networks. Because FL algorithms can hardly guarantee the convergence of the parameters of machine learning (ML) models, this paper focuses on the convergence of ML models in function space. Considering that the representative loss functions of ML tasks, e.g., mean squared error (MSE) and Kullback-Leibler (KL) divergence, are convex functionals, algorithms that directly update functions in the function space can converge to the optimal solution. The key concept of this paper is to tailor a consensus-based optimization algorithm so that it works in the function space and achieves the global optimum in a distributed manner. This paper first analyzes the convergence of the proposed algorithm in the function space, which is referred to as the meta-algorithm, and shows that spectral graph theory can be applied to the function space in a manner similar to that for numerical vectors. Then, consensus-based multi-hop federated distillation (CMFD) is developed for neural networks (NNs) to implement the meta-algorithm. CMFD leverages knowledge distillation to realize function aggregation among adjacent devices without parameter averaging. One advantage of CMFD is that it works even when different NN models are used among the distributed learners. Although CMFD does not perfectly reflect the behavior of the meta-algorithm, the discussion of the meta-algorithm's convergence property promotes an intuitive understanding of CMFD, and simulation evaluations show that NN models converge with CMFD for several tasks. The simulation results also show that CMFD achieves higher accuracy than parameter aggregation for weakly connected networks, and that CMFD is more stable than parameter aggregation methods.
This study investigates clustered federated learning (FL), one of the formulations of FL with non-i.i.d. data, where the devices are partitioned into clusters and each cluster optimally fits its data with a localized model. We propose a novel clustered FL framework, which applies a nonconvex penalty to pairwise differences of parameters. This framework can automatically identify clusters without a priori knowledge of the number of clusters and the set of devices in each cluster. To implement the proposed framework, we develop a novel clustered FL method called FPFC. Advancing from the standard ADMM, our method is implemented in parallel, updates only a subset of devices at each communication round, and allows each participating device to perform a variable amount of work. This greatly reduces the communication cost while simultaneously preserving privacy, making it practical for FL. We also propose a new warmup strategy for hyperparameter tuning under FL settings and consider the asynchronous variant of FPFC (asyncFPFC). Theoretically, we provide convergence guarantees of FPFC for general nonconvex losses and establish the statistical convergence rate under a linear model with squared loss. Our extensive experiments demonstrate the advantages of FPFC over existing methods.
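A hedged sketch of the kind of pairwise-penalized objective behind such clustered FL formulations, using the minimax concave penalty (MCP) as an example of a nonconvex penalty on pairwise parameter differences; the actual FPFC solver (a parallel ADMM variant) is omitted:

```python
# Hedged sketch of a clustered-FL objective: local losses plus a nonconvex (MCP)
# penalty on pairwise parameter differences, which drives similar devices toward
# sharing identical parameters.
import numpy as np

def mcp(x, lam=1.0, gamma=3.0):
    """Minimax concave penalty applied to a nonnegative scalar distance x."""
    return np.where(x <= gamma * lam, lam * x - x ** 2 / (2 * gamma), gamma * lam ** 2 / 2)

def clustered_objective(params, local_losses, lam=1.0, gamma=3.0):
    """params: shape (n_devices, dim); local_losses: callables, one per device."""
    loss = sum(f(w) for f, w in zip(local_losses, params))
    penalty = 0.0
    for i in range(len(params)):
        for j in range(i + 1, len(params)):
            penalty += mcp(np.linalg.norm(params[i] - params[j]), lam, gamma)
    return loss + penalty

# toy usage: two near-identical devices and one outlier; the penalty saturates for
# distant pairs, so the outlier is not forced toward the others
params = np.array([[1.0, 0.0], [1.05, 0.0], [5.0, 5.0]])
losses = [lambda w: 0.0] * 3                      # placeholder local losses
print(clustered_objective(params, losses))
```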
This paper deals with the problem of statistical and system heterogeneity in a cross-silo Federated Learning (FL) framework where there exist a limited number of Consumer Internet of Things (CIoT) devices in a smart building. We propose a novel Graph Signal Processing (GSP)-inspired aggregation rule based on graph filtering dubbed ``G-Fedfilt''. The proposed aggregator enables a structured flow of information based on the graph's topology. This behavior allows capturing the interconnection of CIoT devices and training domain-specific models. The embedded graph filter is equipped with a tunable parameter which enables a continuous trade-off between domain-agnostic and domain-specific FL. In the domain-agnostic case, it forces G-Fedfilt to act similar to the conventional Federated Averaging (FedAvg) aggregation rule. The proposed G-Fedfilt also enables an intrinsic smooth clustering based on the graph connectivity, without being explicitly specified, which further boosts the personalization of the models in the framework. In addition, the proposed scheme enjoys communication-efficient time-scheduling to alleviate the system heterogeneity. This is accomplished by adaptively adjusting the amount of training data samples and the sparsity of the models' gradients to reduce communication desynchronization and latency. Simulation results show that the proposed G-Fedfilt achieves up to $3.99\%$ better classification accuracy than the conventional FedAvg for model personalization on the statistically heterogeneous local datasets, while it is capable of yielding up to $2.41\%$ higher accuracy than FedAvg when testing the generalization of the models.
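A hedged sketch of a graph-filter-based aggregation in this spirit: client models are smoothed over the client graph with a tunable low-pass spectral filter, where a large filter parameter behaves like FedAvg on a connected graph and a small one keeps models domain-specific; the spectral response below is an illustrative choice, not necessarily G-Fedfilt's filter:

```python
# Hedged sketch: aggregate client models by low-pass graph filtering on the client graph.
import numpy as np

def graph_filter_aggregate(models, A, sigma=1.0):
    """models: shape (n_clients, dim); A: symmetric adjacency of the client graph;
    sigma: tunable trade-off between domain-specific (small) and FedAvg-like (large)."""
    L = np.diag(A.sum(axis=1)) - A                 # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)                     # graph Fourier basis
    h = 1.0 / (1.0 + sigma * lam)                  # illustrative low-pass spectral response
    return U @ np.diag(h) @ U.T @ models           # filtered (aggregated) models

A = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
models = np.array([[1.0], [2.0], [3.0]])
print(graph_filter_aggregate(models, A, sigma=0.01).ravel())   # nearly personalized
print(graph_filter_aggregate(models, A, sigma=100.0).ravel())  # nearly FedAvg (all close to 2)
```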
Non-independent and identically distributed (non-IID) data distribution among clients is regarded as a key factor that degrades the performance of federated learning (FL). Several approaches to handle non-IID data, such as personalized FL and federated multi-task learning (FMTL), are of great interest to the research community. In this work, we first formulate the FMTL problem using Laplacian regularization to explicitly leverage the relationships among the clients' models for multi-task learning. We then introduce a new view of the FMTL problem, showing for the first time that the formulated FMTL problem can be used for conventional FL and personalized FL. We also propose two algorithms, FedU and dFedU, to solve the formulated FMTL problem in communication-centralized and decentralized schemes, respectively. Theoretically, we prove that the convergence rates of both algorithms achieve linear speedup for strongly convex objectives and sublinear speedup for nonconvex objectives. Experimentally, we show that our algorithms outperform the conventional algorithm FedAvg in FL settings, MOCHA in FMTL settings, as well as pFedMe and Per-FedAvg in personalized FL settings.
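A hedged sketch of a Laplacian-regularized FMTL update of the kind formulated above: each client descends on its local loss plus a term eta/2 * sum_{i,j} a_ij ||w_i - w_j||^2 that pulls related clients' models toward one another; the step sizes and the toy objective are illustrative:

```python
# Hedged sketch of one Laplacian-regularized FMTL round in the spirit of FedU/dFedU.
import numpy as np

def fmtl_round(W, local_grads, A, lr=0.1, eta=0.5):
    """W: client models, shape (n, dim); local_grads: same shape;
    A: symmetric task-relationship (adjacency) matrix, shape (n, n)."""
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian of the client graph
    return W - lr * (local_grads + eta * (L @ W))

# toy usage: two related clients (a_01 = 1) with different local optima
A = np.array([[0.0, 1.0], [1.0, 0.0]])
W = np.array([[0.0], [0.0]])
targets = np.array([[1.0], [-1.0]])
for _ in range(200):
    W = fmtl_round(W, W - targets, A)       # gradient of 0.5 * ||w_i - target_i||^2
print(W.ravel())   # personalized models pulled toward each other, between the targets
```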
As a special information carrier containing both structure and feature information, graphs are widely used in graph mining, e.g., with graph neural networks (GNNs). However, in some practical scenarios, graph data are stored separately by multiple distributed parties and may not be shared directly due to conflicts of interest. Federated graph neural networks have therefore been proposed to address such data silo problems while preserving the privacy of each party (or client). Nevertheless, different graph data distributions among the parties, known as statistical heterogeneity, may degrade the performance of naive federated learning algorithms such as FedAvg. In this paper, we propose FedEgo, a federated graph learning framework based on ego-graphs, to tackle the challenges above, in which each client trains its local model while also contributing to the training of a global model. FedEgo applies ego-graphs over the graphs to make full use of the structural information and utilizes Mixup to address privacy concerns. To deal with statistical heterogeneity, we integrate personalization into learning and propose an adaptive mixing-coefficient strategy that enables clients to achieve optimal personalization. Extensive experimental results and in-depth analysis demonstrate the effectiveness of FedEgo.
Federated Learning (FL) has become a key choice for distributed machine learning. Initially focused on centralized aggregation, recent works in FL have emphasized greater decentralization to adapt to the highly heterogeneous network edge. Among these, Hierarchical, Device-to-Device and Gossip Federated Learning (HFL, D2DFL \& GFL respectively) can be considered as foundational FL algorithms employing fundamental aggregation strategies. A number of FL algorithms were subsequently proposed employing multiple fundamental aggregation schemes jointly. Existing research, however, subjects the FL algorithms to varied conditions and gauges their performance mainly against Federated Averaging (FedAvg). This work consolidates the FL landscape and offers an objective analysis of the major FL algorithms through a comprehensive cross-evaluation for a wide range of operating conditions. In addition to the three foundational FL algorithms, this work also analyzes six derived algorithms. To enable a uniform assessment, a multi-FL framework named FLAGS: Federated Learning AlGorithms Simulation has been developed for rapid configuration of multiple FL algorithms. Our experiments indicate that fully decentralized FL algorithms achieve comparable accuracy under multiple operating conditions, including asynchronous aggregation and the presence of stragglers. Furthermore, decentralized FL can also operate in noisy environments and with a comparably higher local update rate. However, the impact of extremely skewed data distributions on decentralized FL is much more adverse than on centralized variants. The results indicate that it may not be necessary to restrict the devices to a single FL algorithm; rather, multi-FL nodes may operate with greater efficiency.
In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model: more accurate gradients allow them to use larger learning rates and optimize faster. We consider the setting in which all workers sample from the same dataset and communicate over a sparse graph (decentralized). In this setting, current theory fails to capture important aspects of real-world behavior. First, the 'spectral gap' of the communication graph is not predictive of its empirical performance in (deep) learning. Second, current theory does not explain that collaboration enables larger learning rates than training alone. In fact, it prescribes smaller learning rates, which decrease further as the graph grows larger, and fails to explain convergence on infinite graphs. This paper aims to paint an accurate picture of sparsely connected distributed optimization when workers share the same data distribution. We quantify how the graph topology influences convergence in a quadratic toy problem and provide theoretical results for general smooth and (strongly) convex objectives. Our theory matches empirical observations in deep learning and accurately describes the relative merits of different graph topologies.
Federated learning allows a set of distributed clients to train a common machine learning model on private data. The exchange of model updates is managed either by a central entity or in a decentralized way, e.g., by a blockchain. However, the strong generalization across all clients makes these approaches unsuited for non-independent and identically distributed (non-IID) data. We propose a unified approach to decentralization and personalization in federated learning that is based on a directed acyclic graph (DAG) of model updates. Instead of training a single global model, clients specialize on their local data while using model updates from other clients depending on the similarity of their respective data. This specialization implicitly emerges from the DAG-based communication and selection of model updates. Thus, we enable the evolution of specialized models, which focus on a subset of the data and therefore cover non-IID data better than federated learning in a blockchain-based setting. To the best of our knowledge, the proposed solution is the first to unite personalization and poisoning robustness in fully decentralized federated learning. Our evaluation shows that the specialization of models emerges directly from the DAG-based communication of model updates on three different datasets. Furthermore, we show stable model accuracy and less variance across clients compared to federated averaging.
With its capability to deal with graph data, which is widely found in practical applications, graph neural networks (GNNs) have attracted significant research attention in recent years. As societies become increasingly concerned with the need for data privacy protection, GNNs face the need to adapt to this new normal. Besides, as clients in Federated Learning (FL) may have relationships, more powerful tools are required to utilize such implicit information to boost performance. This has led to the rapid development of the emerging research field of federated graph neural networks (FedGNNs). This promising interdisciplinary field is highly challenging for interested researchers to grasp. The lack of an insightful survey on this topic further exacerbates the entry difficulty. In this paper, we bridge this gap by offering a comprehensive survey of this emerging field. We propose a 2-dimensional taxonomy of the FedGNNs literature: 1) the main taxonomy provides a clear perspective on the integration of GNNs and FL by analyzing how GNNs enhance FL training as well as how FL assists GNNs training, and 2) the auxiliary taxonomy provides a view on how FedGNNs deal with heterogeneity across FL clients. Through discussions of key ideas, challenges, and limitations of existing works, we envision future research directions that can help build more robust, explainable, efficient, fair, inductive, and comprehensive FedGNNs.
We study the problem of training personalized deep learning models in a decentralized peer-to-peer setting, with a focus on the setting where data distributions differ between clients and different clients have different local learning tasks. We study both covariate and label shift, and our contribution is an algorithm which, for each client, finds beneficial collaborations based on an estimate of the similarity of the local tasks. Our method does not rely on hyperparameters that are hard to estimate, such as the number of client clusters, but instead continuously adapts to the network topology using soft cluster assignments based on a novel adaptive gossip algorithm. We test the proposed method in a variety of settings where data is not independent and identically distributed among the clients. The experimental evaluation shows that, for this problem setting, the proposed method outperforms previous state-of-the-art algorithms and handles situations well where previous methods fail.
Machine learning models built on sensitive data hold real-world promise for advances in areas ranging from medical screening and disease outbreaks to agriculture, industry, defense science, and beyond. In many applications, learning participants would benefit from pooling their private datasets, training detailed machine learning models on the real data, and sharing the benefits of using these models. Because of existing privacy and security concerns, most parties avoid sharing sensitive data for training. Federated learning allows the parties to jointly train a machine learning algorithm on their combined data without any user revealing its local data to a central server. This collective privacy-preserving learning approach incurs significant communication during training. Most large-scale machine learning applications require decentralized learning over datasets generated at different devices and locations. Such datasets pose a fundamental obstacle to decentralized learning, because their varied contexts lead to significant differences in data distributions across devices and locations. Researchers have proposed several approaches to achieve data privacy in federated learning systems; however, challenges with non-homogeneous local data remain. The approach of this research is to select the nodes (users) that share their data in federated learning so as to improve accuracy, reduce training time, and improve convergence on the basis of a more balanced, independent data distribution. To this end, this study introduces DQRE-SCNet, a deep Q-reinforcement-learning ensemble combined with spectral clustering, to select a subset of devices in each communication round. Based on the results, it is demonstrated that the number of communication rounds required in federated learning can be reduced.
Federated edge learning (FEEL) has attracted much attention as a privacy-preserving paradigm that effectively incorporates the distributed data at the network edge to train deep learning models. However, the limited coverage of a single edge server results in an insufficient number of participating client nodes, which may impair the learning performance. In this paper, we investigate a novel framework of FEEL, namely semi-decentralized federated edge learning (SD-FEEL), in which multiple edge servers are employed to collectively coordinate a large number of client nodes. By exploiting the low-latency communication among edge servers for efficient model sharing, SD-FEEL can incorporate more training data while enjoying much lower latency compared with conventional federated learning. We detail the training algorithm of SD-FEEL with three main steps: local model update, intra-cluster model aggregation, and inter-cluster model aggregation. The convergence of this algorithm is proved on non-independent and identically distributed (non-IID) data, which also helps to reveal the effects of key parameters on the training efficiency and provides practical design guidelines. Meanwhile, the heterogeneity of edge devices may cause a straggler effect and deteriorate the convergence speed of SD-FEEL. To resolve this issue, we propose an asynchronous training algorithm with a staleness-aware aggregation scheme for SD-FEEL, whose convergence performance is also analyzed. Simulation results demonstrate the effectiveness and efficiency of the proposed algorithms for SD-FEEL and corroborate our analysis.
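A hedged sketch of one SD-FEEL round covering the intra-cluster and inter-cluster aggregation steps (the first step, local model update on each client, is omitted); the uniform averaging weights and the toy mixing matrix among edge servers are illustrative assumptions:

```python
# Hedged sketch: intra-cluster aggregation at each edge server, then inter-cluster
# aggregation over the low-latency edge-server network, then broadcast back to clients.
import numpy as np

def sd_feel_round(client_models, clusters, edge_adj):
    """client_models: shape (n_clients, dim); clusters: list of client-index lists,
    one per edge server; edge_adj: row-stochastic mixing matrix among edge servers."""
    # intra-cluster aggregation: each edge server averages its clients' models
    edge_models = np.stack([client_models[c].mean(axis=0) for c in clusters])
    # inter-cluster aggregation among neighboring edge servers
    edge_models = edge_adj @ edge_models
    # broadcast: every client in a cluster starts the next round from its edge model
    for k, c in enumerate(clusters):
        client_models[c] = edge_models[k]
    return client_models

# toy usage: 4 clients, 2 edge servers connected to each other
clients = np.array([[1.0], [3.0], [10.0], [12.0]])
clusters = [[0, 1], [2, 3]]
edge_adj = np.array([[0.5, 0.5], [0.5, 0.5]])
print(sd_feel_round(clients, clusters, edge_adj).ravel())   # all clients at 6.5
```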
The increasing size of data generated by smartphones and IoT devices motivated the development of Federated Learning (FL), a framework for on-device collaborative training of machine learning models. First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client, due to the inherent heterogeneity of local data distributions. Federated multi-task learning (MTL) approaches can learn personalized models by formulating an opportune penalized optimization problem. The penalization term can capture complex relations among personalized models, but eschews clear statistical assumptions about local data distributions. In this work, we propose to study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions. This assumption encompasses most of the existing personalized FL approaches and leads to federated EM-like algorithms for both client-server and fully decentralized settings. Moreover, it provides a principled way to serve personalized models to clients not seen at training time. The algorithms' convergence is analyzed through a novel federated surrogate optimization framework, which can be of general interest. Experimental results on FL benchmarks show that our approach provides models with higher accuracy and fairness than state-of-the-art methods.
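A hedged sketch of a federated EM-like round under the mixture assumption: clients compute responsibilities of the shared components for their data (E-step), and the components are re-estimated as responsibility-weighted averages of client-side updates (M-step); the exponential-of-loss responsibilities and the update form below are simplifications for illustration, not the paper's exact algorithm:

```python
# Hedged sketch of a federated EM-style round: each client's data is modeled as a
# mixture of M shared components.
import numpy as np

def federated_em_round(client_updates, component_losses, pi):
    """client_updates: (n_clients, M, dim) candidate updates of each component
    computed on each client's data; component_losses: (n_clients, M) per-component
    losses on each client's data; pi: (n_clients, M) current mixture weights."""
    # E-step: responsibilities of each component for each client's data
    q = pi * np.exp(-component_losses)
    q /= q.sum(axis=1, keepdims=True)
    # M-step: new mixture weights and responsibility-weighted component models
    new_pi = q
    weights = q / q.sum(axis=0, keepdims=True)            # normalize over clients
    new_models = np.einsum('nm,nmd->md', weights, client_updates)
    return new_models, new_pi

M, dim, n = 2, 3, 4
updates = np.random.randn(n, M, dim)
losses = np.abs(np.random.randn(n, M))
pi = np.full((n, M), 0.5)
models, pi = federated_em_round(updates, losses, pi)
print(models.shape, pi.sum(axis=1))    # (2, 3) and each client's weights sum to 1
```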