A key feature of federated learning (FL) is to preserve the data privacy of end users. However, there still exists potential privacy leakage in the gradients exchanged under FL. As a result, recent research often explores differential privacy (DP) approaches, which add noise to the computed results to address privacy concerns at low overhead but degrade model performance. In this paper, we strike a balance between data privacy and efficiency by utilizing the pervasive social connections between users. Specifically, we propose SCFL, a novel Social-aware Clustered Federated Learning scheme, in which mutually trusted individuals can freely form a social cluster and aggregate their raw model updates (e.g., gradients) inside each cluster before uploading them to the cloud for global aggregation. By mixing model updates within a social group, adversaries can only eavesdrop on the combined social-layer results, not on the private updates of individuals. We unfold the design of SCFL in three steps. \emph{i) Stable social cluster formation}. Considering users' heterogeneous training samples and data distributions, we formulate the optimal social cluster formation problem as a federation game and devise a fair revenue allocation mechanism to resist free-riders. \emph{ii) Differentiated trust-privacy mapping}. For clusters with low mutual trust, we design a customizable privacy preservation mechanism that adaptively sanitizes participants' model updates depending on social trust degrees. \emph{iii) Distributed convergence}. A distributed two-sided matching algorithm is devised to attain an optimized disjoint partition with Nash-stable convergence. Experiments on the Facebook network and the MNIST/CIFAR-10 datasets validate that SCFL can effectively enhance learning utility, improve user payoff, and enforce customizable privacy protection.
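The in-cluster mixing step can be sketched as follows, assuming simple averaging inside each cluster and size-weighted averaging at the cloud (function names and the averaging rule are our assumptions, not the paper's exact aggregation):

```python
import numpy as np

def mix_in_cluster(raw_updates):
    """Average raw model updates inside one social cluster.

    The cloud only ever sees this combined vector, never the
    individual members' updates (hypothetical sketch of SCFL mixing)."""
    return np.mean(raw_updates, axis=0)

def global_aggregate(cluster_results, cluster_sizes):
    """Weight each cluster's mixed result by its membership size."""
    w = np.asarray(cluster_sizes, dtype=float)
    w = w / w.sum()
    return sum(wi * r for wi, r in zip(w, cluster_results))

# Two clusters of 3 and 2 users with scalar "gradients".
c1 = mix_in_cluster([np.array([1.0]), np.array([2.0]), np.array([3.0])])
c2 = mix_in_cluster([np.array([10.0]), np.array([20.0])])
g = global_aggregate([c1, c2], [3, 2])
```

Because the global step weights each cluster mean by cluster size, the cloud recovers exactly the overall average update while never observing any individual contribution.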
Federated learning (FL) has become popular and shows great potential for training large-scale machine learning (ML) models without exposing the raw data of data owners. In FL, data owners train ML models on their local data and send only the model updates, rather than the raw data, to the model owner for aggregation. To improve the learning performance in terms of model accuracy and training completion time, it is essential to recruit sufficient participants. Meanwhile, data owners are rational and may be unwilling to participate in the collaborative learning process due to the resource consumption involved. To address these issues, various recent works aim to incentivize data owners to contribute their resources. In this paper, we provide a comprehensive review of the economic and game-theoretic approaches proposed in the literature to design schemes that stimulate data owners to participate in the FL training process. In particular, we first present the fundamentals and background of FL and the economic theories commonly used in incentive mechanism design. We then review the applications of game theory and economic approaches to incentive mechanisms for FL. Finally, we highlight some open issues and future research directions concerning incentive mechanism design for FL.
In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
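The FL workflow described here, local training on each device followed by update aggregation at the server, can be sketched in a few lines; the linear model, learning rate, and single-round structure below are illustrative assumptions, not from the survey:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=20):
    """Device-side training: gradient descent on local data.
    Only the updated weights w ever leave the device."""
    w = w.copy()
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def server_aggregate(local_ws, n_samples):
    """FedAvg-style aggregation weighted by local dataset size."""
    frac = np.asarray(n_samples, dtype=float) / sum(n_samples)
    return sum(f * lw for f, lw in zip(frac, local_ws))

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))       # private data, never uploaded
    devices.append((X, X @ true_w))

w_global = np.zeros(2)
local_ws = [local_update(w_global, X, y) for X, y in devices]
w_global = server_aggregate(local_ws, [len(y) for _, y in devices])
```

Even after one round, the aggregated model approaches the underlying target while the server only ever handles weight vectors, not data.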
Federated learning (FL) is a promising distributed framework for collaborative artificial intelligence model training while protecting user privacy. A bootstrapping component that has attracted significant research attention is the design of incentive mechanisms to stimulate user collaboration in FL. Most works adopt a broker-centric approach, helping the central operator attract participants and further acquire a well-trained model. Few works consider forging participant-centric collaboration among participants to pursue an FL model of their common interest, which induces dramatic differences in incentive mechanism design from the broker-centric FL. To coordinate selfish and heterogeneous participants, we propose a novel analytical framework for incentivizing effective and efficient collaboration in participant-centric FL. Specifically, we propose two novel game models for contribution-oblivious FL (COFL) and contribution-aware FL (CAFL), respectively, where the latter enforces a minimum contribution threshold mechanism. We further analyze the uniqueness and existence of the Nash equilibrium for both the COFL and CAFL games and design efficient algorithms to achieve the equilibrium solutions. Extensive performance evaluations show that there exists a free-riding phenomenon in COFL, which can be greatly alleviated by adopting the CAFL model with an optimized minimum threshold.
Federated learning (FL) has achieved great success as a privacy-preserving distributed training paradigm, where many edge devices collaboratively train a machine learning model by sharing the model updates instead of the raw data with a server. However, the heterogeneous computational and communication resources of edge devices give rise to stragglers that significantly decelerate the training process. To mitigate this issue, we propose a novel FL framework named stochastic coded federated learning (SCFL) that leverages coded computing techniques. In SCFL, before the training process starts, each edge device uploads a privacy-preserving coded dataset to the server, which is generated by adding Gaussian noise to the projected local dataset. During training, the server computes gradients on the global coded dataset to compensate for the missing model updates of the straggling devices. We design a gradient aggregation scheme to ensure that the aggregated model update is an unbiased estimate of the desired global update. Moreover, this aggregation scheme enables periodical model averaging to improve the training efficiency. We characterize the tradeoff between the convergence performance and privacy guarantee of SCFL. In particular, a noisier coded dataset provides stronger privacy protection for edge devices but results in learning performance degradation. We further develop a contract-based incentive mechanism to coordinate such a conflict. The simulation results show that SCFL learns a better model within the given time and achieves a better privacy-performance tradeoff than the baseline methods. In addition, the proposed incentive mechanism grants better training performance than the conventional Stackelberg game approach.
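A minimal sketch of the coded-dataset step described above: the local dataset is compressed with a random projection and perturbed with Gaussian noise before upload. The projection scaling and noise level here are our assumptions, not the paper's exact construction:

```python
import numpy as np

def encode_dataset(X, y, m, noise_std, rng):
    """Return a coded (randomly projected + Gaussian-noised) copy of (X, y).

    m is the number of coded rows uploaded to the server; the raw
    n-row dataset never leaves the device."""
    n = X.shape[0]
    G = rng.normal(scale=1.0 / np.sqrt(n), size=(m, n))  # random projection
    X_coded = G @ X + rng.normal(scale=noise_std, size=(m, X.shape[1]))
    y_coded = G @ y + rng.normal(scale=noise_std, size=m)
    return X_coded, y_coded

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))   # private local features
y = rng.normal(size=100)        # private local labels
X_coded, y_coded = encode_dataset(X, y, m=20, noise_std=0.5, rng=rng)
```

A larger `noise_std` strengthens the privacy guarantee but, as the abstract notes, degrades the gradients the server later computes from the coded data.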
Federated Learning (FL) has become a key choice for distributed machine learning. Initially focused on centralized aggregation, recent works in FL have emphasized greater decentralization to adapt to the highly heterogeneous network edge. Among these, Hierarchical, Device-to-Device and Gossip Federated Learning (HFL, D2DFL \& GFL respectively) can be considered as foundational FL algorithms employing fundamental aggregation strategies. A number of FL algorithms were subsequently proposed employing multiple fundamental aggregation schemes jointly. Existing research, however, subjects the FL algorithms to varied conditions and gauges the performance of these algorithms mainly against Federated Averaging (FedAvg). This work consolidates the FL landscape and offers an objective analysis of the major FL algorithms through a comprehensive cross-evaluation for a wide range of operating conditions. In addition to the three foundational FL algorithms, this work also analyzes six derived algorithms. To enable a uniform assessment, a multi-FL framework named FLAGS: Federated Learning AlGorithms Simulation has been developed for rapid configuration of multiple FL algorithms. Our experiments indicate that fully decentralized FL algorithms achieve comparable accuracy under multiple operating conditions, including asynchronous aggregation and the presence of stragglers. Furthermore, decentralized FL can also operate in noisy environments and with a comparably higher local update rate. However, the impact of extremely skewed data distributions on decentralized FL is much more adverse than on centralized variants. The results indicate that it may not be necessary to restrict the devices to a single FL algorithm; rather, multi-FL nodes may operate with greater efficiency.
Federated learning (FL) and split learning (SL) are two emerging collaborative learning methods that may greatly facilitate ubiquitous intelligence in the Internet of Things (IoT). Federated learning enables machine learning (ML) models trained locally on private data to be aggregated into a global model. Split learning allows different portions of an ML model to be collaboratively trained on different workers in a learning framework. Federated learning and split learning, each with unique advantages and respective limitations, may complement each other toward ubiquitous intelligence in the IoT. Therefore, the combination of federated learning and split learning has recently become an active research area attracting extensive interest. In this article, we review the latest developments in federated learning and split learning and present a survey of the state-of-the-art techniques for combining these two learning methods in edge-computing-based IoT environments. We also identify some open problems and discuss possible directions for future research in this area, with the hope of arousing the research community's further interest in this emerging field.
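The split-learning workflow mentioned above can be illustrated with a two-part model: the device runs the lower layers and ships only the cut-layer activation to the server, which runs the upper layers. The layer shapes and the ReLU cut layer are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # lower layers, kept on the device
W2 = rng.normal(size=(2, 4))   # upper layers, kept on the server

def client_forward(x):
    """Device-side lower model; only this activation leaves the device."""
    return np.maximum(W1 @ x, 0.0)

def server_forward(h):
    """Server-side upper model; the server never sees the raw input x."""
    return W2 @ h

x = rng.normal(size=8)   # private input, never transmitted
h = client_forward(x)    # cut-layer activation ("smashed data")
out = server_forward(h)
```

In training, gradients flow back across the same cut: the server returns the gradient with respect to `h`, and the device finishes backpropagation through its own layers.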
The space-air-ground integrated network (SAGIN), one of the key technologies for next-generation mobile communication systems, can facilitate data transmission for users all over the world, especially in some remote areas where vast amounts of informative data are collected by Internet of remote things (IoRT) devices to support various data-driven artificial intelligence (AI) services. However, training AI models centrally with the assistance of SAGIN faces the challenges of highly constrained network topology, inefficient data transmission, and privacy issues. To tackle these challenges, we first propose a novel topology-aware federated learning framework for the SAGIN, namely Olive Branch Learning (OBL). Specifically, the IoRT devices in the ground layer leverage their private data to perform model training locally, while the air nodes in the air layer and the ring-structured low earth orbit (LEO) satellite constellation in the space layer are in charge of model aggregation (synchronization) at different scales. To further enhance communication efficiency and inference performance of OBL, an efficient Communication and Non-IID-aware Air node-Satellite Assignment (CNASA) algorithm is designed by taking the data class distribution of the air nodes as well as their geographic locations into account. Furthermore, we extend our OBL framework and CNASA algorithm to adapt to more complex multi-orbit satellite networks. We analyze the convergence of our OBL framework and conclude that the CNASA algorithm contributes to the fast convergence of the global model. Extensive experiments based on realistic datasets corroborate the superior performance of our algorithm over the benchmark policies.
As machine learning (ML) models become increasingly complex, one of the central challenges is their deployment at scale, such that companies and organizations can create value through artificial intelligence (AI). An emerging paradigm in ML is a federated approach, in which the learning model is partially delivered to a set of heterogeneous agents, allowing the agents to train the model on their own data. However, the problem of valuating the model, as well as the incentive problems around collaborative training and trading of data/models, has received limited treatment in the literature. This paper proposes a new ecosystem for trading ML models on a trusted blockchain-based network. A buyer can acquire a model of interest from the ML marketplace, and interested sellers spend local computation on their data to enhance that model's quality. In doing so, the proportional relation between the quality of the local data and that of the trained model is considered, and the data used in training during the selling process is valued via a distributed Shapley value (DSV). Meanwhile, the trustworthiness of the entire trading process is provided by distributed ledger technology (DLT). Extensive experimental evaluation of the proposed approach shows competitive runtime performance, with a 15% decrease in the cost of incentivizing participants.
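Shapley-value data valuation, which DSV estimates in a distributed fashion, can be illustrated by exact enumeration on a toy coalition. The additive utility function below is a stand-in for measured model quality, and all names are ours:

```python
import itertools
import math

def shapley_values(players, utility):
    """Exact Shapley value by enumerating all coalitions (small n only)."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(n):
            for S in itertools.combinations(others, r):
                weight = (math.factorial(r) * math.factorial(n - r - 1)
                          / math.factorial(n))
                # Weighted marginal contribution of p to coalition S.
                phi[p] += weight * (utility(set(S) | {p}) - utility(set(S)))
    return phi

# Additive toy utility: each seller's data contributes a fixed quality gain.
gains = {"a": 3.0, "b": 1.0, "c": 2.0}
phi = shapley_values(list(gains), lambda S: sum(gains[q] for q in S))
```

For an additive utility the Shapley value of each seller equals its own gain, and the values always sum to the utility of the full coalition, which is the fairness property that motivates using it for revenue allocation.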
Federated learning (FL) is an efficient distributed machine learning paradigm that employs private datasets in a privacy-preserving manner. The main challenges of FL are that end devices usually possess diverse computation and communication capabilities and that their training data are not independent and identically distributed (non-IID). Due to the limited communication bandwidth and the unstable availability of such devices in a mobile network, only a subset of end devices (also referred to as participants or clients) can be selected in each round. Therefore, it is of paramount importance to use an efficient participant selection scheme to maximize the performance of FL, including the accuracy of the final model and the training time. In this paper, we provide a review of participant selection techniques for FL. First, we introduce FL and highlight the main challenges during participant selection. Then, we review the existing studies and categorize them based on their solutions. Finally, based on our analysis of the state of the art in this topic area, we provide some future directions for participant selection in FL.
Federated learning seeks to address the issue of isolated data islands by having clients disclose only their local training models. However, it has been demonstrated that private information can still be inferred by analyzing local model parameters, such as deep neural network weights. Recently, differential privacy has been applied to federated learning to protect data privacy, but the added noise may significantly degrade the learning performance. Typically, in previous work, training parameters were clipped equally and noise was added uniformly; the heterogeneity and convergence of the training parameters were simply not considered. In this paper, we propose a differentially private scheme for federated learning with adaptive noise (Adap DP-FL). Specifically, due to gradient heterogeneity, we conduct adaptive gradient clipping for different clients and different rounds; due to gradient convergence, we add correspondingly decreasing noise. Extensive experiments on real-world datasets demonstrate that our Adap DP-FL significantly outperforms previous methods.
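A sketch of the two ideas named above, under assumed update rules (the paper's exact formulas may differ): each client's clip bound tracks a moving average of its own gradient norms, and the Gaussian noise scale decays over rounds as gradients converge.

```python
import numpy as np

def adaptive_clip_bound(prev_bound, grad_norm, alpha=0.9):
    """Per-client clip bound as an exponential moving average of
    observed gradient norms (assumed adaptation rule)."""
    return alpha * prev_bound + (1 - alpha) * grad_norm

def privatize(grad, clip, sigma0, round_t, decay=0.05, rng=None):
    """Clip the gradient to `clip`, then add Gaussian noise whose
    scale decreases with the round index (assumed decay schedule)."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip / max(norm, 1e-12))
    sigma = sigma0 / (1.0 + decay * round_t)   # decreasing noise
    return clipped + rng.normal(scale=sigma * clip, size=grad.shape)

rng = np.random.default_rng(0)
clip = adaptive_clip_bound(prev_bound=1.0, grad_norm=2.0)
noisy_grad = privatize(np.ones(10), clip=clip, sigma0=1.0, round_t=5, rng=rng)
```

Per-client bounds avoid over-clipping clients with naturally large gradients, while the decaying schedule injects less noise precisely when gradients are small and most easily drowned out.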
The unprecedented surge of data volumes in wireless networks empowered with artificial intelligence (AI) opens up new horizons for providing ubiquitous data-driven intelligent services. Traditional cloud-centric machine learning (ML)-based services are implemented by centrally collecting datasets and training models. However, this conventional training technique involves two challenges: (i) high communication and energy costs due to increased data communication, and (ii) threats to data privacy by allowing untrusted parties to exploit this information. Recently, in light of these limitations, a new emerging technique, namely federated learning (FL), has arisen to bring ML to the edge of wireless networks. FL can extract the benefits of data silos by training a global model in a distributed manner, orchestrated by an FL server. FL exploits both decentralized datasets and the computing resources of participating clients to develop a generalized ML model without compromising data privacy. In this article, we present a comprehensive survey of the fundamentals and enabling technologies of FL. Moreover, an extensive study is presented detailing various applications of FL in wireless networks and highlighting their challenges and limitations. The efficacy of FL is further explored with its emerging prospects in beyond fifth-generation (B5G) and sixth-generation (6G) communication systems. The purpose of this survey is to provide an overview of the state-of-the-art applications of FL in key wireless technologies, which will serve as a foundation for establishing a firm understanding of the topic. Lastly, we offer a road forward for future research directions.
Modern defenses against cyberattacks increasingly rely on proactive approaches, e.g., predicting the adversary's next actions based on past events. Building accurate prediction models requires knowledge from many organizations; alas, this entails disclosing sensitive information, such as network structures, security postures, and policies, which is often undesirable or outright impossible. In this paper, we explore the feasibility of using federated learning (FL) to predict future security events. To this end, we introduce Cerberus, a system enabling the collaborative training of recurrent neural network (RNN) models for participating organizations. The intuition is that FL could potentially offer a middle ground between the non-private approach, where training data is pooled at a central server, and the lower-utility alternative of training only local models. We instantiate Cerberus on a dataset obtained from a major security company's intrusion prevention product and evaluate it with respect to utility, robustness, and privacy, as well as how participants contribute to and benefit from the system. Overall, our work sheds light on both the positive aspects and the challenges of using FL for this task and paves the way for deploying federated approaches to predictive security.
Recent advances in communication technologies and the Internet, together with artificial intelligence (AI), have enabled smart healthcare. Traditionally, AI techniques require centralized data collection and processing, which may be infeasible in realistic healthcare settings due to the high scalability of modern healthcare networks and growing data privacy concerns. As an emerging distributed collaborative AI paradigm, federated learning (FL), which coordinates multiple clients (e.g., hospitals) to perform AI training without sharing raw data, is particularly attractive for smart healthcare. Accordingly, we provide a comprehensive survey on the use of FL in smart healthcare. First, we present the recent advances, motivations, and requirements of using FL in smart healthcare. Recent FL designs for smart healthcare are then discussed, ranging from resource-aware FL and secure and privacy-aware FL to incentive FL and personalized FL. Subsequently, we provide a state-of-the-art review of the emerging applications of FL in key healthcare domains, including health data management, remote health monitoring, medical imaging, and COVID-19 detection. Several recent FL-based smart healthcare projects are analyzed, and the key lessons learned from the survey are highlighted. Finally, we discuss interesting research challenges and possible directions for future research in FL-based smart healthcare.
Nowadays, information technology is developing rapidly. In the era of big data, the privacy of personal information has become an ever more prominent concern. The main challenge is to find a way to ensure that sensitive personal information is not disclosed while data are published and analyzed. Centralized differential privacy is built on the assumption of a trusted third-party data curator; however, this assumption does not always hold in reality. As a new privacy-preserving model, local differential privacy offers a relatively strong privacy guarantee. Although federated learning is a comparatively privacy-preserving approach to distributed learning, it still introduces various privacy concerns. To avoid privacy threats and reduce communication costs, we propose integrating federated learning and local differential privacy with momentum gradient descent to improve the performance of machine learning models.
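A minimal sketch of combining local differential privacy with momentum gradient descent as proposed above: the gradient is clipped and noised on the device before it is shared (here with an L1 clip and Laplace noise, an assumed calibration, not necessarily the authors' mechanism), and the noised gradient feeds a standard momentum update.

```python
import numpy as np

def ldp_gradient(grad, clip, epsilon, rng):
    """Clip to an L1 bound, then add Laplace noise with scale clip/epsilon
    on the device, so the raw gradient never leaves it."""
    g = grad * min(1.0, clip / max(np.abs(grad).sum(), 1e-12))
    return g + rng.laplace(scale=clip / epsilon, size=grad.shape)

def momentum_step(w, v, grad, lr=0.1, beta=0.9):
    """Momentum gradient descent on the privatized gradient."""
    v = beta * v + grad
    return w - lr * v, v

rng = np.random.default_rng(0)
# Very large epsilon -> nearly no noise, used here only to show the flow.
g = ldp_gradient(np.ones(4) * 10.0, clip=1.0, epsilon=1e12, rng=rng)
w, v = momentum_step(np.zeros(4), np.zeros(4), g)
```

The momentum buffer partially averages the injected noise across steps, which is one intuition for why momentum can help the utility of noisy LDP training.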
Federated learning (FL) is a collaborative machine learning framework that requires different clients (e.g., Internet of Things devices) to participate in the machine learning model training process by training and uploading their local models to an FL server in each global iteration. Upon receiving the local models from all the clients, the FL server generates a global model by aggregating the received local models. This traditional FL process may suffer from the straggler problem in heterogeneous client settings, where the FL server has to wait for slow clients to upload their local models in each global iteration, thus increasing the overall training time. One of the solutions is to set up a deadline and only the clients that can upload their local models before the deadline would be selected in the FL process. This solution may lead to a slow convergence rate and global model overfitting issues due to the limited client selection. In this paper, we propose the Latency awarE Semi-synchronous client Selection and mOdel aggregation for federated learNing (LESSON) method that allows all the clients to participate in the whole FL process but with different frequencies. That is, faster clients would be scheduled to upload their models more frequently than slow clients, thus resolving the straggler problem and accelerating the convergence speed, while avoiding model overfitting. Also, LESSON is capable of adjusting the tradeoff between the model accuracy and convergence rate by varying the deadline. Extensive simulations have been conducted to compare the performance of LESSON with the other two baseline methods, i.e., FedAvg and FedCS. The simulation results demonstrate that LESSON achieves faster convergence speed than FedAvg and FedCS, and higher model accuracy than FedCS.
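The frequency-based scheduling idea can be sketched as follows; the rule of uploading every ceil(latency/deadline) global rounds is our reading of the semi-synchronous design, not necessarily LESSON's exact scheduler:

```python
import math

def upload_period(latency, deadline):
    """Number of global rounds between uploads for a given client:
    clients slower than the deadline upload proportionally less often."""
    return max(1, math.ceil(latency / deadline))

def participants(round_idx, latencies, deadline):
    """Clients scheduled to upload their local models in this round."""
    return [i for i, t in enumerate(latencies)
            if round_idx % upload_period(t, deadline) == 0]

# Three clients with round latencies of 5, 12 and 30 time units
# against a deadline of 10: periods are 1, 2 and 3 rounds.
lat = [5, 12, 30]
schedule = [participants(r, lat, deadline=10) for r in range(4)]
```

Shrinking the deadline pushes slow clients to lower frequencies (favoring fast convergence), while enlarging it moves the system back toward fully synchronous aggregation, which is the accuracy/convergence tradeoff knob the abstract describes.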
Recent advances in federated learning (FL) have brought large-scale machine learning opportunities to massively distributed clients with performance and data privacy guarantees. However, most current works focus only on the interest of the central controller in FL and neglect the interests of the clients. This may result in unfairness, which discourages clients from actively participating in the learning process and damages the sustainability of the entire FL system. Therefore, the topic of ensuring fairness in FL has attracted a great deal of research interest. In recent years, diverse fairness-aware FL (FAFL) approaches have been proposed in an effort to achieve fairness in FL from different viewpoints. However, there is no comprehensive survey that helps readers gain an in-depth understanding of this interdisciplinary field. This paper aims to provide such a survey. By examining the fundamental and simplifying assumptions adopted by the existing literature in this field, we propose a taxonomy of FAFL approaches covering the major steps of FL, including client selection, optimization, contribution evaluation, and incentive distribution. In addition, we discuss the main metrics for experimentally evaluating the performance of FAFL approaches and suggest some promising directions for future research.
In this work, we propose a novel framework to address the straggling and privacy issues of federated learning (FL)-based mobile application services, taking into account the limited computing/communication resources at the mobile users (MUs) and the mobile application provider (MAP), the privacy costs of the MUs in contributing data to the MAP, and the rationality and incentive competition among the MUs. In particular, the MAP first determines a set of the best MUs for the FL process based on the MUs' information/features. To mitigate the privacy-aware straggling problem, each selected MU can encrypt part of its local data and upload the encrypted data to the MAP for an encrypted training process, in addition to the local training process. To that end, each selected MU can propose a contract to the MAP based on its expected local training data and privacy-protected encrypted data. To find the optimal contracts that maximize the utility of the MAP and all participating MUs while maintaining high learning quality for the whole system, we first develop a multi-principal one-agent contract-based problem leveraging FL-based utility functions. These utility functions account for the MUs' privacy costs, the MAP's limited computing resources, and the asymmetric information between the MAP and the MUs. We then transform the problem into an equivalent low-complexity problem and develop a lightweight iterative algorithm to efficiently find the optimal solutions. Experiments with a real-world dataset show that our framework can speed up training time by up to 49%, improve prediction accuracy by up to 4.6 times, and enhance the social welfare of the network, i.e., the total utility of all participating entities, by up to 114% compared with baseline methods under privacy cost consideration.
The advent of Federated Learning (FL) has ignited a new paradigm for parallel and confidential decentralized Machine Learning (ML) with the potential of utilizing the computational power of a vast number of IoT, mobile and edge devices without data leaving the respective device, ensuring privacy by design. Yet, in order to scale this new paradigm beyond small groups of already entrusted entities towards mass adoption, the Federated Learning Framework (FLF) has to become (i) truly decentralized and (ii) participants have to be incentivized. This is the first systematic literature review analyzing holistic FLFs in the domain of both decentralized and incentivized federated learning. 422 publications were retrieved by querying 12 major scientific databases. Finally, 40 articles remained after a systematic review and filtering process for in-depth examination. Although having massive potential to direct the future of a more distributed and secure AI, none of the analyzed FLFs is production-ready. The approaches vary heavily in terms of use cases, system design, solved issues, and thoroughness. We are the first to provide a systematic approach to classify and quantify differences between FLFs, exposing the limitations of current works and deriving future directions for research in this novel domain.
Federated learning (FL) enables distributed training by learners using local data, thereby enhancing privacy and reducing communication. However, it presents numerous challenges relating to the heterogeneity of data distributions, device capabilities, and participant availability as deployments scale, which can affect model convergence and bias. Existing FL schemes use random participant selection to improve fairness; however, this can result in inefficient use of resources and lower-quality training. In this work, we systematically address the question of resource efficiency in FL, showing the benefits of intelligent participant selection and the incorporation of updates from straggling participants. We demonstrate how these factors enable resource efficiency while also improving trained model quality.