In recent years, topic modeling has emerged as a powerful technique for organizing and summarizing large collections of documents and for searching for particular patterns within them. However, privacy concerns arise when data from different sources must be cross-analyzed. Federated topic modeling solves this issue by allowing multiple parties to jointly train a topic model without sharing their data. While several federated approximations of classical topic models exist, no research has been carried out on their application to neural topic models. To fill this gap, we propose and analyze a federated implementation based on state-of-the-art neural topic modeling implementations, showing its benefits when the topics are diverse across the nodes' documents and a joint model needs to be built. Our approach is, by construction, equivalent to the centralized approach both in theory and in practice, yet it preserves the privacy of the nodes.
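As a hedged illustration of the equivalence claim, the sketch below (an assumption about the general mechanism, not the paper's actual implementation) shows why aggregating per-node gradients weighted by document counts reproduces the gradient that centralized training would compute on the pooled corpus, here for a simple squared-error objective.

```python
# Minimal sketch: data-size-weighted averaging of per-node gradients equals
# the gradient computed on the pooled data, for losses that are sample means.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)                      # shared model parameters

def grad_mse(X, y, w):
    """Gradient of the mean-squared error at w."""
    return 2 * X.T @ (X @ w - y) / len(y)

# Three nodes with private data of different sizes.
nodes = [(rng.normal(size=(n, 5)), rng.normal(size=n)) for n in (30, 50, 20)]

# Federated step: each node computes a local gradient; only gradients travel.
local_grads = [grad_mse(X, y, w) for X, y in nodes]
sizes = np.array([len(y) for _, y in nodes])
fed_grad = np.average(local_grads, axis=0, weights=sizes)

# Centralized step: the same gradient on the pooled (non-private) data.
X_all = np.vstack([X for X, _ in nodes])
y_all = np.concatenate([y for _, y in nodes])
central_grad = grad_mse(X_all, y_all, w)

assert np.allclose(fed_grad, central_grad)  # equivalence up to float error
```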
In the medical field, multi-center collaborations are often sought to yield more generalizable findings by leveraging the heterogeneity of patient and clinical data. However, recent privacy regulations hinder the possibility of sharing data and, consequently, of building machine learning-based solutions to support diagnosis and prognosis. Federated learning (FL) aims at sidestepping this limitation by bringing AI-based solutions to the data owners, so that only local AI models, or parts of them, need to be shared and aggregated. However, most existing federated learning solutions are still in their infancy and show several shortcomings, from the lack of reliable and effective aggregation schemes able to retain the locally learned knowledge, to weak privacy preservation, since actual data can be reconstructed from model updates. Furthermore, most of these approaches, especially those dealing with medical data, rely on a centralized distributed learning strategy that poses robustness, scalability and trust issues. In this paper, we propose a decentralized distributed approach that leverages concepts from experience replay and generative adversarial research, effectively integrating features from local nodes and providing models able to generalize across multiple datasets while maintaining privacy. The proposed approach is tested on two tasks, tuberculosis and melanoma classification, using multiple datasets in order to simulate realistic non-i.i.d. data scenarios. The results show that our approach achieves performance comparable to both standard (non-federated) learning and existing federated methods, which, given the privacy it preserves, makes it the more favorable option.
Federated learning is a data-decentralization, privacy-preserving technique for performing machine or deep learning in a secure way. In this paper, we present theoretical aspects of federated learning together with a use case in which the number of clients varies. Specifically, a medical image analysis use case is presented, using chest X-ray images obtained from an open data repository. In addition to the privacy-related advantages, improvements in prediction (in terms of accuracy and area under the curve) and reductions in execution time with respect to the centralized approach are studied. Different clients are simulated from the training data and selected in an unbalanced manner, i.e., they do not all hold the same amount of data. Results for between three and ten clients are compared with the centralized case. Intermittent clients are then analyzed under two approaches, since, as in a real scenario, some clients may leave the training and new ones may join it. The evolution of the results as a function of the number of clients among which the original data is divided is reported in terms of accuracy, area under the curve and execution time. Finally, improvements and future work in this field are proposed.
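The shard-size imbalance described above could be simulated along the following lines; this is a hypothetical sketch (the function name and the Dirichlet-based sizing are assumptions, not the paper's code) of splitting one training set among a configurable number of clients that hold unequal amounts of data.

```python
# Split a dataset's indices among clients with deliberately unequal sizes.
import numpy as np

def unbalanced_split(num_samples, num_clients, concentration=0.5, seed=0):
    """Return per-client index arrays with unequal sizes (Dirichlet shares)."""
    rng = np.random.default_rng(seed)
    shares = rng.dirichlet(np.full(num_clients, concentration))
    counts = np.maximum(1, (shares * num_samples).astype(int))
    idx = rng.permutation(num_samples)[: counts.sum()]
    return np.split(idx, np.cumsum(counts)[:-1])

clients = unbalanced_split(num_samples=10_000, num_clients=10)
print([len(c) for c in clients])   # very different shard sizes per client
```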
Creating high-performance generalizable deep neural networks for phytoplankton monitoring requires utilizing large-scale data coming from diverse global water sources. A major challenge to training such networks lies in data privacy, where data collected at different facilities are often restricted from being transferred to a centralized location. A promising approach to overcome this challenge is federated learning, where training is done at site level on local data, and only the model parameters are exchanged over the network to generate a global model. In this study, we explore the feasibility of leveraging federated learning for privacy-preserving training of deep neural networks for phytoplankton classification. More specifically, we simulate two different federated learning frameworks, federated learning (FL) and mutually exclusive FL (ME-FL), and compare their performance to a traditional centralized learning (CL) framework. Experimental results from this study demonstrate the feasibility and potential of federated learning for phytoplankton monitoring.
Federated learning achieves joint training of deep models by connecting decentralized data sources, which can significantly mitigate the risk of privacy leakage. However, in the more general case, the label distributions differ across clients, a setting known as ``label distribution skew''. Directly applying conventional federated learning without accounting for the label distribution skew issue significantly hurts the performance of the global model. To this end, we propose a novel federated learning method, named FedMGD, to alleviate the performance degradation caused by label distribution skew. It introduces a global Generative Adversarial Network to model the global data distribution without access to local datasets, so the global model can be trained using global distributional information without privacy leakage. Experimental results demonstrate that our proposed method significantly outperforms the state-of-the-art on several public benchmarks. Code is available at \url{https://github.com/Sheng-T/FedMGD}.
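For context, label distribution skew is commonly simulated with a Dirichlet partition of each class across clients; the sketch below illustrates that standard protocol (an assumption about the experimental setup in general, not code from FedMGD).

```python
# Dirichlet label-skew partition: smaller alpha -> more skewed label histograms.
import numpy as np

def label_skew_split(labels, num_clients, alpha=0.3, seed=0):
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx_c = rng.permutation(np.where(labels == c)[0])
        proportions = rng.dirichlet(np.full(num_clients, alpha))
        cuts = (np.cumsum(proportions)[:-1] * len(idx_c)).astype(int)
        for cid, chunk in enumerate(np.split(idx_c, cuts)):
            client_indices[cid].extend(chunk.tolist())
    return [np.array(ix) for ix in client_indices]

labels = np.random.default_rng(1).integers(0, 10, size=5000)
shards = label_skew_split(labels, num_clients=5)   # per-client sample indices
```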
Federated learning (FL) has been proposed as a privacy-preserving approach in distributed machine learning. A federated learning architecture consists of a central server and a number of clients that have access to private, potentially sensitive data. Clients are able to keep their data in their local machines and only share their locally trained model's parameters with a central server that manages the collaborative learning process. FL has delivered promising results in real-life scenarios, such as healthcare, energy, and finance. However, when the number of participating clients is large, the overhead of managing the clients slows down the learning. Thus, client selection has been introduced as a strategy to limit the number of communicating parties at every step of the process. Since the early na\"{i}ve random selection of clients, several client selection methods have been proposed in the literature. Unfortunately, given that this is an emergent field, there is a lack of a taxonomy of client selection methods, making it hard to compare approaches. In this paper, we propose a taxonomy of client selection in Federated Learning that enables us to shed light on current progress in the field and identify potential areas of future research in this promising area of machine learning.
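To make the baseline concrete, the snippet below sketches the naive uniform random client selection that the survey takes as its starting point; the fraction-per-round policy is in the spirit of FedAvg's client sampling and is an illustrative assumption rather than any specific method from the taxonomy.

```python
# Naive baseline: sample a fixed fraction of clients uniformly at each round.
import random

def select_clients(all_clients, fraction=0.1, seed=None):
    """Uniformly sample max(1, fraction * N) clients for this round."""
    k = max(1, int(fraction * len(all_clients)))
    return random.Random(seed).sample(all_clients, k)

clients = [f"client_{i}" for i in range(100)]
for rnd in range(3):
    participants = select_clients(clients, fraction=0.1, seed=rnd)
    print(f"round {rnd}: {participants[:3]} ...")  # broadcast, train, aggregate
```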
Federated learning (FL) and split learning (SL) are two emerging collaborative learning methods that may greatly facilitate ubiquitous intelligence in the Internet of Things (IoT). Federated learning enables machine learning (ML) models trained locally on private data to be aggregated into a global model. Split learning allows different portions of an ML model to be collaboratively trained on different workers in a learning framework. Federated learning and split learning, each with unique advantages and respective limitations, may complement each other toward ubiquitous intelligence in the IoT. Therefore, the combination of federated learning and split learning has recently become an active research area attracting extensive interest. In this article, we review the latest developments in federated learning and split learning and present a survey of the state-of-the-art techniques for combining these two learning methods in edge-computing-based IoT environments. We also identify some open problems and discuss possible directions for future research in this area, with the hope of further arousing the research community's interest in this emerging field.
This work investigates the possibilities of federated learning for IoT malware detection and studies the security issues inherent to this new learning paradigm. In this context, a framework that uses federated learning to detect malware affecting IoT devices is presented. N-BaIoT, a dataset modeling the network traffic of several real IoT devices affected by malware, has been used to evaluate the proposed framework. Supervised and unsupervised federated models (multi-layer perceptron and autoencoder), able to detect malware affecting seen and unseen IoT devices, have been trained and evaluated. In addition, their performance has been compared with two traditional approaches. The first allows each participant to train a model locally using only its own data, while the second consists of making the participants share their data with a central entity in charge of training a global model. This comparison shows that the use of more diverse and larger data, as done in the federated and centralized approaches, has a considerable positive impact on model performance. Moreover, the federated models, while preserving the participants' privacy, achieve results similar to the centralized ones. As an additional contribution, and to measure the robustness of the federated approach, an adversarial setup with several malicious participants poisoning the federated model has been considered. The baseline model-aggregation step of averaging, used in most federated learning algorithms, is shown to be highly vulnerable to different attacks, even with a single adversary. Therefore, the performance of other model-aggregation functions acting as countermeasures has been evaluated under the same attack scenarios. These functions provide a significant improvement against malicious participants, but more efforts are still needed to make federated approaches robust.
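To illustrate the vulnerability of plain averaging that the adversarial experiments expose, the toy example below (not the paper's exact setup) contrasts mean aggregation with a coordinate-wise median, one family of robust aggregation functions used as a countermeasure.

```python
# One poisoned update drags the mean arbitrarily far, while a coordinate-wise
# median bounds the influence of a single adversary.
import numpy as np

honest = [np.array([0.11, -0.20, 0.05]),
          np.array([0.09, -0.18, 0.04]),
          np.array([0.10, -0.22, 0.06])]
poisoned = np.array([50.0, 50.0, 50.0])      # adversarial model update
updates = honest + [poisoned]

mean_agg = np.mean(updates, axis=0)          # dragged toward the attacker
median_agg = np.median(updates, axis=0)      # stays near the honest updates
print(mean_agg, median_agg)
```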
The explosion of computing-device deployments experienced in recent years, fueled by technologies such as the Internet of Things (IoT) and 5G, has led to a global scenario with increasing cybersecurity risks and threats. Among them, device spoofing and impersonation cyberattacks stand out due to their impact and the typically low complexity required to launch them. To solve this issue, several solutions have emerged to identify device models and types based on the combination of behavioral fingerprinting and machine/deep learning (ML/DL) techniques. However, these solutions are not suitable for scenarios where data privacy and protection are a must, since they require data centralization for processing. In this context, newer approaches such as federated learning (FL) have not been fully explored yet, especially when malicious clients are present in the scenario. The present work analyzes and compares the device-model identification performance of a centralized DL approach with that of a federated one, using execution-time-based events. For experimental purposes, a dataset containing execution-time features of 55 Raspberry Pis belonging to four different models has been collected and published. Using this dataset, the proposed solution achieves an accuracy of 0.9999 in both settings, centralized and federated, showing no performance degradation while preserving data privacy. Subsequently, the impact of a label-flipping attack during federated model training is evaluated, using several aggregation mechanisms as countermeasures. Zeno and coordinate-wise median aggregation show the best performance, although their performance degrades considerably when the proportion of fully malicious clients (with all training samples poisoned) grows beyond 50%.
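As a hypothetical illustration of the label-flipping attack evaluated here, a fully malicious participant can relabel its local samples before training, as sketched below (the function and variable names are illustrative assumptions, not the paper's code).

```python
# Label flipping: replace (a fraction of) local labels with a different class.
import numpy as np

def flip_labels(y, num_classes, fraction=1.0, seed=0):
    """Replace a fraction of labels with a different, randomly chosen class."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    victims = rng.random(len(y)) < fraction
    shift = rng.integers(1, num_classes, size=victims.sum())
    y[victims] = (y[victims] + shift) % num_classes
    return y

y_local = np.random.default_rng(2).integers(0, 4, size=1000)  # 4 device models
y_poisoned = flip_labels(y_local, num_classes=4, fraction=1.0)
print((y_local != y_poisoned).mean())        # share of poisoned training labels
```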
Motivated by the heterogeneous nature of the devices participating in large-scale federated learning (FL) optimization, we focus on an asynchronous, server-less FL solution empowered by blockchain (BC) technology. In contrast to most adopted FL approaches, which assume synchronous operation, we advocate an asynchronous method whereby model aggregation is done as clients submit their local updates. The asynchronous setting fits well with the federated optimization idea in practical large-scale settings with heterogeneous clients, and may therefore lead to higher efficiency in terms of communication overhead and idle periods. To evaluate the learning completion delay of BC-enabled FL, we provide an analytical model based on batch-service queueing theory. Furthermore, we provide simulation results to assess the performance of both the synchronous and asynchronous mechanisms. Important aspects involved in BC-enabled FL, such as network size, link capacity, and user requirements, are considered and analyzed. As our results show, the synchronous setting leads to higher prediction accuracy than the asynchronous case. Nevertheless, asynchronous federated optimization provides much lower latency in many cases, thus becoming an appealing FL solution when dealing with large datasets, tough timing constraints (e.g., near-real-time applications), or highly varying training data.
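For intuition, one widely used asynchronous aggregation rule weights each incoming client model by a factor that decays with its staleness, as in the sketch below (a generic FedAsync-style rule under stated assumptions, not the blockchain-enabled pipeline analyzed in the paper).

```python
# Asynchronous mixing: blend each update into the global model as it arrives,
# down-weighting updates computed against an older version of the model.
import numpy as np

def async_mix(global_w, client_w, staleness, eta=0.5):
    """Blend a client model into the global one; older updates weigh less."""
    alpha = eta / (1.0 + staleness)
    return (1 - alpha) * global_w + alpha * client_w

global_w = np.zeros(4)
arrivals = [(np.array([1.0, 1.0, 1.0, 1.0]), 0),   # (client model, delay in rounds)
            (np.array([0.8, 1.2, 0.9, 1.1]), 3)]
for client_w, staleness in arrivals:
    global_w = async_mix(global_w, client_w, staleness)
print(global_w)
```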
With their capability to deal with graph data, which is widely found in practical applications, graph neural networks (GNNs) have attracted significant research attention in recent years. As societies become increasingly concerned with the need for data privacy protection, GNNs face the need to adapt to this new normal. Besides, as clients in Federated Learning (FL) may have relationships with one another, more powerful tools are required to utilize such implicit information to boost performance. This has led to the rapid development of the emerging research field of federated graph neural networks (FedGNNs). This promising interdisciplinary field is highly challenging for interested researchers to grasp, and the lack of an insightful survey on the topic further exacerbates the entry difficulty. In this paper, we bridge this gap by offering a comprehensive survey of this emerging field. We propose a 2-dimensional taxonomy of the FedGNNs literature: 1) the main taxonomy provides a clear perspective on the integration of GNNs and FL by analyzing how GNNs enhance FL training as well as how FL assists GNN training, and 2) the auxiliary taxonomy provides a view on how FedGNNs deal with heterogeneity across FL clients. Through discussions of key ideas, challenges, and limitations of existing works, we envision future research directions that can help build more robust, explainable, efficient, fair, inductive, and comprehensive FedGNNs.
Federated Learning (FL) has become a key choice for distributed machine learning. Initially focused on centralized aggregation, recent works in FL have emphasized greater decentralization to adapt to the highly heterogeneous network edge. Among these, Hierarchical, Device-to-Device and Gossip Federated Learning (HFL, D2DFL \& GFL respectively) can be considered foundational FL algorithms employing fundamental aggregation strategies. A number of FL algorithms have subsequently been proposed that employ multiple fundamental aggregation schemes jointly. Existing research, however, subjects the FL algorithms to varied conditions and gauges their performance mainly against Federated Averaging (FedAvg). This work consolidates the FL landscape and offers an objective analysis of the major FL algorithms through a comprehensive cross-evaluation over a wide range of operating conditions. In addition to the three foundational FL algorithms, this work also analyzes six derived algorithms. To enable a uniform assessment, a multi-FL framework named FLAGS: Federated Learning AlGorithms Simulation has been developed for rapid configuration of multiple FL algorithms. Our experiments indicate that fully decentralized FL algorithms achieve comparable accuracy under multiple operating conditions, including asynchronous aggregation and the presence of stragglers. Furthermore, decentralized FL can also operate in noisy environments and with a comparably higher local update rate. However, the impact of extremely skewed data distributions on decentralized FL is much more adverse than on centralized variants. The results indicate that it may not be necessary to restrict the devices to a single FL algorithm; rather, multi-FL nodes may operate with greater efficiency.
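For reference, the gossip aggregation strategy behind GFL can be reduced to a pairwise averaging step between randomly chosen peers, sketched below (a toy illustration only; FLAGS configures such strategies at a much higher level, and this snippet is not taken from it).

```python
# Gossip averaging: no server; random peer pairs repeatedly average parameters,
# so local models drift toward a network-wide consensus.
import random
import numpy as np

def gossip_round(models, seed=None):
    """One symmetric gossip exchange between a random pair of nodes."""
    rng = random.Random(seed)
    i, j = rng.sample(range(len(models)), 2)
    avg = (models[i] + models[j]) / 2.0
    models[i], models[j] = avg, avg.copy()

models = [np.random.default_rng(k).normal(size=3) for k in range(6)]
for r in range(20):
    gossip_round(models, seed=r)
print(np.std(models, axis=0))   # spread across nodes shrinks over rounds
```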
Survival analysis (time-to-event analysis) is an important problem in healthcare because it has wide-ranging implications for patients and palliative care. Many survival-analysis methods assume that survival data are available either from a single medical center or through data sharing across multiple centers. However, the sensitivity of patient attributes and strict privacy laws increasingly prohibit sharing healthcare data. To address this challenge, the research community has investigated solutions based on the federated learning (FL) paradigm, which uses decentralized training and the sharing of model parameters. In this paper, we study the use of FL for performing survival analysis on distributed healthcare datasets. Recently, the popular Cox proportional hazards (CPH) model has been adapted to the FL setting. However, due to its linearity and proportional-hazards assumption, the CPH model yields suboptimal performance, especially on non-linear, non-IID, and heavily censored survival datasets. To overcome the challenges of existing federated survival-analysis approaches, we leverage the predictive accuracy of deep learning models and the power of pseudo values to propose a first-of-its-kind, pseudo value-based deep learning model for federated survival analysis (FSA), called FedPseudo. Furthermore, we introduce a novel approach for deriving pseudo values in the FL setting, which speeds up their computation. Extensive experiments on synthetic and real-world datasets show that our pseudo value-based FL framework achieves performance similar to the best centrally trained deep survival-analysis model. Moreover, our proposed FL approach obtains the best results under various censoring settings.
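For background, the pseudo values referred to above are typically jackknife pseudo-observations of the Kaplan-Meier survival estimate; the plain-numpy sketch below shows that standard construction (the paper's faster, federated derivation is not reproduced here).

```python
# Jackknife pseudo-values of S(t): pv_i = n*S(t) - (n-1)*S_{-i}(t).
import numpy as np

def km_survival(time, event, t):
    """Kaplan-Meier estimate of S(t) from durations and event indicators."""
    surv = 1.0
    for u in np.sort(np.unique(time[event == 1])):
        if u > t:
            break
        at_risk = np.sum(time >= u)
        deaths = np.sum((time == u) & (event == 1))
        surv *= 1.0 - deaths / at_risk
    return surv

def pseudo_values(time, event, t):
    """Leave-one-out pseudo-observations, usable as deep-model regression targets."""
    n = len(time)
    s_full = km_survival(time, event, t)
    keep = np.ones(n, dtype=bool)
    pv = np.empty(n)
    for i in range(n):
        keep[i] = False
        pv[i] = n * s_full - (n - 1) * km_survival(time[keep], event[keep], t)
        keep[i] = True
    return pv

rng = np.random.default_rng(0)
time = rng.exponential(12.0, size=200)
event = rng.random(200) < 0.7              # ~30% right-censored observations
pv = pseudo_values(time, event, t=10.0)
```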
Federated learning has been a hot research topic in enabling the collaborative training of machine learning models among different organizations under privacy restrictions. As researchers try to support more machine learning models with different privacy-preserving approaches, there is a requirement to develop systems and infrastructures that ease the development of various federated learning algorithms. Just as deep learning systems such as PyTorch and TensorFlow have boosted the development of deep learning, federated learning systems (FLSs) play an equivalent role and face challenges in various aspects, such as effectiveness, efficiency, and privacy. In this survey, we conduct a comprehensive review of federated learning systems. To achieve a smooth flow and guide future research, we introduce the definition of federated learning systems and analyze their components. Moreover, we provide a thorough categorization of federated learning systems according to six different aspects, including data distribution, machine learning model, privacy mechanism, communication architecture, scale of federation, and motivation of federation. The categorization can help in the design of federated learning systems, as shown in our case studies. By systematically summarizing the existing federated learning systems, we present the design factors, case studies, and future research opportunities.
Physics-based models have been mainstream in fluid dynamics for developing predictive models. In recent years, machine learning has offered a renaissance to the fluid community due to the rapid developments in data science, processing units, neural-network-based technologies, and sensor adaptations. So far, in many applications in fluid dynamics, machine learning approaches have mostly focused on a standard process that requires centralizing the training data on a designated machine or in a data center. In this letter, we present a federated machine learning approach that enables localized clients to collaboratively learn an aggregated and shared predictive model while keeping all the training data on each edge device. We demonstrate the feasibility and prospects of such a decentralized learning approach with an effort to forge a deep learning surrogate model for reconstructing spatiotemporal fields. Our results indicate that federated machine learning might be a viable tool for designing highly accurate predictive decentralized digital twins relevant to fluid dynamics.
With the booming growth of location-based social networks, privacy-preserving location prediction has become a primary task for helping users discover new points of interest (POIs). Traditional systems consider a centralized approach that requires the transmission and collection of users' private data. In this work, we present FedPOIRec, a privacy-preserving federated learning approach enhanced with features from the users' social circles for top-$N$ POI recommendations. First, the FedPOIRec framework is built on the principle that local data never leave the owner's device, while local updates are blindly aggregated by a parameter server. Second, the local recommenders are personalized by allowing users to exchange their learned parameters, enabling knowledge transfer among friends. To this end, we propose a privacy-preserving protocol for integrating the preferences of a user's friends after the federated computation, by exploiting the properties of the CKKS fully homomorphic encryption scheme. To evaluate FedPOIRec, we apply our approach to five real-world datasets using two recommendation models. Extensive experiments demonstrate that FedPOIRec achieves recommendation quality comparable to centralized approaches, while the social integration protocol incurs low computation and communication overhead on the user side.
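As a toy illustration of the homomorphic step, the sketch below uses the TenSEAL CKKS bindings (an assumed library choice; the paper specifies the CKKS scheme but not necessarily this implementation, and the blending rule shown is illustrative) to combine two users' preference vectors without decrypting them individually.

```python
# Blend encrypted preference vectors under CKKS; only the key owner decrypts.
import tenseal as ts

# CKKS context with commonly used tutorial parameters (an assumption).
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()

user = ts.ckks_vector(ctx, [0.9, 0.1, 0.4])    # encrypted POI preference scores
friend = ts.ckks_vector(ctx, [0.2, 0.8, 0.5])

blended = (user + friend) * 0.5                # computed on ciphertexts
print(blended.decrypt())                       # approximately [0.55, 0.45, 0.45]
```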
Federated learning (FL) collaboratively aggregates a shared global model from multiple local clients while keeping the training data decentralized in order to preserve data privacy. However, standard FL methods ignore the noisy client issue, which may harm the overall performance of the aggregated model. In this paper, we first analyze the noisy client issue and then model noisy clients with different noise distributions (e.g., Bernoulli and truncated Gaussian distributions). To learn with noisy clients, we propose a simple yet effective FL framework, named Federated Noisy Client Learning (Fed-NCL), which is a plug-and-play algorithm containing two main components: a dynamic data quality measurement (DQM) that quantifies the data quality of each participating client, and a noise-robust aggregation (NRA) that adaptively aggregates the local models by jointly considering the amount of local training data and the data quality of each client. Our Fed-NCL can be easily applied to any standard FL workflow to handle the noisy client issue. Experimental results on various datasets demonstrate that our algorithm boosts the performance of different state-of-the-art systems with noisy clients.
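One plausible reading of size-and-quality weighted aggregation is sketched below; the weighting rule is an illustrative assumption and not the exact DQM/NRA formulation from the paper.

```python
# Quality-weighted aggregation: clients judged noisier receive smaller weights.
import numpy as np

def aggregate(local_models, data_sizes, quality_scores):
    """Weighted average of client models; weights = size x quality, normalized."""
    w = np.asarray(data_sizes, dtype=float) * np.asarray(quality_scores, dtype=float)
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, local_models))

models = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([5.0, -3.0])]
sizes = [500, 800, 400]
quality = [0.95, 0.90, 0.10]                  # third client flagged as noisy
global_model = aggregate(models, sizes, quality)
print(global_model)                           # dominated by the clean clients
```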
Causal discovery, the inference of causal relations from data, is a core task of fundamental importance in all scientific domains, and several new machine learning methods for addressing the causal discovery problem have been proposed recently. However, existing machine learning methods for causal discovery typically require that the data used for inference is pooled and available in a centralized location. In many domains of high practical importance, such as in healthcare, data is only available at local data-generating entities (e.g. hospitals in the healthcare context), and cannot be shared across entities due to, among others, privacy and regulatory reasons. In this work, we address the problem of inferring causal structure - in the form of a directed acyclic graph (DAG) - from a distributed data set that contains both observational and interventional data in a privacy-preserving manner by exchanging updates instead of samples. To this end, we introduce a new federated framework, FED-CD, that enables the discovery of global causal structures both when the set of intervened covariates is the same across decentralized entities, and when the set of intervened covariates are potentially disjoint. We perform a comprehensive experimental evaluation on synthetic data that demonstrates that FED-CD enables effective aggregation of decentralized data for causal discovery without direct sample sharing, even when the contributing distributed data sets cover disjoint sets of interventions. Effective methods for causal discovery in distributed data sets could significantly advance scientific discovery and knowledge sharing in important settings, for instance, healthcare, in which sharing of data across local sites is difficult or prohibited.
The increasing penetration of intermittent and renewable energy sources has elevated the importance of demand forecasting in power systems. Smart meters can play a critical role in demand forecasting thanks to the measurement granularity they provide. Consumers' privacy concerns, the reluctance of utilities and vendors to share data with competitors or third parties, and regulatory constraints are some of the challenges limiting smart-meter-based forecasting. This paper introduces a collaborative machine learning method for short-term demand forecasting using smart meter data as a solution to these constraints. Privacy-preserving techniques and federated learning make it possible to ensure consumers' confidentiality with respect to their data, the models generated from it (differential privacy), and the communication channel (secure aggregation). The evaluated methods consider several scenarios that explore how traditional centralized approaches could be projected toward a decentralized, collaborative and private system. The results obtained in the evaluation show near-perfect privacy budgets of $(1.39, 10^{-5})$ and $(2.01, 10^{-5})$ with a negligible performance compromise.
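To make the secure-aggregation ingredient concrete, the toy sketch below shows the additive-masking idea in which pairwise random masks cancel in the sum, so the aggregator only learns the total update; the pairwise key agreement and the differential-privacy noise of a full protocol are omitted, and all names are illustrative.

```python
# Additive masking: client i adds mask m_ij, client j subtracts it, so the
# aggregator sees only the sum of the (otherwise hidden) per-meter updates.
import numpy as np

rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]          # per-meter updates

masks = {(i, j): rng.normal(size=4) for i in range(3) for j in range(i + 1, 3)}

def masked(i):
    out = updates[i].copy()
    for (a, b), m in masks.items():
        if a == i:
            out += m
        elif b == i:
            out -= m
    return out

aggregate = sum(masked(i) for i in range(3))              # masks cancel exactly
assert np.allclose(aggregate, sum(updates))
```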
Federated learning (FL) is a system in which a central aggregator coordinates the efforts of multiple clients to solve machine learning problems. This setting allows the training data to remain decentralized, thereby protecting privacy. The purpose of this paper is to provide an overview of FL systems with a focus on healthcare. FL is evaluated here in terms of its frameworks, architectures, and applications. It is shown that FL addresses the preceding issues through a shared global deep learning (DL) model maintained by a central aggregator server. Inspired by the rapid growth of FL research, this paper examines recent developments and provides a list of unresolved problems. In the context of FL, several privacy methods are described, including secure multi-party computation, homomorphic encryption, differential privacy, and stochastic gradient descent. Furthermore, a review of the various FL classes, such as horizontal and vertical FL and federated transfer learning, is provided. FL has applications in wireless communication, service recommendation, intelligent medical diagnosis systems, and healthcare, which are discussed in this paper. We also present a thorough review of existing FL challenges, such as privacy preservation, communication cost, system heterogeneity, and unreliable model upload, followed by future research directions.