Classical and centralized Artificial Intelligence (AI) methods require moving data from producers (sensors, machines) to energy-hungry data centers, raising environmental concerns due to computational and communication resource demands, while violating privacy. Emerging alternatives to mitigate such high energy costs propose to efficiently distribute, or federate, the learning tasks across devices, which are typically low-power. This paper proposes a novel framework for the analysis of the energy and carbon footprints in distributed and federated learning (FL). The proposed framework quantifies both the energy footprints and the carbon-equivalent emissions for vanilla FL methods and consensus-based fully decentralized approaches. We discuss optimal bounds and operational points that support green FL designs and underpin their sustainability assessment. Two case studies from emerging 5G industry verticals are analyzed: these quantify the environmental footprints of continual and reinforcement learning setups, where the training process is repeated periodically for continuous improvements. For all cases, the sustainability of distributed learning relies on the fulfillment of specific requirements on communication efficiency and learner population size. Energy and test accuracy should also be traded off considering the model and the data footprints for the targeted industrial applications.
translated by 谷歌翻译
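The framework above boils down to accounting: energy per round times the number of rounds, converted to carbon via a grid intensity factor. A minimal back-of-the-envelope version can be sketched as follows; every parameter name and value is an illustrative assumption, not a calibrated figure from the paper.

```python
# Back-of-the-envelope energy/carbon model in the spirit of the framework:
# total energy = rounds x (client compute + uplink of the model + server
# aggregation), converted to CO2-equivalent via a grid carbon intensity.
# All names and numbers below are illustrative assumptions.

def fl_energy_joules(rounds, clients, e_compute, model_bits, j_per_bit, e_server):
    """Energy of one FL training run, in joules."""
    per_round = clients * e_compute + clients * model_bits * j_per_bit + e_server
    return rounds * per_round

def carbon_kg(energy_joules, kg_co2_per_kwh=0.475):
    """CO2-equivalent emissions for a given energy draw (1 kWh = 3.6e6 J)."""
    return energy_joules / 3.6e6 * kg_co2_per_kwh

e = fl_energy_joules(rounds=200, clients=50, e_compute=30.0,
                     model_bits=8 * 5e6,   # ~5 MB model per upload
                     j_per_bit=1e-7,       # radio energy per transmitted bit
                     e_server=100.0)
print(e, carbon_kg(e))
```

Varying `rounds`, `clients`, and `j_per_bit` in such a model is what lets one compare vanilla FL against decentralized variants under different communication efficiencies.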
Recent advances in Federated Learning (FL) have paved the way towards the design of novel strategies for solving multiple learning tasks simultaneously, by leveraging cooperation among networked devices. Multi-Task Learning (MTL) exploits relevant commonalities across tasks to improve efficiency compared with traditional transfer learning approaches. By learning multiple tasks jointly, significant reduction in terms of energy footprints can be obtained. This article provides a first look into the energy costs of MTL processes driven by the Model-Agnostic Meta-Learning (MAML) paradigm and implemented in distributed wireless networks. The paper targets a clustered multi-task network setup where autonomous agents learn different but related tasks. The MTL process is carried out in two stages: the optimization of a meta-model that can be quickly adapted to learn new tasks, and a task-specific model adaptation stage where the learned meta-model is transferred to agents and tailored for a specific task. This work analyzes the main factors that influence the MTL energy balance by considering a multi-task Reinforcement Learning (RL) setup in a robotized environment. Results show that the MAML method can reduce the energy bill by at least 2 times compared with traditional approaches without inductive transfer. Moreover, it is shown that the optimal energy balance in wireless networks depends on uplink/downlink and sidelink communication efficiencies.
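The two-stage MTL process above (meta-optimization, then task-specific adaptation) can be sketched with first-order MAML on scalar toy tasks; the quadratic losses, targets, step sizes, and iteration counts are assumptions for illustration only.

```python
# First-order MAML sketch on scalar toy tasks L_i(theta) = (theta - t_i)^2,
# mirroring the two stages: (1) meta-optimization of an initialization that
# adapts quickly, (2) per-agent adaptation of the transferred meta-model.

def grad(theta, target):
    # dL/dtheta for L = (theta - target)^2
    return 2.0 * (theta - target)

def meta_train(task_targets, theta=0.0, alpha=0.1, beta=0.05, steps=100):
    """Stage 1: first-order MAML outer loop over all tasks."""
    for _ in range(steps):
        outer = 0.0
        for t in task_targets:
            adapted = theta - alpha * grad(theta, t)  # one inner-loop step
            outer += grad(adapted, t)                 # first-order meta-gradient
        theta -= beta * outer / len(task_targets)
    return theta

def adapt(theta, target, alpha=0.1, steps=5):
    """Stage 2: quick task-specific fine-tuning from the meta-model."""
    for _ in range(steps):
        theta -= alpha * grad(theta, target)
    return theta

meta = meta_train([1.0, 2.0, 3.0])  # meta-model settles near the task mean
print(meta, adapt(meta, 3.0))
```

The energy saving reported in the abstract comes from stage 2 needing only a few adaptation steps per agent instead of training each task from scratch.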
As data generation increasingly takes place on devices without a wired connection, Machine Learning (ML)-related traffic will become ubiquitous in wireless networks. Many studies have shown that traditional wireless protocols are highly inefficient or unsustainable to support ML, which creates the need for new wireless communication methods. In this survey, we give an exhaustive review of the state-of-the-art wireless methods that are specifically designed to support ML services over distributed datasets. Currently, there are two clear themes within the literature: analog over-the-air computation and digital radio resource management optimized for ML. This survey gives a comprehensive introduction to these methods, reviews the most important works, highlights open problems, and discusses application scenarios.
In recent years, mobile devices have been equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
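The FL protocol described above (local training, then aggregation of model updates weighted by dataset size) can be sketched as a minimal FedAvg round; the toy objective (fitting a scalar mean) and the data split are illustrative assumptions.

```python
# Minimal FedAvg round: each device trains on its local data, only model
# updates travel to the server, which averages them weighted by local
# dataset size. Toy objective and data split are illustrative.

def local_update(w, data, lr=0.1):
    """One local SGD step on mean((w - x)^2) over the device's private data."""
    g = sum(2.0 * (w - x) for x in data) / len(data)
    return w - lr * g

def fedavg_round(w_global, client_datasets, lr=0.1):
    updates = [local_update(w_global, d, lr) for d in client_datasets]
    n = sum(len(d) for d in client_datasets)
    return sum(len(d) * u for d, u in zip(client_datasets, updates)) / n

w = 0.0
clients = [[1.0, 1.0], [3.0]]   # non-IID split across two devices
for _ in range(50):
    w = fedavg_round(w, clients)
print(w)                         # approaches the global data mean 5/3
```

Note that only the scalar `w` ever crosses the network; the lists in `clients` stay on their devices, which is the privacy property the abstract emphasizes.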
To meet the extremely heterogeneous requirements of next-generation wireless communication networks, the research community increasingly relies on machine learning solutions for real-time decision-making and radio resource management. Traditional machine learning employs a fully centralized architecture in which the entire training data is collected at a single node, e.g., a cloud server, which significantly increases the communication overhead and raises severe privacy concerns. To this end, a distributed machine learning paradigm termed Federated Learning (FL) has recently been proposed. In FL, each participating edge device trains its local model using its own training data. Then, the weights or parameters of the locally trained models are sent over wireless channels to a central parameter server (PS), which aggregates them and updates the global model. On the one hand, FL plays an important role in optimizing the resources of wireless communication networks; on the other hand, wireless communications are crucial for FL. Thus, a "bidirectional" relationship exists between FL and wireless communications. Although FL is an emerging concept, many publications have already appeared in the area of FL and its applications to next-generation wireless networks. Nevertheless, we note that no existing work has highlighted the bidirectional relationship between FL and wireless communications. Therefore, the purpose of this survey is to bridge this gap in the literature by providing a timely and comprehensive discussion on the interdependency between FL and wireless communications.
The unprecedented surge of data volumes in wireless networks empowered with Artificial Intelligence (AI) opens up new horizons for providing ubiquitous data-driven intelligent services. Traditional cloud-centric Machine Learning (ML)-based services are implemented by centrally collecting datasets and training models. However, this conventional training technique encompasses two challenges: (i) high communication and energy costs due to increased data communication, and (ii) threatened data privacy by allowing untrusted parties to exploit this information. Recently, in light of these limitations, a new emerging technique, namely Federated Learning (FL), has arisen to bring ML to the edge of wireless networks. FL can extract the benefits of data silos by training a global model in a distributed manner, orchestrated by an FL server. FL exploits both the decentralized datasets and the computing resources of participating clients to develop a generalized ML model without compromising data privacy. In this article, we provide a comprehensive survey of the fundamentals and enabling technologies of FL. Moreover, an extensive study is presented detailing the various applications of FL in wireless networks and highlighting their challenges and limitations. The efficacy of FL is further explored with emerging prospective use cases beyond fifth-generation (B5G) and sixth-generation (6G) communication systems. The purpose of this survey is to provide an overview of the state of the art of FL in key wireless technologies that will serve as a foundation for establishing a firm understanding of the topic. Lastly, we offer a road forward for future research directions.
Despite impressive results, deep learning-based technologies also raise severe privacy and environmental concerns induced by the training procedure, which is often conducted in data centers. In response, alternatives to centralized training, such as Federated Learning (FL), have emerged. Perhaps unexpectedly, FL is starting to be deployed at a global scale by companies that must adhere to new legal requirements and policies originating from governments and social groups advocating for privacy protection. However, the potential environmental impact related to FL remains unclear and unexplored. This article offers the first-ever systematic study of the carbon footprint of FL. We then compare the carbon footprint of FL to traditional centralized learning. Our findings show that, depending on the configuration, FL can emit up to two orders of magnitude more carbon than centralized machine learning. However, in certain settings, it can be comparable to centralized learning thanks to the reduced energy consumption of embedded devices. We performed extensive experiments with FL across different types of datasets, settings, and various deep learning models. Finally, we highlight and connect the reported results to the future challenges and trends in FL to reduce its environmental impact, including algorithmic efficiency, hardware capabilities, and stronger industry transparency.
Federated learning (FL) is capable of performing large distributed machine learning tasks across multiple edge users by periodically aggregating locally trained parameters. To address the key challenges of enabling FL over a wireless fog-cloud system (e.g., non-IID data, user heterogeneity), we first propose an efficient FL algorithm based on Federated Averaging (called FedFog) that performs local aggregation of gradient parameters at fog servers and global training updates in the cloud. Next, we employ FedFog in wireless fog-cloud systems by investigating a novel network-aware FL optimization problem that strikes a balance between the global loss and completion time. An iterative algorithm is then developed to obtain a precise measurement of the system performance, which helps design an efficient stopping criterion to output an appropriate number of global rounds. To mitigate the straggler effect, we propose a flexible user aggregation strategy that first trains fast users to reach a certain level of accuracy before allowing slow users to join the global training updates. Extensive numerical results using several real-world FL tasks are provided to verify the theoretical convergence of FedFog. We also show that the proposed co-design of FL and communication is necessary to substantially improve resource utilization while achieving comparable accuracy of the learning model.
Unmanned aerial vehicle (UAV) swarms are considered as a promising technique for next-generation communication networks due to their flexibility, mobility, low cost, and the ability to collaboratively and autonomously provide services. Distributed learning (DL) enables UAV swarms to intelligently provide communication services, multi-directional remote surveillance, and target tracking. In this survey, we first introduce several popular DL algorithms such as federated learning (FL), multi-agent Reinforcement Learning (MARL), distributed inference, and split learning, and present a comprehensive overview of their applications for UAV swarms, such as trajectory design, power control, wireless resource allocation, user assignment, perception, and satellite communications. Then, we present several state-of-the-art applications of UAV swarms in wireless communication systems, such as reconfigurable intelligent surface (RIS), virtual reality (VR), and semantic communications, and discuss the problems and challenges that DL-enabled UAV swarms can solve in these applications. Finally, we describe open problems of using DL in UAV swarms and future research directions of DL-enabled UAV swarms. In summary, this article provides a comprehensive review of various DL applications for UAV swarms across extensive scenarios.
Federated Learning (FL) is one of the most appealing alternatives to the standard centralized learning paradigm, allowing a heterogeneous set of devices to train a machine learning model without sharing their raw data. However, FL requires a central server to coordinate the learning process, thus introducing potential scalability and security issues. In the literature, server-less FL approaches such as gossip federated learning (GFL) and blockchain-enabled federated learning (BFL) have been proposed to mitigate these issues. In this work, we propose a complete overview of these three techniques, comparing them according to an overarching set of performance indicators, including model accuracy, time complexity, communication overhead, convergence time, and energy consumption. An extensive simulation campaign permits a quantitative analysis. In particular, GFL is able to save 18% of the training time, 68% of the energy, and 51% of the data to be shared with respect to the centralized FL (CFL) solution, but it is unable to reach the accuracy level of CFL. On the other hand, BFL represents a viable solution for implementing decentralized learning with a higher level of security, at the cost of extra energy usage and data sharing. Finally, we identify open issues for the two decentralized federated learning implementations and provide insights into potential extensions and possible research directions for this new research field.
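The gossip scheme compared above removes the server entirely: in each round a random pair of nodes exchanges models over a direct link and both keep the average, so the population drifts toward consensus without a coordinator. A minimal sketch, with the node count, round count, and all-to-all pairing being illustrative assumptions:

```python
# Server-free gossip sketch: repeated pairwise averaging drives all node
# models toward consensus; no central aggregator is involved.
import random

def gossip_pair(models, i, j):
    """Nodes i and j exchange models and both keep the element-wise average."""
    avg = [(a + b) / 2.0 for a, b in zip(models[i], models[j])]
    models[i], models[j] = avg, list(avg)

random.seed(0)
models = [[float(i)] for i in range(8)]     # 8 nodes holding distinct models
for _ in range(40):
    i, j = random.sample(range(8), 2)       # one random gossip exchange
    gossip_pair(models, i, j)
mean = sum(m[0] for m in models) / len(models)
print(mean)   # pairwise averaging preserves the population mean
```

The trade-off the abstract reports (less data shared, but lower final accuracy than CFL) stems from this purely local mixing: information spreads in many small hops rather than through one global aggregate per round.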
Intelligent IoT environments (iIoTe) are comprised of heterogeneous devices that can collaboratively execute semi-autonomous IoT applications, examples of which include highly automated manufacturing cells or autonomously interacting harvesting machines. Energy efficiency is key in such edge environments, since they are often based on an infrastructure that consists of wireless and battery-powered devices, e.g., e-tractors, drones, Automated Guided Vehicles (AGVs), and robots. The total energy consumption draws contributions from multiple technologies that enable edge computing and communication, distributed learning, as well as distributed ledgers and smart contracts. This paper provides a state-of-the-art overview of these technologies and illustrates their functionality and performance, with special attention to the trade-off among resources, latency, privacy, and energy consumption. Finally, the paper provides a vision for integrating these enabling technologies in energy-efficient iIoTe and a roadmap to address the open research challenges.
Federated learning has generated significant interest, with nearly all works focused on a "star" topology where nodes/devices are each connected to a central server. We migrate away from this architecture and extend it through the network dimension to the case where there are multiple layers of nodes between the end devices and the server. Specifically, we develop multi-stage hybrid federated learning (MH-FL), a hybrid of intra- and inter-layer model learning that considers the network as a multi-layer cluster-based structure. MH-FL considers the topology structures among the nodes in the clusters, including local networks formed via device-to-device (D2D) communications, and presumes a semi-decentralized architecture for federated learning. It orchestrates the devices at different network layers in a collaborative/cooperative manner (i.e., using D2D interactions) to form local consensus on the model parameters and combines it with multi-stage parameter relaying between the layers of the tree-shaped hierarchy. We derive upper bounds on the convergence of MH-FL with respect to parameters of the network topology (e.g., the spectral radius) and of the learning algorithm (e.g., the number of D2D rounds in different clusters). We obtain a set of policies for the number of D2D rounds in different clusters to guarantee either a finite optimality gap or convergence to the global optimum. We then develop a distributed control algorithm for MH-FL to tune the number of D2D rounds in each cluster over time to meet specific convergence criteria. Our experiments on real-world datasets verify our analytical results and demonstrate the advantages of MH-FL in terms of resource utilization metrics.
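The in-cluster consensus step referred to above can be sketched as a standard linear consensus iteration: nodes in a cluster repeatedly mix parameters with their D2D neighbors, so any node (e.g., the cluster head) ends up holding the cluster average to relay up the hierarchy. The ring topology, step size, and round count below are illustrative assumptions, not the paper's settings.

```python
# Sketch of in-cluster D2D consensus: each node nudges its value toward its
# neighbors' values; after enough rounds all nodes hold the cluster average.

def d2d_round(values, neighbors, eps=0.3):
    """One consensus iteration: x_i <- x_i + eps * sum_j (x_j - x_i)."""
    return [v + eps * sum(values[j] - v for j in neighbors[i])
            for i, v in enumerate(values)]

values = [0.0, 2.0, 4.0, 6.0]                        # local scalar parameters
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # D2D ring topology
for _ in range(30):                                   # D2D rounds in this cluster
    values = d2d_round(values, ring)
print(values[0])                                      # ~ cluster average 3.0
```

The convergence speed of this iteration is governed by the topology's spectral properties, which is why the paper's bounds involve the spectral radius and the per-cluster D2D round counts.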
In this paper, we study a new latency optimization problem for blockchain-based federated learning (BFL) in multi-server edge computing. In this system model, distributed mobile devices (MDs) communicate with a set of edge servers (ESs) to handle both machine learning (ML) model training and block mining simultaneously. To assist the ML model training for resource-constrained MDs, we develop an offloading strategy that enables MDs to transmit their data to one of the associated ESs. We then propose a new decentralized ML model aggregation solution at the edge layer based on a consensus mechanism to build a global ML model via peer-to-peer (P2P)-based blockchain communications. The blockchain builds trust among MDs and ESs to facilitate reliable ML model sharing and cooperative consensus formation, and enables the rapid elimination of manipulated models caused by poisoning attacks. We formulate latency-aware BFL as an optimization problem aiming to minimize the system latency by jointly considering the data offloading decisions, the MDs' transmit power, the bandwidth allocation for the MDs' data offloading, the MDs' computational allocation, and the hash power allocation. Given the mixed action space of discrete offloading and continuous allocation variables, we propose a novel deep reinforcement learning scheme with a parameterized advantage actor-critic algorithm. We theoretically characterize the convergence properties of BFL in terms of the aggregation delay, mini-batch size, and number of P2P communication rounds. Our numerical evaluation demonstrates the superiority of our proposed scheme over baselines in terms of model training efficiency, convergence rate, system latency, and robustness against model poisoning attacks.
Federated learning (FL) and split learning (SL) are two emerging collaborative learning methods that may greatly facilitate ubiquitous intelligence in the Internet of Things (IoT). Federated learning enables machine learning (ML) models locally trained using private data to be aggregated into a global model. Split learning allows different portions of an ML model to be collaboratively trained on different workers in a learning framework. Federated learning and split learning each have unique advantages and respective limitations, and may complement each other toward ubiquitous intelligence in IoT. Therefore, the combination of federated learning and split learning has recently become an active research area attracting extensive interest. In this article, we review the latest developments in federated learning and split learning and present a survey of the state-of-the-art technologies for combining these two learning methods in an edge computing-based IoT environment. We also identify some open problems and discuss possible directions for future research in this area, with the hope of further arousing the research community's interest in this emerging field.
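The split-learning idea above can be sketched in a few lines: the network is cut at a layer; the client computes the lower part on its raw data and ships only the intermediate activation, and the worker/server computes the upper part. Layer sizes and weights below are illustrative, and backpropagation across the cut is omitted for brevity.

```python
# Split learning sketch: only the "smashed" intermediate activation leaves
# the client; raw data and lower-layer weights stay local.

def dense(x, w):
    """y[j] = sum_i x[i] * w[i][j], with the weight matrix given as rows."""
    return [sum(xi * wij for xi, wij in zip(x, col)) for col in zip(*w)]

def client_forward(raw, w_lower):
    return dense(raw, w_lower)        # this activation is all that is transmitted

def server_forward(activation, w_upper):
    return dense(activation, w_upper)

w_lower = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3 inputs -> 2-unit cut layer
w_upper = [[1.0], [2.0]]                          # 2-unit cut -> 1 output
act = client_forward([1.0, 2.0, 3.0], w_lower)
print(act, server_forward(act, w_upper))
```

Combinations of FL and SL, as surveyed in the article, typically wrap such a split forward/backward pass inside a federated aggregation loop over many clients.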
Motivated by the heterogeneous nature of the devices participating in large-scale federated learning (FL) optimization, we focus on an asynchronous server-less FL solution empowered by blockchain (BC) technology. In contrast to most adopted FL approaches, which assume synchronous operation, we advocate an asynchronous method whereby model aggregation is done as clients submit their local updates. The asynchronous setting fits well with the federated optimization idea in practical large-scale settings with heterogeneous clients. Thus, it potentially leads to higher efficiency in terms of communication overhead and idle periods. To evaluate the learning completion delay of BC-enabled FL, we provide an analytical model based on batch-service queueing theory. Furthermore, we provide simulation results to assess the performance of both synchronous and asynchronous mechanisms. Important aspects involved in BC-enabled FL, such as the network size, link capacity, or user requirements, are put together and analyzed. As our results show, the synchronous setting leads to higher prediction accuracy than the asynchronous case. Nevertheless, asynchronous federated optimization provides much lower latency in many cases, thus becoming an appealing FL solution when dealing with large datasets, tough timing constraints (e.g., near-real-time applications), or highly varying training data.
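The aggregate-on-arrival idea in the asynchronous setting above can be sketched as the server folding each client update into the global model as it arrives, discounting stale updates that were computed against an older global version. The discount rule, learning rate, and arrival trace are illustrative assumptions, not the paper's scheme.

```python
# Asynchronous aggregation sketch: no waiting for a full client cohort;
# each arriving update is merged immediately, down-weighted by staleness.

def async_merge(w_global, w_client, global_version, client_version, base_lr=0.5):
    staleness = global_version - client_version
    lr = base_lr / (1.0 + staleness)      # stale updates move the model less
    return (1.0 - lr) * w_global + lr * w_client

w, version = 0.0, 0
arrivals = [(1.0, 0), (1.0, 0), (1.0, 1)]  # (client model, version it started from)
for w_client, seen in arrivals:
    w = async_merge(w, w_client, version, seen)
    version += 1
print(w)
```

This is where the accuracy/latency trade-off reported above comes from: clients never idle waiting for stragglers, but stale, down-weighted updates slow effective progress per round.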
In this tutorial paper, we look into the evolution and prospect of network architecture and propose a novel conceptual architecture for the 6th generation (6G) networks. The proposed architecture has two key elements, i.e., holistic network virtualization and pervasive artificial intelligence (AI). The holistic network virtualization consists of network slicing and digital twin, from the aspects of service provision and service demand, respectively, to incorporate service-centric and user-centric networking. The pervasive network intelligence integrates AI into future networks from the perspectives of networking for AI and AI for networking, respectively. Building on holistic network virtualization and pervasive network intelligence, the proposed architecture can facilitate three types of interplay, i.e., the interplay between digital twin and network slicing paradigms, between model-driven and data-driven methods for network management, and between virtualization and AI, to maximize the flexibility, scalability, adaptivity, and intelligence for 6G networks. We also identify challenges and open issues related to the proposed architecture. By providing our vision, we aim to inspire further discussions and developments on the potential architecture of 6G.
In this chapter, we mainly focus on collaborative training across wireless devices. Training an ML model is equivalent to solving an optimization problem, and many distributed optimization algorithms have been developed over the last decades. These distributed ML algorithms provide data locality; that is, a joint model can be trained collaboratively while the data available at each participating device remains local. This addresses, to some extent, the privacy concern. They also provide computational scalability, as they allow exploiting the computational resources distributed across many edge devices. However, in practice, this does not directly translate into a linear gain in the overall learning speed with the number of devices. This is partly due to the communication bottleneck limiting the overall computation speed. Additionally, wireless devices are highly heterogeneous in their computational capabilities, and both their computation speed and communication rate can be highly time-varying due to physical factors. Therefore, distributed learning algorithms, particularly those to be implemented at the wireless network edge, must be carefully designed, taking into account the impact of the time-varying communication network as well as the heterogeneous and stochastic computation capabilities of the devices.
The wider coverage and better latency reduction of 5G call for its combination with multi-access edge computing (MEC) technologies. Decentralized deep learning (DDL), such as federated learning and swarm learning, has emerged as a promising solution for privacy-preserving data processing across millions of smart edge devices, leveraging the distributed computation of multi-layer neural networks within networks of local clients without disclosing the raw local training data. Notably, in industries such as finance and healthcare, where sensitive data of transactions and personal medical records is cautiously maintained, DDL can facilitate the collaboration among these institutions to improve the performance of trained models while protecting the data privacy of participating clients. In this survey paper, we present the technical fundamentals of DDL that benefit many walks of society through decentralized learning. Furthermore, we offer a comprehensive overview of the current state of the art in this field by outlining the challenges of DDL and the most relevant solutions, from novel perspectives of communication efficiency and reliability.
In recent years, deep learning (DL) models have demonstrated remarkable achievements on non-trivial tasks such as speech recognition and natural language understanding. One of the significant contributors to its success is the proliferation of end devices that acted as a catalyst to provide data for data-hungry DL models. However, computing DL training and inference is the main challenge. Usually, central cloud servers are used for the computation, but it opens up other significant challenges, such as high latency, increased communication costs, and privacy concerns. To mitigate these drawbacks, considerable efforts have been made to push the processing of DL models to edge servers. Moreover, the confluence point of DL and edge has given rise to edge intelligence (EI). This survey paper focuses primarily on the fifth level of EI, called all in-edge level, where DL training and inference (deployment) are performed solely by edge servers. All in-edge is suitable when the end devices have low computing resources, e.g., Internet-of-Things, and other requirements such as latency and communication cost are important in mission-critical applications, e.g., health care. Firstly, this paper presents all in-edge computing architectures, including centralized, decentralized, and distributed. Secondly, this paper presents enabling technologies, such as model parallelism and split learning, which facilitate DL training and deployment at edge servers. Thirdly, model adaptation techniques based on model compression and conditional computation are described because the standard cloud-based DL deployment cannot be directly applied to all in-edge due to its limited computational resources. Fourthly, this paper discusses eleven key performance metrics to evaluate the performance of DL at all in-edge efficiently. Finally, several open research challenges in the area of all in-edge are presented.
We envision a mobile edge computing (MEC) framework for machine learning (ML) technologies, which leverages distributed client data and computation resources for training high-performance ML models while preserving client privacy. Toward this future goal, this work aims to extend Federated Learning (FL), a decentralized learning framework that enables privacy-preserving training of models, to work with heterogeneous clients in a practical cellular network. The FL protocol iteratively asks random clients to download a trainable model from a server, update it with their own data, and upload the updated model to the server, while asking the server to aggregate multiple client updates to further improve the model. While clients in this protocol are free from disclosing their own private data, the overall training process can become inefficient when some clients have limited computational resources (i.e., requiring longer update time) or poor wireless channel conditions (longer upload time). Our new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions. Specifically, FedCS solves a client selection problem with resource constraints, which allows the server to aggregate as many client updates as possible and to accelerate performance improvement in ML models. We conducted an experimental evaluation using publicly available large-scale image datasets to train deep neural networks in MEC environment simulations. The experimental results show that FedCS is able to complete its training process in a significantly shorter time compared to the original FL protocol.
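The client-selection idea behind FedCS can be sketched greedily: accept as many clients as fit within a round deadline, where each client has an estimated update (compute) time and upload time. The timing model below (uploads sequential, updates overlapping earlier uploads) and all the numbers are simplifying assumptions, not the protocol's exact schedule.

```python
# Greedy sketch of deadline-constrained client selection: prefer cheap
# clients so the server can aggregate as many updates as possible per round.

def select_clients(clients, deadline):
    """clients: list of (name, t_update, t_upload); returns accepted names."""
    chosen, elapsed = [], 0.0
    for name, t_update, t_upload in sorted(clients, key=lambda c: c[1] + c[2]):
        finish = max(elapsed, t_update) + t_upload  # upload waits for both
        if finish <= deadline:
            chosen.append(name)
            elapsed = finish
    return chosen

clients = [("a", 1.0, 0.5), ("b", 4.0, 1.0), ("c", 0.5, 0.2), ("d", 9.0, 3.0)]
print(select_clients(clients, deadline=6.0))  # -> ['c', 'a', 'b']: slow "d" is left out
```

Maximizing the number of accepted updates per deadline is what lets the protocol shorten training without discarding the aggregation structure of vanilla FL.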