The mobility of unmanned aerial vehicles (UAVs) enables flexible and customized federated learning (FL) at the network edge. However, the underlying uncertainties of the aerial-terrestrial wireless channel may lead to a biased FL model. In particular, the distribution of the global model and the aggregation of the local updates within the FL training rounds at the UAVs are governed by the reliability of the wireless channel. This creates an undesirable bias towards the training data of ground devices with better channel conditions, and against those with worse ones. This paper characterizes the global bias problem of aerial FL in large-scale UAV networks. To this end, the paper proposes a channel-aware distribution and aggregation scheme that enforces equal contribution from all devices in the FL training as a means to resolve the global bias problem. We demonstrate the convergence of the proposed method by experimenting with the MNIST dataset and show its superiority compared to existing methods. The obtained results enable system parameter tuning to relieve the impact of the aerial channel deficiency on the FL convergence rate.
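A minimal sketch of the debiasing idea — not the paper's exact scheme: if device k's update survives the uplink with probability p_k, weighting each received update by 1/p_k makes the aggregate unbiased, whereas naive averaging over whatever arrives favors good channels. The success probabilities and updates below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, rounds = 10, 5, 5000

p = rng.uniform(0.2, 0.9, K)            # hypothetical per-device uplink success probability
updates = rng.normal(size=(K, d))       # fixed local updates, for illustration only

def aggregate(channel_aware: bool) -> np.ndarray:
    """Aggregate the updates that survive the channel in one round."""
    ok = rng.random(K) < p
    if channel_aware:
        # Inverse-probability weighting: E[sum_k ok_k * u_k / (K p_k)] = mean_k u_k.
        return (updates[ok] / p[ok, None]).sum(axis=0) / K
    # Naive averaging over received updates is biased toward high-p_k devices.
    return updates[ok].mean(axis=0) if ok.any() else np.zeros(d)

for aware in (False, True):
    agg = np.mean([aggregate(aware) for _ in range(rounds)], axis=0)
    bias = np.linalg.norm(agg - updates.mean(axis=0))
    print(f"channel_aware={aware}: bias ~ {bias:.4f}")
```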
Federated learning (FL) is a promising distributed learning technique particularly suitable for wireless learning scenarios, since it can accomplish learning tasks without transporting raw data, thereby preserving data privacy and lowering network resource consumption. However, current works on FL over wireless networks do not thoroughly study the fundamental performance of wireless networks subject to communication outages caused by channel impairments and network interference. To accurately exploit the performance of FL over wireless networks, this paper proposes a novel intermittent FL model over a cellular-connected unmanned aerial vehicle (UAV) network, which characterizes the communication outages from the UAVs (clients) to their server and the data heterogeneity among the datasets at the UAVs. We propose a tractable analytical framework to derive the uplink outage probability and use it to devise a simulation-based approach to evaluate the performance of the proposed intermittent FL model. Our findings reveal how intermittent FL is affected by uplink communication outages and UAV deployment. Extensive numerical simulations are provided to show the consistency between the simulated and analytical performance of the proposed intermittent FL model.
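As a generic illustration of how an uplink outage probability drives intermittent participation (the paper derives its own expression; the Rayleigh-fading closed form and all numbers below are textbook assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

def outage_prob_rayleigh(mean_snr: float, threshold: float) -> float:
    """P(SNR < threshold) for an exponentially distributed (Rayleigh-fading) SNR."""
    return 1.0 - np.exp(-threshold / mean_snr)

mean_snr, threshold, rounds, num_uavs = 8.0, 2.0, 10_000, 20
p_out = outage_prob_rayleigh(mean_snr, threshold)

# Intermittent participation: a UAV's update reaches the server only if no outage.
snr = mean_snr * rng.exponential(size=(rounds, num_uavs))
participated = (snr >= threshold).mean()
print(f"analytical outage {p_out:.3f} vs simulated participation {participated:.3f} "
      f"(expected {1 - p_out:.3f})")
```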
In this work, we investigate the problem of an online trajectory design for an Unmanned Aerial Vehicle (UAV) in a Federated Learning (FL) setting where several different communities exist, each defined by a unique task to be learned. In this setting, spatially distributed devices belonging to each community collaboratively contribute towards training their community model via wireless links provided by the UAV. Accordingly, the UAV acts as a mobile orchestrator coordinating the transmissions and the learning schedule among the devices in each community, intending to accelerate the learning process of all tasks. We propose a heuristic metric as a proxy for the training performance of the different tasks. Capitalizing on this metric, a surrogate objective is defined which enables us to jointly optimize the UAV trajectory and the scheduling of the devices by employing convex optimization techniques and graph theory. The simulations illustrate that our solution outperforms other handpicked static and mobile UAV deployment baselines.
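One plausible reading of such a proxy metric, sketched below purely as an illustration (the paper's actual metric, trajectory optimization, and graph-theoretic scheduling are not reproduced here): the orchestrator serves the community whose task is both stale and under-trained.

```python
import numpy as np

num_communities, horizon = 3, 12
staleness = np.zeros(num_communities)   # slots since a community last trained
progress = np.zeros(num_communities)    # training slots each task has received

for t in range(horizon):
    staleness += 1
    # Hypothetical proxy metric: prioritize tasks that are both stale and under-trained.
    metric = staleness / (1.0 + progress)
    chosen = int(np.argmax(metric))     # community the UAV serves this slot
    staleness[chosen] = 0
    progress[chosen] += 1
    print(f"slot {t}: serve community {chosen}, metric={np.round(metric, 2)}")
```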
As data generation increasingly takes place on devices without a wired connection, machine learning (ML)-related traffic will become ubiquitous in wireless networks. Many studies have shown that traditional wireless protocols are inefficient or unsustainable to support ML, which creates the need for new wireless communication methods. In this survey, we give an exhaustive review of the state-of-the-art wireless methods that are specifically designed to support ML services over distributed datasets. Currently, there are two clear themes in the literature: analog over-the-air computation and digital radio resource management optimized for ML. This survey gives a comprehensive introduction to these methods, reviews the most important works, highlights open problems, and discusses application scenarios.
In this work, we propose a communication-efficient two-layer federated learning algorithm for distributed setups including a core server and multiple edge servers with clusters of devices. Assuming different learning tasks, clusters with the same task collaborate. To implement the algorithm over wireless links, we propose a scalable clustered over-the-air aggregation scheme for the uplink with a bandwidth-limited broadcast scheme for the downlink that requires only two single resource blocks for each algorithm iteration, independent of the number of edge servers and devices. This setup is faced with interference of devices in the uplink and interference of edge servers in the downlink, both of which must be modeled rigorously. We first develop a spatial model for the setup by modeling devices as a Poisson cluster process over the edge servers and quantify uplink and downlink error terms due to the interference. Accordingly, we present a comprehensive mathematical approach to derive the convergence bound for the proposed algorithm including any number of collaborating clusters in the setup and provide important special cases and design remarks. Finally, we show that despite the interference in the proposed uplink and downlink schemes, the proposed algorithm achieves high learning accuracy for a variety of parameters.
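A bare-bones sketch of the two-layer, same-task-collaboration structure (my toy illustration with noiseless arithmetic averages; the paper's over-the-air uplink, broadcast downlink, and interference terms are omitted; the cluster-to-task mapping below is invented):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
# Hypothetical setup: 4 clusters, clusters 0 and 2 share task A, clusters 1 and 3 task B.
task_of_cluster = ["A", "B", "A", "B"]
clusters = [rng.normal(size=(rng.integers(3, 6), d)) for _ in task_of_cluster]

# Layer 1: each edge server averages its own devices (over-the-air in the paper).
edge_models = np.stack([c.mean(axis=0) for c in clusters])

# Layer 2: the core server averages edge models per task; same-task clusters collaborate.
for task in sorted(set(task_of_cluster)):
    members = [i for i, t in enumerate(task_of_cluster) if t == task]
    global_model = edge_models[members].mean(axis=0)
    print(f"task {task}: aggregated over clusters {members}, model={np.round(global_model, 2)}")
```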
To meet the growing mobility demand of intra-city transportation, the concept of urban air mobility (UAM) has been proposed, in which vertical takeoff and landing (VTOL) aircraft are used to provide on-demand transportation services. In UAM, the aircraft operate in designated airspaces known as corridors that link the vertiports. A reliable communication network between ground base stations (GBSs) and aircraft enables UAM to make full use of the airspace and create a fast, efficient, and safe transportation system. In this paper, to characterize the wireless connectivity performance of UAM, a spatial model is proposed. For this setup, the distribution of the distance between an arbitrarily selected GBS and its associated aircraft, as well as the Laplace transform of the interference experienced by the GBS, are derived. Using these results, the signal-to-interference ratio (SIR)-based connectivity probability is determined to capture the connectivity performance of the UAM aircraft-to-ground communication network. Then, leveraging these connectivity results, a wireless-enabled asynchronous federated learning (AFL) framework that uses a Fourier neural network is proposed to tackle the challenging problem of turbulence prediction during UAM operations. For this AFL scheme, a staleness-aware global aggregation scheme is introduced to expedite the convergence to the optimal turbulence prediction model used by the UAM aircraft. Simulation results validate the theoretical derivations of the UAM wireless connectivity. The results also show that the proposed AFL framework converges to the optimal turbulence prediction model faster than synchronous federated learning baselines and a staleness-free AFL approach. Moreover, the results characterize the performance of the wireless connectivity and the convergence of the aircraft's turbulence model under different parameter settings, offering useful UAM design guidelines.
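A minimal sketch of staleness-aware aggregation in asynchronous FL — the polynomial decay and the toy arrivals are my assumptions, not this paper's exact weighting: updates that arrive late are mixed into the global model with a smaller weight.

```python
import numpy as np

def staleness_weight(tau: int, alpha: float = 0.5) -> float:
    """Down-weight an update that is tau rounds old (a common polynomial decay)."""
    return (1 + tau) ** (-alpha)

global_model = np.zeros(3)
# Hypothetical (update, staleness) pairs arriving asynchronously from aircraft.
arrivals = [(np.array([1.0, 0.0, 0.5]), 0),
            (np.array([0.2, 0.8, 0.1]), 4),
            (np.array([0.5, 0.5, 0.9]), 1)]

for update, tau in arrivals:
    w = staleness_weight(tau)
    global_model = (1 - w) * global_model + w * update
    print(f"staleness={tau}, weight={w:.2f}, model={np.round(global_model, 2)}")
```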
Unmanned aerial vehicle (UAV) swarms are considered as a promising technique for next-generation communication networks due to their flexibility, mobility, low cost, and the ability to collaboratively and autonomously provide services. Distributed learning (DL) enables UAV swarms to intelligently provide communication services, multi-directional remote surveillance, and target tracking. In this survey, we first introduce several popular DL algorithms such as federated learning (FL), multi-agent reinforcement learning (MARL), distributed inference, and split learning, and present a comprehensive overview of their applications for UAV swarms, such as trajectory design, power control, wireless resource allocation, user assignment, perception, and satellite communications. Then, we present several state-of-the-art applications of UAV swarms in wireless communication systems, such as reconfigurable intelligent surfaces (RIS), virtual reality (VR), and semantic communications, and discuss the problems and challenges that DL-enabled UAV swarms can solve in these applications. Finally, we describe open problems of using DL in UAV swarms and future research directions of DL-enabled UAV swarms. In summary, this article provides a comprehensive survey of various DL applications for UAV swarms across extensive scenarios.
The unprecedented surge of data volumes in wireless networks empowered with artificial intelligence (AI) opens up new horizons for providing ubiquitous data-driven intelligent services. Traditional cloud-centric machine learning (ML)-based services are implemented by centrally collecting datasets and training models. However, this conventional training technique encompasses two challenges: (i) high communication and energy costs due to increased data communication, and (ii) threats to data privacy by allowing untrusted parties to exploit this information. Recently, in light of these limitations, a new emerging technique, namely federated learning (FL), has been proposed to bring ML to the edge of wireless networks. FL can extract the benefits of data silos by training a global model in a distributed manner, orchestrated by an FL server. FL exploits decentralized datasets and the computing resources of participating clients to develop a generalized ML model without compromising data privacy. In this paper, we present a comprehensive survey of the fundamentals and enabling technologies of FL. Moreover, an extensive study is presented detailing various applications of FL in wireless networks and highlighting their challenges and limitations. The efficacy of FL is further explored with emerging prospective use cases beyond fifth-generation (B5G) and sixth-generation (6G) communication systems. The purpose of this survey is to provide an overview of the state-of-the-art FL applications in key wireless technologies that will serve as a foundation to establish a firm understanding of the topic. Finally, we offer a road forward for future research directions.
Federated learning (FL) has recently been revealed as a promising technique to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing a severe straggler issue. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in the single-relay case and show that our relay-assisted scheme achieves a smaller error than the scheme without relays. The analysis provides critical insights on relay deployment in the implementation of cooperative over-the-air FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
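To see why the weakest channel dominates over-the-air aggregation, consider this hedged toy model (standard channel-inversion power control with a per-device budget; the tripled gain stands in for a relay boosting the weakest link — the paper's actual relay protocol and optimization are not modeled here):

```python
import numpy as np

rng = np.random.default_rng(3)
K, P_max, noise_std = 8, 1.0, 0.1
h = np.sort(rng.rayleigh(scale=1.0, size=K))        # channel gains; h[0] is the weakest
x = rng.normal(size=K)                              # scalar local-model entries (unit power)

def ota_aggregate(gains: np.ndarray) -> float:
    # Channel inversion: device k transmits (eta / h_k) * x_k, and eta is capped by the
    # weakest channel so that every device respects the power budget P_max.
    eta = np.sqrt(P_max) * gains.min()
    y = np.sum(eta * x) + noise_std * rng.normal()  # superposed signal + receiver noise
    return y / (K * eta)                            # server estimate of mean(x)

plain = ota_aggregate(h)
boosted = h.copy()
boosted[0] *= 3.0                                   # a relay strengthening the weakest link
print(f"true mean {x.mean():+.3f}, no relay {plain:+.3f}, "
      f"relay-assisted {ota_aggregate(boosted):+.3f}")
```

A larger minimum gain permits a larger scaling factor eta, which shrinks the noise amplification 1/(K·eta) at the server — the same effect the relay deployment targets.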
Federated Learning (FL) is a collaborative machine learning (ML) framework that combines on-device training and server-based aggregation to train a common ML model among distributed agents. In this work, we propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems. Considering limited wireless communication resources, we investigate the effect of different scheduling policies and aggregation designs on the convergence performance. Driven by the importance of reducing the bias and variance of the aggregated model updates, we propose a scheduling policy that jointly considers the channel quality and training data representation of user devices. The effectiveness of our channel-aware data-importance-based scheduling policy, compared with state-of-the-art methods proposed for synchronous FL, is validated through simulations. Moreover, we show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
Federated learning (FL) is capable of performing large distributed machine learning tasks across multiple edge users by periodically aggregating trained local parameters. To address the key challenges of enabling FL over a wireless fog-cloud system (e.g., non-IID data, user heterogeneity), we first propose an efficient FL algorithm based on federated averaging (called FedFog) to perform local aggregation of gradient parameters at fog servers and global training updates in the cloud. Next, we employ FedFog in wireless fog-cloud systems by investigating a novel network-aware FL optimization problem that strikes a balance between the global loss and completion time. An iterative algorithm is then developed to obtain a precise measure of the system performance, which helps design an efficient stopping criterion to output an appropriate number of global rounds. To mitigate the straggler effect, we propose a flexible user aggregation strategy that first trains fast users to obtain a certain level of accuracy before allowing slow users to join the global training updates. Extensive numerical results using several real-world FL tasks are provided to verify the theoretical convergence of FedFog. We also show that the proposed co-design of FL and communication is essential to substantially improve resource utilization while achieving comparable accuracy of the learning model.
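A toy sketch of the flexible user aggregation idea — fast users train alone until an accuracy threshold, then slow users join. The accuracy dynamics below are a placeholder (accuracy simply creeps up with the number of participants), standing in for real FedAvg rounds:

```python
import numpy as np

rng = np.random.default_rng(5)
latency = rng.uniform(1.0, 10.0, 10)           # per-user round time (hypothetical)
fast = set(np.argsort(latency)[:5].tolist())   # phase 1: fastest half only

accuracy, threshold, rnd = 0.0, 0.80, 0
while accuracy < 0.95 and rnd < 200:
    users = fast if accuracy < threshold else set(range(len(latency)))
    # Placeholder for one FedAvg round over `users`; here accuracy just creeps upward,
    # faster once the slow users join and more data participates.
    accuracy += 0.004 * len(users)
    rnd += 1
print(f"phase switch at accuracy {threshold}, finished after {rnd} rounds")
```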
Motivated by the heterogeneous nature of the devices participating in large-scale federated learning (FL) optimization, we focus on an asynchronous serverless FL solution empowered by blockchain (BC) technology. In contrast to the mostly adopted FL approaches, which assume synchronous operation, we advocate an asynchronous method whereby model aggregation is done as clients submit their local updates. The asynchronous setting fits well with the federated optimization idea in practical large-scale settings with heterogeneous clients. Thus, it potentially leads to higher efficiency in terms of communication overhead and idle periods. To evaluate the learning completion delay of BC-enabled FL, we provide an analytical model based on batch service queue theory. Furthermore, we provide simulation results to assess the performance of both synchronous and asynchronous mechanisms. Important aspects involved in BC-enabled FL, such as the network size, link capacity, or user requirements, are put together and analyzed. As our results show, the synchronous setting leads to higher prediction accuracy than the asynchronous case. Nevertheless, asynchronous federated optimization provides much lower latency in many cases, thus becoming an appealing FL solution when dealing with large datasets, tough timing constraints (e.g., near-real-time applications), or highly varying training data.
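The latency gap between the two regimes can be seen with an idealized comparison (this is not the paper's batch-service queue model; the lognormal delays and the assumption that asynchronous aggregation tracks the mean rather than the maximum are simplifications for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
K, rounds = 20, 1000
delays = rng.lognormal(mean=0.0, sigma=0.8, size=(rounds, K))  # per-client round delays

# Synchronous: every round waits for the slowest client.
sync_latency = delays.max(axis=1).mean()
# Asynchronous (idealized): aggregation proceeds as updates arrive, so the effective
# per-update latency tracks the average client delay instead of the maximum.
async_latency = delays.mean()
print(f"sync {sync_latency:.2f} vs async {async_latency:.2f} time units per update")
```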
Federated learning (FL) is a collaborative machine learning framework that requires different clients (e.g., Internet of Things devices) to participate in the machine learning model training process by training and uploading their local models to an FL server in each global iteration. Upon receiving the local models from all the clients, the FL server generates a global model by aggregating the received local models. This traditional FL process may suffer from the straggler problem in heterogeneous client settings, where the FL server has to wait for slow clients to upload their local models in each global iteration, thus increasing the overall training time. One of the solutions is to set up a deadline and only the clients that can upload their local models before the deadline would be selected in the FL process. This solution may lead to a slow convergence rate and global model overfitting issues due to the limited client selection. In this paper, we propose the Latency awarE Semi-synchronous client Selection and mOdel aggregation for federated learNing (LESSON) method that allows all the clients to participate in the whole FL process but with different frequencies. That is, faster clients would be scheduled to upload their models more frequently than slow clients, thus resolving the straggler problem and accelerating the convergence speed, while avoiding model overfitting. Also, LESSON is capable of adjusting the tradeoff between the model accuracy and convergence rate by varying the deadline. Extensive simulations have been conducted to compare the performance of LESSON with the other two baseline methods, i.e., FedAvg and FedCS. The simulation results demonstrate that LESSON achieves faster convergence speed than FedAvg and FedCS, and higher model accuracy than FedCS.
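One plausible reading of the semi-synchronous frequency assignment, sketched below (the tiering rule and all numbers are my illustrative assumptions, not LESSON's exact grouping): a client whose train-plus-upload time spans k deadlines uploads every k-th round instead of being dropped.

```python
import math

deadline = 10.0                      # seconds per aggregation round (hypothetical)
latencies = [3.0, 8.0, 14.0, 27.0]   # per-client train+upload time (hypothetical)

for cid, lat in enumerate(latencies):
    # A client needing k deadlines per update uploads every k-th round, so slow
    # clients still participate in the whole FL process, just less frequently.
    k = max(1, math.ceil(lat / deadline))
    print(f"client {cid}: latency {lat:.0f}s -> uploads every {k} round(s)")
```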
The space-air-ground integrated network (SAGIN), one of the key technologies for next-generation mobile communication systems, can facilitate data transmission for users all over the world, especially in some remote areas where vast amounts of informative data are collected by Internet of remote things (IoRT) devices to support various data-driven artificial intelligence (AI) services. However, training AI models centrally with the assistance of SAGIN faces the challenges of highly constrained network topology, inefficient data transmission, and privacy issues. To tackle these challenges, we first propose a novel topology-aware federated learning framework for the SAGIN, namely Olive Branch Learning (OBL). Specifically, the IoRT devices in the ground layer leverage their private data to perform model training locally, while the air nodes in the air layer and the ring-structured low earth orbit (LEO) satellite constellation in the space layer are in charge of model aggregation (synchronization) at different scales. To further enhance communication efficiency and inference performance of OBL, an efficient Communication and Non-IID-aware Air node-Satellite Assignment (CNASA) algorithm is designed by taking the data class distribution of the air nodes as well as their geographic locations into account. Furthermore, we extend our OBL framework and CNASA algorithm to adapt to more complex multi-orbit satellite networks. We analyze the convergence of our OBL framework and conclude that the CNASA algorithm contributes to the fast convergence of the global model. Extensive experiments based on realistic datasets corroborate the superior performance of our algorithm over the benchmark policies.
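A heavily hedged sketch of an assignment rule in the spirit of CNASA (the greedy cost combining distance with class-distribution balance, and its weighting, are my inventions for illustration; the actual CNASA algorithm differs): each air node joins the satellite where its data keeps the pooled class distribution closest to uniform while staying geographically close.

```python
import numpy as np

rng = np.random.default_rng(7)
A, S, C = 6, 2, 4                                  # air nodes, satellites, data classes
pos_air = rng.uniform(0, 100, (A, 2))
pos_sat = rng.uniform(0, 100, (S, 2))
class_dist = rng.dirichlet(np.ones(C), size=A)     # per-air-node label distribution

assigned = {s: [] for s in range(S)}
for a in range(A):
    best, best_cost = 0, np.inf
    for s in range(S):
        dist = np.linalg.norm(pos_air[a] - pos_sat[s])
        pool = assigned[s] + [a]
        mix = class_dist[pool].mean(axis=0)
        # Non-IID awareness: prefer the satellite where adding this node keeps the
        # pooled class distribution closest to uniform (low divergence).
        div = np.abs(mix - 1.0 / C).sum()
        cost = 0.01 * dist + div                   # hypothetical weighting
        if cost < best_cost:
            best, best_cost = s, cost
    assigned[best].append(a)
print(assigned)
```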
Federated learning (FL) is a key enabler for efficient communication and computing, leveraging devices' distributed computing capabilities. However, applying FL in practice is challenging due to the local devices' heterogeneous energy, wireless channel conditions, and non-independently and identically distributed (non-IID) data distributions. To cope with these issues, this paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNN). Integrating FL with SNNs is challenging due to time-varying channel conditions and data distributions. In addition, existing multi-width SNN training algorithms are sensitive to the data distributions across devices, which makes SNN ill-suited for FL. Motivated by this, we propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models. By applying SC, SlimFL exchanges the superposition of multiple-width configurations decoded as many times as possible for a given communication throughput. Leveraging ST, SlimFL aligns the forward propagation of different width configurations while avoiding inter-width interference during backpropagation. We formally prove the convergence of SlimFL. The result reveals that SlimFL is not only communication-efficient but also deals with non-IID data distributions and poor channel conditions, which is also corroborated by data-intensive simulations.
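The nested-width idea behind superposition training can be shown on a toy logistic regression, where the half-width model is a prefix of the full weight vector and one update uses the summed gradients of both widths. This is a minimal sketch of the principle, not SlimFL's actual SNN architecture, superposition coding, or decoding procedure:

```python
import numpy as np

rng = np.random.default_rng(8)
n, d = 200, 8
X = rng.normal(size=(n, d))
y = (X[:, :4].sum(axis=1) > 0).astype(float)       # labels depend on the first 4 features

def loss_grad(w, width):
    """Logistic-loss gradient using only the first `width` weights (a nested sub-model)."""
    z = X[:, :width] @ w[:width]
    p = 1 / (1 + np.exp(-z))
    g = np.zeros_like(w)
    g[:width] = X[:, :width].T @ (p - y) / n
    return g

w, lr = np.zeros(d), 0.5
for step in range(300):
    # Superposition training: one update from the summed gradients of both width
    # configurations, so the narrow and full models learn in a single pass.
    w -= lr * (loss_grad(w, d // 2) + loss_grad(w, d))

for width in (d // 2, d):
    acc = (((X[:, :width] @ w[:width]) > 0) == y).mean()
    print(f"width {width}: accuracy {acc:.2f}")
```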
In this chapter, we will mainly focus on collaborative training across wireless devices. Training an ML model is equivalent to solving an optimization problem, and many distributed optimization algorithms have been developed over the last decades. These distributed ML algorithms provide data locality; that is, a joint model can be trained collaboratively while the data available at each participating device remains local. This addresses, to some extent, the privacy concern. They also provide computational scalability, as they allow exploiting computational resources distributed across many edge devices. However, in practice, this does not directly translate into a linear gain in the overall learning speed with the number of devices. This is partly due to the communication bottleneck limiting the overall computation speed. Additionally, wireless devices are highly heterogeneous in their computational capabilities, and both their computation speed and communication rate can be highly time-varying due to physical factors. Therefore, distributed learning algorithms, particularly those to be implemented at the wireless network edge, must be carefully designed, taking into account the impact of the time-varying communication network as well as the heterogeneous and stochastic computation capabilities of the devices.
Does federated learning (FL) work when both uplink and downlink communications have errors? How much communication noise can FL handle, and what is its impact on the learning performance? This work is devoted to answering these practically important questions by explicitly incorporating both uplink and downlink noisy channels in the FL pipeline. We present several novel convergence analyses of FL over simultaneously uplink and downlink noisy communication channels, which encompass full and partial client participation, direct model and model differential transmissions, and non-independent and identically distributed (non-IID) local datasets. These analyses characterize the sufficient conditions for FL over noisy channels to have the same convergence behavior as the ideal case of no communication error. More specifically, in order to maintain the O(1/T) convergence rate of FedAvg with perfect communications, the uplink and downlink signal-to-noise ratios (SNRs) for direct model transmissions should be controlled such that they scale as O(t^2), where t is the index of the communication round, but they can stay constant for model differential transmissions. The key insight of these theoretical results is a "flying under the radar" principle — stochastic gradient descent (SGD) is an inherently noisy process, and the uplink/downlink communication noises can be tolerated as long as they do not dominate the time-varying SGD noise. We exemplify these theoretical findings with two widely adopted communication techniques — transmit power control and diversity combining — and further validate their performance advantages over standard methods via extensive numerical experiments using several real-world FL tasks.
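The O(t^2) SNR schedule can be illustrated on a toy quadratic problem, where the transmitted model picks up channel noise whose power shrinks relative to the signal as rounds progress. This is an illustrative schedule under simplified assumptions (one noisy round trip per round, quadratic loss), not the paper's power-control policy:

```python
import numpy as np

rng = np.random.default_rng(9)
T, d, snr0 = 50, 4, 1.0
w = np.zeros(d)
target = np.ones(d)

for t in range(1, T + 1):
    grad = w - target                       # gradient of 0.5 * ||w - target||^2
    w -= 0.1 * grad                         # local SGD step (noiseless here)
    snr = snr0 * t ** 2                     # SNR grown as O(t^2), direct model transmission
    noise_std = np.linalg.norm(w) / np.sqrt(snr)
    w = w + noise_std * rng.normal(size=d) / np.sqrt(d)   # noisy uplink/downlink round trip
print("final distance to optimum:", np.round(np.linalg.norm(w - target), 4))
```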
With the explosive growth of data and wireless devices, federated learning (FL) has become a promising technology for large-scale intelligent systems. Utilizing the analog superposition of electromagnetic waves, over-the-air computation is an appealing approach to reduce the communication burden of FL aggregation. However, with the urgent demand for intelligent systems, training multiple tasks with over-the-air computation further aggravates the scarcity of communication resources. This problem can be alleviated to some extent by training multiple tasks simultaneously over shared communication resources, but the latter inevitably brings about the problem of inter-task interference. In this paper, we study over-the-air multi-task FL (OA-MTFL) over multiple-input multiple-output (MIMO) interference channels. We propose a novel model aggregation method for the alignment of local gradients across different devices, which alleviates the straggler problem that widely exists in over-the-air computation due to channel heterogeneity. By considering the spatial correlation between devices, a unified communication-computation analysis framework is established for the proposed OA-MTFL scheme, and an optimization problem for designing transceiver beamforming and device selection is formulated. We develop an algorithm to solve this problem by using alternating optimization (AO) and fractional programming (FP), which effectively mitigates the impact of inter-task interference on FL. We show that, thanks to the new model aggregation method, device selection is no longer essential to our scheme, thus avoiding the heavy computational burden incurred by implementing device selection. Numerical results demonstrate the correctness of the analysis and the outstanding performance of the proposed scheme.
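One simple flavor of gradient alignment, sketched under strong simplifications (no MIMO channels or beamforming here; normalize-then-rescale is a generic alignment trick, not this paper's aggregation method): each device contributes a unit-norm gradient plus its scalar norm, so no single device dominates the superposition.

```python
import numpy as np

rng = np.random.default_rng(10)
K, d = 6, 5
g = rng.normal(size=(K, d)) * rng.uniform(0.1, 10.0, (K, 1))  # heterogeneous gradient scales

# Alignment by normalization: every device transmits a unit-norm gradient plus its
# scalar norm, so no single device (or deeply faded channel) dominates the sum.
unit = g / np.linalg.norm(g, axis=1, keepdims=True)
norms = np.linalg.norm(g, axis=1)
aligned = norms.mean() * unit.mean(axis=0)        # server rescales the superposed signal

print("naive mean  :", np.round(g.mean(axis=0), 2))
print("aligned mean:", np.round(aligned, 2))
```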
We examine federated learning (FL) with over-the-air (OTA) aggregation, where mobile users (MUs) aim to reach a consensus on a global model with the help of a parameter server (PS) that aggregates the local gradients. In OTA FL, MUs train their models using local data at every training round and transmit their gradients simultaneously over the same frequency band in an uncoded fashion. Based on the received signal of the superposed gradients, the PS performs a global model update. While the communication cost of OTA FL is significantly reduced, it is susceptible to adverse channel effects and noise. Employing multiple antennas at the receiver side can reduce these effects; however, the path loss is still a limiting factor for users located far away from the PS. To ameliorate this issue, in this paper, we propose a wireless-based hierarchical FL scheme that uses intermediate servers (ISs) to form clusters in the areas where the MUs are more densely located. Our scheme utilizes OTA cluster aggregations for the communication of the MUs with their corresponding IS, and OTA global aggregations from the ISs to the PS. We present a convergence analysis for the proposed algorithm and show, through numerical evaluations of the derived analytical expressions and experimental results, that utilizing ISs results in faster convergence and better performance than OTA FL alone, while using less transmit power. We also validate the performance results using different numbers of cluster iterations with different datasets and data distributions. We conclude that the best choice of the cluster aggregation parameters depends on the data distribution among the MUs and the clusters.
Federated learning (FL) is an effective distributed machine learning paradigm that employs private datasets in a privacy-preserving manner. The main challenges of FL are that end devices usually possess various computation and communication capabilities, and that their training data are not independent and identically distributed (non-IID). Due to the limited communication bandwidth and unstable availability of such devices in a mobile network, only a fraction of the end devices (also referred to as participants or clients) can be selected in each round. Hence, it is of paramount importance to utilize an efficient participant selection scheme to maximize the performance of FL, including the final model's accuracy and training time. In this paper, we provide a review of participant selection techniques for FL. First, we introduce FL and highlight the main challenges during participant selection. Then, we review the existing studies and categorize them based on their solutions. Finally, based on our analysis of the state of the art in this topic area, we provide some future directions for participant selection in FL.