Improving the interactivity and interconnectivity between people is one of the highlights of the Metaverse. The Metaverse relies on a core approach, digital twinning, which is a means to replicate physical-world objects, people, actions, and scenes into the virtual world. Being able to access scenes and information associated with the physical world in real time and under mobility is essential for developing highly accessible, interactive, and interconnected experiences for all users. This capability allows users at other locations to access high-quality real-world and up-to-date information about events happening at another location, and to socialize with others hyper-interactively. Nevertheless, the continuous, smooth updating of virtual-world scenes generated by others is a challenging task, due to the data size of virtual-world graphics and the requirement for low-latency transmission. With the development of mobile augmented reality (MAR), users can interact via the Metaverse in a highly interactive manner, even under mobility. Hence, in our work, we consider an environment consisting of users in a moving Internet of Vehicles (IoV), downloading real-time virtual-world updates from Metaverse Service Provider Cell Stations (MSPCSs) via wireless communications. We design an environment with multiple cell stations, where the users' virtual-world graphics download tasks are handed over between cell stations. As transmission latency is the main concern for receiving virtual-world updates under mobility, our work aims to allocate system resources to minimize the total time that users in vehicles spend downloading their virtual-world scenes from the cell stations. We utilize deep reinforcement learning and evaluate the performance of the algorithms under different environment configurations. Our work provides a use case of the Metaverse over AI-enabled 6G communications.
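As a rough illustration of the setting above, the following gym-style environment skeleton captures the state (remaining scene data, vehicle positions), action (per-user bandwidth shares), and reward (time users spend downloading) described in the abstract. All names, dynamics, and constants are our own illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

class MetaverseDownloadEnv:
    """Toy environment: vehicles download virtual-world scenes from cell
    stations; the agent allocates bandwidth to minimize total download time.
    All dynamics below are illustrative assumptions, not the paper's model.
    """

    def __init__(self, n_users=4, total_bandwidth=20e6):
        self.n_users = n_users
        self.total_bandwidth = total_bandwidth  # Hz per station (assumed)
        self.reset()

    def reset(self):
        self.remaining_bits = np.full(self.n_users, 5e8)  # scene sizes (assumed)
        self.positions = np.random.uniform(0, 3000, self.n_users)
        return self._obs()

    def _obs(self):
        # State: remaining data + user positions (handover is position-driven).
        return np.concatenate([self.remaining_bits / 5e8,
                               self.positions / 3000.0])

    def step(self, action):
        # action: per-user fraction of the serving station's bandwidth.
        alloc = np.clip(action, 0, 1)
        alloc = alloc / max(alloc.sum(), 1e-9) * self.total_bandwidth
        snr = 10 ** (np.random.uniform(5, 25, self.n_users) / 10)  # toy channel
        rate = alloc * np.log2(1 + snr)              # Shannon rate per user
        self.remaining_bits = np.maximum(self.remaining_bits - rate, 0)
        self.positions += np.random.uniform(10, 30, self.n_users)  # mobility
        done = bool((self.remaining_bits == 0).all())
        # Reward: negative count of users still downloading, so the return
        # is the (negated) total time spent downloading across users.
        reward = -float((self.remaining_bits > 0).sum())
        return self._obs(), reward, done, {}
```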
Technology advancements in wireless communications and high-performance Extended Reality (XR) have empowered the developments of the Metaverse. The demand for Metaverse applications, and hence real-time digital twinning of real-world scenes, is increasing. Nevertheless, the replication of 2D physical world images into 3D virtual world scenes is computationally intensive and requires computation offloading. The disparity in transmitted scene dimension (2D as opposed to 3D) leads to asymmetric data sizes in uplink (UL) and downlink (DL). To ensure the reliability and low latency of the system, we consider an asynchronous joint UL-DL scenario where in the UL stage, the smaller data size of the physical world scenes captured by multiple extended reality users (XUs) will be uploaded to the Metaverse Console (MC) to be constructed and rendered. In the DL stage, the larger-size 3D virtual world scenes need to be transmitted back to the XUs. The decisions pertaining to computation offloading and channel assignment are optimized in the UL stage, and the MC will optimize power allocation for users assigned with a channel in the UL transmission stage. Several problems arise therefrom: (i) an interactive multi-process chain, specifically an Asynchronous Markov Decision Process (AMDP), (ii) joint optimization in multiple processes, and (iii) high-dimensional objective functions, or hybrid reward scenarios. To meet the system's reliability and latency requirements, we design a novel multi-agent reinforcement learning algorithm structure, namely Asynchronous Actors Hybrid Critic (AAHC). Extensive experiments demonstrate that, compared to proposed baselines, AAHC obtains better solutions with favorable training time.
The future Internet involves several emerging technologies such as 5G and beyond-5G (B5G) networks, vehicular networks, unmanned aerial vehicle (UAV) networks, and the Internet of Things (IoT). Moreover, the future Internet is becoming heterogeneous and decentralized, with a large number of involved network entities. Each entity may need to make its local decisions to improve network performance under dynamic and uncertain network environments. Standard learning algorithms, such as single-agent reinforcement learning (RL) or deep reinforcement learning (DRL), have recently been used to enable each network entity, as an agent, to adaptively learn an optimal decision-making policy through interaction with an unknown environment. However, such algorithms fail to model the cooperation or competition among network entities, and simply treat other entities as a part of the environment, which may result in the non-stationarity issue. Multi-agent reinforcement learning (MARL) allows each network entity to learn its optimal policy by observing not only the environment but also the policies of other entities. As a result, MARL can significantly improve the learning efficiency of the network entities, and it has recently been used to solve various issues in emerging networks. In this paper, we therefore review the applications of MARL in emerging networks. In particular, we provide a tutorial on MARL together with a comprehensive survey of its applications in the next-generation Internet. We first introduce single-agent RL and MARL. Then, we review a number of applications of MARL to solve emerging issues in the future Internet, including network access, transmit power control, computation offloading, content caching, packet routing, trajectory design for UAV-aided networks, and network security.
Unmanned aerial vehicles (UAVs) serving as aerial base stations can deliver time-sensitive packets from Internet of Things (IoT) devices to a nearby terrestrial base station (TBS). Scheduling packets in such UAV-assisted IoT networks to ensure fresh (or up-to-date) packets from the IoT devices at the TBS is a challenging problem, as it involves two simultaneous steps: (i) sampling the packets generated at the IoT devices by the UAVs [hop-1], and (ii) updating the sampled packets from the UAVs to the TBS [hop-2]. To address this, we propose Age-of-Information (AoI) scheduling algorithms for two-hop UAV-assisted IoT networks. First, we propose a low-complexity AoI scheduler, termed MAF-MAD, which employs the maximum AoI first (MAF) policy for sampling at the UAVs (hop-1) and the maximum AoI difference (MAD) policy for updating from the UAVs to the TBS (hop-2). We prove that MAF-MAD is the optimal AoI scheduler under ideal conditions (lossless wireless channels and generate-at-will traffic at the IoT devices). In contrast, for general conditions (lossy channel conditions and varying periodic traffic generation at the IoT devices), a deep reinforcement learning algorithm, namely a proximal policy optimization (PPO)-based scheduler, is proposed. Simulation results show that the proposed PPO-based scheduler outperforms other schedulers such as MAF-MAD, MAF, and round-robin in all considered general scenarios.
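Both hop-level policies in MAF-MAD are simple greedy rules over AoI bookkeeping, so one scheduling step can be sketched in a few lines. The sketch below assumes AoI arrays indexed by UAV and IoT device; the variable names and exact tie-breaking are ours, not the paper's.

```python
import numpy as np

def maf_mad_step(aoi_at_uav, aoi_at_tbs):
    """One MAF-MAD decision under ideal (lossless) conditions.

    aoi_at_uav[k, i]: AoI of IoT device i's data held at UAV k.
    aoi_at_tbs[i]:    AoI of device i's data at the TBS.
    Returns (device to sample per UAV, UAV chosen to update the TBS).
    """
    # Hop-1 (MAF): each UAV samples the device whose data it holds
    # with the maximum AoI.
    sample = aoi_at_uav.argmax(axis=1)

    # Hop-2 (MAD): the UAV whose update would reduce the TBS-side AoI the
    # most transmits, i.e., the one with the maximum difference between the
    # AoI at the TBS and the AoI of the same device's packet at the UAV.
    diff = aoi_at_tbs[None, :] - aoi_at_uav          # per (UAV, device)
    update_uav = int(diff.max(axis=1).argmax())
    return sample, update_uav
```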
Multi-access edge computing (MEC) is an emerging computing paradigm that extends cloud computing to the network edge to support resource-intensive applications on mobile devices. As a crucial problem in MEC, service migration needs to decide how to migrate user services to maintain quality of service when users roam among MEC servers with limited coverage and capacity. However, finding an optimal migration policy is intractable due to the dynamic MEC environment and user mobility. Many existing studies make centralized migration decisions based on complete system-level information, which is time-consuming and lacks desirable scalability. To address these challenges, we propose a novel learning-driven method that is user-centric and can make effective online migration decisions using incomplete system-level information. Specifically, the service migration problem is modeled as a partially observable Markov decision process (POMDP). To solve the POMDP, we design a new encoder network that combines a long short-term memory (LSTM) network and an embedding matrix to effectively extract hidden information, and we further propose a tailored off-policy actor-critic algorithm for efficient training. Extensive experimental results based on real-world mobility traces demonstrate that this new method consistently outperforms both heuristic and state-of-the-art learning-driven algorithms and can achieve near-optimal results on various MEC scenarios.
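A minimal sketch of the described encoder, assuming PyTorch and illustrative dimensions: an embedding matrix handles discrete inputs (e.g., the identity of the serving MEC server), and an LSTM digests the observation history so the actor and critic heads can act on a belief-like latent state.

```python
import torch
import torch.nn as nn

class MigrationEncoder(nn.Module):
    """Sketch of the encoder idea: an embedding matrix turns discrete IDs
    into dense vectors, and an LSTM summarizes the observation history to
    compensate for partial observability. Dimensions and the choice of
    inputs are illustrative assumptions, not the paper's exact design.
    """

    def __init__(self, n_servers=64, embed_dim=16, obs_dim=8, hidden=128):
        super().__init__()
        self.server_embed = nn.Embedding(n_servers, embed_dim)
        self.lstm = nn.LSTM(obs_dim + embed_dim, hidden, batch_first=True)

    def forward(self, obs_seq, server_ids):
        # obs_seq:    (batch, T, obs_dim) continuous local observations
        # server_ids: (batch, T) discrete serving-server indices
        emb = self.server_embed(server_ids)            # (batch, T, embed_dim)
        x = torch.cat([obs_seq, emb], dim=-1)
        out, _ = self.lstm(x)
        return out[:, -1]  # latent state fed to the actor and critic heads
```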
Unmanned aerial vehicle (UAV) swarms are considered a promising technique for next-generation communication networks due to their flexibility, mobility, low cost, and the ability to collaboratively and autonomously provide services. Distributed learning (DL) enables UAV swarms to intelligently provide communication services, multi-directional remote surveillance, and target tracking. In this survey, we first introduce several popular DL algorithms such as federated learning (FL), multi-agent reinforcement learning (MARL), distributed inference, and split learning, and present a comprehensive overview of their applications for UAV swarms, such as trajectory design, power control, wireless resource allocation, user assignment, perception, and satellite communications. Then, we present several state-of-the-art applications of UAV swarms in wireless communication systems, such as reconfigurable intelligent surfaces (RIS), virtual reality (VR), and semantic communications, and discuss the problems and challenges that DL-enabled UAV swarms can solve in these applications. Finally, we describe open problems of using DL in UAV swarms and future research directions for DL-enabled UAV swarms. In summary, this article provides a comprehensive survey of various DL applications for UAV swarms across a wide range of scenarios.
The high emissions and low energy efficiency caused by internal combustion engines (ICEs) have become unacceptable under environmental regulations and the energy crisis. As a promising alternative solution, multi-power source electric vehicles (MPS-EVs) introduce different clean energy systems to improve powertrain efficiency. The energy management strategy (EMS) is a critical technology for MPS-EVs to maximize efficiency, fuel economy, and range. Reinforcement learning (RL) has become an effective methodology for the development of EMS. RL has received continuous attention and research, but there is still a lack of systematic analysis of the design elements of RL-based EMS. To this end, this paper presents an in-depth analysis of the current research on RL-based EMS (RL-EMS) and summarizes the design elements of RL-based EMS. This paper first summarizes the previous applications of RL in EMS from five aspects: algorithm, perception scheme, decision scheme, reward function, and innovative training method. The contribution of advanced algorithms to the training effect is shown, the perception and control schemes in the literature are analyzed in detail, different reward function settings are classified, and innovative training methods with their roles are elaborated. Finally, by comparing the development routes of RL and RL-EMS, this paper identifies the gap between advanced RL solutions and existing RL-EMS research, and suggests potential directions for implementing advanced artificial intelligence (AI) solutions in EMS.
The deployment flexibility and maneuverability of Unmanned Aerial Vehicles (UAVs) have increased their adoption in various applications, such as wildfire tracking, border monitoring, etc. In many critical applications, UAVs capture images and other sensory data and then send the captured data to remote servers for inference and data processing tasks. However, this approach is not always practical in real-time applications due to connection instability, limited bandwidth, and end-to-end latency. One promising solution is to divide the inference requests into multiple parts (layers or segments), with each part being executed on a different UAV based on the available resources. Furthermore, some applications require the UAVs to traverse certain areas and capture incidents; thus, planning their paths becomes critical, particularly to reduce the latency of the collaborative inference process. Specifically, planning the UAVs' trajectories can reduce data transmission latency by communicating with devices in close proximity while mitigating transmission interference. This work aims to design a model for distributed collaborative inference requests and path planning in a UAV swarm while respecting the resource constraints due to the computational load and memory usage of the inference requests. The model is formulated as an optimization problem that aims to minimize latency. The formulated problem is NP-hard, so finding the optimal solution is quite complex; thus, this paper introduces a real-time and dynamic solution for online applications using deep reinforcement learning. We conduct extensive simulations and compare our results to state-of-the-art studies, demonstrating that our model outperforms the competing models.
The scale of Internet-connected systems has increased considerably, and these systems are exposed to cyberattacks more than ever. The complexity and dynamics of cyberattacks require protecting mechanisms to be responsive, adaptive, and scalable. Machine learning, or more specifically deep reinforcement learning (DRL), methods have been proposed widely to address these issues. By incorporating deep learning into traditional RL, DRL is able to solve complex, dynamic, and especially high-dimensional cyber defense problems. This paper presents a survey of DRL approaches developed for cybersecurity. We touch on different vital aspects, including DRL-based security methods for cyber-physical systems, autonomous intrusion detection techniques, and multi-agent DRL-based game-theoretic simulations for defense strategies against cyberattacks. Extensive discussions and future research directions on DRL-based cybersecurity are also given. We expect that this comprehensive review provides the foundations for, and facilitates, future studies on exploring the potential of emerging DRL to cope with increasingly complex cybersecurity problems.
The Metaverse can be considered the extension of the present-day web, which integrates the physical and virtual worlds, delivering hyper-realistic user experiences. The inception of the Metaverse brings forth many ecosystem services such as content creation, social entertainment, in-world value transfer, intelligent traffic, and healthcare. These services are compute-intensive and require computation offloading onto a Metaverse edge computing server (MECS). Existing Metaverse edge computing approaches do not efficiently and effectively handle resource allocation to ensure the fluid, seamless and hyper-realistic Metaverse experience required for Metaverse ecosystem services. Therefore, we introduce a new Metaverse-compatible, Unified, User and Task (UUT) centered, artificial intelligence (AI)-based mobile edge computing (MEC) paradigm, which serves as a concept upon which future AI control algorithms could be built to develop a more user- and task-focused MEC.
In heterogeneous networks (HetNets), the overlap of small cells and the macro cell causes severe cross-tier interference. Although there exist some approaches to address this problem, they usually require global channel state information, which is hard to obtain in practice, and only obtain sub-optimal power allocation policies with high computational complexity. To overcome these limitations, we propose a multi-agent deep reinforcement learning (MADRL) based power control scheme for the HetNet, where each access point makes power control decisions independently based on local information. To promote cooperation among agents, we develop a penalty-based Q learning (PQL) algorithm for MADRL systems. By introducing regularization terms in the loss function, each agent tends to choose an experienced action with high reward when revisiting a state, and thus the policy updating speed slows down. In this way, an agent's policy can be learned by other agents more easily, resulting in a more efficient collaboration process. We then implement the proposed PQL in the considered HetNet and compare it with other distributed-training-and-execution (DTE) algorithms. Simulation results show that our proposed PQL can learn the desired power control policy from a dynamic environment where the locations of users change episodically, and that it outperforms existing DTE MADRL algorithms.
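The abstract does not spell out the exact regularization terms of PQL, so the following is only one plausible instantiation of the idea, assuming a DQN-style agent: a proximal penalty keeps each agent's Q-values close to a frozen recent copy, slowing policy changes so that other agents can track them.

```python
import torch
import torch.nn.functional as F

def pql_loss(q_net, q_target, q_prev, batch, gamma=0.99, lam=0.1):
    """Sketch of a penalty-based Q-learning objective. The abstract only
    states that regularization slows down policy updates so peers can track
    each agent's policy; the proximal term below is one plausible
    instantiation, not the paper's exact formula.
    """
    s, a, r, s2, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1 - done) * q_target(s2).max(dim=1).values
        q_old = q_prev(s)                    # frozen copy of a recent policy
    td_loss = F.mse_loss(q_sa, target)
    # Penalty: keep the full Q-vector close to the previous policy's values,
    # so the greedy action changes slowly when a state is revisited.
    penalty = F.mse_loss(q_net(s), q_old)
    return td_loss + lam * penalty
```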
For orthogonal multiple access (OMA) systems, the number of served user equipments (UEs) is limited to the number of available orthogonal resources. On the other hand, non-orthogonal multiple access (NOMA) schemes allow multiple UEs to use the same orthogonal resource. This extra degree of freedom introduces new challenges for resource allocation. Buffer state information (BSI), such as the size and age of packets waiting for transmission, can be used to improve scheduling in OMA systems. In this paper, we investigate the impact of BSI on the performance of a centralized scheduler in an uplink multi-carrier NOMA scenario with UEs having various data rate and latency requirements. To handle the large combinatorial space of assigning UEs to the resources, we propose a novel scheduler based on actor-critic reinforcement learning that incorporates BSI. Training and evaluation are performed using Nokia's "Wireless Suite". We propose various novel techniques to stabilize and speed up training. The proposed scheduler outperforms benchmark schedulers.
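As a small illustration of how BSI can enter the scheduler's observation, the sketch below assembles per-UE features from packet sizes and ages; the feature choice and field names are our assumptions, not the exact state design of the paper.

```python
import numpy as np

def build_scheduler_state(buffers, channel_quality):
    """Sketch: assemble a centralized NOMA scheduler observation that
    includes buffer state information (BSI). Field names are illustrative.

    buffers: per-UE list of (packet_size_bytes, packet_age_ms) tuples.
    channel_quality: per-UE scalar channel indicator (e.g., CQI).
    """
    feats = []
    for ue_buf, cq in zip(buffers, channel_quality):
        sizes = np.array([p[0] for p in ue_buf], dtype=float)
        ages = np.array([p[1] for p in ue_buf], dtype=float)
        feats.append([
            cq,
            sizes.sum(),                        # backlog in the UE's buffer
            ages.max() if len(ages) else 0.0,   # head-of-line delay
            len(ue_buf),                        # number of queued packets
        ])
    return np.asarray(feats, dtype=np.float32).ravel()
```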
Next-generation (NextG) networks are expected to support demanding tactile internet applications such as augmented reality and connected autonomous vehicles. While recent innovations bring the promise of larger link capacity, their sensitivity to the environment and erratic performance defy traditional model-based control rationales. Zero-touch data-driven approaches can improve the ability of the network to adapt to the current operating conditions. Tools such as reinforcement learning (RL) algorithms can build optimal control policies solely based on a history of observations. Specifically, deep RL (DRL), which uses a deep neural network (DNN) as a predictor, has been shown to achieve good performance even in complex environments and with high-dimensional inputs. However, the training of DRL models requires a large amount of data, which may limit their adaptability to the ever-evolving statistics of the underlying environment. Moreover, wireless networks are inherently distributed systems, where centralized DRL approaches would require excessive data exchange, while fully distributed approaches may result in slower convergence rates and performance degradation. In this paper, to address these challenges, we propose a federated learning (FL) approach to DRL, which we refer to as federated DRL (F-DRL), where base stations (BSs) collaboratively train the embedded DNN by sharing only the model's weights rather than training data. We evaluate two distinct versions of F-DRL, value-based and policy-based, and show the superior performance they achieve compared to both distributed and centralized DRL.
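The F-DRL aggregation step can be sketched as FedAvg-style averaging over the base stations' DNN weights; equal weighting and the function shape below are assumptions for illustration, not the paper's exact protocol.

```python
import copy
import torch

def federated_average(bs_models, weights=None):
    """Sketch of the F-DRL aggregation step: base stations share only DNN
    weights, and a coordinator averages them (FedAvg-style). Equal weighting
    is assumed here; the paper may weight BSs differently.
    """
    if weights is None:
        weights = [1.0 / len(bs_models)] * len(bs_models)
    global_state = copy.deepcopy(bs_models[0].state_dict())
    for key in global_state:
        global_state[key] = sum(
            w * m.state_dict()[key] for w, m in zip(weights, bs_models)
        )
    for m in bs_models:          # broadcast the aggregated weights back
        m.load_state_dict(global_state)
    return global_state
```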
Wireless communication in the terahertz band (0.1–10 THz) is envisioned as one of the key enabling technologies for future sixth generation (6G) wireless communication systems, beyond massive multiple-input multiple-output (massive MIMO) technology. However, the very high propagation attenuation and molecular absorption at THz frequencies often limit the signal transmission distance and coverage range. Motivated by recent breakthroughs on reconfigurable intelligent surfaces (RISs) for realizing smart radio propagation environments, we propose a novel hybrid beamforming scheme for multi-hop RIS-assisted communication networks to improve coverage at THz-band frequencies. In particular, multiple passive and controllable RISs are deployed to assist the transmission between the base station (BS) and multiple single-antenna users. We investigate the joint design of the digital beamforming matrix at the BS and the analog beamforming matrices at the RISs, leveraging recent advances in deep reinforcement learning (DRL) to combat the propagation loss. To improve the convergence of the proposed DRL-based algorithm, two algorithms are then designed to initialize the digital beamforming and analog beamforming matrices using an alternating optimization technique. Simulation results show that our proposed scheme is able to improve the coverage range of THz communications by 50% compared with the benchmarks. Furthermore, it is also shown that our proposed DRL-based method is a state-of-the-art method for solving the NP-hard beamforming problem, especially when the signals in the RIS-assisted THz communication networks experience multiple hops.
Fog radio access network (F-RAN) is a promising technology in which user mobile devices (MDs) can offload computation tasks to nearby fog access points (F-APs). Due to the limited resources of F-APs, it is important to design an efficient task offloading scheme. In this paper, by considering a time-varying network environment, a dynamic computation offloading and resource allocation problem in F-RANs is formulated to minimize the task execution delay and energy consumption of MDs. To solve the problem, a federated deep reinforcement learning (DRL)-based algorithm is proposed, where the deep deterministic policy gradient (DDPG) algorithm performs computation offloading and resource allocation in each F-AP. Federated learning is exploited to train the DDPG agents so as to decrease the computational complexity of the training process and protect user privacy. Simulation results show that the proposed federated DDPG algorithm can achieve lower task execution delay and energy consumption of MDs more quickly compared with other existing strategies.
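For reference, the per-F-AP learner is standard DDPG; a compact sketch of one update step (critic TD regression, deterministic policy gradient, Polyak target updates) is shown below, with network shapes and hyperparameters assumed.

```python
import torch
import torch.nn.functional as F

def ddpg_update(actor, critic, actor_t, critic_t, batch,
                actor_opt, critic_opt, gamma=0.99, tau=0.005):
    """Sketch of the per-F-AP DDPG update performed between federated
    aggregation rounds. Continuous actions would model offloading ratios and
    resource shares; shapes and hyperparameters are assumptions.
    """
    s, a, r, s2, done = batch

    with torch.no_grad():
        q_next = critic_t(s2, actor_t(s2))
        target = r + gamma * (1 - done) * q_next
    critic_loss = F.mse_loss(critic(s, a), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(s, actor(s)).mean()   # deterministic policy gradient
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak averaging of the target networks.
    for net, net_t in ((actor, actor_t), (critic, critic_t)):
        for p, p_t in zip(net.parameters(), net_t.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)
```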
Asset allocation (or portfolio management) is the task of determining how to optimally allocate funds of a finite budget into a range of financial instruments/assets such as stocks. This study investigates the performance of reinforcement learning (RL) applied to portfolio management using model-free deep RL agents. We trained several RL agents on real-world stock prices to learn how to perform asset allocation, and compared their performance against some baseline agents as well as against each other to understand which classes of agents perform better. From our analysis, RL agents can perform the task of portfolio management since they significantly outperformed the baseline agents (random allocation and uniform allocation). Four RL agents (A2C, SAC, PPO, and TRPO) outperformed the best baseline, MPT, overall. This demonstrates the ability of RL agents to uncover more profitable trading strategies. Furthermore, there were no significant performance differences between value-based and policy-based RL agents. Actor-critic agents performed better than other types of agents. Also, on-policy agents performed better than off-policy agents because they are better at policy evaluation, and sample efficiency is not a significant issue in portfolio management. This study shows that RL agents can substantially improve asset allocation since they outperform strong baselines. Based on our analysis, on-policy, actor-critic RL agents show the most promise.
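To make the task concrete, a single environment step for such an agent might look as follows: the action is an allocation vector, and the reward is the log portfolio return net of an assumed proportional transaction cost. This is a generic sketch, not the exact environment used in the study.

```python
import numpy as np

def portfolio_step(weights, prices_t, prices_t1, cost=1e-3, prev_weights=None):
    """Sketch of a one-step portfolio transition: the agent's action is an
    allocation vector over assets; the reward is the log return net of an
    assumed proportional transaction cost.
    """
    weights = np.clip(weights, 0, None)
    weights = weights / max(weights.sum(), 1e-9)       # long-only simplex
    asset_returns = prices_t1 / prices_t               # per-asset gross return
    gross = float(weights @ asset_returns)             # portfolio gross return
    turnover = 0.0
    if prev_weights is not None:
        turnover = np.abs(weights - prev_weights).sum()
    reward = np.log(max(gross - cost * turnover, 1e-9))  # log net return
    return reward, weights
```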
The anticipated increase in the count of IoT devices in the coming years motivates the development of efficient algorithms that can help in their effective management while keeping power consumption low. In this paper, we propose an intelligent multi-channel resource allocation algorithm for dense LoRa networks, termed LoRaDRL, and provide a detailed performance evaluation. Our results show that the proposed algorithm not only significantly improves the packet delivery ratio (PDR) of LoRaWAN but is also able to support mobile end devices (EDs) while ensuring lower power consumption, hence increasing the lifetime and capacity of the network. Most of the previous works focus on proposing different MAC protocols for improving the network capacity, e.g., modifications to LoRaWAN, delays before transmission, etc. We show that by using LoRaDRL, we can achieve the same efficiency with ALOHA as compared to LoRaSim and LoRa-MAB, while moving the complexity from the EDs to the gateway, thus making the EDs simpler and cheaper. Furthermore, we test the performance of LoRaDRL under a large-scale frequency jamming attack and show its adaptiveness to changes in the environment. We show that LoRaDRL improves the performance of state-of-the-art techniques, resulting in more than a 500% improvement in PDR compared with learning-based techniques.
Recently, deploying deep neural network (DNN) models via collaborative inference, which splits a pre-trained model into two parts and executes them on the user equipment (UE) and the edge server respectively, has become attractive. However, the large intermediate features of DNNs impede flexible decoupling, and existing approaches either focus on the single-UE scenario or simply define tasks by the required CPU cycles while ignoring the indivisibility of individual DNN layers. In this paper, we study the multi-agent collaborative inference scenario, where a single edge server coordinates the inference of multiple UEs. Our goal is to achieve fast and energy-efficient inference for all UEs. To achieve this goal, we first design a lightweight autoencoder-based method to compress the large intermediate features. Then we define tasks according to the inference overhead of DNNs and formulate the problem as a Markov decision process (MDP). Finally, we propose a multi-agent hybrid proximal policy optimization (MAHPPO) algorithm to solve the optimization problem with a hybrid action space. We conduct extensive experiments with different types of networks, and the results show that our method can reduce the inference latency by up to 56% and save up to 72% of energy consumption.
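The feature-compression idea can be sketched as a 1x1-convolution autoencoder around the split point, with the encoder on the UE and the decoder on the edge server; the compression ratio and layer choices below are illustrative assumptions, not the paper's exact architecture.

```python
import torch.nn as nn

class FeatureCodec(nn.Module):
    """Sketch of the lightweight autoencoder idea: squeeze the channel
    dimension of an intermediate DNN feature map before uplink transmission
    and restore it at the edge server. The 4x ratio is an assumption.
    """

    def __init__(self, channels=256, ratio=4):
        super().__init__()
        bottleneck = channels // ratio
        self.encoder = nn.Sequential(            # runs on the UE
            nn.Conv2d(channels, bottleneck, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(            # runs on the edge server
            nn.Conv2d(bottleneck, channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat):
        return self.decoder(self.encoder(feat))
```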
Recent advances in distributed artificial intelligence (AI) have led to tremendous breakthroughs in various communication services, from fault-tolerant factory automation to smart cities. When distributed learning is run over a set of wirelessly connected devices, random channel fluctuations and the incumbent services running on the same network impact the performance of both distributed learning and the coexisting service. In this paper, we investigate a mixed service scenario where distributed AI workflow and ultra-reliable low latency communication (URLLC) services run concurrently over a network. Consequently, we propose a risk sensitivity-based formulation for device selection to minimize the AI training delays during its convergence period while ensuring that the operational requirements of the URLLC service are met. To address this challenging coexistence problem, we transform it into a deep reinforcement learning problem and address it via a framework based on the soft actor-critic algorithm. We evaluate our solution with a realistic and 3GPP-compliant simulator for factory automation use cases. Our simulation results confirm that our solution can significantly decrease the training delay of the distributed AI service while keeping the URLLC availability above its required threshold and close to the scenario where URLLC solely consumes all network resources.
Next-generation wireless networks are required to satisfy various services and criteria concurrently. To address upcoming strict criteria, a new open radio access network (O-RAN) with characteristics such as flexible design, disaggregated virtual and programmable components, and intelligent closed-loop control was developed. O-RAN slicing is being investigated as a critical strategy for ensuring network quality of service (QoS) in the face of changing circumstances. However, distinct network slices must be dynamically controlled to avoid service level agreement (SLA) variation caused by rapid changes in the environment. Therefore, this paper introduces a novel framework able to manage network slices through intelligently provisioned resources. Due to diverse heterogeneous environments, intelligent machine learning approaches require sufficient exploration to handle the harshest situations in a wireless network and to accelerate convergence. To solve this problem, a new solution is proposed based on evolutionary-based deep reinforcement learning (EDRL) to accelerate and optimize the slice management learning process in the radio access network (RAN) intelligent controller (RIC) modules. To this end, the O-RAN slicing is represented as a Markov decision process (MDP), which is then solved optimally for resource allocation to meet service demands using the EDRL approach. In terms of reaching service demands, simulation results show that the proposed approach outperforms the DRL baseline by 62.2%.
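The EDRL idea can be sketched as an evolutionary outer loop around a population of DRL slicing policies: evaluate, keep elites, and refill with mutated copies. The generic loop below (assuming torch.nn.Module policies and an SLA-based fitness function) illustrates the concept; the paper's exact selection and mutation operators may differ.

```python
import copy
import numpy as np
import torch

def edrl_outer_loop(agents, evaluate, generations=50, elite_frac=0.5,
                    noise_std=0.02):
    """Sketch of an evolutionary wrapper around DRL slicing agents.

    agents:   list of torch.nn.Module policies (assumed).
    evaluate: callable returning a fitness score, e.g., SLA-aware reward.
    """
    for _ in range(generations):
        fitness = np.array([evaluate(a) for a in agents])
        order = fitness.argsort()[::-1]          # best agents first
        n_elite = max(1, int(len(agents) * elite_frac))
        elites = [agents[i] for i in order[:n_elite]]
        children = []
        while len(elites) + len(children) < len(agents):
            child = copy.deepcopy(elites[np.random.randint(n_elite)])
            for p in child.parameters():         # Gaussian parameter mutation
                p.data.add_(noise_std * torch.randn_like(p))
            children.append(child)
        agents = elites + children               # next generation
    return agents
```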