Indoor multi-robot communication faces two key challenges: one is the severe signal-strength degradation caused by blockages (e.g., walls), and the other is the dynamic environment caused by robot mobility. To address these issues, we consider a reconfigurable intelligent surface (RIS) to overcome signal blockage and to assist the trajectory design of multiple robots. In addition, non-orthogonal multiple access (NOMA) is adopted to cope with spectrum scarcity and to enhance robot connectivity. Given the limited battery capacity of the robots, we aim to maximize the energy efficiency by jointly optimizing the transmit power of the access point (AP), the phase shifts of the RIS, and the trajectories of the robots. A novel federated deep reinforcement learning (F-DRL) approach is developed to solve this challenging problem with a dynamic long-term objective. With each robot planning its own path and downlink power, the AP only needs to determine the phase shifts of the RIS, which substantially reduces the computational overhead thanks to the lowered training dimensionality. Simulation results reveal the following findings: i) the proposed F-DRL can reduce the convergence time by at least 86% compared with centralized DRL; ii) the designed algorithm scales to an increasing number of robots; iii) compared with the conventional OMA-based benchmark, the NOMA-enhanced scheme achieves higher energy efficiency.
translated by 谷歌翻译
A novel reconfigurable intelligent surface (RIS)-aided multi-robot network is proposed, in which multiple mobile robots are served by an access point (AP) via non-orthogonal multiple access (NOMA). The goal is to maximize the sum rate over the whole trajectory of the multi-robot system by jointly optimizing the robots' trajectories and NOMA decoding orders, the phase-shift coefficients of the RIS, and the power allocation at the AP, subject to the predicted positions of the robots and the quality-of-service (QoS) requirement of each robot. To tackle this problem, an integrated machine learning (ML) scheme is proposed, which combines a long short-term memory (LSTM)–autoregressive integrated moving average (ARIMA) model with a dueling double deep Q-network (D$^{3}$QN) algorithm. For the prediction of the robots' initial and final positions, LSTM-ARIMA is able to overcome the vanishing-gradient problem on non-stationary and non-linear data sequences. For jointly determining the phase-shift matrix and the robots' trajectories, D$^{3}$QN is invoked to address the problem of action-value overestimation. Based on the proposed scheme, each robot holds a globally optimal trajectory according to the maximum sum rate over the whole trajectory, which reveals that the robots pursue long-term benefits in their trajectory design. Numerical results demonstrate that: 1) the LSTM-ARIMA model provides high-accuracy predictions; 2) the proposed D$^{3}$QN algorithm achieves fast average convergence; 3) RISs with higher-resolution phase-shift bits provide a larger sum rate than low-resolution ones; and 4) the RIS-NOMA network achieves superior performance compared with its RIS-aided orthogonal counterpart.
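The D$^{3}$QN used above combines double Q-learning with a dueling network head. As a hedged illustration of the dueling part only, the sketch below aggregates a state-value stream and an advantage stream into action values; the function name and plain-list types are illustrative assumptions, not code from the paper:

```python
def dueling_q(value, advantages):
    """Aggregate the two streams of a dueling DQN head:
    Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Subtracting the mean advantage keeps the decomposition identifiable."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]
```

Subtracting the mean (rather than the max) is the aggregation most commonly used in practice, since it stabilizes learning.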
The fog radio access network (F-RAN) is a promising technology in which user mobile devices (MDs) can offload computation tasks to nearby fog access points (F-APs). Due to the limited resources of F-APs, it is important to design an efficient task-offloading scheme. In this paper, by considering the time-varying network environment, a dynamic computation offloading and resource allocation problem in F-RANs is formulated to minimize the task execution delay and energy consumption of the MDs. To solve the problem, a federated deep reinforcement learning (DRL)-based algorithm is proposed, in which the deep deterministic policy gradient (DDPG) algorithm performs computation offloading and resource allocation at each F-AP. Federated learning is exploited to train the DDPG agents so as to reduce the computational complexity of the training process and to protect user privacy. Simulation results show that, compared with other existing strategies, the proposed federated DDPG algorithm achieves lower task execution delay and energy consumption for the MDs more quickly.
Wireless communication in the terahertz band (0.1--10 THz) is envisioned as one of the key enabling technologies for future sixth-generation (6G) wireless systems, beyond massive multiple-input multiple-output (massive MIMO) technology. However, the very high propagation attenuation and molecular absorption at THz frequencies often severely limit the signal transmission distance and coverage range. Motivated by recent breakthroughs in reconfigurable intelligent surfaces (RISs) for building intelligent radio propagation environments, we propose a novel hybrid beamforming scheme for multi-hop RIS-assisted communication networks to improve coverage at THz-band frequencies. In particular, multiple passive and controllable RISs are deployed to assist the transmissions between the base station (BS) and multiple single-antenna users. We investigate the joint design of the digital beamforming matrix at the BS and the analog beamforming matrices at the RISs, leveraging recent advances in deep reinforcement learning (DRL) to combat the propagation loss. To improve the convergence of the proposed DRL-based algorithm, two algorithms are then designed to initialize the digital and analog beamforming matrices using an alternating optimization technique. Simulation results show that our proposed scheme is able to improve the coverage range of THz communications by 50% compared with the benchmarks. Furthermore, the results show that our proposed DRL-based method is a state-of-the-art approach for solving this NP-hard beamforming problem, especially when the signals in the RIS-assisted THz communication network experience multiple hops.
In heterogeneous networks (HetNets), the overlap of small cells and the macro cell causes severe cross-tier interference. Although there exist some approaches to address this problem, they usually require global channel state information, which is hard to obtain in practice, and get the sub-optimal power allocation policy with high computational complexity. To overcome these limitations, we propose a multi-agent deep reinforcement learning (MADRL) based power control scheme for the HetNet, where each access point makes power control decisions independently based on local information. To promote cooperation among agents, we develop a penalty-based Q learning (PQL) algorithm for MADRL systems. By introducing regularization terms in the loss function, each agent tends to choose an experienced action with high reward when revisiting a state, and thus the policy updating speed slows down. In this way, an agent's policy can be learned by other agents more easily, resulting in a more efficient collaboration process. We then implement the proposed PQL in the considered HetNet and compare it with other distributed-training-and-execution (DTE) algorithms. Simulation results show that our proposed PQL can learn the desired power control policy from a dynamic environment where the locations of users change episodically and outperform existing DTE MADRL algorithms.
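The abstract above describes PQL only at a high level: a regularization term in the loss that biases an agent toward experienced high-reward actions and slows policy updating. The exact penalty is not given, so the tabular sketch below shows one plausible form; all names and the penalty shape are assumptions for illustration:

```python
def pql_update(Q, best_action, s, a, r, s_next,
               alpha=0.1, gamma=0.9, lam=0.5):
    """One tabular penalty-based Q-learning (PQL) step (illustrative).
    On top of the usual TD update, a penalty discourages deviating from
    the greedy action remembered for a revisited state, slowing policy
    changes so that other agents can track this agent's behaviour."""
    td_error = r + gamma * max(Q[s_next]) - Q[s][a]
    penalty = 0.0
    if best_action[s] is not None and a != best_action[s]:
        # Cost of deviating from the remembered high-value action.
        penalty = max(Q[s][best_action[s]] - Q[s][a], 0.0)
    Q[s][a] += alpha * (td_error - lam * penalty)
    best_action[s] = Q[s].index(max(Q[s]))  # remember greedy action
    return Q
```

In the paper the regularizer is added to the DQN loss function; the tabular form here only conveys the idea of penalizing deviation from experienced actions.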
Next-generation (NextG) networks are expected to support demanding tactile internet applications such as augmented reality and connected autonomous vehicles. While recent innovations bring the promise of larger link capacity, their sensitivity to the environment and their erratic performance defy traditional model-based control rationales. Zero-touch, data-driven approaches can improve the ability of the network to adapt to the current operating conditions. Tools such as reinforcement learning (RL) algorithms can build optimal control policies solely based on a history of observations. Specifically, deep RL (DRL), which uses a deep neural network (DNN) as a predictor, has been shown to achieve good performance even in complex environments and with high-dimensional inputs. However, the training of DRL models requires a large amount of data, which may limit their adaptability to the ever-evolving statistics of the underlying environment. Moreover, wireless networks are inherently distributed systems, in which centralized DRL approaches would require excessive data exchange, while fully distributed approaches may result in slower convergence rates and performance degradation. In this paper, to address these challenges, we propose a federated learning (FL) approach to DRL, which we refer to as federated DRL (F-DRL), in which base stations (BSs) collaboratively train the embedded DNN by sharing only model weights rather than training data. We evaluate two distinct versions of F-DRL, value-based and policy-based, and show the superior performance they achieve compared with both distributed and centralized DRL.
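The F-DRL idea above rests on base stations exchanging only model weights. A minimal sketch of the standard federated-averaging step such a scheme could build on is shown below; the flattened weight lists and function name are illustrative assumptions, not the paper's code:

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging of model weights: each BS trains locally and
    only the weights are aggregated, never the raw training data.
    Weights are flattened to one list of floats per client for simplicity;
    clients are weighted by their local data-set sizes."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

The aggregated weights would then be broadcast back to the BSs to start the next local training round.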
Simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) are considered promising auxiliary devices for enhancing the performance of wireless networks, in which users located on different sides of the surface can be served simultaneously by the transmitted and reflected signals. This paper investigates the energy efficiency (EE) maximization problem for a non-orthogonal multiple access (NOMA)-assisted STAR-RIS downlink network. Due to the fractional form of the EE, it is challenging to solve the EE maximization problem via conventional convex optimization. In this work, a deep deterministic policy gradient (DDPG)-based algorithm is proposed to maximize the EE by jointly optimizing the transmit beamforming vectors at the base station and the coefficient matrices at the STAR-RIS. Simulation results show that the proposed algorithm can effectively maximize the system EE under time-varying channels.
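DDPG, as used above, maintains slowly tracking target copies of the actor and critic networks. A sketch of the standard Polyak soft update such target networks use is given below (names and the flat weight lists are illustrative, not code from the paper):

```python
def soft_update(target, online, tau=0.005):
    """Polyak (soft) update of DDPG target-network parameters:
    target <- tau * online + (1 - tau) * target.
    A small tau makes the targets change slowly, stabilizing training."""
    return [tau * w + (1.0 - tau) * t for w, t in zip(online, target)]
```

The update is applied after every gradient step, for both the target actor and the target critic.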
Unmanned aerial vehicle (UAV) swarms are considered as a promising technique for next-generation communication networks due to their flexibility, mobility, low cost, and the ability to collaboratively and autonomously provide services. Distributed learning (DL) enables UAV swarms to intelligently provide communication services, multi-directional remote surveillance, and target tracking. In this survey, we first introduce several popular DL algorithms such as federated learning (FL), multi-agent reinforcement learning (MARL), distributed inference, and split learning, and present a comprehensive overview of their applications for UAV swarms, such as trajectory design, power control, wireless resource allocation, user assignment, perception, and satellite communications. Then, we present several state-of-the-art applications of UAV swarms in wireless communication systems, such as reconfigurable intelligent surface (RIS), virtual reality (VR), and semantic communications, and discuss the problems and challenges that DL-enabled UAV swarms can solve in these applications. Finally, we describe open problems of using DL in UAV swarms and future research directions of DL-enabled UAV swarms. In summary, this paper provides a comprehensive survey of various DL applications for UAV swarms across a wide range of scenarios.
Reconfigurable intelligent surfaces (RISs) have emerged in recent years as a promising technology for improving wireless communications. A RIS creates a favorable propagation environment by steering incident signals with controllable passive elements, at low hardware cost and low power consumption. In this paper, we consider a RIS-assisted multi-user multiple-input single-output downlink communication system. We aim to maximize the weighted sum rate of all users by jointly optimizing the active beamforming at the access point and the passive beamforming vector of the RIS elements. Unlike most existing works, we consider the more practical case with discrete phase shifts and imperfect channel state information (CSI). Specifically, for the case with discrete phase shifts and perfect CSI, we first develop a deep quantization neural network (DQNN) that designs the active and passive beamforming simultaneously, whereas most reported works design them alternately. We then propose an improved structure (I-DQNN) based on the DQNN to simplify the parameter-decision process when the number of control bits of each RIS element is larger than one. Finally, we extend both DQNN-based algorithms to the case in which discrete phase shifts and imperfect CSI are considered simultaneously. Our simulation results show that the two DQNN-based algorithms perform better than traditional algorithms in the perfect-CSI case, and are also more robust in the imperfect-CSI case.
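The discrete-phase-shift constraint above means each RIS element must pick its phase from a b-bit codebook. The small sketch below shows nearest-point projection onto such a codebook; it is an illustrative baseline, not the learned quantization performed inside the DQNN:

```python
import math

def quantize_phase(theta, bits):
    """Round a continuous RIS phase shift onto the nearest point of a
    b-bit codebook {0, 2*pi/2^b, ..., 2*pi*(2^b - 1)/2^b}.
    The modulo keeps angles near 2*pi wrapping back to level 0."""
    levels = 2 ** bits
    step = 2 * math.pi / levels
    return (round((theta % (2 * math.pi)) / step) % levels) * step
```

With 1-bit elements the codebook is just {0, pi}, which is why higher-resolution bits (as compared in several of the works above) recover noticeably more beamforming gain.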
Simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) are promising passive devices that contribute to full-space coverage by simultaneously transmitting and reflecting incident signals. As a new paradigm in wireless communications, how to analyze the coverage and capacity performance of STAR-RISs becomes essential but challenging. To solve the coverage and capacity optimization (CCO) problem in STAR-RIS-assisted networks, a multi-objective proximal policy optimization (MO-PPO) algorithm is proposed to handle long-term benefits, in contrast to conventional optimization algorithms. To strike a balance between the two objectives, the MO-PPO algorithm provides a set of optimal solutions that form the Pareto front (PF), where any solution on the PF is regarded as an optimal result. Moreover, to improve the performance of the MO-PPO algorithm, two update strategies are investigated: the action-value-based update strategy (AVUS) and the loss-function-based update strategy (LFUS). For the AVUS, the improvement is to integrate the action values of both coverage and capacity and then update the loss function. For the LFUS, the improvement is to assign dynamic weights to the loss functions of coverage and capacity, where the weights are calculated by a min-norm solver at every update. Numerical results demonstrate that the investigated update strategies outperform fixed-weight MO optimization algorithms in different cases, including different numbers of sample grids, numbers of STAR-RISs, numbers of elements per STAR-RIS, and STAR-RIS sizes. Moreover, the STAR-RIS-assisted network achieves better performance than a conventional wireless network without STAR-RISs. Additionally, with the same bandwidth, millimeter-wave is able to provide higher capacity than sub-6 GHz, but with smaller coverage.
Federated edge learning (FEEL) has emerged as a revolutionary paradigm for developing AI services at the edge of 6G wireless networks, as it supports collaborative model training over a massive number of mobile devices. However, model communication over wireless channels, especially the uplink model uploading in FEEL, has been widely recognized as a bottleneck that severely limits the efficiency of FEEL. Although over-the-air computation can alleviate the excessive cost of radio resources in FEEL model uploading, the practical implementation of over-the-air FEEL still suffers from several challenges, including the severe straggler issue, large communication overhead, and potential privacy leakage. In this article, we study these challenges and leverage reconfigurable intelligent surfaces (RISs), a key enabler of future wireless systems, to address them. We review the state-of-the-art solutions for RIS-empowered FEEL and explore promising research opportunities for adopting RISs to enhance FEEL performance.
This paper investigates a master UAV (MUAV)-powered Internet of Things (IoT) network, in which we propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV and to exploit the MUAV as a recharging power source. Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network. Depending on whether there is charging between the two UAVs, two optimization problems are formulated. To solve them, two multi-agent deep reinforcement learning (DRL) approaches are proposed: centralized training multi-agent deep deterministic policy gradient (CT-MADDPG) and multi-agent deep deterministic policy option critic (MADDPOC). It is shown that CT-MADDPG can greatly reduce the requirement on the computing capability of the UAV hardware, and that the proposed MADDPOC is able to support low-level multi-agent cooperative learning in continuous action domains, which gives it advantages over the existing option-based hierarchical DRL that supports only single-agent learning and discrete actions.
The future internet involves several emerging technologies such as 5G and beyond-5G networks, vehicular networks, unmanned aerial vehicle (UAV) networks, and the Internet of Things (IoT). Moreover, the future internet is becoming heterogeneous and decentralized, with a large number of involved network entities. Each entity may need to make local decisions to improve the network performance under dynamic and uncertain network environments. Standard learning algorithms such as single-agent reinforcement learning (RL) or deep reinforcement learning (DRL) have recently been used to enable each network entity, as an agent, to learn an optimal decision-making policy adaptively through interacting with the unknown environment. However, such algorithms fail to model the cooperation or competition among network entities, and simply treat other entities as part of the environment, which may lead to the non-stationarity issue. Multi-agent reinforcement learning (MARL) allows each network entity to learn its optimal policy by observing not only the environment but also the policies of other entities. As a result, MARL can significantly improve the learning efficiency of network entities, and it has recently been used to solve various issues in emerging networks. In this paper, we therefore review the applications of MARL in emerging networks. In particular, we provide a tutorial on MARL as well as a comprehensive survey of its applications in the next-generation internet. We first introduce single-agent RL and MARL. Then, we review a number of applications of MARL to solve emerging issues in the future internet, including network access, transmit power control, computation offloading, content caching, packet routing, trajectory design for UAV-aided networks, and network security issues.
Cooperative perception plays a vital role in extending a vehicle's sensing range beyond its line of sight. However, exchanging raw sensory data under limited communication resources is infeasible. Towards enabling efficient cooperative perception, vehicles need to address the following fundamental questions: what sensory data need to be shared? at which resolution? and with which vehicles? To answer these questions, in this paper, a novel framework is proposed to allow reinforcement learning (RL)-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs) by utilizing a quadtree-based point-cloud compression mechanism. Furthermore, a federated RL approach is introduced in order to speed up the training process across vehicles. Simulation results show that the RL agents are able to efficiently learn the vehicles' association, RB allocation, and message content selection while maximizing the vehicles' satisfaction in terms of the received sensory information. The results also show that federated RL improves the training process, achieving a better policy within the same amount of time compared with a non-federated approach.
Technology advancements in wireless communications and high-performance Extended Reality (XR) have empowered the developments of the Metaverse. The demand for Metaverse applications and hence, real-time digital twinning of real-world scenes is increasing. Nevertheless, the replication of 2D physical world images into 3D virtual world scenes is computationally intensive and requires computation offloading. The disparity in transmitted scene dimension (2D as opposed to 3D) leads to asymmetric data sizes in uplink (UL) and downlink (DL). To ensure the reliability and low latency of the system, we consider an asynchronous joint UL-DL scenario where in the UL stage, the smaller data size of the physical world scenes captured by multiple extended reality users (XUs) will be uploaded to the Metaverse Console (MC) to be constructed and rendered. In the DL stage, the larger-size 3D virtual world scenes need to be transmitted back to the XUs. The decisions pertaining to computation offloading and channel assignment are optimized in the UL stage, and the MC will optimize power allocation for users assigned with a channel in the UL transmission stage. Some problems arise therefrom: (i) interactive multi-process chain, specifically Asynchronous Markov Decision Process (AMDP), (ii) joint optimization in multiple processes, and (iii) high-dimensional objective functions, or hybrid reward scenarios. To ensure the reliability and low latency of the system, we design a novel multi-agent reinforcement learning algorithm structure, namely Asynchronous Actors Hybrid Critic (AAHC). Extensive experiments demonstrate that compared to proposed baselines, AAHC obtains better solutions with preferable training time.
Collaborative deep reinforcement learning (CDRL) algorithms, in which multiple agents can coordinate over a wireless network, are a promising approach to enable future intelligent and autonomous systems that rely on real-time decision-making in complex dynamic environments. Nonetheless, in practical scenarios, CDRL faces many challenges due to the heterogeneity of the agents and their learning tasks, different environments, time constraints on learning, and the resource limitations of wireless networks. To address these challenges, in this paper, a novel semantic-aware CDRL method is proposed to enable a group of untrained heterogeneous agents with semantically linked DRL tasks to collaborate efficiently across a resource-constrained wireless cellular network. To this end, a new heterogeneous federated DRL (HFDRL) algorithm is proposed to select the best subset of semantically relevant DRL agents for collaboration. The proposed approach then jointly optimizes the training loss and the wireless bandwidth allocation of the cooperating selected agents, in order to train each agent within the time limit of its real-time task. Simulation results show the superior performance of the proposed algorithm compared with state-of-the-art baselines.
Recent technological advancements in space, air and ground components have made possible a new network paradigm called "space-air-ground integrated network" (SAGIN). Unmanned aerial vehicles (UAVs) play a key role in SAGINs. However, due to UAVs' high dynamics and complexity, the real-world deployment of a SAGIN becomes a major barrier for realizing such SAGINs. Compared to the space and terrestrial components, UAVs are expected to meet performance requirements with high flexibility and dynamics using limited resources. Therefore, employing UAVs in various usage scenarios requires well-designed planning in algorithmic approaches. In this paper, we provide a comprehensive review of recent learning-based algorithmic approaches. We consider possible reward functions and discuss the state-of-the-art algorithms for optimizing the reward functions, including Q-learning, deep Q-learning, multi-armed bandit (MAB), particle swarm optimization (PSO) and satisfaction-based learning algorithms. Unlike other survey papers, we focus on the methodological perspective of the optimization problem, which can be applicable to various UAV-assisted missions on a SAGIN using these algorithms. We simulate users and environments according to real-world scenarios and compare the learning-based and PSO-based methods in terms of throughput, load, fairness, computation time, etc. We also implement and evaluate the 2-dimensional (2D) and 3-dimensional (3D) variations of these algorithms to reflect different deployment cases. Our simulation suggests that the 3D satisfaction-based learning algorithm outperforms the other approaches for various metrics in most cases. We discuss some open challenges at the end and our findings aim to provide design guidelines for algorithm selections while optimizing the deployment of UAV-assisted SAGINs.
Microgrids (MGs) are important players in future decentralized energy systems, in which a large number of Internet of Things (IoT) devices interact for energy management in the smart grid. Although there are many works on MG energy management, most studies assume a perfect communication environment, in which communication failures are not considered. In this paper, we consider the MG as a multi-agent environment with IoT devices, in which AI agents exchange information with their peers for collaboration. However, the collaboration information may be lost due to communication failures or packet loss. Such events may affect the operation of the whole MG. To this end, we propose a multi-agent Bayesian deep reinforcement learning (BA-DRL) method for MG energy management under communication failures. We first define a multi-agent partially observable Markov decision process (MA-POMDP) to describe the agents under communication failures, in which each agent can update its belief about the actions of its peers. Then, we apply a double deep Q-learning (DDQN) architecture for the Q-value estimation in BA-DRL, and propose a belief-based correlated equilibrium for the joint-action selection of multi-agent BA-DRL. Finally, simulation results show that BA-DRL is robust to both power-supply uncertainty and communication-failure uncertainty. BA-DRL achieves 4.1% and 10.3% higher reward than Nash deep Q-learning (Nash-DQN) and the alternating direction method of multipliers (ADMM), respectively, under a 1% communication failure probability.
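The DDQN architecture mentioned above decouples action selection from action evaluation when forming the bootstrap target, which mitigates the over-estimation bias of vanilla DQN. A minimal sketch of that target computation (plain lists; names are illustrative, not the paper's code):

```python
def ddqn_target(r, q_online_next, q_target_next, gamma=0.99, done=False):
    """Double-DQN bootstrap target: the online network picks the greedy
    next action, while the target network evaluates its value."""
    if done:
        return r
    # Selection by the online network ...
    a_star = max(range(len(q_online_next)), key=lambda i: q_online_next[i])
    # ... evaluation by the target network.
    return r + gamma * q_target_next[a_star]
```

The online network is then trained to regress its Q-value for the taken action toward this target.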
The explosive growth of dynamic and heterogeneous data traffic brings great challenges for 5G and beyond mobile networks. To enhance the network capacity and reliability, we propose a learning-based dynamic time-frequency division duplexing (D-TFDD) scheme that adaptively allocates the uplink and downlink time-frequency resources of base stations (BSs) to meet the asymmetric and heterogeneous traffic demands while alleviating the inter-cell interference. We formulate the problem as a decentralized partially observable Markov decision process (Dec-POMDP) that maximizes the long-term expected sum rate under the users' packet dropping ratio constraints. In order to jointly optimize the global resources in a decentralized manner, we propose a federated reinforcement learning (RL) algorithm named federated Wolpertinger deep deterministic policy gradient (FWDDPG) algorithm. The BSs decide their local time-frequency configurations through RL algorithms and achieve global training via exchanging local RL models with their neighbors under a decentralized federated learning framework. Specifically, to deal with the large-scale discrete action space of each BS, we adopt a DDPG-based algorithm to generate actions in a continuous space, and then utilize Wolpertinger policy to reduce the mapping errors from continuous action space back to discrete action space. Simulation results demonstrate the superiority of our proposed algorithm to benchmark algorithms with respect to system sum rate.
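The Wolpertinger step above maps a continuous DDPG proto-action back to a large discrete action set: retrieve the k nearest discrete actions, then let the critic pick the best of them. A 1-D toy sketch of that rule (scalar actions and all names are illustrative assumptions):

```python
def wolpertinger_action(proto, actions, q_fn, k=3):
    """Wolpertinger mapping: among the k discrete actions nearest to the
    continuous proto-action, return the index of the one that the
    critic q_fn scores highest, reducing continuous-to-discrete
    mapping errors compared with naive nearest-neighbour rounding."""
    by_dist = sorted(range(len(actions)), key=lambda i: abs(actions[i] - proto))
    nearest = by_dist[:k]
    return max(nearest, key=lambda i: q_fn(actions[i]))
```

With k = 1 this degenerates to plain nearest-neighbour rounding; larger k lets the critic correct the actor's mapping error, which is the point of the FWDDPG design above.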
Hybrid FSO/RF system requires an efficient FSO and RF link switching mechanism to improve the system capacity by realizing the complementary benefits of both the links. The dynamics of network conditions, such as fog, dust, and sand storms compound the link switching problem and control complexity. To address this problem, we initiate the study of deep reinforcement learning (DRL) for link switching of hybrid FSO/RF systems. Specifically, in this work, we focus on an actor-critic algorithm called Actor/Critic-FSO/RF and a Deep-Q network (DQN) called DQN-FSO/RF for FSO/RF link switching under atmospheric turbulences. To formulate the problem, we define the state, action, and reward function of a hybrid FSO/RF system. DQN-FSO/RF frequently updates the deployed policy that interacts with the environment in a hybrid FSO/RF system, resulting in high switching costs. To overcome this, we lift this problem to ensemble consensus-based representation learning for deep reinforcement learning, called DQNEnsemble-FSO/RF. The proposed novel DQNEnsemble-FSO/RF DRL approach uses consensus-learned feature representations based on an ensemble of asynchronous threads to update the deployed policy. Experimental results corroborate that the proposed DQNEnsemble-FSO/RF's consensus-learned feature switching achieves better performance than Actor/Critic-FSO/RF, DQN-FSO/RF, and MyOpic for FSO/RF link switching while keeping the switching cost significantly low.