Autonomous vehicles (AVs) must operate safely and effectively in dynamic environments. To this end, AVs equipped with joint radar-communication (JRC) functions can enhance driving safety by using both radar detection and data communication capabilities. However, optimizing the performance of an AV system across these two different functions under the dynamics and uncertainty of the surrounding environment is very challenging. In this work, we first propose an intelligent optimization framework based on the Markov decision process (MDP) to help the AV make optimal decisions in selecting JRC operation functions under the dynamics and uncertainty of the surrounding environment. We then develop an effective learning algorithm, leveraging recent advances in deep reinforcement learning, to find the optimal policy for the AV without requiring any prior information about the surrounding environment. Furthermore, to make our proposed framework more scalable, we develop a transfer learning (TL) mechanism that enables the AV to leverage valuable experience to accelerate the training process. Extensive simulations show that the proposed transferable deep reinforcement learning framework reduces the obstacle miss-detection probability of the AV by up to 67% compared with other conventional deep reinforcement learning approaches.
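To make the transfer-learning step concrete, below is a minimal sketch of how a Q-network trained in a source JRC environment could warm-start the learner for a new environment; the network shape, the two-action JRC mode space, and the layer-freezing choice are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal transfer-learning warm start for a DQN: copy source-task weights
# into the target-task learner so training does not begin from scratch.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),   # Q-value per JRC mode (e.g. radar vs. comms)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

source_q = QNetwork(state_dim=8, n_actions=2)  # pretrained in the source environment
target_q = QNetwork(state_dim=8, n_actions=2)  # learner for the new environment

# Transfer: initialize the target learner from the source policy's weights.
target_q.load_state_dict(source_q.state_dict())

# Optionally freeze the first layer and fine-tune only the later layers.
for param in target_q.net[0].parameters():
    param.requires_grad = False
```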
The future Internet involves several emerging technologies such as 5G and beyond-5G networks, vehicular networks, unmanned aerial vehicle (UAV) networks, and the Internet of Things (IoT). Moreover, the future Internet is becoming heterogeneous and decentralized, with a large number of involved network entities. Each entity may need to make local decisions to improve network performance under dynamic and uncertain network environments. Standard learning algorithms, such as single-agent reinforcement learning (RL) or deep reinforcement learning (DRL), have recently been used to enable each network entity, as an agent, to adaptively learn an optimal decision-making policy by interacting with an unknown environment. However, such algorithms fail to model the cooperation or competition among network entities, simply treating other entities as part of the environment, which can lead to the non-stationarity problem. Multi-agent reinforcement learning (MARL) allows each network entity to learn its optimal policy by observing not only the environment but also the policies of other entities. As a result, MARL can significantly improve the learning efficiency of network entities and has recently been used to solve various problems in emerging networks. In this paper, we therefore review the applications of MARL in emerging networks. In particular, we provide a tutorial on MARL together with a comprehensive survey of its applications in the next-generation Internet. Specifically, we first introduce single-agent RL and MARL. We then review many applications of MARL for solving emerging problems in the future Internet, including network access, transmit power control, computation offloading, content caching, packet routing, trajectory design for UAV networks, and network security issues.
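As a toy illustration of the distinction this survey draws (not an example taken from the survey itself), the sketch below keeps Q-values over the joint action of two agents, so the other entity is modeled explicitly instead of being folded into a non-stationary environment; all dimensions and the sample transition are placeholder assumptions.

```python
# Joint-action Q-learning: agent 0 learns values over (state, own action,
# other agent's action) rather than treating the other agent as noise.
import numpy as np

n_states, n_actions = 4, 2
rng = np.random.default_rng(0)

# Q[state, own_action, other_action] for agent 0.
q_joint = np.zeros((n_states, n_actions, n_actions))

def marl_update(s, a_self, a_other, reward, s_next, alpha=0.1, gamma=0.9):
    """One Q-learning step on the joint-action value table."""
    best_next = q_joint[s_next].max()  # max over the joint action
    td_target = reward + gamma * best_next
    q_joint[s, a_self, a_other] += alpha * (td_target - q_joint[s, a_self, a_other])

# One illustrative transition with placeholder environment quantities.
s, a0, a1 = 0, rng.integers(n_actions), rng.integers(n_actions)
marl_update(s, a0, a1, reward=1.0, s_next=1)
```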
Cooperative perception plays a vital role in extending a vehicle's sensing range beyond its line of sight. However, exchanging raw sensory data under limited communication resources is infeasible. To enable efficient cooperative perception, vehicles need to address the following fundamental questions: what sensory data needs to be shared? at which resolution? and with which vehicles? To answer these questions, in this paper, a novel framework is proposed to allow reinforcement learning (RL)-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs) by utilizing a quadtree-based point cloud compression mechanism. Furthermore, a federated RL approach is introduced in order to speed up the training process across vehicles. Simulation results show that the RL agents are able to efficiently learn the vehicles' association, RB allocation, and message content selection while maximizing vehicles' satisfaction in terms of the received sensory information. The results also show that federated RL improves the training process, achieving better policies within the same amount of time compared to non-federated approaches.
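The federated step can be pictured as periodic parameter averaging across vehicles; the sketch below assumes a FedAvg-style sample-count weighting, which is a common choice and not necessarily the paper's exact aggregation rule.

```python
# Federated averaging of per-vehicle RL model parameters.
import numpy as np

def fedavg(local_weights: list[dict], n_samples: list[int]) -> dict:
    """Sample-weighted average of per-vehicle model parameters."""
    total = sum(n_samples)
    keys = local_weights[0].keys()
    return {
        k: sum(w[k] * (n / total) for w, n in zip(local_weights, n_samples))
        for k in keys
    }

# Three vehicles with toy single-layer "models".
vehicles = [{"W": np.random.randn(4, 2)} for _ in range(3)]
global_model = fedavg(vehicles, n_samples=[120, 80, 200])
# Each vehicle then resumes local training from the shared global model.
```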
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents, methods to validate, test and robustify existing solutions in RL are discussed.
The connectivity-aware path design is crucial in the effective deployment of autonomous Unmanned Aerial Vehicles (UAVs). Recently, Reinforcement Learning (RL) algorithms have become the popular approach to solving this type of complex problem, but RL algorithms suffer from slow convergence. In this paper, we propose a Transfer Learning (TL) approach, where we use a teacher policy previously trained in an old domain to boost the path learning of the agent in the new domain. As the exploration processes and the training continue, the agent refines the path design in the new domain based on its subsequent interactions with the environment. We evaluate our approach considering an old domain at sub-6 GHz and a new domain at millimeter Wave (mmWave). The teacher path policy, previously trained in the sub-6 GHz domain, is the solution to a connectivity-aware path problem that we formulate as a constrained Markov Decision Process (CMDP). We employ a Lyapunov-based model-free Deep Q-Network (DQN) to solve the path design at sub-6 GHz that guarantees connectivity constraint satisfaction. We empirically demonstrate the effectiveness of our approach for different urban environment scenarios. The results demonstrate that our proposed approach is capable of reducing the training time considerably at mmWave.
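One common way to realize such teacher-guided transfer is to let the new-domain agent follow the teacher's advice with a decaying probability; the sketch below assumes this advice-based scheme and placeholder probabilities, not necessarily the paper's exact mechanism.

```python
# Teacher-guided exploration: the mmWave-domain student occasionally follows
# the sub-6 GHz teacher policy, otherwise explores or exploits its own Q-values.
import random

def select_action(state, student_q, teacher_policy, n_actions,
                  eps_teacher=0.5, eps_random=0.1):
    r = random.random()
    if r < eps_teacher:
        return teacher_policy(state)        # advice from the old domain
    if r < eps_teacher + eps_random:
        return random.randrange(n_actions)  # residual random exploration
    q_values = student_q(state)
    return max(range(n_actions), key=lambda a: q_values[a])  # exploit

# eps_teacher is annealed each episode so the student gradually takes over:
# eps_teacher = max(0.05, eps_teacher * 0.99)
```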
Vehicle-to-infrastructure (V2I) communication is critical for enhancing the reliability of autonomous vehicles (AVs). However, the uncertainties in road traffic and in the AVs' wireless connections can severely impair timely decision-making. It is therefore crucial to simultaneously optimize the AVs' network selection and driving policies in order to minimize road collisions while maximizing communication data rates. In this paper, we develop a reinforcement learning (RL) framework to characterize efficient network selection and autonomous driving policies in a multi-band vehicular network (VNet) operating on conventional sub-6 GHz spectrum and Terahertz (THz) frequencies. The proposed framework is designed to (i) maximize traffic flow and minimize collisions by controlling the vehicle's motion dynamics (i.e., speed and acceleration) from the autonomous-driving perspective, and (ii) maximize data rates and minimize handoffs by jointly controlling the vehicle's motion dynamics and network selection from the telecommunication perspective. We cast this problem as a Markov decision process (MDP) and develop a deep Q-learning-based solution to optimize actions such as acceleration, deceleration, lane changes, and AV-to-base-station assignments for a given AV state. The AV's state is defined based on its speed and the communication channel states. Numerical results demonstrate interesting insights related to the interdependency of the vehicle's motion dynamics, handoffs, and communication data rates. The proposed policies enable AVs to adopt safe driving behaviors with improved connectivity.
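A minimal sketch of the joint action space this implies is given below: driving maneuvers crossed with candidate base-station assignments, with an epsilon-greedy rule over the DQN's Q-values. The concrete state layout and action names are assumptions for illustration.

```python
# Joint driving / network-selection action space for the multi-band VNet.
import itertools
import numpy as np

maneuvers = ["accelerate", "decelerate", "keep", "lane_left", "lane_right"]
base_stations = ["sub6_bs1", "sub6_bs2", "thz_bs1"]          # candidate associations
actions = list(itertools.product(maneuvers, base_stations))  # 15 joint actions

def epsilon_greedy(q_values: np.ndarray, eps: float = 0.1):
    """Pick a joint (maneuver, BS) action from the DQN's Q-value vector."""
    if np.random.rand() < eps:
        return actions[np.random.randint(len(actions))]
    return actions[int(np.argmax(q_values))]

state = np.array([22.0, 0.8, 0.6, 0.2])   # [speed m/s, sub-6 GHz SNRs..., THz SNR]
q_values = np.random.randn(len(actions))  # stand-in for a trained DQN's output
print(epsilon_greedy(q_values))
```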
The high emission and low energy efficiency caused by internal combustion engines (ICE) have become unacceptable under environmental regulations and the energy crisis. As a promising alternative solution, multi-power source electric vehicles (MPS-EVs) introduce different clean energy systems to improve powertrain efficiency. The energy management strategy (EMS) is a critical technology for MPS-EVs to maximize efficiency, fuel economy, and range. Reinforcement learning (RL) has become an effective methodology for the development of EMS. RL has received continuous attention and research, but there is still a lack of systematic analysis of the design elements of RL-based EMS. To this end, this paper presents an in-depth analysis of the current research on RL-based EMS (RL-EMS) and summarizes its design elements. This paper first summarizes the previous applications of RL in EMS from five aspects: algorithm, perception scheme, decision scheme, reward function, and innovative training method. The contribution of advanced algorithms to the training effect is shown, the perception and control schemes in the literature are analyzed in detail, different reward function settings are classified, and innovative training methods and their roles are elaborated. By comparing the development routes of RL and RL-EMS, this paper then identifies the gap between advanced RL solutions and existing RL-EMS. Finally, this paper suggests potential development directions for implementing advanced artificial intelligence (AI) solutions in EMS.
A hybrid FSO/RF system requires an efficient FSO/RF link switching mechanism to improve system capacity by realizing the complementary benefits of both links. The dynamics of network conditions, such as fog, dust, and sand storms, compound the link switching problem and control complexity. To address this problem, we initiate the study of deep reinforcement learning (DRL) for link switching of hybrid FSO/RF systems. Specifically, in this work, we focus on an actor-critic algorithm, called Actor/Critic-FSO/RF, and a Deep-Q network (DQN), called DQN-FSO/RF, for FSO/RF link switching under atmospheric turbulence. To formulate the problem, we define the state, action, and reward function of a hybrid FSO/RF system. DQN-FSO/RF frequently updates the deployed policy that interacts with the environment in a hybrid FSO/RF system, resulting in high switching costs. To overcome this, we lift the problem to ensemble consensus-based representation learning for deep reinforcement learning, called DQNEnsemble-FSO/RF. The proposed novel DQNEnsemble-FSO/RF DRL approach uses consensus-learned feature representations, based on an ensemble of asynchronous threads, to update the deployed policy. Experimental results corroborate that the proposed DQNEnsemble-FSO/RF consensus-learned feature switching achieves better performance than Actor/Critic-FSO/RF, DQN-FSO/RF, and MyOpic for FSO/RF link switching while keeping the switching cost significantly low.
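The consensus trigger can be sketched as a pairwise agreement test over the ensemble's learned feature vectors, refreshing the deployed policy only when they align; the cosine-similarity test and threshold below are illustrative assumptions, not the paper's exact criterion.

```python
# Consensus check over an ensemble of learners' feature representations:
# the deployed FSO/RF switching policy is refreshed only on agreement,
# which limits costly policy switches.
import numpy as np

def ensemble_consensus(features: list[np.ndarray], threshold: float = 0.9) -> bool:
    """True when every pair of learner feature vectors is nearly aligned."""
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            a, b = features[i], features[j]
            cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
            if cos < threshold:
                return False
    return True

learner_feats = [np.random.randn(16) for _ in range(4)]  # async threads' features
if ensemble_consensus(learner_feats):
    pass  # copy the learned weights into the deployed switching policy
```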
In heterogeneous networks (HetNets), the overlap of small cells and the macro cell causes severe cross-tier interference. Although there exist some approaches to address this problem, they usually require global channel state information, which is hard to obtain in practice, and get the sub-optimal power allocation policy with high computational complexity. To overcome these limitations, we propose a multi-agent deep reinforcement learning (MADRL) based power control scheme for the HetNet, where each access point makes power control decisions independently based on local information. To promote cooperation among agents, we develop a penalty-based Q learning (PQL) algorithm for MADRL systems. By introducing regularization terms in the loss function, each agent tends to choose an experienced action with high reward when revisiting a state, and thus the policy updating speed slows down. In this way, an agent's policy can be learned by other agents more easily, resulting in a more efficient collaboration process. We then implement the proposed PQL in the considered HetNet and compare it with other distributed-training-and-execution (DTE) algorithms. Simulation results show that our proposed PQL can learn the desired power control policy from a dynamic environment where the locations of users change episodically and outperform existing DTE MADRL algorithms.
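The penalty idea can be sketched as a TD loss augmented with a term that anchors Q-values to previously learned ones; the exact penalty form and weight below are assumptions about the shape of the regularizer, not the paper's precise loss.

```python
# Penalty-based Q-learning (PQL) loss sketch: standard TD error plus a
# regularization term that discourages drifting from earlier Q-values,
# slowing policy updates so other agents can track them.
import torch
import torch.nn.functional as F

def pql_loss(q_net, q_old, batch, gamma=0.99, lam=0.5):
    s, a, r, s_next = batch  # tensors from the replay buffer
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        td_target = r + gamma * q_net(s_next).max(dim=1).values
        q_sa_old = q_old(s).gather(1, a.unsqueeze(1)).squeeze(1)
    td_loss = F.mse_loss(q_sa, td_target)  # standard Q-learning term
    penalty = F.mse_loss(q_sa, q_sa_old)   # stay close to experienced values
    return td_loss + lam * penalty
```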
Recent technological advancements in space, air and ground components have made possible a new network paradigm called "space-air-ground integrated network" (SAGIN). Unmanned aerial vehicles (UAVs) play a key role in SAGINs. However, due to UAVs' high dynamics and complexity, the real-world deployment of a SAGIN becomes a major barrier for realizing such SAGINs. Compared to the space and terrestrial components, UAVs are expected to meet performance requirements with high flexibility and dynamics using limited resources. Therefore, employing UAVs in various usage scenarios requires well-designed planning in algorithmic approaches. In this paper, we provide a comprehensive review of recent learning-based algorithmic approaches. We consider possible reward functions and discuss the state-of-the-art algorithms for optimizing the reward functions, including Q-learning, deep Q-learning, multi-armed bandit (MAB), particle swarm optimization (PSO) and satisfaction-based learning algorithms. Unlike other survey papers, we focus on the methodological perspective of the optimization problem, which can be applicable to various UAV-assisted missions on a SAGIN using these algorithms. We simulate users and environments according to real-world scenarios and compare the learning-based and PSO-based methods in terms of throughput, load, fairness, computation time, etc. We also implement and evaluate the 2-dimensional (2D) and 3-dimensional (3D) variations of these algorithms to reflect different deployment cases. Our simulation suggests that the 3D satisfaction-based learning algorithm outperforms the other approaches for various metrics in most cases. We discuss some open challenges at the end and our findings aim to provide design guidelines for algorithm selections while optimizing the deployment of UAV-assisted SAGINs.
Unmanned aerial vehicles (UAVs) serve as aerial base stations to relay time-sensitive packets from IoT devices to a nearby terrestrial base station (TBS). Scheduling packets in such UAV-relayed IoT networks to ensure fresh (or up-to-date) IoT devices' packets at the TBS is a challenging problem, as it involves two simultaneous steps: (i) sampling the packets generated at the IoT devices by the UAVs [hop-1] and (ii) updating the sampled packets from the UAVs to the TBS [hop-2]. To address this problem, we propose Age-of-Information (AoI) scheduling algorithms for two-hop UAV-relayed IoT networks. First, we propose a low-complexity AoI scheduler, termed MAF-MAD, that employs the maximum AoI first (MAF) policy for sampling at the UAVs (hop-1) and the maximum AoI difference (MAD) policy for updating from the UAVs to the TBS (hop-2). We prove that MAF-MAD is the optimal AoI scheduler under ideal conditions (lossless wireless channels and generate-at-will traffic generation at the IoT devices). In contrast, for general conditions (lossy channel conditions and varying periodic traffic generation at the IoT devices), a deep reinforcement learning algorithm, namely a proximal policy optimization (PPO)-based scheduler, is proposed. Simulation results show that the proposed PPO-based scheduler outperforms other schedulers such as MAF-MAD, MAF, and round-robin in all considered general scenarios.
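A compact sketch of the MAF-MAD rule follows, with the AoI bookkeeping structures assumed for illustration: each UAV samples its most outdated device (MAF), and the TBS is updated by the UAV whose update would reduce the TBS's AoI the most (MAD).

```python
# MAF-MAD scheduling step for a two-hop UAV-relayed IoT network.
def maf_mad_step(aoi_at_uav, aoi_at_tbs):
    """aoi_at_uav[u][d]: AoI of device d's data held at UAV u.
    aoi_at_tbs[u]: AoI at the TBS of the freshest packet relayed via UAV u."""
    # Hop-1 (MAF): every UAV samples its device with the maximum AoI.
    sampled = {u: max(devices, key=devices.get)
               for u, devices in aoi_at_uav.items()}
    # Hop-2 (MAD): schedule the UAV with the largest AoI difference
    # between the TBS's copy and the UAV's freshest copy.
    updating_uav = max(aoi_at_tbs,
                       key=lambda u: aoi_at_tbs[u] - min(aoi_at_uav[u].values()))
    return sampled, updating_uav

aoi_at_uav = {"uav1": {"d1": 5, "d2": 9}, "uav2": {"d3": 2, "d4": 7}}
aoi_at_tbs = {"uav1": 12, "uav2": 4}
print(maf_mad_step(aoi_at_uav, aoi_at_tbs))  # samples d2/d4, updates via uav1
```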
The performance of vehicle-to-vehicle (V2V) communications highly depends on the employed scheduling approach. While centralized network schedulers offer high V2V communication reliability, their operation is conventionally restricted to areas with full cellular network coverage. In contrast, in out-of-cellular-coverage areas, comparatively inefficient distributed radio resource management is used. To exploit the benefits of the centralized approach for enhancing the reliability of V2V communications on roads lacking cellular coverage, we propose VRLS (Vehicular Reinforcement Learning Scheduler), a centralized scheduler that proactively assigns resources for out-of-coverage V2V communications before vehicles leave the cellular network coverage. By training in simulated vehicular environments, VRLS can learn a scheduling policy that adapts to environmental changes, thus eliminating the need for targeted (re-)training in complex real-life environments. We evaluate the performance of VRLS under varying mobility, network load, wireless channel, and resource configurations. VRLS outperforms the state-of-the-art distributed scheduling algorithm in zones without cellular network coverage, reducing the packet error rate by half in highly loaded conditions and achieving near-maximum reliability in low-load scenarios.
Unmanned aerial vehicles (UAVs) are one of the technological breakthroughs supporting a variety of services, including communications. UAVs will play a key role in enhancing the physical layer security of wireless networks. This paper addresses the problem of eavesdropping on the link between a ground user and a UAV serving as an aerial base station (ABS). The reinforcement learning algorithms Q-learning and deep Q-network (DQN) are proposed for optimizing the position and transmission power of the ABS to enhance the data rate of the ground user. This increases the secrecy capacity without the system knowing the location of the eavesdropper. Simulation results show fast convergence and the highest secrecy capacity for the proposed DQN compared with Q-learning and baseline approaches.
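A toy tabular Q-learning step for this setup might look as follows; the discretization grid, the six-action move/power set, and the secrecy-rate reward hook are assumptions for illustration.

```python
# Tabular Q-learning for ABS placement and power: the state is a discretized
# (position, power) pair and the reward is the achieved secrecy rate.
import numpy as np

n_positions, n_power_levels = 25, 4  # discretized 5x5 grid, 4 power levels
n_actions = 6                        # {N, S, E, W move, power up, power down}
Q = np.zeros((n_positions * n_power_levels, n_actions))

def q_update(s, a, reward, s_next, alpha=0.1, gamma=0.95):
    """Standard Q-learning update toward the secrecy-rate reward."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

# reward would be max(0, rate_user - rate_eavesdropper), i.e. the secrecy
# capacity, which the agent improves without knowing the eavesdropper's location.
q_update(s=0, a=2, reward=1.3, s_next=1)
```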
The scale of Internet-connected systems has increased considerably, and these systems are exposed to cyber attacks more than ever. The complexity and dynamics of cyber attacks require protection mechanisms to be responsive, adaptive, and scalable. Machine learning, or more specifically deep reinforcement learning (DRL), methods have been widely proposed to address these issues. By incorporating deep learning into traditional RL, DRL is able to solve complex, dynamic, and especially high-dimensional cyber defense problems. This paper presents a survey of DRL approaches developed for cyber security. We touch on different vital aspects, including DRL-based security methods for cyber-physical systems, autonomous intrusion detection techniques, and multi-agent DRL-based game-theoretic simulations for defense strategies against cyber attacks. Extensive discussions and future research directions on DRL-based cyber security are also given. We envision that this comprehensive review provides the foundations for, and facilitates, future studies exploring the potential of DRL for increasingly complex cyber security problems.
The explosive growth of dynamic and heterogeneous data traffic brings great challenges for 5G and beyond mobile networks. To enhance the network capacity and reliability, we propose a learning-based dynamic time-frequency division duplexing (D-TFDD) scheme that adaptively allocates the uplink and downlink time-frequency resources of base stations (BSs) to meet the asymmetric and heterogeneous traffic demands while alleviating the inter-cell interference. We formulate the problem as a decentralized partially observable Markov decision process (Dec-POMDP) that maximizes the long-term expected sum rate under the users' packet dropping ratio constraints. In order to jointly optimize the global resources in a decentralized manner, we propose a federated reinforcement learning (RL) algorithm named federated Wolpertinger deep deterministic policy gradient (FWDDPG) algorithm. The BSs decide their local time-frequency configurations through RL algorithms and achieve global training via exchanging local RL models with their neighbors under a decentralized federated learning framework. Specifically, to deal with the large-scale discrete action space of each BS, we adopt a DDPG-based algorithm to generate actions in a continuous space, and then utilize Wolpertinger policy to reduce the mapping errors from continuous action space back to discrete action space. Simulation results demonstrate the superiority of our proposed algorithm to benchmark algorithms with respect to system sum rate.
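The Wolpertinger step can be sketched as: the DDPG actor emits a continuous proto-action, the k nearest discrete actions are retrieved, and the critic chooses among them. The dimensions, k, and the stand-in critic below are illustrative assumptions.

```python
# Wolpertinger policy: map a continuous proto-action back to a large
# discrete action set (e.g. TDD time-frequency configurations) via k-NN,
# then let the critic pick the best candidate.
import numpy as np

def wolpertinger_action(proto_action, discrete_actions, critic, state, k=5):
    """Return the index of the chosen discrete action."""
    dists = np.linalg.norm(discrete_actions - proto_action, axis=1)
    candidates = np.argsort(dists)[:k]  # k nearest discrete actions
    scores = [critic(state, discrete_actions[i]) for i in candidates]
    return candidates[int(np.argmax(scores))]

discrete_actions = np.random.rand(1000, 3)      # large discrete action set
proto = np.array([0.4, 0.7, 0.1])               # DDPG actor output
critic = lambda s, a: -np.linalg.norm(a - 0.5)  # stand-in Q-function
print(wolpertinger_action(proto, discrete_actions, critic, state=None))
```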
The rapid changes in the finance industry due to the increasing amount of data have revolutionized the techniques for data processing and data analysis and brought new theoretical and computational challenges. In contrast to classical stochastic control theory and other analytical approaches for solving financial decision-making problems, which rely heavily on model assumptions, new developments in reinforcement learning (RL) are able to make full use of the large amount of financial data with fewer model assumptions and to improve decision-making in complex financial environments. This survey paper aims to review the recent developments and use of RL approaches in finance. We introduce Markov decision processes, the setting for many commonly used RL approaches. Various algorithms are then introduced, with a focus on value-based and policy-based methods that do not require any model assumptions. Connections are made with neural networks to extend the framework to encompass deep RL algorithms. Our survey concludes by discussing the application of these RL algorithms in a variety of decision-making problems in finance, including optimal execution, portfolio optimization, option pricing and hedging, market making, smart order routing, and robo-advising.
The exponential growth of Internet-connected systems has generated numerous challenges, such as spectrum shortage issues, which require efficient spectrum-sharing (SS) solutions. Complicated and dynamic SS systems can be exposed to various potential security and privacy issues, requiring protection mechanisms to be adaptive, reliable, and scalable. Machine learning (ML)-based methods have frequently been proposed to address those issues. In this article, we provide a comprehensive survey of recent ML-based SS methods, the most critical security issues, and the corresponding defense mechanisms. In particular, we elaborate on the state-of-the-art methodologies for improving the performance of SS communication systems, including ML-based database-assisted SS networks, ML-based LTE-U networks, ML-based ambient backscatter networks, and other ML-based SS solutions. We also present the security issues at the physical layer and the corresponding defending strategies based on ML algorithms, including primary user emulation (PUE) attacks, spectrum sensing data falsification (SSDF) attacks, jamming attacks, eavesdropping attacks, and privacy issues. Finally, extensive discussions on open challenges for ML-based SS are also given. This comprehensive review is intended to provide the foundation for, and facilitate, future studies exploring the potential of emerging ML for coping with increasingly complex SS and its security problems.
Recent advances in distributed artificial intelligence (AI) have led to tremendous breakthroughs in various communication services, from fault-tolerant factory automation to smart cities. When distributed learning is run over a set of wirelessly connected devices, random channel fluctuations and the incumbent services running on the same network impact the performance of both distributed learning and the coexisting service. In this paper, we investigate a mixed service scenario where distributed AI workflow and ultra-reliable low latency communication (URLLC) services run concurrently over a network. Consequently, we propose a risk sensitivity-based formulation for device selection to minimize the AI training delays during its convergence period while ensuring that the operational requirements of the URLLC service are met. To address this challenging coexistence problem, we transform it into a deep reinforcement learning problem and address it via a framework based on soft actor-critic algorithm. We evaluate our solution with a realistic and 3GPP-compliant simulator for factory automation use cases. Our simulation results confirm that our solution can significantly decrease the training delay of the distributed AI service while keeping the URLLC availability above its required threshold and close to the scenario where URLLC solely consumes all network resources.
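One way to picture a risk-sensitivity-based objective for device selection is the entropic risk measure, which up-weights the tail of the AI-training delay distribution so that selections that occasionally violate URLLC requirements are penalized; the choice of measure and the beta value below are assumptions, not necessarily the paper's formulation.

```python
# Entropic risk measure over sampled training-round delays:
# rho = (1/beta) * log E[exp(beta * delay)], risk-averse for beta > 0.
import numpy as np

def entropic_risk(delays: np.ndarray, beta: float = 2.0) -> float:
    """Risk-sensitive delay objective; larger beta penalizes tail delays more."""
    return np.log(np.mean(np.exp(beta * delays))) / beta

delays = np.array([1.0, 1.1, 0.9, 3.5])  # sampled training-round delays
print(entropic_risk(delays))              # exceeds the plain mean of 1.625
```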
In the autonomous driving field, fusing human knowledge into deep reinforcement learning (DRL) is usually based on human demonstrations recorded in simulated environments. This limits the generalizability and feasibility for application in real-world traffic. We propose a two-stage DRL method that learns from real-world human driving, achieving performance superior to that of a pure DRL agent. Training the DRL agent is done within the framework of CARLA with the Robot Operating System (ROS). For evaluation, we designed different realistic driving scenarios in which the proposed two-stage DRL agent can be compared with a pure DRL agent. After extracting "good" behaviors from human drivers, such as anticipation at signalized intersections, the agent becomes more efficient and drives more safely, which makes this autonomous agent more adaptable to human-robot interaction (HRI) traffic.
This paper proposes a reinforcement learning (RL)-based eco-driving framework for electric connected vehicles (CVs) to improve vehicle energy efficiency at signalized intersections. The safe operation of the vehicle agent is ensured by integrating a model-based car-following policy, a lane-changing policy, and the RL policy. Subsequently, a Markov decision process (MDP) is formulated to enable the vehicle to perform longitudinal control and lateral decision-making, jointly optimizing the car-following and lane-changing behaviors of the CVs near the intersection. The hybrid action space is then parameterized into a hierarchical structure, allowing the agent to be trained with two-dimensional motion patterns in a dynamic traffic environment. Finally, our proposed approach is evaluated in the SUMO software from both a single-vehicle-based perspective and a flow-based perspective. The results show that our strategy can significantly reduce energy consumption by learning proper action schemes without interrupting other human-driven vehicles (HDVs).
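The hierarchical parameterization can be sketched as a discrete lateral decision paired with a continuous longitudinal parameter (acceleration); the two-level split, action names, and bounds below are illustrative assumptions.

```python
# Parameterized hybrid action: a discrete lane decision carries a
# continuous acceleration parameter.
from dataclasses import dataclass
import random

LATERAL = ["keep_lane", "change_left", "change_right"]

@dataclass
class HybridAction:
    lateral: str   # discrete high-level decision
    accel: float   # continuous low-level parameter, m/s^2

def sample_hybrid_action(accel_bounds=(-3.0, 2.0)) -> HybridAction:
    """Random sample from the parameterized action space (exploration)."""
    lateral = random.choice(LATERAL)
    accel = random.uniform(*accel_bounds)
    return HybridAction(lateral, accel)

print(sample_hybrid_action())
```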