Unmanned aerial vehicles (UAVs) have been widely used in military warfare. In this paper, we formulate the autonomous motion control (AMC) problem as a Markov decision process (MDP) and propose an advanced deep reinforcement learning (DRL) method that allows UAVs to execute complex tasks in large, dynamic, three-dimensional (3D) environments. To overcome the limitations of the prioritized experience replay (PER) algorithm and improve performance, the proposed asynchronous curriculum experience replay (ACER) uses multithreading to update priorities asynchronously, assigns true priorities, and applies a temporary experience pool to make higher-quality experiences available for learning. A first-in-useless-out (FIUO) experience pool is also introduced to ensure a higher use value of the stored experiences. In addition, combined with curriculum learning (CL), a more reasonable training paradigm that samples experiences from simple to difficult is designed for training UAVs. By training in complex unknown environments constructed from the parameters of a real UAV, the proposed ACER improves the convergence speed by 24.66% compared with the state-of-the-art twin delayed deep deterministic policy gradient (TD3) algorithm. Test experiments carried out in environments of different complexities demonstrate the robustness and generalization ability of the ACER agent.
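The abstract describes the replay-buffer mechanics only at a high level; as a rough illustration, the Python sketch below combines priority-proportional sampling (as in PER) with a first-in-useless-out eviction rule that discards the lowest-priority experience when the buffer is full. The class and method names, and the eviction rule itself, are illustrative assumptions rather than the paper's implementation.

```python
import heapq
import random

class FIUOReplayBuffer:
    """Illustrative priority-keyed buffer: when full, evict the *least useful*
    (lowest-priority) experience instead of the oldest one (FIFO)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []       # min-heap of (priority, counter, transition)
        self.counter = 0     # tie-breaker so transitions are never compared

    def add(self, transition, priority):
        entry = (priority, self.counter, transition)
        self.counter += 1
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, entry)
        elif priority > self.heap[0][0]:
            # FIUO-style eviction: drop the lowest-priority experience.
            heapq.heapreplace(self.heap, entry)

    def sample(self, batch_size):
        # Priority-proportional sampling, as in PER (priorities assumed > 0).
        weights = [p for p, _, _ in self.heap]
        items = [t for _, _, t in self.heap]
        return random.choices(items, weights=weights, k=batch_size)
```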
There is an ever-increasing demand for unmanned aerial vehicles (UAVs, i.e., drones) in diverse applications such as package delivery, traffic monitoring, search and rescue operations, and military combat engagements. In all of these applications, the UAV is used to navigate the environment autonomously, without human interaction, to perform specific tasks and avoid obstacles. Autonomous UAV navigation is commonly accomplished using reinforcement learning (RL), in which agents act as experts in a domain to navigate the environment while avoiding obstacles. Understanding the navigation environment and the algorithmic limitations plays a crucial role in choosing an appropriate RL algorithm to solve the navigation problem effectively. Therefore, this study first identifies the UAV navigation tasks and discusses navigation frameworks and simulation software. Next, RL algorithms are classified and discussed based on the environment, algorithm characteristics, capabilities, and applications to different UAV navigation problems, which will help practitioners and researchers select appropriate RL algorithms for their UAV navigation use cases. Moreover, the identified gaps and opportunities will drive future UAV navigation research.
Compared with model-based control and optimization methods, reinforcement learning (RL) provides a data-driven, learning-based framework to formulate and solve sequential decision-making problems. The RL framework has become promising due to largely improved data availability and computing power in the aviation industry. Many aviation-based applications can be formulated or treated as sequential decision-making problems. Some of them are offline planning problems, while others need to be solved online and are safety-critical. In this survey paper, we first describe standard RL formulations and solutions. Then we survey the landscape of existing RL-based applications in aviation. Finally, we summarize the paper, identify the technical gaps, and suggest future directions of RL research in aviation.
The high emission and low energy efficiency caused by internal combustion engines (ICE) have become unacceptable under environmental regulations and the energy crisis. As a promising alternative solution, multi-power source electric vehicles (MPS-EVs) introduce different clean energy systems to improve powertrain efficiency. The energy management strategy (EMS) is a critical technology for MPS-EVs to maximize efficiency, fuel economy, and range. Reinforcement learning (RL) has become an effective methodology for the development of EMS and has received continuous attention and research, but there is still a lack of systematic analysis of the design elements of RL-based EMS. To this end, this paper presents an in-depth analysis of the current research on RL-based EMS (RL-EMS) and summarizes its design elements. This paper first summarizes the previous applications of RL in EMS from five aspects: algorithm, perception scheme, decision scheme, reward function, and innovative training method. The contribution of advanced algorithms to the training effect is shown, the perception and control schemes in the literature are analyzed in detail, different reward function settings are classified, and innovative training methods and their roles are elaborated. Finally, by comparing the development routes of RL and RL-EMS, this paper identifies the gap between advanced RL solutions and existing RL-EMS and suggests potential development directions for implementing advanced artificial intelligence (AI) solutions in EMS.
The future Internet involves several emerging technologies such as 5G and beyond-5G networks, vehicular networks, unmanned aerial vehicle (UAV) networks, and the Internet of Things (IoT). Moreover, the future Internet is becoming heterogeneous and decentralized, with a large number of network entities involved. Each entity may need to make local decisions to improve network performance under dynamic and uncertain network environments. Standard learning algorithms, such as single-agent reinforcement learning (RL) or deep reinforcement learning (DRL), have recently been used to enable each network entity, acting as an agent, to adaptively learn an optimal decision-making policy by interacting with an unknown environment. However, such algorithms fail to model the cooperation or competition among network entities and simply treat other entities as part of the environment, which may lead to the non-stationarity issue. Multi-agent reinforcement learning (MARL) allows each network entity to learn its optimal policy by observing not only the environment but also the policies of other entities. As a result, MARL can significantly improve the learning efficiency of network entities and has recently been used to solve various problems in emerging networks. In this paper, we therefore review the applications of MARL in emerging networks. In particular, we provide a tutorial on MARL along with a comprehensive survey of its applications in the next-generation Internet. We first introduce single-agent RL and MARL, and then review a number of applications of MARL for solving emerging problems in the future Internet, including network access, transmit power control, computation offloading, content caching, packet routing, trajectory design for UAV-aided networks, and network security.
Obstacle avoidance for small unmanned aircraft is vital for the safety of future urban air mobility (UAM) and unmanned aircraft system (UAS) traffic management (UTM). There are many techniques for real-time robust drone guidance, but many of them operate in discretized airspace and control, which would require an additional path-smoothing step to provide flexible commands for the UAS. To provide safe and efficient computational guidance for unmanned aircraft operations, we explore the use of a deep reinforcement learning algorithm based on proximal policy optimization (PPO) to guide an autonomous UAS to its destination while avoiding obstacles through continuous control. The proposed scenario state representation and reward function map the continuous state space to continuous controls for heading angle and speed. To verify the performance of the proposed learning framework, we conducted numerical experiments with static and moving obstacles. Uncertainties associated with the environment and the safe operating bounds are investigated in detail. The results show that the proposed model can provide accurate and robust guidance and resolves conflicts with a success rate of over 99%.
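To make the "continuous state to continuous control" mapping concrete, here is a minimal sketch of a Gaussian PPO actor whose two outputs are a heading-rate command and a speed command; the layer sizes, command bounds, and names are assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ContinuousGuidancePolicy(nn.Module):
    """Minimal Gaussian actor mapping the scenario state to two continuous
    commands: heading-angle rate and speed (bounds are illustrative)."""

    def __init__(self, state_dim, max_heading_rate=0.5, max_speed=20.0):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, 128), nn.Tanh(),
                                  nn.Linear(128, 128), nn.Tanh())
        self.mu = nn.Linear(128, 2)                    # [heading rate, speed]
        self.log_std = nn.Parameter(torch.zeros(2))
        self.scale = torch.tensor([max_heading_rate, max_speed])

    def forward(self, state):
        h = self.body(state)
        mean = torch.tanh(self.mu(h)) * self.scale     # keep commands in bounds
        dist = torch.distributions.Normal(mean, self.log_std.exp())
        action = dist.sample()
        return action, dist.log_prob(action).sum(-1)
```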
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents, methods to validate, test and robustify existing solutions in RL are discussed.
On-policy deep reinforcement learning algorithms have low data utilization and require significant experience for policy improvement. This paper proposes a proximal policy optimization algorithm with prioritized trajectory replay (PTR-PPO) that combines on-policy and off-policy methods to improve sampling efficiency by prioritizing the replay of trajectories generated by old policies. We first design three trajectory priorities based on the characteristics of trajectories: the first two are the max and mean trajectory priorities based on one-step empirical generalized advantage estimation (GAE) values, and the last is the reward trajectory priority based on normalized undiscounted cumulative rewards. We then incorporate prioritized trajectory replay into the PPO algorithm, propose a truncated importance-weight method to overcome the high variance caused by large importance weights under multi-step experience, and design a policy-improvement loss function for PPO under off-policy conditions. We evaluate the performance of PTR-PPO on a set of Atari discrete control tasks, achieving state-of-the-art performance. In addition, by analyzing the heatmap of priorities at various locations in the priority memory during training, we find that the memory size and rollout length can have a significant effect on the distribution of trajectory priorities and, consequently, on the performance of the algorithm.
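A small sketch of the three trajectory priorities and the truncated importance weight described above might look like the following; the cap value and the point at which the return is normalized are assumptions, not the paper's settings.

```python
import numpy as np

def trajectory_priorities(gae_values, rewards):
    """Three candidate priorities for a stored trajectory: max-GAE, mean-GAE,
    and its undiscounted return (to be normalized across the whole memory)."""
    p_max = np.max(np.abs(gae_values))
    p_mean = np.mean(np.abs(gae_values))
    p_return = np.sum(rewards)
    return p_max, p_mean, p_return

def truncated_importance_weight(pi_new, pi_old, cap=2.0):
    """Per-step importance ratio, truncated at `cap` so that multi-step
    off-policy corrections do not blow up the variance (cap is illustrative)."""
    return np.minimum(pi_new / pi_old, cap)
```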
Deep reinforcement learning (DRL) and deep multi-agent reinforcement learning (MARL) have achieved great success in a wide range of domains, including game AI, autonomous vehicles, and robotics. However, DRL and deep MARL agents are known to be sample-inefficient: millions of interactions are usually required even for relatively simple problem settings, which prevents wide application and deployment in real-world scenarios. One bottleneck challenge behind this is the well-known exploration problem, i.e., how to efficiently explore the environment and collect informative experiences that benefit policy learning toward the optimum. This problem becomes more challenging in complex environments with sparse rewards, noisy distractions, long horizons, and non-stationary co-learners. In this paper, we conduct a comprehensive survey of existing exploration methods for single-agent and multi-agent RL. We begin the survey by identifying several key challenges to efficient exploration. Beyond the two main branches of methods, we also include other notable exploration approaches with different ideas and techniques. In addition to the algorithmic analysis, we provide a comprehensive and unified empirical comparison of DRL exploration methods on a set of commonly used benchmarks. Based on our algorithmic and empirical investigation, we finally summarize the open problems of exploration in DRL and deep MARL and point out a few future directions.
In this work, we optimize the 3D trajectory of an unmanned aerial vehicle (UAV)-based portable access point (PAP) that provides wireless services to a set of ground nodes (GNs). Moreover, following the Peukert effect, we consider a practical non-linear discharge model for the UAV's battery. We thus formulate the problem in a novel manner as the maximization of a fairness-based energy-efficiency metric, which we call fair energy efficiency (FEE). The FEE metric defines a system that places importance on both per-user service fairness and the energy efficiency of the PAP. The formulated problem is non-convex with intractable constraints. To obtain a solution, we represent the problem as a Markov decision process (MDP) with continuous state and action spaces. Considering the complexity of the solution space, we use the twin delayed deep deterministic policy gradient (TD3) actor-critic deep reinforcement learning (DRL) framework to learn a policy that maximizes the FEE of the system. We perform two types of RL training to demonstrate the effectiveness of our approach: the first (offline) approach keeps the positions of the GNs fixed throughout the training phase, whereas the second generalizes the learned policy to any arrangement of GNs by changing their positions after every training episode. Numerical evaluations show that neglecting the Peukert effect overestimates the air time of the PAP, which can be addressed by optimally selecting the PAP's flying speed. Moreover, user fairness, energy efficiency, and hence the FEE value of the system can be improved by moving the PAP efficiently above the GNs. Accordingly, we observe gains of up to 88.31%, 272.34%, and 318.13% over the baseline scenarios in suburban, urban, and dense urban environments, respectively.
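The abstract does not reproduce the battery model, but one common statement of Peukert's law, which captures the non-linear dependence of discharge time on current draw, is:

```latex
% t: effective discharge (air) time, H: rated discharge time,
% C: rated capacity at the rated time H, I: actual current draw,
% k > 1: Peukert constant of the battery.
t = H \left( \frac{C}{I\,H} \right)^{k}
```

With k > 1, doubling the current draw more than halves the air time, which is why ignoring the effect overestimates how long the PAP can stay aloft.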
The connectivity-aware path design is crucial in the effective deployment of autonomous Unmanned Aerial Vehicles (UAVs). Recently, Reinforcement Learning (RL) algorithms have become the popular approach to solving this type of complex problem, but RL algorithms suffer from slow convergence. In this paper, we propose a Transfer Learning (TL) approach, where we use a teacher policy previously trained in an old domain to boost the path learning of the agent in the new domain. As the exploration processes and the training continue, the agent refines the path design in the new domain based on the subsequent interactions with the environment. We evaluate our approach considering an old domain at sub-6 GHz and a new domain at millimeter Wave (mmWave). The teacher path policy, previously trained at sub-6 GHz, is the solution to a connectivity-aware path problem that we formulate as a constrained Markov Decision Process (CMDP). We employ a Lyapunov-based model-free Deep Q-Network (DQN) to solve the path design at sub-6 GHz that guarantees connectivity constraint satisfaction. We empirically demonstrate the effectiveness of our approach for different urban environment scenarios. The results demonstrate that our proposed approach is capable of reducing the training time considerably at mmWave.
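The paper's exact transfer mechanism is not detailed in the abstract; a rough sketch of "boosting the new-domain agent with a teacher policy" could be a weight copy plus an annealed probability of following the teacher, as below. The mixing rule and all names are assumptions, and the networks are assumed to be PyTorch modules with compatible layers.

```python
import copy
import random

def transfer_init(teacher_q, student_q):
    """Warm-start the new-domain (mmWave) DQN from the sub-6 GHz teacher by
    copying its weights; assumes both networks share the same architecture."""
    student_q.load_state_dict(copy.deepcopy(teacher_q.state_dict()))
    return student_q

def select_action(state, student_q, teacher_q, beta):
    """With probability beta follow the teacher's path policy, otherwise act
    greedily with the student; beta is annealed toward zero during training."""
    q_net = teacher_q if random.random() < beta else student_q
    return q_net(state).argmax().item()
```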
Reinforcement learning (RL) requires skillful definition and remarkable computational efforts to solve optimization and control problems, which could impair its prospect. Introducing human guidance into reinforcement learning is a promising way to improve learning performance. In this paper, a comprehensive human guidance-based reinforcement learning framework is established. A novel prioritized experience replay mechanism that adapts to human guidance in the reinforcement learning process is proposed to boost the efficiency and performance of the reinforcement learning algorithm. To relieve the heavy workload on human participants, a behavior model is established based on an incremental online learning method to mimic human actions. We design two challenging autonomous driving tasks for evaluating the proposed algorithm. Experiments are conducted to assess the training and testing performance and learning mechanism of the proposed algorithm. Comparative results against the state-of-the-art methods suggest the advantages of our algorithm in terms of learning efficiency, performance, and robustness.
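One simple way to "adapt the priority to human guidance", sketched below under the assumption that each stored transition carries a flag marking human intervention, is to add a constant bonus on top of the usual PER priority; the bonus, exponent, and epsilon values are illustrative rather than the paper's.

```python
def transition_priority(td_error, human_guided,
                        epsilon=1e-3, human_bonus=1.0, alpha=0.6):
    """PER-style priority from the TD error, boosted for transitions in which
    the human participant intervened (all constants are illustrative)."""
    priority = (abs(td_error) + epsilon) ** alpha
    if human_guided:
        priority += human_bonus
    return priority
```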
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
With the rapid development of drone technologies, drones are widely used in many applications, including military domains. In this paper, a novel situation-aware DRL-based autonomous nonlinear drone mobility control algorithm is proposed for cyber-physical loitering munition applications. On the battlefield, the design of a DRL-based autonomous control algorithm is not straightforward because real-world data gathering is generally not available. Therefore, the approach taken in this paper is to construct a cyber-physical virtual environment with Unity. Based on the virtual cyber-physical battlefield scenarios, a DRL-based automated nonlinear drone mobility control algorithm can be designed, evaluated, and visualized. Moreover, many obstacles exist that are harmful for linear trajectory control in real-world battlefield scenarios. Thus, our proposed autonomous nonlinear drone mobility control algorithm utilizes situation-aware components that are implemented with a Raycast function in the Unity virtual scenarios. Based on the gathered situation-aware information, the drone can autonomously and nonlinearly adjust its trajectory during flight. This approach is therefore clearly beneficial for avoiding obstacles on obstacle-deployed battlefields. Our visualization-based performance evaluation shows that the proposed algorithm is superior to other linear mobility control algorithms.
In the field of autonomous robots, reinforcement learning (RL) is an increasingly used method to solve the task of dynamic obstacle avoidance for mobile robots, autonomous ships, and drones. A common practice to train those agents is to use a training environment with random initialization of agent and obstacles. Such approaches might suffer from a low coverage of high-risk scenarios in training, leading to impaired final performance of obstacle avoidance. This paper proposes a general training environment where we gain control over the difficulty of the obstacle avoidance task by using short training episodes and assessing the difficulty by two metrics: The number of obstacles and a collision risk metric. We found that shifting the training towards a greater task difficulty can massively increase the final performance. A baseline agent, using a traditional training environment based on random initialization of agent and obstacles and longer training episodes, leads to a significantly weaker performance. To prove the generalizability of the proposed approach, we designed two realistic use cases: A mobile robot and a maritime ship under the threat of approaching obstacles. In both applications, the previous results can be confirmed, which emphasizes the general usability of the proposed approach, detached from a specific application context and independent of the agent's dynamics. We further added Gaussian noise to the sensor signals, resulting in only a marginal degradation of performance and thus indicating solid robustness of the trained agent.
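A toy version of such a difficulty-controlled training environment is sketched below: a training-progress fraction drives both the number of spawned obstacles and the targeted collision-risk level of each short episode. The linear schedule and the constants are assumptions; the paper's actual schedule may differ.

```python
import random

def episode_config(progress, max_obstacles=12):
    """Sample the settings of one short training episode; `progress` in [0, 1]
    is the fraction of training completed, so later episodes are harder."""
    n_obstacles = 1 + int(progress * (max_obstacles - 1))
    n_obstacles = max(1, n_obstacles + random.choice([-1, 0, 1]))  # small jitter
    target_risk = 0.2 + 0.8 * progress   # normalized collision-risk level
    return n_obstacles, target_risk
```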
State-of-the-art multi-agent reinforcement learning (MARL) methods have provided promising solutions to a variety of complex problems. However, these methods all assume that agents perform synchronized primitive-action executions, so they cannot truly scale to long-horizon real-world multi-agent/robot tasks that inherently require agents/robots to reason asynchronously about high-level action selection over different time durations. The macro-action decentralized partially observable Markov decision process (MacDec-POMDP) is a general formalization for asynchronous decision-making under uncertainty in fully cooperative multi-agent tasks. In this thesis, we first propose a set of value-based RL approaches for MacDec-POMDPs, in which agents are allowed to perform asynchronous learning and decision-making with macro-action-value functions in three paradigms: decentralized learning and control, centralized learning and control, and centralized training for decentralized execution (CTDE). Building on this work, we formulate a set of macro-action-based policy gradient algorithms under the three training paradigms, in which agents directly optimize their parameterized policies in an asynchronous manner. We evaluate our methods in simulation and on real robots. Empirical results demonstrate the superiority of our approaches in large multi-agent problems and validate the effectiveness of our algorithms in learning high-quality, asynchronous solutions with macro-actions.
Traditional multicast routing methods have several problems in constructing a multicast tree, such as limited access to network state information, poor adaptability to dynamic and complex changes of the network, and inflexible data forwarding. To address these defects, the optimal multicast routing problem in software-defined networking (SDN) is tailored as a multi-objective optimization problem, and an intelligent multicast routing algorithm, DRL-M4MR, based on the deep Q-network (DQN) deep reinforcement learning (DRL) method, is designed to construct the multicast tree in SDN. First, the multicast tree state matrix, link bandwidth matrix, link delay matrix, and link packet loss rate matrix are designed as the state space of the DRL agent by leveraging the global view and control of the SDN. Second, the action space of the agent consists of all the links in the network, and the action selection strategy is designed to add links to the current multicast tree under four cases. Third, single-step and final reward functions are designed to guide the agent to make decisions that construct the optimal multicast tree. The experimental results show that, compared with existing algorithms, the multicast tree constructed by DRL-M4MR achieves better bandwidth, delay, and packet loss rate performance after training, and it can make more intelligent multicast routing decisions in a dynamic network environment.
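The state and action handling described above can be illustrated roughly as follows: the four link/tree matrices are stacked into a multi-channel state tensor, and the greedy action is chosen only among links that may legally extend the current multicast tree. The masking rule, shapes, and function names are assumptions.

```python
import numpy as np

def build_state(tree_matrix, bw_matrix, delay_matrix, loss_matrix):
    """Stack the multicast-tree, bandwidth, delay, and packet-loss matrices
    into a 4 x N x N state tensor for the DQN agent."""
    return np.stack([tree_matrix, bw_matrix, delay_matrix, loss_matrix], axis=0)

def select_link(q_values, valid_link_mask):
    """Greedy link selection with links that cannot extend the current
    multicast tree masked out (the validity rule is an assumption)."""
    masked = np.where(valid_link_mask, q_values, -np.inf)
    return int(np.argmax(masked))
```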
Deep reinforcement learning (DRL) is a promising approach to solving complex control tasks by learning policies through interactions with the environment. However, training DRL policies requires large amounts of experience, which makes it impractical to learn a policy directly on a physical system. Sim-to-real approaches leverage simulation to pretrain DRL policies and then deploy them in the real world. Unfortunately, the direct real-world deployment of pretrained policies usually suffers from performance deterioration due to the mismatched dynamics, known as the reality gap. Recent sim-to-real methods, such as domain randomization and domain adaptation, focus on improving the robustness of the pretrained agents. Nevertheless, simulation-trained policies often need to be tuned with real-world data to reach optimal performance, which is challenging due to the high cost of real-world samples. This work proposes a distributed cloud-edge architecture to train DRL agents in the real world in real time. In the architecture, inference and training are assigned to the edge and the cloud, respectively, separating the real-time control loop from the computationally expensive training loop. To overcome the reality gap, our architecture exploits sim-to-real transfer strategies to continue the training of simulation-pretrained agents on the physical system. We demonstrate its applicability on a physical inverted-pendulum control system and analyze the critical parameters. The real-world experiments show that our architecture can adapt pretrained DRL agents to unseen dynamics consistently and efficiently.
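The split between a real-time edge loop and a computationally heavy cloud loop could be sketched as below; the `env`, `policy`, and `trainer` interfaces are assumed placeholders (a gym-style environment, a policy with `act`/`load`, a trainer with `update`), and in practice the two loops would run on separate machines rather than sharing in-process queues.

```python
import queue

transitions = queue.Queue()        # edge -> cloud: experience stream
weights_box = {"params": None}     # cloud -> edge: latest policy parameters

def edge_loop(env, policy, steps):
    """Real-time edge side: inference only; ship every transition to the cloud
    trainer and pull fresh weights whenever they are available."""
    obs = env.reset()
    for _ in range(steps):
        if weights_box["params"] is not None:
            policy.load(weights_box["params"])
        action = policy.act(obs)
        next_obs, reward, done, _ = env.step(action)
        transitions.put((obs, action, reward, next_obs, done))
        obs = env.reset() if done else next_obs

def cloud_loop(trainer, updates, batch_size=64):
    """Cloud side: consume transitions, run the expensive gradient updates,
    and publish the new parameters back to the edge."""
    for _ in range(updates):
        batch = [transitions.get() for _ in range(batch_size)]
        weights_box["params"] = trainer.update(batch)
```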
The ability to recover from unexpected external perturbations is a fundamental motor skill in bipedal locomotion. An effective response includes not only the ability to recover balance and maintain stability, but also the ability to fall in a safe manner when balance recovery is physically infeasible. For robots associated with bipedal locomotion, such as humanoid robots and assistive robotic devices that help humans walk, designing controllers that provide this stability and safety can prevent damage to the robots and avoid injury-related medical costs. This is a challenging task because it involves generating highly dynamic motion for a high-dimensional, non-linear, and under-actuated system with contacts. Despite progress using model-based and optimization methods, requirements such as extensive domain knowledge, relatively large computation times, and limited robustness to changes in dynamics still leave this an open problem. In this thesis, to address these issues, we develop learning-based algorithms capable of synthesizing push-recovery control policies for two different kinds of robots: humanoid robots and assistive robotic devices that aid bipedal locomotion. Our work can be divided into two closely related directions: 1) learning safe falling and fall-prevention strategies for humanoid robots, and 2) learning fall-prevention strategies for humans using a robotic assistive device. To achieve this, we introduce a set of deep reinforcement learning (DRL) algorithms to learn control policies that improve safety when using these robots.
Reinforcement learning and, more recently, deep reinforcement learning are popular methods for solving sequential decision-making problems modeled as Markov decision processes. The RL modeling of a problem and the selection of algorithms and hyper-parameters require careful consideration, since different configurations may lead to completely different performance. These considerations are mainly the task of RL experts; however, RL is gradually becoming popular in other fields where the researchers and system designers are not RL experts. Moreover, many modeling decisions, such as defining the state and action spaces, the batch size and frequency of batch updates, and the number of timesteps, are typically made manually. For these reasons, automating the different components of the RL framework is of great importance, and it has attracted much attention in recent years. Automated RL provides a framework in which the different components of RL, including MDP modeling, algorithm selection, and hyper-parameter optimization, are modeled and defined automatically. In this article, we explore the literature and present recent work that can be used in automated RL. Moreover, we discuss the challenges, open questions, and research directions in AutoRL.