Obstacle avoidance for small unmanned aircraft is vital to the safety of future urban air mobility (UAM) and unmanned aircraft system (UAS) traffic management (UTM). Many techniques exist for real-time, robust guidance of unmanned aircraft, but most of them operate in discretized airspace and control spaces, which would require an additional path-smoothing step to provide flexible commands to the UAS. To provide operationally safe and computationally efficient guidance for unmanned aircraft, we explore a deep reinforcement learning algorithm based on proximal policy optimization (PPO) that guides an autonomous UAS to its destination while avoiding obstacles through continuous control. The proposed scenario state representation and reward function map the continuous state space to continuous controls for heading angle and speed. To verify the performance of the proposed learning framework, we conducted numerical experiments with both static and moving obstacles. Uncertainties in the environment and in the safety operation bounds were investigated in detail. The results show that the proposed model can provide accurate and robust guidance and resolves conflicts with a success rate of over 99%.
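The abstract above describes mapping a continuous state directly to heading-angle and speed commands with PPO. As an illustration only, here is a minimal PyTorch sketch of a Gaussian policy head over a two-dimensional (heading rate, speed) action and the PPO clipped surrogate loss; the state dimension, layer sizes, and the Gaussian parameterization are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ContinuousPolicy(nn.Module):
    """Gaussian policy mapping an ownship/obstacle state vector to
    (heading-rate, speed) commands. Sizes are illustrative."""
    def __init__(self, state_dim=8, action_dim=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
        )
        self.mu = nn.Linear(64, action_dim)                 # mean of the action distribution
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, state):
        h = self.body(state)
        return torch.distributions.Normal(self.mu(h), self.log_std.exp())

def ppo_clip_loss(dist, actions, old_log_probs, advantages, eps=0.2):
    """Clipped surrogate objective used by PPO."""
    log_probs = dist.log_prob(actions).sum(-1)
    ratio = (log_probs - old_log_probs).exp()
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```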
Compared with model-based control and optimization methods, reinforcement learning (RL) provides a data-driven, learning-based framework to formulate and solve sequential decision-making problems. The RL framework has become promising due to largely improved data availability and computing power in the aviation industry. Many aviation-based applications can be formulated or treated as sequential decision-making problems. Some of them are offline planning problems, while others need to be solved online and are safety-critical. In this survey paper, we first describe standard RL formulations and solutions. Then we survey the landscape of existing RL-based applications in aviation. Finally, we summarize the paper, identify the technical gaps, and suggest future directions of RL research in aviation.
There is an ever-increasing demand for unmanned aerial vehicles (UAVs, i.e., drones) in diverse applications such as package delivery, traffic monitoring, search and rescue operations, and military combat engagements. In all of these applications, the UAV is used to navigate the environment autonomously, without human interaction, perform specific tasks, and avoid obstacles. Autonomous UAV navigation is commonly accomplished using reinforcement learning (RL), in which an agent acts as an expert in the domain, navigating the environment while avoiding obstacles. Understanding the navigation environment and the algorithmic limitations plays a crucial role in choosing an appropriate RL algorithm to solve the navigation problem effectively. Accordingly, this study first identifies UAV navigation tasks and discusses navigation frameworks and simulation software. Next, RL algorithms are classified and discussed according to the environment, algorithmic characteristics, capabilities, and applications to different UAV navigation problems, which will help practitioners and researchers select appropriate RL algorithms for their UAV navigation use cases. Moreover, the identified gaps and opportunities will drive future UAV navigation research.
Safety is the primary concern in air traffic. Flight safety between unmanned aerial vehicles (UAVs) is ensured through pairwise separation minima, making use of conflict detection and resolution methods. Existing methods mainly deal with pairwise conflicts, but with the expected increase in traffic density, encounters involving more than two UAVs may occur. In this paper, we model multi-UAV conflict resolution as a multi-agent reinforcement learning problem. We implement an algorithm based on graph neural networks in which cooperating agents can communicate to jointly generate resolution maneuvers. The model is evaluated in scenarios with 3 and 4 concurrent agents. Results show that the agents are able to successfully resolve multi-UAV conflicts through cooperative strategies.
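To make the communication idea concrete, the following is a hedged sketch of one round of graph-based message passing among UAVs in an encounter: each agent aggregates encoded messages from its linked neighbors before a policy head acts on the resulting embedding. The dimensions and the sum aggregation are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One communication round: each UAV aggregates messages from the other
    UAVs in the encounter and updates its own embedding."""
    def __init__(self, dim=32):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h, adj):
        # h: (n_agents, dim) node embeddings, adj: (n_agents, n_agents) 0/1 links
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        messages = self.msg(pairs) * adj.unsqueeze(-1)    # mask non-neighbours
        agg = messages.sum(dim=1)                         # incoming messages per agent
        return self.update(torch.cat([h, agg], dim=-1))   # embedding fed to the policy head
```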
In narrow spaces, motion planning based on the traditional hierarchical autonomy stack can cause collisions due to mapping, localization, and control noise; moreover, it becomes unusable when no map is available. To address these problems, we exploit deep reinforcement learning, which has proven effective for self-deciding, so that the robot can self-explore in narrow spaces without a map while avoiding collisions. Specifically, based on our Ackermann-steering rectangular ZebraT robot and its Gazebo simulator, we propose a rectangular safety zone to represent the state and detect collisions for rectangular-shaped robots, together with a reward function that requires neither elaborate crafting nor destination information. We then benchmark five reinforcement learning algorithms, including DDPG, DQN, SAC, PPO, and PPO-discrete, on simulated narrow tracks. After training, the well-performing DDPG and DQN models can be transferred to three brand-new simulated tracks and then to three real-world tracks.
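As an illustration of the rectangular-zone idea for a rectangular robot footprint, the sketch below checks whether any sensed obstacle point falls inside a rectangle aligned with the robot's body frame. The zone dimensions, margin, and point-based collision test are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def in_rectangular_zone(points, center, yaw, half_length, half_width, margin=0.1):
    """Return True if any obstacle point lies inside a rectangular safety
    zone around an Ackermann-steering robot.
    points: (N, 2) obstacle points in the world frame,
    center: (2,) robot centre, yaw: heading in radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    rel = points - center
    # rotate points into the robot body frame
    body = np.stack([c * rel[:, 0] + s * rel[:, 1],
                     -s * rel[:, 0] + c * rel[:, 1]], axis=1)
    inside = (np.abs(body[:, 0]) <= half_length + margin) & \
             (np.abs(body[:, 1]) <= half_width + margin)
    return inside.any()
```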
Development of navigation algorithms is essential for the successful deployment of robots in rapidly changing hazardous environments for which prior knowledge of configuration is often limited or unavailable. Use of traditional path-planning algorithms, which are based on localization and require detailed obstacle maps with goal locations, is not possible. In this regard, vision-based algorithms hold great promise, as visual information can be readily acquired by a robot's onboard sensors and provides a much richer source of information from which deep neural networks can extract complex patterns. Deep reinforcement learning has been used to achieve vision-based robot navigation. However, the efficacy of these algorithms in environments with dynamic obstacles and high variation in the configuration space has not been thoroughly investigated. In this paper, we employ a deep Dyna-Q learning algorithm for room evacuation and obstacle avoidance in partially observable environments based on low-resolution raw image data from an onboard camera. We explore the performance of a robotic agent in environments containing no obstacles, convex obstacles, and concave obstacles, both static and dynamic. Obstacles and the exit are initialized in random positions at the start of each episode of reinforcement learning. Overall, we show that our algorithm and training approach can generalize learning for collision-free evacuation of environments with complex obstacle configurations. It is evident that the agent can navigate to a goal location while avoiding multiple static and dynamic obstacles, and can escape from a concave obstacle while searching for and navigating to the exit.
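For readers unfamiliar with Dyna-Q, the structure of the algorithm (direct RL updates from real transitions plus planning updates drawn from a learned one-step model) is shown in the tabular sketch below; the paper's deep variant replaces the table and model with neural networks over low-resolution camera images. The sketch assumes a classic gym-style environment with discrete actions and hashable observations.

```python
import random
from collections import defaultdict

def dyna_q(env, episodes=500, planning_steps=20, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Dyna-Q: every real step is followed by several planning
    updates sampled from a learned one-step model."""
    Q = defaultdict(float)
    model = {}                      # (s, a) -> (r, s', done)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            actions = list(range(env.action_space.n))
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda x: Q[(s, x)]))
            s2, r, done, _ = env.step(a)
            target = r + (0 if done else gamma * max(Q[(s2, x)] for x in actions))
            Q[(s, a)] += alpha * (target - Q[(s, a)])       # direct RL update
            model[(s, a)] = (r, s2, done)
            for _ in range(planning_steps):                 # planning from the learned model
                (ps, pa), (pr, ps2, pdone) = random.choice(list(model.items()))
                ptarget = pr + (0 if pdone else gamma * max(Q[(ps2, x)] for x in actions))
                Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])
            s = s2
    return Q
```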
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents, methods to validate, test and robustify existing solutions in RL are discussed.
Many real-world applications can be formulated as multi-agent cooperation problems, such as network packet routing and the coordination of autonomous vehicles. The emergence of deep reinforcement learning (DRL) provides a promising approach to multi-agent cooperation through the interaction of agents with the environment. However, traditional DRL solutions suffer from the high dimensionality of multiple agents with continuous action spaces during policy search. In addition, the dynamics of the agents' policies make training non-stationary. To tackle these issues, we propose a hierarchical reinforcement learning approach with high-level decision-making and low-level individual control for efficient policy search. In particular, the cooperation of multiple agents can be learned efficiently in the high-level discrete action space, while the low-level individual control can be reduced to single-agent reinforcement learning. In addition to the hierarchical reinforcement learning, we propose an opponent modeling network to model other agents' policies during the learning process. In contrast to end-to-end DRL approaches, our approach reduces the learning complexity by decomposing the overall task into sub-tasks in a hierarchical way. To evaluate the efficiency of our approach, we conduct a real-world case study in a cooperative lane-change scenario. Both simulation and real-world experiments show the superiority of our approach in terms of collision rate and convergence speed.
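To illustrate the high-level/low-level split described above, here is a toy sketch in which a discrete high-level decision (keep lane, change left, change right) selects a target lane and a simple proportional controller produces continuous steering and acceleration commands. The gains, lane geometry, and state layout are illustrative assumptions, not the paper's controller.

```python
def low_level_control(lateral_pos, speed, maneuver, lane_centers, lane_idx,
                      kp_lat=0.5, kp_speed=0.3, target_speed=25.0):
    """Toy low-level controller: the high-level discrete decision picks a
    target lane; proportional feedback turns it into continuous commands."""
    if maneuver == "change_left":
        lane_idx = max(lane_idx - 1, 0)
    elif maneuver == "change_right":
        lane_idx = min(lane_idx + 1, len(lane_centers) - 1)
    steering = kp_lat * (lane_centers[lane_idx] - lateral_pos)   # steer toward the target lane centre
    acceleration = kp_speed * (target_speed - speed)
    return steering, acceleration, lane_idx
```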
Cost-effective asset management is an area of interest across multiple industries. Specifically, this paper develops a deep reinforcement learning (DRL) solution to automatically determine optimal rehabilitation policies for continuously deteriorating water pipes. We address the rehabilitation planning problem in both online and offline DRL settings. In online DRL, the agent interacts with a simulated environment of multiple pipes with varying lengths, materials, and failure-rate characteristics. We train the agent using deep Q-learning (DQN) to learn an optimal policy that minimizes the average cost and reduces the failure probability. In offline learning, the agent uses static data, such as the DQN replay data, to learn an optimal policy via a conservative Q-learning algorithm without further interaction with the environment. We demonstrate that the DRL-based policies improve over standard preventive, corrective, and greedy planning alternatives. Furthermore, learning from the fixed DQN replay dataset surpasses the online DQN setting. The results warrant that existing deterioration profiles of water pipes, consisting of large state and action trajectories, provide a valuable avenue for learning rehabilitation policies in an offline setting without the need for a simulator.
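For context on the offline setting, the sketch below shows the standard discrete-action conservative Q-learning (CQL) loss: the usual DQN TD error plus a penalty that pushes down Q-values for actions absent from the logged data. The batch layout and the penalty weight are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def cql_dqn_loss(q_net, target_net, batch, gamma=0.99, cql_alpha=1.0):
    """CQL loss for discrete actions, computed on a batch drawn from a
    static replay dataset. batch = (states, actions, rewards, next_states, dones)."""
    s, a, r, s2, done = batch
    q_all = q_net(s)                                          # (B, n_actions)
    q_taken = q_all.gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1 - done) * target_net(s2).max(dim=1).values
    td_loss = F.smooth_l1_loss(q_taken, target)
    # conservative penalty: log-sum-exp over all actions minus Q of the logged action
    cql_penalty = (torch.logsumexp(q_all, dim=1) - q_taken).mean()
    return td_loss + cql_alpha * cql_penalty
```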
Multi-robot navigation is a challenging task in which multiple robots must be coordinated simultaneously within dynamic environments. We apply deep reinforcement learning (DRL) to learn a decentralized, end-to-end policy that maps raw sensor data to the agents' command velocities. To enable the policy to generalize, training is carried out in different environments and scenarios. The learned policy is tested and evaluated in common multi-robot scenarios such as swapping places, intersections, and bottleneck situations. This policy allows agents to recover from dead ends and to navigate through complex environments.
In recent years, reinforcement learning research has developed a series of policy gradient methods that mostly model stochastic policies with a Gaussian distribution. However, the Gaussian distribution has infinite support, whereas real-world applications usually have bounded action spaces. This mismatch causes an estimation bias that can be eliminated by using the Beta distribution instead, since it has finite support. In this work, we investigate how to train a Beta policy and evaluate it on two continuous control tasks from OpenAI Gym. For both tasks, the Beta policy outperforms the Gaussian policy in terms of the agent's final expected reward, and also shows greater stability and faster convergence of the training process. For the CARLA environment with high-dimensional image input, the agent's success rate improved by 63% over the Gaussian policy.
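A minimal sketch of such a bounded-support policy head is shown below: actions are drawn from a Beta distribution on [0, 1] and rescaled to the environment's action range, so no probability mass falls outside the valid interval. The layer sizes and the softplus+1 parameterization (which keeps the density unimodal) are illustrative choices, not necessarily those used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaPolicy(nn.Module):
    """Policy head with bounded support via a Beta distribution."""
    def __init__(self, state_dim, action_dim, low, high):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh())
        self.alpha_head = nn.Linear(64, action_dim)
        self.beta_head = nn.Linear(64, action_dim)
        self.low, self.high = low, high

    def forward(self, state):
        h = self.body(state)
        # softplus + 1 keeps both shape parameters > 1 (unimodal density)
        alpha = F.softplus(self.alpha_head(h)) + 1.0
        beta = F.softplus(self.beta_head(h)) + 1.0
        return torch.distributions.Beta(alpha, beta)

    def sample_action(self, state):
        u = self.forward(state).rsample()                  # sample in (0, 1)
        return self.low + (self.high - self.low) * u       # rescale to the action bounds
```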
In order to avoid conventional control methods, which create obstacles due to system complexity and an intense demand for data density, developing modern and more efficient control methods is required. Here, off-policy, model-free reinforcement learning algorithms help avoid working with complex models. In terms of speed and accuracy, they have become prominent methods because the algorithms use past experience to learn optimal policies. In this study, three reinforcement learning algorithms, DDPG, TD3, and SAC, have been used to train a Fetch robotic manipulator on four different tasks in the MuJoCo simulation environment. All of these algorithms are off-policy and able to achieve their desired target by optimizing both policy and value functions. In the current study, the efficiency and speed of these three algorithms are analyzed in a controlled environment.
We introduce a new approach to dynamic obstacle avoidance based on deep reinforcement learning by defining a traffic-type-independent environment with variable complexity. Filling a gap in the current literature, we thoroughly investigate the effect of missing velocity information on the agent's performance in the obstacle avoidance task. This is a crucial issue in practice, since several sensors yield only positional information about objects or vehicles. We evaluate frequently applied approaches to partial observability, namely recurrence in the deep neural network and simple frame stacking. For our analysis, we rely on state-of-the-art model-free deep RL algorithms. We find that the lack of velocity information noticeably affects the agent's performance. Neither approach, recurrence or frame stacking, can consistently replace the missing velocity information in the observation space. However, in simplified scenarios they can significantly boost performance and stabilize the overall training procedure.
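Frame stacking, one of the two approaches evaluated above, simply concatenates the last few position-only observations so the agent can infer velocities from differences. Below is a hedged sketch of such a wrapper, written against a minimal old-style gym environment interface; the stack depth is an illustrative choice.

```python
import numpy as np
from collections import deque

class FrameStack:
    """Observation wrapper that concatenates the last k position-only
    observations so velocities can be inferred from their differences."""
    def __init__(self, env, k=3):
        self.env, self.k = env, k
        self.frames = deque(maxlen=k)

    def reset(self):
        obs = self.env.reset()
        for _ in range(self.k):
            self.frames.append(obs)
        return np.concatenate(self.frames)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.frames.append(obs)
        return np.concatenate(self.frames), reward, done, info
```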
Unmanned aerial vehicles (UAVs) have been widely used in military warfare. In this paper, we formulate the autonomous motion control (AMC) problem as a Markov decision process (MDP) and propose an advanced deep reinforcement learning (DRL) method that allows UAVs to execute complex tasks in large-scale, dynamic, three-dimensional (3D) environments. To overcome the limitations of the prioritized experience replay (PER) algorithm and improve performance, the proposed asynchronous curriculum experience replay (ACER) uses multi-threading to update priorities asynchronously, assigns true priorities, and applies a temporary experience pool to make higher-quality experiences available for learning. A first-in-useless-out (FIUO) experience pool is also introduced to ensure a higher usage value of the stored experiences. In addition, combined with curriculum learning (CL), a more reasonable training paradigm of sampling experiences from easy to difficult is designed for training the UAVs. By training in a complex, unknown environment constructed from the parameters of a real UAV, the proposed ACER improves the convergence speed by 24.66% compared with the state-of-the-art twin delayed deep deterministic policy gradient (TD3) algorithm. Test experiments in environments of different complexities demonstrate the robustness and generalization ability of the ACER agent.
Reinforcement learning and, more recently, deep reinforcement learning are popular methods for solving sequential decision-making problems modeled as Markov decision processes. RL modeling of a problem and the selection of algorithms and hyperparameters require careful consideration, as different configurations can yield vastly different performance. These considerations are mainly the task of RL experts; however, RL is progressively becoming popular in other fields where the researchers and system designers are not RL experts. Besides, many modeling decisions, such as defining the state and action spaces, the batch size and the frequency of batch updates, and the number of timesteps, are typically made manually. For these reasons, automating the different components of the RL framework is of great importance and has attracted much attention in recent years. Automated RL provides a framework in which the different components of RL, including MDP modeling, algorithm selection, and hyperparameter optimization, are modeled and defined automatically. In this article, we explore the literature and present recent work that can be applied in automated RL. Moreover, we discuss the challenges, open questions, and research directions in AutoRL.
This paper leverages recent developments in reinforcement learning and deep learning to solve the supply chain inventory management (SCIM) problem, a complex sequential decision-making problem that consists of determining the optimal quantity of products to produce and ship to different warehouses over a given time horizon. A mathematical formulation of the stochastic two-echelon supply chain environment is given, which can manage an arbitrary number of warehouses and product types. Additionally, an open-source library that interfaces with deep reinforcement learning (DRL) algorithms is developed and made publicly available for solving the SCIM problem. The performance achieved by state-of-the-art DRL algorithms is compared through a rich set of numerical experiments on synthetically generated data. The experimental plan is designed and executed across different supply chain structures, topologies, demands, capacities, and costs. The results show that the PPO algorithm adapts very well to the different characteristics of the environment. The VPG algorithm almost always converges to a local maximum, even though it typically achieves an acceptable level of performance. Finally, A3C is the fastest algorithm, but, just like VPG, it never achieves the best performance when compared with PPO. In conclusion, the numerical experiments show that DRL performs consistently better than standard reorder policies, such as the static (s, Q)-policy. It can thus be considered a practical and effective option for solving real-world instances of the stochastic two-echelon SCIM problem.
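For reference, the static (s, Q) reorder policy used as the baseline above is simple to state: whenever a warehouse's inventory position falls to or below the reorder point s, order a fixed quantity Q, otherwise order nothing. The sketch below illustrates this; the numeric values are illustrative only.

```python
def s_q_policy(inventory_position, s, Q):
    """Static (s, Q) reorder policy: order Q units whenever the inventory
    position drops to or below the reorder point s."""
    return Q if inventory_position <= s else 0

# illustrative use for three warehouses of a two-echelon chain
orders = [s_q_policy(ip, s=20, Q=50) for ip in [35, 18, 7]]
# -> [0, 50, 50]
```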
This paper addresses the problem of optimizing the charging/discharging schedules of electric vehicles (EVs) when participating in demand response (DR). Because uncertainties exist in an EV's remaining energy, arrival and departure times, and future electricity prices, it is difficult to make charging decisions that minimize the charging cost while keeping the EV battery's state-of-charge (SOC) within certain ranges. To address this dilemma, this paper formulates the EV charging scheduling problem as a constrained Markov decision process (CMDP). By synergistically combining the augmented Lagrangian method and the soft actor-critic algorithm, a novel safe off-policy reinforcement learning (RL) approach is proposed to solve the CMDP. The actor network is updated in a policy-gradient manner with a Lagrangian value function. A double-critics network is adopted to estimate the action-value function synchronously so as to avoid overestimation bias. The proposed algorithm does not require a strong convexity guarantee for the examined problem and is sample efficient. Comprehensive numerical experiments with real-world electricity prices demonstrate that the proposed algorithm can achieve high solution optimality and constraint compliance.
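The core mechanism, coupling a cost critic to the actor through a Lagrange multiplier, can be sketched as below. This is a plain Lagrangian-relaxation sketch under assumed inputs (reward critic, cost critic, entropy temperature); the paper's augmented Lagrangian additionally includes a penalty term, and its exact update rules may differ.

```python
def lagrangian_actor_loss(q_reward, q_cost, log_prob, lam, alpha):
    """SAC-style actor objective with a cost critic: maximise the reward Q
    minus the entropy term, minus the multiplier-weighted cost Q.
    Inputs are torch tensors produced by the respective critic networks."""
    return (alpha * log_prob - q_reward + lam * q_cost).mean()

def update_multiplier(lam, avg_episode_cost, cost_limit, lr=1e-3):
    """Dual ascent on the Lagrange multiplier: increase it while the SOC /
    charging constraint is violated, decrease it (down to zero) otherwise."""
    lam = lam + lr * (avg_episode_cost - cost_limit)
    return max(lam, 0.0)
```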
In the field of autonomous robots, reinforcement learning (RL) is an increasingly used method to solve the task of dynamic obstacle avoidance for mobile robots, autonomous ships, and drones. A common practice to train those agents is to use a training environment with random initialization of agent and obstacles. Such approaches might suffer from a low coverage of high-risk scenarios in training, leading to impaired final performance of obstacle avoidance. This paper proposes a general training environment where we gain control over the difficulty of the obstacle avoidance task by using short training episodes and assessing the difficulty by two metrics: The number of obstacles and a collision risk metric. We found that shifting the training towards a greater task difficulty can massively increase the final performance. A baseline agent, using a traditional training environment based on random initialization of agent and obstacles and longer training episodes, leads to a significantly weaker performance. To prove the generalizability of the proposed approach, we designed two realistic use cases: A mobile robot and a maritime ship under the threat of approaching obstacles. In both applications, the previous results can be confirmed, which emphasizes the general usability of the proposed approach, detached from a specific application context and independent of the agent's dynamics. We further added Gaussian noise to the sensor signals, resulting in only a marginal degradation of performance and thus indicating solid robustness of the trained agent.
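The difficulty-controlled training described above can be pictured as rejection sampling over episode initializations: spawn a chosen number of obstacles and keep only scenarios whose collision-risk score exceeds a threshold. In the sketch below, the risk proxy (share of obstacles whose projected closest point of approach falls below a safety distance) is an illustrative stand-in for the metric used in the paper, and all ranges are assumptions.

```python
import math
import random

def collision_risk(agent, obstacles, horizon=20.0, safe_dist=2.0):
    """Illustrative risk proxy: fraction of obstacles whose closest point of
    approach to the agent within the horizon is below a safety distance.
    agent = (x, y, vx, vy); each obstacle = (x, y, vx, vy)."""
    risky = 0
    for ox, oy, ovx, ovy in obstacles:
        rx, ry = ox - agent[0], oy - agent[1]
        rvx, rvy = ovx - agent[2], ovy - agent[3]
        speed2 = rvx ** 2 + rvy ** 2
        t = 0.0 if speed2 == 0 else min(max(-(rx * rvx + ry * rvy) / speed2, 0.0), horizon)
        risky += math.hypot(rx + rvx * t, ry + rvy * t) < safe_dist
    return risky / max(len(obstacles), 1)

def sample_scenario(agent, n_obstacles, min_risk, max_tries=100):
    """Rejection-sample obstacle sets until the episode is hard enough."""
    for _ in range(max_tries):
        obstacles = [(random.uniform(-20, 20), random.uniform(-20, 20),
                      random.uniform(-1, 1), random.uniform(-1, 1))
                     for _ in range(n_obstacles)]
        if collision_risk(agent, obstacles) >= min_risk:
            return obstacles
    return obstacles    # fall back to the last sample if the threshold is never met
```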
As the autonomous driving industry develops, so does the potential for interaction among groups of autonomous vehicles. Combined with advances in artificial intelligence and simulation, such groups can be simulated, and safe models that control the cars within them can be learned. This study applies reinforcement learning to the problem of a multi-agent car park, in which cars aim to park efficiently while remaining safe and rational. Utilizing robust tools and machine learning frameworks, we design and implement a flexible car-park environment in the form of a Markov decision process with independent learners, exploiting multi-agent communication. We implement a suite of tools for running experiments at scale, obtaining models that park up to 7 cars with a success rate of over 98.1%, surpassing existing single-agent models. We also obtain several results relating to the competitive and collaborative behaviors exhibited by the cars in our environment under varying densities and levels of communication. Notably, we discover a form of collaboration without competition, as well as a form of "leaky" cooperation in which agents collaborate in the absence of sufficient state. Such work has many potential applications in the autonomous driving and fleet management industries, and provides several useful techniques and benchmarks for applying reinforcement learning to multi-agent car parking.
Autonomous vehicles are suited for continuous area patrolling problems. However, finding an optimal patrolling strategy can be challenging for many reasons. Firstly, patrolling environments are often complex and can include unknown and evolving environmental factors. Secondly, autonomous vehicles can have failures or hardware constraints such as limited battery lives. Importantly, patrolling large areas often requires multiple agents that need to collectively coordinate their actions. In this work, we consider these limitations and propose an approach based on a distributed, model-free deep reinforcement learning based multi-agent patrolling strategy. In this approach, agents make decisions locally based on their own environmental observations and on shared information. In addition, agents are trained to automatically recharge themselves when required to support continuous collective patrolling. A homogeneous multi-agent architecture is proposed, where all patrolling agents have an identical policy. This architecture provides a robust patrolling system that can tolerate agent failures and allow supplementary agents to be added to replace failed agents or to increase the overall patrol performance. This performance is validated through experiments from multiple perspectives, including the overall patrol performance, the efficiency of the battery recharging strategy, the overall robustness of the system, and the agents' ability to adapt to environment dynamics.