Development of navigation algorithms is essential for the successful deployment of robots in rapidly changing hazardous environments for which prior knowledge of configuration is often limited or unavailable. Use of traditional path-planning algorithms, which are based on localization and require detailed obstacle maps with goal locations, is not possible. In this regard, vision-based algorithms hold great promise, as visual information can be readily acquired by a robot's onboard sensors and provides a much richer source of information from which deep neural networks can extract complex patterns. Deep reinforcement learning has been used to achieve vision-based robot navigation. However, the efficacy of these algorithms in environments with dynamic obstacles and high variation in the configuration space has not been thoroughly investigated. In this paper, we employ a deep Dyna-Q learning algorithm for room evacuation and obstacle avoidance in partially observable environments based on low-resolution raw image data from an onboard camera. We explore the performance of a robotic agent in environments containing no obstacles, convex obstacles, and concave obstacles, both static and dynamic. Obstacles and the exit are initialized in random positions at the start of each episode of reinforcement learning. Overall, we show that our algorithm and training approach can generalize learning for collision-free evacuation of environments with complex obstacle configurations. It is evident that the agent can navigate to a goal location while avoiding multiple static and dynamic obstacles, and can escape from a concave obstacle while searching for and navigating to the exit.
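To make the learning machinery behind the abstract above concrete, here is a minimal sketch of the classic tabular Dyna-Q loop (one real-experience Q-update followed by several planning updates replayed from a learned model). The paper's deep variant replaces the Q-table with a neural network over low-resolution camera images; the environment interface used here (`reset`, `step`, `sample_action`, `n_actions`) is a hypothetical placeholder, not the authors' implementation.

```python
import random
from collections import defaultdict

def dyna_q(env, episodes=500, n_planning=10, alpha=0.1, gamma=0.95, eps=0.1):
    """Minimal tabular Dyna-Q: direct RL update plus model-based planning updates."""
    Q = defaultdict(float)   # Q[(state, action)] -> value
    model = {}               # model[(state, action)] -> (reward, next_state, done)

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = env.sample_action()
            else:
                a = max(range(env.n_actions), key=lambda x: Q[(s, x)])

            s2, r, done = env.step(a)

            # (1) direct reinforcement-learning update from real experience
            best_next = max(Q[(s2, x)] for x in range(env.n_actions))
            Q[(s, a)] += alpha * (r + gamma * (0 if done else best_next) - Q[(s, a)])

            # (2) record the observed transition in the learned model
            model[(s, a)] = (r, s2, done)

            # (3) planning: replay n simulated transitions drawn from the model
            for _ in range(n_planning):
                (ps, pa), (pr, ps2, pdone) = random.choice(list(model.items()))
                pbest = max(Q[(ps2, x)] for x in range(env.n_actions))
                Q[(ps, pa)] += alpha * (pr + gamma * (0 if pdone else pbest) - Q[(ps, pa)])

            s = s2
    return Q
```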
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher-level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
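As a concrete anchor for the value-based stream the survey discusses, the following is a minimal, illustrative sketch of the DQN regression target, using the two stabilizing ingredients commonly attributed to DQN (experience replay and a periodically copied target network). The network and batch names are assumptions, not taken from the survey.

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """One DQN regression step: fit Q(s,a) toward r + gamma * max_a' Q_target(s', a')."""
    # batch tensors are assumed to come from a replay buffer; dones is a float mask
    states, actions, rewards, next_states, dones = batch

    # Q-values of the actions actually taken
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # bootstrapped target from the (periodically copied) target network
    with torch.no_grad():
        max_next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * max_next_q

    return F.mse_loss(q_sa, target)
```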
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real-world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, and inverse reinforcement learning, which are related to but are not classical RL algorithms. The role of simulators in training agents, as well as methods for validating, testing, and robustifying existing solutions in RL, is also discussed.
The demand for the use of unmanned aerial vehicles (UAVs, i.e., drones) keeps growing across diverse applications such as package delivery, traffic monitoring, search and rescue operations, and military combat engagements. In all of these applications, the UAV is used to navigate the environment autonomously, without human interaction, to perform specific tasks and avoid obstacles. Autonomous UAV navigation is commonly accomplished using reinforcement learning (RL), in which an agent acts as an expert in the domain, navigating the environment while avoiding obstacles. Understanding the navigation environment and the limitations of the algorithms plays a crucial role in choosing the appropriate RL algorithm to solve the navigation problem effectively. Consequently, this study first identifies the UAV navigation tasks and discusses navigation frameworks and simulation software. Next, RL algorithms are classified and discussed based on the environment, algorithm characteristics, capabilities, and applications across different UAV navigation problems, which will help practitioners and researchers select the appropriate RL algorithms for their UAV navigation use cases. Moreover, the identified gaps and opportunities will drive UAV navigation research.
Obstacle avoidance for small unmanned aircraft is vital for the safety of future urban air mobility (UAM) and unmanned aircraft system (UAS) traffic management (UTM). There are many techniques for real-time robust drone guidance, but many of them solve the problem in discretized airspace and control, which would require an additional path-smoothing step to provide flexible commands for the UAS. To deliver operationally safe and efficient computational guidance for unmanned aircraft, we explore the use of a deep reinforcement learning algorithm based on Proximal Policy Optimization (PPO) to guide an autonomous UAS to its destination while avoiding obstacles through continuous control. The proposed scenario state representation and reward function can map the continuous state space to continuous controls for heading angle and speed. To verify the performance of the proposed learning framework, we conducted numerical experiments with static and moving obstacles. Uncertainties associated with the environments and the safety operation bounds are investigated in detail. The results show that the proposed model can provide accurate and robust guidance and resolve conflicts with a success rate of over 99%.
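A hypothetical sketch of what a PPO actor head producing the continuous commands described above could look like: a small network outputting a 2-D Gaussian over normalized heading-angle and speed commands. The architecture, sizes, and rescaling are illustrative assumptions rather than the paper's actual network.

```python
import torch
import torch.nn as nn

class GuidancePolicy(nn.Module):
    """Hypothetical PPO actor: maps the scenario state to a 2-D Gaussian over
    normalized continuous commands (heading-angle change, speed)."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mean = nn.Linear(hidden, 2)             # [d_heading, speed]
        self.log_std = nn.Parameter(torch.zeros(2))  # state-independent std

    def forward(self, state):
        mu = torch.tanh(self.mean(self.body(state)))  # keep commands in [-1, 1]
        return torch.distributions.Normal(mu, self.log_std.exp())

# Sampled actions would be rescaled to physical limits before being sent to the aircraft,
# e.g. heading_rate = a[0] * MAX_TURN_RATE, speed = (a[1] + 1) / 2 * MAX_SPEED (illustrative).
```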
This paper presents a navigation approach based on reinforcement learning in which we define the occupancy observations as heuristic evaluations of motion primitives, instead of using raw sensor data. Our method enables fast mapping of occupancy data, generated by multi-sensor fusion, into trajectory values in the 3D workspace. The computationally efficient trajectory evaluation allows dense sampling of the action space. We analyze the effect of occupancy observations held in different data structures on the training process and the navigation performance. We train and test our approach on two different robots within physics-based simulation environments that include static and dynamic obstacles. We benchmark our occupancy representations against other conventional data structures used by state-of-the-art methods. The trained navigation policies are successfully validated on a physical robot in dynamic environments. The results show that, compared with other occupancy representations, our method not only reduces the required training time but also improves navigation performance. An open-source implementation of our work and all related information are available at \url{https://github.com/river-lab/tentabot}.
In the field of autonomous robots, reinforcement learning (RL) is an increasingly used method to solve the task of dynamic obstacle avoidance for mobile robots, autonomous ships, and drones. A common practice to train those agents is to use a training environment with random initialization of agent and obstacles. Such approaches might suffer from a low coverage of high-risk scenarios in training, leading to impaired final performance of obstacle avoidance. This paper proposes a general training environment where we gain control over the difficulty of the obstacle avoidance task by using short training episodes and assessing the difficulty by two metrics: The number of obstacles and a collision risk metric. We found that shifting the training towards a greater task difficulty can massively increase the final performance. A baseline agent, using a traditional training environment based on random initialization of agent and obstacles and longer training episodes, leads to a significantly weaker performance. To prove the generalizability of the proposed approach, we designed two realistic use cases: A mobile robot and a maritime ship under the threat of approaching obstacles. In both applications, the previous results can be confirmed, which emphasizes the general usability of the proposed approach, detached from a specific application context and independent of the agent's dynamics. We further added Gaussian noise to the sensor signals, resulting in only a marginal degradation of performance and thus indicating solid robustness of the trained agent.
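The key idea above is to control task difficulty via the obstacle count and a collision-risk metric over short episodes. A loose, hypothetical sketch of such an episode sampler is shown below; the parameter names, ranges, and sampling rule are invented for illustration and are not the authors' procedure.

```python
import random

def sample_episode_config(stage, max_obstacles=10, min_risk=0.0, max_risk=1.0):
    """Hypothetical sketch: draw a short-episode configuration whose difficulty
    (obstacle count and a normalized collision-risk score) grows with `stage` in [0, 1]."""
    n_obstacles = random.randint(int(stage * max_obstacles) // 2,
                                 max(1, int(stage * max_obstacles)))
    # target collision risk for the generated encounter, pushed upward as training progresses
    target_risk = min_risk + stage * (max_risk - min_risk) * random.random()
    return {"n_obstacles": n_obstacles,
            "target_collision_risk": target_risk,
            "episode_length": 50}   # short episodes keep each encounter focused

# early training: sample_episode_config(0.2) -> few obstacles, low-risk geometries
# late training:  sample_episode_config(0.9) -> many obstacles, near-collision encounters
```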
By directly mapping perceptual inputs into robot control commands, deep reinforcement learning (DRL) algorithms have proven effective for robot navigation, especially in unknown environments. However, most existing methods ignore the local minimum problem in navigation and thus cannot handle complex unknown environments. In this paper, we propose the first DRL-based navigation method modeled by an SMDP with a continuous action space, Adaptive Forward Simulation Time (AFST), to overcome this problem. Specifically, we improve the distributed proximal policy optimization (DPPO) algorithm for the specified SMDP problem by modifying its GAE to better estimate the policy gradient in SMDPs. We evaluate our approach both in a simulator and in the real world.
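For reference, a minimal sketch of standard generalized advantage estimation (GAE), which the method above modifies for SMDPs with variable-duration actions; the SMDP-specific modification itself is not reproduced here, and the function signature is an assumption.

```python
def gae(rewards, values, dones, gamma=0.99, lam=0.95):
    """Standard GAE-lambda over one rollout.
    `values` has length len(rewards) + 1; values[-1] is the bootstrap value."""
    advantages = [0.0] * len(rewards)
    next_adv = 0.0
    for t in reversed(range(len(rewards))):
        nonterminal = 1.0 - dones[t]
        delta = rewards[t] + gamma * values[t + 1] * nonterminal - values[t]
        next_adv = delta + gamma * lam * nonterminal * next_adv
        advantages[t] = next_adv
    return advantages
```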
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies "end-to-end": directly from raw pixel inputs.
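An illustrative sketch of the core DDPG update described above: a critic regression toward a bootstrapped target, a deterministic-policy-gradient actor step, and soft (Polyak) target-network updates. Network signatures and hyperparameter values are assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99, tau=0.005):
    """One DDPG step (illustrative); critic takes (state, action), actor takes state."""
    s, a, r, s2, done = batch

    # critic: minimize (Q(s,a) - y)^2 with y computed from the target networks
    with torch.no_grad():
        y = r + gamma * (1.0 - done) * target_critic(s2, target_actor(s2))
    critic_loss = F.mse_loss(critic(s, a), y)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # actor: deterministic policy gradient, i.e. ascend Q(s, actor(s))
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # soft (Polyak) update of the target networks
    for target, online in ((target_actor, actor), (target_critic, critic)):
        for tp, p in zip(target.parameters(), online.parameters()):
            tp.data.mul_(1 - tau).add_(tau * p.data)
```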
In narrow spaces, motion planning based on the traditional hierarchical autonomy system can cause collisions due to mapping, localization, and control noise; moreover, it is disabled when no map is available. To address these problems, we leverage deep reinforcement learning, which has been shown to be effective for self-decision-making, to self-explore in narrow spaces without a map while avoiding collisions. Specifically, based on our Ackermann-steering, rectangular-shaped ZebraT robot and its Gazebo simulator, we propose a rectangular safety region to represent the state and to detect collisions for the rectangular-shaped robot, together with a reward function that needs neither elaborate crafting nor destination information. We then benchmark five reinforcement learning algorithms, namely DDPG, DQN, SAC, PPO, and PPO-discrete, on the simulated narrow tracks. After training, the well-performing DDPG and DQN models can be transferred to three brand-new simulated tracks and then to three real-world tracks.
Robot assistants are emerging as high-tech solutions to support people in everyday life. Following and assisting the user in the domestic environment requires flexible mobility to safely move in cluttered spaces. We introduce a new approach to person following for assistance and monitoring. Our methodology exploits an omnidirectional robotic platform to decouple the computation of linear and angular velocities and navigate within the domestic environment without losing track of the assisted person. While linear velocities are managed by a conventional Dynamic Window Approach (DWA) local planner, we trained a Deep Reinforcement Learning (DRL) agent to predict optimized angular velocity commands and maintain the orientation of the robot towards the user. We evaluate our navigation system on a real omnidirectional platform in various indoor scenarios, demonstrating the competitive advantage of our solution compared to a standard differential-steering following approach.
Underwater navigation presents several challenges, including unstructured unknown environments, lack of reliable localization systems (e.g., GPS), and poor visibility. Furthermore, good-quality obstacle detection sensors for underwater robots are scant and costly; and many sensors like RGB-D cameras and LiDAR only work in-air. To enable reliable mapless underwater navigation despite these challenges, we propose a low-cost end-to-end navigation system, based on a monocular camera and a fixed single-beam echo-sounder, that efficiently navigates an underwater robot to waypoints while avoiding nearby obstacles. Our proposed method is based on Proximal Policy Optimization (PPO), which takes as input current relative goal information, estimated depth images, echo-sounder readings, and previous executed actions, and outputs 3D robot actions in a normalized scale. End-to-end training was done in simulation, where we adopted domain randomization (varying underwater conditions and visibility) to learn a robust policy against noise and changes in visibility conditions. The experiments in simulation and real-world demonstrated that our proposed method is successful and resilient in navigating a low-cost underwater robot in unknown underwater environments. The implementation is made publicly available at https://github.com/dartmouthrobotics/deeprl-uw-robot-navigation.
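A hypothetical sketch of the domain-randomization step mentioned above, drawing new underwater conditions at every simulated episode reset; all parameter names and ranges are illustrative assumptions, not the authors' configuration.

```python
import random

def randomize_underwater_conditions():
    """Hypothetical domain-randomization draw applied at each simulated episode reset."""
    return {
        "visibility_m":     random.uniform(1.0, 10.0),   # how far the camera can see
        "water_turbidity":  random.uniform(0.0, 1.0),    # haze / scattering strength
        "ambient_light":    random.uniform(0.3, 1.0),    # darker vs. brighter scenes
        "current_velocity": [random.uniform(-0.3, 0.3) for _ in range(3)],  # m/s drift
        "depth_noise_std":  random.uniform(0.0, 0.2),    # noise on estimated depth images
    }

# env.reset(**randomize_underwater_conditions())  # hypothetical reset call before each episode
```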
Despite many recent successes of deep reinforcement learning (RL), its methods are still data-inefficient, which makes many problems prohibitively expensive to solve in terms of data. We aim to remedy this by exploiting the rich supervisory signal in unlabeled data to learn state representations. This paper introduces three different representation-learning algorithms that have access to different subsets of the data sources used by traditional RL algorithms: (i) GRICA is inspired by independent component analysis (ICA) and trains a deep neural network to output statistically independent features of the input. GRICA does this by minimizing the mutual information between each feature and the other features, and it only requires an unsorted collection of environment states. (ii) Latent Representation Prediction (LARP) requires more context: in addition to a state as input, it also needs the previous state and the action connecting them. This method learns a state representation by predicting the next state of the environment given the current state and action, and the predictor is used together with a graph search algorithm. (iii) The third method learns a state representation by training a deep neural network to learn a smoothed version of the reward function. The representation is used to preprocess inputs to deep RL, while the reward predictor is used for reward shaping. This method requires only state-reward pairs from the environment to learn the representation. We find that each approach has its strengths and weaknesses, and conclude from our experiments that including unsupervised representation learning in RL problem-solving pipelines can speed up learning.
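As an illustration of the LARP idea described in (ii), a minimal sketch of a latent next-state predictor is given below; the architecture and loss are assumptions meant only to convey the mechanism, not the authors' model.

```python
import torch
import torch.nn as nn

class LatentPredictor(nn.Module):
    """Illustrative LARP-style module: encode a state, then predict the next state's
    latent code from (current latent, action). Architecture details are assumptions."""
    def __init__(self, state_dim, action_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.predictor = nn.Sequential(nn.Linear(latent_dim + action_dim, 128), nn.ReLU(),
                                       nn.Linear(128, latent_dim))

    def loss(self, state, action, next_state):
        z, z_next = self.encoder(state), self.encoder(next_state)
        z_pred = self.predictor(torch.cat([z, action], dim=-1))
        # the representation is shaped by how well the next latent can be predicted
        return nn.functional.mse_loss(z_pred, z_next.detach())
```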
Based on deep reinforcement learning, a novel dynamic obstacle avoidance approach is introduced by defining a traffic-type-independent environment with variable complexity. Filling a gap in the current literature, we thoroughly investigate the effect of missing velocity information on an agent's performance in the avoidance task. This is a crucial issue in practice, since several sensors yield only the positional information of objects or vehicles. We evaluate approaches frequently applied under partial observability, namely recurrence in the deep neural network and simple frame stacking. For our analysis, we rely on state-of-the-art model-free deep RL algorithms. The lack of velocity information is found to significantly affect the agent's performance. Neither approach, recurrence nor frame stacking, can consistently replace the missing velocity information in the observation space. However, in simplified scenarios, they can significantly boost performance and stabilize the overall training procedure.
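Frame stacking, one of the two partial-observability remedies evaluated above, can be sketched as a simple wrapper that concatenates the last k position-only observations so the agent can infer velocities implicitly; the gym-style `reset`/`step` interface is an assumption for illustration.

```python
import numpy as np
from collections import deque

class FrameStack:
    """Minimal frame-stacking wrapper (illustrative)."""
    def __init__(self, env, k=4):
        self.env, self.k = env, k
        self.frames = deque(maxlen=k)

    def reset(self):
        obs = self.env.reset()
        for _ in range(self.k):
            self.frames.append(obs)          # pad the stack with the first observation
        return np.concatenate(self.frames, axis=-1)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.frames.append(obs)
        return np.concatenate(self.frames, axis=-1), reward, done, info
```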
Reinforcement learning and, more recently, deep reinforcement learning are popular approaches for solving sequential decision-making problems modeled as Markov decision processes. RL modeling of a problem and selecting algorithms and hyperparameters require careful consideration, as different configurations may lead to entirely different performance. These considerations are mainly the task of RL experts; however, RL is progressively becoming popular in other fields where the researchers and system designers are not RL experts. Besides, many modeling decisions, such as defining the state and action spaces, the batch size and the frequency of batch updates, and the number of timesteps, are typically made manually. For these reasons, automating the different components of the RL framework is of great importance, and it has attracted much attention in recent years. Automated RL provides a framework in which the different components of RL, including MDP modeling, algorithm selection, and hyperparameter optimization, are modeled and defined automatically. In this paper, we explore the literature and present recent work that can be used in automated RL. Moreover, we discuss the challenges, open questions, and research directions in AutoRL.
State-of-the-art multi-agent reinforcement learning (MARL) methods have provided promising solutions to a variety of complex problems. Yet, these methods all assume that agents perform synchronized primitive-action executions, so they are not truly scalable to long-horizon real-world multi-agent/robot tasks that inherently require agents/robots to asynchronously reason about high-level action selection at varying time durations. The macro-action decentralized partially observable Markov decision process (MacDec-POMDP) is a general formalization of asynchronous decision-making under uncertainty in fully cooperative multi-agent tasks. In this thesis, we first propose a set of value-based RL approaches for MacDec-POMDPs, in which agents are allowed to perform asynchronous learning and decision-making with macro-action-value functions under three paradigms: decentralized learning and control, centralized learning and control, and centralized training for decentralized execution (CTDE). Building on the above work, we formulate a set of macro-action-based policy gradient algorithms under the three training paradigms, in which agents are allowed to directly optimize their parameterized policies in an asynchronous manner. We evaluate our methods both in simulation and on real robots. Empirical results demonstrate the superiority of our approaches on large multi-agent problems and validate the effectiveness of our algorithms in learning high-quality and asynchronous solutions with macro-actions.
Recent work on audio-visual navigation targets a single static sound in noise-free audio environments and struggles to generalize to unheard sounds. We introduce a novel dynamic audio-visual navigation benchmark in which an embodied AI agent must catch a moving sound source in an unmapped environment in the presence of distractors and noisy sounds. We propose an end-to-end reinforcement learning approach that relies on a multi-modal architecture which fuses the spatial audio-visual information from binaural audio signals and spatial occupancy maps to encode the features needed for our new robust navigation policy in this complex task setting. We show that our approach outperforms the current state of the art, with better generalization to unheard sounds and better robustness to noisy scenarios, on the two 3D-scanned real-world datasets Replica and Matterport3D, for both static and dynamic audio-visual navigation benchmarks. Our benchmark will be made available at http://dav-nav.cs.uni-freiburg.de.
Reinforcement learning (RL) is an agent-based approach for teaching robots to navigate within the physical world. Gathering data for RL is known to be a laborious task, and real-world experiments can be risky. Simulators facilitate the collection of training data in a faster and more cost-effective way. However, RL frequently requires a significant number of simulation steps before an agent becomes skillful at even simple tasks. This is a prevalent issue in the field of RL-based visual quadrotor navigation, where the state dimensions are typically very large and the dynamic models are complex. Furthermore, rendering images and obtaining the physical properties of the agent can be computationally expensive. To address this, we present a simulation framework based on AirSim that provides efficient parallel training. Building on this framework, Ape-X is modified to incorporate decentralized training of AirSim environments in order to make use of numerous networked computers. Through experiments, we were able to reduce the training time from 3.9 hours to 11 minutes using the above framework, with a total of 74 agents and two networked computers. Further details and videos about our project, PRL4AirSim, can be found at https://sites.google.com/view/prl4airsim/home.
Multi-agent reinforcement learning (MARL) has received extensive attention from academia and industry over the past few decades. One of the fundamental problems in MARL is how to evaluate different approaches comprehensively. Most existing MARL methods are evaluated in either video games or simplistic simulated scenarios; how these methods perform in real-world scenarios, especially in multi-robot systems, remains unknown. This paper introduces a scalable simulation platform for multi-robot reinforcement learning (MRRL), called SMART, to meet this need. Specifically, SMART consists of two components: 1) a simulation environment that provides a variety of complex interaction scenarios for training, and 2) a real-world multi-robot system for realistic performance evaluation. In addition, SMART offers agent-environment APIs that are plug-and-play for algorithm implementation. To illustrate the practicality of our platform, we conduct a case study on the cooperative driving lane-changing scenario. Building on the case study, we summarize several unique challenges of MRRL that are rarely considered. Finally, we open-source the simulation environments, the associated benchmark tasks, and the state-of-the-art baselines to encourage and advance MRRL research.
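A loose, hypothetical sketch of what a plug-and-play agent-environment API in the spirit of the SMART description might look like; the class and method names are invented for illustration and are not the platform's actual interface.

```python
from abc import ABC, abstractmethod

class MultiRobotEnv(ABC):
    """Hypothetical plug-and-play agent-environment interface (illustrative only)."""
    @abstractmethod
    def reset(self) -> dict:
        """Return the initial observation for every robot, keyed by robot id."""

    @abstractmethod
    def step(self, actions: dict) -> tuple:
        """Apply one action per robot; return (observations, rewards, dones, info)."""

class LaneChangeScenario(MultiRobotEnv):
    """Toy stand-in for a cooperative lane-changing scenario."""
    def reset(self):
        return {"robot_0": [0.0, 0.0], "robot_1": [1.0, 0.0]}

    def step(self, actions):
        obs = {rid: [0.0, 0.0] for rid in actions}
        rewards = {rid: 0.0 for rid in actions}
        dones = {rid: False for rid in actions}
        return obs, rewards, dones, {}
```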
As the autonomous driving industry develops, so does the potential for interaction among groups of autonomous cars. Combined with advances in artificial intelligence and simulation, such groups can be simulated, and models that safely control the cars within them can be learned. This study applies reinforcement learning to the problem of multi-agent car parking, where groups of cars aim to park efficiently while remaining safe and rational. Utilizing robust tools and machine learning frameworks, we design and implement a flexible car-parking environment in the form of a Markov decision process with independent learners, exploiting multi-agent communication. We implement a suite of tools for carrying out experiments at scale, obtaining models that park up to 7 cars with success rates above 98.1%, surpassing existing single-agent models. We also obtain several results relating to the competitive and collaborative behaviors exhibited by the cars in our environment, under varying densities and levels of communication. Notably, we discover a form of collaboration without competition, as well as a form of 'leaky' cooperation in which agents collaborate in the absence of sufficient state. Such work has many potential applications in the autonomous driving and fleet management industries, and provides several useful techniques and benchmarks for applying reinforcement learning to multi-agent car parking.