The cooperation of a human pilot with an autonomous agent during flight control realizes parallel autonomy. A parallel-autonomous system acts as a guardian that significantly enhances the robustness and safety of flight operations in challenging circumstances. Here, we propose an air-guardian concept that facilitates cooperation between an artificial pilot agent and a parallel end-to-end neural control system. Our vision-based air-guardian system combines a causal continuous-depth neural network model with a cooperation layer to enable parallel autonomy between a pilot agent and a control system based on perceived differences in their attention profiles. The attention profiles are obtained by computing the networks' saliency maps (feature importance) through the VisualBackProp algorithm. The guardian agent is trained via reinforcement learning in a simulated fixed-wing aircraft environment. When the attention profiles of the pilot and guardian agents align, the pilot makes control decisions. If the attention maps of the pilot and the guardian do not align, the air-guardian intervenes and takes over control of the aircraft. We show that our attention-based air-guardian system can balance the trade-off between its level of involvement in the flight and the pilot's expertise and attention. We demonstrate the effectiveness of our methods in simulated flight scenarios with a fixed-wing aircraft and on a real drone platform.
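As an illustration of the arbitration idea described above, the following minimal sketch compares two saliency maps and decides which agent keeps control. It assumes the pilot's and guardian's attention maps are already computed (e.g., via VisualBackProp); the function names and the similarity threshold are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

def saliency_overlap(pilot_map: np.ndarray, guardian_map: np.ndarray) -> float:
    """Cosine similarity between two saliency (attention) maps."""
    p, g = pilot_map.ravel(), guardian_map.ravel()
    return float(p @ g / (np.linalg.norm(p) * np.linalg.norm(g) + 1e-8))

def arbitrate(pilot_cmd, guardian_cmd, pilot_map, guardian_map, threshold=0.5):
    """Hand control to the guardian when the attention profiles disagree."""
    if saliency_overlap(pilot_map, guardian_map) >= threshold:
        return pilot_cmd      # attention profiles align: the pilot stays in control
    return guardian_cmd       # misalignment: the air-guardian intervenes
```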
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework, now capable of learning complex policies in high-dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks in which (D)RL methods have been employed, while addressing key computational challenges in the real-world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, and inverse reinforcement learning, which are related but are not classical RL algorithms. The role of simulators in training agents, as well as methods to validate, test, and robustify existing solutions in RL, is also discussed.
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years; however, the field has recently been gaining attention due to advances in computing and sensing as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations, without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction, such as humanoid robots, self-driving vehicles, human-computer interaction, and computer games, to name a few. However, specialized algorithms are needed to learn models effectively and robustly, as learning by imitation poses its own set of challenges. In this paper, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games, as these domains are the most popular in the literature and provide a wide array of problems and methodologies. We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications, and highlight current and future research directions.
Data-driven simulators promise high data efficiency for driving policy learning. When used for modeling interactions, however, this data efficiency becomes a bottleneck: small underlying datasets often lack the interesting and challenging edge cases needed for learning interactive driving. We address this challenge by proposing a simulation method that uses rendered ado (surrounding) vehicles to learn robust driving policies. Our approach can thus be used to learn policies involving multi-agent interactions, and allows training via state-of-the-art policy learning methods. We evaluate the method on learning standard interaction scenarios in driving. In extensive experiments, our work demonstrates that the resulting policies can be transferred directly to a full-scale autonomous vehicle without the use of any traditional sim-to-real transfer techniques such as domain randomization.
Compared with model-based control and optimization methods, reinforcement learning (RL) provides a data-driven, learning-based framework to formulate and solve sequential decision-making problems. The RL framework has become promising due to greatly improved data availability and computing power in the aviation industry. Many aviation-based applications can be formulated or treated as sequential decision-making problems. Some of them are offline planning problems, while others need to be solved online and are safety-critical. In this survey paper, we first describe standard RL formulations and solutions. Then we survey the landscape of existing RL-based applications in aviation. Finally, we summarize the paper, identify the technical gaps, and suggest future directions for RL research in aviation.
The last decade witnessed increasingly rapid progress in self-driving vehicle technology, mainly backed by advances in the area of deep learning and artificial intelligence. The objective of this paper is to survey the current state of the art in deep learning technologies used in autonomous driving. We start by presenting AI-based self-driving architectures, convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm. These methodologies form a basis for the surveyed driving scene perception, path planning, behavior arbitration, and motion control algorithms. We investigate both the modular perception-planning-action pipeline, where each module is built using deep learning methods, and End2End systems, which directly map sensory information to steering commands. Additionally, we tackle current challenges encountered in designing AI architectures for autonomous driving, such as their safety, training data sources, and computational hardware. The comparison presented in this survey helps to gain insight into the strengths and limitations of deep learning and AI approaches for autonomous driving and assists with design choices.
We address the problem of minimum-time flight through a sequence of waypoints in the presence of obstacles, while exploiting the full quadrotor dynamics. Earlier works relied on simplified dynamics or polynomial trajectory representations that did not exploit the quadrotor's full actuator potential and therefore led to suboptimal solutions. Recent works can plan minimum-time trajectories; however, the trajectories are executed by control methods that do not account for obstacles. Successful execution of such trajectories is therefore prone to errors due to model mismatch and in-flight disturbances. To this end, we leverage deep reinforcement learning and classical topological path planning to train a robust neural-network controller for minimum-time quadrotor flight in cluttered environments. The resulting neural-network controller demonstrates substantially better performance, by up to 19%, than state-of-the-art methods. More importantly, the learned policy solves the planning and control problems simultaneously and online to cope with disturbances, achieving much higher robustness. As a result, the proposed method achieves a 100% success rate of flying minimum-time policies without collision, while traditional planning and control approaches achieve only 40%. The proposed method is validated in both simulation and the real world, with quadrotor speeds of up to 42 km/h and accelerations of up to 3.6g.
There is an ever-increasing demand for using unmanned aerial vehicles (UAVs), known as drones, in diverse applications such as package delivery, traffic monitoring, search and rescue operations, and military combat engagements. In all of these applications, drones are used to navigate the environment autonomously, without human interaction, perform specific tasks, and avoid obstacles. Autonomous drone navigation is commonly accomplished using reinforcement learning (RL), in which agents act as experts in a domain, navigating the environment while avoiding obstacles. Understanding the navigation environment and the algorithmic limitations plays a crucial role in choosing the appropriate RL algorithm to solve the navigation problem effectively. Consequently, this study first identifies the UAV navigation task and discusses navigation frameworks and simulation software. Next, RL algorithms are classified and discussed based on the environment, algorithm characteristics, capabilities, and applications in different UAV navigation problems, which will help practitioners and researchers select the appropriate RL algorithm for their UAV navigation use case. Moreover, the identified gaps and opportunities will drive future UAV navigation research.
In recent years, unmanned aerial vehicle (UAV) related technology has expanded knowledge in the area, bringing to light new problems and challenges that require solutions. Furthermore, because the technology allows processes usually carried out by people to be automated, it is in great demand in industrial sectors. The automation of these vehicles has been addressed in the literature, applying different machine learning strategies. Reinforcement learning (RL) is an automation framework that is frequently used to train autonomous agents. RL is a machine learning paradigm wherein an agent interacts with an environment to solve a given task. However, learning autonomously can be time-consuming, computationally expensive, and may not be practical in highly complex scenarios. Interactive reinforcement learning allows an external trainer to provide advice to an agent while it is learning a task. In this study, we set out to teach an RL agent to control a drone using reward-shaping and policy-shaping techniques simultaneously. Two simulated scenarios were proposed for the training: one without obstacles and one with obstacles. We also studied the influence of each technique. The results show that an agent trained simultaneously with both techniques obtains a lower reward than an agent trained using only a policy-based approach. Nevertheless, the agent achieves lower execution times and less dispersion during training.
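To make the combination of the two interactive techniques concrete, here is a minimal, hypothetical sketch of a single tabular Q-learning step that applies both: the external trainer's feedback is added to the environment reward (reward shaping), and the trainer's preferred action biases action selection (policy shaping). The `trainer_bonus` / `trainer_prefers` callables and a classic Gym-style `env.step` returning `(obs, reward, done, info)` are assumptions for illustration, not the study's implementation.

```python
import numpy as np

def interactive_q_step(Q, s, env, trainer_bonus, trainer_prefers,
                       alpha=0.1, gamma=0.99, eps=0.1, bias=1.0):
    """One Q-learning step with reward shaping and policy shaping.
    trainer_bonus(s, a) -> extra reward from the external trainer (reward shaping)
    trainer_prefers(s)  -> preferred action index or None (policy shaping)"""
    # Policy shaping: bias greedy action selection toward the trainer's advice.
    prefs = Q[s].copy()
    advised = trainer_prefers(s)
    if advised is not None:
        prefs[advised] += bias
    a = np.random.randint(len(Q[s])) if np.random.rand() < eps else int(np.argmax(prefs))

    s_next, r, done, _ = env.step(a)
    # Reward shaping: augment the environment reward with the trainer's feedback.
    r_shaped = r + trainer_bonus(s, a)
    target = r_shaped if done else r_shaped + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
    return s_next, done
```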
Underwater navigation presents several challenges, including unstructured unknown environments, lack of reliable localization systems (e.g., GPS), and poor visibility. Furthermore, good-quality obstacle detection sensors for underwater robots are scant and costly, and many sensors like RGB-D cameras and LiDAR only work in air. To enable reliable mapless underwater navigation despite these challenges, we propose a low-cost end-to-end navigation system, based on a monocular camera and a fixed single-beam echo-sounder, that efficiently navigates an underwater robot to waypoints while avoiding nearby obstacles. Our proposed method is based on Proximal Policy Optimization (PPO), which takes as input current relative goal information, estimated depth images, echo-sounder readings, and previously executed actions, and outputs 3D robot actions on a normalized scale. End-to-end training was done in simulation, where we adopted domain randomization (varying underwater conditions and visibility) to learn a policy robust to noise and changes in visibility conditions. The experiments in simulation and the real world demonstrate that our proposed method is successful and resilient in navigating a low-cost underwater robot in unknown underwater environments. The implementation is made publicly available at https://github.com/dartmouthrobotics/deeprl-uw-robot-navigation.
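Below is a hedged PyTorch sketch of how such a PPO actor might fuse the inputs named in the abstract (estimated depth image, relative goal, echo-sounder range, previously executed actions) into normalized 3D actions. The layer sizes, history length, and module names are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class UWNavActor(nn.Module):
    """Illustrative PPO actor: fuses depth image, goal, echo-sounder, past actions."""
    def __init__(self, action_dim=3, history=4):
        super().__init__()
        self.cnn = nn.Sequential(                       # encode the estimated depth image
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        fused = 32 * 4 * 4 + 3 + 1 + history * action_dim
        self.head = nn.Sequential(
            nn.Linear(fused, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh())      # actions on a normalized [-1, 1] scale

    def forward(self, depth_img, rel_goal, echo_range, prev_actions):
        feats = self.cnn(depth_img)                     # (B, 512) image features
        x = torch.cat([feats, rel_goal, echo_range, prev_actions.flatten(1)], dim=-1)
        return self.head(x)
```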
This work investigates the impact of curriculum learning (CL) based training on an agent's performance. In particular, we focus on the safety aspect of robotic mapless navigation, comparing against a standard end-to-end (E2E) training strategy. To this end, we propose a methodology that leverages Transfer of Learning (ToL) and fine-tuning in a Unity-based simulation, with the Robotnik Kairos as the robotic agent. For a fair comparison, our evaluation considers an equal computational demand for every learning approach (i.e., the same number of interactions and difficulty of the environments) and confirms that our CL-based method using ToL outperforms the E2E methodology. In particular, we improve the average success rate and the safety of the trained policy, resulting in 10% fewer collisions in unseen testing scenarios. To further confirm these results, we employ a formal verification tool to quantify the number of correct behaviors of the RL policies over the desired specifications.
The use of deep reinforcement learning (DRL) schemes has increased dramatically since their first introduction in 2015. Despite their use in many different applications, they still suffer from a lack of interpretability. This lack hampers the adoption of DRL solutions by researchers and the general public. To address this issue, the field of explainable artificial intelligence (XAI) has emerged. It encompasses a variety of different methods that aim to open the DRL black box, ranging from interpretable symbolic decision trees to numerical methods such as Shapley values. This review examines which methods are being used and for which applications, in order to identify which models are best suited to each application, or whether certain methods are being underutilized.
Autonomous driving has the potential to revolutionize mobility and is therefore an active area of research. In practice, the behavior of autonomous vehicles must be acceptable, i.e., efficient, safe, and interpretable. While vanilla reinforcement learning (RL) finds performant behavioral policies, they are often unsafe and uninterpretable. Safety is introduced through safe RL approaches, but these remain uninterpretable, as the learned behavior jointly optimizes safety and performance without modeling them separately. Interpretable machine learning is rarely applied to RL. This paper proposes SafeDQN, which makes the behavior of autonomous vehicles safe and interpretable while remaining efficient. SafeDQN offers an understandable semantic trade-off between expected risk and utility while being algorithmically transparent. We show that SafeDQN finds interpretable and safe driving policies for a variety of scenarios and demonstrate how state-of-the-art saliency techniques can help to assess both risk and utility.
Policy search and model predictive control (MPC) are two different paradigms for robot control: policy search has the strength of automatically learning complex policies from experience data, while MPC can offer optimal control performance using models and trajectory optimization. An open research question is how to leverage and combine the advantages of both approaches. In this work, we provide an answer by using policy search to automatically choose high-level decision variables for MPC, which leads to a novel policy-search-for-model-predictive-control framework. Specifically, we formulate the MPC as a parameterized controller, where the hard-to-optimize decision variables are represented as high-level policies. Such a formulation allows the policies to be optimized in a self-supervised manner. We validate this framework by focusing on a challenging problem in agile drone flight: flying a quadrotor through fast-moving gates. Experiments show that our controller achieves robust and real-time control performance in both simulation and the real world. The proposed framework offers a new perspective on merging learning and control.
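The core idea, letting a policy-search loop pick the MPC's hard-to-optimize decision variables from rollout cost alone, can be sketched with a toy Gaussian search (CEM-style). The `mpc_rollout_cost` stand-in and the one-dimensional decision variable are illustrative assumptions; the actual framework uses a real MPC solver and learned high-level policies.

```python
import numpy as np

def mpc_rollout_cost(z, context, rng):
    """Stand-in for running the parameterized MPC with decision variable z
    (e.g., a gate traversal time) and measuring the resulting trajectory cost."""
    return (z - context) ** 2 + 0.1 * rng.standard_normal()

def policy_search_for_mpc(context, iters=30, samples=32, seed=0):
    """Self-supervised search over the MPC decision variable: a Gaussian
    high-level policy is refit to the lowest-cost samples at each iteration."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0
    for _ in range(iters):
        zs = rng.normal(mu, sigma, samples)
        costs = np.array([mpc_rollout_cost(z, context, rng) for z in zs])
        elite = zs[np.argsort(costs)[: samples // 4]]
        mu, sigma = elite.mean(), elite.std() + 1e-3
    return mu   # decision variable the MPC should use for this context
```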
Recently, learning-based controllers have been shown to push mobile robotic systems to their limits and provide the robustness needed for many real-world applications. However, only classical optimization-based control frameworks offer the inherent flexibility to be dynamically adjusted during execution by, for example, setting target speeds or actuator limits. We present a framework to overcome this shortcoming of neural controllers by conditioning them on an auxiliary input. This advance is enabled by including a feature-wise linear modulation (FiLM) layer. We use model-free reinforcement learning to train quadrotor control policies for the task of navigating through a sequence of waypoints in minimum time. By conditioning the policy on the maximum available thrust or the viewing direction relative to the next waypoint, a user can regulate the aggressiveness of the quadrotor's flight during deployment. We demonstrate in simulation and in real-world experiments that a single control policy can achieve close to time-optimal flight performance across the entire performance envelope of the robot, reaching up to 60 km/h and 4.5g in acceleration. The ability to guide a learned controller during task execution has implications beyond agile quadrotor flight, as conditioning the control policy on human intent helps safely bring learning-based systems out of the well-defined laboratory environment into the wild.
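For reference, a feature-wise linear modulation (FiLM) layer scales and shifts intermediate features as a function of the auxiliary conditioning input. A minimal PyTorch sketch follows, with layer sizes and the toy usage chosen for illustration rather than taken from the paper.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise linear modulation: per-feature scale and shift computed
    from an auxiliary conditioning input (e.g., max thrust, view direction)."""
    def __init__(self, cond_dim: int, feat_dim: int):
        super().__init__()
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * feat_dim)

    def forward(self, features: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=-1)
        return gamma * features + beta

# Illustrative use inside a policy MLP (sizes are assumptions, not the paper's):
obs, cond = torch.randn(8, 18), torch.rand(8, 1)   # cond could be max available thrust
hidden = torch.relu(nn.Linear(18, 64)(obs))
modulated = FiLM(cond_dim=1, feat_dim=64)(hidden, cond)
```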
We demonstrate the possibility of learning drone swarm controllers that are zero-shot transferable to real quadrotors via large-scale multi-agent end-to-end reinforcement learning. We train policies, parameterized by neural networks, that are capable of controlling individual drones in a swarm in a fully decentralized manner. Our policies, trained in simulated environments with realistic quadrotor physics, demonstrate advanced flocking behaviors, perform aggressive maneuvers in tight formations while avoiding collisions with each other, break and re-establish formations to avoid collisions with moving obstacles, and efficiently coordinate in pursuit-evasion tasks. In simulation, we analyze how different model architectures and parameters of the training regime influence the final performance of neural swarms. We demonstrate the successful deployment of the model learned in simulation to highly resource-constrained physical quadrotors performing station-keeping and goal-swapping behaviors. Code and video demonstrations are available on the project website at https://sites.google.com/view/swarm-rl.
One of the main goals of contemporary roboticists is to enable intelligent mobile robots to operate smoothly in shared human-robot environments. One of the most fundamental capabilities in service of this goal is competent navigation in this "social" context. As a result, there has been a recent surge of research on social navigation in general, and especially on how to handle conflicts between social navigation agents. These contributions introduce a variety of models, algorithms, and evaluation metrics, but because this research area is inherently interdisciplinary, many of the relevant papers are not comparable and there is no common standard vocabulary. The main goal of this survey is to bridge this gap by introducing such a common language, using it to survey existing work, and highlighting open problems. It begins by defining conflict in social navigation and providing a detailed taxonomy of its components. The survey then maps existing work into this taxonomy while discussing papers using its framework. Finally, the paper presents some future research directions and open problems that currently lie at the frontier of social navigation, to help focus ongoing and future research.
Humans race drones faster than neural networks trained for end-to-end autonomous flight. This may be related to the ability of human pilots to effectively select task-relevant visual information. This work investigates whether neural networks capable of imitating human eye-gaze behavior and attention can improve neural network performance on the challenging task of vision-based autonomous drone racing. We hypothesize that gaze-based attention prediction can be an efficient mechanism for visual information selection and decision making in a simulator-based drone racing task. We test this hypothesis using eye gaze and flight trajectory data from 18 human drone pilots to train a visual attention prediction model. We then use this visual attention prediction model to train an end-to-end controller for vision-based autonomous drone racing using imitation learning. We compare the drone racing performance of the attention-prediction controller with that of controllers using raw image inputs and image-based abstractions (i.e., feature tracks). Our results show that the attention-prediction controller outperforms the baselines and is able to consistently complete a challenging race track, with up to an 88% success rate. Furthermore, when evaluated on held-out reference trajectories, the visual-attention-prediction and feature-track-based models show better generalization performance than the image-based models. Our results demonstrate that human visual attention prediction improves the performance of vision-based autonomous drone racing, providing an important step towards vision-based, fast, and agile autonomous flight that may ultimately reach or even exceed human performance.
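A minimal PyTorch sketch of the kind of pipeline described here: an attention branch predicts a gaze-like saliency map that masks the image before a behavior-cloning control head. The architecture and sizes are illustrative assumptions; in the paper, the attention model is trained on human eye-gaze data and the controller via imitation learning on pilot trajectories.

```python
import torch
import torch.nn as nn

class GazeAttentionController(nn.Module):
    """Illustrative pipeline: a gaze-attention predictor masks the input image,
    and a control head imitates human actions from the attended image."""
    def __init__(self, action_dim=4):
        super().__init__()
        self.attention = nn.Sequential(                 # predicts a gaze-like saliency map
            nn.Conv2d(3, 8, 5, padding=2), nn.ReLU(),
            nn.Conv2d(8, 1, 5, padding=2), nn.Sigmoid())
        self.control = nn.Sequential(                   # behavior-cloning control head
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(), nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(), nn.Linear(128, action_dim))

    def forward(self, img):
        attn = self.attention(img)       # (B, 1, H, W), trained against human gaze maps
        return self.control(img * attn), attn
```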
Robot navigation with deep reinforcement learning (RL) achieves high performance and works well in complex environments. At the same time, explaining the decisions of deep RL models becomes a key issue for the safety and reliability of more autonomous robots. In this paper, we propose a visual explanation method based on an attention branch for deep RL models. We attach an attention branch to a pre-trained deep RL model and train the attention branch in a supervised learning manner, using the outputs of the trained deep RL model as the correct labels. Because the attention branch is trained to output the same results as the deep RL model, the obtained attention maps correspond to the agent's actions with higher interpretability. Experimental results on a robot navigation task show that the proposed method can generate interpretable attention maps for visual explanation.
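A hedged sketch of the described training scheme: the pre-trained RL model is frozen and its outputs serve as supervision labels for the attention branch, so the branch's attention maps explain the agent's actions. The interfaces (an `rl_model` callable and an `attention_branch` returning outputs plus an attention map) are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def train_attention_branch(rl_model, attention_branch, frames, optimizer):
    """Train the attention branch to reproduce the frozen RL model's outputs,
    so its attention maps indicate which pixels drive the agent's actions."""
    rl_model.eval()
    with torch.no_grad():
        target = rl_model(frames)                  # e.g., Q-values or action logits
    pred, attn_map = attention_branch(frames)      # branch returns outputs and attention
    loss = nn.functional.mse_loss(pred, target)    # supervised: RL outputs are the labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), attn_map
```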
Noisy sensing, imperfect control, and environment changes are defining characteristics of many real-world robot tasks. The partially observable Markov decision process (POMDP) provides a principled mathematical framework for modeling and solving robot decision and control tasks under uncertainty. Over the last decade, it has seen many successful applications, spanning localization and navigation, search and tracking, autonomous driving, multi-robot systems, manipulation, and human-robot interaction. This survey aims to bridge the gap between the development of POMDP models and algorithms on one end and their application to diverse robot decision-making tasks on the other. It analyzes the characteristics of these tasks and connects them with the mathematical and algorithmic properties of the POMDP framework for effective modeling and solution. For practitioners, the survey provides some of the key task characteristics for deciding when and how to apply POMDPs to robot tasks successfully. For POMDP algorithm designers, the survey provides new insights into the unique challenges of applying POMDPs to robot systems and points to promising new directions for further research.
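For context, the discrete POMDP belief update that underlies most of the surveyed solvers can be written in a few lines; the sketch below assumes tabular transition and observation models and is purely illustrative.

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """One Bayes-filter step for a discrete POMDP.
    b: (S,) current belief over states
    T: (A, S, S) transition probabilities T[a, s, s']
    Z: (A, S, O) observation probabilities Z[a, s', o]"""
    predicted = b @ T[a]                  # sum_s T(s' | s, a) * b(s)
    unnormalized = Z[a, :, o] * predicted # weight by likelihood of the observation
    return unnormalized / unnormalized.sum()
```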