Two current approaches to training autonomous cars are reinforcement learning and imitation learning. By integrating supervised imitation learning into reinforcement learning, this research developed a new learning methodology and systematic approach in both simulated and smaller-scale real-world environments, making the RL training data collection process more effective and efficient. By combining the two methods, the proposed research successfully exploits the advantages of both RL and IL. First, a real mini-scale robot car was assembled and trained on a 6-foot real-world track using imitation learning. During this process, the mini-scale robot car was controlled by imitating a human expert driver, and the actions were manually recorded through Microsoft AirSim's API, which made it possible to generate and collect 331 accurate human reward training samples. Then, an agent was trained in the Microsoft AirSim simulator using reinforcement learning, initialized with the 331 reward samples obtained from the imitation learning stage. After a 6-hour training period, the mini-scale robot car was able to complete a full lap, whereas under pure RL training it could not complete a full lap even after 30 hours. Training time was reduced by 80%, and the new approach yielded a higher average reward per hour. The new approach therefore saves a substantial amount of training time and can be used to accelerate the adoption of RL in autonomous driving, which should help produce more effective and better results in the long run when applied to real-life scenarios. Keywords: Reinforcement Learning (RL), Imitation Learning (IL), Autonomous Driving, Human Driving Data, CNN
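As a rough illustration of the data-seeding step described above, the sketch below pre-loads an off-policy replay buffer with human-recorded transitions before RL training begins. All names are hypothetical; only the idea of reusing the recorded human samples (e.g. the 331 mentioned above) comes from the abstract.

```python
import random
from collections import deque

class ReplayBuffer:
    """Uniform replay buffer for an off-policy RL agent."""
    def __init__(self, capacity=50_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

def seed_with_demonstrations(buffer, demo_transitions):
    """Pre-populate the buffer with transitions recorded while the human expert drove."""
    for transition in demo_transitions:
        buffer.add(*transition)

# Hypothetical usage: seed the buffer with the recorded human samples, then run
# ordinary RL training so early updates already draw on human-quality experience.
# buffer = ReplayBuffer()
# seed_with_demonstrations(buffer, human_demo_transitions)
```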
In a typical autonomous driving stack, the planning and control systems represent two of the most critical components, in which data retrieved by sensors and processed by perception algorithms are used to implement safe and comfortable self-driving behavior. In particular, the planning module predicts the path the autonomous car should follow in order to perform the correct high-level maneuver, while the control system executes a sequence of low-level actions, controlling steering angle, throttle and brake. In this work, we propose a model-free deep reinforcement learning planner that trains a neural network to predict both acceleration and steering angle, thus obtaining a single module able to drive the vehicle using data processed by the ego-vehicle's localization and perception algorithms. In particular, the system, fully trained in simulation, is able to drive smoothly in obstacle-free environments both in simulation and in a real-world urban area of the city of Parma, demonstrating good generalization capabilities and the ability to drive even on road sections outside the training scenarios. Moreover, in order to deploy the system on a real self-driving car and reduce the gap between simulation and the real world, we also develop a module, represented by a tiny neural network, able to reproduce the real vehicle's dynamic behavior during training in simulation.
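The "tiny neural network" that reproduces the real vehicle's dynamics during simulated training might, as a minimal sketch, look like the following; the input features, layer sizes, and outputs are assumptions for illustration, not the authors' actual model.

```python
import torch.nn as nn

# Maps the current kinematic state plus the commanded controls to the response the
# real car would produce, so the simulated dynamics better match the physical vehicle.
vehicle_response_model = nn.Sequential(
    nn.Linear(6, 32), nn.Tanh(),   # assumed input: [speed, yaw_rate, prev_acc, prev_steer, cmd_acc, cmd_steer]
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 2),              # assumed output: realized [acceleration, steering] including actuation lag
)
```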
In the field of autonomous driving, fusing human knowledge into deep reinforcement learning (DRL) is usually based on human demonstrations recorded in simulated environments, which limits the generalization and feasibility of such approaches in real-world traffic. We propose a two-stage DRL method that learns from real human driving and achieves performance superior to a pure DRL agent. The DRL agent is trained within the CARLA framework together with the Robot Operating System (ROS). For evaluation, we designed different real-world driving scenarios in which the proposed two-stage DRL agent can be compared with a pure DRL agent. After extracting "good" behavior from human drivers, such as anticipation at signalized intersections, the agent becomes more efficient and drives more safely, which makes this autonomous agent better suited to human-robot interaction (HRI) traffic.
The last decade witnessed increasingly rapid progress in self-driving vehicle technology, mainly backed up by advances in the area of deep learning and artificial intelligence. The objective of this paper is to survey the current state-of-the-art on deep learning technologies used in autonomous driving. We start by presenting AI-based self-driving architectures, convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm. These methodologies form a base for the surveyed driving scene perception, path planning, behavior arbitration and motion control algorithms. We investigate both the modular perception-planning-action pipeline, where each module is built using deep learning methods, and End2End systems, which directly map sensory information to steering commands. Additionally, we tackle current challenges encountered in designing AI architectures for autonomous driving, such as their safety, training data sources and computational hardware. The comparison presented in this survey helps to gain insight into the strengths and limitations of deep learning and AI approaches for autonomous driving and to assist with design choices.
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high-dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real-world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning and inverse reinforcement learning, which are related but are not classical RL algorithms. The role of simulators in training agents, and methods to validate, test and robustify existing solutions in RL, are also discussed.
This paper presents a novel approach that supports natural language voice instructions to guide deep reinforcement learning (DRL) algorithms when training self-driving cars. DRL methods are popular approaches for autonomous vehicle (AV) agents. However, most existing methods are sample- and time-inefficient and lack a natural communication channel with a human expert. In this paper, the way new human drivers learn from human coaches motivates us to study new approaches to human-in-the-loop learning and to build a more natural and approachable training interface for the agents. We propose incorporating natural language voice instructions (NLI) into model-based deep reinforcement learning to train self-driving cars. We evaluate the proposed method alongside several state-of-the-art DRL methods in the CARLA simulator. The results show that NLI can help ease the training process and significantly boost the agents' learning speed.
Fig. 1 (caption). Conditional imitation learning allows an autonomous vehicle trained end-to-end to be directed by high-level commands. (a) Aerial view of the test environment: we train and evaluate robotic vehicles in the physical world (top) and in simulated urban environments (bottom). (b) Vision-based driving, view from the onboard camera: the vehicles drive based on video from a forward-facing onboard camera. At the time these images were taken, the vehicle was given the command "turn right at the next intersection". (c) Side view of the vehicle: the trained controller handles sensorimotor coordination (staying on the road, avoiding collisions) and follows the provided commands.
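A minimal PyTorch sketch of the command-conditioned architecture family this caption refers to is shown below, assuming a shared image encoder and one control branch per high-level command; it is illustrative only and not the authors' exact network.

```python
import torch
import torch.nn as nn

class ConditionalImitationNet(nn.Module):
    def __init__(self, num_commands=4, num_actions=2):
        super().__init__()
        self.encoder = nn.Sequential(                      # shared perception backbone
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.branches = nn.ModuleList([                    # one control head per command
            nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, num_actions))
            for _ in range(num_commands)
        ])

    def forward(self, image, command_idx):
        feats = self.encoder(image)                                    # (B, 64)
        out = torch.stack([branch(feats) for branch in self.branches], dim=1)
        return out[torch.arange(image.size(0)), command_idx]          # commanded branch only
```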
Autonomous driving in multi-agent, dynamic traffic scenarios is challenging: the behaviors of road users are uncertain and hard to model explicitly, and the ego-vehicle must apply complex negotiation skills with them, such as yielding, merging and taking turns, to achieve safe and efficient driving in diverse environments. In these complex dynamic scenarios, traditional planning methods are mostly rule-based and usually lead to reactive or even overly conservative behaviors, so they require tedious human effort to maintain workability. Recently, deep-learning-based methods have shown promising results with better generalization capability and less hand-engineering effort. However, they are either implemented with supervised imitation learning (IL), which suffers from dataset bias and distribution mismatch, or trained with deep reinforcement learning (DRL) but focused on one specific traffic scenario. In this work, we propose DQ-GAT to achieve scalable and proactive autonomous driving, where graph-attention-based networks are used to implicitly model interactions and deep Q-learning is employed to train the network end-to-end in an unsupervised manner. Extensive experiments in a high-fidelity driving simulator show that our method achieves higher success rates than previous learning-based methods and traditional rule-based approaches, and better trades off safety and efficiency in both seen and unseen scenarios. Moreover, qualitative results on a trajectory dataset indicate that the learned policy can be transferred to the real world at real-time speed. A demo video is available at https://caipeide.github.io/dq-gat/.
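A compact sketch of the graph-attention Q-network idea is given below, using the GATConv layer from PyTorch Geometric as a stand-in; the layer sizes and the choice of reading Q-values from the ego node's embedding are assumptions, not the DQ-GAT implementation.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class GraphQNet(nn.Module):
    """Graph-attention encoder over traffic participants with a Q-value head."""
    def __init__(self, node_dim, num_actions):
        super().__init__()
        self.gat1 = GATConv(node_dim, 64, heads=4, concat=True)
        self.gat2 = GATConv(64 * 4, 64, heads=1, concat=False)
        self.q_head = nn.Linear(64, num_actions)

    def forward(self, x, edge_index, ego_index):
        # x: (num_vehicles, node_dim) per-vehicle features; edge_index: interaction graph.
        h = torch.relu(self.gat1(x, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        return self.q_head(h[ego_index])   # Q-values read from the ego vehicle's embedding
```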
Imitation learning (IL) is a simple and powerful way to use high-quality human driving data, which can be collected at scale, to identify driving preferences and produce human-like behavior. However, policies based on imitation learning alone often fail to sufficiently account for safety and reliability concerns. In this paper, we show how imitation learning combined with reinforcement learning using simple rewards can substantially improve the safety and reliability of driving policies over those learned from imitation alone. In particular, we use a combination of imitation and reinforcement learning to train a policy on over 100k miles of urban driving data, and measure its effectiveness in test scenarios grouped by different levels of collision risk. To our knowledge, this is the first application of a combined imitation and reinforcement learning approach in autonomous driving that utilizes large amounts of real-world human driving data.
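One plausible way to combine the two signals, shown here as a hedged sketch rather than the paper's actual objective, is a DDPG/TD3-style actor loss regularized by a behavioral-cloning term on the logged human actions.

```python
import torch.nn.functional as F

def combined_actor_loss(policy, critic, rl_obs, demo_obs, demo_actions, bc_weight=0.5):
    # RL term: push the policy toward actions the critic scores highly under a
    # simple hand-designed reward (e.g. collision and progress terms).
    rl_loss = -critic(rl_obs, policy(rl_obs)).mean()
    # IL term: stay close to the logged human driving actions (behavioral cloning).
    bc_loss = F.mse_loss(policy(demo_obs), demo_actions)
    return rl_loss + bc_weight * bc_loss
```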
We introduce CARLA, an open-source simulator for autonomous driving research. CARLA has been developed from the ground up to support development, training, and validation of autonomous urban driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites and environmental conditions. We use CARLA to study the performance of three approaches to autonomous driving: a classic modular pipeline, an end-to-end model trained via imitation learning, and an end-to-end model trained via reinforcement learning. The approaches are evaluated in controlled scenarios of increasing difficulty, and their performance is examined via metrics provided by CARLA, illustrating the platform's utility for autonomous driving research.
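For reference, a minimal usage sketch of the CARLA Python client is shown below, assuming a CARLA server is already running on localhost:2000; the blueprint filter and spawn logic are illustrative and depend on the installed CARLA version and map.

```python
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Spawn an ego vehicle at one of the map's predefined spawn points.
blueprint = world.get_blueprint_library().filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(blueprint, spawn_point)

# Apply a fixed control command; a learned policy would set these values each tick.
vehicle.apply_control(carla.VehicleControl(throttle=0.5, steer=0.0))
```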
Deep reinforcement learning (DRL) has proven effective for several complex decision-making applications such as autonomous driving and robotics. However, DRL is notoriously limited by its high sample complexity and lack of stability. Prior knowledge, e.g. in the form of expert demonstrations, is often available but is challenging to leverage in order to mitigate these issues. In this paper, we propose General Reinforced Imitation (GRI), a novel method that combines the benefits of exploration and expert data and can be implemented on top of any off-policy RL algorithm. We make one simplifying assumption: expert demonstrations can be seen as perfect data whose underlying policy receives a constant high reward. Based on this assumption, GRI introduces the notion of an offline demonstration agent. This agent sends expert data that are processed concurrently with experiences coming from the online RL exploration agent. We show that our approach enables major improvements in vision-based autonomous driving in urban environments. We further validate the GRI method on MuJoCo continuous control tasks with different off-policy RL algorithms. Our method ranked first on the CARLA Leaderboard and outperforms World on Rails, the previous state of the art, by 17%.
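The core assumption stated above, namely that demonstrations are perfect data replayed with a constant high reward alongside online exploration, can be pictured with the following sketch; the class name, reward value, and mixing ratio are illustrative, not GRI's implementation.

```python
import random

class MixedReplayBuffer:
    """Replays expert demonstrations, re-labelled with a constant high reward,
    alongside transitions gathered by the online exploration agent."""
    def __init__(self, demo_transitions, expert_reward=1.0, demo_ratio=0.25):
        self.demo = [(s, a, expert_reward, s2, d) for (s, a, _, s2, d) in demo_transitions]
        self.online = []
        self.demo_ratio = demo_ratio

    def add(self, transition):
        self.online.append(transition)

    def sample(self, batch_size):
        n_demo = int(batch_size * self.demo_ratio)
        batch = random.sample(self.demo, min(n_demo, len(self.demo)))
        n_online = min(batch_size - len(batch), len(self.online))
        return batch + random.sample(self.online, n_online)
```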
Imitation learning uses expert demonstrations to uncover the optimal policy, and it is also applicable to real-world robotics tasks. In this case, however, the agent is trained in a simulated environment due to safety, economic and time constraints; the agent is later applied to the real-world domain using sim-to-real methods. In this paper, we apply imitation learning methods to solve a robotics task in a simulated environment and use transfer learning to apply these solutions in the real-world environment. Our task is set in the Duckietown environment, where the robotic agent has to follow the right lane based on input images from a single forward-facing camera. We present three imitation learning methods and two sim-to-real methods capable of accomplishing this task. A detailed comparison of these techniques is provided to highlight their advantages and disadvantages.
This paper explores the use of reinforcement learning (RL) models for autonomous racing. In contrast to passenger cars, where safety is the top priority, a racing car aims to minimize lap time. We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry and a continuous action space. To find out which RL method solves the problem better and whether the obtained models generalize to unknown tracks, we put 10 variants of deep deterministic policy gradient (DDPG) to the test in two experiments: i) studying how RL methods learn to drive a racing car and ii) studying how the learning scenario affects the generalization capability of the models. Our study shows that models trained with RL are not only able to drive faster than a baseline open-source handcrafted bot but also generalize to unknown tracks.
Safe driving requires multiple capabilities from human and intelligent agents, such as generalizability to unseen environments, safety awareness of the surrounding traffic, and decision-making in complex multi-agent settings. Despite the great success of reinforcement learning (RL), most RL research studies each capability separately due to the lack of integrated environments. In this work, we develop a new driving simulation platform called MetaDrive to support research on generalizable reinforcement learning algorithms for machine autonomy. MetaDrive is highly compositional and can generate an infinite number of diverse driving scenarios from both procedural generation and real data importing. Based on MetaDrive, we construct a variety of RL tasks and baselines in both single-agent and multi-agent settings, including benchmarking generalizability across unseen scenes, safe exploration, and learning multi-agent traffic. Generalization experiments conducted on both procedurally generated scenarios and real-world scenarios show that increasing the diversity and size of the training set leads to improved generalizability of the RL agents. We further evaluate various safe reinforcement learning and multi-agent reinforcement learning algorithms in MetaDrive environments and provide the benchmarks. Source code, documentation, and demo videos are available at https://metadriverse.github.io/metadrive.
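As a usage sketch, MetaDrive exposes a gym-style interface along the lines below; the configuration keys and the Gymnasium-style return signature are assumptions that vary across MetaDrive releases, so the project documentation should be treated as authoritative.

```python
from metadrive import MetaDriveEnv

# Configuration keys are assumptions and differ between releases; check the docs.
env = MetaDriveEnv(dict(num_scenarios=100, start_seed=0, use_render=False))

obs, info = env.reset()                       # Gymnasium-style API assumed here
done = False
while not done:
    action = env.action_space.sample()        # placeholder for a learned [steer, throttle] policy
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```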
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years; however, the field has recently been gaining attention due to advances in computing and sensing, as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations, without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction, such as humanoid robots, self-driving vehicles, human-computer interaction and computer games, to name a few. However, specialized algorithms are needed to effectively and robustly learn models, as learning by imitation poses its own set of challenges. In this paper, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games, as these domains are the most popular in the literature and provide a wide array of problems and methodologies. We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications and highlight current and future research directions.
In various control task domains, existing controllers provide a baseline level of performance that, although possibly suboptimal, should be maintained. Reinforcement learning (RL) algorithms that rely on extensive exploration of the state and action space can be used to optimize such control policies. However, fully exploratory RL algorithms may drop performance below the baseline level during training. In this paper, we address the problem of online optimization of a control policy while minimizing regret with respect to the baseline policy's performance. We present a joint imitation-reinforcement learning framework, denoted JIRL. The learning process in JIRL assumes the availability of a baseline policy and is designed with two objectives in mind: (a) leveraging the baseline's online demonstrations to minimize regret with respect to the baseline policy during training, and (b) eventually surpassing the baseline performance. JIRL addresses these objectives by initially learning to imitate the baseline policy and gradually shifting control from the baseline to the RL agent. Experimental results show that JIRL effectively achieves these objectives in several continuous-action-space domains. The results demonstrate that JIRL is comparable to state-of-the-art algorithms in final performance while incurring significantly lower baseline regret during training in all of the presented domains. Moreover, the results show a reduction in baseline regret by a factor of up to 21 compared with a state-of-the-art baseline-regret-minimization approach.
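The gradual control transfer described above can be sketched as follows; the annealing schedule and names are assumptions for illustration, not the JIRL implementation.

```python
import random

def select_action(rl_agent, baseline_policy, obs, step, handover_steps=100_000):
    """Anneal the probability of acting with the baseline controller from 1 to 0."""
    baseline_action = baseline_policy(obs)       # always available as an imitation target
    p_baseline = max(0.0, 1.0 - step / handover_steps)
    if random.random() < p_baseline:
        action = baseline_action                 # regret-bounding choice early in training
    else:
        action = rl_agent.act(obs)               # the RL agent gradually takes over
    return action, baseline_action
```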
Data-driven simulators promise high data-efficiency for driving policy learning. When used for modelling interactions, this data-efficiency becomes a bottleneck: small underlying datasets often lack interesting and challenging edge cases for learning interactive driving. We address this challenge by proposing a simulation method that uses in-painted ado vehicles for learning robust driving policies. Our approach can thus be used to learn policies that involve multi-agent interactions, and it allows training via state-of-the-art policy learning methods. We evaluate the approach for learning standard interaction scenarios in driving. In extensive experiments, our work demonstrates that the resulting policies can be transferred directly to a full-scale autonomous vehicle without making use of any traditional sim-to-real transfer techniques such as domain randomization.
Deep reinforcement learning (DRL) is a promising way to learn robot control policies purely from demonstrations and experience. To cover the whole dynamic behavior of the robot, DRL training is an active exploration process that is typically carried out in a simulation environment. Although such simulation training is cheap and fast, applying DRL algorithms to real-world settings is difficult. If agents are trained until they act safely in simulation, transferring them to a physical system is still difficult because of the sim-to-real gap caused by the differences between the simulation dynamics and the physical robot. In this paper, we propose a method for online training of a DRL agent to drive autonomously on a physical vehicle by using a model-based safety supervisor. Our solution uses a supervisory system to check whether the action selected by the agent is safe or unsafe, and ensures that a safe action is always taken on the vehicle. In this way, we can bypass the sim-to-real problem while training the DRL algorithm safely, quickly, and efficiently. We present various real-world experiments in which a small-scale physical vehicle is trained online to drive autonomously with no prior simulation training. The evaluation results show that our method improves the sample efficiency of training the agents while never crashing, and the trained agents demonstrate better driving performance than those trained in simulation.
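A hedged sketch of the model-based safety supervisor loop is shown below: the agent proposes an action, a short model rollout checks it, and an unsafe proposal is replaced by a conservative fallback before it ever reaches the vehicle. The helper functions and thresholds are illustrative assumptions.

```python
def supervised_step(agent, obs, predict_next_state, is_unsafe, safe_fallback_action):
    """Let the DRL agent propose an action, but only apply it if the model says it is safe."""
    proposed = agent.act(obs)
    predicted = predict_next_state(obs, proposed)      # short-horizon model rollout
    if is_unsafe(predicted):                           # e.g. off-track or too close to a wall
        return safe_fallback_action(obs), False        # conservative braking/centering action
    return proposed, True                              # the boolean can also shape the reward
```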
Autonomous vehicles (AVs) will become increasingly prevalent in the coming years and decades, providing new opportunities for safer and more convenient travel and potentially smarter traffic control methods that exploit automation and connectivity. Car following is a prime function in autonomous driving. Car following based on reinforcement learning has received attention in recent years, with the goal of learning and reaching a performance level comparable to humans. However, most existing RL methods model car following as a unilateral problem, sensing only the vehicle ahead. Recent literature, e.g. Wang and Horn [16], shows that bilateral car following, which considers both the vehicle ahead and the vehicle behind, exhibits better system stability. In this paper, we hypothesize that this bilateral car following can be learned with RL while also optimizing other objectives such as efficiency maximization, jerk minimization, and safety rewards, leading to a learned model that surpasses human driving. We propose and introduce a deep reinforcement learning (DRL) framework for car-following control by integrating bilateral information into both the state and the reward function, based on the bilateral control model (BCM) for car-following control. Furthermore, we use a decentralized multi-agent reinforcement learning framework to generate the corresponding control action for each agent. Our simulation results demonstrate that our learned policy is better than the human driving policy in terms of (a) inter-vehicle headways, (b) average speed, (c) jerk, (d) time to collision (TTC), and (e) string stability.
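A minimal sketch of a bilateral car-following reward in the spirit described above is given below; it observes both the front and rear gaps and trades off efficiency, jerk, and safety. All terms and weights are illustrative assumptions rather than the authors' exact formulation.

```python
def bilateral_reward(gap_front, gap_rear, ego_speed, desired_speed, jerk, ttc_front,
                     min_gap=5.0, ttc_min=4.0, w_eff=1.0, w_jerk=0.1, w_safe=2.0):
    """Illustrative reward that looks at both the leading and the following vehicle."""
    efficiency = -abs(ego_speed - desired_speed)               # reach and hold a target speed
    comfort = -abs(jerk)                                       # penalize jerky accelerations
    gap_penalty = max(0.0, min_gap - gap_front) + max(0.0, min_gap - gap_rear)
    ttc_penalty = max(0.0, ttc_min - ttc_front)                # keep a time-to-collision margin
    return w_eff * efficiency + w_jerk * comfort - w_safe * (gap_penalty + ttc_penalty)
```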
Reinforcement learning (RL) requires skillful definition and remarkable computational effort to solve optimization and control problems, which could impair its prospects. Introducing human guidance into reinforcement learning is a promising way to improve learning performance. In this paper, a comprehensive human guidance-based reinforcement learning framework is established. A novel prioritized experience replay mechanism that adapts to human guidance in the reinforcement learning process is proposed to boost the efficiency and performance of the reinforcement learning algorithm. To relieve the heavy workload on human participants, a behavior model is established based on an incremental online learning method to mimic human actions. We design two challenging autonomous driving tasks for evaluating the proposed algorithm. Experiments are conducted to assess the training and testing performance and learning mechanism of the proposed algorithm. Comparative results against state-of-the-art methods suggest the advantages of our algorithm in terms of learning efficiency, performance, and robustness.
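A minimal sketch of prioritized replay adapted to human guidance, as outlined above, is shown below: transitions in which a human intervened receive a priority bonus so the learner revisits them more often. The bonus and the priority form are assumptions, not the paper's mechanism.

```python
import numpy as np

class HumanGuidedPER:
    """Prioritized replay where human-guided transitions get an extra priority bonus."""
    def __init__(self, alpha=0.6, human_bonus=1.0):
        self.data, self.priorities = [], []
        self.alpha, self.human_bonus = alpha, human_bonus

    def add(self, transition, td_error, human_guided=False):
        priority = abs(td_error) + 1e-6 + (self.human_bonus if human_guided else 0.0)
        self.data.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        probs = np.asarray(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.data), size=batch_size, p=probs)
        return [self.data[i] for i in idx], idx
```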