Traffic simulation has attracted much interest for the quantitative evaluation of autonomous vehicle performance. For a simulator to serve as a valuable test bed, the driving policy animating each traffic agent in the scene is required to behave as a human would while maintaining minimal safety guarantees. Learning the driving policies of traffic agents from recorded human driving data, or through reinforcement learning, therefore seems an attractive solution for generating realistic and highly interactive traffic situations at uncontrolled intersections or roundabouts. In this work, we show that a trade-off exists between imitating human driving and maintaining safety when learning driving policies. We do this by comparing the performance of various imitation learning and reinforcement learning algorithms applied to the driving task. We also propose a multi-objective policy optimization algorithm (MOPPO) that improves both objectives jointly. We test the driving policies on highly interactive driving scenarios extracted from the INTERACTION dataset to evaluate how human-like they behave.
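The abstract does not detail how MOPPO scalarizes the two objectives. A minimal sketch of one plausible formulation, a PPO-style clipped surrogate applied to separate imitation and safety advantages and then weighted, is given below; the weight `w_safety` and both advantage estimates are assumptions, not details from the paper.

```python
import torch

def moppo_loss(log_probs_new, log_probs_old, adv_imitation, adv_safety,
               w_safety=0.5, clip_eps=0.2):
    """Hypothetical multi-objective PPO surrogate: a weighted sum of clipped
    surrogates, one per objective (human-likeness and safety)."""
    ratio = torch.exp(log_probs_new - log_probs_old)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)

    def surrogate(adv):
        # Standard PPO pessimistic bound, computed per objective.
        return torch.min(ratio * adv, clipped * adv).mean()

    # Scalarize the two objectives; the trade-off weight is a free choice.
    return -(surrogate(adv_imitation) + w_safety * surrogate(adv_safety))
```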
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high-dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in the real-world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, and inverse reinforcement learning, which are related to but are not classical RL algorithms. The role of simulators in training agents and methods for validating, testing, and robustifying existing RL solutions are also discussed.
In the field of autonomous driving, fusing human knowledge into deep reinforcement learning (DRL) is usually based on human demonstrations recorded in simulated environments, which limits its generalization and feasibility in real-world traffic. We propose a two-stage DRL method that learns from real human driving and achieves performance superior to that of a pure DRL agent. The DRL agent is trained within a framework combining CARLA and the Robot Operating System (ROS). For evaluation, we designed different real-world driving scenarios in which the proposed two-stage DRL agent is compared with a pure DRL agent. After extracting 'good' behaviors from human drivers, such as anticipation at signalized intersections, the agent becomes more efficient and drives more safely, making it better suited to human-robot interaction (HRI) traffic.
Imitation learning (IL) is a simple and powerful way to use high-quality human driving data, which can be collected at scale, to identify driving preferences and produce human-like behavior. However, policies based on imitation learning alone often fail to sufficiently account for safety and reliability concerns. In this paper, we show how imitation learning combined with reinforcement learning using simple rewards can substantially improve the safety and reliability of driving policies over those learned from imitation alone. In particular, we use a combination of imitation and reinforcement learning to train a policy on over 100k miles of urban driving data, and measure its effectiveness in test scenarios grouped by different levels of collision risk. To our knowledge, this is the first application of a combined imitation and reinforcement learning approach in autonomous driving that utilizes large amounts of real-world human driving data.
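One common way to realize such a combination, shown here only as a hedged sketch rather than the authors' exact formulation, is to add a behavior-cloning term on the logged human data to the RL policy loss; `policy` is assumed to map states to a torch distribution.

```python
import torch

def il_plus_rl_loss(policy, states, expert_actions, rl_policy_loss, bc_weight=0.1):
    """Sketch: total loss = RL loss (from the simple rewards) plus a weighted
    behavior-cloning term on logged human driving. bc_weight is an assumption."""
    dist = policy(states)  # assumed: returns a torch.distributions object
    bc_loss = -dist.log_prob(expert_actions).mean()
    return rl_policy_loss + bc_weight * bc_loss
```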
Traffic simulators are an important component in the operation and planning of transportation systems. Conventional traffic simulators usually employ calibrated physical car-following models to describe vehicle behaviors and their interactions with the traffic environment. However, there is no universal physical model that can accurately predict the patterns of vehicle behavior in different situations. Given the non-stationary nature of traffic dynamics, a fixed physical model tends to be less effective in complex environments. In this paper, we formulate traffic simulation as an inverse reinforcement learning problem and propose a parameter-sharing adversarial inverse reinforcement learning model for dynamics-robust simulation learning. Our proposed model is able to imitate the real-world trajectories of vehicles while recovering a reward function that reveals the vehicles' true objectives and is invariant across different dynamics. Extensive experiments on synthetic and real-world datasets demonstrate the superior performance of our method compared with state-of-the-art approaches and its robustness to variations in traffic dynamics.
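For reference, the standard adversarial inverse reinforcement learning (AIRL) objective that such a model builds on is sketched below; the parameter-sharing aspect (one discriminator and one policy shared across all vehicles) is a property of how the networks are instantiated, not visible in the loss itself.

```python
import torch
import torch.nn.functional as F

def airl_losses(f_expert, log_pi_expert, f_policy, log_pi_policy):
    """AIRL: D(s,a) = exp(f(s,a)) / (exp(f(s,a)) + pi(a|s)), so the
    discriminator logit is f(s,a) - log pi(a|s). f is the learned reward."""
    logits_e = f_expert - log_pi_expert
    logits_p = f_policy - log_pi_policy
    d_loss = (F.binary_cross_entropy_with_logits(logits_e, torch.ones_like(logits_e))
              + F.binary_cross_entropy_with_logits(logits_p, torch.zeros_like(logits_p)))
    # The recovered reward fed back to the RL step is the policy-sample logit.
    return d_loss, logits_p.detach()
```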
Autonomous driving has attracted significant research interest over the past two decades, as it offers many potential benefits, including freeing drivers from exhausting driving and mitigating traffic congestion. Despite promising progress, lane changing remains a great challenge for autonomous vehicles (AVs), especially in mixed and dynamic traffic scenarios. Recently, reinforcement learning (RL), a powerful data-driven control method, has been widely explored for lane-changing decision-making with encouraging results. However, most of these studies focus on single-vehicle settings, and lane changing in scenarios where multiple AVs coexist with human-driven vehicles (HDVs) has received scarce attention. In this paper, we formulate the lane-changing decision-making of multiple AVs in a mixed-traffic highway environment as a multi-agent reinforcement learning (MARL) problem, where each AV makes lane-changing decisions based on the motions of both neighboring AVs and HDVs. Specifically, a multi-agent advantage actor-critic network (MA2C) is developed with a novel local reward design and a parameter-sharing scheme. In particular, a multi-objective reward function is proposed to incorporate fuel efficiency, driving comfort, and the safety of autonomous driving. Comprehensive experimental results, conducted under three different traffic densities and various levels of human driver aggressiveness, show that our proposed MARL framework consistently outperforms several state-of-the-art benchmarks in terms of efficiency, safety, and driver comfort.
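The abstract names the three reward components but not their functional form; a hypothetical scalarization in the same spirit (all weights and thresholds are assumptions) could look like:

```python
def lane_change_reward(speed, jerk, time_to_collision,
                       w_eff=1.0, w_comfort=0.2, w_safety=2.0,
                       target_speed=30.0, ttc_threshold=4.0):
    """Hypothetical multi-objective reward: efficiency (track a target speed),
    comfort (penalize jerk), and safety (penalize short time-to-collision)."""
    r_eff = -abs(speed - target_speed) / target_speed
    r_comfort = -abs(jerk)
    # Zero penalty above the TTC threshold, growing penalty below it.
    r_safety = min(0.0, (time_to_collision - ttc_threshold) / ttc_threshold)
    return w_eff * r_eff + w_comfort * r_comfort + w_safety * r_safety
```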
Designing a safe and human-like decision-making system for an autonomous vehicle is a challenging task. Generative imitation learning is one possible approach for automating policy-building by leveraging both real-world and simulated decisions. Previous work that applies generative imitation learning to autonomous driving policies focuses on learning a low-level controller for simple settings. However, to scale to complex settings, many autonomous driving systems combine fixed, safe, optimization-based low-level controllers with high-level decision-making logic that selects the appropriate task and associated controller. In this paper, we attempt to bridge this gap in complexity by employing Safety-Aware Hierarchical Adversarial Imitation Learning (SHAIL), a method for learning a high-level policy that selects from a set of low-level controller instances in a way that imitates low-level driving data on-policy. We introduce an urban roundabout simulator that controls non-ego vehicles using real data from the Interaction dataset. We then demonstrate empirically that even with simple controller options, our approach can produce better behavior than previous approaches in driver imitation that have difficulty scaling to complex environments. Our implementation is available at https://github.com/sisl/InteractionImitation.
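A hedged sketch of the hierarchical structure described above follows; all interfaces (`is_available`, `select`, `act`) are hypothetical names, and the actual method additionally trains the high-level policy adversarially on-policy.

```python
def hierarchical_step(high_level_policy, controllers, state):
    """A learned high-level policy picks one of several fixed, safe low-level
    controllers; only options judged available in this state are offered."""
    available = [c for c in controllers if c.is_available(state)]
    choice = high_level_policy.select(state, available)  # imitation-trained
    return available[choice].act(state)                  # safe low-level control
```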
High-quality traffic flow generation is the core module in building simulators for autonomous driving. However, the majority of available simulators are incapable of replicating traffic patterns that accurately reflect the various features of real-world data while also simulating human-like reactive responses to the tested autopilot driving strategies. Taking a step toward addressing this problem, we propose Realistic Interactive TrAffic flow (RITA) as an integrated component of existing driving simulators to provide high-quality traffic flow for the evaluation and optimization of the tested driving strategies. RITA is developed with fidelity, diversity, and controllability in consideration, and consists of two core modules called RITABackend and RITAKit. RITABackend is built to support vehicle-wise control and provide traffic generation models from real-world datasets, while RITAKit is developed with easy-to-use interfaces for controllable traffic generation via RITABackend. We demonstrate RITA's capacity to create diversified and high-fidelity traffic simulations in several highly interactive highway scenarios. The experimental findings demonstrate that the traffic flows produced by RITA meet all three design goals, hence enhancing the completeness of driving strategy evaluation. Moreover, we showcase the possibility of further improving baseline strategies through online fine-tuning with RITA traffic flows.
Safe driving requires multiple capabilities from human and intelligent agents, such as generalizability to unseen environments, safety awareness of the surrounding traffic, and decision-making in complex multi-agent settings. Despite the great success of reinforcement learning (RL), most RL research studies each capability separately due to the lack of an integrated environment. In this work, we develop a new driving simulation platform called MetaDrive to support research on generalizable reinforcement learning algorithms for machine autonomy. MetaDrive is highly compositional and can generate an infinite number of diverse driving scenarios from both procedural generation and real-data importing. Based on MetaDrive, we construct a variety of RL tasks and baselines in both single-agent and multi-agent settings, including benchmarking generalizability across unseen scenes, safe exploration, and learning multi-agent traffic. Generalization experiments conducted on both procedurally generated and real-world scenarios show that increasing the diversity and size of the training set improves the generalizability of the RL agents. We further evaluate various safe reinforcement learning and multi-agent reinforcement learning algorithms in MetaDrive environments and provide the benchmarks. Source code, documentation, and demo videos are available at https://metadriverse.github.io/metadrive.
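A minimal usage sketch of MetaDrive's gym-style interface follows; the configuration keys and the 5-tuple `step` return assume a recent (gymnasium-compatible) release and may differ in older versions.

```python
from metadrive import MetaDriveEnv

# Procedurally generate 100 driving scenarios and roll out a random policy.
env = MetaDriveEnv(dict(num_scenarios=100, start_seed=0))
obs, info = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # replace with a trained RL policy
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```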
Offline reinforcement learning (RL) provides a framework for learning decision-making from offline data and therefore constitutes a promising approach for real-world applications such as automated driving. Self-driving vehicles (SDVs) learn a policy that may even outperform the behavior in the suboptimal dataset. Especially in safety-critical applications such as automated driving, explainability and transferability are key to success. This motivates the use of model-based offline RL approaches, which leverage planning. However, current state-of-the-art methods often neglect the influence of aleatoric uncertainty arising from the stochastic behavior of multi-agent systems. This work proposes a novel approach for uncertainty-aware model-based offline reinforcement learning leveraging planning (UMBRELLA), which solves the prediction, planning, and control problem of the SDV jointly in an interpretable, learning-based fashion. A trained action-conditioned stochastic dynamics model captures distinctively different future evolutions of the traffic scene. Our analysis provides empirical evidence of the effectiveness of our approach in challenging automated driving simulations and on a real-world public dataset.
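One way to make the role of the stochastic dynamics model concrete is a pessimistic sampling-based planner, sketched below under assumed interfaces (`rollout_return` is hypothetical):

```python
import numpy as np

def robust_plan(dynamics_model, state, candidate_action_seqs, n_samples=8):
    """Score each candidate action sequence by its worst-case return over
    several stochastic model rollouts, so distinctly different future
    evolutions of the traffic scene are all accounted for."""
    best_seq, best_score = None, -np.inf
    for seq in candidate_action_seqs:
        returns = [dynamics_model.rollout_return(state, seq) for _ in range(n_samples)]
        score = min(returns)  # pessimistic over sampled futures
        if score > best_score:
            best_seq, best_score = seq, score
    return best_seq
```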
Reinforcement learning (RL) requires skillful problem definition and remarkable computational effort to solve optimization and control problems, which could impair its prospects. Introducing human guidance into reinforcement learning is a promising way to improve learning performance. In this paper, a comprehensive human guidance-based reinforcement learning framework is established. A novel prioritized experience replay mechanism that adapts to human guidance during the reinforcement learning process is proposed to boost the efficiency and performance of the RL algorithm. To relieve the heavy workload on human participants, a behavior model is established based on an incremental online learning method to mimic human actions. We design two challenging autonomous driving tasks for evaluating the proposed algorithm. Experiments are conducted to assess the training and testing performance and the learning mechanism of the proposed algorithm. Comparative results against state-of-the-art methods suggest the advantages of our algorithm in terms of learning efficiency, performance, and robustness.
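A hedged sketch of how such a replay mechanism might prioritize human-guided transitions is shown below; the boost factor and the proportional-priority scheme are assumptions, not the paper's exact design.

```python
import numpy as np

class HumanGuidedReplay:
    """Prioritized replay that up-weights transitions where a human intervened."""

    def __init__(self, capacity, alpha=0.6, human_boost=2.0):
        self.capacity, self.alpha, self.human_boost = capacity, alpha, human_boost
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error, human_intervened):
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if human_intervened:
            priority *= self.human_boost  # guidance data is replayed more often
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        return [self.buffer[i] for i in idx]
```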
End-to-end autonomous driving provides a feasible way to automatically maximize overall driving system performance by directly mapping the raw pixels from a front-facing camera to control signals. Recent advanced methods construct a latent world model to map the high-dimensional observations into a compact latent space. However, the latent states embedded by the world models proposed in previous works may contain a large amount of task-irrelevant information, resulting in low sampling efficiency and poor robustness to input perturbations. Meanwhile, the training data distribution is usually unbalanced, and the learned policy struggles to cope with corner cases during driving. To solve these challenges, we present a semantic masked recurrent world model (SEM2), which introduces a latent filter to extract key task-relevant features and reconstruct a semantic mask from the filtered features, and is trained with a multi-source data sampler that aggregates common data and multiple corner-case data in a single batch to balance the data distribution. Extensive experiments on CARLA show that our method outperforms state-of-the-art approaches in terms of sample efficiency and robustness to input perturbations.
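The multi-source sampler is the easiest piece to make concrete. A hedged sketch, with the corner-case fraction chosen arbitrarily, might be:

```python
import random

def balanced_batch(common_data, corner_case_pools, batch_size, corner_frac=0.5):
    """Draw a fixed fraction of every batch evenly from corner-case pools and
    the rest from common driving data, so rare events are not swamped."""
    n_corner = int(batch_size * corner_frac)
    per_pool = max(1, n_corner // max(1, len(corner_case_pools)))
    batch = [random.choice(pool) for pool in corner_case_pools for _ in range(per_pool)]
    batch += random.choices(common_data, k=max(0, batch_size - len(batch)))
    return batch
```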
Real-time planning under uncertainty is critical for robots operating in complex dynamic environments. Consider, for example, an autonomous robot vehicle driving in dense, unregulated urban traffic of cars, motorcycles, buses, etc. The robot vehicle must plan over both short and long time horizons in order to interact with many traffic participants of uncertain intentions and drive effectively. However, planning explicitly over a long time horizon incurs prohibitive computational cost and is impractical under real-time constraints. To achieve real-time performance for large-scale planning, this work introduces a new algorithm, Learning from Tree Search for Driving (LeTS-Drive), which integrates planning and learning in a closed loop and applies it to autonomous driving in crowded urban traffic in simulation. Specifically, LeTS-Drive learns a policy and its value function from data provided by an online planner, which searches a sparsely sampled belief tree; the online planner in turn uses the learned policy and value function as heuristics to scale up its run-time performance for real-time robot control. These two steps are repeated to form a closed loop so that the planner and the learner inform each other and improve in synchrony. The algorithm learns on its own in a self-supervised manner, without human effort on explicit data labeling. Experimental results demonstrate that LeTS-Drive outperforms either planning or learning alone, as well as an open-loop integration of planning and learning.
Self-Driven Particles (SDP) describe a category of multi-agent systems common in everyday life, such as flocking birds and traffic flow. In an SDP system, each agent pursues its own goal and constantly changes its cooperative or competitive behaviors with nearby agents. Manually designing controllers for such SDP systems is time-consuming, while the resulting emergent behaviors are often neither realistic nor generalizable. Thus, the realistic simulation of SDP systems remains challenging. Reinforcement learning provides an appealing alternative for automating the development of SDP controllers. However, previous multi-agent reinforcement learning (MARL) methods define the agents as teammates or enemies beforehand, which fails to capture the essence of SDPs, where the role of each agent varies even within a single episode. To simulate SDPs with MARL, a key challenge is to coordinate the agents' behaviors while still maximizing individual objectives. Using traffic simulation as the testbed, in this work we develop a novel MARL method called Coordinated Policy Optimization (CoPO), which incorporates principles from social psychology to learn neural controllers for SDPs. Experiments show that the proposed method achieves superior performance compared with MARL baselines across various metrics. Noticeably, the trained vehicles exhibit complex and diverse social behaviors that improve the performance and safety of the population as a whole. Demo videos and source code are available at: https://decisionforce.github.io/copo/
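The coordination idea can be illustrated with a reward-blending sketch: each vehicle optimizes a mix of its own reward and its neighborhood's mean reward, controlled by an angle-valued coordination factor. Everything beyond this blend (how phi is learned, how neighborhoods are defined) is omitted or assumed.

```python
import math

def coordinated_reward(own_reward, neighbor_rewards, phi):
    """Blend an agent's own reward with its neighborhood's mean reward.
    phi = 0 -> fully egoistic; phi = pi/2 -> fully altruistic."""
    neighborhood = sum(neighbor_rewards) / max(1, len(neighbor_rewards))
    return math.cos(phi) * own_reward + math.sin(phi) * neighborhood
```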
Autonomous vehicles and self-driving research have been receiving major attention as one of the most promising prospects of modern artificial intelligence applications. With the evolution of advanced driver-assistance systems (ADAS), the design of self-driving vehicles and autonomous driving systems becomes complicated and safety-critical. In general, the intelligent system activates ADAS functions simultaneously and efficiently. Therefore, it is essential to consider reliable ADAS function coordination to control the driving system safely. To deal with this issue, this paper proposes a randomized adversarial imitation learning (RAIL) algorithm. RAIL is a novel derivative-free imitation learning method for autonomous driving with various ADAS function coordinations; it imitates the operation of a decision maker that controls autonomous driving with various ADAS functions. The proposed method is able to train a decision maker that handles LIDAR data and controls autonomous driving in multi-lane complex highway environments. Simulation-based evaluation verifies that the proposed method achieves the desired performance.
ML-based motion planning is a promising approach to produce agents that exhibit complex behaviors, and automatically adapt to novel environments. In the context of autonomous driving, it is common to treat all available training data equally. However, this approach produces agents that do not perform robustly in safety-critical settings, an issue that cannot be addressed by simply adding more data to the training set - we show that an agent trained using only a 10% subset of the data performs just as well as an agent trained on the entire dataset. We present a method to predict the inherent difficulty of a driving situation given data collected from a fleet of autonomous vehicles deployed on public roads. We then demonstrate that this difficulty score can be used in a zero-shot transfer to generate curricula for an imitation-learning based planning agent. Compared to training on the entire unbiased training dataset, we show that prioritizing difficult driving scenarios both reduces collisions by 15% and increases route adherence by 14% in closed-loop evaluation, all while using only 10% of the training data.
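A hedged sketch of turning such a difficulty score into a curriculum by difficulty-weighted sampling (the softmax form and temperature are assumptions, not the paper's method):

```python
import numpy as np

def curriculum_sample(scenarios, difficulty_scores, n, temperature=1.0):
    """Sample training scenarios with probability increasing in predicted
    difficulty, so hard driving situations are prioritized."""
    scores = np.asarray(difficulty_scores, dtype=float) / temperature
    probs = np.exp(scores - scores.max())  # numerically stable softmax
    probs /= probs.sum()
    idx = np.random.choice(len(scenarios), size=n, replace=False, p=probs)
    return [scenarios[i] for i in idx]
```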
Decision-making for urban autonomous driving is challenging due to the stochastic nature of interactive traffic participants and the complexity of road structures. Although reinforcement learning (RL)-based decision-making schemes promise to handle urban driving scenarios, they suffer from low sample efficiency and poor adaptability. In this paper, we propose the Scene-Rep Transformer to improve RL decision-making capability through better scene representation encoding and sequential predictive latent distillation. Specifically, a multi-stage Transformer (MST) encoder is constructed to model not only the interaction awareness between the ego vehicle and its neighbors but also the intention awareness between the agents and their candidate routes. A sequential latent Transformer (SLT) with a self-supervised learning objective is employed to distill future predictive information into the latent scene representation, so as to reduce the exploration space and speed up training. The final decision-making module, based on Soft Actor-Critic (SAC), takes as input the refined latent scene representation from the Scene-Rep Transformer and outputs driving actions. The framework is validated in five challenging simulated urban scenarios, and its performance is quantitatively manifested by substantial improvements in data efficiency and performance in terms of success rate, safety, and efficiency. Qualitative results reveal that our framework is able to extract the intentions of neighboring agents to help make decisions and deliver more diversified driving behaviors.
Autonomous driving has the potential to revolutionize mobility and is hence an active area of research. In practice, the behavior of autonomous vehicles must be acceptable, i.e., efficient, safe, and interpretable. While vanilla reinforcement learning (RL) finds performant behavioral strategies, they are often unsafe and uninterpretable. Safety is introduced through safe RL approaches, but these remain uninterpretable, as the learned behavior jointly optimizes safety and performance without modeling them separately. Interpretable machine learning is rarely applied to RL. This paper proposes SafeDQN, which allows making the behavior of autonomous vehicles safe and interpretable while still being efficient. SafeDQN offers an understandable, semantic trade-off between the expected risk and utility of actions while being algorithmically transparent. We show that SafeDQN finds interpretable and safe driving policies for a variety of scenarios and demonstrate how state-of-the-art saliency techniques can help to assess both risk and utility.
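The semantic risk-utility trade-off could, for instance, be exposed at action-selection time as follows; the two-head decomposition and the linear trade-off are assumptions made for illustration.

```python
import numpy as np

def safedqn_action(q_utility, q_risk, risk_weight=1.0):
    """Pick the action maximizing expected utility minus weighted expected
    risk, with the two quantities estimated by separate Q-heads."""
    return int(np.argmax(np.asarray(q_utility) - risk_weight * np.asarray(q_risk)))
```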
Motion control algorithms in the presence of pedestrians are critical for developing safe and reliable autonomous vehicles (AVs). Traditional motion control algorithms rely on manually designed decision-making policies that neglect the mutual interactions between AVs and pedestrians. On the other hand, recent advances in deep reinforcement learning allow policies to be learned automatically without manual designs. To tackle the problem of decision-making in the presence of pedestrians, the authors introduce a framework based on social value orientation and deep reinforcement learning (DRL) that is capable of generating decision-making policies with different driving styles. The policy is trained using state-of-the-art DRL algorithms in a simulated environment. A novel, computationally efficient pedestrian model suitable for DRL training is also introduced. We perform experiments to validate our framework and conduct a comparative analysis of the policies obtained with two different model-free deep reinforcement learning algorithms. Simulation results show how the developed model exhibits natural driving behaviors, such as short-stopping, to facilitate pedestrians' crossing.
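Social value orientation is conventionally expressed as an angle that trades off one's own reward against another's; a sketch of how it could parameterize the driving style here (the specific reward terms are assumptions):

```python
import math

def svo_reward(ego_reward, pedestrian_reward, svo_angle):
    """SVO-weighted reward: svo_angle = 0 gives a purely egoistic driver,
    pi/4 a prosocial one that values the pedestrian's utility equally."""
    return math.cos(svo_angle) * ego_reward + math.sin(svo_angle) * pedestrian_reward
```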
Deep reinforcement learning (RL) has led to many recent and groundbreaking advances. However, these advances have often come at the cost of both increased scale in the underlying architectures being trained and increased complexity of the RL algorithms used to train them. These increases have in turn made it more difficult for researchers to rapidly prototype new ideas or reproduce published RL algorithms. To address these concerns, this work describes Acme, a framework for constructing novel RL algorithms that is specifically designed to enable agents built from simple, modular components that can be used across various scales of execution. While the primary goal of Acme is to provide a framework for algorithm development, a secondary goal is to provide simple reference implementations of important or state-of-the-art algorithms. These implementations serve both as a validation of our design decisions and as an important contribution to reproducibility in RL research. In this work, we describe the major design decisions made within Acme and give further details on how its components can be used to implement various algorithms. Our experiments provide baselines for a number of common and state-of-the-art algorithms and show how these algorithms can be scaled up for much larger and more complex environments. This highlights one of the primary advantages of Acme, namely that it can be used to implement large, distributed RL algorithms that run at large scales while still maintaining the inherent readability of the implementation. This work presents a second version of the paper, which coincides with increased modularity, additional emphasis on offline, imitation, and learning-from-demonstration algorithms, as well as various new agents implemented as part of Acme.
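The modular actor-environment loop at the heart of the framework can be paraphrased in a few lines; this is a hedged re-implementation of the pattern, not Acme's actual code, though the actor interface shown (`observe_first`, `select_action`, `observe`, `update`) mirrors the one the paper describes.

```python
class EnvironmentLoop:
    """Run episodes against any actor implementing a small interface, so
    agents built from modular components plug in unchanged."""

    def __init__(self, environment, actor):
        self.environment, self.actor = environment, actor

    def run_episode(self):
        timestep = self.environment.reset()  # dm_env-style timesteps
        self.actor.observe_first(timestep)
        episode_return = 0.0
        while not timestep.last():
            action = self.actor.select_action(timestep.observation)
            timestep = self.environment.step(action)
            self.actor.observe(action, next_timestep=timestep)
            self.actor.update()              # learner step, possibly remote
            episode_return += timestep.reward
        return episode_return
```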