Deep reinforcement learning (DRL) has been demonstrated to be effective for several complex decision-making applications such as autonomous driving and robotics. However, DRL is notoriously limited by its high sample complexity and its lack of stability. Prior knowledge, e.g. expert demonstrations, is often available but challenging to leverage to mitigate these issues. In this paper, we propose General Reinforced Imitation (GRI), a novel method which combines benefits from exploration and expert data and is straightforward to implement over any off-policy RL algorithm. We make one simplifying assumption: expert demonstrations can be seen as perfect data whose underlying policy gets a constant high reward. Based on this assumption, GRI introduces the notion of an offline demonstration agent. This agent sends expert data which are processed concurrently with the experiences coming from the online RL exploration agent. We show that our approach enables major improvements on vision-based autonomous driving in urban environments. We further validate the GRI method on Mujoco continuous control tasks with different off-policy RL algorithms. Our method ranked first on the CARLA Leaderboard, outperforming World on Rails, the previous state of the art, by 17%.
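A minimal sketch of the mechanism this abstract describes (names such as `MixedReplayBuffer` and `demo_reward` are illustrative assumptions, not the authors' code): expert transitions enter the same replay buffer as online exploration data, but with their reward replaced by a constant high "expert" reward, and the off-policy learner samples both sources indistinguishably.

```python
import random
from collections import deque

class MixedReplayBuffer:
    def __init__(self, capacity=100_000, demo_reward=1.0):
        self.buffer = deque(maxlen=capacity)
        self.demo_reward = demo_reward  # assumed constant high reward for expert data

    def add_exploration(self, state, action, reward, next_state, done):
        # Online RL exploration agent: store the environment reward as-is.
        self.buffer.append((state, action, reward, next_state, done))

    def add_demonstration(self, state, action, next_state, done):
        # Offline demonstration agent: the underlying expert policy is assumed
        # perfect, so its transitions receive the constant high reward.
        self.buffer.append((state, action, self.demo_reward, next_state, done))

    def sample(self, batch_size):
        # Both data sources are sampled indistinguishably by the off-policy learner.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```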
In the field of autonomous driving, fusing human knowledge into deep reinforcement learning (DRL) is usually based on human demonstrations recorded in a simulated environment, which limits generality and feasibility in real-world traffic. We propose a two-stage DRL method that learns from real-world human driving and achieves performance superior to a pure DRL agent. The DRL agent is trained within the CARLA framework with the Robot Operating System (ROS). For evaluation, we designed different real-world driving scenarios in which the proposed two-stage DRL agent can be compared with a pure DRL agent. After extracting "good" behaviour from human drivers, such as anticipation at signalized intersections, the agent becomes more efficient and drives more safely, which makes this autonomous agent better suited to human-robot interaction (HRI) traffic.
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents, methods to validate, test and robustify existing solutions in RL are discussed.
Learning robotic tasks in the real world remains highly challenging, and effective practical solutions are still to be found. Traditional approaches in this area are imitation learning and reinforcement learning, but both have limitations when applied to real robots. Combining reinforcement learning with pre-collected demonstrations is a promising approach that can help in learning control policies for robotic tasks. In this paper, we propose an algorithm that uses novel techniques to leverage offline expert data during both offline and online training, to obtain faster convergence and improved performance. The proposed algorithm (AWET) weights the critic loss with a novel agent-advantage weight to improve over the expert data. In addition, AWET makes use of an automatic early-termination technique to stop and discard policy rollouts that deviate from the expert trajectories, preventing drift away from the expert data. In an ablation study, AWET shows improved and promising performance compared to state-of-the-art baselines on four standard robotic tasks.
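A rough sketch of the two mechanisms the AWET abstract names. The exact weighting and termination rules are not given in the abstract, so the formulas below (an exponential advantage weight and a per-step distance threshold) are assumptions for illustration only.

```python
import numpy as np

def weighted_critic_loss(td_errors, advantages, is_expert, beta=1.0):
    # Up-weight expert transitions whose estimated advantage is high, so the critic
    # focuses on (and can improve over) the useful part of the expert data.
    weights = np.where(is_expert, np.exp(np.clip(beta * advantages, -5.0, 5.0)), 1.0)
    return np.mean(weights * td_errors ** 2)

def should_terminate_rollout(rollout_states, expert_states, threshold=1.0):
    # Early termination: stop and discard a rollout once it drifts too far from the
    # closest expert state (Euclidean distance used here as a stand-in metric).
    dists = [np.min(np.linalg.norm(expert_states - s, axis=1)) for s in rollout_states]
    return max(dists) > threshold
```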
Deep reinforcement learning (DRL) provides a new way to generate robot control policy. However, the process of training control policy requires lengthy exploration, resulting in a low sample efficiency of reinforcement learning (RL) in real-world tasks. Both imitation learning (IL) and learning from demonstrations (LfD) improve the training process by using expert demonstrations, but imperfect expert demonstrations can mislead policy improvement. Offline to Online reinforcement learning requires a lot of offline data to initialize the policy, and distribution shift can easily lead to performance degradation during online fine-tuning. To solve the above problems, we propose a learning from demonstrations method named A-SILfD, which treats expert demonstrations as the agent's successful experiences and uses experiences to constrain policy improvement. Furthermore, we prevent performance degradation due to large estimation errors in the Q-function by the ensemble Q-functions. Our experiments show that A-SILfD can significantly improve sample efficiency using a small number of different quality expert demonstrations. In four Mujoco continuous control tasks, A-SILfD can significantly outperform baseline methods after 150,000 steps of online training and is not misled by imperfect expert demonstrations during training.
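An illustrative sketch (not the authors' code) of the ensemble-Q idea mentioned above: taking the minimum over several independently trained Q estimates as the bootstrap target limits the impact of large per-network estimation errors during online fine-tuning.

```python
import numpy as np

def ensemble_target(q_values_per_net, rewards, dones, gamma=0.99):
    # q_values_per_net: array of shape (n_nets, batch) with Q(s', a') from each critic.
    conservative_q = np.min(q_values_per_net, axis=0)      # pessimistic aggregation
    return rewards + gamma * (1.0 - dones) * conservative_q

# Example with a 4-critic ensemble on a batch of 3 transitions.
q_next = np.array([[1.2, 0.8, 2.0], [1.0, 0.9, 1.7], [1.5, 0.7, 1.9], [1.1, 1.0, 2.2]])
print(ensemble_target(q_next, rewards=np.array([0.0, 1.0, 0.0]), dones=np.array([0, 0, 1])))
```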
In recent years, deep reinforcement learning (DRL) has achieved success in complex decision-making applications such as robotics, autonomous driving or video games. In the quest for more sample-efficient algorithms, a promising direction is to leverage as much external off-policy data as possible. One staple of this data-driven approach is to learn from expert demonstrations. In the past, multiple ideas have been proposed to make good use of demonstrations added to the replay buffer, such as pretraining on demonstrations only or minimizing additional cost functions. We present a new method, able to leverage both demonstrations and episodes collected online, in any sparse-reward environment and with any off-policy algorithm. Our method is based on a reward bonus given to demonstrations and successful episodes, encouraging expert imitation and self-imitation. First, we give a reward bonus to the transitions coming from demonstrations to encourage the agent to match the demonstrated behaviour. Then, upon collection of a successful episode, we relabel its transitions with the same bonus before adding them to the replay buffer, encouraging the agent to also match its previous successes. Our experiments focus on manipulation robotics, specifically on three tasks for a 6-DoF robotic arm in simulation. We show that our method based on reward relabeling improves the performance of the base algorithms (SAC and DDPG) on these tasks, even in the absence of demonstrations. Furthermore, incorporating two improvements from previous works into our method allows it to outperform all baselines.
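A sketch of the reward-relabeling scheme described above (function and variable names are assumed): demonstration transitions and, later, the agent's own successful episodes receive the same fixed bonus before entering the replay buffer.

```python
def relabel_with_bonus(episode, bonus):
    # episode: list of (state, action, reward, next_state, done) tuples.
    return [(s, a, r + bonus, s2, d) for (s, a, r, s2, d) in episode]

def store_episode(replay_buffer, episode, succeeded, bonus=1.0):
    # Successful episodes are treated like demonstrations (self-imitation);
    # failed episodes keep the original sparse reward.
    if succeeded:
        episode = relabel_with_bonus(episode, bonus)
    replay_buffer.extend(episode)

# Demonstrations are relabeled once, up front:
# replay_buffer.extend(relabel_with_bonus(demo_transitions, bonus=1.0))
```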
In recent years, deep reinforcement learning (DRL) has achieved success in complex decision-making applications such as robotics, autonomous driving or video games. Off-policy algorithms tend to be more sample-efficient than their on-policy counterparts, and can additionally benefit from any off-policy data stored in the replay buffer. Expert demonstrations are a popular source of such data: the agent is exposed to successful states and actions, which can accelerate the learning process and improve performance. In the past, multiple ideas have been proposed to make good use of the demonstrations in the buffer, such as pretraining on demonstrations only or minimizing additional cost functions. We carry out a study to evaluate several of these ideas in isolation, to understand which of them have the most significant impact. We also present a new method for sparse-reward tasks, based on a reward bonus given to demonstrations and successful episodes. First, we give a reward bonus to the transitions coming from demonstrations to encourage the agent to match the demonstrated behaviour. Then, upon collection of a successful episode, we relabel its transitions with the same bonus before adding them to the replay buffer, encouraging the agent to also match its previous successes. The base algorithm for our experiments is the popular Soft Actor-Critic (SAC), a state-of-the-art off-policy algorithm for continuous action spaces. Our experiments focus on manipulation robotics, specifically on a 3D reaching task with a robotic arm in simulation. We show that our method SACR2, based on reward relabeling, improves performance on this task, even in the absence of demonstrations.
In various control task domains, existing controllers provide a baseline level of performance that, though possibly suboptimal, should be maintained. Reinforcement learning (RL) algorithms that rely on extensive exploration of the state and action space can be used to optimize a control policy. However, fully exploratory RL algorithms may decrease performance below the baseline level during training. In this paper, we address the issue of online optimization of a control policy while minimizing regret with respect to the baseline policy's performance. We present a joint imitation-reinforcement learning framework, denoted JIRL. The learning process in JIRL assumes the availability of a baseline policy and is designed with two objectives in mind: (a) leveraging the baseline's online demonstrations to minimize regret with respect to the baseline policy during training, and (b) eventually surpassing the baseline performance. JIRL addresses these objectives by initially learning to imitate the baseline policy and gradually shifting control from the baseline to the RL agent. Experimental results show that JIRL effectively accomplishes these objectives in several continuous action-space domains. The results demonstrate that JIRL is comparable to a state-of-the-art algorithm in final performance while incurring significantly lower baseline regret during training in all of the presented domains. Moreover, the results show a reduction in baseline regret by a factor of up to 21 over a state-of-the-art baseline-regret-minimization approach.
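A minimal sketch of the control hand-off described above. The linear schedule below is an assumption for illustration; the abstract only states that control shifts gradually from the baseline to the RL agent.

```python
import random

def select_action(rl_agent_action, baseline_action, step, total_steps):
    # Early in training the baseline controller acts most of the time; the share of
    # steps handed to the RL agent grows as training progresses.
    p_rl = min(1.0, step / total_steps)
    return rl_agent_action if random.random() < p_rl else baseline_action
```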
How to make imitation learning generalize well when demonstrations are relatively limited has been a persistent problem in reinforcement learning (RL). Poor demonstrations lead to narrow and biased data distributions, non-Markovian human expert demonstrations make it difficult for the agent to learn, and over-reliance on sub-optimal trajectories can make it hard for the agent to improve its performance. To address these problems, we propose a new algorithm named TD3FG that transitions smoothly from learning from experts to learning from experience. Our algorithm achieves good performance in the Mujoco environment with limited and sub-optimal demonstrations. We use behavior cloning to train a network as a reference action generator and utilize it in terms of both the loss function and the exploration noise. This innovation helps the agent extract prior knowledge from demonstrations while reducing the adverse impact of their poor, non-Markovian properties. It achieves better performance than the BC + fine-tuning and DDPGfD approaches, especially when the demonstrations are relatively limited. We call our method TD3FG, meaning TD3 from a generator.
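A numpy sketch of how a behavior-cloned reference action generator could enter both the actor objective and the exploration noise, as the abstract describes. The decay schedule and mixing rule are assumptions, not the paper's exact formulation.

```python
import numpy as np

def actor_loss(q_values, policy_actions, generator_actions, bc_weight):
    # Standard deterministic actor term plus a penalty pulling the policy toward
    # the BC generator; bc_weight is annealed so the prior fades over training.
    bc_term = np.mean(np.sum((policy_actions - generator_actions) ** 2, axis=-1))
    return -np.mean(q_values) + bc_weight * bc_term

def exploration_action(policy_action, generator_action, step, total_steps, sigma=0.1):
    mix = max(0.0, 1.0 - step / total_steps)          # rely on the generator early on
    noisy = (1 - mix) * policy_action + mix * generator_action
    return noisy + np.random.normal(0.0, sigma, size=policy_action.shape)
```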
We present a novel learning from demonstrations (LfD) method, Deformable Manipulation from Demonstrations (DMfD), to solve deformable manipulation tasks using states or images as inputs, given expert demonstrations. Our method uses demonstrations in three different ways, and balances the trade-off between exploring the environment online and using guidance from the expert to explore high-dimensional spaces effectively. We test DMfD on a set of representative manipulation tasks for a 1-dimensional rope and a 2-dimensional cloth from the SoftGym suite of tasks, each with state and image observations. Our method exceeds baseline performance by up to 12.9% on state-based tasks and up to 33.44% on image-based tasks, with comparable or better stochasticity. Additionally, we create two challenging environments for folding a 2D cloth using image-based observations, and set a performance benchmark for them. We deploy DMfD on a real robot with minimal loss in normalized performance during real-world execution compared to simulation (around 6%). Source code is available at github.com/uscresl/dmfd
Poor sample efficiency continues to be the primary challenge for deployment of deep Reinforcement Learning (RL) algorithms for real-world applications, and in particular for visuo-motor control. Model-based RL has the potential to be highly sample efficient by concurrently learning a world model and using synthetic rollouts for planning and policy improvement. However, in practice, sample-efficient learning with model-based RL is bottlenecked by the exploration challenge. In this work, we find that leveraging just a handful of demonstrations can dramatically improve the sample-efficiency of model-based RL. Simply appending demonstrations to the interaction dataset, however, does not suffice. We identify key ingredients for leveraging demonstrations in model learning -- policy pretraining, targeted exploration, and oversampling of demonstration data -- which forms the three phases of our model-based RL framework. We empirically study three complex visuo-motor control domains and find that our method is 150%-250% more successful in completing sparse reward tasks compared to prior approaches in the low data regime (100K interaction steps, 5 demonstrations). Code and videos are available at: https://nicklashansen.github.io/modemrl
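A sketch of the "oversampling of demonstration data" ingredient mentioned above: each training batch draws a fixed fraction of its transitions from the small demonstration buffer. The 25% ratio below is an illustrative choice, not the paper's reported value.

```python
import random

def sample_mixed_batch(demo_buffer, interaction_buffer, batch_size, demo_fraction=0.25):
    n_demo = int(batch_size * demo_fraction)
    batch = random.choices(demo_buffer, k=n_demo)                    # oversample the few demos
    batch += random.choices(interaction_buffer, k=batch_size - n_demo)
    random.shuffle(batch)
    return batch
```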
Deep reinforcement learning algorithms have succeeded in several challenging domains. Classic online RL job schedulers can learn efficient scheduling strategies but often take thousands of timesteps to explore the environment and adapt from a randomly initialized DNN policy. Existing RL schedulers overlook the importance of learning from historical data and improving upon custom heuristic policies. Offline reinforcement learning presents the prospect of policy optimization from pre-recorded datasets without online environment interaction. Following the recent success of data-driven learning, we explore two RL methods: 1) Behaviour Cloning and 2) Offline RL, which aim to learn policies from logged data without interacting with the environment. These methods address the challenges concerning the cost of data collection and safety, particularly pertinent to real-world applications of RL. Although the data-driven RL methods generate good results, we show that the performance is highly dependent on the quality of the historical datasets. Finally, we demonstrate that by effectively incorporating prior expert demonstrations to pre-train the agent, we short-circuit the random exploration phase and learn a reasonable policy with online training. We utilize Offline RL as a \textbf{launchpad} to learn effective scheduling policies from prior experience collected using Oracle or heuristic policies. Such a framework is effective for pre-training from historical datasets and well suited to continuous improvement with online data collection.
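A hedged sketch of the "launchpad" idea: pre-train a discrete-action scheduling policy by behaviour cloning on logged (observation, action) pairs from a heuristic or Oracle scheduler, then hand the resulting policy to the online RL loop instead of a random initialization. Softmax regression stands in here for the real DNN policy; all names are illustrative.

```python
import numpy as np

def bc_pretrain(observations, actions, n_actions, lr=0.1, epochs=200):
    # observations: (n, d) features; actions: (n,) integer decisions from the logged scheduler.
    n, d = observations.shape
    W = np.zeros((d, n_actions))
    for _ in range(epochs):
        logits = observations @ W
        logits -= logits.max(axis=1, keepdims=True)
        probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        probs[np.arange(n), actions] -= 1.0            # cross-entropy gradient w.r.t. logits
        W -= lr * observations.T @ probs / n
    return W  # initial policy weights handed to the subsequent online training phase
```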
Imitation learning (IL) is a simple and powerful way to use high-quality human driving data, which can be collected at scale, to identify driving preferences and produce human-like behavior. However, policies based on imitation learning alone often fail to sufficiently account for safety and reliability concerns. In this paper, we show how imitation learning combined with reinforcement learning using simple rewards can substantially improve the safety and reliability of driving policies over those learned from imitation alone. In particular, we use a combination of imitation and reinforcement learning to train a policy on over 100k miles of urban driving data, and measure its effectiveness in test scenarios grouped by different levels of collision risk. To our knowledge, this is the first application of a combined imitation and reinforcement learning approach in autonomous driving that utilizes large amounts of real-world human driving data.
Imitation learning holds tremendous promise for efficiently learning policies for complex decision-making problems. Current state-of-the-art algorithms often use inverse reinforcement learning (IRL), where, given a set of expert demonstrations, the agent alternately infers a reward function and the associated optimal policy. However, such IRL approaches often require substantial online interaction for complex control problems. In this work, we present Regularized Optimal Transport (ROT), a new imitation learning algorithm that builds on recent advances in optimal-transport-based trajectory matching. Our key technical insight is that adaptively combining trajectory-matching rewards with behavior cloning can significantly accelerate imitation, even with only a few demonstrations. Our experiments on 20 visual control tasks across the DeepMind Control Suite, OpenAI Robotics, and Meta-World benchmarks show substantially faster imitation on average, reaching 90% of expert performance, compared to prior state-of-the-art methods. On real-world robotic manipulation, with just one demonstration and an hour of online training, ROT achieves an average success rate of 90.1% across 14 tasks.
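A hedged sketch of the "adaptive BC + trajectory matching" insight: the imitation reward comes from matching expert trajectories, while a behavior-cloning term is down-weighted wherever the current policy already looks at least as good as the expert action under the critic. The gating rule below is an assumption for illustration, not the paper's exact regularizer.

```python
import numpy as np

def adaptive_bc_weight(q_policy, q_expert, base_weight=1.0):
    # Keep the BC prior only on the fraction of states where the expert action
    # still appears better than the policy's own action.
    return base_weight * np.mean(q_expert > q_policy)

def combined_actor_loss(q_policy, q_expert, policy_actions, expert_actions):
    bc = np.mean(np.sum((policy_actions - expert_actions) ** 2, axis=-1))
    lam = adaptive_bc_weight(q_policy, q_expert)
    return -np.mean(q_policy) + lam * bc
```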
Autonomous driving in multi-agent dynamic traffic scenarios is challenging: the behaviors of road users are uncertain and hard to model explicitly, and the ego vehicle should apply complicated negotiation skills with them, such as yielding, merging and taking turns, to achieve safe and efficient driving in various settings. In such complex dynamic scenarios, traditional planning methods are mostly rule-based and usually lead to reactive or even overly conservative behaviors; as a result, they require tedious human effort to maintain workability. Recently, deep-learning-based methods have shown promising results with better generalization capability and less hand-engineering effort. However, they are either implemented with supervised imitation learning (IL), which suffers from dataset bias and distribution mismatch, or trained with deep reinforcement learning (DRL) but focused on one specific traffic scenario. In this work, we propose DQ-GAT to achieve scalable and proactive autonomous driving, where graph-attention-based networks are used to implicitly model interactions and deep Q-learning is employed to train the network end-to-end in an unsupervised manner. Extensive experiments in a high-fidelity driving simulator show that our method achieves higher success rates than previous learning-based methods and a traditional rule-based method, and better trades off safety and efficiency in both seen and unseen scenarios. Moreover, qualitative results on a trajectory dataset indicate that our learned policy can be transferred to the real world at real-time speed. Demonstration videos are available at https://caipeide.github.io/dq-gat/.
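A simplified sketch of attention-based interaction modelling for the ego vehicle: surrounding road users are aggregated with learned attention weights before a Q-network scores discrete manoeuvres. This is generic dot-product attention under assumed shapes, not necessarily the paper's exact graph-attention formulation.

```python
import numpy as np

def attend_to_neighbours(ego_feat, neighbour_feats, Wq, Wk, Wv):
    # ego_feat: (d,); neighbour_feats: (n, d); Wq/Wk/Wv: (d, h) projection matrices.
    q = ego_feat @ Wq                                  # (h,)
    k = neighbour_feats @ Wk                           # (n, h)
    v = neighbour_feats @ Wv                           # (n, h)
    scores = k @ q / np.sqrt(q.shape[0])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ v                                 # interaction-aware context vector
```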
Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD's performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.
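The abstract describes DQfD as temporal difference updates combined with a supervised loss on the demonstrator's actions. The sketch below shows a large-margin form of such a supervised term for a discrete-action Q-network; the margin value and variable names are illustrative.

```python
import numpy as np

def large_margin_loss(q_values, expert_actions, margin=0.8):
    # q_values: (batch, n_actions) Q(s, .) evaluated on demonstration states.
    # Penalize any action whose margin-augmented value exceeds that of the expert action.
    batch = np.arange(len(expert_actions))
    margins = np.full_like(q_values, margin)
    margins[batch, expert_actions] = 0.0                   # no margin for the expert action
    violation = np.max(q_values + margins, axis=1) - q_values[batch, expert_actions]
    return np.mean(violation)

# The total loss would combine this supervised term with the usual 1-step/n-step TD
# losses and a regularizer, with demonstration data kept in a prioritized replay buffer.
```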
Recent progress in state-only imitation learning extends its applicability to real-world settings by relieving the need to observe expert actions. However, existing solutions only learn to extract a state-to-action mapping policy from the data, without considering how the expert plans towards the goal. This hinders the ability to leverage demonstrations and limits the flexibility of the policy. In this paper, we introduce Decoupled Policy Optimization (DePO), which explicitly decouples the policy into a high-level state planner and an inverse dynamics model. With the embedded decoupled policy gradient and generative adversarial training, DePO enables knowledge transfer to different action spaces or state transition dynamics, and can generalize the planner to out-of-demonstration state regions. Our in-depth experimental analysis shows the effectiveness of DePO in learning a generalizable target state planner while achieving the best imitation performance. We further demonstrate the appeal of DePO for transferring across tasks with pre-training, and the potential for co-training agents with various skills.
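A minimal sketch of the decoupling described above: the policy first predicts a target next state, then an inverse dynamics model converts that prediction into an action. Both components are placeholder stubs for illustration, not the paper's models.

```python
import numpy as np

class DecoupledPolicy:
    def __init__(self, planner, inverse_dynamics):
        self.planner = planner                    # s_t -> predicted s_{t+1}
        self.inverse_dynamics = inverse_dynamics  # (s_t, s_{t+1}) -> a_t

    def act(self, state):
        target_state = self.planner(state)
        return self.inverse_dynamics(state, target_state)

# Swapping the inverse dynamics model is what allows transfer to a different action
# space while the state planner is reused.
policy = DecoupledPolicy(planner=lambda s: s + 0.1,
                         inverse_dynamics=lambda s, s2: (s2 - s))
print(policy.act(np.zeros(3)))
```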
End-to-end autonomous driving provides a feasible way to automatically maximize overall driving system performance by directly mapping the raw pixels from a front-facing camera to control signals. Recent advanced methods construct a latent world model to map the high dimensional observations into compact latent space. However, the latent states embedded by the world model proposed in previous works may contain a large amount of task-irrelevant information, resulting in low sampling efficiency and poor robustness to input perturbations. Meanwhile, the training data distribution is usually unbalanced, and the learned policy is hard to cope with the corner cases during the driving process. To solve the above challenges, we present a semantic masked recurrent world model (SEM2), which introduces a latent filter to extract key task-relevant features and reconstruct a semantic mask via the filtered features, and is trained with a multi-source data sampler, which aggregates common data and multiple corner case data in a single batch, to balance the data distribution. Extensive experiments on CARLA show that our method outperforms the state-of-the-art approaches in terms of sample efficiency and robustness to input perturbations.
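A hedged sketch of a multi-source batch sampler in the spirit described above: each batch aggregates ordinary driving data with several corner-case buffers so that rare situations are not drowned out. The per-source quotas and buffer names are assumptions.

```python
import random

def sample_balanced_batch(sources, quotas, batch_size):
    # sources: dict name -> list of transitions; quotas: dict name -> fraction of the batch.
    batch = []
    for name, frac in quotas.items():
        k = max(1, int(batch_size * frac))
        batch += random.choices(sources[name], k=k)
    random.shuffle(batch)
    return batch[:batch_size]

# Example: 60% common data, 40% split across two hypothetical corner-case buffers.
# batch = sample_balanced_batch(buffers, {"common": 0.6, "collision": 0.2, "near_miss": 0.2}, 256)
```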
This paper addresses the problem of inverse reinforcement learning (IRL) -- inferring the reward function of an agent from observations of its behaviour. IRL can provide a generalizable and compact representation for apprenticeship learning, and enable accurately inferring a human's preferences in order to assist them. However, effective IRL is challenging, because many reward functions can be compatible with an observed behaviour. We focus on how prior reinforcement learning (RL) experience can be leveraged to make learning these preferences faster and more efficient. We propose the IRL algorithm BASIS (behaviour acquisition through successor-feature intention inference from samples), which leverages multi-task RL pre-training and successor features to allow an agent to build a strong basis for intentions that spans the space of possible goals in a given domain. When exposed to just a few expert demonstrations optimizing a novel goal, the agent uses its basis to quickly and effectively infer the reward function. Our experiments show that our approach is highly effective at inferring and optimizing demonstrated reward functions, accurately inferring reward functions from fewer than 100 trajectories.
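An assumption-laden illustration (not the paper's algorithm) of why successor features speed up reward inference: with Q(s, a) ≈ psi(s, a) · w, inferring the expert's preferences reduces to fitting the low-dimensional weight vector w, e.g. by maximizing the likelihood of the demonstrated actions under a softmax policy over psi · w.

```python
import numpy as np

def fit_reward_weights(psi, expert_actions, d, lr=0.1, steps=200):
    # psi: (n_states, n_actions, d) pre-trained successor features, one row of states
    # per demonstrated decision; expert_actions: (n_states,) demonstrated action indices.
    w = np.zeros(d)
    n = len(expert_actions)
    for _ in range(steps):
        q = psi @ w                                   # (n_states, n_actions)
        q = q - q.max(axis=1, keepdims=True)
        p = np.exp(q) / np.exp(q).sum(axis=1, keepdims=True)
        # Gradient of the log-likelihood of the expert actions with respect to w.
        grad = (psi[np.arange(n), expert_actions] - (p[..., None] * psi).sum(axis=1)).mean(axis=0)
        w += lr * grad
    return w
```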
Offline reinforcement learning has shown great promise in leveraging large pre-collected datasets for policy learning, allowing agents to forgo often-costly online data collection. However, offline reinforcement learning from visual observations has so far been relatively under-explored, and there is a lack of understanding of where the remaining challenges lie. In this paper, we seek to establish simple baselines for continuous control in the visual domain. We show that simple modifications to two state-of-the-art vision-based online reinforcement learning algorithms, DreamerV2 and DrQ-v2, suffice to outperform prior work and establish competitive baselines. We rigorously evaluate these algorithms on existing offline datasets as well as a new testbed for offline reinforcement learning from visual observations that better represents the data distributions present in real-world offline RL problems, and we open-source our code and data to facilitate progress in this important domain. Finally, we present and analyze several key desiderata unique to offline RL from visual observations, including visual distractions and visually identifiable changes in dynamics.