In recent years, advances in machine learning have produced a large body of state-of-the-art work on intelligent robot policies. However, low sample efficiency and poor transferability hinder practical applications, especially in human-robot collaboration, where fast learning from few samples and high flexibility are essential. To overcome this obstacle, we refer to a "policy pool" containing pre-trained skills that can be easily accessed and reused. An agent is employed to manage the policy pool by unrolling the necessary skills in a flexible order, according to task-specific preferences. Such preferences can be interpreted automatically from one or a few human expert demonstrations. Under this hierarchical setting, our algorithm is able to acquire a sparse-reward, multi-stage skill in the Minecraft environment with only one demonstration, showing the potential to immediately grasp complex robot skills from a human coach. Moreover, the innate qualities of our algorithm also allow for lifelong learning, making it a versatile agent.
Learning rational behaviors in open-world games like Minecraft remains challenging for reinforcement learning (RL) research due to the compound challenge of partial observability, high-dimensional visual perception and delayed reward. To address this, we propose JueWu-MC, a sample-efficient hierarchical RL approach equipped with representation learning and imitation learning to deal with perception and exploration. Specifically, our approach includes two levels of hierarchy, where the high-level controller learns a policy to control over options and the low-level workers learn to solve each sub-task. To boost the learning of sub-tasks, we propose a combination of techniques including 1) action-aware representation learning, which captures the underlying relations between actions and representations, 2) discriminator-based self-imitation learning for efficient exploration, and 3) ensemble behavior cloning with consistency filtering for policy robustness. Extensive experiments show that JueWu-MC significantly improves sample efficiency and outperforms a set of baselines by a large margin. Notably, we won the championship of the NeurIPS MineRL 2021 research competition and achieved the highest performance score.
The MineRL competition is designed for the development of reinforcement learning and imitation learning algorithms that can efficiently leverage human demonstrations to drastically reduce the number of environment interactions needed to solve the complex \emph{ObtainDiamond} task with sparse rewards. To address the challenge, in this paper we present \textbf{SEIHAI}, a \textbf{S}ample-\textbf{e}ff\textbf{i}cient \textbf{H}ierarchical \textbf{AI}, that fully takes advantage of the human demonstrations and the task structure. Specifically, we split the task into several sequentially dependent subtasks and train a suitable agent for each subtask using reinforcement learning and imitation learning. We further design a scheduler that automatically selects different agents for different subtasks. SEIHAI took first place in both the preliminary and final rounds of the NeurIPS-2020 MineRL competition.
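As a rough illustration of the scheduler idea described above, the following Python sketch dispatches observations to per-subtask agents and advances to the next agent once a hand-written completion test fires. The subtask tests, the agent interface and the log/plank thresholds are invented placeholders, not SEIHAI's actual components.

    # Minimal sketch of a subtask scheduler that dispatches observations to
    # per-subtask agents. The subtask ordering and the completion tests are
    # illustrative placeholders, not the rules used by SEIHAI itself.
    class SubtaskScheduler:
        def __init__(self, agents, completion_tests):
            # agents: list of per-subtask policies, each exposing act(obs)
            # completion_tests: list of predicates obs -> bool, one per subtask
            assert len(agents) == len(completion_tests)
            self.agents = agents
            self.completion_tests = completion_tests
            self.stage = 0

        def act(self, obs):
            # Advance to the next subtask once the current one is judged complete.
            while (self.stage < len(self.agents) - 1
                   and self.completion_tests[self.stage](obs)):
                self.stage += 1
            return self.agents[self.stage].act(obs)

    # Hypothetical usage: two stubbed agents for a "collect logs" -> "craft planks" chain.
    class StubAgent:
        def act(self, obs):
            return 0  # placeholder action

    scheduler = SubtaskScheduler(
        agents=[StubAgent(), StubAgent()],
        completion_tests=[lambda obs: obs.get("logs", 0) >= 4,
                          lambda obs: obs.get("planks", 0) >= 4],
    )
    print(scheduler.act({"logs": 5, "planks": 0}))  # dispatched to the second agent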
Skill-based reinforcement learning (RL) has emerged as a promising strategy to leverage prior knowledge for accelerated robot learning. Skills are typically extracted from expert demonstrations and are embedded into a latent space from which they can be sampled as actions by a high-level RL agent. However, this skill space is expansive, and not all skills are relevant for a given robot state, making exploration difficult. Furthermore, the downstream RL agent is limited to learning structurally similar tasks to those used to construct the skill space. We firstly propose accelerating exploration in the skill space using state-conditioned generative models to directly bias the high-level agent towards only sampling skills relevant to a given state based on prior experience. Next, we propose a low-level residual policy for fine-grained skill adaptation enabling downstream RL agents to adapt to unseen task variations. Finally, we validate our approach across four challenging manipulation tasks that differ from those used to build the skill space, demonstrating our ability to learn across task variations while significantly accelerating exploration, outperforming prior works. Code and videos are available on our project website: https://krishanrana.github.io/reskill.
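A minimal PyTorch sketch of the two ideas in the abstract, assuming simple MLPs and arbitrary dimensions: a state-conditioned skill prior proposes latent skills for the high-level agent, and a low-level residual policy adds a bounded correction to the decoded action. This is an illustration, not the released ReSkill implementation.

    # Sketch (not the released ReSkill code): a state-conditioned skill prior that
    # proposes latent skills relevant to the current state, plus a residual policy
    # that refines the decoded low-level action. Dimensions are placeholders.
    import torch
    import torch.nn as nn

    STATE_DIM, SKILL_DIM, ACTION_DIM = 32, 8, 7

    skill_prior = nn.Sequential(      # mean of a Gaussian over latent skills, given the state
        nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, SKILL_DIM))
    skill_decoder = nn.Sequential(    # maps (state, skill) to a nominal action
        nn.Linear(STATE_DIM + SKILL_DIM, 128), nn.ReLU(), nn.Linear(128, ACTION_DIM))
    residual_policy = nn.Sequential(  # small correction on top of the decoded action
        nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(), nn.Linear(128, ACTION_DIM), nn.Tanh())

    def act(state, exploration_std=0.1, residual_scale=0.2):
        # Bias exploration towards skills the prior considers relevant to this state.
        z = skill_prior(state) + exploration_std * torch.randn(SKILL_DIM)
        nominal = skill_decoder(torch.cat([state, z]))
        # Fine-grained adaptation: bounded residual added to the decoded action.
        correction = residual_scale * residual_policy(torch.cat([state, nominal]))
        return nominal + correction

    print(act(torch.randn(STATE_DIM)).shape)  # torch.Size([7])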
Although deep reinforcement learning has recently been very successful at learning complex behaviors, it requires a large amount of data to learn a task, let alone adapt to new tasks. One of the fundamental reasons for this limitation lies in the trial-and-error nature of the reinforcement learning paradigm, in which the agent interacts with the task and learns only from the reward signal, which is implicit and insufficient for learning a task well. In contrast, humans mainly learn new skills via semantic representations or natural language instructions. However, using language instructions for robotic motion control to improve adaptability is a newly emerging and challenging topic. In this paper, we propose a meta-RL algorithm that addresses the challenge of learning skills from language instructions across multiple manipulation tasks. On the one hand, our algorithm uses the language instructions to shape its interpretation of the task; on the other hand, it still learns to solve the task through trial and error. We evaluate our algorithm on the robot manipulation benchmark Meta-World, and it significantly outperforms state-of-the-art methods in terms of training and testing success rates. The code is available at \url{https://tumi6robot.wixsite.com/million}.
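A minimal sketch of language conditioning for motion control, under the assumption that the instruction is embedded and concatenated with the state; the bag-of-words embedding, the dimensions and the token ids below are placeholders rather than the paper's architecture.

    # Sketch of a language-conditioned policy: embed the instruction and
    # concatenate it with the state before producing an action.
    import torch
    import torch.nn as nn

    VOCAB, EMB, STATE_DIM, ACT_DIM = 1000, 32, 39, 4
    word_emb = nn.EmbeddingBag(VOCAB, EMB)                     # averages word vectors
    policy = nn.Sequential(nn.Linear(STATE_DIM + EMB, 128), nn.ReLU(),
                           nn.Linear(128, ACT_DIM), nn.Tanh())

    def act(state, instruction_token_ids):
        lang = word_emb(instruction_token_ids)                 # (B, EMB)
        return policy(torch.cat([state, lang], dim=-1))

    # Hypothetical usage: one state and one tokenized instruction.
    state = torch.randn(1, STATE_DIM)
    tokens = torch.tensor([[5, 42, 7, 0, 0]])                  # e.g. "push the red block"
    print(act(state, tokens).shape)                            # torch.Size([1, 4])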
Skill chaining is a promising approach for synthesizing complex behaviors by sequentially combining previously learned skills. However, naive skill composition fails when a policy encounters a starting state it never saw during training. For successful skill chaining, prior approaches attempt to widen the policy's starting state distribution. Yet these approaches require ever larger state distributions to be covered as more policies are sequenced, and are therefore limited to short skill sequences. In this paper, we propose to chain multiple policies by regularizing their terminal state distributions in an adversarial learning framework. We evaluate our approach on two complex long-horizon manipulation tasks of furniture assembly. Our results show that our method is the first model-free reinforcement learning algorithm to solve these tasks, whereas prior skill chaining approaches fail. Code and videos are available at https://clvrai.com/skill-chaining
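The following sketch illustrates one way the terminal-state regularization could look in code: a discriminator is trained to separate a skill's terminal states from states the next skill can start from, and its output is added as a bonus to the first skill's reward. Network sizes, losses and coefficients are assumptions, not the authors' implementation.

    # Illustrative sketch of terminal-state regularization for skill chaining.
    # A discriminator D separates terminal states of skill k from initial states
    # that skill k+1 can handle; skill k then receives an extra reward for
    # terminating in states D judges to be valid initial states.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    STATE_DIM = 16
    D = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(D.parameters(), lr=3e-4)

    def discriminator_step(terminal_states, good_initial_states):
        # terminal_states: (B, STATE_DIM) collected from skill k rollouts
        # good_initial_states: (B, STATE_DIM) states skill k+1 succeeds from
        logits_term = D(terminal_states)
        logits_init = D(good_initial_states)
        loss = (F.binary_cross_entropy_with_logits(logits_term, torch.zeros_like(logits_term))
                + F.binary_cross_entropy_with_logits(logits_init, torch.ones_like(logits_init)))
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

    def chaining_bonus(terminal_state, coeff=1.0):
        # Added to skill k's reward at episode end: higher when the terminal state
        # looks like a state the next skill was trained to start from.
        with torch.no_grad():
            return coeff * torch.sigmoid(D(terminal_state)).item()

    # Hypothetical usage with random data standing in for real rollouts.
    discriminator_step(torch.randn(32, STATE_DIM), torch.randn(32, STATE_DIM))
    print(chaining_bonus(torch.randn(1, STATE_DIM)))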
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years, however, the field is gaining attention recently due to advances in computing and sensing as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations; without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction such as humanoid robots, self-driving vehicles, human computer interaction and computer games to name a few. However, specialized algorithms are needed to effectively and robustly learn models as learning by imitation poses its own set of challenges. In this paper, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games as these domains are the most popular in the literature and provide a wide array of problems and methodologies. We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications and highlight current and future research directions.
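Since the survey frames imitation learning as learning a mapping from observations to actions, a minimal behavior-cloning sketch of that framing may help; the synthetic linear "expert" below merely stands in for real demonstration data.

    # Minimal behavior-cloning sketch: fit a network to (observation, action)
    # pairs from demonstrations by supervised regression.
    import torch
    import torch.nn as nn

    OBS_DIM, ACT_DIM = 10, 3
    policy = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, ACT_DIM))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

    # Synthetic demonstrations: a fixed linear "expert" generates the labels.
    expert_weights = torch.randn(OBS_DIM, ACT_DIM)
    obs = torch.randn(512, OBS_DIM)
    act = obs @ expert_weights

    for step in range(200):
        loss = nn.functional.mse_loss(policy(obs), act)   # supervised regression loss
        optimizer.zero_grad(); loss.backward(); optimizer.step()

    print(f"final behavior-cloning loss: {loss.item():.4f}")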
The goal of meta-learning is to adapt to new tasks and goals as quickly as possible. Ideally, we would like approaches that generalize to new goals and tasks on the first attempt. Toward this end, we introduce Contextual Planning Networks (CPN). Tasks are represented as goal images and used to condition the approach. We evaluate CPN along with several other approaches adapted for zero-shot goal-directed meta-learning. We evaluate the methods across 24 distinct manipulation tasks from the Meta-World benchmark. We find that CPN outperforms several approaches and baselines on one task and is competitive with existing approaches on others. We demonstrate the approach on a physical platform with a Jenga task using a Kinova Jaco robotic arm.
Exploration in environments with sparse rewards has been a persistent problem in reinforcement learning (RL). Many tasks are natural to specify with a sparse reward, and manually shaping a reward function can result in suboptimal performance. However, finding a non-zero reward is exponentially more difficult with increasing task horizon or action dimensionality. This puts many real-world tasks out of practical reach of RL methods. In this work, we use demonstrations to overcome the exploration problem and successfully learn to perform long-horizon, multi-step robotics tasks with continuous control such as stacking blocks with a robot arm. Our method, which builds on top of Deep Deterministic Policy Gradients and Hindsight Experience Replay, provides an order of magnitude of speedup over RL on simulated robotics tasks. It is simple to implement and makes only the additional assumption that we can collect a small set of demonstrations. Furthermore, our method is able to solve tasks not solvable by either RL or behavior cloning alone, and often ends up outperforming the demonstrator policy.
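As a hedged illustration of how demonstration data is often folded into an off-policy actor update in this line of work, the sketch below adds a behavior-cloning term on demonstration pairs, masked by a Q-filter so cloning only applies where the critic rates the demonstrated action at least as highly as the policy's. Shapes and the filtering rule are assumptions, not necessarily the paper's exact objective.

    # Sketch of an actor loss mixing a DDPG-style term with behavior cloning on
    # demonstration samples, gated by a Q-filter. Networks and data are toy stand-ins.
    import torch
    import torch.nn as nn

    S, A = 8, 2
    actor = nn.Sequential(nn.Linear(S, 64), nn.ReLU(), nn.Linear(64, A), nn.Tanh())
    q_net = nn.Sequential(nn.Linear(S + A, 64), nn.ReLU(), nn.Linear(64, 1))
    critic = lambda s, a: q_net(torch.cat([s, a], dim=-1))

    def actor_loss(batch_states, demo_states, demo_actions, bc_weight=1.0):
        # Standard off-policy actor term: maximize Q(s, pi(s)).
        rl_loss = -critic(batch_states, actor(batch_states)).mean()
        # Behavior cloning on demo pairs, applied only where the demo action
        # is rated at least as good as the policy's action (Q-filter).
        pi_demo = actor(demo_states)
        with torch.no_grad():
            better = critic(demo_states, demo_actions) >= critic(demo_states, pi_demo)
        bc = (better.float() * (pi_demo - demo_actions).pow(2).sum(-1, keepdim=True)).mean()
        return rl_loss + bc_weight * bc

    loss = actor_loss(torch.randn(32, S), torch.randn(16, S), torch.rand(16, A) * 2 - 1)
    print(loss.item())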
We develop a new continual meta-learning method to address challenges in sequential multi-task learning. In this setting, the agent's goal is to quickly achieve high reward over any sequence of tasks. Prior meta-reinforcement learning algorithms have demonstrated promising results in accelerating the acquisition of new tasks. However, they require access to all tasks during training. Beyond simply transferring past experience to new tasks, our goal is to devise continual reinforcement learning algorithms that learn to learn, using their experience on previous tasks to learn new tasks more quickly. We introduce a new method, Continual Meta-Policy Search (CoMPS), that removes this limitation by meta-training incrementally, over each task in a sequence, without revisiting prior tasks. CoMPS continuously repeats two subroutines: learning a new task using RL, and using the experience from RL to perform completely offline meta-learning in preparation for subsequent task learning. We find that CoMPS outperforms prior continual learning and off-policy meta-reinforcement learning methods on several sequences of challenging continuous control tasks.
Deep Reinforcement Learning has been successfully applied to learn robotic control. However, the corresponding algorithms struggle when applied to problems where the agent is only rewarded after achieving a complex task. In this context, using demonstrations can significantly speed up the learning process, but demonstrations can be costly to acquire. In this paper, we propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration. To do so, our method learns a goal-conditioned policy to control a system between successive low-dimensional goals. This sequential goal-reaching approach raises a problem of compatibility between successive goals: we need to ensure that the state resulting from reaching a goal is compatible with the achievement of the following goals. To tackle this problem, we present a new algorithm called DCIL-II. We show that DCIL-II can solve with unprecedented sample efficiency some challenging simulated tasks such as humanoid locomotion and stand-up as well as fast running with a simulated Cassie robot. Our method leveraging sequentiality is a step towards the resolution of complex robotic tasks under minimal specification effort, a key feature for the next generation of autonomous robots.
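A schematic sketch of the sequential goal-reaching idea, under assumed components: a chain of low-dimensional goals is extracted from a single demonstration, and a goal-conditioned controller is driven through them, switching goals once the current one is reached. The toy point environment, greedy controller and success threshold are placeholders, not DCIL-II's actual components.

    # Toy illustration of chaining successive low-dimensional goals extracted
    # from a single demonstration.
    import numpy as np

    class PointEnv:
        # Toy 2-D point that moves by a bounded velocity command each step.
        def reset(self):
            self.pos = np.zeros(2)
            return self.pos.copy()
        def step(self, action):
            self.pos += np.clip(action, -0.1, 0.1)
            return self.pos.copy()

    def extract_goals(demo_states, stride=10):
        # Subsample the demonstration into a chain of low-dimensional goals.
        goals = list(demo_states[::stride])
        goals.append(demo_states[-1])
        return goals

    def greedy_policy(state, goal):
        # Placeholder for a learned goal-conditioned policy.
        return goal - state

    def run_chain(env, policy, goals, threshold=0.05, max_steps=500):
        state, idx = env.reset(), 0
        for _ in range(max_steps):
            state = env.step(policy(state, goals[idx]))
            if np.linalg.norm(state - goals[idx]) < threshold:  # goal reached, move on
                idx += 1
                if idx == len(goals):
                    return True
        return False

    demo = [np.array([0.02 * t, 0.01 * t]) for t in range(100)]  # fake demonstration
    print(run_chain(PointEnv(), greedy_policy, extract_goals(demo)))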
Many existing imitation learning datasets are collected from multiple demonstrators, each with different expertise in different parts of the environment. Yet standard imitation learning algorithms typically treat all demonstrators as homogeneous, regardless of their expertise, and thus absorb the weaknesses of any suboptimal demonstrator. In this work, we show that unsupervised learning over demonstrator expertise can lead to a consistent boost in the performance of imitation learning algorithms. We develop and optimize a joint model over the learned policy and the expertise levels of the demonstrators. This enables our model to learn from the optimal behavior and filter out the suboptimal behavior of each demonstrator. Our model learns a single policy that can outperform even the best demonstrator, and it can be used to estimate the expertise of any demonstrator at any state. We illustrate our findings on real robotic continuous control tasks and discrete environments such as MiniGrid and chess, outperforming competing methods in 21 out of 23 settings, with an average of 7% and up to 60% improvement in terms of final reward.
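A simplified sketch of jointly modelling a shared policy and demonstrator expertise: each demonstrator's actions are treated as a mixture of the learned policy and uniform noise, gated by a learned per-demonstrator weight. The paper's model is richer (for instance, expertise can vary with state); this global version only illustrates the joint optimization.

    # Joint maximum-likelihood over a shared policy and per-demonstrator
    # expertise weights (a deliberately simplified stand-in for the paper's model).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    OBS_DIM, N_ACTIONS, N_DEMONSTRATORS = 6, 4, 3
    policy = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
    expertise_logits = nn.Parameter(torch.zeros(N_DEMONSTRATORS))
    opt = torch.optim.Adam(list(policy.parameters()) + [expertise_logits], lr=1e-3)

    def nll(obs, actions, demonstrator_ids):
        rho = torch.sigmoid(expertise_logits)[demonstrator_ids].unsqueeze(1)  # (B, 1)
        pi = F.softmax(policy(obs), dim=-1)                                    # (B, A)
        mix = rho * pi + (1 - rho) / N_ACTIONS                                 # mixture likelihood
        return -torch.log(mix.gather(1, actions.unsqueeze(1)) + 1e-8).mean()

    # Hypothetical batch: observations, chosen actions, and who demonstrated them.
    loss = nll(torch.randn(32, OBS_DIM),
               torch.randint(0, N_ACTIONS, (32,)),
               torch.randint(0, N_DEMONSTRATORS, (32,)))
    loss.backward(); opt.step()
    print(loss.item(), torch.sigmoid(expertise_logits).detach())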
How to make imitation learning generalize better when demonstrations are relatively limited has been a persistent problem in reinforcement learning (RL). Poor demonstrations lead to narrow and biased data distributions, non-Markovian human expert demonstrations make it difficult for the agent to learn, and over-reliance on sub-optimal trajectories can make it hard for the agent to improve its performance. To address these problems, we propose a new algorithm named TD3fG that smoothly transitions from learning from experts to learning from experience. Our algorithm achieves good performance in MuJoCo environments with limited and sub-optimal demonstrations. We use behavior cloning to train a network as a reference action generator and utilize it in terms of both the loss function and the exploration noise. This innovation helps the agent extract prior knowledge from demonstrations while reducing the detrimental impact of their poor Markovian properties. Compared with the BC + fine-tuning and DDPGfD methods, it achieves better performance, especially when demonstrations are relatively limited. We call our method TD3fG, meaning TD3 from a generator.
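One plausible reading of "used in the loss function and the exploration noise" is sketched below: exploration blends the actor's action with the behavior-cloned generator's action, and an auxiliary actor-loss term pulls the actor towards the generator, both with a weight that decays over training. The mixing rule and decay schedule are assumptions for illustration only.

    # Sketch of using a behavior-cloned reference generator for guided
    # exploration and as a decaying auxiliary term in the actor loss.
    import torch
    import torch.nn as nn

    S, A = 12, 4
    actor = nn.Sequential(nn.Linear(S, 64), nn.ReLU(), nn.Linear(64, A), nn.Tanh())
    generator = nn.Sequential(nn.Linear(S, 64), nn.ReLU(), nn.Linear(64, A), nn.Tanh())
    # In practice the generator would be pre-trained with behavior cloning on the demonstrations.

    def guidance_weight(step, decay_steps=100_000):
        return max(0.0, 1.0 - step / decay_steps)

    def explore(state, step, noise_std=0.1):
        w = guidance_weight(step)
        with torch.no_grad():
            a = (1 - w) * actor(state) + w * generator(state)     # guided exploration
        return (a + noise_std * torch.randn_like(a)).clamp(-1, 1)

    def actor_loss(q_value, states, step):
        # q_value: critic estimate for actor(states); the guidance term pulls the
        # actor towards the reference actions early in training, then fades out.
        guide = (actor(states) - generator(states).detach()).pow(2).mean()
        return -q_value.mean() + guidance_weight(step) * guide

    print(explore(torch.randn(1, S), step=0).shape)  # torch.Size([1, 4])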
Practicing and honing skills forms a fundamental component of how humans learn, yet artificial agents are rarely trained to perform them explicitly. Instead, they are usually trained end-to-end, in the hope that useful skills will be learned implicitly in order to maximize the discounted return of some extrinsic reward function. In this paper, we investigate how skills can be incorporated into the training of reinforcement learning agents in complex environments with large state-action spaces and sparse rewards. To this end, we created SkillHack, a benchmark of tasks and associated skills based on the game of NetHack. We evaluate a number of baselines on this benchmark, as well as our own novel skill-based method, Hierarchical Kickstarting (HKS), which outperforms all other evaluated methods. Our experiments show that learning with prior knowledge of useful skills can significantly improve the performance of agents on complex problems. We ultimately argue that utilizing predefined skills provides a useful inductive bias for RL problems, especially those with large state-action spaces and sparse rewards.
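A sketch of a kickstarting-style auxiliary loss, assuming discrete actions: the learner is pulled towards a pre-trained skill policy's action distribution by a KL term whose weight is annealed away over training. How Hierarchical Kickstarting chooses which skill teaches at a given time is not reproduced here; the example below uses random logits as placeholders.

    # Kickstarting-style distillation term added to an RL loss.
    import torch
    import torch.nn.functional as F

    def kickstart_loss(student_logits, teacher_logits, weight):
        # KL(teacher || student) over discrete actions, averaged over the batch.
        teacher_probs = F.softmax(teacher_logits, dim=-1)
        return weight * F.kl_div(F.log_softmax(student_logits, dim=-1),
                                 teacher_probs, reduction="batchmean")

    def total_loss(rl_loss, student_logits, teacher_logits, step, anneal_steps=1_000_000):
        weight = max(0.0, 1.0 - step / anneal_steps)   # anneal the teacher's influence
        return rl_loss + kickstart_loss(student_logits, teacher_logits, weight)

    # Hypothetical usage with random logits for an 8-action environment.
    print(total_loss(torch.tensor(0.5), torch.randn(32, 8), torch.randn(32, 8), step=0))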
Efficient and effective exploration in continuous spaces is a central problem in applying reinforcement learning (RL) to autonomous driving. Skills learned from expert demonstrations or designed for specific tasks can benefit exploration, but they are usually costly to acquire, unbalanced or sub-optimal, or fail to transfer to diverse tasks. Human drivers, however, can adapt to various driving tasks by exploring the entire skill space efficiently and in a structured way, rather than a limited space of task-specific skills. Inspired by this observation, we propose an RL algorithm that explores all feasible motion skills instead of a limited set of task-specific and object-centric skills. Even without demonstrations, our method can still perform well across diverse tasks. First, we build a task-agnostic and ego-centric (TAEC) motion skill library from a pure motion perspective, which is diverse enough to be reused across different complex tasks. The motion skills are then encoded into a low-dimensional latent skill space, in which RL can explore efficiently. Validation in a variety of challenging driving scenarios demonstrates that our proposed method, TAEC-RL, significantly outperforms its counterparts in both learning efficiency and task performance.
Reinforcement learning algorithms require many samples when solving complex hierarchical tasks with sparse and delayed rewards. For such complex tasks, the recently proposed RUDDER uses reward redistribution to leverage steps in the Q-function that are associated with the completion of sub-tasks. However, often only a few episodes with high rewards are available as demonstrations, since current exploration strategies cannot discover them in reasonable time. In this work, we introduce Align-RUDDER, which utilizes a profile model for reward redistribution obtained from aligning multiple demonstration sequences. Align-RUDDER therefore employs reward redistribution effectively and thereby drastically improves learning from few demonstrations. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the Minecraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently. Code is available at https://github.com/ml-jku/align-rudder. YouTube: https://youtu.be/ho-_8zul-uy
Learning robotic tasks in the real world remains highly challenging, and effective practical solutions have yet to be found. Traditional approaches in this area are imitation learning and reinforcement learning, but both have limitations when applied to real robots. Combining reinforcement learning with pre-collected demonstrations is a promising approach that can help in learning control policies for robotic tasks. In this paper, we propose an algorithm that uses novel techniques to leverage offline expert data during offline and online training, achieving faster convergence and improved performance. The proposed algorithm (AWET) weights the critic losses with a novel agent-advantage weight to improve over the expert data. In addition, AWET uses an automatic early-termination technique to stop and discard policy rollouts that diverge from the expert trajectories, preventing drift away from the expert data. In an ablation study, AWET shows improved and promising performance compared to state-of-the-art baselines on four standard robotic tasks.
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents, methods to validate, test and robustify existing solutions in RL are discussed.
Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD's performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.
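The supervised component described above is a large-margin classification loss on demonstration transitions; a sketch is given below with placeholder margin and weights. In the full method this term is combined with 1-step and n-step TD losses plus L2 regularization, and demonstration samples are kept permanently in the prioritized replay buffer.

    # Sketch of a large-margin supervised term on demonstration transitions: the
    # Q-value of the demonstrator's action must exceed every other action's value
    # by a margin, which shapes the values of actions never taken by the demonstrator.
    import torch

    def large_margin_loss(q_values, expert_actions, margin=0.8):
        # q_values: (B, num_actions); expert_actions: (B,) integer indices.
        margins = torch.full_like(q_values, margin)
        margins.scatter_(1, expert_actions.unsqueeze(1), 0.0)   # no margin on the expert action
        q_expert = q_values.gather(1, expert_actions.unsqueeze(1)).squeeze(1)
        return ((q_values + margins).max(dim=1).values - q_expert).mean()

    # Hypothetical usage with random Q-values for a 6-action environment.
    q = torch.randn(32, 6, requires_grad=True)
    print(large_margin_loss(q, torch.randint(0, 6, (32,))))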
Complex sequential tasks in continuous-control settings often require an agent to successfully traverse a set of "narrow passages" in its state space. Solving such tasks with sparse rewards in a sample-efficient manner poses a challenge to modern reinforcement learning (RL) due to the long-horizon nature of the problem and the lack of sufficient positive signal during learning. Various tools have been applied to address this challenge. When available, large sets of demonstrations can guide agent exploration. Hindsight relabelling, on the other hand, requires no additional sources of information. However, existing strategies explore based on task-agnostic goal distributions, which can make solving long-horizon tasks impractical. In this work, we extend hindsight relabelling mechanisms to guide exploration along task-specific distributions implied by a small set of successful demonstrations. We evaluate the approach on four complex single- and dual-arm robotic manipulation tasks against strong, suitable baselines. Our method requires far fewer demonstrations to solve all tasks and achieves significantly higher overall performance as task complexity increases. Finally, we study the robustness of the proposed solution with respect to the quality of the input representations and the number of demonstrations.
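A speculative sketch of how hindsight goal selection might be biased towards a demonstration-implied distribution: with some probability, relabelled goals are drawn from demonstration states a few steps ahead of the demo state closest to the agent's current achieved goal; otherwise HER's usual "future" strategy is used. The matching and look-ahead rules are assumptions, not the paper's exact scheme.

    # Demonstration-guided hindsight goal relabelling (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    def relabel_goal(achieved_goals, t, demo_goals, p_demo=0.5, lookahead=10):
        if rng.random() < p_demo:
            # Find the demo state closest to the current achieved goal and aim a
            # few steps further along the demonstration.
            dists = np.linalg.norm(demo_goals - achieved_goals[t], axis=1)
            idx = min(int(np.argmin(dists)) + lookahead, len(demo_goals) - 1)
            return demo_goals[idx]
        # Otherwise fall back to HER's "future" strategy on the agent's own trajectory.
        future_t = rng.integers(t, len(achieved_goals))
        return achieved_goals[future_t]

    # Hypothetical usage with 2-D goals.
    traj = rng.normal(size=(50, 2))
    demo = np.linspace([0, 0], [1, 1], num=100)
    print(relabel_goal(traj, t=5, demo_goals=demo))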