Sequence models in reinforcement learning require task knowledge to estimate the task policy. This paper presents a hierarchical algorithm for learning a sequence model from demonstrations. A high-level mechanism guides the low-level controller by selecting the subgoals for the latter to reach. This sequence of subgoals replaces the return signal used by previous methods, improving overall performance, especially in tasks with longer episodes and sparser rewards. We validate our method on multiple tasks from the OpenAI Gym, D4RL, and Robomimic benchmarks. Our method outperforms the baselines in eight tasks of varied horizon and reward frequency without prior task knowledge, showing the advantages of the hierarchical approach to learning from demonstrations with sequence models.
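The abstract describes the mechanism only at a high level; below is a minimal sketch of subgoal conditioning, in which a high-level network proposes a subgoal and the low-level policy is conditioned on it instead of on a return signal. All module names, dimensions, and architectures are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of subgoal conditioning (assumed interfaces, not the authors' code).
import torch
import torch.nn as nn

class HighLevel(nn.Module):
    """Predicts a subgoal for the low-level controller from the current state."""
    def __init__(self, state_dim, goal_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                 nn.Linear(128, goal_dim))
    def forward(self, state):
        return self.net(state)

class LowLevel(nn.Module):
    """Acts conditioned on (state, subgoal) instead of (state, return-to-go)."""
    def __init__(self, state_dim, goal_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + goal_dim, 128), nn.ReLU(),
                                 nn.Linear(128, act_dim))
    def forward(self, state, subgoal):
        return self.net(torch.cat([state, subgoal], dim=-1))

state = torch.randn(1, 17)                    # e.g. a MuJoCo-style observation
subgoal = HighLevel(17, 3)(state)             # high level proposes where to go next
action = LowLevel(17, 3, 6)(state, subgoal)   # low level is conditioned on that subgoal
```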
Humans can leverage prior experience and learn novel tasks from a handful of demonstrations. In contrast to offline meta-reinforcement learning, which aims at fast adaptation through better algorithm design, we investigate the effect of architectural inductive bias on few-shot learning ability. We propose a Prompt-based Decision Transformer (Prompt-DT), which leverages the sequential modeling capability of the Transformer architecture and the prompt framework to achieve few-shot adaptation in offline RL. We design the trajectory prompt, which contains segments of few-shot demonstrations and encodes task-specific information to guide policy generation. Our experiments on five MuJoCo control benchmarks show that Prompt-DT is a strong few-shot learner without any extra finetuning on unseen target tasks. Prompt-DT outperforms its variants and strong meta offline RL baselines with a trajectory prompt containing only a few timesteps. Prompt-DT is also robust to changes in prompt length and can generalize to out-of-distribution (OOD) environments.
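As a rough illustration of how a trajectory prompt might be assembled, the sketch below prepends a short demonstration segment to the recent context before feeding the interleaved (return, state, action) tokens to a sequence model. The token ordering and segment lengths are assumptions, not Prompt-DT's exact format.

```python
# Sketch of assembling a trajectory prompt (shapes and ordering are assumptions).
import numpy as np

def build_prompt_input(demo, recent, k_prompt=5, k_context=20):
    """Prepend a short demonstration segment to the recent context.

    demo, recent: dicts with 'returns', 'states', 'actions' arrays of shape (T, d).
    Returns one interleaved (return, state, action) token list per timestep.
    """
    def interleave(traj, k):
        toks = []
        for t in range(min(k, len(traj["states"]))):
            toks += [traj["returns"][t], traj["states"][t], traj["actions"][t]]
        return toks

    return interleave(demo, k_prompt) + interleave(recent, k_context)

demo = {k: np.random.randn(10, 4) for k in ("returns", "states", "actions")}
recent = {k: np.random.randn(30, 4) for k in ("returns", "states", "actions")}
tokens = build_prompt_input(demo, recent)   # fed to the transformer as one sequence
print(len(tokens))                          # 3 * (5 + 20) token slots
```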
Learning long-horizon tasks such as navigation has presented difficult challenges for successfully applying reinforcement learning. However, from another perspective, under a known environment model, methods such as sampling-based planning can robustly find collision-free paths in environments without learning. In this work, we propose Control Transformer which models return-conditioned sequences from low-level policies guided by a sampling-based Probabilistic Roadmap (PRM) planner. Once trained, we demonstrate that our framework can solve long-horizon navigation tasks using only local information. We evaluate our approach on partially-observed maze navigation with MuJoCo robots, including Ant, Point, and Humanoid, and show that Control Transformer can successfully navigate large mazes and generalize to new, unknown environments. Additionally, we apply our method to a differential drive robot (Turtlebot3) and show zero-shot sim2real transfer under noisy observations.
Transformer, originally devised for natural language processing, has also achieved significant success in computer vision. Thanks to its strong expressive power, researchers are investigating ways to deploy transformers in reinforcement learning (RL), and transformer-based models have demonstrated their potential on representative RL benchmarks. In this paper, we collect and dissect recent advances in transforming RL by transformer (transformer-based RL, or TRL), in order to explore its development trajectory and future trend. We group existing developments into two categories: architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation and autonomous driving. For architecture enhancement, these methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework; they model agents and environments much more precisely than deep RL methods, but they are still limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". For trajectory optimization, these methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior-cloning framework; they are able to extract policies from static datasets and fully use the long-sequence modeling capability of the transformer. Given these advancements, extensions and challenges in TRL are reviewed and proposals for future directions are discussed. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
Reinforcement learning (RL) in long-horizon, sparse-reward tasks is notoriously difficult and requires a large number of training steps. A standard solution to speed up the process is to leverage additional reward signals, shaping them to better guide the learning process. In the context of language-conditioned RL, the abstraction and generalization properties of the language input offer opportunities to shape rewards more efficiently. In this paper, we leverage this idea and propose an automated reward-shaping method in which the agent extracts auxiliary objectives from the general language goal. These auxiliary objectives use a question generation (QG) and question answering (QA) system: they consist of questions that lead the agent to try to reconstruct partial information about the global goal using its own trajectory. When it succeeds, it receives an intrinsic reward proportional to its confidence in its answer. This incentivizes the agent to generate trajectories that unambiguously explain various aspects of the general language goal. Our experimental study shows that this approach, which requires no engineering effort to design the auxiliary objectives, improves sample efficiency by effectively guiding exploration.
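The reward-shaping rule described above can be pictured as follows: an intrinsic reward is granted only when a QA system recovers part of the goal from the agent's own trajectory, scaled by the model's confidence. The `qa_model` interface and the `DummyQA` stand-in below are hypothetical placeholders, not the paper's components.

```python
# Sketch of confidence-weighted intrinsic reward from a QA model
# (the QA interface is a hypothetical placeholder).
def intrinsic_reward(qa_model, question, true_answer, trajectory, scale=1.0):
    """Reward the agent when its own trajectory lets the QA model recover
    part of the language goal; weight by the model's confidence."""
    predicted, confidence = qa_model.answer(question, context=trajectory)
    if predicted == true_answer:
        return scale * confidence   # proportional to confidence in the answer
    return 0.0

class DummyQA:
    """Stand-in for a trained QA system (hypothetical interface)."""
    def answer(self, question, context):
        return "red", 0.8           # (predicted answer, confidence)

print(intrinsic_reward(DummyQA(), "What colour is the target block?", "red", trajectory=[]))
```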
World models in model-based reinforcement learning usually suffer from unrealistic long-horizon prediction because errors compound as prediction errors accumulate. Recent work on graph-structured world models improves long-horizon reasoning ability by building graphs to represent the environment, but these models are designed for the goal-conditioned setting and cannot guide the agent to maximize episode returns in a traditional reinforcement learning setting without an externally given target state. To overcome this limitation, we design a graph-structured world model for offline reinforcement learning by building a directed-graph-based Markov decision process (MDP), with a reward assigned to each directed edge, as an abstraction of the original continuous environment. Because our world model has a small, finite state/action space compared to the original environment, value iteration can easily be applied on it to estimate state values on the graph and find the best future. Unlike previous graph-structured world models that require an externally provided target, our world model, called the Value Memory Graph (VMG), can by itself provide desired targets with high values. VMG can be used to guide low-level goal-conditioned policies that are trained via supervised learning to maximize episode returns. Experiments on the D4RL benchmark show that VMG can outperform state-of-the-art methods in several tasks where long-horizon reasoning ability is crucial. The code will be made publicly available.
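Because the graph MDP has a small, finite state/action space, value iteration on it is straightforward. The toy example below runs value iteration on a hand-made directed graph with per-edge rewards; VMG builds such a graph from the offline dataset, so treat this purely as an illustration of the graph-side computation.

```python
# Sketch: value iteration on a small directed-graph MDP with per-edge rewards.
edges = {                     # node -> list of (next_node, reward)
    0: [(1, 0.0), (2, 1.0)],
    1: [(3, 5.0)],
    2: [(3, 0.5)],
    3: [],                    # terminal node
}

def value_iteration(edges, gamma=0.95, iters=100):
    V = {n: 0.0 for n in edges}
    for _ in range(iters):
        for n, outgoing in edges.items():
            if outgoing:
                V[n] = max(r + gamma * V[m] for m, r in outgoing)
    return V

V = value_iteration(edges)
# The highest-value reachable node can then serve as the target for a
# low-level goal-conditioned policy.
print(V)
```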
Everyday tasks with long horizons that comprise a sequence of implicit subtasks still pose a major challenge in offline robot control. While many prior methods aim to address this setting with variants of imitation learning and offline reinforcement learning, the learned behaviors are typically narrow and often struggle to reach configurable long-horizon goals. Because the two paradigms have complementary strengths and weaknesses, we propose a novel hierarchical approach that combines the strengths of both to learn task-agnostic long-horizon policies from high-dimensional camera observations. Concretely, we combine a low-level policy that learns latent skills via imitation learning with a high-level policy learned via offline reinforcement learning over the latent behavior priors. Experiments on various simulated and real robot control tasks show that our formulation enables previously unseen combinations of skills to reach temporally extended goals by "stitching" latent skills together through goal chaining, with an order-of-magnitude improvement in performance over state-of-the-art baselines. We also learn a single multi-task visuomotor policy for 25 distinct manipulation tasks in the real world that outperforms both imitation learning and offline reinforcement learning techniques.
While large-scale sequence modeling from offline data has led to impressive performance gains in natural language and image generation, directly translating such ideas to robotics has been challenging. One critical reason for this is that uncurated robot demonstration data, i.e. play data, collected from non-expert human demonstrators are often noisy, diverse, and distributionally multi-modal. This makes extracting useful, task-centric behaviors from such data a difficult generative modeling problem. In this work, we present Conditional Behavior Transformers (C-BeT), a method that combines the multi-modal generation ability of Behavior Transformer with future-conditioned goal specification. On a suite of simulated benchmark tasks, we find that C-BeT improves upon prior state-of-the-art work in learning from play data by an average of 45.7%. Further, we demonstrate for the first time that useful task-centric behaviors can be learned on a real-world robot purely from play data without any task labels or reward information. Robot videos are best viewed on our project website: https://play-to-policy.github.io
Reinforcement learning (RL) typically involves estimating stationary policies or single-step models, leveraging the Markov property to solve the problem. However, we can also view RL as a generic sequence modeling problem, where the goal is to produce a sequence of actions that leads to a sequence of high rewards. Viewed this way, it is tempting to consider whether high-capacity sequence prediction models that work well in other domains, such as natural language processing, can also provide effective solutions to the RL problem. To this end, we explore how RL can be tackled with the tools of sequence modeling, using a Transformer architecture to model distributions over trajectories and repurposing beam search as a planning algorithm. Framing RL as a sequence modeling problem simplifies a range of design decisions, allowing us to dispense with many of the components common in offline RL algorithms. We demonstrate the flexibility of this approach across long-horizon dynamics prediction, imitation learning, goal-conditioned RL, and offline RL. Further, we show that this approach can be combined with existing model-free algorithms to yield a state-of-the-art planner in sparse-reward, long-horizon tasks.
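As an illustration of "beam search repurposed as a planning algorithm", here is a plain likelihood-based beam search over discretized action tokens with a random stand-in for the trained sequence model. The paper's planner also folds predicted rewards into the beam scores; this simplified sketch ranks candidates by sequence likelihood only.

```python
# Sketch of beam search over action tokens with a sequence model
# (the scoring model is a random stand-in, not a trained transformer).
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 16        # number of discretised action tokens

def log_probs(prefix):
    """Stand-in for the sequence model: log-probabilities of the next token."""
    logits = rng.normal(size=VOCAB)
    return logits - np.log(np.exp(logits).sum())

def beam_search(horizon=5, beam_width=3):
    beams = [([], 0.0)]                        # (token sequence, cumulative log-prob)
    for _ in range(horizon):
        candidates = []
        for seq, score in beams:
            lp = log_probs(seq)
            for tok in range(VOCAB):
                candidates.append((seq + [tok], score + lp[tok]))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]        # keep the best partial plans
    return beams[0][0]                         # highest-scoring action sequence

print(beam_search())
```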
Complex sequential tasks in continuous-control settings often require the agent to successfully traverse a set of "narrow passages" in its state space. Solving such tasks with sparse rewards in a sample-efficient manner poses a challenge to modern reinforcement learning (RL), due to the associated long horizon of the problem and the lack of sufficient positive signal during learning. Various tools have been applied to address this challenge. When available, large sets of demonstrations can guide agent exploration. Hindsight relabelling, meanwhile, requires no additional source of information. However, existing strategies explore based on task-agnostic goal distributions, which can render the solution of long-horizon tasks impractical. In this work, we extend hindsight relabelling mechanisms to guide exploration along task-specific distributions implied by a small set of successful demonstrations. We evaluate the approach on four complex single- and dual-arm robotic manipulation tasks against strong, suitable baselines. The method requires far fewer demonstrations to solve all tasks and achieves significantly higher overall performance as task complexity increases. Finally, we investigate the robustness of the proposed solution with respect to the quality of the input representations and the number of demonstrations.
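A minimal sketch of the core idea, relabelling goals using states drawn from successful demonstrations rather than from a task-agnostic distribution, is shown below. The transition layout, `achieved_fn`, and `reward_fn` are assumed interfaces, not the paper's code.

```python
# Sketch of demonstration-guided goal relabelling (interfaces are assumptions).
import random

def relabel(transition, demos, achieved_fn, reward_fn):
    """Replace the goal of a stored transition with a state drawn from the
    demonstrations, so exploration is guided toward task-relevant regions."""
    demo = random.choice(demos)                  # one successful demonstration
    new_goal = achieved_fn(random.choice(demo))  # goal achieved at some demo state
    s, a, _, s_next, _ = transition
    r = reward_fn(achieved_fn(s_next), new_goal)
    return (s, a, new_goal, s_next, r)

# Toy usage with dummy states and reward:
demos = [[{"pos": (1, 1)}, {"pos": (2, 2)}]]
achieved = lambda s: s["pos"]
reward = lambda achieved_goal, goal: 0.0 if achieved_goal == goal else -1.0
t = ({"pos": (0, 0)}, 0, (9, 9), {"pos": (1, 1)}, -1.0)
print(relabel(t, demos, achieved, reward))
```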
Recent work has shown that tackling offline reinforcement learning (RL) with a conditional policy, by converting the RL task into a supervised learning task, can produce promising results. Decision Transformer (DT) combines the conditional-policy approach with a transformer architecture and shows competitive performance on multiple benchmarks. However, DT lacks stitching ability, one of the critical capabilities for offline RL: learning the optimal policy from sub-optimal trajectories. The issue becomes significant when the offline dataset contains only sub-optimal trajectories. Conventional RL methods based on dynamic programming (such as Q-learning), on the other hand, do not suffer from the same problem; however, they exhibit unstable learning behaviour, especially when function approximation is employed in an off-policy learning setting. In this paper, we propose the Q-learning Decision Transformer (QDT), which addresses the shortcomings of DT by leveraging the benefits of dynamic programming (Q-learning). QDT uses dynamic programming (Q-learning) results to relabel the returns in the training data and then trains the DT with the relabelled data. Our approach effectively exploits the benefits of the two methods and compensates for each other's shortcomings to achieve better performance. We demonstrate the issue with DT and the advantage of QDT in simple environments. We also evaluate QDT on the more complex D4RL benchmark, showing good performance gains.
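One way to picture the relabelling step is sketched below: returns-to-go are recomputed backwards and never allowed to fall below what a learned value function (obtained from Q-learning) deems achievable. This is a simplified rule for illustration; the exact QDT relabelling procedure may differ.

```python
# Sketch of relabelling returns-to-go with a learned value estimate
# (a simplified rule; not necessarily the paper's exact procedure).
def relabel_returns(rewards, values, gamma=1.0):
    """rewards: list of r_t; values: list of V(s_t) from Q-learning.
    Walk backwards and never let the labelled return-to-go fall below
    what the value function says is achievable from that state."""
    rtg = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = max(running, values[t])
        running = rtg[t]          # propagate the (possibly raised) estimate
    return rtg

print(relabel_returns([0.0, 0.0, 1.0], values=[2.0, 0.5, 0.0]))
```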
Learning policies that effectively utilize language instructions in complex, multi-task environments is an important problem in sequential decision-making. While it is possible to condition on the entire language instruction directly, such an approach could suffer from generalization issues. In our work, we propose \emph{Learning Interpretable Skill Abstractions (LISA)}, a hierarchical imitation learning framework that can learn diverse, interpretable primitive behaviors or skills from language-conditioned demonstrations to better generalize to unseen instructions. LISA uses vector quantization to learn discrete skill codes that are highly correlated with language instructions and the behavior of the learned policy. In navigation and robotic manipulation environments, LISA outperforms a strong non-hierarchical Decision Transformer baseline in the low data regime and is able to compose learned skills to solve tasks containing unseen long-range instructions. Our method demonstrates a more natural way to condition on language in sequential decision-making problems and achieve interpretable and controllable behavior with the learned skills.
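The skill-code mechanism rests on standard vector quantization: a continuous skill embedding is snapped to its nearest codebook entry, and the policy is conditioned on that discrete code. The sketch below shows only this lookup step, with an arbitrary codebook size and dimension; it is not LISA's implementation.

```python
# Sketch of a vector-quantised skill lookup (codebook size and dims are arbitrary).
import torch

codebook = torch.randn(16, 8)      # 16 discrete skill codes of dimension 8

def quantise(z):
    """Map a continuous skill embedding to its nearest codebook entry."""
    dists = torch.cdist(z.unsqueeze(0), codebook).squeeze(0)
    idx = dists.argmin()
    return idx.item(), codebook[idx]

skill_embedding = torch.randn(8)    # produced from language + observations
code_id, skill_code = quantise(skill_embedding)
# The policy is then conditioned on `skill_code` rather than the raw instruction.
print(code_id)
```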
Recent work has shown that supervised learning alone, without temporal-difference (TD) learning, can be remarkably effective for offline RL. When does this hold true, and which algorithmic components are necessary? Through extensive experiments, we boil supervised learning for offline RL down to its essential elements. In every environment suite we consider, simply maximizing likelihood with a two-layer feedforward MLP is competitive with substantially more complex methods based on TD learning or sequence modeling with Transformers. Carefully choosing the model capacity (e.g., via regularization or architecture) and choosing which information to condition on (e.g., goals or rewards) are critical for performance. These insights serve as a practical guide for practitioners of reinforcement learning via supervised learning (which we coin "RvS learning"). They also probe the limits of existing RvS methods, which are comparatively weak on random data, and suggest a number of open problems.
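The recipe the paper distills is compact enough to sketch directly: a small feedforward MLP takes an observation concatenated with a conditioning variable (a goal or an outcome such as reward-to-go) and is trained by maximum likelihood. The hyper-parameters below are placeholders, and a Gaussian likelihood is approximated with a squared error for brevity.

```python
# Sketch of RvS-style conditioned behaviour cloning with a small MLP.
import torch
import torch.nn as nn

class RvSPolicy(nn.Module):
    def __init__(self, obs_dim, cond_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )
    def forward(self, obs, cond):
        # `cond` is either a goal or an outcome such as reward-to-go.
        return self.net(torch.cat([obs, cond], dim=-1))

policy = RvSPolicy(obs_dim=11, cond_dim=1, act_dim=3)
obs, rtg, act = torch.randn(64, 11), torch.randn(64, 1), torch.randn(64, 3)
loss = ((policy(obs, rtg) - act) ** 2).mean()   # MSE stands in for Gaussian likelihood
loss.backward()
```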
Deep Reinforcement Learning has been successfully applied to learn robotic control. However, the corresponding algorithms struggle when applied to problems where the agent is only rewarded after achieving a complex task. In this context, using demonstrations can significantly speed up the learning process, but demonstrations can be costly to acquire. In this paper, we propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration. To do so, our method learns a goal-conditioned policy to control a system between successive low-dimensional goals. This sequential goal-reaching approach raises a problem of compatibility between successive goals: we need to ensure that the state resulting from reaching a goal is compatible with the achievement of the following goals. To tackle this problem, we present a new algorithm called DCIL-II. We show that DCIL-II can solve with unprecedented sample efficiency some challenging simulated tasks such as humanoid locomotion and stand-up as well as fast running with a simulated Cassie robot. Our method leveraging sequentiality is a step towards the resolution of complex robotic tasks under minimal specification effort, a key feature for the next generation of autonomous robots.
Recently, methods such as Decision Transformer that reduce reinforcement learning to a prediction task and solve it via supervised learning (RvS) have become popular due to their simplicity, robustness to hyperparameters, and strong overall performance on offline RL tasks. However, simply conditioning a probabilistic model on a desired return and taking the predicted action can fail dramatically in stochastic environments since trajectories that result in a return may have only achieved that return due to luck. In this work, we describe the limitations of RvS approaches in stochastic environments and propose a solution. Rather than simply conditioning on the return of a single trajectory as is standard practice, our proposed method, ESPER, learns to cluster trajectories and conditions on average cluster returns, which are independent from environment stochasticity. Doing so allows ESPER to achieve strong alignment between target return and expected performance in real environments. We demonstrate this in several challenging stochastic offline-RL tasks including the challenging puzzle game 2048, and Connect Four playing against a stochastic opponent. In all tested domains, ESPER achieves significantly better alignment between the target return and achieved return than simply conditioning on returns. ESPER also achieves higher maximum performance than even the value-based baselines.
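A rough sketch of the conditioning target: trajectories are clustered and each trajectory's return label is replaced by its cluster's average return, which is less sensitive to luck. ESPER learns the clustering end-to-end; the plain k-means over hand-made features below is only meant to illustrate how the conditioning target is computed.

```python
# Sketch of conditioning on average cluster returns instead of raw returns
# (plain k-means on toy features; ESPER learns the clustering).
import numpy as np
from sklearn.cluster import KMeans

def cluster_average_returns(traj_features, traj_returns, n_clusters=4, seed=0):
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(traj_features)
    targets = np.empty_like(traj_returns, dtype=float)
    for c in range(n_clusters):
        targets[labels == c] = traj_returns[labels == c].mean()
    return targets   # per-trajectory conditioning value, robust to lucky outcomes

feats = np.random.randn(100, 5)
rets = np.random.randn(100)
print(cluster_average_returns(feats, rets)[:5])
```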
Hierarchical Reinforcement Learning (HRL) algorithms have been demonstrated to perform well on high-dimensional decision making and robotic control tasks. However, because they solely optimize for rewards, the agent tends to search the same space redundantly. This redundancy slows learning and reduces the achieved reward. In this work, we present an off-policy HRL algorithm that maximizes entropy for efficient exploration. The algorithm learns a temporally abstracted low-level policy and is able to explore broadly through the addition of entropy to the high level. The novelty of this work is the theoretical motivation for adding entropy to the RL objective in the HRL setting. We empirically show that entropy can be added to both levels if the Kullback-Leibler (KL) divergence between consecutive updates of the low-level policy is sufficiently small. We performed an ablative study to analyze the effects of entropy on the hierarchy, in which adding entropy to the high level emerged as the most desirable configuration. Furthermore, a higher temperature in the low level leads to Q-value overestimation and increases the stochasticity of the environment that the high level operates on, making learning more challenging. Our method, SHIRO, surpasses state-of-the-art performance on a range of simulated robotic control benchmark tasks and requires minimal tuning.
Reinforcement learning (RL) can be viewed as a sequence modeling task: given a sequence of past state-action-reward experiences, an agent predicts a sequence of next actions. In this work, we propose the State-Action-Reward Transformer (StARformer) for visual RL, which explicitly models short-term state-action-reward representations (StAR-representations), essentially introducing a Markovian-like inductive bias to improve long-term modeling. Our approach first extracts StAR-representations by self-attending over image state patches, action, and reward tokens within a short temporal window. These are then combined with pure image state representations, extracted as convolutional features, to perform self-attention over the whole sequence. Our experiments show that StARformer outperforms state-of-the-art Transformer-based methods on image-based Atari and DeepMind Control Suite benchmarks, in both offline RL and imitation learning settings. StARformer is also more compliant with longer sequences of inputs. Our code is available at https://github.com/elicassion/starformer.
Randomly masking and predicting word tokens has been a successful approach in pre-training language models for a variety of downstream tasks. In this work, we observe that the same idea also applies naturally to sequential decision making, where many well-studied tasks like behavior cloning, offline RL, inverse dynamics, and waypoint conditioning correspond to different sequence maskings over a sequence of states, actions, and returns. We introduce the FlexiBiT framework, which provides a unified way to specify models which can be trained on many different sequential decision making tasks. We show that a single FlexiBiT model is simultaneously capable of carrying out many tasks with performance similar to or better than specialized models. Additionally, we show that performance can be further improved by fine-tuning our general model on specific tasks of interest.
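The observation that masking patterns correspond to different decision-making tasks can be sketched as follows: a boolean mask over the (state, action) token grid selects which tokens must be predicted, and different patterns recover behavior cloning, inverse dynamics, forward dynamics, or BERT-style random masking. The specific mask conventions below are assumptions, not FlexiBiT's exact schemes.

```python
# Sketch of task-specific maskings over a (state, action) token grid.
import numpy as np

def make_mask(task, T):
    """Return a boolean mask of shape (T, 2); True means the token is hidden
    and must be predicted. Column 0 = state token, column 1 = action token."""
    mask = np.zeros((T, 2), dtype=bool)
    if task == "behaviour_cloning":
        mask[:, 1] = True                  # predict all actions from states
    elif task == "inverse_dynamics":
        mask[:-1, 1] = True                # predict a_t from s_t and s_{t+1}
    elif task == "forward_dynamics":
        mask[1:, 0] = True                 # predict s_{t+1} from s_t and a_t
    elif task == "random":
        mask = np.random.rand(T, 2) < 0.5  # BERT-style random masking
    return mask

print(make_mask("behaviour_cloning", T=4))
```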
In the context of imitation learning, providing expert trajectories is usually costly and time-consuming. The goal must therefore be to create algorithms that require as little expert data as possible. In this paper, we propose an algorithm that imitates the expert's high-level strategy rather than imitating the expert merely at the action level, which we hypothesize requires less expert data and makes training more stable. As a prior, we assume that the high-level strategy is to reach an unknown target state area, which we hypothesize is a valid prior for many domains in reinforcement learning. The target state area is unknown, but since the expert has demonstrated how to reach it, the agent tries to reach states similar to the expert's. Building on the idea of temporal coherence, our algorithm trains a neural network to predict whether two states are similar, in the sense that they may occur close in time. During inference, the agent compares its current state for similarity with the expert states from a case base. We show that our approach can still learn a near-optimal policy in settings with very little expert data, where algorithms that try to imitate the expert at the action level can no longer do so.
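A minimal sketch of the temporal-coherence idea: pairs of states that occur within k steps of each other are treated as positives, distant pairs as negatives, and a small network is trained to score similarity. The architecture, the choice of k, and the pair-construction scheme are illustrative assumptions, not the paper's.

```python
# Sketch of a temporal-coherence similarity model: score whether two states
# may occur close together in time (architecture and k are arbitrary choices).
import torch
import torch.nn as nn

class SimilarityNet(nn.Module):
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, s1, s2):
        return torch.sigmoid(self.net(torch.cat([s1, s2], dim=-1)))

def make_pairs(trajectory, k=5):
    """Positive pairs are at most k steps apart; negatives are farther apart."""
    pos, neg = [], []
    T = len(trajectory)
    for i in range(T):
        for j in range(i + 1, T):
            (pos if j - i <= k else neg).append((trajectory[i], trajectory[j]))
    return pos, neg

net = SimilarityNet(state_dim=4)
print(net(torch.randn(4), torch.randn(4)))
# At inference, the agent compares its state to expert states in the case base
# and moves toward the most similar one.
```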
Behavioural cloning (BC) is a commonly used imitation learning method to infer a sequential decision-making policy from expert demonstrations. However, when the quality of the data is not optimal, the resulting behavioural policy also performs sub-optimally once deployed. Recently, there has been a surge in offline reinforcement learning methods that hold the promise to extract high-quality policies from sub-optimal historical data. A common approach is to perform regularisation during training, encouraging updates during policy evaluation and/or policy improvement to stay close to the underlying data. In this work, we investigate whether an offline approach to improving the quality of the existing data can lead to improved behavioural policies without any changes in the BC algorithm. The proposed data improvement approach - Trajectory Stitching (TS) - generates new trajectories (sequences of states and actions) by `stitching' pairs of states that were disconnected in the original data and generating their connecting new action. By construction, these new transitions are guaranteed to be highly plausible according to probabilistic models of the environment, and to improve a state-value function. We demonstrate that the iterative process of replacing old trajectories with new ones incrementally improves the underlying behavioural policy. Extensive experimental results show that significant performance gains can be achieved using TS over BC policies extracted from the original data. Furthermore, using the D4RL benchmarking suite, we demonstrate that state-of-the-art results are obtained by combining TS with two existing offline learning methodologies reliant on BC, model-based offline planning (MBOP) and policy constraint (TD3+BC).
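The acceptance test behind a stitch can be pictured as follows: a candidate jump from one state to another is kept only if a dynamics model deems it plausible and a value function says it improves on the original continuation, with an inverse-dynamics-style model proposing the connecting action. All models below are toy stand-ins, so this is only an illustrative sketch of the idea rather than the TS procedure.

```python
# Sketch of a stitching acceptance test (dynamics, value, and action models
# are stand-ins for TS's learned probabilistic models).
import numpy as np

def propose_stitch(s, s_candidate, value_fn, dynamics_density, action_model,
                   density_threshold=0.1):
    """Connect state s to s_candidate if the jump is plausible under the
    environment model and leads to a higher-value state."""
    a = action_model(s, s_candidate)                 # generated connecting action
    plausible = dynamics_density(s, a, s_candidate) > density_threshold
    improves = value_fn(s_candidate) > value_fn(s)
    return (s, a, s_candidate) if (plausible and improves) else None

# Toy stand-ins:
stitch = propose_stitch(
    s=np.zeros(3), s_candidate=np.ones(3),
    value_fn=lambda x: x.sum(),
    dynamics_density=lambda s, a, sn: 0.5,
    action_model=lambda s, sn: sn - s,
)
print(stitch)
```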