Despite the great promise shown by planning-based sequence modeling methods in continuous control, scaling them to high-dimensional state sequences remains an open challenge due to the high computational complexity and inherent difficulty of planning in high-dimensional spaces. We propose the Trajectory Autoencoding Planner (TAP), a planning-based sequence modeling RL method that scales to high state and action dimensionalities. Using a state-conditional Vector-Quantized Variational Autoencoder (VQ-VAE), TAP models the conditional distribution of trajectories given the current state. When deployed as an RL agent, TAP avoids step-by-step planning in the high-dimensional continuous action space and instead searches for the optimal sequence of latent codes via beam search. Unlike the $O(D^3)$ complexity of the Trajectory Transformer, TAP enjoys a constant $O(C)$ planning computational complexity with respect to the state-action dimensionality $D$. Our empirical evaluation also shows that TAP performs increasingly strongly as the dimensionality grows. On Adroit robotic hand manipulation tasks, which have higher state and action dimensionalities, TAP surpasses model-based methods, including TT, by a large margin and also beats strong model-free actor-critic baselines.
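A minimal sketch of the planning step described above, assuming a hypothetical decode_and_score interface that decodes a latent-code sequence from the current state and estimates its return; because the search is over discrete codes rather than raw actions, its cost does not grow with the state-action dimension. This is an illustration, not TAP's actual implementation.

import numpy as np

def latent_beam_search(state, decode_and_score, num_codes=512, horizon=4, beam_width=16):
    """Sketch of beam search over discrete latent codes (hypothetical interface).

    decode_and_score(state, codes) -> estimated return of the trajectory obtained
    by decoding the latent code sequence `codes` from `state`.
    """
    beams = [[]]                      # each beam is a partial sequence of code indices
    for _ in range(horizon):
        candidates = [b + [c] for b in beams for c in range(num_codes)]
        scores = np.array([decode_and_score(state, c) for c in candidates])
        top = np.argsort(scores)[-beam_width:]
        beams = [candidates[i] for i in top]
    best = max(beams, key=lambda c: decode_and_score(state, c))
    return best  # decode `best` to recover the planned action sequence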
The impressive results of the Transformer neural network architecture in natural language processing (NLP) have motivated researchers to explore framing offline reinforcement learning (RL) as a generic sequence modeling problem. Recent works based on this paradigm have achieved state-of-the-art results on several largely deterministic offline Atari and D4RL benchmarks. However, because these methods jointly model states and actions as a single sequencing problem, they struggle to disentangle the effects of the policy and the world dynamics on the return. As a result, in adversarial or stochastic environments, these methods lead to overly optimistic behavior that can be dangerous in safety-critical settings such as autonomous driving. In this work, we propose a method that addresses this optimism bias by explicitly disentangling the policy and the world model, which allows us at test time to search for policies that are robust to multiple possible futures in the environment. We demonstrate the superior performance of our method on a variety of autonomous driving tasks in simulation.
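A rough sketch of the robust test-time search suggested by this idea, assuming hypothetical rollout_return, policy_latents and world_latents interfaces: each candidate policy is scored by its worst-case return across several sampled futures of the world model.

def robust_latent_search(state, rollout_return, policy_latents, world_latents):
    """Pick the policy latent whose worst-case return over possible futures is best.

    rollout_return(state, z_policy, z_world) -> predicted return when the decoded
    policy is executed under the decoded world dynamics (hypothetical interface).
    """
    best_z, best_worst_case = None, float("-inf")
    for z_pi in policy_latents:
        worst_case = min(rollout_return(state, z_pi, z_w) for z_w in world_latents)
        if worst_case > best_worst_case:
            best_z, best_worst_case = z_pi, worst_case
    return best_z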
Behavioural cloning (BC) is a commonly used imitation learning method to infer a sequential decision-making policy from expert demonstrations. However, when the quality of the data is not optimal, the resulting behavioural policy also performs sub-optimally once deployed. Recently, there has been a surge in offline reinforcement learning methods that hold the promise to extract high-quality policies from sub-optimal historical data. A common approach is to perform regularisation during training, encouraging updates during policy evaluation and/or policy improvement to stay close to the underlying data. In this work, we investigate whether an offline approach to improving the quality of the existing data can lead to improved behavioural policies without any changes in the BC algorithm. The proposed data improvement approach - Trajectory Stitching (TS) - generates new trajectories (sequences of states and actions) by `stitching' pairs of states that were disconnected in the original data and generating their connecting new action. By construction, these new transitions are guaranteed to be highly plausible according to probabilistic models of the environment, and to improve a state-value function. We demonstrate that the iterative process of replacing old trajectories with new ones incrementally improves the underlying behavioural policy. Extensive experimental results show that significant performance gains can be achieved using TS over BC policies extracted from the original data. Furthermore, using the D4RL benchmarking suite, we demonstrate that state-of-the-art results are obtained by combining TS with two existing offline learning methodologies reliant on BC, model-based offline planning (MBOP) and policy constraint (TD3+BC).
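A hedged sketch of one stitching step under assumed interfaces (dynamics_logprob, inverse_dynamics, value_fn are stand-in names, not the paper's code): a jump from a state to a next state taken from another trajectory is accepted only if the environment model deems it plausible and it improves the state value, with the connecting action produced by an inverse-dynamics model.

def propose_stitch(s, candidate_next_states, dynamics_logprob, inverse_dynamics,
                   value_fn, v_current_next, min_logprob=-5.0):
    """Sketch of one Trajectory Stitching step (hypothetical interface)."""
    best = None
    for s_next in candidate_next_states:
        if dynamics_logprob(s, s_next) < min_logprob:
            continue                       # implausible under the environment model
        if value_fn(s_next) <= v_current_next:
            continue                       # does not improve on the original successor
        a = inverse_dynamics(s, s_next)    # action that plausibly connects s -> s_next
        score = value_fn(s_next)
        if best is None or score > best[2]:
            best = (a, s_next, score)
    return best                            # None if no acceptable stitch exists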
Reinforcement learning (RL) is typically concerned with estimating stationary policies or single-step models, leveraging the Markov property to factorize the problem. However, we can also view RL as a generic sequence modeling problem, with the goal being to produce a sequence of actions that leads to a sequence of high rewards. Viewed this way, it is natural to consider whether high-capacity sequence prediction models that work well in other domains, such as natural language processing, can also provide effective solutions to the RL problem. To this end, we explore how RL can be tackled with the tools of sequence modeling, using a Transformer architecture to model distributions over trajectories and repurposing beam search as a planning algorithm. Framing RL as a sequence modeling problem simplifies a range of design decisions, allowing us to dispense with many of the components common in offline RL algorithms. We demonstrate the flexibility of this approach across long-horizon dynamics prediction, imitation learning, goal-conditioned RL, and offline RL. Furthermore, we show that this approach can be combined with existing model-free algorithms to yield a state-of-the-art planner in sparse-reward, long-horizon tasks.
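A minimal illustration of how a trajectory can be flattened into a single token sequence for such a model; the discretization scheme and token layout below are illustrative assumptions, not the paper's exact implementation.

import numpy as np

def discretize(x, low, high, bins=100):
    """Uniformly discretize each dimension of x into integer tokens (sketch)."""
    return np.clip(((x - low) / (high - low) * bins).astype(int), 0, bins - 1)

def trajectory_tokens(states, actions, rewards, lows, highs):
    """Flatten a trajectory into one token sequence: [s_0, a_0, r_0, s_1, a_1, r_1, ...].

    A Transformer trained autoregressively on such sequences can then plan by
    beam-searching token continuations that maximize predicted cumulative reward.
    """
    tokens = []
    for s, a, r in zip(states, actions, rewards):
        tokens.extend(discretize(np.asarray(s), lows["s"], highs["s"]))
        tokens.extend(discretize(np.asarray(a), lows["a"], highs["a"]))
        tokens.extend(discretize(np.array([r]), lows["r"], highs["r"]))
    return np.array(tokens)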
Transformer, originally devised for natural language processing, has also achieved significant success in computer vision. Thanks to its strong expressive power, researchers are investigating ways to deploy transformers in reinforcement learning (RL), and transformer-based models have manifested their potential on representative RL benchmarks. In this paper, we collect and dissect recent advances on transforming RL by transformer (transformer-based RL or TRL), in order to explore its development trajectory and future trend. We group existing developments into two categories: architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation and autonomous driving. For architecture enhancement, these methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework; they model agents and environments much more precisely than deep RL methods, but they are still limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". For trajectory optimization, these methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior cloning framework, which makes it possible to extract policies from static datasets and fully use the long-sequence modeling capability of the transformer. Given these advancements, extensions and challenges in TRL are reviewed, and proposals about future directions are discussed. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
In offline reinforcement learning (offline RL), one of the main challenges is dealing with the distributional shift between the learned policy and the given dataset. To address this problem, recent offline RL methods attempt to introduce a conservatism bias that encourages learning in high-confidence regions. Model-free methods encode such a bias directly into policy or value function learning using conservative regularization or special network structures, but their constrained policy search limits generalization beyond the offline dataset. Model-based methods learn forward dynamics models with conservatism quantification and then generate imaginary trajectories to extend the offline dataset. However, due to the limited samples in offline datasets, conservatism quantification often suffers from over-generalization in out-of-support regions. Such unreliable conservative estimates mislead model-based imagination into undesired regions, leading to over-aggressive behaviors. To encourage more conservatism, we propose a model-based offline RL framework called Reverse Offline Model-based Imagination (ROMI). We learn a reverse dynamics model in conjunction with a novel reverse policy, which can generate rollouts leading to target goal states within the offline dataset. These reverse imaginations provide informed data augmentation for model-free policy learning and enable conservative generalization beyond the offline dataset. ROMI can be effectively combined with off-the-shelf model-free algorithms to achieve model-based generalization with proper conservatism. Empirical results show that our method produces more conservative behaviors and achieves state-of-the-art performance on offline RL benchmark tasks.
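A hedged sketch of the reverse-rollout idea, assuming hypothetical reverse_policy and reverse_dynamics callables: imagination starts from a state already in the dataset and walks backwards, so every imagined trajectory ends in supported data.

def reverse_imagination(goal_state, reverse_policy, reverse_dynamics, horizon=5):
    """Sketch of reverse rollouts (illustrative interface, not ROMI's code)."""
    s_next, transitions = goal_state, []
    for _ in range(horizon):
        a = reverse_policy(s_next)              # action assumed to precede s_next
        s_prev = reverse_dynamics(s_next, a)    # predicted predecessor state
        transitions.append((s_prev, a, s_next))
        s_next = s_prev
    return list(reversed(transitions))          # forward-ordered (s, a, s') tuples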
Reinforcement learning (RL) solves sequential decision-making problems via a trial-and-error process of interacting with the environment. While RL has achieved great success in playing complex video games, making mistakes is always undesired in the real world. To improve sample efficiency and thereby reduce errors, model-based reinforcement learning (MBRL) is believed to be a promising direction: it builds environment models in which trial-and-error can take place without incurring real costs. In this survey, we review MBRL with a focus on recent progress in deep RL. For non-tabular environments, there is always a generalization error between the learned environment model and the real environment. It is therefore crucial to analyze the discrepancy between policy training in the environment model and in the real environment, which in turn guides algorithm design for better model learning, model usage, and policy training. Besides, we also discuss recent advances in other forms of RL, including offline RL, goal-conditioned RL, multi-agent RL, and meta-RL. Moreover, we discuss the applicability and advantages of MBRL in real-world tasks. Finally, we conclude this survey by discussing the prospects for the future development of MBRL. We believe that MBRL has great, yet overlooked, potential and advantages in real-world applications, and we hope this survey can attract more research on MBRL.
Although reinforcement learning (RL) methods that learn an internal model of the environment can be more sample-efficient than their model-free counterparts, learning to model raw observations from high-dimensional sensors can be challenging. Prior work has addressed this challenge by learning low-dimensional representations of observations through auxiliary objectives such as reconstruction or value prediction. However, the alignment between these auxiliary objectives and the RL objective is often unclear. In this work, we propose a single objective that jointly optimizes the latent-space model and the policy to achieve high returns while remaining self-consistent. This objective is a lower bound on the expected return. Unlike prior bounds for model-based RL that concern policy exploration or model guarantees, our bound bears directly on the overall RL objective. We demonstrate that the resulting algorithm matches or improves the sample efficiency of the best model-based and model-free RL methods. While such sample-efficient methods are typically computationally demanding, our method reduces wall-clock time by roughly 50\%.
Offline reinforcement-learning (RL) algorithms learn to make decisions using a given, fixed training dataset without the possibility of additional online data collection. This problem setting is captivating because it holds the promise of utilizing previously collected datasets without any costly or risky interaction with the environment. However, this promise also bears the drawback of this setting. The restricted dataset induces subjective uncertainty because the agent can encounter unfamiliar sequences of states and actions that the training data did not cover. Moreover, inherent system stochasticity further increases uncertainty and aggravates the offline RL problem, preventing the agent from learning an optimal policy. To mitigate the destructive uncertainty effects, we need to balance the aspiration to take reward-maximizing actions with the incurred risk due to incorrect ones. In financial economics, modern portfolio theory (MPT) is a method that risk-averse investors can use to construct diversified portfolios that maximize their returns without unacceptable levels of risk. We integrate MPT into the agent's decision-making process to present a simple-yet-highly-effective risk-aware planning algorithm for offline RL. Our algorithm allows us to systematically account for the \emph{estimated quality} of specific actions and their \emph{estimated risk} due to the uncertainty. We show that our approach can be coupled with the Transformer architecture to yield a state-of-the-art planner for offline RL tasks, maximizing the return while significantly reducing the variance.
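A minimal sketch of the MPT-style trade-off described above, assuming a hypothetical return_samples interface (e.g., returns sampled from an ensemble or a stochastic sequence model): candidate plans are scored by expected return minus a risk-aversion penalty on its variance, mirroring mean-variance portfolio selection.

import numpy as np

def risk_aware_plan(candidate_plans, return_samples, risk_aversion=1.0):
    """Mean-variance (MPT-style) selection among candidate action sequences (sketch)."""
    def score(plan):
        r = np.asarray(return_samples(plan))
        return r.mean() - risk_aversion * r.var()
    return max(candidate_plans, key=score)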
Recent improvements in conditional generative modeling have made it possible to generate high-quality images from language descriptions alone. We investigate whether these methods can directly address the problem of sequential decision-making. We view decision-making not through the lens of reinforcement learning (RL), but rather through conditional generative modeling. To our surprise, we find that our formulation leads to policies that can outperform existing offline RL approaches across standard benchmarks. By modeling a policy as a return-conditional diffusion model, we illustrate how we may circumvent the need for dynamic programming and subsequently eliminate many of the complexities that come with traditional offline RL. We further demonstrate the advantages of modeling policies as conditional diffusion models by considering two other conditioning variables: constraints and skills. Conditioning on a single constraint or skill during training leads to behaviors at test-time that can satisfy several constraints together or demonstrate a composition of skills. Our results illustrate that conditional generative modeling is a powerful tool for decision-making.
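A simplified sketch of return-conditioned sampling with classifier-free guidance, one common way to condition a diffusion policy; eps_model, the noise schedule handling, and the DDIM-style update below are illustrative assumptions rather than the paper's exact sampler.

import torch

@torch.no_grad()
def sample_return_conditioned(eps_model, target_return, shape, alphas_bar, guidance_w=1.2):
    """Sketch: denoise a trajectory/action tensor conditioned on a target return.

    eps_model(x_t, t, cond) predicts the noise at step t; cond=None gives the
    unconditional prediction. alphas_bar is a cumulative noise schedule.
    """
    alphas_bar = torch.as_tensor(alphas_bar, dtype=torch.float32)
    x = torch.randn(shape)
    for t in reversed(range(len(alphas_bar))):
        eps_c = eps_model(x, t, target_return)
        eps_u = eps_model(x, t, None)
        eps = eps_u + guidance_w * (eps_c - eps_u)             # classifier-free guidance
        ab = alphas_bar[t]
        ab_prev = alphas_bar[t - 1] if t > 0 else torch.tensor(1.0)
        x0 = (x - (1 - ab).sqrt() * eps) / ab.sqrt()           # predicted clean sample
        x = ab_prev.sqrt() * x0 + (1 - ab_prev).sqrt() * eps   # deterministic (DDIM-like) step
    return x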
Recent work has shown that offline reinforcement learning (RL) can be formulated as a sequence modeling problem (Chen et al., 2021; Janner et al., 2021) and solved via approaches similar to large-scale language modeling. However, any practical instantiation of RL also involves an online component, where policies pretrained on passive offline datasets are finetuned via task-specific interactions with the environment. We propose the Online Decision Transformer (ODT), an RL algorithm based on sequence modeling that blends offline pretraining with online finetuning in a unified framework. Our framework uses sequence-level entropy regularizers in conjunction with autoregressive modeling objectives for sample-efficient exploration and finetuning. Empirically, we show that ODT is competitive with the state of the art in absolute performance on the D4RL benchmark but shows much larger gains during the finetuning procedure.
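A hedged sketch of an entropy-regularized autoregressive action loss in the spirit described above; the Lagrangian-style temperature update and names are assumptions (borrowed from max-entropy RL practice), not ODT's exact code.

import torch

def odt_style_loss(action_dist, actions, target_entropy, log_temperature):
    """Sketch: autoregressive NLL plus a sequence-level entropy constraint.

    action_dist is a torch.distributions.Distribution over predicted actions;
    log_temperature is a learnable dual variable.
    """
    nll = -action_dist.log_prob(actions).mean()              # autoregressive modeling term
    entropy = action_dist.entropy().mean()                   # sequence-level entropy
    policy_loss = nll - log_temperature.exp().detach() * entropy
    # dual update: raise the temperature when entropy falls below the target
    temp_loss = log_temperature.exp() * (entropy.detach() - target_entropy)
    return policy_loss + temp_loss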
Offline reinforcement learning has shown great promise in leveraging large pre-collected datasets for policy learning, allowing agents to forgo often costly online data collection. However, to date, offline reinforcement learning from visual observations has been relatively under-explored, and there is a lack of understanding of where the remaining challenges lie. In this paper, we seek to establish simple baselines for continuous control in the visual domain. We show that simple modifications to two state-of-the-art vision-based online reinforcement learning algorithms, DreamerV2 and DrQ-v2, suffice to outperform prior work and establish competitive baselines. We rigorously evaluate these algorithms on existing offline datasets as well as on a new testbed for offline reinforcement learning from visual observations that better represents the data distributions present in real-world offline reinforcement learning problems, and we open-source our code and data to facilitate progress in this important domain. Finally, we present and analyze several key desiderata unique to offline RL from visual observations, including visual distractions and visually identifiable changes in dynamics.
Model-based reinforcement learning (RL) is a sample-efficient way of learning complex behaviors by leveraging a learned single-step dynamics model to plan actions in imagination. However, planning every single action for long-horizon tasks is impractical, akin to a human planning every muscle movement. Instead, humans plan efficiently with high-level skills to solve complex tasks. Motivated by this intuition, we propose a skill-based model-based RL framework (SkiMo) that enables planning in the skill space using a skill dynamics model, which directly predicts skill outcomes rather than predicting every small detail of the intermediate states, step by step. For accurate and efficient long-term planning, we jointly learn the skill dynamics model and a skill repertoire from prior experience. We then harness the learned skill dynamics model to accurately simulate and plan over long horizons in the skill space, which enables efficient learning of long-horizon, sparse-reward tasks. Experimental results in navigation and manipulation domains show that SkiMo extends the temporal horizon of model-based approaches and improves the sample efficiency of both model-based RL and skill-based RL. Code and videos are available at \url{https://clvrai.com/skimo}
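A rough CEM-style sketch of planning with a skill dynamics model under assumed interfaces (skill_dynamics, value_fn, and all shapes are illustrative): each plan step jumps directly to the predicted outcome of a whole skill instead of simulating every low-level action.

import numpy as np

def plan_in_skill_space(state_embedding, skill_dynamics, value_fn, skill_dim=10,
                        plan_len=3, num_samples=256, iterations=5, elite_frac=0.1):
    """Sketch: cross-entropy-method planning over skill vectors (hypothetical interface)."""
    mean = np.zeros((plan_len, skill_dim))
    std = np.ones((plan_len, skill_dim))
    n_elite = max(1, int(num_samples * elite_frac))
    for _ in range(iterations):
        skills = mean + std * np.random.randn(num_samples, plan_len, skill_dim)
        scores = np.empty(num_samples)
        for i in range(num_samples):
            h, total = state_embedding, 0.0
            for z in skills[i]:
                h, r = skill_dynamics(h, z)      # one "step" = one whole skill
                total += r
            scores[i] = total + value_fn(h)      # bootstrap beyond the plan horizon
        elite = skills[np.argsort(scores)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean[0]                               # first skill of the refined plan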
Offline reinforcement learning (RL) provides a framework for learning decision-making from offline data and therefore constitutes a promising approach for real-world applications such as automated driving. Self-driving vehicles (SDV) learn a policy that may even outperform the behavior contained in sub-optimal datasets. Especially in safety-critical applications such as automated driving, explainability and transferability are key to success. This motivates the use of model-based offline RL approaches, which leverage planning. However, current state-of-the-art methods often neglect the influence of aleatoric uncertainty arising from the stochastic behavior of multi-agent systems. This work proposes a novel uncertainty-aware model-based offline reinforcement learning approach that leverages planning (UMBRELLA), which solves the prediction, planning, and control problem of the SDV jointly in an interpretable, learning-based fashion. A trained action-conditioned stochastic dynamics model captures distinctively different possible future evolutions of the traffic scene. Our analysis provides empirical evidence for the effectiveness of our approach in challenging automated driving simulations and on a real-world public dataset.
The policies learned by model-free offline reinforcement learning (RL) methods are typically constrained to stay within the support of the dataset to avoid potentially dangerous out-of-distribution actions or states, which makes handling out-of-support regions challenging. Model-based RL methods offer richer datasets and better generalization by generating imaginary trajectories with trained forward or reverse dynamics models. However, the imagined transitions may be inaccurate, thus degrading the performance of the underlying offline RL method. In this paper, we propose to augment the offline dataset by using trained bidirectional dynamics models and rollout policies with a double check. We introduce conservatism by trusting samples on which the forward model and the backward model agree. Our method, confidence-aware bidirectional offline model-based imagination, generates reliable samples and can be combined with any model-free offline RL method. Experimental results on the D4RL benchmarks demonstrate that our method significantly boosts the performance of existing model-free offline RL algorithms and achieves competitive or better scores than baseline methods.
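A hedged sketch of the bidirectional "double check" under assumed interfaces (forward_model, backward_model, and the distance threshold are illustrative): an imagined transition is kept only when the backward model maps the imagined successor back close to the original state.

import numpy as np

def double_checked_transition(s, a, forward_model, backward_model, tol=0.1):
    """Sketch: keep an imagined transition only if both directions agree."""
    s_next = forward_model(s, a)                 # imagined forward transition
    s_back = backward_model(s_next, a)           # reconstruct the predecessor
    if np.linalg.norm(np.asarray(s_back) - np.asarray(s)) <= tol:
        return (s, a, s_next)                    # confident: add to the augmented dataset
    return None                                  # models disagree: drop the sample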
For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires. Furthermore, to provide the requisite level of generality, these skills must handle raw sensory input such as images. In this paper, we propose an algorithm that acquires such general-purpose skills by combining unsupervised representation learning and reinforcement learning of goal-conditioned policies. Since the particular goals that might be required at test-time are not known in advance, the agent performs a self-supervised "practice" phase where it imagines goals and attempts to achieve them. We learn a visual representation with three distinct purposes: sampling goals for self-supervised practice, providing a structured transformation of raw sensory inputs, and computing a reward signal for goal reaching. We also propose a retroactive goal relabeling scheme to further improve the sample-efficiency of our method. Our off-policy algorithm is efficient enough to learn policies that operate on raw image observations and goals for a real-world robotic system, and substantially outperforms prior techniques.
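A small sketch of the latent-distance reward and one common relabeling variant (relabeling with a later observation from the same trajectory); the data layout and names are assumptions, and the paper additionally resamples goals from its generative model rather than only from observed futures.

import numpy as np

def latent_goal_reward(encoder, observation, goal_observation):
    """Reward is the negative distance between VAE latent codes (sketch)."""
    z = np.asarray(encoder(observation))
    z_goal = np.asarray(encoder(goal_observation))
    return -np.linalg.norm(z - z_goal)

def relabel_with_future_goal(trajectory, encoder, t, rng=np.random):
    """Retroactively relabel step t with a goal drawn from a later observation (sketch)."""
    goal_obs = trajectory[rng.randint(t, len(trajectory))]["observation"]
    return {**trajectory[t], "goal": encoder(goal_obs),
            "reward": latent_goal_reward(encoder, trajectory[t]["observation"], goal_obs)}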
Deep reinforcement learning (RL) algorithms suffer severe performance degradation when interaction data is scarce, which limits their real-world applications. Recently, visual representation learning has been shown to be effective and promising for improving RL sample efficiency. These methods usually rely on contrastive learning and data augmentation to train a transition model for state prediction, which differs from how the model is used in RL: for value-based planning. As a result, the learned model may not align well with the environment and may not produce consistent value predictions, especially when the state transitions are not deterministic. To address this issue, we propose a novel method called value-consistent representation learning (VCR), which learns representations that are directly relevant to decision-making. More specifically, VCR trains a model to predict the future state (also referred to as the "imagined state") based on the current state and a sequence of actions. Instead of aligning this imagined state with the real state returned by the environment, VCR applies a $Q$-value head to both states and obtains two action-value distributions. A distance between them is then computed and minimized so that the imagined state produces action-value predictions similar to those of the real state. We develop two implementations of this idea, for discrete and continuous action spaces respectively. We conduct experiments on the Atari 100K and DeepMind Control Suite benchmarks to validate its effectiveness in improving sample efficiency. Our method is shown to achieve new state-of-the-art performance among search-free RL algorithms.
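A minimal sketch of the value-consistency idea under assumed interfaces (transition_model, q_head); mean-squared error stands in here for whatever distance between the two action-value outputs the method actually uses.

import torch
import torch.nn.functional as F

def value_consistency_loss(q_head, transition_model, h_t, actions, h_real_future):
    """Sketch: match Q-values of the imagined future latent to those of the real one."""
    h = h_t
    for a in actions:
        h = transition_model(h, a)              # imagined latent after each action
    q_imagined = q_head(h)                      # action values from the imagined state
    q_real = q_head(h_real_future).detach()     # target: action values from the real state
    return F.mse_loss(q_imagined, q_real)       # MSE is one possible choice of distance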
Planning has been very successful for control tasks with known environment dynamics. To leverage planning in unknown environments, the agent needs to learn the dynamics from interactions with the world. However, learning dynamics models that are accurate enough for planning has been a long-standing challenge, especially in image-based domains. We propose the Deep Planning Network (PlaNet), a purely model-based agent that learns the environment dynamics from images and chooses actions through fast online planning in latent space. To achieve high performance, the dynamics model must accurately predict the rewards ahead for multiple time steps. We approach this using a latent dynamics model with both deterministic and stochastic transition components. Moreover, we propose a multi-step variational inference objective that we name latent overshooting. Using only pixel observations, our agent solves continuous control tasks with contact dynamics, partial observability, and sparse rewards, which exceed the difficulty of tasks that were previously solved by planning with learned models. PlaNet uses substantially fewer episodes and reaches final performance close to and sometimes higher than strong model-free algorithms.
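A hedged sketch of a latent transition with both deterministic and stochastic components, in the spirit described above; layer sizes, the GRU cell, and the Gaussian prior head are illustrative assumptions, not PlaNet's exact architecture.

import torch
import torch.nn as nn

class LatentStep(nn.Module):
    """Sketch of a deterministic + stochastic latent transition (RSSM-like)."""
    def __init__(self, det_dim=200, stoch_dim=30, action_dim=6):
        super().__init__()
        self.cell = nn.GRUCell(stoch_dim + action_dim, det_dim)   # deterministic path
        self.prior = nn.Linear(det_dim, 2 * stoch_dim)            # stochastic path

    def forward(self, det_state, stoch_state, action):
        det_state = self.cell(torch.cat([stoch_state, action], -1), det_state)
        mean, log_std = self.prior(det_state).chunk(2, dim=-1)
        stoch_state = mean + log_std.exp() * torch.randn_like(mean)  # reparameterized sample
        return det_state, stoch_state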
Offline reinforcement learning (RL) defines the task of learning from a static logged dataset without continually interacting with the environment. The distribution shift between the learned policy and the behavior policy requires the value function to stay conservative, so that out-of-distribution (OOD) actions are not severely overestimated. However, existing approaches, which penalize unseen actions or regularize toward the behavior policy, are too pessimistic; this suppresses the generalization of the value function and hinders performance improvement. This paper explores mild but sufficient conservatism for offline learning while not harming generalization. We propose Mildly Conservative Q-learning (MCQ), in which OOD actions are actively trained by assigning them proper pseudo Q-values. We theoretically show that MCQ induces a policy that behaves at least as well as the behavior policy and that no erroneous overestimation occurs for OOD actions. Experimental results on the D4RL benchmarks demonstrate that MCQ achieves remarkable performance compared with prior work. Furthermore, MCQ shows superior generalization when transferring from offline to online and significantly outperforms the baselines.
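A hedged sketch of how pseudo targets for OOD actions might be assigned under assumed interfaces (behavior_policy.sample, q_net, and the margin are illustrative, not MCQ's exact formulation): the pseudo target sits just below the best value among actions sampled from the learned behavior policy, so OOD actions are trained actively yet never look better than supported ones.

import torch

def mildly_conservative_target(q_net, behavior_policy, state, num_sampled=10, margin=0.01):
    """Sketch: pseudo Q target for an OOD action at `state` (hypothetical interface)."""
    with torch.no_grad():
        sampled_actions = behavior_policy.sample(state, num_sampled)   # in-support actions
        q_support = torch.stack([q_net(state, a) for a in sampled_actions])
        return q_support.max() - margin        # slightly below the best supported value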
Learned world models summarize an agent's experience to facilitate learning complex behaviors. While learning world models from high-dimensional sensory inputs is becoming feasible through deep learning, there are many potential ways for deriving behaviors from them. We present Dreamer, a reinforcement learning agent that solves long-horizon tasks from images purely by latent imagination. We efficiently learn behaviors by propagating analytic gradients of learned state values back through trajectories imagined in the compact state space of a learned world model. On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance.
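A simplified sketch of learning behaviors by backpropagating through imagined latent trajectories, assuming differentiable policy, transition, and value callables; it omits lambda-returns and other details of Dreamer's actual objective.

import torch

def actor_loss_from_imagination(initial_latent, policy, transition, value_fn,
                                horizon=15, gamma=0.99):
    """Sketch: maximize the discounted sum of predicted values along an imagined rollout."""
    latent, total_value, discount = initial_latent, 0.0, 1.0
    for _ in range(horizon):
        action = policy(latent)                  # reparameterized, differentiable action
        latent = transition(latent, action)      # imagined next latent state
        total_value = total_value + discount * value_fn(latent)
        discount *= gamma
    return -total_value                          # minimize the negative imagined value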