Offline reinforcement learning (RL), which aims to learn an optimal policy from a previously collected static dataset, is an important paradigm of RL. Standard RL methods often perform poorly in this setting due to function approximation errors on out-of-distribution actions. While various regularization methods have been proposed to mitigate this problem, they are often constrained by policy classes with limited expressiveness, which can lead to highly suboptimal solutions. In this paper, we propose Diffusion-QL, which utilizes a conditional diffusion model as a highly expressive policy class for behavior cloning and policy regularization. In our approach, we learn an action-value function and add a term that maximizes action values to the training loss of the conditional diffusion model, which results in a loss that seeks optimal actions close to the behavior policy. We show that both the expressiveness of diffusion-model-based policies and the coupling of behavior cloning and policy improvement under the diffusion model contribute to the outstanding performance of Diffusion-QL. We illustrate our method and prior work on a simple 2D bandit example with a multimodal behavior policy, and then demonstrate that our method achieves state-of-the-art performance on the majority of the D4RL benchmark tasks for offline RL.
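One plausible way to write the training objective described above (a hedged sketch; the symbols L_d for the diffusion behavior-cloning loss, Q_phi for the learned critic, and the coefficient alpha are notation introduced here, not quoted from the abstract):

    \min_\theta \; \mathcal{L}(\theta) \;=\; \mathcal{L}_d(\theta) \;-\; \alpha \, \mathbb{E}_{s \sim \mathcal{D},\, a \sim \pi_\theta(\cdot\mid s)}\!\left[ Q_\phi(s, a) \right]

The first term keeps the diffusion policy close to the behavior policy; the second steers sampled actions toward high values, yielding the "optimal actions close to the behavior policy" behavior described above.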
In offline reinforcement learning, weighted regression is a common approach for keeping the learned policy close to the behavior policy and preventing the selection of out-of-sample actions. In this work, we show that due to the limited distributional expressiveness of the policy model, previous methods may still select unseen actions during training, which deviates from their original motivation. To address this problem, we adopt a generative approach by decoupling the learned policy into two parts: an expressive generative behavior model and an action evaluation model. The key insight is that this decoupling avoids learning an explicitly parameterized policy model with a closed-form expression. Directly learning the behavior policy allows us to leverage existing advances in generative modeling, such as diffusion-based methods, to model diverse behaviors. For action evaluation, we combine our method with an in-sample planning technique to further avoid selecting out-of-sample actions and to improve computational efficiency. Experimental results on D4RL datasets show that our proposed method achieves competitive or superior performance compared with state-of-the-art offline RL methods, especially on complex tasks such as AntMaze. We also empirically demonstrate that our method can successfully learn from a heterogeneous dataset containing multiple distinct but similarly successful strategies, where previous unimodal policies fail.
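A minimal sketch of the decoupled policy at decision time, assuming a pretrained generative behavior model and a critic with the interfaces shown (both names are hypothetical, and the paper's action evaluation additionally uses in-sample planning rather than a plain argmax):

    import torch

    @torch.no_grad()
    def select_action(state, behavior_model, q_net, num_candidates=32):
        # Sample candidate actions from the expressive generative behavior model,
        # then let the separate action-evaluation model choose among them.
        states = state.unsqueeze(0).expand(num_candidates, -1)   # (N, state_dim)
        candidates = behavior_model.sample(states)               # (N, action_dim)
        values = q_net(states, candidates).squeeze(-1)           # (N,)
        return candidates[values.argmax()]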
Recent improvements in conditional generative modeling have made it possible to generate high-quality images from language descriptions alone. We investigate whether these methods can directly address the problem of sequential decision-making. We view decision-making not through the lens of reinforcement learning (RL), but rather through conditional generative modeling. To our surprise, we find that our formulation leads to policies that can outperform existing offline RL approaches across standard benchmarks. By modeling a policy as a return-conditional diffusion model, we illustrate how we may circumvent the need for dynamic programming and subsequently eliminate many of the complexities that come with traditional offline RL. We further demonstrate the advantages of modeling policies as conditional diffusion models by considering two other conditioning variables: constraints and skills. Conditioning on a single constraint or skill during training leads to behaviors at test-time that can satisfy several constraints together or demonstrate a composition of skills. Our results illustrate that conditional generative modeling is a powerful tool for decision-making.
Behavioural cloning (BC) is a commonly used imitation learning method to infer a sequential decision-making policy from expert demonstrations. However, when the quality of the data is not optimal, the resulting behavioural policy also performs sub-optimally once deployed. Recently, there has been a surge in offline reinforcement learning methods that hold the promise to extract high-quality policies from sub-optimal historical data. A common approach is to perform regularisation during training, encouraging updates during policy evaluation and/or policy improvement to stay close to the underlying data. In this work, we investigate whether an offline approach to improving the quality of the existing data can lead to improved behavioural policies without any changes in the BC algorithm. The proposed data improvement approach - Trajectory Stitching (TS) - generates new trajectories (sequences of states and actions) by `stitching' pairs of states that were disconnected in the original data and generating their connecting new action. By construction, these new transitions are guaranteed to be highly plausible according to probabilistic models of the environment, and to improve a state-value function. We demonstrate that the iterative process of replacing old trajectories with new ones incrementally improves the underlying behavioural policy. Extensive experimental results show that significant performance gains can be achieved using TS over BC policies extracted from the original data. Furthermore, using the D4RL benchmarking suite, we demonstrate that state-of-the-art results are obtained by combining TS with two existing offline learning methodologies reliant on BC, model-based offline planning (MBOP) and policy constraint (TD3+BC).
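A hedged sketch of the acceptance test implied by the guarantees above (the model and value-function interfaces are hypothetical; proposing the connecting action itself is omitted):

    def accept_stitch(s, new_action, s_candidate, s_orig_next,
                      dynamics_model, value_fn, log_prob_threshold):
        # Keep a stitched transition (s, new_action, s_candidate) only if the
        # probabilistic environment model finds it plausible and the new next
        # state improves on the original one under the state-value function.
        plausible = dynamics_model.log_prob(s_candidate, s, new_action) >= log_prob_threshold
        improving = value_fn(s_candidate) > value_fn(s_orig_next)
        return plausible and improving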
Offline reinforcement learning (RL) extends the paradigm of classical RL algorithms to learning purely from static datasets, without interacting with the underlying environment during learning. A key challenge of offline RL is the instability of policy training, caused by the mismatch between the distribution of the offline data and the undiscounted stationary state-action distribution of the learned policy. To avoid the detrimental impact of distribution mismatch, we regularize the undiscounted stationary distribution of the current policy toward the offline data during policy optimization. Furthermore, we train a dynamics model both to implement this regularization and to better estimate the stationary distribution of the current policy, reducing the error induced by distribution mismatch. On a wide range of continuous-control offline RL datasets, our method exhibits competitive performance, which validates our algorithm. The code is publicly available.
We consider an offline reinforcement learning (RL) setting where the agent needs to learn from a dataset collected by rolling out multiple behavior policies. There are two challenges for this setting: 1) The optimal trade-off between optimizing the RL signal and the behavior cloning (BC) signal changes on different states due to the variation of the action coverage induced by different behavior policies. Previous methods fail to handle this by only controlling the global trade-off. 2) For a given state, the action distribution generated by different behavior policies may have multiple modes. The BC regularizers in many previous methods are mean-seeking, resulting in policies that select out-of-distribution (OOD) actions in the middle of the modes. In this paper, we address both challenges by using adaptively weighted reverse Kullback-Leibler (KL) divergence as the BC regularizer based on the TD3 algorithm. Our method not only trades off the RL and BC signals with per-state weights (i.e., strong BC regularization on the states with narrow action coverage, and vice versa) but also avoids selecting OOD actions thanks to the mode-seeking property of reverse KL. Empirically, our algorithm can outperform existing offline RL algorithms in the MuJoCo locomotion tasks with the standard D4RL datasets as well as the mixed datasets that combine the standard datasets.
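Written out per state, the regularized actor objective described above could take the following form (a sketch with assumed notation; beta is the behavior policy, w(s) the adaptive per-state weight, and the reverse direction KL(pi || beta) is what provides the mode-seeking behavior):

    \max_\pi \; \mathbb{E}_{s \sim \mathcal{D}} \Big[ \mathbb{E}_{a \sim \pi(\cdot\mid s)} Q(s, a) \;-\; w(s)\, D_{\mathrm{KL}}\big( \pi(\cdot\mid s) \,\|\, \beta(\cdot\mid s) \big) \Big]

with w(s) large on states with narrow action coverage and small where coverage is broad.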
In offline reinforcement learning (offline RL), one of the main challenges is to handle the distributional shift between the learning policy and the given dataset. To address this problem, recent offline RL methods attempt to introduce a conservatism bias that encourages learning in high-confidence regions. Model-free approaches directly encode such a bias into policy or value-function learning using conservative regularization or special network structures, but their constrained policy search limits generalization beyond the offline dataset. Model-based approaches learn forward dynamics models with conservatism quantification and then generate imaginary trajectories to extend the offline dataset. However, due to the limited samples in offline datasets, conservatism quantification often suffers from over-generalization within the support region. Unreliable conservative measures will mislead model-based imagination into undesired regions, leading to overly aggressive behaviors. To encourage more conservatism, we propose a model-based offline RL framework called Reverse Offline Model-based Imagination (ROMI). We learn a reverse dynamics model in conjunction with a novel reverse policy, which can generate rollouts leading to the target goal states within the offline dataset. These reverse imaginations provide informed data augmentation for model-free policy learning and enable conservative generalization beyond the offline dataset. ROMI can be effectively combined with off-the-shelf model-free algorithms to achieve model-based generalization with suitable conservatism. Empirical results show that our method can generate more conservative behaviors and achieve state-of-the-art performance on offline RL benchmark tasks.
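A simplified sketch of reverse imagination, assuming a learned reverse dynamics model and reverse policy that expose sample methods (the interfaces are hypothetical):

    def reverse_rollout(goal_state, reverse_policy, reverse_dynamics, horizon):
        # Roll backwards from a state inside the offline dataset, producing an
        # imagined trajectory that ends at that state.
        transitions, s_next = [], goal_state
        for _ in range(horizon):
            a = reverse_policy.sample(s_next)            # which action could have led here?
            s_prev = reverse_dynamics.sample(s_next, a)  # which state did it come from?
            transitions.append((s_prev, a, s_next))
            s_next = s_prev
        return list(reversed(transitions))               # forward-ordered augmentation data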
Off-policy reinforcement learning aims to leverage experience collected from prior policies for sample-efficient learning. However, in practice, commonly used off-policy approximate dynamic programming methods based on Q-learning and actor-critic methods are highly sensitive to the data distribution, and can make only limited progress without collecting additional on-policy data. As a step towards more robust off-policy algorithms, we study the setting where the off-policy experience is fixed and there is no further interaction with the environment. We identify bootstrapping error as a key source of instability in current methods. Bootstrapping error is due to bootstrapping from actions that lie outside of the training data distribution, and it accumulates via the Bellman backup operator. We theoretically analyze bootstrapping error, and demonstrate how carefully constraining action selection in the backup can mitigate it. Based on our analysis, we propose a practical algorithm, bootstrapping error accumulation reduction (BEAR). We demonstrate that BEAR is able to learn robustly from different off-policy distributions, including random and suboptimal demonstrations, on a range of continuous control tasks.
KL-regularized reinforcement learning from expert demonstrations has proved successful in improving the sample efficiency of deep reinforcement learning algorithms, allowing them to be applied to challenging physical real-world tasks. However, we show that KL-regularized reinforcement learning with behavioral reference policies derived from expert demonstrations can suffer from pathological training dynamics that can lead to slow, unstable, and suboptimal online learning. We show empirically that the pathology occurs for commonly chosen behavioral policy classes and demonstrate its impact on sample efficiency and online policy performance. Finally, we show that the pathology can be remedied by non-parametric behavioral reference policies and that this allows KL-regularized reinforcement learning to significantly outperform state-of-the-art approaches on a variety of challenging locomotion and dexterous hand manipulation tasks.
Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility.
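A minimal sketch of planning-as-iterative-denoising under assumed interfaces (the denoise_step method and the state/action packing are illustrative, not the paper's API); writing the known current state back into the trajectory at each step is the inpainting view of conditioning mentioned above:

    import torch

    @torch.no_grad()
    def plan(denoiser, s0, horizon, state_dim, action_dim, num_diffusion_steps):
        # Start from noise over an entire (state, action) trajectory and
        # iteratively denoise it into a coherent plan.
        traj = torch.randn(horizon, state_dim + action_dim)
        for t in reversed(range(num_diffusion_steps)):
            traj[0, :state_dim] = s0                 # inpaint the known current state
            traj = denoiser.denoise_step(traj, t)    # one reverse-diffusion step
        traj[0, :state_dim] = s0
        return traj                                  # first action: traj[0, state_dim:]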
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data. This problem setting offers the promise of utilizing such datasets to acquire policies without any costly or dangerous active exploration. However, it is also challenging, due to the distributional shift between the offline training data and those states visited by the learned policy. Despite significant recent progress, the most successful prior methods are model-free and constrain the policy to the support of data, precluding generalization to unseen states. In this paper, we first observe that an existing model-based RL algorithm already produces significant gains in the offline setting compared to model-free approaches. However, standard model-based RL methods, designed for the online setting, do not provide an explicit mechanism to avoid the offline setting's distributional shift issue. Instead, we propose to modify the existing model-based RL methods by applying them with rewards artificially penalized by the uncertainty of the dynamics. We theoretically show that the algorithm maximizes a lower bound of the policy's return under the true MDP. We also characterize the trade-off between the gain and risk of leaving the support of the batch data. Our algorithm, Model-based Offline Policy Optimization (MOPO), outperforms standard model-based RL algorithms and prior state-of-the-art model-free offline RL algorithms on existing offline RL benchmarks and two challenging continuous control tasks that require generalizing from data collected for a different task.
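The reward penalty at the heart of MOPO can be written compactly (a sketch; u is an uncertainty estimate of the learned dynamics, such as ensemble disagreement, and lambda the penalty coefficient):

    \tilde{r}(s, a) \;=\; \hat{r}(s, a) \;-\; \lambda\, u(s, a)

Standard model-based RL is then run on the penalized MDP, which, as stated above, maximizes a lower bound on the policy's return under the true MDP.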
Recent work has shown that supervised learning alone, without temporal-difference (TD) learning, can be remarkably effective for offline RL. When does this hold true, and which algorithmic components are needed? Through extensive experiments, we boil supervised learning for offline RL down to its essential elements. In every environment suite we consider, simply maximizing likelihood with a two-layer feedforward MLP is competitive with substantially more complex methods based on TD learning or sequence modeling with Transformers. Carefully choosing model capacity (e.g., via regularization or architecture) and choosing which information to condition on (e.g., goals or rewards) are critical to performance. These insights serve as a field guide for practitioners doing reinforcement learning via supervised learning (which we dub "RvS learning"). They also probe the limits of existing RvS methods, which are comparatively weak on random data, and suggest a number of open problems.
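A hedged sketch of the essential RvS recipe, conditional behavior cloning with a small MLP (all names are illustrative; "outcome" stands for whichever conditioning information is chosen, e.g. a goal or a reward):

    import torch
    import torch.nn as nn

    class RvSPolicy(nn.Module):
        # Small feedforward MLP (two hidden layers) mapping (state, outcome) to an action.
        def __init__(self, state_dim, outcome_dim, action_dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + outcome_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, action_dim),
            )

        def forward(self, state, outcome):
            return self.net(torch.cat([state, outcome], dim=-1))

    def rvs_loss(policy, states, outcomes, actions):
        # Maximizing likelihood under a fixed-variance Gaussian head reduces to
        # mean-squared error on the predicted actions.
        return ((policy(states, outcomes) - actions) ** 2).mean()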
Most prior approaches to offline reinforcement learning (RL) take an iterative actor-critic approach involving off-policy evaluation. In this paper, we show that simply performing one step of constrained/regularized policy improvement, using an on-policy Q estimate of the behavior policy, performs surprisingly well. This one-step algorithm beats the previously reported results of iterative algorithms on a large portion of the D4RL benchmark. The one-step baseline achieves this strong performance while being notably simpler and more robust to hyperparameters than previously proposed iterative algorithms. We argue that the relatively poor performance of iterative approaches is a result of the high variance inherent in off-policy evaluation, amplified by the repeated optimization of policies against those estimates. In addition, we hypothesize that the strong performance of the one-step algorithm is due to a combination of favorable structure in the environment and the behavior policy.
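In outline, the one-step recipe contrasts with iterative methods roughly as follows (a sketch with assumed notation; Q^beta is the on-policy estimate of the behavior policy's value and D a divergence used for the constraint or regularization):

    \hat{Q}^{\beta} \approx \text{on-policy evaluation of } \beta \text{ on the dataset}, \qquad
    \pi \;=\; \arg\max_{\pi'} \; \mathbb{E}_{s \sim \mathcal{D}}\, \mathbb{E}_{a \sim \pi'(\cdot\mid s)}\big[ \hat{Q}^{\beta}(s, a) \big] \;\; \text{s.t.} \;\; D(\pi', \beta) \le \epsilon

performed once, with no further off-policy re-evaluation of the improved policy.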
Offline reinforcement learning (RL) defines the task of learning from a static logged dataset without continually interacting with the environment. The distribution shift between the learned policy and the behavior policy requires the value function to stay conservative so that out-of-distribution (OOD) actions are not severely overestimated. However, existing approaches, which penalize unseen actions or regularize toward the behavior policy, are too pessimistic, suppressing the generalization of the value function and hindering performance improvement. This paper explores mild but sufficient conservatism for offline learning while not harming generalization. We propose Mildly Conservative Q-learning (MCQ), where OOD actions are actively trained by assigning them appropriate pseudo Q-values. We theoretically show that MCQ induces a policy that behaves at least as well as the behavior policy and that no erroneous overestimation occurs for OOD actions. Experimental results on the D4RL benchmarks demonstrate that MCQ achieves remarkable performance compared with prior work. Furthermore, MCQ shows superior generalization capability when transferring from offline to online and significantly outperforms baselines.
The learned policies of model-free offline reinforcement learning (RL) methods are often constrained to stay within the support of the dataset to avoid potentially dangerous out-of-distribution actions or states, which makes it challenging to handle unsupported regions. Model-based RL methods offer richer datasets and the benefit of generalization by generating imaginary trajectories with trained forward or reverse dynamics models. However, the imagined transitions may be inaccurate, and can thus degrade the performance of the underlying offline RL method. In this paper, we propose to augment the offline dataset by using trained bidirectional dynamics models and a rollout policy with double checking. We introduce conservatism by trusting samples on which the forward model and the backward model agree. Our method, confidence-aware bidirectional offline model-based imagination, generates reliable samples and can be combined with any model-free offline RL method. Experimental results on the D4RL benchmarks demonstrate that our method significantly boosts the performance of existing model-free offline RL algorithms and achieves competitive or better scores against baseline methods.
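A sketch of the double-check rule described above, under assumed interfaces for the two learned dynamics models (names and the agreement metric are hypothetical):

    import torch

    def double_check(s, a, s_next, forward_model, backward_model, tol):
        # Trust an imagined transition only if the forward model's prediction of
        # s_next and the backward model's reconstruction of s both agree with it.
        forward_err = torch.linalg.norm(forward_model.predict(s, a) - s_next)
        backward_err = torch.linalg.norm(backward_model.predict(s_next, a) - s)
        return bool(forward_err < tol and backward_err < tol)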
Offline reinforcement learning (RL) tasks require the agent to learn from a pre-collected dataset without further interaction with the environment. Despite the potential to surpass the behavior policies, RL-based methods are generally impractical due to training instability and the bootstrapping of extrapolation errors, which always require careful hyperparameter tuning via online evaluation. In contrast, offline imitation learning (IL) has no such issues, since it learns the policy directly without estimating a value function. However, IL is usually limited by the capability of the behavior policies and tends to learn mediocre behavior from datasets collected from a mixture of policies. In this paper, we aim to take advantage of IL while mitigating this drawback. Observing that behavior cloning is able to imitate neighboring policies with less data, we propose Curriculum Offline Imitation Learning (COIL), which utilizes an experience-picking strategy over adaptively neighboring policies with higher returns and improves the current policy along curriculum stages. On continuous-control benchmarks, we compare COIL with both imitation-based and RL-based methods, showing that it not only avoids learning mediocre behavior on mixed datasets but is even competitive with state-of-the-art offline RL methods.
Recent work has shown that offline reinforcement learning (RL) can be formulated as a sequence modeling problem (Chen et al., 2021; Janner et al., 2021) and solved with approaches similar to large-scale language modeling. However, any practical instantiation of RL also involves an online component, where a policy pretrained on passive offline datasets is finetuned via task-specific interactions with the environment. We propose Online Decision Transformer (ODT), an RL algorithm based on sequence modeling that blends offline pretraining with online finetuning in a unified framework. Our framework uses sequence-level entropy regularizers in conjunction with autoregressive modeling objectives for sample-efficient exploration and finetuning. Empirically, we show that ODT is competitive with state-of-the-art methods in absolute performance on the D4RL benchmark but shows much larger gains during the finetuning procedure.
Behavior constrained policy optimization has been demonstrated to be a successful paradigm for tackling Offline Reinforcement Learning. By exploiting historical transitions, a policy is trained to maximize a learned value function while constrained by the behavior policy to avoid a significant distributional shift. In this paper, we propose our closed-form policy improvement operators. We make a novel observation that the behavior constraint naturally motivates the use of first-order Taylor approximation, leading to a linear approximation of the policy objective. Additionally, as practical datasets are usually collected by heterogeneous policies, we model the behavior policies as a Gaussian Mixture and overcome the induced optimization difficulties by leveraging the LogSumExp's lower bound and Jensen's Inequality, giving rise to a closed-form policy improvement operator. We instantiate offline RL algorithms with our novel policy improvement operators and empirically demonstrate their effectiveness over state-of-the-art algorithms on the standard D4RL benchmark.
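The linearization the abstract refers to can be written as (a sketch in the abstract's own terms; \bar{a} denotes the behavior action around which the Taylor expansion is taken):

    Q(s, a) \;\approx\; Q(s, \bar{a}) \;+\; \nabla_a Q(s, a)\big|_{a=\bar{a}}^{\top} (a - \bar{a})

so that, with the behavior policy modeled as a Gaussian mixture, the constrained improvement step becomes tractable in closed form once the mixture's log-density is bounded using the LogSumExp lower bound and Jensen's inequality.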
We present state advantage weighting for offline reinforcement learning (RL). In contrast to action advantage $A(s,a)$ that we commonly adopt in QSA learning, we leverage state advantage $A(s,s^\prime)$ and QSS learning for offline RL, hence decoupling the action from values. We expect the agent can get to the high-reward state and the action is determined by how the agent can get to that corresponding state. Experiments on D4RL datasets show that our proposed method can achieve remarkable performance against the common baselines. Furthermore, our method shows good generalization capability when transferring from offline to online.
Offline reinforcement learning is used to train policies in scenarios where real-time access to the environment is expensive or impossible. As a natural consequence of these harsh conditions, an agent may lack the resources to fully observe the online environment before taking an action. We dub this situation the resource-constrained setting. This leads to cases where the offline dataset (available for training) can contain fully processed features (computed with powerful language models, image models, complex sensors, etc.) that are not available when actions are actually taken online. This disconnect leads to an interesting and unexplored problem in offline RL: is it possible to use a richly processed offline dataset to train a policy that has access to fewer features in the online environment? In this work, we introduce and formalize this novel resource-constrained problem setting. We highlight the performance gap between policies trained using the full offline dataset and policies trained using limited features. We address this performance gap with a policy transfer algorithm that first trains a teacher agent using the offline dataset where features are fully available, and then transfers this knowledge to a student agent that only uses the resource-constrained features. To better capture the challenges of this setting, we propose a data collection procedure: Resource Constrained-Datasets for RL (RC-D4RL). We evaluate the transfer algorithm on RC-D4RL and the popular D4RL benchmarks and observe consistent improvement over the baseline (no transfer). The code for the experiments is available at https://github.com/jayanthrr/rc-offlinerl.