In offline reinforcement learning, weighted regression is a common approach to ensure that the learned policy stays close to the behavior policy and to prevent selecting out-of-sample actions. In this work, we show that due to the limited distributional expressivity of the policy model, previous methods may still select unseen actions during training, which deviates from their original motivation. To address this problem, we adopt a generative approach by decoupling the learned policy into two parts: an expressive generative behavior model and an action evaluation model. The key insight is that this decoupling avoids learning an explicitly parameterized policy model with a closed-form expression. Directly learning the behavior policy allows us to leverage existing advances in generative modeling, such as diffusion-based methods, to model diverse behaviors. As for action evaluation, we combine our method with in-sample planning techniques to further avoid selecting out-of-sample actions and to improve computational efficiency. Experimental results on D4RL datasets show that our proposed method achieves competitive or superior performance compared with state-of-the-art offline RL methods, especially in complex tasks such as AntMaze. We also empirically show that our method can successfully learn from heterogeneous datasets containing multiple distinct but similarly successful strategies, where previous unimodal policies fail.
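A minimal sketch of the decoupled action-selection step this abstract describes: draw candidate actions from a generative behavior model and re-rank them with the learned critic, so the executed action stays in-sample. The `behavior_model.sample` and `q_net` interfaces are hypothetical placeholders.

```python
import torch

@torch.no_grad()
def select_action(behavior_model, q_net, state, num_candidates=32):
    """Sample candidate actions from a generative behavior model and re-rank
    them with the learned critic, so the executed action remains in-sample."""
    states = state.unsqueeze(0).expand(num_candidates, -1)   # (N, state_dim)
    candidates = behavior_model.sample(states)               # (N, action_dim)
    scores = q_net(states, candidates).squeeze(-1)           # (N,)
    return candidates[scores.argmax()]
```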
Offline reinforcement learning (RL), which aims to learn an optimal policy using a previously collected static dataset, is an important paradigm of RL. Standard RL methods often perform poorly in this regime due to function approximation errors on out-of-distribution actions. While a variety of regularization methods have been proposed to mitigate this issue, they are often constrained by policy classes with limited expressiveness and can lead to suboptimal solutions. In this paper, we propose Diffusion-QL, which utilizes a conditional diffusion model as a highly expressive policy class for behavior cloning and policy regularization. In our approach, we learn an action-value function and add a term that maximizes action values to the training loss of the conditional diffusion model, which results in a loss that seeks optimal actions close to the behavior policy. We show that both the expressiveness of diffusion-model-based policies and the coupling of behavior cloning and policy improvement under the diffusion model contribute to the outstanding performance of Diffusion-QL. We illustrate our method and prior work on a simple 2D bandit example with a multimodal behavior policy, and then show that our method can achieve state-of-the-art performance on the majority of the D4RL benchmark tasks for offline RL.
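A compact sketch of the kind of objective the abstract describes, assuming a tiny DDPM over actions: the standard denoising (behavior-cloning) loss is combined with a Q-maximization term on actions sampled from the diffusion policy. The network, the number of diffusion steps, and the coefficient `eta` are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

T = 5                                        # tiny number of diffusion steps for the sketch
betas = torch.linspace(1e-4, 0.1, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class NoiseNet(nn.Module):
    """Predicts the noise added to an action, conditioned on state and step index."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.action_dim = action_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim))
    def forward(self, s, a_noisy, t):
        return self.net(torch.cat([s, a_noisy, t.float().unsqueeze(-1) / T], dim=-1))

def bc_loss(eps_model, s, a):
    """Standard denoising (behavior-cloning) loss on dataset actions."""
    t = torch.randint(0, T, (a.shape[0],))
    eps = torch.randn_like(a)
    ab = alpha_bars[t].unsqueeze(-1)
    return ((eps_model(s, ab.sqrt() * a + (1 - ab).sqrt() * eps, t) - eps) ** 2).mean()

def sample(eps_model, s):
    """Reverse diffusion from Gaussian noise; gradients flow through the chain."""
    a = torch.randn(s.shape[0], eps_model.action_dim)
    for t in reversed(range(T)):
        eps = eps_model(s, a, torch.full((s.shape[0],), t))
        a = (a - (1 - alphas[t]) / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            a = a + betas[t].sqrt() * torch.randn_like(a)
    return a.clamp(-1, 1)

def diffusion_ql_policy_loss(eps_model, q_net, s, a, eta=1.0):
    """Behavior-cloning diffusion loss plus a Q-maximization term on actions
    sampled from the diffusion policy (eta is an illustrative coefficient)."""
    return bc_loss(eps_model, s, a) - eta * q_net(s, sample(eps_model, s)).mean()
```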
Recent improvements in conditional generative modeling have made it possible to generate high-quality images from language descriptions alone. We investigate whether these methods can directly address the problem of sequential decision-making. We view decision-making not through the lens of reinforcement learning (RL), but rather through conditional generative modeling. To our surprise, we find that our formulation leads to policies that can outperform existing offline RL approaches across standard benchmarks. By modeling a policy as a return-conditional diffusion model, we illustrate how we may circumvent the need for dynamic programming and subsequently eliminate many of the complexities that come with traditional offline RL. We further demonstrate the advantages of modeling policies as conditional diffusion models by considering two other conditioning variables: constraints and skills. Conditioning on a single constraint or skill during training leads to behaviors at test-time that can satisfy several constraints together or demonstrate a composition of skills. Our results illustrate that conditional generative modeling is a powerful tool for decision-making.
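A small sketch of return-conditional sampling via classifier-free guidance, one common way to realize the conditioning described above; the network, the zero-return "null" conditioning, and the guidance weight are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ReturnConditionedNoiseNet(nn.Module):
    """Noise predictor conditioned on a target return (a hypothetical minimal
    interface for a return-conditional diffusion model)."""
    def __init__(self, x_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + 2, hidden), nn.ReLU(), nn.Linear(hidden, x_dim))
    def forward(self, x_t, t, ret):
        return self.net(torch.cat([x_t, t.float().unsqueeze(-1), ret], dim=-1))

def guided_noise(eps_model, x_t, t, target_return, w=1.2):
    """Classifier-free guidance: combine return-conditioned and unconditioned
    noise predictions so sampling is steered towards high-return behavior
    (the zero-return null token and the weight w are illustrative)."""
    eps_cond = eps_model(x_t, t, target_return)
    eps_uncond = eps_model(x_t, t, torch.zeros_like(target_return))
    return eps_uncond + w * (eps_cond - eps_uncond)
```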
In offline reinforcement learning (offline RL), one of the main challenges is handling the distribution shift between the learned policy and the given dataset. To address this problem, recent offline RL methods attempt to introduce a conservatism bias to encourage learning in high-confidence regions. Model-free methods directly encode such a bias into policy or value function learning using conservative regularization or special network structures, but their constrained policy search limits generalization beyond the offline dataset. Model-based methods learn forward dynamics models with conservatism quantification and then generate imaginary trajectories to extend the offline dataset. However, due to the limited samples in offline datasets, conservatism quantification often suffers from over-generalization within the support region, and unreliable conservative measures can mislead model-based imagination into undesired regions, leading to over-aggressive behaviors. To encourage more conservatism, we propose a model-based offline RL framework called Reverse Offline Model-based Imagination (ROMI). We learn a reverse dynamics model in conjunction with a novel reverse policy, which can generate rollouts leading to target goal states within the offline dataset. These reverse imaginations provide informative data augmentation for model-free policy learning and enable conservative generalization beyond the offline dataset. ROMI can be effectively combined with off-the-shelf model-free algorithms to achieve model-based generalization with proper conservatism. Empirical results show that our method generates more conservative behaviors and achieves state-of-the-art performance on offline RL benchmark tasks.
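A hedged sketch of reverse imagination as described above: a reverse policy proposes the action leading into a dataset state and a reverse dynamics model predicts its predecessor, producing backward rollouts that end inside the dataset. All model interfaces are hypothetical.

```python
import torch

@torch.no_grad()
def reverse_rollout(reverse_policy, reverse_dynamics, reward_model, s_goal, horizon=5):
    """Roll out *backwards* from a state taken from the offline dataset. The
    imagined transitions are returned in forward order so they can be used
    as additional training data for a model-free offline RL algorithm."""
    transitions, s = [], s_goal
    for _ in range(horizon):
        a_prev = reverse_policy(s)               # action that would lead into s
        s_prev = reverse_dynamics(s, a_prev)     # predicted predecessor state
        r_prev = reward_model(s_prev, a_prev)    # reward of the imagined transition
        transitions.append((s_prev, a_prev, r_prev, s))
        s = s_prev
    return list(reversed(transitions))
```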
Offline reinforcement learning (RL) is the task of learning from a static logged dataset without continually interacting with the environment. The distribution shift between the learned policy and the behavior policy makes it necessary to keep the value function conservative so that out-of-distribution (OOD) actions are not severely overestimated. However, existing approaches, which penalize unseen actions or regularize towards the behavior policy, are often too pessimistic, which suppresses the generalization of the value function and hinders performance improvement. This paper explores mild but sufficient conservatism for offline learning that does not harm generalization. We propose Mildly Conservative Q-learning (MCQ), in which OOD actions are actively trained by assigning them proper pseudo Q-values. We theoretically show that MCQ induces a policy that behaves at least as well as the behavior policy and that no erroneous overestimation occurs for OOD actions. Experimental results on the D4RL benchmarks demonstrate that MCQ achieves remarkable performance compared with prior work. Furthermore, MCQ shows superior generalization ability when transferring from offline to online and significantly outperforms baselines.
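One plausible reading of "actively training OOD actions with proper pseudo Q-values", sketched below: proposals from the current policy are regressed towards the best Q-value among actions sampled from a learned behavior model, so OOD values cannot exceed in-support ones. The interfaces and details are assumptions, not the paper's exact construction.

```python
import torch

def mcq_style_auxiliary_loss(q_net, policy, behavior_model, states, num_support=10):
    """Regress Q on policy-proposed (possibly OOD) actions towards a pseudo
    target equal to the best Q-value among actions drawn from a learned
    behavior model for the same states."""
    a_ood = policy(states)                                   # possibly OOD proposals
    with torch.no_grad():
        B, _ = states.shape
        s_rep = states.repeat_interleave(num_support, dim=0)
        a_support = behavior_model.sample(s_rep)             # in-support actions
        q_support = q_net(s_rep, a_support).view(B, num_support)
        pseudo_target = q_support.max(dim=1).values          # best in-support value
    q_ood = q_net(states, a_ood).squeeze(-1)
    return ((q_ood - pseudo_target) ** 2).mean()
```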
Behavior constrained policy optimization has been demonstrated to be a successful paradigm for tackling Offline Reinforcement Learning. By exploiting historical transitions, a policy is trained to maximize a learned value function while constrained by the behavior policy to avoid a significant distributional shift. In this paper, we propose our closed-form policy improvement operators. We make a novel observation that the behavior constraint naturally motivates the use of first-order Taylor approximation, leading to a linear approximation of the policy objective. Additionally, as practical datasets are usually collected by heterogeneous policies, we model the behavior policies as a Gaussian Mixture and overcome the induced optimization difficulties by leveraging the LogSumExp's lower bound and Jensen's Inequality, giving rise to a closed-form policy improvement operator. We instantiate offline RL algorithms with our novel policy improvement operators and empirically demonstrate their effectiveness over state-of-the-art algorithms on the standard D4RL benchmark.
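A heavily hedged sketch of the single-Gaussian case: linearizing Q around the behavior mean (first-order Taylor expansion) turns the behavior-constrained maximization into an analytic update that moves the action along the Q-gradient, scaled by the behavior covariance. This illustrates the idea only, not the paper's exact operator.

```python
import torch

def closed_form_improved_action(q_net, behavior_mean, behavior_cov, state, step_size=0.1):
    """One analytic improvement step: a* = mu(s) + step_size * Sigma(s) @ grad_a Q(s, a)|_{a=mu(s)}.
    behavior_mean / behavior_cov are hypothetical interfaces returning the
    behavior policy's mean (B, A) and covariance (B, A, A) per state."""
    mu = behavior_mean(state).detach().requires_grad_(True)
    q = q_net(state, mu).sum()
    (grad,) = torch.autograd.grad(q, mu)                     # dQ/da at the behavior mean
    with torch.no_grad():
        return mu + step_size * torch.einsum("bij,bj->bi", behavior_cov(state), grad)
```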
Effectively leveraging large, previously collected datasets in reinforcement learning (RL) is a key challenge for large-scale real-world applications. Offline RL algorithms promise to learn effective policies from previously-collected, static datasets without further interaction. However, in practice, offline RL presents a major challenge, and standard off-policy RL methods can fail due to overestimation of values induced by the distributional shift between the dataset and the learned policy, especially when training on complex and multi-modal data distributions. In this paper, we propose conservative Q-learning (CQL), which aims to address these limitations by learning a conservative Q-function such that the expected value of a policy under this Q-function lower-bounds its true value. We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees. In practice, CQL augments the standard Bellman error objective with a simple Q-value regularizer which is straightforward to implement on top of existing deep Q-learning and actor-critic implementations. On both discrete and continuous control domains, we show that CQL substantially outperforms existing offline RL methods, often learning policies that attain 2-5 times higher final return, especially when learning from complex and multi-modal data distributions.
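As an illustration of the Q-value regularizer described above, here is a minimal sketch assuming a uniform action proposal and an illustrative weight `alpha`; CQL's full objective also includes policy samples and importance weights, which are omitted here.

```python
import torch

def cql_style_regularizer(q_net, states, dataset_actions, num_random=10, alpha=5.0):
    """Push Q down on actions drawn from a broad proposal distribution and
    push it up on dataset actions, added on top of the usual Bellman loss."""
    B, action_dim = dataset_actions.shape
    rand_actions = torch.empty(B * num_random, action_dim).uniform_(-1.0, 1.0)
    s_rep = states.repeat_interleave(num_random, dim=0)
    q_rand = q_net(s_rep, rand_actions).view(B, num_random)
    # logsumexp softly approximates max_a Q(s, a) over the proposal distribution
    pushed_down = torch.logsumexp(q_rand, dim=1).mean()
    pushed_up = q_net(states, dataset_actions).mean()
    return alpha * (pushed_down - pushed_up)
```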
Relying on too many trials to learn good actions, current reinforcement learning (RL) algorithms have limited applicability in real-world settings, where exploration can be too expensive. We propose a batch RL algorithm in which effective policies are learned using only a fixed offline dataset, rather than online interaction with the environment. The limited data in batch RL produces inherent uncertainty in the value estimates of states and actions that are insufficiently represented in the training data. This leads to particularly severe extrapolation when our candidate policy diverges from the one that generated the data. We propose to mitigate this problem with two straightforward penalties: a policy constraint that reduces this divergence, and a value constraint that discourages overly optimistic estimates. Across a comprehensive set of 32 continuous-action batch RL benchmarks, our approach compares favorably with state-of-the-art methods, regardless of how the offline data were collected.
Offline reinforcement learning (RL) extends the paradigm of classical RL algorithms to learning purely from static datasets, without interacting with the underlying environment during the learning process. A key challenge of offline RL is the instability of policy training, caused by the mismatch between the distribution of the offline data and the undiscounted stationary state-action distribution of the learned policy. To avoid the detrimental impact of this distribution mismatch, we regularize the undiscounted stationary distribution of the current policy towards the offline data during policy optimization. Furthermore, we train a dynamics model both to implement this regularization and to better estimate the stationary distribution of the current policy, reducing the error induced by the distribution mismatch. On a wide range of continuous-control offline RL datasets, our method shows competitive performance, which validates our algorithm. The code is publicly available.
Behavioural cloning (BC) is a commonly used imitation learning method to infer a sequential decision-making policy from expert demonstrations. However, when the quality of the data is not optimal, the resulting behavioural policy also performs sub-optimally once deployed. Recently, there has been a surge in offline reinforcement learning methods that hold the promise to extract high-quality policies from sub-optimal historical data. A common approach is to perform regularisation during training, encouraging updates during policy evaluation and/or policy improvement to stay close to the underlying data. In this work, we investigate whether an offline approach to improving the quality of the existing data can lead to improved behavioural policies without any changes in the BC algorithm. The proposed data improvement approach - Trajectory Stitching (TS) - generates new trajectories (sequences of states and actions) by `stitching' pairs of states that were disconnected in the original data and generating their connecting new action. By construction, these new transitions are guaranteed to be highly plausible according to probabilistic models of the environment, and to improve a state-value function. We demonstrate that the iterative process of replacing old trajectories with new ones incrementally improves the underlying behavioural policy. Extensive experimental results show that significant performance gains can be achieved using TS over BC policies extracted from the original data. Furthermore, using the D4RL benchmarking suite, we demonstrate that state-of-the-art results are obtained by combining TS with two existing offline learning methodologies reliant on BC, model-based offline planning (MBOP) and policy constraint (TD3+BC).
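A hedged sketch of one plausible stitching step consistent with the description above; the `inverse_model`, probabilistic `dynamics_model`, and `value_fn` interfaces, as well as the likelihood threshold, are assumptions for illustration.

```python
import torch

@torch.no_grad()
def propose_stitch(inverse_model, dynamics_model, value_fn, s, s_candidate,
                   likelihood_threshold=-5.0):
    """Try to connect a single state s to a higher-value state s_candidate
    taken from another trajectory: a generative inverse model proposes the
    connecting action, and the transition is kept only if a probabilistic
    dynamics model finds it plausible and the state value improves."""
    if value_fn(s_candidate) <= value_fn(s):
        return None                                    # only stitch towards higher value
    a_new = inverse_model(s, s_candidate)              # proposed connecting action
    log_prob = dynamics_model.log_prob(s_candidate, s, a_new)
    if log_prob < likelihood_threshold:
        return None                                    # transition judged implausible
    return (s, a_new, s_candidate)                     # new stitched transition
```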
Most prior approaches to offline reinforcement learning (RL) take an iterative actor-critic approach involving off-policy evaluation. In this paper, we show that simply performing one step of constrained/regularized policy improvement using an on-policy Q estimate of the behavior policy performs surprisingly well. This one-step algorithm beats the previously reported results of iterative algorithms on a large portion of the D4RL benchmark. The one-step baseline achieves this strong performance while being notably simpler and more robust to hyperparameters than previously proposed iterative algorithms. We argue that the relatively poor performance of iterative approaches is a result of the high variance inherent in off-policy evaluation, amplified by the repeated optimization of policies against those estimates. In addition, we hypothesize that the strong performance of the one-step algorithm is due to a combination of favorable structure in the environment and the behavior policy.
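One standard way to instantiate such a single constrained improvement step is exponentiated-advantage-weighted regression on top of a SARSA-style estimate of the behavior policy's Q-function; the sketch below assumes hypothetical `q_behavior.value` and `policy.log_prob` interfaces.

```python
import torch

def one_step_policy_loss(policy, q_behavior, states, actions, temperature=1.0):
    """One step of regularized policy improvement against the behavior
    policy's Q-estimate: weight dataset actions by their exponentiated
    advantage and fit a stochastic policy to the weighted data."""
    with torch.no_grad():
        q = q_behavior(states, actions).squeeze(-1)
        v = q_behavior.value(states).squeeze(-1)       # e.g. E_{a~beta}[Q(s, a)]
        weights = torch.exp((q - v) / temperature).clamp(max=100.0)
    log_prob = policy.log_prob(states, actions)        # log pi(a|s) on dataset actions
    return -(weights * log_prob).mean()
```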
Learning effective reinforcement learning (RL) policies to solve complex real-world tasks can be prohibitively difficult without a high-fidelity simulation environment. In most cases, we are only given imperfect simulators with simplified dynamics, which inevitably leads to sim-to-real gaps in RL policy learning. The recently emerged field of offline RL offers another possibility: learning policies directly from pre-collected historical data. However, to achieve reasonable performance, existing offline RL algorithms require impractically large offline datasets with sufficient state-action space coverage for training. This raises a new question: is it possible to combine learning from limited real data in offline RL with unrestricted exploration through an imperfect simulator in online RL, so as to address the drawbacks of both approaches? In this study, we propose the Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning (H2O) framework to provide an affirmative answer to this question. H2O introduces a dynamics-aware policy evaluation scheme that adaptively penalizes the Q-function on simulated state-action pairs with large dynamics gaps, while also allowing learning from a fixed real-world dataset. Through extensive simulation and real-world tasks, as well as theoretical analysis, we demonstrate the superior performance of H2O over other cross-domain online and offline RL algorithms. H2O provides a brand-new hybrid offline-and-online RL paradigm, which may shed light on future RL algorithm design for solving practical real-world tasks.
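A hedged sketch of a dynamics-aware penalty in the spirit of the description above: simulated samples are penalized in proportion to a proxy for their dynamics gap. The gap proxy (prediction error of a dynamics model fit on real data) and all interfaces are assumptions, not H2O's exact estimator.

```python
import torch

def dynamics_gap_weights(real_dynamics_model, s_sim, a_sim, s_next_sim, temperature=1.0):
    """Weight simulated samples by a proxy for their dynamics gap: the error
    of a dynamics model fit on real data when asked to reproduce the
    simulator's transition (larger gap -> larger penalty weight)."""
    with torch.no_grad():
        pred_next = real_dynamics_model(s_sim, a_sim)
        gap = ((pred_next - s_next_sim) ** 2).sum(dim=-1)
    return torch.softmax(gap / temperature, dim=0)

def dynamics_aware_penalty(q_net, real_batch, sim_batch, weights):
    """Conservative-style penalty: push down Q on simulated pairs, weighted
    by their dynamics gap, while pushing up Q on real data."""
    q_sim = q_net(sim_batch["s"], sim_batch["a"]).squeeze(-1)
    q_real = q_net(real_batch["s"], real_batch["a"]).squeeze(-1)
    return (weights * q_sim).sum() - q_real.mean()
```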
Reinforcement learning (RL) has been shown to be effective in domains where the agent can learn policies by actively interacting with its operating environment. However, if we change the RL scheme to an offline setting where the agent can only update its policy from static datasets, one of the major issues in offline reinforcement learning emerges: distribution shift. We propose a Pessimistic Offline Reinforcement Learning (PessORL) algorithm to actively lead the agent back to familiar regions by manipulating the value function. We focus on problems caused by out-of-distribution (OOD) states and deliberately penalize high values at states that are absent from the training dataset, so that the learned pessimistic value function lower-bounds the true value anywhere within the state space. We evaluate the PessORL algorithm on various benchmark tasks and show that our method gains better performance by explicitly handling OOD states, compared with methods that merely consider OOD actions.
Off-policy reinforcement learning aims to leverage experience collected from prior policies for sample-efficient learning. However, in practice, commonly used off-policy approximate dynamic programming methods based on Q-learning and actor-critic methods are highly sensitive to the data distribution, and can make only limited progress without collecting additional on-policy data. As a step towards more robust off-policy algorithms, we study the setting where the off-policy experience is fixed and there is no further interaction with the environment. We identify bootstrapping error as a key source of instability in current methods. Bootstrapping error is due to bootstrapping from actions that lie outside of the training data distribution, and it accumulates via the Bellman backup operator. We theoretically analyze bootstrapping error, and demonstrate how carefully constraining action selection in the backup can mitigate it. Based on our analysis, we propose a practical algorithm, bootstrapping error accumulation reduction (BEAR). We demonstrate that BEAR is able to learn robustly from different off-policy distributions, including random and suboptimal demonstrations, on a range of continuous control tasks.
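BEAR constrains action selection in the backup by keeping the learned policy's actions close to the support of the data; a sample-based MMD between policy actions and dataset actions is one such support-matching measure. A minimal sketch with an illustrative Laplacian kernel and bandwidth:

```python
import torch

def mmd_laplacian(policy_actions, dataset_actions, sigma=10.0):
    """Biased sample estimate of the (squared-rooted) MMD between two sets of
    actions, using a Laplacian kernel. Small values mean the policy's actions
    stay within the support of the dataset actions."""
    def k(a, b):
        diff = a.unsqueeze(1) - b.unsqueeze(0)          # (n, m, action_dim)
        return torch.exp(-diff.abs().sum(-1) / sigma)   # Laplacian kernel matrix
    mmd_sq = (k(policy_actions, policy_actions).mean()
              - 2 * k(policy_actions, dataset_actions).mean()
              + k(dataset_actions, dataset_actions).mean())
    return mmd_sq.clamp(min=1e-6).sqrt()
```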
We consider an offline reinforcement learning (RL) setting where the agent need to learn from a dataset collected by rolling out multiple behavior policies. There are two challenges for this setting: 1) The optimal trade-off between optimizing the RL signal and the behavior cloning (BC) signal changes on different states due to the variation of the action coverage induced by different behavior policies. Previous methods fail to handle this by only controlling the global trade-off. 2) For a given state, the action distribution generated by different behavior policies may have multiple modes. The BC regularizers in many previous methods are mean-seeking, resulting in policies that select out-of-distribution (OOD) actions in the middle of the modes. In this paper, we address both challenges by using adaptively weighted reverse Kullback-Leibler (KL) divergence as the BC regularizer based on the TD3 algorithm. Our method not only trades off the RL and BC signals with per-state weights (i.e., strong BC regularization on the states with narrow action coverage, and vice versa) but also avoids selecting OOD actions thanks to the mode-seeking property of reverse KL. Empirically, our algorithm can outperform existing offline RL algorithms in the MuJoCo locomotion tasks with the standard D4RL datasets as well as the mixed datasets that combine the standard datasets.
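A sketch of an actor loss combining the RL signal with a per-state-weighted, single-sample estimate of the reverse KL to a learned behavior density; the `policy`, `behavior_logprob`, and `state_weights` interfaces are hypothetical stand-ins for the components the abstract describes.

```python
import torch

def actor_loss(policy, q_net, behavior_logprob, states, state_weights):
    """Mode-seeking BC regularization via a single-sample estimate of the
    reverse KL D(pi || behavior), weighted per state so that states with
    narrow action coverage receive stronger regularization."""
    dist = policy(states)                         # e.g. a torch.distributions.Normal
    actions = dist.rsample()                      # reparameterized sample from pi
    log_pi = dist.log_prob(actions).sum(-1)
    log_beta = behavior_logprob(states, actions)  # learned behavior density model
    reverse_kl = log_pi - log_beta                # single-sample estimate of D(pi || beta)
    rl_term = q_net(states, actions).squeeze(-1)
    return (-rl_term + state_weights * reverse_kl).mean()
```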
Effective exploration is a crucial challenge in deep reinforcement learning. Several methods, such as behavioral priors, are able to leverage offline data to efficiently accelerate reinforcement learning on complex tasks. However, if the task at hand deviates too much from the demonstrated tasks, the effectiveness of such methods is limited. In our work, we propose to learn features from offline data that are shared by a more diverse range of tasks, such as the correlation between actions and directedness. We therefore introduce state-free priors, which directly model temporal consistency in demonstrated trajectories and are capable of driving exploration in complex tasks, even when trained on data collected for simpler tasks. Furthermore, we introduce a novel integration scheme for action priors in off-policy reinforcement learning that dynamically samples actions from a probabilistic mixture of the policy and the action prior. We compare our approach against strong baselines and provide empirical evidence that it can accelerate reinforcement learning in long-horizon continuous-control tasks under sparse reward settings.
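A minimal sketch of drawing each exploration action from a probabilistic mixture of the task policy and an action prior; conditioning the prior on recent actions and the fixed `mix_prob` are illustrative assumptions (the mixing weight could itself be adapted over training).

```python
import torch

def sample_exploration_action(policy, action_prior, state, recent_actions, mix_prob=0.5):
    """Draw the next action either from the state-free action prior (which
    models temporal consistency over recent actions) or from the task policy."""
    if torch.rand(()) < mix_prob:
        return action_prior.sample(recent_actions)   # temporally consistent prior proposal
    return policy.sample(state)                      # regular on-policy proposal
```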
We present state advantage weighting for offline reinforcement learning (RL). In contrast to action advantage $A(s,a)$ that we commonly adopt in QSA learning, we leverage state advantage $A(s,s^\prime)$ and QSS learning for offline RL, hence decoupling the action from values. We expect the agent can get to the high-reward state and the action is determined by how the agent can get to that corresponding state. Experiments on D4RL datasets show that our proposed method can achieve remarkable performance against the common baselines. Furthermore, our method shows good generalization capability when transferring from offline to online.
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data. This problem setting offers the promise of utilizing such datasets to acquire policies without any costly or dangerous active exploration. However, it is also challenging, due to the distributional shift between the offline training data and those states visited by the learned policy. Despite significant recent progress, the most successful prior methods are model-free and constrain the policy to the support of data, precluding generalization to unseen states. In this paper, we first observe that an existing model-based RL algorithm already produces significant gains in the offline setting compared to model-free approaches. However, standard model-based RL methods, designed for the online setting, do not provide an explicit mechanism to avoid the offline setting's distributional shift issue. Instead, we propose to modify the existing model-based RL methods by applying them with rewards artificially penalized by the uncertainty of the dynamics. We theoretically show that the algorithm maximizes a lower bound of the policy's return under the true MDP. We also characterize the trade-off between the gain and risk of leaving the support of the batch data. Our algorithm, Model-based Offline Policy Optimization (MOPO), outperforms standard model-based RL algorithms and prior state-of-the-art model-free offline RL algorithms on existing offline RL benchmarks and two challenging continuous control tasks that require generalizing from data collected for a different task.
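A minimal sketch of the uncertainty-penalized reward: subtract an ensemble-based uncertainty estimate (here, the largest predicted standard-deviation norm across ensemble members, one heuristic penalty discussed in the paper) from the model-predicted reward; the coefficient `lam` trades off return against model risk.

```python
import torch

def penalized_reward(reward, next_state_stds, lam=1.0):
    """MOPO-style reward penalty.

    reward:          (batch,) model-predicted rewards
    next_state_stds: (ensemble_size, batch, state_dim) predicted std-devs
    """
    uncertainty = next_state_stds.norm(dim=-1).max(dim=0).values   # (batch,)
    return reward - lam * uncertainty
```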
The learned policies of model-free offline reinforcement learning (RL) methods are often constrained to stay within the support of the dataset to avoid possibly dangerous out-of-distribution actions or states, which makes it challenging to handle out-of-support regions. Model-based RL methods offer richer datasets and benefit generalization by generating imaginary trajectories with trained forward or reverse dynamics models. However, the imagined transitions may be inaccurate, thus degrading the performance of the underlying offline RL method. In this paper, we propose to augment the offline dataset by using trained bidirectional dynamics models and rollout policies with double checks. We introduce conservatism by trusting samples on which the forward model and the backward model agree. Our method, confidence-aware bidirectional offline model-based imagination, generates reliable samples and can be combined with any model-free offline RL method. Experimental results on the D4RL benchmarks demonstrate that our method significantly boosts the performance of existing model-free offline RL algorithms and achieves competitive or better scores than baseline methods.
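A hedged sketch of the double-check idea: keep only imagined transitions on which the forward and backward dynamics models agree. The cycle-consistency criterion and the `keep_ratio` are assumptions for illustration.

```python
import torch

@torch.no_grad()
def double_checked_rollout(forward_model, backward_model, rollout_policy, s, keep_ratio=0.3):
    """Generate one forward imagined step per start state and keep only the
    transitions with the smallest disagreement between the forward model and
    the backward model (cycle reconstruction of the start state)."""
    a = rollout_policy(s)
    s_next = forward_model(s, a)                  # forward imagination
    s_recon = backward_model(s_next, a)           # backward model maps the step back
    disagreement = ((s_recon - s) ** 2).sum(-1)   # cycle-consistency error
    k = max(1, int(keep_ratio * s.shape[0]))
    keep = disagreement.topk(k, largest=False).indices
    return s[keep], a[keep], s_next[keep]
```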
The curse of dimensionality in Markov decision processes (MDPs) is commonly addressed by exploiting low-rank representations, which motivates much of the recent theoretical study of linear MDPs. However, most approaches either impose unrealistic assumptions on the normalization of the decomposition or introduce unresolved computational challenges in practice. Instead, we consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning via contrastive estimation. The framework also admits confidence-adjusted index algorithms, enabling an efficient and principled approach to incorporating optimism or pessimism in the face of uncertainty. To the best of our knowledge, this provides the first practical representation learning method for linear MDPs that achieves both strong theoretical guarantees and empirical performance. Theoretically, we prove that the proposed algorithm is sample-efficient in both the online and offline settings. Empirically, we demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.