Off-policy reinforcement learning aims to leverage experience collected from prior policies for sample-efficient learning. However, in practice, commonly used off-policy approximate dynamic programming methods based on Q-learning and actor-critic methods are highly sensitive to the data distribution, and can make only limited progress without collecting additional on-policy data. As a step towards more robust off-policy algorithms, we study the setting where the off-policy experience is fixed and there is no further interaction with the environment. We identify bootstrapping error as a key source of instability in current methods. Bootstrapping error is due to bootstrapping from actions that lie outside of the training data distribution, and it accumulates via the Bellman backup operator. We theoretically analyze bootstrapping error, and demonstrate how carefully constraining action selection in the backup can mitigate it. Based on our analysis, we propose a practical algorithm, bootstrapping error accumulation reduction (BEAR). We demonstrate that BEAR is able to learn robustly from different off-policy distributions, including random and suboptimal demonstrations, on a range of continuous control tasks.
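A minimal sketch of the kind of constrained backup described above, for a discrete-action, tabular setting. The masking-by-dataset-counts rule and all names are illustrative simplifications (the paper's algorithm constrains a learned continuous-action policy to the support of the data), not the paper's implementation:

```python
import numpy as np

def support_constrained_backup(q_next, action_counts, rewards, gamma=0.99, min_count=1):
    """One Bellman backup step that bootstraps only from actions observed in the
    dataset, so value estimates never propagate from out-of-distribution actions.
    q_next: (S, A) Q-values at next states; action_counts: (S, A) dataset
    visitation counts; rewards: (S,) rewards for the sampled transitions."""
    in_support = action_counts >= min_count            # actions the data supports
    masked_q = np.where(in_support, q_next, -np.inf)   # never bootstrap from OOD actions
    bootstrap = masked_q.max(axis=1)                   # constrained max over actions
    # if a state has no in-support action at all, fall back to the unconstrained max
    bootstrap = np.where(np.isfinite(bootstrap), bootstrap, q_next.max(axis=1))
    return rewards + gamma * bootstrap
```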
Effectively leveraging large, previously collected datasets in reinforcement learning (RL) is a key challenge for large-scale real-world applications. Offline RL algorithms promise to learn effective policies from previously-collected, static datasets without further interaction. However, in practice, offline RL presents a major challenge, and standard off-policy RL methods can fail due to overestimation of values induced by the distributional shift between the dataset and the learned policy, especially when training on complex and multi-modal data distributions. In this paper, we propose conservative Q-learning (CQL), which aims to address these limitations by learning a conservative Q-function such that the expected value of a policy under this Q-function lower-bounds its true value. We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees. In practice, CQL augments the standard Bellman error objective with a simple Q-value regularizer which is straightforward to implement on top of existing deep Q-learning and actor-critic implementations. On both discrete and continuous control domains, we show that CQL substantially outperforms existing offline RL methods, often learning policies that attain 2-5 times higher final return, especially when learning from complex and multi-modal data distributions.
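As a rough illustration of the Q-value regularizer described above, here is a hedged sketch of one common discrete-action form of the conservative penalty (a log-sum-exp over all actions minus the value of the dataset action) added to the Bellman error. Function and variable names are ours, not from the paper's code:

```python
import numpy as np

def conservative_q_loss(q_values, data_actions, td_targets, alpha=1.0):
    """Conservative critic loss: push down Q on all actions (via log-sum-exp)
    while pushing up Q on the actions actually present in the dataset, plus the
    usual squared Bellman error. q_values: (B, A); data_actions: (B,) ints;
    td_targets: (B,) Bellman targets computed elsewhere."""
    idx = np.arange(len(data_actions))
    q_data = q_values[idx, data_actions]
    # numerically stable log-sum-exp over the action dimension
    m = q_values.max(axis=1, keepdims=True)
    logsumexp_q = m.squeeze(1) + np.log(np.exp(q_values - m).sum(axis=1))
    conservative_term = (logsumexp_q - q_data).mean()   # penalizes OOD overestimation
    bellman_error = ((q_data - td_targets) ** 2).mean()
    return alpha * conservative_term + bellman_error
```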
Relying on too many trials to learn good actions, current reinforcement learning (RL) algorithms have limited applicability in real-world settings, where exploration may simply be too expensive. We propose a batch RL algorithm that learns effective policies using only a fixed offline dataset, without online interaction with the environment. The limited data in batch RL induces inherent uncertainty in value estimates for states and actions that are insufficiently represented in the training data. This leads to particularly severe extrapolation errors when the candidate policy diverges from the policy that generated the data. We propose to mitigate this problem with two straightforward penalties: a policy constraint that reduces this divergence, and a value constraint that discounts over-optimistic estimates. Across a comprehensive suite of 32 continuous-action batch RL benchmarks, our approach compares favorably with state-of-the-art methods, regardless of how the offline data were collected, as sketched below.
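A hedged sketch of the two penalties the abstract mentions, assuming some per-sample divergence estimate (e.g., an estimated KL or MMD to the behavior policy) is available; all names are illustrative:

```python
import numpy as np

def penalized_actor_objective(q_pi, divergence, alpha=1.0):
    """Policy constraint: maximize the critic's value of the learned policy's
    actions minus a penalty on the policy's divergence from the behavior policy."""
    return float(np.mean(q_pi - alpha * divergence))

def penalized_td_target(reward, q_next, divergence_next, gamma=0.99, beta=1.0):
    """Value constraint: subtract the divergence at the next state from the
    bootstrap target, discounting over-optimistic value estimates."""
    return reward + gamma * (q_next - beta * divergence_next)
```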
Offline reinforcement learning (RL) defines the task of learning from a static, logged dataset without continued interaction with the environment. The distribution shift between the learned policy and the behavior policy requires the value function to remain conservative, so that out-of-distribution (OOD) actions are not severely overestimated. However, existing methods, which penalize unseen actions or regularize toward the behavior policy, are too pessimistic, suppressing the generalization of the value function and hindering performance improvement. This paper explores mild but sufficient conservatism for offline learning that does not harm generalization. We propose Mildly Conservative Q-learning (MCQ), in which OOD actions are actively trained by assigning them appropriate pseudo Q-values. Theoretically, we show that MCQ induces a policy that behaves at least as well as the behavior policy and that no erroneous overestimation occurs for OOD actions. Experimental results on the D4RL benchmark demonstrate that MCQ achieves excellent performance compared with prior work. Furthermore, MCQ shows superior generalization when transferring from offline to online learning and significantly outperforms the baselines.
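A hedged sketch in the spirit of the pseudo-value idea above: OOD actions are regressed toward the best value among actions sampled near the data, rather than being left unconstrained or pushed far below it. Details differ from the paper, and all names are illustrative:

```python
import numpy as np

def ood_pseudo_targets(q_in_support, num_ood_actions):
    """Build mildly conservative regression targets for out-of-distribution
    actions: each OOD action is trained toward the maximum Q among in-support
    actions. q_in_support: (B, N) critic values of N actions sampled from an
    (estimated) behavior policy."""
    best_supported = q_in_support.max(axis=1, keepdims=True)    # (B, 1)
    return np.repeat(best_supported, num_ood_actions, axis=1)   # (B, num_ood_actions)
```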
Reinforcement learning (RL) has been shown to be effective in domains where an agent can learn a policy through active interaction with its operating environment. However, if we move RL to an offline setting, where the agent can only update its policy from a static dataset, a major problem of offline reinforcement learning arises: distributional shift. We propose a Pessimistic Offline Reinforcement Learning (PessORL) algorithm that actively guides the agent back to familiar regions by manipulating the value function. We focus on problems caused by out-of-distribution (OOD) states and deliberately penalize high values at states that are absent from the training dataset, so that the learned pessimistic value function lower-bounds the true value anywhere in the state space. We evaluate PessORL on various benchmark tasks and show that, by explicitly handling OOD states, our method achieves better performance than methods that consider only OOD actions.
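A hedged sketch of penalizing values at out-of-distribution states, assuming some state density (or pseudo-count) model is available; the exact shaping and thresholding in the paper differ, and all names are illustrative:

```python
import numpy as np

def pessimistic_state_values(values, state_log_density, log_density_threshold, penalty=100.0):
    """Push down the learned value at states the density model considers
    out-of-distribution, so the value function stays low outside the data
    support and the policy is drawn back toward familiar regions."""
    ood = state_log_density < log_density_threshold
    return np.where(ood, values - penalty, values)
```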
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data. This problem setting offers the promise of utilizing such datasets to acquire policies without any costly or dangerous active exploration. However, it is also challenging, due to the distributional shift between the offline training data and those states visited by the learned policy. Despite significant recent progress, the most successful prior methods are model-free and constrain the policy to the support of data, precluding generalization to unseen states. In this paper, we first observe that an existing model-based RL algorithm already produces significant gains in the offline setting compared to model-free approaches. However, standard model-based RL methods, designed for the online setting, do not provide an explicit mechanism to avoid the offline setting's distributional shift issue. Instead, we propose to modify the existing model-based RL methods by applying them with rewards artificially penalized by the uncertainty of the dynamics. We theoretically show that the algorithm maximizes a lower bound of the policy's return under the true MDP. We also characterize the trade-off between the gain and risk of leaving the support of the batch data. Our algorithm, Model-based Offline Policy Optimization (MOPO), outperforms standard model-based RL algorithms and prior state-of-the-art model-free offline RL algorithms on existing offline RL benchmarks and two challenging continuous control tasks that require generalizing from data collected for a different task.
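A hedged sketch of the uncertainty-penalized reward used for model rollouts, assuming an ensemble of learned dynamics models whose disagreement serves as the uncertainty estimate; the exact penalty used in the paper differs, and all names are illustrative:

```python
import numpy as np

def penalized_reward(reward, ensemble_next_state_preds, penalty_coef=1.0):
    """Subtract a dynamics-uncertainty penalty from the model-predicted reward.
    ensemble_next_state_preds: (E, B, S) next-state predictions from E ensemble
    members; the per-sample uncertainty is taken as the largest per-dimension
    spread of the ensemble predictions."""
    uncertainty = ensemble_next_state_preds.std(axis=0).max(axis=-1)  # (B,)
    return reward - penalty_coef * uncertainty
```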
In many sequential decision-making problems (e.g., robot control, game playing, sequential prediction), human or expert data is available that contains useful information about the task. However, imitation learning (IL) from a small amount of expert data can be challenging in high-dimensional environments with complex dynamics. Behavioral cloning is a simple method that is widely used because of its straightforward implementation and stable convergence, but it does not exploit any information about the environment dynamics. Many existing methods that do exploit dynamics information are difficult to train in practice, due to adversarial optimization over reward and policy approximators or biased, high-variance gradient estimators. We introduce a method for dynamics-aware IL that avoids adversarial training by learning a single Q-function, implicitly representing both the reward and the policy. On standard benchmarks, the implicitly learned rewards show a high positive correlation with the ground-truth rewards, illustrating that our method can also be used for inverse reinforcement learning (IRL). Our method, Inverse soft-Q learning (IQ-Learn), obtains state-of-the-art results in both offline and online imitation learning settings, significantly outperforming existing methods both in the number of required environment interactions and in scalability to high-dimensional spaces, often by more than 3x.
Many practical applications of reinforcement learning constrain agents to learn from a fixed batch of data which has already been gathered, without offering further possibility for data collection. In this paper, we demonstrate that due to errors introduced by extrapolation, standard off-policy deep reinforcement learning algorithms, such as DQN and DDPG, are incapable of learning without data correlated to the distribution under the current policy, making them ineffective for this fixed batch setting. We introduce a novel class of off-policy algorithms, batch-constrained reinforcement learning, which restricts the action space in order to force the agent towards behaving close to on-policy with respect to a subset of the given data. We present the first continuous control deep reinforcement learning algorithm which can learn effectively from arbitrary, fixed batch data, and empirically demonstrate the quality of its behavior in several tasks.
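A hedged sketch of batch-constrained action selection for the discrete case: the agent only considers actions whose estimated behavior-policy probability is not too far below that of the most likely action. This mirrors the discrete variant of the idea; the continuous-control algorithm uses a generative model and a perturbation network, and the names below are illustrative:

```python
import numpy as np

def batch_constrained_action(q_values, behavior_probs, threshold=0.3):
    """Greedy action restricted to actions the dataset plausibly contains.
    q_values: (A,) critic values for one state; behavior_probs: (A,) estimated
    action probabilities under the behavior policy; threshold: fraction of the
    maximum behavior probability below which actions are disallowed."""
    allowed = behavior_probs >= threshold * behavior_probs.max()
    masked_q = np.where(allowed, q_values, -np.inf)
    return int(masked_q.argmax())
```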
Most prior approaches to offline reinforcement learning (RL) have taken an iterative actor-critic approach involving off-policy evaluation. In this paper we show that simply performing one step of constrained/regularized policy improvement, using an on-policy Q estimate of the behavior policy, performs surprisingly well. This one-step algorithm beats the previously reported results of iterative algorithms on a large portion of the D4RL benchmark. The one-step baseline achieves this strong performance while being notably simpler and more robust to hyperparameters than previously proposed iterative algorithms. We argue that the relatively poor performance of iterative approaches stems from the high variance inherent in off-policy evaluation, amplified by repeatedly optimizing policies against those estimates. In addition, we hypothesize that the strong performance of the one-step algorithm is due to a combination of favorable structure in the environment and in the behavior policy.
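A hedged sketch of the one-step recipe: fit the behavior policy's Q- and V-functions from the data, then take a single step of constrained policy improvement against them with no further re-evaluation. The advantage-weighted update shown here is just one of several improvement operators one could use, and all names are illustrative:

```python
import numpy as np

def one_step_improvement_weights(q_beta, v_beta, temperature=1.0):
    """Weights for a single advantage-weighted regression step: clone dataset
    actions with weight exp(advantage / temperature), where the advantage is
    computed under the *behavior* policy's Q and V estimates (no iteration).
    q_beta, v_beta: (B,) arrays of behavior-policy value estimates."""
    advantage = q_beta - v_beta
    weights = np.exp(np.clip(advantage / temperature, -20.0, 20.0))
    return weights / weights.sum()
```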
Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy. That is, to succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.
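A hedged sketch of the entropy-augmented return the maximum-entropy framework refers to, evaluated on one sampled trajectory; the full algorithm additionally learns soft Q-functions and a stochastic actor, and the names below are illustrative:

```python
import numpy as np

def soft_return(rewards, log_probs, alpha=0.2, gamma=0.99):
    """Maximum-entropy objective on one trajectory: the discounted sum of
    rewards plus a per-step entropy bonus (-alpha * log pi(a|s))."""
    soft_rewards = np.asarray(rewards) - alpha * np.asarray(log_probs)
    discounts = gamma ** np.arange(len(soft_rewards))
    return float((discounts * soft_rewards).sum())
```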
Learned models and policies can generalize effectively when evaluated within the distribution of the training data, but can produce unpredictable and erroneous outputs on out-of-distribution inputs. To avoid distribution shift when deploying learning-based control algorithms, we seek a mechanism that constrains the agent to states and actions resembling those it was trained on. In control theory, Lyapunov stability and control-invariant sets allow us to make guarantees about controllers that stabilize a system around specific states, while in machine learning, density models allow us to estimate the training data distribution. Can we combine these two concepts to produce learning-based control algorithms that constrain the system to in-distribution states using only in-distribution actions? In this work, we propose to do exactly that by combining the concepts of Lyapunov stability and density estimation, introducing Lyapunov density models: a generalization of control Lyapunov functions and density models that provides guarantees on an agent's ability to stay in-distribution over its entire trajectory.
Learning policies from fixed offline datasets is a key challenge to scale up reinforcement learning (RL) algorithms towards practical applications. This is often because off-policy RL algorithms suffer from distributional shift, due to the mismatch between the dataset and the target policy, leading to high variance and over-estimation of value functions. In this work, we propose variance regularization for offline RL algorithms, using stationary distribution corrections. We show that by using Fenchel duality, we can avoid double sampling issues for computing the gradient of the variance regularizer. The proposed algorithm for offline variance regularization (OVAR) can be used to augment any existing offline policy optimization algorithms. We show that the regularizer leads to a lower bound to the offline policy optimization objective, which can help avoid over-estimation errors, and explains the benefits of our approach across a range of continuous control domains when compared to existing state-of-the-art algorithms.
Without a high-fidelity simulation environment, learning effective reinforcement learning (RL) policies that solve complex real-world tasks is difficult. In most cases, we only have imperfect simulators with simplified dynamics, which inevitably leads to sim-to-real gaps in RL policy learning. The recently emerged field of offline RL offers another possibility: learning policies directly from pre-collected historical data. However, to achieve reasonable performance, existing offline RL algorithms require impractically large offline datasets with sufficient state-action space coverage for training. This raises a new question: is it possible to combine learning from the limited real data available to offline RL with the unrestricted exploration of an imperfect simulator in online RL, so as to address the shortcomings of both approaches? In this study, we propose the Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning (H2O) framework to give an affirmative answer to this question. H2O introduces a dynamics-aware policy evaluation scheme that adaptively penalizes the Q-function on simulated state-action pairs with large dynamics gaps, while also allowing learning from a fixed real-world dataset. Through extensive simulated and real-world tasks, as well as theoretical analysis, we demonstrate that H2O outperforms other cross-domain online and offline RL algorithms. H2O provides a brand-new hybrid offline-and-online RL paradigm that may shed light on future RL algorithm design for solving practical real-world tasks.
Behavior constrained policy optimization has been demonstrated to be a successful paradigm for tackling Offline Reinforcement Learning. By exploiting historical transitions, a policy is trained to maximize a learned value function while constrained by the behavior policy to avoid a significant distributional shift. In this paper, we propose our closed-form policy improvement operators. We make a novel observation that the behavior constraint naturally motivates the use of first-order Taylor approximation, leading to a linear approximation of the policy objective. Additionally, as practical datasets are usually collected by heterogeneous policies, we model the behavior policies as a Gaussian Mixture and overcome the induced optimization difficulties by leveraging the LogSumExp's lower bound and Jensen's Inequality, giving rise to a closed-form policy improvement operator. We instantiate offline RL algorithms with our novel policy improvement operators and empirically demonstrate their effectiveness over state-of-the-art algorithms on the standard D4RL benchmark.
We propose Adversarially Trained Actor Critic (ATAC), a new model-free reinforcement learning (RL) algorithm for offline settings with insufficient data coverage, based on the concept of relative pessimism. ATAC is designed as a two-player Stackelberg game: the policy actor competes against an adversarially trained value critic, which finds data-consistent scenarios in which the actor is inferior to the data-collecting behavior policy. We prove that, when the actor attains no regret in the two-player game, running ATAC produces a policy that provably 1) outperforms the behavior policy over a wide range of hyperparameters controlling the degree of pessimism, and 2) competes with the best policy covered by the data under appropriately chosen hyperparameters. Compared with existing works, our framework notably offers both theoretical guarantees for general function approximation and a deep RL implementation that scales to complex environments and large datasets. On the D4RL benchmark, ATAC consistently outperforms state-of-the-art offline RL algorithms on a range of continuous control tasks.
Offline reinforcement learning (RL) promises the ability to learn effective policies solely using existing, static datasets, without any costly online interaction. To do so, offline RL methods must handle distributional shift between the dataset and the learned policy. The most common approach is to learn conservative, or lower-bound, value functions, which underestimate the return of out-of-distribution (OOD) actions. However, such methods exhibit one notable drawback: policies optimized on such value functions can only behave according to a fixed, possibly suboptimal, degree of conservatism. However, this can be alleviated if we instead are able to learn policies for varying degrees of conservatism at training time and devise a method to dynamically choose one of them during evaluation. To do so, in this work, we propose learning value functions that additionally condition on the degree of conservatism, which we dub confidence-conditioned value functions. We derive a new form of a Bellman backup that simultaneously learns Q-values for any degree of confidence with high probability. By conditioning on confidence, our value functions enable adaptive strategies during online evaluation by controlling for confidence level using the history of observations thus far. This approach can be implemented in practice by conditioning the Q-function from existing conservative algorithms on the confidence. We theoretically show that our learned value functions produce conservative estimates of the true value at any desired confidence. Finally, we empirically show that our algorithm outperforms existing conservative offline RL algorithms on multiple discrete control domains.
While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it. In this paper, we view reinforcement learning as inferring policies that achieve desired outcomes, rather than as a problem of maximizing rewards. To solve this inference problem, we establish a novel variational inference formulation that allows us to derive a well-shaped reward function which can be learned directly from environment interactions. From the corresponding variational objective, we also derive a new probabilistic Bellman backup operator and use it to develop an off-policy algorithm to solve goal-directed tasks. We empirically demonstrate that this method eliminates the need to hand-craft reward functions for a suite of diverse manipulation and locomotion tasks and leads to effective goal-directed behaviors.
KL-regularized reinforcement learning from expert demonstrations has proved successful in improving the sample efficiency of deep reinforcement learning algorithms, allowing them to be applied to challenging physical real-world tasks. However, we show that KL-regularized reinforcement learning with behavioral reference policies derived from expert demonstrations can suffer from pathological training dynamics that can lead to slow, unstable, and suboptimal online learning. We show empirically that the pathology occurs for commonly chosen behavioral policy classes and demonstrate its impact on sample efficiency and online policy performance. Finally, we show that the pathology can be remedied by non-parametric behavioral reference policies and that this allows KL-regularized reinforcement learning to significantly outperform state-of-the-art approaches on a variety of challenging locomotion and dexterous hand manipulation tasks.
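For reference, a hedged sketch of the KL-regularized objective under discussion: expected return minus a penalty on the divergence between the learned policy and the behavioral reference policy. The per-sample Monte-Carlo KL estimate and all names here are illustrative:

```python
import numpy as np

def kl_regularized_objective(returns, policy_log_probs, reference_log_probs, alpha=1.0):
    """Estimate E[return] - alpha * KL(pi || pi_ref) from samples drawn from pi.
    The KL term is approximated by the mean of log pi(a|s) - log pi_ref(a|s)
    over the sampled actions."""
    kl_estimate = np.mean(np.asarray(policy_log_probs) - np.asarray(reference_log_probs))
    return float(np.mean(returns) - alpha * kl_estimate)
```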
Designing effective model-based reinforcement learning algorithms is difficult because the ease of generating data with a model must be weighed against the bias of model-generated data. In this paper, we study the role of model usage in policy optimization both theoretically and empirically. We first formulate and analyze a model-based reinforcement learning algorithm with a monotonic improvement guarantee at each step. In practice, this analysis is overly pessimistic and suggests that real off-policy data is always preferable to model-generated on-policy data, but we show that an empirical estimate of model generalization can be incorporated into such an analysis to justify model usage. Motivated by this analysis, we then demonstrate that a simple procedure of using short model-generated rollouts branched from real data has the benefits of more complicated model-based algorithms without their usual pitfalls. In particular, this approach surpasses the sample efficiency of prior model-based methods, matches the asymptotic performance of the best model-free algorithms, and scales to horizons that cause other model-based methods to fail entirely.
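A hedged sketch of the short branched-rollout procedure described above: starting from states sampled out of the real replay buffer, roll the learned model forward only a few steps and add those transitions to the policy's training data. All names are illustrative:

```python
import random

def branched_rollouts(real_states, model_step, policy, horizon=5, num_branches=256):
    """Generate short model-based rollouts branched from real data.
    real_states: list of states sampled from the real replay buffer;
    model_step(state, action) -> (next_state, reward); policy(state) -> action."""
    synthetic = []
    for state in random.sample(real_states, k=min(len(real_states), num_branches)):
        for _ in range(horizon):   # keep rollouts short to limit compounding model bias
            action = policy(state)
            next_state, reward = model_step(state, action)
            synthetic.append((state, action, reward, next_state))
            state = next_state
    return synthetic
```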
Offline reinforcement learning (RL) can learn control policies from static datasets but, like standard RL methods, it requires reward annotations for every transition. In many cases, labeling large datasets with rewards may be costly, especially if the rewards must be provided by human labelers, while collecting diverse unlabeled data may be comparatively cheap. How can we best leverage such unlabeled data in offline RL? One natural solution is to learn a reward function from the labeled data and use it to label the unlabeled data. In this paper, we find that, perhaps surprisingly, a much simpler approach that simply applies zero rewards to the unlabeled data leads to effective data sharing in both theory and practice, without learning any reward model at all. While this approach may at first seem strange (and incorrect), we provide extensive theoretical and empirical analysis illustrating how it trades off reward bias, sample complexity, and distributional shift, often leading to good results. We characterize the conditions under which this simple strategy is effective, and further show that extending it with a simple reweighting approach can further alleviate the bias introduced by using incorrect reward labels. Our empirical evaluation confirms these findings in simulated robotic locomotion, navigation, and manipulation settings.
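A hedged sketch of the data-sharing recipe described above: keep the rewards on labeled transitions and merge in the unlabeled transitions with their reward set to zero (the reweighting extension is omitted); the tuple layout is illustrative:

```python
def share_with_zero_rewards(labeled, unlabeled):
    """Merge offline datasets: labeled transitions are (s, a, r, s_next) tuples,
    unlabeled transitions are (s, a, s_next) tuples that receive reward 0.0
    instead of a learned reward label."""
    relabeled = [(s, a, 0.0, s_next) for (s, a, s_next) in unlabeled]
    return list(labeled) + relabeled
```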