An inherent problem in reinforcement learning is coping with policies that are uncertain about which action to take (or about the value of a state). Model uncertainty, more formally known as epistemic uncertainty, refers to the expected prediction error of a model beyond sampling noise. In this paper, we propose a metric for epistemic uncertainty estimation in Q-value functions, which we refer to as pathwise epistemic uncertainty. We further develop a method to compute its approximate upper bound, which we call the F-value. We experimentally apply the latter to Deep Q-Networks (DQN) and show that uncertainty estimation in reinforcement learning is a useful indicator of learning progress. We then propose a new method for improving the sample efficiency of actor-critic algorithms that learns from an existing (previously learned or hard-coded) oracle policy while simultaneously estimating uncertainty, with the aim of avoiding unproductive random actions during training. We call this Critic Confidence Guided Exploration (CCGE). We implement CCGE on top of Soft Actor-Critic (SAC) using our F-value metric, apply it to a handful of popular Gym environments, and show that it achieves better sample efficiency and total episodic reward than vanilla SAC in a limited set of settings.
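For illustration, the following is a minimal sketch of the confidence-guided action selection described above, assuming the uncertainty signal is the standard deviation of a critic ensemble; the paper's actual F-value metric is defined differently and is not reproduced here, and all function names are hypothetical.

```python
import numpy as np

def ccge_action(state, policy_action, oracle_action, q_ensemble, threshold=0.5):
    """Hypothetical confidence-guided action selection.

    q_ensemble: list of callables, each mapping (state, action) -> scalar Q estimate.
    The ensemble std stands in for the paper's F-value (an epistemic-uncertainty
    upper bound), whose exact form is not given in the abstract.
    """
    a = policy_action(state)
    qs = np.array([q(state, a) for q in q_ensemble])
    epistemic = qs.std()                      # proxy for critic (un)confidence
    if epistemic > threshold:                 # critic is unsure -> follow the oracle
        return oracle_action(state)
    return a                                  # critic is confident -> follow the learner
```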
Model-free deep reinforcement learning (RL) has been successfully applied to challenging continuous control domains. However, poor sample efficiency prevents these methods from being widely used in real-world domains. We address this problem by proposing a novel model-free algorithm, Realistic Actor-Critic (RAC), which aims to resolve the trade-off between value underestimation and overestimation by learning a family of policies with different degrees of trust in the Q-function. We construct Uncertainty-Punished Q-learning (UPQ), which uses an ensemble of multiple critics to control the estimation bias of the Q-function, allowing Q-functions to shift smoothly from lower to higher confidence bounds. Guided by these critics, RAC employs Universal Value Function Approximators (UVFA) to learn many optimistic and pessimistic policies simultaneously with the same neural network. Optimistic policies generate effective exploratory behaviors, while pessimistic policies reduce the risk of value overestimation to ensure stable updates of the policies and Q-functions. The method can be incorporated into any off-policy actor-critic RL algorithm. Our method achieves 10x sample efficiency and a 25% performance improvement compared to SAC on the most challenging Humanoid environment, obtaining an episode reward of $11107$ at $10^6$ time steps. All source code is available at https://github.com/ihuhuhu/rac.
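As a rough illustration of the uncertainty-punished target described above, here is a sketch that penalizes the ensemble-mean bootstrap value by its ensemble standard deviation; the exact penalty used by UPQ and the UVFA conditioning of RAC are assumptions not taken from the abstract.

```python
import torch

def upq_target(reward, done, next_q_ensemble, gamma=0.99, beta=1.0):
    """Hypothetical uncertainty-punished target in the spirit of UPQ.

    next_q_ensemble: tensor of shape [n_critics, batch] of bootstrapped Q(s', a')
    estimates. Subtracting beta * std shifts the target from an optimistic
    (beta near 0) towards a pessimistic (large beta) confidence bound; RAC
    conditions the policy and critics on this confidence level via a UVFA,
    which is omitted here.
    """
    mean_q = next_q_ensemble.mean(dim=0)
    std_q = next_q_ensemble.std(dim=0)
    return reward + gamma * (1.0 - done) * (mean_q - beta * std_q)
```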
In model-free deep reinforcement learning (RL) algorithms, using noisy value estimates to supervise policy evaluation and optimization is detrimental to sample efficiency. Since this noise is heteroscedastic, its effects can be mitigated with uncertainty-based weights in the optimization process. Previous methods rely on sampled ensembles, which do not capture all aspects of uncertainty. We provide a systematic analysis of the sources of uncertainty in the noisy supervision that occurs in RL, and introduce inverse-variance RL, a Bayesian framework which combines probabilistic ensembles and batch inverse-variance weighting. We propose a method in which two complementary uncertainty estimation methods account for the Q-value and the environment stochasticity to better mitigate the negative impacts of noisy supervision. Our results show significant improvements in terms of sample efficiency on discrete and continuous control tasks.
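The core idea of weighting noisy TD targets by their inverse variance can be sketched as below; how the per-sample variance is composed from the probabilistic ensemble and the aleatoric estimate in the paper is not shown here, so this is an assumption-laden outline rather than the paper's loss.

```python
import torch

def inverse_variance_loss(q_pred, td_target, target_var, eps=1e-2):
    """Hypothetical batch-inverse-variance weighted critic loss.

    target_var: per-sample variance of the noisy TD target (e.g. from a
    probabilistic critic ensemble plus an aleatoric head); its exact
    composition in the paper is not reproduced here.
    """
    weights = 1.0 / (target_var + eps)        # noisier targets get smaller weight
    weights = weights / weights.sum()         # normalize so the loss scale is stable
    return (weights * (q_pred - td_target.detach()) ** 2).sum()
```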
Uncertainty is ubiquitous in games, both in the agents playing the game and in the game itself. Uncertainty is therefore an important component of successful deep reinforcement learning agents. While substantial effort and progress have gone into understanding and handling uncertainty in supervised learning, the uncertainty-aware literature in deep reinforcement learning is less developed. Although many of the same issues concerning uncertainty in neural networks for supervised learning remain in reinforcement learning, there are additional sources of uncertainty due to the nature of the interactive environment. In this work, we provide an overview that motivates and presents existing techniques for uncertainty-aware deep reinforcement learning. These works have shown empirical benefits on a variety of reinforcement learning tasks. This work helps to consolidate the disparate results and to promote future research in this area.
The class of deep deterministic off-policy algorithms has been effectively applied to solving challenging continuous control problems. However, current approaches use random noise as a common exploration method, which has several weaknesses, such as the need for manual tuning for a given task and the absence of exploratory calibration during training. We address these challenges by proposing a novel guided exploration method that uses a differential directional controller to incorporate scalable exploratory action correction. An ensemble of Monte Carlo critics that provides the exploratory direction serves as the controller. The proposed method improves on traditional exploration schemes by dynamically changing the exploration. We then present a novel algorithm that leverages the proposed directional controller for policy and critic modification. The proposed algorithm outperforms modern reinforcement learning algorithms on a variety of problems from the DMControl suite.
Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy. That is, to succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.
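The maximum-entropy actor objective described above can be written compactly as minimizing E[alpha * log pi(a|s) - Q(s, a)] over reparameterized actions; the sketch below shows this standard form, with the clipped double-Q trick included as the common practical choice (not spelled out in the abstract).

```python
import torch

def sac_actor_loss(policy, q1, q2, states, alpha=0.2):
    """Maximum-entropy actor objective: maximize E[Q(s, a) - alpha * log pi(a|s)].

    policy(states) is assumed to return a reparameterized action sample and its
    log-probability; q1/q2 are twin critics and we take their minimum.
    """
    actions, log_pi = policy(states)          # reparameterized sample, so gradients flow
    q = torch.min(q1(states, actions), q2(states, actions))
    return (alpha * log_pi - q).mean()        # minimizing this maximizes reward + entropy
```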
Uncertainty quantification is one of the central challenges for machine learning in real-world applications. In reinforcement learning, an agent confronts two kinds of uncertainty, called epistemic uncertainty and aleatoric uncertainty. Disentangling and evaluating these uncertainties simultaneously offers the opportunity to improve the agent's final performance, accelerate training, and facilitate quality assurance after deployment. In this work, we extend the Deep Deterministic Policy Gradient (DDPG) algorithm into an uncertainty-aware reinforcement learning algorithm for continuous control tasks. It exploits epistemic uncertainty to accelerate exploration and aleatoric uncertainty to learn a risk-sensitive policy. We conduct numerical experiments showing that our DDPG variant outperforms vanilla DDPG without uncertainty estimation on benchmark tasks in robotic control and power-grid optimization.
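One plausible way to use the two disentangled uncertainties mentioned above is sketched below: an epistemic bonus for exploratory action selection and an aleatoric penalty for risk-sensitive exploitation. This is a hypothetical reading of the abstract; the paper's concrete mechanism may differ.

```python
def risk_and_exploration_signal(q_mean, epistemic_std, aleatoric_std,
                                kappa_explore=1.0, kappa_risk=1.0):
    """Hypothetical use of the two uncertainties described in the abstract.

    epistemic_std (model uncertainty) is added as an exploration bonus when
    ranking candidate actions, while aleatoric_std (return randomness) is
    subtracted to make the exploited policy risk-sensitive.
    """
    exploration_value = q_mean + kappa_explore * epistemic_std
    risk_sensitive_value = q_mean - kappa_risk * aleatoric_std
    return exploration_value, risk_sensitive_value
```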
System identification, also known as learning forward models, transfer functions, system dynamics, etc., has a long tradition both in science and engineering in different fields. Particularly, it is a recurring theme in Reinforcement Learning research, where forward models approximate the state transition function of a Markov Decision Process by learning a mapping function from current state and action to the next state. This problem is commonly defined as a Supervised Learning problem in a direct way. This common approach faces several difficulties due to the inherent complexities of the dynamics to learn, for example, delayed effects, high non-linearity, non-stationarity, partial observability and, more importantly, error accumulation when using bootstrapped predictions (predictions based on past predictions) over large time horizons. Here we explore the use of Reinforcement Learning in this problem. We elaborate on why and how this problem fits naturally and soundly as a Reinforcement Learning problem, and present some experimental results that demonstrate that RL is a promising technique to solve this kind of problem.
Most reinforcement learning algorithms take advantage of an experience replay buffer to repeatedly train on samples the agent has observed in the past. This prevents catastrophic forgetting; however, simply assigning equal importance to each sample is a naive strategy. In this paper, we propose a method to prioritize samples based on how much can be learned from them. We define the learn-ability of a sample as the steady decrease over time of the training loss associated with that sample. We develop an algorithm that prioritizes samples with high learn-ability while assigning lower priority to those that are hard to learn, typically caused by noise or stochasticity. We empirically show that our method is more robust than random sampling and also better than prioritizing only with respect to the training loss, i.e., the temporal difference loss, which is used in vanilla prioritized experience replay.
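A minimal sketch of a learn-ability-based priority follows, assuming the priority is derived from how steadily a sample's training loss decreases over a short history; the estimator, class name, and hyperparameters here are illustrative, not the paper's.

```python
from collections import deque, defaultdict

class LearnabilityPriority:
    """Hypothetical learn-ability priority: how steadily a sample's loss falls.

    A short loss history is kept per sample id; the priority is the average
    per-step decrease of that loss, clipped at a small floor, so noisy samples
    whose loss does not shrink are down-weighted.
    """
    def __init__(self, window=8, floor=1e-3):
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.floor = floor

    def update(self, sample_id, loss):
        self.history[sample_id].append(float(loss))

    def priority(self, sample_id):
        h = self.history[sample_id]
        if len(h) < 2:
            return 1.0                        # rarely-seen samples keep a default priority
        decrease = (h[0] - h[-1]) / (len(h) - 1)
        return max(decrease, self.floor)
```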
Accurate value estimates are important for off-policy reinforcement learning. Algorithms based on temporal difference learning are typically prone to an over- or underestimation bias. In this paper, we propose a general method called Adaptively Calibrated Critics (ACC) that uses the most recent high-variance but unbiased on-policy rollouts to alleviate the bias of the low-variance temporal difference targets. We apply ACC to Truncated Quantile Critics, an algorithm for continuous control that allows regulation of the bias with a hyperparameter tuned per environment. The resulting algorithm adaptively adjusts this parameter during training, rendering hyperparameter search unnecessary, and sets a new state of the art on the OpenAI Gym continuous control benchmark among all algorithms that do not tune hyperparameters for each environment. In addition, we demonstrate that ACC is quite general by further applying it to TD3 and showing improved performance in this setting as well.
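The calibration idea can be sketched as a simple feedback rule: if the critic's estimates exceed recent unbiased on-policy returns, increase the pessimism parameter, otherwise decrease it. The update below is a hypothetical instance of that rule, not the exact ACC procedure.

```python
def acc_update_bias_parameter(beta, q_estimates, rollout_returns, step_size=0.05,
                              beta_min=0.0, beta_max=5.0):
    """Hypothetical adaptive calibration step in the spirit of ACC.

    beta is the pessimism knob of the underlying algorithm (e.g. how many top
    quantiles TQC drops). If the critic overestimates recent unbiased on-policy
    returns, pessimism is increased; if it underestimates, it is decreased.
    """
    bias = sum(q - r for q, r in zip(q_estimates, rollout_returns)) / len(q_estimates)
    if bias > 0:
        beta = min(beta + step_size, beta_max)   # overestimation -> more pessimistic
    else:
        beta = max(beta - step_size, beta_min)   # underestimation -> less pessimistic
    return beta
```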
Although reinforcement learning (RL) is effective for sequential decision-making problems under uncertainty, it still fails to thrive in real-world systems where risk or safety is a binding constraint. In this paper, we formulate the RL problem with safety constraints as a non-zero-sum game. Deployed with maximum entropy RL, this formulation leads to a safe, adversarially guided soft actor-critic framework, called SAAC. In SAAC, the adversary aims to break the safety constraints, while the RL agent aims to maximize the constrained value function given the adversary's policy. The safety constraint on the agent's value function manifests only as a repulsion term between the agent's and the adversary's policies. Unlike previous approaches, SAAC can address different safety criteria such as safe exploration, mean-variance risk sensitivity, and CVaR-like coherent risk sensitivity. We illustrate the design of adversaries for these constraints. Then, in each of these variations, we show that in addition to learning to solve the task, the agent differentiates itself from the adversary's unsafe behavior. Finally, for challenging continuous control tasks, we demonstrate that SAAC achieves faster convergence, better efficiency, and fewer failures to satisfy the safety constraints than risk-averse distributional RL and risk-neutral soft actor-critic algorithms.
Off-policy reinforcement learning (RL) using a fixed offline dataset of logged interactions is an important consideration in real world applications. This paper studies offline RL using the DQN Replay Dataset comprising the entire replay experience of a DQN agent on 60 Atari 2600 games. We demonstrate that recent off-policy deep RL algorithms, even when trained solely on this fixed dataset, outperform the fully-trained DQN agent. To enhance generalization in the offline setting, we present Random Ensemble Mixture (REM), a robust Q-learning algorithm that enforces optimal Bellman consistency on random convex combinations of multiple Q-value estimates. Offline REM trained on the DQN Replay Dataset surpasses strong RL baselines. Ablation studies highlight the role of offline dataset size and diversity as well as the algorithm choice in our positive results. Overall, the results here present an optimistic view that robust RL algorithms used on sufficiently large and diverse offline datasets can lead to high quality policies. To provide a testbed for offline RL and reproduce our results, the DQN Replay Dataset is released at offline-rl.github.io.
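A compact sketch of the REM objective follows: sample random convex weights over the Q-heads, mix both online and target heads with the same weights, and apply a Huber Bellman loss to the mixture. Tensor shapes are assumptions for a discrete-action setting.

```python
import torch
import torch.nn.functional as F

def rem_loss(q_heads, target_q_heads, actions, rewards, dones, gamma=0.99):
    """Minimal sketch of the REM objective: Bellman consistency enforced on a
    random convex combination of Q-heads.

    q_heads / target_q_heads: tensors of shape [K, batch, n_actions];
    actions: long tensor of shape [batch].
    """
    k = q_heads.shape[0]
    alphas = torch.rand(k, 1, 1)
    alphas = alphas / alphas.sum(dim=0, keepdim=True)         # random convex weights
    q_mix = (alphas * q_heads).sum(dim=0)                     # [batch, n_actions]
    target_mix = (alphas * target_q_heads).sum(dim=0)
    q_sa = q_mix.gather(1, actions.view(-1, 1)).squeeze(1)
    target = rewards + gamma * (1.0 - dones) * target_mix.max(dim=1).values
    return F.smooth_l1_loss(q_sa, target.detach())
```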
Deep reinforcement learning (RL) algorithms suffer severe performance degradation when interaction data is scarce, which limits their real-world application. Recently, visual representation learning has been shown to be effective and promising for boosting sample efficiency in RL. These methods usually rely on contrastive learning and data augmentation to train a transition model for state prediction, which is different from how the model is used in RL, namely for value-based planning. Accordingly, the learned model may not align well with the environment or produce consistent value predictions, especially when the state transitions are not deterministic. To address this issue, we propose a novel method, called value-consistent representation learning (VCR), to learn representations that are directly related to decision-making. More specifically, VCR trains a model to predict the future state (also referred to as the "imagined state") based on the current one and a sequence of actions. Instead of aligning this imagined state with the real state returned by the environment, VCR applies a $Q$-value head on both states and obtains two distributions of action values. Then a distance is computed and minimized to force the imagined state to produce action-value predictions similar to those of the real state. We develop two implementations of this idea for discrete and continuous action spaces respectively. We conduct experiments on the Atari 100K and DeepMind Control Suite benchmarks to validate its effectiveness for improving sample efficiency. Our method is shown to achieve new state-of-the-art performance for search-free RL algorithms.
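A single-step, discrete-action sketch of the value-consistency idea is given below: the shared Q-head is applied to the imagined and the real next state, and the distance between the two action-value predictions is minimized. The encoder, transition model, distance choice (MSE), and single-step rollout are simplifying assumptions, not the paper's exact setup.

```python
import torch.nn.functional as F

def vcr_consistency_loss(encoder, transition_model, q_head, obs, actions, next_obs):
    """Hypothetical value-consistency loss in the spirit of VCR (discrete actions).

    The transition model rolls the encoded state forward to an 'imagined' state;
    instead of matching imagined and real states directly, the action values the
    shared Q-head assigns to them are matched.
    """
    imagined = transition_model(encoder(obs), actions)        # imagined next latent
    real = encoder(next_obs)                                  # real next latent
    q_imagined = q_head(imagined)                             # [batch, n_actions]
    q_real = q_head(real)
    return F.mse_loss(q_imagined, q_real.detach())            # gradient flows to the model
```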
Deep reinforcement learning (RL) has led to many recent and groundbreaking advances. However, these advances have often come at the cost of both increased scale in the underlying architectures being trained and increased complexity of the RL algorithms used to train them. These increases have in turn made it more difficult for researchers to rapidly prototype new ideas or to reproduce published RL algorithms. To address these concerns, this work describes Acme, a framework for constructing novel RL algorithms that is specifically designed to enable agents built from simple, modular components that can be used across a variety of execution scales. While the primary goal of Acme is to provide a framework for algorithm development, a secondary goal is to provide simple reference implementations of important or state-of-the-art algorithms. These implementations serve both as a validation of our design decisions and as an important contribution to reproducibility in RL research. In this work we describe the major design decisions made within Acme and give further details on how its components can be used to implement various algorithms. Our experiments provide baselines for a number of common and state-of-the-art algorithms and show how these algorithms can be scaled up to larger and more complex environments. This highlights one of the major benefits of Acme, namely that it can be used to implement large, distributed RL algorithms that run at very large scales while still maintaining the inherent readability of the implementation. This work presents a second version of the paper, which coincides with an increase in modularity, additional emphasis on offline, imitation, and learning-from-demonstrations algorithms, as well as various new agents implemented as part of Acme.
In recent years distributional reinforcement learning has produced many state of the art results. Increasingly sample efficient Distributional algorithms for the discrete action domain have been developed over time that vary primarily in the way they parameterize their approximations of value distributions, and how they quantify the differences between those distributions. In this work we transfer three of the most well-known and successful of those algorithms (QR-DQN, IQN and FQF) to the continuous action domain by extending two powerful actor-critic algorithms (TD3 and SAC) with distributional critics. We investigate whether the relative performance of the methods for the discrete action space translates to the continuous case. To that end we compare them empirically on the pybullet implementations of a set of continuous control tasks. Our results indicate qualitative invariance regarding the number and placement of distributional atoms in the deterministic, continuous action setting.
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data. This problem setting offers the promise of utilizing such datasets to acquire policies without any costly or dangerous active exploration. However, it is also challenging, due to the distributional shift between the offline training data and those states visited by the learned policy. Despite significant recent progress, the most successful prior methods are model-free and constrain the policy to the support of data, precluding generalization to unseen states. In this paper, we first observe that an existing model-based RL algorithm already produces significant gains in the offline setting compared to model-free approaches. However, standard model-based RL methods, designed for the online setting, do not provide an explicit mechanism to avoid the offline setting's distributional shift issue. Instead, we propose to modify the existing model-based RL methods by applying them with rewards artificially penalized by the uncertainty of the dynamics. We theoretically show that the algorithm maximizes a lower bound of the policy's return under the true MDP. We also characterize the trade-off between the gain and risk of leaving the support of the batch data. Our algorithm, Model-based Offline Policy Optimization (MOPO), outperforms standard model-based RL algorithms and prior state-of-the-art model-free offline RL algorithms on existing offline RL benchmarks and two challenging continuous control tasks that require generalizing from data collected for a different task.
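The reward penalization at the heart of MOPO can be sketched as follows, using the maximum norm of the predicted next-state standard deviation across the dynamics ensemble as the uncertainty estimate (one of the estimators discussed in the paper); lambda trades return against model risk.

```python
import numpy as np

def mopo_penalized_reward(pred_rewards, pred_next_state_stds, lam=1.0):
    """Minimal sketch of MOPO's uncertainty-penalized reward.

    pred_rewards: [n_models] reward predictions from a dynamics ensemble.
    pred_next_state_stds: [n_models, state_dim] predicted next-state std devs.
    """
    u = max(np.linalg.norm(std) for std in pred_next_state_stds)   # model uncertainty
    r_hat = float(np.mean(pred_rewards))
    return r_hat - lam * u                                          # pessimistic reward
```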
Recent advances in batch (offline) reinforcement learning have shown promising results in learning from available offline data and proved offline reinforcement learning to be an essential toolkit in learning control policies in a model-free setting. An offline reinforcement learning algorithm applied to a dataset collected by a suboptimal non-learning-based algorithm can result in a policy that outperforms the behavior agent used to collect the data. Such a scenario is frequent in robotics, where existing automation is collecting operational data. Although offline learning techniques can learn from data generated by a sub-optimal behavior agent, there is still an opportunity to improve the sample complexity of existing offline reinforcement learning algorithms by strategically introducing human demonstration data into the training process. To this end, we propose a novel approach that uses uncertainty estimation to trigger the injection of human demonstration data and guide policy training towards optimal behavior while reducing overall sample complexity. Our experiments show that this approach is more sample efficient when compared to a naive way of combining expert data with data collected from a sub-optimal agent. We augmented an existing offline reinforcement learning algorithm Conservative Q-Learning with our approach and performed experiments on data collected from MuJoCo and OffWorld Gym learning environments.
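A sketch of the uncertainty-triggered injection of demonstrations is shown below, assuming the trigger is a scalar uncertainty estimate (e.g. critic-ensemble disagreement) compared against a threshold; the actual trigger, mixing ratio, and buffer interfaces are assumptions not specified in the abstract.

```python
import random

def sample_training_batch(agent_buffer, demo_buffer, batch_size, uncertainty,
                          threshold=0.5, demo_fraction=0.25):
    """Hypothetical uncertainty-triggered injection of human demonstrations.

    agent_buffer / demo_buffer: objects with a .sample(n) method returning lists
    of transitions. When the current uncertainty estimate is high, part of the
    batch is drawn from demonstrations to guide training towards good behavior.
    """
    if uncertainty > threshold:
        n_demo = int(batch_size * demo_fraction)
        batch = demo_buffer.sample(n_demo) + agent_buffer.sample(batch_size - n_demo)
    else:
        batch = agent_buffer.sample(batch_size)
    random.shuffle(batch)                     # avoid ordering artifacts in the batch
    return batch
```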
Effectively leveraging large, previously collected datasets in reinforcement learning (RL) is a key challenge for large-scale real-world applications. Offline RL algorithms promise to learn effective policies from previously-collected, static datasets without further interaction. However, in practice, offline RL presents a major challenge, and standard off-policy RL methods can fail due to overestimation of values induced by the distributional shift between the dataset and the learned policy, especially when training on complex and multi-modal data distributions. In this paper, we propose conservative Q-learning (CQL), which aims to address these limitations by learning a conservative Q-function such that the expected value of a policy under this Q-function lower-bounds its true value. We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees. In practice, CQL augments the standard Bellman error objective with a simple Q-value regularizer which is straightforward to implement on top of existing deep Q-learning and actor-critic implementations. On both discrete and continuous control domains, we show that CQL substantially outperforms existing offline RL methods, often learning policies that attain 2-5 times higher final return, especially when learning from complex and multi-modal data distributions.
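The conservative regularizer that CQL adds to the standard Bellman error can be written, in its simplest discrete-action form, as a soft maximum over all actions minus the Q-value of the dataset action; the sketch below shows that term only, with the Bellman loss assumed to be computed elsewhere.

```python
import torch

def cql_regularizer(q_values, data_actions, alpha=1.0):
    """Conservative penalty of CQL in its simplest (discrete-action) form.

    q_values: [batch, n_actions] Q(s, .) from the learned critic.
    data_actions: long tensor [batch] of actions that appear in the dataset.
    The penalty pushes down a soft maximum over all actions and pushes up the
    Q-values of dataset actions, yielding a lower-bounded value estimate; it is
    added to the usual Bellman error objective.
    """
    logsumexp_q = torch.logsumexp(q_values, dim=1)
    data_q = q_values.gather(1, data_actions.view(-1, 1)).squeeze(1)
    return alpha * (logsumexp_q - data_q).mean()
```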
Soft Actor-Critic (SAC) is one of the state-of-the-art off-policy reinforcement learning (RL) algorithms within the maximum-entropy-based RL framework. SAC has been shown to perform extremely well on a list of continuous control tasks with superior stability and robustness. SAC learns a stochastic Gaussian policy that maximizes a trade-off between the expected return and the policy entropy. To update the policy, SAC minimizes the KL divergence between the density of the current policy and the density of the soft value function, which is then used to obtain an approximate gradient for the policy update. In this paper, we propose Soft Actor-Critic with Cross-Entropy Policy Optimization (SAC-CEPO), which uses the Cross-Entropy Method (CEM) to optimize the policy network of SAC. The initial idea is to use CEM to iteratively sample the distribution closest to the density of the soft value function and to use the resulting distribution as the target for updating the policy network. To reduce the computational complexity, we also introduce a decoupled policy structure that splits the Gaussian policy into one policy that learns the mean and another that learns the deviation, such that only the mean policy is trained by CEM. We show that this decoupled policy structure does converge to an optimum, and we also demonstrate by experiments that SAC-CEPO achieves performance competitive with the original SAC.
In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested.
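The two mechanisms named above, clipped double-Q targets and target-policy smoothing, combine into the target computation sketched below; the third ingredient, delayed policy updates, simply means the actor and target networks are updated less often than the critics.

```python
import torch

def td3_target(reward, done, next_state, target_policy, target_q1, target_q2,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, act_limit=1.0):
    """Clipped double-Q target with target-policy smoothing, as used in TD3.

    Clipped noise is added to the target action, and the minimum of the two
    target critics limits overestimation of the bootstrap value.
    """
    with torch.no_grad():
        next_action = target_policy(next_state)
        noise = (torch.randn_like(next_action) * noise_std).clamp(-noise_clip, noise_clip)
        next_action = (next_action + noise).clamp(-act_limit, act_limit)
        min_q = torch.min(target_q1(next_state, next_action),
                          target_q2(next_state, next_action))
        return reward + gamma * (1.0 - done) * min_q
```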