Designing effective model-based reinforcement learning algorithms is difficult because the ease of data generation must be weighed against the bias of model-generated data. In this paper, we study the role of model usage in policy optimization both theoretically and empirically. We first formulate and analyze a model-based reinforcement learning algorithm with a monotonic improvement guarantee at each step. In practice, this analysis is overly pessimistic and suggests that real off-policy data is always preferable to model-generated on-policy data, but we show that an empirical estimate of model generalization can be incorporated into such analysis to justify model usage. Motivated by this analysis, we demonstrate that a simple procedure of using short model-generated rollouts branched from real data has the benefits of more complicated model-based algorithms without their usual pitfalls. In particular, this approach surpasses the sample efficiency of prior model-based methods, matches the asymptotic performance of the best model-free algorithms, and scales to horizons that cause other model-based methods to fail entirely.
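As a concrete illustration of the short branched rollouts described above, the sketch below generates a few model steps starting from states sampled out of the real replay buffer. The `dynamics_model`, `policy`, `real_buffer`, and `model_buffer` interfaces are hypothetical placeholders, not the authors' implementation.

```python
def branched_rollouts(dynamics_model, policy, real_buffer, model_buffer,
                      num_starts=400, horizon=5):
    """Generate short model rollouts branched from states in the real buffer.

    Hypothetical interfaces:
      dynamics_model.step(s, a)    -> (next_state, reward, done)
      policy.act(s)                -> action
      real_buffer.sample_states(n) -> states observed in the real environment
      model_buffer.add(s, a, r, s2, done)
    """
    for s in real_buffer.sample_states(num_starts):
        for _ in range(horizon):                 # short rollouts limit compounding model bias
            a = policy.act(s)
            s2, r, done = dynamics_model.step(s, a)
            model_buffer.add(s, a, r, s2, done)  # model data later used for policy updates
            if done:
                break
            s = s2
```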
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon. Replacing standard single-step models with $\gamma$-models leads to generalizations of the procedures at the core of model-based control, including model rollouts and model-based value estimation. The $\gamma$-model, trained with a generative reinterpretation of temporal difference learning, is a natural continuous analogue of the successor representation and a hybrid between model-free and model-based mechanisms. Like a value function, it contains information about the long-term future; like a standard predictive model, it is independent of task reward. We instantiate the $\gamma$-model as both a generative adversarial network and a normalizing flow, discuss how its training reflects an inescapable trade-off between training-time and testing-time compounding errors, and empirically demonstrate its utility for prediction and control.
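The "infinite probabilistic horizon" can be made concrete as follows: a single sample from a $\gamma$-model is distributed like the state reached after a Geometric($1-\gamma$) number of environment steps. The toy Monte Carlo below emulates that target distribution by unrolling a made-up random-walk dynamics; it only illustrates what the model is trained to produce in one forward pass, not the paper's training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.95

def env_step(s):
    """Toy one-dimensional random-walk dynamics used only for illustration."""
    return s + rng.normal(loc=0.1, scale=0.2)

def sample_discounted_state(s0):
    """Draw one sample from the discounted state-occupancy measure.

    A gamma-model produces such samples directly in a single forward pass;
    here we emulate the target distribution by unrolling the toy dynamics
    for a Geometric(1 - gamma) number of steps (gamma = 0 recovers the
    ordinary single-step model).
    """
    horizon = rng.geometric(1.0 - gamma)  # >= 1
    s = s0
    for _ in range(horizon):
        s = env_step(s)
    return s

samples = np.array([sample_discounted_state(0.0) for _ in range(5000)])
print("mean of discounted occupancy samples:", samples.mean())
```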
Model-based reinforcement learning (RL) achieves higher sample efficiency in practice by learning a dynamics model to generate samples for policy learning. Previous works learn a "global" dynamics model that fits the state-action visitation distributions of all historical policies. In this paper, however, we find that learning a global dynamics model does not necessarily benefit model prediction for the current policy, since the policy in use is constantly evolving. The evolving policy during training causes state-action visitation distribution shifts. We theoretically analyze how the distributions of historical policies affect model learning and model rollouts. We then propose a novel model-based RL method named Policy-adapted Model-based Actor-Critic (PMAC), which learns a policy-adapted dynamics model based on a policy adaptation mechanism. This mechanism dynamically adjusts the mixture distribution over historical policies to ensure that the learned model can continually adapt to the state-action visitation distribution of the evolving policy. Experiments on a range of continuous control environments in MuJoCo show that PMAC achieves state-of-the-art asymptotic performance and almost twice the sample efficiency of prior model-based methods.
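One way to picture a policy-adaptation mechanism is as a reweighting of stored transitions toward those collected by recent policies when fitting the dynamics model. The exponential-decay weighting below is an assumed stand-in for illustration only; PMAC's actual mixture adjustment is derived from its analysis.

```python
import numpy as np

def policy_adapted_weights(policy_ids, decay=0.8):
    """Weight stored transitions so data from more recent policies dominates
    model training.

    policy_ids: array with the index of the policy iteration that collected
    each transition (0 = oldest). The exponential decay is an illustrative
    choice, not PMAC's adaptation rule.
    """
    policy_ids = np.asarray(policy_ids)
    age = policy_ids.max() - policy_ids          # 0 for the newest policy
    w = decay ** age
    return w / w.sum()

# Example: transitions from four policy iterations; newest are sampled most often.
ids = np.array([0, 0, 1, 1, 2, 2, 3, 3])
probs = policy_adapted_weights(ids)
batch = np.random.default_rng(0).choice(len(ids), size=4, p=probs)
print(probs.round(3), batch)
```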
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data. This problem setting offers the promise of utilizing such datasets to acquire policies without any costly or dangerous active exploration. However, it is also challenging, due to the distributional shift between the offline training data and those states visited by the learned policy. Despite significant recent progress, the most successful prior methods are model-free and constrain the policy to the support of data, precluding generalization to unseen states. In this paper, we first observe that an existing model-based RL algorithm already produces significant gains in the offline setting compared to model-free approaches. However, standard model-based RL methods, designed for the online setting, do not provide an explicit mechanism to avoid the offline setting's distributional shift issue. Instead, we propose to modify the existing model-based RL methods by applying them with rewards artificially penalized by the uncertainty of the dynamics. We theoretically show that the algorithm maximizes a lower bound of the policy's return under the true MDP. We also characterize the trade-off between the gain and risk of leaving the support of the batch data. Our algorithm, Model-based Offline Policy Optimization (MOPO), outperforms standard model-based RL algorithms and prior state-of-the-art model-free offline RL algorithms on existing offline RL benchmarks and two challenging continuous control tasks that require generalizing from data collected for a different task.
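The uncertainty-penalized reward is simple to state: subtract from the model reward an estimate of how much the dynamics model could be wrong. The sketch below uses ensemble disagreement as that estimate, which is one common heuristic choice; MOPO's analysis only requires the penalty to upper-bound the model error, and `lam` is a tunable weight.

```python
import numpy as np

def penalized_reward(rewards, ensemble_next_states, lam=1.0):
    """Penalize model rewards by an estimate of dynamics uncertainty.

    ensemble_next_states: array of shape (ensemble_size, batch, state_dim)
    with next-state predictions from a bootstrapped model ensemble. The
    penalty here (max std across state dimensions, i.e. ensemble
    disagreement) is an illustrative heuristic, not the paper's exact choice.
    """
    disagreement = ensemble_next_states.std(axis=0).max(axis=-1)  # (batch,)
    return rewards - lam * disagreement

# Tiny example with a 3-member ensemble on a batch of 2 transitions.
preds = np.random.default_rng(0).normal(size=(3, 2, 4))
print(penalized_reward(np.array([1.0, 1.0]), preds, lam=0.5))
```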
Designing and analyzing model-based RL (MBRL) algorithms with guaranteed monotonic improvement has been challenging, mainly due to the interdependence between policy optimization and model learning. Existing discrepancy bounds generally ignore the impacts of model shifts, and their corresponding algorithms are prone to degrade performance by drastic model updating. In this work, we first propose a novel and general theoretical scheme for a non-decreasing performance guarantee of MBRL. Our follow-up derived bounds reveal the relationship between model shifts and performance improvement. These discoveries encourage us to formulate a constrained lower-bound optimization problem to permit the monotonicity of MBRL. A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns. Motivated by these analyses, we design a simple but effective algorithm CMLO (Constrained Model-shift Lower-bound Optimization), by introducing an event-triggered mechanism that flexibly determines when to update the model. Experiments show that CMLO surpasses other state-of-the-art methods and produces a boost when various policy optimization methods are employed.
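A rough sketch of what an event-triggered model update can look like: refit the dynamics model only when enough fresh data has accumulated and the estimated distribution shift since the last fit is large. Both conditions and both thresholds below are placeholder assumptions; CMLO derives its actual trigger from the lower-bound analysis.

```python
def should_update_model(num_new_transitions, shift_estimate,
                        min_new=250, shift_threshold=0.1):
    """Event-triggered model update: refit the dynamics model only when enough
    fresh data has arrived and the estimated shift since the last model fit
    exceeds a threshold. The concrete criterion used by CMLO differs; this is
    only an illustration of the mechanism.
    """
    return num_new_transitions >= min_new and shift_estimate > shift_threshold

print(should_update_model(300, 0.25))   # True: enough data and large shift
print(should_update_model(300, 0.01))   # False: model still matches the data
```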
Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy; that is, to succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.
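The maximum-entropy objective shows up most directly in the critic target: the bootstrapped value is the minimum of two target critics minus a temperature-weighted log-probability of the sampled next action. A minimal sketch of that soft Bellman backup, with a fixed temperature for simplicity (SAC can also tune it automatically):

```python
import numpy as np

def soft_td_target(reward, done, q1_next, q2_next, logp_next, gamma=0.99, alpha=0.2):
    """Soft Bellman backup used by maximum-entropy actor-critic methods.

    q1_next, q2_next: target critic values Q(s', a') for an action a' sampled
    from the current policy; logp_next: log pi(a'|s'). The entropy bonus
    -alpha * log pi keeps the policy as random as possible while still
    maximizing return; alpha is the temperature.
    """
    min_q = np.minimum(q1_next, q2_next)          # clipped double-Q to reduce overestimation
    soft_value = min_q - alpha * logp_next        # soft state-value estimate
    return reward + gamma * (1.0 - done) * soft_value

print(soft_td_target(reward=1.0, done=0.0, q1_next=10.0, q2_next=9.5, logp_next=-1.2))
```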
Model-based reinforcement learning has attracted wide attention thanks to its sample efficiency. Despite its impressive success so far, it is still unclear how to appropriately schedule the important hyperparameters to achieve adequate performance, such as the real data ratio for policy optimization in Dyna-style model-based algorithms. In this paper, we first analyze the role of real data in policy training, which suggests that gradually increasing the ratio of real data yields better performance. Inspired by the analysis, we propose a framework named AutoMBPO to automatically schedule the real data ratio as well as other hyperparameters in training Model-Based Policy Optimization (MBPO), a representative running case of model-based methods. On several continuous control tasks, the MBPO instance trained with hyperparameters scheduled by AutoMBPO can significantly surpass the original one, and the real data ratio schedule found by AutoMBPO shows consistency with our theoretical analysis.
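A hand-crafted version of the schedule suggested by the analysis, gradually raising the fraction of real transitions in each policy-training batch, is sketched below. AutoMBPO learns such schedules automatically; the linear ramp, its endpoints, and the buffer interfaces here are illustrative assumptions.

```python
import numpy as np

def real_data_ratio(epoch, total_epochs, start=0.05, end=0.5):
    """Linearly increase the fraction of real transitions per policy batch.
    A fixed ramp standing in for the schedule that AutoMBPO would learn."""
    frac = min(max(epoch / max(total_epochs - 1, 1), 0.0), 1.0)
    return start + frac * (end - start)

def mixed_batch(real_buffer, model_buffer, batch_size, ratio, rng):
    """Sample a policy-training batch mixing real and model-generated data."""
    n_real = int(round(batch_size * ratio))
    real_idx = rng.integers(len(real_buffer), size=n_real)
    model_idx = rng.integers(len(model_buffer), size=batch_size - n_real)
    return [real_buffer[i] for i in real_idx] + [model_buffer[i] for i in model_idx]

rng = np.random.default_rng(0)
print([round(real_data_ratio(e, 10), 2) for e in range(10)])
print(mixed_batch(list(range(100)), list(range(100, 200)), 8, real_data_ratio(5, 10), rng))
```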
Reinforcement learning (RL) solves sequential decision-making problems through a trial-and-error process of interacting with the environment. While RL has achieved great success in playing complex video games, mistakes are always undesirable in the real world. To improve sample efficiency and thereby reduce mistakes, model-based reinforcement learning (MBRL) is believed to be a promising direction, as it builds environment models in which trial-and-error can take place without incurring real costs. In this survey, we review MBRL with a focus on recent progress in deep RL. For non-tabular environments, there is always a generalization error between the learned environment model and the real environment. It is therefore crucial to analyze the discrepancy between policy training in the environment model and in the real environment, which in turn guides algorithm design for better model learning, model usage, and policy training. In addition, we discuss recent progress of model-based techniques in other forms of RL, including offline RL, goal-conditioned RL, multi-agent RL, and meta-RL. Moreover, we discuss the applicability and advantages of MBRL in real-world tasks. Finally, we conclude this survey by discussing promising directions for the future development of MBRL. We believe that MBRL has great potential and advantages in real-world applications that have been overlooked, and we hope this survey can attract more research on MBRL.
Effectively leveraging large, previously collected datasets in reinforcement learning (RL) is a key challenge for large-scale real-world applications. Offline RL algorithms promise to learn effective policies from previously-collected, static datasets without further interaction. However, in practice, offline RL presents a major challenge, and standard off-policy RL methods can fail due to overestimation of values induced by the distributional shift between the dataset and the learned policy, especially when training on complex and multi-modal data distributions. In this paper, we propose conservative Q-learning (CQL), which aims to address these limitations by learning a conservative Q-function such that the expected value of a policy under this Q-function lower-bounds its true value. We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees. In practice, CQL augments the standard Bellman error objective with a simple Q-value regularizer which is straightforward to implement on top of existing deep Q-learning and actor-critic implementations. On both discrete and continuous control domains, we show that CQL substantially outperforms existing offline RL methods, often learning policies that attain 2-5 times higher final return, especially when learning from complex and multi-modal data distributions.
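A minimal sketch of the discrete-action CQL(H) regularizer: push down a log-sum-exp over all actions while pushing up the Q-values of dataset actions, on top of the usual Bellman error. Shapes and the weighting constant are assumptions for illustration, not the paper's exact configuration.

```python
import torch

def cql_loss(q_values, actions, td_target, cql_weight=1.0):
    """Conservative Q-learning objective for a discrete-action critic.

    q_values:  (batch, num_actions) Q(s, .) from the current critic.
    actions:   (batch,) actions actually stored in the offline dataset.
    td_target: (batch,) standard Bellman backup targets (computed elsewhere).

    The regularizer lowers a log-sum-exp over all actions while raising the
    Q-values of dataset actions, so the learned Q lower-bounds the true value.
    """
    q_data = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
    bellman_error = torch.nn.functional.mse_loss(q_data, td_target)
    conservative_gap = (torch.logsumexp(q_values, dim=1) - q_data).mean()
    return bellman_error + cql_weight * conservative_gap

q = torch.randn(4, 3, requires_grad=True)
loss = cql_loss(q, torch.tensor([0, 2, 1, 0]), torch.zeros(4))
loss.backward()
print(float(loss))
```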
While reinforcement learning (RL) methods that learn an internal model of the environment can be more sample efficient than their model-free counterparts, learning to model raw observations from high-dimensional sensors can be challenging. Prior work addresses this challenge by learning low-dimensional representations of observations through auxiliary objectives such as reconstruction or value prediction. However, the alignment between these auxiliary objectives and the RL objective is often unclear. In this work, we propose a single objective that jointly optimizes a latent-space model and a policy to achieve high returns while remaining self-consistent. This objective is a lower bound on expected returns. Unlike prior bounds for model-based RL on policy exploration or model guarantees, our bound is directly on the overall RL objective. We demonstrate that the resulting algorithm matches or improves the sample efficiency of the best prior model-based and model-free RL methods. While such sample-efficient methods are typically computationally demanding, our method reduces wall-clock time by about 50%.
A promising approach to improving the sample efficiency of reinforcement learning is model-based methods, in which many explorations and evaluations can take place in the learned models to save real-world samples. However, when the learned model has non-negligible model error, it is hard to accurately evaluate sequences of steps in the model, which limits the model's utilization. This paper proposes to alleviate this issue by introducing multi-step plans to replace multi-step actions for model-based RL. We employ multi-step plan value estimation, which evaluates the expected discounted return after executing a sequence of action plans from a given state, and we update the policy by directly computing the multi-step policy gradient via plan value estimation. The new model-based reinforcement learning algorithm MPPVE (Model-based Planning Policy Learning with Multi-step Plan Value Estimation) shows better utilization of the learned model and achieves better sample efficiency than state-of-the-art model-based RL methods.
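Multi-step plan value estimation can be sketched as rolling a fixed action plan through the learned model, accumulating discounted model rewards, and bootstrapping with a value function at the end. The `model`, `reward_fn`, and `value_fn` interfaces and the toy dynamics below are hypothetical; MPPVE additionally differentiates such an estimate to obtain a multi-step policy gradient.

```python
import numpy as np

def plan_value(model, reward_fn, value_fn, state, action_plan, gamma=0.99):
    """Estimate the discounted return of executing a fixed k-step action plan
    from `state` inside a learned model, then bootstrapping with a value function.

    Hypothetical interfaces: model(state, action) -> next_state,
    reward_fn(state, action) -> reward, value_fn(state) -> scalar value.
    """
    ret, discount, s = 0.0, 1.0, state
    for a in action_plan:
        ret += discount * reward_fn(s, a)
        s = model(s, a)
        discount *= gamma
    return ret + discount * value_fn(s)

# Toy linear dynamics purely for illustration.
model = lambda s, a: s + a
reward_fn = lambda s, a: -float(np.square(s).sum())
value_fn = lambda s: 0.0
print(plan_value(model, reward_fn, value_fn, np.array([1.0]), [np.array([-0.5])] * 3))
```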
Off-policy reinforcement learning aims to leverage experience collected from prior policies for sample-efficient learning. However, in practice, commonly used off-policy approximate dynamic programming methods based on Q-learning and actor-critic methods are highly sensitive to the data distribution, and can make only limited progress without collecting additional on-policy data. As a step towards more robust off-policy algorithms, we study the setting where the off-policy experience is fixed and there is no further interaction with the environment. We identify bootstrapping error as a key source of instability in current methods. Bootstrapping error is due to bootstrapping from actions that lie outside of the training data distribution, and it accumulates via the Bellman backup operator. We theoretically analyze bootstrapping error, and demonstrate how carefully constraining action selection in the backup can mitigate it. Based on our analysis, we propose a practical algorithm, bootstrapping error accumulation reduction (BEAR). We demonstrate that BEAR is able to learn robustly from different off-policy distributions, including random and suboptimal demonstrations, on a range of continuous control tasks.
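The backup constraint in BEAR keeps policy actions close to the data distribution as measured by a kernel MMD between sampled policy actions and dataset actions at the same state. The Gaussian-kernel estimator below is an illustrative assumption (the paper's kernel and sampled estimator differ); it shows how out-of-distribution actions produce a much larger MMD.

```python
import numpy as np

def mmd_squared(x, y, sigma=1.0):
    """Plug-in estimate of squared MMD between two action samples with a
    Gaussian kernel. BEAR constrains the learned policy so that this quantity
    (policy actions vs. dataset actions) stays below a threshold, keeping the
    backup away from out-of-distribution actions.
    """
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
data = rng.normal(0.0, 0.1, size=(32, 2))        # actions from the offline dataset
in_dist = rng.normal(0.0, 0.1, size=(32, 2))     # policy actions similar to the data
shifted = rng.normal(1.0, 0.1, size=(32, 2))     # out-of-distribution policy actions
print(mmd_squared(in_dist, data), mmd_squared(shifted, data))
```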
Solving multi-goal reinforcement learning (RL) problems with sparse rewards is generally challenging. Existing approaches relabel goals on collected experiences to alleviate the issues raised by sparse rewards. However, these methods are still limited in efficiency and cannot make full use of experiences. In this paper, we propose Model-based Hindsight Experience Replay (MHER), which exploits experiences more efficiently by leveraging environment dynamics to generate virtual achieved goals. Replacing original goals with virtual goals generated from interaction with a trained dynamics model leads to a novel relabeling method, model-based relabeling (MBR). Based on MBR, MHER performs both reinforcement learning and supervised learning for efficient policy improvement. Theoretically, we also prove that the supervised part of MHER, i.e., goal-conditioned supervised learning with MBR data, optimizes a lower bound on the multi-goal RL objective. Experimental results on several point-based tasks and simulated robotics environments show that MHER achieves significantly higher sample efficiency than previous model-free and model-based multi-goal methods.
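Model-based relabeling can be sketched as: roll the learned dynamics forward from a transition's next state for a few imagined steps, take the goal achieved there as a virtual goal, and relabel the transition with it. All interfaces below (`dynamics_model`, `policy`, `goal_fn`) are hypothetical placeholders, not the authors' implementation.

```python
def model_based_relabel(transition, dynamics_model, policy, goal_fn, horizon=3):
    """Relabel a goal-conditioned transition with a virtual goal reached by
    rolling the learned dynamics forward from the transition's next state.

    Hypothetical interfaces: dynamics_model(s, a) -> next_state,
    policy(s, g) -> action, goal_fn(s) -> achieved goal of state s.
    `transition` is a dict with keys 's', 'a', 's_next', 'g'. The relabeled
    transition can be used both for RL and for goal-conditioned supervised
    learning, as described above.
    """
    s, g = transition["s_next"], transition["g"]
    for _ in range(horizon):                      # imagined rollout in the model
        s = dynamics_model(s, policy(s, g))
    relabeled = dict(transition)
    relabeled["g"] = goal_fn(s)                   # a goal the agent actually achieves
    return relabeled

# Toy 1-D example: the model adds the action, and the achieved goal is the state itself.
t = {"s": 0.0, "a": 0.1, "s_next": 0.1, "g": 1.0}
print(model_based_relabel(t, lambda s, a: s + a, lambda s, g: 0.2, lambda s: s))
```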
Provably efficient model-based reinforcement learning (MBRL) based on optimism or posterior sampling (PSRL) achieves global optimality asymptotically by introducing a complexity measure of the model. However, the complexity can grow exponentially even for the simplest nonlinear models, for which global convergence is impossible within finite iterations. When the model suffers large generalization error, quantitatively measured by the model complexity, the uncertainty can be large. The sampled model that is greedily optimized against the current policy can therefore be unreliable, leading to aggressive policy updates and over-exploration. In this work, we propose Conservative Dual Policy Optimization (CDPO), which consists of a Referential Update and a Conservative Update. The policy is first optimized under a reference model, which imitates the mechanism of PSRL while offering greater stability. The conservative update then guards against model randomness by maximizing the expectation of model values. Without harmful sampling procedures, CDPO still achieves the same regret as PSRL. More importantly, CDPO enjoys monotonic policy improvement and global optimality simultaneously. Empirical results also validate the exploration efficiency of CDPO.
Real-world sequential decision making requires data-driven algorithms that provide practical guarantees on performance throughout training while also making efficient use of data. Model-free deep reinforcement learning offers a framework for such data-driven decision making, but existing algorithms typically focus on only one of these goals while sacrificing performance with respect to the other. On-policy algorithms guarantee policy improvement throughout training but suffer from high sample complexity, while off-policy algorithms make efficient use of data through sample reuse but lack theoretical guarantees. To balance these competing goals, we develop a class of Generalized Policy Improvement algorithms that combine the policy improvement guarantees of on-policy methods with the efficiency of theoretically supported sample reuse. We demonstrate the benefits of this new class of algorithms through extensive experimental analysis on a variety of continuous control tasks from the DeepMind Control Suite.
Because they rely on too many trials to learn good actions, current reinforcement learning (RL) algorithms have limited applicability in real-world settings, where exploration can be too costly. We propose a batch RL algorithm that learns effective policies using only a fixed offline dataset, rather than online interaction with the environment. The limited data in batch RL produces inherent uncertainty in value estimates for states and actions that are under-represented in the training data. This leads to particularly severe extrapolation errors when our candidate policies diverge from the policies that generated the data. We propose to mitigate this issue with two straightforward penalties: a policy constraint that reduces this divergence, and a value constraint that discourages overly optimistic estimates. Across a comprehensive set of 32 continuous-action batch RL benchmarks, our approach compares favorably to state-of-the-art methods, regardless of how the offline data was collected.
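A minimal sketch of an actor objective with the kind of policy-constraint penalty described above: the critic value of a policy action is discounted by a single-sample estimate of the divergence from the behavior policy. The specific divergence, the coefficient, and the omitted value-side penalty are assumptions for illustration, not the paper's exact formulation.

```python
def penalized_actor_objective(q_value, policy_logp, behavior_logp, alpha_policy=0.1):
    """Actor objective with a policy-constraint penalty.

    q_value: critic estimate Q(s, a) for an action a sampled from the current
    policy; policy_logp / behavior_logp: log-densities of that action under
    the current policy and under an estimate of the data-generating (behavior)
    policy. The KL-style term log pi - log beta discourages divergence from
    the batch data.
    """
    divergence = policy_logp - behavior_logp      # single-sample KL estimate
    return q_value - alpha_policy * divergence

print(penalized_actor_objective(q_value=5.0, policy_logp=-1.0, behavior_logp=-3.0))
```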
Learned models and policies can generalize effectively when evaluated within the distribution of the training data, but can produce unpredictable and erroneous outputs on out-of-distribution inputs. In order to avoid distribution shift when deploying learning-based control algorithms, we seek a mechanism to constrain the agent to states and actions that resemble those it was trained on. In control theory, Lyapunov stability and control-invariant sets allow us to make guarantees about controllers that stabilize the system around specific states, while in machine learning, density models allow us to estimate the training data distribution. Can we combine these two concepts to produce learning-based control algorithms that constrain the system to in-distribution states using only in-distribution actions? In this work, we propose to do this by combining concepts from Lyapunov stability and density estimation, introducing Lyapunov density models: a generalization of control Lyapunov functions and density models that provides guarantees on an agent's ability to stay in-distribution over its entire trajectory.
While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it. In this paper, we view reinforcement learning as inferring policies that achieve desired outcomes, rather than as a problem of maximizing rewards. To solve this inference problem, we establish a novel variational inference formulation that allows us to derive a well-shaped reward function which can be learned directly from environment interactions. From the corresponding variational objective, we also derive a new probabilistic Bellman backup operator and use it to develop an off-policy algorithm to solve goal-directed tasks. We empirically demonstrate that this method eliminates the need to hand-craft reward functions for a suite of diverse manipulation and locomotion tasks and leads to effective goal-directed behaviors.
Traditional model-based reinforcement learning (RL) methods generate forward rollout trajectories with a learned dynamics model to reduce interaction with the real environment. Recent model-based RL methods consider learning a backward model, which specifies the conditional probability of the previous state given the previous action and the current state, to generate backward rollout trajectories. However, in such model-based methods, the samples from backward rollouts and those from forward rollouts are simply aggregated to optimize the policy via a model-free RL algorithm, which may decrease both sample efficiency and convergence rate. This is because such an approach ignores the fact that backward rollout traces are often generated starting from some high-value states and are certainly more instructive for the agent to improve its behavior. In this paper, we propose the Backward Imitation and Forward Reinforcement Learning (BIFRL) framework, in which the agent treats backward rollout traces as expert demonstrations for imitating excellent behaviors, and then collects forward rollout transitions for policy reinforcement. Consequently, BIFRL empowers the agent both to reach and to explore from high-value states more efficiently, and further reduces real interactions, making it more suitable for real-robot learning. Moreover, a value-regularized generative adversarial network is introduced to augment the valuable states that are rarely visited by the agent. Theoretically, we provide the conditions under which BIFRL is superior to baseline methods. Experimentally, we demonstrate that BIFRL achieves better sample efficiency and produces competitive asymptotic performance on various MuJoCo locomotion tasks compared with state-of-the-art model-based methods.
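Backward rollouts can be sketched as repeatedly asking a backward policy for an action believed to lead into the current state and a backward model for the corresponding previous state, then reversing the trace so it reads forward into the high-value state. All interfaces below are hypothetical placeholders, not the BIFRL implementation.

```python
def backward_rollout(backward_model, backward_policy, high_value_state, horizon=5):
    """Generate a trace that ends at a high-value state.

    Hypothetical interfaces: backward_policy(s) -> an action believed to lead
    into s, and backward_model(s, a) -> a predicted previous state. The
    reversed trace can then be treated as a demonstration reaching the
    high-value state, as described above.
    """
    trace, s = [], high_value_state
    for _ in range(horizon):
        a = backward_policy(s)
        s_prev = backward_model(s, a)
        trace.append((s_prev, a, s))   # transition (previous state, action, state)
        s = s_prev
    trace.reverse()                    # forward order: earliest transition first
    return trace

# Toy 1-D illustration: the previous state is the current state minus the action.
print(backward_rollout(lambda s, a: s - a, lambda s: 0.5, high_value_state=3.0, horizon=3))
```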
We propose a new framework for imitation learning: treating imitation as a two-player ranking-based game between a policy and a reward. In this game, the reward agent learns to satisfy pairwise performance rankings between behaviors, while the policy agent learns to maximize this reward. In imitation learning, near-optimal expert data can be difficult to obtain, and even in the limit of infinite data it cannot imply a total ordering over trajectories as preferences can. On the other hand, learning from preferences alone is challenging, since a large number of preferences is required to infer a high-dimensional reward function, even though preference data is typically easier to collect than expert demonstrations. The classical inverse reinforcement learning (IRL) formulation learns from expert demonstrations but provides no mechanism to incorporate learning from offline preferences, and vice versa. We instantiate the proposed ranking-game framework with a novel ranking loss, giving an algorithm that can simultaneously learn from expert demonstrations and preferences, gaining the advantages of both modalities. Our experiments show that the proposed method achieves state-of-the-art sample efficiency and can solve previously unsolvable tasks in the Learning from Observation (LfO) setting.
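Reward learning from pairwise rankings is often implemented with a Bradley-Terry style cross-entropy on summed trajectory rewards, sketched below. Note that the paper above proposes its own novel ranking loss, so this block is only an illustrative stand-in for the general idea; the network architecture and shapes are assumptions.

```python
import torch

def pairwise_ranking_loss(reward_net, traj_lo, traj_hi):
    """Train a reward model to satisfy a pairwise ranking: traj_hi is preferred
    to traj_lo. Uses a Bradley-Terry style cross-entropy on summed per-step
    rewards, a common choice in preference-based reward learning.

    traj_*: tensors of shape (T, obs_dim) of observations from each behavior.
    """
    r_lo = reward_net(traj_lo).sum()
    r_hi = reward_net(traj_hi).sum()
    # Push P(hi preferred over lo) under the Bradley-Terry model toward 1.
    return -torch.nn.functional.logsigmoid(r_hi - r_lo)

reward_net = torch.nn.Sequential(torch.nn.Linear(3, 32), torch.nn.ReLU(),
                                 torch.nn.Linear(32, 1))
loss = pairwise_ranking_loss(reward_net, torch.randn(10, 3), torch.randn(10, 3))
loss.backward()
print(float(loss))
```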