Recently, methods such as Decision Transformer that reduce reinforcement learning to a prediction task and solve it via supervised learning (RvS) have become popular due to their simplicity, robustness to hyperparameters, and strong overall performance on offline RL tasks. However, simply conditioning a probabilistic model on a desired return and taking the predicted action can fail dramatically in stochastic environments, since trajectories that attain a given return may have done so only due to luck. In this work, we describe the limitations of RvS approaches in stochastic environments and propose a solution. Rather than conditioning on the return of a single trajectory, as is standard practice, our proposed method, ESPER, learns to cluster trajectories and conditions on average cluster returns, which are independent of environment stochasticity. Doing so allows ESPER to achieve strong alignment between target return and expected performance in real environments. We demonstrate this on several challenging stochastic offline-RL tasks, including the puzzle game 2048 and Connect Four played against a stochastic opponent. In all tested domains, ESPER achieves significantly better alignment between the target return and the achieved return than simply conditioning on returns, and it also achieves higher maximum performance than even the value-based baselines.
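The core relabelling step described above can be pictured in a few lines. This is only an illustrative sketch: ESPER learns its clustering adversarially so that clusters carry no information about environment stochasticity, whereas the k-means on hand-crafted trajectory features used here, and the dataset format, are assumptions made for brevity.

```python
import numpy as np
from sklearn.cluster import KMeans

def relabel_with_cluster_returns(trajectories, n_clusters=8, seed=0):
    """trajectories: list of dicts with 'states' (T x d array) and 'return' (float).
    Returns a per-trajectory conditioning target equal to its cluster's mean return."""
    # Crude trajectory embedding: mean state and final state (an assumption of the sketch).
    feats = np.stack([np.concatenate([t["states"].mean(0), t["states"][-1]])
                      for t in trajectories])
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(feats)
    returns = np.array([t["return"] for t in trajectories])
    cluster_mean = {c: returns[labels == c].mean() for c in np.unique(labels)}
    # Each trajectory is now labeled with the expected return of its cluster,
    # which is what the return-conditioned policy is trained against.
    return np.array([cluster_mean[c] for c in labels])

# Usage sketch: targets = relabel_with_cluster_returns(dataset); train the
# conditioned policy on `targets` in place of the observed trajectory returns.
```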
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
Recent work has shown that supervised learning alone, without temporal-difference (TD) learning, can be remarkably effective for offline RL. When is this the case, and which algorithmic components are necessary? Through extensive experiments, we boil supervised learning for offline RL down to its essential elements. In every environment suite we consider, simply maximizing likelihood with a two-layer feedforward MLP is competitive with substantially more complex methods based on TD learning or sequence modeling with Transformers. Carefully choosing model capacity (e.g., via regularization or architecture) and choosing which information to condition on (e.g., goals or rewards) are critical for performance. These insights serve as a practical guide for practitioners doing reinforcement learning via supervised learning (which we coin "RvS learning"). They also probe the limits of existing RvS methods, which are comparatively weak on random data, and raise a number of open questions.
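A minimal sketch of the recipe the abstract describes: a two-layer feedforward MLP conditioned on the state and an outcome (a goal or a target reward), trained purely by maximizing the likelihood of the dataset actions. The Gaussian action head, hidden width, and interface are assumptions of the sketch, not details fixed by the paper.

```python
import torch
import torch.nn as nn

class RvSPolicy(nn.Module):
    def __init__(self, state_dim, outcome_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + outcome_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def loss(self, states, outcomes, actions):
        mean = self.net(torch.cat([states, outcomes], dim=-1))
        dist = torch.distributions.Normal(mean, self.log_std.exp())
        return -dist.log_prob(actions).sum(-1).mean()   # negative log-likelihood

# Usage sketch: sample (state, outcome, action) tuples from the offline dataset
# and take Adam steps on policy.loss(states, outcomes, actions).
```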
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data. This problem setting offers the promise of utilizing such datasets to acquire policies without any costly or dangerous active exploration. However, it is also challenging, due to the distributional shift between the offline training data and those states visited by the learned policy. Despite significant recent progress, the most successful prior methods are model-free and constrain the policy to the support of data, precluding generalization to unseen states. In this paper, we first observe that an existing model-based RL algorithm already produces significant gains in the offline setting compared to model-free approaches. However, standard model-based RL methods, designed for the online setting, do not provide an explicit mechanism to avoid the offline setting's distributional shift issue. Instead, we propose to modify the existing model-based RL methods by applying them with rewards artificially penalized by the uncertainty of the dynamics. We theoretically show that the algorithm maximizes a lower bound of the policy's return under the true MDP. We also characterize the trade-off between the gain and risk of leaving the support of the batch data. Our algorithm, Model-based Offline Policy Optimization (MOPO), outperforms standard model-based RL algorithms and prior state-of-the-art model-free offline RL algorithms on existing offline RL benchmarks and two challenging continuous control tasks that require generalizing from data collected for a different task.
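The uncertainty-penalized reward the abstract refers to can be illustrated as follows. Using an ensemble's disagreement as the uncertainty estimate is one common instantiation and an assumption of this sketch, not necessarily the exact penalty used by MOPO.

```python
import numpy as np

def penalized_reward(models, state, action, lam=1.0):
    """models: list of learned dynamics models, each mapping (state, action)
    to (predicted_next_state, predicted_reward).  Returns the penalized reward."""
    preds = [m(state, action) for m in models]
    next_states = np.stack([p[0] for p in preds])
    rewards = np.array([p[1] for p in preds])
    uncertainty = next_states.std(axis=0).max()   # disagreement across the ensemble
    return rewards.mean() - lam * uncertainty

# During model-based policy optimization, short rollouts are generated inside the
# learned model and the agent is trained on these penalized rewards, discouraging
# it from exploiting regions where the model is unreliable.
```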
Credit assignment in reinforcement learning is the problem of measuring an action's influence on future rewards. In particular, this requires separating skill from luck, i.e., disentangling the effect of an action on rewards from that of external factors and subsequent actions. To achieve this, we adapt the notion of counterfactuals from causality theory to a model-free RL setup. The key idea is to condition value functions on future events, by learning to extract relevant information from a trajectory. We formulate a family of policy gradient algorithms that use these future-conditional value functions as baselines or critics, and show that they are provably low variance. To avoid the potential bias from conditioning on future information, we constrain the hindsight information to not contain information about the agent's own actions. We demonstrate the efficacy and validity of our algorithms on a number of illustrative and challenging problems.
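A rough sketch of how a future-conditional baseline would be used inside a policy-gradient update. The hindsight encoder producing the `hindsight` features, and the constraint keeping those features independent of the agent's actions, are assumed to exist already; only the advantage computation is shown.

```python
import torch

def future_conditional_pg_loss(log_probs, returns, baseline_net, states, hindsight):
    """log_probs, returns: (T,) tensors for one trajectory.
    states: (T, d_s); hindsight: (T, d_phi) learned summaries of the trajectory's future.
    baseline_net(states, hindsight) -> (T, 1) baseline predictions."""
    baselines = baseline_net(states, hindsight).squeeze(-1)
    advantages = returns - baselines.detach()            # counterfactual advantage
    policy_loss = -(log_probs * advantages).mean()
    baseline_loss = ((baselines - returns) ** 2).mean()  # fit the baseline to observed returns
    return policy_loss + baseline_loss
```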
Reinforcement learning (RL) solves sequential decision-making problems via a trial-and-error process of interaction with the environment. While RL has achieved great success in playing complex video games, making mistakes is always undesirable in the real world. To improve sample efficiency and thereby reduce errors, model-based reinforcement learning (MBRL) is believed to be a promising direction, as it builds environment models in which trial-and-error can take place without incurring real cost. In this survey, we review MBRL with a focus on recent advances in deep RL. For non-tabular environments, there is always a generalization error between the learned environment model and the real environment. It is therefore crucial to analyze the discrepancy between policy training in the environment model and in the real environment, which in turn guides better algorithm design for model learning, model usage, and policy training. In addition, we discuss recent progress in other forms of RL, including offline RL, goal-conditioned RL, multi-agent RL, and meta-RL. We further discuss the applicability and advantages of MBRL in real-world tasks. Finally, we close this survey by discussing promising directions for the future development of MBRL. We believe MBRL has great, often overlooked, potential and advantages in real-world applications, and we hope this survey will attract more research on MBRL.
Recent work has shown that offline reinforcement learning (RL) can be formulated as a sequence modeling problem (Chen et al., 2021; Janner et al., 2021) and solved via approaches similar to large-scale language modeling. However, any practical instantiation of RL also involves an online component, where policies pretrained on passive offline datasets are finetuned via task-specific interaction with the environment. We propose Online Decision Transformers (ODT), an RL algorithm based on sequence modeling that blends offline pretraining with online finetuning in a unified framework. Our framework uses a sequence-level entropy regularizer in conjunction with an autoregressive modeling objective for sample-efficient exploration and finetuning. Empirically, we show that ODT is competitive with the state-of-the-art in absolute performance on the D4RL benchmark, but shows much larger gains during the finetuning procedure.
Recent improvements in conditional generative modeling have made it possible to generate high-quality images from language descriptions alone. We investigate whether these methods can directly address the problem of sequential decision-making. We view decision-making not through the lens of reinforcement learning (RL), but rather through conditional generative modeling. To our surprise, we find that our formulation leads to policies that can outperform existing offline RL approaches across standard benchmarks. By modeling a policy as a return-conditional diffusion model, we illustrate how we may circumvent the need for dynamic programming and subsequently eliminate many of the complexities that come with traditional offline RL. We further demonstrate the advantages of modeling policies as conditional diffusion models by considering two other conditioning variables: constraints and skills. Conditioning on a single constraint or skill during training leads to behaviors at test-time that can satisfy several constraints together or demonstrate a composition of skills. Our results illustrate that conditional generative modeling is a powerful tool for decision-making.
Transformer, originally devised for natural language processing, has also achieved significant success in computer vision. Thanks to its strong expressive power, researchers are investigating ways to deploy transformers in reinforcement learning (RL), and transformer-based models have demonstrated their potential on representative RL benchmarks. In this paper, we collect and dissect recent advances in transforming RL with transformers (transformer-based RL, or TRL), in order to explore its development trajectory and future trends. We group existing developments into two categories, architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation, and autonomous driving. Architecture-enhancement methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework, modeling agents and environments much more precisely than deep RL methods, but they remain limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". Trajectory-optimization methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior cloning framework; they are able to extract policies from static datasets and fully use the long-sequence modeling capability of the transformer. Given these advancements, extensions of and challenges in TRL are reviewed and proposals for future directions are discussed. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
Deep reinforcement learning (RL) has led to many recent and groundbreaking advances. However, these advances have often come at the cost of both increased scale of the underlying architectures being trained and increased complexity of the RL algorithms used to train them. These increases have in turn made it more difficult for researchers to rapidly prototype new ideas or reproduce published RL algorithms. To address these concerns, this work describes Acme, a framework for constructing novel RL algorithms that is specifically designed to enable agents built from simple, modular components that can be used at various scales of execution. While the primary goal of Acme is to provide a framework for algorithm development, a secondary goal is to provide simple reference implementations of important or state-of-the-art algorithms. These implementations serve both as a validation of our design decisions and as an important contribution to reproducibility in RL research. In this work we describe the major design decisions made within Acme and give further details on how its components can be used to implement various algorithms. Our experiments provide baselines for a number of common and state-of-the-art algorithms, and show how these algorithms can be scaled up to much larger and more complex environments. This highlights one of the primary advantages of Acme, namely that it can be used to implement large, distributed RL algorithms that run at large scale while still maintaining the inherent readability of the implementation. This work presents a second version of the article that coincides with an increase in modularity, additional emphasis on offline, imitation, and learning-from-demonstrations algorithms, as well as various new agents implemented as part of Acme.
The impressive results of natural language processing (NLP) based on the Transformer neural network architecture have inspired researchers to explore viewing offline reinforcement learning (RL) as a generic sequence modeling problem. Recent works based on this paradigm have achieved state-of-the-art results on several mostly deterministic offline Atari and D4RL benchmarks. However, because these methods jointly model the states and actions as a single sequencing problem, they struggle to disentangle the effects of the policy and the world dynamics on the return. Thus, in adversarial or stochastic environments, these methods lead to overly optimistic behavior, which can be dangerous in safety-critical applications such as autonomous driving. In this work, we propose a method that addresses this optimism bias by explicitly disentangling the policy from the world model, which allows us at test time to search for policies that are robust to multiple possible futures in the environment. We demonstrate the superior performance of our method on a variety of autonomous driving tasks in simulation.
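Under the assumption that the policy and the world model have been disentangled into separate latent variables, the test-time search the abstract describes can be sketched as a worst-case evaluation over sampled world latents; the names and interfaces below are illustrative, not the paper's API.

```python
def robust_latent(policy_latents, world_latents, rollout_return, state):
    """rollout_return(state, z_pi, z_w) -> predicted return of the behaviour coded
    by z_pi under the future coded by z_w (both predictive models assumed given)."""
    best, best_score = None, float("-inf")
    for z_pi in policy_latents:
        # Score each candidate behaviour by its worst case over possible futures.
        worst_case = min(rollout_return(state, z_pi, z_w) for z_w in world_latents)
        if worst_case > best_score:
            best, best_score = z_pi, worst_case
    return best  # the agent then executes the first action of this behaviour
```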
In offline reinforcement learning (offline RL), one of the main challenges is dealing with the distributional shift between the learned policy and the given dataset. To address this problem, recent offline RL methods attempt to introduce a conservatism bias that encourages learning in high-confidence regions. Model-free approaches impose such a bias directly on policy or value-function learning via conservative regularization or special network structures, but their constrained policy search limits generalization beyond the offline dataset. Model-based approaches learn forward dynamics models with conservatism quantification and then generate imaginary trajectories to extend the offline dataset. However, due to the limited samples in offline datasets, the conservatism quantification often suffers from over-generalization within the support region. Unreliable conservatism estimates can mislead model-based imagination into undesired regions, leading to overly aggressive behaviors. To encourage more conservatism, we propose a model-based offline RL framework called Reverse Offline Model-based Imagination (ROMI). We learn a reverse dynamics model in conjunction with a novel reverse policy, which can generate rollouts leading to target goal states within the offline dataset. These reverse imaginations provide data augmentation for model-free policy learning and enable conservative generalization beyond the offline dataset. ROMI can be effectively combined with off-the-shelf model-free algorithms to achieve model-based generalization with proper conservatism. Empirical results show that our method produces more conservative behaviors and achieves state-of-the-art performance on offline RL benchmark tasks.
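A hedged sketch of reverse imagination: starting from a state already in the offline dataset, a reverse policy proposes an action that could have led into that state, and a reverse dynamics model predicts the predecessor, yielding backward rollouts that terminate inside the dataset's support. The reverse models and the reward function are assumed to have been trained on the offline data; their interfaces here are illustrative.

```python
def reverse_rollout(start_state, reverse_policy, reverse_dynamics, reward_fn, horizon=5):
    """Generate a backward trajectory of (s_prev, a, r, s_next) transitions ending at start_state."""
    transitions, s_next = [], start_state
    for _ in range(horizon):
        a = reverse_policy(s_next)               # action hypothesised to lead into s_next
        s_prev = reverse_dynamics(s_next, a)     # predicted predecessor state
        transitions.append((s_prev, a, reward_fn(s_prev, a), s_next))
        s_next = s_prev
    transitions.reverse()                        # store in forward (chronological) order
    return transitions

# The generated transitions are added to the replay buffer of an off-the-shelf
# model-free offline RL algorithm, which is the combination the abstract refers to.
```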
Recent work has shown that offline reinforcement learning (RL) can yield promising results when it is cast as a supervised learning problem solved with conditional policies. Decision Transformer (DT) combines the conditional-policy approach with the transformer architecture and shows competitive performance on several benchmarks. However, DT lacks stitching ability, one of the critical abilities for offline RL: learning an optimal policy from sub-optimal trajectories. The issue becomes significant when the offline dataset contains only sub-optimal trajectories. On the other hand, conventional RL approaches based on dynamic programming (such as Q-learning) do not suffer from this problem; however, they suffer from unstable learning behavior, especially when function approximation is employed in an off-policy learning setting. In this paper, we propose Q-learning Decision Transformer (QDT), which addresses the shortcomings of DT by leveraging the benefits of dynamic programming (Q-learning). QDT uses dynamic-programming (Q-learning) results to relabel the returns in the training data, and then trains a DT on the relabeled data. Our approach effectively exploits the benefits of both paradigms, each compensating for the other's shortcomings, to achieve better performance. We demonstrate the issue with DT and the advantage of QDT in simple environments. We also evaluate QDT on the more complex D4RL benchmark, where it shows good performance gains.
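One way to picture the relabelling step is the backward recursion below, where a value function learned with (conservative) Q-learning is allowed to raise the return-to-go targets before Decision Transformer training. The exact relabelling rule is a simplification assumed for illustration, not the paper's precise procedure.

```python
import numpy as np

def relabel_returns_to_go(rewards, next_states, value_fn, gamma=1.0):
    """rewards: (T,) array; next_states: (T, d) array;
    value_fn(s) -> conservative value estimate for state s (assumed pre-trained)."""
    rtg = np.zeros(len(rewards))
    tail = 0.0  # relabelled return-to-go from step t + 1 onwards
    for t in reversed(range(len(rewards))):
        # Keep whichever is larger: the observed future return or what the learned
        # value function believes is achievable from the next state.
        tail = rewards[t] + gamma * max(tail, float(value_fn(next_states[t])))
        rtg[t] = tail
    return rtg

# The relabelled values replace the observed return-to-go tokens when the Decision
# Transformer is trained, letting good behaviour found elsewhere in the dataset
# "stitch" value into otherwise sub-optimal trajectories.
```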
In many sequential decision-making problems (e.g., robot control, game playing, sequential prediction), human or expert data is available that contains useful information about the task. However, imitation learning (IL) from a small amount of expert data can be challenging in high-dimensional environments with complex dynamics. Behavioral cloning is a simple method that is widely used due to its straightforward implementation and stable convergence, but it does not exploit any information about the environment dynamics. Many existing methods that do are difficult to train in practice because of adversarial optimization over reward and policy approximators, or biased, high-variance gradient estimators. We introduce a method for dynamics-aware IL that avoids adversarial training by learning a single Q-function which implicitly represents both reward and policy. On standard benchmarks, the implicitly learned rewards show a high positive correlation with the ground-truth rewards, illustrating that our method can also be used for inverse reinforcement learning (IRL). Our method, Inverse soft-Q Learning (IQ-Learn), obtains state-of-the-art results in both offline and online imitation learning settings, significantly outperforming existing methods in the number of required environment interactions and in scalability to high-dimensional spaces, often by more than 3x.
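In the discrete-action soft-Q setting, the idea that a single Q-function encodes both the policy and the reward can be written in closed form; the tabular interface and unit temperature below are assumptions of the sketch, not the paper's exact formulation.

```python
import torch

def soft_value(q):                       # q: (batch, n_actions)
    return torch.logsumexp(q, dim=-1)    # V(s) = log sum_a exp Q(s, a)

def soft_policy(q):
    return torch.softmax(q, dim=-1)      # pi(a|s) proportional to exp Q(s, a)

def implicit_reward(q_sa, v_next, gamma=0.99, done=None):
    """Inverse soft Bellman operator: r(s, a) = Q(s, a) - gamma * V(s')."""
    mask = 1.0 if done is None else 1.0 - done
    return q_sa - gamma * mask * v_next

# Training then roughly amounts to fitting Q so that the implied reward is large on
# expert transitions, which is the adversarial-training-free objective the abstract
# alludes to.
```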
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
The predominant approach in reinforcement learning is to assign credit to actions based on the expected return. However, we show that the return may depend on the policy, which can lead to excessive variance in value estimation and slow down learning. Instead, we show that the advantage function can be interpreted as a causal effect and shares similar properties with causal representations. Based on this insight, we propose Direct Advantage Estimation (DAE), a new method that models the advantage function and estimates it directly from on-policy data, while simultaneously minimizing the variance of the return without requiring an (action-)value function. We also connect our method to temporal-difference methods by showing how they can be seamlessly integrated into DAE. The proposed method is easy to implement and can be readily adapted to modern actor-critic methods. We evaluate DAE empirically on three discrete control domains and show that it can outperform Generalized Advantage Estimation (GAE), a strong baseline for advantage estimation, on the majority of environments when applied to policy optimization.
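The two ingredients the abstract points to, centring the advantage head so that its policy-weighted mean is zero at every state and regressing the observed return onto the summed advantages, can be sketched as follows. Treating the initial-state value as a separate input and processing a single trajectory per call are simplifications assumed here.

```python
import torch

def centre_advantages(adv_logits, policy_probs):
    """adv_logits, policy_probs: (T, n_actions).  Enforces sum_a pi(a|s) A(s, a) = 0."""
    baseline = (policy_probs * adv_logits).sum(dim=-1, keepdim=True)
    return adv_logits - baseline

def dae_loss(adv_logits, policy_probs, actions, mc_return, v0, gammas):
    """actions: (T,) indices along one trajectory; mc_return: its Monte-Carlo return;
    v0: predicted value of the initial state; gammas: (T,) discounts gamma**t."""
    adv = centre_advantages(adv_logits, policy_probs)
    taken = adv.gather(1, actions.view(-1, 1)).squeeze(1)   # advantage of the taken actions
    prediction = v0 + (gammas * taken).sum()
    return (mc_return - prediction) ** 2                     # variance-of-return objective
```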
In the real world, acting on the environment with a poor policy can be expensive or even dangerous, which hampers real-world applications of reinforcement learning. Offline reinforcement learning (RL) can learn policies from a given dataset without interacting with the environment. However, the dataset is an offline RL algorithm's only source of information and determines the performance of the learned policy. We still lack studies of how dataset characteristics influence different offline RL algorithms. Therefore, we conduct a comprehensive empirical analysis of how dataset characteristics affect the performance of offline RL algorithms in discrete-action environments. A dataset is characterized by two metrics: (1) the average dataset return, measured by Trajectory Quality (TQ), and (2) the coverage, measured by State-Action Coverage (SACO). We find that variants of the off-policy Deep Q-Network family require datasets with high SACO to perform well. Algorithms that constrain the learned policy towards the given dataset perform well on datasets with high TQ or high SACO. For datasets with high TQ, Behavior Cloning outperforms or performs similarly to the best offline RL algorithms.
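A small sketch of how the two dataset metrics could be computed for a discrete-action dataset. The exact normalisations used in the paper (e.g., the reference returns and the reference coverage count) are assumptions in this sketch.

```python
import numpy as np

def trajectory_quality(episode_returns, random_return, expert_return):
    """TQ: mean dataset return, normalised between a random and an expert policy."""
    mean_return = np.mean(episode_returns)
    return (mean_return - random_return) / (expert_return - random_return)

def state_action_coverage(states, actions, reference_unique_pairs):
    """SACO: number of unique (state, action) pairs in the dataset, relative to a
    reference count (e.g., the pairs visited by an exploratory online policy)."""
    unique_pairs = {(tuple(np.atleast_1d(s)), int(a)) for s, a in zip(states, actions)}
    return len(unique_pairs) / reference_unique_pairs
```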
Reinforcement learning (RL) algorithms assume that users specify tasks by manually writing a reward function. However, this process can be laborious and demands considerable technical expertise. Can we instead design RL algorithms that support users in specifying tasks by providing examples of successful outcomes? In this paper, we derive a control algorithm that maximizes the future probability of these successful-outcome examples. Prior work has approached similar problems by first learning a reward function and then optimizing it with another RL algorithm. In contrast, our method learns a value function directly from transitions and successful outcomes, without learning an intermediate reward function. As a result, it requires less tweaking and fewer lines of code to debug. We show that our method satisfies a new data-driven Bellman equation in which examples take the place of the usual reward-function term. Experiments show that our approach outperforms prior methods that learn explicit reward functions.
Multi-agent reinforcement learning (MARL) has made significant progress over the past decade, but many challenges remain, such as high sample complexity and slow convergence to stable policies, that need to be overcome before widespread deployment is possible. However, in practice many real-world environments already deploy sub-optimal or heuristic approaches for generating policies. An interesting question is how to best use such approaches as advisors to help improve reinforcement learning in multi-agent domains. In this paper, we provide a principled framework for incorporating action recommendations from online sub-optimal advisors in multi-agent settings. We describe the problem of Advising Multiple Intelligent Reinforcement Agents (ADMIRAL) in general-sum stochastic game environments and present two novel Q-learning-based algorithms: ADMIRAL - Decision Making (ADMIRAL-DM) and ADMIRAL - Advisor Evaluation (ADMIRAL-AE), which allow us to improve learning by appropriately incorporating an advisor's recommendations (ADMIRAL-DM) and to evaluate an advisor's effectiveness (ADMIRAL-AE). We analyse the algorithms theoretically and provide fixed-point guarantees on their learning in general-sum stochastic games. Furthermore, extensive experiments illustrate that these algorithms can be used in a variety of environments, perform favourably compared to related baselines, scale to large state-action spaces, and are robust to poor advice from advisors.
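In the spirit of the decision-making variant, one simple way to fold an advisor into tabular Q-learning is to follow its recommendation with a decaying probability while leaving the Q-update untouched. The environment interface and the schedule below are illustrative assumptions, and the multi-agent bookkeeping of the paper is omitted; this is a single-agent sketch.

```python
import random

def q_learning_with_advisor(env, advisor, episodes=500, alpha=0.1, gamma=0.99,
                            eps=0.1, advisor_prob=0.5, decay=0.999):
    """advisor(state) -> recommended action.  env is assumed to expose reset(),
    step(a) -> (next_state, reward, done), actions(state), and sample_action()."""
    Q = {}
    get = lambda s, a: Q.get((s, a), 0.0)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < advisor_prob:
                a = advisor(s)                          # follow the advisor's recommendation
            elif random.random() < eps:
                a = env.sample_action()                 # ordinary exploration
            else:
                a = max(env.actions(s), key=lambda x: get(s, x))
            s2, r, done = env.step(a)
            target = r + (0.0 if done else gamma * max(get(s2, x) for x in env.actions(s2)))
            Q[(s, a)] = get(s, a) + alpha * (target - get(s, a))
            s = s2
        advisor_prob *= decay                           # rely on the advisor less over time
    return Q
```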
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high-dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in the real-world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, and inverse reinforcement learning, which are related but are not classical RL algorithms. The role of simulators in training agents, as well as methods to validate, test, and robustify existing solutions in RL, are also discussed.