In this paper we argue for the fundamental importance of the value distribution: the distribution of the random return received by a reinforcement learning agent. This is in contrast to the common approach to reinforcement learning which models the expectation of this return, or value. Although there is an established body of literature studying the value distribution, thus far it has always been used for a specific purpose such as implementing risk-aware behaviour. We begin with theoretical results in both the policy evaluation and control settings, exposing a significant distributional instability in the latter. We then use the distributional perspective to design a new algorithm which applies Bellman's equation to the learning of approximate value distributions. We evaluate our algorithm using the suite of games from the Arcade Learning Environment. We obtain both state-of-the-art results and anecdotal evidence demonstrating the importance of the value distribution in approximate reinforcement learning. Finally, we combine theoretical and empirical evidence to highlight the ways in which the value distribution impacts learning in the approximate setting.
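For concreteness, the algorithm sketched in this abstract represents the return distribution on a fixed categorical support and applies a projected Bellman backup to it. Below is a minimal NumPy sketch of that projection step; the support range, the handling of terminal transitions, and all variable names are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def categorical_projection(next_probs, rewards, dones, gamma=0.99,
                           v_min=-10.0, v_max=10.0, n_atoms=51):
    """Project the distributional Bellman target r + gamma*z onto a fixed
    categorical support {z_1, ..., z_K} (categorical/C51-style backup sketch)."""
    batch = next_probs.shape[0]
    z = np.linspace(v_min, v_max, n_atoms)          # fixed return atoms
    dz = (v_max - v_min) / (n_atoms - 1)

    # Bellman-shifted atoms, clipped to the support range.
    tz = rewards[:, None] + gamma * (1.0 - dones[:, None]) * z[None, :]
    tz = np.clip(tz, v_min, v_max)

    # Distribute each target atom's probability mass onto its two neighbours.
    b = (tz - v_min) / dz
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)
    proj = np.zeros_like(next_probs)
    for i in range(batch):
        for j in range(n_atoms):
            l, u = lower[i, j], upper[i, j]
            if l == u:                               # target landed exactly on an atom
                proj[i, l] += next_probs[i, j]
            else:
                proj[i, l] += next_probs[i, j] * (u - b[i, j])
                proj[i, u] += next_probs[i, j] * (b[i, j] - l)
    return proj                                      # used as a cross-entropy target
```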
In reinforcement learning an agent interacts with the environment by taking actions and observing the next state and reward. When sampled probabilistically, these state transitions, rewards, and actions can all induce randomness in the observed long-term return. Traditionally, reinforcement learning algorithms average over this randomness to estimate the value function. In this paper, we build on recent work advocating a distributional approach to reinforcement learning in which the distribution over returns is modeled explicitly instead of only estimating the mean. That is, we examine methods of learning the value distribution instead of the value function. We give results that close a number of gaps between the theoretical and algorithmic results given by Bellemare, Dabney, and Munos (2017). First, we extend existing results to the approximate distribution setting. Second, we present a novel distributional reinforcement learning algorithm consistent with our theoretical formulation. Finally, we evaluate this new algorithm on the Atari 2600 games, observing that it significantly outperforms many of the recent improvements on DQN, including the related distributional algorithm C51.
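The algorithm referenced here trains a fixed set of quantile estimates of the return distribution via quantile regression. A hedged NumPy sketch of the quantile Huber loss typically used for this follows; the array shapes and the kappa threshold are assumptions, not the paper's exact code.

```python
import numpy as np

def quantile_huber_loss(pred_quantiles, target_samples, kappa=1.0):
    """Quantile-regression (Huber) loss for a QR-DQN-style critic (sketch).

    pred_quantiles : (N,) predicted quantile values theta_1..theta_N
                     at midpoints tau_i = (2i - 1) / (2N).
    target_samples : (M,) samples of the Bellman target r + gamma * Z(s', a').
    """
    n = pred_quantiles.shape[0]
    taus = (np.arange(n) + 0.5) / n                      # quantile midpoints

    # Pairwise TD errors u_ij = target_j - prediction_i.
    u = target_samples[None, :] - pred_quantiles[:, None]
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    # Asymmetric quantile weighting |tau - 1{u < 0}|.
    weight = np.abs(taus[:, None] - (u < 0).astype(float))
    return np.mean(weight * huber / kappa)
```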
In this work, we continue to build on recent advances in reinforcement learning for finite Markov processes. A common approach among previously existing algorithms, both single-actor and distributed, is to either clip the rewards or apply a transformation to the Q-function in order to handle the wide range of magnitudes found in the true discounted returns. We show theoretically that the most successful of these methods may fail to yield an optimal policy when the process is non-deterministic. As a solution, we argue that distributional reinforcement learning lends itself to remedying this situation completely. By introducing a conjugated distributional operator, we can handle a large class of transformations with guaranteed theoretical convergence. We propose an approximate single-actor algorithm based on this operator that trains the agent directly on the unclipped rewards, using a proper distributional metric given by the Cramér distance. Trained in a stochastic environment, a suite of 35 Atari 2600 games with sticky actions, the agent obtains state-of-the-art performance compared with other well-known algorithms in the Dopamine framework.
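The Cramér distance mentioned above is the L2 distance between cumulative distribution functions; for two categorical distributions on the same evenly spaced support it reduces to a simple sum over CDF differences. A small sketch under that assumption:

```python
import numpy as np

def cramer_distance(p, q, support):
    """Cramer (l2) distance between two categorical distributions p and q
    defined on the same evenly spaced support of atoms."""
    dz = support[1] - support[0]                 # assumes uniform atom spacing
    cdf_gap = np.cumsum(p) - np.cumsum(q)
    return np.sqrt(dz * np.sum(cdf_gap ** 2))
```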
In dynamic programming (DP) and reinforcement learning (RL), an agent learns to act optimally in terms of expected long-term return by sequentially interacting with an environment modeled by a Markov decision process (MDP). More generally, in distributional reinforcement learning (DRL), the focus is on the whole distribution of the return, not just its expectation. Although DRL-based methods have produced state-of-the-art performance in RL, they involve additional quantities (compared with the non-distributional setting) that are still not well understood. As a first contribution, we introduce a new class of distributional operators, together with a practical DP algorithm for policy evaluation, that come with a robust MDP interpretation. Indeed, our approach reformulates the problem through an augmented state space in which each state is split into a worst-case substate and a best-case substate, whose values are maximized by safe and risky policies respectively. Finally, we derive distributional operators and DP algorithms that solve a new control task: how to distinguish safe from risky optimal actions in order to break ties in the space of optimal policies?
We study multi-step off-policy learning approaches for distributional RL. Despite the apparent similarity between value-based RL and distributional RL, our study reveals intriguing and fundamental differences between the two settings in the multi-step case. We identify a novel notion of path-dependent distributional TD error, which is indispensable for principled multi-step distributional RL. The distinction from the value-based case has important implications for concepts such as backward-view algorithms. Our work provides the first theoretical guarantees for multi-step off-policy distributional RL algorithms, including results that apply to existing approaches to multi-step distributional RL. In addition, we derive a novel algorithm, Quantile Regression-Retrace, which leads to a deep RL agent, QR-DQN-Retrace, showing empirical improvements over QR-DQN on the Atari-57 benchmark. Overall, we shed light on how the unique challenges of multi-step distributional RL can be addressed in both theory and practice.
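For context on the Retrace-style corrections that QR-DQN-Retrace builds on, here is a hedged sketch of the standard scalar (value-based) Retrace target with truncated importance-sampling traces c_t = lambda * min(1, pi/mu); the distributional, path-dependent version described in the abstract is more involved and is not reproduced here.

```python
import numpy as np

def retrace_targets(q, v_next, rewards, rho, gamma=0.99, lam=1.0):
    """Scalar Retrace(lambda) targets for one off-policy trajectory (sketch).

    q       : (T,) Q(x_t, a_t) along the behaviour trajectory.
    v_next  : (T,) E_{a ~ pi}[Q(x_{t+1}, a)], the target policy's value at x_{t+1}.
    rewards : (T,) observed rewards.
    rho     : (T,) importance ratios pi(a_t | x_t) / mu(a_t | x_t).
    """
    c = lam * np.minimum(1.0, rho)               # truncated importance traces
    delta = rewards + gamma * v_next - q         # one-step TD errors
    T = len(rewards)
    g = np.zeros(T)
    acc = 0.0
    # Backward recursion: G_t = delta_t + gamma * c_{t+1} * G_{t+1}.
    for t in reversed(range(T)):
        g[t] = delta[t] + (gamma * c[t + 1] * acc if t + 1 < T else 0.0)
        acc = g[t]
    return q + g                                 # Retrace targets Q(x_t, a_t) + G_t
```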
In real-world scenarios, the state observations an agent receives may contain measurement errors or adversarial noise, misleading the agent into taking suboptimal actions or even causing training to collapse. In this paper, we study the training robustness of distributional reinforcement learning (RL), a class of state-of-the-art methods that estimate the whole distribution, rather than only the expectation, of the total return. First, we verify the contraction of both expectation-based and distributional Bellman operators in the State-Noisy Markov Decision Process (SN-MDP), a typical tabular case that covers both random and adversarial state observation noise. Beyond the SN-MDP, we then analyze the vulnerability of the least squares loss in expectation-based RL with linear or nonlinear function approximation. By contrast, we theoretically characterize the bounded gradient norm of the distributional RL loss based on histogram density estimation. The resulting stable gradients during the optimization of distributional RL account for its better training robustness against state observation noise. Finally, extensive experiments on a suite of games verify the convergence of both expectation-based and distributional RL in the SN-MDP-like setting under different strengths of state observation noise. More importantly, in noisy settings beyond the SN-MDP, distributional RL is less vulnerable to noisy state observations than its expectation-based counterpart.
Non-cooperative and cooperative games with a very large number of players have many applications but generally remain intractable as the number of players grows. Introduced by Lasry and Lions, and by Huang, Caines and Malhamé, Mean Field Games (MFGs) rely on a mean-field approximation to allow the number of players to grow to infinity. Traditional methods for solving these games usually rely on solving partial or stochastic differential equations with full knowledge of the model. Recently, reinforcement learning (RL) has appeared promising for solving complex problems. By combining MFGs and RL, we hope to solve games at a very large scale, both in terms of population size and environment complexity. In this survey, we review the recent literature on learning Nash equilibria in MFGs. We first identify the most common settings (static, stationary, and evolutive). We then present a general framework for classical iterative methods (based on best-response computation or policy evaluation) to solve MFGs exactly. Building on these algorithms and their connection with Markov decision processes, we explain how RL can be used to learn MFG solutions in a model-free way. Finally, we present numerical illustrations on benchmark problems and conclude with some perspectives.
The recent mean field game (MFG) formalism facilitates otherwise intractable computation of approximate Nash equilibria in many-agent settings. In this paper, we consider discrete-time finite MFGs with finite-horizon objectives. We show that all discrete-time finite MFGs with non-constant fixed point operators fail to be contractive, as is typically assumed in the existing MFG literature, barring convergence via fixed point iteration. Instead, we incorporate entropy regularization and Boltzmann policies into the fixed point iteration. As a result, we obtain provable convergence to approximate fixed points where existing methods fail, and reach the original goal of approximate Nash equilibria. All proposed methods are evaluated with respect to their exploitability, both on instructive examples with tractable exact solutions and on higher-dimensional problems where exact methods become intractable. In the higher-dimensional scenarios, we apply established deep reinforcement learning methods and empirically combine fictitious play with our approximations.
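The Boltzmann policies used in the regularized fixed point iteration are softmax policies over Q-values at a given temperature. A minimal sketch of that ingredient follows (not the full MFG loop, whose mean-field update depends on the model); the temperature is an assumed hyperparameter.

```python
import numpy as np

def boltzmann_policy(q_values, temperature=1.0):
    """Softmax (Boltzmann) policy over actions from Q-values, the smoothed
    best response used inside an entropy-regularized fixed point iteration."""
    logits = q_values / temperature
    logits = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=-1, keepdims=True)
```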
We introduce a method for policy improvement that interpolates between the greedy approach of value-based reinforcement learning (RL) and the full planning approach typical of model-based RL. The new method builds on the concept of a geometric horizon model (GHM, also known as a gamma-model), which models the discounted state-visitation distribution of a given policy. We show that we can evaluate any non-Markov policy that switches between a set of base Markov policies with fixed probability by a careful composition of the base policy GHMs, without any additional learning. We can then apply generalised policy improvement (GPI) to the collection of such non-Markov policies to obtain a new Markov policy that will in general outperform its precursors. We provide a thorough theoretical analysis of this approach, develop applications to transfer and standard RL, and empirically demonstrate its effectiveness over standard GPI on a challenging deep RL continuous control task. We also provide an analysis of GHM training methods, proving a novel convergence result regarding previously proposed methods and showing how to train these models stably in deep RL settings.
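Generalised policy improvement over a set of base policies reduces, at action-selection time, to acting greedily with respect to the pointwise maximum of their action-value functions. A small sketch, assuming tabular Q-value arrays:

```python
import numpy as np

def gpi_action(q_values_per_policy, state):
    """Generalised policy improvement: act greedily w.r.t. max_i Q^{pi_i}(s, a).

    q_values_per_policy : list of (num_states, num_actions) arrays, one per base policy.
    """
    stacked = np.stack([q[state] for q in q_values_per_policy])  # (num_policies, num_actions)
    return int(np.argmax(stacked.max(axis=0)))
```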
We present a new behavioural distance over the state space of a Markov decision process and demonstrate the use of this distance as an effective means of shaping the learnt representations of deep reinforcement learning agents. While existing notions of state similarity are typically difficult to learn at scale due to high computational cost and the lack of sample-based algorithms, our new distance addresses both of these issues. In addition to providing a detailed theoretical analysis, we provide empirical evidence that learning this distance alongside the value function yields structured and informative representations, including strong results on the Arcade Learning Environment benchmark.
Distributional reinforcement learning (RL) is a class of state-of-the-art algorithms that estimate the whole distribution of the total return rather than only its expectation. Despite the remarkable performance of distributional RL, a theoretical understanding of its advantages over expectation-based RL remains elusive. In this paper, we attribute the superiority of distributional RL to its regularization effect, which stems from the value distribution information beyond the expectation. First, by leveraging a variant of the gross error model from robust statistics, we decompose the value distribution into its expectation and the remaining distribution part. As such, the extra benefit of distributional RL over expectation-based RL is mainly interpreted as the impact of a risk-sensitive entropy regularization within the Neural Fitted Z-Iteration framework. Meanwhile, we establish a bridge between this risk-sensitive entropy regularization of distributional RL and the vanilla entropy regularization in maximum entropy RL, focusing specifically on actor-critic algorithms. This reveals that distributional RL induces a corrected reward function and thus promotes risk-sensitive exploration against the intrinsic uncertainty of the environment. Finally, extensive experiments corroborate the regularization role of distributional RL and the mutual impact of different entropy regularizations. Our study paves the way towards better interpreting the efficacy of distributional RL algorithms, especially through the lens of regularization.
In this paper we develop a theoretical analysis of the performance of sampling-based fitted value iteration (FVI) to solve infinite state-space, discounted-reward Markovian decision processes (MDPs) under the assumption that a generative model of the environment is available. Our main results come in the form of finite-time bounds on the performance of two versions of sampling-based FVI. The convergence rate results obtained allow us to show that both versions of FVI are well-behaved, in the sense that by using a sufficiently large number of samples, arbitrarily good performance can be achieved with high probability for a large class of MDPs. An important feature of our proof technique is that it permits the study of weighted L^p-norm performance bounds. As a result, our technique applies to a large class of function-approximation methods (e.g., neural networks, adaptive regression trees, kernel machines, locally weighted learning), and our bounds scale well with the effective horizon of the MDP. The bounds show a dependence on the stochastic stability properties of the MDP: they scale with the discounted-average concentrability of the future-state distributions. They also depend on a new measure of the approximation power of the function space, the inherent Bellman residual, which reflects how well the function space is "aligned" with the dynamics and rewards of the MDP. The conditions of the main result, as well as the concepts introduced in the analysis, are extensively discussed and compared to previous theoretical results. Numerical experiments are used to substantiate the theoretical findings.
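As a concrete reference point for the setting analyzed here, the following is a minimal sketch of sampling-based fitted value iteration with a generative model; the regressor interface, sample sizes, and variable names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fitted_value_iteration(sample_next, states, actions, regressor=None,
                           gamma=0.99, n_iters=50, n_mc=10):
    """Sampling-based fitted value iteration with a generative model (sketch).

    sample_next(s, a) -> (reward, next_state) drawn from the generative model.
    states  : (N, d) array of sampled states used as regression inputs.
    actions : iterable of available actions.
    """
    reg = regressor or Ridge(alpha=1.0)
    reg.fit(states, np.zeros(len(states)))        # V_0 = 0 everywhere
    for _ in range(n_iters):
        targets = np.empty(len(states))
        for i, s in enumerate(states):
            backups = []
            for a in actions:                     # empirical Bellman optimality backup
                samples = [sample_next(s, a) for _ in range(n_mc)]
                backups.append(np.mean([r + gamma * reg.predict(ns.reshape(1, -1))[0]
                                        for r, ns in samples]))
            targets[i] = max(backups)
        reg.fit(states, targets)                  # fit V_{k+1} to the backed-up values
    return reg
```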
In recent years distributional reinforcement learning has produced many state-of-the-art results. Increasingly sample-efficient distributional algorithms for the discrete action domain have been developed over time, varying primarily in how they parameterize their approximations of value distributions and how they quantify the differences between those distributions. In this work we transfer three of the most well-known and successful of those algorithms (QR-DQN, IQN and FQF) to the continuous action domain by extending two powerful actor-critic algorithms (TD3 and SAC) with distributional critics. We investigate whether the relative performance of the methods for the discrete action space translates to the continuous case. To that end we compare them empirically on the pybullet implementations of a set of continuous control tasks. Our results indicate qualitative invariance regarding the number and placement of distributional atoms in the deterministic, continuous action setting.
Policy gradient methods apply to complex, poorly understood control problems by performing stochastic gradient descent over a parameterized class of policies. Unfortunately, even for simple control problems solvable by standard dynamic programming techniques, policy gradient algorithms face non-convex optimization problems and are widely understood to converge only to a stationary point. This work identifies structural properties, shared by several classic control problems, which ensure that the policy gradient objective function has no suboptimal stationary points despite being non-convex. When these conditions are strengthened, this objective satisfies a Polyak-Lojasiewicz (gradient dominance) condition that yields convergence rates. We also provide bounds on the optimality gap of any stationary point when some of these conditions are relaxed.
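For reference, the Polyak-Lojasiewicz (gradient dominance) condition invoked here can be written, for a maximization objective J over policy parameters theta, roughly as below; the constant mu is problem-dependent and this is only the generic form, not the paper's precise statement.

```latex
% Gradient dominance (Polyak-Lojasiewicz) condition for a maximization objective J:
J(\theta^\star) - J(\theta) \;\le\; \frac{1}{2\mu}\,\bigl\|\nabla_\theta J(\theta)\bigr\|^{2}
\qquad \text{for some } \mu > 0 \text{ and all } \theta .
```

Combined with smoothness of J, an inequality of this form is the standard route from a stationarity guarantee to a geometric convergence rate for gradient ascent, despite non-convexity.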
Continuous-time reinforcement learning offers an appealing formalism for describing control problems in which the passage of time is not naturally divided into discrete increments. Here we consider the problem of predicting the distribution of returns obtained by an agent interacting in a continuous-time stochastic environment. Accurate return predictions have proven useful for determining optimal policies for risk-sensitive control, learning state representations, multi-agent coordination, and more. We begin by establishing the distributional analogue of the Hamilton-Jacobi-Bellman (HJB) equation for diffusions and the broader class of Feller-Dynkin processes. We then specialize this equation to the setting in which the return distribution is approximated by $n$ uniformly weighted particles, a common design choice in distributional algorithms. Our derivation highlights additional terms due to statistical diffusivity, which arise from the proper handling of distributions in the continuous-time setting. Based on this, we propose a tractable algorithm for approximately solving the distributional HJB based on a JKO scheme, which can be implemented in an online control algorithm. We demonstrate the effectiveness of such an algorithm in a synthetic control problem.
We consider an agent's uncertainty about its environment and the problem of generalizing this uncertainty across states. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into exploration bonuses and obtain significantly improved exploration in a number of hard games, including the infamously difficult MONTEZUMA'S REVENGE.
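The pseudo-count derived from a density model can be computed from the model's probability of a state before and after the model is updated on that state. A hedged sketch of that calculation and the resulting exploration bonus follows; the bonus scale beta is an assumed hyperparameter.

```python
import numpy as np

def pseudo_count(prob_before, prob_after):
    """Pseudo-count N_hat for a state x, given a density model's probability of x
    before (rho) and after (rho') the model is updated on x.

    Derived from rho = N_hat / n_hat and rho' = (N_hat + 1) / (n_hat + 1),
    which gives N_hat = rho * (1 - rho') / (rho' - rho)."""
    return prob_before * (1.0 - prob_after) / max(prob_after - prob_before, 1e-12)

def exploration_bonus(n_hat, beta=0.05):
    """Count-based exploration bonus added to the reward (scale beta is assumed)."""
    return beta / np.sqrt(n_hat + 0.01)
```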
Effectively leveraging large, previously collected datasets in reinforcement learning (RL) is a key challenge for large-scale real-world applications. Offline RL algorithms promise to learn effective policies from previously-collected, static datasets without further interaction. However, in practice, offline RL presents a major challenge, and standard off-policy RL methods can fail due to overestimation of values induced by the distributional shift between the dataset and the learned policy, especially when training on complex and multi-modal data distributions. In this paper, we propose conservative Q-learning (CQL), which aims to address these limitations by learning a conservative Q-function such that the expected value of a policy under this Q-function lower-bounds its true value. We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees. In practice, CQL augments the standard Bellman error objective with a simple Q-value regularizer which is straightforward to implement on top of existing deep Q-learning and actor-critic implementations. On both discrete and continuous control domains, we show that CQL substantially outperforms existing offline RL methods, often learning policies that attain 2-5 times higher final return, especially when learning from complex and multi-modal data distributions.
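The conservative penalty described here augments the Bellman error with a term that pushes Q-values down over all actions (via a log-sum-exp) and up on actions in the dataset. A simplified NumPy sketch for discrete actions is below; the tradeoff weight alpha and all names are assumptions, not the paper's implementation.

```python
import numpy as np

def cql_loss(q_values, data_actions, bellman_targets, alpha=1.0):
    """Simplified CQL-style objective for discrete actions (sketch).

    q_values        : (B, A) current Q(s, a) for a batch of dataset states.
    data_actions    : (B,)  integer actions taken in the dataset at those states.
    bellman_targets : (B,)  standard (e.g. target-network) Bellman targets.
    """
    q_data = q_values[np.arange(len(data_actions)), data_actions]

    # Conservative term: log-sum-exp over all actions minus Q on dataset actions.
    q_max = q_values.max(axis=1, keepdims=True)
    logsumexp = np.log(np.sum(np.exp(q_values - q_max), axis=1)) + q_max[:, 0]
    conservative = np.mean(logsumexp - q_data)

    bellman_error = np.mean((q_data - bellman_targets) ** 2)
    return alpha * conservative + 0.5 * bellman_error
```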
This paper presents a novel approach to the fundamental problem of learning the optimal Q-function. In this approach, the optimal Q-function is formulated as the saddle point of a nonlinear Lagrangian function derived from the classic Bellman optimality equation. The paper shows that, despite its nonlinearity, the Lagrangian enjoys strong duality, which paves the way for a general approach to Q-function learning. As a demonstration, the paper develops an imitation learning algorithm based on the duality theory and applies the algorithm to a state-of-the-art machine translation benchmark. The paper then turns to demonstrate a symmetry-breaking phenomenon regarding the optimality of the Lagrangian saddle point, which justifies a largely overlooked direction in developing the Lagrangian method.
In this work, we build on recent advances in distributional reinforcement learning to provide a state-of-the-art distributional variant based on IQN. We achieve this by using the generator and discriminator functions of a GAN model together with quantile regression to approximate the full quantile values of the state-return distribution. We demonstrate improved performance on our baseline of 57 Atari 2600 games. In addition, we use our algorithm to show state-of-the-art training performance of risk-sensitive policies in Atari games, with policy optimization and evaluation.
In reinforcement learning, experience replay stores past samples for further reuse. Prioritized sampling is a promising technique for making better use of these samples. Previous criteria for prioritization include TD error, recency, and corrective feedback, which are mostly heuristically designed. In this work, we start from the regret-minimization objective and derive an optimal prioritization strategy for Bellman updates that can directly maximize the return of the policy. The theory suggests that data with higher hindsight TD error, better on-policiness, and more accurate Q values should be assigned higher weights during sampling; most previous criteria therefore consider this strategy only partially. We not only provide theoretical justification for previous criteria, but also propose two new methods to compute the prioritization weight, namely ReMERN and ReMERT. ReMERN learns an error network, while ReMERT exploits the temporal ordering of states. Both methods outperform previous prioritized sampling algorithms on challenging RL benchmarks, including MuJoCo, Atari, and Meta-World.
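As a point of reference for the criteria the abstract says are only partially aligned with the regret-minimization strategy, here is a sketch of standard TD-error-based prioritized sampling (proportional prioritization with importance-sampling correction); the exact ReMERN/ReMERT weights are not reproduced, and the hyperparameters are conventional assumptions.

```python
import numpy as np

def prioritized_sample(td_errors, batch_size, alpha=0.6, beta=0.4, eps=1e-6):
    """Proportional prioritized replay sampling from |TD error| (sketch).

    Returns sampled indices and the importance-sampling weights that correct
    for the non-uniform sampling distribution."""
    priorities = (np.abs(td_errors) + eps) ** alpha
    probs = priorities / priorities.sum()
    idx = np.random.choice(len(td_errors), size=batch_size, p=probs)
    weights = (len(td_errors) * probs[idx]) ** (-beta)
    return idx, weights / weights.max()
```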