We introduce a method for policy improvement that interpolates between the greedy approach of value-based reinforcement learning (RL) and the full planning approach typical of model-based RL. The new method builds on the concept of a geometric horizon model (GHM, also known as a gamma-model), which models the discounted state-visitation distribution of a given policy. We show that, without any additional learning, we can evaluate any non-Markov policy that switches with fixed probability between a set of base Markov policies, by a careful composition of the base-policy GHMs. We can then apply generalized policy improvement (GPI) to the collection of such non-Markov policies to obtain a new Markov policy that will in general outperform its precursors. We provide a thorough theoretical analysis of this approach, develop applications to transfer and standard RL, and empirically demonstrate its effectiveness over standard GPI on challenging deep-RL continuous-control tasks. We also provide an analysis of GHM training methods, proving a novel convergence result for previously proposed methods and showing how to train these models stably in deep-RL settings.
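A minimal statement of the generalized policy improvement (GPI) step referenced above, over a set of base policies $\Pi = \{\pi_1, \dots, \pi_n\}$ with action-value functions $Q^{\pi}$, is the generic textbook form (not the paper's GHM-based construction for non-Markov mixtures):

$$
\pi^{\text{new}}(s) \in \arg\max_{a} \, \max_{\pi \in \Pi} Q^{\pi}(s, a),
$$

which guarantees $Q^{\pi^{\text{new}}}(s,a) \ge \max_{\pi \in \Pi} Q^{\pi}(s,a)$ for all state-action pairs.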
While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it. In this paper, we view reinforcement learning as inferring policies that achieve desired outcomes, rather than as a problem of maximizing rewards. To solve this inference problem, we establish a novel variational inference formulation that allows us to derive a well-shaped reward function which can be learned directly from environment interactions. From the corresponding variational objective, we also derive a new probabilistic Bellman backup operator and use it to develop an off-policy algorithm to solve goal-directed tasks. We empirically demonstrate that this method eliminates the need to hand-craft reward functions for a suite of diverse manipulation and locomotion tasks and leads to effective goal-directed behaviors.
The mixing time of the Markov chain induced by a policy limits performance in real-world continual learning scenarios, yet the effect of mixing times on continual reinforcement learning (RL) remains underexplored. In this paper, we characterize problems of long-term interest to the development of continual RL, which we call scalable MDPs, through the lens of mixing times. In particular, we establish that scalable MDPs have mixing times that scale polynomially with the size of the problem. We go on to demonstrate that polynomial mixing times present significant difficulties for existing approaches, and propose a model-based algorithm that accelerates learning by directly optimizing the average reward through a novel bootstrapping procedure. Finally, we perform an empirical regret analysis of our proposed method, demonstrating clear improvements over baselines and showing how scalable MDPs can be used to analyze RL algorithms as mixing times scale.
The rapid changes in the finance industry due to the increasing amount of data have revolutionized techniques for data processing and data analysis and brought new theoretical and computational challenges. In contrast to classical stochastic control theory and other analytical approaches for solving financial decision-making problems, which rely heavily on model assumptions, recent developments in reinforcement learning (RL) are able to make full use of the large amount of financial data with fewer model assumptions and to improve decision making in complex financial environments. This survey paper aims to review the recent developments in, and use of, RL approaches in finance. We introduce Markov decision processes, the setting for many commonly used RL methods. Various algorithms are then introduced, with a focus on value-based and policy-based methods that do not require any model assumptions. Connections are made with neural networks to extend the framework to deep RL algorithms. Our survey concludes by discussing the applications of these RL algorithms to a variety of decision-making problems in finance, including optimal execution, portfolio optimization, option pricing and hedging, market making, smart order routing, and robo-advising.
This paper studies systematic exploration for reinforcement learning with rich observations and function approximation. We introduce a new model called contextual decision processes, which unifies and generalizes most prior settings. Our first contribution is a complexity measure, the Bellman rank, which we show enables tractable learning of near-optimal behavior in these processes and is naturally small for many well-studied reinforcement learning settings. Our second contribution is a new reinforcement learning algorithm that engages in systematic exploration to learn contextual decision processes with low Bellman rank. Our algorithm provably learns near-optimal behavior with a number of samples that is polynomial in all relevant parameters but independent of the number of unique observations. The approach uses Bellman error minimization with optimistic exploration and provides new insights into efficient exploration for reinforcement learning with function approximation.
Non-cooperative and cooperative games with a very large number of players have many applications, but they generally remain intractable as the number of players grows. Introduced by Lasry and Lions, and by Huang, Caines, and Malhamé, mean field games (MFGs) rely on a mean-field approximation to allow the number of players to grow to infinity. Traditional methods for solving these games generally rely on solving partial or stochastic differential equations with full knowledge of the model. Recently, reinforcement learning (RL) has appeared promising for solving complex problems. By combining MFGs and RL, we hope to solve games at a very large scale, both in terms of population size and environment complexity. In this survey, we review the recent literature on learning Nash equilibria in MFGs. We first identify the most common settings (static, stationary, and evolutive). We then present a general framework for classical iterative methods (based on best-response computation or policy evaluation) that solve MFGs exactly. Building on these algorithms and on the connection with Markov decision processes, we explain how RL can be used to learn MFG solutions in a model-free way. Finally, we present numerical illustrations on benchmark problems and conclude with some perspectives.
Policy gradient methods apply to complex, poorly understood control problems by performing stochastic gradient descent over a parameterized class of policies. Unfortunately, even for simple control problems solvable by standard dynamic programming techniques, policy gradient algorithms face non-convex optimization problems and are widely understood to converge only to a stationary point. This work identifies structural properties, shared by several classic control problems, which ensure that the policy gradient objective function has no suboptimal stationary points despite being non-convex. When these conditions are strengthened, the objective satisfies a Polyak-Łojasiewicz (gradient dominance) condition that yields convergence rates. We also provide bounds on the optimality gap of any stationary point when some of these conditions are relaxed.
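For context, a gradient-dominance (Polyak-Łojasiewicz) condition of the kind mentioned above is usually stated, for a maximization objective $J$ with optimal value $J^*$ and some constant $\mu > 0$ (both problem-dependent and not taken from this abstract), as

$$
J^* - J(\theta) \le \frac{1}{2\mu} \left\lVert \nabla_\theta J(\theta) \right\rVert^2 \quad \text{for all } \theta,
$$

under which gradient ascent on a smooth objective converges to the optimum at a linear (geometric) rate.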
A simple and natural reinforcement learning (RL) algorithm is Monte Carlo Exploring Starts (MCES), which estimates the Q-function by averaging Monte Carlo returns and improves the policy by choosing actions that maximize the current Q-function estimate. Exploration is performed by "exploring starts": each episode begins with a randomly chosen state and action and then follows the current policy to the terminal state. In the classic RL book by Sutton & Barto (2018), it is stated that establishing convergence of the MCES algorithm is one of the most important remaining open theoretical questions in RL. The convergence question for MCES, however, turns out to be quite nuanced. Bertsekas & Tsitsiklis (1996) provide a counterexample showing that the MCES algorithm does not necessarily converge. Tsitsiklis (2002) further shows that if the original MCES algorithm is modified so that the Q-function estimates are updated at the same rate for all state-action pairs, and the discount factor is strictly less than one, then the MCES algorithm converges. In this paper, we make headway with the original, more efficient MCES algorithm given in Sutton & Barto (1998), establishing convergence for a class of MDPs in which states are not revisited within an episode under an optimal policy. Such MDPs include a large class of environments, for example all deterministic environments and all episodic environments where the time step, or any monotonically changing value, is part of the state. Unlike previous proofs that use stochastic approximation, we introduce a novel inductive approach that is very simple and relies only on the strong law of large numbers.
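To make the algorithm description concrete, the following is a minimal tabular sketch of Monte Carlo Exploring Starts; the `env.step_from(state, action)` simulator interface is hypothetical, the update is the every-visit variant for brevity, and none of this code is taken from the paper:

```python
import random
from collections import defaultdict

def mces(env, states, actions, num_episodes, gamma=1.0):
    """Monte Carlo Exploring Starts (tabular sketch): estimate Q by averaging
    Monte Carlo returns and improve the policy greedily after each episode."""
    returns_sum = defaultdict(float)
    returns_cnt = defaultdict(int)
    Q = defaultdict(float)
    policy = {s: random.choice(actions) for s in states}

    for _ in range(num_episodes):
        # Exploring start: the episode begins at a randomly chosen state-action pair.
        s, a = random.choice(states), random.choice(actions)
        episode, done = [], False
        while not done:
            s_next, r, done = env.step_from(s, a)   # hypothetical simulator call
            episode.append((s, a, r))
            s, a = s_next, policy.get(s_next)       # thereafter follow the current policy

        # Backward pass: accumulate returns and update Q (every-visit variant).
        G = 0.0
        for s, a, r in reversed(episode):
            G = gamma * G + r
            returns_sum[(s, a)] += G
            returns_cnt[(s, a)] += 1
            Q[(s, a)] = returns_sum[(s, a)] / returns_cnt[(s, a)]
            # Greedy policy improvement at the visited state.
            policy[s] = max(actions, key=lambda b: Q[(s, b)])
    return Q, policy
```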
In this paper we develop a theoretical analysis of the performance of sampling-based fitted value iteration (FVI) to solve infinite state-space, discounted-reward Markovian decision processes (MDPs) under the assumption that a generative model of the environment is available. Our main results come in the form of finite-time bounds on the performance of two versions of sampling-based FVI. The convergence rate results obtained allow us to show that both versions of FVI are well behaving in the sense that by using a sufficiently large number of samples for a large class of MDPs, arbitrarily good performance can be achieved with high probability. An important feature of our proof technique is that it permits the study of weighted $L^p$-norm performance bounds. As a result, our technique applies to a large class of function-approximation methods (e.g., neural networks, adaptive regression trees, kernel machines, locally weighted learning), and our bounds scale well with the effective horizon of the MDP. The bounds show a dependence on the stochastic stability properties of the MDP: they scale with the discounted-average concentrability of the future-state distributions. They also depend on a new measure of the approximation power of the function space, the inherent Bellman residual, which reflects how well the function space is "aligned" with the dynamics and rewards of the MDP. The conditions of the main result, as well as the concepts introduced in the analysis, are extensively discussed and compared to previous theoretical results. Numerical experiments are used to substantiate the theoretical findings.
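As an illustration of the general scheme analyzed (the paper contributes the finite-time analysis, not this pseudocode), a sampling-based fitted value iteration loop with a generative model might be sketched as follows; the `sample_next` generative-model interface and the sklearn-style `regressor` are assumptions made for the example:

```python
import numpy as np

def fitted_value_iteration(sample_states, actions, sample_next, regressor,
                           n_iters=50, n_next=20, gamma=0.95):
    """Sampling-based fitted value iteration with a generative model (a sketch).

    sample_states : (N, d) array of states drawn from some sampling distribution
    actions       : iterable of actions
    sample_next   : hypothetical generative model, (s, a, n) -> list of (reward, next_state)
    regressor     : any function approximator exposing .fit(X, y) and .predict(X)
    """
    V = lambda X: np.zeros(len(X))                 # V_0 = 0 everywhere
    for _ in range(n_iters):
        targets = []
        for s in sample_states:
            backups = []
            for a in actions:
                draws = sample_next(s, a, n_next)  # Monte Carlo draws from the model
                rewards = np.array([r for r, _ in draws])
                next_states = np.array([s2 for _, s2 in draws])
                backups.append(np.mean(rewards + gamma * V(next_states)))
            targets.append(max(backups))           # empirical Bellman optimality backup
        regressor.fit(np.asarray(sample_states), np.asarray(targets))
        V = regressor.predict                      # V_{k+1} is the fitted regressor
    return V
```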
One of the challenges in online reinforcement learning (RL) is that the agent needs to trade off exploration of the environment and exploitation of the samples to optimize its behavior. Whether we optimize for regret, sample complexity, state-space coverage, or model estimation, we need to strike a different exploration-exploitation trade-off. In this paper, we propose to tackle the exploration-exploitation problem with a decoupled approach composed of: 1) an "objective-specific" algorithm that (adaptively) prescribes which samples to collect at which states, as if it had access to a generative model (i.e., a simulator of the environment); 2) an "objective-agnostic" sample-collection exploration strategy responsible for generating the prescribed samples as quickly as possible. Building on recent methods for exploration in the stochastic shortest path problem, we first provide an algorithm that, given the number of samples $b(s,a)$ required in each state-action pair, needs $\tilde{O}(BD + D^{3/2} S^2 A)$ time steps to collect the $B = \sum_{s,a} b(s,a)$ desired samples in an MDP with $S$ states, $A$ actions, and diameter $D$. We then show how this general-purpose exploration algorithm can be paired with "objective-specific" strategies that prescribe the sample requirements for a variety of settings, e.g., model estimation, sparse reward discovery, and goal-free cost-free exploration in communicating MDPs, for which we obtain improved or novel sample complexity guarantees.
This paper discusses a novel approach to the fundamental problem of learning optimal Q-functions. In this approach, the optimal Q-function is formulated as the saddle point of a nonlinear Lagrangian function derived from the classic Bellman optimality equation. The paper shows that, despite its nonlinearity, the Lagrangian enjoys strong duality, which paves the way for a general approach to Q-function learning. As a demonstration, the paper develops an imitation learning algorithm based on the duality theory and applies the algorithm to a state-of-the-art machine translation benchmark. The paper then turns to demonstrate a symmetry-breaking phenomenon regarding the optimality of the Lagrangian saddle points, which justifies a largely overlooked direction for developing Lagrangian methods.
Partially observable Markov decision processes (POMDPs) provide a flexible representation for real-world decision and control problems. However, POMDPs are notoriously difficult to solve, especially when the state and observation spaces are continuous or hybrid, which is often the case for physical systems. While recent online sampling-based POMDP algorithms that plan with observation likelihood weighting have shown practical effectiveness, a general theory characterizing the approximation error of the particle filtering techniques that these algorithms use has not previously been proposed. Our main contribution is bounding the error between any POMDP and its corresponding finite sample particle belief MDP (PB-MDP) approximation. This fundamental bridge between PB-MDPs and POMDPs allows us to adapt any sampling-based MDP algorithm to a POMDP by solving the corresponding particle belief MDP, thereby extending the convergence guarantees of the MDP algorithm to the POMDP. Practically, this is implemented by using the particle filter belief transition model as the generative model for the MDP solver. While this requires access to the observation density model from the POMDP, it only increases the transition sampling complexity of the MDP solver by a factor of $\mathcal{O}(C)$, where $C$ is the number of particles. Thus, when combined with sparse sampling MDP algorithms, this approach can yield algorithms for POMDPs that have no direct theoretical dependence on the size of the state and observation spaces. In addition to our theoretical contribution, we perform five numerical experiments on benchmark POMDPs to demonstrate that a simple MDP algorithm adapted using PB-MDP approximation, Sparse-PFT, achieves performance competitive with other leading continuous observation POMDP solvers.
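To illustrate the practical recipe described above (using the particle-filter belief transition as the generative model handed to an MDP solver), here is a minimal sketch; the `trans_sample`, `obs_density`, and `reward` callables are hypothetical stand-ins for the POMDP's models rather than an API from the paper:

```python
import numpy as np

def pb_mdp_step(belief, action, trans_sample, obs_density, reward, rng):
    """One generative-model step of a particle belief MDP (a sketch).

    belief       : (particles, weights), a weighted set of C state particles
    trans_sample : (state, action) -> (next_state, observation)
    obs_density  : (observation, next_state, action) -> likelihood
    reward       : (state, action) -> scalar
    """
    particles, weights = belief
    # Sample an observation by propagating one particle chosen by weight.
    idx = rng.choice(len(particles), p=weights)
    _, obs = trans_sample(particles[idx], action)

    # Propagate every particle and reweight by the observation likelihood
    # (observation likelihood weighting); the cost per step is O(C).
    new_particles, new_weights, exp_reward = [], [], 0.0
    for s, w in zip(particles, weights):
        s_next, _ = trans_sample(s, action)
        new_particles.append(s_next)
        new_weights.append(w * obs_density(obs, s_next, action))
        exp_reward += w * reward(s, action)
    new_weights = np.asarray(new_weights)
    new_weights /= max(new_weights.sum(), 1e-12)
    return (new_particles, new_weights), exp_reward
```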
This paper investigates a new approach to model-based reinforcement learning using background planning: mixing (approximate) dynamic programming updates with model-free updates, similar to the Dyna architecture. Background planning with learned models is often worse than model-free alternatives such as Double DQN, even though the former uses significantly more memory and computation. The fundamental problem is that learned models can be inaccurate and often generate invalid states, especially when iterated for many steps. In this paper, we avoid this limitation by constraining background planning to a set of (abstract) subgoals and learning only local, subgoal-conditioned models. This goal-space planning (GSP) approach is more computationally efficient, naturally incorporates temporal abstraction for faster long-horizon planning, and avoids learning the transition dynamics entirely. We show that our GSP algorithm learns significantly faster than a Double DQN baseline in a variety of settings.
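For orientation, the Dyna-style mixing of a model-free update with background planning updates from a learned model, which the abstract takes as its starting point (this generic Dyna-Q step is not the paper's subgoal-conditioned GSP algorithm), can be sketched as:

```python
import random

def dyna_q_step(Q, model, s, a, r, s_next, actions,
                alpha=0.1, gamma=0.99, n_planning=10):
    """One Dyna-style step (sketch): Q is a collections.defaultdict(float),
    model is a dict mapping (state, action) -> (reward, next_state)."""
    # Model-free Q-learning update from the real transition.
    target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    # Remember the transition in a simple tabular (deterministic) model.
    model[(s, a)] = (r, s_next)

    # Background planning: replay n_planning model-simulated transitions.
    for _ in range(n_planning):
        (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
        p_target = pr + gamma * max(Q[(ps_next, b)] for b in actions)
        Q[(ps, pa)] += alpha * (p_target - Q[(ps, pa)])
    return Q, model
```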
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
We present a new behavioral distance over the state space of a Markov decision process and demonstrate its use as an effective means of shaping the learned representations of deep reinforcement learning agents. While existing notions of state similarity are typically difficult to learn at scale due to high computational cost and a lack of sample-based algorithms, our new distance addresses both of these issues. In addition to providing a detailed theoretical analysis, we provide empirical evidence that learning this distance alongside the value function yields structured and informative representations, including strong results on the Arcade Learning Environment benchmark.
The policy gradient theorem (Sutton et al., 2000) prescribes the use of the cumulative discounted state distribution under the target policy to approximate the gradient. In practice, most algorithms based on this theorem break this assumption, introducing a distribution shift that can cause convergence to poor solutions. In this paper, we propose a new approach that reconstructs the policy gradient from the start state without requiring a particular sampling strategy. The policy gradient calculation in this form can be simplified in terms of a gradient critic, which can be estimated recursively thanks to a new Bellman equation for gradients. By using temporal-difference updates of the gradient critic from an off-policy data stream, we develop the first estimator that side-steps the distribution-shift problem in a model-free way. We prove that, under certain realizability conditions, our estimator is unbiased regardless of the sampling strategy. We empirically show that our technique achieves a superior bias-variance trade-off and performance in the presence of off-policy samples.
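For reference, the form of the policy gradient theorem (Sutton et al., 2000) the abstract refers to, with $d^{\pi_\theta}$ the cumulative discounted state distribution under the target policy $\pi_\theta$, is

$$
\nabla_\theta J(\theta) = \sum_{s} d^{\pi_\theta}(s) \sum_{a} \nabla_\theta \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s, a),
\qquad
d^{\pi_\theta}(s) = \sum_{t=0}^{\infty} \gamma^{t} \Pr(s_t = s \mid s_0, \pi_\theta);
$$

the distribution shift discussed above arises when the outer sum is instead weighted by the state distribution of a different behavior policy.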
Generative Flow Networks (GFlowNets) have been introduced as a method for sampling a diverse set of candidates in an active learning context, with a training objective that makes them approximately sample in proportion to a given reward function. In this paper, we show a number of additional theoretical properties of GFlowNets. They can be used to estimate joint probability distributions, and the corresponding marginal distributions where some variables are unspecified, and, of particular interest, can represent distributions over composite objects such as sets and graphs. GFlowNets amortize the work typically done by computationally expensive MCMC methods in a single, but trained, generative pass. They can also be used to estimate partition functions and free energies, conditional probabilities of supersets (supergraphs) given a subset (subgraph), as well as marginal distributions over all supersets (supergraphs) of a given set (graph). We introduce variations enabling the estimation of entropy and mutual information, sampling from the Pareto frontier, connections to reward-maximizing policies, and extensions to stochastic environments, continuous actions, and modular energy functions.
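The training target referenced above (the generic proportionality condition, not any particular GFlowNet loss) asks the learned sampler to generate a terminal object $x$ with probability

$$
P(x) = \frac{R(x)}{Z}, \qquad Z = \sum_{x'} R(x'),
$$

which is also why a trained GFlowNet doubles as an estimator of the partition function $Z$ (and hence of free energies), as claimed above.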
A variety of theoretically sound policy gradient algorithms exist for the on-policy setting, thanks to the policy gradient theorem, which provides a simplified form of the gradient. The off-policy setting, however, is less clear, due to the existence of multiple objectives and the lack of an explicit off-policy policy gradient theorem. In this work, we unify these objectives into one off-policy objective and provide a policy gradient theorem for this unified objective. The derivation involves emphatic weightings and interest functions. We present multiple strategies for approximating the gradients in an algorithm called Actor-Critic with Emphatic weightings (ACE). We prove that previous (semi-gradient) off-policy actor-critic methods, in particular OffPAC and DPG, converge to the wrong solution, whereas ACE finds the optimal solution. We also highlight why these semi-gradient approaches can still perform well in practice, suggesting variance-reduction strategies within ACE. We empirically study several variants of ACE on two classic control environments and an image-based environment designed to illustrate the trade-offs made by each gradient approximation. We find that, by directly approximating the emphatic weightings, ACE performs as well as or better than OffPAC in all settings tested.
The practicality of reinforcement learning algorithms has been limited by poor scaling with respect to the problem size, since the sample complexity of learning an $\epsilon$-optimal policy is $\tilde{\Omega}\left(|S||A|H^3/\epsilon^2\right)$ over worst-case instances of an MDP with state space $S$, action space $A$, and horizon $H$. We consider a class of MDPs that exhibit low-rank structure, where the latent features are unknown. We argue that a natural combination of value iteration and low-rank matrix estimation results in an estimation error that grows exponentially with the horizon. We then provide a new algorithm, together with statistical guarantees, that efficiently exploits the low-rank structure given access to a generative model, achieving a sample complexity of $\tilde{O}\left(d^5(|S|+|A|)\,\mathrm{poly}(H)/\epsilon^2\right)$ for a rank-$d$ setting, which is minimax optimal with respect to the scaling of $|S|$, $|A|$, and $\epsilon$. In contrast to the literature on linear and low-rank MDPs, we do not require a known feature mapping, our algorithm is computationally simple, and our results hold for long time horizons. Our results provide insight into the minimal low-rank structural assumptions required of an MDP with respect to the transition kernel versus the optimal action-value function.
In this paper, we consider the problem of adjusting the exploration rate when using value-of-information-based exploration. We do this by converting the value-of-information optimization into a problem of finding equilibria of a flow for a changing exploration rate. We then develop an efficient path-following scheme for converging to these equilibria and hence uncovering optimal action-selection policies. Under this scheme, the exploration rate is automatically adapted according to the agent's experiences. Global convergence is theoretically assured. We first evaluate our exploration-rate adaptation on the Nintendo GameBoy games Centipede and Millipede. We demonstrate aspects of the search process. We show that our approach yields better policies in fewer episodes than conventional search strategies relying on heuristic, annealing-based exploration-rate adjustments. We then illustrate that these trends hold for deep, value-of-information-based agents that learn to play ten simple games and over forty more complicated games for the Nintendo GameBoy system. Performance either near or well above the level of human play is observed.