Reinforcement learning is a framework for interactive decision-making in which incentives are sequentially revealed across time without a model of the system dynamics. Because it scales to continuous spaces, we focus on policy search, where one iteratively improves a parameterized policy with stochastic policy gradient (PG) updates. In tabular Markov Decision Problems (MDPs), under persistent exploration and suitable parameterization, global optimality may be obtained. By contrast, in continuous space, the non-convexity poses a pathological challenge, as evidenced by existing convergence results being mostly limited to stationarity or arbitrary local extrema. To close this gap, we step towards persistent exploration in continuous space through policy parameterizations defined by heavier-tailed distributions with tail-index parameter alpha, which increases the likelihood of jumping in state space. Doing so invalidates smoothness conditions on the score function that are common in PG analyses. Thus, we establish how the convergence rate to stationarity depends on the policy's tail index alpha, a Hölder continuity parameter, integrability conditions, and an exploration tolerance parameter introduced here for the first time. Further, we characterize the dependence of the set of local maxima on the tail index through an exit- and transition-time analysis of a suitably defined Markov chain, identifying that policies associated with Lévy processes of heavier tail converge to wider peaks. This phenomenon yields improved stability to perturbations in supervised learning, which we corroborate also manifests in improved performance of policy search, especially when myopic and farsighted incentives are misaligned.
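As a toy illustration of what a heavier-tailed parameterization looks like in code, the sketch below runs a score-function (REINFORCE-style) update with a location-scale Student-t policy on a one-step continuous bandit; the Student-t family is used only as a convenient heavy-tailed stand-in for the Lévy/alpha-stable constructions discussed above, and the environment, degrees of freedom `nu`, and step size are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(action):
    # One-step continuous-action bandit: the reward peaks at action = 2.0.
    return -(action - 2.0) ** 2

# Heavy-tailed policy: action = mu + sigma * T, with T Student-t distributed.
# Smaller degrees of freedom `nu` mean heavier tails, i.e. more frequent large
# exploratory jumps in action space than under a Gaussian policy.
nu, sigma, mu, step = 3.0, 1.0, 0.0, 0.02

def score_wrt_mu(action, mu):
    """d/d(mu) of log pi(action) for the location-scale Student-t density."""
    z = (action - mu) / sigma
    return (nu + 1.0) * z / ((nu + z ** 2) * sigma)

for it in range(3000):
    a = mu + sigma * rng.standard_t(nu)     # sample a heavy-tailed action
    g = reward(a) * score_wrt_mu(a, mu)     # score-function (REINFORCE) gradient estimate
    mu += step * g                          # stochastic policy-gradient ascent on mu

print(f"learned location parameter mu = {mu:.2f} (optimum is 2.0)")
```

Lowering `nu` thickens the tails and makes large exploratory jumps in action space correspondingly more likely, which is the exploration effect that the tail index controls.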
Policy gradient methods apply to complex, poorly-understood control problems by performing stochastic gradient descent over a parameterized class of policies. Unfortunately, even for simple control problems solvable by standard dynamic programming techniques, policy gradient algorithms face non-convex optimization problems and are widely understood to converge only to a stationary point. This work identifies structural properties -- shared by several classic control problems -- which ensure that the policy gradient objective function has no suboptimal stationary points despite being non-convex. When these conditions are strengthened, this objective satisfies a Polyak-Łojasiewicz (gradient dominance) condition that yields convergence rates. We also provide bounds on the optimality gap of any stationary point when some of these conditions are relaxed.
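For reference, the condition invoked above can be stated (in its generic unconstrained form for maximizing a smooth objective $J$; the paper's exact constants and domain may differ) as the Polyak-Łojasiewicz, or gradient dominance, inequality:

```latex
\[
  \tfrac{1}{2}\,\bigl\|\nabla J(\theta)\bigr\|^{2} \;\ge\; \mu\,\bigl(J^{\star} - J(\theta)\bigr)
  \quad \text{for all } \theta, \qquad J^{\star} := \sup_{\theta'} J(\theta'), \quad \mu > 0 .
\]
```

Under $L$-smoothness of $J$, gradient ascent with step size $1/L$ then closes the optimality gap at the linear rate $(1 - \mu/L)^{k}$ even though $J$ is non-convex.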
Policy gradient (PG) algorithms are among the best candidates for the much-anticipated applications of reinforcement learning to real-world control tasks, such as robotics. However, the trial-and-error nature of these methods raises safety issues whenever the learning process itself must be performed on a physical system or involves any form of human-computer interaction. In this paper, we address a specific safety formulation in which both goals and dangers are encoded in a scalar reward signal, and the learning agent is constrained to never worsen its performance, measured as the expected sum of rewards. By studying actor-only policy gradients from a stochastic optimization perspective, we establish improvement guarantees for a wide class of parametric policies, generalizing existing results on Gaussian policies. Together with novel upper bounds on the variance of policy gradient estimators, this allows us to identify meta-parameter schedules that guarantee monotonic improvement with high probability. The two key meta-parameters are the step size of the parameter updates and the batch size of the gradient estimates. Through a joint adaptive selection of these meta-parameters, we obtain a policy gradient algorithm with monotonic improvement guarantees.
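The mechanism behind such monotonic-improvement guarantees can be sketched with the standard smoothness bound (stated here with a generic smoothness constant $L$ and gradient estimate $\hat g$, in place of the paper's policy-dependent quantities):

```latex
\[
  J(\theta + \alpha \hat g) \;\ge\; J(\theta) \;+\; \alpha\,\langle \nabla J(\theta), \hat g \rangle \;-\; \frac{L\,\alpha^{2}}{2}\,\|\hat g\|^{2} .
\]
```

A batch size large enough that $\hat g$ is well aligned with $\nabla J(\theta)$ with high probability, combined with a step size $\alpha$ small enough that the quadratic penalty cannot cancel the first-order gain, makes the right-hand side nonnegative; the adaptive schedules described above implement exactly this kind of joint condition with explicit constants.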
We adapt recent tools developed for analyzing stochastic gradient descent (SGD) in non-convex optimization to obtain convergence guarantees and sample complexities for the vanilla policy gradient (PG) methods REINFORCE and GPOMDP. Our only assumptions are that the expected return is smooth w.r.t. the policy parameters and that the second moment of its gradient satisfies a certain \emph{ABC assumption}. The ABC assumption allows the second moment of the gradient to be bounded by $A \geq 0$ times the suboptimality gap, $B \geq 0$ times the norm of the full-batch gradient, and an additive constant $C \geq 0$, or any combination of the above. We show that the ABC assumption is more general than the assumptions on the policy space commonly used to prove convergence to a stationary point. We provide a single convergence theorem under the ABC assumption and show that, despite its generality, we recover the $\widetilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity of PG. Our convergence theorem also affords greater flexibility in the choice of hyperparameters, such as the step size and the batch size $m$. Even the single-trajectory case (i.e., $m = 1$) fits our analysis. We believe that the generality of the ABC assumption may provide theoretical guarantees for PG on a much broader range of problems that have not previously been considered.
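Spelled out (following the abstract's description; the paper's exact constant conventions may differ slightly), the ABC assumption bounds the second moment of the stochastic gradient estimator $\widehat{\nabla} J(\theta)$ as

```latex
\[
  \mathbb{E}\bigl[\|\widehat{\nabla} J(\theta)\|^{2}\bigr]
  \;\le\; A\,\bigl(J^{\star} - J(\theta)\bigr) \;+\; B\,\|\nabla J(\theta)\|^{2} \;+\; C,
  \qquad A,\,B,\,C \ge 0,
\]
```

with the bounded-variance and growth conditions commonly assumed in the PG literature recovered as special choices of $(A, B, C)$.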
Policy gradient (PG) estimation becomes a challenge when we are not allowed to sample with the target policy but only have access to a dataset generated by some unknown behavior policy. Conventional methods for off-policy PG estimation often suffer from either significant bias or exponentially large variance. In this paper, we propose the double Fitted PG estimation (FPG) algorithm. Assuming access to a Bellman-complete value function class, FPG can work with an arbitrary policy parameterization. In the case of linear value function approximation, we provide a tight finite-sample upper bound on the policy gradient estimation error that is governed by the amount of distribution mismatch measured in feature space. We also establish the asymptotic normality of the FPG estimation error with a precise covariance characterization, which is further shown to be statistically optimal with a matching Cramér-Rao lower bound. Empirically, we evaluate the performance of FPG on both policy gradient estimation and policy optimization, using either softmax tabular or ReLU policy networks. Under various metrics, our results show that FPG significantly outperforms existing off-policy PG estimation methods based on importance sampling and variance reduction techniques.
In many sequential decision-making problems one is interested in minimizing an expected cumulative cost while taking into account risk, i.e., increased awareness of events of small probability and high consequences. Accordingly, the objective of this paper is to present efficient reinforcement learning algorithms for risk-constrained Markov decision processes (MDPs), where risk is represented via a chance constraint or a constraint on the conditional value-at-risk (CVaR) of the cumulative cost. We collectively refer to such problems as percentile risk-constrained MDPs. Specifically, we first derive a formula for computing the gradient of the Lagrangian function for percentile risk-constrained MDPs. Then, we devise policy gradient and actor-critic algorithms that (1) estimate such gradient, (2) update the policy in the descent direction, and (3) update the Lagrange multiplier in the ascent direction. For these algorithms we prove convergence to locally optimal policies. Finally, we demonstrate the effectiveness of our algorithms in an optimal stopping problem and an online marketing application.
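In schematic form (generic notation rather than the paper's estimators), writing $C(\theta)$ for the expected cumulative cost, $\rho(\theta)$ for the risk functional (a chance constraint or the CVaR of the cumulative cost), and $\beta$ for its allowed level, the Lagrangian and the alternating updates described above read:

```latex
\[
  L(\theta, \lambda) \;=\; C(\theta) \;+\; \lambda\,\bigl(\rho(\theta) - \beta\bigr), \qquad \lambda \ge 0,
\]
\[
  \theta_{k+1} \;=\; \theta_k \;-\; \eta_{\theta}\,\widehat{\nabla}_{\theta} L(\theta_k, \lambda_k),
  \qquad
  \lambda_{k+1} \;=\; \Bigl[\lambda_k \;+\; \eta_{\lambda}\,\bigl(\widehat{\rho}(\theta_k) - \beta\bigr)\Bigr]_{+} .
\]
```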
We study the global convergence of policy gradient for infinite-horizon, continuous state and action space, entropy-regularized Markov decision processes (MDPs). We consider a policy represented by a (one-hidden-layer) neural network approximation in the mean-field regime. Additional entropy regularization in the associated mean-field probability measure is added, and the corresponding gradient flow is studied in the 2-Wasserstein metric. We show that the objective function increases along the gradient flow. Further, we prove that if the regularization in terms of the mean-field measure is sufficient, the gradient flow converges exponentially fast to the unique stationary solution, which is the unique maximizer of the regularized MDP objective. Finally, we study the sensitivity of the value function along the gradient flow with respect to the regularization parameters and the initial condition. Our results rely on a careful analysis of the non-linear Fokker-Planck-Kolmogorov equation and extend the pioneering works of Mei et al. 2020 and Agarwal et al. 2020, which quantify the global convergence rate of policy gradient for entropy-regularized MDPs in the tabular setting.
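For orientation, the 2-Wasserstein gradient flow of an entropy-regularized functional $F(m) = V(m) + \tau\,\mathrm{Ent}(m)$ over probability measures takes the generic Fokker-Planck-Kolmogorov form below; this is a textbook sketch of the type of equation involved, not necessarily the exact dynamics analyzed in the paper:

```latex
\[
  \partial_t m_t \;=\; -\,\nabla\!\cdot\!\Bigl(m_t\,\nabla \tfrac{\delta V}{\delta m}(m_t)\Bigr) \;+\; \tau\,\Delta m_t ,
\]
```

where $\mathrm{Ent}(m) = -\int m \log m$ and the flow is written as an ascent flow, so that $F(m_t)$ is non-decreasing in $t$.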
For learning in complex stochastic systems whose output trajectories are dynamically affected by many factors, it is desirable to effectively leverage the information in historical samples collected from previous iterations to accelerate policy optimization. Classical experience replay enables the agent to remember by reusing historical observations. However, the uniform reuse strategy that treats all observations equally ignores the relative importance of different samples. To overcome this limitation, we propose a general variance-reduction-based experience replay (VRER) framework that can selectively reuse the most relevant samples to improve policy gradient estimation. This selective mechanism adaptively puts more weight on past samples that are more likely to be generated by the current target distribution. Our theoretical and empirical studies demonstrate that the proposed VRER can accelerate learning of the optimal policy and enhance the performance of state-of-the-art policy optimization approaches.
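A minimal sketch of the selective-reuse idea (illustrative only; the one-step environment, the Gaussian policy, and the weighting rule are assumptions, not the VRER criterion itself): historical actions are re-weighted by the likelihood ratio between the current policy and the behavior policy that generated them, so samples that remain probable under the current policy contribute more to the gradient estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

def logpdf_gauss(a, mu, sigma=1.0):
    return -0.5 * ((a - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def reward(a):                      # one-step bandit, optimum at a = 1.5
    return -(a - 1.5) ** 2

mu = 0.0                            # current policy: a ~ N(mu, 1)
replay = []                         # stored tuples: (action, reward, mu at collection time)

for it in range(300):
    a = rng.normal(mu, 1.0)
    replay.append((a, reward(a), mu))

    # Reuse all past samples, weighted by pi_current / pi_behavior.
    acts = np.array([t[0] for t in replay])
    rews = np.array([t[1] for t in replay])
    mus_b = np.array([t[2] for t in replay])
    w = np.exp(logpdf_gauss(acts, mu) - logpdf_gauss(acts, mus_b))
    w = np.clip(w, 0.0, 10.0)       # truncate extreme ratios for numerical stability

    score = acts - mu               # d/d(mu) log N(a; mu, 1)
    grad = np.sum(w * rews * score) / np.sum(w)   # weighted policy-gradient estimate
    mu += 0.05 * grad

print(f"final policy mean mu = {mu:.2f} (optimum is 1.5)")
```

The clipping of the ratios is a common stabilization trick and is an assumption of this sketch, not part of the VRER mechanism described above.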
We study offline data-driven sequential decision-making problems in the framework of Markov decision processes (MDPs). In order to improve the generalizability and adaptability of the learned policy, we propose to evaluate each policy by a set of average rewards with respect to distributions centered at the stationary distribution induced by the policy. Given a pre-collected dataset of multiple trajectories generated by some behavior policy, our goal is to learn a robust policy, within a pre-specified policy class, that maximizes the smallest value in this set. Leveraging the theory of semiparametric statistics, we develop a statistically efficient policy learning method for estimating the defined robust optimal policy. A rate-optimal regret bound, up to a logarithmic factor, is established in terms of the total number of decision points in the dataset.
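In symbols (generic notation; the precise construction of the distribution set and of the reward functional is given in the paper), the learning target described above is a distributionally robust policy

```latex
\[
  \widehat{\pi} \;\in\; \arg\max_{\pi \in \Pi}\; \min_{Q \in \mathcal{U}(d^{\pi})}\; \mathbb{E}_{S \sim Q}\bigl[r^{\pi}(S)\bigr],
\]
```

where $d^{\pi}$ is the stationary distribution induced by $\pi$, $\mathcal{U}(d^{\pi})$ is a set of distributions centered at $d^{\pi}$, and $r^{\pi}(s)$ denotes the expected reward of $\pi$ at state $s$.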
In this paper we develop a theoretical analysis of the performance of sampling-based fitted value iteration (FVI) to solve infinite state-space, discounted-reward Markovian decision processes (MDPs) under the assumption that a generative model of the environment is available. Our main results come in the form of finite-time bounds on the performance of two versions of sampling-based FVI. The convergence rate results obtained allow us to show that both versions of FVI are well behaving in the sense that by using a sufficiently large number of samples for a large class of MDPs, arbitrarily good performance can be achieved with high probability. An important feature of our proof technique is that it permits the study of weighted $L^p$-norm performance bounds. As a result, our technique applies to a large class of function-approximation methods (e.g., neural networks, adaptive regression trees, kernel machines, locally weighted learning), and our bounds scale well with the effective horizon of the MDP. The bounds show a dependence on the stochastic stability properties of the MDP: they scale with the discounted-average concentrability of the future-state distributions. They also depend on a new measure of the approximation power of the function space, the inherent Bellman residual, which reflects how well the function space is "aligned" with the dynamics and rewards of the MDP. The conditions of the main result, as well as the concepts introduced in the analysis, are extensively discussed and compared to previous theoretical results. Numerical experiments are used to substantiate the theoretical findings.
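As a toy illustration of sampling-based FVI (a minimal sketch: a small random MDP, a generative model, and linear least-squares fitting stand in for the general function-approximation setting analyzed in the paper), each iteration regresses a new value estimate onto sampled Bellman backups of the previous one.

```python
import numpy as np

rng = np.random.default_rng(2)

# A small random MDP, accessed only through sampled transitions (a generative model).
nS, nA, gamma = 20, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # transition kernel P[s, a, s']
R = rng.uniform(0, 1, size=(nS, nA))            # deterministic rewards R[s, a]

# Linear function approximation with random state features.
d = 8
Phi = rng.normal(size=(nS, d))                  # one feature row per state
theta = np.zeros(d)

n_samples = 30                                  # samples drawn per (s, a) pair
for it in range(50):
    V = Phi @ theta
    targets = np.empty(nS)
    for s in range(nS):
        q = np.empty(nA)
        for a in range(nA):
            # Monte-Carlo Bellman backup from sampled next states.
            s_next = rng.choice(nS, size=n_samples, p=P[s, a])
            q[a] = R[s, a] + gamma * V[s_next].mean()
        targets[s] = q.max()
    # "Fitted" step: regress the new value function onto the sampled backups.
    theta, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

print("fitted values for the first five states:", np.round((Phi @ theta)[:5], 3))
```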
Actor-critic (AC) algorithms empowered by neural networks have achieved significant empirical success in recent years. However, most existing theoretical support for AC algorithms focuses on the case of linear function approximation or linearized neural networks, where the feature representation is fixed throughout training. Such a limitation fails to capture the key aspect of representation learning in neural AC, which is crucial in practical problems. In this work, we take a mean-field perspective on the evolution and convergence of feature-based neural AC. Specifically, we consider a version of AC in which the actor and critic are represented by overparameterized two-layer neural networks and are updated with two-timescale learning rates. The critic is updated by temporal-difference (TD) learning with a larger step size, while the actor is updated via proximal policy optimization (PPO) with a smaller step size. In the continuous-time and infinite-width limiting regime, when the timescales are properly separated, we prove that neural AC finds the globally optimal policy at a sublinear rate. In addition, we prove that the feature representation induced by the critic network is allowed to evolve within a neighborhood of its initial one.
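A compact runnable sketch of the two-timescale structure (tabular rather than neural, with a plain advantage-based policy-gradient step standing in for the PPO actor update; the MDP and all constants are illustrative assumptions): the critic is updated by TD learning with a larger step size than the actor.

```python
import numpy as np

rng = np.random.default_rng(3)

# A small random MDP.
nS, nA, gamma = 6, 2, 0.95
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
R = rng.uniform(0, 1, size=(nS, nA))

theta = np.zeros((nS, nA))          # actor: softmax policy logits
V = np.zeros(nS)                    # critic: tabular value estimates
beta, alpha = 0.1, 0.01             # critic step >> actor step (two timescales)

def policy(s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

s = 0
for t in range(100_000):
    probs = policy(s)
    a = rng.choice(nA, p=probs)
    s_next = rng.choice(nS, p=P[s, a])
    r = R[s, a]

    # Critic: TD(0) update with the larger step size.
    delta = r + gamma * V[s_next] - V[s]
    V[s] += beta * delta

    # Actor: policy-gradient step with the smaller step size,
    # using the TD error as an advantage estimate.
    grad_log = -probs
    grad_log[a] += 1.0              # d/d(theta[s]) log pi(a|s) for a softmax policy
    theta[s] += alpha * delta * grad_log

    s = s_next

print("greedy actions per state:", theta.argmax(axis=1))
```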
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space. Focusing on a stochastic query model that provides noisy evaluations of the operator, we analyze a variance-reduced stochastic approximation scheme, and establish non-asymptotic bounds for both the operator defect and the estimation error, measured in an arbitrary semi-norm. In contrast to worst-case guarantees, our bounds are instance-dependent, and achieve the local asymptotic minimax risk non-asymptotically. For linear operators, contractivity can be relaxed to multi-step contractivity, so that the theory can be applied to problems like the average-reward policy evaluation problem in reinforcement learning. We illustrate the theory via applications to stochastic shortest path problems, two-player zero-sum Markov games, as well as policy evaluation and $Q$-learning for tabular Markov decision processes.
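To illustrate the flavor of variance reduction for fixed-point problems (an SVRG-style recentering sketch on a toy linear contractive operator with state-dependent noise; this is a generic scheme, not necessarily the paper's exact procedure), each epoch anchors the updates at a high-accuracy operator evaluation taken at a reference point.

```python
import numpy as np

rng = np.random.default_rng(4)

d = 5
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
A = 0.9 * Q                     # linear map with spectral norm 0.9 (a contraction)
b = rng.normal(size=d)
theta_star = np.linalg.solve(np.eye(d) - A, b)   # true fixed point of F(x) = A x + b

def noisy_F(x, noise):
    """One stochastic evaluation of the operator: F(x) plus state-dependent noise."""
    E, w = noise
    return (A + E) @ x + b + w

def draw_noise(scale=0.3):
    return scale * rng.normal(size=(d, d)), scale * rng.normal(size=d)

theta = np.zeros(d)
eta, epochs, inner, batch = 0.2, 30, 50, 200

for epoch in range(epochs):
    ref = theta.copy()
    # High-accuracy operator evaluation at the reference point.
    F_ref = np.mean([noisy_F(ref, draw_noise()) for _ in range(batch)], axis=0)
    for _ in range(inner):
        noise = draw_noise()
        # Recentred update: the shared noise largely cancels when theta is near ref.
        v = noisy_F(theta, noise) - noisy_F(ref, noise) + F_ref
        theta = (1 - eta) * theta + eta * v

print("error of the variance-reduced estimate:", np.linalg.norm(theta - theta_star))
```

With common random numbers, the two operator evaluations nearly cancel once `theta` is close to the reference point, which is the source of the variance reduction.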
We study stochastic approximation procedures for approximately solving a $d$-dimensional linear fixed-point equation based on observing a trajectory of length $n$ from an ergodic Markov chain. We first exhibit a non-asymptotic bound of the order $t_{\mathrm{mix}} \tfrac{d}{n}$, where $t_{\mathrm{mix}}$ is the mixing time. We then prove a non-asymptotic instance-dependent bound on a suitably averaged sequence of iterates, with a leading term that matches the local asymptotic minimax limit, including sharp dependence on the parameters $(d, t_{\mathrm{mix}})$ in the higher-order terms. We complement these upper bounds with a non-asymptotic minimax lower bound that establishes the instance-optimality of the averaged SA estimator. We derive corollaries of these results for policy evaluation with Markov noise -- covering the TD($\lambda$) family of algorithms for all $\lambda \in [0, 1)$ -- and for linear autoregressive models. Our instance-dependent characterizations open the door to the design of fine-grained model selection procedures for hyperparameter tuning (e.g., choosing the value of $\lambda$ when running the TD($\lambda$) algorithm).
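As a concrete member of the family these results cover, the sketch below runs TD($\lambda$) with linear features on a small ergodic Markov reward process and reports both the last iterate and its Polyak-Ruppert average; the chain, features, step size, and $\lambda$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# A small ergodic Markov reward process with linear value approximation.
nS, gamma, lam = 10, 0.9, 0.5
P = rng.dirichlet(np.ones(nS), size=nS)        # ergodic transition matrix
r = rng.uniform(0, 1, size=nS)                 # state rewards
d = 4
Phi = rng.normal(size=(nS, d))                 # feature matrix, one row per state

w = np.zeros(d)                                # TD(lambda) iterate
w_bar = np.zeros(d)                            # Polyak-Ruppert running average
z = np.zeros(d)                                # eligibility trace
eta = 0.01

s = 0
for t in range(1, 100_001):
    s_next = rng.choice(nS, p=P[s])
    delta = r[s] + gamma * Phi[s_next] @ w - Phi[s] @ w   # TD error
    z = gamma * lam * z + Phi[s]                          # eligibility trace
    w = w + eta * delta * z                               # TD(lambda) update
    w_bar += (w - w_bar) / t                              # iterate averaging
    s = s_next

print("last iterate   :", np.round(w, 3))
print("averaged (PR)  :", np.round(w_bar, 3))
```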
We revisit the domain of off-policy policy optimization in RL from the perspective of coordinate ascent. One commonly-used approach is to leverage the off-policy policy gradient to optimize a surrogate objective -- the expected total discounted return of the target policy with respect to the state distribution of the behavior policy. However, this approach has been shown to suffer from the distribution mismatch issue, and therefore significant efforts are needed for correcting this mismatch either via state distribution correction or a counterfactual method. In this paper, we rethink off-policy learning via Coordinate Ascent Policy Optimization (CAPO), an off-policy actor-critic algorithm that decouples policy improvement from the state distribution of the behavior policy without using the policy gradient. This design obviates the need for distribution correction or importance sampling in the policy improvement step of off-policy policy gradient. We establish the global convergence of CAPO with general coordinate selection and then further quantify the convergence rates of several instances of CAPO with popular coordinate selection rules, including the cyclic and the randomized variants of CAPO. We then extend CAPO to neural policies for a more practical implementation. Through experiments, we demonstrate that CAPO provides a competitive approach to RL in practice.
We consider the batch (offline) policy learning problem in the infinite-horizon Markov decision process setting. Motivated by mobile health applications, we focus on learning a policy that maximizes the long-term average reward. We propose a doubly robust estimator for the average reward and show that it achieves semiparametric efficiency. Further, we develop an optimization algorithm to compute the optimal policy in a parameterized stochastic policy class. The performance of the estimated policy is measured by the difference between the optimal average reward in the policy class and the average reward of the estimated policy, and we establish a finite-sample regret guarantee. The performance of the method is illustrated by simulation studies and an analysis of a mobile health study promoting physical activity.
We study the problem of average-reward Markov decision processes (AMDPs) and develop novel first-order methods with strong theoretical guarantees for both policy evaluation and policy optimization. Existing on-policy evaluation methods suffer from suboptimal convergence rates as well as failure in handling insufficiently random policies (e.g., deterministic policies) due to the lack of exploration. To remedy these issues, we develop a novel variance-reduced temporal difference (VRTD) method with linear function approximation for randomized policies, with optimal convergence guarantees, and an exploratory variance-reduced temporal difference (EVRTD) method for insufficiently random policies, with comparable convergence guarantees. We further establish linear convergence rates for the bias of policy evaluation, which is essential for improving the overall sample complexity of policy optimization. On the other hand, compared with the finite-sample analysis of policy gradient methods for MDPs, existing studies of policy gradient methods for AMDPs mostly rely on restrictive assumptions on the underlying Markov processes (see, e.g., Abbasi-Yadkori et al., 2019) and often lack guarantees on the overall sample complexity. To this end, we develop an average-reward variant of the stochastic policy mirror descent (SPMD) method (Lan, 2022). We establish the first $\widetilde{\mathcal{O}}(\epsilon^{-2})$ sample complexity for solving AMDPs with policy gradient methods under both the generative model (with the unichain assumption) and the Markovian noise model (with the ergodicity assumption). This bound can be further improved to $\widetilde{\mathcal{O}}(\epsilon^{-1})$ for solving regularized AMDPs. Our theoretical advantages are corroborated by numerical experiments.
We propose a new policy gradient method, named homotopic policy mirror descent (HPMD), for solving discounted, infinite horizon MDPs with finite state and action spaces. HPMD performs a mirror descent type policy update with an additional diminishing regularization term, and possesses several computational properties that seem to be new in the literature. We first establish the global linear convergence of HPMD instantiated with Kullback-Leibler divergence, for both the optimality gap, and a weighted distance to the set of optimal policies. Then local superlinear convergence is obtained for both quantities without any assumption. With local acceleration and diminishing regularization, we establish the first result among policy gradient methods on certifying and characterizing the limiting policy, by showing, with a non-asymptotic characterization, that the last-iterate policy converges to the unique optimal policy with the maximal entropy. We then extend all the aforementioned results to HPMD instantiated with a broad class of decomposable Bregman divergences, demonstrating the generality of these computational properties. As a by-product, we discover the finite-time exact convergence for some commonly used Bregman divergences, implying the continuing convergence of HPMD to the limiting policy even if the current policy is already optimal. Finally, we develop a stochastic version of HPMD and establish similar convergence properties. By exploiting the local acceleration, we show that for small optimality gap, a better than $\tilde{\mathcal{O}}(\left|\mathcal{S}\right| \left|\mathcal{A}\right| / \epsilon^2)$ sample complexity holds with high probability, when assuming a generative model for policy evaluation.
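To convey the flavor of the update (a minimal sketch, not the paper's algorithm or schedules): for the Kullback-Leibler divergence, a policy mirror descent step with an extra entropy regularizer of weight $\tau_k$ has a closed form, implemented below on a small tabular MDP with exact policy evaluation and illustrative step-size and regularization schedules.

```python
import numpy as np

rng = np.random.default_rng(6)

nS, nA, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # transition kernel P[s, a, s']
R = rng.uniform(0, 1, size=(nS, nA))

def q_values(pi):
    """Exact policy evaluation: solve (I - gamma * P_pi) V = r_pi, then assemble Q."""
    P_pi = np.einsum('sab,sa->sb', P, pi)
    r_pi = np.einsum('sa,sa->s', R, pi)
    V = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
    return R + gamma * P @ V

log_pi = np.full((nS, nA), -np.log(nA))          # start from the uniform policy

for k in range(100):
    eta = float(k + 1)                           # growing step size (illustrative)
    tau = 1.0 / (k + 1) ** 2                     # diminishing regularization (illustrative)
    Q = q_values(np.exp(log_pi))
    # KL mirror-descent step with entropy weight tau, in closed form:
    #   pi_new(.|s)  proportional to  pi(.|s)^(1/(1+eta*tau)) * exp(eta*Q(s,.)/(1+eta*tau))
    log_pi = (log_pi + eta * Q) / (1.0 + eta * tau)
    log_pi -= log_pi.max(axis=1, keepdims=True)
    log_pi -= np.log(np.exp(log_pi).sum(axis=1, keepdims=True))

pi = np.exp(log_pi)
print("greedy actions of the final policy:", pi.argmax(axis=1))
print("largest action probabilities      :", np.round(pi.max(axis=1), 4))
```

Because the regularizer vanishes over iterations, the iterates can approach a deterministic greedy policy; with $\tau_k$ held fixed they would instead track an entropy-regularized optimum.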
We provide a theoretical characterization of the use of function approximation for estimating weight functions and $q$-functions when these are estimated with minimax methods. Under various combinations of realizability and completeness assumptions, we show that the minimax approach enables us to achieve fast convergence rates for the weight and quality functions, characterized by the critical inequality \citep{bartlett2005}. Based on this result, we analyze convergence rates for off-policy evaluation (OPE). In particular, we introduce novel alternative completeness conditions under which OPE is feasible, and we present the first finite-sample result with first-order efficiency in non-tabular environments, i.e., with the minimal coefficient in the leading term.
This paper analyzes a two-timescale stochastic algorithmic framework for bilevel optimization. Bilevel optimization is a class of problems exhibiting a two-level structure, whose goal is to minimize an outer objective function whose variable is constrained to be the optimal solution of an (inner) optimization problem. We consider the case where the inner problem is unconstrained and strongly convex, while the outer problem is constrained and has a smooth objective function. We propose a two-timescale stochastic approximation (TTSA) algorithm for solving such bilevel problems. In the algorithm, a stochastic gradient update with a larger step size is used for the inner problem, while a projected stochastic gradient update with a smaller step size is used for the outer problem. We analyze the convergence rates of the TTSA algorithm under various settings: when the outer problem is strongly convex (resp. weakly convex), the TTSA algorithm finds an $\mathcal{O}(k^{-2/3})$-optimal (resp. $\mathcal{O}(k^{-2/5})$-stationary) solution, where $k$ is the total number of iterations. As an application, we show that the two-timescale natural actor-critic proximal policy optimization algorithm can be viewed as a special case of our TTSA framework. Importantly, the natural actor-critic algorithm is shown to converge at a rate of $\mathcal{O}(k^{-1/4})$ in terms of the gap in expected discounted reward compared with the globally optimal policy.
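Schematically (generic notation, suppressing the particular stochastic estimators used for the implicit outer gradient), the two-timescale iteration couples a fast inner step with a slow projected outer step:

```latex
\[
  y_{t+1} \;=\; y_t \;-\; \beta_t\,\widehat{\nabla}_y\, g(x_t, y_t),
  \qquad
  x_{t+1} \;=\; \Pi_{X}\!\Bigl(x_t \;-\; \alpha_t\,\widehat{\nabla} \ell(x_t, y_{t+1})\Bigr),
  \qquad \alpha_t \ll \beta_t ,
\]
```

where $g$ is the strongly convex inner objective, $\ell(x) = f\bigl(x, y^{*}(x)\bigr)$ is the outer objective evaluated at the inner solution, and $\Pi_X$ is the projection onto the outer constraint set.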