This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator). We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$. Despite a number of prior works tackling this problem, a complete picture of the trade-off between sample complexity and statistical accuracy has yet to be determined. In particular, all prior results suffer from a severe sample size barrier, in the sense that their claimed statistical guarantees hold only when the sample size exceeds $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^2}$. The current paper overcomes this barrier by certifying the minimax optimality of two algorithms (a model-based algorithm and a conservative model-based algorithm) as soon as the sample size exceeds $\frac{|\mathcal{S}||\mathcal{A}|}{1-\gamma}$ (modulo some log factors). Moving beyond infinite-horizon MDPs, we further study episodic finite-horizon MDPs, and prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity for any target accuracy level. To the best of our knowledge, this work delivers the first minimax-optimal guarantees that accommodate the full range of sample sizes (beyond which finding a meaningful policy is information-theoretically infeasible).
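The model-based (plug-in) approach studied above is conceptually simple: estimate the transition kernel empirically from the generative model, then plan in the empirical MDP. Below is a minimal sketch of that pipeline for a tabular MDP; all function and variable names are illustrative rather than taken from the paper, and the conservative/perturbation tweaks analyzed there are omitted.

```python
import numpy as np

def plug_in_value_iteration(sample_next_state, r, S, A, gamma, N, iters=1000):
    """Model-based RL with a generative model: estimate P, then run value iteration.

    sample_next_state(s, a) -> next-state index drawn from the true kernel.
    r: (S, A) reward array; gamma: discount factor; N: samples per (s, a) pair.
    """
    # Build the empirical transition kernel P_hat from N draws per (s, a).
    P_hat = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            for _ in range(N):
                P_hat[s, a, sample_next_state(s, a)] += 1.0 / N

    # Plan in the empirical MDP by value iteration on Q.
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)                      # greedy value
        Q = r + gamma * P_hat @ V              # Bellman update under P_hat
    return Q.argmax(axis=1), Q                 # greedy policy and its Q-estimate
```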
Q-learning, which seeks to learn the optimal Q-function of a Markov decision process (MDP) in a model-free fashion, lies at the heart of reinforcement learning. When it comes to the synchronous setting (such that independent samples for all state-action pairs are drawn from a generative model in each iteration), substantial progress has been made toward understanding the sample efficiency of Q-learning. Consider a $\gamma$-discounted infinite-horizon MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$: to yield an entrywise $\varepsilon$-accurate estimate of the optimal Q-function, state-of-the-art theory for Q-learning requires a sample size exceeding the order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^5\varepsilon^{2}}$, which fails to match existing minimax lower bounds. This gives rise to natural questions: what is the sharp sample complexity of Q-learning? Is Q-learning provably sub-optimal? This paper settles these questions for the synchronous setting: (1) when $|\mathcal{A}|=1$ (so that Q-learning reduces to TD learning), we prove that the sample complexity of TD learning is minimax optimal and scales as $\frac{|\mathcal{S}|}{(1-\gamma)^3\varepsilon^2}$ (up to log factors); (2) when $|\mathcal{A}|\geq 2$, we settle the sample complexity of Q-learning to be on the order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^4\varepsilon^2}$ (up to log factors). Our theory unveils the strict sub-optimality of Q-learning when $|\mathcal{A}|\geq 2$, and rigorously pins down the negative impact of over-estimation in Q-learning. Finally, we extend our analysis to accommodate asynchronous Q-learning (i.e., the case with Markovian samples), sharpening the horizon dependence of its sample complexity to $\frac{1}{(1-\gamma)^4}$.
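For reference, a minimal sketch of the synchronous protocol is given below: every state-action pair is refreshed in each iteration with one fresh draw from the generative model. The stepsize schedule shown is one common rescaled-linear choice and is illustrative, not the exact schedule analyzed in the paper.

```python
import numpy as np

def synchronous_q_learning(sample_next_state, r, S, A, gamma, T):
    """Synchronous Q-learning: every (s, a) is refreshed with one generative-model
    sample per iteration.  The stepsize eta_t = 1/(1 + (1 - gamma) * t) is an
    illustrative choice of schedule."""
    Q = np.zeros((S, A))
    for t in range(1, T + 1):
        eta = 1.0 / (1.0 + (1.0 - gamma) * t)
        target = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                s_next = sample_next_state(s, a)         # fresh sample for (s, a)
                target[s, a] = r[s, a] + gamma * Q[s_next].max()
        Q = (1.0 - eta) * Q + eta * target               # stochastic fixed-point step
    return Q
```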
The curse of dimensionality is a widely acknowledged issue in reinforcement learning (RL). In the tabular setting where the state space $\mathcal{S}$ and the action space $\mathcal{A}$ are both finite, to obtain a near-optimal policy with sampling access to a generative model, the minimax-optimal sample complexity scales linearly with $|\mathcal{S}|\times|\mathcal{A}|$, which can be prohibitively large when $\mathcal{S}$ or $\mathcal{A}$ is large. This paper considers a Markov decision process (MDP) that admits a set of state-action features that can linearly express (or approximate) its probability transition kernel. We show that a model-based approach (resp. Q-learning) provably learns an $\varepsilon$-optimal policy (resp. Q-function) with high probability as soon as the sample size exceeds the order of $\frac{K}{(1-\gamma)^{3}\varepsilon^{2}}$ (resp. $\frac{K}{(1-\gamma)^{4}\varepsilon^{2}}$), up to some logarithmic factor. Here, $K$ is the feature dimension and $\gamma\in(0,1)$ is the discount factor of the MDP. Both sample complexity bounds are provably tight, and our result for the model-based approach matches the minimax lower bound. Our results show that for arbitrarily large-scale MDPs, both the model-based approach and Q-learning are sample-efficient when $K$ is relatively small, hence the title of this paper.
Offline or batch reinforcement learning seeks to learn a near-optimal policy using historical data, without active exploration of the environment. To counter the insufficient coverage and sample scarcity of many offline datasets, the principle of pessimism has recently been introduced to mitigate the high bias of the estimated values. While pessimistic variants of model-based algorithms (e.g., value iteration with lower confidence bounds) have been theoretically investigated, their model-free counterparts, which do not require explicit model estimation, have not been adequately studied, especially in terms of sample efficiency. To address this inadequacy, we study a pessimistic variant of Q-learning in the context of finite-horizon Markov decision processes, and characterize its sample complexity under a single-policy concentrability assumption that does not require full coverage of the state-action space. In addition, a variance-reduced pessimistic Q-learning algorithm is proposed to achieve near-optimal sample complexity. Altogether, this work highlights the efficiency of model-free algorithms in offline RL when used in conjunction with pessimism and variance reduction.
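A minimal sketch of the pessimistic idea in model-free form is shown below: a lower-confidence-bound penalty, shrinking with the visitation count, is subtracted from each Q-learning target so that sparsely covered state-action pairs are valued conservatively. The bonus constant, stepsize, and data layout are illustrative assumptions, not the paper's exact algorithm (and the variance-reduced variant is omitted).

```python
import numpy as np

def lcb_q_learning(dataset, S, A, H, c_b=1.0):
    """Pessimistic (LCB) Q-learning sketch for finite-horizon offline RL.

    dataset: list of trajectories, each a list of (h, s, a, r, s_next) tuples.
    A Hoeffding-style bonus ~ c_b * sqrt(H^2 / n) is subtracted from the target,
    so rarely visited (s, a) pairs are valued pessimistically.
    """
    Q = np.zeros((H + 1, S, A))      # Q[H] stays zero (terminal step)
    V = np.zeros((H + 1, S))
    N = np.zeros((H, S, A))          # visitation counts
    for traj in dataset:
        for (h, s, a, r, s_next) in traj:
            N[h, s, a] += 1
            n = N[h, s, a]
            eta = (H + 1) / (H + n)                        # rescaled-linear stepsize
            bonus = c_b * np.sqrt(H ** 2 / n)              # pessimism penalty
            target = r + V[h + 1, s_next] - bonus
            Q[h, s, a] = (1 - eta) * Q[h, s, a] + eta * target
            V[h, s] = max(min(Q[h, s].max(), H - h), 0.0)  # clip to the valid range
    return Q, V
```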
This paper makes progress toward learning Nash equilibria in two-player zero-sum Markov games from offline data. Specifically, consider a $\gamma$-discounted infinite-horizon Markov game with $S$ states, in which the max-player has $A$ actions and the min-player has $B$ actions. We propose a pessimistic model-based algorithm with Bernstein-style lower confidence bounds (called VI-LCB-Game), which provably finds an $\varepsilon$-approximate Nash equilibrium with a sample complexity no larger than $\frac{C_{\mathsf{clipped}}^{\star}S(A+B)}{(1-\gamma)^{3}\varepsilon^{2}}$ (up to some log factor). Here, $C_{\mathsf{clipped}}^{\star}$ is some unilateral clipped concentrability coefficient that reflects the coverage and distribution shift of the available data (vis-à-vis the target data), and the target accuracy $\varepsilon$ can be any value within $\big(0,\frac{1}{1-\gamma}\big]$. Our sample complexity bound strengthens prior art by a factor of $\min\{A,B\}$, achieving minimax optimality for the entire $\varepsilon$-range. An appealing feature of our result lies in its algorithmic simplicity, which reveals the unnecessity of variance reduction and sample splitting.
The softmax policy gradient (PG) method, which performs gradient ascent under softmax policy parameterization, is arguably one of the de facto implementations of policy optimization in modern reinforcement learning. For $\gamma$-discounted infinite-horizon tabular Markov decision processes (MDPs), remarkable progress has recently been achieved towards establishing global convergence of softmax PG methods in finding a near-optimal policy. However, prior results fall short of delineating clear dependencies of convergence rates on salient parameters such as the cardinality of the state space $\mathcal{S}$ and the effective horizon $\frac{1}{1-\gamma}$, both of which could be excessively large. In this paper, we deliver a pessimistic message regarding the iteration complexity of softmax PG methods, despite assuming access to exact gradient computation. Specifically, we demonstrate that the softmax PG method with stepsize $\eta$ can take \[ \frac{1}{\eta} |\mathcal{S}|^{2^{\Omega\big(\frac{1}{1-\gamma}\big)}} ~\text{iterations} \] to converge, even in the presence of a benign policy initialization and an initial state distribution amenable to exploration (so that the distribution mismatch coefficient is not exceedingly large). This is accomplished by characterizing the algorithmic dynamics over a carefully-constructed MDP containing only three actions. Our exponential lower bound hints at the necessity of carefully adjusting update rules or enforcing proper regularization in accelerating PG methods.
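For orientation, the update analyzed above can be written down exactly for a tabular MDP, since the gradient of the discounted value under softmax parameterization has a well-known closed form. The sketch below performs exact policy evaluation and exact gradient ascent; all names are illustrative.

```python
import numpy as np

def softmax_pg(P, r, rho, gamma, eta, T):
    """Exact softmax policy gradient on a tabular MDP.

    P: (S, A, S) transition kernel, r: (S, A) rewards, rho: initial distribution.
    Uses the standard gradient formula
        dV(rho)/dtheta[s, a] = d_rho(s) * pi(a|s) * Adv(s, a) / (1 - gamma).
    """
    S, A, _ = P.shape
    theta = np.zeros((S, A))
    for _ in range(T):
        pi = np.exp(theta - theta.max(axis=1, keepdims=True))
        pi /= pi.sum(axis=1, keepdims=True)                     # softmax policy
        P_pi = np.einsum('sap,sa->sp', P, pi)                   # state kernel under pi
        r_pi = (pi * r).sum(axis=1)
        V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)     # exact policy evaluation
        Q = r + gamma * P @ V
        adv = Q - V[:, None]
        d_rho = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, rho)
        theta += eta * d_rho[:, None] * pi * adv / (1 - gamma)  # gradient ascent step
    return theta, V
```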
This paper is concerned with two-player zero-sum Markov games, arguably the most basic setting in multi-agent reinforcement learning, with the goal of learning a Nash equilibrium (NE) sample-optimally. All prior results suffer from at least one of two obstacles: the curse of multiple agents and the barrier of long horizons, regardless of the sampling protocol in use. We take a step toward settling this problem, assuming access to a flexible sampling mechanism: the generative model. Focusing on non-stationary finite-horizon Markov games, we develop a learning algorithm $\mathsf{Nash}\text{-}\mathsf{Q}\text{-}\mathsf{FTRL}$ and an adaptive sampling scheme that leverage the optimism principle in adversarial learning (particularly the Follow-the-Regularized-Leader (FTRL) method), with a delicate design of bonus terms that ensure certain decomposability under the FTRL dynamics. Our algorithm learns an $\varepsilon$-approximate Markov NE policy using $\widetilde{O}\bigg(\frac{H^4 S(A+B)}{\varepsilon^2}\bigg)$ samples, where $S$ is the number of states, $H$ is the horizon, and $A$ (resp. $B$) denotes the number of actions of the max-player (resp. min-player). This is nearly un-improvable in a minimax sense. Along the way, we derive a refined regret bound for FTRL that makes explicit the role of variance-type quantities, which might be of independent interest.
This paper concerns the central issues of model robustness and sample efficiency in offline reinforcement learning (RL), which aims to learn to perform decision making from historical data without active exploration. Due to uncertainties and variabilities of the environment, it is critical to learn a robust policy, with as few samples as possible, that performs well even when the deployed environment deviates from the nominal one used to collect the historical dataset. We consider a distributionally robust formulation of offline RL, focusing on tabular non-stationary finite-horizon robust Markov decision processes with an uncertainty set specified by the Kullback-Leibler divergence. To combat sample scarcity, a model-based algorithm is proposed that combines distributionally robust value iteration with the principle of pessimism in the face of uncertainty, by penalizing the robust value estimates with a carefully designed data-driven penalty term. Under a mild and tailored assumption on the historical dataset, which measures the distribution shift without requiring full coverage of the state-action space, we establish the finite-sample complexity of the proposed algorithm, and further show that it is nearly un-improvable in light of a matching information-theoretic lower bound up to a polynomial factor of the horizon length. To the best of our knowledge, this provides the first provably near-optimal robust offline RL algorithm that learns under model uncertainty and partial coverage.
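One standard way to carry out the distributionally robust backup with a KL uncertainty set is through its scalar dual, which the sketch below optimizes numerically; the data-driven pessimism penalty from the paper is left as an optional argument, and all constants, names, and the model format are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_robust_expectation(p_hat, V, sigma):
    """Worst-case expectation of V over {P : KL(P || p_hat) <= sigma}, via the dual
    sup_{lam > 0}  -lam * log E_{p_hat}[exp(-V/lam)] - lam * sigma."""
    v0 = V.min()                                   # shift for numerical stability
    def neg_dual(lam):
        ev = max(np.dot(p_hat, np.exp(-(V - v0) / lam)), 1e-300)
        return lam * np.log(ev) + lam * sigma - v0
    res = minimize_scalar(neg_dual, bounds=(1e-6, 1e4), method='bounded')
    return -res.fun

def robust_value_iteration(P_hat, r, sigma, penalty=None):
    """Distributionally robust value iteration on an estimated non-stationary model.
    P_hat: (H, S, A, S) empirical kernels, r: (S, A) rewards in [0, 1],
    penalty[h, s, a] (optional): data-driven pessimism term as in the paper."""
    H, S, A, _ = P_hat.shape
    V = np.zeros((H + 1, S))
    policy = np.zeros((H, S), dtype=int)
    for h in range(H - 1, -1, -1):
        Q = np.zeros((S, A))
        for s in range(S):
            for a in range(A):
                Q[s, a] = r[s, a] + kl_robust_expectation(P_hat[h, s, a], V[h + 1], sigma)
                if penalty is not None:
                    Q[s, a] -= penalty[h, s, a]
        V[h] = np.clip(Q.max(axis=1), 0.0, H - h)
        policy[h] = Q.argmax(axis=1)
    return policy, V
```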
Policy optimization, which learns the policy of interest by maximizing the value function via large-scale optimization techniques, lies at the heart of modern reinforcement learning (RL). In addition to value maximization, other practical considerations arise as well, including encouraging exploration and ensuring certain structural properties of the learned policy due to safety, resource, and operational constraints. These considerations can often be accounted for by resorting to regularized RL, which augments the target value function with a structure-promoting regularization term. Focusing on infinite-horizon discounted Markov decision processes, this paper proposes a generalized policy mirror descent (GPMD) algorithm for solving regularized RL. As a generalization of policy mirror descent (Lan, 2021), the proposed algorithm accommodates a general class of convex regularizers, as well as a broad family of Bregman divergences chosen in cognizance of the regularizer in use. We demonstrate that our algorithm converges linearly to the global solution over the entire range of learning rates, in a dimension-free fashion, even when the regularizer lacks strong convexity and smoothness. In addition, this linear convergence feature is provably stable in the face of inexact policy evaluation and imperfect policy updates. Numerical experiments are provided to corroborate the applicability and attractive performance of GPMD.
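To make the update concrete, the sketch below specializes to the negative-entropy regularizer paired with the KL Bregman divergence, for which the per-state mirror-descent subproblem has a closed-form multiplicative solution; for other convex regularizers the per-state update is a small convex program. The code assumes exact policy evaluation and all names are illustrative, not the paper's implementation.

```python
import numpy as np

def pmd_entropy(P, r, gamma, tau, eta, T):
    """Policy mirror descent for entropy-regularized RL (entropy regularizer + KL
    Bregman divergence, a special case with a closed-form update)."""
    S, A, _ = P.shape
    pi = np.full((S, A), 1.0 / A)
    for _ in range(T):
        # Exact evaluation of the entropy-regularized value of the current policy.
        r_pi = (pi * (r - tau * np.log(pi))).sum(axis=1)
        P_pi = np.einsum('sap,sa->sp', P, pi)
        V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
        Q = r + gamma * P @ V                                  # soft Q-function
        # Closed-form mirror-descent step:
        #   pi_new(a|s) proportional to pi(a|s)^{1/(1+eta*tau)} * exp(eta*Q(s,a)/(1+eta*tau))
        logits = (np.log(pi) + eta * Q) / (1.0 + eta * tau)
        pi = np.exp(logits - logits.max(axis=1, keepdims=True))
        pi /= pi.sum(axis=1, keepdims=True)
    return pi, V
```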
The practicality of reinforcement learning algorithms has been limited due to poor scaling with respect to the problem size, as the sample complexity of learning an $\epsilon$-optimal policy is $\tilde{\Omega}\left(|S||A|H^3/\epsilon^2\right)$ over worst-case instances of an MDP with state space $S$, action space $A$, and horizon $H$. We consider a class of MDPs that exhibit low-rank structure, where the latent features are unknown. We argue that a natural combination of value iteration and low-rank matrix estimation results in an estimation error that grows exponentially in the horizon. We then provide a new algorithm, along with statistical guarantees, that efficiently exploits low-rank structure given access to a generative model, achieving a sample complexity of $\tilde{O}\left(d^5(|S|+|A|)\,\mathrm{poly}(H)/\epsilon^2\right)$ for a rank-$d$ setting, which is minimax optimal with respect to the scaling of $|S|$, $|A|$, and $\epsilon$. In contrast to the literature on linear and low-rank MDPs, we do not require a known feature mapping, our algorithm is computationally simple, and our results hold for long time horizons. Our results provide insights on the minimal low-rank structural assumptions required on the MDP with respect to the transition kernel versus the optimal action-value function.
We study the off-policy evaluation (OPE) problem in reinforcement learning with linear function approximation, which aims to estimate the value function of a target policy based on offline data collected by a behavior policy. We propose to incorporate the variance information of the value function to improve the sample efficiency of OPE. More specifically, for time-inhomogeneous episodic linear Markov decision processes (MDPs), we propose an algorithm, VA-OPE, which uses the estimated variance of the value function to reweight the Bellman residual in Fitted Q-Iteration. We show that our algorithm achieves a tighter error bound than the best-known result. We also provide a fine-grained characterization of the distribution shift between the behavior policy and the target policy. Extensive numerical experiments corroborate our theory.
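A schematic version of the variance-reweighting idea is sketched below: each backward fitted-Q regression weights a transition's Bellman residual by the inverse of an estimated variance, so low-noise transitions contribute more. The data layout, the variance oracle `sigma2`, and the ridge parameter are illustrative assumptions rather than the exact VA-OPE procedure.

```python
import numpy as np

def variance_weighted_fqi(data, phi, pi_target, H, d, sigma2, lam=1.0):
    """Sketch of variance-reweighted Fitted Q-Iteration for OPE in a linear MDP.

    data[h]   : list of (s, a, r, s_next) transitions from the behavior policy.
    phi(s, a) : d-dimensional feature map;  pi_target(s) -> target-policy action.
    sigma2[h] : callable (s, a) -> estimated variance of the value at step h
                (in VA-OPE this comes from an auxiliary variance-estimation stage).
    """
    w = [np.zeros(d) for _ in range(H + 1)]          # w[H] = 0 (terminal)
    for h in range(H - 1, -1, -1):
        Lam = lam * np.eye(d)
        b = np.zeros(d)
        for (s, a, r, s_next) in data[h]:
            weight = 1.0 / max(sigma2[h](s, a), 1e-6)          # inverse-variance weight
            target = r + phi(s_next, pi_target(s_next)) @ w[h + 1]
            x = phi(s, a)
            Lam += weight * np.outer(x, x)
            b += weight * x * target
        w[h] = np.linalg.solve(Lam, b)               # weighted ridge regression
    return w                                          # Q_h(s, a) ~ phi(s, a) @ w[h]
```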
We study synchronous Q-learning with Polyak-Ruppert averaging (a.k.a., averaged Q-learning) in a $\gamma$-discounted MDP. We establish asymptotic normality for the averaged iterate $\bar{\boldsymbol{Q}}_T$. Furthermore, we show that $\bar{\boldsymbol{Q}}_T$ is actually a regular asymptotically linear (RAL) estimator for the optimal Q-value function $\boldsymbol{Q}^*$ with the most efficient influence function. It implies that the averaged Q-learning iterate has the smallest asymptotic variance among all RAL estimators. In addition, we present a non-asymptotic analysis for the $\ell_{\infty}$ error $\mathbb{E}\|\bar{\boldsymbol{Q}}_T-\boldsymbol{Q}^*\|_{\infty}$, showing that it matches the instance-dependent lower bound as well as the optimal minimax complexity lower bound. As a byproduct, we find that the Bellman noise has sub-Gaussian coordinates with variance $\mathcal{O}((1-\gamma)^{-1})$ instead of the prevailing $\mathcal{O}((1-\gamma)^{-2})$ under the standard bounded-reward assumption. The sub-Gaussian result has the potential to improve the sample complexity of many RL algorithms. In short, our theoretical analysis shows that averaged Q-learning is statistically efficient.
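For concreteness, a minimal sketch of averaged Q-learning is given below: run synchronous Q-learning and return the running (Polyak-Ruppert) average of the iterates, which is the estimator whose asymptotic efficiency the abstract discusses. The constant stepsize is purely illustrative; the analysis above concerns suitable stepsize schedules.

```python
import numpy as np

def averaged_q_learning(sample_next_state, r, S, A, gamma, T, eta=0.1):
    """Synchronous Q-learning with Polyak-Ruppert averaging of the iterates."""
    Q = np.zeros((S, A))
    Q_bar = np.zeros((S, A))
    for t in range(1, T + 1):
        target = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                s_next = sample_next_state(s, a)     # one fresh sample per (s, a)
                target[s, a] = r[s, a] + gamma * Q[s_next].max()
        Q = (1 - eta) * Q + eta * target             # plain Q-learning iterate
        Q_bar += (Q - Q_bar) / t                     # running average of the iterates
    return Q_bar
```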
We introduce a generic strategy for provably efficient multi-goal exploration. It relies on AdaGoal, a novel goal selection scheme based on a simple constrained optimization problem, which adaptively targets goal states that are neither too difficult nor too easy to reach under the agent's current knowledge. We show how AdaGoal can be used to tackle the objective of learning an $\epsilon$-optimal goal-conditioned policy for the set of goal states that are reachable within $L$ steps in expectation from a reference state $s_0$ in a reward-free Markov decision process. In the standard tabular case, our algorithm requires $\tilde{O}(L^3 S A \epsilon^{-2})$ exploration steps, which is nearly minimax optimal. We also readily instantiate AdaGoal in linear mixture Markov decision processes, yielding the first goal-oriented PAC guarantee with linear function approximation. Beyond its strong theoretical guarantees, AdaGoal is anchored in the high-level algorithmic structure of existing methods for goal-conditioned deep reinforcement learning.
We study the stochastic shortest path (SSP) problem in reinforcement learning with linear function approximation, where the transition kernel is represented as a linear mixture of unknown models. We call this class of SSP problems linear mixture SSPs. We propose a novel algorithm with Hoeffding-type confidence sets for learning the linear mixture SSP, which attains an $\tilde{\mathcal{O}}(d B_{\star}^{1.5}\sqrt{K/c_{\min}})$ regret. Here, $K$ is the number of episodes, $d$ is the dimension of the feature mapping in the mixture model, $B_{\star}$ bounds the expected cumulative cost of the optimal policy, and $c_{\min}>0$ is the lower bound of the cost function. Our algorithm also applies to the case when $c_{\min}=0$, where a $\tilde{\mathcal{O}}(K^{2/3})$ regret is guaranteed. To the best of our knowledge, this is the first algorithm with a sublinear regret guarantee for learning linear mixture SSPs. Moreover, we design a refined Bernstein-type confidence set and propose an improved algorithm, which provably achieves an $\tilde{\mathcal{O}}(d B_{\star}\sqrt{K/c_{\min}})$ regret. To complement the regret upper bounds, we also prove a lower bound of $\Omega(d B_{\star}\sqrt{K})$. Hence, our improved algorithm matches the lower bound up to a $1/\sqrt{c_{\min}}$ factor and poly-logarithmic factors, achieving a near-optimal regret guarantee.
We study the problems of model estimation and reward-free learning in episodic block MDPs. In these MDPs, the decision maker has access to rich observations or contexts generated from a small number of latent states. We are first interested in estimating the latent state decoding function (the mapping from observations to latent states) based on data generated under a fixed behavior policy. We derive an information-theoretic lower bound on the error rate for estimating this function, and present an algorithm approaching this fundamental limit. In turn, our algorithm also provides estimates of all the components of the MDP. We then study the problem of learning near-optimal policies in the reward-free framework. Based on our efficient model estimation algorithm, we show that we can infer a policy converging (as the number of collected samples grows large) to the optimal policy at the optimal rate. Interestingly, our analysis provides necessary and sufficient conditions under which exploiting the block structure yields improvements in the sample complexity for identifying near-optimal policies. When these conditions are met, the sample complexity in the minimax reward-free setting is improved by a multiplicative factor $n$, where $n$ is the number of possible contexts.
This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average reward Markov Decision Process (AMDP), given access to a generative model (simulator). When the ground-truth MDP is weakly communicating, we prove an upper bound of $\widetilde O(H \varepsilon^{-3} \ln \frac{1}{\delta})$ samples per state-action pair, where $H := sp(h^*)$ is the span of bias of any optimal policy, $\varepsilon$ is the accuracy and $\delta$ is the failure probability. This bound improves the best-known mixing-time-based approaches in [Jin & Sidford 2021], which assume the mixing-time of every deterministic policy is bounded. The core of our analysis is a proper reduction bound from AMDP problems to discounted MDP (DMDP) problems, which may be of independent interest since it allows the application of DMDP algorithms for AMDP in other settings. We complement our upper bound by proving a minimax lower bound of $\Omega(|\mathcal S| |\mathcal A| H \varepsilon^{-2} \ln \frac{1}{\delta})$ total samples, showing that a linear dependence on $H$ is necessary and that our upper bound matches the lower bound in all parameters of $(|\mathcal S|, |\mathcal A|, H, \ln \frac{1}{\delta})$ up to some logarithmic factors.
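Operationally, the reduction can be summarized as: pick a discount factor whose effective horizon $\frac{1}{1-\gamma}$ is on the order of $H/\varepsilon$, and hand the problem to any DMDP solver. The planning-level sketch below uses exact value iteration as a stand-in for the sample-based DMDP algorithm; the particular constant in the choice of $\gamma$ is illustrative, not the one analyzed in the paper.

```python
import numpy as np

def amdp_via_discounted(P, r, span_bound, eps, max_iters=100000, tol=1e-8):
    """Reduction sketch: treat an average-reward MDP as a discounted MDP whose
    effective horizon scales like span_bound / eps, then plan in it.
    P: (S, A, S) transition kernel, r: (S, A) rewards in [0, 1]."""
    gamma = 1.0 - eps / (eps + span_bound)     # effective horizon ~ span_bound / eps
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    for _ in range(max_iters):
        Q_new = r + gamma * P @ Q.max(axis=1)  # Bellman optimality update
        diff = np.abs(Q_new - Q).max()
        Q = Q_new
        if diff < tol:
            break
    return Q.argmax(axis=1)                    # greedy DMDP policy, used for the AMDP
```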
In this paper we develop a theoretical analysis of the performance of sampling-based fitted value iteration (FVI) to solve infinite state-space, discounted-reward Markovian decision processes (MDPs) under the assumption that a generative model of the environment is available. Our main results come in the form of finite-time bounds on the performance of two versions of sampling-based FVI. The convergence rate results obtained allow us to show that both versions of FVI are well behaving in the sense that by using a sufficiently large number of samples for a large class of MDPs, arbitrarily good performance can be achieved with high probability. An important feature of our proof technique is that it permits the study of weighted $L^p$-norm performance bounds. As a result, our technique applies to a large class of function-approximation methods (e.g., neural networks, adaptive regression trees, kernel machines, locally weighted learning), and our bounds scale well with the effective horizon of the MDP. The bounds show a dependence on the stochastic stability properties of the MDP: they scale with the discounted-average concentrability of the future-state distributions. They also depend on a new measure of the approximation power of the function space, the inherent Bellman residual, which reflects how well the function space is "aligned" with the dynamics and rewards of the MDP. The conditions of the main result, as well as the concepts introduced in the analysis, are extensively discussed and compared to previous theoretical results. Numerical experiments are used to substantiate the theoretical findings.
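A compact sketch of sampling-based FVI with a linear least-squares regressor (standing in for the general function-approximation classes covered by the analysis) is given below; the sample sizes, the feature map, and the set of regression states are illustrative assumptions.

```python
import numpy as np

def sampling_based_fvi(sample_next_state, reward, states, actions, gamma,
                       features, n_iters=50, m=20):
    """Sampling-based fitted value iteration with linear function approximation.

    sample_next_state(s, a) -> a next state drawn from the generative model.
    features(s) -> feature vector used by the least-squares regressor.
    states: finite set of representative states used as the regression design.
    """
    d = len(features(states[0]))
    theta = np.zeros(d)
    V = lambda s: features(s) @ theta          # current value-function estimate
    for _ in range(n_iters):
        X, y = [], []
        for s in states:
            # Monte-Carlo estimate of the Bellman backup at s, m samples per action.
            backups = [np.mean([reward(s, a) + gamma * V(sample_next_state(s, a))
                                for _ in range(m)]) for a in actions]
            X.append(features(s))
            y.append(max(backups))
        # Fit the next value function by least squares on the sampled backups.
        theta, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return theta
```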
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space. Focusing on a stochastic query model that provides noisy evaluations of the operator, we analyze a variance-reduced stochastic approximation scheme, and establish non-asymptotic bounds for both the operator defect and the estimation error, measured in an arbitrary semi-norm. In contrast to worst-case guarantees, our bounds are instance-dependent, and achieve the local asymptotic minimax risk non-asymptotically. For linear operators, contractivity can be relaxed to multi-step contractivity, so that the theory can be applied to problems like the average-reward policy evaluation problem in reinforcement learning. We illustrate the theory via applications to stochastic shortest path problems, two-player zero-sum Markov games, as well as policy evaluation and $Q$-learning for tabular Markov decision processes.
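A schematic version of the variance-reduced idea is sketched below: each epoch re-centers the stochastic updates at a reference point using a large batch of operator evaluations, and the same random realization is applied to both the current iterate and the reference point so that the recentered noise shrinks as the iterate approaches the reference. This is a generic sketch under those assumptions, not the paper's exact recursion.

```python
import numpy as np

def variance_reduced_fixed_point(draw_operator, theta0, n_epochs, inner_steps,
                                 batch, eta):
    """Variance-reduced stochastic approximation for the fixed point of a
    contractive operator.

    draw_operator() returns one random realization (a callable) whose expectation
    is the true operator; applying the same realization to the iterate and to the
    reference point is what makes the recentering reduce variance.
    """
    theta_ref = np.array(theta0, dtype=float)
    for _ in range(n_epochs):
        ops = [draw_operator() for _ in range(batch)]
        O_ref = np.mean([op(theta_ref) for op in ops], axis=0)   # batched recentering
        theta = theta_ref.copy()
        for _ in range(inner_steps):
            op = draw_operator()
            # Recentered (variance-reduced) stochastic update.
            theta = (1 - eta) * theta + eta * (op(theta) - op(theta_ref) + O_ref)
        theta_ref = theta
    return theta_ref
```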
This paper studies systematic exploration for reinforcement learning with rich observations and function approximation. We introduce a new model called contextual decision processes, that unifies and generalizes most prior settings. Our first contribution is a complexity measure, the Bellman rank, that we show enables tractable learning of near-optimal behavior in these processes and is naturally small for many well-studied reinforcement learning settings. Our second contribution is a new reinforcement learning algorithm that engages in systematic exploration to learn contextual decision processes with low Bellman rank. Our algorithm provably learns near-optimal behavior with a number of samples that is polynomial in all relevant parameters but independent of the number of unique observations. The approach uses Bellman error minimization with optimistic exploration and provides new insights into efficient exploration for reinforcement learning with function approximation.
Offline reinforcement learning (RL) concerns pursuing an optimal policy for sequential decision-making from a pre-collected dataset, without further interaction with the environment. Recent theoretical progress has focused on developing sample-efficient offline RL algorithms with various relaxed assumptions on data coverage and function approximators, especially to handle the case with excessively large state-action spaces. Among them, the framework based on the linear-programming (LP) reformulation of Markov decision processes has shown promise: it enables sample-efficient offline RL with function approximation, under only partial data coverage and realizability assumptions on the function classes, with favorable computational tractability. In this work, we revisit the LP framework for offline RL, and advance the existing results in several aspects, relaxing certain assumptions and achieving optimal statistical rates in terms of sample size. Our key enabler is to introduce proper constraints in the reformulation, instead of using any regularization as in the literature, sometimes also with careful choices of the function classes and initial state distributions. We hope our insights further advocate the study of the LP framework, as well as the induced primal-dual minimax optimization, in offline RL.
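For orientation, the LP reformulation in question is, in the tabular case, the classical occupancy-measure linear program; the sketch below solves it exactly with a generic LP solver. In the offline, function-approximation regime studied here, the occupancy measure and the constraints are instead represented through function classes and enforced on the data, so this is only the idealized starting point; all names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def solve_mdp_lp(P, r, rho, gamma):
    """Occupancy-measure (dual) LP for a discounted tabular MDP:
      max_mu  <mu, r>
      s.t.    sum_a mu(s', a) = (1 - gamma) * rho(s')
                                + gamma * sum_{s, a} P(s'|s, a) * mu(s, a),   mu >= 0.
    """
    S, A, _ = P.shape
    c = -r.reshape(S * A)                                # linprog minimizes
    A_eq = np.zeros((S, S * A))
    for s in range(S):
        for a in range(A):
            A_eq[s, s * A + a] += 1.0                    # sum_a mu(s, a)
            A_eq[:, s * A + a] -= gamma * P[s, a]        # - gamma * P(.|s, a) * mu(s, a)
    b_eq = (1 - gamma) * rho
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (S * A))
    mu = res.x.reshape(S, A)
    pi = mu / np.maximum(mu.sum(axis=1, keepdims=True), 1e-12)   # induced policy
    return pi, mu
```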