Reinforcement learning (RL) folklore suggests that history-based function approximation methods, such as recurrent neural nets or history-based state abstraction, perform better than their memory-less counterparts, due to the fact that function approximation in Markov decision processes (MDPs) can be viewed as inducing a partially observable MDP. However, there has been little formal analysis of such history-based algorithms, as most existing frameworks focus exclusively on memory-less features. In this paper, we introduce a theoretical framework for studying the behaviour of RL algorithms that learn to control an MDP using history-based feature abstraction mappings. Furthermore, we use this framework to design a practical RL algorithm and we numerically evaluate its effectiveness on a set of continuous control tasks.
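As a concrete illustration of a history-based feature abstraction of the kind discussed above, the sketch below stacks the last k observations into a fixed-length feature vector that a memory-less learner (here, linear Q-learning over the abstract features) can consume. This is a minimal, hypothetical example under assumed interfaces, not the algorithm proposed in the paper; the window length k and the linear learner are illustrative choices only.

```python
import numpy as np
from collections import deque

class HistoryFeatureMap:
    """Map the last k observations to a fixed-length feature vector."""
    def __init__(self, obs_dim, k):
        self.k = k
        self.buffer = deque([np.zeros(obs_dim) for _ in range(k)], maxlen=k)

    def reset(self):
        for i in range(self.k):
            self.buffer[i] = np.zeros_like(self.buffer[i])

    def __call__(self, obs):
        self.buffer.append(np.asarray(obs, dtype=float))
        return np.concatenate(self.buffer)   # history-based abstract state

# A memory-less learner on top of the abstract features: linear Q-learning.
def q_values(theta, features):               # theta: (num_actions, feature_dim)
    return theta @ features

def td_update(theta, feat, a, r, next_feat, alpha=0.1, gamma=0.99):
    target = r + gamma * np.max(q_values(theta, next_feat))
    theta[a] += alpha * (target - q_values(theta, feat)[a]) * feat
    return theta
```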
In recent years, actor-critic (AC) algorithms powered by neural networks have achieved significant empirical success. However, most existing theoretical support for AC algorithms focuses on the case of linear function approximation or linearized neural networks, where the feature representation is fixed throughout training. Such a limitation fails to capture the key aspect of representation learning in neural AC, which is crucial in practical problems. In this work, we take a mean-field perspective on the evolution and convergence of feature-based neural AC. Specifically, we consider a version of AC in which the actor and critic are represented by overparameterized two-layer neural networks and are updated with two-timescale learning rates. The critic is updated via temporal-difference (TD) learning with a larger stepsize, while the actor is updated via proximal policy optimization (PPO) with a smaller stepsize. In the continuous-time and infinite-width limiting regime, when the timescales are properly separated, we prove that neural AC finds the globally optimal policy at a sublinear rate. Moreover, we prove that the feature representation induced by the critic network is allowed to evolve within a neighborhood of its initialization.
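The sketch below conveys only the two-timescale structure described above: a critic updated by TD learning with a larger stepsize and an actor updated with a smaller stepsize. For brevity it uses linear features, a softmax policy, and a plain policy-gradient step in place of the overparameterized networks and the full PPO update analysed in the paper; all of these simplifications are assumptions made for illustration.

```python
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def two_timescale_ac_step(w, theta, phi_s, a, r, phi_s_next,
                          alpha_critic=0.1, alpha_actor=0.01, gamma=0.99):
    """One actor-critic update with separated stepsizes (alpha_critic >> alpha_actor)."""
    # Critic: TD(0) on a linear value function V(s) = w . phi(s), larger stepsize.
    td_error = r + gamma * w @ phi_s_next - w @ phi_s
    w = w + alpha_critic * td_error * phi_s

    # Actor: softmax policy with preferences theta[a] . phi(s), smaller stepsize.
    pi = softmax(theta @ phi_s)
    grad_log_pi = -np.outer(pi, phi_s)       # d/d theta of log pi(a | s)
    grad_log_pi[a] += phi_s
    theta = theta + alpha_actor * td_error * grad_log_pi
    return w, theta
```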
Much of the recent success of deep reinforcement learning has been driven by regularized policy optimization (RPO) algorithms, which deliver strong performance across multiple domains. In this family of methods, agents are trained to maximize cumulative reward while penalizing deviation from some reference, or default, policy. Beyond the empirical success, there is also a strong theoretical foundation for understanding RPO methods applied to a single task, with connections to natural gradients, trust regions, and variational approaches. However, there is limited formal understanding of the desirable properties of default policies in the multitask setting, an increasingly important domain as the field shifts towards training more capable agents. Here we take a first step towards filling this gap by formally linking the quality of the default policy to its effect on optimization. Using these results, we derive a principled RPO algorithm for multitask learning with strong performance guarantees.
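To make the regularized objective concrete, the snippet below sketches the per-state loss of a generic RPO update for a discrete-action policy: an advantage-weighted policy-gradient term plus a KL penalty towards the default policy. The coefficient name alpha and the use of a plain advantage estimate are illustrative assumptions, not the specific algorithm derived in the paper.

```python
import numpy as np

def rpo_loss(pi, pi_default, action, advantage, alpha=0.1):
    """Regularized policy optimization loss for one state (discrete actions).

    pi, pi_default : probability vectors of the current and default policies.
    action         : sampled action index; advantage : its advantage estimate.
    """
    pg_term = -advantage * np.log(pi[action] + 1e-8)            # maximize reward
    kl_term = np.sum(pi * (np.log(pi + 1e-8) - np.log(pi_default + 1e-8)))
    return pg_term + alpha * kl_term                             # penalize deviation from default
```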
Policy gradient (PG) algorithms are among the best candidates for the much-anticipated applications of reinforcement learning to real-world control tasks, such as robotics. However, the trial-and-error nature of these methods raises safety issues whenever the learning process itself must be performed on a physical system or involves any form of human-computer interaction. In this paper, we address a specific safety formulation in which both goals and dangers are encoded in a scalar reward signal, and the learning agent is constrained to never worsen its performance, measured as the expected sum of rewards. By studying actor-only policy gradient from a stochastic optimization perspective, we establish improvement guarantees for a wide class of parametric policies, generalizing existing results on Gaussian policies. This, together with novel upper bounds on the variance of policy gradient estimators, allows us to identify meta-parameter schedules that guarantee monotonic improvement with high probability. The two key meta-parameters are the stepsize of the parameter update and the batch size of the gradient estimation. Through a joint, adaptive selection of these meta-parameters, we obtain a policy gradient algorithm with monotonic improvement guarantees.
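The following sketch conveys only the structure of jointly choosing the two meta-parameters named above: the batch size used for the gradient estimate and the stepsize of the parameter update. The specific rules used here (growing the batch until the estimated gradient norm dominates its standard error, and a stepsize inversely proportional to an assumed smoothness constant) are illustrative placeholders, not the schedules derived in the paper.

```python
import numpy as np

def safe_pg_update(theta, sample_gradient, init_batch=100, max_batch=100_000,
                   smoothness=10.0, confidence=2.0):
    """One policy-gradient step with jointly chosen batch size and stepsize.

    sample_gradient(theta, n) is assumed to return n single-trajectory
    gradient estimates as an (n, dim) array.
    """
    n = init_batch
    while True:
        grads = sample_gradient(theta, n)
        g_hat = grads.mean(axis=0)
        std_err = grads.std(axis=0, ddof=1).max() / np.sqrt(n)
        # Grow the batch until the estimate is "significant" w.h.p. (illustrative test).
        if np.linalg.norm(g_hat) > confidence * std_err or n >= max_batch:
            break
        n *= 2
    stepsize = 1.0 / (2.0 * smoothness)      # conservative step (placeholder constant)
    return theta + stepsize * g_hat, n
```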
In many sequential decision-making problems one is interested in minimizing an expected cumulative cost while taking into account risk, i.e., increased awareness of events of small probability and high consequences. Accordingly, the objective of this paper is to present efficient reinforcement learning algorithms for risk-constrained Markov decision processes (MDPs), where risk is represented via a chance constraint or a constraint on the conditional value-at-risk (CVaR) of the cumulative cost. We collectively refer to such problems as percentile risk-constrained MDPs. Specifically, we first derive a formula for computing the gradient of the Lagrangian function for percentile risk-constrained MDPs. Then, we devise policy gradient and actor-critic algorithms that (1) estimate such gradient, (2) update the policy in the descent direction, and (3) update the Lagrange multiplier in the ascent direction. For these algorithms we prove convergence to locally optimal policies. Finally, we demonstrate the effectiveness of our algorithms in an optimal stopping problem and an online marketing application.
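A minimal sketch of the primal-dual structure described above: the policy parameters are updated in the descent direction of the Lagrangian while the multiplier is updated in the ascent direction, with the CVaR of the cumulative cost estimated from sampled trajectories. The function names, the per-trajectory gradient interface, and the stepsizes are assumptions made for illustration, not the exact estimators derived in the paper.

```python
import numpy as np

def empirical_cvar(costs, alpha=0.95):
    """Mean of the worst (1 - alpha) fraction of the sampled cumulative costs."""
    costs = np.sort(np.asarray(costs, dtype=float))
    tail_start = int(np.floor(alpha * len(costs)))
    return costs[tail_start:].mean()

def primal_dual_step(theta, lam, policy_grads, costs, cvar_budget,
                     lr_theta=1e-2, lr_lambda=1e-1, alpha=0.95):
    """policy_grads: per-trajectory gradient estimates of the Lagrangian w.r.t. theta;
    costs: per-trajectory cumulative costs; lam: Lagrange multiplier."""
    # Descent step on the policy parameters.
    theta = theta - lr_theta * np.mean(policy_grads, axis=0)
    # Ascent step on the Lagrange multiplier, projected onto [0, +inf).
    lam = max(0.0, lam + lr_lambda * (empirical_cvar(costs, alpha) - cvar_budget))
    return theta, lam
```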
To learn about complex stochastic systems whose output trajectories are dynamically affected by many factors, it is desirable to effectively leverage the information in historical samples collected in previous iterations to accelerate policy optimization. Classical experience replay allows agents to remember by reusing historical observations. However, a uniform reuse strategy that treats all observations equally overlooks the relative importance of different samples. To overcome this limitation, we propose a general variance reduction based experience replay (VRER) framework that can selectively reuse the most relevant samples to improve policy gradient estimation. This selective mechanism adaptively puts more weight on past samples that are more likely to have been generated by the current target distribution. Our theoretical and empirical studies show that the proposed VRER can accelerate the learning of the optimal policy and enhance the performance of state-of-the-art policy optimization approaches.
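The sketch below illustrates the selective-reuse idea in its simplest form: historical samples contribute to the policy-gradient estimate only if the likelihood ratio between the current target policy and the behaviour policy that generated them falls inside a trust band, and retained samples are reweighted by that ratio. The band width and the per-sample interface are assumptions for illustration; the actual VRER selection rule in the paper is variance-based.

```python
import numpy as np

def select_and_reweight(replay, current_logp, ratio_band=(0.5, 2.0)):
    """replay: list of dicts with keys 'traj', 'behavior_logp', 'grad_term'.

    current_logp(traj) returns the log-likelihood of the trajectory under the
    current policy. Returns the reweighted gradient estimate from the kept samples.
    """
    kept = []
    for sample in replay:
        ratio = np.exp(current_logp(sample['traj']) - sample['behavior_logp'])
        if ratio_band[0] <= ratio <= ratio_band[1]:
            kept.append(ratio * sample['grad_term'])   # importance-weighted reuse
    if not kept:
        return None
    return np.mean(kept, axis=0)
```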
Effectively leveraging large, previously collected datasets in reinforcement learning (RL) is a key challenge for large-scale real-world applications. Offline RL algorithms promise to learn effective policies from previously-collected, static datasets without further interaction. However, in practice, offline RL presents a major challenge, and standard off-policy RL methods can fail due to overestimation of values induced by the distributional shift between the dataset and the learned policy, especially when training on complex and multi-modal data distributions. In this paper, we propose conservative Q-learning (CQL), which aims to address these limitations by learning a conservative Q-function such that the expected value of a policy under this Q-function lower-bounds its true value. We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees. In practice, CQL augments the standard Bellman error objective with a simple Q-value regularizer which is straightforward to implement on top of existing deep Q-learning and actor-critic implementations. On both discrete and continuous control domains, we show that CQL substantially outperforms existing offline RL methods, often learning policies that attain 2-5 times higher final return, especially when learning from complex and multi-modal data distributions.
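For discrete actions the conservative regularizer has a particularly simple form, sketched below on top of a standard TD error: it pushes down a log-sum-exp of the Q-values while pushing up the Q-value of the action actually observed in the dataset. This is a minimal per-transition sketch with an assumed trade-off weight cql_alpha, not the full CQL training loop.

```python
import numpy as np

def logsumexp(x):
    m = x.max()
    return m + np.log(np.sum(np.exp(x - m)))

def cql_loss(q_s, q_s_next, a, r, gamma=0.99, cql_alpha=1.0):
    """q_s, q_s_next: Q-value vectors over discrete actions at s and s';
    a, r: action and reward taken from the offline dataset."""
    td_target = r + gamma * np.max(q_s_next)
    bellman_error = (q_s[a] - td_target) ** 2
    conservative = logsumexp(q_s) - q_s[a]    # pushes learned values below the data
    return bellman_error + cql_alpha * conservative
```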
Reinforcement learning (RL) gained considerable attention by creating decision-making agents that maximize rewards received from fully observable environments. However, many real-world problems are partially or noisily observable by nature, where agents do not receive the true and complete state of the environment. Such problems are formulated as partially observable Markov decision processes (POMDPs). Some studies applied RL to POMDPs by recalling previous decisions and observations or inferring the true state of the environment from received observations. Nevertheless, aggregating observations and decisions over time is impractical for environments with high-dimensional continuous state and action spaces. Moreover, so-called inference-based RL approaches require a large number of samples to perform well, since agents ignore the uncertainty in the inferred state when making decisions. Active inference is a framework that is naturally formulated in POMDPs and directs agents to select decisions by minimising expected free energy (EFE). This supplements the reward-maximising (exploitative) behaviour of RL with an information-seeking (exploratory) behaviour. Despite this exploratory behaviour of active inference, its usage is limited to discrete state and action spaces due to the computational difficulty of the EFE. We propose a unified principle for joint information-seeking and reward maximization that clarifies a theoretical connection between active inference and RL, unifies active inference and RL, and overcomes their aforementioned limitations. Our findings are supported by strong theoretical analysis. The proposed framework's superior exploration property is also validated by experimental results on partially observable tasks with high-dimensional continuous state and action spaces. Moreover, the results show that our model solves reward-free problems, making task reward design optional.
We study the problem of average-reward Markov decision processes (AMDPs) and develop novel first-order methods with strong theoretical guarantees for both policy evaluation and optimization. Existing on-policy evaluation methods suffer from sub-optimal convergence rates as well as failure in handling insufficiently random policies, e.g., deterministic policies, for lack of exploration. To remedy these issues, we develop a novel variance-reduced temporal difference (VRTD) method with linear function approximation for randomized policies, with optimal convergence guarantees, and an exploratory variance-reduced temporal difference (EVRTD) method for insufficiently random policies with comparable convergence guarantees. We further establish linear convergence rates on the bias of policy evaluation, which is essential for improving the overall sample complexity of policy optimization. On the other hand, compared with the finite-sample analysis of policy gradient methods for discounted MDPs, existing studies on policy gradient methods for AMDPs mostly focus on restrictive assumptions on the underlying Markov processes (see, e.g., Abbasi-Yadkori et al., 2019) and often lack guarantees on the overall sample complexity. Towards this end, we develop an average-reward variant of the stochastic policy mirror descent (SPMD) method (Lan, 2022). We establish the first $\widetilde{\mathcal{O}}(\epsilon^{-2})$ sample complexity for solving AMDPs with policy gradient methods under both the generative model (with a unichain assumption) and the Markovian noise model (with an ergodicity assumption). This bound can be further improved to $\widetilde{\mathcal{O}}(\epsilon^{-1})$ for solving regularized AMDPs. Our theoretical advantages are corroborated by numerical experiments.
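As background for the evaluation methods above, the snippet below sketches plain differential (average-reward) TD learning with linear function approximation: the TD error is formed with an estimate of the average reward in place of discounting, and both the weights and the average-reward estimate are updated online. This is only the standard backbone; the variance-reduction and exploration mechanisms of VRTD/EVRTD are not shown.

```python
import numpy as np

def differential_td_step(w, avg_reward, phi_s, r, phi_s_next,
                         alpha_w=0.05, alpha_rho=0.01):
    """One step of average-reward TD(0) with linear features phi."""
    td_error = r - avg_reward + w @ phi_s_next - w @ phi_s
    w = w + alpha_w * td_error * phi_s              # value-function weights
    avg_reward = avg_reward + alpha_rho * td_error  # running average-reward estimate
    return w, avg_reward
```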
Reinforcement learning algorithms often require finiteness of the state and action spaces in Markov decision processes (MDPs), and various efforts have been made in the literature towards the applicability of such algorithms to continuous state and action spaces. In this paper, we show that under very mild regularity conditions (in particular, involving only weak continuity of the transition kernel of the MDP), Q-learning for standard Borel MDPs via quantization of states and actions converges to a limit, and furthermore this limit satisfies an optimality equation which leads to near optimality, with either explicit performance bounds or guaranteed asymptotic optimality. Our approach builds on (i) viewing quantization as a measurement kernel and thus viewing the quantized MDP as a POMDP, (ii) utilizing near-optimality and convergence results of Q-learning for POMDPs, and (iii) finally, near-optimality of finite state model approximations for MDPs with weakly continuous kernels, which we show corresponds to the fixed point of the constructed POMDP. Thus, our paper presents a very general convergence and approximation result on the applicability of Q-learning to continuous MDPs.
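A minimal sketch of the construction analysed above: continuous states and actions are mapped to the nearest cells of finite grids, and ordinary tabular Q-learning is run on the quantized model. The grid resolutions and box bounds below are illustrative assumptions.

```python
import numpy as np

def make_quantizer(low, high, bins):
    """Return a map from a continuous scalar to a bin index in [0, bins)."""
    edges = np.linspace(low, high, bins + 1)[1:-1]
    return lambda x: int(np.digitize(x, edges))

def quantized_q_learning_step(Q, qs, qa, r, qs_next, alpha=0.1, gamma=0.95):
    """Tabular Q-learning on quantized state index qs and action index qa."""
    target = r + gamma * np.max(Q[qs_next])
    Q[qs, qa] += alpha * (target - Q[qs, qa])
    return Q

# Example wiring (hypothetical bounds): a 1-D state in [-1, 1], action in [-2, 2].
state_q = make_quantizer(-1.0, 1.0, bins=20)
action_q = make_quantizer(-2.0, 2.0, bins=5)
Q = np.zeros((20, 5))
```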
Policy gradient methods apply to complex, poorly understood control problems by performing stochastic gradient descent over a parameterized class of policies. Unfortunately, even for simple control problems solvable by standard dynamic programming techniques, policy gradient algorithms face non-convex optimization problems and are widely understood to converge only to a stationary point. This work identifies structural properties, shared by several classic control problems, which ensure that the policy gradient objective function has no suboptimal stationary points despite being non-convex. When these conditions are strengthened, the objective satisfies a Polyak-Łojasiewicz (gradient dominance) condition that yields convergence rates. We also provide bounds on the optimality gap of any stationary point when some of these conditions are relaxed.
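For reference, the gradient-dominance (Polyak-Łojasiewicz) condition mentioned above can be stated for a maximization objective $J$ as follows, with $\mu > 0$ a problem-dependent constant:

```latex
% Gradient dominance / Polyak-Lojasiewicz condition for maximizing J(theta):
\|\nabla J(\theta)\|^2 \;\ge\; 2\mu \bigl( J(\theta^\star) - J(\theta) \bigr)
\qquad \text{for all } \theta .
```

Under this inequality every stationary point is globally optimal, and gradient ascent with exact gradients enjoys a linear convergence rate.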
The policy gradient theorem (Sutton et al., 2000) prescribes the usage of a cumulative discounted state distribution under the target policy to approximate the gradient. In practice, most algorithms based on this theorem break this assumption, introducing a distribution shift that can cause convergence to poor solutions. In this paper, we propose a new approach of reconstructing the policy gradient from the start state without requiring a particular sampling strategy. The policy gradient calculation in this form can be simplified in terms of a gradient critic, which can be recursively estimated thanks to a new Bellman equation for gradients. By using temporal-difference updates of the gradient critic from an off-policy data stream, we develop the first estimator that side-steps the distribution shift issue in a model-free way. We prove that, under certain realizability conditions, our estimator is unbiased regardless of the sampling strategy. We empirically show that our technique achieves a superior bias-variance trade-off and performance in the presence of off-policy samples.
The rapid changes in the finance industry due to the increasing amount of data have revolutionized the techniques of data processing and data analysis and brought new theoretical and computational challenges. In contrast to classical stochastic control theory and other analytical approaches for solving financial decision-making problems, which heavily rely on model assumptions, new developments in reinforcement learning (RL) are able to make full use of the large amount of financial data with fewer model assumptions and to improve decisions in complex financial environments. This survey paper aims to review the recent developments and use of RL approaches in finance. We give an introduction to Markov decision processes, which is the setting for many of the commonly used RL approaches. Various algorithms are then introduced, with a focus on value- and policy-based methods that do not require any model assumptions. Connections are made with neural networks to extend the framework to encompass deep RL algorithms. Our survey concludes by discussing the applications of these RL algorithms in a variety of decision-making problems in finance, including optimal execution, portfolio optimization, option pricing and hedging, market making, smart order routing, and robo-advising.
In this work we introduce reinforcement learning techniques for solving lexicographic multi-objective problems. These are problems that involve multiple reward signals, and where the goal is to learn a policy that maximises the first reward signal, and subject to this constraint also maximises the second reward signal, and so on. We present a family of both action-value and policy gradient algorithms that can be used to solve such problems, and prove that they converge to policies that are lexicographically optimal. We evaluate the scalability and performance of these algorithms empirically, demonstrating their practical applicability. As a more specific application, we show how our algorithms can be used to impose safety constraints on the behaviour of an agent, and compare their performance in this context with that of other constrained reinforcement learning algorithms.
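As a small illustration of the lexicographic criterion used by the action-value variants above, the function below selects an action by filtering the action set through each objective's Q-values in priority order, keeping only actions within a slack of the best value before moving to the next objective. The slack parameter is an assumption of this sketch.

```python
import numpy as np

def lexicographic_greedy(q_tables, state, slack=1e-3):
    """q_tables: list of Q arrays (one per objective, shape [n_states, n_actions]),
    ordered by priority. Returns an action that is (approximately)
    lexicographically optimal at `state`."""
    candidates = np.arange(q_tables[0].shape[1])
    for Q in q_tables:
        vals = Q[state, candidates]
        best = vals.max()
        candidates = candidates[vals >= best - slack]   # keep near-optimal actions
        if len(candidates) == 1:
            break
    return int(candidates[0])
```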
The goal of robust reinforcement learning (RL) is to learn a policy that is robust against uncertainty in model parameters. Parameter uncertainty commonly occurs in many real-world RL applications due to simulator modeling errors, changes in the real-world system dynamics over time, and adversarial disturbances. Robust RL is typically formulated as a max-min problem, where the objective is to learn the policy that maximizes the value against the worst possible model that lies in an uncertainty set. In this work, we propose a robust RL algorithm called Robust Fitted Q-Iteration (RFQI), which uses only an offline dataset to learn the optimal robust policy. Robust RL with offline data is significantly more challenging than its non-robust counterpart because of the minimization over all models present in the robust Bellman operator. This poses challenges in offline data collection, optimization over the models, and unbiased estimation. In this work, we propose a systematic approach to overcome these challenges, resulting in our RFQI algorithm. We prove that RFQI learns a near-optimal robust policy under standard assumptions and demonstrate its superior performance on standard benchmark problems.
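The sketch below shows the (non-robust) fitted Q-iteration backbone on which RFQI builds, using an offline dataset and a generic regressor: each iteration regresses onto bootstrapped targets. The robust version replaces the plain max-backup with the robust Bellman target, an inner minimization over models in the uncertainty set; that dual optimization is only marked by a comment here, since it is the technically involved part of the paper.

```python
import numpy as np

def fitted_q_iteration(dataset, fit_regressor, num_actions,
                       num_iters=50, gamma=0.99):
    """dataset: list of (s, a, r, s') transitions; fit_regressor(X, y) returns a
    function q(s, a). Plain (non-robust) fitted Q-iteration for discrete actions."""
    q = lambda s, a: 0.0
    for _ in range(num_iters):
        X, y = [], []
        for (s, a, r, s_next) in dataset:
            # Robust FQI would replace this max-backup with the robust Bellman
            # target: an inner minimization over transition models in the
            # uncertainty set (handled via its dual formulation in the paper).
            target = r + gamma * max(q(s_next, b) for b in range(num_actions))
            X.append((s, a))
            y.append(target)
        q = fit_regressor(X, y)
    return q
```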
In this paper we consider deterministic policy gradient algorithms for reinforcement learning with continuous actions. The deterministic policy gradient has a particularly appealing form: it is the expected gradient of the action-value function. This simple form means that the deterministic policy gradient can be estimated much more efficiently than the usual stochastic policy gradient. To ensure adequate exploration, we introduce an off-policy actor-critic algorithm that learns a deterministic target policy from an exploratory behaviour policy. We demonstrate that deterministic policy gradient algorithms can significantly outperform their stochastic counterparts in high-dimensional action spaces.
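The deterministic policy gradient itself is easy to state in code: the actor parameters move along the expected product of the policy Jacobian and the action-gradient of the critic. The sketch below uses a linear deterministic policy and a numerically differentiated critic purely for illustration; the actual algorithms in the paper use learned function approximators throughout.

```python
import numpy as np

def action_gradient(q, s, a, eps=1e-4):
    """Finite-difference estimate of dQ(s, a)/da for a given critic q(s, a)."""
    grad = np.zeros_like(a)
    for i in range(len(a)):
        da = np.zeros_like(a); da[i] = eps
        grad[i] = (q(s, a + da) - q(s, a - da)) / (2 * eps)
    return grad

def dpg_actor_step(theta, states, q, lr=1e-3):
    """Deterministic policy mu(s) = theta @ s; theta has shape (act_dim, obs_dim).
    Implements theta += lr * E_s[ grad_theta mu(s) . grad_a Q(s, a)|a=mu(s) ]."""
    update = np.zeros_like(theta)
    for s in states:
        a = theta @ s
        g_a = action_gradient(q, s, a)          # shape (act_dim,)
        update += np.outer(g_a, s)              # chain rule through mu(s) = theta @ s
    return theta + lr * update / len(states)
```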
In this paper we develop a theoretical analysis of the performance of sampling-based fitted value iteration (FVI) to solve infinite state-space, discounted-reward Markovian decision processes (MDPs) under the assumption that a generative model of the environment is available. Our main results come in the form of finite-time bounds on the performance of two versions of sampling-based FVI. The convergence rate results obtained allow us to show that both versions of FVI are well-behaved in the sense that by using a sufficiently large number of samples for a large class of MDPs, arbitrarily good performance can be achieved with high probability. An important feature of our proof technique is that it permits the study of weighted $L^p$-norm performance bounds. As a result, our technique applies to a large class of function-approximation methods (e.g., neural networks, adaptive regression trees, kernel machines, locally weighted learning), and our bounds scale well with the effective horizon of the MDP. The bounds show a dependence on the stochastic stability properties of the MDP: they scale with the discounted-average concentrability of the future-state distributions. They also depend on a new measure of the approximation power of the function space, the inherent Bellman residual, which reflects how well the function space is "aligned" with the dynamics and rewards of the MDP. The conditions of the main result, as well as the concepts introduced in the analysis, are extensively discussed and compared to previous theoretical results. Numerical experiments are used to substantiate the theoretical findings.
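A bare-bones version of sampling-based fitted value iteration under a generative model is sketched below: at each iteration, Monte Carlo backups are computed at a set of sampled states and a regressor from an arbitrary function class is fit to them. The simulator and regressor interfaces are assumptions of the sketch.

```python
import numpy as np

def sampled_fvi(sample_states, simulate, fit_regressor, actions,
                num_iters=30, m=20, gamma=0.95):
    """simulate(s, a) draws (reward, next_state) from the generative model;
    fit_regressor(states, targets) returns a value-function approximation v(s)."""
    v = lambda s: 0.0
    for _ in range(num_iters):
        targets = []
        for s in sample_states:
            backups = []
            for a in actions:
                draws = [simulate(s, a) for _ in range(m)]   # m samples per (s, a) pair
                backups.append(np.mean([r + gamma * v(s2) for r, s2 in draws]))
            targets.append(max(backups))                      # empirical Bellman optimality backup
        v = fit_regressor(sample_states, targets)
    return v
```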
In this work, we study policy-based methods for solving the reinforcement learning problem, where off-policy sampling and linear function approximation are employed for policy evaluation, and various policy update rules, including natural policy gradient (NPG), are considered for policy update. To solve the policy evaluation sub-problem in the presence of the deadly triad, we propose a generic algorithmic framework of multi-step TD-learning with generalized importance sampling ratios, which includes two specific algorithms: the $\lambda$-averaged $Q$-trace and the two-sided $Q$-trace. The generic algorithm is single time-scale, has provable finite-sample guarantees, and overcomes the high variance issue in off-policy learning. As for the policy update, we provide a generic analysis using only the contraction and monotonicity properties of the Bellman operator to establish geometric convergence under various policy update rules. Importantly, by viewing NPG as an approximate way of implementing policy iteration, we establish the geometric convergence of NPG without introducing regularization and without using mirror-descent-type analysis as in the existing literature. Combining the geometric convergence of the policy update with the finite-sample analysis of the policy evaluation, we establish, for the first time, an overall $\mathcal{O}(\epsilon^{-2})$ sample complexity for finding an optimal policy (up to a function approximation error) using policy-based methods under off-policy sampling and linear function approximation.
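To convey the flavour of the generalized importance-sampling ratios used in the evaluation algorithms above, the snippet below performs a one-step, tabular, expected-SARSA-style off-policy update in which the ratio between target and behaviour policies is passed through a clipping function before it scales the TD error. This simplification with a symmetric clip is only an illustrative stand-in for the multi-step $\lambda$-averaged and two-sided $Q$-trace algorithms, not their definition.

```python
import numpy as np

def clipped_ratio(pi_target, pi_behavior, low=0.1, high=2.0):
    """Generalized importance-sampling ratio: clip pi/mu into [low, high]."""
    return float(np.clip(pi_target / pi_behavior, low, high))

def offpolicy_td_step(Q, s, a, r, s_next, pi, mu, alpha=0.1, gamma=0.99):
    """Q: array [n_states, n_actions]; pi, mu: target/behaviour action probabilities."""
    c = clipped_ratio(pi[s, a], mu[s, a])
    backup = r + gamma * np.dot(pi[s_next], Q[s_next])   # expectation under target policy
    Q[s, a] += alpha * c * (backup - Q[s, a])
    return Q
```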
Learning optimal behavior from existing data is one of the most important problems in reinforcement learning (RL). This is known as "off-policy control" in RL, where an agent's objective is to compute an optimal policy based on the data obtained from a given policy (known as the behavior policy). As the optimal policy can be very different from the behavior policy, learning optimal behavior is very hard in the "off-policy" setting compared to the "on-policy" setting, where new data from the policy updates will be utilized in learning. This work proposes an off-policy natural actor-critic algorithm that utilizes state-action distribution correction to handle the off-policy behavior and the natural policy gradient for sample efficiency. Existing natural-gradient-based actor-critic algorithms with convergence guarantees require fixed features for approximating both the policy and the value function. This often leads to sub-optimal learning in many RL applications. Our proposed algorithm, on the other hand, utilizes compatible features that enable one to use arbitrary neural networks to approximate the policy and the value function and to guarantee convergence to a locally optimal policy. We illustrate the benefit of the proposed off-policy natural gradient algorithm by comparing it with the vanilla gradient actor-critic algorithm on benchmark RL tasks.