In this paper we consider deterministic policy gradient algorithms for reinforcement learning with continuous actions. The deterministic policy gradient has a particularly appealing form: it is the expected gradient of the action-value function. This simple form means that the deterministic policy gradient can be estimated much more efficiently than the usual stochastic policy gradient. To ensure adequate exploration, we introduce an off-policy actor-critic algorithm that learns a deterministic target policy from an exploratory behaviour policy. We demonstrate that deterministic policy gradient algorithms can significantly outperform their stochastic counterparts in high-dimensional action spaces.
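For reference, the deterministic policy gradient referred to above can be written in the following standard form (a sketch of the well-known result; here μ_θ denotes the deterministic policy, Q^μ its action-value function, and ρ^μ the discounted state distribution):

$$\nabla_\theta J(\mu_\theta) \;=\; \mathbb{E}_{s \sim \rho^{\mu}}\!\left[\,\nabla_\theta \mu_\theta(s)\,\nabla_a Q^{\mu}(s,a)\big|_{a=\mu_\theta(s)}\right]$$

Because the expectation is taken over states only, rather than over states and actions, this form is what underlies the claimed efficiency gain over the stochastic policy gradient.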
Despite the increasing popularity of policy gradient methods, they have not yet been widely used in sample-scarce applications such as robotics. Sample efficiency could be improved by making full use of the available information. As a key component in reinforcement learning, the reward function is usually carefully designed to guide the agent. Hence, the reward function is typically known, allowing access not only to the scalar reward signal but also to the reward gradient. To benefit from reward gradients, previous works require knowledge of the environment dynamics, which is hard to obtain. In this work, we develop the Reward Policy Gradient estimator, a novel approach that integrates reward gradients without learning a model. Bypassing the model dynamics allows our estimator to achieve a better bias-variance trade-off, which leads to higher sample efficiency, as shown in the empirical analysis. Our method also boosts the performance of Proximal Policy Optimization on different MuJoCo control tasks.
The on-policy setting enjoys a variety of theoretically sound policy gradient algorithms thanks to the policy gradient theorem, which provides a simplified form for the gradient. The off-policy setting, however, is less clear because of the existence of multiple objectives and the lack of an explicit off-policy policy gradient theorem. In this work, we unify these objectives into one off-policy objective and provide a policy gradient theorem for this unified objective. The derivation involves emphatic weightings and interest functions. We present multiple strategies to approximate the gradients, in an algorithm called Actor-Critic with Emphatic weightings (ACE). We prove that previous (semi-gradient) off-policy actor-critic methods, in particular OffPAC and DPG, converge to the wrong solution, whereas ACE finds the optimal solution. We also highlight why these semi-gradient approaches can still perform well in practice, and suggest variance-reduction strategies in ACE. We empirically study several variants of ACE on two classic control environments and an image-based environment, designed to illustrate the trade-offs of each gradient approximation. We find that by directly approximating the emphatic weightings, ACE performs as well as or better than OffPAC in all settings tested.
Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy; that is, to succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.
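The maximum entropy objective underlying soft actor-critic augments the return with the policy's entropy at each visited state (a sketch of the standard formulation; α is the temperature weighting the entropy term against the reward, and ρ_π the state-action distribution induced by the policy):

$$J(\pi) \;=\; \sum_{t} \mathbb{E}_{(s_t,a_t)\sim\rho_\pi}\!\left[\, r(s_t,a_t) + \alpha\,\mathcal{H}\!\left(\pi(\cdot\,|\,s_t)\right)\right]$$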
Learning optimal behavior from existing data is one of the most important problems in reinforcement learning (RL). This is known as "off-policy control" in RL, where an agent's objective is to compute an optimal policy based on the data obtained from a given policy (known as the behavior policy). As the optimal policy can be very different from the behavior policy, learning optimal behavior is very difficult compared to the "on-policy" setting, where new data from the policy updates will be utilized in learning. This work proposes an off-policy natural actor-critic algorithm that utilizes state-action distribution corrections to handle the off-policy behavior and the natural policy gradient for sample efficiency. Existing natural-gradient-based actor-critic algorithms with convergence guarantees require fixed features for approximating both the policy and the value function. This often leads to sub-optimal learning in many RL applications. On the other hand, our proposed algorithm utilizes compatible features that enable one to use arbitrary neural networks to approximate the policy and the value function and guarantee convergence to a locally optimal policy. We illustrate the benefit of the proposed off-policy natural gradient algorithm by comparing it with the vanilla gradient actor-critic algorithm on benchmark RL tasks.
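As background for the natural-gradient component described above, the natural policy gradient preconditions the vanilla gradient with the inverse Fisher information matrix of the policy (a standard textbook formulation, not the paper's specific off-policy estimator):

$$\tilde{\nabla}_\theta J = F(\theta)^{-1}\nabla_\theta J, \qquad F(\theta) = \mathbb{E}\!\left[\nabla_\theta \log \pi_\theta(a\,|\,s)\,\nabla_\theta \log \pi_\theta(a\,|\,s)^{\top}\right]$$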
The policy gradient theorem (Sutton et al., 2000) prescribes the usage of the cumulative discounted state distribution under the target policy to approximate the gradient. In practice, most algorithms based on this theorem break this assumption, introducing a distribution shift that can cause convergence to poor solutions. In this paper, we propose a new approach of reconstructing the policy gradient from the start state without requiring a particular sampling strategy. The policy gradient calculation in this form can be simplified in terms of a gradient critic, which can be recursively estimated due to a new Bellman equation of gradients. By using temporal-difference updates of the gradient critic from an off-policy data stream, we develop the first estimator that side-steps the distribution shift problem in a model-free way. We prove that, under certain realizability conditions, our estimator is unbiased regardless of the sampling strategy. We empirically show that our technique achieves a superior bias-variance trade-off and performance in the presence of off-policy samples.
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(λ). We address the second challenge by using a trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
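The exponentially-weighted advantage estimator mentioned above (generalized advantage estimation) sums discounted TD residuals, δ_t = r_t + γV(s_{t+1}) − V(s_t), weighted by (γλ)^l. The helper below is a minimal illustrative sketch, not the authors' implementation:

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Exponentially-weighted (TD(lambda)-style) advantage estimates.

    rewards: array of length T
    values:  array of length T+1 (bootstrap value appended at the end)
    """
    rewards = np.asarray(rewards, dtype=np.float64)
    values = np.asarray(values, dtype=np.float64)
    T = len(rewards)
    advantages = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD residual
        gae = delta + gamma * lam * gae                          # discounted sum of residuals
        advantages[t] = gae
    return advantages
```

Setting λ = 0 recovers the one-step TD advantage (low variance, more bias), while λ = 1 recovers the Monte Carlo advantage (high variance, no bias).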
In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested.
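A minimal sketch of the clipped double-Q target described above (illustrative only; `q1_target`, `q2_target`, and `policy_target` are assumed helper callables, not the authors' code):

```python
import numpy as np

def td3_target(reward, next_state, done, q1_target, q2_target, policy_target,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, max_action=1.0):
    """TD3 critic target: minimum of two target critics, with clipped Gaussian
    noise added to the target action (target policy smoothing)."""
    next_action = policy_target(next_state)
    noise = np.clip(np.random.normal(0.0, noise_std, size=next_action.shape),
                    -noise_clip, noise_clip)
    smoothed_action = np.clip(next_action + noise, -max_action, max_action)
    min_q = np.minimum(q1_target(next_state, smoothed_action),
                       q2_target(next_state, smoothed_action))
    return reward + gamma * (1.0 - done) * min_q
```

Delayed policy updates then correspond to updating the actor and the target networks only once per several critic updates.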
Recent evolution-based zeroth-order optimization methods and policy-gradient-based first-order methods are two promising alternatives for solving reinforcement learning (RL) problems. The former methods work with arbitrary policies, rely on state-dependent and temporally-extended exploration, and possess robustness properties, but suffer from high sample complexity, while the latter methods are more sample efficient but are restricted to differentiable policies and the learned policies are less robust. To address these issues, we propose a novel Zeroth-Order Actor-Critic algorithm (ZOAC), which unifies these two methods into an on-policy actor-critic architecture to preserve the advantages of both. ZOAC conducts rollouts with perturbed parameters in parameter space, first-order policy evaluation (PEV), and zeroth-order policy improvement (PIM) alternately in each iteration. We evaluate our method extensively on a wide range of challenging continuous control benchmarks using different types of policies, where ZOAC outperforms the zeroth-order and first-order baseline algorithms.
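For context, a generic evolution-strategies-style zeroth-order gradient estimator over policy parameters takes the form below (a standard form used in parameter-space perturbation methods, not necessarily ZOAC's exact update; θ are policy parameters, ε_i are standard Gaussian perturbations, σ the perturbation scale, and R_i the return obtained with the perturbed parameters):

$$\nabla_\theta J(\theta) \;\approx\; \frac{1}{N\sigma}\sum_{i=1}^{N} R_i\, \epsilon_i, \qquad \theta_i = \theta + \sigma\,\epsilon_i$$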
Soft Actor-Critic (SAC) is one of the state-of-the-art off-policy reinforcement learning (RL) algorithms within the maximum-entropy-based RL framework. SAC has been demonstrated to perform very well on a list of continuous control tasks with good stability and robustness. SAC learns a stochastic Gaussian policy that maximizes a trade-off between the expected reward and the policy entropy. To update the policy, SAC minimizes the KL divergence between the density of the current policy and the density of the soft value function; an approximate gradient of this divergence is then used for the policy update. In this paper, we propose Soft Actor-Critic with Cross-Entropy Policy Optimization (SAC-CEPO), which uses the Cross-Entropy Method (CEM) to optimize the policy network of SAC. The initial idea is to use CEM to iteratively sample the distribution closest to the soft value function density and to use the resultant distribution as the target for updating the policy network. To reduce the computational complexity, we also introduce a decoupled policy structure that splits the Gaussian policy into one policy that learns the mean and another policy that learns the deviation, such that only the mean policy is trained by CEM. We show that this decoupled policy structure does converge to an optimum, and we also demonstrate by experiments that SAC-CEPO achieves performance competitive with the original SAC.
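A minimal sketch of the generic cross-entropy method loop that SAC-CEPO builds on (illustrative only; the paper applies CEM to the mean of the decoupled Gaussian policy, whereas this sketch optimizes an arbitrary black-box score function):

```python
import numpy as np

def cross_entropy_method(score_fn, dim, iterations=50, population=64, elite_frac=0.1):
    """Generic CEM: sample candidates from a Gaussian, keep the elites,
    and refit the Gaussian to the elites."""
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(population * elite_frac))
    for _ in range(iterations):
        samples = np.random.normal(mean, std, size=(population, dim))
        scores = np.array([score_fn(x) for x in samples])
        elites = samples[np.argsort(scores)[-n_elite:]]  # highest-scoring candidates
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean
```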
In many sequential decision-making problems one is interested in minimizing an expected cumulative cost while taking into account risk, i.e., increased awareness of events of small probability and high consequences. Accordingly, the objective of this paper is to present efficient reinforcement learning algorithms for risk-constrained Markov decision processes (MDPs), where risk is represented via a chance constraint or a constraint on the conditional value-at-risk (CVaR) of the cumulative cost. We collectively refer to such problems as percentile risk-constrained MDPs. Specifically, we first derive a formula for computing the gradient of the Lagrangian function for percentile risk-constrained MDPs. Then, we devise policy gradient and actor-critic algorithms that (1) estimate such gradient, (2) update the policy in the descent direction, and (3) update the Lagrange multiplier in the ascent direction. For these algorithms we prove convergence to locally optimal policies. Finally, we demonstrate the effectiveness of our algorithms in an optimal stopping problem and an online marketing application.
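For reference, CVaR admits the following variational representation (Rockafellar-Uryasev), which is what makes a Lagrangian treatment of the constraint tractable; here Z is the cumulative cost and α the confidence level (a standard identity, sketched with notation assumed here rather than taken from the paper):

$$\mathrm{CVaR}_{\alpha}(Z) \;=\; \min_{\nu \in \mathbb{R}} \left\{ \nu + \frac{1}{1-\alpha}\,\mathbb{E}\!\left[(Z-\nu)^{+}\right] \right\}$$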
The rapid changes in the finance industry due to the increasing amount of data have revolutionized the techniques for data processing and data analysis and brought new theoretical and computational challenges. In contrast to classical stochastic control theory and other analytical approaches for solving financial decision-making problems that heavily rely on model assumptions, new developments in reinforcement learning (RL) are able to make full use of the large amount of financial data with fewer model assumptions and to improve decision making in complex financial environments. This survey paper aims to review the recent developments and use of RL approaches in finance. We give an introduction to Markov decision processes, which is the setting for many of the commonly used RL approaches. Various algorithms are then introduced, with a focus on value-based and policy-based methods that do not require any model assumptions. Connections are made with neural networks to extend the framework to encompass deep RL algorithms. Our survey concludes by discussing the application of these RL algorithms in a variety of decision-making problems in finance, including optimal execution, portfolio optimization, option pricing and hedging, market making, smart order routing, and robo-advising.
Building on our previous study of green simulation assisted policy gradient (GS-PG) focusing on trajectory-based reuse, in this paper we consider infinite-horizon Markov decision processes and create a new importance sampling based policy gradient optimization approach to support dynamic decision making. The existing GS-PG method was designed to learn from complete episodes or process trajectories, which limits its applicability to low-data regimes and flexible online process control. To overcome this limitation, the proposed approach can selectively reuse the most related partial trajectories, i.e., the reuse unit is based on per-step or per-decision historical observations. Specifically, we create a mixture likelihood ratio (MLR) based policy gradient optimization that can leverage the information from historical state-action transitions generated under different behavioral policies. The proposed variance reduction experience replay (VRER) approach can intelligently select and reuse the most relevant transition observations, improve the policy gradient estimation, and accelerate the learning of the optimal policy. Our empirical study demonstrates that it can improve optimization convergence and enhance the performance of state-of-the-art policy optimization approaches such as actor-critic methods and proximal policy optimization.
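As a hedged illustration of the mixture likelihood ratio idea, a transition generated under any of K historical behavioral policies can be reweighted toward the target policy with a ratio of the following generic mixture importance sampling form (an assumption for illustration; the paper's exact MLR estimator may differ):

$$w(s,a) \;=\; \frac{\pi_{\theta}(a\,|\,s)}{\tfrac{1}{K}\sum_{k=1}^{K}\pi_{b_k}(a\,|\,s)}$$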
We propose a method for learning expressive energy-based policies for continuous states and actions, which has been feasible only in tabular domains before. We apply our method to learning maximum entropy policies, resulting in a new algorithm, called soft Q-learning, that expresses the optimal policy via a Boltzmann distribution. We use the recently proposed amortized Stein variational gradient descent to learn a stochastic sampling network that approximates samples from this distribution. The benefits of the proposed algorithm include improved exploration and compositionality that allows transferring skills between tasks, which we confirm in simulated experiments with swimming and walking robots. We also draw a connection to actor-critic methods, which can be viewed as performing approximate inference on the corresponding energy-based model.
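The energy-based (Boltzmann) form of the optimal maximum-entropy policy referred to above can be written as follows (a standard sketch; α is the temperature and Q_soft the soft action-value function):

$$\pi^{*}(a\,|\,s) \;\propto\; \exp\!\left(\frac{1}{\alpha}\,Q_{\mathrm{soft}}(s,a)\right)$$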
Off-policy learning is more unstable compared to on-policy learning in reinforcement learning (RL). One reason for the instability of off-policy learning is a discrepancy between the target ($\pi$) and behavior (b) policy distributions. The discrepancy between the $\pi$ and b distributions can be alleviated by employing a smooth variant of importance sampling (IS), such as relative importance sampling (RIS). RIS has a parameter $\beta\in[0, 1]$ which controls smoothness. To cope with the instability, we present the first relative importance sampling off-policy actor-critic (RIS-Off-PAC) model-free algorithms in RL. In our method, the network yields a target policy (the actor) and a value function (the critic) assessing the current policy ($\pi$) using samples drawn from the behavior policy. We use the action value generated from the behavior policy, rather than from the target policy, in the reward function to train our algorithm. We also use deep neural networks to train both the actor and the critic. We evaluated our algorithm on a number of OpenAI Gym benchmark problems and demonstrate better or comparable performance to several state-of-the-art RL baselines.
Conventional control methods run into obstacles due to the complexity of systems and the intense demand on data density, so modern and more efficient control methods are required. Off-policy, model-free reinforcement learning algorithms help to avoid working with complex models. In terms of speed and accuracy, they have become prominent methods because the algorithms use their past experience to learn optimal policies. In this study, three reinforcement learning algorithms, DDPG, TD3, and SAC, have been used to train the Fetch robotic manipulator on four different tasks in the MuJoCo simulation environment. All of these algorithms are off-policy and able to achieve their desired target by optimizing both policy and value functions. In the current study, the efficiency and the speed of these three algorithms are analyzed in a controlled environment.
Asset allocation (or portfolio management) is the task of determining how to optimally allocate funds of a finite budget into a range of financial instruments/assets such as stocks. This study investigates the performance of reinforcement learning (RL) applied to portfolio management using model-free deep RL agents. We trained several RL agents on real-world stock prices to learn how to perform asset allocation. We compared the performance of these RL agents against some baseline agents, and we also compared the RL agents among themselves to understand which classes of agents performed better. From our analysis, RL agents can perform the task of portfolio management, since they significantly outperformed the baseline agents (random allocation and uniform allocation). Four RL agents (A2C, SAC, PPO, and TRPO) outperformed the best baseline, MPT, overall. This shows the ability of RL agents to discover more profitable trading strategies. Furthermore, there were no significant performance differences between value-based and policy-based RL agents. Actor-critic agents performed better than the other types of agents. Also, on-policy agents performed better, since they are better at policy evaluation and sample efficiency is not a significant problem in portfolio management. This study shows that RL agents can substantially improve asset allocation, since they outperform strong baselines. Based on our analysis, on-policy, actor-critic RL agents showed the most promise.
While risk-neutral reinforcement learning has shown experimental success in a number of applications, it is well-known to be non-robust with respect to noise and perturbations in the parameters of the system. For this reason, risk-sensitive reinforcement learning algorithms have been studied to introduce robustness and sample efficiency, and lead to better real-life performance. In this work, we introduce new model-free risk-sensitive reinforcement learning algorithms as variations of widely-used Policy Gradient algorithms with similar implementation properties. In particular, we study the effect of exponential criteria on the risk-sensitivity of the policy of a reinforcement learning agent, and develop variants of the Monte Carlo Policy Gradient algorithm and the online (temporal-difference) Actor-Critic algorithm. Analytical results show that the use of exponential criteria generalizes commonly used ad-hoc regularization approaches. The implementation, performance, and robustness properties of the proposed methods are evaluated in simulated experiments.
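The exponential (risk-sensitive) criterion studied above is commonly written as follows; its small-β expansion shows how it trades mean return against variance (a standard identity, with R the cumulative reward and β the risk-sensitivity parameter):

$$J_{\beta}(\theta) \;=\; \frac{1}{\beta}\,\log \mathbb{E}\!\left[e^{\beta R}\right] \;\approx\; \mathbb{E}[R] + \frac{\beta}{2}\,\mathrm{Var}[R]$$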
Reinforcement learning (RL) gained considerable attention by creating decision-making agents that maximize rewards received from fully observable environments. However, many real-world problems are partially or noisily observable by nature, where agents do not receive the true and complete state of the environment. Such problems are formulated as partially observable Markov decision processes (POMDPs). Some studies applied RL to POMDPs by recalling previous decisions and observations or inferring the true state of the environment from received observations. Nevertheless, aggregating observations and decisions over time is impractical for environments with high-dimensional continuous state and action spaces. Moreover, so-called inference-based RL approaches require a large number of samples to perform well, since agents eschew uncertainty in the inferred state during decision-making. Active inference is a framework that is naturally formulated in POMDPs and directs agents to select decisions by minimising expected free energy (EFE). This supplements the reward-maximising (exploitative) behaviour of RL with an information-seeking (exploratory) behaviour. Despite this exploratory behaviour of active inference, its usage has been limited to discrete state and action spaces due to the computational difficulty of the EFE. We propose a unified principle for joint information-seeking and reward maximization that clarifies a theoretical connection between active inference and RL, unifies active inference and RL, and overcomes their aforementioned limitations. Our findings are supported by strong theoretical analysis. The proposed framework's superior exploration property is also validated by experimental results on partially observable tasks with high-dimensional continuous state and action spaces. Moreover, the results show that our model solves reward-free problems, making task reward design optional.
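A commonly used decomposition of the expected free energy referred to above splits it into a pragmatic (preference-seeking) term and an epistemic (information-gain) term (a sketch of the standard active-inference formulation with notation assumed here, not the paper's specific objective; q denotes the agent's predictive beliefs under a policy π and C the prior preferences over observations):

$$G(\pi) \;=\; -\,\mathbb{E}_{q(o,s\,|\,\pi)}\!\left[\log p(o\,|\,C)\right] \;-\; \mathbb{E}_{q(o\,|\,\pi)}\!\left[ D_{\mathrm{KL}}\!\left(q(s\,|\,o,\pi)\,\|\,q(s\,|\,\pi)\right)\right]$$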