Safe reinforcement learning (RL) studies problems in which an agent must not only maximize reward but also avoid exploring unsafe regions. In this work, we propose CUP, a novel policy optimization method based on a Constrained Update Projection framework that enjoys rigorous safety guarantees. Central to our development of CUP are newly proposed surrogate functions along with performance bounds. Compared with previous safe RL methods, CUP offers the following benefits: 1) CUP generalizes the surrogate functions to generalized advantage estimation (GAE), leading to strong empirical performance; 2) CUP unifies performance bounds, providing better understanding and interpretability of some existing algorithms; 3) CUP provides a non-convex implementation via only first-order optimizers, which does not require any strong approximation of the convexity of the objectives. To validate our CUP method, we compare CUP against a comprehensive list of safe RL baselines on a wide range of tasks. Experiments show the effectiveness of CUP in terms of both reward and safety constraint satisfaction. We have open-sourced the CUP source code at https://github.com/rl-boxes/safe-rl/tree/main/cup.
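As a rough illustration of the constrained-update-projection pattern described above, the sketch below performs a GAE-based improvement step followed by a penalty-based projection back toward the cost constraint. The categorical policy, the KL regularizers, and the penalty coefficient `nu` are illustrative assumptions, not the authors' exact implementation.

```python
# Schematic two-step CUP-style update for a categorical policy (PyTorch).
# Step 1 improves a GAE-based reward surrogate; step 2 projects the policy
# back toward the cost-constraint set via a simple penalty term.
import torch
import torch.nn.functional as F

def cup_style_update(policy, optimizer, obs, actions, adv_r, adv_c,
                     old_logp, old_probs, cost_budget_slack, nu=1.0, beta=0.01):
    # ---- Step 1: performance improvement on the reward surrogate ----
    logits = policy(obs)
    logp = F.log_softmax(logits, -1).gather(1, actions.unsqueeze(1)).squeeze(1)
    ratio = torch.exp(logp - old_logp)
    kl = F.kl_div(F.log_softmax(logits, -1), old_probs, reduction="batchmean")
    loss_improve = -(ratio * adv_r).mean() + beta * kl  # KL regularizer between old and current policies
    optimizer.zero_grad(); loss_improve.backward(); optimizer.step()

    # Snapshot the intermediate policy produced by step 1.
    with torch.no_grad():
        mid_probs = F.softmax(policy(obs), -1)

    # ---- Step 2: projection toward the cost constraint ----
    logits = policy(obs)
    logp = F.log_softmax(logits, -1).gather(1, actions.unsqueeze(1)).squeeze(1)
    ratio = torch.exp(logp - old_logp)
    kl_to_mid = F.kl_div(F.log_softmax(logits, -1), mid_probs, reduction="batchmean")
    # Penalize the cost surrogate exceeding the remaining budget.
    cost_violation = torch.relu((ratio * adv_c).mean() - cost_budget_slack)
    loss_project = kl_to_mid + nu * cost_violation
    optimizer.zero_grad(); loss_project.backward(); optimizer.step()
```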
For many applications of reinforcement learning it can be more convenient to specify both a reward function and constraints, rather than trying to design behavior through the reward function. For example, systems that physically interact with or around humans should satisfy safety constraints. Recent advances in policy search algorithms (
Safe reinforcement learning aims to learn an optimal policy while satisfying safety constraints, which is essential in real-world applications. However, current algorithms still struggle to perform efficient policy updates with strict constraint satisfaction. In this paper, we propose Penalized Proximal Policy Optimization (P3O), which solves the cumbersome constrained policy iteration via a single minimization of an equivalent unconstrained problem. Specifically, P3O uses a simple penalty function to eliminate the cost constraints and removes the trust-region constraint with a clipped surrogate objective. We theoretically prove the exactness of the proposed method with a finite penalty factor and provide a worst-case analysis of the approximation error when evaluated on sample trajectories. Moreover, we extend P3O to the more challenging multi-constraint and multi-agent scenarios, which are less studied in previous work. Extensive experiments show that P3O outperforms state-of-the-art algorithms in terms of both reward improvement and constraint satisfaction on a set of constrained locomotion tasks.
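The loss below is a hedged sketch of the penalized objective described in the abstract: a clipped PPO surrogate combined with a ReLU (exact) penalty on a first-order cost surrogate. The function name, argument layout, and the way the cost surrogate is formed are assumptions for illustration.

```python
import torch

def p3o_style_loss(ratio, adv_r, adv_c, episode_cost, cost_limit,
                   kappa=20.0, clip_eps=0.2):
    """Clipped reward surrogate minus an exact (ReLU) penalty on the cost constraint."""
    # Standard PPO clipped objective for the reward.
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    reward_obj = torch.min(ratio * adv_r, clipped * adv_r).mean()

    # First-order surrogate of the constraint J_c(pi) <= cost_limit:
    # current cost estimate plus the importance-weighted cost advantage.
    cost_surrogate = episode_cost + (ratio * adv_c).mean()
    penalty = torch.relu(cost_surrogate - cost_limit)

    # Minimizing this loss maximizes reward while exactly penalizing violation.
    return -reward_obj + kappa * penalty
```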
Safe reinforcement learning (RL) aims to learn policies that satisfy certain constraints before being deployed in safety-critical applications. Previous primal-dual style approaches suffer from instability issues and lack optimality guarantees. This paper overcomes these issues from a probabilistic inference perspective. We introduce a novel Expectation-Maximization approach to naturally incorporate constraints during policy learning: 1) a provably optimal non-parametric variational distribution can be computed in closed form after a convex optimization (E-step); 2) the policy parameters are improved within a trust region based on the optimal variational distribution (M-step). The proposed algorithm decomposes the safe RL problem into a convex optimization phase and a supervised learning phase, resulting in more stable training performance. Extensive experiments on continuous robotic tasks show that the proposed method achieves better constraint satisfaction and better sample efficiency than the baselines. The code is available at https://github.com/liuzuxin/cvpo-safe-rl.
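A minimal sketch of the EM structure described above, assuming Q-value critics for reward and cost and several sampled actions per state; the exact closed-form weights and how the temperature `eta` and multiplier `lam` are obtained (via a convex dual problem in the paper) are simplified here.

```python
import torch
import torch.nn.functional as F

def e_step_weights(q_reward, q_cost, eta, lam):
    """Closed-form non-parametric E-step (sketch): re-weight sampled actions by
    exponentiated reward Q-values, discounted by a Lagrangian cost term.
    q_reward, q_cost: [batch, n_sampled_actions] Q-value estimates."""
    logits = (q_reward - lam * q_cost) / eta
    return F.softmax(logits, dim=-1)  # variational distribution over sampled actions

def m_step_loss(policy_logp, weights, kl_to_old, kl_coef=1.0):
    """M-step (sketch): weighted maximum likelihood toward the E-step distribution,
    with a trust-region-style KL regularizer to the old policy."""
    return -(weights.detach() * policy_logp).sum(-1).mean() + kl_coef * kl_to_old
```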
Offline goal-conditioned reinforcement learning (GCRL) promises general-purpose skill learning in the form of reaching diverse goals from purely offline datasets. We propose $\textbf{Go}$al-conditioned $f$-$\textbf{A}$dvantage $\textbf{R}$egression (GoFAR), a novel regression-based offline GCRL algorithm derived from a state-occupancy matching perspective; the key intuition is that the goal-reaching task can be formulated as a state-occupancy matching problem between a dynamics-abiding imitator agent and an expert agent that directly teleports to the goal. In contrast to prior approaches, GoFAR does not require any hindsight relabeling and enjoys uninterleaved optimization of its value and policy networks. These distinct features give GoFAR better offline performance and stability as well as statistical performance guarantees that are unattainable for prior methods. Furthermore, we demonstrate that GoFAR's training objective can be re-purposed to learn an agent-independent goal-conditioned planner from purely offline source-domain data, enabling zero-shot transfer to new target domains. Through extensive experiments, we validate GoFAR's effectiveness in various problem settings and tasks, significantly outperforming prior state-of-the-art methods. Notably, on a real robotic dexterous manipulation task, while no other method makes meaningful progress, GoFAR acquires complex manipulation behaviors that successfully reach diverse goals.
Most reinforcement learning algorithms optimize the discounted criterion, which is beneficial for accelerating convergence and reducing the variance of estimates. While the discounted criterion is suitable for certain tasks such as finance-related problems, many engineering problems treat future rewards equally and prefer the long-run average criterion. In this paper, we study reinforcement learning problems under the long-run average criterion. First, we formulate a unified trust-region theory for the discounted and average criteria, and derive a novel performance bound within the trust region based on perturbation analysis (PA) theory. Second, we propose a practical algorithm named Average Policy Optimization (APO), which improves value estimation with a novel technique named Average Value Constraint. Finally, experiments are conducted in the continuous-control environment MuJoCo. On most tasks, APO performs better than the discounted PPO, which demonstrates the effectiveness of our approach. Our work provides a unified trust-region approach covering both the discounted and average criteria, which may extend the framework of reinforcement learning beyond the discounted objective.
Constrained reinforcement learning (RL) is an area of RL whose objective is to find an optimal policy that maximizes expected cumulative return while satisfying a given constraint. Most of the previous constrained RL works consider expected cumulative sum cost as the constraint. However, optimization with this constraint cannot guarantee a target probability of outage event that the cumulative sum cost exceeds a given threshold. This paper proposes a framework, named Quantile Constrained RL (QCRL), to constrain the quantile of the distribution of the cumulative sum cost that is a necessary and sufficient condition to satisfy the outage constraint. This is the first work that tackles the issue of applying the policy gradient theorem to the quantile and provides theoretical results for approximating the gradient of the quantile. Based on the derived theoretical results and the technique of the Lagrange multiplier, we construct a constrained RL algorithm named Quantile Constrained Policy Optimization (QCPO). We use distributional RL with the Large Deviation Principle (LDP) to estimate quantiles and tail probability of the cumulative sum cost for the implementation of QCPO. The implemented algorithm satisfies the outage probability constraint after the training period.
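A small sketch of how a quantile constraint can be enforced with a Lagrange multiplier, as the abstract describes; the empirical quantile below is a stand-in for the paper's distributional-RL/LDP estimator, and the update rule is an assumed plain dual-ascent step.

```python
import numpy as np

def update_lagrange_multiplier(lam, cost_quantile_est, quantile_limit, lr=0.01):
    """Dual ascent on the quantile constraint (sketch):
    increase lambda when the estimated cost quantile exceeds the limit."""
    return max(0.0, lam + lr * (cost_quantile_est - quantile_limit))

def empirical_quantile(episode_costs, outage_prob):
    """Stand-in for the distributional/LDP quantile estimator used in the paper:
    the (1 - outage_prob)-quantile of observed episodic cumulative costs."""
    return float(np.quantile(np.asarray(episode_costs), 1.0 - outage_prob))
```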
Satisfying safety constraints almost surely (or with probability one) can be critical for the deployment of reinforcement learning (RL) in real-life applications. For example, plane landing and take-off should ideally occur with probability one. We address the problem by introducing Safety Augmented (Saute) Markov Decision Processes (MDPs), where the safety constraints are eliminated by augmenting them into the state space and reshaping the objective. We show that Saute MDPs satisfy the Bellman equation and move us closer to solving safe RL with almost-sure constraint satisfaction. We argue that Saute MDPs allow viewing the safe RL problem from a different perspective, enabling new capabilities. For instance, our approach has a plug-and-play nature, i.e., any RL algorithm can be "Sauteed". Additionally, state augmentation allows policy generalization across safety constraints. We finally show that Saute RL algorithms can outperform their state-of-the-art counterparts when constraint satisfaction is of high importance.
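A hedged sketch of the state-augmentation idea, assuming the classic Gym API and a per-step safety cost reported in `info["cost"]`; the normalization and the `unsafe_reward` penalty value are illustrative choices, not the paper's exact ones.

```python
import numpy as np
import gym

class SauteWrapper(gym.Wrapper):
    """Track the remaining safety budget, append it to the observation, and
    replace the reward with a large negative value once the budget is exhausted."""
    def __init__(self, env, safety_budget, unsafe_reward=-100.0):
        super().__init__(env)
        self.safety_budget = float(safety_budget)
        self.unsafe_reward = unsafe_reward

    def reset(self, **kwargs):
        obs = self.env.reset(**kwargs)
        self.remaining = self.safety_budget
        return np.append(obs, self.remaining / self.safety_budget)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.remaining -= info.get("cost", 0.0)   # per-step safety cost
        if self.remaining <= 0.0:                 # budget exhausted: reshape reward
            reward = self.unsafe_reward
        aug_obs = np.append(obs, max(self.remaining, 0.0) / self.safety_budget)
        return aug_obs, reward, done, info
```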
Safety comes first in many real-world applications involving autonomous agents. Despite a large number of reinforcement learning (RL) methods focusing on safety-critical tasks, there is still a lack of high-quality evaluation of those algorithms that adheres to safety constraints at each decision step under complex and unknown dynamics. In this paper, we revisit prior work in this scope from the perspective of state-wise safe RL and categorize them as projection-based, recovery-based, and optimization-based approaches, respectively. Furthermore, we propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection. This novel technique explicitly enforces hard constraints via the deep unrolling architecture and enjoys structural advantages in navigating the trade-off between reward improvement and constraint satisfaction. To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit, a toolkit that provides off-the-shelf interfaces and evaluation utilities for safety-critical tasks. We then perform a comparative study of the involved algorithms on six benchmarks ranging from robotic control to autonomous driving. The empirical results provide an insight into their applicability and robustness in learning zero-cost-return policies without task-dependent handcrafting. The project page is available at https://sites.google.com/view/saferlkit.
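The snippet below sketches one way a projection/unrolling-style safety layer can be realized: a fixed number of gradient steps that push an action below a learned cost-critic threshold. The critic interface, step count, and step size are assumptions, not the USL implementation.

```python
import torch

def unrolled_safety_correction(q_cost, obs, action, threshold, n_steps=5, eta=0.05):
    """Correct an action by unrolled gradient steps on the predicted cost.
    `q_cost` is an assumed differentiable cost critic q_cost(obs, action)."""
    a = action.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        violation = torch.relu(q_cost(obs, a) - threshold).sum()
        if violation.item() == 0.0:        # already predicted safe
            break
        grad, = torch.autograd.grad(violation, a)
        a = (a - eta * grad).detach().requires_grad_(True)
    return a.detach()
```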
In this work we introduce reinforcement learning techniques for solving lexicographic multi-objective problems. These are problems that involve multiple reward signals, and where the goal is to learn a policy that maximises the first reward signal, and subject to this constraint also maximises the second reward signal, and so on. We present a family of both action-value and policy gradient algorithms that can be used to solve such problems, and prove that they converge to policies that are lexicographically optimal. We evaluate the scalability and performance of these algorithms empirically, demonstrating their practical applicability. As a more specific application, we show how our algorithms can be used to impose safety constraints on the behaviour of an agent, and compare their performance in this context with that of other constrained reinforcement learning algorithms.
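A compact sketch of lexicographic action selection with a slack tolerance, which is the core mechanism such action-value methods rely on; the slack value and array layout are illustrative.

```python
import numpy as np

def lexicographic_action(q_values_per_objective, slack=1e-3):
    """Keep only actions whose value for each objective is within `slack` of the
    best value among the actions that survived the previous objectives."""
    candidates = np.arange(len(q_values_per_objective[0]))
    for q in q_values_per_objective:          # objectives in priority order
        best = q[candidates].max()
        candidates = candidates[q[candidates] >= best - slack]
    return int(candidates[0])

# Example: objective 1 ties actions 0 and 2; objective 2 breaks the tie.
q1 = np.array([1.0, 0.2, 1.0])
q2 = np.array([0.0, 5.0, 3.0])
assert lexicographic_action([q1, q2]) == 2
```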
This work develops new algorithms with rigorous efficiency guarantees for infinite-horizon imitation learning (IL) with linear function approximation, without restrictive coherence assumptions. We begin with the minimax formulation of the problem and then outline how to leverage classical tools from optimization, in particular the proximal-point method (PPM) and dual smoothing, for online and offline IL respectively. Thanks to PPM, we avoid the nested policy evaluation and cost update that appear in the previous literature on online IL. In particular, we replace the conventional alternating updates by optimizing a single convex and smooth objective over both the cost and the Q-function. When solved inexactly, we relate the optimization error to the suboptimality of the recovered policy. As an added bonus, by re-interpreting PPM as dual smoothing centered at the expert policy, we also obtain an offline IL algorithm that enjoys theoretical guarantees in terms of the required number of expert trajectories. Finally, we achieve compelling empirical performance with both linear and neural-network function approximation.
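A toy sketch of the proximal-point update the abstract leans on: each outer step approximately minimizes the objective plus a quadratic proximity term to the previous iterate. The inner solver and step sizes are arbitrary illustrative choices.

```python
import numpy as np

def proximal_point_step(grad_f, theta_k, eta=0.5, inner_iters=50, lr=0.05):
    """One (inexact) proximal-point update: approximately minimize
    f(theta) + ||theta - theta_k||^2 / (2 * eta) by plain gradient descent.
    `grad_f` is assumed to return the gradient of a convex, smooth objective."""
    theta = theta_k.copy()
    for _ in range(inner_iters):
        g = grad_f(theta) + (theta - theta_k) / eta
        theta -= lr * g
    return theta

# Toy usage on f(theta) = ||theta||^2 / 2, whose gradient is theta.
theta = np.ones(3)
for _ in range(20):
    theta = proximal_point_step(lambda t: t, theta)
print(theta)  # converges toward the minimizer at 0
```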
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
Reinforcement learning (RL) agents in the real world must satisfy safety constraints in addition to maximizing a reward objective. Model-based RL algorithms hold promise for reducing unsafe real-world actions: they can synthesize policies that obey all constraints using simulated samples from a learned model. However, imperfect models can lead to real-world constraint violations even for actions that are predicted to satisfy all constraints. We propose Conservative and Adaptive Penalty (CAP), a model-based safe RL framework that accounts for potential modeling errors by capturing model uncertainty and adaptively exploiting it to balance the reward and cost objectives. First, CAP inflates predicted costs with an uncertainty-based penalty. Theoretically, we show that policies satisfying this conservative cost constraint are also guaranteed to be feasible in the true environment. We further show that this guarantees the safety of all intermediate solutions during RL training. In addition, CAP adaptively tunes this penalty during training using true cost feedback from the environment. We evaluate this conservative and adaptive penalty approach for model-based safe RL extensively on state-based and image-based environments. Our results demonstrate substantial gains in sample efficiency while incurring fewer violations than existing safe RL algorithms. Code is available at: https://github.com/redrew/cap
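A hedged sketch of the two ingredients named in the abstract: an uncertainty-inflated cost prediction and an adaptive penalty coefficient driven by true cost feedback. The ensemble-disagreement uncertainty measure and the specific update rule are assumptions.

```python
import numpy as np

def conservative_cost(cost_preds, kappa):
    """Uncertainty-inflated cost (sketch): mean ensemble prediction plus kappa times
    the ensemble disagreement. cost_preds: array of shape [n_ensemble, batch]."""
    return cost_preds.mean(axis=0) + kappa * cost_preds.std(axis=0)

def adapt_kappa(kappa, observed_episode_cost, cost_limit, lr=0.1):
    """Adaptive penalty update (sketch): increase kappa when true cost feedback from
    the environment exceeds the limit, decrease it otherwise, clipped at zero."""
    return max(0.0, kappa + lr * (observed_episode_cost - cost_limit))
```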
Reinforcement learning is widely used in applications where one needs to perform sequential decision making while interacting with the environment. The problem becomes more challenging when the decision requirements include satisfying safety constraints. This problem is mathematically formulated as a constrained Markov decision process (CMDP). In the literature, various algorithms are available to solve the CMDP problem in a model-free manner to achieve an $\epsilon$-optimal cumulative reward with an $\epsilon$-feasible policy. An $\epsilon$-feasible policy implies that it suffers from constraint violation. An important question here is whether we can achieve an $\epsilon$-optimal cumulative reward with zero constraint violation. To this end, we advocate the use of a randomized primal-dual approach to solve the CMDP problem and propose a conservative stochastic primal-dual algorithm (CSPDA), which is shown to exhibit $\tilde{\mathcal{O}}\left(1/\epsilon^2\right)$ sample complexity to achieve an $\epsilon$-optimal cumulative reward with zero constraint violation. In prior work, the best available sample complexity for an $\epsilon$-optimal policy with zero constraint violation is $\tilde{\mathcal{O}}\left(1/\epsilon^5\right)$. Hence, the proposed algorithm provides a significant improvement over the state of the art.
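A minimal sketch of the "conservative" primal-dual idea: the dual variable is updated against a tightened constraint, which is what allows the final policy to satisfy the original constraint with zero violation. The specific margin and learning rate are illustrative.

```python
def conservative_dual_update(lam, est_cost_value, cost_limit, margin, lr=0.01):
    """Conservative dual ascent (sketch): the dual variable reacts to a tightened
    constraint (cost_limit - margin) rather than the original one."""
    return max(0.0, lam + lr * (est_cost_value - (cost_limit - margin)))

def lagrangian_reward(reward, cost, lam):
    """Primal-step objective (sketch): maximize reward minus lambda-weighted cost."""
    return reward - lam * cost
```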
In this work, we focus on the problem of safe policy transfer in reinforcement learning: we seek to leverage existing policies when learning a new task with specified constraints. This problem is important for safety-critical applications where interactions are costly and unconstrained policies can lead to undesirable or dangerous outcomes, e.g., with physical robots that interact with humans. We propose a Constrained Markov Decision Process (CMDP) formulation that simultaneously enables the transfer of policies and adherence to safety constraints. Our formulation cleanly separates task goals from safety considerations and permits the specification of a wide variety of constraints. Our approach relies on a novel extension of generalized policy improvement to constrained settings via a Lagrangian formulation. We devise a dual optimization algorithm that estimates the optimal dual variable of a target task, thus enabling safe transfer of policies derived from successor features learned on source tasks. Our experiments in simulated domains show that our approach is effective; it visits unsafe states less frequently and outperforms alternative state-of-the-art methods when taking safety constraints into account.
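The sketch below illustrates generalized policy improvement over successor features with a Lagrangian trade-off between reward and cost, which is the mechanism the abstract describes; the array shapes and the use of a single scalar dual variable are assumptions.

```python
import numpy as np

def safe_gpi_action(psi, w_reward, w_cost, lam):
    """Generalized policy improvement over successor features with a Lagrangian
    trade-off (sketch). psi: [n_policies, n_actions, d] successor features for the
    current state; w_reward, w_cost: task/cost weight vectors; lam: learned dual variable."""
    q_reward = psi @ w_reward            # [n_policies, n_actions]
    q_cost = psi @ w_cost
    scores = q_reward - lam * q_cost     # Lagrangian combination per (policy, action)
    return int(np.unravel_index(scores.argmax(), scores.shape)[1])  # best action index
```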
Safe reinforcement learning (RL) has achieved significant success on risk-sensitive tasks and has also shown promise in autonomous driving (AD). Considering the distinctiveness of this community, efficient and reproducible baselines are still lacking for safe AD. In this paper, we release SafeRL-Kit to benchmark safe RL methods for AD-oriented tasks. Specifically, SafeRL-Kit contains several latest algorithms specific to zero-constraint-violation tasks, including Safety Layer, Recovery RL, off-policy Lagrangian methods, and Feasible Actor-Critic. In addition to existing approaches, we propose a novel first-order method named Exact Penalty Optimization (EPO) and sufficiently demonstrate its capability for safe AD. All algorithms in SafeRL-Kit are implemented (i) under the off-policy setting, which improves sample efficiency and can better leverage past logs; (ii) within a unified learning framework, providing off-the-shelf interfaces for researchers to incorporate their domain-specific knowledge into fundamental safe RL methods. Finally, we conduct a comparative evaluation of the above algorithms and shed light on their efficacy for safe autonomous driving. The source code is available at \href{https://github.com/zlr20/saferl_kit}{this https URL}.
The predominant approach in reinforcement learning is to assign credit to actions based on the expected return. However, we show that the return may depend on the policy, which can lead to excessive variance in value estimation and slow down learning. Instead, we show that the advantage function can be interpreted as a causal effect and shares similar properties with causal representations. Based on this insight, we propose Direct Advantage Estimation (DAE), a novel method that can model the advantage function and estimate it directly from on-policy data, while simultaneously minimizing the variance of the return without requiring an (action-)value function. We also connect our method to temporal-difference methods by showing how value functions can be seamlessly integrated into DAE. The proposed method is easy to implement and can be readily adopted by modern actor-critic methods. We evaluate DAE empirically on three discrete control domains and show that it can outperform generalized advantage estimation (GAE), a strong baseline for advantage estimation, on most environments when applied to policy optimization.
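One way to read the abstract is sketched below, assuming a discrete action space: the modeled advantage is centered so that its policy-weighted mean is zero, and trajectory returns are regressed onto sums of the modeled advantages. This is an interpretation for illustration, not the paper's exact objective.

```python
import torch

def centered_advantage(adv_logits, policy_probs):
    """Center a modeled advantage so that its policy-weighted mean is zero,
    which is the defining property of an advantage function.
    adv_logits, policy_probs: [batch, n_actions]."""
    baseline = (policy_probs * adv_logits).sum(dim=-1, keepdim=True)
    return adv_logits - baseline

def dae_style_loss(centered_adv_taken, returns, v0):
    """Regress each trajectory's return onto the sum of modeled advantages along it
    plus an initial-state baseline. centered_adv_taken: [n_traj, T] advantages of
    the actions actually taken."""
    pred = v0 + centered_adv_taken.sum(dim=-1)
    return ((returns - pred) ** 2).mean()
```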
Learning policies from fixed offline datasets is a key challenge to scale up reinforcement learning (RL) algorithms towards practical applications. This is often because off-policy RL algorithms suffer from distributional shift, due to mismatch between dataset and the target policy, leading to high variance and over-estimation of value functions. In this work, we propose variance regularization for offline RL algorithms, using stationary distribution corrections. We show that by using Fenchel duality, we can avoid double sampling issues for computing the gradient of the variance regularizer. The proposed algorithm for offline variance regularization (OVAR) can be used to augment any existing offline policy optimization algorithms. We show that the regularizer leads to a lower bound to the offline policy optimization objective, which can help avoid over-estimation errors, and explains the benefits of our approach across a range of continuous control domains when compared to existing state-of-the-art algorithms.
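One standard instance of the Fenchel-duality trick mentioned above is the variational identity Var(X) = min_nu E[(X - nu)^2], which turns the variance into a single-sample-friendly objective with an auxiliary variable; whether the paper uses exactly this form is an assumption.

```python
import torch

# Variance via its variational (Fenchel-dual) form: Var(X) = min_nu E[(X - nu)^2].
# The auxiliary scalar `nu` is learned jointly, so the gradient of the variance
# regularizer needs only single samples (no double sampling of E[X]).
x = torch.randn(1000) * 3.0 + 2.0
nu = torch.zeros(1, requires_grad=True)
opt = torch.optim.SGD([nu], lr=0.1)
for _ in range(200):
    loss = ((x - nu) ** 2).mean()   # single-sample-friendly variance surrogate
    opt.zero_grad(); loss.backward(); opt.step()
print(nu.item(), ((x - nu) ** 2).mean().item())  # nu -> E[X], loss -> Var(X)
```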
Designing and analyzing model-based RL (MBRL) algorithms with guaranteed monotonic improvement has been challenging, mainly due to the interdependence between policy optimization and model learning. Existing discrepancy bounds generally ignore the impacts of model shifts, and their corresponding algorithms are prone to degrade performance by drastic model updating. In this work, we first propose a novel and general theoretical scheme for a non-decreasing performance guarantee of MBRL. Our follow-up derived bounds reveal the relationship between model shifts and performance improvement. These discoveries encourage us to formulate a constrained lower-bound optimization problem to permit the monotonicity of MBRL. A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns. Motivated by these analyses, we design a simple but effective algorithm CMLO (Constrained Model-shift Lower-bound Optimization), by introducing an event-triggered mechanism that flexibly determines when to update the model. Experiments show that CMLO surpasses other state-of-the-art methods and produces a boost when various policy optimization methods are employed.
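A toy sketch of an event-triggered model-update rule: the dynamics model is refit only when a measured distribution shift exceeds a threshold. The mean-feature-distance trigger is an illustrative stand-in for the bound-derived criterion in the paper.

```python
import numpy as np

class EventTriggeredModelUpdater:
    """Retrain the dynamics model only when the shift between newly collected
    states and those the model was last trained on exceeds a threshold."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.reference_mean = None

    def should_update(self, new_states):
        new_mean = np.mean(new_states, axis=0)
        if self.reference_mean is None:
            self.reference_mean = new_mean
            return True                      # no model yet: train once
        shift = float(np.linalg.norm(new_mean - self.reference_mean))
        if shift > self.threshold:
            self.reference_mean = new_mean   # model will be refit on the new data
            return True
        return False
```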
Improving sample efficiency has been a long-standing goal in reinforcement learning. This paper proposes the $\mathtt{VRMPO}$ algorithm: a sample-efficient policy gradient method with stochastic mirror descent. In $\mathtt{VRMPO}$, a novel variance-reduced policy gradient estimator is presented to improve sample efficiency. We prove that the proposed $\mathtt{VRMPO}$ needs only $\mathcal{O}(\epsilon^{-3})$ sample trajectories to achieve an $\epsilon$-approximate first-order stationary point, which matches the best sample complexity for policy optimization. Extensive experimental results demonstrate that $\mathtt{VRMPO}$ outperforms state-of-the-art policy gradient methods in various settings.
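A minimal sketch of an SVRG-style variance-reduced gradient estimator combined with a mirror-descent step (here with the squared-Euclidean mirror map, which reduces to a plain gradient update); this is a generic illustration, not the paper's exact estimator.

```python
def variance_reduced_grad(grad_minibatch, theta, theta_snapshot, full_grad_snapshot):
    """SVRG-style variance-reduced gradient estimator (sketch):
    g = grad_i(theta) - grad_i(theta_snapshot) + full_grad(theta_snapshot)."""
    return (grad_minibatch(theta) - grad_minibatch(theta_snapshot)
            + full_grad_snapshot)

def mirror_descent_step(theta, grad, lr=0.1):
    """Mirror-descent update; with the squared-Euclidean mirror map used here it
    reduces to an ordinary gradient step (other Bregman divergences would differ)."""
    return theta - lr * grad
```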