Deep reinforcement learning is an increasingly popular technique for synthesising policies to control an agent's interaction with its environment. There is growing interest in formally verifying that such policies are correct and execute safely. Progress has been made in this direction by building on existing work for verifying deep neural networks and continuous-state dynamical systems. In this paper, we tackle the problem of verifying probabilistic policies for deep reinforcement learning, which are used, for example, to act in adversarial environments, break symmetries, and manage trade-offs. We propose an abstraction approach, based on interval Markov decision processes, that yields probabilistic guarantees on a policy's execution, and present techniques to build and solve these models using abstract interpretation, mixed-integer linear programming, entropy-based refinement, and probabilistic model checking. We implement our approach and illustrate its effectiveness on a selection of reinforcement learning benchmarks.
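To make the interval-MDP idea concrete, the following is a minimal sketch of robust value iteration on an interval MDP: for each state-action pair, an adversary picks the worst-case distribution inside the probability intervals, yielding a lower bound on the maximal reachability probability. This is only an illustration of the underlying model, not the authors' implementation (which additionally uses abstract interpretation, MILP, and entropy-based refinement); the `intervals` and `actions` data structures are hypothetical.

```python
import numpy as np

def worst_case_expectation(values, lower, upper):
    """Minimise sum_j p_j * values[j] over distributions p with
    lower[j] <= p[j] <= upper[j] and sum_j p[j] == 1.
    Greedy: start from the lower bounds and push the remaining
    probability mass onto the successors with the smallest values."""
    values = np.asarray(values, dtype=float)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    p = lower.copy()
    slack = 1.0 - p.sum()
    for j in np.argsort(values):          # cheapest successors first
        add = min(upper[j] - lower[j], slack)
        p[j] += add
        slack -= add
        if slack <= 1e-12:
            break
    return float(p @ values)

def imdp_reach_lower_bound(n_states, actions, intervals, goal, n_iters=1000):
    """Robust value iteration for a lower bound on the maximal probability
    of eventually reaching `goal`.  `intervals[(s, a)]` is a triple
    (succ, lower, upper) of successor indices and interval bounds;
    `actions(s)` returns the actions available in state s."""
    v = np.zeros(n_states)
    v[list(goal)] = 1.0
    for _ in range(n_iters):
        v_new = v.copy()
        for s in range(n_states):
            if s in goal:
                continue
            v_new[s] = max(
                (worst_case_expectation(v[succ], lo, up)
                 for a in actions(s)
                 for succ, lo, up in [intervals[(s, a)]]),
                default=0.0,
            )
        if np.max(np.abs(v_new - v)) < 1e-8:
            return v_new
        v = v_new
    return v
```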
Automated synthesis of provably correct controllers for cyber-physical systems is crucial for deploying these systems in safety-critical scenarios. However, their hybrid features and stochastic or unknown behaviours make this synthesis problem challenging. In this paper, we propose a method for synthesizing controllers for Markov jump linear systems (MJLSs), a particular class of cyber-physical systems, that certifiably satisfy a requirement expressed as a specification in probabilistic computation tree logic (PCTL). An MJLS consists of a finite set of linear dynamics with unknown additive disturbances, where jumps between these modes are governed by a Markov decision process (MDP). We consider both the case where the transition function of this MDP is given by probability intervals and the case where it is completely unknown. Our approach is based on generating a finite-state abstraction which captures both the discrete and the continuous behaviour of the original system. We formalise such abstraction as an interval Markov decision process (iMDP): intervals of transition probabilities are computed using sampling techniques from the so-called "scenario approach", resulting in a probabilistically sound approximation of the MJLS. This iMDP abstracts both the jump dynamics between modes and the continuous dynamics within the modes. To demonstrate the efficacy of our technique, we apply our method to multiple realistic benchmark problems, in particular temperature control and aerial vehicle delivery problems.
Controllers for autonomous systems that operate in safety-critical settings must account for stochastic disturbances. Such disturbances are often modelled as process noise, and common assumptions are that the underlying distribution is known and/or Gaussian. In practice, however, these assumptions may be unrealistic and can lead to poor approximations of the true noise distribution. We present a novel planning method that does not rely on any explicit representation of the noise distribution. In particular, we address the problem of computing a controller that provides probabilistic guarantees on safely reaching a target. First, we abstract the continuous system into a discrete-state model that captures the noise by probabilistic transitions between states. As a key contribution, we adapt tools from the scenario approach to compute bounds on these transition probabilities based on a finite number of samples of the noise. We capture these bounds in the transition probability intervals of a so-called interval Markov decision process (iMDP). This iMDP is robust against uncertainty in the transition probabilities, and the tightness of the probability intervals can be controlled through the number of samples. We use state-of-the-art verification techniques to provide guarantees on the iMDP and compute a controller for which these guarantees carry over to the autonomous system. We demonstrate the practical applicability of our approach, even when the iMDP has millions of states or transitions.
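As an illustration of turning noise samples into transition probability intervals, here is a simplified stand-in that uses Hoeffding's inequality with a union bound over successor regions; the paper itself derives its intervals from the scenario approach, which gives different guarantees, so the bound below is only a sketch of the general idea.

```python
import numpy as np

def transition_intervals(successor_counts, n_samples, confidence=0.99):
    """For one state-action pair, turn sampled successor frequencies into
    probability intervals.  Stand-in bound: Hoeffding's inequality,
    2*exp(-2*n*eps^2) per successor, union-bounded over all successors."""
    k = len(successor_counts)
    eps = np.sqrt(np.log(2 * k / (1 - confidence)) / (2 * n_samples))
    freqs = np.asarray(successor_counts, dtype=float) / n_samples
    lower = np.clip(freqs - eps, 0.0, 1.0)
    upper = np.clip(freqs + eps, 0.0, 1.0)
    return lower, upper

# Example: 1000 noise samples fell into 3 abstract successor regions.
lo, up = transition_intervals([612, 305, 83], n_samples=1000)
print(lo, up)
```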
We study planning problems for dynamical systems with uncertainty caused by measurement and process noise. Measurement noise causes limited observability of the system state, and process noise causes uncertainty in the outcome of a given control input. The problem is to find a controller that guarantees that the system reaches a desired goal state within finite time while avoiding obstacles, with at least some required probability. Due to the noise, this problem does not admit exact algorithmic or closed-form solutions in general. Our key contribution is a novel planning scheme that employs Kalman filtering as a state estimator to obtain a finite-state abstraction of the dynamical system, which we formalise as a Markov decision process (MDP). By extending the MDP with probability intervals, we enhance the robustness of the model against numerical imprecision in the approximated transition probabilities. For this so-called interval MDP (iMDP), we employ state-of-the-art verification techniques to efficiently compute plans that maximise the probability of reaching the goal states. We show the correctness of the abstraction and provide several optimisations aimed at balancing the quality of the plans and the scalability of the approach. We demonstrate that our method can handle systems with a 6-dimensional state, resulting in iMDPs with tens of thousands of states and millions of transitions.
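For reference, the state estimator mentioned above is the standard Kalman filter; the textbook predict/update step below is included only to make the setting concrete and is not specific to the paper's abstraction procedure.

```python
import numpy as np

def kalman_step(mu, Sigma, u, y, A, B, Q, C, R):
    """One predict/update cycle of a standard Kalman filter for
    x' = A x + B u + w,  y = C x + v,  with w ~ N(0, Q) and v ~ N(0, R)."""
    # Predict
    mu_pred = A @ mu + B @ u
    Sigma_pred = A @ Sigma @ A.T + Q
    # Update with measurement y
    S = C @ Sigma_pred @ C.T + R                  # innovation covariance
    K = Sigma_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
    mu_new = mu_pred + K @ (y - C @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_pred
    return mu_new, Sigma_new
```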
Besides the recent impressive results on reinforcement learning (RL), safety is still one of the major research challenges in RL. RL is a machine-learning approach to determine near-optimal policies in Markov decision processes (MDPs). In this paper, we consider the setting where the safety-relevant fragment of the MDP together with a temporal logic safety specification is given and many safety violations can be avoided by planning ahead a short time into the future. We propose an approach for online safety shielding of RL agents. During runtime, the shield analyses the safety of each available action. For any action, the shield computes the maximal probability to not violate the safety specification within the next $k$ steps when executing this action. Based on this probability and a given threshold, the shield decides whether to block an action from the agent. Existing offline shielding approaches compute exhaustively the safety of all state-action combinations ahead of time, resulting in huge computation times and large memory consumption. The intuition behind online shielding is to compute at runtime the set of all states that could be reached in the near future. For each of these states, the safety of all available actions is analysed and used for shielding as soon as one of the considered states is reached. Our approach is well suited for high-level planning problems where the time between decisions can be used for safety computations and it is sustainable for the agent to wait until these computations are finished. For our evaluation, we selected a 2-player version of the classical computer game SNAKE. The game represents a high-level planning problem that requires fast decisions and the multiplayer setting induces a large state space, which is computationally expensive to analyse exhaustively.
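A minimal sketch of the k-step shielding computation described above: dynamic programming over the safety-relevant MDP gives, for every available action, the maximal probability of avoiding a violation within the next k steps, and actions below a threshold are blocked. The `mdp` interface (`actions`, `successors`, `is_unsafe`) is an assumed, hypothetical one, and states are assumed hashable.

```python
from functools import lru_cache

def max_safety_prob(mdp, state, action, k):
    """Maximal probability of staying safe for the next k steps when first
    taking `action` in `state` and acting optimally afterwards."""
    @lru_cache(maxsize=None)
    def safe_value(s, steps):
        if mdp.is_unsafe(s):
            return 0.0
        if steps == 0:
            return 1.0
        return max(
            sum(p * safe_value(t, steps - 1) for p, t in mdp.successors(s, a))
            for a in mdp.actions(s)
        )

    if mdp.is_unsafe(state):
        return 0.0
    return sum(p * safe_value(t, k - 1) for p, t in mdp.successors(state, action))

def shield(mdp, state, available_actions, k, threshold):
    """Block every action whose k-step safety probability is below the threshold."""
    return [a for a in available_actions
            if max_safety_prob(mdp, state, a, k) >= threshold]
```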
Deploying autonomous systems in safety-critical scenarios requires verifying their behaviour in the presence of uncertainties and black-box components that influence the system dynamics. In this paper, we develop a framework for verifying partially observable, discrete-time dynamical systems with unmodelled dynamics against temporal logic specifications from a given input-output dataset. The verification framework employs Gaussian process (GP) regression to learn the unknown dynamics from the dataset and abstracts the continuous-space system into a finite-state, uncertain Markov decision process (MDP). This abstraction relies on transition probability intervals that capture the uncertainty due to the error in GP regression, using reproducing kernel Hilbert space analysis, as well as the uncertainty induced by discretisation. The framework uses existing model-checking tools to verify the uncertain MDP abstraction against a given temporal logic specification. We establish the correctness of extending the verification results on the abstraction to the underlying partially observable system. We show that the computational complexity of the framework is polynomial in the size of the dataset and the discrete abstraction. The complexity analysis illustrates the trade-off between the quality of the verification results and the computational burden of handling larger datasets and finer abstractions. Finally, we demonstrate the efficacy of our learning and verification framework on several case studies with linear, non-linear, and switched dynamical systems.
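For background, the GP regression step referred to above produces the standard posterior mean and variance below (our notation); the paper's contribution lies in turning the regression error into transition probability intervals, which is not shown here.

```latex
% Standard GP posterior at a test input x_*, given training inputs X,
% targets y, kernel k, and observation-noise variance \sigma^2:
\mu(x_*)      = k(x_*, X)\,\bigl(K(X,X) + \sigma^2 I\bigr)^{-1} y,
\qquad
\sigma^2(x_*) = k(x_*, x_*) - k(x_*, X)\,\bigl(K(X,X) + \sigma^2 I\bigr)^{-1} k(X, x_*).
```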
Safety is still one of the major research challenges in reinforcement learning (RL). In this paper, we address the problem of how to avoid safety violations of RL agents during exploration in probabilistic and partially unknown environments. Our approach combines automata learning for Markov Decision Processes (MDPs) and shield synthesis in an iterative approach. Initially, the MDP representing the environment is unknown. The agent starts exploring the environment and collects traces. From the collected traces, we passively learn MDPs that abstractly represent the safety-relevant aspects of the environment. Given a learned MDP and a safety specification, we construct a shield. For each state-action pair within a learned MDP, the shield computes exact probabilities on how likely it is that executing the action results in violating the specification from the current state within the next $k$ steps. After the shield is constructed, the shield is used during runtime and blocks any actions that induce a too large risk from the agent. The shielded agent continues to explore the environment and collects new data on the environment. Iteratively, we use the collected data to learn new MDPs with higher accuracy, resulting in turn in shields able to prevent more safety violations. We implemented our approach and present a detailed case study of a Q-learning agent exploring slippery Gridworlds. In our experiments, we show that as the agent explores more and more of the environment during training, the improved learned models lead to shields that are able to prevent many safety violations.
We study the problem of learning controllers for discrete-time non-linear stochastic dynamical systems with formal reach-avoid guarantees. This work presents the first method for providing formal reach-avoid guarantees, which combine and generalize stability and safety guarantees, with a tolerable probability threshold $p\in[0,1]$ over the infinite time horizon. Our method leverages advances in machine learning literature and it represents formal certificates as neural networks. In particular, we learn a certificate in the form of a reach-avoid supermartingale (RASM), a novel notion that we introduce in this work. Our RASMs provide reachability and avoidance guarantees by imposing constraints on what can be viewed as a stochastic extension of level sets of Lyapunov functions for deterministic systems. Our approach solves several important problems -- it can be used to learn a control policy from scratch, to verify a reach-avoid specification for a fixed control policy, or to fine-tune a pre-trained policy if it does not satisfy the reach-avoid specification. We validate our approach on $3$ stochastic non-linear reinforcement learning tasks.
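For intuition, a reach-avoid supermartingale imposes conditions of roughly the following shape (our paraphrase from the abstract; the precise constraints in the paper may differ): the certificate is nonnegative, small on initial states, large on unsafe states, and decreases in expectation outside the target, which bounds the probability of ever reaching the unsafe set before the target.

```latex
% Sketch of supermartingale-style reach-avoid conditions (paraphrased),
% for a probability threshold p and some \varepsilon > 0:
V(x) \ge 0 \ \ \forall x \in \mathcal{X}, \qquad
V(x) \le 1 \ \ \forall x \in \mathcal{X}_0, \qquad
V(x) \ge \tfrac{1}{1-p} \ \ \forall x \in \mathcal{X}_{\mathrm{unsafe}},
\qquad
\mathbb{E}\bigl[V(x_{t+1}) \mid x_t = x\bigr] \le V(x) - \varepsilon
\ \ \text{for } x \notin \mathcal{X}_{\mathrm{target}},\ V(x) \le \tfrac{1}{1-p}.
```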
This paper presents COOL-MC, a tool that integrates state-of-the-art reinforcement learning (RL) and model checking. Specifically, the tool builds upon the OpenAI Gym and the probabilistic model checker Storm. COOL-MC provides the following features: (1) a simulator to train RL policies in the OpenAI Gym for Markov decision processes (MDPs) that are defined as input for Storm, (2) a new model builder for Storm, which uses callback functions to verify (neural network) RL policies, (3) formal abstractions that relate models and policies specified in either OpenAI Gym or Storm, and (4) algorithms to obtain bounds on the performance of so-called permissive policies. We describe the components and architecture of COOL-MC and demonstrate its features on multiple benchmark environments.
Bayesian neural networks (BNNs) place distributions over the weights of a neural network to model uncertainty in the data and the network's prediction. We consider the problem of verifying safety when running a Bayesian neural network policy in a feedback loop with infinite time horizon systems. Compared to existing sampling-based approaches, which are inapplicable to the infinite time horizon setting, we train a separate deterministic neural network that serves as an infinite time horizon safety certificate. In particular, we show that the certificate network guarantees the safety of the system over a subset of the BNN weight posterior's support. Our method first computes a safe weight set and then alters the BNN's weight posterior to reject samples outside this set. Moreover, we show how to extend our approach to a safe-exploration reinforcement learning setting, in order to avoid unsafe trajectories during the training of the policy. We evaluate our approach on a series of reinforcement learning benchmarks, including non-Lyapunovian safety specifications.
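The "reject samples outside the safe weight set" step lends itself to a very short sketch. Both interfaces below are hypothetical placeholders; the paper computes the safe weight set via a separate certificate network, which is not shown here.

```python
def safe_posterior_samples(sample_posterior, weight_is_safe, n_samples):
    """Draw BNN weight samples from the posterior but keep only those that
    lie inside a certified safe weight set -- a minimal sketch of altering
    the posterior by rejection.  `sample_posterior()` and `weight_is_safe(w)`
    are assumed, hypothetical interfaces."""
    kept = []
    while len(kept) < n_samples:
        w = sample_posterior()
        if weight_is_safe(w):
            kept.append(w)
    return kept
```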
Capturing uncertainty in models of complex dynamical systems is crucial to designing safe controllers. Stochastic noise causes aleatoric uncertainty, whereas imprecise knowledge of model parameters leads to epistemic uncertainty. Several approaches use formal abstractions to synthesize policies that satisfy temporal specifications related to safety and reachability. However, the underlying models exclusively capture aleatoric but not epistemic uncertainty, and thus require that model parameters are known precisely. Our contribution to overcoming this restriction is a novel abstraction-based controller synthesis method for continuous-state models with stochastic noise and uncertain parameters. By sampling techniques and robust analysis, we capture both aleatoric and epistemic uncertainty, with a user-specified confidence level, in the transition probability intervals of a so-called interval Markov decision process (iMDP). We synthesize an optimal policy on this iMDP, which translates (with the specified confidence level) to a feedback controller for the continuous model with the same performance guarantees. Our experimental benchmarks confirm that accounting for epistemic uncertainty leads to controllers that are more robust against variations in parameter values.
We consider the challenge of policy simplification and verification in the context of policies learned through reinforcement learning (RL) in continuous environments. In well-behaved settings, RL algorithms have convergence guarantees in the limit. While these guarantees are valuable, they are insufficient for safety-critical applications. Furthermore, they are lost when applying advanced techniques such as deep RL. To recover guarantees when applying advanced RL algorithms to more complex environments with (i) reachability, (ii) safety-constrained reachability, or (iii) discounted-reward objectives, we build upon the DeepMDP framework introduced by Gelada et al. to derive new bisimulation bounds between the unknown environment and a learned discrete latent model. Our bisimulation bounds enable the application of formal methods for Markov decision processes. Finally, we show how one can use a policy obtained via state-of-the-art RL to efficiently train a variational autoencoder that yields a discrete latent model with provably approximately correct bisimulation guarantees. In addition, we obtain a distilled version of the policy for the latent model.
Many methods for model-based reinforcement learning (MBRL) provide guarantees both for the accuracy of the Markov decision process (MDP) model they can deliver and for the learning efficiency. At the same time, state abstraction techniques allow for a reduction of the size of an MDP while maintaining a bounded loss with respect to the original problem. It is therefore surprising that no guarantees are available when combining both techniques, i.e. where MBRL merely observes abstract states. Our theoretical analysis shows that abstraction can introduce a dependence between samples collected online (e.g. in the real world), which means that most results for MBRL do not directly extend to this setting. The new results in this work show that concentration inequalities for martingales can be used to overcome this problem, and allow extending the results of algorithms such as R-MAX to the setting with abstraction. This produces the first performance guarantees for "abstracted RL": model-based reinforcement learning with an abstracted model.
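For context, the classical Azuma–Hoeffding inequality below is the textbook example of a concentration inequality for martingales; the abstract does not specify which inequality the authors use, so this is only illustrative.

```latex
% Azuma--Hoeffding: for a martingale (X_i) with bounded increments
% |X_i - X_{i-1}| <= c_i almost surely, and any t > 0,
\Pr\bigl(X_n - X_0 \ge t\bigr)
  \;\le\; \exp\!\left(\frac{-t^2}{2\sum_{i=1}^{n} c_i^2}\right).
```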
In reinforcement learning (RL), an agent must explore an initially unknown environment in order to learn a desired behaviour. Safety is of primary concern when RL agents are deployed in real-world environments. Constrained Markov decision processes (CMDPs) can provide long-term safety constraints; however, the agent may violate the constraints while exploring its environment. This paper proposes a model-based RL algorithm called Explicit Explore, Exploit, or Escape ($E^{4}$), which extends the Explicit Explore or Exploit ($E^{3}$) algorithm to a robust CMDP setting. $E^4$ explicitly separates exploitation, exploration, and escape CMDPs, allowing targeted policies for policy improvement across known states, for discovery of unknown states, and for safe return to known states. $E^4$ robustly optimises these policies on the worst-case CMDP from a set of CMDP models consistent with the empirical observations of the deployment environment. Theoretical results show that a near-optimal constraint-satisfying policy can be found in polynomial time while safety constraints are satisfied throughout the learning process. We discuss robust constrained offline optimisation algorithms, as well as how to incorporate uncertainty in the transition dynamics of unknown states based on empirical inference and prior knowledge.
In many real-world problems, the learning agent needs to learn a problem's abstractions and solution simultaneously. However, most such abstractions need to be designed and refined by hand for different problems and domains of application. This paper presents a novel top-down approach for constructing state abstractions while carrying out reinforcement learning. Starting with state variables and a simulator, it presents a novel domain-independent approach for dynamically computing an abstraction based on the dispersion of Q-values in abstract states as the agent continues acting and learning. Extensive empirical evaluation on multiple domains and problems shows that this approach automatically learns abstractions that are finely-tuned to the problem, yield powerful sample efficiency, and result in the RL agent significantly outperforming existing approaches.
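As shown below, one way to read the dispersion-based criterion is: split an abstract state whenever the concrete states it groups together disagree too strongly on their Q-values. The sketch is only a guess at the mechanism from the abstract alone, with hypothetical data structures; the paper's actual criterion and splitting rule may differ.

```python
def refine_by_q_dispersion(abstraction, q_values, threshold):
    """Split abstract states with high Q-value dispersion.
    `abstraction` maps an abstract state to its concrete states;
    `q_values[s]` is a dict from actions to Q-estimates for concrete state s."""
    refined = {}
    for abstract_state, concrete_states in abstraction.items():
        best = [max(q_values[s].values()) for s in concrete_states]
        dispersion = max(best) - min(best)
        if dispersion <= threshold or len(concrete_states) < 2:
            refined[abstract_state] = concrete_states
            continue
        # naive split: separate states above/below the median best Q-value
        median = sorted(best)[len(best) // 2]
        low = [s for s, b in zip(concrete_states, best) if b < median]
        high = [s for s, b in zip(concrete_states, best) if b >= median]
        if low and high:
            refined[(abstract_state, "lo")] = low
            refined[(abstract_state, "hi")] = high
        else:
            refined[abstract_state] = concrete_states
    return refined
```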
We study the problem of policy optimisation (PO) with linear temporal logic (LTL) constraints. The language of LTL allows flexible description of tasks that may be unnatural to encode as a scalar cost function. We consider LTL-constrained PO as a systematic framework that decouples task specification from policy selection, and as an alternative to cost shaping. With access to a generative model, we develop a model-based approach that enjoys a sample complexity analysis for guaranteeing both task satisfaction and cost optimality (via a reduction to a reachability problem). Empirically, our algorithm achieves strong performance even in low-sample regimes.
Probabilistic model checking has been developed for verifying systems with stochastic and nondeterministic behaviour. Given a probabilistic system, a probabilistic model checker takes a property and checks whether the property holds in that system. Probabilistic model checking thus provides rigorous guarantees. So far, however, probabilistic model checking has focused on propositional models, where a state is represented by a symbol. Relational abstractions, on the other hand, are commonly needed in planning and reinforcement learning. Various frameworks handle relational domains, for instance STRIPS planning and relational Markov decision processes. Using propositional model checking in relational settings requires grounding the model first, which leads to the well-known state explosion problem and intractability. We propose PCTL-REBEL, a lifted model-checking approach for verifying PCTL properties of relational MDPs. It extends REBEL, a relational model-based reinforcement learning technique, towards relational PCTL model checking. PCTL-REBEL is lifted, which means that, rather than grounding, the model exploits symmetries to reason about a group of objects as a whole at the relational level. Theoretically, we show that PCTL model checking is decidable for relational MDPs with a possibly infinite domain, provided that the states have a bounded size. In practice, we provide algorithms and an implementation of lifted relational model checking, and we show that the lifted approach improves the scalability of model checking.
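For readers unfamiliar with PCTL, a typical property that such a model checker verifies looks as follows (illustrative only, not taken from the paper): "from the current state, the probability of reaching a goal state within 10 steps is at least 0.9".

```latex
\mathrm{P}_{\ge 0.9}\bigl[\, \mathrm{F}^{\le 10}\ \mathit{goal} \,\bigr]
```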
In reinforcement learning for safety-critical settings, it is often desirable for the agent to obey safety constraints at all points in time, including during training. We present a novel neurosymbolic approach called SPICE to solve this safe exploration problem. Compared to existing tools, SPICE uses an online shielding layer based on symbolic weakest preconditions to achieve a more precise safety analysis without unduly impacting the training process. We evaluate the approach on a suite of continuous control benchmarks and show that it can achieve performance comparable to existing safe learning techniques while incurring fewer safety violations. Additionally, we present theoretical results showing that SPICE converges to the optimal safe policy under reasonable assumptions.
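For background, the weakest-precondition notion referred to above goes back to Dijkstra: wp(c, φ) is the weakest condition on the state before command c that guarantees φ holds afterwards. The classical defining equations below are generic, not SPICE-specific.

```latex
\mathrm{wp}(x := e,\ \varphi) \;=\; \varphi[e/x],
\qquad
\mathrm{wp}(c_1; c_2,\ \varphi) \;=\; \mathrm{wp}\bigl(c_1,\ \mathrm{wp}(c_2, \varphi)\bigr).
```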
Training reinforcement learning (RL) agents using scalar reward signals is often infeasible when an environment has sparse and non-Markovian rewards. Moreover, handcrafting these reward functions before training is prone to misspecification, especially when the dynamics of the environment are only partially known. This paper proposes a novel pipeline for learning non-Markovian task specifications as succinct finite-state "task automata" from episodes of agent experience within unknown environments. We leverage two key algorithmic insights. First, we learn the product MDP, a model composed of the specification's automaton and the environment's MDP, by treating it as a partially observable MDP and using off-the-shelf algorithms for hidden Markov models. Second, we propose a novel method for distilling the task automaton (assumed to be a deterministic finite automaton) from the learned product MDP. Our learned task automaton enables the decomposition of a task into its constituent sub-tasks, which improves the rate at which an RL agent can later synthesise an optimal policy. It also provides an interpretable encoding of high-level environment and task features, so a human can readily verify that the agent has learned coherent tasks with no misspecifications. Furthermore, we take steps towards ensuring that the learned automaton is environment-agnostic, making it well suited for use in transfer learning. Finally, we provide experimental results to illustrate the performance of our algorithms in different environments and tasks, and their ability to incorporate prior domain knowledge to facilitate more efficient learning.
We introduce a method for policy improvement that interpolates between the greedy approach of value-based reinforcement learning (RL) and the typical planning approach of model-based RL. The new method builds on the concept of a geometric horizon model (GHM, also known as a gamma-model), which models the discounted state-visitation distribution of a given policy. We show that we can evaluate any non-Markovian policy that switches between a set of base Markovian policies with fixed probability by a careful composition of the base policies' GHMs, without any additional learning. We can then apply generalised policy improvement (GPI) to collections of such non-Markovian policies to obtain a new Markovian policy that generally outperforms its precursors. We provide a thorough theoretical analysis of this approach, develop applications to transfer learning and standard RL, and empirically demonstrate its effectiveness over standard GPI on a challenging suite of deep RL continuous control tasks. We also provide an analysis of GHM training methods, proving a novel convergence result regarding previously proposed methods, and show how to stably train these models in the deep RL setting.
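In symbols, a geometric horizon model represents the discounted state-visitation distribution of a policy, and generalised policy improvement then acts greedily with respect to the best base-policy value. The formulas below are our paraphrase of these two standard notions, not an excerpt from the paper.

```latex
% Discounted state-visitation (geometric horizon) model of a policy \pi,
% and the GPI step over base policies \pi_1, \dots, \pi_n:
\mu^{\pi}_{\gamma}(x' \mid x) \;=\; (1-\gamma) \sum_{t=0}^{\infty} \gamma^{t}\,
  \Pr\bigl(x_{t+1} = x' \mid x_0 = x,\ \pi\bigr),
\qquad
\pi'(x) \;\in\; \arg\max_{a} \max_{i} Q^{\pi_i}(x, a).
```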