Probabilistic model checking has been developed for verifying systems with stochastic and nondeterministic behaviour. Given a probabilistic system, a probabilistic model checker takes a property and checks whether or not the property holds in that system. For this reason, probabilistic model checking provides rigorous guarantees. So far, however, probabilistic model checking has focused on propositional models, where a state is represented by a symbol. In contrast, relational abstractions are commonly required in planning and reinforcement learning. Various frameworks handle relational domains, for instance STRIPS planning and relational Markov decision processes. Using propositional model checking in relational settings requires one to ground the model, which leads to the well-known state explosion problem and intractability. We present pCTL-REBEL, a lifted model checking approach for verifying pCTL properties of relational MDPs. It extends REBEL, a relational model-based reinforcement learning technique, towards relational pCTL model checking. pCTL-REBEL is lifted, meaning that instead of grounding, the model exploits symmetries to reason about a group of objects as a whole at the relational level. Theoretically, we show that pCTL model checking is decidable for relational MDPs with a possibly infinite domain, provided that the states have a bounded size. Practically, we provide algorithms and an implementation of lifted relational model checking, and we show that the lifted approach improves the scalability of model checking.
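To give a flavour of what such a pCTL check computes at the ground (propositional) level, the sketch below evaluates a bounded-reachability property of the form $P_{\geq\alpha}[F^{\leq k}\,\mathit{goal}]$ on a hypothetical explicit-state MDP. The lifted relational algorithm of pCTL-REBEL instead reasons over abstract relational states; this sketch makes no attempt to reproduce that.

```python
# A minimal, propositional illustration of bounded pCTL reachability: the
# maximal probability of reaching a goal state within k steps, computed by
# k rounds of Bellman backups. The toy MDP below is hypothetical.

def max_reach_prob(transitions, goal, k):
    """transitions[s][a] is a list of (successor, probability) pairs.
    Returns, for each state, the maximal probability of reaching `goal`
    within k steps under the best resolution of nondeterminism."""
    v = {s: 1.0 if s in goal else 0.0 for s in transitions}
    for _ in range(k):
        v = {s: 1.0 if s in goal else
                max(sum(p * v[t] for t, p in succ)
                    for succ in transitions[s].values())
             for s in transitions}
    return v

# Toy MDP: from s0, action 'a' reaches the goal g with probability 0.8.
mdp = {"s0": {"a": [("g", 0.8), ("s0", 0.2)], "b": [("s0", 1.0)]},
       "g":  {"stay": [("g", 1.0)]}}
print(max_reach_prob(mdp, {"g"}, k=3)["s0"] >= 0.9)  # P>=0.9 [F<=3 g]? True
```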
We propose a framework for learning a fragment of probabilistic computation tree logic (pCTL) formulae from a set of states that are labeled as safe or unsafe. We work in a relational setting and combine ideas from relational Markov Decision Processes with pCTL model-checking. More specifically, we assume that there is an unknown relational pCTL target formula that is satisfied by only safe states, and has a horizon of maximum $k$ steps and a threshold probability $\alpha$. The task then consists of learning this unknown formula from states that are labeled as safe or unsafe by a domain expert. We apply principles of relational learning to induce a pCTL formula that is satisfied by all safe states and none of the unsafe ones. This formula can then be used as a safety specification for this domain, so that the system can avoid getting into dangerous situations in the future. Following relational learning principles, we introduce a candidate formula generation process, as well as a method for deciding which candidate formula is a satisfactory specification for the given labeled states. We treat both the case where the expert knows the system policy and the case where they do not; much of the learning process, however, is the same for both. We evaluate our approach on a synthetic relational domain.
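To make the acceptance test concrete, here is a minimal sketch of the decision step described above, under the assumption that candidate formulas are exposed as boolean checks on states (for example, by wrapping an external pCTL model checker; all names are hypothetical):

```python
# Generate-and-test core: keep the first candidate formula that is satisfied
# by every safe state and by none of the unsafe ones.

def find_specification(candidates, safe_states, unsafe_states):
    for formula in candidates:
        if (all(formula(s) for s in safe_states)
                and not any(formula(s) for s in unsafe_states)):
            return formula
    return None  # no candidate separates the labeled states
```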
Markov decision processes are typically used for sequential decision making under uncertainty. For many aspects, however, ranging from constrained or safe specifications to various kinds of temporal (non-Markovian) dependencies in task and reward structures, extensions are needed. To that end, in recent years interest has grown in combinations of reinforcement learning and temporal logic, that is, combinations of flexible behaviour learning methods with robust verification and guarantees. In this paper we describe an experimental investigation of the recently introduced regular decision processes, which support non-Markovian reward functions as well as transition functions. In particular, we provide a tool chain for regular decision processes, algorithmic extensions relating to online, incremental learning, an empirical evaluation of model-free and model-based solution algorithms, and applications in regular, but non-Markovian, grid worlds.
We present an overview of temporal logic programming under the perspective of its application for knowledge representation and declarative problem solving. Such programs are the result of combining usual rules with temporal modal operators, as in Linear-time Temporal Logic (LTL). We focus on recent results for the non-monotonic formalism called Temporal Equilibrium Logic (TEL), which is defined for the full syntax of LTL but performs a model selection criterion based on Equilibrium Logic, a well-known logical characterization of Answer Set Programming (ASP). As a result, we obtain a proper extension of the stable models semantics for the general case of arbitrary temporal formulas. We recall the basic definitions for TEL and its monotonic basis, the temporal logic of Here-and-There (THT), and study the differences between infinite and finite traces. We also provide further useful results, such as translations into other formalisms like Quantified Equilibrium Logic or Second-order LTL, and some techniques for computing temporal stable models based on automata. In a second part, we focus on practical aspects, defining a syntactic fragment of temporal logic programs closer to ASP, and explain how this has been exploited in the construction of the solver telingo.
Safety is still one of the major research challenges in reinforcement learning (RL). In this paper, we address the problem of how to avoid safety violations of RL agents during exploration in probabilistic and partially unknown environments. Our approach combines automata learning for Markov Decision Processes (MDPs) and shield synthesis in an iterative approach. Initially, the MDP representing the environment is unknown. The agent starts exploring the environment and collects traces. From the collected traces, we passively learn MDPs that abstractly represent the safety-relevant aspects of the environment. Given a learned MDP and a safety specification, we construct a shield. For each state-action pair within a learned MDP, the shield computes exact probabilities on how likely it is that executing the action results in violating the specification from the current state within the next $k$ steps. After the shield is constructed, the shield is used during runtime and blocks any action from the agent that induces too large a risk. The shielded agent continues to explore the environment and collects new data on the environment. Iteratively, we use the collected data to learn new MDPs with higher accuracy, resulting in turn in shields able to prevent more safety violations. We implemented our approach and present a detailed case study of a Q-learning agent exploring slippery Gridworlds. In our experiments, we show that as the agent explores more and more of the environment during training, the improved learned models lead to shields that are able to prevent many safety violations.
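As a deliberately simplified illustration of the passive model-learning step, the sketch below estimates transition probabilities from collected traces by maximum-likelihood frequency counts; the automata-learning algorithms actually used for this task also infer the state space, which is assumed directly observable here:

```python
# Estimate an MDP's transition function from traces of (state, action,
# successor) triples via frequency counts.

from collections import Counter, defaultdict

def learn_mdp(traces):
    """Returns est[s][a] = {successor: estimated probability}."""
    counts = defaultdict(Counter)
    for trace in traces:
        for s, a, s2 in trace:
            counts[(s, a)][s2] += 1
    est = defaultdict(dict)
    for (s, a), succ in counts.items():
        total = sum(succ.values())
        est[s][a] = {s2: n / total for s2, n in succ.items()}
    return est
```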
Besides the recent impressive results on reinforcement learning (RL), safety is still one of the major research challenges in RL. RL is a machine-learning approach to determine near-optimal policies in Markov decision processes (MDPs). In this paper, we consider the setting where the safety-relevant fragment of the MDP together with a temporal logic safety specification is given and many safety violations can be avoided by planning ahead a short time into the future. We propose an approach for online safety shielding of RL agents. During runtime, the shield analyses the safety of each available action. For any action, the shield computes the maximal probability to not violate the safety specification within the next $k$ steps when executing this action. Based on this probability and a given threshold, the shield decides whether to block an action from the agent. Existing offline shielding approaches compute exhaustively the safety of all state-action combinations ahead of time, resulting in huge computation times and large memory consumption. The intuition behind online shielding is to compute at runtime the set of all states that could be reached in the near future. For each of these states, the safety of all available actions is analysed and used for shielding as soon as one of the considered states is reached. Our approach is well suited for high-level planning problems where the time between decisions can be used for safety computations and it is sustainable for the agent to wait until these computations are finished. For our evaluation, we selected a 2-player version of the classical computer game SNAKE. The game represents a high-level planning problem that requires fast decisions and the multiplayer setting induces a large state space, which is computationally expensive to analyse exhaustively.
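The per-action quantity at the heart of shielding can be sketched as a backward induction over the safety-relevant MDP (same dictionary format as the model-learning sketch above; the plain recursion is exponential in $k$ and serves purely as an illustration):

```python
# Maximal probability of not violating the safety specification within the
# next k steps after executing action a in state s; `unsafe` is the set of
# violating states. Terminal states (absent from the model) count as safe.

def max_safe_prob(mdp, unsafe, s, a, k):
    def safe_value(state, steps):
        if state in unsafe:
            return 0.0
        if steps == 0 or state not in mdp:
            return 1.0
        return max(
            sum(p * safe_value(t, steps - 1) for t, p in dist.items())
            for dist in mdp[state].values()
        )
    if s in unsafe:
        return 0.0
    return sum(p * safe_value(t, k - 1) for t, p in mdp[s][a].items())

def shielded_actions(mdp, unsafe, s, k, threshold):
    """Actions the shield allows in state s: those whose k-step safety
    probability is at least the given threshold."""
    return [a for a in mdp[s] if max_safe_prob(mdp, unsafe, s, a, k) >= threshold]
```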
Inductive logic programming (ILP) is a form of machine learning. The goal of ILP is to induce a hypothesis (a set of logical rules) that generalises the training examples. As ILP turns 30, we provide a new introduction to the field. We introduce the necessary logical notation and the main learning settings; describe the building blocks of an ILP system; compare several systems along several dimensions; describe four systems (Aleph, TILDE, ASPAL, and Metagol); highlight key application areas; and, finally, summarise current limitations and directions for future research.
Automated synthesis of provably correct controllers for cyber-physical systems is crucial for deploying these systems in safety-critical scenarios. However, their hybrid features and stochastic or unknown behaviours make this synthesis problem challenging. In this paper, we propose a method for synthesizing controllers for Markov jump linear systems (MJLSs), a particular class of cyber-physical systems, that certifiably satisfy a requirement expressed as a specification in probabilistic computation tree logic (PCTL). An MJLS consists of a finite set of linear dynamics with unknown additive disturbances, where jumps between these modes are governed by a Markov decision process (MDP). We consider both the case where the transition function of this MDP is given by probability intervals and the case where it is completely unknown. Our approach is based on generating a finite-state abstraction which captures both the discrete and the continuous behaviour of the original system. We formalise such abstraction as an interval Markov decision process (iMDP): intervals of transition probabilities are computed using sampling techniques from the so-called "scenario approach", resulting in a probabilistically sound approximation of the MJLS. This iMDP abstracts both the jump dynamics between modes, as well as the continuous dynamics within the modes. To demonstrate the efficacy of our technique, we apply our method to multiple realistic benchmark problems, in particular temperature control and aerial vehicle delivery problems.
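For intuition on where the intervals come from, the following simplified sketch turns sampled successor states into a probability interval. Note that the paper derives its intervals via the scenario approach; the stand-in below uses a Hoeffding bound instead, so the constants differ from the actual method.

```python
# Interval for an unknown transition probability, given that `hits` of `n`
# sampled noisy successors landed inside the target abstract region. The
# Hoeffding radius shrinks as O(1/sqrt(n)).

import math

def probability_interval(hits, n, confidence=0.99):
    p_hat = hits / n
    eps = math.sqrt(math.log(2 / (1 - confidence)) / (2 * n))
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

print(probability_interval(hits=85, n=100))  # e.g. roughly (0.69, 1.0)
```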
In recent years, researchers have made significant progress in devising reinforcement learning algorithms for optimizing linear temporal logic (LTL) objectives and LTL-like objectives. Despite these advancements, there are fundamental limitations on how well this problem can be solved, which previous studies have alluded to but, to our knowledge, have not examined in depth. In this paper, we address this question by studying the hardness of learning with general LTL objectives. We formalize the problem under the probably approximately correct learning in Markov decision processes (PAC-MDP) framework, a standard framework for measuring sample complexity in reinforcement learning. In this formalization, we prove that the optimal policy for an LTL objective is PAC-MDP-learnable if and only if the formula is in the most limited class of the LTL hierarchy, consisting of formulas that are decidable within a finite horizon. Practically, our result implies that a reinforcement learning algorithm cannot obtain a PAC-MDP guarantee on the performance of its learned policy after finitely many interactions with an unconstrained environment for LTL objectives that are not decidable within a finite horizon.
Training reinforcement learning (RL) agents using scalar reward signals is often infeasible when an environment has sparse and non-Markovian rewards. Moreover, handcrafting these reward functions before training is prone to misspecification, especially when the environment's dynamics are only partially known. This paper proposes a novel pipeline for learning non-Markovian task specifications as succinct finite-state "task automata" from episodes of agent experience within unknown environments. We leverage two key algorithmic insights. First, we learn a product MDP, a model composed of the specification's automaton and the environment's MDP, by treating it as a partially observable MDP and using off-the-shelf algorithms for hidden Markov models. Second, we propose a novel method for distilling the task automaton (assumed to be a deterministic finite automaton) from the learned product MDP. Our learned task automaton enables the decomposition of a task into its constituent sub-tasks, which improves the rate at which an RL agent can later synthesise an optimal policy. It also provides an interpretable encoding of high-level environmental and task features, so a human can readily verify that the agent has learned coherent tasks with no misspecifications. In addition, we take steps towards ensuring that the learned automaton is environment-agnostic, making it well-suited for use in transfer learning. Finally, we provide experimental results illustrating our algorithm's performance in different environments and tasks, as well as its ability to incorporate prior domain knowledge to facilitate more efficient learning.
A Behavior Tree (BT) is a way to structure the switching between different tasks in an autonomous agent, such as a robot or a virtual entity in a computer game. BTs are a very efficient way of creating complex systems that are both modular and reactive. These properties are crucial in many applications, which has led to the spread of BTs from computer game programming to many branches of AI and robotics. In this book, we will first give an introduction to BTs, then describe how BTs relate to, and in many cases generalize, earlier switching structures. These ideas are then used as a foundation for a set of efficient and easy-to-use design principles. Properties such as safety, robustness, and efficiency are important for an autonomous system, and we describe a set of tools for formally analyzing these properties using a state-space description of BTs. With the new analysis tools, we can formalize how BTs generalize earlier approaches. We also show the use of BTs in automated planning and machine learning. Finally, we describe an extended set of tools to capture the behavior of stochastic BTs, where the outcomes of actions are described by probabilities. These tools enable the computation of both success probabilities and times to completion.
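To ground the terminology, here is a minimal sketch of the two classical composite nodes: a Sequence succeeds only if all of its children succeed, while a Fallback (also called Selector) succeeds as soon as one child does. The battery example is hypothetical.

```python
# Leaves and composites are plain callables returning one of the three
# standard behavior-tree statuses.

SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

def sequence(*children):
    def tick():
        for child in children:
            status = child()
            if status != SUCCESS:   # first failing/running child decides
                return status
        return SUCCESS
    return tick

def fallback(*children):
    def tick():
        for child in children:
            status = child()
            if status != FAILURE:   # first succeeding/running child decides
                return status
        return FAILURE
    return tick

# Hypothetical robot: recharge when the battery is low, otherwise work.
battery = 0.15
battery_low = lambda: SUCCESS if battery < 0.2 else FAILURE
go_charge = lambda: RUNNING         # stub actions for the example
do_task = lambda: SUCCESS
tree = fallback(sequence(battery_low, go_charge), do_task)
print(tree())  # -> "running": the agent is heading to the charger
```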
Leveraging autonomous systems in safety-critical scenarios requires verifying their behaviors in the presence of uncertainties and black-box components that influence the system dynamics. In this paper, we develop a framework for verifying partially observable, discrete-time dynamical systems with unmodelled dynamics against temporal logic specifications, from a given input-output dataset. The verification framework employs Gaussian process (GP) regression to learn the unknown dynamics from the dataset and abstracts the continuous-space system as a finite-state, uncertain Markov decision process (MDP). This abstraction relies on transition probability intervals that capture the uncertainty due to the error in GP regression, by using reproducing kernel Hilbert space analysis, as well as the uncertainty induced by discretization. The framework utilizes existing model checking tools for verification of the uncertain MDP abstraction against a given temporal logic specification. We establish the correctness of extending the verification results on the abstraction to the underlying partially observable system. We show that the computational complexity of the framework is polynomial in the size of the dataset and the discrete abstraction. The complexity analysis illustrates a trade-off between the quality of the verification results and the computational burden of handling larger datasets and finer abstractions. Finally, we demonstrate the efficacy of our learning and verification framework on several case studies with linear, nonlinear, and switched dynamical systems.
We present a formalization of finite Markov decision processes in the Isabelle theorem prover. We focus on the foundations required for dynamic programming and the use of reinforcement learning agents over such processes. In particular, we derive the Bellman equation from first principles (in both scalar and vector form), derive a vector calculation that yields the expected value of any policy, and go on to prove the existence of a universally optimal policy whenever the discounting factor is less than one. Lastly, we prove that the value iteration and policy iteration algorithms work in finite time, producing an $\epsilon$-optimal and a fully optimal policy respectively.
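For reference, the value-iteration result being formalized can be sketched in executable form. The stopping rule below is the standard bound ensuring that the greedy policy extracted from the final iterate is $\epsilon$-optimal; this is a sketch under the assumption $0 < \gamma < 1$, not the Isabelle development itself.

```python
# Value iteration for a finite MDP given as P[s][a] = {successor: prob} and
# rewards R[s][a]. Stopping once successive iterates differ by less than
# eps*(1-gamma)/(2*gamma) in sup-norm guarantees the greedy policy is
# eps-optimal (assumes 0 < gamma < 1).

def value_iteration(P, R, gamma, eps):
    v = {s: 0.0 for s in P}
    while True:
        v_new = {s: max(R[s][a] + gamma * sum(p * v[t] for t, p in P[s][a].items())
                        for a in P[s])
                 for s in P}
        if max(abs(v_new[s] - v[s]) for s in P) < eps * (1 - gamma) / (2 * gamma):
            break
        v = v_new
    # Extract the greedy (eps-optimal) policy from the final value estimate.
    policy = {s: max(P[s], key=lambda a: R[s][a] +
                     gamma * sum(p * v_new[t] for t, p in P[s][a].items()))
              for s in P}
    return v_new, policy
```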
LCRL is a software tool that implements model-free reinforcement learning (RL) algorithms over unknown Markov Decision Processes (MDPs), synthesizing policies that satisfy a given linear temporal specification with maximal probability. LCRL leverages partially deterministic finite-state machines known as Limit Deterministic Büchi Automata (LDBA) to express a given linear temporal specification. The reward function of the RL algorithm is shaped on-the-fly, based on the structure of the LDBA. Theoretical guarantees under proper assumptions ensure the convergence of the RL algorithm to an optimal policy that maximizes the satisfaction probability. We present case studies to demonstrate the applicability, ease of use, scalability, and performance of LCRL. Owing to the LDBA-guided exploration and LCRL's model-free architecture, we observe robust performance that also scales well when compared to standard RL approaches (whenever these are applicable to LTL specifications). Full instructions on how to execute all the case studies in this paper are provided on the GitHub page that accompanies the LCRL distribution: www.github.com/grockious/lcrl.
The Shapes Constraint Language (SHACL) is the recent W3C recommendation language for validating RDF data, by verifying certain shapes on graphs. Previous work has largely focused on the validation problem, and the standard decision problems of satisfiability and containment, which are crucial for design and optimisation purposes, have only been investigated for simplified versions of SHACL. Moreover, the SHACL specification does not define the semantics of recursively-defined constraints, which has led to several alternative recursive semantics being proposed in the literature. The interaction between these different semantics and the important decision problems has not yet been studied. In this article we provide a comprehensive study of the different features of SHACL by providing a translation into a new first-order language, called SCL, that precisely captures the semantics of SHACL. We also present MSCL, a second-order extension of SCL, which allows us to define, in a single formal logic framework, the main recursive semantics of SHACL. In this language we also provide an effective treatment of filter constraints, which are often neglected in the related literature. Using this logic we provide a detailed map of (un)decidability and complexity results for the satisfiability and containment decision problems for different SHACL fragments. Notably, we prove that both problems are undecidable for the full language, but we present decidable combinations of interesting features, even in the face of recursion.
Modeling and verification of dynamic systems operating over a relational representation of states are increasingly investigated problems in AI, business process management, and database theory. To make such systems amenable to verification, the amount of information stored in each relational state needs to be bounded, or restrictions have to be imposed on the preconditions and effects of actions. We introduce the general framework of relational action bases (RABs), which generalizes existing models by lifting both these restrictions: unboundedly many relational states can be evolved through actions that can quantify both existentially and universally over the data, and that can exploit numerical datatypes with arithmetic predicates. We then study the parameterized safety of RABs via (approximated) SMT-based backward search, singling out essential meta-properties of the resulting procedure, and showing how it can be realized by an off-the-shelf combination of existing verification modules of the state-of-the-art MCMT model checker. We demonstrate the effectiveness of this approach on a benchmark of data-aware business processes. Finally, we show how universal invariants can be exploited to make this procedure fully correct.
Although deep reinforcement learning has become a promising machine learning approach for sequential decision-making problems, it is still not mature enough for high-stakes domains such as autonomous driving or medical applications. In such contexts, a learned policy needs, for instance, to be interpretable, so that it can be inspected before any deployment (e.g., for safety and verifiability reasons). This survey provides an overview of various approaches to achieve higher interpretability in reinforcement learning (RL). To that aim, we distinguish interpretability (as a property of a model) from explainability (as a post-hoc operation, with the intervention of a proxy) and discuss them in the context of RL, with an emphasis on the former notion. In particular, we argue that interpretable RL may embrace different facets: interpretable inputs, interpretable (transition/reward) models, and interpretable decision-making. Based on this scheme, we summarize and analyze recent work related to interpretable RL, with an emphasis on papers published in the past 10 years. We also briefly discuss some related research areas and point to some potentially promising research directions.
Combinatorial optimization is a well-established area in operations research and computer science. Until recently, its methods have focused on solving problem instances in isolation, ignoring that they often stem from related data distributions in practice. However, recent years have seen a surge of interest in using machine learning, especially graph neural networks (GNNs), as a key building block for combinatorial tasks, either directly as solvers or by enhancing exact solvers. The inductive bias of GNNs effectively encodes combinatorial and relational input due to their invariance to permutations and their awareness of input sparsity. This paper presents a conceptual review of recent key advancements in this emerging field, aimed at optimization and machine learning researchers.
Two approaches to AI, neural networks and symbolic systems, have been proven very successful for an array of AI problems. However, neither has been able to achieve the general reasoning ability required for human-like intelligence. It has been argued that this is due to inherent weaknesses in each approach. Luckily, these weaknesses appear to be complementary, with symbolic systems being adept at the kinds of things neural networks have trouble with and vice-versa. The field of neural-symbolic AI attempts to exploit this asymmetry by combining neural networks and symbolic AI into integrated systems. Often this has been done by encoding symbolic knowledge into neural networks. Unfortunately, although many different methods for this have been proposed, there is no common definition of an encoding to compare them. We seek to rectify this problem by introducing a semantic framework for neural-symbolic AI, which is then shown to be general enough to account for a large family of neural-symbolic systems. We provide a number of examples and proofs of the application of the framework to the neural encoding of various forms of knowledge representation and neural network. These approaches, disparate at first sight, are all shown to fall within the framework's formal definition of what we call semantic encoding for neural-symbolic AI.
We study the problem of policy optimization (PO) with linear temporal logic (LTL) constraints. The language of LTL allows flexible description of tasks that may be unnatural to encode as a scalar cost function. We consider LTL-constrained PO as a systematic framework, decoupling task specification from policy selection, and as an alternative to the standard of cost shaping. With access to a generative model, we develop a model-based approach that enjoys a sample complexity analysis for guaranteeing both task satisfaction and cost optimality (through a reduction to a reachability problem). Empirically, our algorithm achieves strong performance even in low-sample regimes.
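Stated schematically (with notation assumed here rather than taken from the paper), LTL-constrained PO seeks, for an MDP $M$ with cost function $c$, discount $\gamma$, LTL formula $\varphi$, and tolerance $\delta$:
$$\min_{\pi} \; \mathbb{E}^{\pi}_{M}\!\left[\sum_{t \geq 0} \gamma^{t}\, c(s_t, a_t)\right] \quad \text{subject to} \quad \Pr\nolimits^{\pi}_{M}[\varphi] \;\geq\; 1 - \delta,$$
which makes explicit how the task ($\varphi$) is decoupled from the cost-based ranking of policies ($c$).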