Deep Reinforcement Learning (DRL) has the potential to be used for synthesizing feedback controllers (agents) for various complex systems with unknown dynamics. These systems are expected to satisfy diverse safety and liveness properties best captured using temporal logic. In RL, the reward function plays a crucial role in specifying the desired behaviour of these agents. However, the problem of designing the reward function for an RL agent to satisfy complex temporal logic specifications has received limited attention in the literature. To address this, we provide a systematic way of generating rewards in real-time by using the quantitative semantics of Signal Temporal Logic (STL), a widely used temporal logic to specify the behaviour of cyber-physical systems. We propose a new quantitative semantics for STL having several desirable properties, making it suitable for reward generation. We evaluate our STL-based reinforcement learning mechanism on several complex continuous control benchmarks and compare our STL semantics with those available in the literature in terms of their efficacy in synthesizing the controller agent. Experimental results establish our new semantics to be the most suitable for synthesizing feedback controllers for complex continuous dynamical systems through reinforcement learning.
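To make the reward-generation recipe concrete, here is a minimal Python sketch using the classical min/max robustness semantics of STL (not the new semantics proposed above); the specification, bounds, and array layout are hypothetical:

```python
import numpy as np

def rho_always(margins):      # robustness of G(phi): worst-case margin over the trace
    return np.min(margins)

def rho_eventually(margins):  # robustness of F(phi): best-case margin over the trace
    return np.max(margins)

# Hypothetical episode-level reward for the specification
#   G(|x_0| <= 1)  AND  F(||x - goal|| <= 0.1):
def episode_reward(states, goal):
    safe_margins = 1.0 - np.abs(states[:, 0])                    # > 0 while |x_0| <= 1
    goal_margins = 0.1 - np.linalg.norm(states - goal, axis=1)   # > 0 when within 0.1 of goal
    return min(rho_always(safe_margins), rho_eventually(goal_margins))  # conjunction = min
```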
Reinforcement learning (RL) is a promising approach that has seen limited success in real-world applications, because ensuring safe exploration or facilitating adequate exploitation is challenging when controlling robotic systems with unknown models and measurement uncertainty. Such learning problems become even more intractable for complex tasks over continuous state and action spaces. In this paper, we propose a learning-based control framework consisting of several aspects: (1) linear temporal logic (LTL) is leveraged so that complex tasks over an infinite horizon can be translated into a novel automaton structure; (2) we propose an innovative reward scheme for the RL agent with a formal guarantee that the globally optimal policy maximizes the probability of satisfying the LTL specification; (3) based on a reward shaping technique, we develop a modular policy-gradient architecture that exploits the benefits of the automaton structure to decompose the overall task and improve the performance of the learned controller; (4) by incorporating Gaussian processes (GPs) to estimate the uncertain dynamical system, we synthesize a model-based safeguard using exponential control barrier functions (ECBFs) to address problems of high relative degree. In addition, we exploit the properties of the LTL automaton and the ECBF to construct a guiding process that further improves exploration efficiency. Finally, we demonstrate the effectiveness of the framework in several robotic environments. We show that this ECBF-based modular deep RL algorithm achieves near-perfect success rates and safety guarding with high-probability confidence during training.
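For intuition about item (2), a toy automaton-based reward wrapper in the spirit of such schemes; the two-state automaton (for "eventually reach goal, always avoid obstacle") and the reward values are illustrative, and this sketch carries none of the paper's formal optimality guarantees:

```python
class AutomatonReward:
    """Track an automaton state alongside the MDP and reward automaton progress."""
    def __init__(self):
        self.q = "seek"                      # current automaton state

    def step(self, labels):
        """labels: set of atomic propositions true in the current MDP state."""
        if "obstacle" in labels:
            self.q = "trap"                  # safety violation is irrecoverable
            return -1.0
        if self.q == "seek" and "goal" in labels:
            self.q = "accept"                # accepting transition: positive reward
            return 1.0
        return 0.0                           # no automaton progress, no reward
```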
In this paper, we propose a control synthesis method for signal temporal logic (STL) specifications with neural networks (NNs). Most previous works consider training a controller for only a single given STL specification. These approaches, however, require retraining the NN controller whenever a new specification arises and needs to be satisfied, which results in high memory consumption and inefficient training. To tackle this problem, we propose to construct NN controllers by introducing encoder-decoder structured NNs with an attention mechanism. The encoder takes an STL formula as input and encodes it into an appropriate vector, and the decoder outputs control signals that will meet the given specification. As the encoder, we consider three NN structures: sequential, tree-structured, and graph-structured NNs. All the model parameters are trained in an end-to-end manner to maximize the expected robustness, a known quantitative semantics of STL formulae. We compare the control performance attained by the above NN structures through a numerical experiment on a path planning problem, demonstrating the efficacy of the proposed approach.
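A minimal PyTorch sketch of the sequential-encoder variant (attention omitted for brevity); the class name, dimensions, and tokenization are assumptions, not the paper's implementation:

```python
import torch.nn as nn

class STL2Control(nn.Module):
    """GRU encoder over tokenized STL formulae; MLP decoder emitting a control
    sequence. Tree- and graph-structured encoders would slot in where the GRU is."""
    def __init__(self, vocab, horizon, u_dim, hid=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hid)
        self.encoder = nn.GRU(hid, hid, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(hid, hid), nn.ReLU(),
                                     nn.Linear(hid, horizon * u_dim))
        self.horizon, self.u_dim = horizon, u_dim

    def forward(self, formula_tokens):           # (batch, seq_len) of token ids
        _, h = self.encoder(self.embed(formula_tokens))
        return self.decoder(h[-1]).view(-1, self.horizon, self.u_dim)

# Training would maximize the expected STL robustness of the closed-loop signal
# produced by these controls, backpropagating through a differentiable
# robustness computation (cf. the STLCG entry below).
```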
Learned models and policies can generalize effectively when evaluated within the distribution of the training data, but can produce unpredictable and erroneous outputs on out-of-distribution inputs. To avoid distribution shift when deploying learning-based control algorithms, we seek a mechanism to constrain the agent to states and actions that resemble those it was trained on. In control theory, Lyapunov stability and control-invariant sets allow us to make guarantees about controllers that stabilize the system around specific states, while in machine learning, density models allow us to estimate the training data distribution. Can we combine these two concepts to produce learning-based control algorithms that constrain the system to in-distribution states using only in-distribution actions? In this work, we propose to do so by combining concepts from Lyapunov stability and density estimation, introducing Lyapunov density models: a generalization of control Lyapunov functions and density models that provides guarantees on an agent's ability to stay in-distribution over its entire trajectory.
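As a rough one-step stand-in for the idea (a full Lyapunov density model additionally certifies, via a Bellman-style backup, that *future* states remain in-distribution), a hedged sketch of a density-model action filter; `train_state_actions` and `q_value` are assumed given:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Fit a density model on the training (state, action) pairs (assumed available).
kde = KernelDensity(bandwidth=0.2).fit(train_state_actions)

def filtered_action(state, candidate_actions, q_value, log_density_floor=-5.0):
    """Among sampled candidate actions, keep those the density model deems
    in-distribution, then pick the one with the highest Q-value."""
    sa = np.hstack([np.tile(state, (len(candidate_actions), 1)), candidate_actions])
    log_p = kde.score_samples(sa)
    in_dist = log_p >= log_density_floor
    if not in_dist.any():                    # fall back to the most in-distribution action
        return candidate_actions[np.argmax(log_p)]
    idx = np.where(in_dist)[0]
    scores = [q_value(state, candidate_actions[i]) for i in idx]
    return candidate_actions[idx[np.argmax(scores)]]
```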
Reach-avoid optimal control problems, in which the system must reach certain goal conditions while staying clear of unacceptable failure modes, are central to the safety and liveness assurance of autonomous robotic systems, but their exact solutions are intractable for complex dynamics and environments. The recent successes of reinforcement learning methods in approximately solving optimal control problems with performance objectives make their application to certification problems attractive; however, the Lagrange-type objective used in reinforcement learning is not suitable for encoding temporal logic requirements. Recent work has shown promise in extending the reinforcement learning machinery to safety-type problems, whose objective is not a sum, but a minimum (or maximum) over time. In this work, we generalize the reinforcement learning formulation to handle all optimal control problems in the reach-avoid category. We derive a time-discounted reach-avoid Bellman backup with contraction mapping properties and prove that the resulting reach-avoid Q-learning algorithm converges under conditions analogous to the traditional Lagrange-type problem, yielding an arbitrarily tight conservative approximation of the reach-avoid set. We further demonstrate the use of this formulation with deep reinforcement learning methods, retaining zero-violation guarantees by treating the approximate solutions as untrusted oracles in a model-predictive supervisory control framework. We evaluate our proposed framework on a range of nonlinear systems, validating the results against analytic and numerical solutions, and through Monte Carlo simulation on previously intractable problems. Our results open the door to a range of learning-based methods for safe-and-live autonomous behavior, with applications across robotics and automation. See https://github.com/saferoboticslab/safett_rl for code and supplementary material.
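For intuition, a sketch of the undiscounted reach-avoid outcome of a single trajectory, with sign conventions chosen here for illustration (the paper's contribution is the discounted Bellman backup, whose exact form is derived there):

```python
import numpy as np

def reach_avoid_outcome(target_margin, safety_margin):
    """Undiscounted reach-avoid outcome of one trajectory.

    target_margin[t] > 0 iff the state at time t is inside the target set;
    safety_margin[t] > 0 iff the state at time t is outside the failure set.
    The trajectory succeeds (positive outcome) iff it reaches the target while
    having stayed safe up to that time:
        max_t min( target_margin[t], min_{s <= t} safety_margin[s] )
    """
    worst_safety_so_far = np.minimum.accumulate(safety_margin)
    return np.max(np.minimum(target_margin, worst_safety_so_far))
```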
Motivated by the fragility of neural network (NN) controllers in safety-critical applications, we present a data-driven framework for verifying the risk of stochastic dynamical systems with NN controllers. Given a stochastic control system, an NN controller, and a specification equipped with a notion of trace robustness (e.g., constraint functions or signal temporal logic), we collect trajectories from the system that may or may not satisfy the specification. In particular, each of the trajectories produces a robustness value that indicates how well (severely) the specification is satisfied (violated). We then compute risk metrics over these robustness values to estimate the risk that the NN controller will not satisfy the specification. We are further interested in quantifying the difference in risk between two systems, and we show how the risk estimated from a nominal system can provide an upper bound on the risk of a perturbed version of the system. In particular, the tightness of this bound depends on the closeness of the systems in terms of the closeness of their system trajectories. For Lipschitz continuous and incrementally input-to-state stable systems, we show how to exactly quantify system closeness with varying degrees of conservatism, while we estimate system closeness for more general systems from data in our experiments. We demonstrate our risk verification approach on two case studies, an underwater vehicle and an F1/10 autonomous car.
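One common instance of such a risk metric is the conditional value-at-risk of the negated robustness values; the empirical estimator below is the standard one, not necessarily the paper's exact construction:

```python
import numpy as np

def empirical_cvar(losses, alpha=0.9):
    """Empirical conditional value-at-risk: the mean of the worst (1 - alpha)
    fraction of losses. Here a loss is a negated robustness value, so a large
    CVaR indicates frequent or severe specification violations."""
    losses = np.sort(np.asarray(losses))
    tail = losses[int(np.ceil(alpha * len(losses))):]
    return tail.mean() if len(tail) else losses[-1]

# Usage on robustness values collected from closed-loop rollouts:
# risk = empirical_cvar(-np.array(robustness_values), alpha=0.95)
```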
Deep reinforcement learning (RL) is an optimization-driven framework for producing control strategies for general dynamical systems without explicit reliance on process models. Good results have been reported in simulation. Here we demonstrate the challenges of implementing a state-of-the-art deep RL algorithm on a real physical system. Aspects include the interplay between software and existing hardware; experiment design and sample efficiency; training subject to input constraints; and interpretability of the algorithm and the control law. At the core of our approach is the use of a PID controller as the trainable RL policy. Beyond its simplicity, this approach has several appealing features: no additional hardware needs to be added to the control system, since a PID controller can readily be implemented on a standard programmable logic controller; the control law can easily be initialized in a "safe" region of parameter space; and the final product -- a well-tuned PID controller -- has a form that practitioners can reason about and deploy with confidence.
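A minimal discrete-time PID controller whose gain vector (kp, ki, kd) is exactly the parameter vector the RL algorithm would tune under this approach; the implementation details below are illustrative:

```python
class PID:
    """Discrete-time PID controller; (kp, ki, kd) is the trainable policy parameter."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def __call__(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                      # integral term accumulator
        derivative = (error - self.prev_error) / self.dt      # backward-difference derivative
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```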
Reinforcement Learning (RL) can solve complex tasks but does not intrinsically provide any guarantees on system behavior. For real-world systems that fulfill safety-critical tasks, such guarantees on safety specifications are necessary. To bridge this gap, we propose a verifiably safe RL procedure with probabilistic guarantees. First, our approach probabilistically verifies a candidate controller with respect to a temporal logic specification, while randomizing the controller's inputs within a bounded set. Then, we use RL to improve the performance of this probabilistically verified, i.e. safe, controller and explore in the same bounded set around the controller's input as was randomized over in the verification step. Finally, we calculate probabilistic safety guarantees with respect to temporal logic specifications for the learned agent. Our approach is efficient for continuous action and state spaces and separates safety verification and performance improvement into two independent steps. We evaluate our approach on a safe evasion task where a robot has to evade a dynamic obstacle in a specific manner while trying to reach a goal. The results show that our verifiably safe RL approach leads to efficient learning and performance improvements while maintaining safety specifications.
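For intuition about the probabilistic guarantee, a generic Monte Carlo lower bound on the satisfaction probability via a one-sided Hoeffding inequality; the paper's verification step may use a different estimator:

```python
import math

def satisfaction_lower_bound(num_satisfying, num_rollouts, delta=0.01):
    """With probability at least 1 - delta over the sampled rollouts, the true
    probability of satisfying the specification is at least the returned value
    (one-sided Hoeffding bound)."""
    empirical = num_satisfying / num_rollouts
    return empirical - math.sqrt(math.log(1.0 / delta) / (2.0 * num_rollouts))

# e.g. 9800 satisfying rollouts out of 10000 gives a ~0.965 lower bound at delta = 0.01.
```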
This paper presents a technique, named STLCG, to compute the quantitative semantics of signal temporal logic (STL) formulas using computation graphs. STLCG provides a platform that makes it possible to incorporate logical specifications into robotics problems that benefit from gradient-based solutions. Specifically, STL is a powerful and expressive formal language that can specify spatial and temporal properties of signals generated by both continuous and hybrid systems. The quantitative semantics of STL provides a robustness metric, i.e., how much a signal satisfies or violates an STL specification. In this work, we devise a systematic methodology for translating STL robustness formulas into computation graphs. With this representation, and by leveraging off-the-shelf automatic differentiation tools, we are able to backpropagate through STL robustness formulas efficiently, thus enabling a natural and easy-to-use integration of STL specifications with many gradient-based approaches. Through a number of examples spanning various robotics applications, we demonstrate that STLCG is versatile, computationally efficient, and capable of injecting human domain knowledge into the problem formulation.
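This is not the stlcg library API, but a hand-rolled sketch of the core idea: robustness built from min/max tensor operations is differentiable end-to-end, so gradients flow back to the signal itself:

```python
import torch

def always(margins):      # robustness of G(phi): min over time (subgradient flows through the argmin)
    return margins.min()

def eventually(margins):  # robustness of F(phi): max over time
    return margins.max()

# Hypothetical example: a 1-D trajectory evaluated against G(x <= 1) AND F(x >= 0.9).
x = torch.linspace(0.0, 2.0, 50, requires_grad=True)
robustness = torch.minimum(always(1.0 - x), eventually(x - 0.9))  # conjunction = min
robustness.backward()     # gradients of the robustness w.r.t. the trajectory
print(x.grad)
```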
Exploration is a fundamental challenge for model-free navigation control based on deep reinforcement learning (DRL), since typical exploration techniques for goal-driven navigation tasks rely on noisy or greedy policies, which are sensitive to the density of rewards. In practice, robots are often deployed in complex cluttered environments containing dense obstacles and narrow passages, which gives rise to naturally sparse rewards that are hard to explore during training. This problem becomes even more serious when the predefined tasks are complex and rich in expressiveness. In this paper, we focus on these two aspects and propose a deep policy-gradient algorithm for task-guided robots with unknown dynamical systems deployed in complex cluttered environments. Linear temporal logic (LTL) is used to express rich robotic specifications. To overcome the environmental challenge of exploration during training, we propose a novel path-planning-guided reward scheme that is dense over the state space and, crucially, robust to the infeasibility of the computed geometric paths due to the black-box dynamics. To facilitate LTL satisfaction, our approach decomposes the LTL task into subtasks that are solved using distributed DRL, where the subtasks can be trained in parallel with deep policy-gradient algorithms. Our framework is shown to significantly improve performance (effectiveness, efficiency) and the exploration of robots for complex tasks in large-scale complex environments. A video demonstration can be found on YouTube: https://youtu.be/yqrq2-ymtik.
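A hedged sketch of one way such a path-guided dense reward could be computed from a geometric path produced by an off-the-shelf planner; the progress measure below is an assumption, not the paper's exact scheme:

```python
import numpy as np

def path_guided_reward(state, path, prev_progress, deviation_weight=1.0):
    """Dense reward from a geometric waypoint path: reward progress along the
    path (arc length to the nearest waypoint) minus a penalty for straying
    from it, so the signal stays informative even when the path itself is
    dynamically infeasible for the black-box system."""
    dists = np.linalg.norm(path - state, axis=1)
    nearest = int(np.argmin(dists))
    arc = np.sum(np.linalg.norm(np.diff(path[:nearest + 1], axis=0), axis=1))
    progress = arc - deviation_weight * dists[nearest]
    return progress - prev_progress, progress   # (step reward, new progress baseline)
```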
The deployment of robots in uncontrolled environments requires them to operate robustly under previously unseen scenarios, like irregular terrain and wind conditions. Unfortunately, while rigorous safety frameworks from robust optimal control theory scale poorly to high-dimensional nonlinear dynamics, control policies computed by more tractable "deep" methods lack guarantees and tend to exhibit little robustness to uncertain operating conditions. This work introduces a novel approach enabling scalable synthesis of robust safety-preserving controllers for robotic systems with general nonlinear dynamics subject to bounded modeling error by combining game-theoretic safety analysis with adversarial reinforcement learning in simulation. Following a soft actor-critic scheme, a safety-seeking fallback policy is co-trained with an adversarial "disturbance" agent that aims to invoke the worst-case realization of model error and training-to-deployment discrepancy allowed by the designer's uncertainty. While the learned control policy does not intrinsically guarantee safety, it is used to construct a real-time safety filter (or shield) with robust safety guarantees based on forward reachability rollouts. This shield can be used in conjunction with a safety-agnostic control policy, precluding any task-driven actions that could result in loss of safety. We evaluate our learning-based safety approach in a 5D race car simulator, compare the learned safety policy to the numerically obtained optimal solution, and empirically validate the robust safety guarantee of our proposed safety shield against worst-case model discrepancy.
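The shield logic itself is simple to state; below is a hedged sketch with illustrative names, using a forward rollout of the learned fallback policy as the (here un-robustified) reachability check:

```python
def shielded_action(state, task_action, fallback_policy, dynamics, is_safe, horizon=50):
    """Least-restrictive safety filter: permit the task-driven action only if,
    from its predicted successor, the fallback policy keeps the system safe
    over a forward rollout of the (bounded-error) dynamics model."""
    probe = dynamics(state, task_action)      # candidate successor state
    for _ in range(horizon):                  # forward-reachability rollout under the fallback
        if not is_safe(probe):
            return fallback_policy(state)     # override: hand control to the safety policy
        probe = dynamics(probe, fallback_policy(probe))
    return task_action                        # rollout stayed safe: action passes the filter
```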
In reinforcement learning for safety-critical settings, it is often desirable for the agent to obey safety constraints at all points in time, including during training. We present a novel neurosymbolic approach called Spice to solve this safe exploration problem. Compared to existing tools, Spice uses an online shielding layer based on symbolic weakest preconditions to achieve a more precise safety analysis without unduly impacting the training process. We evaluate this approach on a suite of continuous control benchmarks and show that it achieves performance comparable to existing safe learning techniques while incurring fewer safety violations. In addition, we present theoretical results showing that Spice converges to the optimal safe policy under reasonable assumptions.
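For a flavor of weakest-precondition shielding, a one-step toy check for a linear model and a polytopic safe set (Spice's symbolic analysis is substantially more general than this):

```python
import numpy as np

def wp_safe(u, x, A, B, H, d):
    """One-step weakest precondition for dynamics x' = Ax + Bu and safe set
    {x : Hx <= d}: the action u is admissible at state x iff the predicate
    H(Ax + Bu) <= d holds. A shield would reject or project actions that
    falsify this predicate before they reach the plant."""
    return np.all(H @ (A @ x + B @ u) <= d)
```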
In contrast to control-theoretic methods, the lack of a stability guarantee remains a significant problem for model-free reinforcement learning (RL) methods. Jointly learning a policy and a Lyapunov function has recently become a promising approach to endowing the whole system with a stability guarantee. However, the classical Lyapunov constraints introduced in prior work cannot stabilize the system during sampling-based optimization. Therefore, we propose Adaptive Stability Certification (ASC), which makes the system reach stability in the sampling-based setting. Because the ASC condition can guide the search for the optimal policy heuristically, we design the Adaptive Lyapunov-based Actor-Critic (ALAC) algorithm based on the ASC condition. Meanwhile, our algorithm avoids the optimization problem, common in current approaches, in which a variety of constraints are coupled into the objective. When evaluated on ten robotic tasks, our method achieves lower accumulated cost and fewer stability constraint violations than previous approaches.
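The classical sampled decrease condition being adapted here is E[L(s') - L(s)] <= -alpha * L(s); a hedged sketch of it as a differentiable penalty (the ASC condition itself differs and is given in the paper):

```python
import torch

def lyapunov_violation(L, states, next_states, alpha=0.1):
    """Differentiable penalty for the sampled Lyapunov decrease condition
    L(s') - L(s) <= -alpha * L(s): zero when the condition holds on the batch,
    positive otherwise. L is a learned non-negative critic network."""
    decrease = L(next_states) - L(states) + alpha * L(states)
    return torch.relu(decrease).mean()

# Added to the actor loss during training, this pushes the policy toward
# actions under which the learned Lyapunov candidate decreases.
```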
We revisit the domain of off-policy policy optimization in RL from the perspective of coordinate ascent. One commonly-used approach is to leverage the off-policy policy gradient to optimize a surrogate objective -- the expected total discounted return of the target policy under the state distribution of the behavior policy. However, this approach has been shown to suffer from the distribution mismatch issue, and therefore significant effort is needed to correct this mismatch, either via state distribution correction or a counterfactual method. In this paper, we rethink off-policy learning via Coordinate Ascent Policy Optimization (CAPO), an off-policy actor-critic algorithm that decouples policy improvement from the state distribution of the behavior policy without using the policy gradient. This design obviates the need for distribution correction or importance sampling in the policy improvement step of off-policy policy gradient. We establish the global convergence of CAPO with general coordinate selection and then further quantify the convergence rates of several instances of CAPO with popular coordinate selection rules, including the cyclic and the randomized variants of CAPO. We then extend CAPO to neural policies for a more practical implementation. Through experiments, we demonstrate that CAPO provides a competitive approach to RL in practice.
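A toy tabular illustration of the coordinate-ascent idea: update a single (state, action) logit in the direction of its advantage, with no policy gradient and no importance weights; the exact CAPO update rule is given in the paper:

```python
import numpy as np

def coordinate_ascent_update(logits, q_values, state, action, step=1.0):
    """Improve the policy at one (state, action) coordinate: move the action's
    logit according to the sign of its advantage, independently of the
    behavior policy's state distribution."""
    pi = np.exp(logits[state]) / np.exp(logits[state]).sum()   # softmax policy at this state
    advantage = q_values[state, action] - pi @ q_values[state]
    logits[state, action] += step * np.sign(advantage)         # improvement step, no gradient
    return logits
```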
Learning-enabled control systems have demonstrated impressive empirical performance on challenging control problems in robotics, but this performance comes at the cost of reduced transparency and lack of guarantees on the safety or stability of the learned controllers. In recent years, new techniques have emerged to provide these guarantees by learning certificates alongside control policies -- these certificates provide concise, data-driven proofs that guarantee the safety and stability of the learned control system. These methods not only allow the user to verify the safety of a learned controller but also provide supervision during training, allowing safety and stability requirements to influence the training process itself. In this paper, we provide a comprehensive survey of this rapidly developing field of certificate learning. We hope that this paper will serve as an accessible introduction to the theory and practice of certificate learning, both to those who wish to apply these tools to practical robotics problems and to those who wish to dive more deeply into the theory of learning for control.
The number of contributions on safe learning methods for real-world robot deployment from both the control and reinforcement learning communities has risen sharply over the past half-decade. This paper provides a concise but holistic review of the recent advances made in using machine learning to achieve safe decision-making under uncertainty, with a particular focus on unifying the language and frameworks used in control theory and reinforcement learning research. Our review includes: learning-based control approaches that safely improve performance by learning the uncertain dynamics; reinforcement learning approaches that encourage safety or robustness; and methods that can formally certify the safety of a learned control policy. As data- and learning-based robot control methods continue to gain traction, researchers must understand when and how best to leverage them in real-world scenarios where safety is imperative, such as when operating in close proximity to humans. We highlight some of the open challenges that will drive the field of robot learning in the coming years, and emphasize the need for realistic physics-based benchmarks to facilitate fair comparisons between control and reinforcement learning approaches.
We study the problem of learning policies for complex tasks given by logical specifications. Recent approaches automatically generate a reward function from a given specification and use a suitable reinforcement learning algorithm to learn a policy that maximizes the expected reward. These approaches, however, scale poorly to complex tasks that require high-level planning. In this work, we develop a compositional learning approach, called DiRL, that interleaves high-level planning and reinforcement learning. First, DiRL encodes the specification as an abstract graph; intuitively, vertices and edges of the graph correspond to regions of the state space and simpler subtasks, respectively. Our approach then incorporates reinforcement learning to learn neural network policies for each edge (subtask), within a Dijkstra-style planning algorithm that computes a high-level plan in the graph. An evaluation of the proposed approach on a suite of challenging control benchmarks with continuous state and action spaces demonstrates that it outperforms state-of-the-art baselines.
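The planning half of this interleaving is ordinary Dijkstra over the abstract graph, with edge weights derived from the learned subtask policies; a sketch with assumed names (and assuming the target is reachable):

```python
import heapq

def high_level_plan(graph, edge_cost, source, target):
    """Dijkstra over the abstract specification graph. graph maps a vertex to
    its successors; edge_cost(u, v) is derived from the learned subtask policy
    for edge (u, v), e.g. the negative log of its estimated success probability."""
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                                  # stale heap entry
        for v in graph[u]:
            nd = d + edge_cost(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [target]                                   # reconstruct the high-level plan
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1]
```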
We consider the joint design and control of discrete-time stochastic dynamical systems over a finite time horizon. We formulate the problem as a multi-step optimization problem under uncertainty, seeking to identify a system design and a control policy that jointly maximize the expected sum of rewards collected over the time horizon considered. The transition function, the reward function, and the policy are all parameterized and assumed to be differentiable with respect to their parameters. We then introduce a deep reinforcement learning algorithm combining policy-gradient methods with model-based optimization techniques to solve this problem. In essence, our algorithm iteratively estimates the gradient of the expected return via Monte Carlo sampling and automatic differentiation, and takes projected gradient ascent steps in the space of environment and policy parameters. This algorithm is referred to as Direct Environment and Policy Search (DEPS). We assess the performance of our algorithm in three environments concerned with the design and control of a mass-spring-damper system, a small-scale off-grid power system, and a drone, respectively. In addition, our algorithm is benchmarked against a state-of-the-art deep reinforcement learning algorithm used to tackle joint design and control problems. We show that DEPS performs at least as well as or better than this algorithm in all three environments, consistently yielding solutions with higher returns in fewer iterations. Finally, solutions produced by our algorithm are also compared with those produced by an algorithm that does not jointly optimize environment and policy parameters, highlighting the fact that higher returns can be achieved when joint optimization is performed.
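A minimal differentiable-simulation sketch in PyTorch of the core loop: one projected gradient step updates a design parameter and the policy parameters jointly through a Monte Carlo return (the dynamics, dimensions, and bounds are made up):

```python
import torch

theta_env = torch.tensor(1.0, requires_grad=True)         # design parameter (e.g. spring stiffness)
policy = torch.nn.Linear(1, 1)                             # trivial state-feedback policy
opt = torch.optim.Adam([theta_env, *policy.parameters()], lr=1e-2)

def rollout_return(horizon=50):
    x = torch.randn(1)                                     # Monte Carlo initial state
    total = 0.0
    for _ in range(horizon):
        u = policy(x)
        x = x + 0.1 * (-theta_env * x + u)                 # differentiable (made-up) dynamics
        total = total - (x ** 2 + 0.1 * u ** 2).sum()      # reward: negative quadratic cost
    return total

opt.zero_grad()
(-rollout_return()).backward()                             # ascend the return in both parameter spaces
opt.step()
theta_env.data.clamp_(0.1, 10.0)                           # projection onto the admissible design set
```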
Recent advances in the reinforcement learning (RL) literature have enabled roboticists to automatically train complex policies in simulated environments. However, due to the poor sample complexity of these methods, solving RL problems using real-world data remains a challenging problem. This paper introduces a novel cost-shaping method which aims to reduce the number of samples needed to learn a stabilizing controller. The method adds a term involving a control Lyapunov function (CLF) -- an "energy-like" function from the model-based control literature -- to typical cost formulations. Theoretical results demonstrate that the new costs lead to stabilizing controllers when smaller discount factors are used, which is well known to reduce sample complexity. Moreover, the addition of the CLF term "robustifies" the search for a stabilizing controller by ensuring that even highly sub-optimal policies will stabilize the system. We demonstrate our approach with two hardware examples, learning stabilizing controllers for a cartpole and an A1 quadruped with only seconds and a few minutes of fine-tuning data, respectively.
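Schematically, the shaping adds a CLF term to the per-step cost; the difference form below is one natural choice, and the exact term used in the paper may differ:

```python
def shaped_cost(state, action, next_state, base_cost, clf, weight=1.0):
    """Augment the usual step cost with a control-Lyapunov-function term so
    that decreasing the 'energy' V is rewarded even under highly sub-optimal
    policies. clf(x) is a CLF from the model-based control literature."""
    return base_cost(state, action) + weight * (clf(next_state) - clf(state))
```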
Runtime enforcement refers to the theories, techniques, and tools for enforcing correct behavior with respect to a formal specification at runtime. In this paper, we are interested in techniques for constructing runtime enforcers for the concrete application domain of enforcing safety in AI. We discuss how safety is traditionally handled in the field of AI and how the safety of self-learning agents can be provided by integrating a runtime enforcer. We survey a selection of work on such enforcers, distinguishing between approaches for discrete and continuous action spaces. The purpose of this paper is to foster a better understanding of the advantages and limitations of different enforcement techniques, focusing on the particular challenges that arise from their application in AI. Finally, we present some open challenges and avenues for future work.