The increasing use of reinforcement learning in safety-critical domains such as autonomous vehicles, healthcare, and aviation raises the need to ensure its safety. Existing safety mechanisms such as adversarial training, adversarial detection, and robust learning are not always adapted to all the disturbances in which the agent is deployed. Those disturbances include moving adversaries whose behavior can be unpredictable by the agent and which are, in fact, harmful to its learning. Ensuring the safety of critical systems also requires providing formal guarantees on the behavior of the agent in a disturbed environment. It is therefore necessary to propose new solutions adapted to the learning challenges faced by the agent. In this paper, first, we generate adversarial agents that exhibit flaws in the agent's policy by presenting moving adversaries. Second, we use reward shaping and a modified Q-learning algorithm as defense mechanisms to improve the agent's policy when facing adversarial perturbations. Finally, probabilistic model checking is employed to evaluate the effectiveness of both mechanisms. We have conducted experiments on a discrete grid world with a single agent facing non-learning and learning adversaries. Our results show a decrease in the number of collisions between the agent and the adversaries. Probabilistic model checking provides lower and upper probability bounds on the safety of the agent in the adversarial environment.
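One common way to combine reward shaping with a (tabular) Q-learning update is the potential-based form below; it is given only as a generic illustration, since the paper's modified algorithm is not specified in the abstract:

$$Q(s,a) \leftarrow Q(s,a) + \alpha\Big[\,r + \underbrace{\gamma\,\Phi(s') - \Phi(s)}_{\text{shaping term } F(s,s')} + \gamma \max_{a'} Q(s',a') - Q(s,a)\Big],$$

where the potential $\Phi$ could, for example, penalize proximity to a moving adversary so that the shaped reward steers the policy away from likely collisions.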
Besides the recent impressive results on reinforcement learning (RL), safety is still one of the major research challenges in RL. RL is a machine-learning approach to determine near-optimal policies in Markov decision processes (MDPs). In this paper, we consider the setting where the safety-relevant fragment of the MDP together with a temporal logic safety specification is given and many safety violations can be avoided by planning ahead a short time into the future. We propose an approach for online safety shielding of RL agents. During runtime, the shield analyses the safety of each available action. For any action, the shield computes the maximal probability to not violate the safety specification within the next $k$ steps when executing this action. Based on this probability and a given threshold, the shield decides whether to block an action from the agent. Existing offline shielding approaches compute exhaustively the safety of all state-action combinations ahead of time, resulting in huge computation times and large memory consumption. The intuition behind online shielding is to compute at runtime the set of all states that could be reached in the near future. For each of these states, the safety of all available actions is analysed and used for shielding as soon as one of the considered states is reached. Our approach is well suited for high-level planning problems where the time between decisions can be used for safety computations and it is sustainable for the agent to wait until these computations are finished. For our evaluation, we selected a 2-player version of the classical computer game SNAKE. The game represents a high-level planning problem that requires fast decisions and the multiplayer setting induces a large state space, which is computationally expensive to analyse exhaustively.
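To make the shield's runtime decision concrete, here is a minimal Python sketch (my own illustration, not the authors' implementation): given a small MDP model of the safety-relevant dynamics, it computes for each available action the maximal probability of avoiding unsafe states within the next $k$ steps and blocks actions whose value falls below a threshold. The state names, the `mdp` structure, and the threshold are assumptions made for the example.

```python
from functools import lru_cache

# Toy safety-relevant MDP: mdp[state][action] = list of (next_state, probability) pairs.
mdp = {
    "s0": {"left": [("s1", 1.0)], "right": [("unsafe", 0.3), ("s2", 0.7)]},
    "s1": {"stay": [("s1", 1.0)]},
    "s2": {"stay": [("s2", 0.9), ("unsafe", 0.1)]},
    "unsafe": {},
}

@lru_cache(maxsize=None)
def max_safety_prob(state, k):
    """Maximal probability of avoiding 'unsafe' within the next k steps from `state`."""
    if state == "unsafe":
        return 0.0
    if k == 0 or not mdp[state]:
        return 1.0
    return max(
        sum(p * max_safety_prob(succ, k - 1) for succ, p in outcomes)
        for outcomes in mdp[state].values()
    )

def shield(state, k=3, threshold=0.8):
    """Allow only actions whose k-step safety probability meets the threshold."""
    safety = {
        action: sum(p * max_safety_prob(succ, k - 1) for succ, p in outcomes)
        for action, outcomes in mdp[state].items()
    }
    return {action: prob for action, prob in safety.items() if prob >= threshold}

print(shield("s0"))   # with this toy model, only 'left' survives the 0.8 threshold
```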
Deep Reinforcement Learning (RL) agents are susceptible to adversarial noise in their observations that can mislead their policies and decrease their performance. However, an adversary may be interested not only in decreasing the reward, but also in modifying specific temporal logic properties of the policy. This paper presents a metric that measures the exact impact of adversarial attacks against such properties. We use this metric to craft optimal adversarial attacks. Furthermore, we introduce a model checking method that allows us to verify the robustness of RL policies against adversarial attacks. Our empirical analysis confirms (1) the quality of our metric to craft adversarial attacks against temporal logic properties, and (2) that we are able to concisely assess a system's robustness against attacks.
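One natural way to formalize such an impact metric (a sketch of the general idea, not necessarily the exact definition used in the paper) is as the change in the probability that the induced Markov chain satisfies the temporal logic property $\varphi$ when the policy $\pi$ acts on observations perturbed by an attack $\delta$:

$$\mathrm{Impact}_{\varphi}(\pi,\delta) \;=\; \Pr\nolimits_{\mathcal{M}^{\pi}}\big[\varphi\big] \;-\; \Pr\nolimits_{\mathcal{M}^{\pi\circ\delta}}\big[\varphi\big],$$

where $\mathcal{M}^{\pi}$ is the Markov chain induced by $\pi$ on the environment and $\mathcal{M}^{\pi\circ\delta}$ the chain induced when $\pi$ observes attacked states; an optimal attack would then maximize this quantity subject to a perturbation budget.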
Safety is still one of the major research challenges in reinforcement learning (RL). In this paper, we address the problem of how to avoid safety violations of RL agents during exploration in probabilistic and partially unknown environments. Our approach combines automata learning for Markov Decision Processes (MDPs) and shield synthesis in an iterative approach. Initially, the MDP representing the environment is unknown. The agent starts exploring the environment and collects traces. From the collected traces, we passively learn MDPs that abstractly represent the safety-relevant aspects of the environment. Given a learned MDP and a safety specification, we construct a shield. For each state-action pair within a learned MDP, the shield computes exact probabilities on how likely it is that executing the action results in violating the specification from the current state within the next $k$ steps. After the shield is constructed, the shield is used during runtime and blocks any actions that induce a too large risk from the agent. The shielded agent continues to explore the environment and collects new data on the environment. Iteratively, we use the collected data to learn new MDPs with higher accuracy, resulting in turn in shields able to prevent more safety violations. We implemented our approach and present a detailed case study of a Q-learning agent exploring slippery Gridworlds. In our experiments, we show that as the agent explores more and more of the environment during training, the improved learned models lead to shields that are able to prevent many safety violations.
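The interplay between trace collection, model learning, and shield construction can be pictured with the following simplified Python sketch (my own illustration under stated assumptions: passive automata learning is replaced by frequency-count estimation, and the continuation after the first action is assumed to pick the least risky known action).

```python
from collections import defaultdict
from functools import lru_cache

def learn_mdp(traces):
    """Estimate transition probabilities from (state, action, next_state) triples by frequency
    counts -- a simple stand-in for the passive automata learning used in the paper."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for s, a, t in trace:
            counts[(s, a)][t] += 1
    return {sa: {t: n / sum(succ.values()) for t, n in succ.items()}
            for sa, succ in counts.items()}

def build_shield(mdp, unsafe, k, threshold):
    """Mark each observed (state, action) pair as allowed if the estimated probability of
    reaching an unsafe state within k steps stays below the threshold."""
    actions = defaultdict(set)
    for s, a in mdp:
        actions[s].add(a)

    @lru_cache(maxsize=None)
    def best_risk(state, steps):
        if state in unsafe:
            return 1.0
        if steps == 0 or not actions[state]:
            return 0.0
        return min(risk(state, a, steps) for a in actions[state])

    def risk(state, action, steps):
        return sum(p * best_risk(t, steps - 1) for t, p in mdp[(state, action)].items())

    return {(s, a): risk(s, a, k) < threshold for (s, a) in mdp}
```

In the iterative scheme described above, the agent alternates between shielded exploration, re-learning the MDP from all traces collected so far, and rebuilding the shield, so that later shields prevent more violations.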
Unmanned autonomous vehicles (UAVs) have made significant contributions to reconnaissance and surveillance missions in past US military campaigns. As the prevalence of UAVs increases, there have also been improvements in counter-UAV technology that make it difficult for them to successfully obtain valuable intelligence within an area of interest. Hence, it has become important for modern UAVs to accomplish their missions while maximizing their chances of survival. In this work, we specifically study the problem of identifying a short path from a designated start to a goal while collecting all rewards and avoiding adversaries that move randomly on the grid. We also present a possible application of the framework in a military setting, namely autonomous casualty evacuation. We present a comparison of three methods to solve this problem: we implement a Deep Q-Learning model, an $\varepsilon$-greedy tabular Q-Learning model, and an online optimization framework. Our computational experiments, designed using a simple grid-world environment with random adversaries, showcase how these approaches work and compare them in terms of performance, accuracy, and computational time.
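As an illustration of the tabular baseline, the sketch below trains an $\varepsilon$-greedy Q-learning agent on a small grid with a randomly moving adversary. The grid size, rewards, and hyperparameters are illustrative assumptions, not the paper's settings.

```python
import random

N = 5                                        # grid size (illustrative)
START, GOAL = (0, 0), (N - 1, N - 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def clamp(x):
    return min(max(x, 0), N - 1)

def env_step(agent, adversary, action):
    """The agent moves deterministically; the adversary takes a random step."""
    agent = (clamp(agent[0] + action[0]), clamp(agent[1] + action[1]))
    dx, dy = random.choice(ACTIONS)
    adversary = (clamp(adversary[0] + dx), clamp(adversary[1] + dy))
    if agent == adversary:
        return (agent, adversary), -10.0, True       # intercepted by the adversary
    if agent == GOAL:
        return (agent, adversary), 10.0, True        # mission accomplished
    return (agent, adversary), -0.1, False           # small cost per step

Q = {}
alpha, gamma, eps = 0.1, 0.95, 0.1
q = lambda s, a: Q.get((s, a), 0.0)

for episode in range(5000):
    state, done = (START, (N - 1, 0)), False         # state = (agent position, adversary position)
    for _ in range(200):                             # step cap per episode
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: q(state, act))
        next_state, r, done = env_step(state[0], state[1], a)
        target = r if done else r + gamma * max(q(next_state, b) for b in ACTIONS)
        Q[(state, a)] = q(state, a) + alpha * (target - q(state, a))
        state = next_state
        if done:
            break
```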
Markov decision processes are typically used for sequential decision making under uncertainty. For many aspects, however, ranging from constrained or safety specifications to various kinds of temporal (non-Markovian) dependencies in task and reward structures, extensions are needed. To that end, interest has grown in recent years in combinations of reinforcement learning and temporal logic, that is, combinations of flexible behavior-learning methods with robust verification and guarantees. In this paper we describe an experimental investigation of the recently introduced regular decision processes, which support non-Markovian reward functions as well as transition functions. In particular, we provide algorithmic extensions of regular decision processes relating to online, incremental learning, an empirical evaluation of model-free and model-based solution algorithms, and applications in regular, but non-Markovian, grid worlds.
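To make the non-Markovian reward aspect concrete, the following sketch (my own illustration, not from the paper) encodes a temporally extended task, "visit A and then B", as a small finite automaton and restores the Markov property by pairing the environment state with the automaton state. The `env` and `label_fn` arguments, and the gym-style `step()` signature, are assumptions for the example.

```python
# Transition table of a small reward automaton for the task "visit A, then B".
# Keys: (automaton state, observed label); values: next automaton state.
AUTOMATON = {
    ("q0", "A"): "q1",
    ("q1", "B"): "q_acc",
}
ACCEPTING = {"q_acc"}

def automaton_step(q, label):
    return AUTOMATON.get((q, label), q)        # stay in place on irrelevant labels

class ProductEnv:
    """Wraps a gym-style environment so that the reward, which is non-Markovian over raw
    states, becomes Markovian over (state, automaton state) pairs."""

    def __init__(self, env, label_fn):
        self.env, self.label_fn = env, label_fn
        self.q = "q0"

    def reset(self):
        self.q = "q0"
        return (self.env.reset(), self.q)

    def step(self, action):
        s, _, done, info = self.env.step(action)          # the wrapper defines its own reward
        q_prev, self.q = self.q, automaton_step(self.q, self.label_fn(s))
        reward = 1.0 if self.q in ACCEPTING and q_prev not in ACCEPTING else 0.0
        return (s, self.q), reward, done, info
```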
Safe exploration is a common problem in reinforcement learning (RL) that aims to prevent agents from making disastrous decisions while exploring their environment. A family of approaches to this problem assumes domain knowledge in the form of a (partial) model of this environment to decide upon the safety of an action. A so-called shield forces the RL agent to select only safe actions. However, for adoption in various applications, one must look beyond enforcing safety and also ensure the applicability of RL with good performance. We extend the applicability of shields via tight integration with state-of-the-art deep RL, and provide an extensive, empirical study in challenging, sparse-reward environments under partial observability. We show that a carefully integrated shield ensures safety and can improve the convergence rate and final performance of RL agents. We furthermore show that a shield can be used to bootstrap state-of-the-art RL agents: they remain safe after initial learning in a shielded setting, allowing us to eventually disable a potentially too conservative shield.
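A tightly integrated shield of this kind is often realized as an action mask applied on top of the learned value function. The sketch below (my illustration, not the authors' code) assumes the shield exposes the set of currently safe action indices and the agent outputs a vector of Q-values:

```python
import numpy as np

def shielded_action(q_values, safe_actions, epsilon=0.05, rng=np.random):
    """Pick an epsilon-greedy action among the actions the shield currently allows.

    q_values: 1-D array of Q-values from the deep RL agent.
    safe_actions: indices of actions the shield classifies as safe in the current state.
    """
    if len(safe_actions) == 0:
        raise RuntimeError("Shield blocked every action; a fallback policy is needed here.")
    if rng.random() < epsilon:
        return int(rng.choice(safe_actions))              # explore, but only among safe actions
    masked = np.full_like(q_values, -np.inf, dtype=float)
    masked[safe_actions] = q_values[safe_actions]         # hide unsafe actions from the argmax
    return int(np.argmax(masked))
```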
Although Deep Reinforcement Learning (DRL) provides transformational capabilities for the control of Robotics and Autonomous Systems (RAS), the black-box nature of DRL and the uncertain deployment environments of RAS pose new challenges to its dependability. Although existing works impose constraints on the DRL policy to ensure successful completion of the mission, assessing DRL-driven RAS in a holistic way, taking all dependability properties into account, is far from adequate. In this paper, we formally define a set of dependability properties in temporal logic and construct a Discrete-Time Markov Chain (DTMC) to model the risk/failure dynamics of a DRL-driven RAS interacting with a stochastic environment. We then perform Probabilistic Model Checking (PMC) on the designed DTMC to verify those properties. Our experimental results show that the proposed method is effective as a holistic assessment framework, while uncovering conflicts between the properties that may require trade-offs during training. Moreover, we find that standard DRL training cannot improve the dependability properties, thus requiring tailored optimization objectives. Finally, our method offers a sensitivity analysis of dependability with respect to the environment's level of perturbation, providing insights for the assurance of real-world RAS.
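As a point of reference for the PMC step, the sketch below queries a bounded failure-reachability probability on a DTMC using the stormpy bindings of the Storm model checker; the model file, the "failure" label, and the step bound are hypothetical, and the paper's own toolchain may differ.

```python
import stormpy

# Hypothetical PRISM-format DTMC modelling the risk/failure dynamics of the system.
program = stormpy.parse_prism_program("ras_dtmc.pm")

# Dependability property: probability of reaching a failure state within 100 steps.
properties = stormpy.parse_properties('P=? [ F<=100 "failure" ]', program)

model = stormpy.build_model(program, properties)
result = stormpy.model_checking(model, properties[0])
print("P(fail within 100 steps) =", result.at(model.initial_states[0]))
```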
A trustworthy reinforcement learning algorithm should be competent at solving challenging real-world problems, which includes robustly handling uncertainties, satisfying safety constraints to avoid catastrophic failures, and generalizing to unseen scenarios during deployment. This study aims to give an overview of these main perspectives of trustworthy reinforcement learning, namely its intrinsic vulnerabilities regarding robustness, safety, and generalizability. In particular, we give rigorous formulations, categorize the corresponding methodologies, and discuss benchmarks for each perspective. Moreover, we provide an outlook section to spur promising future directions and briefly discuss extrinsic vulnerabilities that involve human feedback. We hope this survey can bring together separate threads of research within a unified framework and promote the trustworthiness of reinforcement learning.
This paper presents COOL-MC, a tool that integrates state-of-the-art reinforcement learning (RL) and model checking. Specifically, the tool builds upon the OpenAI Gym and the probabilistic model checker Storm. COOL-MC provides the following features: (1) a simulator to train RL policies in OpenAI Gym for Markov decision processes (MDPs) that are defined as input for Storm, (2) a new model builder for Storm, which uses callback functions to verify (neural network) RL policies, (3) formal abstractions that relate models and policies specified in either OpenAI Gym or Storm, and (4) algorithms to obtain bounds on the performance of so-called permissive policies. We describe the components and architecture of COOL-MC and demonstrate its features on multiple benchmark environments.
Reinforcement learning (RL) is one of the most important branches of AI. Due to its capacity for self-adaption and decision-making in dynamic environments, reinforcement learning has been widely applied in multiple areas, such as healthcare, data markets, autonomous driving, and robotics. However, some of these applications and systems have been shown to be vulnerable to security or privacy attacks, resulting in unreliable or unstable services. A large number of studies have focused on these security and privacy problems in reinforcement learning. However, few surveys have provided a systematic review and comparison of existing problems and state-of-the-art solutions to keep up with the pace of emerging threats. Accordingly, we herein present such a comprehensive review to explain and summarize the challenges associated with security and privacy in reinforcement learning from a new perspective, namely that of the Markov Decision Process (MDP). In this survey, we first introduce the key concepts related to this area. Next, we cover the security and privacy issues linked to the state, action, environment, and reward function of the MDP process, respectively. We further highlight the special characteristics of security and privacy methodologies related to reinforcement learning. Finally, we discuss the possible future research directions within this area.
Reinforcement learning (RL) relies heavily on exploration to learn from its environment and maximize the observed reward. Therefore, it is essential to design a reward function that guarantees optimal learning from the received experience. Previous work has combined automata and logic-based reward shaping with environment assumptions to provide an automatic mechanism to synthesize the reward function based on the task. However, there is limited work on how to expand logic-based reward shaping to multi-agent reinforcement learning (MARL). If the task requires cooperation, the environment will need to consider the joint state in order to track the other agents, and thus suffers from the curse of dimensionality with respect to the number of agents. This project explores how logic-based reward shaping can be designed for different scenarios and tasks. We present a novel approach to semi-centralized, logic-based MARL reward shaping that is scalable in the number of agents, and we evaluate it in multiple scenarios.
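To illustrate why a per-agent treatment scales, the sketch below (my own illustration; the paper's semi-centralized construction differs in its details) gives each agent its own copy of the task automaton and shapes its reward by the progress of that local automaton, so no agent needs to track the joint state.

```python
def make_shaped_reward(automaton_step, progress_rank):
    """Return a per-agent shaping function: bonus whenever the agent's local automaton advances.

    automaton_step(q, label) -> next automaton state
    progress_rank(q)         -> integer measuring how far q is along the task automaton
    """
    def shaped(q, label, env_reward):
        q_next = automaton_step(q, label)
        bonus = progress_rank(q_next) - progress_rank(q)   # positive only on real task progress
        return env_reward + bonus, q_next
    return shaped
```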
In recent years, unmanned aerial vehicle (UAV) related technology has expanded knowledge in the area, bringing to light new problems and challenges that require solutions. Furthermore, because the technology allows processes usually carried out by people to be automated, it is in great demand in industrial sectors. The automation of these vehicles has been addressed in the literature, applying different machine learning strategies. Reinforcement learning (RL) is an automation framework that is frequently used to train autonomous agents. RL is a machine learning paradigm wherein an agent interacts with an environment to solve a given task. However, learning autonomously can be time consuming, computationally expensive, and may not be practical in highly-complex scenarios. Interactive reinforcement learning allows an external trainer to provide advice to an agent while it is learning a task. In this study, we set out to teach an RL agent to control a drone using reward-shaping and policy-shaping techniques simultaneously. Two simulated scenarios were proposed for the training; one without obstacles and one with obstacles. We also studied the influence of each technique. The results show that an agent trained simultaneously with both techniques obtains a lower reward than an agent trained using only a policy-based approach. Nevertheless, the agent achieves lower execution times and less dispersion during training.
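The simultaneous use of reward shaping and policy shaping can be illustrated with the following sketch (my own simplification, not the study's implementation): the trainer's feedback both adds a bonus to the environment reward and overrides the agent's action choice with some probability. The `trainer_advice` function, the bonus scale, the override probability, and the 3-tuple `env.step` interface are assumptions.

```python
import random

def shaped_step(env, agent, state, trainer_advice, bonus=1.0, override_prob=0.3):
    """One interaction step combining policy shaping (action override) and reward shaping (bonus)."""
    advised = trainer_advice(state)                    # may be None if the trainer stays silent
    action = agent.select_action(state)
    if advised is not None and random.random() < override_prob:
        action = advised                               # policy shaping: follow the trainer's action
    next_state, reward, done = env.step(action)
    if advised is not None and action == advised:
        reward += bonus                                # reward shaping: extra reward for advised actions
    agent.update(state, action, reward, next_state, done)
    return next_state, done
```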
Safe Reinforcement Learning can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes. We categorize and analyze two approaches of Safe Reinforcement Learning. The first is based on the modification of the optimality criterion, the classic discounted finite/infinite horizon, with a safety factor. The second is based on the modification of the exploration process through the incorporation of external knowledge or the guidance of a risk metric. We use the proposed classification to survey the existing literature, as well as suggesting future directions for Safe Reinforcement Learning.
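The first family (modifying the optimality criterion) is often instantiated as a constrained or risk-sensitive objective; one common form, given here only as an illustration of the idea, is the constrained MDP formulation

$$\max_{\pi}\ \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty}\gamma^{t} r_t\Big]\quad \text{s.t.}\quad \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty}\gamma^{t} c_t\Big]\le d,$$

where $c_t$ is a safety-related cost and $d$ a tolerated budget; the second family instead leaves the objective unchanged and constrains which actions the exploration process may try, for instance via external knowledge or a risk metric.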
Reinforcement learning allows machines to learn from their own experience. Nowadays, it is used in safety-critical applications, such as autonomous driving, despite being vulnerable to attacks carefully crafted to either prevent that the reinforcement learning algorithm learns an effective and reliable policy, or to induce the trained agent to make a wrong decision. The literature about the security of reinforcement learning is rapidly growing, and some surveys have been proposed to shed light on this field. However, their categorizations are insufficient for choosing an appropriate defense given the kind of system at hand. In our survey, we do not only overcome this limitation by considering a different perspective, but we also discuss the applicability of state-of-the-art attacks and defenses when reinforcement learning algorithms are used in the context of autonomous driving.
Digitization and remote connectivity have enlarged the attack surface and made cyber systems more vulnerable. As attackers become increasingly sophisticated and resourceful, mere reliance on traditional cyber protection, such as intrusion detection, firewalls, and encryption, is insufficient to secure cyber systems. Cyber resilience provides a new security paradigm that complements inadequate protection with resilience mechanisms. A cyber-resilient mechanism (CRM) adapts to known and zero-day threats and uncertainties in real time and responds to them strategically in order to maintain the critical functions of the cyber system when a successful attack takes place. Feedback architectures play a pivotal role in enabling the online sensing, reasoning, and actuation processes of the CRM. Reinforcement learning (RL) is an important tool that epitomizes the feedback architectures for cyber resilience. It allows the CRM to provide sequential responses to attacks with limited or no prior knowledge of the attacker. In this work, we review the literature on RL for cyber resilience and discuss cyber resilience against three major types of vulnerabilities, i.e., posture-related, information-related, and human-related vulnerabilities. We introduce three application domains of CRMs: moving target defense, defensive cyber deception, and assistive human security technologies. RL algorithms also have vulnerabilities themselves. We explain three vulnerabilities of RL and present attack models in which the attacker targets the information exchanged between the environment and the agent: the rewards, the state observations, and the action commands. We show that the attacker can trick the RL agent into learning a nefarious policy with minimal attack effort. Finally, we discuss the future challenges of RL for cyber security and resilience, as well as emerging applications of RL-based CRMs.
In the past decade, Deep Reinforcement Learning (DRL) algorithms have been increasingly used to solve various decision-making problems, such as autonomous driving and robotics. However, these algorithms face great challenges when deployed in safety-critical environments, since they often exhibit erroneous behaviors that can lead to potentially critical errors. One way to assess the safety of DRL agents is to test them in order to detect faults that could lead to critical failures during execution. This raises the question of how we can efficiently test DRL policies to ensure their correctness and adherence to safety requirements. Most existing works on testing DRL agents use adversarial attacks that perturb the agent. However, such attacks often lead to unrealistic states of the environment, and their main goal is to test the robustness of DRL agents rather than the compliance of the agent's policy with requirements. Due to the huge state space of DRL environments, the high cost of test execution, and the black-box nature of DRL algorithms, exhaustive testing of DRL agents is impossible. In this paper, we propose a Search-based Testing Approach of Reinforcement Learning Agents (STARLA) to test the policy of a DRL agent by effectively searching for failing executions of the agent within a limited testing budget. We use machine learning models and a dedicated genetic algorithm to narrow the search towards faulty episodes. We apply STARLA to a Deep-Q-Learning agent that is widely used as a benchmark and show that it significantly outperforms random testing by detecting more faults related to the agent's policy. We also investigate how to extract rules that characterize faulty episodes of the DRL agent from our search results. Such rules can be used to understand the conditions under which the agent fails and thus assess its deployment risks.
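Search-based testing of this kind can be pictured with the following very simplified genetic-algorithm loop (my own illustration; the episode encoding, fitness function, and operators in STARLA differ and are more elaborate): candidate episodes are scored by a learned model that predicts how likely they are to end in a failure, and the highest-scoring candidates are mutated and recombined.

```python
import random

def search_for_failures(random_episode, mutate, crossover, predict_failure_prob, execute,
                        population_size=50, generations=20):
    """Genetic search for agent executions that are likely to violate the requirements.

    random_episode       -> generates a candidate episode (e.g. an initial state plus action overrides)
    predict_failure_prob -> learned surrogate model scoring how likely a candidate is to fail
    execute              -> runs the candidate against the real agent/environment, returns True on failure
    """
    population = [random_episode() for _ in range(population_size)]
    failures = []
    for _ in range(generations):
        scored = sorted(population, key=predict_failure_prob, reverse=True)
        parents = scored[: population_size // 2]                 # keep the most suspicious candidates
        for candidate in parents:
            if execute(candidate):                               # confirmed failure on the real system
                failures.append(candidate)
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(population_size - len(parents))]
        population = parents + children
    return failures
```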
Probabilistic model checking is a useful technique for specifying and verifying properties of stochastic systems, including randomized protocols and reinforcement learning models. Existing methods rely on the assumed structure and probabilities of certain system transitions. These assumptions may be incorrect, or may even be violated by an adversary who gains control of some system components. In this paper, we develop a formal framework for adversarial robustness in systems modeled as discrete-time Markov chains (DTMCs). We base our framework on existing methods for verifying probabilistic temporal logic properties and extend it to include deterministic, memoryless policies acting in Markov decision processes (MDPs). Our framework includes a flexible approach for specifying structure-preserving and non-structure-preserving adversarial models. We outline a class of threat models under which adversaries can perturb system transitions, constrained by an $\varepsilon$ ball around the original transition probabilities. We define three main DTMC adversarial robustness problems: adversarial robustness verification, maximal $\delta$ synthesis, and worst-case attack synthesis. We present two optimization-based solutions to these three problems, leveraging traditional and parametric probabilistic model checking techniques. We then evaluate our solutions on two stochastic protocols and a collection of grid-world case studies, which model an agent acting in an environment described as an MDP. We find that the parametric solution leads to fast computation for small parameter spaces, whereas for less restrictive (stronger) adversaries the number of parameters increases and directly computing property satisfaction probabilities is more scalable. We demonstrate the usefulness of our definitions and solutions by comparing system outcomes over various properties, threat models, and case studies.
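The worst-case attack synthesis problem under the $\varepsilon$-ball threat model can be written schematically as the optimization (a sketch consistent with the description above, not the paper's exact formalization)

$$\max_{P'}\ \Pr\nolimits_{P'}\big[\lnot\varphi\big] \quad \text{s.t.}\quad \big\lVert P'(s,\cdot) - P(s,\cdot)\big\rVert \le \varepsilon,\quad \sum_{s'} P'(s,s') = 1,\quad P'(s,s')\ge 0 \ \ \forall s,s',$$

where $P$ is the nominal DTMC transition matrix, $P'$ ranges over admissible perturbed chains (optionally restricted to preserve the graph structure), and $\varphi$ is the probabilistic temporal logic property of interest.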
Multi-agent reinforcement learning (MARL) has seen significant progress over the past decade, but many challenges, such as high sample complexity and slow convergence to stable policies, still need to be overcome before wide deployment is possible. However, many real-world environments already, in practice, deploy sub-optimal or heuristic approaches for generating policies. An interesting question is how best to use such approaches as advisors to help improve reinforcement learning in multi-agent domains. In this paper, we provide a principled framework for incorporating action recommendations from online sub-optimal advisors in multi-agent settings. We describe the problem of ADvising Multiple Intelligent Reinforcement AgentL (ADMIRAL) in nonrestrictive general-sum stochastic game environments and present two novel Q-learning-based algorithms: ADMIRAL - Decision Making (ADMIRAL-DM) and ADMIRAL - Advisor Evaluation (ADMIRAL-AE), which enable us to improve learning by appropriately incorporating advice from an advisor (ADMIRAL-DM) and to evaluate the effectiveness of an advisor (ADMIRAL-AE). We analyze the algorithms theoretically and provide fixed-point guarantees regarding their learning in general-sum stochastic games. Furthermore, extensive experiments illustrate that these algorithms can be used in a variety of environments, achieve performance that compares favorably to other related baselines, scale to large state-action spaces, and are robust to poor advice from advisors.
Deep Reinforcement Learning (DeepRL) methods have been widely used in robotics to learn about the environment and acquire behaviors autonomously. Deep Interactive Reinforcement Learning (DeepIRL) includes interactive feedback from an external trainer or expert giving advice that helps the learner choose actions, in order to speed up the learning process. However, current research has been limited to interactions that offer actionable advice only for the agent's current state. Moreover, the agent discards this information after a single use, which causes a duplicated process when the same state is revisited. In this paper, we present Broad-Persistent Advising (BPA), a broad-persistent advising approach that retains and reuses the processed information. It not only helps trainers give more general advice relevant to similar states rather than only the current state, but also allows the agent to speed up the learning process. We tested the proposed approach in two continuous robotic scenarios, namely a cart-pole balancing task and a simulated robot navigation task. The obtained results show that the performance of the agent using BPA improves while keeping the number of interactions required of the trainer, in comparison with the DeepIRL approach.
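The retain-and-reuse idea behind BPA can be sketched as a small advice memory that generalizes a trainer's hint to similar states (an illustration under my own assumptions about state discretization; the actual BPA method builds and queries this model differently):

```python
class AdviceMemory:
    """Stores trainer advice keyed by a coarse state signature and reuses it for similar states."""

    def __init__(self, discretize):
        self.discretize = discretize          # maps a raw (continuous) state to a coarse signature
        self.memory = {}

    def store(self, state, advised_action):
        self.memory[self.discretize(state)] = advised_action

    def retrieve(self, state):
        """Return persisted advice for this (or a similar) state, or None if none is known."""
        return self.memory.get(self.discretize(state))

# Usage sketch inside the interaction loop:
#   advice = memory.retrieve(state) or ask_trainer(state)
#   if advice is not None:
#       memory.store(state, advice)
#       action = advice
#   else:
#       action = agent.select_action(state)
```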