The use of deep neural networks as function approximators has led to striking progress in reinforcement learning algorithms and applications. Yet our knowledge of the decision boundary geometry and the loss landscape of neural policies is still quite limited. In this paper, we propose a framework to investigate the similarities of decision boundaries and loss landscapes across states and across MDPs. We conduct experiments in various games from the Arcade Learning Environment and discover that the high-sensitivity directions of neural policies are correlated across MDPs. We argue that these high-sensitivity directions support the hypothesis that non-robust features are shared across the training environments of reinforcement learning agents. We believe our results reveal fundamental properties of the environments used in deep reinforcement learning training, and represent a tangible step towards building robust and reliable deep reinforcement learning agents.
translated by 谷歌翻译
Recent work has shown that deep reinforcement learning (DRL) policies are vulnerable to adversarial perturbations. Adversaries can mislead the policy of a DRL agent by perturbing the state of the environment observed by the agent. Existing attacks are feasible in principle, but face challenges in practice, for example by being too slow to fool DRL policies in real time. We show that using the Universal Adversarial Perturbation (UAP) method to compute perturbations, independent of the individual inputs to which they are applied, can fool DRL policies effectively and in real time. We describe three such attack variants. Via an extensive evaluation using three Atari 2600 games, we show that our attacks are effective, as they fully degrade the performance of three different DRL agents (up to 100%, even when the $l_\infty$ bound on the perturbation is as small as 0.01). They are faster than the average response time of the DRL policies (0.6 ms), and considerably faster than prior attacks using adversarial perturbations (1.8 ms on average). We also show that our attack technique is efficient, incurring an online computational cost of 0.027 ms on average. Using two further tasks involving robotic movement, we confirm that our results generalize to more complex DRL tasks. Furthermore, we demonstrate that the effectiveness of known defenses diminishes against universal perturbations. We propose an effective technique that detects all known adversarial perturbations against DRL policies, including all the universal perturbations presented in this paper.
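The key property of the attack above is that one perturbation is computed once and reused on every input. Here is a minimal sketch of that idea against a toy linear "policy" (the setup, weights, and gradient computation are all hypothetical stand-ins, not the paper's DRL agents or attack):

```python
# Toy UAP-style sketch (hypothetical linear policy, not the paper's DRL
# agents): compute one fixed l_inf-bounded perturbation offline, then apply
# it unchanged to every observation at essentially zero online cost.
import random

random.seed(0)
DIM, ACTIONS, EPS = 8, 3, 0.01

# Hypothetical linear policy: action scores are rows of W dotted with x.
W = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(ACTIONS)]

def act(x):
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    return max(range(ACTIONS), key=lambda a: scores[a])

# Offline phase: average the gradient of the chosen action's margin over a
# batch of states; the universal perturbation is -EPS * sign(mean gradient).
states = [[random.uniform(0, 1) for _ in range(DIM)] for _ in range(200)]
grad = [0.0] * DIM
for x in states:
    a = act(x)
    b = max((c for c in range(ACTIONS) if c != a),
            key=lambda c: sum(w * xi for w, xi in zip(W[c], x)))
    for i in range(DIM):            # d(margin)/dx_i for a linear policy
        grad[i] += W[a][i] - W[b][i]
uap = [-EPS if g > 0 else EPS for g in grad]   # one vector for all inputs

# Online phase: a single vector addition per observation.
flipped = sum(act([xi + di for xi, di in zip(x, uap)]) != act(x)
              for x in states)
print("actions flipped:", flipped, "of", len(states))
```

With a real DRL policy the offline gradient would come from backpropagation over many sampled observations; the online step stays a single addition, which is what makes this class of attack fast enough for real time.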
Evaluating the worst-case performance of a reinforcement learning (RL) agent under the strongest/optimal adversarial perturbations on state observations (within some constraints) is crucial for understanding the robustness of RL agents. However, finding the optimal adversary is challenging, both in terms of whether we can find the optimal attack and how efficiently we can find it. Existing works on adversarial RL either use heuristics-based methods that may not find the strongest adversary, or directly train an RL-based adversary by treating the agent as a part of the environment, which can find the optimal adversary but may become intractable in a large state space. This paper introduces a novel attack method that finds the optimal attacks through collaboration between a designed function named "actor" and an RL-based learner named "director". The actor crafts state perturbations for a given policy perturbation direction, and the director learns to propose the best policy perturbation directions. Our proposed algorithm, PA-AD, is theoretically optimal and significantly more efficient than prior RL-based works in environments with large state spaces. Empirical results show that our proposed PA-AD universally outperforms state-of-the-art attack methods in various Atari and MuJoCo environments. By applying PA-AD to adversarial training, we achieve state-of-the-art empirical robustness in multiple tasks under strong adversaries.
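The actor-director split described above can be sketched in a few lines (a toy re-creation under strong assumptions: a linear victim, enumerable actions, and hypothetical action values; this is not the PA-AD implementation):

```python
# Toy actor-director sketch: the director proposes a policy-perturbation
# direction (here simply: which action to push the victim toward), and the
# actor searches the l_inf eps-ball around the observation to realize it.
import random

random.seed(1)
DIM, ACTIONS, EPS = 4, 3, 0.5

W = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(ACTIONS)]

def scores(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def act(x):
    return max(range(ACTIONS), key=lambda a: scores(x)[a])

def director(action_values):
    """Propose the direction: push the victim toward its worst action."""
    return min(range(ACTIONS), key=lambda a: action_values[a])

def actor(x, target):
    """Craft a state perturbation for the given direction: for a linear
    victim, moving each coordinate by +-EPS with the margin gradient is
    the optimal step inside the eps-ball."""
    cur = act(x)
    return [xi + (EPS if W[target][i] - W[cur][i] > 0 else -EPS)
            for i, xi in enumerate(x)]

x = [0.2, -0.1, 0.4, 0.3]
values = [1.0, -2.0, 0.5]      # hypothetical long-term value of each action
target = director(values)
x_adv = actor(x, target)
print("clean action:", act(x), "-> attacked action:", act(x_adv))
```

The point of the split is that the director works in the (small) policy-perturbation space while only the actor touches the (large) state space, which is where the claimed efficiency gain comes from.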
Various types of Multi-Agent Reinforcement Learning (MARL) methods have been developed, assuming that agents' policies are based on true states. Recent works have improved the robustness of MARL under uncertainties from the reward, transition probability, or other partners' policies. However, in real-world multi-agent systems, state estimations may be perturbed by sensor measurement noise or even adversaries. Agents' policies trained with only true state information will deviate from optimal solutions when facing adversarial state perturbations during execution. MARL under adversarial state perturbations has received limited study. Hence, in this work, we propose a State-Adversarial Markov Game (SAMG) and make the first attempt to study the fundamental properties of MARL under state uncertainties. We prove that the optimal agent policy and the robust Nash equilibrium do not always exist for an SAMG. Instead, we define the solution concept, robust agent policy, of the proposed SAMG under adversarial state perturbations, where agents want to maximize the worst-case expected state value. We then design a gradient descent ascent-based robust MARL algorithm to learn the robust policies for the MARL agents. Our experiments show that adversarial state perturbations decrease agents' rewards for several baselines from the existing literature, while our algorithm outperforms baselines with state perturbations and significantly improves the robustness of the MARL policies under state uncertainties.
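The gradient descent ascent (GDA) scheme mentioned above can be illustrated on a toy smooth objective (the function below is an arbitrary concave-convex example with a known saddle point, not the SAMG objective):

```python
# A toy gradient-descent-ascent (GDA) loop in the spirit of the robust MARL
# training described above: the agent parameter theta ascends the objective
# while the state-perturbation parameter delta descends it, seeking a
# saddle point.
def J(theta, delta):
    # Hypothetical smooth objective: concave in theta, convex in delta,
    # with saddle point at theta = 1.6, delta = -0.8.
    return -(theta - 2.0) ** 2 + theta * delta + delta ** 2

def grads(theta, delta):
    d_theta = -2.0 * (theta - 2.0) + delta   # ascent direction for the agent
    d_delta = theta + 2.0 * delta            # descent direction for adversary
    return d_theta, d_delta

theta, delta, lr = 0.0, 0.0, 0.05
for _ in range(500):
    g_t, g_d = grads(theta, delta)
    theta += lr * g_t     # agent maximizes the worst-case value
    delta -= lr * g_d     # adversary minimizes it
print(round(theta, 3), round(delta, 3))
```

Here theta plays the role of the agents' policy parameters and delta the adversarial state perturbation; in the paper both are high-dimensional and the gradients come from sampled trajectories rather than a closed-form objective.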
Autonomous intelligent agents deployed to the real world need to be robust against adversarial attacks on sensory inputs. Existing work in reinforcement learning focuses on minimum-norm perturbation attacks, which were originally introduced to mimic a notion of perceptual invariance in computer vision. In this paper, we note that such minimum-norm perturbation attacks can be trivially detected by victim agents, as these result in observation sequences that are inconsistent with the victim agent's actions. Furthermore, many real-world agents, such as physical robots, commonly operate under human supervisors, which are not susceptible to such perturbation attacks. As a result, we propose to instead focus on illusory attacks, a novel form of attack that is consistent with the world model of the victim agent. We provide a formal definition of this novel attack framework, explore its characteristics under a variety of conditions, and conclude that agents must seek realism feedback to be robust to illusory attacks.
Deep Reinforcement Learning (RL) agents are susceptible to adversarial noise in their observations that can mislead their policies and decrease their performance. However, an adversary may be interested not only in decreasing the reward, but also in modifying specific temporal logic properties of the policy. This paper presents a metric that measures the exact impact of adversarial attacks against such properties. We use this metric to craft optimal adversarial attacks. Furthermore, we introduce a model checking method that allows us to verify the robustness of RL policies against adversarial attacks. Our empirical analysis confirms (1) the quality of our metric to craft adversarial attacks against temporal logic properties, and (2) that we are able to concisely assess a system's robustness against attacks.
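One way to picture a metric over temporal-logic properties (a toy stand-in of our own, not the paper's metric or model-checking method) is to compare how often a property such as "eventually reach the goal within k steps" holds on clean versus attacked trajectories:

```python
# Toy sketch: measure an attack's impact on a temporal-logic-style property
# as the drop in the property's satisfaction rate across sampled trajectories.
def eventually(trajectory, goal, k):
    """F<=k goal: the goal state appears within the first k steps."""
    return goal in trajectory[:k + 1]

def property_impact(clean_trajs, attacked_trajs, goal, k):
    sat = lambda trajs: sum(eventually(t, goal, k) for t in trajs) / len(trajs)
    return sat(clean_trajs) - sat(attacked_trajs)

# Hypothetical rollouts of a policy, with and without adversarial noise.
clean = [[0, 1, 2, 3], [0, 1, 2, 3], [0, 0, 1, 2, 3]]
attacked = [[0, 1, 1, 1], [0, 1, 2, 3], [0, 0, 0, 0]]
print(property_impact(clean, attacked, goal=3, k=3))
```

An attacker could then search for the perturbation maximizing this impact, and a model checker could bound it over all admissible perturbations rather than a finite sample.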
Recent studies have shown that deep reinforcement learning agents are vulnerable to small adversarial perturbations on the agent's inputs, which raises concerns about deploying such agents in the real world. To address this issue, we propose RADIAL-RL, a principled framework to train reinforcement learning agents with improved robustness against $l_p$-norm bounded adversarial attacks. Our framework is compatible with popular deep reinforcement learning algorithms, and we demonstrate its performance with deep Q-learning, A3C and PPO. We experiment on three deep RL benchmarks (Atari, MuJoCo and ProcGen) to show the effectiveness of our robust training algorithm. Our RADIAL-RL agents consistently outperform prior methods when tested against attacks of varying strength, and are more computationally efficient to train. In addition, we propose a new evaluation method called Greedy Worst-Case Reward (GWC) to measure the attack-agnostic robustness of deep RL agents. We show that GWC can be evaluated efficiently and is a good estimate of the reward under the worst possible sequence of adversarial attacks. All code used for our experiments is available at https://github.com/tuomaso/radial_rl_v2.
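The flavor of a greedy worst-case evaluation can be sketched on a toy chain MDP (hypothetical Q-values and an all-powerful per-step adversary; the actual GWC procedure restricts the adversary to eps-bounded perturbations):

```python
# Toy sketch of a greedy worst-case style evaluation: at every step the
# adversary greedily forces the action the agent's own Q-function ranks
# lowest, and the resulting return is a pessimistic estimate of the reward
# under attack. Chain MDP: action 1 advances (+1 reward), action 0 stays.
N_STATES = 5

def step(s, a):
    return (min(s + 1, N_STATES - 1), 1.0) if a == 1 else (s, 0.0)

# Hypothetical victim Q-table that correctly prefers advancing.
Q = {s: {0: 0.0, 1: 1.0} for s in range(N_STATES)}

def rollout(action_picker, horizon=8):
    s, total = 0, 0.0
    for _ in range(horizon):
        a = action_picker(s)
        s, r = step(s, a)
        total += r
    return total

clean = rollout(lambda s: max(Q[s], key=Q[s].get))
# Worst case: assume perturbations can induce any action at every step.
gwc = rollout(lambda s: min(Q[s], key=Q[s].get))
print("clean return:", clean, "greedy worst-case return:", gwc)
```

Evaluating this greedy rollout takes a single episode, which is what makes the estimate cheap compared with searching over all attack sequences.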
A trustworthy reinforcement learning algorithm should be competent in solving challenging real-world problems, including robustly handling uncertainties, satisfying safety constraints to avoid catastrophic failures, and generalizing to unseen scenarios during deployment. This study aims to overview these main perspectives of trustworthy reinforcement learning considering its intrinsic vulnerabilities in robustness, safety, and generalizability. In particular, we give rigorous formulations, categorize corresponding methodologies, and discuss benchmarks for each perspective. Moreover, we provide an outlook section to spur promising future directions, and briefly discuss extrinsic vulnerabilities considering human feedback. We hope this survey can bring together separate threads of study in a unified framework and promote the trustworthiness of reinforcement learning.
Deep reinforcement learning models are vulnerable to adversarial attacks that can decrease a victim's cumulative expected reward by manipulating the victim's observations. Despite the efficiency of previous optimization-based methods for generating adversarial noise in supervised learning, such methods might not achieve the lowest cumulative reward, since they do not generally explore the environment dynamics. In this paper, we provide a framework to better understand the existing methods by reformulating the problem of adversarial attacks on reinforcement learning in the function space. Our reformulation generates the optimal adversary in the function space of targeted attacks, which we approach via a generic two-stage framework. In the first stage, we train a deceptive policy by hacking the environment, and discover a set of trajectories routing to the lowest reward or the worst-case performance. Next, the adversary misleads the victim to imitate the deceptive policy by perturbing its observations. Compared to existing approaches, we theoretically show that our adversary is stronger under an appropriate noise level. Extensive experiments demonstrate our superiority in terms of efficiency and effectiveness, achieving state-of-the-art performance in both Atari and MuJoCo environments.
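A tabular caricature of the two stages (toy chain MDP and a hypothetical victim policy; not the paper's method): stage 1 learns a reward-minimizing "deceptive" policy by direct control, and stage 2 picks perturbed observations that make the victim imitate it.

```python
# Stage 1: Q-learning on the negated reward yields a deceptive policy that
# routes to the lowest return. Stage 2: per state, choose a perturbed
# observation under which the victim's own policy matches the deceptive
# action. Toy chain MDP: action 1 advances (+1 reward), action 0 stays.
import random

random.seed(2)
N = 5

def step(s, a):
    return (min(s + 1, N - 1), 1.0) if a == 1 else (s, 0.0)

# Stage 1: learn reward-minimizing behavior by maximizing the negated reward.
Qdec = {s: [0.0, 0.0] for s in range(N)}
for _ in range(500):
    s, a = random.randrange(N), random.randrange(2)
    s2, r = step(s, a)
    Qdec[s][a] += 0.5 * (-r + 0.9 * max(Qdec[s2]) - Qdec[s][a])
deceptive = {s: max((0, 1), key=lambda a: Qdec[s][a]) for s in range(N)}

# Hypothetical victim: advances everywhere except at the terminal state.
victim = {s: 1 if s < N - 1 else 0 for s in range(N)}

def perturb(s):
    """Stand-in for an eps-ball: any observation within distance 1 of the
    true state; pick one whose victim action imitates the deceptive one."""
    allowed = [o for o in range(N) if abs(o - s) <= 1]
    matches = [o for o in allowed if victim[o] == deceptive[s]]
    return matches[0] if matches else s

attacked = {s: victim[perturb(s)] for s in range(N)}
print("deceptive:", deceptive, "victim under attack:", attacked)
```

In this toy chain the perturbation budget only suffices to fool the victim near the terminal state; the paper's broader point is that matching a learned deceptive policy is a stronger objective than greedily corrupting individual decisions.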
Despite considerable advances in deep reinforcement learning, it has been shown to be highly vulnerable to adversarial perturbations of state observations. Recent efforts to improve the adversarial robustness of reinforcement learning can nonetheless tolerate only very small perturbations, and remain fragile as the perturbation size grows. We propose Bootstrapped Opportunistic Adversarial Curriculum Learning (BCL), a novel flexible adversarial curriculum learning framework for robust reinforcement learning. Our framework combines two ideas: conservatively bootstrapping each curriculum phase with the highest-quality solutions obtained from multiple runs of the previous phase, and opportunistically skipping forward in the curriculum. In our experiments, we show that the proposed BCL framework enables dramatic improvements in the robustness of learned policies to adversarial perturbations. The largest improvement is on Pong, where our framework yields robustness to perturbations of up to 25/255; in contrast, the best existing approach can only tolerate adversarial noise of up to 5/255. Our code is available at: https://github.com/jlwu002/bcl.
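The curriculum loop itself is simple enough to sketch (with toy stand-ins for training and robustness evaluation; this is not the BCL implementation):

```python
# Schematic of the curriculum loop: each phase bootstraps several runs from
# the best model of the previous phase, keeps the best result, and
# opportunistically skips curriculum stages the current model already handles.
import random

random.seed(3)
EPS_SCHEDULE = [1, 2, 5, 10, 15, 25]      # perturbation sizes (e.g. x/255)

def train(model, eps):
    """Toy stand-in for adversarial training at strength eps: one 'run'
    nudges the model's tolerated perturbation size upward."""
    return model + random.uniform(0.5, 1.5) * (eps - model) if eps > model else model

def robust_at(model, eps):
    return model >= eps                   # toy robustness evaluation

model, phases_run = 0.0, []
for eps in EPS_SCHEDULE:
    if robust_at(model, eps):             # opportunistic skip
        continue
    runs = [train(model, eps) for _ in range(4)]   # bootstrap multiple runs
    model = max(runs)                     # keep the highest-quality solution
    phases_run.append(eps)
print("phases actually trained:", phases_run, "final strength:", round(model, 2))
```

The two ingredients are visible directly in the loop: `max(runs)` is the conservative bootstrapping of the next phase, and the `continue` branch is the opportunistic skip.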
Recent advances in multi-agent reinforcement learning (MARL) provide a variety of tools that support agents' ability to adapt to unexpected changes in their environment, and to operate successfully given the dynamic nature of the environment (which may be intensified by the presence of other agents). In this work, we highlight the relationship between a group's ability to collaborate effectively and the group's resilience, which we measure as the group's ability to adapt to perturbations in the environment. To promote resilience, we suggest facilitating collaboration via a novel confusion-based communication protocol, according to which agents broadcast observations that are misaligned with their previous experience. We allow decisions regarding the width and frequency of messages to be learned autonomously by the agents, which are incentivized to reduce confusion. We present an empirical evaluation of our approach in a variety of MARL settings.
Cooperative multi-agent reinforcement learning (c-MARL) is widely applied in safety-critical scenarios, thus the analysis of robustness for c-MARL models is profoundly important. However, robustness certification for c-MARLs has not yet been explored in the community. In this paper, we propose a novel certification method, which is the first work to leverage a scalable approach for c-MARLs to determine actions with guaranteed certified bounds. c-MARL certification poses two key challenges compared with single-agent systems: (i) the accumulated uncertainty as the number of agents increases; (ii) the potential lack of impact when changing the action of a single agent into a global team reward. These challenges prevent us from directly using existing algorithms. Hence, we employ the false discovery rate (FDR) controlling procedure considering the importance of each agent to certify per-state robustness and propose a tree-search-based algorithm to find a lower bound of the global reward under the minimal certified perturbation. As our method is general, it can also be applied in single-agent environments. We empirically show that our certification bounds are much tighter than state-of-the-art RL certification solutions. We also run experiments on two popular c-MARL algorithms: QMIX and VDN, in two different environments, with two and four agents. The experimental results show that our method produces meaningful guaranteed robustness for all models and environments. Our tool CertifyCMARL is available at https://github.com/TrustAI/CertifyCMA
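The FDR-controlling step the abstract refers to is, in its standard form, the Benjamini-Hochberg procedure; here is a self-contained sketch on hypothetical per-agent p-values (the paper's smoothing-based bounds and tree search are not reproduced):

```python
# Standard Benjamini-Hochberg FDR control: sort p-values, find the largest
# rank k with p_(k) <= alpha * k / m, and reject the k smallest hypotheses.
def benjamini_hochberg(p_values, alpha=0.05):
    """Return indices whose null hypotheses are rejected under FDR alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= alpha * rank / m:
            k = rank                      # largest rank passing the threshold
    return sorted(order[:k])

# Hypothetical p-values, one per agent: small means "certifiably robust".
p = [0.001, 0.009, 0.04, 0.2]
print(benjamini_hochberg(p, alpha=0.05))
```

Controlling the false discovery rate rather than the family-wise error keeps the certificate from collapsing as the number of agents, and hence the number of simultaneous hypotheses, grows.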
Reinforcement learning (RL) is one of the most important branches of AI. Due to its capacity for self-adaption and decision-making in dynamic environments, reinforcement learning has been widely applied in multiple areas, such as healthcare, data markets, autonomous driving, and robotics. However, some of these applications and systems have been shown to be vulnerable to security or privacy attacks, resulting in unreliable or unstable services. A large number of studies have focused on these security and privacy problems in reinforcement learning. However, few surveys have provided a systematic review and comparison of existing problems and state-of-the-art solutions to keep up with the pace of emerging threats. Accordingly, we herein present such a comprehensive review to explain and summarize the challenges associated with security and privacy in reinforcement learning from a new perspective, namely that of the Markov Decision Process (MDP). In this survey, we first introduce the key concepts related to this area. Next, we cover the security and privacy issues linked to the state, action, environment, and reward function of the MDP process, respectively. We further highlight the special characteristics of security and privacy methodologies related to reinforcement learning. Finally, we discuss the possible future research directions within this area.
Most reinforcement learning algorithms implicitly assume strong synchrony. We present novel attacks targeting Q-learning that exploit a vulnerability entailed by this assumption by delaying the reward signal for a limited time period. We consider two types of attack goals: targeted attacks, which aim to cause a target policy to be learned, and untargeted attacks, which simply aim to induce a policy with a low reward. We evaluate the efficacy of the proposed attacks through a series of experiments. Our first observation is that reward-delay attacks are extremely effective when the goal is simply to minimize reward. Indeed, we find that even a naive baseline reward-delay attack is highly successful at minimizing the reward. Targeted attacks, on the other hand, are more challenging, although we show that the proposed approaches remain highly effective at achieving the attacker's targets. In addition, we introduce a second threat model that captures a minimal mitigation which ensures that rewards cannot be used out of sequence. We find that this mitigation remains insufficient to ensure robustness to attacks that delay, but preserve the order of, rewards.
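The core mechanism of the untargeted attack is easy to state: the learner's update at step t is paired with the reward generated d steps earlier. A minimal sketch of that mechanism (a simplified re-creation of the threat model, not the paper's attack):

```python
# Toy reward-delay mechanism: the attacker buffers the true reward stream,
# so the learner observes each reward d steps late and pairs its Q-updates
# with the wrong transitions, breaking credit assignment.
from collections import deque

def delayed_stream(rewards, delay):
    """Rewards as the learner observes them under a delay-d attacker."""
    buf = deque([0.0] * delay)
    seen = []
    for r in rewards:
        buf.append(r)                 # attacker intercepts the true reward
        seen.append(buf.popleft())    # ...and releases a stale one instead
    return seen

true_rewards = [0.0, 0.0, 1.0, 0.0, 0.0, 1.0]
print(delayed_stream(true_rewards, delay=0))   # unmodified stream
print(delayed_stream(true_rewards, delay=2))   # every reward arrives 2 late
```

A targeted attacker would instead choose which buffered reward to release at each step, within the allowed delay, to steer learning toward a chosen policy; the paper's second threat model additionally forbids releasing rewards out of order.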
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
The increasing use of reinforcement learning in safety-critical system domains such as autonomous vehicles, health care, and aviation raises the need to ensure its safety. Existing safety mechanisms such as adversarial training, adversarial detection, and robust learning are not always adapted to all the disturbances under which the agent is deployed. Those disturbances include moving adversaries, whose behavior can be unpredictable by the agent and is, as a matter of fact, harmful to its learning. Ensuring the safety of critical systems also requires providing formal guarantees on the behavior of the agent evolving in a perturbed environment. It is therefore necessary to propose new solutions adapted to the learning challenges faced by the agent. In this paper, first, we generate adversarial agents that exhibit flaws in the agent's policy by presenting moving adversaries. Secondly, we use reward shaping and a modified Q-learning algorithm as defense mechanisms to improve the agent's policy when facing adversarial perturbations. Finally, probabilistic model checking is employed to evaluate the effectiveness of both mechanisms. We conducted experiments on a discrete grid world with a single agent facing non-learning and learning adversaries. Our results show a decrease in the number of collisions between the agent and the adversaries. Probabilistic model checking provides lower and upper probabilistic bounds on the agent's safety in the adversarial environment.
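The reward-shaping side of such a defense can be illustrated with standard potential-based shaping (the grid and potential function below are our own toy choices; the paper's modified Q-learning and adversary models are not reproduced):

```python
# Standard potential-based reward shaping on a toy grid: the shaped reward
# adds gamma * Phi(s') - Phi(s), where the potential Phi penalizes states
# close to the adversary. Being a potential difference, this term changes
# incentives during learning without changing the optimal policy of the
# underlying MDP.
GAMMA = 0.9

def phi(state, adversary):
    """Hypothetical potential: negative when adjacent to the adversary."""
    (x, y), (ax, ay) = state, adversary
    return -1.0 if abs(x - ax) + abs(y - ay) <= 1 else 0.0

def shaped_reward(r, s, s2, adversary):
    return r + GAMMA * phi(s2, adversary) - phi(s, adversary)

adv = (2, 2)
# Stepping next to the adversary is penalized; stepping away is rewarded.
print(shaped_reward(0.0, (0, 0), (1, 2), adv))
print(shaped_reward(0.0, (1, 2), (0, 2), adv))
```

In the paper's setting the adversary moves, so the potential would be recomputed from the adversary's current position at every step.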
Many high-stakes decisions follow an expert-in-the-loop structure, in which a human operator receives recommendations from an algorithm but is the ultimate decision maker. Hence, the algorithm's recommendations may differ from the actual decisions implemented in practice. However, most algorithmic recommendations are obtained by solving an optimization problem that assumes the recommendations will be perfectly implemented. We propose an adherence-aware optimization framework to capture the dichotomy between the recommended and the implemented policy, and analyze the impact of partial adherence on the optimal recommendation. We show that overlooking the partial-adherence phenomenon, as most recommendation engines currently do, can lead to arbitrarily severe performance deterioration compared with both the current human baseline performance and what the recommendation algorithm expects. Our framework also provides useful tools to analyze the structure of, and to compute, optimal recommendation policies that are naturally resistant to such human deviations and are guaranteed to improve upon the baseline policy.
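The dichotomy between recommended and implemented policy has a simple numeric illustration (a toy two-state MDP with hypothetical rewards, not the paper's model): with probability theta the operator follows the recommendation, otherwise a baseline action.

```python
# Policy evaluation under partial adherence: each Bellman backup mixes the
# recommended action (probability theta) with the baseline action.
GAMMA = 0.9

# States 0, 1; actions 0, 1. P[s][a] = next state, R[s][a] = reward.
P = {0: {0: 0, 1: 1}, 1: {0: 0, 1: 1}}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.0, 1: 2.0}}
baseline = {0: 0, 1: 0}              # the human's default action everywhere

def evaluate(rec, theta, iters=200):
    V = {0: 0.0, 1: 0.0}
    for _ in range(iters):
        V = {s: theta * (R[s][rec[s]] + GAMMA * V[P[s][rec[s]]])
                + (1 - theta) * (R[s][baseline[s]] + GAMMA * V[P[s][baseline[s]]])
             for s in (0, 1)}
    return V

rec = {0: 1, 1: 1}                   # recommend moving toward state 1
full = evaluate(rec, theta=1.0)[0]   # value if advice were always followed
partial = evaluate(rec, theta=0.5)[0]
print(round(full, 2), round(partial, 2))
```

Evaluating the recommendation at theta = 1, as a perfect-implementation optimization implicitly does, overstates its value (18.0 here) relative to what partial adherence delivers (7.25 at theta = 0.5); that gap is exactly what the adherence-aware framework is designed to account for.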
Transfer learning is an increasingly common approach for developing performant RL agents. However, it is not well understood how to define the relationship between the source and target tasks, and how this relationship contributes to successful transfer. We present an algorithm called Structural Similarity for Two MDPs, or SS2, which calculates a state similarity measure for states in two finite MDPs based on previously developed bisimulation metrics, and we show that the measure satisfies the properties of a distance metric. Then, through empirical results with GridWorld navigation tasks, we provide evidence that the distance measure can be used to improve the transfer performance of a Q-learning agent over previous implementations.
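For deterministic finite MDPs, a bisimulation-style distance of the kind SS2 builds on reduces to a simple fixed-point iteration (a simplified sketch; the general case compares transition distributions with a Wasserstein distance, which is omitted here):

```python
# Fixed-point iteration for a bisimulation-style distance between states of
# two deterministic finite MDPs:
#   d(s, t) = max_a |R1(s,a) - R2(t,a)| + gamma * d(T1(s,a), T2(t,a))
GAMMA = 0.9

def bisim_distance(R1, T1, R2, T2, iters=100):
    S1, S2, A = len(R1), len(R2), len(R1[0])
    d = [[0.0] * S2 for _ in range(S1)]
    for _ in range(iters):
        d = [[max(abs(R1[s][a] - R2[t][a]) + GAMMA * d[T1[s][a]][T2[t][a]]
                  for a in range(A))
              for t in range(S2)] for s in range(S1)]
    return d

# Two tiny 2-state, 1-action MDPs that differ only in one reward:
# in both, state 0 moves to state 1, and state 1 self-loops.
R1, T1 = [[1.0], [0.0]], [[1], [1]]
R2, T2 = [[1.0], [0.5]], [[1], [1]]
d = bisim_distance(R1, T1, R2, T2)
print([[round(x, 3) for x in row] for row in d])
```

States with small distance are behaviorally similar, which is what licenses transferring value estimates between them across tasks.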
Atari games have been a long-standing benchmark in the reinforcement learning (RL) community for the past decade. This benchmark was proposed to test general competency of RL algorithms. Previous work has achieved good average performance by doing outstandingly well on many games of the set, but very poorly in several of the most challenging games. We propose Agent57, the first deep RL agent that outperforms the standard human benchmark on all 57 Atari games. To achieve this result, we train a neural network which parameterizes a family of policies ranging from very exploratory to purely exploitative. We propose an adaptive mechanism to choose which policy to prioritize throughout the training process. Additionally, we utilize a novel parameterization of the architecture that allows for more consistent and stable learning.
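The adaptive mechanism for choosing which member of the policy family to prioritize can be caricatured as a bandit over policy indices (a plain epsilon-greedy bandit with synthetic returns; Agent57's actual meta-controller is more sophisticated):

```python
# Toy bandit over a hypothetical family of policies indexed from exploratory
# (low j) to exploitative (high j): pick an index, observe an episode return,
# and update that index's running mean.
import random

random.seed(5)
N_POLICIES = 8
true_return = [j * 0.1 for j in range(N_POLICIES)]   # toy: later = better

counts, means = [0] * N_POLICIES, [0.0] * N_POLICIES
for t in range(2000):
    if random.random() < 0.1 or t < N_POLICIES:
        # Explore: round-robin at first, then uniformly at random.
        j = t % N_POLICIES if t < N_POLICIES else random.randrange(N_POLICIES)
    else:
        j = max(range(N_POLICIES), key=lambda i: means[i])   # exploit
    episode_return = true_return[j] + random.gauss(0, 0.05)  # noisy return
    counts[j] += 1
    means[j] += (episode_return - means[j]) / counts[j]      # running mean
print("preferred policy index:", max(range(N_POLICIES), key=lambda i: means[i]))
```

In this toy the best index is fixed; during real training the best trade-off between exploration and exploitation shifts over time, which motivates a bandit that can track nonstationarity rather than the stationary running means used here.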
A long-standing challenge in artificial intelligence is lifelong learning. In lifelong learning, many tasks are presented in sequence and learners must efficiently transfer knowledge between tasks while avoiding catastrophic forgetting over long lifetimes. On these problems, policy reuse and other multi-policy reinforcement learning techniques can learn many tasks. However, they can generate many temporary or permanent policies, resulting in memory issues. Consequently, there is a need for lifetime-scalable methods that continually refine a policy library of a pre-defined size. This paper presents a first approach to lifetime-scalable policy reuse. To pre-select the number of policies, a notion of task capacity, the maximal number of tasks that a policy can accurately solve, is proposed. To evaluate lifetime policy reuse using this method, two state-of-the-art single-actor base-learners are compared: 1) a value-based reinforcement learner, Deep Q-Network (DQN) or Deep Recurrent Q-Network (DRQN); and 2) an actor-critic reinforcement learner, Proximal Policy Optimisation (PPO) with or without Long Short-Term Memory layer. By selecting the number of policies based on task capacity, D(R)QN achieves near-optimal performance with 6 policies in a 27-task MDP domain and 9 policies in an 18-task POMDP domain; with fewer policies, catastrophic forgetting and negative transfer are observed. Due to slow, monotonic improvement, PPO requires fewer policies, 1 policy for the 27-task domain and 4 policies for the 18-task domain, but it learns the tasks with lower accuracy than D(R)QN. These findings validate lifetime-scalable policy reuse and suggest using D(R)QN for larger and PPO for smaller library sizes.