Reinforcement Learning (RL) algorithms have been successfully applied to real-world problems such as illegal smuggling, poaching, deforestation, climate change, and airport security. These scenarios can be framed as Stackelberg security games (SSGs), in which defenders and attackers compete to control target resources. An algorithm's competency is assessed by which agent ends up controlling the targets. This review investigates the modeling of SSGs in RL, with a focus on possible improvements to target representations in RL algorithms.
Various types of Multi-Agent Reinforcement Learning (MARL) methods have been developed under the assumption that agents' policies are based on the true state. Recent works have improved the robustness of MARL under uncertainties in the reward, the transition probability, or other agents' policies. However, in real-world multi-agent systems, state estimations may be perturbed by sensor measurement noise or even adversaries. Agents' policies trained with only true state information will deviate from optimal solutions when facing adversarial state perturbations during execution. MARL under adversarial state perturbations has received limited study. Hence, in this work, we propose a State-Adversarial Markov Game (SAMG) and make the first attempt to study the fundamental properties of MARL under state uncertainties. We prove that the optimal agent policy and the robust Nash equilibrium do not always exist for an SAMG. Instead, we define the solution concept of a robust agent policy for the proposed SAMG under adversarial state perturbations, where agents aim to maximize the worst-case expected state value. We then design a gradient descent ascent-based robust MARL algorithm to learn the robust policies for the MARL agents. Our experiments show that adversarial state perturbations decrease agents' rewards for several baselines from the existing literature, while our algorithm outperforms these baselines under state perturbations and significantly improves the robustness of the MARL policies under state uncertainties.
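As a rough illustration of the gradient descent ascent (GDA) idea mentioned above, the following sketch alternates an adversary's descent step on a state perturbation with the agent's ascent step on its policy parameters. The quadratic surrogate value, the perturbation budget, and all hyperparameters are illustrative assumptions, not the paper's actual objective or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=3)   # agent policy parameters (toy)
delta = np.zeros(3)          # adversarial state perturbation
EPS, LR = 0.5, 0.05          # assumed perturbation budget and step size

def value(theta, delta):
    """Toy surrogate for the worst-case state value: concave in theta, convex in delta."""
    return -np.sum((theta - 1.0) ** 2) + theta @ delta + np.sum(delta ** 2)

def grad(f, x, eps=1e-5):
    """Central finite-difference gradient, to keep the sketch dependency-free."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

for _ in range(500):
    # Adversary: descend on the value w.r.t. the state perturbation (inner minimization).
    delta = np.clip(delta - LR * grad(lambda d: value(theta, d), delta), -EPS, EPS)
    # Agent: ascend on the worst-case value w.r.t. its policy parameters.
    theta = theta + LR * grad(lambda t: value(t, delta), theta)

print("robust policy parameters:", np.round(theta, 3))
```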
In the cybersecurity setting, defenders are often at the mercy of their detection technologies and subject to the information and experiences that individual analysts have. In order to give defenders an advantage, it is important to understand an attacker's motivation and their likely next best action. As a first step in modeling this behavior, we introduce a security game framework that simulates interplay between attackers and defenders in a noisy environment, focusing on the factors that drive decision making for attackers and defenders in the variants of the game with full knowledge and observability, knowledge of the parameters but no observability of the state ("partial knowledge"), and zero knowledge or observability ("zero knowledge"). We demonstrate the importance of making the right assumptions about attackers, given significant differences in outcomes. Furthermore, there is a measurable trade-off between false positives and true positives in terms of attacker outcomes, suggesting that a more false-positive-prone environment may be acceptable under conditions where true positives are also higher.
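The false-positive/true-positive trade-off discussed above can be illustrated with a toy Monte Carlo sketch: a detector fires with given true-positive and false-positive rates and the defender responds to every alert. The one-shot structure, rates, and payoffs are illustrative assumptions, not the paper's game.

```python
import random

def simulate(tp_rate, fp_rate, episodes=100_000, attack_prob=0.3):
    """Mean attacker payoff when the defender responds to every alert (toy payoffs)."""
    attacker_payoff = 0.0
    for _ in range(episodes):
        attack = random.random() < attack_prob
        alert = random.random() < (tp_rate if attack else fp_rate)
        if attack and not alert:
            attacker_payoff += 1.0     # undetected attack succeeds
        elif attack and alert:
            attacker_payoff -= 1.0     # detected attack is stopped and penalized
    return attacker_payoff / episodes

# A more false-positive-prone detector can still lower attacker payoff if it also
# raises the true-positive rate.
for tp, fp in [(0.6, 0.05), (0.8, 0.20), (0.9, 0.40)]:
    print(f"TP={tp:.2f} FP={fp:.2f} -> mean attacker payoff {simulate(tp, fp):+.3f}")
```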
Real-world applications such as economics and policymaking often involve solving multi-agent games with two unique features: (1) the agents are inherently asymmetric and partitioned into leaders and followers; and (2) the agents have different reward functions, so the game is general-sum. The majority of existing results in this field focus either on symmetric solution concepts (e.g., Nash equilibrium) or on zero-sum games. It remains open how to learn the Stackelberg equilibrium, an asymmetric analog of the Nash equilibrium, efficiently from noisy samples. This paper initiates the theoretical study of sample-efficient learning of Stackelberg equilibria in the bandit feedback setting, where we only observe noisy samples of the rewards. We consider three representative two-player general-sum games: bandit games, bandit-reinforcement learning (bandit-RL) games, and linear bandit games. In all these games, we identify a fundamental gap between the exact value of the Stackelberg equilibrium and its estimated version computed from finitely many noisy samples, a gap that cannot be closed information-theoretically regardless of the algorithm. We then establish sharp positive results on sample-efficient learning of Stackelberg equilibria whose values are optimal up to the gap identified above, with matching lower bounds in the dependence on the gap, the error tolerance, and the size of the action spaces. Overall, our results highlight the unique challenges of learning Stackelberg equilibria under noisy bandit feedback, which we hope will shed light on future research on this topic.
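A minimal sketch of the plug-in estimation problem described above, for a two-player bandit game: sample every reward cell, estimate the follower's best response to each leader action, and pick the leader action that is best against it. The payoff matrices (with nearly tied follower responses to hint at the identified gap), the noise model, and the estimator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Rows: leader actions, columns: follower actions (toy general-sum payoffs).
R_leader   = np.array([[1.0, 0.0],
                       [0.8, 0.3]])
R_follower = np.array([[0.5, 0.49],   # nearly tied follower responses make the
                       [0.2, 0.7 ]])  # best response hard to identify from noise

def estimate_stackelberg(n_per_cell, noise_std=0.5):
    """Plug-in estimate of the leader's Stackelberg action and value from noisy samples."""
    est_l, est_f = np.zeros((2, 2)), np.zeros((2, 2))
    for a in range(2):
        for b in range(2):
            est_l[a, b] = np.mean(R_leader[a, b] + noise_std * rng.standard_normal(n_per_cell))
            est_f[a, b] = np.mean(R_follower[a, b] + noise_std * rng.standard_normal(n_per_cell))
    best_response = est_f.argmax(axis=1)                # follower's estimated best reply
    leader_values = est_l[np.arange(2), best_response]  # leader's value against it
    a_star = int(leader_values.argmax())
    return a_star, round(float(leader_values[a_star]), 3)

for n in (10, 1_000, 100_000):
    print(f"{n:>7} samples/cell -> leader action, estimated value: {estimate_stackelberg(n)}")
```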
We study multi-player general-sum Markov games with one of the players designated as the leader and the other players regarded as followers. In particular, we focus on the class of games where the followers are myopic, i.e., they aim to maximize their instantaneous rewards. For such a game, we aim to find a Stackelberg-Nash equilibrium (SNE), which is a policy pair $(\pi^*, \nu^*)$ such that (i) $\pi^*$ is the optimal policy for the leader when the followers always play their best responses, and (ii) $\nu^*$ is the best-response policy of the followers, i.e., a Nash equilibrium of the followers' game induced by $\pi^*$. We develop sample-efficient reinforcement learning (RL) algorithms for solving for an SNE in both the online and offline settings. Our algorithms are optimistic and pessimistic variants of least-squares value iteration, and they readily incorporate function approximation tools for settings with large state spaces. Furthermore, for the case with linear function approximation, we prove that our algorithms achieve sublinear regret and suboptimality under the online and offline setups, respectively. To the best of our knowledge, we establish the first provably efficient RL algorithms for solving for SNEs in general-sum Markov games with myopic followers.
We investigate how effective an attacker can be when it only learns from the victim's behavior, without access to the victim's reward. In this work, we are motivated by the scenario where the attacker wants to behave strategically while the victim's motivations are unknown. We argue that one heuristic an attacker can use is to maximize the entropy of the victim's policy. The policy is generally not obfuscated, which implies it can be extracted simply by passively observing the victim. We provide such a strategy in the form of a reward-free exploration algorithm that maximizes the attacker's entropy during the exploration phase and then maximizes the victim's empirical entropy during the planning phase. In our experiments, victim agents are subverted through policy entropy maximization, implying that an attacker may not need access to the victim's reward to succeed. Hence, reward-free attacks, based only on observing behavior, demonstrate the feasibility of an attacker that has no knowledge of the victim's motivations, even when the victim's reward information is protected.
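A small sketch of the entropy heuristic described above, assuming a tabular setting and a toy observation log: passively observed (state, action) pairs are turned into an empirical victim policy, whose per-state Shannon entropy the attacker can then use as a surrogate objective.

```python
from collections import Counter, defaultdict
import math

# Hypothetical passively observed (state, action) pairs of the victim.
observations = [
    ("s0", "left"), ("s0", "left"), ("s0", "right"),
    ("s1", "up"), ("s1", "up"), ("s1", "up"),
]

counts = defaultdict(Counter)
for state, action in observations:
    counts[state][action] += 1

def empirical_policy_entropy(state):
    """Shannon entropy (in nats) of the victim's empirical action distribution at `state`."""
    total = sum(counts[state].values())
    probs = [c / total for c in counts[state].values()]
    return max(0.0, -sum(p * math.log(p) for p in probs))   # clamp the -0.0 case

# The attacker can use this table as a surrogate reward during planning: higher values
# mark states where the victim's observed behaviour is most uncertain.
surrogate_reward = {s: round(empirical_policy_entropy(s), 3) for s in counts}
print(surrogate_reward)   # e.g. {'s0': 0.637, 's1': 0.0}
```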
Reinforcement learning (RL) is one of the most important branches of AI. Due to its capacity for self-adaptation and decision-making in dynamic environments, reinforcement learning has been widely applied in multiple areas, such as healthcare, data markets, autonomous driving, and robotics. However, some of these applications and systems have been shown to be vulnerable to security or privacy attacks, resulting in unreliable or unstable services. A large number of studies have focused on these security and privacy problems in reinforcement learning. However, few surveys have provided a systematic review and comparison of existing problems and state-of-the-art solutions to keep up with the pace of emerging threats. Accordingly, we herein present such a comprehensive review to explain and summarize the challenges associated with security and privacy in reinforcement learning from a new perspective, namely that of the Markov Decision Process (MDP). In this survey, we first introduce the key concepts related to this area. Next, we cover the security and privacy issues linked to the state, action, environment, and reward function of the MDP process, respectively. We further highlight the special characteristics of security and privacy methodologies related to reinforcement learning. Finally, we discuss the possible future research directions within this area.
We consider a multi-agent episodic MDP setup where an agent (the leader) takes an action at each step of the episode, followed by another agent (the follower). The state evolution and rewards depend on the joint action pair of the leader and the follower. Such interactions find applications in many domains, such as smart grids, mechanism design, security, and policymaking. We are interested in how to learn policies for both players with a provable performance guarantee under a bandit feedback setting. We focus on a setup where both the leader and the follower are non-myopic, i.e., they both seek to maximize their rewards over the entire episode, and we consider a linear MDP, which can model the continuous state spaces that are common in many RL applications. We propose a model-free RL algorithm and show that $\tilde{\mathcal{O}}(\sqrt{d^3H^3T})$ regret bounds can be achieved for both the leader and the follower, where $d$ is the dimension of the feature mapping, $H$ is the length of the episode, and $T$ is the total number of steps under the bandit feedback information setup. Thus, our result holds even when the number of states becomes infinite. The algorithm relies on a novel adaptation of the LSVI-UCB algorithm. Specifically, we replace the standard greedy policy (as the best response) with the soft-max policy for both the leader and the follower. This turns out to be key in establishing a uniform concentration bound for the value functions. To the best of our knowledge, this is the first sub-linear regret bound guarantee for Markov games with non-myopic followers and function approximation.
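The key algorithmic change named above (soft-max in place of the greedy best response) can be sketched as follows; the Q-values and temperature are illustrative assumptions, and the actual algorithm builds optimistic Q estimates with linear function approximation (LSVI-UCB).

```python
import numpy as np

def softmax_policy(q_values, temperature=0.1):
    """Soft-max distribution over actions for the given Q estimates (assumed values)."""
    z = (q_values - q_values.max()) / temperature   # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def softmax_value(q_values, temperature=0.1):
    """Value backup under the soft-max policy, used in place of the greedy max."""
    return float(softmax_policy(q_values, temperature) @ q_values)

q = np.array([1.00, 0.98, 0.20])                 # hypothetical Q estimates at one state
print("greedy value  :", q.max())
print("softmax value :", round(softmax_value(q), 3))
print("softmax policy:", softmax_policy(q).round(3))
```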
Independent reinforcement learning algorithms have no theoretical guarantees for finding the best policy in multi-agent settings. However, in practice, prior works have reported good performance with independent algorithms in some domains and poor performance in others. Moreover, the literature lacks a comprehensive study of the strengths and weaknesses of independent algorithms. In this paper, we carry out an empirical comparison of the performance of independent algorithms on four PettingZoo environments that span the three main categories of multi-agent environments, i.e., cooperative, competitive, and mixed. We show that in fully observable environments, independent algorithms can perform on par with multi-agent algorithms in cooperative and competitive settings. For the mixed environments, we show that agents trained via independent algorithms learn to perform well individually, but fail to learn to cooperate with allies and compete against enemies. We also show that adding recurrence improves the learning of independent algorithms in cooperative partially observable environments.
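A compact sketch of independent learning in the sense used above: each agent runs its own tabular Q-learning update on its local reward and ignores the other agent. The tiny two-agent coordination task below is an illustrative stand-in for the PettingZoo environments studied in the paper.

```python
import random
from collections import defaultdict

AGENTS, ACTIONS = ["agent_0", "agent_1"], [0, 1]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
STATE = "s"                                               # single-state task for brevity

def joint_reward(actions):
    """Toy cooperative task: both agents are rewarded only for matching actions."""
    match = 1.0 if len(set(actions.values())) == 1 else 0.0
    return {a: match for a in AGENTS}

Q = {a: defaultdict(lambda: [0.0, 0.0]) for a in AGENTS}  # one independent table per agent

for _ in range(5_000):
    actions = {}
    for a in AGENTS:   # each agent acts epsilon-greedily on its OWN table only
        actions[a] = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda u, a=a: Q[a][STATE][u])
    rewards = joint_reward(actions)
    for a in AGENTS:   # independent TD update, with no model of the other agent
        target = rewards[a] + GAMMA * max(Q[a][STATE])
        Q[a][STATE][actions[a]] += ALPHA * (target - Q[a][STATE][actions[a]])

print({a: [round(v, 2) for v in Q[a][STATE]] for a in AGENTS})
```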
We study Nash equilibrium learning in a competitive Markov game (MG) environment, where multiple agents compete and multiple Nash equilibria can exist. In particular, for an oligopolistic dynamic pricing environment, exact Nash equilibria are difficult to obtain due to the curse of dimensionality. We develop a new model-free method to find approximate Nash equilibria. Gradient-free black-box optimization is then applied to estimate $\epsilon$, the maximum reward advantage an agent can gain by unilaterally deviating from any given joint policy, and to estimate the $\epsilon$-minimizing policy for any given state. The policy-$\epsilon$ correspondence and the mapping from a state to its $\epsilon$-minimizing policy are represented by neural networks, the latter being the Nash Policy Net. During batch updates, we perform Nash Q-learning on the system by adjusting the action probabilities using the Nash Policy Net. We demonstrate that an approximate Nash equilibrium can be learned, particularly in the dynamic pricing domain where exact solutions are often intractable.
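The quantity $\epsilon$ defined above (the maximum reward advantage of a unilateral deviation) can be computed exactly in a tiny matrix game, as sketched below with assumed payoffs; the paper instead estimates it with gradient-free black-box optimization in a dynamic pricing Markov game, where enumeration is infeasible.

```python
import numpy as np

# payoff[i][a0, a1] is agent i's reward under joint action (a0, a1) (toy values).
payoff = [np.array([[3.0, 0.0], [5.0, 1.0]]),    # agent 0 (Prisoner's-Dilemma-like)
          np.array([[3.0, 5.0], [0.0, 1.0]])]    # agent 1

def expected_payoff(i, policies):
    """Agent i's expected reward under independent mixed policies."""
    return policies[0] @ payoff[i] @ policies[1]

def epsilon(policies):
    """Largest unilateral deviation gain over both agents (0 iff the policy is a Nash)."""
    gains = []
    for i in (0, 1):
        base = expected_payoff(i, policies)
        for a in range(2):                       # try every pure deviation of agent i
            deviated = [p.copy() for p in policies]
            deviated[i] = np.eye(2)[a]
            gains.append(expected_payoff(i, deviated) - base)
    return max(gains)

both_cooperate = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]
both_defect    = [np.array([0.0, 1.0]), np.array([0.0, 1.0])]
print("epsilon(cooperate):", epsilon(both_cooperate))   # > 0: a profitable deviation exists
print("epsilon(defect)   :", epsilon(both_defect))      # 0.0: (defect, defect) is the Nash
```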
Reinforcement learning allows machines to learn from their own experience. Nowadays, it is used in safety-critical applications, such as autonomous driving, despite being vulnerable to attacks carefully crafted either to prevent the reinforcement learning algorithm from learning an effective and reliable policy, or to induce the trained agent to make a wrong decision. The literature on the security of reinforcement learning is rapidly growing, and some surveys have been proposed to shed light on this field. However, their categorizations are insufficient for choosing an appropriate defense given the kind of system at hand. In our survey, we not only overcome this limitation by considering a different perspective, but we also discuss the applicability of state-of-the-art attacks and defenses when reinforcement learning algorithms are used in the context of autonomous driving.
We study the problem of training a principal in a multi-agent general-sum game using reinforcement learning (RL). Learning a robust principal policy requires anticipating the worst possible strategic responses of other agents, which is generally NP-hard. However, we show that no-regret dynamics can identify these worst-case responses in poly-time in smooth games. We propose a framework that uses this policy evaluation method for efficiently learning a robust principal policy using RL. This framework can be extended to provide robustness to boundedly rational agents too. Our motivating application is automated mechanism design: we empirically demonstrate our framework learns robust mechanisms in both matrix games and complex spatiotemporal games. In particular, we learn a dynamic tax policy that improves the welfare of a simulated trade-and-barter economy by 15%, even when facing previously unseen boundedly rational RL taxpayers.
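A minimal sketch of using no-regret dynamics to evaluate a principal policy against strategic (worst-case) agent responses, as described above: the agent runs a multiplicative-weights (Hedge) update on its own payoffs, and the principal's value is scored against the resulting play. The 2x2 payoffs and learning rate are illustrative assumptions.

```python
import numpy as np

# principal_payoff[p, a] / agent_payoff[p, a]: toy rewards when the principal plays
# pure action p and the agent plays a.
principal_payoff = np.array([[2.0, -1.0], [1.0, 0.5]])
agent_payoff     = np.array([[0.0,  1.0], [1.0, 0.0]])

def evaluate_principal(principal_mix, rounds=2_000, eta=0.1):
    """Score a principal mixed strategy against an agent running Hedge (no-regret)."""
    weights = np.ones(2)
    total = 0.0
    for _ in range(rounds):
        agent_mix = weights / weights.sum()
        agent_utils = principal_mix @ agent_payoff       # agent's payoff per agent action
        weights *= np.exp(eta * agent_utils)             # multiplicative-weights update
        total += principal_mix @ principal_payoff @ agent_mix
    return total / rounds

for mix in (np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])):
    print(mix, "->", round(float(evaluate_principal(mix)), 3))
```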
This paper serves to introduce the reader to the field of multi-agent reinforcement learning (MARL) and its intersection with methods from the study of causality. We highlight key challenges in MARL and discuss them in the context of how causal methods may assist in tackling them. We promote moving toward a "causality first" perspective on MARL. Specifically, we argue that causality can offer improved safety, interpretability, and robustness, while also providing strong theoretical guarantees for emergent behaviour. We discuss potential solutions to common challenges and use this context to motivate future research directions.
Reinforcement Learning (RL) is currently one of the most commonly used techniques for traffic signal control (TSC), as it can adaptively adjust traffic signal phases and durations according to real-time traffic data. However, a fully centralized RL approach is beset with difficulties in a multi-network scenario because of the exponential growth of the state-action space with an increasing number of intersections. Multi-agent reinforcement learning (MARL) can overcome the high-dimensionality problem by employing global control of each local RL agent, but it also brings new challenges, such as the failure of convergence caused by the non-stationary Markov Decision Process (MDP). In this paper, we introduce an off-policy Nash deep Q-network (OPNDQN) algorithm, which mitigates the weaknesses of both fully centralized and MARL approaches. OPNDQN addresses the problem that traditional algorithms cannot handle traffic models with large state-action spaces by utilizing a fictitious game approach at each iteration to find the Nash equilibrium among neighboring intersections, from which no intersection has an incentive to unilaterally deviate. One of the main advantages of OPNDQN is that it mitigates the non-stationarity of the multi-agent Markov process, because it accounts for the mutual influence among neighboring intersections by sharing their actions. On the other hand, for training a large traffic network, the convergence rate of OPNDQN is higher than that of existing MARL approaches because it does not incorporate all state information of every agent. We conduct extensive experiments using the Simulation of Urban MObility (SUMO) simulator and show the clear superiority of OPNDQN over several existing MARL approaches in terms of average queue length, episode training reward, and average waiting time.
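The per-iteration equilibrium search described above can be sketched with a best-response iteration on the stage game induced by the agents' Q estimates, a simplified stand-in for the fictitious game step used in OPNDQN. The two-intersection Q tables are illustrative assumptions in place of the trained deep Q-networks.

```python
import numpy as np

# Q[i][a_i, a_j]: intersection i's estimated return for its own phase a_i given the
# phase a_j shared by its neighbour (toy tables standing in for trained Q-networks).
Q = [np.array([[1.0, 0.2], [0.4, 0.9]]),
     np.array([[0.8, 0.3], [0.1, 1.1]])]

def stage_nash_actions(Q, max_iters=50):
    """Best-response iteration on the stage game induced by the Q estimates."""
    actions = [0, 0]
    for _ in range(max_iters):
        changed = False
        for i, j in ((0, 1), (1, 0)):
            best = int(Q[i][:, actions[j]].argmax())   # best reply to the neighbour's phase
            if best != actions[i]:
                actions[i], changed = best, True
        if not changed:            # no intersection can gain by deviating unilaterally
            return actions
    return actions                 # may cycle if no pure-strategy equilibrium exists

print("joint phases:", stage_nash_actions(Q))
```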
The analysis and control of large-population systems is of great interest to diverse areas of research and engineering, ranging from epidemiology over robotic swarms to economics and finance. An increasingly popular and effective approach to realizing sequential decision-making in multi-agent systems is multi-agent reinforcement learning, as it allows for an automatic and model-free analysis of highly complex systems. However, the key issue of scalability complicates the design of control and reinforcement learning algorithms, particularly in systems with large populations of agents. While reinforcement learning has found empirical success in many scenarios, problems with many agents quickly become intractable and require special consideration. In this survey, we shed light on current approaches to tractably understanding and analyzing large-population systems, both through multi-agent reinforcement learning and through adjacent areas of research such as mean-field games, collective intelligence, or complex network theory. These classically independent subject areas offer a variety of approaches to understanding or modeling large-population systems, which may be of great use in the formulation of tractable MARL algorithms in the future. Finally, we survey potential areas of application for large-scale control and identify fruitful future applications of learning algorithms in practical systems. We hope that our survey provides insight and future directions to junior and senior researchers in theoretical and applied sciences alike.
Many scenarios in mobility and traffic involve multiple different agents that need to cooperate to find a joint solution. Recent advances in behavioral planning use reinforcement learning to find effective and performant behavior strategies. However, as autonomous vehicles and vehicle-to-X communication become more mature, solutions that only use single, independent agents leave potential performance gains on the road. Multi-agent reinforcement learning (MARL) is a research field that aims to find optimal solutions for multiple agents that interact with each other. This work aims to give an overview of the field to researchers in autonomous mobility. We first explain MARL and introduce important concepts. Then, we discuss the central paradigms of MARL-based algorithms and give an overview of state-of-the-art methods and ideas in each paradigm. With this background, we survey applications of MARL in autonomous mobility scenarios and give an overview of existing scenarios and implementations.
Algorithmic pricing on online e-commerce platforms has raised concerns about tacit collusion, where reinforcement learning algorithms learn to set supra-competitive prices in a decentralized manner with nothing more than profit feedback. This raises the question of whether collusive pricing can be prevented through the design of a suitable "buy box," i.e., through the design of the rules that govern the elements of the e-commerce site that promote particular products and prices to consumers. In this paper, we demonstrate that the platform can also use reinforcement learning (RL) to learn buy box rules that effectively prevent collusion by RL sellers. For this, we adopt the methodology of Stackelberg POMDPs and find success in learning robust rules that continue to provide high consumer welfare, together with sellers that employ different behavior models or have out-of-distribution costs for the goods.
Some of the most relevant applications of multi-agent systems, such as autonomous driving or factories as a service, display mixed-motive scenarios where agents might have conflicting goals. In these settings, agents trained under independent learning may learn outcomes that are poor in terms of cooperation, such as overly greedy behavior. Motivated by real-world societies, in this work we propose utilizing market forces to provide incentives for agents to become cooperative. As demonstrated in an iterated version of the Prisoner's Dilemma, the proposed market formulation can change the dynamics of the game so that cooperative policies are learned consistently. Furthermore, we evaluate our approach for different numbers of agents in spatially and temporally extended settings. We empirically find that the presence of markets can improve both the overall outcome and the agents' individual returns via their trading activities.
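A toy illustration of the incentive-shaping idea above: adding a market-style side payment to the Prisoner's Dilemma so that cooperation becomes individually rational. The specific payment scheme is an illustrative assumption, not the paper's market formulation.

```python
import numpy as np

C, D = 0, 1
# base[a0, a1] = (reward of agent 0, reward of agent 1) in the one-shot Prisoner's Dilemma.
base = np.array([[(3, 3), (0, 5)],
                 [(5, 0), (1, 1)]], dtype=float)

def with_market(payoffs, price):
    """Hypothetical scheme: each cooperator sells a 'cooperation service' at `price`."""
    shaped = payoffs.copy()
    for a0 in (C, D):
        for a1 in (C, D):
            if a0 == C:            # agent 0 cooperated and gets paid by agent 1
                shaped[a0, a1] += (price, -price)
            if a1 == C:            # agent 1 cooperated and gets paid by agent 0
                shaped[a0, a1] += (-price, price)
    return shaped

shaped = with_market(base, price=2.5)
# Agent 0's gain from defecting against a cooperator: positive in the base game,
# negative once the side payment is large enough, so cooperation becomes stable.
print("deviation gain without market:", base[D, C][0] - base[C, C][0])
print("deviation gain with market   :", shaped[D, C][0] - shaped[C, C][0])
```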
Data poisoning for reinforcement learning has historically focused on general performance degradation, and targeted attacks have succeeded via perturbations that involve control of the victim's policy and rewards. We introduce an insidious poisoning attack for reinforcement learning which causes agent misbehavior only at specific target states, all while minimally modifying a small fraction of training observations and without assuming any control over policy or reward. We accomplish this by adapting a recent technique, gradient alignment, to reinforcement learning. We test our method and demonstrate success in two Atari games of varying difficulty.
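A minimal numerical sketch of the gradient-alignment idea described above: perturb a single training observation, within a small budget, so that the gradient of the training loss on that sample aligns (in cosine similarity) with the gradient of the attacker's objective at a target state. The linear value model and squared losses are illustrative assumptions; the paper applies the technique to deep RL agents on Atari.

```python
import numpy as np

rng = np.random.default_rng(0)
w        = rng.normal(size=4)        # current parameters of a toy linear value model
s, y     = rng.normal(size=4), 1.0   # one clean training observation and its target
s_target = rng.normal(size=4)        # state where the attacker wants misbehaviour
y_attack = -1.0                      # value the attacker wants predicted there
BUDGET   = 0.1                       # assumed L-infinity budget on the perturbation

def grad_train(delta):
    """Gradient w.r.t. w of the squared training loss on the poisoned sample s + delta."""
    x = s + delta
    return 2.0 * (w @ x - y) * x

grad_attack = 2.0 * (w @ s_target - y_attack) * s_target  # gradient of the attacker loss

def alignment(delta):
    """Cosine similarity between the training gradient and the attacker gradient."""
    g = grad_train(delta)
    return float(g @ grad_attack / (np.linalg.norm(g) * np.linalg.norm(grad_attack) + 1e-12))

# Maximise the alignment over the perturbation with projected finite-difference ascent.
delta, lr, eps = np.zeros(4), 0.05, 1e-4
for _ in range(300):
    g = np.array([(alignment(delta + eps * e) - alignment(delta - eps * e)) / (2 * eps)
                  for e in np.eye(4)])
    delta = np.clip(delta + lr * g, -BUDGET, BUDGET)

print("alignment before:", round(alignment(np.zeros(4)), 3))
print("alignment after :", round(alignment(delta), 3))
```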
Digitization and remote connectivity have enlarged the attack surface and made cyber systems more vulnerable. As attackers become increasingly sophisticated and resourceful, merely relying on traditional cyber protection, such as intrusion detection, firewalls, and encryption, is insufficient to secure cyber systems. Cyber resilience provides a new security paradigm that complements inadequate protection with resilience mechanisms. A cyber-resilient mechanism (CRM) adapts to known or zero-day threats and uncertainties in real time and strategically responds to them in order to maintain the critical functions of the cyber system when a successful attack occurs. Feedback architectures play a pivotal role in enabling the online sensing, reasoning, and actuation processes of a CRM. Reinforcement learning (RL) is an essential tool that epitomizes the feedback architecture for cyber resilience. It allows the CRM to provide sequential responses to attacks with limited or no prior knowledge of the environment and the attacker. In this work, we review the literature on RL for cyber resilience and discuss cyber resilience against three major types of vulnerabilities, i.e., posture-related, information-related, and human-related vulnerabilities. We introduce three application domains of CRMs: moving target defense, defensive cyber deception, and assistive human security technologies. The RL algorithms also have vulnerabilities themselves. We explain the three vulnerabilities of RL and present attack models in which the attacker targets the information exchanged between the environment and the agent: the rewards, the state observations, and the action commands. We show that the attacker can trick the RL agent into learning a nefarious policy with minimal attack effort. Finally, we discuss future challenges of RL for cyber security and resilience, as well as emerging applications of RL-based CRMs.
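The reward-channel attack model mentioned above can be sketched as follows: a man-in-the-middle perturbs a fraction of the rewards seen by a simple value-learning agent until it prefers the action the attacker wants. The two-action bandit setting, the attack rule, and all numbers are illustrative assumptions.

```python
import random
from collections import defaultdict

ACTIONS = ["safe", "nefarious"]
TRUE_REWARD = {"safe": 1.0, "nefarious": 0.0}
EPSILON, ATTACK_SHIFT = 0.2, 2.0

def observed_reward(action, attack_prob):
    """The attacker intercepts the reward channel with probability `attack_prob`."""
    r = TRUE_REWARD[action]
    if random.random() >= attack_prob:
        return r
    return r + ATTACK_SHIFT if action == "nefarious" else r - ATTACK_SHIFT

def train(attack_prob, steps=5_000):
    Q, N = defaultdict(float), defaultdict(int)
    for _ in range(steps):
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda u: Q[u])
        N[a] += 1
        Q[a] += (observed_reward(a, attack_prob) - Q[a]) / N[a]   # sample-mean value estimate
    return max(ACTIONS, key=lambda u: Q[u])

for p in (0.0, 0.2, 0.5):
    print(f"fraction of rewards attacked = {p:.1f} -> learned action: {train(p)}")
```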