Understanding the emergent behavior of reinforcement learning (RL) agents can be difficult, since such agents are often trained in complex environments using highly complex decision-making procedures. This has given rise to a variety of approaches to explainability in RL, which aim to reconcile discrepancies that may arise between the behavior of an agent and the behavior anticipated by an observer. Most recent approaches depend on domain knowledge, which may not always be available, on an analysis of the agent's policy, or on an analysis of specific elements of the underlying environment, typically modeled as a Markov decision process (MDP). Our key claim is that even if the underlying MDP is not fully known (e.g., the transition probabilities have not been accurately learned) and is not maintained by the agent (i.e., when using model-free methods), it can nevertheless be exploited to automatically generate explanations. To this end, we suggest using formal MDP abstractions and transforms, previously used in the literature to expedite the search for optimal policies, to automatically produce explanations. Since such transforms are typically based on a symbolic representation of the environment, they may represent meaningful explanations for the gap between the anticipated and actual agent behavior. We formally define this problem, suggest a class of transforms that can be used for explaining emergent behaviors, and propose methods for efficiently searching for an explanation. We demonstrate the approach on a set of standard benchmarks.
Recent advances in multi-agent reinforcement learning (MARL) provide a variety of tools that support the ability of agents to adapt to unexpected changes in their environment, and to cope with the dynamic nature of the environment (which may be exacerbated by the presence of other agents). In this work, we highlight the relationship between a group's ability to collaborate effectively and the group's resilience, which we measure as the group's ability to adapt to perturbations in the environment. To promote resilience, we suggest facilitating collaboration via a novel confusion-based communication protocol, according to which agents broadcast observations that are misaligned with their previous experience. We allow decisions regarding the breadth and frequency of the communicated information to be learned autonomously by the agents, who are incentivized to reduce confusion. We present an empirical evaluation of our approach in a variety of MARL settings.
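To make the confusion-based broadcasting idea more concrete, the sketch below shows one plausible form of such a rule: an agent scores how surprising a new observation is under a model of its past experience and broadcasts it only above a threshold. The names (`experience_model`, `log_prob`, `threshold`) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed names, not the paper's code): broadcast an
# observation only when it is misaligned with past experience, i.e. when its
# "confusion" score under a learned model of previous observations is high.
def maybe_broadcast(observation, experience_model, threshold):
    confusion = -experience_model.log_prob(observation)  # high = surprising
    return observation if confusion > threshold else None
```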
Reinforcement Learning (RL) is a popular machine learning paradigm where intelligent agents interact with the environment to fulfill a long-term goal. Driven by the resurgence of deep learning, Deep RL (DRL) has witnessed great success over a wide spectrum of complex control tasks. Despite the encouraging results achieved, the deep neural network-based backbone is widely deemed a black box that prevents practitioners from trusting and employing trained agents in realistic scenarios where high security and reliability are essential. To alleviate this issue, a large volume of literature has been devoted to shedding light on the inner workings of intelligent agents, by constructing intrinsic interpretability or post-hoc explainability. In this survey, we provide a comprehensive review of existing works on eXplainable RL (XRL) and introduce a new taxonomy in which prior works are clearly categorized into model-explaining, reward-explaining, state-explaining, and task-explaining methods. We also review and highlight RL methods that conversely leverage human knowledge to promote the learning efficiency and performance of agents, even though this kind of method is often ignored in the XRL field. Some challenges and opportunities in XRL are discussed. This survey intends to provide a high-level summarization of XRL and to motivate future research on more effective XRL solutions. Corresponding open source codes are collected and categorized at https://github.com/Plankson/awesome-explainable-reinforcement-learning.
The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.
Although deep reinforcement learning has become a promising machine learning approach for sequential decision-making problems, it is still not mature enough for high-stakes domains such as autonomous driving or medical applications. In such contexts, a learned policy needs, for instance, to be interpretable, so that it can be inspected before any deployment (e.g., for safety and verifiability reasons). This survey provides an overview of the various approaches to achieving higher interpretability in reinforcement learning (RL). To that end, we distinguish interpretability (as a property of a model) from explainability (as a post-hoc operation, with the intervention of a proxy) and discuss them in the context of RL, with an emphasis on the former notion. In particular, we argue that interpretable RL may embrace different facets: interpretable inputs, interpretable (transition/reward) models, and interpretable decision-making. Based on this scheme, we summarize and analyze recent work related to interpretable RL, with an emphasis on papers published in the past 10 years. We also briefly discuss some related research areas and point to potential promising research directions.
Besides the recent impressive results on reinforcement learning (RL), safety is still one of the major research challenges in RL. RL is a machine-learning approach to determine near-optimal policies in Markov decision processes (MDPs). In this paper, we consider the setting where the safety-relevant fragment of the MDP together with a temporal logic safety specification is given and many safety violations can be avoided by planning ahead a short time into the future. We propose an approach for online safety shielding of RL agents. During runtime, the shield analyses the safety of each available action. For any action, the shield computes the maximal probability to not violate the safety specification within the next $k$ steps when executing this action. Based on this probability and a given threshold, the shield decides whether to block an action from the agent. Existing offline shielding approaches compute exhaustively the safety of all state-action combinations ahead of time, resulting in huge computation times and large memory consumption. The intuition behind online shielding is to compute at runtime the set of all states that could be reached in the near future. For each of these states, the safety of all available actions is analysed and used for shielding as soon as one of the considered states is reached. Our approach is well suited for high-level planning problems where the time between decisions can be used for safety computations and it is sustainable for the agent to wait until these computations are finished. For our evaluation, we selected a 2-player version of the classical computer game SNAKE. The game represents a high-level planning problem that requires fast decisions and the multiplayer setting induces a large state space, which is computationally expensive to analyse exhaustively.
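As a rough illustration of the online shielding rule described above, the sketch below computes, for each available action, the maximal probability of remaining safe for the next $k$ steps in the safety-relevant MDP fragment, and blocks actions whose probability falls below the threshold. The function names and data structures are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of online shielding (assumed interfaces, not the paper's code):
# transition_probs(s, a) -> iterable of (next_state, probability)
# available_actions(s)   -> list of actions applicable in s
# is_unsafe(s)           -> True if s violates the safety specification
from functools import lru_cache

def shield(state, available_actions, transition_probs, is_unsafe, k, threshold):
    @lru_cache(maxsize=None)
    def max_safe_prob(s, steps):
        # Maximal probability of not violating safety within the next `steps` steps,
        # assuming the agent acts optimally with respect to safety afterwards.
        if is_unsafe(s):
            return 0.0
        if steps <= 0:
            return 1.0
        return max(
            sum(p * max_safe_prob(s_next, steps - 1)
                for s_next, p in transition_probs(s, a))
            for a in available_actions(s)
        )

    allowed = []
    for a in available_actions(state):
        prob_safe = sum(p * max_safe_prob(s_next, k - 1)
                        for s_next, p in transition_probs(state, a))
        if prob_safe >= threshold:
            allowed.append(a)   # action is not blocked by the shield
    return allowed
```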
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
In this paper, we consider the problem of path finding for a set of homogeneous and autonomous agents navigating a previously unknown stochastic environment. In our problem setting, each agent attempts to maximize a given utility function while respecting safety properties. Our solution is based on ideas from evolutionary game theory, namely replicating policies that perform well and diminishing ones that do not. We do a comprehensive comparison with related multiagent planning methods, and show that our technique beats state of the art RL algorithms in minimizing path length by nearly 30% in large spaces. We show that our algorithm is computationally faster than deep RL methods by at least an order of magnitude. We also show that it scales better with an increase in the number of agents as compared to other methods, path planning methods in particular. Lastly, we empirically prove that the policies that we learn are evolutionarily stable and thus impervious to invasion by any other policy.
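The core evolutionary idea, replicating policies that perform well and diminishing those that do not, can be illustrated with a discrete replicator-style update over a population of candidate policies. This is only a sketch of the principle, not the paper's algorithm, and the fitness values below are made up for illustration.

```python
import numpy as np

def replicator_update(proportions, fitness, step=0.1):
    """One discrete replicator-dynamics step: policies with above-average
    fitness grow in the population, below-average ones shrink."""
    avg = float(np.dot(proportions, fitness))
    new = proportions * (1.0 + step * (fitness - avg))
    new = np.clip(new, 0.0, None)
    return new / new.sum()

# Example: three candidate navigation policies with observed utilities
# (e.g. negative path length plus a safety bonus); values are illustrative.
p = np.array([1/3, 1/3, 1/3])
f = np.array([0.8, 0.5, 0.2])
for _ in range(50):
    p = replicator_update(p, f)
print(p)  # population mass concentrates on the best-performing policy
```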
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, inverse reinforcement learning that are related but are not classical RL algorithms. The role of simulators in training agents, methods to validate, test and robustify existing solutions in RL are discussed.
Two common approaches to sequential decision-making are AI planning (AIP) and reinforcement learning (RL). Each has strengths and weaknesses. AIP is interpretable, easy to integrate with symbolic knowledge, and often efficient, but requires an up-front specification of a logical domain and is sensitive to noise; RL only requires a specification of rewards and is robust to noise, but is sample-inefficient and not easily supplied with external knowledge. We propose an integrative approach that combines high-level planning with RL, retaining interpretability, transfer, and efficiency, while allowing for robust learning of low-level plan actions. Our approach defines options in hierarchical reinforcement learning (HRL) from AIP operators by establishing a correspondence between the state transition model of the AI planning problem and the abstract state transition system of a Markov decision process (MDP). Options are learned by adding intrinsic rewards that encourage consistency between the MDP and AIP transition models. We demonstrate the benefit of our integrated approach by comparing the performance of RL and HRL algorithms in MiniGrid and N-rooms environments, showing the advantage of our method over existing ones.
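A minimal sketch of the intrinsic-reward idea, assuming each option is derived from a planning operator with symbolic preconditions and effects: on option termination the agent receives a bonus when the abstract state it reached satisfies the operator's declared effects. The names and the bonus value are illustrative, not the paper's definitions.

```python
# Illustrative sketch only: reward consistency between the AIP transition
# model and what the option actually achieved in the MDP.
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    preconditions: frozenset   # symbolic facts required to start the option
    effects: frozenset         # symbolic facts expected to hold on termination

BONUS = 1.0  # assumed magnitude of the intrinsic reward

def intrinsic_reward(operator: Operator, abstract_state_after: frozenset) -> float:
    """Bonus if the abstract state reached by the option contains the operator's effects."""
    return BONUS if operator.effects <= abstract_state_after else 0.0
```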
Noisy sensing, imperfect control, and environment changes are defining characteristics of many real-world robot tasks. The partially observable Markov decision process (POMDP) provides a principled mathematical framework for modeling and solving robot decision and control tasks under uncertainty. Over the last decade it has seen many successful applications, spanning localization and navigation, search and tracking, autonomous driving, multi-robot systems, manipulation, and human-robot interaction. This survey aims to bridge the gap between the development of POMDP models and algorithms on one end and their application to diverse robot decision tasks on the other. It analyzes the characteristics of these tasks and connects them with the mathematical and algorithmic properties of the POMDP framework for effective modeling and solution. For practitioners, the survey provides key task characteristics for deciding when and how to apply POMDPs to robot tasks successfully. For POMDP algorithm designers, it provides new insights into the unique challenges of applying POMDPs to robot systems and points to promising new directions for further research.
Recent applications of autonomous agents and robots, such as self-driving cars, scenario-based trainers, exploration robots, and service robots, have brought attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. Although immensely successful, approaches based on connectionist deep-learning neural networks lack the ability to explain their decisions and actions to others. Without symbolic explanation capabilities, they are black boxes, which renders their decisions or actions opaque and makes it difficult to trust them in safety-critical applications. The recent stance on the explainability of AI systems has witnessed several approaches to eXplainable Artificial Intelligence (XAI); however, most of the studies have focused on data-driven XAI systems applied in the computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots are still missing. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating agent perception functions (e.g., senses and vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) with humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a roadmap for achieving effective goal-driven explainable agents and robots.
Deep reinforcement learning (DRL) has empowered a variety of artificial intelligence fields, including pattern recognition, robotics, recommendation systems, and gaming. Similarly, graph neural networks (GNNs) have demonstrated superior performance in supervised learning on graph-structured data. Recently, the fusion of GNNs with DRL for graph-structured environments has attracted a lot of attention. This paper provides a comprehensive review of these hybrid works. The works can be classified into two categories: (1) algorithmic enhancement, where DRL and GNN complement each other for better utility; and (2) application-specific enhancement, where DRL and GNN support each other. This fusion effectively addresses various complex problems in engineering and the life sciences. Based on the review, we further analyze the applicability and benefits of fusing these two domains, especially in terms of increasing generalizability and reducing computational complexity. Finally, the key challenges in integrating DRL and GNNs, and potential future research directions, are highlighted, which will be of interest to the broader machine learning community.
Reinforcement learning (RL), and more recently deep reinforcement learning, is a popular approach to solving sequential decision-making problems modeled as Markov decision processes. RL modeling of a problem and the selection of algorithms and hyper-parameters require careful consideration, as different configurations may lead to completely different performance. These considerations are mainly the task of RL experts; however, RL is progressively becoming popular in other fields where the researchers and system designers are not RL experts. Moreover, many modeling decisions, such as defining the state and action space, the batch size and frequency of batch updates, and the number of timesteps, are typically made manually. For these reasons, automating the different components of the RL framework is of great importance and has attracted much attention in recent years. Automated RL provides a framework in which the different components of RL, including MDP modeling, algorithm selection, and hyper-parameter optimization, are modeled and defined automatically. In this article, we explore the literature and present recent work that can be used in automated RL. Moreover, we discuss the challenges, open questions, and research directions in AutoRL.
Reinforcement learning (RL) relies heavily on exploration to learn from its environment and maximize observed rewards. Therefore, it is essential to design a reward function that guarantees optimal learning from the received experience. Previous work has combined automata and logic-based reward shaping with environment assumptions to provide an automatic mechanism for synthesizing the reward function from the task. However, there is limited work on how to extend logic-based reward shaping to multi-agent reinforcement learning (MARL). If the task requires cooperation, the environment needs to consider the joint state in order to track the other agents, and thus suffers from the curse of dimensionality with respect to the number of agents. This project explores how logic-based reward shaping can be designed for different scenarios and tasks. We present a novel method for semi-centralized logic-based reward shaping for MARL that is scalable in the number of agents, and we evaluate it in multiple scenarios.
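One common way to realize logic-based reward shaping is with a small automaton (a "reward machine") that advances on labelled events and emits shaping rewards as the task progresses. The sketch below, with a made-up two-step task (reach A, then B), illustrates the single-agent building block that a multi-agent scheme would build on; it is an assumption-laden illustration, not the paper's construction.

```python
# Illustrative sketch only: a tiny finite-state reward machine over event labels.
class RewardMachine:
    def __init__(self):
        self.state = "u0"
        # (automaton state, event label) -> (next state, shaping reward)
        self.delta = {
            ("u0", "at_A"): ("u1", 0.5),
            ("u1", "at_B"): ("u_acc", 1.0),
        }

    def step(self, event_label):
        """Advance on a labelled environment event; unlisted events self-loop with 0 reward."""
        next_state, reward = self.delta.get((self.state, event_label),
                                            (self.state, 0.0))
        self.state = next_state
        return reward
```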
Over the past decade there have been significant advances in multi-agent reinforcement learning (MARL), but there are still numerous challenges, such as high sample complexity and slow convergence to stable policies, that need to be overcome before wide deployment is possible. However, in practice, many real-world environments already deploy sub-optimal or heuristic approaches for generating policies. An interesting question is how best to use such approaches as advisors to help improve reinforcement learning in multi-agent domains. In this paper, we provide a principled framework for incorporating action recommendations from online sub-optimal advisors in multi-agent settings. We describe the problem of ADvising Multiple Intelligent Reinforcement Agents (ADMIRAL) in general-sum stochastic game environments and present two novel Q-learning-based algorithms: ADMIRAL - Decision Making (ADMIRAL-DM) and ADMIRAL - Advisor Evaluation (ADMIRAL-AE), which allow us to improve learning by appropriately incorporating advice from an advisor (ADMIRAL-DM) and to evaluate the effectiveness of an advisor (ADMIRAL-AE). We analyze the algorithms theoretically and provide fixed-point guarantees regarding their learning in general-sum stochastic games. Furthermore, extensive experiments illustrate that these algorithms can be used in a variety of environments, have performance that compares favorably to other related baselines, can scale to large state-action spaces, and are robust to poor advice from advisors.
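As a high-level illustration of incorporating an advisor into Q-learning (not the ADMIRAL-DM update itself), the sketch below occasionally follows the advisor's recommended action during exploration while keeping the standard Q-learning update unchanged; all names and probabilities are assumptions.

```python
# Illustrative sketch only: advisor-guided exploration on top of tabular Q-learning.
# Q: e.g. collections.defaultdict(float) keyed by (state, action).
import random

def select_action(Q, state, actions, advisor, eps=0.1, follow_advisor=0.5):
    if random.random() < eps:
        if random.random() < follow_advisor:
            return advisor(state)          # advisor's recommended action
        return random.choice(actions)      # ordinary random exploration
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.99):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```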
The proliferation of artificial intelligence increasingly depends on model understanding. Understanding requires both an interpretation - human reasoning about a model's behavior - and an explanation - a symbolic representation of the model's functioning. Notwithstanding the imperative of transparency for safety, trust, and acceptance, the opacity of state-of-the-art reinforcement learning algorithms conceals the rudiments of their learned strategies. We have developed a policy regularization method that asserts the global intrinsic affinities of learned strategies. These affinities provide a means of reasoning about a policy's behavior, thus making it inherently interpretable. We have demonstrated our method in personalized prosperity management, where an individual's spending behavior over time dictates their investment strategy, i.e., distinct spending personalities may have dissimilar associations with different investment classes. We now explain our model by reproducing the underlying prototypical policies with discretized Markov models. These global surrogates are symbolic representations of the prototypical policies.
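A minimal sketch of building a discretized Markov-model surrogate from rollouts of a learned policy, assuming a user-supplied state discretizer: transitions between discrete bins are counted and normalized into a transition matrix that can then be inspected symbolically. The names and the discretization are illustrative assumptions, not the paper's construction.

```python
# Illustrative sketch only: estimate a discrete Markov surrogate of a policy's behavior.
import numpy as np

def markov_surrogate(trajectories, discretize, n_bins):
    """trajectories: sequences of raw states observed while executing the policy;
    discretize(state) -> bin index in [0, n_bins)."""
    counts = np.zeros((n_bins, n_bins))
    for traj in trajectories:
        bins = [discretize(s) for s in traj]
        for i, j in zip(bins[:-1], bins[1:]):
            counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Normalize rows into transition probabilities; unvisited rows stay zero.
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
```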
Non-cooperative and cooperative games with a very large number of players have many applications, but generally remain intractable as the number of players grows. Introduced by Lasry and Lions, and by Huang, Caines and Malhamé, Mean Field Games (MFGs) rely on a mean-field approximation to allow the number of players to grow to infinity. Traditional methods for solving these games generally rely on solving partial or stochastic differential equations with full knowledge of the model. Recently, reinforcement learning (RL) has appeared promising for solving complex problems. By combining MFGs and RL, we hope to solve games at a very large scale, both in terms of population size and environment complexity. In this survey, we review the recent literature on RL methods for learning Nash equilibria in MFGs. We first identify the most common settings (static, stationary, and evolutive). We then present a general framework for classical iterative methods (based on best-response computation or policy evaluation) that solve MFGs exactly. Building on these algorithms and their connection with Markov decision processes, we explain how RL can be used to learn MFG solutions in a model-free way. Finally, we present numerical illustrations on benchmark problems and conclude with some perspectives.
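The classical iterative scheme mentioned above can be sketched as a simple fixed-point loop: compute a best response against the current mean field (possibly with an RL subroutine), then update the mean field induced by that policy. The helper functions are placeholders for a concrete MFG model, and convergence typically requires damping or averaging, which this sketch omits.

```python
# Illustrative sketch only: naive best-response iteration for a mean field game.
def solve_mfg(initial_mean_field, best_response, induced_mean_field, n_iters=100):
    mu, pi = initial_mean_field, None
    for _ in range(n_iters):
        pi = best_response(mu)        # best response against the current mean field
        mu = induced_mean_field(pi)   # population distribution when all agents play pi
    return pi, mu
```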
The rapid changes in the finance industry due to the increasing amount of data have revolutionized the techniques for data processing and data analysis and brought new theoretical and computational challenges. In contrast to classical stochastic control theory and other analytical approaches to financial decision-making problems that rely heavily on model assumptions, new developments in reinforcement learning (RL) are able to make full use of large amounts of financial data with fewer model assumptions and to improve decisions in complex financial environments. This survey paper aims to review recent developments in, and uses of, RL approaches in finance. We introduce Markov decision processes, which is the setting for many commonly used RL approaches. Various algorithms are then introduced, with a focus on value-based and policy-based methods that do not require any model assumptions. Connections are made with neural networks to extend the framework to encompass deep RL algorithms. Our survey concludes by discussing the application of these RL algorithms to a variety of decision-making problems in finance, including optimal execution, portfolio optimization, option pricing and hedging, market making, smart order routing, and robo-advising.