While deep reinforcement learning has proven to be successful in solving control tasks, the "black-box" nature of an agent has raised increasing concerns. We propose a prototype-based post-hoc policy explainer, ProtoX, that explains a black-box agent by prototyping the agent's behaviors into scenarios, each represented by a prototypical state. When learning prototypes, ProtoX considers both visual similarity and scenario similarity. The latter is unique to the reinforcement learning context, since it explains why the same action is taken in visually different states. To teach ProtoX about visual similarity, we pre-train an encoder using contrastive learning via self-supervised learning to recognize states as similar if they occur close together in time and receive the same action from the black-box agent. We then add an isometry layer to allow ProtoX to adapt scenario similarity to the downstream task. ProtoX is trained via imitation learning using behavior cloning, and thus requires no access to the environment or agent. In addition to explanation fidelity, we design different prototype shaping terms in the objective function to encourage better interpretability. We conduct various experiments to test ProtoX. Results show that ProtoX achieves high fidelity to the original black-box agent while providing meaningful and understandable explanations.
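A minimal sketch of the prototype-based explanation step described above, assuming a pre-trained encoder and a set of learned prototypical states; all names, shapes, and the softmax weighting are illustrative assumptions, not the ProtoX implementation:

```python
import numpy as np

def explain_action(encode, prototypes, prototype_actions, state, temperature=1.0):
    """Score a state against learned prototypical states and return the action
    implied by the closest scenario, plus per-prototype similarity weights.

    encode            : callable mapping a raw state to a latent vector
    prototypes        : (K, d) array of prototypical latent vectors
    prototype_actions : length-K list of actions associated with each prototype
    """
    z = encode(state)                               # latent code of the query state
    dists = np.linalg.norm(prototypes - z, axis=1)  # distance to each prototype
    weights = np.exp(-dists / temperature)
    weights /= weights.sum()                        # similarity-based weights over scenarios
    best = int(np.argmin(dists))                    # most similar scenario
    return prototype_actions[best], weights
```

The explanation for a decision is then simply the prototypical state(s) carrying the largest weights, which a user can inspect visually.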
Reinforcement Learning (RL) is a popular machine learning paradigm where intelligent agents interact with the environment to fulfill a long-term goal. Driven by the resurgence of deep learning, Deep RL (DRL) has witnessed great success over a wide spectrum of complex control tasks. Despite the encouraging results achieved, the deep neural network-based backbone is widely deemed a black box that impedes practitioners from trusting and employing trained agents in realistic scenarios where high security and reliability are essential. To alleviate this issue, a large volume of literature has been devoted to shedding light on the inner workings of intelligent agents, by constructing intrinsic interpretability or post-hoc explainability. In this survey, we provide a comprehensive review of existing works on eXplainable RL (XRL) and introduce a new taxonomy where prior works are clearly categorized into model-explaining, reward-explaining, state-explaining, and task-explaining methods. We also review and highlight RL methods that conversely leverage human knowledge to promote learning efficiency and performance of agents, even though this kind of method is often ignored in the XRL field. Some challenges and opportunities in XRL are discussed. This survey intends to provide a high-level summarization of XRL and to motivate future research on more effective XRL solutions. Corresponding open source codes are collected and categorized at https://github.com/Plankson/awesome-explainable-reinforcement-learning.
While deep reinforcement learning has become a promising machine learning approach for sequential decision-making problems, it is still not mature enough for high-stakes domains such as autonomous driving or medical applications. In such contexts, a learned policy needs, for instance, to be interpretable, so that it can be inspected before any deployment (e.g., for safety and verifiability reasons). This survey provides an overview of the various approaches to achieve higher interpretability in reinforcement learning (RL). To that aim, we distinguish interpretability (as a property of a model) from explainability (as a post-hoc operation, with the intervention of a proxy) and discuss them in the context of RL, with an emphasis on the former notion. In particular, we argue that interpretable RL may embrace different facets: interpretable inputs, interpretable (transition/reward) models, and interpretable decision-making. Based on this scheme, we summarize and analyze recent work related to interpretable RL, with an emphasis on papers published in the past 10 years. We also briefly discuss some related research areas and point to some potentially promising research directions.
Since their first introduction in 2015, the use of deep reinforcement learning (DRL) schemes has increased dramatically. Although used in many different applications, they still suffer from a lack of explainability. This lack of explainability has bred distrust in the use of DRL solutions among researchers and the general public. To address this problem, the field of explainable artificial intelligence (XAI) has emerged, comprising a variety of different methods that aim to open up the DRL black box, ranging from interpretable symbolic decision trees to numerical methods such as Shapley values. This review examines which methods are being used and for which applications, in order to determine which models are best suited to each application and whether any methods are being underutilized.
Transformer, originally devised for natural language processing, has also achieved significant success in computer vision. Thanks to its strong expressive power, researchers are investigating ways to deploy transformers in reinforcement learning (RL), and transformer-based models have manifested their potential in representative RL benchmarks. In this paper, we collect and dissect recent advances on transforming RL by transformer (transformer-based RL or TRL), in order to explore its development trajectory and future trend. We group existing developments into two categories: architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation and autonomous driving. For architecture enhancement, these methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework, which model agents and environments much more precisely than deep RL methods, but they are still limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". For trajectory optimization, these methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior cloning framework, which are able to extract policies from static datasets and fully use the long-sequence modeling capability of the transformer. Given these advancements, extensions and challenges in TRL are reviewed and proposals for future directions are discussed. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
Learning rational behaviors in open-world games such as Minecraft remains challenging for reinforcement learning (RL) research due to partial observability, high-dimensional visual perception, and delayed rewards. To address this, we propose a sample-efficient hierarchical RL approach equipped with representation learning and imitation learning to deal with perception and exploration. Specifically, our approach consists of two levels of hierarchy, where the high-level controller learns a policy over options and the low-level workers learn to solve each sub-task. To improve the learning of sub-tasks, we propose a combination of techniques including 1) action-aware representation learning, which captures the underlying relations between actions and representations, 2) discriminator-based self-imitation learning for efficient exploration, and 3) ensemble behavior cloning with consistency filtering for policy robustness. Extensive experiments show that JueWu-MC significantly improves sample efficiency and outperforms a set of baselines by a large margin. Notably, we won the championship of the NeurIPS MineRL 2021 research competition and achieved the highest performance score.
Many works in explainable AI have focused on explaining black-box classification models. Explaining deep reinforcement learning (RL) policies in a manner that could be understood by domain users has received much less attention. In this paper, we propose a novel perspective to understanding RL policies based on identifying important states from automatically learned meta-states. The key conceptual difference between our approach and many previous ones is that we form meta-states based on locality governed by the expert policy dynamics rather than based on similarity of actions, and that we do not assume any particular knowledge of the underlying topology of the state space. Theoretically, we show that our algorithm to find meta-states converges and the objective that selects important states from each meta-state is submodular leading to efficient high quality greedy selection. Experiments on four domains (four rooms, door-key, minipacman, and pong) and a carefully conducted user study illustrate that our perspective leads to better understanding of the policy. We conjecture that this is a result of our meta-states being more intuitive in that the corresponding important states are strong indicators of tractable intermediate goals that are easier for humans to interpret and follow.
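The submodularity claim above suggests the standard greedy routine for picking important states from a meta-state; the sketch below is a generic illustration under that assumption, and the coverage function is a placeholder rather than the paper's actual objective:

```python
import numpy as np

def greedy_select(states, coverage, k):
    """Greedily pick k important states from one meta-state.

    states   : list of candidate states
    coverage : set function coverage(selected_indices) -> float, assumed
               monotone submodular (placeholder for the paper's objective)
    k        : number of important states to return
    """
    selected = []
    for _ in range(k):
        candidates = [i for i in range(len(states)) if i not in selected]
        gains = [coverage(selected + [i]) - coverage(selected) for i in candidates]
        selected.append(candidates[int(np.argmax(gains))])   # largest marginal gain
    return [states[i] for i in selected]
```

For monotone submodular objectives, this greedy selection carries the usual (1 - 1/e) approximation guarantee, which is what makes the high-quality selection efficient.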
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high-dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in the real-world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, and inverse reinforcement learning, which are related but are not classical RL algorithms. The role of simulators in training agents, and methods to validate, test and robustify existing solutions in RL, are discussed.
Although deep reinforcement learning (RL) has recently achieved many successes, its methods are still data-inefficient, which makes many problems prohibitively expensive to solve in terms of data. We aim to remedy this by taking advantage of the rich supervisory signal in unlabeled data to learn state representations. This paper introduces three different representation-learning algorithms that have access to different subsets of the data sources used by traditional RL algorithms: (i) GRICA is inspired by independent component analysis (ICA) and trains a deep neural network to output statistically independent features of the input. GRICA does this by minimizing the mutual information between each feature and the other features, and it only requires an unsorted collection of environment states. (ii) Latent Representation Prediction (LARP) requires more context: in addition to a state as input, it also needs the previous state and the action connecting them. This method learns state representations by predicting the next state of the environment from the current state and action, and the predictor is used together with a graph search algorithm. (iii) The third method learns a state representation by training a deep neural network to learn a smoothed version of the reward function. The representation is used to preprocess inputs to deep RL, while the reward predictor is used for reward shaping. This method only needs state-reward pairs from the environment to learn the representation. We find that each method has its strengths and weaknesses, and conclude from our experiments that including unsupervised representation learning in RL problem-solving pipelines can speed up learning.
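A toy sketch of the LARP objective as described above: the encoder is trained so that the next state's latent is predictable from the current latent and action. The function signature and loss form are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def larp_loss(encoder, predictor, s_prev, action, s_next):
    """Latent Representation Prediction objective for one transition:
    predict the next state's latent from the previous latent and the action.

    encoder   : callable state -> latent vector
    predictor : callable (latent, action) -> predicted next latent
    """
    z_prev = encoder(s_prev)
    z_next = encoder(s_next)
    z_pred = predictor(z_prev, action)
    return float(np.mean((z_pred - z_next) ** 2))   # squared prediction error in latent space
```

At decision time, the same predictor can be rolled forward inside a graph search to evaluate candidate action sequences, as the abstract notes.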
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted, to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem-solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem that it was conceived for. Furthermore, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
Deep reinforcement learning (RL) agents are becoming increasingly proficient in a range of complex control tasks. However, the agent's behavior is usually hard to interpret due to the introduction of black-box functions, making it difficult to gain users' trust. Although there are some interesting explanation methods for vision-based RL, most of them cannot uncover temporal causal information, which raises questions about their reliability. To address this problem, we present a temporal-spatial causal interpretation (TSCI) model to understand the agent's long-term behavior, which is essential for sequential decision-making. The TSCI model builds on the formulation of temporal causality, which reflects the temporal causal relations between sequential observations and the decisions of an RL agent. A separate causal discovery network is then employed to identify temporal-spatial causal features, which are constrained to satisfy temporal causality. The TSCI model is applicable to recurrent agents and can be used to discover causal features with high training efficiency. Empirical results show that the TSCI model can produce high-resolution and sharp attention masks to highlight the task-relevant temporal-spatial information that constitutes most of the evidence about how vision-based RL agents make sequential decisions. In addition, we show that our method is able to provide valuable causal explanations for vision-based RL agents from a temporal perspective.
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years, however, the field is gaining attention recently due to advances in computing and sensing as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations; without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction such as humanoid robots, self-driving vehicles, human computer interaction and computer games to name a few. However, specialized algorithms are needed to effectively and robustly learn models as learning by imitation poses its own set of challenges. In this paper, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games as these domains are the most popular in the literature and provide a wide array of problems and methodologies. We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications and highlight current and future research directions.
In the context of imitation learning, providing expert trajectories is usually expensive and time-consuming. The goal must therefore be to create algorithms that require as little expert data as possible. In this paper, we present an algorithm that imitates the expert's higher-level strategy rather than just imitating the expert at the action level, which, we hypothesize, requires less expert data and makes training more stable. As a prior, we assume that the higher-level strategy is to reach an unknown target state area, which we hypothesize is a valid prior for many domains in reinforcement learning. The target state area is unknown, but since the expert has demonstrated how to reach it, the agent tries to reach states similar to the expert's. Building on the idea of temporal coherence, our algorithm trains a neural network to predict whether two states are similar, in the sense that they may occur close together in time. During inference, the agent compares its current state with the expert states from a case base to obtain a similarity measure. The results show that our approach can still learn a near-optimal policy in settings with very little expert data, where algorithms that try to imitate the expert at the action level can no longer do so.
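A rough sketch of the inference-time comparison described above, assuming a trained pairwise similarity network and a case base of expert states; the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def expert_state_similarity(similarity, case_base, state):
    """Score how close the agent's current state is to the expert's strategy.

    similarity : callable (state_a, state_b) -> score in [0, 1], trained via
                 temporal coherence to predict whether two states can occur
                 close together in time
    case_base  : list of expert states demonstrating the high-level strategy
    """
    scores = np.array([similarity(state, expert_state) for expert_state in case_base])
    return float(scores.max())   # best match against the expert's visited states
```

Used as a guidance signal, such a score pushes the agent toward the (unknown) target state area the expert visited without cloning individual actions.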
Context: Machine learning (ML) has been at the heart of many innovations over the past few years. However, including it in so-called "safety-critical" systems such as automotive or aeronautic systems has proven to be very challenging, since the paradigm shift that ML brings completely changes traditional certification approaches. Objective: This paper aims to elucidate the challenges related to the certification of ML-based safety-critical systems, as well as the solutions proposed in the literature to tackle them, answering the question "How to certify machine learning based safety-critical systems?". Method: We conducted a systematic literature review (SLR) of research papers published between 2015 and 2020, covering topics related to the certification of ML systems. In total, we identified 217 papers covering the topics considered to be the main pillars of ML certification: robustness, uncertainty, explainability, verification, safe reinforcement learning, and direct certification. We analyzed the main trends and problems of each sub-field and extracted summaries of the retrieved papers. Results: The SLR results highlighted the enthusiasm of the community for this subject, as well as the lack of diversity in terms of datasets and model types. They also emphasized the need to further strengthen the connection between academia and industry to deepen the domain's research. Finally, they illustrated the necessity of building connections between the main pillars mentioned above, which have so far mainly been studied separately. Conclusion: We highlight the efforts currently deployed to enable the certification of ML-based software systems and discuss some future research directions.
Reinforcement learning (RL) has demonstrated its ability to solve high-dimensional tasks by leveraging non-linear function approximators. However, these successes have mostly been achieved with "black-box" policies in simulated domains. When deploying RL to the real world, several questions arise regarding the use of a "black-box" policy. To make the learned policies more transparent, we propose a policy iteration scheme that retains a complex function approximator for its internal value predictions but constrains the policy to a concise, hierarchical, and human-readable structure based on a mixture of interpretable experts. Each expert selects a primitive action according to its distance to a prototypical state. A key design decision that keeps these experts interpretable is to select the prototypical states from trajectory data. The main technical contribution of this paper is to address the challenges introduced by this non-differentiable prototypical-state selection procedure. We show that our proposed algorithm can learn compelling policies on continuous-action deep RL benchmarks, matching the performance of neural-network-based policies, while returning policies that are more amenable to human inspection than neural-network or linear-in-feature policies.
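A condensed sketch of how such a mixture of interpretable experts could act, with each expert tied to a prototypical state; the nearest-prototype gating rule and the class names below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

class PrototypeExpertPolicy:
    """Hierarchical policy sketch: pick the expert whose prototype state is
    closest to the current state, then execute that expert's primitive action."""

    def __init__(self, prototype_states, primitive_actions):
        self.prototypes = np.asarray(prototype_states)  # (K, state_dim), chosen from trajectory data
        self.actions = primitive_actions                 # length-K list of primitive actions

    def act(self, state):
        dists = np.linalg.norm(self.prototypes - np.asarray(state), axis=1)
        expert = int(np.argmin(dists))                   # closest prototype selects the expert
        return self.actions[expert], expert              # action plus an inspectable "which expert" index
```

Because both the prototypes and the per-expert actions are concrete, a human can read the whole policy as a short list of "in situations like this, do that" rules.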
Visual reinforcement learning (RL), which makes decisions directly from high-dimensional visual inputs, has demonstrated significant potential in various domains. However, deploying visual RL techniques in the real world remains challenging due to their low sample efficiency and large generalization gaps. To tackle these obstacles, data augmentation (DA) has become a widely used technique in visual RL for acquiring sample-efficient and generalizable policies by diversifying the training data. This survey aims to provide a timely and essential review of DA techniques in visual RL in recognition of the thriving development in this field. In particular, we propose a unified framework for analyzing visual RL and understanding the role of DA in it. We then present a principled taxonomy of the existing augmentation techniques used in visual RL and conduct an in-depth discussion on how to better leverage augmented data in different scenarios. Moreover, we report a systematic empirical evaluation of DA-based techniques in visual RL and conclude by highlighting the directions for future research. As the first comprehensive survey of DA in visual RL, this work is expected to offer valuable guidance to this emerging field.
Reinforcement learning (RL) algorithms have achieved notable success in recent years, but still struggle with fundamental issues in long-term credit assignment. It remains difficult to learn in situations where success is contingent upon multiple critical steps that are distant in time from each other and from a sparse reward; as is often the case in real life. Moreover, how RL algorithms assign credit in these difficult situations is typically not coded in a way that can rapidly generalize to new situations. Here, we present an approach using offline contrastive learning, which we call contrastive introspection (ConSpec), that can be added to any existing RL algorithm and addresses both issues. In ConSpec, a contrastive loss is used during offline replay to identify invariances among successful episodes. This takes advantage of the fact that it is easier to retrospectively identify the small set of steps that success is contingent upon than it is to prospectively predict reward at every step taken in the environment. ConSpec stores this knowledge in a collection of prototypes summarizing the intermediate states required for success. During training, arrival at any state that matches these prototypes generates an intrinsic reward that is added to any external rewards. As well, the reward shaping provided by ConSpec can be made to preserve the optimal policy of the underlying RL agent. The prototypes in ConSpec provide two key benefits for credit assignment: (1) They enable rapid identification of all the critical states. (2) They do so in a readily interpretable manner, enabling out of distribution generalization when sensory features are altered. In summary, ConSpec is a modular system that can be added to any existing RL algorithm to improve its long-term credit assignment.
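A small sketch of the prototype-matching intrinsic reward described above; the cosine-similarity matching rule, the threshold, and the function names are assumptions for illustration rather than the ConSpec implementation:

```python
import numpy as np

def intrinsic_reward(encode, prototypes, state, threshold=0.9, bonus=1.0):
    """Add a bonus when the current state matches any learned success prototype.

    encode     : callable state -> latent vector
    prototypes : (K, d) array of prototype latents summarizing the intermediate
                 states that successful episodes have in common
    """
    z = encode(state)
    z = z / (np.linalg.norm(z) + 1e-8)
    protos = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    match = float(np.max(protos @ z))            # best cosine similarity to any prototype
    return bonus if match >= threshold else 0.0  # intrinsic reward added to external rewards
```

This bonus is added to the environment's reward; as the abstract notes, the shaping can be constructed so that the underlying agent's optimal policy is preserved.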
Reinforcement learning (RL) agents are widely used for solving complex sequential decision-making tasks, but they still struggle to generalize to scenarios not seen during training. While prior online approaches demonstrated that using additional signals beyond the reward function, i.e., self-supervised learning (SSL), can lead to better generalization capabilities in RL agents, they struggle in the offline RL setting, i.e., learning from a static dataset. We show that the performance of online algorithms for generalization in RL can be hindered in the offline setting due to poor estimation of similarity between observations. We propose a new, theoretically motivated framework called Generalized Similarity Functions (GSF), which uses contrastive learning to train an offline RL agent to aggregate observations based on the similarity of their expected future behavior, where we quantify this similarity using generalized value functions. We show that GSF is general enough to recover existing SSL objectives while also improving zero-shot generalization performance on a complex offline RL benchmark, offline Procgen.
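A minimal illustration of the core quantity in GSF as described above: two observations are compared through their generalized value function estimates. The vector form and the distance choice are assumptions for the sketch:

```python
import numpy as np

def behavioral_similarity(gvf, obs_a, obs_b):
    """Similarity of expected future behavior between two observations.

    gvf : callable obs -> vector of generalized value function estimates
          (e.g., expected discounted outcomes under several cumulants)
    """
    va, vb = gvf(obs_a), gvf(obs_b)
    return -float(np.linalg.norm(va - vb))   # closer GVF vectors => more similar behavior
```

Observation pairs scored as behaviorally similar can then serve as positives in a contrastive objective over the static offline dataset.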
Pretraining on noisy, internet-scale datasets has been heavily studied as a technique for training models with broad, general capabilities for text, images, and other modalities. However, for many sequential decision domains such as robotics, video games, and computer use, publicly available data does not contain the labels required to train behavioral priors in the same way. We extend the internet-scale pretraining paradigm to sequential decision domains through semi-supervised imitation learning, wherein agents learn to act by watching online unlabeled videos. Specifically, we show that with a small amount of labeled data we can train an inverse dynamics model accurate enough to label a huge unlabeled source of online data -- here, online videos of people playing Minecraft -- from which we can then train a general behavioral prior. Despite using the native human interface (mouse and keyboard at 20Hz), we show that this behavioral prior has non-trivial zero-shot capabilities and that it can be fine-tuned, with both imitation learning and reinforcement learning, to hard-exploration tasks that are impossible to learn with reinforcement learning from scratch. For many tasks, our models exhibit human-level performance, and we are the first to report computer agents that can craft diamond tools, which can take proficient humans over 20 minutes (24,000 environment actions) of gameplay to accomplish.
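The semi-supervised pipeline described above can be summarized in a few lines of Python; the injected helpers are placeholders for the paper's components, not its actual code:

```python
def video_pretraining(train_idm, behavior_clone, labeled_clips, unlabeled_videos):
    """Sketch of the semi-supervised imitation pipeline:
    1) fit an inverse dynamics model (IDM) on a small labeled set,
    2) pseudo-label a large corpus of unlabeled gameplay videos,
    3) behavior-clone a general behavioral prior on the pseudo-labels.

    train_idm      : callable labeled_clips -> model with .predict_actions(frames)
    behavior_clone : callable list of (frames, actions) -> policy
    """
    idm = train_idm(labeled_clips)                   # small labeled set is enough for an accurate IDM
    pseudo_labeled = [(frames, idm.predict_actions(frames))
                      for frames in unlabeled_videos]  # label the huge unlabeled corpus
    return behavior_clone(pseudo_labeled)            # behavioral prior, later fine-tunable with IL or RL
```

The design choice is that inferring the action between two observed frames (the IDM) is a much easier learning problem than acting, so a little labeled data goes a long way toward unlocking the unlabeled videos.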
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher-level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.