Deep reinforcement learning uses deep neural networks to encode a policy, which achieves very good performance across a wide range of applications but is widely regarded as a black-box model. Neuro-fuzzy controllers offer a more interpretable alternative to deep networks. Unfortunately, neuro-fuzzy controllers often need a large number of rules to solve even relatively simple tasks, which makes them hard to interpret. In this work, we propose an algorithm that distills the policy from a deep Q-network into a compact neuro-fuzzy controller. This allows us to train, via distillation, compact neuro-fuzzy controllers that solve tasks they cannot solve directly, combining the flexibility of deep reinforcement learning with the interpretability of a compact rule base. We demonstrate the algorithm on three well-known environments from OpenAI Gym, where we match the performance of a DQN agent using only 2 to 6 fuzzy rules.
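To make the distillation step concrete, below is a minimal sketch, assuming a PyTorch setting, of a zero-order TSK-style fuzzy Q-controller trained to match a DQN teacher's Q-values. The rule parameterisation, class names, and the MSE matching loss are illustrative assumptions, not the paper's implementation; the teacher can be any trained network exposed as a callable returning Q-values.

```python
# Hypothetical sketch: distilling a DQN teacher into a compact fuzzy Q-controller.
import torch
import torch.nn as nn

class FuzzyQController(nn.Module):
    """A tiny neuro-fuzzy Q-function: each rule has a Gaussian membership
    over the state and a learned Q-value per action (zero-order TSK style)."""
    def __init__(self, state_dim, n_actions, n_rules=4):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_rules, state_dim))
        self.log_widths = nn.Parameter(torch.zeros(n_rules, state_dim))
        self.rule_q = nn.Parameter(torch.zeros(n_rules, n_actions))

    def forward(self, state):                      # state: (batch, state_dim)
        diff = state.unsqueeze(1) - self.centers   # (batch, rules, state_dim)
        width = self.log_widths.exp()
        # Product of Gaussian memberships -> firing strength of each rule.
        firing = torch.exp(-0.5 * ((diff / width) ** 2).sum(-1))
        weights = firing / (firing.sum(-1, keepdim=True) + 1e-8)
        return weights @ self.rule_q               # (batch, n_actions)

def distill_step(student, teacher, states, optimizer):
    """One distillation update: match the teacher's Q-values on sampled states."""
    with torch.no_grad():
        target_q = teacher(states)
    loss = nn.functional.mse_loss(student(states), target_q)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A usage sketch under these assumptions: construct `FuzzyQController(state_dim, n_actions)`, create an Adam optimizer over its parameters, and repeatedly call `distill_step` on states sampled from the teacher's rollouts or replay buffer.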
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
Although deep reinforcement learning (RL) has seen many recent successes, its methods are still data-inefficient, which makes many problems prohibitively expensive to solve in terms of data. We aim to remedy this by learning state representations that exploit the rich supervisory signal present in unlabeled data. This work introduces three different representation-learning algorithms that have access to different subsets of the data sources used by traditional RL algorithms: (i) GRICA is inspired by independent component analysis (ICA) and trains a deep neural network to output statistically independent features of the input. GRICA does this by minimizing the mutual information between each feature and the other features, and it requires only an unordered collection of environment states. (ii) Latent Representation Prediction (LARP) needs more context: besides a state as input, it also requires the preceding state and the action connecting them. The method learns a state representation by predicting the next state of the environment from the current state and action, and the predictor is used together with a graph-search algorithm. (iii) The third method learns a state representation by training a deep neural network to learn a smoothed version of the reward function. The representation is used to preprocess inputs to deep RL, while the reward predictor is used for reward shaping. This method requires only state-reward pairs from the environment to learn the representation. We find that each method has its strengths and weaknesses, and conclude from our experiments that including unsupervised representation learning in RL problem-solving pipelines can speed up learning.
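As a rough illustration of the second, next-state-prediction idea, the sketch below pretrains an encoder by predicting the latent code of the next state from the current latent code and the action. The network shapes, names, and loss are assumptions for illustration, not the original code.

```python
# Hypothetical sketch of latent next-state prediction pretraining.
import torch
import torch.nn as nn

class LatentPredictor(nn.Module):
    """Encode a state, then predict the encoding of the next state from the
    current encoding and the action (actions assumed to be float vectors,
    e.g. one-hot for discrete action spaces)."""
    def __init__(self, state_dim, action_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.predictor = nn.Sequential(nn.Linear(latent_dim + action_dim, 64), nn.ReLU(),
                                       nn.Linear(64, latent_dim))

    def loss(self, state, action, next_state):
        z = self.encoder(state)
        z_next = self.encoder(next_state)
        z_pred = self.predictor(torch.cat([z, action], dim=-1))
        # Stop gradients through the target encoding to stabilise training.
        return nn.functional.mse_loss(z_pred, z_next.detach())
```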
Many applications of reinforcement learning can be formalized as goal-conditioned environments, where in each episode there is a "goal" that affects the rewards obtained during that episode but does not affect the dynamics. Various techniques have been proposed to improve performance in goal-conditioned environments, such as automatic curriculum generation and goal relabeling. In this work, we explore the connection between losses in goal-conditioned reinforcement learning and knowledge distillation. In particular: the current Q-value function and the target Q-value estimate are both functions of the goal, and we would like to train the Q-value function to match its target for all goals. We therefore apply gradient-based attention transfer (Zagoruyko and Komodakis 2017), a knowledge distillation technique, to the Q-function update. We show empirically that this can improve the performance of goal-conditioned off-policy reinforcement learning when the goal space is high-dimensional. We also show that the technique can be adapted to allow efficient learning in the presence of multiple simultaneous sparse goals, where the agent can attain rewards across a large set of goals specified at test time. Finally, to provide theoretical support, we give examples of classes of environments in which, under some assumptions, standard off-policy algorithms require at least O(d^2) observed transitions to learn an optimal policy, while our proposed technique requires only O(d) transitions, where d is the dimensionality of the goal and state space.
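A hedged sketch of how such an auxiliary attention-transfer term could be attached to a goal-conditioned TD update is given below. Treating the gradient of the Q-value with respect to the goal input as the "attention map" is one illustrative reading of gradient-based attention transfer; all function names, the Q-network interface, and the loss weighting are assumptions rather than the paper's implementation.

```python
# Hypothetical sketch: goal-conditioned TD loss plus an attention-transfer term.
import torch
import torch.nn.functional as F

def td_with_attention_transfer(q_net, target_net, batch, gamma=0.99, beta=0.1):
    """q_net(s, g) and target_net(s, g) are assumed to return per-action Q-values.
    'Attention' here is the normalised gradient of Q w.r.t. the goal, matched
    between the online and target networks."""
    s, a, r, s2, g = batch              # states, actions, rewards, next states, goals
    g = g.clone().requires_grad_(True)

    q = q_net(s, g).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s2, g).max(1).values
    td_loss = F.mse_loss(q, target)

    # Attention maps: gradients of the summed Q-values w.r.t. the goal input.
    att_online = torch.autograd.grad(q.sum(), g, create_graph=True)[0]
    q_t = target_net(s, g).gather(1, a.unsqueeze(1)).squeeze(1)
    att_target = torch.autograd.grad(q_t.sum(), g)[0]
    at_loss = F.mse_loss(F.normalize(att_online, dim=-1),
                         F.normalize(att_target, dim=-1).detach())
    return td_loss + beta * at_loss
```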
Although deep reinforcement learning has become a promising machine-learning approach for sequential decision-making problems, it is still not mature enough for high-stakes domains such as autonomous driving or medical applications. In such contexts, a learned policy needs, for instance, to be interpretable, so that it can be inspected before any deployment (e.g., for safety and verifiability reasons). This survey provides an overview of the various approaches for achieving higher interpretability in reinforcement learning (RL). To that end, we distinguish interpretability (as a property of a model) from explainability (as a post-hoc operation, with the intervention of a proxy) and discuss them in the context of RL, with an emphasis on the former notion. In particular, we argue that interpretable RL may embrace different facets: interpretable inputs, interpretable (transition/reward) models, and interpretable decision-making. Based on this scheme, we summarize and analyze recent work related to interpretable RL, with a focus on papers published in the past ten years. We also briefly discuss some related research areas and point to some potentially promising research directions.
Knowledge distillation in machine learning is the process of transferring knowledge from a large model, called the teacher, to a smaller model, called the student. Knowledge distillation is one of the techniques for compressing a large network (the teacher) into a smaller network (the student) that can be deployed on small devices such as mobile phones. When the size gap between the teacher and student networks increases, the performance of the student network degrades. To address this problem, an intermediate model, called a teacher-assistant model, is employed between the teacher model and the student model, which in turn bridges the gap between teacher and student. In this study, we show that the student model (the smaller model) can be further improved by using multiple teacher-assistant models. We combine these multiple teacher-assistant models using weighted ensemble learning, where a differential evolution optimization algorithm is used to generate the weight values.
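The weighted-ensemble idea can be sketched as follows: teacher-assistant logits are blended into one soft target distribution, and the blend weights are searched with SciPy's differential evolution. The cross-entropy scoring criterion, temperature, and all function names here are assumptions for illustration, not the study's code.

```python
# Hypothetical sketch: weighting teacher-assistant logits via differential evolution.
import numpy as np
from scipy.optimize import differential_evolution

def weighted_soft_targets(ta_logits, weights, temperature=4.0):
    """Combine the logits of several teacher-assistant models (each of shape
    (N, num_classes)) into one soft-target distribution using normalised weights."""
    w = np.asarray(weights) / (np.sum(weights) + 1e-12)
    ensemble = sum(wi * logits for wi, logits in zip(w, ta_logits))
    z = ensemble / temperature
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fit_ensemble_weights(ta_logits, labels):
    """Search ensemble weights with differential evolution, scoring each candidate
    by the cross-entropy of the combined soft targets against held-out labels
    (the scoring criterion is an assumption)."""
    def objective(weights):
        probs = weighted_soft_targets(ta_logits, weights)
        return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    bounds = [(0.0, 1.0)] * len(ta_logits)
    result = differential_evolution(objective, bounds, seed=0, maxiter=50)
    return result.x / result.x.sum()
```

The resulting soft targets would then be used as the distillation target for the student, in place of a single teacher's softened outputs.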
With the development of experimental quantum technology, quantum control has attracted increasing attention due to the realization of controllable artificial quantum systems. However, because quantum-mechanical systems are often too difficult to deal with analytically, heuristic strategies and numerical algorithms which search for proper control protocols are adopted, and deep learning, especially deep reinforcement learning (RL), is a promising generic candidate solution for the control problems. Although there have been a few successful applications of deep RL to quantum control problems, most of the existing RL algorithms suffer from instabilities and unsatisfactory reproducibility, and require a large amount of fine-tuning and a large computational budget, both of which limit their applicability. To resolve the issue of instabilities, in this dissertation, we investigate the non-convergence issue of Q-learning. Then, we investigate the weaknesses of existing convergent approaches that have been proposed, and we develop a new convergent Q-learning algorithm, which we call the convergent deep Q network (C-DQN) algorithm, as an alternative to the conventional deep Q network (DQN) algorithm. We prove the convergence of C-DQN and apply it to the Atari 2600 benchmark. We show that when DQN fails, C-DQN still learns successfully. Then, we apply the algorithm to the measurement-feedback cooling problems of a quantum quartic oscillator and a trapped quantum rigid body. We establish the physical models and analyse their properties, and we show that although both C-DQN and DQN can learn to cool the systems, C-DQN tends to behave more stably, and when DQN suffers from instabilities, C-DQN can achieve a better performance. As the performance of DQN can have a large variance and lack consistency, C-DQN can be a better choice for research on complicated control problems.
In this thesis, we consider two simple but typical control problems and apply deep reinforcement learning to them, i.e., to cool and control a particle which is subject to continuous position measurement in a one-dimensional quadratic potential or in a quartic potential. We compare the performance of reinforcement learning control and conventional control strategies on the two problems, and show that reinforcement learning achieves a performance comparable to the optimal control for the quadratic case, and outperforms conventional control strategies for the quartic case, for which the optimal control strategy is unknown. To our knowledge, this is the first time deep reinforcement learning is applied to quantum control problems in continuous real space. Our research demonstrates that deep reinforcement learning can effectively control a stochastic quantum system in real space as a measurement-feedback closed-loop controller. It also shows the ability of AI to discover new control strategies and properties of quantum systems that are not well understood; by learning from the AI we can gain insights into these problems, which opens up a new regime for scientific research.
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in real world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, and inverse reinforcement learning, which are related to but are not classical RL algorithms. The role of simulators in training agents and methods to validate, test, and robustify existing solutions in RL are also discussed.
In humans, perceptual awareness facilitates the fast recognition and extraction of information from sensory input. This awareness largely depends on how the human agent interacts with its environment. In this work, we propose active neural generative coding, a computational framework for learning action-driven generative models in dynamic environments without backpropagation of errors (backprop). Specifically, we develop an intelligent agent that can learn even with sparse rewards, drawing inspiration from the cognitive theory of planning. We demonstrate that our framework is competitive with deep Q-learning on several simple control problems. The strong performance of our agent offers promising evidence that backprop-free approaches to neural inference and learning can drive goal-directed behavior.
Reinforcement Learning (RL) is a popular machine learning paradigm where intelligent agents interact with the environment to fulfill a long-term goal. Driven by the resurgence of deep learning, Deep RL (DRL) has witnessed great success over a wide spectrum of complex control tasks. Despite the encouraging results achieved, the deep neural network-based backbone is widely deemed a black box that prevents practitioners from trusting and employing trained agents in realistic scenarios where high security and reliability are essential. To alleviate this issue, a large volume of literature has been devoted to shedding light on the inner workings of intelligent agents, either by constructing intrinsic interpretability or by providing post-hoc explainability. In this survey, we provide a comprehensive review of existing works on eXplainable RL (XRL) and introduce a new taxonomy where prior works are clearly categorized into model-explaining, reward-explaining, state-explaining, and task-explaining methods. We also review and highlight RL methods that conversely leverage human knowledge to promote learning efficiency and performance of agents, even though this kind of method is often ignored in the XRL field. Some challenges and opportunities in XRL are discussed. This survey intends to provide a high-level summarization of XRL and to motivate future research on more effective XRL solutions. Corresponding open source codes are collected and categorized at https://github.com/Plankson/awesome-explainable-reinforcement-learning.
We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
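For reference, a minimal sketch of the one-step Q-learning target that this family of methods optimises is given below. The separate target network and the Huber loss shown here follow later deep Q-learning practice and are assumptions rather than necessarily the exact setup of this paper; the batch layout is also assumed.

```python
# Minimal sketch of a deep Q-learning update on a sampled minibatch.
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """One-step Q-learning target: y = r + gamma * max_a' Q_target(s', a'),
    with terminal transitions (done == 1) bootstrapped to the reward alone.
    batch = (states, actions, rewards, next_states, done_flags)."""
    s, a, r, s2, done = batch
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        y = r + gamma * (1.0 - done) * target_net(s2).max(1).values
    return F.smooth_l1_loss(q, y)
```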
The powerful learning ability of deep neural networks enables reinforcement learning agents to learn effective control policies directly from continuous environments. In theory, to achieve stable performance, neural networks assume i.i.d. inputs, which unfortunately does not hold in the general reinforcement learning paradigm, where the training data are temporally correlated and non-stationary. This issue can lead to the phenomenon of "catastrophic interference" and collapsed performance. In this paper, we propose IQ, i.e., interference-aware deep Q-learning, to mitigate catastrophic interference in single-task deep reinforcement learning. Specifically, we resort to online clustering to achieve online context division, together with a multi-head network and a knowledge-distillation regularization term for preserving the policies of learned contexts. Built on deep Q-networks, IQ consistently improves stability and performance compared with existing methods, as verified by extensive experiments on classic control and Atari tasks. The code is publicly available at: https://github.com/sweety-dm/interference-aware-ware-deep-q-learning.
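One plausible reading of the multi-head plus distillation-regularisation idea is sketched below: a TD loss on the head of the currently active context, plus a penalty that keeps the other heads close to a frozen snapshot of the network. The head layout, the snapshot mechanism, and the loss weighting are assumptions for illustration, not the released code.

```python
# Hypothetical sketch of a multi-head TD loss with a distillation regulariser.
import torch
import torch.nn.functional as F

def iq_style_loss(q_net, q_snapshot, target_net, batch, context_id,
                  gamma=0.99, lam=1.0):
    """q_net(s) is assumed to return Q-values of shape (batch, n_contexts, n_actions);
    q_snapshot is a frozen copy of q_net taken before the current context began."""
    s, a, r, s2, done = batch
    q_heads = q_net(s)
    q = q_heads[:, context_id].gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        y = r + gamma * (1.0 - done) * target_net(s2)[:, context_id].max(1).values
        old_heads = q_snapshot(s)
    td_loss = F.mse_loss(q, y)

    # Distillation term: keep inactive heads close to the snapshot's outputs.
    mask = torch.ones(q_heads.shape[1], dtype=torch.bool)
    mask[context_id] = False
    distill = F.mse_loss(q_heads[:, mask], old_heads[:, mask])
    return td_loss + lam * distill
```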
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies "end-to-end": directly from raw pixel inputs.
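A condensed sketch of the actor-critic update described above is shown below, with target-network soft updates and exploration noise omitted; tensor shapes, the batch layout, and helper names are assumptions.

```python
# Condensed sketch of a DDPG-style critic and actor update.
import torch
import torch.nn.functional as F

def ddpg_step(actor, critic, actor_t, critic_t, batch, actor_opt, critic_opt, gamma=0.99):
    """One update: fit the critic to a bootstrapped target computed with the
    target networks, then follow the deterministic policy gradient."""
    s, a, r, s2, done = batch

    # Critic target: y = r + gamma * Q_target(s', mu_target(s')).
    with torch.no_grad():
        y = r + gamma * (1.0 - done) * critic_t(s2, actor_t(s2)).squeeze(-1)
    critic_loss = F.mse_loss(critic(s, a).squeeze(-1), y)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor: maximise Q(s, mu(s)), i.e. minimise its negative.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
    return critic_loss.item(), actor_loss.item()
```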
Deep reinforcement learning (RL) has led to many recent and groundbreaking advances. However, these advances have often come at the cost of both increased scale in the underlying architectures being trained and increased complexity of the RL algorithms used to train them. These increases have, in turn, made it more difficult for researchers to rapidly prototype new ideas or reproduce published RL algorithms. To address these concerns, this work describes Acme, a framework for constructing novel RL algorithms that is specifically designed to enable agents built from simple, modular components that can be used at various scales of execution. While the primary goal of Acme is to provide a framework for algorithm development, a secondary goal is to provide simple reference implementations of important or state-of-the-art algorithms. These implementations serve both as a validation of our design decisions and as an important contribution to reproducibility in RL research. In this work, we describe the major design decisions made within Acme and give further details on how its components can be used to implement various algorithms. Our experiments provide baselines for a number of common and state-of-the-art algorithms and show how these algorithms can be scaled up for much larger and more complex environments. This highlights one of the primary advantages of Acme, namely that it can be used to implement large, distributed RL algorithms that run at large scale while still maintaining the inherent readability of the implementation. This work presents a second version of the article, which coincides with increased modularity, additional emphasis on offline, imitation, and learning-from-demonstrations algorithms, as well as various new agents implemented as part of Acme.
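To illustrate the kind of modularity described, here is a schematic actor / learner / environment-loop decomposition. This is deliberately generic and is not Acme's actual API (whose import paths and signatures differ); the toy environment interface and all class names are assumptions.

```python
# Generic sketch of a modular actor / learner / environment-loop decomposition.
import collections

Transition = collections.namedtuple("Transition", "obs action reward next_obs done")

class Actor:
    """Selects actions and routes observed transitions to the learner."""
    def __init__(self, policy, learner):
        self.policy = policy
        self.learner = learner

    def select_action(self, obs):
        return self.policy(obs)

    def observe(self, transition):
        self.learner.buffer.append(transition)

class Learner:
    """Owns the replay buffer and the update rule; in a distributed setup this
    component could run on a separate process or machine."""
    def __init__(self, update_fn, batch_size=32):
        self.buffer = []
        self.update_fn = update_fn
        self.batch_size = batch_size

    def step(self):
        if len(self.buffer) >= self.batch_size:
            self.update_fn(self.buffer[-self.batch_size:])

def environment_loop(env, actor, learner, num_episodes):
    """Glue component: runs episodes, feeding data from the actor to the learner.
    Assumes env.reset() returns an observation and env.step(a) returns
    (next_obs, reward, done)."""
    for _ in range(num_episodes):
        obs, done = env.reset(), False
        while not done:
            action = actor.select_action(obs)
            next_obs, reward, done = env.step(action)
            actor.observe(Transition(obs, action, reward, next_obs, done))
            learner.step()
            obs = next_obs
```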
Reinforcement learning (RL) has demonstrated its ability to solve high-dimensional tasks by leveraging non-linear function approximators. However, these successes are mostly achieved with "black-box" policies in simulated domains. When deploying RL in the real world, several questions arise regarding the use of "black-box" policies. To make the learned policies more transparent, we propose a policy-iteration scheme that retains a complex function approximator for its internal value predictions but constrains the policy to a concise, hierarchical, and human-readable structure based on a mixture of interpretable experts. Each expert selects a primitive action according to its distance to a prototypical state. A key design decision for keeping these experts interpretable is to select the prototypical states from trajectory data. The main technical contribution of this paper is addressing the challenges introduced by this non-differentiable prototypical-state selection procedure. We show that our proposed algorithm can learn compelling policies on continuous-action deep RL benchmarks, matching the performance of neural-network-based policies, but returning policies that are more amenable to human inspection than neural-network or linear-in-feature policies.
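The expert structure can be pictured with the following sketch, in which each expert owns a prototype state and a primitive action, and the policy blends the experts by their distance to the current state. The softmax blending, the frozen prototypes, and all parameter names are illustrative assumptions rather than the paper's exact parameterisation.

```python
# Hypothetical sketch of a prototype-based mixture-of-experts policy.
import torch
import torch.nn as nn

class PrototypePolicy(nn.Module):
    """A policy readable as rules: expert k owns a prototype state and a primitive
    action; the output action is a softmax-weighted blend of the experts' actions,
    weighted by negative distance of the input state to each prototype."""
    def __init__(self, prototype_states, expert_actions, temperature=1.0):
        super().__init__()
        # Prototypes are selected from trajectory data and kept fixed here.
        self.prototypes = nn.Parameter(prototype_states, requires_grad=False)  # (K, state_dim)
        self.actions = nn.Parameter(expert_actions)                            # (K, action_dim)
        self.temperature = temperature

    def forward(self, state):                                    # (batch, state_dim)
        dist = torch.cdist(state, self.prototypes)               # (batch, K)
        weights = torch.softmax(-dist / self.temperature, dim=-1)
        return weights @ self.actions                            # (batch, action_dim)
```

Reading off the policy then amounts to listing, for each expert, its prototype state and its action, which is what makes the structure amenable to human inspection.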
This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
In this work, we propose and evaluate a new reinforcement learning method, Compact Experience Replay, which uses temporal-difference learning with predicted target values based on recurrences over sets of similar transitions, together with a new approach to experience replay based on two transition memories. Our goal is to reduce the amount of experience required to train an agent with respect to long-run accumulated rewards. Its relevance to reinforcement learning lies in the small number of observations it needs to achieve results similar to those obtained by related methods in the literature, which generally require millions of video frames to train on Atari 2600 games. We report training trials of only 100,000 frames and around 25,000 iterations on eight challenging games from the Arcade Learning Environment (ALE). We also present results for a DQN agent on the same games, under the same experimental protocol, as a baseline. To verify that a good policy can be approximated from a smaller number of observations, we also compare these results with those obtained from the millions of frames reported in the ALE benchmarks.
Offline reinforcement learning is used to train policies in scenarios where real-time access to the environment is expensive or impossible. As a natural consequence of these harsh conditions, an agent may lack the resources to fully observe the online environment before taking an action. We call this situation the resource-constrained setting. It leads to scenarios where the offline dataset (available for training) can contain fully processed features (computed with powerful language models, image models, complex sensors, etc.) that are not available when actions are actually taken online. This disconnect leads to an interesting and unexplored problem in offline RL: is it possible to use a richly processed offline dataset to train a policy that has access to fewer features in the online environment? In this work, we introduce and formalize this novel resource-constrained problem setting. We highlight the performance gap between policies trained on the full offline dataset and policies trained with limited features. We address this performance gap with a policy-transfer algorithm that first trains a teacher agent using the offline dataset with fully available features, and then transfers this knowledge to a student agent that only uses the resource-constrained features. To better capture the challenges of this setting, we propose a data-collection procedure: Resource-Constrained Datasets for RL (RC-D4RL). We evaluate the transfer algorithm on RC-D4RL and the popular D4RL benchmarks and observe consistent improvement over the baseline (no transfer). The code for the experiments is available at https://github.com/jayanthrr/rc-offlinerl.
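A minimal sketch of the teacher-to-student transfer step under this reading is given below: the student, which only sees the resource-constrained features, imitates the actions of a teacher trained on the fully processed offline features. The paired-view batch format and the MSE imitation loss are assumptions, not the paper's exact transfer loss.

```python
# Hypothetical sketch: transferring a feature-rich teacher policy to a
# feature-constrained student by imitating the teacher's actions offline.
import torch
import torch.nn.functional as F

def transfer_step(student_policy, teacher_policy, batch, optimizer):
    """batch provides two views of the same offline states: the fully processed
    features (teacher input) and the resource-constrained features (student input)."""
    full_obs, limited_obs = batch
    with torch.no_grad():
        teacher_action = teacher_policy(full_obs)
    loss = F.mse_loss(student_policy(limited_obs), teacher_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```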