Transfer learning (TL) is a powerful machine learning paradigm that helps overcome several obstacles to the successful training of deep neural networks, ranging from long training times to the need for large datasets. While TL is a well-established and successful training practice in supervised learning (SL), its applicability to deep reinforcement learning (DRL) is far less common. In this paper, we study the transferability of three different variants of deep Q-networks on popular DRL benchmarks and on a set of novel, carefully designed control tasks. Our results show that transferring neural networks in a DRL setting can be particularly challenging and, in most cases, results in negative transfer. In attempting to understand why deep Q-networks transfer so poorly, we gain novel insights into the training dynamics that characterize this family of algorithms.
In many reinforcement learning (RL) applications, the observation space is specified by human developers and constrained by physical realizations, and may therefore change dramatically over time (e.g., an increased number of observed features). When the observation space changes, however, the previous policy is likely to fail due to the mismatch of input features, and another policy must be trained from scratch, which is inefficient in terms of computation and sample complexity. Following theoretical insights, we propose a novel algorithm that extracts the latent-space dynamics of the source task and transfers the dynamics model to the target task, where it is used as a model-based regularizer. Our algorithm works even for drastic changes of observation space (e.g., from vector-based observations to image-based observations), without any inter-task mapping or prior knowledge of the target task. Empirical results show that our algorithm significantly improves the efficiency and stability of learning in the target task.
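The abstract does not spell out the exact form of the regularizer; the following is a minimal sketch of one plausible instantiation, in which a latent dynamics model learned on the source task is frozen and penalizes target-task encodings whose transitions it cannot predict. All module names, network shapes, and the squared-error penalty are assumptions made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps (possibly re-specified) observations into the shared latent space."""
    def __init__(self, obs_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))
    def forward(self, obs):
        return self.net(obs)

class LatentDynamics(nn.Module):
    """Latent transition model learned on the source task, then frozen for the target task."""
    def __init__(self, latent_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + act_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))
    def forward(self, z, act):
        return self.net(torch.cat([z, act], dim=-1))

def dynamics_regularizer(target_encoder, frozen_dynamics, obs, act, next_obs):
    """Penalize target-task encodings whose transitions the transferred model cannot explain."""
    z, z_next = target_encoder(obs), target_encoder(next_obs)
    # The dynamics parameters are frozen beforehand via p.requires_grad_(False),
    # so gradients flow only into the target-task encoder.
    z_next_pred = frozen_dynamics(z, act)
    return F.mse_loss(z_next_pred, z_next)

# Target-task update (sketch):
# total_loss = rl_loss + reg_weight * dynamics_regularizer(encoder, dyn, obs, act, next_obs)
```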
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
The potential of offline reinforcement learning (RL) is that high-capacity models trained on large, heterogeneous datasets can lead to agents that generalize broadly, analogously to similar advances in vision and NLP. However, recent works argue that offline RL methods encounter unique challenges to scaling up model capacity. Drawing on the learnings from these works, we re-examine previous design choices and find that with appropriate choices (ResNets, cross-entropy based distributional backups, and feature normalization), offline Q-learning algorithms exhibit strong performance that scales with model capacity. Using multi-task Atari as a testbed for scaling and generalization, we train a single policy on 40 games with near-human performance using up to 80 million parameter networks, finding that model performance scales favorably with capacity. In contrast to prior work, we extrapolate beyond dataset performance even when trained entirely on a large (400M transitions) but highly suboptimal dataset (51% human-level performance). Compared to return-conditioned supervised approaches, offline Q-learning scales similarly with model capacity and has better performance, especially when the dataset is suboptimal. Finally, we show that offline Q-learning with a diverse dataset is sufficient to learn powerful representations that facilitate rapid transfer to novel games and fast online learning on new variations of a training game, improving over existing state-of-the-art representation learning approaches.
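The "cross-entropy based distributional backups" refer to learning a categorical value distribution rather than a scalar Q-value. Below is a minimal sketch of a C51-style categorical projection and cross-entropy loss, which is one common way to implement such backups; the atom range, tensor shapes, and names are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def project_categorical(next_probs, rewards, dones, gamma, support, v_min, v_max):
    """Project the discounted target distribution back onto the fixed atom support (C51-style).
    next_probs: (batch, n_atoms) probabilities of the greedy next action under the target network.
    """
    n_atoms = support.numel()
    delta_z = (v_max - v_min) / (n_atoms - 1)
    tz = (rewards[:, None] + gamma * (1.0 - dones[:, None]) * support[None, :]).clamp(v_min, v_max)
    b = (tz - v_min) / delta_z
    lower, upper = b.floor().long(), b.ceil().long()
    # keep lower != upper when b lands exactly on an atom, so no probability mass is dropped
    lower[(upper > 0) & (lower == upper)] -= 1
    upper[(lower < n_atoms - 1) & (lower == upper)] += 1
    target = torch.zeros_like(next_probs)
    target.scatter_add_(1, lower, next_probs * (upper.float() - b))
    target.scatter_add_(1, upper, next_probs * (b - lower.float()))
    return target

def distributional_loss(logits_sa, target_probs):
    """Cross-entropy between the predicted distribution for (s, a) and the projected target."""
    return -(target_probs * F.log_softmax(logits_sa, dim=1)).sum(dim=1).mean()
```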
The use of sparse neural networks has grown rapidly in recent years, particularly in computer vision. Their appeal stems largely from the reduced number of parameters required to train and store them, as well as from gains in learning efficiency. Somewhat surprisingly, there have been few efforts to explore their use in deep reinforcement learning (DRL). In this work, we perform a systematic investigation into applying a number of existing sparse training techniques to a variety of DRL agents and environments. Our results corroborate the findings from sparse training in the computer vision domain: in the DRL domain, sparse networks also perform better than dense networks with the same parameter count. We provide a detailed analysis of how the various components of DRL are affected by sparse networks, and conclude by suggesting promising avenues for improving the effectiveness of sparse training methods as well as for advancing their use in DRL.
Deep reinforcement learning has shown that superhuman performance can be achieved on complex reinforcement learning (RL) tasks from raw pixels alone. However, it fails to reuse knowledge from previously learned tasks in order to solve new, unseen ones. Generalizing and reusing knowledge are fundamental requirements for creating truly intelligent agents. This work proposes a general method for one-to-one transfer learning based on a generative adversarial network model tailored to RL tasks.
The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.
Learning control policies in large action spaces is a challenging problem in reinforcement learning due to the inefficiency of exploration. In this work, we introduce a deep reinforcement learning (DRL) algorithm called Multi-Action Networks (MAN) learning to cope with the challenge of large discrete action spaces. We propose splitting the action space into two components and creating a value neural network for each sub-action. MAN then uses temporal-difference learning to train the networks synchronously, which is simpler than directly training a single network with a large action output. To evaluate the proposed method, we test MAN on a block-stacking task and then extend it to 12 games with 18-action spaces from the Atari Arcade Learning Environment. Our results indicate that MAN learns faster than both deep Q-learning and double deep Q-learning, implying that our method is a better-performing synchronous temporal-difference algorithm than those currently available for large action spaces.
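The abstract states that the action space is split into two components with one value network per sub-action, but it does not specify how the sub-action values are recombined. The sketch below assumes an additive decomposition, Q(s, a1, a2) ≈ Q1(s, a1) + Q2(s, a2), in the spirit of action-branching architectures; this combination rule, the network shapes, and all names are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SubActionQNet(nn.Module):
    """One value network per sub-action component."""
    def __init__(self, obs_dim, n_sub_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_sub_actions))
    def forward(self, obs):
        return self.net(obs)

def joint_q_values(q1_net, q2_net, obs):
    """Assumed additive recombination of the two sub-action value heads."""
    q1 = q1_net(obs)                          # (batch, n1)
    q2 = q2_net(obs)                          # (batch, n2)
    return q1[:, :, None] + q2[:, None, :]    # (batch, n1, n2) joint Q-table

# Both networks would be updated synchronously with a TD target computed from the greedy
# joint action, e.g. r + gamma * joint_q_values(q1_tgt, q2_tgt, next_obs).flatten(1).max(1).values
```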
Lifelong learning aims to create AI systems that continuously and incrementally learn during a lifetime, similar to biological learning. Attempts so far have met problems, including catastrophic forgetting, interference among tasks, and the inability to exploit previous knowledge. While considerable research has focused on learning multiple input distributions, typically in classification, lifelong reinforcement learning (LRL) must also deal with variations in the state and transition distributions, and in the reward functions. Modulating masks, recently developed for classification, are particularly suitable to deal with such a large spectrum of task variations. In this paper, we adapted modulating masks to work with deep LRL, specifically PPO and IMPALA agents. The comparison with LRL baselines in both discrete and continuous RL tasks shows competitive performance. We further investigated the use of a linear combination of previously learned masks to exploit previous knowledge when learning new tasks: not only is learning faster, the algorithm solves tasks that we could not otherwise solve from scratch due to extremely sparse rewards. The results suggest that RL with modulating masks is a promising approach to lifelong learning, to the composition of knowledge to learn increasingly complex tasks, and to knowledge reuse for efficient and faster learning.
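For readers unfamiliar with modulating masks, the sketch below shows one way a linear layer can be modulated by a task-specific binary mask derived from trainable scores, and how a new task can reuse previous knowledge by adding a learned linear combination of previously consolidated masks to its scores. The score-thresholding scheme, the straight-through estimator, and all names are illustrative assumptions, not the exact formulation used with the PPO and IMPALA agents in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskModulatedLinear(nn.Module):
    """Frozen backbone weights modulated by a per-task binary mask derived from real-valued scores."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02, requires_grad=False)
        self.scores = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02)  # trained for the current task
        self.prev_masks = []                                             # frozen masks from earlier tasks
        self.betas = None                                                # mixing coefficients, trained

    def current_mask(self):
        scores = self.scores
        if self.prev_masks:                            # reuse old knowledge via a linear combination
            stacked = torch.stack(self.prev_masks)     # (K, out, in)
            scores = scores + (self.betas.view(-1, 1, 1) * stacked).sum(0)
        hard = (scores > 0).float()
        return hard + scores - scores.detach()         # straight-through: binary forward, dense gradient

    def forward(self, x):
        return F.linear(x, self.weight * self.current_mask())

    def consolidate_task(self):
        """Freeze the mask of the finished task and reset the trainable parts for the next one."""
        with torch.no_grad():
            self.prev_masks.append((self.scores > 0).float())
        self.scores = nn.Parameter(torch.randn_like(self.scores) * 0.02)
        self.betas = nn.Parameter(torch.zeros(len(self.prev_masks)))
        # note: the optimizer must be rebuilt after this call so it tracks the new parameters
```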
Q-learning is one of the most well-known reinforcement learning algorithms, and tremendous effort has gone into developing it with neural networks. Bootstrapped deep Q-learning networks are among these efforts; they use multiple neural network heads to introduce diversity into Q-learning. Diversity can sometimes be viewed as the number of reasonable moves an agent can take in a given state, analogous to the definition of the exploration ratio in RL. The performance of bootstrapped deep Q-learning networks is therefore deeply connected to the level of diversity within the algorithm. The original work pointed out that a random prior can improve the model's performance. In this paper, we further explore the possibility of replacing the prior with noise sampled from a Gaussian distribution in order to introduce more diversity into the algorithm. We conduct experiments on the Atari benchmark and compare our algorithm with the original and other related algorithms. The results show that our modification of bootstrapped deep Q-learning achieves higher evaluation scores across different types of Atari games. We therefore conclude that replacing the prior with noise can improve the performance of bootstrapped deep Q-learning by preserving the integrity of its diversity.
Vision transformers (VTs) have emerged as a valuable alternative to convolutional neural networks (CNNs) for problems involving high-dimensional and spatially organized inputs such as images. However, their transfer learning (TL) properties have not yet been well studied, and it is not entirely clear whether these neural architectures transfer across different domains as well as CNNs do. In this paper, we study whether VTs pre-trained on the popular ImageNet dataset learn representations that are transferable to the non-natural image domain. To do so, we consider three well-studied art classification problems and use them as surrogate problems for studying the TL potential of four popular VTs. Their performance is extensively compared against that of four common CNNs across several TL experiments. Our results show that VTs exhibit strong generalization properties and that these networks are more powerful than CNNs.
The success of reinforcement learning (RL) relies heavily on the ability to learn robust representations from environment observations. In most cases, the representations learned purely through the reinforcement learning loss can differ vastly across states depending on how the value function changes. However, the learned representations need not be highly specific to the task at hand. Relying only on the RL objective may yield representations that vary greatly across successive time steps. Moreover, since the RL loss has a changing target, the learned representations depend on how good the current values/policies are. Thus, disentangling the representations from the main task allows them to focus more on capturing transition dynamics, which can improve generalization. To this end, we propose locally constrained representations, in which an auxiliary loss forces the state representations to be predictable from the representations of neighbouring states. This encourages the representations to be driven not only by value/policy learning but also by self-supervised learning, which constrains them from changing too rapidly. We evaluate the proposed method on several known benchmarks and observe strong performance. Especially in continuous control tasks, our experiments show a significant advantage over a strong baseline.
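The abstract describes the auxiliary objective only at a high level; the sketch below shows one plausible form, where a small predictor maps the representation of a neighbouring (here, next) state to the current state's representation, with a stop-gradient on the target. The one-step notion of "neighbouring", the predictor architecture, and the loss weight are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighbourPredictor(nn.Module):
    """Predicts a state's representation from the representation of a neighbouring state."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, z):
        return self.net(z)

def local_constraint_loss(encoder, predictor, obs, next_obs):
    """Auxiliary loss: the current representation should be predictable from its neighbour's."""
    z, z_next = encoder(obs), encoder(next_obs)
    return F.mse_loss(predictor(z_next), z.detach())

# total_loss = rl_loss + aux_weight * local_constraint_loss(encoder, predictor, obs, next_obs)
```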
We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
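As a concrete illustration of the Q-learning variant described above, here is a minimal sketch of a convolutional Q-network over raw pixels trained with a temporal-difference loss. The layer sizes follow the commonly cited small Atari architecture, but the exact hyperparameters, the target network (added here for stability as in later DQN variants), and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AtariQNet(nn.Module):
    """Small convolutional Q-network over stacked 84x84 grayscale frames."""
    def __init__(self, n_actions, in_frames=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_frames, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(), nn.Flatten())
        self.head = nn.Sequential(nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
                                  nn.Linear(256, n_actions))
    def forward(self, x):
        return self.head(self.conv(x / 255.0))

def dqn_loss(q_net, target_net, obs, act, rew, next_obs, done, gamma=0.99):
    """One-step TD error between Q(s, a) and the bootstrapped target."""
    q_sa = q_net(obs).gather(1, act[:, None]).squeeze(1)
    with torch.no_grad():
        target = rew + gamma * (1.0 - done) * target_net(next_obs).max(1).values
    return F.smooth_l1_loss(q_sa, target)
```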
Off-policy reinforcement learning (RL) using a fixed offline dataset of logged interactions is an important consideration in real world applications. This paper studies offline RL using the DQN Replay Dataset comprising the entire replay experience of a DQN agent on 60 Atari 2600 games. We demonstrate that recent off-policy deep RL algorithms, even when trained solely on this fixed dataset, outperform the fully-trained DQN agent. To enhance generalization in the offline setting, we present Random Ensemble Mixture (REM), a robust Q-learning algorithm that enforces optimal Bellman consistency on random convex combinations of multiple Q-value estimates. Offline REM trained on the DQN Replay Dataset surpasses strong RL baselines. Ablation studies highlight the role of offline dataset size and diversity as well as the algorithm choice in our positive results. Overall, the results here present an optimistic view that robust RL algorithms used on sufficiently large and diverse offline datasets can lead to high quality policies. To provide a testbed for offline RL and reproduce our results, the DQN Replay Dataset is released at offline-rl.github.io.
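Random Ensemble Mixture enforces Bellman consistency on a random convex combination of Q-estimates. The sketch below shows the core of that loss, assuming K heads on a shared torso and mixture weights drawn by normalizing uniform samples onto the simplex; tensor shapes and names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def rem_loss(q_heads, target_heads, obs, act, rew, next_obs, done, gamma=0.99):
    """q_heads(obs) returns Q-values of shape (batch, K, n_actions) for K heads on a shared torso."""
    q_all = q_heads(obs)                                        # (batch, K, n_actions)
    alphas = torch.rand(q_all.shape[1], device=q_all.device)
    alphas = alphas / alphas.sum()                              # random point on the K-simplex
    q_mix = (alphas[None, :, None] * q_all).sum(dim=1)          # convex combination of heads
    q_sa = q_mix.gather(1, act[:, None]).squeeze(1)
    with torch.no_grad():
        next_mix = (alphas[None, :, None] * target_heads(next_obs)).sum(dim=1)
        target = rew + gamma * (1.0 - done) * next_mix.max(dim=1).values
    return F.smooth_l1_loss(q_sa, target)
```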
We identify and study the phenomenon of policy churn, that is, the rapid change of the greedy policy in value-based reinforcement learning. Policy churn operates at a surprisingly rapid pace, changing the greedy action in a large fraction of states within a handful of learning updates (in a typical deep RL setting such as DQN on Atari). We characterize the phenomenon empirically and verify that it is not limited to specific algorithms or environment properties. A number of ablations help whittle down the plausible explanations of why churn occurs to just a handful, all related to deep learning. Finally, we hypothesize that policy churn is a beneficial but overlooked form of implicit exploration that casts $\epsilon$-greedy exploration in a fresh light, namely that $\epsilon$-noise plays a much smaller role than expected.
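As a concrete reading of the definition above, the following sketch estimates policy churn as the fraction of held-out states whose greedy action changes between two snapshots of the Q-network (e.g., before and after a learning update). The evaluation batch and the comparison interval are assumptions made for illustration, not the paper's exact measurement protocol.

```python
import torch

@torch.no_grad()
def greedy_actions(q_net, states):
    return q_net(states).argmax(dim=1)

@torch.no_grad()
def policy_churn(q_net_before, q_net_after, eval_states):
    """Fraction of evaluation states whose greedy action changed between the two snapshots."""
    before = greedy_actions(q_net_before, eval_states)
    after = greedy_actions(q_net_after, eval_states)
    return (before != after).float().mean().item()
```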
A long-standing challenge in artificial intelligence is lifelong learning. In lifelong learning, many tasks are presented in sequence and learners must efficiently transfer knowledge between tasks while avoiding catastrophic forgetting over long lifetimes. On these problems, policy reuse and other multi-policy reinforcement learning techniques can learn many tasks. However, they can generate many temporary or permanent policies, resulting in memory issues. Consequently, there is a need for lifetime-scalable methods that continually refine a policy library of a pre-defined size. This paper presents a first approach to lifetime-scalable policy reuse. To pre-select the number of policies, a notion of task capacity, the maximal number of tasks that a policy can accurately solve, is proposed. To evaluate lifetime policy reuse using this method, two state-of-the-art single-actor base-learners are compared: 1) a value-based reinforcement learner, Deep Q-Network (DQN) or Deep Recurrent Q-Network (DRQN); and 2) an actor-critic reinforcement learner, Proximal Policy Optimisation (PPO) with or without Long Short-Term Memory layer. By selecting the number of policies based on task capacity, D(R)QN achieves near-optimal performance with 6 policies in a 27-task MDP domain and 9 policies in an 18-task POMDP domain; with fewer policies, catastrophic forgetting and negative transfer are observed. Due to slow, monotonic improvement, PPO requires fewer policies, 1 policy for the 27-task domain and 4 policies for the 18-task domain, but it learns the tasks with lower accuracy than D(R)QN. These findings validate lifetime-scalable policy reuse and suggest using D(R)QN for larger and PPO for smaller library sizes.
The powerful learning ability of deep neural networks enables reinforcement learning agents to learn effective control policies directly from continuous environments. In theory, neural networks require i.i.d. inputs to achieve stable performance; unfortunately, this does not hold in the general reinforcement learning paradigm, where the training data is temporally correlated and non-stationary. This issue can lead to the phenomenon of "catastrophic interference" and a collapse in performance. In this paper, we propose IQ, i.e., interference-aware deep Q-learning, to mitigate catastrophic interference in single-task deep reinforcement learning. Specifically, we resort to online clustering to achieve on-the-fly context division, together with a multi-head network and a knowledge-distillation regularization term for preserving the policies of learned contexts. Built on deep Q-networks, IQ consistently improves stability and performance compared with existing methods, as verified by extensive experiments on classic control and Atari tasks. The code is publicly available at: https://github.com/sweety-dm/interference-aware-deep-q-learning.
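The sketch below illustrates one way the pieces described above could fit together: a multi-head Q-network with one head per discovered context, where the active head is trained with a TD loss and the remaining heads are regularized towards a snapshot of their own earlier outputs (knowledge distillation). The online clustering step is omitted, and the loss form, weighting, and all names are assumptions, not the paper's exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadQNet(nn.Module):
    def __init__(self, obs_dim, n_actions, n_contexts):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(128, n_actions) for _ in range(n_contexts))
    def forward(self, obs):
        h = self.torso(obs)
        return torch.stack([head(h) for head in self.heads], dim=1)  # (batch, contexts, actions)

def interference_aware_loss(q_net, q_snapshot, obs, td_loss_active, active_ctx, distill_weight=1.0):
    """TD loss on the active context's head plus distillation on the remaining heads."""
    q_all = q_net(obs)
    with torch.no_grad():
        q_old = q_snapshot(obs)                     # frozen copy taken before switching contexts
    keep = torch.ones(q_all.shape[1], dtype=torch.bool)
    keep[active_ctx] = False                        # regularize only the inactive heads
    distill = F.mse_loss(q_all[:, keep], q_old[:, keep]) if keep.any() else q_all.new_zeros(())
    return td_loss_active + distill_weight * distill
```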
StarCraft II (SC2) poses a grand challenge for reinforcement learning (RL); the main difficulties include the huge state space, the varying action space, and the long time horizon. In this work, we investigate a set of RL techniques for the full-length game of StarCraft II. We study a hierarchical RL approach involving extracted macro-actions and a hierarchy of neural networks, along with a curriculum transfer training procedure, and train the agent on a single machine with 4 GPUs and 48 CPU threads. On a 64x64 map and using restrictive units, we achieve a win rate of 99% against the built-in level-1 AI. Through the curriculum transfer learning algorithm and a mixture of combat models, we achieve a 93% win rate against the most difficult non-cheating built-in AI (level 7). In this extended version of the paper, we improve the architecture to train agents against the cheating-level AIs and achieve win rates of 96%, 97%, and 94% against the level-8, level-9, and level-10 AIs, respectively. Our code is at https://github.com/liuruoze/hiernet-sc2. To provide a baseline for our work as well as for the research and open-source community, we reproduce mini-AlphaStar (mAS), a scaled-down version of AlphaStar. The latest version of mAS is 1.07, which can be trained on the raw action space with 564 actions. It is designed to run training on a single common machine by making the hyper-parameters adjustable. We then compare our work with mAS using the same resources and show that our method is more effective. The code of mini-AlphaStar is at https://github.com/liuruoze/mini-alphastar. We hope our study sheds some light on future research into efficient reinforcement learning for SC2 and other large-scale games.
Efficient exploration remains a major challenge for reinforcement learning (RL). Common dithering strategies for exploration, such as $\epsilon$-greedy, do not carry out temporally-extended (or deep) exploration; this can lead to exponentially larger data requirements. However, most algorithms for statistically efficient RL are not computationally tractable in complex environments. Randomized value functions offer a promising approach to efficient exploration with generalization, but existing algorithms are not compatible with nonlinearly parameterized value functions. As a first step towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially improves learning speed and cumulative performance across most games.
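To make the mechanism concrete, here is a minimal sketch of bootstrapped DQN's core ingredients: several Q-heads on a shared torso, per-transition bootstrap masks that decide which heads learn from which data, and per-episode head sampling for temporally-extended exploration. The network shapes, the masking scheme, and names are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BootstrappedQNet(nn.Module):
    def __init__(self, obs_dim, n_actions, n_heads=10):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(128, n_actions) for _ in range(n_heads))
    def forward(self, obs):
        h = self.torso(obs)
        return torch.stack([head(h) for head in self.heads], dim=1)   # (batch, heads, actions)

def bootstrapped_td_loss(q_net, target_net, obs, act, rew, next_obs, done, masks, gamma=0.99):
    """masks: (batch, heads) bootstrap mask; each head only learns from its own subset of data."""
    q_all = q_net(obs)                                                # (batch, heads, actions)
    n_heads = q_all.shape[1]
    q_sa = q_all.gather(2, act[:, None, None].expand(-1, n_heads, 1)).squeeze(2)   # (batch, heads)
    with torch.no_grad():
        target = rew[:, None] + gamma * (1.0 - done[:, None]) * target_net(next_obs).max(2).values
    per_head = F.smooth_l1_loss(q_sa, target, reduction="none")
    return (per_head * masks).sum() / masks.sum().clamp(min=1.0)

# During acting, one head is sampled at the start of each episode and followed greedily,
# which is what yields temporally-extended ("deep") exploration.
```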
In recent years there have been many successes of using deep representations in reinforcement learning. Still, many of these applications use conventional architectures, such as convolutional networks, LSTMs, or auto-encoders. In this paper, we present a new neural network architecture for model-free reinforcement learning. Our dueling network represents two separate estimators: one for the state value function and one for the state-dependent action advantage function. The main benefit of this factoring is to generalize learning across actions without imposing any change to the underlying reinforcement learning algorithm. Our results show that this architecture leads to better policy evaluation in the presence of many similar-valued actions. Moreover, the dueling architecture enables our RL agent to outperform the state-of-the-art on the Atari 2600 domain.
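To illustrate the factoring described above, here is a minimal sketch of a dueling head that combines a state-value stream and an action-advantage stream, subtracting the mean advantage so the decomposition is identifiable; layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Combines a state-value stream V(s) and an advantage stream A(s, a) into Q(s, a)."""
    def __init__(self, feature_dim, n_actions):
        super().__init__()
        self.value = nn.Sequential(nn.Linear(feature_dim, 128), nn.ReLU(), nn.Linear(128, 1))
        self.advantage = nn.Sequential(nn.Linear(feature_dim, 128), nn.ReLU(),
                                       nn.Linear(128, n_actions))

    def forward(self, features):
        v = self.value(features)                      # (batch, 1)
        a = self.advantage(features)                  # (batch, n_actions)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)
```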