Our main contribution in this work is the empirical finding that random General Value Functions (GVFs), i.e., deep action-conditional predictions that are random both in what features of the observation they predict and in the sequence of actions the predictions are conditioned on, form good auxiliary tasks for reinforcement learning (RL) problems. In particular, we show that random deep action-conditional predictions, when used as auxiliary tasks, yield state representations that produce control performance competitive with state-of-the-art hand-crafted auxiliary tasks such as value prediction, pixel control, and CURL, in both Atari and DeepMind Lab tasks. In another set of experiments, we stop the gradients from the RL part of the network to the state-representation-learning part of the network and find, perhaps surprisingly, that the auxiliary tasks alone are sufficient to learn state representations good enough to outperform an end-to-end trained actor-critic baseline. We open-sourced our code at https://github.com/hwhitetooth/random_gvs.
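To make the idea concrete, here is a minimal PyTorch sketch of one random GVF used as an auxiliary task: a fixed random network defines the cumulant (the observation feature to be predicted) and an auxiliary head on the shared torso is trained with a TD(0) target. The action-conditioning of the full method is omitted for brevity, and all module names and sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

obs_dim, repr_dim = 32, 64

torso = nn.Sequential(nn.Linear(obs_dim, repr_dim), nn.ReLU())   # shared state-representation network
gvf_head = nn.Linear(repr_dim, 1)                                # auxiliary GVF prediction head
cumulant_net = nn.Linear(obs_dim, 1)                             # fixed random feature of the observation
for p in cumulant_net.parameters():                              # the random "question" is never trained
    p.requires_grad_(False)

gamma = 0.9
opt = torch.optim.Adam(list(torso.parameters()) + list(gvf_head.parameters()), lr=1e-3)

def gvf_auxiliary_loss(obs, next_obs):
    """TD(0) loss for predicting the discounted sum of a random observation feature."""
    pred = gvf_head(torso(obs))
    with torch.no_grad():                                        # bootstrap target, no gradient
        target = cumulant_net(next_obs) + gamma * gvf_head(torso(next_obs))
    return ((pred - target) ** 2).mean()

obs, next_obs = torch.randn(8, obs_dim), torch.randn(8, obs_dim)
loss = gvf_auxiliary_loss(obs, next_obs)
opt.zero_grad(); loss.backward(); opt.step()
```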
Deep reinforcement learning (RL) algorithms suffer severe performance degradation when interaction data is scarce, which limits their real-world application. Recently, visual representation learning has been shown to be effective and promising for improving RL sample efficiency. These methods usually rely on contrastive learning and data augmentation to train a transition model for state prediction, which is different from how the model is used in RL - value-based planning. Consequently, the learned model may not align well with the environment and may fail to produce consistent value predictions, especially when the state transitions are not deterministic. To address this issue, we propose a novel method, called value-consistent representation learning (VCR), to learn representations that are directly related to decision making. More specifically, VCR trains a model to predict the future state (also referred to as the "imagined state") based on the current one and a sequence of actions. Instead of aligning this imagined state with the real state returned by the environment, VCR applies a $Q$-value head to both states and obtains the two action-value distributions. A distance is then computed and minimized to force the imagined state to produce action-value predictions similar to those of the real state. We develop two implementations of the above idea for the discrete and continuous action spaces, respectively. We conduct experiments on the Atari 100K and DeepMind Control Suite benchmarks to validate their effectiveness in improving sample efficiency. Our methods are shown to achieve new state-of-the-art performance among search-free RL algorithms.
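A rough PyTorch sketch of the value-consistency idea for discrete actions, under assumptions about module shapes that the abstract does not specify: a latent dynamics model rolls the current state forward through the chosen actions, a shared Q head is applied to both the imagined and the real encoded state, and the distance between the two action-value vectors is minimized (plain MSE is used here in place of whatever distributional distance the paper employs).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, latent_dim, n_actions = 16, 32, 4

encoder = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.ReLU())
dynamics = nn.Sequential(nn.Linear(latent_dim + n_actions, latent_dim), nn.ReLU())  # one-step latent model
q_head = nn.Linear(latent_dim, n_actions)                                            # shared Q-value head

def value_consistency_loss(obs, actions, real_future_obs):
    """actions: (B, T) integers applied after `obs`; real_future_obs: observation reached after T steps."""
    z = encoder(obs)
    for t in range(actions.shape[1]):                     # roll the latent forward -> "imagined state"
        a = F.one_hot(actions[:, t], n_actions).float()
        z = dynamics(torch.cat([z, a], dim=-1))
    q_imagined = q_head(z)
    with torch.no_grad():                                 # the real state acts as the target
        q_real = q_head(encoder(real_future_obs))
    return F.mse_loss(q_imagined, q_real)

obs = torch.randn(8, obs_dim)
actions = torch.randint(0, n_actions, (8, 3))
future_obs = torch.randn(8, obs_dim)
print(value_consistency_loss(obs, actions, future_obs))
```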
We present BYOL-Explore, a conceptually simple yet general approach for curiosity-driven exploration in visually complex environments. BYOL-Explore learns a world representation, the world dynamics, and an exploration policy all together by optimizing a single prediction loss in latent space, with no additional auxiliary objective. We show that BYOL-Explore is effective in DM-HARD-8, a challenging partially observable, continuous-action, hard-exploration benchmark with visually rich 3-D environments. On this benchmark, we solve the majority of the tasks purely through augmenting the extrinsic reward with BYOL-Explore's intrinsic reward, whereas prior work was only able to get off the ground with human demonstrations. As further evidence of the generality of BYOL-Explore, we show that it achieves superhuman performance on the ten hardest exploration games in Atari while having a much simpler design than other competitive agents.
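A minimal sketch of the core mechanism as the abstract describes it, with many details of the actual agent (recurrent world model, reward normalization, and so on) omitted: an online network predicts the target network's embedding of the next observation from the current latent and action, the normalized prediction error is the training loss, and the same error is reused as the intrinsic reward; the target network is an exponential moving average of the online encoder. All shapes and names here are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, latent_dim, act_dim = 16, 32, 4

online_enc = nn.Linear(obs_dim, latent_dim)
target_enc = nn.Linear(obs_dim, latent_dim)                      # EMA copy of the online encoder
target_enc.load_state_dict(online_enc.state_dict())
predictor = nn.Sequential(nn.Linear(latent_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))

def byol_explore_step(obs, action, next_obs, tau=0.01):
    pred = predictor(torch.cat([online_enc(obs), action], dim=-1))
    with torch.no_grad():
        target = target_enc(next_obs)
    # normalized-L2 prediction error in latent space
    err = (F.normalize(pred, dim=-1) - F.normalize(target, dim=-1)).pow(2).sum(-1)
    loss = err.mean()                        # a single prediction loss trains representation + dynamics
    intrinsic_reward = err.detach()          # the same error drives exploration
    with torch.no_grad():                    # EMA update of the target network
        for p_t, p_o in zip(target_enc.parameters(), online_enc.parameters()):
            p_t.mul_(1 - tau).add_(tau * p_o)
    return loss, intrinsic_reward

loss, r_int = byol_explore_step(torch.randn(8, obs_dim), torch.randn(8, act_dim), torch.randn(8, obs_dim))
```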
We present CURL: Contrastive Unsupervised Representations for Reinforcement Learning. CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features. CURL outperforms prior pixel-based methods, both model-based and model-free, on complex tasks in the DeepMind Control Suite and Atari Games showing 1.9x and 1.2x performance gains at the 100K environment and interaction steps benchmarks respectively. On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features. Our code is open-sourced and available at https://www.github.com/MishaLaskin/curl.
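A condensed sketch of CURL's contrastive objective: two random crops of the same frame stack form a positive pair, the key crop is embedded by a momentum encoder, and an InfoNCE loss with a learned bilinear similarity scores queries against keys. The crop augmentation itself is omitted and the encoder is a placeholder linear layer rather than the paper's convolutional network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim = 50
query_encoder = nn.Linear(3136, feat_dim)                  # stands in for the conv encoder
key_encoder = nn.Linear(3136, feat_dim)                    # momentum (EMA) copy of the query encoder
key_encoder.load_state_dict(query_encoder.state_dict())
W = nn.Parameter(torch.rand(feat_dim, feat_dim))           # learned bilinear similarity matrix

def curl_loss(obs_anchor, obs_positive):
    q = query_encoder(obs_anchor)                          # queries: one augmentation of each observation
    with torch.no_grad():
        k = key_encoder(obs_positive)                      # keys: a second augmentation, momentum-encoded
    logits = q @ W @ k.T                                   # (B, B) similarity scores
    logits = logits - logits.max(dim=1, keepdim=True).values   # for numerical stability
    labels = torch.arange(q.shape[0])                      # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

loss = curl_loss(torch.randn(8, 3136), torch.randn(8, 3136))
```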
Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games - the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled - our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.
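A toy sketch of the three learned functions the abstract names and a K-step unrolled training loss; the search procedure, replay, and many practical details are left out, and all sizes and targets are illustrative. A representation network maps observations to a latent state, a dynamics network maps (latent, action) to (next latent, reward), and a prediction network maps a latent to (policy logits, value).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, latent_dim, n_actions = 16, 32, 4

h = nn.Linear(obs_dim, latent_dim)                       # representation
g = nn.Linear(latent_dim + n_actions, latent_dim + 1)    # dynamics -> (next latent, reward)
f = nn.Linear(latent_dim, n_actions + 1)                 # prediction -> (policy logits, value)

def muzero_loss(obs, actions, target_policies, target_values, target_rewards):
    """Unroll the model K steps and match reward, value and policy targets at each step."""
    z, loss = h(obs), 0.0
    K = actions.shape[1]
    for k in range(K):
        out = f(z)
        policy_logits, value = out[:, :-1], out[:, -1]
        loss = loss + F.cross_entropy(policy_logits, target_policies[:, k])   # policy target (e.g. from search)
        loss = loss + F.mse_loss(value, target_values[:, k])                  # value target
        a = F.one_hot(actions[:, k], n_actions).float()
        out = g(torch.cat([z, a], dim=-1))
        z, reward = out[:, :-1], out[:, -1]
        loss = loss + F.mse_loss(reward, target_rewards[:, k])                # reward target
    return loss / K

B, K = 8, 5
loss = muzero_loss(torch.randn(B, obs_dim),
                   torch.randint(0, n_actions, (B, K)),
                   torch.randint(0, n_actions, (B, K)),   # class-index policy targets, for this toy example only
                   torch.randn(B, K), torch.randn(B, K))
```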
Learning control from pixels is difficult for reinforcement learning (RL) agents because representation learning and policy learning are intertwined. Previous approaches remedy this with auxiliary representation tasks, but they either do not consider the temporal aspect of the problem or only consider single-step transitions. Instead, we propose Hierarchical $k$-Step Latent (HKSL), an auxiliary task that learns representations via a hierarchy of forward models that operate with different magnitudes of step skips, while also learning to communicate between levels of the hierarchy. We evaluate HKSL on a suite of 30 robot control tasks and find that HKSL either reaches higher episodic returns or converges to maximum performance faster than several current baselines. Furthermore, we find that levels in HKSL's hierarchy can learn to specialize in the long- or short-term consequences of the agent's actions, thereby providing the downstream control policy with more informative representations. Finally, we determine that the hierarchy's communication channels organize information based on both sides of the communication process, which improves sample efficiency.
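A much-simplified sketch of the hierarchy of forward models: two levels share an encoder but roll latents forward with different step skips (every step versus every k steps in a single jump). The cross-level communication described in the abstract is left out, and the modules and consistency loss are illustrative placeholders rather than the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, latent_dim, act_dim, k = 16, 32, 4, 3

encoder = nn.Linear(obs_dim, latent_dim)
fwd_fine = nn.Linear(latent_dim + act_dim, latent_dim)          # level 1: one-step forward model
fwd_coarse = nn.Linear(latent_dim + k * act_dim, latent_dim)    # level 2: k-step-skip forward model

def hksl_loss(obs_seq, act_seq):
    """obs_seq: (B, k+1, obs_dim); act_seq: (B, k, act_dim). Each level predicts the latent it skips to."""
    z0 = encoder(obs_seq[:, 0])
    # fine level: predict one step ahead
    z1_pred = fwd_fine(torch.cat([z0, act_seq[:, 0]], dim=-1))
    loss_fine = F.mse_loss(z1_pred, encoder(obs_seq[:, 1]).detach())
    # coarse level: jump k steps in one prediction, conditioned on the whole action chunk
    zk_pred = fwd_coarse(torch.cat([z0, act_seq.flatten(1)], dim=-1))
    loss_coarse = F.mse_loss(zk_pred, encoder(obs_seq[:, k]).detach())
    return loss_fine + loss_coarse

loss = hksl_loss(torch.randn(8, k + 1, obs_dim), torch.randn(8, k, act_dim))
```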
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10×10 board, using TD(λ) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa(λ) agent with SiLU and dSiLU hidden units.
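The two proposed activations in code form: SiLU multiplies the input by its sigmoid, silu(x) = x·σ(x), and dSiLU is its derivative, σ(x)·(1 + x·(1 − σ(x))).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def silu(x):
    """Sigmoid-weighted linear unit: the input scaled by its own sigmoid."""
    return x * sigmoid(x)

def dsilu(x):
    """Derivative of the SiLU, used as an activation function in its own right."""
    s = sigmoid(x)
    return s * (1.0 + x * (1.0 - s))

x = np.linspace(-6, 6, 5)
print(silu(x))   # ≈ [-0.0148, -0.1423, 0.0, 2.8577, 5.9852]
print(dsilu(x))  # bounded, peaks at roughly 1.1 and is slightly negative for very negative inputs
```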
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
Building general agents that perform well over a wide range of tasks has been an important goal of reinforcement learning since its inception. The problem has been the subject of a large body of research, with performance frequently measured by observing scores over the wide range of environments contained in the Atari 57 benchmark. Agent57 was the first agent to surpass the human benchmark on all 57 games, but this came at the cost of poor data efficiency, requiring nearly 80 billion frames of experience. Taking Agent57 as a starting point, we employ a diverse set of strategies to achieve a 200-fold reduction in the experience needed to outperform the human baseline. In this reduced-data regime we encounter a range of instabilities and bottlenecks, and propose effective solutions that yield a more robust and efficient agent. We also demonstrate performance competitive with high-performing methods such as Muesli and MuZero. The four key components of our approach are (1) an approximate trust-region method that enables stable bootstrapping from the online network, (2) a normalization scheme for the loss and priorities that improves robustness when learning a set of value functions with a wide range of scales, (3) an improved architecture that employs techniques from NFNets in order to exploit deeper networks without the need for normalization layers, and (4) a policy distillation method that smooths out the instantaneous greedy policy over time.
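As one illustration of the flavour of component (2), here is a generic sketch - not the paper's actual scheme - of normalizing TD errors by a running scale estimate so that both the loss and the replay priorities become insensitive to the very different magnitudes of the value functions being learned. The class and names are hypothetical.

```python
import numpy as np

class RunningScale:
    """Tracks an exponential moving estimate of the TD-error scale per value head."""
    def __init__(self, n_heads, decay=0.99, eps=1e-6):
        self.var = np.ones(n_heads)
        self.decay, self.eps = decay, eps

    def update(self, td_errors):                          # td_errors: (batch, n_heads)
        self.var = self.decay * self.var + (1 - self.decay) * (td_errors ** 2).mean(axis=0)
        return np.sqrt(self.var) + self.eps

scale = RunningScale(n_heads=2)
td = np.random.randn(32, 2) * np.array([1.0, 100.0])      # two value functions with very different scales
sigma = scale.update(td)
normalized = td / sigma                                   # used both for the loss and for replay priorities
loss = 0.5 * (normalized ** 2).mean()
priorities = np.abs(normalized).mean(axis=1)              # one priority per transition
```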
One of the key promises of model-based reinforcement learning is the ability to generalize to novel environments and tasks using predictions from an internal model of the world. However, the generalization ability of model-based agents is not well understood, because existing work on benchmarking generalization has focused on model-free agents. Here, we explicitly measure the generalization ability of model-based agents in comparison to their model-free counterparts. We focus our analysis on MuZero (Schrittwieser et al., 2020), a powerful model-based agent, and evaluate its performance on both procedural and task generalization. We identify three factors of procedural generalization - planning, self-supervised representation learning, and procedural data diversity - and show that by combining these techniques we achieve state-of-the-art generalization performance and data efficiency on Procgen (Cobbe et al., 2019). However, we find that these factors do not always provide the same benefit for the task-generalization benchmarks in Meta-World (Yu et al., 2019), indicating that transfer remains a challenge and may require approaches different from procedural generalization. Overall, we suggest that building generalizable agents requires moving beyond the single-task, model-free paradigm and towards self-supervised, model-based agents trained in rich, procedural, multi-task environments.
Although deep reinforcement learning (RL) has seen many recent successes, its methods are still sample inefficient, which makes solving many problems prohibitively expensive in terms of data. We aim to remedy this by exploiting the rich supervisory signal in unlabeled data to learn state representations. This work introduces three different representation-learning algorithms that have access to different subsets of the data sources that traditional RL algorithms use: (i) GRICA is inspired by independent component analysis (ICA) and trains a deep neural network to output statistically independent features of the input. GRICA does this by minimizing the mutual information between each feature and the other features. Additionally, GRICA only requires an unsorted collection of environment states. (ii) Latent Representation Prediction (LARP) requires more context: in addition to a state as input, it also needs the previous state and the action connecting them. This method learns a state representation by predicting the next state of the environment from the current state and action. The predictor is used together with a graph-search algorithm. (iii) The third method learns a state representation by training a deep neural network to learn a smoothed version of the reward function. The representation is used to preprocess inputs to deep RL, while the reward predictor is used for reward shaping. This method requires only state-reward pairs from the environment to learn the representation. We find that each method has its strengths and weaknesses, and conclude from our experiments that including unsupervised representation learning in RL problem-solving pipelines can speed up learning.
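A small sketch of the core idea behind the second method (LARP), with illustrative shapes and a stop-gradient choice that is an assumption rather than something the abstract states: a predictor is trained to map the current state's representation and the action to the representation of the next state.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

state_dim, act_dim, repr_dim = 16, 4, 32

encoder = nn.Linear(state_dim, repr_dim)
predictor = nn.Sequential(nn.Linear(repr_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, repr_dim))
opt = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

def larp_loss(state, action, next_state):
    """Predict the latent representation of the next state from (state, action)."""
    pred = predictor(torch.cat([encoder(state), action], dim=-1))
    target = encoder(next_state).detach()        # stop-gradient on the target (an assumption of this sketch)
    return F.mse_loss(pred, target)

s, a, s_next = torch.randn(8, state_dim), torch.randn(8, act_dim), torch.randn(8, state_dim)
loss = larp_loss(s, a, s_next)
opt.zero_grad(); loss.backward(); opt.step()
```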
The predominant approach in reinforcement learning is to assign credit to actions based on the expected return. However, we show that the return may depend on the policy in a way that can lead to excessive variance in the value estimates and slow down learning. Instead, we show that the advantage function can be interpreted as a causal effect and that it shares similar properties with causal representations. Based on this insight, we propose Direct Advantage Estimation (DAE), a new method that can model the advantage function and estimate it directly from on-policy data, while simultaneously minimizing the variance of the return, without requiring (action-)value functions. We also relate our method to temporal-difference methods by showing how they can be seamlessly integrated into DAE. The proposed method is easy to implement and can be readily adapted by modern actor-critic methods. We evaluate DAE empirically on three discrete control domains and show that it can outperform generalized advantage estimation (GAE), a strong baseline for advantage estimation, on a majority of the environments when applied to policy optimization.
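A simplified, single-step sketch of the centering idea behind direct advantage estimation, offered as an illustration rather than the paper's exact objective: the advantage head is constrained by construction so that its policy-weighted mean is zero, $\sum_a \pi(a|s) A(s,a) = 0$, and it is fit to on-policy returns together with a state-value term.

```python
import torch
import torch.nn as nn

obs_dim, n_actions = 16, 4
torso = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
adv_head = nn.Linear(64, n_actions)       # raw, un-centered advantages
val_head = nn.Linear(64, 1)

def centered_advantage(obs, policy_probs):
    """Enforce the constraint sum_a pi(a|s) A(s, a) = 0 by construction."""
    raw = adv_head(torso(obs))
    return raw - (policy_probs * raw).sum(dim=-1, keepdim=True)

def dae_loss(obs, actions, returns, policy_probs):
    """Simplified one-step variant: fit V(s) + A(s, a) to the observed on-policy return."""
    adv = centered_advantage(obs, policy_probs).gather(1, actions.unsqueeze(1)).squeeze(1)
    value = val_head(torso(obs)).squeeze(1)
    return ((returns - value - adv) ** 2).mean()

obs = torch.randn(8, obs_dim)
actions = torch.randint(0, n_actions, (8,))
returns = torch.randn(8)
policy_probs = torch.softmax(torch.randn(8, n_actions), dim=-1)
loss = dae_loss(obs, actions, returns, policy_probs)
```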
Reinforcement learning has achieved great success in many applications. However, sample efficiency remains a key challenge, with prominent methods requiring millions (or even billions) of environment steps to train. Recently, there has been significant progress in sample-efficient image-based RL algorithms; however, consistent human-level performance on the Atari games benchmark remains an elusive goal. We propose a sample-efficient model-based visual RL algorithm built on MuZero, which we name EfficientZero. Our method achieves 194.3% mean human performance and 109.0% median performance on the Atari 100K benchmark with only two hours of real-time game experience, and outperforms state SAC on some tasks of the DMControl 100K benchmark. This is the first time an algorithm achieves superhuman performance on Atari games with such little data. EfficientZero's performance is also close to DQN's performance at 200 million frames, while we use 500 times less data. EfficientZero's low sample complexity and high performance can bring RL closer to real-world applicability. We implement our algorithm in an easy-to-understand manner and it is available at https://github.com/YeWR/EfficientZero. We hope it will accelerate research on MCTS-based RL algorithms in the wider community.
Real-world reinforcement learning tasks often involve some form of partial observability where the observations only give a partial or noisy view of the true state of the world. Such tasks typically require some form of memory, where the agent has access to multiple past observations, in order to perform well. One popular way to incorporate memory is by using a recurrent neural network to access the agent's history. However, recurrent neural networks in reinforcement learning are often fragile and difficult to train, susceptible to catastrophic forgetting and sometimes fail completely as a result. In this work, we propose Deep Transformer Q-Networks (DTQN), a novel architecture utilizing transformers and self-attention to encode an agent's history. DTQN is designed modularly, and we compare results against several modifications to our base model. Our experiments demonstrate the transformer can solve partially observable tasks faster and more stably than previous recurrent approaches.
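A compact sketch of a transformer-based Q-network over an observation history, in the spirit of what the abstract describes; the paper's exact architectural choices (its positional encodings, masking, and training-time outputs) may differ, and all sizes here are placeholders.

```python
import torch
import torch.nn as nn

class TransformerQNet(nn.Module):
    def __init__(self, obs_dim, n_actions, d_model=64, n_heads=4, n_layers=2, max_len=50):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        self.pos = nn.Embedding(max_len, d_model)                 # learned positional embeddings
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.q_head = nn.Linear(d_model, n_actions)

    def forward(self, obs_history):                               # (B, T, obs_dim)
        T = obs_history.shape[1]
        x = self.embed(obs_history) + self.pos(torch.arange(T, device=obs_history.device))
        mask = torch.triu(torch.full((T, T), float('-inf')), diagonal=1)   # causal attention over history
        h = self.encoder(x, mask=mask)
        return self.q_head(h[:, -1])                              # Q-values read from the most recent timestep

net = TransformerQNet(obs_dim=8, n_actions=4)
q_values = net(torch.randn(2, 10, 8))                             # -> shape (2, 4)
```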
The deep reinforcement learning community has made several independent improvements to the DQN algorithm. However, it is unclear which of these extensions are complementary and can be fruitfully combined. This paper examines six extensions to the DQN algorithm and empirically studies their combination. Our experiments show that the combination provides state-of-the-art performance on the Atari 2600 benchmark, both in terms of data efficiency and final performance. We also provide results from a detailed ablation study that shows the contribution of each component to overall performance.
Deep reinforcement learning (RL) has led to many recent and groundbreaking advances. However, these advances have often come at the cost of both increased scale in the underlying architectures being trained as well as increased complexity of the RL algorithms used to train them. These increases have in turn made it more difficult for researchers to rapidly prototype new ideas or reproduce published RL algorithms. To address these concerns, this work describes Acme, a framework for constructing novel RL algorithms that is specifically designed to enable agents built from simple, modular components that can be used across various scales of execution. While the primary goal of Acme is to provide a framework for algorithm development, a secondary goal is to provide simple reference implementations of important or state-of-the-art algorithms. These implementations serve both as a validation of our design decisions and as an important contribution to reproducibility in RL research. In this work we describe the major design decisions made within Acme and give further details on how its components can be used to implement various algorithms. Our experiments provide baselines for a number of common and state-of-the-art algorithms, and show how these algorithms can be scaled up for much larger and more complex environments. This highlights one of the primary advantages of Acme, namely that it can be used to implement large, distributed RL algorithms that run at large scales while still maintaining the inherent readability of the implementation. This work presents a second version of the article which coincides with an increase in modularity, additional emphasis on offline, imitation, and learning-from-demonstrations algorithms, as well as various new agents implemented as part of Acme.
We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training. The approach leverages input perturbations commonly used in computer vision tasks to transform input examples, as well as regularizing the value function and policy. Existing model-free approaches, such as Soft Actor-Critic (SAC) [22], are not able to train deep networks effectively from image pixels. However, the addition of our augmentation method dramatically improves SAC's performance, enabling it to reach state-of-the-art performance on the DeepMind control suite, surpassing model-based [23,38,24] methods and recently proposed contrastive learning [50]. Our approach, which we dub DrQ: Data-regularized Q, can be combined with any model-free reinforcement learning algorithm. We further demonstrate this by applying it to DQN [43] and significantly improve its data-efficiency on the Atari 100k [31] benchmark. An implementation can be found at https://sites.google.com/view/data-regularized-q.
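A minimal sketch of the DrQ recipe in its DQN form: random-shift augmentation (pad-and-crop) applied to image observations, Q-targets averaged over K augmented copies of the next observation, and the Q-loss averaged over M augmented copies of the current observation. The toy networks and shapes here are placeholders for the actual convolutional architecture, and the termination handling is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_shift(imgs, pad=4):
    """Pad with edge replication, then take a random crop of the original size (per image)."""
    B, C, H, W = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode='replicate')
    out = torch.empty_like(imgs)
    for i in range(B):
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top:top + H, left:left + W]
    return out

n_actions = 6
q_net = nn.Sequential(nn.Flatten(), nn.Linear(9 * 84 * 84, n_actions))       # toy stand-in for the conv Q-network
q_target = nn.Sequential(nn.Flatten(), nn.Linear(9 * 84 * 84, n_actions))
q_target.load_state_dict(q_net.state_dict())

def drq_loss(obs, action, reward, next_obs, gamma=0.99, K=2, M=2):
    with torch.no_grad():                                                    # average the target over K augmentations
        target_q = torch.stack([q_target(random_shift(next_obs)).max(dim=1).values for _ in range(K)]).mean(0)
        target = reward + gamma * target_q
    losses = []                                                              # average the loss over M augmentations
    for _ in range(M):
        q = q_net(random_shift(obs)).gather(1, action.unsqueeze(1)).squeeze(1)
        losses.append(F.mse_loss(q, target))
    return torch.stack(losses).mean()

obs = torch.rand(4, 9, 84, 84)
loss = drq_loss(obs, torch.randint(0, n_actions, (4,)), torch.rand(4), torch.rand(4, 9, 84, 84))
```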
This paper explores the problem of simultaneously learning a value function and a policy in deep actor-critic reinforcement learning models. We find that the common practice of learning these functions jointly is sub-optimal due to an order-of-magnitude difference in noise levels between the two tasks. Instead, we show that learning these tasks independently, but with a constrained distillation phase, significantly improves performance. Furthermore, we find that policy gradient noise levels can be decreased by using a lower \textit{variance} return estimate, whereas value learning noise levels decrease with a lower \textit{bias} estimate. Together, these insights inform an extension to Proximal Policy Optimization we call \textit{Dual Network Architecture} (DNA), which significantly outperforms its predecessor. DNA also exceeds the performance of the popular Rainbow DQN algorithm in four of the five environments tested, even under the more difficult stochastic control setting.
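A sketch of the distillation phase the abstract alludes to, under assumptions about the exact losses (which the abstract does not spell out): the policy and value functions live in separate networks, and periodically the policy network's auxiliary value head is regressed toward the value network's output while a KL penalty keeps the policy itself from drifting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, n_actions = 16, 4

policy_net = nn.Linear(obs_dim, n_actions + 1)       # policy logits plus an auxiliary value head
value_net = nn.Linear(obs_dim, 1)                    # trained separately on low-bias return targets

def distillation_loss(obs, old_log_probs, beta=1.0):
    """old_log_probs: the policy's log-probabilities recorded before the distillation phase."""
    out = policy_net(obs)
    logits, aux_value = out[:, :-1], out[:, -1]
    with torch.no_grad():
        value_target = value_net(obs).squeeze(1)     # distill the separately learned value function
    value_loss = F.mse_loss(aux_value, value_target)
    kl = F.kl_div(F.log_softmax(logits, dim=-1), old_log_probs, log_target=True, reduction='batchmean')
    return value_loss + beta * kl                    # the KL term constrains the policy while distilling

obs = torch.randn(8, obs_dim)
with torch.no_grad():
    old_log_probs = F.log_softmax(policy_net(obs)[:, :-1], dim=-1)
loss = distillation_loss(obs, old_log_probs)
```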
Data-driven model predictive control has two key advantages over model-free methods: a potential for improved sample efficiency through model learning, and better performance as the computational budget for planning increases. However, it is both costly to plan over long horizons and challenging to obtain an accurate model of the environment. In this work, we combine the strengths of model-free and model-based methods. We use a learned task-oriented latent dynamics model for local trajectory optimization over a short horizon, and use a learned terminal value function to estimate long-term return, both of which are learned jointly by temporal difference learning. Our method, TD-MPC, achieves superior sample efficiency and asymptotic performance over prior work on both state- and image-based continuous control tasks from DMControl and Meta-World. Code and video results are available at https://nicklashansen.github.io/td-mpc.
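A toy sketch of the planning step described in the abstract: roll a learned latent dynamics model forward over a short horizon for many sampled action sequences, score each by the sum of predicted rewards plus a learned terminal value, and execute only the first action of the best sequence. Random shooting is used here as a stand-in for the paper's actual trajectory optimizer, and all modules are placeholders.

```python
import torch
import torch.nn as nn

obs_dim, latent_dim, act_dim = 16, 32, 4

encoder = nn.Linear(obs_dim, latent_dim)
dynamics = nn.Linear(latent_dim + act_dim, latent_dim)      # latent forward model
reward_fn = nn.Linear(latent_dim + act_dim, 1)              # predicted one-step reward
value_fn = nn.Linear(latent_dim, 1)                         # learned terminal value (long-term return)

@torch.no_grad()
def plan(obs, horizon=5, n_samples=256, gamma=0.99):
    z = encoder(obs.unsqueeze(0)).repeat(n_samples, 1)          # (N, latent_dim)
    actions = torch.rand(n_samples, horizon, act_dim) * 2 - 1   # candidate action sequences in [-1, 1]
    returns = torch.zeros(n_samples)
    for t in range(horizon):
        za = torch.cat([z, actions[:, t]], dim=-1)
        returns += (gamma ** t) * reward_fn(za).squeeze(1)      # short-horizon model-based return
        z = dynamics(za)
    returns += (gamma ** horizon) * value_fn(z).squeeze(1)      # bootstrap with the terminal value function
    best = returns.argmax()
    return actions[best, 0]                                     # execute only the first action (MPC)

action = plan(torch.randn(obs_dim))
```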
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
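A bare-bones sketch of the asynchronous actor-critic idea: several worker threads interact with their own copies of a (dummy) environment, compute one-step actor-critic gradients locally, and apply them to a shared network without synchronizing with each other. The environment and network are stand-ins; a real implementation would add n-step returns, entropy regularization, and either proper locking or deliberate Hogwild-style lock-free updates.

```python
import threading
import torch
import torch.nn as nn

obs_dim, n_actions = 8, 4
shared_net = nn.Linear(obs_dim, n_actions + 1)               # shared policy logits + value head
optimizer = torch.optim.Adam(shared_net.parameters(), lr=1e-3)

class DummyEnv:                                              # placeholder environment for this sketch
    def reset(self):
        return torch.randn(obs_dim)
    def step(self, action):
        return torch.randn(obs_dim), torch.randn(()).item(), False

def worker(n_steps=100, gamma=0.99):
    env = DummyEnv()
    obs = env.reset()
    for _ in range(n_steps):
        out = shared_net(obs)
        logits, value = out[:-1], out[-1]
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        next_obs, reward, done = env.step(action.item())
        with torch.no_grad():
            target = reward + gamma * shared_net(next_obs)[-1]   # one-step bootstrapped return
        advantage = target - value
        loss = -dist.log_prob(action) * advantage.detach() + advantage.pow(2)  # actor + critic terms
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                                     # asynchronous update of the shared parameters
        obs = env.reset() if done else next_obs

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```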