Unsupervised visual representation learning offers the opportunity to leverage large corpora of unlabeled trajectories to form useful visual representations, which can benefit the training of reinforcement learning (RL) algorithms. However, evaluating the fitness of such representations requires training RL algorithms, which is computationally intensive and yields high-variance results. To alleviate this problem, we design an evaluation protocol for unsupervised RL representations with lower variance and up to 600x lower computational cost. Inspired by the vision community, we propose two linear probing tasks: predicting the reward observed in a given state, and predicting the action of an expert in a given state. These two tasks are generally applicable to many RL domains, and we show through rigorous experimentation that they correlate strongly with actual downstream control performance on the Atari 100k benchmark. This provides a better method for exploring the space of pretraining algorithms without the need to run RL evaluations for every setting. Leveraging this framework, we further improve existing self-supervised learning (SSL) recipes for RL, highlighting the importance of the forward model, the size of the visual backbone, and the precise formulation of the unsupervised objective.
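As an illustration, the reward-probing task amounts to fitting a single linear layer on top of frozen encoder features. The sketch below uses synthetic features and a closed-form ridge solver as illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: frozen encoder features for 512 states (64-dim)
# and the scalar reward observed at each state.
features = rng.normal(size=(512, 64))
true_weights = rng.normal(size=64)
rewards = features @ true_weights + 0.01 * rng.normal(size=512)

# Linear probe: ridge regression on top of the frozen features.
lam = 1e-3
gram = features.T @ features + lam * np.eye(64)
w = np.linalg.solve(gram, features.T @ rewards)

# R^2 of the probe measures how linearly decodable reward is
# from the representation.
preds = features @ w
r2 = 1.0 - np.sum((rewards - preds) ** 2) / np.sum((rewards - rewards.mean()) ** 2)
```

Because only the linear head is trained, the probe is orders of magnitude cheaper than running a full RL evaluation on the same representation.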
We investigate whether self-supervised learning (SSL) can improve online reinforcement learning (RL) from pixels. We extend the contrastive reinforcement learning framework (e.g., CURL) that jointly optimizes SSL and RL losses, and conduct an extensive set of experiments with various self-supervised losses. Our observations suggest that the existing SSL framework for RL fails to bring meaningful improvement over baselines that only use image augmentation, when the same amount of data and augmentation is used. We further perform an evolutionary search to find the optimal combination of multiple self-supervised losses for RL, but find that even such a loss combination cannot meaningfully outperform methods that only utilize carefully designed image augmentations. Often, using self-supervised losses under the existing framework lowered RL performance. We evaluate the approach in multiple different environments, including a real-world robot environment, and confirm that no single self-supervised loss or image augmentation method dominates all environments, and that the current framework for jointly optimizing SSL and RL is limited. Finally, we empirically investigate the pretraining framework for SSL + RL and the properties of representations learned with different approaches.
We present BYOL-Explore, a conceptually simple yet general approach for curiosity-driven exploration in visually complex environments. BYOL-Explore learns a world representation, the world dynamics, and an exploration policy all together by optimizing a single prediction loss in the latent space, with no additional auxiliary objective. We show that BYOL-Explore is effective in DM-HARD-8, a challenging partially observable, continuous-action, hard-exploration benchmark with visually rich 3-D environments. On this benchmark, we solve the majority of the tasks purely by augmenting the extrinsic reward with BYOL-Explore's intrinsic reward, whereas prior work could only get off the ground with human demonstrations. As further evidence of BYOL-Explore's generality, we show that it achieves superhuman performance on the ten hardest exploration games in Atari while being much simpler in design than other competitive agents.
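The intrinsic reward in this family of methods can be read as the latent prediction error itself. The function below is a minimal NumPy illustration of that idea, assuming unit-normalized embeddings; the names and normalization choice are ours, not the paper's exact formulation:

```python
import numpy as np

def intrinsic_reward(predicted, target):
    """Curiosity bonus: squared L2 distance between the latent
    prediction and the (normalized) target embedding of the observation."""
    predicted = predicted / np.linalg.norm(predicted, axis=-1, keepdims=True)
    target = target / np.linalg.norm(target, axis=-1, keepdims=True)
    return np.sum((predicted - target) ** 2, axis=-1)

# A perfectly predicted transition yields no bonus; surprising
# (poorly predicted) transitions yield a larger one.
z = np.array([[1.0, 0.0], [0.0, 1.0]])
bonus_zero = intrinsic_reward(z, z)
bonus_high = intrinsic_reward(z[:1], z[1:])
```

The exploration policy is then trained on the extrinsic reward plus this bonus, so states the world model predicts poorly are visited more often.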
We present CURL: Contrastive Unsupervised Representations for Reinforcement Learning. CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features. CURL outperforms prior pixel-based methods, both model-based and model-free, on complex tasks in the DeepMind Control Suite and Atari Games, showing 1.9x and 1.2x performance gains at the 100K environment and interaction steps benchmarks, respectively. On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features. Our code is open-sourced and available at https://www.github.com/MishaLaskin/curl.
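As a rough sketch of the contrastive objective behind such methods, the InfoNCE-style loss below scores each anchor embedding against every candidate in the batch, treating the matching index as the positive and the rest as negatives. The shapes and temperature value are illustrative assumptions, not CURL's exact hyperparameters:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    # cosine-similarity logits between every anchor and every candidate
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature
    # cross-entropy with the diagonal (the true pairing) as the label
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
# two augmented views of the same observations -> low loss
matched = info_nce_loss(z, z + 0.01 * rng.normal(size=(8, 32)))
# misaligned pairs -> high loss
mismatched = info_nce_loss(z, np.roll(z, 1, axis=0))
```

Minimizing this loss pulls embeddings of two augmented views of the same frame together while pushing apart embeddings of different frames in the batch.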
Deep reinforcement learning (RL) algorithms suffer severe performance degradation when interaction data is scarce, which limits their real-world application. Recently, visual representation learning has been shown to be effective and promising for improving RL sample efficiency. These methods usually rely on contrastive learning and data augmentation to train a transition model for state prediction, which differs from how the model is actually used in RL, namely value-based planning. Accordingly, the learned model may not align well with the environment and may produce inconsistent value predictions, especially when the state transition is not deterministic. To address this issue, we propose a novel method, called value-consistent representation learning (VCR), to learn representations that are directly related to decision-making. More specifically, VCR trains a model to predict the future state (also referred to as the "imagined state") based on the current one and a sequence of actions. Instead of aligning this imagined state with the real state returned by the environment, VCR applies a $Q$-value head on both states and obtains two distributions of action values. A distance is then computed and minimized to force the imagined state to produce action-value predictions similar to those of the real state. We develop two implementations of this idea for discrete and continuous action spaces, respectively. We conduct experiments on the Atari 100K and DeepMind Control Suite benchmarks to validate its effectiveness for improving sample efficiency. Our method is shown to achieve new state-of-the-art performance for search-free RL algorithms.
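A minimal sketch of the value-consistency idea: apply one shared Q head to both the imagined and the real latent state and penalize disagreement between the resulting action values. The linear Q head and squared-error distance below are our simplifications, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
q_weights = rng.normal(size=(16, 4))  # toy Q head: 16-dim latent -> 4 action values

def q_head(latent):
    return latent @ q_weights

def vcr_loss(imagined_state, real_state):
    # Compare action-value predictions rather than the latents themselves:
    # the imagined state only needs to be "value-equivalent" to the real one.
    return np.mean((q_head(imagined_state) - q_head(real_state)) ** 2)

real = rng.normal(size=16)
loss_same = vcr_loss(real, real)
loss_diff = vcr_loss(real + 1.0, real)
```

The point of comparing values instead of latents is that two different latents can be equally good for decision-making, so the model is not forced to reconstruct irrelevant state detail.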
Learning representations for pixel-based control has recently received significant attention in reinforcement learning. A wide range of methods have been proposed to enable efficient learning, leading to sample complexities similar to those in the full-state setting. However, moving beyond carefully curated pixel datasets (centered crops, appropriate lighting, clear backgrounds, etc.) remains challenging. In this paper, we adopt a more difficult setting that incorporates background distractors, as a first step towards addressing this challenge. We present a simple baseline approach that can learn meaningful representations with no metric-based learning, no data augmentation, no world-model learning, and no contrastive learning. We then analyze when and why previously proposed methods are likely to fail, or to reduce to the same performance as the baseline, in this harder setting, and why we should think carefully about extending such methods beyond well-curated environments. Our results show that a finer categorization of benchmarks, based on characteristics like reward density, planning horizon of the problem, and the presence of task-irrelevant components, is crucial for evaluating algorithms. Based on these observations, we propose different metrics to consider when evaluating an algorithm on benchmark tasks. We hope to motivate researchers to rethink representation learning when investigating how best to apply RL to real-world tasks.
Reinforcement learning has achieved great success in many applications. However, sample efficiency remains a key challenge, with prominent methods requiring millions (or even billions) of environment steps to train. Recently, there has been significant progress in sample-efficient image-based RL algorithms; however, consistent human-level performance on the Atari game benchmark remains an elusive goal. We propose a sample-efficient, model-based visual RL algorithm built on MuZero, which we name EfficientZero. Our method achieves 194.3% mean human performance and 109.0% median performance on the Atari 100k benchmark with only two hours of real-time game experience, and outperforms state-based SAC on some tasks of the DMControl 100k benchmark. This is the first time an algorithm achieves super-human performance on Atari games with so little data. EfficientZero's performance is also close to that of DQN at 200 million frames, while using 500 times less data. EfficientZero's low sample complexity and high performance can bring RL closer to real-world applicability. We implement our algorithm in an easy-to-understand manner; it is available at https://github.com/YeWR/EfficientZero. We hope it will accelerate research on MCTS-based RL algorithms in the wider community.
The potential of offline reinforcement learning (RL) is that high-capacity models trained on large, heterogeneous datasets can lead to agents that generalize broadly, analogously to similar advances in vision and NLP. However, recent works argue that offline RL methods encounter unique challenges to scaling up model capacity. Drawing on the learnings from these works, we re-examine previous design choices and find that with appropriate choices: ResNets, cross-entropy based distributional backups, and feature normalization, offline Q-learning algorithms exhibit strong performance that scales with model capacity. Using multi-task Atari as a testbed for scaling and generalization, we train a single policy on 40 games with near-human performance using up-to 80 million parameter networks, finding that model performance scales favorably with capacity. In contrast to prior work, we extrapolate beyond dataset performance even when trained entirely on a large (400M transitions) but highly suboptimal dataset (51% human-level performance). Compared to return-conditioned supervised approaches, offline Q-learning scales similarly with model capacity and has better performance, especially when the dataset is suboptimal. Finally, we show that offline Q-learning with a diverse dataset is sufficient to learn powerful representations that facilitate rapid transfer to novel games and fast online learning on new variations of a training game, improving over existing state-of-the-art representation learning approaches.
The Vision Transformer architecture has shown to be competitive in the computer vision (CV) space, where it has dethroned convolution-based networks in several benchmarks. Nevertheless, convolutional neural networks (CNNs) remain the preferred architecture for the representation module in reinforcement learning. In this work, we study pretraining a Vision Transformer using several state-of-the-art self-supervised methods and assess the data-efficiency gains of this training framework. We propose a new self-supervised learning method called TOV-VICReg that extends VICReg with a temporal order verification task to better capture the temporal relations between observations. Furthermore, we evaluate the resulting encoders on Atari games in terms of sample efficiency. Our results show that the Vision Transformer, when pretrained with TOV-VICReg, outperforms the other self-supervised methods but still struggles to overcome a CNN. Nevertheless, we were able to outperform a CNN in two of the ten games where we performed a 100k-step evaluation. Ultimately, we believe that such approaches in deep reinforcement learning (DRL) might be the key to achieving the new levels of performance seen in natural language processing and computer vision. Source code will be available at: https://github.com/mgoulao/tov-vicreg
Off-policy reinforcement learning (RL) from pixel observations is notoriously unstable. As a result, many successful algorithms must combine different domain-specific practices and auxiliary losses to learn meaningful behaviors in complex environments. In this work, we provide novel analysis demonstrating that these instabilities arise from performing temporal-difference learning with a convolutional encoder and low-magnitude rewards. We show that this new visual deadly triad causes unstable training and premature convergence to degenerate solutions, a phenomenon we name catastrophic self-overfitting. Based on our analysis, we propose A-LIX, a method that provides adaptive regularization to the encoder's gradients and explicitly prevents catastrophic self-overfitting from occurring, using a dual objective. By applying A-LIX, we significantly outperform the prior state-of-the-art on the DeepMind Control and Atari 100k benchmarks without any data augmentation or auxiliary losses.
One of the key promises of model-based reinforcement learning is the ability to generalize to novel environments and tasks using an internal model of the world for prediction. However, the generalization ability of model-based agents is not well understood, because existing work has focused on model-free agents when benchmarking generalization. Here, we explicitly measure the generalization ability of model-based agents in comparison to their model-free counterparts. We focus our analysis on MuZero (Schrittwieser et al., 2020), a powerful model-based agent, and evaluate its performance on both procedural and task generalization. We identify three factors of procedural generalization (planning, self-supervised representation learning, and procedural data diversity) and show that by combining these techniques, we achieve state-of-the-art generalization performance and data efficiency on Procgen (Cobbe et al., 2019). However, we find that these factors do not always provide the same benefits for the task generalization benchmarks in Meta-World (Yu et al., 2019), indicating that transfer remains a challenge and may require approaches different from those for procedural generalization. Overall, we suggest that building generalizable agents requires moving beyond the single-task, model-free paradigm and towards self-supervised, model-based agents trained in rich, procedural, multi-task environments.
Despite the many recent successes of deep reinforcement learning (RL), its methods are still data-inefficient, which makes solving many problems prohibitively expensive in terms of data. We aim to remedy this by taking advantage of the rich supervisory signal in unlabeled data for learning state representations. This paper introduces three different representation-learning algorithms that have access to different subsets of the data sources that traditional RL algorithms use: (i) GRICA is inspired by independent component analysis (ICA) and trains a deep neural network to output statistically independent features of the input. GRICA does this by minimizing the mutual information between each feature and the other features. Additionally, GRICA only requires an unsorted collection of environment states. (ii) Latent Representation Prediction (LARP) requires more context: in addition to requiring a state as an input, it also needs the previous state and the action connecting them. This method learns state representations by predicting the next state of the environment from the current state and action. The predictor is used together with a graph search algorithm. (iii) RewPred learns a state representation by training a deep neural network to learn a smoothed version of the reward function. The representation is used for preprocessing inputs to deep RL, while the reward predictor is used for reward shaping. This method needs only state-reward pairs from the environment to learn the representation. We find that each method has its strengths and weaknesses, and conclude from our experiments that including unsupervised representation learning in RL problem-solving pipelines can speed up learning.
We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view. At the same time, we update the target network with a slow-moving average of the online network. While state-of-the-art methods rely on negative pairs, BYOL achieves a new state of the art without them. BYOL reaches 74.3% top-1 classification accuracy on ImageNet using a linear evaluation with a ResNet-50 architecture and 79.6% with a larger ResNet. We show that BYOL performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks. Our implementation and pretrained models are given on GitHub. (* Equal contribution; the order of first authors was randomly selected.)
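The slow-moving average update for the target network can be sketched in a few lines; tau is an assumed decay rate, and the parameters are plain NumPy arrays standing in for the actual network weights:

```python
import numpy as np

def ema_update(online_params, target_params, tau=0.99):
    # target <- tau * target + (1 - tau) * online: one step of the
    # exponential moving average that tracks the online network slowly.
    return [tau * t + (1.0 - tau) * o for o, t in zip(online_params, target_params)]

online = [np.ones(3)]
target = [np.zeros(3)]
for _ in range(100):
    target = ema_update(online, target)
# after many steps the target parameters drift toward the online ones
```

Because the target moves slowly, it provides a stable regression signal for the online predictor, which is what lets BYOL avoid collapse without negative pairs.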
We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training. The approach leverages input perturbations commonly used in computer vision tasks to transform input examples, as well as regularizing the value function and policy. Existing model-free approaches, such as Soft Actor-Critic (SAC) [22], are not able to train deep networks effectively from image pixels. However, the addition of our augmentation method dramatically improves SAC's performance, enabling it to reach state-of-the-art performance on the DeepMind control suite, surpassing model-based [23,38,24] methods and recently proposed contrastive learning [50]. Our approach, which we dub DrQ: Data-regularized Q, can be combined with any model-free reinforcement learning algorithm. We further demonstrate this by applying it to DQN [43] and significantly improve its data-efficiency on the Atari 100k [31] benchmark. An implementation can be found at https://sites. google.com/view/data-regularized-q.
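The pad-and-random-crop shift that DrQ-style methods rely on can be sketched as follows; the pad width of 4 is a commonly used value, but the implementation details here are our own illustration, not DrQ's code:

```python
import numpy as np

def random_shift(img, pad=4, rng=None):
    """Replication-pad an HxW image, then crop back to HxW at a random offset."""
    rng = rng or np.random.default_rng()
    h, w = img.shape
    padded = np.pad(img, pad, mode="edge")  # replicate border pixels
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w]

obs = np.arange(84 * 84, dtype=np.float32).reshape(84, 84)
aug = random_shift(obs, rng=np.random.default_rng(0))
```

In the full method, the Q target and the Q estimate are each averaged over several independently shifted copies of the observation, which regularizes the value function against these nuisance translations.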
Transformer, originally devised for natural language processing, has also demonstrated significant success in computer vision. Thanks to its super expressive power, researchers are investigating ways to deploy transformers to reinforcement learning (RL), and transformer-based models have manifested their potential in representative RL benchmarks. In this paper, we collect and dissect recent advances on transforming RL by transformer (transformer-based RL or TRL), in order to explore its development trajectory and future trend. We group existing developments in two categories: architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation and autonomous driving. For architecture enhancement, these methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework, which models agents and environments much more precisely than deep RL methods, but they are still limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". For trajectory optimization, these methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior cloning framework, which is able to extract policies from static datasets and fully use the long-sequence modeling capability of the transformer. Given these advancements, extensions and challenges in TRL are reviewed and proposals about future direction are discussed. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
Visual reinforcement learning (RL), which makes decisions directly from high-dimensional visual inputs, has demonstrated significant potential in various domains. However, deploying visual RL techniques in the real world remains challenging due to their low sample efficiency and large generalization gaps. To tackle these obstacles, data augmentation (DA) has become a widely used technique in visual RL for acquiring sample-efficient and generalizable policies by diversifying the training data. This survey aims to provide a timely and essential review of DA techniques in visual RL in recognition of the thriving development in this field. In particular, we propose a unified framework for analyzing visual RL and understanding the role of DA in it. We then present a principled taxonomy of the existing augmentation techniques used in visual RL and conduct an in-depth discussion on how to better leverage augmented data in different scenarios. Moreover, we report a systematic empirical evaluation of DA-based techniques in visual RL and conclude by highlighting the directions for future research. As the first comprehensive survey of DA in visual RL, this work is expected to offer valuable guidance to this emerging field.
Learning control from pixels is difficult for reinforcement learning (RL) agents because representation learning and policy learning are intertwined. Previous approaches remedy this issue with auxiliary representation-learning tasks, but they either do not consider the temporal aspect of the problem or only consider single-step transitions. Instead, we propose Hierarchical $k$-Step Latent (HKSL), an auxiliary task that learns representations via a hierarchy of forward models that operate with different magnitudes of step skipping, while also learning to communicate between levels in the hierarchy. We evaluate HKSL on a suite of 30 robot control tasks and find that HKSL either reaches higher episodic returns or converges to maximum performance more quickly than several current baselines. Also, we find that levels in HKSL's hierarchy can learn to specialize in the long- or short-term consequences of the agent's actions, thereby providing the downstream control policy with more informative representations. Finally, we determine that the communication channels between hierarchy levels organize information based on both sides of the communication process, which improves sample efficiency.
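A minimal sketch of one level of such a forward-model hierarchy: roll a latent state ahead, applying the model once per `skip` actions so coarser levels jump several environment steps at a time. The linear toy dynamics and all names here are illustrative, not HKSL's architecture:

```python
import numpy as np

def k_step_rollout(forward_model, z0, actions, skip=1):
    # A coarser hierarchy level (skip > 1) applies its forward model
    # once per `skip` actions, covering the same horizon in fewer hops.
    z = z0
    for a in actions[::skip]:
        z = forward_model(z, a)
    return z

# toy linear latent dynamics for illustration
A = 0.9 * np.eye(4)
b = np.ones(4)
dynamics = lambda z, a: A @ z + b * a

z0 = np.zeros(4)
z_fine = k_step_rollout(dynamics, z0, [1.0, 1.0, 1.0, 1.0], skip=1)
z_coarse = k_step_rollout(dynamics, z0, [1.0, 1.0, 1.0, 1.0], skip=2)
```

The fine level sees every transition while the coarse level compresses the same trajectory into fewer predictions, which is the intuition behind specializing levels to short- and long-term consequences.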
This review addresses the problem of learning abstract representations of measurement data in the context of deep reinforcement learning (DRL). While the data are often ambiguous, high-dimensional, and complex to interpret, many dynamical systems can be effectively described by a low-dimensional set of state variables. Discovering these state variables from data is a key aspect of improving the data efficiency, robustness and generalization of DRL methods, tackling the curse of dimensionality, and bringing interpretability and insight into black-box DRL. This review provides a comprehensive overview of unsupervised representation learning in DRL by describing the main deep learning tools used for learning representations of the world, providing a systematic view of the methods and principles, summarizing applications, benchmarks and evaluation strategies, and discussing open challenges and future directions.
Several self-supervised representation learning methods have been proposed for reinforcement learning (RL) with rich observations. For real-world applications of RL, recovering underlying latent states is crucial, particularly when sensory inputs contain irrelevant and exogenous information. In this work, we study how information bottlenecks can be used to construct latent states efficiently in the presence of task-irrelevant information. We propose architectures that utilize variational and discrete information bottlenecks, coined as RepDIB, to learn structured factorized representations. Exploiting the expressiveness bought by factorized representations, we introduce a simple, yet effective, bottleneck that can be integrated with any existing self-supervised objective for RL. We demonstrate this across several online and offline RL benchmarks, along with a real robot arm task, where we find that compressed representations with RepDIB can lead to strong performance improvements, as the learned bottlenecks help predict only the relevant state while ignoring irrelevant information.
This paper explores the problem of simultaneously learning a value function and a policy in deep actor-critic reinforcement learning models. We find that the common practice of learning these functions jointly is sub-optimal, due to an order-of-magnitude difference in noise levels between the two tasks. Instead, we show that learning these tasks independently, but with a constrained distillation phase, significantly improves performance. Furthermore, we find that policy gradient noise levels can be decreased by using a lower variance return estimate, whereas value learning noise levels decrease with lower bias estimates. Together, these insights inform an extension to Proximal Policy Optimization we call Dual Network Architecture (DNA), which significantly outperforms its predecessor. DNA also exceeds the performance of the popular Rainbow DQN algorithm in four of the five environments tested, even under more difficult stochastic control settings.
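The constrained distillation phase can be sketched as minimizing a KL divergence between the policy heads of the two networks; the generic softmax/KL below is our illustration, not the paper's exact formulation:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_kl(policy_logits, value_net_logits):
    # KL(policy network || value network's policy head), averaged over the
    # batch; driving this to zero keeps the two networks' policies aligned
    # while each is otherwise trained on its own objective.
    p = softmax(policy_logits)
    q = softmax(value_net_logits)
    return np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1))

logits = np.array([[2.0, 0.5, -1.0]])
kl_same = distill_kl(logits, logits)
kl_diff = distill_kl(logits, logits[:, ::-1].copy())
```

Keeping the networks separate isolates the noisy policy-gradient signal from value learning, while the distillation term prevents the two representations from drifting apart.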