While agents trained by Reinforcement Learning (RL) can solve increasingly challenging tasks directly from visual observations, generalizing learned skills to novel environments remains very challenging. Extensive use of data augmentation is a promising technique for improving generalization in RL, but it is often found to decrease sample efficiency and can even lead to divergence. In this paper, we investigate causes of instability when using data augmentation in common off-policy RL algorithms. We identify two problems, both rooted in high-variance Q-targets. Based on our findings, we propose a simple yet effective technique for stabilizing this class of algorithms under augmentation. We perform extensive empirical evaluation of image-based RL using both ConvNets and Vision Transformers (ViT) on a family of benchmarks based on the DeepMind Control Suite, as well as in robotic manipulation tasks. Our method greatly improves stability and sample efficiency of ConvNets under augmentation, and achieves generalization results competitive with state-of-the-art methods for image-based RL in environments with unseen visuals. We further show that our method scales to RL with ViT-based architectures, and that data augmentation may be especially important in this setting.
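The abstract does not spell out the stabilization technique, but a plausible minimal sketch (in PyTorch, with hypothetical `critic`, `critic_target`, and `aug` stand-ins) is to keep augmentation out of the Q-target path, so the target retains low variance while the online critic still benefits from augmented views:

```python
import torch
import torch.nn.functional as F

def stabilized_critic_loss(critic, critic_target, aug, obs, action,
                           reward, next_obs, next_action, discount=0.99):
    """Critic update that applies augmentation only on the online side."""
    with torch.no_grad():
        # Low-variance target: no augmentation on the target path.
        target_q = reward + discount * critic_target(next_obs, next_action)

    # The online critic is trained on both the clean and an augmented view.
    q_clean = critic(obs, action)
    q_aug = critic(aug(obs), action)
    return F.mse_loss(q_clean, target_q) + F.mse_loss(q_aug, target_q)
```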
We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training. The approach leverages input perturbations commonly used in computer vision tasks to transform input examples, as well as regularizing the value function and policy. Existing model-free approaches, such as Soft Actor-Critic (SAC) [22], are not able to train deep networks effectively from image pixels. However, the addition of our augmentation method dramatically improves SAC's performance, enabling it to reach state-of-the-art performance on the DeepMind control suite, surpassing model-based [23,38,24] methods and recently proposed contrastive learning [50]. Our approach, which we dub DrQ: Data-regularized Q, can be combined with any model-free reinforcement learning algorithm. We further demonstrate this by applying it to DQN [43] and significantly improve its data-efficiency on the Atari 100k [31] benchmark. An implementation can be found at https://sites.google.com/view/data-regularized-q.
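A hedged PyTorch sketch of the regularization described above, averaging the TD-target over K augmented copies of the next observation and the critic loss over M augmented copies of the current one; `aug`, `critic`, and `policy` are illustrative placeholders:

```python
import torch
import torch.nn.functional as F

def drq_critic_loss(critic, critic_target, policy, aug, obs, action,
                    reward, next_obs, K=2, M=2, discount=0.99):
    with torch.no_grad():
        # Regularize the target: average over K augmented next observations.
        target_q = 0.0
        for _ in range(K):
            next_obs_k = aug(next_obs)
            target_q = target_q + critic_target(next_obs_k, policy(next_obs_k))
        target_q = reward + discount * target_q / K

    # Regularize the estimate: average the loss over M augmented views.
    loss = 0.0
    for _ in range(M):
        loss = loss + F.mse_loss(critic(aug(obs), action), target_q)
    return loss / M
```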
Transformers have achieved great success in learning vision and language representations that are general across various downstream tasks. In visual control, learning transferable state representations that can transfer between different control tasks is important for reducing the training sample size. However, porting Transformers to sample-efficient visual control remains a challenging and unsolved problem. To this end, we propose a novel Control Transformer (CtrlFormer) possessing many appealing benefits that prior arts do not have. Firstly, CtrlFormer jointly learns self-attention mechanisms between visual tokens and policy tokens among different control tasks, where multitask representations can be learned and transferred without catastrophic forgetting. Secondly, we carefully design a contrastive reinforcement learning paradigm to train CtrlFormer, enabling it to achieve high sample efficiency, which is important in control problems. For example, in the DMControl benchmark, unlike recent advanced methods that fail by producing a zero score in the "Cartpole" task after transfer learning with 100k samples, CtrlFormer can achieve a state-of-the-art score with only 100k samples while maintaining the performance of previous tasks. The code and models are released on our project homepage.
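As a rough illustration of joint self-attention between visual tokens and per-task policy tokens (the exact CtrlFormer architecture differs; all names below are hypothetical), one might prepend a learned task token to the patch tokens and read the policy from its output embedding:

```python
import torch
import torch.nn as nn

class TokenPolicyEncoder(nn.Module):
    """Joint self-attention over image patch tokens and per-task policy tokens."""
    def __init__(self, num_tasks, dim=128, patches=64, heads=4, layers=4):
        super().__init__()
        # Each control task owns one learned policy token.
        self.policy_tokens = nn.Parameter(torch.randn(num_tasks, 1, dim))
        self.pos_emb = nn.Parameter(torch.randn(1, patches, dim))
        block = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, layers)

    def forward(self, patch_tokens, task_id):
        # patch_tokens: (batch, patches, dim) embeddings of image patches.
        b = patch_tokens.shape[0]
        task_tok = self.policy_tokens[task_id].expand(b, 1, -1)
        x = torch.cat([task_tok, patch_tokens + self.pos_emb], dim=1)
        x = self.encoder(x)
        return x[:, 0]  # the task's policy-token output drives its policy head
```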
We present CURL: Contrastive Unsupervised Representations for Reinforcement Learning. CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features. CURL outperforms prior pixel-based methods, both model-based and model-free, on complex tasks in the DeepMind Control Suite and Atari Games showing 1.9x and 1.2x performance gains at the 100K environment and interaction steps benchmarks respectively. On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features. Our code is open-sourced and available at https://www.github.com/MishaLaskin/curl.
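A minimal sketch of the contrastive objective CURL describes: two independently augmented views of the same frames, a momentum key encoder, and in-batch negatives scored by a learned bilinear similarity (`query_enc`, `key_enc`, and `W` are stand-in names):

```python
import torch
import torch.nn.functional as F

def curl_infonce_loss(query_enc, key_enc, W, obs, aug):
    """InfoNCE between two augmented views; positives are matching frames."""
    q = query_enc(aug(obs))                       # (batch, dim)
    with torch.no_grad():
        k = key_enc(aug(obs))                     # momentum-encoded second view
    logits = q @ W @ k.t()                        # (batch, batch) similarities
    logits = logits - logits.max(dim=1, keepdim=True).values  # stability
    labels = torch.arange(q.shape[0], device=q.device)
    return F.cross_entropy(logits, labels)
```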
Poor sample efficiency continues to be the primary challenge for deployment of deep Reinforcement Learning (RL) algorithms for real-world applications, and in particular for visuo-motor control. Model-based RL has the potential to be highly sample efficient by concurrently learning a world model and using synthetic rollouts for planning and policy improvement. However, in practice, sample-efficient learning with model-based RL is bottlenecked by the exploration challenge. In this work, we find that leveraging just a handful of demonstrations can dramatically improve the sample-efficiency of model-based RL. Simply appending demonstrations to the interaction dataset, however, does not suffice. We identify key ingredients for leveraging demonstrations in model learning -- policy pretraining, targeted exploration, and oversampling of demonstration data -- which forms the three phases of our model-based RL framework. We empirically study three complex visuo-motor control domains and find that our method is 150%-250% more successful in completing sparse reward tasks compared to prior approaches in the low data regime (100K interaction steps, 5 demonstrations). Code and videos are available at: https://nicklashansen.github.io/modemrl
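The oversampling ingredient could look roughly like the following sketch, where demonstrations stay in a separate buffer and occupy a fixed fraction of every minibatch; the fraction value is illustrative, not the paper's:

```python
import random

def sample_batch(demo_buffer, interaction_buffer, batch_size, demo_frac=0.25):
    """Draw a minibatch that oversamples demonstration transitions.

    Rather than merging demonstrations into one buffer (which the paper
    reports is insufficient), keep them separate and dedicate a fixed
    fraction of every batch to demonstration data.
    """
    n_demo = int(batch_size * demo_frac)
    batch = random.sample(demo_buffer, min(n_demo, len(demo_buffer)))
    batch += random.sample(interaction_buffer, batch_size - len(batch))
    random.shuffle(batch)
    return batch
```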
Learning from visual observations is a fundamental yet challenging problem in Reinforcement Learning (RL). Although algorithmic advances combined with convolutional neural networks have proved to be a recipe for success, current methods are still lacking on two fronts: (a) data-efficiency of learning and (b) generalization to new environments. To this end, we present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms. We perform the first extensive study of general data augmentations for RL on both pixel-based and state-based inputs, and introduce two new data augmentations: random translate and random amplitude scale. We show that augmentations such as random translate, crop, color jitter, patch cutout, random convolutions, and amplitude scale can enable simple RL algorithms to outperform complex state-of-the-art methods across common benchmarks. RAD sets a new state-of-the-art in terms of data-efficiency and final performance on the DeepMind Control Suite benchmark for pixel-based control as well as the OpenAI Gym benchmark for state-based control. We further demonstrate that RAD significantly improves test-time generalization over existing methods on several OpenAI ProcGen benchmarks. Our RAD module and training code are available at https://www.github.com/MishaLaskin/rad.
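For concreteness, a sketch of the newly introduced random translate augmentation: each frame is placed at a random offset inside a larger zero-padded canvas (sizes are illustrative):

```python
import torch

def random_translate(imgs, size=108):
    """RAD-style random translate: place each frame at a random position
    inside a larger zero-padded canvas (e.g., 100x100 -> 108x108)."""
    n, c, h, w = imgs.shape
    canvas = imgs.new_zeros((n, c, size, size))
    for i in range(n):
        top = torch.randint(0, size - h + 1, (1,)).item()
        left = torch.randint(0, size - w + 1, (1,)).item()
        canvas[i, :, top:top + h, left:left + w] = imgs[i]
    return canvas
```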
Deep reinforcement learning policies, despite their outstanding efficiency in simulated visual control tasks, have shown disappointing ability to generalize across disturbances in the input training images. Changes in image statistics or distracting background elements are pitfalls that prevent generalization and real-world applicability of such control policies. We elaborate on the intuition that a good visual policy should be able to identify which pixels are important for its decision, and preserve this identification of important sources of information across images. This implies that training a policy with a small generalization gap should focus on such important pixels and ignore the others. This leads to the introduction of Saliency-Guided Q-Networks (SGQN), a generic method for visual reinforcement learning that is compatible with any value function learning method. SGQN vastly improves the generalization capability of Soft Actor-Critic agents and outperforms existing state-of-the-art methods on the DeepMind Control Generalization benchmark, setting a new reference in terms of training efficiency, generalization gap, and policy interpretability.
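SGQN's actual saliency machinery is more involved, but a simplified vanilla-gradient stand-in conveys the idea: binarize the magnitude of the critic's input gradient to get the pixels the value estimate depends on, which can then serve as a supervision target:

```python
import torch

def q_saliency_mask(critic, obs, action, quantile=0.95):
    """Pixels the critic's value estimate is most sensitive to.

    Gradient magnitude of Q with respect to the input image, binarized at
    a quantile; such masks can supervise the agent to attend to the same
    pixels under visual shifts.
    """
    obs = obs.clone().requires_grad_(True)
    q = critic(obs, action).sum()
    grad, = torch.autograd.grad(q, obs)
    sal = grad.abs().amax(dim=1, keepdim=True)        # max over channels
    thresh = torch.quantile(sal.flatten(1), quantile, dim=1)
    return (sal >= thresh.view(-1, 1, 1, 1)).float()
```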
Learning generalizable policies that can adapt to unseen environments remains challenging in visual Reinforcement Learning (RL). Existing approaches try to acquire a robust representation via diversifying the appearances of in-domain observations for better generalization. Limited by the specific observations of the environment, these methods ignore the possibility of exploring diverse real-world image datasets. In this paper, we investigate how a visual RL agent would benefit from the off-the-shelf visual representations. Surprisingly, we find that the early layers in an ImageNet pre-trained ResNet model could provide rather generalizable representations for visual RL. Hence, we propose Pre-trained Image Encoder for Generalizable visual reinforcement learning (PIE-G), a simple yet effective framework that can generalize to the unseen visual scenarios in a zero-shot manner. Extensive experiments are conducted on DMControl Generalization Benchmark, DMControl Manipulation Tasks, Drawer World, and CARLA to verify the effectiveness of PIE-G. Empirical evidence suggests PIE-G improves sample efficiency and significantly outperforms previous state-of-the-art methods in terms of generalization performance. In particular, PIE-G boasts a 55% generalization performance gain on average in the challenging video background setting. Project Page: https://sites.google.com/view/pie-g/home.
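A minimal sketch of the PIE-G recipe under the assumption that "early layers" means everything up to `layer2` of a ResNet-18 (the cutoff is a design choice, not taken from the paper):

```python
import torch.nn as nn
from torchvision import models

def make_pieg_style_encoder():
    """Frozen early layers of an ImageNet-pretrained ResNet as the RL encoder."""
    resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    encoder = nn.Sequential(
        resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
        resnet.layer1, resnet.layer2,
    )
    for p in encoder.parameters():
        p.requires_grad = False        # representation stays fixed during RL
    encoder.eval()                     # keep BatchNorm statistics fixed as well
    return encoder
```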
How to learn an effective reinforcement learning-based model for control tasks from high-level visual observations is a practical and challenging problem. A key to solving this problem is to learn low-dimensional state representations from observations, from which an effective policy can be learned. In order to boost the learning of state encoding, recent works are focused on capturing behavioral similarities between state representations or applying data augmentation on visual observations. In this paper, we propose a novel meta-learner-based framework for representation learning regarding behavioral similarities for reinforcement learning. Specifically, our framework encodes the high-dimensional observations into two decomposed embeddings regarding reward and dynamics in a Markov Decision Process (MDP). A pair of meta-learners are developed, one of which quantifies the reward similarity and the other quantifies dynamics similarity over the correspondingly decomposed embeddings. The meta-learners are self-learned to update the state embeddings by approximating two disjoint terms in on-policy bisimulation metric. To incorporate the reward and dynamics terms, we further develop a strategy to adaptively balance their impacts based on different tasks or environments. We empirically demonstrate that our proposed framework outperforms state-of-the-art baselines on several benchmarks, including conventional DM Control Suite, Distracting DM Control Suite and a self-driving task CARLA.
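A structural sketch of the two meta-learners, assuming (loosely, with all names hypothetical) that each is a small network regressing one term of the on-policy bisimulation metric over its decomposed embedding; `dyn_dist` stands in for a distance between predicted next-state distributions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityMetaLearner(nn.Module):
    """Learned head scoring the similarity of two state embeddings."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, z_i, z_j):
        return self.net(torch.cat([z_i, z_j], dim=-1)).squeeze(-1)

def bisim_meta_loss(reward_head, dyn_head, z_r_i, z_r_j, z_d_i, z_d_j,
                    reward_i, reward_j, dyn_dist, gamma=0.99):
    # The reward meta-learner targets the |r_i - r_j| term of the metric;
    # the dynamics meta-learner targets the discounted transition term.
    loss_r = F.mse_loss(reward_head(z_r_i, z_r_j), (reward_i - reward_j).abs())
    loss_d = F.mse_loss(dyn_head(z_d_i, z_d_j), gamma * dyn_dist)
    return loss_r + loss_d
```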
Visual reinforcement learning (RL), which makes decisions directly from high-dimensional visual inputs, has demonstrated significant potential in various domains. However, deploying visual RL techniques in the real world remains challenging due to their low sample efficiency and large generalization gaps. To tackle these obstacles, data augmentation (DA) has become a widely used technique in visual RL for acquiring sample-efficient and generalizable policies by diversifying the training data. This survey aims to provide a timely and essential review of DA techniques in visual RL in recognition of the thriving development in this field. In particular, we propose a unified framework for analyzing visual RL and understanding the role of DA in it. We then present a principled taxonomy of the existing augmentation techniques used in visual RL and conduct an in-depth discussion on how to better leverage augmented data in different scenarios. Moreover, we report a systematic empirical evaluation of DA-based techniques in visual RL and conclude by highlighting the directions for future research. As the first comprehensive survey of DA in visual RL, this work is expected to offer valuable guidance to this emerging field.
We investigate whether self-supervised learning (SSL) can improve online reinforcement learning (RL) from pixels. We extend the contrastive reinforcement learning framework (e.g., CURL) that jointly optimizes SSL and RL losses, and conduct an extensive amount of experiments with various self-supervised losses. Our observations suggest that the existing SSL framework for RL fails to bring meaningful improvements over baselines that only leverage image augmentation, when the same amount of data and augmentation is used. We further perform an evolutionary search to find the optimal combination of multiple self-supervised losses for RL, but find that even such a loss combination fails to meaningfully outperform methods that only utilize carefully designed image augmentations. In general, using self-supervised losses under the existing framework lowered RL performance. We evaluate these approaches in multiple different environments, including a real-world robot environment, and confirm that no single self-supervised loss or image augmentation method dominates all environments, and that the current framework for jointly optimizing SSL and RL is limited. Finally, we empirically investigate pretraining frameworks for SSL + RL and the properties of representations learned with the different approaches.
Data-driven model predictive control has two key advantages over model-free methods: a potential for improved sample efficiency through model learning, and better performance as the computational budget for planning increases. However, it is both costly to plan over long horizons and challenging to obtain an accurate model of the environment. In this work, we combine the strengths of model-free and model-based methods. We use a learned task-oriented latent dynamics model for local trajectory optimization over a short horizon, and use a learned terminal value function to estimate the long-term return beyond the horizon, both of which are learned jointly by temporal difference learning. Our method, TD-MPC, achieves superior sample efficiency and asymptotic performance over prior work on both state- and image-based continuous control tasks from DMControl and Meta-World. Code and video results are available at https://nicklashansen.github.io/td-mpc.
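The resulting planning objective can be sketched as a short latent rollout whose return estimate is completed by the terminal value function (component names are illustrative):

```python
import torch

def estimate_return(dynamics, reward_fn, value_fn, z0, actions, discount=0.99):
    """Score an action sequence with a short latent rollout plus terminal value.

    dynamics, reward_fn, and value_fn stand for the jointly TD-learned latent
    model components; actions has shape (horizon, batch, action_dim).
    """
    z, total, scale = z0, 0.0, 1.0
    for a in actions:                       # short-horizon latent rollout
        total = total + scale * reward_fn(z, a)
        z = dynamics(z, a)
        scale = scale * discount
    # The terminal value estimates the return beyond the planning horizon.
    return total + scale * value_fn(z)
```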
Effective exploration is a crucial challenge in deep reinforcement learning. Several methods, such as behavioral priors, are able to leverage offline data to efficiently accelerate reinforcement learning on complex tasks. However, the effectiveness of such methods is limited if the task at hand deviates excessively from the demonstrated tasks. In our work, we propose to learn features from offline data that are shared by a more diverse range of tasks, such as correlations between actions and directedness. We therefore introduce state-free priors, which directly model temporal consistency in demonstrated trajectories and are capable of driving exploration in complex tasks even when trained on data collected on simpler tasks. Furthermore, we introduce a novel integration scheme for action priors in off-policy reinforcement learning by dynamically sampling actions from a probabilistic mixture of policy and action prior. We compare our approach against strong baselines and provide empirical evidence that it can accelerate reinforcement learning in long-horizon continuous control tasks under sparse reward settings.
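The proposed integration scheme reduces, in sketch form, to sampling each exploration action from a mixture of the policy and the action prior; the mixture weight `w` is dynamically adapted in the paper, fixed here for brevity:

```python
import torch

def sample_action(policy, prior, obs, history, w=0.5):
    """Draw the exploration action from a mixture of policy and prior.

    With probability w the action comes from the (state-free) prior
    conditioned on recent actions, otherwise from the current policy.
    """
    if torch.rand(()) < w:
        return prior.sample(history)   # temporal-consistency prior over actions
    return policy.sample(obs)
```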
The ability to separate signal from noise, and to reason with clean abstractions, is critical to intelligence. With this ability, humans can efficiently perform real-world tasks without considering all possible nuisance factors. How can artificial agents do the same? What kind of information can an agent safely discard as noise? In this work, we categorize information out in the wild into four types based on controllability and relation with reward, and formulate useful information as that which is both controllable and reward-relevant. This framework clarifies the kinds of information removed by various prior works on representation learning in reinforcement learning (RL), and leads to our proposed method of learning a Denoised MDP that explicitly factors out certain noise distractors. Extensive experiments on variants of the DeepMind Control Suite and RoboDesk demonstrate that our denoised world model outperforms using raw observations alone, as well as prior works, across policy optimization control tasks and the non-control task of joint position regression.
Offline reinforcement learning has shown great promise for leveraging large pre-collected datasets for policy learning, allowing agents to forgo often costly online data collection. However, to date, offline reinforcement learning from visual observations has been relatively under-explored, and there is a lack of understanding of where the remaining challenges lie. In this paper, we seek to establish simple baselines for continuous control in the visual domain. We show that simple modifications to two state-of-the-art vision-based online reinforcement learning algorithms, DreamerV2 and DrQ-v2, suffice to outperform prior work and establish a competitive baseline. We rigorously evaluate these algorithms on both existing offline datasets and a new testbed for offline reinforcement learning from visual observations that better represents the data distributions present in real-world offline RL problems, and open-source our code and data to facilitate progress in this important domain. Finally, we present and analyze several key desiderata unique to offline RL from visual observations, including visual distractions and visually identifiable changes in dynamics.
Deep reinforcement learning (RL) algorithms suffer severe performance degradation when interaction data is scarce, which limits their real-world application. Recently, visual representation learning has been shown to be effective and promising for boosting sample efficiency in RL. These methods usually rely on contrastive learning and data augmentation to train a transition model for state prediction, which is different from how the model is used in RL: performing value-based planning. Accordingly, the learned model may not align well with the environment and may fail to generate consistent value predictions, especially when state transitions are not deterministic. To address this issue, we propose a novel method, called value-consistent representation learning (VCR), to learn representations that are directly related to decision-making. More specifically, VCR trains a model to predict the future state (also referred to as the "imagined state") based on the current state and a sequence of actions. Instead of aligning this imagined state with the real state returned by the environment, VCR applies a Q-value head on both states and obtains two distributions of action values. Then a distance is computed and minimized to force the imagined state to produce an action-value prediction similar to that of the real state. We develop two implementations of the above idea for discrete and continuous action spaces respectively. We conduct experiments on the Atari 100K and DeepMind Control Suite benchmarks to validate their effectiveness for improving sample efficiency. It has been demonstrated that our method achieves new state-of-the-art performance for search-free RL algorithms.
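A hedged sketch of the VCR idea for the discrete-action case: roll the model forward to an imagined state, compute action-value distributions from both the imagined and the real state with a shared Q-head, and minimize their divergence (the paper's exact distance may differ):

```python
import torch
import torch.nn.functional as F

def vcr_loss(encoder, dynamics, q_head, obs, actions, final_obs):
    """Align action-value predictions of imagined and real states."""
    z = encoder(obs)
    for a in actions:                  # imagined latent rollout
        z = dynamics(z, a)
    with torch.no_grad():
        z_real = encoder(final_obs)    # real state the environment returned
        target = F.softmax(q_head(z_real), dim=-1)
    pred = F.log_softmax(q_head(z), dim=-1)
    # KL distance between the two action-value distributions.
    return F.kl_div(pred, target, reduction="batchmean")
```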
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies "end-to-end": directly from raw pixel inputs.
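For reference, a condensed sketch of the DDPG-style update described above (soft target-network updates and exploration noise omitted; all handles are stand-ins):

```python
import torch
import torch.nn.functional as F

def ddpg_update(actor, critic, actor_tgt, critic_tgt, batch,
                actor_opt, critic_opt, discount=0.99):
    """One DDPG step: TD critic update, then deterministic policy gradient."""
    obs, action, reward, next_obs, done = batch
    with torch.no_grad():
        # Bootstrap with the target actor-critic pair for stability.
        next_q = critic_tgt(next_obs, actor_tgt(next_obs))
        target_q = reward + discount * (1.0 - done) * next_q
    critic_loss = F.mse_loss(critic(obs, action), target_q)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Deterministic policy gradient: ascend Q(s, pi(s)).
    actor_loss = -critic(obs, actor(obs)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
```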
Modeling the world can benefit robot learning by providing a rich training signal for shaping an agent's latent state space. However, learning world models in unconstrained environments over high-dimensional observation spaces such as images is challenging. One source of difficulty is the presence of irrelevant but hard-to-model background distractions, and unimportant visual details of task-relevant entities. We address this issue by learning a recurrent latent dynamics model which contrastively predicts the next observation. This simple model leads to surprisingly robust robotic control even with simultaneous camera, background, and color distractions. We outperform alternatives such as bisimulation methods, which impose dissimilarity measures derived from divergence in future reward or future optimal actions. We obtain state-of-the-art results on the Distracting Control Suite, a challenging benchmark for pixel-based robotic control.
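The contrastive next-observation prediction can be sketched as an InfoNCE objective between the recurrently predicted latent and the embedding of the actual next frame, with in-batch negatives (names are illustrative):

```python
import torch
import torch.nn.functional as F

def contrastive_next_obs_loss(rnn, encoder, h, action, next_obs):
    """Contrastively predict the next observation from the recurrent state.

    Instead of reconstructing pixels (which wastes capacity on background
    distractors), score the true next-frame embedding against in-batch
    negatives.
    """
    h_next = rnn(h, action)                     # predicted next latent state
    e_next = encoder(next_obs)                  # embedding of actual next frame
    logits = h_next @ e_next.t()                # (batch, batch) similarity
    labels = torch.arange(logits.shape[0], device=logits.device)
    return F.cross_entropy(logits, labels)
```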
We present a novel method of learning style-agnostic representations using both style transfer and adversarial learning in the reinforcement learning framework. The style here refers to task-irrelevant details such as the color of the background in the images, where generalizing the learned policy across environments with different styles remains a challenge. Focusing on learning style-agnostic representations, our method trains the actor with diverse image styles generated from an inherent adversarial style-perturbation generator, which plays a min-max game between the actor and the generator, without demanding expert knowledge for data augmentation or additional class labels for adversarial training. We verify that our method achieves competitive or better performance than state-of-the-art approaches on the Procgen and Distracting Control Suite benchmarks, and further investigate the features extracted from our model, showing that the model better captures invariants and is less distracted by shifted styles. The code is available at https://github.com/postech-cvlab/style-agnostic-rl.
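In sketch form, the min-max game alternates a generator step that makes the actor's objective worse and an actor step on the freshly perturbed observations (`actor_loss_fn` is a hypothetical stand-in for the RL objective):

```python
def adversarial_style_step(actor, generator, actor_opt, gen_opt, obs,
                           actor_loss_fn):
    """One round of the min-max game between actor and style generator."""
    # Generator step: perturb the style of obs to maximize the actor's loss.
    gen_loss = -actor_loss_fn(actor, generator(obs))
    gen_opt.zero_grad()
    gen_loss.backward()
    gen_opt.step()

    # Actor step: minimize the loss on newly styled views (generator frozen).
    actor_loss = actor_loss_fn(actor, generator(obs).detach())
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
```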
Conventional control methods face obstacles due to the complexity of systems and the intense demand on data density, so the development of modern and more efficient control methods is required. Off-policy, model-free reinforcement learning algorithms help avoid working with complex models. They have become prominent methods in terms of speed and accuracy because they use past experience to learn optimal policies. In this study, three reinforcement learning algorithms, DDPG, TD3, and SAC, have been used to train a Fetch robotic manipulator on four different tasks in the MuJoCo simulation environment. All of these algorithms are off-policy and able to achieve their desired target by optimizing both policy and value functions. In the current study, the efficiency and speed of these three algorithms are analyzed in a controlled environment.