We revisit a simple Learning-from-Scratch baseline for visuo-motor control that uses data augmentation and a shallow ConvNet. We find that this baseline has competitive performance with recent methods that leverage frozen visual representations trained on large-scale vision datasets.
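For reference, a shallow ConvNet encoder of the kind such a baseline might use can be written in a few lines of PyTorch; the layer sizes below are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

# Illustrative shallow ConvNet encoder for 84x84 RGB frames; layer counts and
# channel widths are assumptions, not the paper's exact configuration.
encoder = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, stride=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, stride=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, stride=1), nn.ReLU(),
    nn.Flatten(),
)

with torch.no_grad():
    features = encoder(torch.zeros(1, 3, 84, 84))  # -> (1, 32 * 35 * 35)
print(features.shape)
```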
Poor sample efficiency continues to be the primary challenge for deployment of deep Reinforcement Learning (RL) algorithms for real-world applications, and in particular for visuo-motor control. Model-based RL has the potential to be highly sample efficient by concurrently learning a world model and using synthetic rollouts for planning and policy improvement. However, in practice, sample-efficient learning with model-based RL is bottlenecked by the exploration challenge. In this work, we find that leveraging just a handful of demonstrations can dramatically improve the sample-efficiency of model-based RL. Simply appending demonstrations to the interaction dataset, however, does not suffice. We identify key ingredients for leveraging demonstrations in model learning -- policy pretraining, targeted exploration, and oversampling of demonstration data -- which form the three phases of our model-based RL framework. We empirically study three complex visuo-motor control domains and find that our method is 150%-250% more successful in completing sparse reward tasks compared to prior approaches in the low data regime (100K interaction steps, 5 demonstrations). Code and videos are available at: https://nicklashansen.github.io/modemrl
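For illustration, one of those ingredients, oversampling of demonstration data, might be realized with a batch sampler along the following lines; this is a hedged sketch, not the authors' implementation, and the 25% ratio is an assumption:

```python
import random

def mixed_batch(demo_buffer, online_buffer, batch_size, demo_fraction=0.25):
    """Draw a training batch that oversamples demonstration transitions.

    `demo_fraction` is illustrative; the key point is that the handful of
    demonstrations keeps a fixed share of every batch even as the online
    replay buffer grows much larger.
    """
    n_demo = int(batch_size * demo_fraction)
    batch = random.choices(demo_buffer, k=n_demo)
    batch += random.choices(online_buffer, k=batch_size - n_demo)
    random.shuffle(batch)
    return batch
```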
In recent years, pre-trained representations have emerged as a powerful abstraction for AI applications in computer vision, natural language, and speech. However, policy learning for control is still dominated by a tabula-rasa learning paradigm, with visuo-motor policies often trained from scratch on data from the deployment environment. In this context, we revisit and study the role of pre-trained visual representations for control, in particular representations trained on large-scale computer vision datasets. Through extensive empirical evaluation across diverse control domains (Habitat, DeepMind Control, Adroit, Franka Kitchen), we isolate and study the importance of different representation training methods, data augmentations, and feature hierarchies. Overall, we find that pre-trained visual representations can be competitive with, or even better than, ground-truth state representations for training control policies. This is despite using only out-of-domain data from standard vision datasets, without any in-domain data from the deployment environments. Source code and more information is available at https://sites.google.com/view/pvr-control.
Learning generalizable policies that can adapt to unseen environments remains challenging in visual Reinforcement Learning (RL). Existing approaches try to acquire a robust representation via diversifying the appearances of in-domain observations for better generalization. Limited by the specific observations of the environment, these methods ignore the possibility of exploring diverse real-world image datasets. In this paper, we investigate how a visual RL agent would benefit from the off-the-shelf visual representations. Surprisingly, we find that the early layers in an ImageNet pre-trained ResNet model could provide rather generalizable representations for visual RL. Hence, we propose Pre-trained Image Encoder for Generalizable visual reinforcement learning (PIE-G), a simple yet effective framework that can generalize to the unseen visual scenarios in a zero-shot manner. Extensive experiments are conducted on DMControl Generalization Benchmark, DMControl Manipulation Tasks, Drawer World, and CARLA to verify the effectiveness of PIE-G. Empirical evidence suggests PIE-G improves sample efficiency and significantly outperforms previous state-of-the-art methods in terms of generalization performance. In particular, PIE-G boasts a 55% generalization performance gain on average in the challenging video background setting. Project Page: https://sites.google.com/view/pie-g/home.
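A minimal sketch of this recipe, assuming a torchvision ResNet-18 and an arbitrary cut-off after the second residual stage (the cut-off point, input size, and policy head are illustrative, not PIE-G's exact configuration):

```python
import torch
import torch.nn as nn
from torchvision import models

# Frozen early layers of an ImageNet pre-trained ResNet-18 as the observation
# encoder; the cut-off point and head sizes are assumptions for illustration.
resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
encoder = nn.Sequential(
    resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
    resnet.layer1, resnet.layer2,
)
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False  # zero-shot: the encoder is never fine-tuned

action_dim = 6  # hypothetical action dimensionality
policy_head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, action_dim),
)

with torch.no_grad():
    obs = torch.zeros(1, 3, 224, 224)
    action = policy_head(encoder(obs))
```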
While agents trained by Reinforcement Learning (RL) can solve increasingly challenging tasks directly from visual observations, generalizing learned skills to novel environments remains very challenging. Extensive use of data augmentation is a promising technique for improving generalization in RL, but it is often found to decrease sample efficiency and can even lead to divergence. In this paper, we investigate the causes of instability when using data augmentation in common off-policy RL algorithms. We identify two problems, both rooted in high-variance Q-targets. Based on our findings, we propose a simple yet effective technique for stabilizing this class of algorithms under augmentation. We perform extensive empirical evaluation of image-based RL with both ConvNets and Vision Transformers (ViT) on a family of benchmarks based on the DeepMind Control Suite, as well as on robotic manipulation tasks. Our method greatly improves the stability and sample efficiency of ConvNets under augmentation, and achieves generalization results competitive with state-of-the-art methods for image-based RL in environments with unseen visuals. We further show that our method scales to RL with ViT-based architectures, and that data augmentation may be especially important in this setting.
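A hedged PyTorch sketch of one way to realize the stabilization described above: Q-targets are computed from unaugmented observations only, and the critic is regressed onto that shared target from both the clean and the augmented view (function and argument names are illustrative):

```python
import torch
import torch.nn.functional as F

def stabilized_critic_loss(q_net, q_target_net, policy, obs, aug_obs, action,
                           reward, next_obs, discount=0.99):
    """Critic loss with a shared, low-variance Q-target.

    The target is computed once from unaugmented observations; the Q-function
    is then fit to that same target from both the clean and the augmented view
    of the current observation.
    """
    with torch.no_grad():
        next_action = policy(next_obs)
        target_q = reward + discount * q_target_net(next_obs, next_action)
    q_clean = q_net(obs, action)
    q_aug = q_net(aug_obs, action)
    return F.mse_loss(q_clean, target_q) + F.mse_loss(q_aug, target_q)
```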
The ability to autonomously learn behaviors via direct interaction in uninstrumented environments can lead to generalist robots capable of enhancing productivity or providing care in unstructured settings. Such uninstrumented settings warrant operation using only the robot's proprioceptive sensors, such as onboard cameras and joint encoders, which can make policy learning challenging due to high dimensionality and partial observability. We propose RRL: ResNet as representation for Reinforcement Learning, a straightforward yet effective approach to learning complex behaviors directly from proprioceptive inputs. RRL fuses features extracted from a pre-trained ResNet into a standard reinforcement learning pipeline and delivers results comparable to learning directly from the state. On a simulated dexterous manipulation benchmark, where state-of-the-art methods fail to make significant progress, RRL delivers contact-rich behaviors. The appeal of RRL lies in its simplicity in bringing together progress from the fields of representation learning, imitation learning, and reinforcement learning. Its effectiveness in learning behaviors directly from visual inputs, with performance and sample efficiency matching learning from the state, is far from obvious, even in complex high-dimensional domains.
Offline reinforcement learning has shown great promise in leveraging large pre-collected datasets for policy learning, allowing agents to forgo often-costly online data collection. However, offline reinforcement learning from visual observations has so far been relatively under-explored, and there is a lack of understanding of where the remaining challenges lie. In this paper, we seek to establish simple baselines for continuous control in the visual domain. We show that simple modifications to two state-of-the-art vision-based online reinforcement learning algorithms, DreamerV2 and DrQ-v2, suffice to outperform prior work and establish a competitive baseline. We rigorously evaluate these algorithms on existing offline datasets as well as on a new testbed for offline reinforcement learning from visual observations that better represents the data distributions present in real-world offline reinforcement learning problems, and we open-source our code and data to facilitate progress in this important domain. Finally, we present and analyze several key desiderata unique to offline RL from visual observations, including visual distractions and visually identifiable changes in dynamics.
Developing robots that are capable of many skills and generalization to unseen scenarios requires progress on two fronts: efficient collection of large and diverse datasets, and training of high-capacity policies on the collected data. While large datasets have propelled progress in other fields like computer vision and natural language processing, collecting data of comparable scale is particularly challenging for physical systems like robotics. In this work, we propose a framework to bridge this gap and better scale up robot learning, under the lens of multi-task, multi-scene robot manipulation in kitchen environments. Our framework, named CACTI, has four stages that separately handle data collection, data augmentation, visual representation learning, and imitation policy training. In the CACTI framework, we highlight the benefit of adapting state-of-the-art models for image generation as part of the augmentation stage, and the significant improvement of training efficiency by using pretrained out-of-domain visual representations at the compression stage. Experimentally, we demonstrate that 1) on a real robot setup, CACTI enables efficient training of a single policy capable of 10 manipulation tasks involving kitchen objects, and robust to varying layouts of distractor objects; 2) in a simulated kitchen environment, CACTI trains a single policy on 18 semantic tasks across up to 50 layout variations per task. The simulation task benchmark and augmented datasets in both real and simulated environments will be released to facilitate future research.
Recent advances in unsupervised representation learning have significantly improved the sample efficiency of training reinforcement learning policies in simulated environments. However, similar gains have not yet been seen for real-robot reinforcement learning. In this work, we focus on enabling data-efficient real-robot learning from pixels. We present contrastive pre-training and data augmentation for efficient robotic learning, a method that utilizes data augmentation and unsupervised learning to achieve sample-efficient training of real-robot arm policies from sparse rewards. While contrastive pre-training, data augmentation, demonstrations, and reinforcement learning are insufficient for efficient learning on their own, our main contribution is showing that the combination of these disparate techniques results in a simple yet data-efficient method. We show that, given only 10 demonstrations, a robot arm can learn sparse-reward manipulation policies from pixels, such as reaching, picking, moving, pulling a large object, flipping a switch, and opening a drawer, in just 30 minutes of real-world training time. We include videos and code on the project website: https://sites.google.com/view/efficient-robotic-manipulation/home
Research on Inverse Reinforcement Learning (IRL) from third-person videos has shown encouraging results in removing the need for manual reward design for robotic tasks. However, most prior works are still limited by being trained on videos from relatively constrained domains. In this paper, we argue that the true potential of third-person IRL lies in increasing the diversity of videos for better scaling. To learn a reward function from diverse videos, we propose to perform graph abstraction on the videos, followed by temporal matching in graph space to measure task progress. Our insight is that a task can be described by the entity interactions that form a graph, and this graph abstraction helps remove irrelevant information such as textures, resulting in more robust reward functions. We evaluate our approach, GraphIRL, on cross-embodiment learning in X-MAGICAL and on learning from human demonstrations for real-robot manipulation. We show significant improvements in robustness to diverse video demonstrations over previous approaches, and even obtain better results than manual reward design on a real-robot pushing task. Videos are available at https://sateeshkumar21.github.io/graphirl.
We propose VRL3, a powerful data-driven framework with a simple design for solving challenging visual deep reinforcement learning (DRL) tasks. We analyze a number of major obstacles to taking a data-driven approach, and present a suite of design principles, novel findings, and critical insights about data-driven visual DRL. Our framework has three stages: in stage 1, we leverage non-RL datasets (e.g., ImageNet) to learn task-agnostic visual representations; in stage 2, we use offline RL data (e.g., a limited number of expert demonstrations) to convert the task-agnostic representations into more powerful task-specific representations; in stage 3, we fine-tune the agent with online RL. On a set of challenging hand-manipulation tasks with sparse rewards and realistic visual inputs, VRL3 achieves an average of 780% better sample efficiency than the previous SOTA. On the hardest task, VRL3 is 1220% more sample efficient (2440% when using a wider encoder) and solves the task using only 10% of the computation. These significant results clearly demonstrate the great potential of data-driven deep reinforcement learning.
Recent unsupervised pre-training methods have proven effective in the language and vision domains by learning representations useful for multiple downstream tasks. In this paper, we investigate whether such unsupervised pre-training methods can also be effective for vision-based reinforcement learning (RL). To this end, we introduce a framework that learns representations for understanding dynamics via generative pre-training on videos. Our framework consists of two phases: we pre-train an action-free latent video prediction model, and then utilize the pre-trained representations to efficiently learn action-conditional world models on unseen environments. To incorporate additional action inputs during fine-tuning, we introduce a new architecture that stacks an action-conditional latent prediction model on top of the pre-trained action-free prediction model. Moreover, for better exploration, we propose a video-based intrinsic bonus that leverages the pre-trained representations. We demonstrate that our framework significantly improves both the final performance and sample efficiency of vision-based RL in a variety of manipulation and locomotion tasks. Code is available at https://github.com/younggyoseo/apv.
While visual imitation learning offers one of the most effective ways of learning from visual demonstrations, generalizing from them requires either hundreds of diverse demonstrations, task-specific priors, or large, hard-to-train parametric models. One reason such complexity arises is that standard visual imitation frameworks try to solve two coupled problems at once: learning a succinct yet good representation from diverse visual data, while simultaneously learning to associate the demonstrated actions with such representations. This joint learning creates an interdependence between the two problems, which often results in needing a large number of demonstrations. To address this challenge, we propose decoupling representation learning from behavior learning for visual imitation. First, we learn a visual representation encoder from offline data using standard supervised and self-supervised learning methods. Once the representations are trained, we use non-parametric locally weighted regression to predict the actions. We show experimentally that this simple decoupling improves the performance of visual imitation models on both offline demonstration datasets and real-robot door opening, compared to prior work in visual imitation. All of our generated data, code, and robot videos are publicly available at https://jyopari.github.io/vinn/.
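A minimal numpy sketch of the non-parametric step, assuming demonstration frames and a query frame have already been embedded by the trained encoder (the kernel and neighborhood size are illustrative):

```python
import numpy as np

def locally_weighted_action(query_emb, demo_embs, demo_actions, k=5, temperature=1.0):
    """Predict an action as a distance-weighted average of the actions attached
    to the k nearest demonstration embeddings.

    `demo_embs` has shape (N, D), `demo_actions` has shape (N, A), and
    `query_emb` has shape (D,).
    """
    dists = np.linalg.norm(demo_embs - query_emb, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = np.exp(-dists[nearest] / temperature)
    weights /= weights.sum()
    return (weights[:, None] * demo_actions[nearest]).sum(axis=0)
```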
Learning from visual observations is a fundamental yet challenging problem in Reinforcement Learning (RL). Although algorithmic advances combined with convolutional neural networks have proved to be a recipe for success, current methods are still lacking on two fronts: (a) data-efficiency of learning and (b) generalization to new environments. To this end, we present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms. We perform the first extensive study of general data augmentations for RL on both pixel-based and state-based inputs, and introduce two new data augmentations: random translate and random amplitude scale. We show that augmentations such as random translate, crop, color jitter, patch cutout, random convolutions, and amplitude scale can enable simple RL algorithms to outperform complex state-of-the-art methods across common benchmarks. RAD sets a new state-of-the-art in terms of data-efficiency and final performance on the DeepMind Control Suite benchmark for pixel-based control as well as the OpenAI Gym benchmark for state-based control. We further demonstrate that RAD significantly improves test-time generalization over existing methods on several OpenAI ProcGen benchmarks. Our RAD module and training code are available at https://www.github.com/MishaLaskin/rad.
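As a hedged illustration of one augmentation named above, a pad-and-shift variant of random translate can be written in a few lines of numpy (the padding size is an assumption):

```python
import numpy as np

def random_translate(image, pad=8):
    """Randomly shift an (H, W, C) image by up to `pad` pixels in each direction
    by placing it at a random offset inside a zero-padded canvas and cropping
    the central region back out."""
    h, w, c = image.shape
    canvas = np.zeros((h + 2 * pad, w + 2 * pad, c), dtype=image.dtype)
    top = np.random.randint(0, 2 * pad + 1)
    left = np.random.randint(0, 2 * pad + 1)
    canvas[top:top + h, left:left + w] = image
    return canvas[pad:pad + h, pad:pad + w]
```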
Imitation learning holds tremendous promise for efficiently learning policies for complex decision-making problems. Current state-of-the-art algorithms often use inverse reinforcement learning (IRL), where, given a set of expert demonstrations, an agent alternately infers a reward function and the associated optimal policy. However, such IRL approaches often require substantial online interaction for complex control problems. In this work, we present Regularized Optimal Transport (ROT), a new imitation learning algorithm that builds on recent advances in optimal-transport-based trajectory matching. Our key technical insight is that adaptively combining trajectory-matching rewards with behavior cloning can significantly accelerate imitation, even with only a handful of demonstrations. Our experiments on 20 visual control tasks across the DeepMind Control Suite, OpenAI Robotics, and Meta-World benchmarks show substantially faster imitation on average, reaching 90% of expert performance sooner than prior state-of-the-art methods. On real-world robotic manipulation, with just one demonstration and an hour of online training, ROT achieves an average success rate of 90.1% across 14 tasks.
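A simplified sketch of combining the two signals; ROT's actual adaptive weighting is not reproduced here, a plain decay schedule stands in for it and all names are illustrative:

```python
def combined_actor_loss(rl_loss, bc_loss, step, bc_decay_steps=20000):
    """Mix a behavior-cloning term with the trajectory-matching RL objective.

    The linear decay below is a stand-in for the adaptive weighting described
    in the abstract: imitation dominates early, RL takes over later.
    """
    alpha = max(0.0, 1.0 - step / bc_decay_steps)
    return alpha * bc_loss + (1.0 - alpha) * rl_loss
```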
We present CURL: Contrastive Unsupervised Representations for Reinforcement Learning. CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features. CURL outperforms prior pixel-based methods, both model-based and model-free, on complex tasks in the DeepMind Control Suite and Atari Games, showing 1.9x and 1.2x performance gains at the 100K environment and interaction steps benchmarks respectively. On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features. Our code is open-sourced and available at https://www.github.com/MishaLaskin/curl.
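A hedged PyTorch sketch of the contrastive objective, assuming two augmented views of the same frames have already been encoded into anchors (query encoder) and positives (key encoder); the bilinear similarity and instance-discrimination labels follow the general InfoNCE recipe:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchors, positives, W):
    """InfoNCE-style loss: each anchor should score highest against its own
    positive among all positives in the batch.

    `anchors` and `positives` have shape (B, D); `W` is a learnable (D, D)
    bilinear similarity matrix.
    """
    logits = anchors @ W @ positives.t()                        # (B, B) similarities
    logits = logits - logits.max(dim=1, keepdim=True).values    # numerical stability
    labels = torch.arange(anchors.size(0), device=anchors.device)
    return F.cross_entropy(logits, labels)
```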
Deep visuomotor policy learning, which aims to map raw visual observations to actions, has achieved promising results in control tasks such as robotic manipulation and autonomous driving. However, it requires a large amount of online interaction with the training environment, which limits its real-world applications. Compared to the popular unsupervised feature learning used for visual recognition, feature pre-training for visuomotor control tasks is much less explored. In this work, we aim to pre-train policy representations for driving tasks by watching hours of uncurated YouTube videos. Specifically, we train an inverse dynamics model on a small amount of labeled data and use it to predict action labels for all the YouTube video frames. A new contrastive policy pre-training method is then developed to learn action-conditioned features from the video frames with pseudo action labels. Experiments show that the resulting action-conditioned features bring substantial improvements to downstream reinforcement learning and imitation learning tasks, outperforming weights pre-trained by previous unsupervised learning methods and ImageNet pre-trained weights. Code, model weights, and data are available at: https://metadriverse.github.io/aco.
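A minimal sketch of the pseudo-labeling component, assuming frames have been pre-encoded into feature vectors and a small labeled set of (frame, next frame, action) triplets is available; the network size is an assumption:

```python
import torch
import torch.nn as nn

class InverseDynamics(nn.Module):
    """Predict the action taken between two consecutive (pre-encoded) frames."""

    def __init__(self, feat_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, feat_t, feat_t1):
        return self.net(torch.cat([feat_t, feat_t1], dim=-1))

# After fitting on the small labeled set, unlabeled video frame pairs can be
# pseudo-labeled for the contrastive pre-training stage, e.g.:
# pseudo_actions = inverse_dynamics(video_feat_t, video_feat_t1).detach()
```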
The potential of offline reinforcement learning (RL) is that high-capacity models trained on large, heterogeneous datasets can lead to agents that generalize broadly, analogously to similar advances in vision and NLP. However, recent works argue that offline RL methods encounter unique challenges to scaling up model capacity. Drawing on the learnings from these works, we re-examine previous design choices and find that with appropriate choices: ResNets, cross-entropy based distributional backups, and feature normalization, offline Q-learning algorithms exhibit strong performance that scales with model capacity. Using multi-task Atari as a testbed for scaling and generalization, we train a single policy on 40 games with near-human performance using up to 80 million parameter networks, finding that model performance scales favorably with capacity. In contrast to prior work, we extrapolate beyond dataset performance even when trained entirely on a large (400M transitions) but highly suboptimal dataset (51% human-level performance). Compared to return-conditioned supervised approaches, offline Q-learning scales similarly with model capacity and has better performance, especially when the dataset is suboptimal. Finally, we show that offline Q-learning with a diverse dataset is sufficient to learn powerful representations that facilitate rapid transfer to novel games and fast online learning on new variations of a training game, improving over existing state-of-the-art representation learning approaches.
Recent work has shown that supervised learning alone, without temporal difference (TD) learning, can be remarkably effective for offline RL. When does this hold true, and which algorithmic components are necessary? Through extensive experiments, we boil supervised learning for offline RL down to its essential elements. In every environment suite we consider, simply maximizing likelihood with a two-layer feedforward MLP is competitive with substantially more complex methods based on TD learning or sequence modeling with Transformers. Carefully choosing model capacity (e.g., via regularization or architecture) and choosing which information to condition on (e.g., goals or rewards) are critical for performance. These insights serve as a practical guide for practitioners doing reinforcement learning via supervised learning (which we dub "RvS learning"). They also probe the limits of existing RvS methods, which are comparatively weak on random data, and raise a number of open problems.
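A hedged sketch of that recipe: concatenate the observation with the conditioning information (a goal or an outcome statistic), pass it through a two-layer MLP, and maximize the likelihood of the dataset actions (here a fixed-variance Gaussian likelihood, i.e. mean squared error; all sizes are illustrative):

```python
import torch
import torch.nn as nn

obs_dim, cond_dim, action_dim = 17, 3, 6   # illustrative dimensions
policy = nn.Sequential(
    nn.Linear(obs_dim + cond_dim, 256), nn.ReLU(),
    nn.Linear(256, action_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def rvs_update(obs, cond, action):
    """One maximum-likelihood step on (observation, goal/outcome, action) tuples."""
    pred = policy(torch.cat([obs, cond], dim=-1))
    loss = ((pred - action) ** 2).mean()   # fixed-variance Gaussian log-likelihood
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```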
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, resulting in predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.