Navigation is one of the most heavily studied problems in robotics, and is conventionally approached as a geometric mapping and planning problem. However, real-world navigation presents a complex set of physical challenges that defies simple geometric abstractions. Machine learning offers a promising way to go beyond geometry and conventional planning, allowing for navigational systems that make decisions based on actual prior experience. Such systems can reason about traversability in ways that go beyond geometry, accounting for the physical outcomes of their actions and exploiting patterns in real-world environments. They can also improve as more data is collected, potentially providing a powerful network effect. In this article, we present a general toolkit for experiential learning of robotic navigation skills that unifies several recent approaches, describe the underlying design principles, summarize experimental results from several of our recent papers, and discuss open problems and directions for future work.
Reinforcement learning can enable robots to navigate to distant goals while optimizing user-specified reward functions, including preferences for following lanes, staying on paved paths, or avoiding freshly mowed grass. However, online learning from trial-and-error for real-world robots is logistically challenging, and methods that can instead utilize existing datasets of robotic navigation data could be significantly more scalable and enable broader generalization. In this paper, we present ReViND, the first offline RL system for robotic navigation that can leverage previously collected data to optimize user-specified reward functions in the real world. We evaluate our system for off-road navigation using an existing navigation dataset, without any additional data collection or fine-tuning, and show that it can navigate to distant goals using only offline training on this dataset, exhibiting behaviors that qualitatively differ based on the user-specified reward function.
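As a hedged sketch of the kind of user-specified reward this abstract describes, the snippet below relabels an offline trajectory with a goal-reaching term plus a terrain-preference penalty. The data layout, field names, and weights are illustrative assumptions, not ReViND's actual implementation.

```python
import numpy as np

def relabel_rewards(trajectory, goal, terrain_penalty=0.5):
    """Relabel an offline trajectory with a user-specified reward.

    Assumed layout: `trajectory` is a sequence of (observation, info)
    pairs, where info["position"] is a 2-D position and
    info["on_grass"] flags an undesired terrain type.
    """
    rewards = []
    for _, info in trajectory:
        at_goal = np.linalg.norm(info["position"] - goal) < 1.0
        r = 0.0 if at_goal else -1.0            # -1 per step until the goal
        r -= terrain_penalty * float(info["on_grass"])  # user preference term
        rewards.append(r)
    return rewards
```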
Noisy sensing, imperfect control, and environmental changes are defining characteristics of many real-world robot tasks. The partially observable Markov decision process (POMDP) provides a principled mathematical framework for modeling and solving robot decision-making and control tasks under uncertainty. Over the past decade it has seen many successful applications, spanning localization and navigation, search and tracking, autonomous driving, multi-robot systems, manipulation, and human-robot interaction. This survey aims to bridge the gap between the development of POMDP models and algorithms at one end and their application to diverse robot decision-making tasks at the other. It analyzes the characteristics of these tasks and connects them with the mathematical and algorithmic properties of the POMDP framework for effective modeling and solution. For practitioners, the survey provides key task characteristics for deciding when and how to apply POMDPs to robot tasks successfully. For POMDP algorithm designers, the survey offers new insights into the unique challenges of applying POMDPs to robot systems and points to promising new directions for further research.
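For reference, the belief update at the heart of the POMDP framework can be written as follows, using common textbook notation (which may differ from the survey's own):

```latex
b'(s') = \eta \, O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a) \, b(s)
```

where $b$ is the current belief over states, $T$ the transition model, $O$ the observation model, and $\eta$ a normalizing constant; after taking action $a$ and receiving observation $o$, the agent plans over beliefs $b'$ rather than states.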
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
This work studies the image-goal navigation problem, which requires guiding a robot with noisy sensors and controls through real crowded environments. Recent fruitful approaches rely on deep reinforcement learning and learn navigation policies in simulation environments that are much simpler than the real world. Directly transferring these trained policies to real environments can be extremely challenging or even dangerous. We tackle this problem with a hierarchical navigation method composed of four decoupled modules. The first module maintains an obstacle map during robot navigation. The second periodically predicts a long-term goal on the real-time map. The third plans collision-free commands for navigating to the long-term goal, while the final module brings the robot properly close to the goal image. The four modules are developed separately to suit image-goal navigation in real crowded scenarios. In addition, the hierarchical decomposition decouples the learning of navigation goal planning, collision avoidance, and navigation-ending prediction, which reduces the search space during navigation training and helps improve generalization to previously unseen real scenes. We evaluate the method with a mobile robot in both a simulator and the real world. The results show that our method outperforms several navigation baselines and can successfully accomplish navigation tasks in these scenarios.
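A hedged sketch of how the four decoupled modules might be wired together in a single control-loop iteration; the module interfaces and method names here are assumptions for illustration, not the paper's actual code:

```python
def navigation_step(robot, goal_image, mapper, goal_predictor, planner, arrival):
    """One iteration of a four-module hierarchical navigation loop.

    Assumed interfaces, for illustration only:
      mapper          -- maintains an obstacle map from sensor input
      goal_predictor  -- predicts a long-term goal on the live map
      planner         -- plans collision-free commands toward that goal
      arrival         -- servoes the robot close to the goal image at the end
    """
    obs = robot.sense()
    obstacle_map = mapper.update(obs)                                # module 1
    long_term_goal = goal_predictor.predict(obstacle_map, goal_image)  # module 2
    if arrival.near_goal(obs, goal_image):                           # module 4
        command = arrival.final_approach(obs, goal_image)
    else:
        command = planner.plan(obstacle_map, long_term_goal)         # module 3
    robot.execute(command)
```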
Goal-conditioned policies for robotic navigation can be trained on large, unannotated datasets, providing good generalization to real-world settings. However, particularly in vision-based settings where specifying goals requires an image, this makes for an unnatural interface. Language provides a more convenient way to communicate with robots, but contemporary methods typically require expensive supervision in the form of trajectories annotated with language descriptions. We present a system for robotic navigation that enjoys the benefits of training on large unannotated trajectory datasets while still providing a high-level interface to the user. Instead of using a labeled instruction-following dataset, we show that such a system can be constructed entirely out of pre-trained models for navigation (ViNG), image-language association (CLIP), and language modeling (GPT-3), without requiring any fine-tuning or language-annotated robot data. We instantiate LM-Nav on a real-world mobile robot and demonstrate long-horizon navigation through complex outdoor environments from natural language instructions. For videos of our experiments, a code release, and an interactive Colab notebook that runs in your browser, please check out our project page: https://sites.google.com/view/lmnav
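A hedged sketch of the core association step: scoring parsed landmark phrases against the images at nodes of a topological graph with a CLIP-style model. The embedding functions below are stand-ins (assumptions), and the real system additionally uses GPT-3 to parse instructions into landmarks and ViNG to build and traverse the graph.

```python
import numpy as np

def landmark_node_scores(landmarks, node_images, embed_text, embed_image):
    """Score each (landmark, graph-node) pair by image-text similarity.

    embed_text / embed_image stand in for a CLIP-style model's encoders;
    each returns a 1-D embedding vector of the same dimension.
    """
    text_emb = np.stack([embed_text(l) for l in landmarks])      # (L, d)
    img_emb = np.stack([embed_image(im) for im in node_images])  # (N, d)
    text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)
    img_emb /= np.linalg.norm(img_emb, axis=1, keepdims=True)
    return text_emb @ img_emb.T  # (L, N) cosine similarities
```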
Geometric methods for open-world off-road navigation, which learn occupancy and metric maps, provide good generalization but can be brittle in outdoor environments that violate their assumptions (e.g., tall grass). Learning-based methods can learn collision-free behavior directly from raw observations, but are difficult to integrate with standard geometry-based pipelines. This creates an unfortunate conflict: either use learning and lose out on well-understood geometric navigation components, or do not use it, at the cost of extensively hand-tuned geometry-based cost maps. In this work, we reject this dichotomy by designing the learned and non-learned components so that they can be combined effectively in a self-supervised manner. Both components contribute to a planning criterion: the learned component contributes predicted traversability as rewards, while the geometric component contributes obstacle cost information. We instantiate and comparatively evaluate our system in both in-distribution and out-of-distribution outdoor environments, showing that this approach inherits complementary gains from the learned and geometric components and significantly outperforms either of them. Videos of our results are hosted at https://sites.google.com/view/hybrid-imitative-planning
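A hedged sketch of the combined planning criterion: candidate trajectories are scored by a learned traversability reward minus a geometric obstacle cost. The interfaces and the single weighting term are illustrative assumptions, not the paper's exact formulation.

```python
def trajectory_score(trajectory, learned_model, costmap, obstacle_weight=1.0):
    """Score a candidate trajectory with both components.

    learned_model.traversability -- predicted traversability reward (learned)
    costmap.cost                 -- obstacle cost at a pose (geometric)
    Higher scores are better; the planner picks the argmax trajectory.
    """
    reward = sum(learned_model.traversability(pose) for pose in trajectory)
    cost = sum(costmap.cost(pose) for pose in trajectory)
    return reward - obstacle_weight * cost
```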
With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework, now capable of learning complex policies in high-dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in the real-world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, and inverse reinforcement learning, which are related to but are not classical RL algorithms. The role of simulators in training agents, as well as methods to validate, test, and robustify existing solutions in RL, are also discussed.
Active simultaneous localization and mapping (SLAM) is the problem of planning and controlling a robot's motion to build the most accurate and complete model of its surrounding environment. Since the first foundational work on active perception appeared over three decades ago, the field has received increasing attention across different scientific communities. This has brought many different approaches and formulations, and a review of current trends is highly valuable for both new and experienced researchers. In this work, we survey the state of the art in active SLAM and take an in-depth look at the open challenges that still require attention to meet the needs of modern applications and achieve real-world deployment. After providing a historical perspective, we present a unified problem formulation and review the classical solution scheme, which decomposes the problem into three stages that identify, select, and execute potential navigation actions. We then analyze alternative approaches, including belief-space planning and modern techniques based on deep reinforcement learning, and review related work on multi-robot coordination. The manuscript concludes with a discussion of new research directions, addressing reproducible research, active spatial perception, and practical applications, among other topics.
Reinforcement learning can train policies that effectively perform complex tasks. However, for long-horizon tasks, the performance of these methods degrades with horizon, often necessitating reasoning over and composing lower-level skills. Hierarchical reinforcement learning aims to enable this by providing a bank of low-level skills as action abstractions. Hierarchies can further improve on this by abstracting the state space as well. We posit that a suitable state abstraction should depend on the capabilities of the available lower-level policies. We propose Value Function Spaces: a simple approach that produces such a representation by using the value functions corresponding to each lower-level skill. These value functions capture the affordances of the scene, thus forming a representation that compactly abstracts task-relevant information and robustly ignores distractors. Empirical evaluations on maze-solving and robotic manipulation tasks demonstrate that our approach improves long-horizon performance and enables better zero-shot generalization than alternative model-free and model-based methods.
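The representation itself is simple to state; below is a minimal sketch, assuming the per-skill value functions are given as callables (the function signature is an illustrative assumption):

```python
import numpy as np

def value_function_space(state, skill_value_fns):
    """Embed a state as the vector of its skill values.

    Each entry estimates how well one low-level skill can be executed
    from `state`, so the embedding summarizes affordances while
    ignoring task-irrelevant appearance changes.
    """
    return np.array([v(state) for v in skill_value_fns])
```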
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years; however, the field has been gaining attention recently due to advances in computing and sensing, as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations, without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction, such as humanoid robots, self-driving vehicles, human-computer interaction, and computer games, to name a few. However, specialized algorithms are needed to effectively and robustly learn models, as learning by imitation poses its own set of challenges. In this paper, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field, as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games, as these domains are the most popular in the literature and provide a wide array of problems and methodologies. We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications, and highlight current and future research directions.
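As a minimal concrete instance of the observation-to-action mapping the survey describes, here is a behavioral-cloning training step in PyTorch. The architecture, input/output dimensions, and MSE loss are generic placeholder choices, not specifics from the survey.

```python
import torch
import torch.nn as nn

# Placeholder dimensions: 32-D observations, 4-D continuous actions.
policy = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # for discrete actions, use cross-entropy instead

def bc_step(observations, expert_actions):
    """One supervised step: regress expert actions from observations."""
    optimizer.zero_grad()
    loss = loss_fn(policy(observations), expert_actions)
    loss.backward()
    optimizer.step()
    return loss.item()
```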
Deep reinforcement learning algorithms require large and diverse datasets in order to learn successful policies for perception-based mobile navigation. However, gathering such datasets with a single robot can be prohibitively expensive. Collecting data with multiple different robotic platforms, possibly with different dynamics, is a more scalable approach to large-scale data collection. But how can deep reinforcement learning algorithms utilize such heterogeneous datasets? In this work, we propose a deep reinforcement learning algorithm with hierarchically integrated models (HInt). At training time, HInt learns separate perception and dynamics models, and at test time, HInt integrates the two models in a hierarchical manner and plans actions with the integrated model. This method of planning with hierarchically integrated models allows the algorithm to train on datasets gathered by a variety of different platforms, while respecting the physical capabilities of the deployment robot at test time. Our mobile navigation experiments show that HInt outperforms conventional hierarchical policies and single-source approaches.
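A hedged sketch of hierarchical integration at test time: a shared perception model scores candidate paths, while a platform-specific dynamics model determines which paths the deployment robot can actually execute. The interfaces below are illustrative assumptions.

```python
def plan_with_integrated_model(obs, candidate_action_seqs, perception, dynamics):
    """Pick the action sequence whose predicted path scores best.

    perception -- trained on data from many platforms (shared)
    dynamics   -- specific to the deployment robot (respects its limits)
    """
    best_seq, best_score = None, float("-inf")
    for actions in candidate_action_seqs:
        path = dynamics.rollout(obs, actions)   # platform-specific prediction
        score = perception.score(obs, path)     # platform-agnostic evaluation
        if score > best_score:
            best_seq, best_score = actions, score
    return best_seq
```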
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher-level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.
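For reference, the deep Q-network bootstrapping target mentioned in the survey, sketched in PyTorch; the batch shapes and network are placeholder assumptions, while the target formula itself is the standard one.

```python
import torch

def dqn_targets(rewards, next_states, dones, target_net, gamma=0.99):
    """Compute y = r + gamma * max_a' Q_target(s', a') for a batch.

    rewards, dones -- 1-D tensors of shape (B,)
    next_states    -- tensor the target network accepts, yielding (B, A)
    Terminal transitions (dones == 1) bootstrap from zero.
    """
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
    return rewards + gamma * (1.0 - dones) * next_q
```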
This paper demonstrates how a single mechanism can construct representational layers directly from an agent's raw sensorimotor stream. This mechanism, the general value function (GVF) or "forecast," captures high-level, abstract knowledge as a set of predictions about existing features and knowledge, grounded exclusively in the agent's low-level senses and actions. Forecasts thus provide a representation for organizing raw sensor data into useful abstractions, through an unlimited number of layers, a long-sought goal of AI and cognitive science. The heart of this paper is a detailed thought experiment providing a concrete, step-by-step formal illustration of how an artificial agent can build true, useful, abstract knowledge from its raw sensorimotor experience. The knowledge is expressed as a set of layered predictions (forecasts) about the agent's observed consequences of its actions. The illustration spans twelve separate layers: the lowest consists of raw pixels, touch and force sensors, and a small number of actions; the higher layers increase in abstraction, eventually resulting in rich knowledge about the agent's world, corresponding to doorways, walls, rooms, and floor plans. I then argue that this general mechanism may allow the representation of a broad spectrum of everyday human knowledge.
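A hedged sketch of the learning rule behind a single forecast/GVF: a temporal-difference update toward the discounted sum of a cumulant signal. The linear function approximator and step sizes are illustrative choices, not the paper's specification.

```python
import numpy as np

def gvf_td_update(w, features, next_features, cumulant, gamma=0.9, alpha=0.1):
    """TD(0) update for one general value function.

    The GVF predicts the discounted sum of `cumulant`, which can be any
    raw sensor signal or a previously learned prediction, allowing
    forecasts to be stacked into higher layers of abstraction.
    """
    prediction = w @ features
    target = cumulant + gamma * (w @ next_features)
    return w + alpha * (target - prediction) * features
```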
Large language models can encode a wealth of semantic knowledge about the world. Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language. However, a significant weakness of language models is that they lack real-world experience, which makes it difficult to leverage them for decision making within a given embodiment. For example, asking a language model to describe how to clean a spill might result in a reasonable narrative, but it may not be applicable to a particular agent, such as a robot, that needs to perform this task in a particular environment. We propose to provide real-world grounding via pretrained skills, which are used to constrain the model to propose natural language actions that are both feasible and contextually appropriate. The robot can act as the language model's "hands and eyes," while the language model supplies high-level semantic knowledge about the task. We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment. We evaluate our method on a number of real-world robotic tasks, where we show the need for real-world grounding and that this approach is capable of completing long-horizon, abstract, natural language instructions on a mobile manipulator. The project's website and videos can be found at https://say-can.github.io/
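A hedged sketch of the skill-selection rule this abstract describes: combine the language model's score for each skill with that skill's value function (its affordance) in the current state. The function names and signatures are illustrative assumptions.

```python
def select_skill(instruction, state, skills, llm_score, value_fn):
    """Pick the skill maximizing p_LLM(skill | instruction) * V_skill(state).

    llm_score -- language model's score for the skill's text description
                 given the instruction (semantic usefulness)
    value_fn  -- estimated probability the skill succeeds from `state`
                 (real-world grounding)
    """
    return max(skills, key=lambda s: llm_score(instruction, s) * value_fn(s, state))
```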
Learning long-horizon tasks such as navigation has presented difficult challenges for successfully applying reinforcement learning. However, from another perspective, under a known environment model, methods such as sampling-based planning can robustly find collision-free paths in environments without learning. In this work, we propose Control Transformer which models return-conditioned sequences from low-level policies guided by a sampling-based Probabilistic Roadmap (PRM) planner. Once trained, we demonstrate that our framework can solve long-horizon navigation tasks using only local information. We evaluate our approach on partially-observed maze navigation with MuJoCo robots, including Ant, Point, and Humanoid, and show that Control Transformer can successfully navigate large mazes and generalize to new, unknown environments. Additionally, we apply our method to a differential drive robot (Turtlebot3) and show zero-shot sim2real transfer under noisy observations.
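A hedged sketch of one way PRM guidance might be packaged into return-conditioned training sequences, in the style of decision transformers; the token layout and interfaces are assumptions for illustration, not necessarily the paper's exact format.

```python
def build_training_sequence(waypoints, rollout, returns_to_go):
    """Interleave (return-to-go, observation, action) triples.

    waypoints     -- local subgoals along a PRM-planned path
    rollout       -- (observation, action) pairs from the low-level policy
    returns_to_go -- remaining return at each step, for conditioning
    """
    sequence = []
    for (obs, act), rtg, wp in zip(rollout, returns_to_go, waypoints):
        step_obs = {"obs": obs, "subgoal": wp}  # local information only
        sequence.extend([rtg, step_obs, act])
    return sequence
```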
The last decade witnessed increasingly rapid progress in self-driving vehicle technology, mainly backed by advances in the area of deep learning and artificial intelligence. The objective of this paper is to survey the current state-of-the-art on deep learning technologies used in autonomous driving. We start by presenting AI-based self-driving architectures, convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm. These methodologies form a base for the surveyed driving scene perception, path planning, behavior arbitration and motion control algorithms. We investigate both the modular perception-planning-action pipeline, where each module is built using deep learning methods, as well as End2End systems, which directly map sensory information to steering commands. Additionally, we tackle current challenges encountered in designing AI architectures for autonomous driving, such as their safety, training data sources and computational hardware. The comparison presented in this survey helps to gain insight into the strengths and limitations of deep learning and AI approaches for autonomous driving and to assist with design choices.
Everyday tasks that are long-horizon and comprise a sequence of implicit subtasks still pose a major challenge in offline robot control. While a number of prior methods aimed to address this setting with variants of imitation learning and offline reinforcement learning, the learned behaviors are typically narrow and often struggle to reach configurable long-horizon goals. As both paradigms have complementary strengths and weaknesses, we propose a novel hierarchical approach that combines the strengths of both methods to learn task-agnostic long-horizon policies from high-dimensional camera observations. Concretely, we combine a low-level policy that learns latent skills via imitation learning with a high-level policy learned from offline reinforcement learning for skill chaining over the latent behavior priors. Experiments on various simulated and real robot control tasks show that our formulation enables previously unseen combinations of skills to reach temporally extended goals by "stitching" together latent skills through goal chaining, and improves performance by an order of magnitude over state-of-the-art baselines. We even learn a multi-task visuomotor policy for 25 distinct manipulation tasks in the real world, which outperforms both imitation learning and offline reinforcement learning techniques.
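A hedged sketch of the two-level execution described here: a high-level policy (trained with offline RL) emits a latent skill, which the imitation-learned low-level policy decodes into actions. The environment and policy interfaces are illustrative assumptions.

```python
def hierarchical_rollout(env, high_policy, low_policy, goal, skill_horizon=16):
    """Chain latent skills toward a long-horizon goal.

    Assumed env API: reset() -> obs, step(action) -> (obs, done).
    """
    obs = env.reset()
    done = False
    while not done:
        latent_skill = high_policy(obs, goal)        # offline RL picks a skill
        for _ in range(skill_horizon):               # low level decodes it
            action = low_policy(obs, latent_skill)   # imitation-learned prior
            obs, done = env.step(action)
            if done:
                break
    return obs
```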
Operating in the real world often requires agents to learn about complex environments and apply this understanding to achieve a breadth of goals. This problem, known as goal-conditioned reinforcement learning (GCRL), becomes especially challenging for long-horizon goals. Current methods have tackled it by augmenting goal-conditioned policies with graph-based planning algorithms. However, they struggle to scale to large, high-dimensional state spaces and to employ exploration mechanisms that efficiently collect training data. In this work, we introduce Successor Feature Landmarks (SFL), a framework for exploring large, high-dimensional environments so as to obtain a policy proficient for any goal. SFL leverages the ability of successor features (SF) to capture transition dynamics, using them to drive exploration by estimating state novelty and to enable high-level planning by abstracting the state space as a non-parametric landmark-based graph. We further exploit SF to directly compute a goal-conditioned policy for landmark traversal, which we use to execute plans to "frontier" landmarks at the edge of the explored state space. Our experiments on MiniGrid and ViZDoom show that SFL enables efficient exploration of large, high-dimensional state spaces and outperforms state-of-the-art baselines on long-horizon GCRL tasks.
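A hedged sketch of the novelty signal used to drive exploration: a state whose successor features lie far from those of all known landmarks is considered novel. The nearest-neighbor Euclidean distance here is an illustrative choice, not necessarily the paper's exact measure.

```python
import numpy as np

def sf_novelty(state_sf, landmark_sfs):
    """Novelty of a state as distance from its nearest landmark.

    state_sf     -- successor features psi(s) = E[sum_t gamma^t phi(s_t)]
    landmark_sfs -- list of successor features of landmarks collected so far
    """
    if len(landmark_sfs) == 0:
        return np.inf  # everything is novel before any landmarks exist
    return min(np.linalg.norm(state_sf - l) for l in landmark_sfs)
```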
The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.