Semantic navigation is necessary to deploy mobile robots in uncontrolled environments like our homes, schools, and hospitals. Many learning-based approaches have been proposed in response to the lack of semantic understanding of the classical pipeline for spatial navigation, which builds a geometric map using depth sensors and plans to reach point goals. Broadly, end-to-end learning approaches reactively map sensor inputs to actions with deep neural networks, while modular learning approaches enrich the classical pipeline with learning-based semantic sensing and exploration. But learned visual navigation policies have predominantly been evaluated in simulation. How well do different classes of methods work on a robot? We present a large-scale empirical study of semantic visual navigation methods comparing representative methods from classical, modular, and end-to-end learning approaches across six homes with no prior experience, maps, or instrumentation. We find that modular learning works well in the real world, attaining a 90% success rate. In contrast, end-to-end learning does not, dropping from 77% simulation to 23% real-world success rate due to a large image domain gap between simulation and reality. For practitioners, we show that modular learning is a reliable approach to navigate to objects: modularity and abstraction in policy design enable Sim-to-Real transfer. For researchers, we identify two key issues that prevent today's simulators from being reliable evaluation benchmarks - (A) a large Sim-to-Real gap in images and (B) a disconnect between simulation and real-world error modes - and propose concrete steps forward.
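To make the modular recipe concrete, below is a minimal sketch (not the study's implementation) of its exploration stage: frontier detection over a top-down occupancy grid. The cell encoding (0 = unknown, 1 = free, 2 = obstacle) is an assumption for illustration; in a full modular pipeline, a learned semantic sensor fills the map and a planner drives toward the selected frontier.

```python
import numpy as np

def find_frontiers(grid: np.ndarray) -> np.ndarray:
    """Return (row, col) cells that are free and border unknown space."""
    free = grid == 1
    unknown = grid == 0
    pad = np.pad(unknown, 1, constant_values=False)
    # A free cell is a frontier if any 4-neighbor is unknown.
    neighbor_unknown = (pad[:-2, 1:-1] | pad[2:, 1:-1] |
                        pad[1:-1, :-2] | pad[1:-1, 2:])
    return np.argwhere(free & neighbor_unknown)

grid = np.zeros((8, 8), dtype=int)   # everything unknown at first
grid[2:6, 2:6] = 1                   # explored free space
grid[4, 4] = 2                       # a mapped obstacle
print(find_frontiers(grid))          # candidate exploration goals
```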
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
A household robot should be able to navigate to target locations without requiring users to first annotate everything in their home. Current approaches to this object navigation challenge do not test on real robots and rely on expensive semantically labeled 3D meshes. In this work, our aim is an agent that builds self-supervised models of the world via exploration, much as a child might. We propose an end-to-end self-supervised embodied agent that leverages exploration to train a semantic segmentation model of 3D objects, and uses those representations to learn an object navigation policy purely from self-labeled 3D meshes. The key insight is that embodied agents can leverage location consistency as a supervision signal: collecting images from different views and angles and applying contrastive learning to fine-tune a semantic segmentation model. In our experiments, we observe that our framework performs better than other self-supervised baselines and competitively with supervised baselines, both in simulation and when deployed in real houses.
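The location-consistency signal can be made concrete with a small sketch: an InfoNCE-style contrastive loss in PyTorch, where row i of both inputs embeds pixels that view the same 3D point from two viewpoints. The pairing step (projecting both views into a shared world frame) and the shapes are assumptions; this is illustrative, not the paper's code.

```python
import torch
import torch.nn.functional as F

def location_consistency_loss(feat_a, feat_b, temperature=0.07):
    """InfoNCE loss: row i of feat_a and feat_b embed pixels observing the
    same 3D location from two viewpoints (positives); all other rows in the
    batch act as negatives."""
    feat_a = F.normalize(feat_a, dim=1)
    feat_b = F.normalize(feat_b, dim=1)
    logits = feat_a @ feat_b.t() / temperature             # (N, N) similarities
    labels = torch.arange(feat_a.size(0), device=feat_a.device)
    return F.cross_entropy(logits, labels)                 # positives on diagonal

loss = location_consistency_loss(torch.randn(32, 128), torch.randn(32, 128))
print(loss.item())
```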
This work studies the image-goal navigation problem, which requires guiding a robot with noisy sensors and controls through real, crowded environments. Recent fruitful approaches rely on deep reinforcement learning and learn navigation policies in simulated environments that are much simpler than real ones. Directly transferring these trained policies to real environments can be extremely challenging or even dangerous. We tackle this problem with a hierarchical navigation method composed of four decoupled modules. The first module maintains an obstacle map during robot navigation. The second periodically predicts a long-term goal on the real-time map. The third plans collision-free commands to navigate to the long-term goal, while the final module moves the robot properly close to the goal image. The four modules are developed separately to suit image-goal navigation in real crowded scenarios. In addition, the hierarchical decomposition decouples the learning of navigation goal planning, collision avoidance, and navigation-ending prediction, which reduces the search space during navigation training and helps improve generalization to previously unseen real scenes. We evaluate the method with a mobile robot both in a simulator and in the real world. The results show that our method outperforms several navigation baselines and can successfully accomplish navigation tasks in these scenarios.
Efficient ObjectGoal navigation (ObjectNav) in novel environments requires an understanding of the spatial and semantic regularities in environment layouts. In this work, we present a straightforward method for learning these regularities by predicting the locations of unobserved objects from incomplete semantic maps. Our method differs from previous prediction-based navigation methods, such as frontier potential prediction or egocentric map completion, by directly predicting unseen targets while leveraging the global context from all previously explored areas. Our prediction model is lightweight and can be trained in a supervised manner using a relatively small amount of passively collected data. Once trained, the model can be incorporated into a modular pipeline for ObjectNav without the need for any reinforcement learning. We validate the effectiveness of our method on the HM3D and MP3D ObjectNav datasets. We find that it achieves the state-of-the-art on both datasets, despite not using any additional data for training.
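A minimal sketch of the prediction idea, with assumed shapes and an illustrative (not the paper's) architecture: a small fully-convolutional network maps an incomplete top-down semantic map to per-cell target logits, and the most probable cell becomes the navigation goal handed to the planner.

```python
import torch
import torch.nn as nn

class TargetPredictor(nn.Module):
    """Predicts per-category presence logits for every map cell."""
    def __init__(self, num_categories: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_categories, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_categories, 1),   # per-category logits per cell
        )

    def forward(self, semantic_map):            # (B, C, H, W) partial map
        return self.net(semantic_map)           # (B, C, H, W) logits

model = TargetPredictor(num_categories=16)
partial_map = torch.rand(1, 16, 64, 64)          # explored cells only
logits = model(partial_map)
target = 3                                       # hypothetical "chair" channel
goal = logits[0, target].flatten().argmax()
row, col = divmod(goal.item(), 64)               # goal cell for the planner
print(row, col)
```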
Recent approaches to object goal navigation rely on reinforcement learning and typically require significant computational resources and time to learn. We propose Potential functions for ObjectGoal Navigation with Interaction-free learning (PONI), a modular approach that disentangles the skills of "where to look?" for an object and "how to navigate to (x, y)?". Our key insight is that "where to look?" can be treated purely as a perception problem and learned without environment interaction. To address it, we propose a network that predicts two complementary potential functions conditioned on a semantic map and uses them to decide where to look for an unseen object. We train the potential function network using supervised learning on a passive dataset of top-down semantic maps and integrate it into a modular framework to perform object goal navigation. Experiments on Gibson and Matterport3D show that our method achieves state-of-the-art results for object goal navigation while reducing training computational cost by up to 1,600x. Code and pre-trained models are available: https://vision.cs.utexas.edu/projects/poni/
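As an illustration of how two such potentials might be combined, here is a minimal numpy sketch; the potential maps, frontier mask, and weighting `alpha` are stand-ins for illustration (the paper predicts the potentials with a supervised network).

```python
import numpy as np

def select_long_term_goal(area_potential, object_potential, frontier_mask,
                          alpha=0.5):
    """Pick the frontier cell maximizing a weighted sum of an area potential
    (unexplored area beyond the cell) and an object potential (expected
    closeness of the target object)."""
    combined = alpha * area_potential + (1 - alpha) * object_potential
    combined = np.where(frontier_mask, combined, -np.inf)  # frontiers only
    return np.unravel_index(np.argmax(combined), combined.shape)

H, W = 64, 64
area = np.random.rand(H, W)
obj = np.random.rand(H, W)
frontiers = np.zeros((H, W), dtype=bool)
frontiers[10, 20] = frontiers[40, 50] = True
print(select_long_term_goal(area, obj, frontiers))
```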
There has been an emerging paradigm shift in AI algorithms and agents from the era of "Internet AI" to the era of "Embodied AI", in which agents no longer learn from datasets of images, videos, or text curated primarily from the Internet. Instead, they learn through interaction with their environments via egocentric perception, similar to humans. Consequently, demand for embodied AI simulators supporting a variety of embodied AI research tasks has grown substantially. This growing interest in embodied AI benefits the greater pursuit of artificial general intelligence (AGI), but the field has lacked a contemporary and comprehensive survey. This paper aims to provide an encyclopedic survey of the field of embodied AI, from its simulators to its research. By evaluating nine current embodied AI simulators against seven proposed features, this paper seeks to understand the simulators' suitability for embodied AI research and their limitations. Finally, the paper surveys the three main research tasks in embodied AI - visual exploration, visual navigation, and embodied question answering (QA) - covering state-of-the-art methods, evaluation metrics, and datasets. Drawing on the new insights gained from surveying the field, the paper provides suggestions for simulator-for-task selection and recommendations on future directions for the field.
Visual navigation of mobile robots has classically been tackled through SLAM plus optimal planning, and more recently through end-to-end training of policies implemented as deep networks. While the former are often limited to waypoint planning and have proven their efficiency even in real physical environments, the latter are most frequently used in simulation only, but have been shown capable of learning more complex visual reasoning involving complex semantic regularities. Navigation of real robots in physical environments remains an open problem. End-to-end training approaches have been thoroughly tested only in simulation, with experiments involving real robots limited to rare performance evaluations under simplified laboratory conditions. In this work, we present an in-depth study of the performance and reasoning capabilities of real physical agents, trained in simulation and deployed in two different physical environments. Beyond benchmarking, we provide insights into the generalization capabilities of agents trained under different conditions. We visualize sensor usage and the importance of the different types of signals. We show that, for the PointGoal task, an agent pre-trained on a wide variety of tasks and fine-tuned on a simulated version of the target environment can reach competitive performance without modeling any Sim2Real transfer, i.e., by deploying the trained agent directly from simulation onto a real physical robot.
Training embodied agents in simulation has become mainstream for the embodied AI community. However, these agents often struggle when deployed in the physical world due to their inability to generalize to real-world environments. In this paper, we present Phone2Proc, a method that uses a 10-minute phone scan and conditional procedural generation to create a distribution of training scenes that are semantically similar to the target environment. The generated scenes are conditioned on the wall layout and arrangement of large objects from the scan, while also sampling lighting, clutter, surface textures, and instances of smaller objects with randomized placement and materials. Leveraging just a simple RGB camera, training with Phone2Proc shows massive improvements from 34.7% to 70.7% success rate in sim-to-real ObjectNav performance across a test suite of over 200 trials in diverse real-world environments, including homes, offices, and RoboTHOR. Furthermore, Phone2Proc's diverse distribution of generated scenes makes agents remarkably robust to changes in the real world, such as human movement, object rearrangement, lighting changes, or clutter.
We present Habitat, a platform for research in embodied artificial intelligence (AI). Habitat enables training embodied agents (virtual robots) in highly efficient photorealistic 3D simulation. Specifically, Habitat consists of: (i) Habitat-Sim: a flexible, high-performance 3D simulator with configurable agents, sensors, and generic 3D dataset handling. Habitat-Sim is fast: when rendering a scene from Matterport3D, it achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU. (ii) Habitat-API: a modular high-level library for end-to-end development of embodied AI algorithms: defining tasks (e.g. navigation, instruction following, question answering), configuring, training, and benchmarking embodied agents. These large-scale engineering contributions enable us to answer scientific questions requiring experiments that were till now impracticable or 'merely' impractical. Specifically, in the context of point-goal navigation: (1) we revisit the comparison between learning and SLAM approaches from two recent works [20,16] and find evidence for the opposite conclusion: that learning outperforms SLAM if scaled to an order of magnitude more experience than previous investigations, and (2) we conduct the first cross-dataset generalization experiments {train, test} × {Matterport3D, Gibson} for multiple sensors {blind, RGB, RGBD, D} and find that only agents with depth (D) sensors generalize across datasets. We hope that our open-source platform and these findings will advance research in embodied AI.
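A minimal usage sketch of Habitat-API (now habitat-lab): load a PointNav task config and roll out one episode with random actions. The config path is an assumption; it varies across habitat-lab releases.

```python
import habitat

# Config path is an assumption; it differs across habitat-lab versions.
config = habitat.get_config("configs/tasks/pointnav.yaml")
env = habitat.Env(config=config)

observations = env.reset()
while not env.episode_over:
    action = env.action_space.sample()   # stand-in for a trained policy
    observations = env.step(action)

print(env.get_metrics())                 # e.g. distance_to_goal, SPL
env.close()
```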
The object goal navigation (ObjectNav) task is to navigate an agent to an object category in unseen environments without a pre-built map. In this paper, we address this task by predicting the distance to the target using semantically related objects as cues. Based on the estimated distances to the target object, our method directly selects optimal mid-term goals that are more likely to lie on shorter paths to the target. Specifically, based on learned knowledge, our model takes a bird's-eye-view semantic map as input and estimates the path length from frontier map cells to the target object. With the estimated distance map, the agent can simultaneously explore the environment and navigate to the target object following a simple human-designed strategy. Empirical results in visually realistic simulation environments show that the proposed method outperforms a broad range of baselines in both success rate and efficiency. A real-robot experiment also shows that our method generalizes well to the real world. Video: https://www.youtube.com/watch?v=r79pwvgfks4
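The mid-term goal rule above can be sketched in a few lines, with assumed inputs: given per-cell estimated path lengths to the target (predicted from the bird's-eye semantic map in the paper, random here) and a frontier mask, pick the frontier with the shortest estimated distance.

```python
import numpy as np

def pick_mid_term_goal(estimated_distance, frontier_mask):
    """Choose the frontier cell with the smallest estimated path length
    to the target object."""
    dist = np.where(frontier_mask, estimated_distance, np.inf)
    return np.unravel_index(np.argmin(dist), dist.shape)

est = np.random.rand(64, 64) * 20.0          # meters, hypothetical predictions
frontiers = np.zeros((64, 64), dtype=bool)
frontiers[5, 30] = frontiers[50, 12] = True
print(pick_mid_term_goal(est, frontiers))
```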
In recent years, several learning approaches to point goal navigation in previously unseen environments have been proposed. They vary in their representations of the environment, problem decomposition, and experimental evaluation. In this work, we compare state-of-the-art deep reinforcement learning approaches with a Partially Observable Markov Decision Process (POMDP) formulation of the point goal navigation problem. We adapt the POMDP sub-goal framework proposed by [1] and modify the component that estimates frontier properties to use partial semantic maps of indoor scenes built from semantic segmentation of images. In addition to the well-known completeness of the model-based approach, we demonstrate that it is robust and efficient in that it leverages informative, learned properties of the frontiers, in contrast to an optimistic frontier-based planner. We also demonstrate its data efficiency compared to end-to-end deep reinforcement learning approaches. We compare our results against an optimistic planner, ANS, and DD-PPO on the Matterport3D dataset using the Habitat Simulator. We show comparable, though slightly worse, performance than the SOTA DD-PPO approach, yet with far less data.
Object goal navigation (ObjectNav) in unseen environments is a fundamental task for Embodied AI. Agents in existing works learn ObjectNav policies based on 2D maps, scene graphs, or image sequences. Considering that this task takes place in 3D space, a 3D-aware agent can advance its ObjectNav capability by learning from fine-grained spatial information. However, leveraging 3D scene representations can be prohibitively impractical for policy learning in this floor-level task, due to low sample efficiency and expensive computational cost. In this work, we propose a framework for the challenging task of 3D-aware ObjectNav based on two straightforward sub-policies. The two sub-policies, namely a corner-guided exploration policy and a category-aware identification policy, operate simultaneously, using online-fused 3D points as observations. Through extensive experiments, we show that this framework can dramatically improve ObjectNav performance by learning from 3D scene representations. Our framework achieves the best performance among all modular methods on the Matterport3D and Gibson datasets, while requiring up to 30x less computational cost for training.
We introduce Habitat 2.0 (H2.0), a simulation platform for training virtual robots in interactive 3D environments and complex, physics-enabled scenarios. We make comprehensive contributions to all levels of the embodied AI stack - data, simulation, and benchmark tasks. Specifically, we present: (i) ReplicaCAD: an artist-authored, annotated, reconfigurable 3D dataset of apartments (matching real spaces) with articulated objects (e.g., cabinets and drawers that can open/close); (ii) H2.0: a high-performance physics-enabled 3D simulator with speeds exceeding 25,000 simulation steps per second (850x real-time) on an 8-GPU node, representing a 100x speedup over prior work; and (iii) the Home Assistant Benchmark (HAB): a suite of common tasks for assistive robots (tidying the house, preparing groceries, setting the table) that tests a range of mobile manipulation capabilities. These large-scale engineering contributions allow us to systematically compare large-scale reinforcement learning (RL) and classical sense-plan-act (SPA) pipelines on long-horizon structured tasks, with an emphasis on generalization to new objects, receptacles, and layouts. We find that (1) flat RL policies struggle on HAB compared to hierarchical ones; (2) hierarchies with independent skills suffer from "hand-off problems"; and (3) SPA pipelines are more brittle than RL policies.
The way an object looks and sounds provides complementary reflections of its physical properties. In many settings, cues from vision and audition arrive asynchronously but must be integrated, as when we hear an object dropped on the floor and must then find it. In this paper, we introduce a setting for studying multi-modal object localization in 3D virtual environments. An object is dropped somewhere in a room. An embodied robot agent equipped with a camera and microphone must determine what object was dropped, and where, by combining audio and visual signals with knowledge of the underlying physics. To study this problem, we generated a large-scale dataset - the Fallen Objects dataset - that includes 8,000 instances of 30 physical object categories across 64 rooms. The dataset uses the ThreeDWorld platform, which can simulate physics-based impact sounds and complex physical interactions between objects in photorealistic settings. As a first step toward addressing this challenge, we develop a set of embodied agent baselines based on imitation learning, reinforcement learning, and modular planning, and perform an in-depth analysis of the challenges of this new task.
In this paper, we focus on the problem of learning online an optimal policy for active visual search (AVS) of objects in unknown indoor environments. We propose POMP++, a planning strategy that introduces a novel formulation on top of the classic Partially Observable Monte Carlo Planning (POMCP) framework to allow training-free, online policy learning in unknown environments. We present a new belief reinvigoration strategy that allows POMCP to be used with a dynamically expanding state space, addressing the online generation of the floor map. We evaluate our method on two public benchmark datasets: AVD, acquired by a real robotic platform, and Habitat ObjectNav, rendered from real 3D scene scans, achieving the best success rate with an improvement of more than 10% over state-of-the-art methods.
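A minimal sketch, under assumptions, of the belief-reinvigoration idea above: the belief over the target location is held as a particle set, and when online mapping reveals new floor cells, a fraction of particles is re-seeded there so POMCP can search over the grown state space. The POMCP tree search and observation model are omitted; the fraction and cell representation are illustrative choices.

```python
import random

def reinvigorate(particles, newly_mapped_cells, fraction=0.2):
    """Replace a fraction of belief particles with ones drawn uniformly
    from newly discovered map cells."""
    if not newly_mapped_cells:
        return particles
    n_new = max(1, int(fraction * len(particles)))
    survivors = random.sample(particles, len(particles) - n_new)
    injected = [random.choice(newly_mapped_cells) for _ in range(n_new)]
    return survivors + injected

belief = [(1, 1)] * 100                        # all mass on one known cell
belief = reinvigorate(belief, newly_mapped_cells=[(4, 7), (4, 8), (5, 7)])
print(len(belief), belief.count((1, 1)))       # mass spread onto new cells
```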
Can an autonomous agent navigate in a new environment without building an explicit map? For the task of PointGoal navigation ("go to $\Delta x$, $\Delta y$"), the answer under idealized settings (no RGB-D or actuation noise, perfect GPS+Compass) is a clear "yes": map-less neural models composed of task-agnostic components (CNNs and RNNs), trained with large-scale reinforcement learning, achieve 100% success on a standard dataset (Gibson). However, for PointNav in a realistic setting (RGB-D and actuation noise, no GPS+Compass), this is an open question, and one we tackle in this paper. The strongest published result for this task is 71.7% success. First, we identify the main cause of the performance drop: the absence of GPS+Compass. An agent with perfect GPS+Compass, facing RGB-D sensing and actuation noise, achieves 99.8% success (Gibson-v2 val). This suggests that (to paraphrase a meme) robust visual odometry is all we need for realistic PointNav; if we can achieve that, we can ignore the sensing and actuation noise. With that as our operating hypothesis, we scale dataset and model size, and develop human-annotation-free data-augmentation techniques to train models for visual odometry. We advance the state of the art on the Habitat Realistic PointNav challenge from 71% to 94% success (+23, 31% relative) and from 53% to 74% SPL (+21, 40% relative). While our approach does not saturate or "solve" the dataset, this strong improvement, combined with promising zero-shot Sim2Real transfer (to a LoCoBot), provides evidence consistent with the hypothesis that explicit mapping may not be needed even for realistic PointNav.
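Why visual odometry can stand in for GPS+Compass is easy to see in code: composing per-step egomotion estimates (dx, dy, dtheta) yields the pose that GPS+Compass would report. A minimal SE(2) dead-reckoning sketch, with hand-written deltas standing in for a learned odometry model:

```python
import numpy as np

def compose(pose, delta):
    """Compose an SE(2) pose (x, y, theta) with a body-frame motion delta."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

pose = (0.0, 0.0, 0.0)
# Hand-written (dx, dy, dtheta) steps; a learned VO model would predict
# these from consecutive RGB-D frames.
for delta in [(0.25, 0.0, 0.0), (0.25, 0.0, np.pi / 6), (0.25, 0.0, 0.0)]:
    pose = compose(pose, delta)
print(pose)  # dead-reckoned pose replacing GPS+Compass
```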
We introduce a goal-driven navigation system to improve map-less visual navigation in indoor scenes. At each time step, our method takes multi-view observations of the robot and the goal as input and produces a sequence of actions that move the robot to the goal without relying on a map at runtime. The system is learned by optimizing a combined objective with three key designs. First, we propose that the agent conceive its next observation before making an action decision, achieved by learning a variational generative module from expert demonstrations. Second, we propose predicting static collisions in advance as an auxiliary task to improve safety during navigation. Moreover, to alleviate the training-data imbalance problem of termination-action prediction, we introduce a target-checking module to decide termination, in contrast to augmenting the navigation policy with a termination action. The three proposed designs all contribute to improved training data efficiency, static collision avoidance, and navigation generalization performance, yielding a novel goal-driven map-less navigation system. Through experiments on a TurtleBot, we provide evidence that our model can be integrated into a robotic system and navigate in the real world. Videos and models can be found in the supplementary material.
Deep reinforcement learning has achieved great success in laser-based collision avoidance, because a laser senses accurate depth information without much redundant data, which preserves the robustness of the algorithm when it is migrated from simulated environments to the real world. However, high-cost laser devices are not only difficult to deploy across large fleets of robots, but also show poor robustness to complex obstacles, including irregular obstacles such as tables, chairs, and shelves, as well as complex ground surfaces and special materials. In this paper, we propose a novel monocular-camera-based framework for avoiding complex obstacles. In particular, we innovatively transform captured RGB images into pseudo-laser measurements for efficient deep reinforcement learning. Compared with a traditional laser measurement captured at a fixed height, which contains only one-dimensional distance information to nearby obstacles, our proposed pseudo-laser measurement fuses the depth and semantic information of the captured RGB image, which makes our method effective for complex obstacles. We also design a feature-extraction guidance module to weight the input pseudo-laser measurement so that the agent attends more reasonably to the current state, which helps improve the accuracy and efficiency of the obstacle avoidance policy.
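A minimal sketch of the pseudo-laser conversion, with assumed shapes: instead of a laser slice at a single height, take the column-wise minimum of the depth image restricted to pixels the segmentation marks as obstacles, so overhanging surfaces (e.g., a table top) shorten the measured range in their columns. This illustrates the fusion idea, not the paper's exact pipeline.

```python
import numpy as np

def pseudo_laser(depth, obstacle_mask, max_range=10.0):
    """depth: (H, W) meters; obstacle_mask: (H, W) bool from semantic
    segmentation. Returns a (W,) range per image column."""
    d = np.where(obstacle_mask, depth, max_range)
    return d.min(axis=0)                       # nearest obstacle per column

H, W = 48, 64
depth = np.full((H, W), 10.0)
depth[10:20, 30:40] = 1.5                      # a table top ahead
mask = np.zeros((H, W), dtype=bool)
mask[10:20, 30:40] = True                      # segmentation flags it
scan = pseudo_laser(depth, mask)
print(scan[28:42])                             # shorter ranges at the table
```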
Natural language provides an accessible and expressive interface for specifying long-term tasks for robotic agents. However, non-experts are likely to specify such tasks with high-level instructions that abstract over specific robot actions through several layers of abstraction. We propose that the key to bridging this gap between language and robot actions over long execution horizons is persistent representations. We present a persistent spatial semantic representation method and show how it enables building an agent that performs hierarchical reasoning to effectively execute long-term tasks. Despite completely avoiding the commonly used step-by-step instructions, we evaluate our approach on the ALFRED benchmark and achieve state-of-the-art results.