Training effective embodied AI agents often involves manual reward engineering, expert imitation, specialized components such as maps, or leveraging additional sensors for depth and localization. Another approach is to use neural architectures alongside self-supervised objectives which encourage better representation learning. In practice, there are few guarantees that these self-supervised objectives encode task-relevant information. We propose the Scene Graph Contrastive (SGC) loss, which uses scene graphs as general-purpose, training-only, supervisory signals. The SGC loss does away with explicit graph decoding and instead uses contrastive learning to align an agent's representation with a rich graphical encoding of its environment. The SGC loss is generally applicable, simple to implement, and encourages representations that encode objects' semantics, relationships, and history. Using the SGC loss, we attain significant gains on three embodied tasks: Object Navigation, Multi-Object Navigation, and Arm Point Navigation. Finally, we present studies and analyses which demonstrate the ability of our trained representation to encode semantic cues about the environment.
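The abstract does not spell out the exact form of the SGC loss; a minimal sketch of the contrastive alignment it describes, written as a symmetric InfoNCE objective over a batch of agent-state and scene-graph embeddings (the function name, temperature, and symmetric formulation are assumptions, not the authors' implementation), could look like:

```python
import torch
import torch.nn.functional as F

def scene_graph_contrastive_loss(agent_emb, graph_emb, temperature=0.1):
    """InfoNCE-style loss that pulls each agent-state embedding toward the
    encoding of its own scene graph and pushes it away from the scene-graph
    encodings of other samples in the batch.

    agent_emb: (B, D) agent belief/state embeddings
    graph_emb: (B, D) embeddings of the corresponding scene graphs
    """
    agent_emb = F.normalize(agent_emb, dim=-1)
    graph_emb = F.normalize(graph_emb, dim=-1)
    logits = agent_emb @ graph_emb.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(agent_emb.size(0), device=agent_emb.device)
    # Symmetric cross-entropy: match agents to graphs and graphs to agents.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```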
Embodied AI has shown promising results on a rich set of robotic tasks in simulation, including visual navigation and manipulation. Prior work generally pursues high success rates with shortest paths while largely ignoring the problems caused by collisions during interaction. This lack of prioritization is understandable: in simulated environments there is no inherent cost to breaking virtual objects. As a result, well-trained agents frequently have catastrophic collisions with objects despite final success. In the robotics community, where the cost of collisions is large, collision avoidance is a long-standing and crucial topic for ensuring that robots can be safely deployed in the real world. In this work, we take the first step towards collision/disturbance-free embodied AI agents for visual mobile manipulation, facilitating safe deployment on real robots. At the core of our approach is a new disturbance-avoidance method built around the auxiliary task of disturbance prediction. When combined with a disturbance penalty, our auxiliary task greatly improves sample efficiency and final performance by distilling knowledge of disturbance into the agent. Our experiments on ManipulaTHOR show that, on test scenes with novel objects, our method improves the success rate from 61.7% to 85.6% and the success rate without disturbance from 29.8% to 50.2% over the original baseline. Extensive ablation studies show the value of our pipelined approach. The project website is at https://sites.google.com/view/disturb-free
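A rough sketch of how such a disturbance-prediction auxiliary task could be attached to an agent is shown below; the module names, dimensions, loss weight, and the way labels are obtained are assumptions, and the disturbance penalty mentioned in the abstract would additionally be applied to the reward rather than to this loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisturbancePredictionHead(nn.Module):
    """Auxiliary head that predicts, from the agent's belief state and the chosen
    action, whether that action will disturb (move or knock over) any object."""
    def __init__(self, belief_dim, num_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(belief_dim + num_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, belief, action_onehot):
        return self.net(torch.cat([belief, action_onehot], dim=-1)).squeeze(-1)

def combined_loss(rl_loss, disturbance_logits, disturbed_labels, aux_weight=0.5):
    """Add the disturbance-prediction loss to the usual RL loss.
    `disturbed_labels` are 0/1 flags from the simulator indicating whether any object moved."""
    aux_loss = F.binary_cross_entropy_with_logits(disturbance_logits, disturbed_labels.float())
    return rl_loss + aux_weight * aux_loss
```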
Contrastive Language-Image Pretraining (CLIP) encoders have been shown to be beneficial for a range of visual tasks, from classification and detection to captioning and image manipulation. We investigate the effectiveness of CLIP visual backbones for Embodied AI tasks. We build incredibly simple baselines, named EmbCLIP, with no task-specific architectures, no inductive biases (such as the use of semantic maps), no auxiliary tasks during training, and no depth maps -- yet we find that our improved baselines perform very well across a range of tasks and simulators. EmbCLIP tops the RoboTHOR ObjectNav leaderboard by a large margin of 20 points (success rate). It tops the iTHOR 1-Phase Rearrangement leaderboard, beating the next best submission, which employs Active Neural Mapping, and more than doubling the % Fixed Strict metric (0.08 to 0.17). It also beats the winners of the 2021 Habitat ObjectNav Challenge, which employ auxiliary tasks, depth maps, and human demonstrations, as well as those of the 2019 Habitat PointNav Challenge. We evaluate the ability of CLIP's visual representations to capture semantic information about input observations -- primitives useful for navigation-heavy embodied tasks -- and find that CLIP's representations encode these primitives more effectively than ImageNet-pretrained backbones. Finally, we extend one of our baselines, producing an agent capable of zero-shot object navigation that can navigate to objects that were not used as targets during training.
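A minimal sketch of the kind of baseline described here -- a frozen CLIP visual encoder feeding a small recurrent policy -- is given below; it uses the open-source `clip` package, and the backbone choice, hidden size, and head structure are assumptions rather than the authors' exact architecture.

```python
import clip                      # https://github.com/openai/CLIP
import torch
import torch.nn as nn

class CLIPVisualPolicy(nn.Module):
    """Frozen CLIP visual backbone followed by a small GRU policy head."""
    def __init__(self, num_actions, device="cpu"):
        super().__init__()
        self.clip_model, self.preprocess = clip.load("RN50", device=device)
        for p in self.clip_model.parameters():        # keep the backbone frozen
            p.requires_grad = False
        feat_dim = 1024                               # CLIP RN50 embedding dimension
        self.rnn = nn.GRU(feat_dim, 512, batch_first=True)
        self.actor = nn.Linear(512, num_actions)

    def forward(self, images, hidden=None):
        # `images` is a batch already passed through `self.preprocess`.
        with torch.no_grad():
            feats = self.clip_model.encode_image(images).float()   # (B, 1024)
        out, hidden = self.rnn(feats.unsqueeze(1), hidden)
        return self.actor(out.squeeze(1)), hidden                  # action logits, new hidden state
```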
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
Massive datasets and high-capacity models have driven many recent advances in computer vision and natural language understanding. This work presents a platform to enable similar success stories in Embodied AI. We propose ProcTHOR, a framework for procedural generation of Embodied AI environments. ProcTHOR enables us to sample arbitrarily large datasets of diverse, interactive, customizable, and performant virtual environments to train and evaluate embodied agents across navigation, interaction, and manipulation tasks. We demonstrate the power and potential of ProcTHOR via a sample of 10,000 generated houses and a simple neural model. Models trained using only RGB images on ProcTHOR, with no explicit mapping and no human task supervision, produce state-of-the-art results across six Embodied AI benchmarks for navigation, rearrangement, and arm manipulation, including the currently running Habitat 2022, AI2-THOR Rearrangement 2022, and RoboTHOR challenges. We also show that pre-training on ProcTHOR, with no fine-tuning on the downstream benchmarks, often beats previous state-of-the-art systems that have access to the downstream training data.
Training embodied agents in simulation has become mainstream for the embodied AI community. However, these agents often struggle when deployed in the physical world due to their inability to generalize to real-world environments. In this paper, we present Phone2Proc, a method that uses a 10-minute phone scan and conditional procedural generation to create a distribution of training scenes that are semantically similar to the target environment. The generated scenes are conditioned on the wall layout and arrangement of large objects from the scan, while also sampling lighting, clutter, surface textures, and instances of smaller objects with randomized placement and materials. Leveraging just a simple RGB camera, training with Phone2Proc shows massive improvements from 34.7% to 70.7% success rate in sim-to-real ObjectNav performance across a test suite of over 200 trials in diverse real-world environments, including homes, offices, and RoboTHOR. Furthermore, Phone2Proc's diverse distribution of generated scenes makes agents remarkably robust to changes in the real world, such as human movement, object rearrangement, lighting changes, or clutter.
To enhance the cross-target and cross-scene generalization of target-driven visual navigation based on deep reinforcement learning (RL), we introduce an information-theoretic regularization term into the RL objective. The regularization maximizes the mutual information between navigation actions and the transforms of the agent's visual observations, thus promoting more informed navigation decisions. In this way, the agent models the action-observation dynamics by learning a variational generative model. Based on this model, the agent generates (imagines) the next observation from its current observation and the navigation target. The agent thereby learns to understand the causal relationship between navigation actions and the changes in its observations, which allows it to predict the next navigation action by comparing the current observation with the imagined next one. Cross-target and cross-scene evaluations on the AI2-THOR framework show that our method attains gains of about 10% in average success rate over some state-of-the-art models. We further evaluate our model in two real-world settings: navigating unseen indoor scenes from the discrete Active Vision Dataset (AVD) and a continuous real-world environment with a TurtleBot. We demonstrate that our navigation model is able to successfully accomplish navigation tasks in these scenarios. Videos and models can be found in the supplementary material.
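The abstract describes learning a variational generative model of action-observation dynamics; a generic conditional-VAE sketch of that idea (operating on feature vectors, with all names, dimensions, and the loss form being assumptions rather than the paper's formulation) might look like:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NextObservationGenerator(nn.Module):
    """Toy conditional VAE that 'imagines' the next observation embedding from the
    current observation embedding and the navigation target."""
    def __init__(self, obs_dim, target_dim, latent_dim=32):
        super().__init__()
        self.enc = nn.Linear(obs_dim * 2 + target_dim, latent_dim * 2)    # q(z | o_t, o_{t+1}, g)
        self.dec = nn.Linear(latent_dim + obs_dim + target_dim, obs_dim)  # p(o_{t+1} | z, o_t, g)

    def forward(self, obs, next_obs, target):
        mu, logvar = self.enc(torch.cat([obs, next_obs, target], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()              # reparameterization trick
        imagined = self.dec(torch.cat([z, obs, target], dim=-1))
        recon = F.mse_loss(imagined, next_obs)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return imagined, recon + kl          # imagined next observation and ELBO-style loss
```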
A novel framework is proposed to incrementally collect a landmark-based graph memory and use the collected memory for image-goal navigation. Given a target image to search for, an embodied robot utilizes the semantic memory to find the target in an unknown environment. The semantic graph memory is collected from panoramic observations of an RGB-D camera, without knowing the robot's pose. In this paper, we propose Topological Semantic Graph Memory (TSGM), which consists of (1) a graph builder that constructs a topological semantic graph from the observed RGB-D images, (2) a cross-graph mixer module that takes the collected nodes to obtain contextual information, and (3) a memory decoder that takes the contextual memory as input to find an action toward the target. On the image-goal navigation task, TSGM significantly outperforms competitive baselines on success rate and by +5.0-9.0% on SPL, which means that TSGM finds efficient paths. In addition, we demonstrate our method on a mobile robot in real-world image-goal scenarios.
We present a scalable approach for learning open-world object-goal navigation (ObjectNav) -- the task of asking a virtual robot (agent) to find any instance of an object in an unexplored environment (e.g., "find a sink"). Our approach is entirely zero-shot -- i.e., it does not require ObjectNav rewards or demonstrations of any kind. Instead, we train on the image-goal navigation (ImageNav) task, in which agents find the location where a picture (i.e., the goal image) was captured. Specifically, we encode goal images into a multimodal, semantic embedding space to enable training semantic-goal navigation (SemanticNav) agents at scale in unannotated 3D environments (e.g., HM3D). After training, SemanticNav agents can be instructed to find objects described in free-form natural language (e.g., "sink", "bathroom sink", etc.) by projecting language goals into the same multimodal, semantic embedding space. As a result, our approach enables open-world ObjectNav. We extensively evaluate our agents on three ObjectNav datasets (Gibson, HM3D, and MP3D) and observe absolute improvements in success of 4.2% - 20.0%. For reference, these gains are similar to, or better than, the 5% improvement in success between the 2020 and 2021 ObjectNav challenge entrants. In an open-world setting, we find that our agents can generalize to compound instructions with a room explicitly mentioned (e.g., "find a kitchen sink") and when the target room can be inferred (e.g., "find a sink and a stove").
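A small illustration of the core mechanism -- encoding image goals and language goals into one shared embedding space -- is sketched below using the open-source `clip` package; which encoder the authors actually use and how the embedding is consumed by the policy are not asserted here.

```python
import clip
import torch
from PIL import Image

device = "cpu"
model, preprocess = clip.load("RN50", device=device)     # backbone choice is illustrative

# Training-time goal: an image of the location to reach (a blank image stands in here).
goal_view = Image.new("RGB", (224, 224))
goal_image = preprocess(goal_view).unsqueeze(0).to(device)
with torch.no_grad():
    image_goal_embedding = model.encode_image(goal_image)

# Inference-time goal: a free-form object description projected into the SAME space.
text = clip.tokenize(["a bathroom sink"]).to(device)
with torch.no_grad():
    language_goal_embedding = model.encode_text(text)

# Because both goals live in one embedding space, a policy trained only on
# image goals can be handed language goals at test time.
```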
State-of-the-art approaches to object-goal navigation rely on reinforcement learning and typically require significant computational resources and time for learning. We propose Potential functions for ObjectGoal Navigation with Interaction-free learning (PONI), a modular approach that disentangles the skills of "where to look?" for an object and "how to navigate to (x, y)?". Our key insight is that "where to look?" can be treated purely as a perception problem and learned without environment interactions. To address it, we propose a network that predicts two complementary potential functions conditioned on a semantic map and uses them to decide where to look for an unseen object. We train the potential function network using supervised learning on a passive dataset of top-down semantic maps, and integrate it into a modular framework to perform object-goal navigation. Experiments on Gibson and Matterport3D demonstrate that our method achieves state-of-the-art results for object-goal navigation while reducing the training computation cost by up to 1,600x. Code and pre-trained models are available at https://vision.cs.utexas.edu/projects/poni/
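The abstract does not describe how the two potential functions are combined; purely as an illustration of the modular "where to look?" step, the toy snippet below assumes the potentials are predicted on a top-down map, that only frontier cells are eligible long-term goals, and that a simple weighted sum is used -- all of which are assumptions.

```python
import numpy as np

def select_long_term_goal(area_potential, object_potential, frontier_mask, alpha=0.5):
    """Pick the map cell with the highest combined potential as the next goal.

    area_potential, object_potential: (H, W) maps predicted by a potential-function network
    frontier_mask: (H, W) boolean mask of cells on the explored/unexplored boundary
    """
    combined = alpha * object_potential + (1.0 - alpha) * area_potential
    combined = np.where(frontier_mask, combined, -np.inf)          # restrict goals to frontier cells
    return np.unravel_index(np.argmax(combined), combined.shape)   # (row, col) of the chosen goal
```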
First-person video highlights a camera-wearer's activities in the context of their persistent environment. However, current video understanding approaches reason over visual features from short video clips that are detached from the underlying physical space and capture only what is directly seen. We present an approach that links egocentric video and the camera's movement over time by learning representations that are predictive of the camera-wearer's (potentially unseen) local surroundings, facilitating human-centric environment understanding. We train using videos from agents in simulated 3D environments, where the environment is fully observable, and test on real videos of house tours in unseen environments. We show that by grounding videos in their physical environment, our models surpass traditional scene classification models at predicting which room the camera-wearer is in (where frame-level information is insufficient), and can leverage this grounding to localize video moments corresponding to environment-centric queries, outperforming prior approaches. Project page: http://vision.cs.utexas.edu/projects/ego-scene-context/
Generalisation to unseen contexts remains a challenge for embodied navigation agents. In the context of semantic audio-visual navigation (SAVi) tasks, the notion of generalisation should include both generalising to unseen indoor visual scenes as well as generalising to unheard sounding objects. However, previous SAVi task definitions do not include evaluation conditions on truly novel sounding objects, resorting instead to evaluating agents on unheard sound clips of known objects; meanwhile, previous SAVi methods do not include explicit mechanisms for incorporating domain knowledge about object and region semantics. These weaknesses limit the development and assessment of models' abilities to generalise their learned experience. In this work, we introduce the use of knowledge-driven scene priors in the semantic audio-visual embodied navigation task: we combine semantic information from our novel knowledge graph that encodes object-region relations, spatial knowledge from dual Graph Encoder Networks, and background knowledge from a series of pre-training tasks -- all within a reinforcement learning framework for audio-visual navigation. We also define a new audio-visual navigation sub-task, where agents are evaluated on novel sounding objects, as opposed to unheard clips of known objects. We show improvements over strong baselines in generalisation to unseen regions and novel sounding objects, within the Habitat-Matterport3D simulation environment, under the SoundSpaces task.
In this work, we present a memory-augmented approach for image-goal navigation. Earlier attempts, including RL-based and SLAM-based approaches, either show poor generalization performance or rely heavily on pose/depth sensors. Our method is based on an attention-based end-to-end model that leverages an episodic memory to learn to navigate. First, we train a state-embedding network in a self-supervised fashion, and then use it to embed previously visited states into the agent's memory. Our navigation policy takes advantage of this information through an attention mechanism. We validate our approach with extensive evaluations and show that our model establishes a new state of the art on the challenging Gibson dataset. Furthermore, in sharp contrast to related work, we achieve this impressive performance from RGB input alone, without access to additional information such as position or depth.
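A compact sketch of an attention readout over an episodic memory of embedded states -- the general mechanism the abstract describes, with the module structure, head count, and dimensions being assumptions -- could look like:

```python
import torch
import torch.nn as nn

class EpisodicMemoryAttention(nn.Module):
    """Attends from the current state embedding over embeddings of previously
    visited states; the readout is concatenated with the current state and can
    then be fed to the navigation policy."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=num_heads, batch_first=True)

    def forward(self, current_state, memory):
        # current_state: (B, 1, D) query; memory: (B, M, D) embedded past states
        readout, _ = self.attn(current_state, memory, memory)
        return torch.cat([current_state, readout], dim=-1)     # (B, 1, 2D)

reader = EpisodicMemoryAttention(dim=128)
features = reader(torch.rand(2, 1, 128), torch.rand(2, 50, 128))
```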
For robots to be generally useful, they must be able to find arbitrary objects described by people (i.e., be language-driven) even without expensive navigation training on in-domain data (i.e., perform zero-shot inference). We explore these capabilities in a unified setting: language-driven zero-shot object navigation (L-ZSON). Inspired by the recent success of open-vocabulary models for image classification, we investigate a straightforward framework, CLIP on Wheels (CoW), to adapt open-vocabulary models to this task without fine-tuning. To better evaluate L-ZSON, we introduce the Pasture benchmark, which considers finding uncommon objects, objects described by spatial and appearance attributes, and hidden objects described relative to visible objects. We conduct an in-depth empirical study by directly deploying 21 CoW baselines across Habitat, RoboTHOR, and Pasture. In total, we evaluate over 90k navigation episodes and find that (1) CoW baselines often struggle to leverage language descriptions, but are proficient at finding uncommon objects. (2) A simple CoW, with CLIP-based object localization and classical exploration -- and no additional training -- matches the navigation efficiency of a state-of-the-art ZSON method trained for 500M steps on Habitat MP3D data. This same CoW provides a 15.6 percentage point improvement in success over a state-of-the-art RoboTHOR ZSON model.
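As an illustration of the training-free idea behind CoW -- scoring the current view against the language goal with an open-vocabulary model and switching out of exploration once the score is high enough -- the snippet below uses the open-source `clip` package; the prompt template and threshold are arbitrary placeholders, and CoW's actual object localization (e.g., where in the image the object is) is not reproduced here.

```python
import clip
import torch
from PIL import Image

device = "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def goal_relevance(view, object_description, threshold=0.26):
    """Return a similarity score between the egocentric view and the described goal,
    plus a flag for switching from exploration to goal-driven movement."""
    image_input = preprocess(view).unsqueeze(0).to(device)
    text_input = clip.tokenize([f"a photo of a {object_description}"]).to(device)
    with torch.no_grad():
        img = model.encode_image(image_input)
        txt = model.encode_text(text_input)
        score = torch.cosine_similarity(img, txt).item()
    return score, score > threshold

score, found = goal_relevance(Image.new("RGB", (224, 224)), "toy airplane under the bed")
```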
Learning how to navigate among humans in an occluded and spatially constrained indoor environment is a key ability required for an embodied agent to be integrated into our society. In this paper, we propose an end-to-end architecture that exploits Socially-Aware Tasks (referred to as Risk and Social Compass) to inject into a reinforcement learning navigation policy the ability to infer common-sense social behaviors. To this end, our tasks exploit the notion of immediate and future dangers of collision. Furthermore, we propose an evaluation protocol specifically designed for the Social Navigation Task in simulated environments. This is done to capture fine-grained features and characteristics of the policy by analyzing the minimal unit of human-robot spatial interaction, called Encounter. We validate our approach on the Gibson4+ and Habitat-Matterport3D datasets.
There has been an emerging paradigm shift from the era of "internet AI" to the era of "embodied AI", where AI algorithms and agents no longer learn from datasets of images, videos, or text curated primarily from the internet. Instead, they learn through interactions with their environments from an egocentric perception similar to humans. Consequently, there has been substantial growth in the demand for embodied AI simulators to support a diversity of embodied AI research tasks. This growing interest in embodied AI is beneficial to the greater pursuit of artificial general intelligence (AGI), but there has been no contemporary and comprehensive survey of this field. This paper aims to provide an encyclopedic survey of the field of embodied AI, from its simulators to its research. By evaluating nine current embodied AI simulators with our proposed seven features, this paper aims to understand the simulators in their use for embodied AI research, along with their limitations. Lastly, this paper surveys the three main research tasks in embodied AI -- visual exploration, visual navigation, and embodied question answering (QA) -- covering the state-of-the-art approaches, evaluation metrics, and datasets. Finally, with the new insights gained from surveying the field, this paper provides suggestions on simulator-to-task selection and recommendations on future directions of the field.
In practice, imitation learning is preferred to pure reinforcement learning whenever it is possible to design a teaching agent to provide expert supervision. However, we show that when the teaching agent makes decisions using privileged information that the student cannot access, this information is marginalized during imitation learning, resulting in an "imitation gap" and, potentially, poor results. Prior work bridges this gap via a progression from imitation learning to reinforcement learning. While often successful, gradual progression fails for tasks that require frequent switches between exploration and memorization. To better address these tasks and alleviate the imitation gap, we propose "Adaptive Insubordination" (ADVISOR). ADVISOR dynamically weights imitation and reward-based reinforcement learning losses during training, enabling on-the-fly switching between imitation and exploration. On a suite of challenging tasks set within gridworlds, multi-agent particle environments, and high-fidelity 3D simulators, we show that on-the-fly switching with ADVISOR outperforms pure imitation, pure reinforcement learning, as well as their sequential and parallel combinations.
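The core of the method is a per-state blend of imitation and RL losses; a minimal sketch of such a blend is below, with the caveat that ADVISOR derives the weight from an auxiliary estimate of how well the privileged expert can be imitated at each state, whereas here the weight is simply an input.

```python
import torch

def blended_loss(rl_loss_per_step, imitation_loss_per_step, weight_per_step):
    """Blend per-step RL and imitation losses with a dynamically computed weight in [0, 1].
    Weights near 1 trust the expert (imitation); weights near 0 fall back to pure RL."""
    w = weight_per_step.clamp(0.0, 1.0)
    return (w * imitation_loss_per_step + (1.0 - w) * rl_loss_per_step).mean()

# Example: 8 time steps with random losses and weights.
loss = blended_loss(torch.rand(8), torch.rand(8), torch.rand(8))
```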
Efficient ObjectGoal navigation (ObjectNav) in novel environments requires an understanding of the spatial and semantic regularities in environment layouts. In this work, we present a straightforward method for learning these regularities by predicting the locations of unobserved objects from incomplete semantic maps. Our method differs from previous prediction-based navigation methods, such as frontier potential prediction or egocentric map completion, by directly predicting unseen targets while leveraging the global context from all previously explored areas. Our prediction model is lightweight and can be trained in a supervised manner using a relatively small amount of passively collected data. Once trained, the model can be incorporated into a modular pipeline for ObjectNav without the need for any reinforcement learning. We validate the effectiveness of our method on the HM3D and MP3D ObjectNav datasets. We find that it achieves the state-of-the-art on both datasets, despite not using any additional data for training.
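A toy version of the kind of supervised map-completion model described here -- an incomplete top-down semantic map in, per-category heatmaps of likely unobserved object locations out -- is sketched below; the architecture, channel layout, and loss are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UnseenObjectPredictor(nn.Module):
    """Fully-convolutional model: input is a partial semantic map (one channel per
    category plus an 'explored' mask), output is a per-category logit map of where
    unobserved target objects are likely to be."""
    def __init__(self, num_categories):
        super().__init__()
        in_ch = num_categories + 1                  # semantic channels + explored mask
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_categories, 1),
        )

    def forward(self, partial_map):
        return self.net(partial_map)

# Supervised training against complete maps collected offline (fake data shown here).
model = UnseenObjectPredictor(num_categories=16)
partial_maps = torch.rand(4, 17, 128, 128)
full_maps = (torch.rand(4, 16, 128, 128) > 0.95).float()
loss = nn.BCEWithLogitsLoss()(model(partial_maps), full_maps)
loss.backward()
```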
We introduce a goal-driven navigation system to improve mapless visual navigation in indoor scenes. Our method takes a multi-view observation of the robot and the target as input at each time step to provide a sequence of actions that move the robot to the target without relying on a map at runtime. The system is learned by optimizing a combinational objective encompassing three key designs. First, we propose that the agent conceive the next observation before making an action decision. This is achieved by learning a variational generative module from expert demonstrations. Then, we propose predicting static collisions in advance, as an auxiliary task, to improve safety during navigation. Moreover, to alleviate the training data imbalance problem of termination-action prediction, we also introduce a target checking module to differentiate the termination action from the augmented navigation policy. All three proposed designs contribute to improved training data efficiency, static collision avoidance, and navigation generalization performance, resulting in a novel target-driven, mapless navigation system. Through experiments on a TurtleBot, we provide evidence that our model can be integrated into a robot system and navigate in the real world. Videos and models can be found in the supplementary material.
We present Habitat, a platform for research in embodied artificial intelligence (AI). Habitat enables training embodied agents (virtual robots) in highly efficient photorealistic 3D simulation. Specifically, Habitat consists of: (i) Habitat-Sim: a flexible, high-performance 3D simulator with configurable agents, sensors, and generic 3D dataset handling. Habitat-Sim is fast -- when rendering a scene from Matterport3D, it achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU. (ii) Habitat-API: a modular high-level library for end-to-end development of embodied AI algorithms -- defining tasks (e.g. navigation, instruction following, question answering), configuring, training, and benchmarking embodied agents. These large-scale engineering contributions enable us to answer scientific questions requiring experiments that were till now impracticable or 'merely' impractical. Specifically, in the context of point-goal navigation: (1) we revisit the comparison between learning and SLAM approaches from two recent works [20,16] and find evidence for the opposite conclusion -- that learning outperforms SLAM if scaled to an order of magnitude more experience than previous investigations, and (2) we conduct the first cross-dataset generalization experiments {train, test} × {Matterport3D, Gibson} for multiple sensors {blind, RGB, RGBD, D} and find that only agents with depth (D) sensors generalize across datasets. We hope that our open-source platform and these findings will advance research in embodied AI.