A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator, a large-scale reinforcement learning environment based on real imagery [11]. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings: the Room-to-Room (R2R) dataset (https://bringmeaspoon.org).
Example instruction: Head upstairs and walk past the piano through an archway directly in front. Turn right when the hallway ends at pictures and table. Wait by the moose antlers hanging on the wall.
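As a concrete illustration of what an R2R episode pairs together, the sketch below loads one annotation and prints its fields. It assumes the publicly released R2R JSON layout (scan, heading, path, instructions); the file name is illustrative.

```python
import json

# Illustrative file name; the released splits are R2R_train.json, R2R_val_seen.json, etc.
with open("R2R_train.json") as f:
    episodes = json.load(f)

ep = episodes[0]
# Each entry pairs one trajectory (a list of panoramic viewpoint IDs in a
# Matterport3D scan) with several independently written English instructions.
print("scan:", ep["scan"])                    # Matterport3D building ID
print("start heading (rad):", ep["heading"])  # initial agent heading
print("viewpoints on path:", len(ep["path"]))
print("instruction:", ep["instructions"][0])
```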
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
We present ALFRED (Action Learning From Realistic Environments and Directives), a benchmark for learning a mapping from natural language instructions and egocentric vision to sequences of actions for household tasks. ALFRED includes long, compositional tasks with non-reversible state changes to shrink the gap between research benchmarks and real-world applications. ALFRED consists of expert demonstrations in interactive visual environments for 25k natural language directives. These directives contain both high-level goals like "Rinse off a mug and place it in the coffee maker." and low-level language instructions like "Walk to the coffee maker on the right." ALFRED tasks are more complex in terms of sequence length, action space, and language than existing vision-and-language task datasets. We show that a baseline model based on recent embodied vision-and-language tasks performs poorly on ALFRED, suggesting that there is significant room for developing innovative grounded visual language understanding models with this benchmark.
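Purely as a hypothetical illustration of the mapping the benchmark evaluates (language directives to action sequences), the sketch below pairs a high-level goal and low-level step instructions with a discrete action trace. The field names and action vocabulary here are illustrative, not ALFRED's actual schema.

```python
# Hypothetical, simplified example of a directive-to-action mapping of the kind
# ALFRED evaluates; field names and action vocabulary are illustrative only.
episode = {
    "goal": "Rinse off a mug and place it in the coffee maker.",
    "step_instructions": [
        "Walk to the coffee maker on the right.",
        "Pick up the mug next to the sink.",
    ],
    "actions": [
        {"action": "MoveAhead"},
        {"action": "RotateRight"},
        {"action": "PickupObject", "object": "Mug"},
    ],
}

# An agent is scored on whether executing its predicted actions reaches the goal state.
for step in episode["actions"]:
    print(step)
```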
Recent studies in Vision-and-Language Navigation (VLN) train RL agents to execute natural-language navigation instructions in photorealistic environments, as a step towards robots that can follow human instructions. However, given the scarcity of human instruction data and limited diversity in the training environments, these agents still struggle with complex language grounding and spatial language understanding. Pretraining on large text and image-text datasets from the web has been extensively explored, but the improvements are limited. We investigate large-scale augmentation with synthetic instructions. We take 500+ indoor environments captured in densely-sampled 360-degree panoramas, construct navigation trajectories through these panoramas, and generate a visually-grounded instruction for each trajectory using Marky, a high-quality multilingual navigation instruction generator. We also synthesize image observations from novel viewpoints using an image-to-image GAN. The resulting dataset of 4.2M instruction-trajectory pairs is two orders of magnitude larger than existing human-annotated datasets, and contains a wider variety of environments and viewpoints. To efficiently leverage data at this scale, we train a simple transformer agent with imitation learning. On the challenging RxR dataset, our approach outperforms all existing RL agents, improving the state-of-the-art NDTW from 71.1 to 79.1 in seen environments, and from 64.6 to 66.8 in unseen test environments. Our work points to a new path to improving instruction-following agents, emphasizing large-scale imitation learning and the development of synthetic instruction generation capabilities.
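Since the abstract reports NDTW, here is a minimal sketch of that fidelity metric (normalized dynamic time warping between the agent path and the reference path, exponentiated and length-normalized). The 3 m threshold is an assumption following the usual VLN success radius.

```python
import numpy as np

def ndtw(agent_path: np.ndarray, ref_path: np.ndarray, d_th: float = 3.0) -> float:
    """Normalized dynamic time warping between two point sequences of shape (N, 2) or (N, 3).

    A sketch of the path-fidelity metric; d_th = 3.0 m follows the usual success
    threshold and is an assumption here.
    """
    n, m = len(agent_path), len(ref_path)
    dtw = np.full((n + 1, m + 1), np.inf)
    dtw[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(agent_path[i - 1] - ref_path[j - 1])
            dtw[i, j] = cost + min(dtw[i - 1, j], dtw[i, j - 1], dtw[i - 1, j - 1])
    # Exponentiated, length-normalized DTW in [0, 1]; higher is better.
    return float(np.exp(-dtw[n, m] / (m * d_th)))

# Toy usage: a path that hugs the reference scores close to 1.
ref = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
agent = np.array([[0.0, 0.2], [1.0, 0.1], [2.0, 0.0]])
print(round(ndtw(agent, ref), 3))
```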
We present a new AI task, Embodied Question Answering (EmbodiedQA), in which an agent is spawned at a random location in a 3D environment and asked a question ('What color is the car?'). In order to answer, the agent must first intelligently navigate to explore the environment, gather information through first-person (egocentric) vision, and then answer the question ('orange'). This challenging task requires a range of AI skills: active perception, language understanding, goal-driven navigation, commonsense reasoning, and grounding of language into actions. In this work, we develop the environments, end-to-end-trained reinforcement learning agents, and evaluation protocols for EmbodiedQA.
In Vision-and-Language Navigation (VLN), an embodied agent is required to navigate realistic 3D environments by following natural language instructions. One major bottleneck of existing VLN approaches is the lack of sufficient training data, resulting in unsatisfactory generalization to unseen environments. While VLN data is typically collected manually, this approach is expensive and prevents scalability. In this work, we address the data scarcity issue by proposing to automatically create a large-scale VLN dataset from 900 unlabeled 3D buildings from HM3D. We generate a navigation graph for each building and transfer object predictions from 2D into pseudo 3D object labels via cross-view consistency. We then fine-tune a pretrained language model using the pseudo object labels as prompts to alleviate the cross-modal gap in instruction generation. The resulting HM3D-AutoVLN dataset is an order of magnitude larger than existing VLN datasets in terms of navigation environments and instructions. We experimentally show that HM3D-AutoVLN significantly increases the generalization ability of the resulting VLN models. On the SPL metric, our approach improves over the state of the art by 7.1% and 8.1% on the unseen validation splits of the REVERIE and SOON datasets, respectively.
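For reference, the SPL metric cited here (Success weighted by Path Length) can be computed as in the sketch below, where each episode supplies a success flag, the shortest-path length, and the length of the path the agent actually took.

```python
from typing import Iterable, Tuple

def spl(episodes: Iterable[Tuple[bool, float, float]]) -> float:
    """Success weighted by Path Length.

    Each episode is (success, shortest_path_length, agent_path_length), lengths in meters.
    SPL = mean over episodes of success * shortest / max(agent, shortest).
    """
    episodes = list(episodes)
    total = 0.0
    for success, shortest, taken in episodes:
        total += float(success) * shortest / max(taken, shortest)
    return total / len(episodes) if episodes else 0.0

# Toy usage: one efficient success, one inefficient success, one failure.
print(spl([(True, 10.0, 10.0), (True, 10.0, 20.0), (False, 8.0, 5.0)]))  # 0.5
```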
There has been an emerging paradigm shift in AI algorithms and agents from the era of "Internet AI" to the era of "Embodied AI", in which learning no longer draws primarily on datasets of images, videos, or text curated from the Internet. Instead, agents learn through interactions with their environments, using an egocentric perception similar to that of humans. Consequently, there has been substantial growth in the demand for embodied AI simulators to support a variety of embodied AI research tasks. This growing interest in embodied AI benefits the greater pursuit of artificial general intelligence (AGI), yet there has been no contemporary and comprehensive survey of the field. This paper aims to provide an encyclopedic survey of embodied AI, from its simulators to its research. By evaluating nine current embodied AI simulators against seven features we propose, the paper seeks to understand what the simulators offer for embodied AI research and where their limitations lie. Finally, the paper surveys the three main research tasks in embodied AI: visual exploration, visual navigation, and embodied question answering (QA), covering state-of-the-art approaches, evaluation metrics, and datasets. Drawing on the new insights revealed by surveying the field, the paper closes with suggestions for simulator-task selection and recommendations for future directions of the field.
We present Habitat, a platform for research in embodied artificial intelligence (AI). Habitat enables training embodied agents (virtual robots) in highly efficient photorealistic 3D simulation. Specifically, Habitat consists of: (i) Habitat-Sim: a flexible, high-performance 3D simulator with configurable agents, sensors, and generic 3D dataset handling. Habitat-Sim is fast: when rendering a scene from Matterport3D, it achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU. (ii) Habitat-API: a modular high-level library for end-to-end development of embodied AI algorithms, covering task definition (e.g. navigation, instruction following, question answering) as well as the configuration, training, and benchmarking of embodied agents. These large-scale engineering contributions enable us to answer scientific questions requiring experiments that were till now impracticable or 'merely' impractical. Specifically, in the context of point-goal navigation: (1) we revisit the comparison between learning and SLAM approaches from two recent works [20,16] and find evidence for the opposite conclusion: learning outperforms SLAM if scaled to an order of magnitude more experience than in previous investigations, and (2) we conduct the first cross-dataset generalization experiments {train, test} × {Matterport3D, Gibson} for multiple sensors {blind, RGB, RGBD, D} and find that only agents with depth (D) sensors generalize across datasets. We hope that our open-source platform and these findings will advance research in embodied AI.
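A minimal sketch of driving a Habitat agent through one episode with the high-level API described above. It assumes a working habitat-lab installation; the exact config path differs across releases, so treat it as illustrative.

```python
import habitat

# The config path is illustrative; PointNav benchmark configs live under different
# locations depending on the habitat-lab release, so adjust for your installation.
config = habitat.get_config("configs/tasks/pointnav.yaml")

env = habitat.Env(config=config)
observations = env.reset()              # dict of sensor readings (e.g. RGB, GPS+compass)
while not env.episode_over:
    action = env.action_space.sample()  # random policy, just to exercise the loop
    observations = env.step(action)
print(env.get_metrics())                # per-episode metrics such as distance to goal
env.close()
```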
We consider the problem of embodied visual navigation given an image-goal (ImageNav) where an agent is initialized in an unfamiliar environment and tasked with navigating to a location 'described' by an image. Unlike related navigation tasks, ImageNav does not have a standardized task definition which makes comparison across methods difficult. Further, existing formulations have two problematic properties: (1) image-goals are sampled from random locations which can lead to ambiguity (e.g., looking at walls), and (2) image-goals match the camera specification and embodiment of the agent; this rigidity is limiting when considering user-driven downstream applications. We present the Instance-specific ImageNav task (InstanceImageNav) to address these limitations. Specifically, the goal image is 'focused' on some particular object instance in the scene and is taken with camera parameters independent of the agent. We instantiate InstanceImageNav in the Habitat Simulator using scenes from the Habitat-Matterport3D dataset (HM3D) and release a standardized benchmark to measure community progress.
The visual indoor navigation (VIN) task has attracted increasing attention from the data-driven machine learning community, especially with the recently reported successes of learning-based methods. Owing to the innate complexity of the task, researchers have approached the problem from a variety of angles, the full scope of which has not yet been captured in an overarching report. This survey first summarizes representative work on learning-based approaches to the VIN task, then identifies and discusses the problems that hinder VIN performance, motivating future research in these key areas that the community would benefit from exploring.
We study the problem of developing autonomous agents that can follow human instructions to infer and perform a sequence of actions that completes the underlying task. Significant progress has been made in recent years, especially for tasks with short horizons. However, for long-horizon tasks with extended sequences of actions, an agent can easily ignore some instructions or get stuck in the middle of a long instruction and eventually fail the task. To address this challenge, we propose a milestone-based task tracker (M-Track) to guide the agent and monitor its progress. Specifically, we propose a milestone builder that tags the instructions with navigation and interaction milestones the agent needs to complete step by step, and a milestone checker that systematically checks the agent's progress on its current milestone and determines when to proceed to the next one. On the challenging ALFRED dataset, M-Track yields notable 33% and 52% relative improvements in unseen success rate over two competitive base models.
Robots operating in human spaces must be able to engage in natural language interaction with people, both understanding and executing instructions and using conversation to resolve ambiguity and recover from mistakes. To this end, we introduce TEACh, a dataset of over 3,000 human-human interactive dialogues for completing household tasks in simulation. A Commander with access to oracle information about a task communicates in natural language with a Follower. The Follower navigates through and interacts with the environment to complete tasks varying in complexity from 'Make Coffee' to 'Prepare Breakfast', asking questions and obtaining additional information from the Commander. We propose three benchmarks using TEACh to study embodied intelligence challenges, and we evaluate initial models' abilities in dialogue understanding, language grounding, and task execution.
Contrastive Language-Image Pretraining (CLIP) encoders have been shown to be beneficial for a range of visual tasks, from classification and detection to captioning and image manipulation. We investigate the effectiveness of CLIP visual backbones for embodied AI tasks. We build incredibly simple baselines, named EmbCLIP, with no task-specific architectures, no inductive biases (such as the use of semantic maps), no auxiliary tasks during training, and no depth maps, yet we find that our improved baselines perform very well across a range of tasks and simulators. EmbCLIP tops the RoboTHOR ObjectNav leaderboard by a large margin of 20 points (Success Rate). It tops the iTHOR 1-Phase Rearrangement leaderboard, beating the next-best submission, which employs Active Neural Mapping, and more than doubling the % Fixed Strict metric (0.08 to 0.17). It also beats the winners of the 2021 Habitat ObjectNav Challenge, who employ auxiliary tasks, depth maps, and human demonstrations, as well as those of the 2019 Habitat PointNav Challenge. We evaluate the ability of CLIP's visual representations to capture semantic information about input observations (primitives that are useful for navigation-heavy embodied tasks) and find that CLIP's representations encode these primitives more effectively than ImageNet-pretrained backbones. Finally, we extend one of our baselines, producing an agent capable of zero-shot object navigation that can navigate to objects that were not used as targets during training.
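A minimal sketch of the general recipe described above: a frozen CLIP visual backbone (RN50, via the open-source clip package) feeding a small policy head. The policy head here is a stand-in for illustration, not the authors' architecture, and EmbCLIP itself may tap the backbone differently.

```python
import torch
import torch.nn as nn
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("RN50", device=device)
clip_model.eval()  # the visual backbone stays frozen

class PolicyHead(nn.Module):
    """Stand-in policy head: CLIP image features -> action logits."""
    def __init__(self, feat_dim: int = 1024, num_actions: int = 6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(), nn.Linear(512, num_actions))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)

policy = PolicyHead().to(device)

frame = preprocess(Image.new("RGB", (224, 224))).unsqueeze(0).to(device)  # dummy observation
with torch.no_grad():
    feats = clip_model.encode_image(frame).float()  # (1, 1024) for the RN50 backbone
logits = policy(feats)
action = logits.argmax(dim=-1)
```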
Vision-and-Language Navigation (VLN) tasks require an agent to navigate step by step while perceiving visual observations and comprehending a natural language instruction. Large data bias, caused by the disparity between the small data scale and the large navigation space, makes the VLN task challenging. Previous works have proposed various data augmentation methods to reduce data bias. However, these works do not explicitly reduce the data bias across different house scenes. As a result, the agent overfits to the seen scenes and achieves poor navigation performance in unseen scenes. To tackle this problem, we propose the Random Environmental Mixup (REM) method, which generates cross-connected house scenes as augmented data by mixing up environments. Specifically, we first select key viewpoints according to the room connection graph of each scene. Then, we cross-connect the key views of different scenes to construct augmented scenes. Finally, we generate augmented instruction-path pairs in the cross-connected scenes. Experimental results on benchmark datasets show that the data augmented by REM helps the agent reduce its performance gap between seen and unseen environments and improves overall performance, making our model the best existing approach on the standard VLN benchmark. The code has been released at https://github.com/lcfractal/vlnrem.
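A minimal sketch of the cross-connection idea under simplifying assumptions: two scenes represented as navigation graphs, one key viewpoint picked per scene, and a new edge joining them into an augmented scene. The key-viewpoint heuristic here is illustrative, not REM's room-connection-graph criterion.

```python
import networkx as nx

def cross_connect(scene_a: nx.Graph, scene_b: nx.Graph) -> nx.Graph:
    """Join two scene navigation graphs at one key viewpoint each.

    Illustrative heuristic: pick the highest-degree node in each scene as its
    key viewpoint; REM's actual selection uses the room connection graph.
    """
    key_a = max(scene_a.degree, key=lambda kv: kv[1])[0]
    key_b = max(scene_b.degree, key=lambda kv: kv[1])[0]
    merged = nx.union(scene_a, scene_b, rename=("A:", "B:"))
    merged.add_edge(f"A:{key_a}", f"B:{key_b}")  # the cross-connection
    return merged

# Toy usage: two tiny viewpoint graphs become one augmented, cross-connected scene.
a = nx.path_graph(4)
b = nx.cycle_graph(3)
augmented = cross_connect(a, b)
print(augmented.number_of_nodes(), augmented.number_of_edges())
```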
Vision-and-Language Navigation (VLN) is a task in which an agent follows a language instruction to navigate to a goal position, relying on ongoing interactions with the environment while moving. Recent Transformer-based VLN methods have made great progress, benefiting from direct connections between visual observations and the language instruction via a multimodal cross-attention mechanism. However, these methods usually represent temporal context as a fixed-length vector, either by using an LSTM decoder or by building a recurrent Transformer with manually designed hidden states. Considering that a single fixed-length vector is often insufficient to capture long-term temporal context, in this paper we introduce a Multimodal Transformer with Variable-length Memory (MTVM) that models temporal context explicitly. Specifically, MTVM enables the agent to keep track of its navigation trajectory by directly storing previous activations in a memory bank. To further improve performance, we propose a memory-aware consistency loss that helps learn a better joint representation of temporal context with randomly masked instructions. We evaluate MTVM on the popular R2R and CVDN datasets; our model improves the success rate by 2% on the R2R unseen validation and test splits and improves the goal progress metric by 1.6 m on the CVDN test set.
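A simplified sketch of the variable-length memory idea: the agent appends each step's activation to a memory bank and attends over the whole bank with standard multi-head attention. This is a generic illustration, not the MTVM implementation.

```python
import torch
import torch.nn as nn

class VariableLengthMemory(nn.Module):
    """Generic sketch: store per-step activations and attend over them."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.memory = []                           # grows with the trajectory

    def reset(self):
        # Call at the start of each episode so the memory covers one trajectory.
        self.memory.clear()

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # state: (batch, dim) activation for the current step.
        self.memory.append(state.detach())         # store without backprop through history
        bank = torch.stack(self.memory, dim=1)     # (batch, steps_so_far, dim)
        query = state.unsqueeze(1)                 # (batch, 1, dim)
        ctx, _ = self.attn(query, bank, bank)      # attend over the whole trajectory
        return ctx.squeeze(1)                      # (batch, dim) temporal context

# Toy usage: three navigation steps for a batch of two episodes.
mem = VariableLengthMemory()
for _ in range(3):
    context = mem(torch.randn(2, 256))
print(context.shape)  # torch.Size([2, 256])
```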
General-purpose robots that coexist with humans in their environments must learn to relate human language to their perceptions and actions in order to be useful across a range of daily tasks. Moreover, they need to acquire a diverse repertoire of general-purpose skills that allows composing long-horizon tasks by following unconstrained language instructions. In this paper, we present CALVIN (Composing Actions from Language and Vision), an open-source simulated benchmark for learning long-horizon, language-conditioned tasks. Our aim is to enable the development of agents that can solve many robotic manipulation tasks over a long horizon, from onboard sensors, and specified only via human language. CALVIN tasks are more complex in terms of sequence length, action space, and language than existing vision-and-language task datasets, and the benchmark supports flexible specification of sensor suites. We evaluate agents in zero-shot settings with novel language instructions as well as novel environments and objects. We show that a baseline model based on multi-context imitation learning performs poorly on CALVIN, suggesting there is significant room for developing innovative agents that learn to relate human language to their world model using this benchmark.
This work studies the image-goal navigation problem, which requires guiding a robot with noisy sensors and controls through a real, crowded environment. Recent fruitful approaches rely on deep reinforcement learning and learn navigation policies in simulated environments that are much simpler than the real one. Directly transferring these trained policies to real environments can be very challenging or even dangerous. We tackle this problem with a hierarchical navigation method composed of four decoupled modules. The first module maintains an obstacle map during robot navigation. The second periodically predicts a long-term goal on the real-time map. The third plans collision-free command sets for navigating to long-term goals, and the final module stops the robot properly near the goal image. The four modules are developed separately to suit image-goal navigation in real crowded scenarios. In addition, the hierarchical decomposition decouples the learning of navigation goal planning, collision avoidance, and navigation-ending prediction, which reduces the search space during navigation training and helps improve generalization to previously unseen real scenes. We evaluate the method in both a simulator and the real world with a mobile robot. The results show that our method outperforms several navigation baselines and can successfully accomplish navigation tasks in these scenarios.
We present a scalable approach for learning open-world object-goal navigation (ObjectNav), the task of asking a virtual robot (agent) to find any instance of an object in an unexplored environment (e.g., "find a sink"). Our approach is entirely zero-shot, i.e., it does not require ObjectNav rewards or demonstrations of any kind. Instead, we train on the image-goal navigation (ImageNav) task, in which an agent finds the location where a picture (i.e., the goal image) was captured. Specifically, we encode goal images into a multimodal, semantic embedding space so that semantic-goal navigation (SemanticNav) agents can be trained at scale in unannotated 3D environments (e.g., HM3D). After training, SemanticNav agents can be instructed to find objects described in free-form natural language (e.g., "sink", "bathroom sink", etc.) by projecting language goals into the same multimodal, semantic embedding space. As a result, our approach enables open-world ObjectNav. We extensively evaluate our agents on three ObjectNav datasets (Gibson, HM3D, and MP3D) and observe absolute improvements in success of 4.2%-20.0%. For reference, these gains are similar to or better than the 5% improvement in success between the 2020 and 2021 ObjectNav Challenge winners. In an open-world setting, we find that our agents can generalize to compound instructions with a room explicitly mentioned (e.g., "find a kitchen sink") and when the target room can be inferred (e.g., "find a sink and a stove").
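A minimal sketch of the shared goal-embedding idea: at training time the goal is an image embedding, and at evaluation time an object description is projected into the same space with a text encoder. CLIP is used here as a stand-in multimodal encoder and the prompt template is an assumption; this is not the authors' training code.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def image_goal_embedding(goal_image: Image.Image) -> torch.Tensor:
    """Training-time goal: embed the picture taken at the goal location."""
    with torch.no_grad():
        x = preprocess(goal_image).unsqueeze(0).to(device)
        return model.encode_image(x).float().squeeze(0)

def language_goal_embedding(object_name: str) -> torch.Tensor:
    """Evaluation-time goal: project a free-form object description into the same space."""
    with torch.no_grad():
        tokens = clip.tokenize([f"a photo of a {object_name}"]).to(device)  # illustrative prompt
        return model.encode_text(tokens).float().squeeze(0)

# Both embeddings live in the same space, so one navigation policy can consume either.
img_goal = image_goal_embedding(Image.new("RGB", (224, 224)))
txt_goal = language_goal_embedding("bathroom sink")
print(img_goal.shape, txt_goal.shape)  # torch.Size([512]) torch.Size([512])
```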
In this work, we present a memory-augmented approach for image-goal navigation. Earlier attempts, including RL-based and SLAM-based approaches, have either shown poor generalization performance or been heavily reliant on pose/depth sensors. Our method is based on an attention-based, end-to-end model that leverages an episodic memory to learn to navigate. First, we train a state-embedding network in a self-supervised fashion and then use it to embed previously visited states into the agent's memory. Our navigation policy takes advantage of this information through an attention mechanism. We validate our approach with extensive evaluations and show that our model establishes a new state of the art on the challenging Gibson dataset. Furthermore, in stark contrast to related work, we achieve this impressive performance from RGB input alone, without access to additional information such as position or depth.
Existing approaches for vision-and-language navigation (VLN) are mainly based on cross-modal reasoning over discrete views. However, this scheme may hamper an agent's spatial and numerical reasoning because of incomplete objects within a single view and duplicate observations across views. A potential solution is mapping discrete views into a unified bird's-eye view, which can aggregate partial and duplicate observations. Existing metric maps could achieve this goal, but they suffer from less expressive semantics (e.g. usually predefined labels) and limited map size, which weakens an agent's language grounding and long-term planning ability. Inspired by the robotics community, we introduce hybrid topo-metric maps into VLN, where a topological map is used for long-term planning and a metric map for short-term reasoning. Beyond mapping with more expressive deep features, we further design a pre-training framework via the hybrid map to learn language-informed map representations, which enhances cross-modal grounding and facilitates the final language-guided navigation goal. Extensive experiments demonstrate the effectiveness of the map-based route for VLN, and the proposed method sets the new state-of-the-art on three VLN benchmarks.
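A minimal sketch of the hybrid map idea under simplifying assumptions: a sparse topological graph of visited viewpoints for long-term planning, paired with a small metric occupancy grid for short-term reasoning. This is a generic illustration, not the paper's pre-training framework.

```python
import numpy as np
import networkx as nx

class HybridMap:
    """Generic sketch of a hybrid topo-metric map for navigation."""
    def __init__(self, grid_size=64, cell_m=0.1):
        self.topo = nx.Graph()                          # long-term: sparse viewpoint graph
        self.metric = np.zeros((grid_size, grid_size))  # short-term: local occupancy grid
        self.cell_m = cell_m

    def add_viewpoint(self, node_id, position, neighbor=None):
        self.topo.add_node(node_id, position=np.asarray(position, dtype=float))
        if neighbor is not None:
            dist = float(np.linalg.norm(self.topo.nodes[node_id]["position"]
                                        - self.topo.nodes[neighbor]["position"]))
            self.topo.add_edge(node_id, neighbor, weight=dist)

    def plan_long_term(self, start, goal):
        # Long-horizon planning runs on the topological graph.
        return nx.shortest_path(self.topo, start, goal, weight="weight")

    def update_local(self, obstacle_cells):
        # Short-horizon reasoning (e.g. local obstacle avoidance) uses the metric grid.
        for r, c in obstacle_cells:
            self.metric[r, c] = 1.0

# Toy usage: three viewpoints in a line, plan from the first to the last.
m = HybridMap()
m.add_viewpoint("v0", (0.0, 0.0))
m.add_viewpoint("v1", (2.0, 0.0), neighbor="v0")
m.add_viewpoint("v2", (4.0, 0.0), neighbor="v1")
print(m.plan_long_term("v0", "v2"))  # ['v0', 'v1', 'v2']
```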