Recent studies in Vision-and-Language Navigation (VLN) train RL agents to execute natural-language navigation instructions in photorealistic environments, as a step towards robots that can follow human instructions. However, given the scarcity of human instruction data and the limited diversity of training environments, these agents still struggle with complex language grounding and spatial language understanding. Pretraining on large text and image-text datasets from the web has been extensively explored, but the improvements are limited. We investigate large-scale augmentation with synthetic instructions. We take 500+ indoor environments captured in densely-sampled 360-degree panoramas, construct navigation trajectories through these panoramas, and generate a visually-grounded instruction for each trajectory using Marky, a high-quality multilingual navigation instruction generator. We also synthesize image observations from novel viewpoints using an image-to-image GAN. The resulting dataset of 4.2M instruction-trajectory pairs is two orders of magnitude larger than existing human-annotated datasets, and contains a wider variety of environments and viewpoints. To efficiently leverage data at this scale, we train a simple transformer agent with imitation learning. On the challenging RxR dataset, our approach outperforms all existing RL agents, improving the state-of-the-art NDTW from 71.1 to 79.1 in seen environments, and from 64.6 to 66.8 in unseen test environments. Our work points to a new path for improving instruction-following agents, emphasizing large-scale imitation learning and the development of synthetic instruction generation capabilities.
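Since the headline numbers above are nDTW scores, a small worked sketch of the metric may help. The following is a minimal, illustrative normalized Dynamic Time Warping computation between a predicted and a reference path, assuming Euclidean distances between viewpoint coordinates and the usual 3-meter success threshold; the function name and toy paths are ours, not from the paper's codebase.

```python
import numpy as np

def ndtw(prediction, reference, success_threshold=3.0):
    """Normalized Dynamic Time Warping between two paths.

    prediction, reference: arrays of shape (N, 3) / (M, 3) with viewpoint
    coordinates in meters. Returns a score in (0, 1]; higher is better.
    """
    pred = np.asarray(prediction, dtype=float)
    ref = np.asarray(reference, dtype=float)
    n, m = len(pred), len(ref)

    # dtw[i, j] = minimal accumulated cost aligning pred[:i] with ref[:j]
    dtw = np.full((n + 1, m + 1), np.inf)
    dtw[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(pred[i - 1] - ref[j - 1])
            dtw[i, j] = cost + min(dtw[i - 1, j], dtw[i, j - 1], dtw[i - 1, j - 1])

    # Normalize by reference length and the success threshold, then squash.
    return float(np.exp(-dtw[n, m] / (m * success_threshold)))

# Toy usage: a prediction that hugs the reference scores close to 1.
reference = [[0, 0, 0], [2, 0, 0], [4, 0, 0]]
prediction = [[0, 0.5, 0], [2, 0.4, 0], [4, 0.2, 0]]
print(round(ndtw(prediction, reference), 3))
```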
In Vision-and-Language Navigation (VLN), an embodied agent is required to follow natural language instructions to navigate in realistic 3D environments. A major bottleneck for existing VLN methods is the lack of sufficient training data, which leads to unsatisfactory generalization to unseen environments. While VLN data is typically collected manually, this approach is expensive and prevents scalability. In this work, we address the data-scarcity problem by proposing to automatically create a large-scale VLN dataset from 900 unlabeled 3D buildings from HM3D. We generate a navigation graph for each building and transfer 2D object predictions into pseudo 3D object labels via cross-view consistency. We then fine-tune a pretrained language model, using the pseudo object labels as prompts, to alleviate the cross-modal gap in instruction generation. The resulting HM3D-AutoVLN dataset is an order of magnitude larger than existing VLN datasets in terms of both navigation environments and instructions. We experimentally show that HM3D-AutoVLN significantly improves the generalization ability of the resulting VLN models. On the SPL metric, our approach improves over the state of the art by 7.1% and 8.1% on the unseen validation splits of the REVERIE and SOON datasets, respectively.
We study the automatic generation of navigation instructions from 360-degree images captured along indoor routes. Existing generators suffer from poor visual grounding, which causes them to rely on language priors and to hallucinate objects. Our Marky-MT5 system addresses this by focusing on visual landmarks; it comprises a first-stage landmark detector and a second-stage generator, a multimodal, multilingual, multitask encoder-decoder. To train it, we bootstrap grounded landmark annotations on top of the Room-across-Room (RxR) dataset. Using text parsers, weak supervision from RxR's pose traces, and a multilingual image-text encoder trained on 1.8B images, we identify 1.1M English, Hindi and Telugu landmark descriptions and ground them to specific regions in panoramas. On Room-to-Room, human wayfinders obtain a 71% success rate (SR) when following Marky-MT5's instructions, just shy of their 75% SR when following human instructions, and well above the SRs obtained with other generators. Evaluations on RxR's longer, more varied paths yield SRs of 61-64% across three languages. Generating such high-quality navigation instructions in novel environments is a step towards conversational navigation tools and can facilitate larger-scale training of instruction-following agents.
A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator, a large-scale reinforcement learning environment based on real imagery [11]. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings: the Room-to-Room (R2R) dataset (https://bringmeaspoon.org). Example instruction: "Head upstairs and walk past the piano through an archway directly in front. Turn right when the hallway ends at pictures and table. Wait by the moose antlers hanging on the wall."
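To make the agent-simulator interaction concrete, here is a toy episode loop in the style of a discrete panoramic navigation environment. The `PanoSimulatorStub` interface and the random-walk "agent" below are illustrative stand-ins of our own; they are not the actual Matterport3D Simulator API.

```python
import random

class PanoSimulatorStub:
    """Toy stand-in for a panoramic navigation simulator (not the real API).

    The state exposes the current viewpoint id and the viewpoints navigable
    from it; an action moves the agent to one of those adjacent viewpoints.
    """
    def __init__(self, nav_graph, start, goal):
        self.nav_graph = nav_graph   # viewpoint -> list of adjacent viewpoints
        self.viewpoint = start
        self.goal = goal

    def navigable(self):
        return self.nav_graph[self.viewpoint]

    def step(self, next_viewpoint):
        assert next_viewpoint in self.navigable()
        self.viewpoint = next_viewpoint
        return self.viewpoint == self.goal   # "done" when the goal is reached

def follow_instruction(sim, instruction, max_steps=10):
    """Random walker standing in for an instruction-conditioned policy; a real
    agent would ground `instruction` against the panoramic views at each step."""
    for _ in range(max_steps):
        action = random.choice(sim.navigable())
        if sim.step(action):
            return True
    return False

graph = {"hall": ["stairs"], "stairs": ["hall", "piano"], "piano": ["stairs"]}
sim = PanoSimulatorStub(graph, start="hall", goal="piano")
print(follow_instruction(sim, "Head upstairs and walk past the piano."))
```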
Understanding spatial and visual information is essential for a navigation agent that follows natural language instructions. Current transformer-based VLN agents entangle orientation and visual information, which limits the gains obtainable from each information source. In this paper, we design a neural agent with explicit orientation and vision modules. These modules learn to ground spatial information and landmark mentions in the instruction to the visual environment. To strengthen the agent's spatial reasoning and visual perception, we design dedicated pre-training tasks to feed and better utilize the corresponding modules in our final navigation model. We evaluate our approach on the Room-to-Room (R2R) and Room-for-Room (R4R) datasets and achieve state-of-the-art results on both benchmarks.
Vision-and-Language Navigation (VLN) is a task in which an agent follows language instructions to navigate to a target location, relying on continuous interaction with the environment while moving. Recent transformer-based VLN methods have made great progress by building direct connections between visual observations and language instructions via multimodal cross-attention. However, these methods usually represent temporal context as a fixed-length vector, either with an LSTM decoder or with a recurrent transformer built around manually designed hidden states. Considering that a single fixed-length vector is often insufficient to capture long-term temporal context, in this paper we introduce a Multimodal Transformer with Variable-length Memory (MTVM) that models the temporal context explicitly. Specifically, MTVM enables the agent to keep track of its navigation trajectory by directly storing previous activations in a memory bank. To further boost performance, we propose a memory-aware consistency loss that helps learn a better joint representation of temporal context together with randomly masked instructions. We evaluate MTVM on the popular R2R and CVDN datasets; our model improves the success rate on the R2R unseen validation and test splits by 2% and reduces the goal process by 1.6m on the CVDN test set.
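A rough sketch of the variable-length memory idea, assuming a PyTorch setting: the fused activation of each past step is appended to a growing bank, and the current step cross-attends over it. This is a simplified reading of the abstract, not the released MTVM code.

```python
import torch
import torch.nn as nn

class VariableLengthMemory(nn.Module):
    """Stores one activation per past step and lets the current state attend to them."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.bank = []  # list of (1, 1, dim) tensors, one per navigation step

    def reset(self):
        self.bank = []

    def forward(self, state):
        """state: (1, 1, dim) fused vision-language activation for the current step."""
        if self.bank:
            memory = torch.cat(self.bank, dim=1)   # (1, T, dim), T grows over time
            state, _ = self.cross_attn(state, memory, memory)
        self.bank.append(state.detach())           # store for later steps
        return state

mem = VariableLengthMemory()
for _ in range(5):                                  # a 5-step trajectory
    out = mem(torch.randn(1, 1, 256))
print(out.shape, len(mem.bank))                     # torch.Size([1, 1, 256]) 5
```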
Vision-and-Language Navigation (VLN) requires an agent to navigate through an environment based on language instructions. In this paper, we aim to address two key challenges in this task: utilizing multilingual instructions for improved instruction-path grounding, and navigating in new environments that are unseen during training. To tackle these challenges, we propose CLEAR: Cross-Lingual and Environment-Agnostic Representations. First, our agent learns a shared, visually-aligned cross-lingual representation for the three languages (English, Hindi and Telugu) in the Room-across-Room dataset. Our language representation learning is guided by text pairs that are aligned through visual information. Second, our agent learns an environment-agnostic visual representation by maximizing the similarity between semantically-aligned image pairs (subject to an object-matching constraint) from different environments. This environment-agnostic visual representation mitigates the environment bias induced by low-level visual information. Empirically, on the Room-across-Room dataset, we show that our multilingual agent makes large improvements over strong baseline models on all metrics when generalizing to unseen environments with the cross-lingual language representation and the environment-agnostic visual representation. Furthermore, we show that the learned language and visual representations can be successfully transferred to the Room-to-Room and Cooperative Vision-and-Dialogue Navigation tasks, and we present detailed qualitative and quantitative analyses of generalization and grounding. Our code is available at https://github.com/jialuli-luka/clear
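The objective of maximizing similarity between semantically-aligned image pairs from different environments is, in essence, a contrastive alignment loss. Below is a minimal sketch assuming an InfoNCE-style formulation over pre-extracted features; it is our simplification, not the released CLEAR implementation.

```python
import torch
import torch.nn.functional as F

def alignment_loss(feats_a, feats_b, temperature=0.07):
    """InfoNCE over semantically-aligned view pairs from two environments.

    feats_a[i] and feats_b[i] depict matching objects in different houses;
    the loss pulls matched rows together and pushes mismatched rows apart,
    discouraging features that encode environment-specific appearance.
    """
    a = F.normalize(feats_a, dim=-1)
    b = F.normalize(feats_b, dim=-1)
    logits = a @ b.t() / temperature     # (N, N) similarity matrix
    targets = torch.arange(len(a))       # diagonal entries are the positives
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

loss = alignment_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```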
Vision-and-language navigation requires an agent to follow natural language instructions to reach a specific goal. The large discrepancy between seen and unseen environments makes it challenging for the agent to generalize well. Previous studies propose data augmentation methods to mitigate the data bias explicitly or implicitly and to provide improvements in generalization. However, they try to memorize augmented trajectories and ignore the distribution shifts in unseen environments at test time. In this paper, we propose an Unseen Discrepancy Anticipating Vision and Language Navigation framework (DAVIS) that generalizes to unseen environments by encouraging test-time visual consistency. Specifically, we design: 1) a semi-supervised framework, DAVIS, that leverages visual consistency signals across similar semantic observations; and 2) a two-stage learning procedure that encourages adaptation to the test-time distribution. The framework enhances a basic mixture of imitation and reinforcement learning with momentum contrast to encourage stable decision-making on similar observations during a joint training stage and a test-time adaptation stage. Extensive experiments show that DAVIS achieves model-agnostic improvements over previous state-of-the-art VLN baselines on the R2R and RxR benchmarks. Our source code and data are included in the supplementary material.
Existing approaches for vision-and-language navigation (VLN) are mainly based on cross-modal reasoning over discrete views. However, this scheme may hamper an agent's spatial and numerical reasoning because of incomplete objects within a single view and duplicate observations across views. A potential solution is mapping discrete views into a unified bird's-eye view, which can aggregate partial and duplicate observations. Existing metric maps could achieve this goal, but they suffer from less expressive semantics (e.g. usually predefined labels) and limited map size, which weakens an agent's language grounding and long-term planning ability. Inspired by the robotics community, we introduce hybrid topo-metric maps into VLN, where a topological map is used for long-term planning and a metric map for short-term reasoning. Beyond mapping with more expressive deep features, we further design a pre-training framework via the hybrid map to learn language-informed map representations, which enhances cross-modal grounding and facilitates the final language-guided navigation goal. Extensive experiments demonstrate the effectiveness of the map-based route for VLN, and the proposed method sets the new state-of-the-art on three VLN benchmarks.
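To picture the hybrid map, here is a toy data structure under our own assumptions: a topological graph over visited viewpoints for long-term planning plus a small egocentric metric grid of deep features for short-term reasoning. Class and method names, grid sizes, and the max-pooling rule are illustrative, not taken from the paper.

```python
import numpy as np
import networkx as nx

class HybridMap:
    """Topological graph for planning + local metric feature grid for grounding."""
    def __init__(self, grid_size=32, cell_m=0.25, feat_dim=64):
        self.topo = nx.Graph()   # nodes: viewpoint ids, edges: traversability
        self.grid = np.zeros((grid_size, grid_size, feat_dim))  # egocentric metric map
        self.cell_m = cell_m
        self.grid_size = grid_size

    def add_viewpoint(self, vp_id, position, neighbors=()):
        self.topo.add_node(vp_id, position=position)
        for nb in neighbors:
            self.topo.add_edge(vp_id, nb)

    def write_local(self, xy_offsets_m, features):
        """Scatter per-point deep features into the egocentric grid."""
        c = self.grid_size // 2
        for (dx, dy), f in zip(xy_offsets_m, features):
            i, j = c + int(dx / self.cell_m), c + int(dy / self.cell_m)
            if 0 <= i < self.grid_size and 0 <= j < self.grid_size:
                self.grid[i, j] = np.maximum(self.grid[i, j], f)  # max-pool overlaps

    def plan(self, start, goal):
        return nx.shortest_path(self.topo, start, goal)

m = HybridMap()
m.add_viewpoint("a", (0, 0))
m.add_viewpoint("b", (2, 0), ["a"])
m.add_viewpoint("c", (4, 0), ["b"])
m.write_local([(0.5, 0.0)], [np.ones(64)])
print(m.plan("a", "c"))   # ['a', 'b', 'c']
```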
In this work, we study the problem of Embodied Referring Expression Grounding, where an agent needs to navigate in a previously unseen environment and localize a remote object described by a concise high-level natural language instruction. When facing such a situation, a human tends to imagine what the destination may look like and to explore the environment based on prior knowledge of environmental layouts, such as the fact that a bathroom is more likely to be found near a bedroom than near a kitchen. We have designed an autonomous agent called Layout-aware Dreamer (LAD) with two novel modules, the Layout Learner and the Goal Dreamer, to mimic this cognitive decision process. The Layout Learner learns to infer the room-category distribution of neighboring unexplored areas along the path for coarse layout estimation, which effectively introduces layout common sense about room-to-room transitions to our agent. To learn an effective exploration of the environment, the Goal Dreamer imagines the destination beforehand. Our agent achieves new state-of-the-art performance on the public leaderboard of the REVERIE dataset in challenging unseen test environments, improving navigation success (SR) by 4.02% and remote grounding success (RGS) by 3.43% over the previous state of the art. The code is released at https://github.com/zehao-wang/LAD
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
We present a scalable approach for learning open-world Object-Goal Navigation (ObjectNav), the task of asking a virtual robot (agent) to find any instance of an object in an unexplored environment (e.g., "find a sink"). Our approach is entirely zero-shot, i.e., it does not require ObjectNav rewards or demonstrations of any kind. Instead, we train on the Image-Goal Navigation (ImageNav) task, in which an agent must find the location where a picture (the goal image) was captured. Specifically, we encode goal images into a multimodal, semantic embedding space that allows semantic-goal navigation (SemanticNav) agents to be trained at scale in unannotated 3D environments (e.g., HM3D). After training, SemanticNav agents can be instructed to find objects described in free-form natural language (e.g., "sink", "bathroom sink", etc.) by projecting language goals into the same multimodal, semantic embedding space. As a result, our approach enables open-world ObjectNav. We extensively evaluate our agents on three ObjectNav datasets (Gibson, HM3D, and MP3D) and observe absolute improvements in success of 4.2% to 20.0%. For reference, these gains are similar to or better than the 5% improvement in success between the winners of the 2020 and 2021 ObjectNav challenges. In an open-world setting, we find that our agents can generalize to compound instructions with an explicitly mentioned room (e.g., "find a kitchen sink") and to cases where the target room can be inferred (e.g., "find a sink and a stove").
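The core mechanism described above, projecting image goals and language goals into one semantic embedding space, can be illustrated with an off-the-shelf CLIP-style encoder. The sketch below uses the open-source CLIP package purely as an example; the paper's actual encoder, training setup, and navigation policy are separate, and `encode_goal` is a name of our own.

```python
import torch
import clip  # https://github.com/openai/CLIP, used here only for illustration
from PIL import Image

device = "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def encode_goal(goal, kind):
    """Map either a goal image path or a free-form object description into the
    shared embedding space that the navigation policy is conditioned on."""
    with torch.no_grad():
        if kind == "image":
            img = preprocess(Image.open(goal)).unsqueeze(0).to(device)
            emb = model.encode_image(img)
        else:
            emb = model.encode_text(clip.tokenize([goal]).to(device))
    return emb / emb.norm(dim=-1, keepdim=True)

# Train-time goals come from images; at test time a language goal such as
# "a kitchen sink" is projected into the same space, enabling zero-shot ObjectNav.
text_goal = encode_goal("a kitchen sink", kind="text")
print(text_goal.shape)   # (1, 512) for ViT-B/32
```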
In this paper, we introduce a map-language navigation task in which an agent executes natural language instructions and moves to the target position based solely on a given 3D semantic map. To tackle the task, we design an Instruction-aware Path Proposal and Discrimination model (IPPD). Our approach leverages map information to provide instruction-aware path proposals, i.e., it selects all potential instruction-consistent candidate paths so as to reduce the solution space. Next, to represent map observations along a path for better modality alignment, a novel path feature encoding scheme tailored for semantic maps is proposed. An attention-based, language-driven discriminator evaluates the candidate paths and determines the best path as the final result. Compared with single-step greedy decision-making methods, our approach naturally avoids error accumulation. Compared with a single-step imitation learning method, IPPD gains more than 17% in navigation success and 0.18 on the path-matching metric nDTW in challenging unseen environments.
Speaker-follower models have proven to be effective in vision-and-language navigation, where a speaker model is used to synthesize new instructions to augment the training data of a follower navigation model. However, in many previous approaches, the generated instructions are not directly trained to optimize the follower's performance. In this paper, we present FOAM, a FOllower-Aware speaker Model that is constantly updated given follower feedback, so that the generated instructions can be better suited to the current learning state of the follower. Specifically, we optimize the speaker with a bi-level optimization framework, obtaining its training signals by evaluating the follower on labeled data. Experimental results on the Room-to-Room and Room-across-Room datasets show that our method can outperform strong baseline models across settings. Analyses also reveal that our generated instructions are of higher quality than the baselines'.
We study the problem of synthesizing immersive 3D indoor scenes from one or more images. Our aim is to generate high-resolution images and videos from novel viewpoints, including viewpoints that extrapolate far beyond the input images while maintaining 3D consistency. Existing approaches are highly complex, with many separately trained stages and components. We propose a simple alternative: an image-to-image GAN that maps directly from reprojections of incomplete point clouds to full high-resolution RGB-D images. On the Matterport3D and RealEstate10K datasets, our approach significantly outperforms prior work when evaluated by humans, as well as on FID scores. Further, we show that our model is useful for generative data augmentation. A vision-and-language navigation (VLN) agent trained with trajectories spatially perturbed by our model improves success rate by up to 1.5% over a state-of-the-art baseline on the R2R benchmark. Our code will be made available to facilitate generative data augmentation and applications to downstream robotics and embodied AI tasks.
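The GAN's input, a reprojection of an incomplete colored point cloud into a novel camera, is a standard pinhole projection with a z-buffer. A minimal sketch, assuming points are already expressed in the target camera frame and a simple intrinsics matrix; this is our illustration, not the paper's pipeline.

```python
import numpy as np

def reproject(points_xyz, colors, K, height, width):
    """Splat a colored point cloud into an RGB-D image seen from the target camera.

    points_xyz: (N, 3) points in the target camera frame (z forward, meters).
    colors:     (N, 3) RGB values in [0, 1].
    Returns an (H, W, 3) RGB image and an (H, W) depth map; pixels with no
    points stay empty and are exactly what the image-to-image GAN must fill in.
    """
    rgb = np.zeros((height, width, 3))
    depth = np.full((height, width), np.inf)
    uvw = (K @ points_xyz.T).T          # project with camera intrinsics
    z = uvw[:, 2]
    u = (uvw[:, 0] / z).astype(int)
    v = (uvw[:, 1] / z).astype(int)
    for ui, vi, zi, ci in zip(u, v, z, colors):
        if zi > 0 and 0 <= vi < height and 0 <= ui < width and zi < depth[vi, ui]:
            depth[vi, ui] = zi          # keep the nearest point (z-buffer)
            rgb[vi, ui] = ci
    return rgb, depth

K = np.array([[128, 0, 128], [0, 128, 128], [0, 0, 1]], dtype=float)
pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.1, 3.0]])
cols = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
rgb, depth = reproject(pts, cols, K, 256, 256)
print(np.isfinite(depth).sum(), "pixels covered")
```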
We study the problem of developing autonomous agents that can follow human instructions to infer and perform a sequence of actions that completes the underlying task. Significant progress has been made in recent years, especially for tasks with short horizons. However, when it comes to long-horizon tasks with extended action sequences, an agent can easily ignore some instructions or get stuck in the middle of a long instruction and ultimately fail the task. To address this challenge, we propose a model-agnostic Milestone-based Task Tracker (M-Track) to guide the agent and monitor its progress. Specifically, we propose a milestone builder that tags the instructions with navigation and interaction milestones that the agent needs to complete step by step, and a milestone checker that systematically checks the agent's progress on its current milestone and determines when to proceed to the next one. On the challenging ALFRED dataset, our M-Track leads to notable relative improvements of 33% and 52% in unseen success rate over two competitive base models.
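A toy rendition of the milestone idea, under our own assumptions: split the low-level instructions into navigation and interaction milestones, then advance a pointer only when the current milestone's target appears in the agent's state. The verb list, parsing rule, and class names are hypothetical simplifications, not the paper's implementation.

```python
INTERACT_VERBS = {"pick", "put", "open", "close", "rinse", "place", "slice", "toggle"}

def build_milestones(instructions):
    """Tag each low-level instruction as a navigation or interaction milestone."""
    milestones = []
    for sent in instructions:
        words = sent.lower().rstrip(".").split()
        kind = "interact" if words[0] in INTERACT_VERBS else "nav"
        milestones.append({"kind": kind, "text": sent, "target": words[-1]})
    return milestones

class MilestoneChecker:
    """Advance to the next milestone once its target shows up in the agent's state."""
    def __init__(self, milestones):
        self.milestones = milestones
        self.idx = 0

    def current(self):
        return self.milestones[self.idx] if self.idx < len(self.milestones) else None

    def update(self, visible_objects, holding=None):
        m = self.current()
        if m is None:
            return True                               # every milestone already completed
        if m["kind"] == "nav":
            done = m["target"] in visible_objects     # navigation: the target is now visible
        else:
            done = m["target"] == holding             # interaction: the target is in hand
        if done:
            self.idx += 1
        return self.idx >= len(self.milestones)

checker = MilestoneChecker(build_milestones(["Walk to the sink", "Pick up the mug"]))
checker.update(visible_objects={"sink"})
print(checker.current()["text"])                      # -> Pick up the mug
```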
We present ALFRED (Action Learning From Realistic Environments and Directives), a benchmark for learning a mapping from natural language instructions and egocentric vision to sequences of actions for household tasks. ALFRED includes long, compositional tasks with nonreversible state changes to shrink the gap between research benchmarks and real-world applications. ALFRED consists of expert demonstrations in interactive visual environments for 25k natural language directives. These directives contain both high-level goals like "Rinse off a mug and place it in the coffee maker." and low-level language instructions like "Walk to the coffee maker on the right." ALFRED tasks are more complex in terms of sequence length, action space, and language than existing vision-and-language task datasets. We show that a baseline model based on recent embodied vision-and-language tasks performs poorly on ALFRED, suggesting that there is significant room for developing innovative grounded visual language understanding models with this benchmark.
Vision-and-language navigation (VLN) tasks require an agent to navigate step by step while perceiving visual observations and comprehending natural language instructions. Severe data bias, caused by the disparity between the small data scale and the huge navigation space, makes the VLN task challenging. Previous works have proposed various data augmentation methods to reduce data bias. However, these works do not explicitly reduce the data bias across different house scenes. As a result, the agent overfits to the seen scenes and achieves poor navigation performance in unseen scenes. To tackle this problem, we propose the Random Environmental Mixup (REM) method, which generates cross-connected house scenes as augmented data by mixing up environments. Specifically, we first select key viewpoints according to the room-connection graph of each scene. Then, we cross-connect the key views of different scenes to construct augmented scenes. Finally, we generate augmented instruction-path pairs in the cross-connected scenes. Experimental results on benchmark datasets demonstrate that the augmented data produced by REM helps the agent reduce its performance gap between seen and unseen environments and improves overall performance, making our model the best existing approach on the standard VLN benchmark. The code has been released: https://github.com/lcfractal/vlnrem.
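The scene-mixing step can be pictured as an operation on two navigation graphs: pick one key viewpoint per house and add an edge between them, yielding a cross-connected environment from which new instruction-path pairs are sampled. A toy sketch with networkx; the highest-degree selection rule below is a stand-in of ours for the room-connection-based rule described above.

```python
import networkx as nx

def cross_connect(graph_a, graph_b, key_rule=None):
    """Merge two house navigation graphs by bridging one key viewpoint from each.

    key_rule picks the 'key viewpoint' per house; we default to the
    highest-degree node as a placeholder for the paper's selection rule.
    """
    key_rule = key_rule or (lambda g: max(g.degree, key=lambda kv: kv[1])[0])
    key_a, key_b = key_rule(graph_a), key_rule(graph_b)
    mixed = nx.union(graph_a, graph_b, rename=("A:", "B:"))   # keep node ids distinct
    mixed.add_edge(f"A:{key_a}", f"B:{key_b}")                # the cross-connection
    return mixed

house_a = nx.path_graph(["hall", "kitchen", "patio"])
house_b = nx.path_graph(["lobby", "stairs", "bedroom"])
mixed = cross_connect(house_a, house_b)
# Augmented paths can now traverse from one house into the other:
print(nx.shortest_path(mixed, "A:hall", "B:bedroom"))
```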
Vision-and-Language Navigation (VLN) is a challenging task in the field of artificial intelligence. Although massive progress has been made on this task over the past few years, attributable to breakthroughs in deep learning and language models, it remains difficult to build VLN models that can generalize as well as humans. In this paper, we provide a new perspective for improving VLN models. Based on our observation that snapshots of the same VLN model behave significantly differently even when their success rates are relatively similar, we propose a snapshot-based ensembling solution that leverages predictions across multiple snapshots. Built on snapshots of the existing state-of-the-art (SOTA) model ↻BERT together with our past-action-aware modification, the proposed ensemble achieves new SOTA performance in Navigation Error (NE) and Success weighted by Path Length (SPL).
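Mechanically, the ensemble amounts to combining per-action scores from several saved checkpoints of the same model before choosing an action. A minimal sketch, assuming each snapshot exposes the same action-logit interface; this is our illustration, not the released code.

```python
import torch

def ensemble_action(snapshots, observation, candidate_views):
    """Average action distributions over model snapshots and pick the argmax.

    snapshots: callables mapping (observation, candidates) -> logits of shape (n_candidates,)
    """
    probs = torch.zeros(len(candidate_views))
    for model in snapshots:
        with torch.no_grad():
            logits = model(observation, candidate_views)
        probs += torch.softmax(logits, dim=-1) / len(snapshots)
    return int(torch.argmax(probs))

# Toy usage with stand-in "snapshots" that are just functions returning logits.
snapshots = [lambda o, c: torch.tensor([0.2, 1.5, 0.1]),
             lambda o, c: torch.tensor([0.3, 0.9, 1.0])]
print(ensemble_action(snapshots, observation=None, candidate_views=["stop", "left", "right"]))
```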
Contrastive Language-Image Pretraining (CLIP) encoders have been shown to be beneficial for a range of visual tasks, from classification and detection to captioning and image manipulation. We investigate the effectiveness of CLIP visual backbones for Embodied AI tasks. We build incredibly simple baselines, named EmbCLIP, with no task-specific architectures, no inductive biases (such as the use of semantic maps), no auxiliary tasks during training, and no depth maps, yet we find that our improved baselines perform very well across a range of tasks and simulators. EmbCLIP tops the RoboTHOR ObjectNav leaderboard by a huge margin of 20 points (success rate). It tops the iTHOR 1-Phase Rearrangement leaderboard, beating the next best submission, which employs Active Neural Mapping, by more than doubling the % Fixed Strict metric (0.08 to 0.17). It also beats the winners of the 2021 Habitat ObjectNav Challenge, which employ auxiliary tasks, depth maps, and human demonstrations, as well as those of the 2019 Habitat PointNav Challenge. We evaluate the ability of CLIP's visual representations to capture semantic information about input observations, primitives that are useful for navigation-heavy embodied tasks, and find that CLIP's representations encode these primitives more effectively than ImageNet-pretrained backbones. Finally, we extend one of our baselines, producing an agent capable of zero-shot object navigation, which can navigate to objects that were not used as targets during training.