Existing approaches for vision-and-language navigation (VLN) are mainly based on cross-modal reasoning over discrete views. However, this scheme may hamper an agent's spatial and numerical reasoning because of incomplete objects within a single view and duplicate observations across views. A potential solution is to map discrete views into a unified bird's-eye view, which can aggregate partial and duplicate observations. Existing metric maps could achieve this goal, but they suffer from less expressive semantics (e.g., usually predefined labels) and limited map size, which weakens an agent's language grounding and long-term planning ability. Inspired by the robotics community, we introduce hybrid topo-metric maps into VLN, where a topological map is used for long-term planning and a metric map for short-term reasoning. Beyond mapping with more expressive deep features, we further design a pre-training framework over the hybrid map to learn language-informed map representations, which enhances cross-modal grounding and facilitates the final language-guided navigation goal. Extensive experiments demonstrate the effectiveness of the map-based route for VLN, and the proposed method sets a new state of the art on three VLN benchmarks.
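To make the map structure above concrete, here is a minimal sketch (not the authors' implementation) of how a hybrid topo-metric map could be organized: a topological graph over visited viewpoints for long-term planning, plus an egocentric metric grid of deep features for short-term reasoning. The class name, grid size, cell resolution, and feature dimension are all illustrative assumptions.

```python
import numpy as np

class HybridTopoMetricMap:
    """Illustrative hybrid map: a topological graph for long-term planning
    and a local metric feature grid for short-term reasoning (sizes are assumptions)."""

    def __init__(self, grid_size=64, cell_m=0.25, feat_dim=512):
        self.nodes = {}          # node_id -> {"pos": (x, y), "feat": np.ndarray}
        self.edges = {}          # node_id -> set of connected node_ids
        self.grid_size = grid_size
        self.cell_m = cell_m
        # Egocentric metric map of deep features instead of predefined labels.
        self.metric = np.zeros((grid_size, grid_size, feat_dim), dtype=np.float32)

    def add_node(self, node_id, pos, feat):
        self.nodes[node_id] = {"pos": pos, "feat": feat}
        self.edges.setdefault(node_id, set())

    def add_edge(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def write_metric(self, rel_xy, feat):
        """Aggregate a view feature into the metric grid (simple averaging on overlap)."""
        cx = int(self.grid_size / 2 + rel_xy[0] / self.cell_m)
        cy = int(self.grid_size / 2 + rel_xy[1] / self.cell_m)
        if 0 <= cx < self.grid_size and 0 <= cy < self.grid_size:
            self.metric[cy, cx] = 0.5 * self.metric[cy, cx] + 0.5 * feat

    def shortest_path(self, start, goal):
        """BFS over the topological graph, i.e. the long-term planning step."""
        frontier, parent = [start], {start: None}
        while frontier:
            cur = frontier.pop(0)
            if cur == goal:
                path = [cur]
                while parent[cur] is not None:
                    cur = parent[cur]
                    path.append(cur)
                return path[::-1]
            for nxt in self.edges.get(cur, ()):
                if nxt not in parent:
                    parent[nxt] = cur
                    frontier.append(nxt)
        return None
```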
Vision-and-language navigation (VLN) requires an embodied agent to follow natural-language instructions in realistic 3D environments. A major bottleneck of existing VLN methods is the lack of sufficient training data, which leads to unsatisfactory generalization to unseen environments. While VLN data are usually collected manually, this approach is expensive and prevents scalability. In this work, we address the data-scarcity problem by proposing to automatically create a large-scale VLN dataset from 900 unlabeled 3D buildings in HM3D. We generate a navigation graph for each building and transfer object predictions from 2D to pseudo-3D object labels via cross-view consistency. We then use the pseudo object labels as prompts to fine-tune a pretrained language model, alleviating the cross-modal gap in instruction generation. The resulting HM3D-AutoVLN dataset is an order of magnitude larger than existing VLN datasets in terms of both navigation environments and instructions. We experimentally show that HM3D-AutoVLN significantly improves the generalization ability of the resulting VLN models. On the SPL metric, our approach improves over the state of the art by 7.1% and 8.1% on the unseen validation splits of the REVERIE and SOON datasets, respectively.
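As a toy illustration of the cross-view consistency idea described above (not the HM3D-AutoVLN pipeline), one could keep a pseudo-3D object label only when the 2D predictions that project to the same 3D point agree across enough views. The function name, input format, and vote threshold are assumptions.

```python
from collections import Counter

def pseudo_3d_labels(view_detections, min_views=2):
    """view_detections: list of dicts mapping a 3D point id -> its predicted 2D label in one view.
    A label is kept only if enough views agree (cross-view consistency); values are assumptions."""
    votes = {}
    for det in view_detections:
        for point_id, label in det.items():
            votes.setdefault(point_id, []).append(label)
    labels = {}
    for point_id, ls in votes.items():
        label, count = Counter(ls).most_common(1)[0]
        if count >= min_views:          # require agreement across views
            labels[point_id] = label
    return labels
```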
Understanding spatial and visual information is essential for a navigation agent that follows natural-language instructions. Current transformer-based VLN agents entangle orientation and visual information, which limits the gains from learning each information source. In this paper, we design a neural agent with explicit orientation and vision modules. These modules learn to ground spatial information and landmark mentions in the instructions to the visual environment. To strengthen the agent's spatial reasoning and visual perception, we design specific pre-training tasks that feed into, and better exploit, the corresponding modules in our final navigation model. We evaluate our approach on the Room-to-Room (R2R) and Room-for-Room (R4R) datasets and achieve state-of-the-art results on both benchmarks.
In this work, we study the problem of Embodied Referring Expression Grounding, where an agent needs to navigate in a previously unseen environment and localize a remote object described by a concise high-level natural language instruction. When facing such a situation, a human tends to imagine what the destination may look like and to explore the environment based on prior knowledge of the environmental layout, such as the fact that a bathroom is more likely to be found near a bedroom than near a kitchen. We design an autonomous agent called Layout-aware Dreamer (LAD), with two novel modules, the Layout Learner and the Goal Dreamer, to mimic this cognitive decision process. The Layout Learner learns to infer the room category distribution of neighboring unexplored areas along the path for coarse layout estimation, which effectively introduces layout common sense about room-to-room transitions to our agent. To learn effective exploration of the environment, the Goal Dreamer imagines the destination beforehand. Our agent achieves new state-of-the-art performance on the public leaderboard of the REVERIE dataset in challenging unseen test environments, improving navigation success (SR) by 4.02% and remote grounding success (RGS) by 3.43% compared to the previous state of the art. The code is released at https://github.com/zehao-wang/LAD.
Vision-and-language navigation (VLN) is the task of following language instructions to navigate to a target location, which relies on continuous interaction with the environment during movement. Recent transformer-based VLN methods have made great progress by directly connecting visual observations and language instructions through a multimodal cross-attention mechanism. However, these methods usually represent temporal context as a fixed-length vector, either by using an LSTM decoder or by building a recurrent transformer with manually designed hidden states. Considering that a single fixed-length vector is often insufficient to capture long-term temporal context, in this paper we introduce a Multimodal Transformer with Variable-length Memory (MTVM) that models temporal context explicitly. Specifically, MTVM enables the agent to keep track of the navigation trajectory by directly storing previous activations in a memory bank. To further improve performance, we propose a memory-aware consistency loss that helps learn a better joint representation of temporal context with randomly masked instructions. We evaluate MTVM on the popular R2R and CVDN datasets; our model improves the success rate on the R2R unseen validation and test sets by 2% and reduces goal progress on the CVDN test set by 1.6 meters.
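A minimal sketch of the variable-length memory idea, assuming a PyTorch agent whose per-step cross-modal activation is a single vector: past activations are appended to a memory bank and cross-attended at every decision step. The class name, hidden size, and head count are illustrative; this is not the released MTVM code.

```python
import torch
import torch.nn as nn

class VariableLengthMemory(nn.Module):
    """Toy variable-length memory: past activations are appended to a memory bank
    and attended over at every decision step (dimensions are illustrative assumptions)."""

    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.memory = None  # (batch, num_past_steps, dim), grows with the trajectory

    def reset(self):
        self.memory = None  # call at the start of each episode

    def forward(self, state):
        # state: (batch, 1, dim), the current cross-modal activation of the agent
        if self.memory is not None:
            ctx, _ = self.attn(state, self.memory, self.memory)  # attend over the whole trajectory
            state = state + ctx
        new_mem = state.detach()
        self.memory = new_mem if self.memory is None else torch.cat([self.memory, new_mem], dim=1)
        return state
```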
Recent studies in Vision-and-Language Navigation (VLN) train RL agents to execute natural-language navigation instructions in photorealistic environments, as a step towards robots that can follow human instructions. However, given the scarcity of human instruction data and limited diversity in the training environments, these agents still struggle with complex language grounding and spatial language understanding. Pretraining on large text and image-text datasets from the web has been extensively explored but the improvements are limited. We investigate large-scale augmentation with synthetic instructions. We take 500+ indoor environments captured in densely-sampled 360 degree panoramas, construct navigation trajectories through these panoramas, and generate a visually-grounded instruction for each trajectory using Marky, a high-quality multilingual navigation instruction generator. We also synthesize image observations from novel viewpoints using an image-to-image GAN. The resulting dataset of 4.2M instruction-trajectory pairs is two orders of magnitude larger than existing human-annotated datasets, and contains a wider variety of environments and viewpoints. To efficiently leverage data at this scale, we train a simple transformer agent with imitation learning. On the challenging RxR dataset, our approach outperforms all existing RL agents, improving the state-of-the-art NDTW from 71.1 to 79.1 in seen environments, and from 64.6 to 66.8 in unseen test environments. Our work points to a new path to improving instruction-following agents, emphasizing large-scale imitation learning and the development of synthetic instruction generation capabilities.
In this paper, we introduce the map-language navigation task, in which an agent executes natural-language instructions and moves to the target position based only on a given 3D semantic map. To address the task, we design an instruction-aware path proposal and discrimination model (IPPD). Our approach leverages map information to provide instruction-aware path proposals, i.e., it selects all potential instruction-consistent candidate paths to reduce the solution space. Next, to represent map observations along a path for better modality alignment, a novel path feature encoding scheme tailored for semantic maps is proposed. An attention-based, language-driven discriminator is designed to evaluate the candidate paths and determine the best path as the final result. Compared with single-step greedy decision-making methods, our approach naturally avoids error accumulation. Compared with a single-step imitation learning method, IPPD achieves performance gains of more than 17% in navigation success and 0.18 on the path-matching metric nDTW in challenging unseen environments.
As frontier research aiming to pave the way for general-purpose robots, vision-and-language navigation (VLN) has been a hot topic in the computer vision and natural language processing communities. The VLN task requires an agent to navigate to a target position in an unfamiliar environment following natural-language instructions. Recently, transformer-based models have obtained significant improvements on VLN tasks, since the attention mechanism in the transformer architecture can better integrate intra-modal and inter-modal information of vision and language. However, two problems remain in current transformer-based models. 1) The models process each view independently, without considering the completeness of objects. 2) During the self-attention operation in the visual modality, spatially distant views can be intertwined with each other without explicit restriction, and such mixing may introduce extra noise instead of useful information. To address these problems, we propose 1) a slot-attention-based module to merge information from segmentations of the same object, and 2) a local attention mask mechanism that limits the visual attention span. The proposed modules can be easily plugged into any VLN architecture, and we use Recurrent VLN-BERT as our base model. Experiments on the R2R dataset show that our model achieves state-of-the-art results.
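The local attention mask is easy to illustrate. The sketch below is an assumption about how such a mask might be built, not the paper's code: it blocks attention between panorama views whose headings differ by more than a threshold, and the resulting boolean mask can be passed to a standard attention layer.

```python
import torch

def local_attention_mask(view_headings_deg, max_sep_deg=90.0):
    """Boolean mask for view-to-view self-attention: True marks pairs that should be
    blocked because their headings are farther apart than max_sep_deg (an assumed threshold)."""
    h = torch.as_tensor(view_headings_deg, dtype=torch.float32)
    diff = (h.unsqueeze(0) - h.unsqueeze(1)).abs() % 360.0
    diff = torch.minimum(diff, 360.0 - diff)   # wrap-around angular distance
    return diff > max_sep_deg                  # shape (num_views, num_views)

# Example: 12 panorama views spaced 30 degrees apart; the mask can be passed as
# attn_mask to torch.nn.MultiheadAttention so distant views cannot attend to each other.
mask = local_attention_mask([i * 30.0 for i in range(12)])
```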
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually grounded navigation instructions, we present the Matterport3D Simulator, a large-scale reinforcement learning environment based on real imagery [11]. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually grounded natural language navigation in real buildings: the Room-to-Room (R2R) dataset (https://bringmeaspoon.org). Example instruction from the dataset: "Head upstairs and walk past the piano through an archway directly in front. Turn right when the hallway ends at pictures and table. Wait by the moose antlers hanging on the wall."
The vision-and-language navigation (VLN) task requires an agent to navigate through an environment based on language instructions. In this paper, we aim to tackle two key challenges in this task: utilizing multilingual instructions for improved instruction-path grounding, and navigating in new environments that are unseen during training. To address these challenges, we propose CLEAR: Cross-Lingual and Environment-Agnostic Representations. First, our agent learns a shared, visually aligned cross-lingual representation for the three languages (English, Hindi, and Telugu) in the Room-Across-Room dataset. Our language representation learning is guided by text pairs that are aligned with visual information. Second, our agent learns an environment-agnostic visual representation by maximizing the similarity between semantically aligned image pairs (under object-matching constraints) from different environments. The environment-agnostic visual representation can mitigate the environment bias induced by low-level visual information. Empirically, on the Room-Across-Room dataset, we show that when generalizing to unseen environments with the cross-lingual language representation and the environment-agnostic visual representation, our multilingual agent makes large improvements over strong baseline models on all metrics. Furthermore, we show that our learned language and visual representations can be successfully transferred to the Room-to-Room and Cooperative Vision-and-Dialog Navigation tasks, and we present detailed qualitative and quantitative generalization and grounding analyses. Our code is available at https://github.com/jialuli-luka/clear
We study the problem of developing autonomous agents that can follow human instructions to infer and perform a sequence of actions to complete a grounded task. Significant progress has been made in recent years, especially for short-horizon tasks. However, when it comes to long-horizon tasks with extended action sequences, an agent can easily ignore some instructions or get stuck in the middle of a long instruction and eventually fail the task. To address this challenge, we propose a model-agnostic, milestone-based task tracker (M-Track) to guide the agent and monitor its progress. Specifically, we propose a milestone builder that tags instructions with navigation and interaction milestones the agent needs to complete step by step, and a milestone checker that systematically checks the agent's progress toward its current milestone and determines when to proceed to the next one. On the challenging ALFRED dataset, our M-Track leads to notable 33% and 52% relative improvements in unseen success rate over two competitive base models.
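A toy sketch of the milestone-builder/checker idea, assuming simple keyword tagging and a dictionary-like environment state; the real M-Track modules are learned, so every name and rule below is illustrative only.

```python
def build_milestones(instruction):
    """Toy milestone builder: split an instruction into navigation/interaction milestones.
    A real system would use a parser or a learned tagger; this keyword split is an assumption."""
    steps = [s.strip() for s in instruction.replace(", then", ".").split(".") if s.strip()]
    interact_verbs = ("pick", "put", "open", "close", "slice", "toggle")
    return [{"text": s,
             "type": "interact" if any(v in s.lower() for v in interact_verbs) else "navigate",
             "done": False}
            for s in steps]

def milestone_checker(milestones, state):
    """Return the current milestone; mark it done when the (assumed) environment state says
    the sub-goal is satisfied, so the agent only then moves on to the next milestone."""
    for m in milestones:
        if not m["done"]:
            if state.get(m["text"], False):   # e.g. detector / simulator feedback
                m["done"] = True
                continue
            return m
    return None  # all milestones completed
```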
Vision-and-language navigation requires an agent to follow natural-language instructions to reach a specific goal. The large gap between seen and unseen environments makes it challenging for the agent to generalize well. Previous studies propose data augmentation methods to mitigate data bias explicitly or implicitly and to improve generalization. However, they attempt to memorize augmented trajectories and ignore the distribution shift under unseen environments at test time. In this paper, we propose an unseen-discrepancy-anticipating vision-and-language navigation approach (DAVIS) that generalizes to unseen environments by encouraging test-time visual consistency. Specifically, we design: 1) a semi-supervised framework, DAVIS, that leverages visual-consistency signals across similar semantic observations, and 2) a two-stage learning procedure that encourages adaptation to the test-time distribution. The framework enhances a basic mixture of imitation and reinforcement learning with momentum contrast to encourage stable decision-making on similar observations during both the joint training stage and the test-time adaptation stage. Extensive experiments show that DAVIS achieves model-agnostic improvements over previous state-of-the-art VLN baselines on the R2R and RxR benchmarks. Our source code and data are included in the supplementary material.
This report presents the method of the winning entry of the RxR-Habitat Competition at CVPR 2022. The competition addresses the problem of vision-and-language navigation in continuous environments (VLN-CE), which requires an agent to follow step-by-step natural-language instructions to reach a goal. We present a modular plan-and-control approach for the task. Our model consists of three modules: a candidate waypoints predictor (CWP), a history-enhanced planner, and a tryout controller. In each decision loop, the CWP first predicts a set of candidate waypoints based on depth observations from multiple views; this reduces the complexity of the action space and facilitates planning. The history-enhanced planner is then adopted to select one of the candidate waypoints. The planner also encodes historical memory to track navigation progress, which is especially effective for long-horizon navigation. Finally, we propose a non-parametric heuristic controller named Tryout to execute low-level actions to reach the planned subgoal. It is based on a trial-and-error mechanism that helps the agent avoid obstacles and escape from getting stuck. All three modules work hierarchically until the agent stops. We further take advantage of recent advances in vision-and-language navigation (VLN) to improve performance, e.g., pre-training on a large-scale synthetic in-domain dataset, environment-level data augmentation, and snapshot model ensembling. Our model won the 2022 RxR-Habitat Competition, with relative improvements of 48% and 90% over existing methods on the NDTW and SR metrics, respectively.
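The hierarchical decision loop can be summarized as below. Every component is a stand-in callable under assumed interfaces, not the competition implementation; the sketch only shows how the waypoint predictor, planner, and Tryout-style controller would interact per step.

```python
def navigate(env, waypoint_predictor, planner, tryout_controller, instruction, max_steps=20):
    """High-level sketch of a modular plan-and-control loop; all components are
    hypothetical stand-ins with assumed interfaces, not the released system."""
    history = []
    for _ in range(max_steps):
        obs = env.observe()                                   # e.g. RGB-D panorama
        candidates = waypoint_predictor(obs["depth"])         # candidate waypoints from depth
        subgoal, stop = planner(instruction, obs, candidates, history)
        history.append((obs, subgoal))                        # planner tracks progress via history
        if stop:
            break
        tryout_controller(env, subgoal)                       # trial-and-error low-level control
```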
Recent approaches to object-goal navigation rely on reinforcement learning and typically require significant computational resources and learning time. We propose Potential functions for ObjectGoal Navigation with Interaction-free learning (PONI), a modular approach that disentangles the skills of "where to look?" for an object and "how to navigate to (x, y)?". Our key insight is that "where to look?" can be treated purely as a perception problem and learned without any environment interaction. To solve it, we propose a network that predicts two complementary potential functions on a semantic map and uses them to decide where to look for an unseen object. We train the potential-function network with supervised learning on a passive dataset of top-down semantic maps and integrate it into a modular framework to perform object-goal navigation. Experiments on Gibson and Matterport3D show that our method achieves state-of-the-art results for object-goal navigation while reducing training computation cost by up to 1,600x. Code and pre-trained models are available at https://vision.cs.utexas.edu/projects/poni/
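A minimal sketch in the spirit of the two complementary potential functions described above, assuming a one-hot top-down semantic map as input; the architecture, channel sizes, head names, and equal-weight combination are illustrative assumptions rather than the released PONI model.

```python
import torch
import torch.nn as nn

class PotentialFunctionNet(nn.Module):
    """Toy network: predict two potential maps from a top-down semantic map
    (architecture and sizes are assumptions for illustration)."""

    def __init__(self, num_classes=16, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(num_classes, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        self.area_head = nn.Conv2d(hidden, 1, 1)    # "how much unexplored area lies this way"
        self.object_head = nn.Conv2d(hidden, 1, 1)  # "how likely is the target object this way"

    def forward(self, semantic_map):
        h = self.encoder(semantic_map)               # (B, C, H, W) one-hot semantic map
        return torch.sigmoid(self.area_head(h)), torch.sigmoid(self.object_head(h))

# Deciding where to look: combine the two potentials and pick the best map cell.
net = PotentialFunctionNet()
area_pot, obj_pot = net(torch.zeros(1, 16, 64, 64))
combined = 0.5 * area_pot + 0.5 * obj_pot
goal_idx = combined.flatten(1).argmax(dim=1)  # flattened (y, x) index of the long-term goal
```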
In human environments, robots are expected to accomplish a variety of manipulation tasks given simple natural-language instructions. Yet robotic manipulation is extremely challenging, as it requires fine-grained motor control, long-term memory, and generalization to previously unseen tasks and environments. To address these challenges, we propose a unified transformer-based approach that takes multiple inputs into account. In particular, our transformer architecture integrates (i) natural-language instructions and (ii) multi-view scene observations while (iii) keeping track of the full history of observations and actions. This approach enables learning dependencies between history and instructions and improves manipulation precision by using multiple views. We evaluate our method on the challenging RLBench benchmark and on a real-world robot. Notably, our approach scales to 74 diverse RLBench tasks and outperforms the state of the art. We also address instruction-conditioned tasks and demonstrate excellent generalization to previously unseen variations.
Natural language provides an accessible and expressive interface for specifying long-horizon tasks for robotic agents. However, non-experts are likely to specify such tasks with high-level instructions, which abstract over specific robot actions through several layers of abstraction. We propose that the key to bridging this gap between language and robot actions over long execution horizons is persistent representations. We present a persistent spatial semantic representation method and show how it enables building an agent that performs hierarchical reasoning to effectively execute long-horizon tasks. We evaluate our approach on the ALFRED benchmark and achieve state-of-the-art results, despite completely avoiding the commonly used step-by-step instructions.
Current computer vision models, unlike the human visual system, cannot yet achieve general-purpose visual understanding. Existing efforts to create a general vision model are limited in the scope of assessed tasks and offer no overarching framework to perform them holistically. We present a new comprehensive benchmark, General-purpose Visual Understanding Evaluation (G-VUE), covering the full spectrum of visual cognitive abilities with four functional domains: Perceive, Ground, Reason, and Act. The four domains are embodied in 11 carefully curated tasks, from 3D reconstruction to visual reasoning and manipulation. Along with the benchmark, we provide a general encoder-decoder framework that allows the evaluation of arbitrary visual representations on all 11 tasks. We evaluate various pre-trained visual representations with our framework and observe that (1) Transformer-based visual backbones generally outperform CNN-based backbones on G-VUE, and (2) visual representations from vision-language pre-training are superior to those from vision-only pre-training across visual tasks. With G-VUE, we provide a holistic evaluation standard to motivate research toward building general-purpose visual systems via obtaining more general-purpose visual representations.
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequences, as compared to recurrent networks such as long short-term memory (LSTM). Unlike convolutional networks, Transformers require minimal inductive biases in their design and are naturally suited as set functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text, and speech) using similar processing blocks, and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of Transformer models in the computer vision discipline. We start with an introduction to the fundamental concepts behind the success of Transformers, i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of Transformers in vision, including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization), and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques, both in terms of architectural design and experimental value. Finally, we provide an analysis of open research directions and possible future work. We hope this effort will ignite further interest in the community to solve current challenges in applying Transformer models to computer vision.
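Since self-attention is named as the core concept behind these models, here is a minimal scaled dot-product self-attention in PyTorch; the tensor shapes and random projection matrices are purely illustrative.

```python
import torch

def self_attention(x, w_q, w_k, w_v):
    """Minimal scaled dot-product self-attention over a sequence x of shape (n, d_model);
    the projection matrices are plain tensors here for illustration."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5)   # pairwise token affinities
    weights = torch.softmax(scores, dim=-1)                    # each token attends to all tokens
    return weights @ v

x = torch.randn(10, 64)                      # 10 tokens, model dimension 64
w = [torch.randn(64, 64) for _ in range(3)]
out = self_attention(x, *w)                  # (10, 64): long-range dependencies in one step
```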
The vision-and-language navigation (VLN) task requires an agent to navigate step by step while perceiving visual observations and comprehending natural-language instructions. Severe data bias, caused by the disparity between the small data scale and the large navigation space, makes the VLN task challenging. Previous works have proposed various data augmentation methods to reduce data bias. However, these works do not explicitly reduce the data bias across different house scenes. Therefore, the agent tends to overfit to the seen scenes and achieves poor navigation performance in unseen scenes. To tackle this problem, we propose a Random Environmental Mixup (REM) method, which generates cross-connected house scenes as augmented data by mixing up environments. Specifically, we first select key viewpoints according to the room-connection graph of each scene. Then, we cross-connect the key views of different scenes to construct augmented scenes. Finally, we generate augmented instruction-path pairs in the cross-connected scenes. Experimental results on benchmark datasets demonstrate that our augmented data via REM helps the agent reduce the performance gap between seen and unseen environments and improves overall performance, making our model the best existing approach on the standard VLN benchmark. The code is released at https://github.com/lcfractal/vlnrem.
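A toy version of the cross-connection step, assuming each house is represented as a viewpoint adjacency dictionary; key-viewpoint selection is random here for brevity, whereas the paper selects them from room-connection graphs.

```python
import random

def cross_connect(graph_a, graph_b, key_a, key_b):
    """Toy cross-connection: merge two house connectivity graphs by adding an edge
    between one key viewpoint from each (node prefixes avoid id clashes)."""
    merged = {f"A:{n}": {f"A:{m}" for m in nbrs} for n, nbrs in graph_a.items()}
    merged.update({f"B:{n}": {f"B:{m}" for m in nbrs} for n, nbrs in graph_b.items()})
    merged[f"A:{key_a}"].add(f"B:{key_b}")   # bridge the two scenes at the key viewpoints
    merged[f"B:{key_b}"].add(f"A:{key_a}")
    return merged

# Example with tiny viewpoint graphs; in practice key viewpoints would come from
# room-connection analysis rather than random choice.
house_a = {"a1": {"a2"}, "a2": {"a1"}}
house_b = {"b1": {"b2"}, "b2": {"b1"}}
augmented = cross_connect(house_a, house_b, key_a=random.choice(list(house_a)), key_b="b1")
```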