We introduce The House Of inteRactions (THOR), a framework for visual AI research, available at http://ai2thor.allenai.org. AI2-THOR consists of near photo-realistic 3D indoor scenes in which AI agents can navigate and interact with objects to perform tasks. AI2-THOR enables research in many different domains, including but not limited to deep reinforcement learning, imitation learning, learning by interaction, planning, visual question answering, unsupervised representation learning, object detection and segmentation, and models of cognition. The goal of AI2-THOR is to facilitate building visually intelligent models and to push research forward in this domain.
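To give a concrete sense of the interaction loop AI2-THOR exposes, below is a minimal sketch using the publicly released `ai2thor` Python package. The scene name and action strings are illustrative, and exact constructor arguments can differ between package versions, so treat this as a sketch rather than a definitive usage example.

```python
from ai2thor.controller import Controller

# Start the simulator in one of the indoor scenes (scene name is illustrative).
controller = Controller(scene="FloorPlan1")

# Step the agent and inspect the returned event metadata.
event = controller.step(action="MoveAhead")
print(event.metadata["agent"]["position"])

# Interact further with the environment, e.g. rotate the agent, then shut down.
controller.step(action="RotateRight")
controller.stop()
```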
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
There is an emerging paradigm shift from the era of "Internet AI" to "Embodied AI", where AI algorithms and agents no longer learn from datasets of images, videos, or text curated primarily from the internet. Instead, they learn through interactions with their environments from an egocentric perception similar to humans. Consequently, there has been substantial growth in the demand for embodied AI simulators to support a diversity of embodied AI research tasks. This growing interest in embodied AI is beneficial to the greater pursuit of artificial general intelligence (AGI), but there has not been a contemporary and comprehensive survey of this field. This paper aims to provide an encyclopedic survey of the field of embodied AI, from its simulators to its research. By evaluating nine current embodied AI simulators with our proposed seven features, this paper aims to understand the simulators in their use for embodied AI research and their limitations. Finally, this paper surveys the three main research tasks in embodied AI: visual exploration, visual navigation, and embodied question answering (QA), covering the state-of-the-art approaches, evaluation metrics, and datasets. With the new insights gleaned from surveying the field, the paper provides recommendations on simulator-to-task selection and suggestions on future directions for the field.
The way an object looks and sounds provides complementary reflections of its physical properties. In many settings, cues from vision and audition arrive asynchronously but must be integrated, as when we hear an object dropped on the floor and then must go find it. In this paper, we introduce a setting for studying multi-modal object localization in 3D virtual environments. An object is dropped somewhere in a room. An embodied robot agent, equipped with a camera and microphone, must determine what object has been dropped and where, by combining audio and visual signals with knowledge of the underlying physics. To study this problem, we have generated a large-scale dataset, the Fallen Objects dataset, which includes 8000 instances of 30 physical object categories in 64 rooms. The dataset uses the ThreeDWorld platform, which can simulate physics-based impact sounds and complex physical interactions between objects in photorealistic settings. As a first step toward addressing this challenge, we develop a set of embodied agent baselines based on imitation learning, reinforcement learning, and modular planning, and perform an in-depth analysis of the challenges of this new task.
We introduce ThreeDWorld (TDW), a platform for interactive multi-modal physical simulation. TDW enables the simulation of high-fidelity sensory data and physical interactions between mobile agents and objects in rich 3D environments. Unique properties include: real-time near-photo-realistic image rendering; a library of objects and environments, and routines for their customization; generative procedures for efficiently building classes of new environments; high-fidelity audio rendering; realistic physical interactions for a variety of material types, including cloths, liquids, and deformable objects; customizable agents that embody AI agents; and support for human interaction with VR devices. TDW's API enables multiple agents to interact within a simulation and returns a range of sensor and physics data representing the state of the world. We present initial experiments enabled by TDW in emerging research directions in computer vision, machine learning, and cognitive science, including multi-modal physical scene understanding, physical dynamics prediction, multi-agent interaction, models that learn like a child, and attention studies in humans and neural networks.
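As a rough illustration of the command-based API described above, the sketch below uses the `tdw` Python package to create an empty room and then shut the build down. The helper names follow the public TDW documentation, but exact signatures may vary between releases, so treat this as an assumption-laden sketch.

```python
from tdw.controller import Controller
from tdw.tdw_utils import TDWUtils

c = Controller()  # launches the TDW build and connects to it

# Commands are JSON-like dictionaries; the controller sends a list per frame
# and receives the resulting sensor/physics data in the response.
resp = c.communicate([TDWUtils.create_empty_room(12, 12)])

c.communicate({"$type": "terminate"})  # shut down the build
```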
Massive datasets and high-capacity models have driven many recent advances in computer vision and natural language understanding. This work presents a platform to enable similar success stories in Embodied AI. We propose ProcTHOR, a framework for the procedural generation of Embodied AI environments. ProcTHOR enables us to sample arbitrarily large datasets of diverse, interactive, customizable, and performant virtual environments to train and evaluate embodied agents across navigation, interaction, and manipulation tasks. We demonstrate the power and potential of ProcTHOR via a sample of 10,000 generated houses and a simple neural model. Models trained using only RGB images on ProcTHOR, with no explicit mapping and no human task supervision, produce state-of-the-art results across 6 Embodied AI benchmarks for navigation, rearrangement, and arm manipulation, including the currently running Habitat 2022, AI2-THOR Rearrangement 2022, and RoboTHOR challenges. We also show that pre-training on ProcTHOR, with no fine-tuning on the downstream benchmark, often beats previous state-of-the-art systems that have access to the downstream training data.
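The generated houses are described as directly loadable by AI2-THOR. Below is a minimal sketch of loading one house from the released ProcTHOR-10K sample; the `prior` package and the dataset name follow the authors' public release notes but should be treated as assumptions here, and the API may differ across versions.

```python
import prior
from ai2thor.controller import Controller

# Load the released sample of procedurally generated houses (dataset name assumed).
dataset = prior.load_dataset("procthor-10k")
house = dataset["train"][0]          # one house specification

# Recent AI2-THOR versions can instantiate such a house directly as a scene.
controller = Controller(scene=house)
event = controller.step(action="MoveAhead")
```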
We present Habitat, a platform for research in embodied artificial intelligence (AI). Habitat enables training embodied agents (virtual robots) in highly efficient photorealistic 3D simulation. Specifically, Habitat consists of: (i) Habitat-Sim: a flexible, high-performance 3D simulator with configurable agents, sensors, and generic 3D dataset handling. Habitat-Sim is fast: when rendering a scene from Matterport3D, it achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU. (ii) Habitat-API: a modular high-level library for end-to-end development of embodied AI algorithms, defining tasks (e.g. navigation, instruction following, question answering), and configuring, training, and benchmarking embodied agents. These large-scale engineering contributions enable us to answer scientific questions requiring experiments that were till now impracticable or 'merely' impractical. Specifically, in the context of point-goal navigation: (1) we revisit the comparison between learning and SLAM approaches from two recent works [20,16] and find evidence for the opposite conclusion: that learning outperforms SLAM if scaled to an order of magnitude more experience than previous investigations, and (2) we conduct the first cross-dataset generalization experiments {train, test} × {Matterport3D, Gibson} for multiple sensors {blind, RGB, RGBD, D} and find that only agents with depth (D) sensors generalize across datasets. We hope that our open-source platform and these findings will advance research in embodied AI.
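For reference, the sketch below shows the kind of minimal PointNav loop Habitat-API is built around, in the style of the project's public examples. The configuration path is an assumption and differs across habitat-lab releases; this is a sketch, not the definitive API.

```python
import habitat

# Configuration path is illustrative; it varies between habitat versions.
config = habitat.get_config("configs/tasks/pointnav.yaml")
env = habitat.Env(config=config)

observations = env.reset()
while not env.episode_over:
    # A random agent; a real agent would map observations to actions.
    observations = env.step(env.action_space.sample())
env.close()
```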
Training embodied agents in simulation has become mainstream for the embodied AI community. However, these agents often struggle when deployed in the physical world due to their inability to generalize to real-world environments. In this paper, we present Phone2Proc, a method that uses a 10-minute phone scan and conditional procedural generation to create a distribution of training scenes that are semantically similar to the target environment. The generated scenes are conditioned on the wall layout and arrangement of large objects from the scan, while also sampling lighting, clutter, surface textures, and instances of smaller objects with randomized placement and materials. Leveraging just a simple RGB camera, training with Phone2Proc shows massive improvements from 34.7% to 70.7% success rate in sim-to-real ObjectNav performance across a test suite of over 200 trials in diverse real-world environments, including homes, offices, and RoboTHOR. Furthermore, Phone2Proc's diverse distribution of generated scenes makes agents remarkably robust to changes in the real world, such as human movement, object rearrangement, lighting changes, or clutter.
We introduce Habitat 2.0 (H2.0), a simulation platform for training virtual robots in interactive 3D environments and complex physics-enabled scenarios. We make comprehensive contributions to all levels of the embodied AI stack: data, simulation, and benchmark tasks. Specifically, we present: (i) ReplicaCAD: an artist-authored, annotated, reconfigurable 3D dataset of apartments (matching real spaces) with articulated objects (e.g., cabinets and drawers that can open/close); (ii) H2.0: a high-performance physics-enabled 3D simulator with speeds exceeding 25,000 simulation steps per second (850x real-time) on an 8-GPU node, representing a 100x speed-up over prior work; and (iii) the Home Assistant Benchmark (HAB): a suite of common tasks for assistive robots (tidy the house, prepare groceries, set the table) that tests a range of mobile manipulation capabilities. These large-scale engineering contributions allow us to systematically compare deep reinforcement learning (RL) at scale and classical sense-plan-act (SPA) pipelines in long-horizon structured tasks, with an emphasis on generalization to new objects, receptacles, and layouts. We find that (1) flat RL policies struggle on HAB compared to hierarchical ones; (2) a hierarchy with independent skills suffers from 'hand-off problems'; and (3) SPA pipelines are more brittle than RL policies.
Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new target goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to apply to real-world scenarios. In this paper, we address these two issues and apply our model to the task of target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows for better generalization. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and across scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), and (4) is end-to-end trainable and does not need feature engineering, feature matching between frames, or 3D reconstruction of the environment. The supplementary video can be accessed at the following link: https://youtu.be/SmBxMDiOrvs.
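The key modeling idea, a policy that conditions on the goal observation as well as the current observation, can be sketched as below. This is a generic PyTorch illustration of a goal-conditioned actor-critic, not the paper's exact siamese architecture, and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class GoalConditionedActorCritic(nn.Module):
    """Sketch of a policy whose output depends on both the current state and the goal."""
    def __init__(self, feat_dim=2048, hidden=512, num_actions=4):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(2 * feat_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, num_actions)   # action logits
        self.critic = nn.Linear(hidden, 1)             # state-value estimate

    def forward(self, state_feat, goal_feat):
        # Concatenate current-observation and goal-observation features, then
        # produce both the policy logits and the value from the shared embedding.
        h = self.fuse(torch.cat([state_feat, goal_feat], dim=-1))
        return self.actor(h), self.critic(h)
```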
We provide a multi-modal (vision and language) benchmark for cooperative and heterogeneous multi-agent learning. We introduce a benchmark multi-modal dataset with tasks involving collaboration between multiple simulated heterogeneous robots in a rich multi-room environment. We provide an integrated learning framework, multi-modal implementations of state-of-the-art multi-agent reinforcement learning techniques, and a consistent evaluation protocol. Our experiments investigate the impact of different modalities on multi-agent learning performance. We also introduce a simple message passing method between the agents. The results suggest that multi-modality introduces unique challenges for cooperative multi-agent learning, and there is significant room for advancing multi-agent reinforcement learning methods in such settings.
A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually grounded navigation instructions, we present the Matterport3D Simulator, a large-scale reinforcement learning environment based on real imagery [11]. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings: the Room-to-Room (R2R) dataset (https://bringmeaspoon.org). Example instruction: "Head upstairs and walk past the piano through an archway directly in front. Turn right when the hallway ends at pictures and table. Wait by the moose antlers hanging on the wall."
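Interpreting instruction following as visually grounded sequence-to-sequence translation can be sketched as follows. This is a deliberately simplified PyTorch illustration (no attention over instruction words); the dimensions and action inventory are assumptions, not the benchmark's reference model.

```python
import torch
import torch.nn as nn

class Seq2SeqFollower(nn.Module):
    """Sketch: encode the language instruction, then decode actions step by step,
    conditioned on the current visual feature and the previous action."""
    def __init__(self, vocab=1000, emb=256, img_feat=2048, hidden=512, num_actions=6):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTMCell(img_feat + num_actions, hidden)
        self.out = nn.Linear(hidden, num_actions)

    def forward(self, instr_tokens, img_feats, prev_actions):
        # instr_tokens: (B, L) word ids; img_feats: (B, T, img_feat);
        # prev_actions: (B, T, num_actions) one-hot of the previous action.
        _, (h, c) = self.encoder(self.embed(instr_tokens))
        h, c = h.squeeze(0), c.squeeze(0)
        logits = []
        for t in range(img_feats.size(1)):
            inp = torch.cat([img_feats[:, t], prev_actions[:, t]], dim=-1)
            h, c = self.decoder(inp, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)  # (B, T, num_actions)
```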
We present MIDGARD, an open-source simulation platform for autonomous robot navigation in unstructured outdoor environments. MIDGARD is designed to enable the training of autonomous agents (e.g., unmanned ground vehicles) in photorealistic 3D environments, and to support the generalization capabilities of learning-based agents through variability in the training scenarios. MIDGARD's main features include a configurable, extensible, and difficulty-driven procedural landscape generation pipeline, with fast and photorealistic scene rendering based on Unreal Engine. Moreover, MIDGARD has built-in support for OpenAI Gym, a programming interface for feature extension (e.g., integrating new types of sensors, customizing and exposing internal simulation variables), and a variety of simulated agent sensors (e.g., RGB, depth, and instance/semantic segmentation). We evaluate MIDGARD's capabilities as a benchmarking tool for robot navigation using a set of state-of-the-art reinforcement learning algorithms. The results demonstrate MIDGARD's suitability as a simulation and training environment, as well as the effectiveness of our procedural generation approach in controlling scene difficulty, which directly reflects on accuracy metrics. MIDGARD builds, source code, and documentation are available at https://midgardsim.org/.
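Since MIDGARD advertises built-in OpenAI Gym support, an agent training loop would look like a standard Gym episode loop, sketched below. The environment id is hypothetical, not MIDGARD's actual registered name; consult the MIDGARD documentation for the real identifiers.

```python
import gym

# Hypothetical environment id for illustration only.
env = gym.make("Midgard-Navigation-v0")
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()   # replace with a trained policy
    obs, reward, done, info = env.step(action)
env.close()
```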
Recent work on audio-visual navigation targets a single static sound in noise-free audio environments and struggles to generalize to unheard sounds. We introduce a novel dynamic audio-visual navigation benchmark in which an embodied AI agent must catch a moving sound source in an unmapped environment in the presence of distractors and noisy sounds. We propose an end-to-end reinforcement learning approach that relies on a multi-modal architecture fusing spatial audio-visual information from binaural audio signals and spatial occupancy maps to encode the features needed to learn a robust navigation policy for this new complex task setting. We demonstrate that our approach outperforms the current state of the art, with better generalization to unheard sounds and better robustness to noisy scenarios, on the two challenging 3D scanned real-world datasets Replica and Matterport3D, for both static and dynamic audio-visual navigation benchmarks. Our novel benchmark will be made available at http://dav-nav.cs.uni-freiburg.de.
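The fusion idea, encoding binaural audio and a spatial occupancy map separately and combining them into the recurrent policy state, can be sketched as below. This is a generic PyTorch illustration; channel counts, layer sizes, and the exact fusion scheme are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class AudioVisualFusionEncoder(nn.Module):
    """Sketch: encode a binaural spectrogram and a local occupancy map separately,
    then fuse them into a single embedding for a recurrent navigation policy."""
    def __init__(self, embed_dim=256):
        super().__init__()
        def conv_branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.audio = conv_branch(2)      # 2 channels: left/right spectrograms
        self.map = conv_branch(2)        # 2 channels: occupancy + explored mask
        self.fuse = nn.Linear(64 + 64, embed_dim)
        self.policy_rnn = nn.GRUCell(embed_dim, embed_dim)

    def forward(self, spectrogram, occupancy, hidden):
        # Encode each modality, concatenate, and update the recurrent policy state.
        x = torch.cat([self.audio(spectrogram), self.map(occupancy)], dim=-1)
        return self.policy_rnn(torch.relu(self.fuse(x)), hidden)
```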
Training effective embodied AI agents often involves manual reward engineering, expert imitation, specialized components such as maps, or leveraging additional sensors for depth and localization. Another approach is to use neural architectures alongside self-supervised objectives which encourage better representation learning. In practice, there are few guarantees that these self-supervised objectives encode task-relevant information. We propose the Scene Graph Contrastive (SGC) loss, which uses scene graphs as general-purpose, training-only, supervisory signals. The SGC loss does away with explicit graph decoding and instead uses contrastive learning to align an agent's representation with a rich graphical encoding of its environment. The SGC loss is generally applicable, simple to implement, and encourages representations that encode objects' semantics, relationships, and history. Using the SGC loss, we attain significant gains on three embodied tasks: Object Navigation, Multi-Object Navigation, and Arm Point Navigation. Finally, we present studies and analyses which demonstrate the ability of our trained representation to encode semantic cues about the environment.
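The general idea of aligning an agent's representation with a graphical encoding of its environment via contrastive learning can be illustrated with an InfoNCE-style objective, sketched below. This is not the exact SGC loss from the paper; the temperature and batch-negatives scheme are assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(agent_emb, graph_emb, temperature=0.1):
    """Sketch of an InfoNCE-style objective: each agent representation should be
    most similar to the scene-graph encoding of its own environment state,
    relative to the graph encodings of the other items in the batch."""
    a = F.normalize(agent_emb, dim=-1)           # (B, D)
    g = F.normalize(graph_emb, dim=-1)           # (B, D)
    logits = a @ g.t() / temperature             # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)
```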
Recent research in embodied AI has been boosted by the use of simulation environments to develop and train robot learning approaches. However, the use of simulation has skewed attention toward tasks that only require what robotics simulators can simulate: motion and physical contact. We present iGibson 2.0, an open-source simulation environment that supports the simulation of a more diverse set of household tasks through three key innovations. First, iGibson 2.0 supports object states, including temperature, wetness level, cleanliness level, and toggled and sliced states, to cover a wider range of tasks. Second, iGibson 2.0 implements a set of predicate logic functions that map the simulator state to logic states such as Cooked or Soaked. Additionally, given a logic state, iGibson 2.0 can sample valid physical states that satisfy it. This functionality can generate potentially infinite instances of tasks with minimal effort from the users. The sampling mechanism allows our scenes to be more densely populated with small objects in semantically meaningful locations. Third, iGibson 2.0 includes a virtual reality (VR) interface to immerse humans in its scenes to collect demonstrations. As a result, we can collect demonstrations from humans on these new types of tasks and use them for imitation learning. We evaluate the new capabilities of iGibson 2.0 to enable robot learning of novel tasks, in the hope of demonstrating the potential of this new simulator to support new research in embodied AI. iGibson 2.0 and its new dataset are publicly available at http://svl.stanford.edu/igibson/.
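The predicate-logic mechanism, mapping continuous simulator state to binary logic states such as Cooked or Soaked, can be illustrated with the toy functions below. The field names and thresholds are invented for illustration and are not iGibson 2.0's actual implementation.

```python
# Illustrative only: iGibson 2.0's real predicates live inside the simulator;
# these names and thresholds are assumptions made for the sake of example.
COOKED_TEMPERATURE_THRESHOLD = 70.0  # degrees Celsius (hypothetical value)

def is_cooked(obj_state: dict) -> bool:
    """Map a continuous simulator state to the binary logic state 'Cooked'."""
    return obj_state.get("max_temperature", float("-inf")) >= COOKED_TEMPERATURE_THRESHOLD

def is_soaked(obj_state: dict) -> bool:
    """Map accumulated absorbed liquid to the binary logic state 'Soaked'."""
    return obj_state.get("absorbed_water", 0.0) > 0.5
```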
Learning-based approaches for training embodied agents typically require a large number of high-quality scenes that contain realistic layouts and support meaningful interactions. However, current simulators for Embodied AI (EAI) challenges only provide simulated indoor scenes with a limited number of layouts. This paper presents Luminous, the first research framework that employs state-of-the-art indoor scene synthesis algorithms to generate large-scale simulated scenes for Embodied AI challenges. Further, we automatically and quantitatively evaluate the quality of the generated indoor scenes via their ability to support complex household tasks. Luminous incorporates a novel scene generation algorithm, Constrained Stochastic Scene Generation (CSSG), which achieves competitive performance with human-designed scenes. Within Luminous, the EAI task executor, the task instruction generation module, and the video rendering toolkit can collectively generate massive multi-modal datasets of new scenes for the training and evaluation of Embodied AI agents. Extensive experimental results demonstrate the effectiveness of the data generated by Luminous, enabling a comprehensive evaluation of embodied agents on generalization and robustness.
Robots that assist us in environments such as factories or homes must learn to use objects as tools to perform tasks, for instance, using a tray to carry objects. We consider the problem of learning commonsense knowledge of when a tool may be useful and how its use may be composed with other tools to accomplish a high-level task instructed by a human. Specifically, we introduce a novel neural model, termed ToolTango, that first predicts the next tool to be used and then uses this information to predict the next action. We show that this joint model can inform the learning of a fine-grained policy, enabling the robot to use particular tools in sequence, and adds significant value in making the model more accurate. ToolTango encodes the world state, comprising objects and the symbolic relationships between them, using a graph neural network, and is trained using demonstrations from human teachers who instruct a virtual robot in a physics simulator. The model learns to attend over the scene using knowledge of the goal and the action history, finally decoding the symbolic action to execute. Crucially, we address generalization to unseen environments where some known tools are missing but other unseen tools are present. We show that by augmenting the representation of the environment with pre-trained embeddings derived from a knowledge base, the model can generalize effectively to novel environments. Experimental results show at least a 48.8-58.1% absolute improvement over the baselines in predicting successful symbolic plans for a simulated mobile manipulator in novel environments with unseen objects. This work takes a step toward enabling robots to rapidly synthesize robust plans for complex tasks, particularly in novel settings.
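The tool-then-action factorization described above can be sketched with the small PyTorch head below: the state encoding first scores candidate tools, and the chosen tool then conditions the action prediction. The encoder (a graph neural network in the paper), the vocabulary sizes, and the greedy tool choice here are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class ToolThenActionHead(nn.Module):
    """Sketch of a two-stage head: first predict which tool to use next from the
    encoded world state, then predict the next action conditioned on both the
    state encoding and the chosen tool."""
    def __init__(self, state_dim=256, num_tools=20, num_actions=50):
        super().__init__()
        self.tool_head = nn.Linear(state_dim, num_tools)
        self.tool_embed = nn.Embedding(num_tools, 64)
        self.action_head = nn.Linear(state_dim + 64, num_actions)

    def forward(self, state_enc):
        tool_logits = self.tool_head(state_enc)                  # which tool next?
        tool = tool_logits.argmax(dim=-1)                        # greedy choice at inference
        action_in = torch.cat([state_enc, self.tool_embed(tool)], dim=-1)
        return tool_logits, self.action_head(action_in)          # action conditioned on tool
```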
We present ALFRED (Action Learning From Realistic Environments and Directives), a benchmark for learning a mapping from natural language instructions and egocentric vision to sequences of actions for household tasks. ALFRED includes long, compositional tasks with non-reversible state changes to shrink the gap between research benchmarks and real-world applications. ALFRED consists of expert demonstrations in interactive visual environments for 25k natural language directives. These directives contain both high-level goals like "Rinse off a mug and place it in the coffee maker." and low-level language instructions like "Walk to the coffee maker on the right." ALFRED tasks are more complex in terms of sequence length, action space, and language than existing vision-and-language task datasets. We show that a baseline model based on recent embodied vision-and-language tasks performs poorly on ALFRED, suggesting that there is significant room for developing innovative grounded visual language understanding models with this benchmark.
In this paper, VisualEnv, a new tool for creating visual environments for reinforcement learning, is introduced. It is the product of integrating Blender, an open-source modeling and rendering software, with a Python module for generating simulation environment models. VisualEnv allows the user to create custom environments with photorealistic rendering features, fully integrated with Python. The framework is described and tested on a series of example problems that showcase its capabilities for training reinforcement learning agents.