We propose a domain adaptation method, MoDA, which adapts a pretrained embodied agent to a new, noisy environment without ground-truth supervision. Map-based memory provides important contextual information for visual navigation and exhibits unique spatial structure, composed mainly of flat walls and rectangular obstacles. Our adaptation approach leverages these inherent regularities in the estimated maps to guide the agent past the prevalent domain discrepancies in a novel environment. Specifically, we propose an efficient learning curriculum that handles the visual and dynamics corruptions in an online manner, self-supervised with pseudo-clean maps generated by style transfer networks. Because the map-based representation provides spatial knowledge for the agent's policy, our formulation allows policy networks pretrained in simulators to be deployed in a new setting. We evaluate MoDA in various practical scenarios and show that our proposed method quickly enhances the agent's performance in downstream tasks including localization, mapping, exploration, and point-goal navigation.
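As a rough illustration of the self-supervision described above, the sketch below trains a hypothetical map estimator against pseudo-clean targets produced by a frozen style transfer network; `mapper` and `style_transfer` are placeholder networks under our own assumptions, not MoDA's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical modules: `mapper` estimates an occupancy map from observations;
# `style_transfer` is a frozen network mapping noisy maps to pseudo-clean ones.
mapper = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(16, 1, 3, padding=1))
style_transfer = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(16, 1, 3, padding=1))
style_transfer.requires_grad_(False)  # pretrained and kept frozen

optimizer = torch.optim.Adam(mapper.parameters(), lr=1e-4)

def adaptation_step(obs):
    """One online self-supervised update: the pseudo-clean map is the target."""
    noisy_map = mapper(obs)  # (B, 1, H, W) logits
    with torch.no_grad():
        pseudo_clean = torch.sigmoid(style_transfer(torch.sigmoid(noisy_map)))
    loss = F.binary_cross_entropy_with_logits(noisy_map, pseudo_clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

loss = adaptation_step(torch.randn(4, 3, 128, 128))  # dummy RGB-like input
```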
This work studies the problem of image-goal navigation, which requires steering a robot with noisy sensors and controls through real, crowded environments. Recent fruitful approaches rely on deep reinforcement learning and learn navigation policies in simulation environments that are much simpler than the real world. Directly transferring these trained policies to real environments can be extremely challenging or even dangerous. We tackle this problem with a hierarchical navigation method composed of four decoupled modules. The first module maintains an obstacle map during robot navigation. The second periodically predicts a long-term goal on the real-time map. The third plans a collision-free command set for navigating to long-term goals, while the final module stops the robot properly near the goal image. The four modules are developed separately to suit image-goal navigation in real crowded scenarios. In addition, the hierarchical decomposition decouples the learning of navigation goal planning, collision avoidance, and navigation-ending prediction, which reduces the search space during navigation training and helps improve generalization to previously unseen real scenes. We evaluate the method in both a simulator and the real world with a mobile robot. The results show that our method outperforms several navigation baselines and can successfully achieve the navigation tasks in these scenarios.
Visual navigation for mobile robots has classically been addressed with SLAM plus optimal planning, and more recently by end-to-end training of policies implemented as deep networks. While the former is usually limited to waypoint planning, it has proven its efficiency even in real physical environments; the latter solutions are most often used in simulation, but have been shown capable of learning more complex visual reasoning involving intricate semantic rules. Navigation by real robots in physical environments remains an open problem. End-to-end training approaches have only been thoroughly tested in simulation, and experiments involving real robots are limited to rare performance evaluations under simplified laboratory conditions. In this work we present an in-depth study of the performance and reasoning abilities of real physical agents, trained in simulation and deployed to two different physical environments. Beyond benchmarking, we provide insights into the generalization capabilities of agents trained under different conditions. We visualize sensor usage and the importance of the different types of signals. We show that, for the PointGoal task, an agent pre-trained on a variety of tasks and fine-tuned on a simulated version of the target environment can reach competitive performance without modeling any sim2real transfer, i.e., by deploying the trained agent directly from simulation onto a real physical robot.
A household robot should be able to navigate to target locations without requiring users to first annotate everything in their home. Current approaches to this object navigation challenge do not test on real robots and rely on expensive semantically labeled 3D meshes. In this work, our aim is an agent that builds self-supervised models of the world via exploration, much as a child might. We propose an end-to-end self-supervised embodied agent that leverages exploration to train a semantic segmentation model of 3D objects, and uses those representations to learn an object navigation policy purely from self-labeled 3D meshes. The key insight is that embodied agents can leverage location consistency as a supervision signal: collecting images from different views/angles and applying contrastive learning to fine-tune a semantic segmentation model. In our experiments, we observe that our framework performs better than other self-supervised baselines and competitively with supervised baselines, both in simulation and when deployed in real houses.
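The location-consistency idea lends itself to a compact sketch: if a correspondence step (e.g., depth re-projection, not shown here) pairs pixel features from two views of the same 3D location, a standard InfoNCE loss pulls matched features together. This is a generic contrastive formulation under our own assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def location_consistency_loss(feat_a, feat_b, temperature=0.1):
    """InfoNCE over pixel features from two views: row i of `feat_a` and
    `feat_b` are assumed to be features of the *same* 3D location, with the
    pairing produced by a correspondence step such as depth re-projection."""
    a = F.normalize(feat_a, dim=1)    # (N, D)
    b = F.normalize(feat_b, dim=1)    # (N, D)
    logits = a @ b.t() / temperature  # (N, N); the diagonal holds positives
    targets = torch.arange(a.size(0))
    return F.cross_entropy(logits, targets)

loss = location_consistency_loss(torch.randn(32, 64), torch.randn(32, 64))
```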
Can an autonomous agent navigate a new environment without building an explicit map? For the task of PointGoal navigation ("go to $\Delta x$, $\Delta y$") under an idealized setting (no RGB-D or actuation noise, perfect GPS+Compass), the answer is a clear "yes": map-free neural models composed of task-agnostic components (CNNs and RNNs), trained with large-scale reinforcement learning, achieve 100% success on a standard dataset (Gibson). However, for PointNav in a realistic setting (RGB-D and actuation noise, no GPS+Compass), this remains an open question, which we address in this paper. The strongest published result for this task is 71.7% success. First, we identify the main cause of the performance drop: the absence of GPS+Compass. An agent with perfect GPS+Compass, subjected to RGB-D sensing and actuation noise, achieves 99.8% success (Gibson-v2 val). This suggests that (to paraphrase a meme) robust visual odometry is all we need for realistic PointNav; if we can achieve that, we can ignore the sensing and actuation noise. With this as our operating hypothesis, we scale dataset and model size, and develop human-annotation-free data augmentation techniques to train models for visual odometry. We advance the state of the art on the Habitat Realistic PointNav challenge from 71% to 94% success (+23 absolute, +31% relative) and from 53% to 74% SPL (+21 absolute, +40% relative). While our approach does not saturate or "solve" the dataset, this strong improvement, combined with promising zero-shot sim2real transfer (to a LoCoBot), provides evidence consistent with the hypothesis that explicit mapping may not be necessary for navigation, even in realistic settings.
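To make the "visual odometry is all we need" argument concrete, here is a minimal dead-reckoning update: at each step, the egocentric motion estimated by a VO model is used to re-express the remaining point-goal offset, standing in for GPS+Compass. The motion convention below is our assumption, not the paper's exact parameterization.

```python
import numpy as np

def update_goal_vector(goal_xy, vo_delta):
    """Dead-reckon the point-goal after one step, given a visual-odometry
    estimate vo_delta = (dx, dy, dtheta) in the previous egocentric frame."""
    dx, dy, dtheta = vo_delta
    # Translate by the estimated motion, then rotate into the new heading,
    # so the goal offset is always expressed in the current egocentric frame.
    shifted = np.array([goal_xy[0] - dx, goal_xy[1] - dy])
    c, s = np.cos(-dtheta), np.sin(-dtheta)
    rot = np.array([[c, -s], [s, c]])
    return rot @ shifted

goal = np.array([5.0, 2.0])                          # "go to dx=5, dy=2"
goal = update_goal_vector(goal, (0.25, 0.0, 0.05))   # one noisy forward step
```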
We introduce a target-driven navigation system to improve mapless visual navigation in indoor scenes. Our method takes multi-view observations of the robot and the target as input at each time step and provides a sequence of actions that move the robot to the target, without relying on a map at runtime. The system is learned by optimizing a combined objective comprising three key designs. First, we propose that the agent conceives the next observation before making an action decision. This is achieved by learning a variational generative module from expert demonstrations. We then propose predicting static collisions ahead of time, as an auxiliary task to improve safety during navigation. Furthermore, to alleviate the training-data imbalance problem of termination action prediction, we introduce a target checking module as an alternative to augmenting the navigation policy with a termination action. These three proposed designs all contribute to improved training data efficiency, static collision avoidance, and navigation generalization performance, yielding a novel target-driven mapless navigation system. Through experiments on a TurtleBot, we provide evidence that our model can be integrated into a robot system and navigate in the real world. Videos and models can be found in the supplementary material.
There has been an emerging paradigm shift in AI algorithms and agents, from the era of "Internet AI" to the era of "embodied AI", in which agents no longer learn from datasets of images, videos, or text curated primarily from the Internet. Instead, they learn through interactions with their environments via an egocentric perception similar to that of humans. Consequently, there has been substantial growth in the demand for embodied AI simulators to support a variety of embodied AI research tasks. This growing interest in embodied AI benefits the greater pursuit of artificial general intelligence (AGI), but there has been no contemporary and comprehensive survey of the field. This paper aims to provide an encyclopedic survey of embodied AI, from its simulators to its research. By evaluating nine current embodied AI simulators with seven features we propose, the paper seeks to understand these simulators in their use for embodied AI research, along with their limitations. It then surveys the three main research tasks in embodied AI - visual exploration, visual navigation, and embodied question answering (QA) - covering the state-of-the-art approaches, evaluation metrics, and datasets. Finally, with the new insights revealed from benchmarking the field, the paper offers suggestions on simulator-to-task selection and recommendations on future directions for the field.
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
Training embodied agents in simulation has become mainstream for the embodied AI community. However, these agents often struggle when deployed in the physical world due to their inability to generalize to real-world environments. In this paper, we present Phone2Proc, a method that uses a 10-minute phone scan and conditional procedural generation to create a distribution of training scenes that are semantically similar to the target environment. The generated scenes are conditioned on the wall layout and arrangement of large objects from the scan, while also sampling lighting, clutter, surface textures, and instances of smaller objects with randomized placement and materials. Leveraging just a simple RGB camera, training with Phone2Proc shows massive improvements from 34.7% to 70.7% success rate in sim-to-real ObjectNav performance across a test suite of over 200 trials in diverse real-world environments, including homes, offices, and RoboTHOR. Furthermore, Phone2Proc's diverse distribution of generated scenes makes agents remarkably robust to changes in the real world, such as human movement, object rearrangement, lighting changes, or clutter.
This work presents an embodied agent that can adapt its semantic segmentation network to new indoor environments in a fully autonomous way. Because semantic segmentation networks generalize poorly to unseen environments, the agent collects images of the new environment, which are then used for self-supervised domain adaptation. We formulate this as an informative path planning problem and propose a novel information gain that leverages uncertainty extracted from the semantic model to safely collect relevant data. As domain adaptation progresses, these uncertainties change over time, and the fast learning feedback of our system drives the agent to collect diverse data. Experiments show that our method adapts to new environments faster and with higher final performance than an exploration objective, and that it can be successfully deployed to real-world environments on a physical robot.
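One plausible instantiation of an uncertainty-driven information gain is the mean predictive entropy of the current segmentation model over the views expected along a candidate path. This sketch is our simplification and omits the safety considerations the paper describes.

```python
import torch

def expected_information_gain(view_logits):
    """Mean predictive entropy of the segmentation model over views expected
    along a candidate path: high entropy marks regions where the model is
    uncertain and data collection is therefore informative."""
    probs = torch.softmax(view_logits, dim=1)                # (T, C, H, W)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)  # (T, H, W)
    return entropy.mean().item()

# Dummy logits for 8 anticipated views with 21 classes; a planner would pick
# the candidate path with the highest score.
gain = expected_information_gain(torch.randn(8, 21, 64, 64))
```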
To enhance the cross-target and cross-scene generalization of target-driven visual navigation based on deep reinforcement learning (RL), we introduce an information-theoretic regularization term into the RL objective. The regularization maximizes the mutual information between navigation actions and the transformation of the agent's visual observations, thus promoting more informed navigation decisions. In this way, the agent models the action-observation dynamics by learning a variational generative model. Based on the model, the agent generates (imagines) the next observation from its current observation and the navigation goal. The agent thereby learns to understand the causal relationship between navigation actions and the changes in its observations, which allows it to predict the next navigation action by comparing the current and the imagined next observations. Cross-target and cross-scene evaluations on the AI2-THOR framework show that our method attains at least a 10% improvement in average success rate over some state-of-the-art models. We further evaluate our model in two real-world settings: navigation in unseen indoor scenes from the discrete Active Vision Dataset (AVD) and in a continuous real-world environment with a TurtleBot. We demonstrate that our navigation model is able to successfully accomplish the navigation tasks in these scenarios. Videos and models can be found in the supplementary material.
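A minimal sketch of the imagine-then-compare decision loop described above: a generative module predicts the next-observation feature from the current observation and the goal, and the policy chooses an action by comparing the current and imagined features. The network shapes and names (`imagine`, `policy`) are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Hypothetical networks: `imagine` generates the next-observation feature from
# the current observation and goal features; `policy` picks an action by
# comparing the current feature with the imagined one.
imagine = nn.Sequential(nn.Linear(512 + 512, 512), nn.ReLU(), nn.Linear(512, 512))
policy = nn.Sequential(nn.Linear(512 * 2, 256), nn.ReLU(), nn.Linear(256, 4))

def act(obs_feat, goal_feat):
    imagined_next = imagine(torch.cat([obs_feat, goal_feat], dim=-1))
    action_logits = policy(torch.cat([obs_feat, imagined_next], dim=-1))
    return action_logits.argmax(dim=-1), imagined_next

action, _ = act(torch.randn(1, 512), torch.randn(1, 512))
```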
The robotics community has begun to rely heavily on increasingly realistic 3D simulators for large-scale training of robots on massive amounts of data. However, once robots are deployed in the real world, the simulation gap, as well as changes in the real world (e.g., lighting, object displacements), lead to errors. In this paper, we introduce Sim2RealViz, a visual analytics tool that helps experts understand and reduce this gap for robot ego-pose estimation tasks, i.e., estimating a robot's position using a trained model. Sim2RealViz displays details of a given model together with the performance of its instances in both simulation and the real world. Experts can identify environment differences that affect model predictions at a given location and address them by exploring through direct interactions with the model's hypotheses. We detail the design of the tool and present case studies on exploiting and correcting a regression-to-the-mean bias, and on how models are perturbed by the disappearance of landmarks such as bicycles.
In this paper, we explore how we can build on the data and models of Internet images and use them to adapt to robot vision without requiring any additional labels. We propose a framework called Self-supervised Embodied Active Learning (SEAL). It utilizes perception models trained on Internet images to learn an active exploration policy. The observations collected by this exploration policy are labeled via 3D consistency and used to improve the perception model. We build and utilize 3D semantic maps to learn both action and perception in a completely self-supervised manner. The semantic map is used to compute an intrinsic motivation reward for training the exploration policy and to label the agent's observations using spatio-temporal 3D consistency and label propagation. We demonstrate that the SEAL framework can be used to close the action-perception loop: it improves the object detection and instance segmentation performance of a pretrained perception model by simply moving around in training environments, and the improved perception model can in turn improve object-goal navigation.
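The 3D-consistency labeling step can be sketched as a voxel-level majority vote: predictions from many views are accumulated per voxel, and each pixel is relabeled with its voxel's winning class. The back-projection from pixels to voxel ids (via depth and camera pose) is assumed and not shown here.

```python
import numpy as np
from collections import defaultdict

def propagate_labels(views):
    """Self-label via 3D consistency. `views` is a list of (voxel_ids,
    predicted_classes) arrays of equal length per view; the voxel ids come
    from back-projecting pixels with depth and pose (assumed, not shown).
    Accumulate per-voxel class votes, then relabel by majority."""
    votes = defaultdict(lambda: defaultdict(int))
    for voxels, classes in views:
        for v, c in zip(voxels, classes):
            votes[v][c] += 1
    majority = {v: max(cs, key=cs.get) for v, cs in votes.items()}
    return [np.array([majority[v] for v in voxels]) for voxels, _ in views]

views = [(np.array([0, 1, 2]), np.array([5, 7, 7])),
         (np.array([0, 2]), np.array([5, 3]))]
pseudo_labels = propagate_labels(views)  # cross-view-consistent labels
```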
Robots operating in the open world encounter a variety of environments that can differ substantially from one another. This domain gap also poses a challenge for simultaneous localization and mapping (SLAM), one of the fundamental tasks for navigation. In particular, learning-based SLAM methods are known to generalize poorly to unseen environments, hindering their general adoption. In this work, we introduce the new task of continual SLAM, which extends the concept of lifelong SLAM from a single dynamically changing environment to sequential deployments in several drastically different environments. To address this task, we propose CL-SLAM, which leverages a dual-network architecture to both adapt to new environments and retain knowledge of previously visited ones. We compare CL-SLAM against learning-based as well as classical SLAM methods and show the advantages of leveraging online data. We extensively evaluate CL-SLAM on three different datasets and demonstrate that it outperforms several baselines built on existing learning-based visual odometry methods. The code of our work is publicly available at http://continual-slam.cs.uni-freiburg.de.
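A minimal sketch of the adapt-while-rehearsing idea behind continual SLAM: update a visual odometry network online on incoming frames while replaying stored samples from earlier environments to limit forgetting. `ReplayAdapter`, the buffer policy, and `vo_loss` are our own stand-ins, not CL-SLAM's dual-network design.

```python
import random
import torch
import torch.nn.functional as F

class ReplayAdapter:
    """Adapt a VO network online while rehearsing stored samples from earlier
    environments to limit catastrophic forgetting (a simplified stand-in)."""
    def __init__(self, model, vo_loss, capacity=1000):
        self.model, self.vo_loss = model, vo_loss
        self.buffer, self.capacity = [], capacity
        self.opt = torch.optim.Adam(model.parameters(), lr=1e-5)

    def step(self, frame_pair):
        # Keep a bounded buffer of past experience (random overwrite when full).
        if len(self.buffer) < self.capacity:
            self.buffer.append(frame_pair)
        else:
            self.buffer[random.randrange(self.capacity)] = frame_pair
        # Mix the new sample with replayed ones for the online update.
        batch = [frame_pair] + random.sample(self.buffer, min(4, len(self.buffer)))
        loss = sum(self.vo_loss(self.model, fp) for fp in batch) / len(batch)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

model = torch.nn.Linear(8, 3)                         # stand-in for a VO network
vo_loss = lambda m, pair: F.mse_loss(m(pair[0]), pair[1])
adapter = ReplayAdapter(model, vo_loss)
adapter.step((torch.randn(2, 8), torch.randn(2, 3)))  # one online frame pair
```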
Accurate localization is a critical requirement for most robotic tasks. The body of existing work focuses on passive localization, in which the robot's motions are taken as given, abstracting away their influence on sampling informative observations. While recent work has shown the benefits of learning motions to disambiguate the robot's pose, these methods are restricted to coarse discrete actions and depend directly on the size of the global map. We propose Active Particle Filter Networks (APFN), an approach that relies only on local information for both the likelihood evaluation and the decision making. To this end, we couple a differentiable particle filter with a reinforcement learning agent that attends to the most relevant parts of the map. The resulting approach inherits the computational benefits of particle filters and can act directly in continuous action spaces, while remaining fully differentiable, thereby end-to-end optimizable, and agnostic to the input modality. We demonstrate the benefits of our approach with extensive experiments in photorealistic indoor environments built from real-world 3D-scanned apartments. Videos and code are available at http://apfn.cs.uni-freiburg.de.
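For reference, here is one predict-update-resample cycle of a particle filter over (x, y, theta) poses, with the observation likelihood evaluated per particle from local information only; APFN's differentiable, learned components are replaced by simple NumPy stand-ins in this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, motion, likelihood_fn, noise=0.05):
    """One predict-update-resample cycle over (x, y, theta) particles. The
    observation model `likelihood_fn` scores each particle from local map
    information, mirroring the locality argument above."""
    particles = particles + motion + rng.normal(0, noise, particles.shape)  # predict
    weights = weights * likelihood_fn(particles)                            # update
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)        # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = np.zeros((256, 3))         # pose hypotheses
weights = np.full(256, 1.0 / 256)
particles, weights = particle_filter_step(
    particles, weights, motion=np.array([0.1, 0.0, 0.02]),
    likelihood_fn=lambda p: np.exp(-np.linalg.norm(p[:, :2], axis=1)))
```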
We propose a novel method to reliably estimate the pose of a camera given a sequence of images acquired in extreme environments such as deep seas or extraterrestrial terrains. Data acquired under these challenging conditions are corrupted by textureless surfaces, image degradation, and repetitive, highly ambiguous structures. When naively deployed, state-of-the-art methods can fail in these scenarios, as confirmed by our empirical analysis. In this paper, we attempt to make camera relocalization work in such extreme situations. To this end, we propose: (i) a hierarchical localization system that leverages temporal information, and (ii) a novel environment-aware image enhancement method, to boost robustness and accuracy. Our extensive experimental results demonstrate superior performance of our method under two extreme settings: localizing an autonomous underwater vehicle and localizing a planetary rover in a Mars-like desert. Moreover, our method achieves performance comparable to state-of-the-art methods on an indoor benchmark (the 7-Scenes dataset) while using only 20% of the training data.
Efficient ObjectGoal navigation (ObjectNav) in novel environments requires an understanding of the spatial and semantic regularities in environment layouts. In this work, we present a straightforward method for learning these regularities by predicting the locations of unobserved objects from incomplete semantic maps. Our method differs from previous prediction-based navigation methods, such as frontier potential prediction or egocentric map completion, by directly predicting unseen targets while leveraging the global context from all previously explored areas. Our prediction model is lightweight and can be trained in a supervised manner using a relatively small amount of passively collected data. Once trained, the model can be incorporated into a modular pipeline for ObjectNav without the need for any reinforcement learning. We validate the effectiveness of our method on the HM3D and MP3D ObjectNav datasets. We find that it achieves the state-of-the-art on both datasets, despite not using any additional data for training.
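A sketch of the supervised setup this suggests: mask part of a complete top-down semantic map, and train a lightweight fully convolutional model to predict target-object locations in the unobserved region. The architecture and channel counts below are illustrative assumptions, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MapPredictor(nn.Module):
    """Lightweight fully convolutional predictor: input is a partial top-down
    semantic map with in_ch channels, output is per-cell logits for each
    target object category."""
    def __init__(self, in_ch=16, n_targets=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_targets, 1))

    def forward(self, partial_map):
        return self.net(partial_map)

model = MapPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One supervised step: mask out part of a complete map, then predict where
# the unseen target objects are, scoring only the unobserved cells.
full_map = torch.randn(2, 16, 64, 64)               # dummy semantic map
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()     # 1 = observed cell
target = (torch.rand(2, 6, 64, 64) > 0.95).float()  # dummy object locations
logits = model(full_map * mask)
loss = F.binary_cross_entropy_with_logits(logits, target, weight=1 - mask)
opt.zero_grad()
loss.backward()
opt.step()
```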
Semantic navigation is necessary to deploy mobile robots in uncontrolled environments like our homes, schools, and hospitals. Many learning-based approaches have been proposed in response to the lack of semantic understanding of the classical pipeline for spatial navigation, which builds a geometric map using depth sensors and plans to reach point goals. Broadly, end-to-end learning approaches reactively map sensor inputs to actions with deep neural networks, while modular learning approaches enrich the classical pipeline with learning-based semantic sensing and exploration. But learned visual navigation policies have predominantly been evaluated in simulation. How well do different classes of methods work on a robot? We present a large-scale empirical study of semantic visual navigation methods comparing representative methods from classical, modular, and end-to-end learning approaches across six homes with no prior experience, maps, or instrumentation. We find that modular learning works well in the real world, attaining a 90% success rate. In contrast, end-to-end learning does not, dropping from 77% simulation to 23% real-world success rate due to a large image domain gap between simulation and reality. For practitioners, we show that modular learning is a reliable approach to navigate to objects: modularity and abstraction in policy design enable Sim-to-Real transfer. For researchers, we identify two key issues that prevent today's simulators from being reliable evaluation benchmarks - (A) a large Sim-to-Real gap in images and (B) a disconnect between simulation and real-world error modes - and propose concrete steps forward.
In this work, we tackle the problem of active camera localization, which actively controls the camera's movements to achieve an accurate camera pose. Past solutions are mostly based on Markov localization, which reduces the position-wise camera uncertainty during localization. These approaches localize the camera in a discrete pose space and are agnostic to localization-driven scene properties, limiting the achievable pose accuracy. We propose to overcome these limitations with a novel active camera localization algorithm composed of a passive and an active localization module. The former optimizes the camera pose in a continuous pose space by establishing point-wise camera-world correspondences. The latter explicitly models the scene and camera uncertainty components to plan the right path for accurate camera pose estimation. We validate our algorithm on challenging localization scenarios from both synthetic and scanned real-world indoor scenes. Experimental results demonstrate that our algorithm outperforms both the state-of-the-art Markov-localization-based approach and other compared approaches in terms of fine camera pose accuracy. Code and data are released at https://github.com/qhfang/AccurateACL.
Deep motion forecasting models have achieved great success when trained on a massive amount of data. Yet, they often perform poorly when training data is limited. To address this challenge, we propose a transfer learning approach for efficiently adapting pre-trained forecasting models to new domains, such as unseen agent types and scene contexts. Unlike the conventional fine-tuning approach that updates the whole encoder, our main idea is to reduce the amount of tunable parameters that can precisely account for the target domain-specific motion style. To this end, we introduce two components that exploit our prior knowledge of motion style shifts: (i) a low-rank motion style adapter that projects and adjusts the style features at a low-dimensional bottleneck; and (ii) a modular adapter strategy that disentangles the features of scene context and motion history to facilitate a fine-grained choice of adaptation layers. Through extensive experimentation, we show that our proposed adapter design, coined MoSA, outperforms prior methods on several forecasting benchmarks.
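A low-rank bottleneck adapter of the kind described in (i) can be sketched in a few lines: project features down to rank r, adjust, project back, and add residually, with the up-projection zero-initialized so the pretrained behavior is preserved at the start of fine-tuning. Dimensions and placement are illustrative assumptions, not MoSA's exact design.

```python
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Residual low-rank bottleneck: project to rank r, adjust, project back."""
    def __init__(self, dim=256, rank=8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # identity at init: pretraining preserved

    def forward(self, x):
        return x + self.up(self.down(x))

# Freeze the pretrained layers; tune only the adapter parameters.
encoder = nn.Sequential(nn.Linear(256, 256), LowRankAdapter(256),
                        nn.Linear(256, 256), LowRankAdapter(256))
for name, p in encoder.named_parameters():
    p.requires_grad = (".down." in name) or (".up." in name)

out = encoder(torch.randn(2, 256))  # forward pass works as usual
```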