Cross-scene model adaptation is crucial for camera relocalization in real-world scenarios. It is often preferable to adapt a pre-learned model to a novel scene quickly, with as few training samples as possible. However, existing state-of-the-art methods can hardly support such few-shot scene adaptation due to the entanglement of image feature extraction and scene coordinate regression. To address this issue, we approach camera relocalization with a decoupled solution in which feature extraction, coordinate regression, and pose estimation are performed separately. Our key insight is that the feature encoder used for coordinate regression should be learned across multiple scenes, with scene-specific distracting factors removed, so as to obtain a general and, more importantly, scene-insensitive feature representation. With this feature prior, combined with a coordinate regressor, few-shot observations in a new scene are much easier to relate to the 3D world than with existing integrated solutions. Experiments show the superiority of our method over state-of-the-art approaches, yielding higher accuracy on several scenes with diverse visual appearances and viewpoint distributions.
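As a concrete illustration of the final stage of such a decoupled pipeline, the sketch below recovers a camera pose from predicted pixel-to-scene-coordinate correspondences with OpenCV's PnP-RANSAC solver. The function name, thresholds, and data layout are illustrative assumptions, not code from the paper.

```python
import numpy as np
import cv2

def pose_from_scene_coordinates(pixels_2d, scene_coords_3d, K):
    """Estimate a 6-DoF camera pose from 2D-3D correspondences via PnP-RANSAC.

    pixels_2d:       (N, 2) pixel locations in the query image.
    scene_coords_3d: (N, 3) regressed scene coordinates for those pixels.
    K:               (3, 3) camera intrinsic matrix.
    Returns a 4x4 world-to-camera transform and the RANSAC inlier mask.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        scene_coords_3d.astype(np.float64),
        pixels_2d.astype(np.float64),
        K.astype(np.float64),
        distCoeffs=None,
        reprojectionError=3.0,   # pixel threshold for inliers (illustrative)
        iterationsCount=1000,
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        raise RuntimeError("PnP-RANSAC failed to find a pose")
    R, _ = cv2.Rodrigues(rvec)           # axis-angle -> rotation matrix
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T, inliers
```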
Visual (re)localization addresses the problem of estimating the 6-DoF (degrees of freedom) camera pose of a query image captured in a known scene, a key building block of many computer vision and robotics applications. Recent advances in structure-based localization build 2D-3D correspondences for camera pose optimization by memorizing the mapping from image pixels to scene coordinates with neural networks. However, such memorization requires training on a large number of images in each scene, which is heavily inefficient. In contrast, a few images are usually sufficient to cover the main regions of a scene for a human operator to perform visual localization. In this paper, we propose a scene region classification approach to achieve fast and effective scene memorization from few-shot images. Our insight is to leverage a) a pre-learned feature extractor, b) a scene region classifier, and c) a meta-learning strategy to accelerate training while mitigating overfitting. We evaluate our method on indoor and outdoor benchmarks. The experiments validate the effectiveness of our method in the few-shot setting, and the training time is significantly reduced to only a few minutes. Code available: https://github.com/siyandong/src
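One plausible way to obtain region labels for such a classifier is to partition the scene's 3D points into clusters and use the cluster index as the classification target. The sketch below illustrates this idea with k-means; the clustering step and parameter choices are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def make_region_labels(scene_coords, num_regions=64):
    """Partition a scene into regions by clustering 3D scene coordinates.

    scene_coords: (N, 3) array of 3D points gathered from the few-shot
                  training images (e.g., back-projected depth).
    Returns per-point region labels and region centroids, which can serve
    as classification targets for a scene region classifier.
    """
    kmeans = KMeans(n_clusters=num_regions, n_init=10, random_state=0)
    labels = kmeans.fit_predict(scene_coords)
    return labels, kmeans.cluster_centers_
```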
Visual relocalization has been widely discussed in 3D vision: given a pre-built 3D visual map, estimate the 6-DoF (degrees of freedom) pose of a query image. Relocalization in large-scale indoor environments enables attractive applications such as augmented reality and robot navigation. However, appearance changes quickly in such environments as the camera moves, which is challenging for relocalization systems. To address this problem, we propose a virtual-view-synthesis-based method, RenderNet, to enrich the database and refine poses for this particular scenario. Instead of rendering real images, which requires high-quality 3D models, we choose to directly render the necessary global and local features of virtual viewpoints and apply them in the subsequent image retrieval and feature matching operations, respectively. The proposed method can largely improve performance in large-scale indoor environments, e.g., achieving improvements of 7.1% and 12.2% on the InLoc dataset.
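For context, the retrieval step that rendered global features would feed into is typically a nearest-neighbor search over descriptors. A minimal sketch, assuming L2-normalized global descriptors are already available for the query and for the (real plus rendered) database views:

```python
import numpy as np

def retrieve_top_k(query_desc, db_descs, k=10):
    """Rank database views (real or rendered) by cosine similarity.

    query_desc: (D,) global descriptor of the query image.
    db_descs:   (M, D) global descriptors of database views.
    Returns indices of the k most similar database views.
    """
    q = query_desc / (np.linalg.norm(query_desc) + 1e-12)
    db = db_descs / (np.linalg.norm(db_descs, axis=1, keepdims=True) + 1e-12)
    scores = db @ q                     # cosine similarity per database view
    return np.argsort(-scores)[:k]
```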
The 6D object pose estimation problem has been extensively studied in the field of Computer Vision and Robotics. It has a wide range of applications such as robot manipulation, augmented reality, and 3D scene understanding. With the advent of Deep Learning, many breakthroughs have been made; however, approaches continue to struggle when they encounter unseen instances, new categories, or real-world challenges such as cluttered backgrounds and occlusions. In this study, we will explore the available methods based on input modality, problem formulation, and whether it is a category-level or instance-level approach. As a part of our discussion, we will focus on how 6D object pose estimation can be used for understanding 3D scenes.
One of the key criticisms of deep learning is that large amounts of expensive and difficult-to-acquire training data are required to train models with high performance and good generalization. Focusing on the task of monocular camera pose estimation via scene coordinate regression (SCR), we describe a new method for domain adaptation of networks for camera pose estimation (DANCE), which enables training of models without access to any labels on the target task. DANCE requires unlabeled images (with no known poses, ordering, or scene coordinate labels) and a 3D representation of the space (e.g., a scanned point cloud), both of which can be captured with minimal effort using off-the-shelf commodity hardware. DANCE renders labeled synthetic images from the 3D model and bridges the inevitable domain gap between synthetic and real images by applying unsupervised image-level domain adaptation techniques (unpaired image-to-image translation). When tested on real images, the SCR model trained with DANCE performs comparably to its fully supervised counterpart (both using PnP-RANSAC for final pose estimation) at a fraction of the cost. Our code and dataset are available at https://github.com/jacklangerman/dance
Estimating 6D poses of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the input image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using a disentangled representation of 3D location and 3D orientation and an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over state-of-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.
We propose a three-stage 6-DoF object detection method called DPODv2 (Dense Pose Object Detector) that relies on dense correspondences. We combine a 2D object detector with a dense correspondence network and a multi-view pose refinement method to estimate the full 6-DoF pose. Unlike other deep learning methods that are typically restricted to monocular RGB images, we propose a unified deep learning network that allows different imaging modalities (RGB or depth) to be used. Furthermore, we propose a novel pose refinement method based on differentiable rendering. The main concept is to compare predicted and rendered correspondences in multiple views to obtain a pose consistent with the predicted correspondences in all views. Our proposed method is rigorously evaluated on different data modalities and types of training data in controlled setups. The main conclusions are that RGB excels in correspondence estimation, while depth contributes to pose accuracy if good 3D-3D correspondences are available; naturally, their combination achieves the overall best performance. We conduct extensive evaluations and ablation studies to analyze and validate the results on several challenging datasets. DPODv2 achieves excellent results on all of them while remaining fast and scalable, independently of the data modality used and the type of training data.
In this work, we tackle the problem of active camera localization, which actively controls the camera movement to achieve a precise camera pose. Past solutions are mainly based on Markov localization, which reduces the position uncertainty of the camera being localized. These approaches localize the camera in a discrete pose space and are agnostic to the localization-driven scene properties, which limits camera pose accuracy. We propose to overcome these limitations with a novel active camera localization algorithm composed of a passive and an active localization module. The former optimizes the camera pose in a continuous pose space by establishing point-wise camera-to-world correspondences. The latter explicitly models the scene and camera uncertainty components to plan the right path for precise camera pose estimation. We validate our algorithm on challenging localization scenarios from both synthetic and scanned real-world indoor scenes. Experimental results demonstrate that our algorithm outperforms state-of-the-art Markov-localization-based approaches and other methods in terms of fine camera pose accuracy. Code and data are released at https://github.com/qhfang/accurateacl.
Figure 1: PoseNet: Convolutional neural network monocular camera relocalization. Relocalization results for an input image (top), the predicted camera pose of a visual reconstruction (middle), shown again overlaid in red on the original image (bottom). Our system relocalizes to within approximately 2 m and 6° for large outdoor scenes spanning 50,000 m². For an online demonstration, please see our project webpage: mi.eng.cam.ac.uk/projects/relocalisation/
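PoseNet established the absolute-pose-regression paradigm: a CNN maps a single RGB image directly to a camera translation and orientation. The PyTorch sketch below shows that idea in minimal form; the ResNet-18 backbone, layer sizes, and loss weighting are illustrative assumptions rather than the original GoogLeNet-based architecture.

```python
import torch
import torch.nn as nn
import torchvision

class PoseRegressor(nn.Module):
    """CNN that maps an RGB image to a 6-DoF camera pose (t, q)."""

    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Identity()           # keep the 512-d global feature
        self.backbone = backbone
        self.fc_t = nn.Linear(512, 3)         # translation x, y, z
        self.fc_q = nn.Linear(512, 4)         # rotation as a quaternion

    def forward(self, image):
        feat = self.backbone(image)
        t = self.fc_t(feat)
        q = self.fc_q(feat)
        q = q / q.norm(dim=-1, keepdim=True)  # normalize to a unit quaternion
        return t, q

# A common APR training loss weights the two terms, e.g. (beta is a tunable scalar):
# loss = (t - t_gt).norm(dim=-1).mean() + beta * (q - q_gt).norm(dim=-1).mean()
```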
We introduce MegaPose, a method to estimate the 6D pose of novel objects, that is, objects unseen during training. At inference time, the method only assumes knowledge of (i) a region of interest displaying the object in the image and (ii) a CAD model of the observed object. The contributions of this work are threefold. First, we present a 6D pose refiner based on a render&compare strategy which can be applied to novel objects. The shape and coordinate system of the novel object are provided as inputs to the network by rendering multiple synthetic views of the object's CAD model. Second, we introduce a novel approach for coarse pose estimation which leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner. Third, we introduce a large-scale synthetic dataset of photorealistic images of thousands of objects with diverse visual and shape properties and show that this diversity is crucial to obtain good generalization performance on novel objects. We train our approach on this large synthetic dataset and apply it without retraining to hundreds of novel objects in real images from several pose estimation benchmarks. Our approach achieves state-of-the-art performance on the ModelNet and YCB-Video datasets. An extensive evaluation on the 7 core datasets of the BOP challenge demonstrates that our approach achieves performance competitive with existing approaches that require access to the target objects during training. Code, dataset and trained models are available on the project page: https://megapose6d.github.io/.
Estimating the 6D pose of known objects is important for robots to interact with the real world. The problem is challenging due to the variety of objects as well as the complexity of a scene caused by clutter and occlusions between objects. In this work, we introduce PoseCNN, a new Convolutional Neural Network for 6D object pose estimation. PoseCNN estimates the 3D translation of an object by localizing its center in the image and predicting its distance from the camera. The 3D rotation of the object is estimated by regressing to a quaternion representation. We also introduce a novel loss function that enables PoseCNN to handle symmetric objects. In addition, we contribute a large scale video dataset for 6D object pose estimation named the YCB-Video dataset. Our dataset provides accurate 6D poses of 21 objects from the YCB dataset observed in 92 videos with 133,827 frames. We conduct extensive experiments on our YCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN is highly robust to occlusions, can handle symmetric objects, and provide accurate pose estimation using only color images as input. When using depth data to further refine the poses, our approach achieves state-of-the-art results on the challenging OccludedLINEMOD dataset. Our code and dataset are available at https://rse-lab.cs.washington.edu/projects/posecnn/.
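The loss for symmetric objects described above compares the object's model points under the predicted and ground-truth rotations using nearest-neighbor matching rather than index-wise matching. Below is a hedged PyTorch sketch of that ShapeMatch-style idea; the function name and the exact weighting may differ from the paper's formulation.

```python
import torch

def shape_match_loss(points, R_pred, R_gt, symmetric: bool):
    """Pose loss over an object's model points, tolerant to symmetries.

    points: (N, 3) 3D model points of the object.
    R_pred, R_gt: (3, 3) predicted and ground-truth rotation matrices.
    For symmetric objects, each predicted point is matched to its nearest
    ground-truth point instead of the point with the same index.
    """
    p_pred = points @ R_pred.T
    p_gt = points @ R_gt.T
    if symmetric:
        dists = torch.cdist(p_pred, p_gt)          # (N, N) pairwise distances
        per_point = dists.min(dim=1).values        # nearest-neighbor match
    else:
        per_point = (p_pred - p_gt).norm(dim=1)    # same-index match
    return per_point.mean()
```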
In this paper, we propose to go beyond the established visual-based localization methods that rely on matching visual descriptors between a query image and a 3D point cloud. While matching keypoints via visual descriptors makes localization highly accurate, it comes with significant storage demands, raises privacy concerns, and requires long-term updates of the descriptors. To gracefully cope with these practical challenges of large-scale localization, we propose GoMatch, an alternative to visual-based matching that relies solely on geometric information to match image keypoints to maps, represented as sets of bearing vectors. Our novel bearing-vector representation of 3D points significantly alleviates the cross-modal challenge in geometry-based matching that has prevented prior work from tackling localization in realistic environments. With additional careful architectural design, GoMatch improves over prior geometry-based matching work, reducing the average median pose errors by (10.67m, 95.7°) and (1.43m, 34.7°), while requiring as little as 1.5%/1.7% of the storage capacity of the best visual-based matching method. This confirms its potential and feasibility for real-world localization and opens the door for future efforts on city-scale visual localization methods that do not need to store visual descriptors.
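A bearing vector is simply the unit direction of a pixel's viewing ray in the camera frame. The sketch below shows the standard conversion from pixel keypoints to bearing vectors using the camera intrinsics; this is the common definition, not code from the paper.

```python
import numpy as np

def keypoints_to_bearing_vectors(keypoints, K):
    """Convert pixel keypoints to unit bearing vectors in the camera frame.

    keypoints: (N, 2) array of (u, v) pixel coordinates.
    K:         (3, 3) camera intrinsic matrix.
    """
    ones = np.ones((keypoints.shape[0], 1))
    pts_h = np.hstack([keypoints, ones])              # homogeneous pixel coords
    rays = (np.linalg.inv(K) @ pts_h.T).T             # back-project through K
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)
```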
Camera pose estimation is a key step in standard 3D reconstruction pipelines that operate on a dense set of images of a single object or scene. However, methods for pose estimation often fail when only a few images are available because they rely on the ability to robustly identify and match visual features between image pairs. While these methods can work robustly with dense camera views, capturing a large set of images can be time-consuming or impractical. We propose SparsePose for recovering accurate camera poses given a sparse set of wide-baseline images (fewer than 10). The method learns to regress initial camera poses and then iteratively refine them after training on a large-scale dataset of objects (Co3D: Common Objects in 3D). SparsePose significantly outperforms conventional and learning-based baselines in recovering accurate camera rotations and translations. We also demonstrate our pipeline for high-fidelity 3D reconstruction using only 5-9 images of an object.
We introduce an approach for recovering the 6D pose of multiple known objects in a scene captured by a set of input images with unknown camera viewpoints. First, we present a single-view single-object 6D pose estimation method, which we use to generate 6D object pose hypotheses. Second, we develop a robust method for matching individual 6D object pose hypotheses across different input images in order to jointly estimate camera viewpoints and 6D poses of all objects in a single consistent scene. Our approach explicitly handles object symmetries, does not require depth measurements, is robust to missing or incorrect object hypotheses, and automatically recovers the number of objects in the scene. Third, we develop a method for global scene refinement given multiple object hypotheses and their correspondences across views. This is achieved by solving an object-level bundle adjustment problem that refines the poses of cameras and objects to minimize the reprojection error in all views. We demonstrate that the proposed method, dubbed CosyPose, outperforms current state-of-the-art results for single-view and multi-view 6D object pose estimation by a large margin on two challenging benchmarks: the YCB-Video and T-LESS datasets. Code and pre-trained models are available on the project webpage.
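The object-level bundle adjustment mentioned above jointly optimizes camera and object poses to minimize reprojection error over all views. A hedged sketch of the residual such an optimization would minimize is given below; the helper names, the shared-intrinsics assumption, and the way observations are supplied are illustrative, and an actual implementation would feed these residuals into a nonlinear least-squares solver.

```python
import numpy as np

def project_object(T_wc, T_ow, model_points, K):
    """Project an object's 3D model points into one camera view.

    T_wc: (4, 4) world-to-camera pose.   T_ow: (4, 4) object-to-world pose.
    model_points: (N, 3) points on the object model.   K: (3, 3) intrinsics.
    """
    pts_h = np.hstack([model_points, np.ones((len(model_points), 1))])
    cam_pts = (T_wc @ T_ow @ pts_h.T)[:3]              # points in camera frame
    uv = K @ cam_pts
    return (uv[:2] / uv[2]).T                          # (N, 2) pixel coordinates

def reprojection_residuals(cam_poses, obj_poses, observations, model_points, K):
    """Stack reprojection errors over all (view, object) observations.

    observations: list of (view_idx, obj_idx, (N, 2) observed pixel points),
    e.g. the projections implied by the per-view pose hypotheses.
    Minimizing these residuals over cam_poses and obj_poses is the
    object-level bundle adjustment step.
    """
    res = []
    for v, o, obs_uv in observations:
        pred_uv = project_object(cam_poses[v], obj_poses[o], model_points, K)
        res.append((pred_uv - obs_uv).ravel())
    return np.concatenate(res)
```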
We present a visual localization system that learns to estimate camera poses in the real world with the help of synthetic data. Despite significant progress in recent years, most learning-based approaches to visual localization target a single domain and require a dense database of geo-tagged images to operate well. To mitigate the data scarcity issue and improve the scalability of neural localization models, we introduce TOPO-DataGen, a versatile synthetic data generation tool that traverses smoothly between the real and virtual worlds, hinged on geographic camera viewpoints. New large-scale sim-to-real benchmark datasets are proposed to showcase and evaluate the utility of the said synthetic data. Our experiments show that synthetic data generically enhances neural network performance on real data. Furthermore, we introduce CrossLoc, a cross-modal visual representation learning approach to pose estimation that can fully exploit scene coordinate ground truth via self-supervision. Without any extra data, CrossLoc significantly outperforms state-of-the-art methods and achieves substantially higher real-data sample efficiency. Our code is available at https://github.com/topo-epfl/crossloc.
Real-time robotic grasping, supporting subsequent accurate object-in-hand manipulation tasks, is a priority target for highly advanced autonomous systems. However, an algorithm that can perform sufficiently accurate grasping with time efficiency has yet to be found. This paper proposes a novel method with a 2-stage approach that combines fast 2D object recognition using a deep neural network with subsequent accurate and fast 6D pose estimation based on the point-pair feature framework, forming a real-time 3D object recognition and grasping solution capable of handling multi-object-class scenes. The proposed solution has the potential to perform robustly in real-time applications that require both efficiency and accuracy. To validate our method, we conducted extensive and thorough experiments involving the laborious preparation of our own dataset. The experimental results show that the proposed method achieves an accuracy of 97.37% on the 5cm5deg metric and 99.37% on the Average Distance metric. The results also show an overall relative improvement of 62% (5cm5deg metric) and 52.48% (Average Distance metric) achieved by using the proposed method. Furthermore, the pose estimation stage shows an average improvement of 47.6% in running time. Finally, to illustrate the overall efficiency of the system in real-time operation, a pick-and-place robotic experiment was conducted and showed a convincing success rate of 90%. The experiment video is available at https://sites.google.com/view/dl-ppf6dpose/.
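The second stage builds on the point-pair feature (PPF) framework. For reference, the sketch below computes the classic four-dimensional PPF descriptor F(m1, m2) = (||d||, ∠(n1, d), ∠(n2, d), ∠(n1, n2)) for a pair of oriented surface points; this is the standard definition from the PPF literature, not the authors' code.

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """Classic 4D point-pair feature for two oriented surface points.

    p1, p2: 3D point positions.   n1, n2: corresponding unit normals.
    Returns (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)).
    """
    d = p2 - p1
    dist = np.linalg.norm(d)
    d_unit = d / (dist + 1e-12)

    def angle(a, b):
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

    return np.array([dist, angle(n1, d_unit), angle(n2, d_unit), angle(n1, n2)])
```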
Estimating the 6D pose of objects is an essential computer vision task. However, most conventional approaches rely on camera data from a single perspective and therefore suffer from occlusions. We overcome this problem with a novel multi-view 6D pose estimation method called MV6D, which accurately predicts the 6D poses of all objects in a cluttered scene based on RGB-D images from multiple perspectives. We base our method on the PVN3D network, which uses a single RGB-D image to predict keypoints of the target objects. We extend this approach by using a combined point cloud from multiple views and fusing the images from each view with a dense fusion layer. In contrast to current multi-view detection networks such as CosyPose, our MV6D can learn the fusion of multiple perspectives in an end-to-end manner and does not require multiple prediction stages or subsequent fine-tuning of the predictions. Furthermore, we present three novel photorealistic datasets of cluttered scenes with heavy occlusions. All of them contain RGB-D images from multiple perspectives with annotations such as semantic segmentation and 6D poses. MV6D significantly outperforms the state of the art in multi-view 6D pose estimation even in cases where the camera poses are known imprecisely. Furthermore, we show that our approach is robust towards dynamic camera setups and that its accuracy increases incrementally with an increasing number of perspectives.
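The combined point cloud mentioned above comes from the standard multi-view fusion step: back-project each depth map through its intrinsics and transform the result into a common world frame using the camera poses. A purely illustrative sketch, assuming metric depth maps and known camera-to-world poses:

```python
import numpy as np

def fuse_point_clouds(depths, intrinsics, cam_to_world_poses):
    """Merge per-view depth maps into a single point cloud in the world frame.

    depths:             list of (H, W) depth maps in meters.
    intrinsics:         list of (3, 3) camera matrices.
    cam_to_world_poses: list of (4, 4) camera-to-world transforms.
    """
    fused = []
    for depth, K, T_cw in zip(depths, intrinsics, cam_to_world_poses):
        h, w = depth.shape
        v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        z = depth.ravel()
        valid = z > 0
        pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])[:, valid]
        cam_pts = np.linalg.inv(K) @ pix * z[valid]        # back-project pixels
        cam_pts_h = np.vstack([cam_pts, np.ones(valid.sum())])
        world_pts = (T_cw @ cam_pts_h)[:3]                 # into the world frame
        fused.append(world_pts.T)
    return np.concatenate(fused, axis=0)
```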
We present a novel approach to category-level 6D object pose and size estimation. To tackle intra-class shape variation, we learn a canonical shape space (CASS), a unified representation for a large variety of instances of a certain object category. In particular, CASS is modeled as the latent space of a deep generative model of canonical 3D shapes with normalized pose. We train a variational autoencoder (VAE) for generating 3D point clouds in the canonical space from an RGBD image. The VAE is trained in a cross-category fashion, exploiting publicly available large 3D shape repositories. Since the 3D point cloud is generated in normalized pose (with actual size), the encoder of the VAE learns a view-factorized RGBD embedding: it maps an RGBD image taken from an arbitrary view into a pose-independent 3D shape representation. Object pose is then estimated by contrasting this representation against a pose-dependent feature of the input RGBD extracted with a separate deep neural network. We integrate the learning of CASS and of pose and size estimation into an end-to-end trainable network, achieving state-of-the-art performance.
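For readers unfamiliar with the generative component, the block below is a heavily simplified, self-contained PyTorch sketch of a point-cloud VAE (unconditional, unlike the RGBD-conditioned model described above); the class name, layer sizes, and latent dimension are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PointCloudVAE(nn.Module):
    """Minimal VAE that encodes a point cloud and decodes one in canonical pose."""

    def __init__(self, num_points=1024, latent_dim=128):
        super().__init__()
        self.num_points = num_points
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 256, 1), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3),
        )

    def forward(self, points):                       # points: (B, N, 3)
        feat = self.encoder(points.transpose(1, 2))  # (B, 256, N)
        feat = feat.max(dim=2).values                # permutation-invariant pooling
        mu, logvar = self.fc_mu(feat), self.fc_logvar(feat)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        recon = self.decoder(z).view(-1, self.num_points, 3)
        return recon, mu, logvar
```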
A key technical challenge in performing 6D object pose estimation from RGB-D image is to fully leverage the two complementary data sources. Prior works either extract information from the RGB image and depth separately or use costly post-processing steps, limiting their performances in highly cluttered scenes and real-time applications. In this work, we present DenseFusion, a generic framework for estimating 6D pose of a set of known objects from RGB-D images. DenseFusion is a heterogeneous architecture that processes the two data sources individually and uses a novel dense fusion network to extract pixel-wise dense feature embedding, from which the pose is estimated. Furthermore, we integrate an end-to-end iterative pose refinement procedure that further improves the pose estimation while achieving near real-time inference. Our experiments show that our method outperforms state-of-the-art approaches in two datasets, YCB-Video and LineMOD. We also deploy our proposed method to a real robot to grasp and manipulate objects based on the estimated pose. Our code and video are available at https://sites.google.com/view/densefusion/.
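The pixel-wise fusion step described above pairs each pixel's color embedding with the geometry embedding of the corresponding 3D point and augments the result with a pooled global feature. Below is a hedged, much-simplified PyTorch sketch of that fusion block, not the full DenseFusion architecture; the dimensions and module name are illustrative.

```python
import torch
import torch.nn as nn

class DenseFusionBlock(nn.Module):
    """Fuse per-pixel RGB and point-cloud embeddings, DenseFusion-style."""

    def __init__(self, rgb_dim=32, geo_dim=32, global_dim=128):
        super().__init__()
        self.local = nn.Conv1d(rgb_dim + geo_dim, 128, kernel_size=1)
        self.to_global = nn.Conv1d(128, global_dim, kernel_size=1)

    def forward(self, rgb_feat, geo_feat):
        # rgb_feat, geo_feat: (B, C, N) per-pixel / per-point embeddings
        local = torch.relu(self.local(torch.cat([rgb_feat, geo_feat], dim=1)))
        glob = self.to_global(local).max(dim=2, keepdim=True).values
        glob = glob.expand(-1, -1, local.shape[2])   # broadcast the global feature
        return torch.cat([local, glob], dim=1)       # (B, 128 + global_dim, N)
```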
We introduce a camera relocalization pipeline that combines absolute pose regression (APR) with direct feature matching. By incorporating exposure-adaptive novel view synthesis, our method successfully addresses photometric distortions in outdoor environments that existing photometric-based methods fail to handle. With domain-invariant feature matching, our solution improves pose regression accuracy through semi-supervised learning on unlabeled data. In particular, the pipeline consists of two components: a novel view synthesizer and DFNet. The former synthesizes novel views that compensate for changes in exposure, and the latter regresses camera poses and extracts robust features that close the domain gap between real and synthetic images. Furthermore, we introduce an online synthetic data generation scheme. We show that these approaches effectively enhance camera pose estimation in both indoor and outdoor scenes. As a result, our method outperforms existing single-image APR methods by as much as 56%, achieving performance comparable to 3D-structure-based methods.