We present CPO, a fast and robust algorithm that localizes a 2D panorama against a 3D point cloud of a scene that may contain changes. To handle scene changes robustly, our approach deviates from conventional feature point matching and focuses on the spatial context provided by panorama images. Specifically, we propose efficient color histogram generation and subsequent robust localization using score maps. By exploiting the unique equivariance of spherical projections, we generate color histograms for a large number of candidate camera poses very quickly, without explicitly rendering images for every candidate pose. We accumulate the regional consistency of the panorama and the point cloud into 2D/3D score maps and use them to weight the input color values, further increasing robustness. The weighted color distributions quickly find good initial poses and achieve stable convergence for gradient-based optimization. CPO is lightweight and achieves effective localization in all tested scenarios, showing stable performance despite scene changes, repetitive structures, or featureless regions, which are typical challenges for visual localization with perspective cameras.
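To make the histogram-plus-score-map idea concrete, here is a minimal numpy sketch of score-weighted color histogram comparison; the histogram resolution, the intersection similarity, and the random stand-in data are our illustrative choices, not details from the paper:

```python
import numpy as np

def weighted_color_histogram(colors, weights, bins=8):
    """Accumulate a score-weighted RGB histogram.
    colors: (N, 3) floats in [0, 1]; weights: (N,) score-map values."""
    idx = np.clip((colors * bins).astype(int), 0, bins - 1)
    hist = np.zeros((bins, bins, bins))
    np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), weights)
    return hist / max(hist.sum(), 1e-9)

def histogram_intersection(h1, h2):
    """Similarity of two normalized histograms; higher is better."""
    return np.minimum(h1, h2).sum()

# Candidate poses would be ranked by comparing the panorama's histogram
# against histograms of the point-cloud colors visible from each pose.
rng = np.random.default_rng(0)
h_pano = weighted_color_histogram(rng.random((1000, 3)), rng.random(1000))
h_cand = weighted_color_histogram(rng.random((1000, 3)), rng.random(1000))
print(f"similarity: {histogram_intersection(h_pano, h_cand):.3f}")
```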
In this study, we propose a novel visual localization approach to accurately estimate the six degrees-of-freedom (6-DoF) pose of a robot within a 3D LiDAR map based on visual data from an RGB camera. The 3D map is obtained with an advanced LiDAR-based simultaneous localization and mapping (SLAM) algorithm capable of collecting a precise sparse map. Features extracted from the camera images are matched against points of the 3D map, and a geometric optimization problem is then solved to achieve precise visual localization. Our approach allows a scout robot equipped with an expensive LiDAR to be used only once, for mapping the environment, while multiple operational robots carrying only RGB cameras perform mission tasks with localization accuracy higher than common camera-based solutions. The approach was tested on a custom dataset collected at the Skolkovo Institute of Science and Technology (Skoltech). In evaluating localization accuracy, we achieved centimeter-level accuracy, with a median translation error of 1.3 cm. Precise localization achieved with cameras alone makes it possible to use autonomous mobile robots to solve the most complex tasks that demand high localization accuracy.
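The "geometric optimization" step in such 2D-3D matching pipelines is typically a PnP solve inside a RANSAC loop; a minimal, self-contained sketch with OpenCV, where synthetic correspondences stand in for real feature-to-map matches:

```python
import cv2
import numpy as np

# Synthetic stand-ins for matched 3D map points and 2D image keypoints.
rng = np.random.default_rng(1)
pts_3d = rng.uniform(-2, 2, (50, 3)).astype(np.float32)
pts_3d[:, 2] += 6.0  # keep points in front of the camera
K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float32)
pts_2d, _ = cv2.projectPoints(pts_3d, np.zeros(3), np.zeros(3), K, None)
pts_2d = pts_2d.reshape(-1, 2) + rng.normal(0, 0.5, (50, 2))  # pixel noise

# Robust 6-DoF pose from 2D-3D correspondences via PnP + RANSAC.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    pts_3d, pts_2d.astype(np.float32), K, None, reprojectionError=3.0)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the camera pose
    print("inliers:", len(inliers), "\ntranslation:", tvec.ravel())
```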
In this paper, we propose to go beyond the well-established vision-based localization approach that relies on visual descriptor matching between a query image and a 3D point cloud. While matching keypoints via visual descriptors makes localization highly accurate, it comes with significant storage demands, raises privacy concerns, and requires long-term updates of the descriptors. To gracefully cope with these practical challenges of large-scale localization, we present GoMatch, an alternative to visual-based matching that relies solely on geometric information to match image keypoints to a map, represented as a set of bearing vectors. Our novel bearing-vector representation of 3D points significantly alleviates the cross-modal challenge in geometry-based matching that prevented prior work from tackling localization in realistic environments. With additional careful architectural design, GoMatch improves over prior geometry-based matching work by reductions of (10.67m, 95.7°) and (1.43m, 34.7°) in average median pose errors on Cambridge Landmarks and 7-Scenes, while requiring as little as 1.5%/1.7% of the storage capacity of the best visual-based matching alternatives. This confirms its potential and feasibility for real-world localization, and opens the door to future efforts on city-scale visual localization methods that do not need to store visual descriptors.
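For intuition, a bearing vector is simply the unit ray from the camera center toward an observation, which puts 2D keypoints and 3D map points into one shared representation; a small numpy sketch (the intrinsics and pose values are illustrative):

```python
import numpy as np

def keypoint_bearings(kpts, K):
    """Unit rays from pixel keypoints: b = K^-1 [u, v, 1]^T, normalized."""
    uv1 = np.hstack([kpts, np.ones((len(kpts), 1))])
    rays = (np.linalg.inv(K) @ uv1.T).T
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)

def point_bearings(points_w, R, t):
    """Unit rays toward 3D map points seen from a camera with pose (R, t)."""
    pts_c = (R @ points_w.T).T + t
    return pts_c / np.linalg.norm(pts_c, axis=1, keepdims=True)

K = np.array([[600.0, 0, 320], [0, 600, 240], [0, 0, 1]])
b_img = keypoint_bearings(np.array([[320.0, 240.0], [100.0, 50.0]]), K)
b_map = point_bearings(np.array([[0.0, 0.0, 5.0]]), np.eye(3), np.zeros(3))
print(b_img, b_map, sep="\n")  # matching now compares unit vectors only
```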
Visual relocalization has been widely discussed in 3D vision: given a pre-built 3D visual map, estimate the 6-DoF (degrees of freedom) pose of a query image. Relocalization in large-scale indoor environments enables attractive applications such as augmented reality and robot navigation. However, appearance changes quickly in such environments as the camera moves, which is challenging for relocalization systems. To address this problem, we propose a virtual-view-synthesis-based approach, RenderNet, to enrich the database and refine poses for this particular scenario. Instead of rendering real images, which requires high-quality 3D models, we opt to directly render the necessary global and local features of virtual viewpoints and apply them in the subsequent image retrieval and feature matching operations, respectively. The proposed method largely improves performance in large-scale indoor environments, e.g., achieving improvements of 7.1% and 12.2% on the InLoc dataset.
Estimating the 6D pose of known objects is important for robots to interact with the real world. The problem is challenging due to the variety of objects as well as the complexity of a scene caused by clutter and occlusions between objects. In this work, we introduce PoseCNN, a new Convolutional Neural Network for 6D object pose estimation. PoseCNN estimates the 3D translation of an object by localizing its center in the image and predicting its distance from the camera. The 3D rotation of the object is estimated by regressing to a quaternion representation. We also introduce a novel loss function that enables PoseCNN to handle symmetric objects. In addition, we contribute a large scale video dataset for 6D object pose estimation named the YCB-Video dataset. Our dataset provides accurate 6D poses of 21 objects from the YCB dataset observed in 92 videos with 133,827 frames. We conduct extensive experiments on our YCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN is highly robust to occlusions, can handle symmetric objects, and provide accurate pose estimation using only color images as input. When using depth data to further refine the poses, our approach achieves state-of-the-art results on the challenging OccludedLINEMOD dataset. Our code and dataset are available at https://rse-lab.cs.washington.edu/projects/posecnn/.
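The symmetry-handling loss matches each model point under the predicted rotation to its nearest model point under the ground-truth rotation, so rotations that map a symmetric object onto itself are not penalized. A simplified, rotation-only numpy sketch of this idea (the actual loss also involves the quaternion parameterization and translation):

```python
import numpy as np

def shapematch_loss(R_pred, R_gt, model_pts):
    """Nearest-point matching loss: for each point rotated by the predicted
    pose, find the closest point rotated by the ground-truth pose. Symmetric
    poses that map the model onto itself then incur (near) zero loss."""
    p = model_pts @ R_pred.T          # (m, 3) predicted-pose points
    g = model_pts @ R_gt.T            # (m, 3) ground-truth-pose points
    d2 = ((p[:, None, :] - g[None, :, :]) ** 2).sum(-1)  # (m, m) sq. dists
    return d2.min(axis=1).mean() / 2.0

# A square in the XY plane is invariant under a 90-degree z-rotation,
# so the loss stays ~0 even though the rotations differ.
square = np.array([[1, 1, 0], [-1, 1, 0], [-1, -1, 0], [1, -1, 0]], float)
Rz90 = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
print(shapematch_loss(Rz90, np.eye(3), square))  # ~0.0
```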
One of the key criticisms of deep learning is that large amounts of expensive and hard-to-acquire training data are required to train models with high performance and good generalization. Focusing on the task of monocular camera pose estimation via scene coordinate regression (SCR), we describe a new method for domain adaptation of networks for camera pose estimation (DANCE), which enables training models without access to any labels on the target task. DANCE requires only unlabeled images (with no known poses, ordering, or scene coordinate labels) and a 3D representation of the space (e.g., a scanned point cloud), both of which can be captured with minimal effort using off-the-shelf commodity hardware. DANCE renders labeled synthetic images from the 3D model and bridges the inevitable domain gap between synthetic and real images by applying unsupervised image-level domain adaptation techniques (unpaired image-to-image translation). When tested on real images, the SCR model trained with DANCE achieved performance comparable to its fully supervised counterpart (both using PnP-RANSAC for final pose estimation) at a fraction of the cost. Our code and dataset are available at https://github.com/jacklangerman/dance
Cross-scene model adaptation is crucial for camera relocalization in real-world scenarios. It is often preferable that a pre-learned model can be quickly adapted to a novel scene with as few training samples as possible. However, existing state-of-the-art approaches can hardly support such few-shot scene adaptation due to the entanglement of image feature extraction and scene coordinate regression. To address this issue, we approach camera relocalization with a decoupled solution in which feature extraction, coordinate regression, and pose estimation are performed separately. Our key insight is that the feature encoder used for coordinate regression should be learned while removing the distracting factor of coordinate systems: the encoder is trained on multiple scenes to obtain general feature representations and, more importantly, coordinate-system-insensitive features. With this feature prior, combined with a coordinate regressor, few-shot observations in a new scene are much easier to connect with the 3D world than with existing integrated solutions. Experiments demonstrate the superiority of our method over state-of-the-art approaches, yielding higher accuracy on several scenes with diverse visual appearance and viewpoint distributions.
We propose a three-stage 6-DoF object detection method called DPODv2 (Dense Pose Object Detector) that relies on dense correspondences. We combine a 2D object detector with a dense correspondence estimation network and a multi-view pose refinement method to estimate the full 6-DoF pose. Unlike other deep learning methods that are typically restricted to monocular RGB images, we propose a unified deep learning network that allows different imaging modalities to be used (RGB or depth). Moreover, we propose a novel pose refinement method based on differentiable rendering. The main concept is to compare predicted and rendered correspondences across multiple views to obtain a pose consistent with the predicted correspondences in all views. Our proposed method is evaluated rigorously on different data modalities and types of training data in a controlled setup. The main conclusion is that RGB excels at correspondence estimation, while depth contributes to pose accuracy if good 3D-3D correspondences are available. Naturally, their combination achieves the best overall performance. We perform an extensive evaluation and ablation study to analyze and validate the results on several challenging datasets. DPODv2 achieves excellent results on all of them while remaining fast and scalable, independent of the data modality and the type of training data used.
We present a robust, privacy-preserving visual localization algorithm using event cameras. While event cameras can potentially enable robust localization due to high dynamic range and small motion blur, the sensors exhibit large domain gaps making it difficult to directly apply conventional image-based localization algorithms. To mitigate the gap, we propose applying event-to-image conversion prior to localization which leads to stable localization. In the privacy perspective, event cameras capture only a fraction of visual information compared to normal cameras, and thus can naturally hide sensitive visual details. To further enhance the privacy protection in our event-based pipeline, we introduce privacy protection at two levels, namely sensor and network level. Sensor level protection aims at hiding facial details with lightweight filtering while network level protection targets hiding the entire user's view in private scene applications using a novel neural network inference pipeline. Both levels of protection involve lightweight computation and incur only a small performance loss. We thus expect our method to serve as a building block for practical location-based services using event cameras. The code and dataset will be made public through the following link: https://github.com/82magnolia/event_localization.
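Event-to-image conversion itself is learned, but its input is usually a simple spatio-temporal binning of the raw event stream; a minimal sketch of that binning step (the resolution, bin count, and random events here are illustrative assumptions, not details from the paper):

```python
import numpy as np

def events_to_voxel_grid(x, y, t, p, H, W, n_bins=5):
    """Accumulate events (pixel x, y, timestamp t, polarity p in {-1, +1})
    into a (n_bins, H, W) grid, the usual input to event-to-image networks."""
    grid = np.zeros((n_bins, H, W))
    tn = (t - t.min()) / max(t.max() - t.min(), 1e-9)  # normalize to [0, 1]
    b = np.clip((tn * n_bins).astype(int), 0, n_bins - 1)
    np.add.at(grid, (b, y, x), p)
    return grid

rng = np.random.default_rng(2)
n = 10_000
grid = events_to_voxel_grid(
    rng.integers(0, 320, n), rng.integers(0, 240, n),
    np.sort(rng.random(n)), rng.choice([-1, 1], n), H=240, W=320)
print(grid.shape)  # (5, 240, 320); fed to a reconstruction network
```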
Estimating 6D poses of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the input image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using a disentangled representation of 3D location and 3D orientation and an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over state-of-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.
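The iterative matching procedure reduces to a render-and-compare loop: render the current estimate, predict a relative pose from the image pair, compose it onto the estimate, and repeat. A schematic sketch with hypothetical stand-ins for the renderer and the trained network:

```python
import numpy as np

# Hypothetical stand-ins: a real system would plug in a renderer and the
# trained matching network here. The loop structure is the point.
def render(pose):                            # -> image of the object at pose
    return np.zeros((128, 128, 3))

def predict_delta_pose(rendered, observed):  # -> relative 4x4 correction
    return np.eye(4)

def refine_pose(pose, observed_img, n_iters=4):
    """Render-and-compare refinement in the spirit of DeepIM: repeatedly
    render the current estimate, predict a relative pose from the image
    pair, and compose it onto the estimate."""
    for _ in range(n_iters):
        delta = predict_delta_pose(render(pose), observed_img)
        pose = delta @ pose                  # left-compose the SE(3) correction
    return pose

print(refine_pose(np.eye(4), np.zeros((128, 128, 3))))
```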
Visual (re)localization addresses the problem of estimating the 6-DoF (degrees of freedom) camera pose of a query image captured in a known scene, a key building block of many computer vision and robotics applications. Recent advances in structure-based localization build 2D-3D correspondences for camera pose optimization by memorizing the mapping from image pixels to scene coordinates with neural networks. However, such memorization requires training on large numbers of posed images in each scene, which is heavy and inefficient. In contrast, a few images are usually sufficient to cover the main regions of a scene for a human operator to perform visual localization. In this paper, we propose a scene region classification approach to achieve fast and effective scene memorization from few-shot images. Our insight is to leverage a) a pre-learned feature extractor, b) a scene region classifier, and c) a meta-learning strategy to accelerate training while mitigating overfitting. We evaluate our method on both indoor and outdoor benchmarks. The experiments validate the effectiveness of our method in the few-shot setting, and the training time is dramatically reduced to only a few minutes. Code available at: https://github.com/siyandong/src
We introduce an approach for recovering the 6D pose of multiple known objects in a scene captured by a set of input images with unknown camera viewpoints. First, we present a single-view single-object 6D pose estimation method, which we use to generate 6D object pose hypotheses. Second, we develop a robust method for matching individual 6D object pose hypotheses across different input images in order to jointly estimate camera viewpoints and 6D poses of all objects in a single consistent scene. Our approach explicitly handles object symmetries, does not require depth measurements, is robust to missing or incorrect object hypotheses, and automatically recovers the number of objects in the scene. Third, we develop a method for global scene refinement given multiple object hypotheses and their correspondences across views. This is achieved by solving an object-level bundle adjustment problem that refines the poses of cameras and objects to minimize the reprojection error in all views. We demonstrate that the proposed method, dubbed CosyPose, outperforms current state-of-the-art results for single-view and multi-view 6D object pose estimation by a large margin on two challenging benchmarks: the YCB-Video and T-LESS datasets. Code and pre-trained models are available on the project webpage.
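Concretely, the object-level bundle adjustment in the third stage can be written as a joint reprojection-error minimization over camera and object poses (the notation below is ours, not the paper's):

```latex
\min_{\{T_v\},\,\{T_o\}} \;
\sum_{v \in \text{views}} \sum_{o \in \text{objects}} \sum_{k}
\bigl\| \pi\bigl( K_v \, T_v \, T_o \, X_{o,k} \bigr) - x_{v,o,k} \bigr\|^2
```

Here T_v is the pose of camera v, T_o the pose of object o, X_{o,k} the k-th 3D point of object o's model, x_{v,o,k} its detected 2D location in view v, and π the perspective projection.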
Camera pose estimation is a key step in standard 3D reconstruction pipelines that operate on a dense set of images of a single object or scene. However, methods for pose estimation often fail when only a few images are available because they rely on the ability to robustly identify and match visual features between image pairs. While these methods can work robustly with dense camera views, capturing a large set of images can be time-consuming or impractical. We propose SparsePose for recovering accurate camera poses given a sparse set of wide-baseline images (fewer than 10). The method learns to regress initial camera poses and then iteratively refine them after training on a large-scale dataset of objects (Co3D: Common Objects in 3D). SparsePose significantly outperforms conventional and learning-based baselines in recovering accurate camera rotations and translations. We also demonstrate our pipeline for high-fidelity 3D reconstruction using only 5-9 images of an object.
Visual localization, i.e., the problem of camera pose estimation, is a central component of applications such as autonomous robots and augmented reality systems. The dominant approach in the literature, shown to scale to large scenes and to handle complex illumination and seasonal changes, is based on local features extracted from images. The scene representation is a sparse Structure-from-Motion point cloud tied to a specific type of local feature. Switching to another feature type requires an expensive feature matching step between the database images used to construct the point cloud. In this work, we explore a more flexible alternative based on dense 3D meshes that does not require feature matching between database images to build the scene representation. We show that this approach can achieve state-of-the-art results. We further show that surprisingly competitive results can be obtained when extracting features from renderings of these meshes without any neural rendering stage, and even when rendering the raw scene geometry without color or texture. Our results suggest that dense 3D model-based representations are a promising alternative to existing representations, and point to interesting and challenging directions for future research.
We present a novel dataset named HPointLoc, specially designed for exploring capabilities of visual place recognition in indoor environments and loop detection in simultaneous localization and mapping. The loop detection sub-task is especially relevant when a robot with an on-board RGB-D camera can drive past the same place ("Point") at different angles. The dataset is based on the popular Habitat simulator, in which it is possible to generate photorealistic indoor scenes using both own sensor data and open datasets, such as Matterport3D. To study the main stages of solving the place recognition problem on the HPointLoc dataset, we propose a new modular approach named PNTR. It first performs image retrieval with the Patch-NetVLAD method, then extracts keypoints and matches them using R2D2, LoFTR or SuperPoint with SuperGlue, and finally performs a camera pose optimization step with TEASER++. Such a solution to the place recognition problem has not been previously studied in existing publications. The PNTR approach has shown the best quality metrics on the HPointLoc dataset and has a high potential for real use in localization systems for unmanned vehicles. The proposed dataset and framework are publicly available: https://github.com/metra4ok/HPointLoc.
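The final PNTR stage is a rigid 3D-3D registration; TEASER++ adds certifiable outlier robustness around the classic closed-form alignment, whose inlier core is just the Kabsch solution sketched below (synthetic noiseless data for illustration):

```python
import numpy as np

def rigid_align(src, dst):
    """Closed-form least-squares rotation/translation mapping src -> dst
    (Kabsch). TEASER++ wraps this kind of estimate with outlier-robust
    scale/rotation/translation solvers; this is only the inlier core."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T  # reflection-corrected rotation
    return R, mu_d - R @ mu_s

rng = np.random.default_rng(3)
src = rng.random((30, 3))
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
if np.linalg.det(R_true) < 0:  # make it a proper rotation
    R_true[:, 0] *= -1
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))  # True [ 0.5 -0.2  1. ]
```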
We introduce a camera relocalization pipeline that combines absolute pose regression (APR) and direct feature matching. By incorporating exposure-adaptive novel view synthesis, our method successfully addresses photometric distortions in outdoor environments that existing photometric-based methods fail to handle. With domain-invariant feature matching, our solution improves pose regression accuracy through semi-supervised learning on unlabeled data. In particular, the pipeline consists of two components: a novel view synthesizer and DFNet. The former synthesizes novel views that compensate for changes in exposure, and the latter regresses camera poses and extracts robust features that close the domain gap between real and synthetic images. Furthermore, we introduce an online synthetic data generation scheme. We show that these approaches effectively enhance camera pose estimation in both indoor and outdoor scenes. As a result, our method achieves state-of-the-art accuracy, outperforming existing single-image APR methods by as much as 56% and performing comparably to 3D structure-based methods.
Recent advances in implicit 3D representations, i.e., Neural Radiance Fields (NeRFs), make accurate and photorealistic 3D reconstruction possible in a differentiable manner. These new representations can effectively convey the information of hundreds of high-resolution images in one compact format and allow photorealistic synthesis of novel views. In this work, using a variant of NeRF called Plenoxels, we create the first large-scale implicit representation dataset for perception tasks, called PeRFception, which consists of two parts that incorporate both object-centric and scene-centric scans, for classification and segmentation. It shows a significant memory compression rate (96.4%) relative to the original datasets, while containing both 2D and 3D information in a unified form. We construct classification and segmentation models that directly take this implicit format as input, and also propose a novel augmentation technique to avoid overfitting to the backgrounds of images. The code and data are publicly available at https://postech-cvlab.github.io/perfception
We present an end-to-end deep learning architecture for depth map inference from multi-view images. In the network, we first extract deep visual image features, and then build the 3D cost volume upon the reference camera frustum via the differentiable homography warping. Next, we apply 3D convolutions to regularize and regress the initial depth map, which is then refined with the reference image to generate the final output. Our framework flexibly adapts arbitrary N-view inputs using a variance-based cost metric that maps multiple features into one cost feature. The proposed MVSNet is demonstrated on the large-scale indoor DTU dataset. With simple post-processing, our method not only significantly outperforms previous state-of-the-arts, but also is several times faster in runtime. We also evaluate MVSNet on the complex outdoor Tanks and Temples dataset, where our method ranks first before April 18, 2018 without any fine-tuning, showing the strong generalization ability of MVSNet.
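The variance-based cost metric collapses the N warped feature volumes into a single cost volume by taking the per-element variance across views, which is what makes the network independent of the number of input views; a minimal sketch with illustrative shapes:

```python
import numpy as np

def variance_cost_volume(feature_volumes):
    """feature_volumes: (N_views, C, D, H, W) warped feature volumes.
    Returns the (C, D, H, W) cost volume: per-element variance across
    views, so any number N of input views maps to one fixed-size cost."""
    mean = feature_volumes.mean(axis=0)
    return ((feature_volumes - mean) ** 2).mean(axis=0)

vols = np.random.default_rng(4).random((5, 8, 16, 32, 32))  # N=5 views
cost = variance_cost_volume(vols)
print(cost.shape)  # (8, 16, 32, 32), independent of the number of views
```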
A key technical challenge in performing 6D object pose estimation from RGB-D image is to fully leverage the two complementary data sources. Prior works either extract information from the RGB image and depth separately or use costly post-processing steps, limiting their performances in highly cluttered scenes and real-time applications. In this work, we present DenseFusion, a generic framework for estimating 6D pose of a set of known objects from RGB-D images. DenseFusion is a heterogeneous architecture that processes the two data sources individually and uses a novel dense fusion network to extract pixel-wise dense feature embedding, from which the pose is estimated. Furthermore, we integrate an end-to-end iterative pose refinement procedure that further improves the pose estimation while achieving near real-time inference. Our experiments show that our method outperforms state-of-the-art approaches in two datasets, YCB-Video and LineMOD. We also deploy our proposed method to a real robot to grasp and manipulate objects based on the estimated pose. Our code and video are available at https://sites.google.com/view/densefusion/.
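The core of pixel-wise dense fusion is to keep the color and geometry embeddings aligned per pixel and concatenate them, together with a broadcast global feature, before pose regression; a schematic PyTorch sketch with assumed channel sizes (the real network builds its global feature with an MLP and pooling):

```python
import torch

def dense_fuse(rgb_emb, geo_emb):
    """rgb_emb, geo_emb: (B, C_rgb, N) and (B, C_geo, N) per-pixel embeddings
    for the same N sampled pixels/points. Returns per-pixel fused features
    with a broadcast global context vector appended, DenseFusion-style."""
    local = torch.cat([rgb_emb, geo_emb], dim=1)   # (B, C_rgb+C_geo, N)
    glob = local.mean(dim=2, keepdim=True)         # simple global pooling
    glob = glob.expand(-1, -1, local.shape[2])     # broadcast to each pixel
    return torch.cat([local, glob], dim=1)         # (B, 2*(C_rgb+C_geo), N)

fused = dense_fuse(torch.randn(2, 64, 500), torch.randn(2, 64, 500))
print(fused.shape)  # torch.Size([2, 256, 500])
```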
We present a visual localization system that learns to estimate camera poses in the real world with the help of synthetic data. Despite significant progress in recent years, most learning-based approaches to visual localization target a single domain and require a dense database of geo-tagged images to function well. To mitigate the data scarcity issue and improve the scalability of neural localization models, we introduce TOPO-DataGen, a versatile synthetic data generation tool that traverses smoothly between the real and virtual worlds, hinged on the geographic camera viewpoint. New large-scale sim-to-real benchmark datasets are proposed to showcase and evaluate the utility of the said synthetic data. Our experiments show that synthetic data generically enhances neural network performance on real data. Furthermore, we introduce CrossLoc, a cross-modal visual representation learning approach to pose estimation that makes full use of scene coordinate ground truth via self-supervision. Without any extra data, CrossLoc significantly outperforms state-of-the-art methods and achieves substantially higher real-data sample efficiency. Our code is available at https://github.com/topo-epfl/crossloc