This paper introduces a novel multi-view 6 DoF object pose refinement method focused on improving methods trained on synthetic data. It builds on the DPOD detector, which produces dense 2D-3D correspondences in each frame. We choose to use multiple frames with known relative camera transformations, as this allows geometric constraints to be introduced via an interpretable ICP-like loss function. The loss is implemented with a differentiable renderer and optimized iteratively. We also show that the full detection and refinement pipeline, trained solely on synthetic data, can be used to auto-label real data. We perform quantitative evaluation on the LineMOD, Occlusion, Homebrewed and YCB-V datasets and report excellent performance compared to state-of-the-art methods trained on synthetic and real data. We empirically demonstrate that our approach requires only a few frames and is robust to close camera locations and noise in the extrinsic camera calibration, making its practical use easier and more ubiquitous.
We propose a three-stage 6 DoF object detection method called DPODv2 (Dense Pose Object Detector) that relies on dense correspondences. We combine a 2D object detector with a dense correspondence estimation network and a multi-view pose refinement method to estimate the full 6 DoF pose. Unlike other deep learning methods that are typically restricted to monocular RGB images, we propose a unified deep learning network that allows different imaging modalities (RGB or depth) to be used. Furthermore, we propose a novel pose refinement method based on differentiable rendering. The main concept is to compare predicted and rendered correspondences in multiple views to obtain a pose consistent with the correspondences predicted in all views. Our proposed method is evaluated rigorously on different data modalities and types of training data in controlled settings. The main conclusions are that RGB excels at correspondence estimation, while depth contributes to pose accuracy if good 3D-3D correspondences are available. Naturally, their combination achieves the best overall performance. We conduct extensive evaluations and ablation studies to analyze and validate the results on several challenging datasets. DPODv2 achieves excellent results on all of them while remaining fast and scalable, independent of the used data modality and type of training data.
We introduce an approach for recovering the 6D pose of multiple known objects in a scene captured by a set of input images with unknown camera viewpoints. First, we present a single-view single-object 6D pose estimation method, which we use to generate 6D object pose hypotheses. Second, we develop a robust method for matching individual 6D object pose hypotheses across different input images in order to jointly estimate camera viewpoints and 6D poses of all objects in a single consistent scene. Our approach explicitly handles object symmetries, does not require depth measurements, is robust to missing or incorrect object hypotheses, and automatically recovers the number of objects in the scene. Third, we develop a method for global scene refinement given multiple object hypotheses and their correspondences across views. This is achieved by solving an object-level bundle adjustment problem that refines the poses of cameras and objects to minimize the reprojection error in all views. We demonstrate that the proposed method, dubbed CosyPose, outperforms current state-of-the-art results for single-view and multi-view 6D object pose estimation by a large margin on two challenging benchmarks: the YCB-Video and T-LESS datasets. Code and pre-trained models are available on the project webpage.
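To make the object-level bundle adjustment idea concrete, below is a minimal sketch (not CosyPose's implementation) of jointly refining camera and object poses by minimizing reprojection error; the use of a few object-frame keypoints as the reprojected quantity, the world-to-camera parameterization, and the function names are illustrative assumptions.

```python
# Minimal object-level bundle adjustment sketch (illustrative, not CosyPose's code).
# Unknowns: one SE(3) pose per camera (world -> camera) and per object (object -> world);
# residuals: reprojection error of a few 3D points attached to each object.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def project(pts_cam, K):
    """Pinhole projection of Nx3 camera-frame points to Nx2 pixel coordinates."""
    uv = pts_cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def unpack(x, n_cams, n_objs):
    """x holds [rotvec(3), t(3)] per camera followed by the same per object."""
    poses = x.reshape(n_cams + n_objs, 6)
    Rs = R.from_rotvec(poses[:, :3]).as_matrix()
    ts = poses[:, 3:]
    return Rs[:n_cams], ts[:n_cams], Rs[n_cams:], ts[n_cams:]

def residuals(x, K, model_pts, observations, n_cams, n_objs):
    """observations: list of (cam_idx, obj_idx, 2D detections of model_pts[obj_idx])."""
    Rc, tc, Ro, to = unpack(x, n_cams, n_objs)
    res = []
    for c, o, uv_obs in observations:
        pts_world = model_pts[o] @ Ro[o].T + to[o]   # object frame -> world frame
        pts_cam = pts_world @ Rc[c].T + tc[c]        # world frame -> camera c
        res.append((project(pts_cam, K) - uv_obs).ravel())
    return np.concatenate(res)

def refine_scene(x0, K, model_pts, observations, n_cams, n_objs):
    # A robust loss downweights views with missing or incorrect hypotheses.
    sol = least_squares(residuals, x0,
                        args=(K, model_pts, observations, n_cams, n_objs),
                        loss="huber", f_scale=2.0)
    return sol.x
```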
Estimating 6D poses of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the input image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using a disentangled representation of 3D location and 3D orientation and an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over state-of-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.
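The render-and-compare loop itself is straightforward; the sketch below shows only the outer iteration, with the renderer and matching network left as hypothetical placeholders (`render_view` and `predict_delta_pose` are not DeepIM's actual interfaces), and uses a plain camera-frame SE(3) composition rather than DeepIM's disentangled image-space parameterization.

```python
# Iterative render-and-compare refinement loop (schematic sketch only).
import numpy as np

def compose(delta_R, delta_t, R, t):
    """Left-compose a relative camera-frame update onto the current pose."""
    return delta_R @ R, delta_R @ t + delta_t

def refine_pose(observed_img, R, t, render_view, predict_delta_pose, n_iters=4):
    """R (3x3), t (3,): initial pose of the object in the camera frame.
    render_view and predict_delta_pose are caller-supplied placeholders."""
    for _ in range(n_iters):
        rendered = render_view(R, t)                         # image of the model at the current pose
        delta_R, delta_t = predict_delta_pose(observed_img, rendered)
        R, t = compose(delta_R, delta_t, R, t)               # update and iterate
    return R, t
```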
We introduce MegaPose, a method to estimate the 6D pose of novel objects, that is, objects unseen during training. At inference time, the method only assumes knowledge of (i) a region of interest displaying the object in the image and (ii) a CAD model of the observed object. The contributions of this work are threefold. First, we present a 6D pose refiner based on a render&compare strategy which can be applied to novel objects. The shape and coordinate system of the novel object are provided as inputs to the network by rendering multiple synthetic views of the object's CAD model. Second, we introduce a novel approach for coarse pose estimation which leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner. Third, we introduce a large-scale synthetic dataset of photorealistic images of thousands of objects with diverse visual and shape properties and show that this diversity is crucial to obtain good generalization performance on novel objects. We train our approach on this large synthetic dataset and apply it without retraining to hundreds of novel objects in real images from several pose estimation benchmarks. Our approach achieves state-of-the-art performance on the ModelNet and YCB-Video datasets. An extensive evaluation on the 7 core datasets of the BOP challenge demonstrates that our approach achieves performance competitive with existing approaches that require access to the target objects during training. Code, dataset and trained models are available on the project page: https://megapose6d.github.io/.
State-of-the-art 6D object pose estimation methods, including unsupervised ones, require many real training images. Unfortunately, for some applications, such as those in space or deep under water, acquiring real images, even unannotated ones, is virtually impossible. In this paper, we propose a method that can be trained on synthetic images only, or optionally with a few additional real images. Given a coarse pose estimate obtained from a first network, it uses a second network to predict a dense 2D correspondence field between the image rendered with the coarse pose and the real image, and infers the required pose correction. This approach is far less sensitive to the domain shift between synthetic and real images than state-of-the-art methods. It performs on par with methods that require annotated real images for training, and outperforms them when only twenty real images are available.
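One way such a dense correspondence field can be turned into a pose correction (a hedged sketch, not necessarily the paper's exact procedure): each rendered pixel carries a known 3D model coordinate, the predicted field maps it to a location in the real image, and RANSAC-PnP on the resulting 2D-3D matches yields the corrected pose.

```python
# From a dense correspondence field to a pose correction (illustrative sketch).
# Assumes: xyz_rendered is an HxWx3 map of 3D model coordinates of the rendered
# image, mask marks rendered object pixels, flow maps rendered pixels to the
# real image (HxWx2, in pixels), K is the 3x3 camera intrinsic matrix.
import numpy as np
import cv2

def pose_from_correspondence_field(xyz_rendered, mask, flow, K):
    v, u = np.nonzero(mask)                            # rendered object pixels
    pts_3d = xyz_rendered[v, u].astype(np.float64)     # model-frame 3D coordinates
    pts_2d = (np.stack([u, v], axis=1) + flow[v, u]).astype(np.float64)  # matched real-image pixels
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_3d, pts_2d, K, distCoeffs=None, reprojectionError=3.0)
    R, _ = cv2.Rodrigues(rvec)                         # corrected object-to-camera rotation
    return ok, R, tvec
```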
The 6D object pose estimation problem has been extensively studied in the fields of Computer Vision and Robotics. It has a wide range of applications such as robot manipulation, augmented reality, and 3D scene understanding. With the advent of Deep Learning, many breakthroughs have been made; however, approaches continue to struggle when they encounter unseen instances, new categories, or real-world challenges such as cluttered backgrounds and occlusions. In this study, we explore the available methods based on input modality, problem formulation, and whether they are category-level or instance-level approaches. As a part of our discussion, we focus on how 6D object pose estimation can be used for understanding 3D scenes.
We propose a method for estimating the 6DoF pose of a rigid object with an available 3D model from a single RGB image. Unlike classical correspondence-based methods, which predict 3D object coordinates at pixels of the input image, the proposed method predicts 3D object coordinates at 3D query points sampled in the camera frustum. Moving from pixels to 3D points, which is inspired by recent PIFu-style methods for 3D reconstruction, enables reasoning about the whole object, including its (self-)occluded parts. For a 3D query point associated with a pixel-aligned image feature, we train a fully connected neural network to predict: (i) the corresponding 3D object coordinate, and (ii) the signed distance to the object surface, with the first defined only for query points in the surface vicinity. We call the mapping realized by this network a Neural Correspondence Field. The object pose is then robustly estimated from the predicted 3D-3D correspondences by the Kabsch-RANSAC algorithm. The proposed method achieves state-of-the-art results on three BOP datasets and is shown to be superior especially in challenging cases with occlusion. The project website is at: linhuang17.github.io/ncf.
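The Kabsch step that turns predicted 3D-3D correspondences into a pose is the classical SVD-based rigid alignment; a minimal version is sketched below (the RANSAC wrapper that samples minimal subsets and counts inliers is omitted).

```python
# Kabsch: best-fit rotation and translation mapping model-frame points onto
# their predicted camera-frame counterparts (classical SVD solution).
import numpy as np

def kabsch(src, dst):
    """src, dst: Nx3 corresponding points. Returns R (3x3), t (3,) with dst ~ R @ src + t."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)            # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # correct for a possible reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```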
We introduce a dataset of 998 3D models of everyday tabletop objects along with 847,000 real-world RGB and depth images of them. Accurate annotations of camera poses and object poses for each image are performed in a semi-automated fashion to facilitate using the dataset for a multitude of 3D applications such as shape reconstruction, object pose estimation, and shape retrieval. We focus on multi-view 3D reconstruction, for which appropriate real-world benchmarks are lacking, and demonstrate that our dataset can fill that gap. The entire annotated dataset, along with the source code of the annotation tools and evaluation baselines, is available at http://www.ocrtoc.org/3d-reconstruction.html.
Estimating the 6D pose of known objects is important for robots to interact with the real world. The problem is challenging due to the variety of objects as well as the complexity of a scene caused by clutter and occlusions between objects. In this work, we introduce PoseCNN, a new Convolutional Neural Network for 6D object pose estimation. PoseCNN estimates the 3D translation of an object by localizing its center in the image and predicting its distance from the camera. The 3D rotation of the object is estimated by regressing to a quaternion representation. We also introduce a novel loss function that enables PoseCNN to handle symmetric objects. In addition, we contribute a large scale video dataset for 6D object pose estimation named the YCB-Video dataset. Our dataset provides accurate 6D poses of 21 objects from the YCB dataset observed in 92 videos with 133,827 frames. We conduct extensive experiments on our YCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN is highly robust to occlusions, can handle symmetric objects, and provide accurate pose estimation using only color images as input. When using depth data to further refine the poses, our approach achieves state-of-the-art results on the challenging OccludedLINEMOD dataset. Our code and dataset are available at https://rse-lab.cs.washington.edu/projects/posecnn/.
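The translation part of that decomposition is easy to make explicit: given the localized 2D object center and the predicted distance from the camera, the full 3D translation follows from the pinhole model. A small sketch with hypothetical example numbers:

```python
# Recovering the 3D translation from a predicted 2D object center and its
# distance to the camera, via the pinhole camera model.
import numpy as np

def translation_from_center(u, v, Tz, K):
    """u, v: pixel coordinates of the object center; Tz: predicted depth along the
    optical axis; K: 3x3 intrinsics. Returns the full translation (Tx, Ty, Tz)."""
    fx, fy = K[0, 0], K[1, 1]
    px, py = K[0, 2], K[1, 2]
    Tx = (u - px) * Tz / fx
    Ty = (v - py) * Tz / fy
    return np.array([Tx, Ty, Tz])

# Hypothetical example: center localized at (400, 260) px, predicted distance 0.9 m.
K = np.array([[572.4, 0.0, 325.3], [0.0, 573.6, 242.0], [0.0, 0.0, 1.0]])
print(translation_from_center(400.0, 260.0, 0.9, K))
```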
Precise 6D pose estimation of rigid objects from RGB images is a critical task in robotics and augmented reality. To address this problem, we propose DeepRM, a novel recurrent network architecture for 6D pose refinement. DeepRM leverages an initial coarse pose estimate to render a synthetic image of the target object. The rendered image is then matched against the observed image to predict a rigid transformation that updates the previous pose estimate. This process is repeated to progressively refine the estimate at each iteration. LSTM units are used to propagate information through each refinement step, significantly improving overall performance. In contrast to many 2-stage Perspective-n-Point based solutions, DeepRM is trained end-to-end and uses a scalable backbone that can be tuned via a single parameter to trade accuracy for efficiency. During training, a multi-scale optical flow head is added to predict the optical flow between the observed and synthetic images. The optical flow prediction stabilizes the training process and enforces the learning of features relevant to the pose estimation task. Our results demonstrate that DeepRM achieves state-of-the-art performance on two widely accepted challenging datasets.
We propose a single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses. Unlike a recently proposed single-shot technique for this task [11] that only predicts an approximate 6D pose that must then be refined, ours is accurate enough not to require additional post-processing. As a result, it is much faster, running at 50 fps on a Titan X (Pascal) GPU, and more suitable for real-time processing. The key component of our method is a new CNN architecture inspired by [28,29] that directly predicts the 2D image locations of the projected vertices of the object's 3D bounding box. The object's 6D pose is then estimated using a PnP algorithm. For single object and multiple object pose estimation on the LINEMOD and OCCLUSION datasets, our approach substantially outperforms other recent approaches [26] when they are all used without post-processing. During post-processing, a pose refinement step can be used to boost the accuracy of these two methods, but at 10 fps or less, they are much slower than our method.
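For illustration, a hedged sketch of that final step, recovering the pose from the predicted 2D projections of the 3D bounding-box corners with OpenCV's PnP solver (the EPnP flag and the float conversions are assumptions, not the paper's exact settings):

```python
# Pose from predicted 2D projections of the object's 3D bounding-box corners.
import numpy as np
import cv2

def pose_from_box_corners(corners_3d, corners_2d, K):
    """corners_3d: Nx3 box corners in the object frame (8 corners, optionally plus
    the centroid); corners_2d: their predicted image locations (Nx2); K: intrinsics."""
    ok, rvec, tvec = cv2.solvePnP(
        corners_3d.astype(np.float64), corners_2d.astype(np.float64),
        K, distCoeffs=None, flags=cv2.SOLVEPNP_EPNP)
    R, _ = cv2.Rodrigues(rvec)          # object-to-camera rotation matrix
    return ok, R, tvec
```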
In this work, we present a data generation pipeline that uses the 3D suite Blender to produce synthetic RGBD image datasets with 6D poses. The proposed pipeline can efficiently generate large amounts of photo-realistic RGBD images of the objects of interest. In addition, a collection of domain randomization techniques is introduced to bridge the gap between real and synthetic data. Furthermore, we develop a real-time two-stage 6D pose estimation approach for time-sensitive robotics applications by integrating the object detector YOLOv4-tiny and the 6D pose estimation algorithm PVN3D. With the proposed data generation pipeline, our pose estimation approach can be trained from scratch using only synthetic data, without any pre-trained models. The resulting network shows competitive performance compared to state-of-the-art methods when evaluated on the LineMod dataset. We also demonstrate the proposed approach in a robotic experiment, grasping household objects from a cluttered background under different lighting conditions.
Estimating the 6D pose of objects is an essential computer vision task. However, most conventional approaches rely on camera data from a single perspective and therefore suffer from occlusions. We overcome this issue with a novel multi-view 6D pose estimation method called MV6D, which accurately predicts the 6D poses of all objects in a cluttered scene based on RGB-D images from multiple perspectives. We base our approach on the PVN3D network, which uses a single RGB-D image to predict keypoints of the target objects. We extend this approach by using a combined point cloud from multiple views and fusing the images from each view with a DenseFusion layer. In contrast to current multi-view pose detection networks such as CosyPose, our MV6D can learn the fusion of multiple perspectives in an end-to-end manner and does not require multiple prediction stages or subsequent fine-tuning of the predictions. Furthermore, we present three novel photorealistic datasets of cluttered scenes with heavy occlusions. All of them contain RGB-D images from multiple perspectives along with annotations for semantic segmentation and 6D pose estimation. MV6D significantly outperforms the state-of-the-art in multi-view 6D pose estimation even when the camera poses are known imprecisely. Furthermore, we show that our approach is robust towards dynamic camera setups and that its accuracy increases incrementally with an increasing number of perspectives.
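The multi-view input side of such a method reduces to expressing every per-view point cloud in a common frame before fusion; a minimal sketch assuming known camera-to-world extrinsics:

```python
# Merging per-view depth point clouds into one cloud in a shared world frame.
import numpy as np

def backproject(depth, K):
    """depth: HxW metric depth map; returns Nx3 camera-frame points (valid pixels only)."""
    v, u = np.nonzero(depth > 0)
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

def fuse_views(depths, Ks, cam_to_world):
    """depths, Ks, cam_to_world: per-view depth maps, intrinsics, and 4x4 extrinsics."""
    clouds = []
    for depth, K, T in zip(depths, Ks, cam_to_world):
        pts = backproject(depth, K)
        clouds.append(pts @ T[:3, :3].T + T[:3, 3])    # camera frame -> world frame
    return np.concatenate(clouds, axis=0)
```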
A key technical challenge in performing 6D object pose estimation from RGB-D image is to fully leverage the two complementary data sources. Prior works either extract information from the RGB image and depth separately or use costly post-processing steps, limiting their performances in highly cluttered scenes and real-time applications. In this work, we present DenseFusion, a generic framework for estimating 6D pose of a set of known objects from RGB-D images. DenseFusion is a heterogeneous architecture that processes the two data sources individually and uses a novel dense fusion network to extract pixel-wise dense feature embedding, from which the pose is estimated. Furthermore, we integrate an end-to-end iterative pose refinement procedure that further improves the pose estimation while achieving near real-time inference. Our experiments show that our method outperforms state-of-the-art approaches in two datasets, YCB-Video and LineMOD. We also deploy our proposed method to a real robot to grasp and manipulate objects based on the estimated pose. Our code and video are available at https://sites.google.com/view/densefusion/.
We present a novel method for detecting 3D model instances and estimating their 6D poses from RGB data in a single shot. To this end, we extend the popular SSD paradigm to cover the full 6D pose space and train on synthetic model data only. Our approach competes with or surpasses current state-of-the-art methods that leverage RGB-D data on multiple challenging datasets. Furthermore, our method produces these results at around 10Hz, which is many times faster than the related methods. For the sake of reproducibility, we make our trained networks and detection code publicly available.
We present a keypoint-based object-level SLAM framework that can provide globally consistent 6DoF pose estimates for symmetric and asymmetric objects alike. To the best of our knowledge, our system is among the first to utilize the camera pose information from SLAM to provide prior knowledge for tracking keypoints on symmetric objects, ensuring that new measurements are consistent with the current 3D scene. Moreover, our semantic keypoint network is trained to predict the Gaussian covariance of the keypoints that captures the true error of the prediction, and thus not only serves as a weight for the residuals in the system's optimization problem, but also as a means to detect harmful statistical outliers without choosing a manual threshold. Experiments show that our method provides competitive performance to the state of the art in 6DoF object pose estimation, at real-time speed. Our code, pre-trained models, and keypoint labels are available at https://github.com/rpng/suo_slam.
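The role of the predicted covariances can be sketched as standard covariance weighting plus a chi-square gate; the snippet below is a generic illustration under assumed 2D keypoint residuals, not the system's actual optimizer:

```python
# Using a predicted per-keypoint Gaussian covariance both to weight the residual
# and to flag statistical outliers via a chi-square gate (generic sketch).
import numpy as np
from scipy.stats import chi2

def weighted_residual_and_gate(residual, cov, prob=0.99):
    """residual: 2D reprojection error of one keypoint; cov: its predicted 2x2 covariance.
    Returns the whitened residual for the optimizer and an inlier flag."""
    info = np.linalg.inv(cov)                    # information matrix
    L = np.linalg.cholesky(info)                 # whitening: r_w^T r_w = r^T info r
    r_w = L.T @ residual
    mahalanobis_sq = float(r_w @ r_w)
    is_inlier = mahalanobis_sq < chi2.ppf(prob, df=residual.size)
    return r_w, is_inlier
```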
Visual perception tasks often require vast amounts of labelled data, including 3D poses and image-space segmentation masks. The process of creating such training datasets can prove difficult or time-intensive to scale up for general use. Consider the task of pose estimation for rigid objects. Deep neural network based approaches have shown good performance when trained on large, public datasets. However, adapting these networks to other novel objects, or fine-tuning existing models for different environments, requires a significant time investment to generate newly labelled instances. Towards this end, we propose ProgressLabeller as a method for more efficiently generating large amounts of 6D pose training data from color image sequences in a scalable manner. ProgressLabeller is also designed to support transparent or translucent objects, for which previous methods relying on dense depth reconstruction would fail. We demonstrate the effectiveness of ProgressLabeller by rapidly creating a dataset of over 1M samples, with which we fine-tune a state-of-the-art pose estimation network to markedly improve the downstream robotic grasp success rate. ProgressLabeller is open source at https://github.com/huijiezh/progresslabeller.
We introduce a novel method for 3D object detection and pose estimation from color images only. We first use segmentation to detect the objects of interest in 2D even in presence of partial occlusions and cluttered background. By contrast with recent patch-based methods, we rely on a "holistic" approach: We apply to the detected objects a Convolutional Neural Network (CNN) trained to predict their 3D poses in the form of 2D projections of the corners of their 3D bounding boxes. This, however, is not sufficient for handling objects from the recent T-LESS dataset: These objects exhibit an axis of rotational symmetry, and the similarity of two images of such an object under two different poses makes training the CNN challenging. We solve this problem by restricting the range of poses used for training, and by introducing a classifier to identify the range of a pose at run-time before estimating it. We also use an optional additional step that refines the predicted poses. We improve the state-of-the-art on the LINEMOD dataset from 73.7% [2] to 89.3% of correctly registered RGB frames. We are also the first to report results on the Occlusion dataset [1] using color images only. We obtain 54% of frames passing the Pose 6D criterion on average on several sequences of the T-LESS dataset, compared to the 67% of the state-of-the-art [10] on the same sequences which uses both color and depth. The full approach is also scalable, as a single network can be trained for multiple objects simultaneously.
We present ROCA, a novel end-to-end approach that retrieves and aligns 3D CAD models from a shape database to a single input image. This enables 3D perception of an observed scene from a 2D RGB observation, characterized as a lightweight, compact, clean CAD representation. Core to our approach is a differentiable alignment optimization based on dense 2D-3D object correspondences and Procrustes alignment. ROCA can thus provide robust CAD alignment while simultaneously informing CAD retrieval by leveraging the 2D-3D correspondences to learn geometrically similar CAD models. Experiments on challenging real-world imagery from ScanNet show that ROCA significantly improves on the state of the art, from 9.5% to 17.6% in retrieval-aware CAD alignment accuracy.