Traditional approaches to extrinsic calibration use fiducial markers, and learning-based approaches rely heavily on simulation data. In this work, we present a learning-based markerless extrinsic calibration system that uses a depth camera and does not rely on simulation data. We learn models for end-effector (EE) segmentation, single-frame rotation prediction, and keypoint detection from automatically generated real-world data. We use a transformation trick to obtain EE pose estimates from rotation predictions and a matching algorithm to obtain EE pose estimates from keypoint predictions. We further utilize the iterative closest point algorithm, multiple frames, filtering, and outlier detection to increase calibration robustness. Our evaluations with training data from multiple camera poses and test data from previously unseen poses give sub-centimeter and sub-deciradian average calibration and pose estimation errors. We also show that a carefully selected single training pose gives comparable results.
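As a rough illustration of the ICP-based robustness step mentioned above, the following sketch refines a camera-to-robot-base extrinsic by aligning the segmented EE point cloud (camera frame) against the EE model placed at its forward-kinematics pose (base frame). This is a minimal Open3D example under assumed inputs, not the authors' implementation.

```python
import numpy as np
import open3d as o3d

def refine_extrinsic_icp(ee_points_cam, ee_model_points_base, T_base_cam_init,
                         max_corr_dist=0.02):
    """Refine a 4x4 base<-camera transform with point-to-point ICP (illustrative)."""
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(ee_points_cam)       # observed EE, camera frame
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(ee_model_points_base)  # EE model at FK pose, base frame
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, T_base_cam_init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # refined camera-to-base extrinsic
```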
Hand-eye camera calibration is a fundamental and long-studied problem in robotics. We present a study of solving this problem online using learning-based methods, while training our models entirely on synthetic data. We study three main approaches: a direct regression model that predicts the extrinsic matrix directly from an image, a sparse correspondence model that regresses 2D keypoints and then uses PnP, and a dense correspondence model that regresses depth and segmentation maps to enable ICP-based pose estimation. In our experiments, we benchmark these methods against each other and against well-established classical approaches, finding the surprising result that direct regression outperforms the other approaches, and we perform a noise-sensitivity analysis to gain further insight into these results.
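The sparse-correspondence variant described above reduces to a standard PnP problem once the 2D keypoints are regressed. A minimal sketch with OpenCV is shown below; the function and variable names are illustrative, and the keypoints' 3D locations in the robot frame are assumed known from the robot model.

```python
import cv2
import numpy as np

def extrinsic_from_keypoints(kps_2d, kps_3d_robot, K, dist_coeffs=None):
    """kps_2d: (N,2) pixels, kps_3d_robot: (N,3) in the robot frame, K: 3x3 intrinsics."""
    dist = np.zeros(5) if dist_coeffs is None else dist_coeffs
    ok, rvec, tvec = cv2.solvePnP(
        kps_3d_robot.astype(np.float64), kps_2d.astype(np.float64), K, dist,
        flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)                 # robot -> camera rotation
    T_cam_robot = np.eye(4)
    T_cam_robot[:3, :3], T_cam_robot[:3, 3] = R, tvec.ravel()
    return T_cam_robot                         # invert to place the camera in the robot frame
```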
In many robotic applications, the environment in which 6-DoF pose estimation of a known, rigid object and its subsequent grasping are to be performed remains largely unchanging and may even be known to the robot in advance. In this paper, we refer to this problem as instance-specific pose estimation: the robot is expected to estimate the pose with high accuracy only in a limited set of familiar scenarios. Minor changes in the scene, including variations in lighting conditions and background appearance, are acceptable, but no substantial alterations are anticipated. To this end, we present a method for rapidly training and deploying a pipeline that estimates the continuous 6-DoF pose of an object from a single RGB image. The key idea is to exploit known camera poses and rigid-body geometry to partially automate the generation of large labeled datasets. The dataset, together with sufficient domain randomization, is then used to supervise the training of a deep neural network to predict semantic keypoints. Experimentally, we demonstrate the convenience and effectiveness of our proposed method for accurately estimating object poses, requiring only a small amount of manual annotation for training.
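The auto-labelling idea above amounts to projecting the object's 3D semantic keypoints into each image using the known object-to-camera pose. A minimal sketch is given below; the function name and inputs are illustrative, not the paper's code.

```python
import cv2
import numpy as np

def label_keypoints(kps_3d_obj, T_cam_obj, K, dist=np.zeros(5)):
    """Project object-frame keypoints (N,3) into the image given a 4x4 object-to-camera pose."""
    rvec, _ = cv2.Rodrigues(T_cam_obj[:3, :3])
    tvec = T_cam_obj[:3, 3]
    pts_2d, _ = cv2.projectPoints(kps_3d_obj.astype(np.float64), rvec, tvec, K, dist)
    return pts_2d.reshape(-1, 2)   # (N,2) pixel coordinates used as training labels
```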
Estimating the 6D pose of objects is an essential computer vision task. However, most conventional approaches rely on camera data from a single viewpoint and therefore suffer from occlusions. We overcome this problem with a novel multi-view 6D pose estimation method called MV6D, which accurately predicts the 6D poses of all objects in a cluttered scene based on RGB-D images from multiple viewpoints. We base our method on the PVN3D network, which uses a single RGB-D image to predict keypoints of the target objects. We extend this approach by using a combined point cloud from multiple views and fusing the images from each view with a DenseFusion layer. In contrast to current multi-view detection networks such as CosyPose, our MV6D can learn the fusion of multiple viewpoints in an end-to-end manner and does not require multiple prediction stages or subsequent fine-tuning of the predictions. Furthermore, we present three novel photorealistic datasets of cluttered scenes with heavy occlusions. All of them contain RGB-D images from multiple viewpoints together with annotations such as semantic segmentation and 6D poses. MV6D significantly outperforms the state of the art in multi-view 6D pose estimation, even when the camera poses are known only imprecisely. Furthermore, we show that our method is robust toward dynamic camera setups and that its accuracy increases incrementally with an increasing number of viewpoints.
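As a small illustration of the combined-point-cloud input mentioned above, the sketch below merges per-view camera-frame clouds into one common frame using (possibly approximate) camera poses; the names are illustrative and this is not the MV6D preprocessing code.

```python
import numpy as np

def merge_views(clouds_cam, T_world_cam):
    """clouds_cam: list of (Ni,3) camera-frame points; T_world_cam: matching 4x4 world-from-camera poses."""
    merged = [pts @ T[:3, :3].T + T[:3, 3] for pts, T in zip(clouds_cam, T_world_cam)]
    return np.concatenate(merged, axis=0)   # single combined cloud in the world frame
```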
Estimating the 6D pose of objects is one of the major fields in 3D computer vision. Following the promising outcomes of instance-level pose estimation, research trends are heading towards category-level pose estimation for more practical application scenarios. However, unlike well-established instance-level pose datasets, available category-level datasets lack annotation quality and pose quantity. We propose the new category-level 6D pose dataset HouseCat6D featuring 1) multi-modality of polarimetric RGB+P and depth, 2) 194 highly diverse objects from 10 household object categories, including 2 photometrically challenging categories, 3) high-quality pose annotations with an error range of only 1.35 mm to 1.74 mm, 4) 41 large-scale scenes with extensive viewpoint coverage, and 5) a checkerboard-free environment throughout the entire scene. We also provide benchmark results of state-of-the-art category-level pose estimation networks.
A key technical challenge in performing 6D object pose estimation from an RGB-D image is to fully leverage the two complementary data sources. Prior works either extract information from the RGB image and depth separately or use costly post-processing steps, limiting their performance in highly cluttered scenes and real-time applications. In this work, we present DenseFusion, a generic framework for estimating the 6D pose of a set of known objects from RGB-D images. DenseFusion is a heterogeneous architecture that processes the two data sources individually and uses a novel dense fusion network to extract pixel-wise dense feature embeddings, from which the pose is estimated. Furthermore, we integrate an end-to-end iterative pose refinement procedure that further improves the pose estimation while achieving near real-time inference. Our experiments show that our method outperforms state-of-the-art approaches on two datasets, YCB-Video and LineMOD. We also deploy our proposed method on a real robot to grasp and manipulate objects based on the estimated pose. Our code and video are available at https://sites.google.com/view/densefusion/.
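The sketch below illustrates the pixel-wise fusion idea in PyTorch: per-pixel image features and per-point geometry features are concatenated point-by-point and augmented with a pooled global feature. The layer sizes and structure are illustrative assumptions, not DenseFusion's exact architecture.

```python
import torch
import torch.nn as nn

class DenseFusionBlock(nn.Module):
    def __init__(self, rgb_dim=32, geo_dim=32, out_dim=128):
        super().__init__()
        self.local = nn.Conv1d(rgb_dim + geo_dim, out_dim, 1)   # per-point fusion
        self.glob = nn.Conv1d(out_dim, out_dim, 1)               # feeds the global pooling

    def forward(self, rgb_feat, geo_feat):
        # rgb_feat, geo_feat: (B, C, N) features sampled at the same N pixels/points
        fused = torch.relu(self.local(torch.cat([rgb_feat, geo_feat], dim=1)))
        glob = torch.max(self.glob(fused), dim=2, keepdim=True)[0]   # (B, out_dim, 1) global feature
        return torch.cat([fused, glob.expand(-1, -1, fused.size(2))], dim=1)
```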
Real-time robotic grasping, supporting subsequent precise object manipulation tasks, is a priority target for highly advanced autonomous systems. However, an algorithm that can perform sufficiently accurate grasping with time efficiency has yet to be found. This paper proposes a novel two-stage method that combines fast 2D object recognition using a deep neural network with a subsequent accurate and fast 6D pose estimation based on a point-pair-feature framework, forming a real-time 3D object recognition and grasping solution capable of handling multi-object-class scenes. The proposed solution has the potential to perform robustly in real-time applications that require both efficiency and accuracy. To validate our method, we conducted extensive and thorough experiments involving the laborious preparation of our own dataset. Experimental results show that the proposed method achieves an accuracy of 97.37% on the 5cm5deg metric and 99.37% on the average-distance metric. The results also show overall relative improvements of 62% (5cm5deg metric) and 52.48% (average-distance metric) achieved by using the proposed method. Furthermore, the pose estimation stage shows an average improvement of 47.6% in running time. Finally, to illustrate the overall efficiency of the system in real-time operation, a pick-and-place robotic experiment was conducted and showed a convincing success rate of 90%. The experiment video is available at https://sites.google.com/view/dl-ppf6dpose/.
Ultrasound is progressing toward becoming an affordable and versatile solution to medical imaging. With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging, as it requires trained operators in close proximity to patients for long periods of time. In this work, we investigate the important yet seldom-studied problem of scan target localization, under the setting of lung ultrasound imaging. We propose a purely vision-based, data-driven method that incorporates learning-based computer vision techniques. We combine a human pose estimation model with a specially designed regression model to predict the lung ultrasound scan targets, and deploy multiview stereo vision to enhance the consistency of 3D target localization. While related works mostly focus on phantom experiments, we collect data from 30 human subjects for testing. Our method attains an accuracy level of 15.52 (9.47) mm for probe positioning and 4.32 (3.69)° for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets. Moreover, our approach can serve as a general solution to other types of ultrasound modalities. The code for implementation has been released.
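The multiview stereo step described above can be illustrated by triangulating the same predicted target from two calibrated views. The following is a hedged OpenCV sketch with illustrative names, assuming shared intrinsics K and known world-from-camera poses.

```python
import cv2
import numpy as np

def triangulate_target(pt_cam1, pt_cam2, K, T_w_c1, T_w_c2):
    """pt_cam1/2: (2,) pixel predictions; T_w_ci: 4x4 world-from-camera poses."""
    P1 = K @ np.linalg.inv(T_w_c1)[:3, :]      # 3x4 projection matrices (world -> image)
    P2 = K @ np.linalg.inv(T_w_c2)[:3, :]
    X_h = cv2.triangulatePoints(P1, P2, pt_cam1.reshape(2, 1), pt_cam2.reshape(2, 1))
    return (X_h[:3] / X_h[3]).ravel()           # 3D scan target in the world frame
```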
Estimating 6D poses of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the input image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using a disentangled representation of 3D location and 3D orientation and an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over state-of-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.
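A generic render-and-compare refinement loop of the kind described above can be sketched as follows; `render_fn` and `predict_delta_pose` are placeholders, and the exact pose parameterization used by DeepIM is not reproduced here.

```python
import numpy as np

def iterative_refine(T_init, observed_img, render_fn, predict_delta_pose, n_iters=4):
    """T_init: 4x4 object-to-camera pose estimate; returns the refined 4x4 pose."""
    T = T_init.copy()
    for _ in range(n_iters):
        rendered = render_fn(T)                              # synthesize the object at the current estimate
        delta = predict_delta_pose(rendered, observed_img)   # network-predicted 4x4 relative transform
        T = delta @ T                                        # compose the correction and iterate
    return T
```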
This paper introduces a dataset for training and evaluating methods for 6D pose estimation of hand-held tools in task demonstrations captured by a standard RGB camera. Despite significant progress in 6D pose estimation methods, their performance is usually limited for heavily occluded objects, which is a common case in imitation learning, where the manipulating hand typically partially occludes the object. Currently, there is a lack of datasets that would enable the development of robust 6D pose estimation methods for these conditions. To overcome this problem, we collect a new dataset (IMITROB) aimed at 6D pose estimation in imitation learning and other applications where a human holds a tool and performs a task. The dataset contains image sequences of three different tools and six manipulation tasks with two camera viewpoints, four human subjects, and left/right hand. Each image is accompanied by an accurate ground-truth measurement of the 6D object pose obtained by an HTC Vive motion tracking device. The use of the dataset is demonstrated by training and evaluating a state-of-the-art 6D object pose estimation method (DOPE) in various settings. The dataset and code are publicly available at http://imitrob.ciirc.cvut.cz/imitrobdataset.php.
We present a three-stage 6-DoF object detection method called DPODv2 (Dense Pose Object Detector) that relies on dense correspondences. We combine a 2D object detector with a dense correspondence estimation network and a multi-view pose refinement method to estimate a full 6-DoF pose. Unlike other deep learning methods that are typically restricted to monocular RGB images, we propose a unified deep learning network that allows different imaging modalities (RGB or depth) to be used. Furthermore, we propose a novel pose refinement method based on differentiable rendering. The main concept is to compare predicted and rendered correspondences in multiple views to obtain a pose consistent with the correspondences predicted in all views. Our proposed method is evaluated rigorously on different data modalities and types of training data in controlled setups. The main conclusions are that RGB excels in correspondence estimation, while depth contributes to pose accuracy if good 3D-3D correspondences are available. Naturally, their combination achieves the overall best performance. We conduct an extensive evaluation and an ablation study to analyze and validate the results on several challenging datasets. DPODv2 achieves excellent results on all of them while still remaining fast and scalable, independent of the used data modality and the type of training data.
In this study, we propose a novel visual localization approach to accurately estimate the six-degree-of-freedom (6-DoF) pose of a robot within a 3D LiDAR map based on visual data from an RGB camera. The 3D map is obtained using an advanced LiDAR-based simultaneous localization and mapping (SLAM) algorithm capable of collecting a precise sparse map. Features extracted from the camera images are compared with the points of the 3D map, and a geometric optimization problem is then solved to achieve precise visual localization. Our approach allows a scout robot equipped with an expensive LiDAR to be used only once for mapping the environment, while multiple operational robots equipped only with RGB cameras perform mission tasks with a localization accuracy higher than common camera-based solutions. The approach was tested on a custom dataset collected at the Skolkovo Institute of Science and Technology (Skoltech). During the evaluation of localization accuracy, we managed to achieve centimeter-level accuracy, with a median translation error within 1.3 cm. Precise localization using only a camera enables autonomous mobile robots to solve the most complex tasks that require high localization accuracy.
Visual perception tasks often require large amounts of labeled data, including 3D poses and image-space segmentation masks. The process of creating such training datasets can prove difficult or time-consuming to scale up for general use. Consider the task of pose estimation for rigid objects. Deep neural network based approaches have shown good performance when trained on large, public datasets. However, adapting these networks to other novel objects, or fine-tuning existing models for different environments, requires significant time investment to generate newly labeled instances. Towards this end, we propose ProgressLabeller as a method for more efficiently generating large amounts of 6D pose training data from color image sequences in a scalable manner. ProgressLabeller is also designed to support transparent or translucent objects, for which previous methods based on dense depth reconstruction would fail. We demonstrate the effectiveness of ProgressLabeller by rapidly creating a dataset of over 1M samples, with which we fine-tune a state-of-the-art pose estimation network in order to markedly improve downstream robotic grasping. ProgressLabeller is open source at https://github.com/huijiezh/progresslabeller.
We present a new dataset for 6-DoF pose estimation of known objects, with a focus on robotic manipulation research. We propose a set of toy grocery objects, whose physical instantiations are readily available for purchase and are appropriately sized for robotic grasping and manipulation. We provide 3D scanned textured models of these objects, suitable for generating synthetic training data, as well as RGBD images of the objects in challenging, cluttered scenes exhibiting partial occlusion, extreme lighting variations, multiple instances per image, and a large variety of poses. Using semi-automated RGBD-to-model texture correspondences, the images are annotated with ground truth poses accurate within a few millimeters. We also propose a new pose evaluation metric called ADD-H based on the Hungarian assignment algorithm that is robust to symmetries in object geometry without requiring their explicit enumeration. We share pre-trained pose estimators for all the toy grocery objects, along with their baseline performance on both validation and test sets. We offer this dataset to the community to help connect the efforts of computer vision researchers with the needs of roboticists.
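One plausible reading of the ADD-H metric described above is sketched below: model points transformed by the estimated and ground-truth poses are matched one-to-one by the Hungarian algorithm before averaging distances, which avoids enumerating symmetries explicitly. This is an interpretation for illustration, not the dataset's reference implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def add_h(model_pts, R_est, t_est, R_gt, t_gt):
    """model_pts: (N,3) sampled model points; returns the mean matched distance."""
    pts_est = model_pts @ R_est.T + t_est
    pts_gt = model_pts @ R_gt.T + t_gt
    cost = np.linalg.norm(pts_est[:, None, :] - pts_gt[None, :, :], axis=-1)  # (N,N) pairwise distances
    row, col = linear_sum_assignment(cost)       # one-to-one Hungarian matching
    return cost[row, col].mean()
```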
Transparent objects are ubiquitous in household environments and pose distinct challenges for visual sensing and perception systems. The optical properties of transparent objects make conventional 3D sensors unreliable for object depth and pose estimation. These challenges are underscored by the scarcity of large-scale RGB-depth datasets focusing on transparent objects in real-world settings. In this work, we contribute a large-scale real-world RGB-depth transparent object dataset named ClearPose to serve as a benchmark dataset for segmentation, scene-level depth completion, and object-centric pose estimation tasks. The ClearPose dataset contains over 350K labeled real-world RGB-depth frames and 5M instance annotations covering 63 household objects. The dataset includes object categories commonly used in daily life under various lighting and occluding conditions, as well as challenging test scenarios such as occlusion by opaque or translucent objects, non-planar orientations, the presence of liquids, etc. We benchmark several state-of-the-art depth completion and object pose estimation deep neural networks on ClearPose. The dataset and benchmark source code are available at https://github.com/opipari/clearpose.
6D object pose estimation has been a research topic in the field of computer vision and robotics. Many modern-world applications like robot grasping, manipulation, and autonomous navigation require the correct pose of objects present in a scene to perform their specific task. It becomes even harder when the objects are placed in a cluttered scene and the level of occlusion is high. Prior works have tried to overcome this problem but could not achieve accuracy that can be considered reliable in real-world applications. In this paper, we present an architecture that, unlike prior work, is context-aware. It utilizes the context information available to us about the objects. Our proposed architecture treats the objects separately according to their type, i.e., symmetric and non-symmetric. A deeper estimator and refiner network pair is used for non-symmetric objects than for symmetric ones, due to their intrinsic differences. Our experiments show an improvement in accuracy of about 3.2% on the LineMOD dataset, which is considered a benchmark for pose estimation in occluded and cluttered scenes, over the prior state-of-the-art DenseFusion. Our results also show that the achieved inference time is sufficient for real-time usage.
Robotic peg-in-hole assembly remains a challenging task due to its high demand for accuracy. Previous works tend to simplify the problem by restricting the degrees of freedom of the end-effector, or by limiting the distance between the target and the initial pose, which prevents them from being deployed in the real world. Thus, we present a coarse-to-fine visual servoing (CFVS) peg-in-hole method that achieves 6-DoF end-effector motion control based on 3D visual feedback. CFVS can handle arbitrary tilt angles and large initial alignment errors through a fast pose estimation before refinement. Furthermore, by introducing a confidence map to ignore object-irrelevant contours, CFVS is robust against noise and can deal with various targets beyond the training data. Extensive experiments show that CFVS outperforms state-of-the-art methods and achieves average success rates of 100%, 91%, and 82% for 3-DoF, 4-DoF, and 6-DoF peg-in-hole, respectively.
We study the performance of state-of-the-art human keypoint detectors in the context of close-proximity human-robot interaction. Detection in this scenario is specific in that only a subset of body parts, such as the hands and torso, is in the field of view. In particular, (i) we survey existing datasets with human pose annotations from the perspective of close-proximity images and prepare and make publicly available a new dataset (HICP); (ii) we quantitatively and qualitatively compare state-of-the-art human whole-body 2D keypoint detection methods (OpenPose, MMPose, AlphaPose, Detectron2) on this dataset; (iii) since accurate detection of hands and fingers is critical for applications involving handovers, we evaluate the performance of the MediaPipe hand detector; (iv) we deploy the algorithms on a humanoid robot with an RGB-D camera on its head and evaluate their performance in 3D human keypoint detection. A motion capture system is used as a reference. The best-performing whole-body keypoint detectors in close proximity were MMPose and AlphaPose, but both struggled with finger detection. Thus, we propose a combination of MMPose or AlphaPose for the body and MediaPipe for the hands in a single framework, providing the most accurate and robust detection. We also analyze the failure modes of individual detectors - for example, to what extent the missing head of the person in the image degrades performance. Finally, we demonstrate the framework in a scenario where a humanoid robot interacting with a person uses the detected 3D keypoints for whole-body avoidance maneuvers.
This paper introduces a novel multi-view 6-DoF object pose refinement method with a focus on improving methods trained on synthetic data. It is based on the DPOD detector, which produces dense 2D-3D correspondences in each frame. We choose to use multiple frames with known relative camera transformations, as this allows us to introduce geometric constraints via an interpretable ICP-like loss function. The loss function is implemented with a differentiable renderer and is optimized iteratively. We also demonstrate that a full detection and refinement pipeline trained solely on synthetic data can be used to auto-label real data. We perform quantitative evaluation on the LineMOD, Occlusion, HomebrewedDB, and YCB-V datasets and report excellent performance in comparison to state-of-the-art methods trained on synthetic and real data. We empirically demonstrate that our approach requires only a few frames and is robust to close camera locations and noise in the extrinsic camera calibration, making its practical usage easier and more ubiquitous.
We introduce an approach for recovering the 6D pose of multiple known objects in a scene captured by a set of input images with unknown camera viewpoints. First, we present a single-view single-object 6D pose estimation method, which we use to generate 6D object pose hypotheses. Second, we develop a robust method for matching individual 6D object pose hypotheses across different input images in order to jointly estimate camera viewpoints and 6D poses of all objects in a single consistent scene. Our approach explicitly handles object symmetries, does not require depth measurements, is robust to missing or incorrect object hypotheses, and automatically recovers the number of objects in the scene. Third, we develop a method for global scene refinement given multiple object hypotheses and their correspondences across views. This is achieved by solving an object-level bundle adjustment problem that refines the poses of cameras and objects to minimize the reprojection error in all views. We demonstrate that the proposed method, dubbed CosyPose, outperforms current state-of-the-art results for single-view and multi-view 6D object pose estimation by a large margin on two challenging benchmarks: the YCB-Video and T-LESS datasets. Code and pre-trained models are available on the project webpage.
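The object-level bundle adjustment step can be sketched as a joint least-squares refinement over camera and object poses that minimizes keypoint reprojection error across all views. The toy example below, with illustrative names and an axis-angle parameterization, shows the structure of such a solver; it is a simplified sketch under assumed inputs, not the CosyPose implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def reprojection_residuals(params, n_cams, obj_pts, obs, K):
    # params: [cam_0(6), ..., cam_{n_cams-1}(6), obj_0(6), ...] as axis-angle + translation
    # obs[(cam_idx, obj_idx)] = (M,2) observed pixel keypoints; obj_pts[obj_idx] = (M,3) model keypoints
    res = []
    for (c, o), uv in obs.items():
        cam = params[6 * c: 6 * c + 6]
        obj = params[6 * (n_cams + o): 6 * (n_cams + o) + 6]
        R_wc, t_wc = R.from_rotvec(cam[:3]).as_matrix(), cam[3:]   # world -> camera
        R_ow, t_ow = R.from_rotvec(obj[:3]).as_matrix(), obj[3:]   # object -> world
        p_cam = (obj_pts[o] @ R_ow.T + t_ow) @ R_wc.T + t_wc       # object points in camera frame
        proj = p_cam @ K.T
        proj = proj[:, :2] / proj[:, 2:3]                          # perspective division
        res.append((proj - uv).ravel())
    return np.concatenate(res)

def refine_scene(x0, n_cams, obj_pts, obs, K):
    """Jointly refine all camera and object poses; x0 stacks their initial 6-vectors."""
    sol = least_squares(reprojection_residuals, x0, args=(n_cams, obj_pts, obs, K))
    return sol.x
```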