Measurement update rules for Bayes filters often contain hand-crafted heuristics to compute observation probabilities for high-dimensional sensor data such as images. In this work, we propose the novel approach Deep Measurement Update (DMU) as a general update rule for a wide range of systems. DMU has a conditional encoder-decoder neural network structure that processes depth images as raw inputs. Even though the network is trained only on synthetic data, the model shows good performance on real data at evaluation time. With our proposed training scheme, primed data training, we demonstrate how DMU models can be trained efficiently to be sensitive to condition variables without having to rely on a stochastic information bottleneck. We validate the proposed method in multiple scenarios of increasing complexity, starting with the pose estimation of a single object and moving to the joint estimation of the pose and the internal state of an articulated system. Moreover, we provide a benchmark against Articulated Signed Distance Functions (A-SDF) on the RBO dataset as a baseline comparison for articulation state estimation.
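The role such a learned update plays becomes concrete in a particle filter: the network replaces the hand-crafted observation model when reweighting state hypotheses. A minimal sketch, where `likelihood_net` is a hypothetical stand-in for a trained conditional network:

```python
import numpy as np

def measurement_update(particles, weights, depth_image, likelihood_net):
    """particles: (N, D) state hypotheses; weights: (N,) prior weights."""
    # The learned model scores how well each hypothesis explains the image.
    likelihoods = np.array([likelihood_net(depth_image, x) for x in particles])
    weights = weights * likelihoods
    return weights / weights.sum()       # renormalize to a valid distribution

# Toy usage with a dummy likelihood standing in for the trained network.
rng = np.random.default_rng(0)
particles = rng.normal(size=(100, 3))
weights = np.full(100, 1e-2)
dummy_net = lambda img, x: np.exp(-0.5 * np.sum(x ** 2))
weights = measurement_update(particles, weights, None, dummy_net)
```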
This paper proposes iCaps, a category-level 6D object pose and shape estimation approach that tracks the 6D pose of objects within a category and estimates their 3D shape. We develop a category-level auto-encoder network that takes depth images as input, where the feature embeddings from the auto-encoder encode the poses of objects within a category. The auto-encoder can be used in a particle filter framework to estimate and track the pose of objects in a category. By leveraging an implicit shape representation based on signed distance functions, we build a latent network to estimate a latent representation of the 3D shape given the estimated pose of an object. The estimated pose and shape can then be used to update each other in an iterative fashion. Our category-level 6D object pose and shape estimation pipeline only requires 2D detection and segmentation for initialization. We evaluate our approach on a publicly available dataset and demonstrate its effectiveness. In particular, our method achieves comparably high accuracy on shape estimation.
Representing scenes at the granularity of objects is a prerequisite for scene understanding and decision making. We propose PriSMoNet, a novel approach based on prior shape knowledge for learning multi-object 3D scene decomposition and representations from single images. Our approach learns to decompose images of synthetic scenes with multiple objects on a planar surface into their constituent scene objects, and to infer their 3D properties from a single view. A recurrent encoder regresses a latent representation of the 3D shape, pose, and texture of each object from an input RGB image. By differentiable rendering, we train our model to decompose scenes from RGB-D images in a self-supervised way. The 3D shapes are represented continuously in function space as signed distance functions which we pre-train from example shapes in a supervised way. These shape priors provide weak supervision signals to better condition the challenging overall learning task. We evaluate the accuracy of our model in inferring the 3D scene layout, demonstrate its generative capabilities, assess its generalization to real images, and point out the benefits of the learned representations.
A rich geometric understanding of the world is an important component of many robotic applications, such as planning and manipulation. In this paper, we present a modular pipeline for pose and shape estimation of objects from RGB-D images given their category. The core of our method is a generative shape model, which we integrate with a novel initialization network and a differentiable renderer to enable 6D pose and shape estimation from a single view or multiple views. We investigate a discretized signed distance field as an efficient shape representation for fast analysis-by-synthesis optimization. Our modular framework enables multi-view optimization and extensibility. We demonstrate the benefits of our approach over state-of-the-art methods in several experiments on both synthetic and real data. We open-source our approach at https://github.com/roym899/sdfest.
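The analysis-by-synthesis idea at the heart of such pipelines can be sketched in a few lines: treat the signed distance field as a differentiable function and descend on pose parameters until observed surface points lie on the zero level set. In this sketch an analytic sphere SDF stands in for a discretized field, and only translation is optimized; all names are illustrative:

```python
import torch

def sphere_sdf(p, radius=0.5):
    # Stand-in for a trilinear lookup into a discretized SDF grid.
    return p.norm(dim=-1) - radius

# Synthetic observation: sphere surface points shifted by an unknown offset.
true_t = torch.tensor([0.2, -0.1, 0.05])
dirs = torch.nn.functional.normalize(torch.randn(256, 3), dim=-1)
obs = dirs * 0.5 + true_t

t = torch.zeros(3, requires_grad=True)          # translation to recover
opt = torch.optim.Adam([t], lr=1e-2)
for _ in range(400):
    opt.zero_grad()
    loss = sphere_sdf(obs - t).abs().mean()     # surface points should read SDF = 0
    loss.backward()
    opt.step()
print(t.detach())                                # close to true_t
```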
Estimating 6D poses of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the input image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using a disentangled representation of 3D location and 3D orientation and an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over state-of-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.
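One reading of the disentangled representation can be made concrete as a decoding step: the network predicts a rotation applied about the object center plus a translation expressed in image space (pixel offsets in focal-length units and a log depth ratio), which is converted back into a 6D pose roughly as below. The exact sign conventions are an assumption:

```python
import numpy as np

def apply_pose_delta(R_src, t_src, R_delta, v, fx, fy):
    """Decode a predicted relative pose. R_src: (3,3); t_src: (x, y, z);
    R_delta: (3,3) rotation about the object center; v = (v_x, v_y, v_z)."""
    v_x, v_y, v_z = v
    z_tgt = t_src[2] / np.exp(v_z)                       # log depth ratio
    x_tgt = (v_x / fx + t_src[0] / t_src[2]) * z_tgt     # pixel offset, unprojected
    y_tgt = (v_y / fy + t_src[1] / t_src[2]) * z_tgt
    return R_delta @ R_src, np.array([x_tgt, y_tgt, z_tgt])

# One refinement iteration would render at (R, t), feed rendered and observed
# images to the network, and apply the predicted delta; repeating converges.
```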
We introduce an approach for recovering the 6D pose of multiple known objects in a scene captured by a set of input images with unknown camera viewpoints. First, we present a single-view single-object 6D pose estimation method, which we use to generate 6D object pose hypotheses. Second, we develop a robust method for matching individual 6D object pose hypotheses across different input images in order to jointly estimate camera viewpoints and 6D poses of all objects in a single consistent scene. Our approach explicitly handles object symmetries, does not require depth measurements, is robust to missing or incorrect object hypotheses, and automatically recovers the number of objects in the scene. Third, we develop a method for global scene refinement given multiple object hypotheses and their correspondences across views. This is achieved by solving an object-level bundle adjustment problem that refines the poses of cameras and objects to minimize the reprojection error in all views. We demonstrate that the proposed method, dubbed CosyPose, outperforms current state-of-the-art results for single-view and multi-view 6D object pose estimation by a large margin on two challenging benchmarks: the YCB-Video and T-LESS datasets. Code and pre-trained models are available on the project webpage.
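A toy version of the object-level bundle adjustment step, where camera poses and a single object pose are jointly refined to minimize reprojection error over all views; the setup is illustrative and omits the symmetry handling and robust losses of the full system:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as Rot

fx = fy = 500.0
cx = cy = 320.0

def transform(pose6, pts):
    # pose6 = (rotation vector, translation); applies the rigid transform to pts.
    return Rot.from_rotvec(pose6[:3]).apply(pts) + pose6[3:]

def project(pts_cam):
    return np.stack([fx * pts_cam[:, 0] / pts_cam[:, 2] + cx,
                     fy * pts_cam[:, 1] / pts_cam[:, 2] + cy], axis=-1)

def residuals(params, n_cams, model_pts, obs):
    cams = params[:6 * n_cams].reshape(n_cams, 6)     # world -> camera poses
    obj = params[6 * n_cams:]                         # object -> world pose
    pts_world = transform(obj, model_pts)
    res = [project(transform(cams[c], pts_world)) - obs[c] for c in range(n_cams)]
    return np.concatenate(res).ravel()

# Synthetic scene: 3 cameras about 2 m from an object with 8 keypoints.
rng = np.random.default_rng(1)
n_cams, model_pts = 3, rng.uniform(-0.1, 0.1, (8, 3))
gt_cams = np.zeros((n_cams, 6))
gt_cams[:, :3] = rng.normal(0, 0.05, (n_cams, 3))
gt_cams[:, 3:5] = rng.normal(0, 0.1, (n_cams, 2))
gt_cams[:, 5] = 2.0
gt_obj = rng.normal(0, 0.05, 6)
obs = np.stack([project(transform(gt_cams[c], transform(gt_obj, model_pts)))
                for c in range(n_cams)])

x0 = np.concatenate([gt_cams.ravel(), gt_obj]) + rng.normal(0, 0.02, 6 * n_cams + 6)
sol = least_squares(residuals, x0, args=(n_cams, model_pts, obs))
print(sol.cost)   # ~0: a scene consistent with all views (up to gauge freedom)
```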
In this paper, we propose a novel object-level mapping system that can simultaneously segment, track, and reconstruct objects in dynamic scenes. It can further predict and complete their full geometries by conditioning on reconstructions from depth inputs and a category-level shape prior, with the aim that completed object geometry leads to better object reconstruction and tracking accuracy. For each incoming RGB-D frame, we perform instance segmentation to detect objects and build data associations between detections and the existing object maps. A new object map is created for each unmatched detection. For each matched object, we jointly optimize its pose and latent geometry representation using a geometric residual and a differentiable rendering residual toward its shape prior and completed geometry. Our approach shows better tracking and reconstruction performance compared to methods using traditional volumetric mapping or learned shape priors. We evaluate its effectiveness with quantitative and qualitative tests on both synthetic and real-world sequences.
Estimating the 6D pose of known objects is important for robots to interact with the real world. The problem is challenging due to the variety of objects as well as the complexity of a scene caused by clutter and occlusions between objects. In this work, we introduce PoseCNN, a new Convolutional Neural Network for 6D object pose estimation. PoseCNN estimates the 3D translation of an object by localizing its center in the image and predicting its distance from the camera. The 3D rotation of the object is estimated by regressing to a quaternion representation. We also introduce a novel loss function that enables PoseCNN to handle symmetric objects. In addition, we contribute a large scale video dataset for 6D object pose estimation named the YCB-Video dataset. Our dataset provides accurate 6D poses of 21 objects from the YCB dataset observed in 92 videos with 133,827 frames. We conduct extensive experiments on our YCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN is highly robust to occlusions, can handle symmetric objects, and provide accurate pose estimation using only color images as input. When using depth data to further refine the poses, our approach achieves state-of-the-art results on the challenging OccludedLINEMOD dataset. Our code and dataset are available at https://rse-lab.cs.washington.edu/projects/posecnn/.
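The idea behind a symmetry-aware loss can be sketched as follows: rather than penalizing rotation error directly, each model point under the estimated rotation is matched to its nearest neighbor under the ground-truth rotation, so rotations related by a symmetry incur no cost. A sketch in the spirit of PoseCNN's ShapeMatch-Loss, not a reproduction of the exact implementation:

```python
import torch

def shapematch_loss(R_est, R_gt, model_pts):
    """R_est, R_gt: (3, 3) rotations; model_pts: (M, 3) object model points."""
    p_est = model_pts @ R_est.T
    p_gt = model_pts @ R_gt.T
    d2 = torch.cdist(p_est, p_gt) ** 2     # pairwise squared distances
    # Each estimated point matches its nearest ground-truth point, so any
    # rotation that maps the model onto itself yields zero loss.
    return d2.min(dim=1).values.mean() / 2

pts = torch.randn(64, 3)
print(shapematch_loss(torch.eye(3), torch.eye(3), pts))   # tensor(0.)
```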
Scalable 6D pose estimation of rigid objects from RGB images aims at handling multiple objects and generalizing to novel objects. We build on a well-known auto-encoding framework to cope with object symmetries and the lack of labeled training data, and achieve scalability by factorizing the latent representation of the auto-encoder into shape and pose sub-spaces. The latent shape space models the similarity of different objects through contrastive metric learning, and the latent pose code is compared against canonical rotations for rotation retrieval. Because different object symmetries induce inconsistent latent pose spaces, we re-entangle the shape representation with canonical rotations to generate shape-dependent pose codebooks for rotation retrieval. We show state-of-the-art performance on two benchmarks containing textureless CAD objects without categories and daily objects with categories, and further demonstrate scalability by extending to a more challenging setting with daily objects across categories.
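Codebook-based rotation retrieval reduces to nearest-neighbor search in the latent pose space. A minimal sketch, where `encode_pose` is a hypothetical stand-in for the encoder's pose branch:

```python
import numpy as np

def build_codebook(rotations, encode_pose):
    codes = np.stack([encode_pose(R) for R in rotations])
    return codes / np.linalg.norm(codes, axis=1, keepdims=True)

def retrieve_rotation(query_code, codebook, rotations):
    q = query_code / np.linalg.norm(query_code)
    return rotations[int(np.argmax(codebook @ q))]    # cosine similarity

# Stub usage: the "encoder" is a placeholder for the learned pose branch.
rotations = [np.eye(3), np.diag([1.0, -1.0, -1.0])]
encode_pose = lambda R: R.ravel()
codebook = build_codebook(rotations, encode_pose)
print(retrieve_rotation(encode_pose(rotations[1]), codebook, rotations))
```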
Figure 1: DeepSDF represents signed distance functions (SDFs) of shapes via latent code-conditioned feed-forward decoder networks. Above images are raycast renderings of DeepSDF interpolating between two shapes in the learned shape latent space. Best viewed digitally.
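The mechanism the caption describes, a latent-code-conditioned decoder whose code is optimized ("auto-decoded") at test time, can be sketched as follows; network sizes are illustrative, and the decoder is untrained here, so only the fitting machinery is demonstrated:

```python
import torch
import torch.nn as nn

class SDFDecoder(nn.Module):
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, z, xyz):           # z: (N, L) codes, xyz: (N, 3) queries
        return self.net(torch.cat([z, xyz], dim=-1)).squeeze(-1)

decoder = SDFDecoder()                   # would be trained across many shapes
xyz = torch.randn(512, 3)
sdf_obs = xyz.norm(dim=-1) - 0.5         # toy target: SDF samples of a sphere
z = torch.zeros(1, 64, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-3)
for _ in range(100):                     # auto-decoding: optimize the code only
    opt.zero_grad()
    pred = decoder(z.expand(512, -1), xyz)
    loss = (pred - sdf_obs).abs().mean() + 1e-4 * z.pow(2).sum()
    loss.backward()
    opt.step()
```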
As autonomous robots interact with and navigate real-world environments such as homes, it is useful to reliably identify and manipulate articulated objects such as doors and cabinets. Many prior works on object articulation identification require manipulation of the object by a robot or a human. While recent works have addressed predicting articulation from visual observations alone, they often assume prior knowledge of category-level kinematic motion models, or observation sequences in which the articulated parts move according to their kinematic constraints. In this work, we propose FormNet, a neural network that identifies the articulation mechanism between pairs of object parts from a single-frame RGB-D image and segmentation masks. The network is trained on 100k synthetic images of 149 articulated objects from 6 categories. The synthetic images are rendered by a photorealistic simulator with domain randomization. Our proposed model predicts motion-residual flows of object parts, and these flows are used to determine the articulation type and parameters. The network achieves an articulation-type classification accuracy of 82.5% on novel object instances within the trained categories. Experiments also demonstrate how the method generalizes to novel categories and can be applied to real-world images without fine-tuning.
Generating grasp poses is a crucial component for any robot object manipulation task. In this work, we formulate the problem of grasp generation as sampling a set of grasps using a variational autoencoder and assess and refine the sampled grasps using a grasp evaluator model. Both Grasp Sampler and Grasp Refinement networks take 3D point clouds observed by a depth camera as input. We evaluate our approach in simulation and real-world robot experiments. Our approach achieves 88% success rate on various commonly used objects with diverse appearances, scales, and weights. Our model is trained purely in simulation and works in the real world without any extra steps. The video of our experiments can be found here.
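The sampling side of such a pipeline is a decoder conditioned on point-cloud features and a latent drawn from the prior. A minimal sketch with hypothetical module names and sizes, standing in for the paper's grasp sampler:

```python
import torch
import torch.nn as nn

class GraspDecoder(nn.Module):
    """Hypothetical stand-in for the grasp sampler's decoder."""
    def __init__(self, latent_dim=2, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + feat_dim, 128),
                                 nn.ReLU(), nn.Linear(128, 7))

    def forward(self, z, feat):          # -> translation (3) + quaternion (4)
        return self.net(torch.cat([z, feat], dim=-1))

decoder = GraspDecoder()
feat = torch.zeros(10, 64)               # point-cloud feature (stub encoder output)
z = torch.randn(10, 2)                   # latents sampled from the prior
grasps = decoder(z, feat)                # 10 candidate grasps for the evaluator
```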
In this work, we tackle the challenging problem of jointly tracking hand-object poses and reconstructing their shapes from depth point cloud sequences in the wild, and propose HandTrackNet to estimate inter-frame hand joint motion. HandTrackNet introduces a novel hand-pose canonicalization module that eases the tracking task, yielding accurate and robust hand joint tracking. Our pipeline then reconstructs the full hand by converting the predicted hand joints into the template-based parametric hand model MANO. For object tracking, we devise a simple yet effective module that estimates the object SDF from the first frame and performs optimization-based tracking. Finally, a joint optimization step performs joint hand and object reasoning, which alleviates occlusion-induced ambiguity and further refines the hand pose. During training, the whole pipeline sees only synthetic data, which is synthesized with sufficient variation and through depth simulation to ease generalization. The whole pipeline is robust to the generalization gap and can thus be directly transferred to real in-the-wild data. We evaluate our method on two real hand-object interaction datasets, HO3D and DexYCB, without any fine-tuning. Our experiments show that the proposed method significantly outperforms previous depth-based hand and object pose estimation and tracking methods, running at a frame rate of 9 FPS.
We propose a three-stage 6 DoF object detection method called DPODv2 (Dense Pose Object Detector) that relies on dense correspondences. We combine a 2D object detector with a dense correspondence network and a multi-view pose refinement method to estimate the full 6 DoF pose. Unlike other deep learning methods that are typically restricted to monocular RGB images, we propose a unified deep learning network that allows different imaging modalities (RGB or depth) to be used. Moreover, we propose a novel pose refinement method based on differentiable rendering. The main idea is to compare predicted and rendered correspondences across multiple views to obtain a pose that is consistent with the predicted correspondences in all views. Our proposed method is evaluated rigorously on different data modalities and types of training data in controlled setups. The main conclusions are that RGB excels at correspondence estimation, while depth contributes to pose accuracy if good 3D-3D correspondences are available; naturally, their combination achieves the best overall performance. We conduct extensive evaluations and ablation studies to analyze and validate the results on several challenging datasets. DPODv2 achieves excellent results on all of them while remaining fast and scalable, independent of the data modality and type of training data used.
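Once dense 2D-3D correspondences are predicted, pose recovery typically reduces to robust PnP. A sketch with OpenCV, where the correspondence arrays are placeholders for the network output (with random data, RANSAC may simply report failure):

```python
import numpy as np
import cv2

pts_3d = np.random.rand(100, 3).astype(np.float32)         # model coordinates
pts_2d = np.random.rand(100, 2).astype(np.float32) * 640    # predicted pixels
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]], np.float32)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    pts_3d, pts_2d, K, distCoeffs=None, reprojectionError=3.0)
```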
We introduce MegaPose, a method to estimate the 6D pose of novel objects, that is, objects unseen during training. At inference time, the method only assumes knowledge of (i) a region of interest displaying the object in the image and (ii) a CAD model of the observed object. The contributions of this work are threefold. First, we present a 6D pose refiner based on a render&compare strategy which can be applied to novel objects. The shape and coordinate system of the novel object are provided as inputs to the network by rendering multiple synthetic views of the object's CAD model. Second, we introduce a novel approach for coarse pose estimation which leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner. Third, we introduce a large-scale synthetic dataset of photorealistic images of thousands of objects with diverse visual and shape properties and show that this diversity is crucial to obtain good generalization performance on novel objects. We train our approach on this large synthetic dataset and apply it without retraining to hundreds of novel objects in real images from several pose estimation benchmarks. Our approach achieves state-of-the-art performance on the ModelNet and YCB-Video datasets. An extensive evaluation on the 7 core datasets of the BOP challenge demonstrates that our approach achieves performance competitive with existing approaches that require access to the target objects during training. Code, dataset and trained models are available on the project page: https://megapose6d.github.io/.
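The coarse stage can be viewed as hypothesis scoring: render each candidate pose and keep the one the classifier rates best. A minimal sketch, where `render` and `score_net` are hypothetical stand-ins for the renderer and the trained classifier:

```python
import numpy as np

def coarse_estimate(observed_crop, pose_hypotheses, render, score_net):
    # Render every candidate pose and keep the one the classifier scores highest.
    scores = [score_net(render(T), observed_crop) for T in pose_hypotheses]
    return pose_hypotheses[int(np.argmax(scores))]

# Stub usage: poses are placeholders, and the "networks" are toy callables.
poses = [np.eye(4) for _ in range(8)]
best = coarse_estimate(None, poses, render=lambda T: T,
                       score_net=lambda rendered, obs: -np.sum(rendered))
```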
Recent advances in machine learning have sparked interest in solving visual computing problems with a class of coordinate-based neural networks that parameterize physical properties of scenes or objects across space and time. These methods, which we call neural fields, have seen successful application in the synthesis of 3D shapes and images, the animation of human bodies, 3D reconstruction, and pose estimation. However, due to rapid progress over a short period of time, many papers exist but a comprehensive review and problem formulation has not yet emerged. In this report, we address this limitation by providing context, mathematical grounding, and an extensive literature review of neural fields. The report covers research along two dimensions. In the first part, we focus on neural field techniques by identifying common components of neural field methods, including different representations, architectures, forward mappings, and generalization methods. In the second part, we focus on applications of neural fields to different problems in visual computing and beyond (e.g., robotics, audio). Our review shows the breadth of topics already covered in visual computing, both historically and in current incarnations, and demonstrates the improved quality, flexibility, and capability that neural field methods bring. Finally, we present a companion website that hosts a living version of this review, which can be continually updated by the community.
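A recurring forward-mapping component in this literature is sinusoidal positional encoding, which lifts low-dimensional coordinates into features that let an MLP fit high-frequency signals. A minimal sketch:

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    """x: (N, D) coordinates -> (N, D * 2 * num_freqs) features."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi
    angles = x[..., None] * freqs                    # (N, D, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(x.shape[0], -1)

print(positional_encoding(np.random.rand(4, 3)).shape)   # (4, 36)
```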
The goal of this paper is to estimate the 6D pose and dimensions of unseen object instances in an RGB-D image. Contrary to "instance-level" 6D pose estimation tasks, our problem assumes that no exact object CAD models are available during either training or testing time. To handle different and unseen object instances in a given category, we introduce Normalized Object Coordinate Space (NOCS)-a shared canonical representation for all possible object instances within a category. Our region-based neural network is then trained to directly infer the correspondence from observed pixels to this shared object representation (NOCS) along with other object information such as class label and instance mask. These predictions can be combined with the depth map to jointly estimate the metric 6D pose and dimensions of multiple objects in a cluttered scene. To train our network, we present a new context-aware technique to generate large amounts of fully annotated mixed reality data. To further improve our model and evaluate its performance on real data, we also provide a fully annotated real-world dataset with large environment and instance variation. Extensive experiments demonstrate that the proposed method is able to robustly estimate the pose and size of unseen object instances in real environments while also achieving state-of-the-art performance on standard 6D pose estimation benchmarks.
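Combining a predicted NOCS map with depth yields 3D-3D correspondences between the normalized object space and camera space, from which pose and size follow via a similarity-transform fit. A sketch of the standard Umeyama alignment (the solver choice here is our reading; the full pipeline adds outlier rejection):

```python
import numpy as np

def umeyama(src, dst):
    """Find s, R, t such that dst ≈ s * R @ src + t. src, dst: (N, 3)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    U, D, Vt = np.linalg.svd(xd.T @ xs / len(src))
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                       # enforce a proper rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) * len(src) / (xs ** 2).sum()
    return s, R, mu_d - s * R @ mu_s

# Noise-free check: recover a known similarity transform.
rng = np.random.default_rng(0)
src = rng.random((50, 3))
Rg = np.linalg.qr(rng.normal(size=(3, 3)))[0]
Rg *= np.sign(np.linalg.det(Rg))
dst = 2.0 * src @ Rg.T + np.array([0.1, 0.2, 0.3])
s, R, t = umeyama(src, dst)
print(np.isclose(s, 2.0), np.allclose(R, Rg))
```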
Able to reproduce physical phenomena ranging from light interaction to contact mechanics, simulators are becoming increasingly useful in a growing number of application domains where real-world interaction or labeled data are difficult to obtain. Despite recent progress, significant human effort is still needed to configure simulators to accurately reproduce real-world behavior. We introduce a pipeline that combines inverse rendering with differentiable simulation to create digital twins of articulated mechanisms from depth or RGB videos. Our approach automatically discovers joint types and estimates their kinematic parameters, while the dynamic properties of the overall mechanism are tuned to attain physically accurate simulation. As we demonstrate on simulated systems, control policies optimized in our derived simulation transfer successfully back to the original system. Moreover, our approach accurately reconstructs the kinematic tree of an articulated mechanism being manipulated by a robot, as well as the highly nonlinear dynamics of a real-world coupled-pendulum mechanism. Website: https://Eric-heiden.github.io/video2sim
In this paper, we present Tac2Pose, an object-specific approach for tactile pose estimation of known objects from the first touch. Given the object geometry, we learn a tailored perception model in simulation that estimates a probability distribution over possible object poses given a tactile observation. To do so, we simulate the contact shapes that a dense set of object poses would produce on the sensor. Then, given a new contact shape obtained from the sensor, we match it against the precomputed set using an object-specific embedding learned with contrastive learning. We obtain contact shapes from the sensor with an object-agnostic calibration step that maps RGB tactile observations to binary contact shapes. This mapping, which can be reused across objects and sensor instances, is the only step trained on real sensor data. The result is a perception model that localizes objects from the first real tactile observation. Importantly, it produces pose distributions and can incorporate additional pose constraints from other perception systems, contacts, or priors. We provide quantitative results for 20 objects. Tac2Pose yields high-accuracy pose estimates from distinctive tactile observations while regressing meaningful pose distributions to account for contact shapes that could have been produced by different object poses. We also test Tac2Pose on object models reconstructed from a 3D scanner to evaluate its robustness to uncertainty in the object model. Finally, we demonstrate the advantages of Tac2Pose compared with three baseline methods for tactile pose estimation: directly regressing the object pose with a neural network, matching an observed contact against a set of possible contacts with a standard classification neural network, and direct pixel comparison of an observed contact with a set of possible contacts. Website: http://mcube.mit.edu/research/tac2pose.html
Understanding the 3D world without supervision is currently a major challenge in computer vision as the annotations required to supervise deep networks for tasks in this domain are expensive to obtain on a large scale. In this paper, we address the problem of unsupervised viewpoint estimation. We formulate this as a self-supervised learning task, where image reconstruction provides the supervision needed to predict the camera viewpoint. Specifically, we make use of pairs of images of the same object at training time, from unknown viewpoints, to self-supervise training by combining the viewpoint information from one image with the appearance information from the other. We demonstrate that using a perspective spatial transformer allows efficient viewpoint learning, outperforming existing unsupervised approaches on synthetic data, and obtains competitive results on the challenging PASCAL3D+ dataset.