We introduce the Few-Shot Object Learning (FewSOL) dataset for object recognition with a few images per object. We captured 336 real-world objects from different views, with 9 RGB-D images per object. Object segmentation masks, object poses, and object attributes are provided. In addition, synthetic images generated from 330 3D object models are used to augment the dataset. Using our dataset, we investigated (i) few-shot object classification and (ii) joint object segmentation and few-shot classification with state-of-the-art methods for few-shot learning and meta-learning. The evaluation results show that there is still a large margin for improvement on few-shot object classification in robotic environments. Our dataset can be used to study a set of few-shot object recognition problems such as classification, detection and segmentation, shape reconstruction, pose estimation, keypoint correspondences, and attribute recognition. The dataset and code are available at https://irvlutd.github.io/fewsol.
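A dataset organized as a handful of views per object lends itself to standard N-way K-shot evaluation. The sketch below shows how episodes could be sampled from per-object image folders; the directory layout and defaults are assumptions for illustration, not the released FewSOL API.

```python
import random
from pathlib import Path

def sample_episode(root, n_way=5, k_shot=1, n_query=4, seed=None):
    """Sample one N-way K-shot episode from a layout like root/<object_id>/<view>.png.

    The directory structure is a hypothetical stand-in for a few-shot object
    dataset with several RGB-D views per object instance.
    """
    rng = random.Random(seed)
    classes = [d for d in Path(root).iterdir() if d.is_dir()]
    chosen = rng.sample(classes, n_way)

    support, query = [], []
    for label, cls_dir in enumerate(chosen):
        views = sorted(cls_dir.glob("*.png"))
        picked = rng.sample(views, k_shot + n_query)
        support += [(p, label) for p in picked[:k_shot]]
        query += [(p, label) for p in picked[k_shot:]]
    return support, query

# Usage: support/query are lists of (image_path, episode_label) pairs that a
# few-shot classifier such as a prototypical network could consume.
# support, query = sample_episode("FewSOL/real_objects", n_way=5, k_shot=1)
```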
A hallmark of the deep learning era for computer vision is the successful use of large-scale labeled datasets to train feature representations for tasks ranging from object recognition and semantic segmentation to optical flow estimation and novel view synthesis of 3D scenes. In this work, we aim to learn dense discriminative object representations for low-shot category recognition without requiring any category labels. To this end, we propose Deep Object Patch Encodings (DOPE), which can be trained from multiple views of object instances without any category or semantic object part labels. To train DOPE, we assume access to sparse depths, foreground masks and known cameras, to obtain pixel-level correspondences between views of an object, and use this to formulate a self-supervised learning task to learn discriminative object patches. We find that DOPE can directly be used for low-shot classification of novel categories using local-part matching, and is competitive with and outperforms supervised and self-supervised learning baselines. Code and data available at https://github.com/rehg-lab/dope_selfsup.
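Since the self-supervised objective relies on pixel-level correspondences derived from sparse depth, foreground masks, and known cameras, a minimal geometric sketch of that correspondence step is shown below. It is plain pinhole-camera math with assumed variable names, not the authors' code.

```python
import numpy as np

def cross_view_correspondences(depth_a, K_a, T_wa, K_b, T_wb, mask_a):
    """Map foreground pixels of view A into view B via depth and known cameras.

    depth_a : (H, W) depth in meters; K_a, K_b : (3, 3) intrinsics;
    T_wa, T_wb : (4, 4) camera-to-world extrinsics; mask_a : (H, W) bool.
    Returns pixel coordinates in A and their projections into B.
    """
    v, u = np.nonzero(mask_a & (depth_a > 0))
    z = depth_a[v, u]
    # Back-project A's pixels to camera-A coordinates, then to the world frame.
    pts_a = np.stack([(u - K_a[0, 2]) * z / K_a[0, 0],
                      (v - K_a[1, 2]) * z / K_a[1, 1],
                      z], axis=1)
    pts_w = (T_wa[:3, :3] @ pts_a.T).T + T_wa[:3, 3]
    # Transform into camera B and project with its intrinsics.
    T_bw = np.linalg.inv(T_wb)
    pts_b = (T_bw[:3, :3] @ pts_w.T).T + T_bw[:3, 3]
    uv_b = (K_b @ (pts_b / pts_b[:, 2:3]).T).T[:, :2]
    return np.stack([u, v], axis=1), uv_b

# Comparing pts_b[:, 2] against the depth map of view B at uv_b would
# additionally reject correspondences that are occluded in the second view.
```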
Visual perception tasks often require large amounts of labeled data, including 3D poses and image-space segmentation masks. The process of creating such training datasets can be difficult or time-consuming to scale to general use. Consider the task of pose estimation for rigid objects. Deep neural network based approaches have shown good performance when trained on large public datasets. However, adapting these networks to other novel objects, or fine-tuning existing models for different environments, requires a significant time investment to generate newly labeled instances. Towards this end, we propose ProgressLabeller as a method to more efficiently generate large amounts of 6D pose training data from color image sequences in a scalable manner. ProgressLabeller is also designed to support transparent or translucent objects, for which previous methods based on dense depth reconstruction fail. We demonstrate the effectiveness of ProgressLabeller by rapidly creating a dataset of over 1M samples, with which we fine-tune a state-of-the-art pose estimation network to significantly improve downstream robotic grasping. ProgressLabeller is open source at https://github.com/huijiezh/progresslabeller.
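The efficiency of labelling a whole sequence typically comes from annotating an object pose once in a common reconstruction/world frame and propagating it through the per-frame camera poses. A minimal sketch of that propagation, with assumed 4x4 homogeneous-matrix conventions, is shown below; it is an illustration of the general idea, not the ProgressLabeller codebase.

```python
import numpy as np

def propagate_object_pose(T_world_obj, cam_poses_world):
    """Turn one object pose in the world frame into per-frame 6D pose labels.

    T_world_obj     : (4, 4) object-to-world transform, annotated once.
    cam_poses_world : list of (4, 4) camera-to-world transforms per frame.
    Returns a list of (4, 4) object-to-camera transforms, one label per frame.
    """
    labels = []
    for T_world_cam in cam_poses_world:
        T_cam_obj = np.linalg.inv(T_world_cam) @ T_world_obj
        labels.append(T_cam_obj)
    return labels
```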
Scene understanding is essential in determining how intelligent robotic grasping and manipulation could get. It is a problem that can be approached using different techniques: seen object segmentation, unseen object segmentation, or 6D pose estimation. These techniques can even be extended to multi-view. Most of the work on these problems depends on synthetic datasets, due to the lack of real datasets big enough for training, and merely uses the available real datasets for evaluation. This encourages us to introduce a new dataset (called DoPose-6D). The dataset contains annotations for 6D pose estimation, object segmentation, and multi-view annotations, which serve all the aforementioned techniques. The dataset contains two types of scenes: bin picking and tabletop, with the primary motive for this dataset collection being bin picking. We illustrate the effect of this dataset in the context of unseen object segmentation and provide some insights on mixing synthetic and real data for training. We train a Mask R-CNN model that is practical to be used in industry and robotic grasping applications. Finally, we show how our dataset boosted the performance of a Mask R-CNN model. Our DoPose-6D dataset, trained network models, pipeline code, and ROS driver are available online.
Commercial depth sensors usually produce noisy and missing depth, especially on specular and transparent objects, which poses critical problems for downstream depth- or point-cloud-based tasks. To mitigate this problem, we propose a powerful RGB-D fusion network, SwinDRNet, for depth restoration. We further propose the Domain Randomization-Enhanced Depth Simulation (DREDS) approach, which simulates an active stereo depth system with physically based rendering and generates a large-scale synthetic dataset containing 130K photorealistic RGB images along with simulated depths carrying realistic sensor noise. To evaluate depth restoration methods, we also curate a real-world dataset, STD, which captures 30 cluttered scenes composed of 50 objects with materials ranging from specular and transparent to diffuse. Experiments show that the proposed DREDS dataset bridges the sim-to-real domain gap, so that, once trained, our SwinDRNet generalizes seamlessly to other real depth datasets, e.g. ClearGrasp, and outperforms competing depth restoration methods at real-time speed. We further show that our depth restoration effectively improves the performance of downstream tasks, including category-level pose estimation and grasping. Our data and code are available at https://github.com/pku-epic/dreds
Few-shot open-set recognition aims to classify both seen and novel images given only limited training data of the seen classes. The challenge of this task is that the model is required not only to learn a discriminative classifier that can classify the predefined classes with few training examples, but also to reject inputs from unseen classes that never appear at training time. In this paper, we propose to address the problem from two new aspects. First, instead of learning the decision boundaries between the seen classes, as in standard closed-set classification, we reserve space for the unseen classes, so that images located in these regions are recognized as unseen classes. Second, to learn such decision boundaries effectively, we propose to exploit the background features of the seen classes. Since these background regions do not contribute significantly to the closed-set classification decision, it is natural to use them as pseudo-unseen classes for classifier learning. Our extensive experiments show that the proposed method not only outperforms multiple baselines but also sets new state-of-the-art results on three popular benchmarks, namely tieredImageNet, miniImageNet, and Caltech-UCSD Birds-200-2011 (CUB).
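One simple way to realize "background regions as pseudo-unseen data" is to reserve an extra class index for background crops when training the classifier. The PyTorch sketch below is a generic illustration of that idea under assumed tensor shapes, not the paper's exact architecture or loss.

```python
import torch
import torch.nn.functional as F

def open_set_loss(logits_fg, labels_fg, logits_bg, num_seen):
    """Cross-entropy over (num_seen + 1) classes, with background as pseudo-unseen.

    logits_fg : (B, num_seen + 1) logits for foreground features of seen classes.
    labels_fg : (B,) labels in [0, num_seen).
    logits_bg : (Bb, num_seen + 1) logits for background-region features, all
                assigned the extra pseudo-unseen index `num_seen`.
    """
    loss_seen = F.cross_entropy(logits_fg, labels_fg)
    pseudo_labels = torch.full((logits_bg.size(0),), num_seen,
                               dtype=torch.long, device=logits_bg.device)
    loss_unseen = F.cross_entropy(logits_bg, pseudo_labels)
    return loss_seen + loss_unseen
```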
6D object pose estimation is one of the fundamental problems in computer vision and robotics research. Although many recent efforts have been made to generalize pose estimation to novel object instances within the same category, i.e., category-level 6D pose estimation, it is still restricted to constrained environments given the limited amount of annotated data. In this paper, we collect Wild6D, a new unlabeled RGB-D object video dataset with diverse instances and backgrounds. We leverage this data to generalize category-level 6D object pose estimation in the wild via semi-supervised learning. We propose a new model, the Rendering for Pose estimation Network (RePoNet), that is jointly trained using free ground truth from synthetic data and a silhouette-matching objective on the real-world data. Without using any 3D annotations on real data, our method outperforms state-of-the-art methods on the previous dataset and on our Wild6D test set (with manual annotations for evaluation) by a large margin. Project page with the Wild6D data: https://oasisyang.github.io/semi-pose.
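A silhouette-matching term on unlabeled real data can be illustrated by a soft-IoU loss between a differentiably rendered mask and the observed instance mask. The sketch below is a generic formulation with assumed inputs, not RePoNet itself.

```python
import torch

def soft_iou_loss(rendered_mask, observed_mask, eps=1e-6):
    """1 - soft IoU between a rendered silhouette and an observed mask.

    Both inputs are (B, H, W) tensors in [0, 1]; the rendered mask is assumed
    to come from a differentiable renderer so gradients can reach the pose.
    """
    inter = (rendered_mask * observed_mask).sum(dim=(1, 2))
    union = (rendered_mask + observed_mask
             - rendered_mask * observed_mask).sum(dim=(1, 2))
    return (1.0 - inter / (union + eps)).mean()
```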
Object rearrangement has recently emerged as a key competency in robot manipulation, with practical solutions generally involving object detection, recognition, grasping, and high-level planning. Goal images describing the desired scene configuration are a promising and increasingly used mode of instruction. A key outstanding challenge is the accurate inference of matches between the objects in front of the robot and those seen in the provided goal image, where recent work has struggled in the absence of object-specific training data. In this work, we explore the ability of existing methods to infer matches between objects as the visual shift between the observed and goal scenes increases. We find that a fundamental limitation of the current setting is that the source and target images must contain the same $\textit{instance}$ of every object, which restricts practical deployment. We present a new approach to object matching that uses a large pre-trained vision-language model to match objects in a cross-instance setting, by leveraging semantics together with visual features as a more robust and much more general measure of similarity. We demonstrate that this yields considerably improved matching performance in cross-instance settings, and can be used to guide multi-object rearrangement with a robot manipulator from a goal image that shares no object instances with the robot's scene.
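A minimal version of cross-instance matching with a pre-trained vision-language model is to embed object crops from both scenes and solve an assignment problem over cosine similarities. The sketch below assumes the embeddings already come from some image encoder (e.g. a CLIP-style model) and uses SciPy's Hungarian solver; it is an illustration, not the paper's pipeline.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_objects(feats_current, feats_goal):
    """Match current-scene object crops to goal-image crops.

    feats_current : (N, D) embeddings of crops from the robot's scene
                    (assumed to come from a pre-trained vision-language encoder).
    feats_goal    : (M, D) embeddings of crops from the goal image.
    Returns index pairs (i, j) maximizing total cosine similarity.
    """
    a = feats_current / np.linalg.norm(feats_current, axis=1, keepdims=True)
    b = feats_goal / np.linalg.norm(feats_goal, axis=1, keepdims=True)
    sim = a @ b.T                              # (N, M) cosine similarities
    rows, cols = linear_sum_assignment(-sim)   # negate to maximize similarity
    return list(zip(rows.tolist(), cols.tolist()))
```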
We introduce MegaPose, a method to estimate the 6D pose of novel objects, that is, objects unseen during training. At inference time, the method only assumes knowledge of (i) a region of interest displaying the object in the image and (ii) a CAD model of the observed object. The contributions of this work are threefold. First, we present a 6D pose refiner based on a render&compare strategy which can be applied to novel objects. The shape and coordinate system of the novel object are provided as inputs to the network by rendering multiple synthetic views of the object's CAD model. Second, we introduce a novel approach for coarse pose estimation which leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner. Third, we introduce a large-scale synthetic dataset of photorealistic images of thousands of objects with diverse visual and shape properties and show that this diversity is crucial to obtain good generalization performance on novel objects. We train our approach on this large synthetic dataset and apply it without retraining to hundreds of novel objects in real images from several pose estimation benchmarks. Our approach achieves state-of-the-art performance on the ModelNet and YCB-Video datasets. An extensive evaluation on the 7 core datasets of the BOP challenge demonstrates that our approach achieves performance competitive with existing approaches that require access to the target objects during training. Code, dataset and trained models are available on the project page: https://megapose6d.github.io/.
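The coarse stage of such a render-and-compare pipeline can be summarized as: render each pose hypothesis of the CAD model and keep the one the classifier judges closest to the observation. The schematic sketch below uses `render` and `score` callables as stand-ins for the renderer and the coarse network; it is not the MegaPose implementation.

```python
def coarse_pose_selection(candidates, observed_rgb, render, score):
    """Schematic coarse stage of a render-and-compare pose pipeline.

    candidates : iterable of 6D pose hypotheses for the CAD model.
    render     : callable(pose) -> synthetic view of the CAD model.
    score      : callable(synthetic, observed) -> scalar; stands in for the
                 classifier that judges whether the remaining pose error is
                 small enough for the refiner to correct.
    Returns the hypothesis whose rendering best matches the observation.
    """
    return max(candidates, key=lambda pose: score(render(pose), observed_rgb))
```

The selected hypothesis is then handed to the render-and-compare refiner described in the abstract.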
Few-shot visual recognition refers to recognizing novel visual concepts from a few labeled instances. Many few-shot visual recognition methods adopt the metric-based meta-learning paradigm by comparing the query representation with class representations to predict the category of a query instance. However, current metric-based methods generally treat all instances equally and consequently often obtain biased class representations, since not all instances are equally important when summarizing instance-level representations into a class-level representation. For example, some instances may contain unrepresentative information, such as excessive background or irrelevant concepts, which skews the result. To address this issue, we propose a novel metric-based meta-learning framework termed the instance-adaptive class representation learning network (ICRL-Net) for few-shot visual recognition. Specifically, we develop an adaptive instance revaluing network that addresses the biased representation problem when generating class representations, by learning and assigning adaptive weights to different instances according to their relative significance in the support set of the corresponding class. In addition, we design an improved bilinear instance representation and incorporate two novel structural losses, i.e., an intra-class instance clustering loss and an inter-class representation distinguishing loss, to further regulate the instance revaluation process and refine the class representation. We conduct extensive experiments on four commonly adopted few-shot benchmarks: the miniImageNet, tieredImageNet, CIFAR-FS, and FC100 datasets. The experimental results, compared with state-of-the-art approaches, demonstrate the superiority of our ICRL-Net.
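The core departure from an equally weighted prototype is easy to state in code: instead of a plain mean over support embeddings, each instance receives a learned weight before aggregation. The PyTorch sketch below shows this weighted-prototype idea in generic form; the simple linear scorer is an assumption, not the ICRL-Net architecture.

```python
import torch
import torch.nn as nn

class WeightedPrototype(nn.Module):
    """Aggregate support embeddings into a class representation using
    instance-adaptive weights instead of a uniform mean."""

    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)   # assumed simple relevance scorer

    def forward(self, support):
        # support: (K, D) embeddings of one class's support instances
        weights = torch.softmax(self.scorer(support), dim=0)  # (K, 1)
        return (weights * support).sum(dim=0)                 # (D,) prototype

# A query is then classified by comparing its embedding to each class
# prototype with a distance or similarity metric, as in prototypical networks.
```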
Deep learning techniques for point cloud data have demonstrated great potential in solving classical problems in 3D computer vision such as 3D object classification and segmentation. Several recent 3D object classification methods have reported state-of-the-art performance on CAD model datasets such as ModelNet40 with high accuracy (∼92%). Despite such impressive results, in this paper, we argue that object classification is still a challenging task when objects are framed with real-world settings. To prove this, we introduce ScanObjectNN, a new real-world point cloud object dataset based on scanned indoor scene data. From our comprehensive benchmark, we show that our dataset poses great challenges to existing point cloud classification techniques as objects from real-world scans are often cluttered with background and/or are partial due to occlusions. We identify three key open problems for point cloud object classification, and propose new point cloud classification neural networks that achieve state-of-the-art performance on classifying objects with cluttered background. Our dataset and code are publicly available on our project page.
Transparent objects are ubiquitous in household settings and pose distinct challenges for visual sensing and perception systems. The optical properties of transparent objects make conventional 3D sensors alone unreliable for object depth and pose estimation. These challenges are highlighted by the shortage of large-scale RGB-Depth datasets focusing on transparent objects in real-world settings. In this work, we contribute a large-scale real-world RGB-Depth transparent object dataset named ClearPose, to serve as a benchmark dataset for segmentation, scene-level depth completion, and object-centric pose estimation tasks. The ClearPose dataset contains over 350K labeled real-world RGB-Depth frames and 5M instance annotations covering 63 household objects. The dataset includes object categories commonly used in daily life under various lighting and occlusion conditions, as well as challenging test scenarios such as occlusion by opaque or translucent objects, non-planar orientations, the presence of liquids, etc. We benchmark several state-of-the-art depth completion and object pose estimation deep neural networks on ClearPose. The dataset and benchmark source code are available at https://github.com/opipari/clearpose.
The goal of this paper is to estimate the 6D pose and dimensions of unseen object instances in an RGB-D image. Contrary to "instance-level" 6D pose estimation tasks, our problem assumes that no exact object CAD models are available during either training or testing time. To handle different and unseen object instances in a given category, we introduce Normalized Object Coordinate Space (NOCS), a shared canonical representation for all possible object instances within a category. Our region-based neural network is then trained to directly infer the correspondence from observed pixels to this shared object representation (NOCS) along with other object information such as class label and instance mask. These predictions can be combined with the depth map to jointly estimate the metric 6D pose and dimensions of multiple objects in a cluttered scene. To train our network, we present a new context-aware technique to generate large amounts of fully annotated mixed reality data. To further improve our model and evaluate its performance on real data, we also provide a fully annotated real-world dataset with large environment and instance variation. Extensive experiments demonstrate that the proposed method is able to robustly estimate the pose and size of unseen object instances in real environments while also achieving state-of-the-art performance on standard 6D pose estimation benchmarks.
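Given per-pixel NOCS predictions and a depth map, the metric pose and scale can be recovered by aligning the two corresponding point sets with a similarity transform (Umeyama alignment). The sketch below shows that alignment step on already back-projected points, independent of the network; in practice it would be wrapped in RANSAC to reject outlier correspondences.

```python
import numpy as np

def similarity_align(nocs_pts, depth_pts):
    """Estimate scale s, rotation R, translation t with depth_pts ≈ s * R @ nocs_pts + t.

    nocs_pts, depth_pts : (N, 3) corresponding points (predicted NOCS coordinates
    and metric points back-projected from the depth map for one object mask).
    """
    mu_n, mu_d = nocs_pts.mean(0), depth_pts.mean(0)
    xn, xd = nocs_pts - mu_n, depth_pts - mu_d
    cov = xd.T @ xn / len(nocs_pts)
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:        # avoid reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    var_n = (xn ** 2).sum() / len(nocs_pts)
    s = np.trace(np.diag(S) @ D) / var_n
    t = mu_d - s * R @ mu_n
    return s, R, t
```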
We introduce Amazon Berkeley Objects (ABO), a new large-scale dataset designed to help bridge the gap between real and virtual 3D worlds. ABO contains product catalog images, metadata, and artist-created 3D models with complex geometries and physically based materials that correspond to real household objects. We derive challenging benchmarks that exploit the unique properties of ABO and measure the current limits of the state of the art on three open problems in real-world 3D object understanding: single-view 3D reconstruction, material estimation, and cross-domain multi-view object retrieval.
Deep learning has achieved remarkable success in object recognition tasks through the availability of large-scale datasets like ImageNet. However, deep learning systems suffer from catastrophic forgetting when learning incrementally without replaying old data. For real-world applications, robots also need to learn new objects incrementally. Furthermore, since robots have limited human assistance available, they must learn from only a few examples. However, very few object recognition datasets and benchmarks exist to test the incremental learning capability of robotic vision. Further, there is no dataset or benchmark specifically designed for incremental object learning from a few examples. To fill this gap, we present a new dataset termed F-SIOL-310 (Few-Shot Incremental Object Learning), which is specifically captured for testing the few-shot incremental object learning capability of robotic vision. We also provide benchmarks and evaluations of 8 incremental learning algorithms on F-SIOL-310 for future comparison. Our results demonstrate that the few-shot incremental object learning problem for robotic vision is far from solved.
Generalizable 3D part segmentation is important but challenging in vision and robotics. Training deep models via conventional supervised methods requires large-scale 3D datasets with fine-grained part annotations, which are costly to collect. This paper explores an alternative way for low-shot part segmentation of 3D point clouds by leveraging a pretrained image-language model, GLIP, which achieves superior performance on open-vocabulary 2D detection. We transfer the rich knowledge from 2D to 3D through GLIP-based part detection on point cloud rendering and a novel 2D-to-3D label lifting algorithm. We also utilize multi-view 3D priors and few-shot prompt tuning to boost performance significantly. Extensive evaluation on PartNet and PartNet-Mobility datasets shows that our method enables excellent zero-shot 3D part segmentation. Our few-shot version not only outperforms existing few-shot approaches by a large margin but also achieves highly competitive results compared to the fully supervised counterpart. Furthermore, we demonstrate that our method can be directly applied to iPhone-scanned point clouds without significant domain gaps.
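The 2D-to-3D lifting step can be illustrated by projecting every 3D point into each rendered view and letting the 2D part detections vote for a per-point label. The sketch below is a simplified, box-based version with assumed camera conventions and no occlusion reasoning, not the paper's full lifting algorithm.

```python
import numpy as np

def lift_labels(points, views, num_parts):
    """Vote per-point part labels from multi-view 2D detections.

    points : (N, 3) point cloud in world coordinates.
    views  : list of dicts with 'K' (3, 3) intrinsics, 'T_cw' (4, 4)
             world-to-camera transform, and 'boxes' -> list of
             (part_id, (u0, v0, u1, v1)) 2D detections.
    Returns (N,) argmax part labels, with -1 where no detection covers a point.
    """
    votes = np.zeros((len(points), num_parts))
    for view in views:
        pc = (view["T_cw"][:3, :3] @ points.T).T + view["T_cw"][:3, 3]
        in_front = pc[:, 2] > 0
        uv = (view["K"] @ (pc / np.clip(pc[:, 2:3], 1e-6, None)).T).T[:, :2]
        for part_id, (u0, v0, u1, v1) in view["boxes"]:
            inside = (uv[:, 0] >= u0) & (uv[:, 0] <= u1) & \
                     (uv[:, 1] >= v0) & (uv[:, 1] <= v1) & in_front
            votes[inside, part_id] += 1
    labels = votes.argmax(axis=1)
    labels[votes.sum(axis=1) == 0] = -1
    return labels
```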
As autonomous robots interact with and navigate real-world environments such as homes, it is useful to reliably identify and manipulate articulated objects such as doors and cabinets. Many prior works on object articulation identification require the object to be manipulated, either by a robot or a human. While recent works have addressed predicting articulation types from visual observations alone, they often assume prior knowledge of category-level kinematic motion models, or of observation sequences in which the articulated parts move according to their kinematic constraints. In this work, we propose FormNet, a neural network that identifies the articulation mechanisms between pairs of object parts from a single frame of an RGB-D image and segmentation masks. The network is trained on 100K synthetic images of 149 articulated objects from 6 categories. The synthetic images are rendered by a photorealistic simulator with domain randomization. Our proposed model predicts motion residual flows of object parts, and these flows are used to determine the articulation type and parameters. The network achieves an articulation type classification accuracy of 82.5% on novel object instances in trained categories. Experiments also demonstrate how this method enables generalization to novel categories and can be applied to real-world images without fine-tuning.
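As a simplified illustration of going from per-point motion flows to an articulation hypothesis (this is a hand-written heuristic, not FormNet's learned predictor), one can check whether the flow field looks like a pure translation: a prismatic joint moves all points of a rigid part by nearly the same vector, whereas a revolute joint produces flows that vary across the part.

```python
import numpy as np

def classify_articulation(flows, rel_tol=0.1):
    """Heuristic articulation-type guess from per-point motion flows (N, 3).

    If the flow vectors are nearly identical, the part motion is consistent
    with a prismatic joint; otherwise a revolute joint is the better guess.
    Illustrative heuristic only; the threshold rel_tol is an assumption.
    """
    mean_flow = flows.mean(axis=0)
    spread = np.linalg.norm(flows - mean_flow, axis=1).mean()
    magnitude = np.linalg.norm(mean_flow) + 1e-9
    return "prismatic" if spread / magnitude < rel_tol else "revolute"
```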
The 6D object pose estimation problem has been extensively studied in the field of Computer Vision and Robotics. It has a wide range of applications such as robot manipulation, augmented reality, and 3D scene understanding. With the advent of Deep Learning, many breakthroughs have been made; however, approaches continue to struggle when they encounter unseen instances, new categories, or real-world challenges such as cluttered backgrounds and occlusions. In this study, we will explore the available methods based on input modality, problem formulation, and whether it is a category-level or instance-level approach. As a part of our discussion, we will focus on how 6D object pose estimation can be used for understanding 3D scenes.
Estimating 6D poses of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the input image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using a disentangled representation of 3D location and 3D orientation and an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over state-of-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.
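The iterative refinement can be pictured as a simple loop: render the object at the current pose estimate, predict a relative correction from the rendered/observed pair, and compose it with the current pose. The sketch below is schematic, with `render` and `predict_delta` standing in for the renderer and the matching network, and a plain matrix composition in place of the paper's disentangled rotation/translation parameterization.

```python
def iterative_pose_refinement(pose, observed, render, predict_delta, n_iters=4):
    """Schematic render-and-match pose refinement loop.

    pose          : (4, 4) initial object-to-camera transform.
    observed      : the observed image (or crop) of the object.
    render        : callable(pose) -> rendered image of the object at that pose.
    predict_delta : callable(rendered, observed) -> (4, 4) relative transform
                    (stands in for the trained matching network).
    """
    for _ in range(n_iters):
        rendered = render(pose)
        delta = predict_delta(rendered, observed)
        pose = delta @ pose   # simplified composition of the predicted update
    return pose
```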
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.