Access to large, diverse RGB-D datasets is critical for training RGB-D scene understanding algorithms. However, existing datasets still cover only a limited number of views or a restricted scale of spaces. In this paper, we introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.
A key requirement for leveraging supervised deep learning methods is the availability of large, labeled datasets. Unfortunately, in the context of RGB-D scene understanding, very little data is available: current datasets cover a small range of scene views and have limited semantic annotations. To address this issue, we introduce ScanNet, an RGB-D video dataset containing 2.5M views in 1513 scenes annotated with 3D camera poses, surface reconstructions, and semantic segmentations. To collect this data, we designed an easy-to-use and scalable RGB-D capture system that includes automated surface reconstruction and crowdsourced semantic annotation. We show that using this data helps achieve state-of-the-art performance on several 3D scene understanding tasks, including 3D object classification, semantic voxel labeling, and CAD model retrieval.
We introduce Amazon Berkeley Objects (ABO), a new large-scale dataset designed to help bridge the gap between real and virtual 3D worlds. ABO contains product catalog images, metadata, and artist-created 3D models with complex geometries and physically-based materials that correspond to real household objects. We derive challenging benchmarks that exploit the unique properties of ABO, and measure the limits of state-of-the-art methods on three open problems for real-world 3D object understanding: single-view 3D reconstruction, material estimation, and cross-domain multi-view object retrieval.
This paper focuses on semantic scene completion, a task for producing a complete 3D voxel representation of volumetric occupancy and semantic labels for a scene from a single-view depth map observation. Previous work has considered scene completion and semantic labeling of depth maps separately. However, we observe that these two problems are tightly intertwined. To leverage the coupled nature of these two tasks, we introduce the semantic scene completion network (SSCNet), an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum. Our network uses a dilation-based 3D context module to efficiently expand the receptive field and enable 3D context learning. To train our network, we construct SUNCG, a manually created large-scale dataset of synthetic 3D scenes with dense volumetric annotations. Our experiments demonstrate that the joint model outperforms methods addressing each task in isolation and outperforms alternative approaches on the semantic scene completion task. The dataset, code and pretrained model will be available online upon acceptance.
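The dilation-based 3D context module is the abstract's key architectural idea. Below is a minimal PyTorch sketch of one plausible form of such a module, assuming parallel 3x3x3 branches with increasing dilation rates whose summed responses aggregate multi-scale context; the exact layer configuration in SSCNet may differ.

```python
import torch
import torch.nn as nn

class Dilated3DContext(nn.Module):
    """Parallel 3D convolutions with growing dilation enlarge the receptive
    field over a voxel grid without extra parameters per branch size."""
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # padding = dilation keeps the spatial size of the voxel grid unchanged.
        self.branches = nn.ModuleList(
            nn.Conv3d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum the multi-dilation responses so each voxel aggregates
        # context at several scales at once.
        out = sum(branch(x) for branch in self.branches)
        return self.relu(out)

# Toy voxel feature volume: batch 1, 16 channels, 32^3 grid.
features = torch.randn(1, 16, 32, 32, 32)
context = Dilated3DContext(16)(features)
print(context.shape)  # torch.Size([1, 16, 32, 32, 32])
```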
We introduce a dataset of 998 3D models of everyday tabletop objects along with 847,000 real-world RGB and depth images of them. Accurate annotations of camera poses and object poses for each image are performed in a semi-automated fashion to facilitate the use of the dataset for a variety of 3D applications such as shape reconstruction, object pose estimation, and shape retrieval. We focus primarily on 3D reconstruction, a task that lacks an appropriate real-world benchmark, and demonstrate that our dataset can fill that gap. The entire annotated dataset, along with the source code for the annotation tools and evaluation baselines, is available at http://www.ocrtoc.org/3d-reconstruction.html.
Scene understanding is an active research area. Commodity depth sensors such as the Kinect have enabled the release of several RGB-D datasets over the past few years, which in turn spawned new methods in 3D scene understanding. More recently, the introduction of the LiDAR sensor in Apple's iPads and iPhones makes high-quality RGB-D data accessible on the devices people use every day. This opens up a whole new era for the computer vision community as well as for application developers. Fundamental research in scene understanding, together with advances in machine learning, can now impact people's everyday experiences. However, transforming these scene-understanding methods into real-world experiences requires additional innovation and development. In this paper, we introduce ARKitScenes. It is not only the first RGB-D dataset captured with the now widely available depth sensor, but to the best of our knowledge it is also the largest indoor scene understanding dataset ever released. In addition to the raw and processed data from the mobile device, ARKitScenes includes high-resolution depth maps captured with a stationary laser scanner, as well as manually labeled 3D oriented bounding boxes for a large taxonomy of furniture. We further analyze the usefulness of the data for two downstream tasks: 3D object detection and color-guided depth upsampling. We demonstrate that our dataset can help push the boundaries of existing state-of-the-art methods, and that it introduces new challenges that better represent real-world scenarios.
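Color-guided depth upsampling, one of the two downstream tasks above, has a classical non-learned baseline: joint bilateral upsampling, which weights low-resolution depth samples by both spatial distance and high-resolution color similarity. A small NumPy sketch follows (this is the textbook baseline, not the paper's benchmarked networks, and the parameter values are made up).

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, rgb_hi, scale, radius=2,
                             sigma_space=1.0, sigma_color=0.1):
    """Upsample low-res depth using a high-res RGB guide (naive O(HW r^2) loop).

    depth_lo: (h, w) low-resolution depth map.
    rgb_hi:   (H, W, 3) guide image in [0, 1], with H = h * scale, W = w * scale.
    """
    h, w = depth_lo.shape
    H, W, _ = rgb_hi.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y / scale, x / scale          # position on the low-res grid
            num, den = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ly, lx = int(round(cy)) + dy, int(round(cx)) + dx
                    if not (0 <= ly < h and 0 <= lx < w):
                        continue
                    # Spatial weight: distance on the low-res grid.
                    ws = np.exp(-((ly - cy) ** 2 + (lx - cx) ** 2)
                                / (2 * sigma_space ** 2))
                    # Range weight: color difference in the high-res guide.
                    gy, gx = min(ly * scale, H - 1), min(lx * scale, W - 1)
                    dc = rgb_hi[y, x] - rgb_hi[gy, gx]
                    wc = np.exp(-np.dot(dc, dc) / (2 * sigma_color ** 2))
                    num += ws * wc * depth_lo[ly, lx]
                    den += ws * wc
            out[y, x] = num / den if den > 0 else 0.0
    return out

# Tiny example: an 8x8 depth map upsampled 4x under a 32x32 RGB guide.
up = joint_bilateral_upsample(np.random.rand(8, 8), np.random.rand(32, 32, 3), scale=4)
print(up.shape)  # (32, 32)
```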
The 6D object pose estimation problem has been extensively studied in the fields of Computer Vision and Robotics. It has a wide range of applications such as robot manipulation, augmented reality, and 3D scene understanding. With the advent of Deep Learning, many breakthroughs have been made; however, approaches continue to struggle when they encounter unseen instances, new categories, or real-world challenges such as cluttered backgrounds and occlusions. In this study, we will explore the available methods based on input modality, problem formulation, and whether it is a category-level or instance-level approach. As a part of our discussion, we will focus on how 6D object pose estimation can be used for understanding 3D scenes.
Pixel-level 2D object semantic understanding is an important topic in computer vision, and it can help machines understand objects in daily life in depth (e.g., their functionality and affordances). However, most previous methods train directly on correspondences in 2D images, which is end-to-end but loses much of the information available in 3D space. In this paper, we propose a new method that predicts image-corresponding semantics in the 3D domain and then projects them back onto 2D images to achieve pixel-level understanding. To obtain reliable 3D semantic labels that are absent from current image datasets, we build a large-scale keypoint knowledge engine called KeypointNet, which contains 103,450 keypoints and 8,234 3D models from 16 object categories. Our method leverages the advantages of 3D vision and can explicitly reason about object self-occlusion and visibility. We show that our method gives comparable and even superior results on standard semantic benchmarks.
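At its core, the self-occlusion and visibility reasoning mentioned above can be reduced to a z-buffer test when projecting 3D semantics back into 2D. A minimal sketch of that step, assuming camera-frame keypoints and a rendered depth map of the object; the function name and tolerance are illustrative, not KeypointNet's actual pipeline.

```python
import numpy as np

def project_with_visibility(K, pts_cam, depth_map, eps=0.01):
    """Project camera-frame 3D points into the image; z-buffer test for occlusion."""
    uv_h = (K @ pts_cam.T).T                  # homogeneous pixel coordinates
    uv = uv_h[:, :2] / uv_h[:, 2:3]
    H, W = depth_map.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    # A point is visible if its depth agrees with the surface depth rendered
    # at the pixel it lands on; otherwise the object itself occludes it.
    visible = np.abs(depth_map[v, u] - pts_cam[:, 2]) < eps
    return uv, visible

# Example with made-up intrinsics: a point at 2 m on the optical axis.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
uv, vis = project_with_visibility(K, np.array([[0.0, 0.0, 2.0]]), np.full((480, 640), 2.0))
print(uv[0], vis[0])  # [320. 240.] True
```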
This paper presents a new large-scale multi-view dataset of human body expressions with natural clothing, called HUMBI. The goal of HUMBI is to facilitate modeling the view-specific appearance and geometry of five primary body signals, including gaze, face, hands, body, and garments, from a diverse population. 107 synchronized HD cameras are used to capture 772 distinctive subjects across gender, ethnicity, age, and style. Using the multi-view image streams, we reconstruct high-fidelity body expressions with 3D mesh models, which allows representing view-specific appearance. We demonstrate that HUMBI is highly effective for learning and reconstructing a complete human body model, and that it is complementary to existing datasets of human body expressions with limited views and subjects, such as the MPII-Gaze, Multi-PIE, Human3.6M, and Panoptic Studio datasets. Based on HUMBI, we formulate a new benchmark challenge of a pose-guided appearance rendering task, which aims to substantially extend photorealism in modeling diverse human expressions in 3D, a key enabling capability for authentic social telepresence. HUMBI is publicly available at http://humbi-data.net
Transparent objects are ubiquitous in household settings and pose distinct challenges for visual sensing and perception systems. The optical properties of transparent objects make conventional 3D sensors unreliable for object depth and pose estimation. These challenges are highlighted by the shortage of large-scale RGB-Depth datasets focusing on transparent objects in the real world. In this work, we contribute a large-scale real-world RGB-Depth transparent object dataset named ClearPose, to serve as a benchmark for segmentation, scene-level depth completion, and object-centric pose estimation tasks. The ClearPose dataset contains over 350K labeled real-world RGB-Depth frames and 5M instance annotations covering 63 household objects. The dataset includes object categories commonly used in daily life under various lighting and occlusion conditions, as well as challenging test scenarios such as occlusion by opaque or translucent objects, non-planar orientations, the presence of liquids, and so on. We benchmark several state-of-the-art depth completion and object pose estimation deep neural networks on ClearPose. The dataset and benchmark source code are available at https://github.com/opipari/clearpose.
Various datasets have been proposed for simultaneous localization and mapping (SLAM) and related problems. Existing datasets often include small environments, have incomplete ground truth, or lack important sensor data, such as depth and infrared images. We propose an easy-to-use framework for acquiring building-scale 3D reconstructions using a consumer depth camera. Unlike complex and expensive acquisition setups, our system enables crowd-sourcing, which can greatly benefit data-hungry algorithms. Compared to similar systems, we utilize raw depth maps for odometry computation and loop closure refinement, which results in better reconstructions. We acquire a building-scale 3D dataset (BS3D) and demonstrate its value by training an improved monocular depth estimation model. As a unique experiment, we benchmark visual-inertial odometry methods using both color and active infrared images.
We present ObjectMatch, a semantic and object-centric camera pose estimator for RGB-D SLAM pipelines. Modern camera pose estimators rely on direct correspondences of overlapping regions between frames; however, they cannot align camera frames with little or no overlap. In this work, we propose to leverage indirect correspondences obtained via semantic object identification. For instance, when an object is seen from the front in one frame and from the back in another frame, we can provide additional pose constraints through canonical object correspondences. We first propose a neural network to predict such correspondences on a per-pixel level, which we then combine in our energy formulation with state-of-the-art keypoint matching solved with a joint Gauss-Newton optimization. In a pairwise setting, our method improves registration recall of state-of-the-art feature matching from 77% to 87% overall and from 21% to 52% in pairs with 10% or less inter-frame overlap. In registering RGB-D sequences, our method outperforms cutting-edge SLAM baselines in challenging, low frame-rate scenarios, achieving more than 35% reduction in trajectory error in multiple scenes.
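The joint Gauss-Newton optimization above combines keypoint and canonical object correspondences in a single energy; its least-squares core can be illustrated on plain 3D-3D correspondences. A simplified sketch follows; the paper's actual energy, weighting, and parameterization differ, and the helper names are illustrative.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def gauss_newton_pose(p, q, iters=10):
    """Find R, t minimizing sum_i ||R @ p_i + t - q_i||^2 by Gauss-Newton."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        Rp = p @ R.T                      # rotated source points
        r = (Rp + t - q).reshape(-1)      # stacked residuals, shape (3N,)
        J = np.zeros((3 * len(p), 6))
        for i, rp in enumerate(Rp):
            J[3*i:3*i+3, :3] = -skew(rp)  # d(residual)/d(rotation perturbation)
            J[3*i:3*i+3, 3:] = np.eye(3)  # d(residual)/d(translation)
        delta = np.linalg.solve(J.T @ J, -J.T @ r)
        w, dt = delta[:3], delta[3:]
        # Apply the rotation update via Rodrigues' formula (exp map on SO(3)).
        angle = np.linalg.norm(w)
        if angle > 1e-12:
            Kx = skew(w / angle)
            dR = np.eye(3) + np.sin(angle) * Kx + (1 - np.cos(angle)) * Kx @ Kx
            R = dR @ R
        t = t + dt
    return R, t

# Sanity check: recover a known rotation/translation from noiseless matches.
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0], [np.sin(a), np.cos(a), 0.0], [0.0, 0.0, 1.0]])
p = np.random.default_rng(0).normal(size=(50, 3))
q = p @ R_true.T + np.array([0.1, -0.2, 0.5])
R_est, t_est = gauss_newton_pose(p, q)
print(np.allclose(R_est, R_true, atol=1e-6), np.round(t_est, 3))
```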
Although RGB-D sensors have enabled major breakthroughs for several vision tasks, such as 3D reconstruction, we have not attained the same level of success in high-level scene understanding. Perhaps one of the main reasons is the lack of a large-scale benchmark with 3D annotations and 3D evaluation metrics. In this paper, we introduce an RGB-D benchmark suite with the goal of advancing the state of the art in all major scene understanding tasks. Our dataset is captured by four different sensors and contains 10,335 RGB-D images, at a similar scale as PASCAL VOC. The whole dataset is densely annotated and includes 146,617 2D polygons and 64,595 3D bounding boxes with accurate object orientations, as well as a 3D room layout and scene category for each image. This dataset enables us to train data-hungry algorithms for scene-understanding tasks, evaluate them using meaningful 3D metrics, avoid overfitting to a small testing set, and study cross-sensor bias.

Testing Set   Trained on NYU (795 images)   Trained on SUN RGB-D (5,285 images)
NYU           32.50                         34.33
SUN RGB-D     15.78                         33.20
Modern computer vision has moved beyond the domain of internet photo collections and into the physical world, guiding camera-equipped robots and autonomous cars through unstructured environments. To enable these embodied agents to interact with real-world objects, cameras are increasingly being used as depth sensors, reconstructing the environment for a variety of downstream reasoning tasks. Machine-learning-aided depth perception, or depth estimation, predicts a distance for each pixel in an image. While impressive progress has been made in depth estimation, significant challenges remain: (1) ground-truth depth labels are hard to collect at scale, (2) camera information is typically assumed to be known, but is often unreliable, and (3) restrictive camera assumptions are common, even though a wide variety of camera types and lenses are used in practice. In this thesis, we focus on relaxing these assumptions and describe contributions toward the ultimate goal of turning cameras into truly generic depth sensors.
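As a concrete anchor for "cameras as depth sensors": once per-pixel depth is predicted, the standard pinhole model lifts each pixel to a 3D point. A short sketch with made-up intrinsics; the thesis itself targets the harder setting where these intrinsics are unknown or unreliable.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """depth: (H, W) metric depth map -> (H, W, 3) camera-space points."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Pinhole model: x = (u - cx) / fx * z, y = (v - cy) / fy * z.
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1)

# Example intrinsics (illustrative values for a VGA sensor).
points = backproject(np.full((480, 640), 2.0), fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points.shape)  # (480, 640, 3)
```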
Spherical cameras capture scenes in a holistic manner and have been used for room layout estimation. Recently, with the availability of appropriate datasets, there has also been progress in depth estimation from a single omnidirectional image. Although these two tasks are complementary, few works have explored them in parallel to advance indoor geometric perception, and those that have done so either relied on synthetic data or used small-scale datasets, since few options are available that include both layout annotations and dense depth maps in real scenes. This is partly due to the need for manual annotation of room layouts. In this work, we move beyond this limitation and generate a 360 geometric vision (360V) dataset that includes multiple modalities, multi-view stereo data, and automatically generated weak layout cues. We also explore an explicit coupling between the two tasks to integrate them into a single-shot trained model. We rely on depth-based layout reconstruction and layout-based depth attention, and demonstrate increased performance across both tasks. By scanning rooms with a single 360 camera, the opportunity for convenient and fast building-scale 3D scanning arises.
The goal of this paper is to estimate the 6D pose and dimensions of unseen object instances in an RGB-D image. Contrary to "instance-level" 6D pose estimation tasks, our problem assumes that no exact object CAD models are available during either training or testing time. To handle different and unseen object instances in a given category, we introduce Normalized Object Coordinate Space (NOCS)-a shared canonical representation for all possible object instances within a category. Our region-based neural network is then trained to directly infer the correspondence from observed pixels to this shared object representation (NOCS) along with other object information such as class label and instance mask. These predictions can be combined with the depth map to jointly estimate the metric 6D pose and dimensions of multiple objects in a cluttered scene. To train our network, we present a new context-aware technique to generate large amounts of fully annotated mixed reality data. To further improve our model and evaluate its performance on real data, we also provide a fully annotated real-world dataset with large environment and instance variation. Extensive experiments demonstrate that the proposed method is able to robustly estimate the pose and size of unseen object instances in real environments while also achieving state-of-the-art performance on standard 6D pose estimation benchmarks.
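The pose/size recovery step described above (combining per-pixel NOCS predictions with the depth map) boils down to estimating a 7-DoF similarity transform. A minimal sketch using the Umeyama algorithm on synthetic stand-in data; the paper's full pipeline also handles outliers in the predicted correspondences, e.g., with RANSAC.

```python
import numpy as np

def umeyama(src, dst):
    """Similarity transform s, R, t with dst ~= s * R @ src + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                    # fix an improper (reflected) rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / xs.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# nocs: per-pixel canonical coords in [0,1]^3; pts: backprojected depth points.
nocs = np.random.rand(500, 3)
true_s, true_t = 0.3, np.array([0.1, -0.2, 1.5])
pts = true_s * nocs + true_t              # identity rotation for simplicity
s, R, t = umeyama(nocs, pts)
print(round(s, 3), np.round(t, 3))        # ~0.3, [0.1, -0.2, 1.5]
```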
Traditional 3D scene understanding approaches rely on labeled 3D datasets to train a model for a single task with supervision. We propose OpenScene, an alternative approach where a model predicts dense features for 3D scene points that are co-embedded with text and image pixels in CLIP feature space. This zero-shot approach enables task-agnostic training and open-vocabulary queries. For example, to perform SOTA zero-shot 3D semantic segmentation it first infers CLIP features for every 3D point and later classifies them based on similarities to embeddings of arbitrary class labels. More interestingly, it enables a suite of open-vocabulary scene understanding applications that have never been done before. For example, it allows a user to enter an arbitrary text query and then see a heat map indicating which parts of a scene match. Our approach is effective at identifying objects, materials, affordances, activities, and room types in complex 3D scenes, all using a single model trained without any labeled 3D data.
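The zero-shot classification step described above reduces to a cosine-similarity lookup between per-point features and CLIP text embeddings. A minimal sketch with random stand-ins for the real embeddings; producing actual features requires a CLIP model and the paper's 3D distillation network.

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

num_points, dim = 10_000, 512
point_feats = normalize(np.random.randn(num_points, dim))        # per-point CLIP-space features
label_texts = ["chair", "table", "sofa", "floor"]                # arbitrary open-vocabulary labels
text_embeds = normalize(np.random.randn(len(label_texts), dim))  # CLIP text embeddings

# Cosine similarity between every point and every label; argmax labels points.
sims = point_feats @ text_embeds.T        # (num_points, num_labels)
labels = sims.argmax(axis=1)
print(label_texts[labels[0]], sims[0].max())
```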
We introduce an approach for recovering the 6D pose of multiple known objects in a scene captured by a set of input images with unknown camera viewpoints. First, we present a single-view single-object 6D pose estimation method, which we use to generate 6D object pose hypotheses. Second, we develop a robust method for matching individual 6D object pose hypotheses across different input images in order to jointly estimate camera viewpoints and 6D poses of all objects in a single consistent scene. Our approach explicitly handles object symmetries, does not require depth measurements, is robust to missing or incorrect object hypotheses, and automatically recovers the number of objects in the scene. Third, we develop a method for global scene refinement given multiple object hypotheses and their correspondences across views. This is achieved by solving an object-level bundle adjustment problem that refines the poses of cameras and objects to minimize the reprojection error in all views. We demonstrate that the proposed method, dubbed CosyPose, outperforms current state-of-the-art results for single-view and multi-view 6D object pose estimation by a large margin on two challenging benchmarks: the YCB-Video and T-LESS datasets. Code and pre-trained models are available on the project webpage.
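The object-level bundle adjustment at the heart of CosyPose minimizes a multi-view reprojection error over camera and object poses. A minimal sketch of that objective; the optimizer, robust losses, and symmetry handling are omitted, and the function names and point-based object representation are illustrative.

```python
import numpy as np

def project(K, T_cam, T_obj, pts_obj):
    """Project object-frame 3D points into a camera's image plane.

    T_cam, T_obj: 4x4 world-to-camera and object-to-world rigid transforms.
    """
    pts_world = pts_obj @ T_obj[:3, :3].T + T_obj[:3, 3]
    pts_cam = pts_world @ T_cam[:3, :3].T + T_cam[:3, 3]
    uv = pts_cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def reprojection_error(K, cam_poses, obj_poses, pts_obj, observations):
    """Sum of squared pixel residuals over all (camera, object) pairs.

    observations[(c, o)] holds the detected 2D keypoints of object o in view c;
    bundle adjustment minimizes this quantity over cam_poses and obj_poses.
    """
    err = 0.0
    for (c, o), uv_obs in observations.items():
        uv_pred = project(K, cam_poses[c], obj_poses[o], pts_obj[o])
        err += np.sum((uv_pred - uv_obs) ** 2)
    return err
```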
Visual perception tasks often require vast amounts of labeled data, including 3D poses and image-space segmentation masks. The process of creating such training datasets can prove difficult or time-consuming to scale up for general use. Consider the task of pose estimation for rigid objects. Deep neural network based approaches have shown good performance when trained on large public datasets. However, adapting these networks to other novel objects, or fine-tuning existing models for different environments, requires significant time investment to generate newly labeled instances. Toward this end, we propose ProgressLabeller, a method for more efficiently generating large amounts of 6D pose training data from color image sequences in a scalable manner. ProgressLabeller is also designed to support transparent or translucent objects, for which previous methods based on dense depth reconstruction fail. We demonstrate the effectiveness of ProgressLabeller by rapidly creating a dataset of over 1M samples, with which we fine-tune a state-of-the-art pose estimation network in order to markedly improve downstream robotic grasping success rates. ProgressLabeller is open-source at https://github.com/huijiezh/progresslabeller.
Many basic indoor activities, such as eating or writing, are always conducted on different tabletops (e.g., coffee tables, writing desks). Understanding tabletop scenes is indispensable for 3D indoor scene parsing applications. Unfortunately, since 3D tabletop scenes are rarely available in current datasets, it is hard to meet this demand by directly deploying data-driven algorithms. To remedy this defect, we introduce TO-Scene, a large-scale dataset focusing on tabletop scenes, which contains 20,740 scenes with three variants. To acquire the data, we design an efficient and scalable framework, in which a crowdsourcing UI is developed to transfer CAD objects from ModelNet and ShapeNet onto tables from ScanNet, and the resulting tabletop scenes are then simulated into realistic scans and annotated automatically. Furthermore, a tabletop-aware learning strategy is proposed to better perceive small-sized tabletop instances. Notably, we also provide a real scanned test set to verify the practical value of TO-Scene. Experiments show that algorithms trained on TO-Scene indeed work on the realistic test data, and our proposed tabletop-aware learning strategy greatly improves the state-of-the-art results on both 3D semantic segmentation and object detection tasks. The dataset and code are available at https://github.com/gap-lab-cuhk-sz/to-scene.