Recent advances in 3D semantic segmentation with deep neural networks have shown remarkable success, with rapidly improving performance on available datasets. However, current 3D semantic segmentation benchmarks contain only a small number of categories, for example fewer than 30 classes for ScanNet and SemanticKITTI, which is not sufficient to reflect the diversity of real environments (e.g., semantic image understanding covers hundreds to thousands of classes). Thus, we propose to study a larger vocabulary for 3D semantic segmentation with a new extended benchmark on ScanNet data with 200 class categories, an order of magnitude more than previously studied. This large number of class categories also induces a large natural class imbalance, both of which are challenging for existing 3D semantic segmentation methods. To learn more robust 3D features in this setting, we propose a language-driven pre-training method that encourages learned 3D features, which may have limited training examples, to lie close to their pre-trained text embeddings. Extensive experiments show that our approach consistently outperforms state-of-the-art 3D pre-training for 3D semantic segmentation on our proposed benchmark (+9% relative mIoU), including limited-data scenarios with +25% relative mIoU using only 5% of the annotations.
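The core of the language-driven pre-training objective described above is to pull 3D features toward the text embeddings of their class names. Below is a minimal, illustrative PyTorch sketch of such an alignment loss; the function name, the direct use of raw class embeddings, and the temperature value are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def language_grounded_loss(point_feats, point_labels, text_embeds, temperature=0.07):
    """Pull per-point 3D features toward the frozen text embedding of their class.

    point_feats:  (N, D) features from the 3D backbone, projected to the text dim.
    point_labels: (N,)   ground-truth class index per point.
    text_embeds:  (C, D) frozen, pre-computed text embeddings (e.g. from a
                  language model), one per class name.
    """
    point_feats = F.normalize(point_feats, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    # Cosine-similarity logits between every point feature and every class embedding.
    logits = point_feats @ text_embeds.t() / temperature      # (N, C)
    # Cross-entropy against the true class: the positive pair is the point's own
    # class embedding, all other class embeddings act as negatives.
    return F.cross_entropy(logits, point_labels)

# Toy usage with random tensors standing in for a real backbone and text encoder.
feats = torch.randn(1024, 512)            # hypothetical per-point features
labels = torch.randint(0, 200, (1024,))   # 200-class vocabulary as in the benchmark
text = torch.randn(200, 512)              # hypothetical frozen text embeddings
loss = language_grounded_loss(feats, labels, text)
```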
We present a new approach to instill 4D dynamic object priors into learned 3D representations through unsupervised pre-training. We observe that the dynamic movement of an object through an environment provides important cues about its objectness, and thus propose to imbue learned 3D representations with such dynamic understanding, which can then be effectively transferred to downstream 3D semantic scene understanding tasks with improved performance. We propose a new data augmentation scheme that leverages synthetic 3D shapes moving within static 3D environments, and employ contrastive learning under 3D-4D constraints that encode 4D invariances into the learned 3D representations. Experiments show that our unsupervised representation learning leads to improvements on downstream 3D semantic segmentation, object detection, and instance segmentation tasks, and, moreover, notably improves performance in data-scarce scenarios.
Recent advances in 3D perception have shown impressive progress in understanding the geometric structure of 3D shapes and even scenes. Inspired by these advances in geometric understanding, we aim to imbue image-based perception with representations learned under geometric constraints. We introduce an approach to learn view-invariant, geometry-aware representations for network pre-training, based on multi-view RGB-D data, that can then be effectively transferred to downstream 2D tasks. We propose to employ contrastive learning under both multi-view image constraints and image-geometry constraints to encode these geometric priors into the learned 2D representations. This results not only in improvements over image-only pre-training on image-based tasks such as semantic segmentation, instance segmentation, and object detection, but, moreover, provides significant improvements in low-data regimes. We show a significant improvement of 6.0% on semantic segmentation on full data, as well as 11.9% over baselines on 20% of the data on ScanNet.
Arguably one of the top success stories of deep learning is transfer learning. The finding that pre-training a network on a rich source set (e.g., ImageNet) can help boost performance once fine-tuned on a usually much smaller target set, has been instrumental to many applications in language and vision. Yet, very little is known about its usefulness in 3D point cloud understanding. We see this as an opportunity considering the effort required for annotating data in 3D. In this work, we aim at facilitating research on 3D representation learning. Different from previous works, we focus on high-level scene understanding tasks. To this end, we select a suite of diverse datasets and tasks to measure the effect of unsupervised pre-training on a large source set of 3D scenes. Our findings are extremely encouraging: using a unified triplet of architecture, source dataset, and contrastive loss for pre-training, we achieve improvement over recent best results in segmentation and detection across 6 different benchmarks for indoor and outdoor, real and synthetic datasets, demonstrating that the learned representation can generalize across domains. Furthermore, the improvement was similar to supervised pre-training, suggesting that future efforts should favor scaling data collection over more detailed annotation. We hope these findings will encourage more research on unsupervised pretext task design for 3D deep learning. Our code is publicly available at https://github.com/facebookresearch/PointContrast
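The contrastive pre-training loss is applied at the point level between two overlapping views of the same scene. The following is a small sketch of such a point-level InfoNCE objective, assuming point correspondences between the two views have already been computed; variable names and the temperature are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def point_info_nce(feats_view1, feats_view2, temperature=0.07):
    """Contrastive loss over matched points from two views of the same scene.

    feats_view1, feats_view2: (M, D) features of M corresponding points, already
    gathered so that row i in both tensors describes the same physical point.
    Each matched pair is a positive; all other rows serve as negatives.
    """
    z1 = F.normalize(feats_view1, dim=-1)
    z2 = F.normalize(feats_view2, dim=-1)
    logits = z1 @ z2.t() / temperature          # (M, M) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Toy usage: 4096 matched points with 32-dim features from a hypothetical backbone.
loss = point_info_nce(torch.randn(4096, 32), torch.randn(4096, 32))
```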
Due to the lack of large-scale labeled 3D datasets, most 3D neural networks are trained from scratch. In this paper, we present a novel 3D pre-training method that leverages 2D networks learned from rich 2D datasets. We propose contrastive pixel-to-point knowledge transfer to effectively utilize 2D information by mapping pixel-level and point-level features into the same embedding space. Due to the heterogeneous nature of 2D and 3D networks, we introduce a back-projection function to align the features between 2D and 3D to make the transfer possible. Additionally, we devise an upsampling feature projection layer to increase the spatial resolution of high-level 2D feature maps, which enables learning fine-grained 3D representations. With a pre-trained 2D network, the proposed pre-training process requires no additional 2D or 3D labeled data, further alleviating the expensive cost of 3D data annotation. To the best of our knowledge, we are the first to exploit existing 2D pre-trained weights to pre-train 3D deep neural networks. Our extensive experiments show that 3D models pre-trained with 2D knowledge boost the performance of 3D networks across various real-world 3D downstream tasks.
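The back-projection and pixel-to-point contrastive transfer can be pictured as follows. This is a simplified sketch assuming a pinhole camera model and precomputed pixel/point feature pairs; it omits the upsampling feature projection layer described above, and the function names are illustrative.

```python
import torch
import torch.nn.functional as F

def back_project(depth, intrinsics):
    """Lift every pixel of a depth map to a 3D point in camera coordinates.

    depth:      (H, W) depth in metres.
    intrinsics: (3, 3) pinhole camera matrix.
    Returns     (H*W, 3) points, one per pixel, so that a pixel feature and the
    feature of the 3D point it corresponds to can be paired for transfer.
    """
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]
    z = depth.reshape(-1)
    x = (u.reshape(-1) - cx) * z / fx
    y = (v.reshape(-1) - cy) * z / fy
    return torch.stack([x, y, z], dim=-1)

def pixel_to_point_loss(pixel_feats, point_feats, temperature=0.07):
    """InfoNCE between paired pixel features and the features of their 3D points."""
    p2d = F.normalize(pixel_feats, dim=-1)
    p3d = F.normalize(point_feats, dim=-1)
    logits = p2d @ p3d.t() / temperature
    targets = torch.arange(p2d.size(0), device=p2d.device)
    return F.cross_entropy(logits, targets)

# Toy usage: pair the features of 2048 sampled pixels with their back-projected points.
loss = pixel_to_point_loss(torch.randn(2048, 64), torch.randn(2048, 64))
```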
Traditional 3D scene understanding approaches rely on labeled 3D datasets to train a model for a single task with supervision. We propose OpenScene, an alternative approach where a model predicts dense features for 3D scene points that are co-embedded with text and image pixels in CLIP feature space. This zero-shot approach enables task-agnostic training and open-vocabulary queries. For example, to perform SOTA zero-shot 3D semantic segmentation it first infers CLIP features for every 3D point and later classifies them based on similarities to embeddings of arbitrary class labels. More interestingly, it enables a suite of open-vocabulary scene understanding applications that have never been done before. For example, it allows a user to enter an arbitrary text query and then see a heat map indicating which parts of a scene match. Our approach is effective at identifying objects, materials, affordances, activities, and room types in complex 3D scenes, all using a single model trained without any labeled 3D data.
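The zero-shot classification step described above reduces to a cosine-similarity lookup between per-point features and text embeddings of arbitrary label strings. A minimal sketch using the public OpenAI CLIP package is given below; the prompt template, CLIP variant, and function name are illustrative assumptions, and the per-point features are presumed to have been predicted in CLIP space by the distillation network.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP, https://github.com/openai/CLIP

@torch.no_grad()
def zero_shot_segment(point_feats, class_names, device="cpu"):
    """Label each 3D point by its most similar class-name embedding in CLIP space.

    point_feats: (N, 512) per-point features predicted in CLIP feature space.
    class_names: arbitrary, user-chosen label strings (open vocabulary).
    Returns      (N,) predicted class indices into class_names.
    """
    model, _ = clip.load("ViT-B/32", device=device)
    tokens = clip.tokenize([f"a {c} in a scene" for c in class_names]).to(device)
    text = F.normalize(model.encode_text(tokens).float(), dim=-1)   # (C, 512)
    pts = F.normalize(point_feats.to(device), dim=-1)               # (N, 512)
    return (pts @ text.t()).argmax(dim=-1)

# Example: an open-vocabulary query set that was never used during training.
labels = zero_shot_segment(torch.randn(2048, 512), ["sofa", "piano", "fireplace"])
```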
With the recent availability and affordability of commercial depth sensors and 3D scanners, an increasing number of 3D (i.e., RGB-D, point cloud) datasets have been published to facilitate research in 3D computer vision. However, existing datasets either cover relatively small areas or have limited semantic annotations. Fine-grained understanding of urban-scale 3D scenes is still in its infancy. In this paper, we introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km^2. Each point in the dataset has been labeled with a fine-grained semantic annotation, resulting in a dataset three times the size of the previous largest existing photogrammetry point cloud dataset. In addition to common categories such as road and vegetation, our dataset also contains urban-level categories including rail, bridge, and river. Based on this dataset, we further build a benchmark to evaluate the performance of state-of-the-art segmentation algorithms. In particular, we provide a comprehensive analysis and identify several key challenges limiting urban-scale point cloud understanding. The dataset is available at http://point-cloud-analysis.cs.ox.ac.uk.
Deep learning has attained remarkable success in many 3D visual recognition tasks, including shape classification, object detection, and semantic segmentation. However, many of these results rely on manually collecting densely annotated real-world 3D data, which is highly time-consuming and expensive to obtain, limiting the scalability of 3D recognition tasks. Thus, we study unsupervised 3D recognition and propose a Self-supervised-Self-Labeled 3D Recognition (SL3D) framework. SL3D simultaneously solves two coupled objectives, i.e., clustering and learning feature representation to generate pseudo-labeled data for unsupervised 3D recognition. SL3D is a generic framework and can be applied to solve different 3D recognition tasks, including classification, object detection, and semantic segmentation. Extensive experiments demonstrate its effectiveness. Code is available at https://github.com/fcendra/sl3d.
Deep learning approaches have achieved remarkable success in 3D semantic segmentation. However, collecting densely annotated real-world 3D datasets is extremely time-consuming and expensive. Training models on synthetic data and generalizing to real-world scenarios becomes an appealing alternative, but unfortunately suffers from the notorious domain shift. In this work, we propose a Data-Oriented Domain Adaptation (DODA) framework to mitigate the pattern and context gaps caused by differing sensing mechanisms and layout placements across domains. Our DODA encompasses virtual scan simulation to imitate real-world point cloud patterns, and tail-aware cuboid mixing to alleviate the interior context gap with a cuboid-based intermediate domain. The first unsupervised sim-to-real adaptation benchmark on 3D indoor semantic segmentation is also built on 3D-FRONT, ScanNet, and S3DIS, along with 7 popular unsupervised domain adaptation (UDA) methods. Our DODA surpasses existing UDA methods by over 13% on both 3D-FRONT -> ScanNet and 3D-FRONT -> S3DIS. Code is available at https://github.com/cvmi-lab/doda.
We present Mix3D, a data augmentation technique for segmenting large-scale 3D scenes. Since scene context helps reasoning about object semantics, current works focus on models with large capacity and receptive fields that can fully capture the global context of an input 3D scene. However, strong contextual priors can have detrimental effects, like missing a pedestrian crossing the street. In this work, we focus on balancing the importance of global scene context and local geometry, with the goal of generalizing beyond the contextual priors of the training set. In particular, we propose a "mixing" technique that creates new training samples by combining two augmented scenes. By doing so, object instances are implicitly placed into novel, out-of-context environments, making it harder for models to rely on scene context alone and instead requiring them to infer semantics from local structure as well. We perform detailed analyses to understand the importance of global context, local structure, and the effect of mixing scenes. In experiments, we show that models trained with Mix3D profit from a significant performance boost on indoor (ScanNet, S3DIS) and outdoor (SemanticKITTI) datasets. Mix3D can be trivially used with any existing method, e.g., trained with Mix3D, MinkowskiNet outperforms all prior state-of-the-art methods by a significant margin on the ScanNet test benchmark with 78.1 mIoU. Code is available: https://nekrasov.dev/mix3d/
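Since the mixing operation itself is simply the union of two independently augmented scenes, it can be sketched in a few lines. The helper below assumes per-point semantic labels and that the usual per-scene augmentations and centring have already been applied; it is an illustration of the idea rather than the reference implementation.

```python
import numpy as np

def mix3d(points_a, labels_a, points_b, labels_b):
    """Create one 'mixed' training sample by taking the union of two scenes.

    points_*: (N_i, 3+F) coordinates plus features (e.g. colour) of each scene,
              already individually augmented (rotation, flipping, etc.) and
              roughly centred so that the two scenes overlap.
    labels_*: (N_i,) per-point semantic labels.
    The resulting sample places object instances into out-of-context surroundings,
    weakening the model's reliance on global scene priors.
    """
    points = np.concatenate([points_a, points_b], axis=0)
    labels = np.concatenate([labels_a, labels_b], axis=0)
    return points, labels

# Toy usage with two random "scenes".
pts, lbl = mix3d(np.random.rand(1000, 6), np.random.randint(0, 20, 1000),
                 np.random.rand(1200, 6), np.random.randint(0, 20, 1200))
```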
Existing methods for large-scale point cloud semantic segmentation require expensive, tedious and error-prone manual point-wise annotations. Intuitively, weakly supervised training is a direct solution to reduce the cost of labeling. However, for weakly supervised large-scale point cloud semantic segmentation, too few annotations will inevitably lead to ineffective learning of the network. We propose an effective weakly supervised method containing two components to solve the above problem. Firstly, we construct a pretext task, \textit{i.e.,} point cloud colorization, with a self-supervised learning to transfer the learned prior knowledge from a large amount of unlabeled point cloud to a weakly supervised network. In this way, the representation capability of the weakly supervised network can be improved by the guidance from a heterogeneous task. Besides, to generate pseudo labels for unlabeled data, a sparse label propagation mechanism is proposed with the help of generated class prototypes, which are used to measure the classification confidence of unlabeled points. Our method is evaluated on large-scale point cloud datasets with different scenarios including indoor and outdoor. The experimental results show large gains over existing weakly supervised methods and results comparable to fully supervised methods\footnote{Code based on mindspore: https://github.com/dmcv-ecnu/MindSpore\_ModelZoo/tree/main/WS3\_MindSpore}.
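The sparse label propagation mechanism can be approximated as prototype-based pseudo-labelling: class prototypes are built from the few annotated points, and unlabeled points are assigned the label of their most similar prototype when the similarity is confident enough. The sketch below is a simplification under those assumptions (mean-feature prototypes, a fixed cosine threshold, at least one labeled point per class) and is not the paper's exact mechanism.

```python
import torch
import torch.nn.functional as F

def propagate_labels(labeled_feats, labels, unlabeled_feats, num_classes, thresh=0.9):
    """Pseudo-label unlabeled points from class prototypes of the few labeled points.

    labeled_feats:   (M, D) features of the sparsely annotated points.
    labels:          (M,)   their class indices (every class assumed non-empty).
    unlabeled_feats: (N, D) features of unannotated points.
    Returns pseudo_labels (N,) with -1 for points whose best similarity is below
    `thresh`, i.e. points considered too uncertain to use.
    """
    protos = torch.stack([
        F.normalize(labeled_feats[labels == c].mean(dim=0), dim=-1)
        for c in range(num_classes)
    ])                                                          # (C, D) class prototypes
    sims = F.normalize(unlabeled_feats, dim=-1) @ protos.t()    # (N, C)
    conf, pseudo = sims.max(dim=-1)
    pseudo[conf < thresh] = -1                                  # keep only confident points
    return pseudo
```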
In this work, we propose an open-vocabulary object detection method that, based on image-caption pairs, learns to detect novel object classes along with a given set of known classes. It is a two-stage training approach that first uses a location-guided image-caption matching technique to learn class labels for both novel and known classes in a weakly supervised manner, and second specializes the model for the object detection task using known class annotations. We show that a simple language model fits better than a large contextualized language model for detecting novel objects. Moreover, we introduce a consistency-regularization technique to better exploit image-caption pair information. Our method compares favorably to existing open-vocabulary detection approaches while being data-efficient. Source code is available at https://github.com/lmb-freiburg/locov.
Point cloud learning has lately attracted increasing attention due to its wide applications in many areas, such as computer vision, autonomous driving, and robotics. As a dominating technique in AI, deep learning has been successfully used to solve various 2D vision problems. However, deep learning on point clouds is still in its infancy due to the unique challenges faced by the processing of point clouds with deep neural networks. Recently, deep learning on point clouds has become even thriving, with numerous methods being proposed to address different problems in this area. To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds. It covers three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. It also presents comparative results on several publicly available datasets, together with insightful observations and inspiring future research directions.
Open-vocabulary scene understanding aims to localize and recognize unseen categories beyond the annotated label space. The recent breakthrough of 2D open-vocabulary perception is largely driven by Internet-scale paired image-text data with rich vocabulary concepts. However, this success cannot be directly transferred to 3D scenarios due to the inaccessibility of large-scale 3D-text pairs. To this end, we propose to distill knowledge encoded in pre-trained vision-language (VL) foundation models through captioning multi-view images from 3D, which allows explicitly associating 3D and semantic-rich captions. Further, to facilitate coarse-to-fine visual-semantic representation learning from captions, we design hierarchical 3D-caption pairs, leveraging geometric constraints between 3D scenes and multi-view images. Finally, by employing contrastive learning, the model learns language-aware embeddings that connect 3D and text for open-vocabulary tasks. Our method not only remarkably outperforms baseline methods by 25.8% $\sim$ 44.7% hIoU and 14.5% $\sim$ 50.4% hAP$_{50}$ on open-vocabulary semantic and instance segmentation, but also shows robust transferability on challenging zero-shot domain transfer tasks. Code will be available at https://github.com/CVMI-Lab/PLA.
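The contrastive association between 3D points and captions can be illustrated, for a single level of the hierarchy, as pooling the features of the points assigned to each caption and contrasting them against the caption embeddings. This sketch assumes each point has already been associated with exactly one caption via multi-view projection and that every caption has at least one point; it is a rough approximation of the hierarchical scheme described above, not the paper's full objective.

```python
import torch
import torch.nn.functional as F

def point_caption_contrast(point_feats, point_to_caption, caption_embeds, tau=0.07):
    """Contrast pooled 3D features against the caption describing their region.

    point_feats:      (N, D) per-point features from the 3D backbone.
    point_to_caption: (N,)   index of the caption each point is associated with
                      (e.g. the multi-view image region it projects into).
    caption_embeds:   (K, D) frozen text embeddings of the K captions.
    """
    K = caption_embeds.size(0)
    pooled = torch.stack([
        point_feats[point_to_caption == k].mean(dim=0) for k in range(K)
    ])                                                   # (K, D) one feature per caption
    z3d = F.normalize(pooled, dim=-1)
    ztx = F.normalize(caption_embeds, dim=-1)
    logits = z3d @ ztx.t() / tau                         # (K, K) similarity matrix
    targets = torch.arange(K, device=z3d.device)
    return F.cross_entropy(logits, targets)
```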
Generalizable 3D part segmentation is important but challenging in vision and robotics. Training deep models via conventional supervised methods requires large-scale 3D datasets with fine-grained part annotations, which are costly to collect. This paper explores an alternative way for low-shot part segmentation of 3D point clouds by leveraging a pretrained image-language model, GLIP, which achieves superior performance on open-vocabulary 2D detection. We transfer the rich knowledge from 2D to 3D through GLIP-based part detection on point cloud rendering and a novel 2D-to-3D label lifting algorithm. We also utilize multi-view 3D priors and few-shot prompt tuning to boost performance significantly. Extensive evaluation on PartNet and PartNet-Mobility datasets shows that our method enables excellent zero-shot 3D part segmentation. Our few-shot version not only outperforms existing few-shot approaches by a large margin but also achieves highly competitive results compared to the fully supervised counterpart. Furthermore, we demonstrate that our method can be directly applied to iPhone-scanned point clouds without significant domain gaps.
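A simplified version of the 2D-to-3D label lifting can be written as a projection-and-voting procedure: project the points into each rendered view, give every point falling inside a detected 2D box a vote for that part label, and take the majority across views. The sketch below assumes known camera parameters per view and ignores the occlusion and visibility handling that a practical lifting algorithm needs.

```python
import numpy as np

def lift_labels(points, views, num_parts):
    """Lift 2D part detections from multiple renders to per-point 3D labels by voting.

    points: (N, 3) point cloud in world coordinates.
    views:  list of dicts with 'K' (3x3 intrinsics), 'Rt' (3x4 world-to-camera),
            and 'boxes': list of (label, x0, y0, x1, y1) 2D detections in that view.
    Each view casts one vote per point that projects inside a detected box; the
    final label is the part with the most votes (-1 if a point never got a vote).
    """
    votes = np.zeros((points.shape[0], num_parts), dtype=np.int64)
    homog = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1)   # (N, 4)
    for view in views:
        cam = (view["Rt"] @ homog.T).T                  # (N, 3) camera coordinates
        in_front = cam[:, 2] > 1e-6                     # ignore points behind the camera
        pix = (view["K"] @ cam.T).T
        pix = pix[:, :2] / np.clip(pix[:, 2:3], 1e-6, None)   # perspective divide
        for label, x0, y0, x1, y1 in view["boxes"]:
            inside = (in_front & (pix[:, 0] >= x0) & (pix[:, 0] <= x1)
                      & (pix[:, 1] >= y0) & (pix[:, 1] <= y1))
            votes[inside, label] += 1
    labels = votes.argmax(axis=1)
    labels[votes.sum(axis=1) == 0] = -1
    return labels
```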
We propose to utilize self-supervised techniques in the 2D domain for fine-grained 3D shape segmentation tasks. This is inspired by the observation that view-based surface representations are more effective at modeling high-resolution surface details and texture than their 3D counterparts based on point clouds or voxel occupancy. Specifically, given a 3D shape, we render it from multiple views and set up a dense correspondence learning task within a contrastive learning framework. As a result, the learned 2D representations are view-invariant and geometrically consistent, leading to better generalization when trained on a limited number of labeled shapes compared to alternatives that use self-supervision in 2D or 3D alone. Experiments on textured (RenderPeople) and untextured (PartNet) 3D datasets show that our method outperforms state-of-the-art alternatives in fine-grained part segmentation. The improvements over baselines are even larger when only a sparse set of views is available for training or when shapes are textured, indicating that MvDeCor benefits from both 2D processing and 3D geometric reasoning.
Learning descriptive 3D features is crucial for understanding 3D scenes with diverse objects and complex structures. However, it is usually unknown whether important geometric attributes and scene context obtain enough emphasis in an end-to-end trained 3D scene understanding network. To guide 3D feature learning toward important geometric attributes and scene context, we explore the help of textual scene descriptions. Given some free-form descriptions paired with 3D scenes, we extract the knowledge regarding the object relationships and object attributes. We then inject the knowledge to 3D feature learning through three classification-based auxiliary tasks. This language-assisted training can be combined with modern object detection and instance segmentation methods to promote 3D semantic scene understanding, especially in a label-deficient regime. Moreover, the 3D feature learned with language assistance is better aligned with the language features, which can benefit various 3D-language multimodal tasks. Experiments on several benchmarks of 3D-only and 3D-language tasks demonstrate the effectiveness of our language-assisted 3D feature learning. Code is available at https://github.com/Asterisci/Language-Assisted-3D.
We propose an approach to instance segmentation of 3D point clouds based on dynamic convolution. This enables the method to adapt, at inference time, to varying feature and object scales. Doing so avoids some pitfalls of bottom-up approaches, including a dependence on hyper-parameter tuning and heuristic post-processing pipelines to compensate for the inevitable variability in object sizes, even within a single scene. The representation capability of the network is greatly improved by gathering homogeneous points that share the same semantic category and vote carefully for the geometric centroids. Instances are then decoded via several simple convolution layers, whose parameters are generated conditioned on the input. The proposed approach is proposal-free and instead exploits a convolution process that adapts to the spatial and semantic characteristics of each instance. A light-weight transformer, built on the bottleneck layer, allows the model to capture long-range dependencies with limited computational overhead. The result is a simple, efficient, and robust approach that yields strong performance on various datasets: ScanNetV2, S3DIS, and PartNet. The consistent improvements on both voxel- and point-based architectures imply the effectiveness of the proposed method. Code is available at: https://git.io/dyco3d
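The dynamic-convolution decoding can be illustrated with a single generated point-wise layer: a controller maps each instance descriptor to filter weights that are then applied to the shared per-point features. The class below is a minimal sketch under those assumptions (one generated layer, no voted centroids, no transformer) and is not the full architecture.

```python
import torch
import torch.nn as nn

class DynamicMaskHead(nn.Module):
    """Decode one instance mask per candidate via instance-conditioned filters.

    A small controller maps each instance descriptor to the weights and bias of a
    point-wise (1x1) convolution, which is then applied to the shared per-point
    feature map. Each instance thus gets its own filter adapted to its scale and
    semantics, avoiding fixed, hand-tuned grouping heuristics.
    """
    def __init__(self, feat_dim=32, desc_dim=64):
        super().__init__()
        self.feat_dim = feat_dim
        self.controller = nn.Linear(desc_dim, feat_dim + 1)  # generated weights + bias

    def forward(self, point_feats, instance_descs):
        # point_feats:    (N, feat_dim)  shared features for all points in the scene
        # instance_descs: (I, desc_dim)  one descriptor per candidate instance
        params = self.controller(instance_descs)             # (I, feat_dim + 1)
        w, b = params[:, :self.feat_dim], params[:, self.feat_dim:]
        # (N, I) mask logits: column i is the mask predicted with instance i's filter.
        return point_feats @ w.t() + b.t()

# Toy usage: 3 candidate instances over 5000 points.
head = DynamicMaskHead()
logits = head(torch.randn(5000, 32), torch.randn(3, 64))
```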
Current 3D segmentation methods rely heavily on large-scale point-wise annotated datasets, which are notoriously laborious to annotate. Few attempts have been made to circumvent the need for per-point annotations. In this work, we study weakly supervised 3D semantic instance segmentation. The key idea is to leverage 3D bounding box labels, which are easier and faster to annotate. Indeed, we show that it is possible to train dense segmentation models using only bounding box labels. At the core of our method, Box2Mask, is a deep model, inspired by classical Hough voting, that directly votes for bounding box parameters, together with a clustering method specifically tailored to bounding box votes. This goes beyond commonly used center votes, which would not fully exploit the bounding box annotations. On the ScanNet test benchmark, our weakly supervised model attains leading performance among weakly supervised methods (+18 mAP@50). Remarkably, it also achieves 97% of the mAP@50 score of current fully supervised models. To further illustrate the practicality of our work, we train Box2Mask on the recently released ARKitScenes dataset, which is annotated with 3D bounding boxes only, and show, for the first time, compelling 3D instance segmentation masks.
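To see why box votes carry more information than plain center votes, one can picture each point voting for a full box (center plus extents) and grouping points whose box votes agree. The greedy grouping below is only a toy stand-in, under assumed predicted offsets and sizes, for the clustering method tailored to bounding box votes that the work describes.

```python
import torch

def cluster_box_votes(points, pred_offsets, pred_sizes, merge_thresh=0.3):
    """Group points whose bounding-box votes agree into instance candidates.

    points:       (N, 3) input coordinates.
    pred_offsets: (N, 3) predicted offset from each point to its box center.
    pred_sizes:   (N, 3) predicted box extents (width, height, depth).
    Unlike a pure center vote, each point votes for a full box (center + size), so
    the clustering distance also accounts for disagreement in object extent.
    Returns (N,) instance ids assigned greedily.
    """
    centers = points + pred_offsets                       # (N, 3) voted box centers
    votes = torch.cat([centers, pred_sizes], dim=-1)      # (N, 6) full box vote
    ids = torch.full((points.size(0),), -1, dtype=torch.long)
    next_id = 0
    for i in range(points.size(0)):
        if ids[i] >= 0:
            continue
        # All unassigned points whose box vote is close to point i's vote
        # (normalised by the voted size so large objects tolerate larger spread).
        scale = pred_sizes[i].norm().clamp(min=1e-3)
        dist = (votes - votes[i]).norm(dim=-1) / scale
        members = (ids < 0) & (dist < merge_thresh)
        ids[members] = next_id
        next_id += 1
    return ids
```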
Masked autoencoding has achieved great success in self-supervised learning for the image and language domains. However, mask-based pre-training has yet to show benefits for point cloud understanding, likely because standard backbones like PointNet are unable to properly handle the train-test distribution mismatch introduced by masking during training. In this paper, we bridge this gap by proposing a discriminative mask pretraining Transformer framework, MaskPoint. Our key idea is to represent the point cloud as discrete occupancy values (1 if part of the point cloud; 0 if not), and to perform simple binary classification between masked object points and sampled noise points as the proxy task. In this way, our approach is robust to the point sampling variance in point clouds and facilitates learning rich representations. We evaluate our pre-trained models across several downstream tasks, including 3D shape classification, segmentation, and real-world object detection, and demonstrate state-of-the-art results while achieving a significant pre-training speedup (e.g., 4.1x on ScanNet) over the prior state-of-the-art Transformer baseline. Code is available at https://github.com/haotian-liu/maskpoint.
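The discriminative proxy task amounts to building a set of query points labelled by occupancy: masked real points are positives and uniformly sampled noise points are negatives, and a decoder conditioned on the visible points classifies them with a binary cross-entropy loss. The helper below sketches the query construction under that assumption; the real pipeline samples both sets more carefully.

```python
import torch
import torch.nn.functional as F

def occupancy_queries(masked_points, num_noise=512):
    """Build the binary-classification proxy task: real vs. noise query points.

    masked_points: (M, 3) points that were hidden from the encoder (label 1).
    Noise queries are drawn uniformly inside the point cloud's bounding box
    (label 0). The decoder must tell, from the visible-point encoding alone,
    which queries lie on the underlying surface.
    """
    lo, hi = masked_points.min(dim=0).values, masked_points.max(dim=0).values
    noise = torch.rand(num_noise, 3) * (hi - lo) + lo
    queries = torch.cat([masked_points, noise], dim=0)
    labels = torch.cat([torch.ones(masked_points.size(0)), torch.zeros(num_noise)])
    return queries, labels

# Given decoder logits for each query, the pre-training loss would then be:
# loss = F.binary_cross_entropy_with_logits(decoder_logits, labels)
```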