Masked Modeling (MM) has demonstrated widespread success in various vision tasks by reconstructing masked visual patches. Yet, applying MM to large-scale 3D scenes remains an open problem due to data sparsity and scene complexity. The conventional random masking paradigm used in 2D images often causes a high risk of ambiguity when recovering the masked regions of 3D scenes. To this end, we propose a novel informative-preserved reconstruction, which explores local statistics to discover and preserve representative structured points, effectively enhancing the pretext masking task for 3D scene understanding. Integrated with a progressive reconstruction manner, our method can concentrate on modeling regional geometry and suffers less ambiguity during masked reconstruction. Besides, such scenes with progressive masking ratios can also serve to self-distill their intrinsic spatial consistency, which requires learning consistent representations from unmasked areas. By elegantly combining informative-preserved reconstruction on masked areas with consistency self-distillation from unmasked areas, a unified framework called MM-3DScene is yielded. We conduct comprehensive experiments on a host of downstream tasks. The consistent improvements (e.g., +6.1 mAP@0.5 on object detection and +2.2% mIoU on semantic segmentation) demonstrate the superiority of our approach.
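The informative-preserved masking idea can be illustrated with a short sketch. Below is a minimal, hypothetical re-implementation in PyTorch that scores each point by the feature variation in its local neighborhood and exempts the highest-scoring points from masking; the exact local statistic and hyper-parameters used by MM-3DScene may differ.

```python
import torch

def informative_preserved_mask(points, feats, k=16, mask_ratio=0.6):
    """Score each point by the feature variance of its k-NN neighborhood
    and mask the least informative points first, so that representative
    structured points stay visible. (Illustrative sketch only.)

    points: (N, 3) coordinates; feats: (N, C) per-point features.
    Returns a boolean mask of shape (N,), True = masked.
    """
    n = points.shape[0]
    dists = torch.cdist(points, points)                        # (N, N)
    knn_idx = dists.topk(k + 1, largest=False).indices[:, 1:]  # drop self
    neigh = feats[knn_idx]                                     # (N, k, C)
    score = neigh.var(dim=1).mean(dim=-1)                      # local statistic
    mask = torch.zeros(n, dtype=torch.bool)
    mask[score.argsort()[:int(n * mask_ratio)]] = True         # mask low-info first
    return mask

mask = informative_preserved_mask(torch.randn(1024, 3), torch.randn(1024, 32))
```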
Arguably one of the top success stories of deep learning is transfer learning. The finding that pre-training a network on a rich source set (e.g., ImageNet) can help boost performance once fine-tuned on a usually much smaller target set has been instrumental to many applications in language and vision. Yet, very little is known about its usefulness in 3D point cloud understanding. We see this as an opportunity, considering the effort required for annotating data in 3D. In this work, we aim at facilitating research on 3D representation learning. Different from previous works, we focus on high-level scene understanding tasks. To this end, we select a suite of diverse datasets and tasks to measure the effect of unsupervised pre-training on a large source set of 3D scenes. Our findings are extremely encouraging: using a unified triplet of architecture, source dataset, and contrastive loss for pre-training, we achieve improvement over recent best results in segmentation and detection across 6 different benchmarks for indoor and outdoor, real and synthetic datasets, demonstrating that the learned representation can generalize across domains. Furthermore, the improvement is similar to that of supervised pre-training, suggesting that future efforts should favor scaling data collection over more detailed annotation. We hope these findings will encourage more research on unsupervised pretext task design for 3D deep learning. Our code is publicly available at https://github.com/facebookresearch/PointContrast
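The contrastive loss in this triplet operates on matched points across two views of the same scene. A minimal PointInfoNCE-style sketch is shown below; the pair-sampling and hard-negative details of the actual method are simplified away.

```python
import torch
import torch.nn.functional as F

def point_info_nce(feat_a, feat_b, temperature=0.07):
    """feat_a[i] and feat_b[i] are assumed to be features of the same
    physical point observed in two different views; the matched point is
    the positive, and all other points in the batch act as negatives."""
    a = F.normalize(feat_a, dim=-1)
    b = F.normalize(feat_b, dim=-1)
    logits = a @ b.t() / temperature                # (M, M) similarities
    targets = torch.arange(a.shape[0], device=a.device)
    return F.cross_entropy(logits, targets)

loss = point_info_nce(torch.randn(256, 32), torch.randn(256, 32))
```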
Most 3D neural networks are trained from scratch owing to the lack of large-scale labeled 3D datasets. In this paper, we introduce a new 3D pre-training method that leverages 2D networks learned from rich 2D datasets. We propose contrastive pixel-to-point knowledge transfer to effectively utilize 2D information by mapping pixel-level and point-level features into the same embedding space. Due to the heterogeneous nature of 2D and 3D networks, we introduce a back-projection function to align features between 2D and 3D, making the transfer possible. Additionally, we design an upsampling feature projection layer to increase the spatial resolution of high-level 2D feature maps, which enables learning fine-grained 3D representations. With a pre-trained 2D network, the proposed pre-training process requires no additional 2D or 3D labeled data, further alleviating the expensive cost of 3D data annotation. To the best of our knowledge, we are the first to exploit existing 2D pre-trained weights to pre-train 3D deep neural networks. Our extensive experiments show that 3D models pre-trained with 2D knowledge boost the performance of 3D networks across various real-world 3D downstream tasks.
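A rough sketch of the pixel-to-point contrastive transfer, assuming a hypothetical precomputed back-projection `pix_idx` that maps each 3D point to its matching pixel in the (frozen, upsampled) 2D feature map:

```python
import torch
import torch.nn.functional as F

def pixel_to_point_loss(point_feat, pixel_feat, pix_idx, temperature=0.04):
    """Pull each point feature toward its back-projected pixel feature
    (positive) against all other pixels (negatives). The 2D network is
    treated as a frozen teacher. Illustrative only.

    point_feat: (N, C) trainable 3D features; pixel_feat: (P, C) frozen
    2D features; pix_idx: (N,) flat pixel index matched to each point.
    """
    p3d = F.normalize(point_feat, dim=-1)
    p2d = F.normalize(pixel_feat, dim=-1).detach()   # frozen 2D teacher
    logits = p3d @ p2d.t() / temperature             # (N, P)
    return F.cross_entropy(logits, pix_idx)

loss = pixel_to_point_loss(torch.randn(512, 64), torch.randn(4096, 64),
                           torch.randint(0, 4096, (512,)))
```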
Pre-training on abundant image data has become the de facto approach for learning robust 2D representations. In contrast, due to expensive data acquisition and annotation, the paucity of large-scale 3D datasets severely hinders the learning of high-quality 3D features. In this paper, we propose an alternative to obtain superior 3D representations from 2D pre-trained models via Image-to-Point Masked Autoencoders, named I2P-MAE. By self-supervised pre-training, we leverage the well-learned 2D knowledge to guide 3D masked autoencoding, which reconstructs the masked point tokens with an encoder-decoder architecture. Specifically, we first utilize off-the-shelf 2D models to extract the multi-view visual features of the input point cloud, and then conduct two types of image-to-point learning schemes on top. For one, we introduce a 2D-guided masking strategy that keeps semantically important point tokens visible to the encoder. Compared to random masking, the network can better concentrate on significant 3D structures and recover the masked tokens from key spatial cues. For another, we enforce these visible tokens to reconstruct the corresponding multi-view 2D features after the decoder. This enables the network to effectively inherit high-level 2D semantics learned from rich image data for discriminative 3D modeling. Aided by our image-to-point pre-training, the frozen I2P-MAE, without any fine-tuning, achieves 93.4% accuracy for linear SVM on ModelNet40, competitive with the fully trained results of existing methods. By further fine-tuning on ScanObjectNN's hardest split, I2P-MAE attains the state-of-the-art 90.11% accuracy, +3.68% over the second-best, demonstrating superior transferable capacity. Code will be available at https://github.com/ZrrSkywalker/I2P-MAE.
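The 2D-guided masking strategy reduces to a few lines once per-token saliency is available. In the sketch below, `token_saliency` is a hypothetical score obtained by projecting 2D semantic saliency from the off-the-shelf image models onto point tokens; I2P-MAE's actual aggregation across views is more involved.

```python
import torch

def saliency_guided_visible(token_saliency, visible_ratio=0.2):
    """Keep the most semantically salient point tokens visible to the
    encoder, rather than a random subset. token_saliency: (T,).
    Returns the indices of visible tokens."""
    n_visible = int(token_saliency.shape[0] * visible_ratio)
    return token_saliency.topk(n_visible).indices

visible_idx = saliency_guided_visible(torch.rand(196))
```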
The recent success of pre-trained 2D vision models is mostly attributable to learning from large-scale datasets. However, compared with 2D image datasets, the current pre-training data for 3D point clouds is limited. To overcome this limitation, we propose a knowledge distillation method for 3D point cloud pre-trained models to acquire knowledge directly from a 2D representation learning model, particularly the image encoder of CLIP, through concept alignment. Specifically, we introduce a cross-attention mechanism to extract concept features from the 3D point cloud and compare them with the semantic information from 2D images. In this scheme, the point cloud pre-trained models learn directly from the rich information contained in 2D teacher models. Extensive experiments demonstrate that the proposed knowledge distillation scheme achieves higher accuracy than state-of-the-art 3D pre-training methods on synthetic and real-world datasets across downstream tasks, including object classification, object detection, semantic segmentation, and part segmentation.
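One plausible reading of the concept-alignment scheme is sketched below: learnable concept queries cross-attend over point features, and the pooled result is regressed toward a frozen CLIP image embedding. All layer sizes, the number of concepts, and the cosine objective are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptAlign(nn.Module):
    """Cross-attention from learnable concept queries to point features,
    aligned with a frozen CLIP image embedding (sketch)."""
    def __init__(self, dim=256, n_concepts=8, clip_dim=512):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_concepts, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.proj = nn.Linear(dim, clip_dim)

    def forward(self, point_feat, clip_img_feat):
        # point_feat: (B, N, dim); clip_img_feat: (B, clip_dim), frozen
        q = self.queries.unsqueeze(0).expand(point_feat.shape[0], -1, -1)
        concepts, _ = self.attn(q, point_feat, point_feat)   # (B, K, dim)
        pooled = self.proj(concepts.mean(dim=1))             # (B, clip_dim)
        return (1 - F.cosine_similarity(pooled,
                                        clip_img_feat.detach(), dim=-1)).mean()

loss = ConceptAlign()(torch.randn(2, 128, 256), torch.randn(2, 512))
```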
Masked auto-encoding is a popular and effective self-supervised learning approach for point clouds. However, most existing methods reconstruct only the masked points and overlook the local geometry information, which is also important for understanding point cloud data. In this work, we make the first attempt, to the best of our knowledge, to explicitly consider local geometry information in masked auto-encoding, and propose a novel Masked Surfel Prediction (MaskSurf) method. Specifically, given an input point cloud masked at a high ratio, we learn a transformer-based encoder-decoder network to estimate the underlying masked surfels by simultaneously predicting the surfel positions (i.e., points) and per-surfel orientations (i.e., normals). The predictions of points and normals are supervised by the Chamfer distance and a newly introduced position-indexed normal distance in a set-to-set manner. MaskSurf is validated on six downstream tasks under three fine-tuning strategies. In particular, MaskSurf outperforms its closest competitor, Point-MAE, on the real-world ScanObjectNN dataset under the OBJ-BG setting, demonstrating the advantage of masked surfel prediction over masked point prediction. Code will be available at https://github.com/ybzh/masksurf.
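The set-to-set supervision can be sketched as a Chamfer distance on positions plus a normal term evaluated at the Chamfer-matched pairs, which is one way to read the "position-indexed normal distance"; the weighting below is a simplifying assumption.

```python
import torch
import torch.nn.functional as F

def chamfer_with_normals(pred_pts, pred_nrm, gt_pts, gt_nrm):
    """Chamfer distance on surfel positions plus an orientation-free
    normal distance at the matched indices (sketch).
    pred_pts, gt_pts: (M, 3); pred_nrm, gt_nrm: (M, 3) unit normals."""
    d = torch.cdist(pred_pts, gt_pts)          # (M, M) pairwise distances
    fwd = d.min(dim=1)                         # pred -> nearest gt
    bwd = d.min(dim=0)                         # gt  -> nearest pred
    chamfer = fwd.values.mean() + bwd.values.mean()
    matched = gt_nrm[fwd.indices]              # position-indexed normals
    normal = (1 - (pred_nrm * matched).sum(-1).abs()).mean()
    return chamfer + normal

unit = lambda x: F.normalize(x, dim=-1)
loss = chamfer_with_normals(torch.randn(64, 3), unit(torch.randn(64, 3)),
                            torch.randn(64, 3), unit(torch.randn(64, 3)))
```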
Recent work on 4D point cloud sequences has attracted a lot of attention. However, obtaining exhaustively labeled 4D datasets is often very expensive and laborious, so it is especially important to investigate how to utilize raw unlabeled data. Yet most existing self-supervised point cloud representation learning methods only consider geometry from a static snapshot, ignoring the fact that sequential observations of dynamic scenes could reveal more comprehensive geometric details. And video representation learning frameworks mostly model motion as image-space flows, let alone being 3D-geometry-aware. To overcome such issues, this paper proposes a new 4D self-supervised pre-training method called Complete-to-Partial 4D Distillation. Our key idea is to formulate 4D self-supervised representation learning as a teacher-student knowledge distillation framework and let the student learn useful 4D representations with the guidance of the teacher. Experiments show that this approach significantly outperforms previous pre-training approaches on a wide range of 4D point cloud sequence understanding tasks, including indoor and outdoor scenarios.
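At its core, the teacher-student formulation reduces to regressing student features (from a partial view) onto stopped-gradient teacher features (from the complete sequence). A minimal sketch, with the cosine objective being an assumption rather than the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def c2p_distill_loss(student_feat, teacher_feat):
    """student_feat: (N, C) features from the partial input;
    teacher_feat: (N, C) features from the complete 4D sequence at
    corresponding points. The teacher provides fixed targets."""
    s = F.normalize(student_feat, dim=-1)
    t = F.normalize(teacher_feat, dim=-1).detach()
    return (1 - (s * t).sum(-1)).mean()

loss = c2p_distill_loss(torch.randn(1024, 64), torch.randn(1024, 64))
```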
We present a new approach to instill 4D dynamic object priors into learned 3D representations through unsupervised pre-training. We observe that the dynamic movement of an object through an environment provides important cues about its objectness, and thus propose to imbue learned 3D representations with such dynamic understanding, which can then be effectively transferred to improve performance on downstream 3D semantic scene understanding tasks. We propose a new data augmentation scheme that leverages synthetic 3D shapes moving within static 3D environments, and employ contrastive learning under 3D-4D constraints that encode 4D invariances into the learned 3D representations. Experiments show that our unsupervised representation learning improves downstream 3D semantic segmentation, object detection, and instance segmentation tasks, and moreover, significantly boosts performance in data-scarce scenarios.
Self-supervised pre-training for 3D vision has drawn increasing research interest in recent years. To learn informative representations, many previous works exploit invariances of 3D features, e.g., perspective-invariance between views of the same scene, modality-invariance between depth and RGB images, and format-invariance between point clouds and voxels. Although they have achieved promising results, previous studies lack a systematic comparison of these invariances. To address this issue, our work, for the first time, introduces a unified framework under which various pre-training methods can be investigated. We conduct extensive experiments and provide a closer look at the contributions of different invariances in 3D pre-training. In addition, we propose a simple but effective method that jointly pre-trains a 3D encoder and a depth map encoder using contrastive learning. Models pre-trained with our method gain significant performance boosts on downstream tasks. For example, a pre-trained VoteNet outperforms previous methods on the SUN RGB-D and ScanNet object detection benchmarks by a clear margin.
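The joint 3D-encoder/depth-encoder pre-training can be sketched as a symmetric InfoNCE between the two modalities, assuming each encoder emits one global embedding per scene; augmentation and sampling details are omitted.

```python
import torch
import torch.nn.functional as F

def cross_modal_nce(z_point, z_depth, temperature=0.1):
    """The point cloud and the depth map of the same scene form a
    positive pair; other scenes in the batch are negatives.
    z_point, z_depth: (B, C) global embeddings from the two encoders."""
    zp = F.normalize(z_point, dim=-1)
    zd = F.normalize(z_depth, dim=-1)
    logits = zp @ zd.t() / temperature
    targets = torch.arange(zp.shape[0], device=zp.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = cross_modal_nce(torch.randn(8, 128), torch.randn(8, 128))
```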
Recent advances in 3D perception have shown impressive progress in understanding the geometric structure of 3D shapes and even scenes. Inspired by these advances in geometric understanding, we aim to imbue image-based perception with representations learned under geometric constraints. We introduce an approach to learn view-invariant, geometry-aware representations for network pre-training, based on multi-view RGB-D data, which can then be effectively transferred to downstream 2D tasks. We propose to employ contrastive learning under both multi-view image constraints and image-geometry constraints to encode 3D priors into learned 2D representations. This not only improves over image-only representation learning on the image-based tasks of semantic segmentation, instance segmentation, and object detection, but moreover provides significant improvement in low-data regimes. We show a significant improvement of 6.0% on semantic segmentation with full data, as well as 11.9% over baselines with 20% of the data, on ScanNet.
We propose a novel approach to self-supervised learning of point cloud representations by differentiable neural rendering. Motivated by the fact that informative point cloud features should be able to encode rich geometry and appearance cues and render realistic images, we train a point-cloud encoder within a devised point-based neural renderer by comparing the rendered images with real images on massive RGB-D data. The learned point-cloud encoder can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but also low-level tasks like 3D reconstruction and image synthesis. Extensive experiments on various tasks demonstrate the superiority of our approach compared to existing pre-training methods.
Current outdoor LiDAR-based 3D object detection methods mainly adopt the training-from-scratch paradigm. Unfortunately, this paradigm heavily relies on large-scale labeled data, whose collection can be expensive and time-consuming. Self-supervised pre-training is an effective and desirable way to alleviate this dependence on extensive annotated data. Recently, masked modeling has become a successful self-supervised learning approach for point clouds. However, current works mainly focus on synthetic or indoor datasets. When applied to large-scale and sparse outdoor point clouds, they fail to yield satisfactory results. In this work, we present BEV-MAE, a simple masked autoencoder pre-training framework for 3D object detection on outdoor point clouds. Specifically, we first propose a bird's eye view (BEV) guided masking strategy that guides the 3D encoder to learn feature representations from a BEV perspective and avoids complex decoder design during pre-training. Besides, we introduce a learnable point token to keep the receptive field size of the 3D encoder consistent between pre-training on masked point cloud inputs and fine-tuning. Finally, based on the property of outdoor point clouds that the point clouds of distant objects are sparser, we propose point density prediction to enable the 3D encoder to learn location information, which is essential for object detection. Experimental results show that BEV-MAE achieves new state-of-the-art self-supervised results on both Waymo and nuScenes with diverse 3D object detectors. Furthermore, with only 20% data and 7% training cost during pre-training, BEV-MAE achieves comparable performance with the state-of-the-art method ProposalContrast. The source code and pre-trained models will be made publicly available.
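A BEV-guided mask can be realized by binning points into bird's-eye-view cells and dropping whole cells, as in the sketch below; the grid size, extent, and masking ratio are illustrative values, not BEV-MAE's settings.

```python
import torch

def bev_guided_mask(points, grid=0.5, mask_ratio=0.7, extent=50.0):
    """Mask entire BEV cells so the occluded regions align with the BEV
    feature map. points: (N, 3). Returns (N,) bool, True = masked."""
    ij = ((points[:, :2] + extent) / grid).long()    # BEV cell coordinates
    cell = ij[:, 0] * 100000 + ij[:, 1]              # hash (i, j) -> one id
    uniq, inv = torch.unique(cell, return_inverse=True)
    lut = torch.zeros(uniq.shape[0], dtype=torch.bool)
    lut[torch.randperm(uniq.shape[0])[:int(uniq.shape[0] * mask_ratio)]] = True
    return lut[inv]

mask = bev_guided_mask(torch.rand(2048, 3) * 100 - 50)
```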
We present Point-BERT, a new paradigm for learning Transformers that generalizes the concept of BERT to 3D point clouds. Inspired by BERT, we devise a Masked Point Modeling (MPM) task to pre-train point cloud Transformers. Specifically, we first divide a point cloud into several local point patches, and a point cloud tokenizer with a discrete Variational AutoEncoder (dVAE) is designed to generate discrete point tokens containing meaningful local information. Then, we randomly mask some patches of the input point cloud and feed them into the backbone Transformer. The pre-training objective is to recover the original point tokens at the masked locations under the supervision of the point tokens obtained by the tokenizer. Extensive experiments demonstrate that the proposed BERT-style pre-training strategy significantly improves the performance of standard point cloud Transformers. Equipped with our pre-training strategy, we show that a pure Transformer architecture attains 93.8% accuracy on ModelNet40 and 83.1% accuracy on the hardest setting of ScanObjectNN, surpassing carefully designed point cloud models with far fewer hand-crafted designs. We also demonstrate that the representations learned by Point-BERT transfer well to new tasks and domains, where our models largely advance the state of the art on few-shot point cloud classification. The code and pre-trained models are available at https://github.com/lulutang0608/Point-BERT
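The MPM objective itself is a token-level cross-entropy restricted to masked positions, as in this minimal sketch (vocabulary size and shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def mpm_loss(token_logits, gt_tokens, mask):
    """token_logits: (B, T, V) backbone predictions over the dVAE
    vocabulary; gt_tokens: (B, T) tokens assigned by the frozen point
    tokenizer; mask: (B, T) bool, True at masked patches."""
    return F.cross_entropy(token_logits[mask], gt_tokens[mask])

loss = mpm_loss(torch.randn(2, 64, 8192), torch.randint(0, 8192, (2, 64)),
                torch.rand(2, 64) > 0.4)
```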
Masked autoencoding has achieved great success for self-supervised learning in the image and language domains. However, mask-based pre-training has yet to show benefits for point cloud understanding, likely because standard backbones such as PointNet cannot properly handle the train-test distribution mismatch introduced by masking during training. In this paper, we bridge this gap by proposing a discriminative mask pre-training Transformer framework, MaskPoint, for point clouds. Our key idea is to represent the point cloud as discrete occupancy values (1 if part of the point cloud; 0 if not), and to perform simple binary classification between masked object points and sampled noise points as the proxy task. In this way, our approach is robust to the point sampling variance in point clouds and facilitates learning rich representations. We evaluate our pre-trained models on several downstream tasks, including 3D shape classification, segmentation, and real-world object detection, and demonstrate state-of-the-art results while achieving a significant pre-training speedup (e.g., 4.1x on ScanNet) over the prior state-of-the-art Transformer baseline. Code is available at https://github.com/haotian-liu/maskpoint.
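The discriminative proxy task can be sketched as binary occupancy classification over mixed real and noise queries. The `decoder(latent, queries) -> logits` interface below is hypothetical; the method's actual decoder is a Transformer.

```python
import torch
import torch.nn.functional as F

def occupancy_proxy_loss(decoder, latent, real_pts, bbox_min, bbox_max):
    """Label surface points 1 and uniformly sampled noise points 0, and
    classify them from the encoder latent. real_pts: (Q, 3)."""
    noise_pts = torch.rand_like(real_pts) * (bbox_max - bbox_min) + bbox_min
    queries = torch.cat([real_pts, noise_pts], dim=0)        # (2Q, 3)
    labels = torch.cat([torch.ones(len(real_pts)),
                        torch.zeros(len(noise_pts))])
    return F.binary_cross_entropy_with_logits(decoder(latent, queries), labels)

dummy_decoder = lambda latent, q: q.sum(-1)   # stand-in for shape checking
loss = occupancy_proxy_loss(dummy_decoder, None, torch.rand(128, 3),
                            torch.zeros(3), torch.ones(3))
```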
Masked image modeling (MIM) has recently received much attention in self-supervised learning (SSL); it requires the target model to recover the masked part of the input image. Although MIM-based pre-training methods achieve new state-of-the-art performance when transferred to many downstream tasks, visualizations show that the learned representations are less separable, especially compared with those from contrastive-learning pre-training. This motivates us to ask whether the linear separability of MIM pre-trained representations can be further improved, thereby improving pre-training performance. Since MIM and contrastive learning tend to use different data augmentations and training strategies, combining these two pretext tasks is not trivial. In this work, we propose a novel and flexible pre-training framework, named MimCo, which combines MIM and contrastive learning through two-stage pre-training. Specifically, MimCo takes a pre-trained contrastive-learning model as the teacher model and is pre-trained with two types of learning targets: patch-level and image-level reconstruction losses. Extensive transfer experiments on downstream tasks demonstrate the superior performance of our MimCo pre-training framework. Taking ViT-S as an example, when using a pre-trained MoCov3-ViT-S as the teacher model, MimCo needs only 100 epochs of pre-training to achieve 82.53% top-1 fine-tuning accuracy on ImageNet-1K, outperforming state-of-the-art self-supervised learning counterparts.
With the development of generative self-supervised learning (SSL) methods such as BEiT and MAE, it has been shown how to learn good representations by masking random patches of the input image and reconstructing the missing information. However, BEiT and PeCo require a "pre-pretraining" stage to produce a discrete codebook for representing masked patches. MAE does not need a pre-trained codebook, but using pixels as reconstruction targets may introduce an optimization gap between pre-training and downstream tasks: good reconstruction quality does not always imply high descriptive capability for the model. Considering the above issues, in this paper we propose a simple Self-distillated masked AutoEncoder network, namely SdAE. SdAE consists of a student branch that uses an encoder-decoder structure to reconstruct the missing information, and a teacher branch that produces latent representations of the masked tokens. We also analyze, from the perspective of the information bottleneck, how to build good views for the teacher branch to produce latent representations. Afterwards, we propose a multi-fold masking strategy to provide multiple masked views with balanced information, which boosts performance and also reduces computational complexity. Our approach generalizes well: with only 300 epochs of pre-training, a vanilla ViT-Base model achieves 84.1% fine-tuning accuracy on ImageNet-1K classification, 48.6 mIoU on ADE20K segmentation, and 48.9 mAP on COCO detection, surpassing other methods by a considerable margin. Code is available at https://github.com/abrahamyabo/sdae.
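The self-distillation target can be sketched as follows: the teacher branch encodes the full input, and its latent at masked positions supervises the student decoder instead of raw pixels. The MSE objective and shapes, and the omission of the EMA teacher update and multi-fold masking, are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def self_distill_loss(student_pred, teacher_latent, mask):
    """student_pred, teacher_latent: (B, T, C) token features;
    mask: (B, T) bool, True at masked tokens. The teacher is a fixed
    target (gradient stopped)."""
    return F.mse_loss(student_pred[mask], teacher_latent.detach()[mask])

loss = self_distill_loss(torch.randn(2, 196, 384), torch.randn(2, 196, 384),
                         torch.rand(2, 196) > 0.25)
```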
Transformer-based self-supervised representation learning methods learn generic features from unlabeled datasets to provide useful network initialization parameters for downstream tasks. So far, self-supervised learning based on masking local surface patches of 3D point cloud data remains under-explored. In this paper, we propose Masked Autoencoders for 3D point cloud representation learning (abbreviated as MAE3D), a novel autoencoding paradigm for self-supervised learning. We first split the input point cloud into patches and mask a portion of them, then use our Patch Embedding Module to extract the features of the unmasked patches. Second, we employ patch-wise MAE3D Transformers to learn both the local features of point cloud patches and the high-level contextual relationships between patches, and to complete the latent representations of masked patches. We complete the incomplete point cloud with our Point Cloud Reconstruction Module trained with a multi-task loss. We conduct self-supervised pre-training on ShapeNet55 with the point cloud completion pretext task and fine-tune the pre-trained model on ModelNet40 and ScanObjectNN (PB\_T50\_RS, the hardest variant). Comprehensive experiments demonstrate that the local features extracted by MAE3D from point cloud patches are beneficial for downstream classification tasks, outperforming state-of-the-art methods (93.4% and 86.2% classification accuracy, respectively).
Many 3D representations (e.g., point clouds) are discrete samples of an underlying continuous 3D surface. This process inevitably introduces sampling variations on the underlying 3D shape. In learning 3D representations, the variations should be disregarded, while transferable knowledge of the underlying 3D shape should be captured. This becomes a grand challenge in existing representation learning paradigms. This paper studies autoencoding on point clouds. The standard autoencoding paradigm forces the encoder to capture such sampling variations, since the decoder must reconstruct the original point cloud, which carries the sampling variations. We introduce the Implicit AutoEncoder (IAE), a simple yet effective method that addresses this challenge by replacing the point cloud decoder with an implicit decoder. The implicit decoder outputs a continuous representation that is shared among different point cloud samplings of the same model. Reconstructing under the implicit representation encourages the encoder to discard sampling variations, introducing more space to learn useful features. We theoretically justify this claim under a simple linear autoencoder. Moreover, the implicit decoder offers a rich space to design suitable implicit representations for different tasks. We demonstrate the usefulness of IAE across various self-supervised learning tasks on both 3D objects and 3D scenes. Experimental results show that IAE consistently outperforms the state of the art on each task.
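The key architectural change is the decoder: it maps a global latent and a query coordinate to a continuous occupancy/SDF value, so any two samplings of the same surface share one target. A minimal sketch with illustrative layer sizes:

```python
import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    """Map (latent, query xyz) -> scalar field value (sketch only)."""
    def __init__(self, latent_dim=256, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))              # occupancy / signed distance

    def forward(self, latent, queries):
        # latent: (B, latent_dim); queries: (B, Q, 3)
        z = latent.unsqueeze(1).expand(-1, queries.shape[1], -1)
        return self.net(torch.cat([z, queries], dim=-1)).squeeze(-1)

field = ImplicitDecoder()(torch.randn(2, 256), torch.randn(2, 512, 3))
```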
Transformers and masked language modeling are quickly being adopted in computer vision as vision transformers and masked image modeling (MIM). In this work, we argue that image token masking differs from token masking in text, due to the amount and correlation of tokens in an image. In particular, to generate a challenging pretext task for MIM, we advocate a shift from random masking to informed masking. We develop and exhibit this idea in the context of distillation-based MIM, where a teacher transformer encoder generates an attention map, which we use to guide masking for the student. We thus introduce a novel masking strategy, called attention-guided masking (AttMask), and we demonstrate its effectiveness over random masking for dense distillation-based MIM as well as plain distillation-based self-supervised learning on classification tokens. We confirm that AttMask accelerates the learning process and improves performance on a variety of downstream tasks. We provide the implementation code at https://github.com/gkakogeorgiou/attmask.
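Attention-guided masking itself is compact: rank patch tokens by the teacher's CLS attention and hide the most attended ones from the student. The sketch below assumes one averaged attention vector per image; the paper's further variants are not reproduced.

```python
import torch

def attention_guided_mask(teacher_attn, mask_ratio=0.4):
    """teacher_attn: (B, T) CLS-to-patch attention from the teacher.
    Returns (B, T) bool mask, True = hidden from the student."""
    n_mask = int(teacher_attn.shape[1] * mask_ratio)
    top = teacher_attn.topk(n_mask, dim=1).indices   # most attended tokens
    mask = torch.zeros_like(teacher_attn, dtype=torch.bool)
    mask.scatter_(1, top, True)
    return mask

mask = attention_guided_mask(torch.rand(2, 196))
```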
Learning representations for point clouds is an important task in 3D computer vision, especially without manually annotated supervision. Previous methods usually rely on auto-encoders to establish self-supervision by reconstructing the input itself. However, existing self-reconstruction-based auto-encoders focus only on the global shape and ignore the hierarchical context between local and global geometry, which is an important source of supervision for 3D representation learning. To resolve this issue, we present a novel self-supervised point cloud representation learning framework, named 3D Occlusion Auto-Encoder (3D-OAE). Our key idea is to randomly occlude some local patches of the input point cloud and establish supervision by recovering the occluded patches from the remaining visible ones. Specifically, we design an encoder to learn the features of visible local patches, and a decoder that leverages these features to predict the occluded patches. In contrast with previous methods, our 3D-OAE can remove a large proportion of patches and predict them from only a small number of visible patches, which enables us to significantly accelerate training and yield nontrivial self-supervisory performance. The trained encoder can be further transferred to various downstream tasks. We demonstrate superior performance over state-of-the-art methods in different discriminative and generative applications under widely used benchmarks.