We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds. The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled, and to use the underlying latent vectors as input to the perception head. The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information that can be used to boost an actual perception task. This principle has a very simple formulation, which makes it both easy to implement and widely applicable to a large range of 3D sensors and deep networks performing semantic segmentation or object detection. In fact, it supports a single-stream pipeline, as opposed to most contrastive learning approaches, allowing training on limited resources. We conducted extensive experiments on various autonomous driving datasets, involving very different kinds of lidars, for both semantic segmentation and object detection. The results show the effectiveness of our method at learning useful representations without any annotation, compared to existing approaches. Code is available at \href{https://github.com/valeoai/ALSO}{github.com/valeoai/ALSO}.
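A minimal sketch of the kind of self-supervision such a pretext task relies on: along each lidar ray, query points sampled in front of the measured return can be labeled empty and a point just behind it labeled occupied, giving occupancy targets for surface reconstruction without any annotation. The function below is illustrative (numpy only); the offsets, sampling counts, and the name `ray_occupancy_queries` are assumptions, not the paper's exact procedure.

```python
import numpy as np

def ray_occupancy_queries(points, sensor_origin, delta=0.10, n_per_ray=2):
    """Generate self-supervised occupancy queries from a lidar scan.

    points: (N, 3) measured returns; sensor_origin: (3,) lidar position.
    Returns query positions (M, 3) and labels (M,): 0 = empty, 1 = occupied.
    Assumption: any query in front of the return (along the ray) is empty,
    and a query slightly behind the return is inside the surface.
    """
    rays = points - sensor_origin                        # (N, 3)
    depth = np.linalg.norm(rays, axis=1, keepdims=True)  # (N, 1)
    dirs = rays / np.clip(depth, 1e-6, None)             # unit ray directions

    queries, labels = [], []
    # empty-space queries: random positions strictly before the return
    t_empty = np.random.uniform(0.1, 0.9, size=(len(points), n_per_ray))
    for k in range(n_per_ray):
        queries.append(sensor_origin + dirs * (depth * t_empty[:, k:k+1]))
        labels.append(np.zeros(len(points)))
    # occupied queries: a small offset behind the measured surface point
    queries.append(points + dirs * delta)
    labels.append(np.ones(len(points)))
    return np.concatenate(queries, axis=0), np.concatenate(labels, axis=0)
```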
There has recently been a growing interest in implicit shape representations. Contrary to explicit representations, they have no resolution limitations and easily handle a wide variety of surface topologies. To learn these implicit representations, current approaches rely on some level of shape supervision (e.g., inside/outside information or distance-to-shape knowledge), or at least require dense point clouds (to approximate the distance to the shape well enough). In contrast, we introduce {\method}, a self-supervised approach for learning shape representations from possibly extremely sparse point clouds. As in Buffon's needle problem, we "drop" (sample) needles on the point cloud and assume that, statistically, close to the surface, the needle end points lie on opposite sides of the surface. No shape knowledge is required and the point cloud can be highly sparse, e.g., a lidar point cloud acquired by a vehicle. Previous self-supervised shape representation approaches fail to produce good results on this kind of data. We obtain quantitative results on par with existing supervised approaches on standard shape reconstruction datasets and show promising qualitative results on hard autonomous driving datasets such as KITTI.
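The needle-dropping idea can be sketched as follows: sample short segments centered near observed points and train an occupancy network so that the two endpoints of each needle receive opposite labels. Both functions below are illustrative; the needle length, sampling scheme, and the symmetric cross-entropy form of the loss are assumptions rather than the paper's exact formulation.

```python
import numpy as np

def drop_needles(points, needle_len=0.1, n_needles=4096):
    """Sample needle endpoint pairs around a sparse point cloud.

    Each needle is a short, randomly oriented segment centered on a jittered
    observed point; statistically, near the surface its two endpoints should
    fall on opposite sides of that surface.
    """
    idx = np.random.randint(0, len(points), size=n_needles)
    centers = points[idx] + np.random.normal(0, needle_len / 4, size=(n_needles, 3))
    dirs = np.random.normal(size=(n_needles, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    a = centers + 0.5 * needle_len * dirs
    b = centers - 0.5 * needle_len * dirs
    return a, b

def needle_loss(p_a, p_b, eps=1e-6):
    """Encourage opposite occupancy at the two needle endpoints.

    p_a, p_b: predicted occupancy probabilities in (0, 1) for both endpoints.
    The term is minimal when exactly one endpoint is predicted inside.
    """
    return -np.mean(np.log(p_a * (1 - p_b) + (1 - p_a) * p_b + eps))
```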
We show how the inherent, but often neglected, properties of large-scale LiDAR point clouds can be exploited for effective self-supervised representation learning. To this end, we design a highly data-efficient feature pre-training backbone that significantly reduces the amount of tedious 3D annotations needed to train state-of-the-art object detectors. In particular, we propose a Masked AutoEncoder (MAELi) that intuitively utilizes the sparsity of LiDAR point clouds in both the encoder and the decoder during reconstruction. This results in more expressive and useful features, directly applicable to downstream perception tasks such as 3D object detection for autonomous driving. In a novel reconstruction scheme, MAELi distinguishes between free and occluded space and leverages a new masking strategy that targets the LiDAR's inherent spherical projection. To demonstrate the potential of MAELi, we pre-train one of the most widespread 3D backbones in an end-to-end fashion and show the merit of our fully unsupervised pre-trained features on several 3D object detection architectures. Given only a tiny fraction of labeled frames to fine-tune such detectors, we achieve significant performance improvements. For example, with only $\sim800$ labeled frames, MAELi features improve a SECOND model by +10.09 APH/LEVEL 2 on Waymo Vehicles.
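A masking strategy targeting the lidar's spherical projection can be pictured as dropping whole patches of the range image: bin each point by azimuth and elevation, group the cells into rectangular blocks, and discard a random fraction of blocks so that the encoder only sees the remaining returns. The resolutions, block size, and mask ratio below are illustrative assumptions, not MAELi's actual settings.

```python
import numpy as np

def spherical_block_mask(points, az_bins=1024, el_bins=64, block=(8, 64), mask_ratio=0.7):
    """Mask lidar points by dropping blocks of the spherical (range-image) grid.

    points: (N, 3). Each point is assigned an (elevation, azimuth) cell; cells
    are grouped into blocks of size `block`, and a random fraction of blocks is
    removed. Returns a boolean keep-mask over the input points.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    az = np.arctan2(y, x)                                     # azimuth in [-pi, pi]
    el = np.arctan2(z, np.sqrt(x**2 + y**2))                  # elevation angle
    ai = ((az + np.pi) / (2 * np.pi) * az_bins).astype(int).clip(0, az_bins - 1)
    ei = ((el - el.min()) / max(np.ptp(el), 1e-6) * el_bins).astype(int).clip(0, el_bins - 1)

    # group range-image cells into rectangular blocks and mask whole blocks
    bi_el, bi_az = ei // block[0], ai // block[1]
    n_blocks = (el_bins // block[0] + 1, az_bins // block[1] + 1)
    block_id = bi_el * n_blocks[1] + bi_az
    masked_blocks = np.random.rand(n_blocks[0] * n_blocks[1]) < mask_ratio
    return ~masked_blocks[block_id]
```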
Unsupervised contrastive learning for indoor-scene point clouds has achieved great success. However, unsupervised learning on outdoor-scene point clouds remains challenging, because previous methods need to reconstruct the whole scene and capture partial views as contrastive targets, which is infeasible in outdoor scenes with moving objects, occlusions, and a moving sensor. In this paper, we propose CO^3, namely Cooperative Contrastive Learning and Contextual Shape Prediction, to learn 3D representations of outdoor-scene point clouds in an unsupervised manner. CO^3 has several merits compared to existing methods. (1) It leverages lidar point clouds from both the vehicle side and the infrastructure side to build views that differ enough while preserving common semantic information for contrastive learning, which are more suitable than the views constructed by previous methods. (2) Alongside the contrastive objective, contextual shape prediction is proposed as a pre-training goal and brings more task-relevant information for unsupervised 3D point cloud representation learning, which is beneficial when transferring the learned representations to downstream detection tasks. (3) Compared with previous methods, the representations learned by CO^3 can be transferred to different outdoor-scene datasets collected with different types of lidar sensors. (4) CO^3 improves the current state-of-the-art methods on the ONCE and KITTI datasets by up to 2.58 mAP. Code and models will be released. We believe CO^3 will facilitate the understanding of lidar point clouds in outdoor scenes.
Arguably one of the top success stories of deep learning is transfer learning. The finding that pre-training a network on a rich source set (e.g., ImageNet) can help boost performance once fine-tuned on a usually much smaller target set has been instrumental to many applications in language and vision. Yet, very little is known about its usefulness in 3D point cloud understanding. We see this as an opportunity, considering the effort required for annotating data in 3D. In this work, we aim at facilitating research on 3D representation learning. Different from previous works, we focus on high-level scene understanding tasks. To this end, we select a suite of diverse datasets and tasks to measure the effect of unsupervised pre-training on a large source set of 3D scenes. Our findings are extremely encouraging: using a unified triplet of architecture, source dataset, and contrastive loss for pre-training, we achieve improvement over recent best results in segmentation and detection across 6 different benchmarks for indoor and outdoor, real and synthetic datasets, demonstrating that the learned representation can generalize across domains. Furthermore, the improvement was similar to supervised pre-training, suggesting that future efforts should favor scaling data collection over more detailed annotation. We hope these findings will encourage more research on unsupervised pretext task design for 3D deep learning. Our code is publicly available at https://github.com/facebookresearch/PointContrast.
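Contrastive pre-training of this kind typically boils down to a point-level InfoNCE loss over features of matched points from two registered views of the same scene. The sketch below is a generic version of such a loss, assuming correspondences are already established; it is not necessarily PointContrast's exact implementation.

```python
import numpy as np

def point_info_nce(feats_a, feats_b, temperature=0.07):
    """Point-level InfoNCE between two registered views.

    feats_a, feats_b: (N, D) L2-normalized features of N corresponding points
    (row i in both views is the same physical point). Each matched pair is a
    positive; all other points in the other view serve as negatives.
    """
    logits = feats_a @ feats_b.T / temperature           # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                   # positives lie on the diagonal
```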
Existing approaches to unsupervised point cloud pre-training are restricted to either scene-level or point/voxel-level instance discrimination. Scene-level methods tend to lose the local details that are crucial for recognizing road objects, while point/voxel-level methods inherently suffer from a limited receptive field that is incapable of perceiving large objects or contextual environments. Considering that region-level representations are better suited to 3D object detection, we devise a new unsupervised point cloud pre-training framework, called ProposalContrast, which learns robust 3D representations by contrasting region proposals. Specifically, with an exhaustive set of region proposals sampled from each point cloud, the geometric relations between points within each proposal are modeled to create expressive proposal representations. To better accommodate the properties of 3D detection, ProposalContrast is further optimized with inter-cluster and inter-proposal separation, i.e., by sharpening the discriminativeness of proposal representations across semantic classes and object instances. The generalizability and transferability of ProposalContrast are verified on various 3D detectors (i.e., PV-RCNN, CenterPoint, PointPillars, and PointRCNN) and datasets (i.e., KITTI, Waymo, and ONCE).
Current outdoor LiDAR-based 3D object detection methods mainly adopt the training-from-scratch paradigm. Unfortunately, this paradigm heavily relies on large-scale labeled data, whose collection can be expensive and time-consuming. Self-supervised pre-training is an effective and desirable way to alleviate this dependence on extensive annotated data. Recently, masked modeling has become a successful self-supervised learning approach for point clouds. However, current works mainly focus on synthetic or indoor datasets. When applied to large-scale and sparse outdoor point clouds, they fail to yield satisfactory results. In this work, we present BEV-MAE, a simple masked autoencoder pre-training framework for 3D object detection on outdoor point clouds. Specifically, we first propose a bird's eye view (BEV) guided masking strategy to guide the 3D encoder to learn feature representations from a BEV perspective and to avoid complex decoder design during pre-training. Besides, we introduce a learnable point token to keep the receptive field size of the 3D encoder consistent with fine-tuning when the point cloud input is masked. Finally, based on a property of outdoor point clouds, i.e., that the point clouds of distant objects are sparser, we propose point density prediction to enable the 3D encoder to learn location information, which is essential for object detection. Experimental results show that BEV-MAE achieves new state-of-the-art self-supervised results on both Waymo and nuScenes with diverse 3D object detectors. Furthermore, with only 20% of the data and 7% of the training cost during pre-training, BEV-MAE achieves comparable performance with the state-of-the-art method ProposalContrast. The source code and pre-trained models will be made publicly available.
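The BEV-guided masking and the point density target can be sketched roughly as follows: project the scan onto a BEV grid, drop a fraction of the non-empty cells from the encoder input, and keep per-cell point counts as a regression target for the masked cells. Grid extent, cell size, and mask ratio below are illustrative assumptions.

```python
import numpy as np

def bev_mask_and_density(points, cell=0.5, extent=50.0, mask_ratio=0.7):
    """Build a BEV cell mask and a per-cell point density target.

    points: (N, 3) lidar points; cell: BEV cell size in meters;
    extent: half-size of the square BEV area around the sensor.
    Returns indices of the kept (visible) points, the masked cell ids, and the
    per-cell point count a decoder could be asked to predict.
    """
    n_cells = int(2 * extent / cell)
    ij = np.floor((points[:, :2] + extent) / cell).astype(int)
    valid = (ij >= 0).all(axis=1) & (ij < n_cells).all(axis=1)
    ij, points_idx = ij[valid], np.nonzero(valid)[0]
    cell_id = ij[:, 0] * n_cells + ij[:, 1]

    occupied, counts = np.unique(cell_id, return_counts=True)
    masked = np.random.rand(len(occupied)) < mask_ratio
    masked_cells = occupied[masked]

    keep = ~np.isin(cell_id, masked_cells)                # points fed to the encoder
    density_target = dict(zip(occupied.tolist(), counts.tolist()))
    return points_idx[keep], masked_cells, density_target
```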
Transfer learning is a proven technique in 2D computer vision for exploiting the large amounts of data available and reaching high performance on datasets that are limited in size due to the cost of acquisition or annotation. In 3D, annotation is known to be an expensive task, yet transfer learning methods have only recently been investigated. Unsupervised pre-training has been heavily favored, since no very large annotated dataset exists. In this work, we address the case of real-time 3D semantic segmentation of sparse outdoor lidar scans. Such datasets have been growing in number, but they use different label sets even for the same task. We propose an intermediate-level label set called coarse labels, which allows all the available data to be leveraged without any manual labeling. This gives us access to larger datasets, along with a simpler semantic segmentation task. With it, we introduce a new pre-training task: coarse label pre-training, also called COLA. We thoroughly analyze the impact of COLA on various datasets and architectures and show that it improves performance, especially when the fine-tuning task only has access to a small dataset.
State-of-the-art lidar-based 3D object detection methods rely on supervised learning and large labeled datasets. However, annotating lidar data is resource-intensive, and depending only on supervised learning limits the applicability of trained models. Self-supervised training strategies can alleviate these issues by learning a general point cloud backbone model for downstream 3D vision tasks. Against this backdrop, we show the relationship between self-supervised multi-frame flow representations and single-frame 3D detection hypotheses. Our main contribution leverages flow and motion representations and combines a self-supervised backbone with a supervised 3D detection head. First, a self-supervised scene flow estimation model is trained with cycle consistency. The point cloud encoder of this model is then used as the backbone of a single-frame 3D object detection model. This second model learns to exploit motion representations to distinguish dynamic objects exhibiting different movement patterns. Experiments on the KITTI and nuScenes benchmarks show that the proposed self-supervised pre-training increases 3D detection performance significantly. https://github.com/emecercelik/ssl-3d-detection.git
Many 3D representations (e.g., point clouds) are discrete samples of an underlying continuous 3D surface. This process inevitably introduces sampling variations on the underlying 3D shape. In learning a 3D representation, these variations should be ignored, while the transferable knowledge of the underlying 3D shape should be captured. This is a major challenge for existing representation learning paradigms. This paper studies autoencoding on point clouds. The standard autoencoding paradigm forces the encoder to capture such sampling variations, because the decoder must reconstruct the original point cloud together with its sampling variations. We introduce the Implicit AutoEncoder (IAE), a simple yet effective method that addresses this challenge by replacing the point cloud decoder with an implicit decoder. The implicit decoder outputs a continuous representation that is shared among different point cloud samplings of the same model. Reconstructing under the implicit representation encourages the encoder to discard sampling variations, leaving more capacity to learn useful features. We theoretically justify this claim under a simple linear autoencoder. Moreover, the implicit decoder offers a rich design space for suitable implicit representations for different tasks. We demonstrate the usefulness of IAE on various self-supervised learning tasks for both 3D objects and 3D scenes. Experimental results show that IAE consistently outperforms the state of the art in each task.
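The contrast with a standard point cloud autoencoder can be made concrete: instead of a decoder that emits a fixed set of xyz points, the decoder is a small MLP conditioned on the latent code that answers occupancy (or distance) at arbitrary query locations, so different samplings of the same surface share a single reconstruction target. A minimal PyTorch-style sketch with illustrative layer sizes, not the paper's architecture:

```python
import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    """Maps a global latent code plus a 3D query to an occupancy logit."""

    def __init__(self, latent_dim=256, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, queries):
        # z: (B, latent_dim), queries: (B, Q, 3) -> (B, Q) occupancy logits
        z_rep = z.unsqueeze(1).expand(-1, queries.shape[1], -1)
        return self.mlp(torch.cat([z_rep, queries], dim=-1)).squeeze(-1)

# Training-step sketch: the encoder sees one sampling of the shape, while the
# occupancy targets at query locations come from the underlying surface and are
# shared across samplings, e.g. binary cross-entropy between
# decoder(encoder(points), query_xyz) and the query occupancy labels.
```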
Although ever-larger datasets are being collected for training 3D object detection models, annotating 3D boxes on lidar scans still requires substantial human effort. To automate the annotation and facilitate the production of customized datasets, we propose an end-to-end multimodal transformer (MTrans) autolabeler, which leverages both lidar scans and images to generate precise 3D box annotations from weak 2D bounding boxes. To alleviate the pervasive sparsity problem that hinders existing autolabelers, MTrans densifies the sparse point cloud by generating new 3D points based on 2D image information. With a multi-task design, MTrans segments foreground/background, densifies lidar point clouds, and regresses 3D boxes simultaneously. Experimental results verify the effectiveness of MTrans in improving the quality of the generated labels. By enriching the sparse point clouds, our method achieves 4.48% and 4.03% better 3D AP on the KITTI moderate and hard samples, respectively, compared with the state-of-the-art autolabeler. MTrans can also be extended to improve the accuracy of 3D object detection, yielding a remarkable 89.45% AP on KITTI hard samples. Code is available at \url{https://github.com/cliu2/mtrans}.
Implicit neural networks have been successfully used for surface reconstruction from point clouds. However, many of them face scalability issues, as they encode the isosurface function of a whole object or scene into a single latent vector. To overcome this limitation, a few approaches infer latent vectors on a coarse regular 3D grid or on 3D patches and interpolate them to answer occupancy queries. In doing so, they lose the direct connection with the input points sampled on the surface of objects, and they attach information uniformly in space rather than where it matters most, i.e., near the surface. Besides, relying on fixed patch sizes may require discretization tuning. To address these issues, we propose to use point cloud convolutions and compute latent vectors at each input point. We then perform a learning-based interpolation on nearest neighbors using inferred weights. Experiments on object and scene datasets show that our approach significantly outperforms other methods on most classical metrics, producing finer details and better reconstructing thinner volumes. Code is available at https://github.com/valeoai/poco.
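The decoding step described above, attaching a latent vector to every input point and answering an occupancy query by a learned, weighted interpolation over its nearest neighbors, could look roughly like this. The callables `weight_fn` and `occ_fn` stand in for small learned networks and are assumptions; the actual POCO architecture differs in its details.

```python
import numpy as np

def interpolate_occupancy(query, points, latents, weight_fn, occ_fn, k=8):
    """Answer one occupancy query from per-point latent vectors.

    query: (3,) query position; points: (N, 3) input points;
    latents: (N, D) latent vector inferred at each input point;
    weight_fn: maps (relative offset, latent) -> scalar interpolation score;
    occ_fn: maps (relative offset, latent) -> occupancy logit.
    """
    d = np.linalg.norm(points - query, axis=1)
    nn = np.argsort(d)[:k]                                # k nearest input points
    offsets = query - points[nn]                           # (k, 3)
    scores = np.array([weight_fn(o, z) for o, z in zip(offsets, latents[nn])])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                                # softmax over neighbors
    logits = np.array([occ_fn(o, z) for o, z in zip(offsets, latents[nn])])
    return float((weights * logits).sum())                  # interpolated occupancy logit
```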
Paris-CARLA-3D is a dataset of several dense colored point clouds of outdoor environments built by a mobile lidar and camera system. The data consist of two sets: synthetic data from the open-source CARLA simulator (700 million points) and real data acquired in the city of Paris (60 million points), hence the name Paris-CARLA-3D. One advantage of this dataset is that the same lidar and camera platform used to produce the real data was simulated in the open-source CARLA simulator. In addition, manual annotation with CARLA's semantic tags was performed on the real data, which allows testing transfer methods from synthetic to real data. The goal of this dataset is to provide a challenging benchmark to evaluate and improve methods on difficult vision tasks for the 3D mapping of outdoor environments: semantic segmentation, instance segmentation, and scene completion. For each task, we describe the evaluation protocol as well as the experiments carried out to establish a baseline.
Learning powerful representations in bird's-eye view (BEV) for perception tasks is trending and drawing extensive attention from both industry and academia. Conventional approaches for most autonomous driving algorithms perform detection, segmentation, tracking, etc., in a frontal or perspective view. As sensor configurations become increasingly complex, integrating multi-source information from different sensors and representing features in a unified view become vital. BEV perception inherits several advantages: representing the surrounding scene in BEV is intuitive and fusion-friendly, and representing objects in BEV is most desirable for subsequent modules such as planning and/or control. The core problems of BEV perception lie in (a) how to reconstruct the lost 3D information via the view transformation from perspective view to BEV; (b) how to acquire ground-truth annotations in the BEV grid; (c) how to formulate the pipeline to fuse features from different sources and views; and (d) how to adapt and generalize algorithms as sensor configurations vary across different scenarios. In this survey, we review the most recent work on BEV perception and provide an in-depth analysis of the different solutions. Moreover, several system-level designs of BEV approaches from industry are described. Furthermore, we present a full set of practical guidelines to improve the performance of BEV perception tasks, covering camera, lidar, and fusion inputs. Finally, we point out future research directions in this field. We hope this report will shed light on the community and encourage more research on BEV perception. We maintain an active repository to collect the latest work and provide a toolbox of bag-of-tricks at https://github.com/openperceptionx/bevperception-survey-recipe.
Camera and lidar are important sensor modalities for robotics in general and self-driving cars in particular. The sensors provide complementary information, offering an opportunity for tight sensor fusion. Surprisingly, lidar-only methods outperform fusion methods on the main benchmark datasets, suggesting a gap in the literature. In this work, we propose PointPainting: a sequential fusion method to fill this gap. PointPainting works by projecting lidar points into the output of an image-only semantic segmentation network and appending the class scores to each point. The appended (painted) point cloud can then be fed to any lidar-only method. Experiments show large improvements on three different state-of-the-art methods, PointRCNN, VoxelNet and PointPillars, on the KITTI and nuScenes datasets. The painted version of PointRCNN represents a new state of the art on the KITTI leaderboard for the bird's-eye view detection task. In ablation, we study how the effect of painting depends on the quality and format of the semantic segmentation output, and demonstrate how latency can be minimized through pipelining.
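The sequential fusion step is simple to sketch: project each lidar point into the image with the camera projection matrix, look up the per-class segmentation scores at that pixel, and concatenate them to the point coordinates. The sketch below assumes a standard pinhole model with a 3x4 lidar-to-image matrix; it mirrors the idea rather than the reference implementation.

```python
import numpy as np

def paint_points(points, seg_scores, P):
    """Append image segmentation scores to lidar points (painting-style fusion).

    points: (N, 3) lidar points; seg_scores: (H, W, C) per-pixel class scores
    from an image segmentation network; P: (3, 4) lidar-to-image projection.
    Returns (M, 3 + C) painted points for those that project inside the image.
    """
    H, W, C = seg_scores.shape
    homo = np.hstack([points, np.ones((len(points), 1))])    # (N, 4) homogeneous
    uvw = homo @ P.T                                          # (N, 3) projected
    in_front = uvw[:, 2] > 1e-6                               # keep points ahead of the camera
    uv = uvw[in_front, :2] / uvw[in_front, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    in_img = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    pts = points[in_front][in_img]
    scores = seg_scores[v[in_img], u[in_img]]                 # (M, C) class scores
    return np.hstack([pts, scores])
```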
Masked autoencoding has achieved great success for self-supervised learning in the image and language domains. However, mask-based pre-training has yet to show benefits for point cloud understanding, likely because standard backbones such as PointNet cannot properly handle the train/test distribution mismatch introduced by masking during training. In this paper, we bridge this gap by proposing a discriminative masked-point transformer framework, MaskPoint. Our key idea is to represent the point cloud as discrete occupancy values (1 if part of the point cloud, 0 if not) and to perform simple binary classification between masked object points and sampled noise points as the proxy task. In this way, our approach is robust to the point sampling variance in the point cloud and facilitates learning rich representations. We evaluate the pre-trained models on several downstream tasks, including 3D shape classification, segmentation, and real-world object detection, and demonstrate state-of-the-art results while achieving a significant pre-training speedup (e.g., 4.1x on ScanNet) over the prior state-of-the-art transformer baseline. Code is available at https://github.com/haotian-liu/maskpoint.
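The discriminative proxy task amounts to building a query set of real points taken from the masked-out part of the cloud (label 1) and noise points sampled in the bounding volume (label 0), to be classified from the visible points' features. A sketch of that query construction, with an illustrative mask ratio and noise count:

```python
import numpy as np

def build_occupancy_queries(points, mask_ratio=0.9, n_noise=2048):
    """Split a point cloud into visible/masked parts and build binary queries.

    Real queries are masked points (label 1); fake queries are uniform noise
    points inside the cloud's bounding box (label 0). The encoder only sees
    the visible points; a decoder classifies the queries.
    """
    n = len(points)
    perm = np.random.permutation(n)
    n_masked = int(mask_ratio * n)
    masked, visible = points[perm[:n_masked]], points[perm[n_masked:]]

    lo, hi = points.min(axis=0), points.max(axis=0)
    noise = np.random.uniform(lo, hi, size=(n_noise, 3))

    queries = np.concatenate([masked, noise], axis=0)
    labels = np.concatenate([np.ones(n_masked), np.zeros(n_noise)])
    return visible, queries, labels
```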
In this work, we address the problem of unsupervised moving object segmentation (MOS) in 4D LiDAR data recorded from a stationary sensor, where no ground truth annotations are involved. Deep learning-based state-of-the-art methods for LiDAR MOS strongly depend on annotated ground truth data, which is expensive to obtain and scarce in existence. To close this gap in the stationary setting, we propose a novel 4D LiDAR representation based on multivariate time series that relaxes the problem of unsupervised MOS to a time series clustering problem. More specifically, we propose modeling the change in occupancy of a voxel by a multivariate occupancy time series (MOTS), which captures spatio-temporal occupancy changes on the voxel level and its surrounding neighborhood. To perform unsupervised MOS, we train a neural network in a self-supervised manner to encode MOTS into voxel-level feature representations, which can be partitioned by a clustering algorithm into moving or stationary. Experiments on stationary scenes from the Raw KITTI dataset show that our fully unsupervised approach achieves performance that is comparable to that of supervised state-of-the-art approaches.
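One way to read the multivariate occupancy time series: for every voxel, record its binary occupancy over T frames together with the occupancy of its spatial neighborhood, yielding a per-voxel multivariate series that a self-supervised encoder or clustering algorithm can consume. The sketch below uses an illustrative 6-neighborhood and grid size; the paper's exact neighborhood definition may differ.

```python
import numpy as np

def occupancy_time_series(frames, voxel_size=0.2, origin=None, grid=(200, 200, 40)):
    """Build a voxel-level multivariate occupancy time series (MOTS-like).

    frames: list of (N_t, 3) point clouds from a stationary sensor.
    Returns a dict voxel_index -> (T, 7) array: the voxel's own occupancy plus
    the occupancy of its 6 face-adjacent neighbors at every time step.
    """
    if origin is None:
        origin = np.min(np.concatenate(frames), axis=0)
    T = len(frames)
    occ = np.zeros((T,) + grid, dtype=bool)
    for t, pts in enumerate(frames):
        idx = np.floor((pts - origin) / voxel_size).astype(int)
        valid = ((idx >= 0) & (idx < np.array(grid))).all(axis=1)
        i, j, k = idx[valid].T
        occ[t, i, j, k] = True

    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    series = {}
    for i, j, k in zip(*np.nonzero(occ.any(axis=0))):     # voxels occupied at least once
        own = occ[:, i, j, k]
        nbrs = [occ[:, i + di, j + dj, k + dk]
                if 0 <= i + di < grid[0] and 0 <= j + dj < grid[1] and 0 <= k + dk < grid[2]
                else np.zeros(T, dtype=bool)
                for di, dj, dk in offsets]
        series[(i, j, k)] = np.stack([own] + nbrs, axis=1).astype(float)  # (T, 7)
    return series
```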
Mask-based pre-training has achieved great success for self-supervised learning in images, video, and language without manually annotated supervision. However, for such information-redundant data, it has not yet been studied in the field of 3D object detection. Since the point clouds in 3D object detection are large-scale, it is impractical to reconstruct the input point cloud. In this paper, we propose a masked voxel classification network for pre-training on large-scale point clouds. Our key idea is to divide the point cloud into a voxel representation and classify whether each voxel contains points. This simple strategy makes the network voxel-aware of object shape and thus improves the performance of 3D object detection. Extensive experiments show the effectiveness of our pre-trained model with 3D object detectors (SECOND, CenterPoint, and PV-RCNN) on three popular datasets (KITTI, Waymo, and nuScenes). Code is publicly available at https://github.com/chaytonmin/voxel-mae.
Scene completion is the task of completing the missing geometry from a partial scan of a scene. Most previous methods compute an implicit representation of the scene with a Truncated Signed Distance Function (T-SDF) on a 3D grid, used as input to a neural network. The truncation limits, but does not remove, the ambiguous cases introduced by the sign of non-watertight surfaces. As an alternative, we propose an unsigned distance function (UDF), called Unsigned Weighted Euclidean Distance (UWED), as the input representation for scene completion neural networks. UWED is simple and effective as a geometric representation and can be computed on any point cloud; contrary to the usual signed distance functions (SDF), it does not require normal computation. To obtain explicit geometry, we propose a method to extract a point cloud from UDF values discretized on a regular grid. We compare different SDFs and UDFs on the scene completion task, on indoor and outdoor point clouds collected with RGB-D and lidar sensors, and show improved completion using the proposed UWED function.
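A generic unsigned distance field of the kind fed to such a completion network can be computed on a regular grid directly from the point cloud, with no normals and no inside/outside decision. The sketch below uses the plain distance to the nearest point; the specific weighting that defines UWED is described in the paper and not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def unsigned_distance_grid(points, voxel_size=0.2, padding=1.0):
    """Compute an unsigned distance field on a regular grid around a scan.

    points: (N, 3) input point cloud. Returns the grid origin and a 3D array
    holding, for every grid node, the Euclidean distance to the nearest point.
    Unlike an SDF, no normals and no sign estimation are needed.
    """
    lo = points.min(axis=0) - padding
    hi = points.max(axis=0) + padding
    axes = [np.arange(lo[d], hi[d], voxel_size) for d in range(3)]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    nodes = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)

    dist, _ = cKDTree(points).query(nodes)                # nearest-point distance
    return lo, dist.reshape(gx.shape)
```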
Recent advances in 3D perception have shown impressive progress in understanding the geometric structure of 3D shapes and even scenes. Inspired by these advances in geometric understanding, we aim to imbue image-based perception with representations learned under geometric constraints. We introduce an approach to learn view-invariant, geometry-aware representations for network pre-training based on multi-view RGB-D data, which can then be effectively transferred to downstream 2D tasks. We propose to employ contrastive learning under both multi-view image constraints and image-geometry constraints to encode 3D priors into the learned 2D representations. This not only improves over image-only representation learning on the image-based tasks of semantic segmentation, instance segmentation, and object detection, but also provides significant gains in low-data regimes. We show a significant improvement of 6.0% on semantic segmentation with full data, as well as 11.9% over the baseline with 20% of the data, on ScanNet.