Advances in LiDAR sensors provide rich 3D data that supports 3D scene understanding. However, due to occlusion and signal miss, LiDAR point clouds are in practice 2.5D: they cover only part of the underlying shapes, which poses a fundamental challenge to 3D perception. To address this challenge, we present a novel LiDAR-based 3D object detection model, dubbed Behind the Curtain Detector (BtcDet), which learns object shape priors and estimates the complete object shapes that are partially occluded (curtained) in point clouds. BtcDet first identifies the regions affected by occlusion and signal miss. In these regions, our model predicts the probability of occupancy, indicating whether a region contains object shapes. Integrated with this probability map, BtcDet can generate high-quality 3D proposals. Finally, the occupancy probability is also integrated into a proposal refinement module to generate the final bounding boxes. Extensive experiments on the KITTI Dataset and the Waymo Open Dataset demonstrate the effectiveness of BtcDet. In particular, for the 3D detection of both cars and cyclists on the KITTI benchmark, BtcDet surpasses all published state-of-the-art methods by remarkable margins. The code is released at https://github.com/xharlie/btcdet.
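As a rough illustration of integrating an occupancy-probability map with BEV features before proposal generation, here is a minimal PyTorch sketch; the module name, channel sizes, and simple concatenation-based fusion are assumptions for illustration, not BtcDet's actual implementation.

```python
import torch
import torch.nn as nn

class OccupancyFusedRPNHead(nn.Module):
    """Toy RPN head that conditions BEV features on a shape-occupancy map.

    Hypothetical layer sizes; the real fusion in BtcDet is more involved.
    """
    def __init__(self, bev_channels=64, num_anchors=2):
        super().__init__()
        # the occupancy map enters as one extra channel on the BEV grid
        self.fuse = nn.Conv2d(bev_channels + 1, bev_channels, kernel_size=3, padding=1)
        self.cls_head = nn.Conv2d(bev_channels, num_anchors, kernel_size=1)
        self.reg_head = nn.Conv2d(bev_channels, num_anchors * 7, kernel_size=1)  # x, y, z, l, w, h, yaw

    def forward(self, bev_feat, occupancy_prob):
        # bev_feat: (B, C, H, W); occupancy_prob: (B, 1, H, W) in [0, 1]
        x = torch.relu(self.fuse(torch.cat([bev_feat, occupancy_prob], dim=1)))
        return self.cls_head(x), self.reg_head(x)

# usage with toy tensors
head = OccupancyFusedRPNHead()
cls_logits, box_deltas = head(torch.randn(1, 64, 200, 176), torch.rand(1, 1, 200, 176))
```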
3D object detection from LiDAR point cloud is a challenging problem in 3D scene understanding and has many practical applications. In this paper, we extend our preliminary work PointRCNN to a novel and strong point-cloud-based 3D object detection framework, the part-aware and aggregation neural network (Part-A2 net). The whole framework consists of the part-aware stage and the part-aggregation stage. Firstly, the part-aware stage for the first time fully utilizes free-of-charge part supervisions derived from 3D ground-truth boxes to simultaneously predict high quality 3D proposals and accurate intra-object part locations. The predicted intra-object part locations within the same proposal are grouped by our new-designed RoI-aware point cloud pooling module, which results in an effective representation to encode the geometry-specific features of each 3D proposal. Then the part-aggregation stage learns to re-score the box and refine the box location by exploring the spatial relationship of the pooled intra-object part locations. Extensive experiments are conducted to demonstrate the performance improvements from each component of our proposed framework. Our Part-A2 net outperforms all existing 3D detection methods and achieves new state-of-the-art on KITTI 3D object detection dataset by utilizing only the LiDAR point cloud data. Code is available at https://github.com/sshaoshuai/PointCloudDet3D.
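A compact NumPy sketch of RoI-aware point cloud pooling in the spirit described above: points of one proposal are mapped to the canonical box frame and averaged into a fixed voxel grid, with empty voxels kept as zeros. The grid resolution, mean pooling, and box-center parameterization are assumptions, not the paper's exact module.

```python
import numpy as np

def roi_aware_pool(points, feats, box, grid=(12, 12, 12)):
    """Pool per-point features of one 3D proposal into a fixed voxel grid.

    points: (N, 3) xyz in the LiDAR frame; feats: (N, C);
    box: (cx, cy, cz, l, w, h, yaw), assumed centered at (cx, cy, cz).
    Voxels that receive no points stay zero, which keeps the box geometry.
    """
    cx, cy, cz, l, w, h, yaw = box
    local = points - np.array([cx, cy, cz])
    c, s = np.cos(-yaw), np.sin(-yaw)
    local[:, :2] = local[:, :2] @ np.array([[c, -s], [s, c]]).T   # rotate into the box frame
    inside = (np.abs(local[:, 0]) < l / 2) & (np.abs(local[:, 1]) < w / 2) & (np.abs(local[:, 2]) < h / 2)
    local, feats = local[inside], feats[inside]
    idx = ((local / np.array([l, w, h]) + 0.5) * np.array(grid)).astype(int)
    idx = np.clip(idx, 0, np.array(grid) - 1)
    pooled = np.zeros((*grid, feats.shape[1]))
    counts = np.zeros(grid)
    for (i, j, k), f in zip(idx, feats):
        pooled[i, j, k] += f
        counts[i, j, k] += 1
    occupied = counts > 0
    pooled[occupied] /= counts[occupied][:, None]                 # mean pool per voxel
    return pooled   # (gx, gy, gz, C), ready for a small 3D CNN
```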
We present a new two-stage 3D object detection framework, named sparse-to-dense 3D Object Detector (STD). The first stage is a bottom-up proposal generation network that uses raw point cloud as input to generate accurate proposals by seeding each point with a new spherical anchor. It achieves a high recall with less computation compared with prior works. Then, PointsPool is applied for generating proposal features by transforming their interior point features from sparse expression to compact representation, which saves even more computation time. In box prediction, which is the second stage, we implement a parallel intersection-over-union (IoU) branch to increase awareness of localization accuracy, resulting in further improved performance. We conduct experiments on KITTI dataset, and evaluate our method in terms of 3D object and Bird's Eye View (BEV) detection. Our method outperforms other state-of-the-arts by a large margin, especially on the hard set, with inference speed more than 10 FPS.
In this paper, we propose PointRCNN for 3D object detection from raw point cloud. The whole framework is composed of two stages: stage-1 for the bottom-up 3D proposal generation and stage-2 for refining proposals in the canonical coordinates to obtain the final detection results. Instead of generating proposals from RGB image or projecting point cloud to bird's view or voxels as previous methods do, our stage-1 sub-network directly generates a small number of high-quality 3D proposals from point cloud in a bottom-up manner via segmenting the point cloud of the whole scene into foreground points and background. The stage-2 sub-network transforms the pooled points of each proposal to canonical coordinates to learn better local spatial features, which is combined with global semantic features of each point learned in stage-1 for accurate box refinement and confidence prediction. Extensive experiments on the 3D detection benchmark of KITTI dataset show that our proposed architecture outperforms state-of-the-art methods with remarkable margins by using only point cloud as input. The code is available at https://github.com/sshaoshuai/PointRCNN.
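The canonical transformation used in stage-2 can be sketched as follows: pooled points are shifted to the proposal center and rotated so the box heading aligns with the x-axis. This is a standard construction; PointRCNN's exact axis conventions may differ.

```python
import numpy as np

def to_canonical(points, box):
    """Transform points into a proposal's canonical frame.

    points: (N, 3); box: (cx, cy, cz, l, w, h, yaw) in the LiDAR frame.
    After the transform the box center sits at the origin and its heading
    points along +x, so stage-2 sees a pose-normalized input.
    """
    cx, cy, cz, _, _, _, yaw = box
    shifted = points - np.array([cx, cy, cz])
    c, s = np.cos(-yaw), np.sin(-yaw)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return shifted @ rot.T

# usage: the canonical points can then be combined with stage-1 semantic features
pts = np.random.rand(128, 3) * 5
canon = to_canonical(pts, box=(10.0, 2.0, -1.0, 3.9, 1.6, 1.5, 0.3))
```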
We present a novel and high-performance 3D object detection framework, named PointVoxel-RCNN (PV-RCNN), for accurate 3D object detection from point clouds. Our proposed method deeply integrates both 3D voxel Convolutional Neural Network (CNN) and PointNet-based set abstraction to learn more discriminative point cloud features. It takes advantage of efficient learning and high-quality proposals of the 3D voxel CNN and the flexible receptive fields of the PointNet-based networks. Specifically, the proposed framework summarizes the 3D scene with a 3D voxel CNN into a small set of keypoints via a novel voxel set abstraction module to save follow-up computations and also to encode representative scene features. Given the high-quality 3D proposals generated by the voxel CNN, the RoI-grid pooling is proposed to abstract proposal-specific features from the keypoints to the RoI-grid points via keypoint set abstraction with multiple receptive fields. Compared with conventional pooling operations, the RoI-grid feature points encode much richer context information for accurately estimating object confidences and locations. Extensive experiments on both the KITTI dataset and the Waymo Open dataset show that our proposed PV-RCNN surpasses state-of-the-art 3D detection methods with remarkable margins by using only point clouds. Code is available at https://github.com/open-mmlab/OpenPCDet.
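A small sketch of generating the RoI-grid points mentioned above: uniformly spaced grid points inside a proposal, expressed back in the LiDAR frame. The grid size is a placeholder, and the keypoint set abstraction that would aggregate features at these points is omitted.

```python
import numpy as np

def roi_grid_points(box, grid_size=6):
    """Generate uniformly spaced RoI-grid points inside one 3D proposal.

    box: (cx, cy, cz, l, w, h, yaw). Returns (grid_size**3, 3) points in the
    LiDAR frame; each grid point would then aggregate nearby keypoint
    features via set abstraction (not shown here).
    """
    cx, cy, cz, l, w, h, yaw = box
    # grid coordinates in the canonical box frame, cell centers in (-0.5, 0.5)
    lin = (np.arange(grid_size) + 0.5) / grid_size - 0.5
    gx, gy, gz = np.meshgrid(lin * l, lin * w, lin * h, indexing="ij")
    local = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    # rotate into the LiDAR frame and shift to the box center
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return local @ rot.T + np.array([cx, cy, cz])
```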
The inherent ambiguity in ground-truth annotations of 3D bounding boxes, caused by occlusion, signal miss, or manual annotation errors, can confuse deep 3D object detectors during training and thus degrade detection accuracy. However, existing methods largely overlook such issues and treat the labels as deterministic. In this paper, we propose GLENet, a generative label uncertainty estimation framework adapted from conditional variational auto-encoders, to model the one-to-many relationship between a typical 3D object and its potentially plausible ground-truth bounding boxes with latent variables. The label uncertainty produced by GLENet is a plug-and-play module that can be conveniently integrated into existing deep 3D detectors to build probabilistic detectors and supervise the learning of localization uncertainty. In addition, we propose an uncertainty-aware quality estimator architecture for probabilistic detectors that guides the training of the IoU branch with the predicted localization uncertainty. We incorporate the proposed method into various popular 3D detectors and observe significant performance improvements, reaching the current state of the art on the Waymo Open Dataset and the KITTI dataset.
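A toy sketch of the generative idea: sample several latent codes, decode several plausible boxes for the same object, and read the per-dimension variance as label uncertainty. The tiny decoder, its dimensions, and the variance readout are illustrative assumptions rather than GLENet's actual architecture.

```python
import torch
import torch.nn as nn

class ToyBoxDecoder(nn.Module):
    """Decode a plausible box residual from object features and a latent code."""
    def __init__(self, feat_dim=128, latent_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 7),          # (dx, dy, dz, dl, dw, dh, dyaw)
        )

    def forward(self, obj_feat, z):
        return self.net(torch.cat([obj_feat, z], dim=-1))

def label_uncertainty(decoder, obj_feat, num_samples=20, latent_dim=8):
    """Sample several latent codes, decode several plausible boxes, and use
    the per-dimension variance across samples as the label uncertainty."""
    samples = []
    for _ in range(num_samples):
        z = torch.randn(obj_feat.shape[0], latent_dim)
        samples.append(decoder(obj_feat, z))
    boxes = torch.stack(samples, dim=0)          # (S, B, 7)
    return boxes.var(dim=0)                      # (B, 7) per-dimension variance

decoder = ToyBoxDecoder()
sigma = label_uncertainty(decoder, torch.randn(4, 128))
```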
In recent years, 3D object detection from LiDAR point clouds has made great progress thanks to the development of deep learning techniques. Although voxel-based and point-based methods are popular for 3D object detection, they usually involve time-consuming operations such as 3D convolutions over voxels or ball queries among points, which makes the resulting networks unsuitable for time-critical applications. On the other hand, 2D view-based methods are computationally efficient but usually achieve lower performance than voxel-based or point-based methods. In this work, we present CVFNet, a real-time view-based single-stage 3D object detector for this task. To strengthen cross-view feature learning under the demanding efficiency constraint, our framework extracts features from different views and fuses them in an efficient, progressive manner. We first propose a novel point-range feature fusion module that deeply integrates point and range-view features over multiple stages. Then, a special slice pillar design preserves the 3D geometry when the obtained deep point-view features are transformed into the bird's eye view. To better balance the sample ratio, a sparse pillar detection head is proposed to focus detection on non-empty grids. We conduct experiments on the popular KITTI and nuScenes benchmarks and achieve state-of-the-art performance in terms of both accuracy and speed.
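One way the point-range fusion described above could gather range-view features back onto points is sketched below; the tensor layout and nearest-pixel gathering are assumptions, not CVFNet's exact module.

```python
import torch

def fuse_point_and_range_features(point_feats, range_feats, proj_rows, proj_cols):
    """Concatenate each point's feature with the range-view feature at the
    pixel it projects to.

    point_feats: (N, Cp); range_feats: (Cr, H, W) from the range-view branch;
    proj_rows, proj_cols: (N,) integer pixel coordinates of the points in the
    range image (assumed precomputed from the LiDAR geometry).
    """
    gathered = range_feats[:, proj_rows, proj_cols].T   # (N, Cr)
    return torch.cat([point_feats, gathered], dim=1)    # (N, Cp + Cr)

# usage with toy tensors
fused = fuse_point_and_range_features(
    torch.randn(100, 16), torch.randn(32, 64, 2048),
    torch.randint(0, 64, (100,)), torch.randint(0, 2048, (100,)))
```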
In this work, we study 3D object detection from RGB-D data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects. Benefited from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability.
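A minimal sketch of the frustum step implied by the pipeline above: points are projected with the camera intrinsics and kept if they fall inside a 2D detection box. The coordinate conventions are simplified and the helper names are made up.

```python
import numpy as np

def extract_frustum(points_cam, K, box2d):
    """Keep points (already in the camera frame) whose image projection falls
    inside a 2D detection box.

    points_cam: (N, 3) with z pointing forward; K: (3, 3) camera intrinsics;
    box2d: (xmin, ymin, xmax, ymax) in pixels. The returned frustum point
    cloud would then be fed to the 3D box estimation network.
    """
    xmin, ymin, xmax, ymax = box2d
    pts = points_cam[points_cam[:, 2] > 0.1]        # keep points in front of the camera
    uvw = pts @ K.T                                 # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    in_box = (uv[:, 0] >= xmin) & (uv[:, 0] <= xmax) & \
             (uv[:, 1] >= ymin) & (uv[:, 1] <= ymax)
    return pts[in_box]
```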
We address the problem of real-time 3D object detection from point clouds in the context of autonomous driving. Computation speed is critical as detection is a necessary component for safety. Existing approaches are, however, expensive in computation due to high dimensionality of point clouds. We utilize the 3D data more efficiently by representing the scene from the Bird's Eye View (BEV), and propose PIXOR, a proposal-free, single-stage detector that outputs oriented 3D object estimates decoded from pixel-wise neural network predictions. The input representation, network architecture, and model optimization are especially designed to balance high accuracy and real-time efficiency. We validate PIXOR on two datasets: the KITTI BEV object detection benchmark, and a large-scale 3D vehicle detection benchmark. In both datasets we show that the proposed detector surpasses other state-of-the-art methods notably in terms of Average Precision (AP), while still running at >28 FPS.
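A short sketch of decoding oriented boxes from pixel-wise BEV predictions, in the spirit of the abstract: each over-threshold pixel yields one oriented box. The exact regression parameterization (cos/sin heading, log sizes) and the threshold are assumptions.

```python
import numpy as np

def decode_bev_predictions(scores, regs, bev_res=0.1, score_thresh=0.5):
    """Turn pixel-wise predictions into oriented BEV boxes.

    scores: (H, W) objectness after sigmoid;
    regs:   (H, W, 6) = (cos_t, sin_t, dx, dy, log_w, log_l) per pixel;
    bev_res: metres per BEV pixel. Returns a list of (x, y, w, l, theta).
    """
    boxes = []
    ys, xs = np.where(scores > score_thresh)
    for y, x in zip(ys, xs):
        cos_t, sin_t, dx, dy, log_w, log_l = regs[y, x]
        theta = np.arctan2(sin_t, cos_t)
        cx = (x + dx) * bev_res          # pixel position plus predicted offset
        cy = (y + dy) * bev_res
        boxes.append((cx, cy, np.exp(log_w), np.exp(log_l), theta))
    return boxes  # NMS would follow in a full pipeline
```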
Although point-based networks have proven accurate for 3D point cloud modeling, they still lag behind their voxel-based competitors in 3D detection. We observe that the prevailing set abstraction design used to downsample points may retain too much unimportant background information, which hampers feature learning for detecting objects. To address this issue, we propose a novel set abstraction method named Semantics-Augmented Set Abstraction (SASA). Technically, we first add a binary segmentation module as a side output to help identify foreground points. Based on the estimated point-wise foreground scores, we then propose a semantics-guided point sampling algorithm that helps retain more important foreground points during downsampling. In practice, SASA is shown to effectively identify valuable points related to foreground objects and to improve feature learning for point-based 3D detection. Moreover, it is an easy-to-plug-in module that can boost various point-based detectors, including single-stage and two-stage ones. Extensive experiments on the popular KITTI and nuScenes datasets validate the superiority of SASA, lifting point-based detection models to match state-of-the-art voxel-based methods.
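The semantics-guided sampling can be sketched as a foreground-score-weighted variant of farthest point sampling; the particular weighting rule (distance times score to a power gamma) is an assumption used for illustration.

```python
import numpy as np

def semantics_guided_fps(points, fg_scores, num_samples, gamma=1.0):
    """Greedy farthest-point sampling biased towards foreground points.

    points: (N, 3); fg_scores: (N,) in [0, 1] from a side segmentation head.
    Each step picks the point maximizing distance-to-selected * score**gamma,
    so foreground points are preferentially retained during downsampling.
    """
    selected = [int(np.argmax(fg_scores))]          # start from the strongest foreground point
    min_dist = np.linalg.norm(points - points[selected[0]], axis=1)
    for _ in range(num_samples - 1):
        crit = min_dist * (fg_scores ** gamma)      # semantics-weighted FPS criterion
        nxt = int(np.argmax(crit))
        selected.append(nxt)
        min_dist = np.minimum(min_dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(selected)
```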
Accurate 3D object detection from point clouds has become an important component of autonomous driving. However, the volumetric representations and projection methods of previous works fail to establish relationships between local point sets. In this paper, we propose the Sparse Voxel-Graph Attention Network (SVGA-Net), a novel end-to-end trainable network that mainly consists of a voxel-graph module and a sparse-to-dense regression module, to achieve comparable 3D detection on raw LiDAR data. Specifically, SVGA-Net constructs a local complete graph within each partitioned 3D spherical voxel and a global KNN graph over all voxels. The local and global graphs serve as attention mechanisms that enhance the extracted features. Furthermore, the novel sparse-to-dense regression module improves 3D box estimation accuracy by aggregating feature maps at different levels. Experiments on the KITTI detection benchmark demonstrate the efficiency of extending graph representations to 3D object detection, and the proposed SVGA-Net achieves decent detection accuracy.
In recent years, 3D object detection from LiDAR data for autonomous driving has been making remarkable progress. Among state-of-the-art methods, encoding point clouds into a bird's eye view (BEV) has proven both effective and efficient. Unlike the perspective view, BEV preserves rich spatial and distance information between objects; and although more distant objects of the same type do not appear smaller in BEV, they contain sparser point cloud features. This fact weakens BEV feature extraction with a shared convolutional neural network. To address this challenge, we propose the Range-Aware Attention Network (RAANet), which extracts more powerful BEV features and yields superior 3D object detections. The range-aware attention (RAA) convolution significantly improves feature extraction for both near and distant objects. Moreover, we propose a novel auxiliary loss for density estimation that further enhances RAANet's detection accuracy for occluded objects. Notably, the proposed RAA convolution is lightweight and compatible with integration into any CNN architecture used for BEV detection. Extensive experiments on the nuScenes dataset show that our proposed method outperforms state-of-the-art LiDAR-based 3D object detection methods, with a real-time inference speed of 16 Hz for the full version and 22 Hz for the lite version. The code is publicly available at the anonymous GitHub repository https://github.com/Anonymous0522/ange.
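As an illustration of what a range-aware attention on BEV features could look like, the sketch below feeds a per-cell distance-from-ego channel into a small convolution that predicts a spatial attention mask; the single-conv design and the BEV extent are assumptions, not RAANet's actual RAA convolution.

```python
import torch
import torch.nn as nn

class RangeAwareAttention(nn.Module):
    """Toy range-aware spatial attention for BEV features: the per-cell
    distance from the ego vehicle is fed alongside the features to predict
    an attention mask that re-weights the features."""
    def __init__(self, channels=64, bev_extent=50.0):
        super().__init__()
        self.att = nn.Conv2d(channels + 1, 1, kernel_size=3, padding=1)
        self.bev_extent = bev_extent

    def forward(self, x):
        # x: (B, C, H, W) BEV feature map covering [-extent, extent] metres
        B, C, H, W = x.shape
        ys = torch.linspace(-self.bev_extent, self.bev_extent, H, device=x.device)
        xs = torch.linspace(-self.bev_extent, self.bev_extent, W, device=x.device)
        yy, xx = torch.meshgrid(ys, xs, indexing="ij")
        rng = torch.sqrt(xx ** 2 + yy ** 2).expand(B, 1, H, W)   # distance from ego per cell
        mask = torch.sigmoid(self.att(torch.cat([x, rng], dim=1)))
        return x * mask           # range-conditioned re-weighting of the features

att = RangeAwareAttention()
out = att(torch.randn(2, 64, 128, 128))
```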
3D object detection received increasing attention in autonomous driving recently. Objects in 3D scenes are distributed with diverse orientations. Ordinary detectors do not explicitly model the variations of rotation and reflection transformations. Consequently, large networks and extensive data augmentation are required for robust detection. Recent equivariant networks explicitly model the transformation variations by applying shared networks on multiple transformed point clouds, showing great potential in object geometry modeling. However, it is difficult to apply such networks to 3D object detection in autonomous driving due to its large computation cost and slow reasoning speed. In this work, we present TED, an efficient Transformation-Equivariant 3D Detector to overcome the computation cost and speed issues. TED first applies a sparse convolution backbone to extract multi-channel transformation-equivariant voxel features; and then aligns and aggregates these equivariant features into lightweight and compact representations for high-performance 3D object detection. On the highly competitive KITTI 3D car detection leaderboard, TED ranked 1st among all submissions with competitive efficiency.
It is well recognized that fusing the complementary information from depth-aware LiDAR point clouds and semantically rich stereo images would benefit 3D object detection. However, it is non-trivial to explore the inherently unnatural interaction between sparse 3D points and dense 2D pixels. To ease this difficulty, recent proposals usually project the 3D points onto the 2D image plane to sample the image data and then aggregate the data at the points. However, this approach often suffers from the mismatch between the resolution of the point cloud and that of the RGB image, leading to sub-optimal performance. Specifically, the sparse points acting as the multi-modal data aggregation locations cause severe information loss for high-resolution images, which in turn undermines the effectiveness of multi-sensor fusion. In this paper, we present VPFNet, a new architecture that cleverly aligns and aggregates the point cloud and image data at "virtual" points. In particular, with a density lying between that of the 3D points and that of the 2D pixels, the virtual points can nicely bridge the resolution gap between the two sensors and thus preserve more information for processing. We also investigate data augmentation techniques that can be applied to both point clouds and RGB images, since data augmentation has so far made a non-negligible contribution to 3D object detectors. We conduct extensive experiments on the KITTI dataset and observe good performance compared with state-of-the-art methods. Notably, our VPFNet achieves 83.21% moderate 3D AP and 91.86% moderate BEV AP on the KITTI test set, ranking 1st as of May 21st, 2021. The network design also takes computational efficiency into account: we achieve an FPS of 15 on a single NVIDIA RTX 2080Ti GPU. The code will be made available for reproduction and further investigation.
There are two streams of 3D detection from point clouds: single-stage methods and two-stage methods. While the former are more computationally efficient, the latter usually provide better detection accuracy. By carefully examining the two-stage approaches, we find that, if appropriately designed, the first stage can produce accurate box regression. In that case, the second stage mainly rescores the boxes so that boxes with better localization get selected. Based on this observation, we design a single-stage anchor-free network that can fulfill these requirements. The network, named AFDetV2, extends previous work by incorporating a self-calibrated convolution block in the backbone, a keypoint auxiliary supervision, and an IoU prediction branch in the multi-task head. As a result, the detection accuracy is drastically boosted in the single stage. To evaluate our approach, we conduct extensive experiments on the Waymo Open Dataset and the nuScenes Dataset. We observe that AFDetV2 achieves state-of-the-art results on both datasets, outperforming all prior art, including both single-stage and two-stage 3D detectors. AFDetV2 won 1st place in the Real-Time 3D Detection track of the Waymo Open Dataset Challenge, and a variant of our model, AFDetV2-Base, was entitled the "Most Efficient Model" by the challenge sponsor, showing superior computational efficiency. To demonstrate the generality of this single-stage method, we also apply it to the first stage of a two-stage network. Without exception, the results show that with the strengthened backbone and the rescoring approach, the second-stage refinement is no longer needed.
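A tiny sketch of the kind of rescoring an IoU prediction branch enables: classification confidence is re-weighted by the predicted IoU so better-localized boxes win NMS. The geometric-mean form and the exponent are hypothetical, not necessarily AFDetV2's exact formula.

```python
import numpy as np

def iou_aware_rescore(cls_scores, iou_preds, alpha=0.5):
    """Combine classification confidence with predicted localization IoU.

    cls_scores, iou_preds: (N,) arrays in [0, 1]; alpha is a hypothetical
    trade-off exponent. Boxes with equal class confidence but higher
    predicted IoU are ranked higher, which is what a second-stage
    re-scoring would otherwise achieve.
    """
    iou_preds = np.clip(iou_preds, 0.0, 1.0)
    return cls_scores ** (1.0 - alpha) * iou_preds ** alpha

final = iou_aware_rescore(np.array([0.9, 0.9]), np.array([0.5, 0.8]))
# the second box now outranks the first despite identical class scores
```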
There is often a huge imbalance between foreground points (i.e., on objects) and background points in outdoor LiDAR point clouds. It prevents cutting-edge detectors from focusing on the informative regions needed to produce accurate 3D object detection results. This paper proposes a new object detection network with semantic point-voxel feature interaction, termed PV-RCNN++. Unlike most existing methods, PV-RCNN++ exploits semantic information to enhance the quality of object detection. First, a semantic segmentation module is proposed to retain more discriminative foreground keypoints. Such a module guides our PV-RCNN++ to integrate more object-related point and voxel features in key regions. Then, to make points and voxels interact efficiently, we utilize a voxel query based on the Manhattan distance to quickly sample voxel features around keypoints. Compared with the ball query, such a voxel query reduces the time complexity from O(N) to O(K). Furthermore, to avoid learning only local features, an attention-based residual PointNet module is designed to expand the receptive field so that neighboring voxel features are adaptively aggregated into the keypoints. Extensive experiments on the KITTI dataset show that PV-RCNN++ achieves 81.60%, 40.18%, and 68.21% 3D mAP on Car, Pedestrian, and Cyclist respectively, on par with or even better than the state of the art.
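The complexity argument can be made concrete with a small sketch: given a hash map from integer voxel coordinates to features, the voxels around a keypoint are gathered by probing a fixed set of Manhattan-ball offsets, i.e. O(K) lookups instead of an O(N) scan over all voxels. The hash-map layout is an assumption.

```python
import numpy as np

def manhattan_voxel_query(keypoint, voxel_table, voxel_size, manhattan_radius=2):
    """Gather voxel features within a Manhattan radius of a keypoint.

    keypoint: (3,) xyz; voxel_table: dict {(ix, iy, iz): feature vector};
    voxel_size: (3,) voxel edge lengths. Only a constant number of offsets
    is probed, so the cost does not grow with the total number of voxels.
    """
    center = tuple((keypoint / np.asarray(voxel_size)).astype(int))
    gathered = []
    r = manhattan_radius
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dz in range(-r, r + 1):
                if abs(dx) + abs(dy) + abs(dz) > r:
                    continue                      # outside the Manhattan ball
                key = (center[0] + dx, center[1] + dy, center[2] + dz)
                if key in voxel_table:
                    gathered.append(voxel_table[key])
    return np.stack(gathered) if gathered else np.zeros((0, 1))

# usage with a toy sparse voxel table
table = {(10, 5, 2): np.array([1.0, 0.5]), (11, 5, 2): np.array([0.2, 0.3])}
feats = manhattan_voxel_query(np.array([1.05, 0.52, 0.21]), table, voxel_size=(0.1, 0.1, 0.1))
```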
The human brain can effortlessly recognize and localize objects, whereas current 3D object detection methods based on LiDAR point clouds still report inferior performance for detecting occluded and distant objects: the appearance of point clouds varies greatly due to occlusion, and point density varies inherently with distance to the sensor. Therefore, designing feature representations robust to such point clouds is critical. Inspired by human associative recognition, we propose a novel 3D detection framework that completes object features via domain adaptation. We bridge the gap between the perceptual domain, where features are derived from real scenes with sub-optimal representations, and the conceptual domain, where features are extracted from augmented scenes composed of unoccluded objects with rich detailed information. A feasible method for constructing conceptual scenes without external datasets is investigated. We further introduce an attention-based re-weighting module that adaptively strengthens the features of more informative regions. The feature enhancement capability of the network is exploited without introducing extra cost during inference, making it a plug-in for various 3D detection frameworks. We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed. Experiments on the nuScenes and Waymo datasets also validate the versatility of our method.
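A minimal sketch of a perceptual-to-conceptual feature adaptation term consistent with the description above: real-scene features are pulled towards features computed from the augmented complete-object scene on foreground locations, as a training-only auxiliary loss. The exact loss form and masking are assumptions.

```python
import torch
import torch.nn.functional as F

def feature_adaptation_loss(perceptual_feat, conceptual_feat, fg_mask):
    """Training-only auxiliary loss: align real-scene (perceptual) features
    with features from the augmented complete-object (conceptual) scene,
    restricted to foreground locations.

    perceptual_feat, conceptual_feat: (N, C) features at matched locations;
    fg_mask: (N,) float mask, 1 for foreground locations.
    """
    diff = F.mse_loss(perceptual_feat, conceptual_feat.detach(), reduction="none")
    per_loc = diff.mean(dim=1)                              # (N,)
    return (per_loc * fg_mask).sum() / fg_mask.sum().clamp(min=1.0)

# usage with toy tensors
loss = feature_adaptation_loss(torch.randn(50, 64), torch.randn(50, 64),
                               (torch.rand(50) > 0.7).float())
```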
3D object detection is receiving increasing attention from both industry and academia thanks to its wide applications in various fields. In this paper, we propose Point-Voxel Region-based Convolutional Neural Networks (PV-RCNNs) for 3D object detection from point clouds. First, we propose a novel 3D detector, PV-RCNN, which consists of two steps: voxel-to-keypoint scene encoding and keypoint-to-grid RoI feature abstraction. These two steps deeply integrate a 3D voxel CNN with PointNet-based set abstraction to extract discriminative features. Second, we propose an advanced framework, PV-RCNN++, for more efficient and accurate 3D object detection. It consists of two major improvements: a sectorized proposal-centric strategy for efficiently producing more representative keypoints, and VectorPool aggregation for better aggregating local point features with much less resource consumption. With these two strategies, our PV-RCNN++ is about 2x faster than PV-RCNN while also achieving better performance on the large-scale Waymo Open Dataset with a 150m x 150m detection range. Moreover, our proposed PV-RCNNs achieve state-of-the-art 3D detection performance on both the Waymo Open Dataset and the highly competitive KITTI benchmark. The source code is available at https://github.com/open-mmlab/OpenPCDet.
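A rough sketch of sectorized proposal-centric keypoint sampling as described: keep only points near proposal centers, split them into azimuth sectors, and run farthest point sampling within each sector. The radius, sector count, and the plain FPS inner loop are assumptions.

```python
import numpy as np

def sectorized_proposal_centric_sampling(points, proposal_centers, num_keypoints,
                                         radius=3.0, num_sectors=6):
    """points: (N, 3); proposal_centers: (M, 3). Keeps only points near a
    proposal, splits them into azimuth sectors, and runs farthest point
    sampling independently per sector."""
    # 1) proposal-centric filtering: keep points near any proposal center
    d = np.linalg.norm(points[:, None, :2] - proposal_centers[None, :, :2], axis=-1)
    near = points[d.min(axis=1) < radius]
    # 2) split the surviving points into azimuth sectors
    sector = ((np.arctan2(near[:, 1], near[:, 0]) + np.pi) / (2 * np.pi) * num_sectors).astype(int)
    sector = np.clip(sector, 0, num_sectors - 1)
    # 3) plain farthest point sampling inside each sector
    keypoints = []
    per_sector = max(1, num_keypoints // num_sectors)
    for s in range(num_sectors):
        pts = near[sector == s]
        if len(pts) == 0:
            continue
        sel = [0]
        dist = np.linalg.norm(pts - pts[0], axis=1)
        while len(sel) < min(per_sector, len(pts)):
            nxt = int(np.argmax(dist))
            sel.append(nxt)
            dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
        keypoints.append(pts[sel])
    if not keypoints:
        return np.zeros((0, 3))
    return np.concatenate(keypoints, axis=0)
```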
Currently, existing state-of-the-art 3D object detectors are in the two-stage paradigm. These methods typically comprise two steps: 1) a region proposal network is utilized to propose a handful of high-quality proposals in a bottom-up fashion; 2) the semantic features of the proposed regions are resized and pooled to summarize RoI-wise representations for further refinement. Note that these RoI-wise representations in step 2 are treated as uncorrelated entries when fed to the detection head that follows. However, we observe that the proposals generated by step 1 somehow deviate from the ground truth, with probable candidates emerging in their local neighborhood. Challenges arise when a proposal largely forsakes its boundary information due to coordinate offsets, while existing networks lack a corresponding information-compensation mechanism. In this paper, we propose BADet for 3D object detection from point clouds. Specifically, instead of refining each proposal independently as previous works do, we represent each proposal as a node and construct a local neighborhood graph over the proposals within a given cut-off threshold, explicitly exploiting the boundary correlations of objects. In addition, we devise a lightweight region feature aggregation module that makes full use of voxel-wise, pixel-wise, and point-wise features with expanding receptive fields for more informative RoI-wise representations. We validate BADet on the widely used KITTI dataset and the highly challenging nuScenes dataset. As of April 17th, 2021, our BADet achieves on-par performance on the KITTI 3D detection leaderboard and ranks 1st on the KITTI BEV detection leaderboard at moderate difficulty. The source code is available at https://github.com/rui-qian/badet.
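Constructing the local neighborhood graph over proposals can be sketched as below: each proposal is a node, and an undirected edge links two proposals whose centers lie within the cut-off threshold (the threshold value here is hypothetical).

```python
import numpy as np

def build_proposal_graph(proposal_boxes, cutoff=1.0):
    """Build a local neighborhood graph over 3D proposals.

    proposal_boxes: (N, 7) array of (cx, cy, cz, l, w, h, yaw).
    Returns an (E, 2) array of undirected edges between proposals whose
    centers lie within `cutoff` metres; downstream, node features would be
    aggregated along these edges instead of refining each box in isolation.
    """
    centers = proposal_boxes[:, :3]
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    src, dst = np.where((d < cutoff) & (d > 0))
    edges = np.stack([src, dst], axis=1)
    return edges[edges[:, 0] < edges[:, 1]]   # keep each undirected edge once
```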
Accurate detection of objects in 3D point clouds is a central problem in many applications, such as autonomous navigation, housekeeping robots, and augmented/virtual reality. To interface a highly sparse LiDAR point cloud with a region proposal network (RPN), most existing efforts have focused on hand-crafted feature representations, for example, a bird's eye view projection. In this work, we remove the need of manual feature engineering for 3D point clouds and propose VoxelNet, a generic 3D detection network that unifies feature extraction and bounding box prediction into a single stage, end-to-end trainable deep network. Specifically, VoxelNet divides a point cloud into equally spaced 3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly introduced voxel feature encoding (VFE) layer. In this way, the point cloud is encoded as a descriptive volumetric representation, which is then connected to a RPN to generate detections. Experiments on the KITTI car detection benchmark show that VoxelNet outperforms the state-of-the-art LiDAR based 3D detection methods by a large margin. Furthermore, our network learns an effective discriminative representation of objects with various geometries, leading to encouraging results in 3D detection of pedestrians and cyclists, based on only LiDAR.
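A condensed sketch of a voxel feature encoding (VFE) layer as described: a shared fully connected layer over the points of each voxel, a per-voxel max-pool, and concatenation of the pooled vector back onto every point. The channel sizes are placeholders.

```python
import torch
import torch.nn as nn

class VFELayer(nn.Module):
    """Voxel Feature Encoding layer: pointwise FC -> voxel max-pool -> concat."""
    def __init__(self, in_channels=7, out_channels=32):
        super().__init__()
        self.fc = nn.Linear(in_channels, out_channels // 2)
        self.bn = nn.BatchNorm1d(out_channels // 2)

    def forward(self, voxel_points):
        # voxel_points: (V, T, C) = V voxels, T points per voxel, C input features
        V, T, C = voxel_points.shape
        pointwise = torch.relu(self.bn(self.fc(voxel_points).view(V * T, -1))).view(V, T, -1)
        aggregated = pointwise.max(dim=1, keepdim=True).values       # (V, 1, C')
        # concatenate the voxel-level context back onto every point
        return torch.cat([pointwise, aggregated.expand(-1, T, -1)], dim=2)

vfe = VFELayer()
out = vfe(torch.randn(10, 35, 7))   # -> (10, 35, 32)
```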