Promising complementarity exists between the texture features of color images and the geometric information of LiDAR point clouds. However, many challenges remain in the field of 3D object detection for achieving efficient and reliable feature fusion. In this paper, first, the unstructured 3D point cloud is filled in the 2D plane, and the 3D point cloud features are extracted faster using projection-aware convolution layers. Furthermore, the corresponding indexes between the different sensor signals are established in advance in the data preprocessing, which enables faster cross-modal fusion. To address the misalignment problem between LiDAR points and image pixels, two new plug-in fusion modules, LiCamFuse and BiLiCamFuse, are proposed. In LiCamFuse, a soft query weighting perceiving the Euclidean distance of bimodal features is proposed. In BiLiCamFuse, a fusion module with dual attention is proposed to deeply correlate the geometric and textural features of the scene. Quantitative results on the KITTI dataset demonstrate that the proposed method achieves better feature-level fusion. In addition, the proposed network shows a shorter running time compared with existing methods.
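As an illustration of the soft query weighting idea, here is a minimal PyTorch sketch that weights candidate pixel features by the Euclidean distance between a projected LiDAR point and each candidate pixel; the function name, shapes, and temperature parameter are illustrative assumptions, not the paper's implementation.

```python
import torch

def soft_query_weights(proj_pts, cand_pix, temperature=1.0):
    """Hypothetical sketch: soft weights over K candidate pixels per point,
    driven by Euclidean distance on the image plane."""
    # proj_pts: (N, 2) projected LiDAR point locations on the image plane
    # cand_pix: (N, K, 2) candidate pixel locations per point
    d = torch.norm(cand_pix - proj_pts.unsqueeze(1), dim=-1)  # (N, K) distances
    return torch.softmax(-d / temperature, dim=-1)            # closer pixels weigh more
```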
It is well recognized that fusing the complementary information from depth-aware LiDAR point clouds and semantically rich stereo images would benefit 3D object detection. However, it is non-trivial to explore the inherently unnatural interaction between sparse 3D points and dense 2D pixels. To ease this difficulty, recent approaches typically project 3D points onto the 2D image plane to sample the image data and then aggregate the data at the points. However, these approaches often suffer from the mismatch between the resolution of point clouds and RGB images, leading to sub-optimal performance. Specifically, taking the sparse points as the multi-modal data aggregation locations causes severe information loss for high-resolution images, which in turn undermines the effectiveness of multi-sensor fusion. In this paper, we present VPFNet — a new architecture that cleverly aligns and aggregates the point cloud and image data at "virtual" points. Particularly, with their density lying between that of the 3D points and the 2D pixels, the virtual points can nicely bridge the resolution gap between the two sensors, and thus preserve more information for processing. Moreover, we also investigate data augmentation techniques that can be applied to both point clouds and RGB images, as data augmentation has made a non-negligible contribution to 3D object detectors to date. We have conducted extensive experiments on the KITTI dataset and observed good performance compared with state-of-the-art methods. Remarkably, our VPFNet achieves 83.21% moderate 3D AP and 91.86% moderate BEV AP on the KITTI test set, ranking 1st since May 21, 2021. The network design also takes computational efficiency into consideration — we can achieve an FPS of 15 on a single NVIDIA RTX 2080Ti GPU. The code will be made available for reproduction and further investigation.
Recently, fusing LiDAR point clouds and camera images has been shown to improve the performance and robustness of 3D object detection, since these two modalities are naturally highly complementary. In this paper, we propose EPNet++ for multi-modal 3D object detection by introducing a novel Cascade Bi-directional Fusion (CB-Fusion) module and a Multi-Modal Consistency (MC) loss. More concretely, the proposed CB-Fusion module enriches point features with the abundant semantic information of image features in a cascade bi-directional interaction fusion manner, leading to more comprehensive and discriminative feature representations. The MC loss explicitly guarantees the consistency between predicted scores, leading to more comprehensive and reliable confidence scores. Experimental results on the KITTI, JRDB, and SUN-RGBD datasets demonstrate the superiority of EPNet++ over state-of-the-art methods. Besides, we emphasize a critical but easily overlooked problem, which is to explore the performance and robustness of 3D detectors in highly sparse scenes. Extensive experiments show that EPNet++ outperforms existing SOTA methods by remarkable margins in highly sparse point cloud cases, which may be a promising direction for reducing the expensive cost of LiDAR sensors. The code will be released in the future.
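The MC loss is only described at a high level here; below is one plausible, hedged form of a consistency penalty that pulls two confidence estimates toward each other. The exact EPNet++ formulation may differ — this is an assumption for illustration only.

```python
import torch.nn.functional as F

def multimodal_consistency_loss(cls_score, loc_conf):
    """Hedged sketch of a consistency-style loss between a classification
    score and a localization confidence (both in [0, 1]); not the paper's
    exact MC loss."""
    return F.mse_loss(cls_score, loc_conf.detach()) + \
           F.mse_loss(loc_conf, cls_score.detach())
```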
LiDAR-based 3D object detectors have achieved impressive performance on many benchmarks; however, multi-sensor fusion-based techniques are promising to further improve the results. PointPainting, as a recently proposed framework, can add the semantic information from the 2D image into the 3D LiDAR points by the painting operation to boost the detection performance. However, due to the limited resolution of 2D feature maps, a severe boundary-blurring effect happens during the re-projection of 2D semantic segmentation into the 3D point clouds. To handle this limitation, a general multimodal fusion framework MSF has been proposed to fuse the semantic information from both the 2D image and the 3D point-cloud scene parsing results. Specifically, MSF includes three main modules. First, SOTA off-the-shelf 2D/3D semantic segmentation approaches are employed to generate the parsing results for 2D images and 3D point clouds. The 2D semantic information is further re-projected into the 3D point clouds with calibrated parameters. To handle the misalignment between the 2D and 3D parsing results, an AAF module is proposed to fuse them by learning an adaptive fusion score. Then the point cloud with the fused semantic labels is sent to the following 3D object detectors. Furthermore, we propose a DFF module to aggregate deep features at different levels to boost the final detection performance. The effectiveness of the framework has been verified on two public large-scale 3D object detection benchmarks by comparing with different baselines. The experimental results show that the proposed fusion strategies can significantly improve the detection performance compared to the methods using only point clouds and the methods using only 2D semantic information. Most importantly, the proposed approach significantly outperforms other approaches and sets new SOTA results on the nuScenes testing benchmark.
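The painting operation itself is a standard projection-and-append step. A minimal NumPy sketch, assuming a 3x4 camera projection matrix `P` and a per-pixel semantic score map (names are illustrative, not the released code):

```python
import numpy as np

def paint_points(points, sem_scores, P):
    """Project LiDAR points into the image and append per-pixel semantic
    scores to each point; points outside the image keep zero scores."""
    # points: (N, >=3) LiDAR points; sem_scores: (H, W, C); P: (3, 4)
    N = points.shape[0]
    hom = np.hstack([points[:, :3], np.ones((N, 1))])          # homogeneous coords
    uvw = hom @ P.T                                            # (N, 3)
    depth = uvw[:, 2]
    uv = np.floor(uvw[:, :2] / np.maximum(depth[:, None], 1e-6)).astype(int)
    H, W, C = sem_scores.shape
    valid = (depth > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < W) \
            & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    painted = np.zeros((N, C), dtype=sem_scores.dtype)
    painted[valid] = sem_scores[uv[valid, 1], uv[valid, 0]]    # gather class scores
    return np.hstack([points, painted])                        # decorated points
```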
3D object detection with multiple sensors is essential for an accurate and reliable perception system of autonomous driving and robotics. Existing 3D detectors significantly improve accuracy by adopting a two-stage paradigm, which relies solely on LiDAR point clouds for 3D proposal refinement. Though impressive, the sparsity of point clouds, especially for faraway points, makes it difficult for the LiDAR-only refinement module to accurately recognize and locate objects. To address this problem, we propose a novel multi-modal two-stage approach, FusionRCNN, which effectively and efficiently fuses point clouds and camera images in Regions of Interest (RoI). FusionRCNN adaptively integrates both the sparse geometric information from LiDAR and the dense texture information from the camera in a unified attention mechanism. Specifically, it first utilizes RoIPooling to obtain image sets of uniform size and acquires point sets by sampling raw points within the proposals in the RoI extraction step; it then leverages intra-modal self-attention to enhance the domain-specific features, followed by a well-designed cross-attention to fuse the information from the two modalities. FusionRCNN is fundamentally plug-and-play and supports different one-stage methods with almost no architectural changes. Extensive experiments on the KITTI and Waymo benchmarks demonstrate that our method significantly boosts the performance of popular detectors. Remarkably, FusionRCNN improves the strong SECOND baseline by 6.14% mAP on Waymo and outperforms competing two-stage approaches. The code will be released soon at https://github.com/xxlbigbrother/fusion-rcnn.
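The self-attention-then-cross-attention pattern can be sketched with standard PyTorch modules; this is an illustrative simplification, not the released FusionRCNN code:

```python
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Intra-modal self-attention on point tokens, followed by
    cross-attention from point tokens (queries) to image tokens."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, pts_feat, img_feat):
        # pts_feat: (B, Np, dim) per-RoI point tokens
        # img_feat: (B, Ni, dim) per-RoI image tokens
        pts_feat, _ = self.self_attn(pts_feat, pts_feat, pts_feat)
        fused, _ = self.cross_attn(pts_feat, img_feat, img_feat)
        return fused
```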
In recent years, 3D object detection from LiDAR point clouds has made great progress thanks to the development of deep learning technologies. Although voxel-based or point-based methods are popular in 3D object detection, they usually involve time-consuming operations such as 3D convolutions over voxels or ball queries among points, making the resulting networks unsuitable for time-critical applications. On the other hand, 2D view-based methods feature higher computational efficiency, while usually obtaining inferior performance to the voxel-based or point-based methods. In this work, we present a real-time view-based single-stage 3D object detector, namely CVFNet, to fulfill this task. To strengthen cross-view feature learning under the demanding efficiency condition, our framework extracts the features of different views and fuses them in an efficient progressive way. We first propose a novel Point-Range feature fusion module that deeply integrates point and range view features in multiple stages. Then, a special Slice Pillar is designed to well maintain the 3D geometry when transforming the obtained deep point-view features into the bird's eye view. To better balance the sample ratio, a sparse pillar detection head is presented to focus the detection on non-empty grids. We conduct experiments on the popular KITTI and nuScenes benchmarks, and achieve state-of-the-art performance in terms of both accuracy and speed.
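The range view used by such point-range fusion is typically built with the standard spherical projection of a LiDAR sweep. A minimal sketch, where the KITTI-style field-of-view settings and image size are assumptions:

```python
import numpy as np

def to_range_view(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Project a LiDAR sweep to a (H, W) range image; empty cells are -1."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1)
    yaw = np.arctan2(y, x)
    pitch = np.arcsin(z / np.maximum(r, 1e-6))
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    u = (0.5 * (1.0 - yaw / np.pi) * W).astype(int) % W          # azimuth column
    v = (1.0 - (pitch - fov_down_r) / (fov_up_r - fov_down_r)) * H
    v = np.clip(v, 0, H - 1).astype(int)                         # elevation row
    rng = np.full((H, W), -1.0, dtype=np.float32)
    rng[v, u] = r
    return rng
```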
Aiming at highly accurate object detection for connected and automated vehicles (CAVs), this paper presents a Deep Neural Network based 3D object detection model that leverages a three-stage feature extractor by developing a novel LIDAR-Camera fusion scheme. The proposed feature extractor extracts high-level features from two input sensory modalities and recovers the important features discarded during the convolutional process. The novel fusion scheme effectively fuses features across sensory modalities and convolutional layers to find the best representative global features. The fused features are shared by a two-stage network: the region proposal network (RPN) and the detection head (DH). The RPN generates high-recall proposals, and the DH produces final detection results. The experimental results show the proposed model outperforms more recent research on the KITTI 2D and 3D detection benchmark, particularly for distant and highly occluded instances.
Learning powerful representations in bird's-eye-view (BEV) for perception tasks is trending and drawing extensive attention from both industry and academia. Conventional approaches for most autonomous driving algorithms perform detection, segmentation, tracking, etc., in a front or perspective view. As sensor configurations become increasingly complex, integrating multi-source information from different sensors and representing features in a unified view become of vital importance. BEV perception inherits several advantages, as representing surrounding scenes in BEV is intuitive and fusion-friendly, and representing objects in BEV is most desirable for subsequent modules such as planning and/or control. The core problems of BEV perception lie in (a) how to reconstruct the lost 3D information via view transformation from perspective view to BEV; (b) how to acquire ground-truth annotations in the BEV grid; (c) how to formulate the pipeline to incorporate features from different sources and views; and (d) how to adapt and generalize algorithms as sensor configurations vary across different scenarios. In this survey, we review the most recent work on BEV perception and provide an in-depth analysis of different solutions. In addition, several system designs of BEV approaches from industry are described as well. Furthermore, we introduce a full suite of practical guidebooks to improve the performance of BEV perception tasks, covering camera, LiDAR, and fusion inputs. At last, we point out the future research directions in this area. We hope this report will shed some light on the community and encourage more follow-up research on BEV perception. We keep an active repository to collect the most recent work and provide a toolbox bag of tricks at https://github.com/openperceptionx/bevperception-survey-recipe.
In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encodes both discrete-state image features as well as continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI as well as a large scale 3D object detection benchmark shows significant improvements over the state of the art.
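A simplified sketch of a continuous-fusion-style layer: for each target location, image features gathered at the K nearest source points are concatenated with their geometric offsets and passed through an MLP. The layer sizes, aggregation, and names are assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class ContinuousFusionLayer(nn.Module):
    """Gather features from the K nearest source points, encode them
    together with the 3D offsets, and sum-aggregate per target location."""
    def __init__(self, feat_dim=64, k=4):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, target_xyz, src_xyz, src_feat):
        # target_xyz: (M, 3); src_xyz: (N, 3); src_feat: (N, C)
        d = torch.cdist(target_xyz, src_xyz)               # (M, N) distances
        knn = d.topk(self.k, largest=False).indices        # (M, k) neighbor ids
        neigh_feat = src_feat[knn]                         # (M, k, C)
        offset = src_xyz[knn] - target_xyz.unsqueeze(1)    # continuous geometry
        fused = self.mlp(torch.cat([neigh_feat, offset], dim=-1))
        return fused.sum(dim=1)                            # (M, C)
```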
Point cloud learning has lately attracted increasing attention due to its wide applications in many areas, such as computer vision, autonomous driving, and robotics. As a dominating technique in AI, deep learning has been successfully used to solve various 2D vision problems. However, deep learning on point clouds is still in its infancy due to the unique challenges faced by the processing of point clouds with deep neural networks. Recently, deep learning on point clouds has become even thriving, with numerous methods being proposed to address different problems in this area. To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds. It covers three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. It also presents comparative results on several publicly available datasets, together with insightful observations and inspiring future research directions.
Leveraging multi-modal fusion, especially between camera and LiDAR, has become essential for building accurate and robust 3D object detection systems for autonomous vehicles. Until recently, point-decorating approaches, in which points in the point cloud are augmented with camera features, have been the dominant approach in the field. However, these approaches fail to utilize the higher-resolution images from cameras. Recent works projecting camera features to the bird's-eye-view (BEV) space for fusion have also been proposed; however, they require projecting millions of pixels, most of which only contain background information. In this work, we propose a novel approach, Center Feature Fusion (CFF), in which we leverage center-based detection networks in both the camera and LiDAR streams to identify relevant object locations. We then use the center-based detections to identify the locations of pixel features relevant to object locations, which are a small fraction of the total number in the image. These are then projected and fused in the BEV frame. On the nuScenes dataset, we outperform the LiDAR-only baseline by 4.9% mAP while fusing up to 100x fewer features than other fusion methods.
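A hedged sketch of the center-driven fusion step: pixel features are sampled only at detected object centers and scattered into the BEV grid. The grid bounds, names, and the assumption that image and BEV channel counts match are all illustrative:

```python
import torch

def fuse_center_features(center_xyz, center_uv, img_feat, bev_feat,
                         voxel_size=0.5, x_min=-50.0, y_min=-50.0):
    """Sample pixel features at M detected centers and add them into the
    matching BEV cells (duplicate cells would need index_put_ with
    accumulate=True; kept simple for the sketch)."""
    # center_xyz: (M, 3); center_uv: (M, 2); img_feat: (C, H, W); bev_feat: (C, Hb, Wb)
    C, H, W = img_feat.shape
    u = center_uv[:, 0].clamp(0, W - 1).long()
    v = center_uv[:, 1].clamp(0, H - 1).long()
    sampled = img_feat[:, v, u]                            # (C, M) per-center features
    gx = ((center_xyz[:, 0] - x_min) / voxel_size).long()  # BEV column
    gy = ((center_xyz[:, 1] - y_min) / voxel_size).long()  # BEV row
    _, Hb, Wb = bev_feat.shape
    keep = (gx >= 0) & (gx < Wb) & (gy >= 0) & (gy < Hb)
    bev_feat[:, gy[keep], gx[keep]] += sampled[:, keep]    # scatter into BEV
    return bev_feat
```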
Despite the growing number of datasets collected for training 3D object detection models, annotating 3D boxes on LiDAR scans still requires significant human effort. To automate the annotation and facilitate the production of various customized datasets, we propose an end-to-end multimodal transformer (MTrans) autolabeler, which leverages both LiDAR scans and images to generate precise 3D box annotations from weak 2D bounding boxes. To alleviate the pervasive sparsity problem that hinders existing autolabelers, MTrans densifies the sparse point clouds by generating new 3D points based on the 2D image information. With a multi-task design, MTrans segments foreground/background, densifies LiDAR point clouds, and regresses 3D boxes simultaneously. Experimental results verify the effectiveness of MTrans in improving the quality of the generated labels. By enriching the sparse point clouds, our method achieves 4.48% and 4.03% better 3D AP on the KITTI moderate and hard samples, respectively, than the state-of-the-art autolabeler. MTrans can also be extended to improve the accuracy of 3D object detection, yielding a remarkable 89.45% AP on the KITTI hard samples. The code is at https://github.com/cliu2/mtrans.
Accurate 3D object detection from point clouds has become a crucial component in autonomous driving. However, the volumetric representations and the projection methods in previous works fail to establish the relationships between local point sets. In this paper, we propose Sparse Voxel-Graph Attention Network (SVGA-Net), a novel end-to-end trainable network which mainly contains a voxel-graph module and a sparse-to-dense regression module to achieve comparable 3D detection from raw LIDAR data. Specifically, SVGA-Net constructs a local complete graph within each divided 3D spherical voxel and a global KNN graph through all voxels. The local and global graphs serve as the attention mechanism to enhance the extracted features. In addition, the novel sparse-to-dense regression module enhances the 3D box estimation accuracy through feature map aggregation at different levels. Experiments on the KITTI detection benchmark demonstrate the efficiency of extending graph representations to 3D object detection, and the proposed SVGA-Net can achieve decent detection accuracy.
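A toy PyTorch sketch of attention over a KNN graph of voxel centers, loosely in the spirit of the global graph described above; the scoring function and aggregation are assumptions:

```python
import torch
import torch.nn as nn

class KNNGraphAttention(nn.Module):
    """Aggregate each voxel's K nearest neighbors with learned
    attention weights over center-neighbor feature pairs."""
    def __init__(self, dim, k=8):
        super().__init__()
        self.k = k
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, xyz, feat):
        # xyz: (N, 3) voxel centers; feat: (N, C) voxel features
        idx = torch.cdist(xyz, xyz).topk(self.k, largest=False).indices  # (N, k)
        neigh = feat[idx]                                   # (N, k, C)
        center = feat.unsqueeze(1).expand_as(neigh)
        w = torch.softmax(self.attn(torch.cat([center, neigh], -1)), dim=1)
        return (w * neigh).sum(dim=1)                       # (N, C) enhanced feature
```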
Among existing methods, LiDAR odometry shows superior performance, but visual odometry is still widely used for its price advantage. Conventionally, the task of visual odometry mainly relies on the input of continuous images. However, it is very complicated for the odometry network to learn the geometric information provided by the images. In this paper, the concept of pseudo-LiDAR is introduced into the odometry to solve this problem. The pseudo-LiDAR point cloud back-projects the depth map generated by the image into a 3D point cloud, which changes the way of image representation. Compared with the stereo images, the pseudo-LiDAR point cloud generated by the stereo matching network can get explicit 3D coordinates. Since 6 Degrees of Freedom (DoF) pose transformation occurs in 3D space, the 3D structure information provided by the pseudo-LiDAR point cloud is more direct than the image. Compared with sparse LiDAR, the pseudo-LiDAR has a denser point cloud. In order to make full use of the rich point cloud information provided by the pseudo-LiDAR, a projection-aware odometry pipeline is adopted. Most previous LiDAR-based algorithms sampled 8192 points from the point cloud as the input of the odometry network. The projection-aware dense odometry pipeline takes all the pseudo-LiDAR point clouds generated from the image, except the error points, as the input of the network. While making full use of the 3D geometric information in the image, the semantic information in the image is also used in the odometry task. The 2D-3D fusion is achieved in an image-only-based odometry. Experiments on the KITTI dataset prove the effectiveness of our method. To the best of our knowledge, this is the first visual odometry method using pseudo-LiDAR.
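The back-projection from a depth map to a pseudo-LiDAR point cloud follows the standard pinhole camera model; a minimal sketch, assuming known camera intrinsics:

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a dense (H, W) depth map into an (H*W, 3) point cloud
    in the camera frame using the pinhole model."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth
    x = (u - cx) * z / fx   # lateral coordinate from pixel column
    y = (v - cy) * z / fy   # vertical coordinate from pixel row
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```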
3D object detection from LiDAR point cloud is a challenging problem in 3D scene understanding and has many practical applications. In this paper, we extend our preliminary work PointRCNN to a novel and strong point-cloud-based 3D object detection framework, the part-aware and aggregation neural network (Part-A2 net). The whole framework consists of the part-aware stage and the part-aggregation stage. Firstly, the part-aware stage for the first time fully utilizes free-of-charge part supervisions derived from 3D ground-truth boxes to simultaneously predict high quality 3D proposals and accurate intra-object part locations. The predicted intra-object part locations within the same proposal are grouped by our new-designed RoI-aware point cloud pooling module, which results in an effective representation to encode the geometry-specific features of each 3D proposal. Then the part-aggregation stage learns to re-score the box and refine the box location by exploring the spatial relationship of the pooled intra-object part locations. Extensive experiments are conducted to demonstrate the performance improvements from each component of our proposed framework. Our Part-A2 net outperforms all existing 3D detection methods and achieves new state-of-the-art on KITTI 3D object detection dataset by utilizing only the LiDAR point cloud data. Code is available at https://github.com/sshaoshuai/PointCloudDet3D.
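RoI-aware pooling can be sketched as binning points (in canonical RoI coordinates) into a fixed grid while empty cells stay zero, so the box geometry is preserved; this is a simplification of the module described above, with names and grid size assumed:

```python
import torch

def roi_aware_pool(pts, feats, roi_size, grid=6):
    """Sum-pool point features into a G^3 grid inside one RoI.
    pts: (N, 3) points in canonical RoI coords centered at the box;
    feats: (N, C); roi_size: (3,) box extents."""
    G = grid
    idx = ((pts / roi_size + 0.5) * G).long().clamp(0, G - 1)  # (N, 3) cell ids
    flat = idx[:, 0] * G * G + idx[:, 1] * G + idx[:, 2]
    out = torch.zeros(G * G * G, feats.shape[1], dtype=feats.dtype)
    out.index_add_(0, flat, feats)        # empty cells remain zero
    return out.view(G, G, G, -1)
```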
In this paper we propose to exploit multiple related tasks for accurate multi-sensor 3D object detection. Towards this goal we present an end-to-end learnable architecture that reasons about 2D and 3D object detection as well as ground estimation and depth completion. Our experiments show that all these tasks are complementary and help the network learn better representations by fusing information at various levels. Importantly, our approach leads the KITTI benchmark on 2D, 3D and bird's eye view object detection, while being real-time.
The bird's-eye-view (BEV) is vastly adopted by most current point cloud detectors due to the applicability of well-explored 2D detection techniques. However, existing methods obtain BEV features by simply collapsing voxel or point features along the height dimension, which causes a heavy loss of 3D spatial information. To alleviate the information loss, we propose a novel point cloud detection network based on a Multi-level feature Dimensionality Reduction strategy, called MDRNet. In MDRNet, the Spatial-aware Dimensionality Reduction (SDR) is designed to dynamically focus on the valuable parts of objects during voxel-to-BEV feature transformation. Furthermore, the Multi-level Spatial Residuals (MSR) is proposed to fuse the multi-level spatial information in the BEV feature maps. Extensive experiments on nuScenes show that the proposed method outperforms state-of-the-art methods. The code will be available upon publication.
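One plausible form of such a spatial-aware reduction: per-height attention weights decide how much each slice contributes to the BEV feature, instead of naively flattening the height axis. This is an assumption for illustration, not the published MDRNet code:

```python
import torch
import torch.nn as nn

class AttentiveHeightCollapse(nn.Module):
    """Collapse the height axis of a voxel feature volume with learned
    per-slice attention rather than plain reshaping."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, voxel_feat):
        # voxel_feat: (B, C, Z, Y, X)
        attn = torch.softmax(self.score(voxel_feat), dim=2)  # weights over height Z
        return (voxel_feat * attn).sum(dim=2)                # (B, C, Y, X) BEV map
```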
Current LiDAR-only 3D detection methods inevitably suffer from the sparsity of point clouds. Many multi-modal methods have been proposed to alleviate this issue, while different representations of images and point clouds make them difficult to fuse, resulting in suboptimal performance. In this paper, we present a novel multi-modal framework SFD (Sparse Fuse Dense), which utilizes pseudo point clouds generated from depth completion to tackle the issues mentioned above. Different from prior works, we propose a new RoI fusion strategy 3D-GAF (3D Grid-wise Attentive Fusion) to make fuller use of information from different types of point clouds. Specifically, 3D-GAF fuses 3D RoI features from the pair of point clouds in a grid-wise attentive way, which is more fine-grained and more precise. In addition, we propose a SynAugment (Synchronized Augmentation) to enable our multi-modal framework to utilize all data augmentation approaches tailored to LiDAR-only methods. Lastly, we customize an effective and efficient feature extractor CPConv (Color Point Convolution) for pseudo point clouds. It can explore 2D image features and 3D geometric features of pseudo point clouds simultaneously. Our method holds the highest entry on the KITTI car 3D object detection leaderboard, demonstrating the effectiveness of our SFD. Code is available at https://github.com/littlepey/sfd.
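A hedged sketch of grid-wise attentive RoI fusion: per-grid gates trade off the raw and pseudo point cloud branches. This is a simplification of the published 3D-GAF design, with names and the gating form assumed:

```python
import torch
import torch.nn as nn

class GridAttentiveFusion(nn.Module):
    """Per-grid sigmoid gates blend RoI features from two point cloud types."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, raw_feat, pseudo_feat):
        # raw_feat, pseudo_feat: (B, G, C) per-RoI grid features
        g = self.gate(torch.cat([raw_feat, pseudo_feat], dim=-1))
        return g * raw_feat + (1.0 - g) * pseudo_feat
```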
Recently, Transformer has achieved great success in computer vision. However, it is constrained in 3D object detection applications because the spatial and temporal complexity grows quadratically with the large number of points. Previous point-wise methods suffer from time consumption and limited receptive fields to capture information among points. In this paper, we propose a two-stage hyperbolic cosine transformer (ChTR3D) for 3D object detection from LiDAR point clouds. The proposed ChTR3D refines proposals by applying cosh-attention with linear computational complexity to encode rich contextual relationships among points. The cosh-attention module reduces the space and time complexity of the attention operation: the traditional softmax operation is replaced by a non-negative ReLU activation and a hyperbolic-cosine-based operator with a re-weighting mechanism. Extensive experiments on the widely used KITTI dataset demonstrate that, compared with vanilla attention, cosh-attention significantly improves the inference speed with competitive performance. Experiment results show that, among two-stage state-of-the-art methods using point-level features, the proposed ChTR3D is the fastest one.
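Linear-complexity attention of this flavor can be sketched by replacing softmax with a non-negative feature map and reassociating (QKᵀ)V as Q(KᵀV); the cosh-based re-weighting of ChTR3D is omitted here, so this only illustrates the complexity argument:

```python
import torch

def linear_attention(q, k, v, eps=1e-6):
    """Kernelized attention with a ReLU feature map: cost is linear in the
    number of tokens because K^T V is computed first.
    q: (N, Lq, d); k: (N, Lk, d); v: (N, Lk, dv)."""
    q, k = torch.relu(q), torch.relu(k)          # non-negative kernels
    kv = torch.einsum('nkd,nkv->ndv', k, v)      # (N, d, dv), independent of Lq
    z = 1.0 / (torch.einsum('nqd,nd->nq', q, k.sum(dim=1)) + eps)
    return torch.einsum('nqd,ndv,nq->nqv', q, kv, z)  # normalized output
```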
Existing top-performing 3D object detectors typically rely on a multi-modal fusion strategy. However, this design is fundamentally restricted because it overlooks the modality-specific useful information, which ultimately hampers the model performance. To address this limitation, in this work we introduce a novel modality interaction strategy, in which individual per-modality representations are learned and maintained throughout the process so that their unique characteristics can be exploited during object detection. To realize this proposed strategy, we design a deep interaction architecture characterized by a multi-modal representational interaction encoder and a multi-modal predictive interaction decoder. Experiments on the large-scale nuScenes dataset show that our proposed method frequently surpasses all prior arts. Crucially, our method ranks first on the highly competitive nuScenes object detection leaderboard.