Bird's-Eye-View (BEV) 3D object detection is a crucial multi-view technique for autonomous driving systems. Recently, plenty of works have been proposed, following a similar paradigm consisting of three essential components, i.e., camera feature extraction, BEV feature construction, and task heads. Among the three components, BEV feature construction is BEV-specific compared with 2D tasks. Existing methods aggregate the multi-view camera features into a flattened grid to construct the BEV feature. However, flattening the BEV space along the height dimension fails to emphasize the informative features at different heights: for example, a barrier sits at a low height while a truck extends to a greater height. In this paper, we propose a novel method named BEV Slice Attention Network (BEV-SAN) to exploit the intrinsic characteristics of different heights. Instead of flattening the BEV space, we first sample along the height dimension to build global and local BEV slices. The features of the BEV slices are then aggregated from the camera features and merged by an attention mechanism. Finally, we fuse the merged local and global BEV features with a transformer to generate the final feature map for the task heads. The purpose of the local BEV slices is to emphasize informative heights. To find them, we further propose a LiDAR-guided sampling strategy that leverages the statistical distribution of LiDAR points to determine the heights of the local slices. Compared with uniform sampling, LiDAR-guided sampling identifies more informative heights. We conduct detailed experiments to demonstrate the effectiveness of BEV-SAN. Code will be released.
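As a rough illustration of the LiDAR-guided sampling idea, the sketch below picks the most populated height intervals from a histogram of LiDAR point heights as local slices; the bin count, slice count, height range, and function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def lidar_guided_slices(lidar_points, z_range=(-5.0, 3.0), num_bins=32, num_local_slices=4):
    """Pick informative height intervals from the statistical distribution of LiDAR heights.

    lidar_points: (N, 3+) array of points in the ego/BEV frame (x, y, z, ...).
    Returns the global slice plus the `num_local_slices` most populated local slices
    as (z_min, z_max) tuples.
    """
    z = lidar_points[:, 2]
    z = z[(z >= z_range[0]) & (z <= z_range[1])]
    counts, edges = np.histogram(z, bins=num_bins, range=z_range)

    # The most populated bins indicate heights where objects actually place
    # LiDAR returns, i.e. the informative heights for the local slices.
    top_bins = np.argsort(counts)[::-1][:num_local_slices]
    local_slices = [(edges[i], edges[i + 1]) for i in sorted(top_bins)]
    global_slice = (z_range[0], z_range[1])
    return global_slice, local_slices
```

Per-slice feature aggregation and the attention-based merge would then operate on BEV features pooled within each returned interval.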
Recently, the Bird's-Eye-View (BEV) representation has gained increasing attention in multi-view 3D object detection, which has demonstrated promising applications in autonomous driving. Although multi-view camera systems can be deployed at low cost, the lack of depth information makes current approaches adopt large models for good performance. Therefore, it is essential to improve the efficiency of BEV 3D object detection. Knowledge Distillation (KD) is one of the most practical techniques for training efficient yet accurate models; however, BEV KD is still under-explored to the best of our knowledge. Different from image classification tasks, BEV 3D object detection approaches are more complicated and consist of several components. In this paper, we propose a unified framework named BEV-LGKD to transfer knowledge in a teacher-student manner. However, directly applying the teacher-student paradigm to BEV features fails to achieve satisfying results due to the heavy background information in RGB cameras. To solve this problem, we propose to leverage the localization advantage of LiDAR points. Specifically, we transform the LiDAR points to BEV space and generate foreground masks and view-dependent masks for the teacher-student paradigm. Note that our method only uses LiDAR points to guide the KD between RGB models. As the quality of depth estimation is crucial for BEV perception, we further introduce depth distillation into our framework. Our unified framework is simple yet effective and achieves a significant performance boost. Code will be released.
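The LiDAR-guided masking idea can be pictured with the minimal sketch below: LiDAR points are rasterized into a BEV occupancy mask, and a feature-mimicking loss is restricted to the occupied cells. The mask resolution, the masked-MSE loss form, and all names are assumptions for illustration and are not the released BEV-LGKD code.

```python
import torch

def lidar_foreground_mask(points_xy, bev_hw=(128, 128), xy_range=(-51.2, 51.2)):
    """Rasterize LiDAR point (x, y) locations into a binary BEV occupancy mask."""
    h, w = bev_hw
    lo, hi = xy_range
    ix = ((points_xy[:, 0] - lo) / (hi - lo) * w).long().clamp(0, w - 1)
    iy = ((points_xy[:, 1] - lo) / (hi - lo) * h).long().clamp(0, h - 1)
    mask = torch.zeros(h, w, device=points_xy.device)
    mask[iy, ix] = 1.0
    return mask

def masked_bev_distillation(student_bev, teacher_bev, fg_mask):
    """Feature-mimicking loss restricted to LiDAR-occupied BEV cells.

    student_bev, teacher_bev: (B, C, H, W) BEV feature maps.
    fg_mask: (H, W) binary mask from lidar_foreground_mask.
    """
    mask = fg_mask[None, None]                      # broadcast over batch and channels
    diff = (student_bev - teacher_bev.detach()) ** 2
    num = (diff * mask).sum()
    den = mask.sum() * student_bev.shape[0] * student_bev.shape[1]
    return num / den.clamp(min=1.0)
```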
3D visual perception tasks, including 3D detection and map segmentation based on multi-camera images, are essential for autonomous driving systems. In this work, we present a new framework termed BEVFormer, which learns unified BEV representations with spatiotemporal transformers to support multiple autonomous driving perception tasks. In a nutshell, BEVFormer exploits both spatial and temporal information by interacting with spatial and temporal space through predefined grid-shaped BEV queries. To aggregate spatial information, we design spatial cross-attention, in which each BEV query extracts spatial features from regions of interest across camera views. For temporal information, we propose temporal self-attention to recurrently fuse historical BEV information. Our approach achieves a new state-of-the-art 56.9% in terms of the NDS metric on the nuScenes test set, which is 9.0 points higher than previous best arts and comparable with the performance of LiDAR-based baselines. We further show that BEVFormer remarkably improves the accuracy of velocity estimation and the recall of objects under low-visibility conditions. The code is available at https://github.com/zhiqi-li/bevformer.
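A simplified, non-deformable stand-in for the spatial cross-attention described above is sketched below: each grid-shaped BEV query projects a 3D reference point into every camera view and bilinearly samples features there. The single reference point per query, the tensor shapes, and the assumption that `lidar2img` maps into feature-map pixel coordinates are all illustrative simplifications, not BEVFormer's actual module.

```python
import torch
import torch.nn.functional as F

def spatial_cross_attention(bev_queries, img_feats, ref_points_3d, lidar2img):
    """Simplified spatial cross-attention over multi-camera features.

    bev_queries:   (Nq, C)         grid-shaped BEV queries (Nq = H_bev * W_bev)
    img_feats:     (Ncam, C, H, W) per-camera feature maps
    ref_points_3d: (Nq, 3)         one 3D reference point per BEV query
    lidar2img:     (Ncam, 4, 4)    assumed to project BEV-frame points to
                                   feature-map pixel coordinates of each camera
    """
    n_cam, c, h, w = img_feats.shape
    ones = torch.ones_like(ref_points_3d[:, :1])
    pts_h = torch.cat([ref_points_3d, ones], dim=-1)          # (Nq, 4)

    sampled = []
    for cam in range(n_cam):
        proj = (lidar2img[cam] @ pts_h.T).T                    # (Nq, 4)
        depth = proj[:, 2:3].clamp(min=1e-5)
        uv = proj[:, :2] / depth                               # pixel coordinates
        grid = torch.stack([uv[:, 0] / (w - 1), uv[:, 1] / (h - 1)], dim=-1) * 2 - 1
        feat = F.grid_sample(img_feats[cam:cam + 1],
                             grid.view(1, -1, 1, 2), align_corners=True)  # (1, C, Nq, 1)
        valid = (proj[:, 2] > 0).float().view(1, 1, -1, 1)     # keep points in front of camera
        sampled.append(feat * valid)

    # Average over cameras and add as a residual to the queries.
    agg = torch.stack(sampled).sum(0).squeeze(-1).squeeze(0).T  # (Nq, C)
    return bev_queries + agg / max(n_cam, 1)
```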
Learning powerful representations in bird's-eye view (BEV) for perception tasks is a trend that has attracted extensive attention from both industry and academia. Conventional approaches in most autonomous driving algorithms perform detection, segmentation, tracking, etc., in the frontal or perspective view. As sensor configurations become increasingly complex, integrating multi-source information from different sensors and representing features in a unified view becomes vital. BEV perception inherits several advantages, as representing the surrounding scene in BEV is intuitive and fusion-friendly, and representing objects in BEV is most desirable for subsequent modules such as planning and/or control. The core problems of BEV perception lie in (a) how to reconstruct the lost 3D information via view transformation from the perspective view to BEV; (b) how to acquire ground-truth annotations in the BEV grid; (c) how to formulate the pipeline to incorporate features from different sources and views; and (d) how to adapt and generalize algorithms as sensor configurations vary across different scenarios. In this survey, we review the most recent works on BEV perception and provide an in-depth analysis of different solutions. In addition, several systematic designs of BEV approaches from industry are described. Furthermore, we introduce a complete set of practical guidelines to improve the performance of BEV perception tasks, covering camera, LiDAR, and fusion inputs. Finally, we point out future research directions in this area. We hope this report sheds some light on the community and encourages more research effort on BEV perception. We maintain an active repository to collect the latest work and provide a toolbox of tricks at https://github.com/openperceptionx/bevperception-survey-recipe.
Vision-centric BEV perception has recently received increasing attention from both industry and academia due to its inherent merits, including presenting a natural representation of the world and being fusion-friendly. With the rapid development of deep learning, numerous methods have been proposed to address vision-centric BEV perception. However, there is no recent survey of this novel and growing research field. To stimulate future research, this paper presents a comprehensive survey of vision-centric BEV perception and its extensions. It collects and organizes recent knowledge and gives a systematic review and summary of commonly used algorithms. It also provides in-depth analyses and comparative results on several BEV perception tasks, facilitating comparisons in future works and inspiring future research directions. Moreover, empirical implementation details are discussed and shown to be beneficial to the development of related algorithms.
Fusing camera and LiDAR information has become a de-facto standard for 3D object detection tasks. Current methods rely on point clouds from the LiDAR sensor as queries to leverage features from the image space. However, this underlying assumption makes the current fusion framework unable to produce any prediction when there is a LiDAR malfunction, whether minor or major, which fundamentally limits deployment in realistic autonomous driving scenarios. In contrast, we propose a surprisingly simple yet novel fusion framework, dubbed BEVFusion, whose camera stream does not depend on the LiDAR input, thus addressing the downside of previous methods. We empirically show that our framework surpasses the state-of-the-art methods under the normal training settings. Under robustness training settings that simulate various LiDAR malfunctions, our framework significantly surpasses the state-of-the-art methods by 15.7% to 28.9% mAP. To the best of our knowledge, we are the first to handle realistic LiDAR malfunction, and our method can be deployed to realistic scenarios without any post-processing procedure. The code is available at https://github.com/ADLab-AutoDrive/BEVFusion.
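A minimal sketch of the design point that the camera stream does not depend on LiDAR: two BEV feature maps are produced separately and fused by concatenation and a convolution, with the LiDAR branch replaced by zeros when it is unavailable. The channel sizes and module names below are assumptions, not the released BEVFusion architecture.

```python
import torch
import torch.nn as nn

class SimpleBEVFusion(nn.Module):
    """Illustrative fusion of two independent BEV streams (camera and LiDAR).

    The camera stream never consumes LiDAR input, so when the LiDAR branch is
    unavailable its BEV feature is replaced by zeros and the detector can still
    produce predictions from the camera features alone.
    """
    def __init__(self, cam_channels=80, lidar_channels=256, out_channels=256):
        super().__init__()
        self.lidar_channels = lidar_channels
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + lidar_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev, lidar_bev=None):
        if lidar_bev is None:  # simulate a LiDAR malfunction
            b, _, h, w = cam_bev.shape
            lidar_bev = cam_bev.new_zeros(b, self.lidar_channels, h, w)
        return self.fuse(torch.cat([cam_bev, lidar_bev], dim=1))
```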
In this paper, we propose PETRv2, a unified framework for 3D perception from multi-view images. Based on PETR, PETRv2 explores the effectiveness of temporal modeling, which utilizes the temporal information of previous frames to boost 3D object detection. More specifically, we extend the 3D position embedding (3D PE) in PETR for temporal modeling. The 3D PE achieves temporal alignment of object positions across different frames. A feature-guided position encoder is further introduced to improve the data adaptability of the 3D PE. To support high-quality BEV segmentation, PETRv2 provides a simple yet effective solution by adding a set of segmentation queries. Each segmentation query is responsible for segmenting one specific patch of the BEV map. PETRv2 achieves state-of-the-art performance on 3D object detection and BEV segmentation. A detailed robustness analysis is also conducted on the PETR framework. We hope PETRv2 can serve as a strong baseline for 3D perception. Code is available at https://github.com/megvii-research/petr.
Learning accurate depth is essential for multi-view 3D object detection. Recent approaches mainly learn depth from monocular images, which face inherent difficulties due to the ill-posed nature of monocular depth learning. In this work, instead of relying on a sole monocular depth method, we propose a novel Surround-view Temporal Stereo (STS) technique that leverages the geometric correspondence between frames across time to facilitate accurate depth learning. Specifically, we regard the fields of view of all cameras around the ego vehicle as a unified view, namely the surround view, and conduct temporal stereo matching on it. The geometric correspondence between different frames from STS is utilized and combined with monocular depth to yield the final depth prediction. Comprehensive experiments on nuScenes show that STS greatly enhances 3D detection ability, notably for medium- and long-distance objects. On BEVDepth with a ResNet-50 backbone, STS improves mAP and NDS by 2.6% and 1.4%, respectively. Consistent improvements are observed when using a larger backbone and larger image resolution, demonstrating its effectiveness.
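The combination of temporal-stereo and monocular depth cues might be sketched as below, where two per-pixel distributions over discrete depth bins are merged before a softmax; the exact fusion rule and the optional stereo-confidence weighting are assumptions rather than the paper's formulation.

```python
import torch

def fuse_depth_distributions(mono_logits, stereo_logits, stereo_conf=None):
    """Combine monocular and surround temporal-stereo depth cues per pixel.

    mono_logits, stereo_logits: (B, D, H, W) logits over D discrete depth bins.
    stereo_conf: optional (B, 1, H, W) confidence in [0, 1]; where stereo matching
    is unreliable (e.g. textureless or occluded regions) the fusion falls back
    toward the monocular estimate.
    """
    if stereo_conf is None:
        fused = mono_logits + stereo_logits
    else:
        fused = mono_logits + stereo_conf * stereo_logits
    return torch.softmax(fused, dim=1)   # per-pixel categorical depth distribution
```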
Bird's-eye-view (BEV) semantic segmentation plays a crucial role in spatial sensing for autonomous driving. Although recent literature has made significant progress on BEV map understanding, it is all based on single-agent, camera-based systems, which struggle to handle occlusions and detect distant objects in complex traffic scenes. Vehicle-to-vehicle (V2V) communication technologies enable autonomous vehicles to share sensing information, which can significantly improve perception performance and range compared to single-agent systems. In this paper, we propose CoBEVT, the first generic multi-agent multi-camera perception framework that can cooperatively generate BEV map predictions. To efficiently fuse camera features from multi-view and multi-agent data in an underlying transformer architecture, we design a fused axial attention (FAX) module, which captures local and global spatial interactions across views and agents. Extensive experiments on the V2V perception dataset OPV2V demonstrate that CoBEVT achieves state-of-the-art performance for cooperative BEV semantic segmentation. Moreover, CoBEVT is shown to generalize to other tasks, including 1) BEV segmentation with single-agent multi-camera setups and 2) 3D object detection with multi-agent LiDAR systems, achieving state-of-the-art performance with real-time inference speed.
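The row/column factorization behind fused axial attention can be sketched as follows for a single BEV map; the real FAX module also mixes local windows and the agent/view axes, so this is only an illustrative reduction with assumed dimensions.

```python
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    """Simplified axial self-attention over a BEV feature map: attend along
    each row, then along each column, so global interactions are captured
    with far less cost than full 2D attention."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, bev):                       # bev: (B, C, H, W)
        b, c, h, w = bev.shape
        x = bev.permute(0, 2, 3, 1)               # (B, H, W, C)

        rows = x.reshape(b * h, w, c)             # attend within each row
        rows, _ = self.row_attn(rows, rows, rows)
        x = x + rows.reshape(b, h, w, c)

        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)   # attend within each column
        cols, _ = self.col_attn(cols, cols, cols)
        x = x + cols.reshape(b, w, h, c).permute(0, 2, 1, 3)

        return x.permute(0, 3, 1, 2)              # back to (B, C, H, W)
```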
A self-driving perception model aims to collectively extract 3D semantic representations from multiple cameras into the bird's-eye-view (BEV) coordinate frame of the ego car in order to ground downstream planners. Existing perception methods often rely on error-prone depth estimation of the whole scene, or learn sparse virtual 3D representations without the target geometric structure, both of which remain limited in performance and/or capability. In this paper, we present a novel end-to-end architecture for ego 3D representation learning from an arbitrary number of unconstrained camera views. Inspired by the ray tracing principle, we design a polarized grid of "imaginary eyes" as the learnable ego 3D representation and formulate the learning process with an adaptive attention mechanism in conjunction with the 3D-to-2D projection. Critically, this formulation allows rich 3D representations to be extracted from 2D images without any depth supervision, with a built-in geometric structure consistent w.r.t. BEV. Despite its simplicity and versatility, extensive experiments on standard BEV visual tasks (e.g., camera-based 3D object detection and BEV segmentation) show that our model outperforms all state-of-the-art alternatives, with further benefits from multi-task learning.
In this paper, we develop a position embedding transformation (PETR) for multi-view 3D object detection. PETR encodes the position information of 3D coordinates into image features, producing 3D position-aware features. Object queries can perceive the 3D position-aware features and perform end-to-end object detection. PETR achieves state-of-the-art performance (50.4% NDS and 44.1% mAP) on the standard nuScenes dataset and ranks first on the benchmark. It can serve as a simple yet strong baseline for future research. Code is available at https://github.com/megvii-research/petr.
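A sketch of what "3D position-aware features" might look like in code: every pixel is paired with a set of 3D frustum points along its viewing ray, and an MLP maps those coordinates to an embedding added to the image features. The depth range, normalization, and the assumed `img2ego` convention are illustrative choices, not PETR's released implementation.

```python
import torch
import torch.nn as nn

class Simple3DPositionEmbedding(nn.Module):
    """Sketch of a PETR-style 3D position embedding."""
    def __init__(self, feat_channels=256, num_depth=16):
        super().__init__()
        self.num_depth = num_depth
        self.mlp = nn.Sequential(
            nn.Linear(num_depth * 3, feat_channels),
            nn.ReLU(inplace=True),
            nn.Linear(feat_channels, feat_channels),
        )

    def forward(self, img_feat, img2ego):
        # img_feat: (B, C, H, W); img2ego: (B, 4, 4), assumed to map frustum
        # coordinates (u*d, v*d, d, 1) at feature resolution into the ego frame.
        b, c, h, w = img_feat.shape
        dev = img_feat.device
        d = torch.linspace(1.0, 60.0, self.num_depth, device=dev)
        v, u = torch.meshgrid(torch.arange(h, device=dev, dtype=torch.float32),
                              torch.arange(w, device=dev, dtype=torch.float32),
                              indexing="ij")
        frustum = torch.stack([u[..., None] * d,
                               v[..., None] * d,
                               d.expand(h, w, -1),
                               torch.ones(h, w, self.num_depth, device=dev)], dim=-1)
        pts = torch.einsum("bij,hwdj->bhwdi", img2ego, frustum)[..., :3]   # (B, H, W, D, 3)
        pts = pts / 60.0                                                    # crude normalization
        pe = self.mlp(pts.reshape(b, h, w, -1))                             # (B, H, W, C)
        return img_feat + pe.permute(0, 3, 1, 2)
```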
LiDAR and camera are two essential sensors for 3D object detection in autonomous driving. LiDAR provides accurate and reliable 3D geometry information while the camera provides rich texture with color. Despite the increasing popularity of fusing these two complementary sensors, the challenge remains in how to effectively fuse 3D LiDAR point cloud with 2D camera images. Recent methods focus on point-level fusion which paints the LiDAR point cloud with camera features in the perspective view or bird's-eye view (BEV)-level fusion which unifies multi-modality features in the BEV representation. In this paper, we rethink these previous fusion strategies and analyze their information loss and influences on geometric and semantic features. We present SemanticBEVFusion to deeply fuse camera features with LiDAR features in a unified BEV representation while maintaining per-modality strengths for 3D object detection. Our method achieves state-of-the-art performance on the large-scale nuScenes dataset, especially for challenging distant objects. The code will be made publicly available.
Autonomous driving perceives its surroundings for decision making, which is one of the most complex scenarios in visual perception. The success of paradigm innovation in solving the 2D object detection task inspires us to seek an elegant, feasible, and scalable paradigm for fundamentally pushing the performance boundary in this area. To this end, we contribute the BEVDet paradigm in this paper. BEVDet performs 3D object detection in bird's-eye view (BEV), where most target values are defined and route planning can be handily performed. We merely reuse existing modules to build its framework but substantially develop its performance by constructing an exclusive data augmentation strategy and upgrading the non-maximum suppression strategy. In experiments, BEVDet offers an excellent trade-off between accuracy and time efficiency. As a fast version, BEVDet-Tiny scores 31.2% mAP and 39.2% NDS on the nuScenes val set. It is comparable with FCOS3D, but requires only 11% of its computational budget of 215.3 GFLOPs and runs 9.2 times faster at 15.6 FPS. Another high-precision version dubbed BEVDet-Base scores 39.3% mAP and 47.2% NDS, significantly exceeding all published results. With a comparable inference speed, it surpasses FCOS3D by large margins of +9.8% mAP and +10.0% NDS. The source code is publicly available at https://github.com/huangjunjie2017/bevdet.
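One ingredient mentioned above, the BEV-specific data augmentation, can be illustrated by jointly flipping the BEV feature map and the ground-truth boxes; the box parameterization and the axis-to-tensor-dimension mapping below are assumptions, and this is not BEVDet's exact augmentation strategy.

```python
import torch

def bev_flip_augment(bev_feat, gt_boxes):
    """Illustrative BEV-space augmentation: mirror the BEV feature map and the
    ground-truth boxes consistently about the x-axis (y -> -y).

    bev_feat: (B, C, H, W), where dim -2 is assumed to correspond to y.
    gt_boxes: (N, 7) tensor assumed to hold (x, y, z, w, l, h, yaw).
    """
    bev_feat = torch.flip(bev_feat, dims=[-2])   # flip along the assumed y axis
    gt_boxes = gt_boxes.clone()
    gt_boxes[:, 1] = -gt_boxes[:, 1]             # mirror centers across the x-axis
    gt_boxes[:, 6] = -gt_boxes[:, 6]             # mirrored heading
    return bev_feat, gt_boxes
```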
3D object detection with surround-view images is an essential task for autonomous driving. In this work, we propose DETR4D, a Transformer-based framework that explores sparse attention and direct feature query for 3D object detection in multi-view images. We design a novel projective cross-attention mechanism for query-image interaction to address the limitations of existing methods in terms of geometric cue exploitation and information loss for cross-view objects. In addition, we introduce a heatmap generation technique that bridges 3D and 2D spaces efficiently via query initialization. Furthermore, unlike the common practice of fusing intermediate spatial features for temporal aggregation, we provide a new perspective by introducing a novel hybrid approach that performs cross-frame fusion over past object queries and image features, enabling efficient and robust modeling of temporal information. Extensive experiments on the nuScenes dataset demonstrate the effectiveness and efficiency of the proposed DETR4D.
With the prevalence of LiDAR sensors in autonomous driving, 3D object tracking has received increasing attention. In a point cloud sequence, 3D object tracking aims to predict the location and orientation of an object in consecutive frames given an object template. Motivated by the success of transformers, we propose Point Tracking TRansformer (PTTR), which efficiently predicts high-quality 3D tracking results in a coarse-to-fine manner with the help of transformer operations. PTTR consists of three novel designs. 1) Instead of random sampling, we design Relation-Aware Sampling to preserve points relevant to the given template during subsampling. 2) We propose a Point Relation Transformer for effective feature aggregation and feature matching between the template and the search region. 3) Based on the coarse tracking results, we employ a novel Prediction Refinement Module to obtain the final refined prediction via local feature pooling. In addition, motivated by the favorable properties of the bird's-eye view (BEV) of point clouds in capturing object motion, we further design a more advanced framework named PTTR++, which incorporates both the point-wise view and the BEV representation to exploit their complementary effect in generating high-quality tracking results. PTTR++ substantially boosts the tracking performance on top of PTTR with low computational overhead. Extensive experiments on multiple datasets show that our proposed approaches achieve superior 3D tracking accuracy and efficiency.
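Relation-aware sampling might look like the sketch below: instead of random subsampling, search-region points are ranked by feature similarity to the template and the top-scoring ones are kept. The cosine-similarity scoring and top-k selection are illustrative assumptions, not the paper's exact criterion.

```python
import torch
import torch.nn.functional as F

def relation_aware_sampling(search_feats, template_feats, num_keep):
    """Keep the search-region points whose features best match the template.

    search_feats:   (Ns, C) per-point features of the search region
    template_feats: (Nt, C) per-point features of the template
    Returns indices of the `num_keep` retained search points.
    """
    s = F.normalize(search_feats, dim=-1)
    t = F.normalize(template_feats, dim=-1)
    sim = s @ t.T                                  # (Ns, Nt) cosine similarities
    score = sim.max(dim=1).values                  # best match to any template point
    return torch.topk(score, k=min(num_keep, score.numel())).indices
```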
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
To achieve accurate and low-cost 3D object detection, existing methods propose to benefit camera-based multi-view detectors with spatial cues provided by the LiDAR modality, e.g., dense depth supervision and bird's-eye-view (BEV) feature distillation. However, they directly conduct point-to-point mimicking from LiDAR to camera, which neglects the inner geometry of foreground targets and suffers from the modality gap between 2D and 3D features. In this paper, we propose TiG-BEV, a learning scheme that distills Target Inner-Geometry from the LiDAR modality into camera-based BEV detectors for both dense depth and BEV features. First, we introduce an inner-depth supervision module to learn the low-level relative depth relations between different foreground pixels. This enables the camera-based detector to better understand the object-wise spatial structures. Second, we design an inner-feature BEV distillation module to imitate the high-level semantics of different keypoints within foreground targets. To further alleviate the BEV feature gap between the two modalities, we adopt both inter-channel and inter-keypoint distillation for feature-similarity modeling. With our target inner-geometry distillation, TiG-BEV effectively boosts BEVDepth by +2.3% NDS and +2.4% mAP, along with BEVDet by +9.1% NDS and +10.3% mAP on the nuScenes val set. Code will be available at https://github.com/ADLab3Ds/TiG-BEV.
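The inner-depth supervision idea can be sketched as a relative-depth loss inside each foreground region, as below; the choice of reference (the mean foreground depth) and the L1 penalty are assumptions rather than the paper's exact formulation.

```python
import torch

def inner_depth_loss(pred_depth, gt_depth, fg_mask):
    """Relative-depth supervision inside a foreground region: penalize errors
    in depth *differences* w.r.t. a reference value instead of absolute depth,
    emphasizing the object-wise spatial structure.

    pred_depth, gt_depth: (H, W) depth maps for one image.
    fg_mask: (H, W) boolean mask of foreground pixels for one object.
    """
    pred_fg, gt_fg = pred_depth[fg_mask], gt_depth[fg_mask]
    if pred_fg.numel() == 0:
        return pred_depth.new_tensor(0.0)
    pred_rel = pred_fg - pred_fg.mean()            # depth relative to the object reference
    gt_rel = gt_fg - gt_fg.mean()
    return torch.abs(pred_rel - gt_rel).mean()
```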
Compared to LiDAR, camera and radar sensors have significant advantages in cost, reliability, and maintenance. Existing fusion methods typically fuse the outputs of the individual modalities at the result level, known as the late fusion strategy. This benefits from using off-the-shelf single-sensor detection algorithms, but late fusion cannot fully exploit the complementary properties of the sensors and thus has limited performance despite the large potential of camera-radar fusion. Here we propose a novel proposal-level early fusion approach that effectively exploits both the spatial and contextual properties of camera and radar for 3D object detection. Our fusion framework first associates image proposals with radar points in the polar coordinate system to efficiently handle the discrepancies between their coordinate systems and spatial properties. Taking this as a first stage, consecutive cross-attention-based feature fusion layers then adaptively exchange spatio-contextual information between camera and radar, leading to robust and attentive fusion. Our camera-radar fusion approach achieves a state-of-the-art 41.1% mAP and 52.3% NDS on the nuScenes test set, which is 8.7 and 10.8 points higher than the camera-only baseline, as well as yielding competitive performance against LiDAR-based methods.
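The first-stage association in polar coordinates might be sketched as a simple azimuth/range gating between image proposals and radar points, as below; the gate sizes and the (azimuth, range) proposal encoding are illustrative assumptions, not the paper's association rule.

```python
import torch

def associate_proposals_with_radar(proposal_polar, radar_xy,
                                   azimuth_gate=0.1, range_gate=2.0):
    """Assign radar points to image proposals via an azimuth/range gate.

    proposal_polar: (P, 2) estimated (azimuth [rad], range [m]) per image proposal
    radar_xy:       (R, 2) radar points (x, y) in the ego frame
    Returns a (P, R) boolean association matrix.
    (Angle wrap-around at +/- pi is ignored for brevity.)
    """
    radar_az = torch.atan2(radar_xy[:, 1], radar_xy[:, 0])   # (R,)
    radar_rng = torch.linalg.norm(radar_xy, dim=1)           # (R,)
    d_az = (proposal_polar[:, 0:1] - radar_az[None]).abs()
    d_rng = (proposal_polar[:, 1:2] - radar_rng[None]).abs()
    return (d_az < azimuth_gate) & (d_rng < range_gate)
```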
Leveraging multi-modal fusion, especially between camera and LiDAR, has become essential for building accurate and robust 3D object detection systems for autonomous vehicles. Until recently, point-decorating approaches, in which the LiDAR point cloud is augmented with camera features, have been the dominant approach in the field. However, these approaches fail to utilize the higher-resolution images from cameras. Recent works projecting camera features into a bird's-eye-view (BEV) fusion space have also been proposed, but they require projecting millions of pixels, most of which contain only background information. In this work, we propose a novel approach, Center Feature Fusion (CFF), in which we leverage center-based detection networks in both the camera and LiDAR streams to identify relevant object locations. We then use the center-based detections to identify the locations of the pixel features relevant to object locations, which are a small fraction of the total number in the image. These are then projected into and fused in the BEV frame. On the nuScenes dataset, we outperform the LiDAR-only baseline by 4.9% mAP while fusing up to 100x fewer features than other fusion methods.
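The core selection step can be sketched as keeping only the camera features at the top-k peaks of a center heatmap, as below; the subsequent depth-based projection into the BEV frame is omitted, and the tensor layout and function names are assumptions.

```python
import torch

def select_center_features(heatmap, img_feats, top_k=100):
    """Select camera features only at predicted object centers.

    heatmap:   (1, H, W)  predicted object-center heatmap for one camera
    img_feats: (C, H, W)  camera feature map aligned with the heatmap
    Returns (k, C) selected features and their (row, col) pixel locations,
    where k is a small fraction of the H*W pixels in the image.
    """
    c, h, w = img_feats.shape
    k = min(top_k, h * w)
    scores, idx = torch.topk(heatmap.reshape(-1), k=k)
    rows, cols = idx // w, idx % w
    feats = img_feats[:, rows, cols].T            # (k, C)
    return feats, torch.stack([rows, cols], dim=1)
```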
In this technical report, we present our solution, dubbed MV-FCOS3D++, for the Camera-Only 3D Detection track of the Waymo Open Dataset Challenge 2022. Multi-view camera-only 3D detection with bird's-eye-view or 3D geometric representations can leverage stereo cues from the overlapping regions between adjacent views and directly perform 3D detection without hand-crafted post-processing. However, it lacks direct semantic supervision for the 2D backbone, which can be complemented by pretraining a simple monocular detector. Our solution is a multi-view framework for 4D detection following this paradigm. It is built upon the simple monocular detector FCOS3D++, pretrained only with object annotations from Waymo, and converts multi-view features into a 3D grid space to detect 3D objects on it. A dual-path neck for single-frame understanding and temporal stereo matching is designed to incorporate multi-frame information. Our method finally achieves 49.75% mAPL with a single model and wins 2nd place in the WOD challenge, without any LiDAR-based depth supervision during training. The code will be released at https://github.com/tai-wang/depth-from-motion.