We propose DeepFusion, a modular multi-modal architecture that fuses lidar, camera, and radar in different combinations for 3D object detection. Specialized feature extractors exploit each modality and can be exchanged easily, making the approach simple and flexible. The extracted features are transformed into a bird's-eye view as a common representation for fusion. Spatial and semantic alignment is performed before the modalities are fused in feature space. Finally, a detection head exploits the rich multi-modal features to improve 3D detection performance. Experimental results for lidar-camera, lidar-camera-radar, and camera-radar fusion show the flexibility and effectiveness of our fusion approach. In the process, we study the largely unexplored task of faraway car detection up to 225 meters, showing the benefit of lidar-camera fusion. Furthermore, we investigate the lidar point density required for 3D object detection and illustrate the implications with an example on robustness to adverse weather conditions. Moreover, ablation studies on our camera-radar fusion highlight the importance of accurate depth estimation.
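As an illustration of the modular fusion pattern described above, here is a minimal PyTorch sketch in which each modality gets its own exchangeable extractor producing BEV features that are concatenated before a toy detection head; module names, channel counts, and the alignment step are assumptions, not the authors' implementation.

```python
# Minimal sketch of modular BEV-level sensor fusion (not the authors' code).
# Each modality has its own exchangeable extractor that outputs a BEV feature
# map; the fusion step concatenates the aligned maps before a detection head.
import torch
import torch.nn as nn

class BEVExtractor(nn.Module):
    """Stand-in per-modality extractor producing fixed-channel BEV features."""
    def __init__(self, in_ch, out_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class ModularBEVFusion(nn.Module):
    def __init__(self, modality_channels, bev_ch=64, num_classes=3):
        super().__init__()
        # One extractor per modality; extractors can be swapped independently.
        self.extractors = nn.ModuleDict(
            {name: BEVExtractor(c, bev_ch) for name, c in modality_channels.items()}
        )
        self.align = nn.Conv2d(bev_ch * len(modality_channels), bev_ch, 1)
        self.head = nn.Conv2d(bev_ch, num_classes, 1)  # toy detection head

    def forward(self, inputs):
        # inputs: dict of modality name -> BEV-rasterized tensor (B, C, H, W)
        feats = [self.extractors[k](v) for k, v in inputs.items()]
        fused = self.align(torch.cat(feats, dim=1))
        return self.head(fused)

model = ModularBEVFusion({"lidar": 10, "camera": 32, "radar": 4})
out = model({
    "lidar": torch.randn(1, 10, 128, 128),
    "camera": torch.randn(1, 32, 128, 128),
    "radar": torch.randn(1, 4, 128, 128),
})
print(out.shape)  # torch.Size([1, 3, 128, 128])
```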
In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encodes both discrete-state image features as well as continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI as well as a large scale 3D object detection benchmark shows significant improvements over the state of the art.
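The following sketch illustrates the idea of a continuous fusion layer under simplifying assumptions (brute-force k-nearest-neighbour search, image features already sampled per lidar point); it is not the paper's implementation, only the pattern of combining image features with continuous geometric offsets at BEV locations.

```python
# Illustrative continuous-fusion-style layer (a sketch, not the paper's code):
# each BEV cell gathers its k nearest lidar points, concatenates the image
# feature attached to each point with the 3D offset, and an MLP aggregates them.
import torch
import torch.nn as nn

class ContinuousFusionLayer(nn.Module):
    def __init__(self, img_feat_dim, out_dim, k=3):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(img_feat_dim + 3, out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, bev_xyz, lidar_xyz, lidar_img_feats):
        # bev_xyz: (M, 3) BEV cell centers; lidar_xyz: (N, 3) points;
        # lidar_img_feats: (N, F) image features already sampled at each point.
        dists = torch.cdist(bev_xyz, lidar_xyz)          # (M, N)
        knn = dists.topk(self.k, largest=False).indices  # (M, k)
        neigh_xyz = lidar_xyz[knn]                       # (M, k, 3)
        neigh_feat = lidar_img_feats[knn]                # (M, k, F)
        offsets = neigh_xyz - bev_xyz[:, None, :]        # continuous geometry
        fused = self.mlp(torch.cat([neigh_feat, offsets], dim=-1))
        return fused.max(dim=1).values                   # (M, out_dim)

layer = ContinuousFusionLayer(img_feat_dim=16, out_dim=32)
bev = torch.randn(200, 3)
pts = torch.randn(1000, 3)
feats = torch.randn(1000, 16)
print(layer(bev, pts, feats).shape)  # torch.Size([200, 32])
```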
In this paper we propose to exploit multiple related tasks for accurate multi-sensor 3D object detection. Towards this goal we present an end-to-end learnable architecture that reasons about 2D and 3D object detection as well as ground estimation and depth completion. Our experiments show that all these tasks are complementary and help the network learn better representations by fusing information at various levels. Importantly, our approach leads the KITTI benchmark on 2D, 3D and bird's eye view object detection, while being real-time.
Over the past few years, perception systems for autonomous driving have made significant progress in their performance. However, these systems struggle to show robustness in extreme weather conditions, because the primary sensors in the sensor suite, such as lidars and cameras, degrade under these conditions. To address this problem, camera-radar fusion systems provide a unique opportunity for all-weather, reliable, high-quality perception. Cameras provide rich semantic information, while radar can see through occlusions and works in all weather conditions. In this work, we show that state-of-the-art fusion methods perform poorly when the camera input is degraded, which effectively results in losing the all-weather reliability they set out to provide. In contrast to these approaches, we propose a new method, RadSegNet, that uses a new design philosophy of independent information extraction and truly achieves reliability in all conditions, including occlusions and adverse weather. We develop and validate our system on the benchmark Astyx dataset and further validate these results on the RADIATE dataset. Compared with state-of-the-art methods, RadSegNet achieves a 27% improvement on Astyx and a 41.46% improvement on RADIATE in average precision score, and performs significantly better in adverse weather conditions.
Environment perception with multi-modal fusion of radar and camera is crucial for autonomous driving to improve accuracy, completeness, and robustness. This paper focuses on how to exploit millimeter-wave (MMW) radar and camera sensor fusion for 3D object detection. A novel method is proposed that realizes feature-level fusion under the bird's-eye view (BEV) with a better feature representation. First, the radar features are enhanced by temporal accumulation and sent to a temporal-spatial encoder for radar feature extraction. Meanwhile, multi-scale image 2D features adapted to various spatial scales are obtained with an image backbone and neck model. Then, the image features are converted to BEV with the designed view transformer. In addition, this work fuses the multi-modal features with a two-stage fusion model, called point fusion and ROI fusion. Finally, a detection head regresses the object class and 3D location. Experimental results show that the proposed method achieves state-of-the-art performance under the most important detection metrics, mean Average Precision (mAP) and the nuScenes Detection Score (NDS).
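A minimal sketch of the temporal accumulation step mentioned above, assuming a hypothetical data layout (per-sweep point arrays, 4x4 ego-pose transforms, per-sweep timestamps); the paper's actual radar representation and encoder are not reproduced.

```python
# Sketch of radar temporal accumulation (assumed data layout, not the paper's
# code): past sweeps are warped into the current ego frame with 4x4 ego poses
# and stacked with a relative-time channel so the encoder can exploit motion.
import numpy as np

def accumulate_radar_sweeps(sweeps, poses_to_current, timestamps, t_now):
    """sweeps: list of (N_i, 3+C) arrays [x, y, z, extra radar channels...];
    poses_to_current: list of 4x4 transforms from each sweep frame to the
    current frame; timestamps: per-sweep time in seconds."""
    accumulated = []
    for pts, T, t in zip(sweeps, poses_to_current, timestamps):
        xyz_h = np.concatenate([pts[:, :3], np.ones((len(pts), 1))], axis=1)
        xyz_cur = (xyz_h @ T.T)[:, :3]                  # warp into current frame
        dt = np.full((len(pts), 1), t_now - t)          # relative time channel
        accumulated.append(np.concatenate([xyz_cur, pts[:, 3:], dt], axis=1))
    return np.concatenate(accumulated, axis=0)

# Toy usage: two sweeps, identity poses.
s0 = np.random.randn(5, 5)
s1 = np.random.randn(4, 5)
out = accumulate_radar_sweeps([s0, s1], [np.eye(4), np.eye(4)], [0.0, 0.1], 0.1)
print(out.shape)  # (9, 6)
```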
Leveraging multi-modal fusion, especially between camera and lidar, has become essential for building accurate and robust 3D object detection systems for autonomous vehicles. Until recently, point-decoration approaches, in which the point cloud is augmented with camera features, have been the dominant approach in the field. However, these methods cannot exploit the higher-resolution images from the camera. Recent works that project camera features into a bird's-eye-view (BEV) fusion space have also been proposed, but they require projecting millions of pixels, most of which contain only background information. In this work, we propose a novel approach, Center Feature Fusion (CFF), in which we leverage center-based detection networks in both the camera and lidar streams to identify relevant object locations. We then use the center-based detections to identify the locations of the pixel features relevant to object locations, which are a small fraction of the total number in the image. These are then projected and fused in the BEV frame. On the nuScenes dataset, we outperform the lidar-only baseline by 4.9% mAP while fusing 100x fewer features than other fusion methods.
Multi-sensor fusion is essential for an accurate and reliable autonomous driving system. Recent approaches are based on point-level fusion: augmenting the lidar point cloud with camera features. However, the camera-to-lidar projection throws away the semantic density of camera features, hindering the effectiveness of such methods, especially for semantic-oriented tasks such as 3D scene segmentation. In this paper, we break this deeply rooted convention with BEVFusion, an efficient and generic multi-task multi-sensor fusion framework. It unifies multi-modal features in a shared bird's-eye-view (BEV) representation space, which nicely preserves both geometric and semantic information. To achieve this, we diagnose and lift the key efficiency bottleneck in view transformation with optimized BEV pooling, reducing latency by more than 40x. BEVFusion is fundamentally task-agnostic and seamlessly supports different 3D perception tasks with almost no architectural changes. It establishes the new state of the art on nuScenes, achieving 1.3% higher mAP and NDS on 3D object detection and 13.6% higher mIoU on BEV map segmentation, with 1.9x lower computational cost. Code to reproduce our results is available at https://github.com/mit-han-lab/bevfusion.
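The central operation behind the shared BEV representation is pooling camera frustum features into BEV cells; below is a deliberately naive sketch of that scatter-add (assumed shapes and indices), whereas the paper's contribution is an optimized kernel that makes this step dramatically faster.

```python
# Naive BEV pooling sketch (assumed shapes; the paper's optimized kernel is
# much faster): frustum features whose 3D locations fall into the same BEV
# cell are summed into that cell via a single scatter-add.
import torch

def bev_pool_naive(feats, bev_xy_idx, grid_hw, C):
    """feats: (N, C) frustum point features; bev_xy_idx: (N, 2) integer
    (row, col) BEV cell of each point; grid_hw: (H, W)."""
    H, W = grid_hw
    flat_idx = bev_xy_idx[:, 0] * W + bev_xy_idx[:, 1]          # (N,)
    bev = torch.zeros(H * W, C, dtype=feats.dtype)
    bev.index_add_(0, flat_idx, feats)                          # sum per cell
    return bev.view(H, W, C).permute(2, 0, 1)                   # (C, H, W)

N, C, H, W = 10000, 80, 128, 128
feats = torch.randn(N, C)
idx = torch.stack([torch.randint(0, H, (N,)), torch.randint(0, W, (N,))], dim=1)
print(bev_pool_naive(feats, idx, (H, W), C).shape)  # torch.Size([80, 128, 128])
```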
LiDAR and camera are two essential sensors for 3D object detection in autonomous driving. LiDAR provides accurate and reliable 3D geometry information while the camera provides rich texture with color. Despite the increasing popularity of fusing these two complementary sensors, the challenge remains in how to effectively fuse 3D LiDAR point cloud with 2D camera images. Recent methods focus on point-level fusion which paints the LiDAR point cloud with camera features in the perspective view or bird's-eye view (BEV)-level fusion which unifies multi-modality features in the BEV representation. In this paper, we rethink these previous fusion strategies and analyze their information loss and influences on geometric and semantic features. We present SemanticBEVFusion to deeply fuse camera features with LiDAR features in a unified BEV representation while maintaining per-modality strengths for 3D object detection. Our method achieves state-of-the-art performance on the large-scale nuScenes dataset, especially for challenging distant objects. The code will be made publicly available.
Aiming at highly accurate object detection for connected and automated vehicles (CAVs), this paper presents a Deep Neural Network based 3D object detection model that leverages a three-stage feature extractor by developing a novel LIDAR-Camera fusion scheme. The proposed feature extractor extracts high-level features from two input sensory modalities and recovers the important features discarded during the convolutional process. The novel fusion scheme effectively fuses features across sensory modalities and convolutional layers to find the best representative global features. The fused features are shared by a two-stage network: the region proposal network (RPN) and the detection head (DH). The RPN generates high-recall proposals, and the DH produces final detection results. The experimental results show the proposed model outperforms more recent research on the KITTI 2D and 3D detection benchmark, particularly for distant and highly occluded instances.
In addition to standard cameras, autonomous vehicles typically include several other sensors, such as lidars and radars, which help acquire richer information for perceiving the content of the driving scene. While several recent works focus on fusing certain pairs of sensors, such as camera and lidar or camera and radar, by using architectural components specific to the examined setting, a generic and modular sensor fusion architecture is missing from the literature. In this work, we focus on 2D object detection, a fundamental high-level task defined on the 2D image domain, and propose HRFuser, a multi-resolution sensor fusion architecture that scales directly to an arbitrary number of input modalities. The design of HRFuser is based on state-of-the-art high-resolution networks for image-only dense prediction and incorporates a novel multi-window cross-attention block as the means for fusing multiple modalities at multiple resolutions. Even though cameras alone provide very informative features for 2D detection, we demonstrate through extensive experiments on nuScenes and the adverse-conditions Seeing Through Fog dataset that our model effectively leverages complementary features from the additional modalities, substantially improving upon camera-only performance and consistently outperforming state-of-the-art fusion methods for 2D detection under both normal and adverse conditions. The source code will be publicly available.
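A sketch of the cross-attention fusion step, with camera tokens as queries and another modality's tokens as keys and values; HRFuser's multi-window partitioning and multi-resolution wiring are omitted, and all dimensions are placeholders.

```python
# Sketch of cross-attention fusion between camera tokens (queries) and tokens
# from another modality (keys/values). HRFuser's multi-window partitioning and
# multi-resolution wiring are omitted; this only illustrates the fusion step.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cam_tokens, other_tokens):
        # cam_tokens: (B, Nc, D); other_tokens: (B, No, D)
        fused, _ = self.attn(query=cam_tokens, key=other_tokens,
                             value=other_tokens)
        return self.norm(cam_tokens + fused)   # residual keeps camera features

block = CrossAttentionFusion()
cam = torch.randn(2, 196, 64)      # e.g. 14x14 camera feature tokens
radar = torch.randn(2, 50, 64)     # sparse radar tokens
print(block(cam, radar).shape)     # torch.Size([2, 196, 64])
```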
Learning powerful representations in bird's-eye view (BEV) for perception tasks is trending and has drawn extensive attention from both industry and academia. Conventional approaches for most autonomous driving algorithms perform detection, segmentation, tracking, etc. in a front or perspective view. As sensor configurations become increasingly complex, integrating multi-source information from different sensors and representing features in a unified view become critical. BEV perception inherits several advantages, as representing surrounding scenes in BEV is intuitive and fusion-friendly, and representing objects in BEV is most desirable for subsequent modules such as planning and/or control. The core problems of BEV perception lie in (a) how to reconstruct the lost 3D information via view transformation from perspective view to BEV; (b) how to acquire ground-truth annotations in the BEV grid; (c) how to formulate the pipeline to incorporate features from different sources and views; and (d) how to adapt and generalize algorithms as sensor configurations vary across different scenarios. In this survey, we review the most recent work on BEV perception and provide an in-depth analysis of different solutions. In addition, several system designs of BEV approaches from industry are described. Furthermore, we introduce a full suite of practical guidebooks to improve the performance of BEV perception tasks, covering camera, lidar, and fusion inputs. Finally, we point out future research directions in this field. We hope this report can shed light for the community and encourage more research on BEV perception. We keep an active repository to collect the latest work and provide a toolbox of tricks at https://github.com/openperceptionx/bevperception-survey-recipe.
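As a concrete instance of the view-transformation problem (a), the snippet below rasterizes a lidar point cloud into a two-channel BEV grid (occupancy and maximum height); ranges and resolution are arbitrary assumptions, and camera-to-BEV lifting, which is considerably harder, is not shown.

```python
# Minimal example of the simplest view transformation: rasterizing a lidar
# point cloud into a BEV grid with occupancy and max-height channels
# (ranges and resolution are arbitrary assumptions).
import numpy as np

def lidar_to_bev(points, x_range=(0, 51.2), y_range=(-25.6, 25.6), res=0.2):
    """points: (N, 3) [x, y, z] in the ego frame -> (2, H, W) BEV tensor."""
    H = int((y_range[1] - y_range[0]) / res)
    W = int((x_range[1] - x_range[0]) / res)
    bev = np.zeros((2, H, W), dtype=np.float32)   # [occupancy, max height]
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]
    cols = ((pts[:, 0] - x_range[0]) / res).astype(int)
    rows = ((pts[:, 1] - y_range[0]) / res).astype(int)
    bev[0, rows, cols] = 1.0
    np.maximum.at(bev[1], (rows, cols), pts[:, 2])
    return bev

pts = np.random.uniform(-5, 45, size=(5000, 3))
print(lidar_to_bev(pts).shape)  # (2, 256, 256)
```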
Lidar-based sensing drives current autonomous vehicles. Despite rapid progress, current lidar sensors still lag two decades behind traditional color cameras in terms of resolution and cost. For autonomous driving, this means that large objects close to the sensors are easily visible, but far-away or small objects comprise only one or two measurements. This is a problem, especially when these objects turn out to be driving hazards. On the other hand, these same objects are clearly visible in the onboard RGB sensors. In this work, we present an approach to seamlessly fuse RGB sensors into lidar-based 3D recognition. Our approach takes a set of 2D detections to generate dense 3D virtual points that augment an otherwise sparse 3D point cloud. These virtual points naturally integrate into any standard lidar-based 3D detector alongside regular lidar measurements. The resulting multi-modal detector is simple and effective. Experimental results on the large-scale nuScenes dataset show that our framework improves a strong CenterPoint baseline by a significant 6.6 mAP and outperforms competing fusion approaches. Code and more visualizations are available at https://tianweiy.github.io/mvp/.
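A sketch of the virtual-point idea under simplifying assumptions (pinhole camera, uniform sampling inside a 2D box, nearest-neighbour depth transfer); the authors' actual sampling and depth-completion choices may differ.

```python
# Sketch of virtual point generation (assumed pinhole camera and nearest-
# neighbour depth transfer; not the authors' implementation): 2D samples inside
# a detection box borrow depth from the closest projected lidar point and are
# unprojected into 3D to densify the sparse cloud.
import numpy as np

def virtual_points_for_box(box, lidar_uvd, K, num_samples=50):
    """box: (u1, v1, u2, v2); lidar_uvd: (N, 3) projected lidar points as
    (u, v, depth); K: 3x3 intrinsics. Returns (num_samples, 3) 3D points."""
    u1, v1, u2, v2 = box
    us = np.random.uniform(u1, u2, num_samples)
    vs = np.random.uniform(v1, v2, num_samples)
    # Nearest projected lidar point in the image plane provides the depth.
    d2 = (lidar_uvd[:, 0][None] - us[:, None]) ** 2 + \
         (lidar_uvd[:, 1][None] - vs[:, None]) ** 2          # (S, N)
    depth = lidar_uvd[d2.argmin(axis=1), 2]                   # (S,)
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (us - cx) / fx * depth
    y = (vs - cy) / fy * depth
    return np.stack([x, y, depth], axis=1)                    # camera frame

K = np.array([[700.0, 0, 640], [0, 700.0, 360], [0, 0, 1]])
lidar_uvd = np.column_stack([np.random.uniform(0, 1280, 200),
                             np.random.uniform(0, 720, 200),
                             np.random.uniform(2, 60, 200)])
print(virtual_points_for_box((600, 300, 700, 400), lidar_uvd, K).shape)  # (50, 3)
```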
It is well recognized that fusing the complementary information from depth-aware lidar point clouds and semantics-rich stereo images would benefit 3D object detection. However, it is non-trivial to explore the inherently unnatural interaction between sparse 3D points and dense 2D pixels. To ease this difficulty, recent proposals generally project the 3D points onto the 2D image plane to sample the image data and then aggregate the data at the points. However, this approach often suffers from the mismatch between the resolution of the point cloud and that of the RGB images, leading to sub-optimal performance. Specifically, taking the sparse points as the locations for multi-modal data aggregation causes severe information loss for high-resolution images, which in turn undermines the effectiveness of multi-sensor fusion. In this paper, we present VPFNet, a new architecture that cleverly aligns and aggregates the point cloud and image data at "virtual" points. In particular, with a density lying between that of the 3D points and the 2D pixels, the virtual points can nicely bridge the resolution gap between the two sensors and thus preserve more information for processing. Moreover, we also investigate data augmentation techniques that can be applied to both point clouds and RGB images, since data augmentation has so far made a non-negligible contribution to 3D object detectors. We have conducted extensive experiments on the KITTI dataset and observed good performance compared with state-of-the-art methods. Remarkably, our VPFNet achieves 83.21% moderate 3D AP and 91.86% moderate BEV AP on the KITTI test set, ranking first as of May 21st, 2021. The network design also takes computational efficiency into consideration: we achieve an FPS of 15 on a single NVIDIA RTX 2080Ti GPU. The code will be made available for reproduction and further investigation.
Fusing the camera and LiDAR information has become a de-facto standard for 3D object detection tasks. Current methods rely on point clouds from the LiDAR sensor as queries to leverage features from the image space. However, this underlying assumption has been found to make the current fusion framework unable to produce any prediction when there is a LiDAR malfunction, whether minor or major. This fundamentally limits the deployment capability to realistic autonomous driving scenarios. In contrast, we propose a surprisingly simple yet novel fusion framework, dubbed BEVFusion, whose camera stream does not depend on the input of LiDAR data, thus addressing the downside of previous methods. We empirically show that our framework surpasses the state-of-the-art methods under the normal training settings. Under the robustness training settings that simulate various LiDAR malfunctions, our framework significantly surpasses the state-of-the-art methods by 15.7% to 28.9% mAP. To the best of our knowledge, we are the first to handle realistic LiDAR malfunction and can be deployed to realistic scenarios without any post-processing procedure. The code is available at https://github.com/ADLab-AutoDrive/BEVFusion.
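A sketch of a training-time augmentation in the spirit of the robustness settings described above, simulating total failure, a limited field of view, or random point dropout; the exact corruption modes and probabilities used in the paper are assumptions here.

```python
# Sketch of a training-time augmentation that simulates LiDAR malfunction
# (the exact corruption modes in the paper may differ): with some probability
# the whole sweep is dropped, otherwise a limited field of view or a random
# fraction of the points is removed.
import numpy as np

def simulate_lidar_malfunction(points, p_total=0.1, p_fov=0.2, keep_ratio=0.5):
    """points: (N, C) with columns [x, y, z, ...]. Returns a corrupted copy."""
    r = np.random.rand()
    if r < p_total:                          # total failure: no lidar input
        return points[:0]
    if r < p_total + p_fov:                  # limited FOV: keep a frontal wedge
        az = np.arctan2(points[:, 1], points[:, 0])
        return points[np.abs(az) < np.pi / 3]
    keep = np.random.rand(len(points)) < keep_ratio   # random point dropout
    return points[keep]

pts = np.random.randn(20000, 4)
print(simulate_lidar_malfunction(pts).shape)
```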
LiDAR-based 3D object detectors have achieved impressive performance on many benchmarks; however, multi-sensor fusion-based techniques are promising to further improve the results. PointPainting, as a recently proposed framework, can add the semantic information from the 2D image into the 3D LiDAR points by the painting operation to boost the detection performance. However, due to the limited resolution of 2D feature maps, a severe boundary-blurring effect happens during re-projection of 2D semantic segmentation into the 3D point clouds. To handle this limitation, a general multimodal fusion framework, MSF, has been proposed to fuse the semantic information from both the 2D image and 3D points scene parsing results. Specifically, MSF includes three main modules. First, SOTA off-the-shelf 2D/3D semantic segmentation approaches are employed to generate the parsing results for 2D images and 3D point clouds. The 2D semantic information is further re-projected into the 3D point clouds with calibrated parameters. To handle the misalignment between the 2D and 3D parsing results, an AAF module is proposed to fuse them by learning an adaptive fusion score. Then the point cloud with the fused semantic label is sent to the following 3D object detectors. Furthermore, we propose a DFF module to aggregate deep features at different levels to boost the final detection performance. The effectiveness of the framework has been verified on two public large-scale 3D object detection benchmarks by comparing with different baselines. The experimental results show that the proposed fusion strategies can significantly improve the detection performance compared to the methods using only point clouds and the methods using only 2D semantic information. Most importantly, the proposed approach significantly outperforms other approaches and sets new SOTA results on the nuScenes testing benchmark.
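A sketch of the painting-plus-fusion step: each lidar point is decorated with 2D segmentation scores sampled at its image projection and blended with 3D scene-parsing scores; the fixed blending weight below is a placeholder for the learned adaptive fusion score of the AAF module.

```python
# Sketch of semantic painting with an adaptive blend (a placeholder for the
# learned AAF score, which this example replaces with a fixed weight): each
# lidar point is decorated with 2D segmentation scores sampled at its image
# projection, fused with 3D scene-parsing scores for the same point.
import numpy as np

def paint_points(points, proj_uv, seg2d_scores, seg3d_scores, alpha=0.5):
    """points: (N, 3); proj_uv: (N, 2) integer pixel coords of each point;
    seg2d_scores: (H, W, K) image segmentation probabilities;
    seg3d_scores: (N, K) point-cloud segmentation probabilities."""
    u = np.clip(proj_uv[:, 0], 0, seg2d_scores.shape[1] - 1)
    v = np.clip(proj_uv[:, 1], 0, seg2d_scores.shape[0] - 1)
    scores_2d = seg2d_scores[v, u]                    # (N, K) per-point scores
    fused = alpha * scores_2d + (1 - alpha) * seg3d_scores
    return np.concatenate([points, fused], axis=1)    # painted point cloud

N, H, W, K = 1000, 376, 1241, 4
painted = paint_points(np.random.randn(N, 3),
                       np.random.randint(0, 1000, size=(N, 2)),
                       np.random.rand(H, W, K),
                       np.random.rand(N, K))
print(painted.shape)  # (1000, 7)
```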
Radar, the only sensor that could provide reliable perception capability in all weather conditions at an affordable cost, has been widely accepted as a key supplement to camera and LiDAR in modern advanced driver assistance systems (ADAS) and autonomous driving systems. Recent state-of-the-art works reveal that fusion of radar and LiDAR can lead to robust detection in adverse weather, such as fog. However, these methods still suffer from low accuracy of bounding box estimations. This paper proposes a bird's-eye view (BEV) fusion learning for an anchor box-free object detection system, which uses the feature derived from the radar range-azimuth heatmap and the LiDAR point cloud to estimate the possible objects. Different label assignment strategies have been designed to facilitate the consistency between the classification of foreground or background anchor points and the corresponding bounding box regressions. Furthermore, the performance of the proposed object detector can be further enhanced by employing a novel interactive transformer module. We demonstrated the superior performance of the proposed methods in this paper using the recently published Oxford Radar RobotCar (ORR) dataset. We showed that the accuracy of our system significantly outperforms the other state-of-the-art methods by a large margin.
We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state of the art results on the KITTI 3D object detection benchmark [1] while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. Code is at: https://github.com/kujason/avod
3D object detection is an essential task in autonomous driving. Recent techniques excel with highly accurate detection rates, provided the 3D input data is obtained from precise but expensive LiDAR technology. Approaches based on cheaper monocular or stereo imagery data have, until now, resulted in drastically lower accuracies, a gap that is commonly attributed to poor image-based depth estimation. However, in this paper we argue that it is not the quality of the data but its representation that accounts for the majority of the difference. Taking the inner workings of convolutional neural networks into consideration, we propose to convert image-based depth maps to pseudo-LiDAR representations, essentially mimicking the LiDAR signal. With this representation we can apply different existing LiDAR-based detection algorithms. On the popular KITTI benchmark, our approach achieves impressive improvements over the existing state-of-the-art in image-based performance, raising the detection accuracy of objects within the 30m range from the previous state-of-the-art of 22% to an unprecedented 74%. At the time of submission our algorithm holds the highest entry on the KITTI 3D object detection leaderboard for stereo-image-based approaches. Our code is publicly available at https://github.com/mileyan/pseudo_lidar.
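The conversion at the heart of the pseudo-LiDAR representation is the pinhole back-projection of every depth-map pixel into a 3D point; the sketch below uses placeholder intrinsics.

```python
# Depth-map-to-pseudo-lidar conversion (placeholder intrinsics): every pixel
# (u, v) with depth d becomes a 3D point via the pinhole back-projection
# x = (u - cx) * d / fx, y = (v - cy) * d / fy, z = d.
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """depth: (H, W) metric depth map -> (H*W, 3) points in the camera frame."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.random.uniform(1.0, 80.0, size=(375, 1242))   # KITTI-like resolution
cloud = depth_to_pseudo_lidar(depth, fx=721.5, fy=721.5, cx=609.6, cy=172.9)
print(cloud.shape)  # (465750, 3)
```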
Compared to lidar, camera and radar sensors have significant advantages in cost, reliability, and maintenance. Existing fusion methods often fuse the outputs of single modalities at the result level, known as the late fusion strategy. While this benefits from off-the-shelf single-sensor detection algorithms, late fusion cannot fully exploit the complementary properties of the sensors, so its performance is limited despite the large potential of camera-radar fusion. Here, we propose a novel proposal-level early fusion approach that effectively exploits the spatial and contextual properties of camera and radar for 3D object detection. Our fusion framework first associates image proposals with radar points in the polar coordinate system to effectively handle the discrepancy between the coordinate systems and spatial properties. With this as a first stage, consecutive cross-attention-based feature fusion layers then adaptively exchange spatio-contextual information between camera and radar, leading to robust and attentive fusion. Our camera-radar fusion approach achieves a state-of-the-art 41.1% mAP and 52.3% NDS on the nuScenes test set, which is 8.7 and 10.8 points higher than the camera-only baseline, while yielding performance competitive with lidar-based methods.
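A sketch of the proposal-to-radar association in polar coordinates, under simplified geometry (hard azimuth gates per proposal instead of the paper's learned association); names and gate values are assumptions.

```python
# Sketch of proposal-to-radar association in polar coordinates (simplified
# geometry, not the paper's exact association): radar points are expressed
# as (range, azimuth) and assigned to an image proposal when their azimuth
# falls inside the angular gate spanned by that proposal.
import numpy as np

def associate_radar_to_proposals(radar_xy, proposal_az_ranges):
    """radar_xy: (N, 2) radar points in the ego frame (x forward, y left);
    proposal_az_ranges: (M, 2) [az_min, az_max] per image proposal, obtained
    by casting the proposal's left/right image borders into the ego frame."""
    az = np.arctan2(radar_xy[:, 1], radar_xy[:, 0])        # (N,)
    rng = np.linalg.norm(radar_xy, axis=1)                 # (N,)
    assoc = {}
    for m, (az_min, az_max) in enumerate(proposal_az_ranges):
        inside = (az >= az_min) & (az <= az_max)
        assoc[m] = np.column_stack([rng[inside], az[inside]])  # polar features
    return assoc

radar = np.random.uniform(-30, 30, size=(100, 2))
gates = np.array([[-0.2, 0.1], [0.3, 0.6]])
matched = associate_radar_to_proposals(radar, gates)
print({k: v.shape for k, v in matched.items()})
```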
In recent years, 3D object detection on lidar data for autonomous driving has made remarkable progress. Among state-of-the-art methods, encoding the point cloud into a bird's-eye view (BEV) has proven effective and efficient. Unlike the perspective view, BEV preserves rich spatial and distance information between objects; and although farther objects of the same type do not appear smaller in BEV, they contain sparser point cloud features. This fact weakens BEV feature extraction with a shared convolutional neural network. To address this challenge, we propose the Range-Aware Attention Network (RAANet), which extracts more powerful BEV features and produces superior 3D object detections. The range-aware attention (RAA) convolution significantly improves feature extraction over near and far distances. In addition, we propose a novel auxiliary loss for density estimation to further enhance the detection accuracy of RAANet for occluded objects. Notably, our proposed RAA convolution is lightweight and compatible with integration into any CNN architecture used for BEV detection. Extensive experiments on the nuScenes dataset show that our proposed method outperforms state-of-the-art lidar-based 3D object detection methods, with a real-time inference speed of 16 Hz, and 22 Hz for the lite version. The code is publicly available at the anonymous GitHub repository https://github.com/Anonymous0522/ange.
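A sketch of a range-aware attention convolution under assumed design choices (a range embedding turned into channel-wise sigmoid attention over a square BEV grid centred on the ego vehicle); the actual RAA block may differ.

```python
# Sketch of a range-aware attention convolution (the actual RAA block may
# differ): the distance of every BEV cell from the ego vehicle is embedded and
# converted into channel-wise attention that rescales the convolution output,
# letting far sparse regions be treated differently from near dense ones.
import torch
import torch.nn as nn

class RangeAwareConv(nn.Module):
    def __init__(self, in_ch, out_ch, grid_size=128, cell_m=0.4):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.range_mlp = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                                       nn.Linear(16, out_ch), nn.Sigmoid())
        # Precompute per-cell range from the grid centre (ego position).
        ys, xs = torch.meshgrid(torch.arange(grid_size), torch.arange(grid_size),
                                indexing="ij")
        centre = (grid_size - 1) / 2.0
        rng = torch.sqrt((xs - centre) ** 2 + (ys - centre) ** 2) * cell_m
        self.register_buffer("rng", rng.float())            # (H, W)

    def forward(self, x):
        feat = self.conv(x)                                  # (B, C, H, W)
        attn = self.range_mlp(self.rng.reshape(-1, 1))       # (H*W, C)
        attn = attn.T.reshape(1, feat.shape[1], *self.rng.shape)
        return feat * attn                                   # range-modulated

layer = RangeAwareConv(16, 32)
print(layer(torch.randn(2, 16, 128, 128)).shape)  # torch.Size([2, 32, 128, 128])
```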