Self-driving cars must detect other vehicles and pedestrians in 3D in order to plan safe routes and avoid collisions. State-of-the-art 3D object detectors based on deep learning have shown promising accuracy, but are prone to overfitting to domain idiosyncrasies, making them fail in new environments - a serious problem if autonomous vehicles are meant to operate freely. In this paper, we propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain, which our method generates, while the vehicle is parked, from replays of previously recorded driving sequences. In these replays, objects are tracked over time, and detections are interpolated and extrapolated - crucially, leveraging future information to capture hard cases. We show, on five autonomous driving datasets, that fine-tuning the object detector on these pseudo-labels substantially reduces the domain gap to new driving environments, yielding major improvements in accuracy and detection reliability.
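As a rough illustration of the interpolation/extrapolation step described above (not the authors' implementation; the track format, frame indexing and extrapolation horizon are assumptions), a pseudo-label track could be densified as follows:

```python
import numpy as np

def densify_track(frames, centers, all_frames, extrapolate=3):
    """Fill in a tracked object's centre for frames where the detector missed it.

    frames      : sorted 1-D array of frame indices where the object was detected
    centers     : (len(frames), 3) array of detected box centres (x, y, z)
    all_frames  : 1-D array of every frame index in the replayed sequence
    extrapolate : how many frames past the last detection to extrapolate
    Returns a dict {frame_index: interpolated_or_extrapolated_centre}.
    """
    frames = np.asarray(frames, dtype=float)
    centers = np.asarray(centers, dtype=float)
    out = {}
    for f in all_frames:
        if frames[0] <= f <= frames[-1]:
            # hindsight: interpolate between past and future detections
            c = np.array([np.interp(f, frames, centers[:, d]) for d in range(3)])
        elif frames[-1] < f <= frames[-1] + extrapolate and len(frames) >= 2:
            # extrapolate with the velocity of the last two observations
            v = (centers[-1] - centers[-2]) / (frames[-1] - frames[-2])
            c = centers[-1] + v * (f - frames[-1])
        else:
            continue
        out[int(f)] = c
    return out

# toy usage: a car detected in frames 0, 2 and 5, replayed over frames 0..7
labels = densify_track([0, 2, 5],
                       [[0.0, 0.0, 0.0], [2.0, 0.1, 0.0], [5.0, 0.3, 0.0]],
                       np.arange(8))
print(labels[3])   # centre filled in for the missed frame 3
```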
Sampling discrepancies between different manufacturers and models of lidar sensors result in inconsistent representations of objects. This leads to performance degradation when a 3D detector trained for one lidar is tested on other types of lidar. Significant advances in lidar manufacturing have brought about mechanical, solid-state and, most recently, adjustable scan pattern lidars. For the latter, existing works typically require fine-tuning of the model each time the scan pattern is adjusted, which is infeasible. We explicitly deal with the sampling discrepancy by proposing a novel unsupervised multi-target domain adaptation framework, SEE, for transferring the performance of state-of-the-art 3D detectors across both fixed and flexible scan pattern lidars without requiring fine-tuning of models by end-users. Our approach interpolates the underlying geometry and normalizes the scan pattern of objects from different lidars before passing them to the detection network. We demonstrate the effectiveness of SEE on public datasets, achieving state-of-the-art results, and additionally provide quantitative results on a novel high-resolution lidar to prove the industry applications of our framework. This dataset and our code will be made publicly available.
The research community has increasing interest in autonomous driving research, despite the resource intensity of obtaining representative real-world data. Existing self-driving datasets are limited in the scale and variation of the environments they capture, even though generalization within and between operating regions is crucial to the overall viability of the technology. In an effort to help align the research community's contributions with real-world self-driving problems, we introduce a new large-scale, high-quality, diverse dataset. Our new dataset consists of 1150 scenes that each span 20 seconds, consisting of well-synchronized and calibrated, high-quality LiDAR and camera data captured across a range of urban and suburban geographies. It is 15x more diverse than the largest camera+LiDAR dataset available, based on our proposed geographical coverage metric. We exhaustively annotated this data with 2D (camera image) and 3D (LiDAR) bounding boxes, with consistent identifiers across frames. Finally, we provide strong baselines for 2D as well as 3D detection and tracking tasks. We further study the effects of dataset size and generalization across geographies on 3D detection methods. Find data, code and more up-to-date information at http://www.waymo.com/open.
3D object detection is an essential task in autonomous driving. Recent techniques excel with highly accurate detection rates, provided the 3D input data is obtained from precise but expensive LiDAR technology. Approaches based on cheaper monocular or stereo imagery data have, until now, resulted in drastically lower accuracies - a gap that is commonly attributed to poor image-based depth estimation. However, in this paper we argue that it is not the quality of the data but its representation that accounts for the majority of the difference. Taking the inner workings of convolutional neural networks into consideration, we propose to convert image-based depth maps to pseudo-LiDAR representations - essentially mimicking the LiDAR signal. With this representation we can apply different existing LiDAR-based detection algorithms. On the popular KITTI benchmark, our approach achieves impressive improvements over the existing state-of-the-art in image-based performance - raising the detection accuracy of objects within the 30m range from the previous state-of-the-art of 22% to an unprecedented 74%. At the time of submission our algorithm holds the highest entry on the KITTI 3D object detection leaderboard for stereo-image-based approaches. Our code is publicly available at https://github.com/mileyan/pseudo_lidar.
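The pseudo-LiDAR conversion itself is a standard pinhole back-projection of each depth-map pixel into a 3D point. A minimal numpy sketch is given below; the intrinsics are placeholder values, and the released repository above remains the authoritative implementation (which typically also transforms the points into the LiDAR coordinate frame using the calibration).

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a (H, W) depth map into an (N, 3) point cloud in camera coordinates."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (us - cx) * z / fx          # pinhole model: u = fx * X / Z + cx
    y = (vs - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop invalid / zero-depth pixels

# toy usage with placeholder KITTI-like intrinsics
depth = np.random.uniform(5.0, 70.0, size=(375, 1242)).astype(np.float32)
cloud = depth_to_pseudo_lidar(depth, fx=721.5, fy=721.5, cx=609.5, cy=172.8)
print(cloud.shape)   # roughly (375*1242, 3)
```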
In this paper, we propose LiDAR Distillation to bridge the domain gap in 3D object detection induced by different LiDAR beam counts. In many real-world applications, the LiDAR point clouds used by mass-produced robots and vehicles usually have fewer beams than those in large-scale public datasets. Moreover, as LiDARs are upgraded to other product models with different beam counts, it becomes challenging to utilize the labeled data captured with the previous versions' high-resolution sensors. Despite recent progress in domain-adaptive 3D detection, most methods struggle to eliminate the beam-induced domain gap. We find that it is essential to align the point-cloud density of the source domain with that of the target domain during training. Inspired by this finding, we propose a progressive framework to mitigate the beam-induced domain shift. In each iteration, we first generate low-beam pseudo LiDAR by downsampling the high-beam point clouds. Then a teacher-student framework is employed to distill the rich information from the data with more beams. Extensive experiments on the Waymo, nuScenes and KITTI datasets with three different LiDAR-based detectors demonstrate the effectiveness of our LiDAR Distillation. Notably, our approach does not introduce any extra computational cost at inference time.
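Producing the low-beam pseudo LiDAR mentioned above essentially means estimating each point's elevation angle, assigning it to a beam, and keeping only a subset of beams. The sketch below is a hedged approximation of that downsampling step (the paper works with the sensor's actual beam layout; the uniform angular binning and the parameter values here are assumptions).

```python
import numpy as np

def downsample_beams(points, num_beams=64, keep_every=2):
    """Approximate a lower-beam sensor by keeping a subset of elevation beams.

    points    : (N, 3+) array of LiDAR points in the sensor frame
    num_beams : assumed number of beams of the source sensor
    keep_every: keep one beam out of every `keep_every` (64 -> 32 for 2, etc.)
    """
    xyz = points[:, :3]
    r_xy = np.linalg.norm(xyz[:, :2], axis=1)
    elevation = np.arctan2(xyz[:, 2], r_xy)            # vertical angle per point
    # bin elevation angles into `num_beams` equal bins between min and max angle
    edges = np.linspace(elevation.min(), elevation.max() + 1e-6, num_beams + 1)
    beam_id = np.digitize(elevation, edges) - 1
    keep = (beam_id % keep_every) == 0
    return points[keep]

# toy usage: a 64-beam scan reduced to a 32-beam pseudo scan
scan = np.random.randn(120000, 4).astype(np.float32)
low_beam = downsample_beams(scan, num_beams=64, keep_every=2)
print(len(scan), "->", len(low_beam))
```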
Monocular 3D object detection is a challenging task in the autonomous driving and computer vision communities. As a common practice, most previous works use manually annotated 3D box labels, where the annotation process is expensive. In this paper, we find that precisely and carefully annotated labels may be unnecessary in monocular 3D detection, which is an interesting and counter-intuitive finding. Using rough labels that are randomly disturbed, the detector can achieve accuracy very close to that obtained with ground-truth labels. We delve into this underlying mechanism and then empirically find that, concerning label accuracy, the 3D location part of the label matters more than its other parts. Motivated by the above conclusion and considering the precise LiDAR 3D measurements, we propose a simple and effective framework, dubbed LiDAR point cloud guided monocular 3D object detection (LPCG). This framework is able to either reduce the annotation cost or considerably boost detection accuracy without introducing extra annotation cost. Specifically, it generates pseudo labels from unlabeled LiDAR point clouds. Thanks to the accurate LiDAR 3D measurements, such pseudo labels can replace manually annotated labels in the training of monocular 3D detectors, since their 3D location information is precise. LPCG can be applied to any monocular 3D detector to fully exploit the massive amounts of unlabeled data in autonomous driving systems. As a result, on the KITTI benchmark, we achieve clearly improved results in both monocular 3D and BEV (bird's-eye-view) detection. On the Waymo benchmark, our method using 10% of the labeled data achieves accuracy comparable to the baseline detector using 100% of the labeled data. The codes are released at https://github.com/spengliang/lpcg.
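One label-free way to obtain pseudo-labels directly from raw LiDAR, loosely in the spirit of LPCG but not its exact pipeline, is to drop ground points, cluster the remainder, and fit an oriented BEV box to each cluster. The sketch below uses DBSCAN and a PCA-based heading estimate; the ground-height, clustering and box-fitting choices are all assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def pseudo_boxes_from_lidar(points, ground_z=-1.5, eps=0.7, min_pts=20):
    """Return a list of (cx, cy, length, width, yaw) BEV pseudo-boxes."""
    pts = points[points[:, 2] > ground_z]            # crude ground removal by height
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(pts[:, :2])
    boxes = []
    for k in set(labels) - {-1}:                     # -1 is DBSCAN noise
        xy = pts[labels == k, :2]
        center = xy.mean(axis=0)
        cov = np.cov((xy - center).T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        major = eigvecs[:, np.argmax(eigvals)]       # dominant BEV direction
        yaw = np.arctan2(major[1], major[0])
        rot = np.array([[np.cos(-yaw), -np.sin(-yaw)],
                        [np.sin(-yaw),  np.cos(-yaw)]])
        local = (xy - center) @ rot.T                # points in box-aligned frame
        length, width = local.max(axis=0) - local.min(axis=0)
        boxes.append((center[0], center[1], length, width, yaw))
    return boxes
```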
Each autonomous driving dataset has a different sensor configuration, originates from a different geographical region and covers various scenarios. As a result, 3D detectors tend to overfit the dataset they are trained on. This causes a drastic drop in accuracy when a detector trained on one dataset is tested on another. We observe that lidar scan pattern differences form a large component of this reduced performance. We address this by designing a novel viewer-centred surface completion network (VCN) to complete the surfaces of objects of interest within an unsupervised domain adaptation framework, SEE-VCN. With SEE-VCN, we obtain a unified representation of objects across datasets, allowing the network to focus on learning geometry rather than overfitting to scan patterns. By adopting a domain-invariant representation, SEE-VCN can be classed as a multi-target domain adaptation approach where no annotations or re-training are required to obtain 3D detections for new scan patterns. Through extensive experiments, we show that our approach outperforms previous domain adaptation methods in multiple domain adaptation settings. Our code and data are available at https://github.com/darrenjkt/see-vcn.
Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. Data, development kit and more information are available online.
Segmentation of lidar data is a task that provides rich, point-wise information about the environment of robots or autonomous vehicles. The currently best-performing neural networks for lidar segmentation are fine-tuned to specific datasets. Switching the lidar sensor without retraining on a big set of annotated data from the new sensor creates a domain shift, which causes the network performance to drop drastically. In this work we propose a new method for lidar domain adaptation, in which we use annotated panoptic lidar datasets and recreate the recorded scenes in the structure of a different lidar sensor. We narrow the domain gap to the target data by recreating panoptic data from one domain in another and mixing the generated data with parts of (pseudo) labeled target domain data. Our method improves the nuScenes to SemanticKITTI unsupervised domain adaptation performance by 15.2 mean Intersection over Union points (mIoU) and by 48.3 mIoU in our semi-supervised approach. We demonstrate a similar improvement for the SemanticKITTI to nuScenes domain adaptation by 21.8 mIoU and 51.5 mIoU, respectively. We compare our method with two state-of-the-art approaches for semantic lidar segmentation domain adaptation, with a significant improvement for unsupervised and semi-supervised domain adaptation. Furthermore we successfully apply our proposed method to two entirely unlabeled datasets of two state-of-the-art lidar sensors, Velodyne Alpha Prime and InnovizTwo, and train well-performing semantic segmentation networks for both.
3D object detection networks tend to be biased towards the data they are trained on. Evaluation on datasets captured in different locations, conditions or with different sensors than the training (source) data results in a drop in model performance, owing to the gap in the distribution of the test (or target) data. Current methods for domain adaptation either assume access to source data during training, which may be unavailable due to privacy or memory concerns, or require a sequence of lidar frames as input. We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors that uses class prototypes to mitigate the effect of pseudo-label noise. Addressing the limitations of conventional feature aggregation methods for prototype computation in the presence of noisy labels, we utilize a transformer module to identify outlier ROIs that correspond to incorrect, over-confident annotations, and compute attentive class prototypes. Under an iterative training strategy, the losses associated with noisy pseudo-labels are down-weighted and thus refined during self-training. To validate the effectiveness of our proposed approach, we examine the domain shift associated with networks trained on large, label-rich datasets (such as the Waymo Open Dataset and nuScenes) and evaluated on smaller, label-poor datasets (such as KITTI), and vice versa. We demonstrate our approach on two recent object detectors, achieving results that outperform other domain adaptation works.
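As a simplified stand-in for the attentive prototype computation (the paper uses a transformer module; the similarity weighting below is only an illustration of down-weighting outlier ROIs), a robust class prototype could be formed as:

```python
import numpy as np

def robust_class_prototype(roi_features, temperature=1.0):
    """Compute a class prototype that down-weights outlier ROI features.

    roi_features : (N, D) features of ROIs pseudo-labeled with the same class
    Returns a (D,) prototype vector.
    """
    f = np.asarray(roi_features, dtype=float)
    mean = f.mean(axis=0)
    # cosine similarity of every ROI feature to the plain mean
    sim = (f @ mean) / (np.linalg.norm(f, axis=1) * np.linalg.norm(mean) + 1e-8)
    weights = np.exp(sim / temperature)
    weights /= weights.sum()
    return weights @ f          # similarity-weighted average
```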
Image-based 3D detection is an indispensable component of the perception system of autonomous driving. However, it still suffers from unsatisfactory performance, one of the main reasons being the limited training data. Unfortunately, annotating objects in 3D space is extremely time- and resource-consuming, which makes it hard to extend the training set arbitrarily. In this work, we focus on the semi-supervised setting and explore the feasibility of a cheaper alternative, namely pseudo-labeling, to leverage unlabeled data. To this end, we conduct extensive experiments to investigate whether pseudo-labels can provide effective supervision for the baseline models under different settings. The experimental results not only demonstrate the effectiveness of the pseudo-labeling mechanism for image-based 3D detection (e.g., under the monocular setting, we achieve 20.23 AP at the moderate level on the KITTI-3D test set without bells and whistles, up from 6.03 AP), but also show several interesting and noteworthy findings (e.g., models trained with pseudo-labels perform better than those trained with ground-truth annotations on the same training data). We hope this work can provide insights for the image-based 3D detection community under the semi-supervised setting. The codes, pseudo-labels and pre-trained models will be publicly available.
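A generic pseudo-labeling round of the kind studied here, filtering teacher predictions by a confidence threshold before mixing them into the training set, can be sketched as follows; `teacher.predict`, `train` and the threshold are hypothetical placeholders rather than the paper's actual interfaces.

```python
# Hypothetical interfaces: `teacher.predict(image)` returns (boxes, scores) as numpy
# arrays, and `train(model, samples)` fine-tunes a detector on (image, boxes) pairs.
def pseudo_label_round(teacher, student, labeled, unlabeled, train, score_thr=0.7):
    pseudo = []
    for image in unlabeled:
        boxes, scores = teacher.predict(image)
        keep = scores >= score_thr                 # keep only confident predictions
        if keep.any():
            pseudo.append((image, boxes[keep]))
    # mix real annotations with the filtered pseudo-labels and fine-tune the student
    train(student, labeled + pseudo)
    return student
```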
Understanding the scene is key for autonomously navigating vehicles, and the ability to segment the surroundings online into moving and non-moving objects is a central ingredient for this task. Often, deep learning-based methods are used to perform moving object segmentation (MOS). The performance of these networks, however, strongly depends on the diversity and amount of labeled training data, which can be expensive to obtain. In this paper, we propose an automatic data labeling pipeline for 3D LiDAR data to save the extensive manual labeling effort and to improve the performance of existing learning-based MOS systems by automatically generating labeled training data. Our proposed approach labels the data by processing it in batches. It first exploits an occupancy-based dynamic object removal to coarsely detect possible dynamic objects. Second, it extracts segments among the proposals and tracks them using a Kalman filter. Based on the tracked trajectories, it labels the objects that actually moved, such as driving cars and pedestrians, as moving. In contrast, non-moving objects, e.g., parked cars, lamps, roads or buildings, are labeled as static. We show that this approach allows us to label LiDAR data efficiently and compare our results to those of other label generation methods. We also train a deep neural network with our automatically generated labels and achieve performance similar to that of a network trained with manual labels on the same data, and better performance when using additional datasets with labels generated by our approach. Furthermore, we evaluate our method on multiple datasets using different sensors, and our experiments indicate that our method can generate labels in diverse environments.
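The final labeling decision in such a pipeline reduces to checking whether a tracked segment actually moved over its lifetime. A minimal sketch of that decision is given below; the thresholds are assumptions and the Kalman-filter tracking itself is omitted.

```python
import numpy as np

def label_track(track_centroids, timestamps, min_displacement=2.0, min_speed=0.5):
    """Label a tracked segment as 'moving' or 'static' from its centroid trajectory.

    track_centroids : (T, 3) segment centroids over its track, e.g. from a Kalman filter
    timestamps      : (T,) corresponding times in seconds
    """
    c = np.asarray(track_centroids, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    displacement = np.linalg.norm(c[-1, :2] - c[0, :2])   # net BEV displacement
    duration = max(t[-1] - t[0], 1e-6)
    mean_speed = displacement / duration
    if displacement > min_displacement and mean_speed > min_speed:
        return "moving"        # e.g. a driving car or a walking pedestrian
    return "static"            # e.g. a parked car, pole or building segment
```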
Compared to typical multi-sensor systems, monocular 3D object detection has attracted much attention due to its simple configuration. However, there is still a significant gap between LiDAR-based and monocular-based methods. In this paper, we find that the ill-posed nature of monocular imagery can lead to depth ambiguity. Specifically, objects with different depths can appear with the same bounding boxes and similar visual features in the 2D image. Unfortunately, the network cannot accurately distinguish different depths from such non-discriminative visual features, resulting in unstable depth training. To facilitate depth learning, we propose a simple yet effective plug-and-play module, One Bounding Box Multiple Objects (OBMO). Concretely, we add a set of suitable pseudo labels by shifting the 3D bounding box along the viewing frustum. To constrain the pseudo-3D labels to be reasonable, we carefully design two label scoring strategies to represent their quality. In contrast to the original hard depth labels, such soft pseudo labels with quality scores allow the network to learn a reasonable depth range, boosting training stability and thus improving final performance. Extensive experiments on the KITTI and Waymo benchmarks show that our method improves state-of-the-art monocular 3D detectors by a significant margin (the improvements under the moderate setting on the KITTI validation set are $\mathbf{1.82\sim 10.91\%}$ mAP in BEV and $\mathbf{1.18\sim 9.36\%}$ mAP in 3D). Codes have been released at https://github.com/mrsempress/OBMO.
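Geometrically, a pseudo-label that keeps (approximately) the same 2D projection can be obtained by sliding the box centre along the camera ray and scaling its dimensions by the same depth ratio. The sketch below only illustrates that construction; OBMO's label scoring strategies are not reproduced, and the box parameterization is an assumption.

```python
import numpy as np

def shift_along_frustum(center, dims, depth_ratio):
    """Create a pseudo 3D box by sliding an annotated box along its viewing ray.

    center      : (3,) box centre in camera coordinates (x right, y down, z forward)
    dims        : (3,) box size (h, w, l)
    depth_ratio : e.g. 0.95 or 1.05 to move the box slightly closer / farther

    Scaling both the centre and the dimensions by the same ratio keeps the
    projected 2D box (approximately) unchanged under a pinhole camera.
    """
    center = np.asarray(center, dtype=float)
    dims = np.asarray(dims, dtype=float)
    return center * depth_ratio, dims * depth_ratio

# toy usage: generate a small set of depth-perturbed pseudo labels
for r in (0.95, 1.0, 1.05):
    c, d = shift_along_frustum([2.0, 1.5, 20.0], [1.5, 1.6, 3.9], r)
    print(r, c, d)
```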
Adverse weather conditions can negatively affect lidar-based object detectors. In this work, we focus on the phenomenon of vehicle gas exhaust condensation in cold weather conditions. This everyday effect can influence the estimation of object sizes and orientations and introduce ghost object detections, compromising the reliability of state-of-the-art object detectors. We propose to solve this problem by using data augmentation and a novel training loss term. To effectively train deep neural networks, a large set of labeled data is needed. In case of adverse weather, this process can be extremely laborious and expensive. We address this in two steps: First, we present a gas exhaust data generation method based on 3D surface reconstruction and sampling, which allows us to generate large sets of gas exhaust clouds from a small pool of labeled data. Second, we introduce a point cloud augmentation process that can be used to add gas exhaust to datasets recorded in good weather conditions. Finally, we formulate a new training loss term that leverages the augmented point clouds to increase object detection robustness by penalizing predictions that include noise. In contrast to other works, our method can be used with both grid-based and point-based detectors. Moreover, since our approach does not require any network architecture changes, inference times remain unchanged. Experimental results on real data show that our proposed method significantly increases robustness to gas exhaust and noisy data.
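The augmentation step, inserting a previously generated exhaust cloud behind a vehicle in a clear-weather scan, can be illustrated with the short sketch below; sampling from a bank of exhaust clouds and the rear-offset placement are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def augment_with_exhaust(scan, vehicle_center, vehicle_yaw, exhaust_bank, rng=None):
    """Append a sampled gas-exhaust point cluster behind a vehicle in a lidar scan.

    scan           : (N, 4) lidar points (x, y, z, intensity)
    vehicle_center : (3,) centre of the labeled vehicle box
    vehicle_yaw    : heading of the vehicle in radians
    exhaust_bank   : list of (M_i, 4) exhaust clouds, centred at the origin
    """
    rng = rng or np.random.default_rng()
    cloud = exhaust_bank[rng.integers(len(exhaust_bank))].copy()
    # place the cloud roughly at the rear of the vehicle (2.5 m behind the centre)
    rear_offset = -2.5 * np.array([np.cos(vehicle_yaw), np.sin(vehicle_yaw), 0.0])
    cloud[:, :3] += np.asarray(vehicle_center) + rear_offset
    return np.vstack([scan, cloud])
```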
Modeling perception sensors is key for simulation-based testing of automated driving functions. Beyond weather conditions themselves, sensors are also subjected to object-dependent environmental influences like tire spray caused by vehicles moving on wet pavement. In this work, a novel modeling approach for spray in lidar data is introduced. The model conforms to the Open Simulation Interface (OSI) standard and is based on the formation of detection clusters within a spray plume. The detections are rendered with a simple custom ray casting algorithm without the need for a fluid dynamics simulation or physics engine. The model is subsequently used to generate training data for object detection algorithms. It is shown that the model helps to improve detection in real-world spray scenarios significantly. Furthermore, a systematic real-world data set is recorded and published for analysis, model calibration and validation of spray effects in active perception sensors. Experiments are conducted on a test track by driving over artificially watered pavement with varying vehicle speeds, vehicle types and levels of pavement wetness. All models and data of this work are available open source.
Determining accurate bird's eye view (BEV) positions of objects and tracks in a scene is vital for various perception tasks including object-interaction mapping, scenario extraction, etc.; however, the level of supervision required to accomplish this is extremely challenging to procure. We propose a light-weight, weakly supervised method to estimate 3D position of objects by jointly learning to regress the 2D object detections and scene's depth prediction in a single feed-forward pass of a network. Our proposed method extends a center-point based single-shot object detector \cite{zhou2019objects}, and introduces a novel object representation where each object is modeled as a BEV point spatio-temporally, without the need of any 3D or BEV annotations for training and LiDAR data at query time. The approach leverages readily available 2D object supervision along with LiDAR point clouds (used only during training) to jointly train a single network, that learns to predict 2D object detection alongside the whole scene's depth, to spatio-temporally model object tracks as points in BEV. The proposed method is computationally $\sim$10x more efficient than recent SOTA approaches [1, 38] while achieving comparable accuracy on the KITTI tracking benchmark.
Most autonomous vehicles are equipped with LiDAR sensors and stereo cameras. The former is very accurate but generates sparse data, whereas the latter is dense and has rich texture and color information, but extracting robust 3D representations from it is difficult. In this paper, we propose a novel data fusion algorithm that combines accurate point clouds with dense but less accurate point clouds obtained from stereo pairs. We develop a framework to integrate this algorithm into various 3D object detection methods. Our framework starts with 2D detections in both RGB images, computes the frustums and their intersections, creates pseudo-LiDAR data from the stereo images, and fills the parts of the intersection regions where the LiDAR data is missing with dense pseudo-LiDAR points. We train multiple 3D object detection methods and show that our fusion strategy consistently improves the performance of the detectors.
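The fill-in step, keeping real LiDAR points where they exist and adding pseudo-LiDAR points only where the frustum intersection is empty, can be sketched with a coarse voxel-occupancy test; the voxel size and the assumption that both clouds are already cropped to the intersection region are mine.

```python
import numpy as np

def fill_sparse_regions(lidar_pts, pseudo_pts, voxel=0.3):
    """Merge clouds: keep all real LiDAR points, add pseudo points only in empty voxels.

    Both inputs are (N, 3) clouds already cropped to the same frustum-intersection region.
    """
    def voxel_keys(pts):
        return {tuple(k) for k in np.floor(pts / voxel).astype(int)}
    occupied = voxel_keys(lidar_pts)
    keep = np.array([tuple(k) not in occupied
                     for k in np.floor(pseudo_pts / voxel).astype(int)])
    return np.vstack([lidar_pts, pseudo_pts[keep]])

# toy usage
lidar = np.random.randn(200, 3)
pseudo = np.random.randn(5000, 3)
fused = fill_sparse_regions(lidar, pseudo)
print(lidar.shape, pseudo.shape, "->", fused.shape)
```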
Advances in perception for self-driving cars have accelerated in recent years due to the availability of large-scale datasets, which are typically collected at specific locations and under good weather conditions. Yet, to achieve the high safety requirements, these perception systems must operate robustly under a wide variety of weather conditions, including snow and rain. In this paper, we present a new dataset to enable robust autonomous driving via a novel data collection process - data is repeatedly recorded along a 15 km route under diverse scene (urban, highway, rural, campus), weather (snow, rain, sun), time (day/night) and traffic conditions (pedestrians, cyclists and cars). The dataset includes images and point clouds from camera and LiDAR sensors, along with high-precision GPS/INS to establish correspondences across routes. The dataset includes road and object annotations using amodal masks to capture partial occlusions, as well as 3D bounding boxes. We demonstrate the uniqueness of this dataset by analyzing the performance of baselines on amodal road and object segmentation, depth estimation and 3D object detection. The repeated routes open new research directions in object discovery, continual learning and anomaly detection. Link to Ithaca365: https://ithaca365.mae.cornell.edu/
Employing vehicle-to-vehicle communication to improve perception performance in self-driving technology has attracted considerable attention recently; however, the lack of a suitable open dataset for benchmarking algorithms has made it difficult to develop and assess cooperative perception technologies. To this end, we present the first large-scale open simulated dataset for vehicle-to-vehicle perception. It contains over 70 interesting scenes, 11,464 frames and 232,913 annotated 3D vehicle bounding boxes, collected from 8 towns in CARLA and a digital town of Los Angeles. We then construct a comprehensive benchmark with a total of 16 implemented models to evaluate several information fusion strategies (i.e., early, late and intermediate fusion) with state-of-the-art lidar detection algorithms. Moreover, we propose a new attentive intermediate fusion pipeline to aggregate information from multiple connected vehicles. Our experiments show that the proposed pipeline can be easily integrated with existing 3D lidar detectors and achieve outstanding performance even with large compression rates. To encourage more researchers to investigate vehicle-to-vehicle perception, we will release the dataset, benchmark methods and all related code at https://mobility-lab.seas.ucla.edu/opv2v/.
Domain adaptation for cross-LiDAR 3D detection is challenging due to the large gap in raw data representation, with disparate point densities and point arrangements. By exploring domain-invariant 3D geometric characteristics and motion patterns, we present an unsupervised domain adaptation method that overcomes the above difficulties. First, we propose the Spatial Geometry Alignment module to extract similar 3D shape geometric features of the same object class to align two domains, while eliminating the effect of distinct point distributions. Second, we present the Temporal Motion Alignment module to utilize motion features in sequential frames of data to match two domains. Prototypes generated from the two modules are incorporated into the pseudo-label reweighting procedure and contribute to our effective self-training framework for the target domain. Extensive experiments show that our method achieves state-of-the-art performance on cross-device datasets, especially for datasets with large gaps captured by mechanical scanning LiDARs and solid-state LiDARs in various scenes. The project homepage is at https://github.com/4DVLab/CL3D.git