The research community has increasing interest in autonomous driving research, despite the resource intensity of obtaining representative real world data. Existing self-driving datasets are limited in the scale and variation of the environments they capture, even though generalization within and between operating regions is crucial to the overall viability of the technology. In an effort to help align the research community's contributions with real-world self-driving problems, we introduce a new large-scale, high quality, diverse dataset. Our new dataset consists of 1150 scenes that each span 20 seconds, consisting of well synchronized and calibrated high quality LiDAR and camera data captured across a range of urban and suburban geographies. It is 15x more diverse than the largest camera+LiDAR dataset available based on our proposed geographical coverage metric. We exhaustively annotated this data with 2D (camera image) and 3D (LiDAR) bounding boxes, with consistent identifiers across frames. Finally, we provide strong baselines for 2D as well as 3D detection and tracking tasks. We further study the effects of dataset size and generalization across geographies on 3D detection methods. Find data, code and more up-to-date information at http://www.waymo.com/open.
Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image-based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. Data, development kit and more information are available online.
Figure 1: We introduce datasets for 3D tracking and motion forecasting with rich maps for autonomous driving. Our 3D tracking dataset contains sequences of LiDAR measurements, 360° RGB video, front-facing stereo (middle-right), and 6-dof localization. All sequences are aligned with maps containing lane center lines (magenta), driveable region (orange), and ground height. Sequences are annotated with 3D cuboid tracks (green). A wider map view is shown in the bottom-right.
Panoptic image segmentation is the computer vision task of finding groups of pixels in an image and assigning semantic classes and object instance identifiers to them. Research in image segmentation has become increasingly popular due to its critical applications in robotics and autonomous driving, and the research community relies on publicly available benchmark datasets to advance the state of the art in computer vision. Due to the high cost of labeling images, however, there is a shortage of publicly available ground-truth labels suitable for panoptic segmentation. The high labeling cost also makes it challenging to extend existing datasets to the video domain and to multi-camera setups. We therefore present the Waymo Open Dataset: Panoramic Video Panoptic Segmentation Dataset, a large-scale dataset that offers high-quality panoptic segmentation labels for autonomous driving. We generate our dataset from the publicly available Waymo Open Dataset, leveraging its diverse set of camera images. Our labels are consistent over time for video processing and consistent across the multiple cameras mounted on the vehicles for full panoramic scene understanding. Specifically, we offer labels for 28 semantic categories and 2,860 temporal sequences captured by five cameras mounted on autonomous vehicles driving in three different geographical locations, leading to a total of 100k labeled camera images. To the best of our knowledge, this makes our dataset an order of magnitude larger than existing video panoptic segmentation datasets. We further propose a new benchmark for panoramic video panoptic segmentation and establish a number of strong baselines based on the DeepLab family of models. We will make the benchmark and the code publicly available. Find the dataset at https://waymo.com/open.
The accelerating development of autonomous driving technology has created a greater demand for large amounts of high-quality data. Labeled, representative real-world data is the fuel for training deep learning networks and is critical for improving self-driving perception algorithms. In this paper, we introduce PandaSet, the first dataset produced by a complete, high-precision autonomous vehicle sensor suite with a no-cost commercial license. The dataset was collected using one 360° mechanical spinning LiDAR, one forward-facing long-range LiDAR, and six cameras. It contains more than 100 scenes, each 8 seconds long, and provides 28 types of labels for object classification and 37 types of labels for semantic segmentation. We provide baselines for LiDAR-only 3D object detection, LiDAR-camera fusion 3D object detection, and LiDAR point cloud segmentation. For more details on PandaSet and the development kit, see https://scale.com/open-datasets/pandaset.
Smart-city applications such as intelligent traffic routing or accident prevention rely on computer vision methods for precise vehicle localization and tracking. Due to the scarcity of accurately labeled data, detecting and tracking vehicles in 3D from multiple cameras has proven challenging to explore. We present a massive synthetic dataset for multiple vehicle tracking and segmentation across multiple overlapping and non-overlapping camera views. Unlike existing datasets, which provide tracking ground truth only for 2D bounding boxes, our dataset additionally contains perfect labels for 3D bounding boxes in camera and world coordinates, depth estimation, and instance, semantic, and panoptic segmentation. The dataset consists of 17 hours of labeled video material, recorded from 340 cameras in 64 diverse day, rain, dawn, and night scenes, making it the most extensive dataset for multi-target multi-camera tracking to date. We provide baselines for detection, vehicle re-identification, and single-camera tracking. Code and data are publicly available.
Next-generation high-resolution automotive radar (4D radar) can provide additional elevation measurements and denser point clouds, and thus has great potential for 3D sensing in autonomous driving. In this paper, we introduce a dataset named TJ4DRadSet that includes 4D radar points for autonomous driving research. The dataset was collected in a variety of driving scenarios and comprises 7,757 synchronized frames in 44 consecutive sequences, well annotated with 3D bounding boxes and track IDs. We provide a 4D-radar-based 3D object detection baseline for the dataset to demonstrate the effectiveness of deep learning methods on 4D radar point clouds. The dataset can be accessed via the following link: https://github.com/TJRadarLab/TJ4DRadSet.
Self-driving cars must detect other vehicles and pedestrians in 3D to plan safe routes and avoid collisions. State-of-the-art 3D object detectors based on deep learning have shown promising accuracy, but are prone to overfitting to domain idiosyncrasies, making them fail in new environments; this is a serious problem if autonomous vehicles are meant to operate freely. In this paper, we propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain, which our method generates while the vehicle is parked, based on replays of previously recorded driving sequences. In these replays, objects are tracked over time, and detections are interpolated and extrapolated, crucially leveraging future information to catch hard cases. We show on five autonomous driving datasets that fine-tuning the object detector on these pseudo-labels substantially reduces the domain gap to new driving environments, yielding strong improvements in accuracy and detection reliability.
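The interpolation/extrapolation step over replayed tracks can be sketched with a toy center-only densifier. This is a minimal sketch under illustrative assumptions: the function name is hypothetical, only object centers are handled (the actual pipeline also tracks box extents and headings through a full offline tracker), and constant-velocity extrapolation from the nearest segment stands in for the paper's method of filling gaps with future information:

```python
import numpy as np

def densify_track(timestamps, centers, query_times):
    """Interpolate (and extrapolate) sparse track detections to query times.

    timestamps: (N,) sorted times at which the object was detected (N >= 2).
    centers: (N, 3) detected object centers at those times.
    Linear interpolation inside the observed window; constant-velocity
    extrapolation outside it, using the nearest boundary segment.
    """
    timestamps = np.asarray(timestamps, dtype=float)
    centers = np.asarray(centers, dtype=float)
    out = np.empty((len(query_times), centers.shape[1]))
    for k in range(centers.shape[1]):
        # np.interp clamps outside [timestamps[0], timestamps[-1]],
        # so out-of-range queries are overwritten below.
        out[:, k] = np.interp(query_times, timestamps, centers[:, k])
    for i, t in enumerate(query_times):
        if t < timestamps[0]:
            v = (centers[1] - centers[0]) / (timestamps[1] - timestamps[0])
            out[i] = centers[0] + v * (t - timestamps[0])
        elif t > timestamps[-1]:
            v = (centers[-1] - centers[-2]) / (timestamps[-1] - timestamps[-2])
            out[i] = centers[-1] + v * (t - timestamps[-1])
    return out
```

Densified boxes like these can then serve as pseudo-labels for fine-tuning a detector in the target domain.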
Vehicle-to-everything (V2X) communication techniques enable collaboration between a vehicle and many other entities in the surrounding environment, which can fundamentally improve the perception system for autonomous driving. However, the lack of public datasets has significantly restricted research progress on collaborative perception. To fill this gap, we present V2X-Sim, a comprehensive simulated multi-agent perception dataset for V2X-aided autonomous driving. V2X-Sim provides: (1) multi-agent sensor recordings from a roadside unit (RSU) and multiple vehicles that enable collaborative perception, (2) multi-modality sensor streams that facilitate multi-modality perception, and (3) diverse ground truths that support various perception tasks. Alongside the dataset, we build an open-source testbed and provide a benchmark for state-of-the-art collaborative perception algorithms on three tasks, including detection, tracking, and segmentation. V2X-Sim seeks to stimulate collaborative perception research for autonomous driving before realistic datasets become widely available. Our dataset and code are available at https://ai4ce.github.io/V2X-Sim/.
The popular object detection metric 3D Average Precision (3D AP) relies on the intersection over union between predicted bounding boxes and ground truth bounding boxes. However, camera-based depth estimation has limited accuracy, which may cause otherwise reasonable predictions that suffer from such longitudinal localization errors to be treated as false positives and false negatives. We therefore propose variants of the popular 3D AP metric that are designed to be more permissive with respect to depth estimation errors. Specifically, our novel longitudinal error tolerant metrics, LET-3D-AP and LET-3D-APL, allow longitudinal localization errors of the predicted bounding boxes up to a given tolerance. The proposed metrics have been used in the Waymo Open Dataset 3D Camera-Only Detection Challenge. We believe they will facilitate advances in the field of camera-only 3D detection by providing a more informative performance signal.
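The core idea of tolerating longitudinal error can be illustrated by decomposing a prediction's center error into a component along the sensor's line of sight and a lateral remainder. A minimal sketch follows, with the function name and `tol_fraction` default as illustrative assumptions; the full LET-3D-AP(L) metrics go further, aligning the prediction along the line of sight before computing IoU and discounting precision by a longitudinal affinity term:

```python
import numpy as np

def longitudinal_error_tolerant_match(pred_center, gt_center, tol_fraction=0.1):
    """Check a longitudinal-tolerance criterion for one prediction/GT pair.

    Decomposes the center error into a component along the sensor's line of
    sight to the ground truth (longitudinal) and the remainder (lateral).
    The longitudinal part is forgiven up to tol_fraction of the GT range.
    Returns (longitudinal_error, lateral_error, within_tolerance).
    """
    gt_range = np.linalg.norm(gt_center)          # distance sensor -> GT
    los = gt_center / gt_range                    # unit line-of-sight vector
    err = pred_center - gt_center
    lon = float(np.dot(err, los))                 # signed longitudinal error
    lat = float(np.linalg.norm(err - lon * los))  # residual lateral error
    return lon, lat, abs(lon) <= tol_fraction * gt_range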
Three-dimensional objects are commonly represented as 3D boxes in a point-cloud. This representation mimics the well-studied image-based 2D bounding-box detection but comes with additional challenges. Objects in a 3D world do not follow any particular orientation, and box-based detectors have difficulties enumerating all orientations or fitting an axis-aligned bounding box to rotated objects. In this paper, we instead propose to represent, detect, and track 3D objects as points. Our framework, CenterPoint, first detects centers of objects using a keypoint detector and regresses to other attributes, including 3D size, 3D orientation, and velocity. In a second stage, it refines these estimates using additional point features on the object. In CenterPoint, 3D object tracking simplifies to greedy closest-point matching. The resulting detection and tracking algorithm is simple, efficient, and effective. CenterPoint achieved state-of-the-art performance on the nuScenes benchmark for both 3D detection and tracking, with 65.5 NDS and 63.8 AMOTA for a single model. On the Waymo Open Dataset, CenterPoint outperforms all previous single model methods by a large margin and ranks first among all Lidar-only submissions. The code and pretrained models are available at https://github.com/tianweiy/CenterPoint.
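The greedy closest-point matching that CenterPoint uses for tracking fits in a few lines. This is a minimal sketch with assumed names and a plain Euclidean cost; the released implementation additionally compensates track positions with the regressed velocities before matching:

```python
import numpy as np

def greedy_match(tracks, detections, max_dist=2.0):
    """Greedily associate detections to tracks by center distance.

    tracks, detections: (N, 2) / (M, 2) arrays of BEV object centers.
    Returns a list of (track_idx, det_idx) pairs; unmatched indices on
    either side spawn new tracks or age out existing ones.
    """
    if len(tracks) == 0 or len(detections) == 0:
        return []
    # Pairwise Euclidean distances between track and detection centers.
    dist = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=-1)
    pairs = []
    used_t, used_d = set(), set()
    # Visit candidate pairs from closest to farthest; take each side once.
    for flat in np.argsort(dist, axis=None):
        t, d = np.unravel_index(flat, dist.shape)
        if t in used_t or d in used_d or dist[t, d] > max_dist:
            continue
        pairs.append((int(t), int(d)))
        used_t.add(t)
        used_d.add(d)
    return pairs
```

Because detections are already points (object centers), no Hungarian solver or IoU computation is needed, which is what makes the tracker essentially free on top of the detector.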
Learning powerful representations in bird's-eye view (BEV) for perception tasks is trending and drawing extensive attention from both industry and academia. Conventional approaches for most autonomous driving algorithms perform detection, segmentation, tracking, etc., in a frontal or perspective view. As sensor configurations become increasingly complex, integrating multi-source information from different sensors and representing features in a unified view becomes vitally important. BEV perception inherits several advantages, as representing surrounding scenes in BEV is intuitive and fusion-friendly, and representing objects in BEV is most desirable for subsequent modules such as planning and/or control. The core problems of BEV perception lie in (a) how to reconstruct the lost 3D information via the view transformation from perspective view to BEV; (b) how to acquire ground-truth annotations in the BEV grid; (c) how to formulate the pipeline to incorporate features from different sources and views; and (d) how to adapt and generalize algorithms as sensor configurations vary across different scenarios. In this survey, we review the most recent work on BEV perception and provide an in-depth analysis of different solutions. Moreover, several systematic designs of BEV approaches from industry are described. Furthermore, we introduce a full suite of practical guidelines to improve the performance of BEV perception tasks, covering camera, LiDAR, and fusion inputs. Finally, we point out future research directions in this area. We hope this report will shed some light for the community and encourage more research effort on BEV perception. We keep an active repository to collect the most recent work and provide a toolbox with a bag of tricks at https://github.com/OpenPerceptionX/BEVPerception-Survey-Recipe.
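Problem (a), the perspective-to-BEV view transformation, has a classical baseline in inverse perspective mapping: intersecting each pixel ray with an assumed flat ground plane. The sketch below is that classical baseline only (function name, camera height, and the simple pinhole geometry are illustrative assumptions); modern BEV methods discussed in the survey learn this transformation, precisely because the flat-ground assumption breaks for anything above the road surface:

```python
import numpy as np

def image_to_bev_flat_ground(uv, K, cam_height=1.5):
    """Lift image pixels to BEV ground coordinates via inverse perspective mapping.

    uv: (N, 2) pixel coordinates (must lie below the horizon);
    K: 3x3 camera intrinsics. The camera looks along +z at cam_height
    metres above the ground, with the y axis pointing down, so each pixel
    ray is intersected with the plane y = cam_height.
    Returns (N, 2) [x, z] ground-plane coordinates.
    """
    ones = np.ones((uv.shape[0], 1))
    # Back-project pixels to normalized camera rays.
    rays = np.linalg.inv(K) @ np.concatenate([uv, ones], axis=1).T  # (3, N)
    scale = cam_height / rays[1]        # intersect each ray with the ground
    return (rays[[0, 2]] * scale).T     # [x, z] per pixel
```

A pixel 100 rows below the principal point of a camera with 500 px focal length and 1.5 m height maps to a ground point 7.5 m ahead, which illustrates how fast depth uncertainty grows toward the horizon.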
Lidar-based sensing drives current autonomous vehicles. Despite rapid progress, current lidar sensors still lag two decades behind traditional color cameras in terms of resolution and cost. For autonomous driving, this means that large objects close to the sensor are easily visible, but far-away or small objects comprise only one or two measurements. This is a problem, especially when these objects turn out to be driving hazards. On the other hand, these same objects are clearly visible in onboard RGB sensors. In this work, we present an approach to seamlessly fuse RGB sensors into lidar-based 3D recognition. Our approach takes a set of 2D detections and generates dense 3D virtual points to augment an otherwise sparse 3D point cloud. These virtual points naturally integrate into any standard lidar-based 3D detector alongside regular lidar measurements. The resulting multi-modal detector is simple and effective. Experimental results on the large-scale nuScenes dataset show that our framework improves a strong CenterPoint baseline by a significant 6.6 mAP and outperforms competing fusion approaches. Code and more visualizations are available at https://tianweiy.github.io/mvp/.
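The virtual-point idea can be illustrated in the image plane: sample pixels inside a 2D detection and borrow the depth of the nearest projected lidar return. This is a deliberately simplified sketch (function name, random sampling, and nearest-neighbour depth transfer are assumptions for illustration); the actual method samples within instance masks and unprojects the completed pixels to 3D with the camera calibration:

```python
import numpy as np

def virtual_points_for_box(lidar_uv, lidar_depth, box, num_virtual=10, seed=0):
    """Densify a 2D detection with virtual points in the image plane.

    lidar_uv: (N, 2) pixel coordinates of lidar points projected into the image.
    lidar_depth: (N,) depths of those points.
    box: (u_min, v_min, u_max, v_max) 2D detection box.
    Samples pixels inside the box and assigns each the depth of its nearest
    projected lidar point, yielding (num_virtual, 3) [u, v, depth] rows.
    """
    rng = np.random.default_rng(seed)
    u_min, v_min, u_max, v_max = box
    inside = ((lidar_uv[:, 0] >= u_min) & (lidar_uv[:, 0] <= u_max) &
              (lidar_uv[:, 1] >= v_min) & (lidar_uv[:, 1] <= v_max))
    if not inside.any():
        return np.empty((0, 3))     # no real return to borrow depth from
    uv_in, d_in = lidar_uv[inside], lidar_depth[inside]
    samples = rng.uniform([u_min, v_min], [u_max, v_max], size=(num_virtual, 2))
    # Nearest-neighbour depth transfer from real lidar returns to samples.
    nn = np.linalg.norm(samples[:, None, :] - uv_in[None, :, :],
                        axis=-1).argmin(axis=1)
    return np.concatenate([samples, d_in[nn][:, None]], axis=1)
```

Once unprojected, such virtual points are appended to the raw sweep and consumed by the downstream detector unchanged.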
We address the problem of real-time 3D object detection from point clouds in the context of autonomous driving. Computation speed is critical, as detection is a necessary component for safety. Existing approaches are, however, computationally expensive due to the high dimensionality of point clouds. We utilize the 3D data more efficiently by representing the scene from the Bird's Eye View (BEV), and propose PIXOR, a proposal-free, single-stage detector that outputs oriented 3D object estimates decoded from pixelwise neural network predictions. The input representation, network architecture, and model optimization are specially designed to balance high accuracy and real-time efficiency. We validate PIXOR on two datasets: the KITTI BEV object detection benchmark, and a large-scale 3D vehicle detection benchmark. On both datasets we show that the proposed detector surpasses other state-of-the-art methods notably in terms of Average Precision (AP), while still running at >28 FPS.
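The BEV input representation behind detectors like PIXOR amounts to rasterizing the point cloud onto a fixed 2D grid. Below is a minimal occupancy-only sketch (grid extents, 0.1 m resolution, and the function name are illustrative assumptions); PIXOR's actual input additionally stacks discretized height slices and a reflectance channel:

```python
import numpy as np

def points_to_bev_occupancy(points, x_range=(0.0, 70.0),
                            y_range=(-40.0, 40.0), res=0.1):
    """Rasterize an (N, 3) point cloud into a binary BEV occupancy grid."""
    nx = int(round((x_range[1] - x_range[0]) / res))
    ny = int(round((y_range[1] - y_range[0]) / res))
    grid = np.zeros((nx, ny), dtype=np.float32)
    # Discretize x/y coordinates into cell indices.
    xi = ((points[:, 0] - x_range[0]) / res).astype(int)
    yi = ((points[:, 1] - y_range[0]) / res).astype(int)
    # Drop points outside the region of interest, then mark occupied cells.
    keep = (xi >= 0) & (xi < nx) & (yi >= 0) & (yi < ny)
    grid[xi[keep], yi[keep]] = 1.0
    return grid
```

The resulting tensor is an image-like input, which is what lets a standard 2D convolutional backbone produce the dense pixelwise box predictions the abstract describes.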
In this paper, we propose a novel 3D object detector that can exploit both LIDAR and cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encodes both discrete-state image features and continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI and a large-scale 3D object detection benchmark shows significant improvements over the state of the art.
In recent years, deep learning has led to great progress in the detection of mobile (i.e., movable) objects in urban driving scenes. Supervised approaches typically require the annotation of large training sets; there has thus been great interest in leveraging weakly, semi-, or self-supervised methods to avoid this, with much success. While weakly and semi-supervised methods still require some annotation, self-supervised methods have used cues such as motion to remove the need for annotation altogether. However, the complete absence of annotation typically degrades their performance, and ambiguities that arise during motion grouping can inhibit their ability to find accurate object boundaries. In this paper, we propose a new self-supervised mobile object detection approach called SCT. It uses both motion cues and expected object sizes to improve detection performance, and predicts a dense grid of 3D oriented bounding boxes to improve object discovery. We significantly outperform TCR, the state-of-the-art self-supervised mobile object detection method, on the KITTI tracking benchmark, and achieve performance within 30% of the fully supervised PV-RCNN++ method at IoUs <= 0.5.
Unlike RGB cameras, which use the visible light band (384 $\sim$ 769 THz), and lidar, which uses the infrared band (361 $\sim$ 331 THz), radar uses relatively long-wavelength radio bands (77 $\sim$ 81 GHz), resulting in measurements that are robust in adverse weather. Unfortunately, existing radar datasets contain relatively few samples compared to existing camera and lidar datasets. This may hinder the development of sophisticated data-driven deep learning techniques for radar-based perception. Moreover, most existing radar datasets only provide 3D radar tensor (3DRT) data, which contains power measurements along the Doppler, range, and azimuth dimensions. As there is no elevation information, estimating the 3D bounding box of an object from 3DRT is a challenge. In this work, we introduce KAIST-Radar (K-Radar), a novel large-scale object detection dataset and benchmark that contains 35K frames of 4D radar tensor (4DRT) data with power measurements along the Doppler, range, azimuth, and elevation dimensions, together with carefully annotated 3D bounding box labels for objects on the road. K-Radar includes challenging driving conditions such as adverse weather (fog, rain, and snow) on various road structures (urban, suburban roads, alleyways, and highways). In addition to the 4DRT, we provide auxiliary measurements from carefully calibrated high-resolution lidar, surround stereo cameras, and RTK-GPS. We also provide 4DRT-based object detection baseline neural networks (baseline NNs) and show that the elevation information is crucial for 3D object detection. By comparing the baseline NN with a similarly structured lidar-based neural network, we demonstrate that 4D radar is the more robust sensor under adverse weather conditions. All codes are available at https://github.com/kaist-avelab/k-radar.
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
Advances in perception for self-driving cars have accelerated in recent years due to the availability of large-scale datasets, typically collected at specific locations and under good weather conditions. Yet, to achieve high safety requirements, these perception systems must operate robustly in a wide variety of weather conditions, including snow and rain. In this paper, we present a new dataset to enable robust autonomous driving via a novel data collection process: data is repeatedly recorded along a 15 km route under diverse scenes (urban, highway, rural, campus), weather (snow, rain, sun), times (day/night), and traffic conditions (pedestrians, cyclists, and cars). The dataset includes images and point clouds from camera and LiDAR sensors, along with high-precision GPS/INS to establish correspondences across routes. The dataset includes road and object annotations using amodal masks to capture partial occlusions, as well as 3D bounding boxes. We demonstrate the uniqueness of this dataset by analyzing the performance of baselines in amodal segmentation of road and objects, depth estimation, and 3D object detection. The repeated routes open new research directions in object discovery, continual learning, and anomaly detection. Link to Ithaca365: https://ithaca365.mae.cornell.edu/
Panoptic scene understanding and tracking of dynamic agents are essential for robots and automated vehicles to navigate urban environments. As LiDAR provides an accurate, illumination-independent geometric depiction of the scene, performing these tasks on LiDAR point clouds provides reliable predictions. However, existing datasets lack diversity in the type of urban scenes and have a limited number of dynamic object instances, which hinders both the learning of these tasks and credible benchmarking of the developed methods. In this paper, we introduce the large-scale Panoptic nuScenes benchmark dataset, which extends our popular nuScenes dataset with point-wise ground-truth annotations for semantic segmentation, panoptic segmentation, and panoptic tracking tasks. To facilitate comparison, we provide several strong baselines for each of these tasks on our proposed dataset. Moreover, we analyze the shortcomings of existing metrics for panoptic tracking and propose the novel instance-centric PAT metric that addresses these issues. We present exhaustive experiments demonstrating the utility of Panoptic nuScenes compared to existing datasets, and make the online evaluation server available at nuScenes.org. We believe this extension will accelerate research on novel methods for scene understanding of dynamic urban environments.