We introduce Argoverse 2 (AV2), a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, as well as lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. It is the largest collection of lidar sensor data to date and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with predicting future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario carries its own HD map with 3D lane and crosswalk geometry, sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
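To make the Motion Forecasting Dataset's structure concrete, here is a minimal sketch of how a scenario's track histories might be represented; the field names are illustrative assumptions, not the official av2 API schema.

```python
from dataclasses import dataclass, field

@dataclass
class TrackState:
    """One observed state of an actor (fields are illustrative, not the official schema)."""
    timestep: int    # index into the observation window
    x: float         # map-frame position, meters
    y: float
    heading: float   # radians, map frame
    vx: float        # velocity components, m/s
    vy: float

@dataclass
class Track:
    track_id: str
    category: str                 # e.g. "vehicle", "pedestrian", "cyclist"
    is_scored: bool               # whether the benchmark scores this actor's forecast
    states: list[TrackState] = field(default_factory=list)

@dataclass
class Scenario:
    scenario_id: str
    city: str                     # one of the six cities
    tracks: list[Track] = field(default_factory=list)
    # The per-scenario HD map (3D lanes, crosswalks) would be attached here as well.
```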
Figure 1: We introduce datasets for 3D tracking and motion forecasting with rich maps for autonomous driving. Our 3D tracking dataset contains sequences of LiDAR measurements, 360° RGB video, front-facing stereo (middle-right), and 6-DOF localization. All sequences are aligned with maps containing lane centerlines (magenta), driveable region (orange), and ground height. Sequences are annotated with 3D cuboid tracks (green). A wider map view is shown in the bottom-right.
High-definition (HD) map change detection is the task of determining when sensor data and map data are no longer in agreement with one another due to real-world changes. We collect the first dataset for the task, which we entitle the Trust, but Verify (TbV) dataset, by mining thousands of hours of data from over 9 months of autonomous vehicle fleet operations. We present learning-based formulations for solving the problem in the bird's eye view and ego-view. Because real map changes are infrequent and vector maps are easy to synthetically manipulate, we lean on simulated data to train our model. Perhaps surprisingly, we show that such models can generalize to real world distributions. The dataset, consisting of maps and logs collected in six North American cities, is one of the largest AV datasets to date with more than 7.8 million images. We make the data available to the public at https://www.argoverse.org/av2.html#mapchange-link, along with code and models at https://github.com/johnwlambert/tbv, under the CC BY-NC-SA 4.0 license.
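The strategy of training on simulated map changes can be illustrated with a toy perturbation function; the specific perturbations (deleting or shifting a crosswalk) and their magnitudes below are illustrative assumptions, not the paper's exact recipe.

```python
import random

Polygon = list[tuple[float, float]]  # crosswalk footprint as (x, y) vertices, meters

def perturb_map(crosswalks: list[Polygon], rng: random.Random) -> tuple[list[Polygon], int]:
    """Synthesize a 'changed' map by deleting or translating one crosswalk.

    Returns the (possibly perturbed) map and a binary label (1 = map changed).
    This mirrors the idea of training on simulated changes; the perturbation
    types and magnitudes here are illustrative assumptions.
    """
    if not crosswalks or rng.random() < 0.5:
        return crosswalks, 0  # negative example: map left untouched
    changed = [list(p) for p in crosswalks]
    idx = rng.randrange(len(changed))
    if rng.random() < 0.5:
        changed.pop(idx)  # simulate a removed crosswalk
    else:
        dx, dy = rng.uniform(2, 5), rng.uniform(2, 5)
        changed[idx] = [(x + dx, y + dy) for x, y in changed[idx]]  # simulate repainting
    return changed, 1
```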
Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image-based benchmark datasets have driven development in computer vision tasks such as object detection, tracking, and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. Data, development kit, and more information are available online.
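As a minimal sketch of browsing the dataset with the official nuscenes-devkit (the dataroot path is a placeholder):

```python
# pip install nuscenes-devkit
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)

scene = nusc.scene[0]                                     # one of the 20 s scenes
sample = nusc.get('sample', scene['first_sample_token'])  # first annotated keyframe
lidar_token = sample['data']['LIDAR_TOP']                 # lidar sweep for this keyframe

for ann_token in sample['anns']:                          # 3D box annotations
    ann = nusc.get('sample_annotation', ann_token)
    print(ann['category_name'], ann['translation'], ann['size'])
```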
Real-world behavior is often shaped by complex interactions among multiple agents. To study multi-agent behavior reliably, advances in unsupervised and self-supervised learning have enabled a variety of behavioral representations to be learned from trajectory data. To date, however, there is no unified set of benchmarks for quantitatively and systematically comparing methods across a broad range of behavior analysis settings. We aim to address this gap by introducing a large-scale, multi-agent trajectory dataset from real-world behavioral neuroscience experiments that covers a range of behavior analysis tasks. Our dataset consists of trajectory data from common model organisms, with 9.6 million frames of mouse data and 4.4 million frames of fly data, across a variety of experimental settings, such as different strains, interaction lengths, and optogenetic stimulation. A subset of frames also includes expert-annotated behavior labels. Improvements on our dataset correspond to behavioral representations that generalize across multiple organisms and capture the variation underlying common behavior analysis tasks.
Automated driving systems (ADS) open up a new domain for the automotive industry and offer new possibilities for future transportation with higher efficiency and more comfortable experiences. However, autonomous driving under adverse weather conditions has long been the problem that keeps autonomous vehicles (AVs) from reaching Level 4 or higher autonomy. This paper assesses, both analytically and statistically, the influences and challenges that weather brings to ADS sensors, and surveys solutions for coping with adverse weather conditions. State-of-the-art techniques for perception enhancement under each kind of weather are thoroughly reported. External auxiliary solutions such as V2X technology, as well as the coverage of weather conditions in currently available datasets, simulators, and experimental facilities with weather chambers, are clearly sorted out. By pointing out the major weather problems the autonomous driving field currently faces and reviewing hardware and computer science solutions from recent years, this survey outlines the obstacles and directions for ADS development with respect to adverse weather driving conditions.
Unsupervised domain adaptation demonstrates great potential for mitigating domain shift by transferring models from labeled source domains to unlabeled target domains. While unsupervised domain adaptation has been applied to a wide variety of complex vision tasks, only a few works focus on lane detection for autonomous driving, which can be attributed to the lack of publicly available datasets. To facilitate research in these directions, we propose CARLANE, a 3-way sim-to-real domain adaptation benchmark for 2D lane detection. CARLANE comprises the single-target datasets MoLane and TuLane and the multi-target dataset MuLane. These datasets are built from three different domains, cover diverse scenes, and contain 163K unique images, of which 118K are annotated. In addition, we evaluate and report systematic baselines, including our own method, which builds on prototypical cross-domain self-supervised learning. We find that the false positive and false negative rates of the evaluated domain adaptation methods are high compared to those of fully supervised baselines. This affirms the need for benchmarks such as CARLANE to further strengthen research on unsupervised domain adaptation for lane detection. CARLANE, all evaluated models, and the corresponding implementations are publicly available at https://carlanebench.github.io.
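A sketch of the lane-level false positive and false negative rates used to compare methods, assuming predicted lanes have already been matched to ground-truth lanes by some upstream criterion (e.g. a CULane-style point-wise accuracy threshold, which is an assumption here):

```python
def fp_fn_rates(num_pred: int, num_gt: int, num_matched: int) -> tuple[float, float]:
    """Lane-level false positive and false negative rates.

    num_matched counts predicted lanes that align with a ground-truth lane;
    the matching criterion itself is assumed to have been applied upstream.
    """
    fp_rate = (num_pred - num_matched) / num_pred if num_pred else 0.0
    fn_rate = (num_gt - num_matched) / num_gt if num_gt else 0.0
    return fp_rate, fn_rate

# e.g. 90 predicted lanes, 100 ground-truth lanes, 70 matched:
print(fp_fn_rates(90, 100, 70))  # (0.222..., 0.3)
```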
Multi-agent behavior modeling aims to understand the interactions that occur between agents. We present a multi-agent dataset from behavioral neuroscience, the Caltech Mouse Social Interactions (CalMS21) dataset. Our dataset consists of trajectory data of social interactions, recorded from videos of freely behaving mice in a standard resident-intruder assay. To help accelerate behavioral studies, the CalMS21 dataset provides benchmarks to evaluate the performance of automated behavior classification methods in three settings: (1) training on large behavioral datasets all annotated by a single annotator, (2) style transfer to learn inter-annotator differences in behavior definitions, and (3) learning new behaviors of interest given limited training data. The dataset consists of 6 million frames of unlabeled tracked poses of interacting mice, as well as over 1 million frames with tracked poses and corresponding frame-level behavior annotations. The challenge of our dataset is to classify behaviors accurately using both labeled and unlabeled tracking data, and to generalize to new settings.
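A minimal sketch of the frame-level classification task, assuming CalMS21-style pose arrays (2 mice, 7 keypoints each); the flattened-pose random forest is only an illustrative baseline, not the benchmark's reference model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Tracked poses: (num_frames, num_mice=2, num_keypoints=7, xy=2).
poses = np.random.rand(1000, 2, 7, 2)        # stand-in for real tracking output
labels = np.random.randint(0, 4, size=1000)  # stand-in frame-level behavior IDs

X = poses.reshape(len(poses), -1)            # one flat feature vector per frame
clf = RandomForestClassifier(n_estimators=100).fit(X[:800], labels[:800])
print("held-out accuracy:", clf.score(X[800:], labels[800:]))
```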
The last decade witnessed increasingly rapid progress in self-driving vehicle technology, backed mainly by advances in the area of deep learning and artificial intelligence. The objective of this paper is to survey the current state-of-the-art on deep learning technologies used in autonomous driving. We start by presenting AI-based self-driving architectures, convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm. These methodologies form a base for the surveyed driving scene perception, path planning, behavior arbitration and motion control algorithms. We investigate both the modular perception-planning-action pipeline, where each module is built using deep learning methods, as well as End2End systems, which directly map sensory information to steering commands. Additionally, we tackle current challenges encountered in designing AI architectures for autonomous driving, such as their safety, training data sources and computational hardware. The comparison presented in this survey helps to gain insight into the strengths and limitations of deep learning and AI approaches for autonomous driving and assist with design choices.
The research community has increasing interest in autonomous driving research, despite the resource intensity of obtaining representative real-world data. Existing self-driving datasets are limited in the scale and variation of the environments they capture, even though generalization within and between operating regions is crucial to the overall viability of the technology. In an effort to help align the research community's contributions with real-world self-driving problems, we introduce a new large-scale, high-quality, diverse dataset. Our new dataset consists of 1150 scenes that each span 20 seconds, consisting of well-synchronized and calibrated high-quality LiDAR and camera data captured across a range of urban and suburban geographies. It is 15x more diverse than the largest camera+LiDAR dataset available, based on our proposed geographical coverage metric. We exhaustively annotated this data with 2D (camera image) and 3D (LiDAR) bounding boxes, with consistent identifiers across frames. Finally, we provide strong baselines for 2D as well as 3D detection and tracking tasks. We further study the effects of dataset size and generalization across geographies on 3D detection methods. Find data, code and more up-to-date information at http://www.waymo.com/open.
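As a loose illustration of a geographical-coverage-style diversity metric, one can count the unique spatial tiles visited by the ego vehicle; the paper's exact metric definition may differ from this sketch, and the tile size below is an assumption.

```python
def tile_coverage_km2(coords_m: list[tuple[float, float]], tile_m: float = 150.0) -> float:
    """Area of the unique tile_m x tile_m tiles visited by the ego vehicle.

    coords_m: ego positions in a shared metric frame. A loose illustration of
    a geographical-coverage metric; not the paper's exact definition.
    """
    tiles = {(int(x // tile_m), int(y // tile_m)) for x, y in coords_m}
    return len(tiles) * (tile_m / 1000.0) ** 2
```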
Recording the dynamics of unscripted human interactions in the wild is challenging due to the delicate trade-offs between several factors: participant privacy, ecological validity, data fidelity, and logistical overhead. To address these, following a "datasets for the community, by the community" ethos, we propose the Conference Living Lab (ConfLab): a new concept for multimodal, multisensor data collection of in-the-wild free-standing social conversations. For the first instantiation of ConfLab described here, we organized a real-life professional networking event at a large international conference. The dataset involves 48 conference attendees and captures a diverse mix of status, acquaintanceship, and networking motivations. Our capture setup improves upon the data fidelity of prior in-the-wild datasets while retaining privacy sensitivity: 8 videos (1920x1080, 60 fps) from a non-invasive overhead view, together with custom wearable sensors featuring onboard recording of body motion (full 9-axis IMU), privacy-preserving low-frequency audio (1250 Hz), and Bluetooth-based proximity. In addition, we developed custom solutions for distributed hardware synchronization at acquisition time, as well as time-efficient continuous annotation of body keypoints and actions at high sampling rates. Our benchmarks showcase some of the open research tasks related to in-the-wild privacy-preserving social data analysis: keypoint detection from overhead camera views, skeleton-based no-audio speaker detection, and F-formation detection.
Computer vision applications in intelligent transportation systems (ITS) and autonomous driving (AD) have gravitated towards deep neural network architectures in recent years. While performance seems to be improving on benchmark datasets, many real-world challenges are yet to be adequately considered in research. This paper conducts an extensive literature review on the applications of computer vision in ITS and AD, and discusses challenges related to data, models, and complex urban environments. The data challenges are associated with the collection and labeling of training data and its relevance to real-world conditions, bias inherent in datasets, the high volume of data needing to be processed, and privacy concerns. Deep learning (DL) models are commonly too complex for real-time processing on embedded hardware, lack explainability and generalizability, and are hard to test in real-world settings. Complex urban traffic environments have irregular lighting and occlusions, and surveillance cameras can be mounted at a variety of angles, gather dirt, and shake in the wind, while traffic conditions are highly heterogeneous, with rule violations and complex interactions in crowded scenarios. Some representative applications that suffer from these problems are traffic flow estimation, congestion detection, autonomous driving perception, vehicle interaction, and edge computing for practical deployment. Possible ways of dealing with these challenges are also explored, with a priority on practical deployment.
The accelerating development of autonomous driving technology places greater demands on obtaining large amounts of high-quality data. Labeled, real-world data that is representative of driving conditions is the fuel for training deep learning networks and is critical for improving self-driving perception algorithms. In this paper, we introduce PandaSet, the first dataset produced with a complete, high-precision autonomous vehicle sensor kit and released with a no-cost commercial license. The dataset was collected using one 360° mechanical spinning LiDAR, one forward-facing long-range LiDAR, and 6 cameras. The dataset contains more than 100 scenes, each 8 seconds long, and provides 28 label types for object classification and 37 label types for semantic segmentation. We provide baselines for LiDAR-only 3D object detection, LiDAR-camera fusion 3D object detection, and LiDAR point cloud segmentation. For more details about PandaSet and the development kit, see https://scale.com/open-datasets/pandaset.
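A minimal loading sketch using the pandaset-devkit as documented in its README; treat the specific attribute names as assumptions if your devkit version differs.

```python
# pip install pandaset-devkit
from pandaset import DataSet

dataset = DataSet("/data/pandaset")  # placeholder root directory
seq = dataset["002"]                 # one 8 s sequence, keyed by its ID
seq.load()                           # load lidar, camera, and label data

points = seq.lidar[0]                # first lidar frame (DataFrame of x, y, z, ...)
cuboids = seq.cuboids[0]             # 3D box labels for that frame
print(len(points), "points,", len(cuboids), "cuboids")
```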
Analyzing the planet at scale with satellite imagery and machine learning is a dream that has been constantly hindered by the cost of difficult-to-access, highly-representative, high-resolution imagery. To remediate this, we introduce here the WorldStrat dataset. The largest and most varied such publicly available dataset, at Airbus SPOT 6/7 satellites' high resolution of up to 1.5 m/pixel, empowered by European Space Agency's Phi-Lab as part of the ESA-funded QueryPlanet project, we curate nearly 10,000 km² of unique locations to ensure stratified representation of all types of land use across the world: from agriculture to ice caps, from forests to multiple urbanization densities. We also enrich the dataset with locations typically under-represented in ML datasets: sites of humanitarian interest, illegal mining sites, and settlements of persons at risk. We temporally match each high-resolution image with multiple low-resolution images from the freely accessible Sentinel-2 satellites at 10 m/pixel. We accompany this dataset with an open-source Python package to rebuild or extend the WorldStrat dataset, train and infer baseline algorithms, and learn with abundant tutorials, all compatible with the popular EO-learn toolbox. We hereby hope to foster broad-spectrum applications of ML to satellite imagery, and possibly develop, from free public low-resolution Sentinel-2 imagery, the same analytical capabilities that costly private high-resolution imagery allows. We illustrate this specific point by training and releasing several highly compute-efficient baselines on the task of multi-frame super-resolution. The high-resolution Airbus imagery is CC BY-NC, the labels and Sentinel-2 imagery are CC BY, and the source code and pre-trained models are under BSD. The dataset is available from https://zenodo.org/record/6810792 and the code at https://github.com/worldstrat/worldstrat.
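A trivial multi-frame super-resolution lower bound can clarify the task setup: average the temporally matched low-resolution revisits, then upsample. This is an illustrative baseline sketch, not one of the released models.

```python
import numpy as np

def mean_upsample_baseline(lowres_stack: np.ndarray, scale: int) -> np.ndarray:
    """Naive multi-frame super-resolution: average the temporally matched
    low-res revisits of one location, then upsample by pixel replication.

    lowres_stack: (num_revisits, H, W, channels) float array.
    An illustrative lower bound for the task, not a released baseline.
    """
    mean_img = lowres_stack.mean(axis=0)                         # fuse revisits
    return mean_img.repeat(scale, axis=0).repeat(scale, axis=1)  # naive upsample
```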
Learning powerful representations in bird's-eye view (BEV) for perception tasks is trending and drawing extensive attention from both industry and academia. Conventional approaches for most autonomous driving algorithms perform detection, segmentation, tracking, etc., in a front or perspective view. As sensor configurations become increasingly complex, integrating multi-source information from different sensors and representing features in a unified view is of vital importance. BEV perception inherits several advantages: representing surrounding scenes in BEV is intuitive and fusion-friendly, and representing objects in BEV is most desirable for subsequent modules such as planning and/or control. The core problems of BEV perception lie in (a) how to reconstruct the lost 3D information via view transformation from perspective view to BEV; (b) how to acquire ground-truth annotations in the BEV grid; (c) how to formulate the pipeline to incorporate features from different sources and views; and (d) how to adapt and generalize algorithms as sensor configurations vary across different scenarios. In this survey, we review the most recent work on BEV perception and provide an in-depth analysis of different solutions. Moreover, several systematic designs of BEV approaches from industry are described as well. Furthermore, we introduce a full suite of practical guidelines to improve the performance of BEV perception tasks, covering camera, LiDAR, and fusion inputs. Finally, we point out future research directions in this area. We hope this report will shed some light on the community and encourage more follow-up research on BEV perception. We maintain an active repository that collects the most recent work and provides a bag-of-tricks toolbox at https://github.com/openperceptionx/bevperception-survey-recipe.
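Problem (a), the view transformation, can be grounded with the simplest classical solution: inverse perspective mapping under a flat-ground assumption. The sketch below is a minimal geometric version with assumed parameter conventions; modern BEV networks replace it with learned depth- or attention-based transforms.

```python
import numpy as np

def ipm_bev(image: np.ndarray, K: np.ndarray, T_cam_from_ground: np.ndarray,
            bev_range_m: float = 50.0, bev_res_m: float = 0.5) -> np.ndarray:
    """Inverse perspective mapping: warp a camera image onto a BEV grid,
    assuming a flat ground plane (z = 0 in the ground frame).

    K: 3x3 intrinsics; T_cam_from_ground: 4x4 extrinsics (ground -> camera).
    A minimal sketch of the classical view transform, not a learned one.
    """
    n = int(2 * bev_range_m / bev_res_m)
    xs = np.linspace(-bev_range_m, bev_range_m, n)       # lateral, meters
    ys = np.linspace(0.0, 2 * bev_range_m, n)            # longitudinal, meters
    gx, gy = np.meshgrid(xs, ys)                         # ground-plane grid
    pts = np.stack([gx, gy, np.zeros_like(gx), np.ones_like(gx)], axis=-1)
    cam = (T_cam_from_ground @ pts.reshape(-1, 4).T)[:3]  # ground -> camera frame
    uvw = K @ cam                                         # project to pixels
    valid = uvw[2] > 1e-3                                 # in front of the camera
    u = np.clip((uvw[0] / np.maximum(uvw[2], 1e-3)).astype(int), 0, image.shape[1] - 1)
    v = np.clip((uvw[1] / np.maximum(uvw[2], 1e-3)).astype(int), 0, image.shape[0] - 1)
    bev = np.zeros((n * n,) + image.shape[2:], dtype=image.dtype)
    bev[valid] = image[v[valid], u[valid]]                # sample visible cells
    return bev.reshape((n, n) + image.shape[2:])
```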
Advances in perception for self-driving cars have accelerated in recent years due to the availability of large-scale datasets, which are typically collected at specific locations and under nice weather conditions. Yet, to achieve the high safety requirements, these perception systems must operate robustly under a wide variety of weather conditions, including snow and rain. In this paper, we present a new dataset to enable robust autonomous driving via a novel data collection process: data is repeatedly recorded along a 15 km route under diverse scene (urban, highway, rural, campus), weather (snow, rain, sun), time (day/night), and traffic conditions (pedestrians, cyclists, and cars). The dataset includes images and point clouds from camera and LiDAR sensors, along with high-precision GPS/INS to establish correspondences across routes. The dataset includes road and object annotations using amodal masks to capture partial occlusions, as well as 3D bounding boxes. We demonstrate the uniqueness of this dataset by analyzing the performance of baselines on amodal segmentation of road and objects, depth estimation, and 3D object detection. The repeated routes open new research directions in object discovery, continual learning, and anomaly detection. Link to Ithaca365: https://ithaca365.mae.cornell.edu/
Autonomous driving has been among the most popular and challenging topics of the past few years. On the road to achieving full autonomy, researchers have utilized various sensors, such as LiDAR, cameras, inertial measurement units (IMU), and GPS, and have developed intelligent algorithms for autonomous driving applications such as object detection, object segmentation, obstacle avoidance, and path planning. High-definition (HD) maps have attracted much attention in recent years. Because of their high precision and informative level for localization, HD maps have quickly become one of the critical components of autonomous driving. From large organizations like Baidu Apollo, NVIDIA, and TomTom to individual researchers, researchers have created HD maps for different scenes and purposes in autonomous driving. It is therefore necessary to review the state-of-the-art methods for HD map generation. This paper reviews recent HD map generation technologies that leverage both 2D and 3D map generation. The review introduces the concept of HD maps and their usefulness for autonomous driving, and gives a detailed overview of HD map generation techniques. We also discuss the limitations of current HD map generation technologies to motivate future research.
Unlike RGB cameras that use the visible light band (384–769 THz) and LiDARs that use the infrared band (361–331 THz), radars use a relatively longer-wavelength radio band (77–81 GHz), resulting in measurements that are robust in adverse weather. Unfortunately, existing radar datasets contain only a relatively small number of samples compared to existing camera and LiDAR datasets. This may hinder the development of sophisticated data-driven deep learning techniques for radar-based perception. Moreover, most existing radar datasets only provide 3D radar tensor (3DRT) data, which contains power measurements along the Doppler, range, and azimuth dimensions. As there is no elevation information, it is challenging to estimate an object's 3D bounding box from 3DRT. In this work, we introduce KAIST-Radar (K-Radar), a novel large-scale object detection dataset and benchmark that contains 35K frames of 4D radar tensor (4DRT) data with power measurements along the Doppler, range, azimuth, and elevation dimensions, together with carefully annotated 3D bounding box labels for objects on the road. K-Radar includes challenging driving conditions such as adverse weather (fog, rain, and snow) on various road structures (urban, suburban roads, alleyways, and highways). In addition to the 4DRT, we provide auxiliary measurements from carefully calibrated high-resolution LiDAR, surround stereo cameras, and RTK-GPS. We also provide 4DRT-based object detection baseline neural networks (baseline NNs) and show that the elevation information is crucial for 3D object detection. By comparing the baseline NN with a similarly structured LiDAR-based neural network, we demonstrate that 4D radar is a more robust sensor for adverse weather conditions. All codes are available at https://github.com/kaist-avelab/k-radar.
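To make the 4DRT format concrete, the sketch below indexes a stand-in tensor and extracts high-power cells; the bin counts and the global percentile threshold are illustrative assumptions (a real pipeline would use K-Radar's actual sensor specification and a CFAR detector that adapts the threshold to local noise).

```python
import numpy as np

# A 4D radar tensor stores received power over (Doppler, range, azimuth, elevation).
tensor_4drt = np.random.rand(64, 256, 107, 37)  # stand-in for one 4DRT frame

# Naive detection: keep the highest-power cells.
threshold = np.percentile(tensor_4drt, 99.9)
doppler, rng, azim, elev = np.nonzero(tensor_4drt > threshold)
print(f"{len(rng)} candidate cells; the elevation bins provide the height cue "
      f"that 3DRT (no elevation axis) cannot.")
```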
Vehicle-to-everything (V2X) communication techniques enable collaboration between a vehicle and many other entities in the neighboring environment, which could fundamentally improve the perception system for autonomous driving. However, the lack of a public dataset significantly restricts the research progress of collaborative perception. To fill this gap, we present V2X-Sim, a comprehensive simulated multi-agent perception dataset for V2X-aided autonomous driving. V2X-Sim provides: (1) multi-agent sensor recordings from roadside units (RSU) and multiple vehicles that enable collaborative perception, (2) multi-modality sensor streams that facilitate multi-modality perception, and (3) diverse ground truths that support various perception tasks. Meanwhile, we build an open-source testbed and provide a benchmark for state-of-the-art collaborative perception algorithms on three tasks, including detection, tracking, and segmentation. V2X-Sim seeks to stimulate collaborative perception research for autonomous driving before realistic datasets become widely available. Our dataset and code are available at https://ai4ce.github.io/v2x-sim/.
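A minimal sketch of one collaborative-perception design point: fusing spatially aligned BEV feature maps from several agents. Element-wise max fusion and the pre-alignment of features are illustrative assumptions; published methods typically learn the fusion.

```python
import numpy as np

def fuse_agent_features(agent_feats: list[np.ndarray]) -> np.ndarray:
    """Element-wise max fusion of BEV feature maps from multiple agents
    (ego vehicle, other vehicles, RSU).

    Assumes each map has already been warped into a shared ground frame;
    that warp, and max as the operator, are illustrative choices.
    """
    return np.maximum.reduce(agent_feats)

ego = np.random.rand(128, 200, 200)  # (channels, H, W) BEV features, stand-ins
rsu = np.random.rand(128, 200, 200)
fused = fuse_agent_features([ego, rsu])
```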
In this work, we propose the world's first closed-loop ML-based planning benchmark for autonomous driving. While there is a growing body of ML-based motion planners, the lack of established datasets and metrics has limited the progress in this area. Existing benchmarks for autonomous vehicle motion prediction have focused on short-term motion forecasting rather than long-term planning. This has led previous works to use open-loop evaluation with L2-based metrics, which are not suitable for fairly evaluating long-term planning. Our benchmark overcomes these limitations by introducing a large-scale driving dataset, a lightweight closed-loop simulator, and motion-planning-specific metrics. We provide a high-quality dataset with 1500h of human driving data from 4 cities across the US and Asia with widely varying traffic patterns (Boston, Pittsburgh, Las Vegas, and Singapore). We will provide a closed-loop simulation framework with reactive agents, as well as a large set of both general and scenario-specific planning metrics. We plan to release the dataset at NeurIPS 2021 and to organize benchmark challenges starting in early 2022.
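The difference from open-loop evaluation can be sketched as a rollout in which the planner's own actions feed back into the simulated world each step; `planner` and `simulator` below are stand-in interfaces, not the benchmark's actual API.

```python
def closed_loop_rollout(planner, simulator, horizon_s: float = 15.0, dt: float = 0.1):
    """Skeleton of closed-loop evaluation: unlike open-loop L2 scoring against
    a logged trajectory, the planner's actions change the world it is scored in.

    planner.plan(state) -> trajectory and simulator.step/score are assumed
    interfaces for illustration only.
    """
    state = simulator.reset()                   # ego + reactive agents
    metrics = []
    for _ in range(int(horizon_s / dt)):
        trajectory = planner.plan(state)        # replan from the current state
        state = simulator.step(trajectory)      # agents react to the ego
        metrics.append(simulator.score(state))  # e.g. collisions, comfort, progress
    return metrics
```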