In this paper, we propose SceNDD: a scenario-based naturalistic driving dataset built upon data collected from an instrumented vehicle in downtown Indianapolis. The data collection was completed in 68 driving sessions with different drivers, where each session lasted about 20--40 minutes. The main goal of creating this dataset is to provide the research community with real driving scenarios that have diverse trajectories and driving behaviors. The dataset contains the ego vehicle's waypoints, velocity, and yaw angle, as well as each non-ego actor's waypoints, velocity, yaw angle, entry time, and exit time. Users are given the flexibility to add actors, sensors, lanes, roads, and obstacles to the existing scenarios. We used a Joint Probabilistic Data Association (JPDA) tracker to detect non-ego vehicles on the road. We present some preliminary results of the proposed dataset and a few applications associated with it. The complete dataset is expected to be released by early 2023.
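The abstract describes a per-scenario schema: ego waypoints, velocity, and yaw, plus per-actor tracks with entry and exit times. A minimal sketch of how such a record might be represented in code is shown below; the class and field names are hypothetical illustrations, not SceNDD's actual file format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ActorTrack:
    """A non-ego actor's trajectory within a scenario (hypothetical layout)."""
    waypoints: List[Tuple[float, float]]  # (x, y) positions
    velocity: List[float]                 # m/s at each waypoint
    yaw: List[float]                      # heading in radians
    entry_time: float                     # s, when the actor enters the scenario
    exit_time: float                      # s, when it leaves

@dataclass
class Scenario:
    ego_waypoints: List[Tuple[float, float]]
    ego_velocity: List[float]
    ego_yaw: List[float]
    actors: List[ActorTrack] = field(default_factory=list)

    def actors_present_at(self, t: float) -> List[ActorTrack]:
        """Actors whose [entry_time, exit_time] window covers time t."""
        return [a for a in self.actors if a.entry_time <= t <= a.exit_time]

scenario = Scenario(
    ego_waypoints=[(0.0, 0.0), (1.0, 0.1)],
    ego_velocity=[5.0, 5.2],
    ego_yaw=[0.0, 0.05],
    actors=[ActorTrack([(10.0, 2.0)], [3.0], [3.14], entry_time=1.0, exit_time=4.0)],
)
print(len(scenario.actors_present_at(2.0)))  # 1: the actor is present at t=2.0
```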
As one of the most popular micro-mobility options, e-scooters are spreading in hundreds of big cities and college towns in the US and worldwide. In the meantime, e-scooters are also posing new challenges to traffic safety. In general, e-scooters are expected to be ridden in bike lanes/sidewalks or to share the road with cars at a maximum speed of about 15-20 mph, which is more flexible and much faster than pedestrians and bicyclists. These features make e-scooters challenging for human drivers, pedestrians, vehicle active-safety modules, and self-driving modules to see and interact with. To study this new mobility option and address the safety concerns of e-scooter riders and other road users, this paper proposes a wearable data collection system for investigating micro-level e-scooter motion behavior in a naturalistic road environment. An e-scooter-based data acquisition system has been developed by integrating LiDAR, cameras, and GPS using the Robot Operating System (ROS). Software frameworks are developed to support hardware interfaces, sensor operation, sensor synchronization, and data saving. The integrated system can collect data continuously for hours, meeting all the requirements including calibration accuracy and the capability of collecting vehicle and e-scooter encounter data.
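The sensor-synchronization step mentioned above typically amounts to nearest-timestamp matching across streams running at different rates. A minimal pure-Python sketch follows; the function name, sample rates, and 50 ms tolerance are illustrative assumptions, not the paper's actual implementation (which uses ROS).

```python
import bisect

def match_nearest(lidar_stamps, camera_stamps, tolerance=0.05):
    """Pair each LiDAR timestamp with the nearest camera timestamp
    within `tolerance` seconds (camera_stamps must be sorted)."""
    pairs = []
    for t in lidar_stamps:
        i = bisect.bisect_left(camera_stamps, t)
        # The nearest neighbor is either just before or just after index i.
        candidates = camera_stamps[max(i - 1, 0):i + 1]
        if not candidates:
            continue
        best = min(candidates, key=lambda c: abs(c - t))
        if abs(best - t) <= tolerance:
            pairs.append((t, best))
    return pairs

# LiDAR at 10 Hz, camera at ~30 Hz with slight timing jitter
lidar = [0.00, 0.10, 0.20]
camera = [0.001, 0.034, 0.066, 0.099, 0.133, 0.166, 0.201]
print(match_nearest(lidar, camera))
```

Each LiDAR frame is paired with the closest camera frame, discarding pairs whose gap exceeds the tolerance.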
Recently, e-scooter-involved crashes have increased significantly, but little information is available about the behaviors of on-road e-scooter riders. Most existing e-scooter crash research was based on retrospectively descriptive media reports, emergency room patient records, and crash reports. This paper presents a naturalistic driving study with a focus on e-scooter and vehicle encounters. The goal is to quantitatively measure the behaviors of e-scooter riders in different encounters to help facilitate crash scenario modeling, baseline behavior modeling, and the potential future development of in-vehicle mitigation algorithms. The data was collected using an instrumented vehicle and an e-scooter-rider wearable system. A three-step data analysis process is developed. First, semi-automatic data labeling extracts e-scooter-rider images and non-rider human images in similar environments to train an e-scooter-rider classifier. Then, a multi-step scene reconstruction pipeline generates vehicle and e-scooter trajectories in all encounters. The final step is to model e-scooter rider behaviors and e-scooter-vehicle encounter scenarios. A total of 500 vehicle-to-e-scooter interactions are analyzed, and the variables relevant to these encounters are discussed.
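As an illustration of the encounter-analysis step, one simple way to flag a vehicle/e-scooter encounter from reconstructed trajectories is a minimum-gap threshold. The 15 m radius and function names below are hypothetical assumptions for the sketch, not the study's actual criteria.

```python
import math

def min_gap(vehicle_traj, scooter_traj):
    """Minimum Euclidean gap between two time-synchronized (x, y) trajectories."""
    return min(math.dist(v, s) for v, s in zip(vehicle_traj, scooter_traj))

def is_encounter(vehicle_traj, scooter_traj, radius=15.0):
    """Flag the pair as an encounter if the gap ever drops within `radius` meters."""
    return min_gap(vehicle_traj, scooter_traj) <= radius

vehicle = [(0, 0), (5, 0), (10, 0)]
scooter = [(30, 10), (18, 5), (12, 2)]
print(is_encounter(vehicle, scooter))  # True: the closest gap is about 2.8 m
```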
Figure 1: We introduce datasets for 3D tracking and motion forecasting with rich maps for autonomous driving. Our 3D tracking dataset contains sequences of LiDAR measurements, 360° RGB video, front-facing stereo (middle-right), and 6-DOF localization. All sequences are aligned with maps containing lane center lines (magenta), drivable region (orange), and ground height. Sequences are annotated with 3D cuboid tracks (green). A wider map view is shown in the bottom-right.
Intersections are among the most challenging scenarios for autonomous driving. Due to their complexity and randomness, essential applications at intersections (e.g., behavior modeling, motion prediction, safety validation) depend heavily on data-driven techniques. Thus there is strong demand for trajectory datasets of traffic participants (TPs) at intersections. Currently, most intersections in urban areas are equipped with traffic lights, yet no large-scale, high-quality, publicly available trajectory dataset for signalized intersections exists. Therefore, in this paper, a typical two-phase signalized intersection in Tianjin, China, was selected, and a pipeline was designed to construct the Signalized INtersection Dataset (SIND), which contains 7 hours of recordings including over 13,000 TPs of 7 types. In addition, the traffic-light violations in SIND are recorded, and SIND is compared with other similar works. The features of SIND can be summarized as follows: 1) SIND provides more comprehensive information, including traffic-light states, motion parameters, a high-definition (HD) map, etc.; 2) the categories of TPs are diverse and characteristic, with the proportion of vulnerable road users (VRUs) reaching up to 62.6%; 3) it records multiple traffic-light violations by non-motorized vehicles. We believe that SIND will be an effective supplement to existing datasets and can promote related research on autonomous driving. The dataset is available online at: https://github.com/sotif-avlab/sind
Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. Data, development kit and more information are available online.
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest-ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
Recently, numerous studies have investigated cooperative traffic systems using vehicle-to-everything (V2X) communication. Unfortunately, when multiple autonomous vehicles are deployed while exposed to communication failure, conflicts between the idealized assumptions of different autonomous vehicles can lead to adversarial situations on the road. In South Korea, virtual and real-world urban autonomous multi-vehicle races were held in March and November of 2021, respectively. During the competition, multiple vehicles were involved simultaneously, which required maneuvers such as overtaking low-speed vehicles, negotiating intersections, and obeying traffic laws. In this study, we introduce a fully autonomous driving software stack used to deploy a competitive driving model, which enabled us to win the urban autonomous multi-vehicle races. We evaluate module-based systems such as navigation, perception, and planning in real and virtual environments. Additionally, we analyze traffic after collecting position data from multiple vehicles over communication to gain additional insight into a multi-agent autonomous driving scenario. Finally, we propose a method for analyzing traffic in order to compare the spatial distributions of multiple autonomous vehicles. We study the similarity distribution between each team's driving log data to determine the impact of competitive autonomous driving on the traffic environment.
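A similarity analysis between teams' driving logs could, for example, compare spatial occupancy distributions over the track. The grid size and the Jensen-Shannon divergence in the sketch below are illustrative choices, not necessarily the metric used in the study.

```python
import math
from collections import Counter

def spatial_hist(points, cell=10.0):
    """Normalized occupancy histogram over square grid cells of size `cell` m."""
    counts = Counter((int(x // cell), int(y // cell)) for x, y in points)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (in bits) between two sparse distributions."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0) + q.get(k, 0)) for k in keys}
    def kl(a):
        return sum(a.get(k, 0) * math.log2(a.get(k, 0) / m[k])
                   for k in keys if a.get(k, 0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# Two hypothetical teams' logged (x, y) positions
team_a = [(3, 4), (12, 4), (25, 4)]
team_b = [(3, 5), (13, 4), (26, 6)]
print(js_divergence(spatial_hist(team_a), spatial_hist(team_b)))  # 0.0: same cells
```

A divergence of 0 means both logs occupy the grid identically; it rises toward 1 bit as their spatial distributions separate.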
Inspired by the fact that humans use diverse sensory organs to perceive the world, sensors with different modalities are deployed in end-to-end driving to obtain the global context of the 3D scene. In previous works, camera and LiDAR inputs are fused through transformers for better driving performance. These inputs are usually further interpreted as high-level map information to assist navigation tasks. Nevertheless, extracting useful information from complex map inputs is challenging, since redundant information may mislead the agent and negatively affect driving performance. We propose a novel approach to efficiently extract features from vectorized high-definition (HD) maps and utilize them in the end-to-end driving task. In addition, we design a new expert to further enhance model performance by considering multi-road rules. Experimental results demonstrate that both proposed improvements enable our agent to achieve superior performance compared to other methods.
A major challenge of cooperative perception is how to weigh measurements from various sources to obtain an accurate result. Ideally, the weights should be inversely proportional to the error in the sensing information. However, previous cooperative sensor-fusion approaches for autonomous vehicles use a fixed error model, in which the covariance of a sensor and its recognizer pipeline is simply the average of the measurement covariance over all sensing scenarios. The approach proposed in this paper estimates error using key predictor terms that have high correlation with sensing and localization accuracy, for an accurate covariance estimate of each sensor observation. We adopt a hierarchical fusion model consisting of local and global sensor-fusion steps. At the local fusion level, we add a covariance-generation stage that uses the error model of each sensor and the measurement distance to generate the expected covariance matrix of each observation. At the global sensor-fusion level, we add an additional stage to generate a localization covariance matrix from the key predictor term velocity and combine it with the covariance generated by local fusion for accurate cooperative sensing. To demonstrate our method, we built a set of 1/10-scale model autonomous vehicles with accurate sensing capability and characterized their error properties against a motion-capture system. The results show an average and maximum improvement in RMSE of 1.42x and 1.78x, respectively, when detecting vehicle positions in a four-vehicle cooperative fusion scenario using our error model versus a typical fixed error model.
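The inverse-error weighting principle stated above corresponds to information-form (inverse-covariance) fusion. The following is a minimal NumPy sketch under assumed covariance values; the numbers are illustrative, not the paper's measured error models.

```python
import numpy as np

def fuse(estimates):
    """Information-form fusion: weight each observation by its inverse
    covariance, so noisier observations contribute less."""
    info = sum(np.linalg.inv(P) for _, P in estimates)          # summed information
    vec = sum(np.linalg.inv(P) @ x for x, P in estimates)       # information vector
    P_fused = np.linalg.inv(info)
    return P_fused @ vec, P_fused

# Two observers of the same vehicle: a nearby, confident one, and a
# distant one whose covariance has grown with measurement distance.
near = (np.array([10.0, 5.0]), np.diag([0.1, 0.1]))
far = (np.array([11.0, 6.0]), np.diag([1.0, 1.0]))
x, P = fuse([near, far])
print(np.round(x, 3))  # pulled strongly toward the low-covariance observation
```

With a fixed error model both observations would get equal weight; the per-observation covariances are what let the fused estimate favor the more accurate source.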
Multi-modal fusion is a basic task of autonomous driving system perception that has attracted the interest of many scholars in recent years. Current multi-modal fusion methods mainly focus on camera and LiDAR data, but pay little attention to the kinematic information provided by the vehicle's bottom-level sensors, such as acceleration, vehicle speed, and steering angle. This information is not affected by complex external scenes, so it is more robust and reliable. In this paper, we introduce the existing application fields of vehicle bottom-level information and the research progress of related methods, as well as multi-modal fusion methods based on this information. We also describe the relevant vehicle bottom-level information datasets in detail to facilitate further research. In addition, we propose new ideas for the future of multi-modal fusion technology in autonomous driving tasks, to promote further utilization of vehicle bottom-level information.
Automated driving systems (ADS) open up a new domain for the automotive industry and new possibilities for future transportation with higher efficiency and more comfortable experiences. However, autonomous driving in adverse weather conditions has long been the problem that keeps autonomous vehicles (AVs) from reaching higher levels of autonomy. This paper assesses the influences and challenges that weather brings to ADS sensors in an analytic and statistical way, and surveys solutions for inclement weather conditions. State-of-the-art techniques for perception enhancement with respect to each kind of weather are thoroughly reported. External auxiliary solutions such as V2X technology, along with the coverage of weather conditions in currently available datasets, simulators, and experimental facilities with weather chambers, are distinctly sorted out. By pointing out the major weather problems the autonomous driving field is currently facing and reviewing both hardware and computer-science solutions of recent years, this survey outlines the obstacles and directions of ADS development with respect to adverse weather driving conditions.
Vehicle-to-everything (V2X) communication techniques enable collaboration between a vehicle and many other entities in the neighboring environment, which could fundamentally improve the perception system for autonomous driving. However, the lack of a public dataset significantly restricts the research progress of collaborative perception. To fill this gap, we present V2X-Sim, a comprehensive simulated multi-agent perception dataset for V2X-aided autonomous driving. V2X-Sim provides: (1) multi-agent sensor recordings from a roadside unit (RSU) and multiple vehicles that enable collaborative perception, (2) multi-modality sensor streams that facilitate multi-modality perception, and (3) diverse ground truths that support various perception tasks. Meanwhile, we build an open-source testbed and provide a benchmark for state-of-the-art collaborative perception algorithms on three tasks, including detection, tracking, and segmentation. V2X-Sim seeks to stimulate collaborative perception research for autonomous driving before realistic datasets become widely available. Our dataset and code are available at https://ai4ce.github.io/v2x-sim/.
The accelerating development of autonomous driving technology has created a greater demand for large amounts of high-quality data. Labeled, representative real-world data is the fuel for training deep learning networks and is critical for improving self-driving perception algorithms. In this paper, we introduce PandaSet, the first dataset produced by a complete, high-precision autonomous vehicle sensor kit with a no-cost commercial license. The dataset was collected using one 360° mechanical spinning LiDAR, one forward-facing long-range LiDAR, and 6 cameras. The dataset contains more than 100 scenes, each 8 seconds long, and provides 28 types of labels for object classification and 37 types of labels for semantic segmentation. We provide baselines for LiDAR-only 3D object detection, LiDAR-camera fusion 3D object detection, and LiDAR point cloud segmentation. For more details about PandaSet and the development kit, please see https://scale.com/open-datasets/pandaset.
In the coming decades, autonomous driving will become ubiquitous. iDriving improves the safety of autonomous driving at intersections and increases efficiency by improving traffic throughput at intersections. In iDriving, roadside infrastructure remotely drives autonomous vehicles through an intersection by offloading perception and planning from the vehicle to the roadside infrastructure. To achieve this, iDriving must be able to process voluminous sensor data at full frame rate with a tail latency of less than 100 ms, without sacrificing accuracy. We describe the algorithms and optimizations that enable it to achieve this goal using an accurate and lightweight perception component that reasons over composite views derived from overlapping sensors, and a planner that jointly plans trajectories for multiple vehicles. In our evaluations, iDriving always ensures safe passage of vehicles, while autonomous driving alone can do so only 27% of the time. iDriving also achieves 5x lower wait times than other approaches because it enables traffic-light-free intersections.
Autonomous driving has been one of the most popular and challenging topics of the past few years. On the road to achieving full autonomy, researchers have utilized various sensors, such as LiDAR, cameras, inertial measurement units (IMUs), and GPS, and have developed intelligent algorithms for autonomous driving applications such as object detection, object segmentation, obstacle avoidance, and path planning. High-definition (HD) maps have drawn considerable attention in recent years. Because of their high precision and informative level for localization, HD maps have immediately become one of the key components of autonomous driving. From big organizations like Baidu Apollo, NVIDIA, and TomTom to individual researchers, HD maps have been created for different scenes and purposes in autonomous driving. It is therefore necessary to review the state-of-the-art methods for HD map generation. This paper reviews recent HD map generation technologies that leverage both 2D and 3D map generation. It introduces the concept of HD maps and their usefulness in autonomous driving, and gives a detailed overview of HD map generation techniques. We also discuss the limitations of current HD map generation technologies to motivate future research.
In this work, we propose the world's first closed-loop ML-based planning benchmark for autonomous driving. While there is a growing body of ML-based motion planners, the lack of established datasets and metrics has limited progress in this field. Existing benchmarks for autonomous vehicle motion prediction have focused on short-term motion forecasting rather than long-term planning. This has led previous works to use open-loop evaluation with L2-based metrics, which are not suitable for fairly evaluating long-term planning. Our benchmark overcomes these limitations by introducing a large-scale driving dataset, a lightweight closed-loop simulator, and motion-planning-specific metrics. We provide a high-quality dataset with 1500h of human driving data from 4 cities across the US and Asia with widely varying traffic patterns (Boston, Pittsburgh, Las Vegas, and Singapore). We will provide a closed-loop simulation framework with reactive agents, together with a large set of both general and scenario-specific planning metrics. We plan to release the dataset at NeurIPS 2021 and to organize benchmark challenges starting in early 2022.
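The open-loop L2-based evaluation criticized above can be sketched as the average displacement error (ADE) between a planned trajectory and the logged human trajectory; a plan can score well on this metric while still being unsafe once executed, which is what motivates closed-loop simulation. Function and variable names below are illustrative.

```python
import math

def ade(planned, logged):
    """Average L2 displacement between planned and logged (x, y) waypoints."""
    return sum(math.dist(p, q) for p, q in zip(planned, logged)) / len(planned)

logged = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
planned = [(0.0, 0.1), (1.0, 0.1), (2.0, 0.1)]
print(ade(planned, logged))  # ~0.1 m: a "good" score that says nothing about safety
```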
Vehicle-to-everything (V2X) networks have enabled collaborative perception in autonomous driving, a promising solution to the fundamental defects of stand-alone intelligence, including blind zones and long-range perception. However, the lack of datasets has severely blocked the development of collaborative perception algorithms. In this work, we release DOLPHINS: a Dataset for cOllaborative Perception enabling Harmonious and INterconnected Self-driving, a new simulated large-scale, various-scenario, multi-view, multi-modality autonomous driving dataset that provides a ground-breaking benchmark platform for interconnected autonomous driving. DOLPHINS outperforms current datasets in six dimensions: temporally aligned images and point clouds from both vehicles and roadside units (RSUs), enabling both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) collaborative perception; 6 typical scenarios with dynamic weather conditions, making it the most diverse interconnected autonomous driving dataset; carefully selected viewpoints, providing full coverage of the key areas and every object; 42,376 frames and 292,549 objects, together with the corresponding 3D annotations, geo-positions, and calibrations, constituting the largest collaborative perception dataset; full-HD images and 64-line LiDARs, giving high-resolution data with sufficient detail; and well-organized APIs and open-source code, ensuring the extensibility of DOLPHINS. We also construct benchmarks for 2D detection, 3D detection, and multi-view collaborative perception tasks on DOLPHINS. Experimental results show that a raw fusion scheme over V2X communication can help to improve precision and to reduce the necessity of expensive LiDAR equipment on vehicles when RSUs are present, which may accelerate the popularization of interconnected self-driving vehicles. DOLPHINS is now available at https://dolphins-dataset.net/.
Existing data collection methods for traffic operations and control usually rely on infrastructure-based loop detectors or probe-vehicle trajectories. Connected and automated vehicles (CAVs) can not only report data about themselves but also provide the status of all detected surrounding vehicles. Integrating perception data from multiple CAVs as well as infrastructure sensors (e.g., LiDAR) can provide richer information even at very low penetration rates. This paper aims to develop a cooperative data collection system that integrates LiDAR point cloud data from both infrastructure and CAVs to create a cooperative perception environment for various transportation applications. State-of-the-art 3D detection models are used to detect vehicles in the merged point cloud. We tested the proposed cooperative perception environment with a max-pressure adaptive signal control model in a co-simulation platform with CARLA and SUMO. Results show that very low penetration rates of CAVs plus an infrastructure sensor are sufficient to achieve performance comparable to that of 30% or higher penetration rates of connected vehicles (CVs). We also show the equivalent CV penetration rate (E-CVPR) under different CAV penetration rates to demonstrate the data collection efficiency of the cooperative perception environment.
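Max-pressure control, the downstream signal-control model mentioned above, selects the phase whose served movements have the largest total queue imbalance between upstream and downstream links. Below is a toy sketch with hypothetical phase and queue names, not the paper's actual network or implementation.

```python
def max_pressure(phases, queues):
    """Pick the phase with the largest pressure: the sum over its served
    movements of (upstream queue length - downstream queue length)."""
    def pressure(movements):
        return sum(queues[up] - queues[down] for up, down in movements)
    return max(phases, key=lambda name: pressure(phases[name]))

# Two phases at a toy intersection; each movement is an (upstream, downstream) pair.
phases = {
    "NS": [("N_in", "S_out"), ("S_in", "N_out")],
    "EW": [("E_in", "W_out"), ("W_in", "E_out")],
}
queues = {"N_in": 8, "S_in": 6, "E_in": 2, "W_in": 1,
          "N_out": 0, "S_out": 1, "E_out": 0, "W_out": 0}
print(max_pressure(phases, queues))  # NS: the north-south queues dominate
```

In the cooperative setting, the queue lengths would come from vehicles detected in the merged point cloud rather than from loop detectors.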
Next-generation high-resolution automotive radar (4D radar) can provide additional elevation measurements and denser point clouds, which gives it great potential for 3D sensing in autonomous driving. In this paper, we introduce a dataset named TJ4DRadSet containing 4D radar points for autonomous driving research. The dataset was collected in various driving scenarios, with a total of 7757 synchronized frames in 44 consecutive sequences, which are well annotated with 3D bounding boxes and track IDs. We provide a 4D-radar-based 3D object detection baseline for the dataset to demonstrate the effectiveness of deep learning methods on 4D radar point clouds. The dataset can be accessed via the following link: https://github.com/tjradarlab/tj4dradset.