Predicting pedestrian behavior is critical for fully autonomous vehicles to drive safely and efficiently on busy urban streets. Future autonomous cars will need to fit into mixed traffic with not only technical but also social capabilities. Although a growing number of algorithms and datasets have been developed to predict pedestrian behavior, these efforts lack benchmark labels and the capability to estimate pedestrians' temporally dynamic intent changes, provide interpretations of interaction scenes, and support algorithms with social intelligence. This paper proposes and shares a new benchmark dataset, called the IUPUI-CSRC Pedestrian Situated Intent (PSI) dataset, which carries two innovative types of labels in addition to comprehensive computer-vision labels. The first novel label is the dynamic intent change of a pedestrian crossing in front of the ego-vehicle, annotated by 24 drivers with diverse backgrounds. The second is a text-based explanation of the driver's reasoning process when estimating the pedestrian's intent and predicting their behavior during the interaction. These innovative labels enable several computer-vision tasks, including pedestrian intent/behavior prediction, vehicle-pedestrian interaction segmentation, and video-to-language mapping for explainable algorithms. The released dataset can fundamentally improve the development of pedestrian behavior prediction models and support socially intelligent autonomous vehicles that interact with pedestrians effectively. The dataset has been evaluated on different tasks and has been released for public access.
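As an illustration of how multi-annotator intent labels and text explanations like those above might be consumed, here is a minimal sketch of a hypothetical annotation record; all field names are assumptions for illustration, not the PSI dataset's actual schema.

```python
# Hypothetical sketch of a PSI-style annotation record.
# Field names are illustrative assumptions, not the dataset's actual schema.
record = {
    "video_id": "clip_0001",
    "frame": 42,
    "pedestrian_bbox": [512, 230, 588, 410],  # x1, y1, x2, y2 in pixels
    # Dynamic intent estimates from multiple annotators (PSI uses 24 drivers);
    # values here: 1.0 = will cross, 0.0 = will not cross.
    "intent_votes": [1.0, 1.0, 0.0, 1.0],
    # Free-text driver reasoning, enabling video-to-language tasks.
    "explanation": "Pedestrian is looking at traffic and stepping off the curb.",
}

# A simple consensus intent with a disagreement measure across annotators.
votes = record["intent_votes"]
consensus = sum(votes) / len(votes)
disagreement = sum((v - consensus) ** 2 for v in votes) / len(votes)
```

The per-frame disagreement measure is one way such multi-annotator labels could be turned into a soft training target rather than a single binary ground truth.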
translated by Google Translate
Given the role of situational awareness in safety-critical automated systems, the perception of risk in driving scenes, and its explainability, are of particular importance for autonomous and cooperative driving. Toward this goal, this paper proposes a new research direction: joint risk localization in driving scenes and its explanation as a natural-language description. Owing to the lack of a standard benchmark, we collected a large-scale dataset, DRAMA (Driving Risk Assessment Mechanism with A captioning module), which consists of 17,785 interactive driving scenarios collected in Tokyo, Japan. The DRAMA dataset accommodates video- and object-level questions on driving risks with associated important objects, pursuing the goal of visual captioning as free-form language description, and uses both closed-ended and open-ended responses for multi-level questions; these responses can be used to evaluate a range of visual-captioning capabilities in driving scenarios. We make this data available to the community for further research. Using DRAMA, we explore multiple facets of joint risk localization and captioning in interactive driving scenarios. In particular, we benchmark various multi-task prediction architectures and provide a detailed analysis of joint risk localization and risk captioning. The dataset is available at https://usa.honda-ri.com/drama
Detecting dangerous traffic agents in videos captured by vehicle-mounted dashboard cameras (dashcams) is essential for safe navigation in complex environments. Accident-related videos are only a small fraction of driving-video big data, and the transient pre-accident process is highly dynamic and complex. Moreover, risky and non-risky traffic agents can be similar in appearance. These factors make risky-object localization in driving videos particularly challenging. To this end, this paper proposes an attention-guided multistream feature fusion network (AM-Net) to localize dangerous traffic agents in dashcam videos. Two gated recurrent unit (GRU) networks use object bounding-box and optical-flow features extracted from consecutive video frames to capture spatio-temporal cues for distinguishing dangerous traffic agents. An attention module coupled with the GRUs learns to attend to the traffic agents relevant to an accident. Fusing the two feature streams, AM-Net predicts riskiness scores for the traffic agents in the video. In support of this study, the paper also introduces a benchmark dataset named Risk Object Localization (ROL). The dataset contains spatial, temporal, and categorical annotations with accident-, object-, and scene-level attributes. The proposed AM-Net achieves a promising performance of 85.73% AUC on the ROL dataset, and it outperforms the current state of the art for video anomaly detection on the DoTA dataset. A thorough ablation study further reveals AM-Net's merits by evaluating the contributions of its different components.
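The attention-and-fusion step described above can be sketched in a few lines. The following is a minimal, untrained numpy illustration; the shapes, random parameters, and single-vector attention head are assumptions for illustration, not AM-Net's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two GRU streams' outputs: per-object hidden states
# from a bounding-box stream and an optical-flow stream (shapes assumed).
n_objects, d = 5, 8
h_box = rng.normal(size=(n_objects, d))
h_flow = rng.normal(size=(n_objects, d))

# Attention module: score each object, softmax-normalize, reweight features.
w_att = rng.normal(size=(2 * d,))            # hypothetical learned parameters
h = np.concatenate([h_box, h_flow], axis=1)  # fuse the two streams
scores = h @ w_att
att = np.exp(scores - scores.max())
att = att / att.sum()                        # attention weights over objects
h_attended = h * att[:, None]

# Risk head: one sigmoid unit per object -> riskiness score in (0, 1).
w_risk = rng.normal(size=(2 * d,))
risk = 1.0 / (1.0 + np.exp(-(h_attended @ w_risk)))
```

In a trained network the attention weights would concentrate on accident-relevant agents; here they only demonstrate the data flow from two streams to per-object risk scores.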
The pedestrian intention prediction problem is to estimate whether or not a target pedestrian will cross the street. State-of-the-art approaches rely heavily on visual information collected with the ego-vehicle's front camera to predict the pedestrian's intention. As such, the performance of existing methods degrades significantly when the visual information is inaccurate, e.g., when the distance between the pedestrian and the ego-vehicle is large or the lighting conditions are poor. In this paper, we design, implement, and evaluate the first pedestrian intention prediction model that integrates motion-sensor data collected from the pedestrian's smartwatch (or smartphone). A novel machine-learning architecture is proposed to effectively incorporate the motion-sensor data to reinforce the visual information, significantly improving performance in adverse situations where the visual information may be unreliable. We also conducted a large-scale data collection and present the first pedestrian intention prediction dataset integrated with time-synchronized motion-sensor data. The dataset consists of a total of 128 video clips with varying distances and levels of lighting conditions. We trained our model using the widely used JAAD dataset and our own dataset, and compared its performance with state-of-the-art models. The results demonstrate that our model outperforms state-of-the-art methods, particularly when the pedestrian is far away (over 70 m) and the lighting conditions are insufficient.
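One simple way to realize the idea of reinforcing unreliable visual predictions with motion-sensor data is a distance-gated late fusion. The sketch below is an illustrative assumption, not the paper's learned architecture: the gating rule, the 70 m reliability range, and the function name are all hypothetical.

```python
import math

def fuse_intent(p_visual, p_motion, distance_m, max_reliable_m=70.0):
    """Hedged sketch: down-weight the visual crossing-intent estimate as the
    pedestrian's distance grows past the range where vision is reliable."""
    # Vision weight decays smoothly once the pedestrian is far away.
    w_vis = 1.0 / (1.0 + math.exp((distance_m - max_reliable_m) / 10.0))
    return w_vis * p_visual + (1.0 - w_vis) * p_motion

# Near pedestrian: the fused estimate stays close to the visual one.
near = fuse_intent(p_visual=0.9, p_motion=0.4, distance_m=20.0)
# Far pedestrian: the motion-sensor estimate dominates.
far = fuse_intent(p_visual=0.9, p_motion=0.4, distance_m=100.0)
```

A learned architecture would replace this hand-set gate with parameters fit from data, but the qualitative behavior, vision dominating up close and sensors dominating at range, matches the motivation stated above.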
In this work, we tackle two vital tasks in automated driving systems, i.e., driver intent prediction and risk object identification from egocentric images. Mainly, we investigate the question: what would be good road scene-level representations for these two tasks? We contend that a scene-level representation must capture higher-level semantic and geometric representations of traffic scenes around ego-vehicle while performing actions to their destinations. To this end, we introduce the representation of semantic regions, which are areas where ego-vehicles visit while taking an afforded action (e.g., left-turn at 4-way intersections). We propose to learn scene-level representations via a novel semantic region prediction task and an automatic semantic region labeling algorithm. Extensive evaluations are conducted on the HDD and nuScenes datasets, and the learned representations lead to state-of-the-art performance for driver intention prediction and risk object identification.
Safety remains the main concern in autonomous driving: to be deployed globally, autonomous vehicles need to predict pedestrians' motions sufficiently in advance. While there has been substantial research on both coarse-grained prediction (of a person's center point) and fine-grained prediction (of human body keypoints), we focus on 3D bounding boxes, which are reasonable estimates of a human's extent that do not require modeling complex motion details for autonomous vehicles. This choice gives the flexibility to predict over longer horizons in real-world settings. We pose this new problem and present a simple yet effective model for pedestrians' 3D bounding-box prediction. The method follows an encoder-decoder architecture based on recurrent neural networks, and our experiments show its effectiveness on both synthetic (JTA) and real-world (nuScenes) datasets. The learned representation carries useful information that enhances the performance of other tasks, such as action anticipation. Our code is available online: https://github.com/vita-epfl/bounding-box-prediction
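The recurrent encoder-decoder described above can be sketched as follows. This is a minimal, untrained numpy illustration; the hidden size, parameter initialization, and autoregressive rollout scheme are assumptions for illustration, not the released implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_h = 6, 16  # a 3D box as (x, y, z, w, h, l); hidden size is assumed

# Randomly initialized GRU parameters (untrained illustration).
def init(shape):
    return rng.normal(scale=0.1, size=shape)

Wz, Uz = init((d_h, d_in)), init((d_h, d_h))
Wr, Ur = init((d_h, d_in)), init((d_h, d_h))
Wh, Uh = init((d_h, d_in)), init((d_h, d_h))
Wout = init((d_in, d_h))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h):
    z = sigmoid(Wz @ x + Uz @ h)            # update gate
    r = sigmoid(Wr @ x + Ur @ h)            # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde

# Encoder: consume the observed sequence of 3D boxes.
observed = rng.normal(size=(8, d_in))       # 8 past frames
h = np.zeros(d_h)
for box in observed:
    h = gru_cell(box, h)

# Decoder: roll out future boxes autoregressively from the last observation.
horizon, box = 12, observed[-1]
future = []
for _ in range(horizon):
    h = gru_cell(box, h)
    box = Wout @ h                          # predicted next 3D box
    future.append(box)
future = np.stack(future)
```

Because the box parameterization is low-dimensional, the same rollout loop extends naturally to the longer horizons the abstract motivates.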
Speed-control forecasting, a challenging problem in driver behavior analysis, aims to predict a driver's future actions in controlling vehicle speed, such as braking or acceleration. In this paper, we address this challenge using only egocentric video data, in contrast to the majority of works in the literature, which use third-person-view data, extra vehicle sensor data (e.g., GPS), or both. To this end, we propose a novel graph convolutional network (GCN) based architecture, EgoSpeed-Net. Our motivation is that the position changes of objects over time can provide very useful clues for forecasting future speed changes. We first model the spatial relations among the objects of each class, frame by frame, using fully connected graphs, on top of which GCNs are applied for feature extraction. We then use a long short-term memory network to fuse such features per class over time into a vector, concatenate these vectors, and forecast a speed-control action using a multilayer perceptron classifier. We conduct extensive experiments on the Honda Research Institute Driving Dataset and demonstrate the superior performance of EgoSpeed-Net.
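The per-class, per-frame graph convolution described above can be sketched as a single GCN layer over a fully connected graph. Shapes, the random features, and the symmetric-normalization choice below are illustrative assumptions, not EgoSpeed-Net's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Node features: one node per detected object of a class in a frame
# (e.g., bounding-box coordinates); sizes here are illustrative.
n_nodes, d_in, d_out = 4, 4, 8
X = rng.normal(size=(n_nodes, d_in))

# Fully connected graph (with self-loops) over the class's objects;
# A_hat is the symmetrically normalized adjacency D^{-1/2} A D^{-1/2}.
A = np.ones((n_nodes, n_nodes))
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt

# One graph-convolution layer: H = relu(A_hat X W).
W = rng.normal(size=(d_in, d_out))
H = np.maximum(A_hat @ X @ W, 0.0)

# Per-frame graph features like H would then be pooled per class and fed
# to an LSTM over time, with an MLP classifier on top, per the description.
```

On a fully connected graph the normalized adjacency averages all node features, so each object's representation mixes in the positions of its peers, which is exactly the relational cue the abstract argues matters for speed changes.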
Safe path planning in autonomous driving is a complex task due to the interplay of static scene elements and uncertain surrounding agents. While every static scene element is a source of information, the information available to the ego vehicle is of asymmetric importance. We present a dataset with a novel feature, sign salience, defined to indicate whether a sign is distinctly informative to the goals of the ego vehicle with regard to traffic regulations. Using convolutional networks on cropped signs, experimentally augmented with road type, image coordinates, and planned maneuver, we predict the sign-salience property with 76% accuracy, finding the largest improvement when vehicle-maneuver information is combined with the sign images.
Figure 1: We introduce datasets for 3D tracking and motion forecasting with rich maps for autonomous driving. Our 3D tracking dataset contains sequences of LiDAR measurements, 360° RGB video, front-facing stereo (middle-right), and 6-DOF localization. All sequences are aligned with maps containing lane center lines (magenta), driveable region (orange), and ground height. Sequences are annotated with 3D cuboid tracks (green). A wider map view is shown in the bottom-right.
The automotive industry has witnessed an increasing level of development in the past decades: from manufacturing manually operated vehicles to manufacturing vehicles with a high level of automation. With the recent developments in artificial intelligence (AI), automotive companies now employ black-box AI models to enable vehicles to perceive their environment and make driving decisions with little or no human input. With the hope of deploying autonomous vehicles (AVs) at a commercial scale, the wider societal acceptance of AVs becomes paramount and may largely depend on their degree of transparency, trustworthiness, and compliance with regulations. An assessment of compliance with these acceptance requirements can be achieved by providing explanations for AVs' behaviour. Explainability is therefore seen as an important requirement for AVs: an AV should be able to explain what it has 'seen', done, and might do in the environments in which it operates. In this paper, we provide a comprehensive survey of the existing body of work around explainable autonomous driving. First, we open with a motivation for explanations by highlighting and emphasising the importance of transparency, accountability, and trust in AVs, and we review the existing regulations and standards related to AVs. Second, we identify and categorise the different stakeholders involved in the development, use, and regulation of AVs, and elicit their explanation requirements for AVs. Third, we provide a rigorous review of previous work on explanations for the different AV operations (i.e., perception, localisation, planning, control, and system management). Finally, we identify pertinent challenges and provide recommendations, such as a conceptual framework for AV explainability. This survey aims to provide the fundamental knowledge required by researchers interested in explainability in AVs.
When applied in autonomous-vehicle settings, action recognition can help enrich an environment model's understanding of the world and improve planning of future actions. Toward improving autonomous-vehicle decision-making, we propose in this work a novel two-stage online action recognition system, termed RADAC. RADAC formulates the problem of active agent detection and adapts the idea of actor-context relations from human activity recognition into a straightforward two-stage pipeline for action detection and classification. We show that our proposed scheme can outperform the baseline on the ICCV 2021 ROAD challenge dataset and, by deploying it on a real vehicle platform, we demonstrate how a higher-order understanding of agent actions in the environment can improve decision-making on a real autonomous vehicle.
Computer vision applications in intelligent transportation systems (ITS) and autonomous driving (AD) have gravitated towards deep neural network architectures in recent years. While performance seems to be improving on benchmark datasets, many real-world challenges are yet to be adequately considered in research. This paper conducts an extensive literature review on the applications of computer vision in ITS and AD, and discusses challenges related to data, models, and complex urban environments. The data challenges are associated with the collection and labeling of training data and its relevance to real-world conditions, bias inherent in datasets, the high volume of data to be processed, and privacy concerns. Deep learning (DL) models are commonly too complex for real-time processing on embedded hardware, lack explainability and generalizability, and are hard to test in real-world settings. Complex urban traffic environments feature irregular lighting and occlusions, and surveillance cameras can be mounted at a variety of angles, gather dirt, and shake in the wind, while traffic conditions are highly heterogeneous, with rule violations and complex interactions in crowded scenarios. Representative applications that suffer from these problems include traffic flow estimation, congestion detection, autonomous driving perception, vehicle interaction, and edge computing for practical deployment. Possible ways of dealing with these challenges are also explored, with a focus on practical deployment.
Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. Data, development kit and more information are available online.
This survey reviews explainability methods for vision-based self-driving systems trained with behavior cloning. The concept of explainability has several facets, and the need for explainability is strong in driving, a safety-critical application. Gathering contributions from several research fields, namely computer vision, deep learning, autonomous driving, and explainable AI (X-AI), this survey addresses several points. First, it discusses definitions, context, and motivation for gaining more interpretability and explainability from self-driving systems, as well as the challenges specific to this application. Second, methods providing post-hoc explanations for black-box self-driving systems are comprehensively organized and detailed. Third, approaches that aim to build more interpretable self-driving systems by design are presented and discussed in detail. Finally, remaining open challenges and potential future research directions are identified and examined.
Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities. We construct BDD100K, the largest driving video dataset with 100K videos and 10 tasks to evaluate the exciting progress of image recognition algorithms on autonomous driving. The dataset possesses geographic, environmental, and weather diversity, which is useful for training models that are less likely to be surprised by new conditions. Based on this diverse dataset, we build a benchmark for heterogeneous multitask learning and study how to solve the tasks together. Our experiments show that special training strategies are needed for existing models to perform such heterogeneous tasks. BDD100K opens the door for future studies in this important venue.
Road safety has attracted significant attention from researchers and practitioners in the intelligent transportation systems field in recent years. As one of the most common groups of road users, pedestrians raise alarming concerns due to their unpredictable behavior and movement, since subtle misunderstandings in vehicle-pedestrian interaction can easily lead to risky situations or collisions. Existing methods use either predefined collision-based models or human-labeling approaches to estimate pedestrians' risk. These approaches are usually limited by poor generalization ability and a lack of consideration of the interactions between the ego vehicle and pedestrians. This work addresses these problems by proposing a Pedestrian Risk Level Prediction (PRLP) system, which consists of three modules. First, pedestrian data are collected from the vehicle's perspective. Because the data contain information about the movement of both the ego vehicle and the pedestrians, the prediction of spatiotemporal features can be simplified in an interaction-aware fashion. Using a long short-term memory model, the pedestrian trajectory prediction module predicts the spatiotemporal features over the subsequent five frames. As the predicted trajectories follow certain interaction and risk patterns, a hybrid clustering and classification method is adopted to discover the risk patterns in the spatiotemporal features, and a risk level classifier is trained with the learned patterns. By predicting pedestrians' spatiotemporal features and identifying the corresponding risk levels, the risk patterns between the ego vehicle and pedestrians are determined. Experimental results verify the capability of the PRLP system to predict pedestrians' risk levels, thereby supporting the collision risk assessment of intelligent vehicles and providing safety warnings to both vehicles and pedestrians.
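The hybrid clustering-and-classification step above can be sketched as nearest-centroid assignment over predicted spatiotemporal features. The centroids, feature layout, and three-level risk scale below are illustrative assumptions, not the PRLP system's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for LSTM-predicted spatiotemporal features over the next 5 frames,
# flattened to one vector per pedestrian (the feature layout is assumed).
features = rng.normal(size=(10, 15))

# Hypothetical risk-pattern centroids discovered by clustering during
# training; here one centroid per risk level: 0 = low, 1 = medium, 2 = high.
centroids = rng.normal(size=(3, 15))

def risk_level(x, centroids):
    # Nearest-centroid assignment: the learned clusters act as the
    # risk-level classifier in this simplified sketch.
    dists = np.linalg.norm(centroids - x, axis=1)
    return int(np.argmin(dists))

levels = [risk_level(x, centroids) for x in features]
```

A trained classifier would refine these cluster assignments, but the pipeline shape, predicted features in, discrete risk level out, follows the module description above.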
Recently, road scene-graph representations used in conjunction with graph learning techniques have been shown to outperform state-of-the-art deep learning techniques in tasks including action classification, risk assessment, and collision prediction. To enable the exploration of applications of road scene-graph representations, we introduce roadscene2vec: an open-source tool for extracting and embedding road scene-graphs. The goal of roadscene2vec is to enable research into the applications and capabilities of road scene-graphs by providing tools for generating scene-graphs, tools for generating spatio-temporal scene-graph embeddings, and tools for visualizing and analyzing scene-graph-based methodologies. The capabilities of roadscene2vec include (i) customized scene-graph generation from either video clips or data from the CARLA simulator, (ii) multiple configurable spatio-temporal graph embedding models as well as baseline CNN-based models, (iii) built-in functionality for using graph and sequence embeddings in risk assessment and collision prediction applications, (iv) tools for evaluating transfer learning, and (v) utilities for visualizing scene-graphs and analyzing the explainability of graph learning models. We demonstrate the utility of roadscene2vec for these use cases with experimental results and qualitative evaluations of both graph learning and CNN-based models. roadscene2vec is available at https://github.com/aicps/roadscene2vec
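A road scene-graph of the kind such a tool extracts can be sketched as ego-centered, distance-based relations between detected objects. The thresholds, relation names, and coordinates below are illustrative assumptions, not roadscene2vec's actual configuration.

```python
import math

# Minimal sketch of distance-based scene-graph extraction: nodes are the
# ego vehicle and detected objects; edges encode a coarse spatial relation.
objects = [
    {"id": "ego", "x": 0.0, "y": 0.0},
    {"id": "car_1", "x": 3.0, "y": 4.0},    # 5 m away
    {"id": "ped_1", "x": 1.0, "y": 1.0},    # ~1.4 m away
    {"id": "car_2", "x": 30.0, "y": 40.0},  # 50 m away, no edge
]

def relation(dist):
    # Hypothetical distance bands for the relation labels.
    if dist <= 2.0:
        return "very_near"
    if dist <= 10.0:
        return "near"
    return None  # too far to matter; omit the edge

edges = []
ego = objects[0]
for obj in objects[1:]:
    d = math.hypot(obj["x"] - ego["x"], obj["y"] - ego["y"])
    rel = relation(d)
    if rel is not None:
        edges.append(("ego", rel, obj["id"]))
```

Graphs built this way (nodes plus typed edges) are what spatio-temporal graph embedding models then consume, one graph per frame.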
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
Accurate prediction of future person location and movement trajectory from an egocentric wearable camera can benefit a wide range of applications, such as assisting visually impaired people in navigation, and the development of mobility assistance for people with disability. In this work, a new egocentric dataset was constructed using a wearable camera, with 8,250 short clips of a targeted person either walking 1) toward, 2) away, or 3) across the camera wearer in indoor environments, or 4) staying still in the scene, and 13,817 person bounding boxes were manually labelled. Apart from the bounding boxes, the dataset also contains the estimated pose of the targeted person as well as the IMU signal of the wearable camera at each time point. An LSTM-based encoder-decoder framework was designed to predict the future location and movement trajectory of the targeted person in this egocentric setting. Extensive experiments have been conducted on the new dataset, and have shown that the proposed method is able to reliably and better predict future person location and trajectory in egocentric videos captured by the wearable camera compared to three baselines.
The last decade witnessed increasingly rapid progress in self-driving vehicle technology, mainly backed up by advances in the area of deep learning and artificial intelligence. The objective of this paper is to survey the current state-of-the-art on deep learning technologies used in autonomous driving. We start by presenting AI-based self-driving architectures, convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm. These methodologies form a base for the surveyed driving scene perception, path planning, behavior arbitration and motion control algorithms. We investigate both the modular perception-planning-action pipeline, where each module is built using deep learning methods, as well as End2End systems, which directly map sensory information to steering commands. Additionally, we tackle current challenges encountered in designing AI architectures for autonomous driving, such as their safety, training data sources and computational hardware. The comparison presented in this survey helps to gain insight into the strengths and limitations of deep learning and AI approaches for autonomous driving and assist with design choices.