In autonomous vehicles (AVs), early warning systems rely on collision prediction to ensure occupant safety. However, state-of-the-art methods using deep convolutional networks either fail at modeling collisions or are too expensive/slow, making them less suitable for deployment on AV edge hardware. To address these limitations, we propose SG2VEC, a spatio-temporal scene-graph embedding methodology that uses Graph Neural Network (GNN) and Long Short-Term Memory (LSTM) layers to predict future collisions via visual scene perception. We demonstrate that SG2VEC predicts collisions 8.11% more accurately and 39.07% earlier than the state-of-the-art method on synthetic datasets, and 29.47% more accurately on a challenging real-world collision dataset. We also show that SG2VEC is better at transferring knowledge from synthetic datasets to real-world driving datasets. Finally, we demonstrate that SG2VEC performs inference 9.3x faster, with an 88.0% smaller model, 32.4% less power, and less energy than the state-of-the-art method on the industry-standard NVIDIA DRIVE PX 2 platform, making it more suitable for implementation on the edge.
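As a rough illustration of the architecture described above, the sketch below chains one graph message-passing step per frame with an LSTM over the resulting frame embeddings. It is a minimal sketch in PyTorch, assuming dense row-normalized adjacency matrices and invented feature sizes; it is not the actual SG2VEC implementation.

```python
import torch
import torch.nn as nn

class SceneGraphCollisionPredictor(nn.Module):
    """Hypothetical sketch: per-frame graph convolution -> pooled graph
    embedding -> LSTM over frames -> collision probability."""
    def __init__(self, node_dim=16, hidden_dim=64):
        super().__init__()
        self.gnn = nn.Linear(node_dim, hidden_dim)      # shared node transform
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, node_feats, adj):
        # node_feats: (T, N, node_dim); adj: (T, N, N), row-normalized
        h = torch.relu(adj @ self.gnn(node_feats))      # one message-passing step
        frame_emb = h.mean(dim=1)                       # (T, hidden) mean-pool nodes
        seq_out, _ = self.lstm(frame_emb.unsqueeze(0))  # temporal model over frames
        return torch.sigmoid(self.head(seq_out[0, -1])) # P(collision) at last frame

# toy usage: 8 frames, 5 scene-graph nodes each
T, N = 8, 5
adj = torch.softmax(torch.randn(T, N, N), dim=-1)
p = SceneGraphCollisionPredictor()(torch.randn(T, N, 16), adj)
```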
Recently, road scene-graph representations used in conjunction with graph learning techniques have been shown to outperform state-of-the-art deep learning techniques on tasks including action classification, risk assessment, and collision prediction. To enable the exploration of applications of road scene-graph representations, we introduce RoadScene2Vec: an open-source tool for extracting and embedding road scene graphs. The goal of RoadScene2Vec is to enable research into the applications and capabilities of road scene graphs by providing tools for generating scene graphs, for generating spatio-temporal scene-graph embeddings, and for visualizing and analyzing scene-graph-based methodologies. The capabilities of RoadScene2Vec include (i) customized scene-graph generation from either video clips or data from the CARLA simulator, (ii) multiple configurable spatio-temporal graph embedding models as well as baseline CNN-based models, (iii) built-in functionality for using graph and sequence embeddings in risk assessment and collision prediction applications, (iv) tools for evaluating transfer learning, and (v) utilities for visualizing scene graphs and analyzing the explainability of graph learning models. We demonstrate the utility of RoadScene2Vec for these use cases with experimental results and qualitative evaluations for both graph learning models and CNN-based models. RoadScene2Vec is available at https://github.com/aicps/roadscene2vec.
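To make the scene-graph idea concrete, here is a toy builder in the spirit of the tool described above. The distance-threshold "near" relation and the node/edge attributes are assumptions for illustration, not RoadScene2Vec's actual extraction rules or API.

```python
import math
import networkx as nx

def build_scene_graph(ego, objects, near_thresh=10.0):
    """Toy scene-graph builder: nodes for the ego vehicle and detected
    objects, 'near' edges when objects fall within a distance threshold.
    The relation set is a placeholder, not RoadScene2Vec's actual rules."""
    g = nx.MultiDiGraph()
    g.add_node("ego", kind="ego_car")
    for name, (x, y), kind in objects:
        g.add_node(name, kind=kind)
        if math.hypot(x - ego[0], y - ego[1]) < near_thresh:
            g.add_edge("ego", name, relation="near")
    return g

g = build_scene_graph((0.0, 0.0),
                      [("car_1", (4.0, 2.0), "car"),
                       ("ped_1", (20.0, 5.0), "pedestrian")])
print(g.edges(data=True))  # only car_1 is 'near'
```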
Detecting dangerous traffic agents in videos captured by vehicle-mounted dashboard cameras (dashcams) is essential for facilitating safe navigation in complex environments. Accident-related videos are only a minor portion of driving big data, and the transient pre-accident process is highly dynamic and complex. Moreover, the appearances of dangerous and non-dangerous traffic agents can be similar. These factors make risky object localization in driving videos particularly challenging. To this end, this paper proposes an attention-guided multistream feature fusion network (AM-Net) to localize dangerous traffic agents in dashcam videos. Two Gated Recurrent Unit (GRU) networks use object bounding-box and optical-flow features extracted from consecutive video frames to capture spatio-temporal cues for distinguishing dangerous traffic agents. An attention module coupled with the GRUs learns to attend to the traffic agents relevant to an accident. Fusing the two feature streams, AM-Net predicts the riskiness scores of traffic agents in the video. In support of this study, the paper also introduces a benchmark dataset called Risk Object Localization (ROL). The dataset contains spatial, temporal, and categorical annotations with accident, object, and scene-level attributes. The proposed AM-Net achieves a promising performance of 85.73% AUC on the ROL dataset and outperforms the current state-of-the-art for video anomaly detection on the DoTA dataset. A thorough ablation study further reveals AM-Net's merits by evaluating the contributions of its different components.
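A hedged sketch of the two-stream design described above: one GRU consumes bounding-box features, another consumes optical-flow features, and a soft-attention weighting over agents precedes a fused per-agent risk score. All dimensions are invented; this is not the published AM-Net code.

```python
import torch
import torch.nn as nn

class TwoStreamRiskScorer(nn.Module):
    """Sketch of an AM-Net-like design: a box-feature GRU, a flow-feature
    GRU, attention over agents, and a fused per-agent riskiness score."""
    def __init__(self, box_dim=4, flow_dim=8, hidden=32):
        super().__init__()
        self.box_gru = nn.GRU(box_dim, hidden, batch_first=True)
        self.flow_gru = nn.GRU(flow_dim, hidden, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, boxes, flows):
        # boxes: (agents, T, box_dim); flows: (agents, T, flow_dim)
        _, hb = self.box_gru(boxes)              # (1, agents, hidden)
        _, hf = self.flow_gru(flows)
        h = torch.cat([hb[0], hf[0]], dim=-1)    # (agents, 2*hidden)
        w = torch.softmax(self.attn(h), dim=0)   # attention over agents
        return torch.sigmoid(self.score(w * h))  # per-agent risk score

scores = TwoStreamRiskScorer()(torch.randn(6, 10, 4), torch.randn(6, 10, 8))
```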
Computer vision applications in intelligent transportation systems (ITS) and autonomous driving (AD) have gravitated towards deep neural network architectures in recent years. While performance seems to be improving on benchmark datasets, many real-world challenges are yet to be adequately considered in research. This paper conducts an extensive literature review on the applications of computer vision in ITS and AD, and discusses challenges related to data, models, and complex urban environments. The data challenges are associated with the collection and labeling of training data and its relevance to real-world conditions, bias inherent in datasets, the high volume of data needed to be processed, and privacy concerns. Deep learning (DL) models are commonly too complex for real-time processing on embedded hardware, lack explainability and generalizability, and are hard to test in real-world settings. Complex urban traffic environments have irregular lighting and occlusions, and surveillance cameras can be mounted at a variety of angles, gather dirt, and shake in the wind, while traffic conditions are highly heterogeneous, with rule violations and complex interactions in crowded scenarios. Some representative applications that suffer from these problems are traffic flow estimation, congestion detection, autonomous driving perception, vehicle interaction, and edge computing for practical deployment. Possible ways of dealing with these challenges are also explored, with an emphasis on practical deployment.
Speed-control prediction, a challenging problem in driver behavior analysis, aims to predict a driver's future actions in controlling vehicle speed, such as braking or acceleration. In this paper, we tackle this challenge using only egocentric video data, in contrast to the majority of works in the literature that use third-person-view data or extra vehicle sensor data such as GPS, or both. To this end, we propose a novel graph convolutional network (GCN) based architecture, EgoSpeed-Net. Our motivation is that the position changes of objects over time can provide very useful cues for forecasting future speed changes. We first model the spatial relations among the objects of each class, frame by frame, using fully connected graphs, on top of which GCNs are applied for feature extraction. We then utilize a long short-term memory network to fuse such features per class over time into a vector, concatenate these vectors, and predict a speed-control action using a multilayer perceptron classifier. We conduct extensive experiments on the Honda Research Institute Driving Dataset and demonstrate the superior performance of EgoSpeed-Net.
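The per-class step described above can be sketched as follows: a GCN pass over a fully connected graph of same-class objects in each frame, followed by an LSTM that fuses the per-frame features into a class-level vector. Feature sizes are assumptions, not EgoSpeed-Net's actual configuration.

```python
import torch
import torch.nn as nn

def fully_connected_adj(n):
    """Fully connected graph (with self-loops), row-normalized."""
    a = torch.ones(n, n)
    return a / a.sum(dim=1, keepdim=True)

class PerClassEncoder(nn.Module):
    """Sketch: GCN over a fully connected graph of same-class objects per
    frame, then an LSTM over frames yielding one class-level vector."""
    def __init__(self, in_dim=4, hidden=32):
        super().__init__()
        self.gcn = nn.Linear(in_dim, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, feats):                   # feats: (T, N, in_dim), one class
        adj = fully_connected_adj(feats.shape[1])
        h = torch.relu(adj @ self.gcn(feats))   # per-frame GCN pass
        out, _ = self.lstm(h.mean(dim=1).unsqueeze(0))
        return out[0, -1]                       # class-level vector over time

# vectors from each class would be concatenated and fed to an MLP classifier
v = PerClassEncoder()(torch.randn(10, 3, 4))    # 10 frames, 3 'car' objects
```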
Automated driving systems (ADS) open up a new domain for the automotive industry and offer new possibilities for future transportation with higher efficiency and more comfortable experiences. However, autonomous driving under adverse weather conditions has long been the problem that keeps autonomous vehicles (AVs) from reaching level 4 or higher autonomy. This paper assesses the influences and challenges that weather brings to ADS sensors in an analytic and statistical way, and surveys solutions for inclement weather conditions. State-of-the-art techniques for perception enhancement under each kind of weather are thoroughly reported. External auxiliary solutions such as V2X technology, along with weather-condition coverage in currently available datasets, simulators, and experimental facilities with weather chambers, are distinctly sorted out. By pointing out the major weather problems the autonomous driving field currently faces and reviewing both hardware and computer-science solutions from recent years, this survey outlines the remaining obstacles and the directions forward for driving under adverse weather conditions.
Motion prediction systems aim to capture the future behavior of traffic scenarios enabling autonomous vehicles to perform safe and efficient planning. The evolution of these scenarios is highly uncertain and depends on the interactions of agents with static and dynamic objects in the scene. GNN-based approaches have recently gained attention as they are well suited to naturally model these interactions. However, one of the main challenges that remains unexplored is how to address the complexity and opacity of these models in order to deal with the transparency requirements for autonomous driving systems, which include aspects such as interpretability and explainability. In this work, we aim to improve the explainability of motion prediction systems by using different approaches. First, we propose a new Explainable Heterogeneous Graph-based Policy (XHGP) model based on a heterograph representation of the traffic scene and lane-graph traversals, which learns interaction behaviors using object-level and type-level attention. This learned attention provides information about the most important agents and interactions in the scene. Second, we explore this same idea with the explanations provided by GNNExplainer. Third, we apply counterfactual reasoning to provide explanations of selected individual scenarios by exploring the sensitivity of the trained model to changes made to the input data, i.e., masking some elements of the scene, modifying trajectories, and adding or removing dynamic agents. The explainability analysis provided in this paper is a first step towards more transparent and reliable motion prediction systems, important from the perspective of users, developers, and regulatory agencies. The code to reproduce this work is publicly available at https://github.com/sancarlim/Explainable-MP/tree/v1.1.
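The counterfactual analysis described above can be illustrated with a simple sensitivity probe: mask one agent at a time and measure how much the prediction moves. `predict` below is a hypothetical stand-in for a trained motion-prediction model, not the XHGP code.

```python
import torch

def counterfactual_sensitivity(predict, scene):
    """Sketch of counterfactual probing: zero out one agent at a time and
    measure how much the prediction shifts. `predict` is a hypothetical
    stand-in for a trained motion-prediction model."""
    base = predict(scene)
    deltas = []
    for aid in range(scene.shape[0]):
        masked = scene.clone()
        masked[aid] = 0.0                       # counterfactual: agent removed
        deltas.append((predict(masked) - base).abs().mean().item())
    return deltas  # large delta => the masked agent mattered

# toy usage: 5 agents x 8 timesteps x (x, y); a fake 'model' sums the scene
scene = torch.randn(5, 8, 2)
print(counterfactual_sensitivity(lambda s: s.sum(dim=0), scene))
```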
Road safety has attracted significant attention from researchers and practitioners in the intelligent transportation systems field in recent years. As one of the most common road user groups, pedestrians raise serious concerns due to their unpredictable behaviors and movements, since a subtle misunderstanding in vehicle-pedestrian interaction can easily lead to risky situations or collisions. Existing methods use either predefined collision-based models or human-labeling approaches to estimate pedestrians' risks. These methods are usually limited by their poor generalization ability and the lack of consideration of interactions between the ego vehicle and pedestrians. This work addresses these problems by proposing a Pedestrian Risk Level Prediction (PRLP) system, which consists of three modules. First, vehicle-perspective pedestrian data are collected. Since the data contain information on the motion of both the ego vehicle and the pedestrians, the prediction of spatio-temporal features can be simplified in an interaction-aware fashion. Using a long short-term memory model, the pedestrian trajectory prediction module predicts the spatio-temporal features over the subsequent five frames. As the predicted trajectories follow certain interaction and risk patterns, a hybrid clustering and classification method is adopted to explore the risk patterns within the spatio-temporal features, and a risk-level classifier is trained on the learned patterns. Upon predicting the pedestrians' spatio-temporal features and identifying the corresponding risk levels, the risk patterns between the ego vehicle and pedestrians are determined. Experimental results verify the capability of the PRLP system to predict pedestrians' risk levels, thus supporting the collision risk assessment of intelligent vehicles and providing safety warnings to both vehicles and pedestrians.
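A minimal sketch of the hybrid clustering-and-classification step, assuming scikit-learn and placeholder features; the actual PRLP feature construction and risk-level definitions are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: predicted spatio-temporal features and risk levels 0..3.
rng = np.random.default_rng(0)
traj_feats = rng.normal(size=(200, 10))
risk_labels = rng.integers(0, 4, size=200)

# 1) discover risk patterns by clustering the predicted features
patterns = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(traj_feats)

# 2) train a risk-level classifier on features augmented with the pattern id
X = np.column_stack([traj_feats, patterns])
clf = RandomForestClassifier(random_state=0).fit(X, risk_labels)
```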
This paper proposes a driver-specific risk recognition framework for autonomous vehicles that can extract inter-vehicle interactions. This extraction is performed for urban driving scenarios in a driver-cognitive manner to improve the recognition accuracy of risky scenes. First, cluster analysis is applied to drivers' operation data to learn the subjective risk assessments of different drivers and to generate a corresponding risk label for each scene. Second, a graph representation model (GRM) is adopted to unify and construct the features of dynamic vehicles, inter-vehicle interactions, and static traffic markings in real driving scenes. The driver-specific risk labels provide ground truth that captures the risk evaluation criteria of different drivers, while the graph model represents multiple features of the driving scenes. Therefore, the proposed framework can learn the risk-evaluation patterns of driving scenes for different drivers and establish driver-specific risk identifiers. Finally, the performance of the proposed framework is evaluated via experiments conducted on real-world urban driving datasets collected from multiple drivers. The results show that the proposed framework can accurately identify risks and their levels in real driving environments.
The last decade witnessed increasingly rapid progress in self-driving vehicle technology, mainly backed up by advances in the area of deep learning and artificial intelligence. The objective of this paper is to survey the current state-of-the-art on deep learning technologies used in autonomous driving. We start by presenting AI-based self-driving architectures, convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm. These methodologies form a base for the surveyed driving scene perception, path planning, behavior arbitration and motion control algorithms. We investigate both the modular perception-planning-action pipeline, where each module is built using deep learning methods, as well as End2End systems, which directly map sensory information to steering commands. Additionally, we tackle current challenges encountered in designing AI architectures for autonomous driving, such as their safety, training data sources and computational hardware. The comparison presented in this survey helps to gain insight into the strengths and limitations of deep learning and AI approaches for autonomous driving and assist with design choices.
Studies have shown that autonomous vehicles (AVs) behave conservatively in a traffic environment composed of human drivers and do not adapt to local conditions and socio-cultural norms. It is known that socially aware AVs can be designed if there exists a mechanism to understand the behaviors of human drivers. We propose an approach that leverages machine learning to predict the behaviors of human drivers. This is similar to how humans implicitly interpret the behaviors of drivers on the road, by only observing the trajectories of their vehicles. We use graph-theoretic tools to extract driver behavior features from the trajectories, and machine learning to obtain a computational mapping between the extracted trajectory of a vehicle in traffic and its driver's behavior. Compared to existing approaches in this domain, we show that our approach is robust, general, and extendable to broad-ranging applications such as autonomous navigation. We evaluate our approach on real-world traffic datasets captured in the U.S., India, China, and Singapore, as well as in simulation.
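One way to picture the graph-theoretic feature extraction is the proximity-graph sketch below, which reads off centrality measures for each vehicle at a timestep. The radius and the choice of centralities are illustrative assumptions, not the paper's exact features.

```python
import networkx as nx

def behavior_features(positions, radius=15.0):
    """Sketch: build a proximity graph over vehicles at one timestep and
    use centrality measures as per-vehicle behavior features."""
    g = nx.Graph()
    g.add_nodes_from(positions)
    for u, (ux, uy) in positions.items():
        for v, (vx, vy) in positions.items():
            if u < v and (ux - vx) ** 2 + (uy - vy) ** 2 < radius ** 2:
                g.add_edge(u, v)
    return nx.degree_centrality(g), nx.closeness_centrality(g)

deg, close = behavior_features({"a": (0, 0), "b": (5, 5), "c": (40, 40)})
```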
The rapid advancement of sensor technologies and artificial intelligence is creating new opportunities for traffic safety enhancement. Dashboard cameras (dashcams) have been widely deployed on both human-driven and automated vehicles. A computational intelligence model that can accurately and promptly predict accidents from dashcam videos would enhance preparedness for accident prevention. However, the spatio-temporal interactions of traffic agents are complex, and the visual cues for predicting a future accident are embedded deeply in dashcam video data; therefore, the early anticipation of traffic accidents remains a challenge. Inspired by humans' attentive behavior when visually perceiving accident risks, this paper proposes a Dynamic Spatial-Temporal Attention (DSTA) network for the early anticipation of accidents from dashcam videos. The DSTA network learns to select discriminative temporal segments of a video sequence with a Dynamic Temporal Attention (DTA) module, and to focus on the informative spatial regions of frames with a Dynamic Spatial Attention (DSA) module. A Gated Recurrent Unit (GRU) is trained jointly with the attention modules to predict the probability of a future accident. Evaluation of the DSTA network on two benchmark datasets confirms that it exceeds state-of-the-art performance, and a thorough component-level ablation study reveals how the network achieves this performance. Furthermore, this paper proposes a method to fuse the prediction scores from two complementary models and verifies its effectiveness in further boosting the performance of early accident anticipation.
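A compact sketch of attention-weighted anticipation in the spirit of DSTA: soft spatial attention over per-frame region features, then a GRU emitting an accident probability at every step. The separate dynamic temporal attention module and the two-model score fusion are omitted, and all sizes are invented.

```python
import torch
import torch.nn as nn

class SoftAttentionGRU(nn.Module):
    """Sketch: per-frame soft attention over spatial region features,
    then a GRU that emits an accident probability at every step."""
    def __init__(self, feat=64, hidden=64):
        super().__init__()
        self.spatial_attn = nn.Linear(feat, 1)
        self.gru = nn.GRU(feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, regions):                  # (T, R, feat) region features
        w = torch.softmax(self.spatial_attn(regions), dim=1)
        frames = (w * regions).sum(dim=1)        # attended frame feature (T, feat)
        out, _ = self.gru(frames.unsqueeze(0))
        return torch.sigmoid(self.head(out[0]))  # per-frame accident probability

probs = SoftAttentionGRU()(torch.randn(20, 49, 64))  # 20 frames, 7x7 regions
```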
Path prediction is an essential task for many real-world Cyber-Physical Systems (CPS) applications, from autonomous driving and traffic monitoring/management to pedestrian/worker safety. These real-world CPS applications need a robust, lightweight path prediction that can provide a universal network architecture for multiple subjects (e.g., pedestrians and vehicles) from different perspectives. However, most existing algorithms are tailor-made for a unique subject with a specific camera perspective and scenario. This article presents Pishgu, a universal lightweight network architecture, as a robust and holistic solution for path prediction. Pishgu's architecture can adapt to multiple path prediction domains with different subjects (vehicles, pedestrians), perspectives (bird's-eye, high-angle), and scenes (sidewalk, highway). Our proposed architecture captures the inter-dependencies within the subjects in each frame by taking advantage of Graph Isomorphism Networks and the attention module. We separately train and evaluate the efficacy of our architecture on three different CPS domains across multiple perspectives (vehicle bird's-eye view, pedestrian bird's-eye view, and human high-angle view). Pishgu outperforms state-of-the-art solutions in the vehicle bird's-eye view domain by 42% and 61% and pedestrian high-angle view domain by 23% and 22% in terms of ADE and FDE, respectively. Additionally, we analyze the domain-specific details for various datasets to understand their effect on path prediction and model interpretation. Finally, we report the latency and throughput for all three domains on multiple embedded platforms showcasing the robustness and adaptability of Pishgu for real-world integration into CPS applications.
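For reference, a minimal Graph Isomorphism Network layer of the kind Pishgu builds on follows the update h_v' = MLP((1 + eps) * h_v + sum of neighbor features). Sizes below are illustrative, not Pishgu's actual configuration.

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """Minimal GIN layer: h_v' = MLP((1 + eps) * h_v + sum over neighbors)."""
    def __init__(self, dim=32):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, h, adj):                   # h: (N, dim); adj: (N, N) binary
        return self.mlp((1 + self.eps) * h + adj @ h)

# toy usage: 4 subjects in a frame, fully connected without self-loops
adj = torch.ones(4, 4) - torch.eye(4)
h = GINLayer()(torch.randn(4, 32), adj)
```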
Multi-modal fusion is a fundamental task in autonomous driving perception that has attracted the interest of many scholars in recent years. Current multi-modal fusion methods mainly focus on camera and LiDAR data, but pay little attention to the kinematic information provided by the vehicle's bottom-level sensors, such as acceleration, vehicle speed, and angle of rotation. This information is not affected by complex external scenes, so it is more robust and reliable. In this paper, we introduce the existing application fields of vehicle bottom information and the research progress of related methods, as well as multi-modal fusion methods based on bottom information. We also describe the relevant vehicle bottom information datasets in detail to facilitate future research. In addition, new ideas for multi-modal fusion technology in autonomous driving tasks are proposed to promote the further utilization of vehicle bottom information.
Proper functioning of connected and automated vehicles (CAVs) is crucial for the safety and efficiency of future intelligent transport systems. Meanwhile, transitioning to fully autonomous driving requires a long period of mixed autonomy traffic, including both CAVs and human-driven vehicles. Thus, collaboration decision-making for CAVs is essential to generate appropriate driving behaviors to enhance the safety and efficiency of mixed autonomy traffic. In recent years, deep reinforcement learning (DRL) has been widely used in solving decision-making problems. However, the existing DRL-based methods have been mainly focused on solving the decision-making of a single CAV. Using the existing DRL-based methods in mixed autonomy traffic cannot accurately represent the mutual effects of vehicles and model dynamic traffic environments. To address these shortcomings, this article proposes a graph reinforcement learning (GRL) approach for multi-agent decision-making of CAVs in mixed autonomy traffic. First, a generic and modular GRL framework is designed. Then, a systematic review of DRL and GRL methods is presented, focusing on the problems addressed in recent research. Moreover, a comparative study on different GRL methods is further proposed based on the designed framework to verify the effectiveness of GRL methods. Results show that the GRL methods can well optimize the performance of multi-agent decision-making for CAVs in mixed autonomy traffic compared to the DRL methods. Finally, challenges and future research directions are summarized. This study can provide a valuable research reference for solving the multi-agent decision-making problems of CAVs in mixed autonomy traffic and can promote the implementation of GRL-based methods into intelligent transportation systems. The source code of our work can be found at https://github.com/Jacklinkk/Graph_CAVs.
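A hedged sketch of the GRL idea described above: encode per-vehicle observations, share information along graph edges, and output per-CAV Q-values over a small discrete action set. The action set and all sizes are assumptions, not the framework from the paper's repository.

```python
import torch
import torch.nn as nn

class GraphQNet(nn.Module):
    """Sketch: graph-encoded traffic state -> one message-passing step ->
    per-CAV Q-values over a discrete action set (e.g., lane-change
    left/keep/right)."""
    def __init__(self, obs_dim=6, hidden=32, n_actions=3):
        super().__init__()
        self.enc = nn.Linear(obs_dim, hidden)
        self.q = nn.Linear(hidden, n_actions)

    def forward(self, obs, adj):                 # obs: (N, obs_dim); adj: (N, N)
        h = torch.relu(adj @ self.enc(obs))      # share information among vehicles
        return self.q(h)                         # (N, n_actions) Q-values per agent

q = GraphQNet()(torch.randn(5, 6), torch.softmax(torch.randn(5, 5), dim=-1))
actions = q.argmax(dim=-1)                       # greedy joint action
```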
Traffic accident prediction in driving videos aims to provide an early warning of the accident occurrence, and supports the decision making of safe driving systems. Previous works usually concentrate on the spatial-temporal correlation of object-level context, while they do not fit the inherent long-tailed data distribution well and are vulnerable to severe environmental change. In this work, we propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition of text description on the visual observation and the driver attention to facilitate model training. In particular, the text description provides a dense semantic description guidance for the primary context of the traffic scene, while the driver attention provides a traction to focus on the critical region closely correlating with safe driving. CAP is formulated by an attentive text-to-vision shift fusion module, an attentive scene context transfer module, and the driver attention guided accident prediction module. We leverage the attention mechanism in these modules to explore the core semantic cues for accident prediction. In order to train CAP, we extend an existing self-collected DADA-2000 dataset (with annotated driver attention for each frame) with further factual text descriptions for the visual observations before the accidents. Besides, we construct a new large-scale benchmark consisting of 11,727 in-the-wild accident videos with over 2.19 million frames (named CAP-DATA) together with labeled fact-effect-reason-introspection descriptions and temporal accident frame labels. Based on extensive experiments, the superiority of CAP is validated compared with state-of-the-art approaches. The code, CAP-DATA, and all results will be released in \url{https://github.com/JWFanggit/LOTVS-CAP}.
Evaluating and improving planning for autonomous vehicles requires scalable generation of long-tail traffic scenarios. To be useful, these scenarios must be realistic and challenging, but not impossible to drive through safely. In this work, we introduce STRIVE, a method to automatically generate challenging scenarios that cause a given planner to produce undesirable behavior, such as collisions. To maintain scenario plausibility, the key idea is to leverage a learned model of traffic motion in the form of a graph-based conditional VAE. Scenario generation is formulated as an optimization in the latent space of this traffic model, perturbing an initial real-world scene to produce trajectories that collide with a given planner. A subsequent optimization is used to find a "solution" to the scenario, ensuring it is useful for improving the given planner. Further analysis clusters the generated scenarios based on collision type. We attack two planners and show that STRIVE successfully generates realistic, challenging scenarios in both cases. We additionally "close the loop" and use these scenarios to optimize the hyperparameters of a rule-based planner.
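The latent-space attack can be sketched as a gradient ascent over the traffic model's latent code. `decode` and `planner_cost` below are hypothetical stand-ins for the learned traffic decoder and a differentiable proxy for planner failure; STRIVE's actual objective also includes plausibility terms not shown here.

```python
import torch

def adversarial_latent_search(decode, planner_cost, z0, steps=50, lr=0.05):
    """Sketch of STRIVE-style latent optimization: start from the latent
    code of a real scene and ascend a collision objective so the decoded
    traffic drives the planner toward failure."""
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = -planner_cost(decode(z))          # maximize planner failure
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()

# toy usage with stand-in decoder and cost functions
z_adv = adversarial_latent_search(lambda z: z * 2.0,
                                  lambda traj: traj.pow(2).mean(),
                                  torch.randn(8))
```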
Speed estimation of an ego vehicle is crucial to enable autonomous driving and advanced driver assistance technologies. Due to functional and legacy issues, conventional methods depend on in-car sensors to extract vehicle speed through the Controller Area Network bus. However, it is desirable to have modular systems that are not susceptible to external sensors to execute perception tasks. In this paper, we propose a novel 3D-CNN with masked-attention architecture to estimate ego vehicle speed using a single front-facing monocular camera. To demonstrate the effectiveness of our method, we conduct experiments on two publicly available datasets, nuImages and KITTI. We also demonstrate the efficacy of masked-attention by comparing our method with a traditional 3D-CNN.
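A minimal sketch of a 3D-CNN with a masked-attention gate for ego-speed regression from a monocular clip; the gating form and all sizes are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class Masked3DCNN(nn.Module):
    """Sketch: 3D convolution over a clip, a learned attention mask that
    gates uninformative voxels, and a regression head for ego speed."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(3, 8, kernel_size=3, padding=1)
        self.mask = nn.Conv3d(8, 1, kernel_size=1)   # learned attention mask
        self.head = nn.Linear(8, 1)

    def forward(self, clip):                     # clip: (B, 3, T, H, W)
        f = torch.relu(self.conv(clip))
        gated = f * torch.sigmoid(self.mask(f))  # suppress uninformative voxels
        return self.head(gated.mean(dim=(2, 3, 4)))  # scalar speed estimate

speed = Masked3DCNN()(torch.randn(1, 3, 8, 64, 64))
```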
Action detection and public traffic safety are crucial aspects of a safe community and a better society. Monitoring traffic flows in a smart city using different surveillance cameras can play a significant role in recognizing accidents and alerting first responders. The utilization of action recognition (AR) in computer vision tasks has contributed to high-precision applications in video surveillance, medical imaging, and digital signal processing. This paper presents an intensive review focusing on action recognition for accident detection and autonomous transportation systems in smart cities. We focus on AR systems that use diverse sources of traffic video capture, such as static surveillance cameras at traffic intersections, highway monitoring cameras, drone cameras, and dashcams. Through this review, we identify the primary techniques, taxonomies, and algorithms used in AR for autonomous transportation and accident detection. We also examine the datasets utilized in AR tasks, identifying their main sources and features. This paper provides potential research directions for developing and integrating accident detection systems for autonomous cars and public traffic safety systems, by alerting emergency personnel and law enforcement in the event of road accidents, minimizing human error in accident reporting, and providing a spontaneous response to victims.
Prediction of pedestrian behavior is critical for fully autonomous vehicles to drive safely and efficiently on busy urban streets. Future autonomous cars need to fit into mixed conditions with not only technical but also social capabilities. While more algorithms and datasets have been developed to predict pedestrian behaviors, these efforts lack benchmark labels and the capability to estimate pedestrians' temporal-dynamic intent changes, to provide explanations of the interaction scenes, and to support algorithms with social intelligence. This paper proposes and shares a benchmark dataset called the IUPUI-CSRC Pedestrian Situated Intent (PSI) data, which has two innovative labels besides comprehensive computer vision labels. The first novel label is the dynamic intent change of pedestrians crossing in front of the ego vehicle, obtained from 24 drivers with diverse backgrounds. The second is text-based explanations of the driver's reasoning process when estimating pedestrian intents and predicting their behaviors during the interaction period. These innovative labels can enable several computer vision tasks, including pedestrian intent/behavior prediction, vehicle-pedestrian interaction segmentation, and video-to-language mapping for explainable algorithms. The released dataset can fundamentally improve the development of pedestrian behavior prediction models and help develop socially intelligent autonomous cars that interact with pedestrians efficiently. The dataset has been evaluated on different tasks and is released for public access.