Compared with other technologies such as inductive loops, radar, or lasers, the use of cameras for vehicle speed measurement is much more cost-effective. However, accurate speed measurement remains a challenge due to the inherent limitations of cameras in providing accurate range estimates. In addition, classical vision-based methods are very sensitive to the extrinsic calibration between the camera and the road. In this context, the use of data-driven approaches is an interesting alternative. However, data collection requires a complex and expensive setup to record video from cameras synchronized with a high-precision speed sensor in order to generate ground-truth speed values. It has recently been demonstrated that driving simulators (e.g., CARLA) can serve as a powerful alternative for generating large synthetic datasets for the application of vehicle speed estimation from a single camera. In this paper, we study the same problem using multiple cameras at different virtual locations and with different extrinsic parameters. We address the question of whether a complex 3D-CNN architecture is able to implicitly learn view-invariant speeds using a single model, or whether view-specific models are more appropriate. The results are very promising, as they show that a single model trained with data from multiple views reports better accuracy than camera-specific models, paving the way towards a view-invariant vehicle speed measurement system.
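The abstract leaves the architecture unspecified; as a rough, hedged sketch of the single-model setting it describes, a minimal 3D-CNN that regresses speed from a short clip taken from any camera view could look like the following (all layer sizes and the class name are assumptions, not the authors' design):

```python
import torch
import torch.nn as nn

class SpeedRegressor3D(nn.Module):
    """Minimal 3D-CNN mapping a short video clip to a scalar speed.

    Trained on clips pooled from every camera view, this corresponds to
    the "single model" setting; training one copy per camera corresponds
    to the view-specific setting.
    """

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)  # regress speed, e.g. in km/h

    def forward(self, clip):  # clip: (batch, 3, frames, height, width)
        return self.head(self.features(clip).flatten(1))

model = SpeedRegressor3D()
clip = torch.randn(2, 3, 16, 112, 112)  # two 16-frame RGB clips
print(model(clip).shape)  # torch.Size([2, 1])
```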
Accurate speed estimation of road vehicles is important for several reasons. One is speed limit enforcement, which represents a crucial tool in decreasing traffic accidents and fatalities. Compared with other research areas and domains, the number of available datasets for vehicle speed estimation is still very limited. We present a dataset of on-road audio-video recordings of single vehicles passing by a camera at known speeds, kept stable by the on-board cruise control. The dataset contains thirteen vehicles, selected to be as diverse as possible in terms of manufacturer, production year, engine type, power, and transmission, resulting in a total of 400 annotated audio-video recordings. The dataset is fully available and intended as a public benchmark to facilitate research in audio-video vehicle speed estimation. In addition to the dataset, we propose a cross-validation strategy for use with machine learning models for vehicle speed estimation. Two approaches to the training-validation split of the dataset are proposed.
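As an illustration of a training-validation split for such a dataset (one plausible reading, not necessarily either of the two approaches proposed in the paper), holding out whole vehicles prevents a model from memorizing vehicle-specific cues:

```python
from collections import defaultdict

def leave_vehicles_out_splits(recordings, n_folds):
    """Split (vehicle_id, recording) pairs so each fold holds out whole vehicles.

    Holding out entire vehicles keeps a model from exploiting
    vehicle-specific cues (engine sound, appearance) during validation.
    """
    by_vehicle = defaultdict(list)
    for vehicle_id, rec in recordings:
        by_vehicle[vehicle_id].append(rec)

    vehicles = sorted(by_vehicle)
    folds = [vehicles[i::n_folds] for i in range(n_folds)]
    for held_out in folds:
        train = [r for v in vehicles if v not in held_out for r in by_vehicle[v]]
        val = [r for v in held_out for r in by_vehicle[v]]
        yield train, val

# Toy usage: 13 vehicles, a few recordings each (names are hypothetical).
recs = [(v, f"vehicle{v:02d}_run{r}.mp4") for v in range(13) for r in range(3)]
for i, (train, val) in enumerate(leave_vehicles_out_splits(recs, n_folds=4)):
    print(f"fold {i}: {len(train)} train / {len(val)} val recordings")
```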
The environment perception of autonomous vehicles is limited by the range of their physical sensors and the performance of their algorithms, as well as by occlusions that degrade their understanding of the ongoing traffic situation. This not only poses a significant threat to safety and limits driving speed, but can also lead to inconvenient maneuvers. Intelligent infrastructure systems can help alleviate these problems. They can fill the gaps in a vehicle's perception and extend its field of view by providing additional detailed information about its surroundings in the form of a digital model of the current traffic situation, i.e., a digital twin. However, detailed descriptions of such systems and working prototypes demonstrating their feasibility are scarce. In this paper, we propose a hardware and software architecture that enables such a reliable intelligent infrastructure system. We have implemented the system in the real world and demonstrate that it is able to create an accurate digital twin of an extended highway stretch, thus improving the perception of autonomous vehicles beyond the limits of their on-board sensors. Furthermore, we evaluate the accuracy and reliability of the digital twin using aerial imagery and earth observation methods for generating ground-truth data.
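The paper describes the system at the architecture level; as a loose sketch of the per-object state such a digital twin might broadcast to vehicles, with all field names and units invented for illustration:

```python
from dataclasses import dataclass, asdict
import json, time

@dataclass
class TwinObject:
    """One tracked road user in the infrastructure's digital twin."""
    track_id: int
    category: str        # e.g. "car", "truck", "pedestrian"
    x_m: float           # world coordinates along the highway stretch, meters
    y_m: float
    heading_rad: float
    speed_mps: float
    timestamp_s: float

def twin_message(objects):
    """Serialize the current traffic snapshot for broadcast to vehicles."""
    return json.dumps({"stamp": time.time(),
                       "objects": [asdict(o) for o in objects]})

snapshot = [TwinObject(7, "car", 412.3, -1.8, 0.02, 31.5, time.time())]
print(twin_message(snapshot))
```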
Computer vision applications in intelligent transportation systems (ITS) and autonomous driving (AD) have gravitated towards deep neural network architectures in recent years. While performance seems to be improving on benchmark datasets, many real-world challenges are yet to be adequately considered in research. This paper conducts an extensive literature review on the applications of computer vision in ITS and AD, and discusses challenges related to data, models, and complex urban environments. The data challenges are associated with the collection and labeling of training data and its relevance to real-world conditions, the bias inherent in datasets, the high volume of data to be processed, and privacy concerns. Deep learning (DL) models are commonly too complex for real-time processing on embedded hardware, lack explainability and generalizability, and are hard to test in real-world settings. Complex urban traffic environments have irregular lighting and occlusions, and surveillance cameras may be mounted at a variety of angles, gather dirt, and shake in the wind, while traffic conditions are highly heterogeneous, with rule violations and complex interactions in crowded scenarios. Some representative applications that suffer from these problems are traffic flow estimation, congestion detection, autonomous driving perception, vehicle interaction, and edge computing for practical deployment. Possible ways of dealing with these challenges are also explored, with practical deployment prioritized.
Computer vision plays a significant role in intelligent transportation systems (ITS) and traffic surveillance. With the rapid growth of automated vehicles and increasingly crowded cities, automated and advanced traffic management systems (ATMS) can be enabled by deploying deep neural networks on existing video surveillance infrastructure. In this study, we provide a practical platform for real-time traffic monitoring, including 3D vehicle/pedestrian detection, speed detection, trajectory estimation, congestion detection, and monitoring of interactions between vehicles and pedestrians, all using a single CCTV traffic camera. We adapt a customized YOLOv5 deep neural network model for vehicle/pedestrian detection and an enhanced SORT tracking algorithm. A hybrid satellite-ground based inverse perspective mapping (SG-IPM) method is also developed for automatic camera calibration, leading to accurate 3D object detection and visualization. We also develop a hierarchical traffic modeling solution based on short- and long-term temporal video data streams to understand the traffic flow, bottlenecks, and dangerous spots for vulnerable road users. Several experiments on real-world scenarios, including comparisons with the state of the art, are conducted using various traffic monitoring datasets, including MIO-TCD, UA-DETRAC, and GRAM-RTM, collected from highways, intersections, and urban areas under different lighting and weather conditions.
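The SG-IPM method itself is not detailed in the abstract, but the core of any inverse perspective mapping is a homography between image pixels and ground-plane coordinates. A minimal OpenCV sketch with four hypothetical image-to-ground correspondences (all coordinates made up):

```python
import cv2
import numpy as np

# Four correspondences between CCTV image pixels and ground-plane
# coordinates (e.g., read off a satellite image). Values are invented.
image_pts = np.float32([[420, 710], [860, 705], [640, 380], [505, 385]])
ground_pts = np.float32([[0.0, 0.0], [7.0, 0.0], [7.0, 60.0], [0.0, 60.0]])  # meters

H, _ = cv2.findHomography(image_pts, ground_pts)

def pixel_to_ground(u, v):
    """Project an image pixel onto the road plane via the homography."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

print(pixel_to_ground(640, 520))  # ground position of a detected vehicle
```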
Smart city applications such as intelligent traffic routing or accident prevention rely on computer vision methods for exact vehicle localization and tracking. Due to the scarcity of accurately labeled data, detecting and tracking vehicles in 3D from multiple cameras proves challenging to explore. We present a massive synthetic dataset for multiple vehicle tracking and segmentation in multiple overlapping and non-overlapping camera views. Unlike existing datasets, which only provide tracking ground truth for 2D bounding boxes, our dataset additionally contains perfect labels for 3D bounding boxes in camera and world coordinates, depth estimation, and instance, semantic, and panoptic segmentation. The dataset consists of 17 hours of labeled video material, recorded from 340 cameras in 64 diverse day, rain, dawn, and night scenes, making it the most extensive dataset for multi-target multi-camera tracking to date. We provide baselines for detection, vehicle re-identification, and single-camera tracking. Code and data are publicly available.
The last decade witnessed increasingly rapid progress in self-driving vehicle technology, mainly backed by advances in the area of deep learning and artificial intelligence. The objective of this paper is to survey the current state-of-the-art on deep learning technologies used in autonomous driving. We start by presenting AI-based self-driving architectures, convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm. These methodologies form a base for the surveyed driving scene perception, path planning, behavior arbitration and motion control algorithms. We investigate both the modular perception-planning-action pipeline, where each module is built using deep learning methods, as well as End2End systems, which directly map sensory information to steering commands. Additionally, we tackle current challenges encountered in designing AI architectures for autonomous driving, such as their safety, training data sources and computational hardware. The comparison presented in this survey helps to gain insight into the strengths and limitations of deep learning and AI approaches for autonomous driving and assist with design choices.
Modeling perception sensors is key for simulation-based testing of automated driving functions. Beyond weather conditions themselves, sensors are also subjected to object-dependent environmental influences like tire spray caused by vehicles moving on wet pavement. In this work, a novel modeling approach for spray in lidar data is introduced. The model conforms to the Open Simulation Interface (OSI) standard and is based on the formation of detection clusters within a spray plume. The detections are rendered with a simple custom ray casting algorithm without the need for a fluid dynamics simulation or physics engine. The model is subsequently used to generate training data for object detection algorithms. It is shown that the model significantly improves detection in real-world spray scenarios. Furthermore, a systematic real-world dataset is recorded and published for analysis, model calibration, and validation of spray effects in active perception sensors. Experiments are conducted on a test track by driving over artificially watered pavement with varying vehicle speeds, vehicle types, and levels of pavement wetness. All models and data of this work are available open source.
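A crude sketch of the stated idea, namely sampling detection clusters inside a spray plume with simple geometric rules instead of a fluid dynamics simulation, might look like the following (the plume shape and all noise parameters are invented, not the paper's calibrated model):

```python
import numpy as np

rng = np.random.default_rng(0)

def spray_detections(vehicle_x, vehicle_speed, n_clusters=3):
    """Sample lidar-like spray detections behind a vehicle on wet pavement.

    Each cluster sits inside a cone-shaped plume whose length grows with
    vehicle speed; individual points scatter around the cluster center.
    """
    plume_length = 0.4 * vehicle_speed            # meters, grows with speed
    points = []
    for _ in range(n_clusters):
        d = rng.uniform(0.5, plume_length)        # distance behind the vehicle
        center = np.array([vehicle_x - d,
                           rng.normal(0.0, 0.1 * d),  # cone widens with d
                           rng.uniform(0.1, 0.8)])    # height above road
        n_pts = rng.integers(5, 20)
        points.append(center + rng.normal(0.0, 0.15, size=(n_pts, 3)))
    return np.vstack(points)

cloud = spray_detections(vehicle_x=20.0, vehicle_speed=25.0)
print(cloud.shape)  # (N, 3) synthetic spray points for training data
```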
Speed estimation of an ego vehicle is crucial to enable autonomous driving and advanced driver assistance technologies. Due to functional and legacy issues, conventional methods depend on in-car sensors to extract vehicle speed through the Controller Area Network bus. However, it is desirable to have modular systems that do not rely on external sensors to execute perception tasks. In this paper, we propose a novel 3D-CNN with masked-attention architecture to estimate ego vehicle speed using a single front-facing monocular camera. To demonstrate the effectiveness of our method, we conduct experiments on two publicly available datasets, nuImages and KITTI. We also demonstrate the efficacy of masked attention by comparing our method with a traditional 3D-CNN.
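The exact masked-attention design is not given in the abstract; one common reading, multiplying 3D feature maps by a relevance mask before pooling, is sketched below as an assumed, illustrative layout rather than the authors' network:

```python
import torch
import torch.nn as nn

class MaskedAttention3DCNN(nn.Module):
    """3D-CNN ego-speed regressor with a simple multiplicative mask."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, clip, mask):
        # clip: (B, 3, T, H, W); mask: (B, 1, T, H, W) in [0, 1],
        # e.g. from a segmentation of road or moving objects.
        feats = self.conv(clip) * mask          # suppress irrelevant regions
        pooled = feats.mean(dim=(2, 3, 4))      # global average pool
        return self.head(pooled)

model = MaskedAttention3DCNN()
clip = torch.randn(2, 3, 8, 64, 64)
mask = (torch.rand(2, 1, 8, 64, 64) > 0.5).float()
print(model(clip, mask).shape)  # torch.Size([2, 1])
```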
Multi-modal fusion is a basic task of autonomous driving system perception, and it has attracted many scholars' interest in recent years. Current multi-modal fusion methods mainly focus on camera data and LiDAR data, but pay little attention to the kinematic information provided by the bottom sensors of the vehicle, such as acceleration, vehicle speed, and angle of rotation. This information is not affected by complex external scenes, so it is more robust and reliable. In this paper, we introduce the existing application fields of vehicle bottom information and the research progress of related methods, as well as multi-modal fusion methods based on bottom information. We also describe the relevant vehicle bottom information datasets in detail to facilitate further research. In addition, new ideas for future multi-modal fusion technology for autonomous driving tasks are proposed to promote the further utilization of vehicle bottom information.
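As a generic illustration of the fusion concept discussed here (not any specific surveyed method), chassis signals can be embedded and concatenated with visual features before a task head:

```python
import torch
import torch.nn as nn

class KinematicFusion(nn.Module):
    """Fuse image features with chassis signals (speed, acceleration, angle)."""

    def __init__(self, img_dim=128, kin_dim=3, out_dim=64):
        super().__init__()
        self.kin_mlp = nn.Sequential(nn.Linear(kin_dim, 32), nn.ReLU())
        self.head = nn.Linear(img_dim + 32, out_dim)

    def forward(self, img_feat, kinematics):
        # kinematics: (B, 3) = [speed, acceleration, angle of rotation];
        # unaffected by lighting or weather, hence a robust complement to vision.
        fused = torch.cat([img_feat, self.kin_mlp(kinematics)], dim=1)
        return self.head(fused)

fusion = KinematicFusion()
print(fusion(torch.randn(4, 128), torch.randn(4, 3)).shape)  # (4, 64)
```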
This work presents a novel method for predicting vehicle trajectories in highway scenarios using an efficient bird's-eye-view representation and convolutional neural networks. Using a basic visual representation, vehicle positions, motion histories, road configuration, and vehicle interactions are easily included in the prediction model. A U-Net model has been selected as the prediction kernel to generate a future visual representation of the scene using an image-to-image regression approach. A method has been implemented to extract vehicle positions from the generated graphical representation with sub-pixel resolution. The method has been trained and evaluated using the PREVENTION dataset, an on-board sensor dataset. Different network configurations and scene representations have been evaluated. This study found that a U-Net with 6 depth levels, using a linear terminal layer and a Gaussian representation of the vehicles, is the best-performing configuration. The use of lane markings was found not to improve prediction performance. The average prediction error is 0.47 and 0.38 meters, and the final prediction error is 0.76 and 0.53 meters for the longitudinal and lateral coordinates, respectively, for a predicted trajectory length of 2.0 seconds. The prediction error is up to 50% lower compared with baseline methods.
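Since vehicles are rendered as Gaussians and positions are recovered at sub-pixel resolution, a standard extraction technique (assumed here, as the paper's exact procedure is not in the abstract) is an intensity-weighted centroid around the heatmap argmax:

```python
import numpy as np

def subpixel_peak(heatmap, window=2):
    """Locate a Gaussian blob with sub-pixel accuracy.

    Takes the argmax and refines it with the intensity-weighted centroid
    of a small window, recovering fractional row/column coordinates.
    """
    r, c = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    r0, r1 = max(r - window, 0), min(r + window + 1, heatmap.shape[0])
    c0, c1 = max(c - window, 0), min(c + window + 1, heatmap.shape[1])
    patch = heatmap[r0:r1, c0:c1]
    rows, cols = np.mgrid[r0:r1, c0:c1]
    total = patch.sum()
    return (rows * patch).sum() / total, (cols * patch).sum() / total

# Synthetic check: a Gaussian centered between pixels at (10.3, 20.7).
y, x = np.mgrid[0:32, 0:32]
blob = np.exp(-((y - 10.3) ** 2 + (x - 20.7) ** 2) / 2.0)
print(subpixel_peak(blob))  # approximately (10.3, 20.7)
```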
Automated driving systems (ADS) open up a new domain for the automotive industry and new possibilities for future transportation with higher efficiency and more comfortable experiences. However, autonomous driving under adverse weather conditions has long been the problem that keeps autonomous vehicles (AVs) from reaching level 4 or higher autonomy. This paper assesses the influences and challenges that weather brings to ADS sensors in an analytical and statistical way, and surveys solutions for inclement weather conditions. State-of-the-art techniques for perception enhancement with regard to each kind of weather are thoroughly reported. External auxiliary solutions such as V2X technology, as well as the coverage of weather conditions in currently available datasets, simulators, and experimental facilities with weather chambers, are distinctly sorted out. By pointing out the various major weather problems the autonomous driving field is currently facing, and by reviewing both hardware and computer-science solutions from recent years, this survey outlines the obstacles and directions of ADS development with respect to adverse weather driving conditions.
Cameras are the primary sensor in automated driving systems. They provide high information density and are optimal for detecting the road infrastructure cues laid out for human vision. Surround-view camera systems typically comprise four fisheye cameras with a 190°+ field of view, covering the entire 360° around the vehicle and focused on near-field sensing. They are the principal sensors for low-speed, high-accuracy, close-range sensing applications such as automated parking, traffic jam assistance, and low-speed emergency braking. In this work, we provide a detailed survey of such vision systems, organized around an architecture that can be decomposed into four modular components, namely Recognition, Reconstruction, Relocalization, and Reorganization, which we jointly call the 4R architecture. We discuss how each component accomplishes a specific aspect and provide a positional argument that they can be synergistically organized to form a complete perception system for low-speed automation. We support this argument by presenting results from previous works and by presenting an architecture proposal for such a system. Qualitative results are presented in the video at https://youtu.be/ae8bcof7777uy.
Figure 1. The SYNTHIA Dataset: a sample frame (left) with its semantic labels (center) and a general view of the city (right).
Safe deployment of self-driving cars (SDCs) necessitates thorough simulated and in-field testing. Most testing techniques consider virtualized SDCs within a simulation environment, whereas less effort has been directed at assessing whether such techniques transfer to and are effective with a physical real-world vehicle. In this paper, we leverage the Donkey Car open-source framework to empirically compare testing of SDCs when deployed on a physical small-scale vehicle versus its virtual simulated counterpart. In our empirical study, we investigate the transferability of behavior and failure exposure between virtual and real-world environments across a large set of corrupted and adversarial settings. While a large number of testing results do transfer between virtual and physical environments, we also identified critical shortcomings that contribute to the reality gap between the virtual and physical world, threatening the potential of existing testing solutions when applied to physical SDCs.
Detecting, predicting, and alleviating traffic congestion are targeted at improving the level of service of transportation networks. With increasing access to larger datasets of higher resolution, the relevance of deep learning for such tasks is growing. Several comprehensive survey papers in recent years have summarized deep learning applications in the transportation domain. However, the system dynamics of transportation networks vary greatly between the non-congested state and the congested state, thereby necessitating a clear understanding of the challenges specific to congestion prediction. In this survey, we present the current state of deep learning applications in the tasks related to detection, prediction, and alleviation of congestion. Recurring and non-recurring congestion are discussed separately. Our survey leads us to uncover inherent challenges and gaps in the current state of research. Finally, we present some suggestions for future research directions as answers to the identified challenges.
The core task of any autonomous driving system is to transform sensory inputs into driving commands. In end-to-end driving, this is achieved via a neural network, with one or multiple cameras as the most commonly used input and low-level driving commands, e.g., the steering angle, as output. However, depth sensing has been shown in simulation to make the end-to-end driving task easier. On a real car, combining depth and visual information can be challenging due to the difficulty of obtaining good spatial and temporal alignment of the sensors. To alleviate alignment problems, lidars can output surround-view lidar images with depth, intensity, and ambient radiation channels. These measurements originate from the same sensor, rendering them perfectly aligned in time and space. We demonstrate that such lidar images are sufficient for the real-car road-following task and perform at least as well as camera-based models in the tested conditions, with the difference growing when the models must generalize to new weather conditions. In a second direction of study, we reveal that the temporal smoothness of off-policy prediction sequences correlates with actual driving ability as strongly as the commonly used mean absolute error.
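A minimal sketch of the input representation described above, stacking the depth, intensity, and ambient channels into a lidar image and regressing steering with a small network (all sizes invented):

```python
import torch
import torch.nn as nn

class LidarImageDriver(nn.Module):
    """Predict a steering angle from a 3-channel surround-view lidar image."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(36, 1),
        )

    def forward(self, lidar_image):
        return self.net(lidar_image)

# The three channels come from one sensor, so they are aligned by construction.
depth = torch.rand(1, 1, 128, 1024)      # range channel
intensity = torch.rand(1, 1, 128, 1024)  # return strength
ambient = torch.rand(1, 1, 128, 1024)    # ambient radiation
lidar_image = torch.cat([depth, intensity, ambient], dim=1)
print(LidarImageDriver()(lidar_image).shape)  # torch.Size([1, 1]) steering
```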
It is well understood that in ADAS applications, a good estimate of the pose of the vehicle is required. This paper proposes a refined 2.5D odometry, whereby the planar odometry derived from the yaw rate sensor and four wheel speed sensors is augmented by a linear model of the suspension. While the core of the planar odometry is a yaw-rate model already well understood in the literature, we augment it by fitting a quadratic to the incoming signals, enabling interpolation, extrapolation, and a finer integration of the vehicle position. We show, through experimental results with a DGPS/IMU reference, that this model provides highly accurate odometry estimates compared with existing methods. Utilizing sensors that return the change in height of vehicle reference points under changing suspension configurations, we define a planar model of the vehicle suspension, thus augmenting the odometry model. We present an experimental framework and evaluation criteria by which the goodness of the odometry is evaluated and compared with existing methods. This odometry model is designed to support the well-known low-speed surround-view camera system. Accordingly, we present some application results showing a performance boost for viewing and computer vision applications using the proposed odometry.
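The planar core of such an odometry, integrating yaw rate and mean wheel speed, can be sketched with plain Euler steps (the paper refines this with quadratic signal fitting):

```python
import math

def integrate_planar_odometry(samples, dt):
    """Dead-reckon vehicle pose from yaw-rate and wheel-speed samples.

    samples: iterable of (yaw_rate_radps, wheel_speeds_mps) tuples, where
    wheel_speeds_mps holds the four wheel speeds. Uses simple Euler
    integration; the paper instead fits quadratics to the incoming
    signals for interpolation and finer integration.
    """
    x = y = heading = 0.0
    for yaw_rate, wheel_speeds in samples:
        v = sum(wheel_speeds) / len(wheel_speeds)  # mean of the four wheels
        heading += yaw_rate * dt
        x += v * math.cos(heading) * dt
        y += v * math.sin(heading) * dt
    return x, y, heading

# 10 s of driving at ~10 m/s while turning gently (0.05 rad/s).
trace = [(0.05, (10.0, 10.1, 9.9, 10.0))] * 1000
print(integrate_planar_odometry(trace, dt=0.01))
```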
Figure 1: We introduce datasets for 3D tracking and motion forecasting with rich maps for autonomous driving. Our 3D tracking dataset contains sequences of LiDAR measurements, 360° RGB video, front-facing stereo (middle-right), and 6-dof localization. All sequences are aligned with maps containing lane center lines (magenta), driveable region (orange), and ground height. Sequences are annotated with 3D cuboid tracks (green). A wider map view is shown in the bottom-right.
Intelligent transportation systems (ITS) are essential for the development of sustainable and green urban living. ITS is data-driven and enabled by an abundance of sensors, from pneumatic tubes to smart cameras. This work explores a novel data source for traffic analysis: the fiber-optic-based distributed acoustic sensor (DAS). Its main concerns are detecting the type of a vehicle and estimating the occupancy of a vehicle. The first is motivated by the need to track, control, and forecast traffic flow. The second targets the regulation of high-occupancy vehicle lanes in order to reduce emissions and congestion. These tasks are usually performed by inspecting vehicles or through emerging computer vision technologies. The former is not scalable or efficient, while the latter is intrusive to passengers' privacy. To this end, we propose a deep learning technique to analyze DAS signals, addressing this challenge through continuous sensing and without exposing personal information. We propose a deep learning method for processing DAS signals and achieve 92% vehicle classification accuracy and 92-97% occupancy detection accuracy based on DAS data collected under controlled conditions.
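The network itself is left unspecified in the abstract; a plausible minimal form, assumed here for illustration, treats a DAS recording as a multi-channel 1D signal and classifies vehicle type with a 1D-CNN:

```python
import torch
import torch.nn as nn

class DASClassifier(nn.Module):
    """1D-CNN over DAS traces: channels = fiber positions, length = time."""

    def __init__(self, n_positions=64, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_positions, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_classes),  # vehicle-type logits
        )

    def forward(self, das_signal):  # (batch, fiber positions, time samples)
        return self.net(das_signal)

model = DASClassifier()
signal = torch.randn(8, 64, 2048)  # 8 strain-rate recordings
print(model(signal).shape)  # torch.Size([8, 4])
```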