Endowed with automation and connectivity, Connected and Automated Vehicles (CAVs) are meant to be a revolutionary promoter of Cooperative Driving Automation (CDA). Nevertheless, CAVs require high-fidelity perception information of their surrounding environments, which is costly to obtain from various onboard sensors as well as from vehicle-to-everything (V2X) communications. Therefore, realistically simulating the perception process based on high-fidelity sensors via cost-effective platforms is crucial for enabling CDA-related research, such as cooperative decision-making or control. Most state-of-the-art traffic simulation studies for CAVs rely on situation-awareness information obtained by directly calling the intrinsic attributes of objects, which impedes the reliability and fidelity of CDA algorithm evaluation. In this study, a Cyber Mobility Mirror (CMM) co-simulation platform is designed to enable CDA by providing realistic perception information. The CMM co-simulation platform emulates the real world with a high-fidelity sensor perception system, and the cyber world with a real-time reconstruction system. Specifically, the real-world simulator is mainly in charge of simulating the traffic environment, the sensors, and the realistic perception process. The mirror-world simulator is responsible for reconstructing objects and providing their information as intrinsic attributes of the simulator, so as to support the development and evaluation of CDA algorithms. To illustrate the functionality of the proposed co-simulation platform, a roadside LiDAR-based vehicle perception system is prototyped as a study case. Specific traffic environments and CDA tasks are designed for the experiments, whose results are demonstrated and analyzed to show the performance of the platform.
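To make the real-world/mirror-world hand-off concrete, below is a minimal Python sketch of one co-simulation tick. All class and method names here are hypothetical illustrations, not the platform's actual API.

```python
# Minimal sketch of a CMM-style co-simulation tick (hypothetical API).
from dataclasses import dataclass, field


@dataclass
class DetectedObject:
    obj_id: int
    position: tuple   # (x, y, z) estimated from sensor data
    velocity: tuple   # (vx, vy, vz)
    confidence: float


@dataclass
class MirrorWorld:
    """Rebuilds perceived objects and exposes them as intrinsic attributes."""
    objects: dict = field(default_factory=dict)

    def rebuild(self, detections):
        for det in detections:
            self.objects[det.obj_id] = det  # overwrite with the latest estimate

    def query(self, obj_id):
        # CDA algorithms read reconstructed state here as if it were a
        # simulator-intrinsic attribute -- but it carries realistic
        # perception noise, latency, and occlusion effects.
        return self.objects.get(obj_id)


def cosimulation_step(real_world_sim, perception, mirror, cda_controller):
    """One synchronized tick of the two coupled simulators."""
    raw_frame = real_world_sim.tick()               # traffic + sensor simulation
    detections = perception.detect(raw_frame)       # e.g., roadside LiDAR detection
    mirror.rebuild(detections)                      # real-time reconstruction
    commands = cda_controller.plan(mirror.objects)  # CDA sees perceived state only
    real_world_sim.apply(commands)
```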
Perceiving the environment is one of the most fundamental keys to enabling Cooperative Driving Automation (CDA), which is regarded as a revolutionary solution to the safety, mobility, and sustainability issues of contemporary transportation systems. Although an unprecedented evolution is now happening in the area of computer vision for object perception, state-of-the-art perception methods still struggle with sophisticated real-world traffic environments due to the inevitable physical occlusions and the limited receptive field of single-vehicle systems. Based on multiple spatially separated perception nodes, Cooperative Perception (CP) was born to unlock the perception bottleneck for driving automation. In this paper, we comprehensively review and analyze the research progress on CP and, to the best of our knowledge, propose a unified CP framework for the first time. Architectures and taxonomies of CP systems based on different types of sensors are reviewed to provide a high-level description of the workflows and different structures of CP systems. Node structures, sensor modalities, and fusion schemes are reviewed and analyzed with a comprehensive body of literature to provide detailed explanations of specific methods. A hierarchical CP framework is proposed, followed by a review of existing datasets and simulators to sketch the overall landscape of CP. The discussion highlights current opportunities, open challenges, and anticipated future trends.
Utilizing the latest advances in Artificial Intelligence (AI), the computer vision community is now witnessing an unprecedented evolution in all kinds of perception tasks, particularly in object detection. Based on multiple spatially separated perception nodes, Cooperative Perception (CP) has emerged to significantly advance the perception of automated driving. However, current cooperative object detection methods mainly focus on ego-vehicle efficiency without considering the practical issues of system-wide costs. In this paper, we introduce VINet, a unified deep learning-based CP network for scalable, lightweight, and heterogeneous cooperative 3D object detection. VINet is the first CP method designed from the standpoint of large-scale system-level implementation and can be divided into three main phases: 1) Global Pre-Processing and Lightweight Feature Extraction, which prepares the data into a global style and extracts features for cooperation in a lightweight manner; 2) Two-Stream Fusion, which fuses the features from scalable and heterogeneous perception nodes; and 3) Central Feature Backbone and 3D Detection Head, which further process the fused features and generate cooperative detection results. A cooperative perception platform is designed and developed for CP dataset acquisition, and several baselines are compared in the experiments. The experimental analysis shows that VINet can achieve remarkable improvements for pedestrians and cars with 2x lower system-wide computational cost and 12x lower system-wide communication cost.
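As a rough illustration of the three-phase layout, the following PyTorch skeleton wires together a lightweight per-node extractor, a permutation-invariant fusion step, and a detection head. Layer shapes and the fusion choice are placeholders, not VINet's actual architecture.

```python
# Illustrative skeleton of the three phases (placeholder shapes, not VINet).
import torch
import torch.nn as nn


class LightweightExtractor(nn.Module):
    """Phase 1: cheap per-node feature extraction on a BEV pseudo-image."""
    def __init__(self, in_ch=4, out_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, bev):  # (B, in_ch, H, W)
        return self.net(bev)


class Fusion(nn.Module):
    """Phase 2: fuse features from a variable number of nodes.
    Max-pooling across nodes is one simple permutation-invariant choice."""
    def forward(self, node_feats):  # list of (B, C, H', W') tensors
        return torch.stack(node_feats, dim=0).max(dim=0).values


class CentralHead(nn.Module):
    """Phase 3: central backbone plus a (heavily simplified) detection head."""
    def __init__(self, ch=32, num_anchors=2, box_dim=7):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.cls = nn.Conv2d(ch, num_anchors, 1)
        self.reg = nn.Conv2d(ch, num_anchors * box_dim, 1)

    def forward(self, fused):
        x = self.backbone(fused)
        return self.cls(x), self.reg(x)
```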
Existing data collection methods for traffic operations and control usually rely on infrastructure-based loop detectors or probe-vehicle trajectories. Connected and automated vehicles (CAVs) can not only report data about themselves but also provide the status of all detected surrounding vehicles. Integrating perception data from multiple CAVs as well as from infrastructure sensors (e.g., LiDAR) can provide much richer information even under very low penetration rates. This paper aims to develop a cooperative data collection system that integrates LiDAR point cloud data from both infrastructure and CAVs to create a cooperative perception environment for various transportation applications. State-of-the-art 3D detection models are applied to detect vehicles in the merged point cloud. We test the proposed cooperative perception environment with a max-pressure adaptive signal control model in a co-simulation platform with CARLA and SUMO. The results show that very low penetration rates of CAVs plus an infrastructure sensor are sufficient to achieve performance comparable to that of a 30% or higher penetration rate of connected vehicles (CVs). We also report the equivalent CV penetration rate (E-CVPR) under different CAV penetration rates to demonstrate the data collection efficiency of the cooperative perception environment.
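Max-pressure control itself is a well-known policy: a phase's pressure is the sum, over its permitted movements, of upstream minus downstream queue lengths, and the phase with the largest pressure is served next. A minimal sketch, assuming queue estimates come from the fused point-cloud detections (the intersection encoding is illustrative):

```python
# Minimal max-pressure phase selection (illustrative intersection encoding).

def pressure(movements, queue):
    """Pressure = sum over permitted movements of (upstream - downstream) queue."""
    return sum(queue[up] - queue.get(down, 0.0) for up, down in movements)

def max_pressure_phase(phases, queue):
    """Pick the phase with the largest pressure."""
    return max(phases, key=lambda p: pressure(phases[p], queue))

# Example: two through phases at a four-leg intersection.
phases = {
    "NS_through": [("N_in", "S_out"), ("S_in", "N_out")],
    "EW_through": [("E_in", "W_out"), ("W_in", "E_out")],
}
queue = {"N_in": 8, "S_in": 5, "E_in": 2, "W_in": 3,
         "S_out": 1, "N_out": 0, "W_out": 2, "E_out": 1}
print(max_pressure_phase(phases, queue))  # -> NS_through (pressure 12 vs 2)
```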
With the development of autonomous driving, the improvement of single-vehicle autonomous driving technology has reached a bottleneck. Advances in vehicle-road cooperative autonomous driving technology can extend the perception range of vehicles, cover perception blind zones, and improve perception accuracy, thereby promoting the development of autonomous driving technology and realizing vehicle-road integration. This project mainly uses LiDAR to develop data fusion schemes that enable the sharing and combination of vehicle-side and roadside device data and realize the detection and tracking of dynamic targets. At the same time, several test scenarios were designed and used to evaluate our vehicle-road cooperative perception system, demonstrating the advantages of vehicle-road cooperative autonomous driving over single-vehicle autonomous driving.
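A common first step in such vehicle-road fusion is transforming every LiDAR frame into a shared world frame via calibrated extrinsics before merging. A minimal sketch with illustrative poses:

```python
# Merge vehicle-side and roadside LiDAR frames into one world-frame cloud.
import numpy as np

def to_world(points, T):
    """points: (N, 3) in the sensor frame; T: (4, 4) sensor-to-world pose."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homo @ T.T)[:, :3]

def fuse_clouds(frames):
    """frames: list of (points, T) pairs from vehicles and roadside units."""
    return np.vstack([to_world(pts, T) for pts, T in frames])

# Roadside unit at the origin, a vehicle 20 m east of it (illustrative poses):
T_rsu = np.eye(4)
T_veh = np.eye(4)
T_veh[0, 3] = 20.0
merged = fuse_clouds([(np.random.rand(100, 3), T_rsu),
                      (np.random.rand(100, 3), T_veh)])
print(merged.shape)  # (200, 3)
```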
Automated Driving Systems (ADS) open up a new domain for the automotive industry and offer new possibilities for future transportation with higher efficiency and more comfortable experiences. However, autonomous driving under adverse weather conditions has long been the problem that keeps automated vehicles (AVs) from reaching Level 4 or higher autonomy. This paper assesses, in an analytic and statistical way, the impacts and challenges that weather brings to ADS sensors, and surveys solutions against inclement weather conditions. State-of-the-art techniques for perception enhancement with respect to each kind of weather are thoroughly reported. External auxiliary solutions such as V2X technology, as well as the coverage of weather conditions in currently available datasets, simulators, and experimental facilities with weather chambers, are distinctly sorted out. By pointing out the major weather problems the autonomous driving field is currently facing, and by reviewing both hardware and computer-science solutions from recent years, this survey outlines the obstacles and directions of ADS development with regard to adverse weather driving conditions.
Next-generation high-resolution automotive radar (4D radar) can provide additional elevation measurements and denser point clouds, and thus has great potential for 3D sensing in autonomous driving. In this paper, we introduce a dataset named TJ4DRadSet, which includes 4D radar points for autonomous driving research. The dataset was collected in various driving scenarios, with a total of 7757 synchronized frames in 44 consecutive sequences, well annotated with 3D bounding boxes and track IDs. We provide a 4D-radar-based 3D object detection baseline for the dataset to demonstrate the effectiveness of deep learning methods on 4D radar point clouds. The dataset can be accessed via the following link: https://github.com/tjradarlab/tj4dradset.
Vehicle-to-everything (V2X) communication techniques enable collaboration between a vehicle and the many other entities in its neighboring environment, which could fundamentally improve the perception system for autonomous driving. However, the lack of public datasets significantly restricts the research progress of collaborative perception. To fill this gap, we present V2X-Sim, a comprehensive simulated multi-agent perception dataset for V2X-aided autonomous driving. V2X-Sim provides: (1) multi-agent sensor recordings from roadside units (RSUs) and multiple vehicles that enable collaborative perception, (2) multi-modality sensor streams that facilitate multi-modality perception, and (3) diverse ground truths that support various perception tasks. Meanwhile, we build an open-source testbed and provide a benchmark for state-of-the-art collaborative perception algorithms on three tasks, including detection, tracking, and segmentation. V2X-Sim seeks to stimulate collaborative perception research for autonomous driving before realistic datasets become widely available. Our dataset and code are available at https://ai4ce.github.io/v2x-sim/.
Employing vehicle-to-vehicle communication to improve perception performance in self-driving technology has attracted considerable attention recently; however, the absence of a suitable open dataset for benchmarking algorithms has made it difficult to develop and assess cooperative perception technologies. To this end, we present the first large-scale open simulated dataset for vehicle-to-vehicle perception. It contains over 70 interesting scenes, 11,464 frames, and 232,913 annotated 3D vehicle bounding boxes, collected from 8 towns in CARLA and a digital town of Culver City, Los Angeles. We then construct a comprehensive benchmark with a total of 16 implemented models to evaluate several information fusion strategies (i.e., early, late, and intermediate fusion) with state-of-the-art LiDAR detection algorithms. Moreover, we propose a new Attentive Intermediate Fusion pipeline to aggregate information from multiple connected vehicles. Our experiments show that the proposed pipeline can be easily integrated with existing 3D LiDAR detectors and achieve outstanding performance even with large compression rates. To encourage more researchers to investigate vehicle-to-vehicle perception, we will release the dataset, benchmark methods, and all related code at https://mobility-lab.seas.ucla.edu/opv2v/.
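The following is a simplified sketch of per-location attentive fusion over the agents' warped feature maps, in the spirit of the pipeline described above but not the paper's exact implementation:

```python
# Per-location softmax attention over agents' intermediate feature maps.
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-agent logits

    def forward(self, feats):                    # (num_agents, C, H, W), warped
        weights = torch.softmax(self.score(feats), dim=0)  # attention over agents
        return (weights * feats).sum(dim=0)                # fused map: (C, H, W)

fusion = AttentiveFusion(channels=64)
ego_plus_neighbors = torch.randn(3, 64, 100, 100)  # ego + 2 connected vehicles
print(fusion(ego_plus_neighbors).shape)            # torch.Size([64, 100, 100])
```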
Reliable and efficient validation technologies are critical for the recent development of multi-vehicle cooperation and vehicle-road-cloud integration. In this paper, we introduce our miniature experimental platform, Mixed Cloud Control Testbed (MCCT), developed based on a new notion of Mixed Digital Twin (mixedDT). Combining Mixed Reality with Digital Twin, mixedDT integrates the virtual and physical spaces into a mixed one, where physical entities coexist and interact with virtual entities via their digital counterparts. Under the framework of mixedDT, MCCT contains three major experimental platforms in the physical, virtual and mixed spaces respectively, and provides a unified access for various human-machine interfaces and external devices such as driving simulators. A cloud unit, where the mixed experimental platform is deployed, is responsible for fusing multi-platform information and assigning control instructions, contributing to synchronous operation and real-time cross-platform interaction. Particularly, MCCT allows for multi-vehicle coordination composed of different multi-source vehicles (e.g., physical vehicles, virtual vehicles and human-driven vehicles). Validations on vehicle platooning demonstrate the flexibility and scalability of MCCT.
Computer vision applications in intelligent transportation systems (ITS) and autonomous driving (AD) have gravitated towards deep neural network architectures in recent years. While performance seems to be improving on benchmark datasets, many real-world challenges are yet to be adequately considered in research. This paper conducts an extensive literature review of the applications of computer vision in ITS and AD, and discusses challenges related to data, models, and complex urban environments. The data challenges are associated with the collection and labeling of training data and its relevance to real-world conditions, bias inherent in datasets, the high volume of data needed to be processed, and privacy concerns. Deep learning (DL) models are commonly too complex for real-time processing on embedded hardware, lack explainability and generalizability, and are hard to test in real-world settings. Complex urban traffic environments have irregular lighting and occlusions, and surveillance cameras can be mounted at a variety of angles, gather dirt, and shake in the wind, while the traffic conditions are highly heterogeneous, with violations of rules and complex interactions in crowded scenarios. Some representative applications that suffer from these problems are traffic flow estimation, congestion detection, autonomous driving perception, vehicle interaction, and edge computing for practical deployment. The possible ways of dealing with the challenges are also explored while prioritizing practical deployment.
The last decade witnessed increasingly rapid progress in self-driving vehicle technology, mainly backed up by advances in the area of deep learning and artificial intelligence. The objective of this paper is to survey the current state-of-the-art on deep learning technologies used in autonomous driving. We start by presenting AI-based self-driving architectures, convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm. These methodologies form a base for the surveyed driving scene perception, path planning, behavior arbitration and motion control algorithms. We investigate both the modular perception-planning-action pipeline, where each module is built using deep learning methods, as well as End2End systems, which directly map sensory information to steering commands. Additionally, we tackle current challenges encountered in designing AI architectures for autonomous driving, such as their safety, training data sources and computational hardware. The comparison presented in this survey helps to gain insight into the strengths and limitations of deep learning and AI approaches for autonomous driving and assist with design choices.
Digital Twin is an emerging technology that replicates real-world entities into a digital space. It has attracted increasing attention in the transportation field and many researchers are exploring its future applications in the development of Intelligent Transportation System (ITS) technologies. Connected vehicles (CVs) and pedestrians are among the major traffic participants in ITS. However, the usage of Digital Twin in research involving both CVs and pedestrians remains largely unexplored. In this study, a Digital Twin framework for CV and pedestrian in-the-loop simulation is proposed. The proposed framework consists of the physical world, the digital world, and data transmission in between. The features of the entities (CV and pedestrian) that need to be digitally twinned are divided into an external state and an internal state, and the attributes in each state are described. We also demonstrate a sample architecture under the proposed Digital Twin framework, which is based on CARLA-SUMO co-simulation and the Cave Automatic Virtual Environment (CAVE). The proposed framework is expected to provide guidance for future Digital Twin research, and the architecture we build can serve as a testbed for further research and development of ITS applications on CVs and pedestrians.
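One way to encode the external/internal state split is sketched below; the attribute choices are illustrative examples, not the framework's definitive schema.

```python
# Illustrative external/internal state encoding for a digital-twinned entity.
from dataclasses import dataclass

@dataclass
class ExternalState:
    """Physically observable features, measurable by sensors."""
    position: tuple   # (x, y, z)
    velocity: tuple   # (vx, vy, vz)
    heading: float    # radians

@dataclass
class InternalState:
    """Features internal to the participant, shared via V2X or inferred."""
    intent: str             # e.g., "turn_left", "cross_street"
    attention_level: float  # e.g., estimated distraction in [0, 1]

@dataclass
class TwinnedEntity:
    entity_id: str
    kind: str               # "connected_vehicle" or "pedestrian"
    external: ExternalState
    internal: InternalState
```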
In this paper, we present a solution for roadside LiDAR object detection using a combination of two unsupervised learning algorithms. The 3D point clouds are first converted into spherical coordinates and filled into an azimuth grid matrix using a hash function. After that, the raw LiDAR data are rearranged into a space-time data structure that stores range, azimuth, and intensity information. Based on intensity-channel pattern recognition, a dynamic mode decomposition method is applied to decompose the point cloud data into a low-rank background and a sparse foreground. The triangle algorithm automatically finds the dividing value to separate moving targets from the static background based on range information. After intensity and range background subtraction, the foreground moving objects are detected with a density-based detector and encoded into a state-space model for tracking. The output of the proposed model includes vehicle trajectories, which can enable many mobility and safety applications. The method was validated against a commercial traffic data collection platform and proved to be an efficient and reliable solution for infrastructure-based LiDAR object detection. In contrast to previous methods that operate directly on the scattered and discrete point clouds, the proposed method can establish a less complex, linear relationship over the 3D measurement data, which captures the spatial-temporal structure that we often desire.
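The low-rank/sparse split can be sketched as follows, with a truncated SVD standing in for the full dynamic mode decomposition and DBSCAN as the density-based detector; the thresholds are illustrative.

```python
# Background subtraction on a space-time grid, then density-based detection.
import numpy as np
from sklearn.cluster import DBSCAN

def lowrank_background(frames, rank=1):
    """frames: (num_cells, num_frames) range/intensity grid over time."""
    U, s, Vt = np.linalg.svd(frames, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]  # static background estimate

def foreground_mask(frames, threshold=0.5):
    residual = np.abs(frames - lowrank_background(frames))
    return residual > threshold                  # sparse moving-object cells

def detect_objects(points, eps=1.0, min_samples=5):
    """Cluster foreground points (N, 3) into candidate moving objects."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return [points[labels == k] for k in set(labels) if k != -1]
```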
Realizing human-like perception is a challenge in open driving scenarios due to corner cases and visual occlusions. To gather knowledge of rare and occluded instances, federated learning assisted connected autonomous vehicle (FLCAV) has been proposed, which leverages vehicular networks to establish federated deep neural networks (DNNs) from distributed data captured by vehicles and road sensors. Without the need of data aggregation, FLCAV preserves privacy while reducing communication costs compared with conventional centralized learning. However, it is challenging to determine the network resources and road sensor placements for multi-stage training with multi-modal datasets in multi-variant scenarios. This article presents networking and training frameworks for FLCAV perception. Multi-layer graph resource allocation and vehicle-road contrastive sensor placement are proposed to address the network management and sensor deployment problems, respectively. We also develop CarlaFLCAV, a software platform that implements the above system and methods. Experimental results confirm the superiority of the proposed techniques compared with various benchmarks.
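The federated step that FLCAV builds on can be sketched as plain FedAvg-style weighted averaging of client parameters; the multi-stage, multi-modal details are omitted here.

```python
# FedAvg-style aggregation: weighted average of per-client model parameters.
import numpy as np

def federated_average(client_weights, client_sizes):
    """client_weights: one list of np.ndarray per vehicle/road sensor;
    client_sizes: number of local samples per client."""
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(num_layers)
    ]

# Two vehicles with different local dataset sizes:
v1 = [np.ones((2, 2)), np.zeros(2)]
v2 = [3 * np.ones((2, 2)), np.ones(2)]
global_model = federated_average([v1, v2], client_sizes=[100, 300])
print(global_model[0])  # 0.25 * 1 + 0.75 * 3 = 2.5 in every entry
```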
Amphibious ground-aerial vehicles fuse flying and driving modes to enable more flexible air-land mobility and have received growing attention recently. By analyzing existing amphibious vehicles, we highlight the autonomous driving capabilities needed for the effective use of amphibious vehicles in complex three-dimensional urban transportation systems. We review and summarize the key enabling technologies for intelligent flying-driving in existing amphibious vehicle designs, identify the major technological barriers, and propose potential solutions for future research and innovation. This paper aims to serve as a guide for the research and development of intelligent amphibious vehicles for future urban transportation.
Vehicle-to-everything (V2X) networks have enabled collaborative perception in autonomous driving, a promising solution to the fundamental defects of stand-alone intelligence, including blind zones and limited long-range perception. However, the lack of datasets has severely blocked the development of collaborative perception algorithms. In this work, we release DOLPHINS: Dataset for cOLlaborative Perception enabling Harmonious and INterconnected Self-driving, a new simulated large-scale, various-scenario, multi-view, multi-modality autonomous driving dataset that provides a ground-breaking benchmark platform for interconnected autonomous driving. DOLPHINS outperforms current datasets in six dimensions: temporally-aligned images and point clouds from both vehicles and roadside units (RSUs), enabling both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) collaborative perception; 6 typical scenarios with dynamic weather conditions, making it the most diverse interconnected autonomous driving dataset; meticulously selected viewpoints, providing full coverage of the key areas and every object; 42376 frames and 292549 objects, together with the corresponding 3D annotations, geo-positions, and calibrations, constituting the largest collaborative perception dataset; full-HD images and 64-line LiDARs, constructing high-resolution data with sufficient detail; and well-organized APIs and open-source code, ensuring the extensibility of DOLPHINS. We also construct benchmarks for 2D detection, 3D detection, and multi-view collaborative perception tasks on DOLPHINS. The experimental results show that a raw-level fusion scheme through V2X communication can help improve precision and reduce the need for expensive LiDAR equipment on vehicles when RSUs are present, which may accelerate the popularization of interconnected autonomous vehicles. DOLPHINS is now available at https://dolphins-dataset.net/.
The past few years have witnessed increasing interest in improving the perception performance of LiDARs on autonomous vehicles. While most existing works focus on developing new deep learning algorithms or model architectures, we study the problem from the physical design perspective, i.e., how different placements of multiple LiDARs influence learning-based perception. To this end, we introduce an easy-to-compute information-theoretic surrogate metric to quantitatively and rapidly evaluate LiDAR placements for the 3D detection of different types of objects. We also present a new data collection, detection-model training, and evaluation framework in the realistic CARLA simulator to assess disparate multi-LiDAR configurations. Using several prevalent placements inspired by the designs of self-driving companies, we show through extensive experiments the correlation between our surrogate metric and the object detection performance of different representative algorithms on KITTI, validating the effectiveness of our LiDAR placement evaluation approach. Our results show that sensor placement is non-negligible in 3D point-cloud-based object detection and can account for a 5%-10% difference in average precision in challenging 3D object detection settings. We believe this is one of the first studies to quantitatively investigate the influence of LiDAR placement on perception performance.
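As a toy stand-in for such a surrogate metric (not the paper's exact formulation), one can voxelize a region of interest, model each voxel's probability of being observed, and score a placement by the expected information gain:

```python
# Toy placement score: expected bits of occupancy uncertainty resolved.
import numpy as np

def coverage_probability(voxel_centers, lidar_positions, max_range=70.0):
    """P(voxel observed) decays with distance to the nearest LiDAR (toy model)."""
    d = np.min(np.linalg.norm(
        voxel_centers[:, None, :] - lidar_positions[None], axis=2), axis=1)
    return np.clip(1.0 - d / max_range, 0.0, 1.0)

def placement_score(voxel_centers, lidar_positions):
    """Each voxel carries 1 bit of prior occupancy uncertainty, resolved when
    observed; coverage probability p gives an expected gain of p bits."""
    return coverage_probability(voxel_centers, lidar_positions).sum()

# Compare a single roof LiDAR against a spread two-LiDAR configuration:
rng = np.random.default_rng(0)
roi = rng.uniform([-40, -40, 0], [40, 40, 3], size=(5000, 3))
single = np.array([[0.0, 0.0, 1.8]])
spread = np.array([[-1.0, 0.0, 1.8], [1.0, 0.0, 1.8]])
print(placement_score(roi, single), placement_score(roi, spread))
```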
With the rapid development of intelligent vehicles and Advanced Driver-Assistance Systems (ADAS), a new trend is that mixed levels of human-driver engagement will be involved in the transportation system. Therefore, necessary visual guidance for drivers is vital in this situation to prevent potential risks. To advance the development of visual guidance systems, we introduce a novel vision-cloud data fusion methodology that integrates camera images and Digital Twin information from the cloud to help intelligent vehicles make better decisions. Target vehicle bounding boxes are drawn and matched with the help of an object detector (running on the ego vehicle) and position information (received from the cloud). The best matching result, with 79.2% accuracy under a 0.7 intersection-over-union threshold, is obtained with depth images serving as an additional feature source. A case study on lane-change prediction is conducted to show the effectiveness of the proposed data fusion methodology. In the case study, a multilayer perceptron algorithm with modified lane-change prediction approaches is proposed. Human-in-the-loop simulation results obtained from the Unity game engine show that the proposed model can significantly improve highway driving performance in terms of safety, comfort, and environmental sustainability.
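The box-matching step can be sketched as IoU-based assignment between camera detections and boxes projected from cloud-side Digital Twin positions; the Hungarian algorithm below is a standard choice for this, and the depth-image feature from the paper is omitted.

```python
# Match detected 2D boxes to cloud-projected boxes by IoU + Hungarian assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Boxes as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def match_boxes(detected, projected, iou_threshold=0.7):
    cost = np.array([[1.0 - iou(d, p) for p in projected] for d in detected])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols)
            if 1.0 - cost[r, c] >= iou_threshold]

detected  = [(100, 100, 200, 180), (300, 120, 380, 170)]
projected = [(305, 118, 382, 172), (98, 105, 198, 178)]
print(match_boxes(detected, projected))  # [(0, 1), (1, 0)]
```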
The environment perception of autonomous vehicles is limited by their physical sensor range and algorithmic performance, as well as by occlusions that degrade their understanding of the ongoing traffic situation. This not only poses a significant threat to safety and limits driving speed, but can also lead to inconvenient maneuvers. Intelligent infrastructure systems can help to alleviate these problems: they can fill the gaps in a vehicle's perception and extend its field of view by providing additional detailed information about its surroundings in the form of a digital model of the current traffic situation, i.e., a digital twin. However, detailed descriptions of such systems and working prototypes demonstrating their feasibility are scarce. In this paper, we propose a hardware and software architecture that enables such a reliable intelligent infrastructure system. We have implemented the system in the real world and show that it is able to create an accurate digital twin of an extended highway stretch, improving an autonomous vehicle's perception beyond the limits of its on-board sensors. Furthermore, we evaluate the accuracy and reliability of the digital twin using aerial imagery and earth-observation methods to generate ground-truth data.