This work proposes a novel method for predicting vehicle trajectories in highway scenarios using an efficient bird's-eye-view representation and convolutional neural networks. The basic visual representation makes it straightforward to include vehicle positions, motion histories, road configuration, and vehicle interactions in the prediction model. A U-Net model is chosen as the prediction kernel, generating a future visual representation of the scene via an image-to-image regression approach. A method is implemented to extract vehicle positions from the generated graphical representation with sub-pixel resolution. The method is trained and evaluated on the PREVENTION dataset, an on-board sensor dataset. Different network configurations and scene representations are evaluated. The study finds that a U-Net with six depth levels, a linear terminal layer, and a Gaussian representation of the vehicles is the best-performing configuration, and that using lane markings does not improve prediction performance. For a prediction horizon of 2.0 seconds, the average prediction errors are 0.47 and 0.38 m, and the final prediction errors are 0.76 and 0.53 m, for the longitudinal and lateral coordinates respectively. The prediction error is up to 50% lower than that of the baseline method.
translated by Google Translate
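The abstract above extracts vehicle positions from a generated Gaussian-heatmap representation at sub-pixel resolution. A common way to do this, sketched here as an assumption rather than the paper's exact procedure, is parabolic refinement around the argmax of the blob:

```python
import numpy as np

def subpixel_peak(heatmap):
    """Locate a single peak in a 2D heatmap with sub-pixel accuracy by
    fitting a 1D parabola through the peak pixel and its neighbours
    along each axis: offset = 0.5*(f(-1)-f(+1)) / (f(-1)-2f(0)+f(+1))."""
    r, c = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    # Clamp to the interior so the 3x3 neighbourhood exists.
    r = int(min(max(r, 1), heatmap.shape[0] - 2))
    c = int(min(max(c, 1), heatmap.shape[1] - 2))
    denom_r = heatmap[r - 1, c] - 2 * heatmap[r, c] + heatmap[r + 1, c]
    denom_c = heatmap[r, c - 1] - 2 * heatmap[r, c] + heatmap[r, c + 1]
    dr = 0.5 * (heatmap[r - 1, c] - heatmap[r + 1, c]) / (denom_r - 1e-12)
    dc = 0.5 * (heatmap[r, c - 1] - heatmap[r, c + 1]) / (denom_c - 1e-12)
    return r + dr, c + dc
```

For a Gaussian blob a few pixels wide this recovers the centre to a few hundredths of a pixel, which is consistent with the sub-pixel claim in the abstract.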
Trajectory prediction and behavioral decision-making are two important tasks for autonomous vehicles, and both require a good understanding of the environment: behavioral decisions can be made better by referring to the outputs of trajectory prediction. However, most current solutions perform these two tasks separately. We therefore propose a joint neural network that combines multiple cues, named the holistic transformer, to predict trajectories and make behavioral decisions simultaneously. To better explore the intrinsic relationships between cues, the network exploits existing knowledge and adopts three kinds of attention mechanisms: a sparse multi-head type to reduce the influence of noise, a feature-selection sparse type to make optimal use of partial prior knowledge, and a multi-head type with sigmoid activation to make optimal use of posterior knowledge. Compared with other trajectory prediction models, the proposed model has better comprehensive performance and good interpretability. Perception-noise robustness experiments demonstrate that the proposed model is robust to noise. Hence, simultaneous trajectory prediction and behavioral decision-making that combines multiple cues can reduce computational cost and strengthen the semantic relationship between the scene and the agents.
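The holistic-transformer abstract above names an attention variant with sigmoid activation. A minimal single-head numpy sketch of the idea, with softmax replaced by an element-wise sigmoid so that each key is gated independently (the real model's dimensions, heads, and training are unknown here):

```python
import numpy as np

def sigmoid_attention(Q, K, V):
    """Scaled dot-product attention with a sigmoid activation instead of
    softmax: each key receives an independent gate in (0, 1), so the
    weights do not have to sum to one across keys."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    gates = 1.0 / (1.0 + np.exp(-scores))  # element-wise sigmoid
    return gates @ V
```

Because the gates are independent, a query can attend strongly to several keys at once, or to none, which is one motivation for using sigmoid rather than softmax when posterior knowledge should be absorbed selectively.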
Reasoning about human motion is an important prerequisite to safe and socially-aware robotic navigation. As a result, multi-agent behavior prediction has become a core component of modern human-robot interactive systems, such as self-driving cars. While there exist many methods for trajectory forecasting, most do not enforce dynamic constraints and do not account for environmental information (e.g., maps). Towards this end, we present Trajectron++, a modular, graph-structured recurrent model that forecasts the trajectories of a general number of diverse agents while incorporating agent dynamics and heterogeneous data (e.g., semantic maps). Trajectron++ is designed to be tightly integrated with robotic planning and control frameworks; for example, it can produce predictions that are optionally conditioned on ego-agent motion plans. We demonstrate its performance on several challenging real-world trajectory forecasting datasets, outperforming a wide array of state-of-the-art deterministic and generative methods.
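Trajectron++ enforces dynamic constraints by integrating predicted controls through an agent dynamics model rather than regressing positions directly. A toy sketch of that idea using unicycle dynamics (the model's actual dynamics classes and integration scheme may differ):

```python
import numpy as np

def integrate_unicycle(x0, controls, dt=0.5):
    """Roll a sequence of (speed, yaw-rate) controls through simple
    unicycle dynamics, so the resulting trajectory is dynamically
    feasible by construction."""
    x, y, theta = x0
    traj = []
    for v, omega in controls:
        theta += omega * dt
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        traj.append((x, y))
    return np.array(traj)
```

Predicting in control space and integrating guarantees, for example, that a vehicle never moves sideways, something a network that outputs raw positions cannot guarantee.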
Reliably predicting the motion of opponent vehicles surrounding an autonomous racecar is crucial for effective and performant planning. Although deep neural networks are highly expressive, they are black-box models, which makes their use challenging in safety-critical applications such as autonomous driving. In this paper, we introduce a structured way of forecasting the movement of opposing racecars with deep neural networks: the set of possible output trajectories is constrained, so quality guarantees about the predictions can be given. We report the performance of the model by evaluating it together with an LSTM-based encoder-decoder architecture on data acquired from high-fidelity hardware-in-the-loop simulation. The proposed approach outperforms the baseline in prediction accuracy while still fulfilling the quality guarantees, demonstrating the model's suitability for robust real-world application. The presented model was deployed on the racecar of the Technical University of Munich in the Indy Autonomous Challenge 2021. The code used in this research is available as open-source software at www.github.com/tumftm/mixnet.
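The quality guarantee described above comes from constraining the set of possible output trajectories. One way to realize such a constraint, sketched here as an illustration rather than MixNet's exact formulation, is to predict softmax weights over a fixed bank of base curves (e.g. track boundaries and a racing line), so the output always lies in their convex hull:

```python
import numpy as np

def mix_trajectory(base_curves, logits):
    """Predict a trajectory as a convex combination of fixed base curves.
    base_curves: (n_curves, horizon, 2); logits: (n_curves,).
    Softmax weights guarantee the result stays inside the convex hull
    of the bases, which is what makes quality guarantees possible."""
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return np.tensordot(w, base_curves, axes=(0, 0))  # (horizon, 2)
```

However badly the network misfires, the predicted car can never leave the region spanned by the base curves.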
Autonomous vehicles use a variety of sensors and machine-learned models to predict the behavior of surrounding road users. Most machine-learned models in the literature focus on quantitative error metrics such as root mean squared error (RMSE) when learning and reporting model capabilities. This focus on quantitative error metrics tends to overlook the more important behavioral aspects of the models, raising the question of whether these models truly predict human-like behavior. We therefore propose to analyze the outputs of machine-learned models just as we would analyze human data in a conventional behavioral study. We introduce quantitative metrics to demonstrate the presence of three distinct behavioral phenomena in a naturalistic highway driving dataset: 1) the kinematics-dependence of who passes a merging point first, 2) lane changes by on-highway vehicles to accommodate on-ramp vehicles, and 3) lane changes by vehicles on the highway to avoid lead-vehicle conflicts. We then analyze the behavior of three machine-learned models using the same metrics. Even though the models' RMSE values differed, all of them captured the kinematics-dependent merging behavior but struggled, to varying degrees, to capture the more nuanced courtesy and conflict-avoiding lane-change behaviors. Furthermore, an analysis of collision aversion during lane changes showed that the models struggled to capture a physical aspect of human driving: leaving sufficient gaps between vehicles. Our analysis thus highlights that simple quantitative metrics are insufficient and that a broader behavioral perspective is needed when analyzing machine-learned models of human driving prediction.
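The abstract above argues that RMSE misses behavioral quantities such as the gap left between vehicles during a lane change. Purely as an illustration of such a behavioral metric (the function name and the bumper-to-bumper convention are assumptions of this sketch, not the paper's definitions):

```python
import numpy as np

def min_gap(lead_pos, follow_pos, lengths):
    """Minimum bumper-to-bumper gap over a manoeuvre, given the lead and
    following vehicles' longitudinal centre positions per frame and their
    vehicle lengths. This is the kind of quantity a behavioural analysis
    inspects and an RMSE-style metric ignores."""
    centre_gap = np.asarray(lead_pos) - np.asarray(follow_pos)
    return float(np.min(centre_gap - sum(lengths) / 2.0))
```

A model can achieve a low RMSE on positions while still predicting gaps a human driver would never accept; this metric exposes that directly.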
Uncertainty plays a key role in future prediction. The future is uncertain, which means there can be many possible futures, and future-prediction methods should robustly cover the full range of possibilities. In autonomous driving, covering multiple modes in the prediction stage is crucial for making safety-critical decisions. Although computer vision systems have improved dramatically in recent years, future prediction remains difficult today. Among the challenges are the uncertainty of the future, the requirement for full-scene understanding, and the noisy output space. In this thesis, we propose solutions to these challenges by explicitly modeling motion in a stochastic manner and learning temporal dynamics in a latent space.
In recent years, road safety has attracted significant attention from researchers and practitioners in the field of intelligent transportation systems. As one of the most common road-user groups, pedestrians suffer alarming numbers of casualties due to their unpredictable behavior and movement, since subtle misunderstandings in vehicle-pedestrian interaction can easily lead to risky situations or collisions. Existing methods use predefined collision-based models or human labeling to estimate pedestrian risk. These methods are typically limited by poor generalization ability and a lack of consideration of the interaction between the ego vehicle and pedestrians. This work addresses these problems by proposing a Pedestrian Risk Level Prediction (PRLP) system consisting of three modules. First, pedestrian data are collected from the vehicle's perspective; because the data contain information about the motion of both the ego vehicle and the pedestrians, spatiotemporal features can be predicted in an interaction-aware manner. Using a long short-term memory model, the pedestrian trajectory prediction module predicts the spatiotemporal features of the subsequent five frames. Since the predicted trajectories follow certain interaction and risk patterns, a hybrid clustering-and-classification method is adopted to explore the risk patterns within the spatiotemporal features, and a risk-level classifier is trained with the learned patterns. By predicting pedestrians' spatiotemporal features and identifying the corresponding risk levels, the risk pattern between the ego vehicle and pedestrians is determined. Experimental results verify the capability of the PRLP system to predict pedestrian risk levels, thereby supporting collision-risk assessment for intelligent vehicles and providing safety warnings to both vehicles and pedestrians.
Figure 1: We introduce datasets for 3D tracking and motion forecasting with rich maps for autonomous driving. Our 3D tracking dataset contains sequences of LiDAR measurements, 360° RGB video, front-facing stereo (middle-right), and 6-dof localization. All sequences are aligned with maps containing lane center lines (magenta), driveable region (orange), and ground height. Sequences are annotated with 3D cuboid tracks (green). A wider map view is shown in the bottom-right.
The last decade witnessed increasingly rapid progress in self-driving vehicle technology, mainly backed up by advances in the area of deep learning and artificial intelligence. The objective of this paper is to survey the current state-of-the-art on deep learning technologies used in autonomous driving. We start by presenting AI-based self-driving architectures, convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm. These methodologies form a base for the surveyed driving scene perception, path planning, behavior arbitration and motion control algorithms. We investigate both the modular perception-planning-action pipeline, where each module is built using deep learning methods, as well as End2End systems, which directly map sensory information to steering commands. Additionally, we tackle current challenges encountered in designing AI architectures for autonomous driving, such as their safety, training data sources and computational hardware. The comparison presented in this survey helps to gain insight into the strengths and limitations of deep learning and AI approaches for autonomous driving and assist with design choices.
Vehicle-to-Everything (V2X) communication has been proposed as a potential solution to improve the robustness and safety of autonomous vehicles by improving coordination and removing the barrier of non-line-of-sight sensing. Cooperative Vehicle Safety (CVS) applications depend tightly on the reliability of the underlying data system, which can suffer from loss of information due to the inherent issues of its different components, such as sensor failures or the poor performance of V2X technologies under dense communication channel load. In particular, information loss affects the target classification module and, subsequently, the safety application performance. To enable reliable and robust CVS systems that mitigate the effect of information loss, we propose a Context-Aware Target Classification (CA-TC) module coupled with a hybrid learning-based predictive modeling technique for CVS systems. The CA-TC consists of two modules: a Context-Aware Map (CAM) and a Hybrid Gaussian Process (HGP) prediction system. Consequently, vehicle safety applications use the information from the CA-TC, making them more robust and reliable. The CAM leverages vehicles' path history, road geometry, tracking, and prediction, while the HGP provides accurate vehicle trajectory predictions to compensate for data loss (due to communication congestion) or sensor measurement inaccuracies. Based on offline real-world data, we learn a finite bank of driver models that represent the joint dynamics of the vehicle and the driver's behavior. We combine offline training and online model updates with on-the-fly forecasting to account for new possible driver behaviors. Finally, our framework is validated using simulation and realistic driving scenarios to confirm its potential in enhancing the robustness and reliability of CVS systems.
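The HGP module above uses Gaussian-process prediction to compensate for lost data. For illustration, a minimal RBF-kernel GP regressor that infers missing trajectory samples from observed ones; the kernel choice and hyperparameters here are placeholders, not the paper's learned models:

```python
import numpy as np

def gp_predict(t_obs, y_obs, t_query, length=1.0, noise=1e-2):
    """Posterior mean of a zero-mean GP with an RBF kernel: fill in
    trajectory values at t_query from noisy observations (t_obs, y_obs)."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(t_obs, t_obs) + noise * np.eye(len(t_obs))
    alpha = np.linalg.solve(K, y_obs)  # K^-1 y, via a linear solve
    return k(t_query, t_obs) @ alpha
```

A full GP also yields a posterior variance, which a safety application can use to down-weight reconstructed samples relative to directly sensed ones.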
Accurate localization ability is fundamental in autonomous driving. Traditional visual localization frameworks approach the semantic map-matching problem with geometric models, which rely on complex parameter tuning and thus hinder large-scale deployment. In this paper, we propose BEV-Locator: an end-to-end visual semantic localization neural network using multi-view camera images. Specifically, a visual BEV (Bird's-Eye-View) encoder extracts and flattens the multi-view images into BEV space, while the semantic map features are structurally embedded as a sequence of map queries. A cross-model transformer then associates the BEV features and semantic map queries, and the localization information of the ego car is recursively queried out by cross-attention modules. Finally, the ego pose can be inferred by decoding the transformer outputs. We evaluate the proposed method on the large-scale nuScenes and Qcraft datasets. The experimental results show that BEV-Locator is capable of estimating the vehicle pose under versatile scenarios, effectively associating the cross-model information from multi-view images and global semantic maps. The experiments report satisfactory accuracy, with mean absolute errors of 0.052 m, 0.135 m, and 0.251$^\circ$ in lateral translation, longitudinal translation, and heading angle respectively.
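BEV-Locator queries the ego pose out of BEV features with cross-attention modules. A single numpy cross-attention step sketching that mechanism (dimensions are arbitrary here, and the pose-decoding head that would follow is omitted):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, keys, values):
    """One cross-attention step: map queries (e.g. semantic-map elements)
    against keys/values (e.g. BEV image features) and return the attended
    feature per query, from which a decoder could regress the ego pose."""
    d = queries.shape[-1]
    attn = softmax(queries @ keys.T / np.sqrt(d), axis=-1)
    return attn @ values
```

When a map query matches one BEV feature much more strongly than the others, the output is dominated by that feature's value, which is how the matching between map and image is expressed.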
Modern autonomous driving systems are characterized as modular tasks in sequential order, i.e., perception, prediction, and planning. As sensors and hardware improve, there is a growing trend toward devising a system that can perform a wide diversity of tasks to fulfill higher-level intelligence. Contemporary approaches resort to either deploying standalone models for individual tasks or designing a multi-task paradigm with separate heads; these may suffer from accumulative error or negative transfer effects. Instead, we argue that a favorable algorithm framework should be devised and optimized in pursuit of the ultimate goal, i.e., planning of the self-driving car. Oriented at this goal, we revisit the key components within perception and prediction. We analyze each module and prioritize the tasks hierarchically, such that all of them contribute to planning (the goal). To this end, we introduce Unified Autonomous Driving (UniAD), the first comprehensive framework to date that incorporates full-stack driving tasks in one network. It is exquisitely devised to leverage the advantages of each module and to provide complementary feature abstractions for agent interaction from a global perspective. Tasks are communicated with a unified query design to facilitate each other toward planning. We instantiate UniAD on the challenging nuScenes benchmark. With extensive ablations, the effectiveness of this philosophy is shown to surpass previous state-of-the-art methods by a large margin in all aspects. The full suite of code and models will be made available to facilitate future research in the community.
Anticipating the future occupancy state of the environment is important for informed decision-making in autonomous driving. Common challenges in occupancy prediction include vanishing dynamic objects and blurred predictions, especially over long prediction horizons. In this work, we propose a double-prong neural network architecture to predict the spatiotemporal evolution of the occupancy state. One prong is dedicated to predicting how the static environment will be observed by the moving ego vehicle; the other prong predicts how the dynamic objects in the environment will move. Experiments conducted on the real-world Waymo Open Dataset show that the fused output of the two prongs is able to preserve dynamic objects and yields less blurred predictions over longer prediction horizons than baseline models.
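The double-prong model above fuses a static-environment prediction with a dynamic-object prediction. One simple fusion rule, assuming (purely for this sketch, not as the paper's stated method) independent per-cell occupancy probabilities from the two prongs:

```python
import numpy as np

def fuse_occupancy(static_p, dynamic_p):
    """Fuse two occupancy-probability grids: a cell is occupied if either
    the static prong or the dynamic prong says so. Under a per-cell
    independence assumption: p = 1 - (1 - p_s)(1 - p_d)."""
    return 1.0 - (1.0 - static_p) * (1.0 - dynamic_p)
```

Keeping the dynamic prong separate is what prevents moving objects from being averaged away into the static background over long horizons.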
Motion prediction (MP) for multiple agents is a crucial task in arbitrarily complex environments, from social robots to self-driving cars. Current approaches tackle this problem with end-to-end networks, where the input data are usually a top view of the scene and the past trajectories of all agents; leveraging this information is essential to achieve optimal performance. In this sense, a reliable autonomous driving (AD) system must produce reasonable predictions on time; however, although many of these approaches use simple ConvNets and LSTMs, the models may not be efficient enough for real-time applications when both information sources (map and trajectory history) are used. Moreover, the performance of such models depends heavily on the amount of training data, which can be expensive (particularly annotated HD maps). In this work, we explore how to achieve competitive performance on the Argoverse 1.0 benchmark using an efficient attention-based model, which takes as input the past trajectories and map-based features computed from minimal map information, to ensure efficient and reliable MP. These features represent interpretable information, such as the drivable area and plausible goal points, in contrast to black-box CNN-based map-processing methods.
Computer vision applications in intelligent transportation systems (ITS) and autonomous driving (AD) have gravitated towards deep neural network architectures in recent years. While performance seems to be improving on benchmark datasets, many real-world challenges are yet to be adequately considered in research. This paper presents an extensive literature review of the applications of computer vision in ITS and AD and discusses challenges related to data, models, and complex urban environments. The data challenges are associated with the collection and labeling of training data and its relevance to real-world conditions, bias inherent in datasets, the high volume of data to be processed, and privacy concerns. Deep learning (DL) models are commonly too complex for real-time processing on embedded hardware, lack explainability and generalizability, and are hard to test in real-world settings. Complex urban traffic environments have irregular lighting and occlusions, and surveillance cameras can be mounted at a variety of angles, gather dirt, and shake in the wind, while traffic conditions are highly heterogeneous, with rule violations and complex interactions in crowded scenarios. Some representative applications that suffer from these problems are traffic flow estimation, congestion detection, autonomous driving perception, vehicle interaction, and edge computing for practical deployment. Possible ways of dealing with these challenges are also explored, with an emphasis on practical deployment.
Reasoning about vehicle path prediction is an essential problem for the safe operation of autonomous driving systems. There are many research efforts on path prediction; however, most of them do not use lane information and are not based on the transformer architecture. By utilizing different types of data collected from sensors equipped on autonomous vehicles, we propose a path prediction system named Multi-modal Transformer Path Prediction (MTPP) that aims to predict the long-term future trajectory of target agents. To achieve more accurate path prediction, the transformer architecture is adopted in our model. To better utilize lane information, lanes that the target agent is unlikely to take, such as those opposing the target agent's direction of travel, are filtered out. In addition, consecutive lane chunks are combined to ensure the lane input is long enough for path prediction. An extensive evaluation is conducted to show the efficacy of the proposed system using nuScenes, a real-world trajectory prediction dataset.
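MTPP filters out lanes the target agent is unlikely to take, such as opposing-traffic lanes. A minimal heading-based filter illustrating that idea (the 90° angle threshold is an assumption of this sketch, not a value from the paper):

```python
import numpy as np

def filter_lanes(agent_heading, lane_dirs, max_angle_deg=90.0):
    """Return indices of lanes whose direction vector is within
    max_angle_deg of the agent's heading, discarding opposing lanes."""
    h = np.array([np.cos(agent_heading), np.sin(agent_heading)])
    dirs = np.asarray(lane_dirs, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    cos_th = dirs @ h  # cosine of angle between lane and heading
    return np.where(cos_th >= np.cos(np.radians(max_angle_deg)))[0]
```

The surviving lane segments would then be concatenated into chunks long enough to cover the prediction horizon, as the abstract describes.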
Speed estimation of an ego vehicle is crucial to enable autonomous driving and advanced driver assistance technologies. Due to functional and legacy issues, conventional methods depend on in-car sensors to extract vehicle speed through the Controller Area Network bus. However, it is desirable to have modular systems that are not susceptible to external sensors to execute perception tasks. In this paper, we propose a novel 3D-CNN with masked-attention architecture to estimate ego vehicle speed using a single front-facing monocular camera. To demonstrate the effectiveness of our method, we conduct experiments on two publicly available datasets, nuImages and KITTI. We also demonstrate the efficacy of masked-attention by comparing our method with a traditional 3D-CNN.
Accurately predicting interactive road agents' future trajectories and planning a socially compliant and human-like trajectory accordingly are important for autonomous vehicles. In this paper, we propose a planning-centric prediction neural network, which takes surrounding agents' historical states and map context information as input, and outputs the joint multi-modal prediction trajectories for surrounding agents, as well as a sequence of control commands for the ego vehicle by imitation learning. An agent-agent interaction module along the time axis is proposed in our network architecture to better comprehend the relationship among all the other intelligent agents on the road. To incorporate the map's topological information, a Dynamic Graph Convolutional Neural Network (DGCNN) is employed to process the road network topology. Besides, the whole architecture can serve as a backbone for the Differentiable Integrated motion Prediction with Planning (DIPP) method by providing accurate prediction results and initial planning commands. Experiments are conducted on real-world datasets to demonstrate the improvements made by our proposed method in both planning and prediction accuracy compared to the previous state-of-the-art methods.
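The planning-centric network above processes road-network topology with a Dynamic Graph Convolutional Neural Network (DGCNN). A toy EdgeConv step in the spirit of DGCNN, where a fixed random linear map stands in for the learned MLP and k-nearest-neighbour graphs are rebuilt from the features themselves:

```python
import numpy as np

def edge_conv(X, k=2):
    """One EdgeConv step: for each node, build edge features
    [x_i, x_j - x_i] over its k nearest neighbours and max-pool the
    transformed edges. X: (n_nodes, n_features)."""
    rng = np.random.default_rng(0)
    W = rng.normal(size=(2 * X.shape[1], X.shape[1]))  # stand-in for an MLP
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    out = np.empty_like(X)
    for i in range(len(X)):
        nbrs = np.argsort(D[i])[1:k + 1]  # skip self (distance 0)
        edges = np.hstack([np.repeat(X[i][None], k, 0), X[nbrs] - X[i]])
        out[i] = (edges @ W).max(axis=0)  # max-pool over neighbours
    return out
```

Because the neighbourhood graph is recomputed from the current features, the connectivity can adapt between layers, which is the "dynamic" part of DGCNN.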
While autonomous vehicles still struggle to handle challenging situations during on-road driving, humans have long mastered the essence of driving with efficient, transferable, and adaptable driving capability. By mimicking humans' cognition model and semantic understanding during driving, we present HATN, a hierarchical framework that generates high-quality driving behaviors in multi-agent dense-traffic environments. Our method hierarchically consists of a high-level intention identification policy and a low-level action generation policy. With semantic sub-task definitions and generic state representations, the hierarchical framework is transferable across different driving scenarios. In addition, our model is able to capture variations in driving behavior across individuals and scenarios through an online adaptation module. We demonstrate our algorithm on trajectory prediction tasks with real-world traffic data at intersections and roundabouts, where we conduct extensive studies of the proposed method and show that it outperforms other approaches in terms of prediction accuracy and transferability.
Automated driving systems (ADS) open up a new domain for the automotive industry and offer new possibilities for future transportation with higher efficiency and more comfortable experiences. However, autonomous driving under adverse weather conditions has long been a problem that keeps autonomous vehicles (AVs) from reaching higher levels of autonomy. This paper assesses the influences and challenges that weather brings to ADS sensors in an analytic and statistical way, and surveys solutions for adverse weather conditions. State-of-the-art techniques for perception enhancement with regard to each kind of weather are thoroughly reported. External auxiliary solutions such as V2X technology, the weather-condition coverage of currently available datasets and simulators, and experimental facilities with weather chambers are distinctly sorted out. By pointing out the main weather issues the autonomous driving field currently faces and reviewing both hardware and computer-science solutions from recent years, this survey outlines the obstacles and directions of ADS development with respect to adverse weather driving conditions.