The historical trajectories that previously passed through a location may help infer the future trajectory of an agent currently at that location. Although trajectory prediction has been greatly improved under the guidance of HD maps, only a few works have explored such local historical information. In this work, we re-introduce this information as a new type of input data for trajectory prediction systems: local behavior data, which we conceptualize as a collection of location-specific historical trajectories. Local behavior data helps the system emphasize the prediction region and better understand the effect of static map objects on moving agents. We propose a novel local-behavior-aware (LBA) prediction framework that improves prediction accuracy by fusing information from observed trajectories, HD maps, and local behavior data. Also, in case such historical data is insufficient or unavailable, we employ a local-behavior-free (LBF) prediction framework, which adopts a knowledge-distillation-based architecture to infer the impact of the missing data. Extensive experiments show that upgrading existing methods with these two frameworks significantly improves their performance. In particular, the LBA framework boosts the performance of SOTA methods on the nuScenes dataset by at least 14% on the K=1 metrics.
Forecasting the future motion of road participants is crucial for automated driving, yet extremely challenging due to staggering motion uncertainty. Recently, most motion forecasting methods resort to a goal-based strategy, i.e., predicting endpoints of motion trajectories as conditions to regress the entire trajectories, so that the search space of solutions can be reduced. However, accurate goal coordinates are hard to predict and evaluate. In addition, the point representation of a destination limits the utilization of the rich road context, leading to inaccurate predictions. A goal area, i.e., a possible destination area, rather than a goal coordinate, can provide a softer constraint for searching potential trajectories by involving more tolerance and guidance. In view of this, we propose a new goal-area-based framework, named Goal Area Network (GANet), for motion forecasting, which models goal areas rather than exact goal coordinates as preconditions of trajectory prediction and performs more robustly and accurately. Specifically, we propose a GoICrop (goal area of interest) operator to effectively extract semantic lane features within the goal areas and model actors' future interactions there, which benefits future trajectory estimation a lot. GANet ranks first among all public literature (as of the paper's submission), and its source code will be released.
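The "softer constraint" of a goal area can be pictured as scoring a set of candidate cells and keeping the whole distribution over the region, rather than committing to a single argmax coordinate. A minimal pure-Python sketch (the function names and the weighted-centroid readout are our own illustration, not the paper's GoICrop operator):

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of raw scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def goal_from_region(cells, scores):
    """Estimate a goal point as the probability-weighted centroid of a
    scored goal *region* (a set of candidate cells).  Keeping the full
    distribution tolerates uncertainty about the exact destination."""
    probs = softmax(scores)
    gx = sum(p * x for p, (x, y) in zip(probs, cells))
    gy = sum(p * y for p, (x, y) in zip(probs, cells))
    return (gx, gy), probs
```

With two equally scored cells at (0, 0) and (2, 0), the estimated goal lands at their midpoint (1, 0).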
Self-supervised learning (SSL) is an emerging technique that has been successfully employed to train convolutional neural networks (CNNs) and graph neural networks (GNNs) for more transferable, generalizable, and robust representation learning. However, it has rarely been explored for motion forecasting in autonomous driving. In this study, we report the first systematic exploration and assessment of incorporating self-supervision into motion forecasting. We first propose to investigate four novel self-supervised learning tasks for motion forecasting, with theoretical rationale and quantitative and qualitative comparisons on the challenging large-scale Argoverse dataset. Second, we point out that our auxiliary-SSL-based learning setup not only outperforms forecasting methods that use transformers, complicated fusion mechanisms, and sophisticated online dense goal-candidate optimization algorithms in terms of accuracy, but also has lower inference time and architectural complexity. Finally, we conduct several experiments to understand why SSL improves motion forecasting. Code is open-sourced at \url{https://github.com/autovision-cloud/ssl-lanes}.
Motion prediction (MP) of multiple agents is a crucial task in arbitrarily complex environments, from social robots to self-driving cars. Current approaches tackle this problem with end-to-end networks, where the input data is usually a top view of the scene together with the past trajectories of all agents; exploiting this information is essential for optimal performance. In that sense, a reliable autonomous driving (AD) system must produce reasonable predictions on time; however, although many of these approaches use simple ConvNets and LSTMs, the models may not be efficient enough for real-time applications when using both information sources (map and trajectory history). Moreover, the performance of these models highly depends on the amount of training data, which can be expensive (particularly the annotated HD maps). In this work, we explore how to achieve competitive performance on the Argoverse 1.0 benchmark using efficient attention-based models, which take as input the past trajectories and map-based features derived from minimal map information to ensure efficient and reliable MP. These features represent interpretable information, such as the driveable area and plausible goal points, in contrast to black-box CNN-based map-processing methods.
Motion forecasting of traffic participants is critical for safe and robust automated driving systems, especially in cluttered urban environments. However, it is highly challenging due to complex road topology as well as the uncertain intentions of other agents. In this paper, we present a graph-based trajectory prediction network named the Dual-Scale Predictor (DSP), which encodes both the static and dynamic driving context in a hierarchical manner. Different from methods based on rasterized maps or sparse lane graphs, we consider the driving context as a graph with two layers, focusing on geometric and topological features respectively. Graph neural networks (GNNs) are applied to extract features at different levels of granularity, which are subsequently aggregated by an attention-based inter-layer network, realizing better local-global feature fusion. Following the recent goal-driven trajectory prediction pipeline, goal candidates with high likelihood for the target agent are extracted, and predicted trajectories are generated conditioned on these goals. Thanks to the proposed dual-scale context fusion network, our DSP is able to generate accurate and human-like multi-modal trajectories. We evaluate the proposed method on the large-scale Argoverse motion forecasting benchmark, where it achieves promising results, outperforming recent state-of-the-art methods.
Behavior prediction in dynamic, multi-agent systems is an important problem in the context of self-driving cars, due to the complex representations and interactions of road components, including moving agents (e.g. pedestrians and vehicles) and road context information (e.g. lanes, traffic lights). This paper introduces VectorNet, a hierarchical graph neural network that first exploits the spatial locality of individual road components represented by vectors and then models the high-order interactions among all components. In contrast to most recent approaches, which render trajectories of moving agents and road context information as bird-eye images and encode them with convolutional neural networks (ConvNets), our approach operates on a vector representation. By operating on the vectorized high definition (HD) maps and agent trajectories, we avoid lossy rendering and computationally intensive ConvNet encoding steps. To further boost VectorNet's capability in learning context features, we propose a novel auxiliary task to recover the randomly masked out map entities and agent trajectories based on their context. We evaluate VectorNet on our in-house behavior prediction benchmark and the recently released Argoverse forecasting dataset. Our method achieves on par or better performance than the competitive rendering approach on both benchmarks while saving over 70% of the model parameters with an order of magnitude reduction in FLOPs. It also outperforms the state of the art on the Argoverse dataset.
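VectorNet's vectorized representation replaces rasterized bird's-eye images with short directed segments grouped by the polyline they came from. A minimal sketch of that preprocessing step (the field layout is our simplification; the paper's vectors also carry attribute features such as timestamps or lane semantics):

```python
def polyline_to_vectors(points, polyline_id):
    """Turn an ordered list of 2-D points (an agent trajectory or a lane
    centerline) into VectorNet-style vectors: each vector stores its start
    point, its end point, and the id of the polyline it belongs to, so a
    graph network can first pool within a polyline and then attend across
    polylines."""
    vectors = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        vectors.append((x0, y0, x1, y1, polyline_id))
    return vectors
```

A 3-point polyline yields 2 consecutive vectors sharing the same polyline id, preserving connectivity without any rendering step.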
Motion prediction systems aim to capture the future behavior of traffic scenarios enabling autonomous vehicles to perform safe and efficient planning. The evolution of these scenarios is highly uncertain and depends on the interactions of agents with static and dynamic objects in the scene. GNN-based approaches have recently gained attention as they are well suited to naturally model these interactions. However, one of the main challenges that remains unexplored is how to address the complexity and opacity of these models in order to deal with the transparency requirements for autonomous driving systems, which include aspects such as interpretability and explainability. In this work, we aim to improve the explainability of motion prediction systems by using different approaches. First, we propose a new Explainable Heterogeneous Graph-based Policy (XHGP) model based on a heterograph representation of the traffic scene and lane-graph traversals, which learns interaction behaviors using object-level and type-level attention. This learned attention provides information about the most important agents and interactions in the scene. Second, we explore this same idea with the explanations provided by GNNExplainer. Third, we apply counterfactual reasoning to provide explanations of selected individual scenarios by exploring the sensitivity of the trained model to changes made to the input data, i.e., masking some elements of the scene, modifying trajectories, and adding or removing dynamic agents. The explainability analysis provided in this paper is a first step towards more transparent and reliable motion prediction systems, important from the perspective of users, developers and regulatory agencies. The code to reproduce this work is publicly available at https://github.com/sancarlim/Explainable-MP/tree/v1.1.
Predicting the future locations of agents in a scene is an important problem in autonomous driving. In recent years, significant progress has been made in representing the scene and its agents. The interactions of agents with the scene and with each other are typically modeled with graph neural networks. However, the graph structure is mostly static and fails to represent the temporal changes of highly dynamic scenes. In this work, we propose a temporal graph representation to better capture the dynamics in traffic scenes. We complement the representation with two types of memory modules: one focusing on the agent of interest, the other on the entire scene. This allows us to learn temporally-aware representations that achieve good results even with simple regression over multiple futures. When combined with goal-conditioned prediction, we show better results that reach state-of-the-art performance on the Argoverse benchmark.
We propose a motion forecasting model that exploits a novel structured map representation as well as actor-map interactions. Instead of encoding vectorized maps as raster images, we construct a lane graph from raw map data to explicitly preserve the map structure. To capture the complex topology and long range dependencies of the lane graph, we propose LaneGCN which extends graph convolutions with multiple adjacency matrices and along-lane dilation. To capture the complex interactions between actors and maps, we exploit a fusion network consisting of four types of interactions: actor-to-lane, lane-to-lane, lane-to-actor and actor-to-actor. Powered by LaneGCN and actor-map interactions, our model is able to predict accurate and realistic multi-modal trajectories. Our approach significantly outperforms the state-of-the-art on the large scale Argoverse motion forecasting benchmark.
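The multi-adjacency graph convolution above can be read as one aggregation per lane relation (predecessor, successor, left neighbor, right neighbor), summed together. A pure-Python sketch under that reading (list-of-lists matrices stand in for tensors; dilation, biases and non-linearities are omitted):

```python
def matmul(A, B):
    # Plain matrix product on lists of lists.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def lane_conv(X, adjacencies, weights):
    """One LaneGCN-style layer: sum over relation types r of A_r @ X @ W_r.
    Each relation (e.g. successor vs. left neighbor) gets its own adjacency
    matrix A_r and its own learned projection W_r, so information flows
    differently along the lane than across it."""
    out = None
    for A, W in zip(adjacencies, weights):
        term = matmul(matmul(A, X), W)
        out = term if out is None else add(out, term)
    return out
```

With a single "successor" adjacency that points node 0 at node 1 and an identity weight, node 0 simply receives node 1's feature.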
In multi-modal multi-agent trajectory forecasting, two major challenges have not been fully tackled: 1) how to measure the uncertainty brought by the interaction module, which causes correlations among the predicted trajectories of multiple agents; 2) how to rank the multiple predictions and select the optimal predicted trajectory. To handle these challenges, this work first proposes a novel concept, collaborative uncertainty (CU), which models the uncertainty resulting from interaction modules. We then build a general CU-aware regression framework with an original permutation-equivariant uncertainty estimator to perform the tasks of regression and uncertainty estimation. Further, we apply the proposed framework to current SOTA multi-agent multi-modal forecasting systems as a plugin module, which enables the SOTA systems to 1) estimate the uncertainty in the multi-agent multi-modal trajectory forecasting task and 2) rank the multiple predictions and select the optimal one based on the estimated uncertainty. We conduct extensive experiments on a synthetic dataset and two public large-scale multi-agent trajectory forecasting benchmarks. The experiments show that: 1) on the synthetic dataset, the CU-aware regression framework allows the model to appropriately approximate the ground-truth Laplace distribution; 2) on the multi-agent trajectory forecasting benchmarks, the CU-aware regression framework steadily helps SOTA systems improve their performance; in particular, the proposed framework helps VectorNet improve by 262 cm on the Final Displacement Error of the chosen optimal predicted trajectory on the nuScenes dataset; 3) for multi-agent multi-modal trajectory forecasting systems, prediction uncertainty is positively correlated with future stochasticity; and 4) the estimated CU values are highly related to the interactive information among agents.
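Regression-plus-uncertainty training of this kind is commonly implemented as a Laplace negative log-likelihood, and ranking then amounts to preferring the mode with the smallest summed scale. A simplified single-variable sketch (the paper's permutation-equivariant collaborative terms are omitted; `pick_best` is our illustrative ranking rule):

```python
import math

def laplace_nll(mu, b, y):
    """Negative log-likelihood of target y under Laplace(mu, b).
    Minimizing this jointly trains the point estimate mu and the
    uncertainty scale b: large errors push b up, confident fits pull it
    down."""
    return math.log(2.0 * b) + abs(y - mu) / b

def pick_best(predictions):
    """Rank multi-modal predictions by their summed uncertainty scale and
    return the index of the most confident mode.  Each prediction is a
    list of (mu, b) pairs over timesteps."""
    return min(range(len(predictions)),
               key=lambda k: sum(b for _, b in predictions[k]))
```

When the prediction matches the target exactly with unit scale, the loss reduces to log 2, its minimum for b = 1.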
Predicting plausible future trajectories of nearby agents is a core challenge for the safety of autonomous vehicles, and it mainly depends on two external cues: the dynamic neighboring agents and the static scene context. Recent approaches have made great progress in characterizing the two cues separately. However, they ignore the correlation between the two cues, and most of them struggle to achieve map-adaptive prediction. In this paper, we use lanes as the scene data and propose a staged network that Jointly learns Agent and Lane information for Multimodal Trajectory Prediction (JAL-MTP). JAL-MTP uses a Social-to-Lane (S2L) module to jointly represent the static lanes and the dynamic motion of neighboring agents as instance-level lanes, a Recurrent Lane Attention (RLA) mechanism that exploits the instance-level lanes to predict map-adaptive future trajectories, and two selectors to identify typical and plausible trajectories. Experiments on the public Argoverse dataset demonstrate that JAL-MTP significantly outperforms existing models both quantitatively and qualitatively.
In recent years, behavior prediction models have proliferated, especially in the popular real-world robotics application of autonomous driving, where representing the distribution over possible futures of moving agents is essential for safe and comfortable motion planning. In these models, the choice of coordinate frames to represent inputs and outputs has crucial trade-offs which broadly fall into one of two categories. Agent-centric models transform inputs and perform inference in agent-centric coordinates. These models are intrinsically invariant to translation and rotation between scene elements and are best-performing on public leaderboards, but they scale quadratically with the number of agents and scene elements. Scene-centric models use a fixed coordinate system to process all agents. This gives them the advantage of sharing representations among all agents, offering efficient amortized inference computation that scales linearly with the number of agents. However, these models have to learn invariance to translation and rotation between scene elements, and typically underperform agent-centric models. In this work, we develop knowledge distillation techniques between probabilistic motion forecasting models, and apply these techniques to close the performance gap between agent-centric and scene-centric models. This improves scene-centric model performance by 13.2% on the public Argoverse benchmark, 7.8% on the Waymo Open Dataset, and up to 9.4% on a large in-house dataset. These improved scene-centric models rank highly on public leaderboards and are up to 15 times more efficient than their agent-centric teacher counterparts in busy scenes.
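Distilling a probabilistic teacher into a student with the same output format can be sketched as matching mode probabilities (a KL term) plus pulling per-mode trajectories together (an L2 term). The following is our illustrative simplification, not the paper's exact objective; `alpha` is an assumed balancing weight:

```python
import math

def kl_divergence(p, q, eps=1e-9):
    # KL(p || q) over discrete mode probabilities, with a small epsilon
    # guarding against zeros.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def distill_loss(teacher_probs, student_probs, teacher_traj, student_traj, alpha=1.0):
    """Sketch of trajectory-distribution distillation: match the student's
    mode probabilities to the (fixed) teacher's via KL, and pull each
    student mode's waypoints toward the corresponding teacher mode via a
    squared-error term.  Trajectories are lists of modes, each mode a list
    of (x, y) waypoints."""
    kl = kl_divergence(teacher_probs, student_probs)
    l2 = sum((xs - xt) ** 2 + (ys - yt) ** 2
             for ts, tt in zip(student_traj, teacher_traj)
             for (xs, ys), (xt, yt) in zip(ts, tt))
    return kl + alpha * l2
```

A student that exactly reproduces the teacher incurs zero loss, so the objective only penalizes genuine disagreement.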
Motion prediction is highly relevant to the perception of dynamic objects and static map elements in the scenarios of autonomous driving. In this work, we propose PIP, the first end-to-end Transformer-based framework which jointly and interactively performs online mapping, object detection and motion prediction. PIP leverages map queries, agent queries and mode queries to encode the instance-wise information of map elements, agents and motion intentions, respectively. Based on the unified query representation, a differentiable multi-task interaction scheme is proposed to exploit the correlation between perception and prediction. Even without human-annotated HD map or agent's historical tracking trajectory as guidance information, PIP realizes end-to-end multi-agent motion prediction and achieves better performance than tracking-based and HD-map-based methods. PIP provides comprehensive high-level information of the driving scene (vectorized static map and dynamic objects with motion information), and contributes to the downstream planning and control. Code and models will be released for facilitating further research.
Visual place recognition (VPR) is usually considered as a specific image retrieval problem. Limited by existing training frameworks, most deep learning-based works cannot extract sufficiently stable global features from RGB images and rely on a time-consuming re-ranking step to exploit spatial structural information for better performance. In this paper, we propose StructVPR, a novel training architecture for VPR, to enhance structural knowledge in RGB global features and thus improve feature stability in a constantly changing environment. Specifically, StructVPR uses segmentation images as a more definitive source of structural knowledge input into a CNN network and applies knowledge distillation to avoid online segmentation and inference of seg-branch in testing. Considering that not all samples contain high-quality and helpful knowledge, and some even hurt the performance of distillation, we partition samples and weigh each sample's distillation loss to enhance the expected knowledge precisely. Finally, StructVPR achieves impressive performance on several benchmarks using only global retrieval and even outperforms many two-stage approaches by a large margin. After adding additional re-ranking, ours achieves state-of-the-art performance while maintaining a low computational cost.
Modern autonomous driving system is characterized as modular tasks in sequential order, i.e., perception, prediction and planning. As sensors and hardware get improved, there is trending popularity to devise a system that can perform a wide diversity of tasks to fulfill higher-level intelligence. Contemporary approaches resort to either deploying standalone models for individual tasks, or designing a multi-task paradigm with separate heads. These might suffer from accumulative error or negative transfer effect. Instead, we argue that a favorable algorithm framework should be devised and optimized in pursuit of the ultimate goal, i.e. planning of the self-driving-car. Oriented at this goal, we revisit the key components within perception and prediction. We analyze each module and prioritize the tasks hierarchically, such that all these tasks contribute to planning (the goal). To this end, we introduce Unified Autonomous Driving (UniAD), the first comprehensive framework up-to-date that incorporates full-stack driving tasks in one network. It is exquisitely devised to leverage advantages of each module, and provide complementary feature abstractions for agent interaction from a global perspective. Tasks are communicated with unified query design to facilitate each other toward planning. We instantiate UniAD on the challenging nuScenes benchmark. With extensive ablations, the effectiveness of using such a philosophy is proven to surpass previous state-of-the-arts by a large margin in all aspects. The full suite of codebase and models would be available to facilitate future research in the community.
Predicting the multimodal future behaviors of traffic participants is essential for robotic vehicles to make safe decisions. Existing works explore directly predicting future trajectories from latent features, or utilizing dense goal candidates to identify agents' destinations; the former strategy converges slowly since all motion modes are derived from the same feature, while the latter has efficiency issues since its performance relies heavily on the density of goal candidates. In this paper, we propose the Motion TRansformer (MTR) framework, which models motion prediction as the joint optimization of global intention localization and local movement refinement. Instead of using goal candidates, MTR incorporates spatial intention priors by adopting a set of learnable motion query pairs. Each motion query pair takes charge of trajectory prediction and refinement for a specific motion mode, which stabilizes the training process and facilitates better multimodal predictions. Experiments show that MTR achieves state-of-the-art performance on both the marginal and joint motion prediction challenges, ranking first on the Waymo Open Motion Dataset leaderboard. Code will be available at https://github.com/sshaoshuai/mtr.
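The stabilizing trick behind mode-specific queries is that each ground-truth trajectory supervises only the query whose intention point lies nearest its endpoint. A sketch of that hard assignment (our simplification of the matching step; in MTR the intention points seed learnable queries):

```python
def assign_mode(intention_points, gt_endpoint):
    """Hard assignment for mode-specific training: return the index of the
    intention point closest to the ground-truth endpoint, so only that
    mode's query receives the regression loss.  This keeps the modes from
    collapsing onto one another during training."""
    gx, gy = gt_endpoint
    dists = [(px - gx) ** 2 + (py - gy) ** 2 for px, py in intention_points]
    return min(range(len(dists)), key=dists.__getitem__)
```

A ground truth ending near (9, 1) is routed to the query anchored at (10, 0) rather than the one at the origin.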
To promote a better performance-bandwidth trade-off for multi-agent perception, we propose a novel distilled collaboration graph (DiscoGraph) to model trainable, pose-aware and adaptive collaboration among agents. Our key novelties lie in two aspects. First, we propose a teacher-student framework to train the DiscoGraph via knowledge distillation. The teacher model employs early collaboration with holistic-view inputs; the student model is based on intermediate collaboration with single-view inputs. Our framework trains the DiscoGraph by constraining the post-collaboration feature maps in the student model to match the correspondences in the teacher model. Second, we propose matrix-valued edge weights in the DiscoGraph. In such a matrix, each element reflects inter-agent attention at a specific spatial region, allowing an agent to adaptively highlight the informative regions. During inference, we only need to use the student model, named the distilled collaboration network (DiscoNet). Attributed to the teacher-student framework, multiple agents with a shared DiscoNet can collaboratively approach the performance of a hypothetical teacher model with a holistic view. We validate our approach on V2X-Sim 1.0, a large-scale multi-agent perception dataset that we synthesized using CARLA and SUMO co-simulation. Our quantitative and qualitative experiments on multi-agent 3D object detection show that DiscoNet not only achieves a better performance-bandwidth trade-off than state-of-the-art collaborative perception methods, but also brings a more straightforward design rationale. Our code is available at https://github.com/ai4ce/disconet.
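A matrix-valued edge weight lets every spatial cell mix the agents differently, instead of applying one scalar per edge. A toy sketch of such per-cell fusion (the per-cell normalization is our choice for clarity; in the paper these weights are learned, pose-aware attention maps):

```python
def fuse_feature_maps(feature_maps, edge_weights):
    """Fuse per-agent 2-D feature maps by a per-cell weighted sum, so each
    spatial location can emphasize the agent that observes it best.
    `feature_maps` and `edge_weights` are lists of H x W grids, one pair
    per agent; weights are normalized cell-by-cell."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            total = sum(W[i][j] for W in edge_weights)
            fused[i][j] = sum(W[i][j] / total * F[i][j]
                              for W, F in zip(edge_weights, feature_maps))
    return fused
```

With uniform weights this reduces to plain averaging; skewing a cell's weights toward one agent moves the fused value toward that agent's feature.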
Figure 1: We introduce datasets for 3D tracking and motion forecasting with rich maps for autonomous driving. Our 3D tracking dataset contains sequences of LiDAR measurements, 360° RGB video, front-facing stereo (middle-right), and 6-dof localization. All sequences are aligned with maps containing lane center lines (magenta), driveable region (orange), and ground height. Sequences are annotated with 3D cuboid tracks (green). A wider map view is shown in the bottom-right.
Demystifying the interactions among multiple agents from their past trajectories is fundamental to accurate trajectory prediction. However, previous works mainly consider static, pairwise interactions with limited relational reasoning. To promote more comprehensive interaction modeling and relational reasoning, we propose DynGroupNet, a dynamic-group-aware network, which can i) model time-varying interactions in highly dynamic scenes; ii) capture both pairwise and group-wise interactions; and iii) reason about interaction strengths and categories without direct supervision. Based on DynGroupNet, we further design a prediction system to forecast socially plausible trajectories with dynamic relational reasoning. The proposed prediction system leverages a Gaussian mixture model, multiple sampling and prediction refinement to promote prediction diversity, training stability and trajectory smoothness, respectively. Extensive experiments show that: 1) DynGroupNet can capture time-varying group behaviors, inferring time-varying interaction categories and strengths during trajectory prediction without any relational supervision on physical simulation datasets; and 2) DynGroupNet outperforms state-of-the-art trajectory prediction methods with significant improvements of 22.6%/28.0%, 26.9%/34.9%, and 5.1%/13.0% in ADE/FDE on the NBA, NFL Football and SDD datasets, and achieves state-of-the-art performance on the ETH-UCY dataset.
Predicting the future motion of dynamic agents is of paramount importance to ensure safety or assess risks in motion planning for autonomous robots. In this paper, we propose a two-stage motion prediction method, referred to as R-Pred, that effectively utilizes both the scene and interaction context using a cascade of the initial trajectory proposal network and the trajectory refinement network. The initial trajectory proposal network produces M trajectory proposals corresponding to M modes of a future trajectory distribution. The trajectory refinement network enhances each of M proposals using 1) the tube-query scene attention (TQSA) and 2) the proposal-level interaction attention (PIA). TQSA uses tube-queries to aggregate the local scene context features pooled from proximity around the trajectory proposals of interest. PIA further enhances the trajectory proposals by modeling inter-agent interactions using a group of trajectory proposals selected based on their distances from neighboring agents. Our experiments conducted on the Argoverse and nuScenes datasets demonstrate that the proposed refinement network provides significant performance improvements compared to the single-stage baseline and that R-Pred achieves state-of-the-art performance in some categories of the benchmark.
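The tube-query idea of pooling scene context only from the neighborhood of a trajectory proposal can be sketched as a radius test around the proposal's waypoints (a crude stand-in for TQSA's attention-based aggregation; names and the mean-pooling readout are ours):

```python
def tube_pool(proposal, scene_features, radius):
    """Average the scene features whose positions fall within `radius` of
    any waypoint of the trajectory proposal, i.e. inside a 'tube' around
    the proposed path.  `proposal` is a list of (x, y) waypoints;
    `scene_features` is a list of ((x, y), value) pairs."""
    r2 = radius * radius
    picked = [f for (x, y), f in scene_features
              if any((x - px) ** 2 + (y - py) ** 2 <= r2 for px, py in proposal)]
    if not picked:
        return 0.0  # no scene context near this proposal
    return sum(picked) / len(picked)
```

Features far from every waypoint are excluded, so each of the M proposals refines itself against only its own local slice of the scene.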