Predicting the future motion of road agents is a critical task in an autonomous driving pipeline. In this work, we address the problem of generating a set of scene-level, or joint, future trajectory predictions in multi-agent driving scenarios. To this end, we propose FJMP, a Factorized Joint Motion Prediction framework for multi-agent interactive driving scenarios. FJMP models the future scene interaction dynamics as a sparse directed interaction graph, where edges denote explicit interactions between agents. We then prune the graph into a directed acyclic graph (DAG) and decompose the joint prediction task into a sequence of marginal and conditional predictions according to the partial ordering of the DAG, where joint future trajectories are decoded using a directed acyclic graph neural network (DAGNN). We conduct experiments on the INTERACTION and Argoverse 2 datasets and demonstrate that FJMP produces more accurate and scene-consistent joint trajectory predictions than non-factorized approaches, especially on the most interactive and kinematically interesting agents. FJMP ranks 1st on the multi-agent test leaderboard of the INTERACTION dataset.
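To make the factorization concrete, here is a minimal sketch (not the authors' code) of decoding joint futures in the partial order of an interaction DAG, where each agent's prediction is conditioned on the already-decoded futures of its parents; the conditional decoder is a constant-velocity stand-in.

```python
# Sketch: joint decoding in topological order of an interaction DAG.
import numpy as np
from graphlib import TopologicalSorter  # Python 3.9+

T = 30  # prediction horizon

def decode_agent(agent_feat, parent_futures):
    """Stand-in for a learned conditional decoder: a constant-velocity rollout
    nudged toward the mean endpoint of the parents' already-decoded futures."""
    pos, vel = agent_feat[:2], agent_feat[2:]
    traj = pos + np.outer(np.arange(1, T + 1), vel)
    if parent_futures:
        traj += 0.1 * (np.mean([p[-1] for p in parent_futures], axis=0) - traj[-1])
    return traj

def decode_joint(agent_feats, dag_edges):
    """agent_feats: {agent_id: np.array([x, y, vx, vy])};
    dag_edges: {child: set(parents)} -- a pruned, acyclic interaction graph."""
    futures = {}
    for aid in TopologicalSorter(dag_edges).static_order():  # parents come first
        parents = [futures[p] for p in dag_edges.get(aid, ()) if p in futures]
        futures[aid] = decode_agent(agent_feats[aid], parents)
    return futures

feats = {0: np.array([0., 0., 1., 0.]),
         1: np.array([5., 2., 0., -1.]),
         2: np.array([3., -1., .5, .5])}
edges = {1: {0}, 2: {0, 1}}  # agent 0 influences 1 and 2; agent 1 influences 2
joint = decode_joint(feats, edges)
```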
The task of motion forecasting is critical for self-driving vehicles (SDVs) to be able to plan a safe maneuver. Towards this goal, modern approaches reason about the map, the agents' past trajectories and their interactions in order to produce accurate forecasts. The predominant approach has been to encode the map and other agents in the reference frame of each target agent. However, this approach is computationally expensive for multi-agent prediction as inference needs to be run for each agent. To tackle the scaling challenge, the solution thus far has been to encode all agents and the map in a shared coordinate frame (e.g., the SDV frame). However, this is sample inefficient and vulnerable to domain shift (e.g., when the SDV visits uncommon states). In contrast, in this paper, we propose an efficient shared encoding for all agents and the map without sacrificing accuracy or generalization. Towards this goal, we leverage pair-wise relative positional encodings to represent geometric relationships between the agents and the map elements in a heterogeneous spatial graph. This parameterization allows us to be invariant to scene viewpoint, and save online computation by re-using map embeddings computed offline. Our decoder is also viewpoint agnostic, predicting agent goals on the lane graph to enable diverse and context-aware multimodal prediction. We demonstrate the effectiveness of our approach on the urban Argoverse 2 benchmark as well as a novel highway dataset.
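The pair-wise relative positional encoding idea can be illustrated with a small sketch: the geometric relation between two scene elements is expressed in the first element's frame, so the features are invariant to the global scene viewpoint. The feature layout below is an assumption for illustration, not the paper's parameterization.

```python
import numpy as np

def relative_pose_encoding(pos_i, yaw_i, pos_j, yaw_j):
    """Relation of element j seen from element i: offset in i's frame,
    distance, and relative heading (as sin/cos)."""
    d = pos_j - pos_i
    c, s = np.cos(-yaw_i), np.sin(-yaw_i)
    local = np.array([c * d[0] - s * d[1], s * d[0] + c * d[1]])
    rel_yaw = yaw_j - yaw_i
    return np.array([local[0], local[1], np.linalg.norm(d),
                     np.sin(rel_yaw), np.cos(rel_yaw)])

def transform(p, yaw, dtheta, t):
    """Apply a global rotation by dtheta and translation by t."""
    R = np.array([[np.cos(dtheta), -np.sin(dtheta)],
                  [np.sin(dtheta),  np.cos(dtheta)]])
    return R @ p + t, yaw + dtheta

# Shifting/rotating the whole scene leaves the encoding unchanged.
p_i, y_i = np.array([0., 0.]), 0.3
p_j, y_j = np.array([4., 1.]), 1.0
e1 = relative_pose_encoding(p_i, y_i, p_j, y_j)
pi2, yi2 = transform(p_i, y_i, 0.7, np.array([10., -5.]))
pj2, yj2 = transform(p_j, y_j, 0.7, np.array([10., -5.]))
e2 = relative_pose_encoding(pi2, yi2, pj2, yj2)
assert np.allclose(e1, e2)
```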
Predicting the future locations of agents in a scene is an important problem in autonomous driving. In recent years, significant progress has been made in representing the scene and its agents. Interactions between agents and the scene, and between the agents themselves, are typically modeled with graph neural networks. However, these graph structures are mostly static and fail to represent the temporal changes of highly dynamic scenes. In this work, we propose a temporal graph representation to better capture the dynamics of traffic scenes. We complement the representation with two types of memory modules: one focusing on the agent of interest, the other on the entire scene. This allows us to learn temporally-aware representations that achieve good results even with simple regression of multiple futures. When combined with goal-conditioned prediction, we show even better results, reaching state-of-the-art performance on the Argoverse benchmark.
Evaluating and improving planning for autonomous vehicles requires scalable generation of long-tail traffic scenarios. To be useful, these scenarios must be realistic and challenging, yet still possible to drive through safely. In this work, we introduce STRIVE, a method to automatically generate challenging scenarios that cause a given planner to produce undesirable behavior, such as collisions. To maintain scenario plausibility, the key idea is to leverage a learned model of traffic motion in the form of a graph-based conditional VAE. Scenario generation is formulated as an optimization in the latent space of this traffic model, perturbing an initial real-world scene to produce trajectories that collide with the given planner. A subsequent optimization is used to find a "solution" to the scenario, ensuring it is useful for improving the given planner. Further analysis clusters the generated scenarios based on collision type. We attack two planners and show that STRIVE successfully generates realistic, challenging scenarios in both cases. We additionally "close the loop" and use these scenarios to optimize the hyperparameters of a rule-based planner.
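A simplified sketch of the latent-space attack described above: perturb the latent of a (placeholder) learned traffic decoder so that the decoded adversary trajectory approaches the planner's rollout, while a prior term keeps the latent close to its initialization. The decoder, planner, and loss weights are toy stand-ins, not STRIVE's actual models.

```python
import torch

def decode_traffic(z):
    """Placeholder traffic decoder: a latent vector -> one adversary trajectory (T, 2)."""
    t = torch.linspace(0, 1, 20).unsqueeze(1)
    return t * z[:2] + t ** 2 * z[2:]

def planner_rollout():
    """Placeholder planner under attack: the ego simply drives straight ahead."""
    return torch.stack([torch.linspace(0, 20, 20), torch.zeros(20)], dim=1)

z = torch.randn(4, requires_grad=True)
z_init = z.detach().clone()
ego = planner_rollout()
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    adv = decode_traffic(z)
    gap = torch.cdist(adv, ego).min()  # closest approach between adversary and ego
    # push the closest approach below 0.5 m, while staying near the prior latent
    loss = torch.relu(gap - 0.5) + 0.1 * ((z - z_init) ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```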
We propose JFP, a Joint Future Prediction model that can learn to generate accurate and consistent multi-agent future trajectories. For this task, many different methods have been proposed to capture social interactions in the encoding part of the model, however, considerably less focus has been placed on representing interactions in the decoder and output stages. As a result, the predicted trajectories are not necessarily consistent with each other, and often result in unrealistic trajectory overlaps. In contrast, we propose an end-to-end trainable model that learns directly the interaction between pairs of agents in a structured, graphical model formulation in order to generate consistent future trajectories. It sets new state-of-the-art results on Waymo Open Motion Dataset (WOMD) for the interactive setting. We also investigate a more complex multi-agent setting for both WOMD and a larger internal dataset, where our approach improves significantly on the trajectory overlap metrics while obtaining on-par or better performance on single-agent trajectory metrics.
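As an illustration of coupling per-agent modes with a pairwise consistency term, the sketch below scores every combination of candidate modes with marginal log-probabilities minus an overlap penalty and keeps the best joint assignment. The energy terms and brute-force search are illustrative assumptions, not the paper's learned potentials or inference procedure.

```python
import numpy as np
from itertools import product

def overlap_penalty(traj_a, traj_b, radius=2.0):
    """Soft penalty whenever two trajectories come within `radius` meters."""
    d = np.linalg.norm(traj_a - traj_b, axis=-1)
    return np.sum(np.maximum(0.0, radius - d))

def select_joint_modes(modes, marginal_logp, pair_weight=1.0):
    """modes: per agent, a (K, T, 2) array of candidates; marginal_logp: per agent, (K,)."""
    best, best_score = None, -np.inf
    for assign in product(*[range(len(m)) for m in modes]):
        score = sum(marginal_logp[i][k] for i, k in enumerate(assign))
        for i in range(len(modes)):
            for j in range(i + 1, len(modes)):
                score -= pair_weight * overlap_penalty(modes[i][assign[i]], modes[j][assign[j]])
        if score > best_score:
            best, best_score = assign, score
    return best

modes = [np.random.randn(3, 30, 2) for _ in range(2)]   # 2 agents, 3 candidate modes each
logp = [np.log(np.ones(3) / 3) for _ in range(2)]
joint_assignment = select_joint_modes(modes, logp)
```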
Motion prediction systems aim to capture the future behavior of traffic scenarios enabling autonomous vehicles to perform safe and efficient planning. The evolution of these scenarios is highly uncertain and depends on the interactions of agents with static and dynamic objects in the scene. GNN-based approaches have recently gained attention as they are well suited to naturally model these interactions. However, one of the main challenges that remains unexplored is how to address the complexity and opacity of these models in order to deal with the transparency requirements for autonomous driving systems, which includes aspects such as interpretability and explainability. In this work, we aim to improve the explainability of motion prediction systems by using different approaches. First, we propose a new Explainable Heterogeneous Graph-based Policy (XHGP) model based on a heterograph representation of the traffic scene and lane-graph traversals, which learns interaction behaviors using object-level and type-level attention. This learned attention provides information about the most important agents and interactions in the scene. Second, we explore this same idea with the explanations provided by GNNExplainer. Third, we apply counterfactual reasoning to provide explanations of selected individual scenarios by exploring the sensitivity of the trained model to changes made to the input data, i.e., masking some elements of the scene, modifying trajectories, and adding or removing dynamic agents. The explainability analysis provided in this paper is a first step towards more transparent and reliable motion prediction systems, important from the perspective of users, developers, and regulatory agencies. The code to reproduce this work is publicly available at https://github.com/sancarlim/Explainable-MP/tree/v1.1.
Behavior prediction plays an important role in integrated autonomous driving software solutions. Within behavior prediction research, interactive behavior prediction is a less-studied area than single-agent behavior prediction. Predicting the motion of interacting agents requires new mechanisms that capture the joint behavior of the interacting pairs. In this work, we formulate the end-to-end joint prediction problem as a sequential learning process of marginal learning and joint learning of vehicle behaviors. We propose ProspectNet, a joint learning block that adopts weighted attention scores to model the mutual influence between interacting agent pairs. The joint learning block first weights the candidate trajectories of the multimodal predictions, then updates the ego agent's embedding via cross-attention. Furthermore, we broadcast each interacting agent's individual future predictions into a pairwise scoring module to select the top-K prediction pairs. We show that ProspectNet outperforms the Cartesian product of two marginal predictions and achieves comparable performance on the Waymo Interactive Motion Prediction benchmark.
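A loose sketch of the pattern described above: encode one agent's candidate trajectories, let the ego embedding cross-attend to them, and score all (ego, other) candidate pairs to keep the top-K. All modules and sizes here are placeholders, not ProspectNet's blocks.

```python
import torch
import torch.nn as nn

d = 64
cand_enc = nn.Linear(30 * 2, d)                  # encode a flattened (30, 2) candidate trajectory
cross = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
pair_scorer = nn.Linear(2 * d, 1)

ego_emb = torch.randn(1, 1, d)                   # ego agent embedding from some encoder
ego_cands = torch.randn(6, 30, 2)                # 6 candidate futures per agent
other_cands = torch.randn(6, 30, 2)

other_feats = cand_enc(other_cands.flatten(1)).unsqueeze(0)   # (1, 6, d)
ego_updated, _ = cross(ego_emb, other_feats, other_feats)     # ego attends to the other's modes

ego_feats = cand_enc(ego_cands.flatten(1))
pairs = torch.cat([ego_feats.repeat_interleave(6, 0),         # all 36 (ego, other) pairs
                   other_feats.squeeze(0).repeat(6, 1)], dim=1)
scores = pair_scorer(pairs).view(6, 6)
topk = scores.flatten().topk(k=6)                             # keep the top-K joint pairs
```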
Self-supervised learning (SSL) is an emerging technique that has been successfully employed to train convolutional neural networks (CNNs) and graph neural networks (GNNs) for more transferable, generalizable, and robust representation learning. However, its potential for motion forecasting in autonomous driving has rarely been explored. In this study, we report the first systematic exploration and assessment of incorporating self-supervision into motion forecasting. We first propose to investigate four novel self-supervised learning tasks for motion forecasting, supported by theoretical rationale and by quantitative and qualitative comparisons on the challenging large-scale Argoverse dataset. Second, we point out that our auxiliary SSL-based learning setup not only outperforms forecasting methods that use transformers, complicated fusion mechanisms, and sophisticated online dense goal candidate optimization algorithms in terms of prediction accuracy, but also has lower inference time and architectural complexity. Finally, we conduct several experiments to understand why SSL improves motion forecasting. The code is open-sourced at https://github.com/autovision-cloud/ssl-lanes.
Forecasting the future motion of road participants is crucial for autonomous driving but extremely challenging due to the staggering motion uncertainty. Recently, most motion forecasting methods resort to a goal-based strategy, i.e., predicting the endpoints of motion trajectories as conditions for regressing the entire trajectories, so that the search space of solutions can be reduced. However, accurate goal coordinates are hard to predict and evaluate. In addition, the point representation of the destination limits the utilization of the rich road context, leading to inaccurate predictions. A goal area, i.e., the possible destination area, rather than a goal coordinate, can provide a softer constraint for searching potential trajectories by allowing more tolerance and guidance. With this in mind, we propose a new goal-area-based framework, named Goal Area Network (GANet), for motion forecasting, which models goal areas rather than exact goal coordinates as preconditions for trajectory prediction, and thereby performs more robustly and accurately. Specifically, we propose a GoICrop (goal area of interest crop) operator to effectively extract semantic lane features in the goal area and to model the actors' future interactions there, which benefits future trajectory estimation greatly. GANet ranks 1st among all public literature (as of paper submission), and its source code will be released.
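The goal-area idea can be sketched as cropping lane features around a coarse goal estimate: gather the lane nodes within a radius of the goal and aggregate their features as extra context for trajectory refinement. The radius and mean pooling below are assumptions, not the paper's GoICrop design.

```python
import numpy as np

def goal_area_context(goal_xy, lane_node_xy, lane_node_feat, radius=10.0):
    """lane_node_xy: (N, 2) lane-node positions; lane_node_feat: (N, F) node features.
    Returns a pooled feature for the lane nodes inside the goal area."""
    mask = np.linalg.norm(lane_node_xy - goal_xy, axis=1) < radius
    if not mask.any():
        return np.zeros(lane_node_feat.shape[1])
    return lane_node_feat[mask].mean(axis=0)

# Toy usage: 200 lane nodes scattered over a 50 m x 50 m area.
ctx = goal_area_context(np.array([25.0, 3.0]),
                        np.random.rand(200, 2) * 50,
                        np.random.randn(200, 32))
```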
Reasoning about human motion is an important prerequisite to safe and socially-aware robotic navigation. As a result, multi-agent behavior prediction has become a core component of modern human-robot interactive systems, such as self-driving cars. While there exist many methods for trajectory forecasting, most do not enforce dynamic constraints and do not account for environmental information (e.g., maps). Towards this end, we present Trajectron++, a modular, graph-structured recurrent model that forecasts the trajectories of a general number of diverse agents while incorporating agent dynamics and heterogeneous data (e.g., semantic maps). Trajectron++ is designed to be tightly integrated with robotic planning and control frameworks; for example, it can produce predictions that are optionally conditioned on ego-agent motion plans. We demonstrate its performance on several challenging real-world trajectory forecasting datasets, outperforming a wide array of state-of-the-art deterministic and generative methods.
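A sketch of the "predict controls, integrate through agent dynamics" idea: instead of regressing positions directly, the network outputs controls (here, acceleration and yaw rate for a unicycle model) that are integrated forward, so every output trajectory is dynamically feasible. The specific dynamics model and step size are illustrative assumptions.

```python
import numpy as np

def rollout_unicycle(x0, controls, dt=0.1):
    """x0 = (x, y, heading, speed); controls = (T, 2) array of (acceleration, yaw_rate).
    Integrates a unicycle model and returns the resulting (T, 2) positions."""
    x, y, th, v = x0
    out = []
    for a, w in controls:
        v = v + a * dt
        th = th + w * dt
        x = x + v * np.cos(th) * dt
        y = y + v * np.sin(th) * dt
        out.append((x, y))
    return np.array(out)

# Zero controls yield a straight line at the initial speed of 5 m/s.
traj = rollout_unicycle((0.0, 0.0, 0.0, 5.0), np.zeros((30, 2)))
```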
Predicting plausible future trajectories of nearby agents is a core challenge for the safety of autonomous vehicles, and it mainly depends on two external cues: the dynamic neighboring agents and the static scene context. Recent approaches have made great progress in characterizing these two cues separately. However, they ignore the correlation between the two cues, and most of them struggle to achieve map-adaptive prediction. In this paper, we use lanes as scene data and propose a staged network that jointly learns agent and lane information for multimodal trajectory prediction (JAL-MTP). JAL-MTP uses a Social-to-Lane (S2L) module to jointly represent the static lanes and the dynamic motion of neighboring agents as instance-level lanes, a Recurrent Lane Attention (RLA) mechanism that exploits the instance-level lanes to predict map-adaptive future trajectories, and two selectors that identify typical and reasonable trajectories. Experiments on the public Argoverse dataset demonstrate that JAL-MTP significantly outperforms existing models, both quantitatively and qualitatively.
Prior art in the field of motion prediction for autonomous driving tends to seek trajectories that are close to the ground-truth trajectory. However, such problem formulations and approaches often lead to a loss of diversity and to biased trajectory predictions. They are therefore unsuitable for real-world autonomous driving, where diverse and road-dependent multimodal trajectory predictions are critical for safety. To this end, this study proposes a novel loss function, Lane Loss, that ensures map-adaptive diversity and accommodates geometric constraints. A two-stage trajectory prediction architecture with a novel trajectory candidate proposal module, Trajectory Prediction Attention (TPA), is trained with Lane Loss to encourage multiple trajectories to be diversely distributed, covering feasible maneuvers in a map-aware manner. Furthermore, considering that existing trajectory performance metrics focus on evaluating accuracy against the ground-truth future trajectory, a quantitative evaluation metric is also proposed to assess the diversity of the predicted multiple trajectories. Experiments on the Argoverse dataset show that the proposed method significantly improves the diversity of the predicted trajectories without sacrificing prediction accuracy.
Predicting the multimodal future behavior of traffic participants is essential for robotic vehicles to make safe decisions. Existing works either directly predict future trajectories from latent features or utilize dense goal candidates to identify the agents' destinations; the former strategy converges slowly because all motion modes are derived from the same feature, while the latter has efficiency issues because its performance depends heavily on the density of the goal candidates. In this paper, we propose the Motion Transformer (MTR) framework, which models motion prediction as the joint optimization of global intention localization and local movement refinement. Instead of using goal candidates, MTR incorporates spatial intention priors by adopting a small set of learnable motion query pairs. Each motion query pair is responsible for the trajectory prediction and refinement of a specific motion mode, which stabilizes the training process and facilitates better multimodal predictions. Experiments show that MTR achieves state-of-the-art performance on both the marginal and joint motion prediction challenges, ranking 1st on the Waymo Open Motion Dataset leaderboards. Code will be available at https://github.com/sshaoshuai/MTR.
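A minimal PyTorch sketch of query-based multimodal decoding: a set of learnable queries cross-attends to encoded scene tokens, and each query decodes one mode with its own score. Layer sizes are arbitrary and the block illustrates only the general pattern, not MTR's motion query pairs or refinement stages.

```python
import torch
import torch.nn as nn

class QueryDecoder(nn.Module):
    def __init__(self, num_modes=6, d_model=128, horizon=30):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_modes, d_model))   # one learnable query per mode
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.traj_head = nn.Linear(d_model, horizon * 2)
        self.score_head = nn.Linear(d_model, 1)
        self.horizon = horizon

    def forward(self, scene_tokens):            # scene_tokens: (B, N, d_model)
        B = scene_tokens.shape[0]
        q = self.queries.unsqueeze(0).expand(B, -1, -1)
        ctx, _ = self.attn(q, scene_tokens, scene_tokens)              # queries gather scene context
        trajs = self.traj_head(ctx).view(B, -1, self.horizon, 2)       # (B, modes, T, 2)
        scores = self.score_head(ctx).squeeze(-1)                      # (B, modes)
        return trajs, scores

decoder = QueryDecoder()
trajs, scores = decoder(torch.randn(2, 64, 128))   # 2 scenes, 64 scene tokens each
```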
We propose a motion forecasting model that exploits a novel structured map representation as well as actor-map interactions. Instead of encoding vectorized maps as raster images, we construct a lane graph from raw map data to explicitly preserve the map structure. To capture the complex topology and long range dependencies of the lane graph, we propose LaneGCN which extends graph convolutions with multiple adjacency matrices and along-lane dilation. To capture the complex interactions between actors and maps, we exploit a fusion network consisting of four types of interactions: actor-to-lane, lane-to-lane, lane-to-actor and actor-to-actor. Powered by LaneGCN and actor-map interactions, our model is able to predict accurate and realistic multi-modal trajectories. Our approach significantly outperforms the state-of-the-art on the large scale Argoverse motion forecasting benchmark.
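The "graph convolution with multiple adjacency matrices" idea can be sketched as a lane-node update that aggregates along several relation types, each with its own weight matrix; a k-dilated relation is approximated here by powers of the successor adjacency. Relations and sizes are illustrative, not LaneGCN's exact operator.

```python
import torch
import torch.nn as nn

class MultiRelationLaneConv(nn.Module):
    def __init__(self, dim, num_relations):
        super().__init__()
        self.self_lin = nn.Linear(dim, dim)
        self.rel_lins = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_relations)])

    def forward(self, x, adjs):
        """x: (N, dim) lane-node features; adjs: list of (N, N) adjacency matrices,
        one per relation type (e.g., successors at different dilations)."""
        out = self.self_lin(x)
        for lin, a in zip(self.rel_lins, adjs):
            out = out + a @ lin(x)               # aggregate neighbors of this relation
        return torch.relu(out)

N, dim = 50, 64
succ = (torch.rand(N, N) < 0.05).float()                    # random 1-hop successor relation
adjs = [succ, (succ @ succ).clamp(max=1.0)]                  # 1-hop and 2-hop ("dilated") relations
conv = MultiRelationLaneConv(dim, num_relations=len(adjs))
h = conv(torch.randn(N, dim), adjs)
```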
A modern autonomous driving system is characterized as modular tasks in sequential order, i.e., perception, prediction and planning. As sensors and hardware improve, there is a growing trend toward devising a system that can perform a wide diversity of tasks to fulfill higher-level intelligence. Contemporary approaches resort to either deploying standalone models for individual tasks or designing a multi-task paradigm with separate heads. These may suffer from accumulative error or negative transfer effects. Instead, we argue that a favorable algorithm framework should be devised and optimized in pursuit of the ultimate goal, i.e., planning of the self-driving car. Oriented at this goal, we revisit the key components within perception and prediction. We analyze each module and prioritize the tasks hierarchically, such that all of these tasks contribute to planning (the goal). To this end, we introduce Unified Autonomous Driving (UniAD), the first comprehensive framework to date that incorporates full-stack driving tasks in one network. It is carefully devised to leverage the advantages of each module, and to provide complementary feature abstractions for agent interaction from a global perspective. Tasks are communicated with a unified query design to facilitate each other toward planning. We instantiate UniAD on the challenging nuScenes benchmark. With extensive ablations, the effectiveness of this philosophy is shown to surpass previous state-of-the-art methods by a large margin in all aspects. The full suite of code and models will be made available to facilitate future research in the community.
In this work, we present a novel multimodal multi-agent trajectory prediction architecture, focusing on map and interaction modeling using graph representations. For map modeling, we capture the rich topological structure in a vector-based star graph, which enables agents to directly attend to relevant regions along the polylines used to represent the map. We denote this architecture StarNet and integrate it in a single-agent prediction setting. As the main result, we extend this architecture to joint scene-level prediction, producing predictions for multiple agents simultaneously. The key idea in Joint StarNet is to couple one agent's awareness in its own reference frame with how it is perceived from the viewpoints of the other agents. We achieve this through masked self-attention. Both proposed architectures are built on top of the action-space prediction framework introduced in our previous work, which ensures kinematically feasible trajectory predictions. We evaluate the methods on the interaction-rich inD and INTERACTION datasets, where StarNet and Joint StarNet achieve state-of-the-art performance.
Predicting the future states of surrounding traffic participants and planning a safe, smooth, and socially compliant trajectory accordingly are crucial for autonomous vehicles. Current autonomous driving systems have two major issues: the prediction module is often decoupled from the planning module, and the cost function for planning is hard to specify and tune. To tackle these issues, we propose an end-to-end differentiable framework that integrates the prediction and planning modules and is able to learn the cost function from data. Specifically, we employ a differentiable nonlinear optimizer as the motion planner, which takes as input the predicted trajectories of surrounding agents given by the neural network and optimizes the trajectory for the autonomous vehicle, thus making all operations in the framework differentiable, including the cost function weights. The proposed framework is trained on a large-scale real-world driving dataset to imitate human driving trajectories in entire driving scenes and is validated in both open-loop and closed-loop manners. The open-loop testing results show that the proposed method outperforms baseline methods across a variety of metrics and delivers planning-centric prediction results, allowing the planning module to output trajectories close to those of human drivers. In closed-loop testing, the proposed method shows the ability to handle complex urban driving scenarios and robustness against the distribution shift that imitation learning methods suffer from. Importantly, we find that joint training of the planning and prediction modules achieves better performance than planning with a separately trained prediction module, in both open-loop and closed-loop tests. Moreover, the ablation study indicates that the learnable components in the framework are essential to ensure planning stability and performance.
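A hedged sketch of a differentiable planner with learnable cost weights: the ego trajectory is refined by a few unrolled gradient steps of a cost combining progress, smoothness, and clearance to predicted agents, and because the inner loop stays differentiable, an imitation loss on the planned trajectory can back-propagate into the cost weights. Cost terms and unrolling are assumptions for illustration, not the paper's nonlinear optimizer.

```python
import torch

def plan(pred_others, weights, goal, steps=20, horizon=20, lr=0.1):
    """pred_others: (A, T, 2) predicted agent positions; weights: learnable cost weights (3,)."""
    ego = torch.zeros(horizon, 2, requires_grad=True)          # initial guess at the origin
    for _ in range(steps):
        progress = ((ego[-1] - goal) ** 2).sum()               # reach the goal
        smooth = ((ego[1:] - ego[:-1]) ** 2).sum()             # penalize jerky motion
        clearance = torch.relu(3.0 - torch.cdist(ego, pred_others.reshape(-1, 2))).sum()
        cost = weights[0] * progress + weights[1] * smooth + weights[2] * clearance
        grad, = torch.autograd.grad(cost, ego, create_graph=True)
        ego = ego - lr * grad                                   # unrolled, differentiable update
    return ego

weights = torch.tensor([1.0, 1.0, 1.0], requires_grad=True)
ego_plan = plan(torch.randn(3, 20, 2), weights, goal=torch.tensor([20.0, 0.0]))
imitation_loss = ((ego_plan - torch.zeros_like(ego_plan)) ** 2).mean()  # vs. a dummy "human" trajectory
imitation_loss.backward()                                       # gradients reach the cost weights
```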
Behavior prediction in dynamic, multi-agent systems is an important problem in the context of self-driving cars, due to the complex representations and interactions of road components, including moving agents (e.g. pedestrians and vehicles) and road context information (e.g. lanes, traffic lights). This paper introduces VectorNet, a hierarchical graph neural network that first exploits the spatial locality of individual road components represented by vectors and then models the high-order interactions among all components. In contrast to most recent approaches, which render trajectories of moving agents and road context information as bird-eye images and encode them with convolutional neural networks (ConvNets), our approach operates on a vector representation. By operating on the vectorized high definition (HD) maps and agent trajectories, we avoid lossy rendering and computationally intensive ConvNet encoding steps. To further boost VectorNet's capability in learning context features, we propose a novel auxiliary task to recover the randomly masked out map entities and agent trajectories based on their context. We evaluate VectorNet on our in-house behavior prediction benchmark and the recently released Argoverse forecasting dataset. Our method achieves on par or better performance than the competitive rendering approach on both benchmarks while saving over 70% of the model parameters with an order of magnitude reduction in FLOPs. It also outperforms the state of the art on the Argoverse dataset.
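A rough PyTorch sketch of the vectorized, hierarchical encoding pattern: each polyline (a lane or a past trajectory stored as a set of vectors) is encoded by a small MLP followed by max pooling, and the resulting polyline features interact through self-attention. Sizes and the single subgraph layer are simplifications, not VectorNet's exact configuration.

```python
import torch
import torch.nn as nn

class PolylineEncoder(nn.Module):
    def __init__(self, vec_dim=8, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(vec_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.global_attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)

    def forward(self, polylines):               # (P, V, vec_dim): P polylines, V vectors each
        node_feats = self.mlp(polylines)        # per-vector ("subgraph") features
        poly_feats = node_feats.max(dim=1).values               # pool vectors -> polyline feature
        out, _ = self.global_attn(poly_feats.unsqueeze(0),
                                  poly_feats.unsqueeze(0),
                                  poly_feats.unsqueeze(0))      # polyline-level interaction
        return out.squeeze(0)                   # (P, hidden)

enc = PolylineEncoder()
scene = enc(torch.randn(12, 9, 8))              # 12 polylines with 9 vectors each
```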
Level 5 autonomous driving, in which a fully automated vehicle (AV) requires no human intervention, has raised serious concerns about safety and stability prior to widespread use. The capability to understand and predict the future motion trajectories of road objects can help an AV plan a path that is safe and easy to control. In this paper, we propose a network architecture that parallelizes multiple convolutional neural network backbones and fuses their features to make multi-mode trajectory predictions. In the 2020 ICRA nuScenes prediction challenge, our model ranks 15th on the leaderboard across all teams.
Motion forecasting for autonomous driving is a challenging task because complex driving scenarios result in a heterogeneous mix of static and dynamic inputs. It is an open problem how best to represent and fuse information about road geometry, lane connectivity, time-varying traffic light states, and the history of dynamic agents and their interactions. To model this diverse set of input features, many proposed approaches design equally complex systems with a variety of modality-specific modules, resulting in systems that are difficult to scale, extend, or tune in a rigorous way to trade off quality and efficiency. In this paper, we present Wayformer, an attention-based architecture for motion forecasting that is simple and homogeneous. Wayformer offers a compact model description consisting of an attention-based scene encoder and a decoder. In the scene encoder, we study the choice of early, late, and hierarchical fusion of the input modalities. For each fusion type, we explore strategies to trade off efficiency and quality via factorized attention or latent query attention. We show that early fusion, despite its structural simplicity, is not only modality agnostic but also achieves state-of-the-art results.
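A hedged sketch of early fusion with latent query attention: tokens from all modalities are concatenated into one set, and a small number of learned latent queries cross-attend to them, so later self-attention scales with the latent count rather than the scene size. Dimensions and the single block are illustrative assumptions, not Wayformer's configuration.

```python
import torch
import torch.nn as nn

class LatentQueryEncoder(nn.Module):
    def __init__(self, d_model=128, num_latents=16):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, d_model))  # learned latent queries
        self.cross = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.self_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, history_tokens, map_tokens):
        fused = torch.cat([history_tokens, map_tokens], dim=1)   # early fusion: one token set
        q = self.latents.unsqueeze(0).expand(fused.shape[0], -1, -1)
        z, _ = self.cross(q, fused, fused)                       # compress the scene into latents
        z, _ = self.self_attn(z, z, z)                           # cheap latent self-attention
        return z

enc = LatentQueryEncoder()
z = enc(torch.randn(2, 40, 128), torch.randn(2, 200, 128))       # output: (2, 16, 128)
```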