Predicting pedestrian motion is essential for developing socially aware robots that interact in crowded environments. Although an egocentric view is the natural visual perspective for social interaction settings, most existing work on trajectory prediction operates purely in the top-down trajectory space. To support research on first-person view trajectory prediction, we present T2FPV, a method for constructing high-fidelity first-person view datasets given a real-world, top-down trajectory dataset; we showcase our approach on the ETH/UCY pedestrian dataset to generate egocentric visual data for all interacting pedestrians. We find that the bird's-eye view assumption used in the original ETH/UCY dataset, i.e., that an agent can observe everyone in the scene with perfect information, does not hold in first-person views: in each 20-timestep scene commonly used in existing work, only a small fraction of agents are fully visible. We evaluate existing trajectory prediction approaches under varying levels of realistic perception; displacement errors increase by 356% compared to the top-down, perfect-information setting. To promote research on first-person view trajectory prediction, we release the T2FPV-ETH dataset and software tools.
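To make the visibility claim concrete, here is a minimal sketch of a first-person visibility test over top-down trajectory data, in the spirit of (not identical to) the T2FPV analysis; the field-of-view and range parameters are illustrative assumptions, and a real pipeline would also account for occlusion between pedestrians:

```python
import numpy as np

def visible_agents(ego_pos, ego_heading, others, fov_deg=90.0, max_range=20.0):
    """Return a boolean mask over `others` marking agents inside the
    ego pedestrian's field of view (angle + range test only; no occlusion).

    ego_pos:     (2,) world position of the ego agent
    ego_heading: heading angle in radians
    others:      (N, 2) world positions of the other agents
    """
    offsets = np.asarray(others) - np.asarray(ego_pos)  # vectors ego -> agent
    dists = np.linalg.norm(offsets, axis=1)
    angles = np.arctan2(offsets[:, 1], offsets[:, 0])   # bearing of each agent
    # Signed angular difference to the ego heading, wrapped to [-pi, pi].
    rel = (angles - ego_heading + np.pi) % (2 * np.pi) - np.pi
    half_fov = np.deg2rad(fov_deg) / 2.0
    return (np.abs(rel) <= half_fov) & (dists <= max_range)
```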
Figure 1: We introduce datasets for 3D tracking and motion forecasting with rich maps for autonomous driving. Our 3D tracking dataset contains sequences of LiDAR measurements, 360° RGB video, front-facing stereo (middle-right), and 6-dof localization. All sequences are aligned with maps containing lane center lines (magenta), driveable region (orange), and ground height. Sequences are annotated with 3D cuboid tracks (green). A wider map view is shown in the bottom-right.
Autonomous driving comprises multiple interacting modules, each of which must interface with the others. Typically, the motion prediction module depends on a robust tracking system to capture each agent's past movement. In this work, we systematically examine the importance of the tracking module for the motion prediction task, and ultimately conclude that overall motion prediction performance is highly sensitive to imperfections in the tracking module. We explicitly compare models that use tracking information against models that do not, across multiple scenarios and conditions. We find that tracking information plays an essential role and improves motion prediction performance under noise-free conditions. In the presence of tracking noise, however, it can harm overall performance if not studied thoroughly. We therefore argue that developers should account for such noise when building and testing motion prediction and tracking modules, or else consider tracking-free alternatives.
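A hedged sketch of how such a sensitivity study could inject tracker imperfections into otherwise perfect past trajectories; the noise model and parameters below are illustrative assumptions, not the paper's protocol:

```python
import numpy as np

def corrupt_track(past_xy, pos_sigma=0.2, drop_prob=0.1, rng=None):
    """Simulate an imperfect upstream tracker on a past trajectory.

    past_xy: (T, 2) array of observed positions from a perfect tracker.
    Adds Gaussian localization noise and randomly drops observations;
    a dropped step is filled by holding the last seen position.
    """
    rng = rng or np.random.default_rng()
    noisy = past_xy + rng.normal(0.0, pos_sigma, size=past_xy.shape)
    keep = rng.random(len(noisy)) > drop_prob
    keep[0] = True  # always keep the first observation
    for t in range(1, len(noisy)):
        if not keep[t]:
            noisy[t] = noisy[t - 1]  # hold last position for a missed detection
    return noisy
```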
A modern autonomous driving system is characterized as modular tasks in sequential order, i.e., perception, prediction, and planning. As sensors and hardware improve, there is a growing trend toward devising a system that can perform a wide diversity of tasks to fulfill higher-level intelligence. Contemporary approaches resort to either deploying standalone models for individual tasks or designing a multi-task paradigm with separate heads. These may suffer from cumulative errors or negative transfer effects. Instead, we argue that a favorable algorithm framework should be devised and optimized in pursuit of the ultimate goal, i.e., planning of the self-driving car. Oriented at this goal, we revisit the key components within perception and prediction. We analyze each module and prioritize the tasks hierarchically, such that all of them contribute to planning (the goal). To this end, we introduce Unified Autonomous Driving (UniAD), the first comprehensive framework to date that incorporates full-stack driving tasks in one network. It is carefully devised to leverage the advantages of each module and to provide complementary feature abstractions for agent interaction from a global perspective. Tasks communicate through a unified query design to facilitate each other toward planning. We instantiate UniAD on the challenging nuScenes benchmark. With extensive ablations, the effectiveness of this philosophy is shown to surpass the previous state of the art by a large margin in all aspects. The full suite of code and models will be made available to facilitate future research in the community.
Human motion prediction is key to understanding social environments, with direct applications in robotics, surveillance, and beyond. We present a simple yet effective pedestrian trajectory prediction model for predicting pedestrians' positions in urban-like environments, conditioned on the environment: the map and the surrounding agents. Our model is a neural architecture that runs several layers of attention blocks and Transformers in an iterative, sequential fashion, capturing the salient features of the environment to improve prediction. We show that, without explicitly introducing social masks, dynamical models, social pooling layers, or complicated graph-like structures, it is possible to produce results on par with SOTA models, which makes our approach easily extensible and configurable depending on the available data. We report results comparable to SOTA models on publicly available and extensible datasets, using the unimodal prediction metrics ADE and FDE.
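For reference, a minimal implementation of the ADE/FDE metrics mentioned above, written for K candidate futures (K = 1 recovers the unimodal case):

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and final displacement errors over K candidate futures.

    pred: (K, T, 2) predicted future trajectories
    gt:   (T, 2)    ground-truth future trajectory
    Returns (minADE, minFDE), each minimized independently over K.
    """
    errs = np.linalg.norm(pred - gt[None], axis=-1)  # (K, T) per-step L2 errors
    ade = errs.mean(axis=1).min()                    # best average error over K
    fde = errs[:, -1].min()                          # best final-step error over K
    return ade, fde
```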
We introduce view birdification, the problem of recovering the ground-plane movements of people in a crowd from an egocentric video captured by an observer (e.g., a person or a vehicle) who is also moving within the crowd. The recovered ground motions would provide a sound basis for situational understanding and serve downstream applications in computer vision and robotics. In this paper, we formulate view birdification as a geometric trajectory reconstruction problem and derive a cascaded optimization method from a Bayesian perspective. The method first estimates the observer's motion and then localizes the surrounding pedestrians for each frame, while taking into account the local interactions between them. We introduce three datasets built from synthetic and real trajectories of people in crowds and evaluate the effectiveness of our method. The results demonstrate the accuracy of our method and set the ground for further studies of view birdification as an important yet challenging visual understanding problem.
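The cascaded optimization itself is beyond a short snippet, but the geometric core of localizing a pedestrian on the ground plane from a single egocentric frame can be sketched under a strong flat-ground, zero-pitch assumption; all names below are illustrative, not from the paper:

```python
import numpy as np

def foot_to_ground(u, v, K, cam_height):
    """Back-project a pedestrian's foot pixel (u, v) onto the ground plane.

    Assumes a camera whose optical axis is parallel to a flat ground plane,
    mounted at height `cam_height`, with intrinsic matrix K. Returns the
    (lateral, forward) offset of the pedestrian in the camera frame.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing-ray direction
    # Ground plane: y = cam_height in camera coordinates (y axis points down).
    # Scale the ray so its y component reaches the ground.
    s = cam_height / ray[1]
    point = s * ray
    return point[0], point[2]
```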
Vehicle-to-everything (V2X) communication technology enables collaboration between a vehicle and many other entities in its vicinity, which can fundamentally improve the perception system for autonomous driving. However, the lack of public datasets has significantly restricted research progress on collaborative perception. To fill this gap, we present V2X-Sim, a comprehensive simulated multi-agent perception dataset for V2X-aided autonomous driving. V2X-Sim provides: (1) multi-agent sensor recordings from roadside units (RSUs) and multiple vehicles that enable collaborative perception, (2) multi-modality sensor streams that facilitate multi-modality perception, and (3) diverse ground truths that support various perception tasks. Alongside the dataset, we provide an open-source testbed and a benchmark of state-of-the-art collaborative perception algorithms on three tasks, including detection, tracking, and segmentation. V2X-Sim seeks to stimulate collaborative perception research for autonomous driving before realistic datasets become widely available. Our dataset and code are available at https://ai4ce.github.io/v2x-sim/.
Vision-centric BEV perception has recently drawn attention from both industry and academia thanks to its inherent merits, including presenting a natural representation of the world and being fusion-friendly. With the rapid development of deep learning, numerous methods have been proposed to address vision-centric BEV perception, yet there has been no recent survey of this novel and rapidly evolving research field. To stimulate future research, this paper presents a comprehensive survey of vision-centric BEV perception and its extensions. It collects and organizes recent knowledge and gives a systematic review and summary of commonly used algorithms. It also provides in-depth analyses and comparative results on several BEV perception tasks, facilitating comparison among future works and motivating future research directions. In addition, empirical implementation details are discussed, which prove beneficial for the development of related algorithms.
In this paper, we address the problem of forecasting the trajectory of an egocentric camera wearer (ego-person) in crowded spaces. The trajectory forecasting ability learned from data of different camera wearers walking around in the real world can be transferred to assist visually impaired people in navigation, and to instill human navigation behaviors in mobile robots, enabling better human-robot interaction. To this end, a novel egocentric human trajectory forecasting dataset was constructed, containing real trajectories of people navigating crowded spaces while wearing a camera, along with rich extracted contextual data. We extract and utilize three different modalities to forecast the camera wearer's trajectory: his/her past trajectory, the past trajectories of nearby people, and the environment, such as scene semantics or scene depth. A Transformer-based encoder-decoder neural network model, integrated with a novel cascaded cross-attention mechanism that fuses the multiple modalities, is designed to predict the camera wearer's future trajectory. Extensive experiments show that our model outperforms state-of-the-art methods on egocentric human trajectory forecasting.
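A minimal PyTorch sketch of cascaded cross-attention over the three modalities described above, with ego-trajectory tokens attending first to neighbor tokens and then to scene tokens; this is a simplified assumption-laden illustration, not the paper's exact architecture:

```python
import torch.nn as nn

class CascadedCrossAttention(nn.Module):
    """Fuse ego-trajectory tokens with neighbor and scene tokens by
    attending to one modality after another (a sketch, not the paper's
    exact design)."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn_social = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_scene = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, ego, neighbors, scene):
        # ego: (B, T, D) past-trajectory tokens of the camera wearer
        # neighbors: (B, N, D) encoded nearby-person trajectories
        # scene: (B, S, D) scene-semantics / depth tokens
        x, _ = self.attn_social(ego, neighbors, neighbors)
        ego = self.norm1(ego + x)                 # attend to social context first
        x, _ = self.attn_scene(ego, scene, scene)
        return self.norm2(ego + x)                # then to the environment
```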
Motion prediction is highly relevant to the perception of dynamic objects and static map elements in the scenarios of autonomous driving. In this work, we propose PIP, the first end-to-end Transformer-based framework which jointly and interactively performs online mapping, object detection and motion prediction. PIP leverages map queries, agent queries and mode queries to encode the instance-wise information of map elements, agents and motion intentions, respectively. Based on the unified query representation, a differentiable multi-task interaction scheme is proposed to exploit the correlation between perception and prediction. Even without human-annotated HD map or agent's historical tracking trajectory as guidance information, PIP realizes end-to-end multi-agent motion prediction and achieves better performance than tracking-based and HD-map-based methods. PIP provides comprehensive high-level information of the driving scene (vectorized static map and dynamic objects with motion information), and contributes to the downstream planning and control. Code and models will be released for facilitating further research.
High-definition (HD) map change detection is the task of determining when sensor data and map data are no longer in agreement with one another due to real-world changes. We collect the first dataset for the task, which we entitle the Trust, but Verify (TbV) dataset, by mining thousands of hours of data from over 9 months of autonomous vehicle fleet operations. We present learning-based formulations for solving the problem in the bird's eye view and ego-view. Because real map changes are infrequent and vector maps are easy to synthetically manipulate, we lean on simulated data to train our model. Perhaps surprisingly, we show that such models can generalize to real world distributions. The dataset, consisting of maps and logs collected in six North American cities, is one of the largest AV datasets to date with more than 7.8 million images. We make the data available to the public at https://www.argoverse.org/av2.html#mapchange-link, along with code and models at https://github.com/johnwlambert/tbv under the CC BY-NC-SA 4.0 license.
Multi-object tracking (MOT) has been dominated by tracking-by-detection approaches, owing to the success of convolutional neural networks (CNNs) in detection over the past decade. With the release of datasets and benchmarking websites, research has shifted toward maximizing accuracy on generic scenarios, including re-identification (ReID) of objects while tracking. In this study, we narrow the scope to surveillance by providing a dedicated pedestrian dataset and focusing on an in-depth analysis of well-performing multi-object trackers, in order to expose the shortcomings of state-of-the-art (SOTA) techniques in real-world applications. To this end, we introduce the SOMPT22 dataset: a new set of annotated short videos for multi-person tracking, captured from static cameras mounted on poles 6-8 meters high for city surveillance. Compared to public MOT datasets, this provides a more focused and specific benchmark for MOT in outdoor surveillance. We analyze MOT trackers on this new dataset, classifying them as one-shot or two-stage according to how they use detection and ReID networks. Experimental results on our new dataset show that SOTA is still far from highly efficient, and that one-shot trackers, with their competitive performance, are good candidates for unifying fast execution and accuracy. The dataset will be available at: sompt22.github.io
This paper proposes a method for extracting the positions and poses of vehicles in the 3D world from a single traffic camera. Most previous monocular 3D vehicle detection algorithms focus on cameras mounted on vehicles, viewing from the driver's perspective, and assume known intrinsic and extrinsic calibration. In contrast, this paper addresses the same task with an uncalibrated monocular traffic camera. We observe that the homography between the road plane and the image plane is essential both for 3D vehicle detection and for synthesizing data for this task, and that this homography can be estimated without the camera intrinsics and extrinsics. 3D vehicle detection is performed by estimating rotated bounding boxes (r-boxes) in bird's-eye-view (BEV) images generated via inverse perspective mapping. We propose a new regression target called the tailed r-box and a dual-view network architecture that improves detection accuracy on the warped BEV images. Experiments show that the proposed method generalizes to new camera and environment setups, even though their images were never seen during training.
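As an illustration of the BEV generation step, a short OpenCV sketch of inverse perspective mapping given a road-to-image homography; the four correspondences here are made-up placeholders, whereas the paper estimates the homography without manual calibration:

```python
import cv2
import numpy as np

# Four ground-plane correspondences: pixel coordinates in the camera image
# and their positions on an 800x800 BEV canvas. All values are illustrative
# placeholders, not measurements from any real camera.
img_pts = np.float32([[532, 412], [718, 409], [905, 575], [331, 583]])
bev_pts = np.float32([[340, 200], [460, 200], [460, 520], [340, 520]])

H = cv2.getPerspectiveTransform(img_pts, bev_pts)  # image -> BEV homography
frame = cv2.imread("traffic_cam.jpg")              # hypothetical input frame
bev = cv2.warpPerspective(frame, H, (800, 800))    # inverse perspective mapping
cv2.imwrite("bev.jpg", bev)
```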
First-person video highlights a camera wearer's activities in the context of their persistent environment. However, current video understanding approaches reason over visual features from short video clips that are detached from the underlying physical space and capture only what is directly seen. We present an approach that links egocentric video and camera pose over time by learning representations predictive of the camera wearer's (potentially unseen) local surroundings, in order to facilitate human-centric environment understanding. We train such models using videos from agents in simulated 3D environments, where the environment is fully observable, and test them on real-world videos of house tours in unseen environments. We show that by grounding videos in their physical environment, our models surpass traditional scene classification models at predicting which room the camera wearer is in (where frame-level information is insufficient), and can leverage this grounding to localize video moments corresponding to environment-centric queries, outperforming prior methods. Project page: http://vision.cs.utexas.edu/projects/ego-scene-context/
Uncertainty pervades the modern robotic autonomy stack, with nearly every component (e.g., sensors, detection, classification, tracking, behavior prediction) producing continuous or discrete probability distributions. Trajectory forecasting in particular is surrounded by uncertainty: its inputs are produced by (noisy) upstream perception, and its outputs are predictions that are often probabilistic for use in downstream planning. However, most trajectory forecasting methods do not account for upstream uncertainty and consider only the most likely values. As a result, perceptual uncertainty is not propagated through forecasting, and predictions are frequently overconfident. To address this, we propose a novel method for incorporating perceptual state uncertainty into trajectory forecasting, a key component of which is a new statistical-distance-based loss function that encourages the predicted uncertainty to better match that of upstream perception. We evaluate our approach in illustrative simulations and on large-scale real-world data, demonstrating its efficacy in propagating perceptual state uncertainty through prediction and in producing more calibrated forecasts.
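One way to instantiate a statistical-distance loss between predicted and perceived state distributions is a KL divergence between diagonal Gaussians; this is a hedged sketch of the idea, and the paper's actual distance and loss formulation may differ:

```python
import torch

def kl_diag_gauss(mu_p, var_p, mu_q, var_q):
    """KL(p || q) between diagonal Gaussians, summed over the last dimension.

    A possible instantiation of a statistical-distance loss: penalize
    predicted distributions (p) that disagree with the upstream perception
    posterior (q), e.g., at the current time step.
    """
    return 0.5 * (torch.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q
                  - 1.0).sum(-1)
```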
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
The research community has increasing interest in autonomous driving research, despite the resource intensity of obtaining representative real world data. Existing self-driving datasets are limited in the scale and variation of the environments they capture, even though generalization within and between operating regions is crucial to the overall viability of the technology. In an effort to help align the research community's contributions with real-world self-driving problems, we introduce a new large-scale, high quality, diverse dataset. Our new dataset consists of 1150 scenes that each span 20 seconds, consisting of well synchronized and calibrated high quality LiDAR and camera data captured across a range of urban and suburban geographies. It is 15x more diverse than the largest camera+LiDAR dataset available based on our proposed geographical coverage metric. We exhaustively annotated this data with 2D (camera image) and 3D (LiDAR) bounding boxes, with consistent identifiers across frames. Finally, we provide strong baselines for 2D as well as 3D detection and tracking tasks. We further study the effects of dataset size and generalization across geographies on 3D detection methods. Find data, code and more up-to-date information at http://www.waymo.com/open.
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
In recent years, human motion trajectory prediction has become an important task for autonomous systems in many domains. With a multitude of new methods proposed by different communities, the lack of standardized benchmarks and objective comparisons is increasingly becoming a major limitation on assessing progress and guiding further research. Existing benchmarks are limited in their scope and flexibility to conduct relevant experiments and to account for the contextual cues of agents and the environment. In this paper we present Atlas, a benchmark for the systematic evaluation of human motion trajectory prediction algorithms. Atlas offers data preprocessing functions and hyperparameter optimization, comes with popular datasets, and has the flexibility to set up and conduct underexplored yet relevant experiments that analyze a method's accuracy and robustness. In an example application of Atlas, we compare five popular model-based and learning-based predictors and find that, when properly applied, early physics-based approaches are still remarkably competitive. Such results confirm the necessity of benchmarks like Atlas.
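The finding that early physics-based approaches remain competitive is easy to appreciate from the simplest such baseline, a constant-velocity predictor; a minimal sketch:

```python
import numpy as np

def constant_velocity_predict(past_xy, horizon):
    """Predict `horizon` future positions by extrapolating the last
    observed velocity -- the simplest physics-based baseline, which
    trajectory-prediction benchmarks repeatedly show is hard to beat.

    past_xy: (T, 2) array of observed positions.
    """
    v = past_xy[-1] - past_xy[-2]               # last-step velocity estimate
    steps = np.arange(1, horizon + 1)[:, None]  # (horizon, 1) step indices
    return past_xy[-1] + steps * v              # (horizon, 2) extrapolation
```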
Learning powerful representations in bird's-eye view (BEV) for perception tasks is a trend drawing extensive attention from both industry and academia. Conventional approaches underlying most autonomous driving algorithms perform detection, segmentation, tracking, etc. in a frontal or perspective view. As sensor configurations grow more complex, integrating multi-source information from different sensors and representing features in a unified view become vital. BEV perception inherits several advantages: representing the surrounding scene in BEV is intuitive and fusion-friendly, and representing objects in BEV is most desirable for subsequent modules such as planning and/or control. The core problems of BEV perception lie in (a) how to reconstruct the lost 3D information via the view transformation from perspective view to BEV; (b) how to acquire ground-truth annotations on the BEV grid; (c) how to formulate a pipeline that incorporates features from different sources and views; and (d) how to adapt and generalize algorithms as sensor configurations vary across different scenarios. In this survey, we review the most recent work on BEV perception and provide an in-depth analysis of the different solutions. Several systematic designs of BEV approaches from industry are also described. Furthermore, we offer a full suite of practical guidebooks for improving the performance of BEV perception tasks, covering camera, LiDAR, and fusion inputs. Finally, we point out future research directions in this field. We hope this report sheds some light for the community and encourages more research on BEV perception. We maintain an active repository that collects the most recent work and provides a toolbox bag of tricks at https://github.com/openperceptionx/bevperception-survey-recipe.