We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
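The abstract describes scenarios built from track histories (location, heading, velocity, category) and a set of "scored actors" whose futures must be predicted. As a minimal sketch of that data structure only - the class and field names below are hypothetical stand-ins, not the released av2 API:

```python
# Hypothetical sketch of the scenario structure described above.
# ObjectState, Track, and forecasting_targets are illustrative names only.
from dataclasses import dataclass
from typing import List

@dataclass
class ObjectState:
    timestep: int
    x: float          # map-frame position (m)
    y: float
    heading: float    # radians
    vx: float         # velocity (m/s)
    vy: float

@dataclass
class Track:
    track_id: str
    category: str               # e.g. "vehicle", "pedestrian", "cyclist"
    is_scored: bool             # whether this actor's future must be forecast
    history: List[ObjectState]  # observed states up to the present timestep

def forecasting_targets(tracks: List[Track]) -> List[Track]:
    """Return the 'scored actors' whose future motion a model must predict."""
    return [t for t in tracks if t.is_scored]
```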
High-definition (HD) map change detection is the task of determining when sensor data and map data are no longer in agreement with one another due to real-world changes. We collect the first dataset for the task, which we entitle the Trust, but Verify (TbV) dataset, by mining thousands of hours of data from over 9 months of autonomous vehicle fleet operations. We present learning-based formulations for solving the problem in the bird's eye view and ego-view. Because real map changes are infrequent and vector maps are easy to synthetically manipulate, we lean on simulated data to train our model. Perhaps surprisingly, we show that such models can generalize to real world distributions. The dataset, consisting of maps and logs collected in six North American cities, is one of the largest AV datasets to date with more than 7.8 million images. We make the data available to the public at https://www.argoverse.org/av2.html#mapchange-link, along with code and models at https://github.com/johnwlambert/tbv under the CC BY-NC-SA 4.0 license.
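Because the abstract leans on synthetically manipulated vector maps to generate training examples, here is a hedged sketch of one way such a "changed map" example could be produced; the perturbation types and magnitudes are assumptions for illustration, not the paper's recipe:

```python
# Illustrative synthesis of a (map, changed?) training pair by perturbing
# vector-map polylines. Not the TbV generation pipeline.
import random
from typing import List, Tuple
import numpy as np

def synthesize_change(lane_polylines: List[np.ndarray]) -> Tuple[List[np.ndarray], int]:
    """Return (possibly perturbed polylines, label), label 1 = map changed."""
    if random.random() < 0.5:
        return lane_polylines, 0                  # keep the real map: "no change" example
    perturbed = [p.copy() for p in lane_polylines]
    i = random.randrange(len(perturbed))
    if random.random() < 0.5:
        perturbed.pop(i)                          # simulate a removed / repainted lane
    else:
        perturbed[i][:, :2] += np.random.uniform(-1.0, 1.0, size=2)  # lateral shift (m)
    return perturbed, 1
```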
Modern neural networks are over-parameterized and thus rely on strong regularization such as data augmentation and weight decay to reduce overfitting and improve generalization. The dominant form of data augmentation applies invariant transforms, where the learning target of a sample is invariant to the transform applied to that sample. We draw inspiration from human visual classification studies and propose generalizing augmentation with invariant transforms to soft augmentation, where the learning target softens non-linearly as a function of the degree of the transform applied to the sample: e.g., more aggressive image crop augmentations produce less confident learning targets. We demonstrate that soft targets allow for more aggressive data augmentation, offer more robust performance boosts, work with other augmentation policies, and interestingly, produce better calibrated models (since they are trained to be less confident on aggressively cropped/occluded examples). Combined with existing aggressive augmentation strategies, soft targets 1) double the top-1 accuracy boost across CIFAR-10, CIFAR-100, ImageNet-1K, and ImageNet-V2, 2) improve model occlusion performance by up to $4\times$, and 3) halve the expected calibration error (ECE). Finally, we show that soft augmentation generalizes to self-supervised classification tasks.
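To make the core idea concrete, here is a minimal sketch of a soft target that decays non-linearly with how aggressively an image is cropped; the power-law schedule below is an assumption for illustration, not the paper's exact softening curve:

```python
# Soft target for a random crop: confidence on the true class shrinks as less
# of the image remains visible, with the remainder spread over other classes.
import torch

def soft_target(num_classes: int, label: int, visible_frac: float, power: float = 2.0) -> torch.Tensor:
    """Blend the one-hot label toward uniform as the crop becomes more aggressive."""
    confidence = visible_frac ** power                       # fewer visible pixels -> less confident
    target = torch.full((num_classes,), (1.0 - confidence) / (num_classes - 1))
    target[label] = confidence                               # rows sum to 1
    return target

# Example: an aggressive crop keeping 40% of the image, 10-class problem.
t = soft_target(num_classes=10, label=3, visible_frac=0.4)
```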
We address the problem of retrieving images with both a sketch and a text query. We present TASK-former (Text And SKetch transformer), an end-to-end trainable model that takes a text description and a sketch as input. We argue that the two input modalities complement each other in a way that neither modality can easily achieve on its own. TASK-former follows a late-fusion dual-encoder approach, similar to CLIP, which allows for efficient and scalable retrieval since the retrieval set can be indexed independently of the query. We empirically demonstrate that using an input sketch (even a poorly drawn one) in addition to text considerably increases retrieval recall compared to traditional text-based image retrieval. To evaluate our approach, we collect 5,000 hand-drawn sketches for images in the test set of the COCO dataset. The collected sketches are available at https://janesjanes.github.io/tsbir/.
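A hedged sketch of the late-fusion dual-encoder retrieval scheme described above: the gallery is indexed once with an image encoder, and at query time the text and sketch embeddings are fused (here by simple addition, an assumption) and matched by cosine similarity. The encoders are generic stand-ins, not the TASK-former weights.

```python
import torch
import torch.nn.functional as F

def index_gallery(image_encoder, images: torch.Tensor) -> torch.Tensor:
    """Embed and L2-normalize gallery images once, independently of any query."""
    with torch.no_grad():
        return F.normalize(image_encoder(images), dim=-1)        # (N, D)

def retrieve(text_encoder, sketch_encoder, text_tokens, sketch, gallery: torch.Tensor, k: int = 5):
    """Late fusion: combine text and sketch embeddings, then rank by cosine similarity."""
    with torch.no_grad():
        q = text_encoder(text_tokens) + sketch_encoder(sketch)   # fused query embedding
        q = F.normalize(q, dim=-1)                               # (1, D)
        scores = q @ gallery.T                                    # (1, N) cosine similarities
    return scores.topk(k, dim=-1).indices
```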
Soft robotic grippers facilitate contact-rich manipulation, including robust grasping of varied objects. Yet the beneficial compliance of a soft gripper also results in significant deformation that can make precision manipulation challenging. We present Visual Pressure Estimation and Control (VPEC), a method that infers the pressure applied by a soft gripper from an RGB image captured by an external camera. We present results for visual pressure inference when a pneumatic gripper and a tendon-driven gripper make contact with a flat surface. We also show that VPEC enables precision manipulation via closed-loop control of inferred pressure images. In our evaluation, a mobile manipulator (a Stretch RE1 from Hello Robot) uses visual servoing to make contact at a desired pressure, follow a spatial pressure trajectory, and grasp small low-profile objects, including a microSD card, a penny, and a pill. Overall, our results show that visual estimates of applied pressure can enable a soft gripper to perform precision manipulation.
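As a hedged illustration of closing the loop on a visually inferred pressure estimate, the sketch below uses a simple proportional controller; the controller and the `camera` / `gripper` / `infer_pressure` interfaces are hypothetical, not the VPEC implementation.

```python
# Servo a gripper toward a desired contact pressure using only an external
# RGB camera and a learned pressure-inference function.
import numpy as np

def servo_to_pressure(camera, gripper, infer_pressure, target_kpa: float,
                      gain: float = 0.05, tol: float = 0.5, max_steps: int = 200) -> bool:
    """Close the gripper until the inferred peak contact pressure reaches target_kpa."""
    for _ in range(max_steps):
        image = camera.read_rgb()
        pressure_map = infer_pressure(image)      # per-pixel pressure estimate from RGB
        applied = float(np.max(pressure_map))     # peak contact pressure
        error = target_kpa - applied
        if abs(error) < tol:
            return True                           # within tolerance of the target
        gripper.command_delta(gain * error)       # proportional adjustment of the grip
    return False
```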
People often interact with their surroundings by applying pressure with their hands. While hand pressure can be measured by placing pressure sensors between the hand and the environment, doing so can alter contact mechanics, interfere with human tactile perception, require costly sensors, and scale poorly to large environments. We explore the possibility of using a conventional RGB camera to infer hand pressure, enabling machine perception of hand pressure from uninstrumented hands and surfaces. The central insight is that the application of pressure by a hand results in informative appearance changes. Hands share biomechanical properties that result in similar observable phenomena, such as soft-tissue deformation, blood distribution, hand pose, and cast shadows. We collected videos of 36 participants with diverse skin tones applying pressure to an instrumented planar surface. We then trained a deep model (PressureVisionNet) to infer a pressure image from a single RGB image. Our model infers pressure applied by participants outside of the training data and outperforms baselines. We also show that the output of our model depends on the appearance of the hand and on cast shadows near contact regions. Overall, our results suggest that the appearance of a previously unobserved human hand can be used to accurately infer applied pressure. Data, code, and models are available online.
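A minimal training-loop sketch for the task described above (RGB image in, pressure image out); the tiny encoder and the per-pixel L1 loss are assumptions for illustration, not the PressureVisionNet architecture or objective.

```python
import torch
import torch.nn as nn

class TinyPressureNet(nn.Module):
    """Toy fully-convolutional model: RGB (3 channels) -> pressure map (1 channel)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.net(rgb)

def train_step(model, optimizer, rgb, pressure_gt) -> float:
    """One step of per-pixel regression against sensor-measured pressure images."""
    optimizer.zero_grad()
    loss = nn.functional.l1_loss(model(rgb), pressure_gt)
    loss.backward()
    optimizer.step()
    return loss.item()
```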
We present CoGS, a novel method for style-conditioned, sketch-driven synthesis of images. CoGS enables exploration of diverse appearance possibilities for a given sketched object, providing decoupled control over the structure and the appearance of the output. Coarse-grained control over object structure and appearance is enabled via an input sketch and an exemplar "style" conditioning image, which a transformer-based sketch and style encoder maps to a discrete codebook representation. We map the codebook representation into a metric space, enabling fine-grained control over selection and interpolation between multiple synthesis options before generating the image via a vector-quantized GAN (VQGAN) decoder. Our framework thus unifies the search and synthesis tasks: a sketch-and-style pair can be used to run an initial synthesis, which can then be refined by combining it with similar results from a search corpus to produce an image that more closely matches the user's intent. We show that our model, trained on the 125 object classes of our newly created Pseudosketches dataset, is capable of producing a diverse range of semantic content and appearance styles.
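A hedged sketch of the pipeline described above - a transformer encoder maps a (sketch, style image) pair to discrete codebook indices, which a VQGAN decoder turns into an image; all module names and shapes are hypothetical placeholders, not the CoGS implementation.

```python
import torch

def synthesize(sketch_style_encoder, vqgan_decoder, codebook, sketch, style_img) -> torch.Tensor:
    """Encode (sketch, style) into discrete codes, then decode them to an RGB image."""
    with torch.no_grad():
        logits = sketch_style_encoder(sketch, style_img)   # (B, T, K) scores over K codebook entries
        code_indices = logits.argmax(dim=-1)               # (B, T) discrete representation
        codes = codebook(code_indices)                     # look up latent vectors (e.g. nn.Embedding)
        return vqgan_decoder(codes)                        # decoded RGB image
```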
We present MSeg, a composite dataset that unifies semantic segmentation datasets from different domains. A naive merge of the constituent datasets yields poor performance due to inconsistent taxonomies and annotation practices. We reconcile the taxonomies and bring the pixel-level annotations into alignment by relabeling more than 220,000 object masks in more than 80,000 images, requiring more than 1.34 years of collective annotator effort. The resulting composite dataset enables training a single semantic segmentation model that functions effectively across domains and generalizes to datasets that were not seen during training. We adopt zero-shot cross-dataset transfer as a benchmark to systematically evaluate a model's robustness and show that MSeg training yields substantially more robust models in comparison to training on individual datasets or naive mixing of datasets without the presented contributions. A model trained on MSeg ranks first on the WildDash-v1 leaderboard for robust semantic segmentation, with no exposure to WildDash data during training. We evaluate our models in the 2020 Robust Vision Challenge (RVC) as an extreme generalization experiment. The MSeg training set includes only three of the seven datasets in the RVC; more importantly, the evaluation taxonomy of the RVC is different and more detailed. Surprisingly, our model shows competitive performance and ranks second. To evaluate how close we are to the grand aim of robust, efficient, and complete scene understanding, we go beyond semantic segmentation by training instance segmentation and panoptic segmentation models using our dataset. Moreover, we evaluate various engineering design decisions and metrics, including resolution and computational efficiency. Although our models are far from this grand aim, our comprehensive evaluation is crucial for progress. We share all models and code with the community.
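An illustrative sketch of the taxonomy-unification step: each constituent dataset's native label IDs are remapped into a shared taxonomy before training a single model. The class names and IDs below are made-up examples, not the actual MSeg taxonomy or mappings.

```python
import numpy as np

UNIFIED = {"road": 0, "sidewalk": 1, "person": 2, "vehicle": 3, "unlabeled": 255}

# Per-dataset mapping from native label id -> unified label id (example values).
DATASET_TO_UNIFIED = {
    "driving_dataset": {0: UNIFIED["road"], 1: UNIFIED["sidewalk"], 11: UNIFIED["person"], 13: UNIFIED["vehicle"]},
    "indoor_dataset":  {5: UNIFIED["person"], 9: UNIFIED["unlabeled"]},
}

def remap_mask(mask: np.ndarray, dataset: str) -> np.ndarray:
    """Relabel a segmentation mask from its native taxonomy into the unified one."""
    lut = np.full(256, UNIFIED["unlabeled"], dtype=np.uint8)  # default: unlabeled
    for native_id, unified_id in DATASET_TO_UNIFIED[dataset].items():
        lut[native_id] = unified_id
    return lut[mask]                                          # vectorized lookup per pixel
```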
Figure 1: We introduce datasets for 3D tracking and motion forecasting with rich maps for autonomous driving. Our 3D tracking dataset contains sequences of LiDAR measurements, 360° RGB video, front-facing stereo (middle-right), and 6-dof localization. All sequences are aligned with maps containing lane center lines (magenta), driveable region (orange), and ground height. Sequences are annotated with 3D cuboid tracks (green). A wider map view is shown in the bottom-right.
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
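For reference, the per-instance segmentations described above can be read with the standard pycocotools COCO API; the annotation-file path below is a placeholder for whichever split has been downloaded.

```python
from pycocotools.coco import COCO

coco = COCO("annotations/instances_val2017.json")    # placeholder path to a downloaded split
cat_ids = coco.getCatIds(catNms=["person"])           # look up a category id by name
img_ids = coco.getImgIds(catIds=cat_ids)              # images containing that category
ann_ids = coco.getAnnIds(imgIds=img_ids[:1], catIds=cat_ids, iscrowd=None)
anns = coco.loadAnns(ann_ids)                         # per-instance polygons/RLE plus bounding boxes
mask = coco.annToMask(anns[0])                        # binary mask for a single instance
```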