We present MSeg, a composite dataset that unifies semantic segmentation datasets from different domains. A naive merge of the constituent datasets yields poor performance due to inconsistent taxonomies and annotation practices. We reconcile the taxonomies and bring the pixel-level annotations into alignment by relabeling more than 220,000 object masks in more than 80,000 images, requiring more than 1.34 years of collective annotator effort. The resulting composite dataset enables training a single semantic segmentation model that functions effectively across domains and generalizes to datasets that were not seen during training. We adopt zero-shot cross-dataset transfer as a benchmark to systematically evaluate a model's robustness and show that MSeg training yields substantially more robust models compared to training on individual datasets or naive mixing of datasets without the presented contributions. A model trained on MSeg ranks first on the WildDash-v1 leaderboard for robust semantic segmentation, with no exposure to WildDash data during training. We evaluate our models in the 2020 Robust Vision Challenge (RVC) as an extreme generalization experiment: the MSeg training set includes only three of the seven datasets in the RVC; more importantly, the evaluation taxonomy of RVC is different and more detailed. Surprisingly, our model shows competitive performance and ranks second. To evaluate how close we are to the grand aim of robust, efficient, and complete scene understanding, we go beyond semantic segmentation by training instance segmentation and panoptic segmentation models on our dataset. Moreover, we also evaluate various engineering design decisions and metrics, including resolution and computational efficiency. Although our models are far from this grand aim, our comprehensive evaluation is crucial for progress. We share all models and code with the community.
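The taxonomy reconciliation described above boils down to remapping every constituent dataset's label ids into one unified label space before training. The snippet below is a minimal sketch of such a remapping step, assuming made-up class ids and a hypothetical mapping table; it is not the actual MSeg taxonomy or tooling.

```python
import numpy as np

# Hypothetical example: remap a Cityscapes-style label map into a unified
# taxonomy. The id values and unified class names below are illustrative only.
UNIFIED = {"road": 0, "person": 1, "vehicle": 2, "void": 255}

# Per-dataset lookup: source train id -> unified id (assumed values).
CITYSCAPES_TO_UNIFIED = {
    0: UNIFIED["road"],      # road
    11: UNIFIED["person"],   # person
    13: UNIFIED["vehicle"],  # car
}

def remap_labels(label_map: np.ndarray, mapping: dict, void_id: int = 255) -> np.ndarray:
    """Vectorized remapping via a lookup table; unmapped ids become void."""
    lut = np.full(256, void_id, dtype=np.uint8)
    for src_id, uni_id in mapping.items():
        lut[src_id] = uni_id
    return lut[label_map]

if __name__ == "__main__":
    fake_labels = np.array([[0, 11], [13, 7]], dtype=np.uint8)
    print(remap_labels(fake_labels, CITYSCAPES_TO_UNIFIED))
    # [[  0   1]
    #  [  2 255]]
```

A lookup-table remap like this is cheap enough to run on the fly inside a data loader, which is convenient when several datasets with different label spaces are mixed in a single training run.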
Image segmentation for video analysis plays an essential role in different research fields such as smart cities, healthcare, computer vision, geoscience, and remote sensing applications. In this regard, a significant effort has recently been devoted to developing novel segmentation strategies; one of the latest outstanding achievements is panoptic segmentation. The latter was born from the fusion of semantic and instance segmentation. Explicitly, panoptic segmentation is currently under study to help gain a more nuanced understanding of image scenes for video surveillance, crowd counting, autonomous driving, and medical image analysis, and a deeper understanding of scenes in general. To that end, we present in this paper the first comprehensive review of existing panoptic segmentation methods, to the best of the authors' knowledge. Accordingly, a well-defined taxonomy of existing panoptic techniques is established based on the nature of the adopted algorithms, the application scenes, and the primary objectives. Moreover, the use of panoptic segmentation for annotating new datasets with pseudo-labels is discussed. Moving on, ablation studies are carried out to understand panoptic methods from different perspectives. Furthermore, evaluation metrics suitable for panoptic segmentation are discussed, and a comparison of the performance of existing solutions is provided to inform the state of the art and identify their limitations and strengths. Lastly, the challenges currently facing panoptic techniques and the future trends expected to attract considerable interest in the near future are elaborated, which can serve as a starting point for upcoming research studies. Papers with available code are provided at: https://github.com/elharroussomar/awesome-panoptic-egation
Semantic understanding of visual scenes is one of the holy grails of computer vision. Despite efforts of the community in data collection, there are still few image datasets covering a wide range of scenes and object categories with pixel-wise annotations for scene understanding. In this work, we present a densely annotated dataset ADE20K, which spans diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts. In total there are 25k images of complex everyday scenes containing a variety of objects in their natural spatial context. On average there are 19.5 instances and 10.5 object classes per image. Based on ADE20K, we construct benchmarks for scene parsing and instance segmentation. We provide baseline performances on both of the benchmarks and re-implement the state-of-the-art models for open source. We further evaluate the effect of synchronized batch normalization and find that a reasonably large batch size is crucial for semantic segmentation performance. We show that the networks trained on ADE20K are able to segment a wide variety of scenes and objects.
How to effectively leverage the plentiful existing datasets to train a robust and high-performance model is of great significance for many practical applications. However, a model trained on a naive merge of different datasets tends to obtain poor performance due to annotation conflicts and domain divergence. In this paper, we attempt to train a unified model that is expected to perform well across domains on several popular segmentation datasets. We conduct a detailed analysis of the impact on model generalization from three aspects: data augmentation, training strategies, and model capacity. Based on the analysis, we propose a robust solution that is able to improve model generalization across domains. Our solution ranks 2nd in the RVC 2022 semantic segmentation task, using a dataset only 1/3 the size of that used by the 1st-place model.
Deep supervised models have an unprecedented capacity to absorb large quantities of training data. Hence, training on many datasets becomes a way to gracefully degrade in unusual scenes. Unfortunately, different datasets often use incompatible labels. For example, the Cityscapes road class subsumes all driving surfaces, while Vistas defines separate classes for road markings, manholes, etc. We address this challenge by proposing a method for seamless learning on datasets with overlapping classes, based on partial labels and a probabilistic loss. Our method achieves competitive within-dataset and cross-dataset generalization, as well as the ability to learn visual concepts that are not separately labeled in any of the training datasets. Experiments reveal competitive or state-of-the-art performance on two multi-domain dataset collections and competitive performance in the wild.
Deep supervised models have an unprecedented capacity to absorb large quantities of training data. Hence, training on multiple datasets becomes a method of choice towards strong generalization in usual scenes and graceful performance degradation in edge cases. Unfortunately, different datasets often have incompatible labels. For instance, the Cityscapes road class subsumes all driving surfaces, while Vistas defines separate classes for road markings, manholes etc. Furthermore, many datasets have overlapping labels. For instance, pickups are labeled as trucks in VIPER, cars in Vistas, and vans in ADE20k. We address this challenge by considering labels as unions of universal visual concepts. This allows seamless and principled learning on multi-domain dataset collections without requiring any relabeling effort. Our method achieves competitive within-dataset and cross-dataset generalization, as well as ability to learn visual concepts which are not separately labeled in any of the training datasets. Experiments reveal competitive or state-of-the-art performance on two multi-domain dataset collections and on the WildDash 2 benchmark.
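A compact way to picture the "labels as unions of universal visual concepts" idea above is to predict a softmax over universal classes and supervise each pixel with the summed probability of the universal classes its coarse dataset label covers. The sketch below illustrates that partial-label loss under assumed class lists, unions, and tensor shapes; it is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical universal taxonomy, plus one dataset whose coarse labels are
# defined as unions of universal classes (e.g., a broad "road" label that also
# covers markings and manholes).
UNIVERSAL = ["road_surface", "road_marking", "manhole", "person"]
DATASET_LABEL_TO_UNION = {
    0: [0, 1, 2],  # coarse "road" = road_surface ∪ road_marking ∪ manhole
    1: [3],        # "person"
}

def partial_label_nll(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Negative log of the summed universal-class probabilities per pixel.

    logits: (B, C_universal, H, W); target: (B, H, W) with dataset-specific ids.
    """
    probs = F.softmax(logits, dim=1)
    loss = 0.0
    for ds_id, union in DATASET_LABEL_TO_UNION.items():
        mask = target == ds_id                          # pixels carrying this label
        if mask.any():
            p_union = probs[:, union, :, :].sum(dim=1)  # P(label) = sum over the union
            loss = loss - torch.log(p_union[mask] + 1e-8).mean()
    return loss

logits = torch.randn(1, len(UNIVERSAL), 4, 4)
target = torch.randint(0, 2, (1, 4, 4))
print(partial_label_nll(logits, target))
```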
Cityscapes (TU Dresden, www.cityscapes-dataset.net): fine annotations for 3475 train/val images and 1525 test images, plus coarse annotations for 20 000 additional training images.
The recently introduced panoptic segmentation task has renewed our community's interest in unifying the tasks of instance segmentation (for thing classes) and semantic segmentation (for stuff classes). However, current state-of-the-art methods for this joint task use separate and dissimilar networks for instance and semantic segmentation, without performing any shared computation. In this work, we aim to unify these methods at the architectural level, designing a single network for both tasks. Our approach is to endow Mask R-CNN, a popular instance segmentation method, with a semantic segmentation branch using a shared Feature Pyramid Network (FPN) backbone. Surprisingly, this simple baseline not only remains effective for instance segmentation, but also yields a lightweight, top-performing method for semantic segmentation. In this work, we perform a detailed study of this minimally extended version of Mask R-CNN with FPN, which we refer to as Panoptic FPN, and show it is a robust and accurate baseline for both tasks. Given its effectiveness and conceptual simplicity, we hope our method can serve as a strong baseline and aid future research in panoptic segmentation.
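The semantic branch sketched in this abstract upsamples each shared FPN level to a common 1/4 scale with small conv blocks, sums the results, and applies a final 1x1 classifier. Below is a simplified, assumption-based rendering of that branch alone (channel counts, normalization, and level names are placeholders); the Mask R-CNN instance branch is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticFPNBranch(nn.Module):
    """Semantic head over FPN levels P2-P5 (a sketch; hyperparameters assumed)."""

    def __init__(self, in_channels: int = 256, mid_channels: int = 128, num_classes: int = 19):
        super().__init__()
        # One small conv stack per FPN level; deeper levels get more upsampling stages.
        self.stages = nn.ModuleList()
        for num_ups in range(4):  # P2: 0 upsamples, P3: 1, P4: 2, P5: 3
            blocks, ch = [], in_channels
            for _ in range(max(num_ups, 1)):
                blocks += [nn.Conv2d(ch, mid_channels, 3, padding=1),
                           nn.GroupNorm(32, mid_channels), nn.ReLU(inplace=True)]
                ch = mid_channels
                if num_ups > 0:
                    blocks += [nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)]
            self.stages.append(nn.Sequential(*blocks))
        self.classifier = nn.Conv2d(mid_channels, num_classes, 1)

    def forward(self, p2, p3, p4, p5):
        # Every level ends up at P2's (1/4) resolution, so they can be summed.
        fused = sum(stage(p) for stage, p in zip(self.stages, (p2, p3, p4, p5)))
        logits = self.classifier(fused)
        # Final 4x upsample back to the input resolution.
        return F.interpolate(logits, scale_factor=4, mode="bilinear", align_corners=False)

# Fake FPN features at 1/4, 1/8, 1/16, 1/32 of a 256x256 input.
feats = [torch.randn(1, 256, s, s) for s in (64, 32, 16, 8)]
print(SemanticFPNBranch()(*feats).shape)  # torch.Size([1, 19, 256, 256])
```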
Panoptic image segmentation is the computer vision task of finding groups of pixels in an image and assigning semantic classes and object instance identifiers to them. Research in image segmentation has become increasingly popular due to its critical applications in robotics and autonomous driving. The research community therefore relies on publicly available benchmark datasets to advance the state of the art in computer vision. However, due to the high cost of labeling images, there is a shortage of publicly available ground-truth labels suitable for panoptic segmentation. The high labeling cost also makes it challenging to extend existing datasets to the video domain and to multi-camera setups. We therefore present the Waymo Open Dataset: Panoramic Video Panoptic Segmentation Dataset, a large-scale dataset that offers high-quality panoptic segmentation labels for autonomous driving. We generate the dataset from the publicly available Waymo Open Dataset, leveraging its diverse set of camera images. Our labels are consistent over time for video processing and consistent across multiple cameras mounted on the vehicles for full panoramic scene understanding. Specifically, we provide labels for 28 semantic categories and 2,860 temporal sequences captured by five cameras mounted on autonomous vehicles driving in three different geographical locations, leading to a total of 100k labeled camera images. To the best of our knowledge, this makes our dataset an order of magnitude larger than existing datasets of this kind. We further propose a new benchmark for panoramic video panoptic segmentation and establish a number of strong baselines based on the DeepLab family of models. We will make the benchmark and the code publicly available. The dataset can be found at https://waymo.com/open.
The task of assigning a semantic class and a track identity to every pixel in a video is called video panoptic segmentation. Our work is the first to target this task in a real-world setting requiring dense interpretation in both the spatial and temporal domains. As the ground truth for this task is difficult to obtain, however, existing datasets are either constructed synthetically or only sparsely annotated within short video clips. To overcome this, we introduce a new benchmark comprising two datasets, KITTI-STEP and MOTChallenge-STEP. The datasets contain long video sequences, providing challenging examples and a test bed for studying long-term pixel-precise segmentation and tracking under real-world conditions. We further propose a novel evaluation metric, Segmentation and Tracking Quality (STQ), that fairly balances the semantic and tracking aspects of this task and is better suited to evaluating sequences of arbitrary length. Finally, we provide several baselines to evaluate the status of existing methods on this new and challenging dataset. We have made our datasets, metric, benchmark server, and baselines publicly available, and hope this will inspire future research.
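For reference, STQ as introduced above is usually written as the geometric mean of an association term and a segmentation term; a sketch of that form (with the detailed AQ and SQ definitions deferred to the benchmark paper) is:

```latex
% Association quality (AQ): pixel-level track-association score.
% Segmentation quality (SQ): class-level IoU score.
\mathrm{STQ} = \sqrt{\mathrm{AQ} \times \mathrm{SQ}}
```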
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities. We construct BDD100K, the largest driving video dataset with 100K videos and 10 tasks to evaluate the exciting progress of image recognition algorithms on autonomous driving. The dataset possesses geographic, environmental, and weather diversity, which is useful for training models that are less likely to be surprised by new conditions. Based on this diverse dataset, we build a benchmark for heterogeneous multitask learning and study how to solve the tasks together. Our experiments show that special training strategies are needed for existing models to perform such heterogeneous tasks. BDD100K opens the door for future studies in this important venue.
In this paper, we propose a unified panoptic segmentation network (UPSNet) for tackling the newly proposed panoptic segmentation task. On top of a single backbone residual network, we first design a deformable convolution based semantic segmentation head and a Mask R-CNN style instance segmentation head which solve these two subtasks simultaneously. More importantly, we introduce a parameter-free panoptic head which solves the panoptic segmentation via pixel-wise classification. It first leverages the logits from the previous two heads and then innovatively expands the representation for enabling prediction of an extra unknown class which helps better resolve the conflicts between semantic and instance segmentation. Additionally, it handles the challenge caused by the varying number of instances and permits back propagation to the bottom modules in an end-to-end manner. Extensive experimental results on Cityscapes, COCO and our internal dataset demonstrate that our UPSNet achieves state-of-the-art performance with much faster inference. Code has been made available at: https://github.com/uber-research/UPSNet.
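The parameter-free panoptic head described here can be viewed as stacking per-pixel logits: stuff channels are taken directly from the semantic head, and each detected instance contributes a channel formed by adding its mask logit to the semantic logit of its predicted thing class; the arg-max over the stack yields the panoptic prediction. The sketch below is an assumption-heavy simplification of that fusion (box masking and the extra unknown-class channel are omitted), not the released UPSNet code.

```python
import torch

def fuse_panoptic_logits(sem_logits, inst_mask_logits, inst_classes, num_stuff):
    """Sketch of pixel-wise panoptic fusion (shapes and details are assumed).

    sem_logits:       (C, H, W) semantic logits; the first `num_stuff` channels are stuff.
    inst_mask_logits: (N, H, W) per-instance mask logits from the instance head.
    inst_classes:     (N,) predicted thing-class index (into sem_logits) per instance.
    Returns an (H, W) map: values < num_stuff are stuff classes, the rest are instance ids.
    """
    stuff_logits = sem_logits[:num_stuff]                        # stuff channels as-is
    # Each instance channel: its mask logit plus the semantic logit of its class.
    thing_logits = inst_mask_logits + sem_logits[inst_classes]   # (N, H, W)
    panoptic_logits = torch.cat([stuff_logits, thing_logits], dim=0)
    return panoptic_logits.argmax(dim=0)

sem = torch.randn(19, 32, 32)         # e.g., 11 stuff + 8 thing classes
masks = torch.randn(3, 32, 32)        # 3 detected instances
classes = torch.tensor([11, 13, 13])  # their thing-class indices
print(fuse_panoptic_logits(sem, masks, classes, num_stuff=11).shape)
```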
Transfer learning enables re-using knowledge learned on a source task to help learn a target task. A simple form of transfer learning is common in current state-of-the-art computer vision models: pre-training an image classification model on the ILSVRC dataset and then fine-tuning it on any target task. However, previous systematic studies of transfer learning have been limited, and the circumstances in which it is expected to work are not fully understood. In this paper, we carry out an extensive experimental exploration of transfer learning across vastly different image domains (consumer photos, autonomous driving, aerial imagery, underwater, indoor scenes, synthetic, close-ups) and task types (semantic segmentation, object detection, depth estimation, keypoint detection). Importantly, these are all complex, structured-output task types relevant to modern computer vision applications. In total we carry out over 2000 transfer learning experiments, including many where the source and target come from different image domains, task types, or both. We systematically analyze these experiments to understand the impact of image domain, task type, and dataset size on transfer learning performance. Our study leads to several insights and concrete recommendations: (1) for most tasks there exists a source that significantly outperforms ILSVRC'12 pre-training; (2) the image domain is the most important factor for achieving positive transfer; (3) the source dataset should include the image domain of the target dataset for best results; (4) at the same time, we observe only small negative effects when the image domain of the source task is broader than that of the target; (5) transfer across task types can be beneficial, but its success is heavily dependent on both the source and target task types.
In dense image segmentation tasks (e.g., semantic, panoptic), existing methods can hardly generalize well to unseen image domains, predefined classes, and image resolution & quality variations. Motivated by these observations, we construct a large-scale entity segmentation dataset to explore fine-grained entity segmentation, with a strong focus on open-world and high-quality dense segmentation. The dataset contains images spanning diverse image domains and resolutions, along with high-quality mask annotations for training and testing. Given the high-quality and -resolution nature of the dataset, we propose CropFormer for high-quality segmentation, which can improve mask prediction using high-res image crops that provide more fine-grained image details than the full image. CropFormer is the first query-based Transformer architecture that can effectively ensemble mask predictions from multiple image crops, by learning queries that can associate the same entities across the full image and its crop. With CropFormer, we achieve a significant AP gain of $1.9$ on the challenging fine-grained entity segmentation task. The dataset and code will be released at http://luqi.info/entityv2.github.io/.
In this work, we introduce panoramic panoptic segmentation as the most holistic form of scene understanding, both in terms of field of view (FoV) and image-level understanding, for standard camera-based input. Complete surrounding understanding provides the maximum of information to a mobile agent, which is essential for any intelligent vehicle in order to make informed decisions in safety-critical dynamic environments such as real-world traffic. To overcome the lack of annotated panoramic images, we propose a framework that allows model training on standard pinhole images and transfers the learned features to a different domain in a cost-minimizing way. Using our proposed method together with dense contrastive learning, we achieve significant improvements over a non-adapted approach. Depending on the efficient panoptic segmentation architecture, we improve by 3.5-6.5%, measured in Panoptic Quality (PQ), on our established Wild Panoramic Panoptic Segmentation (WildPPS) dataset. Furthermore, our efficient framework does not require access to images of the target domain, making it a feasible domain generalization approach suitable for limited hardware settings. As additional contributions, we release WildPPS, the first panoramic panoptic image dataset, to foster progress in surrounding perception, and explore a novel training procedure combining supervised and contrastive training.
We present X-Decoder, a generalized decoding model that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder takes as input two types of queries: (i) generic non-semantic queries and (ii) semantic queries induced from text inputs, to decode different pixel-level and token-level outputs in the same semantic space. With such a novel design, X-Decoder is the first work that provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks. Further, our design enables seamless interactions across tasks at different granularities and brings mutual benefits by learning a common and rich pixel-level visual-semantic understanding space, without any pseudo-labeling. After pretraining on a mixed set of a limited amount of segmentation data and millions of image-text pairs, X-Decoder exhibits strong transferability to a wide range of downstream tasks in both zero-shot and finetuning settings. Notably, it achieves (1) state-of-the-art results on open-vocabulary segmentation and referring segmentation on eight datasets; (2) better or competitive finetuned performance to other generalist and specialist models on segmentation and VL tasks; and (3) flexibility for efficient finetuning and novel task composition (e.g., referring captioning and image editing). Code, demo, video, and visualization are available at https://x-decoder-vl.github.io.
We propose and study a task we name panoptic segmentation (PS). Panoptic segmentation unifies the typically distinct tasks of semantic segmentation (assign a class label to each pixel) and instance segmentation (detect and segment each object instance). The proposed task requires generating a coherent scene segmentation that is rich and complete, an important step toward real-world vision systems. While early work in computer vision addressed related image/scene parsing tasks, these are not currently popular, possibly due to lack of appropriate metrics or associated recognition challenges. To address this, we propose a novel panoptic quality (PQ) metric that captures performance for all classes (stuff and things) in an interpretable and unified manner. Using the proposed metric, we perform a rigorous study of both human and machine performance for PS on three existing datasets, revealing interesting insights about the task. The aim of our work is to revive the interest of the community in a more unified view of image segmentation.
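For reference, the panoptic quality (PQ) metric proposed here matches predicted and ground-truth segments (a match requires IoU > 0.5) and factors into a segmentation-quality and a recognition-quality term:

```latex
\mathrm{PQ}
  = \underbrace{\frac{\sum_{(p,g)\in \mathit{TP}} \mathrm{IoU}(p,g)}{|\mathit{TP}|}}_{\text{segmentation quality (SQ)}}
    \times
    \underbrace{\frac{|\mathit{TP}|}{|\mathit{TP}| + \tfrac{1}{2}|\mathit{FP}| + \tfrac{1}{2}|\mathit{FN}|}}_{\text{recognition quality (RQ)}}
  = \frac{\sum_{(p,g)\in \mathit{TP}} \mathrm{IoU}(p,g)}{|\mathit{TP}| + \tfrac{1}{2}|\mathit{FP}| + \tfrac{1}{2}|\mathit{FN}|}
```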
Panoptic scene understanding and tracking of dynamic agents are essential for robots and automated vehicles to navigate in urban environments. As LiDAR provides an accurate, illumination-independent geometric depiction of the scene, performing these tasks on LiDAR point clouds provides reliable predictions. However, existing datasets lack diversity in the types of urban scenes and have a limited number of dynamic object instances, which hinders both the learning of these tasks and credible benchmarking of the developed methods. In this paper, we introduce the large-scale Panoptic nuScenes benchmark dataset, which extends our popular nuScenes dataset with point-wise ground-truth annotations for semantic segmentation, panoptic segmentation, and panoptic tracking tasks. To facilitate comparison, we provide several strong baselines for these tasks on our proposed dataset. Moreover, we analyze the drawbacks of existing metrics for panoptic tracking and propose the novel instance-centric PAT metric that addresses these issues. We present exhaustive experiments demonstrating the utility of Panoptic nuScenes compared to existing datasets and make the online evaluation server available at nuScenes.org. We believe that this extension will accelerate research into novel methods for scene understanding of dynamic urban environments.
Jitendra Malik once said, "Supervision is the opium of the AI researcher". Most deep learning techniques heavily rely on extreme amounts of human labels to work effectively. In today's world, the rate of data creation greatly surpasses the rate of data annotation. Full reliance on human annotations is just a temporary means to solve current closed problems in AI. In reality, only a tiny fraction of data is annotated. Annotation Efficient Learning (AEL) is a study of algorithms to train models effectively with fewer annotations. To thrive in AEL environments, we need deep learning techniques that rely less on manual annotations (e.g., image, bounding-box, and per-pixel labels), but learn useful information from unlabeled data. In this thesis, we explore five different techniques for handling AEL.