Training semantic segmentation models on multiple datasets has sparked a lot of recent interest in the computer vision community. This interest is motivated by expensive annotations and the desire to be proficient across multiple visual domains. However, established datasets have mutually incompatible labels which disrupt principled inference in the wild. We address this issue by automatically constructing a universal taxonomy through iterative dataset integration. Our method detects subset-superset relationships between dataset-specific labels, and supports learning of sub-class logits by treating super-classes as partial labels. We present experiments on standard dataset collections and demonstrate competitive generalization performance with respect to previous work.
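To make the partial-label idea above concrete, here is a minimal PyTorch-style sketch (not the authors' implementation; the taxonomy and class indices are illustrative assumptions): a pixel annotated with a super-class contributes the negative log of the summed softmax probabilities of its sub-classes.

```python
import torch
import torch.nn.functional as F

# Hypothetical universal taxonomy: logits are predicted over leaf classes only.
# A dataset-specific super-class (e.g. a generic "road") maps to several leaves.
SUPERCLASS_TO_LEAVES = {
    0: [0],          # "sky"            -> {sky}
    1: [1, 2, 3],    # "road" (generic) -> {paved road, road marking, manhole}
}

def partial_label_nll(leaf_logits, labels):
    """leaf_logits: (N, C_leaf, H, W); labels: (N, H, W) with super-class ids."""
    probs = F.softmax(leaf_logits, dim=1)             # per-pixel leaf distribution
    losses = []
    for sup, leaves in SUPERCLASS_TO_LEAVES.items():
        mask = labels == sup                          # pixels carrying this partial label
        if mask.any():
            p_sup = probs[:, leaves].sum(dim=1)       # P(super-class) = sum of leaf probs
            losses.append(-torch.log(p_sup[mask] + 1e-12))
    return torch.cat(losses).mean()

# toy usage
logits = torch.randn(2, 4, 8, 8, requires_grad=True)
labels = torch.randint(0, 2, (2, 8, 8))
loss = partial_label_nll(logits, labels)
loss.backward()
```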
Deep supervised models have an unprecedented capacity to absorb large quantities of training data. Hence, training on multiple datasets becomes a method of choice towards strong generalization in usual scenes and graceful performance degradation in edge cases. Unfortunately, different datasets often have incompatible labels. For instance, the Cityscapes road class subsumes all driving surfaces, while Vistas defines separate classes for road markings, manholes etc. Furthermore, many datasets have overlapping labels. For instance, pickups are labeled as trucks in VIPER, cars in Vistas, and vans in ADE20k. We address this challenge by considering labels as unions of universal visual concepts. This allows seamless and principled learning on multi-domain dataset collections without requiring any relabeling effort. Our method achieves competitive within-dataset and cross-dataset generalization, as well as the ability to learn visual concepts which are not separately labeled in any of the training datasets. Experiments reveal competitive or state-of-the-art performance on two multi-domain dataset collections and on the WildDash 2 benchmark.
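As a rough illustration of how labels viewed as unions of universal concepts can be handled at evaluation time (a sketch under assumed class lists, not the paper's code), predictions over universal concepts can be collapsed back to a dataset's own taxonomy with a binary membership matrix:

```python
import torch

# Assumed universal concepts and one dataset's labels expressed as unions of them.
UNIVERSAL = ["car", "pickup", "van", "truck"]
VISTAS_LABELS = {            # hypothetical mapping: dataset class -> set of universal concepts
    "car":   {"car", "pickup"},
    "truck": {"truck"},
    "van":   {"van"},
}

def membership_matrix(universal, dataset_labels):
    """Binary matrix M of shape (num_dataset_classes, num_universal)."""
    M = torch.zeros(len(dataset_labels), len(universal))
    for i, concepts in enumerate(dataset_labels.values()):
        for c in concepts:
            M[i, universal.index(c)] = 1.0
    return M

def collapse_to_dataset(universal_probs, M):
    """universal_probs: (N, C_uni, H, W) -> dataset-level probabilities (N, C_ds, H, W)."""
    return torch.einsum("nuhw,du->ndhw", universal_probs, M)

M = membership_matrix(UNIVERSAL, VISTAS_LABELS)
probs = torch.rand(1, len(UNIVERSAL), 4, 4).softmax(dim=1)
dataset_pred = collapse_to_dataset(probs, M).argmax(dim=1)   # within-dataset evaluation
```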
Deep supervised models have an unprecedented capacity to absorb large quantities of training data. Hence, training on many datasets becomes a method of choice towards graceful performance degradation in unusual scenes. Unfortunately, different datasets often use incompatible labels. For instance, the Cityscapes road class subsumes all driving surfaces, while Vistas defines separate classes for road markings, manholes, etc. We address this challenge by proposing a method for seamless learning on datasets with overlapping classes, based on partial labels and a probabilistic loss. Our method achieves competitive within-dataset and cross-dataset generalization, as well as the ability to learn visual concepts which are not separately labeled in any of the training datasets. Experiments reveal competitive or state-of-the-art performance on two multi-domain dataset collections and on the WildDash benchmark.
We present MSeg, a composite dataset that unifies semantic segmentation datasets from different domains. A naive merge of the constituent datasets yields poor performance due to inconsistent taxonomies and annotation practices. We reconcile the taxonomies and bring the pixel-level annotations into alignment by relabeling more than 220,000 object masks in more than 80,000 images, requiring more than 1.34 years of collective annotator effort. The resulting composite dataset enables training a single semantic segmentation model that functions effectively across domains and generalizes to datasets that were not seen during training. We adopt zero-shot cross-dataset transfer as a benchmark to systematically evaluate a model's robustness and show that MSeg training yields substantially more robust models than training on individual datasets or a naive mixing of datasets without the proposed contributions. A model trained on MSeg ranks first on the WildDash-v1 leaderboard for robust semantic segmentation, with no exposure to WildDash data during training. We evaluate our models in the 2020 Robust Vision Challenge (RVC) as an extreme generalization experiment: the MSeg training set includes only three of the seven datasets in the RVC, and, more importantly, the evaluation taxonomy of the RVC is different and more detailed. Surprisingly, our model shows competitive performance and ranks second. To evaluate how close we are to the grand aim of robust, efficient, and complete scene understanding, we go beyond semantic segmentation by training instance segmentation and panoptic segmentation models using our dataset. Moreover, we also evaluate various engineering design decisions and metrics, including resolution and computational efficiency. Although our models are far from this grand aim, our comprehensive evaluation is crucial for progress. We share all the models and code with the community.
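A minimal sketch of the kind of label remapping a composite dataset like this requires (the class names, IDs, and mapping below are placeholders, not MSeg's actual taxonomy): every constituent dataset's label maps are pushed through a lookup table into the unified ID space before training.

```python
import numpy as np

# Hypothetical per-dataset mapping into a unified taxonomy (255 = ignore).
UNIFIED = {"road": 0, "sidewalk": 1, "person": 2, "vehicle": 3}
CITYSCAPES_TO_UNIFIED = {0: 0, 1: 1, 11: 2, 13: 3}   # original id -> unified id

def build_lut(mapping, num_src_classes, ignore_id=255):
    lut = np.full(num_src_classes + 1, ignore_id, dtype=np.uint8)
    for src, dst in mapping.items():
        lut[src] = dst
    return lut

def remap(label_map, lut):
    """label_map: (H, W) array of source ids -> (H, W) array of unified ids."""
    return lut[label_map]

lut = build_lut(CITYSCAPES_TO_UNIFIED, num_src_classes=33)
label_map = np.random.randint(0, 34, size=(4, 4))
unified = remap(label_map, lut)
```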
Computer vision is driven by the many datasets available for training or evaluating new methods. However, each dataset has a different set of class labels, different visual definitions of classes, images following a particular distribution, annotation protocols, etc. In this paper we explore the automatic discovery of visual-semantic relations between labels across datasets. We want to understand how instances of a certain class in one dataset relate to instances of another class in another dataset: are they in an identity, parent/child, or overlap relation, or is there no link between them at all? To find relations between labels across datasets, we propose methods based on language, on vision, and on their combination. Our methods can effectively discover label relations across datasets, as well as their type. We use these results for a deeper inspection of why instances relate, to find missing aspects of classes, and to use our relations to create finer-grained annotations. We conclude that label relations cannot be established by looking at the names of classes alone, as they depend strongly on how each of the datasets was constructed.
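One way to picture the relation types discussed here (purely an illustrative heuristic, not the paper's method): given how often instances of label A also match label B and vice versa, the two containment scores can be thresholded into identity, parent/child, overlap, or no link.

```python
def label_relation(a_in_b: float, b_in_a: float, hi: float = 0.8, lo: float = 0.2) -> str:
    """a_in_b: fraction of A-instances that also match B; b_in_a: the reverse.

    Thresholds are arbitrary; real relation discovery would combine
    language and visual evidence as the paper describes.
    """
    if a_in_b >= hi and b_in_a >= hi:
        return "identity"          # e.g. "sofa" vs "couch"
    if a_in_b >= hi and b_in_a <= lo:
        return "child_of"          # A is a child of B, e.g. "pickup" -> "vehicle"
    if b_in_a >= hi and a_in_b <= lo:
        return "parent_of"         # A is a parent of B
    if a_in_b <= lo and b_in_a <= lo:
        return "unrelated"
    return "overlap"               # partial overlap, e.g. "truck" vs "van"

print(label_relation(0.95, 0.1))   # -> "child_of"
```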
Over the past few years, developing a broad, universal, and general-purpose computer vision system has become a hot topic. A powerful universal system would be capable of solving diverse vision tasks simultaneously without being restricted to a specific problem or a specific data domain, which is of great importance in practical real-world computer vision applications. This study pushes the direction forward by concentrating on the million-scale multi-domain universal object detection problem. The problem is not trivial due to its complicated nature in terms of cross-dataset category label duplication, label conflicts, and the hierarchical taxonomy handling. Moreover, how to utilize emerging large pre-trained vision models for million-scale cross-dataset object detection in a resource-efficient way remains an open challenge. This paper tries to address these challenges by introducing our practices in label handling, hierarchy-aware loss design and resource-efficient model training with a pre-trained large model. Our method is ranked second in the object detection track of Robust Vision Challenge 2022 (RVC 2022). We hope our detailed study will serve as an alternative practice paradigm for similar problems in the community. The code is available at https://github.com/linfeng93/Large-UniDet.
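A rough sketch of one common form of hierarchy-aware classification loss (an illustrative assumption; the paper's exact loss design may differ): a detection labeled with a leaf class also supervises all of its ancestors as positives under a multi-label sigmoid loss.

```python
import torch
import torch.nn.functional as F

# Hypothetical taxonomy: child -> parent (None for roots).
PARENT = {"pickup": "truck", "truck": "vehicle", "vehicle": None, "car": "vehicle"}
CLASSES = list(PARENT.keys())

def ancestors(label):
    out = []
    while label is not None:
        out.append(label)
        label = PARENT[label]
    return out

def hierarchy_bce(logits, label):
    """logits: (num_classes,) raw scores; label: leaf class name."""
    target = torch.zeros(len(CLASSES))
    for name in ancestors(label):           # the leaf and all its ancestors are positives
        target[CLASSES.index(name)] = 1.0
    return F.binary_cross_entropy_with_logits(logits, target)

loss = hierarchy_bce(torch.randn(len(CLASSES)), "pickup")
```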
How to effectively leverage the plentiful existing datasets to train a robust and high-performance model is of great significance for many practical applications. However, a model trained on a naive merge of different datasets tends to obtain poor performance due to annotation conflicts and domain divergence. In this paper, we attempt to train a unified model that is expected to perform well across domains on several popular segmentation datasets. We conduct a detailed analysis of the impact on model generalization from three aspects: data augmentation, training strategies, and model capacity. Based on the analysis, we propose a robust solution that is able to improve model generalization across domains. Our solution ranks 2nd on the RVC 2022 semantic segmentation task, using a dataset only 1/3 the size of the one used by the 1st-place model.
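For a concrete picture of what training a single model on several datasets typically looks like (a generic sketch with placeholder datasets, not the authors' pipeline), batches can be drawn from each dataset in turn and fed through the same network:

```python
import itertools
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder datasets standing in for e.g. Cityscapes, Vistas, ADE20k.
datasets = [TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64, 32, 32)))
            for _ in range(3)]
loaders = [itertools.cycle(DataLoader(d, batch_size=8, shuffle=True)) for d in datasets]

model = torch.nn.Conv2d(3, 10, kernel_size=1)            # stand-in for a segmentation net
optim = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(30):
    images, labels = next(loaders[step % len(loaders)])   # round-robin over datasets
    logits = model(images)                                # (B, 10, 32, 32)
    loss = torch.nn.functional.cross_entropy(logits, labels)
    optim.zero_grad()
    loss.backward()
    optim.step()
```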
Dense semantic forecasting anticipates future events in video by inferring pixel-level semantics of an unobserved future image. We propose a novel approach that is applicable to various single-frame architectures and tasks. Our approach consists of two modules. The feature-to-motion (F2M) module forecasts a dense deformation field which warps past features into their future positions. The feature-to-feature (F2F) module regresses future features directly and is therefore able to account for emergent scenery. The compound F2MF model decouples the effects of motion from the effects of novelty in a task-agnostic manner. We aim to apply F2MF forecasting to the most subsampled and most abstract representation of the desired single-frame model. Our design leverages deformable convolutions and spatial correlation coefficients across neighbouring time instants. We perform experiments on three dense prediction tasks: semantic segmentation, instance-level segmentation, and panoptic segmentation. The results reveal state-of-the-art forecasting accuracy across all three tasks.
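A toy sketch of the warp-and-blend idea (shapes and the blending rule are assumptions for illustration, not the published F2MF architecture): past features are warped by a forecast deformation field via grid sampling and blended with directly regressed future features.

```python
import torch
import torch.nn.functional as F

def warp(features, flow):
    """features: (N, C, H, W); flow: (N, 2, H, W) in pixels; samples features at shifted locations."""
    n, _, h, w = features.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow       # (N, 2, H, W)
    # normalize to [-1, 1] for grid_sample, which expects (N, H, W, 2) as (x, y)
    grid[:, 0] = 2.0 * grid[:, 0] / (w - 1) - 1.0
    grid[:, 1] = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(features, grid.permute(0, 2, 3, 1), align_corners=True)

past_feats = torch.randn(1, 128, 16, 32)
flow       = torch.randn(1, 2, 16, 32)                   # would come from the F2M branch
f2f_feats  = torch.randn(1, 128, 16, 32)                 # would come from the F2F branch
blend_w    = torch.sigmoid(torch.randn(1, 1, 16, 32))    # per-pixel blending weight

future_feats = blend_w * warp(past_feats, flow) + (1 - blend_w) * f2f_feats
```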
Semantic segmentation has a broad range of applications, but its real-world impact has been limited by the prohibitive annotation costs necessary to enable deployment. Segmentation methods that forgo supervision can sidestep these costs, but exhibit the inconvenient requirement of providing labelled examples from the target distribution in order to assign concept names to predictions. An alternative line of work in language-image pre-training has recently demonstrated the potential to produce models that can both assign names across a large vocabulary of concepts and enable zero-shot transfer for classification, but has not demonstrated commensurate segmentation abilities. In this work, we strive for a synthesis of these two approaches that combines their strengths. We leverage the retrieval abilities of one such language-image pre-trained model, CLIP, to dynamically curate training sets from unlabelled images for arbitrary collections of concept names, and leverage the robust correspondences offered by modern image representations to co-segment entities among the resulting collections. The synthetic segment collections are then employed to construct a segmentation model (without requiring pixel labels) whose knowledge of concepts is inherited from the scalable pre-training process of CLIP. We demonstrate that our approach, termed Retrieve and Co-segment (ReCo), performs favourably against unsupervised segmentation approaches while inheriting the convenience of nameable predictions and zero-shot transfer. We also demonstrate ReCo's ability to generate specialist segmenters for extremely rare objects.
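The retrieval step can be pictured with a short sketch built on the open-source CLIP package (a simplified assumption of how curation might look, not the ReCo codebase): unlabelled images are ranked by similarity to a text prompt for the concept name and the top matches are kept.

```python
import torch
import clip                      # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def retrieve(concept, image_paths, k=16):
    """Return the k unlabelled images most similar to a text prompt for `concept`."""
    with torch.no_grad():
        text = clip.tokenize([f"a photo of a {concept}"]).to(device)
        text_feat = model.encode_text(text)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

        scores = []
        for path in image_paths:
            image = preprocess(Image.open(path)).unsqueeze(0).to(device)
            img_feat = model.encode_image(image)
            img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
            scores.append((img_feat @ text_feat.T).item())

    ranked = sorted(zip(image_paths, scores), key=lambda p: p[1], reverse=True)
    return [path for path, _ in ranked[:k]]
```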
Recent advances in pixel-level tasks (e.g., segmentation) illustrate the benefit of long-range interactions between aggregated region-based representations that can enhance local features. However, such pixel-to-region associations and the resulting representation, which often take the form of attention, cannot model the underlying semantic structure of the scene (e.g., individual objects and, by extension, their interactions). In this work, we take a step toward addressing this limitation. Specifically, we propose an architecture where we learn to project image features into latent region representations and perform global reasoning across them, using a transformer, to produce contextualized and scene-consistent representations that are then fused with original pixel-level features. Our design enables the latent regions to represent semantically meaningful concepts, by ensuring that activated regions are spatially disjoint and unions of such regions correspond to connected object segments. The resulting semantic global reasoning (SGR) is end-to-end trainable and can be combined with any semantic segmentation framework and backbone. Combining SGR with DeepLabV3 results in a semantic segmentation performance that is competitive to the state-of-the-art, while resulting in more semantically interpretable and diverse region representations, which we show can effectively transfer to detection and instance segmentation. Further, we propose a new metric that allows us to measure the semantics of representations at both the object class and instance level.
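A compact sketch of the project-reason-fuse pattern described above (dimensions and module choices are illustrative assumptions, not the SGR implementation):

```python
import torch
import torch.nn as nn

class LatentRegionReasoning(nn.Module):
    def __init__(self, channels=256, num_regions=16, heads=8):
        super().__init__()
        self.assign = nn.Conv2d(channels, num_regions, kernel_size=1)   # pixel -> region scores
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=heads, batch_first=True)
        self.reason = nn.TransformerEncoder(layer, num_layers=2)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feats):                                 # feats: (N, C, H, W)
        n, c, h, w = feats.shape
        a = self.assign(feats).flatten(2).softmax(dim=-1)     # (N, K, H*W), soft assignment
        x = feats.flatten(2)                                  # (N, C, H*W)
        regions = torch.bmm(a, x.transpose(1, 2))             # (N, K, C) pooled region tokens
        regions = self.reason(regions)                        # global reasoning across regions
        back = torch.bmm(a.transpose(1, 2), regions)          # (N, H*W, C) broadcast to pixels
        back = back.transpose(1, 2).reshape(n, c, h, w)
        return self.fuse(torch.cat([feats, back], dim=1))     # contextualized pixel features

feats = torch.randn(2, 256, 32, 32)
out = LatentRegionReasoning()(feats)                          # (2, 256, 32, 32)
```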
While zero-shot learning (ZSL) has been studied extensively for 2D images, its application to 3D data remains recent and scarce, with only a few methods, all limited to classification. We present the first generative approach to ZSL and generalized ZSL (GZSL) on 3D data that can handle classification and, for the first time, semantic segmentation. We show that it reaches or outperforms the state of the art on ModelNet40 classification for both inductive ZSL and inductive GZSL. For semantic segmentation, we created three benchmarks for evaluating this new ZSL task, using S3DIS, ScanNet, and SemanticKITTI. Our experiments show that our method outperforms strong baselines, which we additionally propose for this task.
Semantic classes can be either things (objects with a well-defined shape, e.g. car, person) or stuff (amorphous background regions, e.g. grass, sky). While lots of classification and detection works focus on thing classes, less attention has been given to stuff classes. Nonetheless, stuff classes are important as they allow us to explain important aspects of an image, including (1) scene type; (2) which thing classes are likely to be present and their location (through contextual reasoning); (3) physical attributes, material types and geometric properties of the scene. To understand stuff and things in context we introduce COCO-Stuff, which augments all 164K images of the COCO 2017 dataset with pixel-wise annotations for 91 stuff classes. We introduce an efficient stuff annotation protocol based on superpixels, which leverages the original thing annotations. We quantify the speed versus quality trade-off of our protocol and explore the relation between annotation time and boundary complexity. Furthermore, we use COCO-Stuff to analyze: (a) the importance of stuff and thing classes in terms of their surface cover and how frequently they are mentioned in image captions; (b) the spatial relations between stuff and things, highlighting the rich contextual relations that make our dataset unique; (c) the performance of a modern semantic segmentation method on stuff and thing classes, and whether stuff is easier to segment than things.
Jitendra Malik once said, "Supervision is the opium of the AI researcher". Most deep learning techniques heavily rely on extreme amounts of human labels to work effectively. In today's world, the rate of data creation greatly surpasses the rate of data annotation. Full reliance on human annotations is just a temporary means to solve current closed problems in AI. In reality, only a tiny fraction of data is annotated. Annotation Efficient Learning (AEL) is a study of algorithms to train models effectively with fewer annotations. To thrive in AEL environments, we need deep learning techniques that rely less on manual annotations (e.g., image, bounding-box, and per-pixel labels), but learn useful information from unlabeled data. In this thesis, we explore five different techniques for handling AEL.
Retrieving accurate semantic information in challenging high dynamic range (HDR) and high-speed conditions remains an open challenge for image-based algorithms due to severe image degradation. Event cameras promise to address these challenges since they feature a much higher dynamic range and are resilient to motion blur. Nonetheless, semantic segmentation with event cameras is still in its infancy, largely due to the lack of high-quality labeled datasets. In this work, we introduce ESS (Event-based Semantic Segmentation), which tackles this problem by directly transferring the semantic segmentation task from existing labeled image datasets to unlabeled events. Compared with existing UDA methods, our approach aligns recurrent, motion-invariant event embeddings with image embeddings. For this reason, our method neither requires video data nor per-pixel alignment between images and events, and it does not need to hallucinate motion from still images. Additionally, we introduce DSEC-Semantic, the first large-scale event-based dataset with fine-grained labels. We show that, using image labels alone, ESS outperforms existing UDA approaches, and when combined with event labels, it even outperforms state-of-the-art supervised approaches on both DDD17 and DSEC-Semantic. Finally, ESS is general-purpose, which unlocks the vast amount of existing labeled image datasets and paves the way for new and exciting research directions in fields previously inaccessible to event cameras.
Transfer learning enables reusing knowledge from a source task to help learn a target task. A simple form of transfer learning is common in current state-of-the-art computer vision models, i.e., pre-training an image classification model on the ILSVRC dataset and then fine-tuning on any target task. However, previous systematic studies of transfer learning have been limited, and the circumstances in which it is expected to work are not fully understood. In this paper we conduct an extensive experimental exploration of transfer learning across vastly different image domains (consumer photos, autonomous driving, aerial imagery, underwater, indoor scenes, synthetic, close-ups) and task types (semantic segmentation, object detection, depth estimation, keypoint detection). Importantly, these are all complex, structured-output task types relevant to modern computer vision applications. In total we carry out over 2,000 transfer learning experiments, including many where the source and target come from different image domains, task types, or both. We systematically analyze these experiments to understand the impact of image domain, task type, and dataset size on transfer learning performance. Our study leads to several insights and concrete recommendations: (1) for most tasks there exists a source which significantly outperforms ILSVRC'12 pre-training; (2) the image domain is the most important factor for achieving positive transfer; (3) the source dataset should include the image domain of the target dataset for best results; (4) at the same time, we observe only small negative effects when the image domain of the source task is broader than that of the target; (5) transfer across task types can be beneficial, but its success is heavily dependent on both the source and target task types.
We propose a unified look at jointly learning multiple vision tasks and visual domains through universal representations, i.e., a single deep neural network. Learning multiple problems simultaneously involves minimizing a weighted sum of multiple loss functions with different magnitudes and characteristics, which results in an unbalanced state in which one loss dominates the optimization and leads to poor results compared to learning a separate model for each problem. To this end, we propose distilling the knowledge of multiple task/domain-specific networks into a single deep neural network through small-capacity adapters. We rigorously show that universal representations achieve state-of-the-art performance in learning multiple dense prediction problems on NYU-v2 and Cityscapes, multiple image classification problems from diverse domains in the Visual Decathlon dataset, and cross-domain few-shot learning in MetaDataset. Finally, we also conduct multiple analyses through ablation and qualitative studies.
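A minimal sketch of distillation through small-capacity adapters (a generic illustration under assumed shapes, not the paper's training recipe): the shared network's features pass through a per-task adapter and are regressed onto the corresponding task-specific teacher's features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Small-capacity bottleneck mapping shared features to one teacher's feature space."""
    def __init__(self, channels=256, bottleneck=64):
        super().__init__()
        self.down = nn.Conv2d(channels, bottleneck, kernel_size=1)
        self.up = nn.Conv2d(bottleneck, channels, kernel_size=1)

    def forward(self, x):
        return self.up(F.relu(self.down(x)))

shared_backbone = nn.Conv2d(3, 256, kernel_size=3, padding=1)      # stand-in universal network
adapters = nn.ModuleList([Adapter() for _ in range(3)])            # one per task/domain teacher

def distillation_loss(images, teacher_feats, task_id):
    """teacher_feats: features from the frozen task-specific network for `task_id`."""
    student = adapters[task_id](shared_backbone(images))
    return F.mse_loss(student, teacher_feats)

images = torch.randn(2, 3, 64, 64)
teacher = torch.randn(2, 256, 64, 64)        # would come from a frozen single-task model
loss = distillation_loss(images, teacher, task_id=0)
```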
Image segmentation for video analysis plays an essential role in different research fields such as smart cities, healthcare, computer vision and geoscience, and remote sensing applications. In this regard, a significant effort has recently been devoted to developing novel segmentation strategies; one of the latest outstanding achievements is panoptic segmentation. The latter has arisen from the fusion of semantic and instance segmentation. Explicitly, panoptic segmentation is currently under study to help gain a more nuanced knowledge of image scenes for video surveillance, crowd counting, self-driving, medical image analysis, and, in general, a deeper understanding of scenes. To that end, we present in this paper the first comprehensive review of existing panoptic segmentation methods, to the best of the authors' knowledge. Accordingly, a well-defined taxonomy of existing panoptic techniques is provided based on the nature of the adopted algorithms, application scenarios, and primary objectives. Moreover, the use of panoptic segmentation for annotating new datasets with pseudo-labels is discussed. Moving on, ablation studies are carried out to understand panoptic methods from different perspectives. In addition, evaluation metrics suitable for panoptic segmentation are discussed, and a comparison of the performance of existing solutions is provided to inform the state of the art and identify their limitations and strengths. Lastly, the current challenges faced by the subject technology and the future trends expected to attract considerable interest in the near future, which can serve as a starting point for upcoming research studies, are elaborated. The papers providing code are available at: https://github.com/elharroussomar/awesome-panoptic-segmentation
In this paper, we propose a unified panoptic segmentation network (UPSNet) for tackling the newly proposed panoptic segmentation task. On top of a single backbone residual network, we first design a deformable convolution based semantic segmentation head and a Mask R-CNN style instance segmentation head which solve these two subtasks simultaneously. More importantly, we introduce a parameter-free panoptic head which solves the panoptic segmentation via pixel-wise classification. It first leverages the logits from the previous two heads and then innovatively expands the representation for enabling prediction of an extra unknown class which helps better resolve the conflicts between semantic and instance segmentation. Additionally, it handles the challenge caused by the varying number of instances and permits back propagation to the bottom modules in an end-to-end manner. Extensive experimental results on Cityscapes, COCO and our internal dataset demonstrate that our UPSNet achieves state-of-the-art performance with much faster inference. Code has been made available at: https://github.com/uber-research/UPSNet.
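A highly simplified sketch of combining semantic and instance logits into per-pixel panoptic logits (toy shapes and the crude "unknown" channel are illustrative; the UPSNet repository linked above contains the actual head):

```python
import torch

def panoptic_logits(sem_logits, inst_masks, inst_classes, num_stuff):
    """sem_logits: (C, H, W); inst_masks: (M, H, W) mask logits;
    inst_classes: (M,) thing-class index into sem_logits for each instance."""
    stuff = sem_logits[:num_stuff]                                  # one channel per stuff class
    # each instance channel combines its mask logit with the matching thing-class logit
    things = torch.stack([inst_masks[i] + sem_logits[inst_classes[i]]
                          for i in range(inst_masks.shape[0])])     # (M, H, W)
    unknown = torch.zeros(1, *sem_logits.shape[1:])                 # crude stand-in channel
    return torch.cat([stuff, things, unknown], dim=0)               # (num_stuff + M + 1, H, W)

sem = torch.randn(8, 16, 16)            # 5 stuff + 3 thing classes
masks = torch.randn(2, 16, 16)          # two detected instances
classes = torch.tensor([5, 7])          # their thing-class indices
panoptic = panoptic_logits(sem, masks, classes, num_stuff=5).argmax(dim=0)
```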
Existing research addresses scene graph generation (SGG), a critical technology for scene understanding in images, from a detection perspective, i.e., objects are detected using bounding boxes and their pairwise relationships are then predicted. We argue that such a paradigm causes several problems that impede the progress of the field. For instance, bounding-box-based labels in current datasets usually contain redundant classes such as hair, and leave out background information that is crucial for contextual understanding. In this work, we introduce panoptic scene graph generation (PSG), a new problem task that requires the model to generate a more comprehensive scene graph representation based on panoptic segmentations rather than rigid bounding boxes. A high-quality PSG dataset, which contains 49k well-annotated overlapping images from COCO and Visual Genome, is created for the community to track its progress. For benchmarking, we build four two-stage baselines, which are modified from classic methods in SGG, and two one-stage baselines called PSGTR and PSGFormer, which are based on the efficient Transformer-based detector DETR. While PSGTR uses a set of queries to directly learn triplets, PSGFormer separately models objects and relations in the form of queries from two Transformer decoders, followed by a prompting-like relation-object matching mechanism. Finally, we share insights on open challenges and future directions.
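As a rough illustration of the output structure a PSG model produces (the field names are assumptions, not the dataset's schema): a panoptic scene graph pairs pixel-accurate segments with relation triplets that index into them.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Segment:
    category: str                 # panoptic class, e.g. "person" or "grass"
    mask: np.ndarray              # (H, W) boolean mask instead of a bounding box

@dataclass
class Relation:
    subject: int                  # index into PanopticSceneGraph.segments
    predicate: str                # e.g. "standing on"
    object: int

@dataclass
class PanopticSceneGraph:
    segments: List[Segment] = field(default_factory=list)
    relations: List[Relation] = field(default_factory=list)

h, w = 4, 6
psg = PanopticSceneGraph(
    segments=[Segment("person", np.zeros((h, w), bool)),
              Segment("grass", np.ones((h, w), bool))],
    relations=[Relation(subject=0, predicate="standing on", object=1)],
)
```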
We present X-Decoder, a generalized decoding model that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder takes as input two types of queries: (i) generic non-semantic queries and (ii) semantic queries induced from text inputs, to decode different pixel-level and token-level outputs in the same semantic space. With such a novel design, X-Decoder is the first work that provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks. Further, our design enables seamless interactions across tasks at different granularities and brings mutual benefits by learning a common and rich pixel-level visual-semantic understanding space, without any pseudo-labeling. After pretraining on a mixed set of a limited amount of segmentation data and millions of image-text pairs, X-Decoder exhibits strong transferability to a wide range of downstream tasks in both zero-shot and finetuning settings. Notably, it achieves (1) state-of-the-art results on open-vocabulary segmentation and referring segmentation on eight datasets; (2) better or competitive finetuned performance to other generalist and specialist models on segmentation and VL tasks; and (3) flexibility for efficient finetuning and novel task composition (e.g., referring captioning and image editing). Code, demo, video, and visualization are available at https://x-decoder-vl.github.io.