The goal of this work is to segment and name regions of images without access to pixel-level labels during training. To tackle this task, we construct segmenters by distilling the complementary strengths of two foundation models. The first, CLIP (Radford et al., 2021), exhibits the ability to assign names to image content but lacks an accessible representation of object structure. The second, DINO (Caron et al., 2021), captures the spatial extent of objects but has no knowledge of object names. Our approach, termed NamedMask, begins by using CLIP to construct category-specific archives of images. These images are pseudo-labelled with a category-agnostic object detector built on DINO, and the masks are then refined by category-specific segmenters using the CLIP archive labels. Thanks to the high quality of the refined masks, we show that a standard segmentation architecture trained on suitably prepared data from these archives exhibits impressive semantic segmentation abilities for both single-object and multi-object images. As a result, our proposed NamedMask performs favourably against a range of prior work on five benchmarks, including the VOC2012, COCO, and large-scale ImageNet-S datasets.
Semantic segmentation has a broad range of applications, but its real-world impact has been limited by the prohibitive annotation costs required for deployment. Segmentation methods that forgo supervision can side-step these costs, but exhibit the inconvenient requirement of needing labelled examples from the target distribution in order to assign concept names to predictions. An alternative line of work in language-image pre-training has recently demonstrated the potential to produce models that can both assign names to a large vocabulary of concepts and enable zero-shot transfer for classification, but has not demonstrated commensurate segmentation abilities. In this work, we strive for a synthesis of these two approaches that combines their strengths. We leverage the retrieval abilities of one such language-image pre-trained model, CLIP, to dynamically curate training sets from unlabelled images for arbitrary collections of concept names, and leverage the robust correspondences offered by modern image representations to co-segment entities among the resulting collections. The synthetic segment collections are then used to construct a segmentation model (without requiring pixel labels) whose knowledge of concepts is inherited from the scalable pre-training process of CLIP. We demonstrate that our approach, termed Retrieve and Co-segment (ReCo), performs favourably against unsupervised segmentation approaches while inheriting the convenience of nameable predictions and zero-shot transfer. We also demonstrate ReCo's ability to generate specialist segmenters for extremely rare objects.
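As an illustration of the retrieval step described above, the following is a minimal sketch, assuming the openai/CLIP package (`clip.load`, `encode_text`, `encode_image`); the prompt template, the archive size `k`, and the helper name `curate_archive` are illustrative choices, not details taken from the paper.

```python
# Minimal sketch: rank an unlabelled image pool by CLIP image-text similarity
# for one concept name and keep the top-k images as a curated archive.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def curate_archive(concept: str, image_paths: list[str], k: int = 50) -> list[str]:
    """Return the k images whose CLIP embedding best matches the concept name."""
    with torch.no_grad():
        text = clip.tokenize([f"a photo of a {concept}"]).to(device)
        text_emb = model.encode_text(text)
        text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

        sims = []
        for path in image_paths:
            image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
            img_emb = model.encode_image(image)
            img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
            sims.append((img_emb @ text_emb.T).item())

    ranked = sorted(zip(sims, image_paths), reverse=True)
    return [path for _, path in ranked[:k]]
```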
In this paper, we show that recent advances in self-supervised feature learning make it possible to perform unsupervised object discovery and semantic segmentation with a performance that matches supervised semantic segmentation from ten years ago. We propose a method based on unsupervised saliency masks and self-supervised feature clustering to kickstart object discovery, and then train a semantic segmentation network on pseudo-labels to bootstrap the system on images containing multiple objects. We present results on PASCAL VOC that go far beyond the current state of the art (47.3 mIoU), and we report, for the first time, results on COCO over the full set of 81 categories: our method discovers 34 categories with more than 20% IoU, while obtaining an average IoU of 19.6 over all 81 categories.
The task of unsupervised semantic segmentation aims to cluster pixels into semantically meaningful groups. Specifically, pixels assigned to the same cluster should share high-level semantic properties, such as their object or part category. This paper presents MaskDistill: a novel framework for unsupervised semantic segmentation based on three key ideas. First, we advocate a data-driven strategy to generate object masks that serve as a pixel-grouping prior for semantic segmentation. This approach avoids handcrafted priors, which are often designed for specific scene compositions and limit the applicability of competing frameworks. Second, MaskDistill clusters the object masks to obtain pseudo ground truth for training an initial object segmentation model. Third, we leverage this model to filter out low-quality object masks. This strategy mitigates the noise in our pixel-grouping prior and yields a clean collection of masks that we use to train a final segmentation model. By combining these components, we considerably outperform previous works for unsupervised semantic segmentation on PASCAL (+11% mIoU) and COCO (+4% mask AP50). Interestingly, in contrast to existing approaches, our framework does not latch onto low-level image cues and is not limited to object-centric datasets. Code and models will be made available.
Recently, zero-shot image classification via vision-language pre-training has achieved incredible results: a model can classify arbitrary categories without seeing any additional annotated images of those categories. However, it remains unclear how to make zero-shot recognition work well on broader vision problems such as object detection and semantic segmentation. In this paper, we target zero-shot semantic segmentation by building it on top of an off-the-shelf pre-trained vision-language model, namely CLIP. This is difficult because semantic segmentation and the CLIP model operate at different visual granularities: semantic segmentation processes pixels, while CLIP operates on whole images. To remedy this discrepancy in processing granularity, we refrain from using the prevalent one-stage FCN-based framework and instead advocate a two-stage semantic segmentation framework, in which the first stage extracts mask proposals and the second stage leverages the image-based CLIP model to perform zero-shot classification on the masked image crops generated in the first stage. Our experimental results show that this simple framework surpasses the previous state of the art by a large margin: +29.5 hIoU on the Pascal VOC 2012 dataset and +8.9 hIoU on the COCO Stuff dataset. Given its simplicity and strong performance, we hope this framework can serve as a baseline to facilitate future research.
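To make the second stage concrete, here is a minimal sketch of zero-shot classification of masked image crops with CLIP, assuming the openai/CLIP package; the mask proposals are assumed to come from an external class-agnostic proposal generator, and the prompt template and the blank-background handling are illustrative simplifications rather than the paper's exact recipe.

```python
# Minimal sketch: assign a class name to each binary mask proposal by scoring
# the masked, tightly cropped region with CLIP against class-name prompts.
import torch
import clip
import numpy as np
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def classify_masked_regions(image: Image.Image, masks: list[np.ndarray],
                            class_names: list[str]) -> list[str]:
    """masks: list of non-empty HxW boolean arrays from a proposal generator."""
    with torch.no_grad():
        text = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
        text_emb = model.encode_text(text)
        text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

        labels = []
        rgb = np.array(image.convert("RGB"))
        for mask in masks:
            crop = rgb * mask[..., None]                       # blank out pixels outside the mask
            ys, xs = np.where(mask)
            crop = crop[ys.min():ys.max() + 1, xs.min():xs.max() + 1]  # tight crop around the mask
            inp = preprocess(Image.fromarray(crop.astype(np.uint8))).unsqueeze(0).to(device)
            img_emb = model.encode_image(inp)
            img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
            labels.append(class_names[(img_emb @ text_emb.T).argmax().item()])
    return labels
```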
Zero-shot semantic segmentation (ZS3) aims to segment novel categories that have not been seen during training. Existing works formulate ZS3 as a pixel-level zero-shot classification problem and transfer semantic knowledge from seen classes to unseen ones with the help of language models pre-trained on text only. While simple, the pixel-level ZS3 formulation shows limited capability to integrate vision-language models that are pre-trained on image-text pairs and currently demonstrate great potential for vision tasks. Inspired by the observation that humans often perform segment-level semantic labelling, we propose to decouple ZS3 into two sub-tasks: 1) a class-agnostic grouping task that groups pixels into segments; 2) a zero-shot classification task on the segments. The former sub-task does not involve category information and can be directly transferred to group pixels of unseen classes. The latter sub-task is performed at segment level and provides a natural way to leverage large-scale vision-language models pre-trained on image-text pairs (e.g., CLIP) for ZS3. Based on this decoupled formulation, we propose a simple and effective zero-shot semantic segmentation model, called ZegFormer, which outperforms previous methods by large margins, e.g., 35 points on PASCAL VOC and 3 points on COCO-Stuff in terms of mIoU for unseen classes. Code will be released at https://github.com/dingjiansw101/zegformer.
Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. In end-to-end deep learning systems, grouping of image regions usually happens implicitly through top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring the grouping mechanism back into deep networks, allowing semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid-structure representation and learns to group image regions into progressively larger, arbitrarily-shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group semantic regions together and successfully transfers to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 dataset and 22.4% mIoU on the PASCAL Context dataset, and performs competitively against state-of-the-art transfer-learning methods that require greater levels of supervision. We open-source our code at https://github.com/nvlabs/groupvit.
We present LSeg, a novel model for language-driven semantic image segmentation. LSeg uses a text encoder to compute embeddings of descriptive input labels (e.g., "grass" or "building") together with a transformer-based image encoder that computes dense per-pixel embeddings of the input image. The image encoder is trained with a contrastive objective to align pixel embeddings with the text embedding of the corresponding semantic class. The text embeddings provide a flexible label representation in which semantically similar labels map to similar regions in the embedding space (e.g., "cat" and "furry"). This allows LSeg to generalize to previously unseen categories at test time, without retraining or even requiring a single additional training sample. We demonstrate that our approach achieves highly competitive zero-shot performance compared to existing zero- and few-shot semantic segmentation methods, and even matches the accuracy of traditional segmentation algorithms when a fixed label set is provided. Code and demo are available at https://github.com/isl-org/lang-seg.
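A minimal sketch of the per-pixel scoring that such pixel-text alignment enables at test time is shown below; the dense image encoder and the text encoder are left abstract (random tensors stand in for their outputs), and the function name `label_pixels` is ours, not LSeg's API.

```python
# Minimal sketch: score every pixel embedding against the text embeddings of
# the label set and assign each pixel the best-matching label.
import torch
import torch.nn.functional as F

def label_pixels(pixel_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    """
    pixel_emb: (B, D, H, W) dense per-pixel embeddings from an image encoder.
    text_emb:  (K, D) embeddings of the K input label names from a text encoder.
    Returns:   (B, H, W) index of the best-matching label per pixel.
    """
    pixel_emb = F.normalize(pixel_emb, dim=1)
    text_emb = F.normalize(text_emb, dim=1)
    # (B, K, H, W) cosine similarity between every pixel and every label
    logits = torch.einsum("bdhw,kd->bkhw", pixel_emb, text_emb)
    return logits.argmax(dim=1)

# Toy usage with random tensors standing in for real encoder outputs.
seg = label_pixels(torch.randn(1, 512, 32, 32), torch.randn(5, 512))
print(seg.shape)  # torch.Size([1, 32, 32])
```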
Weakly supervised semantic segmentation (WSSS) with image-level labels is a challenging task in computer vision. Mainstream approaches follow a multi-stage framework and suffer from high training costs. In this paper, we explore the potential of Contrastive Language-Image Pre-training models (CLIP) to localize different categories with only image-level labels and without any further training. To efficiently generate high-quality segmentation masks from CLIP, we propose a novel framework called CLIP-ES for WSSS. Our framework improves all three stages of WSSS with special designs for CLIP: 1) We introduce the softmax function into GradCAM and exploit the zero-shot ability of CLIP to suppress the confusion caused by non-target classes and backgrounds. Meanwhile, to take full advantage of CLIP, we re-explore text inputs under the WSSS setting and customize two text-driven strategies: sharpness-based prompt selection and synonym fusion. 2) To simplify the stage of CAM refinement, we propose a real-time class-aware attention-based affinity (CAA) module based on the inherent multi-head self-attention (MHSA) in CLIP-ViTs. 3) When training the final segmentation model with the masks generated by CLIP, we introduce a confidence-guided loss (CGL) to mitigate noise and focus on confident regions. Our proposed framework dramatically reduces the cost of training for WSSS and shows the capability of localizing objects in CLIP. Our CLIP-ES achieves SOTA performance on Pascal VOC 2012 and MS COCO 2014 while taking only 10% of the time required by previous methods for pseudo mask generation. Code is available at https://github.com/linyq2117/CLIP-ES.
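The first design above (putting a softmax into GradCAM) can be sketched in a few lines: instead of back-propagating the raw logit of the target class, one back-propagates its softmax probability over all candidate classes, so that evidence shared with non-target classes is suppressed. This is a hedged, generic sketch; the feature extractor and classifier head are left abstract, and the toy usage does not reproduce CLIP-ViT internals.

```python
# Minimal sketch of GradCAM driven by a softmax probability rather than a raw logit.
import torch

def softmax_gradcam(features: torch.Tensor, logits: torch.Tensor, target: int) -> torch.Tensor:
    """
    features: (C, H, W) activations of the chosen layer, connected to the graph
              that produced `logits` (gradients must flow into them).
    logits:   (K,) class scores computed from those features.
    target:   index of the class to localize.
    Returns:  (H, W) non-negative, max-normalized class activation map.
    """
    prob = torch.softmax(logits, dim=0)[target]        # softmax instead of the raw logit
    grads, = torch.autograd.grad(prob, features, retain_graph=True)
    weights = grads.mean(dim=(1, 2))                    # global-average-pool the gradients
    cam = torch.relu((weights[:, None, None] * features).sum(dim=0))
    return cam / (cam.max() + 1e-8)

# Toy usage: features -> global-average-pool -> linear head (stand-in classifier).
feat = torch.randn(64, 14, 14, requires_grad=True)
head = torch.nn.Linear(64, 10)
cls_logits = head(feat.mean(dim=(1, 2)))
cam = softmax_gradcam(feat, cls_logits, target=3)
print(cam.shape)  # torch.Size([14, 14])
```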
Unsupervised semantic segmentation requires assigning a label to every pixel without any human annotation. Despite recent progress in self-supervised representation learning for individual images, unsupervised semantic segmentation with pixel-level representations remains a challenging and underexplored task. In this work, we propose a self-supervised pixel representation learning method for semantic segmentation that uses visual concepts, i.e., groups of pixels with semantic meaning such as parts, objects, and scenes. To guide the self-supervised learning, we leverage three types of relationships between pixels and concepts: relationships between pixels and local concepts, between local and global concepts, and the co-occurrence of concepts. We evaluate the learned pixel embeddings and visual concepts on three datasets: PASCAL VOC 2012, COCO 2017, and DAVIS 2017. Our results show that the proposed method yields consistent and substantial improvements over recent unsupervised semantic segmentation approaches, and also demonstrate that visual concepts can reveal insights into image datasets.
We tackle open-world semantic segmentation, which aims at learning to segment arbitrary visual concepts in images by using only image-text pairs without dense annotations. Existing open-world segmentation methods have shown impressive advances by employing contrastive learning (CL) to learn diverse visual concepts and adapting the learned image-level understanding to the segmentation task. However, these CL-based methods suffer from a discrepancy: they only consider image-text-level alignment at training time, while the segmentation task requires region-text-level alignment at test time. In this paper, we propose a novel Text-grounded Contrastive Learning (TCL) framework to directly align a text and the region described by that text, addressing the train-test discrepancy. Our method generates a segmentation mask associated with a given text, extracts a grounded image embedding from the masked region, and aligns it with the text embedding via TCL. The framework addresses the discrepancy by letting the model learn region-text-level alignment instead of image-text-level alignment, and encourages the model to directly improve the quality of the generated segmentation masks. In addition, for a rigorous and fair comparison, we present a unified evaluation protocol with 8 widely used semantic segmentation datasets. TCL achieves state-of-the-art zero-shot segmentation performance by large margins on all datasets. Code is available at https://github.com/kakaobrain/tcl.
Semantic segmentation is a key computer vision task that has been actively researched for decades. In recent years, supervised methods have reached unprecedented accuracy, but each new class category requires many pixel-level annotations, which is extremely time-consuming and expensive. In addition, the ability of current semantic segmentation networks to handle a large number of categories is limited. This means that images containing rare class categories are unlikely to be well segmented by current methods. In this paper, we propose a novel approach for creating semantic segmentation masks for every object without training a segmentation network or seeing any segmentation masks. Our method takes as input the image-level labels of the class categories present in the image; these can be obtained automatically or manually. We utilize a vision-language embedding model (specifically CLIP) to create a rough segmentation map for each class using model-interpretability methods, and refine the maps with a test-time augmentation technique. The output of this stage provides pixel-level pseudo-labels in place of the manual pixel-level labels required by supervised methods. Given the pseudo-labels, we utilize single-image segmentation techniques to obtain high-quality output segmentation masks. Our method is shown, both quantitatively and qualitatively, to outperform methods that use a similar amount of supervision. Our results are particularly remarkable for images containing rare categories.
Contrastive Language-Image Pre-training (CLIP) has made a remarkable breakthrough in open-vocabulary zero-shot image recognition. Many recent studies leverage pre-trained CLIP models for image-level classification and manipulation. In this paper, we further explore the potential of CLIP for pixel-level dense prediction, specifically semantic segmentation. Without annotations or fine-tuning, our method, DenseCLIP, yields reasonable segmentation results on open concepts across various datasets. By adding pseudo-labelling and self-training, DenseCLIP+ surpasses state-of-the-art transductive zero-shot semantic segmentation methods by large margins, e.g., the mIoU of unseen classes on PASCAL VOC / PASCAL Context / COCO Stuff improves from 35.6 / 20.7 / 30.3 to 86.1 / 66.7 / 54.7. We also test the robustness of DenseCLIP under input corruption and evaluate its capability in discriminating fine-grained objects and novel concepts. Our findings suggest that DenseCLIP can serve as a new, reliable source of supervision for dense prediction tasks towards annotation-free segmentation.
Recent advances in self-supervised visual representation learning have paved the way for unsupervised methods tackling tasks such as object discovery and instance segmentation. However, discovering objects in an image with no supervision is a very hard task; what are the desired objects, when to separate them into parts, how many are there, and of what classes? The answers to these questions depend on the tasks and datasets of evaluation. In this work, we take a different approach and propose to look for the background instead. This way, the salient objects emerge as a by-product without any strong assumption on what an object should be. We propose FOUND, a simple model made of a single $conv1\times1$ initialized with coarse background masks extracted from self-supervised patch-based representations. After fast training and refining these seed masks, the model reaches state-of-the-art results on unsupervised saliency detection and object discovery benchmarks. Moreover, we show that our approach yields good results in the unsupervised semantic segmentation retrieval task. The code to reproduce our results is available at https://github.com/valeoai/FOUND.
Unsupervised semantic segmentation aims to obtain high-level semantic representations from low-level visual features without manual annotation. Most existing methods are bottom-up approaches that try to group pixels into regions based on their visual cues or certain predefined rules. As a result, these bottom-up approaches struggle to produce fine-grained semantic segmentation for complicated scenes containing multiple objects, some of which share a similar visual appearance. In contrast, we propose the first top-down unsupervised semantic segmentation framework for fine-grained segmentation in extremely complicated scenarios. Specifically, we first obtain rich high-level structured semantic concept information from large-scale vision data in a self-supervised learning manner, and use such information as a prior to discover the potential semantic categories present in the target dataset. Second, the discovered high-level semantic categories are mapped to low-level pixel features by computing class activation maps (CAMs) with respect to certain discovered semantic representations. Finally, the obtained CAMs serve as pseudo-labels to train the segmentation module and produce the final semantic segmentation. Experimental results on multiple semantic segmentation benchmarks show that our top-down unsupervised segmentation is robust to both object-centric and scene-centric datasets across different levels of semantic granularity, and outperforms all current state-of-the-art bottom-up methods. Our code is available at https://github.com/damo-cv/transfgugu.
We introduce a new setting of Novel Class Discovery in Semantic Segmentation (NCDSS), which aims at segmenting unlabeled images containing new classes, given prior knowledge from a labeled set of disjoint classes. In contrast to existing approaches that look at novel class discovery in image classification, we focus on the more challenging semantic segmentation. In NCDSS, we need to distinguish objects from background and handle the presence of multiple classes within an image, which increases the difficulty of using the unlabeled data. To tackle this new setting, we leverage the labeled base data and a saliency model to coarsely cluster novel classes for model training in our basic framework. In addition, we propose the Entropy-based Uncertainty Modeling and Self-training (EUMS) framework to overcome noisy pseudo-labels, further improving model performance on the novel classes. Our EUMS uses an entropy-ranking technique and dynamic reassignment to distill clean labels, thereby making full use of the noisy data via self-supervised learning. We build the NCDSS benchmark on the PASCAL-5$^i$ dataset. Extensive experiments demonstrate the feasibility of the basic framework (achieving an average mIoU of 49.81%) and the effectiveness of the EUMS framework (outperforming the basic framework by 9.28% mIoU).
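The entropy-ranking idea can be illustrated with a small sketch that scores each image's pseudo-label by the mean pixel-wise entropy of the predicted class distribution and splits the ranked list into a clean and a noisy part; the split ratio and the function name are illustrative choices, not the paper's exact procedure.

```python
# Minimal sketch: rank images by mean pixel-wise prediction entropy and split
# them into a low-entropy ("clean") and a high-entropy ("noisy") subset.
import torch

def entropy_split(prob_maps: list[torch.Tensor], clean_ratio: float = 0.5):
    """
    prob_maps: list of (K, H, W) per-pixel class probability maps, one per image.
    Returns:   (clean_indices, noisy_indices) ordered by increasing mean entropy.
    """
    entropies = []
    for p in prob_maps:
        ent = -(p * torch.log(p.clamp_min(1e-8))).sum(dim=0)  # (H, W) pixel entropy
        entropies.append(ent.mean().item())
    order = sorted(range(len(prob_maps)), key=lambda i: entropies[i])
    cut = int(len(order) * clean_ratio)
    return order[:cut], order[cut:]
```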
Unsupervised object discovery aims to localize objects in images, while removing the dependence on annotations required by most deep learning-based methods. To address this problem, we propose a fully unsupervised, bottom-up approach, for multiple objects discovery. The proposed approach is a two-stage framework. First, instances of object parts are segmented by using the intra-image similarity between self-supervised local features. The second step merges and filters the object parts to form complete object instances. The latter is performed by two CNN models that capture semantic information on objects from the entire dataset. We demonstrate that the pseudo-labels generated by our method provide a better precision-recall trade-off than existing single and multiple objects discovery methods. In particular, we provide state-of-the-art results for both unsupervised class-agnostic object detection and unsupervised image segmentation.
Recently, CLIP has been applied to pixel-level zero-shot learning tasks via a two-stage scheme. The general idea is to first generate class-agnostic region proposals and then feed the cropped proposal regions to CLIP to utilize its image-level zero-shot classification capability. While effective, such a scheme requires two image encoders, one for proposal generation and one for CLIP, leading to a complicated pipeline and high computational cost. In this work, we pursue a simpler and more efficient one-stage solution that directly extends CLIP's zero-shot prediction capability from image to pixel level. Our investigation starts with a straightforward extension as our baseline that generates semantic masks by comparing the similarity between text and patch embeddings extracted from CLIP. However, such a paradigm could heavily overfit the seen classes and fail to generalize to unseen classes. To handle this issue, we propose three simple-but-effective designs and figure out that they can significantly retain the inherent zero-shot capacity of CLIP and improve pixel-level generalization ability. Incorporating those modifications leads to an efficient zero-shot semantic segmentation system called ZegCLIP. Through extensive experiments on three public benchmarks, ZegCLIP demonstrates superior performance, outperforming the state-of-the-art methods by a large margin under both "inductive" and "transductive" zero-shot settings. In addition, compared with the two-stage method, our one-stage ZegCLIP is about 5 times faster during inference. We release the code at https://github.com/ZiqinZhou66/ZegCLIP.git.
Jitendra Malik once said, "Supervision is the opium of the AI researcher". Most deep learning techniques heavily rely on extreme amounts of human labels to work effectively. In today's world, the rate of data creation greatly surpasses the rate of data annotation. Full reliance on human annotations is just a temporary means to solve current closed problems in AI. In reality, only a tiny fraction of data is annotated. Annotation Efficient Learning (AEL) is a study of algorithms to train models effectively with fewer annotations. To thrive in AEL environments, we need deep learning techniques that rely less on manual annotations (e.g., image, bounding-box, and per-pixel labels), but learn useful information from unlabeled data. In this thesis, we explore five different techniques for handling AEL.
This paper presents the first attempt to learn semantic boundary detection using image-level class labels as supervision. Our method starts by estimating coarse areas of object classes through attentions drawn by an image classification network. Since boundaries will locate somewhere between such areas of different classes, our task is formulated as a multiple instance learning (MIL) problem, where pixels on a line segment connecting areas of two different classes are regarded as a bag of boundary candidates. Moreover, we design a new neural network architecture that can learn to estimate semantic boundaries reliably even with uncertain supervision given by the MIL strategy. Our network is used to generate pseudo semantic boundary labels of training images, which are in turn used to train fully supervised models. The final model trained with our pseudo labels achieves an outstanding performance on the SBD dataset, where it is as competitive as some of previous arts trained with stronger supervision.
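As a toy illustration of the bag construction in the MIL formulation above, the sketch below samples pixels along the straight segment connecting a point in one class region to a point in another; these sampled pixels form one bag of boundary candidates, since the true boundary must lie somewhere between the two regions. The sampling density and the function name are illustrative.

```python
# Minimal sketch: collect pixel coordinates along the segment between two points
# belonging to areas of different classes, forming one bag of boundary candidates.
import numpy as np

def boundary_candidate_bag(p0: tuple[int, int], p1: tuple[int, int], n: int = 32):
    """Return n integer (row, col) pixel coordinates along the segment p0 -> p1."""
    t = np.linspace(0.0, 1.0, n)
    ys = np.round(p0[0] + t * (p1[0] - p0[0])).astype(int)
    xs = np.round(p0[1] + t * (p1[1] - p0[1])).astype(int)
    return list(zip(ys.tolist(), xs.tolist()))

bag = boundary_candidate_bag((10, 5), (40, 60))
print(len(bag), bag[:3])
```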