Image segmentation is usually addressed by training a model for a fixed set of object classes. Incorporating additional classes or more complex queries later is expensive, as it requires re-training the model on a dataset that encompasses these expressions. Here we propose a system that can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text or an image. This approach enables us to create a unified model (trained once) for three common segmentation tasks, which come with distinct challenges: referring expression segmentation, zero-shot segmentation, and one-shot segmentation. We build upon the CLIP model as a backbone, which we extend with a transformer-based decoder that enables dense prediction. After training on an extended version of the PhraseCut dataset, our system generates a binary segmentation map for an image based on a free-text prompt or on an additional image expressing the query. We analyze different variants of the image-based prompts in detail. This novel hybrid input allows for dynamic adaptation not only to the three segmentation tasks mentioned above, but to any binary segmentation task where a text or image query can be formulated. Finally, we find our system to adapt well to generalized queries involving affordances or properties. Source code: https://eckerlab.org/code/clipseg
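To make the hybrid prompting concrete, here is a minimal PyTorch sketch of the decoding idea: a frozen CLIP encodes either a text or an image prompt into a single embedding, and a small transformer decoder over the visual tokens is modulated by it (FiLM-style conditioning; module names, sizes, and the binary head are illustrative assumptions, not the authors' code).

```python
import torch
import torch.nn as nn

class PromptConditionedDecoder(nn.Module):
    def __init__(self, dim=512, n_layers=2):
        super().__init__()
        self.film = nn.Linear(dim, 2 * dim)   # per-channel scale and shift from the prompt
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(dim, 1)         # one binary mask logit per visual token

    def forward(self, visual_tokens, prompt_embedding):
        # visual_tokens: (B, N, dim) activations from the CLIP vision backbone
        # prompt_embedding: (B, dim) from CLIP's text OR image encoder
        scale, shift = self.film(prompt_embedding).chunk(2, dim=-1)
        x = visual_tokens * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        x = self.decoder(x)
        return self.head(x).squeeze(-1)       # (B, N); reshape to the patch grid for a map
```

Because the prompt is reduced to one CLIP-space vector, text and image queries become interchangeable at test time.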
We introduce Patch Aligned Contrastive Learning (PACL), a modified compatibility function for CLIP's contrastive loss, intending to train an alignment between the patch tokens of the vision encoder and the CLS token of the text encoder. With such an alignment, a model can identify regions of an image corresponding to a given text input, and therefore transfer seamlessly to the task of open vocabulary semantic segmentation without requiring any segmentation annotations during training. Using pre-trained CLIP encoders with PACL, we are able to set the state-of-the-art on the task of open vocabulary zero-shot segmentation on 4 different segmentation benchmarks: Pascal VOC, Pascal Context, COCO Stuff and ADE20K. Furthermore, we show that PACL is also applicable to image-level predictions and when used with a CLIP backbone, provides a general improvement in zero-shot classification accuracy compared to CLIP, across a suite of 12 image classification datasets.
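As a rough illustration of the modified compatibility function (shapes, normalization, and the softmax weighting are assumptions inferred from the abstract, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def pacl_compatibility(patch_tokens, text_cls):
    # patch_tokens: (B, N, D) projected vision patch embeddings
    # text_cls:     (B, D) projected text CLS embeddings
    p = F.normalize(patch_tokens, dim=-1)
    t = F.normalize(text_cls, dim=-1)
    sim = torch.einsum('bnd,bd->bn', p, t)           # patch-to-text alignment scores
    weights = sim.softmax(dim=-1)                    # where in the image the text "looks"
    pooled = torch.einsum('bn,bnd->bd', weights, p)  # alignment-weighted visual pooling
    return (pooled * t).sum(dim=-1)                  # scalar compatibility for the contrastive loss
```

At segmentation time, reshaping `sim` onto the patch grid directly yields a heatmap per class name, which is why no mask annotations are needed during training.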
Recently, zero-shot image classification via vision-language pre-training has demonstrated incredible achievements: a model can classify arbitrary categories without seeing additional annotated images of those categories. However, it is still unclear how to make zero-shot recognition work well on broader vision problems, such as object detection and semantic segmentation. In this paper, we target zero-shot semantic segmentation by building it on an off-the-shelf pre-trained vision-language model, i.e., CLIP. This is difficult because semantic segmentation and the CLIP model operate at different visual granularities: semantic segmentation processes pixels, while CLIP operates on whole images. To remedy the discrepancy in processing granularity, we reject the prevalent one-stage FCN-based framework and advocate a two-stage semantic segmentation framework, where the first stage extracts generalizable mask proposals and the second stage leverages an image-based CLIP model to perform zero-shot classification on the masked image crops generated in the first stage. Our experimental results show that this simple framework surpasses the previous state of the art by large margins: +29.5 hIoU on the Pascal VOC 2012 dataset and +8.9 hIoU on the COCO Stuff dataset. With its simplicity and strong performance, we hope this framework can serve as a baseline to facilitate future research.
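A hedged sketch of the two-stage recipe in code (the stage-1 proposal model is assumed and not shown; real implementations also crop to the mask's bounding box, which is omitted here for brevity):

```python
import numpy as np
import torch
import clip   # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

model, preprocess = clip.load("ViT-B/16", device="cpu")
class_names = ["dog", "sheep", "sofa"]
text_tokens = clip.tokenize([f"a photo of a {c}" for c in class_names])

@torch.no_grad()
def classify_proposals(image, masks):
    # image: PIL.Image; masks: list of binary (H, W) arrays from a stage-1 proposal model
    text_feat = model.encode_text(text_tokens)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    labels = []
    for m in masks:
        # black out everything outside the mask, then zero-shot classify the masked image
        pil_mask = Image.fromarray((m * 255).astype(np.uint8))
        masked = Image.composite(image, Image.new("RGB", image.size), pil_mask)
        img_feat = model.encode_image(preprocess(masked).unsqueeze(0))
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        labels.append(class_names[(img_feat @ text_feat.T).argmax().item()])
    return labels
```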
Recently, CLIP has been applied to pixel-level zero-shot learning tasks via a two-stage scheme. The general idea is to first generate class-agnostic region proposals and then feed the cropped proposal regions to CLIP to utilize its image-level zero-shot classification capability. While effective, such a scheme requires two image encoders, one for proposal generation and one for CLIP, leading to a complicated pipeline and high computational cost. In this work, we pursue a simpler-and-efficient one-stage solution that directly extends CLIP's zero-shot prediction capability from image to pixel level. Our investigation starts with a straightforward extension as our baseline that generates semantic masks by comparing the similarity between text and patch embeddings extracted from CLIP. However, such a paradigm could heavily overfit the seen classes and fail to generalize to unseen classes. To handle this issue, we propose three simple-but-effective designs and figure out that they can significantly retain the inherent zero-shot capacity of CLIP and improve pixel-level generalization ability. Incorporating those modifications leads to an efficient zero-shot semantic segmentation system called ZegCLIP. Through extensive experiments on three public benchmarks, ZegCLIP demonstrates superior performance, outperforming the state-of-the-art methods by a large margin under both "inductive" and "transductive" zero-shot settings. In addition, compared with the two-stage method, our one-stage ZegCLIP achieves a speedup of about 5 times faster during inference. We release the code at https://github.com/ZiqinZhou66/ZegCLIP.git.
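For contrast with the two-stage pipeline above, here is a minimal sketch of the one-stage baseline that ZegCLIP starts from (function names, shapes, and the temperature are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def patch_text_masks(patch_embeds, text_embeds, grid_hw, out_hw, tau=0.01):
    # patch_embeds: (B, N, D) projected CLIP patch tokens; text_embeds: (K, D) class embeddings
    p = F.normalize(patch_embeds, dim=-1)
    t = F.normalize(text_embeds, dim=-1)
    logits = torch.einsum('bnd,kd->bkn', p, t) / tau     # per-patch, per-class similarity
    h, w = grid_hw
    logits = logits.reshape(logits.shape[0], -1, h, w)   # back onto the patch grid
    logits = F.interpolate(logits, size=out_hw, mode='bilinear', align_corners=False)
    return logits.argmax(dim=1)                          # (B, H, W) semantic mask
```

The paper's three proposed designs are then aimed at keeping this simple comparison from overfitting to seen classes.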
Contrastive Language-Image Pre-training (CLIP) has made a remarkable breakthrough in open-vocabulary zero-shot image recognition. Many recent studies leverage pre-trained CLIP models for image-level classification and manipulation. In this paper, we further explore the potential of CLIP for pixel-level dense prediction, specifically semantic segmentation. Without annotations or fine-tuning, our method, DenseCLIP, yields reasonable segmentation results on open concepts across various datasets. By adding pseudo-labeling and self-training, DenseCLIP+ surpasses SOTA transductive zero-shot semantic segmentation methods by large margins; e.g., the mIoUs of unseen classes on Pascal VOC / Pascal Context / COCO Stuff improve from 35.6 / 20.7 / 30.3 to 86.1 / 66.7 / 54.7. We also test the robustness of DenseCLIP under input corruption and evaluate its capability in discriminating fine-grained objects and novel concepts. Our findings suggest that DenseCLIP can serve as a new reliable source of supervision for dense prediction tasks, enabling annotation-free segmentation.
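A schematic of the pseudo-labeling plus self-training step behind DenseCLIP+ (the `dense_clip_logits` callable, the confidence threshold, and the student network are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def self_training_step(student, optimizer, images, dense_clip_logits, thresh=0.9):
    # dense_clip_logits: assumed callable mapping images to per-pixel class logits,
    # obtained annotation-free from CLIP's dense features as the abstract describes
    with torch.no_grad():
        probs = dense_clip_logits(images).softmax(dim=1)   # (B, K, H, W)
        conf, pseudo = probs.max(dim=1)                    # per-pixel pseudo-labels
        pseudo[conf < thresh] = -1                         # drop low-confidence pixels
    logits = student(images)                               # (B, K, H, W)
    loss = F.cross_entropy(logits, pseudo, ignore_index=-1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```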
Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary setting, where training data is relatively scarce. In this paper, we propose a strong recipe for transferring image-text models to open-vocabulary object detection. We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling properties of this setup shows that increasing image-level pre-training and model size yields consistent improvements on the downstream detection task. We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection. Code and models are available on GitHub.
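As a usage illustration, the Hugging Face transformers port of this detector (OWL-ViT) exposes text-conditioned detection roughly as follows (the checkpoint name and threshold are examples; consult the library docs for the exact API of your version):

```python
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open("street.jpg")                      # any RGB image
texts = [["a photo of a bicycle", "a photo of a traffic light"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# rescale predicted boxes to the original image size and filter by score
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs=outputs, target_sizes=target_sizes, threshold=0.1)
for score, label, box in zip(results[0]["scores"], results[0]["labels"], results[0]["boxes"]):
    print(f"{texts[0][label]}: {score:.2f} at {box.tolist()}")
```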
We present LSeg, a novel model for language-driven semantic image segmentation. LSeg uses a text encoder to compute embeddings of descriptive input labels (e.g., "grass" or "building") together with a transformer-based image encoder that computes dense per-pixel embeddings of the input image. The image encoder is trained with a contrastive objective to align pixel embeddings to the text embedding of the corresponding semantic class. The text embeddings provide a flexible label representation in which semantically similar labels map to similar regions in the embedding space (e.g., "cat" and "furry"). This allows LSeg to generalize to previously unseen categories at test time, without retraining or even requiring a single additional training sample. We demonstrate that our approach achieves highly competitive zero-shot performance compared to existing zero- and few-shot semantic segmentation methods, and even matches the accuracy of traditional segmentation algorithms when a fixed label set is provided. Code and demo are available at https://github.com/isl-org/lang-seg.
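The contrastive objective reduces to per-pixel classification against the batch's label set; a compact sketch under assumed shapes and temperature:

```python
import torch
import torch.nn.functional as F

def lseg_loss(pixel_embeds, label_embeds, gt, tau=0.07):
    # pixel_embeds: (B, D, H, W) dense embeddings from the image encoder
    # label_embeds: (K, D) text embeddings of the K labels used in this batch
    # gt: (B, H, W) ground-truth class indices
    p = F.normalize(pixel_embeds, dim=1)
    t = F.normalize(label_embeds, dim=-1)
    logits = torch.einsum('bdhw,kd->bkhw', p, t) / tau
    return F.cross_entropy(logits, gt)
```

Swapping in new label embeddings at test time is what gives the zero-shot generalization described above.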
Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. With end-to-end deep learning systems, grouping of image regions usually happens implicitly via top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring the grouping mechanism back into deep networks, which allows semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid-structured representation and learns to group image regions into progressively larger, arbitrarily shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group together semantic regions and transfers successfully to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the Pascal VOC 2012 dataset and 22.4% mIoU on the Pascal Context dataset, and performs competitively with state-of-the-art transfer-learning methods that require greater levels of supervision. We open-source our code at https://github.com/nvlabs/groupvit.
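A loose sketch of one grouping step (the hard assignment via straight-through Gumbel-softmax follows the paper's general mechanism; the projection and normalization details are simplified assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupingBlock(nn.Module):
    def __init__(self, dim, n_groups):
        super().__init__()
        self.group_tokens = nn.Parameter(torch.randn(n_groups, dim))
        self.proj = nn.Linear(dim, dim)

    def forward(self, tokens):
        # tokens: (B, N, D) image tokens; each is assigned to one of G groups
        sim = self.proj(tokens) @ self.group_tokens.t()          # (B, N, G)
        assign = F.gumbel_softmax(sim, tau=1.0, hard=True, dim=-1)
        seg = torch.einsum('bng,bnd->bgd', assign, tokens)       # sum members per group
        seg = seg / (assign.sum(dim=1).unsqueeze(-1) + 1e-6)     # average into segment tokens
        return seg, assign                                       # assign doubles as the segmentation
```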
Zero-shot semantic segmentation (ZS3) aims to segment novel categories that have not been seen during training. Existing works formulate ZS3 as a pixel-level zero-shot classification problem and transfer semantic knowledge from seen classes to unseen ones with the help of language models pre-trained only on text. While simple, the pixel-level ZS3 formulation shows limited capability to integrate vision-language models that are pre-trained with image-text pairs and currently demonstrate great potential for vision tasks. Inspired by the observation that humans often perform segment-level semantic labeling, we propose to decouple ZS3 into two sub-tasks: 1) a class-agnostic grouping task that groups pixels into segments; 2) a zero-shot classification task on segments. The former sub-task does not involve category information and can be directly transferred to group pixels of unseen classes. The latter sub-task is performed at the segment level and provides a natural way to leverage large-scale vision-language models pre-trained with image-text pairs (e.g., CLIP) for ZS3. Based on the decoupled formulation, we propose a simple and effective zero-shot semantic segmentation model, called ZegFormer, which outperforms previous methods by large margins, e.g., by 35 points on Pascal VOC and 3 points on COCO-Stuff for unseen classes. Code will be released at https://github.com/dingjiansw101/zegformer.
Labeling data is often expensive and time-consuming, especially for tasks such as object detection and instance segmentation, which require dense labeling of the image. While few-shot object detection is about training a model on novel (unseen) object classes with little data, it still requires prior training on many labeled examples of base (seen) classes. On the other hand, self-supervised methods aim to learn representations from unlabeled data that transfer to downstream tasks such as object detection. Combining few-shot and self-supervised object detection is a promising research direction. In this survey, we review and characterize the most recent approaches to few-shot and self-supervised object detection. We then give our main takeaways and discuss future research directions. Project page: https://gabrielhuang.github.io/fsod-survey/
Semantic segmentation is a key computer vision task that has been actively researched for decades. In recent years, supervised methods have reached unprecedented accuracy, but they require many pixel-level annotations for every new class category, which is very time-consuming and expensive. Additionally, the ability of current semantic segmentation networks to handle a large number of categories is limited. This means that images containing rare class categories are unlikely to be well segmented by current methods. In this paper, we propose a novel approach for creating semantic segmentation masks for every object, without training a segmentation network or seeing any segmentation masks. Our method takes as input the image-level labels of the class categories present in the image; these can be obtained automatically or manually. We utilize a vision-language embedding model (specifically CLIP) to create a rough segmentation map for each class using model-interpretability methods, and refine the maps with a test-time augmentation technique. The output of this stage provides pixel-level pseudo-labels, in place of the manual pixel-level labels required by supervised methods. Given the pseudo-labels, we utilize single-image segmentation techniques to obtain high-quality output segmentation masks. Our method is shown, quantitatively and qualitatively, to outperform methods that use a similar amount of supervision. Our results are particularly remarkable for images containing rare categories.
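A toy sketch of the refinement stage: a CLIP relevance map (`relevance_fn` stands in for whichever interpretability method is used and is assumed given) is averaged over augmentations such as horizontal flips, then thresholded into pixel-level pseudo-labels:

```python
import torch

def refined_pseudo_labels(image, text, relevance_fn, thresh=0.5):
    # image: (3, H, W) tensor; relevance_fn returns an (H, W) map in [0, 1]
    m = relevance_fn(image, text)
    m_flip = relevance_fn(torch.flip(image, dims=[-1]), text)   # flipped view
    m = (m + torch.flip(m_flip, dims=[-1])) / 2                 # undo the flip and average
    return (m > thresh).long()                                  # pixel-level pseudo-labels
```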
Open-vocabulary object detection, which is concerned with the problem of detecting novel objects guided by natural language, has gained increasing attention from the community. Ideally, we would like to extend an open-vocabulary detector such that it can produce bounding box predictions based on user inputs in the form of either natural language or an exemplar image. This offers great flexibility and user experience for human-computer interaction. To this end, we propose a novel open-vocabulary detector based on DETR -- hence the name OV-DETR -- which, once trained, can detect any object given its class name or an exemplar image. The biggest challenge of turning DETR into an open-vocabulary detector is that it is impossible to calculate the classification cost matrix of novel classes without access to their labeled images. To overcome this challenge, we formulate the learning objective as a binary matching one between input queries (class name or exemplar image) and the corresponding objects, which learns useful correspondence to generalize to unseen queries during testing. For training, we choose to condition the Transformer decoder on the input embeddings obtained from a pre-trained vision-language model like CLIP, in order to enable matching for both text and image queries. With extensive experiments on LVIS and COCO datasets, we demonstrate that our OV-DETR -- the first end-to-end Transformer-based open-vocabulary detector -- achieves non-trivial improvements over the current state of the art.
We present X-Decoder, a generalized decoding model that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder takes as input two types of queries: (i) generic non-semantic queries and (ii) semantic queries induced from text inputs, to decode different pixel-level and token-level outputs in the same semantic space. With such a novel design, X-Decoder is the first work that provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks. Further, our design enables seamless interactions across tasks at different granularities and brings mutual benefits by learning a common and rich pixel-level visual-semantic understanding space, without any pseudo-labeling. After pretraining on a mixed set of a limited amount of segmentation data and millions of image-text pairs, X-Decoder exhibits strong transferability to a wide range of downstream tasks in both zero-shot and finetuning settings. Notably, it achieves (1) state-of-the-art results on open-vocabulary segmentation and referring segmentation on eight datasets; (2) better or competitive finetuned performance to other generalist and specialist models on segmentation and VL tasks; and (3) flexibility for efficient finetuning and novel task composition (e.g., referring captioning and image editing). Code, demo, video, and visualization are available at https://x-decoder-vl.github.io.
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
Few-shot (FS) and zero-shot (ZS) learning are two different approaches for scaling temporal action detection (TAD) to new classes. The former adapts a pretrained vision model to a new task represented by as few as a single video per class, whilst the latter requires no training examples by exploiting a semantic description of the new class. In this work, we introduce a new multi-modality few-shot (MMFS) TAD problem, which can be considered as a marriage of FS-TAD and ZS-TAD by leveraging few-shot support videos and new class names jointly. To tackle this problem, we further introduce a novel MUlti-modality PromPt mETa-learning (MUPPET) method. This is enabled by efficiently bridging pretrained vision and language models whilst maximally reusing already learned capacity. Concretely, we construct multi-modal prompts by mapping support videos into the textual token space of a vision-language model using a meta-learned adapter-equipped visual semantics tokenizer. To tackle large intra-class variation, we further design a query feature regulation scheme. Extensive experiments on ActivityNetv1.3 and THUMOS14 demonstrate that our MUPPET outperforms state-of-the-art alternative methods, often by a large margin. We also show that our MUPPET can be easily extended to tackle the few-shot object detection problem and again achieves the state-of-the-art performance on MS-COCO dataset. The code will be available in https://github.com/sauradip/MUPPET
Transformer, an attention-based encoder-decoder architecture, has revolutionized the field of natural language processing. Inspired by this significant achievement, some pioneering works have recently been done on adapting Transformer-like architectures to computer vision (CV), which have demonstrated their effectiveness on various CV tasks. Relying on their competitive modeling capability, visual Transformers have achieved impressive performance compared with modern convolutional neural networks. In this paper, we comprehensively review three hundred different visual Transformers for three fundamental CV tasks (classification, detection, and segmentation), and propose a taxonomy that organizes these methods according to their motivation, structure, and usage scenario. Because of the differences in training settings and oriented tasks, we also evaluate these methods on different configurations for easy and intuitive comparison, rather than only on various benchmarks. Furthermore, we reveal a series of essential yet unexploited aspects that may empower Transformers to stand out from numerous architectures, e.g., slack high-level semantic embeddings to bridge the gap between visual and sequential Transformers. Finally, three promising directions for future research are suggested for further investigation.
Existing temporal action detection (TAD) methods rely on large training data including segment-level annotations, and are limited to recognizing only previously seen classes at inference time. Collecting and annotating a large training set for each class of interest is costly and hence unscalable. Zero-shot TAD (ZS-TAD) resolves this obstacle by enabling a pre-trained model to recognize any unseen action class. Meanwhile, ZS-TAD is much less investigated and significantly more challenging. Inspired by the success of zero-shot image classification, we aim to tackle the more complex TAD task. An intuitive approach is to integrate an off-the-shelf proposal detector with CLIP-style classification. However, due to the sequential localization (e.g., proposal generation) and classification design, it is prone to localization error propagation. To overcome this problem, in this paper we propose a novel zero-shot temporal action detection model via vision-language prompting (STALE). This novel design effectively eliminates the dependence between localization and classification by breaking the error-propagation route in between. We further introduce an interaction mechanism between classification and localization for improved optimization. Extensive experiments on standard ZS-TAD video benchmarks show that our STALE significantly outperforms state-of-the-art alternatives. Besides, our model also yields superior results on supervised TAD over recent strong competitors. The PyTorch implementation of STALE is available at https://github.com/sauradip/stale.
We design an open-vocabulary image segmentation model to organize an image into meaningful regions indicated by arbitrary texts. Recent works (CLIP and ALIGN), despite attaining impressive open-vocabulary classification accuracy with image-level caption labels, are nonetheless unable to segment visual concepts at the pixel level. We argue that these models miss an important step of visual grouping, which organizes pixels into groups before learning visual-semantic alignments. We propose OpenSeg to address the above issue while still leveraging scalable image-level caption supervision. First, it learns to propose segmentation masks for possible organizations. Then it learns visual-semantic alignments by aligning each word in a caption to one or a few predicted masks. We find that the mask representations are the key to supporting learning of image segmentation from captions, making it possible to scale up the dataset and vocabulary sizes. OpenSeg significantly outperforms the recent open-vocabulary method LSeg by +19.9 mIoU on the PASCAL dataset.
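A rough sketch of a word-to-mask grounding score (the pooling details and the choice of max are an interpretation of the abstract, not the paper's exact loss):

```python
import torch
import torch.nn.functional as F

def grounding_score(mask_feats, word_feats):
    # mask_feats: (R, D) pooled visual features of R predicted masks
    # word_feats: (W, D) embeddings of the words in one caption
    m = F.normalize(mask_feats, dim=-1)
    w = F.normalize(word_feats, dim=-1)
    sim = w @ m.t()                       # (W, R) word-to-mask similarities
    return sim.max(dim=-1).values.mean()  # each word aligns to its best-matching mask
```

In training, a score like this would be contrasted across the image-caption pairs in a batch, so that words ground to masks from their own image.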
What constitutes an object? This has been a long-standing question in computer vision. Towards this goal, numerous learning-free and learning-based approaches have been developed to score objectness. However, they generally do not scale well across new domains and unseen objects. In this paper, we advocate that existing methods lack a top-down supervision signal governed by human-understandable semantics. To bridge this gap, we explore Multi-modal Vision Transformers (MViTs) that have been trained with aligned image-text pairs. Our extensive experiments across various domains and novel objects show the state-of-the-art performance of MViTs in localizing generic objects in images. Based on these findings, we develop an efficient and flexible MViT architecture using multi-scale feature processing and deformable self-attention that can adaptively generate proposals given a specific language query. We show the significance of MViT proposals in a diverse range of applications, including open-world object detection, salient and camouflaged object detection, and supervised and self-supervised detection tasks. Furthermore, MViTs offer enhanced interactability via intelligible text queries. Code: https://git.io/j1hpy.
Contrastive Language-Image Pre-training (CLIP) learns rich representations from readily available natural-language supervision. It improves general performance on downstream vision tasks, including but not limited to zero-shot recognition, long-tailed classification, segmentation, retrieval, captioning, and video. However, to the best of our knowledge, the visual interpretability of CLIP has not yet been studied. To provide visual explanations of its predictions, we propose the Image-Text Similarity Map (ITSM). Based on it, we surprisingly find that CLIP prefers background regions over the foreground, producing erroneous visualizations that contradict human understanding. Experimentally, we find that the devil is in the pooling part, where inappropriate pooling methods lead to a phenomenon called semantic shift. To correct and improve the visualization results, we propose masked max pooling, using the attention map from a self-supervised image encoder. Meanwhile, interpretability tasks and recognition tasks require different representations. To address this problem, we propose dual projections to meet this requirement. We integrate the above methods as Interpretable Contrastive Language-Image Pre-training (ICLIP). Experiments show that ICLIP greatly improves interpretability; for example, the nontrivial improvements on the VOC 2012 dataset are 32.85% and 49.10%, respectively.
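A small sketch of the ITSM and the masked max pooling it motivates (the foreground mask is assumed to come from a self-supervised encoder's attention map, per the abstract; the exact placement of the pooling is an interpretation):

```python
import torch
import torch.nn.functional as F

def itsm(dense_feats, text_feat):
    # dense_feats: (D, H, W) dense image features; text_feat: (D,) text embedding
    f = F.normalize(dense_feats, dim=0)
    t = F.normalize(text_feat, dim=0)
    return torch.einsum('dhw,d->hw', f, t)     # image-text similarity heatmap

def masked_max_pool(sim_map, attn_mask):
    # pool only over positions the attention mask marks as foreground,
    # avoiding the background preference the paper reports for plain pooling
    masked = sim_map.masked_fill(attn_mask == 0, float('-inf'))
    return masked.max()
```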