Contrastive Language-Image Pre-training (CLIP) has demonstrated great potential in realizing open-vocabulary visual recognition in a matching style, due to its holistic use of natural language supervision that covers unconstrained real-world visual concepts. However, it is, in turn, also difficult to evaluate and analyze the openness of CLIP-like models, since they are in theory open to any vocabulary but the actual accuracy varies. To address the insufficiency of conventional studies on openness, we resort to an incremental perspective and define the extensibility, which essentially approximates the model's ability to deal with new visual concepts, by evaluating openness through vocabulary expansions. Our evaluation based on extensibility shows that CLIP-like models are hardly truly open and their performance degrades as the vocabulary expands to different degrees. Further analysis reveals that the over-estimation of openness is not because CLIP-like models fail to capture the general similarity of image and text features of novel visual concepts, but because of the confusion among competing text features, that is, they are not stable with respect to the vocabulary. In light of this, we propose to improve the openness of CLIP in the feature space by enforcing the distinguishability of text features. Our method retrieves relevant texts from the pre-training corpus to enhance prompts for inference, which boosts the extensibility and stability of CLIP even without fine-tuning.
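To make the notion of extensibility concrete, the sketch below (a minimal illustration, not the paper's exact protocol) re-runs zero-shot matching while distractor classes are appended to the candidate vocabulary and records accuracy on the original classes; the feature tensors and function names are assumed.

```python
# Minimal sketch of probing "extensibility": accuracy on the base classes as
# the vocabulary grows. Assumes image_feats and text_feats are pre-computed,
# L2-normalized CLIP embeddings; all names here are hypothetical.
import torch

def zero_shot_accuracy(image_feats, labels, text_feats):
    # Cosine similarity reduces to a dot product for normalized features.
    logits = image_feats @ text_feats.t()          # (N_images, N_classes)
    preds = logits.argmax(dim=-1)
    return (preds == labels).float().mean().item()

def extensibility_curve(image_feats, labels, base_text_feats, extra_text_feats, steps=5):
    """Accuracy on the base classes as distractor classes are appended."""
    curve = []
    chunk = extra_text_feats.shape[0] // steps
    for k in range(steps + 1):
        vocab = torch.cat([base_text_feats, extra_text_feats[: k * chunk]], dim=0)
        # Predictions that fall on appended distractors count as errors,
        # since the ground-truth labels index only the base classes.
        curve.append(zero_shot_accuracy(image_feats, labels, vocab))
    return curve
```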
Contrastive Vision-Language Pre-training, known as CLIP, has provided a new paradigm for learning visual representations using large-scale image-text pairs. It shows impressive performance on downstream tasks via zero-shot knowledge transfer. To further enhance CLIP's adaptation capability, existing methods propose to fine-tune additional learnable modules, which significantly improves few-shot performance but introduces extra training time and computational resources. In this paper, we propose a training-free adaption method for CLIP to conduct few-shot classification, termed Tip-Adapter, which not only inherits the training-free advantage of zero-shot CLIP but also performs comparably to those training-required approaches. Tip-Adapter constructs the adapter via a key-value cache model from the few-shot training set, and updates the prior knowledge encoded in CLIP by feature retrieval. On top of that, Tip-Adapter can be further boosted to be state-of-the-art on ImageNet by fine-tuning the cache model with 10x fewer training epochs than existing methods, which is both effective and efficient. We conduct extensive experiments of few-shot classification on 11 datasets to demonstrate the superiority of our proposed method. Code is released at https://github.com/gaopengcuhk/tip-adapter.
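The following sketch illustrates the key-value cache idea in PyTorch under the assumption of pre-computed, L2-normalized CLIP features; `alpha`, `beta`, and the variable names are illustrative rather than the released implementation.

```python
# Rough sketch of the training-free key-value cache adapter.
import torch

def tip_adapter_logits(test_feats, cache_keys, cache_values, clip_text_feats,
                       alpha=1.0, beta=5.5):
    # cache_keys: (C*K, d) few-shot image features (the "keys").
    # cache_values: (C*K, C) one-hot labels of those shots (the "values").
    affinity = test_feats @ cache_keys.t()                  # cosine similarities to the shots
    cache_logits = torch.exp(-beta * (1.0 - affinity)) @ cache_values
    clip_logits = 100.0 * test_feats @ clip_text_feats.t()  # standard zero-shot head
    return clip_logits + alpha * cache_logits               # blend retrieved and prior knowledge
```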
Large-scale cross-modal pre-training paradigms have recently shown ubiquitous success on a wide range of downstream tasks, e.g., zero-shot classification, retrieval and image captioning. However, their successes highly rely on the scale and quality of web-crawled data that naturally contain incomplete and noisy information (e.g., wrong or irrelevant content). Existing works either design manual rules to clean data or generate pseudo-targets as auxiliary signals for reducing noise impact, which do not explicitly tackle both the incorrect and incomplete challenges simultaneously. In this paper, to automatically mitigate the impact of noise by solely mining over existing data, we propose a principled Noise-robust Language-Image Pre-training framework (NLIP) to stabilize pre-training via two schemes: noise-harmonization and noise-completion. First, in noise-harmonization scheme, NLIP estimates the noise probability of each pair according to the memorization effect of cross-modal transformers, then adopts noise-adaptive regularization to harmonize the cross-modal alignments with varying degrees. Second, in noise-completion scheme, to enrich the missing object information of text, NLIP injects a concept-conditioned cross-modal decoder to obtain semantic-consistent synthetic captions to complete noisy ones, which uses the retrieved visual concepts (i.e., objects' names) for the corresponding image to guide captioning generation. By collaboratively optimizing noise-harmonization and noise-completion schemes, our NLIP can alleviate the common noise effects during image-text pre-training in a more efficient way. Extensive experiments show the significant performance improvements of our NLIP using only 26M data over existing pre-trained models (e.g., CLIP, FILIP and BLIP) on 12 zero-shot classification datasets, MSCOCO image captioning and zero-shot image-text retrieval tasks.
Contrastive Language-Image Pre-training (CLIP) has emerged as a simple yet effective way to train large-scale vision-language models. CLIP demonstrates impressive zero-shot classification and retrieval on diverse downstream tasks. However, to leverage its full potential, fine-tuning still appears to be necessary. Fine-tuning the entire CLIP model can be resource-intensive and unstable. Moreover, recent methods that aim to circumvent this need for fine-tuning still require access to images from the target distribution. In this paper, we pursue a different approach and explore the regime of training-free "name-only transfer" in which the only knowledge we possess about the downstream task comprises the names of downstream target categories. We propose a novel method, SuS-X, consisting of two key building blocks -- SuS and TIP-X, that requires neither intensive fine-tuning nor costly labelled data. SuS-X achieves state-of-the-art zero-shot classification results on 19 benchmark datasets. We further show the utility of TIP-X in the training-free few-shot setting, where we again achieve state-of-the-art results over strong training-free baselines. Code is available at https://github.com/vishaal27/SuS-X.
We introduce Patch Aligned Contrastive Learning (PACL), a modified compatibility function for CLIP's contrastive loss, intending to train an alignment between the patch tokens of the vision encoder and the CLS token of the text encoder. With such an alignment, a model can identify regions of an image corresponding to a given text input, and therefore transfer seamlessly to the task of open vocabulary semantic segmentation without requiring any segmentation annotations during training. Using pre-trained CLIP encoders with PACL, we are able to set the state-of-the-art on the task of open vocabulary zero-shot segmentation on 4 different segmentation benchmarks: Pascal VOC, Pascal Context, COCO Stuff and ADE20K. Furthermore, we show that PACL is also applicable to image-level predictions and when used with a CLIP backbone, provides a general improvement in zero-shot classification accuracy compared to CLIP, across a suite of 12 image classification datasets.
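As a rough illustration of the patch-to-text alignment idea (not the paper's exact compatibility function), the sketch below weights per-patch similarities to the text CLS embedding to produce an image-level score, and reuses the per-patch similarities for coarse segmentation; shapes and names are assumptions.

```python
# Sketch of a patch-weighted image-text compatibility and its segmentation use.
import torch

def patch_compatibility(patch_embs, text_emb):
    # patch_embs: (N_patches, d), text_emb: (d,), both projected to a shared
    # space and L2-normalized (assumption).
    sims = patch_embs @ text_emb                 # per-patch cosine similarities
    weights = sims.softmax(dim=0)                # patches relevant to the text get more mass
    return (weights * sims).sum()                # image-level score for the contrastive loss

def segment(patch_embs, class_text_embs):
    # At inference, per-patch similarities against each class prompt give a
    # coarse segmentation map (one class index per patch).
    return (patch_embs @ class_text_embs.t()).argmax(dim=-1)
```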
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
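A minimal sketch of the symmetric contrastive objective behind this pre-training task ("which caption goes with which image"), assuming a batch of already-encoded, L2-normalized image and text features; the actual training recipe (learned temperature, very large batches, etc.) is more involved.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_feats, text_feats, temperature=0.07):
    # (B, B) pairwise similarities; the diagonal holds the matched pairs.
    logits = image_feats @ text_feats.t() / temperature
    targets = torch.arange(logits.shape[0], device=logits.device)
    loss_i = F.cross_entropy(logits, targets)      # match each image to its caption
    loss_t = F.cross_entropy(logits.t(), targets)  # and each caption to its image
    return 0.5 * (loss_i + loss_t)
```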
This paper presents contrastive-tuning, a simple method employing contrastive training to align image and text models while still taking advantage of their pre-training. In our empirical study we find that locked pre-trained image models combined with unlocked text models work best. We call this instance of contrastive-tuning "Locked-image Text tuning" (LiT), which just teaches a text model to read out good representations from a pre-trained image model for new tasks. A LiT model gains the capability of zero-shot transfer to new vision tasks, such as image classification or retrieval. The proposed LiT is widely applicable; it works reliably with multiple pre-training methods (supervised and unsupervised) and across diverse architectures (ResNet, Vision Transformers and MLP-Mixer) using three different image-text datasets. With the transformer-based pre-trained ViT-g/14 model, the LiT model achieves 84.5% zero-shot transfer accuracy on the ImageNet test set, and 81.1% on the challenging out-of-distribution ObjectNet test set.
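A minimal sketch of the locked-image recipe under the assumption of generic PyTorch encoder modules: the image tower is frozen and only the text tower is updated with a symmetric contrastive loss; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def lit_step(image_encoder, text_encoder, images, tokenized_texts, optimizer, tau=0.07):
    for p in image_encoder.parameters():
        p.requires_grad_(False)                       # the image tower stays locked
    with torch.no_grad():
        img = F.normalize(image_encoder(images), dim=-1)
    txt = F.normalize(text_encoder(tokenized_texts), dim=-1)   # only the text tower learns
    logits = img @ txt.t() / tau
    targets = torch.arange(logits.shape[0], device=logits.device)
    loss = 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```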
Open-vocabulary models like CLIP achieve high accuracy on many image classification tasks. However, there are still settings where their zero-shot performance is far from optimal. We study model patching, with the goal of improving accuracy on specific tasks without degrading accuracy on tasks where performance is already adequate. Towards this goal, we introduce PAINT, a patching method that uses interpolations between the weights of a model before fine-tuning and its weights after fine-tuning on a task to be patched. On nine tasks where zero-shot CLIP performs poorly, PAINT increases accuracy by 15 to 60 percentage points while preserving accuracy on ImageNet within one percentage point of the zero-shot model. PAINT also allows a single model to be patched on multiple tasks and improves with model scale. Furthermore, we identify cases of broad transfer, where patching on one task increases accuracy on other tasks even when the tasks are disjoint. Finally, we investigate applications beyond common benchmarks, such as counting and reducing the impact of typographic attacks on CLIP. Our findings demonstrate that the set of tasks on which open-vocabulary models achieve high accuracy can be expanded without re-training them from scratch.
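The patching operation itself reduces to a single interpolation between state dicts, as in the simplified sketch below; the mixing coefficient would be chosen on held-out data, and the state-dict handling here is illustrative.

```python
# Minimal sketch of patching by weight interpolation between the zero-shot
# and fine-tuned checkpoints of the same architecture.
import torch

def patch_weights(zeroshot_state, finetuned_state, alpha=0.5):
    """alpha=0 keeps the zero-shot model, alpha=1 keeps the fine-tuned one."""
    return {
        k: (1.0 - alpha) * zeroshot_state[k] + alpha * finetuned_state[k]
        for k in zeroshot_state
    }

# Usage: model.load_state_dict(patch_weights(zs_sd, ft_sd, alpha=0.5))
```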
Few-shot classification requires deep neural networks to learn generalized representations from only limited training images, which is challenging but significant in the low-data regime. Recently, CLIP-based methods have shown promising few-shot performance benefiting from contrastive language-image pre-training. Based on this, we ask whether large-scale pre-training can alleviate the deficiency of few-shot data and help representation learning through pre-learned knowledge. In this paper, we propose CoMo, a Collaboration of pre-trained Models, which incorporates diverse prior knowledge from various pre-training paradigms for better few-shot learning. Our CoMo includes CLIP's language-contrastive knowledge, DINO's vision-contrastive knowledge, and DALL-E's language-generative knowledge. Specifically, CoMo works in two aspects: few-shot data expansion and diverse knowledge ensemble. First, we generate synthetic images via zero-shot DALL-E to enrich the few-shot training data without any manpower. Second, we introduce a learnable Multi-Knowledge Adapter (MK-Adapter) to adaptively blend the predictions from CLIP and DINO. Through such collaboration, CoMo can fully unleash the potential of different pre-training methods and unify them for few-shot classification. We conduct extensive experiments on 11 datasets to demonstrate the superiority and generalization ability of our approach.
With the emergence of large pre-trained vision-language models such as CLIP, transferable representations can be adapted to a wide range of downstream tasks via prompt tuning. Prompt tuning probes the beneficial information for downstream tasks from the general knowledge stored in the image and text encoders of the pre-trained model. A recently proposed method named Context Optimization (CoOp) introduces a set of learnable vectors as a text prompt from the language side; however, tuning the text prompt alone cannot affect the visual features computed by the image encoder, leading to sub-optimal performance. In this paper, we propose a dual-modality prompt tuning paradigm that learns text prompts and visual prompts for the text and image encoders simultaneously. In addition, to make the visual prompts concentrate more on the target visual concept, we propose Class-Aware Visual Prompt Tuning (CAVPT), in which the class-aware visual prompt is generated dynamically by performing cross attention between the language descriptions of template prompts and the visual class token embeddings. Our method provides a new paradigm for tuning large pre-trained vision-language models, and extensive experimental results on 8 datasets demonstrate the effectiveness of the proposed method. Our code is available in the supplementary materials.
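A speculative sketch of generating class-aware visual prompts with cross attention, assuming class-name text features attend over visual class-token embeddings; the exact wiring and dimensions in the paper may differ.

```python
import torch
import torch.nn as nn

class ClassAwareVisualPrompt(nn.Module):
    def __init__(self, dim=512, n_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, template_text_feats, visual_cls_tokens):
        # template_text_feats: (B, C, dim) language descriptions of the classes.
        # visual_cls_tokens:   (B, T, dim) class-token embeddings from the image side.
        prompts, _ = self.cross_attn(query=template_text_feats,
                                     key=visual_cls_tokens,
                                     value=visual_cls_tokens)
        return prompts   # appended to the image-encoder input as visual prompts
```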
Vision-language pre-training (VLP) with large-scale image-text pairs has demonstrated superior performance in various fields. However, the image-text pairs co-occurring on the Internet typically lack explicit alignment information, which is sub-optimal for VLP. It has been proposed to adopt off-the-shelf object detectors to exploit additional image tag information. However, object detectors are time-consuming and can only identify pre-defined object categories, limiting the model capacity. Inspired by the observation that the texts carry incomplete fine-grained image information, we introduce IDEA, which stands for increasing text Diversity via online multi-label recognition for VLP. IDEA shows that multi-label learning with image tags extracted from the texts can be jointly optimized during VLP. Moreover, IDEA can identify valuable image tags online to provide more explicit textual supervision. Comprehensive experiments demonstrate that IDEA can significantly boost the performance on multiple downstream datasets with a small extra computational cost.
Vision-Language Pre-training (VLP) models have shown remarkable performance on various downstream tasks. Their success heavily relies on the scale of the pre-training cross-modal datasets. However, the lack of large-scale datasets and benchmarks in Chinese hinders the development of Chinese VLP models and broader multilingual applications. In this work, we release a large-scale Chinese cross-modal dataset named Wukong, which contains 100 million Chinese image-text pairs collected from the web. Wukong aims to benchmark different multi-modal pre-training methods in order to facilitate VLP research and community development. Furthermore, we release a group of models pre-trained with various image encoders (ViT-B/ViT-L/SwinT) and also apply advanced pre-training techniques to VLP, such as locked-image text tuning, token-wise similarity in contrastive learning, and reduced-token interaction. Extensive experiments and benchmarks on different downstream tasks are also provided, including the largest human-verified image-text test dataset to date. Experiments show that Wukong can serve as a promising Chinese pre-training dataset and benchmark for different cross-modal learning methods. For the zero-shot image classification task on 10 datasets, $Wukong_{ViT-L}$ achieves an average accuracy of 73.03%. For the image-text retrieval task, it achieves a mean recall of 71.6% on AIC-ICC, which is 12.9% higher than WenLan 2.0. Furthermore, our Wukong models are benchmarked against other variants on downstream tasks over multiple datasets, e.g., Flickr8K-CN, Flickr30K-CN, COCO-CN, etc. More information can be found at https://wukong-dataset.github.io/wukong-dataset/.
Recent work has shown that self-supervised pre-training leads to improvements over supervised learning on challenging visual recognition tasks. CLIP, an exciting new approach to learning with language supervision, demonstrates promising performance on a wide variety of benchmarks. In this work, we explore whether self-supervised learning can aid in the use of language supervision for visual representation learning. We introduce SLIP, a multi-task learning framework for combining self-supervised learning and CLIP pre-training. After pre-training with Vision Transformers, we thoroughly evaluate representation quality and compare performance to self-supervised learning under three distinct settings: zero-shot transfer, linear classification, and end-to-end finetuning. Across ImageNet and a battery of additional datasets, we find that SLIP improves accuracy by a large margin. We further validate our results with experiments on different model sizes, training schedules, and pre-training datasets. Our findings show that SLIP enjoys the best of both worlds: better performance than self-supervision (+8.1% linear accuracy) and language supervision (+5.2% zero-shot accuracy).
Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks. Different from traditional representation learning, which is based mostly on discretized labels, vision-language pre-training aligns images and texts in a common feature space, which allows zero-shot transfer to a downstream task via prompting, i.e., classification weights are synthesized from natural language describing the classes of interest. In this work, we show that a major challenge for deploying such models in practice is prompt engineering, which requires domain expertise and is extremely time-consuming: one needs to spend a significant amount of time on word tuning, since a slight change in wording could have a huge impact on performance. Inspired by recent advances in prompt learning research in natural language processing (NLP), we propose Context Optimization (CoOp), a simple approach specifically for adapting CLIP-like vision-language models to downstream image recognition. Concretely, CoOp models a prompt's context words with learnable vectors while the entire set of pre-trained parameters is kept fixed. To handle different image recognition tasks, we provide two implementations of CoOp: unified context and class-specific context. Through extensive experiments on 11 datasets, we demonstrate that CoOp requires as few as one or two shots to beat hand-crafted prompts by a decent margin, and obtains significant improvements over prompt engineering with more shots, e.g., with 16 shots the average gain is around 15% (with the highest reaching over 45%). Despite being a learning-based approach, CoOp achieves superb domain generalization performance compared with the zero-shot model using hand-crafted prompts.
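A rough sketch of the unified-context variant, assuming a frozen CLIP-style text encoder that accepts token embeddings directly and pre-computed class-name token embeddings; tokenization details and context placement follow the paper only loosely.

```python
import torch
import torch.nn as nn

class LearnableContext(nn.Module):
    def __init__(self, text_encoder, class_token_embs, n_ctx=16, dim=512):
        super().__init__()
        self.text_encoder = text_encoder               # frozen; only the context is trained
        for p in self.text_encoder.parameters():
            p.requires_grad_(False)
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)   # shared ("unified") context
        self.register_buffer("cls_embs", class_token_embs)        # (C, L_cls, dim), precomputed

    def forward(self):
        C = self.cls_embs.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(C, -1, -1)              # same context for every class
        prompts = torch.cat([ctx, self.cls_embs], dim=1)           # [V1]...[Vn][CLASS] embeddings
        return self.text_encoder(prompts)                          # one text feature per class
```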
Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without the expensive filtering or post-processing steps in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enable zero-shot image classification and also set new state-of-the-art results on Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.
The visual world naturally exhibits a long-tailed distribution of open classes, which poses great challenges to modern visual systems. Existing approaches either perform class re-balancing strategies or directly improve network modules to address the problem. However, they still train models with a finite set of predefined labels, which limits their supervision information and restricts their transferability to novel instances. Recent advances in large-scale contrastive vision-language pre-training shed new light on a promising alternative for visual recognition. Leveraging open-vocabulary supervision, pre-trained contrastive vision-language models learn powerful multimodal representations that are well suited to handling data deficiency and unseen concepts. By computing the semantic similarity between visual and textual inputs, visual recognition is converted into a vision-language matching problem. Inspired by this, we propose BALLAD to leverage contrastive vision-language models for long-tailed recognition. We first continue pre-training the vision-language backbone through contrastive learning on a specific long-tailed target dataset. Afterwards, we freeze the backbone and further employ an additional adapter layer to enhance the representations of tail classes on balanced training samples built with re-sampling strategies. Extensive experiments have been conducted on three popular long-tailed recognition benchmarks. As a result, our simple and effective approach sets new state-of-the-art performance and outperforms competitive baselines by a large margin. Code is released at https://github.com/gaopengcuhk/ballad.
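The second phase can be pictured as in the sketch below: a small residual adapter trained on class-balanced batches over a frozen backbone. The adapter shape and the sampling scheme are assumptions for illustration, not the released code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, WeightedRandomSampler

class ResidualAdapter(nn.Module):
    def __init__(self, dim=512, hidden=256, ratio=0.2):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.ratio = ratio

    def forward(self, x):
        # Blend adapted and original features so tail-class updates cannot
        # destroy the frozen pre-trained representation.
        return self.ratio * self.mlp(x) + (1.0 - self.ratio) * x

def balanced_loader(dataset, labels, batch_size=256):
    counts = torch.bincount(torch.as_tensor(labels)).float()
    weights = 1.0 / counts[torch.as_tensor(labels)]       # rarer classes sampled more often
    sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```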
Learning visual representations from natural language supervision has recently shown great promise in a number of pioneering works. In general, these language-augmented visual models demonstrate strong transferability to a variety of datasets and tasks. However, it remains challenging to evaluate the transferability of these models due to the lack of easy-to-use evaluation toolkits and public benchmarks. To tackle this, we build ELEVATER (Evaluation of Language-augmented Visual Task-level Transfer), the first benchmark and toolkit for evaluating (pre-trained) language-augmented visual models. ELEVATER is composed of three components. (i) Datasets. As a downstream evaluation suite, it consists of 20 image classification datasets and 35 object detection datasets, each of which is augmented with external knowledge. (ii) Toolkit. An automatic hyper-parameter tuning toolkit is developed to facilitate model evaluation on downstream tasks. (iii) Metrics. A variety of evaluation metrics are used to measure sample-efficiency (zero-shot and few-shot) and parameter-efficiency (linear probing and full model fine-tuning). We publicly release ELEVATER at https://computer-vision-in-the-wild.github.io/elevater/.
Contrastive Vision-Language Pre-training, known as CLIP, has provided a new paradigm for learning visual representations by using large-scale contrastive image-text pairs. It shows impressive performance on zero-shot knowledge transfer to downstream tasks. To further enhance CLIP's few-shot capability, CLIP-Adapter proposed to fine-tune a lightweight residual feature adapter and significantly improves the performance of few-shot classification. However, such a process still requires extra training and computational resources. In this paper, we propose Training-free CLIP-Adapter (Tip-Adapter), which not only inherits CLIP's training-free advantage but also performs comparably to or even better than CLIP-Adapter. Tip-Adapter does not require any back propagation for training the adapter, but creates the weights from a key-value cache model constructed from the few-shot training set. In this non-parametric manner, Tip-Adapter acquires well-performing adapter weights without any training, which is both efficient and effective. Moreover, the performance of Tip-Adapter can be further boosted by fine-tuning such a properly initialized adapter for only a few epochs with super-fast convergence. We conduct extensive experiments of few-shot classification on ImageNet and 10 other datasets to demonstrate the superiority of the proposed Tip-Adapter. The code will be released at https://github.com/gaopengcuhk/tip-adapter.
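The fine-tuned variant mentioned above can be pictured as turning the cache keys into a trainable linear layer initialized from the few-shot features, as in this illustrative sketch (hyper-parameters and names are assumed):

```python
import torch
import torch.nn as nn

class CacheAdapter(nn.Module):
    def __init__(self, cache_keys, cache_values, beta=5.5):
        super().__init__()
        # cache_keys: (C*K, d) few-shot features; trainable after this initialization.
        self.keys = nn.Linear(cache_keys.shape[1], cache_keys.shape[0], bias=False)
        self.keys.weight.data.copy_(cache_keys)
        self.register_buffer("values", cache_values)   # (C*K, C) one-hot labels, kept frozen
        self.beta = beta

    def forward(self, feats):
        affinity = self.keys(feats)                     # (B, C*K) similarities to the shots
        return torch.exp(-self.beta * (1.0 - affinity)) @ self.values
```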
Contrastive Language Image Pretraining (CLIP) has received widespread attention since its learned representations transfer well to various downstream tasks. During CLIP training, the InfoNCE objective aims to align positive image-text pairs and separate negative ones. In this paper, we show a representation grouping effect during this process: the InfoNCE objective indirectly groups semantically similar representations together via randomly emerged within-modal anchors. We introduce Prototypical Contrastive Language Image Pretraining (ProtoCLIP) to enhance such grouping by boosting its efficiency and increasing its robustness against the modality gap. Specifically, ProtoCLIP sets up prototype-level discrimination between the image and text spaces, which efficiently transfers higher-level structural knowledge. We further propose Prototypical Back Translation (PBT) to decouple representation grouping from representation alignment, which enables effective learning of meaningful representations under a large modality gap. PBT also allows us to introduce additional external teachers with richer prior knowledge. ProtoCLIP is trained with an online episodic training strategy, which makes it scalable to unlimited amounts of data. Combining the above novel designs, we train ProtoCLIP on Conceptual Captions and achieve a +5.81% ImageNet linear-probing improvement and a +2.01% ImageNet zero-shot classification improvement. Code is available at https://github.com/megvii-research/protoclip.
Contrastive Language-Image Pre-trained (CLIP) models have the zero-shot ability to classify an image as belonging to "[CLASS]" by using the similarity between the image and the prompt sentence "a [CONTEXT] of [CLASS]". Based on exhaustive text cues in "[CONTEXT]", the CLIP model is aware of different contexts, e.g. background, style, viewpoint, and exhibits unprecedented robustness against a wide range of distribution shifts. However, recent works find that further fine-tuning of CLIP models improves accuracy but sacrifices robustness on downstream tasks. We conduct an empirical investigation to show that fine-tuning corrupts the context-aware ability of pre-trained CLIP features. To solve this problem, we propose Context-Aware Robust Fine-tuning (CAR-FT). CAR-FT regularizes the model during fine-tuning to capture the context information. Specifically, we use zero-shot prompt weights to get the context distribution contained in the image. By minimizing the Kullback-Leibler Divergence (KLD) between the context distributions induced by the original and fine-tuned CLIP models, CAR-FT makes the context-aware ability of CLIP carry over into downstream tasks, and achieves both higher In-Distribution (ID) and Out-Of-Distribution (OOD) accuracy. The experimental results show that CAR-FT achieves superior robustness on five OOD test datasets of ImageNet, while bringing accuracy gains on nine downstream tasks. Additionally, CAR-FT surpasses previous Domain Generalization (DG) methods and achieves 78.5% average accuracy on the DomainBed benchmark, setting a new state of the art.
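A rough sketch of the context-distribution regularizer described above, assuming a bank of context prompts (e.g. "a photo of ...", "a painting of ...") encoded by the original and the fine-tuned models; the prompt design, temperature, and names are assumptions.

```python
import torch
import torch.nn.functional as F

def context_distribution(image_feats, context_text_feats, tau=0.01):
    # Softmax over context prompts: how strongly each context explains the image.
    return (image_feats @ context_text_feats.t() / tau).softmax(dim=-1)

def car_ft_regularizer(img_feats_orig, ctx_feats_orig, img_feats_ft, ctx_feats_ft):
    p_orig = context_distribution(img_feats_orig, ctx_feats_orig)
    p_ft = context_distribution(img_feats_ft, ctx_feats_ft)
    # KL(p_orig || p_ft): keep the fine-tuned model's view of contexts close to
    # the zero-shot model's; added to the usual classification loss.
    return F.kl_div(p_ft.log(), p_orig, reduction="batchmean")
```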