Pre-trained vision-language models like CLIP have recently shown superior performance on various downstream tasks, including image classification and segmentation. However, in fine-grained image re-identification (ReID), the labels are indexes that lack concrete text descriptions, so it remains to be determined how such models can be applied to these tasks. This paper first shows that simply fine-tuning a visual model initialized with CLIP's image encoder already achieves competitive performance on various ReID tasks. We then propose a two-stage strategy to facilitate a better visual representation. The key idea is to fully exploit the cross-modal description ability of CLIP through a set of learnable text tokens for each ID, which are fed to the text encoder to form ambiguous descriptions. In the first training stage, the image and text encoders from CLIP are kept fixed, and only the text tokens are optimized from scratch by the contrastive loss computed within a batch. In the second stage, the ID-specific text tokens and their encoder become static, providing constraints for fine-tuning the image encoder. With the help of the loss designed for the downstream task, the image encoder learns to represent images accurately in the feature embedding space. The effectiveness of the proposed strategy is validated on several datasets for person and vehicle ReID tasks. Code is available at https://github.com/Syliz517/CLIP-ReID.
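The two-stage recipe above can be made concrete with a short sketch. The following is a minimal PyTorch illustration of the first stage only, assuming a CLIP-style model; `encode_image` and `encode_text_from_embeddings` are assumed interfaces (real CLIP consumes token ids, so feeding learnable embeddings requires a small custom text forward), and the single-positive-per-batch loss is a simplification of the paper's objective.

```python
import torch
import torch.nn.functional as F

# Stage 1: only the per-ID text tokens are trainable; both CLIP encoders stay frozen.
num_ids, n_ctx, ctx_dim = 751, 4, 512        # e.g., Market-1501 has 751 training IDs
id_tokens = torch.nn.Parameter(torch.randn(num_ids, n_ctx, ctx_dim) * 0.02)
optimizer = torch.optim.Adam([id_tokens], lr=3.5e-4)

def stage1_step(clip_model, images, pids, temperature=0.07):
    with torch.no_grad():                                      # frozen encoders
        img = F.normalize(clip_model.encode_image(images), dim=-1)        # (B, D)
    txt = F.normalize(clip_model.encode_text_from_embeddings(id_tokens), dim=-1)
    i2t = img @ txt.t() / temperature                          # (B, num_ids)
    t2i = txt[pids] @ img.t() / temperature                    # (B, B)
    batch_idx = torch.arange(len(pids), device=images.device)
    # simplification: assumes each ID occurs once per batch; the paper's loss
    # handles multiple positives per ID
    loss = F.cross_entropy(i2t, pids) + F.cross_entropy(t2i, batch_idx)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```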
With the emergence of large pre-trained vision-language models like CLIP, transferable representations can be adapted to a wide range of downstream tasks via prompt tuning. Prompt tuning probes the general knowledge stored in the image and text encoders of the pre-trained model for information beneficial to the downstream task. A recently proposed method named Context Optimization (CoOp) introduces a set of learnable vectors as the text prompt on the language side; however, tuning the text prompt alone cannot affect the visual features computed by the image encoder, which leads to sub-optimal solutions. In this paper, we propose a dual-modality prompt tuning paradigm that learns text and visual prompts for the text and image encoders simultaneously. Moreover, to make the visual prompts concentrate more on the target visual concept, we propose Class-Aware Visual Prompt Tuning (CAVPT), in which the class-aware visual prompt is generated dynamically by performing cross attention between the language descriptions of template prompts and the visual class token embeddings. Our method provides a new paradigm for tuning large pre-trained vision-language models, and extensive experimental results on 8 datasets demonstrate its effectiveness. Our code is available in the supplementary materials.
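A hedged sketch of the class-aware visual prompt idea: prompt tokens generated by cross attention between template-prompt text features and the visual class token. The query/key assignment, single attention layer, and dimensions are illustrative assumptions, not the paper's exact design.

```python
import torch

class ClassAwareVisualPrompt(torch.nn.Module):
    """Illustrative CAVPT-style module: visual prompt tokens are produced
    dynamically by cross attention, letting the class token condition the
    prompts on the target visual concept."""
    def __init__(self, dim=512, n_heads=8):
        super().__init__()
        self.attn = torch.nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, text_prompt_feats, class_token):
        # text_prompt_feats: (B, K, dim) features of the template text prompts
        # class_token: (B, 1, dim) visual class token embedding
        prompt, _ = self.attn(query=text_prompt_feats,
                              key=class_token, value=class_token)
        return prompt  # (B, K, dim) class-aware visual prompt tokens
```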
Automated visual understanding of our diverse and open world demands computer vision models that generalize well with minimal customization for specific tasks, similar to human vision. Computer vision foundation models, which are trained on diverse, large-scale datasets and can be adapted to a wide range of downstream tasks, are critical for solving real-world computer vision applications. While existing vision foundation models such as CLIP, ALIGN, and WuDao 2.0 focus mainly on mapping images and text to a cross-modal shared representation, we introduce a new computer vision foundation model, Florence, that expands the representations from coarse (scene) to fine (object), from static (images) to dynamic (videos), and from RGB to multiple modalities (caption, depth). By incorporating universal visual-language representations learned from web-scale image-text data, our Florence model can be easily adapted to various computer vision tasks, such as classification, retrieval, object detection, VQA, image captioning, video retrieval, and action recognition. Moreover, Florence demonstrates outstanding performance in many types of transfer learning: fully sampled fine-tuning, linear probing, few-shot transfer, and zero-shot transfer for novel images and objects. All of these properties are critical for a vision foundation model to serve general-purpose vision tasks. Florence achieves new state-of-the-art results on 44 representative benchmarks, e.g., ImageNet-1K zero-shot classification with a top-1 accuracy of 83.74 and a top-5 accuracy of 97.18, 62.4 mAP on COCO fine-tuning, 80.36 on VQA, and 87.8 on Kinetics-600.
Large-scale cross-modal pre-training paradigms have recently shown ubiquitous success on a wide range of downstream tasks, e.g., zero-shot classification, retrieval and image captioning. However, their successes highly rely on the scale and quality of web-crawled data that naturally contain incomplete and noisy information (e.g., wrong or irrelevant content). Existing works either design manual rules to clean data or generate pseudo-targets as auxiliary signals for reducing noise impact, which do not explicitly tackle both the incorrect and incomplete challenges simultaneously. In this paper, to automatically mitigate the impact of noise by solely mining over existing data, we propose a principled Noise-robust Language-Image Pre-training framework (NLIP) to stabilize pre-training via two schemes: noise-harmonization and noise-completion. First, in the noise-harmonization scheme, NLIP estimates the noise probability of each pair according to the memorization effect of cross-modal transformers, then adopts noise-adaptive regularization to harmonize the cross-modal alignments with varying degrees. Second, in the noise-completion scheme, to enrich the missing object information of text, NLIP injects a concept-conditioned cross-modal decoder to obtain semantic-consistent synthetic captions to complete noisy ones, using the retrieved visual concepts (i.e., objects' names) for the corresponding image to guide caption generation. By collaboratively optimizing the noise-harmonization and noise-completion schemes, our NLIP can alleviate the common noise effects during image-text pre-training in a more efficient way. Extensive experiments show significant performance improvements of our NLIP using only 26M data over existing pre-trained models (e.g., CLIP, FILIP and BLIP) on 12 zero-shot classification datasets, MSCOCO image captioning and zero-shot image-text retrieval tasks.
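The noise-harmonization idea, down-weighting image-text pairs that look noisy, can be sketched as a weighted contrastive loss. The per-pair noise estimate below (a sigmoid of the loss deviation within the batch) is an illustrative proxy for the paper's memorization-based estimate, not NLIP's exact rule.

```python
import torch
import torch.nn.functional as F

def noise_adaptive_contrastive(img_feat, txt_feat, temperature=0.07):
    """img_feat, txt_feat: (B, D) L2-normalized features of paired data."""
    logits = img_feat @ txt_feat.t() / temperature           # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    per_pair = F.cross_entropy(logits, targets, reduction="none")
    with torch.no_grad():
        # pairs whose loss sits far above the batch mean look noisy
        noise_prob = torch.sigmoid(per_pair - per_pair.mean())
    weights = 1.0 - noise_prob                               # trust clean pairs more
    return (weights * per_pair).sum() / weights.sum()
```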
Despite the remarkable progress of image captioning, existing captioners typically lack the controllable capability to generate desired image captions, e.g., describing the image in a rough or detailed manner, in a factual or emotional view, etc. In this paper, we show that a unified model can perform well across diverse domains and freely switch among multiple styles. Such a controllable capability is achieved by embedding prompt learning into the image captioning framework. To be specific, we design a set of prompts to fine-tune the pre-trained image captioner. These prompts allow the model to absorb stylized data from different domains for joint training, without performance degradation in any single domain. Furthermore, we optimize the prompts with learnable vectors in the continuous word embedding space, avoiding heuristic prompt engineering while exhibiting superior performance. In the inference stage, our model is able to generate desired stylized captions by choosing the corresponding prompts. Extensive experiments verify the controllable capability of the proposed method. Notably, we achieve outstanding performance on two diverse image captioning benchmarks, including the COCO Karpathy split and TextCaps, using a unified model.
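A minimal sketch of prompt-controlled captioning as described: one learnable prompt per style, prepended to the decoder's input embeddings. The style set, prompt length, and captioner interface are assumptions for illustration.

```python
import torch

class StylePrompts(torch.nn.Module):
    """Learnable per-style prompts for a pre-trained captioner (interface
    assumed): pick a style at inference to control the generated caption."""
    def __init__(self, styles=("factual", "detailed", "romantic"), n_ctx=8, dim=768):
        super().__init__()
        self.index = {s: i for i, s in enumerate(styles)}
        self.prompts = torch.nn.Parameter(torch.randn(len(styles), n_ctx, dim) * 0.02)

    def forward(self, token_embeds, style):
        # token_embeds: (B, T, dim) word embeddings of the caption so far
        p = self.prompts[self.index[style]].expand(token_embeds.size(0), -1, -1)
        return torch.cat([p, token_embeds], dim=1)  # (B, n_ctx + T, dim)
```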
Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without the expensive filtering or post-processing steps of the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enable zero-shot image classification and set new state-of-the-art results on the Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.
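The dual-encoder learning scheme is the standard symmetric in-batch contrastive loss; a compact PyTorch version follows (the temperature value and batch layout are illustrative).

```python
import torch
import torch.nn.functional as F

def dual_encoder_contrastive(image_emb, text_emb, temperature=0.05):
    """Symmetric contrastive loss for a dual encoder with in-batch negatives,
    as used in ALIGN/CLIP-style training."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature       # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +      # image -> text
                  F.cross_entropy(logits.t(), targets))   # text -> image
```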
Real-world recognition systems often encounter plenty of unseen labels in practice. To identify such unseen labels, multi-label zero-shot learning (ML-ZSL) focuses on transferring knowledge through pre-trained textual label embeddings (e.g., GloVe). However, such methods exploit only single-modal knowledge from a language model while ignoring the rich semantic information inherent in image-text pairs. Recently developed open-vocabulary (OV) methods, in contrast, have successfully exploited such information from image-text pairs in object detection and achieved impressive performance. Inspired by the success of OV-based methods, we propose a novel open-vocabulary framework, named multi-modal knowledge transfer (MKT), for multi-label classification. Specifically, our method exploits multi-modal knowledge from image-text pairs based on a vision-and-language pre-training (VLP) model. To facilitate transferring the image-text matching ability of the VLP model, knowledge distillation is used to guarantee the consistency of image and label embeddings, together with prompt tuning to further update the label embeddings. To further recognize multiple objects, a simple but effective two-stream module is developed to capture both local and global features. Extensive experimental results show that our method significantly outperforms state-of-the-art methods on public benchmark datasets. Code will be available at https://github.com/seanhe97/mkt.
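A hedged sketch of the two ingredients named above: a label-embedding classification loss plus a distillation term that keeps the image embedding consistent with the frozen VLP (CLIP) image embedding. The exact losses and weighting in MKT may differ.

```python
import torch
import torch.nn.functional as F

def mkt_style_losses(img_emb, label_emb, clip_img_emb, targets, alpha=0.5):
    """img_emb: (B, D) classifier image embeddings; label_emb: (L, D) label
    embeddings; clip_img_emb: (B, D) frozen VLP image embeddings;
    targets: (B, L) multi-hot labels. Loss forms are illustrative."""
    scores = F.normalize(img_emb, dim=-1) @ F.normalize(label_emb, dim=-1).t()
    cls_loss = F.binary_cross_entropy_with_logits(scores / 0.07, targets.float())
    # distillation: keep image embeddings close to the frozen VLP embeddings
    kd_loss = 1.0 - F.cosine_similarity(img_emb, clip_img_emb.detach(), dim=-1).mean()
    return cls_loss + alpha * kd_loss
```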
Person re-identification plays a significant role in realistic scenarios due to its various applications in public security and video surveillance. Recently, leveraging supervised or semi-/unsupervised learning paradigms, which benefit from large-scale datasets and strong computing power, has achieved competitive performance on specific target domains. However, when Re-ID models are directly deployed in a new domain without target samples, they often suffer considerable performance degradation and poor domain generalization. To address this challenge, we propose a Deep Multimodal Fusion network that elaborates rich semantic knowledge to assist representation learning during pre-training. Importantly, a multimodal fusion strategy is introduced to translate the features of different modalities into a common space, which can significantly boost the generalization capability of the Re-ID model. For the fine-tuning stage, a realistic dataset is adopted to fine-tune the pre-trained model for better distribution alignment with real-world data. Comprehensive experiments on benchmarks demonstrate that our method can significantly outperform previous domain generalization or meta-learning methods by a clear margin. Our source code will also be publicly available at https://github.com/JeremyXSC/DMF.
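A minimal sketch of the common-space fusion idea, assuming per-modality projections followed by concatenation; the modality set, dimensions, and fusion operator are illustrative, not the paper's exact architecture.

```python
import torch

class CommonSpaceFusion(torch.nn.Module):
    """Project each modality into one common space, then fuse; a simple
    stand-in for the multimodal fusion strategy described above."""
    def __init__(self, dims=(("image", 2048), ("text", 768)), common=512):
        super().__init__()
        self.proj = torch.nn.ModuleDict(
            {m: torch.nn.Linear(d, common) for m, d in dims})
        self.fuse = torch.nn.Linear(common * len(self.proj), common)

    def forward(self, feats):
        # feats: dict mapping modality name -> (B, dim) feature tensor
        z = [torch.relu(self.proj[m](feats[m])) for m in self.proj]
        return self.fuse(torch.cat(z, dim=-1))  # (B, common) fused feature
```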
Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks. Different from traditional representation learning, which is based mostly on discretized labels, vision-language pre-training aligns images and texts in a common feature space, which allows zero-shot transfer to downstream tasks via prompting, i.e., classification weights are synthesized from natural language describing the classes of interest. In this work, we show that a major challenge for deploying such models in practice is prompt engineering, which requires domain expertise and is extremely time-consuming: one needs to spend a significant amount of time tuning the wording, since a slight change in wording can have a huge impact on performance. Inspired by recent advances in prompt learning research in natural language processing (NLP), we propose Context Optimization (CoOp), a simple approach specifically for adapting CLIP-like vision-language models to downstream image recognition. Concretely, CoOp models a prompt's context words with learnable vectors while keeping the entire set of pre-trained parameters fixed. To handle different image recognition tasks, we provide two implementations of CoOp: unified context and class-specific context. Through extensive experiments on 11 datasets, we demonstrate that CoOp requires as few as one or two shots to beat hand-crafted prompts by a decent margin, and obtains significant improvements over prompt engineering with more shots, e.g., with 16 shots the average gain is around 15% (with the highest reaching over 45%). Despite being a learning-based approach, CoOp achieves superb domain generalization performance compared with the zero-shot model using hand-crafted prompts.
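CoOp's core mechanism is small enough to sketch directly: a set of learnable context vectors shared across classes, concatenated with each class name's frozen token embeddings and passed through the frozen text encoder. The pre-computed `class_name_embeds` tensor is an assumed input for brevity.

```python
import torch

class CoOpPromptLearner(torch.nn.Module):
    """Unified-context CoOp prompt: only `self.ctx` is trained, CLIP's
    encoders and the class-name embeddings stay frozen."""
    def __init__(self, class_name_embeds, n_ctx=16, dim=512):
        super().__init__()
        self.ctx = torch.nn.Parameter(torch.empty(n_ctx, dim))
        torch.nn.init.normal_(self.ctx, std=0.02)
        # (C, T, dim) frozen token embeddings of the tokenized class names
        self.register_buffer("name_embeds", class_name_embeds)

    def forward(self):
        ctx = self.ctx.unsqueeze(0).expand(self.name_embeds.size(0), -1, -1)
        # (C, n_ctx + T, dim): fed to the frozen text encoder to synthesize
        # the per-class classification weights
        return torch.cat([ctx, self.name_embeds], dim=1)
```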
Although existing multi-object tracking (MOT) algorithms have obtained competitive performance on various benchmarks, almost all of them train and validate models on the same domain. The domain generalization problem of MOT is hardly studied. To bridge this gap, we first make the observation that the high-level information contained in natural language is domain-invariant across different tracking domains. Based on this observation, we propose to introduce natural language representation into visual MOT models to boost the domain generalization ability. However, it is infeasible to label every tracking target with a textual description. To tackle this problem, we design two modules, namely visual context prompting (VCP) and visual-language mixing (VLM). Specifically, VCP generates visual prompts based on the input frames. VLM joins the information in the generated visual prompts and the textual prompts from a pre-defined Trackbook to obtain instance-level pseudo textual descriptions, which are domain invariant to different tracking scenes. Through training models on MOT17 and validating them on MOT20, we observe that the pseudo textual descriptions generated by our proposed modules improve the generalization performance of query-based trackers by large margins.
Text-based person retrieval aims to find the queried person based on a textual description. The key is to learn a common latent space mapping between the visual and textual modalities. To achieve this goal, existing works employ segmentation to obtain explicit cross-modal alignments or utilize attention to explore salient alignments. These methods have two shortcomings: 1) labeling cross-modal alignments is time-consuming; 2) attention methods can explore salient cross-modal alignments but may ignore some subtle yet valuable pairs. To relieve these issues, we introduce an Implicit Visual-Textual (IVT) framework for text-based person retrieval. Unlike previous models, IVT utilizes a single network to learn representations for both modalities, which contributes to the visual-textual interaction. To explore fine-grained alignment, we further propose two implicit semantic alignment paradigms: multi-level alignment (MLA) and bidirectional mask modeling (BMM). The MLA module explores finer matching at the sentence, phrase, and word levels, while the BMM module aims to mine more semantic alignments between the visual and textual modalities. Extensive experiments are carried out to evaluate the proposed IVT on public datasets, i.e., CUHK-PEDES, RSTPReID, and ICFG-PEDES. Even without explicit body-part alignment, our approach still achieves state-of-the-art performance. Code is available at: https://github.com/tencentyouturesearch/personretrieval-ivt.
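A hedged sketch of a BMM-style objective: randomly mask tokens in each modality and still require the shared encoder to align the pair, which pushes matching beyond the most salient tokens. The pooled-feature encoder interface and zeroing-based masking are simplifications of IVT's actual design.

```python
import torch
import torch.nn.functional as F

def bidirectional_mask_modeling(encoder, img_tokens, txt_tokens, mask_ratio=0.3):
    """encoder: shared network returning a pooled (B, D) feature (assumed);
    img_tokens/txt_tokens: (B, N, D) and (B, T, D) token sequences."""
    def mask(tokens):
        keep = torch.rand(tokens.shape[:2], device=tokens.device) > mask_ratio
        return tokens * keep.unsqueeze(-1)   # zero out masked tokens (simple stand-in)
    v = F.normalize(encoder(mask(img_tokens)), dim=-1)       # (B, D)
    t = F.normalize(encoder(mask(txt_tokens)), dim=-1)       # (B, D)
    logits = v @ t.t() / 0.07
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```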
Deep learning-based models encounter challenges when processing long-tailed data in the real world. Existing solutions usually employ balancing strategies or transfer learning to deal with the class-imbalance problem, based on the image modality alone. In this work, we propose a visual-linguistic long-tailed recognition framework, termed VL-LTR, and conduct empirical studies on the benefits of introducing the text modality into long-tailed recognition (LTR). Compared with existing approaches, the proposed VL-LTR has the following merits: (1) our method can not only learn visual representations from images but also learn the corresponding linguistic representations from noisy class-level text descriptions collected from the Internet; (2) our method can effectively use the learned visual-linguistic representations to improve visual recognition performance, especially for classes with fewer image samples. We also conduct extensive experiments and set a new state-of-the-art on widely used LTR benchmarks. Notably, our method achieves 77.2% overall accuracy on ImageNet-LT, which significantly outperforms the previous best method by over 17 points and is close to the prevailing performance of training on the full ImageNet. Code shall be released.
Contrastive Language-Image Pre-training (CLIP) has been shown to learn visual representations with excellent transferability, achieving promising accuracy for zero-shot classification. To further improve its downstream performance, existing works propose additional learnable modules on top of CLIP and fine-tune them on few-shot training sets. However, the resulting extra training cost and data requirements severely hinder the efficiency of model deployment and knowledge transfer. In this paper, we introduce a free-lunch enhancement method, CALIP, to boost CLIP's zero-shot performance via a parameter-free attention module. Specifically, we guide the visual and textual representations to interact with each other and explore cross-modal informative features via attention. As pre-training has largely reduced the embedding distance between the two modalities, we discard all learnable parameters in the attention and bidirectionally update the multi-modal features, making the whole process parameter-free and training-free. In this way, the images are blended with textual-aware signals and the text representations become visually guided for better adaptive zero-shot alignment. We evaluate CALIP on various benchmarks across 14 datasets for both 2D image and 3D point cloud few-shot classification, showing consistent zero-shot performance improvements over CLIP. Based on this, we further insert a small number of linear layers into CALIP's attention module and verify its robustness under the few-shot setting, which also achieves leading performance compared with existing methods. These extensive experiments demonstrate the superiority of our approach for efficiently enhancing CLIP.
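The parameter-free attention can be sketched as similarity-based cross attention with no learnable projections, updating both modalities; the scaling factor and residual blending below are illustrative choices rather than CALIP's exact formulation.

```python
import torch
import torch.nn.functional as F

def parameter_free_attention(visual, textual, beta=2.0):
    """visual: (B, N, D) spatial image features; textual: (K, D) class text
    features. Attention weights come directly from feature similarities."""
    v = F.normalize(visual, dim=-1)
    t = F.normalize(textual, dim=-1)
    sim = torch.einsum("bnd,kd->bnk", v, t)                  # cross-modal similarity
    # visual features blended with text-aware signals
    v_new = visual + torch.einsum("bnk,kd->bnd",
                                  F.softmax(sim * beta, dim=-1), textual)
    # text features guided by the visual map (averaged over the batch)
    t_new = textual + torch.einsum("bnk,bnd->kd",
                                   F.softmax(sim * beta, dim=1), visual) / visual.size(0)
    return v_new, t_new
```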
Vision-language pre-training (VLP) models have shown remarkable performance on various downstream tasks. Their success heavily relies on the scale of the pre-training cross-modal datasets. However, the lack of large-scale datasets and benchmarks in Chinese hinders the development of Chinese VLP models and broader multilingual applications. In this work, we release a large-scale Chinese cross-modal dataset named Wukong, which contains 100 million Chinese image-text pairs collected from the web. Wukong aims to benchmark different multi-modal pre-training methods to facilitate VLP research and community development. Furthermore, we release a group of models pre-trained with various image encoders (ViT-B/ViT-L/SwinT) and also apply advanced pre-training techniques to VLP, such as locked-image text tuning, token-wise similarity in contrastive learning, and reduced-token interaction. Extensive experiments and benchmarks on different downstream tasks are also provided, including the largest human-verified image-text test dataset to date. Experiments show that Wukong can serve as a promising Chinese pre-training dataset and benchmark for different cross-modal learning methods. For the zero-shot image classification task on 10 datasets, $Wukong_{ViT-L}$ achieves an average accuracy of 73.03%. For the image-text retrieval task, it achieves a mean recall of 71.6% on AIC-ICC, which is 12.9% higher than WenLan 2.0. Our Wukong models are also benchmarked against other variants on downstream tasks over multiple datasets, e.g., Flickr8K-CN, Flickr-30K-CN, COCO-CN, etc. More information can be found at: https://wukong-dataset.github.io/wukong-dataset/.
In computer vision, multi-label classification, including zero-shot multi-label classification, is an important task with many real-world applications. In this paper, we propose a novel algorithm, Aligned Dual moDality ClaSsifier (ADDS), which includes a Dual-Modal decoder (DM-decoder) with alignment between visual and textual features, for multi-label classification tasks. Moreover, we design a simple yet effective method called Pyramid-Forwarding to enhance performance on high-resolution inputs. Extensive experiments on standard multi-label benchmark datasets (MS-COCO and NUS-WIDE) demonstrate that our approach significantly outperforms previous methods and provides state-of-the-art performance for conventional multi-label classification, zero-shot multi-label classification, and an extreme case called single-to-multi-label classification, where models trained on single-label datasets (ImageNet-1K, ImageNet-21K) are tested on multi-label ones (MS-COCO and NUS-WIDE). We also analyze how visual-textual alignment contributes to the proposed approach, validate the significance of the DM-decoder, and demonstrate the effectiveness of Pyramid-Forwarding on vision transformers.
We study joint learning of Convolutional Neural Network (CNN) and Transformer for vision-language pre-training (VLPT), which aims to learn cross-modal alignments from millions of image-text pairs. State-of-the-art approaches extract salient image regions and align regions with words step by step. As region-based visual features usually represent parts of an image, it is challenging for existing vision-language models to fully understand the semantics of paired natural language. In this paper, we propose SOHO to "See Out of tHe bOx": it takes a whole image as input and learns vision-language representations in an end-to-end manner. SOHO does not require bounding box annotations, which enables inference 10 times faster than region-based approaches. In particular, SOHO learns to extract comprehensive yet compact image features through a visual dictionary (VD) that facilitates cross-modal understanding. The VD is designed to represent consistent visual abstractions of similar semantics. It is updated on-the-fly and utilized in our proposed pre-training task Masked Visual Modeling (MVM). We conduct experiments on four well-established vision-language tasks following standard VLPT settings. In particular, SOHO achieves absolute gains of 2.0% R@1 on the MSCOCO text retrieval 5k test split, 1.5% accuracy on the NLVR2 test-P split, and 6.7% accuracy on the SNLI-VE test split.
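The visual dictionary admits a compact sketch: nearest-codeword assignment with an on-the-fly moving-average update and a straight-through estimator. The dictionary size, momentum, and update rule below are illustrative assumptions, not SOHO's exact configuration.

```python
import torch

class VisualDictionary(torch.nn.Module):
    """Assign each grid feature to its nearest codeword and update codewords
    by a moving average; the returned indices can then be masked to form an
    MVM-style pre-training target."""
    def __init__(self, num_codes=2048, dim=768, momentum=0.99):
        super().__init__()
        self.register_buffer("codes", torch.randn(num_codes, dim))
        self.momentum = momentum

    @torch.no_grad()
    def update(self, feats, idx):
        for i in idx.unique():
            mean = feats[idx == i].mean(dim=0)
            self.codes[i] = self.momentum * self.codes[i] + (1 - self.momentum) * mean

    def forward(self, feats):                 # feats: (N, dim) grid features
        dist = torch.cdist(feats, self.codes) # (N, num_codes) distances
        idx = dist.argmin(dim=-1)             # nearest codeword per feature
        if self.training:
            self.update(feats.detach(), idx)
        # straight-through: use codeword values, keep gradients to the encoder
        quant = self.codes[idx]
        return feats + (quant - feats).detach(), idx
```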
In the field of cross-modal retrieval, single-encoder models tend to perform better than dual-encoder models, but they suffer from high latency and low throughput. In this paper, we present a dual-encoder model called BagFormer that utilizes a cross-modal interaction mechanism to improve recall performance without sacrificing latency and throughput. BagFormer achieves this through bag-wise interactions, which allow text to be transformed to a more appropriate granularity and entity knowledge to be incorporated into the model. Our experiments demonstrate that BagFormer achieves results comparable to state-of-the-art single-encoder models on cross-modal retrieval tasks, while also offering efficient training and inference with 20.72 times lower latency and 25.74 times higher throughput.
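A hedged sketch of bag-wise late interaction: text tokens are pooled into bags (e.g., entity spans) and each bag is matched to its best image patch. The grouping and max-pooling below are illustrative, not necessarily BagFormer's exact mechanism.

```python
import torch
import torch.nn.functional as F

def bag_wise_score(patch_emb, token_emb, bag_ids):
    """patch_emb: (N, D) image patch embeddings; token_emb: (T, D) text token
    embeddings; bag_ids: (T,) bag index per token, assumed contiguous from 0."""
    num_bags = int(bag_ids.max().item()) + 1
    bags = torch.stack([F.normalize(token_emb[bag_ids == b].mean(dim=0), dim=-1)
                        for b in range(num_bags)])          # (num_bags, D)
    sim = bags @ F.normalize(patch_emb, dim=-1).t()         # (num_bags, N)
    return sim.max(dim=-1).values.mean()                    # best patch per bag
```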
With the development of Transformers, pre-trained models have advanced at a breakneck pace in recent years. They dominate the mainstream techniques in natural language processing (NLP) and computer vision (CV). How to adapt pre-training to vision-and-language (V-L) learning and improve downstream task performance has become a focus of multimodal learning. In this paper, we review the recent progress in vision-language pre-trained models (VL-PTMs). As the core content, we first briefly introduce several ways to encode raw images and texts into single-modal embeddings before pre-training. Then, we dive into the mainstream architectures of VL-PTMs for modeling the interaction between text and image representations. We further present the widely used pre-training tasks, and then introduce some common downstream tasks. We finally conclude the paper and present some promising research directions. Our survey aims to provide researchers with a synthesis and pointers to related research.
Most existing methods in vision-language pre-training rely on object-centric features extracted through object detection and make fine-grained alignments between the extracted features and texts. We argue that the use of object detection may not be suitable for vision-language pre-training. Instead, the task should be performed so that the regions of the "visual concepts" mentioned in the text are located in the image, and alignments between texts and visual concepts are identified, where the alignments are multi-grained. This paper proposes a new method called X-VLM to perform "multi-grained vision language pre-training". Experimental results show that X-VLM consistently outperforms state-of-the-art methods in many downstream vision-language tasks.
Cross-modal person re-identification (Re-ID) is critical for modern video surveillance systems. The key challenge is to align cross-modal representations according to the semantic information present for a person while ignoring background information. This work presents a novel convolutional neural network (CNN) based architecture designed to learn semantically aligned cross-modal visual and textual representations. The underlying building block, named AXM-Block, is a unified multi-layer network that dynamically exploits multi-scale knowledge and re-calibrates each modality according to shared semantics. To complement the convolutional design, contextual attention is applied in the text branch to capture long-term dependencies. Moreover, we propose a unique design to enhance visual part-based feature coherence and locality information. Our framework is novel in its ability to implicitly learn aligned semantics between modalities during the feature learning stage. The unified feature learning effectively utilizes textual data as a super-annotation signal for visual representation learning and automatically rejects irrelevant information. The entire AXM-Net is trained end-to-end on CUHK-PEDES data. We report results on two tasks, person search and cross-modal Re-ID. AXM-Net outperforms the current state-of-the-art (SOTA) methods and achieves 64.44% Rank@1 on the CUHK-PEDES test set. It also outperforms its competitors by >10% in cross-modal Re-ID on the CrossRe-ID and CUHK-SYSU datasets.