We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks (visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval) by making only minor additions to the base architecture. We observe significant improvements across tasks compared to existing task-specific models, achieving state-of-the-art on all four tasks. Our work represents a shift away from learning groundings between vision and language only as part of task training and towards treating visual grounding as a pretrainable and transferable capability.
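As a rough illustration of the co-attentional transformer layers mentioned above, the PyTorch-style sketch below exchanges queries between the two streams so that each modality attends to the other. Layer sizes, names, and the residual/normalization placement are illustrative assumptions, not ViLBERT's released implementation.

```python
import torch.nn as nn

class CoAttentionLayer(nn.Module):
    """Minimal two-stream co-attention block (hypothetical sizes)."""
    def __init__(self, dim=768, heads=12):
        super().__init__()
        # Each stream queries the *other* stream's keys/values.
        self.txt_attends_img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img_attends_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.img_ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm_t = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)

    def forward(self, txt, img):
        # Queries come from one modality, keys/values from the other.
        t2i, _ = self.txt_attends_img(txt, img, img)
        i2t, _ = self.img_attends_txt(img, txt, txt)
        txt = self.norm_t(txt + t2i)
        img = self.norm_v(img + i2t)
        return txt + self.txt_ffn(txt), img + self.img_ffn(img)
```

In the paper's two-stream design, such co-attention blocks are interleaved with ordinary within-stream transformer blocks, which this sketch omits.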
Vision-language pre-training is an emerging and fast-developing research topic that transfers multi-modal knowledge from rich-resource pre-training tasks to limited-resource downstream tasks. Unlike existing works that mainly learn a single generic encoder, we present a pre-trainable Universal Encoder-DEcoder Network (Uni-EDEN) to facilitate both vision-language perception (e.g., visual question answering) and generation (e.g., image captioning). Uni-EDEN is a two-stream Transformer-based structure consisting of three modules: an object encoder and a sentence encoder that separately learn the representation of each modality, and a sentence decoder that enables multi-modal reasoning and sentence generation through inter-modal interaction. Considering that the linguistic representation of each image can span different granularities in a hierarchy ranging from simple to comprehensive (individual labels, phrases, and natural sentences), we pre-train Uni-EDEN through multi-granular vision-language proxy tasks: Masked Object Classification (MOC), Masked Region Phrase Generation (MRPG), Image-Sentence Matching (ISM), and Masked Sentence Generation (MSG). In this way, Uni-EDEN is endowed with the power of both multi-modal representation extraction and language modeling. Extensive experiments demonstrate the generalizability of Uni-EDEN by fine-tuning it on four vision-language perception and generation downstream tasks.
Much of vision-and-language research focuses on a small but diverse set of independent tasks and supporting datasets often studied in isolation; however, the visually-grounded language understanding skills required for success at these tasks overlap significantly. In this work, we investigate these relationships between vision-and-language tasks by developing a large-scale, multi-task training regime. Our approach culminates in a single model on 12 datasets from four broad categories of task including visual question answering, caption-based image retrieval, grounding referring expressions, and multi-modal verification. Compared to independently trained single-task models, this represents a reduction from approximately 3 billion parameters to 270 million while simultaneously improving performance by 2.05 points on average across tasks. We use our multi-task framework to perform in-depth analysis of the effect of joint training diverse tasks. Further, we show that finetuning task-specific models from our single multi-task model can lead to further improvements, achieving performance at or above the state-of-the-art.
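To make the flavor of such a multi-task regime concrete, here is a minimal, hypothetical sampling loop over several task-specific data loaders; the paper's actual training schedule is more sophisticated, so treat this purely as a sketch.

```python
import random

def multitask_batches(loaders, steps):
    """Toy multi-task sampler: at each step, pick a task and yield one of its
    batches, restarting exhausted datasets. Uniform sampling is an assumption;
    weighting by dataset size is a common alternative."""
    iters = {name: iter(dl) for name, dl in loaders.items()}
    names = list(loaders)
    for _ in range(steps):
        name = random.choice(names)
        try:
            batch = next(iters[name])
        except StopIteration:
            iters[name] = iter(loaders[name])   # restart the exhausted dataset
            batch = next(iters[name])
        yield name, batch                       # route the batch to its task head
```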
Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodal inputs are simultaneously processed for joint visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design four pre-training tasks: Masked Language Modeling (MLM), Masked Region Modeling (MRM, with three variants), Image-Text Matching (ITM), and Word-Region Alignment (WRA). Different from previous work that applies joint random masking to both modalities, we use conditional masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). In addition to ITM for global image-text alignment, we also propose WRA via the use of Optimal Transport (OT) to explicitly encourage fine-grained alignment between words and image regions during pre-training. Comprehensive analysis shows that both conditional masking and OT-based WRA contribute to better pre-training. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR2.
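A toy sketch of the conditional-masking idea described above, assuming a 15% masking rate and placeholder data structures: only one modality is corrupted per example, so reconstruction is always conditioned on a full observation of the other modality.

```python
import random

def conditional_mask(text_tokens, region_feats, p=0.15, mask_token="[MASK]"):
    """Mask exactly one modality per example (UNITER-style conditional masking).
    Rates, tokens, and the 50/50 task split are illustrative assumptions."""
    text = list(text_tokens)
    regions = [list(r) for r in region_feats]
    if random.random() < 0.5:
        # Masked Language Modeling: corrupt text, keep all regions observed.
        targets = [(i, t) for i, t in enumerate(text) if random.random() < p]
        for i, _ in targets:
            text[i] = mask_token
    else:
        # Masked Region Modeling: zero out regions, keep the full sentence.
        targets = [(i, r) for i, r in enumerate(regions) if random.random() < p]
        for i, _ in targets:
            regions[i] = [0.0] * len(regions[i])
    return text, regions, targets   # targets hold the original values to predict
```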
This paper presents a unified Vision-Language Pre-training (VLP) model. The model is unified in that (1) it can be finetuned for either vision-language generation (e.g., image captioning) or understanding (e.g., visual question answering) tasks, and (2) it uses a shared multi-layer transformer network for both encoding and decoding, which differs from many existing methods where the encoder and decoder are implemented using separate models. The unified VLP model is pre-trained on a large amount of image-text pairs using the unsupervised learning objectives of two tasks: bidirectional and sequence-to-sequence (seq2seq) masked vision-language prediction. The two tasks differ solely in what context the prediction conditions on. This is controlled by utilizing specific self-attention masks for the shared transformer network. To the best of our knowledge, VLP is the first reported model that achieves state-of-the-art results on both vision-language generation and understanding tasks, as disparate as image captioning and visual question answering, across three challenging benchmark datasets: COCO Captions, Flickr30k Captions, and VQA 2.0. The code and the pre-trained models are available at https://github.com/LuoweiZhou/VLP.
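The sketch below illustrates, under an assumed token ordering (image regions first, then text), how two self-attention masks can make one shared transformer behave either bidirectionally or as a sequence-to-sequence decoder over the text; it is a simplified stand-in, not the paper's code.

```python
import torch

def build_attention_mask(num_img, num_txt, mode="bidirectional"):
    """Boolean (n, n) matrix where True means "may attend"."""
    n = num_img + num_txt
    mask = torch.zeros(n, n, dtype=torch.bool)
    if mode == "bidirectional":
        mask[:, :] = True                     # every token sees every token
    elif mode == "seq2seq":
        mask[:, :num_img] = True              # all tokens may attend to the image
        for i in range(num_img, n):
            mask[i, num_img:i + 1] = True     # text attends only to earlier text and itself
    return mask

# Example: 3 image regions followed by 4 text tokens
print(build_attention_mask(3, 4, mode="seq2seq").int())
```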
Large-scale pre-training methods of learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute force manner, in this paper, we propose a new learning method, Oscar, which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected and are often mentioned in the paired text. We pre-train an Oscar model on the public corpus of 6.5 million text-image pairs and fine-tune it on downstream tasks, setting new state-of-the-art results on six well-established vision-language understanding and generation tasks.
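The snippet below shows, in simplified form, how detected object tags can be spliced into the input as anchor points alongside word tokens and region features; the field names and special tokens are illustrative assumptions rather than Oscar's exact preprocessing.

```python
def build_word_tag_region_triple(caption_tokens, detections):
    """Assemble the (word tokens, object tags, region features) triple,
    with the tags shared between the text and image sides."""
    words = ["[CLS]"] + caption_tokens + ["[SEP]"]
    tags = [d["label"] for d in detections]           # e.g. "dog", "frisbee"
    regions = [d["feature"] for d in detections]      # detector region features
    return words + tags + ["[SEP]"], regions

tokens, region_feats = build_word_tag_region_triple(
    ["a", "dog", "catches", "a", "frisbee"],
    [{"label": "dog", "feature": [0.1] * 2048},
     {"label": "frisbee", "feature": [0.3] * 2048}],
)
print(tokens)
```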
In the past few years, the emergence of pre-trained models has brought uni-modal fields such as computer vision (CV) and natural language processing (NLP) into a new era. Substantial works have shown that such models benefit downstream uni-modal tasks and avoid training a new model from scratch. So can these pre-trained models be applied to multi-modal tasks? Researchers have explored this problem and made significant progress. This paper surveys recent advances and new frontiers in vision-language pre-training (VLP), including image-text and video-text pre-training. To give readers a better grasp of VLP, we first review its recent advances from five aspects: feature extraction, model architecture, pre-training objectives, pre-training datasets, and downstream tasks. Then we describe specific VLP models in detail. Finally, we discuss new frontiers in VLP. To the best of our knowledge, this is the first survey on VLP. We hope this survey can shed light on future research in the VLP field.
This paper presents OmniVL, a new foundation model designed to support both image-language and video-language tasks with one universal architecture. It adopts a unified transformer-based visual encoder for both image and video inputs and can therefore perform joint image-language and video-language pretraining. We demonstrate, for the first time, that such a paradigm benefits both image and video tasks, as opposed to the conventional one-directional transfer (e.g., using image-language to help video-language). To this end, we propose decoupled joint pretraining of image-language and video-language, which effectively decomposes vision-language modeling into spatial and temporal dimensions and yields performance gains on both image and video tasks. Moreover, we introduce a novel unified vision-language contrastive (UniVLC) loss that leverages image-text, video-text, image-label (e.g., image classification), and video-label (e.g., video action recognition) data together, so that both supervised and noisily supervised pretraining data are exploited as much as possible. Without extra task-specific adaptors, OmniVL can simultaneously support vision-only tasks (e.g., image classification, video action recognition), cross-modal alignment tasks (e.g., image/video-text retrieval), and multi-modal understanding and generation tasks (e.g., image/video question answering, captioning). We evaluate OmniVL on a wide range of downstream tasks and achieve state-of-the-art or competitive results at similar model size and data scale.
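As a rough sketch of the symmetric visual-text contrastive objective that such a unified loss builds on (in-batch negatives and a fixed temperature are assumptions, and the handling of image-label and video-label data is omitted):

```python
import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(visual_emb, text_emb, temperature=0.07):
    """Matched visual-text pairs sit on the diagonal of the similarity matrix;
    the loss pulls them together and pushes apart the in-batch negatives."""
    v = F.normalize(visual_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.t() / temperature                  # (B, B) similarities
    targets = torch.arange(v.size(0))
    loss_v2t = F.cross_entropy(logits, targets)       # visual -> text
    loss_t2v = F.cross_entropy(logits.t(), targets)   # text -> visual
    return (loss_v2t + loss_t2v) / 2
```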
Self-supervised vision-language pretraining from pure images and text with a contrastive loss is effective, but it ignores fine-grained alignment because the dual-stream architecture aligns image and text representations only at a global level. Earlier supervised, non-contrastive methods offered finer-grained alignment but required dense annotations that do not scale. We propose a single-stream architecture that aligns images and language at multiple levels (global, fine-grained patch, and concept/semantic) using two novel tasks: symmetric cross-modality reconstruction (XMM) and pseudo-labeled keyword prediction (PSL). In XMM, we mask input tokens from one modality and use cross-modal information to reconstruct the masked tokens, improving fine-grained alignment between the two modalities. In PSL, we use attention to select keywords in the caption, use a momentum encoder to recommend additional important keywords that are missing from the caption but represented in the image, and then train the visual encoder to predict the presence of these keywords, helping it learn semantic concepts that are essential for grounding text tokens to image regions. We demonstrate competitive performance and improved data efficiency on image-text retrieval, grounding, and visual question answering/reasoning against larger models and models trained on more data. Code and models are available at zaidkhan.me/simla.
The availability of large-scale image captioning and visual question answering datasets has contributed significantly to recent successes in vision-and-language pretraining. However, these datasets are often collected with overly restrictive requirements inherited from their original target tasks (e.g., image caption generation), which limit the resulting dataset scale and diversity. We take a step further in pushing the limits of vision-and-language pretraining data by relaxing the data collection pipeline used in Conceptual Captions 3M (CC3M) [70] and introduce the Conceptual 12M (CC12M), a dataset with 12 million image-text pairs specifically meant to be used for vision-and-language pre-training. We perform an analysis of this dataset and benchmark its effectiveness against CC3M on multiple downstream tasks with an emphasis on long-tail visual recognition. Our results clearly illustrate the benefit of scaling up pre-training data for vision-and-language tasks, as indicated by the new state-of-the-art results on both the nocaps and Conceptual Captions benchmarks.
We present Answer-Me, a task-aware multi-task framework that unifies a variety of question-answering tasks, such as visual question answering, visual entailment, and visual reasoning. In contrast to previous works using contrastive or generative captioning training, we propose a novel and simple recipe to pre-train a vision-language joint model, which is multi-task as well. The pre-training uses only noisy image captioning data and is formulated to use the entire architecture end-to-end with both a strong language encoder and decoder. Our results show state-of-the-art performance, zero-shot generalization, robustness to forgetting, and competitive single-task results across a variety of question answering tasks. Our multi-task mixture training learns from tasks of various question intents and thus generalizes better, including on zero-shot vision-language tasks. We conduct experiments in the challenging multi-task and open-vocabulary settings and across a variety of datasets and tasks, such as VQA2.0, SNLI-VE, NLVR2, and GQA. We observe that the proposed approach is able to generalize to unseen tasks and that more diverse mixtures lead to higher accuracy in both known and novel tasks.
Unified vision-language frameworks have advanced greatly in recent years, most of which adopt an encoder-decoder architecture to unify image-text tasks as sequence-to-sequence generation. However, existing video-language (VidL) models still require task-specific designs in model architecture and training objectives for each task. In this work, we explore a unified VidL framework, LAVENDER, where Masked Language Modeling (MLM) is used as the common interface for all pre-training and downstream tasks. Such unification leads to a simplified model architecture: only a lightweight MLM head, instead of a decoder with many more parameters, is needed on top of the multimodal encoder. Surprisingly, experimental results show that this unified framework achieves competitive performance on 14 VidL benchmarks, covering video question answering, text-to-video retrieval, and video captioning. Extensive analyses further demonstrate the advantages of LAVENDER over existing VidL methods: (i) it supports all downstream tasks with just a single set of parameter values when multi-task finetuned; (ii) it generalizes in few-shot settings on various downstream tasks; and (iii) it enables zero-shot evaluation on video question answering tasks. Code is available at https://github.com/microsoft/lavender.
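To illustrate what "MLM as the common interface" can look like in practice, the toy helper below casts three different VidL tasks as predicting a word at a [MASK] position; the templates and answer vocabularies are assumptions, not LAVENDER's exact formulation.

```python
def as_mlm_input(task, video_tokens, text):
    """Reduce heterogeneous tasks to one fill-the-[MASK] interface."""
    if task == "videoqa":
        # The answer is read off the [MASK] position.
        return video_tokens, f"Question: {text} Answer: [MASK]"
    if task == "retrieval":
        # Matching becomes a binary word (e.g. "true"/"false") at the [MASK].
        return video_tokens, f"{text} Does this text match the video? [MASK]"
    if task == "captioning":
        # Captions can be generated by repeatedly filling a trailing [MASK].
        return video_tokens, f"{text} [MASK]"
    raise ValueError(f"unknown task: {task}")
```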
Video question answering (VideoQA) is a complex task that requires diverse multi-modal data for training. Manual annotation of questions and answers for videos, however, is tedious and prohibits scalability. To tackle this problem, recent methods consider zero-shot settings with no manual annotation of visual question-answer pairs. In particular, a promising approach adapts frozen autoregressive language models pretrained on Web-scale text-only data to multi-modal inputs. In contrast, we here build on frozen bidirectional language models (BiLM) and show that such an approach provides a stronger and cheaper alternative for zero-shot VideoQA. In particular, (i) we combine visual inputs with the frozen BiLM using light trainable modules, (ii) we train such modules using Web-scraped multi-modal data, and finally (iii) we perform zero-shot VideoQA inference through masked language modeling, where the masked text is the answer to a given question. Our proposed approach, FrozenBiLM, outperforms the state of the art in zero-shot VideoQA by a significant margin on a variety of datasets, including LSMDC-FiB, iVQA, MSRVTT-QA, MSVD-QA, ActivityNet-QA, TGIF-FrameQA, How2QA, and TVQA. It also demonstrates competitive performance in the few-shot and fully supervised settings. Our code and models will be made publicly available at https://antoyang.github.io/frozenbilm.html.
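The inference mechanism in step (iii) can be demonstrated with a plain, text-only masked language model: the predicted answer is whatever token the bidirectional LM ranks highest at the [MASK] position. The snippet below uses a generic BERT checkpoint from Hugging Face as a stand-in and omits the visual conditioning through the light trainable modules.

```python
from transformers import pipeline

# Text-only stand-in for masked-language-modeling inference; FrozenBiLM
# additionally feeds video features into the frozen model via trained modules.
fill = pipeline("fill-mask", model="bert-base-uncased")

prompt = "Question: What animal is chasing the ball? Answer: [MASK]."
for candidate in fill(prompt, top_k=3):
    print(candidate["token_str"], round(candidate["score"], 3))
```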
Vision-Language Transformers can be learned without human labels (e.g., class labels, bounding boxes, etc.). Existing work, whether explicitly utilizing bounding boxes or patches, assumes that the visual backbone must first be trained on ImageNet class prediction before being integrated into a multimodal linguistic pipeline. We show that this is not necessary and introduce a new model, Vision-Language from Captions (VLC), built on top of Masked Auto-Encoders that does not require this supervision. In fact, in a head-to-head comparison between ViLT, the current state-of-the-art patch-based vision-language transformer which is pretrained with supervised object classification, and our model, VLC, we find that our approach (1) outperforms ViLT on standard benchmarks, (2) provides more interpretable and intuitive patch visualizations, and (3) is competitive with many larger models that utilize ROIs trained on annotated bounding boxes.
With the growing amount of image-text pair data and the diversity of vision-and-language (V&L) tasks, scholars have introduced a plethora of deep learning models in this research domain. Furthermore, in recent years, transfer learning has also shown tremendous success in computer vision (e.g., image classification, object detection) and in natural language processing (e.g., question answering, machine translation). Inheriting the spirit of transfer learning, research works in V&L have devised multiple pre-training techniques on large-scale datasets to enhance the performance of downstream tasks. The aim of this article is to provide a comprehensive review of contemporary V&L pre-trained models. In particular, we categorize and delineate pre-training approaches, along with a summary of state-of-the-art vision-and-language pre-trained models. Moreover, a list of training datasets and downstream tasks is supplied to further sharpen the perspective on V&L pre-training. Lastly, we take a further step and discuss numerous directions for future research.
This paper presents a detailed study of improving visual representations for vision language (VL) tasks and develops an improved object detection model to provide object-centric representations of images. Compared to the most widely used bottom-up and top-down model [2], the new model is bigger, better-designed for VL tasks, and pre-trained on much larger training corpora that combine multiple public annotated object detection datasets. Therefore, it can generate representations of a richer collection of visual objects and concepts. While previous VL research focuses mainly on improving the vision-language fusion model and leaves the object detection model improvement untouched, we show that visual features matter significantly in VL models. In our experiments we feed the visual features generated by the new object detection model into a Transformer-based VL fusion model OSCAR [21], and utilize an improved approach OSCAR+ to pre-train the VL model and fine-tune it on a wide range of downstream VL tasks. Our results show that the new visual features significantly improve the performance across all VL tasks, creating new state-of-the-art results on seven public benchmarks. Code, models and pre-extracted features are released at https://github.com/pzzhang/VinVL.
State-of-the-art vision and vision-and-language models rely on large-scale visio-linguistic pretraining to obtain good performance on a variety of downstream tasks. Generally, such models are either cross-modal (contrastive) or multi-modal (with earlier fusion), but not both, and they often target only specific modalities or tasks. A promising direction would be to use a single holistic universal model, as a "foundation", that targets all modalities at once: a true vision and language foundation model should be good at vision tasks, language tasks, and cross- and multi-modal vision-and-language tasks. We introduce FLAVA as such a model and demonstrate impressive performance on a wide range of 35 tasks spanning these target modalities.
Self-supervised vision-and-language pretraining (VLP) aims to learn transferable multi-modal representations from large-scale image-text data and to achieve strong performance on a broad range of vision-language tasks after fine-tuning. Previous mainstream VLP approaches typically adopt a two-step strategy that relies on external object detectors to encode images in a multi-modal Transformer framework, which suffers from a restrictive object concept space, limited image context, and inefficient computation. In this paper, we propose an object-aware end-to-end VLP framework that directly feeds image grid features from CNNs into the Transformer and learns the multi-modal representations jointly. More importantly, we propose to perform object knowledge distillation to facilitate learning cross-modal alignment at different semantic levels. To achieve this, we design two novel pretext tasks by taking object features and their semantic labels from external detectors as supervision: 1) an object-guided masked vision modeling task focuses on enforcing object-aware representation learning in the multi-modal Transformer; 2) a phrase-region alignment task aims to improve cross-modal alignment by utilizing the similarities between noun phrases and object labels in the linguistic space. Extensive experiments on a wide range of vision-language tasks demonstrate the efficacy of our proposed framework, and we achieve competitive or superior performance over existing pretraining strategies.
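An illustrative sketch of a phrase-region alignment objective in the spirit of the second pretext task: soft alignment targets are derived from similarities between noun-phrase embeddings and the detector's object-tag embeddings in the linguistic space, and the model's phrase-to-region attention is pushed towards them. Tensor shapes and the KL-divergence form are assumptions; the paper's exact formulation may differ.

```python
import torch.nn.functional as F

def phrase_region_alignment_loss(phrase_emb, tag_emb, region_logits, tau=0.1):
    """phrase_emb:    (P, D) noun-phrase embeddings from the caption
    tag_emb:       (R, D) embeddings of the object tags of the R regions
    region_logits: (P, R) the model's unnormalized phrase-to-region attention"""
    sim = F.cosine_similarity(phrase_emb.unsqueeze(1), tag_emb.unsqueeze(0), dim=-1)
    targets = F.softmax(sim / tau, dim=-1)            # soft linguistic alignment targets
    log_pred = F.log_softmax(region_logits, dim=-1)   # model's predicted alignment
    return F.kl_div(log_pred, targets, reduction="batchmean")
```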
Vision-and-language pretraining has become the prevalent approach for tackling multimodal downstream tasks. The current trend is towards ever larger models and pretraining datasets. In the long run, this computational headlong rush does not seem reasonable for moving towards sustainable solutions, and it de facto excludes academic laboratories with limited resources. In this work, we propose a new framework, called ViCHA, that efficiently exploits the input data to improve learning through: (a) a new hierarchical cross-modal alignment loss, (b) a new self-supervised scheme based on masked image modeling, and (c) leveraging image-level annotations, called visual concepts, obtained with existing foundation models such as CLIP, to boost the performance of the image encoder. Although pretrained on four times less data, our ViCHA strategy outperforms other approaches on downstream tasks such as image-text retrieval, VQA, visual reasoning, visual entailment, and visual grounding. The code will be made publicly available here: https://github.com/mshukor/vicha
We study joint learning of Convolutional Neural Network (CNN) and Transformer for vision-language pre-training (VLPT) which aims to learn cross-modal alignments from millions of image-text pairs. State-of-the-art approaches extract salient image regions and align regions with words step-by-step. As region-based visual features usually represent parts of an image, it is challenging for existing vision-language models to fully understand the semantics from paired natural languages. In this paper, we propose SOHO to "See Out of tHe bOx" that takes a whole image as input, and learns vision-language representation in an end-to-end manner. SOHO does not require bounding box annotations, which enables inference 10 times faster than region-based approaches. In particular, SOHO learns to extract comprehensive yet compact image features through a visual dictionary (VD) that facilitates cross-modal understanding. VD is designed to represent consistent visual abstractions of similar semantics. It is updated on-the-fly and utilized in our proposed pre-training task Masked Visual Modeling (MVM). We conduct experiments on four well-established vision-language tasks by following standard VLPT settings. In particular, SOHO achieves absolute gains of 2.0% R@1 score on MSCOCO text retrieval 5k test split, 1.5% accuracy on NLVR2 test-P split, and 6.7% accuracy on SNLI-VE test split, respectively.
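A minimal sketch of the visual dictionary idea, assuming nearest-neighbour quantization of grid features and a moving-average refresh of the entries that were used; the dictionary size, momentum value, and update rule are illustrative, not SOHO's exact procedure.

```python
import torch

class VisualDictionary:
    """Map grid features to discrete visual-dictionary entries and refresh
    those entries on the fly."""
    def __init__(self, num_entries=2048, dim=768, momentum=0.9):
        self.embed = torch.randn(num_entries, dim)
        self.momentum = momentum

    def quantize(self, grid_feats):
        # grid_feats: (N, dim) image grid features from the visual backbone
        dists = torch.cdist(grid_feats, self.embed)       # (N, num_entries)
        idx = dists.argmin(dim=1)                         # nearest entry per feature
        return self.embed[idx], idx

    @torch.no_grad()
    def update(self, grid_feats, idx):
        # Moving-average refresh of only the entries that were selected.
        for k in idx.unique():
            mean_feat = grid_feats[idx == k].mean(dim=0)
            self.embed[k] = self.momentum * self.embed[k] + (1 - self.momentum) * mean_feat
```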