Natural language BERTs are trained on language corpora in a self-supervised manner. Unlike natural language BERTs, vision-language BERTs need paired data for training, which restricts the scale of VL-BERT pre-training. We propose a self-training approach that allows training VL-BERTs from unlabeled image data. The proposed method starts from our unified conditional model, a vision-language BERT model that can perform zero-shot conditional generation. Given different conditions, the unified conditional model can generate captions, dense captions, and even questions. We use labeled image data to train a teacher model and use the trained model to generate pseudo captions on unlabeled image data. We then combine the labeled data and the pseudo-labeled data to train a student model. The process is iterated by putting the student model forward as a new teacher. With the proposed self-training approach and only 300k unlabeled extra data, we obtain competitive or better performance than models of similar size trained with 3 million extra image data.
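The described self-training loop reduces to a few steps; below is a minimal runnable sketch in which `train_fn` and `caption_fn` are hypothetical stand-ins for training the unified conditional VL-BERT and generating pseudo captions with it (not the authors' API).

```python
from typing import Callable, List, Tuple

Image, Caption = str, str   # stand-in types: an image path and its caption

def self_train(train_fn: Callable[[List[Tuple[Image, Caption]]], object],
               caption_fn: Callable[[object, Image], Caption],
               labeled: List[Tuple[Image, Caption]],
               unlabeled: List[Image],
               rounds: int = 3):
    """Iterated teacher-student self-training as described above."""
    teacher = train_fn(labeled)                          # 1) teacher on labeled pairs
    for _ in range(rounds):
        pseudo = [(img, caption_fn(teacher, img))        # 2) pseudo-caption unlabeled images
                  for img in unlabeled]
        student = train_fn(labeled + pseudo)             # 3) student on the combined data
        teacher = student                                # 4) student becomes the new teacher
    return teacher

# Toy usage with stand-in callables:
model = self_train(train_fn=lambda pairs: {"examples_seen": len(pairs)},
                   caption_fn=lambda m, img: "a pseudo caption",
                   labeled=[("img0.jpg", "a dog on a couch")],
                   unlabeled=["img1.jpg", "img2.jpg"])
print(model)
```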
We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks (visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval) by making only minor additions to the base architecture. We observe significant improvements across tasks compared to existing task-specific models, achieving state-of-the-art on all four tasks. Our work represents a shift away from learning groundings between vision and language only as part of task training and towards treating visual grounding as a pretrainable and transferable capability.
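As a rough illustration of the co-attentional interaction described above, here is a minimal PyTorch sketch in which each stream's queries attend to the other stream's keys and values; the released ViLBERT block additionally interleaves self-attention, feed-forward, and residual sublayers, so this is only the core idea.

```python
import torch
import torch.nn as nn

class CoAttentionLayer(nn.Module):
    """Each stream queries the other stream (a simplified sketch)."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.txt_attends_img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img_attends_txt = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text: torch.Tensor, image: torch.Tensor):
        # text: (B, T, dim) token features; image: (B, R, dim) region features
        text_out, _ = self.txt_attends_img(query=text, key=image, value=image)
        image_out, _ = self.img_attends_txt(query=image, key=text, value=text)
        return text_out, image_out

layer = CoAttentionLayer()
t, v = layer(torch.randn(2, 12, 256), torch.randn(2, 36, 256))
print(t.shape, v.shape)   # torch.Size([2, 12, 256]) torch.Size([2, 36, 256])
```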
This paper presents a unified Vision-Language Pre-training (VLP) model. The model is unified in that (1) it can be finetuned for either vision-language generation (e.g., image captioning) or understanding (e.g., visual question answering) tasks, and (2) it uses a shared multi-layer transformer network for both encoding and decoding, which differs from many existing methods where the encoder and decoder are implemented using separate models. The unified VLP model is pre-trained on a large amount of image-text pairs using the unsupervised learning objectives of two tasks: bidirectional and sequence-to-sequence (seq2seq) masked vision-language prediction. The two tasks differ solely in what context the prediction conditions on. This is controlled by utilizing specific self-attention masks for the shared transformer network. To the best of our knowledge, VLP is the first reported model that achieves state-of-the-art results on both vision-language generation and understanding tasks, as disparate as image captioning and visual question answering, across three challenging benchmark datasets: COCO Captions, Flickr30k Captions, and VQA 2.0. The code and the pre-trained models are available at https://github.com/LuoweiZhou/VLP.
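Since the two objectives differ only in the self-attention mask, a small sketch can make the distinction concrete. The layout below, a sequence of visual tokens followed by text tokens, is an assumption for illustration rather than the exact VLP implementation.

```python
import torch

def attention_mask(num_visual: int, num_text: int, mode: str) -> torch.Tensor:
    """Boolean mask (True = may attend) over a [visual ; text] token sequence.

    'bidirectional': every token attends to every token.
    'seq2seq': visual tokens attend only to visual tokens; each text token
               attends to all visual tokens plus itself and earlier text tokens.
    """
    n = num_visual + num_text
    if mode == "bidirectional":
        return torch.ones(n, n, dtype=torch.bool)
    mask = torch.zeros(n, n, dtype=torch.bool)
    mask[:num_visual, :num_visual] = True                # visual -> visual
    mask[num_visual:, :num_visual] = True                # text -> visual
    mask[num_visual:, num_visual:] = torch.ones(num_text, num_text).tril().bool()  # text -> past text
    return mask

print(attention_mask(2, 3, "seq2seq").int())
```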
Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodality inputs are simultaneously processed for joint visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design four pre-training tasks: Masked Language Modeling (MLM), Masked Region Modeling (MRM, with three variants), Image-Text Matching (ITM), and Word-Region Alignment (WRA). Different from previous work that applies joint random masking to both modalities, we use conditional masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). In addition to ITM for global image-text alignment, we also propose WRA via the use of Optimal Transport (OT) to explicitly encourage fine-grained alignment between words and image regions during pre-training. Comprehensive analysis shows that both conditional masking and OT-based WRA contribute to better pre-training. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering.
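A toy sketch of the conditional-masking idea: each training example masks only one modality while the other stays fully observed. Token and feature handling are simplified (region features are plain numbers here), and the function is illustrative rather than UNITER's actual preprocessing.

```python
import random

def conditional_masking(text_tokens, region_feats, mask_prob=0.15):
    """Mask only ONE modality per example, keeping the other fully observed
    (in contrast to jointly masking both)."""
    text, regions = list(text_tokens), list(region_feats)
    if random.random() < 0.5:
        # Masked language modeling branch: text masked, regions untouched.
        labels = [None] * len(text)
        for i, tok in enumerate(text):
            if random.random() < mask_prob:
                labels[i], text[i] = tok, "[MASK]"
        return text, regions, ("mlm", labels)
    # Masked region modeling branch: regions masked, text untouched.
    labels = [None] * len(regions)
    for i, feat in enumerate(regions):
        if random.random() < mask_prob:
            labels[i], regions[i] = feat, 0.0   # zero out the masked region feature
    return text, regions, ("mrm", labels)

print(conditional_masking(["a", "cat", "on", "a", "mat"], [0.3, 0.7, 0.5]))
```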
Most existing vision-language pre-training methods focus on understanding tasks and use BERT-like objectives (masked language modeling and image-text matching) during pre-training. Although they perform well on many downstream understanding tasks such as visual question answering, image-text retrieval, and visual entailment, they lack the ability to generate. To address this problem, we propose unified multimodal pre-training for both vision-language understanding and generation (UniVL). The proposed UniVL can handle both understanding tasks and generation tasks. We augment existing pre-training paradigms, which only use random masks, with causal masks, i.e., triangular masks that mask out future tokens, so that the pre-trained model has autoregressive generation ability by design. We formulate several previous understanding tasks as text-generation tasks and propose a prompt-based method for fine-tuning on different downstream tasks. Our experiments show that there is a trade-off between understanding tasks and generation tasks when using the same model, and that a feasible way to improve both is to use more data. Our UniVL framework attains performance comparable to recent vision-language pre-training methods on both understanding and generation tasks. Moreover, we show that prompt-based fine-tuning is more data-efficient: it outperforms discriminative methods in few-shot scenarios.
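As an example of recasting an understanding task as text generation, the snippet below shows one hypothetical prompt template for VQA; the paper's exact templates may differ.

```python
def vqa_as_generation(question: str, answer: str):
    """Recast a discriminative VQA example as a text-generation example
    (a hypothetical prompt format, not necessarily the authors' template)."""
    prompt = f"question: {question} answer:"
    target = f" {answer}"
    return prompt, target

print(vqa_as_generation("what color is the bus?", "yellow"))
```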
This paper presents a detailed study of improving visual representations for vision language (VL) tasks and develops an improved object detection model to provide object-centric representations of images. Compared to the most widely used bottom-up and top-down model [2], the new model is bigger, better-designed for VL tasks, and pre-trained on much larger training corpora that combine multiple public annotated object detection datasets. Therefore, it can generate representations of a richer collection of visual objects and concepts. While previous VL research focuses mainly on improving the vision-language fusion model and leaves the object detection model improvement untouched, we show that visual features matter significantly in VL models. In our experiments we feed the visual features generated by the new object detection model into a Transformer-based VL fusion model OSCAR [21], and utilize an improved approach OSCAR+ to pre-train the VL model and fine-tune it on a wide range of downstream VL tasks. Our results show that the new visual features significantly improve the performance across all VL tasks, creating new state-of-the-art results on seven public benchmarks. Code, models and pre-extracted features are released at https://github.com/pzzhang/VinVL.
Vision-language pre-training is an emerging and fast-developing research topic that transfers multimodal knowledge from resource-rich pre-training tasks to resource-limited downstream tasks. Unlike existing works that mainly learn a single generic encoder, we present a pre-trainable universal encoder-decoder network (Uni-EDEN) to facilitate both vision-language perception (e.g., visual question answering) and generation (e.g., image captioning). Uni-EDEN is a two-stream Transformer-based structure consisting of three modules: object and sentence encoders that separately learn the representations of each modality, and a sentence decoder that enables both multimodal reasoning and sentence generation through inter-modal interaction. Considering that the linguistic representation of each image can span different granularities in a hierarchy ranging from simple to comprehensive, namely individual labels, phrases, and natural sentences, we pre-train Uni-EDEN through multi-granular vision-language proxy tasks: masked object classification (MOC), masked region phrase generation (MRPG), image-sentence matching (ISM), and masked sentence generation (MSG). In this way, Uni-EDEN is endowed with the power of both multimodal representation extraction and language modeling. Extensive experiments demonstrate the generalizability of Uni-EDEN by fine-tuning it on four vision-language perception and generation downstream tasks.
Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly, the alignment and relationships between these two modalities. We thus propose the LXMERT (Learning Cross-Modality Encoder Representations from Transformers) framework to learn these vision-and-language connections. In LXMERT, we build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. Next, to endow our model with the capability of connecting vision and language semantics, we pre-train the model with large amounts of image-and-sentence pairs, via five diverse representative pre-training tasks: masked language modeling, masked object prediction (feature regression and label classification), cross-modality matching, and image question answering. These tasks help in learning both intra-modality and cross-modality relationships. After fine-tuning from our pretrained parameters, our model achieves the state-of-the-art results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our pretrained cross-modality model by adapting it to a challenging visual-reasoning task, NLVR2, and improve the previous best result by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies to prove that both our novel model components and pretraining strategies significantly contribute to our strong results; and also present several attention visualizations for the different encoders.
Large-scale pre-training methods of learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute force manner, in this paper, we propose a new learning method, Oscar, which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected, and are often mentioned in the paired text. We pre-train an Oscar model on the public corpus of 6.5 million text-image pairs, and fine-tune it on downstream tasks, creating new state-of-the-arts on six well-established vision-language understanding and generation tasks.
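A sketch of the kind of input layout Oscar describes, with detected object tags placed alongside the caption words so they can act as anchor points to the region features; the special tokens and ordering here are illustrative assumptions, not the released preprocessing.

```python
def oscar_style_input(caption_tokens, object_tags, region_features):
    """Lay out one training example as (word tokens, object tags, region features).
    The [CLS]/[SEP] placement is an illustrative choice."""
    text_sequence = ["[CLS]"] + caption_tokens + ["[SEP]"] + object_tags + ["[SEP]"]
    # The tags live in the text embedding space, which is what lets them act as
    # anchor points linking caption words to the detected regions they describe.
    return text_sequence, region_features

tokens, regions = oscar_style_input(
    ["a", "dog", "on", "a", "couch"], ["dog", "couch"], [[0.1] * 4, [0.2] * 4])
print(tokens)
```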
We present GLIPv2, a grounded VL understanding model that serves both localization tasks (e.g., object detection, instance segmentation) and vision-language (VL) understanding tasks (e.g., VQA, image captioning). GLIPv2 elegantly unifies localization pre-training and vision-language pre-training (VLP) with three pre-training tasks: phrase grounding as a VL reformulation of the detection task, region-word contrastive learning as a novel region-word-level contrastive learning task, and masked language modeling. This unification not only simplifies the previous multi-stage VLP procedure but also achieves mutual benefits between localization and understanding tasks. Experimental results show that a single GLIPv2 model (with all model weights shared) achieves near-SoTA performance on various localization and understanding tasks. The model also shows (1) strong zero-shot and few-shot adaptation performance on open-vocabulary object detection tasks and (2) superior grounding capability on VL understanding tasks. Code will be released at https://github.com/microsoft/glip.
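A hedged, single-image sketch of a region-word contrastive objective of the sort named above: each detected region is pushed toward the word it is grounded to and away from the other words. The actual GLIPv2 loss is defined over batches and in both directions, so treat this only as the core idea.

```python
import torch
import torch.nn.functional as F

def region_word_contrastive(region_feats, word_feats, word_index, temperature=0.07):
    """region_feats: (R, D) region embeddings; word_feats: (W, D) word embeddings;
    word_index: (R,) long tensor giving, for each region, the index of the word
    it is grounded to. One direction only (region -> word) for brevity."""
    sim = F.normalize(region_feats, dim=-1) @ F.normalize(word_feats, dim=-1).t()
    return F.cross_entropy(sim / temperature, word_index)

loss = region_word_contrastive(torch.randn(5, 64), torch.randn(7, 64),
                               torch.randint(0, 7, (5,)))
print(loss.item())
```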
Vision-and-language (VL) pre-training has proven to be highly effective on various VL downstream tasks. While recent work has shown that fully transformer-based VL models can be more efficient than previous region-feature-based methods, their performance on downstream tasks often degrades significantly. In this paper, we present METER (Multimodal End-to-end TransformER), through which we systematically investigate how to design and pre-train a fully transformer-based VL model in an end-to-end manner. Specifically, we dissect the model design along multiple dimensions: vision encoders (e.g., CLIP-ViT, Swin Transformer), text encoders (e.g., RoBERTa, DeBERTa), multimodal fusion (e.g., merged attention vs. co-attention), architectural design (e.g., encoder-only vs. encoder-decoder), and pre-training objectives (e.g., masked image modeling). We conduct comprehensive experiments on a wide range of VL tasks and provide insights on how to train a performant VL transformer while maintaining fast inference speed. Notably, METER achieves an accuracy of 77.64% on the VQAv2 test-std set using only 4M images for pre-training, surpassing the state-of-the-art region-feature-based VinVL model by +1.04%, and outperforming the previous best fully transformer-based ALBEF model by +1.6%.
In recent years, we have witnessed significant performance gains on the image captioning task based on vision-language pre-training (VLP). Scale is believed to be an important factor in this progress. However, most existing work only focuses on pre-training transformers of moderate size (e.g., 12 or 24 layers) on roughly 4 million images. In this paper, we present LEMON, a large-scale image captioner, and provide the first empirical study on the scaling behavior of VLP for image captioning. We use the state-of-the-art VinVL model as our reference model, which consists of an image feature extractor and a transformer model, and scale the transformer both up and down, with model sizes ranging from 13 to 675 million parameters. On the data side, we experiment with up to 200 million image-text pairs automatically collected from the web based on the alt attribute of images (dubbed ALT200M). Extensive analysis helps to characterize the performance trend as the model size and the pre-training data size increase. We also compare different training recipes, especially for training on large-scale noisy data. As a result, LEMON achieves new state of the art on several major image captioning benchmarks, including COCO Caption, nocaps, and Conceptual Captions. We also show that LEMON can generate captions with long-tail visual concepts when used in a zero-shot manner.
We introduce a vision-language foundation model called VL-BEiT, a bidirectional multimodal Transformer learned by generative pre-training. Our minimalist solution conducts masked prediction on both monomodal and multimodal data with a shared Transformer. Specifically, we perform masked vision-language modeling on image-text pairs, masked language modeling on texts, and masked image modeling on images. VL-BEiT is learned from scratch with one unified pre-training task, one shared backbone, and one-stage training. Our method is conceptually simple and empirically effective. Experimental results show that VL-BEiT obtains strong results on various vision-language benchmarks, such as visual question answering, visual reasoning, and image-text retrieval. Moreover, our method learns transferable visual features, achieving competitive performance on image classification and semantic segmentation.
We study joint learning of Convolutional Neural Network (CNN) and Transformer for vision-language pre-training (VLPT) which aims to learn cross-modal alignments from millions of image-text pairs. State-of-the-art approaches extract salient image regions and align regions with words step-by-step. As region-based visual features usually represent parts of an image, it is challenging for existing vision-language models to fully understand the semantics from paired natural languages. In this paper, we propose SOHO to "See Out of tHe bOx" that takes a whole image as input, and learns vision-language representation in an end-to-end manner. SOHO does not require bounding box annotations which enables inference 10 times faster than region-based approaches. In particular, SOHO learns to extract comprehensive yet compact image features through a visual dictionary (VD) that facilitates cross-modal understanding. VD is designed to represent consistent visual abstractions of similar semantics. It is updated on-the-fly and utilized in our proposed pre-training task Masked Visual Modeling (MVM). We conduct experiments on four well-established vision-language tasks by following standard VLPT settings. In particular, SOHO achieves absolute gains of 2.0% R@1 score on MSCOCO text retrieval 5k test split, 1.5% accuracy on NLVR2 test-P split, 6.7% accuracy on SNLI-VE test split, respectively.
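A simplified sketch of the visual dictionary (VD) idea: grid features are assigned to their nearest dictionary entry, and the selected entries are updated on the fly with a moving average. The momentum value and update rule are illustrative assumptions rather than SOHO's exact procedure.

```python
import torch

class VisualDictionary:
    """Nearest-neighbour assignment plus on-the-fly moving-average updates
    (a sketch of the idea, not the SOHO implementation)."""
    def __init__(self, num_entries: int = 2048, dim: int = 64, momentum: float = 0.99):
        self.embeddings = torch.randn(num_entries, dim)
        self.momentum = momentum

    def assign(self, features: torch.Tensor) -> torch.Tensor:
        # features: (N, dim) grid features -> (N,) indices of the nearest entries
        return torch.cdist(features, self.embeddings).argmin(dim=1)

    def update(self, features: torch.Tensor, indices: torch.Tensor) -> None:
        # Move each selected entry toward the mean of the features assigned to it.
        for idx in indices.unique():
            mean_feat = features[indices == idx].mean(dim=0)
            self.embeddings[idx] = (self.momentum * self.embeddings[idx]
                                    + (1 - self.momentum) * mean_feat)

vd = VisualDictionary()
feats = torch.randn(10, 64)
ids = vd.assign(feats)
vd.update(feats, ids)
print(ids)
```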
In this paper, we propose a single unified transformer (UFO) that is capable of processing either unimodal inputs (e.g., an image or language) or multimodal inputs (e.g., the concatenation of an image and a question) for vision-language (VL) representation learning. Existing approaches typically design an individual network for each modality and/or a specific fusion network for multimodal tasks. To simplify the network architecture, we use a single transformer network and enforce multi-task learning during VL pre-training, which includes an image-text contrastive loss, an image-text matching loss, and masked language modeling losses based on bidirectional and seq2seq attention masks. The same transformer network is used as the image encoder, the text encoder, or the fusion network in different pre-training tasks. Empirically, we observe conflicts among the different tasks, and achieve new state-of-the-art results on visual question answering, COCO image captioning (with cross-entropy optimization), and nocaps (in SPICE). On other downstream tasks, e.g., image-text retrieval, we also achieve competitive performance.
Unified vision-language frameworks have advanced greatly in recent years, most of which adopt an encoder-decoder architecture to unify image-text tasks as sequence-to-sequence generation. However, existing video-language (VidL) models still require task-specific designs in model architecture and training objectives for each task. In this work, we explore a unified VidL framework, LAVENDER, where masked language modeling (MLM) is used as the common interface for all pre-training and downstream tasks. Such unification leads to a simplified model architecture, where only a lightweight MLM head is needed on top of the multimodal encoder, instead of a decoder with many more parameters. Surprisingly, experimental results show that this unified framework achieves competitive performance on 14 VidL benchmarks covering video question answering, text-to-video retrieval, and video captioning. Extensive analysis further demonstrates the advantages of LAVENDER over existing VidL methods: (i) it supports all downstream tasks with a single set of parameter values when multi-task fine-tuned; (ii) it generalizes to various downstream tasks in few-shot settings; and (iii) it enables zero-shot evaluation on video question answering tasks. Code is available at https://github.com/microsoft/lavender.
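A toy illustration of using MLM as the common task interface: downstream tasks are phrased so that the answer is a [MASK] token for the MLM head to fill. The templates below are hypothetical examples, not LAVENDER's published ones.

```python
def video_qa_as_mlm(question: str) -> str:
    """Video QA: the answer becomes a single [MASK] for the MLM head to fill."""
    return f"question: {question} answer: [MASK]"

def retrieval_as_mlm(caption: str) -> str:
    """Text-to-video retrieval: match / no-match is also predicted at a [MASK]."""
    return f"{caption} does the text match the video? [MASK]"

print(video_qa_as_mlm("what is the man holding?"))
print(retrieval_as_mlm("a dog catches a frisbee"))
```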
A big convergence of language, vision, and multimodal pre-training is emerging. In this work, we introduce a general-purpose multimodal foundation model, BEiT-3, which achieves state-of-the-art transfer performance on both vision and vision-language tasks. Specifically, we advance the big convergence from three aspects: backbone architecture, pre-training task, and model scaling up. We introduce Multiway Transformers for general-purpose modeling, where the modular architecture enables both deep fusion and modality-specific encoding. Based on the shared backbone, we perform masked "language" modeling on images (Imglish), texts (English), and image-text pairs ("parallel sentences") in a unified manner. Experimental results show that BEiT-3 obtains state-of-the-art performance on object detection (COCO), semantic segmentation (ADE20K), image classification (ImageNet), visual reasoning (NLVR2), visual question answering (VQAv2), image captioning (COCO), and cross-modal retrieval (Flickr30K, COCO).
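A compact sketch of the Multiway idea mentioned above: the attention module is shared across modalities, while each token is afterwards routed to a modality-specific feed-forward expert. Layer structure and dimensions are illustrative, and normalization and residual connections are omitted.

```python
import torch
import torch.nn as nn

class MultiwayBlock(nn.Module):
    """Shared self-attention followed by per-modality feed-forward experts
    (a simplified sketch of the Multiway Transformer idea)."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.experts = nn.ModuleDict({
            "vision": nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)),
            "language": nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)),
        })

    def forward(self, tokens: torch.Tensor, modality: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, dim); modality: (B, N) with 0 = vision token, 1 = language token
        x, _ = self.attn(tokens, tokens, tokens)
        out = torch.empty_like(x)
        out[modality == 0] = self.experts["vision"](x[modality == 0])
        out[modality == 1] = self.experts["language"](x[modality == 1])
        return out

block = MultiwayBlock()
tok = torch.randn(2, 6, 256)
mod = torch.tensor([[0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 1]])
print(block(tok, mod).shape)   # torch.Size([2, 6, 256])
```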
This paper presents OmniVL, a new foundation model designed to support both image-language and video-language tasks with one universal architecture. It adopts a unified transformer-based visual encoder for both image and video inputs, and can therefore perform joint image-language and video-language pre-training. We demonstrate, for the first time, that such a paradigm benefits both image and video tasks, as opposed to the conventional one-directional transfer (e.g., using image-language to help video-language). To this end, we propose decoupled joint pre-training of image-language and video-language to effectively decompose vision-language modeling into spatial and temporal dimensions, obtaining performance gains on both image and video tasks. Moreover, we introduce a novel unified vision-language contrastive (UniVLC) loss to leverage image-text, video-text, image-label (e.g., image classification), and video-label (e.g., video action recognition) data together, so that both supervised and noisily supervised pre-training data are utilized as much as possible. Without additional task-specific adaptors, OmniVL can simultaneously support vision-only tasks (e.g., image classification, video action recognition), cross-modal alignment tasks (e.g., image/video-text retrieval), and multimodal understanding and generation tasks (e.g., image/video question answering, captioning). We evaluate OmniVL on a wide range of downstream tasks and achieve state-of-the-art or competitive results with similar model size and data scale.
With the development of Transformers, pre-trained models have advanced at a breakneck pace in recent years. They dominate the mainstream techniques in natural language processing (NLP) and computer vision (CV). How to adapt pre-training to vision-and-language (V-L) learning and improve downstream task performance has become a focus of multimodal learning. In this paper, we review the recent progress in vision-language pre-trained models (VL-PTMs). As the core content, we first briefly introduce several ways to encode raw images and text into single-modal embeddings before pre-training. We then dive into the mainstream architectures of VL-PTMs for modeling the interaction between text and image representations. We further present widely used pre-training tasks, followed by common downstream tasks. We finally conclude the paper and present some promising research directions. Our survey aims to provide researchers with a synthesis of, and pointers to, related research.
The language modality in vision-language pre-training frameworks is innately discrete, endowing each word in the vocabulary with a semantic meaning. In contrast, the visual modality is naturally continuous and high-dimensional, which can hinder the alignment and fusion between the vision and language modalities. We therefore propose to "discretize" the visual representation by jointly learning a codebook that endows each visual token with a semantic. We then use these discretized visual semantics as self-supervised ground truths to build our masked image modeling objective, the counterpart of masked language modeling, which has proven successful for language models. To optimize the codebook, we extend the formulation of VQ-VAE, which provides a theoretical guarantee. Experiments validate the effectiveness of our approach on common vision-language benchmarks.
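A minimal sketch of the discretization described above: continuous patch features are snapped to their nearest codebook entry, and the resulting indices serve as classification targets for masked image modeling. The nearest-neighbour assignment here stands in for the paper's VQ-VAE-style learned codebook.

```python
import torch
import torch.nn.functional as F

def visual_tokens(patch_feats: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """patch_feats: (N, D) continuous patch features; codebook: (K, D).
    Returns (N,) discrete visual-token ids (indices of the nearest entries)."""
    return torch.cdist(patch_feats, codebook).argmin(dim=1)

def masked_image_modeling_loss(logits, patch_feats, codebook, masked):
    """logits: (N, K) per-patch predictions over the codebook; masked: (N,) bool.
    The discrete ids of the masked patches serve as self-supervised targets."""
    targets = visual_tokens(patch_feats, codebook)
    return F.cross_entropy(logits[masked], targets[masked])

codebook = torch.randn(512, 64)        # K = 512 entries of dimension 64
feats = torch.randn(16, 64)            # 16 patch features
masked = torch.zeros(16, dtype=torch.bool)
masked[:6] = True                      # pretend the first 6 patches were masked
print(masked_image_modeling_loss(torch.randn(16, 512), feats, codebook, masked).item())
```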