In recent years, multimodal transformers have shown significant progress in vision-language tasks such as Visual Question Answering (VQA), outperforming previous architectures by a considerable margin. This improvement in VQA is often attributed to the rich interaction between the vision and language streams. In this work, we investigate the efficacy of co-attention transformer layers in helping the network focus on relevant regions while answering the question. We generate visual attention maps using the question-conditioned image attention scores in these co-attention layers. We evaluate the effect of the following critical components on the visual attention of a state-of-the-art VQA model: (i) the number of object region proposals, (ii) question part-of-speech (POS) tags, (iii) question semantics, (iv) the number of co-attention layers, and (v) answer accuracy. We compare the neural network attention maps against human attention maps both qualitatively and quantitatively. Our findings indicate that co-attention transformer modules are crucial for attending to relevant regions of the image given a question. Importantly, we observe that the semantic meaning of the question is not what drives visual attention; rather, specific keywords in the question do. Our work sheds light on the function and interpretation of co-attention transformer layers, highlights gaps in current networks, and can guide the development of future VQA models and networks that simultaneously process visual and language streams.
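The analysis above reduces co-attention scores to region-level maps that can be compared with human attention. Below is a minimal sketch of that reduction; the tensor layout and the averaging over heads and question tokens are assumptions made for illustration, not the paper's exact procedure.

```python
import torch

def visual_attention_map(coattn_weights: torch.Tensor) -> torch.Tensor:
    """Collapse question-to-image co-attention scores into one score per region.

    coattn_weights: [num_heads, num_question_tokens, num_regions] attention
    probabilities taken from a question-to-image co-attention layer
    (a hypothetical layout; real models differ in naming and shape).
    Returns a [num_regions] map that can be compared with human attention.
    """
    # Average over heads, then over question tokens, and renormalize so the
    # region scores again sum to one.
    per_region = coattn_weights.mean(dim=0).mean(dim=0)
    return per_region / per_region.sum()
```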
A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling "where to look" or visual attention, it is equally important to model "what words to listen to" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolutional neural network (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.
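The hierarchical question encoding relies on 1-D convolutions over word embeddings to obtain phrase-level features. A hedged sketch of that step is given below, using unigram/bigram/trigram filters with a max over the three scales; the class name and layer sizes are illustrative only.

```python
import torch
import torch.nn as nn

class PhraseLevelEncoder(nn.Module):
    """Sketch of phrase-level question features via 1-D convolutions
    (unigram/bigram/trigram filters followed by a max over filter sizes)."""

    def __init__(self, dim: int):
        super().__init__()
        # Padding keeps roughly one feature vector per word position.
        self.convs = nn.ModuleList(
            [nn.Conv1d(dim, dim, kernel_size=k, padding=k // 2) for k in (1, 2, 3)]
        )

    def forward(self, word_emb: torch.Tensor) -> torch.Tensor:
        # word_emb: [batch, num_words, dim] word-level embeddings.
        x = word_emb.transpose(1, 2)                 # [batch, dim, num_words]
        feats = [torch.tanh(conv(x)) for conv in self.convs]
        # kernel_size=2 with padding=1 yields one extra position; trim it.
        feats = [f[..., :x.size(-1)] for f in feats]
        # Max over the three n-gram scales gives phrase-level features.
        return torch.stack(feats, dim=0).max(dim=0).values.transpose(1, 2)
```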
Transformers for visual-language representation learning have attracted much interest and shown tremendous performance on visual question answering (VQA) and grounding. However, most systems that show good performance still rely on pre-trained object detectors during training, which limits their applicability to the object classes available to those detectors. To mitigate this limitation, this paper focuses on the problem of weakly supervised grounding in the context of visual question answering in transformers. The approach leverages capsules by grouping each visual token in the visual encoder, and uses the language self-attention layers as a text-guided selection module to mask the capsules before forwarding them to the next layer. We evaluate our approach on the challenging GQA and VQA-HAT datasets for VQA grounding. Our experiments show that, while removing the information of masked objects from the standard transformer architecture, the integration of capsules significantly improves the grounding ability of such systems and provides new state-of-the-art results compared to other approaches in the field.
We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks (visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval) by making only minor additions to the base architecture. We observe significant improvements across tasks compared to existing task-specific models, achieving state-of-the-art on all four tasks. Our work represents a shift away from learning groundings between vision and language only as part of task training and towards treating visual grounding as a pretrainable and transferable capability.
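The co-attentional transformer layer exchanges keys and values between the two streams: visual queries attend to linguistic keys/values and vice versa. The following is a minimal sketch of such a block (feed-forward sublayers are omitted, and the class and argument names are illustrative, not ViLBERT's actual implementation).

```python
import torch
import torch.nn as nn

class CoAttentionLayer(nn.Module):
    """Minimal sketch of a co-attentional transformer block: each stream uses
    its own queries but keys/values from the other stream."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.vis_attends_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_attends_vis = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_t = nn.LayerNorm(dim)

    def forward(self, vis: torch.Tensor, txt: torch.Tensor):
        # vis: [batch, num_regions, dim], txt: [batch, num_tokens, dim]
        v_out, _ = self.vis_attends_txt(query=vis, key=txt, value=txt)
        t_out, _ = self.txt_attends_vis(query=txt, key=vis, value=vis)
        return self.norm_v(vis + v_out), self.norm_t(txt + t_out)
```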
Attention networks in multimodal learning provide an efficient way to utilize given visual information selectively. However, the computational cost to learn attention distributions for every pair of multimodal input channels is prohibitively expensive. To solve this problem, co-attention builds two separate attention distributions for each modality neglecting the interaction between multimodal inputs. In this paper, we propose bilinear attention networks (BAN) that find bilinear attention distributions to utilize given vision-language information seamlessly. BAN considers bilinear interactions among two groups of input channels, while low-rank bilinear pooling extracts the joint representations for each pair of channels. Furthermore, we propose a variant of multimodal residual networks to exploit eight-attention maps of the BAN efficiently. We quantitatively and qualitatively evaluate our model on visual question answering (VQA 2.0) and Flickr30k Entities datasets, showing that BAN significantly outperforms previous methods and achieves new state-of-the-arts on both datasets.
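At the core of BAN is a bilinear attention distribution over pairs of question and image channels. Below is a hedged sketch of a low-rank bilinear attention map; the parameter names U, V, and p follow a generic low-rank bilinear parameterization and are not necessarily the paper's notation.

```python
import torch
import torch.nn as nn

class BilinearAttentionMap(nn.Module):
    """Sketch of a low-rank bilinear attention map between question tokens and
    image regions: logit(i, j) = p^T (U^T x_i * V^T y_j), softmaxed over all
    (token, region) pairs."""

    def __init__(self, q_dim: int, v_dim: int, rank: int):
        super().__init__()
        self.U = nn.Linear(q_dim, rank, bias=False)
        self.V = nn.Linear(v_dim, rank, bias=False)
        self.p = nn.Linear(rank, 1, bias=False)

    def forward(self, q: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        # q: [batch, num_tokens, q_dim], v: [batch, num_regions, v_dim]
        uq = self.U(q).unsqueeze(2)              # [batch, tokens, 1, rank]
        vv = self.V(v).unsqueeze(1)              # [batch, 1, regions, rank]
        logits = self.p(uq * vv).squeeze(-1)     # [batch, tokens, regions]
        # Normalize jointly over every (token, region) pair.
        b, t, r = logits.shape
        return torch.softmax(logits.view(b, -1), dim=-1).view(b, t, r)
```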
Visual attention in Visual Question Answering (VQA) aims at localizing the right image regions for answer prediction, offering a powerful technique to promote multi-modal understanding. However, recent studies have pointed out that the image regions highlighted by visual attention are often irrelevant to the given question and answer, leading the model astray from correct visual reasoning. To tackle this problem, existing methods mostly align the visual attention with human attention. Nevertheless, collecting such human data is laborious and expensive, making it burdensome to adapt well-developed models across datasets. To address this issue, in this paper we devise a novel visual attention regularization approach, namely AttReg, for better visual grounding in VQA. Specifically, AttReg first identifies the image regions that are essential for question answering yet unexpectedly ignored (i.e., assigned low attention weights) by the backbone model. Then a mask-guided learning scheme is leveraged to regularize the visual attention to focus more on these ignored key regions. The proposed method is very flexible and model-agnostic: it can be integrated into most visual-attention-based VQA models and requires no human attention supervision. Extensive experiments on three benchmark datasets, i.e., VQA-CP v2, VQA-CP v1, and VQA v2, have been conducted to evaluate the effectiveness of AttReg. As a by-product, when incorporating AttReg into the strong baseline LMH, our approach achieves a new state-of-the-art accuracy of 60.00%, an absolute performance gain of 7.01% on the VQA-CP v2 benchmark dataset.
Visual Question Answering (VQA) requires a fine-grained and simultaneous understanding of both the visual content of images and the textual content of questions. Therefore, designing an effective 'co-attention' model to associate key words in questions with key objects in images is central to VQA performance. So far, most successful attempts at co-attention learning have been achieved by using shallow models, and deep co-attention models show little improvement over their shallow counterparts. In this paper, we propose a deep Modular Co-Attention Network (MCAN) that consists of Modular Co-Attention (MCA) layers cascaded in depth. Each MCA layer models the self-attention of questions and images, as well as the guided-attention of images, jointly using a modular composition of two basic attention units. We quantitatively and qualitatively evaluate MCAN on the benchmark VQA-v2 dataset and conduct extensive ablation studies to explore the reasons behind MCAN's effectiveness. Experimental results demonstrate that MCAN significantly outperforms the previous state-of-the-art. Our best single model delivers 70.63% overall accuracy on the test-dev set. Code is available at https://github.com/MILVLG/mcan-vqa.
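For orientation, a compact sketch of one such modular layer is given below, composing a self-attention unit per modality with a guided-attention unit in which image regions attend to question features; it is an illustrative simplification (residual norms and feed-forward parts dropped), not the released MCAN code.

```python
import torch
import torch.nn as nn

class ModularCoAttention(nn.Module):
    """Sketch of an MCA-style layer built from basic attention units:
    question self-attention, image self-attention, and guided attention of
    image regions conditioned on the question."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.sa_q = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.sa_v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ga_v = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, q: torch.Tensor, v: torch.Tensor):
        # q: [batch, num_tokens, dim], v: [batch, num_regions, dim]
        q = q + self.sa_q(q, q, q)[0]      # question self-attention
        v = v + self.sa_v(v, v, v)[0]      # image self-attention
        v = v + self.ga_v(v, q, q)[0]      # image attends to the question
        return q, v
```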
The transformer architecture has brought fundamental changes to the field of computational linguistics, which had been dominated by recurrent neural networks for many years. Its success has also implied drastic changes in cross-modal tasks involving language and vision, which many researchers have already tackled. In this paper, we review some of the most critical milestones in this field, as well as the overall trend of how the transformer architecture has been incorporated into visual-language cross-modal tasks. Furthermore, we discuss current limitations and speculate on some prospects that we find imminent.
Artificial Intelligence (AI) and its applications have sparked extraordinary interest in recent years. This achievement can be ascribed in part to advances in AI subfields including Machine Learning (ML), Computer Vision (CV), and Natural Language Processing (NLP). Deep learning, a sub-field of machine learning that employs artificial neural network concepts, has enabled the most rapid growth in these domains. The integration of vision and language has attracted a lot of attention as a result. The tasks have been created in such a way that they properly exemplify the concepts of deep learning. In this review paper, we provide a thorough and extensive review of state-of-the-art approaches and key model design principles, and discuss existing datasets, methods, their problem formulations, and evaluation measures for VQA and visual reasoning tasks to understand vision and language representation learning. We also present some potential future paths in this field of research, with the hope that our study may generate new ideas and novel approaches to handle existing difficulties and develop new applications.
The Visual Question Answering (VQA) task utilizes visual images and linguistic analysis to answer textual questions about an image. It has been a popular research topic with an increasing number of real-world applications over the last decade. This paper describes our recent research on AliceMind-MMU (Alibaba's collection of encoder-decoders from the Machine Intelligence Lab of DAMO Academy - MultiMedia Understanding), which obtains similar or even slightly better results than humans on VQA. This is achieved by systematically improving the VQA pipeline, including: (1) pre-training with comprehensive visual and textual feature representations; (2) effective cross-modal interaction with learning to attend; and (3) a novel knowledge mining framework with specialized expert modules for the complex VQA task. Treating different types of visual questions with the corresponding expertise plays an important role in boosting the performance of our VQA architecture up to the human level. Extensive experiments and analyses are conducted to demonstrate the effectiveness of the new research work.
Answering semantically complicated questions about an image is challenging in the Visual Question Answering (VQA) task. Although the image can be well represented by deep learning, the question is always simply embedded and cannot well indicate its meaning. Besides, the visual and textual features have a gap across different modalities, and it is difficult to align and utilize the cross-modality information. In this paper, we focus on these two problems and propose a Graph Matching Attention (GMA) network. First, it not only builds a graph for the image but also constructs a graph for the question in terms of both syntactic and embedding information. Next, we explore the intra-modality relationships with a dual-stage graph encoder and then present a bilateral cross-modality graph matching attention to infer the relationships between the image and the question. The updated cross-modality features are then sent into the answer prediction module for final answer prediction. Experiments demonstrate that our network achieves state-of-the-art performance on the GQA dataset and the VQA 2.0 dataset. The ablation studies verify the effectiveness of each module in our GMA network.
We propose a technique for producing 'visual explanations' for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable. Our approach, Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say 'dog' in a classification network or a sequence of words in a captioning network) flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad-CAM is applicable to a wide variety of CNN model families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multimodal inputs (e.g. visual question answering) or reinforcement learning, all without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization.
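The localization map described here is obtained by weighting each final-layer feature map with the global-average-pooled gradient of the target score and applying a ReLU. A minimal sketch, assuming the activations were captured (e.g. via a forward hook) and kept in the autograd graph:

```python
import torch
import torch.nn.functional as F

def grad_cam(feature_maps: torch.Tensor, target_score: torch.Tensor) -> torch.Tensor:
    """Minimal Grad-CAM sketch.

    feature_maps: [channels, H, W] activations of the last conv layer,
                  still connected to the computation graph.
    target_score: scalar score of the target concept (e.g. a class logit or
                  the logit of a predicted answer/word).
    """
    # Gradient of the target score w.r.t. each feature map.
    grads = torch.autograd.grad(target_score, feature_maps, retain_graph=True)[0]
    # One importance weight per channel via global average pooling.
    weights = grads.mean(dim=(1, 2))
    # Weighted combination of feature maps, kept positive with ReLU.
    cam = F.relu((weights[:, None, None] * feature_maps).sum(dim=0))
    return cam / (cam.max() + 1e-8)   # coarse localization map in [0, 1]
```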
Previous studies such as VizWiz have found that Visual Question Answering (VQA) systems that can read and reason about text in images are useful in application areas such as assisting visually impaired people. TextVQA is a VQA dataset geared towards this problem, where the questions require answering systems to read and reason about both visual objects and text objects in images. One key challenge in TextVQA is the design of a system that effectively reasons not only about visual and text objects individually, but also about the spatial relationships between these objects. This motivates the use of 'edge features', i.e., information about the relationship between each pair of objects. Some current TextVQA models address this problem but either use only categories of relations (rather than edge feature vectors) or do not use edge features within the transformer architecture. To overcome these shortcomings, we propose a Graph Relation Transformer (GRT), which uses edge information in addition to node information for the graph attention computation in the transformer. We find that, without using any other optimizations, the proposed GRT method outperforms the accuracy of the M4C baseline model by 0.65% on the validation set and 0.57% on the test set. Qualitatively, we observe that the GRT has superior spatial reasoning ability to M4C.
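The following sketch illustrates one way edge feature vectors can enter the attention computation, by adding a learned projection of each pair's edge feature to the corresponding query-key logit; the exact formulation used by the GRT may differ, and all names here are hypothetical.

```python
import torch
import torch.nn as nn

class EdgeAwareAttention(nn.Module):
    """Hedged sketch of graph attention that injects an edge-feature vector
    into each pairwise attention logit."""

    def __init__(self, dim: int, edge_dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.edge = nn.Linear(edge_dim, 1)
        self.scale = dim ** -0.5

    def forward(self, nodes: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # nodes: [num_nodes, dim]; edges: [num_nodes, num_nodes, edge_dim]
        logits = self.q(nodes) @ self.k(nodes).t() * self.scale
        logits = logits + self.edge(edges).squeeze(-1)   # edge contribution
        attn = torch.softmax(logits, dim=-1)
        return attn @ self.v(nodes)
```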
We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ∼0.25M images, ∼0.76M questions, and ∼10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa).
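A small sketch of the consensus-style accuracy commonly used with this benchmark, where an open-ended answer counts as fully correct once at least three annotators gave it; answer normalization (lower-casing, punctuation stripping) is omitted, and this is an illustration rather than the official evaluation script.

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Consensus accuracy for open-ended VQA evaluation: full credit once the
    predicted answer matches at least three human answers."""
    matches = sum(ans == predicted for ans in human_answers)
    return min(matches / 3.0, 1.0)
```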
With the development of transformers, pre-trained models have advanced at a breakneck pace in recent years. They dominate the mainstream techniques in natural language processing (NLP) and computer vision (CV). How to adapt pre-training to vision-and-language (V-L) learning and improve downstream task performance has become a focus of multi-modal learning. In this paper, we review the recent progress in vision-language pre-trained models (VL-PTMs). As the core content, we first briefly introduce several ways of encoding raw images and text into single-modal embeddings before pre-training. We then dive into the mainstream architectures of VL-PTMs for modeling the interaction between text and image representations. We further present widely used pre-training tasks, and then introduce some common downstream tasks. We finally conclude the paper and present some promising research directions. Our survey aims to provide researchers with a synthesis of, and pointers to, related studies.
Visual entailment with natural language explanations aims to infer the relationship between an image-text pair and generate a sentence to explain the decision-making process. Previous methods mainly rely on a pre-trained vision-language model to perform the relationship inference and a language model to generate the corresponding explanation. However, pre-trained vision-language models mainly build token-level alignment between text and images while ignoring the high-level semantic alignment between phrases (chunks) and visual content, which is critical for vision-language reasoning. In addition, an explanation generator based only on the encoded joint representation does not explicitly consider the critical decision points of relationship inference. Thus, the generated explanations are less faithful to the vision-language reasoning. To mitigate these problems, we propose a unified chunk-aware alignment and lexical-constraint-based method, named CALeC. It contains a chunk-aware semantic interactor (CSI), a relation inferrer, and a lexical-constraint-aware generator (LeCG). Specifically, CSI exploits the sentence structure inherent in language and the corresponding image regions to build chunk-aware semantic alignment. The relation inferrer uses an attention-based reasoning network to incorporate token-level and chunk-level vision-language representations. LeCG uses lexical constraints to incorporate the words or chunks focused on by the relation inferrer into explanation generation, improving the faithfulness and informativeness of the explanations. We conduct extensive experiments on three datasets, and the experimental results indicate that CALeC significantly outperforms other competitor models in terms of inference accuracy and the quality of the generated explanations.
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
We present an effective method for fusing visual-and-language representations for several question answering tasks including visual question answering and visual entailment. In contrast to prior works that concatenate unimodal representations or use only cross-attention, we compose multimodal representations via channel fusion. By fusing on the channels, the model is able to more effectively align the tokens compared to standard methods. These multimodal representations, which we call compound tokens are generated with cross-attention transformer layers. First, vision tokens are used as queries to retrieve compatible text tokens through cross-attention. We then chain the vision tokens and the queried text tokens along the channel dimension. We call the resulting representations compound tokens. A second group of compound tokens are generated using an analogous process where the text tokens serve as queries to the cross-attention layer. We concatenate all the compound tokens for further processing with multimodal encoder. We demonstrate the effectiveness of compound tokens using an encoder-decoder vision-language model trained end-to-end in the open-vocabulary setting. Compound Tokens achieve highly competitive performance across a range of question answering tasks including GQA, VQA2.0, and SNLI-VE.
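The fusion step can be sketched as follows: each modality's tokens serve as cross-attention queries over the other modality, and the retrieved tokens are chained with the queries along the channel dimension before the multimodal encoder. Names and shapes below are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class CompoundTokens(nn.Module):
    """Sketch of compound-token fusion via cross-attention plus channel
    concatenation."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.vis_to_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_to_vis = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, vis: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        # vis: [batch, num_regions, dim], txt: [batch, num_tokens, dim]
        txt_for_vis, _ = self.vis_to_txt(query=vis, key=txt, value=txt)
        vis_for_txt, _ = self.txt_to_vis(query=txt, key=vis, value=vis)
        vis_compound = torch.cat([vis, txt_for_vis], dim=-1)   # [b, regions, 2*dim]
        txt_compound = torch.cat([txt, vis_for_txt], dim=-1)   # [b, tokens, 2*dim]
        # All compound tokens are concatenated along the sequence dimension
        # for further processing by the multimodal encoder.
        return torch.cat([vis_compound, txt_compound], dim=1)
```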
Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and topdown attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.
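A minimal sketch of the top-down weighting over bottom-up region features is shown below: each proposal feature (e.g. from Faster R-CNN) is scored against the task representation (a question or partial caption), softmax-normalized, and used to pool the regions. The scoring network here is a generic two-layer MLP, an assumption rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TopDownAttention(nn.Module):
    """Sketch of top-down feature weighting over bottom-up region proposals."""

    def __init__(self, region_dim: int, task_dim: int, hidden: int = 512):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(region_dim + task_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, regions: torch.Tensor, task: torch.Tensor) -> torch.Tensor:
        # regions: [num_regions, region_dim], task: [task_dim]
        task_rep = task.unsqueeze(0).expand(regions.size(0), -1)
        weights = torch.softmax(
            self.score(torch.cat([regions, task_rep], dim=-1)), dim=0
        )
        return (weights * regions).sum(dim=0)   # attended image feature
```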
3D scene understanding is a relatively emerging research field. In this paper, we introduce the Visual Question Answering task in 3D real-world scenes (VQA-3D), which aims to answer all possible questions given a 3D scene. To tackle this problem, the first VQA-3D dataset, namely CLEVR3D, is proposed, which contains 60K questions in 1,129 real-world scenes. Specifically, we develop a question engine leveraging the 3D scene graph structure to generate diverse reasoning questions, covering questions about object attributes (i.e., size, color, and material) and their spatial relationships. Built upon this dataset, we further design the first VQA-3D baseline model, TransVQA3D. The TransVQA3D model adopts a well-designed transformer architecture and achieves superior VQA-3D performance compared with a pure language baseline and previous 3D reasoning methods applied directly to 3D scenes. Experimental results verify that taking VQA-3D as an auxiliary task can boost the performance of 3D scene understanding, including scene graph analysis with node-wise classification and whole-graph recognition.