We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ∼0.25M images, ∼0.76M questions, and ∼10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa).
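The open-ended answers in this dataset are typically scored with a consensus metric against the ten human-provided answers: a prediction counts fully once at least three annotators gave it. Below is a minimal sketch of that scoring rule; the official evaluation additionally normalizes answers (case, punctuation, articles) and averages over annotator subsets, which is omitted here.

```python
# Minimal sketch of the consensus accuracy metric commonly used with this dataset:
# an answer is scored by how many of the ten human annotators gave it,
# saturating at 1.0 once three or more annotators agree.
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Soft accuracy of `predicted` against the ten human answers."""
    matches = sum(1 for a in human_answers if a.strip().lower() == predicted.strip().lower())
    return min(matches / 3.0, 1.0)

# Example: four of ten annotators answered "red", so "red" scores 1.0,
# while an answer given by only one annotator scores 1/3.
answers = ["red", "red", "red", "red", "dark red", "maroon",
           "red color", "crimson", "dark red", "maroon"]
print(vqa_accuracy("red", answers))      # 1.0
print(vqa_accuracy("crimson", answers))  # 0.333...
```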
A number of studies have found that today's Visual Question Answering (VQA) models are heavily driven by superficial correlations in the training data and lack sufficient image grounding. To encourage development of models geared towards the latter, we propose a new setting for VQA where for every question type, train and test sets have different prior distributions of answers. Specifically, we present new splits of the VQA v1 and VQA v2 datasets, which we call Visual Question Answering under Changing Priors (VQA-CP v1 and VQA-CP v2 respectively). First, we evaluate several existing VQA models under this new setting and show that their performance degrades significantly compared to the original VQA setting. Second, we propose a novel Grounded Visual Question Answering model (GVQA) that contains inductive biases and restrictions in the architecture specifically designed to prevent the model from 'cheating' by primarily relying on priors in the training data. Specifically, GVQA explicitly disentangles the recognition of visual concepts present in the image from the identification of plausible answer space for a given question, enabling the model to more robustly generalize across different distributions of answers. GVQA is built off an existing VQA model, Stacked Attention Networks (SAN). Our experiments demonstrate that GVQA significantly outperforms SAN on both VQA-CP v1 and VQA-CP v2 datasets. Interestingly, it also outperforms more powerful VQA models such as Multimodal Compact Bilinear Pooling (MCB) in several cases. GVQA offers strengths complementary to SAN when trained and evaluated on the original VQA v1 and VQA v2 datasets. Finally, GVQA is more transparent and interpretable than existing VQA models.
Visual understanding goes well beyond object recognition. With one glance at an image, we can effortlessly imagine the world beyond the pixels: for instance, we can infer people's actions, goals, and mental states. While this task is easy for humans, it is tremendously difficult for today's vision systems, requiring higher-order cognition and commonsense reasoning about the world. We formalize this task as Visual Commonsense Reasoning. Given a challenging question about an image, a machine must answer correctly and then provide a rationale justifying its answer. Next, we introduce a new dataset, VCR, consisting of 290k multiple choice QA problems derived from 110k movie scenes. The key recipe for generating non-trivial and high-quality problems at scale is Adversarial Matching, a new approach to transform rich annotations into multiple choice questions with minimal bias. Experimental results show that while humans find VCR easy (over 90% accuracy), state-of-the-art vision models struggle (∼45%). To move towards cognition-level understanding, we present a new reasoning engine, Recognition to Cognition Networks (R2C), that models the necessary layered inferences for grounding, contextualization, and reasoning. R2C helps narrow the gap between humans and machines (∼65%); still, the challenge is far from solved, and we provide analysis that suggests avenues for future work.
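An adversarial-matching-style assignment can be viewed as a maximum-weight bipartite matching problem: each question is assigned distractor answers that a relevance model finds plausible for the question but that a similarity model judges not to be paraphrases of the correct answer. The sketch below is an illustration under those assumptions, not the paper's implementation; the score matrices, the trade-off weight `lam`, and the one-distractor-per-question simplification are all placeholders.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def adversarial_match(relevance, similarity, lam=0.3):
    """relevance[i, j]: how well candidate answer j fits question i;
    similarity[i, j]: how similar answer j is to question i's correct answer.
    Returns one distractor index per question via maximum-weight matching."""
    weight = relevance - lam * similarity          # plausible but not a paraphrase
    rows, cols = linear_sum_assignment(-weight)    # negate to maximize total weight
    return cols                                    # cols[i] = distractor for question i

# Toy example with random scores for 5 questions and 5 candidate answers.
rel = np.random.rand(5, 5)
sim = np.random.rand(5, 5)
print(adversarial_match(rel, sim))
```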
Artificial Intelligence (AI) and its applications have sparked extraordinary interest in recent years. This progress can be ascribed in part to advances in AI subfields including Machine Learning (ML), Computer Vision (CV), and Natural Language Processing (NLP). Deep learning, a subfield of machine learning that employs artificial neural network concepts, has enabled the most rapid growth in these domains, and the integration of vision and language has attracted considerable attention as a result. The associated tasks have been designed so that they properly exemplify the concepts of deep learning. In this review paper, we provide a thorough and extensive review of state-of-the-art approaches and key model design principles, and discuss existing datasets, methods, problem formulations, and evaluation measures for VQA and visual reasoning tasks, with the aim of understanding vision-and-language representation learning. We also present some potential future paths in this field of research, with the hope that our study may generate new ideas and novel approaches to handle existing difficulties and develop new applications.
We introduce an evaluation methodology for Visual Question Answering (VQA) to better diagnose cases of shortcut learning. These cases occur when a model exploits spurious statistical regularities to produce correct answers without actually deploying the desired behavior. Possible shortcuts need to be identified in a dataset and their use assessed before deploying a model in the real world. The VQA research community has focused on question-based shortcuts, where a model might, for example, answer a question about "the color of the sky" by relying on the question-conditional training prior while giving little weight to visual evidence. We go a step further and consider multimodal shortcuts that involve both the question and the image. We first identify potential shortcuts in the popular VQA v2 training set by mining trivial predictive rules, such as co-occurrences of words and visual elements. We then introduce VQA-CounterExamples (VQA-CE), an evaluation protocol based on our subset of counterexamples, i.e., image-question-answer triplets for which our rules lead to incorrect answers. We use this new evaluation in a large-scale study of existing VQA approaches. We show that even state-of-the-art models perform poorly and that existing bias-reduction techniques are largely ineffective in this context. Our findings suggest that past work on question-based biases in VQA has only addressed one facet of a complex problem. The code for our method is available at https://github.com/cdancette/detect-shortcut.
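A rough sketch of the kind of rule mining and counterexample selection described above is given below. It is not the paper's implementation: the dataset format (iterables of question words, detected visual elements, and the ground-truth answer), the support and confidence thresholds, and the function names are all assumptions for illustration.

```python
from collections import Counter, defaultdict

def mine_rules(dataset, min_support=50, min_confidence=0.9):
    """dataset: iterable of (question_words, visual_elements, answer) triplets.
    Keep (word, visual element) pairs that almost always map to one answer."""
    counts = defaultdict(Counter)
    for q_words, v_elems, answer in dataset:
        for w in set(q_words):
            for v in set(v_elems):
                counts[(w, v)][answer] += 1

    rules = {}
    for key, answer_counts in counts.items():
        support = sum(answer_counts.values())
        answer, freq = answer_counts.most_common(1)[0]
        if support >= min_support and freq / support >= min_confidence:
            rules[key] = answer  # e.g. ("color", "banana") -> "yellow"
    return rules

def counterexamples(dataset, rules):
    """Instances where at least one mined rule fires but predicts the wrong answer."""
    for q_words, v_elems, answer in dataset:
        fired = {rules[(w, v)] for w in q_words for v in v_elems if (w, v) in rules}
        if fired and answer not in fired:
            yield (q_words, v_elems, answer)
```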
Despite great progress in Visual Question Answering (VQA), current VQA models rely heavily on superficial correlations between question types and their corresponding frequent answers (i.e., language priors) to make predictions, without truly understanding the input. In this work, we define training instances with the same question type but different answers as superficially similar instances, and attribute the language prior problem to the confusion of VQA models on such instances. To address this problem, we propose a novel training framework that explicitly encourages the VQA model to distinguish between superficially similar instances. Specifically, for each training instance, we first construct a set containing its superficially similar counterparts. We then exploit the proposed distinguishing module to increase the distance between the instance and its counterparts in the answer space. In this way, the VQA model is forced to pay further attention to the parts of the input beyond the question type, which helps it overcome language priors. Experimental results show that our method achieves state-of-the-art performance on VQA-CP v2. The code is available at https://github.com/wyk-nku/distinguishing-vqa.git.
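One way to read "increase the distance in the answer space" is as a margin objective between the predicted answer distributions of an instance and its superficially similar counterpart. The sketch below is a hedged illustration of that idea only, not the paper's distinguishing module; the margin value and the choice of Euclidean distance over softmax outputs are assumptions.

```python
import torch
import torch.nn.functional as F

def distinguishing_loss(logits_anchor, logits_counterpart, margin=0.5):
    """Hinge loss that pushes apart the answer distributions of an instance
    and of a superficially similar counterpart (same question type, different answer)."""
    p = F.softmax(logits_anchor, dim=-1)
    q = F.softmax(logits_counterpart, dim=-1)
    distance = (p - q).pow(2).sum(dim=-1).sqrt()   # distance in the answer space
    return F.relu(margin - distance).mean()         # penalize pairs that are too close

# Usage: combine with the usual VQA classification loss.
loss = distinguishing_loss(torch.randn(8, 3000), torch.randn(8, 3000))
```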
We have seen great progress in basic perceptual tasks such as object recognition and detection. However, AI models still fail to match humans in high-level vision tasks due to the lack of capacities for deeper reasoning. Recently the new task of visual question answering (QA) has been proposed to evaluate a model's capacity for deep image understanding. Previous works have established a loose, global association between QA sentences and images. However, many questions and answers, in practice, relate to local regions in the images. We establish a semantic link between textual descriptions and image regions by object-level grounding. It enables a new type of QA with visual answers, in addition to textual answers used in previous work. We study the visual QA tasks in a grounded setting with a large collection of 7W multiple-choice QA pairs. Furthermore, we evaluate human performance and several baseline models on the QA tasks. Finally, we propose a novel LSTM model with spatial attention to tackle the 7W QA tasks.
Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.
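The top-down step over pre-extracted bottom-up region features can be summarized as: score each region against the question representation, softmax the scores, and take the weighted sum. The sketch below shows that pattern only; the layer sizes, the single-layer scoring network, and the fixed 36 regions per image are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TopDownAttention(nn.Module):
    """Question-guided attention over a set of bottom-up region features."""
    def __init__(self, region_dim=2048, question_dim=512, hidden_dim=512):
        super().__init__()
        self.proj = nn.Linear(region_dim + question_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, regions, question):
        # regions: (batch, num_regions, region_dim); question: (batch, question_dim)
        q = question.unsqueeze(1).expand(-1, regions.size(1), -1)
        joint = torch.tanh(self.proj(torch.cat([regions, q], dim=-1)))
        weights = torch.softmax(self.score(joint), dim=1)   # (batch, num_regions, 1)
        return (weights * regions).sum(dim=1)               # attended image feature

# Usage with 36 region features per image, a common bottom-up setting.
attn = TopDownAttention()
attended = attn(torch.randn(4, 36, 2048), torch.randn(4, 512))   # shape (4, 2048)
```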
The domain of joint vision-language understanding, especially in the context of reasoning in Visual Question Answering (VQA) models, has garnered significant attention in the recent past. While most of the existing VQA models focus on improving the accuracy of VQA, the way models arrive at an answer is oftentimes a black box. As a step towards making the VQA task more explainable and interpretable, our method is built upon the SOTA VQA framework by augmenting it with an end-to-end explanation generation module. In this paper, we investigate two network architectures, including Long Short-Term Memory (LSTM) and Transformer decoder, as the explanation generator. Our method generates human-readable textual explanations while maintaining SOTA VQA accuracy on the GQA-REX (77.49%) and VQA-E (71.48%) datasets. Approximately 65.16% of the generated explanations are approved by humans as valid. Roughly 60.5% of the generated explanations are valid and lead to the correct answers.
Visual Question Answering (VQA) is a challenging task that has attracted increasing attention in computer vision and natural language processing. However, current visual question answering suffers from the language bias problem, which reduces model robustness and adversely affects practical applications. In this paper, we conduct the first comprehensive review and analysis of this field, and classify existing methods into three categories: strengthening visual information, weakening language priors, and data augmentation and training strategies. The representative methods in each category are introduced, summarized, and analyzed in turn, and the causes of language bias are revealed and categorized. We then describe the datasets mainly used for evaluation and report the experimental results of various existing methods. Finally, we discuss possible future research directions in this field.
Thanks to remarkable advances in natural language processing and computer vision models, Visual Question Answering (VQA) systems are becoming increasingly intelligent and capable. However, they are still error-prone when dealing with relatively complex questions. It is therefore important to understand the behavior of a VQA model before adopting its results. In this paper, we introduce an interpretability method for VQA models based on generating counterfactual images. Specifically, the generated image should differ from the original image as little as possible while leading the VQA model to give a different answer. In addition, our method ensures that the generated images are realistic. Since quantitative metrics cannot be used to evaluate the interpretability of a model, we conducted a user study to assess different aspects of our approach. Beyond explaining a VQA model's result on an individual image, the obtained results and discussion provide a broader account of the behavior of VQA models.
We propose a technique for producing 'visual explanations' for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable. Our approach, Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say 'dog' in a classification network or a sequence of words in a captioning network) flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad-CAM is applicable to a wide variety of CNN model families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multimodal inputs (e.g. visual question answering) or reinforcement learning, all without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization.
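Concretely, Grad-CAM global-average-pools the gradients of the target score with respect to the final convolutional feature maps to get per-channel weights, takes the weighted sum of those maps, and keeps only the positive evidence with a ReLU. The sketch below is a minimal version of that computation for a plain image classifier; the `model` and `final_conv_layer` handles are placeholders supplied by the caller.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, final_conv_layer, image, target_class):
    """Coarse localization map for `target_class`, upsampled to the input size."""
    activations, gradients = [], []
    h1 = final_conv_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = final_conv_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    score = model(image)[0, target_class]       # forward pass, pick the target logit
    model.zero_grad()
    score.backward()                            # gradients flow into the conv layer
    h1.remove(); h2.remove()

    acts, grads = activations[0], gradients[0]               # (1, C, H, W)
    weights = grads.mean(dim=(2, 3), keepdim=True)           # per-channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))  # keep positive evidence only
    return F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
```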
A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling "where to look" or visual attention, it is equally important to model "what words to listen to" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolutional neural network (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.
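The core of co-attention is an affinity matrix between question-word features and image-region features, from which attention over each modality is derived conditioned on the other. The sketch below illustrates one simple parallel variant of that idea; the shapes, the single bilinear projection, and the max-pooling over the affinity matrix are illustrative choices rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class ParallelCoAttention(nn.Module):
    """Jointly attend over question words and image regions via an affinity matrix."""
    def __init__(self, dim=512):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)

    def forward(self, Q, V):
        # Q: (batch, num_words, dim), V: (batch, num_regions, dim)
        C = torch.tanh(self.W(Q) @ V.transpose(1, 2))       # affinity (batch, words, regions)
        attn_q = torch.softmax(C.max(dim=2).values, dim=1)   # attention over question words
        attn_v = torch.softmax(C.max(dim=1).values, dim=1)   # attention over image regions
        q_att = (attn_q.unsqueeze(-1) * Q).sum(dim=1)        # attended question feature
        v_att = (attn_v.unsqueeze(-1) * V).sum(dim=1)        # attended image feature
        return q_att, v_att

coatt = ParallelCoAttention()
q_feat, v_feat = coatt(torch.randn(4, 20, 512), torch.randn(4, 36, 512))
```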
We propose a novel approach to determine the difficulty of visual questions for Visual Question Answering (VQA) without direct supervision or annotations of difficulty. Prior work has considered the diversity of ground-truth answers from human annotators; in contrast, we analyze the difficulty of visual questions based on the behavior of multiple different VQA models. We propose to cluster the entropy values of the predicted answer distributions obtained from three different models: a baseline method that takes both the image and the question as input, and two variants that take only the image or only the question as input. We use simple k-means to cluster the visual questions of the VQA v2 validation set. We then use state-of-the-art methods to determine the accuracy and the entropy of the answer distributions for each cluster. One benefit of the proposed approach is that no annotation of difficulty is required, since the accuracy of each cluster reflects the difficulty of the visual questions belonging to it. Our approach identifies clusters of difficult visual questions that state-of-the-art methods struggle to answer correctly. A detailed analysis of the VQA v2 dataset shows that 1) all methods show poor performance on the most difficult cluster (about 10% accuracy), 2) as cluster difficulty increases, the answers predicted by the different methods begin to diverge, and 3) the entropy values of the clusters are highly correlated with cluster accuracy. We show that our approach can assess the difficulty of visual questions without ground truth (i.e., on the test set of VQA v2) by assigning them to one of the clusters. We hope this will stimulate new research directions and the development of new algorithms.
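A minimal sketch of the described pipeline is given below: compute the entropy of each model's predicted answer distribution for every visual question, then cluster the questions on these entropy values with k-means. The variable names, the three-feature layout, and the number of clusters are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def entropy(p, eps=1e-12):
    """Shannon entropy of a single predicted answer distribution."""
    p = np.asarray(p, dtype=np.float64)
    return float(-(p * np.log(p + eps)).sum())

def cluster_by_entropy(prob_iq, prob_i_only, prob_q_only, n_clusters=5, seed=0):
    """Each prob_* is an (N, num_answers) array of predicted answer distributions
    from one model: image+question, image-only, and question-only."""
    features = np.stack([
        [entropy(p) for p in prob_iq],
        [entropy(p) for p in prob_i_only],
        [entropy(p) for p in prob_q_only],
    ], axis=1)                                   # (N, 3) entropy features per question
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(features)
    return labels                                # cluster id per visual question
```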
Recent studies have shown that many well-developed Visual Question Answering (VQA) models are heavily affected by language priors, i.e., they make predictions based on co-occurrence patterns between textual questions and answers rather than reasoning about the visual content. To address this, most existing methods focus on enhancing visual feature learning to reduce the influence of such superficial shortcuts on VQA models' decisions. However, limited effort has been devoted to providing an explicit interpretation of its underlying cause, so there is a lack of good guidance for moving forward in a purposeful way, leaving model design perplexed about how to overcome this non-trivial problem. In this paper, we propose to interpret the language prior problem in VQA from a class-imbalance view. Specifically, we design a novel interpretation scheme whereby the loss on mis-predicted frequent and sparse answers of the same question type is distinctly exhibited during the late training stage. It explicitly reveals why VQA models tend to produce a frequent yet obviously wrong answer to a given question whose correct answer is sparse in the training set. Based on this observation, we further develop a novel loss re-scaling approach that assigns a different weight to each answer, based on the training data statistics, when computing the final loss. We apply our approach to three baselines, and experimental results on two VQA-CP benchmark datasets clearly demonstrate its effectiveness. In addition, we also verify the validity of the class-imbalance interpretation scheme on other computer vision tasks, such as face recognition and image classification.
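The basic mechanism of loss re-scaling can be sketched as follows: answers that are frequent in the training data are down-weighted and sparse answers are up-weighted, so the model is not rewarded for always guessing the prior. The exact weighting rule below (inverse frequency with smoothing) is an illustrative stand-in, not the paper's formula.

```python
import torch
import torch.nn.functional as F

def answer_weights(answer_counts, smoothing=1.0):
    """answer_counts: 1-D tensor of training-set frequencies per answer class."""
    inv = 1.0 / (answer_counts.float() + smoothing)
    return inv * (len(answer_counts) / inv.sum())     # normalize to mean weight 1

def rescaled_loss(logits, targets, weights):
    # Standard weighted cross-entropy over the answer vocabulary.
    return F.cross_entropy(logits, targets, weight=weights)

# Usage: counts gathered from the training set (toy 4-answer vocabulary here).
counts = torch.tensor([5000, 300, 12, 4])
w = answer_weights(counts)
loss = rescaled_loss(torch.randn(8, 4), torch.randint(0, 4, (8,)), w)
```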
Medical Visual Question Answering (VQA) is the combination of medical artificial intelligence and the popular VQA challenge. Given a medical image and a clinically relevant question in natural language, a medical VQA system is expected to predict a plausible and convincing answer. Although general-domain VQA has been extensively studied, medical VQA still requires specific investigation and exploration due to its task characteristics. In the first part of this survey, we cover and discuss the publicly available medical VQA datasets with respect to data source, data quantity, and task characteristics. In the second part, we review the approaches used for the medical VQA task. Finally, we analyze some of the challenges in this field and discuss future research directions.
We introduce GQA, a new dataset for real-world visual reasoning and compositional question answering, seeking to address key shortcomings of previous VQA datasets. We have developed a strong and robust question engine that leverages Visual Genome scene graph structures to create 22M diverse reasoning questions, which all come with functional programs that represent their semantics. We use the programs to gain tight control over the answer distribution and present a new tunable smoothing technique to mitigate question biases. Accompanying the dataset is a suite of new metrics that evaluate essential qualities such as consistency, grounding and plausibility. A careful analysis is performed for baselines as well as state-of-the-art models, providing fine-grained results for different question types and topologies. Whereas a blind LSTM obtains a mere 42.1%, and strong VQA models achieve 54.1%, human performance tops at 89.3%, offering ample opportunity for new research to explore. We hope GQA will provide an enabling resource for the next generation of models with enhanced robustness, improved consistency, and deeper semantic understanding of vision and language.
We introduce a new dataset for joint reasoning about natural language and images, with a focus on semantic diversity, compositionality, and visual reasoning challenges. The data contains 107,292 examples of English sentences paired with web photographs. The task is to determine whether a natural language caption is true about a pair of photographs. We crowdsource the data using sets of visually rich images and a compare-and-contrast task to elicit linguistically diverse language. Qualitative analysis shows the data requires compositional joint reasoning, including about quantities, comparisons, and relations. Evaluation using state-of-the-art visual reasoning methods shows the data presents a strong challenge.
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image.