We propose a novel approach to estimating the difficulty of visual questions for Visual Question Answering (VQA) without direct supervision or annotations of difficulty. Prior work has considered the diversity of ground-truth answers given by human annotators. In contrast, we analyze the difficulty of visual questions based on the behavior of multiple different VQA models. We propose to obtain the entropy values of the predicted answer distributions from three different models: a baseline method that takes as input both the image and the question, and two variants that take as input only the image or only the question. We use simple k-means to cluster the visual questions of the VQA v2 validation set. We then use state-of-the-art methods to determine the accuracy and the entropy of the answer distributions for each cluster. A benefit of the proposed approach is that no annotation of difficulty is required, because the accuracy of each cluster reflects the difficulty of the visual questions belonging to it. Our approach identifies clusters of difficult visual questions that state-of-the-art methods cannot answer correctly. A detailed analysis of the VQA v2 dataset shows that 1) all methods exhibit poor performance on the most difficult cluster (about 10% accuracy), 2) as the cluster difficulty increases, the answers predicted by the different methods begin to diverge, and 3) the entropy values of the clusters are highly correlated with the cluster accuracies. We show that our approach can estimate the difficulty of visual questions without ground truth (i.e., on the test set of VQA v2) by assigning them to one of the clusters. We hope this will stimulate new research directions and the development of new algorithms.
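As a minimal sketch of the pipeline described above (not the authors' code), the following assumes the three answer-distribution matrices have already been produced by hypothetical image+question, image-only, and question-only models; their per-question entropies are then clustered with k-means. The number of clusters and the answer-vocabulary size are illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def answer_entropy(probs: np.ndarray) -> np.ndarray:
    """Row-wise entropy of a (num_questions, num_answers) probability matrix."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

# Placeholder predictions; in practice these come from the three trained models.
rng = np.random.default_rng(0)
def random_dist(n, k):
    logits = rng.normal(size=(n, k))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

n_questions, n_answers = 1000, 3129           # 3129 is a common VQA answer-vocabulary size
p_iq = random_dist(n_questions, n_answers)    # image + question model
p_i = random_dist(n_questions, n_answers)     # image-only model
p_q = random_dist(n_questions, n_answers)     # question-only model

features = np.stack(
    [answer_entropy(p_iq), answer_entropy(p_i), answer_entropy(p_q)], axis=1
)
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)
# The accuracy of a reference model within each cluster then serves as its difficulty score.
```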
A number of studies have found that today's Visual Question Answering (VQA) models are heavily driven by superficial correlations in the training data and lack sufficient image grounding. To encourage development of models geared towards the latter, we propose a new setting for VQA where for every question type, train and test sets have different prior distributions of answers. Specifically, we present new splits of the VQA v1 and VQA v2 datasets, which we call Visual Question Answering under Changing Priors (VQA-CP v1 and VQA-CP v2 respectively). First, we evaluate several existing VQA models under this new setting and show that their performance degrades significantly compared to the original VQA setting. Second, we propose a novel Grounded Visual Question Answering model (GVQA) that contains inductive biases and restrictions in the architecture specifically designed to prevent the model from 'cheating' by primarily relying on priors in the training data. Specifically, GVQA explicitly disentangles the recognition of visual concepts present in the image from the identification of plausible answer space for a given question, enabling the model to more robustly generalize across different distributions of answers. GVQA is built off an existing VQA model, Stacked Attention Networks (SAN). Our experiments demonstrate that GVQA significantly outperforms SAN on both VQA-CP v1 and VQA-CP v2 datasets. Interestingly, it also outperforms more powerful VQA models such as Multimodal Compact Bilinear Pooling (MCB) in several cases. GVQA offers strengths complementary to SAN when trained and evaluated on the original VQA v1 and VQA v2 datasets. Finally, GVQA is more transparent and interpretable than existing VQA models.
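For concreteness, the quantity the changing-priors splits manipulate is the per-question-type answer distribution. Below is a small sketch of how one might measure it for any split and compare train against test; the field names are assumed for illustration, not taken from the released splits.

```python
from collections import Counter, defaultdict

def answer_priors(examples):
    """examples: iterable of dicts with 'question_type' and 'answer' keys (assumed schema)."""
    by_type = defaultdict(Counter)
    for ex in examples:
        by_type[ex["question_type"]][ex["answer"]] += 1
    return {
        qtype: {a: c / sum(counts.values()) for a, c in counts.items()}
        for qtype, counts in by_type.items()
    }

# Comparing answer_priors(train_set) and answer_priors(test_set) for a type such as
# "what color" exposes the distribution shift that the VQA-CP setting enforces.
```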
We introduce an evaluation methodology for Visual Question Answering (VQA) to better diagnose cases of shortcut learning. These cases arise when a model exploits spurious statistical regularities to produce correct answers without actually deploying the desired behavior. Possible shortcuts need to be identified in a dataset and their use evaluated before deploying models in the real world. The VQA research community has focused on question-based shortcuts, where a model might, for example, answer "what is the color of the sky" by relying mostly on the question-conditional training prior while giving little weight to the visual evidence. We go a step further and consider multimodal shortcuts that involve both the question and the image. We first identify potential shortcuts in the popular VQA v2 training set by mining trivial prediction rules, such as co-occurrences of words and visual elements. We then introduce VQA-CounterExamples (VQA-CE), an evaluation protocol based on the subset of our counterexamples, i.e., image-question-answer triplets on which our rules lead to incorrect answers. We use this new evaluation in a large-scale study of existing approaches to VQA. We show that even state-of-the-art models perform poorly, and that existing bias-reduction techniques are largely ineffective in this context. Our findings suggest that past work on question-based biases in VQA has addressed only one facet of a complex issue. The code for our method is available at https://github.com/cdancette/detect-shortcut.
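A hedged sketch of the simplest kind of shortcut the protocol looks for, question-word to answer co-occurrences; the actual method also mines rules over detected visual elements and uses more elaborate rule mining, and the thresholds and names below are illustrative.

```python
from collections import Counter, defaultdict

def mine_word_rules(examples, min_support=100, min_confidence=0.5):
    """examples: iterable of (question: str, answer: str) pairs."""
    word_counts = Counter()
    word_answer_counts = defaultdict(Counter)
    for question, answer in examples:
        for word in set(question.lower().split()):
            word_counts[word] += 1
            word_answer_counts[word][answer] += 1
    rules = []
    for word, total in word_counts.items():
        if total < min_support:
            continue
        answer, hits = word_answer_counts[word].most_common(1)[0]
        confidence = hits / total
        if confidence >= min_confidence:
            rules.append((word, answer, confidence))
    # Rules with high confidence flag shortcuts a model could exploit; examples
    # where such rules fail become candidate counterexamples.
    return sorted(rules, key=lambda r: -r[2])
```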
We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ∼0.25M images, ∼0.76M questions, and ∼10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa).
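For reference, the open-ended accuracy metric used with this dataset counts an answer as fully correct when at least three of the ten human annotators gave it. A minimal sketch of the core formula follows; the official evaluation additionally normalizes answer strings and averages over annotator subsets.

```python
def vqa_accuracy(predicted: str, human_answers: list) -> float:
    """Consensus accuracy: full credit if at least 3 of the 10 annotators agree."""
    matches = sum(1 for a in human_answers if a == predicted)
    return min(matches / 3.0, 1.0)

print(vqa_accuracy("red", ["red", "red", "dark red", "red", "maroon",
                           "red", "red", "red", "red", "red"]))  # -> 1.0
```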
Machine learning has advanced dramatically, narrowing the accuracy gap to humans on multimodal tasks such as Visual Question Answering (VQA). However, while humans can say "I don't know" when they are uncertain (i.e., abstain from answering a question), this ability has been largely neglected in multimodal research, despite its importance to real-world uses of VQA. In this work, we promote a problem formulation for reliable VQA, where abstention is preferred over providing an incorrect answer. We first enable abstention capabilities for several VQA models and analyze their coverage, the portion of questions answered, and risk, the error on that portion. To this end, we explore several abstention approaches. We find that although the best-performing models achieve over 71% accuracy on the VQA v2 dataset, introducing the option to abstain by directly using the models' softmax scores limits them to answering less than 8% of the questions in order to achieve a low risk of error (i.e., 1%). This motivates us to utilize a multimodal selection function to directly estimate the correctness of the predicted answers, which we show can increase coverage by, for example, 2.4x from 6.8% to 16.3% at 1% risk. While it is important to analyze coverage and risk, these metrics have a trade-off, which makes comparing VQA models challenging. To address this, we also propose an effective reliability metric for VQA that places a larger cost on incorrect answers than on abstentions. This new problem formulation, metric, and analysis for VQA provide the groundwork for building effective and reliable VQA models that have self-awareness and abstain when they do not know the answer.
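A minimal sketch of the selective-prediction quantities discussed above, coverage and risk under a softmax-score threshold; the variable names and the threshold sweep are illustrative and not taken from the paper's code.

```python
import numpy as np

def coverage_risk(confidences: np.ndarray, correct: np.ndarray, threshold: float):
    """confidences: (N,) model scores; correct: (N,) 0/1 correctness indicators."""
    answered = confidences >= threshold        # questions the model chooses to answer
    coverage = answered.mean()
    risk = 1.0 - correct[answered].mean() if answered.any() else 0.0
    return coverage, risk

def coverage_at_risk(confidences, correct, max_risk=0.01):
    """Largest coverage achievable while keeping risk below max_risk (e.g., 1%)."""
    best = 0.0
    for t in np.unique(confidences):
        cov, risk = coverage_risk(confidences, correct, t)
        if risk <= max_risk:
            best = max(best, cov)
    return best
```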
Recent studies have pointed out that many well-developed Visual Question Answering (VQA) models are severely affected by the language prior problem, which refers to making predictions based on co-occurrence patterns between textual questions and answers rather than reasoning over the visual content. To tackle it, most existing methods focus on enhancing visual feature learning to reduce the impact of such superficial shortcuts on VQA model decisions. However, limited effort has been devoted to providing an explicit interpretation of its inherent cause. The lack of sound guidance for moving forward in a purposeful way therefore leaves model construction perplexed in overcoming this non-trivial problem. In this paper, we propose to interpret the language priors in VQA from a class-imbalance view. Specifically, we design a novel interpretation scheme whereby the loss of mis-predicted frequent and sparse answers is distinctly exhibited during the late training phase. It explicitly reveals why VQA models tend to produce a frequent yet obviously wrong answer to a given question whose correct answer is sparse in the training set. Based on this observation, we further develop a novel loss re-scaling approach that assigns different weights to each answer based on the training data statistics used to compute the final loss. We apply our approach to three baselines, and the experimental results on two VQA-CP benchmark datasets clearly demonstrate its effectiveness. In addition, we also justify the validity of the class-imbalance interpretation scheme on other computer vision tasks, such as face recognition and image classification.
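A hedged illustration of frequency-based loss re-scaling: rarer ground-truth answers receive larger weights so that sparse answers are not drowned out by frequent ones. The paper derives its re-scaling rule from training statistics in a more specific way; simple inverse-frequency weighting is used here only as a stand-in.

```python
import numpy as np
from collections import Counter

def answer_weights(train_answers, smoothing: float = 1.0) -> dict:
    """Inverse-frequency weights over the answer vocabulary, normalized to mean ~1."""
    counts = Counter(train_answers)
    total = sum(counts.values())
    raw = {a: total / (c + smoothing) for a, c in counts.items()}
    mean_w = np.mean(list(raw.values()))
    return {a: w / mean_w for a, w in raw.items()}

def rescaled_loss(per_example_loss: np.ndarray, answers, weights: dict) -> float:
    """Weight each example's loss by the weight of its ground-truth answer."""
    w = np.array([weights[a] for a in answers])
    return float(np.mean(w * per_example_loss))
```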
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image.
Visual understanding goes well beyond object recognition. With one glance at an image, we can effortlessly imagine the world beyond the pixels: for instance, we can infer people's actions, goals, and mental states. While this task is easy for humans, it is tremendously difficult for today's vision systems, requiring higher-order cognition and commonsense reasoning about the world. We formalize this task as Visual Commonsense Reasoning. Given a challenging question about an image, a machine must answer correctly and then provide a rationale justifying its answer. Next, we introduce a new dataset, VCR, consisting of 290k multiple choice QA problems derived from 110k movie scenes. The key recipe for generating non-trivial and high-quality problems at scale is Adversarial Matching, a new approach to transform rich annotations into multiple choice questions with minimal bias. Experimental results show that while humans find VCR easy (over 90% accuracy), state-of-the-art vision models struggle (∼45%). To move towards cognition-level understanding, we present a new reasoning engine, Recognition to Cognition Networks (R2C), that models the necessary layered inferences for grounding, contextualization, and reasoning. R2C helps narrow the gap between humans and machines (∼65%); still, the challenge is far from solved, and we provide analysis that suggests avenues for future work.
We have seen great progress in basic perceptual tasks such as object recognition and detection. However, AI models still fail to match humans in high-level vision tasks due to the lack of capacities for deeper reasoning. Recently the new task of visual question answering (QA) has been proposed to evaluate a model's capacity for deep image understanding. Previous works have established a loose, global association between QA sentences and images. However, many questions and answers, in practice, relate to local regions in the images. We establish a semantic link between textual descriptions and image regions by object-level grounding. It enables a new type of QA with visual answers, in addition to textual answers used in previous work. We study the visual QA tasks in a grounded setting with a large collection of 7W multiple-choice QA pairs. Furthermore, we evaluate human performance and several baseline models on the QA tasks. Finally, we propose a novel LSTM model with spatial attention to tackle the 7W QA tasks.
Visual attention in Visual Question Answering (VQA) aims to localize the right image regions regarding answer prediction, offering a powerful technique to promote multimodal understanding. However, recent studies have pointed out that the image regions highlighted by visual attention are often irrelevant to the given question and answer, leading the model to be confused for correct visual reasoning. To tackle this problem, existing methods mostly align the visual attention with human attention. Nonetheless, collecting such human data is laborious and expensive, making it burdensome to adapt well-developed models across datasets. To address this issue, in this paper we devise a novel visual attention regularization approach, namely AttReg, for better visual grounding in VQA. Specifically, AttReg first identifies the image regions that are essential to the question yet are unexpectedly ignored (i.e., assigned low attention weights) by the backbone model. Then a mask-guided learning scheme is leveraged to regularize the visual attention to focus more on these ignored key regions. The proposed method is very flexible and model-agnostic: it can be integrated into most visual-attention-based VQA models and requires no human attention supervision. Extensive experiments on three benchmark datasets, i.e., VQA-CP v2, VQA-CP v1, and VQA v2, have been conducted to evaluate the effectiveness of AttReg. As a by-product, when incorporating AttReg into the strong baseline LMH, our approach achieves a new state-of-the-art accuracy of 60.00%, an absolute performance gain of 7.01%, on the VQA-CP v2 benchmark dataset.
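A rough, model-agnostic sketch of the idea: regions judged necessary for the question but given low attention by the backbone are collected, and an extra penalty pushes attention mass toward them. The selection rule, threshold, and loss form below are simplified assumptions, not AttReg's exact formulation.

```python
import numpy as np

def attention_reg_loss(attention: np.ndarray, key_mask: np.ndarray,
                       low_thresh: float = 0.05) -> float:
    """attention: (num_regions,) weights summing to 1;
    key_mask: boolean array marking regions deemed necessary for the question
    (e.g., regions matching question/answer words)."""
    ignored = key_mask & (attention < low_thresh)   # necessary but under-attended regions
    if not ignored.any():
        return 0.0
    # Penalize the attention deficit on the ignored key regions.
    return float(np.sum(low_thresh - attention[ignored]))
```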
The Visual Question Answering (VQA) task leverages visual images and language analysis to answer a textual question about an image. It has been a popular research topic with an increasing number of real-world applications over the last decade. This paper introduces our recent research on AliceMind-MMU (Alibaba's collection of encoder-decoders from the Machine Intelligence Lab of DAMO Academy - MultiMedia Understanding), which obtains similar or even slightly better results than humans on VQA. This is achieved by systematically improving the VQA pipeline, including: (1) pre-training with comprehensive visual and textual feature representations; (2) effective cross-modal interaction with learning to attend; and (3) a novel knowledge-mining framework with specialized expert modules for the complex VQA task. Treating different types of visual questions with the corresponding expertise needed plays an important role in boosting the performance of our VQA architecture up to the human level. Extensive experiments and analysis are conducted to demonstrate the effectiveness of the new research work.
Visual question answering (VQA) is challenging not only because the model has to handle multi-modal information, but also because it is just so hard to collect sufficient training examples -- there are too many questions one can ask about an image. As a result, a VQA model trained solely on human-annotated examples could easily over-fit specific question styles or image contents that are being asked, leaving the model largely ignorant about the sheer diversity of questions. Existing methods address this issue primarily by introducing an auxiliary task such as visual grounding, cycle consistency, or debiasing. In this paper, we take a drastically different approach. We found that many of the "unknowns" to the learned VQA model are indeed "known" in the dataset implicitly. For instance, questions asking about the same object in different images are likely paraphrases; the number of detected or annotated objects in an image already provides the answer to the "how many" question, even if the question has not been annotated for that image. Building upon these insights, we present a simple data augmentation pipeline SimpleAug to turn this "known" knowledge into training examples for VQA. We show that these augmented examples can notably improve the learned VQA models' performance, not only on the VQA-CP dataset with language prior shifts but also on the VQA v2 dataset without such shifts. Our method further opens up the door to leverage weakly-labeled or unlabeled images in a principled way to enhance VQA models. Our code and data are publicly available at https://github.com/heendung/simpleAUG.
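A hedged sketch of one SimpleAug-style augmentation mentioned above, turning existing object annotations into new "how many" training pairs. The template and field names are illustrative and not taken from the released code at https://github.com/heendung/simpleAUG.

```python
from collections import Counter

def how_many_examples(image_id: str, annotated_objects: list):
    """Yield synthetic counting QA pairs from object annotations already in the dataset."""
    counts = Counter(annotated_objects)
    for obj, n in counts.items():
        yield {
            "image_id": image_id,
            "question": f"How many {obj}s are in the picture?",
            "answer": str(n),
        }

# Example: an image annotated with ["dog", "dog", "frisbee"] yields
# ("How many dogs ...", "2") and ("How many frisbees ...", "1").
print(list(how_many_examples("img_001", ["dog", "dog", "frisbee"])))
```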
Despite the great progress of Visual Question Answering (VQA), current VQA models heavily rely on the superficial correlation between the question type and its corresponding frequent answers (i.e., language priors) to make predictions, without truly understanding the input. In this work, we define training instances with the same question type but different answers as superficially similar instances, and attribute the language priors to the confusion of VQA models over such instances. To solve this problem, we propose a novel training framework that explicitly encourages the VQA model to distinguish between superficially similar instances. Specifically, for each training instance, we first construct a set that contains its superficially similar counterparts. Then we exploit the proposed distinguishing module to increase the distance between the instance and its counterparts in the answer space. In this way, the VQA model is forced to further focus on the other parts of the input beyond the question type, which helps it overcome the language priors. Experimental results show that our method achieves state-of-the-art performance on VQA-CP v2. Code is available at https://github.com/wyk-nku/distinguishing-vqa.git (Distinguishing-VQA).
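A minimal sketch of the distinguishing idea under assumed shapes: the instance's answer-space embedding is pushed away from those of its superficially similar counterparts with a simple margin loss. The paper's actual module is more involved; this is only an illustration of the distance-increasing objective.

```python
import torch

def distinguish_loss(anchor: torch.Tensor, counterparts: torch.Tensor,
                     margin: float = 1.0) -> torch.Tensor:
    """anchor: (d,) answer-space embedding of a training instance;
    counterparts: (k, d) embeddings of its superficially similar instances."""
    dists = torch.norm(counterparts - anchor.unsqueeze(0), dim=1)  # distance to each counterpart
    # Hinge: only counterparts closer than the margin contribute to the loss.
    return torch.clamp(margin - dists, min=0.0).mean()
```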
Visual Question Answering (VQA) is a challenging task that has attracted increasing attention in both computer vision and natural language processing. However, current visual question answering suffers from the language bias problem, which reduces the robustness of models and has an adverse impact on their practical application. In this paper, we conduct a comprehensive review and analysis of this field for the first time, and classify existing methods into three categories, including enhancing visual information, weakening language priors, and data augmentation and training strategies. The relevant representative methods are introduced, summarized, and analyzed in turn, and the causes of language bias are revealed and classified. Secondly, this paper introduces the datasets mainly used for testing and reports the experimental results of various existing methods. Finally, we discuss possible future research directions in this field.
The task of Visual Question Answering (VQA) is known to be plagued by VQA models exploiting biases within the dataset to make their final prediction. Many previous ensemble-based debiasing methods have been proposed, in which an additional model is purposely trained to help train a robust target model. However, these methods compute the model's bias from the label statistics of the training data or directly from single-modal branches. In contrast, in this work, in order to better learn the bias of the target VQA model, we propose a generative method that trains the bias model directly from the target model, called GenB. In particular, GenB employs a generative network to learn the bias through a combination of an adversarial objective and knowledge distillation. We then debias the target model with GenB as the bias model, and show through extensive experiments the effects of our method on various VQA bias datasets including VQA-CP2, VQA-CP1, GQA-OOD, and VQA-CE.
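A hedged, partial sketch of one ingredient named above: distilling the target model's answer distribution into a separate bias model. The adversarial objective and the final debiasing ensemble are omitted, and the temperature-scaled KL form below is an assumption, not GenB's exact loss.

```python
import torch
import torch.nn.functional as F

def distillation_loss(bias_logits: torch.Tensor, target_logits: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """KL divergence between the bias model's prediction and the (detached)
    target model's prediction, both softened by the given temperature."""
    p_target = F.softmax(target_logits.detach() / temperature, dim=-1)
    log_p_bias = F.log_softmax(bias_logits / temperature, dim=-1)
    return F.kl_div(log_p_bias, p_target, reduction="batchmean") * (temperature ** 2)
```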
Neural networks often make predictions that rely on the spurious correlations of the dataset rather than the intrinsic properties of the task of interest, and face sharp degradation on out-of-distribution (OOD) test data. Existing de-bias learning frameworks try to capture specific dataset biases with bias annotations, but they fail to handle complicated OOD scenarios. Others implicitly identify the dataset bias with a low-capability biased model or the loss, but they degrade when the training and testing data come from the same distribution. In this paper, we propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model, analogous to gradient descent in functional space. It encourages the base model to focus on examples that are hard to solve with the biased models, thus remaining robust against spurious correlations at test time. GGD largely improves the generalization ability of models on various tasks, but sometimes over-estimates the bias level and degrades on in-distribution testing. We further re-analyze the ensemble process of GGD and introduce curriculum regularization, inspired by curriculum learning, which achieves a good trade-off between in-distribution and out-of-distribution performance. Extensive experiments on image classification, adversarial question answering, and visual question answering demonstrate the effectiveness of our method. GGD can learn a more robust base model under the settings of both a task-specific biased model with prior knowledge and a self-ensemble biased model without prior knowledge.
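A hedged sketch of the greedy intuition described above: the base model's per-example loss is down-weighted on examples the biased model already solves confidently, so training concentrates on examples that resist the bias. The exact weighting and the curriculum regularization in GGD differ; this is an illustrative stand-in.

```python
import torch

def greedy_debiased_loss(base_loss: torch.Tensor, bias_probs: torch.Tensor,
                         labels: torch.Tensor) -> torch.Tensor:
    """base_loss: (B,) per-example loss of the base model;
    bias_probs: (B, num_answers) softmax output of the (frozen) biased model;
    labels: (B,) ground-truth answer indices."""
    bias_confidence = bias_probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    weights = (1.0 - bias_confidence).detach()  # focus on examples the bias cannot solve
    return (weights * base_loss).mean()
```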
Several recent studies have pointed out that existing Visual Question Answering (VQA) models severely suffer from the language prior problem, which refers to capturing superficial statistical correlations between the question type and the answer while ignoring the image content. Considerable effort has been devoted to strengthening the image dependency by creating delicate models or introducing extra visual annotations. However, these methods cannot sufficiently explore how visual cues explicitly affect the learned answer representation, which is critical for mitigating the language dependency. Moreover, they generally emphasize class-level discrimination of the learned answer representation, which overlooks finer-grained instance-level patterns and calls for further optimization. In this paper, we propose a novel collaborative learning scheme from the viewpoint of visual perturbation calibration, which can better investigate fine-grained visual effects and mitigate the language prior problem by learning instance-level characteristics. Specifically, we devise a visual controller to construct two kinds of curated images with different perturbation extents, based on which collaborative learning of intrinsic invariance and instance discrimination is implemented by two well-designed discriminators. In addition, we introduce an information bottleneck modulator on the latent space for further bias alleviation and representation calibration. We impose our visual-perturbation-aware framework on three orthodox baselines, and the experimental results on two diagnostic VQA-CP benchmark datasets clearly demonstrate its effectiveness. Furthermore, we also justify its robustness on the balanced VQA benchmark.
Visual Question Answering (VQA) has witnessed tremendous progress in recent years. However, most efforts focus only on 2D image question answering tasks. In this paper, we present the first attempt at extending VQA to the 3D domain, which can facilitate artificial intelligence's perception of 3D real-world scenarios. Different from image-based VQA, 3D Question Answering (3DQA) takes colored point clouds as input and requires both appearance and 3D geometry comprehension abilities to answer 3D-related questions. To this end, we propose a novel transformer-based 3DQA framework "3DQA-TR", which consists of two encoders for exploiting the appearance and geometry information, respectively. The multimodal information about the appearance, the geometry, and the linguistic question can finally attend to each other via a 3D-Linguistic BERT to predict the target answers. To verify the effectiveness of our proposed 3DQA framework, we further develop the first 3DQA dataset "ScanQA", which is built upon the ScanNet dataset and contains about 6K questions and about 30K answers for 806 scenes. Extensive experiments on this dataset demonstrate the obvious superiority of our proposed 3DQA framework over existing VQA frameworks and the effectiveness of our major designs. Our code and dataset will be made publicly available to facilitate research in this direction.
Artificial Intelligence (AI) and its applications have sparked extraordinary interest in recent years. This achievement can be ascribed in part to advances in AI subfields including Machine Learning (ML), Computer Vision (CV), and Natural Language Processing (NLP). Deep learning, a sub-field of machine learning that employs artificial neural network concepts, has enabled the most rapid growth in these domains. The integration of vision and language has consequently attracted a great deal of attention, and the associated tasks have been designed so that they properly exemplify the concepts of deep learning. In this review paper, we provide a thorough and extensive review of state-of-the-art approaches and key model design principles, and discuss existing datasets, methods, their problem formulations, and evaluation measures for VQA and visual reasoning tasks, in order to understand vision and language representation learning. We also present some potential future paths in this field of research, with the hope that our study may generate new ideas and novel approaches to handle existing difficulties and develop new applications.