A critical aspect of human visual perception is the ability to decompose visual scenes into individual objects and further into object parts, forming part-whole hierarchies. Such composite structures can induce rich semantic concepts and relations, and thus play an important role in the interpretation and organization of visual signals, as well as in the generalization of visual perception and reasoning. However, existing visual reasoning benchmarks mostly focus on objects rather than parts. Visual reasoning based on the full part-whole hierarchy is much more challenging than object-centric reasoning, due to finer-grained concepts, richer geometric relations, and more complex physics. Therefore, to better serve part-based conceptual, relational, and physical reasoning, we introduce a new large-scale diagnostic visual reasoning dataset named PTR. PTR contains around 70k RGBD synthetic images with ground-truth object-level and part-level annotations regarding semantic instance segmentation, color attributes, spatial and geometric relationships, and certain physical properties such as stability. These images are paired with 700k machine-generated questions covering various types of reasoning, making them a good testbed for visual reasoning models. We examine several state-of-the-art visual reasoning models on this dataset and observe that they still make many surprising mistakes in situations where humans can easily infer the correct answer. We believe this dataset will open up new opportunities for part-based reasoning.
When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover shortcomings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.
We introduce GQA, a new dataset for real-world visual reasoning and compositional question answering, seeking to address key shortcomings of previous VQA datasets. We have developed a strong and robust question engine that leverages Visual Genome scene graph structures to create 22M diverse reasoning questions, which all come with functional programs that represent their semantics. We use the programs to gain tight control over the answer distribution and present a new tunable smoothing technique to mitigate question biases. Accompanying the dataset is a suite of new metrics that evaluate essential qualities such as consistency, grounding and plausibility. A careful analysis is performed for baselines as well as state-of-the-art models, providing fine-grained results for different question types and topologies. Whereas a blind LSTM obtains a mere 42.1%, and strong VQA models achieve 54.1%, human performance tops at 89.3%, offering ample opportunity for new research to explore. We hope GQA will provide an enabling resource for the next generation of models with enhanced robustness, improved consistency, and deeper semantic understanding of vision and language.
Visual Question Answering (VQA) models often perform poorly on out-of-distribution data and struggle on domain generalization. Due to the multi-modal nature of this task, multiple factors of variation are intertwined, making generalization difficult to analyze. This motivates us to introduce a virtual benchmark, Super-CLEVR, where different factors in VQA domain shifts can be isolated in order that their effects can be studied independently. Four factors are considered: visual complexity, question redundancy, concept distribution and concept compositionality. With controllably generated data, Super-CLEVR enables us to test VQA methods in situations where the test data differs from the training data along each of these axes. We study four existing methods, including two neural symbolic methods NSCL and NSVQA, and two non-symbolic methods FiLM and mDETR; and our proposed method, probabilistic NSVQA (P-NSVQA), which extends NSVQA with uncertainty reasoning. P-NSVQA outperforms other methods on three of the four domain shift factors. Our results suggest that disentangling reasoning and perception, combined with probabilistic uncertainty, form a strong VQA model that is more robust to domain shifts. The dataset and code are released at https://github.com/Lizw14/Super-CLEVR.
In this paper, we address the challenging problem of 3D concept grounding (i.e., segmenting and learning visual concepts) by looking at RGBD images and reasoning about paired questions and answers. Existing visual reasoning approaches typically utilize supervised methods to extract 2D segmentation masks on which concepts are grounded. In contrast, humans are capable of grounding concepts on the underlying 3D representation of images. However, traditionally inferred 3D representations (e.g., point clouds, voxel grids, and meshes) cannot flexibly capture continuous 3D features, making it challenging to ground concepts to 3D regions based on the language description of the object being referred to. To address both issues, we propose to leverage the continuous, differentiable nature of neural fields to segment and learn concepts. Specifically, each 3D coordinate in a scene is represented as a high-dimensional descriptor. Concept grounding can then be performed by computing the similarity between the descriptor vector of a 3D coordinate and the vector embedding of a language concept, which enables segmentation and concept learning to be jointly carried out on neural fields in a differentiable fashion. As a result, both 3D semantic and instance segmentation can emerge directly from question-answering supervision, using a set of defined neural operators on top of neural fields (e.g., filtering and counting). Experimental results show that our proposed framework outperforms unsupervised and language-mediated segmentation models on semantic and instance segmentation tasks, and outperforms existing models on the challenging 3D-aware visual reasoning tasks. Furthermore, our framework can generalize well to unseen shape categories and real scans.
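To make the grounding step concrete, below is a minimal sketch of the general idea of comparing per-point descriptors (queried from a neural field) with language concept embeddings; the shapes, the softmax readout, and the toy counting operator are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' implementation) of grounding concepts by
# comparing per-point neural-field descriptors with concept embeddings.
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def ground_concepts(point_descriptors, concept_embeddings, temperature=0.1):
    """point_descriptors: (N, D) descriptors queried from a neural field at N 3D coordinates.
    concept_embeddings: (C, D) vector embeddings of C language concepts.
    Returns (N, C) soft assignment of each point to each concept."""
    sims = l2_normalize(point_descriptors) @ l2_normalize(concept_embeddings).T  # cosine similarity
    logits = sims / temperature
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    return probs / probs.sum(axis=1, keepdims=True)

def count_concept(probs, concept_index, threshold=0.5):
    """A toy 'filter + count' style readout: count points softly assigned to one concept."""
    return int((probs[:, concept_index] > threshold).sum())

# Toy usage with random features standing in for a trained field and concept encoder.
rng = np.random.default_rng(0)
probs = ground_concepts(rng.normal(size=(1000, 64)), rng.normal(size=(5, 64)))
print(count_concept(probs, concept_index=2))
```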
Artificial Intelligence (AI) and its applications have sparked extraordinary interest in recent years. This achievement can be ascribed in part to advances in AI subfields including Machine Learning (ML), Computer Vision (CV), and Natural Language Processing (NLP). Deep learning, a sub-field of machine learning that employs artificial neural network concepts, has enabled the most rapid growth in these domains. The integration of vision and language has attracted a lot of attention as a result. The tasks have been designed in such a way that they properly exemplify the concepts of deep learning. In this review paper, we provide a thorough and extensive review of state-of-the-art approaches and key model design principles, and discuss existing datasets, methods, problem formulations, and evaluation measures for VQA and visual reasoning tasks, in order to understand vision and language representation learning. We also present some potential future paths in this field of research, with the hope that our study may generate new ideas and novel approaches to handle existing difficulties and develop new applications.
We introduce CLEVR-Math, a multi-modal math word problem dataset consisting of simple math word problems involving addition and subtraction, represented partly by a textual description and partly by an image illustrating the scene. The text describes actions performed on the scene depicted in the image. Since the questions posed may concern not the scene depicted in the image, but rather the state of the scene before or after the actions are applied, the solver needs to envision or imagine the state changes caused by these actions. Solving these word problems requires a combination of language, visual, and mathematical reasoning. We apply state-of-the-art neural and neuro-symbolic visual question answering models to CLEVR-Math and empirically evaluate their performance. Our results show how the two approaches generalize to chains of operations. We discuss the limitations of both in solving the task of multi-modal word problem solving.
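To illustrate the kind of reasoning involved, here is a toy sketch of the symbolic step a solver has to perform: apply the textual action to a scene state extracted from the image, then answer a counting question. The scene representation and action set are assumptions for illustration only, not the benchmarked models.

```python
# A toy sketch of applying textual add/remove actions to a symbolic scene state
# and then answering a counting question (illustrative, not the CLEVR-Math models).
from collections import Counter

def apply_action(scene, action, attr_value, count=1):
    """scene: Counter over attribute values (e.g. object colors) read off the image.
    action: 'add' or 'remove', as described in the problem text."""
    updated = scene.copy()
    if action == "add":
        updated[attr_value] += count
    elif action == "remove":
        updated[attr_value] = max(0, updated[attr_value] - count)
    return updated

# "The image shows 3 red and 2 blue cubes. Two red cubes are removed.
#  How many red cubes are left?"
scene = Counter({"red": 3, "blue": 2})
scene = apply_action(scene, "remove", "red", count=2)
print(scene["red"])  # -> 1
```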
Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Our work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations.
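The core RN computation applies a shared function g to every pair of object representations (optionally conditioned on the question) and sums the results before a final readout f. Below is a minimal sketch of that pattern; random weights stand in for trained MLPs, and the dimensions are illustrative assumptions.

```python
# A minimal sketch of the Relation Network pattern: g_theta over all object pairs,
# summed, then a readout f_phi. Random weights stand in for trained MLPs.
import numpy as np

rng = np.random.default_rng(0)
D_OBJ, D_Q, D_G, D_OUT = 16, 8, 32, 10
Wg = rng.normal(scale=0.1, size=(2 * D_OBJ + D_Q, D_G))
Wf = rng.normal(scale=0.1, size=(D_G, D_OUT))

def relation_network(objects, question):
    """objects: (N, D_OBJ) object features; question: (D_Q,) question embedding."""
    pair_sum = np.zeros(D_G)
    for i in range(len(objects)):
        for j in range(len(objects)):
            pair = np.concatenate([objects[i], objects[j], question])
            pair_sum += np.maximum(pair @ Wg, 0.0)   # g_theta with ReLU
    return pair_sum @ Wf                             # f_phi producing answer logits

objects = rng.normal(size=(6, D_OBJ))
question = rng.normal(size=D_Q)
print(relation_network(objects, question).shape)     # (10,)
```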
3D scene understanding is a relatively emerging research field. In this paper, we introduce the Visual Question Answering task in 3D real-world scenes (VQA-3D), which aims to answer all possible questions given a 3D scene. To tackle this problem, the first VQA-3D dataset, namely CLEVR3D, is proposed, containing 60K questions in 1,129 real-world scenes. Specifically, we develop a question engine that leverages 3D scene graph structures to generate diverse reasoning questions, covering questions about object attributes (i.e., size, color, and material) as well as their spatial relationships. Built upon this dataset, we further design the first VQA-3D baseline model, TransVQA3D. The TransVQA3D model adopts a well-designed Transformer architecture and achieves superior VQA-3D performance compared with pure language baselines and prior 3D reasoning methods directly applied to 3D scenes. Experimental results verify that taking VQA-3D as an auxiliary task can boost the performance of 3D scene understanding, including scene graph analysis for node-wise classification and whole-graph recognition.
Videos often capture objects, their visible properties, their motion, and the interactions between different objects. Objects also have physical properties such as mass, which the imaging pipeline is unable to directly capture. However, these properties can be estimated by utilizing cues from relative object motion and the dynamics introduced by collisions. In this paper, we introduce CRIPP-VQA, a new video question answering dataset for reasoning about the implicit physical properties of objects in a scene. CRIPP-VQA contains videos of objects in motion, annotated with questions that involve counterfactual reasoning about the effect of actions, questions about planning in order to reach a goal, and descriptive questions about visible properties of objects. The CRIPP-VQA test set enables evaluation under several out-of-distribution settings -- videos with objects with masses, coefficients of friction, and initial velocities that are not observed in the training distribution. Our experiments reveal a surprising and significant performance gap in terms of answering questions about implicit properties (the focus of this paper) and explicit properties of objects (the focus of prior work).
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image.
A scene graph is a structured representation of a scene that clearly expresses the objects, attributes, and relationships between objects in the scene. As computer vision technology continues to develop, people are no longer satisfied with simply detecting and recognizing objects in images; instead, they look forward to a higher level of understanding and reasoning about visual scenes. For example, given an image, we want to not only detect and recognize the objects in it, but also know the relationships between objects (visual relationship detection) and generate a textual description based on the image content (image captioning). Alternatively, we may want the machine to tell us what the little girl in the image is doing (visual question answering, VQA), or even to remove the dog from the image and find similar images (image editing and retrieval), and so on. These tasks require a higher level of understanding and reasoning over image content, and the scene graph is exactly such a powerful tool for scene understanding. Therefore, scene graphs have attracted the attention of a large number of researchers, and the related research is often cross-modal, complex, and rapidly evolving. However, there is currently no relatively systematic survey of scene graphs. To this end, this survey conducts a comprehensive investigation of current scene graph research. More specifically, we first summarize the general definition of scene graphs, followed by a comprehensive and systematic discussion of scene graph generation (SGG) and SGG with the aid of prior knowledge. We then survey the main applications of scene graphs and summarize the most commonly used datasets. Finally, we provide some insights into the future development of scene graphs. We believe this will be a very helpful foundation for future research on scene graphs.
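To make the definition concrete, here is a minimal, illustrative sketch of a scene graph as a data structure, with objects carrying attributes and relationships expressed as (subject, predicate, object) triples; the field names are assumptions for illustration, not a standard schema.

```python
# A minimal, illustrative scene-graph structure: objects carry attributes,
# and relationships are directed (subject, predicate, object) triples.
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str                                        # e.g. "girl", "dog"
    attributes: list = field(default_factory=list)   # e.g. ["small", "smiling"]

@dataclass
class Relationship:
    subject: int        # index of the subject object
    predicate: str      # e.g. "next to", "petting"
    object: int         # index of the object

@dataclass
class SceneGraph:
    objects: list
    relationships: list

    def triples(self):
        """Yield human-readable (subject, predicate, object) triples."""
        for r in self.relationships:
            yield (self.objects[r.subject].name, r.predicate, self.objects[r.object].name)

graph = SceneGraph(
    objects=[SceneObject("girl", ["small"]), SceneObject("dog", ["brown"])],
    relationships=[Relationship(0, "petting", 1)],
)
print(list(graph.triples()))  # [('girl', 'petting', 'dog')]
```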
Current visual question answering (VQA) tasks mainly consider answering human-annotated questions about natural images. However, aside from natural images, abstract diagrams with semantic richness remain underexplored in visual understanding and reasoning research. In this work, we introduce a new challenge of Icon Question Answering (IconQA), with the goal of answering questions in icon image contexts. We release IconQA, a large-scale dataset consisting of 107,439 questions and three sub-tasks: multi-image-choice, multi-text-choice, and filling-in-the-blank. The IconQA dataset is inspired by real-world diagrams and highlights the importance of abstract diagram understanding and comprehensive cognitive reasoning. As such, IconQA requires not only perception skills such as object recognition and text understanding, but also diverse cognitive reasoning skills such as geometric reasoning, commonsense reasoning, and arithmetic reasoning. To facilitate potential IconQA models in learning semantic representations of icon images, we further release an icon dataset, Icon645, which contains 645,687 colored icons in 377 classes. We conduct extensive user studies and blind experiments and reproduce a wide range of advanced VQA methods to benchmark the IconQA task. Furthermore, we develop a strong IconQA baseline, Patch-TRM, which applies a pyramid cross-modal Transformer with input diagram embeddings pre-trained on the icon dataset. IconQA and Icon645 are available at https://iconqa.github.io.
Visual Question Answering (VQA) has witnessed tremendous progress in recent years. However, most efforts have focused only on the 2D image question answering task. In this paper, we present the first attempt at extending VQA to the 3D domain, which can facilitate artificial intelligence's perception of 3D real-world scenarios. Different from image-based VQA, 3D Question Answering (3DQA) takes colored point clouds as input and requires both appearance and 3D geometry comprehension abilities to answer 3D-related questions. To this end, we propose a novel Transformer-based 3DQA framework, "3DQA-TR", which consists of two encoders that exploit appearance and geometry information, respectively. The multi-modal information of appearance, geometry, and the linguistic question can finally attend to each other via a 3D-Linguistic BERT to predict the target answers. To verify the effectiveness of our proposed 3DQA framework, we further develop the first 3DQA dataset, "ScanQA", which is built upon the ScanNet dataset and contains ~6K questions and ~30K answers for 806 scenes. Extensive experiments on this dataset demonstrate the clear superiority of our proposed 3DQA framework over existing VQA frameworks, as well as the effectiveness of our main designs. Our code and dataset will be made publicly available to facilitate research in this direction.
Visual question answering is fundamentally compositional in nature: a question like "where is the dog?" shares substructure with questions like "what color is the dog?" and "where is the cat?" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning neural module networks, which compose collections of jointly-trained neural "modules" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes.
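To illustrate the idea of composing reusable modules according to a question's structure, here is a toy sketch; the module set, the hard-coded layouts standing in for a parser, and the symbolic scene are assumptions for illustration, not the paper's neural modules.

```python
# A toy sketch of the neural-module-network idea: a parser maps a question to a layout
# of reusable modules, which are composed into a network for that question.
# Modules here operate on a symbolic scene purely for illustration.
scene = [
    {"category": "dog", "color": "brown", "location": "rug"},
    {"category": "cat", "color": "black", "location": "sofa"},
]

def find(category):          # reusable "find[x]" module
    return [o for o in scene if o["category"] == category]

def describe(attribute):     # reusable "describe[attr]" module
    return lambda objs: objs[0][attribute] if objs else "none"

LAYOUTS = {                  # stand-in for a question-to-layout parser
    "what color is the dog?": (describe("color"), find, "dog"),
    "where is the cat?": (describe("location"), find, "cat"),
}

def answer(question):
    head, module, arg = LAYOUTS[question]
    return head(module(arg))  # compose modules as dictated by the layout

print(answer("what color is the dog?"))  # -> brown
print(answer("where is the cat?"))       # -> sofa
```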
Deep learning techniques have led to remarkable breakthroughs in the field of generic object detection and have spawned a lot of scene understanding tasks in recent years. Scene graphs have been a focus of research because of their powerful semantic representation and their applications to scene understanding. Scene Graph Generation (SGG) refers to the task of automatically mapping an image into a semantically structured scene graph, which requires correctly labeling the detected objects and their relationships. Although this is a challenging task, the community has proposed many SGG approaches and achieved good results. In this paper, we provide a comprehensive survey of recent achievements in this field brought about by deep learning techniques. We review 138 representative works covering different input modalities and systematically summarize existing image-based SGG methods from the perspective of feature extraction and fusion. We attempt to connect and systematize existing visual relationship detection methods in a comprehensive way, summarizing and interpreting the mechanisms and strategies of SGG. Finally, we conclude this survey with an in-depth discussion of the current problems and future research directions. This survey will help readers better understand the current state of research and its ideas.
Charts are a popular and effective form of data visualization. Chart Question Answering (CQA) is a task used for assessing chart comprehension, which is fundamentally different from understanding natural images. CQA requires analyzing the relationships between the textual and visual components of a chart in order to answer general questions or infer numerical values. Most existing CQA datasets and models are based on simplifying assumptions that often enable surpassing human performance. In this work, we further explore the reasons behind this outcome and propose a new model that jointly learns classification and regression. Our language-vision setup with co-attention transformers captures the complex interactions between the question and the textual elements that commonly appear in real-world charts. We validate these conclusions with extensive experiments and ablations on the realistic PlotQA dataset, outperforming prior approaches by a large margin while showing competitive performance elsewhere. Our model's advantage is especially pronounced on questions with out-of-vocabulary answers, many of which require regression. We hope this work will motivate further research on the challenging and highly practical task of chart comprehension.
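As a rough illustration of why joint classification and regression helps with out-of-vocabulary numeric answers, here is a toy sketch of an answer head that either selects a vocabulary answer or regresses a numeric value, gated by a predicted decision. This is purely illustrative and not the paper's architecture; all names and dimensions are assumptions.

```python
# A toy joint classification + regression answer head (illustrative only): the model
# either picks an answer from a fixed vocabulary or regresses a numeric value.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, VOCAB = 128, 1000
W_cls = rng.normal(scale=0.02, size=(HIDDEN, VOCAB))   # vocabulary answer logits
W_reg = rng.normal(scale=0.02, size=(HIDDEN, 1))       # numeric value
W_gate = rng.normal(scale=0.02, size=(HIDDEN, 1))      # classification vs. regression gate

def answer(fused_features, vocab):
    """fused_features: (HIDDEN,) joint question+chart representation."""
    use_regression = 1 / (1 + np.exp(-(fused_features @ W_gate)[0])) > 0.5
    if use_regression:
        return float((fused_features @ W_reg)[0])       # out-of-vocabulary numeric answer
    return vocab[int(np.argmax(fused_features @ W_cls))]

vocab = [f"token_{i}" for i in range(VOCAB)]
print(answer(rng.normal(size=HIDDEN), vocab))
```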
The Visual Question Answering (VQA) task leverages visual images and language analysis to answer a textual question about an image. It has been a popular research topic with an increasing number of real-world applications over the last decade. This paper describes our recent research on AliceMind-MMU (Alibaba's encoder-decoders from the Machine Intelligence Lab of Damo Academy for multimedia understanding), which obtains similar or even slightly better results than humans on VQA. This is achieved by systematically improving the VQA pipeline, including: (1) pre-training with comprehensive visual and textual feature representations; (2) effective cross-modal interaction with learning to attend; and (3) a novel knowledge mining framework with specialized expert modules for the complex VQA task. Treating different types of visual questions with the corresponding expertise plays an important role in boosting the performance of our VQA architecture up to the human level. Extensive experiments and analysis are conducted to demonstrate the effectiveness of the new research work.
Recently, 3D vision-and-language tasks have attracted growing research interest. Compared with other vision-and-language tasks, the 3D visual question answering (VQA) task is less exploited and more susceptible to language priors and co-reference ambiguity. Meanwhile, several recently proposed 3D VQA datasets do not support the 3D VQA task well due to their limited scale and annotation methods. In this work, we formally define and address the 3D grounded VQA task by collecting a new 3D VQA dataset, referred to as FE-3DGQA, with diverse and relatively free-form question-answer pairs, as well as dense and completely grounded bounding box annotations. To achieve more explainable answers, we label the objects appearing in the complex QA pairs with different semantic types, including answer-grounded objects (both appearing and not appearing in the question) and contextual objects for the answer-grounded objects. We also propose a new 3D VQA framework to effectively predict completely visually grounded and explainable answers. Extensive experiments verify that our newly collected benchmark dataset can be effectively used to evaluate various 3D VQA methods from different aspects, and that our newly proposed framework also achieves state-of-the-art performance on the new benchmark dataset. Both the newly collected dataset and our code will be publicly available at http://github.com/zlccccc/3dgqa.
We propose Neuro-Symbolic Visual Dialog (NSVD), the first method to combine deep learning and symbolic program execution for multi-round visually-grounded reasoning. NSVD significantly outperforms existing purely connectionist methods on two key challenges inherent to visual dialog: long-distance co-reference resolution and vanishing question-answering performance. We demonstrate the latter by proposing a more realistic and stricter evaluation scheme in which we use predicted answers for the full dialog history when computing accuracy. We describe two variants of our model and show that, using this new scheme, our best model achieves an accuracy of 99.72% on CLEVR-Dialog, a relative improvement of more than 10% over the state of the art, while only requiring a fraction of the training data. Moreover, we demonstrate that our neuro-symbolic models have a higher mean first-failure round, are more robust to incomplete dialog histories, and generalize not only to dialogs that are up to three times longer than those seen during training but also to unseen question types and scenes.
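To give a flavor of the symbolic-execution side of such a system, below is a toy sketch of executing a question program over a scene while keeping a simple dialog memory for co-reference (so "it" in a later turn resolves to the previously referenced object). The operators, memory scheme, and scene format are assumptions for illustration, not the NSVD implementation.

```python
# A toy illustration (not the NSVD implementation) of symbolic program execution over a
# scene with a simple dialog memory for co-reference across turns.
scene = [
    {"id": 0, "shape": "cube", "color": "red"},
    {"id": 1, "shape": "sphere", "color": "blue"},
]
memory = {"last_referenced": None}

def execute(program):
    """program: list of (operator, argument) tuples produced by a question-to-program parser."""
    selection = list(scene)
    for op, arg in program:
        if op == "filter_color":
            selection = [o for o in selection if o["color"] == arg]
        elif op == "filter_shape":
            selection = [o for o in selection if o["shape"] == arg]
        elif op == "refer_back":                      # resolve "it" from dialog memory
            selection = [memory["last_referenced"]]
        elif op == "query":
            memory["last_referenced"] = selection[0]  # remember the entity for later turns
            return selection[0][arg]
    return selection

# Turn 1: "What color is the cube?"   Turn 2: "What shape is it?"
print(execute([("filter_shape", "cube"), ("query", "color")]))  # -> red
print(execute([("refer_back", None), ("query", "shape")]))      # -> cube
```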