Humans apprehend the world through various sensory modalities, yet language is their predominant communication channel. Machine learning systems need to make use of the same multimodal richness to have informed discourse with humans in natural language; this is especially true for systems specialized in visually dense information, such as dialogue, recommendation, and search engines for clothing. To this end, we train a visual question answering (VQA) system to answer complex natural language questions about apparel in fashion photoshoot images. The key to the successful training of our VQA model is the automatic creation of a visual question-answering dataset from the item attributes of 207 thousand images using diverse templates. The sample generation employs a strategy that takes question-answering difficulty into account in order to emphasize challenging concepts. Contrary to the recent trend of pre-training visual question answering models on several datasets, we focus on keeping the dataset fixed while training various models from scratch to isolate improvements in model architecture. We see that using the same transformer for encoding the question and decoding the answer, as in language models, achieves maximum accuracy, showing that visual language models (VLMs) make the best visual question answering systems for our dataset. The accuracy of the best model also exceeds human expert level, even when answering human-generated questions that are not confined to the template formats. Our approach for generating a large-scale multimodal domain-specific dataset provides a path for training specialized models capable of communicating in natural language. The training of such domain-expert models, e.g., our fashion VLM model, cannot rely solely on large-scale general-purpose datasets collected from the web.
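To make the template-based generation described above concrete, the sketch below shows how question-answer pairs could be produced from item attributes with difficulty-aware sampling. The attribute schema, templates, and difficulty weights are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of template-based question generation from item attributes,
# roughly in the spirit of the pipeline described above. The schema, templates,
# and difficulty weighting below are illustrative assumptions.
import random

TEMPLATES = {
    "color":   ["What color is the {category} in the image?"],
    "pattern": ["What pattern does the {category} have?"],
    "sleeve":  ["What sleeve length does the {category} have?"],
}

# Hypothetical difficulty weights: harder concepts are sampled more often.
DIFFICULTY = {"color": 1.0, "pattern": 2.0, "sleeve": 3.0}

def generate_qa(item):
    """Turn one annotated item into (question, answer) samples."""
    samples = []
    for attr, templates in TEMPLATES.items():
        if attr not in item["attributes"]:
            continue
        question = random.choice(templates).format(category=item["category"])
        samples.append({
            "image_id": item["image_id"],
            "question": question,
            "answer": item["attributes"][attr],
            "weight": DIFFICULTY[attr],   # used for difficulty-aware sampling
        })
    return samples

item = {
    "image_id": "img_000123",
    "category": "dress",
    "attributes": {"color": "red", "pattern": "floral", "sleeve": "long"},
}
pool = generate_qa(item)
# Sample questions proportionally to their difficulty weight.
batch = random.choices(pool, weights=[s["weight"] for s in pool], k=2)
print(batch)
```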
Artificial Intelligence (AI) and its applications have sparked extraordinary interest in recent years. This achievement can be ascribed in part to advances in AI subfields including Machine Learning (ML), Computer Vision (CV), and Natural Language Processing (NLP). Deep learning, a sub-field of machine learning that employs artificial neural network concepts, has enabled the most rapid growth in these domains, and the integration of vision and language has attracted a great deal of attention as a result. The associated tasks have been designed so that they properly exemplify the concepts of deep learning. In this review paper, we provide a thorough and extensive review of state-of-the-art approaches and key model design principles, and discuss existing datasets, methods, their problem formulations, and evaluation measures for VQA and visual reasoning tasks, in order to understand vision-and-language representation learning. We also present some potential future paths in this field of research, with the hope that our study may generate new ideas and novel approaches to handle existing difficulties and develop new applications.
The Visual Question Answering (VQA) task leverages visual images and language analysis to answer textual questions about an image. It has been a popular research topic with an increasing number of real-world applications over the last decade. This paper introduces our recent research on AliceMind-MMU (ALIbaba's Collection of Encoder-decoders from the Machine IntelligeNce lab of Damo academy, for MultiMedia Understanding), which obtains similar or even slightly better results than humans on VQA. This is achieved by systematically improving the VQA pipeline, including: (1) pre-training with comprehensive visual and textual feature representations; (2) effective cross-modal interaction with learning to attend; and (3) a novel knowledge mining framework with specialized expert modules for complex VQA tasks. Treating different types of visual questions with the corresponding expertise plays an important role in boosting the performance of our VQA architecture up to the human level. Extensive experiments and analyses are conducted to demonstrate the effectiveness of the new research work.
This paper presents a detailed study of improving visual representations for vision language (VL) tasks and develops an improved object detection model to provide object-centric representations of images. Compared to the most widely used bottom-up and top-down model [2], the new model is bigger, better-designed for VL tasks, and pre-trained on much larger training corpora that combine multiple public annotated object detection datasets. Therefore, it can generate representations of a richer collection of visual objects and concepts. While previous VL research focuses mainly on improving the vision-language fusion model and leaves the object detection model improvement untouched, we show that visual features matter significantly in VL models. In our experiments we feed the visual features generated by the new object detection model into a Transformer-based VL fusion model OSCAR [21], and utilize an improved approach OSCAR+ to pre-train the VL model and fine-tune it on a wide range of downstream VL tasks. Our results show that the new visual features significantly improve the performance across all VL tasks, creating new state-of-the-art results on seven public benchmarks. Code, models and pre-extracted features are released at https://github.com/pzzhang/VinVL.
Medical Visual Question Answering (VQA) is a combination of medical artificial intelligence and the popular VQA challenge. Given a medical image and a clinically relevant question in natural language, a medical VQA system is expected to predict a plausible and convincing answer. Although general-domain VQA has been widely studied, medical VQA still requires specific investigation and exploration due to its task characteristics. In the first part of this survey, we cover and discuss the publicly available medical VQA datasets with respect to data source, data quantity, and task features. In the second part, we review the approaches used in medical VQA tasks. Finally, we analyze some open challenges in the field and discuss future research directions.
A number of studies have found that today's Visual Question Answering (VQA) models are heavily driven by superficial correlations in the training data and lack sufficient image grounding. To encourage development of models geared towards the latter, we propose a new setting for VQA where for every question type, train and test sets have different prior distributions of answers. Specifically, we present new splits of the VQA v1 and VQA v2 datasets, which we call Visual Question Answering under Changing Priors (VQA-CP v1 and VQA-CP v2 respectively). First, we evaluate several existing VQA models under this new setting and show that their performance degrades significantly compared to the original VQA setting. Second, we propose a novel Grounded Visual Question Answering model (GVQA) that contains inductive biases and restrictions in the architecture specifically designed to prevent the model from 'cheating' by primarily relying on priors in the training data. Specifically, GVQA explicitly disentangles the recognition of visual concepts present in the image from the identification of plausible answer space for a given question, enabling the model to more robustly generalize across different distributions of answers. GVQA is built off an existing VQA model, Stacked Attention Networks (SAN). Our experiments demonstrate that GVQA significantly outperforms SAN on both VQA-CP v1 and VQA-CP v2 datasets. Interestingly, it also outperforms more powerful VQA models such as Multimodal Compact Bilinear Pooling (MCB) in several cases. GVQA offers strengths complementary to SAN when trained and evaluated on the original VQA v1 and VQA v2 datasets. Finally, GVQA is more transparent and interpretable than existing VQA models.
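The sketch below illustrates, under simplifying assumptions, how such "changing priors" splits could be built: within each question type, whole (question type, answer) groups are distributed so that the per-type answer distributions of train and test diverge. This is a simplified illustration of the idea, not the exact VQA-CP re-splitting procedure.

```python
# Minimal sketch of building "changing priors" splits: for every question type,
# train and test receive different (here, disjoint) answer groups, so the
# per-type answer priors differ between the two splits.
from collections import defaultdict

def changing_priors_split(examples):
    """examples: list of dicts with 'question_type' and 'answer' keys."""
    groups = defaultdict(list)
    for ex in examples:
        groups[(ex["question_type"], ex["answer"])].append(ex)

    by_type = defaultdict(list)
    for (qtype, _answer), grp in groups.items():
        by_type[qtype].append(grp)

    train, test = [], []
    for qtype, grps in by_type.items():
        # Alternate whole (type, answer) groups between splits, largest first.
        grps.sort(key=len, reverse=True)
        for i, grp in enumerate(grps):
            (train if i % 2 == 0 else test).extend(grp)
    return train, test

examples = [
    {"question_type": "what color", "answer": "red"},
    {"question_type": "what color", "answer": "blue"},
    {"question_type": "how many", "answer": "2"},
    {"question_type": "how many", "answer": "3"},
]
train, test = changing_priors_split(examples)
```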
Visual question answering (VQA) is challenging not only because the model has to handle multi-modal information, but also because it is just so hard to collect sufficient training examples -- there are too many questions one can ask about an image. As a result, a VQA model trained solely on human-annotated examples could easily over-fit specific question styles or image contents that are being asked, leaving the model largely ignorant about the sheer diversity of questions. Existing methods address this issue primarily by introducing an auxiliary task such as visual grounding, cycle consistency, or debiasing. In this paper, we take a drastically different approach. We found that many of the "unknowns" to the learned VQA model are indeed "known" in the dataset implicitly. For instance, questions asking about the same object in different images are likely paraphrases; the number of detected or annotated objects in an image already provides the answer to the "how many" question, even if the question has not been annotated for that image. Building upon these insights, we present a simple data augmentation pipeline SimpleAug to turn this "known" knowledge into training examples for VQA. We show that these augmented examples can notably improve the learned VQA models' performance, not only on the VQA-CP dataset with language prior shifts but also on the VQA v2 dataset without such shifts. Our method further opens up the door to leverage weakly-labeled or unlabeled images in a principled way to enhance VQA models. Our code and data are publicly available at https://github.com/heendung/simpleAUG.
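The counting example mentioned above translates almost directly into code. The sketch below, with an assumed annotation format and question template, turns per-image object annotations into extra "how many" training pairs, in the spirit of SimpleAug but not a reproduction of its full pipeline.

```python
# Minimal sketch of turning "known" dataset knowledge into extra VQA training
# pairs: counting annotated objects to create "how many" questions.
# The annotation format and question template are illustrative assumptions.
from collections import Counter

def counting_questions(image_id, object_labels):
    """object_labels: list of annotated object class names for one image."""
    counts = Counter(object_labels)
    qa_pairs = []
    for obj, n in counts.items():
        qa_pairs.append({
            "image_id": image_id,
            "question": f"How many {obj}s are in the image?",  # naive pluralization
            "answer": str(n),
        })
    return qa_pairs

print(counting_questions("img_42", ["dog", "dog", "frisbee"]))
# [{'question': 'How many dogs are in the image?', 'answer': '2'}, ...]
```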
Transformers for visual-language representation learning have attracted considerable interest and shown tremendous performance on visual question answering (VQA) and grounding. However, most systems that show good performance still rely on pre-trained object detectors during training, which limits their applicability to the object classes available to those detectors. To mitigate this limitation, this paper focuses on the problem of weakly supervised grounding in the context of visual question answering in transformers. The approach leverages capsules by grouping each visual token in the visual encoder, and uses language self-attention layers as a text-guided selection module to mask the capsules before forwarding them to the next layer. We evaluate our approach on the challenging GQA as well as the VQA-HAT dataset for VQA grounding. Our experiments show that, while removing the information of masked objects from standard transformer architectures, the integration of capsules significantly improves the grounding ability of such systems and provides new state-of-the-art results compared to other approaches in the field.
Large-scale Vision-and-Language (V+L) pre-training has been shown to be effective in enhancing downstream V+L tasks. However, when it comes to the fashion domain, existing V+L methods are inadequate, as they overlook the unique characteristics of fashion V+L data and downstream tasks. In this work, we propose a novel fashion-focused V+L representation learning framework, dubbed FashionViL. It contains two novel fashion-specific pre-training tasks designed to exploit two intrinsic attributes of fashion V+L data. First, in contrast to other domains where each image-text pair contains only a single image, there may be multiple images in the fashion domain. We therefore propose a multi-view contrastive learning task for pulling the visual representation of one image closer to the composed multimodal representation of another image plus its text. Second, fashion texts (e.g., product descriptions) often contain rich fine-grained concepts (attributes/noun phrases). To exploit this, a pseudo-attribute classification task is introduced to encourage the learned unimodal (visual/textual) representations of the same concept to be close. Further, fashion V+L tasks uniquely include ones that do not conform to the common one-stream or two-stream architectures (e.g., text-guided image retrieval). We thus propose a flexible, versatile V+L model architecture consisting of a modality-agnostic transformer so that it can be flexibly adapted to any downstream task. Extensive experiments show that our FashionViL achieves new state of the art across five downstream tasks. Code is available at https://github.com/brandonhanx/mmf.
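As a rough illustration of the multi-view contrastive pre-training task, the sketch below pulls the visual embedding of one product image toward the composed image+text embedding of another view of the same product with a symmetric InfoNCE loss. The encoders, batch pairing, and temperature are placeholder assumptions, not the paper's exact setup.

```python
# Minimal sketch of a multi-view contrastive objective in the spirit of the
# pre-training task described above. Each row of the two tensors is assumed to
# come from two views of the same product (one image-only, one image+text).
import torch
import torch.nn.functional as F

def multiview_contrastive_loss(visual_emb, composed_emb, temperature=0.07):
    """visual_emb, composed_emb: (batch, dim) embeddings of paired views."""
    v = F.normalize(visual_emb, dim=-1)
    m = F.normalize(composed_emb, dim=-1)
    logits = v @ m.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(v.size(0))         # matching views lie on the diagonal
    # Symmetric InfoNCE: image -> composed and composed -> image.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = multiview_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```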
Current visual question answering (VQA) tasks mainly consider answering human-annotated questions about natural images. However, beyond natural images, abstract diagrams with semantic richness remain underexplored in visual understanding and reasoning research. In this work, we introduce the new challenge of Icon Question Answering (IconQA), whose goal is to answer questions in the context of icon images. We release IconQA, a large-scale dataset consisting of 107,439 questions across three sub-tasks: multi-image choice, multi-text choice, and filling in the blank. The IconQA dataset is inspired by real-world diagram word problems and highlights the importance of abstract diagram understanding and comprehensive cognitive reasoning. IconQA therefore requires not only perception skills such as object recognition and text understanding, but also diverse cognitive reasoning skills, such as geometric reasoning, commonsense reasoning, and arithmetic reasoning. To facilitate potential IconQA models in learning semantic representations of icon images, we further release an icon dataset, Icon645, which contains 645,687 colored icons in 377 classes. We conduct extensive user studies and blind experiments, and reproduce a wide range of advanced VQA methods to benchmark the IconQA task. Furthermore, we develop a strong IconQA baseline, Patch-TRM, which applies a pyramid cross-modal Transformer with input diagram embeddings pre-trained on the icon dataset. IconQA and Icon645 are available at https://iconqa.github.io.
Visual question answering is an important task in natural language and vision understanding. However, in most public visual question answering datasets such as VQA and CLEVR, the questions are human-generated and specific to the given image, such as "What color are her eyes?". Such human-generated, crowdsourced questions are relatively simple and sometimes biased toward certain entities or attributes. In this paper, we introduce a new image-based question answering dataset, ChiQA. It contains real-world queries issued by internet users, each paired with several related open-domain images, and the system should determine whether an image can answer the question. Unlike previous VQA datasets, the questions are real-world, image-independent queries, which are more diverse and less biased. Compared to previous image-retrieval or image-captioning datasets, ChiQA measures not only relevance but also answerability, which requires more fine-grained vision and language reasoning. ChiQA contains more than 40K questions and more than 200K question-image pairs. A three-level 2/1/0 label is assigned to each pair, indicating a perfect answer, a partial answer, or an irrelevant one. Data analysis shows that ChiQA requires a deep understanding of both language and vision, including grounding, comparison, and reading. We evaluate several state-of-the-art vision-language models such as ALBEF, demonstrating that there is still substantial room for improvement on ChiQA.
Recently, 3D vision-and-language tasks have attracted growing research interest. Compared to other vision-and-language tasks, the 3D visual question answering (VQA) task is less explored and more susceptible to language priors and co-reference ambiguity. Meanwhile, several recently proposed 3D VQA datasets do not support the 3D VQA task well due to their limited scale and annotation methods. In this work, we formally define and address the 3D grounded VQA task by collecting a new 3D VQA dataset, referred to as FE-3DGQA, with diverse and relatively free-form question-answer pairs, as well as dense and completely grounded bounding-box annotations. To achieve more explainable answers, we label the objects appearing in the complex QA pairs with different semantic types, including answer-grounded objects (both appearing and not appearing in the question) and the contextual objects for answer-grounded objects. We also propose a new 3D VQA framework to effectively predict completely visually grounded and explainable answers. Extensive experiments verify that our newly collected benchmark dataset can be effectively used to evaluate various 3D VQA methods from different aspects, and that our newly proposed framework also achieves state-of-the-art performance on the new benchmark dataset. Both the newly collected dataset and our code will be publicly available at http://github.com/zlccccc/3dgqa.
General-purpose vision (GPV) systems are models designed to solve a wide variety of visual tasks without requiring architectural changes. Today, GPVs primarily learn both skills and concepts from large, fully supervised datasets. Scaling GPVs to tens of thousands of concepts by acquiring data to learn each concept for every skill quickly becomes prohibitive. This work presents an effective and inexpensive alternative: learn skills from supervised datasets, learn concepts from web image search, and leverage a key characteristic of GPVs, the ability to transfer visual knowledge across skills. We use a dataset of 1M+ images spanning 10K+ visual concepts to demonstrate webly supervised concept expansion for two existing GPVs (GPV-1 and VL-T5) on three benchmarks: five COCO-based datasets (80 primary concepts), a newly curated series of five datasets based on the OpenImages and VisualGenome repositories (~500 concepts), and a web-derived dataset (10K+ concepts). We also propose a new architecture, GPV-2, that supports a variety of tasks, from vision tasks such as classification and localization, to vision+language tasks such as QA and captioning, to more niche ones such as human-object interaction detection. GPV-2 benefits hugely from web data and outperforms GPV-1 and VL-T5 across these benchmarks. Our data, code, and web demo are available at https://prior.allenai.org/projects/gpv2.
In the past few years, the emergence of pre-training models has brought uni-modal fields such as computer vision (CV) and natural language processing (NLP) into a new era. Substantial works have shown that they are beneficial for downstream uni-modal tasks and avoid training a new model from scratch. So can such pre-trained models be applied to multi-modal tasks? Researchers have explored this problem and made significant progress. This paper surveys recent advances and new frontiers in vision-language pre-training (VLP), including image-text and video-text pre-training. To give readers a better grasp of VLP, we first review its recent advances from five aspects: feature extraction, model architecture, pre-training objectives, pre-training datasets, and downstream tasks. We then give a detailed overview of specific VLP models. Finally, we discuss the new frontiers in VLP. To the best of our knowledge, this is the first survey focused on VLP. We hope this survey can shed light on future research in the VLP field.
Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find this problematic in terms of both (1) efficiency/speed, in that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded by the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to the same convolution-free manner in which we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt.
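A minimal sketch of this idea follows: image patches are flattened and linearly projected into token embeddings (no CNN backbone, no object detector), concatenated with word embeddings, and processed by a single transformer. Dimensions are placeholders, and positional and modality-type embeddings, which the real model uses, are omitted for brevity.

```python
# Minimal sketch of a ViLT-style single-stream model: visual inputs are handled
# like text, i.e. patches are linearly projected into tokens and fused with word
# embeddings in one transformer. Sizes and vocabulary are placeholder assumptions.
import torch
import torch.nn as nn

class MinimalViLT(nn.Module):
    def __init__(self, vocab_size=30522, dim=256, patch=32):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.patch_proj = nn.Linear(3 * patch * patch, dim)  # linear patch embedding
        self.patch = patch
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, images, token_ids):
        b, c, h, w = images.shape
        p = self.patch
        # Flatten non-overlapping patches into (b, num_patches, 3*p*p).
        patches = images.unfold(2, p, p).unfold(3, p, p)
        patches = patches.contiguous().view(b, c, -1, p, p)
        patches = patches.permute(0, 2, 1, 3, 4).reshape(b, -1, c * p * p)
        visual_tokens = self.patch_proj(patches)
        text_tokens = self.word_emb(token_ids)
        # Single-stream fusion: one transformer over the concatenated sequence.
        return self.encoder(torch.cat([text_tokens, visual_tokens], dim=1))

model = MinimalViLT()
out = model(torch.randn(2, 3, 224, 224), torch.randint(0, 30522, (2, 16)))
```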
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
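For reference, the scaled dot-product self-attention that this survey identifies as the core ingredient is commonly written, following Vaswani et al., for query, key, and value matrices Q, K, and V with key dimension d_k as:

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V

This is the standard formulation from the original Transformer paper, not a result specific to this survey.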
With the abundance of image-text pairs and the diversity of vision-and-language (V&L) tasks, scholars have introduced a large number of deep learning models in this research area. Furthermore, in recent years transfer learning has also shown tremendous success in computer vision for tasks such as image classification and object detection, and in natural language processing for question answering, machine translation, and more. Inheriting the spirit of transfer learning, research works in V&L have devised multiple pre-training techniques on large-scale datasets in order to enhance the performance of downstream tasks. The aim of this paper is to provide a comprehensive revision of contemporary V&L pre-trained models. In particular, we categorize and delineate pre-training approaches, along with a summary of state-of-the-art vision-and-language pre-trained models. Moreover, a list of training datasets and downstream tasks is supplied to further polish the perspective on V&L pre-training. Lastly, we take a further step to discuss numerous directions for future research.
Large-scale pre-training methods of learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute-force manner, in this paper, we propose a new learning method, Oscar, which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected, and are often mentioned in the paired text. We pre-train an Oscar model on the public corpus of 6.5 million text-image pairs, and fine-tune it on downstream tasks, creating new state-of-the-art results on six well-established vision-language understanding and generation tasks.
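The sketch below shows, under assumed dimensions and a placeholder tokenizer, how an Oscar-style input sequence could be assembled from the triple of word tokens, detected object tags, and region features, with the tags acting as anchor points shared by both modalities. It is an illustration of the input format only, not the authors' implementation.

```python
# Minimal sketch of assembling an Oscar-style (words, object tags, region
# features) input sequence for a BERT-style encoder. Tokenizer, detector
# features, and dimensions are placeholder assumptions.
import torch
import torch.nn as nn

dim = 768
word_emb = nn.Embedding(30522, dim)          # text vocabulary embedding
region_proj = nn.Linear(2048, dim)           # project detector region features

caption_ids = torch.randint(0, 30522, (1, 12))   # tokenized caption / question
tag_ids = torch.randint(0, 30522, (1, 5))        # tokenized object tags, e.g. "dog"
region_feats = torch.randn(1, 5, 2048)           # pooled features of detected regions

inputs = torch.cat([
    word_emb(caption_ids),      # language view
    word_emb(tag_ids),          # anchor points: tags are text but describe regions
    region_proj(region_feats),  # visual view
], dim=1)                       # (1, 12 + 5 + 5, 768) input sequence
```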
Charts are a popular and effective form of data visualization. Chart question answering (CQA) is a task used for assessing chart comprehension, which is fundamentally different from understanding natural images. CQA requires analyzing the relationships between the textual and the visual components of a chart in order to answer general questions or infer numerical values. Most existing CQA datasets and models are based on simplifying assumptions that often enable surpassing human performance. In this work, we further explore the reasons behind this outcome and propose a new model that jointly learns classification and regression. Our language-vision setup with co-attention transformers captures the complex interactions between the question and the textual elements, which commonly exist in real-world charts. We validate these conclusions with extensive experiments and ablations on the realistic PlotQA dataset, outperforming previous approaches by a large margin while showing competitive performance. Our model's edge is particularly pronounced on questions with out-of-vocabulary answers, many of which require regression. We hope this work will stimulate further research on the challenging and highly practical task of chart comprehension.
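To illustrate the joint classification-regression formulation, the sketch below attaches both a vocabulary classifier and a regression head to a shared question-chart representation and mixes the two losses per sample. Head sizes, the loss mix, and variable names are illustrative assumptions rather than the paper's exact design.

```python
# Minimal sketch of a joint classification/regression answer head for chart QA:
# categorical answers go through a vocabulary classifier, numeric (possibly
# out-of-vocabulary) answers through a regression head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointAnswerHead(nn.Module):
    def __init__(self, dim=512, vocab=1000):
        super().__init__()
        self.classifier = nn.Linear(dim, vocab)  # answers from a fixed vocabulary
        self.regressor = nn.Linear(dim, 1)       # continuous numeric answers

    def forward(self, fused):                    # fused: (batch, dim) question-chart features
        return self.classifier(fused), self.regressor(fused).squeeze(-1)

head = JointAnswerHead()
fused = torch.randn(4, 512)
logits, values = head(fused)
is_numeric = torch.tensor([0., 1., 1., 0.])      # which samples have numeric answers
cls_target = torch.randint(0, 1000, (4,))
num_target = torch.randn(4)
# Per-sample mixture: cross-entropy for categorical answers, MSE for numeric ones.
loss = ((1 - is_numeric) * F.cross_entropy(logits, cls_target, reduction="none")
        + is_numeric * F.mse_loss(values, num_target, reduction="none")).mean()
```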
Visual understanding goes well beyond object recognition. With one glance at an image, we can effortlessly imagine the world beyond the pixels: for instance, we can infer people's actions, goals, and mental states. While this task is easy for humans, it is tremendously difficult for today's vision systems, requiring higher-order cognition and commonsense reasoning about the world. We formalize this task as Visual Commonsense Reasoning. Given a challenging question about an image, a machine must answer correctly and then provide a rationale justifying its answer. Next, we introduce a new dataset, VCR, consisting of 290k multiple choice QA problems derived from 110k movie scenes. The key recipe for generating non-trivial and high-quality problems at scale is Adversarial Matching, a new approach to transform rich annotations into multiple choice questions with minimal bias. Experimental results show that while humans find VCR easy (over 90% accuracy), state-of-the-art vision models struggle (∼45%). To move towards cognition-level understanding, we present a new reasoning engine, Recognition to Cognition Networks (R2C), that models the necessary layered inferences for grounding, contextualization, and reasoning. R2C helps narrow the gap between humans and machines (∼65%); still, the challenge is far from solved, and we provide analysis that suggests avenues for future work.