We propose a new 3D spatial understanding task for 3D question answering (3D-QA). In the 3D-QA task, models receive visual information from the entire 3D scene of a rich RGB-D indoor scan and answer given textual questions about the 3D scene. Unlike 2D VQA, conventional 2D-QA models suffer from problems with spatial understanding of object alignment and directions, and fail at object localization from the textual questions in 3D-QA. We propose a baseline model for 3D-QA, named the ScanQA model, which learns a fused descriptor from 3D object proposals and encoded sentence embeddings. This learned descriptor correlates language expressions with the underlying geometric features of the 3D scan and facilitates the regression of 3D bounding boxes to determine the objects described in the textual questions. We collected human-edited question-answer pairs with free-form answers that are grounded to 3D objects in each 3D scene. Our new ScanQA dataset contains over 41K question-answer pairs from 800 indoor scenes drawn from the ScanNet dataset. To our knowledge, ScanQA is the first large-scale effort to perform object-grounded question answering in 3D environments.
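The fuse-then-score idea described above can be sketched as follows. This is a hypothetical, untrained toy (element-wise fusion, random weight vector) meant only to illustrate how per-proposal descriptors and a sentence embedding might be combined to rank proposals before box regression; the paper's actual network is learned end-to-end.

```python
import math
import random

def fuse_and_score(proposal_feats, question_emb):
    """Score each 3D object proposal against an encoded question.

    Illustrative sketch: fuse each proposal descriptor with the sentence
    embedding by element-wise product, then score the fused descriptor
    with a (randomly initialised) weight vector. A trained model would
    learn these weights and regress a 3D box for the best proposal.
    """
    dim = len(question_emb)
    w = [random.gauss(0.0, 1.0 / math.sqrt(dim)) for _ in range(dim)]
    scores = []
    for feat in proposal_feats:
        fused = [f * q for f, q in zip(feat, question_emb)]  # fused descriptor
        scores.append(sum(fi * wi for fi, wi in zip(fused, w)))
    return scores

random.seed(0)
proposals = [[random.random() for _ in range(8)] for _ in range(5)]
question = [random.random() for _ in range(8)]
scores = fuse_and_score(proposals, question)
best = max(range(len(scores)), key=lambda i: scores[i])
print(f"best proposal index: {best}")
```

In the full model the highest-scoring fused descriptor would feed a bounding-box regression head rather than a simple argmax.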
Visual Question Answering (VQA) has witnessed tremendous progress in recent years. However, most efforts focus only on the 2D image question-answering task. In this paper, we present the first attempt at extending VQA to the 3D domain, which can facilitate artificial intelligence's perception of 3D real-world scenarios. Different from image-based VQA, 3D Question Answering (3DQA) takes a color point cloud as input and requires both appearance and 3D geometry comprehension to answer 3D-related questions. To this end, we propose a novel transformer-based 3DQA framework "3DQA-TR", which consists of two encoders for exploiting the appearance and geometry information, respectively. The multi-modal information about appearance, geometry, and the linguistic question can finally attend to each other via a 3D-Linguistic BERT to predict the target answers. To verify the effectiveness of our proposed 3DQA framework, we further develop the first 3DQA dataset "ScanQA", which builds on the ScanNet dataset and contains ~6K questions and ~30K answers for 806 scenes. Extensive experiments on this dataset demonstrate the obvious superiority of our proposed 3DQA framework over existing VQA frameworks, as well as the effectiveness of our major designs. Our code and dataset will be made publicly available to facilitate research in this direction.
Recently, 3D vision-and-language tasks have attracted increasing research interest. Compared with other vision-and-language tasks, the 3D visual question answering (VQA) task is less exploited and more susceptible to language priors and co-reference ambiguity. Meanwhile, several recently proposed 3D VQA datasets do not support the 3D VQA task well due to their limited scale and annotation methods. In this work, we formally define and address a 3D grounded VQA task by collecting a new 3D VQA dataset, referred to as FE-3DGQA, with diverse and relatively free-form question-answer pairs, as well as dense and completely grounded bounding box annotations. To achieve more explainable answers, we label the objects appearing in the complex QA pairs with different semantic types, including answer-grounded objects (both appearing and not appearing in the question) and contextual objects for the answer-grounded objects. We also propose a new 3D VQA framework to effectively predict completely visually grounded and explainable answers. Extensive experiments verify that our newly collected benchmark dataset can be effectively used to evaluate various 3D VQA methods from different aspects, and that our newly proposed framework achieves state-of-the-art performance on the new benchmark. Both the newly collected dataset and our code will be publicly available at http://github.com/zlccccc/3dgqa.
We have seen great progress in basic perceptual tasks such as object recognition and detection. However, AI models still fail to match humans in high-level vision tasks due to the lack of capacities for deeper reasoning. Recently the new task of visual question answering (QA) has been proposed to evaluate a model's capacity for deep image understanding. Previous works have established a loose, global association between QA sentences and images. However, many questions and answers, in practice, relate to local regions in the images. We establish a semantic link between textual descriptions and image regions by object-level grounding. It enables a new type of QA with visual answers, in addition to textual answers used in previous work. We study the visual QA tasks in a grounded setting with a large collection of 7W multiple-choice QA pairs. Furthermore, we evaluate human performance and several baseline models on the QA tasks. Finally, we propose a novel LSTM model with spatial attention to tackle the 7W QA tasks.
Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.
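The top-down weighting step can be illustrated with a minimal sketch: each bottom-up region feature is scored against a query (e.g. a question encoding), the scores are normalised with a softmax, and the attended feature is the weighted sum. This is a generic soft-attention sketch, not the paper's exact network.

```python
import math

def top_down_attention(region_feats, query):
    """Weight bottom-up region features with a top-down query.

    Generic soft attention: dot-product relevance scores, softmax
    normalisation, then a weighted sum of the region features.
    """
    scores = [sum(f * q for f, q in zip(feat, query)) for feat in region_feats]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(region_feats[0])
    attended = [sum(w * feat[d] for w, feat in zip(weights, region_feats))
                for d in range(dim)]
    return weights, attended

regions = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # toy bottom-up features
query = [1.0, 0.0]                              # toy top-down query
weights, attended = top_down_attention(regions, query)
print(weights)
```

The region most aligned with the query receives the largest weight, which is the behaviour the bottom-up/top-down split is designed to exploit.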
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image.
Video Question Answering methods focus on commonsense reasoning and visual cognition of objects or persons and their interactions over time. Current VideoQA approaches ignore the textual information present in the video. Instead, we argue that textual information is complementary to the action and provides essential contextualisation cues to the reasoning process. To this end, we propose a novel VideoQA task that requires reading and understanding the text in the video. To explore this direction, we focus on news videos and require QA systems to comprehend and answer questions about the topics presented by combining visual and textual cues in the video. We introduce the "NewsVideoQA" dataset that comprises more than 8,600 QA pairs on 3,000+ news videos obtained from diverse news channels from around the world. We demonstrate the limitations of current Scene Text VQA and VideoQA methods and propose ways to incorporate scene text information into VideoQA methods.
We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ∼0.25M images, ∼0.76M questions, and ∼10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa).
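The "amenable to automatic evaluation" claim rests on a consensus metric over the multiple human answers collected per question. A minimal sketch of the commonly quoted form of that metric (an answer counts as fully correct if at least 3 of the 10 annotators gave it):

```python
def vqa_accuracy(predicted, human_answers):
    """Consensus accuracy for open-ended VQA evaluation.

    An answer is 100% correct if at least 3 annotators gave it,
    and partially correct otherwise: min(#matches / 3, 1).
    (The official evaluation additionally averages over annotator
    subsets and normalises answer strings; this sketch omits that.)
    """
    matches = sum(1 for a in human_answers if a == predicted)
    return min(matches / 3.0, 1.0)

humans = ["red", "red", "red", "dark red", "red", "maroon",
          "red", "red", "crimson", "red"]
print(vqa_accuracy("red", humans))     # fully correct: 7 matches
print(vqa_accuracy("maroon", humans))  # partial credit: 1 match
```

This per-answer scoring is what makes short open-ended answers, as well as multiple-choice formats, automatically gradable at scale.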
Recent studies on 3D dense captioning and visual grounding have achieved impressive results. Despite developments in both areas, the limited amount of available 3D vision-language data causes overfitting issues for 3D visual grounding and 3D dense captioning methods. Also, how to discriminatively describe objects in complex 3D environments has not yet been fully studied. To address these challenges, we present D3Net, an end-to-end neural speaker-listener architecture that can detect, describe and discriminate. Our D3Net unifies 3D dense captioning and visual grounding in a self-critical manner. This self-critical property of D3Net also introduces discriminability during object caption generation and enables semi-supervised training on ScanNet data with partially annotated descriptions. Our method outperforms SOTA methods in both tasks on the ScanRefer dataset, surpassing the SOTA 3D dense captioning method by a significant margin (23.56% CiDEr@0.5IoU improvement).
We present Answer-Me, a task-aware multi-task framework which unifies a variety of question answering tasks, such as visual question answering, visual entailment, and visual reasoning. In contrast to previous works using contrastive or generative captioning training, we propose a novel and simple recipe to pre-train a vision-language joint model, which is multi-task as well. The pre-training uses only noisy image captioning data, and is formulated to use the entire architecture end-to-end with both a strong language encoder and decoder. Our results show state-of-the-art performance, zero-shot generalization, robustness to forgetting, and competitive single-task results across a variety of question answering tasks. Our multi-task mixture training learns from tasks of various question intents and thus generalizes better, including on zero-shot vision-language tasks. We conduct experiments in the challenging multi-task and open-vocabulary settings and across a variety of datasets and tasks, such as VQA2.0, SNLI-VE, NLVR2, GQA. We observe that the proposed approach is able to generalize to unseen tasks and that more diverse mixtures lead to higher accuracy in both known and novel tasks.
Visual understanding goes well beyond object recognition. With one glance at an image, we can effortlessly imagine the world beyond the pixels: for instance, we can infer people's actions, goals, and mental states. While this task is easy for humans, it is tremendously difficult for today's vision systems, requiring higher-order cognition and commonsense reasoning about the world. We formalize this task as Visual Commonsense Reasoning. Given a challenging question about an image, a machine must answer correctly and then provide a rationale justifying its answer. Next, we introduce a new dataset, VCR, consisting of 290k multiple choice QA problems derived from 110k movie scenes. The key recipe for generating non-trivial and high-quality problems at scale is Adversarial Matching, a new approach to transform rich annotations into multiple choice questions with minimal bias. Experimental results show that while humans find VCR easy (over 90% accuracy), state-of-the-art vision models struggle (∼45%). To move towards cognition-level understanding, we present a new reasoning engine, Recognition to Cognition Networks (R2C), that models the necessary layered inferences for grounding, contextualization, and reasoning. R2C helps narrow the gap between humans and machines (∼65%); still, the challenge is far from solved, and we provide analysis that suggests avenues for future work.
Recent years have witnessed an increasing interest in image-based question-answering (QA) tasks. However, due to data limitations, there has been much less work on video-based QA. In this paper, we present TVQA, a large-scale video QA dataset based on 6 popular TV shows. TVQA consists of 152,545 QA pairs from 21,793 clips, spanning over 460 hours of video. Questions are designed to be compositional in nature, requiring systems to jointly localize relevant moments within a clip, comprehend subtitle-based dialogue, and recognize relevant visual concepts. We provide analyses of this new dataset as well as several baselines and a multi-stream end-to-end trainable neural network framework for the TVQA task. The dataset is publicly available at http://tvqa.cs.unc.edu.
Considering the role of situational awareness in safety-critical automation systems, the perception of risk in driving scenes and its explainability are of particular importance for autonomous and cooperative driving. Toward this goal, this paper proposes a new research direction of joint risk localization in driving scenes and its risk explanation as a natural language description. Due to the lack of a standard benchmark, we collected a large-scale dataset, DRAMA (Driving Risk Assessment Mechanism with A captioning module), which consists of 17,785 interactive driving scenarios collected in Tokyo, Japan. Our DRAMA dataset accommodates video- and object-level questions on driving risks with associated important objects to achieve the goal of visual captioning as a free-form language description, utilizing closed and open-ended responses for multi-level questions, which can be used to evaluate a range of visual captioning capabilities in driving scenarios. We make this data available to the community for further research. Using DRAMA, we explore multiple facets of joint risk localization and captioning in interactive driving scenarios. In particular, we benchmark various multi-task prediction architectures and provide a detailed analysis of joint risk localization and risk captioning. The dataset is available at https://usa.honda-ri.com/drama.
We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MS-COCO. We have released the dataset and a toolbox for visualization and evaluation, see https://github.com/mjhucla/Google_Refexp_toolbox.
We present a new AI task, Embodied Question Answering (EmbodiedQA), where an agent is spawned at a random location in a 3D environment and asked a question ('What color is the car?'). In order to answer, the agent must first intelligently navigate to explore the environment, gather information through first-person (egocentric) vision, and then answer the question ('orange'). This challenging task requires a range of AI skills: active perception, language understanding, goal-driven navigation, commonsense reasoning, and grounding of language into actions. In this work, we develop the environments, end-to-end-trained reinforcement learning agents, and evaluation protocols for EmbodiedQA.
Artificial Intelligence (AI) and its applications have sparked extraordinary interest in recent years. This achievement can be ascribed in part to advances in AI subfields including Machine Learning (ML), Computer Vision (CV), and Natural Language Processing (NLP). Deep learning, a sub-field of machine learning that employs artificial neural network concepts, has enabled the most rapid growth in these domains. The integration of vision and language has attracted a great deal of attention as a result. The tasks have been created in such a way that they properly exemplify the concepts of deep learning. In this review paper, we provide a thorough and extensive review of state-of-the-art approaches and key model design principles, and discuss existing datasets, methods, their problem formulations, and evaluation measures for VQA and visual reasoning tasks, in order to understand vision and language representation learning. We also present some potential future paths in this field of research, with the hope that our study may generate new ideas and novel approaches to handle existing difficulties and develop new applications.
A long-term goal of intelligent assistants such as AR glasses/robots is to assist users in affordance-centric real-world scenarios, e.g., "how can I run the microwave for 1 minute?". However, there is still no clear task definition and suitable benchmark. In this paper, we define a new task called Affordance-centric Question-driven Task Completion, where the AI assistant should learn from instructional videos and scripts to guide the user step-by-step. To support the task, we constructed AssistQ, a new dataset comprising 531 question-answer samples derived from 100 newly filmed first-person videos. Each question should be completed with multi-step guidance by inferring from visual details (e.g., buttons' locations) and textual details (e.g., actions like press/turn). To address this unique task, we developed a Question-to-Actions (Q2A) model that significantly outperforms several baseline methods while still leaving large room for improvement. We expect our task and dataset to advance the development of egocentric AI assistants. Our project page is available at: https://showlab.github.io/assistq
Recently, there has been increasing interest in building question answering (QA) models that reason across multiple modalities, such as text and images. However, QA using images is often limited to picking an answer from a predefined set of options. In addition, images in the real world, especially in news, have objects that are co-referential to the text, with complementary information from both modalities. In this paper, we present a new QA evaluation benchmark with 1,384 questions over news articles that require cross-media grounding of objects in images onto the text. Specifically, the task involves multi-hop questions that require reasoning over image-caption pairs to identify the grounded visual object being referred to, and then predicting a span from the news body text to answer the question. In addition, we introduce a novel multimedia data augmentation framework, based on cross-media knowledge extraction and synthetic question-answer generation, to automatically augment data that can provide weak supervision for this task. We evaluate both pipeline-based and end-to-end pretraining-based multimedia QA models on our benchmark, and show that they achieve promising performance while considerably lagging behind human performance, hence leaving large room for future work on this challenging new task.
Learning descriptive 3D features is crucial for understanding 3D scenes with diverse objects and complex structures. However, it is usually unknown whether important geometric attributes and scene context obtain enough emphasis in an end-to-end trained 3D scene understanding network. To guide 3D feature learning toward important geometric attributes and scene context, we explore the help of textual scene descriptions. Given some free-form descriptions paired with 3D scenes, we extract the knowledge regarding the object relationships and object attributes. We then inject the knowledge into 3D feature learning through three classification-based auxiliary tasks. This language-assisted training can be combined with modern object detection and instance segmentation methods to promote 3D semantic scene understanding, especially in a label-deficient regime. Moreover, the 3D feature learned with language assistance is better aligned with the language features, which can benefit various 3D-language multimodal tasks. Experiments on several benchmarks of 3D-only and 3D-language tasks demonstrate the effectiveness of our language-assisted 3D feature learning. Code is available at https://github.com/Asterisci/Language-Assisted-3D.
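The auxiliary-task training described above amounts to adding weighted classification losses alongside the main 3D objective. A minimal sketch, assuming hypothetical loss values and weights (the paper's exact auxiliary heads and weighting scheme may differ):

```python
def language_assisted_loss(main_loss, aux_losses, weights):
    """Total training objective: main 3D task loss plus weighted
    classification-based auxiliary losses (e.g. heads for object
    relationships and object attributes distilled from descriptions).
    """
    return main_loss + sum(w * l for w, l in zip(weights, aux_losses))

total = language_assisted_loss(
    main_loss=1.2,
    aux_losses=[0.4, 0.7, 0.3],  # three illustrative auxiliary heads
    weights=[0.5, 0.5, 0.5],     # hypothetical weighting
)
print(total)
```

Because the auxiliary terms are pure classification losses, they can be dropped at inference time, leaving the backbone unchanged while keeping the language-guided features it learned.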
In conventional visual question generation (VQG), most images have multiple concepts (e.g., objects and categories) for which a question could be generated, but models are trained to mimic an arbitrary choice of concept as given in their training data. This makes training difficult and also poses issues for evaluation: multiple valid questions exist for most images, but the human references capture only one or a few of them. We present Guiding Visual Question Generation, a variant of VQG which conditions the question generator on categorical information based on expectations about the question type and the objects it should explore. We propose two variants: (i) an explicitly guided model that enables an actor (human or automated) to select which objects and categories to generate a question for; and (ii) an implicitly guided model that learns which objects and categories to condition on, based on discrete latent variables. The proposed models are evaluated on an answer-category augmented VQA dataset, and our quantitative results show a substantial improvement over the state of the art (over 9 BLEU-4 increase). Human evaluation validates that guidance helps generate questions that are grammatically coherent and relevant to the given image and objects.