Video question answering requires models to understand and reason over complex video and language data to derive correct answers. Existing efforts focus on designing sophisticated cross-modal interactions to fuse information from the two modalities, while encoding the video and the question holistically as sequences of frames and words. Despite their success, these methods essentially revolve around the sequential nature of video and question content, offer little insight into the question-answering process, and lack interpretability. In this work, we argue that although video is presented as a frame sequence, the visual elements (e.g., objects, actions, activities, and events) are not sequential in semantic space but rather hierarchical. To align with the multi-granular nature of linguistic concepts in language queries, we propose to model video as a conditional graph hierarchy that weaves together visual facts of different granularity in a level-wise manner, guided by the corresponding textual cues. Despite its simplicity, our extensive experiments demonstrate the superiority of this conditional hierarchical graph architecture, with clear performance improvements over existing methods and better generalization across different types of questions. Further analysis also confirms the model's reliability, as it provides meaningful visual-textual evidence for the predicted answers.
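To make the level-wise, text-conditioned aggregation idea concrete, the following is a minimal PyTorch sketch; the module name and design are our own illustration under assumed shapes, not the paper's code. Node features at one granularity are conditioned on the question, exchanged via attention, and pooled into coarser nodes for the next level.

```python
import torch
import torch.nn as nn

class ConditionalGraphLayer(nn.Module):
    """One level of a text-conditioned graph hierarchy (illustrative sketch,
    not the authors' implementation): node features are modulated by the
    question vector, mixed over a fully connected graph via attention, and
    pooled into coarser nodes for the next level."""
    def __init__(self, dim, pool_size):
        super().__init__()
        self.condition = nn.Linear(2 * dim, dim)     # fuse node + question
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.pool = nn.AdaptiveAvgPool1d(pool_size)  # coarsen the node set

    def forward(self, nodes, question):
        # nodes: (B, N, D), question: (B, D)
        q = question.unsqueeze(1).expand(-1, nodes.size(1), -1)
        h = torch.relu(self.condition(torch.cat([nodes, q], dim=-1)))
        h, _ = self.attn(h, h, h)                    # message passing
        h = self.pool(h.transpose(1, 2)).transpose(1, 2)
        return h                                     # (B, pool_size, D)

# toy example: 16 object-level nodes -> 4 event-level nodes
layer = ConditionalGraphLayer(dim=256, pool_size=4)
objects, question = torch.randn(2, 16, 256), torch.randn(2, 256)
print(layer(objects, question).shape)  # torch.Size([2, 4, 256])
```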
This paper proposes a Video Graph Transformer (VGT) model for Video Question Answering (VideoQA). VGT is unique in two respects: 1) it designs a dynamic graph transformer module that encodes video by explicitly capturing visual objects, their relations, and their dynamics for complex spatio-temporal reasoning; and 2) it exploits disentangled video and text transformers to compare the relevance between video and text for QA, instead of an entangled cross-modal transformer for answer classification. Vision-text communication is accomplished through additional cross-modal interaction modules. With this more reasonable video encoding and QA formulation, we show that VGT achieves much better performance than prior arts on VideoQA tasks that challenge dynamic relation reasoning, in the pretraining-free setting. Its performance even surpasses models pretrained on millions of external data. We further show that VGT also benefits greatly from self-supervised cross-modal pretraining, yet with orders of magnitude less data. These results clearly demonstrate the effectiveness and superiority of VGT, and reveal its potential for more data-efficient pretraining. Through comprehensive analyses and some heuristic observations, we hope VGT can push VideoQA research beyond coarse recognition/description toward fine-grained relational reasoning in realistic videos. Our code is available at https://github.com/sail-sg/vgt.
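The following is a minimal sketch, under assumed interfaces, of the "disentangled relevance comparison" style of QA the abstract contrasts with cross-modal answer classification: separate encoders for the video and the candidate answers, scored by cosine similarity. It is illustrative only and not the released VGT code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledQAScorer(nn.Module):
    """Video and text are encoded by separate transformers and an answer is
    chosen by similarity, rather than by a joint cross-modal classifier.
    Module names and sizes are hypothetical."""
    def __init__(self, dim=256, vocab=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.video_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), 2)
        self.text_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), 2)

    def forward(self, video_tokens, answer_ids):
        # video_tokens: (B, T, D); answer_ids: (B, A, L) token ids per candidate
        v = self.video_enc(video_tokens).mean(dim=1)             # (B, D)
        B, A, L = answer_ids.shape
        a = self.embed(answer_ids.view(B * A, L))
        a = self.text_enc(a).mean(dim=1).view(B, A, -1)          # (B, A, D)
        return F.cosine_similarity(v.unsqueeze(1).expand_as(a), a, dim=-1)

scorer = DisentangledQAScorer()
scores = scorer(torch.randn(2, 8, 256), torch.randint(0, 1000, (2, 5, 6)))
print(scores.shape)  # torch.Size([2, 5]) -- one relevance score per candidate
```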
Video Question Answering (VideoQA), which aims to answer a given question correctly based on understanding multi-modal video content, is challenging due to the richness of video content. From the perspective of video understanding, a good VideoQA framework needs to understand video content at different semantic levels and to flexibly integrate diverse video content in order to distill question-relevant content. To this end, we propose a lightweight visual-linguistic reasoning framework named LiVLR. Specifically, LiVLR first utilizes graph-based visual and linguistic encoders to obtain multi-granularity visual and linguistic representations. Subsequently, the obtained representations are integrated with the designed Diversity-aware Visual-Linguistic reasoning module (DAVL). The DAVL considers the differences between different types of representations and can flexibly adjust the importance of each type when generating the question-related joint representation, making it an effective and general approach to representation integration. The proposed LiVLR is lightweight and shows its performance advantage on two VideoQA benchmarks, MSRVTT-QA and KnowIT VQA. Extensive ablation studies demonstrate the effectiveness of LiVLR's key components.
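A minimal sketch of the general idea behind diversity-aware integration; the class name and gating design are assumptions, not LiVLR's implementation. Each representation type receives a question-conditioned gate before being summed into the joint representation.

```python
import torch
import torch.nn as nn

class DiversityAwareFusion(nn.Module):
    """Question-conditioned gating over several representation types, so each
    type's contribution to the joint representation is adjusted per question."""
    def __init__(self, dim, num_types):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 1)
        self.num_types = num_types

    def forward(self, reps, question):
        # reps: (B, K, D), one vector per representation type; question: (B, D)
        q = question.unsqueeze(1).expand(-1, self.num_types, -1)
        g = torch.sigmoid(self.gate(torch.cat([reps, q], dim=-1)))  # (B, K, 1)
        return (g * reps).sum(dim=1)                                # (B, D)

fusion = DiversityAwareFusion(dim=256, num_types=4)
print(fusion(torch.randn(2, 4, 256), torch.randn(2, 256)).shape)  # [2, 256]
```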
Existing visual question answering methods tend to capture spurious correlations within and across the visual and linguistic modalities, and fail to discover the true causal mechanisms that support reasoning grounded in the dominant visual evidence and the correct question intention. Moreover, existing methods usually ignore the complex event-level understanding required in multi-modal settings, which demands a strong cognitive capability of causal inference to jointly model cross-modal event temporality, causality, and dynamics. In this work, we address event-level visual question answering from a new perspective, namely cross-modal causal relational reasoning, by introducing causal intervention methods to mitigate spurious correlations and discover the true causal structures that integrate the visual and linguistic modalities. Specifically, we propose a novel event-level visual question answering framework called Cross-Modal Causal Relational Reasoning (CMCIR), to achieve robust causality-aware visual-linguistic question answering. To uncover the causal structures of the visual and linguistic modalities, a novel Causality-aware Visual-Linguistic Reasoning (CVLR) module is proposed to collaboratively disentangle the spurious correlations of the two modalities through elaborately designed front-door and back-door causal intervention modules. To discover the fine-grained interactions between linguistic semantics and spatio-temporal representations, we build a novel Spatial-Temporal Transformer (STT) that models the multi-modal co-occurrence interactions between visual and linguistic content. Extensive experiments on the large-scale event-level urban dataset SUTD-TrafficQA and three benchmark real-world datasets TGIF-QA, MSVD-QA, and MSRVTT-QA demonstrate the effectiveness of our CMCIR in discovering visual-linguistic causal structures.
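For readers unfamiliar with causal intervention in this setting, the snippet below sketches one common approximation of back-door adjustment used in vision-language models, marginalizing over a learned confounder dictionary. It is a generic illustration, not the CMCIR modules.

```python
import torch
import torch.nn as nn

class BackdoorAdjustment(nn.Module):
    """Rough sketch of back-door adjustment as often approximated in practice:
    P(Y|do(X)) is estimated by marginalizing over a learned dictionary of
    confounder prototypes, weighted by attention, instead of conditioning
    directly on the raw correlated features."""
    def __init__(self, dim, num_confounders=32):
        super().__init__()
        self.confounders = nn.Parameter(torch.randn(num_confounders, dim))
        self.query = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (B, D) fused visual-linguistic feature
        attn = torch.softmax(self.query(x) @ self.confounders.t(), dim=-1)  # (B, K)
        z = attn @ self.confounders        # expectation over confounders, (B, D)
        return x + z                       # deconfounded feature

layer = BackdoorAdjustment(dim=256)
print(layer(torch.randn(4, 256)).shape)   # torch.Size([4, 256])
```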
To build Video Question Answering (VideoQA) systems capable of assisting humans in daily activities, seeking answers from long-form videos with diverse and complex events is a must. Existing multi-modal VQA models achieve promising performance on images or short video clips, especially with the recent success of large-scale multi-modal pre-training. However, when extending these methods to long-form videos, new challenges arise. On the one hand, using a dense video sampling strategy is computationally prohibitive. On the other hand, methods relying on sparse sampling struggle in scenarios where multi-event and multi-granularity visual reasoning are required. In this work, we introduce a new model named Multi-modal Iterative Spatial-temporal Transformer (MIST) to better adapt pre-trained models for long-form VideoQA. Specifically, MIST decomposes traditional dense spatial-temporal self-attention into cascaded segment and region selection modules that adaptively select frames and image regions that are closely relevant to the question itself. Visual concepts at different granularities are then processed efficiently through an attention module. In addition, MIST iteratively conducts selection and attention over multiple layers to support reasoning over multiple events. The experimental results on four VideoQA datasets, including AGQA, NExT-QA, STAR, and Env-QA, show that MIST achieves state-of-the-art performance and is superior at computation efficiency and interpretability.
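A toy illustration of the question-guided selection step described above (assumed interface, not the released MIST code): frames are scored against the question and only the top-k are kept for the subsequent attention module.

```python
import torch

def select_topk_frames(frame_feats, question_feat, k=4):
    """Score each frame against the question and keep only the top-k frames."""
    # frame_feats: (B, T, D), question_feat: (B, D)
    scores = torch.einsum("btd,bd->bt", frame_feats, question_feat)  # (B, T)
    idx = scores.topk(k, dim=1).indices                              # (B, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, frame_feats.size(-1))
    return frame_feats.gather(1, idx)                                # (B, k, D)

frames, question = torch.randn(2, 32, 256), torch.randn(2, 256)
print(select_topk_frames(frames, question).shape)  # torch.Size([2, 4, 256])
```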
The task of cross-modal retrieval between text and video aims to understand the correspondence between vision and language. Existing studies follow the trend of measuring text-video similarity based on text and video embeddings. In common practice, video representations are constructed either by feeding video frames into a network for global visual feature extraction, or by modeling simple semantic relations over local fine-grained frame regions with graph convolutional networks. However, these video representations do not fully exploit the spatio-temporal relations among visual components, and thus fail to distinguish videos that share the same visual components but exhibit different relations. To address this problem, we propose a Visual Spatio-temporal Relation-enhanced Network (VSR-Net), a novel cross-modal retrieval framework that considers the spatio-temporal relations among visual components to enhance the global video representation when bridging the text and video modalities. Specifically, the visual spatio-temporal relations are encoded with a multi-layer spatio-temporal transformer to learn visual relation features. We align the global visual features and the fine-grained relation features with the text features in two embedding spaces for cross-modal text-video retrieval. Extensive experiments are conducted on the MSR-VTT and MSVD datasets. The results demonstrate the effectiveness of our proposed model. We will release the code to facilitate future research.
This paper addresses temporal sentence grounding. Previous works typically solve this task by learning frame-level video features and aligning them with the textual information. A major limitation of these works is that they fail to distinguish ambiguous video frames with subtle appearance differences, due to the frame-level feature extraction. Recently, some methods adopt Faster R-CNN to extract detailed object features in each frame to differentiate fine-grained appearance similarities. However, the object-level features extracted by Faster R-CNN lack motion analysis, since the object detection model has no temporal modeling. To address this issue, we propose a novel Motion-Appearance Reasoning Network (MARN), which incorporates both motion-aware and appearance-aware object features to better reason over object relations and model the activities between successive frames. Specifically, we first introduce two individual video encoders to embed the video into corresponding motion-oriented and appearance-oriented object representations. Then, we develop separate motion and appearance branches to learn motion-guided and appearance-guided object relations, respectively. Finally, the motion and appearance information from both branches is associated to form more representative features for final grounding. Extensive experiments on two challenging datasets (Charades-STA and TACoS) show that our proposed MARN significantly outperforms previous state-of-the-art methods by a large margin.
Video-language pre-training has advanced the performance of various downstream video-language tasks. However, most previous methods directly inherit or adapt typical image-language pre-training paradigms to video-language pre-training, thus not fully exploiting the unique characteristic of video, i.e., temporal. In this paper, we propose a Hierarchical Temporal-Aware video-language pre-training framework, HiTeA, with two novel pre-training tasks for modeling cross-modal alignment between moments and texts as well as the temporal relations of video-text pairs. Specifically, we propose a cross-modal moment exploration task to explore moments in videos, which results in detailed video moment representation. Besides, the inherent temporal relations are captured by aligning video-text pairs as a whole in different time resolutions with multi-modal temporal relation exploration task. Furthermore, we introduce the shuffling test to evaluate the temporal reliance of datasets and video-language pre-training models. We achieve state-of-the-art results on 15 well-established video-language understanding and generation tasks, especially on temporal-oriented datasets (e.g., SSv2-Template and SSv2-Label) with 8.6% and 11.1% improvement respectively. HiTeA also demonstrates strong generalization ability when directly transferred to downstream tasks in a zero-shot manner. Models and demo will be available on ModelScope.
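The shuffling test can be illustrated with a few lines of code (hypothetical model interface, not the HiTeA release): accuracy is measured on temporally ordered versus randomly permuted frames, and the drop indicates how much the model or dataset relies on temporal order.

```python
import torch

def shuffling_test(model, videos, questions, labels):
    """Compare accuracy on ordered frames vs. randomly permuted frames; a large
    drop indicates genuine reliance on temporal order rather than appearance.
    Assumes model(videos, questions) returns answer logits."""
    def accuracy(v):
        with torch.no_grad():
            preds = model(v, questions).argmax(dim=-1)
        return (preds == labels).float().mean().item()

    ordered = accuracy(videos)                    # videos: (B, T, ...) frame axis at dim 1
    perm = torch.randperm(videos.size(1))
    shuffled = accuracy(videos[:, perm])          # shuffle the time axis
    return ordered, ordered - shuffled            # accuracy and its drop under shuffling
```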
Video question answering is a challenging task that requires jointly understanding the language input, the visual information in individual video frames, and the temporal information about the events occurring in the video. In this paper, we propose a novel multi-stream video encoder for video question answering that uses multiple video inputs and a new video-text iterative co-tokenization approach to answer a variety of questions related to videos. We experimentally evaluate the model on several datasets, such as MSRVTT-QA, MSVD-QA, and iVQA, outperforming the previous state of the art by large margins. At the same time, our model reduces the required GFLOPs from 150-360 to only 67, yielding a highly efficient video question answering model.
Deep learning techniques have led to remarkable breakthroughs in the field of generic object detection and have spawned a lot of scene-understanding tasks in recent years. Scene graphs have been a focus of research thanks to their powerful semantic representation and their applications to scene understanding. Scene Graph Generation (SGG) refers to the task of automatically mapping an image into a semantic structural scene graph, which requires correctly labeling the detected objects and their relationships. Although this is a challenging task, the community has proposed many SGG approaches and achieved good results. In this paper, we provide a comprehensive survey of recent achievements in this field brought about by deep learning techniques. We review 138 representative works covering different input modalities, and systematically summarize existing image-based SGG methods from the perspective of feature extraction and fusion. We attempt to connect and systematize existing visual relationship detection methods in a comprehensive way, summarizing and interpreting the mechanisms and strategies of SGG. Finally, we conclude this survey with an in-depth discussion of the currently existing problems and future research directions. This survey will help readers better understand the current state of research and its ideas.
Text-based video segmentation aims to segment an actor in a video sequence by specifying the actor and its performing action with a textual query. Previous methods fail to align the video content with the textual query in a fine-grained manner according to the actor and its action, due to the problem of semantic asymmetry. Semantic asymmetry means that the two modalities contain different amounts of semantic information during multi-modal fusion. To alleviate this problem, we propose a novel actor and action modular network that individually localizes the actor and its action in two separate modules. Specifically, we first learn the actor- and action-related content from the video and the textual query, and then match them in a symmetrical manner to localize the target tube. The target tube contains the desired actor and action, and is then fed into a fully convolutional network to predict segmentation masks of the actor. Our method also establishes the association of objects across multiple frames with the proposed temporal proposal aggregation mechanism. This enables our method to segment the video effectively and keep the temporal consistency of the predictions. The whole model allows joint learning of the actor-action matching and segmentation, and achieves state-of-the-art performance for both single-frame segmentation and full-video segmentation on the A2D Sentences and J-HMDB Sentences datasets.
Neural module networks (NMN) have achieved success in image-grounded tasks such as Visual Question Answering (VQA) on synthetic images. However, very limited work on NMN has been studied in video-grounded dialogue tasks. These tasks extend the complexity of traditional visual tasks with the additional visual temporal variance and language cross-turn dependencies. Building on recent NMN approaches, we introduce Video-grounded Neural Module Networks (VGNMN) to model the information retrieval process in video-grounded language tasks as a pipeline of neural modules. VGNMN first decomposes all the language components in dialogues to explicitly resolve any entity references and detect the corresponding action-based inputs from the question. The detected entities and actions are used as parameters to instantiate neural module networks and extract visual cues from the video. Our experiments show that VGNMN can achieve promising performance on a challenging video-grounded dialogue benchmark as well as a video QA benchmark.
Artificial Intelligence (AI) and its applications have sparked extraordinary interest in recent years. This achievement can be ascribed in part to advances in AI subfields including Machine Learning (ML), Computer Vision (CV), and Natural Language Processing (NLP). Deep learning, a sub-field of machine learning that employs artificial neural network concepts, has enabled the most rapid growth in these domains. As a result, the integration of vision and language has attracted a lot of attention. The tasks have been designed in such a way that they properly exemplify the concepts of deep learning. In this review paper, we provide a thorough and extensive review of state-of-the-art approaches and key model design principles, and we discuss existing datasets, methods, their problem formulations, and evaluation measures for VQA and visual reasoning tasks in order to understand vision-and-language representation learning. We also present some potential future paths in this field of research, with the hope that our study may generate new ideas and novel approaches to handle existing difficulties and develop new applications.
The key to Human-Object Interaction (HOI) recognition is to infer the relationships between humans and objects. Recently, image-based HOI detection has made significant progress. However, there is still room to improve the performance of video HOI detection. Existing one-stage methods use well-designed end-to-end networks to detect a video segment and directly predict the interactions, which makes the model learning and further optimization of the network more complex. This paper introduces the Spatial Parsing and Dynamic Temporal Pooling (SPDTP) network, which takes the whole video as a spatio-temporal graph with human and object nodes as input. Unlike existing methods, our proposed network predicts the difference between interactive and non-interactive pairs through explicit spatial parsing, and then performs interaction recognition. Moreover, we propose a learnable and differentiable Dynamic Temporal Module (DTM) to emphasize the keyframes of the video and suppress redundant frames. Furthermore, the experimental results show that SPDTP pays more attention to active human-object pairs and valid keyframes. Overall, we achieve state-of-the-art performance on the CAD-120 dataset and the Something-Else dataset.
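A guess at what a learnable, differentiable temporal pooling like the described DTM could look like (not the authors' code): per-frame importance weights emphasize keyframes and down-weight redundant ones before pooling over time.

```python
import torch
import torch.nn as nn

class DynamicTemporalModule(nn.Module):
    """Illustrative learnable temporal pooling: a per-frame score produces
    softmax weights over time, emphasizing informative frames."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, frames):
        # frames: (B, T, D)
        w = torch.softmax(self.score(frames), dim=1)   # (B, T, 1) frame weights
        return (w * frames).sum(dim=1)                 # (B, D) weighted pooling

dtm = DynamicTemporalModule(256)
print(dtm(torch.randn(2, 16, 256)).shape)  # torch.Size([2, 256])
```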
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image.
Image-text retrieval (ITR) is a challenging task in the field of multimodal information processing due to the semantic gap between different modalities. In recent years, researchers have made great progress in exploring the accurate alignment between image and text. However, existing works mainly focus on the fine-grained alignment between image regions and sentence fragments, which ignores the guiding significance of context background information. Actually, integrating the local fine-grained information and global context background information can provide more semantic clues for retrieval. In this paper, we propose a novel Hierarchical Graph Alignment Network (HGAN) for image-text retrieval. First, to capture the comprehensive multimodal features, we construct the feature graphs for the image and text modality respectively. Then, a multi-granularity shared space is established with a designed Multi-granularity Feature Aggregation and Rearrangement (MFAR) module, which enhances the semantic corresponding relations between the local and global information, and obtains more accurate feature representations for the image and text modalities. Finally, the ultimate image and text features are further refined through three-level similarity functions to achieve the hierarchical alignment. To justify the proposed model, we perform extensive experiments on MS-COCO and Flickr30K datasets. Experimental results show that the proposed HGAN outperforms the state-of-the-art methods on both datasets, which demonstrates the effectiveness and superiority of our model.
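As an illustration of hierarchical alignment, the sketch below combines similarity scores at several granularities with fixed weights; the exact three-level similarity functions in HGAN may differ, and all names here are assumptions.

```python
import torch
import torch.nn.functional as F

def hierarchical_similarity(img_global, img_regions, txt_global, txt_words,
                            weights=(0.4, 0.3, 0.3)):
    """Mix a global-global score, a region-word score, and a local-to-global
    score into one image-text similarity."""
    g = F.cosine_similarity(img_global, txt_global, dim=-1)            # (B,)
    # region-word: best matching word for each region, averaged over regions
    rw = torch.einsum("brd,bwd->brw",
                      F.normalize(img_regions, dim=-1),
                      F.normalize(txt_words, dim=-1))
    local = rw.max(dim=-1).values.mean(dim=-1)                         # (B,)
    # pooled regions against the global sentence vector
    lg = F.cosine_similarity(img_regions.mean(dim=1), txt_global, dim=-1)
    return weights[0] * g + weights[1] * local + weights[2] * lg

B, R, W, D = 2, 36, 12, 256
score = hierarchical_similarity(torch.randn(B, D), torch.randn(B, R, D),
                                torch.randn(B, D), torch.randn(B, W, D))
print(score.shape)  # torch.Size([2])
```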
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
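Since the survey starts from self-attention as the fundamental concept behind these models, a bare-bones single-head version is sketched below for reference.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Single-head, unmasked self-attention: every token attends to every other
    token, so long-range dependencies are modeled within one layer."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v            # project tokens
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
    return F.softmax(scores, dim=-1) @ v           # weighted sum of values

x = torch.randn(2, 10, 64)                         # batch of 10-token sequences
w = [torch.randn(64, 64) for _ in range(3)]
print(self_attention(x, *w).shape)                 # torch.Size([2, 10, 64])
```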
Video captioning aims to interpret complex visual content into text descriptions, which requires the model to fully understand video scenes including the objects and their interactions. Prevailing methods adopt off-the-shelf object detection networks to give object proposals and use attention mechanisms to model the relations between objects. They often miss some semantic concepts that the pretrained detection models do not cover, and fail to identify the exact predicate relationships between objects. In this paper, we investigate the open research task of generating text descriptions for given videos, and propose Cross-Modal Graph (CMG) with meta concepts for video captioning. Specifically, to cover the useful semantic concepts in video captions, we weakly learn the corresponding visual regions for text descriptions, where the associated visual regions and textual words are named cross-modal meta concepts. We dynamically construct meta concept graphs with the learned cross-modal meta concepts. We also build holistic video-level and local frame-level video graphs with the predicted predicates to model video sequence structures. We validate the efficacy of our proposed techniques with extensive experiments and achieve state-of-the-art results on two public datasets.
In this paper, we introduce ActBERT for self-supervised learning of joint video-text representations from unlabeled data. First, we leverage global action information to catalyze mutual interactions between linguistic texts and local regional objects. It uncovers global and local visual clues from paired video sequences and text descriptions for detailed visual and text relation modeling. Second, we introduce a TaNgled Transformer block (TNT) to encode three sources of information, i.e., global actions, local regional objects, and linguistic descriptions. Global-local correspondences are discovered via judicious clues extraction from contextual information. It enforces the joint video-text representation to be aware of fine-grained objects as well as global human intention. We validate the generalization capability of ActBERT on downstream video-and-language tasks, i.e., text-video clip retrieval, video captioning, video question answering, action segmentation, and action step localization. ActBERT significantly outperforms the state-of-the-art, demonstrating its superiority in video-text representation learning.