We introduce an audiovisual method for long-range text-to-video retrieval. Unlike previous approaches designed for short video retrieval (e.g., 5-15 seconds in duration), our approach aims to retrieve long videos that capture complex human actions. One challenge of standard video-only methods is the large computational cost associated with processing hundreds of densely extracted frames from such long videos. To address this issue, we propose to replace parts of the video with compact audio cues that succinctly summarize dynamic audio events and are cheap to process. Our method, named ECLIPSE (Efficient CLIP with Sound Encoding), adapts the popular CLIP model to an audiovisual video setting by adding a unified audiovisual transformer block that captures complementary cues from the video and audio streams. Besides being 2.92x faster and 2.34x more memory-efficient than long-range video-only approaches, our method also achieves better text-to-video retrieval accuracy on several diverse long-range video datasets such as ActivityNet, QVHighlights, YouCook2, DiDeMo, and Charades.
Vision transformers (ViTs) have achieved impressive results on various computer vision tasks in the last several years. In this work, we study the capability of frozen ViTs, pretrained only on visual data, to generalize to audio-visual data without finetuning any of its original parameters. To do so, we propose a latent audio-visual hybrid (LAVISH) adapter that adapts pretrained ViTs to audio-visual tasks by injecting a small number of trainable parameters into every layer of a frozen ViT. To efficiently fuse visual and audio cues, our LAVISH adapter uses a small set of latent tokens, which form an attention bottleneck, thus, eliminating the quadratic cost of standard cross-attention. Compared to the existing modality-specific audio-visual methods, our approach achieves competitive or even better performance on various audio-visual tasks while using fewer tunable parameters and without relying on costly audio pretraining or external audio encoders. Our code is available at https://genjib.github.io/project_page/LAVISH/
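To make the latent-token bottleneck idea concrete, here is a minimal PyTorch sketch (not the authors' LAVISH implementation): a small set of learnable latent tokens first attends to the concatenated audio and visual tokens, and each modality then reads the fused summary back from those latents, so no full audio-visual cross-attention is ever computed. All shapes, sizes, and names are illustrative assumptions.

```python
# Minimal sketch of a latent-token attention bottleneck (LAVISH-style fusion);
# hypothetical shapes and module names, not the authors' implementation.
import torch
import torch.nn as nn

class LatentBottleneckFusion(nn.Module):
    def __init__(self, dim=768, num_latents=8, num_heads=8):
        super().__init__()
        # Small set of learnable latent tokens that form the attention bottleneck.
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.compress = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.readout = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, visual_tokens, audio_tokens):
        # visual_tokens: (B, Nv, D), audio_tokens: (B, Na, D)
        B = visual_tokens.size(0)
        latents = self.latents.unsqueeze(0).expand(B, -1, -1)        # (B, L, D)
        both = torch.cat([visual_tokens, audio_tokens], dim=1)       # (B, Nv+Na, D)
        # Latents summarize both modalities: cost O(L * (Nv + Na)), not O(Nv * Na).
        latents, _ = self.compress(latents, both, both)
        # Each modality reads the fused summary back from the latents.
        v_out, _ = self.readout(visual_tokens, latents, latents)
        a_out, _ = self.readout(audio_tokens, latents, latents)
        return visual_tokens + v_out, audio_tokens + a_out           # residual updates

fusion = LatentBottleneckFusion()
v, a = fusion(torch.randn(2, 196, 768), torch.randn(2, 64, 768))
print(v.shape, a.shape)  # torch.Size([2, 196, 768]) torch.Size([2, 64, 768])
```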
The last several years have witnessed remarkable progress in video-and-language (VidL) understanding. However, most modern VidL approaches use complex and specialized model architectures and sophisticated pretraining protocols, making the reproducibility, analysis and comparisons of these frameworks difficult. Hence, instead of proposing yet another new VidL model, this paper conducts a thorough empirical study demystifying the most important factors in the VidL model design. Among the factors that we investigate are (i) the spatiotemporal architecture design, (ii) the multimodal fusion schemes, (iii) the pretraining objectives, (iv) the choice of pretraining data, (v) pretraining and finetuning protocols, and (vi) dataset and model scaling. Our empirical study reveals that the most important design factors include: temporal modeling, video-to-text multimodal fusion, masked modeling objectives, and joint training on images and videos. Using these empirical insights, we then develop a step-by-step recipe, dubbed VindLU, for effective VidL pretraining. Our final model trained using our recipe achieves comparable or better than state-of-the-art results on several VidL tasks without relying on external CLIP pretraining. In particular, on the text-to-video retrieval task, our approach obtains 61.2% on DiDeMo, and 55.0% on ActivityNet, outperforming current SOTA by 7.8% and 6.1% respectively. Furthermore, our model also obtains state-of-the-art video question-answering results on ActivityNet-QA, MSRVTT-QA, MSRVTT-MC and TVQA. Our code and pretrained models are publicly available at: https://github.com/klauscc/VindLU.
The canonical approach to video captioning dictates a caption generation model to learn from offline-extracted dense video features. These feature extractors usually operate on video frames sampled at a fixed frame rate and are often trained on image/video understanding tasks, without adaptation to video captioning data. In this work, we present SwinBERT, an end-to-end transformer-based model for video captioning, which takes video frame patches directly as inputs and outputs a natural language description. Instead of leveraging multiple 2D/3D feature extractors, our method adopts a video transformer to encode spatio-temporal representations that can adapt to variable lengths of video input without dedicated design for different frame rates. Based on this model architecture, we show that video captioning can benefit significantly from more densely sampled video frames, as opposed to previous successes with sparsely sampled video frames for video-and-language understanding tasks (e.g., video question answering). Moreover, to avoid the inherent redundancy in consecutive video frames, we propose to adaptively learn a sparse attention mask and optimize it for task-specific performance improvement through better long-range video sequence modeling. Through extensive experiments on five video captioning datasets, we show that SwinBERT achieves across-the-board performance improvements over previous methods, often by a large margin. Furthermore, the learned sparse attention masks push the limit to new state of the art, and can be transferred between different video lengths and different datasets.
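As a rough illustration of what "adaptively learning a sparse attention mask" can look like, the sketch below attaches a learnable soft mask to a single-head attention layer and adds an L1-style sparsity penalty. The log-mask bias and the exact regularizer are my assumptions for illustration; SwinBERT integrates its mask into a full video transformer and may differ in detail.

```python
# Sketch of a learnable sparse attention mask over frame tokens, assuming a
# hypothetical single-head attention; the real SwinBERT design may differ.
import torch
import torch.nn as nn

class SparseMaskedAttention(nn.Module):
    def __init__(self, num_tokens, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        # One learnable logit per token pair; sigmoid gives a soft mask in (0, 1).
        self.mask_logits = nn.Parameter(torch.zeros(num_tokens, num_tokens))

    def forward(self, x):
        # x: (B, N, D)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)     # (B, N, N)
        soft_mask = torch.sigmoid(self.mask_logits)                # (N, N)
        # Adding log(mask) down-weights attention through masked-out token pairs.
        attn = torch.softmax(scores + torch.log(soft_mask + 1e-6), dim=-1)
        out = attn @ v
        # L1-style sparsity regularizer, to be added to the task loss.
        sparsity_loss = soft_mask.mean()
        return out, sparsity_loss

layer = SparseMaskedAttention(num_tokens=32, dim=256)
out, reg = layer(torch.randn(4, 32, 256))
print(out.shape, reg.item())
```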
A great challenge in video-language (VidL) modeling lies in the disconnection between fixed video representations extracted from image/video understanding models and downstream VidL data. Recent studies try to mitigate this disconnection via end-to-end training. To make it computationally feasible, prior works tend to "imagify" video inputs, i.e., feed a few sparsely sampled frames into a 2D CNN, followed by simple mean-pooling or concatenation to obtain the overall video representation. Although achieving promising results, such simple approaches may lose temporal information that is essential for performing downstream VidL tasks. In this work, we present VIOLET, a fully end-to-end video-language transformer that adopts a video transformer to explicitly model the temporal dynamics of video inputs. Further, unlike previous studies that found pre-training tasks on video inputs (e.g., masked frame modeling) not very effective, we design a new pre-training task, Masked Visual-token Modeling (MVM), for better video modeling. Specifically, the original video frame patches are "tokenized" into discrete visual tokens, and the goal is to recover the original visual tokens from the masked patches. Comprehensive analysis demonstrates the effectiveness of both explicit temporal modeling via the video transformer and MVM. As a result, VIOLET achieves new state-of-the-art performance on 5 video question answering tasks and 4 text-to-video retrieval tasks.
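The toy sketch below shows the basic shape of masked visual-token modeling: continuous patch features are quantized to the nearest codebook entry to form discrete targets, a fraction of patches is masked, and the model predicts the original token ids with a cross-entropy loss. VIOLET uses a pretrained image tokenizer; the random codebook, masking scheme, and predictor here are only illustrative assumptions.

```python
# Toy sketch of Masked Visual-token Modeling (MVM); the random codebook stands in
# for a pretrained discrete image tokenizer and is for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mvm_loss(patch_feats, predictor, codebook, mask_ratio=0.15):
    # patch_feats: (B, N, D) continuous patch features.
    B, N, D = patch_feats.shape
    # 1) "Tokenize": nearest-neighbour lookup in the codebook -> discrete target ids.
    dists = torch.cdist(patch_feats, codebook.weight.unsqueeze(0).expand(B, -1, -1))
    targets = dists.argmin(dim=-1)                                   # (B, N)
    # 2) Mask a random subset of patches by zeroing them out.
    mask = torch.rand(B, N, device=patch_feats.device) < mask_ratio  # (B, N) bool
    masked = patch_feats.masked_fill(mask.unsqueeze(-1), 0.0)
    # 3) Predict the discrete token id of every masked patch.
    logits = predictor(masked)                                       # (B, N, vocab)
    return F.cross_entropy(logits[mask], targets[mask])

vocab, dim = 1024, 256
codebook = nn.Embedding(vocab, dim)
predictor = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, vocab))
loss = mvm_loss(torch.randn(2, 49, dim), predictor, codebook)
print(loss.item())
```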
The canonical approach to video-and-language learning (e.g., video question answering) dictates a neural model to learn from offline-extracted dense video features from vision models and text features from language models. These feature extractors are trained independently and usually on tasks different from the target domains, rendering these fixed features sub-optimal for downstream tasks. Moreover, due to the high computational overload of dense video features, it is often difficult (or infeasible) to plug feature extractors directly into existing approaches for easy finetuning. To provide a remedy to this dilemma, we propose a generic framework CLIPBERT that enables affordable end-to-end learning for video-and-language tasks, by employing sparse sampling, where only a single or a few sparsely sampled short clips from a video are used at each training step. Experiments on text-to-video retrieval and video question answering on six datasets demonstrate that CLIPBERT outperforms (or is on par with) existing methods that exploit full-length videos, suggesting that end-to-end learning with just a few sparsely sampled clips is often more accurate than using densely extracted offline features from full-length videos, proving the proverbial less-is-more principle. Videos in the datasets are from considerably different domains and lengths, ranging from 3-second generic-domain GIF videos to 180-second YouTube human activity videos, showing the generalization ability of our approach. Comprehensive ablation studies and thorough analyses are provided to dissect what factors lead to this success. Our code is publicly available.
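A minimal sketch of the sparse-sampling idea, assuming a generic clip encoder: at each training step only a few short clips are drawn from the video, each clip is scored independently against the text, and the clip-level predictions are aggregated (here by simple averaging). Function names and the stand-in encoder are hypothetical.

```python
# Sketch of sparse clip sampling with clip-level score aggregation; the clip
# encoder is a stand-in, not the CLIPBERT model itself.
import torch

def sample_sparse_clips(video, num_clips=2, clip_len=4):
    # video: (T, C, H, W); returns (num_clips, clip_len, C, H, W)
    T = video.size(0)
    starts = torch.randint(0, max(T - clip_len, 1), (num_clips,))
    return torch.stack([video[s:s + clip_len] for s in starts.tolist()])

def forward_sparse(video, text_feat, clip_encoder):
    clips = sample_sparse_clips(video)                        # (K, L, C, H, W)
    scores = torch.stack([clip_encoder(c, text_feat) for c in clips])
    return scores.mean(dim=0)                                 # aggregate clip predictions

# Toy usage with a stand-in encoder that scores one clip against a text feature.
clip_encoder = lambda clip, txt: (clip.mean() + txt.mean()).unsqueeze(0)
video = torch.randn(64, 3, 224, 224)
print(forward_sparse(video, torch.randn(512), clip_encoder))
```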
Pre-training on large-scale unlabelled datasets has shown impressive performance improvements in the fields of computer vision and natural language processing. Given the advent of large-scale instructional video datasets, a common strategy for pre-training video encoders is to use the accompanying speech as weak supervision. However, since the speech is used for supervision, it is never seen by the video encoder, which therefore does not learn to process that modality. We address this shortcoming of current pre-training methods, which fail to exploit the rich cues in spoken language. Our proposal is to pre-train a video encoder using all the available video modalities as supervision, namely appearance, sound, and transcribed speech. We mask an entire modality in the input and predict it using the other two modalities. This encourages each modality to collaborate with the others, and our video encoder learns to process appearance and audio as well as speech. We show the superior performance of our "modality masking" pre-training approach for video retrieval on the How2R, YouCook2 and Condensed Movies datasets.
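The rough sketch below captures the whole-modality masking pattern: one modality's tokens are replaced by a learned mask embedding, the three streams are fused by a shared transformer, and the fused output is trained to predict a pooled representation of the masked modality from the remaining two. The MSE objective and mean pooling are simplifying assumptions, not the paper's exact formulation.

```python
# Rough sketch of whole-modality masking pre-training; loss and pooling choices
# are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityMasking(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.predict = nn.Linear(dim, dim)

    def forward(self, appearance, audio, speech):
        mods = [appearance, audio, speech]          # each: (B, N_m, D)
        m = torch.randint(len(mods), (1,)).item()   # pick the modality to mask
        target = mods[m].mean(dim=1).detach()       # pooled target of the masked modality
        masked = list(mods)
        masked[m] = self.mask_token.expand_as(mods[m])
        fused = self.fusion(torch.cat(masked, dim=1))
        pred = self.predict(fused.mean(dim=1))      # predict it from the other two
        return F.mse_loss(pred, target)

model = ModalityMasking()
loss = model(torch.randn(2, 16, 256), torch.randn(2, 32, 256), torch.randn(2, 20, 256))
print(loss.item())
```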
Video-text retrieval has been a crucial and fundamental task in multi-modal research. The development of video-text retrieval has been considerably promoted by large-scale multi-modal contrastive pre-training, which mainly focuses on coarse-grained or fine-grained contrast. However, cross-grained contrast, which is the contrast between coarse-grained representations and fine-grained representations, has rarely been explored in prior research. Compared with fine-grained or coarse-grained contrast, cross-grained contrast calculates the correlation between coarse-grained features and each fine-grained feature, and is able to filter out the unnecessary fine-grained features guided by the coarse-grained feature during similarity calculation, thus improving retrieval accuracy. To this end, this paper presents a novel multi-grained contrastive model, namely X-CLIP, for video-text retrieval. However, another challenge lies in the similarity aggregation problem, which aims to aggregate fine-grained and cross-grained similarity matrices into an instance-level similarity. To address this challenge, we propose the Attention Over Similarity Matrix (AOSM) module to make the model focus on the contrast between essential frames and words, thus lowering the impact of unnecessary frames and words on the retrieval results. With multi-grained contrast and the proposed AOSM module, X-CLIP achieves outstanding performance on five widely-used video-text retrieval datasets, including MSR-VTT (49.3 R@1), MSVD (50.4 R@1), LSMDC (26.1 R@1), DiDeMo (47.8 R@1) and ActivityNet (46.2 R@1). It outperforms the previous state-of-the-art by +6.3%, +6.6%, +11.1%, +6.7% and +3.8% relative improvements on these benchmarks, demonstrating the superiority of multi-grained contrast and AOSM.
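To ground the cross-grained idea, here is a simplified sketch: a coarse video embedding is compared against fine-grained word embeddings to form one row of a similarity matrix, and the row is aggregated with a softmax weighting (attention over the similarity values) rather than plain mean pooling. The temperature and the exact matrix combinations in X-CLIP differ from this toy version.

```python
# Sketch of a cross-grained similarity with softmax-weighted aggregation, in the
# spirit of attention over a similarity matrix; simplified relative to X-CLIP.
import torch
import torch.nn.functional as F

def cross_grained_score(video_feat, word_feats, tau=0.01):
    # video_feat: (B, D) coarse video embedding; word_feats: (B, Nw, D) word embeddings.
    v = F.normalize(video_feat, dim=-1)
    w = F.normalize(word_feats, dim=-1)
    sim = torch.einsum('bd,bnd->bn', v, w)           # video-to-word similarity row
    attn = torch.softmax(sim / tau, dim=-1)          # focus on the essential words
    return (attn * sim).sum(dim=-1)                  # (B,) aggregated instance-level score

score = cross_grained_score(torch.randn(4, 512), torch.randn(4, 12, 512))
print(score.shape)  # torch.Size([4])
```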
Most modern video recognition models are designed to operate on short video clips (e.g., 5-10 seconds in length). Thus, it is challenging to apply such models to long movie understanding tasks, which typically require sophisticated long-range temporal reasoning. The recently introduced video transformers partially address this issue by using long-range temporal self-attention. However, due to the quadratic cost of self-attention, such models are often costly and impractical to use. Instead, we propose ViS4mer, an efficient long-range video model that combines the strengths of self-attention and the recently introduced structured state-space sequence (S4) layer. Our model uses a standard Transformer encoder for short-range spatiotemporal feature extraction, and a multi-scale temporal S4 decoder for subsequent long-range temporal reasoning. By progressively reducing the spatiotemporal feature resolution and channel dimension at each decoder layer, ViS4mer learns complex long-range spatiotemporal dependencies in a video. Furthermore, ViS4mer is 2.63x faster and requires 8x less GPU memory than the corresponding pure self-attention-based model. Additionally, ViS4mer achieves state-of-the-art results in 6 out of 9 long-form movie video classification tasks on the Long Video Understanding (LVU) benchmark. Furthermore, we show that our approach successfully generalizes to other domains, achieving competitive results on the Breakfast and COIN procedural activity datasets. The code is publicly available at: https://github.com/md-mohaiminul/vis4mer.
Multi-modal learning from video data has recently seen increased attention, as it allows training semantically meaningful embeddings without human annotation, enabling tasks such as zero-shot retrieval and classification. In this work, we present a multi-modal, modality-agnostic fusion transformer approach that learns to exchange information between multiple modalities, such as video, audio, and text, and to integrate them into a joined multi-modal representation, obtaining an embedding that aggregates multi-modal temporal information. We propose to train the system with a combinatorial loss on single modalities as well as pairs of modalities, explicitly leaving out any add-ons such as position or modality encodings. At test time, the resulting model can process and fuse any number of input modalities. Moreover, the implicit properties of the transformer allow handling inputs of different lengths. To evaluate the proposed approach, we train the model on the large-scale HowTo100M dataset and evaluate the resulting embedding space on four challenging benchmark datasets, obtaining state-of-the-art results in video retrieval and zero-shot video action localization.
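A toy sketch of what a combinatorial contrastive objective over single modalities and fused modality pairs can look like; the shared fusion function, the choice of which combinations to contrast, and the loss weighting are simplifying assumptions rather than the paper's exact recipe.

```python
# Sketch of a combinatorial contrastive objective over single modalities and
# fused modality pairs; fusion and pairing choices are illustrative assumptions.
import itertools
import torch
import torch.nn.functional as F

def info_nce(a, b, tau=0.05):
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / tau
    labels = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

def combinatorial_loss(embeds, fuse):
    # embeds: dict of modality name -> (B, D) embedding; fuse: fuses two embeddings.
    spaces = dict(embeds)
    for m1, m2 in itertools.combinations(list(embeds), 2):
        spaces[m1 + '+' + m2] = fuse(embeds[m1], embeds[m2])   # fused pair embeddings
    # For simplicity, contrast every distinct pair of (single or fused) spaces.
    losses = [info_nce(spaces[x], spaces[y])
              for x, y in itertools.combinations(spaces, 2)]
    return torch.stack(losses).mean()

embeds = {m: torch.randn(8, 256) for m in ('video', 'audio', 'text')}
fuse = lambda a, b: 0.5 * (a + b)    # stand-in for the shared fusion transformer
print(combinatorial_loss(embeds, fuse).item())
```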
In this work, we present the Textless Vision-Language Transformer (TVLT), where homogeneous transformer blocks take raw visual and audio inputs for vision-and-language representation learning with minimal modality-specific design, and without using text-specific modules such as tokenization or automatic speech recognition (ASR). TVLT is trained by reconstructing masked patches of continuous video frames and audio spectrograms (masked autoencoding) and by contrastive modeling to align video and audio. TVLT attains performance comparable to its text-based counterpart on various multimodal tasks, such as visual question answering, image retrieval, video retrieval, and multimodal sentiment analysis, with 28x faster inference speed and only 1/3 of the parameters. Our findings suggest the possibility of learning compact and efficient visual-linguistic representations from low-level visual and audio signals without assuming the prior existence of text. Our code and checkpoints are available at: https://github.com/zinengtang/tvlt
We introduce LaViLa, a new approach to learning video-language representations by leveraging Large Language Models (LLMs). We repurpose pre-trained LLMs to be conditioned on visual input, and finetune them to create automatic video narrators. Our auto-generated narrations offer a number of advantages, including dense coverage of long videos, better temporal synchronization of the visual information and text, and much higher diversity of text. The video-text embedding learned contrastively with these additional auto-generated narrations outperforms the previous state-of-the-art on multiple first-person and third-person video tasks, both in zero-shot and finetuned setups. Most notably, LaViLa obtains an absolute gain of 10.1% on EGTEA classification and 5.9% Epic-Kitchens-100 multi-instance retrieval benchmarks. Furthermore, LaViLa trained with only half the narrations from the Ego4D dataset outperforms baseline models trained on the full set, and shows positive scaling behavior on increasing pre-training data and model size.
Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks, and hence late-stage fusion of final representations or predictions from each modality (`late-fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer based architecture that uses `fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and only share what is necessary. We find that such a strategy improves fusion performance, at the same time reducing computational cost. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple audio-visual classification benchmarks including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.
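The sketch below shows one simplified fusion-bottleneck layer: a handful of shared bottleneck tokens is appended to each modality's token sequence, each modality runs self-attention only over its own tokens plus the bottlenecks, and the per-modality bottleneck updates are averaged before the next layer. Averaging is one of several possible design choices, and the full model details are omitted.

```python
# Simplified single fusion-bottleneck layer: cross-modal information can only
# flow through a small set of shared bottleneck tokens.
import torch
import torch.nn as nn

class BottleneckFusionLayer(nn.Module):
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.video_block = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.audio_block = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, video_tokens, audio_tokens, bottlenecks):
        # bottlenecks: (B, K, D) shared fusion tokens carried across layers.
        K = bottlenecks.size(1)
        # Each modality attends over its own tokens plus the shared bottlenecks.
        v = self.video_block(torch.cat([video_tokens, bottlenecks], dim=1))
        a = self.audio_block(torch.cat([audio_tokens, bottlenecks], dim=1))
        # Fuse by averaging the two bottleneck updates.
        fused = 0.5 * (v[:, -K:] + a[:, -K:])
        return v[:, :-K], a[:, :-K], fused

layer = BottleneckFusionLayer()
btk = torch.randn(2, 4, 768)
v, a, btk = layer(torch.randn(2, 196, 768), torch.randn(2, 128, 768), btk)
print(v.shape, a.shape, btk.shape)
```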
The task of cross-modal retrieval between text and video aims to understand the correspondence between vision and language. Existing studies follow the trend of measuring text-video similarity based on text and video embeddings. In common practice, video representations are constructed either by feeding video frames into networks for global visual feature extraction, or by modeling simple semantic relations over local fine-grained frame regions with graph convolutional networks. However, these video representations do not fully exploit the spatio-temporal relations among visual components in learning video representations, and thus fail to distinguish videos with the same visual components but different relations. To address this issue, we propose a Visual Spatio-temporal Relation-enhanced Network (VSR-Net), a novel cross-modal retrieval framework that considers the spatial-visual relations among components to enhance the global video representation for bridging the text-video modalities. Specifically, a multi-layer spatio-temporal transformer is used to encode visual spatio-temporal relations and learn visual relation features. We align the global visual features and the fine-grained relation features with text features in two embedding spaces for cross-modal text-video retrieval. Extensive experiments are conducted on the MSR-VTT and MSVD datasets. The results demonstrate the effectiveness of our proposed model. We will release the code to facilitate future research.
Modern video-text retrieval frameworks basically consist of three parts: a video encoder, a text encoder, and a similarity module. With the success of visual and textual representation learning, transformer-based encoders and fusion methods have also been adopted in the field of video-text retrieval. In this report, we present CLIP2TV, aiming to explore where the critical elements lie in transformer-based methods. To achieve this, we first revisit some works on multi-modal learning, then introduce some techniques into video-text retrieval, and finally evaluate them through extensive experiments in different configurations. Notably, CLIP2TV achieves 52.9@R1 on the MSR-VTT dataset, outperforming the previous SOTA result by 4.1%.
We introduce the task of spatially localizing narrated interactions in videos. Key to our approach is the ability to learn to spatially localize interactions with self-supervision on a large corpus of videos with accompanying transcribed narrations. To achieve this goal, we propose a multilayer cross-modal attention network that enables effective optimization of a contrastive loss during training. We introduce a divided strategy that alternates between computing inter- and intra-modal attention across the visual and natural language modalities, which allows effective training via directly contrasting the two modalities' representations. We demonstrate the effectiveness of our approach by self-training on the HowTo100M instructional video dataset and evaluating on a newly collected dataset of localized described interactions in the YouCook2 dataset. We show that our approach outperforms alternative baselines, including shallow co-attention and full cross-modal attention. We also apply our approach to grounding phrases in images with weak supervision on Flickr30K, and show that stacking multiple attention layers is effective and, when combined with a word-to-region loss, achieves state of the art on recall-at-one and pointing-hand accuracy.
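As a toy illustration of contrastive training for spatial grounding: attention from a narration embedding over spatial region features produces a narration-conditioned visual summary, an InfoNCE loss over the batch contrasts matching and non-matching video-narration pairs, and the attention weights double as the localization signal. The real model stacks several attention layers and alternates intra- and inter-modal attention; everything below is a simplified assumption.

```python
# Toy sketch: cross-modal attention over spatial regions pooled into a
# narration-conditioned feature, trained with a batch-wise contrastive loss.
import torch
import torch.nn.functional as F

def grounding_contrastive_loss(region_feats, text_feats, tau=0.07):
    # region_feats: (B, R, D) spatial region features; text_feats: (B, D) narrations.
    r = F.normalize(region_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    attn = torch.softmax(torch.einsum('bd,brd->br', t, r) / tau, dim=-1)  # (B, R)
    pooled = torch.einsum('br,brd->bd', attn, region_feats)               # attended visual feature
    pooled = F.normalize(pooled, dim=-1)
    logits = pooled @ t.t() / tau                                         # (B, B) video-narration scores
    labels = torch.arange(logits.size(0), device=logits.device)
    loss = 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
    return loss, attn                                                     # attn spatially localizes the narration

loss, attn = grounding_contrastive_loss(torch.randn(4, 49, 512), torch.randn(4, 512))
print(loss.item(), attn.shape)
```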
Video-language pre-training is crucial for learning powerful multi-modal representation. However, it typically requires a massive amount of computation. In this paper, we develop SMAUG, an efficient pre-training framework for video-language models. The foundation component in SMAUG is masked autoencoders. Different from prior works which only mask textual inputs, our masking strategy considers both visual and textual modalities, providing a better cross-modal alignment and saving more pre-training costs. On top of that, we introduce a space-time token sparsification module, which leverages context information to further select only "important" spatial regions and temporal frames for pre-training. Coupling all these designs allows our method to enjoy both competitive performances on text-to-video retrieval and video question answering tasks, and much less pre-training costs by 1.9X or more. For example, our SMAUG only needs about 50 NVIDIA A6000 GPU hours for pre-training to attain competitive performances on these two video-language tasks across six popular benchmarks.
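The sketch below shows the basic pattern of space-time token sparsification: a lightweight head scores each token and only the top-k "important" tokens are kept before the expensive cross-modal encoder. SMAUG's actual selection uses richer context information over spatial regions and temporal frames; the learned linear scorer here is only an illustrative assumption.

```python
# Sketch of top-k space-time token sparsification with a learned importance
# score; a simplified stand-in for SMAUG's context-based selection module.
import torch
import torch.nn as nn

class TokenSparsifier(nn.Module):
    def __init__(self, dim=768, keep_ratio=0.3):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)
        self.keep_ratio = keep_ratio

    def forward(self, tokens):
        # tokens: (B, N, D) flattened space-time tokens.
        B, N, D = tokens.shape
        k = max(1, int(N * self.keep_ratio))
        scores = self.scorer(tokens).squeeze(-1)                      # (B, N) importance scores
        topk = scores.topk(k, dim=1).indices                          # indices of "important" tokens
        kept = tokens.gather(1, topk.unsqueeze(-1).expand(-1, -1, D))
        return kept                                                   # (B, k, D) passed on to pre-training

sparsify = TokenSparsifier()
print(sparsify(torch.randn(2, 1568, 768)).shape)  # torch.Size([2, 470, 768])
```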
We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named "TimeSformer," adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that "divided attention," where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically new design, TimeSformer achieves state-of-the-art results on several action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks, our model is faster to train, it can achieve dramatically higher test efficiency (at a small drop in accuracy), and it can also be applied to much longer video clips (over one minute long). Code and models are available at: https://github.com/facebookresearch/TimeSformer.
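A minimal sketch of divided space-time attention: the patch tokens are reshaped so that temporal attention runs across frames at each spatial location, followed by spatial attention within each frame. The class token, layer norms, and MLP of the full TimeSformer block are omitted for brevity, so this is an illustration of the attention factorization rather than the complete block.

```python
# Minimal divided space-time attention: temporal attention per spatial location,
# then spatial attention per frame; simplified relative to the full TimeSformer block.
import torch
import torch.nn as nn

class DividedSpaceTimeAttention(nn.Module):
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x, T, S):
        # x: (B, T*S, D) patch tokens for T frames with S patches each.
        B, _, D = x.shape
        # Temporal attention: sequences of length T at every spatial location.
        xt = x.reshape(B, T, S, D).permute(0, 2, 1, 3).reshape(B * S, T, D)
        xt = xt + self.temporal(xt, xt, xt)[0]
        # Spatial attention: sequences of length S inside every frame.
        xs = xt.reshape(B, S, T, D).permute(0, 2, 1, 3).reshape(B * T, S, D)
        xs = xs + self.spatial(xs, xs, xs)[0]
        return xs.reshape(B, T, S, D).reshape(B, T * S, D)

block = DividedSpaceTimeAttention()
out = block(torch.randn(2, 8 * 196, 768), T=8, S=196)
print(out.shape)  # torch.Size([2, 1568, 768])
```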
Recently, by introducing large-scale datasets and strong transformer networks, video-language pre-training has shown great success, especially for retrieval. Yet, existing video-language transformer models do not explicitly perform fine-grained semantic alignment. In this work, we present Object-aware Transformers, an object-centric approach that extends video-language transformers to incorporate object representations. The key idea is to leverage bounding boxes and object tags to guide the training process. We evaluate our model on three standard sub-tasks across four widely used benchmarks. We also provide deep analysis and detailed ablations on the proposed method. We show clear improvements in performance across all tasks and datasets considered, demonstrating the value of incorporating object representations into a video-language architecture. The code will be released at https://github.com/fingerrec/oa-transformer.
We study joint video and language (VL) pre-training to enable cross-modal learning and benefit plentiful downstream VL tasks. Existing works either extract low-quality video features or learn limited text embeddings, while neglecting that high-resolution videos and diversified semantics can significantly improve cross-modal learning. In this paper, we propose a novel High-resolution and Diversified VIdeo-LAnguage pre-training model (HD-VILA) for many visual tasks. In particular, we collect a large-scale dataset with two distinct properties: 1) the first high-resolution dataset, including 371.5K hours of 720p videos, and 2) the most diversified dataset, covering 15 popular YouTube categories. To enable VL pre-training, we jointly optimize the HD-VILA model with a hybrid transformer that learns rich spatiotemporal features and a multimodal transformer that enforces interactions between the learned video features and diversified texts. Our pre-training model achieves new state-of-the-art results on 10 VL understanding tasks and 2 novel text-to-visual generation tasks. For example, we outperform SOTA models with relative increases of 38.5% R@1 on the zero-shot MSR-VTT text-to-video retrieval task and 53.6% on the high-resolution dataset LSMDC. The learned VL embedding is also effective in generating visually pleasing and semantically relevant results in text-to-visual manipulation and super-resolution tasks.