This technical report presents the third-place winning solution to MTVG, a new task introduced in the 4th Person in Context (PIC) Challenge at ACM MM 2022. MTVG aims to localize the temporal boundary of a makeup step in an untrimmed video according to a textual description. The greatest challenge of this task is the fine-grained video-text semantics of makeup steps. However, current methods mainly extract video features with action-based pre-trained models. Since actions are coarser-grained than makeup steps, action-based features are insufficient to provide useful cues. To address this problem, we propose to achieve fine-grained representations by exploiting feature diversity. Specifically, we present a series of techniques spanning feature extraction, network optimization, and model ensembling. As a result, we achieved third place in the MTVG competition.
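The report emphasizes feature diversity, i.e., combining video features from several pre-trained extractors. Below is a minimal PyTorch sketch of one way such a fusion could look, assuming per-clip features have already been extracted offline; the class name, dimensions, and concatenation scheme are illustrative assumptions, not the report's actual design.

```python
import torch
import torch.nn as nn

class DiverseFeatureFusion(nn.Module):
    """Hypothetical fusion of per-clip features from several pre-trained
    backbones (e.g., an action model and a vision-language model): project
    each feature to a shared width and concatenate."""
    def __init__(self, in_dims, out_dim=512):
        super().__init__()
        self.projs = nn.ModuleList([nn.Linear(d, out_dim) for d in in_dims])

    def forward(self, feats):
        # feats: list of tensors, each of shape (num_clips, in_dims[i])
        projected = [proj(f) for proj, f in zip(self.projs, feats)]
        return torch.cat(projected, dim=-1)   # (num_clips, out_dim * len(feats))

# Random stand-ins for two offline feature sources of different widths.
action_feats = torch.randn(128, 2048)   # e.g., from an action-recognition backbone
vl_feats = torch.randn(128, 512)        # e.g., from a vision-language backbone
fusion = DiverseFeatureFusion(in_dims=[2048, 512])
video_repr = fusion([action_feats, vl_feats])
print(video_repr.shape)                 # torch.Size([128, 1024])
```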
The canonical approach to video-and-language learning (e.g., video question answering) dictates a neural model to learn from offline-extracted dense video features from vision models and text features from language models. These feature extractors are trained independently and usually on tasks different from the target domains, rendering these fixed features sub-optimal for downstream tasks. Moreover, due to the high computational overload of dense video features, it is often difficult (or infeasible) to plug feature extractors directly into existing approaches for easy finetuning. To provide a remedy to this dilemma, we propose a generic framework CLIPBERT that enables affordable end-to-end learning for video-and-language tasks, by employing sparse sampling, where only a single or a few sparsely sampled short clips from a video are used at each training step. Experiments on text-to-video retrieval and video question answering on six datasets demonstrate that CLIPBERT outperforms (or is on par with) existing methods that exploit full-length videos, suggesting that end-to-end learning with just a few sparsely sampled clips is often more accurate than using densely extracted offline features from full-length videos, proving the proverbial less-is-more principle. Videos in the datasets are from considerably different domains and lengths, ranging from 3-second generic-domain GIF videos to 180-second YouTube human activity videos, showing the generalization ability of our approach. Comprehensive ablation studies and thorough analyses are provided to dissect what factors lead to this success. Our code is publicly available.
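The central idea here is sparse sampling: train on only one or a few short clips per step instead of densely extracted features. The sketch below illustrates the sampling side, assuming the video is already decoded into a frame tensor; `sample_sparse_clips` and the late-fusion average are illustrative stand-ins, not the ClipBERT codebase.

```python
import torch

def sample_sparse_clips(video, num_clips=2, clip_len=4):
    """Randomly pick `num_clips` short clips of `clip_len` consecutive frames
    from a (T, C, H, W) video tensor. Illustrative ClipBERT-style sparse
    sampling; the official implementation differs in details."""
    T = video.shape[0]
    clips = []
    for _ in range(num_clips):
        start = torch.randint(0, max(T - clip_len, 1), (1,)).item()
        clips.append(video[start:start + clip_len])
    return torch.stack(clips)               # (num_clips, clip_len, C, H, W)

video = torch.randn(300, 3, 112, 112)       # dummy 300-frame video
clips = sample_sparse_clips(video)
# Each clip is scored independently; clip-level predictions can then be
# averaged (a common late-fusion choice) to obtain a video-level prediction.
clip_scores = torch.randn(clips.shape[0], 10)   # stand-in per-clip logits
video_score = clip_scores.mean(dim=0)
```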
The foundation models have recently shown excellent performance on a variety of downstream tasks in computer vision. However, most existing vision foundation models simply focus on image-level pretraining and adaptation, which are limited for dynamic and complex video-level understanding tasks. To fill the gap, we present general video foundation models, InternVideo, by taking advantage of both generative and discriminative self-supervised video learning. Specifically, InternVideo efficiently explores masked video modeling and video-language contrastive learning as the pretraining objectives, and selectively coordinates video representations of these two complementary frameworks in a learnable manner to boost various video applications. Without bells and whistles, InternVideo achieves state-of-the-art performance on 39 video datasets from extensive tasks including video action recognition/detection, video-language alignment, and open-world video applications. Especially, our methods can obtain 91.1% and 77.2% top-1 accuracy on the challenging Kinetics-400 and Something-Something V2 benchmarks, respectively. All of these results effectively show the generality of our InternVideo for video understanding. The code will be released at https://github.com/OpenGVLab/InternVideo.
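InternVideo coordinates the representations of its two pre-training streams "in a learnable manner". As a rough illustration only, the sketch below fuses two embeddings with learnable softmax weights; the actual coordination module in the released model may be quite different.

```python
import torch
import torch.nn as nn

class LearnableCoordination(nn.Module):
    """Hypothetical sketch: combine features from a masked-video-modeling
    stream and a video-language contrastive stream with learnable weights."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(2))   # one weight per stream

    def forward(self, feat_mvm, feat_contrastive):
        w = torch.softmax(self.alpha, dim=0)
        return w[0] * feat_mvm + w[1] * feat_contrastive

coord = LearnableCoordination()
fused = coord(torch.randn(8, 768), torch.randn(8, 768))   # (8, 768)
```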
Modern video-text retrieval frameworks basically consist of three parts: a video encoder, a text encoder, and a similarity head. With the success of visual and textual representation learning, transformer-based encoders and fusion methods have also been adopted in the field of video-text retrieval. In this report, we present CLIP2TV, aiming to explore which elements are critical in transformer-based methods. To achieve this, we first revisit some recent work on multi-modal learning, then introduce several techniques into video-text retrieval, and finally evaluate them through extensive experiments under different configurations. Notably, CLIP2TV achieves 52.9@R1 on the MSR-VTT dataset, outperforming the previous SOTA result by 4.1%.
A great challenge of video-language (VidL) modeling lies in the disconnect between fixed video representations extracted from image/video understanding models and downstream VidL data. Recent studies try to mitigate this disconnect via end-to-end training. To make it computationally feasible, prior works tend to 'imagify' video inputs, i.e., a handful of sparsely sampled frames are fed into a 2D CNN, followed by simple mean pooling or concatenation to obtain the overall video representation. Although achieving promising results, such simple approaches may lose temporal information that is essential for performing downstream VidL tasks. In this work, we present VIOLET, a fully end-to-end VIdeO-LanguagE Transformer, which adopts a video transformer to explicitly model the temporal dynamics of video inputs. Further, unlike previous studies that found pre-training tasks on video inputs (e.g., masked frame modeling) not very effective, we design a new pre-training task, Masked Visual-token Modeling (MVM), for better video modeling. Specifically, the original video frame patches are 'tokenized' into discrete visual tokens, and the goal is to recover the original visual tokens based on the masked patches. Comprehensive analysis demonstrates the effectiveness of both explicit temporal modeling via the video transformer and MVM. As a result, VIOLET achieves new state-of-the-art performance on 5 video question answering tasks and 4 text-to-video retrieval tasks.
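To make the MVM objective concrete, here is a minimal sketch of the loss, assuming the discrete target token ids come from an off-the-shelf image tokenizer (e.g., a dVAE) and that the model predicts a distribution over the visual-token vocabulary at every patch position. This is an illustrative reconstruction, not VIOLET's code; the masking ratio and vocabulary size are placeholders.

```python
import torch
import torch.nn as nn

def mvm_loss(pred_logits, target_token_ids, mask):
    """Masked Visual-token Modeling sketch: cross-entropy over the discrete
    visual-token vocabulary, computed only at masked patch positions.
    pred_logits: (B, N, vocab_size), target_token_ids: (B, N), mask: (B, N) bool."""
    return nn.functional.cross_entropy(pred_logits[mask], target_token_ids[mask])

B, N, V = 2, 196, 8192                  # batch, patches per frame, token vocabulary
pred = torch.randn(B, N, V)             # transformer outputs at patch positions
targets = torch.randint(0, V, (B, N))   # discrete visual tokens from a tokenizer
mask = torch.rand(B, N) < 0.15          # which patches were masked (placeholder ratio)
loss = mvm_loss(pred, targets, mask)
```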
We introduce LaViLa, a new approach to learning video-language representations by leveraging Large Language Models (LLMs). We repurpose pre-trained LLMs to be conditioned on visual input, and finetune them to create automatic video narrators. Our auto-generated narrations offer a number of advantages, including dense coverage of long videos, better temporal synchronization of the visual information and text, and much higher diversity of text. The video-text embedding learned contrastively with these additional auto-generated narrations outperforms the previous state-of-the-art on multiple first-person and third-person video tasks, both in zero-shot and finetuned setups. Most notably, LaViLa obtains an absolute gain of 10.1% on EGTEA classification and 5.9% Epic-Kitchens-100 multi-instance retrieval benchmarks. Furthermore, LaViLa trained with only half the narrations from the Ego4D dataset outperforms baseline models trained on the full set, and shows positive scaling behavior on increasing pre-training data and model size.
Training an effective video-and-language model intuitively requires multiple frames as model inputs. However, it is unclear whether using multiple frames benefits downstream tasks and, if so, whether the performance gain is worth the drastically increased computational and memory cost of using more frames. In this work, we explore single-frame models for video-and-language learning. On a diverse set of video-and-language tasks (including text-to-video retrieval and video question answering), we show the surprising result that, with large-scale pre-training and a proper frame ensemble at inference time, a single-frame-trained model that does not consider temporal information can achieve better performance than existing methods trained with multiple frames. This result reveals a strong "static appearance bias" in popular video-and-language datasets. Therefore, to allow for a more comprehensive evaluation of video-and-language models, we propose two new retrieval tasks based on existing fine-grained action recognition datasets that encourage temporal modeling. Our code is available at https://github.com/jayleicn/singularity
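The inference-time frame ensemble mentioned above can be as simple as scoring each sampled frame against the text and averaging. Here is a minimal sketch under that assumption; the Singularity repository may aggregate differently (e.g., over logits or with more frames).

```python
import torch

def frame_ensemble_score(frame_embs, text_emb):
    """Score each frame independently against the text and average.
    frame_embs: (num_frames, D), text_emb: (D,), both assumed L2-normalized."""
    per_frame = frame_embs @ text_emb        # cosine similarity per frame
    return per_frame.mean()                  # average over sampled frames

frames = torch.nn.functional.normalize(torch.randn(12, 256), dim=-1)
text = torch.nn.functional.normalize(torch.randn(256), dim=-1)
score = frame_ensemble_score(frames, text)
```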
Large-scale multi-modal training with image-text pairs imparts strong generalization to CLIP model. Since training on a similar scale for videos is infeasible, recent approaches focus on the effective transfer of image-based CLIP to the video domain. In this pursuit, new parametric modules are added to learn temporal information and inter-frame relationships which require meticulous design efforts. Furthermore, when the resulting models are learned on videos, they tend to overfit on the given task distribution and lack in generalization aspect. This begs the following question: How to effectively transfer image-level CLIP representations to videos? In this work, we show that a simple Video Fine-tuned CLIP (ViFi-CLIP) baseline is generally sufficient to bridge the domain gap from images to videos. Our qualitative analysis illustrates that the frame-level processing from CLIP image-encoder followed by feature pooling and similarity matching with corresponding text embeddings helps in implicitly modeling the temporal cues within ViFi-CLIP. Such fine-tuning helps the model to focus on scene dynamics, moving objects and inter-object relationships. For low-data regimes where full fine-tuning is not viable, we propose a 'bridge and prompt' approach that first uses fine-tuning to bridge the domain gap and then learns prompts on language and vision side to adapt CLIP representations. We extensively evaluate this simple yet strong baseline on zero-shot, base-to-novel generalization, few-shot and fully supervised settings across five video benchmarks. Our code is available at https://github.com/muzairkhattak/ViFi-CLIP.
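The frame-level pipeline described above (per-frame CLIP encoding, temporal pooling, similarity matching with the text embedding) can be sketched as follows. The encoders here are toy stand-ins so the snippet runs without CLIP weights; the actual model fine-tunes CLIP's ViT image encoder and text transformer.

```python
import torch
import torch.nn as nn

def vifi_clip_similarity(image_encoder, text_encoder, video_frames, token_ids):
    """ViFi-CLIP-style scoring sketch: encode every frame, mean-pool over time,
    then compare with the text embedding by cosine similarity.
    video_frames: (T, C, H, W); token_ids: (L,)."""
    frame_feats = image_encoder(video_frames)            # (T, D) per-frame features
    video_feat = frame_feats.mean(dim=0, keepdim=True)   # temporal average pooling
    text_feat = text_encoder(token_ids.unsqueeze(0))     # (1, D)
    video_feat = nn.functional.normalize(video_feat, dim=-1)
    text_feat = nn.functional.normalize(text_feat, dim=-1)
    return (video_feat * text_feat).sum(dim=-1)          # cosine similarity

# Toy stand-in encoders (not CLIP) to keep the sketch self-contained.
img_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
txt_enc = nn.Sequential(nn.Embedding(1000, 16), nn.Flatten(), nn.Linear(12 * 16, 64))
score = vifi_clip_similarity(img_enc, txt_enc,
                             torch.randn(8, 3, 32, 32),
                             torch.randint(0, 1000, (12,)))
```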
The goal of video temporal grounding (VTG) is to localize temporal moments in an untrimmed video according to a natural language (NL) description. Since real-world applications provide never-ending video streams, there is a growing demand for temporal grounding of long-form videos, which raises two major challenges: (1) the long video length makes it hard to process the entire video without reducing the sampling rate, leading to a high computational burden; and (2) accurate multi-modal alignment becomes more challenging as the number of candidate moments grows. To address these challenges, we propose CONE, an efficient window-centric coarse-to-fine alignment framework that flexibly handles long-form video inputs with higher inference speed and enhances temporal grounding through a novel coarse-to-fine multi-modal alignment scheme. Specifically, we slice the long video into candidate windows via a sliding-window approach. CONE (1) learns inter-window (coarse-grained) semantic variance through contrastive learning and speeds up inference by pre-filtering the candidate windows relevant to the NL query, and (2) performs intra-window (fine-grained) candidate moment ranking by exploiting the powerful multi-modal alignment ability of a contrastive vision-text pre-trained model. Extensive experiments on two large-scale long-video VTG benchmarks consistently show substantial performance gains (from 3.13% to 6.87% on MAD and from 10.46% to 13.46% on Ego4D-NLQ), and CONE achieves SOTA results on both datasets. Analyses reveal the effectiveness of each component and the higher efficiency of long-video grounding: our system improves inference speed by 2x on Ego4D-NLQ and 15x on MAD while keeping CONE's SOTA performance.
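The sliding-window slicing and coarse pre-filtering stages can be sketched as below, assuming window embeddings and the query embedding are already computed and L2-normalized; CONE additionally trains the window ranking contrastively, which is omitted here.

```python
import torch

def slice_windows(num_frames, window_size, stride):
    """Slice a long video into overlapping candidate windows (start, end indices)."""
    starts = range(0, max(num_frames - window_size, 0) + 1, stride)
    return [(s, min(s + window_size, num_frames)) for s in starts]

def prefilter_windows(window_embs, query_emb, top_k=5):
    """Coarse stage: rank windows by cosine similarity with the NL query and keep
    only the top-k for fine-grained, intra-window moment ranking."""
    scores = window_embs @ query_emb
    keep = torch.topk(scores, k=min(top_k, window_embs.shape[0])).indices
    return keep, scores

windows = slice_windows(num_frames=9000, window_size=900, stride=450)
window_embs = torch.nn.functional.normalize(torch.randn(len(windows), 256), dim=-1)
query_emb = torch.nn.functional.normalize(torch.randn(256), dim=-1)
kept, scores = prefilter_windows(window_embs, query_emb)
```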
This work presents a self-supervised learning framework named TeG to explore Temporal Granularity in learning video representations. In TeG, we sample a long clip from a video, as well as a short clip that lies inside the long clip, and then extract their dense temporal embeddings. The training objective consists of two parts: a fine-grained temporal learning objective that maximizes the similarity between corresponding temporal embeddings of the short clip and the long clip, and a persistent temporal learning objective that pulls together the global embeddings of the two clips. Our study reveals the impact of temporal granularity through three major findings: 1) different video tasks may require features of different temporal granularities; 2) intriguingly, some tasks widely believed to require temporal awareness can in fact be well addressed by temporally persistent features; 3) the flexibility of TeG yields state-of-the-art results on 8 video benchmarks, outperforming supervised pre-training in most cases.
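The two-part objective can be written compactly as below. For simplicity the terms are shown as negative cosine similarities; the paper uses contrastive formulations with negatives, so treat this as an illustrative simplification rather than TeG's actual loss.

```python
import torch
import torch.nn.functional as F

def teg_objectives(short_dense, long_dense_aligned, short_global, long_global):
    """Sketch of TeG's objectives: (1) a fine-grained term pulling together
    temporally corresponding dense embeddings of the short and long clips, and
    (2) a persistent term pulling together the two clips' global embeddings.
    short_dense, long_dense_aligned: (T, D); short_global, long_global: (D,)."""
    fine = -F.cosine_similarity(short_dense, long_dense_aligned, dim=-1).mean()
    persistent = -F.cosine_similarity(short_global, long_global, dim=0)
    return fine, persistent

fine, persistent = teg_objectives(torch.randn(8, 128), torch.randn(8, 128),
                                  torch.randn(128), torch.randn(128))
loss = fine + persistent
```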
Video content is multifaceted, consisting of objects, scenes, interactions, and actions. Existing datasets are mostly labeled with only one of these facets for model training, leading to video representations that are biased toward a single facet depending on the training dataset. There has been no study of how to learn video representations from multifaceted labels, or of whether multifaceted information helps video representation learning. In this paper, we propose a new learning framework, MUlti-Faceted Integration (MUFI), to aggregate facets from different datasets and learn a representation that reflects the full spectrum of video content. Technically, MUFI formulates the problem as visual-semantic embedding learning, which maps video representations into a rich semantic embedding space and jointly optimizes them from two perspectives. One is to exploit the intra-facet supervision between each video and its own label descriptions; the other is to predict the "semantic representation" of each video from the facets of other datasets as inter-facet supervision. Extensive experiments demonstrate that learning 3D CNNs with our MUFI framework on a union of four large-scale video datasets plus two image datasets leads to superior video representations. The 3D CNN pre-learned with MUFI also shows clear improvements over other approaches on several downstream video applications. More remarkably, MUFI achieves 98.1%/80.9% on UCF101/HMDB51 for action recognition and a 101.5% CIDEr-D score on MSVD for video captioning.
Temporal grounding aims to localize a video moment that is semantically aligned with a given natural language query. Existing methods typically apply a detection or regression pipeline on the fused representation, with the research focus on designing complicated prediction heads or fusion strategies. Instead, viewing temporal grounding as a metric-learning problem, we present a Mutual Matching Network (MMN) to directly model the similarity between language queries and video moments in a joint embedding space. This new metric-learning framework can fully exploit negative samples from two new aspects: constructing negative cross-modal pairs within a mutual matching scheme and mining negative pairs across different videos. These new negative samples enhance the joint representation learning of the two modalities via cross-modal mutual matching to maximize their mutual information. Experiments show that our MMN achieves highly competitive performance compared with state-of-the-art methods on four video grounding benchmarks. Based on MMN, we also present a winning solution for the HC-STVG challenge of the 3rd PIC workshop. This suggests that metric learning remains a promising approach to temporal grounding by capturing the essential cross-modal correlation in a joint embedding space. Code is available at https://github.com/mcg-nju/mmn.
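The metric-learning view can be illustrated with a symmetric contrastive loss over a batch of moment and query embeddings drawn from different videos, so non-matching pairs across modalities and across videos act as negatives. This InfoNCE-style sketch is only in the spirit of MMN; the paper's actual losses (binary cross-entropy variants with IoU-based supervision) differ.

```python
import torch
import torch.nn.functional as F

def mutual_matching_loss(moment_embs, query_embs, tau=0.07):
    """Cross-modal contrastive sketch: matched moment/query pairs (the diagonal)
    should score higher than negatives built across modalities and videos.
    moment_embs, query_embs: (B, D)."""
    m = F.normalize(moment_embs, dim=-1)
    q = F.normalize(query_embs, dim=-1)
    sim = m @ q.t() / tau                       # (B, B) cross-modal similarities
    targets = torch.arange(sim.shape[0])
    loss_m2q = F.cross_entropy(sim, targets)    # moments matching queries
    loss_q2m = F.cross_entropy(sim.t(), targets)
    return 0.5 * (loss_m2q + loss_q2m)

loss = mutual_matching_loss(torch.randn(16, 256), torch.randn(16, 256))
```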
Text-based person retrieval aims to find a queried person according to a textual description. The key is to learn a common latent-space mapping between the visual and textual modalities. To achieve this goal, existing works either employ segmentation to obtain explicit cross-modal alignments or utilize attention to explore salient alignments. These methods have two shortcomings: 1) labeling cross-modal alignments is time-consuming; 2) attention methods can discover salient cross-modal alignments but may ignore some subtle yet valuable pairs. To relieve these issues, we introduce an Implicit Visual-Textual (IVT) framework for text-based person retrieval. Unlike previous models, IVT uses a single network to learn representations for both modalities, which facilitates visual-textual interaction. To explore fine-grained alignment, we further propose two implicit semantic alignment paradigms: multi-level alignment (MLA) and bidirectional mask modeling (BMM). The MLA module explores finer matching at the sentence, phrase, and word levels, while the BMM module aims to mine more semantic alignments between the visual and textual modalities. Extensive experiments are carried out to evaluate the proposed IVT on public datasets, i.e., CUHK-PEDES, RSTPReid, and ICFG-PEDES. Even without explicit body-part alignment, our approach still achieves state-of-the-art performance. Code is available at: https://github.com/tencentyouturesearch/personretrieval-ivt.
Video-text pre-training (VTP) aims to learn transferable representations from large-scale web videos. To date, almost all existing VTP methods are limited to retrieval-based downstream tasks, e.g., video retrieval, whereas their transfer potential on localization-based tasks, e.g., temporal grounding, is under-explored. In this paper, we experimentally analyze and demonstrate the incompatibility of current VTP methods with localization tasks, and propose a novel localization-oriented video-text pre-training framework dubbed LocVTP. Specifically, we perform fine-grained contrastive alignment as a complement to coarse-grained alignment via a clip-word correspondence discovery scheme. To further enhance the temporal reasoning ability of the learned features, we propose a context projection head and a temporal-aware contrastive loss to perceive contextual relationships. Extensive experiments on four downstream tasks across six datasets demonstrate that our LocVTP achieves state-of-the-art performance on both retrieval-based and localization-based tasks. Furthermore, we conduct comprehensive ablation studies and thorough analyses to explore the optimal model design and training strategies.
This paper studies the multimedia problem of temporal sentence grounding (TSG), which aims to accurately determine the specific video segment in an untrimmed video according to a given sentence query. Traditional TSG methods mainly follow a top-down or bottom-up framework and are not end-to-end; they rely heavily on time-consuming post-processing to refine the grounding results. Recently, some transformer-based methods have been proposed to efficiently model the fine-grained semantic alignment between video and query. Although these methods achieve notable performance to some extent, they treat the frames of the video and the words of the query equally as transformer inputs for correlation, failing to capture their different levels of granularity and distinct semantics. To address this issue, we propose a novel Hierarchical Local-Global Transformer (HLGT) that leverages this hierarchical information and models the interactions between different levels of granularity and between different modalities to learn more fine-grained multi-modal representations. Specifically, we first split the video and the query into individual clips and phrases to learn their local context (adjacent dependencies) and global correlations (long-range dependencies) via a temporal transformer. A global-local transformer is then introduced to learn the interactions between local-level and global-level semantics for better multi-modal reasoning. In addition, we develop a new cross-modal cycle-consistency loss to enforce interaction between the two modalities and encourage semantic consistency between them. Finally, we design a brand-new cross-modal parallel transformer decoder to integrate the encoded visual and textual features for final grounding. Extensive experiments on three challenging datasets show that our proposed HLGT achieves new state-of-the-art performance.
Video-text retrieval has been a crucial and fundamental task in multi-modal research. The development of video-text retrieval has been greatly promoted by large-scale multi-modal contrastive pre-training, which mainly focuses on coarse-grained or fine-grained contrast. However, cross-grained contrast, i.e., the contrast between coarse-grained and fine-grained representations, has rarely been explored in prior research. Compared with fine-grained or coarse-grained contrast, cross-grained contrast computes the correlation between coarse-grained features and each fine-grained feature, and can filter out unnecessary fine-grained features, guided by the coarse-grained feature, during similarity calculation, thereby improving retrieval accuracy. To this end, this paper presents a novel multi-grained contrastive model, namely X-CLIP, for video-text retrieval. A further challenge lies in the similarity aggregation problem, which aims to aggregate fine-grained and cross-grained similarity matrices into an instance-level similarity. To address this challenge, we propose the Attention Over Similarity Matrix (AOSM) module, which makes the model focus on the contrast between essential frames and words, thus reducing the impact of unnecessary frames and words on the retrieval results. With multi-grained contrast and the proposed AOSM module, X-CLIP achieves outstanding performance on five widely used video-text retrieval datasets, including MSR-VTT (49.3 R@1), MSVD (50.4 R@1), LSMDC (26.1 R@1), DiDeMo (47.8 R@1), and ActivityNet (46.2 R@1). It outperforms the previous state-of-the-art by +6.3%, +6.6%, +11.1%, +6.7%, and +3.8% relative improvements on these benchmarks, demonstrating the superiority of multi-grained contrast and AOSM.
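The AOSM idea, pooling a fine-grained similarity matrix into a single instance-level score with softmax attention so informative frames and words dominate, can be sketched as below. The exact X-CLIP formulation (separate video-to-text and text-to-video passes, temperature settings, handling of cross-grained matrices) may differ in detail.

```python
import torch

def attention_over_similarity_matrix(sim, tau=0.01):
    """Pool a frame-word similarity matrix to one score: attend over words for
    each frame, then attend over frames. sim: (num_frames, num_words)."""
    word_attn = torch.softmax(sim / tau, dim=1)
    per_frame = (word_attn * sim).sum(dim=1)          # (num_frames,)
    frame_attn = torch.softmax(per_frame / tau, dim=0)
    return (frame_attn * per_frame).sum()             # scalar instance-level score

sim = torch.randn(12, 20)    # fine-grained frame-word similarities for one pair
score = attention_over_similarity_matrix(sim)
```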
Pre-training by numerous image data has become de-facto for robust 2D representations. In contrast, due to the expensive data acquisition and annotation, a paucity of large-scale 3D datasets severely hinders the learning for high-quality 3D features. In this paper, we propose an alternative to obtain superior 3D representations from 2D pre-trained models via Image-to-Point Masked Autoencoders, named as I2P-MAE. By self-supervised pre-training, we leverage the well learned 2D knowledge to guide 3D masked autoencoding, which reconstructs the masked point tokens with an encoder-decoder architecture. Specifically, we first utilize off-the-shelf 2D models to extract the multi-view visual features of the input point cloud, and then conduct two types of image-to-point learning schemes on top. For one, we introduce a 2D-guided masking strategy that maintains semantically important point tokens to be visible for the encoder. Compared to random masking, the network can better concentrate on significant 3D structures and recover the masked tokens from key spatial cues. For another, we enforce these visible tokens to reconstruct the corresponding multi-view 2D features after the decoder. This enables the network to effectively inherit high-level 2D semantics learned from rich image data for discriminative 3D modeling. Aided by our image-to-point pre-training, the frozen I2P-MAE, without any fine-tuning, achieves 93.4% accuracy for linear SVM on ModelNet40, competitive to the fully trained results of existing methods. By further fine-tuning on ScanObjectNN's hardest split, I2P-MAE attains the state-of-the-art 90.11% accuracy, +3.68% to the second-best, demonstrating superior transferable capacity. Code will be available at https://github.com/ZrrSkywalker/I2P-MAE.
This paper presents SimVTP: a Simple Video-Text Pretraining framework via masked autoencoders. We randomly mask out the spatial-temporal tubes of input video and the word tokens of input text and then feed them into a unified autoencoder to reconstruct the missing pixels and words. Our SimVTP has several properties: 1) Thanks to the unified autoencoder, SimVTP reconstructs the masked signal of one modality with the help from another modality, which implicitly learns the cross-modal alignment between video tubes and text tokens. 2) SimVTP not only benefits from a high video masking ratio (e.g. 90%) due to the temporal redundancy of video, but also needs a high text masking ratio (e.g. 75%), which is much higher than BERT (e.g. 15%), to achieve optimal performance. This is because the aid of video modality makes text reconstruction less challenging, which thus needs a higher mask ratio to make the pretext harder for useful feature learning. 3) Equipping SimVTP with video-text contrastive learning (VTC) and video-text matching (VTM), which are two commonly used cross-modal training strategies, could further improve the transferable performance significantly. 4) SimVTP is data-efficient, e.g., pre-training only on 10% data of WebVid-2M, SimVTP achieves surprisingly good results (43.8 R@1) on MSRVTT, which is far above recent state-of-the-art methods pre-trained on both CC3M and WebVid-2M. We transfer our pre-trained model to various downstream tasks and achieve superior performance. The codes and models will be released at https://github.com/mayuelala/SimVTP.
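The masking ratios reported above (roughly 90% of video tubes and 75% of text tokens) are the key hyperparameters. A small sketch of mask generation, assuming plain uniform random masking with an exact masked count, which is an assumption about the implementation rather than SimVTP's released code:

```python
import torch

def make_mask(num_tokens, mask_ratio):
    """Boolean mask with round(num_tokens * mask_ratio) True entries (True = masked)."""
    mask = torch.zeros(num_tokens, dtype=torch.bool)
    masked_idx = torch.randperm(num_tokens)[: int(round(num_tokens * mask_ratio))]
    mask[masked_idx] = True
    return mask

video_mask = make_mask(num_tokens=1568, mask_ratio=0.90)   # spatial-temporal tube positions
text_mask = make_mask(num_tokens=32, mask_ratio=0.75)      # word-token positions
print(video_mask.float().mean().item(), text_mask.float().mean().item())
```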
Video-language pre-training has advanced the performance of various downstream video-language tasks. However, most previous methods directly inherit or adapt typical image-language pre-training paradigms to video-language pre-training, thus not fully exploiting the unique characteristic of video, i.e., temporal. In this paper, we propose a Hierarchical Temporal-Aware video-language pre-training framework, HiTeA, with two novel pre-training tasks for modeling cross-modal alignment between moments and texts as well as the temporal relations of video-text pairs. Specifically, we propose a cross-modal moment exploration task to explore moments in videos, which results in detailed video moment representation. Besides, the inherent temporal relations are captured by aligning video-text pairs as a whole in different time resolutions with multi-modal temporal relation exploration task. Furthermore, we introduce the shuffling test to evaluate the temporal reliance of datasets and video-language pre-training models. We achieve state-of-the-art results on 15 well-established video-language understanding and generation tasks, especially on temporal-oriented datasets (e.g., SSv2-Template and SSv2-Label) with 8.6% and 11.1% improvement respectively. HiTeA also demonstrates strong generalization ability when directly transferred to downstream tasks in a zero-shot manner. Models and demo will be available on ModelScope.
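The shuffling test mentioned above compares a model's video-text score on frames in their original order against randomly shuffled orders; a small gap suggests little reliance on temporal order. A minimal, model-agnostic sketch (this helper is illustrative, not HiTeA's evaluation code):

```python
import torch

def shuffling_test_gap(score_fn, frames, text, num_trials=5):
    """Return the score gap between ordered and shuffled frames.
    `score_fn` is any callable mapping (frames, text) to a scalar score."""
    original = score_fn(frames, text)
    shuffled_scores = []
    for _ in range(num_trials):
        perm = torch.randperm(frames.shape[0])
        shuffled_scores.append(score_fn(frames[perm], text))
    shuffled = torch.stack(shuffled_scores).mean()
    return (original - shuffled).item()

# Toy usage with a dummy scoring function standing in for a real model.
dummy_score = lambda frames, text: frames.mean() + float(len(text))
gap = shuffling_test_gap(dummy_score, torch.randn(16, 3, 8, 8), "a person cooks")
```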
In this report, we present the ReLER@ZJU-Alibaba submission to the Ego4D Natural Language Queries (NLQ) Challenge at CVPR 2022. Given a video clip and a text query, the goal of this challenge is to locate the temporal moment of the video clip where the answer to the query can be obtained. To tackle this task, we propose a multi-scale cross-modal transformer and a video frame-level contrastive loss to fully uncover the correlation between language queries and video clips. In addition, we propose two data augmentation strategies to increase the diversity of training samples. The experimental results demonstrate the effectiveness of our method. The final submission ranked first on the leaderboard.