Semantic representation is of great benefit to the video text tracking (VTT) task, which requires simultaneously classifying, detecting, and tracking texts in video. Most existing approaches tackle this task via appearance similarity in consecutive frames, while ignoring the rich semantic features. In this paper, we explore robustly tracking video text with contrastive learning of semantic and visual representations. Correspondingly, we present an end-to-end video text tracker with Semantic and Visual Representations (SVRep), which detects and tracks texts by exploiting the visual and semantic relationships between different texts in a video sequence. Besides, with a lightweight architecture, SVRep achieves state-of-the-art performance while maintaining competitive inference speed. Specifically, with a ResNet-18 backbone, SVRep achieves an IDF1 of 65.9%, running at 16.7 FPS, on the ICDAR2015(video) dataset, an 8.6% improvement over the previous state-of-the-art method.
translated by Google Translate
Video text spotting (VTS) is the task that requires simultaneously detecting, tracking, and recognizing text in videos. Existing video text spotting methods typically develop sophisticated pipelines with multiple models, which is not friendly to real-time applications. Here we propose a real-time end-to-end video text spotter with Contrastive representation learning (CoText). Our contributions are threefold: 1) CoText simultaneously addresses the three tasks (i.e., text detection, tracking, and recognition) in a real-time, end-to-end trainable framework. 2) With contrastive learning, CoText models long-range dependencies and learns temporal information across multiple frames. 3) A simple, lightweight architecture is designed for effective and accurate performance, including a CTC-based recognition head with masked RoI and GPU-parallel detection post-processing. Extensive experiments show the superiority of our method. In particular, CoText tracks and spots text at 41.0 FPS on the ICDAR2015(video) dataset with 72.0% IDF1, improving on the previous best method by 10.5% and 32.0 FPS, respectively. The code can be found at github.com/weijiawu/cotext.
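The CTC-based recognition mentioned above can be illustrated with a minimal sketch of greedy CTC decoding (take the best label per time step, collapse consecutive repeats, then drop blanks). This is a generic illustration of the decoding rule only, not CoText's actual recognition head; the `charset` layout and blank index are assumed conventions.

```python
import numpy as np

def ctc_greedy_decode(logits, charset, blank=0):
    """Greedy CTC decoding: arg-max label per time step,
    collapse consecutive repeats, then remove blanks."""
    best = np.argmax(logits, axis=1)            # (T,) best label per frame
    decoded = []
    prev = None
    for label in best:
        if label != prev and label != blank:    # collapse repeats, skip blank
            decoded.append(charset[label - 1])  # labels 1..N map into charset
        prev = label
    return "".join(decoded)

# Toy example: T=6 time steps over a vocabulary of blank + "c", "a", "t".
charset = "cat"
logits = np.zeros((6, 4))
# Frame-wise best path: c c blank a t t
for t, label in enumerate([1, 1, 0, 2, 3, 3]):
    logits[t, label] = 1.0
print(ctc_greedy_decode(logits, charset))  # -> cat
```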
Recent video text spotting methods usually require a three-stage pipeline, i.e., detecting text in individual images, recognizing the localized text, and tracking the text streams with post-processing to generate the final results. These methods typically follow the tracking-by-match paradigm and develop sophisticated pipelines. In this paper, rooted in Transformer sequence modeling, we propose a simple but effective end-to-end video text DEtection, Tracking, and Recognition framework (TransDETR). TransDETR has two main advantages: 1) Different from the explicit match paradigm in adjacent frames, TransDETR tracks and recognizes each text implicitly with a distinct query, termed a text query, over a long-range temporal sequence (more than 7 frames). 2) TransDETR is the first end-to-end trainable video text spotting framework that simultaneously addresses the three sub-tasks (i.e., text detection, tracking, and recognition). Extensive experiments on four video text datasets (i.e., ICDAR2013 Video, ICDAR2015 Video, Minetto, and YouTube Video Text) demonstrate that TransDETR achieves state-of-the-art performance, with up to around 8.0% improvement on the video text spotting task. The code of TransDETR can be found at https://github.com/weijiawu/transdetr.
Most existing video text spotting benchmarks focus on evaluating a single language and scenarios with limited data. In this work, we introduce a large-scale, Bilingual, Open World Video text benchmark dataset (BOVText). BOVText has four features. First, we provide 2,000+ videos with more than 1,750,000 frames, 25 times larger than the existing largest dataset with incidental text in videos. Second, our dataset covers 30+ open categories with a wide selection of various scenarios, e.g., life vlogs, driving, movies, etc. Third, abundant text type annotations (i.e., title, caption, or scene text) are provided for the different representational meanings in a video. Fourth, BOVText provides bilingual text annotation to promote life and communication across multiple cultures. Besides, we propose an end-to-end video text spotting framework with Transformer, termed TransVTSpotter, which solves multi-oriented text spotting in video with a simple but efficient attention-based query-key mechanism. It applies object features from the previous frame as a tracking query for the current frame and introduces rotation angle prediction to fit multi-oriented text instances. On ICDAR2015(video), TransVTSpotter achieves state-of-the-art performance with 44.1% MOTA at 9 FPS. The dataset and the code of TransVTSpotter can be found at github:com=weijiawu=BOVText and github:com=weijiawu=TransVTSpotter, respectively.
Text tracking is to track multiple texts in a video and to construct a trajectory for each text. Existing methods tackle this task by utilizing the tracking-by-detection framework, i.e., detecting the text instances in each frame and associating the corresponding text instances across consecutive frames. We argue that the tracking accuracy of this paradigm is severely limited in more complex scenarios: owing to motion blur and the like, missed detections of text instances break text trajectories. In addition, different text instances with similar appearance are easily confused, leading to incorrect associations between text instances. To this end, a novel spatio-temporal complementary text tracking model is proposed in this paper. We leverage a Siamese complementary module to fully exploit the continuity characteristic of text instances in the temporal dimension, which effectively alleviates the missed detection of text instances and hence ensures the completeness of each text trajectory. We further integrate the semantic cues and the visual cues of the text instances into a unified representation via a text similarity learning network, which provides high discriminative power in the presence of text instances with similar appearance, and thus avoids mis-associations between them. Our method achieves state-of-the-art performance on several public benchmarks. The source code is available at https://github.com/lsabrinax/VideoTextSCM.
In recent years, video instance segmentation (VIS) has been largely advanced by offline models, while online models have gradually attracted less attention, possibly due to their inferior performance. However, online methods have an inherent advantage in handling long video sequences and ongoing videos, where offline models fail due to the limits of computational resources. Therefore, it would be highly desirable if online models could achieve comparable or even better performance than offline models. By dissecting current online and offline models, we demonstrate that the main cause of the performance gap is the error-prone association between frames caused by the similar appearance of different instances in the feature space. Observing this, we propose an online framework based on contrastive learning that is able to learn more discriminative instance embeddings for association, and fully exploits history information for stability. Despite its simplicity, our method outperforms all online and offline methods on three benchmarks. Specifically, we achieve 49.5 AP on YouTube-VIS 2019, a significant improvement of 13.2 AP and 2.1 AP over the prior online and offline art, respectively. Moreover, we achieve 30.2 AP on OVIS, a more challenging dataset with significant crowding and occlusions, surpassing the prior art by 14.8 AP. The proposed method won first place in the video instance segmentation track of the 4th Large-scale Video Object Segmentation Challenge (CVPR2022). We hope that the simplicity and effectiveness of our method, as well as our insight into current methods, can shed light on the exploration of VIS models.
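The association step described above, matching instances across frames by their discriminative embeddings, can be sketched as greedy cosine-similarity matching. The function names and the 0.5 threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between two sets of row vectors."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def associate(prev_emb, curr_emb, threshold=0.5):
    """Greedily match current-frame instance embeddings to the
    previous frame by highest cosine similarity, most confident first."""
    sim = cosine_sim(curr_emb, prev_emb)    # (N_curr, N_prev)
    matches, used = {}, set()
    for i in np.argsort(-sim.max(axis=1)):  # most confident rows first
        j = int(np.argmax(sim[i]))
        if sim[i, j] >= threshold and j not in used:
            matches[int(i)] = j
            used.add(j)
    return matches

# Toy example: two instances, slightly perturbed between frames.
prev_emb = np.array([[1.0, 0.0], [0.0, 1.0]])
curr_emb = np.array([[0.9, 0.1], [0.1, 0.9]])
print(associate(prev_emb, curr_emb))  # each current instance keeps its identity
```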
In this work, we present SeqFormer, a frustratingly simple model for video instance segmentation. SeqFormer follows the principle of vision transformers that model instance relationships among video frames. Nevertheless, we observe that a stand-alone instance query suffices for capturing a time sequence of instances in a video, but attention should be performed on each frame independently. To achieve this, SeqFormer locates an instance in each frame and aggregates temporal information to learn a powerful representation of a video-level instance, which is used to predict the mask sequence on each frame dynamically. Instance tracking is achieved naturally, without any tracking branch or post-processing. On the YouTube-VIS dataset, SeqFormer achieves 47.4 AP with a ResNet-50 backbone and 49.0 AP with a ResNet-101 backbone, without bells and whistles. Such results significantly exceed the previous state-of-the-art performance by 4.6 and 4.4, respectively. In addition, integrated with the recently proposed Swin transformer, SeqFormer achieves a much higher AP of 59.3. We hope SeqFormer can be a strong baseline that fosters future research in video instance segmentation, while advancing this field with a more robust, accurate, and neat model. The code and pre-trained models are publicly available at https://github.com/wjf5203/seqformer.
Previous work on action representation learning focused on global representations for short video clips. In contrast, many practical applications, such as video alignment, strongly demand learning the intensive representation of long videos. In this paper, we introduce a new framework of contrastive action representation learning (CARL) to learn frame-wise action representation in a self-supervised or weakly-supervised manner, especially for long videos. Specifically, we introduce a simple but effective video encoder that considers both spatial and temporal context by combining convolution and transformer. Inspired by the recent massive progress in self-supervised learning, we propose a new sequence contrast loss (SCL) applied to two related views obtained by expanding a series of spatio-temporal data in two versions. One is the self-supervised version that optimizes embedding space by minimizing KL-divergence between sequence similarity of two augmented views and prior Gaussian distribution of timestamp distance. The other is the weakly-supervised version that builds more sample pairs among videos using video-level labels by dynamic time wrapping (DTW). Experiments on FineGym, PennAction, and Pouring datasets show that our method outperforms previous state-of-the-art by a large margin for downstream fine-grained action classification and even faster inference. Surprisingly, although without training on paired videos like in previous works, our self-supervised version also shows outstanding performance in video alignment and fine-grained frame retrieval tasks.
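As a rough, self-contained sketch of the self-supervised variant described above: for each frame embedding in one view, the softmax over its similarities to the other view is pulled, via KL divergence, towards a Gaussian prior over timestamp distance. The temperature `tau`, the width `sigma`, and the epsilon smoothing are assumed values for illustration, not the paper's exact formulation.

```python
import numpy as np

def sequence_contrast_loss(z1, z2, t1, t2, sigma=1.0, tau=0.1):
    """Sketch of a sequence contrast loss: KL divergence between the
    sequence-similarity distribution of two views and a Gaussian prior
    over timestamp distance. Shapes: z* (T, D) embeddings, t* (T,) timestamps."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                              # (T1, T2) scaled similarities
    p = np.exp(sim - sim.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                  # predicted distribution per frame
    dist = (t1[:, None] - t2[None, :]) ** 2
    g = np.exp(-dist / (2 * sigma ** 2))
    g /= g.sum(axis=1, keepdims=True)                  # Gaussian prior per frame
    kl = np.sum(g * (np.log(g + 1e-9) - np.log(p + 1e-9)), axis=1)
    return float(np.mean(kl))

# Toy example: two "augmented views" of the same 8-frame sequence.
rng = np.random.default_rng(0)
T, D = 8, 16
t = np.arange(T, dtype=float)
z1 = rng.normal(size=(T, D))
z2 = z1 + 0.1 * rng.normal(size=(T, D))
loss = sequence_contrast_loss(z1, z2, t, t)
print(loss)
```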
Video instance segmentation (VIS) is a task that requires simultaneously classifying, segmenting, and associating instances in video. Recent VIS approaches rely on sophisticated pipelines to achieve this goal, including RoI-related operations or 3D convolutions. In contrast, we present a simple and efficient single-stage VIS framework based on the instance segmentation method CondInst by adding an extra tracking head. To improve instance association accuracy, a novel bi-directional spatio-temporal contrastive learning strategy for tracking embeddings across frames is proposed. Moreover, an instance-wise temporal consistency scheme is utilized to produce temporally coherent results. Experiments conducted on the YouTube-VIS-2019, YouTube-VIS-2021, and OVIS-2021 datasets validate the effectiveness and efficiency of the proposed method. We hope the proposed framework can serve as a simple and strong alternative for many other instance-level video association tasks.
We investigate the problem of video Referring Expression Comprehension (REC), which aims to localize the referent object described in a sentence to visual regions in the video frames. Despite recent progress, existing methods suffer from two problems: 1) inconsistent localization results across video frames; 2) confusion between the referent object and contextual objects. To this end, we propose a novel Dual Correspondence Network (dubbed DCNet) that explicitly strengthens the dense associations in both the inter-frame and cross-modal manners. First, we aim to build inter-frame correlations for all existing instances within the frames. Specifically, we compute the inter-frame patch-wise cosine similarity to estimate a dense alignment and then perform inter-frame contrastive learning to map them close in feature space. Second, we propose to build fine-grained patch-word alignment to associate each patch with certain words. Due to the lack of such detailed annotations, we also predict the patch-word correspondence through the cosine similarity. Extensive experiments show that our DCNet achieves state-of-the-art performance on both video and image benchmarks. Furthermore, we conduct comprehensive ablation studies and thorough analyses to explore the optimal model designs. Notably, our inter-frame and cross-modal contrastive losses are plug-and-play functions applicable to any video REC architecture. For example, by building on top of Co-grounding, we boost the performance by 1.48% on Accu.@0.5 for the VID-Sentence dataset.
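The patch-word correspondence predicted via cosine similarity can be sketched as follows; this is a toy illustration of the assignment rule with made-up embeddings, not DCNet's architecture.

```python
import numpy as np

def patch_word_correspondence(patches, words):
    """Predict a pseudo patch-word alignment by cosine similarity:
    each patch is assigned to its most similar word."""
    p = patches / np.linalg.norm(patches, axis=1, keepdims=True)
    w = words / np.linalg.norm(words, axis=1, keepdims=True)
    sim = p @ w.T                 # (num_patches, num_words)
    return sim.argmax(axis=1)     # word index per patch

# Toy example: 3 patch embeddings, 2 word embeddings.
patches = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]])
words = np.array([[0.0, 1.0], [1.0, 0.0]])
print(patch_word_correspondence(patches, words))  # -> [1 0 0]
```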
Recently, cross-modal pre-training has become a hotspot because of its wide application in various downstream research, including retrieval, captioning, question answering, and so on. However, existing methods adopt one-stream pre-training models to explore joint vision-language representations for cross-modal retrieval, which easily suffer from computational explosion. Moreover, although conventional double-stream structures are quite efficient, they still lack the vital cross-modal interactions, resulting in low performance. Motivated by these challenges, we put forward Contrastive Cross-Modal Knowledge Sharing Pre-training (COOKIE) to grasp joint text-image representations. Structurally, COOKIE adopts the traditional double-stream structure because of its acceptable time consumption. To overcome the inherent defects of the double-stream structure mentioned above, we elaborately design two effective modules. Specifically, the first module is a weight-sharing transformer built on top of the visual and textual encoders, aiming to semantically align text and image. This design makes the visual and textual paths focus on the same semantics. The other consists of three specially designed contrastive learning objectives, aiming to share knowledge between the different models. The shared cross-modal knowledge greatly advances the study of unimodal representations, thereby promoting single-modal retrieval tasks. Extensive experimental results on multi-modal matching research, including cross-modal retrieval, text matching, and image retrieval, reveal the superiority of our pre-training model in both computational efficiency and statistical metrics.
We present a unified method, termed Unicorn, that can simultaneously solve four tracking problems (SOT, MOT, VOS, MOTS) with a single network using the same model parameters. Due to the fragmented definitions of the object tracking problem itself, most existing trackers are developed to address a single task or a subset of the tasks, and overspecialize on the characteristics of specific tasks. By contrast, Unicorn provides a unified solution, adopting the same input, backbone, embedding, and head across all tracking tasks. For the first time, we accomplish the great unification of the tracking network architecture and learning paradigm. Unicorn performs on par with or better than its task-specific counterparts on 8 tracking datasets, including LaSOT, TrackingNet, MOT17, BDD100K, DAVIS16-17, MOTS20, and BDD100K MOTS. We believe that Unicorn will serve as a solid step towards the general vision model. Code is available at https://github.com/masterbin-iiau/unicorn.
Scene text spotting is of great importance to the computer vision community due to its wide variety of applications. Recent methods attempt to introduce linguistic knowledge for challenging recognition rather than pure visual classification. However, how to effectively model the linguistic rules in end-to-end deep networks remains a research challenge. In this paper, we argue that the limited capacity of language models comes from 1) implicit language modeling; 2) unidirectional feature representation; and 3) language model with noise input. Correspondingly, we propose an autonomous, bidirectional and iterative ABINet++ for scene text spotting. Firstly, the autonomous suggests enforcing explicitly language modeling by decoupling the recognizer into vision model and language model and blocking gradient flow between both models. Secondly, a novel bidirectional cloze network (BCN) as the language model is proposed based on bidirectional feature representation. Thirdly, we propose an execution manner of iterative correction for the language model which can effectively alleviate the impact of noise input. Finally, to polish ABINet++ in long text recognition, we propose to aggregate horizontal features by embedding Transformer units inside a U-Net, and design a position and content attention module which integrates character order and content to attend to character features precisely. ABINet++ achieves state-of-the-art performance on both scene text recognition and scene text spotting benchmarks, which consistently demonstrates the superiority of our method in various environments especially on low-quality images. Besides, extensive experiments including in English and Chinese also prove that, a text spotter that incorporates our language modeling method can significantly improve its performance both in accuracy and speed compared with commonly used attention-based recognizers.
Modeling temporal information for both detection and tracking in a unified framework has proven to be a promising solution for video instance segmentation (VIS). However, how to effectively incorporate temporal information into an online model remains an open problem. In this work, we propose a new online VIS paradigm named Instance As Identity (IAI), which models temporal information for both detection and tracking in an efficient way. In detail, IAI employs a novel identification module to explicitly predict an identification number for each tracked instance. To pass temporal information across frames, IAI utilizes an association module that combines current features and past embeddings. Notably, IAI can be integrated with different image models. We conduct extensive experiments on three VIS benchmarks. IAI outperforms all online competitors on YouTube-VIS-2019 (ResNet-101, 41.9 mAP) and YouTube-VIS-2021 (ResNet-50, 37.7 mAP). Surprisingly, on the more challenging OVIS, IAI achieves SOTA performance (20.3 mAP). Code is available at https://github.com/zfonemore/iai
Video-Text Pre-training (VTP) aims to learn transferable representations from large-scale web videos. To date, almost all existing VTP methods are limited to retrieval-based downstream tasks, e.g., video retrieval, whereas their transfer potential on localization-based tasks, e.g., temporal grounding, is under-explored. In this paper, we experimentally analyze and demonstrate the incompatibility of current VTP methods with localization tasks, and propose a novel Localization-oriented Video-Text Pre-training framework, dubbed LocVTP. Specifically, we perform fine-grained contrastive alignment as a complement to the coarse-grained one via a clip-word correspondence discovery scheme. To further enhance the temporal reasoning ability of the learned features, we propose a context projection head and a temporal-aware contrastive loss to perceive contextual relationships. Extensive experiments on four downstream tasks across six datasets demonstrate that our LocVTP achieves state-of-the-art performance on both retrieval-based and localization-based tasks. Furthermore, we conduct comprehensive ablation studies and thorough analyses to explore the optimal model designs and training strategies.
Interactive object understanding, or what we can do to objects and how, is a long-standing goal of computer vision. In this paper, we tackle this problem through observation of human hands in in-the-wild egocentric videos. We demonstrate that observing what human hands interact with, and how, can provide both the relevant data and the necessary supervision. Attending to hands readily localizes and stabilizes active objects for learning, and reveals the places where interactions with objects occur. Analyzing the hands shows what we can do to objects and how. We apply these basic principles on the EPIC-KITCHENS dataset and successfully learn state-sensitive features, as well as object affordances (regions of interaction and afforded grasps), purely by observing hands in egocentric videos.
speed among all existing VIS models, and achieves the best result among methods using single model on the YouTube-VIS dataset. For the first time, we demonstrate a much simpler and faster video instance segmentation framework built upon Transformers, achieving competitive accuracy. We hope that VisTR can motivate future research for more video understanding tasks.
The remarkable success of deep learning in various domains relies on the availability of large-scale annotated datasets. However, obtaining annotations is expensive and requires great effort, which is especially challenging for videos. Moreover, the use of human-generated annotations leads to models with biased learning and poor domain generalization and robustness. As an alternative, self-supervised learning provides a way for representation learning which does not require annotations and has shown promise in both image and video domains. Different from the image domain, learning video representations are more challenging due to the temporal dimension, bringing in motion and other environmental dynamics. This also provides opportunities for video-exclusive ideas that advance self-supervised learning in the video and multimodal domain. In this survey, we provide a review of existing approaches on self-supervised learning focusing on the video domain. We summarize these methods into four different categories based on their learning objectives: 1) pretext tasks, 2) generative learning, 3) contrastive learning, and 4) cross-modal agreement. We further introduce the commonly used datasets, downstream evaluation tasks, insights into the limitations of existing works, and the potential future directions in this area.
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. 
We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
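The self-attention mechanism named above as a key to the Transformer's success can be summarized in a few lines. This is the standard single-head scaled dot-product formulation, with randomly initialized weights purely for illustration: every sequence element attends to every other in one parallel step, which is how long-range dependencies are modeled.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over one sequence X of shape (T, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d_k))  # (T, T) attention weights, rows sum to 1
    return A @ V

# Toy example: sequence of T=4 tokens with model dimension d=8.
rng = np.random.default_rng(0)
T, d = 4, 8
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # -> (4, 8)
```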
Video recognition in an open and dynamic world is quite challenging, as we need to handle different settings such as close-set, long-tail, few-shot and open-set. By leveraging semantic knowledge from noisy text descriptions crawled from the Internet, we focus on the general video recognition (GVR) problem of solving different recognition tasks within a unified framework. The core contribution of this paper is twofold. First, we build a comprehensive video recognition benchmark of Kinetics-GVR, including four sub-task datasets to cover the mentioned settings. To facilitate the research of GVR, we propose to utilize external textual knowledge from the Internet and provide multi-source text descriptions for all action classes. Second, inspired by the flexibility of language representation, we present a unified visual-linguistic framework (VLG) to solve the problem of GVR by an effective two-stage training paradigm. Our VLG is first pre-trained on video and language datasets to learn a shared feature space, and then devises a flexible bi-modal attention head to collaborate high-level semantic concepts under different settings. Extensive results show that our VLG obtains the state-of-the-art performance under four settings. The superior performance demonstrates the effectiveness and generalization ability of our proposed framework. We hope our work makes a step towards the general video recognition and could serve as a baseline for future research. The code and models will be available at https://github.com/MCG-NJU/VLG.