Despite the recent success of video self-supervised learning models, much remains unknown about their generalization ability. In this paper, we study how sensitive video self-supervised learning is to the currently used benchmarks, and whether methods generalize beyond the canonical evaluation setting. We do this across four different factors of sensitivity: domain, samples, actions and tasks. Our study, comprising over 500 experiments on 7 video datasets, 9 self-supervised methods and 6 video understanding tasks, reveals that current benchmarks in video self-supervised learning are not good indicators of generalization along these sensitivity factors. Furthermore, we find that self-supervised methods lag behind vanilla supervised pre-training, especially when the domain shift is large and the amount of available downstream samples is low. From our analysis we distill the SEVERE benchmark, a subset of our experiments, and discuss its implications for evaluating the generalizability of representations obtained by existing and future self-supervised video learning methods.
We present MaCLR, a novel method to explicitly perform cross-modal self-supervised video representation learning from visual and motion modalities. Compared to previous video representation learning methods that mostly focus on learning motion cues implicitly from RGB inputs, MaCLR enriches the standard contrastive learning objective for RGB video clips with a cross-modal learning objective between a motion pathway and a visual pathway. We show that the representations learned with our MaCLR method focus more on foreground motion regions and thus generalize better to downstream tasks. To demonstrate this, we evaluate MaCLR on five datasets for both action recognition and action detection, and demonstrate state-of-the-art self-supervised performance on all datasets. Furthermore, we show that MaCLR representations can be as effective as representations learned with full supervision for action recognition on UCF101 and HMDB51, and even outperform supervised representations for action recognition on VidSitu and SSv2, and for action detection on AVA.
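As a rough illustration of the cross-modal objective described above, the sketch below implements a symmetric InfoNCE loss between paired RGB and motion-pathway embeddings; the function name, tensor shapes and temperature are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a cross-modal InfoNCE objective between an RGB pathway
# and a motion pathway. Names and the temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def cross_modal_nce(rgb_emb: torch.Tensor, motion_emb: torch.Tensor, tau: float = 0.1):
    """rgb_emb, motion_emb: (N, D) embeddings of the same N clips in the two modalities."""
    rgb = F.normalize(rgb_emb, dim=1)
    mot = F.normalize(motion_emb, dim=1)
    logits = rgb @ mot.t() / tau                      # (N, N) scaled cosine similarities
    targets = torch.arange(rgb.size(0), device=rgb.device)
    # Symmetric loss: RGB->motion and motion->RGB matching of paired clips.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```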
This work presents a self-supervised learning framework named TeG to explore temporal granularity in learning video representations. In TeG, we sample a long clip from a video and a short clip that lies inside the long clip, and then extract their dense temporal embeddings. The training objective consists of two parts: a fine-grained temporal learning objective to maximize the similarity between corresponding temporal embeddings in the short clip and the long clip, and a persistent temporal learning objective to pull together the global embeddings of the two clips. Our study reveals the impact of temporal granularity with three major findings. 1) Different video tasks may require features of different temporal granularities. 2) Intriguingly, some tasks that are widely believed to require temporal awareness can actually be well addressed by temporally persistent features. 3) The flexibility of TeG yields state-of-the-art results on 8 video benchmarks, outperforming supervised pre-training in most cases.
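The two objectives can be pictured with the following simplified sketch, assuming one temporal embedding per frame and that `offset` marks where the short clip starts inside the long clip; the names and the temperature are illustrative, not TeG's actual implementation.

```python
# Rough sketch of a fine-grained and a persistent temporal objective.
import torch
import torch.nn.functional as F

def teg_style_losses(long_feats, short_feats, offset, tau=0.1):
    """long_feats: (T_long, D), short_feats: (T_short, D) dense temporal embeddings."""
    # Fine-grained: each short-clip timestep should match its counterpart
    # inside the long clip more than any other long-clip timestep.
    s = F.normalize(short_feats, dim=1)
    l = F.normalize(long_feats, dim=1)
    logits = s @ l.t() / tau                               # (T_short, T_long)
    targets = offset + torch.arange(s.size(0), device=s.device)
    fine_grained = F.cross_entropy(logits, targets)

    # Persistent: global (time-averaged) embeddings of the two clips agree.
    g_short = F.normalize(short_feats.mean(dim=0), dim=0)
    g_long = F.normalize(long_feats.mean(dim=0), dim=0)
    persistent = 1.0 - torch.dot(g_short, g_long)          # cosine distance
    return fine_grained, persistent
```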
We propose a self-supervised algorithm to learn representations from egocentric video data. Recently, significant efforts have been made to capture humans interacting with their own environments as they go about their daily activities. As a result, several large egocentric datasets of interaction-rich multi-modal data have emerged. However, learning representations from such videos can be challenging. First, given the uncurated nature of long-form continuous videos, learning effective representations requires focusing on the moments in time when interactions take place. Second, visual representations of daily activities should be sensitive to changes in the state of the environment, yet current successful multi-modal learning frameworks encourage representations that are invariant over time. To address these challenges, we leverage audio signals to identify moments of likely interaction that are conducive to better learning. We also propose a novel self-supervised objective that learns from audible state changes caused by interactions. We validate these contributions extensively on two large-scale egocentric datasets, EPIC-Kitchens-100 and the recently released Ego4D, and show improvements on several downstream tasks, including action recognition, long-term action anticipation, and object state change classification.
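As a loose illustration of using audio to find likely interaction moments, the snippet below ranks fixed-length windows of a clip's waveform by short-time energy; the window length and top-k selection are assumptions, not the paper's actual detection procedure.

```python
# Hedged sketch: pick candidate "moments of interaction" in a long untrimmed
# video by short-time audio energy, to sample preferentially during training.
import numpy as np

def audible_moments(audio: np.ndarray, sr: int, win_s: float = 1.0, top_k: int = 8):
    """audio: mono waveform; returns start times (seconds) of the top-k loudest windows."""
    win = int(win_s * sr)
    n = len(audio) // win
    energy = np.array([np.mean(audio[i * win:(i + 1) * win] ** 2) for i in range(n)])
    top = np.argsort(energy)[::-1][:top_k]              # indices of the loudest windows
    return sorted(float(i) * win_s for i in top)
```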
We present a self-supervised framework for learning audio-visual representations. A novel notion is introduced in our framework: in addition to learning intra-modal and standard "synchronous" cross-modal relations, CrissCross also learns "asynchronous" cross-modal relations. We show that by relaxing the temporal synchronicity between the audio and visual modalities, the network learns strong time-invariant representations. Our experiments show that strong augmentations of both the audio and visual modalities, together with this relaxation of cross-modal temporal synchronicity, optimize performance. To pretrain our proposed framework, we use datasets of varying sizes: Kinetics-Sound, Kinetics-400, and AudioSet. The learned representations are evaluated on a number of downstream tasks, namely action recognition, sound classification, and retrieval. CrissCross shows state-of-the-art performance on action recognition (UCF101 and HMDB51) and sound classification (ESC-50). The code and pretrained models will be made publicly available.
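A hedged sketch of how synchronous and asynchronous cross-modal positives could be combined is given below: each video clip is pulled towards both its time-aligned audio and audio sampled from a different moment of the same video; tensor names and the temperature are illustrative assumptions.

```python
# Illustrative combination of synchronous and relaxed (asynchronous) audio-visual pairs.
import torch
import torch.nn.functional as F

def sync_async_loss(vid, aud_sync, aud_async, tau: float = 0.1):
    """vid, aud_sync, aud_async: (N, D) embeddings; row i of each comes from video i."""
    v = F.normalize(vid, dim=1)
    targets = torch.arange(v.size(0), device=v.device)
    loss = 0.0
    for aud in (aud_sync, aud_async):                 # time-aligned and time-shifted audio
        a = F.normalize(aud, dim=1)
        loss = loss + F.cross_entropy(v @ a.t() / tau, targets)
    return loss / 2
```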
We present a self-supervised Contrastive Video Representation Learning (CVRL) method to learn spatiotemporal visual representations from unlabeled videos. Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away. We study what makes for good data augmentations for video self-supervised learning and find that both spatial and temporal information are crucial. We carefully design data augmentations involving spatial and temporal cues. Concretely, we propose a temporally consistent spatial augmentation method to impose strong spatial augmentations on each frame of the video while maintaining the temporal consistency across frames. We also propose a sampling-based temporal augmentation method to avoid overly enforcing invariance on clips that are distant in time. On Kinetics-600, a linear classifier trained on the representations learned by CVRL achieves 70.4% top-1 accuracy with a 3D-ResNet-50 (R3D-50) backbone, outperforming ImageNet supervised pre-training by 15.7% and SimCLR unsupervised pre-training by 18.8% using the same inflated R3D-50. The performance of CVRL can be further improved to 72.9% with a larger R3D-152 (2× filters) backbone, significantly closing the gap between unsupervised and supervised video representation learning. Our code and models will be available at https://github.com/tensorflow/models/tree/master/official/.
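The temporally consistent spatial augmentation can be sketched as follows: crop and flip parameters are drawn once per clip and then applied identically to every frame. The exact parameter ranges here are placeholders rather than CVRL's published recipe.

```python
# Sketch of a temporally consistent spatial augmentation: strong per-clip
# randomness, but identical parameters for all frames of the clip.
import random
import torch
import torchvision.transforms.functional as TF

def consistent_spatial_aug(clip: torch.Tensor, out_size: int = 224):
    """clip: (T, C, H, W) frames of one video clip."""
    _, _, h, w = clip.shape
    crop = min(h, w) // 2 + random.randint(0, min(h, w) // 2)   # one crop size per clip
    top = random.randint(0, h - crop)
    left = random.randint(0, w - crop)
    flip = random.random() < 0.5
    frames = []
    for frame in clip:                                          # same parameters for every frame
        f = TF.resized_crop(frame, top, left, crop, crop, [out_size, out_size])
        frames.append(TF.hflip(f) if flip else f)
    return torch.stack(frames)
```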
Visual and audio modalities are highly correlated, yet they contain different information. Their strong correlation makes it possible to predict the semantics of one from the other with good accuracy. Their intrinsic differences make cross-modal prediction a potentially more rewarding pretext task for self-supervised learning of video and audio representations compared to within-modality learning. Based on this intuition, we propose Cross-Modal Deep Clustering (XDC), a novel self-supervised method that leverages unsupervised clustering in one modality (e.g., audio) as a supervisory signal for the other modality (e.g., video). This cross-modal supervision helps XDC utilize the semantic correlation and the differences between the two modalities. Our experiments show that XDC outperforms single-modality clustering and other multi-modal variants. XDC achieves state-of-the-art accuracy among self-supervised methods on multiple video and audio benchmarks. Most importantly, our video model pretrained on large-scale unlabeled data significantly outperforms the same model pretrained with full-supervision on ImageNet and Kinetics for action recognition on HMDB51 and UCF101. To the best of our knowledge, XDC is the first self-supervised learning method that outperforms large-scale fully-supervised pretraining for action recognition on the same architecture.
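A minimal sketch of the cross-modal pseudo-labelling step is shown below: audio embeddings are clustered and the cluster assignments serve as classification targets for the video network. Clustering a single batch and the value of k are simplifications for illustration; XDC itself clusters features over the whole dataset.

```python
# Sketch of cross-modal pseudo-labelling: cluster one modality (audio) and use
# the cluster ids as targets for the other modality (video).
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def xdc_style_step(audio_emb: torch.Tensor, video_logits: torch.Tensor, k: int = 256):
    """audio_emb: (N, D) audio features; video_logits: (N, k) predictions of the video net."""
    with torch.no_grad():
        # In practice clustering is run over the full dataset, not one batch.
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(audio_emb.cpu().numpy())
        targets = torch.as_tensor(labels, device=video_logits.device, dtype=torch.long)
    # Train the video encoder to predict the audio-derived cluster ids.
    return F.cross_entropy(video_logits, targets)
```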
Recent action recognition models have achieved impressive results by integrating objects, their locations, and their interactions. However, obtaining dense structured annotations for every frame is tedious and time-consuming, making these methods expensive to train and less scalable. At the same time, if a small set of annotated images is available, either within or outside the domain of interest, how can we leverage these images for a downstream video task? We propose a learning framework, SViT for short, which demonstrates how structure available on only a small number of images during training can improve a video model. SViT relies on two key insights. First, since both images and videos contain structured information, we enrich a transformer model with a set of object tokens that can be used across images and videos. Second, the scene representations of individual frames in a video should "align" with those of still images. This is achieved via a frame-clip consistency loss, which ensures the flow of structured information between images and videos. We explore a particular instantiation of scene structure, namely a hand-object graph, consisting of hands and objects with their locations as nodes, and the physical relation of contact/no-contact as edges. SViT shows strong performance improvements on multiple video understanding tasks and datasets, and it won first place in the Ego4D CVPR'22 Object State Localization challenge. For code and pretrained models, visit the project page at https://eladb3.github.io/svit/
Modern self-supervised learning algorithms typically enforce persistency of instance representations across views. While very effective for learning holistic image and video representations, such an approach becomes sub-optimal for learning spatio-temporally fine-grained features in videos, where scenes and instances evolve through space and time. In this paper, we introduce the Contextualized Spatio-Temporal Contrastive Learning (ConST-CL) framework to effectively learn spatio-temporally fine-grained representations with self-supervision. We first design a region-based self-supervised pretext task which requires the model to learn to transform instance representations from one view into another, guided by context features. Furthermore, we introduce a simple network design that effectively reconciles the simultaneous learning of holistic and local representations. We evaluate our learned representations on a variety of downstream tasks, and ConST-CL achieves state-of-the-art results on four datasets. For spatio-temporal action localization, ConST-CL achieves 39.4% mAP with ground-truth boxes and 30.5% mAP with detected boxes on the AVA-Kinetics validation set. For object tracking, ConST-CL achieves 78.1% precision and 55.2% success score on OTB2015. Furthermore, ConST-CL achieves 94.8% and 71.9% top-1 fine-tuning accuracy on the video action recognition datasets UCF101 and HMDB51, respectively. We plan to release our code and models to the public.
Learning visual representations through self-supervision is an extremely challenging task, since the network needs to sieve out relevant patterns without the active guidance that supervision provides. This is typically achieved through heavy data augmentation, large-scale datasets, and prohibitive amounts of compute. Video self-supervised learning (SSL) faces additional challenges: video datasets are typically not as large as image datasets, compute is an order of magnitude larger, and the amount of spurious patterns the optimizer has to sieve through is multiplied several fold. Directly learning self-supervised representations from video data may therefore result in sub-optimal performance. To address this, we propose to leverage a strong model grounded in self- or language supervision within a video representation learning framework, and to learn strong spatial and temporal information without relying on video-labeled data. To this end, we modify the typical video-based SSL design and objectives to encourage the video encoder to subsume the semantic content of an image-based model trained on a general domain. The proposed algorithm is shown to learn more efficiently (i.e., in fewer epochs and with smaller batches) and obtains new state-of-the-art performance on standard downstream tasks among single-modality SSL methods.
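One simple way to picture the "subsume" objective is a distillation-style loss that pulls the video encoder's clip embedding towards the frozen image model's averaged per-frame features, as sketched below; the loss form and tensor names are assumptions, not the paper's exact objective.

```python
# Hedged sketch: distil a frozen image/language model's per-frame semantics
# into the video encoder's clip embedding via a cosine-distance loss.
import torch
import torch.nn.functional as F

def subsume_style_loss(video_feat: torch.Tensor, frame_targets: torch.Tensor):
    """video_feat: (B, D) clip embeddings; frame_targets: (B, T, D) frozen image-model features."""
    target = F.normalize(frame_targets.mean(dim=1), dim=1)   # average the frozen per-frame features
    pred = F.normalize(video_feat, dim=1)
    return (1.0 - (pred * target).sum(dim=1)).mean()         # cosine distance to the image-model target
```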
The objective of this paper is visual-only self-supervised video representation learning. We make the following contributions: (i) we investigate the benefit of adding semantic-class positives to instance-based Info Noise Contrastive Estimation (InfoNCE) training, showing that this form of supervised contrastive learning leads to a clear improvement in performance; (ii) we propose a novel self-supervised co-training scheme to improve the popular InfoNCE loss, exploiting the complementary information from different views, RGB streams and optical flow, of the same data source by using one view to obtain positive class samples for the other; (iii) we thoroughly evaluate the quality of the learnt representation on two different downstream tasks: action recognition and video retrieval. In both cases, the proposed approach demonstrates state-of-the-art or comparable performance with other self-supervised approaches, whilst being significantly more efficient to train, i.e. requiring far less training data to achieve similar performance.
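Point (ii) can be illustrated with the following sketch, in which nearest neighbours in the optical-flow embedding space are used as extra positives for an RGB InfoNCE-style loss (the full scheme alternates between the two views); the number of mined positives and the temperature are illustrative assumptions.

```python
# Sketch of co-training: positives for the RGB loss are mined from the flow view.
import torch
import torch.nn.functional as F

def cotraining_nce(rgb_emb, flow_emb, topk: int = 5, tau: float = 0.07):
    """rgb_emb, flow_emb: (N, D) embeddings of the same N clips in the two views."""
    rgb = F.normalize(rgb_emb, dim=1)
    flow = F.normalize(flow_emb, dim=1)
    sim_rgb = rgb @ rgb.t() / tau                              # RGB-space similarities
    # Mine positives with the *other* view: top-k most similar clips in flow space.
    nn_idx = (flow @ flow.t()).topk(topk + 1, dim=1).indices   # includes the clip itself
    pos_mask = torch.zeros_like(sim_rgb)
    pos_mask.scatter_(1, nn_idx, 1.0)
    log_prob = sim_rgb - torch.logsumexp(sim_rgb, dim=1, keepdim=True)
    # Average log-likelihood over all mined positives of each anchor.
    return -(pos_mask * log_prob).sum(dim=1).div(pos_mask.sum(dim=1)).mean()
```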
There is a natural correlation between the visual and auditive elements of a video. In this work we leverage this connection to learn general and effective models for both audio and video analysis from self-supervised temporal synchronization. We demonstrate that a calibrated curriculum learning scheme, a careful choice of negative examples, and the use of a contrastive loss are critical ingredients to obtain powerful multi-sensory representations from models optimized to discern temporal synchronization of audio-video pairs. Without further finetuning, the resulting audio features achieve performance superior or comparable to the state-of-the-art on established audio classification benchmarks (DCASE2014 and ESC-50). At the same time, our visual subnet provides a very effective initialization to improve the accuracy of video-based action recognition models: compared to learning from scratch, our self-supervised pretraining yields a remarkable gain of +19.9% in action recognition accuracy on UCF101 and a boost of +17.7% on HMDB51.
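A minimal sketch of learning from temporal synchronization is given below: a contrastive margin loss pulls embeddings of in-sync audio-video pairs together and pushes out-of-sync or mismatched pairs apart; the margin and distance choice are assumptions rather than the paper's exact formulation.

```python
# Illustrative contrastive loss over synchronized vs. non-synchronized audio-video pairs.
import torch
import torch.nn.functional as F

def sync_contrastive_loss(video_emb, audio_emb, in_sync, margin: float = 0.99):
    """video_emb, audio_emb: (N, D); in_sync: (N,) 1.0 if the pair is synchronized, else 0.0."""
    d = F.pairwise_distance(F.normalize(video_emb, dim=1), F.normalize(audio_emb, dim=1))
    pos = in_sync * d.pow(2)                                   # pull synced pairs together
    neg = (1.0 - in_sync) * F.relu(margin - d).pow(2)          # push negatives beyond the margin
    return (pos + neg).mean()
```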
The remarkable success of deep learning in various domains relies on the availability of large-scale annotated datasets. However, obtaining annotations is expensive and requires great effort, which is especially challenging for videos. Moreover, the use of human-generated annotations leads to models with biased learning and poor domain generalization and robustness. As an alternative, self-supervised learning provides a way for representation learning which does not require annotations and has shown promise in both image and video domains. Different from the image domain, learning video representations is more challenging due to the temporal dimension, bringing in motion and other environmental dynamics. This also provides opportunities for video-exclusive ideas that advance self-supervised learning in the video and multimodal domain. In this survey, we provide a review of existing approaches on self-supervised learning focusing on the video domain. We summarize these methods into four different categories based on their learning objectives: 1) pretext tasks, 2) generative learning, 3) contrastive learning, and 4) cross-modal agreement. We further introduce the commonly used datasets, downstream evaluation tasks, insights into the limitations of existing works, and the potential future directions in this area.
Previous work on action representation learning focused on global representations for short video clips. In contrast, many practical applications, such as video alignment, strongly demand learning the intensive representation of long videos. In this paper, we introduce a new framework of contrastive action representation learning (CARL) to learn frame-wise action representation in a self-supervised or weakly-supervised manner, especially for long videos. Specifically, we introduce a simple but effective video encoder that considers both spatial and temporal context by combining convolution and transformer. Inspired by the recent massive progress in self-supervised learning, we propose a new sequence contrast loss (SCL) applied to two related views obtained by expanding a series of spatio-temporal data in two versions. One is the self-supervised version that optimizes embedding space by minimizing KL-divergence between sequence similarity of two augmented views and prior Gaussian distribution of timestamp distance. The other is the weakly-supervised version that builds more sample pairs among videos using video-level labels by dynamic time warping (DTW). Experiments on FineGym, PennAction, and Pouring datasets show that our method outperforms previous state-of-the-art by a large margin for downstream fine-grained action classification and even faster inference. Surprisingly, although without training on paired videos like in previous works, our self-supervised version also shows outstanding performance in video alignment and fine-grained frame retrieval tasks.
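The self-supervised SCL can be approximated by the following sketch: the softmax over frame-to-frame similarities between the two views is pushed, via KL divergence, towards a Gaussian prior over timestamp distance; the temperature and Gaussian width are illustrative assumptions.

```python
# Simplified sequence-contrast-style loss: match the similarity distribution of
# view-1 frames over view-2 frames to a Gaussian prior on timestamp distance.
import torch
import torch.nn.functional as F

def sequence_contrast_loss(z1, z2, t1, t2, tau: float = 0.1, sigma: float = 1.0):
    """z1: (T1, D), z2: (T2, D) frame embeddings; t1: (T1,), t2: (T2,) their timestamps."""
    sim = F.normalize(z1, dim=1) @ F.normalize(z2, dim=1).t() / tau      # (T1, T2)
    log_p = F.log_softmax(sim, dim=1)
    dist2 = (t1[:, None] - t2[None, :]) ** 2                             # squared timestamp distances
    prior = F.softmax(-dist2 / (2 * sigma ** 2), dim=1)                  # Gaussian prior per frame
    return F.kl_div(log_p, prior, reduction="batchmean")
```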
Transformer-based architectures have become competitive across a variety of visual domains, most notably images and videos. While prior work has studied these modalities in isolation, having a common architecture suggests that one can train a single unified model for multiple visual modalities. Prior attempts at unified modeling typically use architectures tailored for vision tasks, or obtain worse performance compared to single-modality models. In this work, we show that masked autoencoding can be used to train a simple Vision Transformer on images and videos, without requiring any labeled data. This single model learns visual representations that are comparable to or better than single-modality representations on both image and video benchmarks, while using a much simpler architecture. In particular, our single pretrained model can be finetuned to achieve 86.5% on ImageNet and 75.3% on the challenging Something Something-v2 video benchmark. Furthermore, the model can be learned by dropping 90% of the image patches and 95% of the video patches, enabling extremely fast training.
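The patch-dropping step that makes this training fast can be sketched as below: only a small random subset of patch tokens (e.g. 10% for images, 5% for videos) is kept and fed to the encoder; the helper name and ratios are placeholders for illustration.

```python
# Sketch of random patch dropping for masked-autoencoder-style pretraining.
import torch

def random_patch_keep(tokens: torch.Tensor, keep_ratio: float):
    """tokens: (B, N, D) patch tokens. Returns the kept tokens and their indices."""
    B, N, _ = tokens.shape
    n_keep = max(1, int(N * keep_ratio))
    noise = torch.rand(B, N, device=tokens.device)
    keep_idx = noise.argsort(dim=1)[:, :n_keep]                 # random subset per sample
    kept = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
    return kept, keep_idx

# e.g. images: random_patch_keep(img_tokens, 0.10); videos: random_patch_keep(vid_tokens, 0.05)
```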
Distinguishing whether an action is performed as intended or whether an intended action fails is an important skill that not only humans possess, but that is also important for intelligent systems operating in human environments. However, recognizing whether an action is unintentional, or anticipating whether an action will fail, is difficult due to the lack of annotated data. While videos of unintentional or failed actions can be found on the internet, high annotation costs are a major bottleneck for learning networks for these tasks. In this work, we therefore study the problem of self-supervised representation learning for unintentional action prediction. While previous works learn the representation based on a local temporal neighborhood, we show that the global context of a video is needed to learn a good representation for the three downstream tasks: unintentional action classification, localization, and anticipation. In the supplementary material, we show that the learned representation can also be used for detecting anomalies in videos.
Video self-supervised learning is a challenging task, which requires significant expressive power from the model to leverage rich spatial-temporal knowledge and generate effective supervisory signals from large amounts of unlabeled videos. However, existing methods fail to increase the temporal diversity of unlabeled videos and ignore explicitly and elaborately modeling multi-scale temporal dependencies. To overcome these limitations, we take advantage of the multi-scale temporal dependencies within videos and propose a novel video self-supervised learning framework named Temporal Contrastive Graph Learning (TCGL), which jointly models inter-snippet and intra-snippet temporal dependencies for temporal representation learning with a hybrid graph contrastive learning strategy. Specifically, a Spatial-Temporal Knowledge Discovering (STKD) module is first introduced to extract motion-enhanced spatial-temporal representations from videos based on frequency-domain analysis with the discrete cosine transform. To explicitly model the multi-scale temporal dependencies of unlabeled videos, our TCGL integrates prior knowledge about frame and snippet orders into graph structures, i.e., intra-/inter-snippet Temporal Contrastive Graphs (TCG). Dedicated contrastive learning modules are then designed to maximize the agreement between nodes in different graph views. To generate supervisory signals for unlabeled videos, we introduce an Adaptive Snippet Order Prediction (ASOP) module which leverages the relational knowledge among video snippets to learn global context representations and adaptively recalibrate channel-wise features. Experimental results demonstrate the superiority of our TCGL over state-of-the-art methods on large-scale action recognition and video retrieval benchmarks.
Videos are a rich source of multi-modal supervision. In this work, we learn representations using self-supervision by leveraging three modalities naturally present in videos: visual, audio and language streams. To this end, we introduce the notion of a multimodal versatile network - a network that can ingest multiple modalities and whose representations enable downstream tasks in multiple modalities. In particular, we explore how best to combine the modalities, such that fine-grained representations of the visual and audio modalities can be maintained, whilst also integrating text into a common embedding. Driven by versatility, we also introduce a novel process of deflation, so that the networks can be effortlessly applied to the visual data in the form of video or a static image. We demonstrate how such networks trained on large collections of unlabelled video data can be applied on video, video-text, image and audio tasks. Equipped with these representations, we obtain state-of-the-art performance on multiple challenging benchmarks including UCF101, HMDB51, Kinetics600, Audioset and ESC-50 when compared to previous self-supervised work. Our models are publicly available [1, 2, 3].
We introduce a novel self-supervised contrastive learning method to learn representations from unlabelled videos. Existing approaches ignore the specifics of input distortions, e.g., by learning invariance to temporal transformations. Instead, we argue that video representations should preserve video dynamics and reflect temporal manipulations of the input. We therefore exploit novel constraints to build representations that are equivariant to temporal transformations and better capture video dynamics. In our method, the relative temporal transformation between augmented clips of a video is encoded in a vector and contrasted with other transformation vectors. To support temporal equivariance learning, we additionally propose the self-supervised classification of two clips of a video into 1. overlapping, 2. ordered, or 3. unordered. Our experiments show that time-equivariant representations achieve state-of-the-art results in video retrieval and action recognition benchmarks on UCF101, HMDB51, and Diving48.
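The auxiliary three-way pretext task can be pictured as a small classification head over pairs of clip embeddings, as in the hedged sketch below; the head architecture is an assumption, and the ground-truth label follows from how the two clips were sampled.

```python
# Illustrative head for classifying the temporal relation of two clips of a video
# into {overlapping, ordered, unordered}.
import torch
import torch.nn as nn

class ClipRelationHead(nn.Module):
    def __init__(self, dim: int, hidden: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, clip_a: torch.Tensor, clip_b: torch.Tensor) -> torch.Tensor:
        # Concatenate the two clip embeddings and classify their temporal relation.
        return self.mlp(torch.cat([clip_a, clip_b], dim=-1))   # logits over the 3 relations
```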
The field of surgical computer vision has undergone considerable breakthroughs in recent years with the rising popularity of deep neural network-based methods. However, standard fully-supervised approaches for training require large amounts of annotated data, imposing a prohibitively high cost, especially in the clinical domain. Self-supervised learning (SSL) methods, which have begun to gain traction in the general computer vision community, represent a potential solution to these annotation costs, allowing useful representations to be learned from unlabeled data alone. Still, the effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and underexplored. In this work, we address this critical need by investigating four state-of-the-art SSL methods (MoCo v2, SimCLR, DINO, SwAV) in the context of surgical computer vision. We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding: phase recognition and tool presence detection. We examine their parameterization and then their behavior with respect to the amount of training data in semi-supervised settings. Correctly transferring these methods to surgery, as described and conducted in this work, leads to substantial performance gains over generic uses of SSL - up to 7% on phase recognition and 20% on tool presence detection - as well as over state-of-the-art semi-supervised phase recognition approaches, by up to 14%. The code will be made available at https://github.com/camma-public/selfsupsurg.