We address the problem of extracting key steps from unlabeled procedural videos, motivated by the potential of Augmented Reality (AR) headsets to revolutionize job training and performance. We decompose the problem into two steps: representation learning and key step extraction. We employ self-supervised representation learning via a training strategy that adapts off-the-shelf video features using a temporal module. Training implements self-supervised learning losses involving multiple cues such as appearance, motion, and pose trajectories extracted from videos to learn generalizable representations. Our method extracts key steps via a tunable algorithm that clusters the representations extracted from procedural videos. We quantitatively evaluate our approach on key step localization and also demonstrate the effectiveness of the extracted representations on related downstream tasks such as phase classification. Qualitative results demonstrate that the extracted key steps are meaningful and succinctly represent the procedural tasks.
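The abstract above describes clustering learned representations to obtain key steps. Below is a minimal illustrative sketch of that idea, assuming per-frame embeddings are already available; the feature dimension, cluster count, and random inputs are placeholders rather than the paper's actual configuration.

```python
# Cluster per-frame embeddings of a procedural video with k-means and return,
# for each cluster, the frame closest to its centroid as a candidate key step.
import numpy as np
from sklearn.cluster import KMeans

def extract_key_steps(frame_embeddings: np.ndarray, num_steps: int):
    """frame_embeddings: (num_frames, dim) features from a temporal module."""
    kmeans = KMeans(n_clusters=num_steps, n_init=10, random_state=0)
    labels = kmeans.fit_predict(frame_embeddings)
    key_frames = []
    for c in range(num_steps):
        idx = np.where(labels == c)[0]
        # Representative frame: the member closest to the cluster centroid.
        dists = np.linalg.norm(frame_embeddings[idx] - kmeans.cluster_centers_[c], axis=1)
        key_frames.append(int(idx[np.argmin(dists)]))
    return sorted(key_frames)  # order candidate key steps by their position in the video

if __name__ == "__main__":
    emb = np.random.randn(300, 128).astype(np.float32)  # stand-in for learned features
    print(extract_key_steps(emb, num_steps=5))
```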
The remarkable success of deep learning in various domains relies on the availability of large-scale annotated datasets. However, obtaining annotations is expensive and requires great effort, which is especially challenging for videos. Moreover, the use of human-generated annotations leads to models with biased learning and poor domain generalization and robustness. As an alternative, self-supervised learning provides a way for representation learning which does not require annotations and has shown promise in both image and video domains. Different from the image domain, learning video representations is more challenging due to the temporal dimension, which brings in motion and other environmental dynamics. This also provides opportunities for video-exclusive ideas that advance self-supervised learning in the video and multimodal domain. In this survey, we provide a review of existing approaches on self-supervised learning focusing on the video domain. We summarize these methods into four different categories based on their learning objectives: 1) pretext tasks, 2) generative learning, 3) contrastive learning, and 4) cross-modal agreement. We further introduce the commonly used datasets, downstream evaluation tasks, insights into the limitations of existing works, and the potential future directions in this area.
We present MaCLR, a novel method to explicitly perform cross-modal self-supervised video representation learning from visual and motion modalities. Compared with previous video representation learning methods that mostly focus on learning motion cues implicitly from RGB inputs, MaCLR enriches the standard contrastive learning objective for RGB video clips with a cross-modal learning objective between a motion pathway and a visual pathway. We show that representations learned with our MaCLR method focus more on foreground motion regions and therefore generalize better to downstream tasks. To demonstrate this, we evaluate MaCLR on five datasets for action recognition and action detection, and show state-of-the-art self-supervised performance on all of them. Furthermore, we show that MaCLR representations can be as effective as representations learned with full supervision for action recognition on UCF101 and HMDB51, and can even outperform supervised representations for action recognition on VidSitu and SSv2, and for action detection on AVA.
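As a rough illustration of the cross-modal objective described above, the sketch below pairs clip embeddings from a visual (RGB) pathway and a motion (flow) pathway of the same clip as positives and treats other clips in the batch as negatives; the embedding size and temperature are assumptions, not MaCLR's actual design.

```python
# Symmetric InfoNCE between visual-pathway and motion-pathway clip embeddings.
import torch
import torch.nn.functional as F

def cross_modal_nce(z_rgb: torch.Tensor, z_flow: torch.Tensor, tau: float = 0.1):
    """z_rgb, z_flow: (batch, dim) clip embeddings from the two pathways."""
    z_rgb = F.normalize(z_rgb, dim=1)
    z_flow = F.normalize(z_flow, dim=1)
    logits = z_rgb @ z_flow.t() / tau        # (batch, batch) similarity matrix
    targets = torch.arange(z_rgb.size(0))    # matching indices are positives
    # Symmetric loss: RGB -> flow and flow -> RGB.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

z_v = torch.randn(8, 128)   # visual-pathway embeddings (placeholder)
z_m = torch.randn(8, 128)   # motion-pathway embeddings (placeholder)
print(cross_modal_nce(z_v, z_m).item())
```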
For human action understanding, a popular research direction is to analyze short video clips with explicit semantic content, such as jumping and drinking. However, methods for understanding short semantic actions do not directly carry over to long-form human dynamics such as dancing, where even semantic annotation becomes challenging. Meanwhile, the natural language processing (NLP) community has addressed a similar scarcity challenge through large-scale pre-training, which improves several downstream tasks with a single model. In this work, we study how to segment and cluster videos in a self-supervised manner, namely acton discovery, the main obstacle towards video tokenization. We propose a two-stage framework that first obtains frame-wise representations by contrasting two augmented views of video frames conditioned on their temporal context, and then clusters the frame-wise representations across a video collection with k-means. Actons are then automatically extracted by forming continuous motion sequences from frames within the same cluster. We evaluate the frame-wise representation learning step by Kendall's Tau and the lexicon building step by normalized mutual information and language entropy. We also study three applications of this tokenization: genre classification, action segmentation, and action composition. On the AIST++ and PKU-MMD datasets, actons bring significant performance improvements compared to several baselines.
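A minimal sketch of the second stage of the pipeline described above, assuming frame-wise representations are already learned: cluster them with k-means, then merge contiguous frames that fall in the same cluster into candidate actons. The cluster count and random features are illustrative only.

```python
# Segment a video into actons: runs of consecutive frames sharing a k-means cluster.
import numpy as np
from sklearn.cluster import KMeans

def discover_actons(frame_features: np.ndarray, num_clusters: int):
    labels = KMeans(n_clusters=num_clusters, n_init=10, random_state=0).fit_predict(frame_features)
    actons, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            actons.append((start, t - 1, int(labels[start])))  # (first frame, last frame, cluster id)
            start = t
    return actons

feats = np.random.randn(200, 64)  # stand-in for frame-wise representations
print(discover_actons(feats, num_clusters=6)[:5])
```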
Transformer models have shown great success handling long-range interactions, making them a promising tool for modeling video. However, they lack inductive biases and scale quadratically with input length. These limitations are further exacerbated when dealing with the high dimensionality introduced by the temporal dimension. While there are surveys analyzing the advances of Transformers for vision, none focus on an in-depth analysis of video-specific designs. In this survey, we analyze the main contributions and trends of works leveraging Transformers to model video. Specifically, we first delve into how videos are handled at the input level. Then, we study the architectural changes made to deal with video more efficiently, reduce redundancy, re-introduce useful inductive biases, and capture long-term temporal dynamics. In addition, we provide an overview of different training regimes and explore effective self-supervised learning strategies for video. Finally, we conduct a performance comparison on the most common benchmark for Video Transformers (i.e., action classification), finding them to outperform 3D ConvNets even with less computational complexity.
Visual and audio modalities are highly correlated, yet they contain different information. Their strong correlation makes it possible to predict the semantics of one from the other with good accuracy. Their intrinsic differences make cross-modal prediction a potentially more rewarding pretext task for self-supervised learning of video and audio representations compared to within-modality learning. Based on this intuition, we propose Cross-Modal Deep Clustering (XDC), a novel self-supervised method that leverages unsupervised clustering in one modality (e.g., audio) as a supervisory signal for the other modality (e.g., video). This cross-modal supervision helps XDC utilize the semantic correlation and the differences between the two modalities. Our experiments show that XDC outperforms single-modality clustering and other multi-modal variants. XDC achieves state-of-the-art accuracy among self-supervised methods on multiple video and audio benchmarks. Most importantly, our video model pretrained on large-scale unlabeled data significantly outperforms the same model pretrained with full supervision on ImageNet and Kinetics for action recognition on HMDB51 and UCF101. To the best of our knowledge, XDC is the first self-supervised learning method that outperforms large-scale fully-supervised pretraining for action recognition on the same architecture.
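To make the cross-modal supervision concrete, here is a hedged sketch of one direction of the XDC idea: k-means cluster assignments computed on audio features serve as classification targets for a video model (the full method alternates roles between modalities). The tiny encoder, feature dimensions, and cluster count are placeholders, not the paper's models.

```python
# Train a video classifier on pseudo-labels obtained by clustering audio features.
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans

K = 16  # number of pseudo-classes (assumed)
video_model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, K))

def xdc_video_step(video_feats: torch.Tensor, audio_feats: torch.Tensor):
    """video_feats: (N, 256) video features; audio_feats: (N, D) audio features."""
    # 1) Cluster the *audio* features offline to obtain pseudo-labels.
    pseudo = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(audio_feats.numpy())
    targets = torch.as_tensor(pseudo, dtype=torch.long)
    # 2) Train the *video* model to predict the audio-derived cluster labels.
    logits = video_model(video_feats)
    return F.cross_entropy(logits, targets)

print(xdc_video_step(torch.randn(64, 256), torch.randn(64, 40)).item())
```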
Figure 1: We describe an efficient approach to learn visual representations from misaligned and noisy narrations (bottom) automatically extracted from instructional videos (top). Our video representations are learnt from scratch without relying on any manually annotated visual dataset yet outperform all self-supervised and many fully-supervised methods on several video recognition benchmarks.
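Since the caption only gestures at how misaligned narrations are used, the sketch below shows one plausible multiple-instance contrastive formulation, assuming each clip is paired with a small bag of nearby narration embeddings, any of which may be the true match; the bag size, temperature, and toy embeddings are assumptions, and this is not necessarily the paper's exact loss.

```python
# Multiple-instance NCE-style loss between clips and bags of candidate narrations.
import torch
import torch.nn.functional as F

def mil_nce_like_loss(video, narration_bags, tau=0.07):
    """video: (N, D) clip embeddings; narration_bags: (N, K, D) candidate narrations per clip."""
    v = F.normalize(video, dim=1)
    t = F.normalize(narration_bags, dim=2)
    # Positives: all candidate narrations in the clip's own bag.
    pos = torch.logsumexp(torch.einsum("nd,nkd->nk", v, t) / tau, dim=1)
    # Denominator: every narration from every bag in the batch.
    all_ = torch.logsumexp(torch.einsum("nd,mkd->nmk", v, t).flatten(1) / tau, dim=1)
    return (all_ - pos).mean()

N, K, D = 8, 4, 128
print(mil_nce_like_loss(torch.randn(N, D), torch.randn(N, K, D)).item())
```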
Modern self-supervised learning algorithms typically enforce persistency of representations across views of an instance. While this is very effective for learning holistic image and video representations, it becomes sub-optimal for learning spatio-temporally fine-grained features in videos, where scenes and instances evolve through space and time. In this paper, we introduce the Contextualized Spatio-Temporal Contrastive Learning (ConST-CL) framework to effectively learn spatio-temporally fine-grained representations using self-supervision. We first design a region-based self-supervised pretext task that requires the model to learn to transform instance representations from one view into another, guided by context features. Furthermore, we introduce a simple network design that effectively reconciles the simultaneous learning of holistic and local representations. We evaluate our learned representations on a variety of downstream tasks, and ConST-CL achieves state-of-the-art results on four datasets. For spatio-temporal action localization, ConST-CL achieves 39.4% mAP with ground-truth boxes and 30.5% mAP with detected boxes on the AVA-Kinetics validation set. For object tracking, ConST-CL achieves 78.1% precision and 55.2% success score on OTB2015. Furthermore, ConST-CL achieves 94.8% and 71.9% top-1 fine-tuning accuracy on the video action recognition datasets UCF101 and HMDB51, respectively. We plan to release our code and models to the public.
Distinguishing whether an action is performed as intended or whether an intended action fails is an important skill that not only humans possess, but that is also important for intelligent systems operating in human environments. However, recognizing whether an action is unintentional or intended is difficult due to the lack of annotated data. While videos of unintentional or failed actions can be found on the internet, high annotation costs are a major bottleneck for training networks. In this work, we therefore study the problem of self-supervised representation learning for unintentional action prediction. While prior works learn representations based on local temporal neighborhoods, we show that the global context of a video is needed to learn good representations for three downstream tasks: unintentional action classification, localization, and anticipation. In the supplementary material, we show that the learned representations can also be used for detecting anomalies in videos.
Previous work on action representation learning focused on global representations for short video clips. In contrast, many practical applications, such as video alignment, strongly demand learning dense representations of long videos. In this paper, we introduce a new framework of contrastive action representation learning (CARL) to learn frame-wise action representations in a self-supervised or weakly-supervised manner, especially for long videos. Specifically, we introduce a simple but effective video encoder that considers both spatial and temporal context by combining convolution and transformer. Inspired by the recent massive progress in self-supervised learning, we propose a new sequence contrast loss (SCL), applied to two related views obtained through a series of spatio-temporal data augmentations, which comes in two versions. One is the self-supervised version, which optimizes the embedding space by minimizing the KL-divergence between the sequence similarity of the two augmented views and a prior Gaussian distribution of timestamp distance. The other is the weakly-supervised version, which builds more sample pairs among videos using video-level labels via dynamic time warping (DTW). Experiments on the FineGym, PennAction, and Pouring datasets show that our method outperforms the previous state of the art by a large margin on downstream fine-grained action classification while offering faster inference. Surprisingly, although it is not trained on paired videos as in previous works, our self-supervised version also shows outstanding performance in video alignment and fine-grained frame retrieval tasks.
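A hedged sketch of the self-supervised sequence contrast loss described above: for each frame of one augmented view, the similarity distribution over the frames of the other view is pushed, via KL divergence, toward a Gaussian prior over timestamp distance. The temperature, Gaussian width, and toy embeddings are assumptions.

```python
# KL divergence between predicted cross-view frame similarities and a Gaussian prior on time distance.
import torch
import torch.nn.functional as F

def sequence_contrast_loss(z1, z2, t1, t2, tau=0.1, sigma=1.0):
    """z1, z2: (T, dim) frame embeddings of two views; t1, t2: (T,) their timestamps."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    log_q = F.log_softmax(z1 @ z2.t() / tau, dim=1)       # predicted similarity distribution
    dist = (t1[:, None] - t2[None, :]).float() ** 2
    prior = F.softmax(-dist / (2 * sigma ** 2), dim=1)    # Gaussian prior over timestamp distance
    return F.kl_div(log_q, prior, reduction="batchmean")

T, D = 16, 128
z_a, z_b, ts = torch.randn(T, D), torch.randn(T, D), torch.arange(T)
print(sequence_contrast_loss(z_a, z_b, ts, ts).item())
```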
Clustering is a ubiquitous tool in unsupervised learning. Most existing self-supervised representation learning methods typically cluster samples based on visually dominant features. While this is very effective for image-based self-supervision, it often fails for videos, which require understanding motion rather than focusing on background. Using optical flow as complementary information to RGB can alleviate this problem. However, we observe that a naive combination of the two views does not provide meaningful gains. In this paper, we propose a principled way to combine the two views. Specifically, we propose a novel clustering strategy in which we use the initial cluster assignments of each view to guide the final cluster assignments of the other view. This idea enforces similar cluster structures for both views, and the formed clusters are semantically abstract and robust to noisy inputs from each individual view. Furthermore, we propose a novel regularization strategy to address the feature collapse problem, which is common in clustering-based self-supervised learning methods. Our extensive evaluations show the effectiveness of our learned representations on downstream tasks such as video retrieval and action recognition. Specifically, we outperform the state of the art by 7% on UCF and 4% on HMDB for video retrieval, and by 5% on UCF and 6% on HMDB for video classification.
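As a rough illustration of cross-view cluster guidance, the sketch below computes soft cluster assignments for the RGB and flow views against shared prototypes and lets each view's (detached) assignment act as the target for the other view's prediction. The shared prototypes, temperature, and toy features are assumptions; the paper's actual assignment procedure may differ.

```python
# Swapped prediction of cluster assignments between RGB and optical-flow views.
import torch
import torch.nn.functional as F

def swapped_cluster_loss(z_rgb, z_flow, prototypes, tau=0.1):
    """z_rgb, z_flow: (N, dim) L2-normalized features; prototypes: (K, dim)."""
    p_rgb = F.softmax(z_rgb @ prototypes.t() / tau, dim=1)    # assignment from the RGB view
    p_flow = F.softmax(z_flow @ prototypes.t() / tau, dim=1)  # assignment from the flow view
    # Each view predicts the (detached) assignment of the other view.
    loss = -(p_flow.detach() * torch.log(p_rgb + 1e-8)).sum(1).mean() \
           - (p_rgb.detach() * torch.log(p_flow + 1e-8)).sum(1).mean()
    return loss

N, D, K = 32, 128, 10
z1 = F.normalize(torch.randn(N, D), dim=1)
z2 = F.normalize(torch.randn(N, D), dim=1)
protos = F.normalize(torch.randn(K, D), dim=1)
print(swapped_cluster_loss(z1, z2, protos).item())
```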
Recently, self-supervised representation learning (SSRL) has attracted much attention in computer vision, speech, natural language processing (NLP), and more recently in other modalities, including time series from sensors. The popularity of self-supervised learning is driven by the fact that traditional models typically require large amounts of annotated data for training, and acquiring annotated data can be a difficult and expensive process. Self-supervised methods have been introduced to improve training data efficiency by discriminatively pre-training models using supervisory signals obtained freely from the raw data. Unlike existing reviews of SSRL, which focus on methods for a single modality in the CV or NLP domains, we aim to provide the first comprehensive review of multimodal self-supervised learning methods for temporal data. To this end, we 1) provide a comprehensive categorization of existing SSRL methods, 2) introduce a generic pipeline by defining the key components of an SSRL framework, 3) compare existing models in terms of their objective functions, network architectures, and potential applications, and 4) review existing multimodal techniques in each category and for various modalities. Finally, we present existing weaknesses and future opportunities. We believe our work offers a perspective on the requirements of SSRL in domains that use multimodal and/or temporal data.
In recent years, the field of surgical computer vision has seen considerable breakthroughs with the rise of deep neural network methods. However, standard fully-supervised approaches to training require large amounts of annotated data, incurring prohibitive costs, particularly in the clinical domain. Self-supervised learning (SSL) methods, which have begun to gain traction in the general computer vision community, represent a potential solution to these annotation costs, allowing useful representations to be learned from unlabeled data alone. Still, the effectiveness of SSL methods in more complex and impactful domains such as medicine and surgery remains limited and underexplored. In this work, we address this critical need by investigating four state-of-the-art SSL methods (MoCo v2, SimCLR, DINO, SwAV) in the context of surgical computer vision. We conduct an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding: phase recognition and tool presence detection. We examine their parameterization and then their behavior with respect to the amount of training data in a semi-supervised setting. Correctly transferring these methods to surgery, as described and conducted in this work, leads to substantial performance gains over generic uses of SSL (up to 7% in phase recognition and 20% in tool presence detection), as well as up to 14% over state-of-the-art semi-supervised phase recognition approaches. The code will be made available at https://github.com/camma-public/selfsupsurg.
We present a self-supervised Contrastive Video Representation Learning (CVRL) method to learn spatiotemporal visual representations from unlabeled videos. Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away. We study what makes for good data augmentations for video self-supervised learning and find that both spatial and temporal information are crucial. We carefully design data augmentations involving spatial and temporal cues. Concretely, we propose a temporally consistent spatial augmentation method to impose strong spatial augmentations on each frame of the video while maintaining the temporal consistency across frames. We also propose a sampling-based temporal augmentation method to avoid overly enforcing invariance on clips that are distant in time. On Kinetics-600, a linear classifier trained on the representations learned by CVRL achieves 70.4% top-1 accuracy with a 3D-ResNet-50 (R3D-50) backbone, outperforming ImageNet supervised pre-training by 15.7% and SimCLR unsupervised pre-training by 18.8% using the same inflated R3D-50. The performance of CVRL can be further improved to 72.9% with a larger R3D-152 (2× filters) backbone, significantly closing the gap between unsupervised and supervised video representation learning. Our code and models will be available at https://github.com/tensorflow/models/tree/master/official/.
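A minimal sketch of temporally consistent spatial augmentation as described above: one set of crop and flip parameters is sampled per clip and applied identically to every frame. The crop scale, output size, and random clip are illustrative assumptions, not CVRL's exact settings.

```python
# Apply the same random resized crop and horizontal flip to all frames of a clip.
import torch
from torchvision import transforms
from torchvision.transforms import functional as TF

def consistent_spatial_augment(clip: torch.Tensor, out_size=(112, 112)):
    """clip: (T, C, H, W) video tensor; the same crop and flip is used for every frame."""
    i, j, h, w = transforms.RandomResizedCrop.get_params(
        clip[0], scale=(0.3, 1.0), ratio=(3 / 4, 4 / 3))
    flip = torch.rand(1).item() < 0.5
    frames = []
    for frame in clip:
        frame = TF.resized_crop(frame, i, j, h, w, list(out_size))
        if flip:
            frame = TF.hflip(frame)
        frames.append(frame)
    return torch.stack(frames)

clip = torch.rand(16, 3, 128, 171)              # placeholder 16-frame clip
print(consistent_spatial_augment(clip).shape)   # torch.Size([16, 3, 112, 112])
```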
Large-scale labeled data are generally required to train deep neural networks in order to obtain better performance in visual feature learning from images or videos for computer vision applications. To avoid the extensive cost of collecting and annotating large-scale datasets, as a subset of unsupervised learning methods, self-supervised learning methods have been proposed to learn general image and video features from large-scale unlabeled data without using any human-annotated labels. This paper provides an extensive review of deep learning-based self-supervised general visual feature learning methods from images or videos. First, the motivation, general pipeline, and terminologies of this field are described. Then the common deep neural network architectures that are used for self-supervised learning are summarized. Next, the schema and evaluation metrics of self-supervised learning methods are reviewed, followed by the commonly used image and video datasets and the existing self-supervised visual feature learning methods. Finally, quantitative performance comparisons of the reviewed methods on benchmark datasets are summarized and discussed for both image and video feature learning. The paper concludes with a set of promising future directions for self-supervised visual feature learning.
Self-supervised learning (SSL) has attracted much attention in deep learning research, sparking interest in both the computer vision and remote sensing communities. While there has been great success in computer vision, most of the potential of SSL in the domain of Earth observation remains locked. In this paper, we provide an introduction to, and a review of, the concepts and latest developments of SSL for computer vision in the context of remote sensing. Furthermore, we provide a preliminary benchmark of modern SSL algorithms on popular remote sensing datasets, verifying the potential of SSL in remote sensing, and provide an extended study on data augmentation. Finally, we identify promising directions for future SSL research for Earth observation (SSL4EO) to pave the way for fruitful interaction between the two fields.
Learning visual representations through self-supervision is an extremely challenging task, as the network needs to sift out relevant patterns without the active guidance provided by supervision. This is achieved through heavy data augmentation, large-scale datasets, and excessive amounts of compute. Video self-supervised learning (SSL) faces additional challenges: video datasets are typically not as large as image datasets, compute is an order of magnitude larger, and the number of spurious patterns the optimizer has to sieve through is multiplied several fold. Thus, directly learning self-supervised representations from video data can lead to sub-optimal performance. To address this, we propose to leverage a strong model, grounded in self- or language-supervision, within a video representation learning framework, and to learn strong spatial and temporal information without relying on video-labeled data. To this end, we modify the typical video-based SSL design and objective to encourage the video encoder to subsume the semantic content of an image-based model trained on a general domain. The proposed algorithm is shown to learn more efficiently (i.e., in fewer epochs and with smaller batches) and achieves new state-of-the-art performance on standard downstream tasks among single-modality SSL methods.
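To illustrate the "subsume" objective sketched above, the code below aligns a video encoder's clip embedding with the temporally averaged embedding produced by a frozen image model; the tiny stand-in encoders, the cosine objective, and the tensor shapes are assumptions rather than the paper's architecture.

```python
# Distill a frozen image model's per-frame semantics into a video encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

frozen_image_encoder = nn.Linear(3 * 32 * 32, 256).eval()  # stand-in for a pretrained image model
video_encoder = nn.Sequential(nn.Flatten(), nn.Linear(8 * 3 * 32 * 32, 512),
                              nn.ReLU(), nn.Linear(512, 256))

def subsume_loss(clip: torch.Tensor):
    """clip: (B, T, C, H, W). Align the video embedding with frozen per-frame targets."""
    with torch.no_grad():
        frame_targets = frozen_image_encoder(clip.flatten(2))   # (B, T, 256)
        target = F.normalize(frame_targets.mean(dim=1), dim=1)  # temporal average of frame targets
    pred = F.normalize(video_encoder(clip), dim=1)
    return (1 - (pred * target).sum(dim=1)).mean()              # cosine distance

print(subsume_loss(torch.rand(4, 8, 3, 32, 32)).item())
```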
This work presents a self-supervised learning framework named TeG that explores temporal granularity in learning video representations. In TeG, we sample a long clip from a video, together with a short clip that lies inside the long clip, and extract their dense temporal embeddings. The training objective consists of two parts: a fine-grained temporal learning objective that maximizes the similarity between corresponding temporal embeddings in the short clip and the long clip, and a persistent temporal learning objective that pulls together the global embeddings of the two clips. Our study reveals the impact of temporal granularity with three major findings. 1) Different video tasks may require features of different temporal granularities. 2) Intriguingly, some tasks widely believed to require temporal awareness can actually be solved well by temporally persistent features. 3) The flexibility of TeG yields state-of-the-art results on 8 video benchmarks, outperforming supervised pre-training in most cases.
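The two objectives described above can be written down compactly; the hedged sketch below assumes dense per-timestep embeddings for both clips and a known offset of the short clip inside the long one, with the temperature and toy embeddings as placeholders.

```python
# Fine-grained (aligned timestep matching) and persistent (global agreement) objectives.
import torch
import torch.nn.functional as F

def teg_style_losses(long_emb, short_emb, offset, tau=0.1):
    """long_emb: (T_long, D) dense embeddings; short_emb: (T_short, D);
    offset: index where the short clip starts inside the long clip."""
    long_emb, short_emb = F.normalize(long_emb, dim=1), F.normalize(short_emb, dim=1)
    # Fine-grained: each short-clip step should match its aligned long-clip step.
    logits = short_emb @ long_emb.t() / tau                          # (T_short, T_long)
    targets = torch.arange(offset, offset + short_emb.size(0))
    fine = F.cross_entropy(logits, targets)
    # Persistent: the global embeddings of the two clips should agree.
    persistent = 1 - F.cosine_similarity(long_emb.mean(0), short_emb.mean(0), dim=0)
    return fine, persistent

long_e, short_e = torch.randn(32, 128), torch.randn(8, 128)
print([v.item() for v in teg_style_losses(long_e, short_e, offset=10)])
```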
There is a natural correlation between the visual and auditive elements of a video. In this work we leverage this connection to learn general and effective models for both audio and video analysis from self-supervised temporal synchronization. We demonstrate that a calibrated curriculum learning scheme, a careful choice of negative examples, and the use of a contrastive loss are critical ingredients to obtain powerful multi-sensory representations from models optimized to discern temporal synchronization of audio-video pairs. Without further finetuning, the resulting audio features achieve performance superior or comparable to the state-of-the-art on established audio classification benchmarks (DCASE2014 and ESC-50). At the same time, our visual subnet provides a very effective initialization to improve the accuracy of video-based action recognition models: compared to learning from scratch, our self-supervised pretraining yields a remarkable gain of +19.9% in action recognition accuracy on UCF101 and a boost of +17.7% on HMDB51.
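A hedged sketch of a contrastive synchronization objective of the kind described above: an in-sync audio/video embedding pair should be closer than an out-of-sync or cross-video negative, up to a margin. The margin value and toy embeddings are assumptions, and the curriculum over negatives is omitted.

```python
# Margin-based contrastive loss for audio-video temporal synchronization.
import torch
import torch.nn.functional as F

def sync_contrastive_loss(v, a_pos, a_neg, margin=0.99):
    """v: (N, D) video embeddings; a_pos: (N, D) synced audio; a_neg: (N, D) negatives."""
    d_pos = F.pairwise_distance(v, a_pos)
    d_neg = F.pairwise_distance(v, a_neg)
    # Pull synced pairs together, push negatives apart up to the margin.
    return (d_pos.pow(2) + F.relu(margin - d_neg).pow(2)).mean()

N, D = 16, 128
v = F.normalize(torch.randn(N, D), dim=1)
a_sync = F.normalize(torch.randn(N, D), dim=1)
a_off = F.normalize(torch.randn(N, D), dim=1)
print(sync_contrastive_loss(v, a_sync, a_off).item())
```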
Contrastive learning has shown great potential in video representation learning. However, existing approaches fail to sufficiently exploit short-term motion dynamics, which are crucial for various downstream video understanding tasks. In this paper, we propose Motion Sensitive Contrastive Learning (MSCL), which injects the motion information captured by optical flow into RGB frames to strengthen feature learning. To achieve this, in addition to clip-level global contrastive learning, we develop Local Motion Contrastive Learning (LMCL) with frame-level contrastive objectives across the two modalities. Moreover, we introduce Flow Rotation Augmentation (FRA) to generate extra motion-shuffled negative samples and Motion Differential Sampling (MDS) to accurately screen training samples. Extensive experiments on standard benchmarks validate the effectiveness of the method. With the commonly used 3D ResNet-18 as the backbone, we achieve a top-1 accuracy of 91.5% on UCF101 for video classification, along with strong results on Something-Something v2, and a top-1 recall of 65.6% on UCF101 for video retrieval, notably improving over the state of the art.
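As a rough illustration of a frame-level cross-modal term in the spirit of LMCL, the sketch below matches each RGB frame embedding with the optical-flow embedding at the same timestep, using the other timesteps of the clip as negatives; the temperature and per-frame embeddings are placeholders, and FRA/MDS are not modeled.

```python
# Frame-level contrastive matching between RGB and optical-flow embeddings of one clip.
import torch
import torch.nn.functional as F

def local_motion_contrast(rgb_frames, flow_frames, tau=0.07):
    """rgb_frames, flow_frames: (T, D) per-frame embeddings of the same clip."""
    rgb = F.normalize(rgb_frames, dim=1)
    flow = F.normalize(flow_frames, dim=1)
    logits = rgb @ flow.t() / tau              # (T, T): same timestep is the positive
    targets = torch.arange(rgb.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

T, D = 16, 128
print(local_motion_contrast(torch.randn(T, D), torch.randn(T, D)).item())
```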