We wish to automatically predict the "speediness" of moving objects in videos: whether they move faster than, at, or slower than their "natural" speed. The core component in our approach is SpeedNet, a novel deep network trained to detect whether a video is playing at normal rate or is sped up. SpeedNet is trained on a large corpus of natural videos in a self-supervised manner, without requiring any manual annotations. We show how this single, binary classification network can be used to detect arbitrary rates of speediness of objects. We demonstrate prediction results by SpeedNet on a wide range of videos containing complex natural motions, and examine the visual cues it utilizes for making those predictions. Importantly, we show that through predicting the speed of videos, the model learns a powerful and meaningful space-time representation that goes beyond simple motion cues. We demonstrate how those learned features can boost the performance of self-supervised action recognition, and can be used for video retrieval. Furthermore, we also apply SpeedNet to generate time-varying, adaptive video speedups, which can allow viewers to watch videos faster, but with less of the jittery, unnatural motions typical of videos that are sped up uniformly.
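The pretext task itself is compact enough to sketch. Below is a minimal, hypothetical PyTorch illustration of how normal-rate and sped-up training clips could be generated by temporal subsampling and scored with a single binary head; all names are illustrative, and the toy linear encoder merely stands in for the paper's video backbone (SpeedNet itself builds on an S3D-G network).

```python
import torch
import torch.nn as nn

def sample_speed_clip(frames, clip_len=16, speed_up=False, factor=2):
    """frames: (T, C, H, W). Returns a fixed-length clip and a binary label.
    A sped-up clip is simulated by taking every `factor`-th frame, so the
    same number of frames covers a longer time span."""
    stride = factor if speed_up else 1
    span = (clip_len - 1) * stride + 1
    start = torch.randint(0, frames.shape[0] - span + 1, (1,)).item()
    clip = frames[start:start + span:stride]          # (clip_len, C, H, W)
    label = torch.tensor(float(speed_up))             # 1.0 = sped up
    return clip, label

# Demo on a random "video"; the encoder here is a deliberately tiny placeholder.
video = torch.rand(80, 3, 64, 64)
clip, label = sample_speed_clip(video, speed_up=True)
encoder = nn.Sequential(nn.Flatten(start_dim=1), nn.Linear(16 * 3 * 64 * 64, 1))
logit = encoder(clip.unsqueeze(0))                    # (1, 1) binary logit
loss = nn.BCEWithLogitsLoss()(logit.squeeze(1), label.unsqueeze(0))
```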
Figure 1: Seeing these ordered frames from videos, can you tell whether each video is playing forward or backward? (answer below). Depending on the video, solving the task may require (a) low-level understanding (e.g., physics), (b) high-level reasoning (e.g., semantics), or (c) familiarity with very subtle effects or with (d) camera conventions. In this work, we learn and exploit several types of knowledge to predict the arrow of time automatically with neural network models trained on large-scale video datasets.
Despite the great progress in video understanding brought by deep convolutional neural networks, the feature representations learned by existing methods may be biased toward static visual cues. To address this problem, we propose a novel method based on probabilistic analysis to Suppress Static Visual Cues (SSVC) for self-supervised video representation learning. In our method, video frames are first encoded to obtain latent variables that follow a standard normal distribution via normalizing flows. By modeling the static factors in a video as random variables, the conditional distribution of each latent variable becomes a shifted and scaled normal. Then, the latent variables that remain larger along time are selected as static cues and suppressed to generate motion-preserved videos. Finally, positive pairs are constructed from the motion-preserved videos for contrastive learning, to alleviate the problem of representation bias toward static cues. The less-biased video representations generalize better to various downstream tasks. Extensive experiments on publicly available benchmarks demonstrate that the proposed method outperforms the state of the art when only a single RGB modality is used for pre-training.
We propose MaCLR, a novel method to explicitly perform cross-modal self-supervised video representation learning from visual and motion modalities. Compared with previous video representation learning methods that mostly focus on learning motion cues implicitly from RGB inputs, MaCLR enriches the standard contrastive learning objective for RGB video clips with a cross-modal learning objective between a motion pathway and a visual pathway. We show that the representation learned with our MaCLR method focuses more on foreground motion regions and therefore generalizes better to downstream tasks. To demonstrate this, we evaluate MaCLR on five datasets for both action recognition and action detection, and demonstrate state-of-the-art self-supervised performance on all of them. Furthermore, we show that MaCLR representations can be as effective as representations learned with full supervision on UCF101 and HMDB51 action recognition, and can even outperform the supervised representation for action recognition on VidSitu and SSv2, and for action detection on AVA.
In this paper, we present an approach for learning a visual representation from the raw spatiotemporal signals in videos. Our representation is learned without supervision from semantic labels. We formulate our method as an unsupervised sequential verification task, i.e., we determine whether a sequence of frames from a video is in the correct temporal order. With this simple task and no semantic labels, we learn a powerful visual representation using a Convolutional Neural Network (CNN). The representation contains complementary information to that learned from supervised image datasets like ImageNet. Qualitative results show that our method captures information that is temporally varying, such as human pose. When used as pre-training for action recognition, our method gives significant gains over learning without external data on benchmark datasets like UCF101 and HMDB51. To demonstrate its sensitivity to human pose, we show results for pose estimation on the FLIC and MPII datasets that are competitive with, or better than, approaches using significantly more supervision. Our method can be combined with supervised representations to provide an additional boost in accuracy.
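As a rough illustration of the sequential verification task, the sketch below (PyTorch, illustrative names) builds temporally ordered and shuffled frame tuples that a CNN could be trained to classify. It deliberately omits the paper's optical-flow-based bias toward sampling frames from high-motion windows.

```python
import random
import torch

def order_verification_tuple(frames, n=3, positive=True):
    """Sample an n-frame tuple (n >= 3) from `frames` (T, C, H, W).
    Positive tuples keep temporal order (the original paper also counts the
    reversed order as correct); negatives are shuffled out of order."""
    idx = sorted(random.sample(range(frames.shape[0]), n))
    if not positive:
        shuffled = idx[:]
        while shuffled == idx or shuffled == idx[::-1]:
            random.shuffle(shuffled)                  # force a wrong order
        idx = shuffled
    tup = torch.stack([frames[i] for i in idx])       # (n, C, H, W)
    return tup, torch.tensor(float(positive))
```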
Motion, as the most distinct phenomenon in a video, involving changes over time, is unique to the development of video representation learning. In this paper, we ask the question: how important is motion, particularly for self-supervised video representation learning? To this end, we compose a duet of exploiting motion for data augmentation and feature learning in the regime of contrastive learning. Specifically, we present a Motion-focused Contrastive Learning (MCL) method that regards this duet as its foundation. On one hand, MCL capitalizes on the optical flow of each frame in a video to temporally and spatially sample tubelets (i.e., sequences of associated frame patches across time) as data augmentations. On the other hand, MCL further aligns the gradient maps of the convolutional layers with optical flow maps from spatial, temporal, and spatio-temporal perspectives, in order to ground motion information in feature learning. Extensive experiments conducted on an R(2+1)D backbone demonstrate the effectiveness of our MCL. On UCF101, a linear classifier trained on the representations learned by MCL achieves 81.91% top-1 accuracy, outperforming ImageNet supervised pre-training by 6.78%. On Kinetics-400, MCL achieves 66.62% top-1 accuracy under the linear protocol. Code is available at https://github.com/yihengzhang-cv/mcl-motion-focused-contrastive-learning.
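A toy version of flow-guided tubelet sampling could look as follows: each patch in the tubelet is centered on the highest-motion location of its frame. This is a loose stand-in for MCL's actual sampler, assumes precomputed optical-flow magnitudes, and uses illustrative names throughout.

```python
import torch
import torch.nn.functional as F

def sample_motion_tubelet(frames, flow_mag, patch=32):
    """Crop a sequence of patches (a "tubelet") following high-motion regions.
    frames:   (T, C, H, W) video frames
    flow_mag: (T, H, W) per-pixel optical-flow magnitude (precomputed)"""
    T = frames.shape[0]
    # average flow magnitude over every patch-sized window, stride 1
    pooled = F.avg_pool2d(flow_mag.unsqueeze(1), patch, stride=1)
    tubelet = []
    for t in range(T):
        flat = pooled[t, 0].argmax()                      # best window, flattened
        y, x = divmod(flat.item(), pooled.shape[-1])      # its top-left corner
        tubelet.append(frames[t, :, y:y + patch, x:x + patch])
    return torch.stack(tubelet)                           # (T, C, patch, patch)

# usage with a random clip and synthetic flow magnitudes
tube = sample_motion_tubelet(torch.rand(8, 3, 112, 112), torch.rand(8, 112, 112))
```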
Unintentional actions are rare occurrences that are difficult to define precisely and are highly dependent on the temporal context of the action. In this work, we explore such actions and seek to identify the points in videos where the actions transition from intentional to unintentional. We propose a multi-stage framework that exploits inherent biases, such as motion speed, motion direction, and order, to recognize unintentional actions. To enhance the representations via self-supervised training, we propose temporal transformations, called Temporal Transformations of Inherent Biases of Unintentional Actions (T2IBUA). The multi-stage approach models the temporal information at the level of individual frames as well as full clips. These enhanced representations show strong performance on unintentional action recognition tasks. We conduct an extensive ablation study of our framework and report results that improve significantly over the state of the art.
We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3 × 3 × 3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact (achieving 52.8% accuracy on the UCF101 dataset with only 10 dimensions) and efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.
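The homogeneous 3 × 3 × 3 design is easy to see in code. The sketch below is a heavily scaled-down, hypothetical network in that spirit, not the actual 8-convolution C3D architecture.

```python
import torch
import torch.nn as nn

class TinyC3D(nn.Module):
    """A toy C3D-style network: a homogeneous stack of 3x3x3 convolutions,
    the kernel design the paper found to work best."""
    def __init__(self, num_classes=101):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),                    # no temporal pooling early on
            nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(128, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(256, num_classes)

    def forward(self, x):                               # x: (B, C, T, H, W)
        return self.fc(self.features(x).flatten(1))

logits = TinyC3D()(torch.rand(2, 3, 16, 112, 112))      # (2, 101)
```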
Recent self-supervised video representation learning methods have explored fundamental properties of videos, e.g., speed, temporal order, etc. This work exploits an essential property of videos, the video continuity, to obtain supervision signals for self-supervised representation learning. Specifically, we formulate three novel continuity-related pretext tasks, i.e., continuity justification, discontinuity localization, and missing section approximation, which jointly supervise a shared backbone for video representation learning. This self-supervision approach, termed Continuity Perception Network (CPNet), solves the three tasks altogether and encourages the backbone network to learn local and long-ranged motion and context representations. It outperforms prior arts on multiple downstream tasks, such as action recognition, video retrieval, and action localization. In addition, the video continuity can be complementary to other coarse-grained video properties for representation learning, and integrating the proposed pretext tasks into prior arts can yield large performance gains.
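The data side of these pretext tasks can be sketched simply: a "discontinuous" clip is a contiguous clip with a section cut out, and the cut position and the dropped frames supply the localization and approximation targets. The snippet below (PyTorch, illustrative names, task heads and losses omitted) is one plausible construction, not CPNet's exact recipe.

```python
import torch

def make_continuous_clip(frames, clip_len=16):
    """A contiguous crop; continuity label 1."""
    start = torch.randint(0, frames.shape[0] - clip_len + 1, (1,)).item()
    return frames[start:start + clip_len], torch.tensor(1.0)

def make_discontinuous_clip(frames, clip_len=16, gap=8):
    """A clip with a temporal jump: `gap` frames are removed at position `cut`.
    Returns the clip, continuity label 0, and the cut index (the target for
    discontinuity localization); the dropped frames would be the target for
    missing-section approximation."""
    T = frames.shape[0]
    cut = torch.randint(1, clip_len, (1,)).item()
    start = torch.randint(0, T - clip_len - gap + 1, (1,)).item()
    first = frames[start:start + cut]
    second = frames[start + cut + gap:start + clip_len + gap]
    clip = torch.cat([first, second])                  # still clip_len frames
    return clip, torch.tensor(0.0), torch.tensor(cut)
```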
Given the success of contrastive learning in the image domain, current self-supervised video representation learning methods usually employ a contrastive loss to facilitate video representation learning. However, when naively pulling two augmented views of a video closer, the model tends to learn the common static background as a shortcut while failing to capture the motion information, a phenomenon dubbed background bias. Such bias leaves the model with weak generalization ability, leading to worse performance on downstream tasks such as action recognition. To alleviate this bias, we propose Foreground-background Merging (FAME) to deliberately compose the moving foreground region of the selected video onto the static background of others. Specifically, without any off-the-shelf detector, we extract the moving foreground out of the background regions via frame differences and color statistics, and shuffle the background regions among the videos. By leveraging the semantic consistency between the original clips and the fused ones, the model focuses more on motion patterns and is debiased from the background shortcut. Extensive experiments demonstrate that FAME can effectively resist background cheating, achieving state-of-the-art performance on the UCF101, HMDB51, and Diving48 datasets.
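A minimal sketch of the merging step, assuming clips normalized to [0, 1] and using only a frame-difference mask (FAME additionally uses color statistics, and its exact mask construction may differ):

```python
import torch

def fame_merge(fg_clip, bg_clip, thresh=0.05):
    """Compose the moving foreground of `fg_clip` onto the background of
    `bg_clip`; both are (T, C, H, W) in [0, 1]. Pixels that deviate from the
    temporally averaged frame are treated as moving foreground."""
    mean_frame = fg_clip.mean(dim=0, keepdim=True)                  # (1, C, H, W)
    diff = (fg_clip - mean_frame).abs().mean(dim=1)                 # (T, H, W)
    mask = (diff.mean(dim=0, keepdim=True) > thresh).float()        # one mask per clip
    mask = mask.unsqueeze(1)                                        # (1, 1, H, W)
    return mask * fg_clip + (1 - mask) * bg_clip

merged = fame_merge(torch.rand(8, 3, 112, 112), torch.rand(8, 3, 112, 112))
```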
Contrastive learning has shown promising potential in self-supervised spatio-temporal representation learning. Most works naively sample different clips to construct positive and negative pairs. However, we observe that this formulation inclines the model towards a background scene bias. The underlying reasons are twofold. First, the scene difference is usually more noticeable and easier to discriminate than the motion difference. Second, clips sampled from the same video often share similar backgrounds but have distinct motions. Simply treating them as positive pairs draws the model to the static background rather than the motion pattern. To tackle this challenge, this paper proposes a novel dual contrastive formulation. Concretely, we decouple the input RGB video sequence into two complementary modes, static scene and dynamic motion. The original RGB features are then pulled close to the static features and aligned with the dynamic features, respectively. In this way, the static scene and the dynamic motion are simultaneously encoded into a compact RGB representation. We further conduct feature-space decoupling via activation maps to distill static- and dynamic-related features. We term our method Dual Contrastive Learning for spatio-temporal Representation (DCLR). Extensive experiments demonstrate that DCLR learns effective spatio-temporal representations and obtains state-of-the-art or comparable performance on the UCF-101, HMDB-51, and Diving-48 datasets.
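One plausible way to realize the static/dynamic decomposition, shown below with illustrative names, is a repeated temporally averaged frame for the static scene and frame differences for the dynamic motion; DCLR's exact construction, and its two contrastive alignment terms, are not reproduced here.

```python
import torch

def decouple_modes(clip):
    """Split a clip (T, C, H, W) into a static-scene view (average frame
    repeated over time, i.e., scene without motion) and a dynamic-motion
    view (frame differences, i.e., motion without scene)."""
    static = clip.mean(dim=0, keepdim=True).expand_as(clip)   # (T, C, H, W)
    dynamic = clip[1:] - clip[:-1]                            # (T-1, C, H, W)
    return static, dynamic

static_view, dynamic_view = decouple_modes(torch.rand(8, 3, 112, 112))
```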
We introduce the task of spotting temporally precise, fine-grained events in video (detecting the exact moment in time an event occurs). Precise spotting requires models to reason globally about the full-time scale of actions and locally to identify subtle frame-level appearance and motion differences that identify events during these actions. Surprisingly, we find that top-performing solutions to prior video understanding tasks, such as action detection and segmentation, do not simultaneously satisfy both requirements. In response, we propose E2E-Spot, a compact, end-to-end model that performs well on the precise spotting task and can be trained quickly on a single GPU. We demonstrate that E2E-Spot significantly outperforms recent baselines adapted from the video action detection, segmentation, and spotting literature to the precise spotting task. Finally, we contribute new annotations and splits to several fine-grained sports action datasets to make them suitable for future work on precise spotting.
We propose a new self-supervised CNN pre-training technique based on a novel auxiliary task called odd-one-out learning. In this task, the machine is asked to identify the unrelated or odd element from a set of otherwise related elements. We apply this technique to self-supervised video representation learning, where we sample subsequences from videos and ask the network to learn to predict the odd video subsequence. The odd video subsequence is sampled such that it has the wrong temporal order of frames, while the even ones have the correct temporal order. Therefore, no manual annotation is required to generate an odd-one-out question. Our learning machine is implemented as a multi-stream convolutional neural network, which is learned end-to-end. Using odd-one-out networks, we learn temporal representations for videos that generalize to other related tasks such as action recognition. On action classification, our method obtains 60.3% on the UCF101 dataset using only UCF101 data for training, which is approximately 10% better than current state-of-the-art self-supervised learning methods. Similarly, on the HMDB51 dataset we outperform self-supervised state-of-the-art methods by 12.7% on the action classification task.
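As the abstract notes, generating an odd-one-out question requires no labels. The following hypothetical sketch stacks several temporally ordered subsequences and shuffles exactly one of them; the multi-stream network that answers the question is omitted.

```python
import random
import torch

def odd_one_out_question(video, n_choices=6, clip_len=6):
    """Build one odd-one-out question from `video` (T, C, H, W): n_choices
    subsequences in correct temporal order, except one shuffled 'odd'
    element. Returns the stacked choices and the odd index, which serves
    as the classification target."""
    T = video.shape[0]
    choices = []
    for _ in range(n_choices):
        idx = sorted(random.sample(range(T), clip_len))      # ordered subsequence
        choices.append(torch.stack([video[i] for i in idx]))
    odd = random.randrange(n_choices)
    perm = list(range(clip_len))
    while perm == sorted(perm):
        random.shuffle(perm)                                 # force a wrong order
    choices[odd] = choices[odd][perm]
    return torch.stack(choices), torch.tensor(odd)

choices, target = odd_one_out_question(torch.rand(40, 3, 64, 64))
```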
Video self-supervised learning is a challenging task, which requires significant expressive power from the model to leverage rich spatial-temporal knowledge and generate effective supervisory signals from large amounts of unlabeled videos. However, existing methods fail to increase the temporal diversity of unlabeled videos and ignore elaborately modeling multi-scale temporal dependencies in an explicit way. To overcome these limitations, we take advantage of the multi-scale temporal dependencies within videos and propose a novel video self-supervised learning framework named Temporal Contrastive Graph Learning (TCGL), which jointly models the inter-snippet and intra-snippet temporal dependencies for temporal representation learning with a hybrid graph contrastive learning strategy. Specifically, a Spatial-Temporal Knowledge Discovering (STKD) module is first introduced to extract motion-enhanced spatial-temporal representations from videos based on the frequency-domain analysis of the discrete cosine transform. To explicitly model the multi-scale temporal dependencies of unlabeled videos, our TCGL integrates prior knowledge about frame and snippet orders into graph structures, i.e., the intra-snippet and inter-snippet Temporal Contrastive Graphs (TCG). Then, specific contrastive learning modules are designed to maximize the agreement between nodes in different graph views. To generate supervisory signals for unlabeled videos, we introduce an Adaptive Snippet Order Prediction (ASOP) module, which leverages the relational knowledge among video snippets to learn the global context representation and adaptively recalibrate the channel-wise features. Experimental results demonstrate the superiority of our TCGL over state-of-the-art methods on large-scale action recognition and video retrieval benchmarks.
Videos are a rich source for self-supervised learning (SSL) due to the presence of natural temporal transformations of objects. However, current methods typically sample video clips for learning at random, which results in a poor supervisory signal. In this work, we propose PreViTS, an SSL framework that uses an unsupervised tracking signal to select clips containing the same object, which helps better exploit the temporal transformations of objects. PreViTS further uses the tracking signal to spatially constrain the frame regions to learn from, and trains the model to locate meaningful objects by providing supervision on Grad-CAM attention maps. To evaluate our approach, we train momentum contrastive (MoCo) encoders on the VGG-Sound and Kinetics-400 datasets with PreViTS. Training with PreViTS outperforms the representations learned by MoCo alone on both image recognition and video classification downstream tasks, obtaining state-of-the-art performance on action classification. PreViTS helps learn feature representations that are more robust to background and context changes, as shown by experiments on image and video datasets with background changes. Learning from large-scale uncurated videos with PreViTS could lead to more accurate and robust visual feature representations.
We propose a self-supervised spatiotemporal learning technique which leverages the chronological order of videos. Our method can learn the spatiotemporal representation of the video by predicting the order of shuffled clips from the video. The category of the video is not required, which gives our technique the potential to take advantage of infinite unannotated videos. Related works use frames; compared to frames, clips are more consistent with the video dynamics. Clips help to reduce the uncertainty of orders and are more appropriate for learning a video representation. 3D convolutional neural networks are utilized to extract features for the clips, and these features are processed to predict the actual order. The learned representations are evaluated via nearest-neighbor retrieval experiments. We also use the learned networks as pre-trained models and finetune them on the action recognition task. Three types of 3D convolutional neural networks are tested in experiments, and we gain large improvements compared to existing self-supervised methods.
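The clip-order pretext task can be sketched in a few lines: cut clips, shuffle them, and let the permutation index serve as the classification label. The snippet below is a simplified, assumed construction; it does not enforce non-overlapping clips or reproduce the paper's 3D-CNN tuple encoder.

```python
import itertools
import random
import torch

def clip_order_sample(video, n_clips=3, clip_len=8):
    """From `video` (T, C, H, W), cut n_clips clips, shuffle them, and return
    the shuffled stack plus the permutation class id (one of n_clips! orders)
    that a network would be trained to predict."""
    T = video.shape[0]
    starts = sorted(random.sample(range(T - clip_len + 1), n_clips))
    clips = [video[s:s + clip_len] for s in starts]   # note: may overlap here
    perms = list(itertools.permutations(range(n_clips)))
    label = random.randrange(len(perms))
    shuffled = torch.stack([clips[i] for i in perms[label]])
    return shuffled, torch.tensor(label)              # (n_clips, clip_len, C, H, W)

clips, order_label = clip_order_sample(torch.rand(64, 3, 112, 112))
```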
We develop a novel framework for single-scene video anomaly localization that allows for human-understandable reasons for the decisions the system makes. We first learn general representations of objects and their motions (using deep networks) and then use these representations to build a high-level, location-dependent model of any particular scene. This model can be used to detect anomalies in new videos of the same scene. Importantly, our approach is explainable - our high-level appearance and motion features can provide human-understandable reasons for why any part of a video is classified as normal or anomalous. We conduct experiments on standard video anomaly detection datasets (Street Scene, CUHK Avenue, ShanghaiTech and UCSD Ped1, Ped2) and show significant improvements over the previous state-of-the-art.
This work presents a self-supervised learning framework named TeG to explore temporal granularity in learning video representations. In TeG, we sample a long clip from a video, as well as a short clip that lies inside the long clip, and then extract their dense temporal embeddings. The training objective consists of two parts: a fine-grained temporal learning objective that maximizes the similarity between corresponding temporal embeddings in the short and long clips, and a persistent temporal learning objective that pulls together the global embeddings of the two clips. Our study reveals the impact of temporal granularity through three major findings. 1) Different video tasks may require features of different temporal granularities. 2) Intriguingly, some tasks widely considered to require temporal awareness can actually be well addressed by temporally persistent features. 3) The flexibility of TeG gives rise to state-of-the-art results on 8 video benchmarks, outperforming supervised pre-training in most cases.
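A simplified reading of the two objectives, using plain cosine similarities in place of TeG's actual contrastive losses, and assuming dense per-timestep embeddings have already been extracted:

```python
import torch
import torch.nn.functional as F

def teg_losses(z_long, z_short, offset):
    """z_long: (Tl, D) dense embeddings of the long clip; z_short: (Ts, D)
    dense embeddings of a short clip starting at `offset` inside the long
    clip. Returns a fine-grained term (match corresponding timestamps) and
    a persistent term (match global embeddings)."""
    aligned = z_long[offset:offset + z_short.shape[0]]  # temporally corresponding
    fine = -F.cosine_similarity(aligned, z_short, dim=-1).mean()
    persistent = -F.cosine_similarity(z_long.mean(0), z_short.mean(0), dim=0)
    return fine, persistent

fine, persistent = teg_losses(torch.randn(32, 128), torch.randn(8, 128), offset=10)
```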
Efficiently modeling spatial-temporal information in videos is crucial for action recognition. To achieve this goal, state-of-the-art methods typically employ the convolution operator and dense interaction modules such as non-local blocks. However, these methods cannot accurately fit the diverse events in videos. On the one hand, the adopted convolutions have fixed scales, thus struggling with events of various scales. On the other hand, the dense interaction modeling paradigm only achieves sub-optimal performance, as action-irrelevant parts bring additional noise to the final prediction. In this paper, we propose a unified action recognition framework that investigates the dynamic nature of video content by introducing the following designs. First, when extracting local cues, we generate spatio-temporal kernels of dynamic scale to adaptively fit diverse events. Second, to accurately aggregate these cues into a global video representation, we propose to mine the interactions only among a few selected foreground objects via a Transformer, which yields a sparse paradigm. We call the proposed framework the Event Adaptive Network (EAN), because both key designs are adaptive to the input video content. To exploit the short-term motions within local segments, we propose a novel and efficient Latent Motion Code (LMC) module, which further improves the performance of the framework. Extensive experiments on several large-scale video datasets, e.g., Something-Something, Kinetics, and Diving48, verify that our model achieves state-of-the-art or competitive performance at low FLOPs. Code is available at: https://github.com/tianyuan168326/ean-pytorch.
Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports-1M dataset (73.1% vs. 60.9%) and the UCF-101 dataset both with (88.6% vs. 88.0%) and without (82.6% vs. 73.0%) additional optical flow information.
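The second method's structure, per-frame CNN features fed to an LSTM, can be sketched as below; the toy frame encoder is a placeholder, as the paper builds on much larger image CNNs (e.g., GoogLeNet-style backbones).

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Per-frame CNN features aggregated over time by an LSTM, in the spirit
    of the paper's recurrent method."""
    def __init__(self, feat_dim=128, num_classes=101):
        super().__init__()
        self.cnn = nn.Sequential(                      # tiny stand-in frame encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(16 * 16, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, x):                              # x: (B, T, C, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # per-frame features
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])                     # classify from the last step

logits = CNNLSTM()(torch.rand(2, 10, 3, 64, 64))       # (2, 101)
```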