Modeling of spatiotemporal networks, computational cost, and accuracy are among the most intensively studied topics in video action recognition. Conventional 2D convolutions have low computational cost but cannot capture temporal relationships; CNN models based on 3D convolutions can achieve good performance, but their computational cost is high and the number of parameters is large. In this paper, we propose a plug-and-play Spatial-Temporal Shift Module (STSM), a generic module that is both effective and high-performance. Specifically, after STSM is inserted into other networks, the performance of the network can be improved without increasing the number of computations or parameters. In particular, when the network is a 2D CNN, our STSM module allows the network to learn efficient spatiotemporal features. We evaluate the proposed module extensively, conducting many experiments to study its effectiveness for video action recognition, and achieve state-of-the-art results on the Kinetics-400 and Something-Something datasets.
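This abstract does not spell out the exact shift pattern of STSM, but the zero-parameter, zero-FLOP nature of shift operations can be conveyed with a rough PyTorch-style sketch (purely illustrative; the channel split, shift directions, and axes chosen here are assumptions, not the authors' module):

```python
import torch

def spatiotemporal_shift(x, shift_div=8):
    """Illustrative only: displace small groups of channels along the temporal
    and spatial axes with torch.roll, which adds no parameters and no FLOPs.
    x: (N, T, C, H, W)."""
    fold = x.size(2) // shift_div
    out = x.clone()
    out[:, :, :fold] = torch.roll(x[:, :, :fold], shifts=1, dims=1)                  # temporal shift
    out[:, :, fold:2 * fold] = torch.roll(x[:, :, fold:2 * fold], shifts=1, dims=3)  # spatial shift (height)
    return out
```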
Temporal modeling is key for action recognition in videos. It normally considers both short-range motions and long-range aggregations. In this paper, we propose a Temporal Excitation and Aggregation (TEA) block, including a motion excitation (ME) module and a multiple temporal aggregation (MTA) module, specifically designed to capture both short- and long-range temporal evolution. In particular, for short-range motion modeling, the ME module calculates the feature-level temporal differences from spatiotemporal features. It then utilizes the differences to excite the motion-sensitive channels of the features. The long-range temporal aggregations in previous works are typically achieved by stacking a large number of local temporal convolutions. Each convolution processes a local temporal window at a time. In contrast, the MTA module proposes to deform the local convolution to a group of sub-convolutions, forming a hierarchical residual architecture. Without introducing additional parameters, the features will be processed with a series of sub-convolutions, and each frame could complete multiple temporal aggregations with neighborhoods. The final equivalent receptive field of the temporal dimension is accordingly enlarged, which is capable of modeling the long-range temporal relationship over distant frames. The two components of the TEA block are complementary in temporal modeling. Finally, our approach achieves impressive results at low FLOPs on several action recognition benchmarks, such as Kinetics, Something-Something, HMDB51, and UCF101, which confirms its effectiveness and efficiency.
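As a rough illustration of the motion-excitation idea (not the authors' TEA code), the sketch below computes feature-level temporal differences and turns them into a channel-wise sigmoid gate; the reduction ratio, the pooling, and the zero-padded last step are assumptions:

```python
import torch
import torch.nn as nn

class MotionExcitation(nn.Module):
    """Sketch of a motion-excitation style gate over 2D CNN features."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.squeeze = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.expand = nn.Conv2d(channels // reduction, channels, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x, num_frames):
        # x: (N*T, C, H, W) features of a clip laid out frame by frame
        nt, c, h, w = x.shape
        n = nt // num_frames
        feat = self.squeeze(x).view(n, num_frames, -1, h, w)
        diff = feat[:, 1:] - feat[:, :-1]                         # feature-level temporal differences
        diff = torch.cat([diff, torch.zeros_like(diff[:, :1])], dim=1)
        diff = diff.reshape(nt, -1, h, w)
        gate = torch.sigmoid(self.expand(self.pool(diff)))        # channel-wise motion gate
        return x + x * gate                                       # excite motion-sensitive channels
```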
Spatiotemporal and motion features are two complementary and crucial sources of information for video action recognition. Recent state-of-the-art methods adopt a 3D CNN stream to learn spatiotemporal features and another flow stream to learn motion features. In this work, we aim to efficiently encode these two features in a unified 2D framework. To this end, we first propose an STM block, which contains a Channel-wise SpatioTemporal Module (CSTM) to present the spatiotemporal features and a Channel-wise Motion Module (CMM) to efficiently encode motion features. We then replace the original residual blocks in the ResNet architecture with STM blocks to form a simple yet effective STM network, introducing very limited extra computation cost. Extensive experiments demonstrate that the proposed STM network outperforms the state-of-the-art methods on both temporal-related datasets (i.e., Something-Something v1 & v2 and Jester) and scene-related datasets (i.e., Kinetics-400, UCF-101, and HMDB-51) with the help of encoding spatiotemporal and motion features together. * The work was done during an internship at SenseTime.
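A minimal sketch of a channel-wise temporal operation in the spirit of the CSTM described above (the depthwise 1D design, kernel size, and tensor layout are assumptions rather than the paper's exact module):

```python
import torch.nn as nn

class ChannelwiseTemporalConv(nn.Module):
    """Depthwise (channel-wise) 1D convolution along the temporal axis,
    applied on top of 2D CNN features."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.tconv = nn.Conv1d(channels, channels, kernel_size,
                               padding=kernel_size // 2, groups=channels)

    def forward(self, x, num_frames):
        # x: (N*T, C, H, W)
        nt, c, h, w = x.shape
        n = nt // num_frames
        y = x.view(n, num_frames, c, h, w).permute(0, 3, 4, 2, 1)  # (N, H, W, C, T)
        y = y.reshape(n * h * w, c, num_frames)
        y = self.tconv(y)                                          # temporal mixing, one filter per channel
        y = y.reshape(n, h, w, c, num_frames).permute(0, 4, 3, 1, 2)
        return y.reshape(nt, c, h, w)
```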
Spatio-temporal convolution often fails to learn motion dynamics in videos, and thus an effective motion representation is required for video understanding in the wild. In this paper, we propose a rich and robust motion representation based on spatio-temporal self-similarity (STSS). Given a sequence of frames, STSS represents each local region as similarities to its neighbors in space and time. By converting appearance features into relational values, it enables the learner to better recognize structural patterns in space and time. We leverage the whole volume of STSS and let our model learn to extract an effective motion representation from it. The proposed neural block, dubbed SELFY, can be easily inserted into neural architectures and trained end-to-end without additional supervision. With a sufficient volume of neighborhoods in space and time, it effectively captures long-term interactions and fast motions in the video, leading to robust action recognition. Our experimental analysis demonstrates its superiority over previous methods for motion modeling, as well as its complementarity to spatio-temporal features from direct convolution. On standard action recognition benchmarks, Something-Something V1 & V2, Diving-48, and FineGym, the proposed method achieves state-of-the-art results.
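To make the notion of spatio-temporal self-similarity concrete, here is an assumption-laden simplification (only one temporal offset and a single spatial window; the full STSS volume and the SELFY block's feature extraction are omitted):

```python
import torch.nn.functional as F

def next_frame_self_similarity(feats, window=5):
    """For each position in frame t, cosine similarities to a window x window
    spatial neighbourhood in frame t+1. feats: (N, T, C, H, W)."""
    n, t, c, h, w = feats.shape
    feats = F.normalize(feats, dim=2)                     # so dot products are cosine similarities
    query = feats[:, :-1].reshape(n * (t - 1), c, h, w)   # frames 0..T-2
    key = feats[:, 1:].reshape(n * (t - 1), c, h, w)      # frames 1..T-1
    neigh = F.unfold(key, kernel_size=window, padding=window // 2)
    neigh = neigh.view(n * (t - 1), c, window * window, h * w)
    q = query.view(n * (t - 1), c, 1, h * w)
    sim = (q * neigh).sum(dim=1)                          # (N*(T-1), window*window, H*W)
    return sim.view(n, t - 1, window * window, h, w)
```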
In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significant gains in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block, "R(2+1)D", which produces CNNs that achieve results comparable or superior to the state-of-the-art on Sports-1M, Kinetics, UCF101, and HMDB51.
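The factorization itself is easy to sketch in PyTorch; note that the paper chooses the intermediate width so that the parameter count matches a full 3D convolution, whereas here `mid_ch` is left as a free choice (an assumption):

```python
import torch.nn as nn

class R2Plus1dConv(nn.Module):
    """A (2+1)D factorized convolution: 2D spatial conv, nonlinearity, 1D temporal conv."""
    def __init__(self, in_ch, out_ch, mid_ch=None, spatial_k=3, temporal_k=3):
        super().__init__()
        mid_ch = mid_ch or out_ch
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, spatial_k, spatial_k),
                                 padding=(0, spatial_k // 2, spatial_k // 2))
        self.bn = nn.BatchNorm3d(mid_ch)
        self.relu = nn.ReLU(inplace=True)
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(temporal_k, 1, 1),
                                  padding=(temporal_k // 2, 0, 0))

    def forward(self, x):  # x: (N, C, T, H, W)
        return self.temporal(self.relu(self.bn(self.spatial(x))))
```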
The explosive growth in video streaming gives rise to challenges on performing video understanding at high accuracy and low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D CNN based methods can achieve good performance but are computationally intensive, making them expensive to deploy. In this paper, we propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance. Specifically, it can achieve the performance of 3D CNNs while maintaining a 2D CNN's complexity. TSM shifts part of the channels along the temporal dimension, thus facilitating information exchange among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling at zero computation and zero parameters. We also extend TSM to the online setting, which enables real-time low-latency online video recognition and video object detection. TSM is accurate and efficient: it ranked first place on the Something-Something leaderboard upon publication; on Jetson Nano and Galaxy Note8, it achieves a low latency of 13 ms and 35 ms for online video recognition. The code is available at: https://github.com/mit-han-lab/temporal-shift-module.
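The shift operation is simple enough to sketch directly; the fraction of shifted channels (`shift_div`) is a hyperparameter, and this is a sketch of the idea rather than the released implementation:

```python
import torch

def temporal_shift(x, num_frames, shift_div=8):
    """Move a fraction of channels one step forward/backward in time.
    Zero parameters, zero FLOPs. x: (N*T, C, H, W)."""
    nt, c, h, w = x.shape
    n = nt // num_frames
    x = x.view(n, num_frames, c, h, w)
    fold = c // shift_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                    # shift one group of channels backward in time
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]    # shift another group forward in time
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]               # leave the remaining channels untouched
    return out.view(nt, c, h, w)
```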
In video data, busy motion details from moving regions are conveyed within a specific frequency bandwidth in the frequency domain. Meanwhile, the remaining frequencies of the video are encoded with quiet information that contains substantial redundancy, which leads to low processing efficiency in existing video models that take raw RGB frames as input. In this paper, we consider allocating heavier computation to the processing of the significant busy information and lighter computation to the processing of the quiet information. We design a trainable Motion Band-Pass Module (MBPM) for separating busy information from quiet information in raw video data. By embedding the MBPM into a two-pathway CNN architecture, we define a Busy-Quiet Net (BQN). The efficiency of BQN is achieved by avoiding redundancy in the feature space processed by the two pathways: one operates on quiet features at low resolution, while the other processes busy features. The proposed BQN slightly outperforms recent video processing models on the Something-Something V1, Kinetics-400, UCF101, and HMDB51 datasets.
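A heavily simplified sketch of the busy/quiet split (illustrative only: the actual MBPM is a trainable band-pass module, and the second-difference initialization and the residual definition of the "quiet" part below are assumptions):

```python
import torch
import torch.nn as nn

class MotionBandPassStub(nn.Module):
    """Depthwise temporal filter initialized as a second difference [-1, 2, -1]:
    its response highlights fast ('busy') changes; the residual is treated as
    the slowly varying ('quiet') component."""
    def __init__(self, channels):
        super().__init__()
        self.filter = nn.Conv3d(channels, channels, kernel_size=(3, 1, 1),
                                padding=(1, 0, 0), groups=channels, bias=False)
        with torch.no_grad():
            self.filter.weight.zero_()
            self.filter.weight[:, :, 0] = -1.0
            self.filter.weight[:, :, 1] = 2.0
            self.filter.weight[:, :, 2] = -1.0

    def forward(self, x):              # x: (N, C, T, H, W)
        busy = self.filter(x)
        quiet = x - busy
        return busy, quiet
```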
Self-attention learns pairwise interactions to model long-range dependencies, yielding large improvements for video action recognition. In this paper, we seek a deeper understanding of self-attention for temporal modeling in videos. We first show that the entangled modeling of spatio-temporal information obtained by flattening all pixels is sub-optimal, failing to explicitly capture the temporal relationships between frames. To this end, we introduce Global Temporal Attention (GTA), which performs global temporal attention on top of spatial attention in a decoupled manner. We apply GTA to both pixels and semantically similar regions to capture temporal relationships at different levels of spatial granularity. Unlike conventional self-attention, which computes an instance-specific attention matrix, GTA directly learns a global attention matrix intended to encode temporal structures shared across different samples. We further augment GTA with a cross-channel multi-head fashion to exploit channel interactions for better temporal modeling. Extensive experiments on 2D and 3D networks demonstrate that our approach consistently enhances temporal modeling and provides state-of-the-art performance on three video action recognition datasets.
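A minimal sketch of the "global" (instance-agnostic) temporal attention idea, omitting the cross-channel multi-head design; the shape of the learned matrix and its softmax normalization are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalTemporalAttention(nn.Module):
    """A single T x T attention matrix is learned and shared across all samples,
    instead of being computed per instance from queries and keys."""
    def __init__(self, num_frames):
        super().__init__()
        self.attn_logits = nn.Parameter(torch.zeros(num_frames, num_frames))

    def forward(self, x):
        # x: (N, T, C) temporal sequence of (spatially pooled) features
        attn = F.softmax(self.attn_logits, dim=-1)       # (T, T), shared across samples
        return torch.einsum('st,ntc->nsc', attn, x)      # aggregate along the temporal axis
```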
For video recognition tasks, a global representation summarizing the whole content of a video snippet plays an important role in the final performance. However, existing video architectures usually generate it with a simple global average pooling (GAP) method, which has limited ability to capture the complex dynamics of videos. For image recognition tasks, there is evidence that covariance pooling has stronger representation ability than GAP. Unfortunately, such plain covariance pooling used in image recognition is an orderless representation, which cannot model the spatio-temporal structure inherent in videos. Therefore, this paper proposes a Temporal-attentive Covariance Pooling (TCP), inserted at the end of deep architectures, to produce powerful video representations. Specifically, our TCP first develops a temporal attention module to adaptively calibrate spatio-temporal features for the subsequent covariance pooling, approximately producing attentive covariance representations. Then, temporal covariance pooling performs temporal pooling of the attentive covariance representations to characterize the intra-frame correlations and inter-frame cross-correlations of the calibrated features. As such, the proposed TCP can capture complex temporal dynamics. Finally, a fast matrix power normalization is introduced to exploit the geometry of covariance representations. Note that our TCP is model-agnostic and can be flexibly integrated into any video architecture, resulting in TCPNet for effective video recognition. Extensive experiments on six benchmarks (e.g., Kinetics, Something-Something, and Charades) using various video architectures show that our TCPNet clearly outperforms its counterparts while having strong generalization ability. The source code is publicly available.
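A simplified sketch of temporal-attentive covariance pooling (the paper's exact attention design and the fast matrix power normalization are omitted; the linear scoring head below is an assumption):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveCovariancePooling(nn.Module):
    """Frames are weighted by learned temporal attention scores and a channel
    covariance matrix of the weighted features serves as the clip descriptor."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Linear(channels, 1)

    def forward(self, x):
        # x: (N, T, C, H, W)
        n, t, c, h, w = x.shape
        tokens = x.permute(0, 1, 3, 4, 2).reshape(n, t * h * w, c)
        frame_feat = x.mean(dim=(3, 4))                     # (N, T, C)
        w_t = F.softmax(self.score(frame_feat), dim=1)      # (N, T, 1) temporal attention
        w_tok = w_t.repeat_interleave(h * w, dim=1)         # broadcast frame weights to all positions
        tokens = tokens - tokens.mean(dim=1, keepdim=True)  # center the features
        cov = torch.einsum('npc,npd->ncd', tokens * w_tok, tokens)  # (N, C, C) attentive covariance
        return cov.flatten(1)
```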
Transformer-based methods have recently achieved great progress on 2D image-based vision tasks. For 3D video-based tasks such as action recognition, however, directly applying spatio-temporal transformers to video data brings heavy computation and memory burdens because of the greatly increased number of patches and the quadratic complexity of self-attention computation. How to efficiently and effectively model the 3D self-attention of video data remains a great challenge for transformers. In this paper, we propose a Temporal Patch Shift (TPS) method for efficient 3D self-attention modeling in transformers for video-based action recognition. TPS shifts part of the patches with a specific mosaic pattern in the temporal dimension, thus converting a vanilla spatial self-attention operation into a spatio-temporal one with almost no additional cost. As a result, we can compute 3D self-attention using nearly the same computation and memory cost as 2D self-attention. TPS is a plug-and-play module that can be inserted into existing 2D transformer models to enhance spatio-temporal feature learning. The proposed method achieves competitive performance with state-of-the-art methods on Something-Something V1 & V2, Diving-48, and Kinetics-400, while being much more efficient in computation and memory cost. The source code of TPS can be found at https://github.com/martinxm/tps.
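The patch-shift idea can be sketched as below; the checkerboard-like selection pattern used here is an assumption, since the paper describes richer mosaic patterns:

```python
import torch

def temporal_patch_shift(tokens, stride=1):
    """Replace a subset of patch tokens with tokens at the same spatial location
    from a neighbouring frame, so plain spatial self-attention afterwards mixes
    information across frames. tokens: (N, T, P, C), P spatial patches per frame."""
    n, t, p, c = tokens.shape
    shifted = tokens.clone()
    idx = torch.arange(p) % 2 == 1                                   # every other patch position
    shifted[:, :, idx] = torch.roll(tokens, shifts=stride, dims=1)[:, :, idx]
    return shifted
```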
In this paper, we develop an efficient multi-scale network to predict action classes in partial videos in an end-to-end manner. Unlike most existing methods with offline feature generation, our method directly takes frames as input and further models motion evolution on two different temporal scales. Therefore, we solve the complexity problems of two-stage modeling and the problem of insufficient temporal and spatial information at a single scale. Our proposed End-to-End MultiScale Network (E2EMSNet) is composed of two scales, named the segment scale and the observed global scale. The segment scale leverages temporal differences over consecutive frames for finer motion patterns by supplying 2D convolutions. For the observed global scale, a Long Short-Term Memory (LSTM) is incorporated to capture motion features of observed frames. Our model provides a simple and efficient modeling framework with a small computational cost. Our E2EMSNet is evaluated on three challenging datasets: BIT, HMDB51, and UCF101. The extensive experiments demonstrate the effectiveness of our method for action prediction in videos.
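A toy sketch of the two temporal scales (segment-scale frame differences fed to 2D convolutions, plus an LSTM over observed frames); every layer size and the fusion rule are placeholders, not the E2EMSNet architecture:

```python
import torch
import torch.nn as nn

class TwoScaleStub(nn.Module):
    def __init__(self, feat_dim=128, num_classes=51):
        super().__init__()
        # segment scale: 2D convs over consecutive-frame differences
        self.segment = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # observed global scale: per-frame features summarized by an LSTM
        self.frame = nn.Sequential(nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim + 32, num_classes)

    def forward(self, frames):                                    # frames: (N, T, 3, H, W)
        n, t = frames.shape[:2]
        diffs = (frames[:, 1:] - frames[:, :-1]).flatten(0, 1)
        seg = self.segment(diffs).view(n, t - 1, -1).mean(dim=1)  # segment-scale motion feature
        per_frame = self.frame(frames.flatten(0, 1)).view(n, t, -1)
        _, (h_n, _) = self.lstm(per_frame)                        # global feature of observed frames
        return self.head(torch.cat([h_n[-1], seg], dim=1))
```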
We present SlowFast networks for video recognition. Our model involves (i) a Slow pathway, operating at low frame rate, to capture spatial semantics, and (ii) a Fast pathway, operating at high frame rate, to capture motion at fine temporal resolution. The Fast pathway can be made very lightweight by reducing its channel capacity, yet can learn useful temporal information for video recognition. Our models achieve strong performance for both action classification and detection in video, and large improvements are pin-pointed as contributions by our SlowFast concept. We report state-of-the-art accuracy on major video recognition benchmarks, Kinetics, Charades and AVA. Code has been made available at: https://github.com/facebookresearch/SlowFast.
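A toy two-pathway stub that conveys the Slow/Fast split (placeholder convolutions stand in for the full backbones; the sampling ratio, channel ratio, and fusion are simplified assumptions):

```python
import torch
import torch.nn as nn

class SlowFastStub(nn.Module):
    """Slow path: few frames, many channels. Fast path: all frames, few channels."""
    def __init__(self, alpha=8, beta=8, channels=64, num_classes=400):
        super().__init__()
        self.alpha, fast_ch = alpha, channels // beta
        self.slow = nn.Conv3d(3, channels, kernel_size=(1, 7, 7),
                              stride=(1, 2, 2), padding=(0, 3, 3))
        self.fast = nn.Conv3d(3, fast_ch, kernel_size=(5, 7, 7),
                              stride=(1, 2, 2), padding=(2, 3, 3))
        self.head = nn.Linear(channels + fast_ch, num_classes)

    def forward(self, video):                        # video: (N, 3, T, H, W)
        slow_in = video[:, :, ::self.alpha]          # temporally sub-sampled input for the Slow path
        s = self.slow(slow_in).mean(dim=(2, 3, 4))   # global average pooling per pathway
        f = self.fast(video).mean(dim=(2, 3, 4))
        return self.head(torch.cat([s, f], dim=1))
```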
The purpose of this study is to determine whether current video datasets have sufficient data for training very deep convolutional neural networks (CNNs) with spatio-temporal three-dimensional (3D) kernels. Recently, the performance levels of 3D CNNs in the field of action recognition have improved significantly. However, to date, conventional research has only explored relatively shallow 3D architectures. We examine the architectures of various 3D CNNs from relatively shallow to very deep ones on current video datasets. Based on the results of those experiments, the following conclusions could be obtained: (i) training resulted in significant overfitting for UCF-101, HMDB-51, and ActivityNet but not for Kinetics. (ii) The Kinetics dataset has sufficient data for training of deep 3D CNNs, and enables training of up to 152 ResNet layers, interestingly similar to 2D ResNets on ImageNet. ResNeXt-101 achieved 78.4% average accuracy on the Kinetics test set. (iii) Kinetics-pretrained simple 3D architectures outperform complex 2D architectures, and the pretrained ResNeXt-101 achieved 94.5% and 70.2% on UCF-101 and HMDB-51, respectively. The use of 2D CNNs trained on ImageNet has produced significant progress in various image tasks. We believe that using deep 3D CNNs together with Kinetics will retrace the successful history of 2D CNNs and ImageNet, and stimulate advances in computer vision for videos. The codes and pretrained models used in this study are publicly available.
Convolution has been the most important feature transform in modern neural networks, driving the advance of deep learning. The recent emergence of Transformer networks, which replace convolution layers with self-attention blocks, has revealed the limitation of stationary convolution kernels and opened the door to an era of dynamic feature transforms. Existing dynamic transforms, including self-attention, are however all limited for video understanding, where correspondence relations in space and time, i.e., motion information, are crucial for effective representation. In this work, we introduce a relational feature transform, dubbed Relational Self-Attention (RSA), that leverages the rich spatio-temporal relational structure in videos by dynamically generating relational kernels and aggregating relational contexts. Our experiments and ablation studies show that RSA networks substantially outperform their convolution and self-attention counterparts, achieving state-of-the-art results on standard motion-centric benchmarks for video action recognition, such as Something-Something V1 & V2, Diving-48, and FineGym.
The ultimate goal of continuous sign language recognition (CSLR) is to facilitate communication between special people and normal people, which requires a certain degree of real-time performance and deployability of the model. However, previous research on CSLR has paid little attention to real-time performance and deployability. In order to improve them, this paper proposes a zero-parameter, zero-computation temporal superposition crossover module (TSCM), and combines it with 2D convolution to form a "TSCM+2D convolution" hybrid convolution, which gives 2D convolution strong spatial-temporal modelling capability with zero parameter increase and lower deployment cost compared with other spatial-temporal convolutions. The overall CSLR model based on TSCM is built on the improved ResBlockT network in this paper. The hybrid "TSCM+2D convolution" is applied to the ResBlock of the ResNet network to form the new ResBlockT, and random gradient stop and multi-level CTC loss are introduced to train the model, which reduces the final recognition WER while reducing the training memory usage, and extends the ResNet network from the image classification task to the video recognition task. In addition, this study is the first in CSLR to use only 2D convolution to extract temporal-spatial features of sign language videos for end-to-end recognition. Experiments on two large-scale continuous sign language datasets demonstrate the effectiveness of the proposed method, which achieves highly competitive results.
Efficiently modeling spatial-temporal information in videos is crucial for action recognition. To achieve this goal, state-of-the-art methods typically employ convolution operators and dense interaction modules such as non-local blocks. However, these methods cannot accurately fit the diverse events in videos. On the one hand, the adopted convolutions have fixed scales and thus struggle with events of various scales. On the other hand, the dense interaction modeling paradigm only achieves sub-optimal performance when action-irrelevant parts bring additional noise into the final prediction. In this paper, we propose a unified action recognition framework that investigates the dynamic nature of video content by introducing the following designs. First, when extracting local cues, we generate spatial-temporal kernels of dynamic scale to adaptively fit the diverse events. Second, to accurately aggregate these cues into a global video representation, we propose to mine the interactions only among a few selected foreground objects by a transformer, which yields a sparse paradigm. We call the proposed framework Event Adaptive Network (EAN), because both key designs are adaptive to the input video content. To exploit short-term motions within local segments, we propose a novel and efficient Latent Motion Code (LMC) module, which further improves the performance of the framework. Extensive experiments on several large-scale video datasets, e.g., Something-Something, Kinetics, and Diving48, verify that our model achieves state-of-the-art or competitive performance at low FLOPs. Code is available at: https://github.com/tianyuan168326/ean-pytorch.
Efficient spatio-temporal modeling is an important yet challenging problem for video action recognition. Existing state-of-the-art methods exploit neighboring feature differences to obtain motion cues for short-term temporal modeling with a simple convolution. However, a single local convolution cannot handle various actions because of its limited receptive field. Besides, the motion noise brought by camera movement will also harm the quality of the extracted motion features. In this paper, we propose a Temporal Saliency Integration (TSI) block, which mainly consists of a Salient Motion Excitation (SME) module and a Cross-perception Temporal Integration (CTI) module. Specifically, SME aims to highlight motion-sensitive regions through spatial-level local-global motion modeling, where saliency alignment and pyramidal motion modeling are conducted successively between adjacent frames to capture motion dynamics with less noise caused by misaligned backgrounds. CTI is designed to perform multi-perception temporal modeling through a group of separate 1D convolutions, respectively. Meanwhile, temporal interactions across different perceptions are integrated with an attention mechanism. Through these two modules, long-short-term temporal relationships can be encoded efficiently by introducing limited additional parameters. Extensive experiments are conducted on several popular benchmarks (i.e., Something-Something, Kinetics-400, UCF-101, and HMDB-51), which demonstrate the effectiveness of our proposed method.
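A loose sketch of the multi-perception temporal modeling in CTI: parallel depthwise 1D convolutions with different kernel sizes, fused by softmax weights (the kernel sizes and the fusion rule are assumptions, not the paper's design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiPerceptionTemporalConv(nn.Module):
    def __init__(self, channels, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes)
        self.fuse = nn.Parameter(torch.zeros(len(kernel_sizes)))

    def forward(self, x):
        # x: (N, C, T) channel-wise temporal signals (e.g. after spatial pooling)
        outs = torch.stack([b(x) for b in self.branches], dim=0)  # (branches, N, C, T)
        weights = F.softmax(self.fuse, dim=0).view(-1, 1, 1, 1)   # attention over perceptions
        return (weights * outs).sum(dim=0)
```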
In recent years, video action recognition based on convolutional networks has become encouragingly popular; however, limited by long-range non-linear temporal relation modeling and reverse motion information modeling, the performance of existing models is severely constrained. To address this pressing problem, we introduce a Temporal Transformer Network with Self-supervision (TTSN). Our high-performance TTSN mainly consists of a temporal transformer module and a temporal sequence self-supervision module. Concisely, we utilize the efficient temporal transformer module to model the non-linear temporal dependencies among non-local frames, which significantly enhances complex motion feature representations. The temporal sequence self-supervision module we adopt follows a streamlined "random batch, random channel" strategy to reverse the sequence of video frames, allowing robust extraction of motion information representations from the reversed temporal dimension and improving the generalization capability of the model. Extensive experiments on three widely used datasets (HMDB51, UCF101, and Something-Something) conclusively demonstrate that our proposed TTSN is promising, as it successfully achieves state-of-the-art performance for action recognition.
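The sequence-reversal self-supervision can be illustrated with a small helper; the "random batch, random channel" strategy is simplified here to reversing whole clips, which is an assumption:

```python
import torch

def randomly_reverse_frames(clips, reverse_prob=0.5):
    """Flip a random subset of clips along the temporal axis and return binary
    labels recording which clips were reversed, to supervise an order-prediction
    head. clips: (N, T, C, H, W)."""
    n = clips.size(0)
    is_reversed = torch.rand(n, device=clips.device) < reverse_prob
    out = clips.clone()
    out[is_reversed] = torch.flip(clips[is_reversed], dims=[1])   # reverse the frame order
    return out, is_reversed.long()
```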
Since their introduction in 2020, Vision Transformers (ViT) have been steadily breaking records on many vision tasks and are often described as an all-round replacement for ConvNets; they are, however, unfriendly to embedded devices. In addition, recent studies have shown that standard ConvNets, if redesigned and trained appropriately, can compete with ViT in terms of accuracy and scalability. In this paper, we adopt the modernized structure of ConvNets to design a new backbone for action recognition. In particular, our main target is to serve industrial product deployment, such as FPGA boards that only support standard operations. Therefore, our network consists purely of 2D convolutions, without using any 3D convolutions, long-range attention plugins, or Transformer blocks. When trained with far fewer epochs (5x-10x fewer), our backbone surpasses methods based on (2+1)D and 3D convolutions and obtains comparable results with ViT on two benchmark datasets.
Group convolution has been shown to offer great computational savings in various 2D convolutional architectures for image classification. It is natural to ask: 1) if group convolution can help to alleviate the high computational cost of video classification networks; 2) what factors matter the most in 3D group convolutional networks; and 3) what are good computation/accuracy trade-offs with 3D group convolutional networks. This paper studies the effects of different design choices in 3D group convolutional networks for video classification. We empirically demonstrate that the amount of channel interactions plays an important role in the accuracy of 3D group convolutional networks. Our experiments suggest two main findings. First, it is a good practice to factorize 3D convolutions by separating channel interactions and spatiotemporal interactions as this leads to improved accuracy and lower computational cost. Second, 3D channel-separated convolutions provide a form of regularization, yielding lower training accuracy but higher test accuracy compared to 3D convolutions. These two empirical findings lead us to design an architecture - Channel-Separated Convolutional Network (CSN) - which is simple, efficient, yet accurate. On Sports1M, Kinetics, and Something-Something, our CSNs are comparable with or better than the state-of-the-art while being 2-3 times more efficient.
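The channel-separated factorization can be sketched as a pointwise 1x1x1 convolution for channel interactions followed by a depthwise 3x3x3 convolution for spatiotemporal interactions (the layer ordering follows one of the paper's variants as an assumption):

```python
import torch.nn as nn

class ChannelSeparatedConv3d(nn.Module):
    """Channel interactions (pointwise) and spatiotemporal interactions (depthwise)
    are handled by separate convolutions."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)           # channel mixing only
        self.depthwise = nn.Conv3d(out_ch, out_ch, kernel_size=k,
                                   padding=k // 2, groups=out_ch)          # per-channel 3D conv

    def forward(self, x):   # x: (N, C, T, H, W)
        return self.depthwise(self.pointwise(x))
```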