With the widespread deployment of installed cameras, video-based surveillance methods have attracted considerable attention for different purposes, such as assisted living. Temporal redundancy and the huge size of raw video are the two most common problems related to video-processing algorithms. Most existing methods focus mainly on improving accuracy by exploring consecutive frames, which is laborious and unsuitable for real-time applications. Since video is mostly stored and transmitted in a compressed format, it is available in this form on many devices. Compressed video contains much useful information, such as motion vectors and quantized coefficients. Using this readily available information properly can greatly improve the performance of video-understanding methods. This paper presents a method that uses residual data, which is directly available in the compressed video and can be obtained through a partial decoding process. In addition, a method for accumulating similar residuals is proposed, which greatly reduces the number of frames processed for recognition. Applying a neural network exclusively to the accumulated residuals in the compressed domain accelerates performance, while the classification results remain highly competitive with raw-video methods.
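A minimal sketch of the accumulate-then-classify idea described above, assuming residual frames have already been extracted by a partial decoder; the `ResidualAccumulator` class, the energy threshold, and the tiny 2D CNN are illustrative placeholders, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ResidualAccumulator:
    """Sum similar residuals until enough motion energy has built up."""
    def __init__(self, energy_threshold: float = 2.0):
        self.energy_threshold = energy_threshold  # assumed tuning parameter
        self.buffer = None

    def push(self, residual: torch.Tensor):
        """residual: (3, H, W) residual frame from partial decoding."""
        self.buffer = residual if self.buffer is None else self.buffer + residual
        if self.buffer.abs().mean() > self.energy_threshold:
            out, self.buffer = self.buffer, None
            return out  # accumulated residual, ready for classification
        return None

# A small 2D CNN stands in for the network specialized on accumulated residuals.
classifier = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 101),
)

acc = ResidualAccumulator()
for _ in range(16):                               # stand-in residual stream
    chunk = acc.push(torch.randn(3, 224, 224))
    if chunk is not None:                         # classify only when ready,
        logits = classifier(chunk.unsqueeze(0))   # so far fewer frames are processed
```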
Video is complex due to large variations in motion and the abundance of fine-grained visual detail. Abstracting useful information from such information-intensive media requires exhaustive computational resources. This paper studies a two-step alternative that first condenses the video sequence into an informative "frame" and then exploits an off-the-shelf image recognition system on the synthesized frame. The central question is how to define "useful information" and then distill it from a video sequence into one synthetic frame. This paper presents a novel Informative Frame Synthesis (IFS) architecture that integrates three objective tasks, i.e., appearance reconstruction, video categorization, and motion estimation, and two regularizers, i.e., adversarial learning and color consistency. Each task equips the synthesized frame with one capability, while each regularizer enhances its visual quality. By jointly learning the frame synthesis in an end-to-end manner, the generated frame is expected to encapsulate the required spatio-temporal information for video analysis. Extensive experiments are conducted on the large-scale Kinetics dataset. Compared to baseline methods that map a video sequence to a single image, IFS shows superior performance. More remarkably, IFS consistently demonstrates evident improvements for both image-based 2D networks and clip-based 3D networks, and achieves performance comparable to state-of-the-art methods at a lower computational cost.
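As a hedged illustration of how such a multi-task objective could be wired together, the sketch below combines five loss terms matching the abstract's three tasks and two regularizers; `ifs_loss`, its reconstruction/color targets, and the 0.1 weights are assumptions, not the published formulation.

```python
import torch
import torch.nn.functional as F

def ifs_loss(frame, clip, logits, labels, flow_pred, flow_gt, d_score):
    """frame: (B,3,H,W) synthesized; clip: (B,T,3,H,W); d_score: discriminator logits."""
    l_app = F.l1_loss(frame, clip.mean(dim=1))          # appearance reconstruction
    l_cls = F.cross_entropy(logits, labels)             # video categorization
    l_mot = F.l1_loss(flow_pred, flow_gt)               # motion estimation
    l_adv = F.binary_cross_entropy_with_logits(         # adversarial regularizer
        d_score, torch.ones_like(d_score))
    l_col = F.mse_loss(frame.mean(dim=(2, 3)),          # color-consistency regularizer
                       clip.mean(dim=(1, 3, 4)))        # (mean color per channel)
    return l_app + l_cls + l_mot + 0.1 * l_adv + 0.1 * l_col
```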
Compressed video action recognition has recently drawn attention because it significantly reduces storage and computation costs by replacing raw video with sparsely sampled RGB frames and compressed motion cues (e.g., motion vectors and residuals). However, this task severely suffers from coarse and noisy dynamics and from the insufficient fusion of the heterogeneous RGB and motion modalities. To address these two issues, this paper proposes a novel framework, the Motion-Enhanced Attentive Cross-modality Interaction Network (MEACI-Net). It follows a two-stream architecture, i.e., one for the RGB modality and the other for the motion modality. In particular, the motion stream employs a multi-scale block with a denoising module to enhance representation learning. The interaction between the two streams is then strengthened by introducing a Selective Motion Complement (SMC) module and a Cross-Modality Augment (CMA) module, where SMC complements the RGB modality with spatio-temporally attentive local motion features and CMA further combines the two modalities with selective feature augmentation. Extensive experiments on the UCF-101, HMDB-51, and Kinetics-400 benchmarks demonstrate the effectiveness and efficiency of MEACI-Net.
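The following rough sketch shows one plausible form of a motion-to-RGB complement module; the module name follows the abstract, but the internals (a 1x1 convolution producing a sigmoid attention map) are guesses, not the published design.

```python
import torch
import torch.nn as nn

class SelectiveMotionComplement(nn.Module):
    """Hypothetical SMC-style block: motion features gate themselves, then
    complement the RGB stream additively."""
    def __init__(self, channels: int):
        super().__init__()
        self.att = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, rgb_feat, motion_feat):
        # spatially attentive selection of local motion, added to the RGB stream
        return rgb_feat + self.att(motion_feat) * motion_feat

smc = SelectiveMotionComplement(64)
fused = smc(torch.randn(2, 64, 28, 28), torch.randn(2, 64, 28, 28))
```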
Human action recognition is an important application area in computer vision. Its primary aim is to accurately describe human actions and their interactions from previously unseen data sequences acquired from sensors. The ability to recognize, understand, and predict complex human actions enables the construction of many important applications, such as intelligent surveillance systems, human-computer interfaces, healthcare, security, and military applications. In recent years, the computer vision community has paid particular attention to deep learning. This paper surveys the current state of the art in action recognition using video analysis with deep learning techniques. We present the most important deep learning models for recognizing human actions and analyze them to provide the current progress of deep learning algorithms applied to the human action recognition problem, highlighting their advantages and disadvantages. Based on a quantitative analysis of the recognition accuracies reported in the literature, our study identifies the state-of-the-art deep architectures in action recognition and then provides current trends and open problems for future work in this field.
The analysis of human interactions is an important research topic in human motion analysis. It has been studied using either first-person vision (FPV) or third-person vision (TPV). However, joint learning over both views has so far attracted little attention. One reason is the lack of suitable datasets covering both FPV and TPV. In addition, existing benchmark datasets for either FPV or TPV have several limitations, including a limited number of samples, participants, interaction categories, and modalities. In this work, we contribute a large-scale human interaction dataset, namely the FT-HID dataset. FT-HID contains pair-aligned samples of first-person and third-person visions. The dataset was collected from 109 distinct subjects and contains 90K samples across three modalities. The dataset has been validated using several existing action recognition methods. In addition, we introduce a novel multi-view interaction mechanism for skeleton sequences, and a joint-learning multi-stream framework for first-person and third-person visions. Both methods yield promising results on the FT-HID dataset. It is expected that the introduction of this view-aligned large-scale dataset will promote the development of both FPV and TPV, and of joint-learning techniques for human action analysis. The dataset and code are available at https://github.com/endlichere/ft-hid.
In this paper, we develop an efficient multi-scale network to predict action classes in partial videos in an end-to-end manner. Unlike most existing methods with offline feature generation, our method directly takes frames as input and further models motion evolution on two different temporal scales. Therefore, we avoid the complexity of two-stage modeling and the problem of insufficient temporal and spatial information at a single scale. Our proposed End-to-End MultiScale Network (E2EMSNet) is composed of two scales, named the segment scale and the observed global scale. The segment scale leverages temporal differences over consecutive frames for finer motion patterns by applying 2D convolutions. For the observed global scale, a Long Short-Term Memory (LSTM) is incorporated to capture motion features of observed frames. Our model provides a simple and efficient modeling framework with a small computational cost. Our E2EMSNet is evaluated on three challenging datasets: BIT, HMDB51, and UCF101. The extensive experiments demonstrate the effectiveness of our method for action prediction in videos.
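A condensed sketch of the two scales under assumed shapes: frame differences through a small 2D CNN for the segment scale, and an LSTM over the resulting features for the observed global scale. The layer sizes are placeholders, not those of E2EMSNet.

```python
import torch
import torch.nn as nn

class E2EMSNetSketch(nn.Module):
    def __init__(self, feat_dim=128, num_classes=8):
        super().__init__()
        self.segment_cnn = nn.Sequential(        # segment scale: 2D convs on diffs
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, clip):                     # clip: (B, T, 3, H, W)
        diff = clip[:, 1:] - clip[:, :-1]        # finer motion via frame differences
        b, t, c, h, w = diff.shape
        seg = self.segment_cnn(diff.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(seg)                  # observed global scale
        return self.head(out[:, -1])             # predict from the last observed step

logits = E2EMSNetSketch()(torch.randn(2, 16, 3, 112, 112))
```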
This chapter aims to aid the development of cyber-physical systems (CPS) that automatically understand events and activities in various applications of video surveillance. Such events are mostly captured by drones, CCTVs, or novice and unskilled individuals on low-end devices. These videos are unconstrained due to many quality factors, which makes them extremely challenging. We present an extensive account of the various approaches proposed over the years to address this problem, ranging from methods based on structure from motion (SfM) to recent solution frameworks involving deep neural networks. We show that long-term motion patterns alone play a pivotal role in the task of recognizing events. Each video is therefore significantly represented by a fixed number of keyframes selected using a graph-based approach. Only temporal features are exploited, using a hybrid convolutional neural network (CNN) + recurrent neural network (RNN) architecture. The results we obtain are encouraging, as they outperform standard temporal CNNs and are comparable to those using spatial information along with motion cues. Exploring multistream models further, we devise a multi-tier fusion strategy for the spatial and temporal wings of the network. A consolidated representation of the individual prediction vectors at the video and frame levels is obtained using a biased hybrid technique. The fusion strategy endows us with higher accuracy than each individual stage, thereby achieving a strong consensus in classification. Results are recorded on four benchmark datasets widely used in the action recognition domain, namely CCV, HMDB, UCF-101, and KCV. It is inferable that focusing on better classification of video sequences will certainly lead to robust actuation systems designed for event surveillance and object-cum-activity tracking.
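A toy illustration of what a biased late fusion of per-stream prediction vectors might look like; the `bias` value and the renormalization are hypothetical stand-ins, not the chapter's exact technique.

```python
import torch

def biased_fusion(spatial_probs, temporal_probs, bias: float = 0.6):
    """Both inputs: (num_classes,) probability vectors; bias favors one wing."""
    fused = bias * spatial_probs + (1.0 - bias) * temporal_probs
    return fused / fused.sum()                    # keep a valid distribution

p = biased_fusion(torch.softmax(torch.randn(101), 0),
                  torch.softmax(torch.randn(101), 0))
```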
Human activity recognition (HAR) using drone-mounted cameras has attracted considerable interest from the computer vision research community in recent years. A robust and efficient HAR system has a pivotal role in fields like video surveillance, crowd behavior analysis, sports analysis, and human-computer interaction. What makes it challenging are the complex poses, understanding different viewpoints, and the environmental scenarios where the action is taking place. To address such complexities, in this paper, we propose a novel Sparse Weighted Temporal Attention (SWTA) module to utilize sparsely sampled video frames for obtaining global weighted temporal attention. The proposed SWTA comprises two parts: first, a temporal segment network that sparsely samples a given set of frames; second, weighted temporal attention, which fuses attention maps derived from optical flow with raw RGB images. This is followed by a basenet network, which comprises a convolutional neural network (CNN) module along with fully connected layers that provide us with activity recognition. The SWTA network can be used as a plug-in module to the existing deep CNN architectures, optimizing them to learn temporal information by eliminating the need for a separate temporal stream. It has been evaluated on three publicly available benchmark datasets, namely Okutama, MOD20, and Drone-Action. The proposed model achieves an accuracy of 72.76%, 92.56%, and 78.86% on the respective datasets, thereby surpassing the previous state-of-the-art performances by a margin of 25.26%, 18.56%, and 2.94%, respectively.
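The sketch below illustrates the two parts under stated assumptions: TSN-style sparse sampling of one frame per segment, and a simple reweighting of RGB frames by flow-derived attention maps; the `1 + attention` boosting rule is an assumption, not the published SWTA fusion.

```python
import torch

def sparse_sample(num_frames: int, num_segments: int) -> torch.Tensor:
    """One random frame index per equal-length segment (TSN-style sampling)."""
    seg_len = num_frames // num_segments
    offsets = torch.randint(0, seg_len, (num_segments,))
    return torch.arange(num_segments) * seg_len + offsets

def weighted_temporal_attention(rgb, flow_attention):
    """rgb: (K, 3, H, W); flow_attention: (K, 1, H, W) with values in [0, 1]."""
    return rgb * (1.0 + flow_attention)   # attention-boosted frames for the basenet

idx = sparse_sample(num_frames=120, num_segments=8)
frames = torch.randn(8, 3, 224, 224)     # frames gathered at idx in a real pipeline
att = torch.rand(8, 1, 224, 224)         # attention maps derived from optical flow
boosted = weighted_temporal_attention(frames, att)
```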
Spatiotemporal and motion features are two complementary and crucial information for video action recognition. Recent state-of-the-art methods adopt a 3D CNN stream to learn spatiotemporal features and another flow stream to learn motion features. In this work, we aim to efficiently encode these two features in a unified 2D framework. To this end, we first propose an STM block, which contains a Channel-wise SpatioTemporal Module (CSTM) to present the spatiotemporal features and a Channel-wise Motion Module (CMM) to efficiently encode motion features. We then replace original residual blocks in the ResNet architecture with STM blocks to form a simple yet effective STM network by introducing very limited extra computation cost. Extensive experiments demonstrate that the proposed STM network outperforms the state-of-the-art methods on both temporal-related datasets (i.e., Something-Something v1 & v2 and Jester) and scene-related datasets (i.e., Kinetics-400, UCF-101, and HMDB-51) with the help of encoding spatiotemporal and motion features together. * The work was done during an internship at SenseTime.
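One plausible reading of a channel-wise motion module is sketched below: a depthwise convolution transforms the features of frame t+1, and the difference with frame t acts as a feature-level motion cue. This follows the spirit of CMM; the exact published design may differ.

```python
import torch
import torch.nn as nn

class ChannelwiseMotion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # depthwise conv: one filter per channel, keeping the module "channel-wise"
        self.dw = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)

    def forward(self, feats):                     # feats: (B, T, C, H, W)
        b, t, c, h, w = feats.shape
        nxt = self.dw(feats[:, 1:].reshape(-1, c, h, w)).reshape(b, t - 1, c, h, w)
        motion = nxt - feats[:, :-1]              # channel-wise feature differences
        pad = torch.zeros_like(motion[:, :1])     # pad so output keeps T steps
        return torch.cat([motion, pad], dim=1)

m = ChannelwiseMotion(64)(torch.randn(2, 8, 64, 28, 28))
```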
Implicit neural representations (INR) have emerged as a powerful paradigm for representing signals such as images, videos, 3D shapes, etc. Although it has shown the ability to represent fine detail, its efficiency as a data representation has not been extensively studied. In an INR, the data is stored in the form of the parameters of a neural network, and general-purpose optimization algorithms do not usually exploit the spatial and temporal redundancy in signals. In this paper, we propose a novel INR approach for representing and compressing videos by explicitly removing data redundancy. Instead of storing raw RGB colors, we propose Neural Residual Flow Fields (NRFF), which use motion information across video frames together with residuals. Motion information, which is generally smoother and less complex than the raw signal, requires far fewer parameters. Furthermore, reusing redundant pixel values further improves the network parameter efficiency. Experimental results show that the proposed method outperforms baseline methods by a significant margin. The code is available at https://github.com/daniel03c1/eff_video_representation.
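A toy sketch of the flow-plus-residual decomposition: a coordinate MLP predicts a backward flow and an RGB residual per pixel, and the frame is rebuilt by warping a reference frame and adding the residual. The tiny MLP, resolution, and time encoding are placeholders, not the NRFF architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

H = W = 32
mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 5))  # (dx, dy, r, g, b)

ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W),
                        indexing="ij")
t = torch.full_like(xs, 0.5)                        # normalized frame time
coords = torch.stack([xs, ys, t], dim=-1).reshape(-1, 3)

out = mlp(coords).reshape(H, W, 5)
flow, residual = out[..., :2], out[..., 2:].permute(2, 0, 1)

ref = torch.rand(1, 3, H, W)                        # previously decoded frame
grid = torch.stack([xs, ys], dim=-1) + flow         # sample locations in [-1, 1]
warped = F.grid_sample(ref, grid.unsqueeze(0), align_corners=True)
frame = warped + residual.unsqueeze(0)              # warped reference + residual
```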
The large video data generated in today's smart cities has raised concerns from the perspective of its purposeful usage, where surveillance cameras, among other sources, are the most prominent resources contributing to the massive amount of data, making its automated analysis a demanding task in terms of computation and precision. Violence detection (VD), broadly falling under the action and activity recognition domain, is used to analyze large video data for abnormal actions caused by humans. Traditionally, the VD literature was based on manually designed features, though advances have since produced deep learning based standalone models for real-time VD analysis. This paper focuses on deep sequence learning approaches along with localization strategies for the detected violence. The overview also dives into the initial image-processing and machine-learning-based VD literature and the advantages it may have, such as efficiency, compared to current complex models. In addition, datasets are discussed to provide an analysis of current models, explaining their pros and cons with future directions for the VD domain derived from an in-depth analysis of previous methods.
We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3 × 3 × 3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8% accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.
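A minimal homogeneous 3D ConvNet in the C3D spirit, with every kernel 3x3x3 as finding (2) suggests; the widths and depth here are far smaller than the published C3D and are meant only to show the pattern.

```python
import torch
import torch.nn as nn

c3d_like = nn.Sequential(
    nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d((1, 2, 2)),                      # keep early temporal resolution
    nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(128, 101),                          # e.g., UCF101 classes
)

logits = c3d_like(torch.randn(1, 3, 16, 112, 112))   # (B, C, T, H, W) clip
```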
In this paper, we propose a new video representation learning method, named Temporal Squeeze (TS) pooling, which can extract the essential movement information from a long sequence of video frames and map it onto a set of a few images, named Squeezed Images. By embedding temporal squeeze pooling as a layer into off-the-shelf convolutional neural networks (CNN), we design a new video classification model, named the Temporal Squeeze Network (TeSNet). The resulting Squeezed Images contain the essential movement information from the video frames, corresponding to the optimization of the video classification task. We evaluate our architecture on two video classification benchmarks, and the results achieved are compared to the state of the art.
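One plausible reading of temporal squeeze pooling is a learned linear mixing over the time axis, sketched below; the softmax-weighted `einsum` is an assumption about the layer, not its published definition.

```python
import torch
import torch.nn as nn

class TemporalSqueeze(nn.Module):
    """Map T input frames onto K "squeezed images" with trainable weights."""
    def __init__(self, in_frames: int, out_images: int):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(out_images, in_frames) * 0.1)

    def forward(self, clip):                       # clip: (B, T, C, H, W)
        w = torch.softmax(self.weights, dim=1)     # each output mixes all frames
        return torch.einsum("kt,btchw->bkchw", w, clip)

squeezed = TemporalSqueeze(in_frames=64, out_images=2)(torch.randn(1, 64, 3, 112, 112))
# squeezed: (1, 2, 3, 112, 112) -> feed each squeezed image to an off-the-shelf CNN
```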
We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.
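A schematic version of the two-stream layout: a spatial net on a single RGB frame and a temporal net on a stack of L = 10 optical-flow fields (2L input channels), fused by averaging softmax scores. The tiny backbones stand in for the paper's ConvNets.

```python
import torch
import torch.nn as nn

def tiny_convnet(in_ch: int, num_classes: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes),
    )

spatial_net = tiny_convnet(3, 101)       # appearance from a still frame
temporal_net = tiny_convnet(20, 101)     # motion from 10 stacked (dx, dy) flow fields

rgb = torch.randn(1, 3, 224, 224)
flow_stack = torch.randn(1, 20, 224, 224)
fused = (spatial_net(rgb).softmax(-1) + temporal_net(flow_stack).softmax(-1)) / 2
```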
The increasing number of surveillance cameras and security concerns have made automatic violent activity detection from surveillance footage an active area for research. Modern deep learning methods have achieved good accuracy in violence detection and proved to be successful because of their applicability in intelligent surveillance systems. However, the models are computationally expensive and large in size because of their inefficient methods for feature extraction. This work presents a novel architecture for violence detection called Two-stream Multi-dimensional Convolutional Network (2s-MDCN), which uses RGB frames and optical flow to detect violence. Our proposed method extracts temporal and spatial information independently by 1D, 2D, and 3D convolutions. Despite combining multi-dimensional convolutional networks, our models are lightweight and efficient due to reduced channel capacity, yet they learn to extract meaningful spatial and temporal information. Additionally, combining RGB frames and optical flow yields 2.2% more accuracy than a single RGB stream. Regardless of having less complexity, our models obtained state-of-the-art accuracy of 89.7% on the largest violence detection benchmark dataset.
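The sketch below illustrates mixing 1D (temporal), 2D (spatial), and 3D convolutions on a clip tensor with a deliberately small channel capacity, as the abstract describes; the exact block layout of 2s-MDCN is not reproduced here.

```python
import torch
import torch.nn as nn

class MultiDimBlock(nn.Module):
    def __init__(self, c_in=3, c_mid=8):          # small channel capacity
        super().__init__()
        self.conv3d = nn.Conv3d(c_in, c_mid, (3, 3, 3), padding=1)
        self.conv2d = nn.Conv3d(c_in, c_mid, (1, 3, 3), padding=(0, 1, 1))  # spatial only
        self.conv1d = nn.Conv3d(c_in, c_mid, (3, 1, 1), padding=(1, 0, 0))  # temporal only

    def forward(self, x):                          # x: (B, C, T, H, W)
        return torch.cat([self.conv3d(x), self.conv2d(x), self.conv1d(x)], dim=1)

feats = MultiDimBlock()(torch.randn(1, 3, 16, 112, 112))   # (1, 24, 16, 112, 112)
```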
Vision-based human activity recognition has emerged as one of the essential research areas in video analytics. Over the last decade, numerous advanced deep learning algorithms have been introduced to recognize complex human actions from video streams. These deep learning algorithms have shown impressive performance on the human activity recognition task. However, these newly introduced methods focus exclusively on either model performance or model effectiveness in terms of computational efficiency and robustness, resulting in a biased trade-off in their proposals for addressing the challenging human activity recognition problem. To overcome the limitations of contemporary deep learning models for human activity recognition, this paper proposes a computationally efficient yet generic spatial-temporal cascaded framework that exploits deep discriminative spatial and temporal features for human activity recognition. To represent human actions efficiently, we propose an efficient dual-attention convolutional neural network (CNN) architecture that leverages a unified channel-spatial attention mechanism to extract human-centric salient features in video frames. The dual channel-spatial attention layers, together with the convolutional layers, learn to be more attentive in the spatial receptive fields containing objects within the feature maps. The extracted discriminative salient features are then forwarded to a stacked bidirectional gated recurrent unit (Bi-GRU) for long-term temporal modeling and the recognition of human actions using both forward and backward pass gradient learning. Extensive experiments were conducted, and the obtained results show that the proposed framework attains an improvement in execution time of up to 167x compared to most contemporary action recognition methods.
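A condensed sketch of the described cascade, under simplifying assumptions: a channel-spatial attention over per-frame feature maps, followed by a stacked bidirectional GRU for temporal modeling. The attention internals are assumptions about the framework, not its published layers.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, c: int):
        super().__init__()
        self.channel = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                     nn.Conv2d(c, c, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(c, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):                          # x: (B, C, H, W)
        x = x * self.channel(x)                    # which channels matter
        return x * self.spatial(x)                 # where in the frame they matter

attn = ChannelSpatialAttention(64)
gru = nn.GRU(64, 128, num_layers=2, batch_first=True, bidirectional=True)

frames = torch.randn(2 * 16, 64, 28, 28)           # (B*T) per-frame feature maps
salient = attn(frames).mean(dim=(2, 3)).reshape(2, 16, 64)
temporal, _ = gru(salient)                         # long-term temporal modeling
```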
Temporal modeling is key for action recognition in videos. It normally considers both short-range motions and long-range aggregations. In this paper, we propose a Temporal Excitation and Aggregation (TEA) block, including a motion excitation (ME) module and a multiple temporal aggregation (MTA) module, specifically designed to capture both short- and long-range temporal evolution. In particular, for short-range motion modeling, the ME module calculates the feature-level temporal differences from spatiotemporal features. It then utilizes the differences to excite the motion-sensitive channels of the features. The long-range temporal aggregations in previous works are typically achieved by stacking a large number of local temporal convolutions. Each convolution processes a local temporal window at a time. In contrast, the MTA module proposes to deform the local convolution to a group of sub-convolutions, forming a hierarchical residual architecture. Without introducing additional parameters, the features will be processed with a series of sub-convolutions, and each frame could complete multiple temporal aggregations with neighborhoods. The final equivalent receptive field of the temporal dimension is accordingly enlarged, which is capable of modeling the long-range temporal relationship over distant frames. The two components of the TEA block are complementary in temporal modeling. Finally, our approach achieves impressive results at low FLOPs on several action recognition benchmarks, such as Kinetics, Something-Something, HMDB51, and UCF101, which confirms its effectiveness and efficiency.
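A hedged sketch of the motion-excitation idea: feature-level temporal differences are pooled into channel weights that excite motion-sensitive channels. The spatial pooling and the single linear layer are assumptions rather than the published ME module.

```python
import torch
import torch.nn as nn

class MotionExcitation(nn.Module):
    def __init__(self, c: int):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(c, c), nn.Sigmoid())

    def forward(self, feats):                      # feats: (B, T, C, H, W)
        diff = feats[:, 1:] - feats[:, :-1]        # feature-level temporal difference
        diff = torch.cat([diff, torch.zeros_like(diff[:, :1])], dim=1)
        w = self.fc(diff.mean(dim=(3, 4)))         # (B, T, C) channel weights
        return feats + feats * w[..., None, None]  # residual channel excitation

excited = MotionExcitation(64)(torch.randn(2, 8, 64, 28, 28))
```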
Advances in machine learning and contactless sensors have enabled the understanding of complex human behaviors in healthcare settings. In particular, several deep learning systems have been introduced to enable comprehensive analysis of neurodevelopmental conditions such as autism spectrum disorder (ASD). This condition affects children in their early developmental stages, and diagnosis relies entirely on observing the child's behavior and detecting behavioral cues. However, the diagnostic process is time-consuming, as it requires long-term behavioral observation, and specialists are scarce. We demonstrate the effect of a region-based computer vision system to help clinicians and parents analyze a child's behavior. For this purpose, we adopt and enhance a dataset for analyzing autism-related actions using videos of children captured in uncontrolled environments (e.g., videos collected with consumer-grade cameras in various settings). The data is preprocessed by detecting the target child in the video to reduce the impact of background noise. Motivated by the effectiveness of temporal convolutional models, we propose models capable of extracting action features from video frames and classifying autism-related behaviors by analyzing the relationships between frames in a video. Through extensive evaluation of feature extraction and learning strategies, we demonstrate that the best performance is achieved with an Inflated 3D ConvNet and a Multi-Stage Temporal Convolutional Network, reaching a 0.83 weighted F1-score for the classification of three autism-related actions and outperforming existing methods. We also propose a lightweight solution by employing an ESNet backbone within the same system, achieving a competitive 0.71 weighted F1-score and enabling potential deployment on embedded systems.
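For illustration, one dilated temporal-convolution stage of the kind used by multi-stage TCNs is sketched below on per-frame feature vectors; the real system's stage count, feature extractor (I3D), and layer widths are not reproduced here.

```python
import torch
import torch.nn as nn

class DilatedTCNStage(nn.Module):
    def __init__(self, dim=64, num_classes=3, layers=4):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Conv1d(dim, dim, 3, padding=2 ** i, dilation=2 ** i)
            for i in range(layers)                 # receptive field doubles per layer
        ])
        self.head = nn.Conv1d(dim, num_classes, 1)

    def forward(self, x):                          # x: (B, dim, T) frame features
        for blk in self.blocks:
            x = x + torch.relu(blk(x))             # residual dilated convolutions
        return self.head(x)                        # per-frame action logits

logits = DilatedTCNStage()(torch.randn(1, 64, 200))   # three autism-related actions
```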
In videos, human actions are three-dimensional (3D) signals, and such videos capture the spatio-temporal knowledge of human behavior. Promising capability has been shown in studying them with 3D convolutional neural networks (CNNs). However, 3D CNNs have not yet attained the high output of their well-established two-dimensional (2D) equivalents on still images: the memory demands of 3D convolution and the difficulty of training spatio-temporal fusion keep 3D CNNs from accomplishing remarkable evaluations. In this paper, we implement a hybrid deep learning architecture that combines space-time interest point (STIP) features and 3D CNN features to effectively enhance performance on 3D video. After implementation, finer and deeper charting is trained in each spatio-temporal fusion circle, and the trained model further enhances the results after handling the model's complex evaluation. A video classification model is used in this implementation, and an intelligent 3D network protocol for multimedia data classification using deep learning is introduced to further study the spatio-temporal association in human endeavors. In the experimental results, the performance of the proposed hybrid technique is evaluated on the well-known UCF101 dataset; the proposed hybrid technique substantially surpasses the original 3D CNN, and comparison with state-of-the-art frameworks from the literature for action recognition on UCF101 yields an accuracy of 95%.
Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is the temporal segment network (TSN), a novel framework for video-based action recognition, which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of the temporal segment network. Our approach obtains the state-of-the-art performance on the datasets of HMDB51 (69.4%) and UCF101 (94.2%). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of the temporal segment network and the proposed good practices.
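A sketch of TSN-style inference under assumed sizes: one snippet is sampled per segment, each is scored by a shared 2D ConvNet, and the segment scores are averaged as a simple segmental consensus; the backbone here is a placeholder for the paper's ConvNets.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 101),
)

video = torch.randn(75, 3, 224, 224)               # decoded frames of one video
num_segments = 3
segments = video.chunk(num_segments)               # equal-length segments
snippets = torch.stack([seg[torch.randint(len(seg), (1,))].squeeze(0)
                        for seg in segments])      # one random snippet per segment
consensus = backbone(snippets).softmax(-1).mean(0) # video-level prediction
```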