Drone-camera-based human activity recognition (HAR) has received significant attention from the computer vision research community in the past few years. A robust and efficient HAR system plays a pivotal role in fields like video surveillance, crowd behavior analysis, sports analysis, and human-computer interaction. What makes the task challenging are complex human poses, varying viewpoints, and the environmental scenarios in which the action takes place. To address such complexities, in this paper, we propose a novel Sparse Weighted Temporal Fusion (SWTF) module that utilizes sparsely sampled video frames to obtain a globally weighted temporal fusion outcome. The proposed SWTF is divided into two components. First, a temporal segment network that sparsely samples a given set of frames. Second, weighted temporal fusion, which incorporates a fusion of feature maps derived from optical flow with raw RGB images. This is followed by a base network, comprising a convolutional neural network module along with fully connected layers, that provides the activity recognition. The SWTF module can be used as a plug-in with existing deep CNN architectures, enabling them to learn temporal information and eliminating the need for a separate temporal stream. It has been evaluated on three publicly available benchmark datasets, namely Okutama, MOD20, and Drone-Action. The proposed model achieves an accuracy of 72.76%, 92.56%, and 78.86% on the respective datasets, thereby surpassing the previous state-of-the-art performances by a significant margin.
Human activity recognition (HAR) using drone-mounted cameras has attracted considerable interest from the computer vision research community in recent years. A robust and efficient HAR system plays a pivotal role in fields like video surveillance, crowd behavior analysis, sports analysis, and human-computer interaction. What makes the task challenging are complex human poses, varying viewpoints, and the environmental scenarios in which the action takes place. To address such complexities, in this paper, we propose a novel Sparse Weighted Temporal Attention (SWTA) module that utilizes sparsely sampled video frames to obtain global weighted temporal attention. The proposed SWTA comprises two parts. First, a temporal segment network that sparsely samples a given set of frames. Second, weighted temporal attention, which incorporates a fusion of attention maps derived from optical flow with raw RGB images. This is followed by a base network, comprising a convolutional neural network (CNN) module along with fully connected layers, that provides the activity recognition. The SWTA module can be used as a plug-in with existing deep CNN architectures, enabling them to learn temporal information and eliminating the need for a separate temporal stream. It has been evaluated on three publicly available benchmark datasets, namely Okutama, MOD20, and Drone-Action. The proposed model achieves an accuracy of 72.76%, 92.56%, and 78.86% on the respective datasets, thereby surpassing the previous state-of-the-art performances by margins of 25.26%, 18.56%, and 2.94%, respectively.
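To make the flow-guided weighting concrete, the sketch below shows one plausible way to sparsely sample segments and use an optical-flow-derived spatial attention map to weight RGB frames before a shared 2D backbone. The module and layer names, the 512-d feature size, and the sigmoid gating are assumptions for illustration, not the authors' released SWTA code.

```python
# Illustrative sketch (not the authors' code): sparsely sample frames, turn the
# two-channel optical flow into a spatial attention map, and use it to weight the
# RGB frame before a shared 2D CNN backbone. Shapes and layer sizes are assumptions.
import torch
import torch.nn as nn

class FlowGuidedAttention(nn.Module):
    def __init__(self, backbone: nn.Module, num_classes: int, num_segments: int = 8):
        super().__init__()
        self.backbone = backbone          # any 2D CNN returning a per-frame feature
        self.num_segments = num_segments
        self.attn_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)  # flow (u, v) -> 1-channel map
        self.fc = nn.Linear(512, num_classes)   # assumes a 512-d backbone feature

    def sparse_sample(self, video: torch.Tensor) -> torch.Tensor:
        # video: (B, T, C, H, W); pick one frame index per equal-length segment
        T = video.shape[1]
        idx = torch.linspace(0, T - 1, self.num_segments).long()
        return video[:, idx]

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        rgb, flow = self.sparse_sample(rgb), self.sparse_sample(flow)
        B, S = rgb.shape[:2]
        rgb = rgb.flatten(0, 1)            # (B*S, 3, H, W)
        flow = flow.flatten(0, 1)          # (B*S, 2, H, W)
        attn = torch.sigmoid(self.attn_conv(flow))        # spatial attention from motion
        feats = self.backbone(rgb * attn).flatten(1)      # attention-weighted RGB
        logits = self.fc(feats).view(B, S, -1)
        return logits.mean(dim=1)          # average over segments for a video-level prediction
```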
Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles for designing effective ConvNet architectures for action recognition in videos and to learn these models given limited training samples. Our first contribution is the temporal segment network (TSN), a novel framework for video-based action recognition, which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study of a series of good practices for learning ConvNets on video data with the help of the temporal segment network. Our approach obtains state-of-the-art performance on the HMDB51 (69.4%) and UCF101 (94.2%) datasets. We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of the temporal segment network and the proposed good practices.
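The segmental-consensus idea reads naturally as the following minimal sketch: one snippet is drawn at random from each of K equal segments, scored by a shared 2D backbone, and the per-snippet class scores are averaged. The real TSN additionally uses multiple input modalities and training practices not shown here.

```python
# Minimal sketch of TSN-style sparse sampling plus segmental consensus (averaging
# per-snippet class scores). The backbone is any 2D CNN mapping a frame to scores.
import torch
import torch.nn as nn

class TemporalSegmentNetwork(nn.Module):
    def __init__(self, backbone: nn.Module, num_segments: int = 3):
        super().__init__()
        self.backbone = backbone
        self.num_segments = num_segments

    def sample_snippets(self, video: torch.Tensor) -> torch.Tensor:
        # video: (B, T, C, H, W) -> one randomly chosen frame from each of K segments
        T = video.shape[1]
        bounds = torch.linspace(0, T, self.num_segments + 1).long()
        idx = torch.stack([torch.randint(int(lo), int(hi), (1,)).squeeze(0)
                           for lo, hi in zip(bounds[:-1], bounds[1:])])
        return video[:, idx]               # (B, K, C, H, W)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        snippets = self.sample_snippets(video)
        B, K = snippets.shape[:2]
        scores = self.backbone(snippets.flatten(0, 1))    # (B*K, num_classes)
        return scores.view(B, K, -1).mean(dim=1)          # segmental consensus
```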
Spatiotemporal and motion features are two complementary and crucial kinds of information for video action recognition. Recent state-of-the-art methods adopt a 3D CNN stream to learn spatiotemporal features and another flow stream to learn motion features. In this work, we aim to efficiently encode these two features in a unified 2D framework. To this end, we first propose an STM block, which contains a Channel-wise SpatioTemporal Module (CSTM) to present the spatiotemporal features and a Channel-wise Motion Module (CMM) to efficiently encode motion features. We then replace the original residual blocks in the ResNet architecture with STM blocks to form a simple yet effective STM network, introducing very limited extra computation cost. Extensive experiments demonstrate that the proposed STM network outperforms the state-of-the-art methods on both temporal-related datasets (i.e., Something-Something v1 & v2 and Jester) and scene-related datasets (i.e., Kinetics-400, UCF-101, and HMDB-51) with the help of encoding spatiotemporal and motion features together.
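A simplified sketch of the channel-wise motion idea is given below: feature-level differences between neighbouring frames, computed after a depthwise convolution, are added back to the features as motion cues. The layer sizes and the zero-padding of the last time step are illustrative assumptions rather than the paper's exact CMM.

```python
# Simplified sketch of channel-wise motion encoding via feature-level temporal
# differences; not the exact CMM design from the paper.
import torch
import torch.nn as nn

class ChannelwiseMotion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # depthwise 3x3 conv applied to the "next" frame before differencing
        self.dw = nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, C, H, W) feature maps
        B, T, C, H, W = x.shape
        nxt = self.dw(x[:, 1:].reshape(-1, C, H, W)).view(B, T - 1, C, H, W)
        diff = nxt - x[:, :-1]                                            # temporal difference
        diff = torch.cat([diff, diff.new_zeros(B, 1, C, H, W)], dim=1)    # pad last step
        return x + diff                                                   # inject motion cues
```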
Unmanned aerial vehicles (UAVs) are now widely applied to data acquisition due to their low cost and fast mobility. With the increasing volume of aerial videos, the demand for automatically parsing these videos is surging. To achieve this, current research mainly focuses on extracting a holistic feature with convolutions along both spatial and temporal dimensions. However, these methods are limited by small temporal receptive fields and cannot adequately capture long-term temporal dependencies, which are important for describing complicated dynamics. In this paper, we propose a novel deep neural network, termed FuTH-Net, to model not only holistic features but also temporal relations for aerial video classification. Furthermore, the holistic features are refined by multi-scale temporal relations in a novel fusion module to yield more discriminative video representations. More specifically, FuTH-Net employs a two-pathway architecture: (1) a holistic representation pathway to learn general features of frame appearances and short-term temporal variations, and (2) a temporal relation pathway to capture multi-scale temporal relations across arbitrary frames, providing long-term temporal dependencies. Afterwards, a novel fusion module is proposed to spatiotemporally integrate the two features learned from these two pathways. Our model is evaluated on two aerial video classification datasets, ERA and Drone-Action, and achieves state-of-the-art results. This demonstrates its effectiveness and good generalization capability across different recognition tasks (event classification and human action recognition). To facilitate further research, we release the code at https://gitlab.lrz.de/ai4eo/reasoning/futh-net.
The increasing number of surveillance cameras and growing security concerns have made automatic violent activity detection from surveillance footage an active area of research. Modern deep learning methods have achieved good accuracy in violence detection and proved to be successful because of their applicability in intelligent surveillance systems. However, the models are computationally expensive and large in size because of their inefficient methods for feature extraction. This work presents a novel architecture for violence detection called Two-stream Multi-dimensional Convolutional Network (2s-MDCN), which uses RGB frames and optical flow to detect violence. Our proposed method extracts temporal and spatial information independently with 1D, 2D, and 3D convolutions. Despite combining multi-dimensional convolutional networks, our models are lightweight and efficient due to reduced channel capacity, yet they learn to extract meaningful spatial and temporal information. Additionally, combining RGB frames and optical flow yields 2.2% more accuracy than a single RGB stream. Despite their lower complexity, our models obtained state-of-the-art accuracy of 89.7% on the largest violence detection benchmark dataset.
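The combination of convolutions of different dimensionality can be illustrated with the hedged sketch below, where 1D (temporal), 2D (spatial), and 3D kernels process the same clip features and their outputs are summed; the channel sizes and the way the branches are merged are assumptions, not the 2s-MDCN design.

```python
# Hedged sketch: parallel temporal (1D), spatial (2D) and spatio-temporal (3D)
# convolutions over the same clip features, implemented as factorized 3D kernels.
import torch
import torch.nn as nn

class MultiDimBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1d = nn.Conv3d(channels, channels, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.conv2d = nn.Conv3d(channels, channels, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.conv3d = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, H, W) features from either the RGB or the optical-flow stream
        return torch.relu(self.conv1d(x) + self.conv2d(x) + self.conv3d(x))
```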
Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) rather than fusing at the softmax layer, a spatial and a temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters; (ii) it is better to fuse such networks spatially at the last convolutional layer than earlier, and additionally fusing at the class prediction layer can boost accuracy; finally, (iii) pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets and evaluate its performance on standard benchmarks, where this architecture achieves state-of-the-art results. Our code and models are available at http://www.robots.ox.ac.uk/~vgg/software/two_stream_action/.
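Fusing at a convolution layer, as finding (i) suggests, can be sketched as follows: the last-layer feature maps of the spatial and temporal streams are stacked channel-wise and mixed by a learned 1x1 convolution. The channel count here is a placeholder.

```python
# Hedged sketch of conv-layer fusion of a spatial and a temporal stream: concatenate
# the two feature maps along channels and mix them with a 1x1 convolution, instead of
# averaging softmax scores.
import torch
import torch.nn as nn

class ConvFusion(nn.Module):
    def __init__(self, channels: int = 512):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, spatial_feat: torch.Tensor, temporal_feat: torch.Tensor) -> torch.Tensor:
        # both inputs: (B, C, H, W) feature maps from the last conv layer of each stream
        return self.fuse(torch.cat([spatial_feat, temporal_feat], dim=1))

fused = ConvFusion(512)(torch.randn(2, 512, 14, 14), torch.randn(2, 512, 14, 14))
print(fused.shape)  # torch.Size([2, 512, 14, 14])
```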
The analysis of human interaction is an important research topic of human motion analysis. It has been studied using either first-person vision (FPV) or third-person vision (TPV). However, the joint learning of both types of vision has so far attracted little attention. One of the reasons is the lack of suitable datasets covering both FPV and TPV. In addition, existing benchmark datasets of either FPV or TPV have several limitations, including the limited number of samples, participants, interaction categories, and modalities. In this work, we contribute a large-scale human interaction dataset, namely the FT-HID dataset. FT-HID contains pair-aligned samples of first-person and third-person vision. The dataset was collected from 109 distinct subjects and has more than 90K samples across three modalities. The dataset has been validated using several existing action recognition methods. In addition, we introduce a novel multi-view interaction mechanism for skeleton sequences, and a joint learning multi-stream framework for first-person and third-person vision. Both methods yield promising results on the FT-HID dataset. It can be expected that the introduction of this vision-aligned, large-scale dataset will promote the development of both FPV and TPV, as well as their joint learning techniques for human action analysis. The dataset and code are available at https://github.com/endlichere/ft-hid.
Text-based video segmentation aims to segment an actor in video sequences by specifying the actor and its performing action with a textual query. Due to the problem of semantic asymmetry, previous methods fail to align the video content with the textual query in a fine-grained manner according to the actor and its action. Semantic asymmetry means that the two modalities contain different amounts of semantic information during the multi-modal fusion process. To alleviate this problem, we propose a novel actor and action modular network that individually localizes the actor and its action in two separate modules. Specifically, we first learn the actor- and action-related content from the video and the textual query, and then match them in a symmetrical manner to localize the target tube. The target tube contains the desired actor and action, and is then fed into a fully convolutional network to predict segmentation masks of the actor. Our method also establishes object associations across multiple frames with the proposed temporal proposal aggregation mechanism. This enables our method to segment the video effectively and keep the temporal consistency of the predictions. The whole model allows joint learning of actor-action matching and segmentation, and achieves state-of-the-art performance for both single-frame segmentation and full video segmentation on the A2D Sentences and J-HMDB Sentences datasets.
The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis of how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0% on UCF-101.
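The inflation trick can be sketched in a few lines: a pretrained 2D kernel is repeated along a new temporal axis and rescaled so that the resulting 3D convolution reproduces the 2D activations on a "boring" video of repeated frames. The temporal kernel size below is an assumption.

```python
# Illustrative sketch of 2D-to-3D filter inflation; the rescaling by the temporal
# kernel size keeps activations on a video of identical frames unchanged.
import torch
import torch.nn as nn

def inflate_conv2d(conv2d: nn.Conv2d, time_kernel: int = 3) -> nn.Conv3d:
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(time_kernel, *conv2d.kernel_size),
                       stride=(1, *conv2d.stride),
                       padding=(time_kernel // 2, *conv2d.padding),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        w2d = conv2d.weight                                   # (C_out, C_in, k, k)
        conv3d.weight.copy_(w2d.unsqueeze(2).repeat(1, 1, time_kernel, 1, 1) / time_kernel)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

conv3d = inflate_conv2d(nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3))
print(conv3d.weight.shape)  # torch.Size([64, 3, 3, 7, 7])
```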
Convolutional neural networks (CNNs) have been extensively applied to image recognition problems, giving state-of-the-art results on recognition, detection, segmentation, and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full-length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1% vs. 60.9%) and the UCF-101 dataset, both with (88.6% vs. 88.0%) and without additional optical flow information (82.6% vs. 73.0%).
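The recurrent variant can be sketched as follows: a shared CNN produces per-frame features, an LSTM aggregates them over time, and the prediction is taken from the final hidden state. The backbone and feature dimension are placeholders rather than the paper's exact setup.

```python
# Minimal sketch of per-frame CNN features aggregated by an LSTM; the backbone,
# feature size and class count are assumptions.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int = 512,
                 hidden: int = 256, num_classes: int = 101):
        super().__init__()
        self.backbone = backbone
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (B, T, C, H, W)
        B, T = video.shape[:2]
        feats = self.backbone(video.flatten(0, 1)).flatten(1)   # (B*T, feat_dim)
        out, _ = self.lstm(feats.reshape(B, T, -1))             # (B, T, hidden)
        return self.fc(out[:, -1])                              # classify from the last step
```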
In this paper, we propose a new video representation learning method, named Temporal Squeeze (TS) pooling, which can extract essential movement information from a long sequence of video frames and map it into a set of a few images, named Squeezed Images. By embedding Temporal Squeeze pooling as a layer into off-the-shelf convolutional neural networks (CNNs), we design a new video classification model, named the Temporal Squeeze Network (TeSNet). The resulting Squeezed Images contain the essential movement information from the video frames, corresponding to the optimization of the video classification task. We evaluate our architecture on two video classification benchmarks and compare it with state-of-the-art results.
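Since the abstract does not spell out the pooling formulation, the sketch below only illustrates the stated idea of mapping T frames to a few squeezed images, here as K learned softmax-weighted combinations over the temporal axis; this is an assumed formulation, not the paper's TS layer.

```python
# Heavily hedged sketch: K learnable weighted combinations over the temporal axis,
# mapping T frames to K "squeezed images".
import torch
import torch.nn as nn

class TemporalSqueeze(nn.Module):
    def __init__(self, num_frames: int, num_squeezed: int = 2):
        super().__init__()
        # one learnable weight vector over time per squeezed image
        self.weights = nn.Parameter(torch.randn(num_squeezed, num_frames))

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (B, T, C, H, W) -> (B, K, C, H, W) squeezed images
        w = torch.softmax(self.weights, dim=1)               # (K, T)
        return torch.einsum('kt,btchw->bkchw', w, video)

squeezed = TemporalSqueeze(num_frames=32, num_squeezed=2)(torch.randn(1, 32, 3, 224, 224))
print(squeezed.shape)  # torch.Size([1, 2, 3, 224, 224])
```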
Human action recognition is an important application domain in computer vision. Its primary aim is to accurately describe human actions and their interactions from previously unseen data sequences acquired by sensors. The ability to recognize, understand, and predict complex human actions enables the construction of many important applications, such as intelligent surveillance systems, human-computer interfaces, health care, security, and military applications. In recent years, deep learning has received particular attention from the computer vision community. This paper presents an overview of the current state of the art in action recognition based on video analysis with deep learning techniques. We present the most important deep learning models for recognizing human actions and analyze them to provide the current progress of deep learning algorithms applied to human action recognition, highlighting their advantages and disadvantages. Based on a quantitative analysis of the recognition accuracies reported in the literature, our study identifies the state-of-the-art deep architectures in action recognition and then provides current trends and open problems for future work in this field.
Efficiently modeling spatial-temporal information in videos is crucial for action recognition. To achieve this goal, state-of-the-art methods typically employ the convolution operator and dense interaction modules such as non-local blocks. However, these methods cannot accurately fit the diverse events in videos. On the one hand, the adopted convolutions have fixed scales, thus struggling with events of various scales. On the other hand, the dense interaction modeling paradigm achieves only sub-optimal performance, as action-irrelevant parts bring additional noise to the final prediction. In this paper, we propose a unified action recognition framework to investigate the dynamic nature of video content by introducing the following designs. First, when extracting local cues, we generate spatial-temporal kernels of dynamic scale to adaptively fit the diverse events. Second, to accurately aggregate these cues into a global video representation, we propose to mine the interactions only among a few selected foreground objects via a Transformer, which yields a sparse paradigm. We call the proposed framework the Event Adaptive Network (EAN) because both key designs are adaptive to the input video content. To exploit the short-term motions within local segments, we propose a novel and efficient Latent Motion Code (LMC) module, which further improves the performance of the framework. Extensive experiments on several large-scale video datasets, e.g., Something-Something, Kinetics, and Diving48, verify that our model achieves state-of-the-art or competitive performance at low FLOPs. The code is available at: https://github.com/tianyuan168326/ean-pytorch.
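The sparse interaction paradigm can be illustrated with the hedged sketch below: spatial-temporal tokens are scored, only the top-k "foreground" tokens attend to each other through a transformer layer, and the refined tokens are written back. The scoring function and k are assumptions, not EAN's exact design.

```python
# Hedged sketch of sparse token interaction: attend only among the top-k scored tokens.
import torch
import torch.nn as nn

class SparseTokenInteraction(nn.Module):
    def __init__(self, dim: int = 256, k: int = 8, heads: int = 4):
        super().__init__()
        self.score = nn.Linear(dim, 1)
        self.attn = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.k = k

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) flattened spatio-temporal features
        scores = self.score(tokens).squeeze(-1)                 # (B, N)
        idx = scores.topk(self.k, dim=1).indices                # indices of foreground tokens
        idx_exp = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        selected = torch.gather(tokens, 1, idx_exp)             # (B, k, D)
        refined = self.attn(selected)                           # interactions among k tokens only
        return tokens.scatter(1, idx_exp, refined)              # write refined tokens back
```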
The lack of large-scale, annotated real-world datasets makes transfer learning a necessity for video activity understanding. We aim to develop an effective method for few-shot transfer learning for action classification. We leverage independently trained local visual cues to learn representations that can be transferred from a source domain to a different target domain using only a handful of examples. The visual cues we use include object-object interactions, hands, and motion within regions that are a function of hand location. We employ a meta-learning-based framework to extract the distinctive and domain-invariant components of the deployed visual cues. This enables the transfer of action classification models across public datasets captured with different scene and action configurations. We present comparative results of our transfer learning methodology and report superior results over state-of-the-art action classification approaches for both inter-class and inter-dataset transfer.
Designing activity detection systems that can be successfully deployed in daily-living environments requires datasets that pose the challenges typical of real-world scenarios. In this paper, we introduce a new untrimmed daily-living dataset that features several real-world challenges: Toyota Smarthome Untrimmed (TSU). TSU contains a wide variety of activities performed in a spontaneous manner. The dataset contains dense annotations, including elementary and composite activities as well as activities involving interactions with objects. We provide an analysis of the real-world challenges featured by the dataset, highlighting the open issues for detection algorithms. We show that current state-of-the-art methods fail to achieve satisfactory performance on the TSU dataset. Therefore, we propose a new baseline method to address the new challenges provided by the dataset. This method leverages one modality (i.e., optical flow) to generate attention weights that guide another modality (i.e., RGB) to better detect activity boundaries. This is particularly beneficial for detecting activities characterized by high temporal variance. We show that our proposed method outperforms state-of-the-art methods on TSU and on another popular challenging dataset, Charades.
The significant growth of surveillance camera networks necessitates scalable AI solutions to efficiently analyze the large amount of video data produced by these networks. As a typical analysis performed on surveillance footage, video violence detection has recently received considerable attention. Most research has focused on improving existing methods using supervised approaches, with little attention, if any, paid to semi-supervised learning approaches. In this study, a reinforcement learning model is introduced that can outperform existing models through a semi-supervised approach. The main novelty of the proposed method lies in the introduction of a semi-supervised hard attention mechanism. Using hard attention, the essential regions of videos are identified and separated from the non-informative parts of the data. The accuracy of the model is improved by removing redundant data and focusing on useful visual information at higher resolution. Implementing the hard attention mechanism with a semi-supervised reinforcement learning algorithm eliminates the need for attention annotations in video violence datasets, making the method readily applicable. The proposed model utilizes a pre-trained I3D backbone to accelerate and stabilize the training process, and achieves state-of-the-art accuracy of 90.4% and 98.7% on the RWF and Hockey datasets, respectively.
Temporal modeling is key for action recognition in videos. It normally considers both short-range motions and long-range aggregations. In this paper, we propose a Temporal Excitation and Aggregation (TEA) block, including a motion excitation (ME) module and a multiple temporal aggregation (MTA) module, specifically designed to capture both short- and long-range temporal evolution. In particular, for short-range motion modeling, the ME module calculates the feature-level temporal differences from spatiotemporal features. It then utilizes the differences to excite the motion-sensitive channels of the features. The long-range temporal aggregations in previous works are typically achieved by stacking a large number of local temporal convolutions. Each convolution processes a local temporal window at a time. In contrast, the MTA module proposes to deform the local convolution into a group of sub-convolutions, forming a hierarchical residual architecture. Without introducing additional parameters, the features are processed with a series of sub-convolutions, and each frame can complete multiple temporal aggregations with its neighborhoods. The final equivalent receptive field of the temporal dimension is accordingly enlarged, which is capable of modeling the long-range temporal relationship over distant frames. The two components of the TEA block are complementary in temporal modeling. Finally, our approach achieves impressive results at low FLOPs on several action recognition benchmarks, such as Kinetics, Something-Something, HMDB51, and UCF101, which confirms its effectiveness and efficiency.
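The hierarchical aggregation of the MTA module can be sketched roughly as follows: channels are split into groups, and each group's temporal sub-convolution also receives the previous group's output, so the effective temporal receptive field grows with each group. The group count, kernel size, and depthwise choice are assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of hierarchical residual temporal sub-convolutions over channel groups.
import torch
import torch.nn as nn

class MultipleTemporalAggregation(nn.Module):
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        g = channels // groups
        # one temporal conv per group after the first (identity) group
        self.convs = nn.ModuleList(
            nn.Conv1d(g, g, kernel_size=3, padding=1, groups=g, bias=False)
            for _ in range(groups - 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T) channel-time features (spatial dims already pooled or folded)
        chunks = list(x.chunk(self.groups, dim=1))
        out = [chunks[0]]                                   # first group passes through
        for i, conv in enumerate(self.convs, start=1):
            out.append(conv(chunks[i] + out[-1]))           # hierarchical residual chaining
        return torch.cat(out, dim=1)
```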
Advances in machine learning and contactless sensors have enabled the understanding of complex human behaviors in healthcare settings. In particular, several deep learning systems have been introduced to enable comprehensive analysis of neurodevelopmental conditions such as Autism Spectrum Disorder (ASD). This condition affects children in their early developmental stages, and diagnosis relies entirely on observing the child's behavior and detecting behavioral cues. However, the diagnosis process is time-consuming because it requires long-term behavioral observation, and there is a scarcity of specialists. We demonstrate the effect of a region-based computer vision system to help clinicians and parents analyze a child's behavior. For this purpose, we adopt and enhance a dataset for analyzing autism-related actions using videos of children captured in uncontrolled environments (e.g., videos collected with consumer-grade cameras in various settings). The data are pre-processed by detecting the target child in the video to reduce the impact of background noise. Motivated by the effectiveness of temporal convolutional models, we propose models capable of extracting action features from video frames and classifying autism-related behaviors by analyzing the relationships between frames in a video. Through an extensive evaluation of feature extraction and learning strategies, we demonstrate that the best performance is achieved with the Inflated 3D ConvNet and Multi-Stage Temporal Convolutional Networks, reaching a weighted F1-score of 0.83 for the classification of three autism-related actions and outperforming existing methods. We also propose a lightweight solution by employing an ESNet backbone within the same system, achieving a competitive result of a 0.71 weighted F1-score and enabling potential deployment on embedded systems.
The crime rate is increasing proportionally with the growth of the population. The most prominent approach to address this is to introduce surveillance based on Closed-Circuit Television (CCTV) cameras. Video surveillance cameras have added a new dimension to detecting crime. Several research works on autonomous security camera surveillance are currently ongoing, where the basic goal is to discover violent activities from video feeds. From a technical viewpoint, this is a challenging problem because analyzing a set of frames, i.e., a video in the temporal dimension, to detect violence may require careful machine learning model training to reduce false results. This research focuses on this problem by integrating state-of-the-art deep learning methods to ensure a robust pipeline for autonomous surveillance for detecting violent activities, e.g., kicking, punching, and slapping. Initially, we designed a dataset of this specific interest, containing 600 videos (200 for each action). Later, we leveraged existing pre-trained model architectures to extract features, which are then classified using deep learning networks. Additionally, we compared our models' accuracy and confusion matrices across different pre-trained architectures such as VGG16, InceptionV3, ResNet50, and MobileNet V2, among which VGG16 and MobileNet V2 performed better.
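A generic version of this feature-extraction-plus-classifier pipeline is sketched below with a frozen MobileNetV2 backbone and a small fully connected head; the original work's framework, backbone variants, and head design may differ.

```python
# Illustrative sketch: a frozen pretrained backbone extracts per-frame features, which
# are averaged over time and classified by a small fully connected head. The pretrained
# weights are downloaded on first use.
import torch
import torch.nn as nn
from torchvision import models

class PretrainedFeatureClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        backbone = models.mobilenet_v2(weights="DEFAULT")
        self.features = backbone.features             # frozen convolutional feature extractor
        for p in self.features.parameters():
            p.requires_grad = False
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Sequential(nn.Linear(1280, 256), nn.ReLU(), nn.Linear(256, num_classes))

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (B, T, 3, H, W) video frames
        B, T = clip.shape[:2]
        f = self.pool(self.features(clip.flatten(0, 1))).flatten(1)    # (B*T, 1280)
        return self.head(f.reshape(B, T, -1).mean(dim=1))              # average frames, classify
```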