The assessment of general movements (GMs) in infants is a useful tool for the early diagnosis of neurodevelopmental disorders. However, its evaluation in clinical practice relies on visual inspection by experts, and an automated solution is eagerly awaited. Recently, video-based GMs classification has attracted attention, but this approach can be strongly affected by irrelevant information, such as background clutter in the video. In addition, for reliability it is necessary to properly extract the spatiotemporal features of infants during GMs. In this study, we propose an automatic GMs classification method that consists of a preprocessing network, which removes unnecessary background information from GMs videos and normalizes the infant's body position, and a subsequent motion classification network based on a two-stream structure. The proposed method can effectively extract the essential spatiotemporal features for GMs classification while preventing overfitting to information that is irrelevant to the different recording environments. We validated the proposed method using videos obtained from 100 infants. The experimental results show that the proposed method outperforms several baseline models and existing methods.
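A minimal sketch of the preprocessing idea described above, assuming an infant bounding box is already available from some person or pose detector (the bounding-box format and helper name are hypothetical, not the authors' pipeline). The crop removes background clutter and re-centres the infant before the frames go to a downstream two-stream classifier.

```python
import torch
import torchvision.transforms.functional as TF

def normalize_infant_region(frame: torch.Tensor, bbox, out_size=(224, 224)) -> torch.Tensor:
    """frame: [C, H, W] tensor; bbox: (top, left, height, width) in pixels (assumed format)."""
    top, left, height, width = bbox
    # Crop the infant region and rescale it to a fixed size so that body
    # position and scale are comparable across recordings.
    return TF.resized_crop(frame, top, left, height, width, out_size)

# Example: a dummy 480x640 RGB frame with the infant roughly centred.
frame = torch.rand(3, 480, 640)
patch = normalize_infant_region(frame, bbox=(100, 180, 280, 280))
print(patch.shape)  # torch.Size([3, 224, 224])
```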
Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters; (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy; finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results. Our code and models are available at http://www.robots.ox.ac.uk/~vgg/software/two_stream_action.
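An illustrative sketch of the conv-layer fusion idea from the abstract above (not the authors' exact code): feature maps from the spatial and temporal towers are stacked along the channel axis and merged with a 1x1 convolution, instead of fusing softmax scores.

```python
import torch
import torch.nn as nn

class ConvFusion(nn.Module):
    def __init__(self, channels: int = 512):
        super().__init__()
        # The 1x1 convolution learns how to weight and combine corresponding
        # appearance and motion channels at each spatial location.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, spatial_feat: torch.Tensor, temporal_feat: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([spatial_feat, temporal_feat], dim=1))

# Example with conv5-sized feature maps (assumed 512 x 14 x 14).
fusion = ConvFusion(512)
out = fusion(torch.rand(2, 512, 14, 14), torch.rand(2, 512, 14, 14))
print(out.shape)  # torch.Size([2, 512, 14, 14])
```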
We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.
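A minimal two-stream sketch following the description above, using ResNet-18 backbones as an assumption (the paper uses different networks): a spatial stream on a single RGB frame and a temporal stream on a stack of L=10 optical-flow fields (2L input channels), with class scores fused by averaging.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TwoStream(nn.Module):
    def __init__(self, num_classes: int, flow_stack: int = 10):
        super().__init__()
        self.spatial = resnet18(weights=None, num_classes=num_classes)
        self.temporal = resnet18(weights=None, num_classes=num_classes)
        # The temporal stream consumes 2*L channels (x/y flow per frame).
        self.temporal.conv1 = nn.Conv2d(2 * flow_stack, 64, kernel_size=7,
                                        stride=2, padding=3, bias=False)

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        # Late fusion: average the per-stream class scores.
        return (self.spatial(rgb).softmax(-1) + self.temporal(flow).softmax(-1)) / 2

model = TwoStream(num_classes=101)
scores = model(torch.rand(1, 3, 224, 224), torch.rand(1, 20, 224, 224))
print(scores.shape)  # torch.Size([1, 101])
```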
Spatiotemporal and motion features are two complementary and crucial information for video action recognition. Recent state-of-the-art methods adopt a 3D CNN stream to learn spatiotemporal features and another flow stream to learn motion features. In this work, we aim to efficiently encode these two features in a unified 2D framework. To this end, we first propose an STM block, which contains a Channel-wise SpatioTemporal Module (CSTM) to present the spatiotemporal features and a Channel-wise Motion Module (CMM) to efficiently encode motion features. We then replace original residual blocks in the ResNet architecture with STM blocks to form a simple yet effective STM network by introducing very limited extra computation cost. Extensive experiments demonstrate that the proposed STM network outperforms the state-of-the-art methods on both temporal-related datasets (i.e., Something-Something v1 & v2 and Jester) and scene-related datasets (i.e., Kinetics-400, UCF-101, and HMDB-51) with the help of encoding spatiotemporal and motion features together. * The work was done during an internship at SenseTime.
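A rough sketch of the channel-wise motion idea behind a CMM-style module, i.e. motion approximated by differencing features of neighbouring frames after a lightweight channel-wise (depthwise) convolution. This is an illustration under those assumptions, not the authors' exact STM block.

```python
import torch
import torch.nn as nn

class ChannelwiseMotion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Depthwise 3x3 conv keeps the computation channel-wise and cheap.
        self.transform = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [N, T, C, H, W]; the motion feature at step t is f(x[t+1]) - x[t].
        n, t, c, h, w = x.shape
        shifted = self.transform(x[:, 1:].reshape(-1, c, h, w)).reshape(n, t - 1, c, h, w)
        motion = shifted - x[:, :-1]
        # Pad the last step with zeros so the temporal length is preserved.
        return torch.cat([motion, torch.zeros_like(x[:, :1])], dim=1)

m = ChannelwiseMotion(64)
print(m(torch.rand(2, 8, 64, 28, 28)).shape)  # torch.Size([2, 8, 64, 28, 28])
```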
Human activity recognition (HAR) using drone-mounted cameras has attracted considerable interest from the computer vision research community in recent years. A robust and efficient HAR system has a pivotal role in fields like video surveillance, crowd behavior analysis, sports analysis, and human-computer interaction. What makes it challenging are the complex poses, understanding different viewpoints, and the environmental scenarios where the action is taking place. To address such complexities, in this paper, we propose a novel Sparse Weighted Temporal Attention (SWTA) module to utilize sparsely sampled video frames for obtaining global weighted temporal attention. The proposed SWTA is comprised of two parts. First, a temporal segment network that sparsely samples a given set of frames. Second, weighted temporal attention, which fuses attention maps derived from optical flow with raw RGB images. This is followed by a basenet network, which comprises a convolutional neural network (CNN) module along with fully connected layers that provide us with activity recognition. The SWTA network can be used as a plug-in module to the existing deep CNN architectures, for optimizing them to learn temporal information by eliminating the need for a separate temporal stream. It has been evaluated on three publicly available benchmark datasets, namely Okutama, MOD20, and Drone-Action. The proposed model achieves an accuracy of 72.76%, 92.56%, and 78.86% on the respective datasets, thereby surpassing the previous state-of-the-art performances by margins of 25.26%, 18.56%, and 2.94%, respectively.
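A rough interpretation of the weighted-temporal-attention idea above: an attention map is derived from the optical-flow field and used to re-weight the raw RGB frame before the base CNN. Names, shapes, and the attention head are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FlowGuidedAttention(nn.Module):
    def __init__(self):
        super().__init__()
        # Map the 2-channel flow field to a single spatial attention map.
        self.attn = nn.Sequential(nn.Conv2d(2, 1, kernel_size=3, padding=1), nn.Sigmoid())

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        # Regions with salient motion get higher weight in the RGB input.
        return rgb * self.attn(flow)

fused = FlowGuidedAttention()(torch.rand(4, 3, 224, 224), torch.rand(4, 2, 224, 224))
print(fused.shape)  # torch.Size([4, 3, 224, 224])
```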
Dog owners are typically able to recognize behavioral cues that reveal subjective states of their dogs, such as pain. But automated recognition of pain states is very challenging. This paper proposes a novel video-based, two-stream deep neural network approach to this problem. We extract and preprocess body keypoints, and compute features from both keypoint and RGB representations over the video. We propose a method for handling self-occlusions and missing keypoints. We also present a unique video-based dog behavior dataset, collected by veterinary professionals and annotated for pain, and report good classification results with the proposed approach. This study is one of the first works on machine-learning-based estimation of dog pain states.
Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition, which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-of-the-art performance on the datasets of HMDB51 (69.4%) and UCF101 (94.2%). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices.
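A minimal sketch of TSN-style sparse sampling and segmental consensus as described above: the video is split into K segments, one snippet is drawn from each, a shared 2D backbone scores every snippet, and the scores are averaged. The ResNet-18 backbone is an assumption for brevity.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TSNConsensus(nn.Module):
    def __init__(self, num_classes: int, num_segments: int = 3):
        super().__init__()
        self.num_segments = num_segments
        self.backbone = resnet18(weights=None, num_classes=num_classes)

    def forward(self, snippets: torch.Tensor) -> torch.Tensor:
        # snippets: [N, K, 3, H, W], one frame sampled per segment.
        n, k, c, h, w = snippets.shape
        scores = self.backbone(snippets.reshape(n * k, c, h, w)).reshape(n, k, -1)
        return scores.mean(dim=1)  # segmental consensus by averaging

def sample_segment_indices(num_frames: int, num_segments: int = 3) -> torch.Tensor:
    # One random index from each of K equal-length segments.
    seg_len = num_frames // num_segments
    offsets = torch.randint(0, seg_len, (num_segments,))
    return torch.arange(num_segments) * seg_len + offsets

print(sample_segment_indices(90))                  # e.g. tensor([ 7, 41, 73])
model = TSNConsensus(num_classes=51)
print(model(torch.rand(2, 3, 3, 224, 224)).shape)  # torch.Size([2, 51])
```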
Intuition might suggest that motion and dynamic information are key to video-based action recognition. In contrast, there is evidence that state-of-the-art deep learning architectures for video understanding are biased toward the static information available in single frames. Currently, methods and corresponding datasets for isolating the effect of dynamic information in video are missing. Their absence makes it difficult to understand how contemporary architectures exploit dynamic versus static information. We respond with a novel Appearance Free Dataset (AFD) for action recognition. AFD is devoid of static information relevant to action recognition in a single frame; modeling of the dynamics is necessary for solving the task, as the action is only apparent when the temporal dimension is considered. We evaluate 11 contemporary action recognition architectures on AFD as well as its related RGB video. Our results show a notable decrease in performance for all architectures on AFD compared to RGB. We also conduct a complementary study with humans, which shows that their recognition accuracy on AFD and RGB is very similar and much better than that of the architectures evaluated on AFD. Our results motivate a novel architecture that revives explicit recovery of optical flow within a contemporary design, achieving the best performance on AFD and RGB.
In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significant gains in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block "R(2+1)D" which produces CNNs that achieve results comparable or superior to the state-of-the-art on Sports-1M, Kinetics, UCF101, and HMDB51.
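A sketch of the factorization described above: a 3x3x3 filter is split into a spatial 1x3x3 convolution followed by a temporal 3x1x1 convolution, with a non-linearity in between. The channel sizes, including the intermediate width, are illustrative rather than the paper's parameter-matched values.

```python
import torch
import torch.nn as nn

class Conv2Plus1D(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, mid_ch: int):
        super().__init__()
        # Spatial convolution over H and W only.
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1), bias=False)
        self.relu = nn.ReLU(inplace=True)
        # Temporal convolution over T only.
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0), bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [N, C, T, H, W]
        return self.temporal(self.relu(self.spatial(x)))

block = Conv2Plus1D(in_ch=3, out_ch=64, mid_ch=45)
print(block(torch.rand(1, 3, 8, 112, 112)).shape)  # torch.Size([1, 64, 8, 112, 112])
```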
We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3 × 3 × 3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8% accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.
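A tiny homogeneous 3D ConvNet in the spirit of the abstract above (all kernels 3x3x3); the depth, channel widths, and pooling schedule are trimmed for illustration and do not reproduce the original 8-conv-layer C3D network.

```python
import torch
import torch.nn as nn

def conv3d_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),  # homogeneous 3x3x3 kernels
        nn.ReLU(inplace=True),
        nn.MaxPool3d(kernel_size=(2, 2, 2)),
    )

tiny_c3d = nn.Sequential(
    conv3d_block(3, 64),
    conv3d_block(64, 128),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(128, 101),  # e.g. 101 classes for UCF101
)

clip = torch.rand(1, 3, 16, 112, 112)  # a 16-frame RGB clip
print(tiny_c3d(clip).shape)            # torch.Size([1, 101])
```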
Human falls are one of the most critical health problems, especially for the elderly and people with disabilities, and the elderly population is steadily increasing worldwide. Human fall detection has therefore become an effective technique for assisted living of these people, and deep learning and computer vision have been used extensively for it. In this review article, we discuss state-of-the-art non-intrusive (vision-based) fall detection techniques based on deep learning (DL). We also present a survey of fall detection benchmark datasets. For clear understanding, we briefly discuss the different metrics used to evaluate the performance of fall detection systems. The article also provides future directions for vision-based human fall detection techniques.
The analysis of human interactions is an important research topic in human motion analysis. It has been studied using either first-person vision (FPV) or third-person vision (TPV). However, joint learning of the two visions has so far attracted little attention. One reason is the lack of suitable datasets covering both FPV and TPV. In addition, existing benchmark datasets for either FPV or TPV have several limitations, including a limited number of samples, participants, interaction categories, and modalities. In this work, we contribute a large-scale human interaction dataset, the FT-HID dataset. FT-HID contains pair-aligned samples of first-person and third-person visions. The dataset was collected from 109 distinct subjects and has 90K samples across three modalities. The dataset has been validated using several existing action recognition methods. In addition, we introduce a novel multi-view interaction mechanism for skeleton sequences, and a joint-learning multi-stream framework for first-person and third-person visions. Both methods yield promising results on the FT-HID dataset. It can be expected that the introduction of this vision-aligned, large-scale dataset will promote the development of both FPV and TPV, and of their joint-learning techniques for human action analysis. The dataset and code are available at https://github.com/endlichere/ft-hid.
We propose a bidirectionally and continuously connected two-pathway network (BCCN) for efficient gesture recognition. The BCCN consists of two pathways: (i) a keyframe pathway and (ii) a temporal-attention pathway. The keyframe pathway is configured with a skeleton-based keyframe selection module; keyframes pass along this pathway to extract spatial features, while the temporal-attention pathway extracts temporal semantics. Our model improves gesture recognition performance on video and obtains better activation maps for spatial and temporal properties. Tests were performed on the ChaLearn dataset, the ETRI-Activity3D dataset, and the Toyota Smarthome dataset.
Deep learning models have been widely used for anomaly detection in surveillance videos. Typical models are equipped with the capability to reconstruct normal videos, and the reconstruction errors on abnormal videos are evaluated to indicate the degree of abnormality. However, existing approaches suffer from two drawbacks. First, they can only encode the movements of each identity independently, without considering the interactions among identities, which may also indicate anomalies. Second, they rely on inflexible models whose structures are fixed across different scenes, a configuration that prevents the understanding of scenes. In this paper, we propose a hierarchical spatio-temporal graph convolutional neural network (HSTGCNN) to address these problems. HSTGCNN is composed of multiple branches that correspond to different levels of graph representations. High-level graph representations encode the trajectories of people and the interactions among multiple identities, while low-level graph representations encode the local body postures of each person. Furthermore, we propose to weightedly combine the multiple branches according to which performs better in a given scene. An improvement over single-level graph representations is achieved in this way, together with scene understanding for anomaly detection: high-level graph representations, which encode the moving speed and direction of people, are assigned higher weights in low-resolution videos, while low-level graph representations are assigned higher weights in high-resolution videos. Experimental results show that the proposed HSTGCNN significantly outperforms current state-of-the-art models on four benchmark datasets (UCSD Pedestrian, ShanghaiTech, CUHK Avenue, and IITB-Corridor).
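An illustrative sketch of the scene-dependent branch weighting motivated above: anomaly scores from a high-level branch (trajectories and interactions) and a low-level branch (body poses) are combined with weights driven by how large people appear in the footage. The threshold and weights here are simplifying assumptions, not the paper's learned configuration.

```python
import torch

def combine_branch_scores(high_level: torch.Tensor, low_level: torch.Tensor,
                          person_pixel_height: float) -> torch.Tensor:
    # In low-resolution footage people are small, so trajectory/interaction
    # cues (high-level branch) get more weight; in high-resolution footage
    # the pose branch is more reliable.
    w_high = 0.8 if person_pixel_height < 80 else 0.3
    return w_high * high_level + (1.0 - w_high) * low_level

scores = combine_branch_scores(torch.tensor([0.7, 0.2]), torch.tensor([0.4, 0.1]),
                               person_pixel_height=60)
print(scores)  # tensor([0.6400, 0.1800])
```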
Motion synchrony refers to the dynamic temporal coupling between the movements of interacting people. Its applications are wide and varied: as a measure of coordination between teammates, synchrony scores are frequently reported in sports, and the autism community regards motion synchrony as a key indicator of children's social and developmental achievements. In general, raw video recordings are commonly used for motion synchrony estimation, and they may reveal people's identities. Such privacy concerns also hinder data sharing, which is a major obstacle to fair comparisons between different approaches in autism research. To address this issue, this paper proposes an ensemble method for motion synchrony estimation, one of the first deep-learning-based methods for automatic motion synchrony assessment under privacy-preserving conditions. Our method relies entirely on publicly shareable, identity-agnostic secondary data, such as skeleton data and optical flow. We validate our method on two datasets: (1) the PT13 dataset, collected from autism therapy interventions, and (2) the TASD-2 dataset, collected from synchronized diving competitions. In this setting, our method outperforms its counterparts, including deep neural network and alternative ensemble approaches.
Vision-based human activity recognition has become one of the important research areas in video analytics. Over the last decade, numerous advanced deep learning algorithms have been introduced to recognize complex human actions from video streams, and they have shown impressive performance on human activity recognition tasks. However, these newly introduced methods focus either on model performance alone or on the effectiveness of these models in terms of computational efficiency and robustness, resulting in a biased trade-off in their proposals for addressing challenging human activity recognition problems. To overcome the limitations of contemporary deep learning models for human activity recognition, this paper proposes a computationally efficient yet generic spatial-temporal cascaded framework that exploits deep discriminative spatial and temporal features. For efficient representation of human actions, we propose an efficient dual-attention convolutional neural network (CNN) architecture that leverages a unified channel-spatial attention mechanism to extract human-centric salient features in video frames. The dual channel-spatial attention layers, together with the convolutional layers, learn to be more attentive in the spatial receptive fields that contain objects within the feature maps. The extracted discriminative salient features are then forwarded to a stacked bidirectional gated recurrent unit (Bi-GRU) for long-term temporal modeling and recognition of human actions using both forward and backward passes. Extensive experiments are conducted, and the obtained results show that the proposed framework improves execution time by up to 167 times compared with most contemporary action recognition methods.
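A minimal sketch of the pipeline described above: per-frame features pass through a simple channel + spatial attention block and are then modelled temporally by a bidirectional GRU. Layer sizes and the pooling/classification head are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)     # emphasise informative channels
        return x * self.spatial_gate(x)  # emphasise human-centric regions

class AttentionBiGRU(nn.Module):
    def __init__(self, channels: int = 256, hidden: int = 128, num_classes: int = 51):
        super().__init__()
        self.attn = DualAttention(channels)
        self.gru = nn.GRU(channels, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: [N, T, C, H, W] frame-level CNN feature maps.
        n, t, c, h, w = feats.shape
        attended = self.attn(feats.reshape(n * t, c, h, w))
        pooled = attended.mean(dim=(2, 3)).reshape(n, t, c)
        out, _ = self.gru(pooled)
        return self.head(out[:, -1])     # classify from the last time step

model = AttentionBiGRU()
print(model(torch.rand(2, 16, 256, 14, 14)).shape)  # torch.Size([2, 51])
```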
To enable early medical intervention for infants with cerebral palsy (CP), early diagnosis of brain damage is critical. Although the general movement assessment (GMA) has shown promising results in early CP detection, it is laborious. Most existing works take videos as input to perform fidgety movements (FMs) classification for automating GMA. These methods require full observation of the video and cannot localize the video frames containing normal FMs. We therefore propose a novel method named WO-GMA to perform FMs localization in a weakly supervised online setting. Infant body keypoints are first extracted as the input to WO-GMA. WO-GMA then performs local spatiotemporal feature extraction, followed by two network branches that generate pseudo clip-level labels and model online actions. With the clip-level pseudo labels, the action modeling branch learns to detect FMs in an online fashion. Experimental results on a dataset with 757 videos of different infants show that WO-GMA achieves state-of-the-art video-level classification and clip-level detection results. Moreover, only the first 20% of the video duration is needed to obtain classification results comparable to those with full observation, which means a significantly shortened FMs diagnosis time. Code is available at: https://github.com/scofiedluo/wo-gma.
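A very rough sketch of the online aspect described above: per-clip keypoint features are scored with a causal temporal convolution, so the score at clip t depends only on clips up to t and can be produced while the video is still being observed. Feature dimensions and the scoring head are assumptions, not the WO-GMA architecture itself.

```python
import torch
import torch.nn as nn

class CausalClipScorer(nn.Module):
    def __init__(self, feat_dim: int = 128, kernel: int = 5):
        super().__init__()
        self.pad = kernel - 1                    # left-only padding keeps the model causal
        self.conv = nn.Conv1d(feat_dim, 64, kernel_size=kernel)
        self.head = nn.Conv1d(64, 1, kernel_size=1)

    def forward(self, clip_feats: torch.Tensor) -> torch.Tensor:
        # clip_feats: [N, T, D] keypoint-based features per clip.
        x = clip_feats.transpose(1, 2)           # [N, D, T]
        x = nn.functional.pad(x, (self.pad, 0))  # pad only on the past side
        return torch.sigmoid(self.head(torch.relu(self.conv(x)))).squeeze(1)  # [N, T] scores

scores = CausalClipScorer()(torch.rand(1, 40, 128))
print(scores.shape)  # torch.Size([1, 40])
```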
This chapter aims to aid the development of cyber-physical systems (CPS) capable of automatically understanding events and activities in various applications of video surveillance. These events are mostly captured by drones, CCTV cameras, or novices and untrained people on low-end devices, and the resulting videos are unconstrained in many quality factors, which makes them very challenging. We provide a broad account of the various approaches proposed over the years to address the problem, ranging from structure-from-motion (SFM) based methods to recent solution frameworks involving deep neural networks. We show that long-term motion patterns alone play a pivotal role in the task of recognizing events. Accordingly, each video is compactly represented by a fixed number of keyframes selected with a graph-based approach, and temporal features are exploited using only a hybrid convolutional neural network (CNN) + recurrent neural network (RNN) architecture. The results we obtain are encouraging, as they outperform standard temporal CNNs and are on par with methods that use spatial information along with motion cues. Exploring multi-stream models further, we devise a multi-tier fusion strategy for the spatial and temporal wings of the network. A consolidated representation of the individual prediction vectors at the video and frame levels is obtained using a biased hybrid technique. The fusion strategy yields higher accuracy at each stage compared with the state-of-the-art methods, thereby achieving a strong consensus in classification. Results are reported on four benchmark datasets widely used in the action recognition domain, namely CCV, HMDB, UCF-101, and KCV. It follows that focusing on better classification of video sequences will lead to robust actuation systems designed for event monitoring and object-cum-activity tracking.
The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0% on UCF-101.
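A sketch of 2D-to-3D filter inflation as described above: a pretrained 2D kernel of shape [out, in, k, k] is repeated T times along a new temporal axis and rescaled by 1/T, so that a "boring" video of repeated frames gives the same activations as the original image model. The helper function name and the temporal kernel size are assumptions.

```python
import torch
import torch.nn as nn

def inflate_conv2d(conv2d: nn.Conv2d, time_kernel: int = 3) -> nn.Conv3d:
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(time_kernel, *conv2d.kernel_size),
                       stride=(1, *conv2d.stride),
                       padding=(time_kernel // 2, *conv2d.padding),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        # Repeat the 2D weights along the new temporal axis and rescale by 1/T.
        conv3d.weight.copy_(conv2d.weight.unsqueeze(2).repeat(1, 1, time_kernel, 1, 1) / time_kernel)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

conv2d = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)
conv3d = inflate_conv2d(conv2d)
print(conv3d.weight.shape)  # torch.Size([64, 3, 3, 7, 7])
```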
Drone-camera based human activity recognition (HAR) has received significant attention from the computer vision research community in the past few years. A robust and efficient HAR system has a pivotal role in fields like video surveillance, crowd behavior analysis, sports analysis, and human-computer interaction. What makes it challenging are the complex poses, understanding different viewpoints, and the environmental scenarios where the action is taking place. To address such complexities, in this paper, we propose a novel Sparse Weighted Temporal Fusion (SWTF) module to utilize sparsely sampled video frames for obtaining a global weighted temporal fusion outcome. The proposed SWTF is divided into two components. First, a temporal segment network that sparsely samples a given set of frames. Second, weighted temporal fusion, which fuses feature maps derived from optical flow with raw RGB images. This is followed by a base-network, which comprises a convolutional neural network module along with fully connected layers that provide us with activity recognition. The SWTF network can be used as a plug-in module to the existing deep CNN architectures, for optimizing them to learn temporal information by eliminating the need for a separate temporal stream. It has been evaluated on three publicly available benchmark datasets, namely Okutama, MOD20, and Drone-Action. The proposed model achieves an accuracy of 72.76%, 92.56%, and 78.86% on the respective datasets, thereby surpassing the previous state-of-the-art performances by a significant margin.