We propose a two-pathway network with bidirectional continuous connections (BCCN) for effective gesture recognition. BCCN consists of two pathways: (i) a keyframe pathway and (ii) a temporal-attention pathway. The keyframe pathway is configured with a skeleton-based keyframe selection module; keyframes pass through this pathway to extract the spatial features of the gesture itself, while the temporal-attention pathway extracts temporal semantics. Our model improves gesture recognition performance on videos and obtains better activation maps for spatial and temporal properties. Experiments were performed on the Chalearn dataset, the ETRI-Activity3D dataset, and the Toyota Smarthome dataset.
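As a rough illustration only: a skeleton-based keyframe selector could score frames by inter-frame joint displacement and keep the most dynamic ones, as in the sketch below. The scoring rule, the value of k, and the array shapes are assumptions for illustration, not BCCN's actual module.

```python
import numpy as np

def select_keyframes(skeleton, k=8):
    """skeleton: (T, J, 3) array -- T frames, J joints, 3D coordinates."""
    # Per-frame motion energy: total joint displacement since the previous frame.
    motion = np.linalg.norm(skeleton[1:] - skeleton[:-1], axis=-1).sum(axis=-1)
    motion = np.concatenate([[0.0], motion])   # frame 0 has no predecessor
    return np.sort(np.argsort(motion)[-k:])    # top-k frames, kept in time order

print(select_keyframes(np.random.rand(64, 25, 3)))  # indices of 8 most dynamic frames
```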
This work focuses on the task of elderly activity recognition, which is challenging because both individual actions and human-object interactions exist in elderly activities. We therefore attempt to effectively aggregate the discriminative information of actions and interactions from RGB videos and skeleton sequences by attentively fusing multimodal features. Recently, some nonlinear multimodal fusion approaches have been proposed by utilizing the nonlinear attention mechanism extended from the Squeeze-and-Excitation Network (SENet). Inspired by this, we propose a novel Expansion-Squeeze-Excitation Fusion Network (ESE-FN) to effectively address the problem of elderly activity recognition, which learns modality-wise and channel-wise Expansion-Squeeze-Excitation (ESE) attentions for attentively fusing the multimodal features in the modality and channel dimensions. Furthermore, we design a new Multimodal Loss (ML) to maintain consistency between the single-modal features and the fused multimodal features, by adding a penalty on the difference between the minimal prediction loss of the single modalities and the prediction loss of the fused modality. Finally, we conduct experiments on the largest-scale elderly activity dataset, ETRI-Activity3D (containing more than 110,000 videos and 50 categories), to demonstrate that the proposed ESE-FN achieves the best accuracy compared with state-of-the-art methods. Broader experimental results also show that the proposed ESE-FN is comparable to other methods on the normal action recognition task.
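A minimal sketch of the described Multimodal Loss, assuming two modalities (RGB and skeleton) and a simple absolute-difference penalty; the exact formulation and weighting in ESE-FN may differ.

```python
import torch
import torch.nn.functional as F

def multimodal_loss(logits_rgb, logits_skel, logits_fused, target, lam=1.0):
    """Fused prediction loss plus a consistency penalty, per the abstract."""
    loss_rgb = F.cross_entropy(logits_rgb, target)
    loss_skel = F.cross_entropy(logits_skel, target)
    loss_fused = F.cross_entropy(logits_fused, target)
    # Penalize the gap between the best single modality and the fused prediction.
    penalty = torch.abs(torch.min(loss_rgb, loss_skel) - loss_fused)
    return loss_fused + lam * penalty

target = torch.randint(0, 50, (4,))
loss = multimodal_loss(torch.randn(4, 50), torch.randn(4, 50), torch.randn(4, 50), target)
```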
Spatiotemporal and motion features are two complementary and crucial information for video action recognition. Recent state-of-the-art methods adopt a 3D CNN stream to learn spatiotemporal features and another flow stream to learn motion features. In this work, we aim to efficiently encode these two features in a unified 2D framework. To this end, we first propose an STM block, which contains a Channel-wise SpatioTemporal Module (CSTM) to present the spatiotemporal features and a Channel-wise Motion Module (CMM) to efficiently encode motion features. We then replace original residual blocks in the ResNet architecture with STM blocks to form a simple yet effective STM network by introducing very limited extra computation cost. Extensive experiments demonstrate that the proposed STM network outperforms the state-of-the-art methods on both temporal-related datasets (i.e., Something-Something v1 & v2 and Jester) and scene-related datasets (i.e., Kinetics-400, UCF-101, and HMDB-51) with the help of encoding spatiotemporal and motion features together. * The work was done during an internship at SenseTime.
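To make the feature-level motion idea concrete, here is a hedged sketch of a CMM-style module: channel-reduced features are differenced along time and expanded back, yielding motion features without optical flow. The layer shapes and reduction ratio are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ChannelMotion(nn.Module):
    """Feature-level temporal differences, in the spirit of STM's CMM."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.reduce = nn.Conv3d(channels, channels // reduction, kernel_size=1)
        self.expand = nn.Conv3d(channels // reduction, channels, kernel_size=1)

    def forward(self, x):
        # x: (N, C, T, H, W)
        f = self.reduce(x)
        diff = f[:, :, 1:] - f[:, :, :-1]                               # motion between frames
        diff = torch.cat([diff, torch.zeros_like(f[:, :, :1])], dim=2)  # pad last step
        return self.expand(diff)                                        # motion features, C channels

cmm = ChannelMotion(64)
print(cmm(torch.randn(2, 64, 8, 14, 14)).shape)  # torch.Size([2, 64, 8, 14, 14])
```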
An important challenge in vision-based action recognition is the embedding of spatiotemporal features with two or more heterogeneous modalities into a single feature. In this study, we propose a new 3D deformable transformer for action recognition with adaptive spatiotemporal receptive fields and a cross-modal learning scheme. The 3D deformable transformer consists of three attention modules: 3D deformability, local joint stride, and temporal stride attention. The two cross-modal tokens are input into the 3D deformable attention module to create a cross-attention token with a reflected spatiotemporal correlation. Local joint stride attention is applied to spatially combine attention and pose tokens. Temporal stride attention temporally reduces the number of input tokens in the attention module and supports temporal expression learning without the simultaneous use of all tokens. The deformable transformer iterates L times and combines the last cross-modal token for classification. The proposed 3D deformable transformer was tested on the NTU60, NTU120, FineGYM, and Penn Action datasets, and showed results better than or similar to pre-trained state-of-the-art methods even without a pre-training process. In addition, by visualizing important joints and correlations during action recognition through spatial joint and temporal stride attention, the possibility of achieving an explainable potential for action recognition is presented.
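A minimal sketch of the temporal-stride idea, assuming one token per frame: queries attend to a temporally strided subset of key/value tokens, shrinking the cost of a single attention operation. The stride value and the use of nn.MultiheadAttention are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TemporalStrideAttention(nn.Module):
    def __init__(self, dim, heads=4, stride=2):
        super().__init__()
        self.stride = stride
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens):
        # tokens: (N, T, D) -- one token per frame
        kv = tokens[:, ::self.stride]        # keep every `stride`-th frame as key/value
        out, _ = self.attn(tokens, kv, kv)   # queries stay at full temporal length
        return out

tsa = TemporalStrideAttention(dim=128)
print(tsa(torch.randn(2, 16, 128)).shape)  # torch.Size([2, 16, 128])
```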
The analysis of human interaction is an important research topic in human motion analysis. It has been studied using either first-person vision (FPV) or third-person vision (TPV). However, joint learning of both views has so far attracted little attention. One reason is the lack of suitable datasets covering both FPV and TPV. In addition, existing benchmark datasets for either FPV or TPV have several limitations, including a limited number of samples, subjects, interaction categories, and modalities. In this work, we contribute a large-scale human interaction dataset, the FT-HID dataset. FT-HID contains pair-aligned samples of first-person and third-person vision. The dataset was collected from 109 distinct subjects and comprises over 90K samples across three modalities. The dataset has been validated using several existing action recognition methods. In addition, we introduce a novel multi-view interaction mechanism for skeleton sequences, and a joint-learning multi-stream framework for first-person and third-person vision. Both methods yield promising results on the FT-HID dataset. It is expected that the introduction of this vision-aligned large-scale dataset will promote the development of both FPV and TPV, and of joint-learning techniques for human action analysis. The dataset and code are available at \href{https://github.com/endlichere/ft-hid}{here}.
Human action recognition is an important application area in computer vision. Its main aim is to accurately describe human actions and their interactions from previously unseen data sequences acquired by sensors. The ability to recognize, understand, and predict complex human actions enables the construction of many important applications, such as intelligent surveillance systems, human-computer interfaces, healthcare, security, and military applications. In recent years, the computer vision community has paid particular attention to deep learning. This paper presents an overview of the current state of the art in action recognition through video analysis with deep learning techniques. We present the most important deep learning models for recognizing human actions and analyze them to provide the current progress of deep learning algorithms applied to human action recognition, highlighting their advantages and disadvantages. Based on a quantitative analysis of the recognition accuracies reported in the literature, our study identifies the state-of-the-art deep architectures in action recognition, and then provides current trends and open problems for future work in this field.
Vision transformers are emerging as a powerful tool to solve computer vision problems. Recent techniques have also demonstrated the efficacy of transformers beyond the image domain on numerous video-related tasks. Among those, human action recognition is receiving special attention from the research community due to its widespread applications. This article provides the first comprehensive survey of vision transformer techniques for action recognition. We analyze and summarize the existing and emerging literature in this direction while highlighting the popular trends in adapting transformers for action recognition. Due to their specialized application, we collectively refer to these methods as ``action transformers''. Our literature review provides a suitable taxonomy for action transformers based on their architecture, modality, and intended objective. Within the context of action transformers, we explore techniques for encoding spatiotemporal data, dimensionality reduction, frame patch and spatiotemporal cube construction, and various representation methods. We also investigate the optimization of spatiotemporal attention in transformer layers to handle longer sequences, typically by reducing the number of tokens in a single attention operation. Moreover, we investigate different network learning strategies, such as self-supervised and zero-shot learning, along with their associated losses for transformer-based action recognition. This survey also summarizes the progress in evaluation metric scores on important benchmarks for action transformers. Finally, it provides a discussion of the challenges, outlook, and future avenues of this research direction.
Existing 3D skeleton-based action recognition methods achieve impressive performance by encoding handcrafted action features into image formats and decoding them with CNNs. However, such methods are limited in two ways: (a) handcrafted action features struggle to handle challenging actions, and (b) complex CNN models are usually required to improve recognition accuracy, which typically incurs a heavy computational burden. To overcome these limitations, we introduce a novel AFE-CNN, which is devoted to enhancing the features of 3D skeleton-based actions to adapt to challenging actions. We propose feature-enhancement modules from the perspectives of key joints, bone vectors, keyframes, and temporal views; hence, the AFE-CNN is more robust to camera views and body-size variations, and significantly improves the recognition accuracy on challenging actions. Moreover, our AFE-CNN adopts a lightweight CNN model to decode the images with enhanced action features, which ensures a much lower computational burden than state-of-the-art methods. We evaluate the AFE-CNN on three benchmark skeleton-based action datasets: NTU RGB+D, NTU RGB+D 120, and UTKinect-Action3D, and obtain extensive experimental results that demonstrate the outstanding performance of AFE-CNN.
In this paper, we develop an efficient multi-scale network to predict action classes in partial videos in an end-to-end manner. Unlike most existing methods with offline feature generation, our method directly takes frames as input and further models motion evolution on two different temporal scales. Therefore, we solve the complexity problems of the two stages of modeling and the problem of insufficient temporal and spatial information of a single scale. Our proposed End-to-End MultiScale Network (E2EMSNet) is composed of two scales which are named segment scale and observed global scale. The segment scale leverages temporal difference over consecutive frames for finer motion patterns by supplying 2D convolutions. For the observed global scale, a Long Short-Term Memory (LSTM) is incorporated to capture motion features of observed frames. Our model provides a simple and efficient modeling framework with a small computational cost. Our E2EMSNet is evaluated on three challenging datasets: BIT, HMDB51, and UCF101. The extensive experiments demonstrate the effectiveness of our method for action prediction in videos.
Videos are multimodal in nature. Conventional video recognition pipelines typically fuse multimodal features for improved performance. However, this is not only computationally expensive but also neglects the fact that different videos rely on different modalities for predictions. This paper introduces Hierarchical and Conditional Modality Selection (HCMS), a simple yet efficient multimodal learning framework for efficient video recognition. HCMS operates on a low-cost modality, i.e., audio clues, by default, and dynamically decides on-the-fly whether to use computationally-expensive modalities, including appearance and motion clues, on a per-input basis. This is achieved by the collaboration of three LSTMs that are organized in a hierarchical manner. In particular, LSTMs that operate on high-cost modalities contain a gating module, which takes as inputs lower-level features and historical information to adaptively determine whether to activate its corresponding modality; otherwise it simply reuses historical information. We conduct extensive experiments on two large-scale video benchmarks, FCVID and ActivityNet, and the results demonstrate the proposed approach can effectively explore multimodal information for improved classification performance while requiring much less computation.
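A simplified sketch of the conditional gating described above: a gate looks at the low-cost features and the LSTM's history and decides whether to pay for the expensive modality at this step. The hard 0.5 threshold is an inference-time assumption; training such discrete decisions typically needs a relaxation (e.g., Gumbel-Softmax), which is not shown.

```python
import torch
import torch.nn as nn

class GatedModalityStep(nn.Module):
    def __init__(self, audio_dim, visual_dim, hidden):
        super().__init__()
        self.gate = nn.Linear(audio_dim + hidden, 1)   # sees cheap features + history
        self.cell = nn.LSTMCell(visual_dim, hidden)

    def forward(self, audio_feat, visual_fn, h, c):
        use_visual = torch.sigmoid(self.gate(torch.cat([audio_feat, h], dim=-1)))
        if use_visual.mean() > 0.5:                    # hard decision at inference
            h, c = self.cell(visual_fn(), (h, c))      # pay for the visual branch
        return h, c                                    # otherwise reuse history

step = GatedModalityStep(audio_dim=32, visual_dim=64, hidden=128)
h = c = torch.zeros(4, 128)
h, c = step(torch.randn(4, 32), lambda: torch.randn(4, 64), h, c)  # lazy visual compute
```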
Graph convolutional networks (GCNs) have become the dominant approach for skeleton-based action recognition. However, they still suffer from two problems, namely the neighborhood constraint and entangled spatiotemporal feature representations. Most studies have focused on improving the design of graph topologies to address the first problem, but the latter remains under-explored. In this work, we design a disentangled spatiotemporal transformer (DSTT) block to overcome the above limitations of GCNs in three steps: (i) spatiotemporal disentanglement by feature decomposition; (ii) global spatiotemporal attention to capture correlations in a global context; and (iii) local information enhancement to exploit more local information. On top of this, we propose a novel architecture named Hierarchical Graph Convolutional skeleton Transformer (HGCT) to employ the complementary advantages of GCNs (i.e., local topology, temporal dynamics, and hierarchy) and transformers (i.e., global context and dynamic attention). HGCT is lightweight and computationally efficient. Quantitative analysis demonstrates the superiority and good interpretability of HGCT.
The task of dynamic gesture recognition has seen studies of various unimodal and multimodal methods. Previously, researchers have explored depth- and 2D-skeleton-based multimodal fusion CRNNs (convolutional recurrent neural networks), but these had limitations in achieving the expected recognition results. In this paper, we revisit this approach to gesture recognition and propose several improvements. We observe that raw depth images have low contrast in the hand region of interest (ROI). They do not highlight important fine details, such as finger orientation, the overlap between fingers and palm, or the overlap between multiple fingers. We therefore propose quantizing the depth values into several discrete regions to create higher contrast between several key parts of the hand. In addition, we propose several methods to address the high-variance problem in existing multimodal fusion CRNN architectures. We evaluate our method on two benchmarks: the DHG-14/28 dataset and the SHREC'17 Track dataset. Our method shows significant improvements in accuracy and parameter efficiency over previous similar multimodal methods, with results comparable to the state of the art.
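The depth-quantization step is easy to sketch: map raw depth in the hand ROI into a few discrete bands so that fingers and palm separate with higher contrast. The bin count and the uniform binning rule below are assumptions.

```python
import numpy as np

def quantize_depth(depth_roi, bins=8):
    """depth_roi: 2D array of raw depth values for the hand region."""
    lo, hi = depth_roi.min(), depth_roi.max()
    if hi == lo:
        return np.zeros_like(depth_roi)
    # Uniformly bin the ROI's depth range into `bins` discrete levels.
    levels = np.floor((depth_roi - lo) / (hi - lo) * (bins - 1e-6)).astype(int)
    return levels * (255 // (bins - 1))   # stretch the bands back to 0..255

roi = np.random.uniform(600, 900, size=(64, 64))   # synthetic depth, millimeters
print(np.unique(quantize_depth(roi)))              # 8 well-separated gray levels
```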
Modeling various spatiotemporal dependencies is the key to recognizing human actions in skeleton sequences. Most existing methods overly rely on the design of traversal rules or graph topologies to exploit the dependencies of dynamic joints, which is insufficient to reflect the relationships of distant yet important joints. In addition, important long-range temporal information is under-explored in existing work due to the locally adopted operations. To address this issue, we propose LSTA-Net: a novel long/short-term spatiotemporal aggregation network that can effectively capture long- and short-range dependencies in a spatiotemporal manner. We design our model as a purely factorized architecture that alternately performs spatial feature aggregation and temporal feature aggregation. To improve the feature-aggregation effect, a channel-wise attention mechanism is also designed and employed. Extensive experiments were conducted on three public benchmark datasets, and the results show that our method can capture both long- and short-range dependencies in the spatial and temporal domains, yielding higher results than other state-of-the-art methods. The code is available at https://github.com/tailin1009/lsta-net.
Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition, which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-of-the-art performance on the datasets of HMDB51 (69.4%) and UCF101 (94.2%). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices.
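A compact sketch of TSN's sparse sampling and segmental consensus: split the video into K segments, sample one snippet per segment, score each snippet, and average the scores. The backbone below is a stand-in for brevity, not the paper's ConvNet.

```python
import torch
import torch.nn as nn

def sparse_sample(num_frames, k=3):
    """Return one random frame index from each of k equal segments."""
    edges = torch.linspace(0, num_frames, k + 1).long().tolist()
    return [int(torch.randint(edges[i], edges[i + 1], (1,))) for i in range(k)]

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 101))  # stand-in net
video = torch.randn(64, 3, 224, 224)              # 64 decoded frames
snippets = video[sparse_sample(64, k=3)]          # (3, 3, 224, 224), one per segment
consensus = backbone(snippets).mean(dim=0)        # average the segment-level scores
print(consensus.shape)  # torch.Size([101]) -- video-level class scores
```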
Temporal modeling is key for action recognition in videos. It normally considers both short-range motions and long-range aggregations. In this paper, we propose a Temporal Excitation and Aggregation (TEA) block, including a motion excitation (ME) module and a multiple temporal aggregation (MTA) module, specifically designed to capture both short- and long-range temporal evolution. In particular, for short-range motion modeling, the ME module calculates the feature-level temporal differences from spatiotemporal features. It then utilizes the differences to excite the motion-sensitive channels of the features. The long-range temporal aggregations in previous works are typically achieved by stacking a large number of local temporal convolutions. Each convolution processes a local temporal window at a time. In contrast, the MTA module proposes to deform the local convolution to a group of sub-convolutions, forming a hierarchical residual architecture. Without introducing additional parameters, the features will be processed with a series of sub-convolutions, and each frame could complete multiple temporal aggregations with neighborhoods. The final equivalent receptive field of temporal dimension is accordingly enlarged, which is capable of modeling the long-range temporal relationship over distant frames. The two components of the TEA block are complementary in temporal modeling. Finally, our approach achieves impressive results at low FLOPs on several action recognition benchmarks, such as Kinetics, Something-Something, HMDB51, and UCF101, which confirms its effectiveness and efficiency.
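A loose sketch of the MTA idea (Res2Net-style): channels are split into groups and small temporal convolutions are chained, so later groups see progressively larger temporal receptive fields. The group count and kernel size are assumptions, and spatial dimensions are pooled away for brevity.

```python
import torch
import torch.nn as nn

class MultiTemporalAggregation(nn.Module):
    def __init__(self, channels, groups=4):
        super().__init__()
        self.groups = groups
        g = channels // groups
        self.convs = nn.ModuleList(
            nn.Conv1d(g, g, kernel_size=3, padding=1) for _ in range(groups - 1)
        )

    def forward(self, x):
        # x: (N, C, T) -- per-frame features, spatial dims already pooled
        splits = torch.chunk(x, self.groups, dim=1)
        out, prev = [splits[0]], splits[0]
        for conv, s in zip(self.convs, splits[1:]):
            prev = conv(s + prev)   # hierarchical residual: reuse the previous group
            out.append(prev)        # each later group aggregates a wider window
        return torch.cat(out, dim=1)

mta = MultiTemporalAggregation(64)
print(mta(torch.randn(2, 64, 8)).shape)  # torch.Size([2, 64, 8])
```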
Designing activity detection systems that can be successfully deployed in daily-living environments requires datasets that pose the challenges typical of real-world scenarios. In this paper, we introduce a new untrimmed daily-living dataset that features several real-world challenges: Toyota Smarthome Untrimmed (TSU). TSU contains a wide variety of activities performed in a spontaneous manner. The dataset contains dense annotations including elementary and composite activities, as well as activities involving interactions with objects. We provide an analysis of the real-world challenges featured by the dataset, highlighting the open problems for detection algorithms. We show that current state-of-the-art methods fail to achieve satisfactory performance on the TSU dataset. Therefore, we propose a new baseline method to address the new challenges offered by the dataset. This method leverages one modality (i.e., optical flow) to generate attention weights that guide another modality (i.e., RGB) to better detect activity boundaries. This is particularly beneficial for detecting activities characterized by high temporal variance. We show that our proposed method outperforms state-of-the-art methods on TSU and on another popular and challenging dataset, Charades.
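A minimal sketch of the baseline's cross-modal guidance, under stated assumptions: per-frame attention weights derived from optical-flow features modulate RGB features along time. Producing the weights with a 1D temporal convolution plus a sigmoid is an assumption about the mechanism, not the paper's exact design.

```python
import torch
import torch.nn as nn

class FlowGuidedAttention(nn.Module):
    def __init__(self, flow_dim):
        super().__init__()
        self.to_weight = nn.Conv1d(flow_dim, 1, kernel_size=3, padding=1)

    def forward(self, flow_feat, rgb_feat):
        # flow_feat: (N, C_flow, T), rgb_feat: (N, C_rgb, T)
        w = torch.sigmoid(self.to_weight(flow_feat))   # (N, 1, T) per-frame weights
        return rgb_feat * w                            # reweight RGB along time

fga = FlowGuidedAttention(flow_dim=128)
print(fga(torch.randn(2, 128, 100), torch.randn(2, 256, 100)).shape)  # (2, 256, 100)
```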
Gait emotion recognition plays a crucial role in intelligent systems. Most existing methods identify emotions by focusing on local actions over time. However, they ignore that the effective distances of different emotions in the temporal domain differ, and that local actions during walking are very similar. Therefore, emotions should be represented by global states rather than indirect local actions. To address these issues, a novel Multi-Scale Adaptive Graph Convolution Network (MSA-GCN) is presented in this work, which recognizes emotions by constructing dynamic temporal receptive fields and designing multi-scale information aggregation. In our model, an adaptive selective spatiotemporal graph convolution is designed to dynamically select convolution kernels to obtain the soft spatiotemporal features of different emotions. In addition, a Cross-Scale mapping Fusion Mechanism (CSFM) is designed to construct an adaptive adjacency matrix to enhance information interaction and reduce redundancy. Compared with previous state-of-the-art methods, the proposed method achieves the best performance on two public datasets, improving the mAP by 2%. We also conduct extensive ablation studies to show the effectiveness of the different components of our method.
Personality computing and affective computing have recently gained interest in many research areas. The datasets for such tasks typically have multiple modalities, such as video, audio, language, and bio-signals. In this paper, we propose a flexible model for such tasks that exploits all available data. Because the task involves complex relations, and to avoid using large models for video processing, we propose the use of behavior encoding, which boosts performance with minimal changes to the model. Cross-attention using transformers has recently become popular and is used to fuse different modalities. Since long-term relations may exist, breaking the input into chunks is undesirable, so the proposed model processes the entire input together. Our experiments show the importance of each of the above contributions.
Human activity recognition (HAR) using drone-mounted cameras has attracted considerable interest from the computer vision research community in recent years. A robust and efficient HAR system has a pivotal role in fields like video surveillance, crowd behavior analysis, sports analysis, and human-computer interaction. What makes it challenging are the complex poses, understanding different viewpoints, and the environmental scenarios where the action is taking place. To address such complexities, in this paper we propose a novel Sparse Weighted Temporal Attention (SWTA) module to utilize sparsely sampled video frames for obtaining global weighted temporal attention. The proposed SWTA is comprised of two parts: first, a temporal segment network that sparsely samples a given set of frames; second, weighted temporal attention, which incorporates a fusion of attention maps derived from optical flow with raw RGB images. This is followed by a basenet network, which comprises a convolutional neural network (CNN) module along with fully connected layers that provide us with activity recognition. The SWTA network can be used as a plug-in module to existing deep CNN architectures, optimizing them to learn temporal information by eliminating the need for a separate temporal stream. It has been evaluated on three publicly available benchmark datasets, namely Okutama, MOD20, and Drone-Action. The proposed model achieves accuracies of 72.76%, 92.56%, and 78.86% on the respective datasets, thereby surpassing the previous state-of-the-art performances by margins of 25.26%, 18.56%, and 2.94%, respectively.
Methods based on graph convolutional networks (GCNs) have achieved advanced performance on skeleton-based action recognition tasks. However, the skeleton graph cannot fully represent the motion information contained in skeleton data. In addition, the topology of the skeleton graph in GCN-based methods is manually set according to the natural connections and is fixed for all samples, so it cannot adapt well to different situations. In this work, we propose a novel dynamic hypergraph convolutional network (DHGCN) for skeleton-based action recognition. DHGCN uses a hypergraph to represent the skeleton structure in order to effectively exploit the motion information contained in human joints. Each joint in the skeleton hypergraph is dynamically assigned to hyperedges according to its movement, and the hypergraph topology in our model can be dynamically adjusted for different samples according to the relationships between joints. Experimental results demonstrate that our model achieves competitive performance on three datasets: Kinetics-Skeleton 400, NTU RGB+D 60, and NTU RGB+D 120.
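To make the hypergraph idea concrete, here is a bare-bones hypergraph convolution in the standard HGNN form (features propagate from joints to hyperedges and back). DHGCN's dynamic topology construction is not shown; the random incidence matrix is for illustration only.

```python
import torch

def hypergraph_conv(x, H, theta):
    # x: (J, C) joint features, H: (J, E) joint-to-hyperedge incidence, theta: (C, C_out)
    dv = H.sum(dim=1).clamp(min=1)        # joint degrees
    de = H.sum(dim=0).clamp(min=1)        # hyperedge degrees
    Dv = torch.diag(dv.pow(-0.5))
    De = torch.diag(1.0 / de)
    # Propagate joints -> hyperedges -> joints, then transform the channels.
    return Dv @ H @ De @ H.t() @ Dv @ x @ theta

J, E, C = 25, 6, 16
H = (torch.rand(J, E) > 0.7).float()      # random incidence, illustration only
out = hypergraph_conv(torch.randn(J, C), H, torch.randn(C, C))
print(out.shape)  # torch.Size([25, 16])
```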