Action recognition in videos, i.e., classifying a video into one of the pre-defined action categories, has long been a popular topic in the artificial intelligence, multimedia, and signal processing communities. However, existing methods usually consider an input video as a whole and learn models, e.g., Convolutional Neural Networks (CNNs), with coarse video-level class labels. These methods can only output an action class for the video, but cannot provide interpretable clues to answer why the video shows a specific action. Therefore, researchers have begun to focus on a new task, Part-level Action Parsing (PAP), which aims not only to predict the video-level action but also to recognize the frame-level fine-grained actions or interactions of body parts for each person in the video. To this end, we propose a coarse-to-fine framework for this challenging task. In particular, our framework first predicts the video-level class of the input video, then localizes the body parts and predicts the part-level actions. Moreover, to balance the accuracy and computation of part-level action parsing, we propose to recognize the part-level actions by segment-level features. Furthermore, to overcome the ambiguity of body parts, we propose a pose-guided positional embedding method to accurately localize body parts. Through comprehensive experiments on a large-scale dataset, i.e., Kinetics-TPS, our framework achieves state-of-the-art performance and outperforms existing methods by more than 31.10% ROC score.
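As a rough illustration of the pose-guided positional embedding idea described above, here is a minimal PyTorch sketch; the module name, the number of keypoints per part, and the MLP design are assumptions of mine, not the paper's implementation.

```python
import torch
import torch.nn as nn


class PoseGuidedPositionEmbedding(nn.Module):
    """Hypothetical sketch: map a body part's keypoint coordinates to an
    embedding that is added to the part's visual feature, so the model
    knows where each part token comes from."""

    def __init__(self, feat_dim: int, num_keypoints_per_part: int = 2):
        super().__init__()
        # each part is localized by a few keypoints (e.g. elbow + wrist for a forearm)
        self.proj = nn.Sequential(
            nn.Linear(2 * num_keypoints_per_part, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, part_feats: torch.Tensor, keypoints: torch.Tensor) -> torch.Tensor:
        # part_feats: (B, P, C) visual features of P body parts
        # keypoints:  (B, P, K, 2) normalized (x, y) keypoints defining each part
        pos = self.proj(keypoints.flatten(2))        # (B, P, C)
        return part_feats + pos                      # position-aware part tokens


if __name__ == "__main__":
    emb = PoseGuidedPositionEmbedding(feat_dim=256)
    feats = torch.randn(4, 10, 256)                  # 10 body parts
    kpts = torch.rand(4, 10, 2, 2)                   # 2 keypoints per part
    print(emb(feats, kpts).shape)                    # torch.Size([4, 10, 256])
```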
This technical report introduces our 2nd place solution to the Kinetics-TPS Track on Part-level Action Parsing at the ICCV DeeperAction Workshop 2021. Our entry is mainly based on YOLOF for instance and part detection, HRNet for human pose estimation, and CSN for video-level action recognition and frame-level part state parsing. We describe the technical details of our solution on the Kinetics-TPS dataset, together with some experimental results. In the competition, we achieved 61.37% mAP on the test set of Kinetics-TPS.
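The entry above is a three-model pipeline (part detection, pose estimation, and video-level/part-state classification). The sketch below only illustrates how such stages might be wired together; the interfaces are placeholders of mine and do not reflect the actual YOLOF/HRNet/CSN APIs.

```python
from dataclasses import dataclass
from typing import List, Protocol


@dataclass
class PartDetection:
    box: tuple               # (x1, y1, x2, y2)
    part_label: str          # e.g. "left_arm"
    state: str = "unknown"   # frame-level part state, filled in later


class Detector(Protocol):
    def detect_parts(self, frame) -> List[PartDetection]: ...

class PoseEstimator(Protocol):
    def keypoints(self, frame, box) -> list: ...

class VideoClassifier(Protocol):
    def classify(self, frames) -> str: ...


def parse_video(frames, detector: Detector, pose: PoseEstimator, clf: VideoClassifier):
    """Run the three stages: a video-level action label, then per-frame part
    boxes, keypoints, and part states."""
    video_action = clf.classify(frames)              # e.g. a CSN-style clip classifier
    per_frame = []
    for frame in frames:
        dets = detector.detect_parts(frame)          # e.g. a YOLOF-style part detector
        for d in dets:
            kpts = pose.keypoints(frame, d.box)      # e.g. an HRNet-style pose model
            d.state = "unknown"                      # a part-state head would go here
        per_frame.append(dets)
    return video_action, per_frame
```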
In this paper, we propose an excavator activity analysis and safety monitoring system, leveraging recent advances in deep learning and computer vision. Our proposed system detects the surrounding environment and the excavator while estimating the pose and action of the excavator. Compared with previous systems, our method achieves higher accuracy on the object detection, pose estimation, and action recognition tasks. In addition, we build an excavator dataset using the Autonomous Excavator System (AES) in a waste disposal and recycling scenario to demonstrate the effectiveness of our system. We also evaluate our method on a benchmark construction dataset. The experimental results show that the proposed action recognition approach outperforms the state-of-the-art methods by about 5.18%.
Most state-of-the-art instance-level human parsing models adopt two-stage anchor-based detectors and therefore cannot avoid the heuristic anchor box design and the lack of analysis at the pixel level. To address these two issues, we design an instance-level human parsing network that is anchor-free and solvable at the pixel level. It consists of two simple sub-networks: an anchor-free detection head for bounding box prediction and an edge-guided parsing head for human segmentation. The anchor-free detector head inherits pixel-level merits and effectively avoids the sensitivity to hyper-parameters demonstrated in object detection applications. By introducing a part-aware boundary clue, the edge-guided parsing head is capable of distinguishing adjacent human parts from each other, within a single human instance and even across overlapping instances. Meanwhile, a refinement head integrating the box-level score and the part-level parsing quality is exploited to improve the quality of the parsing results. Experiments on two multiple human parsing datasets (i.e., CIHP and LV-MHP-v2.0) and one video instance-level human parsing dataset (i.e., VIP) show that our method achieves performance superior to state-of-the-art one-stage top-down alternatives at both the global level and the instance level.
Sign language is a window for people with different abilities to express their feelings and emotions. However, it remains challenging for people to learn sign language in a short time. To address this real-world challenge, in this work we study a motion transfer system that can transfer a user's photo to a sign language video of a specified word. In particular, the appearance content of the output video comes from the provided user image, while the motion of the video is extracted from a specified tutorial video. We observe two primary limitations in adopting state-of-the-art motion transfer methods to generate sign language: (1) existing motion transfer works ignore the prior geometric knowledge of the human body; (2) previous image animation methods only take image pairs as input in the training stage, which cannot fully exploit the temporal information within videos. To address the above limitations, we propose a Structure-aware Temporal Consistency Network (STCNet) to jointly optimize the prior structure of humans with temporal consistency for sign language video generation. This paper has two main contributions: (1) we utilize a fine-grained skeleton detector to provide prior knowledge of human keypoints; in this way, we ensure that the keypoint movements stay within a valid range and make the model more interpretable and robust; (2) we introduce two cycle-consistency losses, namely a short-term cycle loss and a long-term cycle loss, which are designed to ensure the continuity of the generated video. The two losses and the keypoint detector network are optimized in an end-to-end manner.
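A minimal sketch of what short- and long-term cycle-consistency losses could look like for a frame-to-frame motion-transfer generator; the gap sizes and the L1 reconstruction term are assumptions of mine, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def cycle_losses(generator, frames, short_gap=1, long_gap=8):
    """Hypothetical sketch of short-/long-term cycle-consistency losses.
    generator(source_img, driving_img) -> reenacted image.
    frames: (T, C, H, W) video of one signer."""
    t = 0
    # short-term: frame_t -> frame_{t+1} -> frame_t should reconstruct frame_t
    fwd_s = generator(frames[t], frames[t + short_gap])
    back_s = generator(fwd_s, frames[t])
    loss_short = F.l1_loss(back_s, frames[t])

    # long-term: the same round trip over a larger temporal gap
    fwd_l = generator(frames[t], frames[t + long_gap])
    back_l = generator(fwd_l, frames[t])
    loss_long = F.l1_loss(back_l, frames[t])

    return loss_short + loss_long
```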
In this paper, we develop an efficient multi-scale network to predict action classes in partial videos in an end-to-end manner. Unlike most existing methods with offline feature generation, our method directly takes frames as input and further models motion evolution on two different temporal scales. Therefore, we solve the complexity problems of the two stages of modeling and the problem of insufficient temporal and spatial information of a single scale. Our proposed End-to-End MultiScale Network (E2EMSNet) is composed of two scales which are named segment scale and observed global scale. The segment scale leverages temporal difference over consecutive frames for finer motion patterns by supplying 2D convolutions. For observed global scale, a Long Short-Term Memory (LSTM) is incorporated to capture motion features of observed frames. Our model provides a simple and efficient modeling framework with a small computational cost. Our E2EMSNet is evaluated on three challenging datasets: BIT, HMDB51, and UCF101. The extensive experiments demonstrate the effectiveness of our method for action prediction in videos.
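A toy PyTorch sketch of the two-scale idea above (frame differences fed to 2D convolutions for the segment scale, an LSTM over segment features for the observed global scale); the backbone and dimensions are illustrative, not the authors' E2EMSNet.

```python
import torch
import torch.nn as nn


class TwoScaleNet(nn.Module):
    """Rough sketch (not the authors' code): a segment scale that feeds frame
    differences to 2D convolutions, and a global scale that runs an LSTM over
    the resulting per-step features."""

    def __init__(self, num_classes: int, hidden: int = 256):
        super().__init__()
        self.frame_conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (B, T, 3, H, W), the partially observed video
        diff = clip[:, 1:] - clip[:, :-1]            # temporal difference, (B, T-1, 3, H, W)
        B, T = diff.shape[:2]
        seg = self.frame_conv(diff.flatten(0, 1))    # (B*(T-1), 64, 1, 1)
        seg = seg.view(B, T, 64)
        out, _ = self.lstm(seg)                      # observed global scale
        return self.fc(out[:, -1])                   # predict the full-video action


if __name__ == "__main__":
    net = TwoScaleNet(num_classes=8)
    x = torch.randn(2, 16, 3, 112, 112)
    print(net(x).shape)                              # torch.Size([2, 8])
```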
Accurate whole-body multi-person pose estimation and tracking is an important yet challenging topic in computer vision. To capture the subtle actions of humans for complex behavior analysis, whole-body pose estimation including the face, body, hand and foot is essential over conventional body-only pose estimation. In this paper, we present AlphaPose, a system that can perform accurate whole-body pose estimation and tracking jointly while running in realtime. To this end, we propose several new techniques: Symmetric Integral Keypoint Regression (SIKR) for fast and fine localization, Parametric Pose Non-Maximum-Suppression (P-NMS) for eliminating redundant human detections and Pose Aware Identity Embedding for joint pose estimation and tracking. During training, we resort to Part-Guided Proposal Generator (PGPG) and multi-domain knowledge distillation to further improve the accuracy. Our method is able to localize whole-body keypoints accurately and track humans simultaneously given inaccurate bounding boxes and redundant detections. We show a significant improvement over current state-of-the-art methods in both speed and accuracy on COCO-wholebody, COCO, PoseTrack, and our proposed Halpe-FullBody pose estimation dataset. Our model, source codes and dataset are made publicly available at https://github.com/MVIG-SJTU/AlphaPose.
There has been significant progress on pose estimation and increasing interests on pose tracking in recent years. At the same time, the overall algorithm and system complexity increases as well, making the algorithm analysis and comparison more difficult. This work provides simple and effective baseline methods. They are helpful for inspiring and evaluating new ideas for the field. State-of-the-art results are achieved on challenging benchmarks. The code will be available at https://github.com/leoxiaobin/pose.pytorch.
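A condensed sketch of the simple-baseline design described above (a ResNet backbone, a few deconvolution layers, and a 1x1 convolution predicting per-joint heatmaps); the channel counts and layer numbers here are illustrative rather than the released configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50


class SimplePoseBaseline(nn.Module):
    """Sketch of the baseline idea: ResNet features, three stride-2 deconvs,
    and a 1x1 conv that predicts one heatmap per joint."""

    def __init__(self, num_joints: int = 17):
        super().__init__()
        backbone = resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # (B, 2048, H/32, W/32)
        layers, in_ch = [], 2048
        for _ in range(3):                           # three 4x4 stride-2 deconvolutions
            layers += [
                nn.ConvTranspose2d(in_ch, 256, 4, stride=2, padding=1),
                nn.BatchNorm2d(256),
                nn.ReLU(inplace=True),
            ]
            in_ch = 256
        self.deconv = nn.Sequential(*layers)
        self.head = nn.Conv2d(256, num_joints, kernel_size=1)

    def forward(self, x):
        return self.head(self.deconv(self.features(x)))   # (B, J, H/4, W/4)


if __name__ == "__main__":
    model = SimplePoseBaseline()
    heatmaps = model(torch.randn(1, 3, 256, 192))
    print(heatmaps.shape)                            # torch.Size([1, 17, 64, 48])
```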
Spatiotemporal and motion features are two complementary and crucial types of information for video action recognition. Recent state-of-the-art methods adopt a 3D CNN stream to learn spatiotemporal features and another flow stream to learn motion features. In this work, we aim to efficiently encode these two features in a unified 2D framework. To this end, we first propose an STM block, which contains a Channel-wise SpatioTemporal Module (CSTM) to present the spatiotemporal features and a Channel-wise Motion Module (CMM) to efficiently encode motion features. We then replace original residual blocks in the ResNet architecture with STM blocks to form a simple yet effective STM network by introducing very limited extra computation cost. Extensive experiments demonstrate that the proposed STM network outperforms the state-of-the-art methods on both temporal-related datasets (i.e., Something-Something v1 & v2 and Jester) and scene-related datasets (i.e., Kinetics-400, UCF-101, and HMDB-51) with the help of encoding spatiotemporal and motion features together. * The work was done during an internship at SenseTime.
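A sketch of the channel-wise motion idea behind the CMM: motion is approximated as the difference between a channel-wise (depth-wise) transformed feature at t+1 and the feature at t. The kernel choice and the zero-padding of the last step are my assumptions, not the paper's code.

```python
import torch
import torch.nn as nn


class ChannelwiseMotionModule(nn.Module):
    """Sketch of a channel-wise motion module: F'_{t+1} - F_t approximates
    motion inside a 2D backbone, without optical flow."""

    def __init__(self, channels: int):
        super().__init__()
        # depth-wise ("channel-wise") 3x3 conv applied to the next frame's feature
        self.transform = nn.Conv2d(channels, channels, 3, padding=1,
                                   groups=channels, bias=False)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, T, C, H, W) per-frame features
        nxt = self.transform(feats[:, 1:].flatten(0, 1))      # transform F_{t+1}
        nxt = nxt.view_as(feats[:, 1:])
        motion = nxt - feats[:, :-1]                          # F'_{t+1} - F_t
        pad = torch.zeros_like(motion[:, :1])                 # keep temporal length T
        return torch.cat([motion, pad], dim=1)                # (B, T, C, H, W)
```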
The topic of multi-person pose estimation has been largely improved recently, especially with the development of convolutional neural network. However, there still exist a lot of challenging cases, such as occluded keypoints, invisible keypoints and complex background, which cannot be well addressed. In this paper, we present a novel network structure called Cascaded Pyramid Network (CPN) which targets to relieve the problem from these "hard" keypoints. More specifically, our algorithm includes two stages: GlobalNet and RefineNet. GlobalNet is a feature pyramid network which can successfully localize the "simple" keypoints like eyes and hands but may fail to precisely recognize the occluded or invisible keypoints. Our RefineNet tries explicitly handling the "hard" keypoints by integrating all levels of feature representations from the GlobalNet together with an online hard keypoint mining loss. In general, to address the multi-person pose estimation problem, a top-down pipeline is adopted to first generate a set of human bounding boxes based on a detector, followed by our CPN for keypoint localization in each human bounding box. Based on the proposed algorithm, we achieve state-of-the-art results on the COCO keypoint benchmark, with average precision at 73.0 on the COCO test-dev dataset and 72.1 on the COCO test-challenge dataset, which is a 19% relative improvement compared with 60.5 from the COCO 2016 keypoint challenge. Code and the detection results are publicly available for further research.
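The online hard keypoint mining used by RefineNet can be sketched as follows: compute a per-joint loss and back-propagate only the hardest joints of each sample. The top-k value here is illustrative.

```python
import torch


def ohkm_loss(pred: torch.Tensor, target: torch.Tensor, topk: int = 8) -> torch.Tensor:
    """Sketch of online hard keypoint mining.
    pred, target: (B, J, H, W) heatmaps; only the top-k hardest joints
    per sample contribute to the loss."""
    per_joint = ((pred - target) ** 2).mean(dim=(2, 3))   # (B, J) per-joint MSE
    hard, _ = per_joint.topk(topk, dim=1)                 # k largest losses per sample
    return hard.mean()
```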
The lack of large-scale, annotated real datasets makes transfer learning a necessity for video activity understanding. Our goal is to develop an effective method of few-shot transfer learning for action classification. We leverage independently trained local visual cues to learn representations that can be transferred from a source domain to a different target domain using only a few examples. The visual cues we use include object-object interactions, hand grasp, and motion within regions that are a function of hand locations. We adopt a meta-learning based framework to extract the distinctive and domain-invariant components of the deployed visual cues. This enables transferring action classification models across public datasets captured with different scene and action configurations. We present comparative results of our transfer learning approach against state-of-the-art action classification methods for both inter-class and inter-dataset transfer.
In this paper, we present an approach for learning a visual representation from the raw spatiotemporal signals in videos. Our representation is learned without supervision from semantic labels. We formulate our method as an unsupervised sequential verification task, i.e., we determine whether a sequence of frames from a video is in the correct temporal order. With this simple task and no semantic labels, we learn a powerful visual representation using a Convolutional Neural Network (CNN). The representation contains complementary information to that learned from supervised image datasets like ImageNet. Qualitative results show that our method captures information that is temporally varying, such as human pose. When used as pre-training for action recognition, our method gives significant gains over learning without external data on benchmark datasets like UCF101 and HMDB51. To demonstrate its sensitivity to human pose, we show results for pose estimation on the FLIC and MPII datasets that are competitive, or better than approaches using significantly more supervision. Our method can be combined with supervised representations to provide an additional boost in accuracy.
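The sequential verification task above can be illustrated with a tiny sampler that produces ordered (positive) and shuffled (negative) frame-index tuples; the sampling constraints of the actual method (e.g., motion-based frame selection) are omitted here.

```python
import random


def sample_order_tuple(num_frames: int, tuple_len: int = 3, positive: bool = True):
    """Sketch of the self-supervised task: sample frame indices that are either
    in correct temporal order (positive) or shuffled (negative); a CNN is then
    trained to classify the tuple as ordered vs. not."""
    idx = sorted(random.sample(range(num_frames), tuple_len))
    if not positive:
        shuffled = idx[:]
        while shuffled == idx:            # make sure the order actually changes
            random.shuffle(shuffled)
        idx = shuffled
    label = 1 if positive else 0
    return idx, label


if __name__ == "__main__":
    print(sample_order_tuple(100, positive=True))    # e.g. ([12, 40, 77], 1)
    print(sample_order_tuple(100, positive=False))   # e.g. ([77, 12, 40], 0)
```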
Most modern approaches in temporal action localization divide this problem into two parts: (i) short-term feature extraction and (ii) long-range temporal boundary localization. Due to the high GPU memory cost caused by processing long untrimmed videos, many methods sacrifice the representational power of the short-term feature extractor by either freezing the backbone or using a small spatial video resolution. This issue becomes even worse with recent video transformer models, many of which have quadratic memory complexity. To address these issues, we propose TALLFormer, a memory-efficient and end-to-end trainable Temporal Action Localization transformer with Long-term memory. Our long-term memory mechanism eliminates the need to process hundreds of redundant video frames during each training iteration, thus significantly reducing GPU memory consumption and training time. These efficiency savings allow us (i) to use a powerful video transformer extractor without freezing the backbone or reducing the spatial video resolution, while (ii) also maintaining long-range temporal boundary localization capability. With only RGB frames as input and no external action recognition classifier, TALLFormer outperforms previous state-of-the-art methods by a large margin, achieving an average mAP of 59.1% on THUMOS14 and 35.6% on ActivityNet-1.3. The code is publicly available at: https://github.com/klauscc/tallformer.
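A minimal sketch of one way a long-term clip memory could work: cache features for the whole timeline and recompute only a short window online each iteration, so GPU memory stays bounded. This is my reading of the general idea, not the released TALLFormer code.

```python
import torch


class ClipMemory:
    """Hypothetical long-term feature memory: cached features for all clips of a
    video, with only a sampled window recomputed (and back-propagated) online."""

    def __init__(self, num_clips: int, dim: int):
        self.bank = torch.zeros(num_clips, dim)      # cached clip features, kept on CPU

    def update(self, indices: torch.Tensor, feats: torch.Tensor) -> None:
        self.bank[indices] = feats.detach().cpu()

    def full_sequence(self, online_indices: torch.Tensor,
                      online_feats: torch.Tensor) -> torch.Tensor:
        seq = self.bank.clone().to(online_feats.device)
        seq[online_indices] = online_feats           # gradients flow only through the online window
        return seq                                   # (num_clips, dim) sequence for the localizer
```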
Recently, action recognition has received increasing attention due to its comprehensive and practical applications in intelligent surveillance and human-computer interaction. However, few-shot action recognition is not well explored and remains challenging because of data scarcity. In this paper, we propose a novel hierarchical compositional representation (HCR) learning approach for few-shot action recognition. Specifically, we divide a complex action into several sub-actions by carefully designed hierarchical clustering, and further decompose the sub-actions into more fine-grained spatially attentional sub-actions (SAS-actions). Although large differences exist between base classes and novel classes, they can share similar patterns in their sub-actions or SAS-actions. In addition, we adopt the Earth Mover's Distance from the transportation problem to measure the similarity between video samples in terms of their sub-action representations. It computes the optimal matching flows between sub-actions as the distance metric, which is favorable for comparing fine-grained patterns. Extensive experiments show that our method achieves state-of-the-art results on the HMDB51, UCF101, and Kinetics datasets.
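With the same number of sub-actions per video and uniform weights, the Earth Mover's Distance reduces to an optimal one-to-one matching, which can be sketched with the Hungarian algorithm; the actual method also learns the transport weights, which is omitted in this assumption-laden sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def subaction_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Sketch: EMD between two videos' sub-action representations under uniform
    weights, solved exactly as an assignment problem.
    feats_a, feats_b: (K, D) sub-action features of two videos."""
    cost = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=-1)  # (K, K) pairwise cost
    rows, cols = linear_sum_assignment(cost)                                   # optimal matching flow
    return float(cost[rows, cols].mean())


if __name__ == "__main__":
    a, b = np.random.rand(5, 128), np.random.rand(5, 128)
    print(subaction_distance(a, b))
```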
Efficiently modeling spatial-temporal information in videos is crucial for action recognition. To achieve this goal, state-of-the-art methods typically employ the convolution operator and dense interaction modules such as non-local blocks. However, these methods cannot accurately fit the diverse events in videos. On the one hand, the adopted convolutions have fixed scales, thus struggling with events of various scales. On the other hand, the dense interaction modeling paradigm only achieves sub-optimal performance as action-irrelevant parts bring additional noise to the final prediction. In this paper, we propose a unified action recognition framework to investigate the dynamic nature of video content by introducing the following designs. First, when extracting local cues, we generate spatio-temporal kernels of dynamic scales to adaptively fit the diverse events. Second, to accurately aggregate these cues into a global video representation, we propose to mine the interactions only among a few selected foreground objects via a Transformer, which yields a sparse paradigm. We call the proposed framework Event Adaptive Network (EAN) because both key designs are adaptive to the input video content. To exploit the short-term motion within local segments, we propose a novel and efficient Latent Motion Code (LMC) module, further improving the performance of the framework. Extensive experiments on several large-scale video datasets, e.g., Something-Something, Kinetics, and Diving48, verify that our model achieves state-of-the-art or competitive performance at low FLOPs. The code is available at: https://github.com/tianyuan168326/ean-pytorch.
Part-level action parsing targets part state parsing for boosting action recognition in videos. Despite the dramatic progress in the field of video classification research, a severe problem faced by the community is that the detailed understanding of human actions is ignored. Our motivation is that parsing human actions requires building models that focus on the specific problems. We present a simple yet effective method, named disentangled action parsing (DAP). Specifically, we divide part-level action parsing into three stages: 1) person detection, where a person detector is adopted to detect all persons from videos and to perform instance-level action recognition; 2) part parsing, where a part-parsing model is proposed to recognize human parts from the detected person images; and 3) action parsing, where a multi-modal action parsing network is used to parse the action category conditioned on all detection results obtained from the previous stages. With these three major models applied, our DAP approach records a global average score of 0.605 in the 2021 Kinetics-TPS Challenge.
Realtime multi-person 2D pose estimation is a key component in enabling machines to have an understanding of people in images and videos. In this work, we present a realtime approach to detect the 2D pose of multiple people in an image. The proposed method uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. This bottom-up system achieves high accuracy and realtime performance, regardless of the number of people in the image. In previous work, PAFs and body part location estimation were refined simultaneously across training stages. We demonstrate that a PAF-only refinement rather than both PAF and body part location refinement results in a substantial increase in both runtime performance and accuracy. We also present the first combined body and foot keypoint detector, based on an internal annotated foot dataset that we have publicly released. We show that the combined detector not only reduces the inference time compared to running them sequentially, but also maintains the accuracy of each component individually. This work has culminated in the release of OpenPose, the first open-source realtime system for multi-person 2D pose detection, including body, foot, hand, and facial keypoints.
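The PAF association step can be sketched as a line integral: sample the affinity field along the segment between two candidate joints and average the dot product with the limb direction. This is a simplified scoring sketch, without the additional acceptance criteria used in practice.

```python
import numpy as np


def paf_score(paf_x: np.ndarray, paf_y: np.ndarray, pt_a, pt_b,
              num_samples: int = 10) -> float:
    """Simplified PAF association score between candidate joints A and B.
    paf_x, paf_y: (H, W) affinity field components for one limb type;
    pt_a, pt_b: (x, y) pixel coordinates inside the field."""
    pt_a, pt_b = np.asarray(pt_a, float), np.asarray(pt_b, float)
    vec = pt_b - pt_a
    norm = np.linalg.norm(vec)
    if norm < 1e-6:
        return 0.0
    unit = vec / norm                                  # unit vector A -> B
    score = 0.0
    for t in np.linspace(0.0, 1.0, num_samples):
        x, y = (pt_a + t * vec).round().astype(int)    # sample point on the segment
        score += paf_x[y, x] * unit[0] + paf_y[y, x] * unit[1]
    return score / num_samples
```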
In still-image human action recognition, existing studies have mainly leveraged extra bounding box information along with class labels to mitigate the lack of temporal information in still images; however, preparing extra data with manual annotation is time-consuming and also prone to human error. Moreover, existing studies have not addressed action recognition with long-tailed distributions. In this paper, we propose a two-phase multi-expert classification method for human action recognition to cope with long-tailed distributions through super-class learning and without any extra information. To choose the best configuration for each super-class and to characterize inter-class dependencies among different action classes, we propose a novel graph-based class selection (GCS) algorithm. In the proposed approach, a coarse-grained phase selects the most relevant fine-grained experts. Then, the fine-grained experts encode the intricate details within each super-class so that the inter-class variation increases. Extensive experimental evaluations are conducted on various public human action recognition datasets, including the Stanford40, Pascal VOC Action, BU101+, and IHAR datasets. The experimental results show that the proposed method yields promising improvements. More specifically, on the IHAR, Stanford40, Pascal VOC 2012 Action, and BU101+ benchmarks, the proposed approach outperforms state-of-the-art studies by 8.92%, 0.41%, 0.66%, and 2.11%, respectively, with much lower computational cost and without any auxiliary annotation information. Moreover, it is shown that the proposed method surpasses its counterparts by a significant margin in addressing action recognition with long-tailed distributions.
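The two-phase inference described above (coarse super-class selection, then a fine-grained expert) can be sketched as follows; the heads here are plain linear layers for illustration, and the GCS grouping itself is not shown.

```python
import torch


def two_phase_predict(image_feat: torch.Tensor, coarse_head, fine_experts):
    """Sketch of two-phase multi-expert inference: the coarse head picks the
    most likely super-class, and that super-class's expert predicts the
    fine-grained action. Names and heads are illustrative."""
    super_logits = coarse_head(image_feat)            # logits over super-classes
    super_id = int(super_logits.argmax())
    fine_logits = fine_experts[super_id](image_feat)  # expert for the chosen super-class
    return super_id, int(fine_logits.argmax())


if __name__ == "__main__":
    feat = torch.randn(512)
    coarse = torch.nn.Linear(512, 4)                       # 4 super-classes
    experts = [torch.nn.Linear(512, 10) for _ in range(4)] # 10 fine classes each
    print(two_phase_predict(feat, coarse, experts))
```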
Human parsing aims to partition humans in image or video into multiple pixel-level semantic parts. In the last decade, it has gained significantly increased interest in the computer vision community and has been utilized in a broad range of practical applications, from security monitoring, to social media, to visual special effects, just to name a few. Although deep learning-based human parsing solutions have made remarkable achievements, many important concepts, existing challenges, and potential research directions are still confusing. In this survey, we comprehensively review three core sub-tasks: single human parsing, multiple human parsing, and video human parsing, by introducing their respective task settings, background concepts, relevant problems and applications, representative literature, and datasets. We also present quantitative performance comparisons of the reviewed methods on benchmark datasets. Additionally, to promote sustainable development of the community, we put forward a transformer-based human parsing framework, providing a high-performance baseline for follow-up research through universal, concise, and extensible solutions. Finally, we point out a set of under-investigated open issues in this field and suggest new directions for future study. We also provide a regularly updated project page, to continuously track recent developments in this fast-advancing field: https://github.com/soeaver/awesome-human-parsing.
We introduce the task of spotting temporally precise, fine-grained events in video (detecting the exact moment in time at which an event occurs). Precise spotting requires models to reason globally about the full-time scale of actions and locally to identify subtle frame-to-frame appearance and motion differences that identify events during these actions. Surprisingly, we find that top-performing solutions to prior video understanding tasks, such as action detection and segmentation, do not simultaneously meet both requirements. In response, we propose E2E-Spot, a compact, end-to-end model that performs well on the precise spotting task and can be trained quickly on a single GPU. We demonstrate that E2E-Spot significantly outperforms recent baselines adapted from the video action detection, segmentation, and spotting literature to the precise spotting task. Finally, we contribute new annotations and splits to several fine-grained sports action datasets to make these datasets suitable for future work on precise spotting.