This paper investigates the task of 2D human whole-body pose estimation, which aims to localize dense landmarks on the entire human body, including the body, feet, face, and hands. We propose a single-network approach, termed ZoomNet, that takes into account the hierarchical structure of the full human body and addresses the scale variation across different body parts. We further propose a neural architecture search framework, termed ZoomNAS, to promote both the accuracy and efficiency of whole-body pose estimation. ZoomNAS jointly searches the model architecture and the connections between different sub-modules, and automatically allocates computational complexity to the searched sub-modules. To train and evaluate ZoomNAS, we introduce the first large-scale 2D human whole-body dataset, COCO-WholeBody V1.0, which annotates 133 keypoints on in-the-wild images. Extensive experiments demonstrate the effectiveness of ZoomNAS and the significance of COCO-WholeBody V1.0.
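The hierarchical design lends itself to a simple sketch: body keypoints predicted at low resolution determine face/hand regions, which are then cropped at higher resolution for dedicated sub-networks. The Python below illustrates only that zoom-in step and is not the authors' code; the helper name and the wrist indices are hypothetical stand-ins for whatever the 133-keypoint layout defines.

```python
# Illustrative sketch (not the authors' code) of the hierarchical "zoom"
# idea: body keypoints predicted at low resolution are used to crop
# higher-resolution face/hand patches for dedicated sub-networks.
import numpy as np

def square_crop_from_keypoints(image, kpts, margin=1.5):
    """Crop a square patch around a keypoint cluster (e.g. one hand).

    image: HxWx3 array; kpts: Kx2 array of (x, y) coordinates.
    """
    cx, cy = kpts.mean(axis=0)
    half = margin * max(kpts[:, 0].ptp(), kpts[:, 1].ptp(), 1.0) / 2
    h, w = image.shape[:2]
    x0, x1 = int(max(cx - half, 0)), int(min(cx + half, w))
    y0, y1 = int(max(cy - half, 0)), int(min(cy + half, h))
    return image[y0:y1, x0:x1]

# Hypothetical keypoint layout: the actual wrist/hand-root indices in a
# 133-keypoint whole-body skeleton come from the dataset specification.
body_kpts = np.random.rand(17, 2) * 256          # stand-in body prediction
hand_cluster = body_kpts[[9, 10]]                 # e.g. both wrists
patch = square_crop_from_keypoints(np.zeros((256, 256, 3)), hand_cluster)
```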
Accurate whole-body multi-person pose estimation and tracking is an important yet challenging topic in computer vision. To capture the subtle actions of humans for complex behavior analysis, whole-body pose estimation, including the face, body, hand, and foot, is essential over conventional body-only pose estimation. In this paper, we present AlphaPose, a system that can perform accurate whole-body pose estimation and tracking jointly while running in realtime. To this end, we propose several new techniques: Symmetric Integral Keypoint Regression (SIKR) for fast and fine localization, Parametric Pose Non-Maximum-Suppression (P-NMS) for eliminating redundant human detections, and Pose Aware Identity Embedding for joint pose estimation and tracking. During training, we resort to Part-Guided Proposal Generator (PGPG) and multi-domain knowledge distillation to further improve the accuracy. Our method is able to accurately localize whole-body keypoints and track humans simultaneously, given inaccurate bounding boxes and redundant detections. We show a significant improvement over current state-of-the-art methods in both speed and accuracy on COCO-WholeBody, COCO, PoseTrack, and our proposed Halpe-FullBody pose estimation dataset. Our model, source codes and dataset are made publicly available at https://github.com/MVIG-SJTU/AlphaPose.
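Pose-level NMS is the piece that is easiest to convey in code. The sketch below suppresses redundant person detections with an OKS-style pose similarity instead of box IoU; note that AlphaPose's actual P-NMS criterion is parametric and learned, so this is a simplified stand-in.

```python
# Minimal sketch of pose-level NMS: redundant person detections are
# suppressed using keypoint similarity rather than box IoU alone.
import numpy as np

def oks(p, q, scale, kappa=0.1):
    """Object-keypoint-similarity between two Kx2 poses."""
    d2 = ((p - q) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * (scale * kappa) ** 2)).mean()

def pose_nms(poses, scores, scales, thresh=0.7):
    order = np.argsort(scores)[::-1]   # highest-scoring pose first
    keep = []
    while order.size:
        i, order = order[0], order[1:]
        keep.append(i)
        sims = np.array([oks(poses[i], poses[j], scales[j]) for j in order])
        order = order[sims < thresh]   # drop poses too similar to pose i
    return keep

kept = pose_nms(np.random.rand(5, 17, 2) * 100,
                np.random.rand(5), np.full(5, 100.0))
```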
Human pose estimation aims to locate the keypoints of all persons in different scenes. Despite promising results, current approaches still face some challenges. Existing top-down methods deal with each person individually, without modeling the interactions between different people and the scene they are in; consequently, the performance of human detection degrades when severe occlusion occurs. On the other hand, existing bottom-up methods consider all persons simultaneously and capture global knowledge of the entire image, but their accuracy is inferior to that of top-down methods due to scale variation. To address these problems, we propose a novel Dual-Pipeline Integrated Transformer (DPIT), which integrates the top-down and bottom-up pipelines to explore the visual clues of different receptive fields and achieve their complementarity. Specifically, DPIT consists of two branches: the bottom-up branch processes the whole image to capture global visual information, while the top-down branch extracts feature representations of local vision from single-person bounding boxes. The feature representations extracted from the bottom-up and top-down branches are then fed into a transformer encoder to fuse global and local knowledge interactively. Moreover, we define keypoint queries to explore both full-scene and single-person pose visual clues, realizing the mutual complementarity of the two pipelines. To the best of our knowledge, this is one of the earliest works to integrate the bottom-up and top-down pipelines with transformers for human pose estimation. Extensive experiments on the COCO and MPII datasets demonstrate that our DPIT achieves performance comparable to state-of-the-art methods.
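As a rough illustration of the two-branch fusion (our reading of the abstract, not the released model), the sketch below concatenates learnable keypoint queries with global (whole-image) and local (per-person) tokens and runs them through a shared transformer encoder; all dimensions are illustrative.

```python
# Simplified sketch of fusing bottom-up (whole-image) and top-down
# (per-person) features with a shared transformer encoder.
import torch
import torch.nn as nn

d = 256
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True),
    num_layers=4,
)
keypoint_queries = nn.Parameter(torch.randn(17, d))  # learnable queries

global_tokens = torch.randn(1, 64, d)   # flattened whole-image features
local_tokens = torch.randn(1, 49, d)    # features from one person crop
tokens = torch.cat(
    [keypoint_queries.unsqueeze(0), global_tokens, local_tokens], dim=1
)
fused = encoder(tokens)                 # queries attend to both pipelines
keypoint_feats = fused[:, :17]          # read out the keypoint queries
```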
Existing works on 2D pose estimation mainly focus on a certain category, e.g., humans, animals, or vehicles. However, many application scenarios require detecting the poses/keypoints of unseen object classes. In this paper, we introduce the task of Category-Agnostic Pose Estimation (CAPE), which aims to create a pose estimation model capable of detecting the pose of any class of objects given only a few samples with keypoint definitions. To achieve this goal, we formulate the pose estimation problem as a keypoint matching problem and design a novel CAPE framework, termed POse Matching Network (POMNet). A transformer-based Keypoint Interaction Module (KIM) is proposed to capture the interactions among different keypoints, as well as the relationship between the support and query images. We also introduce the Multi-category Pose (MP-100) dataset, a 2D pose dataset of 100 object categories containing over 20K instances, which is well designed for developing CAPE algorithms. Experiments show that our method outperforms other baseline approaches. Code and data are available at https://github.com/luminxu/Pose-for-Everything.
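The core "pose estimation as keypoint matching" idea can be sketched as a feature correlation: support keypoint embeddings are matched against the query feature map to produce per-keypoint similarity heatmaps. POMNet's KIM adds transformer-based interaction on top; the minimal sketch below shows only the matching step.

```python
# Sketch of keypoint matching: support keypoint features are correlated
# with the query feature map to yield per-keypoint similarity heatmaps.
import torch
import torch.nn.functional as F

def match_keypoints(support_kpt_feats, query_feats):
    """support_kpt_feats: (K, C); query_feats: (C, H, W) -> (K, H, W)."""
    q = F.normalize(query_feats.flatten(1), dim=0)   # (C, H*W), unit cols
    s = F.normalize(support_kpt_feats, dim=1)        # (K, C), unit rows
    heat = (s @ q).view(s.size(0), *query_feats.shape[1:])
    return heat                                      # cosine similarity maps

heat = match_keypoints(torch.randn(5, 256), torch.randn(256, 64, 64))
peaks = heat.flatten(1).argmax(dim=1)   # coarse keypoint locations
```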
In this paper, we are interested in the human pose estimation problem with a focus on learning reliable high-resolution representations. Most existing methods recover high-resolution representations from low-resolution representations produced by a high-to-low resolution network. Instead, our proposed network maintains high-resolution representations through the whole process. We start from a high-resolution subnetwork as the first stage, gradually add high-to-low resolution subnetworks one by one to form more stages, and connect the multi-resolution subnetworks in parallel. We conduct repeated multi-scale fusions such that each of the high-to-low resolution representations receives information from other parallel representations over and over, leading to rich high-resolution representations. As a result, the predicted keypoint heatmap is potentially more accurate and spatially more precise. We empirically demonstrate the effectiveness of our network through the superior pose estimation results over two benchmark datasets: the COCO keypoint detection dataset and the MPII Human Pose dataset. In addition, we show the superiority of our network in pose tracking on the PoseTrack dataset. The code and models have been made publicly available at https://github.com/leoxiaobin/deep-high-resolution-net.pytorch.
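A minimal two-branch sketch of the repeated multi-scale fusion is given below: the high-resolution branch is kept throughout, and each fusion step exchanges information via a strided convolution (downward) and upsampling (upward). The real HRNet uses up to four branches and repeated exchange blocks; channel counts here are illustrative.

```python
# Two-branch sketch of HRNet-style multi-scale fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchFusion(nn.Module):
    def __init__(self, c_high=32, c_low=64):
        super().__init__()
        self.down = nn.Conv2d(c_high, c_low, 3, stride=2, padding=1)
        self.up = nn.Conv2d(c_low, c_high, 1)

    def forward(self, x_high, x_low):
        # each branch receives information from the other, repeatedly
        y_high = x_high + F.interpolate(
            self.up(x_low), size=x_high.shape[-2:], mode="bilinear",
            align_corners=False)
        y_low = x_low + self.down(x_high)
        return y_high, y_low

h, l = TwoBranchFusion()(torch.randn(1, 32, 64, 64),
                         torch.randn(1, 64, 32, 32))
```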
Bottom-up human pose estimation methods have difficulties in predicting the correct pose for small persons due to challenges in scale variation. In this paper, we present HigherHRNet: a novel bottom-up human pose estimation method for learning scale-aware representations using high-resolution feature pyramids. Equipped with multi-resolution supervision for training and multi-resolution aggregation for inference, the proposed approach is able to solve the scale variation challenge in bottom-up multi-person pose estimation and localize keypoints more precisely, especially for small persons. The feature pyramid in HigherHRNet consists of feature map outputs from HRNet and upsampled higher-resolution outputs through a transposed convolution. HigherHRNet outperforms the previous best bottom-up method by 2.5% AP for medium persons on COCO test-dev, showing its effectiveness in handling scale variation. Furthermore, HigherHRNet achieves a new state-of-the-art result on COCO test-dev (70.5% AP) without using refinement or other post-processing techniques, surpassing all existing bottom-up methods. HigherHRNet even surpasses all top-down methods on CrowdPose test (67.6% AP), suggesting its robustness in crowded scenes. The code and models are available at https://github.com/HRNet/Higher-HRNet-Human-Pose-Estimation.
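Two ingredients are easy to sketch: a transposed convolution producing a higher-resolution heatmap for training-time supervision, and averaging of upsampled heatmaps at inference (multi-resolution aggregation). The snippet below is a simplified reading with stand-in shapes.

```python
# Sketch of HigherHRNet's higher-resolution head and inference-time
# multi-resolution aggregation.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_joints = 17
deconv = nn.ConvTranspose2d(32, num_joints, kernel_size=4, stride=2,
                            padding=1)

feat = torch.randn(1, 32, 128, 128)             # HRNet high-res features
heat_1x = torch.randn(1, num_joints, 128, 128)  # stand-in heatmap head
heat_2x = deconv(feat)                          # 256x256, supervised too

# inference: upsample the lower-resolution prediction and average
agg = (F.interpolate(heat_1x, scale_factor=2, mode="bilinear",
                     align_corners=False) + heat_2x) / 2
```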
In multi-person 2D pose estimation, bottom-up methods simultaneously predict poses for all persons and, unlike top-down methods, do not rely on human detection. However, the accuracy of SOTA bottom-up methods is still inferior to that of existing top-down methods. This is because the predicted human poses are regressed with respect to inconsistent human bounding-box centers and lack human-scale normalization, leading to predicted poses that are inaccurate and miss small-scale persons. To push the envelope of bottom-up pose estimation, we first propose multi-scale training to enhance the network's ability to handle scale variation with single-scale testing, especially for small-scale persons. Second, we introduce dual anatomical centers (i.e., head and body), from which human poses can be predicted more accurately and reliably, especially for small-scale persons. Moreover, existing bottom-up methods use multi-scale testing to boost pose estimation accuracy at the price of multiple extra forward passes, which weakens the efficiency of bottom-up methods, their core strength compared to top-down methods. By contrast, our multi-scale training enables the model to predict high-quality poses in a single forward pass (i.e., single-scale testing). Our method achieves 38.4% improvement on bounding-box precision and 39.1% improvement on bounding-box recall over the state of the art (SOTA) on the challenging small-scale persons subset of COCO. For human pose AP evaluation, we achieve a new SOTA (71.0 AP) on the COCO test-dev set with single-scale testing. We also achieve the top performance (40.3 AP) on the OCHuman dataset in cross-dataset evaluation.
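As a toy illustration of why a second anatomical center helps, the sketch below merges a head-center-regressed pose and a body-center-regressed pose by per-keypoint confidence; the paper's actual grouping and merging procedure is more involved than this.

```python
# Toy merge of two center-based pose candidates by confidence.
import numpy as np

def merge_dual_center(pose_head, conf_head, pose_body, conf_body):
    """Kx2 poses with per-keypoint confidences -> merged Kx2 pose."""
    w = conf_head / np.maximum(conf_head + conf_body, 1e-6)
    return w[:, None] * pose_head + (1 - w[:, None]) * pose_body

merged = merge_dual_center(np.random.rand(17, 2), np.random.rand(17),
                           np.random.rand(17, 2), np.random.rand(17))
```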
We propose an end-to-end trainable approach for multi-instance pose estimation, called POET (POse Estimation Transformer). Combining a convolutional neural network with a transformer encoder-decoder architecture, we formulate multi-instance pose estimation from images as a direct set prediction problem. Our model is able to directly regress the poses of all individuals using a bipartite matching scheme. POET is trained using a set-based global loss that consists of a keypoint loss, a visibility loss, and a class loss. POET reasons about the relations between multiple detected individuals and the full image context to directly predict their poses in parallel. We show that POET achieves high accuracy on the COCO keypoint detection task while having fewer parameters and higher inference speed than other bottom-up and top-down approaches. Moreover, we demonstrate successful transfer learning when applying POET to animal pose estimation. To the best of our knowledge, this model is the first end-to-end trainable multi-instance pose estimation method, and we hope it will serve as a simple and promising alternative.
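The bipartite matching step behind set prediction is concrete enough to sketch: predictions are assigned to ground-truth poses with the Hungarian algorithm before the matched-pair losses are computed. The cost below is plain L2 distance; POET's full matching cost combines the keypoint, visibility, and class terms.

```python
# Minimal sketch of Hungarian matching for set-based pose prediction.
import numpy as np
from scipy.optimize import linear_sum_assignment

preds = np.random.rand(10, 17, 2)   # N predicted poses
gts = np.random.rand(3, 17, 2)      # M ground-truth poses (M <= N)

# (N, M) cost: mean per-keypoint L2 distance between each pred/gt pair
cost = np.linalg.norm(preds[:, None] - gts[None], axis=-1).mean(-1)
pred_idx, gt_idx = linear_sum_assignment(cost)
matched_loss = cost[pred_idx, gt_idx].sum()   # loss on matched pairs only
```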
We propose BAPose, a novel bottom-up approach that achieves state-of-the-art results for multi-person pose estimation. Our end-to-end trainable framework leverages a disentangled multi-scale waterfall architecture and incorporates adaptive convolutions to more accurately infer occluded keypoints in crowded scenes. The multi-scale representations obtained by the disentangled waterfall module in BAPose leverage the efficiency of progressive filtering in the cascade architecture, while maintaining multi-scale fields of view comparable to spatial pyramid configurations. Our results on the challenging COCO and CrowdPose datasets demonstrate that BAPose is an efficient and robust framework for multi-person pose estimation, achieving significant improvements in state-of-the-art accuracy.
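A waterfall-style atrous module can be sketched as cascaded dilated convolutions whose intermediate outputs are also aggregated, giving multi-scale fields of view without a full parallel pyramid. BAPose's disentangled waterfall module adds more structure than this minimal version.

```python
# Sketch of a waterfall atrous module: a cascade of dilated convolutions
# whose intermediate outputs are concatenated and projected back.
import torch
import torch.nn as nn

class Waterfall(nn.Module):
    def __init__(self, c, rates=(1, 2, 4, 8)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(c, c, 3, padding=r, dilation=r) for r in rates)
        self.project = nn.Conv2d(c * len(rates), c, 1)

    def forward(self, x):
        outs, y = [], x
        for conv in self.convs:       # cascade: each branch feeds the next
            y = conv(y)
            outs.append(y)
        return self.project(torch.cat(outs, dim=1))

out = Waterfall(64)(torch.randn(1, 64, 32, 32))
```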
The topic of multi-person pose estimation has been largely improved recently, especially with the development of convolutional neural networks. However, there still exist a lot of challenging cases, such as occluded keypoints, invisible keypoints and complex backgrounds, which cannot be well addressed. In this paper, we present a novel network structure called Cascaded Pyramid Network (CPN), which targets relieving the problem of these "hard" keypoints. More specifically, our algorithm includes two stages: GlobalNet and RefineNet. GlobalNet is a feature pyramid network which can successfully localize the "simple" keypoints like eyes and hands but may fail to precisely recognize the occluded or invisible keypoints. Our RefineNet tries to explicitly handle the "hard" keypoints by integrating all levels of feature representations from the GlobalNet together with an online hard keypoint mining loss. In general, to address the multi-person pose estimation problem, a top-down pipeline is adopted to first generate a set of human bounding boxes based on a detector, followed by our CPN for keypoint localization in each human bounding box. Based on the proposed algorithm, we achieve state-of-the-art results on the COCO keypoint benchmark, with average precision at 73.0 on the COCO test-dev dataset and 72.1 on the COCO test-challenge dataset, which is a 19% relative improvement compared with 60.5 from the COCO 2016 keypoint challenge. Code and the detection results are publicly available for further research.
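The online hard keypoint mining loss used by RefineNet is simple to sketch: per-keypoint losses are computed, and only the top-k hardest keypoints contribute to the gradient.

```python
# Sketch of online hard keypoint mining (OHKM): keep only the k largest
# per-keypoint MSE losses when averaging.
import torch

def ohkm_loss(pred, target, topk=8):
    """pred/target: (B, K, H, W) heatmaps."""
    per_kpt = ((pred - target) ** 2).mean(dim=(2, 3))   # (B, K)
    hard, _ = per_kpt.topk(topk, dim=1)                 # hardest k joints
    return hard.mean()

loss = ohkm_loss(torch.randn(2, 17, 64, 48), torch.randn(2, 17, 64, 48))
```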
Pose estimation plays a critical role in human-centered vision applications. However, it is difficult to deploy state-of-the-art HRNet-based pose estimation models on resource-constrained edge devices due to their high computational cost (more than 150 GMACs per frame). In this paper, we study efficient architecture design for real-time multi-person pose estimation on edge devices. We reveal, through our gradual shrinking experiments, that the high-resolution branches of HRNet are redundant for models in the low-computation region; removing them improves both efficiency and performance. Inspired by this finding, we design LitePose, an efficient single-branch architecture for pose estimation, and introduce two simple approaches to enhance LitePose's capacity: a fusion deconv head and large kernel convolutions. The fusion deconv head removes the redundancy in high-resolution branches, allowing scale-aware feature fusion with low overhead. Large kernel convolutions significantly improve the model's capacity and receptive field while maintaining a low computational cost. With only a 25% computation increment, a 7x7 kernel achieves +14.0 mAP over a 3x3 kernel on the CrowdPose dataset. On mobile platforms, LitePose reduces latency by up to 5.0x compared with prior state-of-the-art efficient pose estimation models without sacrificing performance, pushing the frontier of real-time multi-person pose estimation on the edge. Our code and pre-trained models are released at https://github.com/mit-han-lab/litepose.
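The large-kernel argument is easy to demonstrate: a 7x7 depthwise convolution enlarges the receptive field at modest cost because its FLOPs scale with channels rather than channels squared. The block below is a generic depthwise-separable sketch, not LitePose's exact layout.

```python
# Depthwise large-kernel sketch: the 7x7 cost lives only in the cheap
# depthwise part; the pointwise 1x1 mixing dominates either way.
import torch
import torch.nn as nn

def dw_block(c, k):
    return nn.Sequential(
        nn.Conv2d(c, c, k, padding=k // 2, groups=c),  # depthwise, cheap
        nn.Conv2d(c, c, 1),                            # pointwise mixing
        nn.ReLU(inplace=True),
    )

x = torch.randn(1, 64, 64, 64)
y3, y7 = dw_block(64, 3)(x), dw_block(64, 7)(x)
# The depthwise FLOPs ratio of 7x7 vs 3x3 is 49/9, but only on the
# depthwise part, so the whole block grows far less than a dense 7x7.
```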
In this paper, we show the surprisingly good properties of plain vision transformers for body pose estimation from various aspects, namely simplicity in model structure, scalability in model size, flexibility in training paradigm, and transferability of knowledge between models, through a simple baseline model dubbed ViTPose. Specifically, ViTPose employs the plain and non-hierarchical vision transformer as an encoder to encode features and a lightweight decoder to decode body keypoints in either a top-down or a bottom-up manner. It can be scaled up from about 20M to 1B parameters by taking advantage of the scalable model capacity and high parallelism of the vision transformer, setting a new Pareto front for throughput and performance. Besides, ViTPose is very flexible regarding the attention type, input resolution, and pre-training and fine-tuning strategy. Based on the flexibility, a novel ViTPose+ model is proposed to deal with heterogeneous body keypoint categories in different types of body pose estimation tasks via knowledge factorization, i.e., adopting task-agnostic and task-specific feed-forward networks in the transformer. We also empirically demonstrate that the knowledge of large ViTPose models can be easily transferred to small ones via a simple knowledge token. Experimental results show that our ViTPose model outperforms representative methods on the challenging MS COCO Human Keypoint Detection benchmark at both top-down and bottom-up settings. Furthermore, our ViTPose+ model achieves state-of-the-art performance simultaneously on a series of body pose estimation tasks, including MS COCO, AI Challenger, OCHuman, MPII for human keypoint detection, COCO-Wholebody for whole-body keypoint detection, as well as AP-10K and APT-36K for animal keypoint detection, without sacrificing inference speed.
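The recipe is structurally simple, which the skeleton below tries to convey: a plain (non-hierarchical) transformer encoder over patch tokens, followed by a lightweight deconvolution decoder to heatmaps. All dimensions are illustrative and far smaller than the paper's models.

```python
# Structural sketch of a plain-ViT pose baseline: patch embedding,
# non-hierarchical transformer encoder, lightweight deconv decoder.
import torch
import torch.nn as nn

class PlainViTPose(nn.Module):
    def __init__(self, d=256, depth=4, num_joints=17, patch=16, img=256):
        super().__init__()
        self.embed = nn.Conv2d(3, d, kernel_size=patch, stride=patch)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=8, batch_first=True), depth)
        self.side = img // patch
        self.decoder = nn.Sequential(   # lightweight deconv head
            nn.ConvTranspose2d(d, d // 2, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(d // 2, num_joints, 4, stride=2, padding=1))

    def forward(self, img):
        t = self.embed(img).flatten(2).transpose(1, 2)   # (B, N, d) tokens
        t = self.encoder(t)
        f = t.transpose(1, 2).reshape(-1, t.size(-1), self.side, self.side)
        return self.decoder(f)                           # heatmaps

heat = PlainViTPose()(torch.randn(1, 3, 256, 256))  # -> (1, 17, 64, 64)
```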
In this paper, we consider the challenging task of simultaneously locating and recovering multiple hands from a single 2D image. Previous studies either focus on single-hand reconstruction or solve this problem in a multi-stage manner. Moreover, the conventional two-stage pipeline first detects hand regions and then estimates the 3D hand pose for each cropped patch. To reduce the computational redundancy in pre-processing and feature extraction, we propose a concise but efficient single-stage pipeline. Specifically, we design a multi-head auto-encoder structure for multi-hand reconstruction, where each head network shares the same feature map and outputs the hand center, pose, and texture, respectively. Besides, we adopt a weakly-supervised scheme to alleviate the burden of expensive 3D real-world data annotation. To this end, we propose a series of losses optimized by a stage-wise training scheme, where a multi-hand dataset with 2D annotations is generated based on publicly available single-hand datasets. To further improve the accuracy of the weakly-supervised model, we adopt several feature consistency constraints in both single- and multi-hand settings. Specifically, the keypoints of each hand estimated from local features should be consistent with the re-projected points predicted from global features. Extensive experiments on public benchmarks including FreiHAND, HO3D, InterHand2.6M, and RHD demonstrate that our method outperforms state-of-the-art model-based methods in both the weakly-supervised and fully-supervised manners. Codes and models are available at https://github.com/zijinxuxu/SMHR.
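The local-global consistency constraint reduces to a simple loss in sketch form: keypoints estimated from local per-hand features should agree with the re-projected points predicted from global features. Shapes below are stand-ins.

```python
# Sketch of the local/global keypoint consistency constraint.
import torch
import torch.nn.functional as F

def consistency_loss(kpts_local, kpts_global_reproj):
    """Both (num_hands, 21, 2) in image coordinates."""
    return F.smooth_l1_loss(kpts_local, kpts_global_reproj)

loss = consistency_loss(torch.rand(2, 21, 2), torch.rand(2, 21, 2))
```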
Recently, vision transformers and their variants have played an increasingly important role in both monocular and multi-view human pose estimation. Considering image patches as tokens, transformers can model the global dependencies within the entire image or across images from other views. However, global attention is computationally expensive. As a consequence, it is difficult to scale up these transformer-based methods to high-resolution features and many views. In this paper, we propose the token-Pruned Pose Transformer (PPT) for 2D human pose estimation, which can locate a rough human mask and perform self-attention only within selected tokens. Furthermore, we extend our PPT to multi-view human pose estimation. Built upon PPT, we propose a new cross-view fusion strategy, called human area fusion, which considers all human foreground pixels as corresponding candidates. Experimental results on COCO and MPII demonstrate that our PPT can match the accuracy of previous pose transformer methods while reducing the computation. Moreover, experiments on Human3.6M and Ski-Pose demonstrate that our multi-view PPT can efficiently fuse cues from multiple views and achieve new state-of-the-art results.
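Token pruning itself is compact enough to sketch: tokens are scored (e.g., by a human-mask head), only the top-k are kept, and self-attention runs on the kept subset, cutting the quadratic attention cost. The scoring head below is a stand-in.

```python
# Sketch of token pruning before attention: score, keep top-k, attend.
import torch
import torch.nn as nn

tokens = torch.randn(1, 256, 192)                 # (B, N, d) patch tokens
scores = nn.Linear(192, 1)(tokens).squeeze(-1)    # human-foreground score
keep = scores.topk(64, dim=1).indices             # keep 25% of tokens

pruned = torch.gather(tokens, 1,
                      keep.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
attn = nn.MultiheadAttention(192, num_heads=8, batch_first=True)
out, _ = attn(pruned, pruned, pruned)             # attention on 64 tokens
```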
Realtime multi-person 2D pose estimation is a key component in enabling machines to have an understanding of people in images and videos. In this work, we present a realtime approach to detect the 2D pose of multiple people in an image. The proposed method uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. This bottom-up system achieves high accuracy and realtime performance, regardless of the number of people in the image. In previous work, PAFs and body part location estimation were refined simultaneously across training stages. We demonstrate that a PAF-only refinement rather than both PAF and body part location refinement results in a substantial increase in both runtime performance and accuracy. We also present the first combined body and foot keypoint detector, based on an internal annotated foot dataset that we have publicly released. We show that the combined detector not only reduces the inference time compared to running them sequentially, but also maintains the accuracy of each component individually. This work has culminated in the release of OpenPose, the first open-source realtime system for multi-person 2D pose detection, including body, foot, hand, and facial keypoints.
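The PAF association step has a classic form worth sketching: the field (a 2D unit-vector map per limb) is sampled along the segment between two candidate joints, and the dot product with the limb direction is averaged; a high score means the field "flows" along the candidate limb.

```python
# Sketch of scoring a candidate limb with a Part Affinity Field.
import numpy as np

def paf_score(paf_x, paf_y, j1, j2, num_samples=10):
    """paf_x/paf_y: HxW field components; j1, j2: (x, y) joints."""
    v = np.asarray(j2, float) - np.asarray(j1, float)
    norm = np.linalg.norm(v)
    if norm < 1e-6:
        return 0.0
    v = v / norm
    xs = np.linspace(j1[0], j2[0], num_samples).astype(int)
    ys = np.linspace(j1[1], j2[1], num_samples).astype(int)
    field = np.stack([paf_x[ys, xs], paf_y[ys, xs]], axis=1)  # sampled PAF
    return float((field @ v).mean())   # alignment with limb direction

paf_x, paf_y = np.random.rand(64, 64), np.random.rand(64, 64)
score = paf_score(paf_x, paf_y, (10, 10), (40, 30))
```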
准确的面部标志是许多与人面孔有关的任务的重要先决条件。在本文中,根据级联变压器提出了精确的面部标志性检测器。我们将面部标志性检测作为坐标回归任务,以便可以端对端训练该模型。通过在变压器中的自我注意力,我们的模型可以固有地利用地标之间的结构化关系,这将受益于在挑战性条件(例如大姿势和遮挡)下具有里程碑意义的检测。在级联精炼期间,我们的模型能够根据可变形的注意机制提取目标地标周围的最相关图像特征,以进行坐标预测,从而带来更准确的对齐。此外,我们提出了一个新颖的解码器,可以同时完善图像特征和地标性位置。随着参数增加,检测性能进一步提高。我们的模型在几个标准的面部标准检测基准上实现了新的最新性能,并在跨数据库评估中显示出良好的概括能力。
Yoga is a globally acclaimed and widely recommended practice for a healthy lifestyle. Maintaining correct postures while performing yogic exercises is of utmost importance. In this work, we employ transfer learning from human pose estimation models to extract 136 keypoints spanning the entire body, which are used to train a random forest classifier for yoga pose estimation. The results are evaluated on an in-house collected dataset of yoga videos of 51 subjects recorded from 4 different camera angles. We propose a three-step scheme for evaluating the generalizability of a yoga classifier by testing it on 1) unseen frames, 2) unseen subjects, and 3) unseen camera angles. We argue that for most applications, validation accuracy on unseen subjects and unseen camera angles is the most significant. We empirically analyze, over three public datasets, the advantages of transfer learning and the possibilities of target leakage. We further demonstrate that classification accuracies critically depend on the cross-validation method employed and can often be misleading. To advance further research, we have made the keypoints dataset and the code publicly available.
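The evaluation point the abstract stresses, validation on unseen subjects, maps directly to grouped cross-validation. The sketch below (with synthetic stand-in data) uses GroupKFold so that all frames of a subject stay on one side of each split; a plain KFold would leak subjects across folds and inflate accuracy.

```python
# Random forest on flattened keypoints, validated with unseen subjects.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

n_frames, n_kpts = 600, 136
X = np.random.rand(n_frames, n_kpts * 2)       # flattened (x, y) keypoints
y = np.random.randint(0, 6, n_frames)          # pose class per frame
subjects = np.random.randint(0, 20, n_frames)  # subject id per frame

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5),
                         groups=subjects)      # subject-disjoint folds
print(scores.mean())
```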
We observe that human poses exhibit strong group-wise structural correlation and spatial coupling among keypoints due to the biological constraints of different body parts. This group-wise structural correlation can be explored to improve the accuracy and robustness of human pose estimation. In this work, we develop a self-constrained prediction-verification network to characterize and learn the structural correlation between keypoints during training. At the inference stage, the feedback information from the verification network allows us to further optimize the pose prediction, which significantly improves the performance of human pose estimation. Specifically, we partition the keypoints into groups according to the biological structure of the human body. Within each group, the keypoints are further partitioned into two subsets: high-confidence base keypoints and low-confidence terminal keypoints. We develop a self-constrained prediction-verification network to perform forward and backward predictions between these keypoint subsets. A fundamental challenge in pose estimation, as well as in generic prediction tasks, is that we have no mechanism to verify whether the obtained pose estimation or prediction results are accurate, since the ground truth is not available. Once successfully learned, the verification network serves as an accuracy verification module for the forward pose prediction. At the inference stage, it can be used to guide the local optimization of the pose estimation results of low-confidence keypoints, with the self-constrained loss on high-confidence keypoints as the objective function. Our extensive experimental results on the benchmark MS COCO and CrowdPose datasets demonstrate that the proposed method can significantly improve pose estimation results.
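The inference-time refinement can be sketched as a small optimization: low-confidence terminal keypoints are treated as free variables and adjusted so that a frozen verification network's backward prediction of the high-confidence base keypoints matches their observed locations. The shapes and the linear verifier below are stand-ins for the learned network.

```python
# Sketch of inference-time refinement guided by a verification network.
import torch

def refine_terminals(base_kpts, term_init, verifier, steps=20, lr=0.01):
    for p in verifier.parameters():       # verification net stays frozen
        p.requires_grad_(False)
    term = term_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([term], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # self-constraint: backward prediction must match base keypoints
        loss = ((verifier(term) - base_kpts) ** 2).mean()
        loss.backward()
        opt.step()
    return term.detach()

verifier = torch.nn.Linear(2 * 4, 2 * 6)  # stand-in backward predictor
refined = refine_terminals(torch.rand(12), torch.rand(8), verifier)
```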
In keypoint estimation tasks such as human pose estimation, heatmap-based regression is the dominant approach despite possessing notable drawbacks: heatmaps intrinsically suffer from quantization error and require excessive computation to generate and post-process. Motivated to find a more efficient solution, we propose a new heatmap-free keypoint estimation method in which individual keypoints and sets of spatially related keypoints (i.e., poses) are modeled as objects within a dense single-stage anchor-based detection framework. Hence, we call our method KAPAO (pronounced "Ka-Pow!"), for Keypoints And Poses As Objects. We apply KAPAO to the problem of single-stage multi-person human pose estimation by simultaneously detecting human pose objects and keypoint objects and fusing the detections to exploit the strengths of both object representations. In experiments, we observe that KAPAO is significantly faster and more accurate than previous methods, which suffer greatly from heatmap post-processing. Moreover, the accuracy-speed trade-off is especially favorable when not using test-time augmentation. Our large model, KAPAO-L, achieves an AP of 70.6 on the Microsoft COCO Keypoints validation set without test-time augmentation, which is 4.0 AP more accurate than the next best single-stage model. Furthermore, KAPAO excels in the presence of heavy occlusion. On the CrowdPose test set, KAPAO-L achieves new state-of-the-art accuracy for a single-stage method, with an AP of 68.9.
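A toy version of the fusion step, as we read it: each keypoint of a detected pose object is snapped to a nearby same-type keypoint-object detection when a confident one exists, since keypoint objects localize more precisely than pose-level regression. The threshold and data layout below are hypothetical.

```python
# Toy fusion of pose-object keypoints with keypoint-object detections.
import numpy as np

def fuse(pose_kpts, kpt_objects, radius=5.0):
    """pose_kpts: (K, 2); kpt_objects: per-type list of (x, y, conf)."""
    fused = pose_kpts.copy()
    for k, cands in enumerate(kpt_objects):
        for x, y, conf in cands:
            if conf > 0.5 and np.hypot(*(fused[k] - (x, y))) < radius:
                fused[k] = (x, y)   # snap to the precise keypoint object
                break
    return fused

fused = fuse(np.random.rand(17, 2) * 100,
             [[(50.0, 50.0, 0.9)]] + [[] for _ in range(16)])
```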
Figure 1: Dense pose estimation aims at mapping all human pixels of an RGB image to the 3D surface of the human body. We introduce DensePose-COCO, a large-scale ground-truth dataset with image-to-surface correspondences manually annotated on 50K COCO images and train DensePose-RCNN, to densely regress part-specific UV coordinates within every human region at multiple frames per second. Left: The image and the regressed correspondence by DensePose-RCNN, Middle: DensePose COCO Dataset annotations, Right: Partitioning and UV parametrization of the body surface.