Thanks to the development of 2D keypoint detectors, monocular 3D human pose estimation (HPE) via 2D-to-3D uplifting approaches has achieved remarkable improvements. Still, monocular 3D HPE is a challenging problem due to inherent depth ambiguities and occlusions. Many previous works mitigate these difficulties by exploiting temporal information, but there are many real-world applications where frame sequences are not accessible. This paper focuses on reconstructing a 3D pose from a single 2D keypoint detection. Rather than exploiting temporal information, we alleviate the depth ambiguity by generating multiple 3D pose candidates that can all be mapped to the same 2D keypoints. We build a novel diffusion-based framework to effectively sample diverse 3D poses from an off-the-shelf 2D detector. By replacing the conventional denoising U-Net with a graph convolutional network that captures the correlation between human joints, our approach achieves further performance improvements. We evaluate our method on the widely adopted Human3.6M and HumanEva-I datasets. Comprehensive experiments confirm the efficacy of the proposed method and show that our model outperforms state-of-the-art multi-hypothesis 3D HPE methods.
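A minimal sketch of the sampling idea described above, assuming a DDPM-style linear noise schedule and a hypothetical `GCNDenoiser` standing in for the paper's exact network; drawing several noise seeds conditioned on the same 2D keypoints yields multiple 3D hypotheses.

```python
import torch
import torch.nn as nn

class GCNDenoiser(nn.Module):
    """Hypothetical stand-in for a graph-convolutional denoiser: joint
    features are mixed over a skeleton adjacency so correlated joints
    inform each other's denoising step."""
    def __init__(self, num_joints=17, hidden=128):
        super().__init__()
        adj = torch.eye(num_joints)            # placeholder adjacency; a real
        self.register_buffer("adj", adj)       # skeleton graph would go here
        self.inp = nn.Linear(3 + 2 + 1, hidden)  # noisy 3D + 2D cond + step
        self.gcn = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 3)

    def forward(self, x_t, cond_2d, t):
        t_feat = t.float().view(-1, 1, 1).expand(-1, x_t.shape[1], 1)
        h = torch.relu(self.inp(torch.cat([x_t, cond_2d, t_feat], dim=-1)))
        h = torch.relu(self.adj @ self.gcn(h))   # one graph-conv layer
        return self.out(h)                       # predicted noise

@torch.no_grad()
def sample_hypotheses(model, cond_2d, num_hyp=5, T=50):
    """DDPM ancestral sampling: num_hyp 3D candidates for one 2D detection."""
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, dim=0)
    cond = cond_2d.expand(num_hyp, -1, -1)         # same 2D pose, many seeds
    x = torch.randn(num_hyp, cond_2d.shape[1], 3)  # start from pure noise
    for t in reversed(range(T)):
        eps = model(x, cond, torch.full((num_hyp,), t))
        mean = (x - betas[t] / torch.sqrt(1 - abar[t]) * eps) / torch.sqrt(alphas[t])
        x = mean + (torch.sqrt(betas[t]) * torch.randn_like(x) if t > 0 else 0.0)
    return x  # (num_hyp, joints, 3)

poses = sample_hypotheses(GCNDenoiser(), torch.randn(1, 17, 2))
```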
Monocular 3D human pose estimation is quite challenging due to the inherent ambiguity and occlusion, which often lead to high uncertainty and indeterminacy. On the other hand, diffusion models have recently emerged as an effective tool for generating high-quality images from noise. Inspired by their capability, we explore a novel pose estimation framework (DiffPose) that formulates 3D pose estimation as a reverse diffusion process. We incorporate novel designs into our DiffPose that facilitate the diffusion process for 3D pose estimation: a pose-specific initialization of pose uncertainty distributions, a Gaussian Mixture Model-based forward diffusion process, and a context-conditioned reverse diffusion process. Our proposed DiffPose significantly outperforms existing methods on the widely used pose estimation benchmarks Human3.6M and MPI-INF-3DHP.
Learning 3D human pose priors is essential to human-centered AI. Here, we present GFPose, a versatile framework to model plausible 3D human poses for various applications. At the core of GFPose is a time-dependent score network, which estimates the gradient on each body joint and progressively denoises the perturbed 3D human pose to match a given task specification. During the denoising process, GFPose implicitly incorporates pose priors in the gradients and unifies various discriminative and generative tasks in an elegant framework. Despite its simplicity, GFPose demonstrates great potential in several downstream tasks. Our experiments empirically show that 1) as a multi-hypothesis pose estimator, GFPose outperforms existing SOTAs by 20% on the Human3.6M dataset; 2) as a single-hypothesis pose estimator, GFPose achieves results comparable to deterministic SOTAs, even with a vanilla backbone; 3) GFPose is able to produce diverse and realistic samples in pose denoising, completion, and generation tasks. Project page: https://sites.google.com/view/gfpose/
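A rough sketch of the score-based denoising loop implied above, with a hypothetical `score_net` estimating the gradient of the log pose density; annealed Langevin steps pull a perturbed pose toward the learned prior. The noise levels and step sizes here are illustrative assumptions, not the paper's schedule.

```python
import torch
import torch.nn as nn

# Hypothetical time-conditioned score network: maps a perturbed pose and a
# noise level to the estimated gradient of the log-density over poses.
score_net = nn.Sequential(nn.Linear(17 * 3 + 1, 256), nn.ReLU(),
                          nn.Linear(256, 17 * 3))

@torch.no_grad()
def denoise_pose(x, sigmas=(1.0, 0.5, 0.1, 0.01), steps=10, eps=1e-4):
    """Annealed Langevin dynamics: follow the learned score at decreasing
    noise levels so a perturbed pose drifts toward plausible poses."""
    x = x.reshape(-1, 17 * 3)
    for sigma in sigmas:
        step = eps * (sigma / sigmas[-1]) ** 2   # larger steps at high noise
        for _ in range(steps):
            level = torch.full((x.shape[0], 1), sigma)
            score = score_net(torch.cat([x, level], dim=-1))
            x = x + step * score + (2 * step) ** 0.5 * torch.randn_like(x)
    return x.reshape(-1, 17, 3)

refined = denoise_pose(torch.randn(4, 17, 3))  # 4 perturbed poses -> 4 samples
```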
Traditionally, monocular 3D human pose estimation employs a machine learning model to predict the most likely 3D pose for a given input image. However, a single image can be highly ambiguous and induce multiple plausible solutions for the 2D-3D lifting step, which results in overly confident 3D pose predictors. To this end, we propose DiffPose, a conditional diffusion model that predicts multiple hypotheses for a given input image. In comparison to similar approaches, our diffusion model is straightforward and avoids intensive hyperparameter tuning, complex network structures, mode collapse, and unstable training. Moreover, we tackle a problem of the common two-step approach, which first estimates a distribution of 2D joint locations via joint-wise heatmaps and then approximates them with first- or second-moment statistics. Since such a simplification of the heatmaps removes valid information about possibly correct, though labeled unlikely, joint locations, we propose to represent the heatmaps as a set of 2D joint candidate samples. To extract information about the original distribution from these samples, we introduce our embedding transformer, which conditions the diffusion model. Experimentally, we show that DiffPose slightly improves upon the state of the art for multi-hypothesis pose estimation on simple poses and outperforms it by a large margin on highly ambiguous poses.
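A small sketch of the candidate-sampling idea, assuming per-joint heatmaps of shape (J, H, W): rather than collapsing each heatmap to its mean and covariance, draw a set of 2D candidate locations that keeps low-probability but possibly correct modes. This is illustrative code, not the paper's embedding transformer.

```python
import torch

def heatmap_to_candidates(heatmaps, num_samples=16):
    """Sample 2D joint candidates from per-joint heatmaps.

    heatmaps: (J, H, W) non-negative scores from a 2D detector.
    Returns: (J, num_samples, 2) pixel coordinates (x, y).
    """
    J, H, W = heatmaps.shape
    probs = heatmaps.clamp(min=0).reshape(J, -1)
    probs = probs / probs.sum(dim=-1, keepdim=True)   # per-joint distribution
    idx = torch.multinomial(probs, num_samples, replacement=True)  # (J, S)
    ys, xs = idx // W, idx % W
    return torch.stack([xs, ys], dim=-1).float()

# The candidate sets would then condition the diffusion model, e.g. via a
# transformer over the J * num_samples candidate tokens.
cands = heatmap_to_candidates(torch.rand(17, 64, 48))
print(cands.shape)  # torch.Size([17, 16, 2])
```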
Estimating 3D human poses from monocular videos is a challenging task due to depth ambiguity and self-occlusion. Most existing works attempt to solve both issues by exploiting spatial and temporal relationships. However, these works ignore the fact that this is an inverse problem where multiple feasible solutions (i.e., hypotheses) exist. To relieve this limitation, we propose a Multi-Hypothesis Transformer (MHFormer) that learns spatio-temporal representations of multiple plausible pose hypotheses. In order to effectively model multi-hypothesis dependencies and build strong relationships across hypothesis features, the task is decomposed into three stages: (i) generating multiple initial hypothesis representations; (ii) modeling self-hypothesis communication, merging the multiple hypotheses into a single converged representation and then partitioning it into several diverged hypotheses; (iii) learning cross-hypothesis communication and aggregating the multi-hypothesis features to synthesize the final 3D pose. Through the above processes, the final representation is enhanced and the synthesized pose is much more accurate. Extensive experiments show that MHFormer achieves state-of-the-art results on two challenging datasets: Human3.6M and MPI-INF-3DHP. Without bells and whistles, its performance surpasses the previous best result on Human3.6M by a large margin of 3%. Code and models are available at https://github.com/vegetebird/mhformer.
Despite the great progress in 3D human pose estimation from videos, it remains an open problem to take full advantage of redundant 2D pose sequences to learn representative representations for generating a single 3D pose. To this end, we propose an improved Transformer-based architecture, called Strided Transformer, which simply and effectively lifts a long sequence of 2D joint locations to a single 3D pose. Specifically, a Vanilla Transformer Encoder (VTE) is adopted to model long-range dependencies of 2D pose sequences. To reduce the redundancy of the sequence, fully-connected layers in the feed-forward network of VTE are replaced with strided convolutions to progressively shrink the sequence length and aggregate information from local contexts. The modified VTE is termed the Strided Transformer Encoder (STE), which is built upon the outputs of VTE. STE not only effectively aggregates long-range information into a single-vector representation in a hierarchical global-and-local fashion, but also significantly reduces the computation cost. Furthermore, a full-to-single supervision scheme is designed at both the full-sequence and single-target-frame scales, applied to the outputs of VTE and STE, respectively. This scheme imposes extra temporal smoothness constraints in conjunction with the single-target-frame supervision and thus helps produce smoother and more accurate 3D poses. The proposed Strided Transformer is evaluated on two challenging benchmark datasets, Human3.6M and HumanEva-I, and achieves state-of-the-art results with fewer parameters. Code and models are available at https://github.com/vegetebird/stridedtransformer-pose3d.
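A minimal sketch of the strided feed-forward idea described above: the position-wise fully-connected layers of a Transformer FFN are swapped for 1-D convolutions with stride, so each layer halves the sequence length while mixing local context. Dimensions and the absence of residual/attention sublayers are simplifying assumptions.

```python
import torch
import torch.nn as nn

class StridedFFN(nn.Module):
    """Feed-forward block whose FC layers are replaced by strided Conv1d,
    shrinking the temporal length (here by 2x) while aggregating local context."""
    def __init__(self, dim=256, hidden=512, kernel=3, stride=2):
        super().__init__()
        self.conv1 = nn.Conv1d(dim, hidden, kernel, stride=stride, padding=kernel // 2)
        self.conv2 = nn.Conv1d(hidden, dim, kernel, stride=1, padding=kernel // 2)

    def forward(self, x):                 # x: (batch, frames, dim)
        h = x.transpose(1, 2)             # Conv1d expects (batch, dim, frames)
        h = self.conv2(torch.relu(self.conv1(h)))
        return h.transpose(1, 2)          # (batch, ~frames // stride, dim)

x = torch.randn(2, 81, 256)               # 81-frame 2D-pose sequence features
for _ in range(3):                         # three strided layers: 81 -> 41 -> 21 -> 11
    x = StridedFFN()(x)
print(x.shape)                             # torch.Size([2, 11, 256])
```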
Stochastic human motion prediction aims to forecast multiple plausible future motions given a single pose sequence from the past. Most previous works focus on designing elaborate losses to improve accuracy, while diversity is typically characterized by randomly sampling a set of latent variables from the latent prior, which is then decoded into possible motions. This joint training of sampling and decoding, however, suffers from posterior collapse as the learned latent variables tend to be ignored by a strong decoder, leading to limited diversity. Alternatively, inspired by the diffusion process in nonequilibrium thermodynamics, we propose MotionDiff, a diffusion probabilistic model that treats the kinematics of human joints as heated particles, which diffuse from their original states to a noise distribution. This process offers a natural way to obtain the "whitened" latents without any trainable parameters, and human motion prediction can be regarded as the reverse diffusion process that converts the noise distribution into realistic future motions conditioned on the observed sequence. Specifically, MotionDiff consists of two parts: a spatial-temporal transformer-based diffusion network to generate diverse yet plausible motions, and a graph convolutional network to further refine the outputs. Experimental results on two datasets demonstrate that our model yields competitive performance in terms of both accuracy and diversity.
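The "whitened latents without any trainable parameters" come from the closed-form forward process of a diffusion model; a minimal sketch under a standard linear beta schedule (assumed here, the paper may use a different one):

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)     # cumulative product of (1 - beta_t)

def diffuse(x0, t):
    """q(x_t | x_0): jump straight to step t with no learned parameters.
    x0: (batch, frames, joints, 3) clean future motion; t: (batch,) int steps."""
    ab = alpha_bar[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * noise
    return xt, noise                               # noise is the training target

x0 = torch.randn(8, 25, 17, 3)                     # toy batch of motion clips
xt, eps = diffuse(x0, torch.randint(0, T, (8,)))
# As t -> T, xt approaches N(0, I): the "whitened" latent that the reverse
# (spatio-temporal transformer) model learns to invert, conditioned on the past.
```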
Modern multi-layer perceptron (MLP) models have shown competitive results in learning visual representations without self-attention. However, existing MLP models are not good at capturing local details and lack prior knowledge of human body configurations, which limits their modeling power for skeletal representation learning. To address these issues, we propose a simple yet effective graph-reinforced MLP-like architecture, named GraphMLP, that combines MLPs and graph convolutional networks (GCNs) in a global-local-graphical unified architecture for 3D human pose estimation. GraphMLP incorporates the graph structure of the human body into an MLP model to meet domain-specific demands, while allowing for both local and global spatial interactions. Extensive experiments show that the proposed GraphMLP achieves state-of-the-art performance on two datasets, i.e., Human3.6M and MPI-INF-3DHP. Our source code and pretrained models will be publicly available.
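A toy sketch of mixing an MLP with a graph convolution in one block, in the spirit of the global-local design described above; the identity adjacency, dimensions, and fusion by simple addition are illustrative assumptions, not the paper's block.

```python
import torch
import torch.nn as nn

class GraphMLPBlock(nn.Module):
    """One block that mixes joints globally with an MLP and locally with a
    skeleton-graph convolution, then fuses both paths residually."""
    def __init__(self, num_joints=17, dim=64):
        super().__init__()
        adj = torch.eye(num_joints)                # placeholder; a real block
        self.register_buffer("adj", adj)           # would use the skeleton graph
        self.token_mlp = nn.Sequential(            # global path: mixes across joints
            nn.Linear(num_joints, num_joints * 2), nn.GELU(),
            nn.Linear(num_joints * 2, num_joints))
        self.gcn = nn.Linear(dim, dim)             # local path: graph neighborhood
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                          # x: (batch, joints, dim)
        g = self.token_mlp(x.transpose(1, 2)).transpose(1, 2)   # global mixing
        l = self.adj @ self.gcn(x)                               # graph mixing
        return self.norm(x + g + l)                # residual fusion

out = GraphMLPBlock()(torch.randn(2, 17, 64))
print(out.shape)  # torch.Size([2, 17, 64])
```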
This paper introduces a novel Pre-trained Spatial Temporal Many-to-One (P-STMO) model for the 2D-to-3D human pose estimation task. To reduce the difficulty of capturing spatial and temporal information, we divide this task into two stages: pre-training (Stage I) and fine-tuning (Stage II). In the first stage, a self-supervised pre-training sub-task, termed masked pose modeling, is proposed. The human joints in the input sequence are randomly masked in both the spatial and temporal domains. A general form of denoising auto-encoder is exploited to recover the original 2D poses, and in this way the encoder is able to capture spatial and temporal dependencies. In the second stage, the pre-trained encoder is loaded into the STMO model and fine-tuned. The encoder is followed by a many-to-one frame aggregator to predict the 3D pose in the current frame. In particular, an MLP block is utilized as the spatial feature extractor in STMO, which yields better performance than other methods. In addition, a temporal downsampling strategy is proposed to reduce data redundancy. Extensive experiments on two benchmarks show that our method outperforms state-of-the-art methods with fewer parameters and less computational overhead. For example, our P-STMO model achieves 42.1mm MPJPE on the Human3.6M dataset when using 2D poses from CPN as input. Meanwhile, it brings a 1.5-7.1x speedup over state-of-the-art methods. Code is available at https://github.com/patrick-swk/p-stmo.
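A minimal sketch of the masked pose modeling sub-task: joints are zeroed out at random in both the spatial and temporal dimensions, and a denoising auto-encoder is trained to reconstruct the original 2D sequence. The mask ratios and the tiny encoder/decoder are illustrative assumptions.

```python
import torch
import torch.nn as nn

def mask_pose_sequence(seq, joint_ratio=0.3, frame_ratio=0.1):
    """Randomly mask single joints (spatial) and whole frames (temporal).
    seq: (batch, frames, joints, 2) detected 2D poses."""
    b, f, j, _ = seq.shape
    joint_mask = torch.rand(b, f, j, 1) < joint_ratio    # drop single joints
    frame_mask = torch.rand(b, f, 1, 1) < frame_ratio    # drop entire frames
    mask = joint_mask | frame_mask
    return seq.masked_fill(mask, 0.0), mask

# Tiny stand-in denoising auto-encoder for Stage I pre-training.
encoder = nn.Sequential(nn.Linear(17 * 2, 256), nn.ReLU(), nn.Linear(256, 256))
decoder = nn.Linear(256, 17 * 2)

seq = torch.randn(4, 81, 17, 2)
masked, mask = mask_pose_sequence(seq)
recon = decoder(encoder(masked.flatten(2))).view_as(seq)
loss = ((recon - seq) ** 2)[mask.expand_as(seq)].mean()  # reconstruct masked joints
loss.backward()  # the pre-trained encoder is later fine-tuned in Stage II
```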
Recent 2D-to-3D human pose estimation works tend to utilize the graph structure formed by the topology of the human skeleton. However, we argue that this skeletal topology is too sparse to reflect the body structure and suffers from serious 2D-to-3D ambiguity. To overcome these weaknesses, we propose a novel graph convolutional network architecture, Hierarchical Graph Networks (HGN). It is based on denser graph topologies generated by our multi-scale graph structure building strategy, thus providing finer geometric information. The proposed architecture contains three sparse-to-fine representation subnetworks organized in parallel, in which multi-scale graph-structured features are processed and exchange information through a novel feature fusion strategy, leading to rich hierarchical representations. We also introduce a 3D coarse-mesh constraint to further boost detail-related feature learning. Extensive experiments demonstrate that our HGN achieves state-of-the-art performance with reduced network parameters.
This paper considers jointly solving the highly correlated tasks of estimating 3D human body poses and predicting future 3D motions from RGB image sequences. Based on a Lie-algebra pose representation, a novel self-projection mechanism is proposed that naturally preserves human motion kinematics. This is further facilitated by a sequence-to-sequence multi-task architecture based on an encoder-decoder topology, which enables us to exploit the common ground shared by the two tasks. Finally, a global refinement module is proposed to boost the performance of the framework. The efficacy of our method, called PoMomemet, is validated through ablation tests and empirical evaluations on the Human3.6M and HumanEva-I benchmarks, obtaining performance competitive with the state of the art.
In this work, we demonstrate that 3D poses in video can be effectively estimated with a fully convolutional model based on dilated temporal convolutions over 2D keypoints. We also introduce back-projection, a simple and effective semi-supervised training method that leverages unlabeled video data. We start with predicted 2D keypoints for unlabeled video, then estimate 3D poses, and finally back-project to the input 2D keypoints. In the supervised setting, our fully convolutional model outperforms the previous best result from the literature by 6 mm mean per-joint position error on Human3.6M, corresponding to an error reduction of 11%, and the model also shows significant improvements on HumanEva-I. Moreover, experiments with back-projection show that it comfortably outperforms previous state-of-the-art results in semi-supervised settings where labeled data is scarce. Code and models are available at https://github.com/facebookresearch/VideoPose3D
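A simplified sketch of the back-projection idea for unlabeled video: predict 3D from detected 2D keypoints, project the prediction back to 2D, and penalize the mismatch. The orthographic projection and the toy lifter are assumptions; the paper handles camera intrinsics and root trajectory explicitly.

```python
import torch
import torch.nn as nn

# Toy stand-in for the dilated temporal-convolution lifter.
lifter = nn.Sequential(nn.Linear(17 * 2, 512), nn.ReLU(), nn.Linear(512, 17 * 3))

def back_projection_loss(kp2d):
    """Semi-supervised loss on unlabeled frames.
    kp2d: (batch, 17, 2) detected 2D keypoints (assumed camera-normalized)."""
    pred3d = lifter(kp2d.flatten(1)).view(-1, 17, 3)
    reproj = pred3d[..., :2]                 # orthographic back-projection
    return (reproj - kp2d).abs().mean()      # predicted 3D must explain the 2D

kp2d = torch.randn(8, 17, 2)                 # detections on unlabeled video
loss = back_projection_loss(kp2d)            # added to the supervised 3D loss
loss.backward()
```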
Estimating 3D human poses has proven to be a challenging task, mainly because of the complexity of human body joints, occlusions, and variability in lighting conditions. In this paper, we introduce a higher-order graph convolutional framework with initial residual connections for 2D-to-3D pose estimation. Using multi-hop neighborhoods for node feature aggregation, our model is able to capture long-range dependencies between body joints. Moreover, our approach leverages residual connections that are integrated by design, ensuring that the learned feature representations retain important information from the initial features of the input layer as the network depth increases. Experiments and ablation studies conducted on two standard benchmarks demonstrate the effectiveness of our model, which achieves superior performance over strong baseline methods for 3D human pose estimation.
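A compact sketch of one higher-order graph-convolution layer with an initial residual connection: features are aggregated over K-hop neighborhoods (powers of a normalized adjacency), and the layer-0 features are re-injected at every depth. The normalization, mixing coefficient, and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HighOrderGCNLayer(nn.Module):
    """Aggregates K-hop neighborhoods (A^1..A^K) and adds an initial-residual
    term so deep stacks keep information from the input-layer features h0."""
    def __init__(self, adj, dim=64, K=3, alpha=0.2):
        super().__init__()
        deg = adj.sum(-1).clamp(min=1).rsqrt()
        a_norm = deg[:, None] * adj * deg[None, :]          # D^-1/2 A D^-1/2
        hops = [torch.matrix_power(a_norm, k) for k in range(1, K + 1)]
        self.register_buffer("hops", torch.stack(hops))     # (K, J, J)
        self.weights = nn.ModuleList(nn.Linear(dim, dim) for _ in range(K))
        self.alpha = alpha                                   # initial-residual mix

    def forward(self, h, h0):
        out = sum(self.hops[k] @ self.weights[k](h) for k in range(len(self.weights)))
        return torch.relu((1 - self.alpha) * out + self.alpha * h0)

adj = torch.eye(17)                   # placeholder; use the skeleton adjacency
layer = HighOrderGCNLayer(adj)
h0 = torch.randn(2, 17, 64)
h = layer(layer(h0, h0), h0)          # h0 re-enters at every depth
```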
Video-based 3D human pose estimation aims to localize the 3D coordinates of human joints in videos. Recent transformer-based approaches focus on capturing spatio-temporal information from sequential 2D poses, which cannot model contextual depth features effectively since visual depth features are lost in the 2D pose estimation step. In this paper, we simplify the paradigm into an end-to-end framework, the Instance-guided Video Transformer (IVT), which effectively learns spatio-temporal contextual depth information from visual features and predicts 3D poses directly from video frames. In particular, we first formulate video frames as a series of instance-guided tokens, where each token is in charge of predicting the 3D pose of one human instance. These tokens contain body structure information because they are extracted under the guidance of joint offsets from the human center to the corresponding body joints. These tokens are then sent into IVT to learn spatio-temporal contextual depth. In addition, we propose a cross-scale instance-guided attention mechanism to handle scale variation among multiple persons. Finally, the 3D pose of each person is decoded from the instance-guided tokens by coordinate regression. Experiments on three widely used 3D pose estimation benchmarks show that the proposed IVT achieves state-of-the-art performance.
Figure 1: Given challenging in-the-wild videos, a recent state-of-the-art video-pose-estimation approach [31] (top) fails to produce accurate 3D body poses. To address this, we exploit a large-scale motion-capture dataset to train a motion discriminator using an adversarial approach. Our model (VIBE) (bottom) is able to produce realistic and accurate pose and shape, outperforming previous work on standard benchmarks.
Estimating 3D human motion from an egocentric video sequence is critical to human behavior understanding and applications in VR/AR. However, naively learning a mapping between egocentric videos and human motions is challenging, because the user's body is often unobserved by the front-facing camera placed on the head of the user. In addition, collecting large-scale, high-quality datasets with paired egocentric videos and 3D human motions requires accurate motion capture devices, which often limit the variety of scenes in the videos to lab-like environments. To eliminate the need for paired egocentric video and human motions, we propose a new method, Ego-Body Pose Estimation via Ego-Head Pose Estimation (EgoEgo), that decomposes the problem into two stages, connected by the head motion as an intermediate representation. EgoEgo first integrates SLAM and a learning approach to estimate accurate head motion. Then, taking the estimated head pose as input, it leverages conditional diffusion to generate multiple plausible full-body motions. This disentanglement of head and body pose eliminates the need for training datasets with paired egocentric videos and 3D human motion, enabling us to leverage large-scale egocentric video datasets and motion capture datasets separately. Moreover, for systematic benchmarking, we develop a synthetic dataset, AMASS-Replica-Ego-Syn (ARES), with paired egocentric videos and human motion. On both ARES and real data, our EgoEgo model performs significantly better than the state-of-the-art.
Most real-time human pose estimation approaches are based on detecting joint positions. Using the detected joint positions, the yaw and pitch of the limbs can be computed. However, the roll along the limb, which is critical for applications such as sports analysis and computer animation, cannot be computed because this axis of rotation remains unobserved. In this paper, we introduce orientation keypoints, a novel approach for estimating the full position and rotation of skeletal joints using only single-frame RGB images. Inspired by how motion-capture systems use a set of point markers to estimate full bone rotations, our method uses virtual markers to generate sufficient information to accurately infer rotations with simple post-processing. The rotation predictions improve upon the best reported mean error for joint angles by 48% and achieve 93% accuracy across 15 bone rotations. The method also improves upon the current state-of-the-art joint-position results by 14% as measured by MPJPE on the principal datasets, and generalizes to in-the-wild datasets.
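The marker-based rotation recovery described above can be illustrated with the classic Kabsch/Procrustes step: given a bone's virtual markers in a rest pose and their predicted positions, the best-fit rotation comes from an SVD. This is a generic sketch of that post-processing idea, not the paper's exact procedure.

```python
import numpy as np

def rotation_from_markers(rest, pred):
    """Best-fit rotation mapping rest-pose markers to predicted markers.
    rest, pred: (M, 3) corresponding virtual-marker positions for one bone."""
    a = rest - rest.mean(axis=0)            # center both marker clouds
    b = pred - pred.mean(axis=0)
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(u @ vt))      # guard against reflections
    return (u @ np.diag([1.0, 1.0, d]) @ vt).T   # proper rotation matrix

rest = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]], float)
true_r = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90 deg about z
pred = rest @ true_r.T                       # markers after rotation
print(np.allclose(rotation_from_markers(rest, pred), true_r))  # True
```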
Multi-person 3D pose estimation is a challenging task because of occlusion and depth ambiguity, especially in crowded scenes. To solve these problems, most existing methods explore modeling body context cues by enhancing feature representations with graph neural networks or adding structural constraints. However, these methods are not robust due to their single-root formulation, which decodes 3D poses from a root node with a pre-defined graph. In this paper, we propose GR-M3D, which models multi-person 3D pose estimation with dynamic graph reasoning. The decoding graph in GR-M3D is predicted instead of pre-defined. In particular, it first generates several data maps and enhances them with a scale- and depth-aware refinement module (SDAR). Then multiple root keypoints and dense decoding paths for each person are estimated from these data maps. Based on them, a dynamic decoding graph is built by assigning path weights to the decoding paths, where the path weights are inferred from the enhanced data maps. This process is named dynamic graph reasoning (DGR). Finally, the 3D pose of each detected person is decoded according to its dynamic decoding graph. By adopting soft path weights, GR-M3D can adjust the structure of the decoding graph according to the input data, which makes the decoding graphs best adapted to different input persons and more capable of handling occlusion and depth ambiguity than previous methods. We empirically show that the proposed bottom-up approach even outperforms top-down methods and achieves state-of-the-art results on three 3D pose datasets.
Human motion modeling is important for many modern graphics applications, which typically require professional skills. To remove the skill barriers for laymen, recent motion generation methods can directly generate human motions conditioned on natural language. However, achieving diverse and fine-grained motion generation with various text inputs remains challenging. To address this problem, we propose MotionDiffuse, the first diffusion-model-based text-driven motion generation framework, which demonstrates several desired properties over existing methods. 1) Probabilistic mapping: instead of a deterministic language-to-motion mapping, MotionDiffuse generates motions through a series of denoising steps in which variations are injected. 2) Realistic synthesis: MotionDiffuse excels at modeling complicated data distributions and generating vivid motion sequences. 3) Multi-level manipulation: MotionDiffuse responds to fine-grained instructions on body parts and supports arbitrary-length motion synthesis with time-varying text prompts. Our experiments show that MotionDiffuse outperforms existing SoTA methods by convincing margins on text-driven motion generation and action-conditioned motion generation. A qualitative analysis further demonstrates MotionDiffuse's controllability for comprehensive motion generation. Homepage: https://mingyuan-zhang.github.io/projects/motiondiffuse.html
Following the success of deep convolutional networks, state-of-the-art methods for 3D human pose estimation have focused on deep end-to-end systems that predict 3D joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2D pose (visual) understanding, or from a failure to map 2D poses into 3-dimensional positions. With the goal of understanding these sources of error, we set out to build a system that, given 2D joint locations, predicts 3D positions. Much to our surprise, we have found that, with current technology, "lifting" ground-truth 2D joint locations to 3D space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30% on Human3.6M, the largest publicly available 3D pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2D detector (i.e., using images as input) yields state-of-the-art results; this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3D pose estimation systems stems from their visual analysis, and suggest directions to further advance the state of the art in 3D human pose estimation.
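The "relatively simple deep feedforward network" referenced above is widely reproduced as a few residual blocks of linear layers with batch normalization, ReLU, and dropout; a sketch along those lines, where the layer sizes and the 17-in/16-out joint convention follow common reimplementations and are assumptions here:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Linear -> BatchNorm -> ReLU -> Dropout, twice, with a skip connection."""
    def __init__(self, dim=1024, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(), nn.Dropout(p),
            nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(), nn.Dropout(p))

    def forward(self, x):
        return x + self.net(x)

# 2D -> 3D lifting: 17 joints in, 16 non-root joints out is one common setup.
lifter = nn.Sequential(
    nn.Linear(17 * 2, 1024),
    ResidualBlock(), ResidualBlock(),
    nn.Linear(1024, 16 * 3))

pose3d = lifter(torch.randn(32, 17 * 2)).view(32, 16, 3)
```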