Recovering 3D human pose and shape from a monocular RGB image is a challenging task. Existing learning-based methods depend heavily on weak supervision signals, e.g., 2D and 3D joint locations, due to the lack of paired in-the-wild 3D supervision. However, given the 2D-3D ambiguities inherent in these weak supervision labels, the network is prone to getting stuck in local optima when trained with such labels. In this paper, we reduce the ambiguity by optimizing multiple initializations. Specifically, we propose a three-stage framework named Multi-Initialization Optimization Network (MION). In the first stage, we strategically select different coarse 3D reconstruction candidates that are compatible with the 2D keypoints of the input sample. Each coarse reconstruction can be treated as an initialization that leads to one optimization branch. In the second stage, we design a Mesh Refinement Transformer (MRT) to refine each coarse reconstruction result separately via a self-attention mechanism. Finally, a Consistency Estimation Network (CEN) is proposed to find the best result among the candidates by evaluating whether the visual evidence in the RGB image matches a given 3D reconstruction. Experiments demonstrate that our Multi-Initialization Optimization Network outperforms existing 3D-mesh-based methods on multiple public benchmarks.
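A minimal sketch of the three-stage inference described above, assuming hypothetical stand-ins for the paper's components: MeshRefiner for the Mesh Refinement Transformer and ConsistencyNet for the consistency estimator. Only the control flow (refine every candidate, score each, keep the best) follows the abstract; none of this is the authors' code.

import torch
import torch.nn as nn

class MeshRefiner(nn.Module):
    """Placeholder refiner: one self-attention layer over mesh vertices."""
    def __init__(self, dim=64):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.proj_in = nn.Linear(3, dim)
        self.proj_out = nn.Linear(dim, 3)

    def forward(self, verts):                       # verts: (B, V, 3)
        x = self.proj_in(verts)
        x, _ = self.attn(x, x, x)
        return verts + self.proj_out(x)             # residual vertex update

class ConsistencyNet(nn.Module):
    """Placeholder scorer: rates how well a mesh matches the image evidence."""
    def __init__(self, dim=64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(3, dim), nn.ReLU(),
                                   nn.Linear(dim, 1))

    def forward(self, verts, img_feat):             # img_feat unused in this sketch
        return self.score(verts).mean(dim=(1, 2))   # (B,) scalar per candidate

def mion_inference(candidates, img_feat, refiner, scorer):
    """candidates: list of K coarse meshes (B, V, 3) compatible with the 2D keypoints."""
    refined = [refiner(c) for c in candidates]      # one optimization branch each
    scores = torch.stack([scorer(r, img_feat) for r in refined], dim=0)  # (K, B)
    best = scores.argmax(dim=0)                     # per-sample best candidate
    stacked = torch.stack(refined, dim=0)           # (K, B, V, 3)
    return stacked[best, torch.arange(best.numel())]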
Fully supervised human mesh recovery methods are data-hungry and generalize poorly, owing to the limited availability and diversity of 3D-annotated benchmark datasets. Recent progress in synthetic-data-driven training paradigms has shown that models can be trained from synthetically paired 2D representations (e.g., 2D keypoints and segmentation masks) and 3D meshes. However, synthetic dense correspondence maps (i.e., IUV) have rarely been explored, since the domain gap between synthetic training data and real test data is hard to bridge for 2D dense representations. To alleviate this domain gap on IUV, we propose cross-proxy alignment, which exploits the complementary information in a robust but sparse representation (2D keypoints). Specifically, the alignment errors between the initial mesh estimate and the two 2D representations are forwarded to the regressor and dynamically corrected in the subsequent mesh regression. This adaptive cross-proxy alignment explicitly learns from the deviations and captures complementary information: robustness from the sparse representation and richness from the dense one. We conduct extensive experiments on multiple standard benchmark datasets and demonstrate competitive results, helping to reduce the annotation effort needed to produce state-of-the-art models for human mesh estimation.
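A hedged sketch of the cross-proxy alignment idea: measure how far the current mesh estimate deviates from each 2D proxy (sparse keypoints, dense IUV) and let the regressor consume those errors to correct the next estimate. All names and shapes are illustrative assumptions, not the authors' API.

import torch
import torch.nn as nn

def keypoint_alignment_error(pred_kp2d, proxy_kp2d):
    """Per-joint 2D offsets between projected model joints and proxy keypoints."""
    return proxy_kp2d - pred_kp2d                   # (B, J, 2)

def iuv_alignment_error(pred_iuv, proxy_iuv):
    """Residual between the IUV map rendered from the current mesh and the
    proxy IUV map, pooled to a compact scalar per sample for this sketch."""
    return (proxy_iuv - pred_iuv).flatten(1).abs().mean(dim=1, keepdim=True)  # (B, 1)

class CorrectiveRegressor(nn.Module):
    """Consumes image features plus both alignment errors, outputs a parameter update."""
    def __init__(self, feat_dim=2048, n_joints=24, n_params=85):
        super().__init__()
        in_dim = feat_dim + n_joints * 2 + 1
        self.mlp = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                 nn.Linear(512, n_params))

    def forward(self, feat, kp_err, iuv_err, params):
        x = torch.cat([feat, kp_err.flatten(1), iuv_err], dim=1)
        return params + self.mlp(x)                 # iterative correction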
A key challenge in the task of human pose and shape estimation is occlusion, including self-occlusion, object-human occlusion, and inter-person occlusion. The lack of diverse and accurate pose and shape training data becomes a major bottleneck, especially for in-the-wild scenes with occlusions. In this paper, we focus on estimating human pose and shape in the case of inter-person occlusion, while also handling object-human occlusion and self-occlusion. We propose a novel framework that synthesizes occlusion-aware silhouette and 2D keypoint data and directly regresses to the SMPL pose and shape parameters. A neural 3D mesh renderer is exploited to enable silhouette supervision, which contributes to a large improvement in shape estimation. In addition, keypoint- and silhouette-driven training data in panoramic viewpoints are synthesized to compensate for the lack of viewpoint diversity in existing datasets. Experimental results show that we are state-of-the-art on the 3DPW and 3DPW-Crowd datasets in terms of pose estimation accuracy. The proposed method evidently outperforms the rank-1 method in terms of shape estimation, and also achieves top performance on SSP-3D in terms of shape prediction accuracy.
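A minimal sketch of silhouette supervision, assuming a differentiable renderer (such as the neural 3D mesh renderer mentioned above) has already produced a soft silhouette from the current SMPL mesh. The specific loss forms (L1 and soft IoU) are assumptions; the paper may use different ones.

import torch

def silhouette_loss(pred_sil: torch.Tensor, gt_sil: torch.Tensor) -> torch.Tensor:
    """pred_sil, gt_sil: (B, H, W) masks in [0, 1]. Gradients flow back through
    the renderer into the SMPL shape parameters, which is what drives the
    shape improvement reported above."""
    return (pred_sil - gt_sil).abs().mean()

def silhouette_iou_loss(pred_sil, gt_sil, eps=1e-6):
    """A common alternative: 1 - soft IoU between the two masks."""
    inter = (pred_sil * gt_sil).sum(dim=(1, 2))
    union = (pred_sil + gt_sil - pred_sil * gt_sil).sum(dim=(1, 2))
    return (1.0 - inter / (union + eps)).mean()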
In this paper, we consider the challenging task of simultaneously locating and recovering multiple hands from a single 2D image. Previous studies either focus on single-hand reconstruction or solve this problem in a multi-stage way. Moreover, the conventional two-stage pipeline first detects hand regions and then estimates the 3D hand pose from each cropped patch. To reduce the computational redundancy in preprocessing and feature extraction, we propose a concise but efficient single-stage pipeline. Specifically, we design a multi-head auto-encoder structure for multi-hand reconstruction, where each head network shares the same feature map and separately outputs the hand center, pose, and texture. Besides, we adopt a weakly-supervised scheme to alleviate the burden of expensive 3D real-world data annotation. To this end, we propose a series of losses optimized by a stage-wise training scheme, where a multi-hand dataset with 2D annotations is generated from publicly available single-hand datasets. To further improve the accuracy of the weakly supervised model, we adopt several feature consistency constraints in both the single-hand and multi-hand settings. Specifically, the keypoints of each hand estimated from local features should be consistent with the reprojected points predicted from global features. Extensive experiments on public benchmarks including FreiHAND, HO3D, InterHand2.6M, and RHD demonstrate that our method outperforms state-of-the-art model-based methods in both the weakly-supervised and fully-supervised settings. Code and models are available at https://github.com/zijinxuxu/smhr.
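A hedged sketch of the feature-consistency constraint described above: 2D keypoints decoded from per-hand local features should agree with the reprojection of the hand predicted from global features. Function and tensor names are illustrative; see the linked repository for the real implementation.

import torch

def local_global_consistency(kp2d_local: torch.Tensor,
                             kp2d_from_global: torch.Tensor) -> torch.Tensor:
    """kp2d_local:       (B, N_hands, 21, 2) keypoints from local-feature heads.
    kp2d_from_global: (B, N_hands, 21, 2) reprojected keypoints from the
                      global-feature prediction. L1 keeps the two heads consistent."""
    return (kp2d_local - kp2d_from_global).abs().mean()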
Regression-based methods can estimate body, hand, and even full-body models from monocular images by directly mapping raw pixels to model parameters in a feed-forward manner. However, minor deviations in the parameters may lead to noticeable misalignment between the estimated mesh and the input image, especially in the context of full-body mesh recovery. To address this issue, we propose a Pyramidal Mesh Alignment Feedback (PyMAF) loop in our regression network for well-aligned human mesh recovery, and extend it as PyMAF-X for the recovery of expressive full-body models. The core idea of PyMAF is to exploit a feature pyramid and rectify the predicted parameters explicitly according to the mesh-image alignment status. Specifically, given the currently predicted parameters, mesh-aligned evidence is extracted from finer-resolution features accordingly and fed back for parameter rectification. To enhance the alignment perception, auxiliary dense supervision is employed to provide mesh-image correspondence guidance, while spatial alignment attention is introduced to make our network aware of the global context. When extending PyMAF for full-body mesh recovery, an adaptive integration strategy is proposed in PyMAF-X to adjust the elbow-twist rotations, which produces natural wrist poses while maintaining the well-aligned performance of the part-specific estimates. The efficacy of our approach is validated on several benchmark datasets for body and full-body mesh recovery, where PyMAF and PyMAF-X effectively improve mesh-image alignment and achieve new state-of-the-art results. The project page with code and video results can be found at https://www.liuyebin.com/pymaf-x.
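A simplified sketch of one alignment feedback step in this spirit: project the current mesh onto a feature map, sample "mesh-aligned" features at the projected vertices, and regress a parameter correction from them. The shapes and the assumption that vertices are already projected to normalized image coordinates are illustrative (431 vertices corresponds to the downsampled SMPL mesh PyMAF samples from).

import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_mesh_aligned(feat_map, verts2d):
    """feat_map: (B, C, H, W); verts2d: (B, V, 2) in normalized [-1, 1] coords.
    Returns (B, V, C) features sampled at each projected vertex."""
    grid = verts2d.unsqueeze(2)                     # (B, V, 1, 2)
    sampled = F.grid_sample(feat_map, grid, align_corners=False)  # (B, C, V, 1)
    return sampled.squeeze(-1).transpose(1, 2)

class FeedbackRegressor(nn.Module):
    def __init__(self, n_verts=431, feat_dim=64, n_params=85):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(n_verts * feat_dim, 1024), nn.ReLU(),
                                 nn.Linear(1024, n_params))

    def forward(self, feat_map, verts2d, params):
        aligned = sample_mesh_aligned(feat_map, verts2d)     # (B, V, C)
        return params + self.mlp(aligned.flatten(1))         # rectified parameters

Running this step at successively finer pyramid levels, each time reprojecting the mesh implied by the rectified parameters, gives the loop structure the abstract describes.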
We present a new method, called MEsh TRansfOrmer (METRO), to reconstruct 3D human pose and mesh vertices from a single image. Our method uses a transformer encoder to jointly model vertex-vertex and vertex-joint interactions, and outputs 3D joint coordinates and mesh vertices simultaneously. Compared to existing techniques that regress pose and shape parameters, METRO does not rely on any parametric mesh models like SMPL, thus it can be easily extended to other objects such as hands. We further relax the mesh topology and allow the transformer self-attention mechanism to freely attend between any two vertices, making it possible to learn non-local relationships among mesh vertices and joints. With the proposed masked vertex modeling, our method is more robust and effective in handling challenging situations like partial occlusions. METRO generates new state-of-the-art results for human mesh reconstruction on the public Human3.6M and 3DPW datasets. Moreover, we demonstrate the generalizability of METRO to 3D hand reconstruction in the wild, outperforming existing state-of-the-art methods on FreiHAND dataset. Code and pre-trained models are available at https://github.com/microsoft/MeshTransformer.
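A condensed sketch of the METRO idea: pair joint and (coarse) vertex queries with the global image feature and let a transformer encoder attend freely across all tokens before regressing a 3D coordinate per token. Dimensions and the use of template coordinates as queries are simplifying assumptions; see the official repository for the real model.

import torch
import torch.nn as nn

class MiniMetro(nn.Module):
    def __init__(self, n_joints=14, n_verts=431, img_dim=2048, dim=256):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_joints + n_verts, 3))
        self.proj = nn.Linear(img_dim + 3, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=3)
        self.head = nn.Linear(dim, 3)               # 3D coordinate per token

    def forward(self, img_feat):                    # img_feat: (B, img_dim)
        B = img_feat.shape[0]
        q = self.queries.unsqueeze(0).expand(B, -1, -1)          # (B, J+V, 3)
        tokens = torch.cat([img_feat.unsqueeze(1).expand(-1, q.shape[1], -1), q], -1)
        out = self.encoder(self.proj(tokens))       # free vertex-vertex/vertex-joint attention
        return self.head(out)                       # (B, J+V, 3): joints then vertices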
Model-based human pose estimation is currently approached through two different paradigms. Optimization-based methods fit a parametric body model to 2D observations in an iterative manner, leading to accurate image-model alignments, but are often slow and sensitive to the initialization. In contrast, regression-based methods, that use a deep network to directly estimate the model parameters from pixels, tend to provide reasonable, but not pixel accurate, results while requiring huge amounts of supervision. In this work, instead of investigating which approach is better, our key insight is that the two paradigms can form a strong collaboration. A reasonable, directly regressed estimate from the network can initialize the iterative optimization making the fitting faster and more accurate. Similarly, a pixel accurate fit from iterative optimization can act as strong supervision for the network. This is the core of our proposed approach SPIN (SMPL oPtimization IN the loop). The deep network initializes an iterative optimization routine that fits the body model to 2D joints within the training loop, and the fitted estimate is subsequently used to supervise the network. Our approach is self-improving by nature, since better network estimates can lead the optimization to better solutions, while more accurate optimization fits provide better supervision for the network. We demonstrate the effectiveness of our approach in different settings, where 3D ground truth is scarce, or not available, and we consistently outperform the state-of-the-art model-based pose estimation approaches by significant margins. The project website with videos, results, and code can be found at https://seas.upenn.edu/~nkolot/projects/spin.
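A schematic sketch of one SPIN training step: the regressed parameters initialize an in-the-loop fit to 2D joints, and the fitted parameters supervise the regressor in turn. smplify_fit stands in for the SMPLify optimization routine and is not spelled out; the parameter-space MSE is a simplification of the paper's losses.

import torch

def spin_training_step(model, smplify_fit, images, gt_kp2d, optimizer):
    pred_params = model(images)                     # regression: fast initial estimate
    with torch.no_grad():
        # Iterative fitting, initialized by the network's own estimate.
        fitted_params = smplify_fit(init=pred_params, keypoints_2d=gt_kp2d)
    # Supervise the regressor with the more pixel-accurate fitted estimate.
    loss = (pred_params - fitted_params).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The self-improving property follows from this loop: better regressor outputs give the optimizer better starting points, and better fits give the regressor better targets.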
This work addresses the problem of estimating the full body 3D human pose and shape from a single color image. This is a task where iterative optimization-based solutions have typically prevailed, while Convolutional Networks (ConvNets) have suffered because of the lack of training data and their low resolution 3D predictions. Our work aims to bridge this gap and proposes an efficient and effective direct prediction method based on ConvNets. Central to our approach is the incorporation of a parametric statistical body shape model (SMPL) within our end-to-end framework. This allows us to get very detailed 3D mesh results, while requiring estimation only of a small number of parameters, making it friendly for direct network prediction. Interestingly, we demonstrate that these parameters can be predicted reliably only from 2D keypoints and masks. These are typical outputs of generic 2D human analysis ConvNets, allowing us to relax the massive requirement that images with 3D shape ground truth are available for training. Simultaneously, by maintaining differentiability, at training time we generate the 3D mesh from the estimated parameters and optimize explicitly for the surface using a 3D per-vertex loss. Finally, a differentiable renderer is employed to project the 3D mesh to the image, which enables further refinement of the network, by optimizing for the consistency of the projection with 2D annotations (i.e., 2D keypoints or masks). The proposed approach outperforms previous baselines on this task and offers an attractive solution for direct prediction of 3D shape from a single color image.
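A minimal sketch of the projection-consistency term used for refinement, assuming a weak-perspective camera (scale s, translation t), which is a common choice in this line of work rather than a detail taken from the paper.

import torch

def reproject(joints3d, scale, trans):
    """joints3d: (B, J, 3); scale: (B, 1); trans: (B, 2) -> 2D joints (B, J, 2)."""
    return scale.unsqueeze(-1) * joints3d[..., :2] + trans.unsqueeze(1)

def keypoint_loss(joints3d, scale, trans, gt_kp2d, vis):
    """vis: (B, J, 1) visibility weights from the 2D annotations. Because the
    projection is differentiable, this loss refines the network end to end."""
    pred = reproject(joints3d, scale, trans)
    return (vis * (pred - gt_kp2d).abs()).mean()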
Most existing animal pose and shape estimation approaches reconstruct animal meshes with a parametric model. This is because the low-dimensional pose and shape parameters of the SMAL model make it easier for deep networks to learn than high-dimensional animal meshes. However, the SMAL model is learned from scans of toy animals with limited pose and shape variations, and thus may not represent highly varying real animals well. This may result in poor fits of the estimated meshes to 2D evidence, e.g., 2D keypoints or silhouettes. To mitigate this problem, we propose a coarse-to-fine approach to reconstruct a 3D animal mesh from a single image. The coarse estimation stage first estimates the pose, shape, and translation parameters of the SMAL model. The estimated mesh is then used as a starting point by a graph convolutional network (GCN) to predict a per-vertex deformation in the refinement stage. This combination of SMAL-based and vertex-based representations benefits from both parametric and non-parametric representations. We design our mesh refinement GCN (MRGCN) as an encoder-decoder structure with hierarchical feature representations to overcome the limited receptive field of traditional GCNs. Moreover, we observe that the global image feature used by existing animal mesh reconstruction works is unable to capture detailed shape information for mesh refinement. We therefore introduce a local feature extractor to retrieve vertex-level features and use them together with the global feature as the input to the MRGCN. We test our approach on the StanfordExtra dataset and achieve state-of-the-art results. Furthermore, we test the generalization capacity of our approach on the Animal Pose and BADJA datasets. Our code is available at the project website.
This paper addresses the problem of 3D human pose and shape estimation from a single image. Previous approaches consider a parametric model of the human body, SMPL, and attempt to regress the model parameters that give rise to a mesh consistent with image evidence. This parameter regression has been a very challenging task, with model-based approaches underperforming compared to nonparametric solutions in terms of pose estimation. In our work, we propose to relax this heavy reliance on the model's parameter space. We still retain the topology of the SMPL template mesh, but instead of predicting model parameters, we directly regress the 3D location of the mesh vertices. This is a heavy task for a typical network, but our key insight is that the regression becomes significantly easier using a Graph-CNN. This architecture allows us to explicitly encode the template mesh structure within the network and leverage the spatial locality the mesh has to offer. Image-based features are attached to the mesh vertices and the Graph-CNN is responsible to process them on the mesh structure, while the regression target for each vertex is its 3D location. Having recovered the complete 3D geometry of the mesh, if we still require a specific model parametrization, this can be reliably regressed from the vertex locations. We demonstrate the flexibility and the effectiveness of our proposed graph-based mesh regression by attaching different types of features on the mesh vertices. In all cases, we outperform the comparable baselines relying on model parameter regression, while we also achieve state-of-the-art results among model-based pose estimation approaches.
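A bare-bones graph convolution of the kind such an architecture builds on: each vertex aggregates neighbor features through a fixed, normalized mesh adjacency matrix derived from the SMPL template topology. This is a generic layer for illustration, not the authors' exact variant.

import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim, adj: torch.Tensor):
        super().__init__()
        # Row-normalized adjacency with self-loops, fixed by the mesh topology.
        a = adj + torch.eye(adj.shape[0])
        self.register_buffer("adj_norm", a / a.sum(dim=1, keepdim=True))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):                           # x: (B, V, in_dim)
        # Neighborhood averaging followed by a shared linear map: spatial
        # locality on the mesh, analogous to a conv on an image grid.
        return torch.relu(self.lin(self.adj_norm @ x))

Stacking such layers, with per-vertex image features as input and 3D vertex locations as the regression target, gives the mesh-regression setup described above.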
We propose to improve on graph-convolution-based approaches for human shape and pose estimation by using pixel-aligned local image features. Given a single input color image, existing graph convolutional network (GCN) techniques for human shape and pose estimation attach a global image feature, generated by a single convolutional neural network (CNN), identically to all mesh vertices to initialize the GCN stages, which transform a template T-pose mesh into the target pose. In contrast, we propose for the first time the idea of using local image features per vertex. These features are sampled from the CNN image feature maps by exploiting densely generated pixel correspondences. Our quantitative and qualitative results on standard benchmarks show that using local features improves on global ones and leads to competitive performance with respect to the state of the art.
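A sketch of the contrast drawn above, using the same grid-sampling mechanic as in the PyMAF sketch earlier: instead of copying one global feature vector to every vertex, gather a per-vertex local feature from the CNN map at that vertex's pixel correspondence. The correspondence source is abstracted into vert_pix, an assumption.

import torch
import torch.nn.functional as F

def global_init(global_feat, n_verts):
    """global_feat: (B, C) -> (B, V, C), the same vector on every vertex."""
    return global_feat.unsqueeze(1).expand(-1, n_verts, -1)

def local_init(feat_map, vert_pix):
    """feat_map: (B, C, H, W); vert_pix: (B, V, 2) pixel locations in [-1, 1].
    Returns (B, V, C): a different, pixel-aligned feature per vertex."""
    grid = vert_pix.unsqueeze(2)                    # (B, V, 1, 2)
    out = F.grid_sample(feat_map, grid, align_corners=False)
    return out.squeeze(-1).transpose(1, 2)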
We propose a novel optimization-based paradigm for fitting 3D human models to images and scans. In contrast to existing approaches that directly regress the parameters of a low-dimensional statistical body model (e.g., SMPL) from the input image, we train an ensemble of per-vertex neural fields. The network predicts, in a distributed manner, a descent direction for each vertex based on neural features extracted at the current vertex projection. At inference, we employ this network, dubbed LVD, within a gradient-descent-style optimization pipeline until convergence, which typically occurs within a fraction of a second even when all vertices are initialized to a single point. An exhaustive evaluation shows that our approach is able to capture the underlying body of clothed people with very different body shapes, achieving a significant improvement over the state of the art. LVD is also applicable to 3D model fitting of both humans and hands, where we show a significant improvement over the SOTA with a much simpler and faster method.
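A compact sketch of LVD-style inference: a network predicts a per-vertex descent direction from features at each current vertex, and the vertices are iteratively moved until convergence. feature_at and descent_net are placeholders for the paper's per-vertex neural fields; the step size and stopping test are assumptions.

import torch

@torch.no_grad()
def lvd_fit(descent_net, feature_at, n_verts=6890, steps=50, lr=1.0):
    # All vertices can start from a single point, as noted in the abstract.
    verts = torch.zeros(1, n_verts, 3)
    for _ in range(steps):
        feats = feature_at(verts)                   # (1, V, C) local features
        step = descent_net(feats)                   # (1, V, 3) predicted descent
        verts = verts + lr * step
        if step.norm(dim=-1).max() < 1e-4:          # simple convergence test
            break
    return verts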
Top-down methods dominate the field of 3D human pose and shape estimation, because they are decoupled from human detection and allow researchers to focus on the core problem. However, cropping, their first step, discards the location information from the very beginning, which makes them unable to accurately predict the global rotation in the original camera coordinate system. To address this problem, we propose to Carry Location Information in Full Frames (CLIFF) into this task. Specifically, we feed more holistic features to CLIFF by concatenating the cropped-image feature with its bounding box information. We compute the 2D reprojection loss with a broader view of the full frame, taking a projection process similar to that of the person projected in the image. Fed and supervised by global-location-aware information, CLIFF directly predicts the global rotation along with more accurate articulated poses. Besides, we propose a pseudo-ground-truth annotator based on CLIFF, which provides high-quality 3D annotations for in-the-wild 2D datasets and offers crucial full supervision for regression-based methods. Extensive experiments on popular benchmarks show that CLIFF outperforms prior arts by a significant margin and reaches the first place on the AGORA leaderboard (the SMPL-Algorithms track). The code and data are available at https://github.com/huawei-noah/noah-research/tree/master/cliff.
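A sketch of the holistic input CLIFF describes: the crop feature is concatenated with a bounding-box vector encoding where the crop sits in the full frame. The exact normalization used here (center offset and box size relative to the focal length) is an assumption patterned on the abstract, not copied code.

import torch

def bbox_info(cx, cy, b, img_w, img_h, focal):
    """cx, cy: (B,) box center in full-frame pixels; b: (B,) box size;
    focal: (B,) full-frame focal length. Returns a (B, 3) conditioning vector
    that restores the location information cropping would otherwise discard."""
    return torch.stack([(cx - img_w / 2) / focal,
                        (cy - img_h / 2) / focal,
                        b / focal], dim=1)

def holistic_feature(crop_feat, binfo):
    """crop_feat: (B, C) from the cropped image; binfo: (B, 3) from bbox_info."""
    return torch.cat([crop_feat, binfo], dim=1)

With this conditioning, the 2D reprojection loss can be evaluated in full-frame coordinates, so the global rotation is supervised consistently with the original camera.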
We propose CrossHuman, a novel method that learns cross-guidance from a parametric human model and multi-frame RGB images to achieve high-quality 3D human reconstruction. To recover geometric details and texture even in invisible regions, we design a reconstruction pipeline that combines tracking-based and tracking-free methods. Given a monocular RGB sequence, we track the parametric human model over the whole sequence, and the points (voxels) corresponding to a target frame are warped to the reference frame by the parametric body motion. Guided by the geometric priors of the parametric body and spatially aligned features from the RGB sequence, a robust implicit surface is fused. Moreover, a multi-frame transformer (MFT) and a self-supervised warp refinement module are integrated into the framework to relax the requirements on the parametric body and help handle very loose clothing. Compared with previous works, our CrossHuman enables high-fidelity geometric details and texture in both visible and invisible regions, and improves the accuracy of human reconstruction even under inaccurately estimated parametric human models. Experiments demonstrate that our method achieves state-of-the-art (SOTA) performance.
Figure 1: Given challenging in-the-wild videos, a recent state-of-the-art video-pose-estimation approach [31] (top) fails to produce accurate 3D body poses. To address this, we exploit a large-scale motion-capture dataset to train a motion discriminator using an adversarial approach. Our model (VIBE) (bottom) is able to produce realistic and accurate pose and shape, outperforming previous work on standard benchmarks.
Figure 1: Human Mesh Recovery (HMR): end-to-end adversarial learning of human pose and shape (panels show, per example: input, reconstruction, side and top-down views, and part segmentation). We describe a real-time framework for recovering the 3D joint angles and shape of the body from a single RGB image. The first two rows show results from our model trained with some 2D-to-3D supervision; the bottom row shows results from a model trained in a fully weakly-supervised manner without using any paired 2D-to-3D supervision. We infer the full 3D body even in case of occlusions and truncations. Note that we capture head and limb orientations.
Inferring human-scene contact (HSC) is the first step toward understanding how humans interact with their surroundings. While detecting 2D human-object interaction (HOI) and reconstructing 3D human pose and shape (HPS) have enjoyed significant progress, reasoning about 3D human-scene contact from a single image remains challenging. Existing HSC detection methods consider only a few predefined types of contact, often reduce the body and scene to a small number of primitives, and even overlook image evidence. To predict human-scene contact from a single image, we address the above limitations from both the data and the algorithmic perspective. We capture a new dataset called RICH, for "Real scenes, Interaction, Contact and Humans." RICH contains multiview outdoor/indoor video sequences at 4K resolution, ground-truth 3D human bodies captured with markerless motion capture, 3D body scans, and high-resolution 3D scene scans. A key feature of RICH is that it also contains accurate vertex-level contact labels on the body. Using RICH, we train a network that predicts dense body-scene contact from a single RGB image. Our key insight is that regions in contact are always occluded, so the network needs the ability to explore the whole image for evidence. We use a transformer to learn such non-local relationships and propose a new Body-Scene contact TRansfOrmer (BSTRO). Very few methods explore 3D contact; those that do focus on the feet only, detect foot contact as a post-processing step, or infer contact from body pose without looking at the scene. To our knowledge, BSTRO is the first method to directly estimate 3D body-scene contact from a single image. We demonstrate that BSTRO significantly outperforms the prior art. The code and dataset are available at https://rich.is.tue.mpg.de.
To date, little attention has been given to multi-view 3D human mesh estimation, despite real-life applicability (e.g., motion capture, sport analysis) and robustness to single-view ambiguities. Existing solutions typically suffer from poor generalization performance to new settings, largely due to the limited diversity of image-mesh pairs in multi-view training data. To address this shortcoming, people have explored the use of synthetic images. But besides the usual impact of visual gap between rendered and target data, synthetic-data-driven multi-view estimators also suffer from overfitting to the camera viewpoint distribution sampled during training which usually differs from real-world distributions. Tackling both challenges, we propose a novel simulation-based training pipeline for multi-view human mesh recovery, which (a) relies on intermediate 2D representations which are more robust to synthetic-to-real domain gap; (b) leverages learnable calibration and triangulation to adapt to more diversified camera setups; and (c) progressively aggregates multi-view information in a canonical 3D space to remove ambiguities in 2D representations. Through extensive benchmarking, we demonstrate the superiority of the proposed solution especially for unseen in-the-wild scenarios.
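A self-contained sketch of the triangulation step such a pipeline relies on: given per-view projection matrices and matching 2D detections, recover the 3D point by the standard linear (DLT) method. The learnable calibration and triangulation the abstract mentions would replace or weight these fixed equations inside a network; this is the classical baseline, not the paper's module.

import torch

def triangulate(proj_mats: torch.Tensor, points2d: torch.Tensor) -> torch.Tensor:
    """proj_mats: (N_views, 3, 4) camera projection matrices;
    points2d: (N_views, 2) matching 2D observations. Returns the (3,) 3D point."""
    rows = []
    for P, (u, v) in zip(proj_mats, points2d):
        rows.append(u * P[2] - P[0])                # each view contributes two
        rows.append(v * P[2] - P[1])                # linear constraints on X
    A = torch.stack(rows)                           # (2 * N_views, 4)
    _, _, vh = torch.linalg.svd(A)
    X = vh[-1]                                      # null-space (least-squares) solution
    return X[:3] / X[3]                             # dehomogenize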
Reconstructing 3D hand meshes from monocular RGB images has attracted increasing attention due to its enormous potential applications in the AR/VR field. Most state-of-the-art methods attempt to tackle this task in an anonymous manner. Specifically, the identity of the subject is ignored, even though it is practically available in real applications where the user does not change across a continuous recording session. In this paper, we propose an identity-aware hand mesh estimation model, which can incorporate the identity information represented by the intrinsic shape parameters of the subject. We demonstrate the importance of the identity information by comparing the proposed identity-aware model to a baseline that treats subjects anonymously. Furthermore, to handle the use case of unseen test subjects, we propose a novel personalization pipeline that calibrates the intrinsic shape parameters using only a few unlabeled RGB images of the subject. Experiments on two large-scale public datasets validate the state-of-the-art performance of our proposed method.
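A hedged sketch of the personalization idea: freeze the trained estimator and optimize only the subject's intrinsic shape parameters so that the model's predictions become self-consistent across a few uncontrolled images. The objective used here (agreement between the model's directly estimated and reprojected 2D keypoints) and the model interface are assumptions; the paper defines its own calibration losses.

import torch

def calibrate_shape(model, images, n_iters=100, lr=1e-2):
    beta = torch.zeros(1, 10, requires_grad=True)   # MANO-style shape vector
    opt = torch.optim.Adam([beta], lr=lr)
    for _ in range(n_iters):
        loss = 0.0
        for img in images:                          # a few unlabeled RGB frames
            # Hypothetical interface: the frozen model, conditioned on beta,
            # returns directly estimated and mesh-reprojected 2D keypoints.
            kp2d_pred, kp2d_reproj = model(img, beta)
            loss = loss + (kp2d_pred - kp2d_reproj).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return beta.detach()                            # calibrated intrinsic shape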
Most prior works that perceive 3D humans from images reason about the human in isolation, without the surrounding environment. However, humans are constantly interacting with surrounding objects, which calls for models that can reason not only about the human but also about the object and their interaction. The problem is extremely challenging due to heavy occlusions between humans and objects, diverse interaction types, and depth ambiguity. In this paper, we introduce CHORE, a novel method that learns to jointly reconstruct the human and the object from a single RGB image. CHORE takes inspiration from recent advances in implicit surface learning and classical model-based fitting. We compute a neural reconstruction of the human and the object, represented implicitly with two unsigned distance fields, a correspondence field to a parametric body model, and an object pose field. This allows us to robustly fit a parametric body model and a 3D object template while reasoning about their interactions. Furthermore, prior pixel-aligned implicit learning methods use synthetic data and make assumptions that are not met in real data. We propose an elegant depth-aware scaling that allows more efficient shape learning on real data. Experiments show that our joint reconstruction, learned with the proposed strategies, significantly outperforms the SOTA. Our code and models are available at https://virtualhumans.mpi-inf.mpg.de/chore