For visual manipulation tasks, we aim to represent image content with semantically meaningful features. However, implicit representations learned from images often lack interpretability, especially when attributes are entangled. We focus on the challenging task of extracting disentangled 3D attributes solely from 2D image data. Specifically, we concentrate on human appearance and learn implicit pose, shape, and garment representations of dressed humans from RGB images. Our method learns an embedding with disentangled latent representations of these three image attributes and enables meaningful re-assembly of features and property control through a 2D-to-3D encoder-decoder structure. The 3D model is inferred solely from feature maps in the learned embedding space. To the best of our knowledge, our method is the first to address this highly underconstrained cross-domain disentanglement problem. We qualitatively and quantitatively demonstrate the framework's ability to transfer pose, shape, and garments in 3D reconstruction on virtual data, and show how an implicit shape loss benefits the model's ability to recover fine-grained reconstruction details.
3D representation and reconstruction of human bodies have long been studied in computer vision. Traditional methods rely mostly on parametric statistical linear models, restricting the space of possible bodies to linear combinations. It is only recently that some methods have attempted to leverage neural implicit representations for human body modeling; while demonstrating impressive results, they are either limited in representation capability or are not physically meaningful and controllable. In this work, we propose a novel neural implicit representation for the human body that is fully differentiable, enabling optimization over disentangled shape and pose latent spaces. In contrast to prior work, our representation is designed based on the kinematic model, which makes the representation controllable for tasks such as pose animation, while simultaneously allowing shape and pose optimization for tasks such as 3D fitting and pose tracking. Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses. Experiments demonstrate improved 3D reconstruction performance over SOTA methods and show the applicability of our method to shape interpolation, model fitting, pose tracking, and motion retargeting.
Recent approaches to drape garments quickly over arbitrary human bodies leverage self-supervision to eliminate the need for large training sets. However, they are designed to train one network per clothing item, which severely limits their generalization abilities. In our work, we rely on self-supervision to train a single network to drape multiple garments. This is achieved by predicting a 3D deformation field conditioned on the latent codes of a generative network, which models garments as unsigned distance fields. Our pipeline can generate and drape previously unseen garments of any topology, whose shape can be edited by manipulating their latent codes. Being fully differentiable, our formulation makes it possible to recover accurate 3D models of garments from partial observations -- images or 3D scans -- via gradient descent. Our code will be made publicly available.
The use of autoencoders for shape generation and editing suffers from manipulations in latent space that can lead to unpredictable changes in the output shape. We present an autoencoder-based method that enables intuitive shape editing in latent space by disentangling latent subspaces into control-point and style variables that can be manipulated independently. The key idea is to add a Lipschitz-type constraint to the loss function, i.e., bounding the change of the output shape proportionally to the change in latent space, which leads to an interpretable latent-space representation. Control points on the surface can then be freely moved around, allowing direct editing in latent space. We evaluate our method by comparing it to state-of-the-art data-driven shape editing methods. Besides shape manipulation, we demonstrate the expressiveness of our control points by leveraging them for unsupervised part segmentation.
Figure 1: DeepSDF represents signed distance functions (SDFs) of shapes via latent code-conditioned feed-forward decoder networks. Above images are raycast renderings of DeepSDF interpolating between two shapes in the learned shape latent space. Best viewed digitally.
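The decoder the caption describes can be sketched minimally: a feed-forward network that takes a shape latent code concatenated with a 3D query point and returns a signed distance, with shape interpolation done by moving linearly between two codes. The random weights, layer sizes, and straight-line interpolation here are illustrative assumptions, not DeepSDF's actual trained architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_decoder(latent_dim=8, hidden=32):
    """Random-weight stand-in for a trained latent-code-conditioned SDF decoder."""
    w1 = rng.standard_normal((latent_dim + 3, hidden)) / np.sqrt(latent_dim + 3)
    w2 = rng.standard_normal((hidden, 1)) / np.sqrt(hidden)
    def decode(z, xyz):
        # Condition on the shape code by concatenating it with each query point.
        zs = np.broadcast_to(z, (len(xyz), len(z)))
        h = np.tanh(np.concatenate([zs, xyz], axis=1) @ w1)
        return (h @ w2).ravel()  # one signed-distance value per query point
    return decode

decode = make_decoder()
z_a, z_b = rng.standard_normal(8), rng.standard_normal(8)
pts = rng.standard_normal((5, 3))

# Shape interpolation: decode SDFs along a straight line in latent space.
for t in (0.0, 0.5, 1.0):
    sdf = decode((1 - t) * z_a + t * z_b, pts)
    print(t, sdf.shape)
```

In the real system the renderings in the figure would come from raycasting the zero level set of `decode` at each interpolated code.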
In this report, we focus on reconstructing clothed humans in the canonical space given multiple views and poses of a human as input. To achieve this, we utilize the geometric prior of the SMPLX model in the canonical space to learn the implicit representation for geometry reconstruction. Based on the observation that the topology of the posed mesh and the mesh in the canonical space is consistent, we propose to learn latent codes on the posed mesh by leveraging multiple input images and then assign the latent codes to the mesh in the canonical space. Specifically, we first leverage normal and geometry networks to extract a feature vector for each vertex on the SMPLX mesh. Normal maps are adopted instead of 2D images for better generalization to unseen images. Then, the features for each vertex on the posed mesh from multiple images are integrated by MLPs. The integrated features, acting as the latent codes, are anchored to the SMPLX mesh in the canonical space. Finally, the latent code for each 3D point is extracted and utilized to calculate the SDF. Our work on reconstructing the human shape in the canonical pose achieves 3rd place in the WCPA MVP-Human Body Challenge.
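The final step above, fetching a latent code for an arbitrary 3D point from codes anchored to mesh vertices, can be sketched as a nearest-vertex lookup. Nearest-vertex assignment is an illustrative simplification (the actual pipeline may interpolate codes over the mesh surface), and the vertex count and code dimension here are toy values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: canonical-space mesh vertices and their integrated latent codes.
verts = rng.standard_normal((100, 3))   # canonical "SMPLX" vertices
codes = rng.standard_normal((100, 16))  # one integrated latent code per vertex

def latent_at(points):
    """Fetch a latent code for each 3D query by its nearest canonical vertex."""
    d = np.linalg.norm(points[:, None, :] - verts[None, :, :], axis=-1)
    return codes[d.argmin(axis=1)]  # (N, 16): codes fed to the SDF network

queries = rng.standard_normal((4, 3))
print(latent_at(queries).shape)  # (4, 16)
```

Each returned code would then be passed, together with the query point, to the SDF network.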
Deep implicit surfaces excel at modeling generic shapes but do not always capture the regularities present in manufactured objects, which simple geometric primitives are particularly good at. In this paper, we propose a representation combining latent and explicit parameters that can be decoded into a set of deep implicit and geometric shapes that are consistent with each other. As a result, we can effectively model both the complex shapes and the highly regular shapes that coexist in manufactured objects. This yields a method for manipulating 3D shapes in an efficient and precise manner.
To make 3D human avatars widely available, we must be able to generate a variety of 3D virtual humans with varied identities and shapes in arbitrary poses. This task is challenging due to the diversity of clothed body shapes, their complex articulation, and the resulting rich yet stochastic geometric details. Hence, current methods for representing 3D humans do not provide a full generative model of people in clothing. In this paper, we propose a novel method that learns to generate detailed 3D shapes of people in a variety of garments with corresponding skinning weights. Specifically, we devise a multi-subject forward-skinning module that is learned from only a few posed, unregistered scans of a handful of subjects. To capture the stochastic nature of high-frequency details in garments, we leverage an adversarial formulation that encourages the model to capture the underlying statistics. We provide empirical evidence that this leads to realistic generation of local details such as wrinkles. We show that our model is able to generate natural human avatars wearing diverse and detailed clothing. Furthermore, we show that our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
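The forward-skinning module above deforms canonical geometry with per-point skinning weights. A minimal linear-blend-skinning sketch (toy random weights and identity bone transforms, not the learned multi-subject module) shows the core operation:

```python
import numpy as np

rng = np.random.default_rng(3)

def lbs(verts, weights, transforms):
    """Linear blend skinning: move each vertex by a weighted mix of bone transforms."""
    homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # (V, 4)
    # Blend the 4x4 bone transforms per vertex using the skinning weights.
    blended = np.einsum('vb,bij->vij', weights, transforms)           # (V, 4, 4)
    return np.einsum('vij,vj->vi', blended, homo)[:, :3]

V, B = 6, 2
verts = rng.standard_normal((V, 3))
weights = rng.random((V, B))
weights /= weights.sum(axis=1, keepdims=True)  # weights sum to 1 per vertex
identity = np.broadcast_to(np.eye(4), (B, 4, 4)).copy()
print(lbs(verts, weights, identity))  # identity transforms leave vertices unchanged
```

In the learned setting, `weights` would be predicted by the skinning network and `transforms` would come from the target pose.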
We introduce REDO, a class-agnostic framework to reconstruct dynamic objects from RGBD or calibrated videos. Compared to prior work, our problem setting is more realistic yet more challenging for three reasons: 1) due to occlusion or camera settings, an object of interest may never be entirely visible, but we aim to reconstruct the complete shape; 2) we aim to handle different object dynamics, including rigid motion, non-rigid motion, and articulation; 3) we aim to reconstruct different categories of objects with one unified framework. To address these challenges, we develop two novel modules. First, we introduce a canonical 4D implicit function that is pixel-aligned with aggregated temporal visual cues. Second, we develop a 4D transformation module that captures object dynamics to support temporal propagation and aggregation. We study the efficacy of REDO in extensive experiments on the synthetic RGBD video datasets SAIL-VOS 3D and DeformingThings4D++, as well as on the real-world video data 3DPW. We find that REDO outperforms state-of-the-art dynamic reconstruction methods. In ablation studies, we validate each developed component.
Implicit fields have been very effective at representing and learning 3D shapes accurately. Signed distance fields and occupancy fields are the preferred representations, both with well-studied properties, despite their restriction to closed surfaces. Several other variations and training principles have been proposed with the goal of representing all classes of shapes. In this paper, we develop a novel and yet fundamental representation by considering the unit vector field defined on 3D space: at each point in $\mathbb{R}^3$ the vector points to the closest point on the surface. We theoretically demonstrate that this vector field can be easily transformed to surface density by applying the vector field divergence. Unlike other standard representations, it directly encodes an important physical property of the surface, which is the surface normal. We further show the advantages of our vector field representation, specifically in learning general (open, closed, or multi-layered) surfaces as well as piecewise planar surfaces. We compare our method on several datasets including ShapeNet, where the proposed new neural implicit field shows superior accuracy in representing any type of shape, outperforming other standard methods. The code will be released at https://github.com/edomel/ImplicitVF
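As an illustration of the representation (not the authors' code), the unit vector field of a sphere can be written in closed form, and a finite-difference divergence stays moderate away from the surface while spiking where the field flips direction across it:

```python
import numpy as np

def vf(p, r=1.0):
    """Unit vector at p pointing to the closest point on a sphere of radius r."""
    n = np.linalg.norm(p, axis=-1, keepdims=True)
    return np.sign(r - n) * p / n  # outward inside the sphere, inward outside

def divergence(p, h=1e-3):
    """Central finite-difference divergence of the vector field at p."""
    d = 0.0
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        d += (vf(p + e)[i] - vf(p - e)[i]) / (2 * h)
    return d

# Away from the surface the divergence is moderate; straddling the surface
# the field flips direction, producing a large negative spike (surface density).
print(divergence(np.array([0.5, 0.0, 0.0])))  # smooth region
print(divergence(np.array([1.0, 0.0, 0.0])))  # on the surface: spike
```

The spike scales like $-1/h$, which is the discrete footprint of the surface delta the paper recovers from the divergence.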
Generative models, an important family of statistical modeling, aim to learn the observed data distribution by generating new instances. Along with the rise of neural networks, deep generative models, such as variational autoencoders (VAEs) and generative adversarial networks (GANs), have made tremendous progress in 2D image synthesis. Recently, researchers have switched their attention from the 2D space to the 3D space, considering that 3D data better aligns with our physical world and hence enjoys great potential in practice. However, unlike a 2D image, which owns an efficient representation (i.e., a pixel grid) by nature, representing 3D data can face far more challenges. Concretely, we would expect an ideal 3D representation to be capable enough to model shapes and appearances in detail, and to be highly efficient so as to model high-resolution data with fast speed and low memory cost. However, existing 3D representations, such as point clouds, meshes, and recent neural fields, usually fail to meet the above requirements simultaneously. In this survey, we make a thorough review of the development of 3D generation, including 3D shape generation and 3D-aware image synthesis, from the perspectives of both algorithms and, more importantly, representations. We hope that our discussion can help the community track the evolution of this field and further spark some innovative ideas to advance this challenging task.
Single-image 3D human reconstruction aims to reconstruct the 3D textured surface of the human body given a single image. While implicit function-based methods have recently achieved reasonable reconstruction performance, they still suffer from degraded quality in both surface geometry and texture from unobserved views. In response, to generate a realistic textured surface, we propose ReFu, a coarse-to-fine approach that refines the projected backside-view image and fuses the refined image to predict the final human body. To suppress the diffused occupancy that causes noise in projection images and reconstructed meshes, we propose to train occupancy probability by simultaneously utilizing 2D and 3D supervision with occupancy-based volume rendering. We also introduce a refinement architecture that generates detail-preserving backside-view images with front-to-back warping. Extensive experiments demonstrate that our method achieves state-of-the-art performance in 3D human reconstruction from a single image, showing enhanced geometry and texture quality from unobserved views.
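Occupancy-based volume rendering, as used above, weights each ray sample by its occupancy times the probability that the ray survives all earlier samples. A minimal sketch of that compositing step (the refinement and warping stages are not shown, and the occupancy values are toy inputs):

```python
import numpy as np

def composite(occ):
    """Per-sample compositing weights from occupancy probabilities along a ray."""
    # Transmittance before each sample: product of (1 - occupancy) of earlier samples.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - occ[:-1]]))
    return occ * trans

occ = np.array([0.1, 0.5, 0.9])  # occupancy at three samples along one ray
w = composite(occ)
print(w, w.sum())  # weights [0.1, 0.45, 0.405]; total mass bounded by 1
```

These weights can then average per-sample colors (for the 2D rendering loss) while the occupancies themselves are supervised in 3D, which is how 2D and 3D supervision meet in one field.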
Reconstructing 3D shapes from single-view images has been a long-standing research problem. In this paper, we present DISN, a Deep Implicit Surface Network that can generate high-quality, detail-rich 3D meshes from 2D images by predicting the underlying signed distance field. In addition to utilizing global image features, DISN predicts the projected location of each 3D point on the 2D image and extracts local features from the image feature maps. Combining global and local features significantly improves the accuracy of the signed distance field prediction, especially for detail-rich areas. To the best of our knowledge, DISN is the first method that consistently captures details such as the holes and thin structures present in 3D shapes from single-view images. DISN achieves state-of-the-art single-view reconstruction performance on a variety of shape categories reconstructed from both synthetic and real images. Code is available at https://github.com/xharlie/disn and the supplementary can be found at https://xharlie.github.io/images/neUrips_2019_Supp.pdf
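The local-feature step described above, projecting each 3D point into the image and reading features at that pixel, can be sketched with a toy pinhole camera and nearest-pixel lookup (DISN uses learned feature maps and interpolation; the intrinsics, map size, and rounding here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

H, W, C = 16, 16, 8
feat_map = rng.standard_normal((H, W, C))  # stand-in for a CNN feature map
K = np.array([[20.0, 0.0, W / 2],          # toy pinhole intrinsics
              [0.0, 20.0, H / 2],
              [0.0, 0.0, 1.0]])

def local_features(points):
    """Project camera-space 3D points and gather per-point local features."""
    uvw = points @ K.T                      # perspective projection
    uv = uvw[:, :2] / uvw[:, 2:3]           # pixel coordinates
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    return feat_map[v, u]                   # nearest-pixel feature lookup

pts = np.array([[0.0, 0.0, 2.0], [0.1, -0.1, 2.5]])
print(local_features(pts).shape)  # (2, 8): one local feature vector per point
```

Concatenating these per-point local features with a global image feature is what drives the SDF prediction for detail-rich regions.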
Accurately reconstructing the complex human geometry caused by various poses and garments from a single image is very challenging. Recently, works based on pixel-aligned implicit functions (PIFu) have taken a step forward and achieved state-of-the-art fidelity on image-based 3D human digitization. However, the training of PIFu relies heavily on expensive and limited 3D ground-truth data (i.e., synthetic data), hindering its generalization to more diverse real-world images. In this work, we propose an end-to-end self-supervised network named SelfPIFu to utilize abundant and diverse in-the-wild images, largely improving the reconstruction when tested on unconstrained in-the-wild images. At the core of SelfPIFu is depth-guided, volume-/surface-aware signed distance field (SDF) learning, which enables self-supervised learning of a PIFu without access to GT meshes. The whole framework consists of a normal estimator, a depth estimator, and an SDF-based PIFu, and makes better use of extra depth GT during training. Extensive experiments demonstrate the effectiveness of our self-supervised framework and the superiority of using depth as input. On synthetic data, our Intersection-over-Union (IoU) reaches 93.5%, 18% higher than PIFuHD. For in-the-wild images, we conduct user studies on the reconstructed results: the selection rate of our results is over 68% compared with other state-of-the-art methods.
Inferring the 3D locations and shapes of multiple objects from a single 2D image is a long-standing goal of computer vision. Most existing works predict only one of these 3D properties or focus on solving for a single object. One fundamental challenge lies in how to learn an effective representation of the image that is well suited for 3D detection and reconstruction. In this work, we propose to learn a regular grid of 3D voxel features from the input image, aligned with the 3D scene space via a 3D feature lifting operator. Based on the 3D voxel features, our novel CenterNet-3D detection head formulates 3D detection as keypoint detection in 3D space. Moreover, we devise an efficient coarse-to-fine reconstruction module, comprising coarse-level voxelization and a novel local PCA-SDF shape representation, which enables fine-detail reconstruction and an order of magnitude faster inference than prior methods. With complementary supervision from both 3D detection and reconstruction, the 3D voxel features become geometry- and context-preserving, and we demonstrate the effectiveness of our method through 3D detection and reconstruction in both single-object and multiple-object scenes.
We present a system for the creation of realistic one-shot mesh-based human head avatars, ROME for short. Using a single photograph, our model estimates a person-specific head mesh and the associated neural texture, which encodes local photometric and geometric details. The resulting avatar is rigged and can be rendered using a neural network, which is trained alongside the mesh and texture estimators on a dataset of in-the-wild videos. In experiments, we observe that our system performs competitively both in terms of head geometry recovery and the quality of renders, especially for cross-person reenactment. See results at https://samsunglabs.github.io/rome/
We present PhoMoH, a neural network methodology to construct generative models of photorealistic 3D geometry and appearance of human heads including hair, beards, clothing and accessories. In contrast to prior work, PhoMoH models the human head using neural fields, thus supporting complex topology. Instead of learning a head model from scratch, we propose to augment an existing expressive head model with new features. Concretely, we learn a highly detailed geometry network layered on top of a mid-resolution head model together with a detailed, local geometry-aware, and disentangled color field. Our proposed architecture allows us to learn photorealistic human head models from relatively little data. The learned generative geometry and appearance networks can be sampled individually and allow the creation of diverse and realistic human heads. Extensive experiments validate our method qualitatively and across different metrics.
In this work, we present a novel method for 3D face reconstruction from multi-view RGB images. Unlike previous methods built upon 3D morphable models (3DMMs), our method leverages an implicit representation to encode rich geometric features. Our overall pipeline consists of two major components: a geometry network, which learns a deformable neural signed distance function (SDF) as the 3D face representation, and a rendering network, which learns to render on-surface points of the neural SDF to match the input images via self-supervised optimization. To handle in-the-wild sparse-view inputs of the same target with different expressions at test time, we further propose residual latent codes to effectively expand the shape space of the learned implicit face representation, as well as a novel view-switch loss to enforce consistency among different views. Our experimental results on multiple benchmark datasets demonstrate that our method outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing. While studies extending 2D StyleGAN to 3D faces have emerged, a corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing. In this paper, we study the challenging problem of 3D GAN inversion, where a latent code is predicted given a single face image to faithfully recover its 3D shape and detailed textures. The problem is ill-posed: innumerable compositions of shape and texture could be rendered to the current image. Furthermore, with the limited capacity of a global latent code, 2D inversion methods cannot preserve faithful shape and texture at the same time when applied to 3D models. To solve this problem, we devise an effective self-training scheme to constrain the learning of inversion. The learning is done efficiently without any real-world 2D-3D training pairs, using only proxy samples generated from a 3D GAN. In addition, apart from a global latent code that captures the coarse shape and texture information, we augment the generation network with a local branch, where pixel-aligned features are added to faithfully reconstruct face details. We further consider a new pipeline to perform 3D view-consistent editing. Extensive experiments show that our method outperforms state-of-the-art inversion methods in both shape and texture reconstruction quality. Code and data will be released.
Neural implicit representations have shown their effectiveness in novel view synthesis and high-quality 3D reconstruction from multi-view images. However, most approaches focus on holistic scene representation yet ignore the individual objects inside it, thereby limiting potential downstream applications. In order to learn object-compositional representations, a few works incorporate 2D semantic maps as a cue in training to grasp the difference among objects. But they neglect the strong connection between object geometry and instance semantic information, which leads to inaccurate modeling of individual instances. This paper proposes a novel framework, ObjectSDF, to build an object-compositional neural implicit representation with high fidelity in 3D reconstruction and object representation. Observing the ambiguity of the conventional volume rendering pipeline, we model the scene by combining the signed distance functions (SDFs) of individual objects to exert explicit surface constraints. The key to distinguishing different instances is to revisit the strong association between an individual object's SDF and its semantic label. In particular, we convert the semantic information into a function of object SDFs and develop a unified and compact representation for scenes and objects. Experimental results show the superiority of the ObjectSDF framework in representing both the holistic object-compositional scene and individual instances. Code can be found at https://qianyiwu.github.io/objectsdf/
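The composition idea above, modeling the scene by combining per-object SDFs with instance identity tied to whichever object's SDF is active, can be sketched with analytic spheres standing in for the learned per-object networks (the union-as-minimum rule and argmin labeling are the standard SDF composition, used here for illustration):

```python
import numpy as np

# Toy object SDFs: two spheres standing in for learned per-object networks.
centers = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
radii = np.array([0.5, 0.8])

def object_sdfs(p):
    """Signed distance from point p to each object (negative inside)."""
    return np.linalg.norm(p[None, :] - centers, axis=1) - radii

def scene_sdf(p):
    """Scene SDF as the minimum over object SDFs; the argmin is the instance label."""
    d = object_sdfs(p)
    return d.min(), int(d.argmin())

print(scene_sdf(np.array([-1.0, 0.0, 0.0])))  # inside object 0
print(scene_sdf(np.array([1.2, 0.0, 0.0])))   # inside object 1
```

Tying the semantic prediction to the per-object distances in this way is what lets a 2D semantic loss constrain each object's 3D surface.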