High-fidelity facial avatar reconstruction from a monocular video is a significant research problem in computer graphics and computer vision. Recently, Neural Radiance Fields (NeRF) have shown impressive novel view rendering results and have been considered for facial avatar reconstruction. However, the complex facial dynamics and the missing 3D information in monocular videos raise significant challenges for faithful facial reconstruction. In this work, we propose a new method for NeRF-based facial avatar reconstruction that utilizes a 3D-aware generative prior. Different from existing works that depend on a conditional deformation field for dynamic modeling, we propose to learn a personalized generative prior, formulated as a local, low-dimensional subspace in the latent space of a 3D-GAN. We propose an efficient method to construct this personalized generative prior from a small set of facial images of a given individual. Once learned, it allows for photo-realistic rendering from novel views, and face reenactment can be realized by navigating in the latent space. Our proposed method is applicable to different driving signals, including RGB images, 3DMM coefficients, and audio. Compared with existing works, we obtain superior novel view synthesis results and more faithful face reenactment performance.
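To make the latent-subspace idea concrete, here is a minimal PyTorch sketch of how one might build a personalized prior and use it for reenactment. The `gan_inverter` (image-to-latent encoder) and `generator3d` (latent-plus-camera renderer) interfaces, as well as the subspace dimension `k`, are illustrative assumptions, not the paper's exact implementation:

```python
import torch

def build_personal_subspace(face_images, gan_inverter, k=10):
    """Fit a local, low-dimensional subspace to one person's inverted latents.
    `k` must not exceed the number of available images."""
    ws = torch.stack([gan_inverter(img) for img in face_images])  # (N, d)
    mean_w = ws.mean(dim=0)
    # PCA via low-rank SVD: the top-k directions span the personalized prior.
    _, _, basis = torch.pca_lowrank(ws - mean_w, q=k)             # basis: (d, k)
    return mean_w, basis

def reenact(driving_w, mean_w, basis, generator3d, cam_pose):
    """Project a driving latent onto the personal subspace, then render."""
    coeffs = (driving_w - mean_w) @ basis          # (k,) subspace coordinates
    w_personal = mean_w + coeffs @ basis.T         # stays inside the subspace
    return generator3d(w_personal, cam_pose)
```

Projecting the driving latent onto the subspace is what keeps the rendered identity fixed while expression and pose vary.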
We propose Few-shot Dynamic Neural Radiance Fields (FDNeRF), the first NeRF-based method capable of reconstructing and editing the expressions of 3D faces from a small number of dynamic images. Unlike existing dynamic NeRFs, which require dense images as input and can only model a single identity, our method enables face reconstruction across different persons with few-shot inputs. Compared with state-of-the-art few-shot NeRFs designed for modeling static scenes, the proposed FDNeRF accepts view-inconsistent dynamic inputs and supports arbitrary facial expression editing, i.e., producing faces with novel expressions beyond the input ones. To handle the inconsistencies between dynamic inputs, we introduce a carefully designed conditional feature warping (CFW) module to perform expression-conditioned warping in 2D feature space, which is also identity-adaptive and 3D-constrained. As a result, features of different expressions are transformed into those of the target expression. We then construct a radiance field from these view-consistent features and use volume rendering to synthesize novel views of the modeled face. Extensive experiments with quantitative and qualitative evaluation demonstrate that our method outperforms existing dynamic and few-shot NeRFs on both 3D face reconstruction and expression editing tasks. Our code and model will be made available upon acceptance.
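The following sketch illustrates what an expression-conditioned 2D feature warp could look like; it is a generic flow-predict-and-resample module, not FDNeRF's exact CFW design, and the feature and expression-code dimensions are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalFeatureWarp(nn.Module):
    """Predict a 2D flow from source features plus an expression code, then
    warp the features with a differentiable grid sample (illustrative)."""
    def __init__(self, feat_dim=64, expr_dim=32):
        super().__init__()
        self.flow_net = nn.Sequential(
            nn.Conv2d(feat_dim + expr_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2, 3, padding=1))  # per-pixel (dx, dy)

    def forward(self, feats, expr):
        b, _, h, w = feats.shape
        expr_map = expr[:, :, None, None].expand(-1, -1, h, w)
        flow = self.flow_net(torch.cat([feats, expr_map], dim=1))
        # Base sampling grid in [-1, 1], shifted by the predicted flow.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack([xs, ys], dim=-1).to(feats).unsqueeze(0)
        grid = grid + flow.permute(0, 2, 3, 1)
        return F.grid_sample(feats, grid, align_corners=True)
```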
Talking head synthesis is an emerging technology with wide applications in film dubbing, virtual avatars, and online education. Recent NeRF-based methods generate more natural talking videos because they better capture the 3D structural information of faces. However, they require a specific model to be trained for each identity using a large dataset. In this paper, we propose Dynamic Facial Radiance Fields (DFRF) for few-shot talking head synthesis, which can rapidly generalize to unseen identities with very little training data. Unlike existing NeRF-based methods, which directly encode the 3D geometry and appearance of a specific person into the network, our DFRF conditions the facial radiance field on 2D appearance images so as to learn a face prior. The facial radiance field can thus be flexibly adjusted to a new identity with only a few reference images. Additionally, to better model facial deformations, we propose a differentiable face warping module, conditioned on the audio signal, that deforms all reference images into the query space. Extensive experiments show that, with only a training clip of tens of seconds available, our proposed DFRF can synthesize natural, high-quality audio-driven talking head videos for novel identities in only 40k iterations. We highly recommend readers view our supplementary video for intuitive comparisons. Code is available at https://sstzal.github.io/dfrf/.
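As an illustration of the image-conditioning idea (the audio-conditioned warping module is a separate component), here is a sketch of pixel-aligned feature sampling: 3D query points are projected into a reference view and features are gathered at the projected pixels. The intrinsics/extrinsics conventions are assumptions:

```python
import torch
import torch.nn.functional as F

def pixel_aligned_features(points, ref_feats, K, Rt):
    """Sample reference-image features at the 2D projections of 3D points.
    points: (N, 3); ref_feats: (1, C, H, W); K: (3, 3) intrinsics;
    Rt: (3, 4) world-to-camera extrinsics (assumed conventions)."""
    cam = (Rt[:3, :3] @ points.T + Rt[:3, 3:]).T        # world -> camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                          # perspective divide
    h, w = ref_feats.shape[-2:]
    grid = torch.stack([uv[:, 0] / (w - 1), uv[:, 1] / (h - 1)], -1) * 2 - 1
    grid = grid.view(1, -1, 1, 2)                        # (1, N, 1, 2)
    sampled = F.grid_sample(ref_feats, grid, align_corners=True)
    return sampled.view(ref_feats.shape[1], -1).T        # (N, C) per point
```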
Volumetric neural rendering methods, such as Neural Radiance Fields (NeRFs), have achieved photo-realistic novel view synthesis. However, in their standard form, NeRFs do not support editing of objects, such as a human head, within a scene. In this work, we propose RigNeRF, a system that goes beyond novel view synthesis alone and enables full control of head pose and facial expressions learned from a single portrait video. We model changes in head pose and facial expressions using a deformation field guided by a 3D morphable face model (3DMM). The 3DMM effectively acts as a prior for RigNeRF, which learns to predict only the residuals to the 3DMM deformations and allows us to render novel (rigid) poses and (non-rigid) expressions that were not present in the input sequence. Training only on a short video captured with a smartphone, we demonstrate the effectiveness of our method for free-view synthesis of a portrait scene with explicit head pose and expression control. The project page can be found here: http://shahrukhathar.github.io/2022/06/06/rignerf.html
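A minimal sketch of the residual-deformation idea, assuming the 3DMM-prescribed per-point deformation is available; the parameter dimension and network width below are illustrative:

```python
import torch
import torch.nn as nn

class ResidualDeformation(nn.Module):
    """3DMM-guided deformation field sketch: the MLP predicts only a residual
    on top of the deformation prescribed by the 3DMM."""
    def __init__(self, param_dim=62, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + param_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))

    def forward(self, x, dmm_deform, dmm_params):
        # x: (N, 3) sample points; dmm_deform: (N, 3) 3DMM deformation;
        # dmm_params: (param_dim,) expression/pose coefficients.
        cond = dmm_params.expand(x.shape[0], -1)
        residual = self.mlp(torch.cat([x, cond], dim=-1))
        return x + dmm_deform + residual   # deformed point in canonical space
```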
In contrast to the traditional avatar creation pipeline, which is a costly process, modern generative approaches directly learn the data distribution from photographs, and the state of the art can now produce highly photo-realistic images. Although numerous works have attempted to extend unconditional generative models to achieve some level of controllability, ensuring multi-view consistency, especially under large poses, remains challenging. In this work, we propose a 3D portrait generation network that produces 3D-consistent portraits while being controllable via semantic parameters for pose, identity, expression, and lighting. The generative network models portraits in 3D with a neural scene representation, and its generation is guided by a parametric face model that supports explicit control. Although latent disentanglement can be further enhanced by contrasting images with partially differing attributes, noticeable inconsistency remains in non-face regions (e.g., hair and background) when animating expressions. We address this by proposing a volume blending strategy, in which we form a composite output by blending the dynamic and static radiance fields, with the two parts segmented using a jointly learned semantic field. Our method outperforms prior art in extensive experiments, producing realistic portraits under natural lighting when viewed from free viewpoints. The proposed method also demonstrates generalization to real images as well as out-of-domain cartoon faces, showing great promise for real applications. Additional video results and code will be available on the project webpage.
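A sketch of what blending a dynamic and a static radiance field with a per-sample semantic weight could look like; the density-weighted color blend below is an illustrative choice, and the paper's exact compositing may differ:

```python
import torch

def blend_radiance_fields(sigma_dyn, rgb_dyn, sigma_sta, rgb_sta, face_prob):
    """Fuse a dynamic (face) and a static (non-face) field per sample.
    face_prob in [0, 1] comes from a jointly learned semantic field."""
    sigma = face_prob * sigma_dyn + (1 - face_prob) * sigma_sta
    # Weight colors by each branch's contribution to the blended density.
    eps = 1e-8
    w_dyn = face_prob * sigma_dyn / (sigma + eps)
    rgb = w_dyn[..., None] * rgb_dyn + (1 - w_dyn[..., None]) * rgb_sta
    return sigma, rgb
```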
Existing neural rendering methods for creating human avatars typically either require dense input signals such as video or multi-view images, or leverage a learned prior from large-scale specific 3D human datasets such that reconstruction can be performed with sparse-view inputs. Most of these methods fail to achieve realistic reconstruction when only a single image is available. To enable the data-efficient creation of realistic animatable 3D humans, we propose ELICIT, a novel method for learning human-specific neural radiance fields from a single image. Inspired by the fact that humans can easily reconstruct the body geometry and infer the full-body clothing from a single image, we leverage two priors in ELICIT: a 3D geometry prior and a visual semantic prior. Specifically, ELICIT introduces the 3D body shape geometry prior from a skinned vertex-based template model (i.e., SMPL) and implements the visual clothing semantic prior with CLIP-based pre-trained models. Both priors are used to jointly guide the optimization for creating plausible content in the invisible areas. In order to further improve visual details, we propose a segmentation-based sampling strategy that locally refines different parts of the avatar. Comprehensive evaluations on multiple popular benchmarks, including ZJU-MoCAP, Human3.6M, and DeepFashion, show that ELICIT outperforms current state-of-the-art avatar creation methods when only a single image is available. Code will be made public for research purposes at https://elicit3d.github.io.
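As a sketch of the visual semantic prior, the snippet below pulls the CLIP embedding of a rendered view toward that of the reference image. This is an illustrative cosine-distance loss using the OpenAI `clip` package, not ELICIT's exact objective:

```python
import torch
import clip  # OpenAI CLIP (https://github.com/openai/CLIP)

def clip_semantic_loss(rendered, reference, model):
    """Cosine distance between CLIP image embeddings (illustrative).
    rendered/reference: (1, 3, 224, 224) tensors, CLIP-normalized."""
    f_render = model.encode_image(rendered)
    f_ref = model.encode_image(reference)
    f_render = f_render / f_render.norm(dim=-1, keepdim=True)
    f_ref = f_ref / f_ref.norm(dim=-1, keepdim=True)
    return 1.0 - (f_render * f_ref).sum(dim=-1).mean()

# Usage: model, _ = clip.load("ViT-B/32"); loss = clip_semantic_loss(r, ref, model)
```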
Figure 1: Given a monocular portrait video sequence of a person, we reconstruct a dynamic neural radiance field representing a 4D facial avatar, which allows us to synthesize novel head poses as well as changes in facial expressions.
We propose a parametric model that maps free-view images into a vector space encoding facial shape, expression, and appearance with a neural radiance field, namely the Morphable Facial NeRF (MoFaNeRF). Specifically, MoFaNeRF takes the encoded facial shape, expression, and appearance, along with a spatial coordinate and view direction, as input to an MLP, and outputs the radiance of the spatial point for photo-realistic image synthesis. Compared with conventional 3D morphable models (3DMM), MoFaNeRF shows superiority in directly synthesizing photo-realistic facial details, even for eyes, mouths, and beards. Moreover, continuous face morphing can easily be achieved by interpolating the input shape, expression, and appearance codes. By introducing identity-specific modulation and a texture encoder, our model synthesizes accurate photometric details and shows strong representation ability. Our model demonstrates strong capability across multiple applications, including image-based fitting, random generation, face rigging, face editing, and novel view synthesis. Experiments show that our method achieves higher representation ability than previous parametric models and achieves competitive performance in several applications. To the best of our knowledge, our work is the first facial parametric model built upon a neural radiance field that can be used for fitting, generation, and manipulation. Our code and models are released at https://github.com/zhuhao-nju/mofanerf.
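A minimal conditional-NeRF sketch in this spirit: one MLP maps position, view direction, and the three codes to density and color. Code dimensions and layer widths are assumptions:

```python
import torch
import torch.nn as nn

class MorphableNeRF(nn.Module):
    """Radiance field conditioned on shape/expression/appearance codes."""
    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3 + 3 * code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.sigma_head = nn.Linear(hidden, 1)
        self.rgb_head = nn.Sequential(
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid())

    def forward(self, x, view_dir, shape_c, expr_c, app_c):
        # x, view_dir: (N, 3); each code: (code_dim,)
        codes = torch.cat([shape_c, expr_c, app_c], -1).expand(x.shape[0], -1)
        h = self.trunk(torch.cat([x, codes], -1))
        sigma = torch.relu(self.sigma_head(h))
        rgb = self.rgb_head(torch.cat([h, view_dir], -1))
        return sigma, rgb
```

Continuous morphing then reduces to interpolating codes, e.g. `shape_c = a * s1 + (1 - a) * s2`.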
In this paper, we propose a novel NeRF-based parametric head model that integrates the neural radiance field into the parametric representation of the human head. It can render high-fidelity head images in real time and directly control the pose and various semantic attributes of the rendered image. Different from existing related parametric models, we use the neural radiance field as a novel 3D proxy instead of the traditional 3D textured mesh, which makes HeadNeRF able to generate high-fidelity images. However, the computationally expensive rendering process of the original NeRF hinders the construction of a parametric NeRF model. To address this issue, we adopt the strategy of integrating 2D neural rendering into NeRF's rendering process and design novel loss terms. As a result, the rendering speed of the head can be significantly accelerated, and the rendering time of one frame drops from 5 s to 25 ms. The newly designed loss terms also improve rendering accuracy, and fine-level details of the human head, such as the gaps between teeth, wrinkles, and beards, can be represented and synthesized by HeadNeRF. Extensive experimental results and several applications demonstrate its effectiveness. We will release the code and trained model to the public.
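The acceleration strategy can be sketched as follows: volume rendering produces a low-resolution feature map, and a 2D convolutional decoder upsamples it to the final image. Channel counts and resolutions below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class NeuralUpsampler(nn.Module):
    """Decode a cheaply volume-rendered low-res feature map to a high-res
    image with a 2D CNN (illustrative 2D-neural-rendering stage)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, feature_map):       # (B, feat_dim, 64, 64)
        return self.decoder(feature_map)  # (B, 3, 256, 256) image
```

Since the expensive per-sample MLP queries happen only at low resolution, most of the per-frame cost moves into a single cheap CNN pass.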
Recent neural human representations can produce high-quality multi-view rendering but require dense multi-view inputs and costly training. They are therefore largely limited to static models, since training for each frame is infeasible. We present HumanNeRF, a generalizable neural representation for high-fidelity free-view synthesis of dynamic humans. Analogous to how IBRNet assists NeRF by avoiding per-scene training, HumanNeRF employs pixel-aligned features aggregated across multi-view inputs, together with a pose-embedded non-rigid deformation field, to handle dynamic motions. The raw HumanNeRF can already produce reasonable renderings from sparse video inputs of unseen subjects and camera settings. To further improve the rendering quality, we augment our solution with an appearance blending module that combines the benefits of neural volume rendering and neural texture blending. Extensive experiments on a variety of multi-view dynamic human datasets demonstrate the generalizability and effectiveness of our approach in synthesizing photo-realistic free-view results under challenging motions and with very sparse camera inputs.
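A sketch of permutation-invariant cross-view aggregation of pixel-aligned features; the actual aggregation used by the method may differ in detail:

```python
import torch

def aggregate_multiview_features(per_view_feats):
    """Pool pixel-aligned features from V source views with mean/variance
    statistics, so the result is invariant to view ordering and count.
    per_view_feats: (V, N, C) features for N query points from V views."""
    mean = per_view_feats.mean(dim=0)
    var = per_view_feats.var(dim=0, unbiased=False)
    return torch.cat([mean, var], dim=-1)  # (N, 2C) aggregated feature
```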
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing. While studies extending 2D StyleGAN to 3D faces have emerged, a corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing. In this paper, we study the challenging problem of 3D GAN inversion, where a latent code is predicted from a single face image to faithfully recover its 3D shape and detailed texture. The problem is ill-posed: innumerable compositions of shape and texture could render to the same image. Furthermore, with the limited capacity of a global latent code, 2D inversion methods cannot preserve faithful shape and texture at the same time when applied to 3D models. To solve this problem, we devise an effective self-training scheme to constrain the learning of inversion. The learning is done efficiently without any real-world 2D-3D training pairs, but with proxy samples generated from a 3D GAN. In addition, apart from a global latent code that captures coarse shape and texture information, we augment the generation network with a local branch, where pixel-aligned features are added to faithfully reconstruct face details. We further consider a new pipeline to perform 3D view-consistent editing. Extensive experiments show that our method outperforms state-of-the-art inversion methods in both shape and texture reconstruction quality. Code and data will be released.
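A sketch of one self-training step on proxy samples; `encoder`, `generator3d`, and `pose_sampler` are assumed interfaces standing in for the actual networks, and the latent loss is simplified to an MSE:

```python
import torch
import torch.nn.functional as F

def proxy_training_step(encoder, generator3d, pose_sampler, optimizer,
                        batch=8, w_dim=512):
    """One self-training step: the pre-trained 3D GAN supplies (image, latent)
    proxy pairs, so no real-world 2D-3D pairs are needed."""
    w = torch.randn(batch, w_dim)              # ground-truth proxy latents
    pose = pose_sampler(batch)                 # random camera poses (assumed)
    with torch.no_grad():
        imgs = generator3d(w, pose)            # proxy renderings
    loss = F.mse_loss(encoder(imgs), w)        # encoder must recover the latent
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```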
Face animation, one of the hottest topics in computer vision, has achieved promising performance with the help of generative models. However, generating identity-preserving and photo-realistic images remains a critical challenge, owing to the complex motion deformation and intricate facial detail modeling involved. To address these problems, we propose a Face Neural Volume Rendering (FNeVR) network to fully explore the potential of 2D motion warping and 3D volume rendering in a unified framework. In FNeVR, we design a 3D Face Volume Rendering (FVR) module to enhance the facial details for image rendering. Specifically, we first extract 3D information with a carefully designed architecture, and then introduce an orthogonal adaptive ray-sampling module for efficient rendering. We also design a lightweight pose editor, enabling FNeVR to edit the facial pose in a simple and effective way. Extensive experiments show that our FNeVR obtains the best overall quality and performance on widely used talking-head benchmarks.
We propose GazeNeRF, a 3D-aware method for the task of gaze redirection. Existing gaze redirection methods operate on 2D images and struggle to generate 3D consistent results. Instead, we build on the intuition that the face region and eyeballs are separate 3D structures that move in a coordinated yet independent fashion. Our method leverages recent advancements in conditional image-based neural radiance fields and proposes a two-stream architecture that predicts volumetric features for the face and eye regions separately. Rigidly transforming the eye features via a 3D rotation matrix provides fine-grained control over the desired gaze angle. The final, redirected image is then attained via differentiable volume compositing. Our experiments show that this architecture outperforms naively conditioned NeRF baselines as well as previous state-of-the-art 2D gaze redirection methods in terms of redirection accuracy and identity preservation.
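To illustrate the rigid gaze control, the sketch below applies the 3D rotation to the eye stream's sample coordinates before querying the eye field, and composites the two streams by density. Note the paper rotates volumetric features; rotating query points is the equivalent rigid transform for a coordinate-based field, and `face_field`/`eye_field` are assumed callables:

```python
import torch

def redirect_gaze_query(points, face_field, eye_field, R_gaze, eye_center):
    """Two-stream query with rigid gaze control (illustrative sketch).
    points: (N, 3); R_gaze: (3, 3) rotation; eye_center: (3,)."""
    pts_eye = (points - eye_center) @ R_gaze.T + eye_center
    sigma_f, rgb_f = face_field(points)
    sigma_e, rgb_e = eye_field(pts_eye)
    sigma = sigma_f + sigma_e              # additive density compositing
    w_e = sigma_e / (sigma + 1e-8)         # eye stream's share of the density
    rgb = (1 - w_e)[..., None] * rgb_f + w_e[..., None] * rgb_e
    return sigma, rgb
```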
3D reconstruction and novel view synthesis of dynamic scenes from collections of single views recently gained increased attention. Existing work shows impressive results for synthetic setups and forward-facing real-world data, but is severely limited in training speed and in the angular range for generating novel views. This paper addresses these limitations and proposes a new method for full 360° novel view synthesis of non-rigidly deforming scenes. At the core of our method are: 1) an efficient deformation module that decouples the processing of spatial and temporal information for acceleration at training and inference time; and 2) a static module representing the canonical scene as a fast hash-encoded neural radiance field. We evaluate the proposed approach on the established synthetic D-NeRF benchmark, which enables efficient reconstruction from a single monocular view per time frame, randomly sampled from a full hemisphere. We refer to this form of input as monocularized data. To prove its practicality for real-world scenarios, we recorded twelve challenging sequences with human actors by sampling single frames from a synchronized multi-view rig. In both cases, our method is trained significantly faster than previous methods (minutes instead of days) while achieving higher visual accuracy for generated novel views. Our source code and data are available at our project page https://graphics.tu-bs.de/publications/kappel2022fast.
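A sketch of the spatial/temporal decoupling: a tiny per-frame MLP produces a compact temporal code, so only the spatial MLP runs per sample. Dimensions are illustrative, and in practice the temporal code would be computed once per frame and cached:

```python
import torch
import torch.nn as nn

class DecoupledDeformation(nn.Module):
    """Deformation into a canonical (e.g., hash-encoded) space with temporal
    and spatial processing separated for speed (illustrative sketch)."""
    def __init__(self, t_dim=8, code_dim=32, hidden=128):
        super().__init__()
        self.temporal = nn.Sequential(nn.Linear(1, t_dim), nn.ReLU(),
                                      nn.Linear(t_dim, code_dim))
        self.spatial = nn.Sequential(nn.Linear(3 + code_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 3))

    def forward(self, x, t):
        # x: (N, 3) sample points; t: scalar frame time.
        code = self.temporal(t.view(1, 1)).expand(x.shape[0], -1)  # once/frame
        offset = self.spatial(torch.cat([x, code], dim=-1))
        return x + offset   # query point in the canonical space
```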
Recently, a surge of high-quality 3D-aware GANs have been proposed, which leverage the generative power of neural rendering. It is natural to combine 3D GANs with GAN inversion methods that project a real image into the generator's latent space, allowing free-view consistent synthesis and editing, referred to as 3D GAN inversion. Although a facial prior is preserved in pre-trained 3D GANs, reconstructing a 3D portrait from only one monocular image is still an ill-posed problem. A straightforward application of 2D GAN inversion methods focuses on texture similarity only, while ignoring the correctness of the 3D geometry. This can cause geometry collapse, especially when reconstructing a side face under an extreme pose. Moreover, synthesized results in novel views are prone to blurriness. In this work, we propose a novel method to improve 3D GAN inversion by introducing a facial symmetry prior. We design a pipeline and constraints to make full use of the pseudo auxiliary view obtained via image flipping, which helps obtain a robust and reasonable geometry during the inversion process. To enhance texture fidelity in unobserved viewpoints, pseudo labels from depth-guided 3D warping provide extra supervision. We design constraints aimed at filtering out conflicting areas from the optimization in asymmetric situations. Comprehensive quantitative and qualitative evaluations on image reconstruction and editing demonstrate the superiority of our method.
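A sketch of how the flipped pseudo view could be constructed, assuming a 4x4 camera-to-world pose: the image is mirrored horizontally and the pose is conjugated by a reflection across the YZ plane. The exact pose convention in the paper may differ:

```python
import torch

def pseudo_symmetric_view(image, cam2world):
    """Build a pseudo auxiliary view from facial symmetry (sketch).
    image: (3, H, W); cam2world: (4, 4) camera-to-world pose (assumed)."""
    flipped = torch.flip(image, dims=[-1])               # mirror horizontally
    M = torch.diag(torch.tensor([-1.0, 1.0, 1.0, 1.0]))  # reflect across x = 0
    mirrored_pose = M @ cam2world @ M                    # conjugate the pose
    return flipped, mirrored_pose
```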
Figure 1: We propose D-NeRF, a method for synthesizing novel views, at an arbitrary point in time, of dynamic scenes with complex non-rigid geometries. We optimize an underlying deformable volumetric function from a sparse set of input monocular views without the need for ground-truth geometry or multi-view images. The figure shows two scenes under varying points of view and time instances synthesized by the proposed model.
We present neural radiance fields for the rendering and temporal (4D) reconstruction of humans in motion, captured by sparse camera rigs or even from monocular video. Our approach combines ideas from neural scene representation, novel view synthesis, and implicit statistical geometric human representations, coupled using novel loss functions. Instead of learning a radiance field with a uniform occupancy prior, we constrain it with a structured implicit human body model represented using signed distance functions. This allows us to robustly fuse information from sparse views and to generalize well beyond the poses and views observed in training. Moreover, we apply geometric constraints to co-learn the structure of the observed subject, including both body and clothing, and to regularize the radiance field towards geometrically plausible solutions. Extensive experiments on multiple datasets demonstrate the robustness and accuracy of our approach, its generalization capabilities significantly beyond the training poses and views, and its statistical extrapolation beyond the observed shapes.
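One common way to couple a signed-distance body model with a radiance field is to derive volume density from the SDF; the sketch below uses a VolSDF-style Laplace CDF as an illustrative choice, not necessarily this paper's exact formulation:

```python
import torch

def sdf_to_density(sdf, beta=0.1):
    """Convert signed distances to volume density via a Laplace CDF, so
    density saturates inside the surface and decays smoothly outside."""
    alpha = 1.0 / beta
    return alpha * torch.where(
        sdf >= 0,
        0.5 * torch.exp(-sdf / beta),        # outside: decaying density
        1.0 - 0.5 * torch.exp(sdf / beta))   # inside: approaches alpha
```

The sharpness parameter `beta` then gives a direct handle for regularizing the field toward a well-defined surface.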
Previous portrait image generation methods roughly fall into two categories: 2D GANs and 3D-aware GANs. 2D GANs can generate high-fidelity portraits but with low view consistency. 3D-aware GAN methods can maintain view consistency, but their generated images are not locally editable. To overcome these limitations, we propose FENeRF, a 3D-aware generator that can produce view-consistent and locally editable portrait images. Our method uses two decoupled latent codes to generate the corresponding facial semantics and texture in a spatially aligned 3D volume with shared geometry. Benefiting from this underlying 3D representation, FENeRF can jointly render the boundary-aligned image and semantic mask, and use the semantic mask to edit the 3D volume via GAN inversion. We further show that such a 3D representation can be learned from widely available monocular image and semantic mask pairs. Moreover, we reveal that jointly learning semantics and texture helps to generate finer geometry. Our experiments demonstrate that FENeRF outperforms state-of-the-art methods in various face editing tasks.
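The joint rendering can be sketched with the standard volume rendering weights shared between the color and semantic heads, which is what keeps the image and mask boundary-aligned:

```python
import torch

def render_rgb_and_semantics(sigma, rgb, sem_logits, deltas):
    """Volume-render an image and a semantic mask from one shared geometry.
    sigma/deltas: (R, S); rgb: (R, S, 3); sem_logits: (R, S, n_classes)
    for R rays with S samples each."""
    alpha = 1.0 - torch.exp(-sigma * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]),
                   1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
    weights = alpha * trans                                   # shared weights
    image = (weights[..., None] * rgb).sum(dim=1)             # (R, 3)
    semantics = (weights[..., None] * sem_logits).sum(dim=1)  # (R, n_classes)
    return image, semantics
```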
Facial 3D morphable models are a central computer vision topic with countless applications and have been highly optimized over the last two decades. The tremendous improvements in deep generative networks have created various possibilities for improving such models and have attracted wide interest. Moreover, recent advances in neural radiance fields are revolutionizing novel view synthesis of known scenes. In this work, we present a facial 3D morphable model that exploits both of the above, can accurately model a subject's identity, pose, and expression, and can render it under arbitrary illumination. This is achieved by utilizing a powerful style-based generator to overcome two main weaknesses of neural radiance fields: their rigidity and their rendering speed. We introduce a style-based generative network that synthesizes, in a single pass, all and only the required rendering samples of a neural radiance field. We create a vast labelled synthetic dataset of facial renders and train the network on these data, so that it can accurately model and generalize over facial identity, pose, and appearance. Finally, we show that this model can accurately be fit to "in-the-wild" facial images of arbitrary pose and illumination, extract the facial characteristics, and be used to re-render the face under controllable conditions.
We propose Neural Actor (NA), a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses. Our method builds on recent neural scene representation and rendering works that learn representations of geometry and appearance from 2D images alone. While existing works have demonstrated compelling rendering of static scenes and playback of dynamic scenes, photo-realistic reconstruction and rendering of humans with neural implicit methods, in particular under user-controlled novel poses, is still difficult. To address this problem, we utilize a coarse body model as a proxy to unwarp the surrounding 3D space into a canonical pose. A neural radiance field then learns pose-dependent geometric deformations as well as pose- and view-dependent appearance effects in the canonical space from multi-view video input. To synthesize novel views of high-fidelity dynamic geometry and appearance, we leverage 2D texture maps defined on the body model as latent variables for predicting residual deformations and the dynamic appearance. Experiments demonstrate that our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize to new poses that differ markedly from the training poses. Furthermore, our method also supports body shape control of the synthesized results.