Recovering textured 3D meshes from monocular images is highly challenging, especially for in-the-wild objects that lack 3D ground truths. In this work, we present MeshInversion, a novel framework that improves reconstruction by exploiting the generative prior of a 3D GAN pre-trained for 3D textured mesh synthesis. Reconstruction is achieved by searching the latent space of the 3D GAN for a sample that best resembles the target mesh. Since the pre-trained GAN encapsulates rich 3D semantics in terms of mesh geometry and texture, searching within the GAN manifold naturally regularizes the realness and fidelity of the reconstruction. Importantly, such regularization is applied directly in 3D space, providing crucial guidance for mesh parts that are unobserved in 2D space. Experiments on standard benchmarks show that our framework obtains faithful 3D reconstructions with consistent geometry and texture across both observed and unobserved parts. Moreover, it generalizes well to meshes that are less commonly seen, such as the extended articulation of deformable objects. Code is released at https://github.com/junzhezhang/mesh-inversion
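Since the abstract frames reconstruction as a search over a pre-trained GAN's latent space, here is a minimal sketch of such a GAN-inversion loop, assuming a hypothetical pretrained textured-mesh generator `G` and a differentiable renderer `render`; it is an illustration of the idea, not the authors' released code.

```python
import torch

def invert(G, render, target_img, target_mask, steps=500, lr=0.05):
    """Optimize a latent code so the rendered textured mesh matches one view."""
    z = torch.randn(1, G.z_dim, requires_grad=True)  # latent code to optimize
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        mesh, texture = G(z)                         # generator outputs a textured mesh
        pred_img, pred_mask = render(mesh, texture)  # reproject to the observed view
        # 2D reprojection losses; staying on the GAN manifold regularizes
        # the parts of the mesh that the single view never observes.
        loss = (torch.nn.functional.l1_loss(pred_img * pred_mask,
                                            target_img * target_mask)
                + torch.nn.functional.mse_loss(pred_mask, target_mask))
        opt.zero_grad(); loss.backward(); opt.step()
    return G(z)  # final textured mesh
```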
Learning deformable 3D objects from 2D images is often an ill-posed problem. Existing methods rely on explicit supervision to establish multi-view correspondences, such as template shape models and keypoint annotations, which restricts their applicability to objects "in the wild". A more natural way of establishing correspondences is to watch videos of objects moving around. In this paper, we present DOVE, a method that learns textured 3D models from monocular videos available online, without keypoint, viewpoint, or template shape supervision. By resolving symmetry-induced pose ambiguities and leveraging temporal correspondences in videos, the model automatically learns to factor out 3D shape, articulated pose, and texture from each individual RGB frame, and is ready for single-image inference at test time. In experiments, we show that existing methods fail to learn sensible 3D shapes without additional keypoint or template supervision, whereas our method produces temporally consistent 3D models that can be animated and rendered from arbitrary viewpoints.
In recent years, substantial progress has been achieved in learning-based reconstruction of 3D objects. At the same time, generative models were proposed that can generate highly realistic images. However, despite this success in these closely related tasks, texture reconstruction of 3D objects has received little attention from the research community and state-of-the-art methods are either limited to comparably low resolution or constrained experimental setups. A major reason for these limitations is that common representations of texture are inefficient or hard to interface for modern deep learning techniques. In this paper, we propose Texture Fields, a novel texture representation which is based on regressing a continuous 3D function parameterized with a neural network. Our approach circumvents limiting factors like shape discretization and parameterization, as the proposed texture representation is independent of the shape representation of the 3D object. We show that Texture Fields are able to represent high frequency texture and naturally blend with modern deep learning techniques. Experimentally, we find that Texture Fields compare favorably to state-of-the-art methods for conditional texture reconstruction of 3D objects and enable learning of probabilistic generative models for texturing unseen 3D models. We believe that Texture Fields will become an important building block for the next generation of generative 3D models.
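As a concrete illustration of "regressing a continuous 3D function parameterized with a neural network", below is a minimal texture-field MLP that maps a 3D surface point plus a conditioning code to an RGB color, independently of the shape representation; the layer sizes and names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TextureField(nn.Module):
    """Continuous texture: (3D point, conditioning code) -> RGB color."""
    def __init__(self, cond_dim=256, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, points, cond):
        # points: (N, 3) surface samples; cond: (cond_dim,) image/shape code
        cond = cond.expand(points.shape[0], -1)
        return self.net(torch.cat([points, cond], dim=-1))

field = TextureField()
rgb = field(torch.rand(1024, 3), torch.randn(256))  # per-point colors, (1024, 3)
```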
Approaches for single-view reconstruction typically rely on viewpoint annotations, silhouettes, the absence of background, multiple views of the same instance, a template shape, or symmetry. We avoid all such supervision and assumptions by explicitly leveraging the consistency between images of different object instances. As a result, our method can learn from large collections of unlabelled images depicting the same object category. Our main contributions are two ways of leveraging cross-instance consistency: (i) progressive conditioning, a training strategy to gradually specialize the model from category to instances in a curriculum learning fashion; and (ii) neighbor reconstruction, a loss enforcing consistency between instances having similar shape or texture. Also critical to the success of our method are: our structured autoencoding architecture, which decomposes an image into explicit shape, texture, pose, and background; an adapted formulation of differentiable rendering; and a new optimization scheme alternating between 3D and pose learning. We compare our approach, UNICORN, both on the diverse synthetic ShapeNet dataset, the classical benchmark for methods requiring multiple views as supervision, and on standard real-image benchmarks (Pascal3D+ Cars, CUB), for which most methods require known templates and silhouette annotations. We also demonstrate applicability to more challenging real-world collections (CompCars, LSUN), where silhouettes are not available and images are not cropped around the object.
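To make the neighbor-reconstruction idea concrete, here is a hedged sketch: each image is re-rendered with the shape and texture codes of its nearest neighbor in texture space while keeping its own pose and background. `encode` and `render` stand in for the paper's autoencoder components; the code layout and distance metric are assumptions.

```python
import torch
from collections import namedtuple

Codes = namedtuple("Codes", "shape texture pose background")

def neighbor_reconstruction_loss(images, encode, render):
    # encode: image -> Codes; render: (shape, texture, pose, background) -> image
    codes = [encode(im) for im in images]
    tex = torch.stack([c.texture for c in codes]).flatten(1)
    dist = torch.cdist(tex, tex)          # pairwise texture-code distances
    dist.fill_diagonal_(float("inf"))     # exclude self-matches
    nn_idx = dist.argmin(dim=1)
    loss = 0.0
    for i, c in enumerate(codes):
        n = codes[nn_idx[i]]
        # reconstruct image i with the neighbor's shape and texture,
        # keeping image i's own pose and background
        recon = render(n.shape, n.texture, c.pose, c.background)
        loss = loss + torch.nn.functional.mse_loss(recon, images[i])
    return loss / len(codes)
```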
As several industries are moving towards modeling massive 3D virtual worlds, the need for content creation tools that can scale in terms of the quantity, quality, and diversity of 3D content is becoming evident. In our work, we aim to train performant 3D generative models that synthesize textured meshes which can be directly consumed by 3D rendering engines, and are thus immediately usable in downstream applications. Prior works on 3D generative modeling either lack geometric details, are limited in the mesh topologies they can produce, typically do not support textures, or utilize neural renderers in the synthesis process, which makes their use in common 3D software non-trivial. In this work, we introduce GET3D, a generative model that directly generates explicit textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures. We bridge recent successes in differentiable surface modeling, differentiable rendering, and 2D generative adversarial networks to train our model from 2D image collections. GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, and motorbikes to human characters and buildings, achieving significant improvements over previous methods.
Recent research has shown that controllable image generation based on pre-trained GANs can benefit a wide range of computer vision tasks. However, less attention has been devoted to 3D vision tasks. In light of this, we propose a novel image-conditioned neural implicit field, which can leverage the 2D supervision of GAN-generated multi-view images and perform single-view reconstruction of generic objects. First, a novel offline generator is proposed to produce plausible pseudo-images with full control over the viewpoint. We then propose to leverage a neural implicit function, together with a differentiable renderer, to learn 3D geometry from the pseudo-images, given object masks and rough pose initialization. To further detect unreliable supervision, we introduce a novel uncertainty module that predicts an uncertainty map, which remedies the negative influence of uncertain regions in the pseudo-images and leads to better reconstruction performance. The effectiveness of our approach is demonstrated by superior single-view 3D reconstruction results on generic objects.
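One common way to realize such an uncertainty module is a heteroscedastic reconstruction loss, where a predicted per-pixel log-variance down-weights unreliable pseudo-image regions. The form below is a standard choice assumed for illustration, not necessarily the paper's exact loss.

```python
import torch

def uncertainty_weighted_loss(pred, target, log_var):
    # log_var: predicted per-pixel log-variance (the uncertainty map);
    # uncertain pixels get a small weight but pay a log-variance penalty,
    # which stops the network from declaring everything uncertain.
    return (torch.exp(-log_var) * (pred - target) ** 2 + log_var).mean()
```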
The neural radiance field (NeRF) has shown promising results in preserving the fine details of objects and scenes. However, unlike mesh-based representations, it remains an open problem to build dense correspondences across different NeRFs of the same category, which is essential in many downstream tasks. The main difficulties of this problem lie in the implicit nature of NeRF and the lack of ground-truth correspondence annotations. In this paper, we show it is possible to bypass these challenges by leveraging the rich semantics and structural priors encapsulated in a pre-trained NeRF-based GAN. Specifically, we exploit such priors from three aspects, namely 1) a dual deformation field that takes latent codes as global structural indicators, 2) a learning objective that regards generator features as geometric-aware local descriptors, and 3) a source of infinite object-specific NeRF samples. Our experiments demonstrate that such priors lead to 3D dense correspondence that is accurate, smooth, and robust. We also show that established dense correspondence across NeRFs can effectively enable many NeRF-based downstream applications such as texture transfer.
Novel view synthesis for humans in motion is a challenging computer vision problem that enables applications such as free-viewpoint video. Existing methods typically use complex setups with multiple input views, 3D supervision, or pre-trained models that do not generalize to new identities. Aiming to address these limitations, we present a novel view synthesis framework that generates realistic renders from unseen views of any human captured by a single-view sensor with sparse RGB-D, similar to a low-cost depth camera, and without actor-specific models. We propose an architecture that learns dense features in novel views obtained via sphere-based neural rendering, and creates complete renders using a global context inpainting model. Additionally, an enhancer network improves the overall fidelity, producing crisp renders with fine details even in regions occluded in the original view. We show that our method produces high-quality novel views of synthetic and real human actors from a single sparse RGB-D input. It generalizes to unseen identities and new poses, and faithfully reconstructs facial expressions. Our approach outperforms prior human view synthesis methods and is robust to different levels of input sparsity.
We introduce Structured 3D Features, a model based on a novel implicit 3D representation that pools pixel-aligned image features onto dense 3D points sampled from a parametric, statistical human mesh surface. The 3D points have associated semantics and can move freely in 3D space. This allows for optimal coverage of the person of interest, beyond just the body shape, which in turn, additionally helps modeling accessories, hair, and loose clothing. Owing to this, we present a complete 3D transformer-based attention framework which, given a single image of a person in an unconstrained pose, generates an animatable 3D reconstruction with albedo and illumination decomposition, as a result of a single end-to-end model, trained semi-supervised, and with no additional postprocessing. We show that our S3F model surpasses the previous state-of-the-art on various tasks, including monocular 3D reconstruction, as well as albedo and shading estimation. Moreover, we show that the proposed methodology allows novel view synthesis, relighting, and re-posing the reconstruction, and can naturally be extended to handle multiple input images (e.g. different views of a person, or the same view, in different poses, in video). Finally, we demonstrate the editing capabilities of our model for 3D virtual try-on applications.
We present HARP (HAnd Reconstruction and Personalization), a personalized hand avatar creation approach that takes a short monocular RGB video of a human hand as input and reconstructs a faithful hand avatar exhibiting a high-fidelity appearance and geometry. In contrast to the major trend of neural implicit representations, HARP models a hand with a mesh-based parametric hand model, a vertex displacement map, a normal map, and an albedo without any neural components. As validated by our experiments, the explicit nature of our representation enables a truly scalable, robust, and efficient approach to hand avatar creation. HARP is optimized via gradient descent from a short sequence captured by a hand-held mobile phone and can be directly used in AR/VR applications with real-time rendering capability. To enable this, we carefully design and implement a shadow-aware differentiable rendering scheme that is robust to high degree articulations and self-shadowing regularly present in hand motion sequences, as well as challenging lighting conditions. It also generalizes to unseen poses and novel viewpoints, producing photo-realistic renderings of hand animations performing highly-articulated motions. Furthermore, the learned HARP representation can be used for improving 3D hand pose estimation quality in challenging viewpoints. The key advantages of HARP are validated by the in-depth analyses on appearance reconstruction, novel-view and novel pose synthesis, and 3D hand pose refinement. It is an AR/VR-ready personalized hand representation that shows superior fidelity and scalability.
Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanying textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering. Neural rendering is closely related, and combines ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real-world observations. Neural rendering is a leap forward towards the goal of synthesizing photo-realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways to inject learnable components into the rendering pipeline. This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel viewpoint synthesis of a captured scene. In addition to methods that handle static scenes, we also cover neural scene representations for modeling non-rigidly deforming objects...
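As a concrete illustration of the inverse rendering formulation mentioned above (fitting a scene representation to observations through a differentiable rendering loss), here is a minimal analysis-by-synthesis loop; `diff_render` and the parameter dictionary are hypothetical stand-ins for any differentiable renderer and scene representation.

```python
import torch

def inverse_rendering(diff_render, observed, init_params, steps=1000, lr=1e-2):
    # init_params: dict of scene tensors, e.g. vertex positions, albedo, lights
    params = {k: v.clone().requires_grad_(True) for k, v in init_params.items()}
    opt = torch.optim.Adam(params.values(), lr=lr)
    for _ in range(steps):
        rendered = diff_render(**params)  # differentiable image formation
        loss = torch.nn.functional.mse_loss(rendered, observed)
        opt.zero_grad(); loss.backward(); opt.step()
    return params  # recovered scene representation
```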
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing. While studies over extending 2D StyleGAN to 3D faces have emerged, a corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing. In this paper, we study the challenging problem of 3D GAN inversion where a latent code is predicted given a single face image to faithfully recover its 3D shapes and detailed textures. The problem is ill-posed: innumerable compositions of shape and texture could be rendered to the current image. Furthermore, with the limited capacity of a global latent code, 2D inversion methods cannot preserve faithful shape and texture at the same time when applied to 3D models. To solve this problem, we devise an effective self-training scheme to constrain the learning of inversion. The learning is done efficiently without any real-world 2D-3D training pairs but proxy samples generated from a 3D GAN. In addition, apart from a global latent code that captures the coarse shape and texture information, we augment the generation network with a local branch, where pixel-aligned features are added to faithfully reconstruct face details. We further consider a new pipeline to perform 3D view-consistent editing. Extensive experiments show that our method outperforms state-of-the-art inversion methods in both shape and texture reconstruction quality. Code and data will be released.
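The "pixel-aligned features" of the local branch can be illustrated with a standard sampling pattern: project 3D points into the image plane and bilinearly sample a 2D feature map at those locations. The sketch below is a generic version of this technique; the shapes and the `project` helper are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def pixel_aligned_features(feat2d, points, project):
    # feat2d: (1, C, H, W) image feature map; points: (N, 3) query points;
    # project: maps 3D points to normalized image coordinates in [-1, 1]
    xy = project(points)                        # (N, 2)
    grid = xy.view(1, 1, -1, 2)                 # grid_sample's expected layout
    sampled = F.grid_sample(feat2d, grid, align_corners=True)  # (1, C, 1, N)
    return sampled.view(feat2d.shape[1], -1).t()               # (N, C) per-point features
```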
Creating high-quality articulated 3D models of animals is challenging, whether by manual creation or by using 3D scanning tools. Therefore, techniques for reconstructing articulated 3D objects from 2D images are crucial and highly useful. In this work, we propose a practical problem setting to estimate the 3D pose and shape of animals given only a few (10-30) in-the-wild images of a particular animal species (e.g., horse). Contrary to existing works that rely on pre-defined template shapes, we do not assume any form of 2D or 3D ground-truth annotation, nor do we leverage any multi-view or temporal information. Moreover, each input image ensemble can contain animal instances with varying poses, backgrounds, illumination, and textures. Our key insight is that 3D parts have much simpler shapes compared to the overall animal, and that they are robust w.r.t. animal pose articulations. Following these insights, we propose LASSIE, a novel optimization framework that discovers 3D parts in a self-supervised manner with minimal user intervention. A key driving force behind LASSIE is enforcing 2D-3D part consistency using self-supervised deep features. Experiments on Pascal-Part and a self-collected in-the-wild animal dataset demonstrate considerably better 3D reconstruction, as well as better 2D and 3D part discovery, compared to prior art. Project page: chhankyo.github.io/lassie/
Recovering the skeletal shape of an animal from a monocular video is a longstanding challenge. Prevailing animal reconstruction methods often adopt a control-point driven animation model and optimize bone transforms individually without considering skeletal topology, yielding unsatisfactory shape and articulation. In contrast, humans can easily infer the articulation structure of an unknown animal by associating it with a seen articulated character in their memory. Inspired by this fact, we present CASA, a novel Category-Agnostic Skeletal Animal reconstruction method consisting of two major components: a video-to-shape retrieval process and a neural inverse graphics framework. During inference, CASA first retrieves an articulated shape from a 3D character assets bank so that the input video scores highly with the rendered image, according to a pretrained language-vision model. CASA then integrates the retrieved character into an inverse graphics framework and jointly infers the shape deformation, skeleton structure, and skinning weights through optimization. Experiments validate the efficacy of CASA regarding shape reconstruction and articulation. We further demonstrate that the resulting skeletal-animated characters can be used for re-animation.
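The retrieval step described above can be sketched with CLIP-style image embeddings: embed the input video frames and multi-view renders of each candidate asset, then pick the asset whose renders score highest. The snippet below assumes the OpenAI `clip` package and pre-rendered candidate views; it illustrates the idea rather than CASA's exact pipeline.

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

@torch.no_grad()
def retrieve_asset(frames, asset_renders, device="cpu"):
    # frames: list of PIL video frames; asset_renders: per-asset lists of PIL renders
    model, preprocess = clip.load("ViT-B/32", device=device)
    video = model.encode_image(
        torch.stack([preprocess(f) for f in frames]).to(device)).mean(0, keepdim=True)
    scores = []
    for renders in asset_renders:
        emb = model.encode_image(
            torch.stack([preprocess(r) for r in renders]).to(device)).mean(0, keepdim=True)
        scores.append(torch.cosine_similarity(video, emb).item())
    return max(range(len(scores)), key=scores.__getitem__)  # index of best-matching asset
```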
Unsupervised learning of 3D-aware generative adversarial networks (GANs) from collections of single-view 2D photographs has recently seen much progress. However, these 3D GANs have not yet been demonstrated for human bodies, and the generated radiance fields of existing frameworks are not directly editable, limiting their applicability in downstream tasks. We propose a solution to these challenges by developing a 3D GAN framework that learns to generate radiance fields of human bodies or faces in a canonical pose and warps them into a desired body pose or facial expression using an explicit deformation field. Using our framework, we demonstrate the first high-quality radiance field generation results for human bodies. Moreover, we show that our deformation-aware training procedure significantly improves the quality of the generated bodies or faces when their poses or facial expressions are edited, compared to a 3D GAN that is not trained with explicit deformations.
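A minimal sketch of how such an explicit deformation field is typically queried: a sample point in posed space is warped back to the canonical space before the canonical radiance field is evaluated. Both networks below are hypothetical stand-ins, and the residual-offset parameterization is an assumption.

```python
def query_posed_field(canonical_nerf, deformation_field, x_posed, pose_params):
    # deformation_field: maps posed-space points (plus target pose/expression
    # parameters) to canonical-space offsets; canonical_nerf: point -> (rgb, sigma)
    x_canonical = x_posed + deformation_field(x_posed, pose_params)
    rgb, sigma = canonical_nerf(x_canonical)
    return rgb, sigma  # composited along rays as in standard volume rendering
```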
We present a system for realistic one-shot mesh-based creation of human head avatars, ROME for short. Using a single photograph, our model estimates a person-specific head mesh and the associated neural texture, which encodes local photometric and geometric details. The resulting avatar is rigged and can be rendered using a neural network, which is trained alongside the mesh and texture estimators on a dataset of in-the-wild videos. In experiments, we observe that our system performs competitively both in terms of head geometry recovery and rendering quality, especially for cross-person reenactment. See results at https://samsunglabs.github.io/rome/
We present neural radiance fields for the rendering and temporal (4D) reconstruction of humans in motion, captured by sparse cameras or even from monocular video. Our approach combines ideas from neural scene representation, novel view synthesis, and implicit statistical geometric human representations, coupled using novel loss functions. Instead of learning a radiance field with a uniform occupancy prior, we constrain it by a structured implicit human body model represented using signed distance functions. This allows us to robustly fuse information from sparse views and to generalize well beyond the poses or views observed in training. Moreover, we apply geometric constraints to co-learn the structure of the observed subject, including both body and clothing, and to regularize the radiance field to geometrically plausible solutions. Extensive experiments on multiple datasets demonstrate the robustness and accuracy of our approach, its generalization capability significantly beyond a small set of training poses and views, and statistical extrapolation beyond the observed shapes.
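One common way to constrain a radiance field with a signed distance function, as described above, is to derive volume density from the SDF so that occupancy stays tied to the body surface. The logistic mapping below is a standard choice assumed purely for illustration; the paper's exact formulation may differ.

```python
import torch

def sdf_to_density(sdf, alpha=100.0, beta=0.01):
    # sdf < 0 inside the body; density transitions smoothly from ~alpha inside
    # to 0 outside, with the transition width around the surface set by beta.
    return alpha * torch.sigmoid(-sdf / beta)
```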
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views while preserving specific details of the input image. High-fidelity 3D GAN inversion is inherently challenging due to the geometry-texture trade-off in 3D inversion, where overfitting to a single view input image often damages the estimated geometry during the latent optimization. To solve this challenge, we propose a novel pipeline that builds on the pseudo-multi-view estimation with visibility analysis. We keep the original textures for the visible parts and utilize generative priors for the occluded parts. Extensive experiments show that our approach achieves advantageous reconstruction and novel view synthesis quality over state-of-the-art methods, even for images with out-of-distribution textures. The proposed pipeline also enables image attribute editing with the inverted latent code and 3D-aware texture modification. Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
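The visibility-aware compositing described above can be illustrated with a simple per-texel blend: keep the original texture where the input view sees the surface, and fall back to the generative prior elsewhere. `visibility` is a soft mask produced by the visibility analysis; all names here are illustrative, not the paper's implementation.

```python
def composite_texture(original_tex, generated_tex, visibility):
    # visibility: soft per-texel mask in [0, 1]; 1 = seen in the input image.
    # Visible regions keep the input image's texture; occluded regions are
    # filled in from the generative prior.
    return visibility * original_tex + (1.0 - visibility) * generated_tex
```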
We propose Neural Actor (NA), a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses. Our method builds on recent neural scene representation and rendering works that learn representations of geometry and appearance from 2D images alone. While existing works achieve compelling rendering of static scenes and playback of dynamic scenes, photo-realistic reconstruction and rendering of humans with neural implicit methods, in particular under user-controlled novel poses, remains difficult. To address this problem, we utilize a coarse body model as a proxy to unwarp the surrounding 3D space into a canonical pose. A neural radiance field then learns pose-dependent geometric deformations as well as pose- and view-dependent appearance effects in the canonical space from multi-view video input. To synthesize novel views of high-fidelity dynamic geometry and appearance, we leverage 2D texture maps defined on the body model as latent variables for predicting residual deformations and the dynamic appearance. Experiments demonstrate that our method achieves better quality than the state of the art on playback as well as on novel pose synthesis, and can even generalize well to new poses that differ starkly from the training poses. Furthermore, our method supports body shape control of the synthesized results.