Creating and editing the shape and color of 3D objects requires tremendous human effort and expertise. Compared to direct manipulation in 3D interfaces, 2D interactions such as sketching and scribbling are usually more natural and intuitive for users. In this paper, we propose a generic multi-modal generative model that couples 2D modalities and implicit 3D representations through a shared latent space. With the proposed model, versatile 3D generation and manipulation are enabled by simply propagating edits from a specific 2D controlling modality through the latent space, for example, editing a 3D shape by drawing a sketch, re-colorizing an object by painting on its 2D rendering, or generating 3D shapes of a certain category from one or a few reference images. Unlike prior works, our model does not require re-training or fine-tuning for each editing task; it is also conceptually simple, easy to implement, robust to input domain shifts, and produces diverse reconstructions from partial 2D inputs. We evaluate our framework on two representative 2D modalities, grayscale line sketches and rendered color images, and demonstrate that our method enables various shape manipulation and generation tasks with these 2D modalities.
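To make the shared-latent-space idea above concrete, the following minimal PyTorch sketch (an illustration under assumed module names and sizes, not the paper's implementation) couples two 2D-modality encoders with one implicit 3D decoder through a single latent vector; an edit in either 2D modality is simply re-encoded into the shared latent before decoding.

```python
# Minimal sketch (assumed architecture, not the paper's implementation):
# two 2D-modality encoders share one latent space that conditions an
# implicit 3D decoder, so edits in either modality propagate via the latent.
import torch
import torch.nn as nn

LATENT_DIM = 256

class ImageEncoder(nn.Module):
    """Maps a 2D control modality (sketch or rendered image) to a latent code."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

class ImplicitDecoder(nn.Module):
    """Predicts an implicit value (e.g., SDF/occupancy) for 3D points given a latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, latent, points):
        # latent: (B, LATENT_DIM), points: (B, N, 3)
        z = latent.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.net(torch.cat([z, points], dim=-1)).squeeze(-1)

sketch_enc = ImageEncoder(in_channels=1)   # grayscale line sketch
color_enc = ImageEncoder(in_channels=3)    # rendered color image
decoder = ImplicitDecoder()

# Editing = re-encoding the edited 2D input and decoding the same 3D query points.
edited_sketch = torch.randn(1, 1, 128, 128)
query_points = torch.rand(1, 2048, 3) * 2 - 1
implicit_values = decoder(sketch_enc(edited_sketch), query_points)
print(implicit_values.shape)  # torch.Size([1, 2048])
```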
Generative models, an important family of statistical models, aim to learn the observed data distribution by generating new instances. Along with the rise of neural networks, deep generative models, such as variational autoencoders (VAEs) and generative adversarial networks (GANs), have made tremendous progress in 2D image synthesis. Recently, researchers have shifted their attention from the 2D space to the 3D space, considering that 3D data aligns better with our physical world and hence enjoys great potential in practice. However, unlike a 2D image, which naturally has an efficient representation (i.e., the pixel grid), representing 3D data poses far more challenges. Concretely, we would expect an ideal 3D representation to be capable enough to model shapes and appearances in detail, and to be highly efficient so as to model high-resolution data with fast speed and low memory cost. However, existing 3D representations, such as point clouds, meshes, and recent neural fields, usually fail to meet the above requirements simultaneously. In this survey, we make a thorough review of the development of 3D generation, including 3D shape generation and 3D-aware image synthesis, from the perspectives of both algorithms and, more importantly, representations. We hope that our discussion can help the community track the evolution of this field and further spark some innovative ideas to advance this challenging task.
Image translation and manipulation have gained increasing attention along with the rapid development of deep generative models. Although existing methods have achieved impressive results, they mainly operate in 2D space. In light of recent advances in NeRF-based 3D-aware generative models, we introduce a new task, semantic-to-NeRF translation, which aims to reconstruct a 3D scene modeled by NeRF, conditioned on a single-view semantic mask as input. To kick off this novel task, we propose the Sem2NeRF framework. In particular, Sem2NeRF addresses this highly challenging task by encoding the semantic mask into the latent code that controls the 3D scene representation of a pre-trained decoder. To further improve the accuracy of the mapping, we integrate a new region-aware learning strategy into the design of both the encoder and the decoder. We verify the efficacy of the proposed Sem2NeRF and demonstrate that it outperforms several strong baselines on two benchmark datasets. Code and video are available at https://donydchen.github.io/sem2nerf/
We present CLIP-NeRF, a multi-modal 3D object manipulation method for neural radiance fields (NeRF). By leveraging the joint language-image embedding space of the recent Contrastive Language-Image Pre-training (CLIP) model, we propose a unified framework that allows manipulating NeRF in a user-friendly way, using either a short text prompt or an exemplar image. Specifically, to combine the novel view synthesis capability of NeRF with the controllable manipulation ability of latent representations from generative models, we introduce a disentangled conditional NeRF architecture that allows individual control over both shape and appearance. This is achieved by performing the shape conditioning via applying a learned deformation field to the positional encoding and deferring color conditioning to the volumetric rendering stage. To bridge this disentangled latent representation to the CLIP embedding, we design two code mappers that take a CLIP embedding as input and update the latent codes to reflect the targeted editing. The mappers are trained with a CLIP-based matching loss to ensure manipulation accuracy. Furthermore, we propose an inverse optimization method that accurately projects an input image to the latent codes for manipulation, enabling editing on real images. We evaluate our approach with extensive experiments on a variety of text prompts and exemplar images, and also provide an intuitive interface for interactive editing. Our implementation is available at https://cassiepython.github.io/clipnerf/
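The code-mapper idea can be sketched as follows; this is a hedged illustration with assumed dimensions and stand-ins for the NeRF renderer and the CLIP encoders, not the released CLIP-NeRF code. A mapper turns a CLIP embedding into a latent-code offset, and the mappers would be trained with a cosine-style CLIP matching loss between the rendered edit and the target prompt.

```python
# Sketch of the code-mapper idea (assumed shapes; the real CLIP-NeRF uses a
# disentangled conditional NeRF and the official CLIP encoders).
import torch
import torch.nn as nn
import torch.nn.functional as F

CLIP_DIM, CODE_DIM = 512, 128

class CodeMapper(nn.Module):
    """Maps a CLIP embedding to an offset of a latent code (shape or appearance)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CLIP_DIM, 256), nn.ReLU(),
            nn.Linear(256, CODE_DIM),
        )

    def forward(self, clip_embedding):
        return self.net(clip_embedding)

def clip_matching_loss(image_embedding, text_embedding):
    """Cosine-distance matching loss between CLIP embeddings of the render and the prompt."""
    return 1.0 - F.cosine_similarity(image_embedding, text_embedding, dim=-1).mean()

# --- toy usage with stand-ins for the NeRF renderer and CLIP encoders ---
shape_mapper, appearance_mapper = CodeMapper(), CodeMapper()
z_shape, z_app = torch.randn(1, CODE_DIM), torch.randn(1, CODE_DIM)
text_clip = F.normalize(torch.randn(1, CLIP_DIM), dim=-1)   # stand-in for CLIP(text)

# Apply the predicted offsets to obtain edited codes.
z_shape_edit = z_shape + shape_mapper(text_clip)
z_app_edit = z_app + appearance_mapper(text_clip)

# In the real pipeline the edited codes condition the NeRF, the rendering is
# passed through the CLIP image encoder, and the mappers are trained with the
# matching loss; here a random vector stands in for CLIP(render(z_shape_edit, z_app_edit)).
render_clip = F.normalize(torch.randn(1, CLIP_DIM), dim=-1)
loss = clip_matching_loss(render_clip, text_clip)
print(float(loss))
```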
As several industries are moving towards modeling massive 3D virtual worlds, the need for content creation tools that can scale in terms of the quantity, quality, and diversity of 3D content is becoming evident. In our work, we aim to train performant 3D generative models that synthesize textured meshes which can be directly consumed by 3D rendering engines and are thus immediately usable in downstream applications. Prior works on 3D generative modeling either lack geometric details, are limited in the mesh topology they can produce, typically do not support textures, or utilize neural renderers in the synthesis process, which makes their use in common 3D software non-trivial. In this work, we introduce GET3D, a generative model that directly generates explicit textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures. We bridge recent successes in differentiable surface modeling, differentiable rendering, and 2D generative adversarial networks to train our model from 2D image collections. GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes, and human characters to buildings, achieving significant improvements over previous methods.
As information exists in various modalities in the real world, effective interaction and fusion among multimodal information play a key role in the creation and perception of multimodal data in computer vision and deep learning research. With its superb power in modeling the interaction among multimodal information, multimodal image synthesis and editing has become a hot research topic in recent years. Unlike traditional visual guidance, which provides explicit clues, multimodal guidance offers intuitive and flexible means for image synthesis and editing. On the other hand, this field also faces several challenges characterized by inherent modality gaps, the synthesis of high-resolution images, faithful evaluation metrics, and so on. In this survey, we comprehensively contextualize the recent progress of multimodal image synthesis and editing and formulate taxonomies according to data modalities and model architectures. We start with an introduction to different types of guidance modalities in image synthesis and editing. We then describe multimodal image synthesis and editing approaches in detail, including generative adversarial networks (GANs), GAN inversion, transformers, and other methods such as NeRF and diffusion models. This is followed by a comprehensive description of benchmark datasets and corresponding evaluation metrics widely adopted in multimodal image synthesis and editing, as well as detailed comparisons of different synthesis methods, analyzing their respective advantages and limitations. Finally, we provide in-depth insights into the current research challenges and possible future research directions. A project associated with this survey is available at https://github.com/fnzhan/mise
Recent research has shown that controllable image generation based on pre-trained GANs can benefit a wide range of computer vision tasks. However, much less attention has been devoted to 3D vision tasks. In light of this, we propose a novel image-conditioned neural implicit field, which can leverage 2D supervision from GAN-generated multi-view images and perform single-view reconstruction of generic objects. First, a novel offline generator is presented to produce plausible pseudo images with full control over the viewpoint. Then, we propose to utilize a neural implicit function, together with a differentiable renderer, to learn 3D geometry from the pseudo images with object masks and rough pose initializations. To further detect unreliable supervision, we introduce a novel uncertainty module to predict uncertainty maps, which remedies the negative effect of uncertain regions in the pseudo images and leads to better reconstruction performance. The effectiveness of our approach is demonstrated through superior single-view 3D reconstruction results on generic objects.
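One common way to realize such an uncertainty module is heteroscedastic (aleatoric) uncertainty weighting, where a predicted per-pixel log-variance down-weights unreliable regions of the pseudo image; the paper's exact formulation may differ, so the snippet below is only an assumed sketch.

```python
# A hedged sketch of uncertainty-weighted reconstruction: a predicted per-pixel
# uncertainty map down-weights unreliable regions of the pseudo image. This uses
# the common heteroscedastic (aleatoric) formulation; the paper's exact loss may differ.
import torch

def uncertainty_weighted_l1(pred, pseudo, log_sigma):
    """
    pred, pseudo: (B, 3, H, W) rendered and pseudo (GAN-generated) images.
    log_sigma:    (B, 1, H, W) predicted log-uncertainty map.
    Pixels with high predicted uncertainty contribute less to the data term,
    while the log-sigma term keeps the network from declaring everything uncertain.
    """
    l1 = (pred - pseudo).abs().mean(dim=1, keepdim=True)          # per-pixel error
    return (l1 * torch.exp(-log_sigma) + log_sigma).mean()

# Toy usage
pred = torch.rand(2, 3, 64, 64, requires_grad=True)
pseudo = torch.rand(2, 3, 64, 64)
log_sigma = torch.zeros(2, 1, 64, 64, requires_grad=True)
loss = uncertainty_weighted_l1(pred, pseudo, log_sigma)
loss.backward()
print(float(loss))
```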
Deep implicit surfaces excel at modeling generic shapes but do not always capture the regularities present in manufactured objects, which simple geometric primitives are particularly good at. In this paper, we propose a representation combining latent and explicit parameters that can be decoded into a set of deep implicit and geometric shapes that are consistent with each other. As a result, we can effectively model both the complex shapes and the highly regular shapes that co-exist in manufactured objects. This gives us a method to manipulate 3D shapes in an efficient and precise manner.
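A minimal sketch of such a hybrid representation follows, under the assumption that the explicit part is a set of simple primitives (spheres here) and the implicit part is a latent-conditioned SDF network, combined by a point-wise union; the paper's actual parameterization and decoding are more elaborate.

```python
# A minimal sketch (assumed formulation): explicit primitive parameters (here,
# spheres) and a latent-conditioned deep implicit field are decoded into SDFs
# and combined by a union (point-wise minimum), so regular parts can be edited
# via the explicit parameters while free-form parts live in the implicit field.
import torch
import torch.nn as nn

class HybridSDF(nn.Module):
    def __init__(self, latent_dim=128, num_spheres=4):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_spheres, 3) * 0.3)  # explicit
        self.radii = nn.Parameter(torch.full((num_spheres,), 0.2))       # explicit
        self.implicit = nn.Sequential(                                    # latent-conditioned
            nn.Linear(latent_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, latent, points):
        # points: (N, 3), latent: (latent_dim,)
        d_spheres = torch.cdist(points, self.centers) - self.radii       # (N, K)
        z = latent.unsqueeze(0).expand(points.shape[0], -1)
        d_implicit = self.implicit(torch.cat([z, points], dim=-1))       # (N, 1)
        return torch.minimum(d_spheres.min(dim=1, keepdim=True).values, d_implicit)

model = HybridSDF()
sdf = model(torch.randn(128), torch.rand(1024, 3) * 2 - 1)
print(sdf.shape)  # torch.Size([1024, 1])
```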
Reconstructing high-quality 3D objects from sparse, partial observations in a single view is of crucial importance for various applications in computer vision, robotics, and graphics. While recent neural implicit modeling methods show promising results on synthetic or dense data, they perform poorly on sparse and noisy real-world data. We discover that the limitations of a popular neural implicit model are due to the lack of robust shape priors and the lack of proper regularization. In this work, we demonstrate high-quality in-the-wild shape reconstruction using: (i) a deep encoder as a robust initializer of the shape latent code; (ii) a regularized test-time optimization of the latent code; (iii) a deep discriminator as a learned high-dimensional shape prior; and (iv) a novel curriculum learning strategy that allows the model to learn shape priors on synthetic data and smoothly transfer them to sparse real-world data. Our approach better captures the global structure, performs well on occluded and sparse observations, and registers well with the ground-truth shape. We demonstrate superior performance over state-of-the-art 3D object reconstruction methods on two real-world datasets.
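Steps (i) and (ii) can be illustrated with a small, assumed PyTorch sketch: a latent code (here a stand-in for the encoder's prediction) is refined at test time against the sparse observed surface points under a norm regularizer. The networks and weights below are placeholders rather than the paper's trained models.

```python
# A hedged sketch of steps (i) and (ii): an encoder initializes the shape latent
# from the observation, then the latent is refined by test-time optimization with
# a data term on the observed surface points and a norm regularizer.
import torch
import torch.nn as nn

latent_dim = 256
decoder = nn.Sequential(                       # latent-conditioned SDF decoder f(z, x)
    nn.Linear(latent_dim + 3, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

observed_points = torch.rand(512, 3) * 2 - 1   # sparse, noisy surface samples
z = torch.randn(latent_dim)                    # stand-in for encoder(observation)
z = z.clone().requires_grad_(True)
optimizer = torch.optim.Adam([z], lr=1e-2)

for step in range(200):
    optimizer.zero_grad()
    inp = torch.cat([z.unsqueeze(0).expand(observed_points.shape[0], -1),
                     observed_points], dim=-1)
    sdf_at_surface = decoder(inp)
    data_term = sdf_at_surface.abs().mean()    # observed points should lie on the zero level set
    reg_term = 1e-3 * z.pow(2).sum()           # keeps z close to the learned prior
    loss = data_term + reg_term
    loss.backward()
    optimizer.step()
print(float(loss))
```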
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing. While studies over extending 2D StyleGAN to 3D faces have emerged, a corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing. In this paper, we study the challenging problem of 3D GAN inversion where a latent code is predicted given a single face image to faithfully recover its 3D shapes and detailed textures. The problem is ill-posed: innumerable compositions of shape and texture could be rendered to the current image. Furthermore, with the limited capacity of a global latent code, 2D inversion methods cannot preserve faithful shape and texture at the same time when applied to 3D models. To solve this problem, we devise an effective self-training scheme to constrain the learning of inversion. The learning is done efficiently without any real-world 2D-3D training pairs but proxy samples generated from a 3D GAN. In addition, apart from a global latent code that captures the coarse shape and texture information, we augment the generation network with a local branch, where pixel-aligned features are added to faithfully reconstruct face details. We further consider a new pipeline to perform 3D view-consistent editing. Extensive experiments show that our method outperforms state-of-the-art inversion methods in both shape and texture reconstruction quality. Code and data will be released.
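A hedged sketch of the proxy-sample self-training scheme follows: latents are drawn from the 3D GAN prior, rendered into images (a stand-in renderer is used below), and an inversion encoder is trained to regress the latent back, so no real 2D-3D pairs are required. All architectures and sizes are assumptions.

```python
# A hedged sketch of the self-training idea: proxy (image, latent) pairs are
# sampled from a pre-trained 3D GAN and used to supervise an inversion encoder,
# so no real-world 2D-3D pairs are needed. The generator here is a stand-in.
import torch
import torch.nn as nn

LATENT = 512

class InversionEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, LATENT),
        )

    def forward(self, img):
        return self.net(img)

def fake_3d_gan_render(w):
    """Stand-in for rendering a 3D GAN at a random camera pose given latent w."""
    return torch.tanh(w.view(-1, 8, 8, 8)[:, :3].repeat_interleave(8, -1).repeat_interleave(8, -2))

encoder = InversionEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

for step in range(10):                     # proxy-sample self-training loop
    w = torch.randn(4, LATENT)             # sample latents from the GAN prior
    images = fake_3d_gan_render(w)         # render proxy images (no real data)
    loss = (encoder(images) - w).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```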
In this work, we present a novel framework built to simplify 3D asset generation for amateur users. To enable interactive generation, our method supports a variety of input modalities that can be easily provided by a human, including images, text, partially observed shapes and combinations of these, further allowing to adjust the strength of each input. At the core of our approach is an encoder-decoder, compressing 3D shapes into a compact latent representation, upon which a diffusion model is learned. To enable a variety of multi-modal inputs, we employ task-specific encoders with dropout followed by a cross-attention mechanism. Due to its flexibility, our model naturally supports a variety of tasks, outperforming prior works on shape completion, image-based 3D reconstruction, and text-to-3D. Most interestingly, our model can combine all these tasks into one swiss-army-knife tool, enabling the user to perform shape generation using incomplete shapes, images, and textual descriptions at the same time, providing the relative weights for each input and facilitating interactivity. Despite our approach being shape-only, we further show an efficient method to texture the generated shape using large-scale text-to-image models.
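The conditioning scheme can be sketched as follows (assumed shapes and encoders, not the paper's exact design): each modality contributes a set of tokens, whole modalities are randomly dropped during training, and the shape-latent denoiser attends to the remaining tokens through cross-attention.

```python
# A hedged sketch of the conditioning scheme: task-specific encoders produce
# tokens per modality, modalities are randomly dropped during training, and the
# denoiser attends to the remaining tokens via cross-attention. Shapes and
# encoder choices are illustrative assumptions.
import torch
import torch.nn as nn

D = 256

class CrossAttentionCondition(nn.Module):
    def __init__(self):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=D, num_heads=4, batch_first=True)

    def forward(self, latent_tokens, cond_tokens):
        out, _ = self.attn(query=latent_tokens, key=cond_tokens, value=cond_tokens)
        return latent_tokens + out                      # residual conditioning

def drop_modalities(token_dict, p_drop=0.3):
    """Randomly drop whole modalities and concatenate the tokens of the rest."""
    kept = [t for t in token_dict.values() if torch.rand(()) > p_drop]
    if not kept:                                        # always keep at least one modality
        kept = [next(iter(token_dict.values()))]
    return torch.cat(kept, dim=1)

# Toy conditioning tokens from three assumed task-specific encoders.
batch = 2
tokens = {
    "image": torch.randn(batch, 16, D),
    "text": torch.randn(batch, 8, D),
    "partial_shape": torch.randn(batch, 32, D),
}
latent_tokens = torch.randn(batch, 64, D)               # noisy shape latent at a diffusion step
conditioner = CrossAttentionCondition()
conditioned = conditioner(latent_tokens, drop_modalities(tokens))
print(conditioned.shape)  # torch.Size([2, 64, 256])
```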
Thanks to its semantic understanding and user-friendly controllability, face image manipulation via 3D guidance has been widely applied in various interactive scenarios. However, existing 3D-morphable-model-based manipulation methods are not directly applicable to out-of-domain faces, such as non-photorealistic paintings, cartoon portraits, or even animals, mainly due to the formidable difficulty of building the model for each specific face domain. To overcome this challenge, we propose, to the best of our knowledge, the first method to manipulate faces in arbitrary domains using the human 3DMM. This is achieved through two major steps: 1) a disentangled mapping from 3DMM parameters to the latent space embedding of a pre-trained StyleGAN2, which ensures the disentanglement and precise control of each semantic attribute; and 2) cross-domain adaptation, which bridges the domain discrepancies and makes the human 3DMM applicable to out-of-domain faces by enforcing a consistent latent space embedding. Experiments and comparisons demonstrate the superiority of our high-quality semantic manipulation method across a variety of face domains, with all major 3D facial attributes controllable: pose, expression, shape, albedo, and illumination. Moreover, we develop an intuitive editing interface to support user-friendly control and instant feedback. Our project page is https://cassiepython.github.io/cddfm3d/index.html
In recent years, substantial progress has been achieved in learning-based reconstruction of 3D objects. At the same time, generative models were proposed that can generate highly realistic images. However, despite this success in these closely related tasks, texture reconstruction of 3D objects has received little attention from the research community and state-of-the-art methods are either limited to comparably low resolution or constrained experimental setups. A major reason for these limitations is that common representations of texture are inefficient or hard to interface for modern deep learning techniques. In this paper, we propose Texture Fields, a novel texture representation which is based on regressing a continuous 3D function parameterized with a neural network. Our approach circumvents limiting factors like shape discretization and parameterization, as the proposed texture representation is independent of the shape representation of the 3D object. We show that Texture Fields are able to represent high frequency texture and naturally blend with modern deep learning techniques. Experimentally, we find that Texture Fields compare favorably to state-of-the-art methods for conditional texture reconstruction of 3D objects and enable learning of probabilistic generative models for texturing unseen 3D models. We believe that Texture Fields will become an important building block for the next generation of generative 3D models.
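A minimal sketch of a texture field under assumed embedding sizes: a small MLP regresses a continuous mapping from 3D surface points, conditioned on shape and image embeddings, to RGB values, independently of how the geometry is represented.

```python
# A minimal sketch of the texture-field idea: a neural network regresses a
# continuous function from 3D surface points (plus shape/image embeddings) to
# RGB, independent of the shape representation. Dimensions are assumptions.
import torch
import torch.nn as nn

class TextureField(nn.Module):
    def __init__(self, shape_dim=256, image_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + shape_dim + image_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 3), nn.Sigmoid(),    # RGB in [0, 1]
        )

    def forward(self, points, shape_emb, image_emb):
        # points: (N, 3); embeddings are broadcast to every surface point.
        n = points.shape[0]
        cond = torch.cat([shape_emb, image_emb]).unsqueeze(0).expand(n, -1)
        return self.net(torch.cat([points, cond], dim=-1))

field = TextureField()
surface_points = torch.rand(4096, 3) * 2 - 1
colors = field(surface_points, torch.randn(256), torch.randn(256))
print(colors.shape)  # torch.Size([4096, 3])
```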
Figure 1: DeepSDF represents signed distance functions (SDFs) of shapes via latent code-conditioned feed-forward decoder networks. Above images are raycast renderings of DeepSDF interpolating between two shapes in the learned shape latent space. Best viewed digitally.
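The caption's mechanism can be illustrated with a small assumed sketch (layer sizes are not the published DeepSDF architecture): a feed-forward decoder conditioned on a latent code predicts signed distances, and interpolating two latent codes interpolates between the corresponding shapes.

```python
# A minimal sketch of the caption's idea: a latent-code-conditioned feed-forward
# decoder predicts signed distance values, and interpolating two latent codes
# interpolates between two shapes. Layer sizes are assumptions.
import torch
import torch.nn as nn

class LatentSDFDecoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 1), nn.Tanh(),      # signed distance, squashed to [-1, 1]
        )

    def forward(self, latent, points):
        z = latent.unsqueeze(0).expand(points.shape[0], -1)
        return self.net(torch.cat([z, points], dim=-1))

decoder = LatentSDFDecoder()
z_a, z_b = torch.randn(256), torch.randn(256)
points = torch.rand(2048, 3) * 2 - 1

# Shape interpolation: decode SDFs along the line between two latent codes.
for t in (0.0, 0.5, 1.0):
    sdf = decoder((1 - t) * z_a + t * z_b, points)
    print(t, sdf.shape)  # (2048, 1) signed distances per query point
```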
How would you repair a physical object that has some missing parts? You might imagine its original shape from a previously captured image, first recover its overall (global) yet coarse shape, and then refine its local details. We are motivated to imitate this physical repair procedure to address the point cloud completion task. To this end, we propose a cross-modal shape-transfer dual-refinement network (termed CSDN), a coarse-to-fine paradigm with images participating in the full completion cycle, for quality point cloud completion. CSDN mainly consists of a "shape fusion" module and a "dual-refinement" module to tackle the cross-modal challenge. The first module transfers the intrinsic shape characteristics from single images to guide the geometry generation of the missing regions of point clouds, in which we propose IPAdaIN to embed the global features of both the image and the partial point cloud into the completion. The second module refines the coarse output by adjusting the positions of the generated points, where the local refinement unit exploits the geometric relation between the generated and input points via graph convolution, and the global constraint unit utilizes the input image to fine-tune the generated offsets. Unlike most existing approaches, CSDN not only explores the complementary information from images but also effectively exploits cross-modal data throughout the entire coarse-to-fine completion procedure. Experimental results indicate that CSDN performs favorably against competitors on ten cross-modal benchmarks.
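The cross-modal fusion in the shape-fusion module can be illustrated with a generic AdaIN-style sketch, where the image's global feature predicts the scale and shift applied to instance-normalized point-cloud features; the paper's IPAdaIN may differ in its details, so treat this as an assumed illustration of the general mechanism only.

```python
# A hedged sketch of AdaIN-style cross-modal fusion: point-cloud features are
# instance-normalized across points, then re-scaled and shifted with parameters
# predicted from the image's global feature.
import torch
import torch.nn as nn

class AdaINFusion(nn.Module):
    def __init__(self, point_dim=128, image_dim=256):
        super().__init__()
        self.to_scale_shift = nn.Linear(image_dim, 2 * point_dim)

    def forward(self, point_feats, image_feat, eps=1e-5):
        # point_feats: (B, N, C) partial point-cloud features, image_feat: (B, image_dim)
        mean = point_feats.mean(dim=1, keepdim=True)
        std = point_feats.std(dim=1, keepdim=True) + eps
        normalized = (point_feats - mean) / std
        scale, shift = self.to_scale_shift(image_feat).chunk(2, dim=-1)
        return normalized * scale.unsqueeze(1) + shift.unsqueeze(1)

fusion = AdaINFusion()
fused = fusion(torch.randn(2, 1024, 128), torch.randn(2, 256))
print(fused.shape)  # torch.Size([2, 1024, 128])
```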
Generative neural radiance field (GNeRF) models, which extract implicit 3D representations from 2D images, have recently been shown to produce realistic images of rigid objects such as human faces or cars. However, they usually struggle to generate high-quality images of non-rigid objects such as the human body, which is of great interest for many computer graphics applications. This paper proposes a 3D-aware semantic-guided generative model (3D-SGAN) for human image synthesis, which integrates a GNeRF with a texture generator. The former learns an implicit 3D representation of the human body and outputs a set of 2D semantic segmentation masks. The latter translates these semantic masks into a realistic image, adding a realistic texture to the human appearance. Without requiring additional 3D information, our model can learn 3D human representations with photo-realistic, controllable generation. Our experiments on the DeepFashion dataset show that 3D-SGAN significantly outperforms the most recent baselines.
Recent CLIP-guided 3D optimization methods, e.g., DreamFields and PureCLIPNeRF, achieve great success in zero-shot text-guided 3D synthesis. However, due to the scratch training and random initialization without any prior knowledge, these methods usually fail to generate accurate and faithful 3D structures that conform to the corresponding text. In this paper, we make the first attempt to introduce an explicit 3D shape prior into CLIP-guided 3D optimization methods. Specifically, we first generate a high-quality 3D shape from input texts in the text-to-shape stage as the 3D shape prior. We then utilize it as the initialization of a neural radiance field and then optimize it with the full prompt. For the text-to-shape generation, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between images synthesized by the text-to-image model and shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, namely, Dream3D, is capable of generating imaginative 3D content with better visual quality and shape accuracy than state-of-the-art methods.
Recent 3D-aware GANs rely on volumetric rendering techniques to disentangle the pose and appearance of objects, de facto generating entire 3D volumes rather than single-view 2D images from a latent code. Complex image editing tasks can be performed in standard 2D-based GANs (e.g., StyleGAN models) as manipulation of latent dimensions. However, to the best of our knowledge, similar properties have only been partially explored for 3D-aware GAN models. This work aims to fill this gap by showing the limitations of existing methods and proposing LatentSwap3D, a model-agnostic approach designed to enable attribute editing in the latent space of pre-trained 3D-aware GANs. We first identify the most relevant dimensions in the latent space of the model controlling the targeted attribute by relying on the feature importance ranking of a random forest classifier. Then, to apply the transformation, we swap the top-K most relevant latent dimensions of the image being edited with an image exhibiting the desired attribute. Despite its simplicity, LatentSwap3D provides remarkable semantic edits in a disentangled manner and outperforms alternative approaches both qualitatively and quantitatively. We demonstrate our semantic edit approach on various 3D-aware generative models such as pi-GAN, GIRAFFE, StyleSDF, MVCGAN, EG3D and VolumeGAN, and on diverse datasets, such as FFHQ, AFHQ, Cats, MetFaces, and CompCars. The project page can be found: \url{https://enisimsar.github.io/latentswap3d/}.
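The procedure lends itself to a short sketch with scikit-learn (random data and dimensions below are placeholders): train a random forest to predict the attribute from latent codes, rank dimensions by feature importance, and swap the top-K dimensions of the edited sample with those of a reference that exhibits the attribute.

```python
# A hedged sketch of the LatentSwap3D procedure: rank latent dimensions by the
# feature importances of a random forest trained to predict the target attribute
# from latent codes, then swap the top-K dimensions of the edited sample with
# those of a reference sample that has the desired attribute. Data here is random.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
latents = rng.normal(size=(2000, 512))                 # latent codes sampled from the GAN
attribute = (latents[:, 42] > 0).astype(int)           # toy binary attribute labels

# 1) Rank latent dimensions by their relevance to the attribute.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(latents, attribute)
ranked_dims = np.argsort(forest.feature_importances_)[::-1]

# 2) Swap the top-K most relevant dimensions with those of a reference sample
#    that exhibits the desired attribute.
def latent_swap(source, reference, ranked_dims, k=20):
    edited = source.copy()
    edited[ranked_dims[:k]] = reference[ranked_dims[:k]]
    return edited

source = rng.normal(size=512)                          # latent of the image to edit
reference = latents[attribute == 1][0]                 # a sample with the attribute
edited = latent_swap(source, reference, ranked_dims)
print(np.count_nonzero(edited != source))              # at most K dimensions changed
```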
The neural radiance field (NeRF) has shown promising results in preserving the fine details of objects and scenes. However, unlike mesh-based representations, it remains an open problem to build dense correspondences across different NeRFs of the same category, which is essential in many downstream tasks. The main difficulties of this problem lie in the implicit nature of NeRF and the lack of ground-truth correspondence annotations. In this paper, we show it is possible to bypass these challenges by leveraging the rich semantics and structural priors encapsulated in a pre-trained NeRF-based GAN. Specifically, we exploit such priors from three aspects, namely 1) a dual deformation field that takes latent codes as global structural indicators, 2) a learning objective that regards generator features as geometric-aware local descriptors, and 3) a source of infinite object-specific NeRF samples. Our experiments demonstrate that such priors lead to 3D dense correspondence that is accurate, smooth, and robust. We also show that established dense correspondence across NeRFs can effectively enable many NeRF-based downstream applications such as texture transfer.
Representing scenes at the granularity of objects is a prerequisite for scene understanding and decision making. We propose PriSMoNet, a novel approach based on prior shape knowledge for learning multi-object 3D scene decomposition and representations from single images. Our approach learns to decompose images of synthetic scenes with multiple objects on a planar surface into their constituent scene objects and to infer their 3D properties from a single view. A recurrent encoder regresses a latent representation of 3D shape, pose, and texture of each object from an input RGB image. Through differentiable rendering, we train our model to decompose scenes from RGB-D images in a self-supervised way. The 3D shapes are represented continuously in function space as signed distance functions, which we pre-train from example shapes in a supervised way. These shape priors provide weak supervision signals to better condition the challenging overall learning task. We evaluate the accuracy of our model in inferring the 3D scene layout, demonstrate its generative capabilities, assess its generalization to real images, and point out the benefits of the learned representations.