We propose CLIP-Actor, a text-driven motion recommendation and neural mesh stylization system for human mesh animation. CLIP-Actor animates a 3D human mesh to conform to a text prompt by recommending a motion sequence and learning mesh style attributes. Prior work fails to produce plausible results when the artist-designed mesh content does not conform to the text from the start. Instead, we build a text-driven human motion recommendation system by leveraging large-scale human motion datasets with language labels. Given a natural language prompt, CLIP-Actor first suggests a human motion that conforms to the prompt in a coarse-to-fine manner. We then propose a synthesize-through-optimization method that details and texturizes the recommended mesh sequence in a manner disentangled from the pose of each frame. This allows the style attributes to conform to the prompt in a temporally consistent and pose-agnostic way. The decoupled neural optimization also enables spatio-temporal view augmentation of the human motion. We further propose mask-weighted embedding attention, which stabilizes the optimization process by rejecting distracting renders that contain scarce foreground pixels. We demonstrate that CLIP-Actor produces plausible and human-recognizable stylized 3D human meshes with detailed geometry and texture from natural language prompts.
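The mask-weighted pooling idea can be illustrated on its own: per-view CLIP embeddings of the rendered mesh are weighted by their foreground-pixel ratio, and nearly empty renders are rejected before the pooled embedding is compared against the text prompt. The sketch below is a minimal approximation, not the authors' code; the threshold, temperature, and pooling form are assumptions.

```python
import torch
import torch.nn.functional as F

def mask_weighted_embedding(view_embeds: torch.Tensor,
                            fg_masks: torch.Tensor,
                            min_fg_ratio: float = 0.05,
                            temperature: float = 0.1) -> torch.Tensor:
    """Pool per-view CLIP embeddings, down-weighting renders with few foreground pixels.

    view_embeds: (V, D) image embeddings of V rendered views.
    fg_masks:    (V, H, W) binary foreground masks of the same renders.
    """
    fg_ratio = fg_masks.float().mean(dim=(1, 2))          # (V,) fraction of foreground pixels
    keep = (fg_ratio >= min_fg_ratio).float()             # reject nearly-empty renders
    weights = torch.softmax(fg_ratio / temperature, dim=0) * keep
    weights = weights / weights.sum().clamp_min(1e-8)     # renormalize over kept views
    pooled = (weights[:, None] * view_embeds).sum(dim=0)  # (D,)
    return F.normalize(pooled, dim=0)

# The pooled embedding would then drive the style optimization, e.g.
# loss = 1 - torch.dot(mask_weighted_embedding(img_embs, masks), F.normalize(text_emb, dim=0))
```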
In this work, we develop intuitive controls for editing the style of 3D objects. Our framework, Text2Mesh, stylizes a 3D mesh by predicting color and local geometric details that conform to a target text prompt. We consider a disentangled representation of a 3D object using a fixed mesh input (content) coupled with a learned neural network, which we term a neural style field network. To modify the style, we obtain a similarity score between the text prompt (describing the style) and the stylized mesh by harnessing the representational power of CLIP. Text2Mesh requires neither a pre-trained generative model nor a specialized 3D mesh dataset. It can handle low-quality meshes (non-manifold, with boundaries, etc.) of arbitrary genus and does not require UV parameterization. We demonstrate the ability of our technique to synthesize a myriad of styles over a wide variety of 3D meshes.
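A neural style field of this kind can be sketched as a small MLP over vertex coordinates that outputs a per-vertex color offset and a displacement along the vertex normal; renders of the stylized mesh are then scored against the text prompt in CLIP space. This is an illustrative approximation, not the paper's exact architecture: layer sizes, output scaling, and the `render_fn` / `clip_image_fn` callables are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralStyleField(nn.Module):
    """Maps a vertex position to an RGB offset and a displacement along its normal."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.color_head = nn.Linear(hidden, 3)   # RGB offset in [-0.5, 0.5]
        self.disp_head = nn.Linear(hidden, 1)    # scalar displacement magnitude

    def forward(self, verts: torch.Tensor):
        h = self.backbone(verts)
        color = 0.5 * torch.tanh(self.color_head(h))
        disp = 0.1 * torch.tanh(self.disp_head(h))
        return color, disp

def stylize_step(field, verts, normals, base_color, render_fn, clip_image_fn, text_emb):
    """One optimization step: displace and recolor vertices, render, compare to the text in CLIP space."""
    color, disp = field(verts)
    new_verts = verts + disp * normals
    images = render_fn(new_verts, base_color + color)     # differentiable renderer (assumed callable)
    img_emb = F.normalize(clip_image_fn(images), dim=-1)  # CLIP image embeddings of the renders
    sim = (img_emb @ F.normalize(text_emb, dim=-1)).mean()
    return 1.0 - sim                                      # minimize 1 - cosine similarity
```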
Existing neural rendering methods for creating human avatars typically either require dense input signals such as video or multi-view images, or leverage a learned prior from large-scale specific 3D human datasets such that reconstruction can be performed with sparse-view inputs. Most of these methods fail to achieve realistic reconstruction when only a single image is available. To enable the data-efficient creation of realistic animatable 3D humans, we propose ELICIT, a novel method for learning human-specific neural radiance fields from a single image. Inspired by the fact that humans can easily reconstruct the body geometry and infer the full-body clothing from a single image, we leverage two priors in ELICIT: 3D geometry prior and visual semantic prior. Specifically, ELICIT introduces the 3D body shape geometry prior from a skinned vertex-based template model (i.e., SMPL) and implements the visual clothing semantic prior with the CLIP-based pre-trained models. Both priors are used to jointly guide the optimization for creating plausible content in the invisible areas. In order to further improve visual details, we propose a segmentation-based sampling strategy that locally refines different parts of the avatar. Comprehensive evaluations on multiple popular benchmarks, including ZJU-MoCAP, Human3.6M, and DeepFashion, show that ELICIT has outperformed current state-of-the-art avatar creation methods when only a single image is available. Code will be made public for research purposes at https://elicit3d.github.io.
When creating 3D content, highly specialized skills are usually needed to design and generate models of objects and other assets. We address this problem by retrieving high-quality 3D assets from multi-modal inputs, including 2D sketches, images, and text. We use CLIP because it provides a bridge to higher-level latent features. We use these features to perform multi-modal fusion, addressing the lack of artistic control that affects common data-driven approaches. Our approach enables multi-modal, conditional, feature-driven retrieval over a 3D asset database by combining the latent embeddings of the inputs. We explore the effects of different combinations of feature embeddings across different input types and weighting methods.
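Conceptually, the retrieval reduces to embedding each available modality with CLIP, taking a weighted combination of the normalized embeddings, and ranking database assets by cosine similarity against the fused query. Below is a hedged sketch using the openai `clip` package; the weight values and the precomputed asset-embedding matrix are assumptions for illustration.

```python
import torch
import torch.nn.functional as F
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def fused_query(text=None, image_path=None, sketch_path=None, weights=(0.5, 0.3, 0.2)):
    """Weighted combination of CLIP embeddings from whichever modalities are provided."""
    embs, ws = [], []
    if text is not None:
        embs.append(model.encode_text(clip.tokenize([text]).to(device))[0]); ws.append(weights[0])
    for path, w in ((image_path, weights[1]), (sketch_path, weights[2])):
        if path is not None:
            img = preprocess(Image.open(path)).unsqueeze(0).to(device)
            embs.append(model.encode_image(img)[0]); ws.append(w)
    embs = torch.stack([F.normalize(e.float(), dim=-1) for e in embs])
    w = torch.tensor(ws, device=device).unsqueeze(1)
    return F.normalize((w * embs).sum(0), dim=-1)

@torch.no_grad()
def retrieve(query_emb, asset_embs, k=5):
    """asset_embs: (N, D) precomputed, normalized CLIP embeddings of rendered 3D assets."""
    scores = asset_embs @ query_emb
    return scores.topk(k).indices
```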
We combine neural rendering with multi-modal image and text representations to synthesize diverse 3D objects from natural language descriptions alone. Our method, Dream Fields, can generate the geometry and color of a wide range of objects without 3D supervision. Due to the scarcity of diverse, captioned 3D data, prior methods only generate objects from a handful of categories, such as ShapeNet. Instead, we guide generation with image-text models pre-trained on large datasets of captioned images from the web. Our method optimizes a neural radiance field from many camera views so that the rendered images score highly against the target caption according to a pre-trained CLIP model. To improve fidelity and visual quality, we introduce simple geometric priors, including sparsity-inducing transmittance regularization, scene bounds, and new MLP architectures. In experiments, Dream Fields produces realistic, multi-view consistent object geometry and color from a variety of natural language captions.
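The two core objectives can be sketched compactly: a CLIP loss that pushes rendered views toward the caption, plus a transmittance regularizer that keeps the density field sparse so objects stay compact. The averaging target and the exact loss form below are simplified assumptions, not the paper's precise formulation.

```python
import torch
import torch.nn.functional as F

def clip_caption_loss(image_embs: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    """image_embs: (V, D) CLIP embeddings of NeRF renders from V random cameras."""
    image_embs = F.normalize(image_embs, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    return -(image_embs @ text_emb).mean()          # maximize cosine similarity to the caption

def transmittance_regularizer(transmittance: torch.Tensor, target: float = 0.88) -> torch.Tensor:
    """transmittance: (V, H, W) per-pixel background transmittance of the rendered views.

    Penalize views whose average transmittance drops below a target, which discourages the
    density field from filling the whole volume with semi-transparent matter.
    """
    mean_T = transmittance.mean(dim=(1, 2))
    return F.relu(target - mean_T).mean()

# total_loss = clip_caption_loss(img_embs, text_emb) + lam * transmittance_regularizer(T)
```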
We propose ClipFace, a novel self-supervised approach for text-guided editing of textured 3D morphable model of faces. Specifically, we employ user-friendly language prompts to enable control of the expressions as well as appearance of 3D faces. We leverage the geometric expressiveness of 3D morphable models, which inherently possess limited controllability and texture expressivity, and develop a self-supervised generative model to jointly synthesize expressive, textured, and articulated faces in 3D. We enable high-quality texture generation for 3D faces by adversarial self-supervised training, guided by differentiable rendering against collections of real RGB images. Controllable editing and manipulation are given by language prompts to adapt texture and expression of the 3D morphable model. To this end, we propose a neural network that predicts both texture and expression latent codes of the morphable model. Our model is trained in a self-supervised fashion by exploiting differentiable rendering and losses based on a pre-trained CLIP model. Once trained, our model jointly predicts face textures in UV-space, along with expression parameters to capture both geometry and texture changes in facial expressions in a single forward pass. We further show the applicability of our method to generate temporally changing textures for a given animation sequence.
We present a technique for zero-shot generation of a 3D model using only a target text prompt. Without any 3D supervision, our method deforms the control shape of a limit subdivision surface along with its texture map and normal map to obtain a 3D asset that corresponds to the input text prompt and can be easily deployed into games or modeling applications. We rely only on a pre-trained CLIP model that compares the input text prompt with differentiably rendered images of our 3D model. Whereas prior work has focused on stylization or required training of generative models, we optimize the mesh parameters directly to generate shape, texture, or both. To constrain the optimization to produce plausible meshes and textures, we introduce a number of techniques using image augmentations and a pre-trained prior that generates CLIP image embeddings given a text embedding.
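The optimization constraint described here, scoring each rendered view under random image augmentations against a CLIP image embedding produced from the text embedding by a pretrained prior, might look like the sketch below. The specific augmentations and the interface of the prior are assumptions for illustration.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

# Random crops and perspective jitter make the CLIP score harder to satisfy with flat "billboard" textures.
augment = T.Compose([
    T.RandomResizedCrop(224, scale=(0.7, 1.0)),
    T.RandomPerspective(distortion_scale=0.3, p=0.5),
])

def augmented_clip_loss(renders, target_emb, clip_image_fn, n_aug: int = 4):
    """renders: (V, 3, H, W) differentiable renders of the current mesh.
    target_emb: CLIP *image* embedding predicted from the text embedding by a pretrained prior,
                used here as the optimization target instead of the raw text embedding.
    """
    target = F.normalize(target_emb, dim=-1)
    sims = []
    for _ in range(n_aug):
        views = augment(renders)                       # fresh random augmentation each pass
        emb = F.normalize(clip_image_fn(views), dim=-1)
        sims.append((emb @ target).mean())
    return 1.0 - torch.stack(sims).mean()
```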
We present 3D Highlighter, a technique for localizing semantic regions on a mesh using text as input. A key feature of our system is the ability to interpret "out-of-domain" localizations. Our system demonstrates the ability to reason about where to place non-obviously related concepts on an input 3D shape, such as adding clothing to a bare 3D animal model. Our method contextualizes the text description using a neural field and colors the corresponding region of the shape using a probability-weighted blend. Our neural optimization is guided by a pre-trained CLIP encoder, which bypasses the need for any 3D datasets or 3D annotations. Thus, 3D Highlighter is highly flexible, general, and capable of producing localizations on a myriad of input shapes. Our code is publicly available at https://github.com/threedle/3DHighlighter.
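The probability-weighted blend is simple enough to state directly: a neural field predicts a highlight probability per vertex, and the vertex color is a convex combination of a highlight color and a neutral gray, which keeps the coloring differentiable so CLIP scoring of the rendered mesh can localize the region. A minimal sketch, where the colors and network size are assumptions:

```python
import torch
import torch.nn as nn

class HighlightField(nn.Module):
    """Predicts, for each vertex position, the probability of belonging to the highlighted region."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, verts: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(verts))            # (V, 1) probabilities

def blend_colors(prob: torch.Tensor,
                 highlight=(0.8, 0.2, 0.2),
                 base=(0.7, 0.7, 0.7)) -> torch.Tensor:
    """Probability-weighted blend: p * highlight + (1 - p) * gray, per vertex."""
    hi = torch.tensor(highlight, dtype=prob.dtype, device=prob.device)
    lo = torch.tensor(base, dtype=prob.dtype, device=prob.device)
    return prob * hi + (1.0 - prob) * lo                  # (V, 3) vertex colors
```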
Recent CLIP-guided 3D optimization methods, e.g., DreamFields and PureCLIPNeRF, achieve great success in zero-shot text-guided 3D synthesis. However, due to training from scratch with random initialization and no prior knowledge, these methods usually fail to generate accurate and faithful 3D structures that conform to the corresponding text. In this paper, we make the first attempt to introduce an explicit 3D shape prior into CLIP-guided 3D optimization methods. Specifically, we first generate a high-quality 3D shape from the input text in a text-to-shape stage as the 3D shape prior. We then use it to initialize a neural radiance field and optimize it with the full prompt. For text-to-shape generation, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between images synthesized by the text-to-image model and shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, namely Dream3D, is capable of generating imaginative 3D content with better visual quality and shape accuracy than state-of-the-art methods.
We tackle the task of stylizing video objects in an intuitive and semantic manner following a user-specified text prompt. This is challenging because the resulting video must satisfy multiple properties: (1) it has to be temporally consistent and avoid jittering or similar artifacts; (2) the resulting stylization must preserve both the global semantics of the object and its fine-grained details; and (3) it must adhere to the user-specified text prompt. To this end, our method stylizes an object in a video according to two target texts. The first target text prompt describes the global semantics and the second describes the local semantics. To modify the style of an object, we harness the representational power of CLIP to obtain a similarity score between (1) the local target text and a set of local stylized views, and (2) the global target text and a set of stylized global views. We use a pre-trained atlas decomposition network to propagate the edits in a temporally consistent manner. We demonstrate that our method can generate consistent style changes over time for a variety of objects and videos that adhere to the specification of the target texts. We also show how varying the specificity of the target texts and augmenting the texts with a set of prefixes results in stylizations with different levels of detail. Full results are given on our project webpage: https://sloeschcke.github.io/text-driven-stylization-of-video-objects/
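The two-text objective can be written as a sum of CLIP similarity terms: one comparing object-centric ("local") crops against the local prompt and one comparing full-frame ("global") views against the global prompt. The weighting scheme below is an assumption for illustration, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def clip_score(image_embs: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    """Mean cosine similarity between a set of view embeddings and one text embedding."""
    image_embs = F.normalize(image_embs, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    return (image_embs @ text_emb).mean()

def stylization_loss(local_view_embs, global_view_embs,
                     local_text_emb, global_text_emb,
                     w_local: float = 1.0, w_global: float = 1.0) -> torch.Tensor:
    """Combine the local (fine detail) and global (overall semantics) CLIP objectives."""
    local_term = 1.0 - clip_score(local_view_embs, local_text_emb)
    global_term = 1.0 - clip_score(global_view_embs, global_text_emb)
    return w_local * local_term + w_global * global_term
```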
As a powerful representation of 3D scenes, the neural radiance field (NeRF) enables high-quality novel view synthesis from multi-view images. Stylizing NeRF, however, remains challenging, especially on simulating a text-guided style with both the appearance and the geometry altered simultaneously. In this paper, we present NeRF-Art, a text-guided NeRF stylization approach that manipulates the style of a pre-trained NeRF model with a simple text prompt. Unlike previous approaches that either lack sufficient geometry deformations and texture details or require meshes to guide the stylization, our method can shift a 3D scene to the target style characterized by desired geometry and appearance variations without any mesh guidance. This is achieved by introducing a novel global-local contrastive learning strategy, combined with the directional constraint to simultaneously control both the trajectory and the strength of the target style. Moreover, we adopt a weight regularization method to effectively suppress cloudy artifacts and geometry noises which arise easily when the density field is transformed during geometry stylization. Through extensive experiments on various styles, we demonstrate that our method is effective and robust regarding both single-view stylization quality and cross-view consistency. The code and more results can be found in our project page: https://cassiepython.github.io/nerfart/.
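The directional constraint mentioned here is commonly implemented by aligning the change from the source render to the stylized render with the change from a source text to the target text in CLIP space. The sketch below shows that generic directional loss; it is an assumption about the form, not necessarily NeRF-Art's exact objective, which additionally uses a global-local contrastive term and weight regularization.

```python
import torch
import torch.nn.functional as F

def directional_clip_loss(src_img_emb: torch.Tensor, tgt_img_emb: torch.Tensor,
                          src_text_emb: torch.Tensor, tgt_text_emb: torch.Tensor) -> torch.Tensor:
    """Align the image-space edit direction with the text-space edit direction in CLIP space.

    src_img_emb: embeddings of renders from the original (pre-trained) NeRF.
    tgt_img_emb: embeddings of renders from the NeRF being stylized (same cameras).
    """
    d_img = F.normalize(tgt_img_emb - src_img_emb, dim=-1)
    d_txt = F.normalize(tgt_text_emb - src_text_emb, dim=-1)
    return (1.0 - (d_img * d_txt).sum(dim=-1)).mean()
```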
We address the new problem of language-guided semantic style transfer of 3D indoor scenes. The input is a 3D indoor scene mesh and several phrases that describe the target scene. First, 3D vertex coordinates are mapped to RGB residuals by a multi-layer perceptron. Second, the colored 3D mesh is differentiably rendered into 2D images via a viewpoint sampling strategy tailored for indoor scenes. Third, the rendered 2D images are compared to the phrases via pre-trained vision-language models. Finally, the error is back-propagated to the multi-layer perceptron to update the vertex colors corresponding to certain semantic categories. We conduct large-scale qualitative analyses and A/B user tests on the public ScanNet and SceneNN datasets. We demonstrate: (1) visually pleasing results that are potentially useful for multimedia applications; (2) the importance of rendering 3D indoor scenes from viewpoints consistent with human priors; (3) that incorporating semantics significantly improves style transfer quality; and (4) that an HSV regularization term leads to results that are more consistent with the input and are generally rated better. Code and the user study toolbox are available at https://github.com/air-discover/lasst
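The first and last steps of this pipeline, an MLP mapping vertex coordinates to RGB residuals applied only to the targeted semantic classes, plus a penalty that keeps the stylized colors close to the input in HSV terms, can be sketched as follows. This is not the released code; the masking scheme and the simplified saturation/value penalty (hue is omitted to sidestep its wrap-around) are assumptions.

```python
import torch
import torch.nn as nn

class VertexColorResidual(nn.Module):
    """MLP that maps a 3D vertex coordinate to an RGB residual added to the original vertex color."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Tanh(),
        )

    def forward(self, verts: torch.Tensor) -> torch.Tensor:
        return 0.5 * self.net(verts)                      # residual in [-0.5, 0.5]

def stylized_colors(model, verts, base_colors, semantic_mask):
    """Only vertices whose semantic label matches the target phrases receive the residual."""
    residual = model(verts) * semantic_mask[:, None].float()
    return (base_colors + residual).clamp(0.0, 1.0)

def hsv_consistency(colors, base_colors):
    """Simplified HSV-style regularizer: keep saturation and value close to the input colors."""
    def sv(c):
        v, _ = c.max(dim=-1)
        s = (v - c.min(dim=-1).values) / v.clamp_min(1e-6)
        return s, v
    s1, v1 = sv(colors)
    s0, v0 = sv(base_colors)
    return ((s1 - s0) ** 2 + (v1 - v0) ** 2).mean()
```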
Text-guided 3D object generation aims to generate 3D objects described by user-defined captions, which paves a flexible way to visualize what we imagined. Although some works have been devoted to solving this challenging task, these works either utilize some explicit 3D representations (e.g., mesh), which lack texture and require post-processing for rendering photo-realistic views; or require individual time-consuming optimization for every single case. Here, we make the first attempt to achieve generic text-guided cross-category 3D object generation via a new 3D-TOGO model, which integrates a text-to-views generation module and a views-to-3D generation module. The text-to-views generation module is designed to generate different views of the target 3D object given an input caption. Prior-guidance, caption-guidance, and view contrastive learning are proposed to achieve better view-consistency and caption similarity. Meanwhile, a pixelNeRF model is adopted for the views-to-3D generation module to obtain the implicit 3D neural representation from the previously-generated views. Our 3D-TOGO model generates 3D objects in the form of the neural radiance field with good texture and requires no time-cost optimization for every single caption. Besides, 3D-TOGO can control the category, color and shape of generated 3D objects with the input caption. Extensive experiments on the largest 3D object dataset (i.e., ABO) are conducted to verify that 3D-TOGO can better generate high-quality 3D objects according to the input captions across 98 different categories, in terms of PSNR, SSIM, LPIPS and CLIP-score, compared with text-NeRF and Dreamfields.
As several industries are moving towards modeling massive 3D virtual worlds, the need for content creation tools that can scale in terms of the quantity, quality, and diversity of 3D content is becoming evident. In our work, we aim to train performant 3D generative models that synthesize textured meshes which can be directly consumed by 3D rendering engines, and are thus immediately usable in downstream applications. Prior work on 3D generative modeling either lacks geometric detail, is limited in the mesh topologies it can produce, typically does not support textures, or uses neural renderers in the synthesis process, which makes use in common 3D software non-trivial. In this work, we introduce GET3D, a generative model that directly generates explicit textured 3D meshes with complex topology, rich geometric detail, and high-fidelity textures. We bridge recent successes in differentiable surface modeling, differentiable rendering, and 2D generative adversarial networks to train our model from collections of 2D images. GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes, and human characters to buildings, achieving significant improvements over previous methods.
As information exists in various modalities in the real world, effective interaction and fusion among multimodal information plays a key role in the creation and perception of multimodal data in computer vision and deep learning research. With its superb power in modeling the interaction among multimodal information, multimodal image synthesis and editing has become a hot research topic in recent years. Unlike traditional visual guidance, which provides explicit clues, multimodal guidance offers intuitive and flexible means for image synthesis and editing. On the other hand, the field also faces several challenges, such as the inherent modality gap, the synthesis of high-resolution images, and faithful evaluation metrics. In this survey, we comprehensively contextualize recent progress in multimodal image synthesis and editing and formulate taxonomies according to data modalities and model architectures. We start with an introduction to the different guidance modalities in image synthesis and editing. We then describe multimodal image synthesis and editing approaches in detail, including frameworks based on generative adversarial networks (GANs), GAN inversion, transformers, and other methods such as NeRF and diffusion models. This is followed by a comprehensive description of the benchmark datasets and evaluation metrics widely adopted in multimodal image synthesis and editing, together with detailed comparisons of different synthesis methods and an analysis of their respective advantages and limitations. Finally, we provide insights into current research challenges and directions for future research. A project associated with this survey is available at https://github.com/fnzhan/mise
The recent increase in popularity of volumetric representations for scene reconstruction and novel view synthesis has put renewed focus on animating volumetric content at high visual quality and in real time. While learned, implicit deformation methods can produce impressive results, they are "black boxes" to artists and content creators, they require large amounts of training data to generalize meaningfully, and they do not produce realistic extrapolations outside the training data. In this work, we address these issues by introducing a volume deformation method that is real-time, easy to edit with off-the-shelf software, and can extrapolate convincingly. To demonstrate the versatility of our method, we apply it in two scenarios: physics-based object deformation and telepresence, where avatars are controlled using blendshapes. We also perform thorough experiments showing that our method compares favorably to both volumetric approaches combined with implicit deformation and methods based on mesh deformation.
Single-image 3D human reconstruction aims to reconstruct the 3D textured surface of the human body given a single image. While implicit function-based methods recently achieved reasonable reconstruction performance, they still bear limitations showing degraded quality in both surface geometry and texture from an unobserved view. In response, to generate a realistic textured surface, we propose ReFu, a coarse-to-fine approach that refines the projected backside view image and fuses the refined image to predict the final human body. To suppress the diffused occupancy that causes noise in projection images and reconstructed meshes, we propose to train occupancy probability by simultaneously utilizing 2D and 3D supervisions with occupancy-based volume rendering. We also introduce a refinement architecture that generates detail-preserving backside-view images with front-to-back warping. Extensive experiments demonstrate that our method achieves state-of-the-art performance in 3D human reconstruction from a single image, showing enhanced geometry and texture quality from an unobserved view.
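The joint 2D/3D supervision of occupancy described here can be sketched as: alpha-composite the per-sample occupancy along each ray to obtain a silhouette supervised by 2D masks, while the same occupancy values are also supervised directly with 3D ground-truth samples. The compositing form and loss weights below are assumptions, not the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def render_silhouette(occ: torch.Tensor) -> torch.Tensor:
    """Alpha-composite per-sample occupancy probabilities along each ray.

    occ: (R, S) occupancy in [0, 1] for S samples along each of R rays.
    Returns the probability (R,) that each ray hits the surface at all.
    """
    trans = torch.cumprod(1.0 - occ + 1e-6, dim=-1)        # transmittance up to each sample
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = trans * occ
    return weights.sum(dim=-1)

def occupancy_losses(occ_rays, gt_masks, occ_points, gt_occ, w2d=1.0, w3d=1.0):
    """2D supervision on rendered silhouettes plus 3D supervision on sampled points."""
    sil = render_silhouette(occ_rays)
    loss_2d = F.binary_cross_entropy(sil.clamp(1e-5, 1 - 1e-5), gt_masks)
    loss_3d = F.binary_cross_entropy(occ_points.clamp(1e-5, 1 - 1e-5), gt_occ)
    return w2d * loss_2d + w3d * loss_3d
```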
Image and video synthesis has become a blooming topic in computer vision and machine learning communities along with the developments of deep generative models, due to its great academic and application value. Many researchers have been devoted to synthesizing high-fidelity human images as one of the most commonly seen object categories in daily lives, where a large number of studies are performed based on various deep generative models, task settings and applications. Thus, it is necessary to give a comprehensive overview on these variant methods on human image generation. In this paper, we divide human image generation techniques into three paradigms, i.e., data-driven methods, knowledge-guided methods and hybrid methods. For each route, the most representative models and the corresponding variants are presented, where the advantages and characteristics of different methods are summarized in terms of model architectures and input/output requirements. Besides, the main public human image datasets and evaluation metrics in the literature are also summarized. Furthermore, due to the wide application potentials, two typical downstream usages of synthesized human images are covered, i.e., data augmentation for person recognition tasks and virtual try-on for fashion customers. Finally, we discuss the challenges and potential directions of human image generation to shed light on future research.
We propose Neural Actor (NA), a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses. Our method builds on recent neural scene representation and rendering works that learn representations of geometry and appearance from 2D images only. While existing works have demonstrated compelling rendering of static scenes and playback of dynamic scenes, photo-realistic reconstruction and rendering of humans with neural implicit methods, in particular under user-controlled novel poses, is still difficult. To address this problem, we utilize a coarse body model as a proxy to unwarp the surrounding 3D space into a canonical pose. A neural radiance field learns pose-dependent geometric deformations and pose- and view-dependent appearance effects in the canonical space from multi-view video input. To synthesize novel views of high-fidelity dynamic geometry and appearance, we leverage 2D texture maps defined on the body model as latent variables for predicting residual deformations and the dynamic appearance. Experiments demonstrate that our method achieves better quality than the state of the art for playback as well as novel pose synthesis, and can even generalize to new poses that differ significantly from the training poses. Furthermore, our method also supports body shape control of the synthesized results.
Recent work on language-guided image manipulation has shown the great power of language in providing rich semantics, especially for face images. However, the other natural information in language, motion, is less explored. In this paper, we leverage motion information and study a novel task, language-guided face animation, which aims to animate a static face image with the help of language. To better utilize both the semantics and the motion in language, we propose a simple yet effective framework. Specifically, we propose a recurrent motion generator to extract a series of semantic and motion cues from the language and feed them, together with visual information, to a pre-trained StyleGAN to generate high-quality frames. To optimize the proposed framework, three carefully designed losses are proposed: a regularization loss to keep the face identity, a path-length regularization loss to ensure motion smoothness, and a contrastive loss to enable video synthesis under various language guidance within a single model. Extensive experiments with qualitative and quantitative evaluations are conducted on different domains. Code will be available at https://github.com/tiankaihang/language-guided-animation.git
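At a high level, the recurrent motion generator can be thought of as an RNN that consumes a language embedding and emits a sequence of offsets applied to a StyleGAN latent code, one per frame. The sketch below is an assumption about that structure (the GRU, latent dimensions, and the `stylegan_generator` call are placeholders), not the released implementation.

```python
import torch
import torch.nn as nn

class RecurrentMotionGenerator(nn.Module):
    """Turns a language embedding into a sequence of latent offsets for a pre-trained StyleGAN."""
    def __init__(self, text_dim: int = 512, latent_dim: int = 512, hidden: int = 512):
        super().__init__()
        self.gru = nn.GRU(text_dim, hidden, batch_first=True)
        self.to_offset = nn.Linear(hidden, latent_dim)

    def forward(self, text_emb: torch.Tensor, n_frames: int) -> torch.Tensor:
        # Feed the same text embedding at every time step; the recurrence produces per-frame motion.
        inp = text_emb.unsqueeze(1).expand(-1, n_frames, -1)     # (B, T, text_dim)
        h, _ = self.gru(inp)
        return self.to_offset(h)                                 # (B, T, latent_dim) latent offsets

# frames = stylegan_generator(w_init[:, None] + offsets)  # hypothetical pre-trained generator call
```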