Creating 3D content typically requires highly specialized skills to design and generate models of objects and other assets. We address this problem by retrieving high-quality 3D assets from multimodal inputs, including 2D sketches, images, and text. We use CLIP because it provides a bridge to higher-level latent features. We use these features to perform multimodal fusion, addressing the lack of artistic control that limits common data-driven approaches. Our approach enables multimodal-conditioned, feature-driven retrieval from a 3D asset database by combining the latent embeddings of the inputs. We explore the effects of different combinations of feature embeddings across different input types and weighting methods.
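A minimal sketch of the fusion-and-retrieval step described above, assuming CLIP embeddings have already been computed for the query modalities and for renders of every asset in the database; the weighting scheme, dimensions, and function names are illustrative, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def fuse_and_retrieve(query_embs, weights, asset_embs, k=5):
    """Combine per-modality CLIP embeddings into one query and rank assets.

    query_embs: dict of modality name -> (d,) CLIP embedding (sketch/image/text)
    weights:    dict of modality name -> scalar blending weight
    asset_embs: (N, d) CLIP embeddings of rendered database assets
    """
    # Weighted sum of the available modality embeddings, then re-normalize.
    fused = sum(weights[m] * F.normalize(e, dim=-1) for m, e in query_embs.items())
    fused = F.normalize(fused, dim=-1)

    # Cosine similarity against every asset embedding, return top-k indices.
    sims = F.normalize(asset_embs, dim=-1) @ fused
    return sims.topk(k).indices

# Toy usage with random stand-ins for CLIP features (d = 512 as in ViT-B/32).
d = 512
query = {"sketch": torch.randn(d), "text": torch.randn(d)}
w = {"sketch": 0.4, "text": 0.6}            # hypothetical weighting scheme
database = torch.randn(1000, d)             # 1000 pre-embedded 3D asset renders
print(fuse_and_retrieve(query, w, database))
```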
We propose CLIP-Actor, a text-driven motion recommendation and neural mesh stylization system for human mesh animation. CLIP-Actor animates a 3D human mesh to conform to a text prompt by recommending a motion sequence and learning mesh style attributes. Prior work fails to produce plausible results when the artist-designed mesh content does not conform to the text from the start. Instead, we build a text-driven human motion recommendation system by leveraging a large-scale human motion dataset with language labels. Given a natural language prompt, CLIP-Actor first suggests a human motion that conforms to the prompt in a coarse-to-fine manner. We then propose a synthesize-through-optimization method that detailizes and texturizes the recommended mesh sequence in a manner disentangled from the pose of each frame. This allows the style attributes to conform to the prompt in a temporally consistent and pose-agnostic way. The decoupled neural optimization also enables spatio-temporal view augmentation from the human motion. We further propose mask-weighted embedding attention, which stabilizes the optimization process by rejecting distracting renders that contain scarce foreground pixels. We demonstrate that CLIP-Actor produces plausible and human-recognizable stylized 3D human meshes with detailed geometry and texture from natural language prompts.
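A rough sketch of the first stage only, the text-driven motion recommendation: because the motion clips carry language labels, a prompt can be ranked against those labels in a shared text-embedding space. The encoder below is a deterministic stub standing in for a CLIP-style text encoder, and the real coarse-to-fine matching in CLIP-Actor is more involved:

```python
import torch
import torch.nn.functional as F

def embed_text(sentences, dim=512):
    # Stub for a CLIP-style text encoder; returns deterministic pseudo-embeddings.
    gen = torch.Generator().manual_seed(0)
    table = {s: torch.randn(dim, generator=gen) for s in sorted(set(sentences))}
    return torch.stack([table[s] for s in sentences])

def recommend_motion(prompt, motion_labels, k=3):
    """Rank labeled motion clips by similarity between the prompt and label text."""
    embs = F.normalize(embed_text(motion_labels + [prompt]), dim=-1)
    label_embs, prompt_emb = embs[:-1], embs[-1]
    scores = label_embs @ prompt_emb
    order = scores.topk(min(k, len(motion_labels))).indices
    return [motion_labels[int(i)] for i in order]

labels = ["a person waves both hands", "a person kicks a ball", "a person sits down"]
print(recommend_motion("someone greeting excitedly", labels))
```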
We present a technique for the zero-shot generation of a 3D model using only a target text prompt. Without any 3D supervision, our method deforms the control shape of a limit subdivision surface, along with its texture map and normal map, to obtain a 3D asset that corresponds to the input text prompt and can be easily deployed in games or modeling applications. We rely only on a pre-trained CLIP model that compares the input text prompt with differentiably rendered images of our 3D model. While previous work has focused on stylization or required training of generative models, we perform optimization directly on the mesh parameters to generate shape, texture, or both. To constrain the optimization to produce plausible meshes and textures, we introduce a number of techniques using image augmentations and a pre-trained prior that generates CLIP image embeddings given a text embedding.
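The optimization loop the abstract describes can be sketched as follows. The renderer and the CLIP encoders are replaced by differentiable stubs, and the image augmentations and the text-to-image-embedding prior are reduced to comments; only the structure of the loop is meant to be informative:

```python
import torch

d = 512
# Learnable mesh parameters: control-vertex offsets and a texture map.
vertex_offsets = torch.zeros(200, 3, requires_grad=True)
texture = torch.rand(1, 3, 256, 256, requires_grad=True)

def render(offsets, tex):
    # Stub differentiable renderer: any differentiable map from params to an image.
    return torch.tanh(tex + offsets.mean())

proj = torch.randn(3, d)
def clip_image_embed(img):
    # Stub CLIP image encoder (a fixed random projection of pooled pixels).
    return img.mean(dim=(2, 3)) @ proj

text_embed = torch.randn(1, d)          # stub CLIP text embedding of the prompt

opt = torch.optim.Adam([vertex_offsets, texture], lr=1e-2)
for step in range(100):
    img = render(vertex_offsets, texture)
    # Image augmentations (random crops, backgrounds) would be applied here.
    img_emb = clip_image_embed(img)
    # Maximize cosine similarity between the rendered image and prompt embeddings.
    loss = 1 - torch.cosine_similarity(img_emb, text_embed).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```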
In this work, we develop intuitive controls for editing the style of 3D objects. Our framework, Text2Mesh, stylizes a 3D mesh by predicting colors and local geometric details that conform to a target text prompt. We consider a disentangled representation of a 3D object using a fixed mesh input (content) coupled with a learned neural network, which we term a neural style field network. To modify the style, we obtain a similarity score between a text prompt (describing style) and a stylized mesh by harnessing the representational power of CLIP. Text2Mesh requires neither a pre-trained generative model nor a specialized 3D mesh dataset. It can handle low-quality meshes (non-manifold, with boundaries, etc.) of arbitrary genus and does not require UV parameterization. We demonstrate the ability of our technique to synthesize a myriad of styles over a wide variety of 3D meshes.
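A hedged sketch of the neural style field idea: an MLP over vertex coordinates that predicts a per-vertex color and a small displacement along the vertex normal. Shapes, scales, and the omitted CLIP scoring loop are illustrative assumptions:

```python
import torch
import torch.nn as nn

class StyleField(nn.Module):
    """MLP mapping a vertex position to an RGB color and a scalar displacement."""
    def __init__(self, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.color_head = nn.Linear(hidden, 3)
        self.disp_head = nn.Linear(hidden, 1)

    def forward(self, verts):
        h = self.backbone(verts)
        color = torch.sigmoid(self.color_head(h))       # per-vertex RGB in [0, 1]
        disp = 0.1 * torch.tanh(self.disp_head(h))      # small displacement scale
        return color, disp

verts = torch.rand(1000, 3)          # fixed "content" mesh vertices
normals = nn.functional.normalize(torch.rand(1000, 3), dim=-1)
field = StyleField()
color, disp = field(verts)
stylized_verts = verts + disp * normals   # geometric detail along vertex normals
# In Text2Mesh the stylized mesh is rendered from several views and scored
# against the text prompt with CLIP; that rendering/scoring loop is omitted here.
```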
We combine neural rendering with multimodal image and text representations to synthesize diverse 3D objects solely from natural language descriptions. Our method, Dream Fields, can generate the geometry and color of a wide range of objects without 3D supervision. Due to the scarcity of diverse, captioned 3D data, prior methods only generate objects from a handful of categories, such as ShapeNet. Instead, we guide generation with image-text models pre-trained on large datasets of captioned images from the web. Our method optimizes a neural radiance field over many camera views so that rendered images score highly against a target caption according to a pre-trained CLIP model. To improve fidelity and visual quality, we introduce simple geometric priors, including sparsity-inducing transmittance regularization, scene bounds, and new MLP architectures. In experiments, Dream Fields produce realistic, multi-view consistent object geometry and color from a variety of natural language captions.
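One concrete piece worth spelling out is the transmittance regularizer. A sketch of how it can be combined with the CLIP alignment term, with placeholder embeddings and an illustrative transmittance target tau (in the paper a similar target is annealed during training):

```python
import torch

def dream_fields_loss(image_emb, text_emb, mean_transmittance, tau=0.88):
    """CLIP alignment term plus a transmittance penalty encouraging sparsity.

    image_emb:          (d,) CLIP embedding of a rendered view
    text_emb:           (d,) CLIP embedding of the caption
    mean_transmittance: scalar average transmittance of the rendered rays
    tau:                target transmittance; penalize only values below it
    """
    clip_term = -torch.cosine_similarity(image_emb, text_emb, dim=0)
    # Encourage mostly-empty scenes: push average transmittance up toward tau.
    transmittance_term = -torch.clamp(mean_transmittance, max=tau)
    return clip_term + transmittance_term

# Toy call with random stand-ins for the real quantities.
loss = dream_fields_loss(torch.randn(512), torch.randn(512), torch.tensor(0.5))
```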
Traditional 3D scene understanding approaches rely on labeled 3D datasets to train a model for a single task with supervision. We propose OpenScene, an alternative approach where a model predicts dense features for 3D scene points that are co-embedded with text and image pixels in CLIP feature space. This zero-shot approach enables task-agnostic training and open-vocabulary queries. For example, to perform SOTA zero-shot 3D semantic segmentation it first infers CLIP features for every 3D point and later classifies them based on similarities to embeddings of arbitrary class labels. More interestingly, it enables a suite of open-vocabulary scene understanding applications that have never been done before. For example, it allows a user to enter an arbitrary text query and then see a heat map indicating which parts of a scene match. Our approach is effective at identifying objects, materials, affordances, activities, and room types in complex 3D scenes, all using a single model trained without any labeled 3D data.
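The zero-shot classification step reduces to a similarity lookup between per-point features and text embeddings of an arbitrary label set; a minimal sketch, with random tensors standing in for OpenScene's per-point features and for CLIP text embeddings:

```python
import torch
import torch.nn.functional as F

def zero_shot_segment(point_feats, label_embs):
    """Assign each 3D point the label whose CLIP text embedding it is closest to.

    point_feats: (P, d) per-point features co-embedded in CLIP space
    label_embs:  (C, d) CLIP text embeddings of arbitrary class names
    returns:     (P,) index of the best-matching label per point
    """
    sims = F.normalize(point_feats, dim=-1) @ F.normalize(label_embs, dim=-1).T
    return sims.argmax(dim=-1)

points = torch.randn(10000, 512)                # stand-in per-point features
labels = torch.randn(20, 512)                   # stand-in embeddings of 20 class names
print(zero_shot_segment(points, labels).shape)  # torch.Size([10000])
```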
We present 3D Highlighter, a technique for localizing semantic regions on a mesh using text as input. A key feature of our system is the ability to interpret "out-of-domain" localizations. Our system demonstrates the ability to reason about where to place non-obviously related concepts on an input 3D shape, such as adding clothing to a bare 3D animal model. Our method contextualizes the text description using a neural field and colors the corresponding region of the shape using a probability-weighted blend. Our neural optimization is guided by a pre-trained CLIP encoder, which bypasses the need for any 3D datasets or 3D annotations. Thus, 3D Highlighter is highly flexible, general, and capable of producing localizations on a myriad of input shapes. Our code is publicly available at https://github.com/threedle/3DHighlighter.
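A hedged sketch of the probability-weighted blend: a small network maps vertex positions to highlight probabilities, which blend a highlight color with a neutral base color. The network size, colors, and the omitted CLIP-guided rendering loop are assumptions:

```python
import torch
import torch.nn as nn

class HighlightField(nn.Module):
    """MLP mapping a vertex position to the probability of being highlighted."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, verts):
        return self.net(verts)          # (V, 1) probabilities

verts = torch.rand(2000, 3)
highlight = torch.tensor([1.0, 0.0, 0.0])   # highlight color (e.g., red)
base = torch.tensor([0.6, 0.6, 0.6])        # neutral color for the rest
p = HighlightField()(verts)
vertex_colors = p * highlight + (1 - p) * base   # probability-weighted blend
# The colored mesh would then be rendered and scored against the text with a
# pre-trained CLIP encoder, backpropagating into the field's weights.
```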
As a powerful representation of 3D scenes, the neural radiance field (NeRF) enables high-quality novel view synthesis from multi-view images. Stylizing NeRF, however, remains challenging, especially on simulating a text-guided style with both the appearance and the geometry altered simultaneously. In this paper, we present NeRF-Art, a text-guided NeRF stylization approach that manipulates the style of a pre-trained NeRF model with a simple text prompt. Unlike previous approaches that either lack sufficient geometry deformations and texture details or require meshes to guide the stylization, our method can shift a 3D scene to the target style characterized by desired geometry and appearance variations without any mesh guidance. This is achieved by introducing a novel global-local contrastive learning strategy, combined with the directional constraint to simultaneously control both the trajectory and the strength of the target style. Moreover, we adopt a weight regularization method to effectively suppress cloudy artifacts and geometry noises which arise easily when the density field is transformed during geometry stylization. Through extensive experiments on various styles, we demonstrate that our method is effective and robust regarding both single-view stylization quality and cross-view consistency. The code and more results can be found in our project page: https://cassiepython.github.io/nerfart/.
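The directional constraint mentioned above is typically implemented by aligning the image-space edit direction with the text-space edit direction in CLIP space; a sketch of such a term with placeholder embeddings (whether NeRF-Art uses exactly this form is an assumption here):

```python
import torch
import torch.nn.functional as F

def directional_loss(img_emb, src_img_emb, text_emb, src_text_emb):
    """Align the image-space edit direction with the text-space edit direction.

    img_emb / src_img_emb:   CLIP embeddings of the stylized and original renders
    text_emb / src_text_emb: CLIP embeddings of the target and source descriptions
    """
    d_img = F.normalize(img_emb - src_img_emb, dim=-1)
    d_text = F.normalize(text_emb - src_text_emb, dim=-1)
    return 1 - (d_img * d_text).sum(dim=-1).mean()

# Toy call on a batch of 4 views with 512-d placeholder embeddings.
e = lambda: torch.randn(4, 512)
print(directional_loss(e(), e(), e(), e()))
```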
Text-to-image models offer unprecedented freedom to guide creation through natural language. Yet it is unclear how such freedom can be exercised to generate images of specific unique concepts, modify their appearance, or compose them in new roles and novel scenes. In other words, we ask: how can we use language-guided models to turn our cat into a painting, or imagine a new product based on our favorite toy? Here we present a simple approach that allows such creative freedom. Using only 3-5 images of a user-provided concept, such as an object or a style, we learn to represent it through new "words" in the embedding space of a frozen text-to-image model. These "words" can be composed into natural language sentences, guiding personalized creation in an intuitive way. Notably, we find evidence that a single word embedding is sufficient to capture unique and varied concepts. We compare our approach to a wide range of baselines and demonstrate that it can more faithfully portray the concepts across a range of applications and tasks. Our code, data, and new words will be available at: https://textual-inversion.github.io
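Conceptually, only one new row of the text encoder's token-embedding table is trained while the generative model stays frozen. The sketch below keeps that structure but replaces the diffusion model with a trivial stub loss, so the token id, dimensions, and loss are all illustrative:

```python
import torch
import torch.nn as nn

d = 768
# Frozen pieces of a text-to-image model, reduced here to stubs.
frozen_token_table = torch.randn(1000, d)           # stand-in vocabulary embedding table
frozen_text_encoder = nn.Linear(d, d).requires_grad_(False)
frozen_image_loss = lambda cond, imgs: ((cond - imgs.mean(0)) ** 2).mean()  # stub loss

# The single new "word": the only trainable parameter.
new_word = nn.Parameter(frozen_token_table.mean(0).clone())

concept_images = torch.randn(5, d)                  # stand-ins for 3-5 user images
opt = torch.optim.Adam([new_word], lr=5e-3)
for step in range(200):
    # Compose the prompt "a photo of <new-word>" at the embedding level.
    prompt_tokens = torch.stack([frozen_token_table[320], new_word])  # hypothetical token ids
    cond = frozen_text_encoder(prompt_tokens).mean(0)
    # In the real method this conditioning drives the frozen diffusion model's
    # denoising loss on the concept images; here a stub reconstruction loss stands in.
    loss = frozen_image_loss(cond, concept_images)
    opt.zero_grad(); loss.backward(); opt.step()
```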
We present CoGS, a novel method for the style-conditioned, sketch-driven synthesis of images. CoGS enables exploration of diverse appearance possibilities for a given sketched object, providing decoupled control over the structure and the appearance of the output. Coarse-grained control over object structure and appearance is enabled via an input sketch and an exemplar "style" conditioning image, with a transformer-based sketch and style encoder generating a discrete codebook representation. We map the codebook representation into a metric space, enabling fine-grained control over selection and interpolation between multiple synthesis options before generating the image via a vector-quantized GAN (VQGAN) decoder. Our framework thus unifies search and synthesis tasks: a sketch and style pair may be used to run an initial synthesis, which may be refined by combining it with similar results from a search corpus to produce an image that more closely matches the user's intent. We show that our model, trained on the 125 object classes of our newly created Pseudosketches dataset, is capable of producing a diverse range of semantic content and appearance styles.
We address the problem of retrieving images with both a sketch and a text query. We present TASK-former (Text And SKetch transformer), an end-to-end trainable model for image retrieval using a text description and a sketch as input. We argue that the two input modalities complement each other in a way that neither can easily achieve on its own. TASK-former follows the late-fusion dual-encoder approach, similar to CLIP, which allows efficient and scalable retrieval since the retrieval set can be indexed independently of the queries. We empirically demonstrate that using an input sketch (even a poorly drawn one) in addition to text considerably increases retrieval recall compared to traditional text-based image retrieval. To evaluate our approach, we collect 5,000 hand-drawn sketches for images in the test set of the COCO dataset. The collected sketches are available at https://janesjanes.github.io/tsbir/.
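The late-fusion dual-encoder design means gallery images are embedded and indexed once, independently of queries, while a text-plus-sketch query is fused into a single vector at search time. A minimal sketch with stub encoders; fusing the two query embeddings by summation is an assumption, not necessarily the paper's exact fusion:

```python
import torch
import torch.nn.functional as F

d = 512
# Stub encoders standing in for the trained text, sketch, and image towers.
encode_text = torch.nn.Linear(300, d)
encode_sketch = torch.nn.Linear(1024, d)
encode_image = torch.nn.Linear(2048, d)

# Gallery side: indexed once, independent of any query (this is the point of
# late fusion -- the index never has to be rebuilt when queries change).
gallery = F.normalize(encode_image(torch.randn(5000, 2048)), dim=-1)

def query(text_feat, sketch_feat, k=10):
    # Encode the two query modalities separately, then fuse them.
    q = F.normalize(encode_text(text_feat) + encode_sketch(sketch_feat), dim=-1)
    return (gallery @ q).topk(k).indices

print(query(torch.randn(300), torch.randn(1024)))
```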
We propose ClipFace, a novel self-supervised approach for text-guided editing of textured 3D morphable model of faces. Specifically, we employ user-friendly language prompts to enable control of the expressions as well as appearance of 3D faces. We leverage the geometric expressiveness of 3D morphable models, which inherently possess limited controllability and texture expressivity, and develop a self-supervised generative model to jointly synthesize expressive, textured, and articulated faces in 3D. We enable high-quality texture generation for 3D faces by adversarial self-supervised training, guided by differentiable rendering against collections of real RGB images. Controllable editing and manipulation are given by language prompts to adapt texture and expression of the 3D morphable model. To this end, we propose a neural network that predicts both texture and expression latent codes of the morphable model. Our model is trained in a self-supervised fashion by exploiting differentiable rendering and losses based on a pre-trained CLIP model. Once trained, our model jointly predicts face textures in UV-space, along with expression parameters to capture both geometry and texture changes in facial expressions in a single forward pass. We further show the applicability of our method to generate temporally changing textures for a given animation sequence.
Research connecting text and images has recently seen several breakthroughs, with models like CLIP, DALL-E 2, and Stable Diffusion. However, the connection between text and other visual modalities, such as lidar data, has received less attention, prohibited by the lack of text-lidar datasets. In this work, we propose LidarCLIP, a mapping from automotive point clouds to a pre-existing CLIP embedding space. Using image-lidar pairs, we supervise a point cloud encoder with the image CLIP embeddings, effectively relating text and lidar data with the image domain as an intermediary. We show the effectiveness of LidarCLIP by demonstrating that lidar-based retrieval is generally on par with image-based retrieval, but with complementary strengths and weaknesses. By combining image and lidar features, we improve upon both single-modality methods and enable a targeted search for challenging detection scenarios under adverse sensor conditions. We also use LidarCLIP as a tool to investigate fundamental lidar capabilities through natural language. Finally, we leverage our compatibility with CLIP to explore a range of applications, such as point cloud captioning and lidar-to-image generation, without any additional training. We hope LidarCLIP can inspire future work to dive deeper into connections between text and point cloud understanding. Code and trained models available at https://github.com/atonderski/lidarclip.
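The supervision scheme described, training a point cloud encoder to reproduce the frozen CLIP image embedding of the paired camera image, can be sketched as a feature-distillation loop. The backbone, pooling, and loss below are illustrative stand-ins:

```python
import torch
import torch.nn as nn

d = 512
# Stub lidar encoder: in the paper this is a real point cloud backbone.
lidar_encoder = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, d))

def encode_point_cloud(points):
    # Mean-pool per-point features into one embedding per cloud (illustrative).
    return lidar_encoder(points).mean(dim=1)

frozen_clip_image_emb = torch.randn(8, d)   # precomputed CLIP embeddings of paired images
point_clouds = torch.randn(8, 2048, 4)      # batch of clouds: (x, y, z, intensity)

opt = torch.optim.Adam(lidar_encoder.parameters(), lr=1e-4)
for step in range(100):
    pred = encode_point_cloud(point_clouds)
    # Pull the lidar embedding toward the paired image's CLIP embedding (an MSE
    # term here; a cosine term is another natural choice). Afterwards, text
    # queries work against lidar via CLIP's frozen text tower.
    loss = nn.functional.mse_loss(pred, frozen_clip_image_emb)
    opt.zero_grad(); loss.backward(); opt.step()
```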
[StyleCLIP teaser-figure residue; recoverable details only: example edit prompts "Mohawk hairstyle", "Without makeup", "Cute cat", "Lion", "Gothic church". * Equal contribution, ordered alphabetically. Code and video are available at https://github.com/orpatashnik/StyleCLIP]
The recent success of generative models has shown that leveraging a multimodal embedding space can manipulate an image using text information. However, manipulating an image with sources other than text, such as sound, is not easy due to the dynamic characteristics of those sources. In particular, sound can convey vivid emotions and dynamic expressions of the real world. Here, we propose a framework that directly encodes sound into the multimodal (image-text) embedding space and manipulates an image from that space. Our audio encoder is trained to produce a latent representation from an audio input, which is forced to be aligned with the image and text representations in the multimodal embedding space. We use a direct latent optimization method based on the aligned embeddings for sound-guided image manipulation. We also show that our method can mix different modalities, i.e., text and audio, which enriches the variety of image modifications. We verify the effectiveness of our sound-guided image manipulation quantitatively and qualitatively. Experiments on zero-shot audio classification and semantic-level image classification show that our proposed model outperforms other text- and sound-guided state-of-the-art methods.
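The first stage, forcing the audio encoder's output to live in the CLIP image-text space, can be sketched as a contrastive alignment against paired CLIP embeddings. Everything below is a stub-level illustration rather than the paper's training recipe:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 512
audio_encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, d))

def contrastive_align(audio_emb, clip_emb, temperature=0.07):
    """InfoNCE-style loss pulling each audio clip toward its paired CLIP embedding."""
    a = F.normalize(audio_emb, dim=-1)
    c = F.normalize(clip_emb, dim=-1)
    logits = a @ c.T / temperature
    targets = torch.arange(len(a))
    return F.cross_entropy(logits, targets)

audio_feats = torch.randn(16, 128)          # e.g., pooled spectrogram features
paired_clip = torch.randn(16, d)            # CLIP embeddings of paired images/captions
opt = torch.optim.Adam(audio_encoder.parameters(), lr=1e-4)
for step in range(50):
    loss = contrastive_align(audio_encoder(audio_feats), paired_clip)
    opt.zero_grad(); loss.backward(); opt.step()
# Once aligned, sound-guided manipulation optimizes a generator latent so the
# edited image's CLIP embedding moves toward the audio embedding.
```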
Recent CLIP-guided 3D optimization methods, e.g., DreamFields and PureCLIPNeRF, achieve great success in zero-shot text-guided 3D synthesis. However, due to scratch training and random initialization without any prior knowledge, these methods usually fail to generate accurate and faithful 3D structures that conform to the corresponding text. In this paper, we make the first attempt to introduce an explicit 3D shape prior into CLIP-guided 3D optimization methods. Specifically, we first generate a high-quality 3D shape from input texts in the text-to-shape stage as the 3D shape prior. We then use it to initialize a neural radiance field and optimize it with the full prompt. For the text-to-shape generation, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between images synthesized by the text-to-image model and shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, namely, Dream3D, is capable of generating imaginative 3D content with better visual quality and shape accuracy than state-of-the-art methods.
We present CLIP-NeRF, a multimodal 3D object manipulation method for neural radiance fields (NeRF). By leveraging the joint language-image embedding space of the recent Contrastive Language-Image Pre-training (CLIP) model, we propose a unified framework that allows manipulating NeRF in a user-friendly way, using either a short text prompt or an exemplar image. Specifically, to combine the novel view synthesis capability of NeRF with the controllable manipulation ability of latent representations from generative models, we introduce a disentangled conditional NeRF architecture that allows individual control over both shape and appearance. This is achieved by performing the shape conditioning via a learned deformation field applied to the positional encoding, and deferring color conditioning to the volumetric rendering stage. To bridge this disentangled latent representation to the CLIP embedding, we design two code mappers that take a CLIP embedding as input and update the latent codes to reflect the targeted editing. The mappers are trained with a CLIP-based matching loss to ensure manipulation accuracy. Furthermore, we propose an inverse optimization method that accurately projects an input image to the latent codes for manipulation, enabling editing on real images. We evaluate our approach through extensive experiments on a variety of text prompts and exemplar images, and also provide an intuitive interface for interactive editing. Our implementation is available at https://cassiepython.github.io/clipnerf/
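The code-mapper idea, a network that takes a CLIP embedding of the desired edit and outputs an update to a disentangled latent code, can be sketched as below. The residual form and layer sizes are guesses; the conditional NeRF and the CLIP-based matching loss are only referenced in comments:

```python
import torch
import torch.nn as nn

clip_dim, code_dim = 512, 128

class CodeMapper(nn.Module):
    """Maps a CLIP embedding of the desired edit to an update of a latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(clip_dim, 256), nn.ReLU(),
                                 nn.Linear(256, code_dim))

    def forward(self, clip_emb, code):
        return code + self.net(clip_emb)    # residual update of the latent code

shape_mapper, appearance_mapper = CodeMapper(), CodeMapper()

edit_emb = torch.randn(1, clip_dim)          # CLIP embedding of a prompt or exemplar
shape_code = torch.randn(1, code_dim)
appearance_code = torch.randn(1, code_dim)

new_shape = shape_mapper(edit_emb, shape_code)
new_appearance = appearance_mapper(edit_emb, appearance_code)
# The updated codes condition the disentangled NeRF; the mappers are trained
# with a CLIP-based matching loss between renders and the target embedding.
```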
Incidental supervision from language has become a popular approach for learning generic visual representations that can be prompted to perform many recognition tasks in computer vision. We conduct an in-depth exploration of the CLIP model and show that its visual representation is often strongly biased towards solving some tasks more than others. Moreover, which task the representation will be biased towards is unpredictable, with little consistency across images. To resolve this task bias, we show how to learn a visual prompt that guides the representation towards features relevant to their task of interest. Our results show that these visual prompts can be independent of the input image and still effectively provide a conditioning mechanism to steer visual representations towards the desired task.
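A hedged sketch of an input-independent visual prompt: a learnable pixel pattern added to every image and optimized so that the frozen encoder's features align with the task of interest. The frozen encoder is a stub standing in for CLIP's image tower, and the classification objective is an illustrative choice:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 512
frozen_image_encoder = nn.Linear(3 * 64 * 64, d).requires_grad_(False)   # CLIP stub
task_text_embs = F.normalize(torch.randn(10, d), dim=-1)  # embeddings of task labels

# The visual prompt: a learnable pixel pattern added to every input image.
prompt = nn.Parameter(torch.zeros(3, 64, 64))
opt = torch.optim.Adam([prompt], lr=1e-2)

images = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 10, (8,))
for step in range(50):
    prompted = (images + prompt).flatten(1)              # same prompt for all images
    feats = F.normalize(frozen_image_encoder(prompted), dim=-1)
    logits = feats @ task_text_embs.T / 0.07
    loss = F.cross_entropy(logits, labels)               # steer features toward the task
    opt.zero_grad(); loss.backward(); opt.step()
```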
As information exists in various modalities in the real world, effective interaction and fusion among multimodal information plays a key role in the creation and perception of multimodal data in computer vision and deep learning research. With its strength in modeling the interaction among multimodal information, multimodal image synthesis and editing has become a hot research topic in recent years. Unlike traditional visual guidance, which provides explicit cues, multimodal guidance offers intuitive and flexible means for image synthesis and editing. On the other hand, the field also faces several challenges, characterized by inherent modality gaps, the synthesis of high-resolution images, faithful evaluation metrics, and so on. In this survey, we comprehensively contextualize recent progress in multimodal image synthesis and editing and formulate taxonomies according to data modalities and model architectures. We start with an introduction to the different types of guidance modalities in image synthesis and editing. We then describe multimodal image synthesis and editing approaches in detail, including generative adversarial networks (GANs), GAN inversion, transformers, and other methods such as NeRF and diffusion models. This is followed by a comprehensive description of the benchmark datasets and evaluation metrics widely adopted in multimodal image synthesis and editing, along with detailed comparisons of the different synthesis methods and an analysis of their respective advantages and limitations. Finally, we provide in-depth insights into current research challenges and possible future research directions. A project associated with this survey is available at https://github.com/fnzhan/mise
Text-guided 3D object generation aims to generate 3D objects described by user-defined captions, which paves a flexible way to visualize what we imagined. Although some works have been devoted to solving this challenging task, these works either utilize some explicit 3D representations (e.g., mesh), which lack texture and require post-processing for rendering photo-realistic views; or require individual time-consuming optimization for every single case. Here, we make the first attempt to achieve generic text-guided cross-category 3D object generation via a new 3D-TOGO model, which integrates a text-to-views generation module and a views-to-3D generation module. The text-to-views generation module is designed to generate different views of the target 3D object given an input caption. Prior-guidance, caption-guidance and view contrastive learning are proposed for achieving better view-consistency and caption similarity. Meanwhile, a pixelNeRF model is adopted for the views-to-3D generation module to obtain the implicit 3D neural representation from the previously-generated views. Our 3D-TOGO model generates 3D objects in the form of the neural radiance field with good texture and requires no time-cost optimization for every single caption. Besides, 3D-TOGO can control the category, color and shape of generated 3D objects with the input caption. Extensive experiments on the largest 3D object dataset (i.e., ABO) are conducted to verify that 3D-TOGO can better generate high-quality 3D objects according to the input captions across 98 different categories, in terms of PSNR, SSIM, LPIPS and CLIP-score, compared with text-NeRF and Dreamfields.