Controllable image synthesis with user scribbles is a topic of keen interest in the computer vision community. In this paper, we study, for the first time, the problem of photorealistic image synthesis from incomplete and primitive human paintings. In particular, we propose a novel approach, Paint2Pix, which learns to predict (and adapt) what a user intends to draw from rudimentary brushstroke inputs by learning a mapping from incomplete human paintings to their realistic renderings. When used in conjunction with recent work on autonomous painting agents, we show that Paint2Pix can be used for progressive image synthesis from scratch. In this process, Paint2Pix allows a novice user to progressively synthesize the desired image output while requiring just a few coarse user scribbles to accurately steer the trajectory of the synthesis process. Furthermore, we find that our approach also forms a surprisingly convenient way of performing real image editing, allowing the user to perform a diverse range of custom fine-grained edits by adding only a few well-placed brushstrokes. Supplementary video and demo are available at https://1jsingh.github.io/paint2pix
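To make the paint-and-predict interaction concrete, the sketch below shows one way such an encoder-based loop could be organized. It is a minimal illustration under assumed interfaces (the `CanvasEncoder`, the generator, and the stroke callback are placeholders), not the released Paint2Pix implementation.

```python
import torch
import torch.nn as nn

class CanvasEncoder(nn.Module):
    """Placeholder encoder: maps an unfinished painting to a generator latent code."""
    def __init__(self, latent_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, canvas: torch.Tensor) -> torch.Tensor:
        return self.net(canvas)

def progressive_synthesis(canvas, encoder, generator, get_user_strokes, rounds: int = 5):
    """Alternate between coarse user scribbles and model predictions of the intended image."""
    for _ in range(rounds):
        canvas = get_user_strokes(canvas)   # user adds a few coarse brushstrokes
        w = encoder(canvas)                 # predict what the user intends to draw
        canvas = generator(w)               # render a photorealistic prediction as the new canvas
    return canvas
```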
Figure 1. The proposed pixel2style2pixel framework can be used to solve a wide variety of image-to-image translation tasks. Here we show results of pSp on StyleGAN inversion, multi-modal conditional image synthesis, facial frontalization, inpainting and super-resolution.
In this work, we propose TediGAN, a novel framework for multi-modal image generation and manipulation with textual descriptions. The proposed method consists of three components: StyleGAN inversion module, visual-linguistic similarity learning, and instance-level optimization. The inversion module maps real images to the latent space of a well-trained StyleGAN. The visual-linguistic similarity learns the text-image matching by mapping the image and text into a common embedding space. The instance-level optimization is for identity preservation in manipulation. Our model can produce diverse and high-quality images with an unprecedented resolution at 1024². Using a control mechanism based on style-mixing, our TediGAN inherently supports image synthesis with multi-modal inputs, such as sketches or semantic labels, with or without instance guidance. To facilitate text-guided multi-modal synthesis, we propose the Multi-Modal CelebA-HQ, a large-scale dataset consisting of real face images and corresponding semantic segmentation map, sketch, and textual descriptions. Extensive experiments on the introduced dataset demonstrate the superior performance of our proposed method. Code and data are available at https://github.com/weihaox/TediGAN.
Unconstrained image generation with high realism is now possible using recent generative adversarial networks (GANs). However, generating an image with a given set of attributes remains highly challenging. Recent methods use style-based GAN models to perform image editing by leveraging the semantic hierarchy present in the layers of the generator. We present Few-shot Latent-based Attribute Manipulation and Editing (FLAME), a simple yet effective framework for highly controlled image editing through latent space manipulation. Specifically, we estimate linear directions in the latent space of a pre-trained StyleGAN that control semantic attributes in the generated images. In contrast to previous methods that rely on large-scale attribute-labeled datasets or attribute classifiers, FLAME uses minimal supervision in the form of a few curated image pairs to estimate disentangled edit directions. FLAME can perform high-precision individual and sequential edits on a diverse set of images while preserving identity. In addition, we propose a novel attribute style manipulation task for generating diverse styles of attributes such as eyeglasses and hair. We first encode a set of synthetic images of the same identity but with different attribute styles into the latent space to estimate an attribute style manifold; sampling a new latent from this manifold yields a new attribute style in the generated image. We propose a novel sampling method for drawing latents from this manifold, which allows us to generate a diverse range of attribute styles beyond those present in the training set. FLAME can generate multiple attribute styles in a disentangled manner. We illustrate the superior performance of FLAME over previous image editing methods through extensive qualitative and quantitative comparisons. FLAME also generalizes well to multiple datasets such as cars and churches.
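As a rough illustration of the few-shot, pair-based estimation of a linear edit direction described above (a minimal sketch under assumed interfaces, not FLAME's released code), one could average latent-code differences over a handful of curated pairs and then move along the resulting direction:

```python
import torch

def estimate_edit_direction(w_without: torch.Tensor, w_with: torch.Tensor) -> torch.Tensor:
    """Estimate a linear attribute direction in latent space from a few curated pairs.

    w_without, w_with: (N, latent_dim) latent codes of images without / with the attribute,
    e.g. obtained by inverting a handful of curated image pairs.
    """
    direction = (w_with - w_without).mean(dim=0)
    return direction / direction.norm()          # unit-norm edit direction

def apply_edit(w: torch.Tensor, direction: torch.Tensor, strength: float = 3.0) -> torch.Tensor:
    """Move a latent code along the estimated direction to add (or, with negative strength, remove) the attribute."""
    return w + strength * direction
```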
Controllable image synthesis with user scribbles has gained huge public interest with the recent advent of text-conditioned latent diffusion models. The user scribbles control the color composition while the text prompt provides control over the overall image semantics. However, we note that prior works in this direction suffer from an intrinsic domain shift problem, wherein the generated outputs often lack details and resemble simplistic representations of the target domain. In this paper, we propose a novel guided image synthesis framework, which addresses this problem by modeling the output image as the solution of a constrained optimization problem. We show that while computing an exact solution to the optimization is infeasible, an approximation of the same can be achieved while just requiring a single pass of the reverse diffusion process. Additionally, we show that by simply defining a cross-attention based correspondence between the input text tokens and the user stroke-painting, the user is also able to control the semantics of different painted regions without requiring any conditional training or finetuning. Human user study results show that the proposed approach outperforms the previous state-of-the-art by over 85.32% on the overall user satisfaction scores. Project page for our paper is available at https://1jsingh.github.io/gradop.
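The abstract summarizes the approach at a high level; the sketch below shows a generic way to impose a stroke-painting constraint during a single reverse-diffusion pass via gradient (reconstruction-style) guidance. It is an assumption-laden illustration rather than the paper's exact optimization; `denoiser` and `alpha_bars` are hypothetical interfaces.

```python
import math
import torch
import torch.nn.functional as F

def guided_reverse_pass(denoiser, alpha_bars, painting, shape, guidance_scale=1.0):
    """Minimal sketch of one reverse-diffusion pass guided by a user stroke painting.

    Assumptions: `denoiser(x_t, t)` returns the predicted clean image x0 at step t
    (differentiable w.r.t. x_t), and `alpha_bars[t]` is the cumulative noise schedule.
    The painting constraint is imposed as a simple gradient nudge on x0.
    """
    x_t = torch.randn(shape)
    x0_guided = painting
    for t in reversed(range(1, len(alpha_bars))):
        x_t = x_t.detach().requires_grad_(True)
        x0_pred = denoiser(x_t, t)
        # Soft constraint: keep the predicted image close to the coarse painting.
        loss = F.mse_loss(x0_pred, painting)
        grad = torch.autograd.grad(loss, x_t)[0]
        x0_guided = (x0_pred - guidance_scale * grad).detach()
        # Re-noise the guided prediction to the previous timestep (simplified sampler).
        a_prev = alpha_bars[t - 1]
        x_t = math.sqrt(a_prev) * x0_guided + math.sqrt(1.0 - a_prev) * torch.randn(shape)
    return x0_guided
```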
Our goal with this survey is to provide an overview of the state of the art deep learning technologies for face generation and editing. We will cover popular latest architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantics editing and preserving photo quality. We aim to provide an entry point into the field for readers that have basic knowledge about the field of deep learning and are looking for an accessible introduction and overview.
We present a novel image inversion framework and a training pipeline to achieve high-fidelity image inversion with high-quality attribute editing. Inverting real images into StyleGAN's latent space is an extensively studied problem, yet the trade-off between the image reconstruction fidelity and image editing quality remains an open challenge. The low-rate latent spaces are limited in their expressiveness power for high-fidelity reconstruction. On the other hand, high-rate latent spaces result in degradation in editing quality. In this work, to achieve high-fidelity inversion, we learn residual features in higher latent codes that lower latent codes were not able to encode. This enables preserving image details in reconstruction. To achieve high-quality editing, we learn how to transform the residual features for adapting to manipulations in latent codes. We train the framework to extract residual features and transform them via a novel architecture pipeline and cycle consistency losses. We run extensive experiments and compare our method with state-of-the-art inversion methods. Qualitative metrics and visual comparisons show significant improvements. Code: https://github.com/hamzapehlivan/StyleRes
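For readers who want a structural picture, the following sketch outlines the data flow suggested by the abstract: a low-rate latent for editability, residual features for detail, and an adapter that transforms the residuals to match latent edits. All module names and interfaces are placeholders rather than the StyleRes release.

```python
import torch
import torch.nn as nn

class ResidualInversion(nn.Module):
    """Sketch: invert with a low-rate latent, keep details as residual features,
    and transform those residuals so they stay consistent after latent edits."""

    def __init__(self, encoder, generator, residual_encoder, residual_adapter):
        super().__init__()
        self.encoder = encoder                    # image -> low-rate latent code
        self.generator = generator                # latent (+ injected features) -> image
        self.residual_encoder = residual_encoder  # details the low-rate latent could not encode
        self.residual_adapter = residual_adapter  # aligns residuals with an edited latent

    def reconstruct(self, image):
        w = self.encoder(image)
        coarse = self.generator(w)
        residual = self.residual_encoder(image, coarse)
        return self.generator(w, extra_features=residual)

    def edit(self, image, edit_direction, strength: float = 1.0):
        w = self.encoder(image)
        residual = self.residual_encoder(image, self.generator(w))
        w_edit = w + strength * edit_direction
        residual_edit = self.residual_adapter(residual, w_edit)  # adapt details to the edit
        return self.generator(w_edit, extra_features=residual_edit)
```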
Despite recent advances in semantic manipulation with StyleGAN, semantic editing of real faces remains challenging. The gap between the $W$ space and the $W+$ space demands an undesirable trade-off between reconstruction quality and editing quality. To solve this problem, we propose to expand the latent space by replacing the fully-connected layers in StyleGAN's mapping network with attention-based transformers. This simple yet effective technique integrates the two aforementioned spaces and transforms them into a new latent space called $W$++. Our modified StyleGAN maintains the state-of-the-art generation quality of the original StyleGAN with moderately better diversity. More importantly, the proposed $W$++ space achieves superior performance in both reconstruction quality and editing quality. Despite these significant advantages, our $W$++ space supports existing inversion algorithms and editing methods with only negligible modifications, owing to its structural similarity to the $W$/$W$+ spaces. Extensive experiments on the FFHQ dataset demonstrate that the proposed $W$++ space is clearly preferable to the previous $W$/$W$+ spaces. The code is publicly available at https://github.com/anonsubm2021/transstylegan.
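For illustration only, a mapping network built from attention layers instead of fully-connected layers could be sketched as below; the depth, head count, and the use of learned per-layer style tokens are assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class TransformerMappingNetwork(nn.Module):
    """Sketch of a StyleGAN-style mapping network where the usual MLP is replaced
    by a small transformer operating over per-layer style tokens."""

    def __init__(self, z_dim: int = 512, num_styles: int = 18, depth: int = 4, heads: int = 8):
        super().__init__()
        self.to_tokens = nn.Linear(z_dim, z_dim)
        self.style_queries = nn.Parameter(torch.randn(num_styles, z_dim))
        layer = nn.TransformerEncoderLayer(d_model=z_dim, nhead=heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, z: torch.Tensor) -> torch.Tensor:   # z: (batch, z_dim)
        b = z.shape[0]
        tokens = self.style_queries.unsqueeze(0).expand(b, -1, -1)
        tokens = tokens + self.to_tokens(z).unsqueeze(1)   # condition every style token on z
        return self.transformer(tokens)                    # (batch, num_styles, z_dim) latent codes

mapper = TransformerMappingNetwork()
latents = mapper(torch.randn(2, 512))
print(latents.shape)   # torch.Size([2, 18, 512])
```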
Generative adversarial networks (GANs) have recently found applications in image editing. However, most GAN-based image editing methods either require large-scale datasets with semantic segmentation annotations for training, provide only high-level control, or merely interpolate between different images. Here we propose EditGAN, a novel method for high-quality, high-precision semantic image editing that allows users to edit images by modifying their highly detailed part segmentation masks, e.g., drawing a new mask for the headlight of a car. EditGAN builds on a GAN framework that jointly models images and their semantic segmentations and requires only a handful of labeled examples, making it a scalable tool for editing. Specifically, we embed an image into the GAN latent space and perform conditional latent code optimization according to the segmentation edit, which effectively modifies the image. To amortize this optimization, we find editing vectors in latent space that realize the edits. The framework allows us to learn an arbitrary number of editing vectors, which can then be applied directly to other images at interactive rates. We show experimentally that EditGAN can manipulate images with an unprecedented level of detail and freedom while preserving full image quality. We can also easily combine multiple edits and perform plausible edits beyond EditGAN's training data. We demonstrate EditGAN on a wide variety of image types and quantitatively outperform several previous editing methods on standard editing benchmark tasks.
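A minimal sketch of the segmentation-conditioned latent optimization described above is given below. The interfaces are assumptions, not the EditGAN codebase: `generator(w)` returns an image, `segmenter(w)` returns per-pixel label logits from the same latent, and `region` is a binary mask marking the edited area.

```python
import torch
import torch.nn.functional as F

def optimize_edit(w_init, generator, segmenter, edited_mask, region, steps: int = 100, lr: float = 0.05):
    """Optimize a latent offset so the jointly generated segmentation matches the edited
    mask, while keeping pixels outside the edited region close to the original image."""
    delta = torch.zeros_like(w_init, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    image_ref = generator(w_init).detach()
    for _ in range(steps):
        w = w_init + delta
        seg_loss = F.cross_entropy(segmenter(w), edited_mask)              # match the edited part mask
        keep_loss = ((generator(w) - image_ref) * (1 - region)).abs().mean()  # preserve untouched regions
        loss = seg_loss + 10.0 * keep_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The resulting offset can also be stored as an editing vector and reused on other images.
    return (w_init + delta).detach()
```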
In this work, we are dedicated to text-guided image generation and propose a novel framework, i.e., CLIP2GAN, by leveraging CLIP model and StyleGAN. The key idea of our CLIP2GAN is to bridge the output feature embedding space of CLIP and the input latent space of StyleGAN, which is realized by introducing a mapping network. In the training stage, we encode an image with CLIP and map the output feature to a latent code, which is further used to reconstruct the image. In this way, the mapping network is optimized in a self-supervised learning way. In the inference stage, since CLIP can embed both image and text into a shared feature embedding space, we replace CLIP image encoder in the training architecture with CLIP text encoder, while keeping the following mapping network as well as StyleGAN model. As a result, we can flexibly input a text description to generate an image. Moreover, by simply adding mapped text features of an attribute to a mapped CLIP image feature, we can effectively edit the attribute to the image. Extensive experiments demonstrate the superior performance of our proposed CLIP2GAN compared to previous methods.
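The abstract leaves open whether the attribute addition happens in CLIP feature space or after the mapping network; the sketch below takes the former reading and uses placeholder encoders and a hypothetical mapper, so treat it as an illustration rather than the CLIP2GAN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Clip2LatentMapper(nn.Module):
    """Sketch: map a CLIP embedding to a StyleGAN latent code (dimensions are assumptions)."""
    def __init__(self, clip_dim: int = 512, latent_dim: int = 512, num_styles: int = 18):
        super().__init__()
        self.num_styles, self.latent_dim = num_styles, latent_dim
        self.net = nn.Sequential(
            nn.Linear(clip_dim, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, num_styles * latent_dim),
        )

    def forward(self, clip_feat: torch.Tensor) -> torch.Tensor:
        return self.net(clip_feat).view(-1, self.num_styles, self.latent_dim)

def train_step(mapper, clip_image_encoder, generator, images, optimizer):
    """Self-supervised step: reconstruct the image from its own CLIP embedding."""
    with torch.no_grad():
        feat = clip_image_encoder(images)
    loss = F.l1_loss(generator(mapper(feat)), images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def edit_with_text(mapper, generator, clip_image_encoder, clip_text_encoder, image, attribute_text, strength=1.0):
    """At inference, add the text feature of an attribute to the image feature before mapping."""
    feat = clip_image_encoder(image) + strength * clip_text_encoder(attribute_text)
    return generator(mapper(feat))
```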
Recent 3D-aware GANs rely on volumetric rendering techniques to disentangle the pose and appearance of objects, de facto generating entire 3D volumes rather than single-view 2D images from a latent code. Complex image editing tasks can be performed in standard 2D-based GANs (e.g., StyleGAN models) as manipulation of latent dimensions. However, to the best of our knowledge, similar properties have only been partially explored for 3D-aware GAN models. This work aims to fill this gap by showing the limitations of existing methods and proposing LatentSwap3D, a model-agnostic approach designed to enable attribute editing in the latent space of pre-trained 3D-aware GANs. We first identify the most relevant dimensions in the latent space of the model controlling the targeted attribute by relying on the feature importance ranking of a random forest classifier. Then, to apply the transformation, we swap the top-K most relevant latent dimensions of the image being edited with an image exhibiting the desired attribute. Despite its simplicity, LatentSwap3D provides remarkable semantic edits in a disentangled manner and outperforms alternative approaches both qualitatively and quantitatively. We demonstrate our semantic edit approach on various 3D-aware generative models such as pi-GAN, GIRAFFE, StyleSDF, MVCGAN, EG3D and VolumeGAN, and on diverse datasets, such as FFHQ, AFHQ, Cats, MetFaces, and CompCars. The project page can be found: \url{https://enisimsar.github.io/latentswap3d/}.
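The dimension-ranking-and-swapping procedure can be illustrated with a few lines of scikit-learn and NumPy; the variable names and the source of the attribute labels below are assumptions, not the LatentSwap3D code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def find_attribute_dims(latents: np.ndarray, labels: np.ndarray, k: int = 20) -> np.ndarray:
    """Rank latent dimensions by how well they predict the attribute, via a random forest."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(latents, labels)
    return np.argsort(clf.feature_importances_)[::-1][:k]    # top-K most relevant dimensions

def latent_swap(source: np.ndarray, reference: np.ndarray, top_dims: np.ndarray) -> np.ndarray:
    """Copy the attribute-controlling dimensions from a reference code exhibiting the attribute."""
    edited = source.copy()
    edited[top_dims] = reference[top_dims]
    return edited

# Hypothetical usage: `latents` (N, D) sampled from the 3D-aware GAN, `labels` from an attribute classifier.
# top = find_attribute_dims(latents, labels, k=20)
# w_edited = latent_swap(w_source, w_with_attribute, top)
```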
Despite the editing capacity demonstrated in the latent space of pretrained GAN models, inverting real-world images remains caught in a dilemma: the reconstruction cannot stay faithful to the original input. The main reason is the distribution misalignment between training data and real-world data, which makes inversion unstable for real image editing. In this paper, we propose a novel GAN-based editing framework that addresses this out-of-domain inversion problem via a composition-decomposition paradigm. In particular, during the composition phase we introduce a differential activation module that detects semantic changes from a global perspective, i.e., the relative gap between the features of the edited and unedited images. With the aid of the generated Diff-CAM mask, a coarse reconstruction can be intuitively composited from the paired original and edited images. In this way, attribute-irrelevant regions survive almost in their entirety, although the quality of this intermediate result is still limited by an unavoidable ghosting effect. Consequently, in the decomposition phase we further present a GAN-prior-based deghosting network that separates the final fine edited image from the coarse reconstruction. Extensive experiments show superiority over state-of-the-art methods in both qualitative and quantitative evaluations. The robustness and flexibility of our method are also validated in both single-attribute and multi-attribute manipulation scenarios.
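A minimal sketch of the mask-and-composite idea follows: a soft change mask derived from a feature gap, then a blend that keeps unedited regions from the original image. The feature resolution and normalization are assumptions; this is not the authors' Diff-CAM implementation.

```python
import torch
import torch.nn.functional as F

def diff_activation_mask(features_original, features_edited, eps: float = 1e-6):
    """Soft change mask from the relative gap between original and edited feature maps."""
    gap = (features_edited - features_original).abs().mean(dim=1, keepdim=True)   # (N, 1, h, w)
    mask = gap / (gap.amax(dim=(2, 3), keepdim=True) + eps)                        # normalise to [0, 1]
    # Assumes the features are 8x smaller than the images; upsample the mask to image resolution.
    return F.interpolate(mask, scale_factor=8, mode="bilinear")

def coarse_composition(original, edited, mask):
    """Blend: keep attribute-irrelevant regions from the original, take changes from the edited image."""
    return mask * edited + (1.0 - mask) * original
```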
Thanks to the exploration and exploitation of GAN latent spaces, real-world image manipulation has made remarkable progress in recent years. GAN inversion is the first step in this pipeline and aims to map a real image to a latent code faithfully. Unfortunately, most existing GAN inversion methods fail to meet at least one of the following three requirements: reconstruction quality, editability, and fast inference. In this work we present a novel two-phase strategy that satisfies all of these requirements simultaneously. In the first phase, we train an encoder to map the input image into the StyleGAN2 $\mathcal{W}$-space, which has been shown to have excellent editability but lower reconstruction quality. In the second phase, we supplement the reconstruction ability of the first phase by leveraging a series of hypernetworks to recover the information lost during inversion. These two steps complement each other, yielding high reconstruction quality thanks to the hypernetwork branch and excellent editability thanks to the inversion performed in the $\mathcal{W}$-space. Our method is entirely encoder-based, resulting in extremely fast inference. Extensive experiments on two challenging datasets demonstrate the superiority of our method.
Inverting real images into StyleGAN's latent space is a well-studied problem. Nevertheless, applying existing inversion methods to real-world scenarios remains an open challenge due to the inherent trade-off between reconstruction and editability: latent space regions that can accurately represent real images typically suffer from degraded semantic control. Recent work has proposed mitigating this trade-off by fine-tuning the generator so that the target image is added to well-behaved, editable regions of the latent space. While promising, such fine-tuning schemes are impractical for widespread use, since they require a lengthy training phase for every new image. In this work, we bring this approach into the realm of encoder-based inversion. We propose HyperStyle, a hypernetwork that learns to modulate StyleGAN's weights so that a given image is faithfully expressed in editable regions of the latent space. A naive modulation approach would require training a hypernetwork with over three billion parameters; through careful network design, we reduce this to be in line with existing encoders. HyperStyle produces reconstructions comparable to optimization-based techniques while retaining the near real-time inference of encoders. Finally, we demonstrate HyperStyle's effectiveness on several applications beyond the inversion task, including editing out-of-domain images.
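As an illustration of how a parameter-efficient weight-modulating hypernetwork can be structured (predicting channel-wise offsets rather than full kernels), consider the sketch below; the backbone, layer shapes, and offset parameterization are assumptions, not the HyperStyle architecture.

```python
import torch
import torch.nn as nn

class WeightModulationHypernetwork(nn.Module):
    """Sketch: predict per-output-channel weight offsets for selected generator conv layers
    from the gap between the target image and its initial reconstruction."""

    def __init__(self, layer_channels, feat_dim: int = 512):
        super().__init__()
        self.backbone = nn.Sequential(                      # encodes the (target, reconstruction) pair
            nn.Conv2d(6, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleList([nn.Linear(feat_dim, c) for c in layer_channels])

    def forward(self, target, reconstruction):
        feat = self.backbone(torch.cat([target, reconstruction], dim=1))
        return [head(feat) for head in self.heads]          # one offset vector per modulated layer

def modulate(conv_weight, offsets):
    """Apply offsets for a single image multiplicatively: W'_i = W_i * (1 + delta_i) per output channel.

    conv_weight: (out_ch, in_ch, k, k); offsets: (out_ch,).
    """
    return conv_weight * (1.0 + offsets.view(-1, 1, 1, 1))
```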
Thanks to its semantically meaningful understanding and user-friendly controllability, face image manipulation via three-dimensional guidance has been widely applied in various interactive scenarios. However, existing manipulation methods based on 3D morphable models (3DMMs) are not directly applicable to out-of-domain faces, such as non-photorealistic paintings, cartoon portraits, or even animals, mainly because of the substantial difficulty of building a model for each specific face domain. To overcome this challenge, we propose, to the best of our knowledge, the first method that uses a human 3DMM to manipulate faces in arbitrary domains. This is achieved through two main steps: 1) a disentangled mapping from 3DMM parameters to the latent space embedding of a pretrained StyleGAN2, which ensures disentangled and precise control over each semantic attribute; and 2) a cross-domain adaptation that bridges the domain gap and makes the human 3DMM applicable to out-of-domain faces by enforcing a consistent latent space embedding. Experiments and comparisons demonstrate the superiority of our high-quality semantic manipulation method across various face domains, with all major 3D facial attributes controllable: pose, expression, shape, albedo, and illumination. Moreover, we develop an intuitive editing interface to support user-friendly control and instant feedback. Our project page is https://cassiepython.github.io/cddfm3d/index.html
Stone" "Mohawk hairstyle" "Without makeup" "Cute cat" "Lion" "Gothic church" * Equal contribution, ordered alphabetically. Code and video are available on https://github.com/orpatashnik/StyleCLIP
Generating images from hand drawings is a crucial and fundamental task in content creation. The translation is difficult because infinite possibilities exist and different users usually expect different outcomes. We therefore propose a unified framework that supports three-dimensional control over diffusion-model-based image synthesis from sketches and strokes. Users can decide not only the level of faithfulness to the input strokes and sketch, but also the degree of realism, since user inputs are usually not consistent with real images. Qualitative and quantitative experiments demonstrate that our framework achieves state-of-the-art performance while providing the flexibility to generate customized images with control over shape, color, and realism. Moreover, our method unlocks applications such as editing on real images, generation from partial sketches and strokes, and multi-domain, multi-modal synthesis.
The introduction of high-quality image generation models, particularly the StyleGAN family, provides a powerful tool to synthesize and manipulate images. However, existing models are built upon high-quality (HQ) data as desired outputs, making them unfit for in-the-wild low-quality (LQ) images, which are common inputs for manipulation. In this work, we bridge this gap by proposing a novel GAN structure that allows for generating images with controllable quality. The network can synthesize various image degradation and restore the sharp image via a quality control code. Our proposed QC-StyleGAN can directly edit LQ images without altering their quality by applying GAN inversion and manipulation techniques. It also provides for free an image restoration solution that can handle various degradations, including noise, blur, compression artifacts, and their mixtures. Finally, we demonstrate numerous other applications such as image degradation synthesis, transfer, and interpolation.
Can a generative model be trained to generate images from a specific domain, guided only by a text prompt, without seeing any image? In other words: can an image generator be trained "blindly"? Leveraging the semantic power of large-scale Contrastive Language-Image Pre-training (CLIP) models, we present a text-driven method that allows shifting a generative model to new domains without having to collect even a single image. We show that, through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains characterized by diverse styles and shapes. Notably, many of these modifications would be difficult or outright impossible to achieve with existing methods. We conduct extensive experiments and comparisons across a wide range of domains. These demonstrate the effectiveness of our approach and show that our shifted models maintain the latent-space properties that make generative models appealing for downstream tasks.
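One common way to realize such text-only domain adaptation is to fine-tune a copy of the generator with a CLIP-space directional loss between a source and a target prompt; the sketch below illustrates that idea with generic callables (`encode_image`, the frozen and trainable generators, and precomputed text features are assumptions) and is not presented as the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def clip_directional_loss(encode_image, img_src, img_adapted, src_text_feat, tgt_text_feat):
    """Align the CLIP-space direction between images with the direction between text prompts."""
    img_dir = encode_image(img_adapted) - encode_image(img_src)
    txt_dir = tgt_text_feat - src_text_feat
    return 1.0 - F.cosine_similarity(img_dir, txt_dir, dim=-1).mean()

def adapt_step(g_frozen, g_train, encode_image, src_feat, tgt_feat, optimizer, z_dim=512, batch=4):
    """One adaptation step: identical latents through a frozen copy and the trainable generator."""
    z = torch.randn(batch, z_dim)
    loss = clip_directional_loss(encode_image, g_frozen(z).detach(), g_train(z), src_feat, tgt_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```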