Existing GAN inversion and editing methods work well for aligned objects with clean backgrounds, such as portraits and animal faces, but often struggle for harder categories with complex scene layouts and object occlusions, such as cars, animals, and outdoor images. We propose a new method to invert and edit such complex images in the latent space of GANs, such as StyleGAN2. Our key idea is to explore inversion with a collection of layers, thereby adapting the inversion process to the difficulty of the image. We learn to predict the "invertibility" of different image segments and project each segment into a latent layer. Easier regions can be inverted into an earlier layer in the generator's latent space, while more challenging regions can be inverted into a later feature space. Experiments show that our method obtains better inversion results on complex categories compared to state-of-the-art approaches, while maintaining downstream editability. Please see our project page at https://www.cs.cmu.edu/~saminversion.
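A minimal sketch of the segment-to-layer routing described above; the predictor interface, the layer names, and the thresholds are illustrative assumptions rather than the paper's actual design:

```python
def assign_inversion_layers(segments, predict_invertibility, thresholds=(0.8, 0.5)):
    """Hypothetical routing step: easy segments go to an early latent space,
    harder segments to progressively later (more expressive) feature spaces."""
    assignments = {}
    for name, segment in segments.items():
        score = predict_invertibility(segment)        # predicted "invertibility" in [0, 1]
        if score > thresholds[0]:
            assignments[name] = "latent (W+)"         # easy region
        elif score > thresholds[1]:
            assignments[name] = "mid feature layer"
        else:
            assignments[name] = "late feature layer"  # most challenging region
    return assignments
```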
We present a novel image inversion framework and a training pipeline to achieve high-fidelity image inversion with high-quality attribute editing. Inverting real images into StyleGAN's latent space is an extensively studied problem, yet the trade-off between the image reconstruction fidelity and image editing quality remains an open challenge. The low-rate latent spaces are limited in their expressive power for high-fidelity reconstruction. On the other hand, high-rate latent spaces result in degradation in editing quality. In this work, to achieve high-fidelity inversion, we learn residual features in higher latent codes that lower latent codes were not able to encode. This enables preserving image details in reconstruction. To achieve high-quality editing, we learn how to transform the residual features for adapting to manipulations in latent codes. We train the framework to extract residual features and transform them via a novel architecture pipeline and cycle consistency losses. We run extensive experiments and compare our method with state-of-the-art inversion methods. Quantitative metrics and visual comparisons show significant improvements. Code: https://github.com/hamzapehlivan/StyleRes
Thanks to the exploration and exploitation of GAN latent spaces, real-world image manipulation has achieved remarkable progress in recent years. GAN inversion is the first step in this pipeline and aims to map a real image to a latent code faithfully. Unfortunately, most existing GAN inversion methods fail to meet at least one of the following three requirements: high reconstruction quality, editability, and fast inference. In this study, we present a novel two-phase strategy that fits all three requirements simultaneously. In the first phase, we train an encoder to map the input image into the StyleGAN2 $\mathcal{W}$-space, which has been shown to have excellent editability but lower reconstruction quality. In the second phase, we complement the reconstruction ability of the first phase by leveraging a series of hypernetworks to recover the missing information during inversion. These two phases complement each other: the hypernetwork branch yields high reconstruction quality, while inverting in the $\mathcal{W}$-space preserves excellent editability. Our method is entirely encoder-based, resulting in extremely fast inference. Extensive experiments on two challenging datasets demonstrate the superiority of our method.
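A toy PyTorch sketch of the second-phase idea, assuming a hypernetwork that predicts per-layer offsets for the generator's convolution weights from features of the input image and of the first-phase reconstruction (all names and shapes below are illustrative, not the paper's implementation):

```python
import math
import torch
import torch.nn as nn

class WeightOffsetHypernetwork(nn.Module):
    """Predicts one additive weight offset per generator layer from the
    concatenated features of the real image and its W-space reconstruction."""
    def __init__(self, feat_dim, layer_shapes):
        super().__init__()
        self.layer_shapes = layer_shapes
        self.heads = nn.ModuleList(
            nn.Linear(2 * feat_dim, math.prod(shape)) for shape in layer_shapes
        )

    def forward(self, real_feat, recon_feat):
        h = torch.cat([real_feat, recon_feat], dim=-1)
        # One offset tensor per generator convolution layer.
        return [head(h).view(-1, *shape)
                for head, shape in zip(self.heads, self.layer_shapes)]

# Usage sketch (E = W-space encoder, G = generator, phi = a feature extractor):
#   w       = E(x)
#   offsets = hyper(phi(x), phi(G(w)))
#   x_hat   = G(w, weight_offsets=offsets)   # generator adds the offsets to its conv weights
```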
The inversion of real images into StyleGAN's latent space is a well-studied problem. Nevertheless, applying existing approaches to real-world scenarios remains an open challenge due to the inherent trade-off between reconstruction and editability: latent space regions that can accurately represent a real image typically suffer from degraded semantic control. Recent work proposes to mitigate this trade-off by fine-tuning the generator so that the target image is added to well-behaved, editable regions of the latent space. While promising, such a fine-tuning scheme is impractical for widespread use, as it requires a lengthy training phase for every new image. In this work, we introduce this approach into the realm of encoder-based inversion. We propose HyperStyle, a hypernetwork that learns to modulate StyleGAN's weights to faithfully express a given image in editable regions of the latent space. A naive modulation approach would require training a hypernetwork with over three billion parameters. Through careful network design, we reduce this to be in line with existing encoders. HyperStyle yields reconstructions comparable to those of optimization techniques, with the near real-time inference capability of encoders. Lastly, we demonstrate HyperStyle's effectiveness on several applications beyond the inversion task, including the editing of out-of-domain images.
As methods develop, inversion is mainly divided into two steps. The first step is image embedding, in which an encoder or an optimization process embeds the image to obtain the corresponding latent code. The second step then aims to refine the inversion and editing results, which we name result refinement. Although the second step significantly improves fidelity, perception and editability barely change and deeply depend on the inverse latent code obtained in the first step. Therefore, a crucial problem is obtaining latent codes with better perception and editability while preserving reconstruction fidelity. In this work, we first point out that these two characteristics are related to the degree of alignment (or misalignment) between the inverse codes and the synthetic distribution. Then, we propose the Latent Space Alignment Inversion Paradigm (LSAP), which consists of an evaluation metric and a solution. Specifically, we introduce the Normalized Style Space ($\mathcal{S^N}$ space) and the $\mathcal{S^N}$ Cosine Distance (SNCD) to measure the misalignment of inversion methods. Since our proposed SNCD is differentiable, it can be optimized in both encoder-based and optimization-based embedding methods to provide a uniform solution. Extensive experiments in various domains demonstrate that SNCD effectively reflects perception and editability, and that our alignment paradigm achieves state-of-the-art results in both steps. Code is available at https://github.com/caopulan/ganinverter.
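A plausible way to write the cosine-distance measure described above (our notation; the exact normalization used by the paper may differ): with an inverted style code $s$, a style code $\bar{s}$ drawn from the synthetic distribution, and normalized codes $s^{n} = (s - \mu_s)/\sigma_s$, the misalignment could be measured as

$$ \mathrm{SNCD}(s, \bar{s}) \;=\; 1 - \frac{\langle s^{n}, \bar{s}^{n} \rangle}{\lVert s^{n} \rVert_{2}\,\lVert \bar{s}^{n} \rVert_{2}}, $$

which is differentiable in $s$ and can therefore be added as a regularizer to either an encoder's training loss or a per-image optimization objective.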
Our goal with this survey is to provide an overview of state-of-the-art deep learning technologies for face generation and editing. We will cover popular latest architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantics editing and preserving photo quality. We aim to provide an entry point into the field for readers who have basic knowledge about the field of deep learning and are looking for an accessible introduction and overview.
Generative adversarial networks (GANs) have recently found applications in image editing. However, most GAN-based image editing methods often require large-scale datasets with semantic segmentation annotations for training, only provide high-level control, or merely interpolate between different images. Here, we propose EditGAN, a novel method for high-quality, high-precision semantic image editing, allowing users to edit images by modifying their highly detailed part segmentation masks, e.g., drawing a new mask for the headlight of a car. EditGAN builds on a GAN framework that jointly models images and their semantic segmentations, requiring only a handful of labeled examples, which makes it a scalable tool for editing. Specifically, we embed an image into the GAN's latent space and perform conditional latent code optimization according to the segmentation edit, which effectively modifies the image. To amortize the optimization, we find editing vectors in latent space that realize the edits. The framework allows us to learn an arbitrary number of editing vectors, which can then be directly applied to other images at interactive rates. We experimentally show that EditGAN can manipulate images with an unprecedented level of detail and freedom while preserving full image quality. We can also easily combine multiple edits and perform plausible edits beyond EditGAN's training data. We demonstrate EditGAN on a wide variety of image types and quantitatively outperform several previous editing methods on standard editing benchmark tasks.
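Under assumed notation (not taken from the paper), the conditional latent optimization step can be sketched as minimizing, over the latent code, a segmentation term on the edited region together with an appearance-preservation term outside it:

$$ w^{*} = \arg\min_{w}\; \mathcal{L}_{\mathrm{seg}}\big(S(w),\, y_{\mathrm{edit}}\big) \;+\; \lambda\,\big\lVert \big(G(w) - x\big)\odot(1-m) \big\rVert_{1}, $$

where $G$ and $S$ are the jointly modelled image and segmentation branches, $y_{\mathrm{edit}}$ is the user-edited mask, $m$ marks the edited region, and $x$ is the original image; the resulting editing vector $\delta w = w^{*} - w_{0}$ is what gets reused on other images at interactive rates.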
Figure 1. The proposed pixel2style2pixel framework can be used to solve a wide variety of image-to-image translation tasks. Here we show results of pSp on StyleGAN inversion, multi-modal conditional image synthesis, facial frontalization, inpainting and super-resolution.
Recent work has shown that a variety of semantics emerge in the latent space of Generative Adversarial Networks (GANs) when being trained to synthesize images. However, it is difficult to use these learned semantics for real image editing. A common practice of feeding a real image to a trained GAN generator is to invert it back to a latent code. However, existing inversion methods typically focus on reconstructing the target image by pixel values yet fail to land the inverted code in the semantic domain of the original latent space. As a result, the reconstructed image cannot well support semantic editing through varying the inverted code. To solve this problem, we propose an in-domain GAN inversion approach, which not only faithfully reconstructs the input image but also ensures the inverted code to be semantically meaningful for editing. We first learn a novel domain-guided encoder to project a given image to the native latent space of GANs. We then propose domain-regularized optimization by involving the encoder as a regularizer to fine-tune the code produced by the encoder and better recover the target image. Extensive experiments suggest that our inversion method achieves satisfying real image reconstruction and more importantly facilitates various image editing tasks, significantly outperforming the state of the art.
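The domain-regularized optimization can be sketched as follows (the weights and exact norms are assumptions, not the paper's reported settings):

$$ z^{*} = \arg\min_{z}\; \lVert x - G(z)\rVert_2 \;+\; \lambda_{\mathrm{per}}\,\lVert F(x) - F(G(z))\rVert_2 \;+\; \lambda_{\mathrm{dom}}\,\lVert z - E(G(z))\rVert_2, $$

where $G$ is the generator, $E$ the domain-guided encoder, and $F$ a perceptual feature extractor; the last term keeps the optimized code close to what the encoder would predict for its own reconstruction, i.e., inside the semantic domain of the latent space.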
GAN inversion aims to invert an input image into the latent space of a pre-trained GAN. Despite recent advances, there remains a challenge in mitigating the trade-off between distortion and editability, i.e., reconstructing the input image accurately while editing the inverted image with only a small drop in visual quality. The recently proposed pivotal tuning model makes significant progress with a two-step approach that first inverts the input image into a latent code, called the pivot code, and then alters the generator so that the input image can be accurately mapped to the pivot code. Here, we show that both reconstruction and editability can be improved by a proper design of the pivot code. We present a simple yet effective method, named cycle encoding, for producing a high-quality pivot code. The key idea of our method is to progressively train an encoder in varied spaces according to a cycle scheme: W -> W+ -> W. This training scheme preserves the properties of both spaces, namely the low distortion of W+ and the high editability of W. To further reduce distortion, we also propose to refine the pivot code with an optimization-based method, in which a regularization term is introduced to reduce the degradation in editability. Qualitative and quantitative comparisons with several state-of-the-art methods demonstrate the superiority of our approach.
The introduction of high-quality image generation models, particularly the StyleGAN family, provides a powerful tool to synthesize and manipulate images. However, existing models are built upon high-quality (HQ) data as desired outputs, making them unfit for in-the-wild low-quality (LQ) images, which are common inputs for manipulation. In this work, we bridge this gap by proposing a novel GAN structure that allows for generating images with controllable quality. The network can synthesize various image degradation and restore the sharp image via a quality control code. Our proposed QC-StyleGAN can directly edit LQ images without altering their quality by applying GAN inversion and manipulation techniques. It also provides for free an image restoration solution that can handle various degradations, including noise, blur, compression artifacts, and their mixtures. Finally, we demonstrate numerous other applications such as image degradation synthesis, transfer, and interpolation.
Recent 3D-aware GANs rely on volumetric rendering techniques to disentangle the pose and appearance of objects, de facto generating entire 3D volumes rather than single-view 2D images from a latent code. Complex image editing tasks can be performed in standard 2D-based GANs (e.g., StyleGAN models) as manipulation of latent dimensions. However, to the best of our knowledge, similar properties have only been partially explored for 3D-aware GAN models. This work aims to fill this gap by showing the limitations of existing methods and proposing LatentSwap3D, a model-agnostic approach designed to enable attribute editing in the latent space of pre-trained 3D-aware GANs. We first identify the most relevant dimensions in the latent space of the model controlling the targeted attribute by relying on the feature importance ranking of a random forest classifier. Then, to apply the transformation, we swap the top-K most relevant latent dimensions of the image being edited with an image exhibiting the desired attribute. Despite its simplicity, LatentSwap3D provides remarkable semantic edits in a disentangled manner and outperforms alternative approaches both qualitatively and quantitatively. We demonstrate our semantic edit approach on various 3D-aware generative models such as pi-GAN, GIRAFFE, StyleSDF, MVCGAN, EG3D and VolumeGAN, and on diverse datasets, such as FFHQ, AFHQ, Cats, MetFaces, and CompCars. The project page can be found at https://enisimsar.github.io/latentswap3d/.
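A compact sketch of the swapping step described above, using an illustrative scikit-learn classifier; the number of swapped dimensions and the exact ranking setup are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def top_k_swap(latent_edit, latent_ref, latents, labels, k=20):
    """Illustrative LatentSwap3D-style edit: rank latent dimensions by how well
    they predict the target attribute, then copy the top-k dimensions from a
    reference code exhibiting the attribute into the code being edited."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(latents, labels)                            # labels: attribute present / absent
    top_dims = np.argsort(clf.feature_importances_)[::-1][:k]

    edited = latent_edit.copy()
    edited[top_dims] = latent_ref[top_dims]             # swap the most relevant dimensions
    return edited
```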
Figure 1: Multi-code GAN prior facilitates many image processing applications using the reconstruction from fixed PGGAN [23] models: (a) image reconstruction, (b) image colorization, (c) image super-resolution, (d) image denoising, (e) image inpainting, and (f) semantic manipulation.
Inverting a generative adversarial network (GAN) makes it possible to use a pre-trained generator to facilitate a wide range of image editing tasks. Existing methods typically employ the latent space of the GAN as the inversion space, yet observe insufficient recovery of spatial details. In this work, we propose involving the padding space of the generator to complement the latent space with spatial information. Concretely, we replace the constant padding (e.g., usually zeros) used in convolution layers with instance-aware coefficients. In this way, the inductive bias assumed in the pre-trained model can be appropriately adapted to fit each individual image. Through learning a carefully designed encoder, we manage to improve the inversion quality both qualitatively and quantitatively, outperforming existing alternatives. We then demonstrate that such a space extension barely affects the native GAN manifold, hence we can still reuse the prior knowledge learned by GANs for various downstream applications. Beyond the editing tasks explored in prior arts, our approach allows for more flexible image manipulation, such as separate control of face contour and facial details, and enables a novel editing manner in which users can customize their own manipulations highly efficiently.
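A minimal PyTorch sketch of the padding idea, assuming a 3x3 convolution and a per-instance padding coefficient predicted by the encoder (the interface is ours, not the paper's):

```python
import torch
import torch.nn.functional as F

def conv_with_instance_padding(x, weight, pad_coeff):
    """Instead of the constant zero padding of a 3x3 convolution, fill the
    one-pixel border with per-instance coefficients from the encoder.

    x:         (B, C, H, W) feature map
    weight:    (C_out, C, 3, 3) convolution kernel
    pad_coeff: (B, C, 1, 1) instance-aware padding values
    """
    b, c, h, w = x.shape
    padded = pad_coeff.expand(b, c, h + 2, w + 2).clone()  # border filled with coefficients
    padded[:, :, 1:-1, 1:-1] = x                           # interior keeps the original features
    return F.conv2d(padded, weight)                        # no extra padding needed
```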
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views while preserving specific details of the input image. High-fidelity 3D GAN inversion is inherently challenging due to the geometry-texture trade-off in 3D inversion, where overfitting to a single view input image often damages the estimated geometry during the latent optimization. To solve this challenge, we propose a novel pipeline that builds on the pseudo-multi-view estimation with visibility analysis. We keep the original textures for the visible parts and utilize generative priors for the occluded parts. Extensive experiments show that our approach achieves advantageous reconstruction and novel view synthesis quality over state-of-the-art methods, even for images with out-of-distribution textures. The proposed pipeline also enables image attribute editing with the inverted latent code and 3D-aware texture modification. Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
Generative adversarial networks (GANs) have demonstrated impressive image generation quality and semantic editing capabilities for real images, such as changing object classes, modifying attributes, or transferring styles. However, applying these GAN-based editing techniques to a video frame by frame inevitably leads to temporal flickering artifacts. We present a simple yet effective method to facilitate temporally coherent video editing. Our core idea is to minimize the temporal photometric inconsistency by optimizing both the latent code and the pre-trained generator. We evaluate the quality of our editing across different domains and GAN inversion techniques and show favorable results against the baselines.
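One plausible way to write the temporal objective sketched above (our notation, not necessarily the paper's): with per-frame edited latent codes $w_t$, generator parameters $\theta$, and a warp $\mathcal{W}_{t\rightarrow t+1}$ given by the optical flow of the original video, jointly optimize

$$ \min_{\{w_t\},\,\theta}\; \sum_{t} \big\lVert \mathcal{W}_{t\rightarrow t+1}\big(G_{\theta}(w_t)\big) - G_{\theta}(w_{t+1}) \big\rVert_{1}, $$

so that photometric differences between neighbouring edited frames, after flow alignment, are penalized.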
The fidelity of Generative Adversarial Networks (GAN) inversion is impeded by Out-Of-Domain (OOD) areas (e.g., background, accessories) in the image. Detecting the OOD areas beyond the generation ability of the pretrained model and blending these regions with the input image can enhance fidelity. The "invertibility mask" figures out these OOD areas, and existing methods predict the mask with the reconstruction error. However, the estimated mask is usually inaccurate due to the influence of the reconstruction error in the In-Domain (ID) area. In this paper, we propose a novel framework that enhances the fidelity of human face inversion by designing a new module to decompose the input images to ID and OOD partitions with invertibility masks. Unlike previous works, our invertibility detector is simultaneously learned with a spatial alignment module. We iteratively align the generated features to the input geometry and reduce the reconstruction error in the ID regions. Thus, the OOD areas are more distinguishable and can be precisely predicted. Then, we improve the fidelity of our results by blending the OOD areas from the input image with the ID GAN inversion results. Our method produces photo-realistic results for real-world human face image inversion and manipulation. Extensive experiments demonstrate our method's superiority over existing methods in the quality of GAN inversion and attribute manipulation.
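The final compositing step described above amounts to a simple mask-based blend; a sketch under an assumed interface:

```python
def blend_ood_regions(input_img, inversion_img, ood_mask):
    """Keep the GAN inversion result in the in-domain regions and copy the
    out-of-domain pixels (e.g., background, accessories) from the input image.
    All arguments are assumed to be image tensors/arrays of the same shape,
    with ood_mask in [0, 1] (1 = out-of-domain)."""
    return ood_mask * input_img + (1.0 - ood_mask) * inversion_img
```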
Thanks to its semantic-level understanding and user-friendly controllability, face image manipulation via three-dimensional guidance has been widely applied in various interactive scenarios. However, existing manipulation methods based on 3D morphable models (3DMM) are not directly applicable to out-of-domain faces, such as non-photorealistic paintings, cartoon portraits, or even animals, mainly due to the formidable difficulty of building a model for each specific face domain. To overcome this challenge, we propose, to the best of our knowledge, the first method to manipulate faces of arbitrary domains using a human 3DMM. This is achieved through two major steps: 1) a disentangled mapping from 3DMM parameters to the latent space embedding of a pre-trained StyleGAN2, which guarantees disentangled and precise control over each semantic attribute; and 2) cross-domain adaptation that bridges the domain discrepancy and makes the human 3DMM applicable to out-of-domain faces by enforcing a consistent latent space embedding. Experiments and comparisons demonstrate the superiority of our high-quality semantic manipulation method across various face domains, with all major 3D facial attributes controllable: pose, expression, shape, albedo, and illumination. Moreover, we develop an intuitive editing interface to support user-friendly control and instant feedback. Our project page is https://cassiepython.github.io/cddfm3d/index.html.
Editing hairstyles is unique and challenging due to the complexity and delicacy of hair. Although recent approaches significantly improve hair details, these models often produce undesirable outputs when the pose of a source image differs considerably from that of a target hair image, limiting their real-world applications. A recently proposed pose-invariant hairstyle transfer model alleviates this limitation, yet still shows unsatisfactory quality in preserving delicate hair textures. To address these limitations, we propose a high-performing pose-invariant hairstyle transfer model equipped with latent optimization and a newly presented local style-matching loss. In the StyleGAN2 latent space, we first find a pose-aligned latent code of the target hair with its detailed textures preserved based on local style matching. Then, our model inpaints the occluded regions of the source with respect to the aligned target hair and blends the two images to produce the final output. Experimental results demonstrate that our model excels at transferring a hairstyle under larger pose differences while preserving local hairstyle textures.
Generative adversarial networks (GANs) have attracted enormous attention due to their simple yet effective training mechanism and superior image generation quality. With the ability to generate photo-realistic high-resolution images (e.g., $1024\times1024$), recent GAN models have greatly narrowed the gap between generated images and real ones. Consequently, many recent works show an emerging interest in taking advantage of pre-trained GAN models by exploiting the well-disentangled latent space and the learned GAN priors. In this paper, we briefly review recent progress on leveraging pre-trained large-scale GAN models from three aspects: 1) the training of large-scale generative adversarial networks, 2) exploring and understanding the pre-trained GAN models, and 3) leveraging these models for subsequent tasks such as image restoration and editing. More information about relevant methods and repositories can be found at https://github.com/csmliu/pretretaining-gans.