We present VecGAN, an image-to-image translation framework for facial attribute editing with interpretable latent directions. Facial attribute editing poses the challenge of editing the target attribute precisely, with controllable strength, while preserving the other attributes of the image. For this goal, we design attribute editing via latent space factorization: for each attribute, we learn a linear direction that is orthogonal to the directions of the other attributes. The other component is the controllable strength of the change, a scalar value. In our framework, this scalar can be either sampled or encoded from a reference image by projection. Our work is inspired by latent space factorization works on fixed pretrained GANs. However, while those models cannot be trained end-to-end and struggle to edit encoded images precisely, VecGAN is trained end-to-end for the image translation task and succeeds in editing an attribute while preserving the others. Our extensive experiments show that VecGAN achieves significant improvements over the state of the art for both local and global edits.
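As a reading aid, here is a minimal Python sketch of the two operations the abstract describes: a linear edit along a learned attribute direction, and recovery of the strength scalar from a reference latent by projection. All names are hypothetical; in the actual method the directions are learned jointly with the translator.

```python
import torch

def edit(z, direction, alpha):
    """Shift a latent code along a (unit-normalized) attribute direction
    with scalar strength alpha: z' = z + alpha * d."""
    d = direction / direction.norm()
    return z + alpha * d

def encode_strength(z_ref, direction):
    """Read the attribute strength off a reference latent by projecting it
    onto the attribute direction (the abstract's 'encoded by projection')."""
    d = direction / direction.norm()
    return (z_ref * d).sum(dim=-1, keepdim=True)
```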
We present a novel image inversion framework and a training pipeline to achieve high-fidelity image inversion with high-quality attribute editing. Inverting real images into StyleGAN's latent space is an extensively studied problem, yet the trade-off between image reconstruction fidelity and image editing quality remains an open challenge. Low-rate latent spaces are limited in their expressive power for high-fidelity reconstruction. On the other hand, high-rate latent spaces result in degraded editing quality. In this work, to achieve high-fidelity inversion, we learn residual features in higher-rate latent codes that the lower-rate latent codes were not able to encode. This enables preserving image details in reconstruction. To achieve high-quality editing, we learn how to transform the residual features so that they adapt to manipulations in the latent codes. We train the framework to extract residual features and transform them via a novel architecture pipeline and cycle consistency losses. We run extensive experiments and compare our method with state-of-the-art inversion methods. Quantitative metrics and visual comparisons show significant improvements. Code: https://github.com/hamzapehlivan/StyleRes
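A rough sketch of how such a residual pathway could look, under our own assumptions about module shapes (the paper's actual architecture and feature resolutions are not reproduced here): residual features are extracted from what the low-rate inversion failed to reconstruct, then conditioned on the edited generator features before being re-injected.

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Hypothetical sketch: encode what the coarse inversion missed, then
    transform it to stay consistent with an edited latent code."""

    def __init__(self, feat_ch=512):
        super().__init__()
        self.res_encoder = nn.Conv2d(3, feat_ch, 3, padding=1)  # assumed E_res
        self.adapter = nn.Conv2d(2 * feat_ch, feat_ch, 1)       # assumed T

    def forward(self, x, x_coarse, gen_feat_edited):
        # x, x_coarse: (B, 3, H, W); gen_feat_edited: (B, feat_ch, H, W)
        res = self.res_encoder(x - x_coarse)  # details the inversion missed
        return self.adapter(torch.cat([res, gen_feat_edited], dim=1))
```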
Attribute manipulation aims to control specified attributes in a given image. Prior work addresses this problem by learning disentangled representations for each attribute, enabling the encoded source attribute to be manipulated toward the target attribute. However, the encoded attributes are often correlated with the associated image content, so source attribute information can remain hidden in the disentangled features, causing unwanted image editing effects. In this paper, we propose an Attribute Information Removal and Reconstruction (AIRR) network that avoids such information leakage by learning to completely remove attribute information, creating attribute-excluded features, and then learning to inject the desired attributes directly into the reconstructed image. We evaluate our method on four diverse datasets with a variety of attributes, including DeepFashion Synthesis, DeepFashion Fine-grained Attribute, CelebA and CelebA-HQ, where our model improves attribute manipulation accuracy and top-k retrieval rate by 10%. A user study also reports that AIRR-manipulated images are preferred over prior work in up to 76% of cases.
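Our loose reading of the remove-then-inject idea, as a schematic sketch: one encoder produces attribute-excluded content features, and the decoder reconstructs the image with a desired attribute embedding injected. Module names, sizes, and the attribute-removal training objective itself are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class AIRRSketch(nn.Module):
    """Schematic remove-then-inject pipeline (shapes are assumptions)."""

    def __init__(self, n_attrs=40, feat_ch=256):
        super().__init__()
        self.content_enc = nn.Conv2d(3, feat_ch, 3, padding=1)
        self.attr_embed = nn.Linear(n_attrs, feat_ch)
        self.decoder = nn.Conv2d(feat_ch, 3, 3, padding=1)

    def forward(self, x, a_desired):
        # attribute-excluded features (enforced by training, not shown here)
        content = self.content_enc(x)
        a = self.attr_embed(a_desired)[:, :, None, None]
        return self.decoder(content + a)  # inject the desired attribute
```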
High-realism unconstrained image generation is now possible using recent generative adversarial networks (GANs). However, generating an image with a given set of attributes remains very challenging. Recent methods use style-based GAN models to perform image editing by leveraging the semantic hierarchy present in the generator's layers. We propose FLAME (few-shot latent-based attribute manipulation and editing), a simple yet effective framework that performs highly controlled image editing via latent space manipulation. Specifically, we estimate linear directions in the latent space (of a pretrained StyleGAN) that control semantic attributes in the generated images. In contrast to previous methods that rely on large-scale attribute-labeled datasets or attribute classifiers, FLAME uses minimal supervision from a few curated image pairs to estimate disentangled edit directions. FLAME can perform both high-precision and sequential edits on a variety of images while preserving identity. Further, we propose the novel task of attribute style manipulation to generate diverse styles for attributes such as eyeglasses and hair. We first encode a set of synthetic images of the same identity but with different attribute styles into the latent space to estimate an attribute style manifold. Sampling a new latent from this manifold yields a new attribute style in the generated image. We propose a novel sampling method to sample latents from the manifold that allows us to generate diverse attribute styles beyond those present in the training set. FLAME can generate diverse attribute styles in a disentangled manner. We illustrate the superior performance of FLAME over previous image editing methods through extensive qualitative and quantitative comparisons. FLAME also generalizes well to multiple datasets, such as cars and churches.
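The few-shot direction estimation can be pictured as averaging latent differences over curated pairs. A minimal sketch, assuming matched latents are already available; FLAME's actual estimator may be more elaborate:

```python
import torch

def estimate_direction(w_pos, w_neg):
    """Estimate a linear edit direction from a few curated pairs.
    w_pos, w_neg: (N, d) latents of matched images with and without the
    attribute. Averaging differences is the simplest few-shot estimator."""
    d = (w_pos - w_neg).mean(dim=0)
    return d / d.norm()
```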
Recent 3D-aware GANs rely on volumetric rendering techniques to disentangle the pose and appearance of objects, de facto generating entire 3D volumes rather than single-view 2D images from a latent code. Complex image editing tasks can be performed in standard 2D-based GANs (e.g., StyleGAN models) as manipulation of latent dimensions. However, to the best of our knowledge, similar properties have only been partially explored for 3D-aware GAN models. This work aims to fill this gap by showing the limitations of existing methods and proposing LatentSwap3D, a model-agnostic approach designed to enable attribute editing in the latent space of pre-trained 3D-aware GANs. We first identify the most relevant dimensions in the latent space of the model controlling the targeted attribute by relying on the feature importance ranking of a random forest classifier. Then, to apply the transformation, we swap the top-K most relevant latent dimensions of the image being edited with those of an image exhibiting the desired attribute. Despite its simplicity, LatentSwap3D provides remarkable semantic edits in a disentangled manner and outperforms alternative approaches both qualitatively and quantitatively. We demonstrate our semantic edit approach on various 3D-aware generative models such as pi-GAN, GIRAFFE, StyleSDF, MVCGAN, EG3D and VolumeGAN, and on diverse datasets, such as FFHQ, AFHQ, Cats, MetFaces, and CompCars. The project page can be found at \url{https://enisimsar.github.io/latentswap3d/}.
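The abstract describes a concrete two-step recipe that is easy to sketch with scikit-learn: rank latent dimensions by random-forest feature importance, then swap the top-K dimensions. Function names and hyperparameters here are illustrative, not the paper's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def find_relevant_dims(latents, labels, k=20):
    """Rank latent dimensions by relevance to a binary attribute using
    random-forest feature importances; return the top-k indices."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(latents, labels)
    return np.argsort(rf.feature_importances_)[::-1][:k]

def latent_swap(z_src, z_attr, top_k_dims):
    """Swap the top-K attribute-relevant dimensions of the latent being
    edited with those of a latent exhibiting the desired attribute."""
    z_out = z_src.copy()
    z_out[top_k_dims] = z_attr[top_k_dims]
    return z_out
```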
Facial image manipulation has achieved great progress in recent years. However, previous methods either operate on a predefined set of face attributes or leave users little freedom to interactively manipulate images. To overcome these drawbacks, we propose a novel framework termed MaskGAN, enabling diverse and interactive face manipulation. Our key insight is that semantic masks serve as a suitable intermediate representation for flexible face manipulation with fidelity preservation. MaskGAN has two main components: 1) Dense Mapping Network (DMN) and 2) Editing Behavior Simulated Training (EBST). Specifically, DMN learns style mapping between a free-form user modified mask and a target image, enabling diverse generation results. EBST models the user editing behavior on the source mask, making the overall framework more robust to various manipulated inputs. Specifically, it introduces dual-editing consistency as the auxiliary supervision signal. To facilitate extensive studies, we construct a large-scale high-resolution face dataset with fine-grained mask annotations named CelebAMask-HQ. MaskGAN is comprehensively evaluated on two challenging tasks: attribute transfer and style copy, demonstrating superior performance over other state-of-the-art methods. The code, models, and dataset are available at https://github.com/switchablenorms/CelebAMask-HQ.
Figure 1. The proposed pixel2style2pixel framework can be used to solve a wide variety of image-to-image translation tasks. Here we show results of pSp on StyleGAN inversion, multi-modal conditional image synthesis, facial frontalization, inpainting and super-resolution.
In this work, we propose a novel architecture for face age editing that can produce structural modifications while maintaining relevant details present in the original image. We disentangle the style and content of the input image and propose a new decoder network that adopts a style-based strategy to combine the style and content representations of the input image while conditioning the output on the target age. We go beyond existing aging methods by allowing users to adjust the degree of structure preservation in the input image during inference. To this end, we introduce a masking mechanism, the custom structure preservation module, which distinguishes relevant regions in the input image from those that should be discarded. CUSP requires no additional supervision. Finally, our quantitative and qualitative analysis, including a user study, shows that our method outperforms prior art and demonstrates the effectiveness of our strategy regarding image editing and adjustable structure preservation. Code and pretrained models are available at https://github.com/guillermogogotre/cusp.
Recent studies on multi-domain facial image translation have achieved impressive results. Existing methods typically provide the discriminator with an auxiliary classifier to impose domain translation. However, these methods neglect important information regarding domain distribution matching. To address this limitation, we propose a switch generative adversarial network (SwitchGAN), with a more adaptive discriminator structure and a matched generator, to perform delicate image translation among multiple domains. A feature-switching operation is proposed to achieve feature selection and fusion in our conditional modules. We demonstrate the effectiveness of our model. Furthermore, we also introduce a new capability of our generator that enables attribute intensity control and content information extraction without tailored training. Experiments on the Morph, RaFD and CelebA databases visually and quantitatively show that our extended SwitchGAN (i.e., Gated SwitchGAN) can achieve better translation results than StarGAN, AttGAN and STGAN. The attribute classification accuracy achieved with a trained ResNet-18 model and the FID scores obtained with an ImageNet-pretrained Inception-V3 model also quantitatively demonstrate the superior performance of our models.
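The feature-switching operation is not specified in detail here; the following is only our guess at its simplest possible form, a gate driven by the target-domain label that selects and fuses per-domain branch features:

```python
import torch

def feature_switch(branch_feats, domain_onehot):
    """Hypothetical minimal feature switch: select and fuse per-domain
    branch features according to the target domain label.
    branch_feats: (B, n_domains, C, H, W); domain_onehot: (B, n_domains)."""
    gate = domain_onehot[:, :, None, None, None]
    return (branch_feats * gate).sum(dim=1)  # (B, C, H, W)
```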
Real-world image manipulation has achieved fantastic progress in recent years as a result of the exploration and utilization of GAN latent spaces. GAN inversion is the first step in this pipeline and aims to map a real image to a latent code faithfully. Unfortunately, most existing GAN inversion methods fail to meet at least one of the following three requirements: high reconstruction quality, editability, and fast inference. In this work, we present a novel two-phase strategy that fits all the requirements at the same time. In the first phase, we train an encoder to map the input image into the StyleGAN2 $\mathcal{W}$-space, which was proven to have excellent editability but lower reconstruction quality. In the second phase, we supplement the reconstruction ability of the first phase by leveraging a series of hypernetworks to recover the missing information during inversion. These two phases complement each other: the hypernetwork branch yields high reconstruction quality, while inversion in the $\mathcal{W}$-space provides excellent editability. Our method is entirely encoder-based, resulting in extremely fast inference. Extensive experiments on two challenging datasets demonstrate the superiority of our method.
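A rough sketch of the second phase under our own assumptions about shapes: a hypernetwork head maps features of the information missed by the $\mathcal{W}$-space inversion to additive residuals on a generator weight. The paper's actual hypernetwork design is not reproduced here.

```python
import torch
import torch.nn as nn

class HyperBranch(nn.Module):
    """Hypothetical hypernetwork head producing per-sample weight residuals
    for one generator layer; names and shapes are our assumptions."""

    def __init__(self, feat_dim=512, weight_shape=(512, 512)):
        super().__init__()
        self.weight_shape = weight_shape
        self.head = nn.Linear(feat_dim, weight_shape[0] * weight_shape[1])

    def forward(self, miss_feat, base_weight):
        # miss_feat: (B, feat_dim) encoding of what the inversion missed
        delta = self.head(miss_feat).view(-1, *self.weight_shape)
        return base_weight.unsqueeze(0) + delta  # per-sample refined weights
```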
Facial attribute editing aims to manipulate single or multiple attributes of a face image, i.e., to generate a new face with desired attributes while preserving other details. Recently, the generative adversarial net (GAN) and encoder-decoder architecture are usually incorporated to handle this task with promising results. Based on the encoder-decoder architecture, facial attribute editing is achieved by decoding the latent representation of the given face conditioned on the desired attributes. Some existing methods attempt to establish an attribute-independent latent representation for further attribute editing. However, such attribute-independent constraint on the latent representation is excessive because it restricts the capacity of the latent representation and may result in information loss, leading to over-smooth and distorted generation. Instead of imposing constraints on the latent representation, in this work we apply an attribute classification constraint to the generated image to just guarantee the correct change of desired attributes, i.e., to "change what you want". Meanwhile, reconstruction learning is introduced to preserve attribute-excluding details, in other words, to "only change what you want". Besides, adversarial learning is employed for visually realistic editing. These three components cooperate with each other to form an effective framework for high-quality facial attribute editing, referred to as AttGAN. Furthermore, our method is also directly applicable to attribute intensity control and can be naturally extended for attribute style manipulation. Experiments on the CelebA dataset show that our method outperforms the state of the art on realistic attribute editing with facial details well preserved.
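The three cooperating components translate naturally into a generator-side objective. This is a simplified sketch; the loss weights, the WGAN-style adversarial term, and the classifier C are our assumptions:

```python
import torch
import torch.nn.functional as F

def attgan_generator_losses(G, D, C, x, a_src, a_tgt):
    """Simplified sketch of the three signals on the generator:
    attribute classification on the edited image ("change what you want"),
    reconstruction under source attributes ("only change what you want"),
    and adversarial realism. a_src/a_tgt: (B, n_attrs) float 0/1 vectors."""
    x_edit = G(x, a_tgt)
    x_rec = G(x, a_src)
    loss_cls = F.binary_cross_entropy_with_logits(C(x_edit), a_tgt)
    loss_rec = F.l1_loss(x_rec, x)
    loss_adv = -D(x_edit).mean()  # critic-style adversarial term (assumed)
    return loss_adv + 10.0 * loss_cls + 100.0 * loss_rec  # illustrative weights
```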
In this work, we are dedicated to text-guided image generation and propose a novel framework, i.e., CLIP2GAN, that leverages the CLIP model and StyleGAN. The key idea of CLIP2GAN is to bridge the output feature embedding space of CLIP and the input latent space of StyleGAN, which is realized by introducing a mapping network. In the training stage, we encode an image with CLIP and map the output feature to a latent code, which is further used to reconstruct the image. In this way, the mapping network is optimized in a self-supervised learning way. In the inference stage, since CLIP can embed both image and text into a shared feature embedding space, we replace the CLIP image encoder in the training architecture with the CLIP text encoder, while keeping the subsequent mapping network as well as the StyleGAN model. As a result, we can flexibly input a text description to generate an image. Moreover, by simply adding the mapped text features of an attribute to a mapped CLIP image feature, we can effectively edit the attribute in the image. Extensive experiments demonstrate the superior performance of our proposed CLIP2GAN compared to previous methods.
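A minimal sketch of the bridging idea, assuming a small MLP mapper and leaving out the StyleGAN decoding step; the editing helper mirrors the abstract's "add mapped text features to the mapped image feature". All module sizes are assumptions:

```python
import torch
import torch.nn as nn

class Mapper(nn.Module):
    """Hypothetical mapping network from CLIP's embedding space to a
    StyleGAN latent, trained self-supervised by reconstructing the image."""

    def __init__(self, clip_dim=512, w_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(clip_dim, 512), nn.LeakyReLU(0.2), nn.Linear(512, w_dim)
        )

    def forward(self, clip_feat):
        return self.net(clip_feat)

def edit_with_text(mapper, img_feat, attr_text_feat, strength=1.0):
    """Add the mapped text feature of an attribute to the mapped image
    feature; the result would then be decoded by the StyleGAN generator."""
    return mapper(img_feat) + strength * mapper(attr_text_feat)
```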
Our goal with this survey is to provide an overview of state-of-the-art deep learning technologies for face generation and editing. We will cover popular latest architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross-domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantics editing and preserving photo quality. We aim to provide an entry point into the field for readers who have basic knowledge about the field of deep learning and are looking for an accessible introduction and overview.
In this paper we address the problem of neural face reenactment: given a pair of a source and a target facial image, we need to transfer the target's pose (defined as the head pose and facial expression) to the source image, while at the same time preserving the source's identity characteristics (e.g., facial shape, hairstyle, etc.), even in the challenging case where the source and target faces belong to different identities. In doing so, we address some limitations of state-of-the-art works, namely that they rely on labeled data during inference and that they do not preserve identity under large head pose variations. More specifically, we propose a framework that, using unpaired, randomly generated facial images, learns to disentangle the identity characteristics of a face from its pose by incorporating the recently introduced style space $\mathcal{S}$ of StyleGAN2, which exhibits remarkable disentanglement properties. By capitalizing on this, we learn to successfully mix a pair of source and target style codes using supervision from a 3D model. The final latent code used for reenactment consists of latent units corresponding only to the facial pose of the target and units corresponding only to the identity of the source, leading to a significant improvement in reenactment performance compared to recent state-of-the-art methods. We show quantitatively and qualitatively that the proposed method produces higher-quality results than the state of the art, even under extreme pose variations. Finally, we report results on real images by first embedding them in the latent space of the pretrained generator. The code and pretrained models are publicly available at: https://github.com/stelabou/stylemask
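The style-code mixing can be sketched as a per-unit selection between the source and target codes in the style space $\mathcal{S}$; how the pose mask over style units is learned (with 3D-model supervision) is not shown here:

```python
import torch

def mix_style_codes(s_source, s_target, pose_mask):
    """Per-unit mixing in StyleGAN2's style space S: take pose-controlling
    units from the target and keep all other (identity) units from the
    source. pose_mask is a 0/1 (or soft) vector over style dimensions."""
    return pose_mask * s_target + (1.0 - pose_mask) * s_source
```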
3D-aware GANs based on generative neural radiance fields (GNeRF) have achieved impressive high-quality image generation while preserving strong 3D consistency. The most notable achievements have been made in the face generation domain. However, most of these models focus on improving view consistency and neglect disentanglement, so they cannot provide high-quality semantic/attribute control over the generation. To this end, we introduce a conditional GNeRF model that uses specific attribute labels as input to improve the controllability and disentanglement abilities of 3D-aware generative models. We build on a pretrained 3D-aware model and integrate a dual-branch attribute-editing module (DAEM) that uses attribute labels to provide control over the generation. Moreover, we propose TRIOT (training as init, optimizing for tuning) to optimize the latent vector to further improve the precision of attribute editing. Extensive experiments on the widely used FFHQ dataset demonstrate that our model yields high-quality editing with better view consistency while preserving non-target regions. The code is available at https://github.com/zhangqianhui/tt-gnerf.
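TRIOT, as described, amounts to using the trained editing module's output as an initialization and then optimizing the latent directly. A generic sketch, with the edit loss left abstract (its exact form is the paper's, not ours):

```python
import torch

def triot_tune(G, z_init, a_tgt, edit_loss, steps=100, lr=0.01):
    """Training-as-init, optimizing-for-tuning: start from the latent
    produced by the trained editing module and refine it by gradient
    descent so that G(z) better matches the target attributes a_tgt."""
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        edit_loss(G(z), a_tgt).backward()
        opt.step()
    return z.detach()
```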
We present EXE-GAN, a novel exemplar-guided facial inpainting framework using generative adversarial networks. Our approach not only preserves the quality of the input facial image but also completes the image with exemplar-like facial attributes. We achieve this by simultaneously leveraging the global style of the input image, the stochastic style generated from a random latent code, and the exemplar style of the exemplar image. We introduce a novel attribute similarity metric to encourage the network to learn the style of facial attributes from the exemplar in a self-supervised way. To guarantee natural transitions across region boundaries, we introduce a novel spatial-variant gradient backpropagation technique to adjust the loss gradients based on spatial location. Extensive evaluations and practical applications on the public CelebA-HQ and FFHQ datasets validate the superiority of EXE-GAN in terms of the visual quality of facial inpainting.
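Adjusting loss gradients by spatial location can be realized, in its simplest form, by weighting the per-pixel loss with a spatial map (backpropagated gradients then scale accordingly). This is our simplification, not necessarily the paper's exact technique:

```python
import torch

def spatially_weighted_l1(pred, target, weight_map):
    """Weight the per-pixel L1 loss by location so that gradients are
    scaled spatially (e.g., emphasized near region boundaries).
    weight_map: (B, 1, H, W), non-negative."""
    return (weight_map * (pred - target).abs()).mean()
```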
Recent studies have shown that StyleGANs provide promising prior models for downstream tasks of image synthesis and editing. However, since the latent codes of StyleGANs are designed to control global styles, it is hard to achieve fine-grained control over synthesized images. We present SemanticStyleGAN, in which the generator is trained to model local semantic parts separately and synthesizes images in a compositional way. The structure and texture of different local parts are controlled by corresponding latent codes. Experimental results demonstrate that our model provides strong disentanglement between different spatial regions. When combined with editing methods designed for StyleGANs, it achieves more fine-grained control for editing synthesized or real images. The model can also be extended to other domains via transfer learning. Thus, as a generic prior model with built-in disentanglement, it could facilitate the development of GAN-based applications and enable more potential downstream tasks.
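Compositional synthesis from per-part latents can be pictured as mask-weighted fusion of local feature maps. A schematic helper under our own shape assumptions; the model's actual local generators and rendering are not reproduced:

```python
import torch

def compose_parts(part_feats, part_masks):
    """Fuse local semantic parts into one feature map by mask weighting.
    part_feats: (B, P, C, H, W); part_masks: (B, P, 1, H, W), summing
    to 1 over the P parts (shapes are assumptions)."""
    return (part_feats * part_masks).sum(dim=1)  # (B, C, H, W)
```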
Although Generative Adversarial Networks (GANs) have made significant progress in face synthesis, there lacks enough understanding of what GANs have learned in the latent representation to map a random code to a photo-realistic image. In this work, we propose a framework called InterFaceGAN to interpret the disentangled face representation learned by the state-of-the-art GAN models and study the properties of the facial semantics encoded in the latent space. We first find that GANs learn various semantics in some linear subspaces of the latent space. After identifying these subspaces, we can realistically manipulate the corresponding facial attributes without retraining the model. We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection, resulting in more precise control of the attribute manipulation. Besides manipulating the gender, age, expression, and presence of eyeglasses, we can even alter the face pose and fix the artifacts accidentally made by GANs. Furthermore, we perform an in-depth face identity analysis and a layer-wise analysis to evaluate the editing results quantitatively. Finally, we apply our approach to real face editing by employing GAN inversion approaches and explicitly training feed-forward models based on the synthetic data established by InterFaceGAN. Extensive experimental results suggest that learning to synthesize faces spontaneously brings a disentangled and controllable face representation.
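The subspace projection used for disentangling two correlated semantics has a standard closed form: subtract from the primal direction its component along the conditioned direction. A small NumPy sketch (variable names are ours):

```python
import numpy as np

def project_out(n_primal, n_cond):
    """Conditional manipulation via subspace projection: remove from the
    primal direction its component along a correlated attribute's
    direction, so moving along the result leaves that attribute fixed."""
    n_cond = n_cond / np.linalg.norm(n_cond)
    n_new = n_primal - (n_primal @ n_cond) * n_cond
    return n_new / np.linalg.norm(n_new)

def manipulate(z, direction, alpha):
    """Linear semantic editing in latent space: z' = z + alpha * n."""
    return z + alpha * direction
```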
Despite the demonstrated editing capacity of the latent space of pretrained GAN models, inverting real-world images is stuck in a dilemma: the reconstruction cannot be faithful to the original input. The main reason is the distribution misalignment between training and real-world data, which makes the inversion unstable for real image editing. In this paper, we propose a novel GAN-based editing framework that tackles this out-of-domain inversion problem with a composition-decomposition paradigm. In particular, in the composition phase, we introduce a differential activation module that detects semantic changes from a global perspective, i.e., the relative gap between the features of the edited and unedited images. With the generated Diff-CAM mask, a coarse reconstruction can be intuitively composited from the paired original and edited images. In this way, attribute-irrelevant regions survive almost intact, while the quality of this intermediate result is still limited by an unavoidable ghosting effect. Consequently, in the decomposition phase, we further present a GAN-prior-based deghosting network that separates the final fine edited image from the coarse reconstruction. Extensive experiments exhibit superiority over state-of-the-art methods in both qualitative and quantitative evaluations. The robustness and flexibility of our method are also validated in both single-attribute and multi-attribute manipulation scenarios.
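The composition phase reduces to mask-guided blending of the original and edited images; a sketch assuming the Diff-CAM mask has already been computed as a [0, 1] array (its derivation from differential activations is not reproduced):

```python
import numpy as np

def coarse_composition(x_orig, x_edit, diff_cam_mask):
    """Blend edited content where semantic change is detected and keep
    original pixels elsewhere, so attribute-irrelevant regions survive.
    x_orig, x_edit: (H, W, 3); diff_cam_mask: (H, W) in [0, 1]."""
    m = np.clip(diff_cam_mask, 0.0, 1.0)[..., None]
    return m * x_edit + (1.0 - m) * x_orig
```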