This paper describes a simple technique to analyze Generative Adversarial Networks (GANs) and create interpretable controls for image synthesis, such as change of viewpoint, aging, lighting, and time of day. We identify important latent directions based on Principal Component Analysis (PCA) applied either in latent space or feature space. Then, we show that a large number of interpretable controls can be defined by layer-wise perturbation along the principal directions. Moreover, we show that BigGAN can be controlled with layer-wise inputs in a StyleGAN-like manner. We show results on different GANs trained on various datasets, and demonstrate good qualitative matches to edit directions found through earlier supervised approaches.
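The recipe is compact enough to sketch. Below is a minimal illustration (NumPy/scikit-learn) of PCA on sampled latent codes followed by a shift along one principal direction; the `sample_latents` callable and the layer-restriction detail are assumptions for illustration, not the authors' API:

```python
import numpy as np
from sklearn.decomposition import PCA

def find_pca_directions(sample_latents, n_samples=10_000, n_components=20):
    # sample_latents: assumed callable returning an (n, latent_dim) array of
    # random latent codes (for StyleGAN, typically w = mapping(z) samples).
    Z = sample_latents(n_samples)
    pca = PCA(n_components=n_components).fit(Z)
    return pca.components_                    # principal directions, one per row

def edit(z, directions, component=0, sigma=2.0):
    # Shift a latent code along one principal direction; in the paper the
    # shift is applied only at selected generator layers for finer control.
    return z + sigma * directions[component]
```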
The success of StyleGAN has enabled unprecedented semantic editing capabilities on both synthesized and real images. However, such editing operations are either trained with human-guided semantic supervision or described in natural language. In a separate development, the CLIP architecture has been trained on internet-scale image-text pairs and has been shown to be useful in several zero-shot learning settings. In this work, we investigate how to effectively link the pretrained latent spaces of StyleGAN and CLIP, which in turn allows us to automatically extract semantically labeled edit directions from StyleGAN, finding and naming meaningful edit operations without any additional human guidance. Technically, we propose two novel building blocks: one for finding interesting CLIP directions and one for labeling arbitrary directions in CLIP latent space. The setup does not assume any predetermined labels, so we do not need any additional supervised text or attributes to build the editing framework. We evaluate the effectiveness of the proposed method and demonstrate that extracting semantically labeled StyleGAN edit directions is indeed possible, revealing interesting and non-trivial edit directions.
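As an illustration of the labeling idea, the sketch below scores candidate captions by how well they match the CLIP-space change between an image and its edited version. The fixed candidate list and the function interface are assumptions for illustration, not the paper's architecture (the paper mines labels rather than scoring a given list):

```python
import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

def label_direction(img_before, img_after, candidate_texts, device="cpu"):
    # Hypothetical helper: name an edit by matching the CLIP-space image
    # difference against candidate captions.
    model, preprocess = clip.load("ViT-B/32", device=device)
    with torch.no_grad():
        e_before = model.encode_image(preprocess(img_before).unsqueeze(0).to(device))
        e_after = model.encode_image(preprocess(img_after).unsqueeze(0).to(device))
        d_img = e_after - e_before            # direction of the edit in CLIP space
        d_txt = model.encode_text(clip.tokenize(candidate_texts).to(device))
        sims = torch.nn.functional.cosine_similarity(d_img, d_txt)
    return candidate_texts[int(sims.argmax())]
```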
Our goal with this survey is to provide an overview of state-of-the-art deep learning technologies for face generation and editing. We cover the most popular recent architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross-domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantic editing while preserving photo quality. We aim to provide an entry point into the field for readers who have basic knowledge of deep learning and are looking for an accessible introduction and overview.
A rich set of interpretable dimensions has been shown to emerge in the latent space of the Generative Adversarial Networks (GANs) trained for synthesizing images. In order to identify such latent dimensions for image editing, previous methods typically annotate a collection of synthesized samples and train linear classifiers in the latent space. However, they require a clear definition of the target attribute as well as the corresponding manual annotations, limiting their applications in practice. In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner. In particular, we take a closer look into the generation mechanism of GANs and further propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights. With a lightning-fast implementation, our approach is capable of not only finding semantically meaningful dimensions comparably to the state-of-the-art supervised methods, but also resulting in far more versatile concepts across multiple GAN models trained on a wide range of datasets.
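The closed-form step reduces to an eigenproblem on the pretrained weights. A minimal sketch, assuming A is the weight matrix of the first layer that projects the latent code (the formulation maximizes the norm of the perturbed output over unit directions n):

```python
import numpy as np

def sefa_directions(A, k=5):
    # Directions maximizing ||A n|| subject to ||n|| = 1 are the eigenvectors
    # of A^T A with the largest eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(A.T @ A)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:k]].T               # top-k directions, one per row
```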
This paper addresses the problem of finding interpretable directions in the latent space of pre-trained Generative Adversarial Networks (GANs) to facilitate controllable image synthesis. Such interpretable directions correspond to transformations that can affect both the style and the geometry of the synthesized images. However, existing methods that use linear techniques to find these transformations often fail to provide an intuitive way to separate these two sources of variation. To address this, we propose to a) perform a multilinear decomposition of the tensor of intermediate representations, and b) use a tensor-based regression to map directions found by this decomposition to the latent space. Our scheme allows for linear edits corresponding to the individual modes of the tensor, and nonlinear edits that model the multiplicative interactions between them. We show experimentally that we can utilize the former to better separate style from geometry-based transformations, and that the latter yields an extended set of possible transformations in comparison to existing works. We demonstrate the effectiveness of our approach both quantitatively and qualitatively against the current state of the art.
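A minimal sketch of the multilinear decomposition step using TensorLy's Tucker decomposition; the stacking of intermediate feature tensors and the chosen ranks are assumptions for illustration, and the paper's tensor-based regression back to the latent space is omitted:

```python
import tensorly as tl
from tensorly.decomposition import tucker

def multilinear_basis(features, ranks):
    # features: assumed array stacking intermediate generator activations,
    # e.g. of shape (samples, channels, height, width).
    core, factors = tucker(tl.tensor(features), rank=ranks)
    # Each factor matrix spans one mode of variation; editing a single mode
    # corresponds to a linear transformation along that factor.
    return core, factors
```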
Unconstrained image generation with high realism is now possible using recent Generative Adversarial Networks (GANs). However, it remains very challenging to generate images with a given set of attributes. Recent methods use style-based GAN models to perform image editing by leveraging the semantic hierarchy present in the layers of the generator. We present Few-shot Latent-based Attribute Manipulation and Editing (FLAME), a simple yet effective framework for performing highly controlled image editing via latent space manipulation. Specifically, we estimate linear directions in the latent space of a pretrained StyleGAN that control semantic attributes in the generated image. In contrast to previous methods that rely on large-scale attribute-labeled datasets or attribute classifiers, FLAME uses minimal supervision in the form of a few curated image pairs to estimate disentangled edit directions. FLAME can perform both individual and sequential edits with high precision on a diverse set of images while preserving identity. Furthermore, we propose a novel task of attribute style manipulation to generate diverse styles for attributes such as eyeglasses and hair. We first encode a set of synthetic images of the same identity but with different attribute styles into the latent space to estimate an attribute style manifold. Sampling a new latent from this manifold results in a new attribute style in the generated image. We propose a novel sampling method to draw latents from the manifold, enabling us to generate a diverse set of attribute styles beyond those present in the training set. FLAME can generate multiple attribute styles in a disentangled manner. We illustrate the superior performance of FLAME over previous image editing methods through extensive qualitative and quantitative comparisons. FLAME also generalizes well to multiple datasets, such as cars and churches.
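The few-shot direction estimate can be illustrated with a simple reduction: average the latent differences over curated pairs that differ only in the target attribute. This averaging is an illustrative simplification, not FLAME's exact estimator, and the variable names are hypothetical:

```python
import numpy as np

def few_shot_direction(w_with, w_without):
    # w_with / w_without: assumed (n_pairs, latent_dim) arrays of latent codes
    # for the same identities with and without the target attribute.
    diffs = np.asarray(w_with) - np.asarray(w_without)
    d = diffs.mean(axis=0)
    return d / np.linalg.norm(d)    # unit edit direction
```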
Recent advances in high-fidelity semantic image editing rely on the disentangled latent spaces of state-of-the-art generative models, such as StyleGAN. Specifically, recent works show that decent controllability over attributes of face images can be achieved via linear shifts along latent directions. Several recent methods address the discovery of such directions, implicitly assuming that state-of-the-art GANs learn latent spaces with inherently linearly separable attribute distributions and semantic vector-arithmetic properties. In our work, we show that nonlinear latent code manipulations, realized as flows of trainable Neural ODEs, are beneficial for many practical non-face image domains with more complex, non-textured factors of variation. In particular, we investigate a large number of datasets with known attributes and demonstrate that certain attribute manipulations are challenging to achieve with linear shifts alone.
Although Generative Adversarial Networks (GANs) have made significant progress in face synthesis, there lacks enough understanding of what GANs have learned in the latent representation to map a random code to a photo-realistic image. In this work, we propose a framework called InterFaceGAN to interpret the disentangled face representation learned by the state-of-the-art GAN models and study the properties of the facial semantics encoded in the latent space. We first find that GANs learn various semantics in some linear subspaces of the latent space. After identifying these subspaces, we can realistically manipulate the corresponding facial attributes without retraining the model. We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection, resulting in more precise control of the attribute manipulation. Besides manipulating the gender, age, expression, and presence of eyeglasses, we can even alter the face pose and fix the artifacts accidentally made by GANs. Furthermore, we perform an in-depth face identity analysis and a layer-wise analysis to evaluate the editing results quantitatively. Finally, we apply our approach to real face editing by employing GAN inversion approaches and explicitly training feed-forward models based on the synthetic data established by InterFaceGAN. Extensive experimental results suggest that learning to synthesize faces spontaneously brings a disentangled and controllable face representation.
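The linear edit and the subspace projection used for conditional manipulation are simple enough to sketch. Below is a minimal rendering of the operations described above, with hypothetical variable names:

```python
import numpy as np

def edit_latent(z, n, alpha):
    # Move a latent code along a unit attribute normal: z' = z + alpha * n.
    return z + alpha * n

def condition_direction(n_target, n_cond):
    # Subspace projection: remove the component of the target direction that
    # lies along a correlated attribute, so editing one leaves the other intact.
    n_c = n_cond / np.linalg.norm(n_cond)
    n_t = n_target - (n_target @ n_c) * n_c
    return n_t / np.linalg.norm(n_t)
```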
Latent space exploration is a technique for discovering interpretable latent directions and manipulating latent codes to edit various attributes of images generated by Generative Adversarial Networks (GANs). However, in prior work, spatial control is limited to simple transformations (e.g., translation and rotation), and identifying appropriate latent directions and adjusting their parameters requires laborious effort. In this paper, we tackle the problem of editing the layout of StyleGAN images by annotating the images directly. To this end, we propose an interactive framework for manipulating latent codes according to user inputs. In our framework, the user annotates a StyleGAN image with the locations they want to move or keep fixed, and specifies a movement direction by mouse dragging. From these user inputs and the initial latent code, our latent transformer, based on a transformer encoder architecture, estimates the output latent code, which is fed into the StyleGAN generator to obtain the resulting image. To train our latent transformer, we utilize synthetic data and pseudo-user inputs generated by off-the-shelf StyleGAN and optical flow models, without manual supervision. Quantitative and qualitative evaluations demonstrate the effectiveness of our method over existing methods.
Our method performs local semantic editing on GAN output images, transferring the appearance of a specific object part from a reference image to a target image.
Recent 3D-aware GANs rely on volumetric rendering techniques to disentangle the pose and appearance of objects, de facto generating entire 3D volumes rather than single-view 2D images from a latent code. Complex image editing tasks can be performed in standard 2D-based GANs (e.g., StyleGAN models) as manipulation of latent dimensions. However, to the best of our knowledge, similar properties have only been partially explored for 3D-aware GAN models. This work aims to fill this gap by showing the limitations of existing methods and proposing LatentSwap3D, a model-agnostic approach designed to enable attribute editing in the latent space of pre-trained 3D-aware GANs. We first identify the most relevant dimensions in the latent space of the model controlling the targeted attribute by relying on the feature importance ranking of a random forest classifier. Then, to apply the transformation, we swap the top-K most relevant latent dimensions of the image being edited with an image exhibiting the desired attribute. Despite its simplicity, LatentSwap3D provides remarkable semantic edits in a disentangled manner and outperforms alternative approaches both qualitatively and quantitatively. We demonstrate our semantic edit approach on various 3D-aware generative models such as pi-GAN, GIRAFFE, StyleSDF, MVCGAN, EG3D and VolumeGAN, and on diverse datasets, such as FFHQ, AFHQ, Cats, MetFaces, and CompCars. The project page can be found at https://enisimsar.github.io/latentswap3d/.
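The rank-and-swap recipe admits a short sketch, assuming latent codes are flat vectors and attribute labels are available for a sample set; the names and data layout are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def top_k_dims(latents, labels, k=20):
    # Rank latent dimensions by their importance for predicting the attribute.
    clf = RandomForestClassifier(n_estimators=200).fit(latents, labels)
    return np.argsort(clf.feature_importances_)[::-1][:k]

def latent_swap(z_edit, z_ref, dims):
    # Copy the top-K most attribute-relevant dimensions from a reference
    # latent exhibiting the desired attribute into the latent being edited.
    z_out = z_edit.copy()
    z_out[dims] = z_ref[dims]
    return z_out
```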
Generative Adversarial Networks (GANs) have been widely applied to model various image distributions. However, despite their impressive applications, the structure of the latent space in GANs largely remains a black box, leaving controllable generation an open problem, especially when spurious correlations between different semantic attributes exist in the image distribution. To address this problem, previous methods typically learn linear directions or individual channels that control semantic attributes in the image space. However, they often suffer from imperfect disentanglement or are unable to obtain multi-directional controls. In this work, in light of the above challenges, we propose a novel approach for discovering nonlinear controls that enables manipulation along multiple directions as well as effective disentanglement, based on gradient information in the learned GAN latent space. More specifically, we first learn interpolation directions by following the gradients of classification networks trained separately on each attribute, and then navigate the latent space by exclusively controlling the channels activated for the target attribute along the learned directions. Empirically, with small amounts of training data, our approach is able to gain fine-grained control over a diverse set of bi-directional and multi-directional attributes, and we demonstrate that its ability to achieve disentanglement significantly outperforms state-of-the-art methods both qualitatively and quantitatively.
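The gradient-following step can be sketched as an iterative latent update driven by an attribute classifier; the paper's channel-selection mechanism is omitted here, and `G` and `classifier` are assumed differentiable callables:

```python
import torch

def gradient_walk(z, G, classifier, steps=10, step_size=0.1):
    # Nonlinear traversal: repeatedly step along the gradient of the
    # classifier's attribute score with respect to the latent code.
    z = z.clone().detach()
    for _ in range(steps):
        z.requires_grad_(True)
        score = classifier(G(z)).sum()
        (grad,) = torch.autograd.grad(score, z)
        z = (z + step_size * grad).detach()
    return z
```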
Stone" "Mohawk hairstyle" "Without makeup" "Cute cat" "Lion" "Gothic church" * Equal contribution, ordered alphabetically. Code and video are available on https://github.com/orpatashnik/StyleCLIP
Thanks to its semantically meaningful understanding and user-friendly controllability, face image manipulation guided by three-dimensional morphable models (3DMMs) has been widely applied in various interactive scenarios. However, existing 3DMM-based manipulation methods are not directly applicable to out-of-domain faces such as non-photorealistic paintings, cartoon portraits, or even animals, mainly due to the formidable difficulty of building a model for each specific face domain. To overcome this challenge, we propose, to the best of our knowledge, the first method that uses a human 3DMM to manipulate faces of arbitrary domains. This is achieved through two major steps: 1) a disentangled mapping from 3DMM parameters to latent-space embeddings of StyleGAN2, which ensures disentangled and precise control over each semantic attribute; and 2) bridging the domain gap by enforcing consistent latent-space embeddings, which makes the human 3DMM applicable to out-of-domain faces. Experiments and comparisons demonstrate the superiority of our high-quality semantic manipulation method across various face domains, with all major 3D facial attributes controllable: pose, expression, shape, albedo, and illumination. Moreover, we develop an intuitive editing interface to support user-friendly control and instant feedback. Our project page is https://cassiepython.github.io/cddfm3d/index.html
In this paper, we propose a tensor-based method for modeling the latent space of generative models. The goal is to identify semantic directions in the latent space. To this end, we propose to fit a multilinear tensor model to a structured facial-expression database that is first embedded into the latent space. We validate our approach on a StyleGAN trained on FFHQ, using BU-3DFE as the structured facial-expression database. We show how the parameters of the multilinear tensor model can be approximated by alternating least squares. Furthermore, we introduce a style-separated tensor model, defined as an ensemble of style-specific models, to integrate our approach with the extended latent space of StyleGAN. We show that taking the individual styles of the extended latent space into account leads to higher model flexibility and lower reconstruction error. Finally, we run several experiments comparing our approach to prior work on PCA-based and multilinear models. Specifically, we analyze the expression subspace and find that the expression trajectories meet at a neutral face, in line with earlier work. We also show that, when changing a person's pose, the images generated by our method are closer to the ground truth than the results of two competing approaches.
An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to biased training data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise: these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by "steering" in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution. Thus, we conduct experiments to quantify the limits of GAN transformations and introduce techniques to mitigate the problem. Code is released on our project page: https://ali-design.github.io/gan_steerability/.
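The "steering" objective is easy to sketch: learn a single latent direction w so that G(z + w) matches a chosen pixel-space transformation of G(z). The sketch below assumes `G` is a differentiable generator and `transform` an image-space edit (e.g., a small shift); both names are placeholders, not the released code's API:

```python
import torch

def learn_walk(G, transform, latent_dim, steps=2000, lr=1e-3, batch=8):
    # Optimize a single direction w so that moving latents along w reproduces
    # the target image-space transformation.
    w = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        z = torch.randn(batch, latent_dim)
        with torch.no_grad():
            target = transform(G(z))          # edited images; no gradient needed
        loss = torch.mean((G(z + w) - target) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```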
Generative Adversarial Networks (GANs) have recently found applications in image editing. However, most GAN-based image editing methods typically require large-scale datasets with semantic segmentation annotations for training, offer only high-level control, or merely interpolate between different images. Here we propose EditGAN, a novel method for high-quality, high-precision semantic image editing that allows users to edit images by modifying their highly detailed part segmentation masks, e.g., drawing a new mask for the headlight of a car. EditGAN builds on a GAN framework that jointly models images and their semantic segmentations and requires only a handful of labeled examples, making it a scalable tool for editing. Specifically, we embed an image into the GAN's latent space and perform conditional latent code optimization according to the segmentation edit, which effectively modifies the image. To amortize the optimization, we find editing vectors in latent space that realize the edits. The framework allows us to learn an arbitrary number of editing vectors, which can then be applied directly to other images at interactive rates. We show experimentally that EditGAN can manipulate images with an unprecedented level of detail and freedom while preserving full image quality. We can also easily combine multiple edits and perform plausible edits beyond EditGAN's training data. We demonstrate EditGAN on a wide variety of image types and quantitatively outperform several previous editing methods on standard editing benchmark tasks.
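The conditional latent optimization can be sketched as follows, assuming a joint generator `G_joint(w) -> (image, segmentation)` and a binary mask marking the edited region; the actual method uses perceptual and cross-entropy losses, so the squared errors here are simplifications:

```python
import torch

def optimize_edit(w, G_joint, seg_target, region_mask, steps=100, lr=0.01):
    # Adjust the latent so the generated segmentation matches the user-edited
    # mask inside the region, while pixels outside the region stay fixed.
    with torch.no_grad():
        img0, _ = G_joint(w)
    delta = torch.zeros_like(w, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        img, seg = G_joint(w + delta)
        loss_seg = (((seg - seg_target) ** 2) * region_mask).mean()
        loss_keep = (((img - img0) ** 2) * (1 - region_mask)).mean()
        loss = loss_seg + loss_keep
        opt.zero_grad()
        loss.backward()
        opt.step()
    # delta acts as a reusable editing vector once optimization is amortized.
    return (w + delta).detach()
```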
Figure 1: Manipulating various facial attributes through varying the latent codes of a well-trained GAN model. The first column shows the original synthesis from PGGAN [21], while each of the other columns shows the results of manipulating a specific attribute.
Generative Adversarial Networks (GANs) are the driving force behind the state of the art in image generation. Despite their ability to synthesize high-resolution photo-realistic images, conditioning the generated content on demand at different levels of granularity remains a challenge. This challenge is usually tackled by annotating large-scale datasets with the attributes of interest, a laborious task that is not always feasible. Introducing control into the generation process of unsupervised generative models is therefore crucial. In this work, we focus on controllable image generation by leveraging GANs that are well trained in an unsupervised fashion. To this end, we discover that the representation space of the generator's intermediate layers forms a number of clusters that separate the data according to semantically meaningful attributes (e.g., hair color and pose). By conditioning on the cluster assignments, the proposed method is able to control the semantic class of the generated image. Our approach enables sampling from each cluster via Implicit Maximum Likelihood Estimation (IMLE). We demonstrate our approach on faces (CelebA-HQ and FFHQ), animals (ImageNet), and objects (LSUN) using different pre-trained generative models. The results highlight our ability to condition image generation on attributes such as gender, pose, and hairstyle for faces, as well as on a variety of features for different object classes.
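The cluster-discovery step is illustrated below with k-means on intermediate generator features; the IMLE-based conditional sampling is omitted, and the feature-extraction interface is an assumption for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

def discover_semantic_clusters(features, n_clusters=10):
    # features: assumed (n_samples, dim) array of flattened intermediate
    # generator activations for random latents. Clusters tend to align with
    # semantic attributes (e.g., pose, hair color); conditioning on a cluster
    # assignment then controls the semantic class of the output.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(features)
    return km.labels_, km.cluster_centers_
```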