We explore and analyze the latent style space of StyleGAN2, a state-of-the-art architecture for image generation, using models pretrained on several different datasets. We first show that StyleSpace, the space of channel-wise style parameters, is significantly more disentangled than the other intermediate latent spaces explored by previous works. Next, we describe a method for discovering a large collection of style channels, each of which is shown to control a distinct visual attribute in a highly localized and disentangled manner. Third, we propose a simple method for identifying style channels that control a specific attribute, using a pretrained classifier or a small number of example images. Manipulation of visual attributes via these StyleSpace controls is shown to be better disentangled than via those proposed in previous works. To show this, we make use of a newly proposed Attribute Dependency metric. Finally, we demonstrate the applicability of StyleSpace controls to the manipulation of real images. Our findings pave the way to semantically meaningful and well-disentangled image manipulations via simple and intuitive interfaces.
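As a rough illustration of the example-based channel identification idea, the sketch below ranks style channels by how far the mean style value of a few positive example images deviates from the population statistics of each channel. The channel count, variable names, and random stand-in style vectors are assumptions for illustration only; the paper's exact statistic and pipeline may differ.

```python
# Minimal sketch: rank StyleSpace channels by how strongly a few positive
# examples deviate from population statistics, per channel.
# The style vectors are random placeholders standing in for per-image
# StyleSpace parameters (one scalar per channel) from a StyleGAN2 generator.
import numpy as np

rng = np.random.default_rng(0)
num_channels = 6048                    # assumed rough StyleSpace size for an FFHQ model
styles_all = rng.normal(size=(1_000, num_channels))   # generic samples
styles_pos = rng.normal(size=(16, num_channels))       # a few examples showing the attribute

mu, sigma = styles_all.mean(0), styles_all.std(0) + 1e-8
deviation = (styles_pos.mean(0) - mu) / sigma           # normalized deviation per channel
top_channels = np.argsort(-np.abs(deviation))[:10]
print("candidate attribute channels:", top_channels)
```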
Image manipulation with StyleGAN has received growing attention in recent years, and great success has been achieved in analyzing several semantic latent spaces for editing generated images. However, due to the limited semantic and spatial manipulation precision of these latent spaces, existing efforts fall short in fine-grained style image manipulation, i.e., local attribute translation. To address this problem, we discover attribute-specific control units, each consisting of multiple channels of feature maps and modulation styles. Specifically, we manipulate the modulation style channels and feature maps collaboratively, as control units rather than individually, to obtain semantically and spatially disentangled controls. Furthermore, we propose a simple yet effective method to detect the attribute-specific control units. We move the modulation style along a specific sparse direction vector and replace the filter-wise styles used to compute the feature maps in order to manipulate these control units. We evaluate our proposed method on various face attribute manipulation tasks. Extensive qualitative and quantitative results demonstrate that our proposed method performs favorably against state-of-the-art methods. Manipulation results on real images further demonstrate the effectiveness of our method.
[Teaser figure residue; recoverable content: example text-driven edits such as "Stone", "Mohawk hairstyle", "Without makeup", "Cute cat", "Lion", and "Gothic church". * Equal contribution, ordered alphabetically. Code and video are available at https://github.com/orpatashnik/StyleCLIP]
Generative Adversarial Networks (GANs) have been widely applied to model various image distributions. However, despite their impressive applications, the structure of the latent space in GANs remains largely a black box, leaving controllable generation an open problem, especially when spurious correlations exist between different semantic attributes in the image distribution. To address this problem, previous methods typically learn linear directions or individual channels that control semantic attributes in image space, but they often suffer from imperfect disentanglement or are unable to obtain multi-directional controls. In this work, motivated by the above challenges, we propose a novel approach that discovers non-linear controls, enabling manipulation along multiple directions as well as effective disentanglement, based on gradient information in the learned GAN latent space. More specifically, we first learn interpolation directions by following the gradients of classification networks trained separately on each attribute, and then navigate the latent space by exclusively controlling the channels activated for the target attribute along the learned directions. Empirically, with only a small amount of training data, our method is able to gain fine-grained control over a diverse set of bi-directional and multi-directional attributes, and we demonstrate that its ability to achieve disentanglement significantly outperforms state-of-the-art methods, both qualitatively and quantitatively.
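The gradient-following idea can be illustrated with a minimal sketch: take gradient steps on a latent code so that an attribute classifier's score on the generated image increases. The tiny `generator` and `classifier` modules below are toy stand-ins, not the models used in the paper, and the step size and iteration count are arbitrary.

```python
# Minimal sketch of classifier-gradient latent navigation with toy modules.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(512, 3 * 32 * 32), nn.Tanh())     # latent -> fake "image"
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))   # image -> attribute logit

z = torch.randn(1, 512, requires_grad=True)
step_size, n_steps = 0.1, 20
for _ in range(n_steps):
    logit = classifier(generator(z).view(1, 3, 32, 32))
    grad, = torch.autograd.grad(logit.sum(), z)
    with torch.no_grad():
        z += step_size * grad / (grad.norm() + 1e-8)   # move toward a stronger attribute response
```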
Recent 3D-aware GANs rely on volumetric rendering techniques to disentangle the pose and appearance of objects, de facto generating entire 3D volumes rather than single-view 2D images from a latent code. Complex image editing tasks can be performed in standard 2D-based GANs (e.g., StyleGAN models) as manipulation of latent dimensions. However, to the best of our knowledge, similar properties have only been partially explored for 3D-aware GAN models. This work aims to fill this gap by showing the limitations of existing methods and proposing LatentSwap3D, a model-agnostic approach designed to enable attribute editing in the latent space of pre-trained 3D-aware GANs. We first identify the most relevant dimensions in the latent space of the model controlling the targeted attribute by relying on the feature importance ranking of a random forest classifier. Then, to apply the transformation, we swap the top-K most relevant latent dimensions of the image being edited with those of an image exhibiting the desired attribute. Despite its simplicity, LatentSwap3D provides remarkable semantic edits in a disentangled manner and outperforms alternative approaches both qualitatively and quantitatively. We demonstrate our semantic editing approach on various 3D-aware generative models such as pi-GAN, GIRAFFE, StyleSDF, MVCGAN, EG3D and VolumeGAN, and on diverse datasets, such as FFHQ, AFHQ, Cats, MetFaces, and CompCars. The project page can be found at \url{https://enisimsar.github.io/latentswap3d/}.
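A minimal sketch of the described recipe, assuming latent codes and binary attribute labels are already available as plain arrays (random placeholders here): rank latent dimensions with a random forest's feature importances, then swap the top-K dimensions of the code being edited with those of a reference code that exhibits the attribute.

```python
# Minimal sketch of the rank-and-swap idea with placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
codes = rng.normal(size=(2_000, 256))                  # stand-in latent codes
labels = rng.integers(0, 2, size=2_000)                # binary attribute labels

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(codes, labels)
top_k = np.argsort(-forest.feature_importances_)[:20]  # most attribute-relevant dimensions

z_edit = rng.normal(size=256)                          # code of the image being edited
z_ref = codes[labels == 1][0]                          # a code exhibiting the attribute
z_edit[top_k] = z_ref[top_k]                           # swap the top-K dimensions
```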
Our goal with this survey is to provide an overview of the state-of-the-art deep learning technologies for face generation and editing. We will cover popular latest architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantics editing and preserving photo quality. We aim to provide an entry point into the field for readers who have basic knowledge of deep learning and are looking for an accessible introduction and overview.
Unconstrained image generation with high realism is now possible using recent Generative Adversarial Networks (GANs). However, generating images with a given set of attributes remains very challenging. Recent methods use style-based GAN models to perform image editing by leveraging the semantic hierarchy present in the layers of the generator. We present Few-shot Latent-based Attribute Manipulation and Editing (FLAME), a simple yet effective framework for highly controlled image editing via latent space manipulation. Specifically, we estimate linear directions in the latent space (of a pre-trained StyleGAN) that control semantic attributes in the generated images. In contrast to previous methods that rely on large-scale attribute-labeled datasets or attribute classifiers, FLAME uses minimal supervision from a few curated image pairs to estimate disentangled edit directions. FLAME can perform high-precision individual and sequential edits on a diverse set of images while preserving identity. Furthermore, we propose a novel attribute-style manipulation task to generate diverse styles for attributes such as eyeglasses and hair. We first encode a set of synthetic images of the same identity but with different attribute styles into the latent space to estimate an attribute-style manifold. Sampling a new latent code from this manifold results in a new attribute style in the generated image. We propose a novel sampling method for drawing latents from the manifold, enabling us to generate diverse attribute styles beyond those present in the training set. FLAME can generate multiple attribute styles in a disentangled manner. We illustrate the superior performance of FLAME over previous image editing methods through extensive qualitative and quantitative comparisons. FLAME also generalizes well to multiple datasets such as cars and churches.
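A minimal sketch of few-shot edit-direction estimation in this spirit: average the latent differences of a handful of curated pairs that differ only in the target attribute, then apply the normalized direction with a chosen strength. The latent codes below are random placeholders for inverted StyleGAN codes, and the names and edit strength are illustrative assumptions.

```python
# Minimal sketch: estimate an edit direction from a few curated latent pairs.
import numpy as np

rng = np.random.default_rng(0)
w_without = rng.normal(size=(8, 512))                          # pairs: latents without the attribute
w_with = w_without + rng.normal(scale=0.1, size=(8, 512))      # same pairs, with the attribute

direction = (w_with - w_without).mean(0)
direction /= np.linalg.norm(direction)

w_new = rng.normal(size=512)
alpha = 3.0                                                    # assumed edit strength
w_edited = w_new + alpha * direction                           # apply the few-shot edit direction
```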
Although Generative Adversarial Networks (GANs) have made significant progress in face synthesis, there is still limited understanding of what GANs have learned in the latent representation that maps a random code to a photo-realistic image. In this work, we propose a framework called InterFaceGAN to interpret the disentangled face representation learned by the state-of-the-art GAN models and study the properties of the facial semantics encoded in the latent space. We first find that GANs learn various semantics in some linear subspaces of the latent space. After identifying these subspaces, we can realistically manipulate the corresponding facial attributes without retraining the model. We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection, resulting in more precise control of the attribute manipulation. Besides manipulating the gender, age, expression, and presence of eyeglasses, we can even alter the face pose and fix the artifacts accidentally made by GANs. Furthermore, we perform an in-depth face identity analysis and a layer-wise analysis to evaluate the editing results quantitatively. Finally, we apply our approach to real face editing by employing GAN inversion approaches and explicitly training feed-forward models based on the synthetic data established by InterFaceGAN. Extensive experimental results suggest that learning to synthesize faces spontaneously brings a disentangled and controllable face representation.
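The linear-subspace and subspace-projection ideas can be sketched as follows, with synthetic latent codes and labels standing in for samples from a real GAN: fit linear classifiers to obtain attribute hyperplane normals, then remove the component of one normal along another to obtain a conditional (less entangled) edit direction. Attribute names and the edit strength are assumptions for illustration.

```python
# Minimal sketch: linear attribute hyperplanes and conditional manipulation
# via subspace projection, using synthetic placeholder latents.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
z = rng.normal(size=(5_000, 512))
labels_a = (z[:, 0] + 0.1 * rng.normal(size=5_000) > 0).astype(int)   # e.g. "smiling"
labels_b = (z[:, 1] + 0.1 * rng.normal(size=5_000) > 0).astype(int)   # e.g. "age"

n_a = LinearSVC(C=1.0, max_iter=5_000).fit(z, labels_a).coef_[0]
n_b = LinearSVC(C=1.0, max_iter=5_000).fit(z, labels_b).coef_[0]
n_a, n_b = n_a / np.linalg.norm(n_a), n_b / np.linalg.norm(n_b)

# Conditional direction: edit attribute A while staying neutral w.r.t. B.
n_a_given_b = n_a - (n_a @ n_b) * n_b
z_edit = z[0] + 2.0 * n_a_given_b / np.linalg.norm(n_a_given_b)
```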
We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
Discovering meaningful directions in the latent space of GANs to manipulate semantic attributes typically requires large amounts of labeled data. Recent work aims to overcome this limitation by leveraging Contrastive Language-Image Pre-training (CLIP), a joint text-image model. While promising, these methods require several hours of preprocessing or training to achieve the desired manipulation. In this paper, we present StyleMC, a fast and efficient method for text-driven image generation and manipulation. StyleMC uses a CLIP-based loss and an identity loss to manipulate images via a single text prompt without significantly affecting other attributes. Unlike existing work, StyleMC requires only a few seconds of training per text prompt to find stable global directions, does not require prompt engineering, and can be used with any pre-trained StyleGAN model. We demonstrate the effectiveness of our method and compare it to state-of-the-art methods. Our code can be found at http://catlab-team.github.io/stylemc.
Recent studies have shown that StyleGANs provide promising prior models for downstream tasks in image synthesis and editing. However, since the latent codes of StyleGANs are designed to control global styles, it is difficult to achieve fine-grained control over synthesized images. We present SemanticStyleGAN, in which a generator is trained to model local semantic parts separately and synthesizes images in a compositional way. The structure and texture of different local parts are controlled by their corresponding latent codes. Experimental results demonstrate that our model provides strong disentanglement between different spatial regions. When combined with editing methods designed for StyleGANs, it can achieve more fine-grained control for editing synthesized or real images. The model can also be extended to other domains via transfer learning. Thus, as a generic prior model with built-in disentanglement, it can facilitate the development of GAN-based applications and enable more potential downstream tasks.
Generative Adversarial Networks (GANs) have achieved photo-realistic quality in image generation. However, how best to control the image content remains an open challenge. We introduce LatentKeypointGAN, a two-stage GAN trained end-to-end on the classical GAN objective and internally conditioned on a set of spatial keypoints. These keypoints have associated appearance embeddings, which respectively control the positions of the generated objects and their parts, and their style. A major difficulty, which we address with suitable network architectures and training schemes, is disentangling images into spatial and appearance factors without domain knowledge or supervision signals. We demonstrate that LatentKeypointGAN provides an interpretable latent space that can be used to rearrange generated images by repositioning and swapping keypoint embeddings, for example generating portraits by combining the eyes, nose, and mouth from different images. In addition, the explicit generation of keypoints and matching images enables a new GAN-based method for unsupervised keypoint detection.
Recent advances in high-fidelity semantic image editing rely on the disentangled latent spaces of state-of-the-art generative models, such as StyleGAN. Specifically, recent works have shown that decent controllability of attributes in face images can be achieved via linear shifts along latent directions. Several recent methods address the discovery of such directions, implicitly assuming that state-of-the-art GANs learn latent spaces with inherently linearly separable attribute distributions and semantic vector-arithmetic properties. In our work, we show that non-linear latent code manipulations, realized as flows of a trainable Neural ODE, are beneficial for many practical non-face image domains with more complex, non-textural factors of variation. In particular, we investigate a large number of datasets with known attributes and demonstrate that certain attribute manipulations are challenging to obtain with linear shifts alone.
Our method performs local semantic editing on GAN output images, transferring the appearance of a specific object part from a reference image to a target image.
Advances in GANs have enabled the generation of high-resolution images of high perceptual quality. StyleGANs allow compelling attribute modification on face images via mathematical operations on the latent style vectors in the W/W+ space, effectively modulating the rich hierarchical representations of the generator. Recently, such operations have been generalized beyond attribute swapping in the original StyleGAN paper to include interpolations. Despite many significant improvements in StyleGANs, they are still observed to generate unnatural images. The quality of the generated images rests on two assumptions: (a) the richness of the hierarchical representations learned by the generator, and (b) the linearity and smoothness of the style spaces. In this work, we propose a Hierarchical Semantic Regularizer (HSR) that aligns the hierarchical representations learned by the generator with corresponding powerful features learned from large amounts of data. HSR not only improves the generator's representations but also the linearity and smoothness of the latent style spaces, leading to the generation of more natural-looking style-edited images. To demonstrate the improvement in linearity, we propose a novel metric, the Attribute Linearity Score (ALS). A significant reduction in the generation of unnatural images is corroborated by an average improvement of 16.19% in the Perceptual Path Length (PPL) metric across different standard datasets, while simultaneously improving the linearity of attribute change in attribute-editing tasks.
In this paper, we address the problem of neural face reenactment, where, given a pair of source and target facial images, we need to transfer the target's pose (defined as the head pose and facial expressions) to the source image, while at the same time preserving the source's identity characteristics (e.g., facial shape, hairstyle, etc.), even in the challenging case where the source and target faces belong to different identities. In doing so, we address some of the limitations of state-of-the-art works, namely that they rely on labeled data during inference and that they do not preserve identity under large head pose variations. More specifically, we propose a framework that, using unpaired, randomly generated facial images, learns to disentangle the identity characteristics of a face from its pose by incorporating the recently introduced style space $\mathcal{S}$ of StyleGAN2, which exhibits remarkable disentanglement properties. By leveraging this, we learn to successfully mix a pair of source and target style codes using supervision from a 3D model. The resulting latent code, which is subsequently used for reenactment, consists of latent units corresponding only to the facial pose of the target and units corresponding only to the identity of the source, leading to notably improved reenactment performance compared to recent state-of-the-art methods. We show quantitatively and qualitatively that the proposed method produces higher-quality results than the state of the art, even under extreme pose variations. Finally, we report results on real images by first embedding them into the latent space of the pretrained generator. We make the code and pretrained models publicly available at: https://github.com/stelabou/stylemask
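The style-code mixing step can be sketched as a masked selection over StyleSpace units: pose-related units are taken from the target code and the remaining (identity-related) units from the source. The codes, the dimensionality, and the random mask below are placeholders; in the paper the correspondence between units and pose is learned with 3D-model supervision.

```python
# Minimal sketch: mask-based mixing of source and target StyleSpace codes.
import numpy as np

rng = np.random.default_rng(0)
s_source = rng.normal(size=6048)                     # StyleSpace code of the source (identity)
s_target = rng.normal(size=6048)                     # StyleSpace code of the target (pose)
pose_mask = rng.random(6048) < 0.05                  # units assumed to control head pose/expression

s_reenact = np.where(pose_mask, s_target, s_source)  # mix: target pose + source identity
```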
Privacy of machine learning models is one of the remaining challenges that hinder the broad adoption of Artificial Intelligence (AI). This paper considers this problem in the context of image datasets containing faces. Anonymization of such datasets is becoming increasingly important due to their central role in the training of autonomous cars, for example, and the vast amount of data generated by surveillance systems. While most prior work de-identifies facial images by modifying identity features in pixel space, we instead project the image onto the latent space of a Generative Adversarial Network (GAN) model, find the features that provide the biggest identity disentanglement, and then manipulate these features in latent space, pixel space, or both. The main contribution of the paper is the design of a feature-preserving anonymization framework, StyleID, which protects the individuals' identity, while preserving as many characteristics of the original faces in the image dataset as possible. As part of the contribution, we present a novel disentanglement metric, three complementing disentanglement methods, and new insights into identity disentanglement. StyleID provides tunable privacy, has low computational complexity, and is shown to outperform current state-of-the-art solutions.
Generative Adversarial Networks (GANs) have attracted enormous attention due to their simple yet effective training mechanism and superior image generation quality. With the ability to generate photo-realistic high-resolution (e.g., $1024\times1024$) images, recent GAN models have greatly narrowed the gap between generated images and real ones. Consequently, many recent works show emerging interest in taking advantage of pre-trained GAN models by exploiting their well-behaved latent spaces and learned GAN priors. In this paper, we briefly review recent progress on leveraging pre-trained large-scale GAN models from three aspects: 1) the training of large-scale generative adversarial networks, 2) exploring and understanding pre-trained GAN models, and 3) leveraging these models for subsequent tasks such as image restoration and editing. More information about relevant methods and repositories can be found at https://github.com/csmliu/pretretaining-gans.
We introduce a new method for diverse foreground generation with explicit control over various factors. Existing image inpainting based foreground generation methods often struggle to generate diverse results and rarely allow users to explicitly control specific factors of variation (e.g., varying the facial identity or expression for face inpainting results). We leverage contrastive learning with latent codes to generate diverse foreground results for the same masked input. Specifically, we define two sets of latent codes, where one controls a pre-defined factor (``known''), and the other controls the remaining factors (``unknown''). The sampled latent codes from the two sets jointly bi-modulate the convolution kernels to guide the generator to synthesize diverse results. Experiments demonstrate the superiority of our method over state-of-the-art methods in result diversity and generation controllability.
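A minimal sketch of the bi-modulation idea, assuming that each of the two latent codes produces a per-channel scale and that their product jointly modulates the convolution kernels; the shapes, affine mappings, and single-sample handling are illustrative assumptions rather than the paper's architecture.

```python
# Minimal sketch: two latent codes jointly scale ("bi-modulate") conv kernels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiModulatedConv(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, dim_known=32, dim_unknown=32):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.02)
        self.to_s_known = nn.Linear(dim_known, in_ch)      # "known" factor -> per-channel scale
        self.to_s_unknown = nn.Linear(dim_unknown, in_ch)  # "unknown" factor -> per-channel scale

    def forward(self, x, z_known, z_unknown):
        # Single-sample sketch; real modulated convs handle batches via grouped convolution.
        s = self.to_s_known(z_known) * self.to_s_unknown(z_unknown)   # joint modulation
        w = self.weight * s.view(1, -1, 1, 1)                         # scale input channels
        return F.conv2d(x, w, padding=1)

layer = BiModulatedConv()
out = layer(torch.randn(1, 64, 16, 16), torch.randn(1, 32), torch.randn(1, 32))
```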
Face image manipulation via three-dimensional guidance has been widely applied to various interactive scenarios due to its semantically meaningful understanding and user-friendly controllability. However, existing 3D-morphable-model-based manipulation methods are not directly applicable to out-of-domain faces, such as non-photorealistic paintings, cartoon portraits, or even animals, mainly due to the formidable difficulty of building the model for each specific face domain. To overcome this challenge, we propose, to the best of our knowledge, the first method to manipulate faces in arbitrary domains using a human 3DMM. This is achieved through two main steps: 1) a disentangled mapping from 3DMM parameters to the latent space embedding of a pretrained StyleGAN2, which ensures disentangled and precise control of each semantic attribute; and 2) bridging the domain discrepancy by enforcing a consistent latent space embedding, which makes the human 3DMM applicable to out-of-domain faces. Experiments and comparisons demonstrate the superiority of our high-quality semantic manipulation method across various face domains, with all major 3D facial attributes controllable: pose, expression, shape, albedo, and illumination. Moreover, we develop an intuitive editing interface to support user-friendly control with instant feedback. Our project page is https://cassiepython.github.io/cddfm3d/index.html