Despite extensive research on generative adversarial networks (GANs), how to reliably sample high-quality images from their latent space remains an under-explored topic. In this paper, we propose a novel GAN latent sampling method by exploring and exploiting the hubness prior of GAN latent distributions. Our key insight is that the high dimensionality of the GAN latent space inevitably leads to the emergence of hub latents, which usually have much larger sampling densities than other latents in the latent space. As a result, these hub latents are better trained and thus contribute more to the synthesis of high-quality images. Unlike a posteriori "cherry-picking", our method is highly efficient because it is a priori: it identifies high-quality latents before images are synthesized. Furthermore, we show that the well-known but purely empirical truncation trick is a naive approximation to the central clustering effect of hub latents, which not only uncovers the rationale behind the truncation trick but also indicates the superiority and fundamentality of our method. Extensive experimental results demonstrate the effectiveness of the proposed method.
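The a priori identification of hub latents can be sketched with a nearest-neighbour occurrence count: a latent that appears in many other latents' k-NN lists is a hub. This is a minimal NumPy sketch with toy dimensions; the pool size, latent dimension, and selection count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical pool of candidate latents drawn from the usual GAN prior.
latents = rng.standard_normal((400, 32))  # 400 candidates, 32-dim latent space

# Pairwise distances; ignore self-distances.
d = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)

# For each latent, its k nearest neighbours.
k = 10
knn = np.argsort(d, axis=1)[:, :k]

# Hubness score: how often each latent occurs in other latents' k-NN lists.
counts = np.bincount(knn.ravel(), minlength=len(latents))

# A priori selection: keep the most hub-like latents before any image synthesis.
hub_idx = np.argsort(counts)[::-1][:40]
```

The selected `hub_idx` would then be the latents fed to the generator, in place of uniform sampling from the prior.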
The ability to automatically estimate the quality and coverage of the samples produced by a generative model is a vital requirement for driving algorithm research. We present an evaluation metric that can separately and reliably measure both of these aspects in image generation tasks by forming explicit, non-parametric representations of the manifolds of real and generated data. We demonstrate the effectiveness of our metric in StyleGAN and BigGAN by providing several illustrative examples where existing metrics yield uninformative or contradictory results. Furthermore, we analyze multiple design variants of StyleGAN to better understand the relationships between the model architecture, training methods, and the properties of the resulting sample distribution. In the process, we identify new variants that improve the state-of-the-art. We also perform the first principled analysis of truncation methods and identify an improved method. Finally, we extend our metric to estimate the perceptual quality of individual samples, and use this to study latent space interpolations.
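The explicit non-parametric manifold estimate described above can be sketched with k-NN hyperspheres: a sample counts as lying on a manifold if it falls inside the k-NN radius of at least one reference feature. This is a toy NumPy sketch with made-up feature sets; in practice the features come from a pretrained embedding network.

```python
import numpy as np

rng = np.random.default_rng(1)
real = rng.standard_normal((300, 16))        # stand-ins for real-image features
fake = rng.standard_normal((300, 16)) * 0.5  # generated features, narrower spread

def knn_radii(x, k=3):
    # Radius of each point's k-NN hypersphere (distance to its k-th neighbour).
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.sort(d, axis=1)[:, k - 1]

def in_manifold(samples, ref, radii):
    # A sample lies on the estimated manifold if it falls inside the
    # k-NN hypersphere of at least one reference point.
    d = np.linalg.norm(samples[:, None] - ref[None, :], axis=-1)
    return (d <= radii[None, :]).any(axis=1)

precision = in_manifold(fake, real, knn_radii(real)).mean()  # fakes on real manifold
recall = in_manifold(real, fake, knn_radii(fake)).mean()     # reals on fake manifold
```

Because the two directions are measured separately, a mode-dropping generator shows up as low recall even when precision stays high.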
Evaluating image generation models such as generative adversarial networks (GANs) is a challenging problem. A common approach is to compare the distributions of a set of ground-truth images and a set of generated test images. The Fréchet Inception Distance is one of the most widely used metrics for evaluating GANs, and it assumes that the features of a set of images extracted by a well-trained Inception model follow a normal distribution. In this paper, we argue that this is an over-simplified assumption that may lead to unreliable evaluation results, and that more accurate density estimation can be achieved with a truncated generalized normal distribution. Based on this, we propose a new metric for accurately evaluating GANs, called TREND (TRuncated gEneralized Normal Density estimation of Inception embeddings). We demonstrate that our approach significantly reduces errors in density estimation and thus eliminates the risk of faulty evaluation results. Furthermore, we show that the proposed metric significantly improves the robustness of evaluation results against variations in the number of image samples.
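The Gaussian assumption this abstract criticizes is explicit in the FID formula itself: fit a mean and covariance to each feature set and take the Fréchet distance between the two Gaussians. A minimal NumPy sketch (the symmetric form of the covariance term is used so a plain eigendecomposition suffices; feature sets are synthetic stand-ins):

```python
import numpy as np

def sqrtm_psd(a):
    # Matrix square root of a symmetric PSD matrix via eigendecomposition.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.T

def fid(feat_a, feat_b):
    # FID assumes the features of each set follow a multivariate normal.
    mu1, mu2 = feat_a.mean(0), feat_b.mean(0)
    c1 = np.cov(feat_a, rowvar=False)
    c2 = np.cov(feat_b, rowvar=False)
    s1 = sqrtm_psd(c1)
    # tr(C1 + C2 - 2 (C1^{1/2} C2 C1^{1/2})^{1/2}), the symmetric form.
    covmean = sqrtm_psd(s1 @ c2 @ s1)
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2 * covmean))

rng = np.random.default_rng(0)
feats = rng.standard_normal((1000, 8))
same = fid(feats, feats)          # identical sets -> distance near zero
shifted = fid(feats, feats + 2.0) # mean shift inflates the distance
```

If the true feature density is skewed or heavy-tailed, the fitted `mu`/`cov` pair misrepresents it, which is exactly the failure mode TREND's truncated generalized normal estimate targets.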
This work evaluates the robustness of quality measures for generative models, such as the Inception Score (IS) and the Fréchet Inception Distance (FID). Analogous to the vulnerability of deep models to various adversarial attacks, we show that such metrics can also be manipulated by additive pixel perturbations. Our experiments indicate that one can generate a distribution of images with very high scores but low perceptual quality. Conversely, one can optimize for small imperceptible perturbations that, when added to real-world images, deteriorate their scores. We further extend our evaluation to generative models themselves, including the state-of-the-art StyleGAN. We show the vulnerability of both generative models and the FID against additive perturbations in the latent space. Finally, we show that the FID can be made robust by simply replacing the standard Inception model with a robust Inception. We validate the effectiveness of the robustified metric through extensive experiments, which show that it is more robust against manipulation.
Progress in GANs has enabled the generation of high-resolution, perceptually plausible images. StyleGANs allow fascinating attribute modification via mathematical operations on latent style vectors in the W/W+ space, effectively modulating the rich hierarchical representations of the generator. Recently, such operations have been generalized beyond attribute swapping in the original StyleGAN paper to include interpolations. Despite many significant improvements to StyleGANs, they are still seen to generate unnatural images. The quality of the generated images rests on two assumptions: (a) the richness of the hierarchical representations learned by the generator, and (b) the linearity and smoothness of the style space. In this work, we propose a Hierarchical Semantic Regularizer (HSR) that aligns the hierarchical representations learned by the generator with corresponding powerful features learned from large amounts of data. HSR not only improves the generator's representations but also the linearity and smoothness of the latent style space, leading to the generation of more natural-looking style-edited images. To demonstrate the improved linearity, we propose a novel metric, the Attribute Linearity Score (ALS). A significant reduction in the generation of unnatural images is corroborated by an improvement in the Perceptual Path Length (PPL) metric of 16.19% on average across different standard datasets, while simultaneously improving the linearity of attribute change in attribute-editing tasks.
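The alignment idea behind HSR can be sketched as a per-level feature-matching penalty between the generator's intermediate activations and the corresponding features of a strong pretrained network evaluated on the generated image. This NumPy sketch uses hypothetical stand-in feature tensors and a plain MSE at each level; the actual regularizer's loss form and feature pairing are assumptions here, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: per-level generator features, and features from a
# pretrained network on the same generated image (here: noisy copies).
gen_feats = [rng.standard_normal((8, d)) for d in (64, 128, 256)]
tgt_feats = [f + 0.1 * rng.standard_normal(f.shape) for f in gen_feats]

# Hierarchical semantic regularizer (sketch): align the generator's feature
# hierarchy with the pretrained hierarchy, level by level.
hsr_loss = sum(np.mean((g - t) ** 2) for g, t in zip(gen_feats, tgt_feats))
```

In training, `hsr_loss` would be added to the generator objective so that each layer's representation stays close to robust semantics learned from large-scale data.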
Generative adversarial networks (GANs) produce high-quality images but are challenging to train. They need careful regularization, vast amounts of compute, and expensive hyperparameter sweeps. We make significant headway on these issues by projecting generated and real samples into a fixed, pretrained feature space. Motivated by the finding that the discriminator cannot fully exploit features from deeper layers of the pretrained model, we propose a more effective strategy that mixes features across channels and resolutions. Our Projected GAN improves image quality, sample efficiency, and convergence speed. It is further compatible with resolutions of up to one megapixel and advances the state-of-the-art Fréchet Inception Distance (FID) on twenty-two benchmark datasets. Importantly, Projected GANs match the previously lowest FIDs up to 40 times faster, cutting the wall-clock time from 5 days to less than 3 hours given the same computational resources.
Evaluation metrics in image synthesis play a key role in measuring the performance of generative models. However, most metrics mainly focus on image fidelity. Existing diversity metrics are derived by comparing distributions, and thus they cannot quantify the diversity or rarity of each individual generated image. In this work, we propose a new evaluation metric, called the rarity score, to measure the individual rarity of each image synthesized by a generative model. We first show the empirical observation that common samples are close to each other while rare samples are far apart from each other in terms of nearest-neighbor distances in feature space. We then use our metric to demonstrate that the extent to which different generative models produce rare images can be effectively compared. We also propose a method to compare rarity between datasets that share the same concept, such as CelebA-HQ and FFHQ. Finally, we analyze the use of the metric with different designs of feature space to better understand the relationship between feature spaces and the resulting rare images. Code will be publicly available online for the research community.
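A simplified per-sample rarity measure in this spirit can be sketched with nearest-neighbour distances: a generated feature is scored only if it lies inside some real sample's k-NN sphere (i.e., it is realistic), and its score grows with its distance from the real features. This is a toy NumPy sketch with synthetic features; the exact scoring rule of the paper's rarity score is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.standard_normal((400, 32))   # stand-in real-image features
fake = rng.standard_normal((5, 32))     # features of generated samples

# k-NN radius of every real feature (distance to its k-th real neighbour).
k = 5
d_rr = np.linalg.norm(real[:, None] - real[None, :], axis=-1)
np.fill_diagonal(d_rr, np.inf)
radii = np.sort(d_rr, axis=1)[:, k - 1]

# A fake is "on-manifold" if it falls inside at least one real k-NN sphere.
d_fr = np.linalg.norm(fake[:, None] - real[None, :], axis=-1)
on_manifold = (d_fr <= radii[None, :]).any(axis=1)

# Rarity (simplified): nearest-real distance, defined only for realistic
# samples; larger distance -> rarer, NaN -> not realistic enough to score.
rarity = np.where(on_manifold, d_fr.min(axis=1), np.nan)
```

Unlike distribution-level diversity metrics, this assigns a score to each individual image, so samples can be ranked from common to rare.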
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available. To facilitate GAN training, current methods propose to use data-specific augmentation techniques. Despite the effectiveness, it is difficult for these methods to scale to practical applications. In this work, we present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks. We first produce augmented samples using the convex combinations of the real samples. Then, we optimize the augmented samples by minimizing the norms of the data scores, i.e., the gradients of the log-density functions. This procedure enforces the augmented samples close to the data manifold. To estimate the scores, we train a deep estimation network with multi-scale score matching. For different image synthesis tasks, we train the score estimation network using different data. We do not require the tuning of the hyperparameters or modifications to the network architecture. The ScoreMix method effectively increases the diversity of data and reduces the overfitting problem. Moreover, it can be easily incorporated into existing GAN models with minor modifications. Experimental results on numerous tasks demonstrate that GAN models equipped with the ScoreMix method achieve significant improvements.
Generative adversarial networks (GANs) are one of the state-of-the-art generative models for realistic image synthesis. While training and evaluating GANs has become increasingly important, the current GAN research ecosystem does not provide reliable benchmarks on which evaluations are conducted consistently. Furthermore, because there are few validated GAN implementations, researchers devote considerable time to reproducing baselines. We study the taxonomy of GAN approaches and present a new open-source library named StudioGAN. StudioGAN supports 7 GAN architectures, 9 conditioning methods, 4 adversarial losses, 13 regularization modules, 3 differentiable augmentations, 7 evaluation metrics, and 5 evaluation backbones. With our training and evaluation protocol, we present a large-scale benchmark using various datasets (CIFAR10, ImageNet, AFHQv2, FFHQ, and Baby/Papa/Granpa-ImageNet) and 3 different evaluation backbones (InceptionV3, SwAV, and Swin Transformer). Unlike other benchmarks used in the GAN community, we train representative GANs, including BigGAN, StyleGAN2, and StyleGAN3, in a unified training pipeline and quantify generation performance with 7 evaluation metrics. The benchmark also evaluates other cutting-edge generative models (e.g., StyleGAN-XL, ADM, MaskGIT, and RQ-Transformer). StudioGAN provides GAN implementations with pretrained weights, along with training and evaluation scripts. StudioGAN is available at https://github.com/postech-cvlab/pytorch-studiogan.
We propose an efficient algorithm to embed a given image into the latent space of StyleGAN. This embedding enables semantic image editing operations that can be applied to existing photographs. Taking the StyleGAN trained on the FFHQ dataset as an example, we show results for image morphing, style transfer, and expression transfer. Studying the results of the embedding algorithm provides valuable insights into the structure of the StyleGAN latent space. We propose a set of experiments to test what class of images can be embedded, how they are embedded, what latent space is suitable for embedding, and if the embedding is semantically meaningful.
The performance of generative adversarial networks (GANs) heavily deteriorates given a limited amount of training data. This is mainly because the discriminator is memorizing the exact training set. To combat it, we propose Differentiable Augmentation (DiffAugment), a simple method that improves the data efficiency of GANs by imposing various types of differentiable augmentations on both real and fake samples. Previous attempts to directly augment the training data manipulate the distribution of real images, yielding little benefit; DiffAugment enables us to adopt the differentiable augmentation for the generated samples, effectively stabilizes training, and leads to better convergence. Experiments demonstrate consistent gains of our method over a variety of GAN architectures and loss functions for both unconditional and class-conditional generation. With DiffAugment, we achieve a state-of-the-art FID of 6.80 with an IS of 100.8 on ImageNet 128×128 and 2-4× reductions of FID given 1,000 images on FFHQ and LSUN. Furthermore, with only 20% training data, we can match the top performance on CIFAR-10 and CIFAR-100. Finally, our method can generate high-fidelity images using only 100 images without pre-training, while being on par with existing transfer learning algorithms. Code is available at https://github.com/mit-han-lab/data-efficient-gans.
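The core mechanic of DiffAugment is that one shared family of differentiable augmentations is applied to both real and generated samples before they reach the discriminator, so augmentations regularize the discriminator without leaking into the learned distribution. A toy NumPy sketch (the augmentations here, brightness shift and translation, are illustrative stand-ins for the paper's policies; gradients would flow through them in a real framework):

```python
import numpy as np

rng = np.random.default_rng(0)

def diff_augment(x, rng):
    # Stand-in differentiable augmentations: random brightness shift and a
    # random horizontal translation (both differentiable w.r.t. pixel values).
    x = x + rng.uniform(-0.2, 0.2)
    shift = int(rng.integers(-2, 3))
    return np.roll(x, shift, axis=-1)

real = rng.standard_normal((4, 3, 8, 8))   # a batch of real images (NCHW)
fake = rng.standard_normal((4, 3, 8, 8))   # a batch of generated images

# The discriminator only ever sees augmented views of BOTH sample types:
# D(T(x)) and D(T(G(z))), instead of D(x) and D(G(z)).
d_in_real = diff_augment(real, rng)
d_in_fake = diff_augment(fake, rng)
```

Augmenting only the reals would shift the target distribution, and non-differentiable augmentations of the fakes would block generator gradients; applying the same differentiable transform to both sides avoids both failure modes.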
The evaluation of generative models is mostly based on the comparison between the estimated distribution and the ground-truth distribution in a certain feature space. To embed samples into informative features, previous works often use convolutional neural networks optimized for classification, which has been criticized by recent studies. Therefore, various feature spaces have been explored to discover alternatives. Among them, a surprising approach is to use randomly initialized neural networks for feature embedding. However, the fundamental basis for adopting random features has not been sufficiently justified. In this paper, we rigorously investigate the feature space of models with random weights in comparison to that of well-trained models. Furthermore, we provide empirical evidence for choosing networks whose random features yield consistently reliable results. Our results indicate that the features of random networks can evaluate generative models as well as those of well-trained networks, and moreover, the two kinds of features can be used together in a complementary way.
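The random-feature idea is simple to sketch: a network is initialized and never trained, and its activations serve as the embedding fed to any distribution metric. A minimal NumPy sketch with a hypothetical two-layer ReLU embedder and synthetic inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_feature_net(dim_in, dim_out, rng):
    # A randomly initialized (and never trained) two-layer network used
    # purely as a feature embedder for evaluating generative models.
    w1 = rng.standard_normal((dim_in, 256)) / np.sqrt(dim_in)
    w2 = rng.standard_normal((256, dim_out)) / np.sqrt(256)
    return lambda x: np.maximum(x @ w1, 0.0) @ w2

embed = random_feature_net(64, 32, rng)
real_feats = embed(rng.standard_normal((200, 64)))
fake_feats = embed(rng.standard_normal((200, 64)) * 0.8)
# These embeddings can then feed any distribution metric (e.g., an FID-style
# comparison of means and covariances) in place of trained Inception features.
```

Because the weights are fixed at initialization, the embedding carries no classification bias, which is the property the abstract investigates.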
Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128×128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.5 and Fréchet Inception Distance (FID) of 7.4, improving over the previous best IS of 52.52 and FID of 18.65.
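The truncation trick described above amounts to sampling the generator input from a truncated normal: any coordinate whose magnitude exceeds a threshold is resampled, shrinking the input variance. A minimal NumPy sketch (shape and threshold are illustrative):

```python
import numpy as np

def truncated_normal(shape, threshold, rng):
    # Resample every coordinate whose magnitude exceeds the threshold,
    # yielding samples from a truncated standard normal.
    z = rng.standard_normal(shape)
    mask = np.abs(z) > threshold
    while mask.any():
        z[mask] = rng.standard_normal(int(mask.sum()))
        mask = np.abs(z) > threshold
    return z

rng = np.random.default_rng(0)
z = truncated_normal((64, 128), threshold=0.5, rng=rng)
# Lower thresholds reduce input variance: higher sample fidelity, less variety.
```

Sweeping the threshold traces out the fidelity/variety trade-off curve; the orthogonal regularization in the abstract is what keeps the generator well-behaved on these off-distribution truncated inputs.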
Despite recent advances in semantic manipulation using StyleGAN, semantic editing of real faces remains challenging. The gap between the $W$ space and the $W+$ space demands an undesirable trade-off between reconstruction quality and editing quality. To solve this problem, we propose to expand the latent space by replacing the fully connected layers in StyleGAN's mapping network with attention-based transformers. This simple yet effective technique integrates the two aforementioned spaces and transforms them into one new latent space called $W++$. Our modified StyleGAN maintains the state-of-the-art generation quality of the original StyleGAN with moderately better diversity. More importantly, the proposed $W++$ space achieves superior performance in both reconstruction quality and editing quality. Despite these significant advantages, our $W++$ space supports existing inversion algorithms and editing methods with only negligible modifications, owing to its structural similarity with the $W/W+$ space. Extensive experiments on the FFHQ dataset prove that our proposed $W++$ space is evidently preferable over the previous $W/W+$ space. The code is publicly available at https://github.com/anonsubm2021/transstylegan.
The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably attribute a generated image to a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.
Generating diverse and realistic images with generative models such as GANs typically requires training on large amounts of images. GANs trained with extremely limited data can easily overfit to the few training samples and exhibit a "stairlike" latent space, where discontinuities in latent-space transitions occasionally yield abrupt changes in the outputs. In this work, we consider the situation where neither a large-scale dataset of our interest nor a transferable source dataset is available, and seek to train existing generative models with minimal overfitting and mode collapse. We propose mixup-based distance regularization on the feature spaces of both the generator and the corresponding discriminator, which encourages the two players to reason not only about the scarce observed data points but also about the relative distances in the feature space they reside in. Qualitative and quantitative evaluations on diverse datasets demonstrate that our method is generally applicable to existing models to improve fidelity and diversity under the constraint of limited data. Code will be made public.
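One way to sketch a mixup-based distance regularizer is: mix two latents with coefficient alpha, then penalize the feature space when the mixed sample's relative similarities to the two endpoints disagree with the similarities implied by alpha. The NumPy sketch below uses a hypothetical feature map and a KL penalty over softened similarities; the exact loss form is an assumption for illustration, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))

def features(z):
    # Hypothetical stand-in for intermediate generator/discriminator features.
    return np.tanh(z @ W)

z0, z1 = rng.standard_normal(8), rng.standard_normal(8)

# Mix the latents; alpha implies how close the mix should stay to each endpoint.
alpha = 0.3
z_mix = alpha * z0 + (1 - alpha) * z1
f0, f1, fm = features(z0), features(z1), features(z_mix)

# Feature-space similarities of the mix to the two endpoints...
d = np.array([np.linalg.norm(fm - f0), np.linalg.norm(fm - f1)])
p_feat = np.exp(-d) / np.exp(-d).sum()

# ...should match the coefficients used to build the mix.
p_target = np.array([alpha, 1 - alpha])
reg_loss = float(np.sum(p_target * (np.log(p_target) - np.log(p_feat))))
```

Penalties of this shape discourage the "stairlike" geometry: interpolated latents are pushed to land at proportionally interpolated positions in feature space rather than snapping to memorized samples.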
We propose a generative adversarial network with multiple discriminators, where each discriminator specializes in distinguishing a subset of the real dataset. This approach facilitates learning a generator that coincides with the underlying data distribution, thereby mitigating the chronic mode-collapse problem. Taking inspiration from multiple-choice learning, we guide each discriminator to have expertise in a subset of the entire data and allow the generator to automatically find reasonable correspondences between the latent and real data spaces, without supervision on training examples or the number of discriminators. Despite the use of multiple discriminators, the backbone networks are shared across the discriminators, and the increase in training cost is minimized. We demonstrate the effectiveness of our algorithm on standard datasets using multiple evaluation metrics.
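The multiple-choice assignment at the heart of this setup can be sketched as: score each real sample with every discriminator head and let only the best-scoring head train on that sample, so heads drift toward disjoint specialties. A toy NumPy sketch with a hypothetical linear stand-in for the shared-backbone heads:

```python
import numpy as np

rng = np.random.default_rng(0)
num_d, batch = 3, 16

# Hypothetical stand-in: a shared backbone with separate linear heads, so each
# column of `scores` is one discriminator's realness score per sample.
heads = rng.standard_normal((4, num_d))
x = rng.standard_normal((batch, 4))
scores = x @ heads  # shape (batch, num_d)

# Multiple-choice assignment: each real sample trains only the discriminator
# that already scores it highest, so each head specializes on a data subset.
assign = scores.argmax(axis=1)
masks = [assign == j for j in range(num_d)]
```

During training, discriminator `j`'s real-data loss would be computed only over `masks[j]`, while the generator plays against all heads.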
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images. Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting, the underlying cause that impedes the generator's convergence. This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator. As an alternative to existing methods that rely on standard data augmentations or model regularization, APA alleviates overfitting by employing the generator itself to augment the real data distribution with generated images, which deceives the discriminator adaptively. Extensive experiments demonstrate the effectiveness of APA in improving synthesis quality in the low-data regime. We provide a theoretical analysis to examine the convergence and rationality of our new training strategy. APA is simple and effective. It can be added seamlessly to powerful contemporary GANs, such as StyleGAN2, with negligible computational cost.
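The adaptive part can be sketched as a feedback loop: an overfitting indicator computed from the discriminator's outputs on real images nudges a probability `p` up or down, and with probability `p` a generated batch is presented to the discriminator labelled as real. The NumPy sketch below uses a hypothetical indicator signal and threshold; the specific heuristic and constants are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.0          # probability of presenting fakes to D as "real"
step = 0.01      # adjustment speed of p
target = 0.6     # threshold on the overfitting indicator (assumed value)

for it in range(1000):
    # Overfitting indicator in [-1, 1]: e.g. the mean sign of D's raw output
    # on real images (hypothetical stand-in values drawn here).
    lambda_r = float(np.tanh(rng.normal(0.8, 0.2)))

    # Raise p when D looks overconfident on reals, lower it otherwise.
    p = float(np.clip(p + step * np.sign(lambda_r - target), 0.0, 1.0))

    # With probability p, a generated batch would be deceptively labelled
    # real for this discriminator update (pseudo augmentation).
    use_pseudo_real = bool(rng.random() < p)
```

Because the pseudo-real images come from the generator itself, no external augmentation pipeline is needed, which is what keeps the overhead negligible.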
Although Generative Adversarial Networks (GANs) have made significant progress in face synthesis, there lacks enough understanding of what GANs have learned in the latent representation to map a random code to a photo-realistic image. In this work, we propose a framework called InterFaceGAN to interpret the disentangled face representation learned by the state-of-the-art GAN models and study the properties of the facial semantics encoded in the latent space. We first find that GANs learn various semantics in some linear subspaces of the latent space. After identifying these subspaces, we can realistically manipulate the corresponding facial attributes without retraining the model. We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection, resulting in more precise control of the attribute manipulation. Besides manipulating the gender, age, expression, and presence of eyeglasses, we can even alter the face pose and fix the artifacts accidentally made by GANs. Furthermore, we perform an in-depth face identity analysis and a layer-wise analysis to evaluate the editing results quantitatively. Finally, we apply our approach to real face editing by employing GAN inversion approaches and explicitly training feed-forward models based on the synthetic data established by InterFaceGAN. Extensive experimental results suggest that learning to synthesize faces spontaneously brings a disentangled and controllable face representation.
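The linear-subspace view above leads to two simple operations on latent codes: editing an attribute by moving along its hyperplane normal, and disentangling two attributes by projecting one normal off the other so an edit leaves the second attribute roughly unchanged. A NumPy sketch with hypothetical random unit normals standing in for boundaries that would really be fitted from labelled latent codes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical attribute boundary normals (in practice, fitted by a linear
# classifier on labelled latent codes, then unit-normalized).
n_age = rng.standard_normal(512); n_age /= np.linalg.norm(n_age)
n_glasses = rng.standard_normal(512); n_glasses /= np.linalg.norm(n_glasses)

# Unconditional edit: z' = z + alpha * n moves the attribute score linearly.
z = rng.standard_normal(512)
z_older = z + 3.0 * n_age

# Conditional manipulation via subspace projection: remove from the age
# direction its component along the glasses direction, so editing age
# (approximately) preserves the glasses attribute.
n_cond = n_age - (n_age @ n_glasses) * n_glasses
n_cond /= np.linalg.norm(n_cond)
```

The projected direction `n_cond` is orthogonal to `n_glasses` by construction, which is the mechanism behind the more precise attribute control described in the abstract.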
The introduction of high-quality image generation models, particularly the StyleGAN family, provides a powerful tool to synthesize and manipulate images. However, existing models are built upon high-quality (HQ) data as desired outputs, making them unfit for in-the-wild low-quality (LQ) images, which are common inputs for manipulation. In this work, we bridge this gap by proposing a novel GAN structure that allows for generating images with controllable quality. The network can synthesize various image degradation and restore the sharp image via a quality control code. Our proposed QC-StyleGAN can directly edit LQ images without altering their quality by applying GAN inversion and manipulation techniques. It also provides for free an image restoration solution that can handle various degradations, including noise, blur, compression artifacts, and their mixtures. Finally, we demonstrate numerous other applications such as image degradation synthesis, transfer, and interpolation.