We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CELEBA images at 1024². We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CELEBA dataset.
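To make the growing step concrete: when a new, higher-resolution layer is added, its output is blended with the upsampled output of the existing network while a weight alpha ramps from 0 to 1. A minimal PyTorch sketch of that fade-in, with illustrative block and `to_rgb` names of our own (not from the paper's code):

```python
import torch.nn.functional as F

def faded_output(x, new_block, to_rgb_old, to_rgb_new, alpha):
    """Blend the previous resolution's RGB output with that of a newly
    added block while the block fades in (alpha ramps from 0 to 1)."""
    # Old path: convert current features to RGB, then upsample 2x.
    old_rgb = F.interpolate(to_rgb_old(x), scale_factor=2, mode="nearest")
    # New path: run the freshly added higher-resolution block.
    new_rgb = to_rgb_new(new_block(x))
    return (1.0 - alpha) * old_rgb + alpha * new_rgb
```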
The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably attribute a generated image to a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.
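The path length regularizer mentioned above penalizes deviations of the Jacobian norm $\|\mathbf{J}_w^{\mathsf{T}}\mathbf{y}\|_2$, for random image-space directions $\mathbf{y}$, from a running target. A simplified PyTorch sketch, assuming a single latent w per image and omitting the running-average target and loss weighting:

```python
import math
import torch

def path_length_penalty(fake_images, w_latents, target):
    """Simplified path length penalty: keep |J_w^T y| close to a target,
    where y is a random image-space direction. Assumes `fake_images` was
    generated from `w_latents` with gradients enabled."""
    _, _, h, w = fake_images.shape
    noise = torch.randn_like(fake_images) / math.sqrt(h * w)
    # J_w^T y via autograd: gradient of <images, y> w.r.t. the latents.
    grads = torch.autograd.grad((fake_images * noise).sum(), w_latents,
                                create_graph=True)[0]
    lengths = grads.square().sum(dim=1).sqrt()
    return (lengths - target).square().mean()
```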
We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
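One common way to write the scale-specific style injection, as in the original StyleGAN, is adaptive instance normalization: the mapped latent re-scales and re-shifts the per-channel statistics of each synthesis layer. A minimal PyTorch sketch:

```python
import torch.nn as nn

class AdaIN(nn.Module):
    """Adaptive instance norm: normalize per-channel feature statistics,
    then re-scale/shift them with a learned affine function of the style w."""
    def __init__(self, channels, w_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        self.affine = nn.Linear(w_dim, channels * 2)

    def forward(self, x, w):
        scale, bias = self.affine(w).chunk(2, dim=1)
        x = self.norm(x)
        return x * (1 + scale[:, :, None, None]) + bias[:, :, None, None]
```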
Training generative adversarial networks (GAN) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.
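The adaptive part of the mechanism tunes an augmentation probability p from an overfitting heuristic, e.g., how often the discriminator's output on real images is positive. A sketch of that feedback loop; the target value and adjustment speed below are illustrative, not the paper's exact constants:

```python
def update_augment_p(p, d_real_sign_mean, target=0.6,
                     batch_size=64, images_per_sweep=500_000):
    """Nudge the augmentation probability p so the overfitting heuristic
    (mean sign of D's outputs on reals) stays near a target (simplified)."""
    step = batch_size / images_per_sweep  # how far p may move per minibatch
    if d_real_sign_mean > target:   # D too confident on reals: augment more
        p = min(p + step, 1.0)
    else:                           # D not overfitting: augment less
        p = max(p - step, 0.0)
    return p
```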
Generative adversarial networks (GANs) produce high-quality images but are challenging to train. They need careful regularization, vast amounts of compute, and expensive hyper-parameter sweeps. We make significant headway on these issues by projecting generated and real samples into a fixed, pretrained feature space. Motivated by the finding that the discriminator cannot fully exploit features from deeper layers of the pretrained model, we propose a more effective strategy that mixes features across channels and resolutions. Our Projected GAN improves image quality, sample efficiency, and convergence speed. It is further compatible with resolutions of up to one megapixel and advances the state-of-the-art Fréchet Inception Distance (FID) on twenty-two benchmark datasets. Importantly, Projected GANs match the previously lowest FIDs up to 40 times faster, cutting the wall-clock time from 5 days to less than 3 hours given the same computational resources.
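The core of the projection idea is that the feature network is never trained; only the discriminator operating on the projected features is. A minimal PyTorch/torchvision sketch (the paper additionally mixes features across channels and resolutions and uses multiple discriminators, which is omitted here; the EfficientNet backbone below is an illustrative stand-in for the paper's choice):

```python
from torchvision.models import efficientnet_b0

# Freeze a pretrained feature extractor; discriminate in its feature space.
feature_net = efficientnet_b0(weights="IMAGENET1K_V1").features.eval()
for p in feature_net.parameters():
    p.requires_grad_(False)

def discriminate(disc, images):
    feats = feature_net(images)  # weights frozen, but gradients still flow
    return disc(feats)           # back to the generator through the features
```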
Generative models operate at fixed resolution, even though natural images come in a variety of sizes. As high-resolution details are downsampled away and low-resolution images are discarded altogether, precious supervision is lost. We argue that every pixel matters, and create datasets with variable-size images collected at their native resolutions. To take advantage of varied-size data, we introduce continuous-scale training, a process that samples patches at random scales to train a new generator with variable output resolutions. First, conditioning the generator on a target scale allows us to generate higher-resolution images than previously possible, without adding layers to the model. Second, by conditioning on continuous coordinates, we can sample patches that still obey a consistent global layout, which also allows for scalable training at higher resolutions. Controlled FFHQ experiments show that our method can take advantage of multi-resolution training data better than discrete multi-scale approaches, achieving better FID scores and cleaner high-frequency details. We also train on other natural image domains including churches, mountains, and birds, and demonstrate arbitrary-scale synthesis with coherent global layouts and realistic local details, going beyond 2K resolution in our experiments. Our project page is available at: https://chail.github.io/anyres-gan/.
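Continuous-scale training boils down to cropping patches at random scales from native-resolution images and recording where each patch sits, so the generator can be conditioned on scale and position. A sketch of the data side in PyTorch (the sampling logic is ours, not the paper's code; it assumes the image is at least `patch_px` on each side):

```python
import torch
import torch.nn.functional as F

def sample_patch(image, patch_px=256):
    """Crop a random-scale patch from a (C, H, W) image and return it with
    its normalized global placement, for a (scale, position)-conditioned G."""
    _, h, w = image.shape
    scale = torch.empty(1).uniform_(patch_px / min(h, w), 1.0).item()
    size = int(min(h, w) * scale)
    top = torch.randint(0, h - size + 1, (1,)).item()
    left = torch.randint(0, w - size + 1, (1,)).item()
    patch = image[:, top:top + size, left:left + size]
    patch = F.interpolate(patch[None], size=(patch_px, patch_px),
                          mode="bilinear", align_corners=False)[0]
    coords = (top / h, left / w, size / h, size / w)  # where the patch sits
    return patch, coords
```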
Generative adversarial networks (GANs) have achieved photo-realistic quality in image generation. However, how best to control the image content remains an open challenge. We introduce LatentKeypointGAN, a two-stage GAN trained on the classical GAN objective with internal conditioning on a set of spatial keypoints. These keypoints have associated appearance embeddings that respectively control the position and the style of the generated objects and their parts. A major difficulty, which we address with suitable network architectures and training schemes, is disentangling the image into spatial and appearance factors without domain knowledge or supervision signals. We demonstrate that LatentKeypointGAN provides an interpretable latent space that can be used to rearrange the generated images by repositioning and exchanging keypoint embeddings, for example generating portraits by combining the eyes, nose, and mouth from different images. In addition, the explicit generation of keypoints and matching images enables a new GAN-based method for unsupervised keypoint detection.
Compared to CNNs for classification, segmentation, or object detection, generative networks pursue fundamentally different goals and methods. Originally they were designed not as tools for image analysis but to generate natural-looking images. The adversarial training paradigm was proposed to stabilize generative methods and has proven very successful, though it was by no means the first attempt. This chapter gives a basic introduction to the motivation behind generative adversarial networks (GANs) and traces the path of their success by abstracting the basic task and working mechanism and deriving the difficulties of early practical approaches. Methods for more stable training are presented, as are typical signs of poor convergence and their causes. Although this chapter focuses on GANs for image generation and image analysis, the adversarial training paradigm itself is not specific to images and also generalizes to other tasks in image analysis. Example architectures for image semantic segmentation and anomaly detection are introduced, before GANs are contrasted with further generative modeling approaches that have recently entered the scene. This allows a contextualized view of the limits of GANs, but also of their benefits.
The ability to automatically estimate the quality and coverage of the samples produced by a generative model is a vital requirement for driving algorithm research. We present an evaluation metric that can separately and reliably measure both of these aspects in image generation tasks by forming explicit, non-parametric representations of the manifolds of real and generated data. We demonstrate the effectiveness of our metric in StyleGAN and BigGAN by providing several illustrative examples where existing metrics yield uninformative or contradictory results. Furthermore, we analyze multiple design variants of StyleGAN to better understand the relationships between the model architecture, training methods, and the properties of the resulting sample distribution. In the process, we identify new variants that improve the state-of-the-art. We also perform the first principled analysis of truncation methods and identify an improved method. Finally, we extend our metric to estimate the perceptual quality of individual samples, and use this to study latent space interpolations.
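The non-parametric manifold estimate behind such metrics is commonly built from k-nearest-neighbor balls: a sample counts as covered if it falls inside some ball around a reference sample. A small-scale NumPy sketch of that construction (brute-force distances, for illustration only):

```python
import numpy as np

def knn_radii(feats, k=3):
    """Radius of each point's k-th nearest neighbor (excluding itself)."""
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    return np.sort(d, axis=1)[:, k]  # column 0 is the point itself

def coverage(candidates, reference, k=3):
    """Fraction of candidates inside the reference manifold, approximated
    as the union of k-NN balls around the reference points."""
    radii = knn_radii(reference, k)
    d = np.linalg.norm(candidates[:, None] - reference[None, :], axis=-1)
    return float(np.mean((d <= radii[None, :]).any(axis=1)))

# precision = coverage(generated, real); recall = coverage(real, generated)
```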
Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128×128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.5 and Fréchet Inception Distance (FID) of 7.4, improving over the previous best IS of 52.52 and FID of 18.65.
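The truncation trick itself is simple: latent draws whose magnitude exceeds a threshold are resampled, shrinking the effective input variance and trading variety for fidelity. A minimal PyTorch sketch (the threshold value is illustrative):

```python
import torch

def truncated_z(batch, dim, threshold=0.5):
    """Truncation trick (sketch): resample any normal draw whose magnitude
    exceeds the threshold, trading sample variety for fidelity."""
    z = torch.randn(batch, dim)
    while True:
        mask = z.abs() > threshold
        if not mask.any():
            return z
        z[mask] = torch.randn(int(mask.sum()))
```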
Our goal with this survey is to provide an overview of the state of the art deep learning technologies for face generation and editing. We will cover popular latest architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantics editing and preserving photo quality. We aim to provide an entry point into the field for readers that have basic knowledge about the field of deep learning and are looking for an accessible introduction and overview.
Unsupervised generation of high-quality, multi-view-consistent images and 3D shapes using only collections of single-view 2D photographs has been a long-standing challenge. Existing 3D GANs are either compute-intensive or make approximations that are not 3D-consistent; the former limits the quality and resolution of the generated images, while the latter adversely affects multi-view consistency and shape quality. In this work, we improve the computational efficiency and image quality of 3D GANs without overly relying on these approximations. To this end, we introduce an expressive hybrid explicit-implicit network architecture that, together with other design choices, synthesizes not only high-resolution multi-view-consistent images in real time but also high-quality 3D geometry. By decoupling feature generation and neural rendering, our framework can leverage state-of-the-art 2D CNN generators, such as StyleGAN2, and inherit their efficiency and expressiveness. Among other experiments, we demonstrate state-of-the-art 3D-aware synthesis on FFHQ and AFHQ Cats.
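The hybrid explicit-implicit architecture in question is EG3D's tri-plane representation: features are stored on three axis-aligned 2D planes, and a 3D point is featurized by projecting it onto each plane and summing the bilinear samples. A PyTorch sketch of the lookup, with shapes and names of our own:

```python
import torch.nn.functional as F

def sample_triplane(planes, xyz):
    """Tri-plane lookup (sketch): project each 3D point onto the XY, XZ,
    and YZ feature planes and sum the bilinearly sampled features.
    planes: (3, C, H, W) tensor; xyz: (N, 3) points in [-1, 1]^3."""
    projections = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]
    feats = 0
    for plane, uv in zip(planes, projections):
        grid = uv.view(1, -1, 1, 2)                        # (1, N, 1, 2)
        sampled = F.grid_sample(plane[None], grid, mode="bilinear",
                                align_corners=False)       # (1, C, N, 1)
        feats = feats + sampled.squeeze(0).squeeze(-1).T   # (N, C)
    return feats  # decoded downstream by a small MLP into density/color
```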
The recent explosion of interest in transformers has suggested their potential to become powerful "universal" models for computer vision tasks such as classification, detection, and segmentation. While those attempts mainly study discriminative models, we explore transformers on a more notoriously difficult vision task: generative adversarial networks (GANs). Our goal is to conduct the first pilot study of building a GAN completely free of convolutions, using only pure transformer-based architectures. Our vanilla GAN architecture, dubbed TransGAN, consists of a memory-friendly transformer-based generator that progressively increases feature resolution and, correspondingly, a multi-scale discriminator that captures semantic contexts and low-level textures simultaneously. On top of these, we introduce a new grid self-attention module to further alleviate the memory bottleneck and scale TransGAN up to high-resolution generation. We also develop a unique training recipe comprising a series of techniques that mitigate TransGAN's training instability, such as data augmentation, modified normalization, and relative position encoding. Our best architecture achieves highly competitive performance compared to current state-of-the-art GANs built on convolutional backbones. Specifically, TransGAN sets a new state-of-the-art inception score of 10.43 and FID of 18.28 on STL-10, outperforming StyleGAN-V2. On higher-resolution (e.g., 256 x 256) generation tasks such as CelebA-HQ and LSUN-Church, TransGAN continues to produce diverse visual samples with high fidelity and impressive texture details. In addition, we dive deep into transformer-based generation models to understand how their behavior differs from convolutional ones by visualizing training dynamics. The code is available at https://github.com/vita-group/transgan.
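A memory-friendly way to progressively increase the token-grid resolution is to trade channels for spatial positions, e.g., with a pixel shuffle. A PyTorch sketch of one such upsampling stage (our simplification, not the repository's code):

```python
import torch.nn as nn

class UpsampleTokens(nn.Module):
    """Double the token-grid resolution (sketch): reshape the (B, H*W, C)
    token sequence into a feature map, pixel-shuffle 2x, and flatten back."""
    def __init__(self):
        super().__init__()
        self.shuffle = nn.PixelShuffle(2)  # trades channels for resolution

    def forward(self, tokens, h, w):
        b, n, c = tokens.shape             # n == h * w, c divisible by 4
        x = tokens.transpose(1, 2).reshape(b, c, h, w)
        x = self.shuffle(x)                # (b, c // 4, 2h, 2w)
        return x.flatten(2).transpose(1, 2), 2 * h, 2 * w
```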
Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this through deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image super-resolution and classification. The aim of this review paper is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible. In addition to identifying different methods for training and constructing GANs, we also point to remaining challenges in their theory and application.
Generative Adversarial Networks (GANs) were introduced by Goodfellow in 2014 and have since become popular for constructing generative artificial intelligence models. However, such networks have numerous drawbacks: long training times, sensitivity to hyperparameter tuning, the variety of loss and optimization functions to choose from, and difficulties like mode collapse. Current applications of GANs include generating photo-realistic human faces, animals, and objects. However, I wanted to explore the artistic ability of GANs in more detail, by using existing models and learning from them. This dissertation covers the basics of neural networks and works its way up to the particular aspects of GANs, together with experimentation on and modification of existing available models, from least complex to most. The intention is to see whether state-of-the-art GANs (specifically StyleGAN2) can generate album art covers and whether it is possible to tailor them by genre. This was attempted by first familiarizing myself with three existing GAN architectures, including the state-of-the-art StyleGAN2. The StyleGAN2 code was used to train a model on a dataset containing 80K album cover images, which was then used to style images by picking curated images and mixing their styles.
With the advent of deep learning (DL), super-resolution (SR) has also become a thriving research area. However, despite promising results, the field still faces challenges that require further research, e.g., allowing flexible upsampling, more effective loss functions, and better evaluation metrics. We review the domain of SR in light of recent advances and examine state-of-the-art models such as diffusion-based (DDPM) and transformer-based SR models. We critically discuss contemporary strategies used in SR and identify promising yet unexplored research directions. We complement previous surveys by incorporating the latest developments in the field, such as uncertainty-driven losses, wavelet networks, neural architecture search, novel normalization methods, and the latest evaluation techniques. We also include several visualizations of the models and methods throughout each chapter to facilitate a global understanding of trends in the field. This review ultimately aims to help researchers push the boundaries of DL applied to SR.
Realistic hyperspectral image (HSI) super-resolution (SR) techniques aim to generate a high-resolution (HR) HSI with higher spectral and spatial fidelity from its low-resolution (LR) counterpart. The generative adversarial network (GAN) has proven to be an effective deep learning framework for image super-resolution. However, the optimization process of existing GAN-based models frequently suffers from mode collapse, leading to limited capacity for spectral-spatial invariant reconstruction. This may cause spectral-spatial distortion on the generated HSI, especially with large upscaling factors. To alleviate the mode collapse problem, this work proposes a novel GAN model coupled with a latent encoder (LE-GAN), which can map the generated spectral-spatial features from the image space to the latent space and produce a coupled component to regularize the generated samples. Essentially, we treat an HSI as a high-dimensional manifold embedded in a latent space, so that the optimization of the GAN model is converted into the problem of learning the distribution of high-resolution HSI samples in the latent space, bringing the distribution of the generated super-resolution HSIs closer to that of their original high-resolution counterparts. We conduct experimental evaluations of the model's super-resolution performance and its capability to alleviate mode collapse. The model is tested and validated on two real HSI datasets acquired with different sensors (i.e., AVIRIS and UHD-185) for various upscaling factors and added noise levels, and compared with state-of-the-art super-resolution models (i.e., HyCoNet, LTTR, BAGAN, SR-GAN, WGAN).
In biomedical image analysis, the applicability of deep learning methods is directly impacted by the quantity of image data available, because deep learning models require large image datasets to achieve high-level performance. Generative Adversarial Networks (GANs) have been widely utilized to address data limitations through the generation of synthetic biomedical images. GANs consist of two models: the generator, which learns to produce synthetic images based on the feedback it receives, and the discriminator, which classifies an image as synthetic or real and provides that feedback to the generator. Throughout the training process, a GAN can experience several technical challenges that impede the generation of suitable synthetic imagery. First, the mode collapse problem, whereby the generator either produces an identical image or produces a uniform image from distinct input features. Second, the non-convergence problem, whereby the gradient descent optimizer fails to reach a Nash equilibrium. Third, the vanishing gradient problem, whereby unstable training behavior occurs because the discriminator achieves optimal classification performance, so that no meaningful feedback is provided to the generator. These problems result in synthetic imagery that is blurry, unrealistic, and less diverse. To date, no survey article has outlined the impact of these technical challenges in the context of the biomedical imagery domain. This work presents a review and taxonomy of solutions to the training problems of GANs in the biomedical imaging domain. The survey highlights important challenges and outlines future research directions for the training of GANs in the domain of biomedical imagery.
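For readers new to the two-model setup described above, a minimal non-saturating GAN update in PyTorch makes the feedback loop explicit (a generic sketch, not specific to any biomedical model):

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_G, opt_D, real, z_dim):
    """One minimal non-saturating GAN update: D learns to separate real
    from synthetic, and its verdict on fakes is G's training feedback."""
    z = torch.randn(real.size(0), z_dim)
    fake = G(z)

    # Discriminator: push D(real) toward 1 and D(fake) toward 0.
    real_logits, fake_logits = D(real), D(fake.detach())
    d_loss = (
        F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
        + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    )
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator: maximize D's belief that the fakes are real.
    g_logits = D(fake)
    g_loss = F.binary_cross_entropy_with_logits(g_logits, torch.ones_like(g_logits))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```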
We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner. This manifests itself as, e.g., detail appearing to be glued to image coordinates instead of the surfaces of depicted objects. We trace the root cause to careless signal processing that causes aliasing in the generator network. Interpreting all signals in the network as continuous, we derive generally applicable, small architectural changes that guarantee that unwanted information cannot leak into the hierarchical synthesis process. The resulting networks match the FID of StyleGAN2 but differ dramatically in their internal representations, and they are fully equivariant to translation and rotation even at subpixel scales. Our results pave the way for generative models better suited for video and animation.
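The signal-processing fix can be caricatured as wrapping each pointwise nonlinearity in resampling: oversample, apply the nonlinearity at the higher rate, low-pass, and come back down. A deliberately crude PyTorch sketch; the paper designs proper windowed-sinc filters, for which the average pool below is only a stand-in:

```python
import torch.nn.functional as F

def antialiased_lrelu(x, up=2):
    """Alias-suppressing nonlinearity (sketch): oversample, apply the
    pointwise nonlinearity at the higher rate, low-pass, then downsample."""
    x = F.interpolate(x, scale_factor=up, mode="bilinear", align_corners=False)
    x = F.leaky_relu(x, 0.2)
    # Cheap low-pass combined with the rate reduction (a stand-in for the
    # paper's carefully designed filters).
    return F.avg_pool2d(x, kernel_size=up, stride=up)
```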
Adversarially trained generative models (GANs) have recently achieved compelling image synthesis results. But despite early successes in using GANs for unsupervised representation learning, they have since been superseded by approaches based on self-supervision. In this work we show that progress in image generation quality translates to substantially improved representation learning performance. Our approach, BigBiGAN, builds upon the state-of-the-art BigGAN model, extending it to representation learning by adding an encoder and modifying the discriminator. We extensively evaluate the representation learning and generation capabilities of these BigBiGAN models, demonstrating that these generation-based models achieve the state of the art in unsupervised representation learning on ImageNet, as well as in unconditional image generation. Pretrained BigBiGAN models, including image generators and encoders, are available on TensorFlow Hub.
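The discriminator modification follows the BiGAN idea: score joint (image, latent) pairs, so that the encoder and generator are trained against the same critic. A minimal sketch of the pairing (BigBiGAN additionally uses unary terms on images and latents, omitted here):

```python
def joint_logits(D, G, E, real_images, z):
    """BiGAN-style joint discrimination (sketch): D scores (image, latent)
    pairs, coupling the generator G(z) with the encoder E(x)."""
    fake_pair = D(G(z), z)                       # generated image + its latent
    real_pair = D(real_images, E(real_images))   # real image + inferred latent
    return real_pair, fake_pair
```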