Modern generative adversarial networks (GANs) predominantly use piecewise-linear activation functions in the discriminator (or critic), including ReLU and LeakyReLU. Such models learn piecewise-linear mappings in which each piece handles a subset of the input space and the gradient on each subset is constant. Under this class of discriminator (or critic) functions, we present Gradient Normalization (GraN), a novel input-dependent normalization method that guarantees a piecewise K-Lipschitz constraint in input space. In contrast to spectral normalization, GraN does not constrain processing at individual network layers, and, unlike gradient penalties, it strictly enforces the piecewise Lipschitz constraint almost everywhere. Empirically, we demonstrate improved image-generation performance across multiple datasets (including CIFAR-10/100, STL-10, LSUN bedrooms, and CelebA), GAN loss functions, and metrics. Furthermore, we analyze varying the often-untuned Lipschitz constant K in several standard GANs, not only achieving significant performance gains but also uncovering connections between K and training dynamics under the common Adam optimizer, particularly around low-gradient loss plateaus.
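To make the idea concrete, the following is a minimal PyTorch sketch of an input-dependent gradient normalization in this spirit: because a piecewise-linear critic has a constant input gradient within each piece, rescaling the output by the gradient norm bounds each piece's slope by roughly K. The exact normalizer, the epsilon term, and the function name are illustrative assumptions, not the paper's precise formulation.

```python
import torch

def gradient_normalized_critic(critic, x, K=1.0, eps=1e-8):
    # Evaluate the raw critic and the norm of its gradient with respect to the input.
    x = x.requires_grad_(True)
    f = critic(x)                                            # raw scores, shape (B, 1)
    grad = torch.autograd.grad(f.sum(), x, create_graph=True)[0]
    grad_norm = grad.flatten(1).norm(2, dim=1, keepdim=True)
    # Within each linear piece the gradient is constant, so this rescaling
    # caps the piece's slope at roughly K (illustrative normalizer).
    return K * f / (grad_norm + eps)
```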
One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on the CIFAR10, STL-10, and ILSVRC2012 datasets, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to previous training stabilization techniques. The code with Chainer (Tokui et al., 2015), generated images, and pretrained models are available at https://github.com/pfnet-research/sngan_projection.
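Spectral normalization divides each weight matrix by an estimate of its largest singular value, usually obtained with a few power-iteration steps. Below is a minimal standalone PyTorch sketch of that estimate; in practice one persistent vector u per weight is kept and updated every forward pass, and torch.nn.utils.spectral_norm provides a built-in implementation.

```python
import torch
import torch.nn.functional as F

def spectral_normalize(W, u, n_iters=1):
    # Power iteration: alternately push u and v through W to estimate
    # the top singular value sigma ~= u^T W v.
    for _ in range(n_iters):
        v = F.normalize(W.t() @ u, dim=0)
        u = F.normalize(W @ v, dim=0)
    sigma = u @ W @ v
    return W / sigma, u        # normalized weight has spectral norm ~= 1
```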
Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms. Code for our models is available at https://github.com/igul222/improved_wgan_training.
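The penalty itself is short. A sketch of the standard formulation follows: the critic's input-gradient norm is pushed toward 1 on random interpolates between real and generated samples. The penalty weight of 10 and the assumption of 4-D image batches are conventional choices, assumed here for illustration.

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    # Random interpolates between real and generated samples (4-D image batches assumed).
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(x_hat)
    grad = torch.autograd.grad(scores.sum(), x_hat, create_graph=True)[0]
    grad_norm = grad.flatten(1).norm(2, dim=1)
    # Penalize the squared deviation of the gradient norm from 1.
    return lam * ((grad_norm - 1.0) ** 2).mean()
```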
Generative adversarial networks (GANs) are among the most popular image generation models and have achieved remarkable progress on various computer vision tasks. However, training instability remains one of the open problems for all GAN-based algorithms. Many methods have been proposed to stabilize GAN training, focusing respectively on loss functions, regularization and normalization techniques, training algorithms, and model architectures. Different from the above methods, this paper presents a new perspective on stabilizing GAN training. It is found that the images produced by the generator sometimes behave like adversarial examples for the discriminator during training, which may be part of the reason for GANs' unstable training. Based on this finding, we propose the Direct Adversarial Training (DAT) method to stabilize the training process of GANs. Furthermore, we show that the DAT method is able to adaptively adjust the Lipschitz constant of the discriminator. The superior performance of DAT is verified on multiple loss functions, network architectures, hyperparameters, and datasets. Specifically, DAT achieves an 11.5% improvement in FID on CIFAR-100 unconditional generation based on SSGAN, improved FID on STL-10 unconditional generation based on SSGAN, and a 13.2% improvement in FID on LSUN-Bedroom unconditional generation based on SSGAN. Code will be available at https://github.com/iceli1007/dat-gan
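The abstract does not spell out the DAT update itself. Purely to illustrate the general notion of exposing a discriminator to adversarial perturbations of its inputs, here is a generic FGSM-style perturbation sketch; the actual DAT procedure may differ substantially, and every name here is a placeholder.

```python
import torch

def fgsm_perturb(discriminator, images, targets, loss_fn, eps=0.01):
    # One-step adversarial perturbation of the discriminator's input
    # in the direction that increases its loss (generic FGSM, not DAT itself).
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(discriminator(images), targets)
    grad = torch.autograd.grad(loss, images)[0]
    return (images + eps * grad.sign()).detach()
```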
We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.
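The Kernel Inception Distance proposed here is an unbiased MMD² estimate between Inception features of real and generated images under a cubic polynomial kernel. A minimal NumPy sketch, assuming the features have already been extracted:

```python
import numpy as np

def polynomial_kernel(X, Y):
    d = X.shape[1]
    return (X @ Y.T / d + 1.0) ** 3          # cubic kernel used for KID

def kid(feats_real, feats_fake):
    # Unbiased MMD^2 estimate (Kernel Inception Distance).
    m, n = len(feats_real), len(feats_fake)
    k_rr = polynomial_kernel(feats_real, feats_real)
    k_ff = polynomial_kernel(feats_fake, feats_fake)
    k_rf = polynomial_kernel(feats_real, feats_fake)
    return ((k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
            + (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
            - 2.0 * k_rf.mean())
```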
Recent years have seen adversarial losses been applied to many fields. Their applications extend beyond the originally proposed generative modeling to conditional generative and discriminative settings. While prior work has proposed various output activation functions and regularization approaches, some open questions still remain unanswered. In this paper, we aim to study the following two research questions: 1) What types of output activation functions form a well-behaved adversarial loss? 2) How different combinations of output activation functions and regularization approaches perform empirically against one another? To answer the first question, we adopt the perspective of variational divergence minimization and consider an adversarial loss well-behaved if it behaves as a divergence-like measure between the data and model distributions. Using a generalized formulation for adversarial losses, we derive the necessary and sufficient conditions of a well-behaved adversarial loss. Our analysis reveals a large class of theoretically valid adversarial losses. For the second question, we propose a simple comparative framework for adversarial losses using discriminative adversarial networks. The proposed framework allows us to efficiently evaluate adversarial losses using a standard evaluation metric such as the classification accuracy. With the proposed framework, we evaluate a comprehensive set of 168 combinations of twelve output activation functions and fourteen regularization approaches on the handwritten digit classification problem to decouple their effects. Our empirical findings suggest that there is no single winning combination of output activation functions and regularization approaches across all settings. Our theoretical and empirical results may together serve as a reference for choosing or designing adversarial losses in future research.
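The paper's generalized formulation covers many activation/loss pairings; as a small concrete instance of two combinations that are commonly studied, here are discriminator losses for the classic sigmoid/cross-entropy setting and the linear-output hinge setting (a sketch, not the paper's full framework):

```python
import torch
import torch.nn.functional as F

def bce_d_loss(real_logits, fake_logits):
    # Sigmoid output activation with binary cross-entropy (classic GAN discriminator loss).
    return (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
            + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))

def hinge_d_loss(real_logits, fake_logits):
    # Linear output activation with a one-sided margin on each term (hinge loss).
    return F.relu(1.0 - real_logits).mean() + F.relu(1.0 + fake_logits).mean()
```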
Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128×128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.5 and Fréchet Inception Distance (FID) of 7.4, improving over the previous best IS of 52.52 and FID of 18.65.
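The truncation trick mentioned above amounts to sampling the generator's latent input from a truncated normal: entries whose magnitude exceeds a threshold are resampled, trading variety for fidelity as the threshold shrinks. A minimal sketch (the threshold value is a tunable assumption):

```python
import torch

def truncated_noise(batch_size, dim, threshold=0.5):
    # Sample z ~ N(0, I) and resample any entry whose magnitude exceeds the threshold.
    z = torch.randn(batch_size, dim)
    while True:
        mask = z.abs() > threshold
        if not mask.any():
            return z
        z[mask] = torch.randn(int(mask.sum()))
```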
Recent developments in generative adversarial networks (GANs) have driven many computer vision applications. Despite the high synthesis quality, training GANs often suffers from several problems, including non-convergence, mode collapse, and vanishing gradients. There are several workarounds, for example, regularizing Lipschitz continuity and adopting the Wasserstein distance. Although these methods can partially solve the problems, we argue that the problems stem from modeling the discriminator with a deep neural network. In this paper, we build on a newly derived deep neural network theory called the Neural Tangent Kernel (NTK) and propose a new generative algorithm called Generative Adversarial NTK (GA-NTK). GA-NTK models the discriminator as a Gaussian process (GP). With the help of NTK theory, the training dynamics of GA-NTK can be described by a closed-form formula. To synthesize data with the closed-form formula, the objective can be simplified into a single-level adversarial optimization problem. We conduct extensive experiments on real-world datasets, and the results show that GA-NTK can generate images comparable to those of GANs but is much easier to train under various conditions. We also study the current limitations of GA-NTK and propose some workarounds to make it more practical.
This paper presents a novel convolution method, called generative convolution (GConv), which is simple yet effective for improving generative adversarial network (GAN) performance. Unlike standard convolution, GConv first selects useful kernels that are compatible with the given latent vector and then linearly combines the selected kernels to construct a latent-specific kernel. Using the latent-specific kernels, the proposed method produces latent-specific features, which encourage the generator to produce high-quality images. This approach is simple but surprisingly effective. First, GAN performance is significantly improved at a small additional hardware cost. Second, GConv can be applied to existing state-of-the-art generators without modifying the network architecture. To reveal the superiority of GConv, this paper provides extensive experiments on various standard datasets, including CIFAR-10, CIFAR-100, LSUN-Church, CelebA, and tiny-ImageNet. Quantitative evaluations demonstrate that GConv significantly boosts the performance of unconditional and conditional GANs in terms of Inception score (IS) and Fréchet inception distance (FID). For example, the proposed method improves the FID and IS scores on the tiny-ImageNet dataset from 35.13 to 29.76 and from 20.23 to 22.64, respectively.
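One way to read the abstract's description of latent-specific kernels is as a per-sample mixture over a bank of candidate kernels, with mixing weights predicted from the latent vector and applied via a grouped convolution. The sketch below is an illustrative reading, not the paper's exact architecture; all module and parameter names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentSpecificConv(nn.Module):
    # Keep a bank of candidate kernels and mix them per latent vector (illustrative sketch).
    def __init__(self, z_dim, in_ch, out_ch, k=3, n_kernels=8):
        super().__init__()
        self.out_ch, self.k = out_ch, k
        self.bank = nn.Parameter(0.02 * torch.randn(n_kernels, out_ch, in_ch, k, k))
        self.select = nn.Linear(z_dim, n_kernels)     # predicts mixing weights from z

    def forward(self, x, z):
        b, c, h, w = x.shape
        mix = F.softmax(self.select(z), dim=1)        # soft "selection" of useful kernels
        # One latent-specific kernel per sample: weighted sum over the bank.
        ker = torch.einsum('bn,noihw->boihw', mix, self.bank)
        ker = ker.reshape(b * self.out_ch, c, self.k, self.k)
        # Grouped convolution applies each sample's kernel to its own feature map.
        y = F.conv2d(x.reshape(1, b * c, h, w), ker, padding=self.k // 2, groups=b)
        return y.reshape(b, self.out_ch, h, w)
```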
In this paper, we improve generative adversarial networks by incorporating a manifold learning step into the discriminator. We consider locality-constrained linear and subspace-based manifolds, as well as locality-constrained non-linear manifolds. In our design, the manifold learning and coding steps are interleaved with the layers of the discriminator, with the goal of attracting intermediate feature representations onto the manifolds. We adaptively balance the discrepancy between feature representations and their manifold views, which represents a trade-off between denoising on the manifold and refining the manifold. We conclude that locality-constrained non-linear manifolds have the upper hand over linear manifolds due to their non-uniform density and smoothness. We show substantial improvements over several recent state-of-the-art baselines.
Learning high-dimensional distributions is often done with explicit likelihood modeling or implicit modeling via minimizing integral probability metrics (IPMs). In this paper, we expand this learning paradigm to stochastic orders, namely, the convex or Choquet order between probability measures. Towards this end, exploiting the relation between convex orders and optimal transport, we introduce the Choquet-Toland distance between probability measures, that can be used as a drop-in replacement for IPMs. We also introduce the Variational Dominance Criterion (VDC) to learn probability measures with dominance constraints, that encode the desired stochastic order between the learned measure and a known baseline. We analyze both quantities and show that they suffer from the curse of dimensionality and propose surrogates via input convex maxout networks (ICMNs), that enjoy parametric rates. We provide a min-max framework for learning with stochastic orders and validate it experimentally on synthetic and high-dimensional image generation, with promising results. Finally, our ICMNs class of convex functions and its derived Rademacher Complexity are of independent interest beyond their application in convex orders.
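The ICMNs used as surrogates build on the fact that a pointwise maximum of affine functions of the input is convex. A minimal sketch of that basic building block is below; the full ICMN construction in the paper composes such blocks with additional constraints, which are not reproduced here.

```python
import torch
import torch.nn as nn

class ConvexMaxoutLayer(nn.Module):
    # The pointwise maximum of k affine functions of x is convex in x.
    def __init__(self, in_dim, k=8):
        super().__init__()
        self.affine = nn.Linear(in_dim, k)

    def forward(self, x):
        return self.affine(x).max(dim=1).values
```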
Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the "Fréchet Inception Distance" (FID) which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP) outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark.
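FID compares Gaussian fits to Inception features of real and generated images via the Fréchet distance between the two Gaussians. A standard NumPy/SciPy sketch, assuming the feature matrices have already been extracted:

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    # Frechet distance between Gaussians fitted to the two feature sets.
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    covmean = covmean.real                    # drop tiny imaginary parts from sqrtm
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```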
Inspired by ideas from optimal transport theory, we present Trust the Critics (TTC), a new algorithm for generative modelling. This algorithm eliminates the trainable generator from a Wasserstein GAN; instead, it iteratively modifies the source data using gradient descent on a sequence of trained critic networks. This is motivated in part by the misalignment we observe between the optimal transport directions provided by the gradients of the critic and the directions in which data points parametrized by a trainable generator actually move. Previous work has arrived at similar ideas from different viewpoints, but our grounding in optimal transport theory motivates the choice of an adaptive step size, which greatly accelerates convergence compared to a constant step size. Using this step rule, we prove an initial geometric convergence rate in the case of source distributions with densities. These convergence rates cease to apply only when a non-negligible set of generated data becomes indistinguishable from real data. Resolving the misalignment issue improves performance, and in experiments we show that, given a fixed number of training epochs, TTC produces higher-quality images, albeit at the cost of increased memory requirements. In addition, TTC provides an iterative formula for the transformed density, which traditional WGANs do not. Finally, TTC can be applied to map any source distribution onto any target; we demonstrate through experiments that TTC can obtain competitive performance in image generation, translation, and denoising without dedicated algorithms.
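At its core, TTC repeatedly nudges the current samples along the gradient of a trained critic. The sketch below shows one such step with a fixed step size standing in for the paper's adaptive rule, which depends on quantities not given in the abstract.

```python
import torch

def ttc_step(critic, x, step_size=0.1):
    # Move samples in the direction in which the trained critic's score increases.
    x = x.clone().detach().requires_grad_(True)
    scores = critic(x).sum()
    grad = torch.autograd.grad(scores, x)[0]
    return (x + step_size * grad).detach()
```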
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CELEBA images at 1024². We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CELEBA dataset.
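New layers are blended in smoothly rather than switched on abruptly. A sketch of the usual fade-in rule: the output of the newly added higher-resolution block is mixed with the upsampled output of the previous stage while a weight alpha ramps from 0 to 1 over training (the argument names are placeholders).

```python
import torch.nn.functional as F

def fade_in(old_rgb, new_rgb, alpha):
    # Blend the upsampled previous-stage output with the new block's output.
    old_up = F.interpolate(old_rgb, scale_factor=2, mode='nearest')
    return alpha * new_rgb + (1.0 - alpha) * old_up
```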
Invertible neural networks (INNs) have been used to design generative models, implement memory-saving gradient computation, and solve inverse problems. In this work, we show that commonly-used INN architectures suffer from exploding inverses and are thus prone to becoming numerically non-invertible. Across a wide range of INN use cases, we reveal failures including the non-applicability of the change-of-variables formula on in-distribution and out-of-distribution (OOD) data, incorrect gradients for memory-saving backpropagation, and the inability to sample from normalizing flow models. We further derive bi-Lipschitz properties of the atomic building blocks of common architectures. These insights into the stability of INNs then provide ways forward for remedying these failures. For tasks where local invertibility is sufficient, such as memory-saving backpropagation, we propose a flexible and efficient regularizer. For problems where global invertibility is necessary, such as applying normalizing flows on OOD data, we show the importance of designing stable INN building blocks.
We propose a novel theoretical framework for generative adversarial networks (GANs). We reveal a fundamental flaw of previous analyses which, by incorrectly modeling GANs' training scheme, are subject to ill-defined discriminator gradients. We overcome this issue, which impedes a principled study of GAN training, by taking the discriminator's architecture into account within our framework. To this end, we leverage the theory of infinite-width neural networks for the discriminator via its neural tangent kernel. We characterize the trained discriminator for a wide range of losses and establish general differentiability properties of the network. From this, we derive new insights about the convergence of the generated distribution, advancing our understanding of GANs' training dynamics. We corroborate these results via an analysis toolkit based on our framework, unveiling intuitions that are consistent with GAN practice.
In biomedical image analysis, the applicability of deep learning methods is directly impacted by the quantity of image data available. This is due to deep learning models requiring large image datasets to provide high-level performance. Generative Adversarial Networks (GANs) have been widely utilized to address data limitations through the generation of synthetic biomedical images. GANs consist of two models: the generator, which learns to produce synthetic images based on the feedback it receives, and the discriminator, which classifies an image as synthetic or real and provides that feedback to the generator. Throughout the training process, a GAN can experience several technical challenges that impede the generation of suitable synthetic imagery. First, the mode collapse problem, whereby the generator either produces an identical image or produces a uniform image from distinct input features. Second, the non-convergence problem, whereby the gradient descent optimizer fails to reach a Nash equilibrium. Third, the vanishing gradient problem, whereby unstable training behavior occurs because the discriminator achieves optimal classification performance, resulting in no meaningful feedback being provided to the generator. These problems result in the production of synthetic imagery that is blurry, unrealistic, and less diverse. To date, there has been no survey article outlining the impact of these technical challenges in the context of the biomedical imagery domain. This work presents a review and taxonomy based on solutions to the training problems of GANs in the biomedical imaging domain. This survey highlights important challenges and outlines future research directions about the training of GANs in the domain of biomedical imagery.
Relying on the premise that the performance of a binary neural network (BNN) can be largely restored by eliminating the quantization error between full-precision weight vectors and their corresponding binary vectors, existing works on network binarization frequently adopt the idea of model robustness to reach this objective. However, robustness remains an ill-defined concept without solid theoretical support. In this work, we introduce Lipschitz continuity, a well-defined functional property, as the rigorous criterion for defining the robustness of BNN models. We then propose to retain Lipschitz continuity as a regularization term to improve model robustness. In particular, while popular Lipschitz-involved regularization methods often collapse in BNNs because of their extreme sparsity, we design Retention Matrices to approximate the spectral norms of the targeted weight matrices, which can be deployed as an approximation of the Lipschitz constant of BNNs without the exact (NP-hard) Lipschitz constant computation. Our experiments prove that our BNN-specific regularization method can effectively strengthen the robustness of BNNs (testified on ImageNet-C), achieving state-of-the-art performance on CIFAR and ImageNet.
We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical work highlighting the deep connections to different distances between distributions.
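In the WGAN formulation, the critic maximizes the gap between its mean score on real and generated samples, and the original paper enforces the Lipschitz constraint by clipping the critic's weights. A minimal sketch (the clipping value 0.01 follows the paper's default setting):

```python
import torch

def wgan_critic_loss(critic, real, fake):
    # Critic maximizes E[D(real)] - E[D(fake)]; we return the quantity to minimize.
    return critic(fake).mean() - critic(real).mean()

def clip_critic_weights(critic, c=0.01):
    # Weight clipping keeps the critic roughly Lipschitz (original WGAN).
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-c, c)
```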
We introduce the Generalized Energy-Based Model (GEBM) for generative modelling. These models combine two trained components: a base distribution (generally an implicit model), which can learn the support of data with low intrinsic dimension in a high-dimensional space; and an energy function, which refines the probability mass on the learned support. Both the energy function and the base jointly constitute the final model, unlike GANs, which retain only the base distribution (the "generator"). GEBMs are trained by alternating between learning the energy and the base. We show that both training stages are well-defined: the energy is learned by maximizing a generalized likelihood, and the resulting energy-based loss provides informative gradients for learning the base. Samples from the posterior on the latent space of the trained model can be obtained via MCMC, thus finding regions in that space that produce better-quality samples. Empirically, GEBM samples on image-generation tasks are of better quality than those from the learned generator alone, indicating that, all else being equal, a GEBM will outperform a GAN of the same complexity. When normalizing flows are used as base measures, GEBMs succeed on density modelling tasks, returning performance comparable to direct maximum likelihood with the same networks.
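Sampling from the trained GEBM proceeds by MCMC over the base model's latent space. The rough Langevin-dynamics sketch below assumes the latent log-density is the learned energy evaluated at the generator output plus a standard-normal prior term; the sign convention, step size, and noise scale are illustrative assumptions rather than the paper's exact sampler.

```python
import torch

def langevin_latent_sampling(generator, energy, z, n_steps=50, step=0.01):
    # Unadjusted Langevin dynamics on an assumed latent posterior
    # log p(z) ~ energy(generator(z)) - ||z||^2 / 2.
    for _ in range(n_steps):
        z = z.clone().detach().requires_grad_(True)
        log_p = energy(generator(z)).sum() - 0.5 * (z ** 2).sum()
        grad = torch.autograd.grad(log_p, z)[0]
        z = (z + step * grad + (2 * step) ** 0.5 * torch.randn_like(z)).detach()
    return z
```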