Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms. Code for our models is available at https://github.com/igul222/improved_wgan_training.
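The gradient penalty described above replaces weight clipping with a soft constraint that the critic's input gradients have unit norm at points sampled between the data and generator distributions. A minimal numpy sketch of the penalty term, using a toy linear critic so the input gradient is available in closed form (shapes are arbitrary; λ = 10 is the paper's default coefficient, everything else is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear critic f(x) = w . x, so its input gradient is exactly w.
# This is a sketch of the penalty term only, not the full training loop.
w = rng.normal(size=4)

def critic(x):
    return x @ w

def critic_input_grad(x):
    # For a linear critic, df/dx = w at every input.
    return np.broadcast_to(w, x.shape)

real = rng.normal(size=(8, 4))
fake = rng.normal(size=(8, 4))

# Evaluate the gradient norm at random points on straight lines
# between real and generated samples, as in WGAN-GP.
eps = rng.uniform(size=(8, 1))
x_hat = eps * real + (1.0 - eps) * fake

grad_norms = np.linalg.norm(critic_input_grad(x_hat), axis=1)
lam = 10.0  # penalty coefficient
gradient_penalty = lam * np.mean((grad_norms - 1.0) ** 2)
```

In a real implementation the critic is a deep network and the input gradient is obtained by automatic differentiation; the penalty is added to the critic's loss.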
We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical work highlighting the deep connections to different distances between distributions.
Modern generative adversarial networks (GANs) predominantly use piecewise-linear activation functions in their discriminators (or critics), including ReLU and LeakyReLU. Such models learn piecewise-linear mappings, where each piece handles a subset of the input space and the gradient on each subset is piecewise constant. Under this class of discriminator (or critic) functions, we present Gradient Normalization (GraN), a novel input-dependent normalization method that guarantees a piecewise K-Lipschitz constraint in the input space. Unlike spectral normalization, GraN does not constrain the processing of individual network layers, and unlike gradient penalties, it strictly enforces the piecewise Lipschitz constraint almost everywhere. Empirically, we demonstrate improved image-generation performance across multiple datasets (including CIFAR-10/100, STL-10, LSUN bedrooms, and CelebA), GAN loss functions, and metrics. Furthermore, we analyze the effect of varying the often-untuned Lipschitz constant K in several standard GANs, not only attaining significant performance gains but also uncovering connections between K and training dynamics under the common Adam optimizer, particularly in connection with low-gradient loss plateaus.
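As a rough illustration of input-gradient normalization (a hypothetical stand-in for the idea, not GraN's exact formulation): on a linear piece of the critic, dividing the output by the piece's constant gradient norm caps the local Lipschitz constant at K.

```python
import numpy as np

# Illustrative sketch only: normalizing a critic's output by its
# input-gradient norm so the normalized map is at most K-Lipschitz
# on each linear piece. The exact normalizer used by GraN may differ.
K = 1.0
w = np.array([3.0, -4.0])  # a linear piece f(x) = w . x has gradient w

def f(x):
    return x @ w

def f_normalized(x, eps=1e-12):
    grad_norm = np.linalg.norm(w)  # piecewise-constant gradient norm
    return K * f(x) / (grad_norm + eps)

x1 = np.array([0.0, 0.0])
x2 = np.array([1.0, 1.0])
# On this piece the normalized map changes by at most K * ||x1 - x2||.
lipschitz_ratio = (abs(f_normalized(x1) - f_normalized(x2))
                   / np.linalg.norm(x1 - x2))
```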
One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on the CIFAR10, STL-10, and ILSVRC2012 datasets, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to the previous training stabilization techniques. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.
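Spectral normalization divides each weight matrix by an estimate of its largest singular value, obtained cheaply by power iteration, so that each layer is roughly 1-Lipschitz. A self-contained numpy sketch (SN-GAN amortizes this with a single iteration per training step; the 500 iterations here are only so this one-shot demo converges tightly):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 4))  # stand-in for a layer's weight matrix

# Power iteration: alternately apply W and W^T to estimate the
# leading singular vectors, then read off the top singular value.
u = rng.normal(size=6)
for _ in range(500):
    v = W.T @ u
    v /= np.linalg.norm(v)
    u = W @ v
    u /= np.linalg.norm(u)

sigma = u @ W @ v   # estimated largest singular value
W_sn = W / sigma    # spectrally normalized weights, spectral norm ~1
```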
We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.
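The MMD critic above compares distributions through kernel mean embeddings. A numpy sketch of the standard unbiased estimator of the squared MMD with a Gaussian kernel (the bandwidth and sample sizes are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimator of squared MMD with a Gaussian kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    # Diagonal terms are dropped to make the estimator unbiased.
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2.0 * Kxy.mean())

same = rng.normal(size=(200, 2))
shifted = rng.normal(loc=2.0, size=(200, 2))

near = mmd2_unbiased(same[:100], same[100:])   # ~0: same distribution
far = mmd2_unbiased(same[:100], shifted[:100])  # clearly positive
```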
Quantum machine learning (QML) has received increasing attention due to its potential to outperform classical machine learning methods in various problems. A subclass of QML methods is quantum generative adversarial networks (QGANs), which have been studied as a quantum counterpart of classical GANs widely used in image manipulation and generation tasks. The existing work on QGANs is still limited to small-scale proof-of-concept examples based on images with significant down-scaling. Here we integrate classical and quantum techniques to propose a new hybrid quantum-classical GAN framework. We demonstrate its superior learning capabilities by generating $28 \times 28$-pixel grey-scale images without dimensionality reduction or classical pre/post-processing on multiple classes of the standard MNIST and Fashion MNIST datasets, achieving results comparable to classical frameworks with three orders of magnitude fewer trainable generator parameters. To gain further insight into the working of our hybrid approach, we systematically explore the impact of its parameter space by varying the number of qubits, the size of image patches, the number of layers in the generator, the shape of the patches and the choice of prior distribution. Our results show that increasing the quantum generator size generally improves the learning capability of the network. The developed framework provides a foundation for the future design of QGANs with optimal parameter sets tailored for complex image generation tasks.
Recent years have seen adversarial losses being applied to many fields. Their applications extend beyond the originally proposed generative modeling to conditional generative and discriminative settings. While prior work has proposed various output activation functions and regularization approaches, some open questions still remain unanswered. In this paper, we aim to study the following two research questions: 1) What types of output activation functions form a well-behaved adversarial loss? 2) How do different combinations of output activation functions and regularization approaches perform empirically against one another? To answer the first question, we adopt the perspective of variational divergence minimization and consider an adversarial loss well-behaved if it behaves as a divergence-like measure between the data and model distributions. Using a generalized formulation for adversarial losses, we derive the necessary and sufficient conditions of a well-behaved adversarial loss. Our analysis reveals a large class of theoretically valid adversarial losses. For the second question, we propose a simple comparative framework for adversarial losses using discriminative adversarial networks. The proposed framework allows us to efficiently evaluate adversarial losses using a standard evaluation metric such as the classification accuracy. With the proposed framework, we evaluate a comprehensive set of 168 combinations of twelve output activation functions and fourteen regularization approaches on the handwritten digit classification problem to decouple their effects. Our empirical findings suggest that there is no single winning combination of output activation functions and regularization approaches across all settings. Our theoretical and empirical results may together serve as a reference for choosing or designing adversarial losses in future research.
Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128×128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.5 and Fréchet Inception Distance (FID) of 7.4, improving over the previous best IS of 52.52 and FID of 18.65.
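The "truncation trick" mentioned above reduces the variance of the generator's input by resampling latent entries whose magnitude exceeds a threshold; lower thresholds trade variety for fidelity. A numpy sketch (the threshold value and latent shape are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_normal(shape, threshold, rng):
    """Sample N(0,1) latents, resampling entries outside [-t, t]."""
    z = rng.normal(size=shape)
    while True:
        mask = np.abs(z) > threshold
        if not mask.any():
            return z
        z[mask] = rng.normal(size=mask.sum())  # redraw offending entries

# Low-variance latents for high-fidelity, lower-variety samples.
z = truncated_normal((1000, 128), threshold=0.5, rng=rng)
```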
We introduce Generalized Energy Based Models (GEBMs) for generative modeling. These models combine two trained components: a base distribution (generally an implicit model), which can learn the support of data with low intrinsic dimension in a high-dimensional space; and an energy function, to refine the probability mass on the learned support. Both the energy function and the base jointly constitute the final model, unlike GANs, which retain only the base distribution (the "generator"). GEBMs are trained by alternating between learning the energy and the base. We show that both training stages are well-defined: the energy is learned by maximizing a generalized likelihood, and the resulting energy-based loss provides informative gradients for learning the base. Samples from the posterior on the latent space of the trained model can be obtained via MCMC, thus finding regions in that space that produce better-quality samples. Empirically, GEBM samples on image-generation tasks are of better quality than those obtained from the learned generator alone, indicating that, all else being equal, a GEBM will outperform a GAN of the same complexity. When using normalizing flows as base measures, GEBMs succeed at density modeling tasks, returning performance comparable to direct maximum likelihood with the same network.
Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this through deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image super-resolution and classification. The aim of this review paper is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible. In addition to identifying different methods for training and constructing GANs, we also point to remaining challenges in their theory and application.
Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss regularization strategies that were recently proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or zero-centered gradient penalties converges. On the other hand, we show that Wasserstein-GANs and WGAN-GP with a finite number of discriminator updates per generator update do not always converge to the equilibrium point. We discuss these results, leading us to a new explanation for the stability problems of GAN training. Based on our analysis, we extend our convergence results to more general GANs and prove local convergence for simplified gradient penalties even if the generator and data distributions lie on lower dimensional manifolds. We find these penalties to work well in practice and use them to learn high-resolution generative image models for a variety of datasets with little hyperparameter tuning.
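For reference, the zero-centered gradient penalty on real data analyzed in this line of work (the $R_1$ regularizer) penalizes the discriminator's input-gradient norm at data points rather than driving it toward one:

$$ R_1(\psi) \;=\; \frac{\gamma}{2}\,\mathbb{E}_{x \sim p_{\mathcal{D}}}\!\left[\lVert \nabla_x D_\psi(x) \rVert^2\right], $$

where $\gamma$ is the penalty weight; the $R_2$ variant penalizes the same norm on generator samples instead. This contrasts with the WGAN-GP penalty, which is centered at gradient norm one and evaluated between the two distributions.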
Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the "Fréchet Inception Distance" (FID), which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP), outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark.
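The FID introduced above is the Fréchet distance $\lVert\mu_1-\mu_2\rVert^2 + \mathrm{Tr}\big(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2}\big)$ between Gaussians fitted to feature statistics of real and generated images. A numpy sketch on toy feature vectors (in practice the features come from an Inception network; the symmetrized matrix square root here is an illustrative way to stay within numpy):

```python
import numpy as np

rng = np.random.default_rng(0)

def sqrtm_psd(A):
    # Square root of a symmetric PSD matrix via eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

def fid(feat1, feat2):
    mu1, mu2 = feat1.mean(0), feat2.mean(0)
    s1 = np.cov(feat1, rowvar=False)
    s2 = np.cov(feat2, rowvar=False)
    # Tr((S1 S2)^{1/2}) computed through the symmetric, PSD form
    # S1^{1/2} S2 S1^{1/2}, which has the same trace of square root.
    s1_half = sqrtm_psd(s1)
    covmean = sqrtm_psd(s1_half @ s2 @ s1_half)
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(s1 + s2 - 2.0 * covmean))

a = rng.normal(size=(500, 8))            # "real" toy features
b = rng.normal(size=(500, 8))            # well-matched "generated" features
c = rng.normal(loc=3.0, size=(500, 8))   # badly-matched features

fid_close = fid(a, b)
fid_far = fid(a, c)
```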
Inspired by ideas from optimal transport theory, we present Trust the Critics (TTC), a new algorithm for generative modeling. This algorithm eliminates the trainable generator from a Wasserstein GAN; instead, it iteratively modifies the source data using gradient descent on a sequence of trained critic networks. This is motivated in part by the misalignment we observe between the optimal transport directions provided by the critic's gradients and the directions in which data points actually move when parametrized by a trainable generator. Previous work has arrived at similar ideas from different viewpoints, but our grounding in optimal transport theory motivates the choice of an adaptive step size, which greatly accelerates convergence compared to a constant step size. Using this step rule, we prove an initial geometric convergence rate in the case of source distributions with densities. These convergence rates cease to apply only when a non-negligible set of generated data becomes indistinguishable from real data. Resolving the misalignment issue improves performance, and we show in experiments that, given a fixed number of training epochs, TTC produces higher-quality images, albeit at increased memory requirements. In addition, TTC provides an iterative formula for the density of the transformed distribution, which conventional WGANs lack. Finally, TTC can be applied to map any source distribution onto any target; we demonstrate experimentally that TTC can obtain competitive performance in image generation, translation, and denoising without dedicated algorithms.
For stably training generative adversarial networks (GANs), injecting instance noise into the discriminator's input is considered a theoretically sound solution that, however, has not delivered on its promise in practice. This paper introduces Diffusion-GAN, which adopts a Gaussian-mixture distribution, defined over all diffusion steps of a forward diffusion chain, to inject instance noise. Random samples from the mixture, diffused from either the observed or the generated data, are fed into the discriminator as input. The generator is updated by backpropagating its gradient through the forward diffusion chain, whose length is adaptively adjusted to control the maximum noise-to-data ratio allowed at each training step. Theoretical analysis verifies the soundness of the proposed Diffusion-GAN, which provides model- and domain-agnostic differentiable augmentation. A rich set of experiments on diverse datasets shows that Diffusion-GAN can provide stable and data-efficient GAN training, delivering consistent performance improvements over strong GAN baselines for synthesizing photo-realistic images.
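The forward-diffusion noise injection described above can be sketched as follows: a data point (real or generated) is noised to a randomly drawn step $t$ of a Gaussian diffusion chain before being shown to the discriminator. The linear beta schedule and chain length below are illustrative choices; in the paper the maximum step is adapted during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard Gaussian forward diffusion: x_t = sqrt(abar_t) x_0
#                                           + sqrt(1 - abar_t) eps.
T = 100
betas = np.linspace(1e-4, 2e-2, T)       # illustrative noise schedule
alpha_bar = np.cumprod(1.0 - betas)      # cumulative signal retention

def diffuse(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) for the Gaussian forward process."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.normal(size=(16, 8))  # stand-in for real or generated data
t = rng.integers(0, T)         # step drawn at random per minibatch
x_t = diffuse(x0, t, rng)      # noisy discriminator input
```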
Generative adversarial networks (GANs) are among the most popular image-generation models and have achieved remarkable progress on various computer vision tasks. However, training instability remains one of the open problems for all GAN-based algorithms. Many methods have been proposed to stabilize GAN training, focusing respectively on loss functions, regularization and normalization techniques, training algorithms, and model architectures. Differently from the above methods, this paper presents a new perspective on stabilizing GAN training. We find that the images produced by the generator during training sometimes behave like adversarial examples for the discriminator, which may be part of the reason for unstable GAN training. With this finding, we propose the Direct Adversarial Training (DAT) method to stabilize the GAN training process. Furthermore, we prove that the DAT method is able to adapt the discriminator's Lipschitz constant. The superior performance of DAT is verified across multiple loss functions, network architectures, hyperparameters, and datasets. Specifically, DAT achieves an FID of 11.5 for SSGAN-based unconditional generation on CIFAR-100, improved FID for SSGAN-based unconditional generation on STL-10, and an FID of 13.2 for SSGAN-based unconditional generation on LSUN bedrooms. Code will be available at https://github.com/iceli1007/dat-gan
In the absence of explicit or tractable likelihoods, Bayesians often resort to approximate Bayesian computation (ABC) for inference. Our work bridges ABC with deep generative modeling, building on generative adversarial networks (GANs) and adversarial variational Bayes. Both ABC and GANs compare aspects of observed and fake data, simulated from the posterior and the likelihood, respectively. We develop a Bayesian GAN (B-GAN) sampler that directly targets the posterior by solving an adversarial optimization problem. B-GAN is driven by a deterministic mapping learned on the ABC reference by conditional GANs. Once the mapping has been trained, iid posterior samples can be obtained by filtering noise at negligible additional cost. We propose two post-processing local refinements using (1) data-driven proposals and (2) variational Bayes. We support our findings with theoretical results, showing that for certain neural network generators and discriminators, the typical total variation distance between the true and approximate posteriors converges to zero. Our findings on simulated data show highly competitive performance relative to some state-of-the-art likelihood-free posterior simulators.
Semi-supervision in machine learning can be used in searches for new physics with unlabelled signal-plus-background regions. This strongly reduces the model dependency of searches for signals beyond the Standard Model. This approach has the drawback that overfitting can produce fake signals. Tossing toy Monte Carlo (MC) events can be used to estimate the corresponding trials factor through frequentist inference. However, MC events based on full detector simulation are resource intensive. Generative adversarial networks (GANs) can be used to mimic MC generators. GANs are powerful generative models, but often suffer from training instability. We henceforth review GANs and advocate the use of Wasserstein GANs with weight clipping (WGAN) and with gradient penalty (WGAN-GP), where the norm of the critic's gradient with respect to its input is penalized. Following the emergence of the multi-lepton anomalies, we use GANs to produce di-lepton final states in association with $b$-quarks at the LHC. Good agreement between the MC and the WGAN-GP generated events is found for the observables selected in the study.
Energy-based models (EBMs) provide an elegant framework for density estimation, but they are difficult to train. Recent work has established links to generative adversarial networks, where an EBM is trained through a minimax game with a variational value function. We propose bidirectional bounds on the EBM log-likelihood, such that we maximize a lower bound and minimize an upper bound when solving the minimax game. We link one of the bounds to a gradient penalty that stabilizes training, thereby providing grounding for best engineering practice. To evaluate the bounds, we develop a new and efficient estimator of the Jacobi determinant of the EBM generator. We demonstrate that these developments significantly stabilize training and yield high-quality density estimation and sample generation.
Estimating depth from real-world scenes in real time is an important module for various autonomous-system tasks such as localization, obstacle detection, and pose estimation. During the last decade of machine learning, the extensive deployment of deep learning methods to computer vision tasks has yielded approaches that successfully achieve realistic depth synthesis from simple RGB modalities. While most of these models are based on paired depth data or the availability of video sequences and stereo images, methods for unsupervised single-image depth synthesis are lacking. Therefore, in this study, recent advances in the field of generative neural networks are leveraged for fully unsupervised single-image depth synthesis. More precisely, two cycle-consistent generators for RGB-to-depth and depth-to-RGB transfer are implemented and simultaneously optimized using the Wasserstein-1 distance. To ensure the plausibility of the proposed method, we apply the model to a self-acquired industrial dataset as well as to the well-known NYU Depth v2 dataset, enabling comparison with existing approaches. The success observed in this study suggests high potential for unpaired single-image depth estimation in real-world applications.
Compared to CNN-based classification, segmentation, or object detection, the goals and methods of generative networks are fundamentally different. Originally, they were not designed as tools for image analysis, but to generate natural-looking images. The adversarial training paradigm was proposed to stabilize generative methods and has proven highly successful, though it was by no means the first attempt. This chapter gives a basic introduction to the motivation behind generative adversarial networks (GANs) and traces the path of their success by abstracting the basic task and working mechanism and deriving the difficulties of early practical approaches. Methods for more stable training are shown, as well as typical signs of poor convergence and their causes. Although this chapter focuses on GANs for image generation and image analysis, the adversarial training paradigm itself is not specific to images and also generalizes to tasks in image analysis. Well-known architecture examples for image semantic segmentation and anomaly detection are presented, before GANs are contrasted with further generative modeling approaches that have recently entered the scene. This allows a contextualized view of the limitations, but also of the benefits, of GANs.