2017-03-02

2017-06-13
Despite the growing prominence of generative adversarial networks (GANs), optimization in GANs is still a poorly understood topic. In this paper, we analyze the "gradient descent" form of GAN optimization, i.e., the natural setting where we simultaneously take small gradient steps in both generator and discriminator parameters. We show that even though GAN optimization does not correspond to a convex-concave game (even for simple parameterizations), under proper conditions, equilibrium points of this optimization procedure are still locally asymptotically stable for the traditional GAN formulation. On the other hand, we show that the recently proposed Wasserstein GAN can have non-convergent limit cycles near equilibrium. Motivated by this stability analysis, we propose an additional regularization term for gradient descent GAN updates, which is able to guarantee local stability for both the WGAN and the traditional GAN, and also shows practical promise in speeding up convergence and addressing mode collapse.
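The cycling behavior near equilibrium, and how a damping regularizer restores convergence, can be seen in a toy bilinear min-max game. This is an illustrative sketch under simplified assumptions, not the paper's exact regularizer (which penalizes the norm of the discriminator's gradient in the generator update):

```python
import math

def simgd(eta=0.1, reg=0.0, steps=500):
    """Simultaneous gradient descent/ascent on V(theta, psi) = theta * psi.

    theta: generator parameter (minimizes V)
    psi:   discriminator parameter (maximizes V)
    reg:   weight on a theta^2 damping penalty added to the generator
           objective (a stand-in for the paper's regularization term).
    Returns the distance of (theta, psi) from the equilibrium at the origin.
    """
    theta, psi = 1.0, 1.0
    for _ in range(steps):
        g_theta = psi + reg * 2 * theta  # d/dtheta [theta*psi + reg*theta^2]
        g_psi = theta                    # d/dpsi   [theta*psi]
        # simultaneous update: both players step from the same iterate
        theta, psi = theta - eta * g_theta, psi + eta * g_psi
    return math.hypot(theta, psi)

# Plain simultaneous steps spiral away from equilibrium;
# the regularized dynamics contract toward it.
assert simgd(reg=0.0) > 1.0
assert simgd(reg=0.5) < 0.1
```

Each unregularized step multiplies the squared distance to the origin by exactly 1 + eta^2, which is the limit-cycle/divergence behavior the stability analysis targets.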

2018-01-13

2018-06-29

2018-10-05
The Wasserstein GAN (WGAN) is a model that minimizes the Wasserstein distance between the data distribution and the sample distribution. Recent studies have proposed ways to stabilize the WGAN training process and enforce the Lipschitz constraint. In this study, we prove the local stability of optimizing the simple gradient penalty μ-WGAN (SGP μ-WGAN) under suitable assumptions on the equilibrium and the penalty measure μ. The concept of measure-valued differentiation is adopted to handle the derivative of the penalty term, which helps deal with abstract singular measures with lower-dimensional support. Based on this analysis, we claim that penalizing the data manifold or the sample manifold is the key to regularizing the original WGAN with a gradient penalty. Experimental results obtained with unintuitive penalty measures that satisfy our assumptions are also provided to support the theoretical results.
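For intuition, the quantity being penalized is the expected squared gradient norm of the critic under the penalty measure μ. A minimal sketch for a one-dimensional critic, using central finite differences (the function names are illustrative, not from the paper):

```python
def grad_penalty(critic, points, eps=1e-5):
    """Estimate the simple gradient penalty E_mu[ D'(x)^2 ]:
    the mean squared derivative of the critic over samples `points`
    drawn from a penalty measure mu."""
    pens = []
    for x in points:
        # central finite-difference estimate of the critic's derivative
        d = (critic(x + eps) - critic(x - eps)) / (2 * eps)
        pens.append(d * d)
    return sum(pens) / len(pens)

# A 1-Lipschitz critic D(x) = x has squared gradient norm 1 everywhere.
assert abs(grad_penalty(lambda x: x, [0.0, 0.5, 1.0]) - 1.0) < 1e-6
# For D(x) = x^2 the derivative is 2x, so the penalty over {0, 1} is 2.
assert abs(grad_penalty(lambda x: x * x, [0.0, 1.0]) - 2.0) < 1e-6
```

Which measure μ the points are sampled from (data manifold, sample manifold, or an interpolation) is exactly the design choice the abstract argues is key.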
We relate the minimax game of generative adversarial networks (GANs) to finding the saddle points of the Lagrangian function for a convex optimization problem, where the discriminator outputs and the distribution of generator outputs play the roles of primal variables and dual variables, respectively. This formulation shows the connection between the standard GAN training process and the primal-dual subgradient methods for convex optimization. The inherent connection not only provides a theoretical convergence proof for training GANs in the function space, but also inspires a novel objective function for training. The modified objective function forces the distribution of generator outputs to be updated along the direction given by the primal-dual subgradient methods. A toy example shows that the proposed method is able to resolve mode collapse, which in this case cannot be avoided by the standard GAN or Wasserstein GAN. Experiments on both Gaussian mixture synthetic data and real-world image datasets demonstrate the performance of the proposed method on generating diverse samples.
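The primal-dual subgradient scheme the abstract builds on can be illustrated on a tiny convex program of our own choosing (not from the paper): gradient descent on the primal variable and projected gradient ascent on the dual variable converge to the saddle point of the Lagrangian:

```python
def primal_dual(eta=0.01, steps=20000):
    """Primal-dual subgradient method for:  min x^2  s.t.  x >= 1.

    Lagrangian: L(x, lam) = x^2 + lam * (1 - x), lam >= 0.
    The KKT saddle point is x* = 1, lam* = 2.
    """
    x, lam = 0.0, 0.0
    for _ in range(steps):
        gx = 2 * x - lam   # dL/dx   (primal subgradient)
        glam = 1 - x       # dL/dlam (dual subgradient)
        # descend in the primal, ascend in the dual, project lam onto R+
        x, lam = x - eta * gx, max(0.0, lam + eta * glam)
    return x, lam

x, lam = primal_dual()
assert abs(x - 1.0) < 1e-2
assert abs(lam - 2.0) < 1e-2
```

In the paper's framing, the discriminator output plays the role of x and the generator's output distribution plays the role of lam, updated along these same subgradient directions.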
2017-05-25

We consider the problem of training generative models with a Generative Adversarial Network (GAN). Although GANs can accurately model complex distributions, they are known to be difficult to train due to instabilities caused by a difficult minimax optimization problem. In this paper, we view the problem of training GANs as finding a mixed strategy in a zero-sum game. Building on ideas from online learning, we propose a novel training method named Chekhov GAN.¹ On the theory side, we show that our method provably converges to an equilibrium for semi-shallow GAN architectures, i.e. architectures where the discriminator is a one-layer network and the generator is arbitrary. On the practical side, we develop an efficient heuristic guided by our theoretical results, which we apply to commonly used deep GAN architectures. On several real-world tasks our approach exhibits improved stability and performance compared to standard GAN training.

¹ We base this name on the Chekhov's gun principle of drama, which states that every element in a story must be necessary, and irrelevant elements should be removed. Analogously, our Chekhov GAN algorithm introduces a sequence of elements which are eventually composed to yield a generator.
2017-11-16

2017-12-12

2018-05-19

2016-12-07
Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards regions of higher concentration than in the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution during the early phases of training, thus providing a unified solution to the missing modes problem.

2017-05-24

We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical work highlighting the deep connections to different distances between distributions.
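For intuition about the distance WGAN minimizes: between two one-dimensional empirical distributions of equal size, the Wasserstein-1 distance has a closed form via sorting (a standard fact about optimal transport in 1-D, not code from the paper):

```python
def wasserstein1(xs, ys):
    """Empirical 1-D Wasserstein-1 distance between two equal-size samples.

    In one dimension the optimal coupling matches the i-th smallest point
    of xs to the i-th smallest point of ys, so the distance is the mean
    absolute difference of the sorted samples.
    """
    assert len(xs) == len(ys), "samples must have equal size"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Two point masses at 0 and 3 are Wasserstein distance 3 apart.
assert wasserstein1([0.0, 0.0], [3.0, 3.0]) == 3.0
# Identical distributions (regardless of sample order) are distance 0.
assert wasserstein1([0.0, 1.0], [1.0, 0.0]) == 0.0
```

Unlike the Jensen-Shannon divergence underlying the standard GAN, this distance varies smoothly as the two samples move apart, which is what gives WGAN its meaningful learning curves.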

2018-06-24

2017-11-28
