Denoising diffusion probabilistic models (DDPMs) have been proven capable of synthesizing high-quality images with remarkable diversity when trained on large amounts of data. However, to our knowledge, few-shot image generation tasks have yet to be studied with DDPM-based approaches. Modern approaches are mainly built on Generative Adversarial Networks (GANs) and adapt models pre-trained on large source domains to target domains using a few available samples. In this paper, we make the first attempt to study when DDPMs overfit and suffer severe diversity degradation as training data become scarce. Then we propose to adapt DDPMs pre-trained on large source domains to target domains using limited data. Our results show that utilizing knowledge from pre-trained DDPMs can significantly accelerate convergence and improve the quality and diversity of the generated images. Moreover, we propose a DDPM-based pairwise similarity loss to preserve the relative distances between generated samples during domain adaptation. In this way, we further improve the generation diversity of the proposed DDPM-based approaches. We demonstrate the effectiveness of our approaches qualitatively and quantitatively on a series of few-shot image generation tasks and achieve results better than current state-of-the-art GAN-based approaches in quality and diversity.
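As an illustration of the idea, the sketch below shows one plausible form of such a pairwise similarity loss, assuming `adapted_feats` and `source_feats` are intermediate predictions produced by the adapted and the frozen source DDPM for the same batch of noisy inputs; the function names and tensor shapes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pairwise_similarity_loss(adapted_feats, source_feats):
    """Encourage the adapted model to preserve the relative pairwise
    similarities that the frozen source model assigns within a batch.
    adapted_feats, source_feats: (N, D) feature/prediction tensors."""
    def similarity_distribution(feats):
        feats = F.normalize(feats.flatten(1), dim=1)
        sim = feats @ feats.t()                              # (N, N) cosine similarities
        mask = ~torch.eye(len(feats), dtype=torch.bool, device=feats.device)
        sim = sim.masked_select(mask).view(len(feats), -1)   # drop self-similarity
        return F.softmax(sim, dim=1)

    p_source = similarity_distribution(source_feats).detach()
    p_adapted = similarity_distribution(adapted_feats)
    return F.kl_div(p_adapted.log(), p_source, reduction="batchmean")
```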
Producing diverse and realistic images with generative models such as GANs typically requires training on large amounts of images. GANs trained with extremely limited data can easily overfit the few training samples and exhibit a "stairlike" latent space, where transitions in latent space suffer from discontinuities and occasionally produce abrupt changes in outputs. In this work, we consider the setting where neither a large-scale dataset of our interest nor a transferable source dataset is available, and seek to train existing generative models with minimal overfitting and mode collapse. We propose a latent mixup-based distance regularization on the feature space of both the generator and the corresponding discriminator, which encourages the two players to reason not only about the scarce observed data points but also about the relative distances in the feature space they reside in. Qualitative and quantitative evaluations on diverse datasets show that our method is generally applicable to existing models and improves fidelity and diversity under the constraint of limited data. Code will be made publicly available.
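To make the regularization concrete, here is a rough sketch of one way such a latent mixup-based distance constraint could look, assuming `generator` and `feature_extractor` are placeholder callables (e.g., the generator and a discriminator feature layer); the exact loss in the paper may differ.

```python
import torch
import torch.nn.functional as F

def mixup_distance_regularizer(generator, feature_extractor, z_anchors):
    """Hypothetical sketch: the relative feature similarities between a mixed
    latent's output and the anchor latents' outputs should follow the mixup
    coefficients used to form the mixed latent."""
    coeffs = torch.distributions.Dirichlet(
        torch.ones(len(z_anchors))).sample().to(z_anchors.device)   # (K,)
    z_mix = (coeffs[:, None] * z_anchors).sum(dim=0, keepdim=True)  # (1, latent_dim)
    f_mix = feature_extractor(generator(z_mix))                     # (1, D)
    f_anchors = feature_extractor(generator(z_anchors))             # (K, D)
    sims = F.cosine_similarity(f_mix, f_anchors, dim=1)             # (K,)
    return F.kl_div(F.log_softmax(sims, dim=0),
                    F.softmax(coeffs, dim=0), reduction="sum")
```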
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available. To facilitate GAN training, current methods propose to use data-specific augmentation techniques. Despite their effectiveness, these methods are difficult to scale to practical applications. In this work, we present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks. We first produce augmented samples using the convex combinations of the real samples. Then, we optimize the augmented samples by minimizing the norms of the data scores, i.e., the gradients of the log-density functions. This procedure enforces the augmented samples close to the data manifold. To estimate the scores, we train a deep estimation network with multi-scale score matching. For different image synthesis tasks, we train the score estimation network using different data. We do not require tuning of hyperparameters or modifications to the network architecture. The ScoreMix method effectively increases the diversity of data and reduces the overfitting problem. Moreover, it can be easily incorporated into existing GAN models with minor modifications. Experimental results on numerous tasks demonstrate that GAN models equipped with the ScoreMix method achieve significant improvements.
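The following is a minimal sketch of the augmentation procedure as described, with `score_net` standing in for the multi-scale score estimation network; the optimizer choice, step count, and step size are illustrative assumptions.

```python
import torch

def scoremix_augment(x_a, x_b, score_net, steps=5, step_size=0.1):
    """Sketch of ScoreMix-style augmentation: start from a convex combination
    of two real samples, then reduce the norm of the estimated score so the
    mixed sample moves toward the data manifold."""
    lam = torch.rand(x_a.size(0), 1, 1, 1, device=x_a.device)  # per-sample mixing weight
    x = (lam * x_a + (1.0 - lam) * x_b).detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=step_size)
    for _ in range(steps):
        optimizer.zero_grad()
        score = score_net(x)                        # estimate of grad_x log p(x)
        loss = score.flatten(1).norm(dim=1).mean()  # penalize the score norm
        loss.backward()
        optimizer.step()
    return x.detach()
```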
This work aims to transfer a generative adversarial network (GAN) pre-trained on one image domain to a new domain referring to as few as just one target image. The main challenge is that, under such limited supervision, it is extremely difficult to synthesize photo-realistic and highly diverse images while acquiring the representative characteristics of the target. Different from existing approaches that adopt a vanilla fine-tuning strategy, we import two lightweight modules into the generator and the discriminator respectively. Concretely, we introduce an attribute adaptor into the generator while freezing its original parameters, through which it can reuse prior knowledge to the largest extent and hence maintain synthesis quality and diversity. We then equip the well-learned discriminator backbone with an attribute classifier to ensure that the generator captures the corresponding characteristics from the reference. Furthermore, considering the poor diversity of the training data (i.e., only one image), we propose a diversity constraint on the generated domain during training, which alleviates the optimization difficulty. Our approach produces appealing results under various settings, substantially surpassing state-of-the-art alternatives, especially in terms of synthesis diversity. Noticeably, our method works well even with large domain gaps, and robustly converges within a few minutes for each experiment.
Can a generative model be trained to produce images from a specific domain, guided only by a text prompt, without seeing any image? In other words: can an image generator be trained "blindly"? Leveraging the semantic power of large-scale Contrastive Language-Image Pre-training (CLIP) models, we present a text-driven method that allows shifting a generative model to new domains without having to collect even a single image. We show that through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains characterized by diverse styles and shapes. Notably, many of these modifications would be difficult or outright impossible to achieve with existing methods. We conduct extensive experiments and comparisons across a wide range of domains. These demonstrate the effectiveness of our approach and show that our shifted models maintain the latent-space properties that make generative models appealing for downstream tasks.
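Text-driven generator shifting of this kind is commonly implemented with a directional CLIP objective; the sketch below shows that general idea, assuming `clip_image_encoder` is a placeholder CLIP image encoder and the text embeddings for the source and target prompts are precomputed. It is one plausible form of such a loss, not necessarily the exact objective used here.

```python
import torch
import torch.nn.functional as F

def directional_clip_loss(img_frozen, img_trainable, e_text_src, e_text_tgt,
                          clip_image_encoder):
    """Sketch of a CLIP-directional objective: the CLIP-space shift of generated
    images (frozen generator -> adapted generator) should align with the
    CLIP-space shift between the source and target text prompts."""
    delta_t = F.normalize(e_text_tgt - e_text_src, dim=-1)
    e_img_frozen = clip_image_encoder(img_frozen)
    e_img_train = clip_image_encoder(img_trainable)
    delta_i = F.normalize(e_img_train - e_img_frozen, dim=-1)
    return (1.0 - (delta_i * delta_t).sum(dim=-1)).mean()
```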
Denoising diffusion probabilistic models (DDPM) are a class of generative models which have recently been shown to produce excellent samples. We show that with a few simple modifications, DDPMs can also achieve competitive log-likelihoods while maintaining high sample quality. Additionally, we find that learning variances of the reverse diffusion process allows sampling with an order of magnitude fewer forward passes with a negligible difference in sample quality, which is important for the practical deployment of these models. We additionally use precision and recall to compare how well DDPMs and GANs cover the target distribution. Finally, we show that the sample quality and likelihood of these models scale smoothly with model capacity and training compute, making them easily scalable. We release our code at https://github.com/openai/improved-diffusion.
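A sketch of the learned-variance parameterization described in the paper: the network outputs a per-pixel value that interpolates, in log space, between the two analytic variance extremes of the reverse step. Variable names are assumptions.

```python
import torch

def learned_variance(v_raw, log_beta_t, log_beta_tilde_t):
    """The network predicts v_raw in [-1, 1]; mapping it to [0, 1] gives the
    interpolation weight between log(beta_t) and log(beta~_t), and the
    reverse-process variance is the exponential of the interpolated value."""
    v = (v_raw + 1.0) / 2.0
    log_var = v * log_beta_t + (1.0 - v) * log_beta_tilde_t
    return torch.exp(log_var)
```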
Recently, GAN inversion methods combined with Contrastive Language-Image Pretraining (CLIP) have enabled zero-shot image manipulation guided by text prompts. However, their application to diverse real images remains difficult due to the limited GAN inversion capability. Specifically, these approaches often struggle to reconstruct images with novel poses, views, and highly variable contents compared to the training data, and they may alter object identity or produce unwanted image artifacts. To mitigate these problems and enable faithful manipulation of real images, we propose a novel method, DiffusionCLIP, which performs text-driven image manipulation using diffusion models. Based on the full inversion capability and high-quality image generation power of recent diffusion models, our method successfully performs zero-shot image manipulation even between unseen domains. Furthermore, we propose a novel noise combination method that allows straightforward multi-attribute manipulation. Extensive experiments and human evaluation confirm the robust and superior manipulation performance of our method compared to existing baselines.
Can a text-to-image diffusion model be used as a training objective for adapting a GAN generator to another domain? In this paper, we show that the classifier-free guidance can be leveraged as a critic and enable generators to distill knowledge from large-scale text-to-image diffusion models. Generators can be efficiently shifted into new domains indicated by text prompts without access to groundtruth samples from target domains. We demonstrate the effectiveness and controllability of our method through extensive experiments. Although not trained to minimize CLIP loss, our model achieves equally high CLIP scores and significantly lower FID than prior work on short prompts, and outperforms the baseline qualitatively and quantitatively on long and complicated prompts. To the best of our knowledge, the proposed method is the first attempt at incorporating large-scale pre-trained diffusion models and distillation sampling for text-driven image generator domain adaptation, and it achieves a quality previously out of reach. Moreover, we extend our work to 3D-aware style-based generators and DreamBooth guidance.
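A heavily hedged sketch of a score-distillation-style update consistent with the description above: the generator's output is noised, a frozen text-to-image diffusion model with classifier-free guidance predicts the noise, and the mismatch drives the generator. `diffusion_eps` and `encode` are placeholders, and the authors' actual objective may differ.

```python
import torch

def sds_generator_surrogate_loss(generator, z, encode, diffusion_eps, text_emb,
                                 timesteps, alphas_cumprod, guidance_scale=7.5):
    """Noise the generator output, query the frozen diffusion model with
    classifier-free guidance, and build a surrogate loss whose gradient w.r.t.
    the generated sample equals (predicted noise - injected noise)."""
    x = encode(generator(z))                              # e.g. the diffusion model's latent space
    t = torch.randint(0, timesteps, (x.size(0),), device=x.device)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x)
    x_t = torch.sqrt(a_bar) * x + torch.sqrt(1.0 - a_bar) * noise
    with torch.no_grad():
        eps_cond = diffusion_eps(x_t, t, text_emb)
        eps_uncond = diffusion_eps(x_t, t, None)
        eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)
    grad = eps - noise
    return (grad.detach() * x).sum()                      # backpropagate this into the generator
```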
Domain adaptation of GANs is a problem of fine-tuning the state-of-the-art GAN models (e.g. StyleGAN) pretrained on a large dataset to a specific domain with few samples (e.g. painting faces, sketches, etc.). While a great number of methods tackle this problem in different ways, many important questions remain unanswered. In this paper, we provide a systematic and in-depth analysis of the domain adaptation problem of GANs, focusing on the StyleGAN model. First, we perform a detailed exploration of the most important parts of StyleGAN that are responsible for adapting the generator to a new domain depending on the similarity between the source and target domains. In particular, we show that affine layers of StyleGAN can be sufficient for fine-tuning to similar domains. Second, inspired by these findings, we investigate StyleSpace to utilize it for domain adaptation. We show that there exist directions in the StyleSpace that can adapt StyleGAN to new domains. Further, we examine these directions and discover many of their surprising properties. Finally, we leverage our analysis and findings to deliver practical improvements and applications in such standard tasks as image-to-image translation and cross-domain morphing.
Few-shot image generation and few-shot image translation are two related tasks, both of which aim to generate new images for an unseen category with only a few images. In this work, we make the first attempt to adapt few-shot image translation methods to the few-shot image generation task. Few-shot image translation decomposes an image into a style vector and a content map. An unseen style vector can be combined with different seen content maps to produce diverse images. However, this requires storing seen images to provide content maps, and the unseen style vector may be incompatible with seen content maps. To adapt this approach to the few-shot image generation task, we learn a compact dictionary of local content vectors by quantizing continuous content maps into discrete content maps, instead of storing seen images. Furthermore, we model the autoregressive distribution of the discrete content maps conditioned on style vectors, which alleviates the incompatibility between content maps and style vectors. Qualitative and quantitative results on three real datasets demonstrate that our model can produce images with higher diversity and fidelity for unseen categories than previous methods.
Digital art synthesis is receiving increasing attention in the multimedia community, as it is an effective way to engage the public with art. Current digital art synthesis methods usually use single-modality inputs as guidance, which limits the expressiveness of the model and the diversity of the generated results. To address this problem, we propose the Multimodal Guided Artwork Diffusion (MGAD) model, a diffusion-based digital artwork generation approach that uses multimodal prompts as guidance to control a classifier-free diffusion model. In addition, the Contrastive Language-Image Pretraining (CLIP) model is used to unify the text and image modalities. Extensive experimental results on the quality and quantity of the generated digital art paintings confirm the effectiveness of combining the diffusion model with multimodal guidance. Code is available at https://github.com/haha-lisa/mgad-multimodal-guided-artwork-diffusion.
We show that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models. We achieve this on unconditional image synthesis by finding a better architecture through a series of ablations. For conditional image synthesis, we further improve sample quality with classifier guidance: a simple, compute-efficient method for trading off diversity for fidelity using gradients from a classifier. We achieve an FID of 2.97 on ImageNet 128×128, 4.59 on ImageNet 256×256, and 7.72 on ImageNet 512×512, and we match BigGAN-deep even with as few as 25 forward passes per sample, all while maintaining better coverage of the distribution. Finally, we find that classifier guidance combines well with upsampling diffusion models, further improving FID to 3.94 on ImageNet 256×256 and 3.85 on ImageNet 512×512. We release our code at https://github.com/openai/guided-diffusion.
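For reference, classifier guidance shifts the mean of each reverse denoising step along the classifier's gradient of log p(y | x_t), scaled by the step variance. The sketch below illustrates this step; for brevity the classifier's timestep conditioning used in the paper is omitted, and `classifier` is a placeholder noisy-image classifier.

```python
import torch

def classifier_guided_mean(mean, variance, x_t, y, classifier, scale=1.0):
    """Shift the reverse-step mean by scale * variance * grad_x log p(y | x_t)."""
    x_in = x_t.detach().requires_grad_(True)
    log_probs = torch.log_softmax(classifier(x_in), dim=-1)
    selected = log_probs[torch.arange(len(y), device=x_t.device), y].sum()
    grad = torch.autograd.grad(selected, x_in)[0]
    return mean + scale * variance * grad
```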
The performance of generative adversarial networks (GANs) heavily deteriorates given a limited amount of training data. This is mainly because the discriminator is memorizing the exact training set. To combat it, we propose Differentiable Augmentation (DiffAugment), a simple method that improves the data efficiency of GANs by imposing various types of differentiable augmentations on both real and fake samples. Previous attempts to directly augment the training data manipulate the distribution of real images, yielding little benefit; DiffAugment enables us to adopt the differentiable augmentation for the generated samples, effectively stabilizes training, and leads to better convergence. Experiments demonstrate consistent gains of our method over a variety of GAN architectures and loss functions for both unconditional and class-conditional generation. With DiffAugment, we achieve a state-of-the-art FID of 6.80 with an IS of 100.8 on ImageNet 128×128 and 2-4× reductions of FID given 1,000 images on FFHQ and LSUN. Furthermore, with only 20% training data, we can match the top performance on CIFAR-10 and CIFAR-100. Finally, our method can generate high-fidelity images using only 100 images without pre-training, while being on par with existing transfer learning algorithms. Code is available at https://github.com/mit-han-lab/data-efficient-gans.
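A minimal sketch of a DiffAugment-style discriminator update, applying the same differentiable augmentation to both real and generated samples before the discriminator scores them; `augment` is a placeholder for the differentiable augmentation pipeline, and the softplus loss is just one common GAN loss choice.

```python
import torch
import torch.nn.functional as F

def diffaugment_d_step(D, G, real, z, augment):
    """Discriminator loss with differentiable augmentation T applied to both
    real and fake samples; because T is differentiable, the same trick also
    lets gradients flow to the generator in its own update."""
    fake = G(z)
    d_real = D(augment(real))
    d_fake = D(augment(fake.detach()))
    return F.softplus(-d_real).mean() + F.softplus(d_fake).mean()
```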
Learning to generate new images for a novel category based on only a few images, known as few-shot image generation, has attracted increasing research interest. Several state-of-the-art works have produced impressive results, but their diversity is still limited. In this work, we propose a novel Delta Generative Adversarial Network (DeltaGAN), which consists of a reconstruction subnetwork and a generation subnetwork. The reconstruction subnetwork captures intra-category transformations, i.e., "deltas", between same-category pairs. The generation subnetwork generates sample-specific "deltas" for an input image, which are combined with this input image to produce new images within the same category. In addition, an adversarial delta matching loss is designed to link the above two subnetworks together. Extensive experiments on five few-shot image datasets demonstrate the effectiveness of our proposed method.
Learning to generate new images for a novel category based on only a few images, known as few-shot image generation, has attracted increasing research interest. Several state-of-the-art works have produced impressive results, but their diversity is still limited. In this work, we propose a novel Delta Generative Adversarial Network (DeltaGAN), which consists of a reconstruction subnetwork and a generation subnetwork. The reconstruction subnetwork captures intra-category transformations, i.e., deltas, between same-category pairs. The generation subnetwork generates a sample-specific delta for an input image, which is combined with this input image to produce new images within the same category. In addition, an adversarial delta matching loss is designed to link the above two subnetworks together. Extensive experiments on six benchmark datasets demonstrate the effectiveness of our proposed method. Our code is available at https://github.com/bcmi/deltagan-few-shot-image-generation.
Score-based generative models (SGMs) are a recently proposed paradigm for deep generative tasks and now show state-of-the-art sampling performance. It is known that the original SGM design addresses two of the three requirements of the generative trilemma: i) sampling quality and ii) sampling diversity. However, the last requirement of the trilemma remains unaddressed, i.e., their training/sampling complexity is notoriously high. To this end, distilling SGMs into simpler models, e.g., generative adversarial networks (GANs), is currently attracting much attention. We propose an enhanced distillation method, called straight-path interpolation GAN (SPI-GAN), which can be compared with the state-of-the-art shortcut-based distillation method called denoising diffusion GAN (DD-GAN). However, our method corresponds to an extreme approach that does not use any intermediate shortcuts of the reverse SDE path, a regime in which DD-GAN fails to obtain good results. Nevertheless, our straight-path interpolation method greatly stabilizes the overall training process. As a result, SPI-GAN is among the best models in terms of sampling quality/diversity/time on CIFAR-10, CelebA-HQ-256, and LSUN-Church-256.
The integration of Vector Quantized Variational Autoencoders (VQ-VAE) with autoregressive models as generative components has yielded high-quality results for image generation. However, the autoregressive models strictly follow a progressive scan order during the sampling phase, which makes it hard for existing VQ-series models to escape the trap of lacking global information. Denoising diffusion probabilistic models (DDPMs) in the continuous domain have shown the ability to capture the global context while generating high-quality images. In the discrete state space, some works have demonstrated the potential for text generation and low-resolution image generation. We argue that, with the help of the content-rich discrete visual codebook of the VQ-VAE, a discrete diffusion model can also generate high-fidelity images with global context, compensating for the deficiency of classical autoregressive models along the pixel space. Meanwhile, the integration of the discrete VAE with the diffusion model addresses the drawbacks that traditional autoregressive models are oversized and that diffusion models require excessive time in the sampling process when generating images. It is found that the quality of the generated images depends heavily on the discrete visual codebook. Extensive experiments show that the proposed vector-quantized discrete diffusion model (VQ-DDM) achieves performance comparable to top-tier methods with low complexity. It also demonstrates outstanding advantages over other vector-quantized models with autoregressive components on image inpainting tasks, without additional training.
A wide variety of deep generative models has been developed over the past decade. However, these models often struggle to simultaneously satisfy three key requirements: high sample quality, mode coverage, and fast sampling. We call the challenge imposed by these requirements the generative learning trilemma, since existing models often trade some of them for others. In particular, denoising diffusion models have shown impressive sample quality and diversity, but their expensive sampling has so far kept them out of many real-world applications. In this paper, we argue that slow sampling in these models is fundamentally attributable to the Gaussian assumption in the denoising step, which is justified only for small step sizes. To enable denoising with large steps, and thus reduce the total number of denoising steps, we propose to model the denoising distribution with a complex multimodal distribution. We introduce denoising diffusion generative adversarial networks (denoising diffusion GANs), which model each denoising step using a multimodal conditional GAN. Through extensive evaluations, we show that denoising diffusion GANs achieve sample quality and diversity competitive with the original diffusion models while being 2000× faster on the CIFAR-10 dataset. Compared to traditional GANs, our model exhibits better mode coverage and sample diversity. To the best of our knowledge, the denoising diffusion GAN is the first model that reduces the sampling cost of diffusion models enough to allow them to be applied to real-world applications cheaply. Project page and code: https://nvlabs.github.io/denoising-diffusion-gan
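To illustrate the sampling side, the sketch below shows one large denoising step under this kind of formulation: a conditional generator predicts a clean sample from the current noisy state, and the next state is drawn from the standard diffusion posterior. The generator and the noise-schedule tensors are placeholders, not the authors' implementation.

```python
import torch

def ddgan_denoise_step(x_t, t, z, generator, alphas_cumprod, betas):
    """One large denoising step: a conditional generator predicts x0 from
    (x_t, t, z) (the latent z makes the step multimodal), then x_{t-1} is
    sampled from the DDPM posterior q(x_{t-1} | x_t, x0)."""
    x0 = generator(x_t, t, z)
    a_bar_t = alphas_cumprod[t]
    a_bar_prev = alphas_cumprod[t - 1] if t > 0 else torch.ones_like(a_bar_t)
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    coef_x0 = torch.sqrt(a_bar_prev) * beta_t / (1.0 - a_bar_t)
    coef_xt = torch.sqrt(alpha_t) * (1.0 - a_bar_prev) / (1.0 - a_bar_t)
    mean = coef_x0 * x0 + coef_xt * x_t
    var = beta_t * (1.0 - a_bar_prev) / (1.0 - a_bar_t)
    return mean + torch.sqrt(var) * torch.randn_like(x_t)
```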
Diffusion probabilistic models employ a forward Markov diffusion chain to gradually map the data to a noise distribution, and they learn to generate data by inferring a reverse Markov diffusion chain that inverts the forward diffusion process. To achieve competitive data generation performance, they require a long diffusion chain, which makes them computationally expensive not only during training but also during generation. To significantly improve computational efficiency, we propose to truncate the forward diffusion chain by abolishing the requirement of diffusing the data all the way into random noise. Consequently, we start the reverse diffusion chain from an implicit generative distribution rather than from random noise, and we learn its parameters by matching it to the distribution of the data corrupted by the truncated forward diffusion chain. Experimental results show that, in terms of generation performance and the number of required reverse diffusion steps, our truncated diffusion probabilistic models provide consistent improvements over their non-truncated counterparts.
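A minimal sketch of sampling from such a truncated chain, assuming `implicit_generator` approximates the distribution of data diffused for `t_trunc` steps and `reverse_step` is one learned reverse transition; both are placeholders.

```python
import torch

def truncated_diffusion_sample(implicit_generator, reverse_step, t_trunc, noise):
    """Instead of starting from pure Gaussian noise at the full chain depth T,
    start from an implicit generator matched to q(x_{t_trunc}) and run only
    t_trunc learned reverse diffusion steps."""
    x = implicit_generator(noise)
    for t in reversed(range(t_trunc)):
        x = reverse_step(x, t)
    return x
```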
While diffusion probabilistic models can generate high-quality image content, key limitations remain for high-resolution images and their associated high computational requirements. Recent vector-quantized image models have overcome this limitation of image resolution but are prohibitively slow and unidirectional, as they generate tokens via element-wise autoregressive sampling from the prior. By contrast, in this paper we present a novel discrete diffusion probabilistic model that supports parallel prediction of vector-quantized tokens by using an unconstrained Transformer architecture as the backbone. During training, tokens are randomly masked in an order-agnostic manner and the Transformer learns to predict the original tokens. This parallelism of vector-quantized token prediction in turn facilitates the unconditional generation of globally consistent, high-resolution, and diverse imagery at a fraction of the computational expense. In this way, we can generate image resolutions exceeding that of the original training set samples, while additionally providing per-image likelihood estimates (in a departure from generative adversarial approaches). Our approach achieves state-of-the-art results in terms of density (LSUN Bedroom: 1.51; LSUN Churches: 1.12; FFHQ: 1.20) and coverage (LSUN Bedroom: 0.83; LSUN Churches: 0.73; FFHQ: 0.80), and performs competitively on FID (LSUN Bedroom: 3.64; LSUN Churches: 4.07; FFHQ: 6.11) while offering advantages in terms of computation and reduced training set requirements.
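A simplified sketch of the order-agnostic masked-token training objective described above; `MASK_ID`, the per-step masking ratio, and the unweighted cross-entropy loss are illustrative assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

MASK_ID = 1024  # hypothetical index of the extra [MASK] token outside the VQ codebook

def masked_token_training_step(transformer, tokens):
    """Replace a random subset of VQ tokens with a mask token and train an
    unconstrained Transformer to predict the original tokens at the masked
    positions in parallel. tokens: (B, L) long tensor of codebook indices."""
    mask_ratio = torch.rand(1).item()                       # random ratio per step
    mask = torch.rand_like(tokens, dtype=torch.float) < mask_ratio
    corrupted = tokens.masked_fill(mask, MASK_ID)
    logits = transformer(corrupted)                         # (B, L, vocab) predictions
    return F.cross_entropy(logits[mask], tokens[mask])
```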