An ongoing trend in generative modelling research has been to push sample resolutions ever higher whilst simultaneously reducing the computational requirements for training and sampling. We aim to push this trend further via the combination of techniques, each of which represents the current pinnacle of efficiency in its respective domain. These include Vector-Quantized GANs (VQ-GAN), a model capable of high levels of lossy, yet perceptually insignificant, compression; Hourglass Transformers, a highly scalable self-attention model; and Step-Unrolled Denoising Autoencoders (SUNDAE), a non-autoregressive (NAR) text generation model. Unexpectedly, our method highlights weaknesses in the original formulation of Hourglass Transformers when applied to multidimensional data. In light of this, we propose modifications to the resampling mechanism, applicable to any task employing hierarchical transformers on multidimensional data. Furthermore, we demonstrate the scalability of SUNDAE to long sequence lengths — four times longer than prior work. Our proposed framework scales to high resolutions ($1024 \times 1024$) and trains quickly (2-4 days). Crucially, the trained model produces diverse and realistic megapixel samples in approximately 2 seconds on a consumer-grade GPU (GTX 1080Ti). In general, the framework is flexible: supporting an arbitrary number of sampling steps, sample-wise self-stopping, self-correction capabilities, conditional generation, and a NAR formulation that allows for arbitrary inpainting masks. We obtain an FID score of 10.56 on FFHQ256 — approaching the original VQ-GAN with fewer than half the sampling steps — and 21.85 on FFHQ1024, using only 100 sampling steps.
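To make the non-autoregressive generation loop over VQ tokens concrete, here is a minimal sketch of SUNDAE-style sampling over a token grid. It assumes a `denoiser` (e.g. an hourglass-style transformer) mapping token ids to per-position logits, and a VQ-GAN decoder downstream; all names and shapes are illustrative, not the paper's implementation.

```python
import torch

@torch.no_grad()
def sample_tokens(denoiser, vocab_size, grid_hw=(32, 32), steps=100, device="cpu"):
    """Non-autoregressive sampling over a VQ token grid (SUNDAE-style sketch).

    `denoiser` is assumed to map a (B, H*W) grid of token ids to logits of
    shape (B, H*W, vocab_size); it could be an hourglass-style transformer.
    """
    h, w = grid_hw
    # Start from uniformly random token ids, as in step-unrolled denoising.
    tokens = torch.randint(vocab_size, (1, h * w), device=device)
    for _ in range(steps):
        logits = denoiser(tokens)                       # (1, H*W, V)
        probs = logits.softmax(dim=-1)
        # Resample every position in parallel from the model's marginals.
        tokens = torch.multinomial(probs.view(-1, vocab_size), 1).view(1, h * w)
    return tokens.view(1, h, w)  # feed to the VQ-GAN decoder to obtain pixels
```

Because every position is resampled in parallel, the number of steps is a free knob, which is what allows the arbitrary-step and self-correcting behaviour described above.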
Whilst diffusion probabilistic models can generate high-quality image content, key limitations remain in terms of both generating high-resolution imagery and the associated high computational requirements. Recent vector-quantized image models have overcome this limitation of image resolution but are prohibitively slow and unidirectional, as they generate tokens via element-wise autoregressive sampling from a prior. By contrast, in this paper we propose a novel discrete diffusion probabilistic model prior which enables parallel prediction of vector-quantized tokens by using an unconstrained Transformer architecture as the backbone. During training, tokens are randomly masked in an order-agnostic manner and the Transformer learns to predict the original tokens. This parallelism of vector-quantized token prediction in turn facilitates unconditional generation of globally consistent, high-resolution, and diverse imagery at a fraction of the computational expense. In this manner, we can generate image resolutions exceeding that of the original training set samples whilst additionally provisioning per-image likelihood estimates (in a departure from generative adversarial approaches). Our approach achieves state-of-the-art results in terms of Density (LSUN Bedroom: 1.51; LSUN Churches: 1.12; FFHQ: 1.20) and Coverage (LSUN Bedroom: 0.83; LSUN Churches: 0.73; FFHQ: 0.80), and performs competitively in terms of FID (LSUN Bedroom: 3.64; LSUN Churches: 4.07; FFHQ: 6.11) whilst offering advantages in terms of computation and reduced training set requirements.
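The order-agnostic masking objective described here can be sketched in a few lines. The following is an illustrative training step, assuming a bidirectional `transformer` that returns per-position logits and a reserved `mask_id` token; the exact masking schedule in the paper may differ.

```python
import torch
import torch.nn.functional as F

def masked_token_loss(transformer, tokens, mask_id):
    """One training step sketch: randomly mask VQ tokens in an order-agnostic
    way and train an unconstrained (bidirectional) transformer to recover them.

    `transformer(x)` is assumed to return logits of shape (B, L, vocab_size);
    `mask_id` is a reserved "absorbed" token id.
    """
    B, L = tokens.shape
    # Draw a masking rate per sample, then mask that fraction of positions.
    rates = torch.rand(B, 1, device=tokens.device)
    mask = torch.rand(B, L, device=tokens.device) < rates
    corrupted = torch.where(mask, torch.full_like(tokens, mask_id), tokens)
    logits = transformer(corrupted)
    # The loss is only taken over masked positions (the absorbed states).
    loss = F.cross_entropy(logits[mask], tokens[mask])
    return loss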
Recent neural compression methods have been based on the popular hyperprior framework. It relies on Scalar Quantization and offers a very strong compression performance. This contrasts with recent advances in image generation and representation learning, where Vector Quantization is more commonly employed. In this work, we attempt to bring these lines of research closer by revisiting vector quantization for image compression. We build upon the VQ-VAE framework and introduce several modifications. First, we replace the vanilla vector quantizer by a product quantizer. This intermediate solution between vector and scalar quantization allows for a much wider set of rate-distortion points: it implicitly defines high-quality quantizers that would otherwise require intractably large codebooks. Second, inspired by the success of Masked Image Modeling (MIM) in the context of self-supervised learning and generative image models, we propose a novel conditional entropy model which improves entropy coding by modelling the co-dependencies of the quantized latent codes. The resulting PQ-MIM model is surprisingly effective: its compression performance is on par with recent hyperprior methods. It also outperforms HiFiC in terms of FID and KID metrics when optimized with perceptual losses (e.g. adversarial). Finally, since PQ-MIM is compatible with image generation frameworks, we show qualitatively that it can operate under a hybrid mode between compression and generation, with no further training or finetuning. As a result, we explore the extreme compression regime where an image is compressed into 200 bytes, i.e., less than a tweet.
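The product quantizer mentioned above splits each latent vector into sub-vectors, each quantized against its own small codebook, so M codebooks of size K implicitly define K^M composite codes. A minimal, generic sketch (not the PQ-MIM implementation) could look like this:

```python
import torch

def product_quantize(z, codebooks):
    """Product quantization sketch: split each latent vector into M sub-vectors
    and quantize each with its own small codebook (nearest neighbour).

    z:          (N, D) latent vectors
    codebooks:  list of M tensors, each of shape (K, D // M)
    Returns quantized vectors (N, D) and the M index columns (N, M).
    """
    M = len(codebooks)
    chunks = z.chunk(M, dim=-1)                 # M sub-vectors of size D // M
    quantized, indices = [], []
    for sub, cb in zip(chunks, codebooks):
        dists = torch.cdist(sub, cb)            # (N, K) pairwise distances
        idx = dists.argmin(dim=-1)              # nearest code per sub-vector
        quantized.append(cb[idx])
        indices.append(idx)
    return torch.cat(quantized, dim=-1), torch.stack(indices, dim=-1)
```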
By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes, and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion.
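The key idea — running the standard DDPM objective in a frozen autoencoder's latent space — can be summarized with a short training-step sketch. This is a generic illustration under assumed names (`encoder`, `unet`, `alphas_cumprod`), not the LDM codebase.

```python
import torch
import torch.nn.functional as F

def latent_diffusion_loss(encoder, unet, x, alphas_cumprod):
    """Latent diffusion training sketch: diffuse in the latent space of a frozen,
    pretrained autoencoder rather than in pixel space.

    encoder(x) -> latent z; unet(z_t, t) predicts the added noise (epsilon).
    `alphas_cumprod` is the usual DDPM cumulative noise schedule of length T.
    """
    with torch.no_grad():
        z = encoder(x)                               # compress to latent space
    t = torch.randint(len(alphas_cumprod), (z.shape[0],), device=z.device)
    a = alphas_cumprod[t].view(-1, *([1] * (z.dim() - 1)))
    noise = torch.randn_like(z)
    z_t = a.sqrt() * z + (1 - a).sqrt() * noise      # forward diffusion q(z_t | z)
    return F.mse_loss(unet(z_t, t), noise)           # standard epsilon objective
```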
Designed to learn long-range interactions on sequential data, transformers continue to show state-of-the-art results on a wide variety of tasks. In contrast to CNNs, they contain no inductive bias that prioritizes local interactions. This makes them expressive, but also computationally infeasible for long sequences, such as high-resolution images. We demonstrate how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers enables them to model and thereby synthesize high-resolution images. We show how to (i) use CNNs to learn a context-rich vocabulary of image constituents, and in turn (ii) utilize transformers to efficiently model their composition within high-resolution images. Our approach is readily applied to conditional synthesis tasks, where both non-spatial information, such as object classes, and spatial information, such as segmentations, can control the generated image. In particular, we present the first results on semantically-guided synthesis of megapixel images with transformers and obtain the state of the art among autoregressive models on class-conditional ImageNet. Code and pretrained models can be found at https://git.io/JnyvK.
In this paper we present a new generative model of text, Step-unrolled Denoising Autoencoders (SUNDAE), that does not rely on autoregressive models. Similarly to denoising diffusion techniques, SUNDAE is applied over a sequence of tokens, starting from random inputs and improving them at each step until convergence. We present a simple new improvement operator that converges in fewer iterations than diffusion methods, while qualitatively producing better samples on natural language datasets. SUNDAE achieves state-of-the-art results (among non-autoregressive methods) on the WMT'14 English-to-German translation task, and good qualitative results on unconditional language modelling on the Colossal Clean Common Crawl dataset and on a dataset of Python code from GitHub. The non-autoregressive nature of SUNDAE opens up possibilities beyond left-to-right prompted generation, by filling in arbitrary blank patterns in a template.
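The "step-unrolled" part of the training objective can be illustrated as follows: the sequence is corrupted once, and the model is then fed its own samples for a few unrolled steps so that training matches the iterative sampler. This is a simplified sketch under assumed interfaces, not the reference implementation.

```python
import torch
import torch.nn.functional as F

def sundae_loss(model, tokens, vocab_size, unroll_steps=2):
    """Step-unrolled denoising sketch: corrupt the sequence once, then repeatedly
    feed the model its own samples so training matches the iterative sampler.

    `model(x)` is assumed to return logits of shape (B, L, vocab_size).
    """
    B, L = tokens.shape
    # Initial corruption: replace a random fraction of positions with random ids.
    rates = torch.rand(B, 1, device=tokens.device)
    corrupt = torch.rand(B, L, device=tokens.device) < rates
    x = torch.where(corrupt, torch.randint_like(tokens, vocab_size), tokens)

    loss = 0.0
    for _ in range(unroll_steps):
        logits = model(x)
        loss = loss + F.cross_entropy(logits.view(-1, vocab_size), tokens.view(-1))
        # Unroll: the next input is a sample from the current predictions (no grad).
        with torch.no_grad():
            x = torch.multinomial(
                logits.softmax(-1).view(-1, vocab_size), 1
            ).view(B, L)
    return loss / unroll_steps
```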
The integration of Vector Quantised Variational AutoEncoders (VQ-VAE) with autoregressive models as the generation component has yielded high-quality results on image generation. However, autoregressive models strictly follow a progressive scanning order during the sampling phase, which leaves existing VQ-series models hardly able to escape the trap of lacking global information. Denoising Diffusion Probabilistic Models (DDPM) in the continuous domain have shown the ability to capture global context while generating high-quality images. In the discrete state space, some works have demonstrated the potential to perform text generation and low-resolution image generation. We argue that, with the help of a content-rich discrete visual codebook from VQ-VAE, the discrete diffusion model can also generate high-fidelity images with global context, which compensates for the deficiency of the classical autoregressive model along pixel space. Meanwhile, integrating the discrete VAE with the diffusion model resolves both the drawback of conventional autoregressive models being oversized and the diffusion model's requirement of excessive sampling time when generating images. We find that the quality of the generated images depends heavily on the discrete visual codebook. Extensive experiments demonstrate that the proposed Vector Quantised Discrete Diffusion Model (VQ-DDM) is able to achieve performance comparable to top-tier methods with low complexity. It also demonstrates outstanding advantages over other vector-quantised-with-autoregressive models on image inpainting tasks without additional training.
Although two-stage Vector Quantized (VQ) generative models allow for synthesizing high-fidelity and high-resolution images, their quantization operator encodes similar patches within an image into the same index, resulting in repeated artifacts for similar adjacent regions with existing decoder architectures. To address this issue, we propose to incorporate spatially conditional normalization to modulate the quantized vectors so as to insert spatially variant information into the embedded index maps, encouraging the decoder to generate more photorealistic images. Moreover, we use multichannel quantization to increase the recombination capability of the discrete codes without increasing the cost of the model and codebook. Additionally, to generate discrete tokens at the second stage, we adopt a Masked Generative Image Transformer (MaskGIT) to learn an underlying prior distribution in the compressed latent space, which is much faster than the conventional autoregressive model. Experiments on two benchmark datasets demonstrate that our proposed modulated VQGAN is able to greatly improve the reconstructed image quality as well as provide high-fidelity image generation.
Diffusion models have quickly become the go-to paradigm for generative modelling of perceptual signals (such as images and sound) through iterative refinement. Their success hinges on the fact that the underlying physical phenomena are continuous. For inherently discrete and categorical data such as language, various diffusion-inspired alternatives have been proposed. However, the continuous nature of diffusion models conveys many benefits, and in this work we endeavour to preserve it. We propose CDCD, a framework for modelling categorical data with diffusion models that are continuous both in time and input space. We demonstrate its efficacy on several language modelling tasks.
Advances in computer vision are pushing the limits of image manipulation, with generative models sampling detailed images on various tasks. However, a specialized model is often developed and trained for each specific task, even though many image editing tasks share similarities. In denoising, inpainting, or image compositing, one always aims at generating a realistic image from a low-quality one. In this paper, we aim at making a step towards a unified approach for image editing. To do so, we propose EdiBERT, a bidirectional transformer trained in the discrete latent space built by a vector-quantized auto-encoder. We argue that such a bidirectional model is suited for image manipulation since any patch can be re-sampled conditionally on the whole image. Using this unique and straightforward training objective, we show that the resulting model matches state-of-the-art performance on a wide variety of tasks: image denoising, image completion, and image compositing.
We explore the use of Vector Quantized Variational AutoEncoder (VQ-VAE) models for large scale image generation. To this end, we scale and enhance the autoregressive priors used in VQ-VAE to generate synthetic samples of much higher coherence and fidelity than possible before. We use simple feed-forward encoder and decoder networks, making our model an attractive candidate for applications where the encoding and/or decoding speed is critical. Additionally, VQ-VAE requires sampling an autoregressive model only in the compressed latent space, which is an order of magnitude faster than sampling in the pixel space, especially for large images. We demonstrate that a multi-scale hierarchical organization of VQ-VAE, augmented with powerful priors over the latent codes, is able to generate samples with quality that rivals that of state of the art Generative Adversarial Networks on multifaceted datasets such as ImageNet, while not suffering from GAN's known shortcomings such as mode collapse and lack of diversity.
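At the core of the VQ-VAE framework discussed here is nearest-neighbour quantization with a straight-through gradient estimator. A minimal, generic sketch of that operator (illustrative names and shapes, not the paper's code) follows:

```python
import torch

def vector_quantize(z_e, codebook):
    """VQ-VAE quantization sketch: map each encoder output to its nearest
    codebook embedding, with a straight-through gradient estimator.

    z_e:      (N, D) continuous encoder outputs
    codebook: (K, D) learned embeddings
    """
    dists = torch.cdist(z_e, codebook)          # (N, K) distances to all codes
    idx = dists.argmin(dim=-1)                  # discrete latent codes
    z_q = codebook[idx]
    # Straight-through: copy gradients from the decoder input back to the encoder.
    z_q_st = z_e + (z_q - z_e).detach()
    commit_loss = ((z_q.detach() - z_e) ** 2).mean()
    codebook_loss = ((z_q - z_e.detach()) ** 2).mean()
    return z_q_st, idx, commit_loss, codebook_loss
```

The hierarchical variant described in the abstract applies this operator at multiple spatial scales, with a separate autoregressive prior fit over each level's codes.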
Vector Quantized Variational AutoEncoders (VQ-VAE) are generative models based on discrete latent representations of the data, where inputs are mapped to a finite set of learned embeddings. To generate new samples, an autoregressive prior distribution over the discrete states must be trained separately. This prior is generally very complex and leads to slow generation. In this work, we propose a new model to train the prior and the encoder/decoder networks simultaneously. We build a diffusion bridge between a continuous coded vector and a non-informative prior distribution. The latent discrete states are then given as random functions of these continuous vectors. We show that our model is competitive with the autoregressive prior on the mini-ImageNet and CIFAR datasets and is efficient in both optimization and sampling. Our framework also extends the standard VQ-VAE and enables end-to-end training.
Denoising diffusion models represent a recent emerging topic in computer vision, demonstrating remarkable results in the area of generative modeling. A diffusion model is a deep generative model based on two stages, a forward diffusion stage and a reverse diffusion stage. In the forward diffusion stage, the input data is gradually perturbed over several steps by adding Gaussian noise. In the reverse stage, a model is tasked with recovering the original input data by learning to gradually reverse the diffusion process, step by step. Diffusion models are widely appreciated for the quality and diversity of the generated samples, despite their known computational burden, i.e. low speed due to the high number of steps involved during sampling. In this survey, we provide a comprehensive review of articles on denoising diffusion models applied in vision, comprising both theoretical and practical contributions to the field. First, we identify and present three generic diffusion modeling frameworks, which are based on denoising diffusion probabilistic models, noise-conditioned score networks, and stochastic differential equations. We further discuss the relations between diffusion models and other deep generative models, including variational auto-encoders, generative adversarial networks, energy-based models, autoregressive models, and normalizing flows. Then, we introduce a multi-perspective categorization of diffusion models applied in computer vision. Finally, we illustrate the current limitations of diffusion models and envision some interesting directions for future research.
Real-world data is high-dimensional: a book, image, or musical performance can easily contain hundreds of thousands of elements even after compression. However, the most commonly used autoregressive models, Transformers, are prohibitively expensive to scale to the number of inputs and layers needed to capture this long-range structure. We develop Perceiver AR, an autoregressive, modality-agnostic architecture which uses cross-attention to map long-range inputs to a small number of latents while also maintaining end-to-end causal masking. Perceiver AR can directly attend to over a hundred thousand tokens, enabling practical long-context density estimation without the need for hand-crafted sparsity patterns or memory mechanisms. When trained on images or music, Perceiver AR generates outputs with clear long-term coherence and structure. Our architecture also obtains state-of-the-art likelihood on long-sequence benchmarks, including 64x64 ImageNet images and PG-19 books.
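The defining operation — a causally masked cross-attention that compresses a long input into a small set of latents — can be sketched roughly as below. This is an illustrative single-head version with assumed projection matrices, not the Perceiver AR implementation.

```python
import torch
import torch.nn.functional as F

def perceiver_ar_cross_attend(x_emb, num_latents, wq, wk, wv):
    """Perceiver-AR-style sketch: queries come only from the last `num_latents`
    positions, keys/values from the full long sequence, with a causal mask so
    each latent attends only to positions up to its own.

    x_emb: (B, L, D) embedded input; wq/wk/wv: (D, D) projection matrices.
    """
    B, L, D = x_emb.shape
    q = x_emb[:, -num_latents:] @ wq             # (B, m, D) latent queries
    k, v = x_emb @ wk, x_emb @ wv                # (B, L, D)
    scores = q @ k.transpose(-2, -1) / D ** 0.5  # (B, m, L)
    # Causal mask: latent i sits at absolute position L - m + i.
    pos_q = torch.arange(L - num_latents, L, device=x_emb.device).view(-1, 1)
    pos_k = torch.arange(L, device=x_emb.device).view(1, -1)
    scores = scores.masked_fill(pos_k > pos_q, float("-inf"))
    return F.softmax(scores, dim=-1) @ v         # (B, m, D) latents for the stack
```

The subsequent self-attention stack then operates only on the `num_latents` positions, which is what decouples model depth from input length.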
Although autoregressive models have achieved promising results on image generation, their unidirectional generation process prevents the resultant images from fully reflecting global contexts. To address this issue, we propose an effective image generation framework, Draft-and-Revise with Contextual RQ-Transformer, that considers global contexts during the generation process. As a generalized VQ-VAE, RQ-VAE first represents a high-resolution image as a sequence of discrete code stacks. After code stacks in the sequence are randomly masked, the Contextual RQ-Transformer is trained to infill the masked code stacks based on the unmasked contexts of the image. Then, the Contextual RQ-Transformer uses our two-phase decoding, Draft-and-Revise, to generate an image while exploiting the global contexts of the image during generation. Specifically, in the draft phase, our model first focuses on generating diverse images, despite their rather low quality. Then, in the revision phase, the model iteratively improves the quality of the images while preserving their global contexts. In experiments, our method achieves state-of-the-art results on conditional image generation. We also validate that Draft-and-Revise decoding achieves high performance by effectively controlling the quality-diversity trade-off in image generation.
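The revision phase can be pictured as repeated "remask a subset, infill from global context". The sketch below is a simplified, flat-sequence illustration under assumed names (the actual method operates on RQ code stacks with its own masking schedule).

```python
import torch

@torch.no_grad()
def revise(model, codes, rounds=4, frac=0.25, mask_id=0):
    """Revision-phase sketch: repeatedly remask a random subset of a drafted
    code sequence and infill it conditioned on the remaining (global) context.

    `model(x)` returns logits (B, L, V); `codes` is a drafted sequence (B, L).
    """
    B, L = codes.shape
    for _ in range(rounds):
        remask = torch.rand(B, L, device=codes.device) < frac
        x = codes.masked_fill(remask, mask_id)
        logits = model(x)
        refined = logits.argmax(dim=-1)
        # Only the remasked positions are updated; the rest preserve the draft.
        codes = torch.where(remask, refined, codes)
    return codes
```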
We show that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models. We achieve this on unconditional image synthesis by finding a better architecture through a series of ablations. For conditional image synthesis, we further improve sample quality with classifier guidance: a simple, compute-efficient method for trading off diversity for fidelity using gradients from a classifier. We achieve an FID of 2.97 on ImageNet 128×128, 4.59 on ImageNet 256×256, and 7.72 on ImageNet 512×512, and we match BigGAN-deep even with as few as 25 forward passes per sample, all while maintaining better coverage of the distribution. Finally, we find that classifier guidance combines well with upsampling diffusion models, further improving FID to 3.94 on ImageNet 256×256 and 3.85 on ImageNet 512×512. We release our code at https://github.com/openai/guided-diffusion.
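Classifier guidance as described here shifts the reverse-process mean along the gradient of a noise-aware classifier's log-probability for the target class. A compact sketch (assumed function names; the scaling convention follows the standard formulation):

```python
import torch

def classifier_guided_mean(mu, sigma2, classifier, x_t, t, y, scale=1.0):
    """Classifier-guidance sketch: shift the reverse-process mean along the
    gradient of a (noise-aware) classifier's log-probability for the target
    class, trading diversity for fidelity.

    mu, sigma2: predicted mean and variance of p(x_{t-1} | x_t)
    classifier(x, t): returns class logits for the noisy input x at timestep t
    """
    x_t = x_t.detach().requires_grad_(True)
    log_prob = torch.log_softmax(classifier(x_t, t), dim=-1)
    selected = log_prob[range(len(y)), y].sum()
    grad = torch.autograd.grad(selected, x_t)[0]
    return mu + scale * sigma2 * grad            # guided mean for the sampler
```

Increasing `scale` trades sample diversity for fidelity, which is the knob behind the FID improvements reported in the abstract.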
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
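The parallel decoding referred to above starts from a fully masked token canvas and iteratively keeps the most confident predictions while re-masking the rest. The following is a rough MaskGIT-style sketch with a simple linear unmasking schedule and assumed interfaces, not Muse's exact procedure.

```python
import torch

@torch.no_grad()
def parallel_decode(model, text_emb, length, mask_id, steps=12):
    """Parallel decoding sketch (Muse/MaskGIT-style): start from an all-masked
    canvas, predict every image token at once, keep the most confident ones and
    re-mask the rest, shrinking the masked set over a handful of iterations.

    `model(tokens, text_emb)` is assumed to return logits of shape (1, L, V).
    """
    tokens = torch.full((1, length), mask_id)
    for step in range(steps):
        probs = model(tokens, text_emb).softmax(dim=-1)
        conf, pred = probs.max(dim=-1)                        # (1, L)
        pred = torch.where(tokens != mask_id, tokens, pred)   # keep fixed tokens
        conf = conf.masked_fill(tokens != mask_id, float("inf"))
        num_masked = int(length * (1 - (step + 1) / steps))   # shrinking schedule
        if num_masked == 0:
            return pred
        cutoff = conf.topk(num_masked, largest=False).values[..., -1:]
        tokens = torch.where(conf <= cutoff, torch.full_like(pred, mask_id), pred)
    return tokens
```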
Vector-Quantized (VQ-based) generative models usually consist of two basic components, i.e., VQ tokenizers and generative transformers. Prior research focuses on improving the reconstruction fidelity of VQ tokenizers but rarely examines how the improvement in reconstruction affects the generation ability of generative transformers. In this paper, we surprisingly find that improving the reconstruction fidelity of VQ tokenizers does not necessarily improve the generation. Instead, learning to compress semantic features within VQ tokenizers significantly improves generative transformers' ability to capture textures and structures. We thus highlight two competing objectives of VQ tokenizers for image synthesis: semantic compression and details preservation. Different from previous work that only pursues better details preservation, we propose Semantic-Quantized GAN (SeQ-GAN) with two learning phases to balance the two objectives. In the first phase, we propose a semantic-enhanced perceptual loss for better semantic compression. In the second phase, we fix the encoder and codebook, but enhance and finetune the decoder to achieve better details preservation. The proposed SeQ-GAN greatly improves VQ-based generative models and surpasses the GAN and Diffusion Models on both unconditional and conditional image generation. Our SeQ-GAN (364M) achieves Frechet Inception Distance (FID) of 6.25 and Inception Score (IS) of 140.9 on 256x256 ImageNet generation, a remarkable improvement over VIT-VQGAN (714M), which obtains 11.2 FID and 97.2 IS.
We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias of existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem for existing methods. Our experiments show that VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with a similar number of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal-size images. VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving better image quality.
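The mask-and-replace corruption mentioned above can be illustrated with a single forward-corruption step: each token is absorbed into a [MASK] state with some probability, replaced by a uniformly random code with another, and kept otherwise. This is a simplified sketch with assumed schedule tensors, not the paper's full transition matrices.

```python
import torch

def mask_and_replace(tokens, t, mask_id, vocab_size, gamma, beta):
    """Mask-and-replace corruption sketch (VQ-Diffusion-style forward process):
    at step t each token is absorbed into [MASK] with prob. gamma[t], replaced
    by a uniformly random code with prob. beta[t], and kept otherwise.

    tokens: (B, L) integer codes; t: integer timestep;
    gamma, beta: 1-D probability schedules indexed by timestep (precomputed).
    """
    u = torch.rand_like(tokens, dtype=torch.float)
    masked = u < gamma[t]
    replaced = (u >= gamma[t]) & (u < gamma[t] + beta[t])
    out = tokens.clone()
    out[masked] = mask_id
    out[replaced] = torch.randint(vocab_size, (int(replaced.sum()),),
                                  device=tokens.device)
    return out
```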
As information exists in various modalities in the real world, effective interaction and fusion of multimodal information play a key role in the creation and perception of multimodal data in computer vision and deep learning research. With superb power in modelling the interaction among multimodal information, multimodal image synthesis and editing has become a hot research topic in recent years. Instead of providing explicit guidance like traditional visual guidance, multimodal guidance offers intuitive and flexible means for image synthesis and editing. On the other hand, this field also faces several challenges characterized by the inherent modality gaps, the synthesis of high-resolution images, faithful evaluation metrics, etc. In this survey, we comprehensively contextualize the recent advances in multimodal image synthesis and editing and formulate taxonomies according to data modalities and model architectures. We start with an introduction to the different types of guidance modalities in image synthesis and editing. We then describe multimodal image synthesis and editing approaches extensively with detailed frameworks, including Generative Adversarial Networks (GANs), GAN inversion, Transformers, and other methods such as NeRF and diffusion models. This is followed by a comprehensive description of the benchmark datasets and corresponding evaluation metrics widely adopted in multimodal image synthesis and editing, as well as detailed comparisons of different synthesis methods with analysis of their respective advantages and limitations. Finally, we provide insights into the current research challenges and possible future research directions. A project associated with this survey is available at https://github.com/fnzhan/MISE