Normalizing flows are bijective mappings between inputs and latent representations with a fully factorized distribution. They are very attractive due to exact likelihood evaluation and efficient sampling. However, their effective capacity is often insufficient, since the bijectivity constraint limits the model width. We address this issue by incrementally padding intermediate representations with noise. We precondition the noise in accordance with previous invertible units, which we describe as cross-unit coupling. Our invertible glow-like modules increase the model expressivity by fusing a densely connected block with Nyström self-attention. We refer to our architecture as DenseFlow, since both cross-unit and intra-module couplings rely on dense connectivity. Experiments show significant improvements due to the proposed contributions and reveal state-of-the-art density estimation under moderate computing budgets.
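For context, the "exact likelihood evaluation" claimed throughout this literature follows from the change-of-variables formula: for a bijection $f$ mapping data $x$ to a latent $z = f(x)$ with a fully factorized base density $p_Z$,

$$ \log p_X(x) = \log p_Z\big(f(x)\big) + \log\left|\det \frac{\partial f(x)}{\partial x}\right|, $$

which also makes clear why the bijectivity constraint limits model width: every intermediate representation must keep the dimension of the input so that the Jacobian determinant stays defined and tractable.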
Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using an invertible 1 × 1 convolution. Using our method we demonstrate a significant improvement in log-likelihood on standard benchmarks. Perhaps most strikingly, we demonstrate that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images. The code for our model is available at https://github.com/openai/glow.
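As a minimal illustration of the idea (a sketch, not OpenAI's implementation), an invertible 1 × 1 convolution is a learned c × c matrix applied across channels at every spatial position, contributing h · w · log|det W| to the log-likelihood. A NumPy sketch, with all function names hypothetical:

```python
import numpy as np

def invertible_1x1_conv(x, W):
    """Apply a c-by-c channel-mixing matrix W at every spatial position.

    x: array of shape (h, w, c); W: invertible (c, c) matrix.
    Returns the transformed tensor and its log-determinant term.
    """
    h, w, c = x.shape
    y = x @ W.T                                  # per-pixel linear map over channels
    logdet = h * w * np.linalg.slogdet(W)[1]     # h*w copies of log|det W|
    return y, logdet

def invert_1x1_conv(y, W):
    """Exact inverse: multiply by W^{-1} at every position."""
    return y @ np.linalg.inv(W).T

# Usage: a random rotation is guaranteed invertible, with log|det| = 0.
rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.normal(size=(4, 4)))
x = rng.normal(size=(8, 8, 4))
y, logdet = invertible_1x1_conv(x, W)
assert np.allclose(invert_1x1_conv(y, W), x)
```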
Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.
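The core building block of real NVP is the affine coupling layer: half of the dimensions pass through unchanged and parameterize a scale and shift for the other half, so the Jacobian is triangular and the inverse is closed-form. A minimal NumPy sketch (the scale and shift functions below are toy stand-ins for the paper's neural networks):

```python
import numpy as np

def coupling_forward(x, s, t):
    """Affine coupling: x1 passes through; x2 is scaled and shifted.

    s, t: callables mapping x1 to per-dimension log-scale and shift.
    The log-determinant of the Jacobian is simply sum(s(x1)).
    """
    x1, x2 = np.split(x, 2)
    y2 = x2 * np.exp(s(x1)) + t(x1)
    return np.concatenate([x1, y2]), np.sum(s(x1))

def coupling_inverse(y, s, t):
    """Closed-form inverse: y1 is untouched, so s(y1), t(y1) are recomputable."""
    y1, y2 = np.split(y, 2)
    x2 = (y2 - t(y1)) * np.exp(-s(y1))
    return np.concatenate([y1, x2])

# Usage with toy scale/shift functions standing in for neural networks.
s = lambda h: np.tanh(h)          # keeps log-scales bounded
t = lambda h: 0.5 * h
x = np.random.default_rng(1).normal(size=(6,))
y, logdet = coupling_forward(x, s, t)
assert np.allclose(coupling_inverse(y, s, t), x)
```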
Flow-based generative models have recently become one of the most effective approaches to modeling data generation. They are built from a sequence of invertible and tractable transformations. Glow first introduced a simple type of generative flow using an invertible $1 \times 1$ convolution. However, the $1 \times 1$ convolution has limited flexibility compared to a standard convolution. In this paper, we propose a novel invertible $n \times n$ convolution approach that overcomes the limitations of the invertible $1 \times 1$ convolution. In addition, our proposed network is not only tractable and invertible but also uses fewer parameters than a standard convolution. Experiments on the CIFAR-10, ImageNet, and Celeb-HQ datasets show that our invertible $n \times n$ convolution helps to significantly improve the performance of generative models.
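One known way to make a full $k \times k$ convolution invertible with a tractable determinant (used in prior work on periodic/"emerging" convolutions, and not necessarily this paper's construction) is to impose circular boundary conditions: the convolution is then diagonalized by the 2-D DFT, so its log-determinant is the sum of the log-magnitudes of the kernel's Fourier coefficients. A single-channel NumPy sketch under that assumption:

```python
import numpy as np

def circular_conv2d(x, kernel):
    """Single-channel circular convolution via the 2-D FFT.

    With periodic boundaries the operator is diagonal in Fourier space,
    so both the inverse and the log-determinant are cheap.
    """
    K = np.fft.fft2(kernel, s=x.shape)   # kernel zero-padded to image size
    return np.real(np.fft.ifft2(np.fft.fft2(x) * K))

def circular_conv2d_logdet(kernel, shape):
    """log|det| of the circular convolution = sum of log|DFT(kernel)|."""
    K = np.fft.fft2(kernel, s=shape)
    return np.sum(np.log(np.abs(K)))

def circular_conv2d_inverse(y, kernel):
    """Exact inverse: divide by the kernel's Fourier coefficients."""
    K = np.fft.fft2(kernel, s=y.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(y) / K))

# Usage: invertibility holds as long as no Fourier coefficient is zero,
# which a near-identity kernel guarantees.
rng = np.random.default_rng(2)
k = np.eye(3) * 0.1; k[0, 0] += 1.0
x = rng.normal(size=(8, 8))
y = circular_conv2d(x, k)
assert np.allclose(circular_conv2d_inverse(y, k), x)
```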
Normalizing Flows are generative models which produce tractable distributions where both sampling and density evaluation can be efficient and exact. The goal of this survey article is to give a coherent and comprehensive review of the literature around the construction and use of Normalizing Flows for distribution learning. We aim to provide context and explanation of the models, review current state-of-the-art literature, and identify open questions and promising future directions.
Normalizing flows provide an elegant method for obtaining tractable density estimates from distributions by using invertible transformations. The main challenge is to improve the expressivity of the models while keeping the invertibility constraint intact. We propose to do so by incorporating localized self-attention. However, conventional self-attention mechanisms do not satisfy the requirements for obtaining invertible flows and cannot be naively incorporated into normalizing flows. To address this, we introduce a novel approach called Attentive Contractive Flow (ACF), which utilizes a special category of flow-based generative models: contractive flows. We demonstrate that ACF can be introduced into a variety of state-of-the-art flow models in a plug-and-play manner. This is shown not only to improve the representation power of these models (improving on the bits-per-dim metric), but also to lead to significantly faster training convergence. Qualitative results, including interpolations between test images, demonstrate that the samples are more realistic and capture local correlations in the data. We further evaluate the results through perturbation analysis using AWGN, demonstrating that ACF models (particularly the dot-product variant) show better and more consistent resilience to additive noise.
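Contractive flows are invertible by the Banach fixed-point theorem: if a residual map y = x + g(x) has a Lipschitz constant of g below one, x can be recovered from y by fixed-point iteration. The sketch below shows this generic i-ResNet-style inversion scheme (the ACF-specific attention layers may differ), as a hedged NumPy illustration:

```python
import numpy as np

def invert_contractive_residual(y, g, n_iters=100):
    """Invert y = x + g(x) when Lip(g) < 1 via the iteration x <- y - g(x).

    Banach's fixed-point theorem guarantees geometric convergence.
    """
    x = y.copy()
    for _ in range(n_iters):
        x = y - g(x)
    return x

# Usage: g = 0.5 * tanh is 0.5-Lipschitz, hence contractive.
g = lambda v: 0.5 * np.tanh(v)
x_true = np.random.default_rng(3).normal(size=(5,))
y = x_true + g(x_true)
assert np.allclose(invert_contractive_residual(y, g), x_true, atol=1e-8)
```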
In this work, we provide an exact-likelihood alternative to the variational training of generative autoencoders. We show that VAE-style autoencoders can be constructed using invertible layers, which offer a tractable exact likelihood without the need for any regularization terms. This is achieved while retaining complete freedom in the choice of encoder, decoder, and prior architectures, making our approach a drop-in replacement for the training of existing VAEs and VAE-style models. We refer to the resulting models as Autoencoders within Flows (AEF), since the encoder, decoder, and prior are defined as individual layers of an overall invertible architecture. We show that the approach yields strikingly higher performance than architecturally equivalent VAEs in terms of log-likelihood, sample quality, and denoising performance. In a broad sense, the main ambition of this work is to close the gap between the normalizing-flow and autoencoder literature under the common framework of invertibility and exact maximum likelihood.
Modern generative models fall roughly into two main categories: (1) models that can produce high-quality random samples but cannot estimate the exact density of new data points, and (2) models that provide exact density estimation, at the cost of sample quality and the compactness of the latent space. In this work we present LED, a new generative model closely related to GANs that allows not only efficient sampling but also efficient density estimation. By maximizing the log-likelihood of the discriminator's output, we derive an alternative adversarial optimization objective that encourages diversity in the generated data. This formulation provides insight into the relationships between several popular generative models. In addition, we construct a flow-based generator that can compute the exact probability of generated samples while allowing low-dimensional latent variables as input. Our experimental results on various datasets show that our density estimator produces accurate estimates while retaining good quality in the generated samples.
Normalizing flows are generative models that provide tractable density estimation via invertible transformations from a simple base distribution to a complex target distribution. However, this technique cannot directly model data supported on an unknown low-dimensional manifold, a common occurrence in real-world domains such as image data. Recent attempts to remedy this introduce geometric complications that defeat a central benefit of normalizing flows: exact density estimation. We recover this benefit with Conformal Embedding Flows, a framework for designing flows that learn manifolds with tractable densities. We argue that composing a standard flow with a trainable conformal embedding is the most natural way to model manifold-supported data. To this end, we present a series of conformal building blocks and apply them in experiments with synthetic and real-world data to demonstrate that flows can model manifold-supported distributions without sacrificing tractable likelihoods.
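For context (the standard injective change-of-variables, not notation taken from the paper): if $g:\mathbb{R}^d \to \mathbb{R}^D$ embeds latents into data space, the manifold density involves a generally intractable Gram determinant, and conformality ($J_g^\top J_g = \lambda(z)^2 I_d$) is precisely the condition that collapses it to a scalar:

$$ p_X(x) = p_Z(z)\,\det\!\big(J_g(z)^\top J_g(z)\big)^{-1/2} = p_Z(z)\,\lambda(z)^{-d}. $$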
A normalizing flow models a complex probability density as an invertible transformation of a simple base density. Flows based on either coupling or autoregressive transforms both offer exact density evaluation and sampling, but rely on the parameterization of an easily invertible elementwise transformation, whose choice determines the flexibility of these models. Building upon recent work, we propose a fully-differentiable module based on monotonic rational-quadratic splines, which enhances the flexibility of both coupling and autoregressive transforms while retaining analytic invertibility. We demonstrate that neural spline flows improve density estimation, variational inference, and generative modeling of images.
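For reference, the monotonic rational-quadratic map on a single bin takes roughly the following form (reconstructed from the Gregory–Delbourgo construction the paper builds on; see the paper for the exact conventions): with knots $(x^{(k)}, y^{(k)})$, positive knot derivatives $\delta^{(k)}$, bin slope $s_k = \frac{y^{(k+1)}-y^{(k)}}{x^{(k+1)}-x^{(k)}}$, and $\xi = \frac{x - x^{(k)}}{x^{(k+1)}-x^{(k)}}$:

$$ g(x) = y^{(k)} + \frac{\big(y^{(k+1)}-y^{(k)}\big)\big[s_k\,\xi^2 + \delta^{(k)}\xi(1-\xi)\big]}{s_k + \big[\delta^{(k+1)} + \delta^{(k)} - 2 s_k\big]\xi(1-\xi)}. $$

Monotonicity holds whenever the knot derivatives are positive, and inverting $g$ reduces to solving a quadratic in $\xi$, which is what keeps the flow analytically invertible.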
Normalizing flows provide a general mechanism for defining expressive probability distributions, only requiring the specification of a (usually simple) base distribution and a series of bijective transformations. There has been much recent work on normalizing flows, ranging from improving their expressive power to expanding their application. We believe the field has now matured and is in need of a unified perspective. In this review, we attempt to provide such a perspective by describing flows through the lens of probabilistic modeling and inference. We place special emphasis on the fundamental principles of flow design, and discuss foundational topics such as expressive power and computational trade-offs. We also broaden the conceptual framing of flows by relating them to more general probability transformations. Lastly, we summarize the use of flows for tasks such as generative modeling, approximate inference, and supervised learning.
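The mechanics this review describes are compact enough to sketch: a flow is a stack of bijections, and the model log-density is the base log-density of the fully transformed point plus the accumulated per-layer log-Jacobian terms. A toy NumPy sketch (all names illustrative):

```python
import numpy as np

def standard_normal_logpdf(z):
    """Log-density of a factorized standard-normal base distribution."""
    return -0.5 * np.sum(z**2 + np.log(2 * np.pi))

def flow_log_prob(x, layers):
    """log p(x) = log p_base(f(x)) + sum of per-layer log|det J|.

    Each layer is a callable returning (transformed point, logdet).
    """
    log_det_total = 0.0
    z = x
    for forward in layers:
        z, logdet = forward(z)
        log_det_total += logdet
    return standard_normal_logpdf(z) + log_det_total

# Usage: two invertible toy layers with analytic log-determinants.
affine = lambda z: (2.0 * z + 1.0, z.size * np.log(2.0))
squash = lambda z: (np.tanh(z), np.sum(np.log1p(-np.tanh(z)**2)))
x = np.random.default_rng(4).normal(size=(3,))
print(flow_log_prob(x, [affine, squash]))
```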
Normalizing flows are diffeomorphic, typically dimension-preserving, models trained using the likelihood of the model. We use the SurVAE framework to construct dimension-reducing surjective flows via a new layer, known as the funnel. We demonstrate its efficacy on a variety of datasets and show that it improves upon or matches the performance of existing flows while having a reduced latent-space size. The funnel layer can be constructed from a wide range of transformations, including restricted convolutions and feed-forward layers.
Normalizing flows model complex probability distributions using maps obtained by composing invertible layers. Special linear layers such as masked and 1x1 convolutions play a key role in existing architectures because they increase expressive power while having tractable Jacobians and inverses. We propose a new family of invertible linear layers based on butterfly layers, which theoretically capture complex linear structures, including permutations and periodicity, yet can be inverted efficiently. This representational power is a key advantage of our approach, since such structures are common in many real-world datasets. Based on our invertible butterfly layers, we construct a new normalizing-flow model called ButterflyFlow. Empirically, we demonstrate that ButterflyFlows not only achieve strong density-estimation results on natural images such as MNIST, CIFAR-10, and ImageNet 32x32, but also obtain significantly better log-likelihoods on structured datasets such as galaxy images and MIMIC-III patient cohorts, all while being more efficient than relevant baselines in terms of memory and computation.
Normalizing flows, autoregressive models, variational autoencoders (VAEs), and deep energy-based models are among competing likelihood-based frameworks for deep generative learning. Among them, VAEs have the advantage of fast and tractable sampling and easy-to-access encoding networks. However, they are currently outperformed by other models such as normalizing flows and autoregressive models. While the majority of the research in VAEs is focused on the statistical challenges, we explore the orthogonal direction of carefully designing neural architectures for hierarchical VAEs. We propose Nouveau VAE (NVAE), a deep hierarchical VAE built for image generation using depth-wise separable convolutions and batch normalization. NVAE is equipped with a residual parameterization of Normal distributions and its training is stabilized by spectral regularization. We show that NVAE achieves state-of-the-art results among non-autoregressive likelihood-based models on the MNIST, CIFAR-10, CelebA 64, and CelebA HQ datasets and it provides a strong baseline on FFHQ. For example, on CIFAR-10, NVAE pushes the state-of-the-art from 2.98 to 2.91 bits per dimension, and it produces high-quality images on CelebA HQ as shown in Fig. 1. To the best of our knowledge, NVAE is the first successful VAE applied to natural images as large as 256×256 pixels. The source code is available at https://github.com/NVlabs/NVAE.
Normalizing flows provide an elegant approach to generative modeling that allows for efficient sampling and exact density evaluation of the data distribution. However, current techniques have significant limitations in their expressivity when the data distribution is supported on a low-dimensional manifold or has a non-trivial topology. We introduce a new statistical framework for learning a mixture of local normalizing flows as "chart maps" over the data manifold. Our framework augments the expressivity of recent approaches while preserving the signature property of normalizing flows: that they admit exact density evaluation. We learn a suitable atlas of charts for the data manifold via a vector-quantized autoencoder (VQ-AE) and learn the distributions over them using conditional flows. We validate experimentally that our probabilistic framework enables existing approaches to better model data distributions over complex manifolds.
The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. In addition, we demonstrate that a novel type of variational autoencoder, coupled with IAF, is competitive with neural autoregressive models in terms of attained log-likelihood on natural images, while allowing significantly faster synthesis.
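The key property of IAF is that, because the autoregressive network conditions on the noise rather than on the output, every dimension of a sample can be computed in a single parallel pass, with log-det = Σ log σ. A toy NumPy sketch (strictly lower-triangular masked linear maps stand in for the paper's autoregressive networks):

```python
import numpy as np

def iaf_sample(eps, L_mu, L_s):
    """One IAF step: x = eps * sigma(eps) + mu(eps), autoregressive in eps.

    L_mu, L_s are strictly lower-triangular, so mu_i and sigma_i depend
    only on eps_<i; the whole sample is computed in one parallel pass.
    """
    mu = L_mu @ eps
    log_sigma = L_s @ eps
    log_det = np.sum(log_sigma)          # sum_i log sigma_i
    return eps * np.exp(log_sigma) + mu, log_det

# Usage: strict lower-triangular masks enforce the autoregressive structure.
rng = np.random.default_rng(5)
d = 4
mask = np.tril(np.ones((d, d)), k=-1)
L_mu = rng.normal(size=(d, d)) * mask
L_s = 0.1 * rng.normal(size=(d, d)) * mask
x, log_det = iaf_sample(rng.normal(size=d), L_mu, L_s)
```

Inverting this map, which density evaluation of external data points would require, is sequential in the dimensions; that is why IAF targets variational inference, where only samples and their own densities are needed.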
Autoregressive models are among the best performing neural density estimators. We describe an approach for increasing the flexibility of an autoregressive model, based on modelling the random numbers that the model uses internally when generating data. By constructing a stack of autoregressive models, each modelling the random numbers of the next model in the stack, we obtain a type of normalizing flow suitable for density estimation, which we call Masked Autoregressive Flow. This type of flow is closely related to Inverse Autoregressive Flow and is a generalization of Real NVP. Masked Autoregressive Flow achieves state-of-the-art performance in a range of general-purpose density estimation tasks.
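MAF is the mirror image of IAF: the autoregressive network conditions on the data, so the inverse map u_i = (x_i − μ_i(x_{<i})) · exp(−α_i(x_{<i})), and hence the density, is computable in one parallel pass, while sampling is sequential. A matching NumPy sketch with the same toy masked-linear stand-ins:

```python
import numpy as np

def maf_log_prob(x, L_mu, L_a):
    """Density evaluation for one MAF layer with a standard-normal base.

    u = (x - mu(x)) * exp(-alpha(x)) is a single pass because mu_i and
    alpha_i depend only on x_<i (strictly lower-triangular L_mu, L_a).
    """
    mu, alpha = L_mu @ x, L_a @ x
    u = (x - mu) * np.exp(-alpha)
    base = -0.5 * np.sum(u**2 + np.log(2 * np.pi))
    return base - np.sum(alpha)          # log|det du/dx| = -sum(alpha)

def maf_sample(u, L_mu, L_a):
    """Sampling must fill in one dimension at a time."""
    x = np.zeros_like(u)
    for i in range(len(u)):
        mu_i, alpha_i = L_mu[i] @ x, L_a[i] @ x   # rows use only x[:i]
        x[i] = u[i] * np.exp(alpha_i) + mu_i
    return x
```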
Recent advances in generative machine learning models have rekindled research interest in the area of password guessing. Data-driven password-guessing approaches based on GANs and deep latent-variable models show impressive generalization performance and offer compelling properties for the password-guessing task. In this paper, we propose PassFlow, a flow-based generative-model approach to password guessing. Flow-based models allow precise log-likelihood computation and optimization, which enables exact latent-variable inference. In addition, flow-based models provide meaningful latent-space representations, enabling operations such as exploration of specific subspaces of the latent space and interpolation. We demonstrate the applicability of generative flows to the context of password guessing, departing from previous applications of flow networks, which were mainly limited to the continuous space of image generation. We show that PassFlow outperforms prior state-of-the-art GAN-based approaches on the password-guessing task while using a training set that is orders of magnitude smaller than those of prior work. Furthermore, a qualitative analysis of the generated samples shows that PassFlow can accurately model the distribution of the original passwords, with even non-matched samples closely resembling human-like passwords.
We consider the problem of information compression from high-dimensional data. Where many studies consider compression by non-invertible transformations, we emphasize the importance of invertible compression. We introduce a new class of likelihood-based autoencoders with a pseudo-bijective architecture, which we call Pseudo Invertible Encoders. We provide a theoretical explanation of their principles. We evaluate the Gaussian Pseudo Invertible Encoder on MNIST, where our model outperforms WAE and VAE in the sharpness of the generated images.
The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.
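The simplest finite flow introduced in this line of work is the planar flow, whose log-det term is a closed-form scalar by the matrix determinant lemma:

$$ f(z) = z + u\,h(w^\top z + b), \qquad \left|\det\frac{\partial f}{\partial z}\right| = \left|1 + u^\top \psi(z)\right|, \quad \psi(z) = h'(w^\top z + b)\,w, $$

so composing K such maps changes the log-density by a sum of K cheap scalar corrections, which is what makes the approach scale within amortized variational inference.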