The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. In addition, we demonstrate that a novel type of variational autoencoder, coupled with IAF, is competitive with neural autoregressive models in terms of attained log-likelihood on natural images, while allowing significantly faster synthesis.
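As a concrete illustration, one IAF step can be sketched as follows; `autoregressive_nn` stands in for a MADE-style masked network and is an assumption of this sketch, not code from the paper.

```python
import numpy as np

def iaf_step(z, autoregressive_nn):
    """One inverse autoregressive flow step (minimal sketch).

    `autoregressive_nn` maps z to per-dimension outputs (m, s), where
    (m_i, s_i) depend only on z_1..z_{i-1}, so the Jacobian of the
    update below is triangular and its determinant is cheap to compute.
    """
    m, s = autoregressive_nn(z)
    sigma = 1.0 / (1.0 + np.exp(-s))      # sigmoid gate (numerically stable variant)
    z_new = sigma * z + (1.0 - sigma) * m
    log_det = np.sum(np.log(sigma))       # log |det Jacobian| of this step
    return z_new, log_det
```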
The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.
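The construction rests on the change-of-variables formula: for a chain of invertible maps $f_1, \dots, f_K$ applied to a sample $z_0 \sim q_0$, the density of the final sample is

```latex
z_K = f_K \circ \cdots \circ f_1(z_0), \qquad
\ln q_K(z_K) = \ln q_0(z_0) - \sum_{k=1}^{K} \ln \left| \det \frac{\partial f_k}{\partial z_{k-1}} \right|
```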
Normalizing Flows are generative models which produce tractable distributions where both sampling and density evaluation can be efficient and exact. The goal of this survey article is to give a coherent and comprehensive review of the literature around the construction and use of Normalizing Flows for distribution learning. We aim to provide context and explanation of the models, review current state-of-the-art literature, and identify open questions and promising future directions.
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables. However, such amortized variational inference faces two challenges: (1) the limited posterior expressiveness of fully-factorized Gaussian assumption and (2) the amortization error of the inference model. We present a novel approach that addresses both challenges. First, we focus on ReLU networks with Gaussian output and illustrate their connection to probabilistic PCA. Building on this observation, we derive an iterative algorithm that finds the mode of the posterior and apply full-covariance Gaussian posterior approximation centered on the mode. Subsequently, we present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models. Based on the Laplace approximation of the latent variable posterior, VLAEs enhance the expressiveness of the posterior while reducing the amortization error. Empirical results on MNIST, Omniglot, Fashion-MNIST, SVHN and CIFAR10 show that the proposed approach significantly outperforms other recent amortized or iterative methods on the ReLU networks.
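For reference, the generic Laplace approximation that VLAEs build on centers a full-covariance Gaussian at the posterior mode; this is the standard construction, and the paper's exact parameterization may differ:

```latex
\hat{z} = \arg\max_{z} \log p(x, z), \qquad
\Lambda = -\nabla^2_{z} \log p(x, z)\,\big|_{z=\hat{z}}, \qquad
q(z \mid x) = \mathcal{N}\big(z;\, \hat{z},\, \Lambda^{-1}\big)
```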
Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using an invertible 1 × 1 convolution. Using our method we demonstrate a significant improvement in log-likelihood on standard benchmarks. Perhaps most strikingly, we demonstrate that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images. The code for our model is available at https://github.com/openai/glow.
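A minimal sketch of the invertible 1 × 1 convolution (channel mixing by a learned square matrix; the array layout and names here are illustrative, not the Glow codebase):

```python
import numpy as np

def invertible_1x1_conv(x, W):
    """Invertible 1x1 convolution (sketch). x: (h, w, c); W: (c, c) invertible."""
    h, w, _ = x.shape
    y = x @ W                                           # mix channels at every spatial position
    log_det = h * w * np.log(np.abs(np.linalg.det(W)))  # same Jacobian at each pixel
    return y, log_det

def invertible_1x1_conv_inverse(y, W):
    """Exact inverse: undo the channel mixing."""
    return y @ np.linalg.inv(W)
```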
In this work, we provide an exact-likelihood alternative to the variational training of generative autoencoders. We show that VAE-style autoencoders can be constructed using invertible layers, which offer a tractable exact likelihood without requiring any regularization terms. This is achieved while leaving complete freedom in the choice of encoder, decoder, and prior architectures, making our approach a drop-in replacement for the training of existing VAEs and VAE-style models. We refer to the resulting models as AutoEncoders within Flows (AEF), since the encoder, decoder, and prior are defined as individual layers of an overall invertible architecture. We show that the approach yields markedly higher performance than architecturally equivalent VAEs in terms of log-likelihood, sample quality, and denoising performance. Broadly speaking, the main ambition of this work is to close the gap between the normalizing-flow and autoencoder literatures under the common framework of invertibility and exact maximum likelihood.
Variational inference typically minimizes the "reverse" Kullback-Leibler (KL) divergence KL(q||p) from the approximate distribution q to the posterior p. Recent work studies the "forward" KL divergence KL(p||q), which, unlike the reverse KL, does not lead to variational approximations that underestimate uncertainty. This paper introduces Transport Score Climbing (TSC), a method that optimizes KL(p||q) by using Hamiltonian Monte Carlo (HMC) and a novel adaptive transport map. The transport map improves the trajectory of HMC by acting as a change of variables between the latent variable space and a warped space. TSC uses HMC samples to dynamically train the transport map while optimizing KL(p||q). TSC leverages a synergy, where better transport maps lead to better HMC sampling, which in turn leads to better transport maps. We demonstrate TSC on synthetic and real data, and find that it achieves competitive performance when training variational autoencoders on large-scale data.
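The two divergences differ only in which distribution the expectation is taken under, which is what drives the mode-seeking versus mass-covering behavior:

```latex
\mathrm{KL}(q \,\|\, p) = \mathbb{E}_{q(z)}\big[\log q(z) - \log p(z)\big],
\qquad
\mathrm{KL}(p \,\|\, q) = \mathbb{E}_{p(z)}\big[\log p(z) - \log q(z)\big]
```

Minimizing KL(p||q) over q requires only samples from p, here supplied by HMC, since the entropy term of p does not depend on q.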
The core principle of variational inference (VI) is to convert the statistical inference problem of computing a complex posterior probability density into a tractable optimization problem. This property makes VI faster than several sampling-based techniques. However, traditional VI algorithms do not scale to large datasets and cannot readily infer out-of-bounds data points without re-running the optimization process. Recent developments in the field, such as stochastic, black-box, and amortized VI, have helped address these issues. Generative modeling tasks nowadays make widespread use of amortized VI for its efficiency and scalability, as it utilizes a parametrized function to learn the parameters of the approximate posterior density. In this paper, we review the mathematical foundations of various VI techniques to form the basis for understanding amortized VI. In addition, we provide an overview of recent trends that address issues in amortized VI, such as the amortization gap, generalization issues, inconsistent representation learning, and posterior collapse. Finally, we analyze alternative divergence measures that improve VI optimization.
How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.
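The reparameterization can be sketched in a few lines; this is a minimal illustration of the estimator, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """z = mu + sigma * eps with eps ~ N(0, I).

    The sampling noise is moved into eps, so the draw is a deterministic,
    differentiable function of (mu, log_var) and gradients pass through.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps
```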
Normalizing flows provide a general mechanism for defining expressive probability distributions, only requiring the specification of a (usually simple) base distribution and a series of bijective transformations. There has been much recent work on normalizing flows, ranging from improving their expressive power to expanding their application. We believe the field has now matured and is in need of a unified perspective. In this review, we attempt to provide such a perspective by describing flows through the lens of probabilistic modeling and inference. We place special emphasis on the fundamental principles of flow design, and discuss foundational topics such as expressive power and computational trade-offs. We also broaden the conceptual framing of flows by relating them to more general probability transformations. Lastly, we summarize the use of flows for tasks such as generative modeling, approximate inference, and supervised learning.
Autoregressive models are among the best performing neural density estimators. We describe an approach for increasing the flexibility of an autoregressive model, based on modelling the random numbers that the model uses internally when generating data. By constructing a stack of autoregressive models, each modelling the random numbers of the next model in the stack, we obtain a type of normalizing flow suitable for density estimation, which we call Masked Autoregressive Flow. This type of flow is closely related to Inverse Autoregressive Flow and is a generalization of Real NVP. Masked Autoregressive Flow achieves state-of-the-art performance in a range of general-purpose density estimation tasks.
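Density evaluation under MAF is a single parallel pass through a masked network; a sketch, with `masked_nn` as a hypothetical MADE-style module:

```python
import numpy as np

def maf_to_noise(x, masked_nn):
    """Map data x to base noise u for density evaluation (sketch).

    `masked_nn` returns (mu, alpha) with (mu_i, alpha_i) depending only
    on x_1..x_{i-1}; sampling inverts this dimension by dimension.
    """
    mu, alpha = masked_nn(x)
    u = (x - mu) * np.exp(-alpha)   # invert x_i = u_i * exp(alpha_i) + mu_i
    log_det = -np.sum(alpha)        # log |det du/dx|
    return u, log_det
```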
Normalizing flows are diffeomorphic, and thus typically dimension-preserving, models trained using the likelihood of the model. We use the SurVAE framework to construct dimension-reducing surjective flows via a new layer, known as the funnel. We demonstrate its efficacy on a variety of datasets, and show that it improves upon or matches the performance of existing flows while having a reduced latent-space dimension. The funnel layer can be constructed from a wide range of transformations, including restricted convolutions and feed-forward layers.
Markov chain Monte Carlo (MCMC) methods, such as Langevin dynamics, are valid for approximating intractable distributions. However, their usage is limited in the context of deep latent variable models owing to costly datapoint-wise sampling iterations and slow convergence. This paper proposes amortized Langevin dynamics (ALD), wherein datapoint-wise MCMC iterations are entirely replaced with updates of an encoder that maps observations into latent variables. This amortization enables efficient posterior sampling without datapoint-wise iterations. Despite its efficiency, we prove that ALD is valid as an MCMC algorithm, whose Markov chain has the target posterior as a stationary distribution under mild assumptions. Based on ALD, we also present a new deep latent variable model named the Langevin autoencoder (LAE). Interestingly, the LAE can be implemented by slightly modifying a traditional autoencoder. Using multiple synthetic datasets, we first validate that ALD can properly obtain samples from target posteriors. We also evaluate the LAE on the image generation task and show that it can outperform existing methods based on variational inference (e.g., variational autoencoders) and other MCMC-based methods in terms of test likelihood.
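For contrast, the per-datapoint update that ALD amortizes away is the generic unadjusted Langevin step sketched below; this is the standard primitive, not the ALD encoder update itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(z, grad_log_posterior, step_size):
    """One unadjusted Langevin update on a latent z (generic sketch).

    ALD replaces repeating this for every datapoint with updates to a
    shared encoder that maps observations to latents.
    """
    noise = rng.standard_normal(z.shape)
    return z + 0.5 * step_size * grad_log_posterior(z) + np.sqrt(step_size) * noise
```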
A normalizing flow models a complex probability density as an invertible transformation of a simple base density. Flows based on either coupling or autoregressive transforms both offer exact density evaluation and sampling, but rely on the parameterization of an easily invertible elementwise transformation, whose choice determines the flexibility of these models. Building upon recent work, we propose a fully-differentiable module based on monotonic rational-quadratic splines, which enhances the flexibility of both coupling and autoregressive transforms while retaining analytic invertibility. We demonstrate that neural spline flows improve density estimation, variational inference, and generative modeling of images.
We propose a deep learning framework for modeling complex high-dimensional densities called Non-linear Independent Component Estimation (NICE). It is based on the idea that a good representation is one in which the data has a distribution that is easy to model. For this purpose, a non-linear deterministic transformation of the data is learned that maps it to a latent space so as to make the transformed data conform to a factorized distribution, i.e., resulting in independent latent variables. We parametrize this transformation so that computing the determinant of the Jacobian and inverse Jacobian is trivial, yet we maintain the ability to learn complex non-linear transformations, via a composition of simple building blocks, each based on a deep neural network. The training criterion is simply the exact log-likelihood, which is tractable. Unbiased ancestral sampling is also easy. We show that this approach yields good generative models on four image datasets and can be used for inpainting.
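The building block is the additive coupling layer; a minimal sketch, where the partitioning of dimensions and the coupling function m are illustrative:

```python
def additive_coupling(x1, x2, m):
    """NICE additive coupling (sketch): y1 = x1, y2 = x2 + m(x1).

    The Jacobian is unit lower-triangular, so log|det| = 0, and m can be
    an arbitrary neural network since inversion never requires m's inverse.
    """
    return x1, x2 + m(x1)

def additive_coupling_inverse(y1, y2, m):
    """Exact inverse: x1 = y1, x2 = y2 - m(y1)."""
    return y1, y2 - m(y1)
```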
Vector-quantized variational autoencoders (VQ-VAE) are generative models based on discrete latent representations of the data, where the inputs are mapped to a finite set of learned embeddings. To generate new samples, an autoregressive prior distribution over the discrete states must be trained separately. This prior is generally very complex and leads to slow generation. In this work, we propose a new model to train the prior and the encoder/decoder networks simultaneously. We build a diffusion bridge between a continuous coded vector and a non-informative prior distribution. The latent discrete states are then given as random functions of these continuous vectors. We show that our model is competitive with an autoregressive prior on the mini-ImageNet and CIFAR datasets, and is efficient in both optimization and sampling. Our framework also extends the standard VQ-VAE and enables end-to-end training.
Probability distributions allow practitioners to uncover hidden structure in data and build models to solve supervised learning problems using limited data. The focus of this report is the variational autoencoder, a method for learning the probability distribution of large, complex datasets. The report provides a theoretical understanding of variational autoencoders and consolidates current research in the field. It is divided into multiple chapters: the first chapter introduces the problem, describes variational autoencoders, and identifies key research directions in the field. Chapters 2, 3, 4, and 5 dive into the details of each of the key research areas. Chapter 6 concludes the report and suggests directions for future work. A reader who has a basic understanding of machine learning but wants to learn about the general themes of machine learning research can benefit from the report. It explains the central ideas of learning probability distributions, what people have done to make this tractable, and goes into detail about how deep learning is currently applied. The report also serves as a gentle introduction for those looking to contribute to this subfield.
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.
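Real NVP generalizes additive coupling to an affine form; a sketch with illustrative scale and translation networks s and t:

```python
import numpy as np

def affine_coupling(x1, x2, s, t):
    """Real NVP affine coupling (sketch): y1 = x1, y2 = x2 * exp(s(x1)) + t(x1).

    log|det Jacobian| = sum(s(x1)), and inversion needs no inverse of s or t:
    x2 = (y2 - t(y1)) * exp(-s(y1)).
    """
    log_scale = s(x1)
    y2 = x2 * np.exp(log_scale) + t(x1)
    return x1, y2, np.sum(log_scale)
```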
The variational autoencoder (VAE) is one of the most widely used unsupervised machine learning models. Although the default choice of Gaussian distributions for both the prior and the posterior represents a mathematically convenient parameterization that often leads to competitive results, we show that it fails to model data with a latent hyperspherical structure. To address this issue, we propose using a von Mises-Fisher (vMF) distribution instead, leading to a hyperspherical latent space. Through a series of experiments, we show how such a hyperspherical VAE, or $\mathcal{S}$-VAE, is more suitable for capturing data with a hyperspherical latent structure, while outperforming a normal, $\mathcal{N}$-VAE, in low dimensions on other data types. Code: http://github.com/nicola-decao/s-vae-tf and https://github.com/nicola-decao/s-vae-pytorch