Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables. However, such amortized variational inference faces two challenges: (1) the limited posterior expressiveness of the fully-factorized Gaussian assumption and (2) the amortization error of the inference model. We present a novel approach that addresses both challenges. First, we focus on ReLU networks with Gaussian output and illustrate their connection to probabilistic PCA. Building on this observation, we derive an iterative algorithm that finds the mode of the posterior and apply a full-covariance Gaussian posterior approximation centered on the mode. Subsequently, we present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models. Based on the Laplace approximation of the latent variable posterior, VLAEs enhance the expressiveness of the posterior while reducing the amortization error. Empirical results on MNIST, Omniglot, Fashion-MNIST, SVHN and CIFAR10 show that the proposed approach significantly outperforms other recent amortized or iterative methods on the ReLU networks.
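As a rough illustration of the Laplace step this abstract builds on, the sketch below finds a posterior mode of p(z|x) by gradient ascent on the log joint and places a full-covariance Gaussian at that mode; the toy linear-Gaussian decoder, function names, and optimizer settings are illustrative assumptions, not the paper's implementation.

```python
import torch

def laplace_latent_posterior(log_joint, z_init, n_steps=200, lr=1e-2):
    """Generic Laplace approximation of p(z | x) for one data point.

    log_joint(z) returns log p(x, z) for a latent vector z.
    Returns the posterior mode and a full covariance given by the inverse
    Hessian of the negative log joint at the mode.
    """
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_steps):                       # gradient ascent on log p(x, z)
        opt.zero_grad()
        loss = -log_joint(z)
        loss.backward()
        opt.step()
    z_mode = z.detach()
    H = torch.autograd.functional.hessian(lambda v: -log_joint(v), z_mode)
    cov = torch.linalg.inv(H)                      # Sigma = H^{-1} at the mode
    return z_mode, cov

# Toy usage: a linear-Gaussian "decoder", where the Laplace fit is exact.
W = torch.randn(5, 2)
x = torch.randn(5)
log_joint = lambda z: (-0.5 * ((x - W @ z) ** 2).sum()   # Gaussian likelihood
                       - 0.5 * (z ** 2).sum())           # standard normal prior
mode, cov = laplace_latent_posterior(log_joint, torch.zeros(2))
```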
The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. In addition, we demonstrate that a novel type of variational autoencoder, coupled with IAF, is competitive with neural autoregressive models in terms of attained log-likelihood on natural images, while allowing significantly faster synthesis.
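A minimal sketch of one IAF transformation, assuming a MADE-style autoregressive network is supplied externally; the module name and the sigmoid gating are illustrative of the general recipe (the latent is rescaled and shifted autoregressively, and the log-determinant is a sum of log scales), not a reproduction of the authors' code.

```python
import torch
import torch.nn as nn

class IAFStep(nn.Module):
    """One inverse autoregressive flow transformation (sketch).

    `autoregressive_net` must be autoregressive (e.g. a MADE-style masked MLP)
    so that (mu_i, s_i) depend only on z_{<i}; that property makes the Jacobian
    triangular and the log-determinant a simple sum.
    """
    def __init__(self, autoregressive_net):
        super().__init__()
        self.net = autoregressive_net

    def forward(self, z, log_det):
        mu, s = self.net(z)                  # each of shape (batch, dim)
        sigma = torch.sigmoid(s)             # gate in (0, 1) for numerical stability
        z_new = sigma * z + (1.0 - sigma) * mu
        log_det = log_det + torch.log(sigma).sum(dim=-1)
        return z_new, log_det
```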
Markov chain Monte Carlo (MCMC) methods such as Langevin dynamics are effective at approximating intractable distributions. However, their use in the context of deep latent variable models is limited by expensive datapoint-wise sampling iterations and slow convergence. This paper proposes Amortized Langevin Dynamics (ALD), in which the datapoint-wise MCMC iterations are entirely replaced by updates of an encoder that maps observations to latent variables. This amortization enables efficient posterior sampling without datapoint-wise iterations. Despite its efficiency, we prove that ALD is a valid MCMC algorithm whose Markov chain has the target posterior as its stationary distribution under mild assumptions. Based on ALD, we also present a new deep latent variable model named the Langevin Autoencoder (LAE). Interestingly, the LAE can be implemented by slightly modifying a conventional autoencoder. Using multiple synthetic datasets, we first validate that ALD correctly obtains samples from the target posterior. We also evaluate the LAE on image generation tasks and show that it can outperform existing methods based on variational inference (e.g., the variational autoencoder) and other MCMC-based methods in terms of test likelihood.
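For reference, the datapoint-wise update that ALD amortizes is the standard unadjusted Langevin step; the sketch below shows that step under the assumption that a gradient of the log joint is available, with all names being illustrative.

```python
import numpy as np

def langevin_step(z, grad_log_joint, step_size=1e-3, rng=np.random.default_rng()):
    """One unadjusted Langevin update targeting p(z | x) ∝ p(x, z).

    grad_log_joint(z) returns ∇_z log p(x, z). ALD replaces running this per
    data point with gradient updates on an encoder that outputs z, but the
    underlying dynamics are the same.
    """
    noise = rng.standard_normal(z.shape)
    return z + step_size * grad_log_joint(z) + np.sqrt(2.0 * step_size) * noise
```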
The core principle of variational inference (VI) is to convert the statistical inference problem of computing complex posterior probability densities into a tractable optimization problem. This property makes VI faster than several sampling-based techniques. However, traditional VI algorithms do not scale to large datasets and cannot readily infer out-of-sample data points without re-running the optimization. Recent developments in the field, such as stochastic, black-box, and amortized VI, have helped address these issues. Generative modeling tasks nowadays make wide use of amortized VI for its efficiency and scalability, as it utilizes a parameterized function to learn the parameters of the approximate posterior density. In this paper, we review the mathematical foundations of various VI techniques to form the basis for understanding amortized VI. In addition, we provide an overview of recent trends that address the problems of amortized VI, such as the amortization gap, generalization issues, inconsistent representation learning, and posterior collapse. Finally, we analyze alternative divergence measures that improve VI optimization.
The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provide a clear improvement in performance and applicability of variational inference.
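As a concrete instance of a finite flow, the sketch below implements a single planar transformation and its log-Jacobian determinant; the parameterization and names follow the commonly used planar-flow form and are meant as an illustration, not the paper's exact construction.

```python
import numpy as np

def planar_flow(z, u, w, b):
    """Planar flow f(z) = z + u * h(w·z + b) with h = tanh (sketch).

    Returns the transformed sample and log|det ∂f/∂z|, which is what the
    flow-based variational bound needs. z, u, w: (dim,); b: scalar.
    """
    a = w @ z + b
    f = z + u * np.tanh(a)
    psi = (1.0 - np.tanh(a) ** 2) * w            # h'(a) * w
    log_det = np.log(np.abs(1.0 + u @ psi))
    return f, log_det

# Composing K such transformations reaches the "desired level of complexity":
# log q_K(z_K) = log q_0(z_0) - sum_k log|det_k|.
```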
Variational inference typically minimizes the "reverse" Kullback-Leibler (KL) divergence KL(q||p) from an approximate distribution q to the posterior p. Recent work studies the "forward" KL divergence KL(p||q), which, unlike the reverse KL, does not lead to variational approximations that underestimate uncertainty. This paper introduces Transport Score Climbing (TSC), a method that optimizes KL(p||q) by using Hamiltonian Monte Carlo (HMC) and a novel adaptive transport map. The transport map improves the trajectory of HMC by acting as a change of variables between the latent variable space and a warped space. TSC uses HMC samples to dynamically train the transport map while optimizing KL(p||q). TSC leverages a synergy in which better transport maps lead to better HMC sampling, which in turn leads to better transport maps. We demonstrate TSC on synthetic and real data. We find that TSC achieves competitive performance when training variational autoencoders on large-scale data.
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent an approximate posterior distribution and uses this for optimisation of a variational lower bound. We develop stochastic backpropagation, rules for gradient backpropagation through stochastic variables, and derive an algorithm that allows for joint optimisation of the parameters of both the generative and recognition models. We demonstrate on several real-world data sets that by using stochastic backpropagation and variational inference, we obtain models that are able to generate realistic samples of data, allow for accurate imputations of missing data, and provide a useful tool for high-dimensional data visualisation.
Representation learning has become a practical way of successfully building rich parametric encodings of large amounts of high-dimensional data in terms of reconstruction. When considering unsupervised tasks with test-train distribution shift, the probabilistic perspective helps address prediction over-confidence and poor calibration. However, directly introducing Bayesian inference remains a difficult problem for several reasons, namely the curse of dimensionality and intractability issues. The Laplace approximation (LA) offers a solution here, since a Gaussian approximation of the weight posterior can be constructed via a second-order Taylor expansion at certain locations in parameter space. In this work, we present a Bayesian autoencoder for unsupervised representation learning inspired by the LA. Our method implements iterative Laplace updates to obtain a novel variational lower bound on the autoencoder evidence. The heavy computational burden of second-order partial derivatives is avoided through approximations of the Hessian matrix. Empirically, we demonstrate the scalability and performance of the Laplace autoencoder by providing well-calibrated uncertainty for out-of-distribution detection, geodesics for differential geometry, and missing-data imputation.
Principal component analysis (PCA) minimizes the reconstruction error within a class of linear models of fixed component dimensionality. Probabilistic PCA adds a probabilistic structure by learning the probability distribution of the PCA latent space weights, thereby creating a generative model. Autoencoders (AE) minimize the reconstruction error within a class of nonlinear models of fixed latent space dimensionality and outperform PCA at fixed dimensionality. Here, we introduce the Probabilistic Autoencoder (PAE), which learns the probability distribution of the AE latent space weights using a normalizing flow (NF). The PAE is fast and easy to train and achieves small reconstruction errors, high sample quality, and good performance in downstream tasks. We compare the PAE to the Variational Autoencoder (VAE), showing that the PAE trains faster, reaches a lower reconstruction error, and produces good sample quality without requiring special tuning parameters or training procedures. We further demonstrate that the PAE is a powerful model for performing the downstream task of probabilistic image reconstruction in the context of Bayesian inference for inpainting and denoising applications. Finally, we identify the latent space density of the NF as a promising outlier detection metric.
In this work, we provide an exact-likelihood alternative to the variational training of generative autoencoders. We show that VAE-style autoencoders can be constructed using invertible layers, which offer a tractable exact likelihood without requiring any regularization terms. This is achieved while leaving complete freedom in the choice of encoder, decoder, and prior architectures, making our approach a drop-in replacement for the training of existing VAEs and VAE-style models. We call the resulting models autoencoders within flows (AEF), since the encoder, decoder, and prior are defined as individual layers of an overall invertible architecture. We show that the approach performs significantly better than structurally equivalent VAEs in terms of log-likelihood, sample quality, and denoising performance. In a broad sense, the main ambition of this work is to close the gap between the normalizing flow and autoencoder literature under the common framework of invertibility and exact maximum likelihood.
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
The Bayesian paradigm has the potential to solve core issues of deep neural networks, such as poor calibration and data inefficiency. Alas, scaling Bayesian inference to large weight spaces often requires restrictive approximations. In this work, we show that it suffices to perform inference over a small subset of model weights in order to obtain accurate predictive posteriors. The remaining weights are kept as point estimates. This subnetwork inference framework enables us to use expressive, otherwise intractable, posterior approximations over such subsets. In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: we first obtain a MAP estimate of all weights and then, using the linearized Laplace approximation, infer a full-covariance Gaussian posterior over a subnetwork. We propose a subnetwork selection strategy that aims to maximally preserve the model's predictive uncertainty. Empirically, we compare our approach against ensembles and against less expressive posterior approximations over the full network.
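A toy sketch of the subnetwork-Laplace idea under several simplifying assumptions (a logistic-regression "network", an arbitrary fixed subset instead of the paper's selection strategy, no linearization): most weights stay at their point estimate while a full-covariance Gaussian is fit over the chosen subset.

```python
import torch

# Toy setup: logistic-regression weights assumed already trained to a MAP
# estimate (here just zeros for illustration).
X = torch.randn(200, 10)
y = (torch.rand(200) < 0.5).float()
w_map = torch.zeros(10)

def neg_log_joint(w):
    logits = X @ w
    nll = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, y, reduction="sum")
    return nll + 0.5 * (w ** 2).sum()            # Gaussian prior over the weights

# Subnetwork inference: keep most weights at their point estimate and fit a
# full-covariance Gaussian over a small, illustrative subset of indices.
subset = torch.tensor([0, 3, 7])

def neg_log_joint_sub(w_sub):
    w = w_map.index_put((subset,), w_sub)        # splice the subnetwork weights in
    return neg_log_joint(w)

H_sub = torch.autograd.functional.hessian(neg_log_joint_sub, w_map[subset].clone())
posterior_cov = torch.linalg.inv(H_sub)          # Laplace covariance over the subnetwork
```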
The problem of detecting the Out-of-Distribution (OoD) inputs is of paramount importance for Deep Neural Networks. It has been previously shown that even Deep Generative Models that allow estimating the density of the inputs may not be reliable and often tend to make over-confident predictions for OoDs, assigning to them a higher density than to the in-distribution data. This over-confidence in a single model can be potentially mitigated with Bayesian inference over the model parameters, which takes epistemic uncertainty into account. This paper investigates three approaches to Bayesian inference: stochastic gradient Markov chain Monte Carlo, Bayes by Backpropagation, and Stochastic Weight Averaging-Gaussian. The inference is implemented over the weights of the deep neural networks that parameterize the likelihood of the Variational Autoencoder. We empirically evaluate the approaches against several benchmarks that are often used for OoD detection: estimation of the marginal likelihood utilizing sampled model ensemble, typicality test, disagreement score, and Watanabe-Akaike Information Criterion. Finally, we introduce two simple scores that demonstrate the state-of-the-art performance.
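Two of the listed ensemble-based scores are straightforward to compute from per-model log-likelihoods; the sketch below shows the ensemble marginal likelihood and WAIC, assuming log p(x|θ_m) has already been evaluated for each posterior weight sample θ_m (array layout and names are illustrative).

```python
import numpy as np

def ood_scores(log_liks):
    """OOD scores from an ensemble of likelihood estimates (sketch).

    log_liks: array of shape (n_models, n_inputs), where log_liks[m, i] is
    log p(x_i | theta_m) under the m-th posterior weight sample.
    Returns the ensemble marginal likelihood and the WAIC score per input;
    lower values suggest the input is more likely out-of-distribution.
    """
    # log (1/M) sum_m p(x | theta_m), computed stably via log-sum-exp.
    m = log_liks.max(axis=0)
    marginal = m + np.log(np.exp(log_liks - m).mean(axis=0))
    # WAIC penalizes inputs whose likelihood the ensemble disagrees on.
    waic = log_liks.mean(axis=0) - log_liks.var(axis=0)
    return marginal, waic
```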
This report explains, implements, and extends the work presented in "Tighter Variational Bounds are Not Necessarily Better" (T. Rainforth et al., 2018). We provide theoretical and empirical evidence that increasing the number of importance samples $k$ in the importance-weighted autoencoder (IWAE) (Burda et al., 2016) degrades the signal-to-noise ratio (SNR) of the gradient estimator for the inference network, thereby affecting the full learning process. In other words, even though increasing $k$ reduces the standard deviation of the gradients, it also reduces the magnitude of the true gradient faster, thereby increasing the relative variance of the gradient updates. Extensive experiments are performed to understand the importance of $k$. These experiments show that tighter variational bounds are beneficial for the generative network, whereas looser bounds are preferable for the inference network. With these insights, three methods are implemented and studied: the partially importance-weighted autoencoder (PIWAE), the multiply importance-weighted autoencoder (MIWAE), and the combination importance-weighted autoencoder (CIWAE). Each of these three methods includes IWAE as a special case but uses the importance weights differently to ensure a higher SNR of the gradient estimators. In our study and analysis, the efficacy of these algorithms is tested on multiple datasets such as MNIST and Omniglot. Finally, we demonstrate that the three presented variations of IWAE are able to learn approximate posterior distributions that are closer to the true posterior than those of IWAE, while matching the performance of the IWAE generative network or, in the case of PIWAE, potentially outperforming it.
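For context, the importance-weighted bound whose gradient SNR is analyzed is a log-mean of importance weights; a minimal sketch, assuming the $k$ log-weights log w_i = log p(x, z_i) - log q(z_i|x) are given:

```python
import numpy as np

def iwae_bound(log_w):
    """Importance-weighted bound log (1/k) sum_i w_i for one data point.

    log_w: array of shape (k,) of log importance weights. Larger k tightens
    the bound but, per the report, lowers the SNR of the inference-network
    gradient estimator.
    """
    k = log_w.shape[0]
    m = log_w.max()                                  # log-sum-exp for stability
    return m + np.log(np.exp(log_w - m).sum()) - np.log(k)
```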
Statistical models are central to machine learning, with broad applicability across a range of downstream tasks. The models are typically controlled by free parameters that are estimated from data by maximum likelihood estimation. However, when faced with real-world datasets, many models run into a critical problem: they are formulated in terms of fully observed data, whereas in practice datasets are plagued by missing data. The theory of statistical model estimation from incomplete data is conceptually similar to the estimation of latent variable models, where powerful tools such as variational inference (VI) exist. However, in contrast to standard latent variable models, parameter estimation with incomplete data often requires estimating exponentially many conditional distributions of the missing variables, hence making standard VI methods intractable. We address this gap by introducing Variational Gibbs Inference (VGI), a new general-purpose method to estimate the parameters of statistical models from incomplete data. We validate VGI on a set of synthetic and real-world estimation tasks, estimating important machine learning models such as VAEs and normalizing flows from incomplete data. The proposed method, while being general-purpose, achieves competitive or better performance than existing model-specific estimation methods.
Neural networks play a growing role in many scientific disciplines, including physics. Variational autoencoders (VAEs) are neural networks able to represent the essential information of a high-dimensional dataset in a low-dimensional latent space, with a probabilistic interpretation. In particular, the so-called encoder network, the first part of a VAE, which maps its input onto a position in latent space, additionally provides uncertainty information in terms of a variance around this position. In this work, an extension to the autoencoder architecture is introduced, the FisherNet. In this architecture, the latent space uncertainty is not generated using an additional information channel in the encoder but is derived from the decoder with the help of the Fisher information metric. This architecture has advantages from a theoretical point of view, as it provides a direct uncertainty quantification derived from the model and also accounts for uncertainty cross-correlations. We can show experimentally that the FisherNet produces more accurate data reconstructions than a comparable VAE and that its learning performance also scales noticeably better with the number of latent space dimensions.
We propose a deep learning framework for modeling complex high-dimensional densities called Non-linear Independent Component Estimation (NICE). It is based on the idea that a good representation is one in which the data has a distribution that is easy to model. For this purpose, a non-linear deterministic transformation of the data is learned that maps it to a latent space so as to make the transformed data conform to a factorized distribution, i.e., resulting in independent latent variables. We parametrize this transformation so that computing the determinant of the Jacobian and inverse Jacobian is trivial, yet we maintain the ability to learn complex non-linear transformations, via a composition of simple building blocks, each based on a deep neural network. The training criterion is simply the exact log-likelihood, which is tractable. Unbiased ancestral sampling is also easy. We show that this approach yields good generative models on four image datasets and can be used for inpainting.
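A minimal sketch of the additive coupling transformation described here, with `m` standing in for an arbitrary neural network; the unit-triangular Jacobian is what makes the log-determinant zero and the inverse trivial.

```python
import numpy as np

def additive_coupling(x, m, split):
    """One NICE additive coupling layer (sketch).

    x is split into (x1, x2); y1 = x1 and y2 = x2 + m(x1), where m can be an
    arbitrary function (e.g. a deep network). The Jacobian is triangular with
    unit diagonal, so log|det| = 0.
    """
    x1, x2 = x[:split], x[split:]
    return np.concatenate([x1, x2 + m(x1)])

def additive_coupling_inverse(y, m, split):
    """Exact inverse of the coupling layer: x2 = y2 - m(y1)."""
    y1, y2 = y[:split], y[split:]
    return np.concatenate([y1, y2 - m(y1)])
```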
An important property that deep neural networks should possess is the ability to perform robust out-of-distribution (OOD) detection on previously unseen data. This property is essential for safety purposes when deploying models in real-world applications. Recent studies show that probabilistic generative models can perform poorly on this task, which is surprising given that they seek to estimate the likelihood of the training data. To alleviate this issue, we propose an exponentially tilted Gaussian prior distribution for the variational autoencoder (VAE). With this prior, we are able to achieve state-of-the-art results using just the negative log-likelihood that the VAE naturally assigns, while being orders of magnitude faster than some competing methods. We also show that our model produces high-quality image samples that are sharper than those of a standard Gaussian VAE. The new prior distribution has a very simple implementation, using a Kullback-Leibler divergence that compares the difference between the length of a latent vector and the radius of a sphere.
A large amount of recent research has the far-reaching goal of finding training methods for deep neural networks that can serve as alternatives to backpropagation (BP). A prominent example is predictive coding (PC), which is a neuroscience-inspired method that performs inference on hierarchical Gaussian generative models. These methods, however, fail to keep up with modern neural networks, as they are unable to replicate the dynamics of complex layers and activation functions. In this work, we solve this problem by generalizing PC to arbitrary probability distributions, enabling the training of architectures, such as transformers, that are hard to approximate with only Gaussian assumptions. We perform three experimental analyses. First, we study the gap between our method and the standard formulation of PC on multiple toy examples. Second, we test the reconstruction quality on variational autoencoders, where our method reaches the same reconstruction quality as BP. Third, we show that our method allows us to train transformer networks and achieve a performance comparable with BP on conditional language models. More broadly, this method allows neuroscience-inspired learning to be applied to multiple domains, since the internal distributions can be flexibly adapted to the data, tasks, and architectures used.
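For orientation, the sketch below shows one inference step of the standard Gaussian predictive-coding formulation that the paper generalizes; the tanh nonlinearity, layer layout, and names are illustrative assumptions rather than the authors' generalized scheme.

```python
import numpy as np

def pc_inference_step(x, W, lr=0.1):
    """One inference step of standard (Gaussian) predictive coding (sketch).

    x: list of activity vectors, with x[0] clamped to the data; W: list of
    weight matrices, where layer l is predicted top-down as W[l] @ tanh(x[l+1]).
    The step descends the prediction-error energy
        F = sum_l ||x[l] - W[l] @ tanh(x[l+1])||^2 / 2
    with respect to the hidden-layer activities; this Gaussian special case is
    what the paper generalizes to arbitrary distributions.
    """
    L = len(x)
    errors = [x[l] - W[l] @ np.tanh(x[l + 1]) for l in range(L - 1)]
    for l in range(1, L - 1):
        # dF/dx[l]: local error minus the error propagated up from the layer below.
        grad = errors[l] - (1.0 - np.tanh(x[l]) ** 2) * (W[l - 1].T @ errors[l - 1])
        x[l] = x[l] - lr * grad
    # The deepest layer only receives the bottom-up error term.
    x[-1] = x[-1] + lr * (1.0 - np.tanh(x[-1]) ** 2) * (W[-1].T @ errors[-1])
    return x, errors
```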
How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contribution is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.
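A minimal sketch of the reparameterized single-sample bound described in the first contribution, assuming hypothetical `encoder` and `decoder` modules (including a `decoder.log_prob` helper that is not a standard API):

```python
import torch

def gaussian_elbo(x, encoder, decoder):
    """Single-sample ELBO estimate with the reparameterization trick (sketch).

    encoder(x) -> (mu, log_var) of the approximate posterior q(z|x);
    decoder.log_prob(x, z) -> log p(x|z). Both modules are illustrative
    stand-ins, not a specific library interface.
    """
    mu, log_var = encoder(x)
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * log_var) * eps       # z = mu + sigma * eps, differentiable in (mu, log_var)
    recon = decoder.log_prob(x, z)                # one-sample estimate of E_q[log p(x|z)]
    # Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl = 0.5 * (torch.exp(log_var) + mu ** 2 - 1.0 - log_var).sum(dim=-1)
    return recon - kl                             # maximize this lower bound
```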