How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.
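The reparameterization idea can be illustrated in a few lines. The sketch below is a minimal, hypothetical example (the `encoder` and `decoder` callables are assumed placeholder modules, not the paper's implementation) of the single-sample reparameterized ELBO estimate for a diagonal-Gaussian posterior and a standard-normal prior.

```python
# Minimal sketch of the reparameterized one-sample ELBO estimate.
# Assumptions: encoder(x) -> (mu, log_var) of q(z|x); decoder(x, z) -> log p(x|z) per datapoint.
import torch

def elbo_estimate(x, encoder, decoder):
    mu, log_var = encoder(x)                     # parameters of q(z|x) = N(mu, diag(exp(log_var)))
    eps = torch.randn_like(mu)                   # noise from a fixed base distribution
    z = mu + torch.exp(0.5 * log_var) * eps      # reparameterization: z = mu + sigma * eps
    log_px_given_z = decoder(x, z)               # reconstruction term, log p(x|z)
    # Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior
    kl = 0.5 * torch.sum(torch.exp(log_var) + mu**2 - 1.0 - log_var, dim=-1)
    return (log_px_given_z - kl).mean()          # maximize with any stochastic gradient method
```

Because the sampling noise comes from a fixed distribution, gradients flow through `mu` and `log_var`, which is what allows standard stochastic gradient methods to be used.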
The core principle of variational inference (VI) is to convert the statistical inference problem of computing complex posterior probability densities into a tractable optimization problem. This property makes VI faster than several sampling-based techniques. However, traditional VI algorithms do not scale to large datasets and cannot readily perform inference for new data points without re-running the optimization process. Recent developments in the field, such as stochastic, black-box, and amortized VI, have helped address these issues. Today, generative modeling tasks make wide use of amortized VI for its efficiency and scalability, since it uses a parameterized function to learn the parameters of the approximate posterior density. In this paper, we review the mathematical foundations of various VI techniques to form the basis for understanding amortized VI. In addition, we provide an overview of recent trends that address problems of amortized VI, such as the amortization gap, generalization issues, inconsistent representation learning, and posterior collapse. Finally, we analyze alternative divergence measures that improve VI optimization.
Approximating complex probability densities is a core problem in modern statistics. In this paper, we introduce the concept of variational inference (VI), a popular method in machine learning that uses optimization techniques to estimate complex probability densities. This property allows VI to converge faster than classical methods such as Markov chain Monte Carlo sampling. Conceptually, VI works by positing a family of probability density functions and then finding the member of that family closest to the actual probability density, typically using the Kullback-Leibler (KL) divergence as the optimization metric. We introduce the evidence lower bound to facilitate the approximation of the probability density, and we review the ideas behind mean-field variational inference. Finally, we discuss the application of VI to variational autoencoders (VAE) and VAE-generative adversarial networks (VAE-GAN). With this paper, we aim to explain the concept of VI and to assist future research with this method.
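For reference, the evidence lower bound (ELBO) mentioned above can be written, in standard notation rather than any particular paper's, as

$$ \mathrm{ELBO}(q) \;=\; \mathbb{E}_{q(z)}\big[\log p(x, z) - \log q(z)\big] \;=\; \log p(x) - \mathrm{KL}\big(q(z)\,\|\,p(z \mid x)\big) \;\le\; \log p(x). $$

Maximizing the ELBO over the chosen family therefore tightens a lower bound on the log evidence while driving $q$ toward the true posterior in KL divergence.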
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent an approximate posterior distribution and uses this for optimisation of a variational lower bound. We develop stochastic backpropagation (rules for gradient backpropagation through stochastic variables) and derive an algorithm that allows for joint optimisation of the parameters of both the generative and recognition models. We demonstrate on several real-world data sets that by using stochastic backpropagation and variational inference, we obtain models that are able to generate realistic samples of data, allow for accurate imputations of missing data, and provide a useful tool for high-dimensional data visualisation.
We investigate a local reparameterization technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the minibatch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.
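A minimal sketch of the local reparameterization idea for a single linear layer with a fully factorized Gaussian posterior over its weights is given below (shapes and interfaces are assumptions for illustration, not the paper's code): rather than sampling a weight matrix shared across the minibatch, the pre-activations are sampled directly, so the noise is independent per datapoint.

```python
# Local reparameterization for y = x @ W with W_ij ~ N(w_mu_ij, exp(w_log_var_ij)), independently.
import torch

def local_reparam_linear(x, w_mu, w_log_var):
    # x: (batch, in_features); w_mu, w_log_var: (in_features, out_features)
    act_mu = x @ w_mu                                  # mean of the pre-activations
    act_var = (x ** 2) @ torch.exp(w_log_var)          # variance of the pre-activations
    eps = torch.randn_like(act_mu)                     # independent noise for every datapoint
    return act_mu + torch.sqrt(act_var + 1e-8) * eps   # sampled pre-activations
```

Sampling in activation space rather than weight space is what makes the gradient variance shrink with the minibatch size while keeping the computation trivially parallelizable.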
Probability distributions allow practitioners to discover hidden structure in data and to build models that solve supervised learning problems with limited data. The focus of this report is the variational autoencoder, a method for learning the probability distribution of large, complex datasets. The report provides a theoretical understanding of variational autoencoders and consolidates the current research in the field. The report is divided into multiple chapters: the first chapter introduces the problem, describes the variational autoencoder, and identifies key research directions in the field. Chapters 2, 3, 4, and 5 go into the details of each of these key research areas. Chapter 6 concludes the report and suggests directions for future work. Readers who have a basic grasp of machine learning but want to learn about the general themes of machine learning research can benefit from the report. The report explains the central ideas behind learning probability distributions, what researchers have done to make this tractable, and goes into detail about how deep learning is currently applied. The report also provides a gentle introduction for those who wish to contribute to this subfield.
This report explains, implements, and extends the work presented in "Tighter Variational Bounds are Not Necessarily Better" (T. Rainforth et al., 2018). We provide theoretical and empirical evidence that increasing the number of importance samples $k$ in the importance-weighted autoencoder (IWAE) (Burda et al., 2016) degrades the signal-to-noise ratio (SNR) of the gradient estimator for the inference network, thereby affecting the full learning process. In other words, even though increasing $k$ reduces the standard deviation of the gradient, it also reduces the magnitude of the true gradient faster, thereby increasing the relative variance of the gradient updates. Extensive experiments are performed to understand the importance of $k$. These experiments suggest that tighter variational bounds are beneficial for the generative network, whereas looser bounds are preferable for the inference network. With these insights, three methods are implemented and studied: the partially importance-weighted autoencoder (PIWAE), the multiply importance-weighted autoencoder (MIWAE), and the combined importance-weighted autoencoder (CIWAE). Each of these three methods includes IWAE as a special case but uses the importance weights differently to ensure a higher SNR of the gradient estimators. In our study and analysis, the efficacy of these algorithms is tested on multiple datasets such as MNIST and Omniglot. Finally, we demonstrate that the three presented IWAE variants are able to produce approximate posterior distributions that are much closer to the true posterior than those of IWAE, while matching the performance of the IWAE generative network or, in the case of PIWAE, potentially outperforming it.
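The $k$-sample bound under discussion can be sketched as follows (a rough illustration with assumed `encoder`/`decoder` interfaces, not the report's code): the IWAE objective is $\log \frac{1}{k}\sum_{i=1}^{k} w_i$ with importance weights $w_i = p(x, z_i)/q(z_i \mid x)$.

```python
# Sketch of the k-sample IWAE bound. Assumptions: encoder(x) -> (mu, log_var) of q(z|x);
# decoder(x, z) -> log p(x|z) with shape (k, batch) for z of shape (k, batch, dim).
import math
import torch

def iwae_bound(x, encoder, decoder, k):
    mu, log_var = encoder(x)
    std = torch.exp(0.5 * log_var)
    z = mu + std * torch.randn(k, *mu.shape)                      # k importance samples per datapoint
    log_q = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)
    log_prior = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(-1)
    log_w = decoder(x, z) + log_prior - log_q                     # log importance weights
    return (torch.logsumexp(log_w, dim=0) - math.log(k)).mean()   # log-mean-exp of the weights
```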
The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.
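One commonly used flow of this kind is the planar flow, sketched below (standard form from the normalizing-flows literature; the parameter handling is simplified for illustration): each step applies an invertible map and contributes the log-determinant of its Jacobian to the density of the transformed sample.

```python
# One planar-flow step f(z) = z + u * tanh(w^T z + b), with its log |det Jacobian|.
import torch

def planar_flow_step(z, u, w, b):
    # z: (batch, dim); u, w: (dim,); b: scalar. In practice u is constrained so that
    # w^T u >= -1, which keeps the transformation invertible.
    lin = z @ w + b                                     # (batch,)
    f_z = z + u * torch.tanh(lin).unsqueeze(-1)         # transformed samples, (batch, dim)
    psi = (1.0 - torch.tanh(lin) ** 2).unsqueeze(-1) * w
    log_det = torch.log(torch.abs(1.0 + psi @ u) + 1e-8)
    return f_z, log_det
```

Stacking several such steps and subtracting the accumulated log-determinants from the base log-density gives the log-density of the final, more complex approximate posterior.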
Variational inference has become a widely used method to approximate posteriors in complex latent variables models. However, deriving a variational inference algorithm generally requires significant model-specific analysis, and these efforts can hinder and deter us from quickly developing and exploring a variety of models for a problem at hand. In this paper, we present a "black box" variational inference algorithm, one that can be quickly applied to many models with little additional derivation. Our method is based on a stochastic optimization of the variational objective where the noisy gradient is computed from Monte Carlo samples from the variational distribution. We develop a number of methods to reduce the variance of the gradient, always maintaining the criterion that we want to avoid difficult model-based derivations. We evaluate our method against the corresponding black box sampling based methods. We find that our method reaches better predictive likelihoods much faster than sampling methods. Finally, we demonstrate that Black Box Variational Inference lets us easily explore a wide space of models by quickly constructing and evaluating several models of longitudinal healthcare data.
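The noisy gradient described above is the score-function (REINFORCE) estimator. The sketch below builds a surrogate objective whose gradient with respect to the variational parameters equals that estimator; `q_sample`, `q_log_prob`, and `log_joint` are hypothetical callables, not the paper's interface.

```python
# Black-box VI surrogate loss: its gradient w.r.t. lam is a Monte Carlo estimate of -grad ELBO,
# using only samples from q and evaluations of log p(x, z), with no model-specific derivations.
import torch

def bbvi_surrogate_loss(x, log_joint, q_sample, q_log_prob, lam, num_samples=16):
    loss = 0.0
    for _ in range(num_samples):
        z = q_sample(lam).detach()                        # samples treated as non-differentiable
        log_q = q_log_prob(z, lam)                        # differentiable in the variational parameters
        learning_signal = (log_joint(x, z) - log_q).detach()
        loss = loss - log_q * learning_signal             # gradient: -(grad log q) * learning signal
    return loss / num_samples
```

Because the learning signal is detached, differentiating the surrogate reproduces the score-function estimator; the variance-reduction techniques mentioned in the abstract adjust this learning signal without requiring model-specific derivations.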
Auto-encoding variational Bayes (AEVB) is a powerful and general algorithm for fitting latent variable models (a promising direction for unsupervised learning), and is well known for training the variational autoencoder (VAE). In this tutorial, we focus on motivating AEVB from the classical expectation-maximization (EM) algorithm, rather than from deterministic autoencoders. Though natural and somewhat self-evident, the connection between EM and AEVB is not emphasized in the recent deep learning literature, and we believe that highlighting this connection can improve the community's understanding of AEVB. In particular, we find it helpful to view (1) optimizing the evidence lower bound (ELBO) with respect to the inference parameters as an approximate E-step, and (2) optimizing the ELBO with respect to the generative parameters as an approximate M-step; performing both simultaneously, as in AEVB, then amounts to tightening and pushing up the ELBO at the same time. We discuss how the approximate E-step can be interpreted as performing variational inference. Important concepts such as amortization and the reparametrization trick are discussed in detail. Finally, we derive from scratch the AEVB training procedures for a non-deep and several deep latent variable models, including the VAE, the conditional VAE, the Gaussian mixture VAE, and the variational RNN. We hope that readers will recognize AEVB as a general algorithm that can be used to fit a wide range of latent variable models (not just the VAE), and will apply AEVB to such models that arise in their own fields of research. PyTorch code for all the included models is publicly available.
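The correspondence described above can be made concrete with a schematic sketch (hypothetical `elbo` callable and optimizers over the inference parameters phi and the generative parameters theta; this is not the tutorial's code):

```python
# EM-style alternation versus the simultaneous AEVB update on the same ELBO objective.

def em_style_step(x, elbo, phi_optimizer, theta_optimizer):
    # Approximate E-step: tighten the ELBO w.r.t. the inference parameters phi.
    phi_optimizer.zero_grad()
    (-elbo(x)).backward()
    phi_optimizer.step()
    # Approximate M-step: push up the ELBO w.r.t. the generative parameters theta.
    theta_optimizer.zero_grad()
    (-elbo(x)).backward()
    theta_optimizer.step()

def aevb_step(x, elbo, joint_optimizer):
    # AEVB: one simultaneous gradient step on phi and theta, tightening and pushing up the ELBO at once.
    joint_optimizer.zero_grad()
    (-elbo(x)).backward()
    joint_optimizer.step()
```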
Variational inference uses optimization, rather than integration, to approximate the marginal likelihood, and thereby the posterior, in a Bayesian model. Thanks to advances in computational scalability made in the last decade, variational inference is now the preferred choice for many high-dimensional models and large datasets. This tutorial introduces variational inference from the parametric perspective that dominates these recent developments, in contrast to the mean-field perspective commonly found in other introductory texts.
Annealed importance sampling (AIS) is a popular algorithm for estimating the intractable marginal likelihood of deep generative models. Although AIS is guaranteed to provide an unbiased estimate for any set of hyperparameters, common implementations rely on simple heuristics, such as geometric-average bridging distributions between the initial and target distributions, which can hurt estimation performance when the computational budget is limited. Optimizing fully parametric AIS remains challenging because of the Metropolis-Hastings (MH) correction steps used in the Markov transitions. We present a parametric AIS procedure with flexible intermediate distributions and optimize the bridging distributions so that fewer sampling steps are needed. We introduce a reparameterization method that allows us to optimize the parameters of the sequence of distributions and of the Markov transitions, and that applies to a large class of Markov kernels with MH corrections. We evaluate the performance of the optimized AIS for marginal likelihood estimation of deep generative models and compare it with other estimators.
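For context, the geometric bridging heuristic mentioned above interpolates between the initial distribution $p_0$ and the target $p_T$ along a fixed schedule (standard AIS notation, not the paper's learned parametric construction):

$$ \pi_t(z) \;\propto\; p_0(z)^{\,1-\beta_t}\, p_T(z)^{\,\beta_t}, \qquad 0 = \beta_0 < \beta_1 < \dots < \beta_T = 1. $$

The proposal in the abstract replaces this fixed interpolation with flexible, optimized intermediate distributions.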
The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generalisation from small labelled data sets to large unlabelled ones. Generative approaches have thus far been either inflexible, inefficient or non-scalable. We show that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.
The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. In addition, we demonstrate that a novel type of variational autoencoder, coupled with IAF, is competitive with neural autoregressive models in terms of attained log-likelihood on natural images, while allowing significantly faster synthesis.
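A single IAF step can be sketched as follows (the `autoregressive_net` is an assumed interface, e.g. a MADE-style network whose $i$-th outputs depend only on $z_{<i}$; this is an illustrative form, not necessarily the paper's exact parameterization):

```python
# One inverse autoregressive flow step: a gated update whose Jacobian is triangular,
# so its log-determinant is just the sum of the log gates.
import torch

def iaf_step(z, autoregressive_net):
    m, s = autoregressive_net(z)              # outputs at position i depend only on z_{<i}
    gate = torch.sigmoid(s)
    z_new = gate * z + (1.0 - gate) * m       # numerically stable gated transformation
    log_det = torch.log(gate).sum(dim=-1)     # log |det dz_new/dz| = sum_i log gate_i
    return z_new, log_det
```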
One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this paper, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find the member of that family which is close to the target. Closeness is measured by Kullback-Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this paper is to catalyze statistical research on this class of algorithms.
Variational inference typically minimizes the "reverse" Kullback-Leibler (KL) divergence KL(q || p) from the approximate distribution q to the posterior p. Recent work has studied the "forward" KL divergence KL(p || q), which, unlike the reverse KL, does not lead to variational approximations that underestimate uncertainty. This paper introduces Transport Score Climbing (TSC), a method that optimizes KL(p || q) by using Hamiltonian Monte Carlo (HMC) and a novel adaptive transport map. The transport map improves the trajectory of HMC by acting as a change of variables between the latent variable space and a warped space. TSC uses HMC samples to dynamically train the transport map while optimizing KL(p || q). TSC leverages a synergy in which better transport maps lead to better HMC sampling, which in turn leads to better transport maps. We demonstrate TSC on synthetic and real data. We find that TSC achieves competitive performance when training variational autoencoders on large-scale data.
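For reference, the two objectives contrasted above are, in standard notation,

$$ \mathrm{KL}(q \,\|\, p) = \mathbb{E}_{q(z)}\!\left[\log \frac{q(z)}{p(z \mid x)}\right], \qquad \mathrm{KL}(p \,\|\, q) = \mathbb{E}_{p(z \mid x)}\!\left[\log \frac{p(z \mid x)}{q(z)}\right]. $$

The forward direction takes the expectation under the posterior itself, which is why optimizing it requires posterior samples (here supplied by HMC) and why it avoids the uncertainty-underestimating behaviour of the reverse direction.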
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
Markov chain Monte Carlo (MCMC) methods, such as Langevin dynamics, are effective at approximating intractable distributions. However, their use in the context of deep latent variable models is limited by expensive datapoint-wise sampling iterations and slow convergence. This paper proposes amortized Langevin dynamics (ALD), in which the datapoint-wise MCMC iterations are entirely replaced by updates of an encoder that maps observations to latent variables. This amortization enables efficient posterior sampling without datapoint-wise iterations. Despite its efficiency, we prove that ALD is valid as an MCMC algorithm, whose Markov chain has the target posterior as its stationary distribution under mild assumptions. Based on ALD, we also present a new deep latent variable model named the Langevin autoencoder (LAE). Interestingly, the LAE can be implemented by slightly modifying a traditional autoencoder. Using multiple synthetic datasets, we first validate that ALD correctly obtains samples from the target posteriors. We also evaluate the LAE on image generation tasks and show that our LAE can outperform existing methods based on variational inference (such as the variational autoencoder) and other MCMC-based methods in terms of test likelihood.
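For context, a single (unadjusted) Langevin step on the latent variables, written in standard notation rather than the paper's amortized, encoder-based form, is

$$ z_{t+1} \;=\; z_t + \frac{\eta}{2}\,\nabla_{z}\log p(x, z_t) + \sqrt{\eta}\;\epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, I), $$

where $\eta$ is the step size; ALD replaces these per-datapoint updates with updates to the encoder that produces $z$.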
Statistical models are central to machine learning, with broad applicability across a range of downstream tasks. The models are typically controlled by free parameters that are estimated from data by maximum likelihood estimation. However, when faced with real-world datasets, many models run into a key issue: they are formulated in terms of fully observed data, whereas in practice datasets are plagued by missing data. The theory of statistical model estimation from incomplete data is conceptually similar to the estimation of latent variable models, for which powerful tools such as variational inference (VI) exist. However, in contrast to standard latent variable models, parameter estimation with incomplete data often requires estimating exponentially many conditional distributions of the missing variables, hence making standard VI methods intractable. We address this gap by introducing variational Gibbs inference (VGI), a new general-purpose method to estimate the parameters of statistical models from incomplete data. We validate VGI on a set of synthetic and real-world estimation tasks, estimating important machine learning models, VAEs and normalizing flows, from incomplete data. The proposed method, while general-purpose, achieves competitive or better performance than existing model-specific estimation methods.
Bayesian structure learning allows inferring Bayesian network structure from data while reasoning about epistemic uncertainty, a key element towards enabling active causal discovery and designing interventions in real-world systems. In this work, we propose a general, fully differentiable framework for Bayesian structure learning (DiBS) that operates in the continuous space of a latent probabilistic graph representation. In contrast to existing work, DiBS is agnostic to the form of the local conditional distributions and allows for joint posterior inference over both the graph structure and the conditional distribution parameters. This makes our formulation directly applicable to posterior inference in complex Bayesian network models, e.g., with nonlinear dependencies encoded by neural networks. Using DiBS, we devise an efficient, general-purpose variational inference method for approximating distributions over structural models. In evaluations on simulated and real-world data, our method significantly outperforms related approaches to joint posterior inference.