Restricted Boltzmann Machines (RBMs) are probabilistic generative models that can in principle be trained by maximum likelihood, but in practice are usually trained by an approximate algorithm called Contrastive Divergence (CD). In general, a CD-k algorithm estimates an average with respect to the model distribution using a sample obtained from a k-step Markov chain Monte Carlo algorithm (e.g., block Gibbs sampling) starting from some initial configuration. Choices of k typically range from 1 to 100. This technical report explores whether it is possible to leverage a simple approximate sampling algorithm together with a modified version of CD in order to train an RBM with k = 0. As usual, the method is illustrated on MNIST.
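For concreteness, the following is a minimal NumPy sketch of the standard CD-k update for a binary RBM that the report takes as its starting point; the names (`cd_k_update`, `W`, `b_v`, `b_h`) are illustrative, not from the report itself.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_bernoulli(p, rng):
    return (rng.random(p.shape) < p).astype(np.float64)

def cd_k_update(v0, W, b_v, b_h, k, lr, rng):
    """One CD-k parameter update for a binary RBM.

    v0: batch of training vectors, shape (batch, n_visible).
    W: weights, shape (n_visible, n_hidden); b_v, b_h: biases.
    Returns the updated (W, b_v, b_h).
    """
    # Positive phase: hidden probabilities given the data.
    ph0 = sigmoid(v0 @ W + b_h)

    # Negative phase: k steps of block Gibbs sampling started at the data.
    v, ph = v0, ph0
    for _ in range(k):
        h = sample_bernoulli(ph, rng)
        v = sample_bernoulli(sigmoid(h @ W.T + b_v), rng)
        ph = sigmoid(v @ W + b_h)

    # Gradient estimate: data statistics minus chain statistics.
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - v.T @ ph) / batch
    b_v += lr * (v0 - v).mean(axis=0)
    b_h += lr * (ph0 - ph).mean(axis=0)
    return W, b_v, b_h
```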
We present energy-based generative flow networks (EB-GFN), a novel probabilistic modeling algorithm for high-dimensional discrete data. Building upon the theory of generative flow networks (GFlowNets), we model the generation process by stochastic data-construction policies, thereby amortizing expensive MCMC exploration into fixed actions sampled from a GFlowNet. We show how GFlowNets can approximately perform large-block Gibbs sampling to mix between modes. We propose a framework to jointly train a GFlowNet with an energy function, so that the GFlowNet learns to sample from the energy distribution while the energy is learned with an approximate MLE objective using negative samples from the GFlowNet. We demonstrate the effectiveness of EB-GFN on various probabilistic modeling tasks. Code is publicly available at https://github.com/zdhnarsil/eb_gfn.
This is a tutorial and survey paper on Boltzmann Machines (BM), Restricted Boltzmann Machines (RBM), and Deep Belief Networks (DBN). We start with the required background on probabilistic graphical models, Markov random fields, Gibbs sampling, statistical physics, the Ising model, and Hopfield networks. Then, we introduce the structures of BM and RBM. We explain the conditional distributions of the visible and hidden variables, Gibbs sampling in RBM for generating variables, training BM and RBM by maximum likelihood estimation, and contrastive divergence. We then discuss different possible discrete and continuous distributions for the variables. We introduce the conditional RBM and how it is trained. Finally, we explain the deep belief network as a stack of RBM models. This paper on Boltzmann machines can be useful in various fields including data science, statistics, neural computation, and statistical physics.
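For reference, the conditional distributions and contrastive-divergence update that the tutorial derives take the standard form (notation assumed here: visible units $v_i$, hidden units $h_j$, weights $w_{ij}$, biases $a_i, b_j$, and $\sigma$ the logistic function):

$$ p(h_j = 1 \mid \mathbf{v}) = \sigma\Big(b_j + \sum_i w_{ij} v_i\Big), \qquad p(v_i = 1 \mid \mathbf{h}) = \sigma\Big(a_i + \sum_j w_{ij} h_j\Big), $$

$$ \Delta w_{ij} \;\propto\; \langle v_i h_j \rangle_{\text{data}} \;-\; \langle v_i h_j \rangle_{\text{model}}. $$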
We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
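A minimal sketch of the greedy layer-wise procedure, assuming an RBM training routine such as the CD-k sketch given earlier; the mean-field propagation of activations between layers and all names are illustrative choices, not the paper's reference implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_dbn(data, layer_sizes, train_rbm):
    """Greedy layer-wise pretraining of a DBN.

    data: (n_samples, n_visible) training matrix.
    layer_sizes: hidden-layer widths, e.g. [500, 500, 2000].
    train_rbm: function(visible_data, n_hidden) -> (W, b_v, b_h).
    """
    rbms = []
    x = data
    for n_hidden in layer_sizes:
        W, b_v, b_h = train_rbm(x, n_hidden)   # fit one RBM on the current input
        rbms.append((W, b_v, b_h))
        x = sigmoid(x @ W + b_h)               # propagate mean activations upward
    return rbms
```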
Perturb-and-MAP offers an elegant approach to approximately sample from an energy-based model (EBM) by computing the maximum-a-posteriori (MAP) configuration of a perturbed version of the model. Sampling in turn enables learning. However, this line of research has been hindered by the general intractability of the MAP computation. Very few works venture outside tractable models, and when they do, they use linear programming approaches, which as we show, have several limitations. In this work, we present perturb-and-max-product (PMP), a parallel and scalable mechanism for sampling and learning in discrete EBMs. Models can be arbitrary as long as they are built using tractable factors. We show that (a) for Ising models, PMP is orders of magnitude faster than Gibbs and Gibbs-with-Gradients (GWG) at learning and generating samples of similar or better quality; (b) PMP is able to learn and sample from RBMs; (c) in a large, entangled graphical model in which Gibbs and GWG fail to mix, PMP succeeds.
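The sketch below illustrates the perturb-and-MAP principle in the simplest possible setting, a fully factorized model, where the MAP of the Gumbel-perturbed potentials is computable exactly and recovers the Gumbel-max trick; PMP itself replaces this exact argmax with parallel max-product on a factor graph, which is not reproduced here.

```python
import numpy as np

def perturb_and_map_factorized(logits, rng):
    """Sample from independent categorical variables via perturb-and-MAP.

    logits: (n_vars, n_states) unnormalized log-potentials.
    Adding i.i.d. Gumbel noise to the potentials and taking the MAP
    (here a per-variable argmax) yields exact samples.
    """
    gumbel = -np.log(-np.log(rng.random(logits.shape)))
    return np.argmax(logits + gumbel, axis=1)

rng = np.random.default_rng(0)
logits = np.log(np.array([[0.1, 0.9], [0.5, 0.5], [0.8, 0.2]]))
print(perturb_and_map_factorized(logits, rng))  # one joint sample
```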
Training Restricted Boltzmann Machines (RBMs) has been challenging for a long time due to the intractability of the log-likelihood gradient. Over the past decades, many works have proposed more or less successful training recipes without studying the crucial quantity of the problem: the mixing time, i.e. the number of Monte Carlo iterations needed to sample new configurations from the model. In this work, we show that this mixing time plays a crucial role in the dynamics and stability of the trained model, and that RBMs operate in two well-defined regimes, namely equilibrium and out-of-equilibrium, depending on the interplay between the mixing time of the model and the number of steps, $k$, used to approximate the gradient. We further show that this mixing time increases with learning, which often implies a transition from one regime to another as soon as $k$ becomes smaller than this time. In particular, we show that using the popular $k$-step (persistent) contrastive divergence approaches with small $k$, the dynamics of the learned model are extremely slow and often dominated by strong out-of-equilibrium effects. On the contrary, RBMs trained in equilibrium display faster dynamics and a smooth convergence towards dataset-like configurations during sampling. Finally, we discuss how to exploit both regimes in practice, depending on the task one aims to fulfill: (i) a short $k$ can be used to generate convincing samples in short learning times, (ii) a large (or increasingly large) $k$ is needed to learn the correct equilibrium distribution of the RBM. The existence of these two operational regimes appears to be a general property of energy-based models trained via likelihood maximization.
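The persistent variant discussed here differs from plain CD-k only in where the negative chains start; a minimal sketch, reusing the RBM notation from the first abstract's example, where the interaction between $k$ and the growing mixing time is what determines the regime:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pcd_update(v0, chains, W, b_v, b_h, k, lr, rng):
    """One PCD-k update. Unlike CD-k, the negative chains persist
    across parameter updates instead of restarting at the data, so
    Gibbs steps accumulate over training."""
    ph0 = sigmoid(v0 @ W + b_h)            # positive-phase statistics
    v = chains
    for _ in range(k):                     # k steps of block Gibbs on the chains
        ph = sigmoid(v @ W + b_h)
        h = (rng.random(ph.shape) < ph).astype(float)
        pv = sigmoid(h @ W.T + b_v)
        v = (rng.random(pv.shape) < pv).astype(float)
    ph_neg = sigmoid(v @ W + b_h)          # negative-phase statistics
    W += lr * (v0.T @ ph0 / len(v0) - v.T @ ph_neg / len(v))
    b_v += lr * (v0.mean(axis=0) - v.mean(axis=0))
    b_h += lr * (ph0.mean(axis=0) - ph_neg.mean(axis=0))
    return W, b_v, b_h, v                  # v is the persistent state for the next call
```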
It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining individual "expert" models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule ensures that the latent variables of different experts are conditionally independent when given the data. A product of experts (PoE) is therefore an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary. Training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule. Fortunately, a PoE can be trained using a different objective function called "contrastive divergence" whose derivatives with regard to the parameters can be approximated accurately and efficiently. Examples are presented of contrastive divergence learning using several types of expert on several types of data.
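In the notation assumed here ($p_0$ the data distribution, $p_k$ the distribution after $k$ sweeps of Gibbs sampling, $p_\infty$ the equilibrium distribution of the PoE), the contrastive divergence objective and the resulting per-expert gradient approximation read:

$$ \mathrm{CD}_k \;=\; \mathrm{KL}(p_0 \,\|\, p_\infty) \;-\; \mathrm{KL}(p_k \,\|\, p_\infty), $$

$$ \frac{\partial \log p(\mathbf{d}\mid\theta)}{\partial \theta_m} \;\approx\; \Big\langle \frac{\partial \log p_m(\mathbf{d}\mid\theta_m)}{\partial \theta_m} \Big\rangle_{p_0} \;-\; \Big\langle \frac{\partial \log p_m(\mathbf{c}\mid\theta_m)}{\partial \theta_m} \Big\rangle_{p_k}, $$

where the second expectation is taken over reconstructions $\mathbf{c}$, avoiding the intractable derivative of the renormalization term.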
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
Energy-based models (EBMs) have recently been successful in representing complex distributions of small images. However, sampling from them requires expensive Markov chain Monte Carlo (MCMC) iterations that mix slowly in high-dimensional pixel space. Unlike EBMs, variational autoencoders (VAEs) generate samples quickly and are equipped with a latent space that enables fast traversal of the data manifold. However, VAEs tend to assign high probability density to regions in data space outside the actual data distribution and often fail at generating sharp images. In this paper, we propose VAEBM, a symbiotic composition of a VAE and an EBM that offers the best of both worlds. VAEBM captures the overall mode structure of the data distribution using a state-of-the-art VAE and relies on its EBM component to explicitly exclude non-data-like regions from the model and refine the image samples. Moreover, the VAE component in VAEBM allows us to speed up MCMC updates by reparameterizing them in the VAE's latent space. Our experimental results show that VAEBM outperforms state-of-the-art VAEs and EBMs in generative quality on several benchmark image datasets by a large margin. It can generate high-quality images as large as 256×256 pixels with short MCMC chains. We also demonstrate that VAEBM provides complete mode coverage and performs well in out-of-distribution detection. The source code is available at https://github.com/nvlabs/vaebm
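The composition described above amounts to multiplying the VAE marginal by a learned Boltzmann factor; as I read the construction (symbols here are assumptions, not quoted from the paper), the VAEBM density is

$$ h_{\psi,\theta}(\mathbf{x}) \;=\; \frac{1}{Z_{\psi,\theta}}\, p_\theta(\mathbf{x})\, e^{-E_\psi(\mathbf{x})}, $$

where $p_\theta$ is the pretrained VAE marginal and $E_\psi$ the energy network that down-weights non-data-like regions.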
We propose the discrete Langevin proposal (DLP), a simple and scalable gradient-based proposal for sampling complex high-dimensional discrete distributions. In contrast to Gibbs-sampling-based methods, DLP is able to update all coordinates in parallel in a single step, and the magnitude of the change is controlled by a stepsize. This allows cheap and efficient exploration in the space of high-dimensional and strongly correlated variables. We prove the efficiency of DLP by showing that the asymptotic bias of its stationary distribution is zero for log-quadratic distributions and is small for distributions that are close to log-quadratic. With DLP, we develop several variants of sampling algorithms, including unadjusted, Metropolis-adjusted, stochastic, and preconditioned versions. DLP outperforms many popular alternatives on a wide variety of tasks, including Ising models, restricted Boltzmann machines, deep energy-based models, binary neural networks, and language generation.
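A sketch of the unadjusted binary-variable version as I read the construction (the helper names and the stepsize symbol `alpha` are assumptions); the proposal factorizes over coordinates, which is what permits the fully parallel update:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dlp_step(x, grad_f, alpha, rng):
    """One unadjusted discrete Langevin proposal step on x in {0,1}^n.

    grad_f: gradient of the log-probability f at x, treating x as continuous.
    Each coordinate flips independently with probability
    sigmoid(0.5 * grad * (1 - 2x) - 1 / (2 * alpha)), so all coordinates
    update in parallel and alpha controls how many flips occur per step.
    """
    flip_logit = 0.5 * grad_f * (1.0 - 2.0 * x) - 1.0 / (2.0 * alpha)
    flips = (rng.random(x.shape) < sigmoid(flip_logit)).astype(x.dtype)
    return np.abs(x - flips)  # XOR with the flip mask
```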
How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.
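The reparameterization in the first contribution can be stated in a few lines; a minimal sketch for a diagonal-Gaussian posterior, with function names illustrative rather than taken from the paper:

```python
import numpy as np

def reparameterized_sample(mu, log_var, rng):
    """Draw z ~ N(mu, diag(exp(log_var))) as a deterministic,
    differentiable function of (mu, log_var) plus exogenous noise, so
    gradients of a Monte Carlo estimate of the variational lower bound
    can flow into the variational parameters."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl_to_standard_normal(mu, log_var):
    """Analytic KL(q(z|x) || N(0, I)) term of the variational lower bound."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```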
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent an approximate posterior distribution and uses this for optimisation of a variational lower bound. We develop stochastic backpropagation (rules for gradient backpropagation through stochastic variables) and derive an algorithm that allows for joint optimisation of the parameters of both the generative and recognition models. We demonstrate on several real-world data sets that by using stochastic backpropagation and variational inference, we obtain models that are able to generate realistic samples of data, allow for accurate imputations of missing data, and provide a useful tool for high-dimensional data visualisation.
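The Gaussian gradient identities commonly credited to Bonnet and Price underlie such stochastic backpropagation rules; in the notation assumed here, for a sufficiently smooth $f$ and $\xi \sim \mathcal{N}(\mu, C)$:

$$ \nabla_{\mu_i}\, \mathbb{E}_{\mathcal{N}(\xi \mid \mu, C)}\big[f(\xi)\big] = \mathbb{E}\big[\nabla_{\xi_i} f(\xi)\big], \qquad \nabla_{C_{ij}}\, \mathbb{E}_{\mathcal{N}(\xi \mid \mu, C)}\big[f(\xi)\big] = \tfrac{1}{2}\, \mathbb{E}\big[\nabla^2_{\xi_i \xi_j} f(\xi)\big], $$

which turn gradients of an expectation into expectations of gradients that Monte Carlo estimation can handle.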
Restricted Boltzmann Machines (RBMs) offer a versatile architecture for unsupervised machine learning that can, in principle, approximate any target probability distribution with arbitrary accuracy. However, the RBM model is usually not directly accessible due to its computational complexity, and Markov-chain sampling is invoked to analyze the learned probability distribution. For training and eventual applications, it is therefore desirable to have a sampler that is both accurate and efficient. We highlight that these two goals generally compete with each other and cannot be achieved simultaneously. More specifically, we identify and quantitatively characterize three regimes of RBM learning: independent learning, where the accuracy improves without losing efficiency; correlation learning, where higher accuracy entails lower efficiency; and degradation, where both accuracy and efficiency no longer improve or even deteriorate. These findings are based on numerical experiments and heuristic arguments.
Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. Hinton et al. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
Helmholtz Machines (HMs) are a class of generative models composed of two Sigmoid Belief Networks (SBNs), acting respectively as an encoder and a decoder. These models are commonly trained using a two-step optimization algorithm called Wake-Sleep (WS), and more recently by improved versions such as Reweighted Wake-Sleep (RWS) and Bidirectional Helmholtz Machines (BiHM). The locality of the connections in an SBN induces sparsity in the Fisher Information Matrix associated with the probabilistic model, in the form of a finely-grained block-diagonal structure. In this paper we exploit this property to efficiently train SBNs and HMs using the natural gradient. We present a novel algorithm, called Natural Reweighted Wake-Sleep (NRWS), that corresponds to the geometric adaptation of its standard version. In a similar manner, we also introduce the Natural Bidirectional Helmholtz Machine (NBiHM). Differently from previous work, we show how the natural gradient can be efficiently computed without introducing any approximation of the structure of the Fisher Information Matrix. Experiments performed on standard datasets from the literature show a consistent improvement of NRWS and NBiHM not only with respect to their non-geometric baselines but also with respect to state-of-the-art training algorithms for HMs. The improvement is quantified both in terms of speed of convergence and in the value of the log-likelihood reached after training.
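For orientation, a minimal sketch of the standard (non-natural) wake-sleep updates for a one-hidden-layer Helmholtz machine; the natural-gradient preconditioning that is the paper's actual contribution is not reproduced here, and all names are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bernoulli(p, rng):
    return (rng.random(p.shape) < p).astype(float)

def wake_sleep_step(v, R, G, b_h, b_v, lr, rng):
    """One wake-sleep update for a one-hidden-layer Helmholtz machine.
    R: recognition weights (visible -> hidden); G: generative weights
    (hidden -> visible); b_h: generative prior bias on h; b_v: visible bias."""
    n = len(v)

    # Wake phase: recognize h, then fit the generative pathway by delta rules.
    h = bernoulli(sigmoid(v @ R), rng)
    pv = sigmoid(h @ G + b_v)
    G += lr * h.T @ (v - pv) / n
    b_v += lr * (v - pv).mean(axis=0)
    b_h += lr * (h - sigmoid(b_h)).mean(axis=0)

    # Sleep phase: dream (h, v) from the generative model, fit recognition.
    h_d = bernoulli(np.tile(sigmoid(b_h), (n, 1)), rng)
    v_d = bernoulli(sigmoid(h_d @ G + b_v), rng)
    ph = sigmoid(v_d @ R)
    R += lr * v_d.T @ (h_d - ph) / n
    return R, G, b_h, b_v
```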
Modern deep learning methods constitute incredibly powerful tools to tackle a myriad of challenging problems. However, since deep learning methods operate as black boxes, the uncertainty associated with their predictions is often challenging to quantify. Bayesian statistics offers a formalism to understand and quantify the uncertainty associated with deep neural network predictions. This tutorial provides an overview of the relevant literature and a complete toolset to design, implement, train, use, and evaluate Bayesian neural networks, i.e., stochastic artificial neural networks trained using Bayesian methods.
Leveraging sparse networks to connect successive layers of deep neural networks has recently been shown to provide benefits to large-scale state-of-the-art models. However, network connectivity also plays a significant role in the learning curves of shallow networks, such as the classic Restricted Boltzmann Machine (RBM). A fundamental question is how to efficiently find connectivity patterns that improve the learning curve. Recent principled approaches explicitly include network connections as parameters to be optimized in the model, but typically rely on continuous functions to represent connections and on explicit penalization. This work presents a method based on the idea of network gradients to find the optimal connectivity pattern for an RBM: compute the gradient of every possible connection, given a specific connectivity pattern, and use these gradients to drive continuous connection-strength parameters that in turn determine the connectivity pattern. Thus, learning the RBM parameters and learning the network connectivity are truly jointly performed, albeit with different learning rates and without changing the objective function. The method is applied to the MNIST dataset, showing that better RBM models are found for the benchmark tasks of sample generation and input classification.
We introduce a new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching. Because gradients can be ill-defined and hard to estimate when the data resides on low-dimensional manifolds, we perturb the data with different levels of Gaussian noise, and jointly estimate the corresponding scores, i.e., the vector fields of gradients of the perturbed data distribution for all noise levels. For sampling, we propose an annealed Langevin dynamics where we use gradients corresponding to gradually decreasing noise levels as the sampling process gets closer to the data manifold. Our framework allows flexible model architectures, requires no sampling during training or the use of adversarial methods, and provides a learning objective that can be used for principled model comparisons. Our models produce samples comparable to GANs on MNIST, CelebA and CIFAR-10 datasets, achieving a new state-of-the-art inception score of 8.87 on CIFAR-10. Additionally, we demonstrate that our models learn effective representations via image inpainting experiments.
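The annealed sampler has a compact form; a sketch following the schedule described in the abstract, assuming a trained `score(x, sigma)` network that estimates the gradient of the log of the perturbed data distribution, and treating the constants as illustrative:

```python
import numpy as np

def annealed_langevin_sample(score, x, sigmas, eps, steps_per_level, rng):
    """Annealed Langevin dynamics: run Langevin updates at each noise
    level, with step sizes scaled so the update's signal-to-noise ratio
    stays roughly constant as sigma decreases toward the data manifold.

    score(x, sigma): estimate of grad_x log p_sigma(x).
    sigmas: decreasing noise levels, e.g. geometric from 1.0 down to 0.01.
    """
    for sigma in sigmas:
        alpha = eps * (sigma / sigmas[-1]) ** 2   # per-level step size
        for _ in range(steps_per_level):
            noise = rng.standard_normal(x.shape)
            x = x + 0.5 * alpha * score(x, sigma) + np.sqrt(alpha) * noise
    return x
```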
Variational inference typically minimizes the "reverse" Kullback-Leibler (KL) divergence KL(q||p) from the approximate distribution q to the posterior p. Recent work studies the "forward" KL, KL(p||q), which unlike the reverse KL does not lead to variational approximations that underestimate uncertainty. This paper introduces Transport Score Climbing (TSC), a method that optimizes KL(p||q) by using Hamiltonian Monte Carlo (HMC) and a novel adaptive transport map. The transport map improves the trajectory of HMC by acting as a change of variable between the latent variable space and a warped space. TSC uses HMC samples to dynamically train the transport map while optimizing KL(p||q). TSC leverages a synergy, where better transport maps lead to better HMC sampling, which in turn leads to better transport maps. We demonstrate TSC on synthetic and real data. We find that TSC achieves competitive performance when training variational autoencoders on large-scale data.
Markov chain Monte Carlo (MCMC) methods such as Langevin dynamics efficiently approximate intractable distributions. However, their use in the context of deep latent variable models is limited by expensive datapoint-wise sampling iterations and slow convergence. This paper proposes Amortized Langevin Dynamics (ALD), wherein datapoint-wise MCMC iterations are entirely replaced with updates of an encoder that maps observations into latent variables. This amortization enables efficient posterior sampling without datapoint-wise iterations. Despite its efficiency, we prove that ALD is valid as an MCMC algorithm, whose Markov chain has the target posterior as a stationary distribution under mild assumptions. Based on ALD, we also present a new deep latent variable model named the Langevin Autoencoder (LAE). Interestingly, the LAE can be implemented by slightly modifying the traditional autoencoder. Using multiple synthetic datasets, we first validate that ALD can properly obtain samples from target posteriors. We also evaluate the LAE on an image generation task, and show that the LAE can outperform existing methods based on variational inference, such as the variational autoencoder, and other MCMC-based methods in terms of test likelihood.