The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generalisation from small labelled data sets to large unlabelled ones. Generative approaches have thus far been either inflexible, inefficient or non-scalable. We show that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.
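For orientation, a rough sketch of the kind of objective such semi-supervised generative models optimise, written in the commonly cited form with one discrete label $y$ and continuous latent $z$ (the exact weighting of terms varies):

```latex
\mathcal{L}(x, y) = \mathbb{E}_{q_\phi(z \mid x, y)}\!\big[\log p_\theta(x \mid y, z) + \log p_\theta(y) + \log p(z) - \log q_\phi(z \mid x, y)\big],
\qquad
\mathcal{U}(x) = \sum_{y} q_\phi(y \mid x)\,\mathcal{L}(x, y) + \mathcal{H}\big[q_\phi(y \mid x)\big].
```

Labelled data contribute through $\mathcal{L}$ and unlabelled data through $\mathcal{U}$, with the classifier $q_\phi(y \mid x)$ typically receiving an additional explicit supervised term.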
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent an approximate posterior distribution and uses this for optimisation of a variational lower bound. We develop stochastic backpropagation (rules for gradient backpropagation through stochastic variables) and derive an algorithm that allows for joint optimisation of the parameters of both the generative and recognition models. We demonstrate on several real-world data sets that by using stochastic backpropagation and variational inference, we obtain models that are able to generate realistic samples of data, allow for accurate imputations of missing data, and provide a useful tool for high-dimensional data visualisation.
How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contribution is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.
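A minimal sketch of the reparameterization idea described above, assuming a diagonal-Gaussian recognition model and hypothetical `encoder`/`decoder` callables (not the authors' reference implementation):

```python
import torch

def elbo_reparam(x, encoder, decoder):
    # q(z | x) = N(mu, diag(exp(log_var))), produced by a recognition network
    mu, log_var = encoder(x)
    eps = torch.randn_like(mu)                        # auxiliary noise ~ N(0, I)
    z = mu + torch.exp(0.5 * log_var) * eps           # differentiable sample of z
    recon_log_prob = decoder(x, z)                    # log p(x | z), one value per datapoint
    # analytic KL between N(mu, sigma^2 I) and the standard-normal prior
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=-1)
    return (recon_log_prob - kl).mean()               # single-sample Monte Carlo ELBO estimate
```

Because `z` is a deterministic, differentiable function of `mu`, `log_var`, and the noise, standard stochastic gradient methods can optimise both the generative and recognition parameters through this estimate.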
The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.
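As an illustration of a single invertible transformation of the kind discussed, here is a planar-style flow step (a sketch; shapes and the small numerical epsilon are assumptions of this example):

```python
import torch

def planar_flow_step(z, u, w, b):
    """One planar flow step f(z) = z + u * tanh(w^T z + b), for z of shape
    (batch, d), free parameters u, w of shape (d,), and scalar b."""
    lin = z @ w + b                                        # (batch,)
    f_z = z + u * torch.tanh(lin).unsqueeze(-1)            # transformed sample
    psi = (1.0 - torch.tanh(lin) ** 2).unsqueeze(-1) * w   # tanh'(lin) * w, shape (batch, d)
    log_det = torch.log(torch.abs(1.0 + psi @ u) + 1e-8)   # log |det Jacobian| per sample
    return f_z, log_det

# Composing K such steps: log q_K(z_K) = log q_0(z_0) - sum_k log_det_k
```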
Variational autoencoders (VAEs) are among the most widely used unsupervised machine learning models. However, while the default choice of Gaussian distributions for the prior and posterior represents a mathematically convenient choice that often leads to competitive results, we show that this parameterization fails to model data with a latent hyperspherical structure. To address this issue, we propose using a von Mises-Fisher (vMF) distribution instead, leading to a hyperspherical latent space. Through a series of experiments, we show how such a hyperspherical VAE, or $\mathcal{S}$-VAE, is better suited for capturing data with a hyperspherical latent structure, while outperforming a normal, $\mathcal{N}$-VAE, in low dimensions on other data types. Code: http://github.com/nicola-decao/s-vae-tf and https://github.com/nicola-decao/s-vae-pytorch
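For reference, the von Mises-Fisher density on the unit hypersphere $\mathcal{S}^{d-1}$ that replaces the Gaussian here is

```latex
q(z \mid \mu, \kappa) = C_d(\kappa)\,\exp\!\big(\kappa\,\mu^{\top} z\big),
\qquad
C_d(\kappa) = \frac{\kappa^{d/2-1}}{(2\pi)^{d/2}\, I_{d/2-1}(\kappa)},
```

with mean direction $\mu \in \mathcal{S}^{d-1}$, concentration $\kappa \ge 0$, and $I_v$ the modified Bessel function of the first kind.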
Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.
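A minimal sketch of the relaxed sampling procedure (the standard Gumbel-Softmax construction; the temperature and epsilon values are illustrative):

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, temperature=1.0, eps=1e-20):
    """Differentiable approximation to a categorical sample; as the
    temperature approaches 0 the output approaches a one-hot vector."""
    u = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(u + eps) + eps)     # Gumbel(0, 1) noise
    return F.softmax((logits + gumbel) / temperature, dim=-1)
```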
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
The core principle of variational inference (VI) is to convert the statistical inference problem of computing complex posterior probability densities into a tractable optimization problem. This property makes VI faster than several sampling-based techniques. However, traditional VI algorithms do not scale to large datasets and cannot readily perform inference for out-of-distribution data points without re-running the optimization process. Recent developments in the field, such as stochastic, black-box, and amortized VI, have helped address these issues. Generative modeling tasks nowadays widely exploit amortized VI for its efficiency and scalability, as it uses a parameterized function to learn the parameters of the approximate posterior density. In this paper, we review the mathematical foundations of various VI techniques to form the basis for understanding amortized VI. In addition, we provide an overview of recent trends that address problems of amortized VI, such as the amortization gap, generalization issues, inconsistent representation learning, and posterior collapse. Finally, we analyze alternative divergence measures that improve VI optimization.
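The bound at the centre of this review can be written as

```latex
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x, z) - \log q_\phi(z \mid x)\big]
\;=\; \log p_\theta(x) - \mathrm{KL}\!\big(q_\phi(z \mid x)\,\big\|\,p_\theta(z \mid x)\big),
```

where amortized VI replaces per-datapoint variational parameters with a shared inference network $q_\phi(z \mid x)$; the gap between the amortized and the per-datapoint optimum is the amortization gap mentioned above.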
Deep generative models with latent variables have recently been used to learn joint representations and generative processes from multimodal data. However, these two learning mechanisms can conflict with each other, such that the representation fails to embed information about particular data modalities. This study considers the realistic scenario in which all modalities and class labels are available for model training, but some of the modalities and labels required for downstream tasks are missing. In this setting, we show that the variational lower bound limits the mutual information between the joint representation and the missing modalities. To counteract these problems, we introduce a novel conditional multimodal discriminative model that uses an informative prior distribution and optimizes a likelihood-free objective function that maximizes the mutual information between the joint representation and the missing modalities. Extensive experiments demonstrate the benefits of our proposed model, with empirical results showing that it achieves state-of-the-art performance on representative problems such as downstream classification, acoustic inversion, and annotation generation.
There has recently been a renaissance of identifiability results for deep generative models, not least in nonlinear ICA. For i.i.d. data, prior works have assumed access to a sufficiently rich set of auxiliary observations, denoted $\mathbf{u}$. We show here that identifiability can be obtained without such side information. Previous methods had to make strong assumptions to obtain identifiable models; here, we obtain empirically identifiable models under a relaxed set of constraints. In particular, we focus on generative models that perform clustering in their latent space, a model structure that matches previously identifiable models, but with the learned clusters providing a synthetic form of auxiliary information. We evaluate our proposal, including through statistical testing, and find that the learned clusters function effectively: deep generative models with latent clusterings are empirically identifiable, to the same degree as models that rely on side information.
We present a principled approach to incorporating labels in VAEs that captures the rich characteristic information associated with those labels. While prior work has typically conflated these by learning latent variables that directly correspond to label values, we argue this is contrary to the intended effect of supervision in VAEs: capturing rich label characteristics with the latents. For example, we may want to capture the characteristics of a face that make it look young, rather than just the age of the person. To this end, we develop the CCVAE, a novel VAE model and concomitant variational objective which captures label characteristics explicitly in the latent space, eschewing direct correspondences between label values and latents. Through judicious structuring of mappings between such characteristic latents and labels, we show that the CCVAE can effectively learn meaningful representations of the characteristics of interest across a variety of supervision schemes. In particular, we show that the CCVAE allows for more effective and more general interventions to be performed, such as smooth traversals within the characteristics for a given label, diverse conditional generation, and transferring characteristics across datapoints.
We consider the problem of learning deep generative models from data. We formulate a method that generates an independent sample via a single feedforward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks (Goodfellow et al., 2014). Training a generative adversarial network, however, requires careful optimization of a difficult minimax program. Instead, we utilize a technique from statistical hypothesis testing known as maximum mean discrepancy (MMD), which leads to a simple objective that can be interpreted as matching all orders of statistics between a dataset and samples from the model, and can be trained by backpropagation. We further boost the performance of this approach by combining our generative network with an auto-encoder network, using MMD to learn to generate codes that can then be decoded to produce samples. We show that the combination of these techniques yields excellent generative models compared to baseline approaches as measured on MNIST and the Toronto Face Database.
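A sketch of the MMD criterion used as the training objective (a biased V-statistic estimate with a mixture-of-RBF kernel; the bandwidths are illustrative choices, not the authors' settings):

```python
import torch

def mmd_rbf(x, y, bandwidths=(1.0, 5.0, 10.0)):
    """Squared maximum mean discrepancy between sample sets x and y,
    of shapes (n, d) and (m, d)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)   # pairwise squared Euclidean distances
        return sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in bandwidths)
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()
```

Minimising this quantity between minibatches of data and generator samples yields the moment-matching objective described above, trainable by backpropagation.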
In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GP-LVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples.
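Schematically, the generative process stacks GP mappings layer on layer (a simplified sketch of the hierarchy described above, not the paper's full notation):

```latex
y = f_L\big(f_{L-1}(\cdots f_1(z)\cdots)\big) + \epsilon,
\qquad f_\ell \sim \mathcal{GP}\big(0, k_\ell(\cdot,\cdot)\big),
\qquad \epsilon \sim \mathcal{N}(0, \sigma^2 I),
```

with the latent inputs $z$ at the top of the hierarchy themselves treated probabilistically and marginalised variationally.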
Statistical models are central to machine learning, with broad applicability across a range of downstream tasks. These models are typically governed by free parameters that are estimated from data by maximum likelihood estimation. However, when faced with real-world datasets, many models run into a critical issue: they are formulated in terms of fully observed data, whereas in practice datasets are plagued by missing data. The theory of statistical model estimation from incomplete data is conceptually similar to the estimation of latent variable models, where powerful tools such as variational inference (VI) exist. However, in contrast to standard latent variable models, parameter estimation with incomplete data often requires estimating exponentially many conditional distributions of the missing variables, making standard VI methods intractable. We address this gap by introducing variational Gibbs inference (VGI), a new general-purpose method for estimating the parameters of statistical models from incomplete data. We validate VGI on a set of synthetic and real-world estimation tasks, estimating important machine learning models, VAEs and normalizing flows, from incomplete data. The proposed method, while general-purpose, achieves competitive or better performance than existing model-specific estimation methods.
Markov chain Monte Carlo (MCMC) methods, such as Langevin dynamics, are effective at approximating intractable distributions. However, their use is limited in the context of deep latent variable models owing to costly datapoint-wise sampling iterations and slow convergence. This paper proposes amortized Langevin dynamics (ALD), in which datapoint-wise MCMC iterations are entirely replaced by updates of an encoder that maps observations into latent variables. This amortization enables efficient posterior sampling without datapoint-wise iterations. Despite its efficiency, we prove that ALD is a valid MCMC algorithm, whose Markov chain has the target posterior as its stationary distribution under mild assumptions. Based on ALD, we also present a new deep latent variable model named the Langevin autoencoder (LAE). Interestingly, the LAE can be implemented by slightly modifying a conventional autoencoder. Using multiple synthetic datasets, we first validate that ALD can correctly obtain samples from target posteriors. We also evaluate the LAE on image generation tasks, and show that it can outperform existing methods based on variational inference (e.g., the variational autoencoder) and other MCMC-based methods in terms of test likelihood.
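For context, the per-datapoint iteration that ALD amortizes away resembles a standard unadjusted Langevin update on the latent variable (a sketch; `log_post_fn` is a hypothetical callable returning the unnormalized log-posterior log p(x, z)):

```python
import torch

def langevin_step(z, log_post_fn, step_size=1e-3):
    """One unadjusted Langevin update on latents z targeting p(z | x)."""
    z = z.detach().requires_grad_(True)
    grad = torch.autograd.grad(log_post_fn(z).sum(), z)[0]   # gradient of the log-posterior
    noise = torch.randn_like(z)
    return (z + 0.5 * step_size * grad + step_size ** 0.5 * noise).detach()
```

In ALD, this datapoint-wise loop is replaced by gradient updates to an encoder's parameters, so posterior samples for all observations are produced by a single shared network.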
We introduce a new approach to probabilistic unsupervised learning based on the recognition-parametrised model (RPM): a normalised semi-parametric hypothesis class for joint distributions of observed and latent variables. Under the key assumption that observations are conditionally independent given the latents, the RPM directly encodes the "recognition" process, parametrising both the latent distribution and its conditional given the observations. This recognition model is paired with a non-parametric description of the marginal distribution of each observed variable. The focus is therefore on learning a good latent representation that captures the dependencies between the measurements. The RPM permits exact maximum-likelihood learning in settings with discrete latents and tractable priors, even when the mapping between continuous observations and latents is expressed by a flexible model such as a neural network. We develop effective approximations for continuous latent variables with tractable priors. Unlike the approximations required in dually parametrised models such as the Helmholtz machine and the variational autoencoder, these RPM approximations introduce only minor biases, which may often vanish asymptotically. Furthermore, where the prior over latents is intractable, the RPM can be combined effectively with standard probabilistic techniques such as variational Bayes. We demonstrate the model in high-dimensional data settings, including a form of weakly supervised learning on MNIST digits and the discovery of latent maps from sensory observations. The RPM provides an effective way to discover, represent and reason about the latent structure underlying observational data, a function critical to both animal and artificial intelligence.
We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pre-training. Our work builds on top of the Ladder network proposed by Valpola [1], which we extend by combining the model with supervision. We show that the resulting model reaches state-of-the-art performance in semi-supervised MNIST and CIFAR-10 classification in addition to permutation-invariant MNIST classification with all labels.
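Roughly, the training objective combines a supervised cost with layer-wise denoising reconstruction costs (a schematic form; the per-layer weights and normalisation details follow the Ladder network construction and are omitted here):

```latex
C \;=\; C_{\text{supervised}} \;+\; \sum_{\ell=0}^{L} \lambda_\ell \,\big\lVert z^{(\ell)} - \hat{z}^{(\ell)} \big\rVert^2,
```

where $z^{(\ell)}$ is the clean activation at layer $\ell$ and $\hat{z}^{(\ell)}$ its reconstruction from the noisy decoder path, so unlabelled examples contribute through the denoising terms alone.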
Modern deep learning methods constitute incredibly powerful tools to tackle a myriad of challenging problems. However, since deep learning methods operate as black boxes, the uncertainty associated with their predictions is often difficult to quantify. Bayesian statistics offers a formalism to understand and quantify the uncertainty associated with deep neural network predictions. This tutorial provides an overview of the relevant literature and a complete toolset to design, implement, train, use and evaluate Bayesian neural networks, i.e. stochastic artificial neural networks trained using Bayesian methods.
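The central quantity such methods target is the posterior predictive distribution, approximated by Monte Carlo over sampled weights:

```latex
p(y \mid x, \mathcal{D}) = \int p(y \mid x, \theta)\, p(\theta \mid \mathcal{D})\, d\theta
\;\approx\; \frac{1}{S} \sum_{s=1}^{S} p(y \mid x, \theta_s), \qquad \theta_s \sim q(\theta),
```

where $q(\theta)$ is whatever approximation to the weight posterior the chosen training method (e.g. variational inference or MCMC) provides.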
Probabilistic models based on continuous latent spaces, such as variational autoencoders, can be understood as uncountable mixture models where components depend continuously on the latent code. They are expressive tools for generative and probabilistic modelling, but are at odds with tractable probabilistic inference, that is, computing marginals and conditionals of the represented probability distribution. Meanwhile, tractable probabilistic models such as probabilistic circuits (PCs) can be understood as hierarchical discrete mixture models, which allows them to perform exact inference, but they often show subpar performance in comparison to continuous latent-space models. In this paper, we investigate a hybrid approach, namely continuous mixtures of tractable models with a small latent dimension. While these models are analytically intractable, they are well amenable to numerical integration schemes based on a finite set of integration points. With a sufficient number of integration points, the approximation becomes effectively exact. Moreover, using a finite set of integration points, the approximation method can be compiled into a PC, performing "exact inference in an approximate model". In experiments, we show that this simple scheme proves remarkably effective, as PCs obtained in this way set a new state of the art for tractable models on many standard density estimation benchmarks.
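The numerical-integration view can be summarised as

```latex
p(x) \;=\; \int p(x \mid z)\, p(z)\, dz \;\approx\; \sum_{i=1}^{N} w_i \, p(x \mid z_i),
```

where $z_i$ and $w_i$ are the integration points and weights; since each component $p(x \mid z_i)$ is itself tractable, the resulting finite mixture can be compiled into a single PC supporting exact marginals and conditionals.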
The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. In addition, we demonstrate that a novel type of variational autoencoder, coupled with IAF, is competitive with neural autoregressive models in terms of attained log-likelihood on natural images, while allowing significantly faster synthesis.
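A sketch of one IAF step, using a gated update of the kind described in the IAF paper (`autoregressive_net` is a hypothetical MADE-style network whose outputs for dimension t depend only on z with index less than t):

```python
import torch

def iaf_step(z, autoregressive_net):
    """One inverse autoregressive flow step on latents z of shape (batch, d)."""
    m, s = autoregressive_net(z)            # per-dimension shift and gate pre-activations
    sigma = torch.sigmoid(s)                # gating values in (0, 1)
    z_new = sigma * z + (1.0 - sigma) * m   # elementwise affine, autoregressive update
    log_det = torch.log(sigma + 1e-8).sum(dim=-1)   # Jacobian is triangular
    return z_new, log_det
```

Because the Jacobian of each step is triangular, the log-determinant is the sum of the log gates, and sampling requires only a single forward pass through the chain of transformations.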