To avoid the curse of dimensionality, a common approach to clustering high-dimensional data is to first project the data into a space of reduced dimension, and then cluster the projected data. Although effective, this two-stage approach prevents joint optimization of the dimensionality-reduction and clustering models, and obscures how well the complete model describes the data. Here we show how a family of such two-stage models can be combined into a single, hierarchical model that we call a hierarchical mixture of Gaussians (HMoG). An HMoG simultaneously captures both dimensionality reduction and clustering, and its performance is quantified in closed form by the likelihood function. By formulating and extending existing models with exponential family theory, we show how to maximize the likelihood of HMoGs with expectation-maximization. We apply HMoGs to synthetic data and RNA-sequencing data, and demonstrate how they exceed the limitations of two-stage models. Ultimately, HMoGs are a rigorous generalization of a common statistical framework, and provide researchers with a method to improve model performance when clustering high-dimensional data.
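The two-stage pipeline this abstract critiques is easy to reproduce. Below is a minimal numpy sketch, with k-means standing in for the second-stage clustering model (nothing here is taken from the paper itself); it makes the limitation concrete: the stage-one projection is chosen with no knowledge of the stage-two clustering objective.

```python
import numpy as np

def two_stage_cluster(X, n_components=2, n_clusters=3, n_iter=50, seed=0):
    """Two-stage baseline: PCA projection, then k-means on the projection.

    This is the pipeline the HMoG abstract critiques: the projection is
    chosen without regard to the clustering objective, so the two stages
    cannot be jointly optimized.
    """
    rng = np.random.default_rng(seed)
    # Stage 1: PCA via SVD of the centered data.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T            # projected data

    # Stage 2: k-means on the projected data (a stand-in for a GMM).
    centers = Z[rng.choice(len(Z), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(Z[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = Z[labels == k].mean(axis=0)
    return Z, labels
```

An HMoG, by contrast, would score both stages with a single likelihood rather than optimizing them separately.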
One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this paper, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find the member of that family which is close to the target. Closeness is measured by Kullback-Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this paper is to catalyze statistical research on this class of algorithms.
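The closeness criterion described above has a standard closed-form connection to the optimization objective. As a sketch in generic notation (z latent, x observed): minimizing the KL divergence to the posterior is equivalent to maximizing the evidence lower bound (ELBO), because the two differ only by the constant log p(x):

```latex
\mathrm{KL}\big(q(\mathbf{z})\,\|\,p(\mathbf{z}\mid\mathbf{x})\big)
  \;=\; \log p(\mathbf{x})
  \;-\; \underbrace{\mathbb{E}_{q(\mathbf{z})}\!\big[\log p(\mathbf{x},\mathbf{z}) - \log q(\mathbf{z})\big]}_{\mathrm{ELBO}(q)}
```

Since log p(x) does not depend on q, VI maximizes the ELBO over the chosen family of densities.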
This ongoing work aims to provide a unified introduction to statistical learning, building up slowly from classical models like the GMM and HMM to modern neural networks like the VAE and diffusion models. Today there are many internet resources that explain this or that new machine-learning algorithm in isolation, but they do not (and cannot, in so brief a space) connect these algorithms with each other, or with the classical literature on statistical models out of which the modern algorithms emerged. Also conspicuously lacking is a single notational system which, although unobjectionable to those already familiar with the material (like the authors of these posts), poses a significant barrier to the novice's entry. Likewise, I aim to assimilate the various models, wherever possible, to a single framework for inference and learning, showing how (and why) to change one model into another with minimal alteration (some of these novel, others from the literature). Some background is of course necessary: I assume the reader is familiar with basic multivariable calculus, probability and statistics, and linear algebra. The goal of this book is certainly not completeness, but rather a more or less straight-line path from the basics to the extremely powerful new models of the last decade. The goal, then, is to complement, not replace, comprehensive texts like Bishop's \emph{Pattern Recognition and Machine Learning}, which is now fifteen years old.
Like many machine-learning models, the accuracy and speed of cluster-weighted models (CWMs) can be hampered by high-dimensional data, which has led previous works to apply parsimonious techniques to reduce the effect of the "curse of dimensionality" on mixture models. In this work, we review the background research on cluster-weighted models (CWMs). We further show that parsimonious techniques are not sufficient for mixture models to thrive in the presence of huge high-dimensional data. We discuss a heuristic for detecting hidden components by choosing the initial values of the location parameters using the default values in the "FlexCWM" R package. We introduce a dimensionality-reduction technique called t-distributed stochastic neighbor embedding (t-SNE) to enhance parsimonious CWMs in high-dimensional space. Originally, CWMs are suited for regression, but for classification purposes all multi-class variables are log-transformed with some noise. The parameters of the model are obtained via the expectation-maximization algorithm. The effectiveness of the discussed technique is demonstrated using real data sets from different fields.
We propose a new nonparametric mixture model for multivariate regression problems, inspired by the probabilistic K-nearest-neighbor algorithm. Using a conditionally specified model, predictions for out-of-sample inputs are based on similarity to each observed data point, yielding predictive distributions represented by Gaussian mixtures. Posterior inference over the parameters of the mixture components, as well as the parameters of the distance metric, is performed with a mean-field variational Bayes algorithm using a stochastic gradient-based optimization procedure. The method is especially advantageous in settings where the input-output relationship is complex relative to the size of the data, where the predictive distribution may be skewed or multimodal, and where the inputs are of relatively high dimension. Computational studies on five data sets, two of which are synthetically generated, illustrate the clear advantages of our mixture-of-experts method for high-dimensional inputs, outperforming competitor models in terms of both validation metrics and visual inspection.
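The conditionally specified construction above can be caricatured in a few lines (the notation and the fixed length scale are mine; in the paper the distance metric is learned by variational inference): the prediction at a new input is a Gaussian mixture whose component weights are softmax similarities to each training point.

```python
import math

def mixture_weights(x_star, X_train, length_scale=1.0):
    """Similarity-based mixture weights for a new input x_star.

    Caricature of the conditionally specified model in the abstract:
    each training point contributes one Gaussian mixture component,
    weighted by its kernelized distance to x_star.  The fixed length
    scale is a placeholder for the learned distance metric.
    """
    logits = []
    for x in X_train:
        d2 = sum((a - b) ** 2 for a, b in zip(x, x_star))
        logits.append(-d2 / (2.0 * length_scale ** 2))
    m = max(logits)                       # log-sum-exp stabilization
    w = [math.exp(l - m) for l in logits]
    s = sum(w)
    return [v / s for v in w]
```

Nearby training points dominate the predictive mixture, which is what allows skewed or multimodal predictive distributions.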
We develop stochastic variational inference, a scalable algorithm for approximating posterior distributions. We develop this technique for a large class of probabilistic models and we demonstrate it with two probabilistic topic models, latent Dirichlet allocation and the hierarchical Dirichlet process topic model. Using stochastic variational inference, we analyze several large collections of documents: 300K articles from Nature, 1.8M articles from The New York Times, and 3.8M articles from Wikipedia. Stochastic inference can easily handle data sets of this size and outperforms traditional variational inference, which can only handle a smaller subset. (We also show that the Bayesian nonparametric topic model outperforms its parametric counterpart.) Stochastic variational inference lets us apply complex Bayesian models to massive data sets.
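The contrast with traditional (full-batch) variational inference comes down to a stochastic-approximation update. This toy sketch (my own, not the paper's topic-model derivation) shows the mechanics: touch one data point per step and decay the step size on a Robbins-Monro schedule.

```python
import random

def svi_mean(data, kappa=0.7, tau=1.0, n_steps=2000, seed=0):
    """Stochastic-approximation sketch of the SVI recipe: at each step,
    subsample one data point, form a noisy estimate of the full-data
    update, and blend it in with a Robbins-Monro step size
    rho_t = (t + tau)^(-kappa).  Here the 'global variational
    parameter' is just a running mean, which converges to the sample
    mean of the data.
    """
    random.seed(seed)
    lam = 0.0
    for t in range(1, n_steps + 1):
        x = random.choice(data)           # minibatch of size one
        rho = (t + tau) ** (-kappa)       # decaying step size
        lam = (1 - rho) * lam + rho * x   # stochastic update toward x
    return lam
```

Because each step costs O(1) rather than O(N), the same recipe scales to the millions of documents analyzed in the paper.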
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
This paper develops a Bayesian graphical model for fusing disparate types of count data. The motivating application is the study of bacterial communities from diverse high-dimensional features collected under different treatments. In such data sets, there is no explicit correspondence between the communities, each of which corresponds to a different factor, making data fusion challenging. We introduce a flexible multinomial-Gaussian generative model for jointly modeling such count data. This latent-variable model jointly characterizes the observed data through a common multivariate Gaussian latent space that parameterizes the set of multinomial probabilities for the transcriptome counts. The covariance matrix of the latent variables induces a covariance matrix of co-dependencies among all the transcripts, effectively fusing multiple data sources. We present a scalable variational expectation-maximization (EM) algorithm for inferring the latent variables and the parameters of the model. The inferred latent variables provide a common dimensionality reduction for visualizing the data, and the inferred parameters provide a predictive posterior distribution. In addition to a simulation study that validates the variational procedure, we also apply the model to a bacterial microbiome data set.
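The generative story above can be sketched in a few lines (a generic logistic-normal/multinomial construction in my own notation, not the authors' code): a shared Gaussian latent vector is pushed through a softmax to obtain the multinomial probabilities for the counts.

```python
import numpy as np

def sample_counts(mu, Sigma, total, rng):
    """One draw from a logistic-normal/multinomial sketch of the model
    in the abstract: a shared Gaussian latent vector is mapped through
    a softmax to multinomial probabilities over transcripts, and counts
    are drawn from the resulting multinomial.  The covariance Sigma is
    what couples ('fuses') the different transcripts.
    """
    z = rng.multivariate_normal(mu, Sigma)    # latent Gaussian
    p = np.exp(z - z.max())
    p /= p.sum()                              # softmax link
    return rng.multinomial(total, p)          # observed counts
```

Inference in the paper goes the other way, recovering the latent z (for visualization) and Sigma (for the fused dependency structure) from observed counts.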
The framework of variational autoencoders allows us to efficiently learn deep latent-variable models, such that the model's marginal distribution over observed variables fits the data. Often, we are interested in going a step further, and want to approximate the true joint distribution over observed and latent variables, including the true prior and posterior distributions over latent variables. This is known to be generally impossible due to unidentifiability of the model. We address this issue by showing that for a broad family of deep latent-variable models, identification of the true joint distribution over observed and latent variables is actually possible up to very simple transformations, thus achieving a principled and powerful form of disentanglement. Our result requires a factorized prior distribution over the latent variables that is conditioned on an additionally observed variable, such as a class label or almost any other observation. We build on recent developments in nonlinear ICA, which we extend to the case with noisy or undercomplete observations, integrated in a maximum likelihood framework. The result also trivially contains identifiable flow-based generative models as a special case.
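The factorized, conditioned prior that drives the identifiability result has, up to notation (which I am reconstructing here, so treat this as a sketch rather than the paper's exact statement), a conditionally factorial exponential-family form, with the auxiliary observation u modulating the natural parameters:

```latex
p_{\boldsymbol\theta}(\mathbf{z} \mid \mathbf{u})
  \;=\; \prod_i \frac{Q_i(z_i)}{Z_i(\mathbf{u})}
        \exp\!\Big[\sum_{j=1}^{k} T_{i,j}(z_i)\,\lambda_{i,j}(\mathbf{u})\Big]
```

Conditioning on u breaks the rotational symmetries of an unconditional Gaussian prior, which is what makes the joint distribution identifiable up to simple transformations.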
The expectation-maximization (EM) algorithm is a simple meta-algorithm that has been used for many years as a methodology for statistical inference when there are missing measurements in the observed data, or when the data are composed of observables and unobservables. Its general properties are well studied, and there are also countless ways to apply it to individual problems. In this paper, we introduce the $em$ algorithm, an information-geometric formulation of the EM algorithm, together with its extensions and applications to various problems. Specifically, we will see that, from the geometric perspective, it is possible to formulate an outlier-robust inference algorithm, an algorithm for computing channel capacity, parameter estimation methods on probability simplices, particular multivariate analysis methods such as principal component analysis in a probability model and modal regression, matrix factorization, and the learning of generative models, which have recently attracted attention in deep learning.
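In the information-geometric picture (sketched here in generic notation), the two steps are alternating projections that minimize a single KL divergence between a data manifold D (distributions consistent with the observations) and the model manifold M:

```latex
\text{e-step:}\quad q^{(t)} \;=\; \arg\min_{q \in D} \,\mathrm{KL}\big(q \,\|\, p^{(t)}\big),
\qquad
\text{m-step:}\quad p^{(t+1)} \;=\; \arg\min_{p \in M} \,\mathrm{KL}\big(q^{(t)} \,\|\, p\big)
```

Viewing EM as alternating e- and m-projections is what lets the same scheme be transplanted to the other problems the abstract lists.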
In recent decades, technological advances have made it possible to collect large data sets. In this context, model-based clustering is a very popular, flexible, and interpretable methodology for data exploration within a well-defined statistical framework. One of the byproducts of the increase in large data sets is that missing values are more frequent. However, the traditional remedies (discarding observations with missing values, or imputation methods) are not designed for the clustering purpose. In addition, they rarely apply to the general case, though frequent in practice, of missing not at random (MNAR) values, i.e., when the missingness depends on the unobserved data values, and possibly on the observed data values as well. The goal of this paper is to propose a novel approach by embedding MNAR data directly within model-based clustering algorithms. We introduce a selection model for the joint distribution of the data and the missing-data indicator. It corresponds to a mixture model for the data distribution and a general MNAR model for the missing-data mechanism, which may depend on the underlying (unknown) classes and/or on the values of the missing variables themselves. A large set of meaningful MNAR sub-models is derived, and the identifiability of the parameters is studied for each sub-model, which is usually a key issue for any MNAR proposal. EM and stochastic EM algorithms are considered for estimation. Finally, we perform empirical evaluations of the proposed sub-models on synthetic data, and we illustrate the relevance of our method on a medical register, the TraumaBase(R) data set.
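The selection model described above factorizes (in generic notation of my own; the paper derives its many sub-models by restricting the second factor) as a mixture for the data times an MNAR mechanism that may depend on the class and on the possibly missing values:

```latex
f(\mathbf{y}, \mathbf{r}; \theta, \psi)
  \;=\; \sum_{k=1}^{K} \pi_k \, f_k(\mathbf{y}; \theta_k)\,
        \Pr\big(\mathbf{r} \mid \mathbf{y}, Z = k; \psi\big)
```

Here y is the (partially observed) data vector, r the missing-data indicator, and Z the latent class; MCAR and MAR arise as the special cases where the second factor drops its dependence on y and Z.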
We introduce a novel model for multilayer weighted networks that accounts for global noise in addition to local signals. The model is similar to a multilayer stochastic block model (SBM), but the key difference is that between-block interactions, across layers, are common throughout the entire system, which we call ambient noise. A single block is also characterized by these fixed ambient parameters, to represent members that do not belong anywhere else. This approach allows simultaneous clustering and typologizing of blocks into signal or noise, in order to better understand their roles in the overall system, which existing block models do not take into account. We employ a novel application of hierarchical variational inference to jointly detect and differentiate the block types. We call this model for multilayer weighted networks the stochastic block (with) ambient noise model (SBANM) and develop an associated community-detection algorithm. We apply this method to subjects in the Philadelphia Neurodevelopmental Cohort to discover communities of subjects with shared psychopathology related to psychosis.
We introduce a new approach to probabilistic unsupervised learning based on the recognition-parametrised model (RPM): a normalised semi-parametric hypothesis class for the joint distribution of observed and latent variables. Under the key assumption that observations are conditionally independent given the latents, the RPM directly encodes the "recognition" process, parametrising both the prior distribution of the latents and their conditional distributions given the observations. This recognition model is paired with non-parametric descriptions of the marginal distribution of each observed variable. Thus, the focus is on learning a good latent representation that captures the dependence between the measurements. The RPM permits exact maximum-likelihood learning in settings with discrete latents and tractable priors, even when the mapping between continuous observations and latents is expressed through a flexible model such as a neural network. We develop effective approximations for the case of continuous latent variables with tractable priors. Unlike the approximations required in dual-parametrised models such as Helmholtz machines and variational autoencoders, these RPM approximations introduce only minor bias, which may often vanish asymptotically. Moreover, where the prior on the latents is intractable, the RPM may be combined effectively with standard probabilistic techniques such as variational Bayes. We demonstrate the model in high-dimensional data settings, including a form of weakly supervised learning on MNIST digits and the discovery of latent maps from sensory observations. The RPM provides an effective way to discover, represent, and reason about the latent structure underlying observational data, a function critical to both animal and artificial intelligence.
How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.
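The reparameterization described in the first contribution is easy to verify numerically. This sketch (a toy objective of my own, not the paper's VAE bound) estimates the gradient of E[z^2] with respect to mu by differentiating through the sample path z = mu + sigma*eps:

```python
import numpy as np

def reparam_grad_mu(mu, sigma, n_samples=100_000, seed=0):
    """Reparameterization-trick sketch: to differentiate
    E_{z ~ N(mu, sigma^2)}[z^2] with respect to mu, write
    z = mu + sigma * eps with eps ~ N(0, 1).  The gradient then passes
    through the sample: d/dmu (mu + sigma*eps)^2 = 2*(mu + sigma*eps),
    and the Monte Carlo average estimates the exact gradient 2*mu.
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n_samples)
    z = mu + sigma * eps                  # differentiable sample path
    return (2.0 * z).mean()               # estimates d/dmu E[z^2] = 2*mu
```

Because the randomness lives entirely in eps, the same estimator works for any differentiable objective, which is what makes the variational lower bound amenable to standard stochastic gradient methods.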
The core principle of variational inference (VI) is to convert the statistical inference problem of computing complex posterior probability densities into a tractable optimization problem. This property makes VI faster than several sampling-based techniques. However, traditional VI algorithms do not scale to large data sets and cannot readily infer out-of-distribution data points without re-running the optimization process. Recent developments in the field, such as stochastic, black-box, and amortized VI, have helped address these issues. Today, generative modeling tasks widely leverage amortized VI for its efficiency and scalability, as it utilizes a parametrized function to learn the parameters of the approximate posterior density. In this paper, we review the mathematical foundations of various VI techniques to form the basis for understanding amortized VI. In addition, we provide an overview of recent trends that address problems of amortized VI, such as the amortization gap, generalization issues, inconsistent representation learning, and posterior collapse. Finally, we analyze alternative divergence measures that improve VI optimization.
We develop an optimization algorithm suitable for Bayesian learning in complex models. Our approach relies on natural gradient updates within a general black-box framework for efficient training with limited model-specific derivations. It applies within the class of exponential-family variational posterior distributions; we discuss in detail the Gaussian case, for which the updates have a rather simple form. Our Quasi Black-box Variational Inference (QBVI) framework is readily applicable to a wide class of Bayesian inference problems and is of simple implementation, as the updates of the variational posterior involve neither gradients with respect to the model parameters nor the prescription of the Fisher information matrix. We develop QBVI under different hypotheses for the posterior covariance matrix, discuss details of its robust and feasible implementation, and provide a number of real-world applications to demonstrate its effectiveness.
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
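Posterior collapse as defined above, q(z|x) equal to the prior, is commonly diagnosed via the per-datapoint KL term of the ELBO; for the diagonal-Gaussian case the term is closed-form (a standard formula, not something specific to this paper):

```python
import math

def kl_gauss(mu_q, sigma_q, mu_p=0.0, sigma_p=1.0):
    """KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ), the per-datapoint
    quantity commonly monitored for posterior collapse: if it is (near)
    zero for every input, q(z|x) has collapsed onto the prior and the
    latent variable carries no information about x.
    """
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2)
            - 0.5)
```

The paper's point is that such a diagnostic reading zero can reflect non-identifiability of the latent variables in the generative model itself, not merely an optimization failure.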
The Gaussian mixture model (GMM) provides a simple yet principled framework, with properties suitable for statistical inference. In this paper, we propose a new model-based clustering algorithm, called EGMM (evidential GMM), in the theoretical framework of belief functions, to better characterize the uncertainty of cluster membership. With the cluster membership of each object represented by a mass function, an evidential Gaussian mixture distribution, whose components correspond to the power set of the desired clusters, is proposed to model the entire data set. The parameters in EGMM are estimated by a specially designed expectation-maximization (EM) algorithm. A validity index allowing automatic determination of the proper number of clusters is also provided. The proposed EGMM is as simple as the classical GMM, but can generate a more informative evidential partition for the data set under consideration. Experiments on synthetic and real data sets show that the proposed EGMM performs better than other representative clustering algorithms. Besides, its superiority is also demonstrated by an application to multi-modal brain image segmentation.
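The abstract claims EGMM "is as simple as the classical GMM"; for reference, the classical baseline it extends is plain EM, sketched here for a two-component 1-D mixture (the evidential version instead assigns mass over the power set of clusters, which this sketch does not do):

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Classical EM for a two-component 1-D Gaussian mixture: the
    baseline that the evidential EGMM generalizes."""
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities via Bayes' rule.
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return mu, var, pi, r
```

In EGMM the responsibilities r would become mass functions over subsets of clusters, allowing "this point belongs to cluster 1 or 2" to be expressed directly.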
Item response theory (IRT) is a ubiquitous model for understanding human behaviors and attitudes based on responses to questions. Large modern datasets offer opportunities to capture more nuances in human behavior, potentially improving psychometric modeling and thereby improving scientific understanding and public policy. However, while larger datasets allow for more flexible approaches, many contemporary algorithms for fitting IRT models may also have massive computational demands that forbid real-world application. To address this bottleneck, we introduce a variational Bayesian inference algorithm for IRT, and show that it is fast and scalable without sacrificing accuracy. Applying this method to five large-scale item-response datasets from cognitive science and education yields higher log likelihoods and higher accuracy than alternative inference algorithms. Using this new inference approach, we then generalize IRT with expressive Bayesian response models, leveraging recent advances in deep learning to capture nonlinear item characteristic curves (ICCs) with neural networks. Using a specific grade-level mathematics test from TIMSS, we show our nonlinear IRT models can capture interesting asymmetric ICCs. The algorithm implementation is open-source and easy to use.
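For context on what the neural ICCs generalize (standard psychometrics, not code from this paper), the classical two-parameter logistic (2PL) item characteristic curve is a single logistic function of ability:

```python
import math

def icc_2pl(theta, a, b):
    """Standard 2PL item characteristic curve: the probability of a
    correct response given ability theta, item discrimination a, and
    item difficulty b.  The neural ICCs in the abstract replace this
    fixed, symmetric sigmoid with a learned, possibly asymmetric curve.
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

By construction the 2PL curve is symmetric about theta = b (where it equals 0.5); the asymmetric ICCs found on the TIMSS data are exactly what this parametric form cannot express.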