Bayesian Hierarchical Mixture Clustering (BHMC) is an interesting model that improves on traditional Bayesian hierarchical clustering approaches. For the parent-to-node diffusion in the generative process, BHMC replaces the conventional Gaussian-to-Gaussian (G2G) kernels with a Hierarchical Dirichlet Process Mixture Model (HDPMM). However, a drawback of BHMC is that it may attain comparatively high nodal variance at the higher levels (i.e., those closer to the root). This can be interpreted as the separation between nodes, particularly those at higher levels, being potentially weak. To overcome this drawback, we consider a recent inference framework named posterior regularization, which provides a simple way to impose extra constraints on a Bayesian model to address some of its weaknesses. Hence, to enhance cluster separation, we apply posterior regularization to impose max-margin constraints on the nodes at every level of the hierarchy. In this paper, we illustrate how the framework integrates with BHMC and achieves the desired improvement over the original model.
Translated by Google Translate
When working with multimodal Bayesian posterior distributions, Markov chain Monte Carlo (MCMC) algorithms have difficulty moving between modes, and default variational or mode-based approximate inference will underestimate posterior uncertainty. And, even if the most important modes are found, it is difficult to evaluate their relative weights in the posterior. Here we propose an approach of parallel runs of MCMC, variational, or mode-based inference to hit as many modes or separated regions as possible, and then combining these approximations using Bayesian stacking, a scalable method for constructing a weighted average of distributions. The result from stacking draws from multimodal posterior distributions minimizes cross-validated prediction error and represents posterior uncertainty better than variational inference, but it is not necessarily equivalent, even asymptotically, to fully Bayesian inference. We present theoretical consistency, whereby the stacked inference approximates the true data-generating process from the misspecified model and a non-mixing sampler, with predictive performance superior to full Bayesian inference; multimodality can therefore be considered a blessing rather than a curse under model misspecification. We demonstrate practical implementation in several model families: latent Dirichlet allocation, Gaussian process regression, hierarchical regression, horseshoe variable selection, and neural networks.
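The stacking step described above can be sketched with toy numbers. Assuming hypothetical held-out log predictive densities from two parallel chains (all values below are illustrative, not from the paper), the simplex weights minimizing cross-validated prediction error can be found by gradient ascent on softmax-parameterized weights:

```python
import numpy as np

# lpd[k, i] = log predictive density of held-out point i under chain k's
# posterior approximation. Toy numbers: chain 0 fits the data better.
lpd = np.array([[-1.0, -1.2, -0.9, -1.1],
                [-2.0, -2.5, -1.8, -2.2]])

def stacking_weights(lpd, steps=2000, lr=0.1):
    """Maximize sum_i log sum_k w_k exp(lpd[k, i]) over the simplex,
    parameterizing w by a softmax so plain gradient ascent applies."""
    theta = np.zeros(lpd.shape[0])
    p = np.exp(lpd)                      # predictive densities per point
    for _ in range(steps):
        w = np.exp(theta) / np.exp(theta).sum()
        mix = w @ p                      # stacked mixture density per point
        grad_w = (p / mix).sum(axis=1)   # d/dw_k of the log score
        theta += lr * w * (grad_w - w @ grad_w)  # chain rule through softmax
    return np.exp(theta) / np.exp(theta).sum()

w = stacking_weights(lpd)
```

With these toy densities the better-fitting chain receives almost all of the weight, which is the intended behavior when one chain dominates on held-out data.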
Bayesian structure learning allows one to capture uncertainty over the causal directed acyclic graph (DAG) responsible for generating given data. In this work, we present Tractable Uncertainty for STructure learning (TRUST), a framework for approximate posterior inference that relies on probabilistic circuits as a representation of our posterior beliefs. In contrast to sample-based posterior approximations, our representation can capture a much richer space of DAGs, while also being able to tractably reason about the uncertainty through a range of useful inference queries. We empirically demonstrate how probabilistic circuits can be used as an augmented representation for structure learning methods, leading to improvements in both the quality of inferred structures and posterior uncertainty. Experimental results on conditional queries further demonstrate the practical utility of the representational capacity of TRUST.
We develop stochastic variational inference, a scalable algorithm for approximating posterior distributions. We develop this technique for a large class of probabilistic models and we demonstrate it with two probabilistic topic models, latent Dirichlet allocation and the hierarchical Dirichlet process topic model. Using stochastic variational inference, we analyze several large collections of documents: 300K articles from Nature, 1.8M articles from The New York Times, and 3.8M articles from Wikipedia. Stochastic inference can easily handle data sets of this size and outperforms traditional variational inference, which can only handle a smaller subset. (We also show that the Bayesian nonparametric topic model outperforms its parametric counterpart.) Stochastic variational inference lets us apply complex Bayesian models to massive data sets.
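A minimal sketch of a stochastic variational update, assuming a toy conjugate model (a Gaussian mean with known noise) rather than a topic model; the pattern is the one the abstract describes, namely moving the global factor's natural parameters toward a noisy minibatch-based estimate with a decreasing step size:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
data = rng.normal(3.0, 1.0, size=N)   # x_i ~ N(3, 1), hypothetical toy data

# Conjugate model: mu ~ N(0, 1) prior, x_i ~ N(mu, 1) likelihood.
# Natural parameters of q(mu): (precision, precision * mean).
prec, prec_mean = 1.0, 0.0
batch = 100
for t in range(1, 501):
    rho = (t + 10) ** -0.7            # Robbins-Monro step-size schedule
    idx = rng.integers(0, N, size=batch)
    # "intermediate" natural parameters, as if the minibatch were the
    # whole data set (likelihood terms rescaled by N / batch)
    prec_hat = 1.0 + N
    prec_mean_hat = 0.0 + (N / batch) * data[idx].sum()
    prec = (1 - rho) * prec + rho * prec_hat
    prec_mean = (1 - rho) * prec_mean + rho * prec_mean_hat

mu_est = prec_mean / prec             # variational posterior mean of mu
```

The estimate converges to the exact conjugate posterior mean while touching only one small minibatch per step, which is what makes the scheme scale to corpora of millions of documents.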
We present the GPry algorithm for fast Bayesian inference of general (non-Gaussian) posteriors with a moderate number of parameters. GPry does not need any pre-training or special hardware such as GPUs, and is intended as a drop-in replacement for traditional Monte Carlo methods for Bayesian inference. Our algorithm is based on generating a Gaussian Process surrogate model of the log-posterior, aided by a Support Vector Machine classifier that excludes extreme or non-finite values. An active learning scheme allows us to reduce the number of required posterior evaluations by two orders of magnitude compared to traditional Monte Carlo inference. Our algorithm allows for parallel evaluations of the posterior at optimal locations, further reducing wall-clock times. We significantly improve performance using properties of the posterior in our active learning scheme and for the definition of the GP prior. In particular we account for the expected dynamical range of the posterior in different dimensionalities. We test our model against a number of synthetic and cosmological examples. GPry outperforms traditional Monte Carlo methods when the evaluation time of the likelihood (or the calculation of theoretical observables) is of the order of seconds; for evaluation times of over a minute it can perform inference in days that would take months using traditional methods. GPry is distributed as an open source Python package (pip install gpry) and can also be found at https://github.com/jonaselgammal/GPry.
Latent position network models are a versatile tool in network science; applications include clustering entities, controlling for causal confounders, and defining priors over unobserved graphs. Estimating each node's latent position is typically framed as a Bayesian inference problem, with Metropolis within Gibbs being the most popular tool for approximating the posterior distribution. However, Metropolis within Gibbs is known to be inefficient for large networks; the acceptance ratios are expensive to compute, and the resulting posterior draws are highly correlated. In this article, we propose an alternative Markov chain Monte Carlo strategy, defined using a combination of split Hamiltonian Monte Carlo and Firefly Monte Carlo, that leverages the functional form of the posterior distribution for more efficient posterior computation. We demonstrate that these strategies outperform Metropolis within Gibbs and other algorithms on synthetic networks, as well as on a real information-sharing network of teachers and staff in a school district.
Over the past two decades, methods for identifying groups with different trends in longitudinal data have become of interest across many areas of research. To support researchers, we summarize the guidance from the literature regarding longitudinal clustering. Moreover, we provide an overview of methods for longitudinal clustering, including group-based trajectory modeling (GBTM), growth mixture modeling (GMM), and longitudinal k-means (KML). The methods are introduced at a basic level, and their strengths, limitations, and model extensions are listed. Following the recent developments in data collection, attention is given to the applicability of these methods to intensive longitudinal data (ILD). We demonstrate the application of the methods on a synthetic dataset using packages available in R.
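Of the surveyed methods, KML is the simplest to sketch: it is ordinary k-means applied to whole trajectories. A minimal illustration on synthetic data (group sizes, noise level, and seeding are all hypothetical choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 10)
# Two synthetic trend groups: rising trajectories vs. flat ones.
rising = 2.0 * t + rng.normal(0, 0.1, size=(20, 10))
flat = 0.0 * t + rng.normal(0, 0.1, size=(20, 10))
X = np.vstack([rising, flat])           # 40 subjects x 10 time points

def kml(X, iters=20):
    """Longitudinal k-means (k=2): plain k-means where each observation
    is a subject's entire trajectory, seeded with one trajectory per end."""
    centers = X[[0, -1]].astype(float)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)            # assign each subject to a trend
        centers = np.array([X[labels == j].mean(0) for j in range(2)])
    return labels, centers

labels, centers = kml(X)
```

On this toy data the two recovered cluster centers are the mean rising and mean flat trajectories, i.e., the group-level trends the abstract is about.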
Surrogate models have been shown to be an extremely efficient aid in solving engineering problems that require repeated evaluations of an expensive computational model. They are built by sparsely evaluating the costly original model and have provided a way to solve otherwise intractable problems. A crucial aspect in surrogate modelling is the assumption of smoothness and regularity of the model to approximate. This assumption is however not always met in reality. For instance in civil or mechanical engineering, some models may present discontinuities or non-smoothness, e.g., in case of instability patterns such as buckling or snap-through. Building a single surrogate model capable of accounting for these fundamentally different behaviors or discontinuities is not an easy task. In this paper, we propose a three-stage approach for the approximation of non-smooth functions which combines clustering, classification and regression. The idea is to split the space following the localized behaviors or regimes of the system and build local surrogates that are eventually assembled. A sequence of well-known machine learning techniques is used: Dirichlet process mixture models (DPMM), support vector machines and Gaussian process modelling. The approach is tested and validated on two analytical functions and a finite element model of a tensile membrane structure.
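The three-stage idea can be sketched on a one-dimensional toy problem. To keep the sketch short, simpler stand-ins replace each stage of the paper's pipeline: a two-means pass instead of the DPMM, a 1-nearest-neighbour rule instead of the SVM, and local least-squares lines instead of Gaussian processes; the target function and all settings are illustrative:

```python
import numpy as np

# Non-smooth toy model: a jump at x = 0 stands in for e.g. snap-through.
def f(x):
    return np.where(x < 0.0, x, x + 2.0)

x = np.linspace(-1.0, 1.0, 80)
y = f(x)

# Stage 1, clustering: group samples in (x, y) space, so the jump
# separates the two regimes. (Two-means here; the paper uses a DPMM.)
pts = np.column_stack([x, y])
centers = pts[[0, -1]].copy()
for _ in range(10):
    lab = ((pts[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
    centers = np.array([pts[lab == j].mean(0) for j in range(2)])

# Stage 2, classification: predict the regime of a new input.
# (1-nearest-neighbour here; the paper uses an SVM.)
def regime(x_new):
    return lab[np.abs(x - x_new).argmin()]

# Stage 3, regression: one local surrogate per regime, assembled at
# prediction time. (Least-squares lines here; the paper uses GPs.)
coef = [np.polyfit(x[lab == j], y[lab == j], 1) for j in range(2)]

def surrogate(x_new):
    return np.polyval(coef[regime(x_new)], x_new)
```

Because each local surrogate only ever sees one smooth regime, the assembled predictor reproduces the discontinuity that a single global smooth surrogate would blur.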
Neyman-Scott processes (NSPs) are point process models that generate clusters of points in time or space. They are natural models for a wide range of phenomena, ranging from neural spike trains to document streams. The clustering property is achieved via a doubly stochastic formulation: first, a set of latent events is drawn from a Poisson process; then, each latent event generates a set of observed data points according to another Poisson process. This construction is similar to Bayesian nonparametric mixture models like the Dirichlet process mixture model (DPMM) in that the number of latent events (i.e. clusters) is a random variable, but the point process formulation makes the NSP especially well suited to modeling spatiotemporal data. While many specialized algorithms have been developed for DPMMs, comparatively fewer works have focused on inference in NSPs. Here, we present novel connections between NSPs and DPMMs, with the key link being a third class of Bayesian mixture models called mixture of finite mixture models (MFMMs). Leveraging this connection, we adapt the standard collapsed Gibbs sampling algorithm for DPMMs to enable scalable Bayesian inference on NSP models. We demonstrate the potential of Neyman-Scott processes on a variety of applications including sequence detection in neural spike trains and event detection in document streams.
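The doubly stochastic construction described above is easy to sketch generatively for a temporal NSP on an interval; all rates and scales below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_nsp(rate_latent, mean_offspring, sigma, T=10.0):
    """Doubly stochastic draw on [0, T]: latent events from a homogeneous
    Poisson process, then Gaussian-displaced observed points around each
    latent event (another Poisson number of offspring per event)."""
    n_latent = rng.poisson(rate_latent * T)
    centers = rng.uniform(0.0, T, size=n_latent)
    points = [c + sigma * rng.normal(size=rng.poisson(mean_offspring))
              for c in centers]
    return centers, np.concatenate(points) if points else np.array([])

centers, points = sample_nsp(rate_latent=2.0, mean_offspring=20, sigma=0.1)
```

As in the abstract, the number of latent events (clusters) is itself random, and the observed points form tight clumps around the unobserved centers, which is exactly the structure posterior inference must invert.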
Probabilistic context-free grammars (PCFGs) and dynamic Bayesian networks (DBNs) are widely used sequence models with complementary strengths and limitations. While PCFGs allow for nested hierarchical dependencies (tree structures), their latent variables (non-terminal symbols) have to be discrete. In contrast, DBNs allow for continuous latent variables, but the dependencies are strictly sequential (chain structure). Therefore, neither can be applied if the latent variables are assumed to be continuous and also to have a nested hierarchical dependency structure. In this paper, we present Recursive Bayesian Networks (RBNs), which generalize and unify PCFGs and DBNs, combining their strengths and containing both as special cases. RBNs define a joint distribution over tree-structured Bayesian networks with discrete or continuous latent variables. The main challenge lies in performing joint inference over the exponential number of possible structures and the continuous variables. We provide two solutions: 1) for arbitrary RBNs, we generalize inside and outside probabilities from PCFGs to the mixed discrete-continuous case, which allows for maximum posterior estimates of the continuous latent variables via gradient descent, while marginalizing over network structures; 2) for Gaussian RBNs, we additionally derive an analytic approximation, allowing for robust parameter optimization and Bayesian inference. The capacity and diverse applications of RBNs are illustrated on two examples: in a quantitative evaluation on synthetic data, we demonstrate and discuss their advantage over sequential models for segmentation and tree induction on noisy sequences, comparing against change-point detection and hierarchical clustering; in an application to musical data, we approach the unsolved problem of hierarchical music analysis from the raw note level and compare our results to expert annotations.
This ongoing work aims to provide a unified introduction to statistical learning, building up slowly from classical models like GMMs and HMMs to modern neural networks like VAEs and diffusion models. There are many internet resources today that explain this or that new machine-learning algorithm in isolation, but they do not (and cannot, in so brief a space) connect these algorithms with each other, or with the classical literature on statistical models from which the modern algorithms emerged. Also conspicuously lacking is a single notational system which, although unobjectionable to those already familiar with the material (like the authors of these posts), poses a significant barrier to newcomers. Likewise, I have aimed to assimilate the various models, wherever possible, into a single framework for inference and learning, showing how (and why) one model can be changed into another with minimal alteration (some of these points are novel, others are from the literature). Some background is of course necessary. I have assumed the reader is familiar with basic multivariable calculus, probability and statistics, and linear algebra. The goal of this book is certainly not completeness, but rather a more or less straight-line path from basic knowledge to the extremely powerful new models of the last decade. The goal, then, is to complement, rather than replace, comprehensive texts like Bishop's \emph{Pattern Recognition and Machine Learning}, which is now 15 years old.
Most machine learning algorithms are configured by one or several hyperparameters that must be carefully chosen and often considerably impact performance. To avoid a time-consuming and irreproducible manual process of trial and error to find well-performing hyperparameter configurations, various automatic hyperparameter optimization (HPO) methods can be employed, e.g., based on resampling error estimation for supervised machine learning. After introducing HPO from a general perspective, this paper reviews important HPO methods such as grid or random search, evolutionary algorithms, Bayesian optimization, Hyperband, and racing. It gives practical recommendations regarding important choices to be made when conducting HPO, including the HPO algorithms themselves, performance evaluation, how to combine HPO with ML pipelines, runtime improvements, and parallelization. This work is accompanied by an appendix that contains information on specific software packages in R and Python, as well as information and recommended hyperparameter search spaces for specific learning algorithms. We also provide notebooks that demonstrate concepts from this work as supplementary files.
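Of the reviewed methods, random search is the simplest to sketch. Assuming a hypothetical two-hyperparameter validation-loss surface (a stand-in for a real resampling-based error estimate), configurations are drawn log-uniformly from the search space and the best is kept:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical validation loss over (learning rate, regularization);
# its minimum sits at lr = 0.1, reg = 0.01.
def val_loss(lr, reg):
    return (np.log10(lr) + 1.0) ** 2 + (np.log10(reg) + 2.0) ** 2

# Random search: sample the space log-uniformly, keep the best configuration.
best_cfg, best_loss = None, np.inf
for _ in range(200):
    lr = 10.0 ** rng.uniform(-4, 0)
    reg = 10.0 ** rng.uniform(-5, -1)
    loss = val_loss(lr, reg)
    if loss < best_loss:
        best_cfg, best_loss = (lr, reg), loss
```

Sampling on a log scale is the standard recommendation when a hyperparameter's plausible range spans several orders of magnitude, as learning rates and regularization strengths typically do.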
This is a collection of (mostly) pen-and-paper exercises in machine learning. The exercises are on the following topics: linear algebra, optimization, directed graphical models, undirected graphical models, expressive power of graphical models, factor graphs and message passing, inference for hidden Markov models, model-based learning (including ICA and unnormalized models), sampling and Monte Carlo integration, and variational inference.
Modern deep learning methods constitute incredibly powerful tools to tackle a myriad of challenging problems. However, since deep learning methods operate as black boxes, the uncertainty associated with their predictions is often challenging to quantify. Bayesian statistics offer a formalism to understand and quantify the uncertainty associated with deep neural network predictions. This tutorial provides an overview of the relevant literature and a complete toolset to design, implement, train, use, and evaluate Bayesian neural networks, i.e., stochastic artificial neural networks trained using Bayesian methods.
One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this paper, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find the member of that family which is close to the target. Closeness is measured by Kullback-Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this paper is to catalyze statistical research on this class of algorithms.
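The core idea of the review, inference as optimization over a family of densities measured by Kullback-Leibler divergence, can be shown in the simplest possible setting: fitting a Gaussian q to a Gaussian target p, where the KL is available in closed form (finite-difference gradients keep the sketch short; all numbers are illustrative):

```python
import numpy as np

# Target density p = N(mu, sig^2); variational family q = N(m, s^2).
mu, sig = 2.0, 1.5

def kl_q_p(m, log_s):
    """Closed-form KL(q || p) between two univariate Gaussians."""
    s = np.exp(log_s)
    return np.log(sig / s) + (s ** 2 + (m - mu) ** 2) / (2 * sig ** 2) - 0.5

# Variational inference as optimization: descend the KL over the
# family's parameters (mean m, log standard deviation log_s).
m, log_s = 0.0, 0.0
for _ in range(500):
    eps = 1e-5
    g_m = (kl_q_p(m + eps, log_s) - kl_q_p(m - eps, log_s)) / (2 * eps)
    g_s = (kl_q_p(m, log_s + eps) - kl_q_p(m, log_s - eps)) / (2 * eps)
    m, log_s = m - 0.2 * g_m, log_s - 0.2 * g_s
```

Here the family contains the target, so the optimizer drives the KL to zero and q recovers p exactly; in real applications (e.g., a mean-field approximation to a Bayesian mixture of Gaussians) the family excludes the target, and the optimum is the closest member in KL.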
We here adopt Bayesian nonparametric mixture models to extend multi-armed bandits, in particular Thompson sampling, to scenarios with reward model uncertainty. In the stochastic multi-armed bandit, the reward for playing an arm is generated from an unknown distribution. Reward uncertainty, i.e., the lack of knowledge about the reward-generating distribution, induces the exploration-exploitation trade-off: a bandit agent needs to simultaneously learn the properties of the reward distribution and sequentially decide which action to take next. In this work, we extend Thompson sampling to scenarios with reward model uncertainty by adopting Bayesian nonparametric Gaussian mixture models for flexible reward density estimation. The proposed Bayesian nonparametric mixture model Thompson sampling sequentially learns the reward model that best approximates the true, yet unknown, per-arm reward distribution, achieving successful regret performance. We derive, based on a novel posterior analysis, the asymptotic regret of this method. In addition, we empirically evaluate its performance in diverse and previously elusive bandit environments, e.g., with rewards not in the exponential family, subject to outliers, and with different per-arm reward distributions. We show that the proposed Bayesian nonparametric Thompson sampling outperforms, both in terms of average cumulative regret and regret volatility, state-of-the-art alternatives. The proposed method is valuable in the presence of bandit reward model uncertainty, as it avoids stringent case-by-case model design choices, yet provides important regret savings.
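The sample-then-act loop underlying Thompson sampling can be sketched in its simplest conjugate form. This is a deliberately simplified stand-in for the paper's method: a plain Gaussian model with an N(0, 1) prior per arm replaces the nonparametric Gaussian mixture, and the bandit instance is a hypothetical two-armed example:

```python
import numpy as np

rng = np.random.default_rng(3)
true_means = [0.0, 1.0]   # hypothetical two-armed bandit; arm 1 is better

# Gaussian Thompson sampling with conjugate N(0, 1) priors on each arm's
# mean reward (unit-variance rewards keep the posterior in closed form).
n = np.zeros(2)           # pull counts per arm
s = np.zeros(2)           # running reward sums per arm
for t in range(2000):
    # posterior over each arm's mean is N(s/(n+1), 1/(n+1)); sample it
    draws = [rng.normal(s[a] / (n[a] + 1.0), 1.0 / np.sqrt(n[a] + 1.0))
             for a in range(2)]
    a = int(np.argmax(draws))            # act greedily on the samples
    r = rng.normal(true_means[a], 1.0)   # observe a noisy reward
    n[a] += 1.0
    s[a] += r
```

As the posteriors concentrate, the sampled means for the better arm dominate and exploration tapers off; the paper's contribution is to make the per-arm reward model itself a nonparametric mixture learned online rather than a fixed parametric form like this one.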
Hamiltonian Monte Carlo (HMC) is a Markov chain Monte Carlo (MCMC) algorithm that avoids the random walk behavior and sensitivity to correlated parameters that plague many MCMC methods by taking a series of steps informed by first-order gradient information. These features allow it to converge to high-dimensional target distributions much more quickly than simpler methods such as random walk Metropolis or Gibbs sampling. However, HMC's performance is highly sensitive to two user-specified parameters: a step size and a desired number of steps L. In particular, if L is too small then the algorithm exhibits undesirable random walk behavior, while if L is too large the algorithm wastes computation. We introduce the No-U-Turn Sampler (NUTS), an extension to HMC that eliminates the need to set a number of steps L. NUTS uses a recursive algorithm to build a set of likely candidate points that spans a wide swath of the target distribution, stopping automatically when it starts to double back and retrace its steps. Empirically, NUTS performs at least as efficiently as and sometimes more efficiently than a well tuned standard HMC method, without requiring user intervention or costly tuning runs. We also derive a method for adapting the step size parameter on the fly based on primal-dual averaging. NUTS can thus be used with no hand-tuning at all. NUTS is also suitable for applications such as BUGS-style automatic inference engines that require efficient "turnkey" sampling algorithms.
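A minimal sketch of the HMC step that NUTS builds on: leapfrog integration of Hamiltonian dynamics followed by a Metropolis correction, with the step size and number of steps L chosen by hand for a standard-normal target (NUTS's contribution is precisely to remove the need to pick L):

```python
import numpy as np

# Target: standard normal, so log p(x) = -x**2 / 2 and grad log p(x) = -x.
def grad_logp(x):
    return -x

def leapfrog(x, p, eps, L):
    """L leapfrog steps of Hamiltonian dynamics with step size eps."""
    p = p + 0.5 * eps * grad_logp(x)     # initial half-step for momentum
    for _ in range(L - 1):
        x = x + eps * p
        p = p + eps * grad_logp(x)
    x = x + eps * p
    p = p + 0.5 * eps * grad_logp(x)     # final half-step for momentum
    return x, p

rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(5000):
    p0 = rng.normal()                    # resample momentum
    x_new, p_new = leapfrog(x, p0, eps=0.3, L=10)
    # Metropolis correction for the discretization error of leapfrog
    log_accept = (-x_new**2 / 2 - p_new**2 / 2) - (-x**2 / 2 - p0**2 / 2)
    if np.log(rng.uniform()) < log_accept:
        x = x_new
    samples.append(x)
samples = np.array(samples)
```

With a well-chosen (eps, L) the proposals travel far across the target while keeping the acceptance rate high; the paper's point is that a poorly chosen L either degrades to a random walk or wastes gradient evaluations, which NUTS avoids automatically.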
This is an up-to-date introduction to, and overview of, marginal likelihood computation for model selection and hypothesis testing. Computing normalizing constants of probability models (or ratios of constants) is a fundamental issue in many applications in statistics, applied mathematics, signal processing, and machine learning. This article provides a comprehensive study of the topic. We highlight limitations, benefits, connections, and differences among the different techniques. Problems and possible solutions regarding the use of improper priors are also described. Some of the most relevant methodologies are compared through theoretical comparisons and numerical experiments.
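The quantity in question is easy to pin down on a conjugate toy model where the normalizing constant is available in closed form; the simplest estimator in the overview's family, naive Monte Carlo over the prior, can then be checked against it (model and data value are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x_obs = 1.0

# Conjugate toy model: theta ~ N(0, 1), x | theta ~ N(theta, 1).
# The marginal likelihood is then known exactly: x ~ N(0, 2).
def lik(theta):
    return np.exp(-0.5 * (x_obs - theta) ** 2) / np.sqrt(2 * np.pi)

# Naive Monte Carlo: Z = E_prior[ p(x | theta) ], estimated by averaging
# the likelihood over draws from the prior.
theta = rng.normal(size=200_000)
Z_mc = lik(theta).mean()

Z_exact = np.exp(-x_obs ** 2 / 4.0) / np.sqrt(4.0 * np.pi)
```

This estimator is unbiased but, as the overview discusses, its variance blows up whenever the prior and the likelihood overlap poorly, which is what motivates the more sophisticated schemes (importance sampling, bridge and path sampling, nested sampling) compared in the article.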
Advances in information technology have led to extremely large datasets that are often kept in different storage centers. Existing statistical methods must be adapted to overcome the resulting computational obstacles while retaining statistical validity and efficiency. Split-and-conquer approaches have been applied in many areas, including quantile processes, regression analysis, principal eigenspaces, and exponential families. We study split-and-conquer approaches for the distributed learning of finite Gaussian mixtures. We recommend a reduction strategy and develop an effective MM algorithm. The new estimator is shown to be consistent and retains root-n consistency under some general conditions. Experiments based on simulated and real-world data show that the proposed split-and-conquer approach has comparable statistical performance to the global estimator based on the full dataset, if the latter is feasible. It can even slightly outperform the global estimator if the model assumptions do not match the real data. It also has better statistical and computational performance than some existing methods.
Cluster analysis requires many decisions: the clustering method and the implied reference model, the number of clusters and, often, several hyperparameters and algorithm tunings. In practice, one produces several partitions, and a final one is chosen based on validation or selection criteria. There exists an abundance of validation methods that, implicitly or explicitly, assume a certain clustering notion. Moreover, they are often restricted to operating on partitions obtained from a specific method. In this paper, we focus on groups that can be well separated by quadratic or linear boundaries. The reference cluster concept is defined through the quadratic discriminant score function and parameters describing cluster sizes, centers, and scatters. We develop two cluster-quality criteria called quadratic scores. We show that these criteria are consistent with groups generated from the general class of elliptically-symmetric distributions. The quest for this type of group is common in applications. The connection with likelihood theory for mixture models and model-based clustering is investigated. Based on bootstrap resampling of the quadratic scores, we propose a selection rule that allows choosing among many clustering solutions. The proposed method has the distinctive advantage that it can compare partitions that cannot be compared with other state-of-the-art methods. Extensive numerical experiments and the analysis of real data show that, even if some competing methods turn out to be superior in some setups, the proposed methodology achieves better overall performance.