This study concerns the formulation and application of Bayesian optimal experimental design to symbolic discovery: the inference from observational data of predictive models taking general functional forms. We apply constrained first-order methods to optimize an appropriate selection criterion, using Hamiltonian Monte Carlo to sample from the prior. A step in computing the predictive distribution, which involves a convolution, is carried out either by numerical integration or by fast transform methods.
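As an illustration of that last step, the sketch below computes a predictive density as the convolution of a model-output density with a noise density on a shared grid, once by direct numerical integration and once by a fast transform; the densities, grid, and parameters are illustrative assumptions, not the study's actual model.

```python
# Sketch: predictive density as a convolution p_pred = p_model * p_noise,
# computed by direct numerical integration and by FFT-based convolution.
import numpy as np
from scipy.signal import fftconvolve

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

p_model = np.exp(-0.5 * ((x - 1.0) / 0.8) ** 2)   # model-output density (assumed)
p_model /= p_model.sum() * dx
p_noise = np.exp(-0.5 * (x / 0.5) ** 2)           # noise density (assumed)
p_noise /= p_noise.sum() * dx

# Direct numerical integration of the convolution integral.
p_direct = np.array([
    np.sum(p_model * np.interp(xi - x, x, p_noise, left=0.0, right=0.0)) * dx
    for xi in x
])

# Fast transform route: FFT-based convolution aligned on the same grid.
p_fft = fftconvolve(p_model, p_noise, mode="same") * dx

print(np.max(np.abs(p_direct - p_fft)))  # the two routes should agree closely
```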
Causal discovery from observational and interventional data is challenging due to limited data and non-identifiability: factors that introduce uncertainty in estimating the underlying structural causal model (SCM). Selecting experiments (interventions) based on the uncertainty arising from both factors can expedite identification of the SCM. Existing methods in experimental design for causal discovery from limited data either rely on linearity assumptions on the SCM or select only the intervention target. This work incorporates recent advances in Bayesian causal discovery into the Bayesian optimal experimental design framework, enabling active causal discovery of large nonlinear SCMs while selecting both the intervention target and its value. We demonstrate the performance of the method on synthetic graphs (Erdős-Rényi, scale-free) for both linear and nonlinear SCMs, as well as on an \emph{in silico} single-cell gene regulatory network dataset.
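To make the selection criterion concrete, here is a hedged sketch of a nested Monte Carlo estimator of expected information gain for a candidate intervention value, on a toy two-variable linear SCM with a conjugate prior (chosen so the estimate can be checked against the analytic answer); the SCM, prior, and noise level are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5          # observation noise (assumed)
N, M = 2000, 2000    # outer / inner Monte Carlo sizes

def log_lik(y, theta, v):
    # log N(y; theta * v, sigma^2) for the toy SCM Y = theta * X + noise, do(X = v)
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * (y - theta * v) ** 2 / sigma**2

def eig(v):
    # Nested Monte Carlo estimate of E[ log p(y|theta, do) - log p(y|do) ]
    theta_out = rng.normal(size=N)                       # theta ~ p(theta)
    y = theta_out * v + rng.normal(scale=sigma, size=N)  # y ~ p(y|theta, do(X=v))
    theta_in = rng.normal(size=M)                        # fresh prior draws
    ll_out = log_lik(y, theta_out, v)
    ll_in = log_lik(y[:, None], theta_in[None, :], v)    # (N, M) inner likelihoods
    log_marg = np.logaddexp.reduce(ll_in, axis=1) - np.log(M)
    return np.mean(ll_out - log_marg)

for v in [0.0, 0.5, 1.0, 2.0]:
    # Analytic EIG for this conjugate toy model: 0.5 * log(1 + v^2 / sigma^2)
    print(v, eig(v), 0.5 * np.log1p(v**2 / sigma**2))
```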
The Kullback-Leibler (KL) divergence is widely used in variational inference for Bayesian neural networks (BNNs). However, the KL divergence has limitations such as unboundedness and asymmetry. We examine the more general, bounded, and symmetric Jensen-Shannon (JS) divergence. We formulate a new loss function for BNNs based on the geometric JS divergence and show that the conventional KL-divergence-based loss function is a special case of it. We evaluate the divergence part of the proposed loss function in closed form for a Gaussian prior; for any other general prior, Monte Carlo approximations can be used, and we provide algorithms implementing both cases. We demonstrate that the proposed loss function offers an additional parameter that can be tuned to control the degree of regularization, and we derive the conditions under which the proposed loss function regularizes better than the KL-divergence-based loss for Gaussian priors and posteriors. We demonstrate performance improvements over state-of-the-art KL-divergence-based BNNs on a noisy CIFAR dataset and a biased histopathology dataset.
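For concreteness, the following sketch evaluates one common closed-form construction of a skewed geometric JS divergence between diagonal Gaussians, exploiting the fact that the alpha-weighted geometric mean of two Gaussians is again Gaussian; the exact definition and parameterization used in the paper may differ, and the test values are illustrative.

```python
import numpy as np

def kl_diag(mu1, var1, mu2, var2):
    # KL( N(mu1, diag(var1)) || N(mu2, diag(var2)) )
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def geo_js_diag(mu_p, var_p, mu_q, var_q, alpha=0.5):
    # Skewed geometric JS divergence: the alpha-weighted geometric mean G of
    # two Gaussians is Gaussian, with precision a convex combination of the two.
    var_g = 1.0 / ((1 - alpha) / var_p + alpha / var_q)
    mu_g = var_g * ((1 - alpha) * mu_p / var_p + alpha * mu_q / var_q)
    return ((1 - alpha) * kl_diag(mu_p, var_p, mu_g, var_g)
            + alpha * kl_diag(mu_q, var_q, mu_g, var_g))

mu_p, var_p = np.zeros(3), np.ones(3)
mu_q, var_q = np.array([1.0, -0.5, 2.0]), np.array([0.5, 2.0, 1.0])
for a in [0.1, 0.5, 0.9]:  # alpha acts as the extra regularization knob
    print(a, geo_js_diag(mu_p, var_p, mu_q, var_q, alpha=a))
```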
Practical notation can convey valuable intuitions and concisely express new ideas. Information theory is of importance to machine learning, but the notation for information-theoretic quantities is sometimes opaque. We propose a practical and unified notation and extend it to include information-theoretic quantities between observed outcomes (events) and random variables. This includes the pointwise mutual information known in NLP, as well as specific surprise and specific information, known in cognitive science and in Bayesian optimal experimental design as information gain. We apply our notation to prove, using new intuitions, an approximation of the binomial coefficient mentioned in MacKay (2003). We also concisely rederive the evidence lower bound for variational auto-encoders and for variational inference in approximate Bayesian neural networks. Furthermore, we apply the notation to the popular information-theoretic acquisition functions in Bayesian active learning, which select the most informative (unlabelled) samples to be labelled by an expert, and we extend this acquisition function to the core-set problem, whose goal is to select the most informative samples given the labels.
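As a concrete instance of such an acquisition function, the sketch below scores unlabelled points with BALD, the mutual information between the predicted label and the model parameters, estimated from an ensemble of posterior predictive samples; the ensemble here is synthetic and purely illustrative.

```python
import numpy as np

def entropy(p, axis=-1):
    # Shannon entropy in nats, guarding against log(0)
    return -np.sum(p * np.log(np.clip(p, 1e-12, None)), axis=axis)

def bald(probs):
    # probs: (S, N, C) predictive probabilities from S posterior samples
    # (e.g. MC-dropout passes), N unlabelled points, C classes.
    # BALD = H[ E_theta p(y|x,theta) ] - E_theta H[ p(y|x,theta) ]
    mean_p = probs.mean(axis=0)                 # (N, C) marginal predictive
    return entropy(mean_p) - entropy(probs).mean(axis=0)

rng = np.random.default_rng(1)
logits = rng.normal(size=(20, 5, 3))            # toy ensemble: 20 samples, 5 points, 3 classes
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
scores = bald(probs)
print(scores, scores.argmax())                  # pick the most informative point to label
```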
We propose a new nonparametric mixture model for multivariate regression problems, inspired by the probabilistic K-nearest-neighbour algorithm. Using a conditionally specified model, predictions for out-of-sample inputs are based on similarity to each observed data point, yielding predictive distributions represented by Gaussian mixtures. Posterior inference over the parameters of the mixture components, as well as those of the distance metric, is performed using a mean-field variational Bayes algorithm with a stochastic-gradient-based optimization procedure. The method is especially advantageous in settings where the inputs are of relatively high dimension, the input-output relationship is complex relative to the size of the data, and the predictive distribution may be skewed or multimodal. A computational study on five datasets, two of which are synthetically generated, illustrates the clear advantage of our mixture-of-experts approach for high-dimensional inputs, outperforming competitor models in terms of both validation metrics and visual inspection.
We consider the Bayesian calibration of models of block copolymer (BCP) self-assembly using image data produced by microscopy or X-ray scattering techniques. To account for the random long-range disorder in BCP equilibrium structures, we introduce auxiliary variables that represent this aleatoric uncertainty. These variables, however, result in an integrated likelihood for high-dimensional image data that is typically intractable to evaluate. We tackle this challenging Bayesian inference problem with a likelihood-free approach based on measure transport, together with summary statistics of the image data. We also show that the expected information gain (EIG) of the data over the model parameters can be computed at no additional cost. Finally, we present a numerical case study based on the Ohta-Kawasaki model for diblock copolymer thin-film self-assembly and top-down microscopy characterization. For calibration, we introduce several domain-specific energy- and Fourier-based summary statistics and quantify their informativeness using the EIG. We demonstrate the power of the proposed approach for studying the effect of data corruption and experimental design on the calibration results.
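As an example of a Fourier-based summary statistic of the kind described, the sketch below computes a radially averaged power spectrum of a grayscale image; the binning scheme and the toy lamellar-like test pattern are illustrative assumptions, not the paper's statistics.

```python
import numpy as np

def radial_power_spectrum(img, n_bins=32):
    # Radially averaged power spectrum of a square grayscale image.
    n = img.shape[0]
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    freqs = np.fft.fftshift(np.fft.fftfreq(n))
    ky, kx = np.meshgrid(freqs, freqs, indexing="ij")
    k = np.hypot(kx, ky).ravel()
    bins = np.linspace(0, k.max(), n_bins + 1)
    idx = np.clip(np.digitize(k, bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=spec.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return sums / np.maximum(counts, 1)

# Toy lamellar-like pattern: a single dominant wavelength shows up as a spectral peak.
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
img = np.sin(8 * x)[None, :] * np.ones((128, 1))
print(radial_power_spectrum(img).argmax())
```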
Modern deep learning methods constitute incredibly powerful tools for tackling a myriad of challenging problems. However, since deep learning methods operate as black boxes, the uncertainty associated with their predictions is often difficult to quantify. Bayesian statistics offers a formalism for understanding and quantifying the uncertainty associated with deep neural network predictions. This tutorial provides an overview of the relevant literature and a complete toolset to design, implement, train, use, and evaluate Bayesian neural networks, i.e., stochastic artificial neural networks trained using Bayesian methods.
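A minimal sketch of the kind of stochastic network such tutorials cover: a single variational Gaussian weight trained by maximizing the ELBO (Bayes-by-backprop style) on a toy regression task, with the prior, noise scale, and architecture as illustrative assumptions.

```python
import torch

torch.manual_seed(0)
x = torch.linspace(-2, 2, 100).unsqueeze(1)
y = 1.5 * x + 0.3 * torch.randn_like(x)          # toy data, true weight 1.5

mu = torch.zeros(1, requires_grad=True)          # variational mean of the weight
rho = torch.tensor([-3.0], requires_grad=True)   # sigma = softplus(rho) > 0
opt = torch.optim.Adam([mu, rho], lr=0.05)

for step in range(500):
    sigma = torch.nn.functional.softplus(rho)
    w = mu + sigma * torch.randn(1)              # reparameterized posterior sample
    log_lik = -0.5 * ((y - x * w) / 0.3).pow(2).sum()
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) )
    kl = 0.5 * (sigma**2 + mu**2 - 1.0 - 2.0 * torch.log(sigma)).sum()
    loss = kl - log_lik                          # negative ELBO
    opt.zero_grad(); loss.backward(); opt.step()

print(mu.item(), torch.nn.functional.softplus(rho).item())  # ~1.5, small sigma
```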
Hierarchical models with gamma hyperpriors provide a flexible, sparsity-promoting framework for bridging $L^1$ and $L^2$ regularization in Bayesian formulations of inverse problems. Despite the Bayesian motivation for these models, existing methodologies are limited to \textit{maximum a posteriori} estimation; the potential to perform uncertainty quantification has not yet been realized. This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors. The proposed variational inference approach yields accurate reconstructions, provides meaningful uncertainty quantification, and is easy to implement. In addition, it naturally lends itself to model selection for the choice of hyperparameters. We illustrate the performance of our methodology in several computed examples, including a deconvolution problem and sparse identification of dynamical systems from time-series data.
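For intuition about the alternating structure, here is a hedged sketch of a MAP-style iterative alternating scheme for a linear model with conditionally Gaussian coefficients and gamma hyperpriors, alternating a weighted ridge solve for x with a closed-form hyperparameter update from the stationarity conditions; the update formulas and all settings are assumptions for illustration, not the paper's variational scheme.

```python
import numpy as np

# Model: y = A x + noise,  x_i | theta_i ~ N(0, theta_i),  theta_i ~ Gamma(beta, theta_star).
rng = np.random.default_rng(0)
n, m = 50, 200
A = rng.normal(size=(n, m)) / np.sqrt(n)
x_true = np.zeros(m); x_true[[10, 80, 150]] = [3.0, -2.0, 4.0]   # sparse signal
sigma = 0.05
y = A @ x_true + sigma * rng.normal(size=n)

beta, theta_star = 1.6, 1e-2
eta = beta - 1.5                       # requires beta > 3/2
theta = np.full(m, theta_star)
for _ in range(50):
    # x-update: weighted ridge solve (A^T A / sigma^2 + diag(1/theta)) x = A^T y / sigma^2
    H = A.T @ A / sigma**2 + np.diag(1.0 / theta)
    x = np.linalg.solve(H, A.T @ y / sigma**2)
    # theta-update: positive root of the per-coordinate stationarity quadratic
    theta = 0.5 * theta_star * (eta + np.sqrt(eta**2 + 2 * x**2 / theta_star))

print(np.flatnonzero(np.abs(x) > 0.5))  # should recover the sparse support
```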
Empirical likelihood has recently been widely applied in the Bayesian framework. Markov chain Monte Carlo (MCMC) methods are frequently used to sample from the posterior distribution of the parameters of interest. However, the complex and often non-convex nature of the likelihood support erects a great hurdle in choosing an appropriate MCMC algorithm. Such difficulties restrict the use of Bayesian-empirical-likelihood (BayesEL) based methods in many applications. In this paper, we propose a two-step Metropolis-Hastings algorithm to sample from BayesEL posteriors. Our proposal is specified hierarchically: the estimating equations determining the empirical likelihood are used to propose values for a set of parameters conditional on the proposed values of the remaining parameters. Furthermore, we discuss Bayesian model selection using empirical likelihood and extend our two-step Metropolis-Hastings algorithm to a reversible-jump Markov chain Monte Carlo procedure for sampling from the resulting posterior. Finally, several applications of our proposed methods are presented.
Uncertainty quantification is crucial to inverse problems, as it can provide decision-makers with valuable information about the inversion results. For example, seismic inversion is a notoriously ill-posed inverse problem due to the band-limited and noisy nature of seismic data. It is therefore of paramount importance to quantify the uncertainties associated with the inversion process to ease the subsequent interpretation and decision-making processes. Within this frame of reference, sampling from a target posterior provides a fundamental approach to quantifying uncertainty in seismic inversion. However, selecting appropriate prior information in a probabilistic inversion is crucial, yet non-trivial, as it influences the ability of a sampling-based inference to provide geological realism in the posterior samples. To overcome such limitations, we present a regularized variational inference framework that performs posterior inference by implicitly regularizing the Kullback-Leibler divergence loss with a CNN-based denoiser by means of the Plug-and-Play method. We call this new algorithm Plug-and-Play Stein Variational Gradient Descent (PnP-SVGD) and demonstrate its ability to produce high-resolution, trustworthy samples representative of the subsurface structures, which we argue could be used for post-inference tasks such as reservoir modelling and history matching. To validate the proposed method, numerical tests are performed on both synthetic and field post-stack seismic data.
Recent advances in coreset methods have shown that a selection of representative datapoints can replace massive volumes of data for Bayesian inference, preserving the relevant statistical information and significantly accelerating subsequent downstream tasks. Existing variational coreset constructions rely on either selecting subsets of the observed datapoints, or jointly performing approximate inference and optimizing pseudodata in the observed space, akin to inducing-point methods in Gaussian processes. So far, both approaches are limited by complexities in evaluating their objectives for general-purpose models, and they require generating samples from a typically intractable posterior over the coreset throughout inference and testing. In this work, we present a black-box variational inference framework for coresets that overcomes these constraints and enables principled application of variational coresets to intractable models, such as Bayesian neural networks. We apply our techniques to supervised learning problems, and compare them with existing approaches in the literature for data summarization and inference.
We present the GPry algorithm for fast Bayesian inference of general (non-Gaussian) posteriors with a moderate number of parameters. GPry requires neither pre-training nor special hardware such as GPUs, and is intended as a drop-in replacement for traditional Monte Carlo methods for Bayesian inference. Our algorithm is based on generating a Gaussian Process surrogate model of the log-posterior, aided by a Support Vector Machine classifier that excludes extreme or non-finite values. An active learning scheme allows us to reduce the number of required posterior evaluations by two orders of magnitude compared to traditional Monte Carlo inference. Our algorithm allows for parallel evaluations of the posterior at optimal locations, further reducing wall-clock times. We significantly improve performance by using properties of the posterior in our active learning scheme and in the definition of the GP prior; in particular, we account for the expected dynamical range of the posterior in different dimensionalities. We test our model against a number of synthetic and cosmological examples. GPry outperforms traditional Monte Carlo methods when the evaluation time of the likelihood (or the calculation of theoretical observables) is of the order of seconds; for evaluation times of over a minute it can perform inference in days that would take months using traditional methods. GPry is distributed as an open source Python package (pip install gpry) and can also be found at https://github.com/jonaselgammal/GPry.
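To illustrate the surrogate-plus-active-learning idea (not GPry's actual acquisition rule or classifier), the sketch below fits a GP to log-posterior evaluations of a toy 1-D target and greedily adds the candidate that maximizes an exploration-weighted score; the target, candidate pool, and acquisition are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def log_post(x):
    # Toy unnormalized Gaussian log-posterior (assumed target)
    return -0.5 * (x**2).sum(axis=-1) / 0.3

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5, 1))            # initial design
Y = log_post(X)

for it in range(20):
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True).fit(X, Y)
    cand = rng.uniform(-3, 3, size=(256, 1))   # random candidate pool
    mu, sd = gp.predict(cand, return_std=True)
    acq = np.exp(mu) * sd                      # favor high posterior and high uncertainty
    x_new = cand[[acq.argmax()]]
    X = np.vstack([X, x_new]); Y = np.append(Y, log_post(x_new))

print(X[np.argmax(Y)])                         # evaluations concentrate near the mode
```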
Bayesian coresets approximate a posterior distribution by building a small weighted subset of the data points. Any inference procedure that is too computationally expensive to run on the full posterior can instead be run cheaply on the coreset, with results approximating those on the full data. However, current methods are limited by either large runtimes or the need for the user to specify a low-cost approximation to the full posterior. We propose a Bayesian coreset construction algorithm that first selects a uniformly random subset of data and then optimizes the weights using a novel quasi-Newton method. Our algorithm is an easy-to-implement, black-box method that does not require the user to specify a low-cost posterior approximation, and it is the first to come with a general high-probability bound on the KL divergence of the output coreset posterior. Experiments demonstrate that our method provides significant improvements in coreset quality against alternatives with comparable construction times, with far less storage cost and user input required.
We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization. Our method iteratively transports a set of particles to match the target distribution, by applying a form of functional gradient descent that minimizes the KL divergence. Empirical studies are performed on various real world models and datasets, on which our method is competitive with existing state-of-the-art methods. The derivation of our method is based on a new theoretical result that connects the derivative of KL divergence under smooth transforms with Stein's identity and a recently proposed kernelized Stein discrepancy, which is of independent interest.
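A minimal sketch of the resulting particle update on a 1-D bimodal target, using an RBF kernel with the median-heuristic bandwidth; the target, step size, and particle count are illustrative.

```python
import numpy as np

def dlogp(x):
    # Score of p(x) = 0.5 N(-2, 1) + 0.5 N(2, 1)
    w1 = np.exp(-0.5 * (x + 2) ** 2); w2 = np.exp(-0.5 * (x - 2) ** 2)
    return (-(x + 2) * w1 - (x - 2) * w2) / (w1 + w2)

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))                    # particles
for _ in range(500):
    d2 = (x - x.T) ** 2                          # pairwise squared distances
    h = np.median(d2) / np.log(len(x) + 1.0)     # median-heuristic bandwidth
    K = np.exp(-d2 / h)                          # RBF kernel matrix
    grad_K = 2.0 * (x - x.T) * K / h             # sum_j grad_{x_j} k(x_j, x_i), rowwise
    phi = (K @ dlogp(x) + grad_K.sum(axis=1, keepdims=True)) / len(x)
    x = x + 0.1 * phi                            # functional gradient step

print(x.mean(), x.std())                         # should reflect the bimodal target
```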
One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this paper, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find the member of that family which is close to the target. Closeness is measured by Kullback-Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this paper is to catalyze statistical research on this class of algorithms.
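For the mixture-of-Gaussians example mentioned above, here is a minimal coordinate-ascent (CAVI) sketch with unit-variance components and a Gaussian prior on the component means; the data and hyperparameters are illustrative.

```python
import numpy as np

# CAVI for x_n ~ (1/K) sum_k N(mu_k, 1),  mu_k ~ N(0, sigma0^2)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 150), rng.normal(0, 1, 150), rng.normal(4, 1, 150)])
K, sigma0_sq = 3, 10.0

m = rng.normal(size=K)                # variational means of mu_k
s2 = np.ones(K)                       # variational variances of mu_k
for _ in range(100):
    # Update cluster responsibilities phi_nk (softmax over k).
    logits = np.outer(x, m) - 0.5 * (m**2 + s2)
    phi = np.exp(logits - logits.max(axis=1, keepdims=True))
    phi /= phi.sum(axis=1, keepdims=True)
    # Update q(mu_k) = N(m_k, s2_k).
    denom = 1.0 / sigma0_sq + phi.sum(axis=0)
    m = (phi * x[:, None]).sum(axis=0) / denom
    s2 = 1.0 / denom

print(np.sort(m))                     # should approach the true means (-3, 0, 4)
```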
We study the problem of globally optimizing a variable of interest that is part of a causal model in which interventions can be performed. This problem arises in many areas of science, including biology, operations research, and healthcare. We propose Causal Entropy Optimization (CEO), a framework that generalizes Causal Bayesian Optimization (CBO) to account for all sources of uncertainty, including the one arising from the causal graph structure. CEO incorporates the causal structure uncertainty both in the surrogate models for the causal effects and in the mechanism used to select interventions via an information-theoretic acquisition function. The resulting algorithm automatically trades off structure learning and causal-effect optimization, while naturally accounting for observation noise. For various synthetic and real-world structural causal models, CEO achieves faster convergence to the global optimum than CBO while also learning the graph. Furthermore, our joint approach to structure learning and causal optimization improves over sequential, structure-learning-first approaches.
This is an up-to-date introduction to, and overview of, marginal likelihood computation for model selection and hypothesis testing. Computing normalizing constants of probability models (or ratios of constants) is a fundamental issue in many applications in statistics, applied mathematics, signal processing, and machine learning. This article provides a comprehensive study of the topic. We highlight limitations, benefits, connections, and differences among the different techniques. Problems and possible solutions regarding the use of improper priors are also described. Some of the most relevant methodologies are compared through theoretical comparisons and numerical experiments.
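As a small worked example of the problem, the sketch below estimates the normalizing constant of a conjugate Gaussian model by naive Monte Carlo over the prior and checks it against the exact Gaussian marginal; the model and sample sizes are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.special import logsumexp

# Model: theta ~ N(0, tau^2),  y_i | theta ~ N(theta, sigma^2)
rng = np.random.default_rng(0)
tau, sigma, n = 2.0, 1.0, 10
y = rng.normal(1.0, sigma, size=n)

# Exact evidence: marginally, y ~ N(0, sigma^2 I + tau^2 11^T)
cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
log_Z_exact = multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)

# Naive Monte Carlo over the prior: log Z ~ logsumexp_s log p(y|theta_s) - log S
S = 100_000
theta = rng.normal(0.0, tau, size=S)
log_lik = norm.logpdf(y[None, :], loc=theta[:, None], scale=sigma).sum(axis=1)
log_Z_mc = logsumexp(log_lik) - np.log(S)

print(log_Z_exact, log_Z_mc)          # the two estimates should approximately agree
```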
Bayesian quadrature (BQ) is a method for solving numerical integration problems in a Bayesian manner, which allows users to quantify their uncertainty about the solution. The standard approach to BQ is based on a Gaussian process (GP) approximation of the integrand. As a result, BQ is inherently limited to cases where the GP approximation can be done in an efficient manner, thus often prohibiting very high-dimensional or non-smooth target functions. This paper proposes to tackle this issue with a new Bayesian numerical integration algorithm based on Bayesian Additive Regression Trees (BART) priors, which we call BART-Int. BART priors are easy to tune and well-suited for discontinuous functions. We demonstrate that they also lend themselves naturally to a sequential design setting and that explicit convergence rates can be obtained in a variety of settings. The advantages and disadvantages of this new methodology are highlighted on a set of benchmark tests including the Genz functions and on a Bayesian survey design problem.
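For contrast with BART-Int, here is a sketch of standard GP-based BQ on [0, 1] with an RBF kernel, where the kernel mean embedding needed for the posterior mean of the integral is available in closed form; the integrand, lengthscale, and design are illustrative assumptions.

```python
import numpy as np
from scipy.special import erf

f = lambda x: np.sin(3 * x) + x**2               # integrand (assumed)
ell = 0.2                                        # kernel lengthscale (assumed)
k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

X = np.linspace(0, 1, 8)                         # design nodes
K = k(X, X) + 1e-10 * np.eye(len(X))             # kernel matrix with jitter
# Kernel mean embedding z_i = int_0^1 k(x, x_i) dx, closed form for the RBF kernel.
z = ell * np.sqrt(np.pi / 2) * (erf((1 - X) / (np.sqrt(2) * ell))
                                + erf(X / (np.sqrt(2) * ell)))

w = np.linalg.solve(K, z)                        # BQ weights
I_mean = w @ f(X)                                # posterior mean of the integral
# Initial variance term int int k dx dx' approximated on a fine grid for brevity.
g = np.linspace(0, 1, 400)
I_var = k(g, g).mean() - z @ np.linalg.solve(K, z)

print(I_mean, np.sqrt(max(I_var, 0.0)))          # exact integral is about 0.9967
```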
The core principle of variational inference (VI) is to convert the statistical inference problem of computing a complex posterior probability density into a tractable optimization problem. This property makes VI faster than several sampling-based techniques. However, traditional VI algorithms do not scale to large datasets and cannot readily infer out-of-sample data points without re-running the optimization process. Recent developments in the field, such as stochastic, black-box, and amortized VI, have helped address these issues. Generative modelling tasks nowadays make extensive use of amortized VI for its efficiency and scalability, as it utilizes a parameterized function to learn the parameters of the approximate posterior density. In this paper, we review the mathematical foundations of various VI techniques to form the basis for understanding amortized VI. In addition, we provide an overview of recent trends addressing the issues of amortized VI, such as the amortization gap, generalization issues, inconsistent representation learning, and posterior collapse. Finally, we analyze alternative divergence measures that improve VI optimization.
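A minimal sketch of the amortization idea: an encoder network maps each observation directly to the parameters of its approximate posterior q(z|x), so a new point requires only a forward pass rather than a fresh optimization; the architecture, decoder, and data are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
enc = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))  # x -> (mu, logvar) of 1-D z
dec = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))  # z -> reconstruction of x
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

z_true = torch.randn(512, 1)
x = torch.cat([z_true, z_true**2], dim=1) + 0.1 * torch.randn(512, 2)

for step in range(2000):
    mu, logvar = enc(x).chunk(2, dim=1)
    z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)     # reparameterization
    recon = dec(z)
    rec_loss = ((x - recon) ** 2).sum(dim=1).mean() / (2 * 0.1**2)
    kl = 0.5 * (mu**2 + logvar.exp() - 1 - logvar).sum(dim=1).mean()
    loss = rec_loss + kl                                     # negative ELBO (up to constants)
    opt.zero_grad(); loss.backward(); opt.step()

x_new = torch.tensor([[1.0, 1.0]])
print(enc(x_new).detach())   # posterior parameters for an unseen point, no re-optimization
```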
We develop an optimization algorithm suitable for Bayesian learning in complex models. Our approach relies on natural-gradient updates within a general black-box framework for efficient training with limited model-specific derivations. It applies to the class of exponential-family variational posterior distributions; we discuss the Gaussian case in detail, for which the updates take a particularly simple form. Our Quasi Black-box Variational Inference (QBVI) framework is readily applicable to a wide class of Bayesian inference problems and is simple to implement, as the updates of the variational posterior involve neither gradients with respect to the model parameters nor the prescription of the Fisher information matrix. We develop QBVI under different hypotheses for the posterior covariance matrix, discuss details of its robust and feasible implementation, and provide a number of real-world applications to demonstrate its effectiveness.