The Bayesian lasso is constructed within the linear regression framework and applies Gibbs sampling to estimate the regression parameters. This paper develops a new sparse learning model, termed the Bayesian lasso sparse (BLS) model, which adopts the hierarchical model formulation of the Bayesian lasso. The main difference from the original Bayesian lasso lies in the estimation procedure: the BLS uses a learning algorithm based on a type-II maximum likelihood procedure. In contrast to the Bayesian lasso, the BLS yields sparse estimates of the regression parameters. The BLS method is also extended to nonlinear supervised learning problems by introducing kernel functions. We compare the BLS model with the well-known relevance vector machine, the fast Laplace method, the Bayesian lasso, and the lasso on simulated and real data. The numerical results show that the BLS is sparse and precise, especially when dealing with noisy and irregular datasets.
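For context, a minimal sketch of the hierarchical formulation of the Bayesian lasso that the BLS builds on (the standard Park-Casella hierarchy; marginalising the local scales $\tau_j^2$ recovers a Laplace prior on each coefficient):

$$
y \mid X, \beta, \sigma^2 \sim \mathcal{N}(X\beta, \sigma^2 I), \qquad
\beta_j \mid \sigma^2, \tau_j^2 \sim \mathcal{N}(0, \sigma^2 \tau_j^2), \qquad
\tau_j^2 \sim \mathrm{Exp}(\lambda^2/2).
$$

Read this way, the BLS replaces the Gibbs sampler of the original model with optimisation of the marginal (type-II) likelihood over the local hyperparameters, which, as in the relevance vector machine, can drive individual scales to zero and hence yield sparse coefficient estimates.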
The horseshoe prior is known to possess many desirable properties for Bayesian estimation of sparse parameter vectors, yet its density function lacks an analytic form. As such, it is challenging to find a closed-form solution for the posterior mode. Conventional horseshoe estimators use the posterior mean to estimate the parameters, but these estimates are not sparse. We propose a novel expectation-maximisation (EM) procedure for computing the MAP estimates of the parameters in the case of the standard linear model. A particular strength of our approach is that the M-step depends only on the form of the prior and it is independent of the form of the likelihood. We introduce several simple modifications of this EM procedure that allow for straightforward extension to generalised linear models. In experiments performed on simulated and real data, our approach performs comparably to, or better than, state-of-the-art sparse estimation methods in terms of statistical performance and computational cost.
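As an illustration of this EM template (a minimal sketch under assumptions, not the paper's derivation; the function name below is illustrative): for a normal scale-mixture prior in the standard linear model, the M-step is a weighted ridge solve and only the E-step weight depends on the prior. The weight used below, $\lambda/|\beta_j|$, is the closed form for the Laplace (Bayesian lasso) mixture; the horseshoe case replaces it with the corresponding conditional expectation derived in the paper.

```python
import numpy as np

def em_map_scale_mixture(X, y, lam=1.0, n_iter=100, eps=1e-8):
    """EM for the MAP of beta in y ~ N(X beta, I) under a normal scale-mixture prior.

    Illustrated for the Laplace (Bayesian lasso) mixture, where the E-step
    weight E[1/tau_j^2 | beta_j] has the closed form lam / |beta_j|.
    """
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    beta = np.linalg.solve(XtX + lam * np.eye(p), Xty)   # ridge initialisation
    for _ in range(n_iter):
        w = lam / (np.abs(beta) + eps)                   # E-step: per-coefficient weights
        beta = np.linalg.solve(XtX + np.diag(w), Xty)    # M-step: weighted ridge
    return beta
```

Each iteration costs a single $p \times p$ solve, which is what keeps the overall procedure comparable in cost to penalized-likelihood methods.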
We introduce a new empirical Bayes approach for large-scale multiple linear regression. Our approach combines two key ideas: (i) the use of flexible "adaptive shrinkage" priors, which approximate the nonparametric family of scale mixtures of normals by a finite mixture of normal distributions; and (ii) the use of variational approximations to efficiently estimate the prior hyperparameters and compute approximate posteriors. Combining these two ideas results in fast and flexible methods, with computational speed comparable to fast penalized regression methods such as the lasso, and with excellent predictive accuracy across a wide range of scenarios. Furthermore, we show that the posterior mean from our method can be interpreted as solving a penalized regression problem, with the precise form of the penalty function being learned from the data by directly solving an optimization problem (rather than being tuned by cross-validation). Our methods are implemented in an R package available at https://github.com/stephenslab/mr.ash.alpha.
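Concretely, the "adaptive shrinkage" prior referred to here is a finite mixture of zero-centred normals whose mixture weights are treated as hyperparameters (a sketch of the usual form, with the grid of component variances fixed in advance):

$$
b_j \sim \sum_{k=1}^{K} \pi_k \, \mathcal{N}\!\big(0, \sigma_k^2\big),
\qquad 0 \le \sigma_1^2 < \sigma_2^2 < \cdots < \sigma_K^2,
$$

with the weights $\pi$ (and the residual variance) estimated by maximising the approximate marginal likelihood obtained from the variational approximation.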
We propose a variational Bayesian proportional hazards model for prediction and variable selection with high-dimensional survival data. Our method, based on a mean-field variational approximation, overcomes the high computational cost of MCMC while retaining useful features, providing excellent point estimates and offering a natural mechanism for variable selection via posterior inclusion probabilities. The performance of the proposed approach is assessed through extensive simulations and compared against other state-of-the-art Bayesian variable selection methods, demonstrating comparable or better performance. Finally, we demonstrate how the proposed method can be used for censored survival outcomes on two transcriptomic datasets, where we identify genes with pre-existing biological interpretations.
Bayesian variable selection methods are powerful techniques for fitting and performing inference on sparse high-dimensional linear regression models. However, many are computationally intensive or require restrictive prior distributions on the model parameters. Likelihood-based penalization methods are more computationally friendly, but inference then requires resource-intensive refitting techniques. In this paper, we propose an efficient and powerful Bayesian approach for sparse high-dimensional linear regression. Only minimal prior assumptions on the parameters are required, through the use of plug-in empirical Bayes estimates of the hyperparameters. Efficient maximum a posteriori (MAP) estimation is performed using a partitioned and extended expectation conditional maximization (ECM) algorithm. The result is the empirical Bayes ECM (PROBE) algorithm for sparse high-dimensional linear regression. We propose methods for estimating credible and prediction intervals for predictions of future values. We compare the empirical properties of our predictions and predictive inference with comparable approaches through extensive simulation studies and an analysis of a cancer cell line drug response study. The proposed approach is implemented in the R package probe.
Hierarchical models with gamma hyperpriors provide a flexible, sparsity-promoting framework for bridging $\ell^1$ and $\ell^2$ regularizations in Bayesian formulations of inverse problems. Despite the Bayesian motivation for these models, existing methodologies are limited to \textit{maximum a posteriori} estimation; the possibility of performing uncertainty quantification has not yet been realized. This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors. The proposed variational inference approach yields accurate reconstructions, provides meaningful uncertainty quantification, and is easy to implement. In addition, it naturally lends itself to model selection for the choice of hyperparameters. We illustrate the performance of our methodology on several computed examples, including a deconvolution problem and sparse identification of dynamical systems from time-series data.
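A sketch of the conditionally Gaussian hierarchy underlying such models and of the alternation it suggests (notation assumed here, not taken from the paper): with linear forward map $A$ and noise level $\sigma$,

$$
y \mid x \sim \mathcal{N}(Ax, \sigma^2 I), \qquad
x_j \mid \theta_j \sim \mathcal{N}(0, \theta_j), \qquad
\theta_j \sim \mathrm{Gamma}(\beta, \vartheta),
$$

an iterative alternating scheme repeats an $x$-update, $x \leftarrow \arg\min_x \tfrac{1}{2\sigma^2}\|Ax - y\|^2 + \tfrac12 \sum_j x_j^2/\theta_j$ (a weighted Tikhonov problem), and a coordinatewise $\theta$-update that is available in closed form; the shape parameter $\beta$ controls where the induced penalty sits between $\ell^1$-like and $\ell^2$-like behaviour. The variational scheme proposed in the paper replaces these point updates with updates of approximate posterior factors, which is what makes uncertainty quantification possible.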
Regression models are used in a wide range of applications and provide a powerful scientific tool for researchers from different fields. Linear, or simple parametric, models are often insufficient to describe complex relationships between input variables and the response. Such relationships can be better described by flexible approaches such as neural networks, but this results in less interpretable models and potential overfitting. Alternatively, specific parametric nonlinear functions can be used, but specifying such functions is generally complicated. In this paper, we introduce a flexible approach for constructing highly flexible nonlinear parametric regression models. The nonlinear features are generated hierarchically, similarly to deep learning, but with additional flexibility regarding the types of features that may be considered. This flexibility, combined with variable selection, allows us to find a small set of important features and thereby obtain more interpretable models. Within the space of possible functions, a Bayesian approach is adopted, introducing priors on functions based on their complexity. A genetically modified mode jumping Markov chain Monte Carlo algorithm is employed to perform Bayesian inference and to estimate the posterior probabilities used for model averaging. In a variety of applications, we illustrate how our approach can be used to obtain meaningful nonlinear models. Furthermore, we compare its predictive performance with several machine learning algorithms.
Many scientific problems require identifying a small set of covariates that are associated with a target response and estimating their effects. Often, these effects are nonlinear and include interactions, so linear and additive methods can lead to poor estimation and variable selection. Unfortunately, methods that simultaneously express sparsity, nonlinearity, and interactions are computationally intractable -- with runtime at least quadratic in the number of covariates, and often worse. In the present work, we solve this computational bottleneck. We show that suitable interaction models have a kernel representation, namely there exists a "kernel trick" to perform variable selection and estimation in $O$(# covariates) time. Our resulting fit corresponds to a sparse orthogonal decomposition of the regression function in a Hilbert space (i.e., a functional ANOVA decomposition), where interaction effects represent all variation that cannot be explained by lower-order effects. On a variety of synthetic and real data sets, our approach outperforms existing methods used for large, high-dimensional data sets while remaining competitive (or being orders of magnitude faster) in runtime.
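As a minimal illustration of why interaction models admit such a kernel representation (a generic product construction, not necessarily the exact kernel used in the paper; names and parameters below are illustrative): the product expands into a sum over interactions of every order, yet evaluating it costs only $O(p)$ per pair of points.

```python
import numpy as np

def interaction_kernel(x, z, eta=0.1):
    """prod_j (1 + eta * x_j * z_j): the expansion contains every multiplicative
    interaction prod_{j in S} x_j z_j weighted by eta^|S| over all subsets S,
    yet a single O(p) pass evaluates the kernel."""
    return np.prod(1.0 + eta * x * z)

# Gram matrix for kernel ridge regression: O(n^2 p) in total
X = np.random.default_rng(0).standard_normal((50, 500))
K = np.array([[interaction_kernel(a, b) for b in X] for a in X])
```

Fitting in this representation then scales with the number of covariates $p$ rather than with the number of interaction terms, which grows combinatorially.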
We propose a novel model selection algorithm based on a penalized maximum likelihood estimator (PMLE) for functional hidden dynamic geostatistical models (f-HDGMs). These models employ a classic mixed-effects regression structure with embedded spatio-temporal dynamics to model georeferenced data observed in a functional domain; the parameters of interest are therefore functions across this domain. The algorithm simultaneously selects the relevant spline basis functions and regressors that are used to model the fixed-effects relationship between the response variable and the covariates. In this way, it automatically shrinks to zero irrelevant portions of the functional coefficients, or the entire effect of irrelevant regressors. The algorithm is based on iterative optimization and uses an adaptive least absolute shrinkage and selection operator (LASSO) penalty function, in which the weights are obtained from the unpenalized f-HDGM maximum likelihood estimators. The computational burden of the maximization is drastically reduced by a local quadratic approximation of the likelihood. Through a Monte Carlo simulation study, we analyse the performance of the algorithm under different scenarios, including strong correlation among the regressors. We show that the penalized estimator outperforms the unpenalized estimator in all the cases we considered. We apply the algorithm to a real case study in which hourly nitrogen dioxide concentrations recorded in the Lombardy region of Italy are modelled as a functional process with several weather and land cover covariates.
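Schematically, with $\hat\theta^{\,\mathrm{MLE}}$ denoting the unpenalized f-HDGM estimate, the adaptive LASSO penalty described above takes the usual form

$$
P_\lambda(\theta) = \lambda \sum_j \frac{|\theta_j|}{\big|\hat\theta_j^{\,\mathrm{MLE}}\big|},
$$

and the local quadratic approximation replaces each $|\theta_j|$ near a current iterate $\theta_j^{(k)} \neq 0$ by $|\theta_j^{(k)}| + \big(\theta_j^2 - (\theta_j^{(k)})^2\big)/\big(2|\theta_j^{(k)}|\big)$, so that each step of the iterative optimization reduces to a ridge-type problem (both are standard constructions, assumed here as a reading of the abstract rather than quoted from the paper).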
Inference for the selected parameters after selection with the group LASSO (or generalized variants such as the overlapping, sparse, or standardized group LASSO) is unreliable in the absence of adjustments for selection bias. In the penalized Gaussian regression setup, existing approaches provide adjustments for selection events that can be expressed as linear inequalities in the data variables. Such a representation, however, fails to hold for selection with the group LASSO and substantially obstructs the scope of subsequent post-selective inference. Key questions of inferential interest -- for example, inference for the effects of selected variables on the outcome -- remain unanswered. In this paper, we develop a consistent, selective Bayesian method that addresses these gaps by deriving an adjustment factor for the likelihood, together with an approximation, that removes the bias introduced by selection with the group LASSO. Experiments on simulated data and data from the Human Connectome Project demonstrate that our method recovers the effects of parameters within the selected groups while paying only a small price for the bias adjustment.
Existing deep-learning based tomographic image reconstruction methods do not provide accurate estimates of reconstruction uncertainty, hindering their real-world deployment. This paper develops a method, termed as the linearised deep image prior (DIP), to estimate the uncertainty associated with reconstructions produced by the DIP with total variation regularisation (TV). Specifically, we endow the DIP with conjugate Gaussian-linear model type error-bars computed from a local linearisation of the neural network around its optimised parameters. To preserve conjugacy, we approximate the TV regulariser with a Gaussian surrogate. This approach provides pixel-wise uncertainty estimates and a marginal likelihood objective for hyperparameter optimisation. We demonstrate the method on synthetic data and real-measured high-resolution 2D $\mu$CT data, and show that it provides superior calibration of uncertainty estimates relative to previous probabilistic formulations of the DIP. Our code is available at https://github.com/educating-dip/bayes_dip.
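Schematically, the linearisation referred to here expands the network output around its optimised parameters $\hat\theta$ (notation assumed),

$$
f(z; \theta) \approx f(z; \hat\theta) + J_{\hat\theta}(z)\,(\theta - \hat\theta),
\qquad J_{\hat\theta}(z) = \left.\frac{\partial f(z;\theta)}{\partial\theta}\right|_{\theta=\hat\theta},
$$

which turns the DIP into a Gaussian-linear model in $\theta$; together with the Gaussian surrogate for the TV regulariser, pixel-wise posterior covariances and the marginal likelihood then follow from standard conjugate Gaussian formulas.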
Surrogate models are used to alleviate the computational burden in engineering tasks that require repeated evaluations of computationally demanding models of physical systems, for example the efficient propagation of uncertainties. For models exhibiting a strongly nonlinear dependence on their input parameters, standard surrogate techniques such as polynomial chaos expansions are often not sufficient to obtain an accurate representation of the original model response. By applying a rational approximation, the approximation error can be efficiently reduced for models that are accurately described by rational functions. Specifically, we aim to approximate complex-valued models. A common approach for obtaining the surrogate coefficients is to minimize the sample-based error between the model and the surrogate in a least-squares sense. To obtain an accurate representation of the original model and to avoid overfitting, the sample set should be two to three times larger than the number of polynomial terms in the expansion. For models that require a high polynomial degree or are high-dimensional in their input parameters, this number often exceeds the affordable computational budget. To overcome this issue, we apply a sparse Bayesian learning approach to the rational approximation. Through a specific prior distribution structure, sparsity is induced in the coefficients of the surrogate model. The denominator polynomial coefficients, as well as the hyperparameters of the problem, are determined via a type-II maximum likelihood approach. We apply a quasi-Newton gradient-descent algorithm to find the optimal denominator coefficients and derive the required gradients by applying the $\mathbb{CR}$-calculus.
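In schematic form (notation assumed), the rational surrogate is a ratio of two expansions in a polynomial basis $\{\Psi_k\}$ of the input parameters $x$,

$$
\tilde{R}(x) = \frac{\sum_{k} c_k \,\Psi_k(x)}{\sum_{k} d_k \,\Psi_k(x)},
$$

with the prior structure inducing sparsity in the surrogate coefficients, and the denominator coefficients $d$ together with the hyperparameters determined by the type-II maximum likelihood approach described above.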
Extracting the governing physics from data is a key challenge in many areas of science and technology. Existing techniques for equation discovery rely on both input and state measurements; in practice, however, we often only have access to output measurements. We propose here a novel framework for learning the physics of a dynamical system from output measurements only; this essentially transfers the physics discovery problem from the deterministic to the stochastic domain. The proposed approach models the input as a stochastic process and blends concepts from stochastic calculus, sparse learning algorithms, and Bayesian statistics. In particular, we combine sparsity-promoting spike-and-slab priors, Bayes' law, and the Euler-Maruyama scheme to identify the governing physics from data. The resulting model is highly efficient and works with sparse, noisy, and incomplete output measurements. The efficacy and robustness of the proposed approach are illustrated on several numerical examples involving both full and partial state measurements. The results obtained indicate the potential of the proposed approach for identifying the governing physics from output-only measurements.
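For reference, the Euler-Maruyama scheme mentioned here discretises an Itô SDE $\mathrm{d}X_t = f(X_t)\,\mathrm{d}t + g(X_t)\,\mathrm{d}W_t$ as

$$
X_{k+1} = X_k + f(X_k)\,\Delta t + g(X_k)\,\Delta W_k,
\qquad \Delta W_k \sim \mathcal{N}(0, \Delta t),
$$

and, on a natural reading of the abstract, the unknown drift is expanded in a library of candidate functions whose coefficients carry spike-and-slab priors, so that Bayesian inference retains only the few terms that constitute the governing physics.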
Large multilayer neural networks trained with backpropagation have recently achieved state-of-the-art results in a wide range of problems. However, using backprop for neural net learning still has some disadvantages, e.g., having to tune a large number of hyperparameters to the data, lack of calibrated probabilistic predictions, and a tendency to overfit the training data. In principle, the Bayesian approach to learning neural networks does not have these problems. However, existing Bayesian techniques lack scalability to large datasets and network sizes. In this work we present a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP). Similar to classical backpropagation, PBP works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients. A series of experiments on ten real-world datasets show that PBP is significantly faster than other techniques, while offering competitive predictive abilities. Our experiments also show that PBP provides accurate estimates of the posterior variance on the network weights.
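A minimal sketch of the forward step being described, for a single linear layer under a fully factorised Gaussian approximation (notation and function name assumed; PBP additionally handles the nonlinearities and the backward gradient computation):

```python
import numpy as np

def propagate_linear_layer(m_W, v_W, m_a, v_a):
    """Moment-match z = W @ a when the weights are independent Gaussians.

    m_W, v_W : (out, in) arrays of weight means and variances
    m_a, v_a : (in,) mean and variance of the (assumed independent) inputs
    Returns the mean and variance of each output unit.
    """
    m_z = m_W @ m_a
    v_z = v_W @ (m_a ** 2 + v_a) + (m_W ** 2) @ v_a
    return m_z, v_z
```

Applying such moment matching layer by layer yields a Gaussian approximation to the network output, whose log-probability under the observed target is then differentiated to update the weight means and variances.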
Understanding of the pathophysiology of obstructive lung disease (OLD) is limited by available methods to examine the relationship between multi-omic molecular phenomena and clinical outcomes. Integrative factorization methods for multi-omic data can reveal latent patterns of variation describing important biological signal. However, most methods do not provide a framework for inference on the estimated factorization, simultaneously predict important disease phenotypes or clinical outcomes, nor accommodate multiple imputation. To address these gaps, we propose Bayesian Simultaneous Factorization (BSF). We use conjugate normal priors and show that the posterior mode of this model can be estimated by solving a structured nuclear norm-penalized objective that also achieves rank selection and motivates the choice of hyperparameters. We then extend BSF to simultaneously predict a continuous or binary response, termed Bayesian Simultaneous Factorization and Prediction (BSFP). BSF and BSFP accommodate concurrent imputation and full posterior inference for missing data, including "blockwise" missingness, and BSFP offers prediction of unobserved outcomes. We show via simulation that BSFP is competitive in recovering latent variation structure, as well as the importance of propagating uncertainty from the estimated factorization to prediction. We also study the imputation performance of BSF via simulation under missing-at-random and missing-not-at-random assumptions. Lastly, we use BSFP to predict lung function based on the bronchoalveolar lavage metabolome and proteome from a study of HIV-associated OLD. Our analysis reveals a distinct cluster of patients with OLD driven by shared metabolomic and proteomic expression patterns, as well as multi-omic patterns related to lung function decline. Software is freely available at https://github.com/sarahsamorodnitsky/BSFP .
We formulate natural gradient variational inference (VI), expectation propagation (EP), and posterior linearisation (PL) as extensions of Newton's method for optimising the parameters of a Bayesian posterior distribution. This viewpoint explicitly casts inference algorithms within the framework of numerical optimisation. We show that common approximations to Newton's method from the optimisation literature, namely Gauss-Newton and quasi-Newton methods (e.g., the BFGS algorithm), remain valid under this "Bayes-Newton" framework. This leads to a suite of novel algorithms which are guaranteed to produce positive semi-definite covariance matrices, unlike standard VI and EP. Our unifying viewpoint provides new insights into the connections between various inference schemes. All the presented methods apply to any model with a Gaussian prior and non-conjugate likelihood, which we demonstrate with (sparse) Gaussian processes and state space models.
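As a schematic reminder of the two ingredients being combined (generic notation, not the paper's): a Newton step on an objective $L(\theta)$ and, for $L(\theta) = \tfrac12\|r(\theta)\|^2$, the Gauss-Newton approximation to its Hessian,

$$
\theta \leftarrow \theta - \big(\nabla^2 L(\theta)\big)^{-1}\nabla L(\theta),
\qquad
\nabla^2 L(\theta) \approx J_r(\theta)^\top J_r(\theta) \succeq 0.
$$

The practical point of the positive semi-definite approximation is that, when such steps are used to update the parameters of a Gaussian posterior approximation, the resulting covariance matrices remain valid.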
We discuss the prediction accuracy of assumed statistical models in terms of prediction errors for the generalized linear model and penalized maximum likelihood methods. We derive the forms of estimators for the prediction errors: the $C_p$ criterion, information criteria, and the leave-one-out cross validation (LOOCV) error, using the generalized approximate message passing (GAMP) algorithm and the replica method. These estimators coincide with each other when the number of model parameters is sufficiently small; however, there is a discrepancy between them, in particular in the overparametrized region where the number of model parameters is larger than the data dimension. In this paper, we review the prediction errors and the corresponding estimators, and discuss their differences. In the framework of GAMP, we show that the information criteria can be expressed in terms of the variance of the estimates. Further, we demonstrate how to approach the LOOCV error from the information criteria by utilizing the expression provided by GAMP.
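For reference, the classical forms of two of the quantities being estimated (standard definitions, which the GAMP-based estimators generalise to the penalized and overparametrized settings):

$$
C_p = \frac{1}{n}\,\|y - \hat{y}\|^2 + \frac{2\sigma^2}{n}\,\mathrm{df},
\qquad
\epsilon_{\mathrm{LOOCV}} = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - \hat{y}_i^{(-i)}\big)^2,
$$

where $\mathrm{df}$ denotes the effective degrees of freedom and $\hat{y}_i^{(-i)}$ is the prediction for observation $i$ from the model fitted without it.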
Kernel regularized least squares (KRLS) is a popular method for flexibly estimating models that may have complex relationships between variables. However, its usefulness to many researchers is limited for two reasons. First, existing approaches are inflexible and do not allow KRLS to be combined with theoretically motivated extensions such as fixed effects or non-linear outcomes. Second, estimation is extremely computationally intensive for even modestly sized datasets. Our paper addresses both concerns by introducing generalized KRLS (gKRLS). We note that KRLS can be re-formulated as a hierarchical model, thereby allowing easy inference and modular model construction. Computationally, we also implement random sketching to dramatically accelerate estimation while incurring only a limited penalty in estimation quality. We demonstrate that gKRLS can be fit on datasets with tens of thousands of observations in under one minute. Further, state-of-the-art techniques that require fitting the model over a dozen times (e.g., meta-learners) can be estimated quickly.
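To make the random-sketching idea concrete, a minimal sketch of one common variant, a Nyström-style sub-sampling sketch for kernel ridge regression (illustrative only; the exact sketch used by gKRLS may differ, and the function name and parameters below are assumptions):

```python
import numpy as np

def sketched_krls(X, y, m=200, lam=1.0, gamma=1.0, seed=0):
    """Kernel ridge regression on an m-dimensional sketched feature space.

    The fit costs roughly O(n m^2) instead of the O(n^3) of exact KRLS.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=min(m, X.shape[0]), replace=False)  # landmark points
    rbf = lambda A, B: np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    K_nm, K_mm = rbf(X, X[idx]), rbf(X[idx], X[idx])
    U, s, _ = np.linalg.svd(K_mm + 1e-8 * np.eye(len(idx)))
    Phi = K_nm @ U / np.sqrt(s)                      # approximate feature map
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
    return lambda X_new: (rbf(X_new, X[idx]) @ U / np.sqrt(s)) @ w

# usage: predict = sketched_krls(X_train, y_train); y_hat = predict(X_test)
```

Because the sketched problem is an ordinary penalized regression in $m$ dimensions, the fit remains fast even when the model must be re-estimated many times, as in the meta-learner setting mentioned above.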
Gaussian process (GP) regression is a flexible, nonparametric approach to regression that naturally quantifies uncertainty. In many applications, the numbers of responses and covariates are both large, and a goal is to select the covariates that are related to the response. For this setting, we propose a novel, scalable algorithm, coined VGPR, which optimizes a penalized GP log-likelihood based on the Vecchia GP approximation, an ordered conditional approximation from spatial statistics that implies a sparse Cholesky factor of the precision matrix. We traverse the regularization path from strong to weak penalization, sequentially adding candidate covariates based on the gradient of the log-likelihood and deselecting irrelevant covariates via a new quadratically constrained coordinate descent algorithm. We propose Vecchia-based mini-batch subsampling, which provides unbiased gradient estimators. The resulting procedure is scalable to millions of responses and thousands of covariates. Theoretical analysis and numerical studies demonstrate improved scalability and accuracy relative to existing methods.
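Schematically, the Vecchia approximation referred to here replaces the joint GP likelihood by a product of low-dimensional conditionals under an ordering of the responses,

$$
p(y_1, \ldots, y_n) \;\approx\; \prod_{i=1}^{n} p\big(y_i \mid y_{c(i)}\big),
\qquad c(i) \subseteq \{1, \ldots, i-1\},\ \ |c(i)| \le m,
$$

which is equivalent to imposing sparsity on the Cholesky factor of the precision matrix of the resulting Gaussian approximation; the penalized log-likelihood built from this approximation is then optimised along the regularization path as described.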