In this paper, we propose an analytical method for performing tractable approximate Gaussian inference (TAGI) in Bayesian neural networks. The method enables analytical Gaussian inference of the posterior mean vector and diagonal covariance matrix for the weights and biases. The proposed approach has a computational complexity of $\mathcal{O}(n)$ with respect to the number of parameters $n$, and tests performed on regression and classification benchmarks confirm that, for the same network architecture, it matches the performance of existing methods relying on gradient backpropagation.
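The sketch below is not the authors' code; it only illustrates the kind of closed-form moment propagation that such analytical Gaussian inference relies on, namely pushing means and variances of independent Gaussian inputs, weights and biases through one fully connected layer under a diagonal-covariance assumption. All function and variable names are illustrative.

```python
# Minimal sketch (assumed, not TAGI's implementation): forward moment propagation
# through one fully connected layer when inputs, weights and biases are
# element-wise independent Gaussians with diagonal covariance.
import numpy as np

def linear_layer_moments(mu_x, var_x, mu_w, var_w, mu_b, var_b):
    """Mean/variance of z = W x + b.

    mu_x, var_x : (in,)      input mean and variance
    mu_w, var_w : (out, in)  weight mean and variance
    mu_b, var_b : (out,)     bias mean and variance
    """
    mu_z = mu_w @ mu_x + mu_b
    # Var(sum_i w_i x_i) = sum_i [var_w_i*var_x_i + var_w_i*mu_x_i^2 + mu_w_i^2*var_x_i]
    var_z = var_w @ var_x + var_w @ (mu_x ** 2) + (mu_w ** 2) @ var_x + var_b
    return mu_z, var_z
```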
Compared to point estimates calculated by standard neural networks, Bayesian neural networks (BNN) provide probability distributions over the output predictions and model parameters, i.e., the weights. Training the weight distribution of a BNN, however, is more involved due to the intractability of the underlying Bayesian inference problem and thus, requires efficient approximations. In this paper, we propose a novel approach for BNN learning via closed-form Bayesian inference. For this purpose, the calculation of the predictive distribution of the output and the update of the weight distribution are treated as Bayesian filtering and smoothing problems, where the weights are modeled as Gaussian random variables. This allows closed-form expressions for training the network's parameters in a sequential/online fashion without gradient descent. We demonstrate our method on several UCI datasets and compare it to the state of the art.
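As a rough illustration of the filtering view described above, the following hedged sketch performs one Kalman-style update of a Gaussian weight posterior using a first-order linearization of the network output around the current mean. It is an extended-Kalman-filter style step, not the paper's exact algorithm; `g`, `J` and `R` are placeholders for the network, its Jacobian and the observation noise.

```python
# Hedged sketch: online update of a Gaussian weight posterior N(m, P) from one
# observation y, with the network output linearized around the current mean.
import numpy as np

def ekf_weight_update(m, P, y, g, J, R):
    """m: (d,) weight mean, P: (d, d) weight covariance,
    y: (k,) observed target, g: callable w -> (k,) prediction,
    J: (k, d) Jacobian dg/dw at m, R: (k, k) observation noise covariance."""
    y_pred = g(m)
    S = J @ P @ J.T + R                      # predictive covariance of the output
    K = np.linalg.solve(S, J @ P).T          # Kalman gain P J^T S^{-1}
    m_new = m + K @ (y - y_pred)
    P_new = P - K @ S @ K.T
    return m_new, P_new
```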
Neural stochastic differential equations (NSDEs) model the drift and diffusion functions of a stochastic process as neural networks. While NSDEs are known to make accurate predictions, their uncertainty quantification properties have so far remained unexplored. We report the empirical finding that obtaining well-calibrated uncertainty estimates from NSDEs is computationally prohibitive. As a remedy, we develop a computationally affordable deterministic scheme that accurately approximates the transition kernel when the dynamics are governed by an NSDE. Our method introduces a bidimensional moment-matching algorithm: vertical along the neural network layers and horizontal along the time direction, which benefits from an original combination of effective approximations. Our deterministic approximation of the transition kernel is applicable to both training and prediction. We observe in multiple experiments that the uncertainty calibration quality of our method can only be matched by Monte Carlo sampling at the cost of much higher computation. Thanks to the numerical stability of deterministic training, our method also improves prediction accuracy.
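To give a flavour of deterministic moment propagation for an SDE, the hedged sketch below linearizes the drift around the current mean and integrates the resulting mean/covariance ODEs with Euler steps. This is only a generic stand-in for the paper's layerwise ("vertical") and temporal ("horizontal") moment matching; `f`, `jac_f` and `L` are placeholders for the drift network, its Jacobian and the diffusion network.

```python
# Hedged sketch: deterministic moment propagation for dx = f(x) dt + L(x) dW.
import numpy as np

def propagate_moments(m0, P0, f, jac_f, L, dt, n_steps):
    m, P = m0.copy(), P0.copy()
    for _ in range(n_steps):
        F = jac_f(m)                        # (d, d) Jacobian of the drift at the mean
        Q = L(m) @ L(m).T                   # diffusion contribution to the covariance
        m = m + f(m) * dt                   # mean update along the time direction
        P = P + (F @ P + P @ F.T + Q) * dt  # covariance update along the time direction
        P = 0.5 * (P + P.T)                 # keep numerically symmetric
    return m, P
```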
The linearised Laplace method for estimating model uncertainty has received renewed attention in the Bayesian deep learning community. The method provides reliable error bars and admits a closed-form expression for the model evidence, allowing for the selection of model hyperparameters. In this work, we examine the assumptions behind this method, particularly in conjunction with model selection. We show that these interact poorly with some now-standard tools of deep learning, such as stochastic approximation methods and normalisation layers, and make recommendations for how to better adapt this classic method to the modern setting. We provide theoretical support for our recommendations and validate them empirically on MLPs, classic CNNs, residual networks with normalisation layers, generative autoencoders and transformers.
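As a hedged illustration of why a closed-form evidence is useful for hyperparameter selection, the sketch below evaluates the standard Laplace approximation to the log evidence as a function of the prior precision; the GGN curvature `ggn` and MAP log-likelihood are assumed to be precomputed, and the expression is the textbook Laplace formula rather than the paper's refined recommendations.

```python
# Hedged sketch: Laplace log evidence, comparable across prior precisions.
import numpy as np

def laplace_log_evidence(log_lik_map, theta_map, ggn, prior_prec):
    """log Z ~ log joint at the MAP + (d/2) log 2*pi - (1/2) log det H."""
    d = theta_map.size
    log_prior = (-0.5 * prior_prec * theta_map @ theta_map
                 + 0.5 * d * np.log(prior_prec) - 0.5 * d * np.log(2 * np.pi))
    H = ggn + prior_prec * np.eye(d)        # curvature of the negative log joint
    _, logdet = np.linalg.slogdet(H)
    return log_lik_map + log_prior + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet
```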
The Bayesian paradigm has the potential to solve core issues of deep neural networks, such as poor calibration and data inefficiency. Alas, scaling Bayesian inference to large weight spaces often requires restrictive approximations. In this work, we show that it suffices to perform inference over a small subset of the model weights in order to obtain accurate predictive posteriors. The remaining weights are kept as point estimates. This subnetwork inference framework enables us to use expressive, otherwise intractable, posterior approximations over such subsets. In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: we first obtain a MAP estimate of all weights, and then infer a full-covariance Gaussian posterior over the subnetwork using the linearized Laplace approximation. We propose a subnetwork selection strategy that aims to maximally preserve the model's predictive uncertainty. Empirically, our approach compares favourably to ensembles and to less expressive posterior approximations over full networks.
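The sketch below (not the authors' implementation) illustrates the subnetwork-Laplace idea: keep a MAP estimate for all weights and form a full-covariance Gaussian over a small index set of weights using a generalized Gauss-Newton Hessian restricted to that subset. The per-example Jacobian `J`, the curvature weights `lam` and the index set `sub` are assumed inputs; selecting `sub` (e.g. by largest approximate marginal variance) is left out.

```python
# Hedged sketch: full-covariance Laplace posterior over a chosen weight subset.
import numpy as np

def subnetwork_laplace_cov(J, lam, prior_prec, sub):
    """J: (N, d) Jacobian of the outputs w.r.t. all d weights at the MAP,
    lam: (N,) per-example output curvature (e.g. 1/noise_var for regression),
    prior_prec: scalar prior precision, sub: indices of the chosen subnetwork."""
    J_s = J[:, sub]                                   # restrict to subnetwork weights
    ggn = J_s.T @ (lam[:, None] * J_s)                # GGN over the subnetwork only
    precision = ggn + prior_prec * np.eye(len(sub))
    return np.linalg.inv(precision)                   # full-covariance posterior over `sub`
```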
We propose a simultaneous learning and pruning algorithm capable of identifying and eliminating irrelevant structures in a neural network during the early stages of training. Thus, the computational cost of subsequent training iterations, besides that of inference, is considerably reduced. Our method, based on variational inference principles using Gaussian scale mixture priors on neural network weights, learns the variational posterior distribution of Bernoulli random variables multiplying the units/filters, similarly to adaptive dropout. Our algorithm ensures that the Bernoulli parameters practically converge to either 0 or 1, establishing a deterministic final network. We analytically derive a novel hyper-prior distribution over the prior parameters that is crucial for their optimal selection and leads to consistent pruning levels and prediction accuracy regardless of weight initialization or the size of the starting network. We prove the convergence properties of our algorithm, establishing theoretical and practical pruning conditions. We evaluate the proposed algorithm on the MNIST and CIFAR-10 data sets and the commonly used fully connected and convolutional LeNet and VGG16 architectures. The simulations show that our method achieves pruning levels on par with state-of-the-art methods for structured pruning, while maintaining better test accuracy and, more importantly, doing so in a manner robust with respect to network initialization and initial size.
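As a simplified, hedged sketch of unit-level Bernoulli gating in the adaptive-dropout style described above: a relaxed Bernoulli (Gumbel-Sigmoid) sample stands in for the variational gate during training, and gates are thresholded after training. The paper's Gaussian scale-mixture prior, hyper-prior and KL regularizer are deliberately omitted; the class and parameter names are illustrative.

```python
# Hedged sketch: learnable per-unit gates multiplying the outputs of a layer.
import torch
import torch.nn as nn

class GatedLinear(nn.Module):
    def __init__(self, d_in, d_out, temperature=0.5):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.gate_logits = nn.Parameter(torch.zeros(d_out))  # variational Bernoulli parameters
        self.temperature = temperature

    def forward(self, x):
        if self.training:
            u = torch.rand_like(self.gate_logits).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log(1 - u)
            z = torch.sigmoid((self.gate_logits + noise) / self.temperature)  # relaxed sample
        else:
            z = (torch.sigmoid(self.gate_logits) > 0.5).float()  # deterministic final network
        return self.linear(x) * z  # gate each output unit/filter
```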
Modern deep learning methods constitute incredibly powerful tools for tackling a myriad of challenging problems. However, since deep learning methods operate as black boxes, the uncertainty associated with their predictions is often difficult to quantify. Bayesian statistics offer a formalism to understand and quantify the uncertainty associated with deep neural network predictions. This tutorial provides an overview of the relevant literature and a complete toolset for designing, implementing, training, using and evaluating Bayesian neural networks, i.e., stochastic artificial neural networks trained using Bayesian methods.
Large multilayer neural networks trained with backpropagation have recently achieved state-of-the-art results in a wide range of problems. However, using backprop for neural net learning still has some disadvantages, e.g., having to tune a large number of hyperparameters to the data, lack of calibrated probabilistic predictions, and a tendency to overfit the training data. In principle, the Bayesian approach to learning neural networks does not have these problems. However, existing Bayesian techniques lack scalability to large dataset and network sizes. In this work we present a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP). Similar to classical backpropagation, PBP works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients. A series of experiments on ten real-world datasets show that PBP is significantly faster than other techniques, while offering competitive predictive abilities. Our experiments also show that PBP provides accurate estimates of the posterior variance on the network weights.
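The core of such an assumed-density-filtering update can be written in a few lines. The hedged sketch below (illustrative, not the authors' code) shows how a weight's Gaussian mean and variance are refreshed from the derivatives of the log normalizer $\log Z$ of the likelihood times the current Gaussian, which PBP obtains from its forward pass of probabilities.

```python
# Hedged sketch: Minka-style moment-matching update of a Gaussian weight factor,
# given derivatives of log Z with respect to the weight's mean m and variance v.
def adf_gaussian_update(m, v, dlogZ_dm, dlogZ_dv):
    m_new = m + v * dlogZ_dm
    v_new = v - v**2 * (dlogZ_dm**2 - 2.0 * dlogZ_dv)
    return m_new, v_new
```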
We investigate a local reparameterization technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the minibatch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.
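A minimal sketch of the local reparameterization trick for a mean-field Gaussian posterior over a linear layer follows: instead of sampling a weight matrix per minibatch, the pre-activations are sampled directly, which is cheaper and gives lower-variance gradients. The function and variable names are illustrative, not a library API.

```python
# Hedged sketch: sample pre-activations rather than weights.
import torch

def local_reparam_linear(x, w_mu, w_logvar):
    """x: (batch, d_in); w_mu, w_logvar: (d_in, d_out) variational parameters."""
    act_mu = x @ w_mu                             # mean of the pre-activation
    act_var = (x ** 2) @ torch.exp(w_logvar)      # variance of the pre-activation
    eps = torch.randn_like(act_mu)                # independent noise per datapoint
    return act_mu + torch.sqrt(act_var + 1e-8) * eps
```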
Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties. In this paper, we study the relationship between random, wide, fully connected, feedforward networks with more than one hidden layer and Gaussian processes with a recursive kernel definition. We show that, under broad conditions, as we make the architecture increasingly wide, the implied random function converges in distribution to a Gaussian process, formalising and extending existing results by Neal (1996) to deep networks. To evaluate convergence rates empirically, we use maximum mean discrepancy. We then compare finite Bayesian deep networks from the literature to Gaussian processes in terms of the key predictive quantities of interest, finding that in some cases the agreement can be very close. We discuss the desirability of Gaussian process behaviour and review non-Gaussian alternative models from the literature.
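The recursive kernel referred to above can be evaluated in closed form for ReLU networks via the arc-cosine recursion. The hedged sketch below is a generic implementation of that recursion under a standard Gaussian prior parameterization (weight scale `sigma_w`, bias scale `sigma_b`), not code from the paper.

```python
# Hedged sketch: recursive NNGP kernel for a deep fully connected ReLU network.
import numpy as np

def nngp_relu_kernel(x1, x2, depth, sigma_w=1.0, sigma_b=0.0):
    d = x1.shape[0]
    k11 = sigma_b**2 + sigma_w**2 * (x1 @ x1) / d
    k22 = sigma_b**2 + sigma_w**2 * (x2 @ x2) / d
    k12 = sigma_b**2 + sigma_w**2 * (x1 @ x2) / d
    for _ in range(depth):
        theta = np.arccos(np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0))
        k12 = sigma_b**2 + sigma_w**2 / (2 * np.pi) * np.sqrt(k11 * k22) * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta))
        k11 = sigma_b**2 + sigma_w**2 * k11 / 2   # E[relu(f)^2] = K/2 for zero-mean f
        k22 = sigma_b**2 + sigma_w**2 * k22 / 2
    return k12
```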
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent an approximate posterior distribution and uses this for optimisation of a variational lower bound. We develop stochastic backpropagation, rules for gradient backpropagation through stochastic variables, and derive an algorithm that allows for joint optimisation of the parameters of both the generative and recognition models. We demonstrate on several real-world data sets that by using stochastic backpropagation and variational inference, we obtain models that are able to generate realistic samples of data, allow for accurate imputations of missing data, and provide a useful tool for high-dimensional data visualisation.
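The two building blocks that make this work, written as a hedged sketch: a reparameterized sample of a Gaussian latent variable so that gradients flow through its mean and variance, and the closed-form Gaussian KL term that enters the variational lower bound. This is a generic illustration of stochastic backpropagation, not the paper's full model.

```python
# Hedged sketch: stochastic backpropagation through a diagonal Gaussian latent.
import torch

def gaussian_sample(mu, log_var):
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps     # gradients flow through mu, log_var

def gaussian_kl(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dimensions
    return 0.5 * torch.sum(mu**2 + torch.exp(log_var) - log_var - 1.0, dim=-1)
```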
We develop an optimization algorithm suitable for Bayesian learning in complex models. Our approach relies on natural gradient updates within a general black-box framework for efficient training with limited model-specific derivations. It applies within the class of exponential-family variational posterior distributions; we extensively discuss the Gaussian case, for which the updates take a rather simple form. Our Quasi Black-box Variational Inference (QBVI) framework is readily applicable to a wide class of Bayesian inference problems and is simple to implement, as the updates of the variational posterior involve neither gradients with respect to the model parameters nor the prescription of the Fisher information matrix. We develop QBVI under different hypotheses for the posterior covariance matrix, discuss details of its robust and feasible implementation, and provide a number of real-world applications to demonstrate its effectiveness.
We propose a new approach for performing approximate Bayesian inference in complex models such as Bayesian neural networks. The approach is more scalable to large data than Markov chain Monte Carlo, it embraces more expressive models than variational inference, and it does not rely on adversarial training (or density ratio estimation). We adopt the recent approach of constructing two models: (1) a primary model, tasked with performing regression or classification; and (2) a secondary, expressive (e.g. implicit) model that defines an approximate posterior distribution over the parameters of the primary model. However, we optimise the parameters of the posterior model via gradient descent according to a Monte Carlo estimate of the posterior predictive distribution, which is our only approximation (aside from the posterior model itself). Only a likelihood needs to be specified, and it can take various forms, such as loss functions and synthetic likelihoods, thus providing a form of likelihood-free approach. Furthermore, we formulate the method such that the posterior samples can either be independent of, or conditionally dependent upon, the inputs of the primary model. The latter approach is shown to be capable of increasing the apparent complexity of the primary model. We see this being useful in applications such as surrogate and physics-based models. To promote how the Bayesian paradigm offers more than just uncertainty quantification, we demonstrate: uncertainty quantification, multi-modality, as well as an application with a recent deep forecasting neural network architecture.
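A hedged sketch of the two-model setup described above follows: an auxiliary generator maps noise to the primary model's weights, and its parameters are trained by gradient descent on a Monte Carlo estimate of the (log) posterior predictive. The architectures, the linear primary model and the squared-error likelihood are illustrative choices, not the paper's.

```python
# Hedged sketch: implicit posterior over a primary model's weights, trained on a
# Monte Carlo estimate of the posterior predictive log-likelihood.
import math
import torch
import torch.nn as nn

class WeightGenerator(nn.Module):
    """Auxiliary model: maps noise to a sample of the primary model's weights."""
    def __init__(self, noise_dim, n_weights):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_weights))

    def forward(self, n_samples):
        z = torch.randn(n_samples, self.noise_dim)
        return self.net(z)                              # (n_samples, n_weights)

def mc_predictive_loss(gen, x, y, n_samples=16, noise_var=0.1):
    w = gen(n_samples)                                  # posterior samples of the weights
    preds = x @ w.t()                                   # primary model: linear regressor
    log_lik = -0.5 * (preds - y[:, None]) ** 2 / noise_var
    # negative log of the Monte Carlo posterior predictive (up to additive constants)
    return -(torch.logsumexp(log_lik, dim=1) - math.log(n_samples)).mean()
```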
This paper presents a new expectation propagation (EP) framework for image restoration using patch-based prior distributions. While Monte Carlo techniques are classically used to sample from intractable posterior distributions, they can suffer from scalability issues in high-dimensional inference problems such as image restoration. To address this issue, EP is used here to approximate the posterior distribution with products of multivariate Gaussian densities. Moreover, imposing structural constraints on the covariance matrices of these densities allows for greater scalability and distributed computation. While the method is naturally suited to handling additive Gaussian observation noise, it can also be extended to non-Gaussian noise. Experiments conducted on denoising, inpainting and deconvolution problems with Gaussian and Poisson noise illustrate the potential benefits of such a flexible approximate Bayesian method, at a reduced computational cost compared to sampling techniques.
This is an introductory machine learning course specifically developed with STEM students in mind. Our goal is to provide interested readers with the basics needed to employ machine learning in their own projects and to familiarise themselves with the terminology as a foundation for further reading of the relevant literature. In these lecture notes, we discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods without neural networks, such as principal component analysis, t-SNE, clustering, as well as linear regression and linear classifiers. We continue with an introduction to both basic and advanced neural network structures such as dense feed-forward and conventional neural networks, recurrent neural networks, restricted Boltzmann machines, (variational) autoencoders, and generative adversarial networks. Questions of interpretability are discussed for latent-space representations and using the examples of dreaming and adversarial attacks. The final section is dedicated to reinforcement learning, where we introduce the basic notions of value functions and policy learning.
We investigate the efficacy of treating all the parameters in a Bayesian neural network stochastically and find compelling theoretical and empirical evidence that this standard construction may be unnecessary. To this end, we prove that expressive predictive distributions require only small amounts of stochasticity. In particular, partially stochastic networks with only $n$ stochastic biases are universal probabilistic predictors for $n$-dimensional predictive problems. In empirical investigations, we find no systematic benefit of full stochasticity across four different inference modalities and eight datasets; partially stochastic networks can match and sometimes even outperform fully stochastic networks, despite their reduced memory costs.
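A hedged illustration of a partially stochastic network in the spirit of the claim above: all weights are deterministic, and only the final-layer biases are Gaussian random variables, so drawing predictive samples only requires sampling biases. The architecture and parameter names are illustrative.

```python
# Hedged sketch: a network whose only stochastic parameters are the output biases.
import torch
import torch.nn as nn

class PartiallyStochasticMLP(nn.Module):
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                  nn.Linear(d_hidden, d_out, bias=False))
        self.bias_mu = nn.Parameter(torch.zeros(d_out))      # stochastic biases:
        self.bias_logvar = nn.Parameter(torch.zeros(d_out))  # variational mean / log-variance

    def forward(self, x, n_samples=10):
        h = self.body(x)                                      # deterministic part
        eps = torch.randn(n_samples, h.shape[-1], device=h.device)
        b = self.bias_mu + torch.exp(0.5 * self.bias_logvar) * eps
        return h.unsqueeze(0) + b.unsqueeze(1)                # (n_samples, batch, d_out)
```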
This ongoing work aims to provide a unified introduction to statistical learning, building up slowly from classical models like the GMM and HMM to modern neural networks like the VAE and diffusion models. There are many internet resources today that explain this or that new machine-learning algorithm in isolation, but they do not (and cannot, in so brief a space) connect these algorithms with each other, or with the classical literature on statistical models from which the modern algorithms emerged. Also conspicuously lacking is a single notational system which, although unobjectionable to those already familiar with the material (like the author of these posts), poses a significant barrier to the novice's entry. Likewise, I have aimed to assimilate the various models, wherever possible, into a single framework for inference and learning, showing how (and why) one model can be changed into another with minimal alteration (some of these connections are novel, others come from the literature). Some background is, of course, necessary. I have assumed the reader is familiar with basic multivariable calculus, probability and statistics, and linear algebra. The goal of this book is certainly not completeness, but rather a more or less straight-line path from the basics to the extremely powerful new models of the last decade. The goal, then, is to complement, rather than replace, comprehensive texts such as Bishop's \emph{Pattern Recognition and Machine Learning}, which is now some fifteen years old.
While dropout is known to be a successful regularization technique, insights into the mechanisms that lead to this success are still lacking. We introduce the concept of \emph{weight expansion}, an increase of the signed volume of the parallelotope spanned by the column or row vectors of the weight covariance matrix, and show that weight expansion is an effective means of increasing generalization in a PAC-Bayesian setting. We provide a theoretical argument that dropout leads to weight expansion, along with extensive empirical support for the correlation between dropout and weight expansion. To support our hypothesis that weight expansion can be regarded as an \emph{indicator} of enhanced generalization capability, and not merely a by-product, we also study other methods that achieve weight expansion (resp.\ contraction) and find that they generally lead to increased (resp.\ decreased) generalization ability. This suggests that dropout is an attractive regularizer because it is a computationally cheap method for obtaining weight expansion. This insight justifies the role of dropout as a regularizer, while paving the way for identifying regularizers that promise improved generalization through weight expansion.
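The sketch below is only one plausible way to turn this idea into a measurable quantity, and it is not the paper's exact definition: treat the rows of a layer's weight matrix as samples, form their covariance, and use its log-determinant as a proxy for the (log) volume associated with the weight covariance.

```python
# Hedged, illustrative proxy for "weight expansion" of one layer.
import numpy as np

def weight_expansion_proxy(W):
    """W: (n_units, n_inputs) weight matrix of one layer."""
    C = np.cov(W, rowvar=True)                   # covariance across units
    _, logdet = np.linalg.slogdet(C + 1e-6 * np.eye(C.shape[0]))  # jitter for stability
    return logdet                                # larger means more 'expanded' weights
```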
Deep learning tools have gained tremendous attention in applied machine learning. However, such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs, extracting information from existing models that has so far been thrown away. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.
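A minimal sketch of the Monte Carlo dropout procedure that follows from this view: keep dropout active at test time, draw T stochastic forward passes, and use their sample mean and variance as the predictive mean and uncertainty. Assumes a standard torch model containing `nn.Dropout` layers; otherwise illustrative.

```python
# Hedged sketch: predictive mean/variance from stochastic forward passes.
import torch

def mc_dropout_predict(model, x, T=50):
    model.train()                               # keeps dropout layers stochastic at test time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(T)])   # (T, batch, out)
    return samples.mean(dim=0), samples.var(dim=0)
```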
A large amount of recent research has the far-reaching goal of finding training methods for deep neural networks that can serve as alternatives to backpropagation (BP). A prominent example is predictive coding (PC), which is a neuroscience-inspired method that performs inference on hierarchical Gaussian generative models. These methods, however, fail to keep up with modern neural networks, as they are unable to replicate the dynamics of complex layers and activation functions. In this work, we solve this problem by generalizing PC to arbitrary probability distributions, enabling the training of architectures, such as transformers, that are hard to approximate with only Gaussian assumptions. We perform three experimental analyses. First, we study the gap between our method and the standard formulation of PC on multiple toy examples. Second, we test the reconstruction quality on variational autoencoders, where our method reaches the same reconstruction quality as BP. Third, we show that our method allows us to train transformer networks and achieve a performance comparable with BP on conditional language models. More broadly, this method allows neuroscience-inspired learning to be applied to multiple domains, since the internal distributions can be flexibly adapted to the data, tasks, and architectures used.
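For context, the hedged sketch below shows inference in the standard Gaussian predictive-coding baseline that the work above generalizes: value nodes are relaxed to minimize a sum of squared prediction errors while input and target are clamped, and the weights are then updated from the settled errors. Shapes, step sizes and the tanh nonlinearity are illustrative choices.

```python
# Hedged sketch: one predictive-coding training step for a two-weight-layer network.
import numpy as np

def pc_step(x0, target, W1, W2, n_infer=20, lr_x=0.1, lr_w=0.01):
    f = np.tanh
    df = lambda a: 1.0 - np.tanh(a) ** 2
    x1 = W1 @ f(x0)                              # initialize hidden nodes at their prediction
    x2 = target                                  # output nodes clamped to the label
    for _ in range(n_infer):                     # relax hidden activity (inference phase)
        e1 = x1 - W1 @ f(x0)                     # prediction error at the hidden layer
        e2 = x2 - W2 @ f(x1)                     # prediction error at the output
        x1 = x1 + lr_x * (-e1 + df(x1) * (W2.T @ e2))
    e1 = x1 - W1 @ f(x0)
    e2 = x2 - W2 @ f(x1)
    W1 = W1 + lr_w * np.outer(e1, f(x0))         # local, Hebbian-like weight updates
    W2 = W2 + lr_w * np.outer(e2, f(x1))
    return W1, W2
```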