Variational methods have been previously explored as a tractable approximation to Bayesian inference for neural networks. However, the approaches proposed so far have only been applicable to a few simple network architectures. This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks. Along the way it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural network applied to the TIMIT speech corpus.
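To make the objective behind such stochastic variational / MDL methods concrete, this is the standard variational free energy decomposition they minimise; the choice of prior and posterior families is left open in the abstract:

```latex
% Variational free energy (MDL two-part code length) for a weight posterior q(w) and prior p(w):
\mathcal{F}(q) \;=\;
  \underbrace{\mathrm{KL}\big[\,q(\mathbf{w})\,\|\,p(\mathbf{w})\,\big]}_{\text{complexity / weight code length}}
  \;-\;
  \underbrace{\mathbb{E}_{q(\mathbf{w})}\big[\log p(\mathcal{D}\mid\mathbf{w})\big]}_{\text{expected data fit}}
```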
We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.
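A minimal sketch of the reparameterised training step described above, assuming a single mean-field Gaussian layer, a standard-normal prior, and a one-sample Monte Carlo estimate of the expected log-likelihood (all illustrative choices, not the paper's exact configuration):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data: 128 examples, 20 features, 3 classes.
x = torch.randn(128, 20)
y = torch.randint(0, 3, (128,))

# Variational posterior q(w) = N(mu, softplus(rho)^2) over a single linear layer.
mu = torch.zeros(3, 20, requires_grad=True)
rho = torch.full((3, 20), -3.0, requires_grad=True)
prior = torch.distributions.Normal(0.0, 1.0)     # assumed standard-normal prior

opt = torch.optim.Adam([mu, rho], lr=1e-2)
for step in range(200):
    sigma = F.softplus(rho)
    eps = torch.randn_like(mu)
    w = mu + sigma * eps                          # reparameterised weight sample
    q = torch.distributions.Normal(mu, sigma)

    nll = F.cross_entropy(F.linear(x, w), y, reduction="sum")
    kl = torch.distributions.kl_divergence(q, prior).sum()
    loss = kl + nll                               # compression cost + data misfit (one MC sample)

    opt.zero_grad()
    loss.backward()
    opt.step()
```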
Flat minima
We present a new algorithm for finding low complexity neural networks with high generalization capability. The algorithm searches for a "flat" minimum of the error function. A flat minimum is a large connected region in weight-space where the error remains approximately constant. An MDL-based, Bayesian argument suggests that flat minima correspond to "simple" networks and low expected overfitting. The argument is based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. Unlike many previous approaches, ours does not require Gaussian assumptions and does not depend on a "good" weight prior -- instead we have a prior over input/output functions, thus taking into account net architecture and training set. Although our algorithm requires the computation of second order derivatives, it has backprop's order of complexity. Automatically, it effectively prunes units, weights, and input lines. Various experiments with feedforward and recurrent nets are described. In an application to stock market prediction, flat minimum search outperforms (1) conventional backprop, (2) weight decay, (3) "optimal brain surgeon" / "optimal brain damage". We also provide pseudo code of the algorithm (omitted from the NC-version). The appendix presents a detailed theoretical justification of our approach. Using a variant of the Gibbs algorithm, appendix A.1 defines generalization, underfitting and overfitting error in a novel way. By defining an appropriate prior over input-output functions, we postulate that the most probable network is a "flat" one. Appendix A.2 formally justifies the error function minimized by our algorithm. Appendix A.3 describes an efficient implementation of the algorithm. Appendix A.4 finally presents pseudo code of the algorithm.
TASK / ARCHITECTURE / BOXES
Generalization task. The task is to approximate an unknown function $f: X \to Y$ mapping a finite set of possible inputs $X \subset \mathbb{R}^N$ to a finite set of possible outputs $Y \subset \mathbb{R}^K$. A data set $D$ is obtained from $f$ (see appendix A.1). All training information is given by a finite set $D_0 \subset D$. $D_0$ is called the training set. The $p$-th element of $D_0$ is denoted by an input/target pair $(x_p, y_p)$.
Architecture / Net functions. For simplicity, we will focus on a standard feedforward net (but in the experiments, we will use recurrent nets as well). The net has $N$ input units, $K$ output units, $L$ weights, and differentiable activation functions. It maps input vectors $x \in \mathbb{R}^N$ to output vectors $o(w, x) \in \mathbb{R}^K$, where $w$ is the $L$-dimensional weight vector, and the weight on the connection from unit $j$ to $i$ is denoted $w_{ij}$. The net function induced by $w$ is denoted $\mathrm{net}(w)$: for $x \in \mathbb{R}^N$, $\mathrm{net}(w)(x) = o(w, x) = \big(o_1(w, x), o_2(w, x), \ldots, o_{K-1}(w, x), o_K(w, x)\big)$, where $o_i(w, x)$ denotes the $i$-th component of $o(w, x)$, corresponding to output unit $i$.
Training error. We use squared error $E(\mathrm{net}(w), D_0) := \sum_{(x_p, y_p) \in D_0} \| y_p - o(w, x_p) \|^2$, where $\|\cdot\|$ denotes the Euclidean norm.
Tolerable error. To
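The paper's actual search uses second-order information; the notion of flatness itself, though, can be illustrated with a crude perturbation probe. This is only an illustration of the definition above (a region where the error stays approximately constant), not the flat-minimum-search algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def squared_error(w, X, Y):
    """E(net(w), D0) for a linear 'net' o(w, x) = W x, as in the training-error definition."""
    W = w.reshape(Y.shape[1], X.shape[1])
    return np.sum((Y - X @ W.T) ** 2)

def flatness_probe(w, X, Y, radius=0.05, n_probes=100):
    """Largest error increase seen under random weight perturbations of a given radius.
    Small values suggest w sits in a region where the error stays approximately constant."""
    base = squared_error(w, X, Y)
    worst = 0.0
    for _ in range(n_probes):
        d = rng.normal(size=w.shape)
        d *= radius / np.linalg.norm(d)
        worst = max(worst, squared_error(w + d, X, Y) - base)
    return worst

# Toy usage: compare flatness at a least-squares solution vs. a random weight vector.
X = rng.normal(size=(50, 4)); Y = rng.normal(size=(50, 2))
w_star = np.linalg.lstsq(X, Y, rcond=None)[0].T.ravel()
print(flatness_probe(w_star, X, Y), flatness_probe(rng.normal(size=w_star.shape), X, Y))
```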
We investigate a local reparameterizaton technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the minibatch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.
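A sketch of the local reparameterisation described above for a single mean-field Gaussian linear layer: rather than sampling a weight matrix per minibatch, the pre-activations are sampled directly from their implied Gaussian, giving noise that is independent across datapoints (shapes and variable names are illustrative):

```python
import torch

def local_reparam_linear(x, w_mu, w_logvar):
    """Sample pre-activations b ~ N(x @ mu.T, x^2 @ var.T) instead of sampling weights.
    x: (batch, in_features); w_mu, w_logvar: (out_features, in_features)."""
    act_mu = x @ w_mu.t()                          # mean of the pre-activations
    act_var = (x ** 2) @ w_logvar.exp().t()        # variance of the pre-activations
    eps = torch.randn_like(act_mu)                 # independent noise per datapoint
    return act_mu + act_var.sqrt() * eps

# Usage: one stochastic forward pass through a 20 -> 3 layer.
x = torch.randn(128, 20)
w_mu = torch.zeros(3, 20, requires_grad=True)
w_logvar = torch.full((3, 20), -6.0, requires_grad=True)
out = local_reparam_linear(x, w_mu, w_logvar)      # shape (128, 3)
```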
We propose a simultaneous learning and pruning algorithm capable of identifying and eliminating irrelevant structures in a neural network during the early stages of training. Thus, the computational cost of subsequent training iterations, besides that of inference, is considerably reduced. Our method, based on variational inference principles using Gaussian scale mixture priors on neural network weights, learns the variational posterior distribution of Bernoulli random variables multiplying the units/filters similarly to adaptive dropout. Our algorithm ensures that the Bernoulli parameters practically converge to either 0 or 1, establishing a deterministic final network. We analytically derive a novel hyper-prior distribution over the prior parameters that is crucial for their optimal selection and leads to consistent pruning levels and prediction accuracy regardless of weight initialization or the size of the starting network. We prove the convergence properties of our algorithm establishing theoretical and practical pruning conditions. We evaluate the proposed algorithm on the MNIST and CIFAR-10 data sets and the commonly used fully connected and convolutional LeNet and VGG16 architectures. The simulations show that our method achieves pruning levels on par with state-of-the-art methods for structured pruning, while maintaining better test accuracy and, more importantly, in a manner robust with respect to network initialization and initial size.
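A rough sketch of the multiplicative-gate idea: each unit is multiplied by a gate whose Bernoulli parameter is learned, and units whose parameters fall to (near) zero are removed. The relaxed-Bernoulli sampling used here is an illustrative stand-in for the paper's actual variational updates:

```python
import torch
import torch.nn.functional as F

def sample_gates(logits, temperature=0.5):
    """Relaxed Bernoulli (binary concrete) sample of per-unit gates; sigmoid(logits) plays
    the role of the Bernoulli parameters that should converge towards 0 or 1."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log1p(-u)
    return torch.sigmoid((logits + noise) / temperature)

hidden = 64
gate_logits = torch.zeros(hidden, requires_grad=True)   # one gate per hidden unit
layer = torch.nn.Linear(20, hidden)

x = torch.randn(32, 20)
h = F.relu(layer(x)) * sample_gates(gate_logits)        # gated hidden units

# After training, keep only units whose Bernoulli parameter stayed close to 1.
keep = torch.sigmoid(gate_logits) > 0.5
print(f"units kept: {int(keep.sum())}/{hidden}")
```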
This ongoing work aims to provide a unified introduction to statistical learning, building up slowly from classical models such as GMMs and HMMs to modern neural networks such as VAEs and diffusion models. There are many internet resources today that explain this or that new machine-learning algorithm in isolation, but they do not (and cannot, in so brief a space) connect these algorithms to one another or to the classical literature on statistical models out of which the modern algorithms emerged. Also conspicuously lacking is a single notational system which, although unobjectionable to those already familiar with the material (such as the authors of these posts), poses a significant barrier to newcomers. Likewise, I aim to assimilate the various models, wherever possible, into a single framework for inference and learning, showing how (and why) one model can be changed into another with minimal alteration (some of these are novel, others are from the literature). Some background is of course necessary. I assume the reader is familiar with basic multivariate calculus, probability and statistics, and linear algebra. The goal of this book is certainly not completeness, but rather a more-or-less straight-line path from the basics to the extremely powerful new models of the last decade. The goal, then, is to complement, rather than replace, comprehensive texts such as Bishop's \emph{Pattern Recognition and Machine Learning}, which is now some fifteen years old.
The Kullback-Leibler (KL) divergence is widely used in variational inference for Bayesian neural networks (BNNs). However, the KL divergence has limitations such as unboundedness and asymmetry. We examine the more general, bounded and symmetric Jensen-Shannon (JS) divergence. We formulate a novel loss function for BNNs based on the geometric JS divergence and show that the conventional KL-divergence-based loss function is a special case of it. We evaluate the divergence part of the proposed loss function in closed form for a Gaussian prior. For any other general prior, Monte Carlo approximations can be used. We provide algorithms for implementing both cases. We demonstrate that the proposed loss function offers an additional parameter that can be tuned to control the degree of regularisation. We derive the conditions under which the proposed loss function regularises better than the KL-divergence-based loss function for Gaussian priors and posteriors. We demonstrate performance improvements over state-of-the-art KL-divergence-based BNNs on a noisy CIFAR data set and a biased histopathology data set.
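A sketch of the closed-form Gaussian case mentioned above, using one common convention for the skew geometric JS divergence (the paper's exact skewing convention may differ, so treat the parametrisation as an assumption):

```python
import torch
from torch.distributions import Normal, kl_divergence

def geometric_js(p: Normal, q: Normal, alpha: float = 0.5) -> torch.Tensor:
    """Skew geometric JS divergence between (diagonal) Gaussians, taken here as
    JS_G(p||q) = (1 - alpha) * KL(p||g) + alpha * KL(q||g), where g is the normalised
    weighted geometric mean p^(1-alpha) q^alpha, itself Gaussian (hence closed form).
    alpha is the extra tunable regularisation parameter mentioned in the abstract."""
    prec = (1 - alpha) / p.scale ** 2 + alpha / q.scale ** 2
    var_g = 1.0 / prec
    mu_g = var_g * ((1 - alpha) * p.loc / p.scale ** 2 + alpha * q.loc / q.scale ** 2)
    g = Normal(mu_g, var_g.sqrt())
    return (1 - alpha) * kl_divergence(p, g) + alpha * kl_divergence(q, g)

# Toy usage with a mean-field posterior and a standard-normal prior.
posterior = Normal(torch.zeros(5), 0.5 * torch.ones(5))
prior = Normal(torch.zeros(5), torch.ones(5))
print(geometric_js(posterior, prior, alpha=0.3).sum())
```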
In this work we use variational inference to quantify the degree of uncertainty in deep-learning model predictions of radio galaxy classification. We show that the level of model posterior variance for individual test samples is correlated with human uncertainty when labelling radio galaxies. We explore the model performance and uncertainty calibration for a variety of different weight priors and show that a sparse prior produces more well-calibrated uncertainty estimates. Using the posterior distributions for individual weights, we show that we can prune 30% of the fully-connected layer weights without a significant loss of performance by removing the weights with the lowest signal-to-noise ratio (SNR). We demonstrate that a larger degree of pruning can be achieved using a Fisher-information-based ranking, but we note that both pruning methods affect the uncertainty calibration for Fanaroff-Riley type I and type II radio galaxies. Finally, we show that, like other work in this field, we experience a cold posterior effect, whereby the posterior must be down-weighted to achieve good predictive performance. We examine whether adapting the cost function to accommodate model misspecification can compensate for this effect, but find that it does not make a significant difference. We also investigate the effect of principled data augmentation and find that this improves upon the baseline but likewise does not compensate for the observed effect. We interpret this as the cold posterior effect being due to the overly effective curation of our training sample leading to likelihood misspecification, and raise this as a potential issue for future radio galaxy classification.
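A sketch of the signal-to-noise-ratio pruning described above, assuming a mean-field Gaussian posterior with per-weight means and standard deviations (the names and shapes are illustrative):

```python
import numpy as np

def prune_by_snr(w_mu, w_sigma, fraction=0.30):
    """Zero out the given fraction of weights with the lowest |mu| / sigma."""
    snr = np.abs(w_mu) / w_sigma
    cutoff = np.quantile(snr, fraction)
    mask = snr > cutoff
    return w_mu * mask, mask

# Toy usage: prune 30% of a fully connected layer's posterior means.
rng = np.random.default_rng(1)
w_mu = rng.normal(size=(256, 128))
w_sigma = rng.uniform(0.01, 0.5, size=(256, 128))
pruned_mu, mask = prune_by_snr(w_mu, w_sigma, fraction=0.30)
print(f"kept {mask.mean():.2%} of weights")
```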
Large multilayer neural networks trained with backpropagation have recently achieved state-of-the-art results in a wide range of problems. However, using backprop for neural net learning still has some disadvantages, e.g., having to tune a large number of hyperparameters to the data, lack of calibrated probabilistic predictions, and a tendency to overfit the training data. In principle, the Bayesian approach to learning neural networks does not have these problems. However, existing Bayesian techniques lack scalability to large dataset and network sizes. In this work we present a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP). Similar to classical backpropagation, PBP works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients. A series of experiments on ten real-world datasets show that PBP is significantly faster than other techniques, while offering competitive predictive abilities. Our experiments also show that PBP provides accurate estimates of the posterior variance on the network weights.
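PBP itself relies on assumed density filtering with analytic moment matching; the sketch below shows only the forward half for one linear layer, propagating means and variances of the activations given factorised Gaussian weights, which conveys the flavour of the "forward propagation of probabilities" (the backward ADF updates are omitted):

```python
import numpy as np

def linear_moment_forward(a_mean, a_var, w_mean, w_var):
    """Propagate mean and variance of activations through z = W a, with independent
    Gaussian weights W_ij ~ N(w_mean_ij, w_var_ij) and independent inputs a_j.
    a_mean, a_var: (in,);  w_mean, w_var: (out, in)."""
    z_mean = w_mean @ a_mean
    z_var = w_var @ (a_mean ** 2 + a_var) + (w_mean ** 2) @ a_var
    return z_mean, z_var

# Toy usage: one probabilistic forward pass through a 4 -> 2 layer.
a_mean, a_var = np.array([1.0, -0.5, 0.2, 0.0]), np.full(4, 0.1)
rng = np.random.default_rng(2)
w_mean, w_var = rng.normal(size=(2, 4)), np.full((2, 4), 0.05)
print(linear_moment_forward(a_mean, a_var, w_mean, w_var))
```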
We propose a new method for performing approximate Bayesian inference in complex models such as Bayesian neural networks. The method is more scalable to large data than Markov chain Monte Carlo, it embraces more expressive models than variational inference, and it does not rely on adversarial training (or density ratio estimation). We adopt the recent approach of constructing two models: (1) a primary model, tasked with performing regression or classification; and (2) a secondary, expressive (e.g. implicit) model that defines an approximate posterior distribution over the parameters of the primary model. However, we optimise the parameters of the posterior model via gradient descent according to a Monte Carlo estimate of the posterior predictive distribution, which is our only approximation (other than the posterior model itself). Only a likelihood needs to be specified, and it can take various forms, such as loss functions and synthetic likelihoods, thus providing a form of likelihood-free approach. Furthermore, we formulate the method such that the posterior samples can either be independent of, or conditionally dependent upon, the inputs to the primary model. The latter approach is shown to be capable of increasing the apparent complexity of the primary model. We see this as being useful in applications such as surrogate and physics-based models. To demonstrate that the Bayesian paradigm offers more than just uncertainty quantification, we illustrate: uncertainty quantification, multi-modality, as well as an application with a state-of-the-art predictive neural network architecture.
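A very rough sketch of the two-model setup described above: an auxiliary network maps noise to weights of the primary model, and its parameters are trained by gradient descent on a Monte Carlo estimate of the (log) posterior predictive. Everything here (shapes, the one-term log-mean-exp estimate, the architecture) is an illustrative assumption rather than the paper's exact objective:

```python
import torch
import torch.nn as nn

in_dim, out_dim, n_weights = 10, 1, 10 * 1 + 1   # primary model: linear regression

# Auxiliary (implicit posterior) model: noise z -> a sample of primary-model weights.
hyper = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, n_weights))

def primary_forward(x, w):
    """Primary model: a linear regressor whose weights are supplied by the auxiliary model."""
    W, b = w[:, :in_dim * out_dim].view(-1, out_dim, in_dim), w[:, -out_dim:]
    return torch.einsum("soi,ni->sno", W, x) + b[:, None, :]

x = torch.randn(64, in_dim)
y = x.sum(dim=1, keepdim=True) + 0.1 * torch.randn(64, 1)

opt = torch.optim.Adam(hyper.parameters(), lr=1e-3)
for step in range(500):
    z = torch.randn(8, 16)                        # 8 posterior samples per step
    w = hyper(z)                                  # (8, n_weights)
    pred = primary_forward(x, w)                  # (8, 64, 1)
    log_lik = -0.5 * ((pred - y) ** 2 / 0.1 ** 2).sum(dim=(1, 2))  # Gaussian likelihood per sample
    # Monte Carlo estimate of the log posterior predictive (log-mean-exp over samples).
    loss = -(torch.logsumexp(log_lik, dim=0) - torch.log(torch.tensor(8.0)))
    opt.zero_grad(); loss.backward(); opt.step()
```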
Predictive coding offers a potentially unifying account of cortical function, postulating that the core function of the brain is to minimise prediction errors with respect to a generative model of the world. The theory is closely related to the Bayesian brain framework and, over the last two decades, has gained substantial influence in the fields of theoretical and cognitive neuroscience. A large body of work has grown out of empirically testing improved and extended theoretical and mathematical models of predictive coding, as well as out of evaluating their potential biological plausibility for implementation in the brain and the concrete neurophysiological and psychological predictions made by the theory. Despite this enduring popularity, however, no comprehensive review of predictive coding theory, and especially of recent developments in the field, exists. Here we provide a comprehensive review of the core mathematical structure and logic of predictive coding, complementing recent tutorials in the literature. We also review a wide range of classic and recent work within the framework, ranging from the neurobiologically realistic microcircuits that could implement predictive coding, to the close relationship between predictive coding and the widely used backpropagation-of-error algorithm, as well as surveying the close relationships between predictive coding and modern machine learning techniques.
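A minimal sketch of the core prediction-error-minimisation loop in a two-level linear predictive coding model: the latent state is iteratively updated to reduce the error between the data and the top-down prediction. This only illustrates the scheme reviewed above, not a biologically detailed implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_latent, d_obs = 3, 8
W = rng.normal(size=(d_obs, d_latent))        # generative (top-down) weights
x = rng.normal(size=d_obs)                    # observed data

mu = np.zeros(d_latent)                       # latent state, with a zero-mean prior
lr = 0.05
for _ in range(200):
    eps_obs = x - W @ mu                      # bottom-level prediction error
    eps_lat = mu - 0.0                        # top-level error against the prior mean
    # Gradient descent on the summed squared prediction errors (the free energy here).
    mu += lr * (W.T @ eps_obs - eps_lat)

print("remaining prediction error:", np.linalg.norm(x - W @ mu))
```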
We show how to use "complementary priors" to eliminate the explainingaway effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
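A schematic of the greedy layer-wise procedure described above, assuming an RBM training routine (for example, built on the contrastive-divergence step sketched further below) is available; `train_rbm` and `sample_hidden` are hypothetical helpers, not a library API:

```python
def train_dbn(data, layer_sizes, train_rbm, sample_hidden):
    """Greedy layer-wise training: fit an RBM on the data, then use its hidden
    activations as 'data' for the next RBM, and so on up the stack.
    `train_rbm(v, n_hidden) -> params` and `sample_hidden(params, v) -> h` are
    hypothetical helpers (e.g. built on contrastive divergence)."""
    stack, v = [], data
    for n_hidden in layer_sizes:
        params = train_rbm(v, n_hidden)       # fit one layer
        stack.append(params)
        v = sample_hidden(params, v)          # propagate the data up for the next layer
    return stack                              # fine-tuning (e.g. contrastive wake-sleep) would follow
```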
Gravitational wave (GW) detection is now commonplace, and as the sensitivity of the global network of GW detectors improves, we will observe $\mathcal{O}(100)$s of transient GW events per year. The current methods used to estimate their source parameters employ optimally sensitive but computationally costly Bayesian inference approaches, where typical analyses take between 6 hours and 5 days. For binary neutron star and neutron star black hole systems, prompt counterpart signatures are expected on time-scales of 1 second to 1 minute, and the current fastest method for alerting EM follow-up observers can provide estimates in $\mathcal{O}(1)$ minutes, over a limited range of key source parameters. Here we show that a conditional variational autoencoder pre-trained on binary black hole signals can return Bayesian posterior probability estimates. The training procedure need only be performed once for a given prior parameter space, and the resulting trained machine can then generate samples describing the posterior distribution $\sim 6$ orders of magnitude faster than existing techniques.
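A sketch of how a trained conditional variational autoencoder is used at inference time in this kind of setting: condition on the observed strain data, draw latent samples, and decode each into a draw from the approximate posterior over source parameters. The module interfaces and shapes are illustrative assumptions, not the paper's implementation:

```python
import torch

@torch.no_grad()
def sample_posterior(encoder, decoder, strain, n_samples=5000):
    """Draw approximate posterior samples of the source parameters given 1-D strain data.
    `encoder(strain) -> (mu_z, logvar_z)` and `decoder(z, cond) -> theta` are the
    trained CVAE networks (hypothetical interfaces for illustration)."""
    mu_z, logvar_z = encoder(strain)                    # conditional latent distribution
    std_z = (0.5 * logvar_z).exp()
    z = mu_z + std_z * torch.randn(n_samples, mu_z.shape[-1])
    cond = strain.expand(n_samples, -1)                 # repeat the conditioning data per sample
    return decoder(z, cond)                             # (n_samples, n_source_parameters)
```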
Modern deep learning methods constitute incredibly powerful tools for tackling a myriad of challenging problems. However, because deep learning methods operate as black boxes, the uncertainty associated with their predictions is often challenging to quantify. Bayesian statistics offers a formalism for understanding and quantifying the uncertainty associated with deep neural network predictions. This tutorial provides an overview of the relevant literature and a complete toolset for designing, implementing, training, using and evaluating Bayesian neural networks, i.e. stochastic artificial neural networks trained using Bayesian methods.
Approximate Bayesian deep learning methods hold significant promise for addressing several issues that arise when deploying deep learning components in intelligent systems, including mitigating the occurrence of overconfident errors and providing enhanced robustness to out-of-distribution examples. However, the computational requirements of existing approximate Bayesian inference methods can make them ill-suited for deployment in intelligent IoT systems that include lower-powered edge devices. In this paper, we present a range of approximate Bayesian inference methods for supervised deep learning and highlight the challenges and opportunities when applying these methods on current edge hardware. We highlight several potential solutions to decreasing model storage requirements and improving computational scalability, including model pruning and distillation methods.
This is an introductory machine learning course specifically developed with STEM students in mind. Our goal is to provide the interested reader with the basics to employ machine learning in their own projects and to familiarise themselves with the terminology as a foundation for further reading of the relevant literature. In these lecture notes, we discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods without neural networks, such as principal component analysis, t-SNE, clustering, as well as linear regression and linear classifiers. We continue with an introduction to both basic and advanced neural-network structures such as dense feed-forward and conventional neural networks, recurrent neural networks, restricted Boltzmann machines, (variational) autoencoders, and generative adversarial networks. Questions of interpretability of latent-space representations are discussed, illustrated with examples from dreaming and adversarial attacks. The final section is dedicated to reinforcement learning, where we introduce the basic notions of value functions and policy learning.
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining individual "expert" models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule ensures that the latent variables of different experts are conditionally independent when given the data. A product of experts (PoE) is therefore an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary. Training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule. Fortunately, a PoE can be trained using a different objective function called "contrastive divergence" whose derivatives with regard to the parameters can be approximated accurately and efficiently. Examples are presented of contrastive divergence learning using several types of expert on several types of data.
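A sketch of one contrastive-divergence (CD-1) update for a binary restricted Boltzmann machine, the simplest product-of-experts instance relevant to the abstract above (the learning rate, sizes, and single Gibbs step are standard illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.05):
    """One CD-1 update: positive phase from the data, negative phase after one Gibbs step."""
    # Positive phase: hidden probabilities and a sample given the data.
    ph0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # One step of Gibbs sampling: reconstruct the visibles, then the hiddens again.
    pv1 = sigmoid(h0 @ W.T + b_vis)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_hid)
    # Approximate gradient of the contrastive divergence objective.
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / v0.shape[0]
    b_vis += lr * (v0 - v1).mean(axis=0)
    b_hid += lr * (ph0 - ph1).mean(axis=0)

# Toy usage on random binary data: 64 visible units, 32 hidden units.
v = (rng.random((100, 64)) < 0.3).astype(float)
W, b_vis, b_hid = 0.01 * rng.normal(size=(64, 32)), np.zeros(64), np.zeros(32)
for _ in range(10):
    cd1_step(v, W, b_vis, b_hid)
```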
Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.
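A minimal PyTorch sketch of the architecture class discussed above, a stacked (deep) bidirectional LSTM trained with the CTC loss, using toy shapes; it is not the paper's exact configuration (which also differs in regularisation and, in some experiments, uses RNN transducers):

```python
import torch
import torch.nn as nn

n_feats, n_classes, hidden = 40, 62, 128        # e.g. 61 phone labels + 1 CTC blank (toy sizes)

class DeepLSTMCTC(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(n_feats, hidden, num_layers=3, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                        # x: (time, batch, n_feats)
        h, _ = self.rnn(x)
        return self.out(h).log_softmax(dim=-1)   # (time, batch, n_classes)

model = DeepLSTMCTC()
ctc = nn.CTCLoss(blank=0)

x = torch.randn(200, 8, n_feats)                          # 8 utterances, 200 frames each
targets = torch.randint(1, n_classes, (8, 30))            # label indices (0 reserved for blank)
input_lens = torch.full((8,), 200, dtype=torch.long)
target_lens = torch.full((8,), 30, dtype=torch.long)

loss = ctc(model(x), targets, input_lens, target_lens)
loss.backward()
```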