One of the most popular estimation methods for Bayesian neural networks (BNNs) is mean-field variational inference (MFVI). In this work, we show that neural networks with ReLU activation functions induce posteriors that are hard to fit with MFVI. We provide a theoretical justification for this phenomenon, study it empirically, and report the results of a series of experiments investigating the effect of the activation function on the calibration of BNNs. We find that using leaky ReLU activations leads to more Gaussian-like weight posteriors and achieves a lower expected calibration error (ECE) than the ReLU-based counterparts.
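As a hedged illustration of the calibration metric referenced above, the following sketch computes expected calibration error with equal-width confidence bins; the bin count and binning scheme are common conventions, not details taken from the paper.

    import numpy as np

    def expected_calibration_error(probs, labels, n_bins=15):
        """ECE with equal-width confidence bins (one common convention).

        probs:  (N, C) predicted class probabilities
        labels: (N,)   integer class labels
        """
        confidences = probs.max(axis=1)            # top-class confidence per example
        predictions = probs.argmax(axis=1)
        accuracies = (predictions == labels).astype(float)

        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if in_bin.any():
                gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
                ece += in_bin.mean() * gap         # weight the gap by bin frequency
        return ece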
This paper studies variational inference (VI) for training Bayesian neural networks (BNNs) in the overparameterized regime, i.e., when the number of neurons tends to infinity. More specifically, we consider overparameterized two-layer BNNs and point out a critical issue in mean-field VI training. The issue arises from the decomposition of the evidence lower bound (ELBO) into two terms: one corresponding to the likelihood function of the model and the other to the Kullback-Leibler (KL) divergence between the approximate posterior and the prior. In particular, we show both theoretically and empirically that a trade-off between these two terms exists in the overparameterized regime only when the KL term is appropriately rescaled with respect to the ratio between the number of observations and the number of neurons. We also illustrate our theoretical results with numerical experiments that highlight the critical choice of this ratio.
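To make the trade-off concrete, one common way to write a mean-field objective with a rescaled KL term is sketched below; treat $\lambda$ as a stand-in for a scaling that depends on the observation-to-neuron ratio, since the exact rescaling used in the paper may differ.

    $$\widetilde{\mathrm{ELBO}}_\lambda(q) \;=\; \mathbb{E}_{q(w)}\big[\log p(\mathcal{D}\mid w)\big] \;-\; \lambda\,\mathrm{KL}\big(q(w)\,\|\,p(w)\big),$$

where $\lambda$ is chosen as a function of $N/n$ (with $N$ observations and $n$ neurons), and $\lambda = 1$ recovers the standard ELBO.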
An activation function has a significant impact on the efficiency and robustness of neural networks. As an alternative, we developed a novel non-monotonic activation function, the Negative Stimulated Hybrid Activation Function (Nish). It acts as a Rectified Linear Unit (ReLU) for the positive region and as a sinus-sigmoidal function for the negative region. In other words, it incorporates a sigmoid and a sine function, gaining new dynamics over the classical ReLU. We analyzed the consistency of Nish for different combinations of essential networks and the most common activation functions on several of the most popular benchmarks. From the experimental results, we report that the accuracy rates achieved by Nish are slightly better than those of Mish in classification.
Inspired by biological neurons, activation functions play an important role in the learning process of any artificial neural network commonly used in many real-world tasks. Various activation functions have been proposed in the literature for classification and regression tasks. In this work, we survey the activation functions that have been used in the past as well as the current state-of-the-art ones. In particular, we present the developments of activation functions over the years along with the advantages and the drawbacks or limitations of these functions. We also discuss classical (fixed) activation functions, including rectifier units, and adaptive activation functions. In addition to a taxonomy of activation functions based on their characteristics, a taxonomy based on their applications is also presented. To this end, a systematic comparison of various fixed and adaptive activation functions is carried out on classification datasets such as MNIST, CIFAR-10, and CIFAR-100. In recent years, physics-informed machine learning frameworks have emerged for solving problems related to scientific computing, so we also discuss the various requirements on activation functions used in such frameworks. Furthermore, different fixed and adaptive activation functions are compared across various machine learning libraries such as TensorFlow, PyTorch, and JAX.
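As a small illustration of the fixed-versus-adaptive distinction discussed above, the sketch below contrasts a fixed ReLU with a PReLU-style activation whose negative slope is a trainable parameter; this is a generic example rather than code from the survey.

    import torch
    import torch.nn as nn

    class AdaptiveLeakyReLU(nn.Module):
        """PReLU-style activation: the negative slope is learned during training."""
        def __init__(self, init_slope: float = 0.25):
            super().__init__()
            self.slope = nn.Parameter(torch.tensor(init_slope))

        def forward(self, x):
            return torch.where(x >= 0, x, self.slope * x)

    fixed = nn.ReLU()                 # fixed activation: no trainable parameters
    adaptive = AdaptiveLeakyReLU()    # adaptive activation: slope updated by the optimizer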
The study of feature propagation at initialization in neural networks lies at the root of numerous initialization designs. An assumption very commonly made in the field states that the pre-activations are Gaussian. Although this convenient Gaussian hypothesis can be justified when the number of neurons per layer tends to infinity, it is challenged by both theoretical and experimental works for finite-width neural networks. Our major contribution is to construct a family of pairs of activation functions and initialization distributions that ensure that the pre-activations remain Gaussian throughout the network's depth, even in narrow neural networks. In the process, we discover a set of constraints that a neural network should fulfill to ensure Gaussian pre-activations. Additionally, we provide a critical review of the claims of the Edge of Chaos line of works and build an exact Edge of Chaos analysis. We also propose a unified view on pre-activations propagation, encompassing the framework of several well-known initialization procedures. Finally, our work provides a principled framework for answering the much-debated question: is it desirable to initialize the training of a neural network whose pre-activations are ensured to be Gaussian?
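One quick way to probe the Gaussian hypothesis empirically is to push inputs through a narrow, randomly initialized layer and inspect the excess kurtosis of the resulting pre-activations (zero for a Gaussian). The widths and the tanh nonlinearity below are illustrative choices, not the special activation/initialization pairs constructed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def excess_kurtosis(z):
        z = (z - z.mean()) / z.std()
        return (z ** 4).mean() - 3.0       # exactly 0 for a Gaussian

    width, n_samples = 8, 100_000           # deliberately narrow layer
    x = rng.standard_normal((n_samples, width))
    h = np.tanh(x)                                       # post-activations of layer 1
    W = rng.standard_normal((width, width)) / np.sqrt(width)
    pre = h @ W                                          # pre-activations of layer 2

    # Typically deviates from 0 for narrow layers and shrinks toward 0 as width grows.
    print(excess_kurtosis(pre[:, 0]))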
Approximate Bayesian inference for neural networks is considered a robust alternative to standard training, often providing good performance on out-of-distribution data. However, Bayesian neural networks (BNNs) with high-fidelity approximate inference via full-batch Hamiltonian Monte Carlo generalize surprisingly poorly under covariate shift, even underperforming classical estimation. We explain this surprising result, showing how Bayesian model averaging can actually be problematic under covariate shift, particularly when linear dependencies in the input features lead to a lack of posterior contraction. We additionally show why the same issue does not affect many approximate inference procedures, or classical maximum a-posteriori (MAP) training. Finally, we propose novel priors that improve the robustness of BNNs to many sources of covariate shift.
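The lack-of-contraction mechanism can be illustrated in closed form with Bayesian linear regression: a feature that is identically zero in training keeps its prior variance, and any nonzero value of that feature at test time injects that prior variance into the predictive distribution. This toy sketch is mine, not the paper's experiment.

    import numpy as np

    rng = np.random.default_rng(0)
    N, alpha, noise = 50, 1.0, 0.1            # prior precision alpha, noise std

    X = np.zeros((N, 2))
    X[:, 0] = rng.standard_normal(N)          # informative feature
    # X[:, 1] stays identically zero in training (a "dead" feature)
    y = 2.0 * X[:, 0] + noise * rng.standard_normal(N)

    # Conjugate Gaussian posterior over weights: Sigma = (X^T X / s^2 + alpha I)^-1
    Sigma = np.linalg.inv(X.T @ X / noise**2 + alpha * np.eye(2))
    print(np.diag(Sigma))    # feature 0 has contracted; feature 1 keeps prior variance 1/alpha

    x_shifted = np.array([0.0, 3.0])          # covariate shift activates the dead feature
    print(x_shifted @ Sigma @ x_shifted)      # predictive (epistemic) variance inflated to 9/alpha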
We investigate the efficacy of treating all the parameters in a Bayesian neural network stochastically and find compelling theoretical and empirical evidence that this standard construction may be unnecessary. To this end, we prove that expressive predictive distributions require only small amounts of stochasticity. In particular, partially stochastic networks with only $n$ stochastic biases are universal probabilistic predictors for $n$-dimensional predictive problems. In empirical investigations, we find no systematic benefit of full stochasticity across four different inference modalities and eight datasets; partially stochastic networks can match and sometimes even outperform fully stochastic networks, despite their reduced memory costs.
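A minimal sketch of the partially stochastic idea, assuming the simplest variant in which weights are deterministic and only the biases carry a Gaussian variational distribution; the layer sizes, initial scales, and reparameterized sampling are illustrative choices.

    import torch
    import torch.nn as nn

    class StochasticBiasLinear(nn.Module):
        """Linear layer with deterministic weights and a Gaussian posterior over biases."""
        def __init__(self, d_in, d_out):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(d_out, d_in) / d_in ** 0.5)
            self.bias_mu = nn.Parameter(torch.zeros(d_out))
            self.bias_log_std = nn.Parameter(torch.full((d_out,), -3.0))

        def forward(self, x):
            # Reparameterized sample of the stochastic biases.
            eps = torch.randn_like(self.bias_mu)
            bias = self.bias_mu + self.bias_log_std.exp() * eps
            return x @ self.weight.t() + bias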
Compared to point estimates calculated by standard neural networks, Bayesian neural networks (BNN) provide probability distributions over the output predictions and model parameters, i.e., the weights. Training the weight distribution of a BNN, however, is more involved due to the intractability of the underlying Bayesian inference problem and thus, requires efficient approximations. In this paper, we propose a novel approach for BNN learning via closed-form Bayesian inference. For this purpose, the calculation of the predictive distribution of the output and the update of the weight distribution are treated as Bayesian filtering and smoothing problems, where the weights are modeled as Gaussian random variables. This allows closed-form expressions for training the network's parameters in a sequential/online fashion without gradient descent. We demonstrate our method on several UCI datasets and compare it to the state of the art.
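The filtering view can be illustrated on the simplest case, a linear (or linearized) output layer with Gaussian weights, where conditioning on one observation is the standard conjugate, Kalman-style update; the paper's actual construction for full networks is more involved, so this is only a sketch of the idea.

    import numpy as np

    def kalman_weight_update(mu, Sigma, phi, y, noise_var):
        """One conjugate update of Gaussian weights w ~ N(mu, Sigma) for y = phi @ w + noise."""
        pred_mean = phi @ mu
        pred_var = phi @ Sigma @ phi + noise_var      # predictive variance of y
        gain = Sigma @ phi / pred_var                 # Kalman gain
        mu_new = mu + gain * (y - pred_mean)
        Sigma_new = Sigma - np.outer(gain, phi @ Sigma)
        return mu_new, Sigma_new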
The study of neural networks with infinite width is important for a better understanding of neural networks in practical applications. In this work, we derive the equivalence between deep, infinitely wide maxout networks and Gaussian processes (GPs), and characterize the maxout kernel with a compositional structure. Moreover, we establish connections between the deep maxout network kernel and deep neural network kernels. We also provide an efficient numerical implementation that can accommodate any maxout rank. Numerical results show that Bayesian inference based on the deep maxout network kernel can lead to competitive results compared with finite-width counterparts and deep neural network kernels. This suggests that the maxout activation may also be incorporated into other infinite-width neural network structures, such as convolutional neural networks (CNNs).
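For reference, a maxout unit of rank $k$ takes the maximum over $k$ affine pieces; the sketch below is a generic finite-width maxout layer, not the paper's kernel implementation.

    import torch
    import torch.nn as nn

    class Maxout(nn.Module):
        """Maxout layer: each output is the max over `rank` affine maps of the input."""
        def __init__(self, d_in, d_out, rank=4):
            super().__init__()
            self.d_out, self.rank = d_out, rank
            self.linear = nn.Linear(d_in, d_out * rank)

        def forward(self, x):
            z = self.linear(x)                                    # (..., d_out * rank)
            z = z.view(*x.shape[:-1], self.d_out, self.rank)
            return z.max(dim=-1).values                           # max over the rank pieces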
The strengths and weaknesses of neural networks and Gaussian processes are complementary, and a better understanding of their relationship comes with the promise of making each method benefit from the other. In this work, we establish an equivalence between the forward pass of a neural network and (deep) sparse Gaussian process models. The theory we develop is based on interpreting activation functions as inter-domain inducing features, through a rigorous analysis of the interplay between activation functions and kernels. This leads to models that can be regarded either as neural networks with improved uncertainty prediction or as deep Gaussian processes with increased prediction accuracy. These claims are supported by experimental results on regression and classification datasets.
Activation functions introduce non-linearity in deep neural networks. This non-linearity helps the network learn faster and more efficiently from the dataset. In deep learning, many activation functions are developed and used depending on the type of problem statement. Variants of ReLU, Swish, and Mish are the go-to activation functions. The Mish function is considered similar to or even better than Swish, and better than ReLU. In this paper, we propose an activation function named APTx which behaves similarly to Mish but requires fewer mathematical operations to compute. The lower computational requirement of APTx speeds up model training and thus also reduces the hardware requirements of deep learning models.
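For context, Mish composes a softplus, a tanh, and a multiplication. The APTx form below is written from memory of the arXiv description and should be treated as an assumption, with alpha=1, beta=1, gamma=0.5 as illustrative defaults; the authoritative definition is in the paper.

    import torch

    def mish(x):
        # Mish: x * tanh(softplus(x))
        return x * torch.tanh(torch.nn.functional.softplus(x))

    def aptx(x, alpha=1.0, beta=1.0, gamma=0.5):
        # Assumed APTx form: (alpha + tanh(beta * x)) * gamma * x
        # (one tanh and a few multiplications, versus softplus + tanh + multiply for Mish)
        return (alpha + torch.tanh(beta * x)) * gamma * x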
We investigate a local reparameterization technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the minibatch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.
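A minimal sketch of the local reparameterization idea for a fully connected layer with a factorized Gaussian posterior over weights: instead of sampling a weight matrix per minibatch, one samples the pre-activations directly, which are Gaussian with moments computable from the input. Biases and initial scales here are simplifications of mine.

    import torch
    import torch.nn as nn

    class LocalReparamLinear(nn.Module):
        """Factorized Gaussian posterior over weights, sampled in activation space."""
        def __init__(self, d_in, d_out):
            super().__init__()
            self.w_mu = nn.Parameter(torch.randn(d_in, d_out) * 0.01)
            self.w_log_var = nn.Parameter(torch.full((d_in, d_out), -10.0))

        def forward(self, x):
            # Pre-activations are Gaussian: mean = x @ mu, variance = x^2 @ sigma^2.
            act_mu = x @ self.w_mu
            act_var = (x ** 2) @ self.w_log_var.exp()
            eps = torch.randn_like(act_mu)
            return act_mu + act_var.sqrt() * eps    # one independent sample per datapoint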
Modern deep learning methods constitute incredibly powerful tools for tackling a myriad of challenging problems. However, since deep learning methods operate as black boxes, the uncertainty associated with their predictions is often challenging to quantify. Bayesian statistics offers a formalism to understand and quantify the uncertainty associated with deep neural network predictions. This tutorial provides an overview of the relevant literature and a complete toolset to design, implement, train, use, and evaluate Bayesian neural networks, i.e., stochastic artificial neural networks trained using Bayesian methods.
Deep spiking neural networks (SNNs) offer the promise of low-power artificial intelligence. However, training deep SNNs from scratch or converting deep artificial neural networks to SNNs without loss of performance has been a challenge. Here we propose an exact mapping from a network with Rectified Linear Units (ReLUs) to an SNN that fires exactly one spike per neuron. For our constructive proof, we assume that an arbitrary multi-layer ReLU network with or without convolutional layers, batch normalization and max pooling layers was trained to high performance on some training set. Furthermore, we assume that we have access to a representative example of input data used during training and to the exact parameters (weights and biases) of the trained ReLU network. The mapping from deep ReLU networks to SNNs causes zero percent drop in accuracy on CIFAR10, CIFAR100 and the ImageNet-like data sets Places365 and PASS. More generally our work shows that an arbitrary deep ReLU network can be replaced by an energy-efficient single-spike neural network without any loss of performance.
Rectified linear units are currently the state-of-the-art activation function for deep convolutional neural networks. To combat ReLU's dying-neuron problem, we propose the Parametric Variational Linear Unit (PVLU), which adds a sinusoidal function with trainable coefficients to ReLU. Along with introducing non-linearity and non-zero gradients across the entire real domain, PVLU acts as a mechanism for fine-tuning when implemented in the context of transfer learning. On a simple non-transfer sequential CNN, replacing ReLU with PVLU yields relative error reductions of 16.3% and 11.3% (without and with data augmentation) on CIFAR-100. PVLU is also tested on transfer-learning models: after ReLU is replaced with PVLU, VGG-16 and VGG-19 experience relative error reductions of 9.5% and 10.7%, respectively, on CIFAR-10. The VGG models see similar improvements when trained on Gaussian-filtered CIFAR-10 images. Most notably, fine-tuning with PVLU achieves relative error reductions at and above 10% on near state-of-the-art residual neural network architectures on the CIFAR datasets.
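Based on the description above (a sine term with trainable coefficients added to ReLU), a plausible form of the activation is sketched below; the exact parameterization and initial coefficient values in the paper may differ.

    import torch
    import torch.nn as nn

    class PVLULike(nn.Module):
        """ReLU plus a trainable sinusoidal term, as the abstract describes."""
        def __init__(self, a_init=0.1, b_init=1.0):
            super().__init__()
            self.a = nn.Parameter(torch.tensor(a_init))   # amplitude of the sine term
            self.b = nn.Parameter(torch.tensor(b_init))   # frequency of the sine term

        def forward(self, x):
            return torch.relu(x) + self.a * torch.sin(self.b * x)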
We investigate the cold posterior effect through the lens of PAC-Bayes generalization bounds. We argue that in the non-asymptotic setting, when the number of training samples is relatively small, discussions of the cold posterior effect should take into account that approximate Bayesian inference does not readily provide guarantees of performance on out-of-sample data; instead, the out-of-sample error is better described by a generalization bound. In this context, we explore the connections between the ELBO objective from variational inference and PAC-Bayes objectives. We note that, while the ELBO and PAC-Bayes objectives are similar, the latter naturally contain a temperature parameter $\lambda$ which is not restricted to $\lambda = 1$. For both regression and classification tasks, in the case of isotropic Laplace approximations to the posterior, we show how this PAC-Bayesian interpretation of the temperature parameter captures the cold posterior effect.
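One common way to write the tempered variational objective that this view motivates is, in my notation rather than necessarily the paper's,

    $$\min_{q}\;\; \mathbb{E}_{q(w)}\big[-\log p(\mathcal{D}\mid w)\big] \;+\; \lambda\,\mathrm{KL}\big(q(w)\,\|\,p(w)\big),$$

where $\lambda = 1$ recovers the usual (negative) ELBO; how exactly $\lambda$ maps onto the temperature used in cold-posterior papers depends on the chosen parameterization.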
The connection between Bayesian neural networks and Gaussian processes has received a great deal of attention over the last few years, with hidden units converging to a Gaussian process limit as the layer width tends to infinity. Underpinning this result is the fact that hidden units become independent in the infinite-width limit. Our aim is to shed light on the dependence properties of hidden units in practical finite-width Bayesian neural networks. In addition to theoretical results, we assess empirically the effect of depth and width on hidden units' dependence properties.
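A simple way to see finite-width dependence empirically is to sample networks from the prior and check second-layer pre-activations: they are uncorrelated, yet their squares are positively correlated because they share the first layer's randomness. The widths and activations below are illustrative, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(0)
    width, draws = 10, 20_000
    x = rng.standard_normal(5)                         # a fixed input

    z2 = np.empty((draws, width))
    for d in range(draws):                             # sample networks from the prior
        W1 = rng.standard_normal((width, x.size)) / np.sqrt(x.size)
        W2 = rng.standard_normal((width, width)) / np.sqrt(width)
        z2[d] = W2 @ np.tanh(W1 @ x)                   # second-layer pre-activations

    print(np.corrcoef(z2[:, 0], z2[:, 1])[0, 1])          # ~0: uncorrelated
    print(np.corrcoef(z2[:, 0]**2, z2[:, 1]**2)[0, 1])    # > 0: dependent at finite width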
Multiple techniques have emerged for producing calibrated predictive probabilities with deep neural networks in supervised learning settings, including methods that ensemble the diverse solutions discovered during cyclical training or training from multiple random starting points (deep ensembles). However, only limited work has investigated the utility of exploring the local region around each of these diverse solutions (posterior modes). Using three well-known deep architectures on the CIFAR-10 dataset, we evaluate several simple methods for exploring local regions of the weight space with respect to Brier score, accuracy, and expected calibration error. We consider Bayesian inference techniques (variational inference and Hamiltonian Monte Carlo applied to the softmax output layer) as well as utilizing stochastic gradient descent trajectories near optima. While adding separate modes to the ensemble uniformly improves performance, we show that the simple mode-exploration methods considered here produce little to no improvement over ensembles constructed without mode exploration.
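The ensembling baseline and one of the metrics above reduce to a few lines: average the predicted class probabilities over ensemble members (modes, plus any samples drawn from the local region around each mode) and score them with the multiclass Brier score. This is a generic illustration, not the paper's evaluation code.

    import numpy as np

    def ensemble_predict(prob_list):
        """Average predicted class probabilities over ensemble members.

        prob_list: list of (N, C) arrays, one per mode or per local sample.
        """
        return np.mean(np.stack(prob_list, axis=0), axis=0)

    def brier_score(probs, labels):
        """Multiclass Brier score: mean squared error against one-hot labels."""
        onehot = np.eye(probs.shape[1])[labels]
        return np.mean(np.sum((probs - onehot) ** 2, axis=1))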
We consider Bayesian inverse problems where the unknown state is assumed a priori to be a function with a discontinuous structure. A class of prior distributions based on the output of neural networks with heavy-tailed weights is introduced, motivated by existing results concerning the infinite-width limit of such networks. We show theoretically that samples from such priors have the desired discontinuity-like properties even when the network width is finite, making them appropriate for edge-preserving inversion. Numerically, we consider deconvolution problems defined on one- and two-dimensional spatial domains to illustrate the effectiveness of these priors; MAP estimation, dimension-robust MCMC sampling, and ensemble-based approximations are used to probe the posterior distribution. The accuracy of the point estimates is shown to exceed those obtained from non-heavy-tailed priors, and the uncertainty estimates are shown to provide more useful qualitative information.
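To get intuition for why heavy-tailed weights help, one can draw prior function samples from a shallow network with Cauchy weights and compare them with Gaussian-weight samples; the architecture, scalings, and qualitative comments below are illustrative assumptions, not the construction used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    width = 500
    grid = np.linspace(-1, 1, 400)[:, None]         # 1-D evaluation grid

    def sample_prior(draw_weights):
        W1 = draw_weights((1, width))
        b1 = draw_weights((width,))
        W2 = draw_weights((width, 1)) / width
        return (np.maximum(grid @ W1 + b1, 0.0) @ W2).ravel()   # one hidden ReLU layer

    f_gauss = sample_prior(lambda s: rng.standard_normal(s))    # smooth-looking draw
    f_cauchy = sample_prior(lambda s: rng.standard_cauchy(s))   # a few large weights dominate,
                                                                # giving sharp, localized features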
With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.