The Bayesian paradigm has the potential to solve core issues of deep neural networks such as poor calibration and data inefficiency. Alas, scaling Bayesian inference to large weight spaces often requires restrictive approximations. In this work, we show that it suffices to perform inference over a small subset of model weights in order to obtain accurate predictive posteriors. The other weights are kept as point estimates. This subnetwork inference framework enables us to use expressive, otherwise intractable, posterior approximations over such subsets. In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: we first obtain a MAP estimate of all weights and then infer a full-covariance Gaussian posterior over a subnetwork using the linearized Laplace approximation. We propose a subnetwork selection strategy that aims to maximally preserve the model's predictive uncertainty. Empirically, our approach compares favorably to ensembles and to less expressive posterior approximations over full networks.
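As a rough illustration of the idea above, the following minimal sketch selects a subnetwork by largest marginal variance and builds a full-covariance Gaussian over it while keeping the remaining weights at their MAP values. The inputs `diag_variances` and `ggn_sub` (a per-weight variance estimate and the curvature block over the selected weights) are assumed to be given; this is not the authors' code.

```python
# Minimal sketch of subnetwork Laplace inference (illustrative, not the authors' code).
# Assumes a flattened MAP weight vector, a per-weight variance estimate, and the
# curvature (e.g. GGN) block over the selected weights are already available.
import numpy as np

def select_subnetwork(diag_variances, k):
    """Pick the k weights with the largest marginal (diagonal Laplace) variance."""
    return np.argsort(diag_variances)[-k:]

def subnetwork_laplace_posterior(theta_map, subnet_idx, ggn_sub, prior_precision=1.0):
    """Full-covariance Gaussian posterior over the selected weights only."""
    precision = ggn_sub + prior_precision * np.eye(len(subnet_idx))
    cov = np.linalg.inv(precision)
    return theta_map[subnet_idx], cov

def sample_weights(theta_map, subnet_idx, mean, cov, n_samples=10, seed=0):
    """Draw weight vectors in which only the subnetwork entries are stochastic."""
    rng = np.random.default_rng(seed)
    samples = []
    for s in rng.multivariate_normal(mean, cov, size=n_samples):
        theta = theta_map.copy()
        theta[subnet_idx] = s  # the other weights stay at their MAP point estimates
        samples.append(theta)
    return samples
```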
Deep neural networks are prone to overconfident predictions on outliers. Bayesian neural networks and deep ensembles have both been shown to mitigate this problem to some extent. In this work, we aim to combine the benefits of the two approaches by proposing to approximate the predictive posterior with a Gaussian mixture model consisting of a weighted sum of Laplace approximations of independently trained deep neural networks. The method can be used with any set of pre-trained networks and only requires a small computational and memory overhead compared to regular ensembling. We theoretically validate that our approach mitigates overconfidence far away from the training data and empirically compare it against state-of-the-art baselines on standard uncertainty quantification benchmarks.
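A minimal sketch of prediction with such a mixture of Laplace approximations is shown below, assuming each ensemble member already comes with a Gaussian posterior (mean and covariance) over its weights; `model_fn(theta, x)` is a hypothetical forward pass returning class probabilities, not part of the paper's code.

```python
# Illustrative mixture-of-Laplace prediction (not the authors' implementation).
import numpy as np

def mixture_laplace_predict(members, x, model_fn, n_samples=20, seed=0):
    """members: list of (theta_map, cov, weight) triples; returns mixture-averaged probabilities."""
    rng = np.random.default_rng(seed)
    weights = np.array([w for _, _, w in members], dtype=float)
    weights /= weights.sum()  # normalise the mixture weights
    total = 0.0
    for (theta_map, cov, _), w in zip(members, weights):
        # Monte Carlo average over this member's Laplace posterior
        probs = np.mean(
            [model_fn(rng.multivariate_normal(theta_map, cov), x) for _ in range(n_samples)],
            axis=0,
        )
        total = total + w * probs
    return total
```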
The linearised Laplace method for estimating model uncertainty has received renewed attention in the Bayesian deep learning community. The method provides reliable error bars and admits a closed-form expression for the model evidence, allowing for the selection of model hyperparameters. In this work, we examine the assumptions behind this method, particularly in conjunction with model selection. We show that these interact poorly with some now-standard tools of deep learning (stochastic approximation methods and normalisation layers) and make recommendations for how to better adapt this classic method to the modern setting. We provide theoretical support for our recommendations and validate them empirically on MLPs, classic CNNs, residual networks with normalisation layers, generative autoencoders, and transformers.
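For reference, a standard form of the Laplace approximation to the log model evidence around a MAP estimate $\theta_*$ is shown below; the notation is generic rather than copied from the paper.

```latex
% Generic Laplace evidence around a MAP estimate (notation assumed, not the paper's).
\log p(\mathcal{D} \mid \mathcal{M})
  \approx \log p(\mathcal{D} \mid \theta_*, \mathcal{M})
  + \log p(\theta_* \mid \mathcal{M})
  + \frac{d}{2}\log 2\pi
  - \frac{1}{2}\log \bigl|\mathbf{H}(\theta_*)\bigr|,
\qquad
\mathbf{H}(\theta_*) = -\nabla^2_{\theta}\, \log p(\mathcal{D}, \theta \mid \mathcal{M})\Big|_{\theta=\theta_*}
```

Here $d$ is the number of parameters and $\mathbf{H}$ the Hessian of the negative log joint; hyperparameters can then be chosen by maximising this quantity.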
We propose SWA-Gaussian (SWAG), a simple, scalable, and general purpose approach for uncertainty representation and calibration in deep learning. Stochastic Weight Averaging (SWA), which computes the first moment of stochastic gradient descent (SGD) iterates with a modified learning rate schedule, has recently been shown to improve generalization in deep learning. With SWAG, we fit a Gaussian using the SWA solution as the first moment and a low rank plus diagonal covariance also derived from the SGD iterates, forming an approximate posterior distribution over neural network weights; we then sample from this Gaussian distribution to perform Bayesian model averaging. We empirically find that SWAG approximates the shape of the true posterior, in accordance with results describing the stationary distribution of SGD iterates. Moreover, we demonstrate that SWAG performs well on a wide variety of tasks, including out of sample detection, calibration, and transfer learning, in comparison to many popular alternatives including MC dropout, KFAC Laplace, SGLD, and temperature scaling.
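A minimal sketch of the moment collection and sampling described above is given below; it mirrors the published SWAG formulas but is an illustrative re-implementation operating on flattened weight vectors, not the released code.

```python
# Illustrative SWAG sketch: running moments from SGD iterates, then Gaussian sampling.
import numpy as np

class SWAG:
    def __init__(self, dim, max_rank=20):
        self.n, self.max_rank = 0, max_rank
        self.mean = np.zeros(dim)      # running first moment (the SWA solution)
        self.sq_mean = np.zeros(dim)   # running second moment
        self.dev = []                  # deviation vectors for the low-rank covariance part

    def collect(self, w):
        """Call periodically with the current flattened weight vector during SGD."""
        self.n += 1
        self.mean += (w - self.mean) / self.n
        self.sq_mean += (w ** 2 - self.sq_mean) / self.n
        self.dev = (self.dev + [w - self.mean])[-self.max_rank:]

    def sample(self, rng):
        """Draw a weight vector from the diagonal-plus-low-rank Gaussian posterior."""
        diag_std = np.sqrt(np.maximum(self.sq_mean - self.mean ** 2, 1e-30))
        D = np.stack(self.dev, axis=1)            # d x K deviation matrix
        k = max(D.shape[1] - 1, 1)
        z1 = rng.standard_normal(diag_std.shape[0])
        z2 = rng.standard_normal(D.shape[1])
        return self.mean + diag_std * z1 / np.sqrt(2) + D @ z2 / np.sqrt(2 * k)
```

Sampled weight vectors are then loaded back into the network and the resulting predictions are averaged (Bayesian model averaging).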
We investigate the efficacy of treating all the parameters in a Bayesian neural network stochastically and find compelling theoretical and empirical evidence that this standard construction may be unnecessary. To this end, we prove that expressive predictive distributions require only small amounts of stochasticity. In particular, partially stochastic networks with only $n$ stochastic biases are universal probabilistic predictors for $n$-dimensional predictive problems. In empirical investigations, we find no systematic benefit of full stochasticity across four different inference modalities and eight datasets; partially stochastic networks can match and sometimes even outperform fully stochastic networks, despite their reduced memory costs.
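The kind of partially stochastic layer discussed above can be sketched as follows: the weights are ordinary deterministic parameters while only the biases carry a learned Gaussian distribution. This is an illustrative construction, not the paper's exact architecture.

```python
# Illustrative layer with deterministic weights and stochastic (Gaussian) biases.
import torch
import torch.nn as nn

class StochasticBiasLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias_mean = nn.Parameter(torch.zeros(out_features))
        self.bias_log_std = nn.Parameter(torch.full((out_features,), -3.0))

    def forward(self, x):
        eps = torch.randn_like(self.bias_mean)  # fresh noise on every forward pass
        bias = self.bias_mean + self.bias_log_std.exp() * eps
        return x @ self.weight.t() + bias
```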
Existing deep-learning based tomographic image reconstruction methods do not provide accurate estimates of reconstruction uncertainty, hindering their real-world deployment. This paper develops a method, termed as the linearised deep image prior (DIP), to estimate the uncertainty associated with reconstructions produced by the DIP with total variation regularisation (TV). Specifically, we endow the DIP with conjugate Gaussian-linear model type error-bars computed from a local linearisation of the neural network around its optimised parameters. To preserve conjugacy, we approximate the TV regulariser with a Gaussian surrogate. This approach provides pixel-wise uncertainty estimates and a marginal likelihood objective for hyperparameter optimisation. We demonstrate the method on synthetic data and real-measured high-resolution 2D $\mu$CT data, and show that it provides superior calibration of uncertainty estimates relative to previous probabilistic formulations of the DIP. Our code is available at https://github.com/educating-dip/bayes_dip.
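The local linearisation underlying these error bars takes the generic form below: the network output is replaced by its first-order Taylor expansion in the parameters, which, together with a Gaussian prior on $\theta$ and the Gaussian surrogate for the TV regulariser, yields a conjugate Gaussian-linear model.

```latex
% Generic first-order linearisation of the network around its optimised parameters.
f_{\theta}(x) \approx f_{\theta^{*}}(x) + J_{\theta^{*}}(x)\,(\theta - \theta^{*}),
\qquad
J_{\theta^{*}}(x) = \left.\frac{\partial f_{\theta}(x)}{\partial \theta}\right|_{\theta = \theta^{*}}
```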
Accurate uncertainty quantification is a major challenge in deep learning, as neural networks can make overconfident errors and assign high confidence predictions to out-of-distribution (OOD) inputs. The most popular approaches to estimate predictive uncertainty in deep learning are methods that combine predictions from multiple neural networks, such as Bayesian neural networks (BNNs) and deep ensembles. However, their practicality in real-time, industrial-scale applications is limited due to the high memory and computational cost. Furthermore, ensembles and BNNs do not necessarily fix all the issues with the underlying member networks. In this work, we study principled approaches to improve the uncertainty properties of a single network, based on a single, deterministic representation. By formalizing uncertainty quantification as a minimax learning problem, we first identify distance awareness, i.e., the model's ability to quantify the distance of a testing example from the training data, as a necessary condition for a DNN to achieve high-quality (i.e., minimax optimal) uncertainty estimation. We then propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs with two simple changes: (1) applying spectral normalization to hidden weights to enforce bi-Lipschitz smoothness in representations and (2) replacing the last output layer with a Gaussian process layer. On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration and out-of-domain detection. Furthermore, SNGP provides complementary benefits to popular techniques such as deep ensembles and data augmentation, making it a simple and scalable building block for probabilistic deep learning. Code is open-sourced at https://github.com/google/uncertainty-baselines.
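The two changes can be sketched as follows, assuming a small MLP backbone: spectral normalization bounds the Lipschitz constant of the hidden layers, and a random-feature head stands in for the Gaussian process output layer. This is an illustrative re-implementation (the posterior over the output weights, which produces the uncertainty estimates, is omitted here), not the open-sourced code.

```python
# Illustrative SNGP-style model: spectral-normalized hidden layers + random-feature GP head.
import math
import torch
import torch.nn as nn

class RandomFeatureGPHead(nn.Module):
    """Random Fourier feature approximation of a GP output layer (logits only)."""
    def __init__(self, in_dim, num_classes, num_features=1024):
        super().__init__()
        self.register_buffer("W", torch.randn(num_features, in_dim))
        self.register_buffer("b", 2 * math.pi * torch.rand(num_features))
        self.beta = nn.Linear(num_features, num_classes, bias=False)
        self.scale = math.sqrt(2.0 / num_features)

    def forward(self, h):
        phi = self.scale * torch.cos(h @ self.W.t() + self.b)
        return self.beta(phi)

def make_sngp_mlp(in_dim, hidden, num_classes):
    return nn.Sequential(
        nn.utils.spectral_norm(nn.Linear(in_dim, hidden)),  # bound the layer's spectral norm
        nn.ReLU(),
        nn.utils.spectral_norm(nn.Linear(hidden, hidden)),
        nn.ReLU(),
        RandomFeatureGPHead(hidden, num_classes),
    )
```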
Bayesian neural networks and deep ensembles represent two modern paradigms of uncertainty quantification in deep learning. However, these approaches struggle to scale mainly due to memory inefficiency, requiring parameter storage several times that of their deterministic counterparts. To address this, we augment each weight matrix with a small number of inducing weights, thereby projecting the uncertainty quantification into such a lower-dimensional space. We further extend Matheron's conditional Gaussian sampling rule to enable fast weight sampling, which allows our inference method to maintain reasonable run-times compared to ensembles. Importantly, our approach achieves competitive performance on prediction and uncertainty estimation tasks with fully connected neural networks and ResNets, while reducing the parameter size to $\leq 24.3\%$ of that of a single neural network.
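For reference, Matheron's rule for conditional Gaussian sampling, which the paper extends, can be stated generically as follows: draw a sample from the joint prior and then correct it, which avoids forming the conditional distribution explicitly.

```latex
% Matheron's rule (generic statement, not the paper's extended version).
(\mathbf{x}, \mathbf{y}) \sim \mathcal{N}\!\left(\boldsymbol{\mu},
  \begin{pmatrix}\boldsymbol{\Sigma}_{xx} & \boldsymbol{\Sigma}_{xy}\\
                 \boldsymbol{\Sigma}_{yx} & \boldsymbol{\Sigma}_{yy}\end{pmatrix}\right)
\;\Longrightarrow\;
\mathbf{x} + \boldsymbol{\Sigma}_{xy}\boldsymbol{\Sigma}_{yy}^{-1}(\tilde{\mathbf{y}} - \mathbf{y})
\;\sim\; p(\mathbf{x} \mid \mathbf{y} = \tilde{\mathbf{y}})
```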
Predictive uncertainty should increase as we move away from the data, since a greater variety of explanations is consistent with the little information available. We introduce Distance-Aware Prior (DAP) calibration, a method to correct overconfidence of Bayesian deep learning models outside of the training domain. We define DAPs as prior distributions over the model parameters that depend on the inputs through a measure of their distance from the training set. DAP calibration is agnostic to the posterior inference method and can be applied as a post-processing step. We demonstrate its effectiveness against several baselines on a variety of classification and regression problems, including benchmarks designed to test the quality of predictive distributions far away from the data.
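As a toy illustration of distance-dependent calibration, the sketch below inflates the predictive variance with the distance to the nearest training point; the specific linear form is an assumption made for illustration, not the DAP family proposed in the paper.

```python
# Illustrative distance-aware post-processing of predictive variance (toy form).
import numpy as np

def distance_aware_variance(x_test, x_train, base_var, slope=1.0):
    """Inflate predictive variance with the distance to the closest training input."""
    dists = np.min(
        np.linalg.norm(x_test[:, None, :] - x_train[None, :, :], axis=-1), axis=1
    )
    return base_var + slope * dists
```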
Data augmentation is commonly applied to improve the performance of deep learning by enforcing the prior knowledge that certain transformations of the input leave the output unchanged. Currently, the data augmentations in use are chosen with human effort and costly cross-validation, which makes them cumbersome to apply to new datasets. We develop a convenient gradient-based method for selecting data augmentations, without validation data and during training of a deep neural network. Our approach relies on phrasing augmentation as an invariance in the prior distribution and learning it using Bayesian model selection, which has been shown to work for Gaussian processes but not yet for deep neural networks. We propose a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective, which can be optimised without human supervision or validation data. We show that our method can successfully recover invariances present in the data, and that this improves generalisation and data efficiency on image datasets.
Representation learning has become a practical approach for successfully building rich parametric encodings of large amounts of high-dimensional data in terms of reconstruction. When considering unsupervised tasks with test-train distribution shifts, a probabilistic viewpoint helps to address overconfident predictions and poor calibration. However, directly introducing Bayesian inference remains a difficult problem for several reasons, namely the curse of dimensionality or intractability issues. The Laplace approximation (LA) offers a solution here, as a Gaussian approximation to the posterior over the weights can be constructed via a second-order Taylor expansion of the log posterior around some location in parameter space. In this work, we present a Bayesian autoencoder for unsupervised representation learning inspired by the LA. Our method implements iterative Laplace updates to obtain a novel variational lower bound on the autoencoder evidence. The vast computational burden of the second-order partial derivatives is avoided by approximating the Hessian matrix. Empirically, we demonstrate the scalability and performance of the Laplacian autoencoder by providing well-calibrated uncertainties for out-of-distribution detection, geodesics for differential geometry, and missing-data imputation.
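The Laplace approximation referred to above has the generic form below: a second-order Taylor expansion of the log posterior around a mode $\theta_*$ yields a Gaussian whose covariance is the inverse Hessian.

```latex
% Generic Laplace approximation via a second-order Taylor expansion around a mode.
\log p(\theta \mid \mathcal{D})
  \approx \log p(\theta_* \mid \mathcal{D})
  - \tfrac{1}{2}(\theta - \theta_*)^{\top} \mathbf{H}\, (\theta - \theta_*)
\;\Longrightarrow\;
p(\theta \mid \mathcal{D}) \approx \mathcal{N}\!\left(\theta_*, \mathbf{H}^{-1}\right),
\qquad
\mathbf{H} = -\nabla^2_{\theta} \log p(\theta \mid \mathcal{D})\Big|_{\theta=\theta_*}
```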
Neural linear models (NLMs) are deep Bayesian models that produce predictive uncertainty by learning features from the data and then performing Bayesian linear regression over these features. Despite their popularity, few works have focused on methodically evaluating the predictive uncertainties of these models. In this work, we demonstrate that traditional training procedures for NLMs drastically underestimate uncertainty on out-of-distribution inputs, so that they cannot be naively deployed in risk-sensitive applications. We identify the underlying reasons for this behavior and propose a novel training framework that captures useful predictive uncertainties for downstream tasks.
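The Bayesian linear regression step of a neural linear model can be sketched as below, assuming the learned feature map has already been applied to produce a feature matrix; the names are illustrative.

```python
# Illustrative last-layer Bayesian linear regression of a neural linear model.
import numpy as np

def nlm_posterior(Phi, y, noise_var=0.1, prior_var=1.0):
    """Posterior over last-layer weights given features Phi (N x D) and targets y (N,)."""
    precision = Phi.T @ Phi / noise_var + np.eye(Phi.shape[1]) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ Phi.T @ y / noise_var
    return mean, cov

def nlm_predict(phi_star, mean, cov, noise_var=0.1):
    """Predictive mean and variance at a single test feature vector phi_star (D,)."""
    return phi_star @ mean, phi_star @ cov @ phi_star + noise_var
```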
Neural network models are known to reinforce hidden data biases, making them unreliable and difficult to interpret. We seek to build models that "know what they do not know" by introducing inductive biases in the function space. We show that periodic activation functions in Bayesian neural networks establish a connection between the prior on the network weights and translation-invariant, stationary Gaussian process priors. Furthermore, we show that this link goes beyond sinusoidal (Fourier) activations by also covering triangular wave and periodic ReLU activation functions. In a series of experiments, we show that periodic activation functions obtain comparable performance for in-domain data and capture sensitivity to perturbed inputs in deep neural networks for out-of-domain detection.
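One of the periodic activation families mentioned above, a triangle wave, can be written in a few lines; this is an illustrative definition, not the paper's exact parameterisation.

```python
# Illustrative triangle-wave activation (periodic, piecewise linear, range [-1, 1]).
import math
import torch

def triangle_wave(x, period=2 * math.pi):
    t = (x / period) % 1.0
    return 4 * torch.abs(t - 0.5) - 1.0
```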
A new class of methods for estimating epistemic uncertainty in deep neural networks with a single forward pass has recently emerged as a valid alternative to Bayesian neural networks. On the premise of informative representations, these deterministic uncertainty methods (DUMs) achieve strong performance in detecting out-of-distribution (OOD) data while adding negligible computational cost at inference time. However, it remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications, both of which are prerequisites for their practical deployment. To this end, we first provide a taxonomy of DUMs and evaluate their calibration under continuous distribution shifts. We then extend them to semantic segmentation. We find that, although DUMs scale to realistic vision tasks and perform well on OOD detection, the practicality of current methods is undermined by poor calibration under distribution shifts.
Uncertainty estimation (UE) techniques, such as Gaussian processes (GPs), Bayesian neural networks (BNNs), and Monte Carlo dropout (MCDropout), aim to improve the interpretability of machine learning models by assigning an estimated uncertainty value to each of their prediction outputs. However, since poorly calibrated uncertainty estimates can have fatal consequences in practice, this paper analyzes the aforementioned techniques. First, we show that GP methods consistently yield high uncertainty estimates on out-of-distribution (OOD) data. Second, we show on a 2D toy example that BNNs and MCDropout do not provide high uncertainty estimates on OOD samples. Finally, we show empirically that this pitfall of BNNs and MCDropout also holds on real-world datasets. Our insights (i) raise awareness for a more cautious use of currently popular UE methods in deep learning, (ii) encourage the development of UE methods that approximate GP-based methods rather than BNNs and MCDropout, and (iii) our empirical setup can be used to verify the OOD performance of any other UE method. The source code is available at https://github.com/epfml/unctemationsiapity-娱乐.
Acquiring labels for supervised learning can be expensive. To improve the sample efficiency of neural network regression, we study active learning methods that adaptively select unlabeled data for labeling. We propose a framework for constructing such methods out of (network-dependent) base kernels, kernel transformations, and selection methods. Our framework encompasses many existing Bayesian methods based on Gaussian process approximations of neural networks as well as non-Bayesian methods. Additionally, we propose to replace the commonly used last-layer features with sketched finite-width neural tangent kernels and to combine them with a novel clustering method. To evaluate different methods, we introduce an open-source benchmark consisting of 15 large tabular regression datasets. Our proposed method outperforms the state of the art on our benchmark, scales to large datasets, and works out of the box without adjusting the network architecture or training code. We provide open-source code that includes efficient implementations of all kernels, kernel transformations, and selection methods, and can be used to reproduce our results.
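To give a flavour of the "base kernel + selection method" recipe, the sketch below performs greedy farthest-point selection on hypothetical network embeddings; it illustrates the kind of building block such a framework composes, not the paper's best-performing combination.

```python
# Illustrative greedy farthest-point batch selection on feature embeddings.
import numpy as np

def greedy_farthest_point(features_pool, features_labeled, batch_size):
    """Pick pool points whose embeddings are farthest from everything selected so far.
    Assumes at least one labeled point is available."""
    dists = np.min(
        np.linalg.norm(features_pool[:, None, :] - features_labeled[None, :, :], axis=-1),
        axis=1,
    )
    selected = []
    for _ in range(batch_size):
        i = int(np.argmax(dists))
        selected.append(i)
        dists = np.minimum(dists, np.linalg.norm(features_pool - features_pool[i], axis=-1))
    return selected
```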
Approximate Bayesian inference for neural networks is considered a robust alternative to standard training, often providing good performance on out-of-distribution data. However, Bayesian neural networks (BNNs) with high-fidelity approximate inference via full-batch Hamiltonian Monte Carlo achieve poor generalization under covariate shift, even underperforming classical estimation. We explain this surprising result, showing how a Bayesian model average can in fact be problematic under covariate shift, particularly in cases where linear dependencies in the input features cause a lack of posterior contraction. We additionally show why the same issue does not affect many approximate inference procedures, or classical maximum a-posteriori (MAP) training. Finally, we propose novel priors that improve the robustness of BNNs to many sources of covariate shift.
Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.
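The recipe is simple enough to sketch directly; `make_model` and `train` below are hypothetical stand-ins for the user's model constructor and standard training loop, not names from the paper.

```python
# Illustrative deep-ensemble construction and prediction (classification case).
import torch

def build_ensemble(make_model, train, num_members=5):
    models = []
    for seed in range(num_members):
        torch.manual_seed(seed)   # a different random initialization per member
        model = make_model()
        train(model)              # standard (non-Bayesian) training
        models.append(model)
    return models

def ensemble_predict(models, x):
    """Average the members' softmax probabilities."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0)
```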
We introduce ensembles of stochastic neural networks to approximate the Bayesian posterior, combining stochastic methods such as dropout with deep ensembles. The stochastic ensembles are formulated as families of distributions and trained to approximate the Bayesian posterior with variational inference. We implement stochastic ensembles based on Monte Carlo dropout, DropConnect and a novel non-parametric version of dropout and evaluate them on a toy problem and CIFAR image classification. For CIFAR, the stochastic ensembles are quantitatively compared to published Hamiltonian Monte Carlo results for a ResNet-20 architecture. We also test the quality of the posteriors directly against Hamiltonian Monte Carlo simulations in a simplified toy model. Our results show that in a number of settings, stochastic ensembles provide more accurate posterior estimates than regular deep ensembles.
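The Monte Carlo dropout ingredient used by these stochastic ensembles can be sketched as follows: dropout stays active at test time and predictions are averaged over several stochastic forward passes per member. Note that `model.train()` also switches layers such as batch normalization to training mode; a refined version would enable only the dropout layers.

```python
# Illustrative MC dropout prediction for a single ensemble member.
import torch

def mc_dropout_predict(model, x, num_samples=30):
    model.train()  # keep dropout stochastic at test time (see caveat above)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(num_samples)]
        )
    return probs.mean(dim=0), probs.var(dim=0)
```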
Modern machine learning methods including deep learning have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive uncertainty. Quantifying uncertainty is especially critical in real-world settings, which often involve input distributions that are shifted from the training distribution due to a variety of factors including sample bias and non-stationarity. In such settings, well-calibrated uncertainty estimates convey information about when a model's output should (or should not) be trusted. Many probabilistic deep learning methods, including Bayesian and non-Bayesian methods, have been proposed in the literature for quantifying predictive uncertainty, but to our knowledge there has not previously been a rigorous, large-scale empirical comparison of these methods under dataset shift. We present a large-scale benchmark of existing state-of-the-art methods on classification problems and investigate the effect of dataset shift on accuracy and calibration. We find that traditional post-hoc calibration does indeed fall short, as do several other previous methods. However, some methods that marginalize over models give surprisingly strong results across a broad spectrum of tasks.