Modern machine learning methods including deep learning have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive uncertainty. Quantifying uncertainty is especially critical in real-world settings, which often involve input distributions that are shifted from the training distribution due to a variety of factors including sample bias and non-stationarity. In such settings, well-calibrated uncertainty estimates convey information about when a model's output should (or should not) be trusted. Many probabilistic deep learning methods, including Bayesian and non-Bayesian methods, have been proposed in the literature for quantifying predictive uncertainty, but to our knowledge there has not previously been a rigorous large-scale empirical comparison of these methods under dataset shift. We present a large-scale benchmark of existing state-of-the-art methods on classification problems and investigate the effect of dataset shift on accuracy and calibration. We find that traditional post-hoc calibration does indeed fall short, as do several other previous methods. However, some methods that marginalize over models give surprisingly strong results across a broad spectrum of tasks.
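Benchmarks like this typically score calibration with the Expected Calibration Error (ECE); the following is a minimal NumPy sketch of the metric (the 15-bin equal-width scheme and the function name are illustrative choices, not taken from the paper):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE: bin predictions by confidence, then compare per-bin accuracy
    against per-bin mean confidence, weighted by bin occupancy."""
    confidences = probs.max(axis=1)           # top-class probability per example
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)

    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap        # weight by fraction of samples in bin
    return ece
```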
Deep neural networks present impressive performance, but their inability to reliably estimate their predictive confidence limits their applicability in high-risk domains. We show that applying a multi-label one-vs-all loss reveals classification ambiguity and reduces model overconfidence. The introduced SLOVA (Single Label One-Vs-All) model redefines the typical one-vs-all predictive probability for the single-label case, in which only one class is the correct answer. The proposed classifier is confident only when a single class has a high probability and the probabilities of the other classes are negligible. Unlike the typical softmax function, SLOVA naturally detects out-of-distribution samples when the probabilities of all classes are small. The model is additionally fine-tuned with exponential calibration, which allows us to accurately align the confidence scores with model accuracy. We validate our approach on three tasks. First, we demonstrate that SLOVA is competitive with the state of the art in in-distribution calibration. Second, SLOVA performs strongly under dataset shift. Finally, our approach performs exceptionally well in the detection of out-of-distribution samples. SLOVA is thus a tool that can be used in a variety of applications where uncertainty modeling is required.
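A hedged sketch of the one-vs-all single-label probability as we read the abstract: class j is assigned high probability only when its own sigmoid is high and every other class's sigmoid is low. The exact parameterization and the exponential calibration step in the paper may differ:

```python
import torch

def slova_probs(logits):
    """Single-label one-vs-all scores:
    p_j = sigmoid(z_j) * prod_{k != j} (1 - sigmoid(z_k)),
    computed in log space for numerical stability."""
    s = torch.sigmoid(logits)
    log_s = torch.log(s.clamp_min(1e-12))
    log_1ms = torch.log((1 - s).clamp_min(1e-12))
    total = log_1ms.sum(dim=-1, keepdim=True)     # sum_k log(1 - s_k)
    log_p = log_s + (total - log_1ms)             # add back the missing k = j term
    return log_p.exp()                            # small for all j on OOD inputs
```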
Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however, these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high-quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.
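Since the recipe is simple, a hedged sketch may help: train several copies of the network from different random initializations with a proper scoring rule, then average their predictive distributions. The training loop below is a minimal illustration under assumed inputs (a `make_model` factory and a data loader), not the paper's exact setup:

```python
import torch
import torch.nn as nn

def train_member(seed, loader, make_model, epochs=10):
    torch.manual_seed(seed)            # different random init per ensemble member
    model = make_model()
    opt = torch.optim.Adam(model.parameters())
    loss_fn = nn.CrossEntropyLoss()    # a proper scoring rule, as the paper advocates
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

def ensemble_predict(models, x):
    """Averaged predictive distribution over the ensemble."""
    with torch.no_grad():
        probs = torch.stack([m.eval()(x).softmax(-1) for m in models])
    return probs.mean(0)
```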
A recent trend in artificial intelligence is the use of pretrained models for language and vision tasks, which have achieved extraordinary performance but also puzzling failures. Probing these models' abilities in diverse ways is therefore critical to the field. In this paper, we explore the reliability of models, where we define a reliable model as one that not only achieves strong predictive performance but also performs well consistently across many decision-making tasks involving uncertainty (e.g., selective prediction, open set recognition), robust generalization (e.g., accuracy and proper scoring rules such as log-likelihood on in-distribution and out-of-distribution datasets), and adaptation (e.g., active learning, few-shot uncertainty). We devise 10 types of tasks over 40 datasets to evaluate different aspects of reliability on both the vision and language domains. To improve reliability, we develop ViT-Plex and T5-Plex, pretrained large model extensions for the vision and language modalities, respectively. Plex greatly improves the state of the art across reliability tasks and simplifies the traditional protocol, as it improves out-of-the-box performance and does not require designing scores or tuning the model for each task. We demonstrate scaling effects over model sizes up to 1B parameters and pretraining dataset sizes up to 4B examples. We also demonstrate Plex's capabilities on challenging tasks including zero-shot open set recognition, active learning, and uncertainty in conversational language understanding.
Discriminative neural networks offer little or no performance guarantees when deployed on data not generated by the same process as the training distribution. On such out-of-distribution (OOD) inputs, the prediction may not only be erroneous, but confidently so, limiting the safe deployment of classifiers in real-world applications. One such challenging application is bacteria identification based on genomic sequences, which holds the promise of early detection of diseases, but requires a model that can output low confidence predictions on OOD genomic sequences from new bacteria that were not present in the training data. We introduce a genomics dataset for OOD detection that allows other researchers to benchmark progress on this important problem. We investigate deep generative model based approaches for OOD detection and observe that the likelihood score is heavily affected by population level background statistics. We propose a likelihood ratio method for deep generative models which effectively corrects for these confounding background statistics. We benchmark the OOD detection performance of the proposed method against existing approaches on the genomics dataset and show that our method achieves state-of-the-art performance. We demonstrate the generality of the proposed method by showing that it significantly improves OOD detection when applied to deep generative models of images.
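A minimal sketch of the two ingredients as we read the abstract: a background model trained on corrupted inputs, and the likelihood-ratio score that cancels shared background statistics. The corruption scheme and mutation rate below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def perturb_sequences(seqs, mu, vocab_size=4, seed=0):
    """Corrupt each position independently with probability mu so a model
    trained on the result captures only population-level background
    statistics (assumed scheme; seqs is an int array of token ids)."""
    rng = np.random.default_rng(seed)
    mask = rng.random(seqs.shape) < mu
    noise = rng.integers(0, vocab_size, size=seqs.shape)
    return np.where(mask, noise, seqs)

def llr_score(log_p_full, log_p_background):
    """Likelihood-ratio OOD score: background statistics shared by both
    generative models cancel; lower values suggest OOD inputs."""
    return log_p_full - log_p_background
```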
We are interested in estimating the uncertainty of deep neural networks, which play an important role in many scientific and engineering problems. In this paper, we present a striking new finding: an ensemble of neural networks with the same weight initialization, trained on datasets shifted by a constant bias, gives rise to slightly inconsistent trained models, where the differences in predictions are a strong indicator of epistemic uncertainty. Using the neural tangent kernel (NTK), we demonstrate that this phenomenon occurs due to a part of the NTK that is not shift-invariant. Since this is achieved via a trivial input transformation, we show that it can be approximated using a single neural network (using a technique we call $\Delta$-UQ), thereby estimating the uncertainty around a prediction by marginalizing out the effect of the biases. We show that $\Delta$-UQ's uncertainty estimates outperform current methods on a variety of benchmarks — outlier rejection, calibration under distribution shift, and sequential design optimization of black-box functions.
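A hedged sketch of anchored inference as we read the abstract: represent the input relative to several anchors, run a single network, and treat the spread over anchors as epistemic uncertainty. The channel-wise concatenation interface is our assumption:

```python
import torch

def delta_uq_predict(model, x, anchors):
    """Anchored prediction sketch: feed (anchor, x - anchor) pairs through one
    network and marginalize over anchors. The mean is the prediction; the
    standard deviation across anchors estimates epistemic uncertainty."""
    preds = []
    for c in anchors:                       # each anchor shaped like one input
        c = c.unsqueeze(0).expand_as(x)
        preds.append(model(torch.cat([c, x - c], dim=1)))
    preds = torch.stack(preds)
    return preds.mean(0), preds.std(0)
```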
Accurate uncertainty quantification is a major challenge in deep learning, as neural networks can make overconfident errors and assign high-confidence predictions to out-of-distribution (OOD) inputs. The most popular approaches to estimate predictive uncertainty in deep learning are methods that combine predictions from multiple neural networks, such as Bayesian neural networks (BNNs) and deep ensembles. However, their practicality in real-time, industrial-scale applications is limited due to the high memory and computational cost. Furthermore, ensembles and BNNs do not necessarily fix all the issues with the underlying member networks. In this work, we study principled approaches to improve the uncertainty properties of a single network, based on a single, deterministic representation. By formalizing uncertainty quantification as a minimax learning problem, we first identify distance awareness, i.e., the model's ability to quantify the distance of a testing example from the training data, as a necessary condition for a DNN to achieve high-quality (i.e., minimax optimal) uncertainty estimation. We then propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs with two simple changes: (1) applying spectral normalization to hidden weights to enforce bi-Lipschitz smoothness in representations and (2) replacing the last output layer with a Gaussian process layer. On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration and out-of-domain detection. Furthermore, SNGP provides complementary benefits to popular techniques such as deep ensembles and data augmentation, making it a simple and scalable building block for probabilistic deep learning. Code is open-sourced at https://github.com/google/uncertainty-baselines
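To make the two changes concrete, here is a hedged PyTorch sketch of an SNGP-style output head; the random-Fourier-feature GP approximation, the layer sizes, and the names are our assumptions rather than the paper's reference implementation:

```python
import math
import torch
import torch.nn as nn

class SNGPHead(nn.Module):
    """(1) Spectral normalization bounds the hidden layer's Lipschitz constant
    (the paper applies it throughout the backbone's residual blocks);
    (2) a random-feature Gaussian process replaces the dense output layer."""
    def __init__(self, in_dim, num_classes, num_features=1024):
        super().__init__()
        self.hidden = nn.utils.spectral_norm(nn.Linear(in_dim, in_dim))
        # Fixed random Fourier features approximating an RBF-kernel GP.
        self.register_buffer("W", torch.randn(num_features, in_dim))
        self.register_buffer("b", 2 * math.pi * torch.rand(num_features))
        self.out = nn.Linear(num_features, num_classes, bias=False)

    def forward(self, h):
        h = torch.relu(self.hidden(h))
        phi = (2.0 / self.W.shape[0]) ** 0.5 * torch.cos(h @ self.W.t() + self.b)
        return self.out(phi)   # GP posterior-mean logits
```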
Deep neural networks are prone to overconfident predictions on outliers. Bayesian neural networks and deep ensembles have both been shown to mitigate this problem to some extent. In this work, we aim to combine the benefits of both approaches by proposing to predict with a Gaussian mixture model posterior consisting of a weighted sum of Laplace approximations of independently trained deep neural networks. The method can be used with any set of pre-trained networks and only requires a small computational and memory overhead compared to regular ensembles. We theoretically validate that our approach mitigates overconfidence "far away" from the training data and empirically compare it against state-of-the-art baselines on standard uncertainty quantification benchmarks.
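A hedged sketch of the prediction step the abstract describes: sample from each member's Laplace (Gaussian) posterior and average the softmax outputs with equal mixture weights. Restricting the posterior to last-layer weights and the `(w_map, cov_chol)` interface are our simplifications:

```python
import torch

def mola_predict(members, features, n_samples=20):
    """Mixture-of-Laplace prediction: Monte-Carlo average over members and
    over samples from each member's Gaussian posterior. Each member is a
    (MAP last-layer weight matrix, Cholesky factor of its covariance) pair."""
    probs = []
    for w_map, cov_chol in members:
        for _ in range(n_samples):
            eps = torch.randn(w_map.numel())
            w = w_map + (cov_chol @ eps).view_as(w_map)   # posterior weight sample
            probs.append((features @ w.t()).softmax(-1))
    return torch.stack(probs).mean(0)
```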
A biosignal is a signal that can be continuously measured from the human body, such as respiratory sounds, cardiac activity (ECG), brain waves (EEG), etc., based on which machine learning models have been developed with very promising performance for automatic disease detection and health status monitoring. However, dataset shift, i.e., when the data distribution at inference differs from the distribution at training, is not uncommon for real-world biosignal-based applications. To improve robustness, probabilistic models with uncertainty quantification are adopted to capture the reliability of predictions. Yet, assessing the quality of the estimated uncertainty remains a challenge. In this work, we propose a framework to evaluate the ability of the estimated uncertainty to capture different types of biosignal dataset shift. In particular, we use three classification tasks based on respiratory sounds and electrocardiography signals to benchmark five representative uncertainty quantification methods. Extensive experiments show that, although ensemble and Bayesian models can provide relatively better uncertainty estimates under dataset shift, all tested models fail to fulfill the promise of reliable prediction and model calibration. Our work paves the way for a comprehensive evaluation of any newly developed biosignal classifier.
Neural networks lack adversarial robustness, i.e., they are vulnerable to adversarial examples that, through small perturbations to inputs, cause incorrect predictions. Furthermore, trust is undermined when a model gives miscalibrated predictions, i.e., when the predicted probability is not a good indicator of how much we should trust our model. In this paper, we study the connection between adversarial robustness and calibration and find that inputs for which the model is sensitive to small perturbations (and thus easily attacked) are more likely to have poorly calibrated predictions. Based on this insight, we investigate whether calibration can be improved by addressing those adversarially unrobust inputs. To this end, we propose Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS), which integrates the correlation between adversarial robustness and calibration into training by adaptively softening the labels of an example based on how easily it can be attacked by an adversary. We find that our method, which takes the adversarial robustness of in-distribution data into consideration, leads to better calibration even under distribution shift. In addition, it can also be applied to ensemble models to further improve model calibration.
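A minimal sketch of robustness-based adaptive label smoothing: labels are softened more for examples that are easier to attack. The linear mapping from a per-example robustness score to a smoothing strength is an illustrative assumption, not the paper's exact rule (which groups training examples by robustness):

```python
import torch
import torch.nn.functional as F

def adaptive_label_smoothing_loss(logits, targets, robustness, max_eps=0.2):
    """Cross-entropy against adaptively smoothed labels: eps grows as the
    per-example adversarial robustness score (in [0, 1]) shrinks."""
    num_classes = logits.size(-1)
    eps = max_eps * (1.0 - robustness).unsqueeze(-1)     # (batch, 1)
    one_hot = F.one_hot(targets, num_classes).float()
    soft = (1 - eps) * one_hot + eps / num_classes       # smoothed targets
    return -(soft * F.log_softmax(logits, -1)).sum(-1).mean()
```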
In this paper, our goal is to exploit heterogeneous temperature scaling as a calibration strategy for out-of-distribution (OOD) detection. Heterogeneity here means that the optimal temperature parameter may differ for each sample, in contrast to the conventional approach of using a single value for the entire distribution. To achieve this, we propose a new training strategy called anchoring that can estimate an appropriate temperature value for each sample, leading to state-of-the-art OOD detection performance on several benchmarks. Using NTK theory, we show that this temperature function estimate is closely linked to the epistemic uncertainty of the classifier, which explains its behavior. In contrast to some of the best-performing OOD detection approaches, our method requires no exposure to additional outlier datasets, no custom calibration objectives, and no model ensembling. Through empirical studies with different OOD detection settings — far OOD, near OOD, and semantically coherent OOD — we establish a highly effective OOD detection approach. Code and models can be accessed here: https://github.com/rushilanirudh/amp
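The core calibration step the abstract describes — a different temperature per sample — is easy to state in code. In the paper the temperatures come from the anchored model; in this sketch they are simply passed in:

```python
import torch

def heterogeneous_temperature_probs(logits, temperatures):
    """Per-sample temperature scaling: example i is calibrated with its own
    T_i instead of one global T fitted on a validation set."""
    return (logits / temperatures.unsqueeze(-1)).softmax(dim=-1)
```

With `temperatures = torch.ones(len(logits))` this reduces to the uncalibrated softmax, and a single fitted scalar broadcast across the batch recovers conventional temperature scaling.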
A variety of new methods for estimating epistemic uncertainty in deep neural networks with a single forward pass has recently emerged as a valid alternative to Bayesian neural networks. On the premise of informative representations, these deterministic uncertainty methods (DUMs) achieve strong performance in detecting out-of-distribution (OOD) data while adding negligible computational cost at inference time. However, it remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications — both prerequisites for their practical deployment. To this end, we first provide a taxonomy of DUMs and evaluate their calibration under continuous distributional shifts. Then, we extend them to semantic segmentation. We find that, while DUMs scale to realistic vision tasks and perform well on OOD detection, the practicality of current methods is undermined by poor calibration under distributional shifts.
It is well known that vision classification models suffer from poor calibration under distribution shift. In this paper, we take a geometric approach to this problem. We propose Geometric Sensitivity Decomposition (GSD), which decomposes the norm of a sample's feature embedding and its angular similarity to a target classifier into an instance-dependent and an instance-independent component. The instance-dependent component captures sensitive information about changes in the input, while the instance-independent component represents insensitive information serving solely to minimize the loss on the training dataset. Inspired by the decomposition, we analyze a simple extension to current softmax-linear models that learns to disentangle the two components during training. On several common vision models, the disentangled model outperforms other calibration methods on standard calibration metrics in the face of out-of-distribution (OOD) data and corruption, with significantly less complexity. Specifically, we surpass the current state of the art by a 30.8% relative improvement in expected calibration error on corrupted CIFAR100. Code is at https://github.com/gt-ripl/geometric-sensitivity-decomposition.git.
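A hedged sketch of the decomposition itself: a linear classifier's logit w_k · f(x) factors into the feature norm, the weight norm, and an angular similarity; GSD treats the feature norm as the instance-dependent part. The function below only illustrates this identity, not the paper's disentangled training:

```python
import torch
import torch.nn.functional as F

def geometric_decomposition(features, classifier_weight):
    """Factor logits = ||f(x)|| * ||w_k|| * cos(theta_k) for a linear head.
    features: (B, D); classifier_weight: (K, D)."""
    feat_norm = features.norm(dim=-1, keepdim=True)                     # (B, 1), instance-dependent
    cos_sim = F.normalize(features, dim=-1) @ \
              F.normalize(classifier_weight, dim=-1).t()                # (B, K), angular similarity
    logits = feat_norm * classifier_weight.norm(dim=-1) * cos_sim       # equals features @ W^T
    return logits, feat_norm, cos_sim
```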
We show that the effectiveness of the well-celebrated Mixup [Zhang et al., 2018] can be further improved if, instead of using it as the sole learning objective, it is utilized as an additional regularizer alongside the standard cross-entropy loss. This simple change not only provides much improved accuracy but also significantly improves the quality of Mixup's predictive uncertainty estimates in most cases, under various forms of covariate shift and in out-of-distribution detection experiments. In fact, we observe that Mixup can otherwise yield substantially degraded performance in detecting out-of-distribution samples because, as we show empirically, it tends to learn models that exhibit high entropy throughout, making it hard to distinguish in-distribution samples from near-distribution ones. To show the efficacy of our approach (RegMixup), we provide thorough analyses and experiments on vision datasets (ImageNet & CIFAR-10/100) and compare it with a suite of recent approaches for reliable uncertainty estimation.
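Since the proposed change is a one-line modification of the objective, a sketch may be useful: keep the standard cross-entropy on the clean batch and add the mixup cross-entropy as a regularizer. The interpolation concentration `alpha` shown is illustrative:

```python
import torch
import torch.nn.functional as F

def regmixup_loss(model, x, y, alpha=10.0):
    """RegMixup-style objective: CE on the clean batch plus a mixup CE term.
    Large alpha concentrates the mixing weight lam around 0.5."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))              # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[idx]
    logits, logits_mix = model(x), model(x_mix)
    ce = F.cross_entropy(logits, y)
    ce_mix = lam * F.cross_entropy(logits_mix, y) \
        + (1 - lam) * F.cross_entropy(logits_mix, y[idx])
    return ce + ce_mix                           # clean loss + mixup regularizer
```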
Approximate Bayesian inference for neural networks is considered a robust alternative to standard training, often providing good performance on out-of-distribution data. However, Bayesian neural networks (BNNs) with high-fidelity approximate inference via full-batch Hamiltonian Monte Carlo achieve poor generalization under covariate shift, even underperforming classical estimation. We explain this surprising result, showing how a Bayesian model average can in fact be problematic under covariate shift, particularly in cases where linear dependencies in the input features cause a lack of posterior contraction. We additionally show why the same issue does not affect many approximate inference procedures, or classical maximum a-posteriori (MAP) training. Finally, we propose novel priors that improve the robustness of BNNs to many sources of covariate shift.
The Bayesian paradigm has the potential to solve core issues of deep neural networks such as poor calibration and data inefficiency. Alas, scaling Bayesian inference to large weight spaces often requires restrictive approximations. In this work, we show that it suffices to perform inference over a small subset of model weights in order to obtain accurate predictive posteriors. The other weights are kept as point estimates. This subnetwork inference framework enables us to use expressive, otherwise intractable, posterior approximations over such subsets. In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: we first obtain a MAP estimate of all weights using the linearized Laplace approximation, and then infer a full-covariance Gaussian posterior over a subnetwork. We propose a subnetwork selection strategy that aims to maximally preserve the model's predictive uncertainty. Empirically, our approach compares favorably to ensembles and less expressive posterior approximations over full networks.
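A hedged sketch of the selection step: pick the weights with the largest (approximate) posterior marginal variance and reserve full-covariance inference for them, keeping the rest at their MAP values. Using diagonal variances as the score is a simplification of the paper's strategy:

```python
import torch

def select_subnetwork(diag_variances, k):
    """Return indices (into the flattened weight vector) of the k weights with
    the largest approximate posterior marginal variance; only these receive a
    full-covariance Gaussian posterior, the rest stay at their MAP estimates."""
    flat = torch.cat([v.flatten() for v in diag_variances])
    return flat.topk(k).indices
```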
In the past years, deep learning has seen an increase of usage in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and be able to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide-Images under domain shift using the H&E-stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration and that Test-Time Data Augmentation can be a promising alternative when choosing an appropriate set of augmentations. Across methods, a rejection of the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution as well as out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
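The rejection experiment can be sketched as follows: sort tiles by confidence, drop the most uncertain fraction, and measure accuracy on what remains. Using the max-probability as the confidence score is our assumption; any of the compared uncertainty estimates can be substituted:

```python
import numpy as np

def accuracy_vs_rejection(probs, labels, fractions=(0.0, 0.1, 0.2, 0.3)):
    """Reject the most-uncertain tiles (lowest top-class probability) and
    report accuracy on the retained ones, per rejection fraction."""
    conf = probs.max(axis=1)
    order = np.argsort(-conf)                 # most confident first
    correct = (probs.argmax(axis=1) == labels)[order]
    return {f: correct[: int(len(correct) * (1 - f))].mean() for f in fractions}
```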
We introduce ensembles of stochastic neural networks to approximate the Bayesian posterior, combining stochastic methods such as dropout with deep ensembles. The stochastic ensembles are formulated as families of distributions and trained to approximate the Bayesian posterior with variational inference. We implement stochastic ensembles based on Monte Carlo dropout, DropConnect and a novel non-parametric version of dropout and evaluate them on a toy problem and CIFAR image classification. For CIFAR, the stochastic ensembles are quantitatively compared to published Hamiltonian Monte Carlo results for a ResNet-20 architecture. We also test the quality of the posteriors directly against Hamiltonian Monte Carlo simulations in a simplified toy model. Our results show that in a number of settings, stochastic ensembles provide more accurate posterior estimates than regular deep ensembles.
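A minimal sketch of test-time prediction with such a stochastic ensemble, assuming members that use dropout: average over both the ensemble members and per-member dropout samples, keeping only the dropout layers stochastic:

```python
import torch

def stochastic_ensemble_predict(models, x, n_samples=8):
    """Average predictions over ensemble members and dropout samples; dropout
    layers stay stochastic at test time, everything else stays in eval mode."""
    probs = []
    with torch.no_grad():
        for m in models:
            m.eval()
            for mod in m.modules():           # re-enable only dropout
                if isinstance(mod, torch.nn.Dropout):
                    mod.train()
            for _ in range(n_samples):
                probs.append(m(x).softmax(-1))
    return torch.stack(probs).mean(0)
```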
Self-Supervised Learning (SSL) is crucial for real-world applications, especially in data-hungry domains such as healthcare and self-driving cars. In addition to a lack of labeled data, these applications also suffer from distributional shifts. Therefore, an SSL method should provide robust generalization and uncertainty estimation on the test dataset to be considered a reliable model in such high-stakes domains. However, existing approaches often focus on generalization without evaluating the model's uncertainty. The ability to compare SSL techniques for improving these estimates is therefore critical for research on the reliability of self-supervision models. In this paper, we explore variants of SSL methods, including Jigsaw Puzzles, Context, Rotation, and Geometric Transformations Prediction for vision, as well as BERT and GPT for language tasks. We train SSL as auxiliary learning for vision and as pre-training for language models, then evaluate generalization (in- and out-of-distribution classification accuracy) and uncertainty (expected calibration error) across different distributional covariate shift datasets, including MNIST-C, CIFAR-10-C, CIFAR-10.1, and MNLI. Our goal is to create a benchmark with outputs from experiments, providing a starting point for new SSL methods in Reliable Machine Learning. All source code to reproduce results is available at https://github.com/hamanhbui/reliable_ssl_baselines.
We propose a framework for predictive uncertainty in neural networks that replaces the traditional Bayesian notion of weight probability density functions (PDFs) with a physics-inspired latent field representation of model weights based on Gaussian reproducing kernel Hilbert space (RKHS) embeddings. This allows us to use perturbation theory from quantum physics to formulate a moment decomposition problem of the model weight-output relationship. The extracted moments reveal successive degrees of regularization of the weight PDF around the local neighborhood of the model output. Such localized moments determine, with great sensitivity, the local heterogeneity of the weight PDF, thereby characterizing the model's predictive uncertainty with greater accuracy than is typical of Bayesian and ensemble methods. We show that this leads to better detection of false model predictions on test data that has undergone a covariate shift away from the training PDF learned by the model. We evaluate our approach against baseline uncertainty quantification methods on several benchmark datasets that are corrupted using common distortion techniques. Our approach provides fast model predictive uncertainty estimates with greater precision and calibration.