With the advancement of deep learning technology, neural networks have demonstrated their excellent ability to provide accurate predictions in many tasks. However, even a high-accuracy model will struggle to gain human trust if neural network calibration is not taken into account. In this regard, the gap between the confidence of the model's predictions and the actual correctness likelihood must be bridged to derive a well-calibrated model. In this paper, we introduce the Neural Clamping Toolkit, the first open-source framework designed to help developers employ state-of-the-art model-agnostic calibrated models. Furthermore, we provide animations and interactive sections in the demonstration to familiarize researchers with calibration in neural networks. A Colab tutorial on utilizing our toolkit is also introduced.
Neural network calibration is an essential task in deep learning to ensure consistency between the confidence of model predictions and the true correctness likelihood. In this paper, we propose a new post-processing calibration method called Neural Clamping, which employs a simple joint input-output transformation on a pre-trained classifier via a learnable universal input perturbation and an output temperature scaling parameter. Moreover, we provide theoretical explanations of why Neural Clamping is better than temperature scaling. Evaluated on the CIFAR-100 and ImageNet image recognition datasets and a variety of deep neural network models, our empirical results show that Neural Clamping significantly outperforms state-of-the-art post-processing calibration methods.
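To make the joint input-output transformation concrete, the sketch below fits a learnable universal input perturbation together with an output temperature on a frozen pre-trained classifier. The function name, optimizer choice, and hyperparameters are illustrative assumptions, not the paper's or the toolkit's exact implementation.

```python
# Minimal sketch of joint input-output calibration in the spirit of Neural Clamping.
# Assumes a frozen pre-trained classifier `model` and a held-out calibration loader.
import torch
import torch.nn.functional as F

def fit_neural_clamping(model, calib_loader, input_shape, epochs=10, lr=0.01, device="cpu"):
    model.eval().to(device)
    for p in model.parameters():
        p.requires_grad_(False)                        # freeze the classifier
    delta = torch.zeros(1, *input_shape, requires_grad=True, device=device)  # universal input perturbation
    log_T = torch.zeros(1, requires_grad=True, device=device)                # log-temperature for stability
    opt = torch.optim.Adam([delta, log_T], lr=lr)
    for _ in range(epochs):
        for x, y in calib_loader:
            x, y = x.to(device), y.to(device)
            logits = model(x + delta)                  # "clamp" the input with the learned perturbation
            loss = F.cross_entropy(logits / log_T.exp(), y)  # temperature-scaled NLL
            opt.zero_grad()
            loss.backward()
            opt.step()
    return delta.detach(), log_T.exp().item()
```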
Confidence calibration, the problem of predicting probability estimates representative of the true correctness likelihood, is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling, a single-parameter variant of Platt Scaling, is surprisingly effective at calibrating predictions.
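For reference, temperature scaling fits a single scalar on a held-out validation set by minimizing negative log-likelihood. The minimal sketch below assumes pre-computed validation logits and labels; the function name and optimizer settings are illustrative.

```python
# Minimal sketch of temperature scaling on a held-out validation set.
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, max_iter=50):
    log_T = torch.zeros(1, requires_grad=True)          # single learnable (log-)temperature
    opt = torch.optim.LBFGS([log_T], lr=0.1, max_iter=max_iter)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_T.exp(), val_labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_T.exp().item()

# At test time, calibrated probabilities are softmax(test_logits / T);
# dividing by a single positive scalar never changes the predicted class.
```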
We address the problem of uncertainty calibration and introduce a novel calibration method, Parameterized Temperature Scaling (PTS). Standard deep neural networks typically yield uncalibrated predictions, which can be transformed into calibrated confidence scores using post-hoc calibration methods. In this contribution, we demonstrate that the performance of accuracy-preserving state-of-the-art post-hoc calibrators is limited by their intrinsic expressive power. We generalize temperature scaling by computing prediction-specific temperatures, parameterized by a neural network. We show with extensive experiments that our novel accuracy-preserving approach consistently outperforms existing algorithms across a large number of model architectures, datasets, and metrics.
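A rough sketch of the prediction-specific temperature idea follows: a small network maps each logit vector to its own positive temperature, so the predicted class is preserved while the confidence is adjusted per sample. The layer sizes and training details are assumptions rather than the paper's exact configuration.

```python
# Sketch of temperature scaling with a per-sample temperature predicted from the logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParameterizedTemperature(nn.Module):
    def __init__(self, num_classes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(num_classes, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, logits):
        T = F.softplus(self.net(logits)) + 1e-3   # keep the per-sample temperature positive
        return logits / T                         # positive scaling preserves the argmax, hence accuracy

# Fit on a held-out calibration set with cross-entropy, as with ordinary temperature scaling.
```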
Model calibration, which is concerned with how frequently the model predicts correctly, not only plays a vital part in statistical model design, but also has substantial practical applications, such as optimal decision-making in the real world. However, it has been discovered that modern deep neural networks are generally poorly calibrated due to the overestimation (or underestimation) of predictive confidence, which is closely related to overfitting. In this paper, we propose Annealing Double-Head, a simple-to-implement but highly effective architecture for calibrating the DNN during training. To be precise, we construct an additional calibration head (a shallow neural network that typically has one latent layer) on top of the last latent layer of the normal model to map the logits to the aligned confidence. Furthermore, a simple annealing technique that dynamically scales the logits produced by the calibration head during training is developed to improve its performance. Under both the in-distribution and distributional-shift circumstances, we exhaustively evaluate our Annealing Double-Head architecture on multiple pairs of contemporary DNN architectures and vision and speech datasets. We demonstrate that our method achieves state-of-the-art model calibration performance without post-processing, while simultaneously providing predictive accuracy comparable to other recently proposed calibration methods on a range of learning tasks.
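A loose sketch of such a two-head design is given below: the usual classification head is paired with a shallow calibration head that rescales the logits, with an annealing factor ramped over training. The head architecture, scaling form, and schedule here are assumptions made for illustration only.

```python
# Loose sketch of a double-head classifier: a classification head plus a shallow
# calibration head that produces a per-sample positive scaling of the logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoubleHeadClassifier(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes, hidden=32):
        super().__init__()
        self.backbone = backbone                               # any feature extractor
        self.cls_head = nn.Linear(feat_dim, num_classes)
        self.calib_head = nn.Sequential(                       # shallow head with one hidden layer
            nn.Linear(num_classes, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, anneal=1.0):
        logits = self.cls_head(self.backbone(x))
        scale = F.softplus(self.calib_head(logits))            # per-sample positive scaling
        return logits * scale * anneal                         # `anneal` is ramped over training epochs
```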
Calibrating neural networks is of utmost importance in safety-critical applications where downstream decision making depends on the predicted probabilities. Measuring calibration error amounts to comparing two empirical distributions. In this work, we introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test, in which the main idea is to compare the respective cumulative probability distributions. From this, by approximating the empirical cumulative distribution with a differentiable function via splines, we obtain a recalibration function that maps the network outputs to actual (calibrated) class assignment probabilities. The spline fitting is performed using a held-out calibration set, and the obtained recalibration function is evaluated on an unseen test set. We tested our method against existing calibration approaches on various image classification datasets, and our spline-based recalibration approach consistently outperforms existing methods on the KS error as well as other commonly used calibration measures. Our code is available at https://github.com/kartikgupta-at-anu/spline-calibration.
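The binning-free, KS-style calibration error described above can be sketched as follows: sort predictions by confidence and compare the cumulative sums of predicted top-1 confidences against the cumulative sums of observed correctness. Variable names are illustrative, and the spline-based recalibration step itself is omitted.

```python
# Sketch of a Kolmogorov-Smirnov-style calibration error (no binning required).
import numpy as np

def ks_calibration_error(confidences, correct):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    order = np.argsort(confidences)
    conf, corr = confidences[order], correct[order]
    n = len(conf)
    cum_conf = np.cumsum(conf) / n   # empirical cumulative of predicted probability
    cum_corr = np.cumsum(corr) / n   # empirical cumulative of observed correctness
    return np.max(np.abs(cum_conf - cum_corr))
```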
Analyzing classification model performance is a crucial task for machine learning practitioners. While practitioners often use count-based metrics derived from the confusion matrix, such as accuracy, many applications, such as weather prediction, sports betting, or patient risk prediction, rely on a classifier's predicted probabilities rather than its predicted labels. In these cases, practitioners are concerned with producing a calibrated model, that is, one whose output probabilities reflect those of the true distribution. Model calibration is often analyzed visually with static reliability diagrams; however, because of the strong aggregation required, traditional calibration visualizations can suffer from a variety of drawbacks. Furthermore, count-based approaches are unable to adequately analyze model calibration. We present Calibrate, an interactive reliability diagram that addresses the aforementioned issues. Calibrate constructs a reliability diagram that is resistant to the drawbacks of traditional approaches and allows for interactive subgroup analysis and instance-level inspection. We demonstrate the utility of Calibrate through use cases on both real-world and synthetic data. We further validate Calibrate with findings from a think-aloud study with data scientists who regularly analyze model calibration.
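For context, the static reliability diagram that such tools build on can be computed as in the sketch below: bin predictions by confidence and compare per-bin accuracy with per-bin mean confidence. The equal-width binning and plotting choices are assumptions, not the Calibrate system's implementation.

```python
# Sketch of a basic binned reliability diagram.
import numpy as np
import matplotlib.pyplot as plt

def reliability_diagram(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(confidences, bins) - 1, 0, n_bins - 1)
    accs, confs = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            accs.append(correct[mask].mean())       # observed accuracy in the bin
            confs.append(confidences[mask].mean())  # mean predicted confidence in the bin
    plt.plot([0, 1], [0, 1], "k--", label="perfect calibration")
    plt.plot(confs, accs, "o-", label="model")
    plt.xlabel("confidence")
    plt.ylabel("accuracy")
    plt.legend()
    plt.show()
```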
The deployment of machine learning classifiers in high-stakes domains requires well-calibrated confidence scores for model predictions. In this paper we introduce the notion of variable-based calibration to characterize calibration properties of a model with respect to a variable of interest, generalizing traditional score-based calibration and metrics such as expected calibration error (ECE). In particular, we find that models with near-perfect ECE can exhibit significant variable-based calibration error as a function of features of the data. We demonstrate this phenomenon both theoretically and in practice on multiple well-known datasets, and show that it can persist after the application of existing recalibration methods. To mitigate this issue, we propose strategies for detection, visualization, and quantification of variable-based calibration error. We then examine the limitations of current score-based recalibration methods and explore potential modifications. Finally, we discuss the implications of these findings, emphasizing that an understanding of calibration beyond simple aggregate measures is crucial for endeavors such as fairness and model interpretability.
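One simple way to probe variable-based miscalibration, sketched below, is to compute a calibration error separately within bins of the variable of interest. The quantile binning and standard binned ECE used here are illustrative assumptions, not the paper's exact estimator.

```python
# Sketch: standard binned ECE, then ECE computed within bins of a variable of interest.
import numpy as np

def ece(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(confidences, bins) - 1, 0, n_bins - 1)
    err, n = 0.0, len(confidences)
    for b in range(n_bins):
        m = idx == b
        if m.any():
            err += m.sum() / n * abs(correct[m].mean() - confidences[m].mean())
    return err

def variable_based_ece(confidences, correct, variable, n_var_bins=5):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    variable = np.asarray(variable, dtype=float)
    qs = np.quantile(variable, np.linspace(0, 1, n_var_bins + 1))
    out = []
    for lo, hi in zip(qs[:-1], qs[1:]):
        m = (variable >= lo) & (variable <= hi)
        out.append((lo, hi, ece(confidences[m], correct[m])))
    return out  # per-bin errors reveal miscalibration that aggregate ECE can hide
```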
Calibrated confidence estimates obtained from neural networks are crucial, especially for safety-critical applications such as autonomous driving or medical image diagnosis. However, although the task of confidence calibration has been studied for classification problems, thorough investigations for object detection and segmentation problems are still missing. Therefore, we focus in this chapter on the investigation of confidence calibration for object detection and segmentation models. We introduce the concept of multivariate confidence calibration, an extension of well-known calibration methods to the tasks of object detection and segmentation. This allows for an extended confidence calibration that is also aware of additional features such as bounding-box/pixel position, shape information, etc. Furthermore, we extend the expected calibration error (ECE) to measure the miscalibration of object detection and segmentation models. We examine several network architectures on MS COCO as well as Cityscapes and show that, given the introduced definition of calibration, object detection and instance segmentation models in particular are intrinsically miscalibrated. Using our proposed calibration methods, we are able to improve calibration, which also has a positive impact on the quality of the segmentation masks.
Although deep learning prediction models have been successful at discriminating between different classes, they often suffer from poor calibration across challenging domains, including healthcare. Moreover, long-tailed distributions pose great challenges in deep learning classification problems, including clinical disease prediction. Several methods have recently been proposed to calibrate deep predictions in computer vision, but it remains unknown how representative models behave across different challenging settings. In this paper, we bridge confidence calibration from computer vision to medical imaging with a comparative study of four high-impact calibration models. Our study is conducted in different settings (natural image classification and lung cancer risk estimation), including balanced vs. imbalanced training sets and computer vision vs. medical imaging. Our results support several key findings: (1) We obtain new conclusions that had not been studied under different learning settings, e.g., combining two calibration models that both mitigate overconfident predictions leads to underconfident predictions, and simpler calibration models from the computer vision domain tend to transfer more readily to medical imaging. (2) We highlight the gap between general computer vision tasks and medical imaging prediction, e.g., a calibration method that is ideal for general computer vision tasks may in fact harm the calibration of medical imaging predictions. (3) We also reinforce previous conclusions obtained in the natural image classification setting. We believe the merits of this study can guide readers in choosing calibration models and in understanding the gap between the general computer vision and medical imaging domains.
Modern convolutional neural networks (CNNs) are known to be overconfident in terms of their calibration on unseen input data; that is, they are more confident than they are accurate. This is undesirable if the predicted probabilities are used for downstream decision making. In terms of accuracy, CNNs are also surprisingly robust to compression techniques such as quantization, which aim to reduce computational and memory costs. We show that this robustness can be partially explained by the calibration behavior of modern CNNs, and is helped by overconfidence. This follows from an intuitive result: low-confidence predictions are more likely to change after quantization, but are also less accurate; high-confidence predictions are more accurate, yet harder to change. Hence, only a minimal drop in post-quantization accuracy is incurred. This suggests a potential conflict in neural network design: miscalibration in the form of overconfidence may lead to better robustness to quantization. We perform experiments applying post-training quantization to a variety of CNNs on the CIFAR-100 and ImageNet datasets.
Calibration strengthens the trustworthiness of black-box models by producing more accurate confidence estimates on given examples. However, little is known about whether model explanations can help confidence calibration. Intuitively, humans look at important feature attributions and decide whether the model is trustworthy. Similarly, the explanations can tell us when the model may or may not know. Inspired by this, we propose a method named CME that leverages model explanations to make the model less confident with non-inductive attributions. The idea is that when the model is not highly confident, it is difficult to identify strong indications of any class, and the tokens accordingly do not have high attribution scores for any class, and vice versa. We conduct extensive experiments on six datasets with two popular pre-trained language models in both in-domain and out-of-domain settings. The results show that CME improves calibration performance in all settings. The expected calibration errors are further reduced when combined with temperature scaling. Our findings highlight that model explanations can help calibrate posterior estimates.
Deep neural networks (DNNs) are prone to miscalibrated predictions, often exhibiting a mismatch between the predicted output and the associated confidence scores. Contemporary model calibration techniques mitigate the problem of overconfident predictions by pushing down the confidence of the winning class while increasing the confidence of the remaining classes across all test samples. However, from a deployment perspective, an ideal model is desired to (i) generate well-calibrated predictions for high-confidence samples with predicted probability, say, >0.95, and (ii) generate a higher proportion of legitimate high-confidence samples. To this end, we propose a novel regularization technique that can be used with classification losses, leading to state-of-the-art calibrated predictions at test time. From a deployment standpoint in safety-critical applications, only high-confidence samples from a well-calibrated model are of interest, as the remaining samples have to undergo manual inspection. Predictive confidence reduction of these potentially "high-confidence samples" is a downside of existing calibration approaches. We mitigate this by proposing a dynamic train-time data pruning strategy that prunes low-confidence samples every few epochs, providing an increase in "confident yet calibrated samples". We demonstrate state-of-the-art calibration performance across image classification benchmarks, reducing training time without much compromise in accuracy. We provide insights into why our dynamic pruning strategy, which prunes low-confidence training samples, leads to an increase in high-confidence samples at test time.
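A rough sketch of the dynamic train-time pruning step is shown below: every few epochs, the current model scores the training samples and the lowest-confidence fraction is dropped. The pruning fraction, schedule, and the Subset bookkeeping are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of pruning low-confidence training samples with the current model.
import torch
from torch.utils.data import DataLoader, Subset

@torch.no_grad()
def prune_low_confidence(model, dataset, keep_frac=0.9, device="cpu"):
    model.eval()
    loader = DataLoader(dataset, batch_size=256, shuffle=False)
    confs = []
    for x, _ in loader:
        probs = torch.softmax(model(x.to(device)), dim=1)
        confs.append(probs.max(dim=1).values.cpu())   # top-1 confidence per sample
    confs = torch.cat(confs)
    keep = confs.argsort(descending=True)[: int(keep_frac * len(confs))]
    return Subset(dataset, keep.tolist())

# Training loop idea: every few epochs, dataset = prune_low_confidence(model, dataset),
# rebuild the DataLoader, and continue training on the retained samples.
```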
The learning to defer (L2D) framework has the potential to make AI systems safer. For a given input, the system can defer the decision to a human if the human is more likely than the model to take the correct action. We study the calibration of L2D systems, investigating whether the probabilities they output are sound. We find that Mozannar & Sontag's (2020) multiclass framework is not calibrated with respect to expert correctness. Moreover, it is not even guaranteed to produce valid probabilities, as its parameterization is degenerate for this purpose. We propose an L2D system based on one-vs-all classifiers that is able to produce calibrated probabilities of expert correctness. Furthermore, our loss function is also a consistent surrogate for multiclass L2D, like that of Mozannar & Sontag (2020). Our experiments verify not only that our system is calibrated, but also that this benefit comes at no cost to accuracy. Our model's accuracy is always comparable to (and often superior to) that of Mozannar & Sontag's (2020) model, on tasks ranging from hate speech detection to galaxy classification to diagnosing skin lesions.
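A much-simplified sketch of the one-vs-all idea follows: each class and the human expert get an independent sigmoid output, so the expert head can be read directly as a probability that the expert is correct. The architecture and deferral rule below are assumptions made for illustration, not the authors' exact formulation or loss.

```python
# Simplified sketch of a one-vs-all learning-to-defer head.
import torch
import torch.nn as nn

class OneVsAllDeferral(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.class_heads = nn.Linear(feat_dim, num_classes)  # one-vs-all class logits
        self.expert_head = nn.Linear(feat_dim, 1)            # logit for "expert is correct"

    def forward(self, features):
        class_probs = torch.sigmoid(self.class_heads(features))
        p_expert_correct = torch.sigmoid(self.expert_head(features))
        return class_probs, p_expert_correct

# Deferral rule (illustrative): defer to the human whenever p_expert_correct
# exceeds the model's best class probability.
```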
This paper provides an introduction to and a detailed overview of the principles and practice of classifier calibration. A calibrated classifier correctly quantifies the level of uncertainty or confidence associated with its instance-wise predictions. This is essential for critical applications, optimal decision making, cost-sensitive classification, and some types of context change. Calibration research has a rich history that predates the birth of machine learning as an academic field by decades. However, the recent increase in interest in calibration has led to new methods and to extensions from the binary to the multiclass setting. The space of options and issues to consider is large, and navigating it requires the right set of concepts and tools. We provide both introductory material and up-to-date technical details of the main concepts and methods, including proper scoring rules and other evaluation metrics, visualization approaches, a comprehensive account of post-hoc calibration methods for binary and multiclass classification, and several advanced topics.
Despite the remarkable accuracy achieved by graph neural networks (GNNs), whether their results are trustworthy remains unexplored. Previous studies suggest that many modern neural networks are over-confident in their predictions; surprisingly, however, we discover that GNNs are primarily in the opposite direction, i.e., GNNs are under-confident. Confidence calibration for GNNs is therefore highly needed. In this paper, we propose a novel trustworthy GNN model by designing a topology-aware post-hoc calibration function. Specifically, we first verify that the confidence distribution in a graph has a homophily property, and this finding inspires us to design a calibration GNN model (CaGCN) to learn the calibration function. CaGCN is able to obtain a unique transformation from the GNN's logits to the calibrated confidence for each node; meanwhile, this transformation preserves the order between classes, satisfying the accuracy-preserving property. Moreover, we apply the calibrated GNN to a self-training framework, showing that more trustworthy pseudo labels can be obtained with the calibrated confidence, further improving performance. Extensive experiments demonstrate the effectiveness of our proposed model in terms of both calibration and accuracy.
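A condensed sketch of topology-aware post-hoc calibration in this spirit is shown below: a small GCN maps each node's logits to a positive per-node temperature, so scaling preserves the order of classes within a node. It assumes the torch_geometric library, and the layer sizes are illustrative rather than CaGCN's exact architecture.

```python
# Sketch of a calibration GCN producing a positive per-node temperature.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class CalibrationGCN(torch.nn.Module):
    def __init__(self, num_classes, hidden=16):
        super().__init__()
        self.conv1 = GCNConv(num_classes, hidden)
        self.conv2 = GCNConv(hidden, 1)

    def forward(self, logits, edge_index):
        h = F.relu(self.conv1(logits, edge_index))
        T = F.softplus(self.conv2(h, edge_index)) + 1e-3  # positive per-node temperature
        return logits / T                                  # class ordering per node is preserved

# Fit on validation nodes of the frozen GNN with cross-entropy; the calibrated
# confidences can then be reused, e.g., to select pseudo labels for self-training.
```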
Transformers, currently the state of the art for natural language understanding (NLU) tasks, are prone to generating uncalibrated predictions or extreme probabilities, which makes the process of taking different decisions based on their output relatively difficult. In this paper, we propose to build several inductive Venn-ABERS predictors (IVAPs), which are guaranteed to be well calibrated under minimal assumptions, on top of a selection of pre-trained transformers. We test their performance over a set of diverse NLU tasks and show that they are capable of producing well-calibrated probabilistic predictions that are uniformly spread over the [0, 1] interval, all while retaining the original model's predictive accuracy.
Recently, there has been a growing interest in applying machine learning methods to problems in engineering mechanics. In particular, there has been significant interest in applying deep learning techniques to predicting the mechanical behavior of heterogeneous materials and structures. Researchers have shown that deep learning methods are able to effectively predict mechanical behavior with low error for systems ranging from engineered composites, to geometrically complex metamaterials, to heterogeneous biological tissue. However, there has been comparatively little attention paid to deep learning model calibration, i.e., the match between predicted probabilities of outcomes and the true probabilities of outcomes. In this work, we perform a comprehensive investigation into ML model calibration across seven open access engineering mechanics datasets that cover three distinct types of mechanical problems. Specifically, we evaluate both model error and model calibration error for multiple machine learning methods, and investigate the influence of ensemble averaging and post hoc model calibration via temperature scaling. Overall, we find that ensemble averaging of deep neural networks is both an effective and consistent tool for improving model calibration, while temperature scaling has comparatively limited benefits. Looking forward, we anticipate that this investigation will lay the foundation for future work in developing mechanics specific approaches to deep learning model calibration.
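The ensemble-averaging baseline discussed above amounts to averaging the members' predicted probabilities, as in the brief sketch below (assuming several independently trained models; the function name is illustrative).

```python
# Sketch of ensemble averaging of softmax outputs, which typically improves calibration
# relative to any single ensemble member.
import torch

@torch.no_grad()
def ensemble_predict(models, x):
    probs = [torch.softmax(m(x), dim=1) for m in models]  # per-member probabilities
    return torch.stack(probs).mean(dim=0)                 # averaged prediction
```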
We aim to quantitatively measure the practical usability of medical image segmentation models: to what extent, how often, and on which samples a model's predictions can be used/trusted. We first propose a measure, Correctness-Confidence Rank Correlation (CCRC), to capture how a prediction's confidence estimate correlates with its correctness score in rank. A model with a high CCRC value means that its prediction confidences reliably suggest which samples' predictions are more likely to be correct. Since CCRC does not capture the actual prediction correctness, it alone is not sufficient to indicate whether a prediction model is both accurate and reliable for use in practice. Therefore, we further propose another method, Usable Region Estimate (URE), which simultaneously quantifies the correctness of predictions and the reliability of confidence assessments in one estimate. URE provides concrete information on to what extent a model's predictions are usable. In addition, the sizes of the usable regions (URs) can be utilized to compare models: a model with a larger UR can be taken as a more usable, and hence better, model. Experiments on six datasets validate that the proposed evaluation methods perform well, providing concrete and concise measures for the practical usability of medical image segmentation models. Code is available at https://github.com/yizhezhang2000/URE.
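A small sketch of a correctness-confidence rank correlation is given below, using Spearman correlation between per-sample confidences and per-sample correctness scores (e.g., per-case Dice). Treating the rank correlation as Spearman's rho is an assumption made for illustration, not necessarily the paper's exact definition of CCRC.

```python
# Sketch: rank correlation between confidence estimates and correctness scores.
import numpy as np
from scipy.stats import spearmanr

def correctness_confidence_rank_correlation(confidences, correctness_scores):
    rho, _ = spearmanr(np.asarray(confidences), np.asarray(correctness_scores))
    return rho  # close to 1 means confidence reliably ranks which predictions are correct
```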
Estimating the predictive uncertainty of pre-trained language models is important for improving the reliability of NLP. Although many previous works focus on quantifying prediction uncertainty, there is little work on interpreting the uncertainty. This paper pushes a step further towards explaining the uncertain predictions of post-calibrated pre-trained language models. We adapt two perturbation-based post-hoc interpretation methods, Leave-one-out and Sampling Shapley, to identify the words in an input that cause the uncertainty in the prediction. We test the proposed methods on BERT and RoBERTa across three tasks, including sentiment classification and natural language inference, in both in-domain and out-of-domain settings. Experiments show that both methods consistently capture the words in an input that cause prediction uncertainty.
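A schematic sketch of the leave-one-out idea follows: remove one token at a time and measure how much the predictive entropy changes. The helper predict_probs, which returns class probabilities for a list of tokens, is hypothetical.

```python
# Sketch of leave-one-out attribution for prediction uncertainty.
import numpy as np

def entropy(p):
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def loo_uncertainty_attribution(tokens, predict_probs):
    base = entropy(predict_probs(tokens))              # uncertainty of the full input
    scores = []
    for i in range(len(tokens)):
        reduced = tokens[:i] + tokens[i + 1:]          # drop the i-th token
        scores.append(base - entropy(predict_probs(reduced)))
    return scores  # positive score: removing the token lowers uncertainty, so it contributed to it
```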