Recent studies have revealed that, beyond conventional accuracy, calibration should also be considered for training modern deep neural networks. To address miscalibration during learning, some methods have explored different penalty functions as part of the learning objective, alongside a standard classification loss, with a hyper-parameter controlling the relative contribution of each term. Nevertheless, these methods share two major drawbacks: 1) the scalar balancing weight is the same for all classes, hindering the ability to address different intrinsic difficulties or imbalance among classes; and 2) the balancing weight is usually fixed without an adaptive strategy, which may prevent reaching the best compromise between accuracy and calibration and requires hyper-parameter search for each application. We propose Class Adaptive Label Smoothing (CALS) for calibrating deep networks, which learns class-wise multipliers during training, yielding a powerful alternative to common label smoothing penalties. Our method builds on a general Augmented Lagrangian approach, a well-established technique in constrained optimization, but we introduce several modifications to tailor it for large-scale, class-adaptive training. Comprehensive evaluation and multiple comparisons on a variety of benchmarks, including standard and long-tailed image classification, semantic segmentation, and text classification, demonstrate the superiority of the proposed method. The code is available at https://github.com/by-liu/CALS.
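A minimal PyTorch sketch of the class-adaptive idea described above, assuming a margin penalty on logit distances and a simplified multiplier ascent step (the margin, the learning rate, and the update rule are illustrative, not the paper's exact Augmented Lagrangian schedule):

```python
import torch
import torch.nn.functional as F

def class_adaptive_loss(logits, targets, lambdas, margin=10.0):
    """Cross-entropy plus class-wise weighted penalties on logit distances.

    lambdas: (num_classes,) non-negative multipliers, one per class,
    replacing the single scalar balancing weight of standard penalties.
    """
    ce = F.cross_entropy(logits, targets)
    dists = logits.max(dim=1, keepdim=True).values - logits   # (B, C)
    penalty = F.relu(dists - margin)                          # violations only
    return ce + (lambdas.unsqueeze(0) * penalty).mean()

@torch.no_grad()
def update_multipliers(logits, lambdas, margin=10.0, lr=0.01):
    """ALM-style ascent: grow a class's multiplier while its constraint is violated."""
    dists = logits.max(dim=1, keepdim=True).values - logits
    violation = (dists - margin).mean(dim=0)    # per-class average violation
    lambdas.add_(lr * violation).clamp_(min=0.0)
    return lambdas
```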
In spite of the dominant performance of deep neural networks, recent works have shown that they are poorly calibrated, resulting in over-confident predictions. Miscalibration can be exacerbated by overfitting due to the minimization of the cross-entropy loss during training, as it promotes the predicted softmax probabilities to match the one-hot label assignments. This yields a pre-softmax activation for the correct class that is significantly larger than the remaining activations. Recent evidence from the literature suggests that loss functions embedding implicit or explicit maximization of the entropy of the predictions yield state-of-the-art calibration performance. We provide a unifying constrained-optimization perspective on current state-of-the-art calibration losses. Specifically, these losses can be viewed as approximations of a linear penalty (or Lagrangian) imposing equality constraints on logit distances. This points to an important limitation of such underlying equality constraints, whose ensuing gradients constantly push towards a non-informative solution, which may prevent reaching the best compromise between the discriminative performance and the calibration of the model during gradient-based optimization. Following our observations, we propose a simple and flexible generalization based on inequality constraints, which imposes a controllable margin on logit distances. Comprehensive experiments on a variety of image classification, semantic segmentation, and NLP benchmarks show that our method sets new state-of-the-art results on these tasks in terms of network calibration, without affecting discriminative performance. The code is available at https://github.com/by-liu/mbls.
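A minimal sketch of the inequality-constraint idea: only logit distances that exceed a controllable margin are penalized, instead of pushing all distances to zero as an equality constraint would (the margin and weight values are illustrative defaults, not the paper's tuned settings):

```python
import torch.nn.functional as F

def margin_based_label_smoothing(logits, targets, margin=10.0, alpha=0.1):
    """Cross-entropy + linear penalty on logit distances violating the margin."""
    ce = F.cross_entropy(logits, targets)
    max_logit = logits.max(dim=1, keepdim=True).values
    # max(0, max_j z_j - z_k - margin) for every class k of every sample
    violations = F.relu(max_logit - logits - margin)
    return ce + alpha * violations.mean()
```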
Modern deep neural networks have achieved remarkable progress in medical image segmentation tasks. However, it has recently been observed that they tend to produce over-confident estimates, even in situations of high uncertainty, leading to poorly calibrated and unreliable models. In this work we introduce Maximum Entropy on Erroneous Predictions (MEEP), a training strategy for segmentation networks that selectively penalizes over-confident predictions, focusing only on misclassified pixels. In particular, we design a regularization term that encourages high entropy for erroneous predictions, increasing network uncertainty in complex scenes. Our method is agnostic to the neural architecture, does not increase model complexity, and can be coupled with multiple segmentation loss functions. We benchmark the proposed strategy on two challenging medical image segmentation tasks: white matter hyperintensity lesions in brain magnetic resonance imaging (MRI), and atrial segmentation in cardiac MRI. Experimental results demonstrate that coupling MEEP with standard segmentation losses not only improves model calibration, but also leads to better segmentation quality.
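A sketch of the selective entropy term, assuming standard dense logits and hard labels (the weight `beta` is a placeholder; the masking-on-errors logic is the core idea):

```python
import torch
import torch.nn.functional as F

def meep_regularizer(logits, targets, beta=0.1):
    """Encourage high entropy only on misclassified pixels.

    logits: (B, C, H, W) raw scores; targets: (B, H, W) integer labels.
    Returns a term to add to the segmentation loss; minimizing it
    maximizes the entropy of erroneous predictions.
    """
    probs = F.softmax(logits, dim=1)
    wrong = (probs.argmax(dim=1) != targets).float()          # (B, H, W)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)   # (B, H, W)
    return -beta * (wrong * entropy).sum() / wrong.sum().clamp(min=1.0)
```

The total training objective would then be, e.g., `dice_or_ce(logits, targets) + meep_regularizer(logits, targets)`.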
Deep neural networks (DNNs) are notorious for making more mistakes on classes whose samples are heavily under-represented during training. Such class imbalance is prevalent in clinical applications and crucial to handle, because the classes with fewer samples often correspond to critical cases (e.g., cancer) where misclassification can have severe consequences. In order not to miss such cases, binary classifiers need to be operated at high true positive rates (TPRs) by setting a higher threshold, but this comes at the cost of very high false positive rates (FPRs) under the class imbalance problem. Existing methods for class imbalance usually do not take this into account. We argue that prediction accuracy should be improved by emphasizing the reduction of FPRs at high TPRs for the problematic misclassifications, i.e., by associating positive (critical) class samples with a higher cost. To this end, we cast the training of a DNN for binary classification as a constrained optimization problem and introduce a novel constraint that enforces a maximal area under the ROC curve (AUC) by prioritizing FPR reduction at high TPRs. We solve the resulting constrained optimization problem using an Augmented Lagrangian method (ALM). Going beyond binary classification, we also propose two possible extensions of the proposed constraint to multi-class classification problems. We present experimental results for image-based binary and multi-class classification applications using an in-house medical imaging dataset, CIFAR10, and CIFAR100. Our results demonstrate that the proposed method improves the accuracy on critical classes in most cases while reducing the misclassification rate for non-critical class samples.
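A generic sketch of the ALM machinery for a single inequality constraint g(θ) ≤ 0 (a textbook form, not the paper's specific ROC constraint; `gamma` and `rho_max` are illustrative):

```python
import torch

def alm_penalty(g, lam, rho):
    """Augmented Lagrangian term for an inequality constraint g <= 0.

    Equals lam*g + (rho/2)*g^2 in the active region, and the constant
    -lam^2 / (2*rho) when the constraint is comfortably satisfied.
    """
    return torch.where(lam + rho * g >= 0,
                       lam * g + 0.5 * rho * g ** 2,
                       -lam ** 2 / (2 * rho))

# Outer loop, after each epoch: multiplier ascent and penalty schedule.
# lam = torch.clamp(lam + rho * g.detach(), min=0.0)
# rho = min(rho * gamma, rho_max)    # gamma > 1, e.g. 1.2
```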
Model calibration, which is concerned with how frequently the model predicts correctly, not only plays a vital part in statistical model design, but also has substantial practical applications, such as optimal decision-making in the real world. However, it has been discovered that modern deep neural networks are generally poorly calibrated due to the overestimation (or underestimation) of predictive confidence, which is closely related to overfitting. In this paper, we propose Annealing Double-Head, a simple-to-implement but highly effective architecture for calibrating the DNN during training. To be precise, we construct an additional calibration head (a shallow neural network that typically has one latent layer) on top of the last latent layer of the normal model to map the logits to the aligned confidence. Furthermore, a simple annealing technique that dynamically scales the logits produced by the calibration head during training is developed to improve its performance. Under both in-distribution and distributional-shift circumstances, we exhaustively evaluate our Annealing Double-Head architecture on multiple pairs of contemporary DNN architectures and vision and speech datasets. We demonstrate that our method achieves state-of-the-art model calibration performance without post-processing while simultaneously providing comparable predictive accuracy to other recently proposed calibration methods on a range of learning tasks.
Deep neural networks (DNNs) are prone to miscalibrated predictions, often exhibiting a mismatch between the predicted output and the associated confidence scores. Contemporary model calibration techniques mitigate the problem of overconfident predictions by pushing down the confidence of the winning class while increasing the confidence of the remaining classes across all test samples. However, from a deployment perspective, an ideal model is desired to (i) generate well-calibrated predictions for high-confidence samples with predicted probability, say, >0.95, and (ii) generate a higher proportion of legitimate high-confidence samples. To this end, we propose a novel regularization technique that can be used with classification losses, leading to state-of-the-art calibrated predictions at test time. From a deployment standpoint in safety-critical applications, only high-confidence samples from a well-calibrated model are of interest, as the remaining samples have to undergo manual inspection. Reducing the predictive confidence of these potentially "high-confidence samples" is a downside of existing calibration approaches. We mitigate this by proposing a dynamic train-time data pruning strategy that prunes low-confidence samples every few epochs, providing an increase in confident yet calibrated samples. We demonstrate state-of-the-art calibration performance across image classification benchmarks, reducing training time without much compromise in accuracy. We provide insights into why our dynamic pruning strategy, which prunes low-confidence training samples, leads to an increase in high-confidence samples at test time.
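A sketch of the dynamic pruning step, assuming a simple threshold rule evaluated every few epochs (the threshold and batch size are placeholders; the paper's pruning criterion may differ):

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset

@torch.no_grad()
def prune_low_confidence(model, dataset, keep_indices, threshold=0.4, device="cuda"):
    """Drop training samples whose current predicted confidence is below threshold."""
    model.eval()
    loader = DataLoader(Subset(dataset, keep_indices), batch_size=256)
    confs = []
    for x, _ in loader:
        probs = F.softmax(model(x.to(device)), dim=1)
        confs.append(probs.max(dim=1).values.cpu())
    confs = torch.cat(confs)
    return [i for i, c in zip(keep_indices, confs.tolist()) if c >= threshold]
```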
Real-world data universally confronts a severe class-imbalance problem and exhibits a long-tailed distribution, i.e., most labels are associated with only limited instances. Naive models supervised by such datasets prefer the dominant labels, encounter serious generalization challenges, and become poorly calibrated. We propose two novel methods from a prior perspective to alleviate this dilemma. First, we derive a balance-oriented data augmentation named Uniform Mixup (UniMix) to promote mixup in long-tailed scenarios, which adopts an advanced mixing factor and sampler in favor of the minority classes. Second, motivated by Bayesian theory, we figure out the Bayes Bias (Bayias), an inherent bias caused by the inconsistency of priors, and compensate for it as a modification of the standard cross-entropy loss. We further prove that both of the proposed methods ensure classification calibration, theoretically and empirically. Extensive experiments verify that our strategies contribute to better-calibrated models, and their combination achieves state-of-the-art performance on CIFAR-LT, ImageNet-LT, and iNaturalist 2018.
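A sketch of the prior-compensation idea, in the spirit of logit adjustment (the exact compensation term in the paper may differ; `class_counts` is the per-class sample count of the training set):

```python
import torch
import torch.nn.functional as F

def prior_compensated_ce(logits, targets, class_counts):
    """Cross-entropy with a log-prior offset added to the logits.

    Offsetting logits by log(train prior) counteracts the bias introduced by
    the mismatch between the imbalanced training prior and a uniform test prior.
    """
    prior = class_counts.float() / class_counts.sum()
    return F.cross_entropy(logits + torch.log(prior + 1e-12), targets)
```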
Although deep learning prediction models have been successful at discriminating between different classes, they often suffer from poor calibration across challenging domains, including healthcare. Moreover, long-tailed distributions pose great challenges in deep learning classification problems, including clinical disease prediction. Some methods have recently been proposed to calibrate deep predictions in computer vision, but there is no study of how representative models behave across different challenging contexts. In this paper, we bridge the confidence calibration gap from computer vision to medical imaging with a comparative study of four high-impact calibration models. Our study is conducted in different settings (natural image classification and lung cancer risk estimation), covering balanced versus imbalanced training sets and computer vision versus medical imaging. Our results support key findings: (1) We obtain new conclusions that have not been studied across different learning settings, e.g., combining two calibration models that each mitigate over-confident predictions leads to under-confident predictions, and simpler calibration models from the computer vision domain tend to transfer more easily to medical imaging. (2) We highlight the gap between general computer vision tasks and medical imaging prediction, e.g., calibration methods that are ideal for general computer vision tasks may in fact harm the calibration of medical imaging predictions. (3) We also reinforce previous conclusions from natural image classification settings. We believe that the merits of this study can guide readers in choosing calibration models and in understanding the gap between the general computer vision and medical imaging domains.
In this paper we study the post-hoc calibration of modern neural networks, a problem that has drawn a lot of attention in recent years. Many calibration methods of varying complexity have been proposed for the task, but there is no consensus about how expressive they should be. We focus on the task of confidence scaling, specifically on post-hoc methods that generalize temperature scaling, which we refer to as the adaptive temperature scaling family. We analyze expressive functions that improve calibration and propose interpretable methods. We show that when there is plenty of data, complex models like neural networks yield better performance, but they are prone to fail when the amount of data is limited, a common situation for certain post-hoc calibration applications such as medical diagnosis. We study the functions that expressive methods learn under ideal conditions and design simpler methods with a strong inductive bias towards these well-performing functions. Concretely, we propose Entropy-based Temperature Scaling, a simple method that scales the confidence of a prediction according to its entropy. Results show that our method obtains state-of-the-art performance compared to other methods, and, unlike complex models, it is robust to data scarcity. Moreover, our proposed model enables a deeper interpretation of the calibration process.
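A sketch of an entropy-conditioned temperature, assuming a simple affine-plus-softplus parameterization fit on validation data (the exact functional form here is an assumption, not the paper's):

```python
import torch
import torch.nn.functional as F

class EntropyTemperatureScaling(torch.nn.Module):
    """Scale logits by a temperature that depends on the prediction's entropy."""
    def __init__(self):
        super().__init__()
        self.a = torch.nn.Parameter(torch.zeros(1))
        self.b = torch.nn.Parameter(torch.ones(1))

    def forward(self, logits):
        probs = F.softmax(logits, dim=1)
        entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1, keepdim=True)
        temperature = F.softplus(self.a * entropy + self.b) + 1e-3  # keep T > 0
        return logits / temperature
```

As with plain temperature scaling, `a` and `b` would be fit post hoc by minimizing the NLL on held-out logits.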
The Jaccard index, also known as Intersection over Union (IoU), is one of the most critical evaluation metrics in semantic image segmentation. However, direct optimization of the IoU score is very difficult because the learning objective is neither differentiable nor decomposable. Although some algorithms have been proposed to optimize its surrogates, no guarantee of generalization ability is provided. In this paper, we propose a margin calibration method, which can be directly used as a learning objective, for improved generalization of IoU over the data distribution, underpinned by a rigid lower bound. This scheme theoretically ensures better segmentation performance in terms of IoU score. We evaluated the effectiveness of the proposed margin calibration method on seven image datasets, showing substantial improvements in IoU score over other learning objectives with deep segmentation models.
Imbalanced datasets are commonplace in modern machine learning problems. The presence of under-represented classes or groups with sensitive attributes results in concerns about generalization and fairness. Such concerns are further exacerbated by the fact that large-capacity deep networks can perfectly fit the training data and appear to achieve perfect accuracy and fairness during training, but perform poorly during testing. To address these challenges, we propose AutoBalance, a bi-level optimization framework that automatically designs a training loss function to optimize a blend of accuracy and fairness-seeking objectives. Specifically, a lower-level problem trains the model weights, and an upper-level problem tunes the loss function by monitoring and optimizing the desired objective over the validation data. Our loss design enables personalized treatment for classes/groups by employing a parametric cross-entropy loss and individualized data augmentation schemes. We evaluate the benefits and performance of our approach for imbalanced and group-sensitive classification scenarios. Extensive empirical evaluations demonstrate the benefits of AutoBalance over state-of-the-art approaches. Our experimental findings are complemented by theoretical insights on loss function design and the benefits of the train-validation split. All code is available open-source.
Neural network calibration is an essential task in deep learning to ensure consistency between the confidence of model predictions and the true correctness likelihood. In this paper, we propose a new post-processing calibration method called Neural Clamping, which employs a simple joint input-output transformation on a pre-trained classifier via a learnable universal input perturbation and an output temperature scaling parameter. Moreover, we provide a theoretical explanation of why Neural Clamping is provably better than temperature scaling. Evaluated on the CIFAR-100 and ImageNet image recognition datasets and a variety of deep neural network models, our empirical results show that Neural Clamping significantly outperforms state-of-the-art post-processing calibration methods.
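A sketch of the joint input-output transformation, assuming the classifier is frozen and only the perturbation and temperature are trained on held-out data:

```python
import torch

class NeuralClamping(torch.nn.Module):
    """Learnable universal input perturbation + output temperature scaling."""
    def __init__(self, classifier, input_shape):
        super().__init__()
        self.classifier = classifier.eval()
        for p in self.classifier.parameters():
            p.requires_grad_(False)                # calibrate, don't retrain
        self.delta = torch.nn.Parameter(torch.zeros(*input_shape))
        self.temperature = torch.nn.Parameter(torch.ones(1))

    def forward(self, x):
        logits = self.classifier(x + self.delta)   # input-side transformation
        return logits / self.temperature           # output-side transformation
```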
Confidence calibration, the problem of predicting probability estimates representative of the true correctness likelihood, is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling, a single-parameter variant of Platt scaling, is surprisingly effective at calibrating predictions.
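A minimal sketch of temperature scaling as described above: a single parameter T fit by minimizing NLL on held-out validation logits (LBFGS is one common choice of optimizer, not mandated by the paper):

```python
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, max_iter=50):
    """Return T > 0 minimizing the NLL of softmax(logits / T) on validation data."""
    log_t = torch.zeros(1, requires_grad=True)     # optimize log T for positivity
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()   # calibrated probabilities: softmax(logits / T)
```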
We present a new loss function called Distribution-Balanced Loss for multi-label recognition problems that exhibit long-tailed class distributions. Compared to conventional single-label classification problems, multi-label recognition problems are often more challenging due to two significant issues, namely the co-occurrence of labels and the dominance of negative labels (when treated as multiple binary classification problems). The Distribution-Balanced Loss tackles these issues through two key modifications to the standard binary cross-entropy loss: 1) a new way to re-balance the weights that takes into account the impact caused by label co-occurrence, and 2) a negative-tolerant regularization to mitigate the over-suppression of negative labels. Experiments on Pascal VOC and COCO show that models trained with this new loss function achieve significant performance gains over existing methods. Code and models are available at: https://github.com/wutong16/distributionbalancedloss.
How to improve discriminative feature learning is central in classification. Existing works address this problem by explicitly increasing inter-class separability and intra-class similarity, whether by constructing positive and negative pairs for contrastive learning or imposing tighter class-separating margins. These methods do not exploit the similarity between different classes, as they adhere to the i.i.d. assumption on the data. In this paper, we embrace the real-world data distribution setting in which some classes share semantic overlaps due to their similar appearances or concepts. Under this hypothesis, we propose a novel regularization to improve discriminative learning. We first calibrate the estimated highest likelihood of one sample based on its semantically neighboring classes, then encourage the overall likelihood predictions to be deterministic by imposing an adaptive exponential penalty. As the gradient of the proposed method is roughly proportional to the uncertainty of the predicted likelihoods, we name it adaptive discriminative regularization (ADR), trained along with a standard cross-entropy loss in classification. Extensive experiments demonstrate that it can yield consistent and non-trivial performance improvements in a variety of visual classification tasks (over 10 benchmarks). Furthermore, we find it is robust to long-tailed and noisy-label data distributions. Its flexible design enables its compatibility with mainstream classification architectures and losses.
Long-tailed distributions are a common phenomenon in the real world. Large-scale image datasets gathered in the wild inevitably exhibit the long-tailed property, and models trained with imbalanced data can obtain high performance for the over-represented categories but struggle for the under-represented ones, leading to biased predictions and performance degradation. To address this challenge, we propose a novel de-biasing method named Inverse Image Frequency (IIF). IIF is a multiplicative margin adjustment transformation of the logits in the classification layer of a convolutional neural network. Our method achieves stronger performance than similar works and is especially useful for downstream tasks such as long-tailed instance segmentation, as it produces fewer false positive detections. Our extensive experiments show that IIF surpasses the state of the art on many long-tailed benchmarks, such as ImageNet-LT, CIFAR-LT, Places-LT, and LVIS, reaching 55.8% top-1 accuracy with ResNet50 on ImageNet-LT and 26.2% segmentation AP with MaskRCNN on LVIS. Code is available at https://github.com/kostas1515/iif
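A sketch of the multiplicative logit adjustment, using an IDF-style weight as one plausible instantiation (the paper studies several inverse-frequency variants; this form is illustrative):

```python
import torch

def iif_adjusted_logits(logits, class_counts):
    """Scale each class logit by a weight that grows as the class gets rarer."""
    freq = class_counts.float() / class_counts.sum()
    iif = torch.log(1.0 / freq)        # analogous to inverse document frequency
    return logits * iif.unsqueeze(0)   # broadcast over the batch
```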
Calibration of neural networks is a topical problem that is becoming increasingly important as neural networks increasingly underpin real-world applications. The problem is especially noticeable when using modern neural networks, for which there is a significant difference between the confidence of the model and the probability of correct prediction. Various strategies have been proposed to improve calibration, yet accurate calibration remains challenging. We propose a novel framework with two contributions: introducing a differentiable surrogate for expected calibration error (DECE) that allows calibration quality to be directly optimized, and a meta-learning framework that uses DECE to optimize for validation set calibration with respect to model hyper-parameters. The results show that we achieve competitive performance with state-of-the-art calibration approaches. Our framework opens up a new avenue and toolset for tackling calibration, which we believe will inspire further work on this important challenge.
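A sketch of a soft-binned, differentiable ECE surrogate (one plausible construction; the paper's exact DECE formulation may differ in its soft-binning details):

```python
import torch
import torch.nn.functional as F

def soft_binned_ece(confidences, correct, n_bins=15, sharpness=100.0):
    """Differentiable ECE: hard bin assignments are replaced by a softmax
    over (negative squared) distances to bin centers, so gradients flow.

    confidences: (N,) max predicted probabilities; correct: (N,) 0/1 floats.
    """
    centers = (torch.arange(n_bins, dtype=confidences.dtype) + 0.5) / n_bins
    dist = -(confidences.unsqueeze(1) - centers.unsqueeze(0)) ** 2  # (N, n_bins)
    membership = F.softmax(sharpness * dist, dim=1)
    mass = membership.sum(dim=0) + 1e-8                  # per-bin sample weight
    avg_conf = (membership * confidences.unsqueeze(1)).sum(dim=0) / mass
    avg_acc = (membership * correct.unsqueeze(1)).sum(dim=0) / mass
    return ((mass / mass.sum()) * (avg_conf - avg_acc).abs()).sum()
```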
Long-tailed datasets, where a few classes or categories (known as the majority or head classes) have a much higher number of data samples than the other classes (known as the minority or tail classes), are frequently encountered in the real world. Training deep neural networks on such datasets gives rise to a bias towards the head classes. So far, researchers have proposed multiple weighted-loss and data re-sampling techniques to reduce this bias. However, most such techniques assume that the tail classes are always the hardest classes to learn and therefore need more weight or attention. Here, we argue that this assumption might not always hold. Therefore, we propose a novel approach to dynamically measure the instantaneous difficulty of each class during the training phase of the model. Furthermore, we use the difficulty measure of each class to design a novel weighted-loss technique called 'Class-wise Difficulty-Based Weighted (CDB-W) loss' and a novel data-sampling technique called 'Class-wise Difficulty-Based Sampling (CDB-S)'. To verify the wide applicability of the CDB methods, we conducted extensive experiments on multiple tasks, such as image classification, object detection, instance segmentation, and video action classification. The results verify that CDB-W loss and CDB-S can achieve state-of-the-art results on many class-imbalanced datasets, such as ImageNet-LT, LVIS, and EGTEA, that resemble real-world use cases.
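A sketch of the difficulty-based weighting, assuming difficulty is taken as one minus the class's current accuracy (the exponent `tau` and the split used to measure accuracy are implementation choices):

```python
import torch.nn.functional as F

def cdb_w_loss(logits, targets, per_class_accuracy, tau=1.0):
    """Cross-entropy re-weighted by the instantaneous difficulty of each class.

    per_class_accuracy: (C,) accuracies measured on the fly, e.g. on a
    validation split at the current epoch.
    """
    difficulty = 1.0 - per_class_accuracy          # harder class -> larger weight
    weights = (difficulty ** tau).to(logits.device)
    return F.cross_entropy(logits, targets, weight=weights)
```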
Real-world data tends to be heavily imbalanced and severely skews data-driven deep neural networks, which makes Long-Tailed Recognition (LTR) a massively challenging task. Existing LTR methods seldom train Vision Transformers (ViTs) with Long-Tailed (LT) data, while the off-the-shelf pretrained weights of ViTs always lead to unfair comparisons. In this paper, we systematically investigate the performance of ViTs in LTR and propose LiVT to train ViTs from scratch only with LT data. With the observation that ViTs suffer more severe LTR problems, we conduct Masked Generative Pretraining (MGP) to learn generalized features. With ample and solid evidence, we show that MGP is more robust than supervised manners. In addition, the Binary Cross-Entropy (BCE) loss, which shows conspicuous performance with ViTs, encounters predicaments in LTR. We further propose the balanced BCE to ameliorate it with strong theoretical grounding. Specifically, we derive the unbiased extension of the Sigmoid and compensate extra logit margins to deploy it. Our Bal-BCE contributes to the quick convergence of ViTs in just a few epochs. Extensive experiments demonstrate that with MGP and Bal-BCE, LiVT successfully trains ViTs well without any additional data and outperforms comparable state-of-the-art methods significantly, e.g., our ViT-B achieves 81.0% Top-1 accuracy on iNaturalist 2018 without bells and whistles. Code is available at https://github.com/XuZhengzhuo/LiVT.
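A sketch of the extra-logit-margin idea behind the balanced BCE, in the spirit of adding a log-prior margin before the sigmoid (a simplification; the paper's derived margins for positives and negatives are more refined):

```python
import torch
import torch.nn.functional as F

def balanced_bce(logits, targets_onehot, class_counts):
    """BCE with a log-prior margin added to the logits.

    targets_onehot: (B, C) float one-hot labels; class_counts: (C,) samples/class.
    """
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + torch.log(prior + 1e-12)   # compensate the LT train prior
    return F.binary_cross_entropy_with_logits(adjusted, targets_onehot)
```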
Long-tailed learning aims to tackle the crucial challenge that head classes dominate the training procedure under severe class imbalance in real-world scenarios. However, little attention has been paid to how to quantify the dominance severity of head classes in the representation space. Motivated by this, we generalize the cosine-based classifier to a von Mises-Fisher (vMF) mixture model, denoted as the vMF classifier, which enables the representation quality on the hypersphere to be measured quantitatively by computing the distribution overlap coefficient. To our knowledge, this is the first work to measure the representation quality of classifiers and features from the perspective of the distribution overlap coefficient. On top of it, we formulate inter-class discrepancy and class-feature consistency loss terms to alleviate the interference among classifier weights and to align features with the corresponding classifier weights. Furthermore, a novel post-training calibration algorithm is devised to boost performance at zero cost via the inter-class overlap coefficients. Our method outperforms previous work by a large margin and achieves state-of-the-art performance on long-tailed image classification, semantic segmentation, and instance segmentation tasks (e.g., we achieve 55.0% overall accuracy with ResNeXt-50 on ImageNet-LT). Our code is available at https://github.com/vipailab/vmf_op.