Only increasing accuracy without considering uncertainty may negatively impact Deep Neural Network (DNN) decision-making and decrease its reliability. This paper proposes five combined preprocessing and post-processing methods for time-series binary classification problems that simultaneously increase the accuracy and reliability of DNN outputs, applied to a 5G UAV security dataset. These techniques use DNN outputs as input parameters and process them in different ways. Two methods use a well-known Machine Learning (ML) algorithm as a complement, and the other three use only the confidence values that the DNN estimates. We compare seven different metrics, namely the Expected Calibration Error (ECE), Maximum Calibration Error (MCE), Mean Confidence (MC), Mean Accuracy (MA), Normalized Negative Log Likelihood (NLL), Brier Score Loss (BSL), and Reliability Score (RS), and the tradeoffs between them, to evaluate the proposed hybrid algorithms. First, we show that the eXtreme Gradient Boosting (XGB) classifier might not be reliable for binary classification under the conditions this work presents. Second, we demonstrate that at least one of the proposed methods achieves better results than classification in the DNN softmax layer. Finally, we show that the proposed methods may improve accuracy and reliability with better uncertainty calibration, under the assumption that the RS captures the difference between the MC and MA metrics and that this difference should be zero to increase reliability. For example, Method 3 attains the best RS of 0.65, even compared to the XGB classifier, which achieves an RS of 7.22.
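For orientation, below is a minimal sketch (not taken from the paper) of how the binned calibration metrics listed above are commonly computed, under the stated assumption that RS is the gap between mean confidence and mean accuracy; the bin count, function names, and the toy data are purely illustrative.

```python
import numpy as np

def calibration_metrics(confidences, correct, n_bins=10):
    """Compute ECE, MCE, mean confidence (MC), mean accuracy (MA) and a
    reliability score RS = |MC - MA| (possibly reported in percentage
    points in the paper) from binary predictions.

    confidences : array of predicted confidences in [0, 1]
    correct     : array of 0/1 flags, 1 if the prediction was correct
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce = 0.0, 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += mask.mean() * gap           # weighted by bin population
        mce = max(mce, gap)                # worst-case bin gap
    mc, ma = confidences.mean(), correct.mean()
    return {"ECE": ece, "MCE": mce, "MC": mc, "MA": ma, "RS": abs(mc - ma)}

# toy usage: synthetic confidences that are well calibrated by construction
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 1000)
corr = rng.random(1000) < conf
print(calibration_metrics(conf, corr))
```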
Model calibration, which is concerned with how frequently the model predicts correctly, not only plays a vital part in statistical model design but also has substantial practical applications, such as optimal decision-making in the real world. However, it has been discovered that modern deep neural networks are generally poorly calibrated due to the overestimation (or underestimation) of predictive confidence, which is closely related to overfitting. In this paper, we propose Annealing Double-Head, a simple-to-implement yet highly effective architecture for calibrating the DNN during training. To be precise, we construct an additional calibration head, a shallow neural network that typically has one latent layer, on top of the last latent layer in the normal model to map the logits to the aligned confidence. Furthermore, a simple annealing technique that dynamically scales the logits by the calibration head during training is developed to improve its performance. Under both in-distribution and distributional-shift circumstances, we exhaustively evaluate our Annealing Double-Head architecture on multiple pairs of contemporary DNN architectures and vision and speech datasets. We demonstrate that our method achieves state-of-the-art model calibration performance without post-processing while providing predictive accuracy comparable to other recently proposed calibration methods on a range of learning tasks.
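A rough sketch of the general double-head idea follows, under several assumptions: the hidden width, the Softplus output, and the linear annealing blend below are placeholders, not the paper's actual architecture or schedule.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoubleHeadClassifier(nn.Module):
    """Sketch: a normal classification head plus a shallow calibration head
    that predicts a positive scaling factor for the logits."""

    def __init__(self, in_dim: int, feat_dim: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.cls_head = nn.Linear(feat_dim, n_classes)
        # calibration head: one hidden layer, outputs a single positive scale
        self.cal_head = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Softplus()
        )

    def forward(self, x, anneal: float = 1.0):
        h = self.backbone(x)
        logits = self.cls_head(h)
        scale = self.cal_head(h) + 1e-6
        # anneal in [0, 1] blends raw and head-scaled logits (assumed schedule)
        calibrated = logits / (anneal * scale + (1.0 - anneal))
        return logits, calibrated

model = DoubleHeadClassifier(in_dim=32, feat_dim=16, n_classes=10)
x = torch.randn(8, 32)
logits, calibrated = model(x, anneal=0.5)
probs = F.softmax(calibrated, dim=-1)
```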
This paper presents an introduction to and detailed overview of the principles and practice of classifier calibration. A calibrated classifier correctly quantifies the level of uncertainty or confidence associated with its instance-wise predictions. This is critical for high-stakes applications, optimal decision making, cost-sensitive classification, and some types of context change. Calibration research has a rich history that predates the birth of machine learning as an academic field by decades. However, the recent surge of interest in calibration has led to new methods and to extensions from binary to multiclass settings. The space of options and issues to consider is large, and navigating it requires the right set of concepts and tools. We provide both introductory material and up-to-date technical details of the main concepts and methods, including proper scoring rules and other evaluation metrics, visualization approaches, a comprehensive account of post-hoc calibration methods for binary and multiclass classification, and several advanced topics.
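As a concrete taste of the post-hoc calibration methods such an overview covers, here is a small scikit-learn sketch comparing Platt scaling and isotonic regression on a synthetic binary task; the dataset and base model are arbitrary choices, not tied to the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

raw = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
# Post-hoc calibration: Platt scaling ("sigmoid") or isotonic regression,
# fitted on held-out folds via cross-validation.
platt = CalibratedClassifierCV(RandomForestClassifier(n_estimators=100, random_state=0),
                               method="sigmoid", cv=5).fit(X_tr, y_tr)
iso = CalibratedClassifierCV(RandomForestClassifier(n_estimators=100, random_state=0),
                             method="isotonic", cv=5).fit(X_tr, y_tr)

for name, clf in [("raw", raw), ("platt", platt), ("isotonic", iso)]:
    p = clf.predict_proba(X_te)[:, 1]
    print(name, "Brier score:", round(brier_score_loss(y_te, p), 4))
```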
This work proposes ProBoost, a new boosting algorithm for probabilistic classifiers. The algorithm uses the epistemic uncertainty of each training sample to determine the most challenging/uncertain samples. The relevance of these samples is then increased for the next weak learner, producing a sequence that progressively focuses on the samples with the highest uncertainty. Finally, the outputs of the weak learners are combined into a weighted ensemble of classifiers. Three approaches for manipulating the training set are proposed: undersampling, oversampling, and weighting the training samples according to the uncertainty estimated by the weak learners. In addition, two approaches for combining the ensemble are studied. The weak learners considered in this paper are standard convolutional neural networks, and the probabilistic models used for uncertainty estimation rely on variational inference or Monte Carlo dropout. Experimental evaluation on the MNIST benchmark dataset shows that ProBoost yields significant performance improvements. The results are further highlighted by the relative achievable improvement proposed in this work, a metric showing that a model with only four weak learners improves this metric by more than 12% (in accuracy, sensitivity, or specificity) compared with the model without ProBoost.
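The core mechanism, reweighting training samples for the next weak learner according to their estimated uncertainty, can be illustrated with the hedged sketch below; the entropy measure, the exponential weighting, and the resampling step are assumptions for illustration, not ProBoost's exact procedure.

```python
import numpy as np

def mc_dropout_entropy(prob_samples):
    """Predictive entropy from T stochastic forward passes.
    prob_samples : array of shape (T, N, C) with class probabilities."""
    mean_p = prob_samples.mean(axis=0)                      # (N, C)
    return -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)   # (N,)

def next_learner_weights(entropy, temperature=1.0):
    """Turn per-sample uncertainty into sampling weights for the next
    weak learner: more uncertain samples get proportionally more weight."""
    w = np.exp(entropy / temperature)
    return w / w.sum()

# toy usage: 30 MC-dropout passes over 1000 samples and 10 classes
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=(30, 1000))
weights = next_learner_weights(mc_dropout_entropy(probs))
resampled_idx = rng.choice(1000, size=1000, p=weights)      # oversampling variant
```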
Configurable software systems are employed in many important application domains. Understanding the performance of the systems under all configurations is critical to prevent potential performance issues caused by misconfiguration. However, as the number of configurations can be prohibitively large, it is not possible to measure the system performance under all configurations. Thus, a common approach is to build a prediction model from limited measurement data to predict the performance of all configurations as scalar values. However, it has been pointed out that there are different sources of uncertainty coming from the data collection or the modeling process, which means the scalar predictions are not necessarily accurate. To address this problem, we propose a Bayesian deep learning based method, namely BDLPerf, that can incorporate uncertainty into the prediction model. BDLPerf can provide both scalar predictions of configurations' performance and the corresponding confidence intervals of these scalar predictions. We also develop a novel uncertainty calibration technique to ensure the reliability of the confidence intervals generated by a Bayesian prediction model. Finally, we suggest an efficient hyperparameter tuning technique so as to train the prediction model within a reasonable amount of time whilst achieving high accuracy. Our experimental results on 10 real-world systems show that BDLPerf achieves higher accuracy than existing approaches, in both scalar performance prediction and confidence interval estimation.
Recent studies have revealed that, beyond conventional accuracy, calibration should also be considered for training modern deep neural networks. To address miscalibration during learning, some methods have explored different penalty functions as part of the learning objective, alongside a standard classification loss, with a hyper-parameter controlling the relative contribution of each term. Nevertheless, these methods share two major drawbacks: 1) the scalar balancing weight is the same for all classes, hindering the ability to address different intrinsic difficulties or imbalance among classes; and 2) the balancing weight is usually fixed without an adaptive strategy, which may prevent reaching the best compromise between accuracy and calibration and requires a hyper-parameter search for each application. We propose Class Adaptive Label Smoothing (CALS) for calibrating deep networks, which allows learning class-wise multipliers during training, yielding a powerful alternative to common label smoothing penalties. Our method builds on a general Augmented Lagrangian approach, a well-established technique in constrained optimization, but we introduce several modifications to tailor it for large-scale, class-adaptive training. Comprehensive evaluation and multiple comparisons on a variety of benchmarks, including standard and long-tailed image classification, semantic segmentation, and text classification, demonstrate the superiority of the proposed method. The code is available at https://github.com/by-liu/CALS.
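To make the idea of class-wise smoothing multipliers concrete, here is a minimal sketch of a cross-entropy loss with a per-class smoothing vector; in this illustration the vector is fixed, whereas CALS adapts such multipliers during training via an Augmented Lagrangian scheme that is omitted here.

```python
import torch
import torch.nn.functional as F

def class_adaptive_label_smoothing_loss(logits, targets, class_eps):
    """Cross-entropy with a per-class smoothing multiplier.
    logits    : (N, C) raw scores
    targets   : (N,) integer labels
    class_eps : (C,) smoothing strength per class (learnable in CALS,
                fixed here purely for illustration)
    """
    n_classes = logits.size(1)
    eps = class_eps[targets].unsqueeze(1)                  # (N, 1)
    one_hot = F.one_hot(targets, n_classes).float()
    soft_targets = one_hot * (1 - eps) + eps / n_classes   # smoothed labels
    log_probs = F.log_softmax(logits, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()

logits = torch.randn(16, 5, requires_grad=True)
targets = torch.randint(0, 5, (16,))
class_eps = torch.full((5,), 0.1)          # CALS would adapt these per class
loss = class_adaptive_label_smoothing_loss(logits, targets, class_eps)
loss.backward()
```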
Object detection in autonomous driving applications implies the detection and tracking of semantic objects that are typically native to urban driving environments, such as pedestrians and vehicles. One of the major challenges in state-of-the-art deep-learning-based object detection is false positives that occur with overconfident scores. This is highly undesirable in autonomous driving and other critical robotic perception domains because of safety concerns. This paper proposes an approach to alleviate the problem of overconfident predictions by introducing a novel probabilistic layer into the deep object detection network at test time. The proposed approach avoids the traditional sigmoid or softmax prediction layer, which often produces overconfident predictions. It is demonstrated that the proposed technique reduces the overconfidence of false positives without degrading the performance on true positives. The approach is validated on the 2D KITTI object detection benchmark using YOLOv4 and SECOND (a LiDAR-based detector). The method enables interpretable probabilistic predictions without retraining the network and is therefore highly practical.
In the past years, deep learning has seen an increase in usage in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and be able to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide-Images under domain shift using the H&E stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration and that Test-Time Data Augmentation can be a promising alternative when choosing an appropriate set of augmentations. Across methods, a rejection of the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution as well as out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
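The tile-rejection step described above can be sketched as follows, assuming predictive entropy of the (ensemble-averaged) class probabilities as the uncertainty score; the rejection fraction and the toy data are arbitrary.

```python
import numpy as np

def accuracy_with_rejection(probs, labels, reject_fraction=0.2):
    """Reject the most uncertain predictions (highest predictive entropy)
    and report accuracy on the retained ones.
    probs  : (N, C) predicted class probabilities (e.g. ensemble mean)
    labels : (N,) true labels
    """
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    keep = entropy <= np.quantile(entropy, 1.0 - reject_fraction)
    preds = probs.argmax(axis=1)
    return (preds[keep] == labels[keep]).mean(), keep.mean()

# toy usage with random binary-class probabilities
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(2), size=500)
labels = rng.integers(0, 2, 500)
acc, coverage = accuracy_with_rejection(probs, labels, reject_fraction=0.25)
print(f"accuracy on retained {coverage:.0%} of tiles: {acc:.3f}")
```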
Objective: Imbalances of the electrolyte concentration levels in the body can lead to catastrophic consequences, but accurate and accessible measurements could improve patient outcomes. While blood tests provide accurate measurements, they are invasive and the laboratory analysis can be slow or inaccessible. In contrast, an electrocardiogram (ECG) is a widely adopted tool which is quick and simple to acquire. However, the problem of estimating continuous electrolyte concentrations directly from ECGs is not well-studied. We therefore investigate if regression methods can be used for accurate ECG-based prediction of electrolyte concentrations. Methods: We explore the use of deep neural networks (DNNs) for this task. We analyze the regression performance across four electrolytes, utilizing a novel dataset containing over 290000 ECGs. For improved understanding, we also study the full spectrum from continuous predictions to binary classification of extreme concentration levels. To enhance clinical usefulness, we finally extend to a probabilistic regression approach and evaluate different uncertainty estimates. Results: We find that the performance varies significantly between different electrolytes, which is clinically justified in the interplay of electrolytes and their manifestation in the ECG. We also compare the regression accuracy with that of traditional machine learning models, demonstrating superior performance of DNNs. Conclusion: Discretization can lead to good classification performance, but does not help solve the original problem of predicting continuous concentration levels. While probabilistic regression demonstrates potential practical usefulness, the uncertainty estimates are not particularly well-calibrated. Significance: Our study is a first step towards accurate and reliable ECG-based prediction of electrolyte concentration levels.
The inability of artificial neural networks to assess the uncertainty of their predictions is an obstacle to their widespread use. We distinguish two types of learnable uncertainty: model uncertainty due to a lack of training data, and observation uncertainty due to noise. Bayesian neural networks use a solid mathematical foundation to learn the model uncertainty of their predictions. Observation uncertainty can be computed by adding one layer to these networks and augmenting their loss function. Our contribution is to apply these uncertainty concepts to predictive process monitoring tasks, training uncertainty-aware models to predict remaining time and outcomes. Our experiments show that the uncertainty estimates allow more accurate and less accurate predictions to be distinguished and confidence intervals to be constructed in both regression and classification tasks. These conclusions remain true even at early stages of a running process. Moreover, the deployed techniques are fast and produce more accurate predictions. The learned uncertainty can increase users' confidence in their process prediction systems, foster better collaboration between humans and these systems, and enable earlier adoption with smaller datasets.
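A small sketch of the "extra layer plus augmented loss" recipe for observation uncertainty follows, here as a heteroscedastic Gaussian negative log-likelihood for regression; the layer sizes and names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HeteroscedasticRegressor(nn.Module):
    """Sketch: one head predicts the mean, an extra head predicts the
    log-variance (observation/aleatoric uncertainty)."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)   # the "added layer"

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    """Augmented loss: attenuates errors where predicted noise is high."""
    return (0.5 * torch.exp(-logvar) * (target - mean) ** 2 + 0.5 * logvar).mean()

model = HeteroscedasticRegressor(in_dim=8)
x, y = torch.randn(32, 8), torch.randn(32, 1)
mean, logvar = model(x)
loss = gaussian_nll(mean, logvar, y)
loss.backward()
```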
As data generation increasingly takes place on devices without a wired connection, machine learning (ML) related traffic will become ubiquitous in wireless networks. Many studies have shown that traditional wireless protocols are inefficient or unsustainable for supporting ML, which creates the need for new wireless communication methods. In this survey, we give an exhaustive review of the state-of-the-art wireless methods that are specifically designed to support ML services over distributed datasets. Currently, there are two clear themes in the literature: analog over-the-air computation and digital radio resource management optimized for ML. This survey gives a comprehensive introduction to these methods, reviews the most important works, highlights open problems, and discusses application scenarios.
It is highly desirable to know how uncertain a model's predictions are, especially for models that are complex and hard to interpret, as in deep learning. Although deep learning methods have been used in diffusion-weighted MRI, prior works have not addressed the issue of model uncertainty. Here, we propose a deep learning method to estimate the diffusion tensor and compute the estimation uncertainty. Data-dependent uncertainty is computed directly by the network and learned via loss attenuation. Model uncertainty is computed using Monte Carlo dropout. We also propose a new method to evaluate the quality of predictive uncertainties. We compare the new method with standard least-squares tensor estimation and a bootstrap-based uncertainty computation technique. Our experiments show that the deep learning method is more accurate when the number of measurements is small, and its uncertainty predictions are better calibrated than those of the standard methods. We show that the estimation uncertainty computed by the new method can highlight the model's bias, detect domain shift, and reflect the noise strength in the measurements. Our study shows the importance and practical value of modeling predictive uncertainty in deep-learning-based diffusion MRI analysis.
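The Monte Carlo dropout component can be sketched as below, keeping dropout active at inference and aggregating stochastic passes; the toy network merely stands in for the tensor-estimation model, and the number of passes is arbitrary.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_passes: int = 50):
    """Model (epistemic) uncertainty via Monte Carlo dropout: keep dropout
    sampling at test time and aggregate stochastic forward passes.
    Note: model.train() also affects batch-norm layers if present."""
    model.train()
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_passes)])
    return samples.mean(dim=0), samples.std(dim=0)   # prediction, uncertainty

# toy regressor with dropout, standing in for the tensor-estimation network
model = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 6))
x = torch.randn(10, 6)
mean, std = mc_dropout_predict(model, x)
```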
Imbalanced data (ID) is a problem that prevents machine learning (ML) models from achieving satisfactory results. ID describes the situation in which the number of samples belonging to one class exceeds that of another, which biases the learning process of such models towards the majority class. In recent years, several solutions have been proposed to address this problem, which either synthetically generate new data for the minority class or reduce the number of majority-class samples to balance the data. Therefore, in this paper we study the effectiveness of methods based on deep neural networks (DNNs) and convolutional neural networks (CNNs), mixed with various well-known imbalanced-data solutions, namely oversampling and undersampling. To evaluate our methods, we used the KEEL, breast cancer, and Z-Alizadeh Sani datasets. To obtain reliable results, we ran the experiments 100 times with randomly shuffled data distributions. The classification results show that the mixed Synthetic Minority Oversampling Technique (SMOTE)-Normalization-CNN outperforms the other methods, reaching 99.08% accuracy on the 24 imbalanced datasets. The proposed mixed model can therefore be applied to imbalanced classification problems on other real-world datasets.
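A short illustration of the SMOTE-plus-normalization preprocessing described above, using imbalanced-learn on a synthetic imbalanced problem; the resampled data would then be fed to the CNN/DNN classifier (dataset parameters are illustrative).

```python
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler

# Imbalanced toy problem: roughly 95% majority class, 5% minority class
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
print("before:", Counter(y))

# Normalize, then synthesize new minority samples with SMOTE
X_scaled = StandardScaler().fit_transform(X)
X_res, y_res = SMOTE(random_state=0).fit_resample(X_scaled, y)
print("after: ", Counter(y_res))
# X_res / y_res would then be used to train the CNN or DNN classifier
```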
Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads the gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800% on one of the DNNs we tested.
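A compact sketch of the two distillation steps follows: soft labels from the teacher's softmax at a high temperature T, then training the distilled network on those soft labels at the same temperature; the temperature value and tensor shapes below are illustrative, not the exact experimental setup.

```python
import torch
import torch.nn.functional as F

def soft_labels(teacher_logits: torch.Tensor, T: float = 20.0) -> torch.Tensor:
    """Step 1: the teacher's softmax at a high temperature T produces soft
    labels that encode inter-class similarities."""
    return F.softmax(teacher_logits / T, dim=1)

def distillation_loss(student_logits: torch.Tensor,
                      soft_targets: torch.Tensor, T: float = 20.0) -> torch.Tensor:
    """Step 2: train the distilled network on the soft labels, again at
    temperature T (at test time the temperature is set back to 1)."""
    log_p = F.log_softmax(student_logits / T, dim=1)
    return -(soft_targets * log_p).sum(dim=1).mean()

teacher_logits = torch.randn(16, 10)
student_logits = torch.randn(16, 10, requires_grad=True)
loss = distillation_loss(student_logits, soft_labels(teacher_logits))
loss.backward()
```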
Recently, there has been a growing interest in applying machine learning methods to problems in engineering mechanics. In particular, there has been significant interest in applying deep learning techniques to predicting the mechanical behavior of heterogeneous materials and structures. Researchers have shown that deep learning methods are able to effectively predict mechanical behavior with low error for systems ranging from engineered composites, to geometrically complex metamaterials, to heterogeneous biological tissue. However, there has been comparatively little attention paid to deep learning model calibration, i.e., the match between predicted probabilities of outcomes and the true probabilities of outcomes. In this work, we perform a comprehensive investigation into ML model calibration across seven open access engineering mechanics datasets that cover three distinct types of mechanical problems. Specifically, we evaluate both model performance and model calibration error for multiple machine learning methods, and investigate the influence of ensemble averaging and post hoc model calibration via temperature scaling. Overall, we find that ensemble averaging of deep neural networks is both an effective and consistent tool for improving model calibration, while temperature scaling has comparatively limited benefits. Looking forward, we anticipate that this investigation will lay the foundation for future work in developing mechanics specific approaches to deep learning model calibration.
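For reference, post hoc temperature scaling as mentioned above fits a single scalar T on held-out logits by minimizing the negative log-likelihood; the sketch below uses arbitrary optimizer settings and toy logits.

```python
import torch
import torch.nn.functional as F

def fit_temperature(val_logits: torch.Tensor, val_labels: torch.Tensor) -> float:
    """Post-hoc temperature scaling: learn a scalar T that minimizes NLL on
    held-out logits; calibrated predictions are softmax(logits / T)."""
    log_t = torch.zeros(1, requires_grad=True)      # optimize log T for positivity
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=100)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()

# toy usage: artificially overconfident logits should yield T > 1
val_logits = 3.0 * torch.randn(256, 10)
val_labels = torch.randint(0, 10, (256,))
T = fit_temperature(val_logits, val_labels)
probs = F.softmax(val_logits / T, dim=1)
```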
Deep neural networks achieve impressive performance, but they are unable to reliably estimate their predictive confidence, which limits their applicability in high-risk domains. We show that applying a multi-label one-vs-all loss reveals classification ambiguity and reduces model overconfidence. The introduced SLOVA (Single Label One-Vs-All) model redefines the typical one-vs-all predictive probabilities for the single-label case, where only one class is the correct answer. The proposed classifier is confident only if a single class has a high probability and the other probabilities are negligible. Unlike the typical softmax function, SLOVA naturally detects out-of-distribution samples when the probabilities of all classes are small. The model is additionally fine-tuned with exponential calibration, which allows us to accurately align the confidence scores with model accuracy. We validate our approach on three tasks. First, we demonstrate that SLOVA is competitive with the state of the art in calibration on in-distribution data. Second, SLOVA performs robustly under dataset shift. Finally, our approach performs excellently in the detection of out-of-distribution samples. SLOVA is thus a tool that can be used in various applications where uncertainty modeling is required.
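A toy sketch of the one-vs-all intuition follows, with independent per-class sigmoids and a simple out-of-distribution score when all classes look unlikely; SLOVA's exact probability redefinition and exponential calibration are not reproduced here.

```python
import torch

def one_vs_all_scores(logits: torch.Tensor):
    """Sketch of one-vs-all confidence: each class gets an independent
    sigmoid probability; the prediction is confident only when exactly one
    class is close to 1, and an input looks out-of-distribution when all
    per-class probabilities are small."""
    p = torch.sigmoid(logits)                 # (N, C), not forced to sum to 1
    conf, pred = p.max(dim=1)
    ood_score = 1.0 - conf                    # high when every class says "no"
    return pred, conf, ood_score

logits = torch.tensor([[4.0, -3.0, -2.5],     # confident single class
                       [-3.0, -2.5, -3.5]])   # everything unlikely, OOD-like
pred, conf, ood = one_vs_all_scores(logits)
```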
The outstanding performance of machine learning algorithms and deep neural networks in several perception and control tasks is pushing the industry to adopt such technologies in safety-critical applications, such as autonomous robots and self-driving vehicles. At present, however, several issues need to be solved to make deep learning methods more trustworthy, predictable, safe, and secure against adversarial attacks. Although several methods have been proposed to improve the trustworthiness of deep neural networks, most of them are tailored to specific classes of adversarial examples and therefore fail to detect other corner cases or unsafe inputs that deviate considerably from the training samples. This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance model robustness against different unsafe inputs. In particular, four coverage analysis methods are proposed and tested in the architecture for evaluating multiple detection logics. Experimental results show that the proposed approach effectively detects both powerful adversarial examples and out-of-distribution inputs, while introducing limited execution time and memory requirements.
Fifth generation (5G) networks and beyond envision a massive Internet of Things (IoT) rollout to support disruptive applications such as extended reality (XR), augmented/virtual reality (AR/VR), industrial automation, autonomous driving, and smart everything, which together bring a massive and diverse set of IoT devices occupying the radio frequency (RF) spectrum. Along with spectrum crunch and throughput challenges, this massive scale of wireless devices exposes an unprecedented threat surface. RF fingerprinting is heralded as a candidate technology that can be combined with cryptographic and zero-trust security measures to ensure data privacy, confidentiality, and integrity in wireless networks. Motivated by its relevance to future communication networks, in this work we present a comprehensive survey of RF fingerprinting approaches, from conventional ones to recent deep learning (DL) based algorithms. Existing surveys have mostly focused on a constrained presentation of wireless fingerprinting methods, and many aspects remain untold. In this work, we address this gap by covering every aspect needed to give the reader an encyclopedic treatment: signal intelligence (SIGINT), applications, relevant DL algorithms, a systematic literature review of RF fingerprinting techniques spanning the past two decades, a discussion of datasets, and potential research avenues.
Analyzing classification model performance is a crucial task for machine learning practitioners. While practitioners often use count-based metrics derived from the confusion matrix, such as accuracy, many applications, such as weather prediction, sports betting, or patient risk prediction, rely on a classifier's predicted probabilities rather than predicted labels. In these cases, practitioners are concerned with producing a calibrated model, that is, a model whose outputs reflect the true distribution. Model calibration is often analyzed visually with static reliability diagrams; however, because of the strong aggregation required, traditional calibration visualizations can suffer from various drawbacks. Furthermore, count-based approaches are unable to adequately analyze model calibration. We present Calibrate, an interactive reliability diagram that addresses the aforementioned issues. Calibrate constructs a reliability diagram that is resistant to the drawbacks of traditional approaches and allows for interactive subgroup analysis and instance-level inspection. We demonstrate the utility of Calibrate through use cases on both real-world and synthetic data. We further validate Calibrate with the results of a think-aloud experiment with data scientists who routinely analyze model calibration.
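For comparison, the static reliability diagram that Calibrate improves upon reduces to binned (mean predicted probability, observed frequency) pairs, as in the short sketch below with deliberately overconfident toy scores; this is the baseline visualization, not the Calibrate tool itself.

```python
import numpy as np
from sklearn.calibration import calibration_curve

# toy overconfident probabilities for a binary problem
rng = np.random.default_rng(0)
y_prob = rng.uniform(0, 1, 5000) ** 0.5            # scores skewed toward 1
y_true = rng.random(5000) < y_prob ** 2            # accuracy lower than confidence

# a static reliability diagram is just (mean predicted prob, observed freq) per bin
frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=10)
for mp, fp in zip(mean_pred, frac_pos):
    print(f"predicted {mp:.2f} -> observed {fp:.2f}")
```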
Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best-effort and have been shown to be vulnerable to sophisticated attacks. Recently a set of certified defenses have been introduced, which provide guarantees of robustness to norm-bounded attacks. However, these defenses either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google's Inception network for ImageNet) and applies broadly to arbitrary model types. Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism, that provides a rigorous, generic, and flexible foundation for defense.