A biosignal is a signal that can be continuously measured from the human body, such as respiratory sounds, cardiac activity (ECG), and brain waves (EEG). Based on such signals, machine learning models have been developed with very promising performance for automatic disease detection and health status monitoring. However, dataset shift, i.e., a data distribution at inference time that differs from the training distribution, is not uncommon in real-life biosignal-based applications. To improve robustness, probabilistic models with uncertainty quantification are adopted to capture the reliability of predictions. Yet, assessing the quality of the estimated uncertainty remains a challenge. In this work, we propose a framework to evaluate the capability of the estimated uncertainty in capturing different types of biosignal dataset shifts. In particular, we use three classification tasks based on respiratory sounds and electrocardiography signals to benchmark five representative uncertainty quantification methods. Extensive experiments show that, although ensemble and Bayesian models provide relatively better uncertainty estimates under dataset shift, all tested models fail to meet the promise of trustworthy prediction and model calibration. Our work paves the way for a comprehensive evaluation of any newly developed biosignal classifier.
Modern machine learning methods including deep learning have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive uncertainty. Quantifying uncertainty is especially critical in real-world settings, which often involve input distributions that are shifted from the training distribution due to a variety of factors including sample bias and non-stationarity. In such settings, well-calibrated uncertainty estimates convey information about when a model's output should (or should not) be trusted. Many probabilistic deep learning methods, including Bayesian and non-Bayesian methods, have been proposed in the literature for quantifying predictive uncertainty, but to our knowledge there has not previously been a rigorous large-scale empirical comparison of these methods under dataset shift. We present a large-scale benchmark of existing state-of-the-art methods on classification problems and investigate the effect of dataset shift on accuracy and calibration. We find that traditional post-hoc calibration does indeed fall short, as do several other previous methods. However, some methods that marginalize over models give surprisingly strong results across a broad spectrum of tasks.
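The calibration notion these comparisons rely on can be made concrete with a small sketch of the expected calibration error (ECE); the equal-width binning below is one common choice, not necessarily the exact protocol of the benchmark.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and average the |accuracy - confidence| gap.

    confidences: max predicted probability per example, in [0, 1].
    correct:     1 if the prediction was right, else 0.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # bins weighted by their share of examples
    return ece

# A perfectly calibrated toy model: 80% confidence, right 80% of the time.
conf = np.full(10, 0.8)
hits = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
print(round(expected_calibration_error(conf, hits), 4))  # → 0.0
```

Under dataset shift, accuracy typically drops faster than confidence, which is exactly the growing gap this quantity measures.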
In the past years, deep learning has seen an increase of usage in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and be able to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide-Images under domain shift using the H\&E stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration and that Test-Time Data Augmentation can be a promising alternative when choosing an appropriate set of augmentations. Across methods, a rejection of the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution as well as out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
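The rejection protocol used above, scoring only the tiles that survive an uncertainty threshold, can be sketched as follows with hypothetical uncertainty scores; the real Camelyon17 pipeline is of course more involved.

```python
import numpy as np

def accuracy_after_rejection(uncertainty, correct, reject_fraction):
    """Drop the most uncertain predictions and report accuracy on the rest."""
    uncertainty = np.asarray(uncertainty, dtype=float)
    correct = np.asarray(correct, dtype=float)
    n_keep = int(round(len(correct) * (1.0 - reject_fraction)))
    keep = np.argsort(uncertainty)[:n_keep]  # lowest-uncertainty examples
    return correct[keep].mean()

# Toy example: errors concentrate among the most uncertain predictions,
# so rejecting the uncertain tail raises accuracy on what remains.
unc = np.array([0.1, 0.2, 0.3, 0.8, 0.9])
hit = np.array([1,   1,   1,   0,   0])
print(accuracy_after_rejection(unc, hit, reject_fraction=0.4))  # → 1.0
```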
Objective: Imbalances of the electrolyte concentration levels in the body can lead to catastrophic consequences, but accurate and accessible measurements could improve patient outcomes. While blood tests provide accurate measurements, they are invasive and the laboratory analysis can be slow or inaccessible. In contrast, an electrocardiogram (ECG) is a widely adopted tool which is quick and simple to acquire. However, the problem of estimating continuous electrolyte concentrations directly from ECGs is not well-studied. We therefore investigate if regression methods can be used for accurate ECG-based prediction of electrolyte concentrations. Methods: We explore the use of deep neural networks (DNNs) for this task. We analyze the regression performance across four electrolytes, utilizing a novel dataset containing over 290000 ECGs. For improved understanding, we also study the full spectrum from continuous predictions to binary classification of extreme concentration levels. To enhance clinical usefulness, we finally extend to a probabilistic regression approach and evaluate different uncertainty estimates. Results: We find that the performance varies significantly between different electrolytes, which is clinically justified in the interplay of electrolytes and their manifestation in the ECG. We also compare the regression accuracy with that of traditional machine learning models, demonstrating superior performance of DNNs. Conclusion: Discretization can lead to good classification performance, but does not help solve the original problem of predicting continuous concentration levels. While probabilistic regression demonstrates potential practical usefulness, the uncertainty estimates are not particularly well-calibrated. Significance: Our study is a first step towards accurate and reliable ECG-based prediction of electrolyte concentration levels.
For many applications, analyzing the uncertainty of a machine learning model is indispensable. While research on uncertainty quantification (UQ) techniques is very advanced for computer vision applications, UQ methods for spatio-temporal data are less studied. In this paper, we focus on models for online handwriting recognition, one particular type of spatio-temporal data. The data is observed from a sensor-enhanced pen, with the goal of classifying written characters. We conduct a broad evaluation of aleatoric (data) and epistemic (model) UQ based on two prominent techniques for Bayesian inference, Stochastic Weight Averaging-Gaussian (SWAG) and Deep Ensembles. Next to a better understanding of the model, UQ techniques can detect out-of-distribution data and domain shifts when combining right-handed and left-handed writers (an under-represented group).
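At prediction time, the Deep Ensembles side of such a comparison reduces to averaging member probabilities and reading disagreement from the predictive entropy; a minimal sketch with toy probability vectors:

```python
import numpy as np

def ensemble_predict(member_probs):
    """Average member class probabilities; use predictive entropy as uncertainty.

    member_probs: array of shape (n_members, n_classes).
    """
    p = np.asarray(member_probs, dtype=float).mean(axis=0)
    entropy = -np.sum(p * np.log(p + 1e-12))
    return p, entropy

# Members that agree give low entropy; members that disagree give high entropy.
agree = [[0.90, 0.10], [0.88, 0.12], [0.92, 0.08]]
split = [[0.95, 0.05], [0.05, 0.95], [0.50, 0.50]]
_, h_agree = ensemble_predict(agree)
_, h_split = ensemble_predict(split)
print(h_agree < h_split)  # → True
```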
Objective: Convolutional neural networks (CNNs) have demonstrated promise in automated cardiac magnetic resonance image segmentation. However, when using CNNs in a large real-world dataset, it is important to quantify segmentation uncertainty and identify segmentations which could be problematic. In this work, we performed a systematic study of Bayesian and non-Bayesian methods for estimating uncertainty in segmentation neural networks. Methods: We evaluated Bayes by Backprop, Monte Carlo Dropout, Deep Ensembles, and Stochastic Segmentation Networks in terms of segmentation accuracy, probability calibration, uncertainty on out-of-distribution images, and segmentation quality control. Results: We observed that Deep Ensembles outperformed the other methods except for images with heavy noise and blurring distortions. We showed that Bayes by Backprop is more robust to noise distortions while Stochastic Segmentation Networks are more resistant to blurring distortions. For segmentation quality control, we showed that segmentation uncertainty is correlated with segmentation accuracy for all the methods. With the incorporation of uncertainty estimates, we were able to reduce the percentage of poor segmentation to 5% by flagging 31--48% of the most uncertain segmentations for manual review, substantially lower than random review without using neural network uncertainty (reviewing 75--78% of all images). Conclusion: This work provides a comprehensive evaluation of uncertainty estimation methods and showed that Deep Ensembles outperformed other methods in most cases. Significance: Neural network uncertainty measures can help identify potentially inaccurate segmentations and alert users for manual review.
A recent trend in artificial intelligence is to use pretrained models for language and vision tasks, which has achieved extraordinary performance but also puzzling failures. Probing these models' abilities in diverse ways is therefore critical to the field. In this paper, we explore the reliability of models, where we define a reliable model as one that not only achieves strong predictive performance but also performs well consistently over many decision-making tasks involving uncertainty (e.g., selective prediction, open set recognition), robust generalization (e.g., accuracy and proper scoring rules such as log-likelihood on in- and out-of-distribution datasets), and adaptation (e.g., active learning, few-shot uncertainty). We devise 10 types of tasks over 40 datasets in order to evaluate different aspects of reliability on both vision and language domains. To improve reliability, we develop ViT-Plex and T5-Plex, pretrained large model extensions for vision and language modalities, respectively. Plex greatly improves the state-of-the-art across reliability tasks and simplifies the traditional protocol, as it improves the out-of-the-box performance and does not require designing scores or tuning the model for each task. We demonstrate scaling effects over model sizes up to 1B parameters and pretraining dataset sizes up to 4B examples. We also demonstrate Plex's capabilities on challenging tasks including zero-shot open set recognition, active learning, and uncertainty in conversational language understanding.
In this work we use variational inference to quantify the degree of uncertainty in deep learning model predictions of radio galaxy classification. We show that the level of model posterior variance for individual test samples is correlated with human uncertainty when labelling radio galaxies. We explore the model performance and uncertainty calibration for a variety of different weight priors and suggest that a sparse prior produces more well-calibrated uncertainty estimates. Using the posterior distributions for individual weights, we show that we can prune 30% of the fully-connected layer weights without significant loss of performance by removing the weights with the lowest signal-to-noise ratio (SNR). We demonstrate that a larger degree of pruning can be achieved using a Fisher information based ranking, but we note that both pruning methods affect the uncertainty calibration of Fanaroff-Riley type I and type II radio galaxies. Finally, we show that, like other work in this field, we experience a cold posterior effect, whereby the posterior must be down-weighted to achieve good predictive performance. We examine whether adapting the cost function to accommodate model misspecification can compensate for this effect, but find that it does not make a significant difference. We also investigate the effect of principled data augmentation and find that this improves upon the baseline but also does not compensate for the observed effect. We interpret this as the cold posterior effect being due to the overly effective curation of our training sample leading to likelihood misspecification, and raise this as a potential issue for Bayesian deep learning approaches to radio galaxy classification in future.
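The SNR-based pruning step can be sketched as follows: given posterior means and standard deviations for the weights, zero out the fraction with the lowest |mu|/sigma. This is an illustrative sketch of the mechanics, not the paper's exact procedure.

```python
import numpy as np

def prune_by_snr(weight_means, weight_stds, prune_fraction=0.3):
    """Zero out the fraction of weights with the lowest posterior
    signal-to-noise ratio |mu| / sigma."""
    mu = np.asarray(weight_means, dtype=float)
    sd = np.asarray(weight_stds, dtype=float)
    snr = np.abs(mu) / sd
    k = int(round(len(mu) * prune_fraction))
    pruned = mu.copy()
    pruned[np.argsort(snr)[:k]] = 0.0  # lowest-SNR weights removed
    return pruned

# Weights with small means and wide posteriors carry little signal.
mu = np.array([1.0, -0.02, 0.5, 0.01, -2.0])
sd = np.array([0.1,  0.5,  0.1, 0.5,  0.1])
print(prune_by_snr(mu, sd, 0.4).tolist())  # → [1.0, 0.0, 0.5, 0.0, -2.0]
```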
Distributional shift, or the mismatch between training and deployment data, is a significant obstacle to the usage of machine learning in high-stakes industrial applications, such as autonomous driving and medicine. This creates a need to be able to assess how robustly ML models generalize as well as the quality of their uncertainty estimates. Standard ML baseline datasets do not allow these properties to be assessed, as the training, validation and test data are often identically distributed. Recently, a range of dedicated benchmarks have appeared, featuring both distributionally matched and shifted data. Among these benchmarks, the Shifts dataset stands out in terms of the diversity of tasks as well as the data modalities featured. While most of the benchmarks are heavily dominated by 2D image classification tasks, Shifts contains tabular weather forecasting, machine translation, and vehicle motion prediction tasks. This enables the robustness properties of models to be assessed on a diverse set of industrial-scale tasks, and either universal or directly applicable task-specific conclusions to be reached. In this paper, we extend the Shifts dataset with two datasets sourced from industrial, high-risk applications of high societal importance. Specifically, we consider the tasks of segmentation of white matter multiple sclerosis lesions in 3D magnetic resonance brain images and the estimation of power consumption in marine cargo vessels. Both tasks feature ubiquitous distributional shifts and strict safety requirements due to the high cost of errors. These new datasets will allow researchers to further explore robust generalization and uncertainty estimation in new situations. In this work, we provide a description of the datasets and baseline results for both tasks.
Deep neural networks present impressive performance, yet they cannot reliably estimate their predictive confidence, limiting their applicability in high-risk domains. We show that applying a multi-label one-vs-all loss reveals classification ambiguity and reduces model overconfidence. The introduced SLOVA (Single Label One-Vs-All) model redefines the typical one-vs-all predictive probabilities for the single-label situation, where only one class is the correct answer. The proposed classifier is confident only if a single class has a high probability while the other probabilities are negligible. Unlike the typical softmax function, SLOVA naturally detects out-of-distribution samples when the probabilities of all other classes are small. The model is additionally fine-tuned with exponential calibration, which allows us to precisely align the confidence score with model accuracy. We verify our approach on three tasks. First, we demonstrate that SLOVA is competitive with the state-of-the-art on in-distribution calibration. Second, the performance of SLOVA is robust under dataset shift. Finally, our approach performs extremely well in the detection of out-of-distribution samples. Consequently, SLOVA is a tool that can be used in various applications where uncertainty modeling is required.
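The single-label one-vs-all idea can be sketched with independent per-class sigmoids whose combined confidence is high only when exactly one class fires; this is an illustrative reading of the mechanism, not the paper's exact predictive probability.

```python
import numpy as np

def one_vs_all_confidence(logits):
    """Per-class sigmoid scores; confidence is the top score times the
    probability that every other class is absent (a sketch of the
    single-label one-vs-all idea, not SLOVA's exact formula)."""
    s = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    top = np.argmax(s)
    others = np.delete(s, top)
    return s[top] * np.prod(1.0 - others)

# One dominant class: confident. All classes unlikely (OOD-like): not confident.
print(one_vs_all_confidence([6.0, -6.0, -6.0]) > 0.9)    # → True
print(one_vs_all_confidence([-6.0, -6.0, -6.0]) < 0.01)  # → True
```

Note how, unlike a softmax, the all-negative-logits case yields low confidence instead of being renormalized to a confident prediction.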
There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model, uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.
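The learned attenuation described above corresponds, for regression, to a loss that divides the residual by a predicted variance and pays a log-variance penalty; a minimal numpy sketch of that loss form:

```python
import numpy as np

def heteroscedastic_loss(y_true, y_pred, log_var):
    """Regression loss with learned aleatoric variance (learned attenuation):
    residuals are down-weighted where the predicted variance is large,
    at the price of a log-variance penalty that discourages always
    predicting high variance."""
    precision = np.exp(-log_var)
    return np.mean(0.5 * precision * (y_true - y_pred) ** 2 + 0.5 * log_var)

# A noisy point hurts less when the model admits high variance for it.
y, f = np.array([0.0, 0.0]), np.array([0.1, 3.0])
confident = heteroscedastic_loss(y, f, np.array([0.0, 0.0]))
attenuated = heteroscedastic_loss(y, f, np.array([0.0, 2.0]))
print(attenuated < confident)  # → True
```

Predicting the log-variance rather than the variance keeps the optimization numerically stable, since the exponential guarantees positivity.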
Recent work introduced the epinet as a new approach to uncertainty modeling in deep learning. An epinet is a small neural network added to a conventional neural network with which it can jointly produce predictive distributions. In particular, using an epinet can greatly improve the quality of joint predictions across multiple inputs, a measure of how well a neural network knows what it does not know. In this paper, we examine whether epinets can offer similar advantages under distributional shifts. We find that, across ImageNet-A/O/C, epinets generally improve robustness metrics. Moreover, these improvements are more significant than those afforded by even very large ensembles, at orders of magnitude lower computational cost. However, these improvements are relatively small compared to the outstanding issues in distributionally robust deep learning. Epinets may be a useful tool in the toolbox, but they are far from a complete solution.
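The epinet mechanics can be sketched as a small index-conditioned correction added to the base network's logits; sampling the epistemic index yields a distribution over predictions. The single linear layer below is an illustrative stand-in for the trained epinet MLP, and all shapes are hypothetical.

```python
import numpy as np

def epinet_forward(base_logits, features, epinet_weights, z):
    """Add a small index-conditioned correction to base-network logits
    (a sketch of the epinet idea: a trained network of [features, index]
    is here replaced by one linear layer for illustration)."""
    inp = np.concatenate([features, z])
    return base_logits + epinet_weights @ inp

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 5)) * 0.1          # 3 features + 2 index dims -> 2 logits
base, feats = np.array([2.0, -1.0]), np.array([0.3, -0.7, 0.1])
# Sampling the epistemic index z yields a distribution over predictions.
samples = [epinet_forward(base, feats, W, rng.normal(size=2)) for _ in range(8)]
print(np.std(samples, axis=0).shape)  # → (2,)
```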
Smartwatches and fitness trackers have gained wide popularity as potential health tracking devices due to their affordability and longitudinal monitoring capabilities. To further widen their health tracking capabilities, in recent years researchers have started exploring the possibility of atrial fibrillation (AF) detection in real time leveraging photoplethysmography (PPG) data, an inexpensive sensor widely available in almost all smartwatches. A significant challenge in AF detection from PPG signals comes from the inherent noise in smartwatch PPG signals. In this paper, we propose a novel deep learning based approach, BayesBeat, that leverages the power of Bayesian deep learning to accurately infer AF risks from noisy PPG signals while providing an uncertainty estimate of the prediction. Extensive experiments on two publicly available datasets reveal that our proposed method BayesBeat outperforms the existing state-of-the-art methods. Moreover, BayesBeat has 40-200x fewer parameters than the state-of-the-art baseline approaches, making it suitable for deployment in resource-constrained wearable devices.
The recent availability of large datasets in biomedicine has inspired the development of representation learning methods for multiple healthcare applications. Despite advances in predictive performance, the clinical utility of such methods is limited when exposed to real-world data. Here we develop model diagnostic measures to detect potential pitfalls during deployment without assuming access to external data. Specifically, we focus on modeling realistic data shifts in electrophysiological signals (EEGs) via data transforms, and extend conventional task-based evaluations with analyses of a) the model's latent space and b) predictive uncertainty under these transforms. We conduct experiments on multiple EEG feature encoders and two clinically relevant downstream tasks using publicly available large-scale clinical EEGs. Within this experimental setting, our results suggest that measures of latent space integrity and model uncertainty under the proposed data shifts may help anticipate performance degradation during deployment.
As an emerging technology in the era of Industry 4.0, the digital twin has gained unprecedented attention because of its promise to further optimize process design, quality control, health monitoring, decision and policy making, and more, by comprehensively modeling the physical world as a group of interconnected digital models. In a two-part series of papers, we examine the fundamental role of different modeling techniques, twinning enabling technologies, and uncertainty quantification and optimization methods commonly used in digital twins. This second paper presents a literature review of key enabling technologies of digital twins, with an emphasis on uncertainty quantification, optimization methods, open-source datasets and tools, major findings, challenges, and future directions. Discussions focus on current methods of uncertainty quantification and optimization and how they are applied in different dimensions of a digital twin. Additionally, this paper presents a case study where a battery digital twin is constructed and tested to illustrate some of the modeling and twinning methods reviewed in this two-part review. Code and preprocessed data for generating all the results and figures in the case study are available on GitHub.
The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology. In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions. Yet, due to the steadily increasing relevance of machine learning for practical applications and related issues such as safety requirements, new problems and challenges have recently been identified by machine learning scholars, and these problems may call for new methodological developments. In particular, this includes the importance of distinguishing between (at least) two different types of uncertainty, often referred to as aleatoric and epistemic. In this paper, we provide an introduction to the topic of uncertainty in machine learning as well as an overview of attempts so far at handling uncertainty in general and formalizing this distinction in particular.
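One common way the aleatoric/epistemic distinction above is formalized for classifiers is to split total predictive entropy into expected entropy (aleatoric) plus mutual information (epistemic), given class-probability samples from a posterior or an ensemble; a small sketch of that decomposition:

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p + 1e-12), axis=-1)

def uncertainty_decomposition(sampled_probs):
    """Split total predictive entropy into an aleatoric part (expected
    entropy of the members) and an epistemic part (mutual information),
    given class-probability samples, e.g. from a posterior or ensemble."""
    sampled_probs = np.asarray(sampled_probs, dtype=float)
    total = entropy(sampled_probs.mean(axis=0))
    aleatoric = entropy(sampled_probs).mean()
    return total, aleatoric, total - aleatoric  # epistemic part

# Confident disagreement: each sample is sure, but about different classes,
# so almost all of the uncertainty is epistemic.
total, alea, epi = uncertainty_decomposition([[0.99, 0.01], [0.01, 0.99]])
print(epi > alea)  # → True
```

Conversely, identical members that each predict 50/50 yield purely aleatoric uncertainty: noise that more data cannot explain away.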
Neural networks lack adversarial robustness, i.e., they are vulnerable to adversarial examples that through small perturbations to the input cause incorrect predictions. Further, trust is undermined when models give miscalibrated predictions, i.e., the predicted probability is not a good indicator of how much we should trust our model. In this paper, we study the connection between adversarial robustness and calibration, and find that the inputs for which the model is sensitive to small perturbations (are easily attacked) are more likely to have poorly calibrated predictions. Based on this insight, we examine whether calibration can be improved by addressing those adversarially unrobust inputs. To this end, we propose Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS), which integrates the correlation between adversarial robustness and calibration into training by adaptively softening labels for an example based on how easily it can be attacked by an adversary. We find that our method, taking the adversarial robustness of the in-distribution data into consideration, leads to better calibration of the model even under distributional shifts. In addition, it can also be applied to ensemble models to further improve model calibration.
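The core recipe, softening labels more for examples that are easier to attack, can be sketched as follows; the vulnerability score in [0, 1] and the linear smoothing schedule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def adaptive_smooth_labels(one_hot, attack_vulnerability, max_smoothing=0.2):
    """Soften one-hot targets in proportion to how easily each example is
    attacked (a sketch of the AR-AdaLS idea). A vulnerability of 0 keeps
    a hard label; 1 applies the full smoothing budget."""
    one_hot = np.asarray(one_hot, dtype=float)
    eps = max_smoothing * np.asarray(attack_vulnerability, dtype=float)
    n_classes = one_hot.shape[-1]
    return (1.0 - eps)[:, None] * one_hot + (eps / n_classes)[:, None]

labels = np.array([[1.0, 0.0], [1.0, 0.0]])
# Robust example (score 0.0) keeps its hard label; fragile one (1.0) is softened.
soft = adaptive_smooth_labels(labels, attack_vulnerability=[0.0, 1.0])
print(np.round(soft[1], 3).tolist())  # → [0.9, 0.1]
```

Each softened row still sums to one, so it remains a valid target distribution for a cross-entropy loss.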
We are interested in estimating the uncertainties of deep neural networks, which play an important role in many scientific and engineering problems. In this paper, we present a striking new finding that an ensemble of neural networks with the same weight initialization, trained on datasets shifted by a constant bias, gives rise to slightly inconsistent trained models, where the differences in predictions are a strong indicator of epistemic uncertainty. Using the neural tangent kernel (NTK), we demonstrate that this phenomenon occurs in part because the NTK is not shift-invariant. Since this is achieved via a trivial input transformation, we show that this behavior can be approximated using a single neural network, with a technique that we call $\Delta$-UQ, that estimates the uncertainty around a prediction by marginalizing out the effect of the biases. We show that the uncertainty estimates of $\Delta$-UQ are superior to many of the current methods on a variety of benchmarks: outlier rejection, calibration under distribution shift, and sequential design optimization of black-box functions.
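A minimal sketch of the anchoring mechanics: a model trained to accept a residual and an anchor is evaluated at several constant input shifts, and the spread of its predictions is read as epistemic uncertainty. The `model` signature below is an assumption for illustration, and the stand-in model is a toy function rather than a trained network.

```python
import numpy as np

def delta_uq_predict(model, x, anchors):
    """Evaluate a model trained on anchored inputs at several constant input
    shifts and marginalize: the mean is the prediction and the spread is the
    epistemic uncertainty (a sketch of the Delta-UQ mechanics; `model` takes
    the residual x - c and the anchor c)."""
    preds = np.array([model(x - c, c) for c in anchors])
    return preds.mean(), preds.std()

# Toy stand-in model that is exactly shift-invariant: the spread collapses to 0,
# signaling no epistemic uncertainty about this input.
invariant = lambda residual, anchor: residual + anchor
mean, spread = delta_uq_predict(invariant, 2.0, anchors=[0.0, 0.5, 1.0])
print(mean, spread)  # → 2.0 0.0
```

For a real network the shift-invariance only holds approximately, and the residual spread is precisely the uncertainty signal the method exploits.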
The uncertainty estimated from approximate posteriors in Bayesian neural networks is prone to miscalibration, which leads to overconfident predictions in critical tasks with significantly asymmetric or sensitive losses. Here, we extend approximate inference to a loss-calibrated Bayesian framework by maximizing the expected utility over the model posterior, in order to calibrate uncertainty in deep learning. Furthermore, we show that decisions informed by loss-calibrated uncertainty can improve diagnostic performance to a greater extent than straightforward alternatives. We propose the Maximum Uncertainty Calibration Error (MUCE) as a metric for measuring calibrated confidence, particularly for high-risk applications, where the goal is to minimize the worst-case deviation between error and estimated uncertainty. In experiments, we show the correlation between prediction error and estimated uncertainty by measuring the Wasserstein distance to the prediction accuracy. We evaluate the effectiveness of our approach on detecting COVID-19 from X-ray images. Experimental results show that our method substantially reduces miscalibration without compromising the model's accuracy, and improves the reliability of computer-based diagnostics.
Artificial intelligence (AI)-assisted methods have received much attention in risk domains such as disease diagnosis. Unlike the classification of disease types, classifying medical images as benign or malignant tumors is a fine-grained task. However, most studies only focus on improving diagnostic accuracy and ignore the evaluation of model reliability, which limits their clinical application. For clinical practice, calibration presents major challenges in the low-data regime, particularly for over-parameterized models and under inherent noise. In particular, we find that modeling data-dependent uncertainty is more conducive to confidence calibration. Compared with test-time augmentation (TTA), we propose a modified bootstrapping loss (BS loss) function combined with a Mixup data augmentation strategy that can better calibrate predictive uncertainty and capture data distribution shifts without additional inference time. Our experiments show that the BS loss with Mixup (BSM) model can halve the expected calibration error (ECE) compared to standard data augmentation, deep ensembles, and MC dropout. The correlation between uncertainty and similarity under the BSM model reaches -0.4428. Additionally, the BSM model is able to perceive the semantic distance of out-of-domain data, demonstrating high potential for real-world clinical practice.
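The Mixup strategy paired with the bootstrapped loss above follows the standard recipe of training on convex combinations of example pairs; a minimal sketch (alpha 0.4 is a typical choice, not necessarily the paper's tuned setting):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    """Blend two examples and their labels with a Beta-sampled weight
    (the standard Mixup recipe)."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Mixing two one-hot labels yields a soft target that still sums to one.
x_mix, y_mix = mixup(np.array([1.0, 0.0]), np.array([1.0, 0.0]),
                     np.array([0.0, 1.0]), np.array([0.0, 1.0]))
print(np.isclose(y_mix.sum(), 1.0))  # → True
```

The soft targets produced this way discourage the network from assigning full probability to a single class, which is one reason Mixup tends to improve calibration.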