Dropout is commonly used to quantify prediction uncertainty, i.e., the variation of a model's predictions on a given input example. In practice, however, using dropout can be expensive because it requires running dropout inference many times. In this paper, we study how to estimate dropout prediction uncertainty in a resource-efficient manner. We demonstrate that neuron activation strength can be used to estimate dropout prediction uncertainty under different dropout settings and across a variety of tasks, using three large datasets (MovieLens, Criteo, and EMNIST). Our approach provides an inference-time method that treats dropout prediction uncertainty estimation as a cheap auxiliary task. We also show that using activation features from only a subset of the neural network layers is sufficient to achieve uncertainty-estimation performance almost comparable to using activation features from all layers, further reducing the resources needed for uncertainty estimation.
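To make the cost contrast concrete, below is a minimal sketch (not the paper's method) of the standard multi-pass Monte Carlo dropout baseline that such activation-based approaches aim to approximate cheaply; the toy architecture, dropout rate, and number of passes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy model with a dropout layer; the architecture is a placeholder assumption.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1))

def mc_dropout_uncertainty(model, x, num_passes=30):
    """Run several stochastic forward passes with dropout active and return
    the mean prediction and its standard deviation across passes."""
    model.train()  # keep dropout sampling at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(num_passes)])
    return preds.mean(dim=0), preds.std(dim=0)

x = torch.randn(8, 16)                      # a hypothetical batch of 8 examples
mean, std = mc_dropout_uncertainty(model, x)
```

Each pass costs a full forward evaluation, which is exactly the overhead that an activation-strength-based estimator avoids.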
Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.
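As a rough illustration of the ensemble recipe described above (not the authors' exact setup, which additionally uses proper scoring rules and per-member variance heads), the sketch below trains a few independently initialized networks on toy regression data and reads epistemic uncertainty off the disagreement of their predictions; the architecture, optimizer, and data are placeholder assumptions.

```python
import torch
import torch.nn as nn

def make_member():
    # Placeholder architecture; each member is initialized independently.
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

def train_member(model, x, y, epochs=200, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        opt.step()
    return model

x, y = torch.randn(64, 10), torch.randn(64, 1)        # toy regression data
ensemble = [train_member(make_member(), x, y) for _ in range(5)]

with torch.no_grad():
    preds = torch.stack([m(x) for m in ensemble])      # [members, batch, 1]
mean, spread = preds.mean(dim=0), preds.std(dim=0)     # disagreement as an uncertainty signal
```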
For industrial-scale advertising systems, predicting ad click-through rate (CTR) is a central problem. Ad clicks constitute an important class of user engagement and are often used as the primary signal that an ad is useful to a user. Furthermore, in cost-per-click advertising systems, the expected cost per click feeds directly into value estimation. CTR model development is therefore a major investment for most Internet advertising companies. Engineering such systems requires many machine learning (ML) techniques suited to online learning that go well beyond traditional accuracy improvements, especially regarding efficiency, reproducibility, calibration, and credit attribution. We present a case study of practical techniques deployed in Google's search ads CTR model. This paper provides an industry case study that highlights important areas of current ML research and illustrates how impactful new ML methods are evaluated and made useful in a large industrial setting.
Deep Neural Networks (DNNs) are increasingly used as components of larger software systems that need to process complex data, such as images, written texts, and audio/video signals. DNN predictions cannot be assumed to be always correct for several reasons, among which the huge input space that is dealt with, the ambiguity of some input data, as well as the intrinsic properties of learning algorithms, which can provide only statistical guarantees. Hence, developers have to cope with some residual error probability. An architectural pattern commonly adopted to manage failure-prone components is the supervisor, an additional component that can estimate the reliability of the predictions made by untrusted (e.g., DNN) components and can activate an automated healing procedure when these are likely to fail, ensuring that the Deep Learning based System (DLS) does not cause damage, despite its main functionality being suspended. In this paper, we consider DLSs that implement a supervisor by means of uncertainty estimation. After overviewing the main approaches to uncertainty estimation and discussing their pros and cons, we motivate the need for a specific empirical assessment method that can deal with the experimental setting in which supervisors are used, where the accuracy of the DNN matters only as long as the supervisor lets the DLS continue to operate. Then we present a large empirical study conducted to compare the alternative approaches to uncertainty estimation. We distilled a set of guidelines for developers that are useful to incorporate a supervisor based on uncertainty monitoring into a DLS.
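A hypothetical sketch of the supervisor pattern described above: the prediction of an untrusted component is accepted only when its estimated uncertainty stays below a threshold, and a fallback (healing) procedure is invoked otherwise. The function names and the threshold value are illustrative assumptions, not the paper's implementation.

```python
def supervised_predict(predict_with_uncertainty, x, threshold=0.3, fallback=None):
    """Accept the untrusted component's prediction only when its uncertainty is low."""
    prediction, uncertainty = predict_with_uncertainty(x)
    if uncertainty > threshold:
        # The supervisor deems the prediction unreliable: suspend the main
        # functionality and hand control to a safe fallback behaviour instead.
        return fallback(x) if fallback is not None else None
    return prediction

# Hypothetical usage with a dummy predictor returning (label, uncertainty).
label = supervised_predict(lambda inp: ("stop_sign", 0.12), x="sensor_frame")
```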
The ability to estimate epistemic uncertainty is often crucial when deploying machine learning in the real world, but modern methods often produce overconfident, uncalibrated uncertainty predictions. A common approach to quantify epistemic uncertainty, usable across a wide class of prediction models, is to train a model ensemble. In a naive implementation, the ensemble approach has high computational cost and high memory demand. This is particularly challenging for modern deep learning, where even a single deep network is already demanding in terms of compute and memory, and it has given rise to a number of attempts to emulate the model ensemble without actually instantiating separate ensemble members. We introduce FiLM-Ensemble, a deep, implicit ensemble method based on the concept of Feature-wise Linear Modulation (FiLM). That technique was originally developed for multi-task learning, with the aim of decoupling different tasks. We show that the idea can be extended to uncertainty quantification: by modulating the network activations of a single deep network with FiLM, one obtains a model ensemble with high diversity, and consequently well-calibrated estimates of epistemic uncertainty, with low computational overhead in comparison. Empirically, FiLM-Ensemble outperforms other implicit ensemble methods, and comes very close to the upper bound of an explicit ensemble of networks (sometimes even beating it), at a fraction of the memory cost.
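The sketch below illustrates the core FiLM mechanism on a single linear layer, assuming the construction described above: all members share the layer weights, while member-specific scale and shift parameters modulate the activations. Layer sizes and the member count are placeholder assumptions.

```python
import torch
import torch.nn as nn

class FiLMLinear(nn.Module):
    """A linear layer shared by all implicit ensemble members, modulated per member."""
    def __init__(self, in_dim, out_dim, num_members):
        super().__init__()
        self.shared = nn.Linear(in_dim, out_dim)                     # shared weights
        self.gamma = nn.Parameter(torch.ones(num_members, out_dim))  # per-member scale
        self.beta = nn.Parameter(torch.zeros(num_members, out_dim))  # per-member shift

    def forward(self, x, member):
        h = self.shared(x)
        return self.gamma[member] * h + self.beta[member]            # feature-wise linear modulation

layer = FiLMLinear(16, 32, num_members=4)
x = torch.randn(8, 16)
member_outputs = torch.stack([layer(x, m) for m in range(4)])        # one activation set per member
```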
Recent performance breakthroughs in artificial intelligence (AI) and machine learning (ML), in particular advances in deep learning (DL), the availability of powerful and easy-to-use ML libraries (e.g., scikit-learn, TensorFlow, PyTorch), and increased computational power have generated unprecedented interest in AI/ML among nuclear engineers. For physics-based computational models, verification, validation, and uncertainty quantification (VVUQ) have been widely studied and many methods have been developed. However, VVUQ of ML models has been studied relatively little, especially in nuclear engineering. In this work, we focus on UQ of ML models as a preliminary step toward ML VVUQ, and more specifically on deep neural networks (DNNs), since they are the most widely used supervised ML algorithms for both regression and classification tasks. This work aims to quantify the prediction (or approximation) uncertainty of DNNs when they are used as surrogate models for expensive physical models. Three techniques for DNN UQ are compared: Monte Carlo dropout (MCD), deep ensembles (DE), and Bayesian neural networks (BNNs). Two nuclear engineering examples are used to benchmark these methods: (1) time-dependent fission gas release data using the BISON code, and (2) void fraction simulation based on the BFBT benchmark using the TRACE code. It was found that the three methods typically require different DNN architectures and hyperparameters to optimize their performance. The UQ results also depend on the amount of available training data and the nature of the data. Overall, all three methods can provide reasonable estimates of the approximation uncertainty. The uncertainty is generally smaller when the mean prediction is close to the test data, and the BNN method usually produces larger uncertainty than MCD and DE.
Objective: Convolutional neural networks (CNNs) have demonstrated promise in automated cardiac magnetic resonance image segmentation. However, when using CNNs in a large real-world dataset, it is important to quantify segmentation uncertainty and identify segmentations which could be problematic. In this work, we performed a systematic study of Bayesian and non-Bayesian methods for estimating uncertainty in segmentation neural networks. Methods: We evaluated Bayes by Backprop, Monte Carlo Dropout, Deep Ensembles, and Stochastic Segmentation Networks in terms of segmentation accuracy, probability calibration, uncertainty on out-of-distribution images, and segmentation quality control. Results: We observed that Deep Ensembles outperformed the other methods except for images with heavy noise and blurring distortions. We showed that Bayes by Backprop is more robust to noise distortions while Stochastic Segmentation Networks are more resistant to blurring distortions. For segmentation quality control, we showed that segmentation uncertainty is correlated with segmentation accuracy for all the methods. With the incorporation of uncertainty estimates, we were able to reduce the percentage of poor segmentation to 5% by flagging 31--48% of the most uncertain segmentations for manual review, substantially lower than random review without using neural network uncertainty (reviewing 75--78% of all images). Conclusion: This work provides a comprehensive evaluation of uncertainty estimation methods and showed that Deep Ensembles outperformed other methods in most cases. Significance: Neural network uncertainty measures can help identify potentially inaccurate segmentations and alert users for manual review.
Modern machine learning methods including deep learning have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive uncertainty. Quantifying uncertainty is especially critical in real-world settings, which often involve input distributions that are shifted from the training distribution due to a variety of factors including sample bias and non-stationarity. In such settings, well calibrated uncertainty estimates convey information about when a model's output should (or should not) be trusted. Many probabilistic deep learning methods, including Bayesian and non-Bayesian methods, have been proposed in the literature for quantifying predictive uncertainty, but to our knowledge there has not previously been a rigorous large-scale empirical comparison of these methods under dataset shift. We present a large-scale benchmark of existing state-of-the-art methods on classification problems and investigate the effect of dataset shift on accuracy and calibration. We find that traditional post-hoc calibration does indeed fall short, as do several other previous methods. However, some methods that marginalize over models give surprisingly strong results across a broad spectrum of tasks.
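Calibration in such benchmarks is typically summarized by metrics like the expected calibration error (ECE); the sketch below shows one common binning-based formulation, with the bin count and random inputs as illustrative assumptions rather than the benchmark's exact protocol.

```python
import numpy as np

def expected_calibration_error(confidences, correct, num_bins=10):
    """confidences: predicted top-class probabilities; correct: 0/1 indicator per example."""
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap          # weight the gap by the bin's sample fraction
    return ece

# Illustrative call on random inputs; real usage would pass a model's test-set outputs.
ece = expected_calibration_error(np.random.rand(1000), np.random.randint(0, 2, 1000))
```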
Labeling data can be an expensive task, as it is usually performed manually by domain experts. This is cumbersome for deep learning, which relies on large labeled datasets. Active learning (AL) is a paradigm that aims to reduce labeling effort by using only the data that the model in use deems most informative. Little research on AL has been done in a text classification setting, and next to none has involved the more recent, state-of-the-art natural language processing (NLP) models. Here, we present an empirical study that compares different uncertainty-based algorithms with BERT$_{base}$ as the classifier in use. We evaluate the algorithms on two NLP classification datasets: the Stanford Sentiment Treebank and KvK-Frontpages. In addition, we explore heuristics that aim to address presupposed problems of uncertainty-based AL, namely that it is unscalable and prone to selecting outliers. Furthermore, we explore the influence of the query-pool size on the performance of AL. While the proposed heuristics were found not to improve the performance of AL, our results show that uncertainty-based AL with BERT$_{base}$ outperforms random sampling of data. This difference in performance can decrease as the query-pool size grows.
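As an illustration of the uncertainty-based query step such studies rely on (a sketch under assumptions, not the paper's exact algorithm), the snippet below scores each unlabeled pool example by predictive entropy, e.g. computed from BERT$_{base}$ softmax outputs, and selects the top-k for labeling.

```python
import numpy as np

def entropy_query(probs, k):
    """probs: [num_pool, num_classes] predicted class probabilities for the unlabeled pool."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(-entropy)[:k]              # indices of the k most uncertain examples

pool_probs = np.random.dirichlet(np.ones(3), size=100)   # stand-in for classifier outputs
to_label = entropy_query(pool_probs, k=10)                # send these examples to annotators
```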
Distributional shift, or the mismatch between training data and deployment data, is a significant obstacle to using machine learning in high-stakes industrial applications such as autonomous driving and medicine. This creates a need to be able to assess how robustly ML models generalize as well as the quality of their uncertainty estimates. Standard ML benchmark datasets do not allow these properties to be assessed, as the training, validation, and test data are often identically distributed. Recently, a range of dedicated benchmarks has appeared that includes both distributionally matched and shifted data. Among these benchmarks, the Shifts dataset stands out in terms of the diversity of its tasks and the data modalities of its features. While most benchmarks are dominated by 2D image classification tasks, Shifts contains tabular weather forecasting, machine translation, and vehicle motion prediction tasks. This enables the robustness properties of models to be assessed on a diverse set of industrial-scale tasks, and either universal or directly applicable task-specific conclusions to be drawn. In this paper, we extend the Shifts dataset with two datasets sourced from industrial, high-risk applications of high societal importance. Specifically, we consider the tasks of segmenting white matter multiple sclerosis lesions in 3D magnetic resonance brain images and estimating the power consumption of marine cargo vessels. Both tasks feature ubiquitous distributional shifts and strict safety requirements due to the cost of errors. These new datasets will allow researchers to further explore robust generalization and uncertainty estimation in new situations. In this work, we provide a description of the datasets and baseline results for both tasks.
Marine ecosystems and the fish habitats within them are becoming increasingly important because of their significant role in providing valuable food sources and conservation benefits. Due to their remote and difficult-to-access nature, marine environments and fish habitats are typically monitored using underwater cameras. These cameras generate massive volumes of digital data that cannot be analyzed efficiently by current manual processing methods, which involve human observers. DL is a cutting-edge AI technology that has demonstrated unprecedented performance in analyzing visual data. Although it has been applied to a myriad of domains, its use in underwater fish habitat monitoring is still being explored. In this paper, we provide a tutorial covering the key concepts of DL, which helps the reader gain a high-level understanding of how DL works. The tutorial also explains a step-by-step procedure for developing DL algorithms for challenging applications such as underwater fish monitoring. In addition, we provide a comprehensive survey of key deep learning techniques for fish habitat monitoring, including classification, counting, localization, and segmentation. Furthermore, we survey publicly available underwater fish datasets and compare various DL techniques in the underwater fish monitoring domain. We also discuss some challenges and opportunities in the emerging field of deep learning for fish habitat monitoring. This paper serves as a tutorial for marine scientists who wish to gain a high-level understanding of DL, develop it for their applications by following our step-by-step tutorial, and see how it is evolving in order to facilitate their research efforts. At the same time, it is suitable for computer scientists who wish to survey state-of-the-art DL-based methods for fish habitat monitoring.
Acquiring labels for supervised learning can be expensive. To improve the sample efficiency of neural network regression, we study active learning methods that adaptively select unlabeled data for labeling. We propose a framework for constructing such methods from (network-dependent) base kernels, kernel transformations, and selection methods. Our framework encompasses many existing Bayesian methods based on Gaussian process approximations of neural networks, as well as non-Bayesian methods. Additionally, we propose replacing the commonly used last-layer features with sketched finite-width neural tangent kernels and combining them with a novel clustering method. To evaluate the different methods, we introduce an open-source benchmark consisting of 15 large tabular regression datasets. Our proposed method outperforms the state of the art on our benchmark, scales to large datasets, and works out of the box without adjusting the network architecture or training code. We provide open-source code that includes efficient implementations of all kernels, kernel transformations, and selection methods, and that can be used to reproduce our results.
In active learning, the size and complexity of the training dataset change over time. Simple models that are well specified by the amount of data available at the start of active learning may suffer from bias as more points are acquired. Flexible models that may be well suited to the full dataset can suffer from overfitting at the start of active learning. We tackle this problem using Depth Uncertainty Networks (DUNs), a BNN variant in which the depth of the network, and hence its complexity, is inferred. We find that DUNs outperform other BNN variants on several active learning tasks. Importantly, we show that on the tasks where DUNs perform best, they exhibit notably less overfitting than the baselines.
While machine learning is traditionally a resource intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensuring a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
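As a small, hedged example of one technique from category (i), the snippet below applies PyTorch's post-training dynamic quantization to the linear layers of a placeholder model; the model itself is an assumption, and a real deployment would also re-measure accuracy after quantization.

```python
import torch
import torch.nn as nn

# Placeholder model; real use would start from a trained network.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Dynamic quantization converts the linear layers' weights to 8-bit integers,
# shrinking the memory footprint and typically speeding up CPU inference.
quantized_model = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```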
Neural linear models (NLMs) are deep Bayesian models that produce predictive uncertainty by learning features from the data and then performing Bayesian linear regression over those features. Despite their popularity, few works have focused on methodically evaluating the predictive uncertainty of these models. In this work, we demonstrate that traditional training procedures for NLMs drastically underestimate uncertainty on out-of-distribution inputs, and that they therefore cannot be naively deployed in risk-sensitive applications. We identify the underlying reasons for this behavior and propose a novel training framework that captures useful predictive uncertainty for downstream tasks.
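For concreteness, here is a minimal sketch of the NLM construction described above: a network supplies features and a closed-form Bayesian linear regression is placed on top of them. The feature extractor, prior precision, and noise precision are illustrative assumptions rather than the authors' training procedure (which is precisely what the paper revisits).

```python
import torch
import torch.nn as nn

# Placeholder feature extractor; in an NLM these features are learned from data.
feature_net = nn.Sequential(nn.Linear(1, 50), nn.Tanh(), nn.Linear(50, 50), nn.Tanh())

def bayesian_linear_head(features, y, alpha=1.0, beta=25.0):
    """Closed-form Gaussian posterior over last-layer weights
    (prior precision alpha, observation-noise precision beta)."""
    d = features.shape[1]
    precision = alpha * torch.eye(d) + beta * features.T @ features
    cov = torch.linalg.inv(precision)
    mean = beta * cov @ features.T @ y
    return mean, cov

x, y = torch.randn(128, 1), torch.randn(128, 1)            # toy regression data
with torch.no_grad():
    phi = feature_net(x)                                    # learned features
    w_mean, w_cov = bayesian_linear_head(phi, y)

    # Predictive variance at a new input: weight uncertainty plus observation noise.
    phi_star = feature_net(torch.randn(1, 1))
    pred_var = phi_star @ w_cov @ phi_star.T + 1.0 / 25.0
```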
Estimating how uncertain an AI system is in its predictions is important to improve the safety of such systems. Uncertainty in predictions can result from uncertainty in model parameters, irreducible data uncertainty and uncertainty due to distributional mismatch between the test and training data distributions. Different actions might be taken depending on the source of the uncertainty, so it is important to be able to distinguish between them. Recently, baseline tasks and metrics have been defined and several practical methods to estimate uncertainty developed. These methods, however, attempt to model uncertainty due to distributional mismatch either implicitly through model uncertainty or as data uncertainty. This work proposes a new framework for modeling predictive uncertainty called Prior Networks (PNs) which explicitly models distributional uncertainty. PNs do this by parameterizing a prior distribution over predictive distributions. This work focuses on uncertainty for classification and evaluates PNs on the tasks of identifying out-of-distribution (OOD) samples and detecting misclassification on the MNIST and CIFAR-10 datasets, where they are found to outperform previous methods. Experiments on synthetic and MNIST data show that unlike previous non-Bayesian methods PNs are able to distinguish between data and distributional uncertainty.
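A hedged sketch of the parameterization idea: the network outputs Dirichlet concentration parameters over categorical predictive distributions, so a low total concentration can be read as distributional uncertainty. The tiny network and the exponential link used here are assumptions for illustration, not the paper's exact architecture or training criterion.

```python
import torch
import torch.nn as nn

class PriorNet(nn.Module):
    """Maps an input to Dirichlet concentration parameters over class probabilities."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, x):
        return torch.exp(self.body(x))            # positive concentrations alpha

net = PriorNet(in_dim=20, num_classes=10)
alpha = net(torch.randn(4, 20))
alpha0 = alpha.sum(dim=1, keepdim=True)           # Dirichlet precision
expected_probs = alpha / alpha0                   # expected categorical prediction
# A small alpha0 corresponds to a flat Dirichlet, i.e. high distributional uncertainty.
```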
Despite its importance for safe machine learning, uncertainty quantification for neural networks is far from being solved. State-of-the-art approaches to estimating neural uncertainties are often hybrid, combining parametric models with explicit or implicit (dropout-based) ensembling. We take another path and propose a novel uncertainty quantification approach for regression tasks that is purely non-parametric. Technically, it captures aleatoric uncertainty by means of dropout-based sub-network distributions. This is achieved through a new objective that minimizes the Wasserstein distance between the label distribution and the model distribution. Extensive empirical analysis shows that Wasserstein dropout outperforms state-of-the-art methods in producing more accurate and stable uncertainty estimates, both on vanilla test data and under distributional shift.
Property inference attacks against machine learning (ML) models aim to infer properties of the training data that are unrelated to the primary task of the model, and have so far been formulated as binary decision problems, i.e., whether or not the training data have a certain property. However, in industrial and healthcare applications, the proportion of labels in the training data is quite often also considered sensitive information. In this paper we introduce a new type of property inference attack that, unlike the binary decision problems in the literature, aims at inferring the class label distribution of the training data from the parameters of ML classifier models. We propose a method based on \emph{shadow training} and a \emph{meta-classifier} trained on the parameters of the shadow classifiers augmented with the accuracy of the classifiers on auxiliary data. We evaluate the proposed approach for ML classifiers with fully connected neural network architectures. We find that the proposed \emph{meta-classifier} attack provides a maximum relative improvement of $52\%$ over the state of the art.
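The snippet below sketches the shadow-training idea under simplifying assumptions (synthetic data, a small fully connected shadow model, and a random-forest meta-model, none of which are claimed to match the paper's setup): many shadow classifiers are trained with known label proportions, and a meta-model learns to map their parameters plus an auxiliary-accuracy feature back to that proportion.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
meta_X, meta_y = [], []
for _ in range(50):                                           # shadow models
    pos_fraction = rng.uniform(0.1, 0.9)                      # known label proportion (the "property")
    y = (rng.random(500) < pos_fraction).astype(int)
    X = rng.normal(loc=y[:, None], scale=1.0, size=(500, 5))  # synthetic stand-in data
    shadow = MLPClassifier(hidden_layer_sizes=(8,), max_iter=300).fit(X, y)
    params = np.concatenate([w.ravel() for w in shadow.coefs_ + shadow.intercepts_])
    aux_acc = shadow.score(X, y)                               # accuracy feature (here on its own data)
    meta_X.append(np.append(params, aux_acc))
    meta_y.append(pos_fraction)

# The meta-model regresses the label proportion from shadow-model parameters.
meta_model = RandomForestRegressor().fit(np.array(meta_X), np.array(meta_y))
```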
Uncertainty quantification in neural networks promises to increase the safety of AI systems, but it is currently unclear how its quality varies with the size of the training set. In this paper, we evaluate seven uncertainty methods on Fashion-MNIST and CIFAR10 as we subsample the data to produce varied training set sizes. We find that calibration error and out-of-distribution detection performance depend strongly on the training set size, with most methods being miscalibrated on the test set when trained on small training sets. Gradient-based methods appear to estimate epistemic uncertainty poorly and are the most affected by training set size. We hope that our results can guide future research on uncertainty quantification and help practitioners select methods based on their particular available data.
We introduce ensembles of stochastic neural networks to approximate the Bayesian posterior, combining stochastic methods such as dropout with deep ensembles. The stochastic ensembles are formulated as families of distributions and trained to approximate the Bayesian posterior with variational inference. We implement stochastic ensembles based on Monte Carlo dropout, DropConnect and a novel non-parametric version of dropout and evaluate them on a toy problem and CIFAR image classification. For CIFAR, the stochastic ensembles are quantitatively compared to published Hamiltonian Monte Carlo results for a ResNet-20 architecture. We also test the quality of the posteriors directly against Hamiltonian Monte Carlo simulations in a simplified toy model. Our results show that in a number of settings, stochastic ensembles provide more accurate posterior estimates than regular deep ensembles.