Transformers are state-of-the-art in a wide range of NLP tasks and have also been applied to many real-world products. Understanding the reliability and certainty of transformer model predictions is crucial for building trustworthy machine learning applications, e.g., medical diagnosis. Although many recent transformer extensions have been proposed, the study of uncertainty estimation for transformer models remains under-explored. In this work, we propose a novel way to enable transformers to estimate uncertainty while, at the same time, retaining the original predictive performance. This is achieved by learning a hierarchical stochastic self-attention that attends to values and to a set of learnable centroids, respectively. New attention heads are then formed as a mixture using the Gumbel-Softmax trick. We show theoretically that the approximation of self-attention obtained by sampling from a Gumbel distribution is upper bounded. We empirically evaluate our model on two text classification tasks with both in-domain (ID) and out-of-domain (OOD) datasets. The experimental results show that our approach: (1) achieves the best trade-off between predictive performance and uncertainty among the compared methods; (2) exhibits very competitive (and in most cases improved) predictive performance on ID datasets; and (3) is on par with Monte Carlo dropout and ensemble methods for uncertainty estimation on OOD datasets.
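To make the Gumbel-Softmax mechanism concrete, below is a minimal PyTorch sketch (not the paper's exact hierarchical formulation) of a stochastic attention layer that attends to a set of learnable centroids with Gumbel-Softmax-sampled weights; the class name, dimensions, and the simple value projection are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticCentroidAttention(nn.Module):
    """Minimal sketch: attention weights over learnable centroids are sampled
    with the Gumbel-Softmax trick, so each forward pass is stochastic and
    repeated passes can be used for uncertainty estimation."""

    def __init__(self, d_model: int, n_centroids: int = 16, tau: float = 1.0):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(n_centroids, d_model))
        self.value_proj = nn.Linear(d_model, d_model)
        self.tau = tau

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        logits = x @ self.centroids.t()                      # (B, T, K) scores
        # Sample a soft assignment per token via the Gumbel-Softmax trick.
        weights = F.gumbel_softmax(logits, tau=self.tau, hard=False, dim=-1)
        context = weights @ self.centroids                   # (B, T, d_model)
        return self.value_proj(context)

# Multiple stochastic forward passes give a predictive distribution.
layer = StochasticCentroidAttention(d_model=32)
x = torch.randn(2, 5, 32)
samples = torch.stack([layer(x) for _ in range(8)])
print(samples.mean(0).shape, samples.var(0).mean().item())
```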
The ability to estimate epistemic uncertainty is often crucial when deploying machine learning in the real world, but modern methods often produce overconfident, uncalibrated uncertainty predictions. A common approach to quantify epistemic uncertainty, usable across a wide class of prediction models, is to train a model ensemble. In a naive implementation, the ensemble approach has high computational cost and high memory demand. This challenges in particular modern deep learning, where even a single deep network is already demanding in terms of compute and memory, and has given rise to a number of attempts to emulate the model ensemble without actually instantiating separate ensemble members. We introduce FiLM-Ensemble, a deep, implicit ensemble method based on the concept of Feature-wise Linear Modulation (FiLM). That technique was originally developed for multi-task learning, with the aim of decoupling different tasks. We show that the idea can be extended to uncertainty quantification: by modulating the network activations of a single deep network with FiLM, one obtains a model ensemble with high diversity, and consequently well-calibrated estimates of epistemic uncertainty, with low computational overhead in comparison. Empirically, FiLM-Ensemble outperforms other implicit ensemble methods, and it comes very close to the upper bound of an explicit ensemble of networks (sometimes even beating it), at a fraction of the memory cost.
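As a rough illustration of the FiLM idea, the sketch below applies member-specific affine modulation (gamma, beta) to the activations of a shared linear layer; the layer names, sizes, and the way members are indexed are assumptions rather than the paper's exact architecture, which modulates a full deep network.

```python
import torch
import torch.nn as nn

class FiLMEnsembleLayer(nn.Module):
    """Sketch of FiLM-style modulation for an implicit ensemble: one shared
    linear layer, with per-member affine parameters (gamma, beta) applied to
    its activations. The member index selects which modulation is used."""

    def __init__(self, d_in: int, d_out: int, n_members: int):
        super().__init__()
        self.shared = nn.Linear(d_in, d_out)
        self.gamma = nn.Parameter(torch.ones(n_members, d_out))
        self.beta = nn.Parameter(torch.zeros(n_members, d_out))

    def forward(self, x: torch.Tensor, member: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.shared(x))
        return self.gamma[member] * h + self.beta[member]   # FiLM modulation

# Predict with all members and average softmax outputs.
n_members = 4
layer = FiLMEnsembleLayer(d_in=10, d_out=3, n_members=n_members)
x = torch.randn(8, 10)
probs = torch.stack([
    torch.softmax(layer(x, torch.full((8,), m, dtype=torch.long)), dim=-1)
    for m in range(n_members)
]).mean(0)
print(probs.shape)  # (8, 3) averaged ensemble probabilities
```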
Recently, uncertainty estimation in deep learning has become a key research area for improving the reliability and robustness of safety-critical applications. Although many of the proposed methods focus either on distance-aware model uncertainty for out-of-distribution detection or on input-dependent label uncertainty for in-distribution calibration, both types of uncertainty are often necessary. In this work, we propose the HetSNGP method for jointly modeling model and data uncertainty. We show that our proposed model provides a favorable combination of these two types of uncertainty and thus outperforms baseline methods on several challenging out-of-distribution datasets, including CIFAR-100C, ImageNet-C, and ImageNet-A. Moreover, we propose HetSNGP Ensemble, an ensembled version of our method, which additionally models uncertainty over the network parameters and outperforms other ensemble baselines.
Multi-head attention is the driving force behind state-of-the-art transformers, which achieve remarkable performance across a variety of natural language processing (NLP) and computer vision tasks. It has been observed that, for many applications, these attention heads learn redundant embeddings, and most of them can be removed without degrading the model's performance. Inspired by this observation, we propose Transformer with a Mixture of Gaussian Keys (Transformer-MGK), a novel transformer architecture that replaces redundant heads with a mixture of keys at each head. These mixtures of keys follow a Gaussian mixture model and allow each attention head to focus efficiently on different parts of the input sequence. Compared to its conventional transformer counterpart, Transformer-MGK accelerates training and inference, has fewer parameters, and requires fewer FLOPs to compute, while achieving comparable or better accuracy across tasks. Transformer-MGK can also be easily extended to linear attention. We empirically demonstrate the advantages of Transformer-MGK in a range of practical applications, including language modeling and tasks involving very long sequences. On the WikiText-103 and Long Range Arena benchmarks, Transformer-MGKs with 4 heads attain comparable or better performance than baseline transformers with 8 heads.
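A hedged sketch of the core scoring rule: each key position carries several key vectors, and the attention score of a query against a position is the log-density of the query under that position's Gaussian mixture. The isotropic shared variance, single head, and tensor shapes are simplifying assumptions, not the paper's full design.

```python
import torch
import torch.nn as nn

class MGKAttention(nn.Module):
    """Sketch of attention with a mixture of Gaussian keys: each key position
    carries M key vectors plus mixture weights, and the score of a query
    against a position is the log-density of the query under that position's
    Gaussian mixture (shared isotropic variance assumed here)."""

    def __init__(self, d_model: int, n_mixtures: int = 2, sigma: float = 1.0):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model * n_mixtures)
        self.v_proj = nn.Linear(d_model, d_model)
        self.pi = nn.Parameter(torch.zeros(n_mixtures))  # mixture logits
        self.n_mixtures, self.sigma = n_mixtures, sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        q = self.q_proj(x)                                   # (B, T, D)
        k = self.k_proj(x).view(B, T, self.n_mixtures, D)    # (B, T, M, D)
        v = self.v_proj(x)                                   # (B, T, D)
        # Squared distances between every query and every key component.
        diff = q[:, :, None, None, :] - k[:, None, :, :, :]  # (B, Tq, Tk, M, D)
        log_comp = -(diff ** 2).sum(-1) / (2 * self.sigma ** 2)
        log_pi = torch.log_softmax(self.pi, dim=0)           # (M,)
        scores = torch.logsumexp(log_comp + log_pi, dim=-1)  # (B, Tq, Tk)
        attn = torch.softmax(scores, dim=-1)
        return attn @ v

x = torch.randn(2, 6, 16)
print(MGKAttention(16)(x).shape)  # torch.Size([2, 6, 16])
```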
In this work, we introduce Kernelized Transformers, a generic, scalable, data-driven framework for learning the kernel function in transformers. Our framework approximates the transformer kernel as a dot product between spectral feature maps and learns the kernel by learning the spectral distribution. This not only helps learn a generic kernel end-to-end, but also reduces the time and space complexity of transformers from quadratic to linear. We show that Kernelized Transformers achieve performance comparable to existing efficient transformer architectures, both in terms of accuracy and computational efficiency. Our study also demonstrates that the choice of kernel has a substantial impact on performance, and that kernel-learning variants are competitive alternatives to fixed-kernel transformers, in both long- and short-sequence tasks.
Uncertainty quantification in neural networks promises to increase the safety of AI systems, but it is unclear how it is affected by the size of the training set. In this paper, we evaluate seven uncertainty methods on Fashion-MNIST and CIFAR-10, subsampling the data to produce varying training set sizes. We find that calibration error and out-of-distribution detection performance depend strongly on the training set size, with most methods being miscalibrated on the test set when trained on small training sets. Gradient-based methods appear to estimate epistemic uncertainty poorly and are the most affected by the training set size. We hope our results can guide future research on uncertainty quantification and help practitioners select methods based on the data they have available.
Accurate uncertainty quantification is a major challenge in deep learning, as neural networks can make overconfident errors and assign high confidence predictions to out-of-distribution (OOD) inputs. The most popular approaches to estimate predictive uncertainty in deep learning are methods that combine predictions from multiple neural networks, such as Bayesian neural networks (BNNs) and deep ensembles. However, their practicality in real-time, industrial-scale applications is limited due to the high memory and computational cost. Furthermore, ensembles and BNNs do not necessarily fix all the issues with the underlying member networks. In this work, we study principled approaches to improve the uncertainty properties of a single network, based on a single, deterministic representation. By formalizing the uncertainty quantification as a minimax learning problem, we first identify distance awareness, i.e., the model's ability to quantify the distance of a testing example from the training data, as a necessary condition for a DNN to achieve high-quality (i.e., minimax optimal) uncertainty estimation. We then propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs with two simple changes: (1) applying spectral normalization to hidden weights to enforce bi-Lipschitz smoothness in representations and (2) replacing the last output layer with a Gaussian process layer. On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration and out-of-domain detection. Furthermore, SNGP provides complementary benefits to popular techniques such as deep ensembles and data augmentation, making it a simple and scalable building block for probabilistic deep learning. Code is open-sourced at https://github.com/google/uncertainty-baselines
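The two architectural changes are easy to sketch. The toy module below applies spectral normalization to the hidden layers and replaces the output layer with a random-Fourier-feature approximation of a GP; the Laplace-approximated covariance used for predictive variance is omitted, and all layer sizes are assumptions rather than the paper's reference implementation.

```python
import math
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class TinySNGP(nn.Module):
    """Minimal sketch of the two SNGP ingredients: spectral normalization on
    hidden layers and a random-feature Gaussian-process output layer."""

    def __init__(self, d_in: int, d_hidden: int, n_classes: int, n_rff: int = 256):
        super().__init__()
        self.body = nn.Sequential(
            spectral_norm(nn.Linear(d_in, d_hidden)), nn.ReLU(),
            spectral_norm(nn.Linear(d_hidden, d_hidden)), nn.ReLU(),
        )
        # Fixed random Fourier features approximating an RBF kernel.
        self.register_buffer("W", torch.randn(n_rff, d_hidden))
        self.register_buffer("b", 2 * math.pi * torch.rand(n_rff))
        self.out = nn.Linear(n_rff, n_classes, bias=False)  # GP output weights
        self.n_rff = n_rff

    def features(self, x: torch.Tensor) -> torch.Tensor:
        h = self.body(x)
        return math.sqrt(2.0 / self.n_rff) * torch.cos(h @ self.W.t() + self.b)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.out(self.features(x))

model = TinySNGP(d_in=20, d_hidden=64, n_classes=3)
print(model(torch.randn(4, 20)).shape)  # logits: torch.Size([4, 3])
```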
Transformer-based models are widely used in natural language processing (NLP). Central to transformer models is the self-attention mechanism, which captures the interactions of token pairs in the input sequence and depends quadratically on the sequence length. Training such models on longer sequences is expensive. In this paper, we show that a Bernoulli-sampling attention mechanism based on Locality Sensitive Hashing (LSH) reduces the quadratic complexity of such models to linear. We bypass the quadratic cost by considering self-attention as a sum over individual tokens associated with Bernoulli random variables that can, in principle, be sampled at once with a single hash (although in practice this number may be a small constant). This leads to an efficient sampling scheme for estimating self-attention that relies on specific modifications of LSH (to enable deployment on GPU architectures). We evaluate our algorithm on the GLUE benchmark with the standard 512 sequence length, where we see favorable performance relative to a standard pretrained Transformer. On the Long Range Arena (LRA) benchmark, designed to evaluate performance on long sequences, our method achieves results consistent with softmax self-attention but with considerable speed-ups and memory savings, and it often outperforms other efficient self-attention methods. Our code is available at https://github.com/mlpen/yoso
Deep neural networks are prone to overconfident predictions on outliers. Bayesian neural networks and deep ensembles have both been shown to mitigate this problem to some extent. In this work, we aim to combine the benefits of the two approaches by proposing to predict with a Gaussian mixture model posterior that consists of a weighted sum of Laplace approximations of independently trained deep neural networks. The method can be used with any set of pre-trained networks and only requires a small computational and memory overhead compared to regular ensembles. We theoretically validate that our approach mitigates overconfidence "far away" from the training data and empirically compare against state-of-the-art baselines on standard uncertainty quantification benchmarks.
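A minimal sketch of predicting with such a mixture, assuming each member comes with a diagonal Gaussian (Laplace) posterior around its MAP weights and equal mixture weights; the helper name and the toy diagonal variances are illustrative, not the paper's full treatment.

```python
import copy
import torch
import torch.nn as nn

def mola_predict(members, variances, x, n_samples=5):
    """Sketch of prediction with a mixture-of-Laplace posterior: for each
    independently trained member (MAP weights) we sample weights from a
    diagonal Gaussian around its MAP estimate and average the softmax
    predictions over members and samples (equal mixture weights assumed)."""
    probs = []
    for net, var in zip(members, variances):
        for _ in range(n_samples):
            sampled = copy.deepcopy(net)
            with torch.no_grad():
                for p, v in zip(sampled.parameters(), var):
                    p.add_(torch.randn_like(p) * v.sqrt())  # w ~ N(w_MAP, diag v)
            probs.append(torch.softmax(sampled(x), dim=-1))
    return torch.stack(probs).mean(0)

# Toy usage: two "trained" members with small diagonal posterior variances.
members = [nn.Sequential(nn.Linear(10, 3)) for _ in range(2)]
variances = [[torch.ones_like(p) * 0.01 for p in m.parameters()] for m in members]
print(mola_predict(members, variances, torch.randn(4, 10)).shape)
```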
We propose SWA-Gaussian (SWAG), a simple, scalable, and general purpose approach for uncertainty representation and calibration in deep learning. Stochastic Weight Averaging (SWA), which computes the first moment of stochastic gradient descent (SGD) iterates with a modified learning rate schedule, has recently been shown to improve generalization in deep learning. With SWAG, we fit a Gaussian using the SWA solution as the first moment and a low rank plus diagonal covariance also derived from the SGD iterates, forming an approximate posterior distribution over neural network weights; we then sample from this Gaussian distribution to perform Bayesian model averaging. We empirically find that SWAG approximates the shape of the true posterior, in accordance with results describing the stationary distribution of SGD iterates. Moreover, we demonstrate that SWAG performs well on a wide variety of tasks, including out of sample detection, calibration, and transfer learning, in comparison to many popular alternatives including MC dropout, KFAC Laplace, SGLD, and temperature scaling.
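A minimal sketch of the SWAG sampling step for a flattened weight vector, assuming the SWA mean, running second moment, and deviation matrix of the last K iterates have already been collected during training; variable names and the toy statistics are illustrative.

```python
import torch

def swag_sample(theta_swa, theta_sq_mean, dev_mat, n_samples=1):
    """Sketch of SWAG sampling: the approximate posterior is a Gaussian with
    the SWA mean, a diagonal part from the running second moment, and a
    low-rank part from deviations of the last K SGD iterates (dev_mat columns);
    each part is halved, as in the SWAG paper."""
    diag_var = torch.clamp(theta_sq_mean - theta_swa ** 2, min=1e-10)
    K = dev_mat.shape[1]
    samples = []
    for _ in range(n_samples):
        z1 = torch.randn_like(theta_swa)
        z2 = torch.randn(K)
        theta = (theta_swa
                 + diag_var.sqrt() * z1 / (2 ** 0.5)
                 + dev_mat @ z2 / (2 * (K - 1)) ** 0.5)
        samples.append(theta)
    return torch.stack(samples)

# Toy usage with a 100-dimensional weight vector and rank-5 deviation matrix;
# each sample would be loaded into the network for Bayesian model averaging.
d, K = 100, 5
theta_swa = torch.zeros(d)
theta_sq_mean = torch.ones(d) * 0.04        # running average of theta**2
dev_mat = 0.1 * torch.randn(d, K)           # theta_i - theta_swa for last K iterates
print(swag_sample(theta_swa, theta_sq_mean, dev_mat, n_samples=3).shape)
```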
Bayesian Inference offers principled tools to tackle many critical problems with modern neural networks such as poor calibration and generalization, and data inefficiency. However, scaling Bayesian inference to large architectures is challenging and requires restrictive approximations. Monte Carlo Dropout has been widely used as a relatively cheap way for approximate inference and to estimate uncertainty with deep neural networks. Traditionally, the dropout mask is sampled independently from a fixed distribution. Recent works show that the dropout mask can be viewed as a latent variable, which can be inferred with variational inference. These methods face two important challenges: (a) the posterior distribution over masks can be highly multi-modal which can be difficult to approximate with standard variational inference and (b) it is not trivial to fully utilize sample-dependent information and correlation among dropout masks to improve posterior estimation. In this work, we propose GFlowOut to address these issues. GFlowOut leverages the recently proposed probabilistic framework of Generative Flow Networks (GFlowNets) to learn the posterior distribution over dropout masks. We empirically demonstrate that GFlowOut results in predictive distributions that generalize better to out-of-distribution data, and provide uncertainty estimates which lead to better performance in downstream tasks.
Uncertainty is an important consideration for time series forecasting tasks. In this work, we focus specifically on quantifying the uncertainty of traffic forecasting. To achieve this, we develop Deep Spatio-Temporal Uncertainty Quantification (DeepSTUQ), which can estimate both aleatoric and epistemic uncertainty. We first exploit a spatio-temporal model to capture the complex spatio-temporal correlations in traffic data. Subsequently, two independent sub-neural networks are developed to maximize the heterogeneous log-likelihood and estimate aleatoric uncertainty. To estimate epistemic uncertainty, we combine the merits of variational inference and deep ensembling by integrating Monte Carlo dropout with a re-training method based on adaptive weight averaging. Finally, we propose a post-processing calibration approach based on temperature scaling, which improves the model's ability to generalize its uncertainty estimates. Extensive experiments on four public datasets show that the proposed method outperforms state-of-the-art methods in terms of both point prediction and uncertainty quantification.
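Temperature scaling itself is simple to sketch: a single scalar temperature is fit on held-out validation logits by minimizing the negative log-likelihood. The snippet below is a generic illustration of that post-processing step, not DeepSTUQ's specific calibration procedure.

```python
import torch
import torch.nn as nn

def fit_temperature(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Sketch of post-hoc temperature scaling: learn a single scalar T on
    held-out validation logits by minimizing the negative log-likelihood;
    calibrated probabilities are softmax(logits / T)."""
    log_t = nn.Parameter(torch.zeros(1))           # optimize log T for positivity
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

    def closure():
        opt.zero_grad()
        loss = nn.functional.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return float(log_t.exp())

# Toy usage: over-confident random logits typically yield a temperature T > 1.
logits = 5 * torch.randn(256, 10)
labels = torch.randint(0, 10, (256,))
T = fit_temperature(logits, labels)
print("fitted temperature:", round(T, 3))
```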
Popular approaches for quantifying predictive uncertainty in deep neural networks often involve a set of weights or models, for instance via ensembling or Monte Carlo dropout. These techniques usually incur overhead by having to train multiple model instances, or do not produce very diverse predictions. This survey aims to familiarize the reader with an alternative class of models based on the concept of Evidential Deep Learning: for unfamiliar data, they admit "what they don't know" and fall back onto a prior belief. Furthermore, they allow uncertainty estimation in a single model and forward pass by parameterizing distributions over distributions. This survey recapitulates existing work, focusing on implementations in the classification setting. Finally, we survey the application of the same paradigm to regression problems. We also reflect on the strengths and weaknesses compared to existing methods and present the most central theoretical results in order to inform future research.
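As a concrete instance of "distributions over distributions", the sketch below shows a single-pass evidential classifier in the subjective-logic style: the network outputs per-class evidence parameterizing a Dirichlet, and low total evidence signals high epistemic uncertainty. Layer sizes and the softplus evidence function are assumptions; the survey covers several variants and training losses.

```python
import torch
import torch.nn as nn

class EvidentialClassifier(nn.Module):
    """Sketch of single-pass evidential classification: the network outputs
    non-negative evidence per class, parameterizing a Dirichlet over class
    probabilities; low total evidence signals 'I don't know'."""

    def __init__(self, d_in: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, n_classes))
        self.n_classes = n_classes

    def forward(self, x: torch.Tensor):
        evidence = torch.nn.functional.softplus(self.net(x))  # e >= 0
        alpha = evidence + 1.0                                 # Dirichlet parameters
        strength = alpha.sum(-1, keepdim=True)
        probs = alpha / strength                               # expected probabilities
        uncertainty = self.n_classes / strength                # vacuity / epistemic
        return probs, uncertainty

model = EvidentialClassifier(d_in=8, n_classes=4)
p, u = model(torch.randn(3, 8))
print(p.sum(-1), u.squeeze(-1))  # rows sum to 1; uncertainty in (0, 1]
```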
Learning the distribution of a continuous or categorical response variable $\boldsymbol{y}$ given its covariates $\boldsymbol{x}$ is a fundamental problem in statistics and machine learning. Deep neural network based supervised learning algorithms have made great progress in predicting the mean of $\boldsymbol{y}$ given $\boldsymbol{x}$, but they are often criticized for their ability to accurately capture the uncertainty of their predictions. In this paper, we introduce Classification and Regression Diffusion (CARD) models, which combine a diffusion-based conditional generative model and a pre-trained conditional mean estimator to accurately predict the distribution of $\boldsymbol{y}$ given $\boldsymbol{x}$. We demonstrate the outstanding ability of CARD in conditional distribution prediction with both toy examples and real-world datasets; the experimental results show that CARD in general outperforms state-of-the-art methods, including Bayesian neural network based methods designed for uncertainty estimation, especially when the conditional distribution of $\boldsymbol{y}$ given $\boldsymbol{x}$ is multimodal.
Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.
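The basic recipe is easy to sketch: train several networks from different random initializations and average their softmax outputs at prediction time. The sizes, seeds, and entropy-based uncertainty proxy below are illustrative assumptions.

```python
import torch
import torch.nn as nn

def make_member(seed: int) -> nn.Module:
    """Each ensemble member starts from a different random initialization
    (and, typically, a different data shuffling); only the init is shown here."""
    torch.manual_seed(seed)
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))

members = [make_member(s) for s in range(5)]
x = torch.randn(4, 10)

# Ensemble prediction: average the per-member softmax probabilities.
with torch.no_grad():
    probs = torch.stack([torch.softmax(m(x), dim=-1) for m in members]).mean(0)

# A simple uncertainty proxy: predictive entropy of the averaged distribution.
entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
print(probs.shape, entropy.shape)
```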
Ensembles of independently trained neural networks are a state-of-the-art approach to estimating predictive uncertainty in deep learning, and can be interpreted as an approximation of the posterior distribution via a mixture of delta functions. The training of ensembles relies on the non-convexity of the loss landscape and the random initialization of their individual members, leaving the resulting posterior approximation uncontrolled. This paper proposes a novel and principled method to tackle this limitation, minimizing an $f$-divergence between the true posterior and a kernel density estimator (KDE) in function space. We analyze this objective from a combinatorial point of view and show that it is submodular with respect to the mixture components for any $f$. Subsequently, we consider the problem of greedy ensemble construction. Quantifying the improvement of the posterior approximation by the marginal gain in the negative $f$-divergence obtained by adding a new component to the KDE, we derive a novel diversity term for ensemble methods. The performance of our approach is demonstrated on computer vision out-of-distribution detection benchmarks across a range of architectures trained on multiple datasets. The source code of our method is publicly available at https://github.com/oulu-imeds/greedy_ensembles_training.
The Bayesian paradigm has the potential to solve core issues of deep neural networks such as poor calibration and data inefficiency. Alas, scaling Bayesian inference to large weight spaces often requires restrictive approximations. In this work, we show that it suffices to perform inference over a small subset of model weights in order to obtain accurate predictive posteriors. The other weights are kept as point estimates. This subnetwork inference framework enables us to use expressive, otherwise intractable, posterior approximations over such subsets. In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: we first obtain a MAP estimate of all weights and then infer a full-covariance Gaussian posterior over the subnetwork using the linearized Laplace approximation. We propose a subnetwork selection strategy that aims to maximally preserve the model's predictive uncertainty. Empirically, our approach compares favorably to ensembles and to less expressive posterior approximations over the full network.
Random feature attention (RFA) was recently proposed to approximate softmax attention with linear time and space complexity by linearizing the exponential kernel. In this paper, we first propose a novel perspective for understanding the bias of such an approximation by recasting RFA as a self-normalized importance sampler. This perspective further sheds light on an \emph{unbiased} estimator of the whole softmax attention, called randomized attention (RA). RA constructs positive random features via query-specific distributions and enjoys greatly improved approximation fidelity, albeit exhibiting quadratic complexity. By combining the expressiveness of RA and the efficiency of RFA, we develop a novel linear-complexity self-attention mechanism called linear randomized attention (LARA). Extensive experiments across various domains demonstrate that RA and LARA significantly improve the performance of RFA.
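For context, here is a hedged sketch of the RFA-style linearization the paper starts from: queries and keys are mapped through positive random features so attention can be computed in time linear in the sequence length. This illustrates the baseline kernel approximation only, not RA or LARA; the feature count and scaling are assumptions.

```python
import math
import torch

def positive_random_features(x: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    # phi(x) = exp(W x - ||x||^2 / 2) / sqrt(m): an unbiased positive-feature
    # approximation of the exponential kernel exp(q . k).
    m = W.shape[0]
    return torch.exp(x @ W.t() - (x ** 2).sum(-1, keepdim=True) / 2) / math.sqrt(m)

def rfa_attention(q, k, v, n_features=64):
    """Sketch of random-feature attention: linearize softmax attention via
    positive random features so the cost is linear in the sequence length."""
    d = q.shape[-1]
    W = torch.randn(n_features, d)
    q_f = positive_random_features(q / d ** 0.25, W)   # (B, Tq, m)
    k_f = positive_random_features(k / d ** 0.25, W)   # (B, Tk, m)
    kv = k_f.transpose(1, 2) @ v                       # (B, m, Dv)
    normalizer = q_f @ k_f.sum(1).unsqueeze(-1)        # (B, Tq, 1)
    return (q_f @ kv) / (normalizer + 1e-6)

q = k = v = torch.randn(2, 8, 16)
print(rfa_attention(q, k, v).shape)  # torch.Size([2, 8, 16])
```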
Medical imaging, including MRI, CT, and ultrasound, plays a vital role in clinical decision making. Accurate segmentation is essential to measure structures of interest in an image. However, manual segmentation is highly operator-dependent, which leads to high inter- and intra-rater variability of the quantitative measurements. In this paper, we explore the feasibility of capturing clinicians' inter-rater variability with Bayesian predictive distributions parameterized by deep neural networks. By exploring and analyzing recently emerged approximate inference schemes, we evaluate whether approximate Bayesian deep learning with a posterior over segmentations can learn the inter-rater variability of both segmentations and clinical measurements. Experiments are performed with two different imaging modalities: MRI and ultrasound. We empirically demonstrate that Bayesian predictive distributions parameterized by deep neural networks can approximate clinicians' inter-rater variability. We demonstrate a new perspective for analyzing medical images quantitatively by providing the uncertainty of clinical measurements.
Trainable evaluation metrics for machine translation (MT) exhibit strong correlation with human judgements, but they are often hard to interpret and might produce unreliable scores under noisy or out-of-domain data. Recent work has attempted to mitigate this with simple uncertainty quantification techniques (Monte Carlo dropout and deep ensembles), however these techniques (as we show) are limited in several ways -- for example, they are unable to distinguish between different kinds of uncertainty, and they are time and memory consuming. In this paper, we propose more powerful and efficient uncertainty predictors for MT evaluation, and we assess their ability to target different sources of aleatoric and epistemic uncertainty. To this end, we develop and compare training objectives for the COMET metric to enhance it with an uncertainty prediction output, including heteroscedastic regression, divergence minimization, and direct uncertainty prediction. Our experiments show improved results on uncertainty prediction for the WMT metrics task datasets, with a substantial reduction in computational costs. Moreover, they demonstrate the ability of these predictors to address specific uncertainty causes in MT evaluation, such as low quality references and out-of-domain data.
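As an illustration of the heteroscedastic-regression objective mentioned above (and not COMET's exact training setup), the sketch below adds a head that predicts both a score and an input-dependent variance, trained with the Gaussian negative log-likelihood; the feature dimension and helper names are assumptions.

```python
import torch
import torch.nn as nn

class HeteroscedasticHead(nn.Module):
    """Sketch of a heteroscedastic regression head: the model predicts both a
    quality score (mean) and an input-dependent variance, trained with the
    Gaussian negative log-likelihood so the variance acts as an uncertainty
    estimate."""

    def __init__(self, d_in: int):
        super().__init__()
        self.mean = nn.Linear(d_in, 1)
        self.log_var = nn.Linear(d_in, 1)

    def forward(self, h: torch.Tensor):
        return self.mean(h).squeeze(-1), self.log_var(h).squeeze(-1)

def gaussian_nll(mu, log_var, target):
    return 0.5 * (log_var + (target - mu) ** 2 / log_var.exp()).mean()

head = HeteroscedasticHead(d_in=16)
h = torch.randn(32, 16)            # e.g. pooled sentence-pair features
y = torch.randn(32)                # human quality judgements
mu, log_var = head(h)
loss = gaussian_nll(mu, log_var, y)
loss.backward()
print(float(loss))
```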