Current out-of-distribution detection approaches usually impose special requirements (e.g., collecting outlier data and hyperparameter validation) and produce side effects (e.g., reduced classification accuracy and slow/inefficient inference). Recently, entropic out-of-distribution detection has been proposed as a seamless approach, i.e., a solution that avoids all of the previously mentioned drawbacks. The entropic out-of-distribution detection solution uses the IsoMax loss for training and the entropic score for out-of-distribution detection. The IsoMax loss works as a drop-in replacement for the SoftMax loss (i.e., the combination of the output linear layer, the SoftMax activation, and the cross-entropy loss), because swapping the SoftMax loss for the IsoMax loss requires no changes to the model's architecture or to the training procedure and hyperparameters. In this paper, we perform what we call an isometrization of the distances used in the IsoMax loss. Additionally, we propose replacing the entropic score with the minimum distance score. Experiments show that these modifications significantly increase out-of-distribution detection performance while keeping the solution seamless. Besides being competitive with or outperforming all major current approaches, the proposed solution avoids all of their current limitations and is much easier to use, since only a simple loss replacement is needed to train the neural network. Code to replace the SoftMax loss with the IsoMax+ loss and reproduce the results is available at https://github.com/dlmacedo/entropic-out-of-distribution-detection.
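A minimal sketch of the two ingredients named above, under our own simplifying assumptions (the prototype initialization, normalization, and learnable scale are illustrative, not the authors' exact formulation): distance-based logits as a drop-in replacement for the output linear layer, and the minimum distance score for detection.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistanceLogits(nn.Module):
    """Drop-in replacement for the usual linear output layer: logits are
    negative distances between normalized features and learnable class
    prototypes (a simplified reading of the IsoMax+ idea)."""
    def __init__(self, num_features, num_classes):
        super().__init__()
        self.prototypes = nn.Parameter(torch.zeros(num_classes, num_features))
        self.scale = nn.Parameter(torch.tensor(10.0))  # learnable distance scale (assumption)

    def forward(self, features):
        # "Isometric" distances: both features and prototypes are L2-normalized.
        f = F.normalize(features, dim=1)
        p = F.normalize(self.prototypes, dim=1)
        distances = torch.cdist(f, p)               # (batch, num_classes)
        return -torch.abs(self.scale) * distances   # train with ordinary cross-entropy

def min_distance_score(logits):
    """OOD score proposed in place of the entropic score: the (negative)
    minimum distance, i.e. the maximum logit. Lower values suggest OOD."""
    return logits.max(dim=1).values
```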
Building robust deterministic neural networks remains a challenge. On the one hand, some approaches improve out-of-distribution detection at the cost of reducing classification accuracy in some situations. On the other hand, some approaches simultaneously improve classification accuracy, uncertainty estimation, and out-of-distribution detection, but at the cost of reduced inference efficiency. In this paper, we propose training deterministic neural networks with the DisMax loss, a drop-in replacement for the usual SoftMax loss (i.e., the combination of the output linear layer, the SoftMax activation, and the cross-entropy loss). Starting from the IsoMax+ loss, we build each logit from the distances to all prototypes rather than only the one associated with the correct class. We also introduce a mechanism that combines images to construct what we call fractional probability regularization. Moreover, we present a fast way to calibrate the network after training. Finally, we propose a composite score to perform out-of-distribution detection. Our experiments show that DisMax usually outperforms current approaches simultaneously in classification accuracy, uncertainty estimation, and out-of-distribution detection while keeping deterministic neural network inference efficient. Code to reproduce the results is available at https://github.com/dlmacedo/distinction-maximization-loss.
Determining whether inputs are out-of-distribution (OOD) is an essential building block for safely deploying machine learning models in the open world. However, previous methods relying on the softmax confidence score suffer from overconfident posterior distributions for OOD data. We propose a unified framework for OOD detection that uses an energy score. We show that energy scores better distinguish in- and out-of-distribution samples than the traditional approach using the softmax scores. Unlike softmax confidence scores, energy scores are theoretically aligned with the probability density of the inputs and are less susceptible to the overconfidence issue. Within this framework, energy can be flexibly used as a scoring function for any pre-trained neural classifier as well as a trainable cost function to shape the energy surface explicitly for OOD detection. On a CIFAR-10 pre-trained WideResNet, using the energy score reduces the average FPR (at TPR 95%) by 18.03% compared to the softmax confidence score. With energy-based training, our method outperforms the state-of-the-art on common benchmarks.
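The energy score is simple enough to state directly; here is a sketch of how it could be computed from the logits of any pre-trained classifier (the threshold choice below is illustrative only):

```python
import torch

def energy_score(logits, temperature=1.0):
    """Energy score E(x) = -T * logsumexp(f_i(x) / T). By the paper's
    convention, lower energy indicates in-distribution, so the score can
    be thresholded directly for OOD detection."""
    return -temperature * torch.logsumexp(logits / temperature, dim=1)

# Usage with any pre-trained classifier: flag inputs whose energy exceeds
# a threshold chosen on in-distribution validation data (e.g., at 95% TPR).
logits = torch.randn(8, 10)                # stand-in for model(x)
scores = energy_score(logits)
is_ood = scores > scores.quantile(0.95)    # illustrative threshold only
```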
Deep neural networks have attained remarkable performance when applied to data that comes from the same distribution as that of the training set, but can significantly degrade otherwise. Therefore, detecting whether an example is out-of-distribution (OoD) is crucial to enable a system that can reject such samples or alert users. Recent works have made significant progress on OoD benchmarks consisting of small image datasets. However, many recent methods based on neural networks rely on training or tuning with both in-distribution and out-of-distribution data. The latter is generally hard to define a priori, and its selection can easily bias the learning. We base our work on a popular method, ODIN [21], proposing two strategies for freeing it from the need to tune with OoD data, while improving its OoD detection performance. Specifically, we propose a decomposed confidence scoring as well as a modified input pre-processing method. We show that both of these significantly help detection performance. Our further analysis on a larger-scale image dataset shows that the two types of distribution shift, namely semantic shift and non-semantic shift, differ significantly in the difficulty of the problem, providing an analysis of when ODIN-like strategies do or do not work.
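For concreteness, here is a sketch of ODIN-style input pre-processing in PyTorch. Note that Generalized ODIN perturbs with respect to its decomposed confidence score and tunes epsilon on in-distribution data only; this sketch uses the original temperature-scaled softmax for brevity.

```python
import torch
import torch.nn.functional as F

def odin_preprocess(model, x, epsilon=0.0014, temperature=1000.0):
    """ODIN-style input pre-processing: nudge the input in the direction
    that increases the maximum (temperature-scaled) softmax probability,
    which widens the confidence gap between ID and OOD inputs."""
    x = x.clone().requires_grad_(True)
    log_probs = F.log_softmax(model(x) / temperature, dim=1)
    loss = -log_probs.max(dim=1).values.sum()   # negative max log-probability
    loss.backward()
    # Step against the gradient to raise confidence for in-distribution inputs.
    return (x - epsilon * x.grad.sign()).detach()
```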
In this paper, our goal is to leverage heterogeneous temperature scaling as a calibration strategy for out-of-distribution (OOD) detection. Heterogeneity here means that the optimal temperature parameter may differ for each sample, in contrast to the traditional approach of using a single value for the entire distribution. To achieve this, we propose a new training strategy called anchoring that can estimate an appropriate temperature value for each sample, leading to state-of-the-art OOD detection performance on several benchmarks. Using NTK theory, we show that this temperature function estimate is closely linked to the epistemic uncertainty of the classifier, which explains its behavior. In contrast to some of the best-performing OOD detection approaches, our method requires no exposure to additional outlier datasets, custom calibration objectives, or model ensembling. Through empirical studies across different OOD detection settings (far OOD, near OOD, and semantically coherent OOD), we establish a highly effective OOD detection method. Code and models can be accessed at https://github.com/rushilanirudh/amp
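As an illustration of what per-sample ("heterogeneous") temperature scaling means operationally, here is a hypothetical sketch with a small head predicting one temperature per input. The paper's anchoring-based estimator is more involved than this; the sketch only shows the shape of the idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerSampleTemperature(nn.Module):
    """Illustrative only: predict one temperature per sample from the
    penultimate features instead of using a single global T. The anchoring
    mechanism the paper actually trains with is not reproduced here."""
    def __init__(self, num_features):
        super().__init__()
        self.head = nn.Linear(num_features, 1)

    def forward(self, features, logits):
        t = F.softplus(self.head(features)) + 1e-3   # positive per-sample T
        return logits / t                            # per-sample calibrated logits

# The max softmax probability of the per-sample-scaled logits can then serve
# as the detection score, with no auxiliary outlier data required.
```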
Detecting out-of-distribution (OOD) inputs during the inference stage is crucial for deploying neural networks in the real world. Previous methods commonly relied on the output of a network derived from the highly activated feature map. In this study, we first reveal that the norm of a feature map obtained from a block other than the last block can be a better indicator for OOD detection. Motivated by this, we propose a simple framework consisting of FeatureNorm, a norm of the feature map, and NormRatio, a ratio of FeatureNorm for ID and OOD that measures the OOD detection capability of each block. In particular, to select the block that provides the largest difference between the FeatureNorm of ID and the FeatureNorm of OOD, we create jigsaw-puzzle images as pseudo-OOD from ID training samples, calculate NormRatio, and select the block with the largest value. After the suitable block is selected, OOD detection with FeatureNorm outperforms other OOD detection methods, reducing FPR95 by up to 52.77% on the CIFAR10 benchmark and by up to 48.53% on the ImageNet benchmark. We demonstrate that our framework generalizes to various architectures, and we highlight the importance of block selection, which can improve previous OOD detection methods as well.
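A sketch of the two quantities the abstract names, under the assumption that FeatureNorm is a channel-averaged L2 norm of the feature map (the exact norm may differ):

```python
import torch

def feature_norm_score(feature_map):
    """FeatureNorm as described in the abstract: a norm of the feature map
    taken from a chosen block. Here: channel-wise L2 norm averaged over
    channels (one plausible reading, not the paper's exact definition).
    feature_map: (batch, channels, H, W); higher scores suggest ID."""
    return feature_map.flatten(2).norm(dim=2).mean(dim=1)

def norm_ratio(id_features, pseudo_ood_features):
    """NormRatio used for block selection: ratio of the mean FeatureNorm on
    ID samples to that on jigsaw-puzzle pseudo-OOD samples built from the
    same training data. The block maximizing this ratio is selected."""
    return feature_norm_score(id_features).mean() / \
           feature_norm_score(pseudo_ood_features).mean()
```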
Commonly used AI networks are very self-confident in their predictions, even when the evidence for a certain decision is dubious. The investigation of a deep learning model output is pivotal for understanding its decision processes and assessing its capabilities and limitations. By analyzing the distributions of raw network output vectors, it can be observed that each class has its own decision boundary and, thus, the same raw output value has different support for different classes. Inspired by this fact, we have developed a new method for out-of-distribution detection. The method offers an explanatory step beyond simple thresholding of the softmax output towards understanding and interpretation of the model learning process and its output. Instead of assigning the class label of the highest logit to each new sample presented to the network, it takes the distributions over all classes into consideration. A probability score interpreter (PSI) is created based on the joint logit values in relation to their respective correct vs wrong class distributions. The PSI suggests whether the sample is likely to belong to a specific class, whether the network is unsure, or whether the sample is likely an outlier or unknown type for the network. The simple PSI has the benefit of being applicable on already trained networks. The distributions for correct vs wrong class for each output node are established by simply running the training examples through the trained network. We demonstrate our OOD detection method on a challenging transmission electron microscopy virus image dataset. We simulate a real-world application in which images of virus types unknown to a trained virus classifier, yet acquired with the same procedures and instruments, constitute the OOD samples.
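A rough sketch of how such a probability score interpreter could be assembled, assuming Gaussian summaries of the correct-vs-wrong logit distributions (the paper works with the empirical distributions themselves; names and statistics here are illustrative):

```python
import numpy as np

def fit_psi(train_logits, train_labels, num_classes):
    """Build per-output-node statistics of logit values when the node's class
    is the true class ('correct') vs. when it is not ('wrong'), obtained by
    simply running the training examples through the trained network.
    Gaussian summaries are an assumption for brevity."""
    stats = []
    for c in range(num_classes):
        correct = train_logits[train_labels == c, c]
        wrong = train_logits[train_labels != c, c]
        stats.append(((correct.mean(), correct.std()),
                      (wrong.mean(), wrong.std())))
    return stats

def psi_score(logits, stats):
    """For each class, how many standard deviations the sample's logit sits
    from that class's 'correct' distribution. Large distances for every
    class hint at an outlier; close ties between classes hint that the
    network is unsure."""
    return np.array([abs(logits[c] - m) / (s + 1e-8)
                     for c, ((m, s), _) in enumerate(stats)])
```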
Despite their apparent ability to discriminate in-distribution samples, deep neural networks perform poorly at detecting out-of-distribution data. To address this deficiency, state-of-the-art solutions train deep networks on an auxiliary dataset of outliers. Various training criteria for these auxiliary outliers have been proposed based on heuristic intuitions. However, we find that these intuitively designed outlier training criteria can harm in-distribution learning and ultimately lead to inferior performance. To this end, we identify three causes of in-distribution incompatibility: contradictory gradients, false likelihood, and distribution shift. Based on this new understanding, we propose a new out-of-distribution detection method by adapting both the top design of the deep model and the loss function. Our method achieves in-distribution compatibility by reducing the interference with the probabilistic characteristics of in-distribution features. On several benchmarks, our method not only achieves state-of-the-art out-of-distribution detection performance but also improves in-distribution accuracy.
We introduce powerful ideas from hyperdimensional computing into the challenging field of out-of-distribution (OOD) detection. In contrast to most existing work, which performs OOD detection based on a single layer of a neural network, we use similarity-preserving semi-orthogonal projection matrices to project the feature maps from multiple layers into a common vector space. By repeatedly applying the bundling operation $\oplus$, we create expressive class-specific descriptor vectors for all in-distribution classes. At test time, a simple and efficient cosine similarity computation between descriptor vectors consistently identifies OOD samples with better performance than the current state of the art. We show that the hyperdimensional fusion of multiple network layers is critical to achieving the best overall performance.
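A condensed sketch of the pipeline under our reading of the abstract: project per-layer features with semi-orthogonal matrices, bundle them into class descriptors, and score by cosine similarity. Dimension choices and details are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def make_projection(in_dim, hd_dim=4096):
    """Similarity-preserving semi-orthogonal projection into a common space."""
    w = torch.empty(hd_dim, in_dim)
    torch.nn.init.orthogonal_(w)
    return w

def class_descriptors(layer_feats, labels, projections, num_classes):
    """Bundle (elementwise-add, the ⊕ operation) projected features from
    several layers into one descriptor vector per in-distribution class.
    layer_feats: list of (N, d_l) tensors, one per chosen layer."""
    hd = torch.stack([f @ p.T for f, p in zip(layer_feats, projections)]).sum(0)
    descs = torch.zeros(num_classes, hd.shape[1])
    descs.index_add_(0, labels, hd)               # per-class bundling
    return F.normalize(descs, dim=1)

def ood_score(layer_feats, projections, descs):
    """Test time: cosine similarity to the nearest class descriptor;
    low similarity suggests an OOD sample."""
    hd = torch.stack([f @ p.T for f, p in zip(layer_feats, projections)]).sum(0)
    return (F.normalize(hd, dim=1) @ descs.T).max(dim=1).values
```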
It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
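The OE objective for classifiers is concrete enough to sketch: cross-entropy on in-distribution data plus a term pulling predictions on auxiliary outliers toward the uniform distribution. Treat the weight below as a tunable hyperparameter (0.5 is a value reported for the paper's vision experiments).

```python
import torch
import torch.nn.functional as F

def outlier_exposure_loss(id_logits, id_labels, oe_logits, lam=0.5):
    """Outlier Exposure objective for classification: the usual cross-entropy
    on in-distribution data plus cross-entropy to the uniform distribution on
    auxiliary outlier data, so the model learns low confidence on outliers."""
    ce = F.cross_entropy(id_logits, id_labels)
    log_probs = F.log_softmax(oe_logits, dim=1)
    uniform_ce = -log_probs.mean(dim=1).mean()   # CE against uniform targets
    return ce + lam * uniform_ce
```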
Detecting out-of-distribution inputs is critical for the safe deployment of machine learning models in the real world. However, neural networks are known to suffer from the overconfidence issue, in which they produce abnormally high confidence for both in- and out-of-distribution inputs. In this work, we show that this issue can be mitigated through Logit Normalization (LogitNorm), a simple fix to the cross-entropy loss that enforces a constant vector norm on the logits during training. Our method is motivated by the analysis that the norm of the logits keeps increasing during training, leading to overconfident outputs. The key idea behind LogitNorm is thus to decouple the influence of the output norm during network optimization. Trained with LogitNorm, neural networks produce highly distinguishable confidence scores between in- and out-of-distribution data. Extensive experiments demonstrate the superiority of LogitNorm, reducing the average FPR95 by up to 42.30% on common benchmarks.
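The LogitNorm loss itself is a one-line change to cross-entropy; a minimal sketch (τ = 0.04 is a value reported for CIFAR-scale models and should be treated as a tunable hyperparameter):

```python
import torch
import torch.nn.functional as F

def logitnorm_loss(logits, labels, tau=0.04):
    """LogitNorm: normalize the logit vector before the cross-entropy so its
    norm stays constant, decoupling the norm's influence from optimization."""
    norms = logits.norm(p=2, dim=1, keepdim=True) + 1e-7
    return F.cross_entropy(logits / (norms * tau), labels)
```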
Modern deep neural network models are known to misclassify out-of-distribution (OOD) test data into one of the in-distribution (ID) training classes with high confidence. This can have catastrophic consequences for safety-critical applications. A popular mitigation strategy is to train a separate classifier that can detect such OOD samples at test time. In most practical settings, OOD examples are not known at training time, so a key question is: how can we augment the ID data with synthetic OOD samples to train such an OOD detector? In this paper, we propose a novel compounded-corruption technique for OOD data augmentation, termed CnC. One of the major advantages of CnC is that it does not require any hold-out data apart from the training set. Furthermore, unlike current state-of-the-art (SOTA) techniques, CnC does not require backpropagation or ensembling at test time, making our method faster at inference. Our extensive comparison with 20 methods from major conferences over the past four years shows that a model trained with CnC-based data augmentation outperforms the SOTA in both OOD detection accuracy and inference time. We include a detailed post-hoc analysis investigating the reasons for our method's success and identify the higher relative entropy and diversity of CnC samples as probable causes. We also provide theoretical insights via a piecewise decomposition analysis on a two-dimensional dataset to reveal (visually and quantitatively) that our approach leads to tighter boundaries around ID classes, enabling better detection of OOD samples. Source code: https://github.com/cnc-ood
We consider the problem of detecting out-of-distribution (OOD) input data when using deep neural networks, and we propose a simple yet effective way to improve the robustness of several popular OOD detection methods against label shift. Our work is motivated by the observation that most existing OOD detection algorithms consider all training/test data as a whole, regardless of which class an input activates (i.e., they are class-agnostic). Through extensive experiments, we find that this practice leads to detectors whose performance is sensitive and brittle under label shift. To address this issue, we propose a class-wise thresholding scheme that can be applied to most existing OOD detection algorithms and that maintains similar OOD detection performance even in the presence of label shift in the test distribution.
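The scheme is easy to sketch: estimate one threshold per predicted class from in-distribution validation data, then apply the threshold of whichever class the input activates. The sketch assumes higher scores indicate in-distribution.

```python
import numpy as np

def classwise_thresholds(scores, predicted_classes, num_classes, tpr=0.95):
    """One detection threshold per predicted class, estimated from ID
    validation data, instead of a single global threshold over all data."""
    return np.array([np.quantile(scores[predicted_classes == c], 1 - tpr)
                     for c in range(num_classes)])

def is_in_distribution(scores, predicted_classes, thresholds):
    """Apply the threshold of the class each input activates; this keeps
    detection behavior stable when the test-time label distribution shifts."""
    return scores >= thresholds[predicted_classes]
```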
An important problem in trustworthy machine learning is recognizing out-of-distribution (OOD) inputs, i.e., inputs unrelated to the in-distribution task. Many out-of-distribution detection methods have been proposed in recent years. The goal of this paper is to identify the common objectives as well as the implicit scoring functions of different OOD detection methods. We focus on methods that use surrogate OOD data during training in order to learn an OOD detection score that generalizes to new, unseen out-distributions at test time. We show that binary discrimination between the in-distribution and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem. When trained in a shared fashion with a standard classifier, this binary discriminator reaches an OOD detection performance similar to that of Outlier Exposure. Moreover, we show that the confidence loss used by Outlier Exposure has an implicit scoring function that differs in a non-trivial fashion from the theoretically optimal scoring function for the case where training and test out-distributions are the same, and which is in turn similar to the one used when training an energy-based OOD detector or when adding a background class. In practice, when trained in exactly the same way, all of these methods perform similarly.
Learning with noisy labels is a practically challenging problem in weakly supervised learning. In the existing literature, open-set noise has always been considered poisonous for generalization, similar to closed-set noise. In this paper, we empirically show that open-set noisy labels can be non-toxic and can even benefit robustness against inherent noisy labels. Inspired by this observation, we propose a simple yet effective regularization that introduces open-set samples with dynamic noisy labels (ODNL) into training. With ODNL, the extra capacity of the neural network can be largely consumed in a way that does not interfere with learning patterns from clean data. Through the lens of SGD noise, we show that the noise induced by our method is random-direction and unbiased, which may help the model converge to a flat minimum with superior stability and enforce conservative predictions on out-of-distribution instances. Extensive experimental results on benchmark datasets with various types of label noise demonstrate that the proposed method not only enhances the performance of many existing robust algorithms but also achieves significant improvement on out-of-distribution detection tasks, even in the label noise setting.
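A sketch of the regularizer as described, with the weight and batch handling as assumptions: auxiliary open-set images receive labels re-drawn uniformly at random on every step (hence "dynamic").

```python
import torch
import torch.nn.functional as F

def odnl_loss(model, clean_x, clean_y, open_set_x, num_classes, lam=1.0):
    """Regularization with open-set samples and dynamic noisy labels: each
    batch of auxiliary open-set images gets fresh uniformly random labels,
    consuming the network's extra capacity without disturbing the patterns
    learned from clean data."""
    loss = F.cross_entropy(model(clean_x), clean_y)
    random_y = torch.randint(num_classes, (open_set_x.size(0),),
                             device=open_set_x.device)
    return loss + lam * F.cross_entropy(model(open_set_x), random_y)
```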
We show that the effectiveness of the well-known Mixup [Zhang et al., 2018] can be further improved if, instead of using it as the sole learning objective, it is utilized as an additional regularizer alongside the standard cross-entropy loss. This simple change not only provides improved accuracy, but in most cases also significantly improves the quality of Mixup's predictive uncertainty estimates under various forms of covariate shift and in out-of-distribution detection experiments. In fact, we observe that Mixup can yield substantially degraded performance at detecting out-of-distribution samples because, as we show empirically, it tends to learn models that exhibit high entropy throughout, making it hard to differentiate in-distribution samples from out-of-distribution ones. To show the efficacy of our approach (RegMixup), we provide thorough analyses and experiments on vision datasets (ImageNet and CIFAR-10/100) and compare it with a suite of recent approaches for reliable uncertainty estimation.
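The change is small enough to sketch directly: keep the clean-batch cross-entropy and add a mixup cross-entropy term as a regularizer. The large Beta parameter reflects the paper's preference for heavy interpolation; both hyperparameter values below are illustrative.

```python
import torch
import torch.nn.functional as F

def regmixup_loss(model, x, y, alpha=10.0, eta=1.0):
    """RegMixup: standard cross-entropy on the clean batch plus a mixup
    cross-entropy used as an additional regularizer (not the sole objective)."""
    clean_loss = F.cross_entropy(model(x), y)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    mixed_x = lam * x + (1 - lam) * x[perm]
    logits = model(mixed_x)
    mixup_loss = lam * F.cross_entropy(logits, y) + \
                 (1 - lam) * F.cross_entropy(logits, y[perm])
    return clean_loss + eta * mixup_loss
```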
Open-set recognition enables deep neural networks (DNNs) to identify samples of unknown classes while maintaining high classification accuracy on samples of known classes. Existing methods based on auto-encoders (AEs) and prototype learning show great potential in handling this challenging task. In this study, we propose a novel method, called Class-Specific Semantic Reconstruction (CSSR), that integrates the power of AEs and prototype learning. Specifically, CSSR replaces prototype points with manifolds represented by class-specific AEs. Unlike conventional prototype-based methods, CSSR models each known class on an individual AE manifold and measures class belongingness through the AE's reconstruction error. The class-specific AEs are plugged into the top of the DNN backbone and reconstruct the semantic representations learned by the DNN instead of the raw images. Through end-to-end learning, the DNN and the AEs boost each other in learning both discriminative and representative information. Experimental results on multiple datasets show that the proposed method achieves outstanding performance in both closed-set and open-set recognition and is sufficiently simple and flexible to be incorporated into existing frameworks.
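A compact sketch of the core mechanism under illustrative sizes: one small AE per known class reconstructs the backbone's semantic features, and negative reconstruction errors act as class scores.

```python
import torch
import torch.nn as nn

class ClassSpecificAE(nn.Module):
    """One small auto-encoder per known class, plugged on top of the backbone
    and reconstructing its semantic representation rather than raw pixels.
    Class belongingness is measured by reconstruction error (lower = closer
    to that class's manifold). Layer sizes are illustrative assumptions."""
    def __init__(self, num_classes, feat_dim, bottleneck=64):
        super().__init__()
        self.aes = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, bottleneck), nn.ReLU(),
                          nn.Linear(bottleneck, feat_dim))
            for _ in range(num_classes)])

    def forward(self, features):
        # Per-class reconstruction errors; negated, they act like logits.
        errors = torch.stack(
            [(ae(features) - features).pow(2).mean(dim=1) for ae in self.aes],
            dim=1)
        return -errors  # argmax picks the class; a low max score flags unknowns
```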
Neural networks are popular in many fields, but they suffer from the problem of giving high-confidence responses to examples that are far from the training data. This allows neural networks to make mistakes while being very confident in their predictions, limiting their reliability in safety-critical applications such as autonomous driving and space exploration. We propose a generalization of the neuron that contains the standard dot-product-based neuron and the RBF neuron as the two extreme cases of a shape parameter. Using ReLU as the activation function, we obtain a novel neuron with compact support, meaning that its output is zero outside a bounded domain. We show how to avoid the difficulties of training a neural network with such neurons by first training a standard neural network and then gradually increasing the shape parameter to the desired value. We also prove that a neural network with the proposed neurons has the universal approximation property, meaning it can approximate any continuous and integrable function with arbitrary accuracy. Through experiments on standard benchmark datasets, we demonstrate the promise of the proposed approach: it attains good prediction accuracy on in-distribution samples while consistently detecting, and assigning low confidence to, out-of-distribution samples.
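The exact parameterization is not given in the abstract, so the following interpolation between a dot-product neuron (alpha = 0) and an RBF-like neuron (alpha = 1) is our own assumption; it does, however, show how ReLU yields compact support at alpha = 1.

```python
import torch
import torch.nn.functional as F

def compact_support_neuron(x, w, alpha, bias=1.0):
    """Illustrative interpolation between a dot-product neuron (alpha=0) and
    an RBF-like neuron (alpha=1); the paper's exact parameterization may
    differ. With ReLU, the alpha=1 case outputs zero whenever
    ||x - w||^2 > bias, i.e., outside a bounded region around w.
    x: (batch, d), w: (units, d)."""
    dot = x @ w.T
    sq_dist = torch.cdist(x, w).pow(2)
    return F.relu((1 - alpha) * dot + alpha * (bias - sq_dist))

# Training schedule from the abstract: start at alpha=0 (standard network),
# then gradually increase alpha toward 1 to obtain compact support.
```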
Detecting out-of-distribution (OOD) data at inference time is crucial for many applications of machine learning. We present XOOD: a novel extreme-value-based OOD detection framework for image classification consisting of two algorithms. The first, XOOD-M, is completely unsupervised, while the second, XOOD-L, is self-supervised. Both algorithms rely on the signals captured by the extreme values of the data in the activation layers of a neural network to distinguish between in-distribution and OOD instances. We show experimentally that both XOOD-M and XOOD-L outperform state-of-the-art OOD detection methods on many benchmark datasets in both efficiency and accuracy, reducing the false positive rate (FPR95) by 50% while improving inference time by an order of magnitude.
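The abstract specifies the signal but not the detectors, so this sketch only collects the extreme-value features; the downstream XOOD-M/XOOD-L detectors are not reproduced here. A simple stand-in would fit, e.g., a Gaussian to these features on ID data and threshold its likelihood.

```python
import torch

def extreme_value_features(activations):
    """Collect the extreme values (per-layer min and max of activations)
    that both XOOD variants rely on to separate ID from OOD instances.
    activations: list of (batch, ...) tensors, one per layer.
    Returns a (batch, 2 * num_layers) feature matrix for a detector."""
    feats = []
    for a in activations:
        flat = a.flatten(1)
        feats += [flat.min(dim=1).values, flat.max(dim=1).values]
    return torch.stack(feats, dim=1)
```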
We present an approach to quantifying both aleatoric and epistemic uncertainty for deep neural networks in image classification, based on generative adversarial networks (GANs). While most works in the literature that use GANs to generate out-of-distribution (OoD) examples only focus on the evaluation of OoD detection, we present a GAN-based approach to learn a classifier that produces proper uncertainties for OoD examples as well as for false positives (FPs). Instead of shielding the entire in-distribution data with GAN-generated OoD examples, which is state-of-the-art, we shield each class separately with out-of-class examples generated by a conditional GAN and complement this with a one-vs-all image classifier. In our experiments, in particular on CIFAR10, CIFAR100 and Tiny ImageNet, we improve over the OoD detection and FP detection performance of state-of-the-art GAN-training-based classifiers. Furthermore, we also find that the generated GAN examples do not significantly affect the calibration error of our classifier and result in a significant gain in model accuracy.