Learning with noisy labels is a practically challenging problem in weakly supervised learning. In the existing literature, open-set noise has always been considered poisonous to generalization, similar to closed-set noise. In this paper, we empirically show that open-set noisy labels can be non-toxic and can even benefit robustness against inherent noisy labels. Inspired by this observation, we propose a simple yet effective regularization that introduces Open-set samples with Dynamic Noisy Labels (ODNL) into training. With ODNL, the extra capacity of the neural network can be largely consumed in a way that does not interfere with learning patterns from clean data. Through the lens of SGD noise, we show that the noise induced by our method is random in direction and unbiased, which may help the model converge to a flat minimum with superior stability and enforce the model to produce conservative predictions on out-of-distribution instances. Extensive experimental results on benchmark datasets with various types of noisy labels demonstrate that the proposed method not only improves the performance of many existing robust algorithms but also achieves significant improvement on out-of-distribution detection tasks, even in the label-noise setting.
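A minimal PyTorch sketch of such a regularizer, under a plain reading of the abstract: open-set images receive labels re-drawn uniformly at random at every iteration (the "dynamic" part), and the resulting cross-entropy term is added to the loss on the labeled data. The function name and the weight `lambda_odnl` are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def odnl_loss(model, x_train, y_train, x_open, num_classes, lambda_odnl=1.0):
    """One step's loss with open-set samples carrying dynamic noisy labels.

    Labels for the open-set batch are re-sampled uniformly on every call,
    so the injected noise has no persistent direction (hypothetical sketch).
    """
    # Standard loss on the (possibly noisy) labeled training data.
    loss_train = F.cross_entropy(model(x_train), y_train)

    # Dynamic noisy labels: drawn afresh at each iteration.
    y_dyn = torch.randint(0, num_classes, (x_open.size(0),), device=x_open.device)
    loss_open = F.cross_entropy(model(x_open), y_dyn)

    return loss_train + lambda_odnl * loss_open
```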
Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance. Recent studies have found that directly training with out-of-distribution data (i.e., open-set samples) in a semi-supervised manner would harm generalization performance. In this work, we theoretically show from a Bayesian perspective that out-of-distribution data can still be leveraged to augment the minority classes. Based on this motivation, we propose a novel method called Open-sampling, which utilizes open-set noisy labels to re-balance the class priors of the training dataset. For each open-set instance, its label is sampled from a pre-defined distribution that is complementary to the distribution of the original class priors. We empirically show that Open-sampling not only re-balances the class priors but also encourages the neural network to learn separable representations. Extensive experiments demonstrate that our proposed method significantly outperforms existing data re-balancing methods and can boost the performance of existing state-of-the-art methods.
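The sampling step can be sketched in a few lines. Since the exact form of the "complementary" distribution is not spelled out above, the sketch assumes one natural choice: each open-set label is drawn with probability proportional to how far its class falls below the largest class, so that open-set instances top up the minority classes.

```python
import numpy as np

def complementary_distribution(class_counts):
    """A distribution over classes complementary to the empirical class prior.

    Assumed form: probability proportional to each class's deficit relative
    to the largest class (zero for the majority class).
    """
    counts = np.asarray(class_counts, dtype=np.float64)
    deficit = counts.max() - counts
    if deficit.sum() == 0:  # already balanced: fall back to uniform
        return np.full(len(counts), 1.0 / len(counts))
    return deficit / deficit.sum()

def sample_open_set_labels(class_counts, num_open, rng=None):
    """Draw labels for open-set instances from the complementary distribution."""
    rng = rng or np.random.default_rng()
    q = complementary_distribution(class_counts)
    return rng.choice(len(class_counts), size=num_open, p=q)

# Example: a long-tailed 3-class prior; minority classes receive most labels.
print(sample_open_set_labels([5000, 500, 50], num_open=10))
```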
In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks. Cross Entropy (CE) loss has been shown to be not robust to noisy labels due to its unboundedness. To alleviate this issue, existing works typically design specialized robust losses with the symmetric condition, which usually lead to the underfitting issue. In this paper, our key idea is to induce a loss bound at the logit level, thus universally enhancing the noise robustness of existing losses. Specifically, we propose logit clipping (LogitClip), which clamps the norm of the logit vector to ensure that it is upper bounded by a constant. In this manner, CE loss equipped with our LogitClip method is effectively bounded, mitigating the overfitting to examples with noisy labels. Moreover, we present theoretical analyses to certify the noise-tolerant ability of LogitClip. Extensive experiments show that LogitClip not only significantly improves the noise robustness of CE loss, but also broadly enhances the generalization performance of popular robust losses.
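A sketch of the core operation in PyTorch: rescale any logit vector whose norm exceeds a threshold `tau` so that the norm is capped, then apply ordinary CE. The default `tau` and the use of the L2 norm here are illustrative assumptions, not necessarily the paper's settings.

```python
import torch
import torch.nn.functional as F

def logit_clip(logits, tau=1.0, p=2):
    """Clamp the p-norm of each logit vector to at most tau (LogitClip-style).

    Rescaling (rather than element-wise clipping) keeps the direction of the
    logits intact; only their magnitude, and hence the loss, is bounded.
    """
    norms = logits.norm(p=p, dim=-1, keepdim=True)
    scale = torch.clamp(tau / (norms + 1e-7), max=1.0)
    return logits * scale

def clipped_cross_entropy(logits, targets, tau=1.0):
    # CE on norm-bounded logits is itself bounded, which mitigates
    # overfitting to mislabeled examples.
    return F.cross_entropy(logit_clip(logits, tau), targets)
```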
Detecting out-of-distribution inputs is critical for the safe deployment of machine learning models in the real world. However, neural networks are known to suffer from the overconfidence issue, where they produce abnormally high confidence for both in- and out-of-distribution inputs. In this work, we show that this issue can be mitigated through Logit Normalization (LogitNorm), which enforces a constant vector norm on the logits during training. Our method is motivated by the analysis that the norm of the logits keeps increasing during training, leading to overconfident outputs. The key idea behind LogitNorm is thus to decouple the influence of the output norm during network optimization. Trained with LogitNorm, neural networks produce highly distinguishable confidence scores between in- and out-of-distribution data. Extensive experiments demonstrate the superiority of LogitNorm, reducing the average FPR95 by up to 42.30% on common benchmarks.
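Read literally, LogitNorm amounts to a one-line change to the cross-entropy loss; here is a sketch, with the temperature value an assumed hyperparameter.

```python
import torch.nn.functional as F

def logitnorm_loss(logits, targets, tau=0.04):
    """Cross entropy on L2-normalized logits (LogitNorm).

    Dividing each logit vector by its norm removes the incentive to keep
    growing logit magnitudes during training; tau is a temperature
    (the value here is an assumption).
    """
    norms = logits.norm(p=2, dim=-1, keepdim=True) + 1e-7
    return F.cross_entropy(logits / (norms * tau), targets)
```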
Several recent studies have shown that using extra in-distribution data can lead to a high level of adversarial robustness. However, there is no guarantee that sufficient extra data can always be obtained for a chosen dataset. In this paper, we propose a biased multi-domain adversarial training (BiaMAT) method that induces training-data amplification on a primary dataset using publicly available auxiliary datasets, without requiring class-distribution matching between the primary and auxiliary datasets. The proposed method achieves increased adversarial robustness on the primary dataset by leveraging auxiliary datasets via multi-domain learning. Specifically, data amplification on both robust and non-robust features can be accomplished through the application of BiaMAT, as demonstrated by theoretical and empirical analyses. Moreover, we demonstrate that while existing methods are vulnerable to negative transfer due to the distributional discrepancy between auxiliary and primary data, the proposed method enables neural networks to flexibly leverage diverse image datasets for adversarial training by successfully handling the domain discrepancy through a confidence-based selection strategy. The pre-trained models and code are available at: https://github.com/saehyung-lee/biamat.
Deep learning has achieved remarkable success in numerous domains with the help of large amounts of big data. However, the quality of data labels is a concern because high-quality labels are lacking in many real-world scenarios. As noisy labels severely degrade the generalization performance of deep neural networks, learning from noisy labels (robust training) is becoming an important task in modern deep learning applications. In this survey, we first describe the problem of learning with label noise from the perspective of supervised learning. Next, we provide a comprehensive review of 62 state-of-the-art robust training methods, all of which are categorized into five groups according to their methodological differences, followed by a systematic comparison of six properties used to evaluate their superiority. Subsequently, we perform an in-depth analysis of noise-rate estimation and summarize the commonly used evaluation methodology, including public noisy datasets and evaluation metrics. Finally, we present several promising research directions that can serve as a guideline for future studies. All the contents will be available at https://github.com/songhwanjun/awesome-noisy-labels.
Detecting out-of-distribution (OOD) inputs is critical for safely deploying deep learning models in the real world. Existing approaches for detecting OOD examples work well when evaluated on benign in-distribution and OOD samples. However, in this paper we show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs with minimal adversarial perturbations that do not change their semantics. Formally, we extensively study the problem of robust out-of-distribution detection for common detection approaches and show that state-of-the-art OOD detectors can be easily fooled by adding small perturbations to in-distribution and OOD inputs. To counteract these threats, we propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarial inlier and adversarial outlier examples. Our method can be flexibly combined with, and render robust, existing methods. On common benchmark datasets, we show that ALOE substantially improves the robustness of state-of-the-art OOD detection, with a 58.4% AUROC improvement on CIFAR-10 and 46.59% on CIFAR-100.
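A sketch of the training objective as the abstract describes it: expose the model to adversarially perturbed inliers (maximizing the classification loss) and adversarially perturbed outliers (maximizing deviation from the uniform prediction), then minimize the combined loss. The PGD settings and the weight `lam` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ce_to_uniform(logits):
    # Cross entropy between the uniform distribution and the model's softmax.
    return -F.log_softmax(logits, dim=-1).mean()

def pgd_maximize(model, x, loss_fn, eps=8 / 255, step=2 / 255, iters=10):
    """L-inf PGD: find a perturbation of x (within eps) that maximizes loss_fn."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(iters):
        delta.requires_grad_(True)
        loss = loss_fn(model(x + delta))
        (grad,) = torch.autograd.grad(loss, delta)
        delta = (delta + step * grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).detach()

def aloe_loss(model, x_in, y_in, x_out, lam=0.5):
    """Robust training on adversarial inliers and outliers, in the spirit of ALOE."""
    x_in_adv = pgd_maximize(model, x_in, lambda z: F.cross_entropy(z, y_in))
    x_out_adv = pgd_maximize(model, x_out, ce_to_uniform)
    return F.cross_entropy(model(x_in_adv), y_in) + lam * ce_to_uniform(model(x_out_adv))
```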
As label noise, one of the most popular distribution shifts, severely degrades the generalization performance of deep neural networks, robust training with noisy labels is becoming an important task in modern deep learning. In this paper, we propose our framework, coined Adaptive LAbel smoothing on Sub-ClAssifiers (ALASCA), which provides a robust feature extractor with theoretical guarantees and negligible additional computation. First, we derive that label smoothing (LS) incurs an implicit Lipschitz regularization (LR). Furthermore, based on these derivations, we apply adaptive LS (ALS) on sub-classifier architectures for a practical application of adaptive LR on intermediate layers. We conduct extensive experiments with ALASCA, combine it with previous noise-robust methods on several datasets, and show that our framework consistently outperforms the corresponding baselines.
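A rough sketch of the architecture-level idea: auxiliary sub-classifier heads attached to intermediate layers are trained with per-head label smoothing alongside the main head. How ALASCA adapts the smoothing strengths is the paper's contribution and is not reproduced here; `eps_list` and `lam` are placeholder assumptions.

```python
import torch.nn.functional as F

def alasca_style_loss(main_logits, sub_logits_list, targets, eps_list, lam=0.5):
    """Main-head CE plus label-smoothed CE on intermediate sub-classifiers.

    eps_list holds the per-head smoothing strengths (adaptive in ALASCA;
    fixed placeholders here).
    """
    loss = F.cross_entropy(main_logits, targets)
    for logits, eps in zip(sub_logits_list, eps_list):
        loss = loss + lam * F.cross_entropy(logits, targets, label_smoothing=eps)
    return loss
```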
The memorization effect of deep neural networks (DNNs) plays a key role in recent label-noise learning methods. To exploit this effect, model-prediction-based methods have been widely adopted, which aim to use the outputs of the DNN in the early stage of learning to correct noisy labels. However, we observe that the model makes mistakes during label prediction, resulting in unsatisfactory performance. By contrast, the features produced in the early stage of learning exhibit better robustness. Inspired by this observation, in this paper we propose a novel feature-embedding-based method for deep learning with label noise, termed label noise dilution (LEND). To be specific, we first compute a similarity matrix based on the current embedded features to capture the local structure of the training data. Then, the noisy supervision signals carried by mislabeled data are overwhelmed by those of nearby correctly labeled data (i.e., label noise dilution), whose effectiveness is guaranteed by the inherent robustness of feature embeddings. Finally, the training data with diluted labels are further used to train a robust classifier. Empirically, we conduct extensive experiments on both synthetic and real-world noisy datasets, comparing LEND with several representative robust learning methods. The results validate the effectiveness of LEND.
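The dilution step can be sketched directly from the description above: build a similarity matrix over the current feature embeddings, then replace each (possibly noisy) one-hot label with a similarity-weighted average over its neighbors. The neighborhood size `k` and softmax `temperature` are assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def dilute_labels(features, noisy_labels, num_classes, k=10, temperature=0.1):
    """Label-noise dilution via feature similarity, in the spirit of LEND.

    Each example's one-hot label is replaced by a similarity-weighted average
    over its k nearest neighbors in feature space, so an isolated mislabeled
    point is overwhelmed by its correctly labeled neighbors.
    """
    f = F.normalize(features, dim=1)
    sim = f @ f.t()                              # cosine similarity matrix
    topk = sim.topk(k + 1, dim=1)                # +1: each point is its own neighbor
    weights = F.softmax(topk.values / temperature, dim=1)
    one_hot = F.one_hot(noisy_labels, num_classes).float()
    # Weighted average of the neighbors' labels -> soft, "diluted" labels.
    return torch.einsum('ij,ijc->ic', weights, one_hot[topk.indices])
```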
Training accurate deep neural networks (DNNs) in the presence of noisy labels is an important and challenging task. Though a number of approaches have been proposed for learning with noisy labels, many open issues remain. In this paper, we show that DNN learning with Cross Entropy (CE) exhibits overfitting to noisy labels on some classes ("easy" classes), but more surprisingly, it also suffers from significant under learning on some other classes ("hard" classes). Intuitively, CE requires an extra term to facilitate learning of hard classes, and more importantly, this term should be noise tolerant, so as to avoid overfitting to noisy labels. Inspired by the symmetric KL-divergence, we propose the approach of Symmetric cross entropy Learning (SL), boosting CE symmetrically with a noise robust counterpart Reverse Cross Entropy (RCE). Our proposed SL approach simultaneously addresses both the under learning and overfitting problem of CE in the presence of noisy labels. We provide a theoretical analysis of SL and also empirically show, on a range of benchmark and real-world datasets, that SL outperforms state-of-the-art methods. We also show that SL can be easily incorporated into existing methods in order to further enhance their performance.
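A sketch of the SL objective in PyTorch. RCE swaps the roles of prediction and label in the CE; log 0 on the zero entries of the one-hot label is truncated to a constant A, and the two terms are weighted by alpha and beta. The default values below are assumptions; the paper tunes them per dataset.

```python
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, targets, alpha=0.1, beta=1.0, A=-4.0):
    """Symmetric Cross Entropy: alpha * CE + beta * Reverse CE (RCE)."""
    ce = F.cross_entropy(logits, targets)

    pred = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, logits.size(1)).float()
    # log of the one-hot label distribution, with log(0) truncated to A.
    log_label = torch.where(one_hot > 0,
                            torch.zeros_like(one_hot),
                            torch.full_like(one_hot, A))
    rce = -(pred * log_label).sum(dim=1).mean()

    return alpha * ce + beta * rce
```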
The memorization effect of deep neural networks (DNNs) plays a pivotal role in many state-of-the-art label-noise learning methods. To exploit this property, the early stopping trick, which stops optimization at an early stage of training, is usually adopted. Current methods generally decide the early stopping point by considering the DNN as a whole. However, a DNN can be viewed as a composition of a series of layers, and we find that the latter layers of a DNN are much more sensitive to label noise, while their earlier counterparts are quite robust. Therefore, selecting a stopping point for the whole network may cause different DNN layers to affect each other antagonistically, degrading the final performance. In this paper, we propose to separate the DNN into different parts and train them progressively to address this problem. Instead of early stopping, which trains the whole DNN all at once, we first train the earlier DNN layers by optimizing the network for a relatively large number of epochs. During training, we then progressively train the latter DNN layers with smaller numbers of epochs, keeping the preceding layers fixed, to counteract the impact of noisy labels. We term the proposed method progressive early stopping (PES). Despite its simplicity, PES helps obtain more promising and stable results than early stopping. Furthermore, by combining PES with existing noisy-label training approaches, we achieve state-of-the-art performance on image classification benchmarks.
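A minimal sketch of the progressive schedule: the network is split into stages, and stage s is trained for epochs[s] epochs with all earlier stages frozen, so the noise-sensitive later layers receive fewer updates. The epoch schedule, optimizer, and stage boundaries are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pes_train(parts, loader, epochs=(25, 7, 5), lr=0.01, device='cpu'):
    """Progressive early stopping, sketched.

    parts: list of nn.Module stages (earlier -> later) composing the network.
    At stage s, parts[:s] are frozen and parts[s:] are trained for epochs[s]
    epochs, so later layers see progressively fewer noisy-label updates.
    """
    model = torch.nn.Sequential(*parts).to(device)
    for s, n_epochs in enumerate(epochs):
        for i, part in enumerate(parts):
            part.requires_grad_(i >= s)   # freeze everything before stage s
        opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=lr)
        for _ in range(n_epochs):
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                loss = F.cross_entropy(model(x), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
    return model
```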
It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
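For a softmax classifier, the OE objective is compact enough to state directly: standard CE on in-distribution data plus a term pushing predictions on auxiliary outliers toward the uniform distribution (cross entropy to uniform). The weight `lam = 0.5` is the commonly used setting; treat it as an assumption here.

```python
import torch.nn.functional as F

def outlier_exposure_loss(logits_in, targets_in, logits_out, lam=0.5):
    """Outlier Exposure for a softmax classifier.

    In-distribution batches get standard CE; auxiliary outlier batches are
    pushed toward the uniform prediction via cross entropy to uniform.
    """
    ce = F.cross_entropy(logits_in, targets_in)
    ce_uniform = -F.log_softmax(logits_out, dim=-1).mean()
    return ce + lam * ce_uniform
```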
Deep Learning with noisy labels is a practically challenging problem in weakly supervised learning. The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels. In this paper, we start from a different perspective and propose a robust learning paradigm called JoCoR, which aims to reduce the diversity of two networks during training. Specifically, we first use two networks to make predictions on the same mini-batch data and calculate a joint loss with Co-Regularization for each training example. Then we select small-loss examples to update the parameters of both networks simultaneously. Trained by the joint loss, these two networks become more and more similar due to the effect of Co-Regularization. Extensive experimental results on corrupted data from benchmark datasets including MNIST, CIFAR-10, CIFAR-100 and Clothing1M demonstrate that JoCoR is superior to many state-of-the-art approaches for learning with noisy labels.
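A sketch of the per-example joint loss and small-loss selection. The co-regularization term below is a symmetric KL between the two networks' predictions; `lam` and the kept fraction are assumed hyperparameters (the paper ties the kept fraction to the noise rate).

```python
import torch
import torch.nn.functional as F

def jocor_loss(logits1, logits2, targets, lam=0.9, keep_ratio=0.7):
    """Joint loss with Co-Regularization plus small-loss selection (JoCoR-style)."""
    ce = (F.cross_entropy(logits1, targets, reduction='none')
          + F.cross_entropy(logits2, targets, reduction='none'))

    log_p1 = F.log_softmax(logits1, dim=1)
    log_p2 = F.log_softmax(logits2, dim=1)
    # Symmetric KL between the two networks' predictions, per example.
    co_reg = (F.kl_div(log_p1, log_p2.exp(), reduction='none').sum(dim=1)
              + F.kl_div(log_p2, log_p1.exp(), reduction='none').sum(dim=1))

    per_example = (1 - lam) * ce + lam * co_reg
    # Keep the smallest-loss examples; both networks update on the same picks.
    n_keep = max(1, int(keep_ratio * targets.size(0)))
    selected, _ = per_example.topk(n_keep, largest=False)
    return selected.mean()
```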
We propose self-adaptive training, a unified training algorithm that dynamically calibrates and enhances the training process using model predictions, without incurring extra computational cost, to advance both supervised and self-supervised learning of deep neural networks. We analyze the training dynamics of deep networks on training data corrupted by, e.g., random noise and adversarial examples. Our analysis shows that model predictions are able to magnify useful underlying information in the data, and that this phenomenon occurs broadly even in the absence of any label information, highlighting that model predictions can substantially benefit the training process: self-adaptive training improves the generalization of deep networks under noise and enhances self-supervised representation learning. The analysis also sheds light on understanding deep learning, e.g., providing a potential explanation of the recently discovered double-descent phenomenon in empirical risk minimization and of the collapsing issue of state-of-the-art self-supervised learning algorithms. Experiments on the CIFAR, STL, and ImageNet datasets verify the effectiveness of our method in three applications: classification with label noise, selective classification, and linear evaluation. To facilitate future research, the code has been made publicly available at https://github.com/LayneH/self-adaptive-training.
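For the supervised case, the calibration mechanism can be sketched as an exponential moving average of per-example soft targets, pulled toward the model's own predictions; training then minimizes a soft cross entropy against these targets. The momentum `alpha` is an assumed value, and the paper's warm-up period and sample re-weighting are omitted for brevity.

```python
import torch
import torch.nn.functional as F

class SelfAdaptiveTargets:
    """Per-example soft targets calibrated by model predictions (sketch).

    Assumes the stored targets and the logits live on the same device.
    """
    def __init__(self, labels, num_classes, alpha=0.9):
        self.targets = F.one_hot(labels, num_classes).float()
        self.alpha = alpha

    def loss(self, logits, idx):
        # Pull stored targets toward the model's current (detached) predictions.
        probs = F.softmax(logits.detach(), dim=1)
        self.targets[idx] = self.alpha * self.targets[idx] + (1 - self.alpha) * probs
        # Soft cross entropy against the calibrated targets.
        return -(self.targets[idx] * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```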
Neural networks are known to produce over-confident predictions on input images, even when these images are out-of-distribution (OOD) samples. This limits the application of neural network models in real-world scenarios where OOD samples exist. Many existing approaches identify OOD instances by exploiting various cues, such as finding irregular patterns in the feature space, the logit space, the gradient space, or the raw image space. By contrast, this paper proposes a simple test-time linear training (ETLT) method for OOD detection. Empirically, we find that the probability of an input image being out-of-distribution is surprisingly linearly correlated with the features extracted by the neural network. To be specific, many state-of-the-art OOD algorithms, although designed to measure reliability in different ways, actually produce OOD scores that are mostly linearly correlated with image features. Thus, by simply learning a linear regression model trained on paired image features and inferred OOD scores at test time, we can make more precise OOD predictions for test instances. We further propose an online variant of the method, which achieves promising performance and is more practical for real-world applications. Notably, we improve FPR95 from 51.37% to 12.30% on the CIFAR-10 dataset with maximum softmax probability as the base OOD detector. Extensive experiments on several benchmark datasets demonstrate the efficacy of ETLT for the OOD detection task.
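The test-time step reduces to fitting a linear model from image features to a base OOD score and using its output as the refined score. A sketch with scikit-learn, assuming numpy inputs; the online variant would update the fit incrementally instead.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def etlt_scores(features, base_scores):
    """Refine OOD scores by test-time linear training (sketch).

    features:    (n, d) array of image features for the test set.
    base_scores: (n,) array of base OOD scores, e.g. negative max-softmax
                 probability, inferred on the same test instances.
    """
    reg = LinearRegression().fit(features, base_scores)
    return reg.predict(features)  # refined, feature-calibrated OOD scores
```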
Novelty detection, i.e., identifying whether a given sample is drawn from outside the training distribution, is essential for reliable machine learning. To this end, there have been many attempts at learning a representation well-suited for novelty detection and designing a score based on such representation. In this paper, we propose a simple, yet effective method named contrasting shifted instances (CSI), inspired by the recent success on contrastive learning of visual representations. Specifically, in addition to contrasting a given sample with other instances as in conventional contrastive learning methods, our training scheme contrasts the sample with distributionally-shifted augmentations of itself. Based on this, we propose a new detection score that is specific to the proposed training scheme. Our experiments demonstrate the superiority of our method under various novelty detection scenarios, including unlabeled one-class, unlabeled multi-class and labeled multi-class settings, with various image benchmark datasets. Code and pre-trained models are available at https://github.com/alinlab/CSI.
Convolutional Neural Networks (CNNs) have demonstrated superiority in learning patterns, but are sensitive to label noises and may overfit noisy labels during training. The early stopping strategy averts updating CNNs during the early training phase and is widely employed in the presence of noisy labels. Motivated by biological findings that the amplitude spectrum (AS) and phase spectrum (PS) in the frequency domain play different roles in the animal's vision system, we observe that PS, which captures more semantic information, can increase the robustness of DNNs to label noise, more so than AS can. We thus propose early stops at different times for AS and PS by disentangling the features of some layer(s) into AS and PS using Discrete Fourier Transform (DFT) during training. Our proposed Phase-AmplituDe DisentangLed Early Stopping (PADDLES) method is shown to be effective on both synthetic and real-world label-noise datasets. PADDLES outperforms other early stopping methods and obtains state-of-the-art performance.
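The disentangling step itself is a short use of the DFT; below is a sketch of splitting a feature map into amplitude and phase spectra and recombining them. How PADDLES schedules separate stopping points for the two spectra is the paper's contribution and is not shown here.

```python
import torch

def amplitude_phase(features):
    """Split a (..., H, W) feature map into amplitude and phase spectra."""
    spec = torch.fft.fft2(features)       # complex spectrum over the last two dims
    return spec.abs(), spec.angle()

def recombine(amplitude, phase):
    """Rebuild features from (possibly separately early-stopped) AS and PS."""
    spec = torch.polar(amplitude, phase)  # amplitude * exp(i * phase)
    return torch.fft.ifft2(spec).real
```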
Self-supervision provides effective representations for downstream tasks without requiring labels. However, existing approaches lag behind fully supervised training and are often not thought beneficial beyond obviating or reducing the need for annotations. We find that self-supervision can benefit robustness in a variety of ways, including robustness to adversarial examples, label corruption, and common input corruptions. Additionally, self-supervision greatly benefits out-of-distribution detection on difficult, near-distribution outliers, so much so that it exceeds the performance of fully supervised methods. These results demonstrate the promise of self-supervision for improving robustness and uncertainty estimation and establish these tasks as new axes of evaluation for future self-supervised learning research.
Deep neural networks have attained remarkable performance when applied to data that comes from the same distribution as that of the training set, but can significantly degrade otherwise. Therefore, detecting whether an example is out-of-distribution (OoD) is crucial to enable a system that can reject such samples or alert users. Recent works have made significant progress on OoD benchmarks consisting of small image datasets. However, many recent methods based on neural networks rely on training or tuning with both in-distribution and out-of-distribution data. The latter is generally hard to define a priori, and its selection can easily bias the learning. We base our work on a popular method ODIN [21], proposing two strategies for freeing it from the needs of tuning with OoD data, while improving its OoD detection performance. We specifically propose to decompose confidence scoring as well as a modified input pre-processing method. We show that both of these significantly help in detection performance. Our further analysis on a larger scale image dataset shows that the two types of distribution shifts, specifically semantic shift and non-semantic shift, present a significant difference in the difficulty of the problem, providing an analysis of when ODIN-like strategies do or do not work.
Modern deep neural network models are known to erroneously classify out-of-distribution (OOD) test data into one of the in-distribution (ID) training classes with high confidence. This can have catastrophic consequences for safety-critical applications. A popular mitigation strategy is to train a separate classifier that can detect such OOD samples at test time. In most practical settings, OOD examples are not known at training time, so a key question is: how do we augment the ID data with synthetic OOD samples for training such an OOD detector? In this paper, we propose a novel compounded corruption technique for OOD data augmentation, termed CnC. One of the major advantages of CnC is that it does not require any hold-out data apart from the training set. Furthermore, unlike current state-of-the-art (SOTA) techniques, CnC does not require backpropagation or ensembling at test time, making our method much faster at inference. Our extensive comparison with 20 methods from major conferences over the past 4 years shows that a model trained with CnC-based data augmentation significantly outperforms SOTA in terms of both OOD detection accuracy and inference time. We include a detailed post-hoc analysis investigating the reasons for our method's success, identifying the higher relative entropy and diversity of CnC samples as probable causes. We also provide theoretical insights via a piece-wise decomposition analysis on a two-dimensional dataset to reveal (visually and quantitatively) that our approach leads to a tighter boundary around the ID classes, enabling better detection of OOD samples. Source code link: https://github.com/cnc-ood