Deep neural networks have shown impressive performance in supervised learning, owing to their ability to fit the provided training data. However, their performance depends heavily on the quality of that training data, and it often degrades in the presence of noise. We propose a principled approach for tackling label noise, with the aim of assigning importance weights to individual instances and class labels. Our method works by formulating a class of constrained optimization problems that yield simple closed-form updates for these importance weights. The proposed optimization problems are solved per mini-batch, which avoids the need to store and update weights over the full dataset. Our optimization framework also provides a theoretical perspective on existing label-smoothing heuristics for addressing label noise (such as label bootstrapping). We evaluate our method on several benchmark datasets and observe considerable performance gains in the presence of label noise.
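Below is a minimal PyTorch sketch of the kind of per-mini-batch instance weighting described above. The abstract does not state the exact closed form, so this assumes a KL-regularized formulation whose solution is a softmax over scaled negative losses; the `temperature` parameter and function name are illustrative, not the paper's.

```python
# Per-mini-batch instance reweighting: high-loss (likely mislabeled)
# examples receive small closed-form weights. A sketch under the
# assumptions stated above, not the paper's exact update.
import torch
import torch.nn.functional as F

def reweighted_loss(logits, targets, temperature=1.0):
    losses = F.cross_entropy(logits, targets, reduction="none")
    # Closed-form instance weights; detached so they act as constants.
    w = F.softmax(-losses.detach() / temperature, dim=0) * len(losses)
    return (w * losses).mean()
```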
However, their performance degrades in the presence of noisy labels at training time. Inspired by learning with expert advice, where multiplicative weight (MW) updates were recently shown to be robust to moderate data corruptions in expert advice, we propose to use MW for reweighting examples during neural network optimization. We theoretically establish the convergence of our method when used together with gradient descent, and demonstrate its benefit under label noise in a 1D case. We then validate our findings empirically by showing that MW improves neural network accuracy in the presence of label noise on CIFAR-10, CIFAR-100, and Clothing1M. We also show the impact of our approach on adversarial robustness.
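A hedged sketch of multiplicative-weights reweighting during training: per-example weights are shrunk multiplicatively in proportion to the loss and kept on the simplex. The step size `eta` and the normalization are my choices, not necessarily the paper's.

```python
# Multiplicative-weights (MW) example reweighting, sketch only.
import torch
import torch.nn.functional as F

def mw_update(weights, losses, eta=0.1):
    """One MW step: shrink the weights of high-loss examples."""
    weights = weights * torch.exp(-eta * losses.detach())
    return weights / weights.sum()  # keep the weights on the simplex

# Usage inside a training step (w initialized uniform over the batch):
# losses = F.cross_entropy(model(x), y, reduction="none")
# w = mw_update(w, losses)
# loss = (w * losses).sum()
```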
Since label noise, one of the most common distribution shifts, severely degrades the generalization performance of deep neural networks, robust training with noisy labels is becoming an important task in modern deep learning. In this paper, we propose our framework, coined Adaptive LAbel smoothing on Sub-ClAssifiers (ALASCA), which provides a robust feature extractor with theoretical guarantees and negligible additional computation. First, we derive that label smoothing (LS) induces an implicit Lipschitz regularization (LR). Building on this derivation, we apply adaptive LS (ALS) on sub-classifier architectures as a practical realization of adaptive LR on intermediate layers. We conduct extensive experiments with ALASCA, combine it with previous noise-robust methods on several datasets, and show that our framework consistently outperforms the corresponding baselines.
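A rough sketch of the sub-classifier idea under stated assumptions: an auxiliary head attached to an intermediate feature map, trained with label-smoothed cross-entropy. The adaptive schedule for the smoothing rate is the paper's contribution and is only stubbed with a fixed value here.

```python
# Auxiliary (sub-)classifier on intermediate features with label
# smoothing; a sketch, with the adaptive smoothing rate left fixed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubClassifierHead(nn.Module):
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(in_channels, num_classes)

    def forward(self, feat):  # feat: (B, C, H, W) intermediate features
        return self.fc(self.pool(feat).flatten(1))

def sub_classifier_loss(aux_logits, targets, smoothing=0.2):
    # PyTorch's built-in label smoothing implements the soft-target CE.
    return F.cross_entropy(aux_logits, targets, label_smoothing=smoothing)
```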
Despite being robust to small amounts of label noise, convolutional neural networks trained with stochastic gradient methods have been shown to easily fit random labels. When there is a mixture of correct and mislabelled targets, networks tend to fit the former before the latter. This suggests using a suitable two-component mixture model as an unsupervised generative model of sample loss values during training to allow online estimation of the probability that a sample is mislabelled. Specifically, we propose a beta mixture to estimate this probability and correct the loss by relying on the network prediction (the so-called bootstrapping loss). We further adapt mixup augmentation to drive our approach a step further. Experiments on CIFAR-10/100 and TinyImageNet demonstrate a robustness to label noise that substantially outperforms recent state-of-the-art. Source code is available at https://git.io/fjsvE.
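A sketch of the dynamic bootstrapping loss, assuming the per-sample mislabel probability `w` has already been obtained as the posterior of the fitted two-component beta mixture (the EM fit itself is omitted here).

```python
# Bootstrapping loss: interpolate between trusting the given label and
# trusting the network's own prediction, per sample.
import torch
import torch.nn.functional as F

def bootstrap_loss(logits, targets, w):
    """w: per-sample probability of being mislabelled, in [0, 1]."""
    log_probs = F.log_softmax(logits, dim=1)
    nll_label = F.nll_loss(log_probs, targets, reduction="none")
    nll_pred = F.nll_loss(log_probs, logits.argmax(dim=1), reduction="none")
    # Likely-clean samples keep the label; likely-noisy ones follow the
    # network prediction instead.
    return ((1 - w) * nll_label + w * nll_pred).mean()
```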
Recent advances in deep learning rely on large, labeled datasets to train high-capacity models. However, collecting large datasets in a time- and cost-efficient manner often results in label noise. We present a method for learning from noisy labels that leverages similarities between training examples in feature space, encouraging the prediction of each example to be similar to that of its nearest neighbors. Compared to training algorithms that use multiple models or distinct stages, our method takes the form of a simple, additional regularization term. It can be interpreted as an inductive version of the classical, transductive label propagation algorithm. We thoroughly evaluate our method on datasets with synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, WebVision, Clothing1M, mini-ImageNet-Red) noise, and achieve competitive or state-of-the-art accuracy on all of them.
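A sketch of what such a neighbor-consistency regularizer can look like, computed within a mini-batch; the choice of `k`, the softmax neighbor weighting, and the KL form are illustrative rather than the paper's exact recipe.

```python
# Pull each example's prediction toward a similarity-weighted average
# of its nearest neighbors' predictions in feature space.
import torch
import torch.nn.functional as F

def neighbor_consistency_loss(features, logits, k=5):
    feats = F.normalize(features, dim=1)
    sims = feats @ feats.t()                        # cosine similarities
    sims.fill_diagonal_(-float("inf"))              # exclude self-matches
    topk_sims, topk_idx = sims.topk(k, dim=1)
    nbr_weights = F.softmax(topk_sims, dim=1)       # (B, k)
    probs = F.softmax(logits, dim=1)
    nbr_probs = probs[topk_idx]                     # (B, k, C)
    target = (nbr_weights.unsqueeze(-1) * nbr_probs).sum(dim=1).detach()
    return F.kl_div(F.log_softmax(logits, dim=1), target,
                    reduction="batchmean")

# total_loss = F.cross_entropy(logits, y) \
#            + alpha * neighbor_consistency_loss(features, logits)
```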
In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks. Cross Entropy (CE) loss has been shown to be not robust to noisy labels due to its unboundedness. To alleviate this issue, existing works typically design specialized robust losses with the symmetric condition, which usually lead to the underfitting issue. In this paper, our key idea is to induce a loss bound at the logit level, thus universally enhancing the noise robustness of existing losses. Specifically, we propose logit clipping (LogitClip), which clamps the norm of the logit vector to ensure that it is upper bounded by a constant. In this manner, CE loss equipped with our LogitClip method is effectively bounded, mitigating the overfitting to examples with noisy labels. Moreover, we present theoretical analyses to certify the noise-tolerant ability of LogitClip. Extensive experiments show that LogitClip not only significantly improves the noise robustness of CE loss, but also broadly enhances the generalization performance of popular robust losses.
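A minimal sketch of the clipping operation as described: rescale the logit vector whenever its L2 norm exceeds a threshold `tau`, so that the downstream loss is bounded.

```python
# LogitClip: bound the logit norm before the loss.
import torch
import torch.nn.functional as F

def logit_clip(logits, tau=1.0):
    norms = logits.norm(p=2, dim=1, keepdim=True)
    scale = (tau / norms).clamp(max=1.0)  # only shrink, never enlarge
    return logits * scale

# loss = F.cross_entropy(logit_clip(model(x)), y)
```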
The existence of label noise imposes significant challenges (e.g., poor generalization) on the training process of deep neural networks (DNN). As a remedy, this paper introduces a permutation layer learning approach termed PermLL to dynamically calibrate the training process of the DNN subject to instance-dependent and instance-independent label noise. The proposed method augments the architecture of a conventional DNN by an instance-dependent permutation layer. This layer is essentially a convex combination of permutation matrices that is dynamically calibrated for each sample. The primary objective of the permutation layer is to correct the loss of noisy samples, mitigating the effect of label noise. We provide two variants of PermLL in this paper: one applies the permutation layer to the model's prediction, while the other applies it directly to the given noisy label. In addition, we provide a theoretical comparison between the two variants and show that previous methods can be seen as one of the variants. Finally, we validate PermLL experimentally and show that it achieves state-of-the-art performance on both real and synthetic datasets.
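A hedged sketch in the spirit of the first variant: a small head produces convex mixing coefficients over a fixed pool of permutation matrices, and the combination is applied to the model's predicted probabilities. The pool (identity plus random permutations) and the coefficient head are illustrative, not the paper's construction.

```python
# Instance-dependent convex combination of permutation matrices,
# applied to the predicted class probabilities. Sketch only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PermutationLayer(nn.Module):
    def __init__(self, feat_dim, num_classes, num_perms=8):
        super().__init__()
        perms = [torch.eye(num_classes)]
        for _ in range(num_perms - 1):
            perms.append(torch.eye(num_classes)[torch.randperm(num_classes)])
        self.register_buffer("perms", torch.stack(perms))     # (P, C, C)
        self.coef_head = nn.Linear(feat_dim, num_perms)

    def forward(self, features, probs):
        coefs = F.softmax(self.coef_head(features), dim=1)    # (B, P)
        mix = torch.einsum("bp,pij->bij", coefs, self.perms)  # (B, C, C)
        return torch.einsum("bij,bj->bi", mix, probs)         # permuted probs

# loss = F.nll_loss((permuted_probs + 1e-8).log(), noisy_labels)
```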
Deep learning has achieved remarkable success in numerous domains with the help of large amounts of big data. However, the quality of data labels is a concern because of the lack of high-quality labels in many real-world scenarios. As noisy labels severely degrade the generalization performance of deep neural networks, learning from noisy labels (robust training) is becoming an important task in modern deep learning applications. In this survey, we first describe the problem of learning with label noise from a supervised learning perspective. Next, we provide a comprehensive review of 62 state-of-the-art robust training methods, all of which are categorized into five groups according to their methodological difference, followed by a systematic comparison of six properties used to evaluate their superiority. Subsequently, we perform an in-depth analysis of noise rate estimation and summarize the typically used evaluation methodology, including public noisy datasets and evaluation metrics. Finally, we present several promising research directions that can serve as a guideline for future studies. All the contents will be available at https://github.com/songhwanjun/awesome-noisy-labels.
Training accurate deep neural networks (DNNs) in the presence of noisy labels is an important and challenging task. Though a number of approaches have been proposed for learning with noisy labels, many open issues remain. In this paper, we show that DNN learning with Cross Entropy (CE) exhibits overfitting to noisy labels on some classes ("easy" classes), but more surprisingly, it also suffers from significant under learning on some other classes ("hard" classes). Intuitively, CE requires an extra term to facilitate learning of hard classes, and more importantly, this term should be noise tolerant, so as to avoid overfitting to noisy labels. Inspired by the symmetric KL-divergence, we propose the approach of Symmetric cross entropy Learning (SL), boosting CE symmetrically with a noise robust counterpart Reverse Cross Entropy (RCE). Our proposed SL approach simultaneously addresses both the under learning and overfitting problem of CE in the presence of noisy labels. We provide a theoretical analysis of SL and also empirically show, on a range of benchmark and real-world datasets, that SL outperforms state-of-the-art methods. We also show that SL can be easily incorporated into existing methods in order to further enhance their performance.
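A sketch of the SL objective following the common implementation pattern, in which the one-hot label's zero entries are clamped so their logarithm acts as the negative constant that makes RCE noise tolerant; `alpha` and `beta` are dataset-dependent knobs.

```python
# Symmetric cross entropy Learning: CE + Reverse Cross Entropy.
import torch
import torch.nn.functional as F

def sl_loss(logits, targets, num_classes, alpha=0.1, beta=1.0):
    ce = F.cross_entropy(logits, targets)
    pred = F.softmax(logits, dim=1).clamp(min=1e-7)
    # Clamping the one-hot label bounds log(0) from below.
    onehot = F.one_hot(targets, num_classes).float().clamp(min=1e-4)
    rce = (-pred * onehot.log()).sum(dim=1).mean()
    return alpha * ce + beta * rce
```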
Recently, over-parameterized deep networks, with increasingly more network parameters than training samples, have dominated the performance of modern machine learning. However, when the training data is corrupted, it is well known that over-parameterized networks tend to overfit and fail to generalize. In this work, we propose a principled approach for robust training of over-parameterized deep networks on classification tasks where a proportion of the training labels are corrupted. The main idea is simple: label noise is sparse and incoherent with the network learned from clean data, so we model the noise and learn to separate it from the data. Specifically, we model the label noise via another sparse over-parameterized term, and exploit implicit algorithmic regularization to recover and separate the underlying corruptions. Remarkably, when trained with such a simple method in practice, we demonstrate state-of-the-art test accuracy against label noise on a variety of real datasets. Furthermore, our experimental results are corroborated by theory on simplified linear models, showing that exact separation between sparse noise and low-rank data can be achieved under incoherent conditions. This work opens many interesting directions for improving over-parameterized models by using sparse over-parameterization and implicit regularization.
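A hedged sketch of the idea: a trainable per-sample term of the form u*u - v*v (a parameterization that implicit regularization of gradient descent biases toward sparsity) corrects the one-hot target. The paper's exact parameterization and loss may differ.

```python
# Sparse over-parameterized label-noise term, trained jointly with the
# network by (S)GD. A sketch under the assumptions stated above.
import torch
import torch.nn.functional as F

n_samples, num_classes = 50_000, 10
# Small nonzero init so gradients can flow through the quadratic terms.
u = (1e-3 * torch.randn(n_samples, num_classes)).requires_grad_()
v = (1e-3 * torch.randn(n_samples, num_classes)).requires_grad_()

def robust_loss(logits, targets, idx):
    """idx: dataset indices of the mini-batch samples."""
    onehot = F.one_hot(targets, num_classes).float()
    corrected = onehot + u[idx] * u[idx] - v[idx] * v[idx]
    return F.mse_loss(F.softmax(logits, dim=1), corrected)
```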
The success of deep neural networks greatly relies on the availability of large amounts of high-quality annotated data, which are, however, difficult or expensive to obtain. The resulting labels may be class-imbalanced, noisy, or biased by human annotators. Learning unbiased classification models from imperfectly annotated datasets is challenging, and we usually suffer from overfitting or underfitting. In this work, we thoroughly investigate the popular softmax loss and margin-based losses, and offer a feasible approach to tighten the generalization error bound by maximizing the minimal sample margin. We further derive the optimality condition for this purpose, which indicates how the class prototypes should be anchored. Motivated by the theoretical analysis, we propose a simple yet effective method, namely prototype-anchored learning (PAL), which can be easily incorporated into various learning-based classification schemes to handle imperfect annotations. We verify the effectiveness of PAL on class-imbalanced learning and noise-tolerant learning by extensive experiments on synthetic and real-world datasets.
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity. Despite the promise, the performance of PLL often lags behind the supervised counterpart. In this work, we bridge the gap by addressing two key research challenges in PLL -- representation learning and label disambiguation -- in one coherent framework. Specifically, our proposed framework PiCO consists of a contrastive learning module along with a novel class prototype-based label disambiguation algorithm. PiCO produces closely aligned representations for examples from the same classes and facilitates label disambiguation. Theoretically, we show that these two components are mutually beneficial, and can be rigorously justified from an expectation-maximization (EM) algorithm perspective. Moreover, we study a challenging yet practical noisy partial label learning setup, where the ground-truth may not be included in the candidate set. To remedy this problem, we present an extension PiCO+ that performs distance-based clean sample selection and learns robust classifiers by a semi-supervised contrastive learning algorithm. Extensive experiments demonstrate that our proposed methods significantly outperform the current state-of-the-art approaches in standard and noisy PLL tasks and even achieve comparable results to fully supervised learning.
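A sketch of the prototype-based disambiguation step, assuming embeddings and class prototypes are given: the soft pseudo-target is moved toward the candidate class whose prototype is most similar to the example's embedding. The contrastive module that learns the embeddings, and the prototype updates themselves, are omitted.

```python
# Prototype-based label disambiguation over the candidate set, sketch.
import torch
import torch.nn.functional as F

def disambiguate(pseudo_targets, embeddings, prototypes,
                 candidate_mask, phi=0.9):
    """candidate_mask: (B, C) bool, True for classes in the candidate set."""
    sims = F.normalize(embeddings, dim=1) @ F.normalize(prototypes, dim=1).t()
    sims = sims.masked_fill(~candidate_mask, float("-inf"))
    nearest = F.one_hot(sims.argmax(dim=1), prototypes.size(0)).float()
    # Moving-average update of the soft pseudo-target.
    return phi * pseudo_targets + (1 - phi) * nearest
```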
Label noise significantly degrades the generalization ability of deep models in applications. Effective strategies and approaches, e.g., re-weighting or loss correction, are designed to alleviate the negative impact of label noise when training neural networks. Those existing works usually rely on a pre-specified architecture and manually tuned additional hyper-parameters. In this paper, we propose warped probabilistic inference (WarPI) to adaptively rectify the training procedure of the classification network within a meta-learning scenario. In contrast to deterministic models, WarPI is formulated as a hierarchical probabilistic model by learning an amortized meta-network, which can resolve sample ambiguity and is therefore more robust to severe label noise. Unlike existing approximated weighting functions that directly generate weight values from the losses, our meta-network is learned to estimate a rectifying vector from the input of logits and labels, which has the capability of leveraging the sufficient information lying in them. This provides an effective way to rectify the learning procedure of the classification network, demonstrated by a significant improvement of the generalization ability. Moreover, modeling the rectifying vector as a latent variable and learning the meta-network can be seamlessly integrated into the SGD optimization of the classification network. We evaluate WarPI on four benchmarks of robust learning with noisy labels and achieve the new state-of-the-art under variant noise types. Extensive studies and analyses also demonstrate the effectiveness of our model.
Deep neural networks have been shown to be very powerful modeling tools for many supervised learning tasks involving complex input patterns. However, they can also easily overfit to training set biases and label noises. In addition to various regularizers, example reweighting algorithms are popular solutions to these problems, but they require careful tuning of additional hyperparameters, such as example mining schedules and regularization hyperparameters. In contrast to past reweighting methods, which typically consist of functions of the cost value of each example, in this work we propose a novel meta-learning algorithm that learns to assign weights to training examples based on their gradient directions. To determine the example weights, our method performs a meta gradient descent step on the current mini-batch example weights (which are initialized from zero) to minimize the loss on a clean unbiased validation set. Our proposed method can be easily implemented on any type of deep network, does not require any additional hyperparameter tuning, and achieves impressive performance on class imbalance and corrupted label problems where only a small amount of clean validation data is available.
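A condensed sketch of that meta step using `torch.func` (PyTorch >= 2.0): the example weights start at zero, a virtual SGD step is taken on the weighted training loss, and the final weights come from the gradient of the clean validation loss through that virtual step. Function name and inner learning rate are illustrative.

```python
# Learning to reweight: one meta gradient descent step on the
# mini-batch example weights. Sketch under the assumptions above.
import torch
import torch.nn.functional as F
from torch.func import functional_call

def meta_example_weights(model, x, y, x_val, y_val, inner_lr=0.1):
    params = {k: v.detach().requires_grad_(True)
              for k, v in model.named_parameters()}
    eps = torch.zeros(x.size(0), requires_grad=True)  # weights init at zero

    losses = F.cross_entropy(functional_call(model, params, (x,)), y,
                             reduction="none")
    grads = torch.autograd.grad((eps * losses).sum(),
                                tuple(params.values()), create_graph=True)
    # Virtual SGD step on the weighted training loss.
    virtual = {k: p - inner_lr * g
               for (k, p), g in zip(params.items(), grads)}

    # Clean validation loss at the virtual parameters.
    val_loss = F.cross_entropy(functional_call(model, virtual, (x_val,)),
                               y_val)
    eps_grad, = torch.autograd.grad(val_loss, eps)
    w = torch.clamp(-eps_grad, min=0.0)       # keep only helpful examples
    return w / w.sum().clamp(min=1e-8)        # normalize within the batch
```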
In this paper, we present a simple yet effective method (ABSGD) for addressing the data imbalance issue in deep learning. Our method is a simple modification to momentum SGD where we leverage an attentional mechanism to assign an individual importance weight to each gradient in the mini-batch. Unlike many existing heuristic-driven methods for tackling data imbalance, our method is grounded in theoretically justified distributionally robust optimization (DRO), which is guaranteed to converge to a stationary point of an information-regularized DRO problem. The individual-level weight of a sampled data is systematically proportional to the exponential of a scaled loss value of the data, where the scaling factor is interpreted as the regularization parameter in the framework of information-regularized DRO. Compared with existing class-level weighting schemes, our method can capture the diversity between individual examples within each class. Compared with existing individual-level weighting methods using meta-learning that require three backward propagations for computing mini-batch stochastic gradients, our method is more efficient with only one backward propagation at each iteration as in standard deep learning methods. To balance between the learning of feature extraction layers and the learning of the classifier layer, we employ a two-stage method that uses SGD for pretraining followed by ABSGD for learning a robust classifier and finetuning lower layers. Our empirical studies on several benchmark datasets demonstrate the effectiveness of the proposed method.
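A sketch of the attentional weighting described above: each example's weight is proportional to the exponential of its scaled loss, with `lam` playing the role of the DRO regularization parameter. The exact integration with momentum SGD is omitted.

```python
# ABSGD-style attentional loss weighting within a mini-batch, sketch.
import torch
import torch.nn.functional as F

def absgd_loss(logits, targets, lam=5.0):
    losses = F.cross_entropy(logits, targets, reduction="none")
    # Weights proportional to exp(loss / lam); detached so they act as
    # constants in the backward pass.
    w = F.softmax(losses.detach() / lam, dim=0)
    return (w * losses).sum()
```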
Recent deep networks are capable of memorizing the entire data even when the labels are completely random. To overcome the overfitting on corrupted labels, we propose a novel technique of learning another neural network, called MentorNet, to supervise the training of the base deep network, namely, StudentNet. During training, MentorNet provides a curriculum (sample weighting scheme) for StudentNet to focus on samples whose labels are probably correct. Unlike the existing curriculum that is usually predefined by human experts, MentorNet learns a data-driven curriculum dynamically with StudentNet. Experimental results demonstrate that our approach can significantly improve the generalization performance of deep networks trained on corrupted training data. Notably, to the best of our knowledge, we achieve the best-published result on WebVision, a large benchmark containing 2.2 million images of real-world noisy labels. The code is at https://github.com/google/mentornet.
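A toy stand-in for the idea, not the paper's architecture: a small network maps per-sample loss features to weights in [0, 1] for StudentNet's loss. The real MentorNet uses richer curriculum features (e.g., loss history and training progress embeddings); this only illustrates the wiring.

```python
# Minimal mentor network producing per-sample curriculum weights.
import torch
import torch.nn as nn

class TinyMentor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(),
                                 nn.Linear(16, 1))

    def forward(self, losses, epoch_frac):
        # Features: the per-sample loss and the training progress.
        feats = torch.stack(
            [losses, torch.full_like(losses, epoch_frac)], dim=1)
        return torch.sigmoid(self.net(feats)).squeeze(1)  # weights in [0, 1]

# student_loss = (mentor(per_sample_losses.detach(), epoch_frac)
#                 * per_sample_losses).mean()
```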
Deep neural network models are robust to a limited amount of label noise, but their ability to memorize noisy labels in high-noise-rate problems is still an open issue. The most competitive noisy-label learning algorithms rely on a two-stage process comprising an unsupervised learning stage to classify training samples as clean or noisy, followed by a semi-supervised learning stage that minimizes the empirical vicinal risk (EVR) using a labeled set formed by the samples classified as clean and an unlabeled set of samples classified as noisy. In this paper, we hypothesize that the generalization of such two-stage noisy-label learning methods depends on the precision of the unsupervised classifier and on the size of the training set used to minimize the EVR. We empirically validate these two hypotheses and propose the new two-stage noisy-label training algorithm LongReMix. We test LongReMix on the noisy-label benchmarks CIFAR-10, CIFAR-100, WebVision, Clothing1M, and Food101-N. The results show that our LongReMix generalizes better than competing approaches, particularly in high label-noise problems. Furthermore, our approach achieves state-of-the-art performance in most datasets. The code is available at https://github.com/filipe-research/longremix.
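A sketch of the unsupervised clean/noisy split commonly used by such two-stage methods: fit a two-component Gaussian mixture to the per-sample losses and read off the posterior of the low-mean component. Whether this matches LongReMix's exact unsupervised classifier is an assumption on my part.

```python
# Two-component GMM over per-sample losses; the low-mean component is
# treated as the clean one. Sketch of the common two-stage split.
import numpy as np
from sklearn.mixture import GaussianMixture

def clean_probability(losses):
    losses = np.asarray(losses, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, reg_covar=5e-4).fit(losses)
    clean_component = int(np.argmin(gmm.means_.ravel()))
    return gmm.predict_proba(losses)[:, clean_component]

# clean_mask = clean_probability(per_sample_losses) > 0.5
```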
Label smoothing (LS) is an arising learning paradigm that uses a positively weighted average of the hard training labels and uniformly distributed soft labels. It was shown that LS serves as a regularizer for training data with hard labels and therefore improves the generalization of the model. Later it was reported that LS even helps improve robustness when learning with noisy labels. However, we observe that the advantage of LS vanishes when we operate in a high label-noise regime. Intuitively, this is due to the increased entropy of $\mathbb{P}(\text{noisy label} \mid X)$ when the noise rate is high, in which case further applying LS tends to "over-smooth" the estimated posterior. We proceed to discover that several learning-with-noisy-labels solutions in the literature instead relate more closely to negative/not label smoothing (NLS), which acts counter to LS and is defined as using a negative weight to combine the hard and soft labels. We provide understandings of the properties of LS and NLS when learning with noisy labels. Among other established properties, we theoretically show that NLS is more beneficial when the label noise rate is high. We provide extensive experimental results on multiple benchmarks to support our findings. Code is publicly available at https://github.com/ucsc-real/negative-label-smooth.
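A sketch of generalized label smoothing in which the smoothing rate may be negative: the soft target is (1 - r) * one-hot + r * uniform, and r < 0 realizes NLS by pushing mass away from the non-label classes.

```python
# Generalized label smoothing loss; negative rate gives NLS.
import torch
import torch.nn.functional as F

def gls_loss(logits, targets, rate):
    num_classes = logits.size(1)
    onehot = F.one_hot(targets, num_classes).float()
    soft = (1.0 - rate) * onehot + rate / num_classes
    return (-soft * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# rate=0.2 recovers ordinary LS; rate=-0.2 is negative label smoothing.
```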
We introduce Noisy Feature Mixup (NFM), an inexpensive yet effective data augmentation method that combines interpolation-based training and noise-injection schemes. Rather than training with convex combinations of pairs of examples and their labels, the method uses noise-perturbed convex combinations of pairs of data points in both input and feature space. The method includes mixup and manifold mixup as special cases, but it has additional advantages, including better smoothing of decision boundaries and enabling improved model robustness. We provide theory to understand this, as well as the implicit regularization effects of NFM. Our theory is supported by empirical results, demonstrating the advantage of NFM compared to mixup and manifold mixup. We show that residual networks and vision transformers trained with NFM have favorable trade-offs between predictive accuracy on clean data and robustness, across a range of computer-vision benchmark datasets.
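A sketch of the augmentation applied at the input level: a mixup convex combination perturbed with multiplicative and additive Gaussian noise. The same operation can be applied to hidden features, manifold-mixup style; the noise scales here are illustrative.

```python
# Noisy mixup: noise-perturbed convex combination of example pairs.
import torch

def noisy_mixup(x, y_onehot, alpha=1.0, add_std=0.1, mult_std=0.1):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]          # mixup in input space
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    # Multiplicative and additive Gaussian perturbations.
    x_mix = (1 + mult_std * torch.randn_like(x_mix)) * x_mix \
            + add_std * torch.randn_like(x_mix)
    return x_mix, y_mix
```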
Deep neural networks (DNNs) have achieved tremendous success in a variety of applications across many disciplines. Yet, their superior performance comes with the expensive cost of requiring correctly annotated large-scale datasets. Moreover, due to DNNs' rich capacity, errors in training labels can hamper performance. To combat this problem, mean absolute error (MAE) has recently been proposed as a noise-robust alternative to the commonly-used categorical cross entropy (CCE) loss. However, as we show in this paper, MAE can perform poorly with DNNs and challenging datasets. Here, we present a theoretically grounded set of noise-robust loss functions that can be seen as a generalization of MAE and CCE. Proposed loss functions can be readily applied with any existing DNN architecture and algorithm, while yielding good performance in a wide range of noisy label scenarios. We report results from experiments conducted with CIFAR-10, CIFAR-100 and FASHION-MNIST datasets and synthetically generated noisy labels.
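A sketch of the central loss in this family, the generalized cross entropy $L_q(p_y) = (1 - p_y^q)/q$: it recovers CCE in the limit q -> 0 and an MAE-like loss at q = 1, trading fitting speed for noise robustness.

```python
# Generalized cross entropy loss interpolating between CCE and MAE.
import torch
import torch.nn.functional as F

def gce_loss(logits, targets, q=0.7):
    p_y = F.softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.clamp(min=1e-7).pow(q)) / q).mean()
```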