A noisy training set usually leads to degraded generalization and robustness of neural networks. In this paper, we propose a novel, theoretically guaranteed clean-sample selection framework for learning with noisy labels. Specifically, we first present a Scalable Penalized Regression (SPR) method to model the linear relation between network features and one-hot labels. In SPR, the clean data are identified by the zero mean-shift parameters solved in the regression model. We theoretically show that SPR can recover clean data under certain conditions. In general scenarios, however, these conditions may no longer hold, and some noisy data are falsely selected as clean. To address this problem, we propose a data-adaptive method, Scalable Penalized Regression with Knockoff filters (Knockoffs-SPR), which provably controls the False-Selection-Rate (FSR) of the selected clean data. To improve efficiency, we further present a splitting algorithm that divides the whole training set into small pieces that can be solved in parallel, making the framework scalable to large datasets. While Knockoffs-SPR can be regarded as a sample selection module for a standard supervised training pipeline, we further combine it with a semi-supervised algorithm to exploit the support of noisy data as unlabeled data. Experimental results on several benchmark datasets and real-world noisy datasets show the effectiveness of our framework and validate the theoretical results of Knockoffs-SPR. Our code and pre-trained models will be released.
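For concreteness, below is a minimal numpy sketch of the mean-shift formulation underlying SPR, assuming a simple alternating ridge/group-soft-thresholding solver (the function `select_clean_spr_sketch`, the penalty value, and the stopping rule are illustrative; the paper's actual solution path and the knockoff construction of Knockoffs-SPR are not reproduced here).

```python
import numpy as np

def select_clean_spr_sketch(X, Y, lam=0.5, ridge=1e-2, n_iter=50):
    """Toy mean-shift penalized regression (hypothetical solver, not the official SPR).

    X: (n, d) network features, Y: (n, c) one-hot noisy labels.
    Model: Y = X @ beta + Gamma + noise, with a row-wise group penalty on Gamma.
    Samples whose mean-shift row gamma_i is driven to zero are treated as clean.
    """
    n, d = X.shape
    c = Y.shape[1]
    beta = np.zeros((d, c))
    Gamma = np.zeros((n, c))
    XtX = X.T @ X + ridge * np.eye(d)          # small ridge keeps the solve stable
    for _ in range(n_iter):
        # beta-step: regression on the residual Y - Gamma
        beta = np.linalg.solve(XtX, X.T @ (Y - Gamma))
        # Gamma-step: row-wise group soft-thresholding of the residual
        R = Y - X @ beta
        norms = np.linalg.norm(R, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
        Gamma = shrink * R
    clean_mask = np.linalg.norm(Gamma, axis=1) < 1e-8   # zero mean-shift -> clean
    return clean_mask

# Usage: clean = select_clean_spr_sketch(features, one_hot_noisy_labels)
```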
Recent studies on learning with noisy labels have shown remarkable performance by exploiting a small clean dataset. In particular, model-agnostic meta-learning-based label correction methods further improve performance by correcting noisy labels on the fly. However, there is no safeguard against label miscorrection, which leads to unavoidable performance degradation. Moreover, every training step requires at least three back-propagations, significantly slowing down the training speed. To mitigate these issues, we propose a robust and efficient method that learns a label transition matrix on the fly. Employing the transition matrix makes the classifier skeptical about all the corrected samples, which alleviates the miscorrection issue. We also introduce a two-head architecture to efficiently estimate the label transition matrix within a single back-propagation, so that the estimated matrix closely follows the shifting noise distribution induced by label correction. Extensive experiments demonstrate that our method shows the best training efficiency while having comparable or better accuracy than existing methods.
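As a rough illustration of forward correction with a label transition matrix, the hypothetical `TwoHeadNoisyClassifier` below pairs a classification head with a second head that parameterizes a row-stochastic matrix T; it mimics the idea described above and is not the authors' architecture or estimation procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadNoisyClassifier(nn.Module):
    """Illustrative two-head model: one head predicts clean class posteriors,
    a second head produces a label transition matrix T (rows sum to 1)."""

    def __init__(self, feat_dim=128, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(32, feat_dim), nn.ReLU())
        self.cls_head = nn.Linear(feat_dim, num_classes)
        # Transition head: here a global matrix parameterized by logits.
        self.T_logits = nn.Parameter(torch.eye(num_classes) * 4.0)

    def forward(self, x):
        feats = self.backbone(x)
        clean_probs = F.softmax(self.cls_head(feats), dim=1)   # p(clean y | x)
        T = F.softmax(self.T_logits, dim=1)                    # p(noisy y | clean y)
        noisy_probs = clean_probs @ T                          # forward correction
        return clean_probs, noisy_probs

def forward_corrected_loss(noisy_probs, noisy_targets):
    # Cross-entropy against the *noisy* labels on the corrected distribution.
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_targets)

# Usage sketch:
# model = TwoHeadNoisyClassifier()
# clean_p, noisy_p = model(torch.randn(8, 32))
# loss = forward_corrected_loss(noisy_p, torch.randint(0, 10, (8,)))
```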
Supervised learning can be viewed as distilling relevant information from input data into feature representations. This process becomes difficult when the supervision is noisy, because the distilled information might be irrelevant. In fact, recent research shows that networks can easily overfit all labels, including the corrupted ones, and hence can hardly generalize to clean datasets. In this paper, we focus on the problem of learning with noisy labels and introduce a compression inductive bias into network architectures to alleviate this over-fitting problem. More precisely, we revisit one classical regularization named Dropout and its variant Nested Dropout. Dropout can serve as a compression constraint via its feature-dropping mechanism, while Nested Dropout further learns ordered feature representations w.r.t. feature importance. Moreover, the models trained with compression regularization are further combined with Co-teaching for a performance boost. Theoretically, we conduct a bias-variance decomposition of the objective function under compression regularization, and analyze it for both the single model and Co-teaching. This decomposition provides three insights: (i) it shows that over-fitting is indeed an issue for learning with noisy labels; (ii) through an information bottleneck formulation, it explains why the proposed feature compression helps combat label noise; (iii) it explains the performance boost brought by incorporating compression regularization into Co-teaching. Experiments show that our simple approach can achieve comparable or even better performance than state-of-the-art methods on benchmarks with real-world label noise, including Clothing1M and ANIMAL-10N. Our implementation is available at https://yingyichen-cyy.github.io/compressfatsfeatnoisylabels/.
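A minimal sketch of Nested Dropout as a compression constraint is given below, assuming a geometric distribution over the per-sample truncation index; the module name and the parameter `p` are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

class NestedDropout(nn.Module):
    """Keeps only the first k feature dimensions per sample and zeroes the rest,
    so earlier dimensions are forced to carry the most important information.
    The truncation index k is drawn from a geometric distribution (assumed here)."""

    def __init__(self, p=0.05):
        super().__init__()
        self.p = p  # success probability of the geometric distribution over k

    def forward(self, x):                      # x: (batch, dim)
        if not self.training:
            return x
        dim = x.size(1)
        # Sample one truncation index per sample in the batch.
        k = torch.distributions.Geometric(probs=self.p).sample((x.size(0),)).long() + 1
        k = k.clamp(max=dim).to(x.device)
        idx = torch.arange(dim, device=x.device).unsqueeze(0)   # (1, dim)
        mask = (idx < k.unsqueeze(1)).float()                   # keep dims < k
        return x * mask

# Usage: features = NestedDropout(p=0.05)(encoder_output)
```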
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity. Despite the promise, the performance of PLL often lags behind the supervised counterpart. In this work, we bridge the gap by addressing two key research challenges in PLL -- representation learning and label disambiguation -- in one coherent framework. Specifically, our proposed framework PiCO consists of a contrastive learning module along with a novel class prototype-based label disambiguation algorithm. PiCO produces closely aligned representations for examples from the same classes and facilitates label disambiguation. Theoretically, we show that these two components are mutually beneficial, and can be rigorously justified from an expectation-maximization (EM) algorithm perspective. Moreover, we study a challenging yet practical noisy partial label learning setup, where the ground-truth may not be included in the candidate set. To remedy this problem, we present an extension PiCO+ that performs distance-based clean sample selection and learns robust classifiers by a semi-supervised contrastive learning algorithm. Extensive experiments demonstrate that our proposed methods significantly outperform the current state-of-the-art approaches in standard and noisy PLL tasks and even achieve comparable results to fully supervised learning.
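The snippet below sketches only the prototype-based disambiguation step: class prototypes are kept as an EMA of embeddings (an assumed update rule), and each example's pseudo-label is the candidate class whose prototype is most similar to its embedding. The contrastive branch and PiCO's exact equations are omitted.

```python
import torch
import torch.nn.functional as F

def update_prototypes(prototypes, embeddings, pseudo_labels, momentum=0.99):
    """EMA update of L2-normalized class prototypes (assumed update rule)."""
    for c in pseudo_labels.unique():
        mask = pseudo_labels == c
        proto = F.normalize(embeddings[mask].mean(dim=0), dim=0)
        prototypes[c] = F.normalize(momentum * prototypes[c] + (1 - momentum) * proto, dim=0)
    return prototypes

def disambiguate(prototypes, embeddings, candidate_mask):
    """Pick, inside each example's candidate set, the class whose prototype has
    the highest cosine similarity to the example's embedding."""
    sims = F.normalize(embeddings, dim=1) @ prototypes.t()    # (batch, num_classes)
    sims = sims.masked_fill(~candidate_mask, float('-inf'))   # restrict to candidates
    return sims.argmax(dim=1)

# Usage sketch:
# prototypes = F.normalize(torch.randn(10, 64), dim=1)
# candidate_mask = torch.zeros(8, 10, dtype=torch.bool); candidate_mask[:, :3] = True
# pseudo = disambiguate(prototypes, torch.randn(8, 64), candidate_mask)
# prototypes = update_prototypes(prototypes, torch.randn(8, 64), pseudo)
```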
Deep learning has achieved remarkable success in numerous domains with the help of massive amounts of big data. However, the quality of data labels is a concern because of the lack of high-quality labels in many real-world scenarios. As noisy labels severely degrade the generalization performance of deep neural networks, learning from noisy labels (robust training) is becoming an important task in modern deep learning applications. In this survey, we first describe the problem of learning with label noise from a supervised learning perspective. Next, we provide a comprehensive review of 62 state-of-the-art robust training methods, all of which are categorized into five groups according to their methodological difference, followed by a systematic comparison in terms of six properties used to evaluate their superiority. Subsequently, we perform an in-depth analysis of noise rate estimation and summarize the typically used evaluation methodology, including public noisy datasets and evaluation metrics. Finally, we present several promising research directions that can serve as a guideline for future studies. All the contents will be available at https://github.com/songhwanjun/awesome-noisy-labels.
As label noise, one of the most popular distribution shifts, severely degrades the generalization performance of deep neural networks, robust training with noisy labels is becoming an important task in modern deep learning. In this paper, we propose our framework, coined Adaptive Label Smoothing on Sub-Classifiers (ALASCA), which provides a robust feature extractor with theoretical guarantees and negligible additional computation. First, we derive that label smoothing (LS) induces implicit Lipschitz regularization (LR). Furthermore, based on these derivations, we apply adaptive LS (ALS) on a sub-classifier architecture for the practical application of adaptive LR on intermediate layers. We conduct extensive experiments with ALASCA, combine it with previous noise-robust methods on several datasets, and show that our framework consistently outperforms the corresponding baselines.
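As a rough sketch of the sub-classifier idea, the snippet below applies label smoothing at auxiliary heads attached to intermediate layers; a fixed smoothing coefficient stands in for ALASCA's adaptive schedule, and the helper names are illustrative.

```python
import torch
import torch.nn.functional as F

def smoothed_ce(logits, targets, eps=0.1):
    """Cross-entropy against a label-smoothed target distribution."""
    num_classes = logits.size(1)
    log_probs = F.log_softmax(logits, dim=1)
    smooth = torch.full_like(log_probs, eps / (num_classes - 1))
    smooth.scatter_(1, targets.unsqueeze(1), 1.0 - eps)
    return -(smooth * log_probs).sum(dim=1).mean()

def sub_classifier_loss(main_logits, sub_logits_list, targets, eps=0.1, sub_weight=0.3):
    """Main-head loss plus label-smoothed losses on each intermediate sub-classifier.
    (The fixed eps stands in for the adaptive schedule described in the abstract.)"""
    loss = F.cross_entropy(main_logits, targets)
    for sub_logits in sub_logits_list:
        loss = loss + sub_weight * smoothed_ce(sub_logits, targets, eps)
    return loss
```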
Label noise significantly degrades the generalization ability of deep models in applications. Effective strategies and approaches, \textit{e.g.} re-weighting or loss correction, are designed to alleviate the negative impact of label noise when training a neural network. Those existing works usually rely on a pre-specified architecture and manual tuning of additional hyper-parameters. In this paper, we propose warped probabilistic inference (WarPI) to adaptively rectify the training procedure of the classification network within a meta-learning scenario. In contrast to deterministic models, WarPI is formulated as a hierarchical probabilistic model by learning an amortized meta-network, which can resolve sample ambiguity and is therefore more robust to serious label noise. Unlike existing approximated weighting functions that directly generate weight values from losses, our meta-network is learned to estimate a rectifying vector from the input of the logits and labels, which has the capability of leveraging the sufficient information lying in them. This provides an effective way to rectify the learning procedure of the classification network, demonstrating a significant improvement in generalization ability. Besides, modeling the rectifying vector as a latent variable and learning the meta-network can be seamlessly integrated into the SGD optimization of the classification network. We evaluate WarPI on four benchmarks of robust learning with noisy labels and achieve the new state-of-the-art under various noise types. Extensive studies and analyses also demonstrate the effectiveness of our model.
Semi-supervised-learning-based methods are the current SOTA solutions to the noisy-label learning problem, relying on learning an unsupervised label cleaner first to divide the training samples into a labeled set of clean data and an unlabeled set of noisy data. Typically, the cleaner is obtained by fitting a mixture model to the distribution of per-sample training losses. However, the modeling procedure is \emph{class agnostic} and assumes the loss distributions of clean and noisy samples are the same across different classes. Unfortunately, in practice, such an assumption does not always hold due to the varying learning difficulty of different classes, thus leading to sub-optimal label noise partition criteria. In this work, we reveal this long-ignored problem and propose a simple yet effective solution, named \textbf{C}lass \textbf{P}rototype-based label noise \textbf{C}leaner (\textbf{CPC}). Unlike previous works treating all the classes equally, CPC fully considers loss distribution heterogeneity and applies class-aware modulation to partition the clean and noisy data. CPC takes advantage of loss distribution modeling and intra-class consistency regularization in feature space simultaneously and thus can better distinguish clean and noisy labels. We theoretically justify the effectiveness of our method by explaining it from the Expectation-Maximization (EM) framework. Extensive experiments are conducted on the noisy-label benchmarks CIFAR-10, CIFAR-100, Clothing1M and WebVision. The results show that CPC consistently brings about performance improvement across all benchmarks. Codes and pre-trained models will be released at \url{https://github.com/hjjpku/CPC.git}.
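The class-aware loss-modeling step can be sketched as below: a two-component Gaussian mixture is fit to the per-sample losses within each class, and the posterior of the low-mean component serves as the clean probability (the intra-class consistency regularization of CPC is not included in this sketch).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classwise_clean_probs(losses, noisy_labels, num_classes):
    """Fit a 2-component GMM to per-sample losses *within each class* and return
    p(clean) as the posterior of the lower-mean component. A single global GMM
    would assume identical loss distributions across classes, which CPC avoids."""
    clean_probs = np.zeros_like(losses, dtype=float)
    for c in range(num_classes):
        idx = np.where(noisy_labels == c)[0]
        if len(idx) < 2:                      # not enough samples to fit a mixture
            clean_probs[idx] = 1.0
            continue
        gmm = GaussianMixture(n_components=2, reg_covar=1e-4, random_state=0)
        gmm.fit(losses[idx].reshape(-1, 1))
        post = gmm.predict_proba(losses[idx].reshape(-1, 1))
        clean_comp = int(np.argmin(gmm.means_.ravel()))   # low-loss component = clean
        clean_probs[idx] = post[:, clean_comp]
    return clean_probs

# Usage: p_clean = classwise_clean_probs(per_sample_losses, labels, num_classes=100)
# clean_set = p_clean > 0.5
```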
The memorization effect of deep neural networks (DNNs) plays a pivotal role in many state-of-the-art label-noise learning methods. To exploit this property, the early stopping trick, which stops the optimization at an early stage of training, is usually adopted. Current methods generally decide the early stopping point by considering the DNN as a whole. However, a DNN can be regarded as a composition of a series of layers, and we find that the latter layers in a DNN are much more sensitive to label noise, while their former counterparts are quite robust. Therefore, selecting a stopping point for the whole network may make different DNN layers antagonistically affect each other, thus degrading the final performance. In this paper, we propose to separate a DNN into different parts and progressively train them to address this problem. Instead of early stopping, which trains the whole DNN all at once, we initially train the former DNN layers by optimizing the DNN with a relatively large number of epochs. During training, we progressively train the latter DNN layers with a smaller number of epochs, keeping the preceding layers fixed, to counteract the impact of noisy labels. We term the proposed method progressive early stopping (PES). Despite its simplicity, compared with early stopping, PES helps to obtain more promising and stable results. Furthermore, by combining PES with existing noisy-label training approaches, we achieve state-of-the-art performance on image classification benchmarks.
The ability to train deep neural networks under label noise is appealing, as imperfectly annotated data are relatively cheap to obtain. State-of-the-art approaches are based on semi-supervised learning (SSL), which selects small-loss examples as clean and then applies SSL techniques for boosted performance. However, the selection step mostly provides a medium-sized clean subset and overlooks a rich set of clean samples. In this work, we propose a novel noisy label learning framework, ProMix, that attempts to maximize the utility of clean samples for boosted performance. Key to our method, we propose a matched high-confidence selection technique that selects those examples with high confidence whose predictions match their given labels. Combined with small-loss selection, our method achieves a precision of 99.27 and a recall of 98.22 in detecting clean samples on the CIFAR-10N dataset. Based on such a large set of clean data, ProMix improves the best baseline method by +2.67% on CIFAR-10N and +1.61% on the CIFAR-100N dataset. Code and data are available at https://github.com/justherozen/promix
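A minimal sketch of the matched high-confidence selection described above, with an assumed confidence threshold and an assumed union rule for combining it with small-loss selection:

```python
import torch
import torch.nn.functional as F

def matched_high_confidence_selection(logits, noisy_labels, tau=0.95):
    """Select examples whose prediction both matches the given label and has
    confidence above a threshold tau (the threshold value is an assumption)."""
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    return (pred == noisy_labels) & (conf > tau)

def small_loss_selection(losses, keep_ratio=0.5):
    """Classic small-loss criterion: keep the keep_ratio fraction with lowest loss."""
    k = max(1, int(keep_ratio * losses.numel()))
    threshold = losses.topk(k, largest=False).values.max()
    return losses <= threshold

# ProMix-style combination of the two selectors (the union rule is assumed):
# clean_mask = matched_high_confidence_selection(logits, y) | small_loss_selection(losses)
```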
We propose self-adaptive training -- a unified training algorithm that dynamically calibrates and enhances training processes by model predictions without incurring extra computational cost -- to advance both supervised and self-supervised learning of deep neural networks. We analyze the training dynamics of deep networks on training data corrupted by, e.g., random noise and adversarial examples. Our analysis shows that model predictions are able to magnify useful underlying information in the data, and that this phenomenon occurs even in the absence of any label information, highlighting that model predictions can substantially benefit the training process: self-adaptive training improves the generalization of deep networks under noise and enhances self-supervised representation learning. The analysis also sheds light on understanding deep learning, e.g., a potential explanation of the recently-discovered double-descent phenomenon in empirical risk minimization and the collapsing issue of state-of-the-art self-supervised learning algorithms. Experiments on the CIFAR, STL and ImageNet datasets verify the effectiveness of our approach in three applications: classification with label noise, selective classification and linear evaluation. To facilitate future research, the code has been made publicly available at https://github.com/LayneH/self-adaptive-training.
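The core target-calibration step can be sketched as follows: each example keeps a soft target that is an exponential moving average of its (initially noisy) label and the model's predictions, and examples are reweighted by target confidence. The momentum value and class layout below are assumptions, not the paper's exact hyper-parameters.

```python
import torch
import torch.nn.functional as F

class SelfAdaptiveTargets:
    """Maintains one soft target per training example, updated by model predictions."""

    def __init__(self, one_hot_labels, momentum=0.9):
        self.targets = one_hot_labels.clone().float()   # (n, c), starts at noisy labels
        self.momentum = momentum

    def update_and_loss(self, indices, logits):
        probs = F.softmax(logits.detach(), dim=1)
        # EMA between the stored target and the current prediction.
        self.targets[indices] = (self.momentum * self.targets[indices]
                                 + (1 - self.momentum) * probs)
        t = self.targets[indices]
        weights = t.max(dim=1).values                   # confident targets count more
        log_probs = F.log_softmax(logits, dim=1)
        loss = -(t * log_probs).sum(dim=1)
        return (weights * loss).sum() / weights.sum()
```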
We develop a new, principled algorithm for estimating the contribution of training data points to the behavior of a deep learning model, such as a specific prediction it makes. Our algorithm estimates the AME, a quantity that measures the expected (average) marginal effect of adding a data point to a subset of the training data, sampled from a given distribution. When subsets are sampled from the uniform distribution, the AME reduces to the well-known Shapley value. Our approach is inspired by causal inference and randomized experiments: we sample different subsets of the training data to train multiple submodels, and evaluate each submodel's behavior. We then use a LASSO regression to jointly estimate the AME of each data point based on the subset compositions. Under a sparsity assumption ($k \ll n$ data points have large AME), our estimator requires only $O(k \log n)$ randomized submodel trainings, improving upon the best prior Shapley value estimators.
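A toy sketch of the regression step, assuming submodel utilities have already been computed for each sampled subset (the simple membership-indicator design below ignores the feature reweighting used by the actual AME estimator):

```python
import numpy as np
from sklearn.linear_model import LassoCV

def estimate_ame(subset_masks, utilities):
    """subset_masks: (num_subsets, n_train) binary matrix; row s indicates which
    training points were in subset s. utilities: (num_subsets,) behavior of the
    submodel trained on that subset (e.g., confidence on one test prediction).
    Returns per-training-point effect estimates via sparse linear regression."""
    lasso = LassoCV(cv=5, random_state=0)
    lasso.fit(subset_masks.astype(float), utilities)
    return lasso.coef_                                  # one coefficient per data point

# Toy usage with a stubbed "submodel evaluation":
# rng = np.random.default_rng(0)
# masks = rng.integers(0, 2, size=(200, 50))
# utils = masks[:, [3, 7]].sum(axis=1) + 0.1 * rng.standard_normal(200)
# ame = estimate_ame(masks, utils)   # coefficients 3 and 7 should stand out
```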
Despite the large progress in supervised learning with neural networks, there are significant challenges in obtaining high-quality, large-scale and accurately labeled datasets. In such a context, in this paper we address the problem of classification in the presence of label noise and, more specifically, both closed-set and open-set label noise, that is, when the true label of a sample may or may not belong to the set of given labels. Our approach relies on a sample selection mechanism based on the consistency between the annotated label of a sample and the distribution of the labels in its feature-space neighborhood; a relabelling mechanism that relies on the confidence of the classifier across subsequent iterations; and a training strategy that trains the encoder, together with the classifier, using a cross-entropy loss on the selected samples alone. Without bells and whistles such as co-training to reduce the self-confirmation bias, and being robust to the settings of its few hyper-parameters, our method significantly surpasses previous methods on CIFAR10/CIFAR100 with artificial noise and on real-world noise datasets such as WebVision and ANIMAL-10N.
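The neighbourhood-consistency selection can be sketched as below: a sample is kept if enough of its k nearest neighbours in feature space carry the same annotated label. The values of k and the agreement threshold are illustrative; the relabelling mechanism and the encoder training strategy are omitted.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_consistency_select(features, noisy_labels, k=10, agree_ratio=0.5):
    """Keep a sample if at least `agree_ratio` of its k nearest neighbours (in
    feature space) carry the same annotated label. Thresholds are assumptions."""
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nbrs.kneighbors(features)                 # includes the point itself
    neighbor_labels = noisy_labels[idx[:, 1:]]         # drop self at column 0
    agreement = (neighbor_labels == noisy_labels[:, None]).mean(axis=1)
    return agreement >= agree_ratio

# Usage: clean_mask = neighborhood_consistency_select(penultimate_feats, labels)
```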
Training deep neural networks (DNNs) with noisy labels is practically challenging, since inaccurate labels severely degrade the generalization ability of DNNs. Previous efforts tend to handle part or all of the data in a unified denoising flow by identifying noisy data with a coarse small-loss criterion to mitigate the interference of noisy labels, ignoring the fact that the difficulty of noisy samples differs, so that a rigid and unified data selection pipeline cannot tackle this problem well. In this paper, we propose a coarse-to-fine robust learning method called CREMA to handle noisy data in a divide-and-conquer manner. At the coarse level, the clean and noisy sets are first separated in terms of credibility in a statistical sense. Since it is practically impossible to categorize all noisy samples correctly, we further process them by modeling the credibility of each sample. Specifically, for the clean set, we deliberately design a memory-based modulation scheme to dynamically adjust the contribution of each sample during training in terms of its historical credibility sequence, thereby alleviating the effect of noisy samples incorrectly grouped into the clean set. Meanwhile, for samples categorized into the noisy set, a selective label update strategy is proposed to correct noisy labels while mitigating the problem of correction error. Extensive experiments are conducted on benchmarks of different modalities, including image classification (CIFAR, Clothing1M, etc.) and text recognition (IMDB), with synthetic or natural semantic noise, demonstrating the superiority and generality of CREMA.
We approach the problem of improving the robustness of deep learning algorithms in the presence of label noise. Building upon existing label correction and co-teaching methods, we propose a novel training procedure to mitigate the memorization of noisy labels, called CrossSplit, which uses a pair of neural networks trained on two disjoint parts of the dataset. CrossSplit combines two main ingredients: (i) Cross-split label correction. The idea is that, since the model trained on one part of the data cannot memorize example-label pairs from the other part, the training labels presented to each network can be smoothly adjusted by using the predictions of its peer network; (ii) Cross-split semi-supervised training. A network trained on one part of the data also uses the unlabeled inputs of the other part. Extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and mini-WebVision datasets demonstrate that our method can outperform the current state-of-the-art at noise ratios of up to 90%.
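A sketch of the cross-split label-correction ingredient, assuming a confidence-driven mixing weight (the exact adjustment rule used in CrossSplit may differ):

```python
import torch
import torch.nn.functional as F

def cross_split_soft_targets(one_hot_labels, peer_logits):
    """Blend each example's given label with its peer network's prediction.
    The peer was trained on the *other* data split, so it cannot have memorized
    these example-label pairs. The confidence-based mixing weight is assumed."""
    peer_probs = F.softmax(peer_logits.detach(), dim=1)
    peer_conf = peer_probs.max(dim=1).values.unsqueeze(1)   # peer confidence in (0, 1]
    # A more confident peer pulls the target further toward its prediction.
    return (1 - peer_conf) * one_hot_labels + peer_conf * peer_probs

def soft_ce(logits, soft_targets):
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# Usage: loss_A = soft_ce(logits_net_A, cross_split_soft_targets(y_onehot, logits_net_B))
```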
Deep models trained with noisy labels are prone to over-fitting and struggle with generalization. Most existing solutions are based on the ideal assumption that the label noise is class-conditional, i.e., instances of the same class share the same noise model and the noise is independent of features. In practice, however, real-world noise patterns are usually more fine-grained and instance-dependent, which poses a big challenge, especially in the presence of inter-class imbalance. In this paper, we propose a two-stage clean sample identification method to address the aforementioned challenge. First, we employ a class-level feature clustering procedure for the early identification of clean samples that lie near the class-wise prediction centers; notably, we address the class imbalance problem in terms of the prediction entropy of rare classes. Second, for the remaining clean samples that are close to the ground-truth class boundaries (usually mixed with samples carrying instance-dependent noise), we propose a novel consistency-based classification method that identifies them using the consistency of two classifier heads: the higher the consistency, the more likely a sample is clean. Extensive experiments on several challenging benchmarks demonstrate the superior performance of our method against the state-of-the-art.
To train robust deep neural networks (DNNs), we systematically study several target modification approaches, which include output regularization, and self and non-self label correction (LC). Three key issues are discovered: (1) Self LC is the most appealing as it exploits the model's own knowledge and requires no extra models. However, how to automatically decide the trust degree of a learner as training goes is not well answered in the literature. (2) Some methods penalize while others reward low-entropy predictions, prompting us to ask which one is better. (3) Under the standard training setting, a trained network has low confidence when severe noise exists, making it hard to leverage its high-entropy self knowledge. To resolve issue (1), taking two well-accepted propositions -- deep neural networks learn meaningful patterns before fitting noise, and the minimum entropy regularization principle -- we propose a novel end-to-end method named ProSelfLC, which is designed according to learning time and entropy. Specifically, given a data point, we progressively increase trust in its predicted label distribution versus its annotated one if the model has been trained for enough time and the prediction is of low entropy (high confidence). For issue (2), according to ProSelfLC, we empirically show that it is better to redefine a meaningful low-entropy state and optimize the learner toward it; this serves as a defense of entropy minimization. To address issue (3), we decrease the entropy of the self knowledge using a low temperature before exploiting it to correct labels, so that the revised labels redefine a low-entropy target state. We demonstrate the effectiveness of ProSelfLC through extensive experiments in both clean and noisy settings, and on both image and protein datasets. Furthermore, our source code is available at https://github.com/xinshaoamoswang/proselflc-at.
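The self label-correction target can be sketched as below: the trust placed in the model's temperature-sharpened prediction grows with both training progress and prediction confidence. The specific functional form here is an illustrative approximation, not the paper's equation.

```python
import torch
import torch.nn.functional as F

def proselflc_style_target(one_hot_labels, logits, progress, temperature=0.5):
    """progress in [0, 1] = fraction of training completed.
    The trust eps grows with training time and with the (sharpened) prediction's
    confidence; the combination below is an illustrative approximation."""
    probs = F.softmax(logits.detach() / temperature, dim=1)   # low T sharpens
    num_classes = probs.size(1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    max_entropy = torch.log(torch.tensor(float(num_classes)))
    confidence = 1.0 - entropy / max_entropy                  # in [0, 1]
    time_factor = torch.sigmoid(torch.tensor(10.0 * (progress - 0.5)))
    eps = (time_factor * confidence).unsqueeze(1)             # per-example trust
    return (1 - eps) * one_hot_labels + eps * probs

def soft_cross_entropy(logits, targets):
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```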
Sample selection is an effective strategy to mitigate the effect of label noise in robust learning. Typical strategies commonly apply the small-loss criterion to identify clean samples. However, samples lying around the decision boundary, which are usually entangled with noisy examples, would be discarded by this criterion, leading to severe degradation of the generalization performance. In this paper, we propose a novel selection strategy, \textbf{S}elf-\textbf{F}il\textbf{t}ering (SFT), that utilizes the fluctuation of noisy examples in historical predictions to filter them out, which avoids the selection bias of the small-loss criterion against boundary examples. Specifically, we introduce a memory bank module that stores the historical predictions of each example and is dynamically updated to support the selection in subsequent learning iterations. Besides, to reduce the accumulated error of the sample selection bias of SFT, we devise a regularization term to penalize confident output distributions. By increasing the weight of the misclassified categories with this term, the loss function is robust to label noise under mild conditions. We conduct extensive experiments on three benchmarks with variant noise types and achieve the new state-of-the-art. Ablation studies and further analysis verify the virtue of SFT for sample selection in robust learning.
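A simplified sketch of the memory-bank filtering idea: recent predicted classes are stored per example, and an example is flagged when its prediction matched the given label at an earlier epoch but no longer does (one possible reading of "fluctuation"; the confidence-penalizing regularization term is omitted).

```python
import numpy as np

class PredictionMemoryBank:
    """Stores the last few epochs of predicted classes per example and flags an
    example as noisy if its prediction 'fluctuated': it matched the given label
    at an earlier epoch in the window but no longer does."""

    def __init__(self, num_samples, history=3):
        self.history = history
        self.preds = np.full((num_samples, history), -1, dtype=np.int64)

    def update(self, indices, predicted_classes):
        # Shift the window left and append the newest predictions.
        self.preds[indices] = np.roll(self.preds[indices], shift=-1, axis=1)
        self.preds[indices, -1] = predicted_classes

    def select_clean(self, indices, noisy_labels):
        hist = self.preds[indices]                      # (batch, history)
        matched_before = (hist[:, :-1] == noisy_labels[:, None]).any(axis=1)
        matches_now = hist[:, -1] == noisy_labels
        fluctuated = matched_before & ~matches_now
        return ~fluctuated                              # keep non-fluctuating samples
```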
Deep learning with noisy labels is a practically challenging problem in weakly supervised learning. The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels. In this paper, we start from a different perspective and propose a robust learning paradigm called JoCoR, which aims to reduce the diversity of two networks during training. Specifically, we first use two networks to make predictions on the same mini-batch data and calculate a joint loss with Co-Regularization for each training example. Then we select small-loss examples to update the parameters of both networks simultaneously. Trained by the joint loss, the two networks become more and more similar due to the effect of Co-Regularization. Extensive experimental results on corrupted data from benchmark datasets including MNIST, CIFAR-10, CIFAR-100 and Clothing1M demonstrate that JoCoR is superior to many state-of-the-art approaches for learning with noisy labels.
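The joint loss and small-loss selection can be sketched as below; the weighting `lam` and the selection schedule are assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def jocor_style_joint_loss(logits1, logits2, targets, lam=0.7):
    """Per-example joint loss: supervised CE from both networks plus a symmetric
    KL co-regularization term that pulls their predictions together."""
    ce = (F.cross_entropy(logits1, targets, reduction='none')
          + F.cross_entropy(logits2, targets, reduction='none'))
    p1, p2 = F.softmax(logits1, dim=1), F.softmax(logits2, dim=1)
    kl = (F.kl_div(F.log_softmax(logits1, dim=1), p2, reduction='none').sum(dim=1)
          + F.kl_div(F.log_softmax(logits2, dim=1), p1, reduction='none').sum(dim=1))
    return (1 - lam) * ce + lam * kl                    # (batch,) per-example losses

def small_loss_update(joint_loss, keep_ratio):
    """Keep the keep_ratio fraction with the smallest joint loss; both networks
    are then updated on the same selected subset (schedule assumed)."""
    k = max(1, int(keep_ratio * joint_loss.numel()))
    idx = joint_loss.topk(k, largest=False).indices
    return joint_loss[idx].mean()

# Usage: loss = small_loss_update(jocor_style_joint_loss(out1, out2, y), keep_ratio=0.8)
```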
Positive-Unlabeled (PU) learning aims to learn a model with rare positive samples and abundant unlabeled samples. Compared with classical binary classification, the task of PU learning is much more challenging due to the existence of many incompletely-annotated data instances. Since only part of the most confident positive samples are available and the evidence is not enough to categorize the remaining samples, many of these unlabeled data may also be positive samples. Research on this topic is particularly useful and essential for many real-world tasks that incur very expensive labeling costs; for example, recognition tasks in disease diagnosis, recommendation systems and satellite image recognition may only have a few positive samples that can be annotated by experts. Existing methods largely overlook the intrinsic hardness of some unlabeled data, which can result in sub-optimal performance as a consequence of fitting the easy noisy data and not sufficiently utilizing the hard data. In this paper, we focus on improving the commonly-used nnPU with a novel training pipeline. We highlight the intrinsic difference in hardness among samples in the dataset and the proper learning strategies for easy and hard data. In light of this, we propose to first split the unlabeled dataset with an early-stop strategy: samples that have inconsistent predictions between the temporary and base models are considered hard samples. The model then utilizes a noise-tolerant Jensen-Shannon divergence loss for easy data, and a dual-source consistency regularization for hard data, which includes a cross-consistency between the student and base model for low-level features and a self-consistency for high-level features and predictions.
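A sketch of a bounded Jensen-Shannon divergence loss of the kind referred to above for the easy subset (the dual-source consistency regularization for hard data is not reproduced):

```python
import torch
import torch.nn.functional as F

def js_divergence_loss(logits, targets, num_classes, eps=1e-12):
    """Bounded Jensen-Shannon divergence between prediction p and label distribution q.
    JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m = (p + q) / 2."""
    p = F.softmax(logits, dim=1)
    q = F.one_hot(targets, num_classes).float()
    m = 0.5 * (p + q)
    kl_pm = (p * (p.clamp_min(eps).log() - m.clamp_min(eps).log())).sum(dim=1)
    kl_qm = (q * (q.clamp_min(eps).log() - m.clamp_min(eps).log())).sum(dim=1)
    return (0.5 * kl_pm + 0.5 * kl_qm).mean()

# Usage (easy/low-noise subset only, per the pipeline above):
# loss_easy = js_divergence_loss(model(x_easy), y_easy, num_classes=2)
```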