Learning with noisy labels (LNL) aims to design strategies that improve model performance and generalization by mitigating the effects of overfitting to noisy labels. The main success of LNL lies in identifying as many clean samples as possible from massive noisy data, while rectifying the wrongly assigned noisy labels. Recent advances employ the predicted label distribution of each individual sample to perform noise verification and noisy-label correction, which easily gives rise to confirmation bias. To mitigate this issue, we propose Neighborhood Collective Estimation, in which the predictive reliability of a candidate sample is re-estimated by contrasting it against its feature-space nearest neighbors. Specifically, our method consists of two steps: 1) neighborhood collective noise verification, which separates all training samples into a clean or a noisy subset, and 2) neighborhood collective label correction, which relabels the noisy samples, after which auxiliary techniques are used to assist further model optimization. Extensive experiments on four commonly used benchmark datasets, i.e., CIFAR-10, CIFAR-100, Clothing-1M and WebVision-1.0, demonstrate that our proposed method considerably outperforms state-of-the-art methods.
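To make the neighborhood-based re-estimation concrete, here is a minimal NumPy sketch of the idea; the function name, the value of k, and the agreement score are our own illustration under the assumption of L2-normalized embeddings, not the paper's exact implementation:

```python
import numpy as np

def neighborhood_reliability(features, probs, labels, k=10):
    """Score each sample's observed label by how much probability mass
    its k nearest feature-space neighbors collectively place on it.
    features: (N, D) L2-normalized embeddings; probs: (N, C) softmax
    outputs; labels: (N,) observed, possibly noisy labels."""
    sim = features @ features.T            # cosine similarity
    np.fill_diagonal(sim, -np.inf)         # exclude self-matches
    nn_idx = np.argsort(-sim, axis=1)[:, :k]
    neigh_probs = probs[nn_idx].mean(axis=1)   # (N, C) collective estimate
    return neigh_probs[np.arange(len(labels)), labels]

# low-reliability samples go to the noisy subset; the same collective
# estimate can supply a corrected label via neigh_probs.argmax(axis=1)
```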
Deep neural networks are known to be annotation-hungry. Numerous efforts have been devoted to reducing the annotation cost when learning with deep networks. Two prominent directions include learning with noisy labels and semi-supervised learning by exploiting unlabeled data. In this work, we propose DivideMix, a novel framework for learning with noisy labels by leveraging semi-supervised learning techniques. In particular, DivideMix models the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set with clean samples and an unlabeled set with noisy samples, and trains the model on both the labeled and unlabeled data in a semi-supervised manner. To avoid confirmation bias, we simultaneously train two diverged networks where each network uses the dataset division from the other network. During the semi-supervised training phase, we improve the MixMatch strategy by performing label co-refinement and label co-guessing on labeled and unlabeled samples, respectively. Experiments on multiple benchmark datasets demonstrate substantial improvements over state-of-the-art methods. Code is available at https://github.com/LiJunnan1992/DivideMix.
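The loss-modeling step lends itself to a compact sketch. Below is one common realization with scikit-learn; the min-max normalization, the 0.5 split threshold, and the regularization value are illustrative choices rather than the authors' exact settings:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def clean_probability(losses):
    """Fit a two-component GMM to normalized per-sample losses and
    return each sample's posterior of belonging to the low-loss
    (presumed clean) component."""
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses.reshape(-1, 1))
    post = gmm.predict_proba(losses.reshape(-1, 1))
    return post[:, gmm.means_.argmin()]   # smaller-mean component = clean

# w = clean_probability(per_sample_ce_losses)
# labeled_set = w > 0.5; unlabeled_set = ~labeled_set
```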
Recent advances in deep learning rely on large labeled datasets to train high-capacity models. However, collecting large datasets in a time- and cost-efficient manner often results in label noise. We present a method for learning from noisy labels that leverages similarities between training examples in feature space, encouraging the prediction of each example to be similar to those of its nearest neighbors. Compared to training algorithms that use multiple models or distinct training stages, our approach takes the form of a simple, additional regularization term. It can be interpreted as an inductive version of the classical, transductive label propagation algorithm. We thoroughly evaluate our method on datasets with both synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, WebVision, Clothing1M, mini-ImageNet-Red) noise, and achieve competitive or state-of-the-art accuracy on all of them.
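A hedged PyTorch sketch of such a neighbor-consistency regularizer is given below; the top-k weighting and the KL direction are our assumptions about one reasonable form, not necessarily the paper's:

```python
import torch
import torch.nn.functional as F

def neighbor_consistency_loss(feats, logits, k=10, eps=1e-8):
    """Pull each sample's prediction toward a similarity-weighted
    average of its k nearest in-batch neighbors' predictions.
    feats: (B, D) batch embeddings; logits: (B, C) classifier outputs."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t()
    sim.fill_diagonal_(-float('inf'))                # no self-neighbors
    topk_sim, topk_idx = sim.topk(k, dim=1)
    w = F.softmax(topk_sim, dim=1)                   # (B, k) neighbor weights
    probs = F.softmax(logits, dim=1)
    neigh = (w.unsqueeze(-1) * probs[topk_idx]).sum(dim=1)  # (B, C)
    log_p = F.log_softmax(logits, dim=1)
    return F.kl_div(log_p, neigh.detach() + eps, reduction='batchmean')

# total = F.cross_entropy(logits, labels) + alpha * neighbor_consistency_loss(feats, logits)
```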
Despite the great progress in supervised learning with neural networks, obtaining high-quality, large-scale and accurately labeled datasets remains a significant challenge. In this context, we address in this paper the problem of classification in the presence of label noise, more specifically of both closed-set and open-set label noise, i.e., when the true label of a sample may or may not belong to the set of given labels. Our approach combines a sample-selection mechanism that relies on the consistency between a sample's annotated label and the distribution of labels in its feature-space neighborhood; a relabeling mechanism that relies on the confidence of the classifier across subsequent iterations; and a training strategy that trains the encoder together with the classifier, with the cross-entropy loss computed on the selected samples alone. Without bells and whistles such as co-training to reduce self-confirmation bias, and robust to the settings of its few hyperparameters, our method significantly surpasses previous methods on both CIFAR10/CIFAR100 with artificial noise and on real-world noise datasets such as WebVision and ANIMAL-10N.
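The neighborhood-consistency selection rule can be sketched as follows; the function name, the neighbor count k, and the confidence threshold are our illustrative choices:

```python
import numpy as np

def select_and_relabel(features, labels, probs, k=20, conf_thresh=0.9):
    """Selection: keep a sample when its annotated label matches the
    majority label among its k feature-space neighbors.  Relabeling:
    overwrite a label when the classifier is highly confident.
    features: (N, D) L2-normalized; labels: (N,) ints; probs: (N, C)."""
    sim = features @ features.T
    np.fill_diagonal(sim, -np.inf)
    nn_idx = np.argsort(-sim, axis=1)[:, :k]
    nn_labels = labels[nn_idx]                       # (N, k)
    counts = np.apply_along_axis(np.bincount, 1, nn_labels,
                                 minlength=probs.shape[1])
    selected = counts.argmax(axis=1) == labels
    confident = probs.max(axis=1) > conf_thresh
    labels = np.where(confident, probs.argmax(axis=1), labels)
    return selected, labels
```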
Learning with noisy labels has attracted much research interest, since data annotation, especially for large-scale datasets, may inevitably be imperfect. Recent approaches recast the problem as semi-supervised learning by dividing the training samples into a clean set and a noisy set. This paradigm, however, is prone to significant degeneration under heavy label noise, as the number of clean samples is too small for conventional methods to behave well. In this paper, we introduce a novel framework, termed LC-Booster, to explicitly tackle learning under extreme noise. The core idea of LC-Booster is to incorporate label correction into the sample selection, so that more purified samples, obtained through reliable label correction, can be used for training, thereby alleviating confirmation bias. Experiments show that LC-Booster advances the state-of-the-art on several noisy-label benchmarks, including CIFAR-10, CIFAR-100, Clothing1M and WebVision. Remarkably, under the extreme 90% noise ratio, LC-Booster achieves 92.9% and 48.4% accuracy on CIFAR-10 and CIFAR-100, respectively, surpassing state-of-the-art methods by a large margin.
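A toy illustration of folding label correction into the selection step might look like the following; all names and thresholds are ours, and the actual method is considerably more careful:

```python
import numpy as np

def boosted_split(losses, probs, labels, relabel_thresh=0.95, loss_quantile=0.5):
    """First relabel samples the model predicts with very high
    confidence, then split by small loss as usual, so that the clean
    set can grow even under heavy noise."""
    conf, pred = probs.max(axis=1), probs.argmax(axis=1)
    corrected = np.where(conf > relabel_thresh, pred, labels)
    clean = losses < np.quantile(losses, loss_quantile)
    return corrected, clean
```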
Noisy labels damage the performance of deep networks. For robust learning, a prominent two-stage pipeline alternates between eliminating possibly incorrect labels and semi-supervised training. However, discarding part of the observed labels can cause a loss of information, especially when the corruption is not completely random, e.g., class-dependent or instance-dependent. Moreover, from the training dynamics of the representative two-stage method DivideMix, we identify the domination of confirmation bias: pseudo-labels fail to correct a considerable amount of noisy labels, and errors consequently accumulate. To fully exploit the observed labels and mitigate wrong corrections, we propose Robust Label Refurbishment (Robust LR), a new hybrid method that integrates pseudo-labeling and confidence-estimation techniques to refurbish noisy labels. We show that our method successfully alleviates the damage of both label noise and confirmation bias. As a result, it achieves state-of-the-art results across datasets and noise types. For example, Robust LR achieves up to 4.5% absolute top-1 accuracy improvement over the previous best method on the real-world noisy dataset WebVision.
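Label refurbishment, as opposed to label discarding, can be sketched in a few lines; the convex-combination form below is one common realization and the variable names are ours:

```python
import numpy as np

def refurbish_labels(labels_onehot, probs, confidence):
    """Blend each observed one-hot label with the model's pseudo-label,
    weighted by an estimated per-sample confidence (in [0, 1]) that the
    observed label is correct; nothing is thrown away."""
    w = confidence[:, None]                          # (N, 1)
    return w * labels_onehot + (1.0 - w) * probs     # (N, C) soft targets

# a confidence estimate could come, e.g., from the clean probability of
# a loss-based mixture model as in the two-stage pipelines above
```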
Annotating the dataset with high-quality labels is crucial for the performance of deep networks, but in real-world scenarios labels are often contaminated by noise. To address this, some methods were proposed to automatically split clean and noisy labels and learn a semi-supervised learner in a Learning with Noisy Labels (LNL) framework. However, they leverage a handcrafted module for clean-noisy label splitting, which induces a confirmation bias in the semi-supervised learning phase and limits the performance. In this paper, we for the first time present a learnable module for clean-noisy label splitting, dubbed SplitNet, and a novel LNL framework which complementarily trains the SplitNet and the main network for the LNL task. We propose to use a dynamic threshold based on SplitNet's split confidence to better optimize the semi-supervised learner. To enhance SplitNet training, we also present a risk hedging method. Our proposed method performs at a state-of-the-art level, especially in high-noise-ratio settings, on various LNL benchmarks.
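As a loose illustration of replacing a handcrafted splitting rule with a learnable one, the hypothetical module below scores per-sample statistics with a small MLP; it is a stand-in sketch, not SplitNet's actual architecture or training objective:

```python
import torch
import torch.nn as nn

class TinySplitNet(nn.Module):
    """Hypothetical learnable clean/noisy splitter: a small MLP scoring
    per-sample statistics (e.g., loss, confidence, peer agreement)
    instead of a handcrafted mixture-model rule."""
    def __init__(self, in_dim=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(),
                                 nn.Linear(16, 1))

    def forward(self, stats):            # (N, in_dim) -> (N,) clean prob
        return torch.sigmoid(self.net(stats)).squeeze(1)

# dynamic threshold from split confidence (illustrative): rather than a
# fixed 0.5, admit samples scored above the splitter's own batch mean:
# clean = split_prob > split_prob.mean()
```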
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity. Despite the promise, the performance of PLL often lags behind the supervised counterpart. In this work, we bridge the gap by addressing two key research challenges in PLL -- representation learning and label disambiguation -- in one coherent framework. Specifically, our proposed framework PiCO consists of a contrastive learning module along with a novel class prototype-based label disambiguation algorithm. PiCO produces closely aligned representations for examples from the same classes and facilitates label disambiguation. Theoretically, we show that these two components are mutually beneficial, and can be rigorously justified from an expectation-maximization (EM) algorithm perspective. Moreover, we study a challenging yet practical noisy partial label learning setup, where the ground-truth may not be included in the candidate set. To remedy this problem, we present an extension PiCO+ that performs distance-based clean sample selection and learns robust classifiers by a semi-supervised contrastive learning algorithm. Extensive experiments demonstrate that our proposed methods significantly outperform the current state-of-the-art approaches in standard and noisy PLL tasks and even achieve comparable results to fully supervised learning.
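A minimal sketch of prototype-based label disambiguation is shown below; the similarity scoring, the EMA update, and the momentum value are our assumptions about one plausible form, not PiCO's exact procedure:

```python
import numpy as np

def disambiguate(emb, candidate_mask, prototypes, momentum=0.99):
    """Each example picks, among its candidate set, the class whose
    prototype is closest in embedding space; prototypes are then
    refreshed by a moving average.
    emb: (N, D) L2-normalized embeddings; candidate_mask: (N, C) binary;
    prototypes: (C, D) L2-normalized class prototypes."""
    scores = (emb @ prototypes.T).astype(float)   # (N, C) similarities
    scores[candidate_mask == 0] = -np.inf         # restrict to candidates
    pseudo = scores.argmax(axis=1)
    for i, c in enumerate(pseudo):                # EMA prototype update
        prototypes[c] = momentum * prototypes[c] + (1 - momentum) * emb[i]
        prototypes[c] /= np.linalg.norm(prototypes[c]) + 1e-8
    return pseudo, prototypes
```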
We approach the problem of improving robustness of deep learning algorithms in the presence of label noise. Building upon existing label correction and co-teaching methods, we propose a novel training procedure to mitigate the memorization of noisy labels, called CrossSplit, which uses a pair of neural networks trained on two disjoint parts of the dataset. CrossSplit combines two main ingredients: (i) Cross-split label correction. The idea is that, since the model trained on one part of the data cannot memorize example-label pairs from the other part, the training labels presented to each network can be smoothly adjusted by using the predictions of its peer network; (ii) Cross-split semi-supervised training. A network trained on one part of the data also uses the unlabeled inputs of the other part. Extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and mini-WebVision datasets demonstrate that our method can outperform the current state-of-the-art up to 90% noise ratio.
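Ingredient (i) can be illustrated in one line of arithmetic; the fixed mixing coefficient below is a simplification of whatever schedule the method actually uses:

```python
import numpy as np

def cross_split_targets(labels_onehot, peer_probs, alpha=0.5):
    """Toy cross-split label correction: the targets shown to network A
    on its half of the data are softened with the predictions of peer
    network B, which never saw that half and so cannot have memorized
    its noisy example-label pairs."""
    return alpha * labels_onehot + (1 - alpha) * peer_probs
```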
Imperfect labels are ubiquitous in real-world datasets and severely harm model performance. Several recent effective methods for handling noisy labels follow two key steps: 1) dividing samples into clean and noisy sets according to the training loss, and 2) using semi-supervised methods to generate pseudo-labels for the samples in the mislabeled set. However, because hard samples and noisy ones exhibit similar loss distributions, current methods routinely sacrifice informative hard samples. In this paper, we propose PGDF (Prior Guided Denoising Framework), a novel framework that learns a deep model to suppress noise by generating prior knowledge about the samples, which is integrated into both the sample-dividing step and the semi-supervised step. Our framework can retain more informative hard clean samples in the cleanly labeled set. Moreover, it improves the quality of pseudo-labels during the semi-supervised step by suppressing the noise in the pseudo-label generation scheme. To further enhance the hard samples, we reweight the samples in the cleanly labeled set during training. We evaluated our method on synthetic datasets based on CIFAR-10 and CIFAR-100, as well as on the real-world datasets WebVision and Clothing1M. The results demonstrate substantial improvements over state-of-the-art methods.
The memorization effect of deep neural networks (DNNs) plays a pivotal role in recent label-noise learning methods. To exploit this effect, model-prediction-based methods have been widely adopted, which aim to use the outputs of DNNs in the early stage of learning to correct noisy labels. However, we observe that the model makes mistakes during label prediction, resulting in unsatisfactory performance. By contrast, the features produced in the early stage of learning show better robustness. Inspired by this observation, in this paper we propose a novel feature-embedding-based method for deep learning with label noise, termed label noise dilution (LEND). To be specific, we first compute a similarity matrix based on the current embedded features to capture the local structure of the training data. Then, the noisy supervision signals carried by mislabeled data are overwhelmed by nearby correctly labeled data (i.e., label noise dilution), whose effectiveness is guaranteed by the inherent robustness of the feature embedding. Finally, the training data with diluted labels are used to train a robust classifier. Empirically, we conduct extensive experiments on both synthetic and real-world noisy datasets, comparing LEND with several representative robust learning methods. The results verify the effectiveness of LEND.
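The dilution step admits a compact sketch: propagate one-hot labels through a row-stochastic similarity matrix built from the current embeddings. The k-sparsification and temperature below are our illustrative choices:

```python
import numpy as np

def dilute_labels(features, labels_onehot, k=10, temp=0.1):
    """Build a row-stochastic affinity matrix over the embeddings
    (assumed L2-normalized) and propagate labels through it, so an
    isolated wrong label is overwhelmed by its neighbors' labels."""
    sim = features @ features.T
    np.fill_diagonal(sim, -np.inf)                 # no self-affinity
    # keep only each row's k strongest affinities
    thresh = -np.sort(-sim, axis=1)[:, k - 1:k]    # (N, 1) kth largest
    w = np.where(sim >= thresh, np.exp(sim / temp), 0.0)
    w /= w.sum(axis=1, keepdims=True)
    return w @ labels_onehot                       # (N, C) diluted soft labels
```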
Deep models trained with noisy labels are prone to overfitting and struggle with generalization. Most existing solutions are based on the idealized assumption that the label noise is class-conditional, i.e., instances of the same class share the same noise model that is independent of the features. In practice, real-world noise patterns are usually more fine-grained and instance-dependent, which poses a great challenge, especially in the presence of inter-class imbalance. In this paper, we propose a two-stage clean-sample identification method to address this challenge. First, we employ a class-level feature clustering procedure to identify, early on, the clean samples that lie near the class-wise prediction centers. Notably, we address the class-imbalance problem by aggregating rare classes according to their prediction entropy. Second, for the remaining clean samples that are close to the ground-truth class boundaries (and usually mixed with samples carrying instance-dependent noise), we propose a novel consistency-based classification method that identifies them using the consistency of two classifier heads: the higher the consistency, the more likely a sample is clean. Extensive experiments on several challenging benchmarks demonstrate the superiority of our method over state-of-the-art approaches.
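A rough sketch of the two stages, with our own simplifications (a quantile cut-off and plain head agreement in place of the paper's exact criteria), is given below:

```python
import numpy as np

def two_stage_select(features, labels, probs1, probs2, num_classes, q=0.5):
    """Stage 1: keep samples close to their class's feature center.
    Stage 2: among the rest, keep those on which two classifier heads
    agree with the given label, since consistency near the class
    boundary signals a clean label."""
    centers = np.stack([features[labels == c].mean(0)
                        for c in range(num_classes)])      # (C, D)
    dist = np.linalg.norm(features - centers[labels], axis=1)
    near_center = dist < np.quantile(dist, q)
    heads_agree = probs1.argmax(1) == probs2.argmax(1)
    return near_center | (heads_agree & (probs1.argmax(1) == labels))
```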
Semi-supervised learning based methods are current SOTA solutions to the noisy-label learning problem, which rely on learning an unsupervised label cleaner first to divide the training samples into a labeled set for clean data and an unlabeled set for noise data. Typically, the cleaner is obtained via fitting a mixture model to the distribution of per-sample training losses. However, the modeling procedure is \emph{class agnostic} and assumes the loss distributions of clean and noise samples are the same across different classes. Unfortunately, in practice, such an assumption does not always hold due to the varying learning difficulty of different classes, thus leading to sub-optimal label noise partition criteria. In this work, we reveal this long-ignored problem and propose a simple yet effective solution, named \textbf{C}lass \textbf{P}rototype-based label noise \textbf{C}leaner (\textbf{CPC}). Unlike previous works treating all the classes equally, CPC fully considers loss distribution heterogeneity and applies class-aware modulation to partition the clean and noise data. CPC takes advantage of loss distribution modeling and intra-class consistency regularization in feature space simultaneously and thus can better distinguish clean and noise labels. We theoretically justify the effectiveness of our method by explaining it from the Expectation-Maximization (EM) framework. Extensive experiments are conducted on the noisy-label benchmarks CIFAR-10, CIFAR-100, Clothing1M and WebVision. The results show that CPC consistently brings about performance improvement across all benchmarks. Codes and pre-trained models will be released at \url{https://github.com/hjjpku/CPC.git}.
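One way to realize class-aware modulation is to fit the loss mixture per class instead of globally, as in the sketch below (a simplification of CPC's actual modulation, with our own fallback for tiny classes):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def class_aware_clean_prob(losses, labels, num_classes):
    """Fit a separate two-component GMM per class, so an intrinsically
    hard class is not mistaken for a noisy one by a single global
    loss threshold."""
    w = np.zeros_like(losses)
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        if len(idx) < 2:            # too few samples to fit a mixture
            w[idx] = 1.0
            continue
        l = losses[idx].reshape(-1, 1)
        gmm = GaussianMixture(n_components=2, reg_covar=5e-4).fit(l)
        w[idx] = gmm.predict_proba(l)[:, gmm.means_.argmin()]
    return w
```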
Learning against label noise is a vital topic for guaranteeing reliable performance of deep neural networks. Recent research typically performs dynamic noise modeling with the model's output probabilities and loss values, and then separates clean from noisy samples. These methods have achieved notable success. However, unlike cherry-picked data, existing approaches often fail to perform well when facing imbalanced datasets, a common scenario in the real world. We thoroughly investigate this phenomenon and point out two major issues that hinder performance, i.e., inter-class loss distribution discrepancy and misleading predictions due to uncertainty. The first issue is that existing methods often perform class-agnostic noise modeling. However, loss distributions show a significant discrepancy among classes under class imbalance, and class-agnostic noise modeling easily confuses noisy samples with samples from minority classes. The second issue is that models may make misleading predictions due to epistemic and aleatoric uncertainty, so existing methods that rely solely on output probabilities may fail to distinguish confident samples. Inspired by these observations, we propose an Uncertainty-aware Label Correction framework (ULC) to handle label noise on imbalanced datasets. First, we perform epistemic-uncertainty-aware, class-specific noise modeling to identify trustworthy clean samples and refine/discard highly confident true/corrupted labels. Then, we introduce aleatoric uncertainty into the subsequent learning process to prevent noise accumulation during label-noise modeling. We conduct experiments on several synthetic and real-world datasets. The results demonstrate the effectiveness of the proposed method, especially on imbalanced datasets.
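Epistemic uncertainty is commonly estimated with Monte-Carlo dropout; the sketch below shows that generic estimator, which may differ from the paper's exact choice:

```python
import torch

@torch.no_grad()
def epistemic_uncertainty(model, x, passes=10):
    """Keep dropout active at inference and measure the spread of the
    predicted probabilities across stochastic forward passes.  Note:
    model.train() also unfreezes batch-norm statistics; a careful
    implementation would re-enable only the dropout layers."""
    model.train()
    probs = torch.stack([torch.softmax(model(x), dim=1)
                         for _ in range(passes)])         # (P, N, C)
    return probs.mean(0), probs.var(0).mean(1)  # mean prediction, per-sample uncertainty
```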
Sample selection is an effective strategy for mitigating the effect of label noise in robust learning. Typical strategies apply the small-loss criterion to identify clean samples. However, samples lying around the decision boundary are usually entangled with noisy examples and would be discarded by this criterion, leading to severe degeneration of generalization performance. In this paper, we propose a novel selection strategy, Self-Filtering (SFT), which exploits the fluctuation of noisy examples in historical predictions to filter them out, thus avoiding the selection bias of the small-loss criterion against boundary examples. Specifically, we introduce a memory-bank module that stores the historical predictions of each example and is dynamically updated to support selection in subsequent learning iterations. Moreover, to reduce the accumulated error of SFT's sample-selection bias, we design a regularization term that penalizes over-confident output distributions. By increasing the weight of misclassified categories through this term, the loss function is robust to label noise under mild conditions. We conduct extensive experiments on three benchmarks with varying noise types and achieve a new state-of-the-art. Ablation studies and further analysis verify the virtue of SFT for sample selection in robust learning.
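The fluctuation criterion over a prediction memory bank can be sketched as follows; the definition of a fluctuation event (once agreed with the given label, later flipped away) follows the description above, while the array layout is ours:

```python
import numpy as np

def fluctuation_filter(pred_history, labels):
    """pred_history: (T, N) predicted classes over the last T epochs;
    labels: (N,) given labels.  A sample is flagged as noisy if it was
    predicted as its given label at some epoch but flipped away later,
    a pattern small-loss selection cannot see."""
    hit = pred_history == labels[None, :]            # (T, N) per-epoch agreement
    was_right = np.maximum.accumulate(hit, axis=0)   # agreed at any earlier epoch
    fluctuated = (was_right[:-1] & ~hit[1:]).any(axis=0)
    return ~fluctuated                               # True = keep as clean
```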
Training deep neural networks (DNNs) with noisy labels is practically challenging, since inaccurate labels severely degrade the generalization ability of DNNs. Previous efforts tend to handle part or all of the data in a unified denoising flow, identifying noisy data with a coarse small-loss criterion to mitigate the interference of noisy labels, ignoring the fact that noisy samples differ in difficulty, so a rigid and unified data-selection pipeline cannot address the problem well. In this paper, we propose a coarse-to-fine robust learning method called CREMA that handles noisy data in a divide-and-conquer manner. At the coarse level, clean and noisy sets are first separated in terms of credibility in a statistical sense. Since it is practically impossible to categorize all noisy samples correctly, we further process them in a fine-grained manner by modeling the credibility of each sample. Specifically, for the clean set, we deliberately design a memory-based modulation scheme that dynamically adjusts the contribution of each sample during training according to its historical credibility sequence, thus alleviating the effect of noisy samples falsely grouped into the clean set. Meanwhile, for samples categorized into the noisy set, a selective label-update strategy is proposed to correct noisy labels while mitigating the problem of correction error. Extensive experiments on benchmarks of different modalities, including image classification (CIFAR, Clothing1M, etc.) and text recognition (IMDB), with either synthetic or natural semantic noise, demonstrate the superiority and generality of CREMA.
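The memory-based modulation can be caricatured as an exponential moving average over each sample's historical credibility; the sketch below is a toy version with our own normalization, not CREMA's actual scheme:

```python
import numpy as np

def credibility_weights(likelihood_history, momentum=0.9):
    """likelihood_history: sequence of (N,) arrays, one per epoch, e.g.
    the model's probability for each sample's given label.  The loss
    weight of a clean-set sample follows an EMA of this sequence,
    damping samples falsely grouped into the clean set."""
    w = likelihood_history[0]
    for cred in likelihood_history[1:]:
        w = momentum * w + (1 - momentum) * cred
    return w / (w.max() + 1e-8)
```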
The ability to train deep neural networks under label noise is appealing, since imperfectly annotated data are relatively cheap to obtain. State-of-the-art approaches are based on semi-supervised learning (SSL), which selects small-loss examples as clean and then applies SSL techniques for boosted performance. However, the selection step mostly yields a medium-sized clean subset, overlooking a rich set of clean samples. In this work, we propose ProMix, a novel noisy-label learning framework that attempts to maximize the utility of clean samples for boosted performance. Key to our method, we propose a matched high-confidence selection technique that selects examples with high confidence whose predictions match their given labels. Combined with small-loss selection, our method achieves a precision of 99.27 and a recall of 98.22 in detecting clean samples on the CIFAR-10N dataset. Based on such a large set of clean data, ProMix improves the best baseline method by +2.67% on CIFAR-10N and +1.61% on CIFAR-100N. Code and data are available at https://github.com/justherozen/promix
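Matched high-confidence selection is simple enough to state directly; the threshold value below is illustrative:

```python
import numpy as np

def matched_high_confidence(probs, labels, tau=0.99):
    """Keep a sample as clean when the model is highly confident AND
    its prediction agrees with the given label; intended to be unioned
    with the usual small-loss selection to enlarge the clean set."""
    conf, pred = probs.max(axis=1), probs.argmax(axis=1)
    return (conf > tau) & (pred == labels)
```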
Partial label learning (PLL) is a typical weakly supervised learning, where each sample is associated with a set of candidate labels. The basic assumption of PLL is that the ground-truth label must reside in the candidate set. However, this assumption may not be satisfied due to the unprofessional judgment of the annotators, thus limiting the practical application of PLL. In this paper, we relax this assumption and focus on a more general problem, noisy PLL, where the ground-truth label may not exist in the candidate set. To address this challenging problem, we further propose a novel framework called "Automatic Refinement Network (ARNet)". Our method consists of multiple rounds. In each round, we purify the noisy samples through two key modules, i.e., noisy sample detection and label correction. To guarantee the performance of these modules, we start with warm-up training and automatically select the appropriate correction epoch. Meanwhile, we exploit data augmentation to further reduce prediction errors in ARNet. Through theoretical analysis, we prove that our method is able to reduce the noise level of the dataset and eventually approximate the Bayes optimal classifier. To verify the effectiveness of ARNet, we conduct experiments on multiple benchmark datasets. Experimental results demonstrate that our ARNet is superior to existing state-of-the-art approaches in noisy PLL. Our code will be made public soon.
Deep learning with noisy labels is a practically challenging problem in weakly supervised learning. The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels. In this paper, we start from a different perspective and propose a robust learning paradigm called JoCoR, which aims to reduce the diversity of two networks during training. Specifically, we first use two networks to make predictions on the same mini-batch of data and calculate a joint loss with Co-Regularization for each training example. Then we select small-loss examples to update the parameters of both networks simultaneously. Trained with the joint loss, the two networks become more and more similar due to the effect of Co-Regularization. Extensive experimental results on corrupted data from benchmark datasets including MNIST, CIFAR-10, CIFAR-100 and Clothing1M demonstrate that JoCoR is superior to many state-of-the-art approaches for learning with noisy labels.
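A hedged PyTorch sketch of the joint loss is given below; the symmetric-KL form follows the description above, while lam and the keep ratio are illustrative values rather than the paper's settings:

```python
import torch
import torch.nn.functional as F

def jocor_style_loss(logits1, logits2, labels, lam=0.1, keep_ratio=0.7):
    """Per-example cross-entropy of both networks plus a symmetric KL
    term pulling their predictions together; only the small-loss
    examples under this joint loss update both networks."""
    ce = (F.cross_entropy(logits1, labels, reduction='none')
          + F.cross_entropy(logits2, labels, reduction='none'))
    p1 = F.log_softmax(logits1, dim=1)
    p2 = F.log_softmax(logits2, dim=1)
    kl = (F.kl_div(p1, p2.exp(), reduction='none').sum(1)
          + F.kl_div(p2, p1.exp(), reduction='none').sum(1))
    per_example = (1 - lam) * ce + lam * kl
    keep = per_example.topk(int(keep_ratio * len(labels)),
                            largest=False).indices
    return per_example[keep].mean()
```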
Inspired by the remarkable zero-shot generalization capacity of vision-language pre-trained models, we seek to leverage the supervision of the CLIP model to alleviate the burden of data labeling. However, such supervision inevitably contains label noise, which significantly degrades the discriminative power of the classification model. In this work, we propose Transductive CLIP, a novel framework for learning a classification network with noisy labels from scratch. First, a class-conditioned contrastive learning mechanism is proposed to reduce the reliance on pseudo-labels and boost tolerance to noisy labels. Second, ensemble labels are adopted as a pseudo-label updating strategy to stabilize the training of deep neural networks with noisy labels. By combining the two techniques, this framework can effectively reduce the impact of noisy labels from the CLIP model. Experiments on multiple benchmark datasets demonstrate substantial improvements over other state-of-the-art methods.
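The ensemble-label update can be caricatured as a moving average of predicted distributions; the momentum value below is our choice, not the paper's:

```python
import numpy as np

def update_ensemble_labels(ensemble, probs, momentum=0.9):
    """Accumulate the model's per-epoch predictions (N, C) into a
    moving-average target so that a single noisy prediction cannot
    flip the training pseudo-label."""
    ensemble = momentum * ensemble + (1 - momentum) * probs
    return ensemble / ensemble.sum(axis=1, keepdims=True)
```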