Meta-learning is an effective method for handling learning with imbalanced and noisy labels, but it depends on a validation set containing randomly selected, manually labelled and balanced samples. The random selection, manual labelling and balancing of that validation set is not only sub-optimal for meta-learning, but it also scales poorly with the number of classes. Hence, recent meta-learning papers have proposed ad-hoc heuristics to automatically build and label this validation set, but these heuristics remain sub-optimal for meta-learning. In this paper, we analyse the meta-learning algorithm and propose new criteria to characterise the utility of the validation set, based on: 1) the informativeness of the validation set; 2) the class-distribution balance of the set; and 3) the correctness of the labels of the set. Furthermore, we propose a new imbalanced noisy-label meta-learning (INOLML) algorithm that automatically builds the validation set by maximising its utility using the criteria above. Our method shows significant improvements over previous meta-learning approaches and sets a new state of the art on several benchmarks.
Deep neural network models are robust to a limited amount of label noise, but their ability to memorise noisy labels in high-noise-rate problems is still an open issue. The most competitive noisy-label learning algorithms rely on a two-stage process comprising an unsupervised learning stage that classifies training samples as clean or noisy, followed by a semi-supervised learning stage that minimises the empirical vicinal risk (EVR) using a labelled set formed by the samples classified as clean and an unlabelled set formed by the samples classified as noisy. In this paper, we hypothesise that the generalisation of such two-stage noisy-label learning methods depends on the precision of the unsupervised classifier and on the size of the training set used to minimise the EVR. We empirically validate these two hypotheses and propose the new two-stage noisy-label training algorithm LongReMix. We test LongReMix on the noisy-label benchmarks CIFAR-10, CIFAR-100, WebVision, Clothing1M and Food101-N. The results show that LongReMix generalises better than competing approaches, particularly in high-label-noise problems. Furthermore, our approach achieves state-of-the-art performance on most datasets. The code is available at https://github.com/filipe-research/longremix.
Learning with noisy-labels has become an important research topic in computer vision where state-of-the-art (SOTA) methods explore: 1) prediction disagreement with co-teaching strategy that updates two models when they disagree on the prediction of training samples; and 2) sample selection to divide the training set into clean and noisy sets based on small training loss. However, the quick convergence of co-teaching models to select the same clean subsets, combined with relatively fast overfitting of noisy labels, may induce the wrong selection of noisy-label samples as clean, leading to an inevitable confirmation bias that damages accuracy. In this paper, we introduce our noisy-label learning approach, called Asymmetric Co-teaching (AsyCo), which relies on a novel prediction disagreement that produces more consistently divergent results between the co-teaching models, and a new sample selection approach that does not require the small-loss assumption, enabling better robustness to confirmation bias than previous methods. More specifically, the new prediction disagreement is achieved with the use of different training strategies, where one model is trained with multi-class learning and the other with multi-label learning. Also, the new sample selection is based on multi-view consensus, which uses the label views from training labels and model predictions to divide the training set into clean and noisy for training the multi-class model and to re-label the training samples with multiple top-ranked labels for training the multi-label model. Extensive experiments on synthetic and real-world noisy-label datasets show that AsyCo improves over current SOTA methods.
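As a rough illustration of the multi-view consensus idea above, the following Python sketch splits a training set using two label views taken from the co-teaching models' predictions; the function name, the exact agreement rule and the top-k re-labelling are assumptions, not the paper's verified implementation.

```python
import torch

def multi_view_consensus(train_labels, pred_mc, pred_ml, top_k=3):
    """Hypothetical sketch of a consensus-based clean/noisy split.

    train_labels: (N,) given (possibly noisy) labels
    pred_mc:      (N, C) softmax outputs of the multi-class model
    pred_ml:      (N, C) sigmoid outputs of the multi-label model
    """
    view_mc = pred_mc.argmax(dim=1)            # label view 1
    view_ml = pred_ml.argmax(dim=1)            # label view 2
    # clean: the given label agrees with both prediction views (assumed rule)
    clean_mask = (view_mc == train_labels) & (view_ml == train_labels)
    # re-label with the top-k ranked labels for the multi-label model
    topk_labels = pred_mc.topk(top_k, dim=1).indices
    return clean_mask, topk_labels
```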
Real-world large-scale medical image analysis (MIA) datasets face three challenges: 1) they contain noisily labelled samples that affect training convergence and generalisation, 2) they usually have an imbalanced distribution of samples per class, and 3) they normally comprise a multi-label problem, where samples can carry multiple diagnoses. Current approaches are commonly trained to solve a subset of these problems, but we are unaware of methods that address all three simultaneously. In this paper, we propose a new training module called Non-Volatile Unbiased Memory (NVUM), which non-volatilely stores a running average of the model logits for a new regularisation loss on the noisy multi-label problem. We further unbias the classification prediction in the NVUM update to address the imbalanced learning problem. We run extensive experiments to evaluate NVUM on the new benchmark proposed in this paper, where training is performed on a noisy multi-label imbalanced chest X-ray (CXR) training set formed from Chest-Xray14 and CheXpert, and testing is performed on the clean multi-label CXR datasets OpenI and PadChest. Our method outperforms previous state-of-the-art CXR classifiers and previous methods that can deal with noisy labels on all evaluations. Our code is available at https://github.com/fbladl/nvum.
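The abstract does not spell out the exact memory update or regularisation loss, but a minimal sketch of a per-sample logit memory might look as follows, assuming an exponential-moving-average update and an L2 pull-back regulariser (both are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

class LogitMemory:
    """Sketch of a per-sample running memory of model logits."""
    def __init__(self, num_samples, num_classes, beta=0.9):
        self.mem = torch.zeros(num_samples, num_classes)
        self.beta = beta  # assumed EMA coefficient

    def update(self, idx, logits):
        # running average of logits for the samples indexed by idx
        self.mem[idx] = self.beta * self.mem[idx] + (1 - self.beta) * logits.detach()

    def reg_loss(self, idx, logits):
        # pull current logits towards the stored average (assumed regulariser)
        return F.mse_loss(logits, self.mem[idx])
```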
Noisy labels are unavoidable yet troublesome in the ecosystem of deep learning because models can easily overfit them. There are many types of label noise, such as symmetric, asymmetric and instance-dependent noise (IDN), with IDN being the only type that depends on image information. Given that label mistakes are largely caused by insufficient or ambiguous visual-class information present in the image, this dependence on image information makes IDN a critical type of label noise to study. To provide an effective technique to address IDN, we present a new graphical-modelling approach called InstanceGM, which combines discriminative and generative models. The main contributions of InstanceGM are: i) the use of the continuous Bernoulli distribution to train the generative model, offering significant training advantages, and ii) the exploration of a state-of-the-art noisy-label discriminative classifier to generate clean labels from instance-dependent noisy-label samples. InstanceGM is competitive with current noisy-label learning approaches, particularly in IDN benchmarks using synthetic and real-world datasets, where our method shows better accuracy than the competitors in most experiments.
Current deep neural networks (DNNs) can easily overfit to biased training data with corrupted labels or class imbalance. The sample re-weighting strategy is commonly used to alleviate this issue by designing a weighting function mapping from training loss to sample weight, and then iterating between weight recalculation and classifier updating. Current approaches, however, need to manually pre-specify the weighting function as well as its additional hyper-parameters, which makes them fairly hard to apply in practice, since the proper weighting scheme varies significantly with the investigated problem and training data. To address this issue, we propose a method capable of adaptively learning an explicit weighting function directly from data. The weighting function is an MLP with one hidden layer, constituting a universal approximator to almost any continuous function, which makes the method able to fit a wide range of weighting functions, including those assumed in conventional research. Guided by a small amount of unbiased meta-data, the parameters of the weighting function can be finely updated simultaneously with the learning process of the classifier. Synthetic and real experiments substantiate the capability of our method to achieve proper weighting functions in class-imbalance and noisy-label cases, fully complying with the common settings in traditional methods, as well as in more complicated scenarios beyond conventional cases. This naturally leads to better accuracy than other state-of-the-art methods. Source code is available at https://github.com/xjtushujun/meta-weight-net. (We call the training data biased when they are generated from a joint sample-label distribution deviating from the distribution of the evaluation/test set [1].)
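The weighting function described above, an MLP with one hidden layer mapping a per-sample training loss to a sample weight, can be sketched in PyTorch as below; the hidden size is an assumption, and the bilevel meta-update on the unbiased meta-data is omitted for brevity:

```python
import torch
import torch.nn as nn

class MetaWeightNet(nn.Module):
    """One-hidden-layer MLP mapping a scalar training loss to a sample weight."""
    def __init__(self, hidden=100):  # hidden size is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),            # weights constrained to (0, 1)
        )

    def forward(self, per_sample_loss):   # (B, 1) -> (B, 1)
        return self.net(per_sample_loss)

def weighted_loss(logits, labels, weight_net):
    # per-sample cross-entropy re-weighted by the learned function
    losses = nn.functional.cross_entropy(logits, labels, reduction="none")
    weights = weight_net(losses.detach().unsqueeze(1)).squeeze(1)
    return (weights * losses).mean()
```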
In this paper, we present a simple yet effective method (ABSGD) for addressing the data imbalance issue in deep learning. Our method is a simple modification to momentum SGD where we leverage an attentional mechanism to assign an individual importance weight to each gradient in the mini-batch. Unlike many existing heuristic-driven methods for tackling data imbalance, our method is grounded in theoretically justified distributionally robust optimization (DRO), which is guaranteed to converge to a stationary point of an information-regularized DRO problem. The individual-level weight of a sampled data is systematically proportional to the exponential of a scaled loss value of the data, where the scaling factor is interpreted as the regularization parameter in the framework of information-regularized DRO. Compared with existing class-level weighting schemes, our method can capture the diversity between individual examples within each class. Compared with existing individual-level weighting methods using meta-learning that require three backward propagations for computing mini-batch stochastic gradients, our method is more efficient with only one backward propagation at each iteration as in standard deep learning methods. To balance between the learning of feature extraction layers and the learning of the classifier layer, we employ a two-stage method that uses SGD for pretraining followed by ABSGD for learning a robust classifier and finetuning lower layers. Our empirical studies on several benchmark datasets demonstrate the effectiveness of the proposed method.
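A minimal sketch of the individual-level weighting rule, where each sample's weight is proportional to the exponential of its scaled loss; the within-batch softmax normalisation shown here is an assumption:

```python
import torch

def absgd_minibatch_weights(losses, lam):
    """Attentional weights proportional to exp(loss / lam), normalised within
    the mini-batch; lam plays the role of the DRO regularisation parameter
    described above (normalisation scheme is assumed)."""
    w = torch.softmax(losses.detach() / lam, dim=0)
    return w * losses.numel()   # keep the average weight at 1

# usage inside a training step (illustrative):
# losses = F.cross_entropy(logits, labels, reduction="none")
# loss = (absgd_minibatch_weights(losses, lam=5.0) * losses).mean()
```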
Label noise significantly degrades the generalisation ability of deep models in applications. Effective strategies and approaches, e.g. re-weighting or loss rectification, have been designed to alleviate the negative impact of label noise when training neural networks. Those existing works usually rely on a pre-specified architecture and manually tuned additional hyper-parameters. In this paper, we propose warped probabilistic inference (WarPI) to adaptively rectify the training procedure of the classification network within a meta-learning scenario. Compared with deterministic models, WarPI is formulated as a hierarchical probabilistic model by learning an amortised meta-network, which can resolve sample ambiguity and is therefore more robust to serious label noise. Unlike existing approximated weighting functions that directly produce weight values from losses, our meta-network is learned to estimate a rectifying vector from the input of logits and labels, which has the capability of leveraging the sufficient information lying in them. This provides an effective way to rectify the learning procedure of the classification network, demonstrating a significant improvement in generalisation ability. Besides, modelling the rectifying vector as a latent variable and learning the meta-network can be seamlessly integrated into the SGD optimisation of the classification network. We evaluate WarPI on four benchmarks of robust learning with noisy labels and achieve the new state of the art under various noise types. Extensive studies and analyses also demonstrate the effectiveness of our model.
Despite the large progress in supervised learning with neural networks, there are significant challenges in obtaining high-quality, large-scale and accurately labelled datasets. In this context, we address in this paper the problem of classification in the presence of label noise, and more specifically of both closed-set and open-set label noise, that is, when the true label of a sample may or may not belong to the set of given labels. In our approach, the method consists of a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its neighbourhood in feature space; a relabelling mechanism that relies on the confidence of the classifier across subsequent iterations; and a training strategy that trains the encoder while training the classifier-encoder with a cross-entropy loss on the selected samples only. Without bells and whistles such as co-training to reduce self-confirmation bias, and robust to the settings of its few hyper-parameters, our method significantly surpasses previous methods both on CIFAR10/CIFAR100 with artificial noise and on real-world noise datasets such as WebVision and ANIMAL-10N.
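A possible sketch of the neighbourhood-consistency score used for sample selection, assuming a k-nearest-neighbour search in the encoder's feature space (the value of k and the use of scikit-learn are illustrative assumptions):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighbourhood_consistency(features, labels, k=10):
    """Fraction of each sample's k nearest neighbours (in feature space)
    that share its annotated label; a high value suggests a clean label."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)        # idx[:, 0] is the sample itself
    neighbour_labels = labels[idx[:, 1:]]   # drop the self-match
    return (neighbour_labels == labels[:, None]).mean(axis=1)
```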
Most existing methods that cope with noisy labels usually assume that the class distributions are well balanced, and they therefore have insufficient ability to handle the practical scenario where training samples follow an imbalanced distribution. To this end, this paper makes an early effort to tackle the image classification task under both a long-tailed distribution and label noise. In this scenario, existing noise-learning methods cannot work properly, because differentiating noisy samples from clean samples of the tail classes is challenging. To deal with this problem, we propose a new learning paradigm based on matching between inferences on weak and strong data augmentations to screen out noisy samples, and we introduce a leave-noise-out regularisation to eliminate the effect of the recognised noisy samples. Furthermore, we incorporate a novel prediction penalty based on the online prior distribution to avoid bias towards the head classes. This mechanism is superior to existing long-tailed classification methods in capturing class-fitting status in real time. Exhaustive experiments demonstrate that the proposed method outperforms state-of-the-art algorithms in solving the problem of imbalanced distributions in long-tailed classification under noisy labels.
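The weak/strong-augmentation screening rule might be sketched as follows; the exact agreement criterion is an assumption, since the abstract only states that noisy samples are screened by matching inferences on the two augmented views:

```python
import torch

def screen_noisy(pred_weak, pred_strong, labels):
    """Keep a sample when the predictions on its weakly and strongly
    augmented views agree with each other and with the given label
    (assumed matching rule)."""
    weak = pred_weak.argmax(dim=1)
    strong = pred_strong.argmax(dim=1)
    clean_mask = (weak == strong) & (weak == labels)
    return clean_mask
```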
In this paper, we introduce the novel concept of an advisor network to address the problem of noisy labels in image classification. Deep neural networks (DNN) are prone to performance reduction and overfitting problems on training data with noisy annotations. Weighting-loss methods aim to mitigate the influence of noisy labels during training by completely removing their contribution. This discarding process prevents DNNs from learning wrong associations between images and their correct labels, but reduces the amount of data used, especially when most of the samples have noisy labels. Differently, our method weighs the features extracted by the classifier, without altering the loss value of each sample. The advisor helps to focus only on some part of the information present in mislabelled examples, allowing the classifier to leverage that data as well. We train it with a meta-learning strategy so that it can adapt throughout the training of the main model. We tested our method on CIFAR10 and CIFAR100 with synthetic noise, and on Clothing1M, which contains real-world noise, reporting state-of-the-art results.
Imbalanced data pose challenges for deep-learning-based classification models. One of the most widely used approaches to tackle imbalanced data is re-weighting, where training samples are associated with different weights in the loss function. Most existing re-weighting approaches treat the sample weights as learnable parameters and optimise the weights on a meta set, hence requiring expensive bilevel optimisation. In this paper, we propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view. Specifically, we view the training set as an imbalanced distribution over its samples, which is transported by OT to a balanced distribution obtained from the meta set. The weights of the training samples are the probability mass of the imbalanced distribution and are learned by minimising the OT distance between the two distributions. Compared with existing methods, our proposed one disengages the weight learning from its dependence on the concerned classifier at each iteration. Experiments on image, text and point-cloud datasets demonstrate that our proposed re-weighting method performs excellently, achieving state-of-the-art results in many cases and providing a promising tool for addressing imbalanced classification.
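A minimal sketch of the distributional idea: parametrise the training-sample weights as a probability mass and learn them by minimising an entropic OT (Sinkhorn) distance to a uniform, balanced meta distribution. The entropic relaxation, the Sinkhorn solver and all hyper-parameters here are assumptions for illustration:

```python
import torch

def sinkhorn_cost(w, cost, eps=0.1, iters=50):
    # entropic OT between the weighted training distribution (mass w)
    # and a uniform balanced meta distribution
    m = cost.size(1)
    b = torch.full((m,), 1.0 / m)
    K = torch.exp(-cost / eps)              # Gibbs kernel
    u = torch.ones_like(w)
    for _ in range(iters):                  # Sinkhorn iterations
        v = b / (K.t() @ u)
        u = w / (K @ v)
    plan = u[:, None] * K * v[None, :]
    return (plan * cost).sum()

def learn_weights(cost, steps=200, lr=0.1):
    # cost: pairwise feature distances between training and meta samples
    theta = torch.zeros(cost.size(0), requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        w = torch.softmax(theta, dim=0)     # probability mass on samples
        loss = sinkhorn_cost(w, cost)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(theta, dim=0).detach()
```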
Deep Neural Networks (DNNs) have been shown to be susceptible to memorization or overfitting in the presence of noisily-labelled data. For the problem of robust learning under such noisy data, several algorithms have been proposed. A prominent class of algorithms rely on sample selection strategies wherein, essentially, a fraction of samples with loss values below a certain threshold are selected for training. These algorithms are sensitive to such thresholds, and it is difficult to fix or learn these thresholds. Often, these algorithms also require information such as label noise rates which are typically unavailable in practice. In this paper, we propose an adaptive sample selection strategy that relies only on batch statistics of a given mini-batch to provide robustness against label noise. The algorithm does not have any additional hyperparameters for sample selection, does not need any information on noise rates and does not need access to separate data with clean labels. We empirically demonstrate the effectiveness of our algorithm on benchmark datasets.
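One plausible instantiation of a hyper-parameter-free, batch-statistics selection rule is to keep samples whose loss falls below the mini-batch mean; the paper's exact statistic may differ:

```python
import torch

def select_by_batch_statistics(losses):
    """Adaptive selection with no extra hyper-parameters: keep the samples
    whose loss is below the mini-batch mean (one plausible batch-statistics
    rule, assumed for illustration)."""
    return losses <= losses.mean()

# usage: mask = select_by_batch_statistics(per_sample_losses)
#        loss = per_sample_losses[mask].mean()
```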
A long-tailed dataset comprises head classes with many more training samples than the tail classes, which causes recognition models to be biased towards the head classes. Weighted loss is one of the most popular ways of mitigating this issue, and a recent work has suggested that class difficulty might be a better clue than the conventionally used class frequency to decide the distribution of weights. A heuristic formulation was used in the previous work to quantify the difficulty, but we empirically find that the optimal formulation depends on the characteristics of the dataset. Therefore, we propose Difficulty-Net, which learns to predict the difficulty of classes from the model's performance within a meta-learning framework. To make it learn a reasonable difficulty for a class within the context of the other classes, we newly introduce two key concepts, namely relative difficulty and driver loss. The former helps Difficulty-Net take other classes into account when calculating the difficulty of a class, while the latter is indispensable for guiding the learning in a meaningful direction. Extensive experiments on popular long-tailed datasets demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on multiple long-tailed datasets.
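A rough sketch of a class-difficulty predictor in this spirit; the input (per-class accuracies) and the softmax-style normalisation standing in for "relative difficulty" are assumptions:

```python
import torch
import torch.nn as nn

class DifficultyNet(nn.Module):
    """Sketch: predict per-class difficulty from per-class performance;
    the relative normalisation across classes is an assumed stand-in for
    the paper's relative-difficulty concept."""
    def __init__(self, num_classes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes))

    def forward(self, class_accuracies):        # (C,) accuracies in [0, 1]
        difficulty = self.net(class_accuracies)
        # normalise so each class's difficulty is relative to all classes
        return torch.softmax(difficulty, dim=0) * class_accuracies.numel()
```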
Deep neural networks are known to be annotation-hungry. Numerous efforts have been devoted to reducing the annotation cost when learning with deep networks. Two prominent directions include learning with noisy labels and semi-supervised learning by exploiting unlabeled data. In this work, we propose DivideMix, a novel framework for learning with noisy labels by leveraging semi-supervised learning techniques. In particular, DivideMix models the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set with clean samples and an unlabeled set with noisy samples, and trains the model on both the labeled and unlabeled data in a semi-supervised manner. To avoid confirmation bias, we simultaneously train two diverged networks where each network uses the dataset division from the other network. During the semi-supervised training phase, we improve the MixMatch strategy by performing label co-refinement and label co-guessing on labeled and unlabeled samples, respectively. Experiments on multiple benchmark datasets demonstrate substantial improvements over state-of-the-art methods. Code is available at https://github.com/LiJunnan1992/DivideMix.
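The per-sample loss modelling step can be sketched directly: fit a two-component Gaussian mixture to the losses and treat the posterior of the low-mean component as the probability of a sample being clean (the threshold and GMM settings here are illustrative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def divide_by_loss(losses, p_threshold=0.5):
    """Fit a two-component GMM to per-sample losses; the component with the
    smaller mean is treated as the clean one."""
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses.reshape(-1, 1))
    clean_prob = gmm.predict_proba(losses.reshape(-1, 1))[:, gmm.means_.argmin()]
    labeled_mask = clean_prob > p_threshold    # clean -> labelled set
    return labeled_mask, clean_prob
```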
Imperfect labels are ubiquitous in real-world datasets and seriously harm model performance. Several recent effective methods for handling noisy labels involve two key steps: 1) dividing the samples into clean and noisy sets by their training loss, and 2) using semi-supervised methods to generate pseudo-labels for the samples in the mislabelled set. However, current methods always hurt informative hard samples, owing to the similar loss distributions of hard samples and noisy ones. In this paper, we propose PGDF (Prior Guided Denoising Framework), a new framework that learns a deep model to suppress noise by generating prior knowledge about the samples, which is integrated into both the sample-dividing step and the semi-supervised step. Our framework can save more informative hard clean samples into the cleanly labelled set. Besides, our framework also improves the quality of the pseudo-labels during the semi-supervised step by suppressing the noise in the current pseudo-label generation scheme. To further enhance the hard samples, we re-weight the samples in the cleanly labelled set during training. We evaluated our method on synthetic datasets based on CIFAR-10 and CIFAR-100, as well as on the real-world datasets WebVision and Clothing1M. The results demonstrate substantial improvements over state-of-the-art methods.
The existence of label noise imposes significant challenges (e.g., poor generalization) on the training process of deep neural networks (DNN). As a remedy, this paper introduces a permutation layer learning approach termed PermLL to dynamically calibrate the training process of the DNN subject to instance-dependent and instance-independent label noise. The proposed method augments the architecture of a conventional DNN by an instance-dependent permutation layer. This layer is essentially a convex combination of permutation matrices that is dynamically calibrated for each sample. The primary objective of the permutation layer is to correct the loss of noisy samples mitigating the effect of label noise. We provide two variants of PermLL in this paper: one applies the permutation layer to the model's prediction, while the other applies it directly to the given noisy label. In addition, we provide a theoretical comparison between the two variants and show that previous methods can be seen as one of the variants. Finally, we validate PermLL experimentally and show that it achieves state-of-the-art performance on both real and synthetic datasets.
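For small numbers of classes, the instance-dependent convex combination of permutation matrices can be sketched by enumerating all permutations and gating them with a per-sample softmax; the gating network and the full enumeration are illustrative assumptions (enumeration is infeasible for large C, and the paper's parametrisation may differ):

```python
import itertools
import torch
import torch.nn as nn

class PermutationLayer(nn.Module):
    """Sketch of an instance-dependent convex combination of permutation
    matrices applied to the model's class probabilities."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        perms = list(itertools.permutations(range(num_classes)))
        eye = torch.eye(num_classes)
        # one permutation matrix per permutation of the classes
        self.register_buffer(
            "perm_mats", torch.stack([eye[list(p)] for p in perms]))
        self.gate = nn.Linear(feat_dim, len(perms))  # per-instance mixing

    def forward(self, features, probs):
        alpha = torch.softmax(self.gate(features), dim=1)          # (B, P)
        mix = torch.einsum("bp,pij->bij", alpha, self.perm_mats)   # (B, C, C)
        return torch.bmm(mix, probs.unsqueeze(2)).squeeze(2)       # corrected
```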
Deep models trained with noisy labels are prone to over-fitting and struggle with generalisation. Most existing solutions are based on the idealised assumption that the label noise is class-conditional, i.e., that instances of the same class share the same noise model and are independent of the features. In practice, real-world noise patterns are usually more fine-grained and instance-dependent, which poses a big challenge, especially in the presence of inter-class imbalance. In this paper, we propose a two-stage clean-sample identification method to address this challenge. First, we employ a class-level feature clustering procedure for the early identification of clean samples that lie near the class-wise prediction centres. Notably, we address the class imbalance problem based on the prediction entropy of the rare classes. Second, for the remaining clean samples that are close to the ground-truth class boundary (and usually mixed with samples bearing instance-dependent noise), we propose a novel consistency-based classification method that identifies them using the consistency of two classifier heads: the higher the consistency, the larger the probability that a sample is clean. Extensive experiments on several challenging benchmarks demonstrate the superior performance of our method against the state of the art.
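The first stage, early identification of clean samples near the class-wise centres, might be sketched as follows, with the keep fraction as an assumed hyper-parameter:

```python
import numpy as np

def near_center_clean(features, labels, num_classes, keep_frac=0.5):
    """Stage-one sketch: per class, keep the keep_frac of samples closest
    to the class feature centre as the early clean set."""
    clean = np.zeros(len(labels), dtype=bool)
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        if len(idx) == 0:
            continue
        centre = features[idx].mean(axis=0)
        dist = np.linalg.norm(features[idx] - centre, axis=1)
        keep = idx[np.argsort(dist)[: max(1, int(keep_frac * len(idx)))]]
        clean[keep] = True
    return clean
```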
Training deep neural networks (DNNs) with noisy labels is practically challenging, since inaccurate labels severely degrade the generalisation ability of DNNs. Previous efforts tend to handle part or all of the data in a unified denoising flow, identifying noisy data with a coarse small-loss criterion to mitigate the interference from noisy labels, ignoring the fact that the difficulty of noisy samples differs, so a rigid and unified data-selection pipeline cannot tackle this problem well. In this paper, we propose a coarse-to-fine robust learning method called CREMA, which handles noisy data in a divide-and-conquer manner. At the coarse level, clean and noisy sets are first separated in terms of credibility in a statistical sense. Since it is practically impossible to categorise all noisy samples correctly, we further process them in a fine-grained manner by modelling the credibility of each sample. Specifically, for the clean set, we deliberately design a memory-based modulation scheme to dynamically adjust the contribution of each sample during training in terms of its historical credibility sequence, thus alleviating the effect of noisy samples incorrectly grouped into the clean set. Meanwhile, for samples categorised into the noisy set, a selective label-update strategy is proposed to correct noisy labels while mitigating the problem of correction error. Extensive experiments on benchmarks of different modalities, including image classification (CIFAR, Clothing1M, etc.) and text recognition (IMDB), with either synthetic or natural semantic noise, demonstrate the superiority and generality of CREMA.
Conventional de-noising methods rely on the assumption that all samples are independent and identically distributed, so the resulting classifier, though disturbed by noise, can still easily identify the noise as outliers of the training distribution. However, this assumption is unrealistic for large-scale data, which is inevitably long-tailed. Such imbalanced training data make a classifier less discriminative for the tail classes, whose noise is now turned into "hard" noise: the noisy tail samples are almost as outlying as the clean tail samples. We introduce this new challenge as Noisy Long-Tailed classification (NLT). Not surprisingly, we find that most de-noising methods fail to identify the hard noise, resulting in large performance drops on the three proposed NLT benchmarks: ImageNet-NLT, Animal10-NLT and Food101-NLT. To this end, we design an iterative noisy learning framework called Hard-to-Easy (H2E). Our bootstrapping philosophy is to first learn a classifier as a noise identifier that is invariant to class and context distributional changes, thereby reducing "hard" noise to "easy" noise, whose removal further improves the invariance. Experimental results show that H2E outperforms state-of-the-art de-noising methods and its ablations on long-tailed settings, while maintaining stable performance on the conventional balanced setting. Datasets and code are available at https://github.com/yxymessi/h2e-framework