Noisy labels are inevitable in large real-world datasets. In this work, we explore an area largely overlooked by previous works: how a network's architecture affects its robustness to noisy labels. We provide a formal framework connecting a network's robustness to the alignment between its architecture and the target/noise functions. Our framework measures a network's robustness via the predictive power in its representations: the test performance of a linear model trained on the learned representations using a small set of clean labels. We hypothesize that a network is more robust to noisy labels if its architecture is more aligned with the target function than with the noise. To support our hypothesis, we provide both theoretical and empirical evidence across various neural network architectures and different domains. We also find that when a network is well-aligned with the target function, its predictive power in representations can improve upon state-of-the-art (SOTA) noisy-label training methods in terms of test accuracy, and can even outperform sophisticated methods that use clean labels.
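The measurement described above amounts to a linear probe on frozen representations. Below is a minimal sketch of that evaluation, assuming a `feature_extractor` that maps inputs to representation vectors and a small clean subset (`X_clean`, `y_clean`); the names are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def predictive_power(feature_extractor, X_clean, y_clean, X_test, y_test):
    """Test accuracy of a linear model trained on frozen representations
    using a small cleanly labeled set (a linear probe)."""
    Z_clean = np.stack([feature_extractor(x) for x in X_clean])  # learned representations
    Z_test = np.stack([feature_extractor(x) for x in X_test])
    probe = LogisticRegression(max_iter=1000).fit(Z_clean, y_clean)
    return probe.score(Z_test, y_test)  # higher => more predictive representations
```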
Deep neural networks may easily memorize noisy labels present in real-world data, which degrades their ability to generalize. It is therefore important to track and evaluate the robustness of models against noisy label memorization. We propose a metric, called susceptibility, to gauge such memorization for neural networks. Susceptibility is simple and easy to compute during training. Moreover, it does not require access to ground-truth labels and it only uses unlabeled data. We empirically show the effectiveness of our metric in tracking memorization on various architectures and datasets and provide theoretical insights into the design of the susceptibility metric. Finally, we show through extensive experiments on datasets with synthetic and real-world label noise that one can utilize susceptibility and the overall training accuracy to distinguish models that maintain a low memorization on the training set and generalize well to unseen clean data.
The benefits of over-parameterization for achieving superior generalization performance have been demonstrated in several recent studies, justifying the trend of using larger models in practice. In the context of robust learning, however, the effect of neural network size has not been well studied. In this work, we find that in the presence of a substantial fraction of mislabeled examples, increasing the network size beyond some point can be harmful. In particular, the originally monotonic or "double descent" test loss curve (w.r.t. network width) turns into a U-shaped or double-U-shaped curve when the label noise increases, suggesting that the best generalization is achieved by some model of intermediate size. A similar test loss behavior is observed when the network size is controlled by density through random pruning. We also take a closer look at the phenomenon through a bias-variance decomposition and theoretically characterize how label noise shapes the variance term. Similar behavior of the test loss can be observed even when state-of-the-art robust methods are adopted, indicating that limiting the network size could further improve existing methods. Finally, we empirically examine the effect of network size on the smoothness of learned functions, and find that the originally negative correlation between size and smoothness is flipped by label noise.
The memorization effect of deep neural networks (DNNs) plays a pivotal role in many state-of-the-art label-noise learning methods. To exploit this property, the early stopping trick, which stops the optimization at an early stage of training, is usually adopted. Current methods generally decide the early stopping point by considering the DNN as a whole. However, a DNN can be viewed as a composition of a series of layers, and it has been found that the latter layers in a DNN are much more sensitive to label noise, while their former counterparts are quite robust. Therefore, selecting a stopping point for the whole network may make different DNN layers antagonistically affect each other, degrading the final performance. In this paper, we propose to separate a DNN into different parts and progressively train them to address this problem. Instead of early stopping, which trains the whole DNN all at once, we initially train the former DNN layers by optimizing the DNN with a relatively large number of epochs. During training, we progressively train the latter DNN layers with smaller numbers of epochs, with the earlier layers fixed, to counteract the impact of noisy labels. We term the proposed method Progressive Early Stopping (PES). Despite its simplicity, compared with early stopping, PES helps obtain more promising and stable results. Furthermore, by combining PES with existing noisy-label training approaches, we achieve state-of-the-art performance on image classification benchmarks.
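A rough sketch of the progressive schedule this describes: the network is split into ordered parts, and each later part is trained for fewer epochs while the already-trained earlier parts stay frozen. The split, epoch counts, and re-initialization details below are illustrative simplifications, not the paper's exact recipe.

```python
import torch

def progressive_early_stopping(parts, train_loader, loss_fn, epochs=(25, 7, 5), lr=0.01, device="cpu"):
    """parts: ordered list of nn.Module blocks whose composition is the full model.
    Stage k freezes parts[:k], re-initializes parts[k:], and trains parts[k:] for
    epochs[k] epochs, so later (noise-sensitive) layers receive fewer updates."""
    model = torch.nn.Sequential(*parts).to(device)
    for k, num_epochs in enumerate(epochs[: len(parts)]):
        for i, part in enumerate(parts):
            for p in part.parameters():
                p.requires_grad = i >= k          # earlier, already-trained parts stay frozen
            if i >= k:                            # re-initialize the parts still to be trained
                for m in part.modules():
                    if hasattr(m, "reset_parameters"):
                        m.reset_parameters()
        optimizer = torch.optim.SGD((p for p in model.parameters() if p.requires_grad),
                                    lr=lr, momentum=0.9)
        for _ in range(num_epochs):
            for x, y in train_loader:
                x, y = x.to(device), y.to(device)
                optimizer.zero_grad()
                loss_fn(model(x), y).backward()
                optimizer.step()
    return model
```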
Self-supervised contrastive learning (CL) has recently been shown to be very effective in preventing deep networks from overfitting noisy labels. Despite its empirical success, the theoretical understanding of the effect of contrastive learning on boosting robustness is very limited. In this work, we rigorously prove that the representation matrix learned by contrastive learning boosts robustness by having: (i) one prominent singular value corresponding to each sub-class in the data, with significantly smaller remaining singular values; and (ii) a large alignment between the prominent singular vectors and the clean labels of each sub-class. These properties allow a linear layer trained on such representations to effectively learn the clean labels without overfitting the noise. We further show that the low-rank structure of the Jacobian of deep networks pre-trained with contrastive learning allows them to achieve superior performance initially when fine-tuned on noisy labels. Finally, we demonstrate that the initial robustness provided by contrastive learning enables robust training methods to achieve state-of-the-art performance under extreme noise levels, e.g., an average increase of 27.18% and 15.58% in accuracy on CIFAR-10 and CIFAR-100 with 80% symmetric noisy labels, and a 4.11% increase in accuracy on WebVision.
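Properties (i) and (ii) can be checked numerically on a learned representation matrix. Below is a small diagnostic sketch under that reading; the function name and the use of the normalized class-indicator vector as the "clean label" direction are assumptions for illustration, not the paper's analysis code.

```python
import numpy as np

def spectrum_and_alignment(Z, labels):
    """Z: [n, d] representation matrix; labels: [n] sub-class labels.
    Returns the singular values of Z and, per sub-class, the largest alignment
    |<u, y_c>| between a left singular vector and the normalized class indicator."""
    U, S, _ = np.linalg.svd(Z, full_matrices=False)
    alignments = {}
    for c in np.unique(labels):
        indicator = (labels == c).astype(float)
        indicator /= np.linalg.norm(indicator)
        alignments[c] = float(np.abs(U.T @ indicator).max())  # best-aligned singular direction
    return S, alignments
```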
As label noise, one of the most common distribution shifts, severely degrades the generalization performance of deep neural networks, robust training with noisy labels is becoming an important task in modern deep learning. In this paper, we propose our framework, coined Adaptive LAbel smoothing on Sub-ClAssifier (ALASCA), which provides a robust feature extractor with theoretical guarantees and negligible additional computation. First, we derive that label smoothing (LS) induces implicit Lipschitz regularization (LR). Furthermore, based on these derivations, we apply adaptive LS (ALS) on sub-classifier architectures as a practical application of adaptive LR on intermediate layers. We conduct extensive experiments with ALASCA and combine it with previous noise-robust methods on several datasets, showing that our framework consistently outperforms the corresponding baselines.
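To make the basic mechanism concrete, here is a minimal sketch of label smoothing applied to an auxiliary sub-classifier head attached to an intermediate feature map; the smoothing factor, head design, and dimensions are illustrative assumptions, not the ALASCA implementation.

```python
import torch
import torch.nn.functional as F

def label_smoothing_ce(logits, targets, eps=0.1):
    """Cross entropy against a smoothed target: mass (1 - eps) on the labeled class
    plus eps / K spread uniformly over all K classes."""
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    smooth = torch.full_like(log_probs, eps / num_classes)
    smooth.scatter_(-1, targets.unsqueeze(-1), 1.0 - eps + eps / num_classes)
    return -(smooth * log_probs).sum(dim=-1).mean()

# An auxiliary sub-classifier on an intermediate feature map, trained with the smoothed
# loss so the shared feature extractor receives the implicit regularization.
sub_classifier = torch.nn.Sequential(torch.nn.AdaptiveAvgPool2d(1),
                                     torch.nn.Flatten(),
                                     torch.nn.Linear(128, 10))
```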
Imperfect labels are ubiquitous in real-world datasets and seriously harm model performance. Several recent effective methods for handling noisy labels share two key steps: 1) dividing samples into clean and noisy sets by training loss, and 2) using semi-supervised methods to generate pseudo-labels for the samples in the noisy set. However, current methods always hurt informative hard samples because hard samples and noise have similar loss distributions. In this paper, we propose PGDF (Prior Guided Denoising Framework), a novel framework that learns a deep model to suppress noise by generating the samples' prior knowledge, which is integrated into both the sample-dividing step and the semi-supervised step. Our framework can preserve more informative hard clean samples in the cleanly labeled set. Besides, our framework also improves the quality of pseudo-labels during the semi-supervised step by suppressing the noise in the current pseudo-label generation scheme. To further enhance the hard samples, we reweight the samples in the cleanly labeled set during training. We evaluated our method on synthetic datasets based on CIFAR-10 and CIFAR-100, as well as on the real-world datasets WebVision and Clothing1M. The results demonstrate substantial improvements over state-of-the-art methods.
Deep neural networks can easily memorize noisy labels under the softmax cross-entropy (CE) loss. Previous studies attempting to address this issue focus on incorporating a noise-robust loss function into the CE loss. However, the memorization issue is alleviated but still remains due to the non-robust CE loss. To address this issue, we focus on learning robust contrastive representations of data on which the classifier is hard-pressed to memorize the label noise under the CE loss. We propose a novel contrastive regularization function to learn such representations over noisy data, where the label noise does not dominate the representation learning. By theoretically investigating the representations induced by the proposed regularization function, we reveal that the learned representations keep the information related to the true labels and discard the information related to the corrupted labels. Moreover, our theoretical results also indicate that the learned representations are robust to label noise. The effectiveness of this method is demonstrated with experiments on benchmark datasets.
Training accurate deep neural networks (DNNs) in the presence of noisy labels is an important and challenging task. Though a number of approaches have been proposed for learning with noisy labels, many open issues remain. In this paper, we show that DNN learning with Cross Entropy (CE) exhibits overfitting to noisy labels on some classes ("easy" classes), but more surprisingly, it also suffers from significant under learning on some other classes ("hard" classes). Intuitively, CE requires an extra term to facilitate learning of hard classes, and more importantly, this term should be noise tolerant, so as to avoid overfitting to noisy labels. Inspired by the symmetric KL-divergence, we propose the approach of Symmetric cross entropy Learning (SL), boosting CE symmetrically with a noise robust counterpart Reverse Cross Entropy (RCE). Our proposed SL approach simultaneously addresses both the under learning and overfitting problem of CE in the presence of noisy labels. We provide a theoretical analysis of SL and also empirically show, on a range of benchmark and real-world datasets, that SL outperforms state-of-the-art methods. We also show that SL can be easily incorporated into existing methods in order to further enhance their performance.
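The combination described here can be written as α·CE + β·RCE, where the reverse term swaps the roles of the predicted and (one-hot) label distributions and replaces log 0 with a constant A. Below is a minimal sketch under those common choices; α, β, and A are hyper-parameters shown with illustrative values, not prescribed settings.

```python
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, targets, alpha=0.1, beta=1.0, A=-4.0):
    """SL = alpha * CE + beta * RCE, where RCE swaps prediction and label
    distributions and clamps log(0) to the constant A."""
    ce = F.cross_entropy(logits, targets)
    pred = F.softmax(logits, dim=-1)
    one_hot = F.one_hot(targets, num_classes=logits.size(-1)).float()
    rce = -(pred * torch.clamp(torch.log(one_hot), min=A)).sum(dim=-1).mean()
    return alpha * ce + beta * rce
```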
Label noise significantly degrades the generalization ability of deep models in applications. Effective strategies and approaches, e.g., re-weighting or loss correction, are designed to alleviate the negative impact of label noise when training a neural network. These existing works usually rely on a pre-specified architecture and manually tuned additional hyper-parameters. In this paper, we propose warped probabilistic inference (WarPI) to adaptively rectify the training procedure of the classification network within a meta-learning scenario. Compared with deterministic models, WarPI is formulated as a hierarchical probabilistic model by learning an amortized meta-network, which can resolve sample ambiguity and is therefore more robust to severe label noise. Different from existing approximated weighting functions that directly generate weight values from losses, our meta-network is learned to estimate a rectifying vector from the inputs of logits and labels, which has the capability of leveraging the sufficient information lying in them. This provides an effective way to rectify the learning procedure of the classification network, demonstrating a significant improvement in generalization ability. Besides, the rectifying vector can be modeled as latent variables, and the meta-network can be learned and seamlessly integrated into the SGD optimization of the classification network. We evaluate WarPI on four benchmarks of robust learning with noisy labels and achieve a new state-of-the-art under various noise types. Extensive studies and analyses also demonstrate the effectiveness of our model.
Deep neural networks are known to be annotation-hungry. Numerous efforts have been devoted to reducing the annotation cost when learning with deep networks. Two prominent directions include learning with noisy labels and semi-supervised learning by exploiting unlabeled data. In this work, we propose DivideMix, a novel framework for learning with noisy labels by leveraging semi-supervised learning techniques. In particular, DivideMix models the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set with clean samples and an unlabeled set with noisy samples, and trains the model on both the labeled and unlabeled data in a semi-supervised manner. To avoid confirmation bias, we simultaneously train two diverged networks where each network uses the dataset division from the other network. During the semi-supervised training phase, we improve the MixMatch strategy by performing label co-refinement and label co-guessing on labeled and unlabeled samples, respectively. Experiments on multiple benchmark datasets demonstrate substantial improvements over state-of-the-art methods. Code is available at https://github.com/LiJunnan1992/DivideMix.
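The per-sample division step can be sketched as fitting a two-component Gaussian mixture to the per-sample losses and treating the posterior of the low-mean component as the probability that a label is clean. The normalization, regularization, and threshold below are illustrative choices, not the released implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_by_loss(per_sample_losses, threshold=0.5):
    """Fit a 2-component GMM on per-sample losses; samples whose posterior for the
    low-mean (clean) component exceeds the threshold go to the labeled set."""
    losses = np.asarray(per_sample_losses).reshape(-1, 1)
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)  # normalize to [0, 1]
    gmm = GaussianMixture(n_components=2, reg_covar=5e-4).fit(losses)
    clean_component = np.argmin(gmm.means_.ravel())
    prob_clean = gmm.predict_proba(losses)[:, clean_component]
    labeled_idx = np.where(prob_clean > threshold)[0]
    unlabeled_idx = np.where(prob_clean <= threshold)[0]
    return labeled_idx, unlabeled_idx, prob_clean
```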
Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.
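The maximally expressive aggregation scheme the abstract alludes to can be written as h_v ← MLP((1 + ε)·h_v + Σ_{u∈N(v)} h_u), i.e., a sum aggregator followed by an MLP. Below is a minimal dense-adjacency sketch of one such layer; the layer sizes and two-layer MLP are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One sum-aggregation layer: h_v <- MLP((1 + eps) * h_v + sum of neighbor features)."""
    def __init__(self, in_dim, out_dim, eps=0.0):
        super().__init__()
        self.eps = nn.Parameter(torch.tensor(eps))
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim))

    def forward(self, h, adj):
        # h: [num_nodes, in_dim], adj: [num_nodes, num_nodes] dense 0/1 adjacency
        neighbor_sum = adj @ h
        return self.mlp((1.0 + self.eps) * h + neighbor_sum)
```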
Recently, over-parameterized deep networks, with increasingly more network parameters than training samples, have dominated the performance of modern machine learning. However, when the training data is corrupted, it is well known that over-parameterized networks tend to overfit and fail to generalize. In this work, we propose a principled approach for robust training of over-parameterized deep networks on classification tasks where a proportion of the training labels are corrupted. The main idea is fairly simple: label noise is sparse and incoherent with the network learned from clean data, so we model the noise and learn to separate it from the data. Specifically, we model the label noise via another sparse over-parameterized term and exploit implicit algorithmic regularization to recover and separate the underlying corruptions. Remarkably, when trained with such a simple method in practice, we demonstrate state-of-the-art test accuracy against label noise on a variety of real datasets. Furthermore, our experimental results are corroborated by theory on simplified linear models, showing that exact separation between sparse noise and low-rank data can be achieved under incoherence conditions. This work opens many interesting directions for improving over-parameterized models by using sparse over-parameterization and implicit regularization.
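One way to read the sparse over-parameterized term is that each training example gets its own learnable correction added to the model's prediction, with sparsity induced implicitly by writing the correction as a difference of elementwise squares. Below is a heavily simplified sketch of that idea; the class name, initialization scale, and squared-error objective are assumptions for illustration, not the paper's exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseNoiseModel(nn.Module):
    """Per-sample correction s_i = u_i * u_i - v_i * v_i added to the prediction;
    the multiplicative over-parameterization biases gradient descent toward sparse s_i."""
    def __init__(self, num_samples, num_classes, init_scale=1e-8):
        super().__init__()
        self.u = nn.Parameter(init_scale * torch.randn(num_samples, num_classes))
        self.v = nn.Parameter(init_scale * torch.randn(num_samples, num_classes))

    def loss(self, logits, noisy_targets, indices):
        correction = self.u[indices] ** 2 - self.v[indices] ** 2
        pred = F.softmax(logits, dim=-1) + correction          # model output plus learned noise term
        one_hot = F.one_hot(noisy_targets, logits.size(-1)).float()
        return ((pred - one_hot) ** 2).sum(dim=-1).mean()      # jointly fit the noisy labels
```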
Recent studies on learning with noisy labels have shown remarkable performance by exploiting a small clean dataset. In particular, label correction methods based on model-agnostic meta-learning further improve performance by correcting noisy labels on the fly. However, there is no safeguard against label miscorrection, which leads to unavoidable performance degradation. Moreover, every training step requires at least three back-propagations, significantly slowing down training. To mitigate these issues, we propose a robust and efficient method that learns a label transition matrix on the fly. Employing the transition matrix makes the classifier skeptical about all the corrected samples, which alleviates the miscorrection issue. We also introduce a two-head architecture to efficiently estimate the label transition matrix in a single back-propagation, so that the estimated matrix closely follows the shifted noise distribution induced by label correction. Extensive experiments demonstrate that our approach shows comparable or better accuracy than existing methods with better training efficiency.
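The way a transition matrix makes the classifier "skeptical" can be illustrated with standard forward loss correction, where the clean-label prediction is pushed through the (estimated) transition matrix before computing cross entropy with the observed label. This sketch is the generic correction, not the paper's two-head estimator.

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_targets, transition):
    """transition[i, j] ~= P(observed label = j | true label = i).
    The clean-label prediction is mapped to a distribution over noisy labels before the
    cross entropy, so the network is discouraged from confidently fitting corrupted labels."""
    clean_probs = torch.softmax(logits, dim=-1)          # [batch, num_classes]
    noisy_probs = clean_probs @ transition               # distribution over observed labels
    log_noisy = torch.log(noisy_probs.clamp_min(1e-12))
    return F.nll_loss(log_noisy, noisy_targets)
```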
In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks. Cross Entropy (CE) loss has been shown to be not robust to noisy labels due to its unboundedness. To alleviate this issue, existing works typically design specialized robust losses with the symmetric condition, which usually lead to the underfitting issue. In this paper, our key idea is to induce a loss bound at the logit level, thus universally enhancing the noise robustness of existing losses. Specifically, we propose logit clipping (LogitClip), which clamps the norm of the logit vector to ensure that it is upper bounded by a constant. In this manner, CE loss equipped with our LogitClip method is effectively bounded, mitigating the overfitting to examples with noisy labels. Moreover, we present theoretical analyses to certify the noise-tolerant ability of LogitClip. Extensive experiments show that LogitClip not only significantly improves the noise robustness of CE loss, but also broadly enhances the generalization performance of popular robust losses.
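LogitClip as described amounts to rescaling the logit vector whenever its norm exceeds a threshold, after which any standard loss is applied. A minimal sketch with the L2 norm and an illustrative threshold τ follows; the specific norm and τ are assumptions, not prescribed values.

```python
import torch
import torch.nn.functional as F

def logit_clip(logits, tau=1.0, eps=1e-12):
    """Clamp the norm of each logit vector so that ||z|| <= tau:
    z <- z * min(1, tau / ||z||)."""
    norms = logits.norm(p=2, dim=-1, keepdim=True)
    scale = torch.clamp(tau / (norms + eps), max=1.0)
    return logits * scale

# Usage: the clipped logits feed into the usual loss, e.g.
# loss = F.cross_entropy(logit_clip(model(x), tau=1.0), noisy_labels)
```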
A widely used transfer learning algorithm is fine-tuning, where a pre-trained model is fine-tuned on a target task with a small amount of labeled data. When the capacity of the pre-trained model is much larger than the size of the target dataset, fine-tuning is prone to overfitting and "memorizing" the training labels. Hence, an important question is how to regularize fine-tuning and ensure its robustness to noise. To address this question, we first analyze the generalization properties of fine-tuning. We present a PAC-Bayes generalization bound that depends on the distance traveled in each layer during fine-tuning and the noise stability of the fine-tuned model, and we measure these quantities empirically. Based on the analysis, we propose regularized self-labeling, an interpolation between regularization and self-labeling methods, including (i) layer-wise regularization to constrain the distance traveled in each layer, and (ii) self label-correction and label-reweighting to correct mislabeled data points (on which the model is confident) and reweight less confident data points. We validate our approach on an extensive collection of image and text datasets using multiple pre-trained model architectures. Our approach improves baseline methods by 1.76% on average across seven image classification tasks and by 0.75% on a few-shot classification task. When the target dataset includes noisy labels, our approach outperforms baseline methods by 3.56% on average in two noisy settings.
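The layer-wise regularization described in (i) can be sketched as penalizing, per layer, the distance between the fine-tuned weights and their pre-trained initialization. The per-layer coefficients and the squared L2 distance below are illustrative assumptions, not the paper's exact penalty.

```python
import torch

def layerwise_distance_penalty(model, pretrained_state, coeffs):
    """Sum over layers of coeff_l * ||theta_l - theta_l_pretrained||^2,
    constraining the distance traveled in each layer during fine-tuning."""
    penalty = 0.0
    for name, param in model.named_parameters():
        ref = pretrained_state[name].to(param.device)
        penalty = penalty + coeffs.get(name, 1.0) * (param - ref).pow(2).sum()
    return penalty

# Usage: total_loss = task_loss + lam * layerwise_distance_penalty(model, pretrained_state, coeffs)
```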
Convolutional Neural Networks (CNNs) have demonstrated superiority in learning patterns, but are sensitive to label noises and may overfit noisy labels during training. The early stopping strategy averts updating CNNs during the early training phase and is widely employed in the presence of noisy labels. Motivated by biological findings that the amplitude spectrum (AS) and phase spectrum (PS) in the frequency domain play different roles in the animal's vision system, we observe that PS, which captures more semantic information, can increase the robustness of DNNs to label noise, more so than AS can. We thus propose early stops at different times for AS and PS by disentangling the features of some layer(s) into AS and PS using Discrete Fourier Transform (DFT) during training. Our proposed Phase-AmplituDe DisentangLed Early Stopping (PADDLES) method is shown to be effective on both synthetic and real-world label-noise datasets. PADDLES outperforms other early stopping methods and obtains state-of-the-art performance.
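The amplitude/phase disentanglement the method relies on can be sketched with a discrete Fourier transform of a feature map: the amplitude and phase spectra are extracted and can later be recombined, so that separate early-stopping points can be applied to each. The helper names below are illustrative, not the authors' code.

```python
import torch

def amplitude_phase_split(feature_map):
    """DFT of a [batch, channels, H, W] feature map into amplitude and phase spectra."""
    spectrum = torch.fft.fft2(feature_map)
    amplitude = torch.abs(spectrum)
    phase = torch.angle(spectrum)
    return amplitude, phase

def recombine(amplitude, phase):
    """Reconstruct spatial features from (possibly separately processed) AS and PS."""
    spectrum = torch.polar(amplitude, phase)      # amplitude * exp(i * phase)
    return torch.fft.ifft2(spectrum).real
```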
Recent advances in deep learning rely on large, labeled datasets to train high-capacity models. However, collecting large datasets in a time- and cost-efficient manner often results in label noise. We present a method for learning from noisy labels that leverages similarities between training examples in feature space, encouraging the prediction of each example to be similar to its nearest neighbors. Compared to training algorithms that use multiple models or distinct stages, our approach takes the form of a simple, additional regularization term. It can be interpreted as an inductive version of the classical label propagation algorithm. We thoroughly evaluate our method on datasets with both synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, WebVision, Clothing1M, mini-ImageNet-Red) noise, and achieve competitive or state-of-the-art accuracy across all of them.
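The regularization term described here can be sketched as a divergence between each example's prediction and a similarity-weighted average of its nearest neighbors' predictions in feature space. The choices of cosine similarity, k, and KL divergence below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def neighbor_consistency_loss(features, logits, k=5):
    """Pull each example's predicted distribution toward a similarity-weighted
    average of the predictions of its k nearest neighbors in feature space."""
    feats = F.normalize(features, dim=-1)
    sims = feats @ feats.t()                                     # cosine similarities
    sims.fill_diagonal_(-float("inf"))                           # exclude self-matches
    topk_sim, topk_idx = sims.topk(k, dim=-1)
    weights = F.softmax(topk_sim, dim=-1)                        # [batch, k]
    probs = F.softmax(logits, dim=-1)
    neighbor_probs = probs[topk_idx]                             # [batch, k, classes]
    target = (weights.unsqueeze(-1) * neighbor_probs).sum(dim=1) # weighted neighbor average
    return F.kl_div(probs.clamp_min(1e-12).log(), target.detach(), reduction="batchmean")
```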
Recent years have witnessed great success in handling graph-related tasks with Graph Neural Networks (GNNs). Despite their great academic success, Multi-Layer Perceptrons (MLPs) remain the primary workhorse for practical industrial applications. One reason for this academic-industrial gap is the neighborhood-fetching latency incurred by data dependency in GNNs, which makes it hard to deploy them for latency-sensitive applications that require fast inference. Conversely, without involving any feature aggregation, MLPs have no data dependency and infer much faster than GNNs, but their performance is less competitive. Motivated by these complementary strengths and weaknesses, we propose a Graph Self-Distillation on Neighborhood (GSDN) framework to reduce the gap between GNNs and MLPs. Specifically, the GSDN framework is based purely on MLPs, where structural information is only implicitly used as a prior to guide knowledge self-distillation between the neighborhood and the target, substituting the explicit neighborhood information propagation as in GNNs. As a result, GSDN enjoys the benefits of graph topology-awareness in training but has no data dependency in inference. Extensive experiments have shown that the performance of vanilla MLPs can be greatly improved with self-distillation, e.g., GSDN improves over stand-alone MLPs by 15.54% on average and outperforms the state-of-the-art GNNs on six datasets. Regarding inference speed, GSDN infers 75X-89X faster than existing GNNs and 16X-25X faster than other inference acceleration methods.
Domain generalization (DG) aims to learn generalizable models under distribution shifts and to avoid redundantly overfitting massive training data. Previous works with complex loss designs and gradient constraints have not yet achieved empirical success on large-scale benchmarks. In this work, we reveal the generalizability of mixture-of-experts (MoE) models for DG by leveraging distinct experts to handle different aspects of the predictive features across domains. To this end, we propose Sparse Fusion Mixture-of-Experts (SF-MoE), which incorporates sparsity and fusion mechanisms into the MoE framework to keep the model both sparse and predictive. SF-MoE has two dedicated modules: 1) a sparse block and 2) a fusion block, which disentangle and aggregate the diverse learned signals of an object, respectively. Extensive experiments demonstrate that SF-MoE is a domain-generalizable learner on large-scale benchmarks. It outperforms the best counterparts on 5 large-scale DG datasets (e.g., DomainNet) with the same or even lower computational costs. We further reveal the internal mechanism of SF-MoE from the perspective of distributed representations (e.g., visual attributes). We hope this framework can facilitate future research that pushes generalizable object recognition to the real world. Code and models are released at https://github.com/luodian/sf-moe-dg.