Self-training is an effective approach to semi-supervised learning. The key idea is to let the learner itself iteratively generate "pseudo-supervision" for unlabeled instances based on its current hypothesis. In combination with consistency regularization, pseudo-labeling has shown promising performance in various domains, for example in computer vision. To account for the hypothetical nature of the pseudo-labels, these are commonly provided in the form of probability distributions. Still, one may argue that even a probability distribution represents an excessive level of informedness, as it suggests that the learner precisely knows the ground-truth conditional probabilities. In our approach, we therefore allow the learner to label instances in the form of credal sets, that is, sets of (candidate) probability distributions. Thanks to this increased expressiveness, the learner is able to represent uncertainty and a lack of knowledge in a more flexible and more faithful manner. To learn from weakly labeled data of that kind, we leverage methods that have recently been proposed in the realm of so-called superset learning. In an exhaustive empirical evaluation, we compare our methodology to state-of-the-art self-supervision approaches, showing competitive to superior performance, especially in low-label scenarios incorporating a high degree of uncertainty.
translated by 谷歌翻译
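One simple way to picture a set-valued ("credal") pseudo-label is as the set of all distributions assigning at least some lower bound of mass to the predicted class. The sketch below, in NumPy, is only an illustration of that idea; the function names, the particular set construction, and the bound `alpha` are assumptions for this example, not the paper's actual parameterization.

```python
import numpy as np

def credal_pseudo_label(probs, alpha=0.5):
    """Relax a point prediction into a credal set: all distributions
    assigning at least `alpha` mass to the predicted class.
    Returns the predicted class and the set's defining lower bound."""
    c_star = int(np.argmax(probs))
    return c_star, alpha

def in_credal_set(q, c_star, alpha):
    """Check whether a candidate distribution q belongs to the credal set."""
    q = np.asarray(q, dtype=float)
    return bool(np.isclose(q.sum(), 1.0) and (q >= 0).all() and q[c_star] >= alpha)

model_probs = np.array([0.6, 0.3, 0.1])
c, a = credal_pseudo_label(model_probs)
compatible = in_credal_set([0.7, 0.2, 0.1], c, a)    # enough mass on class 0
incompatible = in_credal_set([0.3, 0.6, 0.1], c, a)  # too little mass on class 0
```

A learner supervised with such a set, rather than a single distribution, can remain agnostic among all members of the set, which is the flexibility the abstract refers to.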
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance. This domain has seen fast progress recently, at the cost of requiring more complex methods. In this paper we propose FixMatch, an algorithm that is a significant simplification of existing SSL methods. FixMatch first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 labels -- just 4 labels per class. We carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch's success. The code is available at https://github.com/google-research/fixmatch.
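The FixMatch recipe described above is compact enough to sketch. The following NumPy stand-in (real implementations use framework tensors and backpropagate through the strong-view branch only) shows the confidence-thresholded pseudo-labeling step; the threshold τ = 0.95 follows the paper, while the function name and toy logits are illustrative.

```python
import numpy as np

def fixmatch_unlabeled_loss(weak_logits, strong_logits, tau=0.95):
    """FixMatch-style unlabeled loss: pseudo-label from the weak view,
    kept only above the confidence threshold tau, then cross-entropy
    against the prediction on the strong view."""
    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    q = softmax(weak_logits)                # predictions on weakly-augmented views
    conf, pseudo = q.max(axis=1), q.argmax(axis=1)
    mask = conf >= tau                      # retain only confident samples
    p = softmax(strong_logits)              # predictions on strongly-augmented views
    ce = -np.log(p[np.arange(len(p)), pseudo] + 1e-12)
    return (ce * mask).sum() / max(mask.sum(), 1), mask

weak = np.array([[5.0, 0.0, 0.0], [1.0, 0.9, 0.8]])   # confident / unconfident
strong = np.array([[3.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
loss, mask = fixmatch_unlabeled_loss(weak, strong)
```

Only the first sample passes the threshold, so only it contributes to the loss; the unconfident second sample is simply masked out.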
We present a novel semi-supervised learning framework that intelligently leverages consistency regularization between the model's predictions from two strongly-augmented views of an image, weighted by the confidence of the pseudo-label, dubbed ConMatch. While the latest semi-supervised learning methods use weakly- and strongly-augmented views of an image to define a direction for the consistency loss, how to define such a direction for the consistency regularization between two strongly-augmented views remains unexplored. To address this, we propose novel confidence measures for pseudo-labels from strongly-augmented views, using the weakly-augmented view as an anchor, in non-parametric and parametric approaches. In particular, in the parametric approach, we present, for the first time, learning the confidence of a pseudo-label within the network, which is learned with the backbone model in an end-to-end manner. In addition, we present a stage-wise training scheme to boost the convergence of training. When incorporated into existing semi-supervised learners, ConMatch consistently boosts their performance. We conduct experiments to demonstrate the effectiveness of ConMatch over the latest methods and provide extensive ablation studies. Code has been made publicly available at https://github.com/jiwoncocoder/conmatch.
Recent state-of-the-art methods in semi-supervised learning (SSL) combine consistency regularization with confidence-based pseudo-labeling. To obtain high-quality pseudo-labels, a high confidence threshold is typically adopted. However, it has been shown that softmax-based confidence scores in deep networks can be arbitrarily high for samples far from the training data, and thus the pseudo-labels for even high-confidence unlabeled samples may still be unreliable. In this work, we present a new perspective on pseudo-labeling: instead of relying on model confidence, we measure whether an unlabeled sample is likely to be "in-distribution", i.e., close to the current training data. To classify whether an unlabeled sample is "in-distribution" or "out-of-distribution", we adopt the energy score from the out-of-distribution detection literature. As training progresses, more unlabeled samples become in-distribution and contribute to training, so the labeled and pseudo-labeled data can better approximate the true distribution and improve the model. Experiments show that our energy-based pseudo-labeling method, albeit conceptually simple, significantly outperforms confidence-based methods on imbalanced SSL benchmarks and achieves competitive performance on class-balanced data. For example, it yields a 4-6% absolute accuracy improvement on CIFAR10-LT when the imbalance ratio is higher than 50. When combined with state-of-the-art long-tailed SSL methods, further improvements are attained.
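The energy score the abstract borrows from the OOD-detection literature is E(x) = -T · logsumexp(f(x)/T), where f(x) are the logits; lower energy suggests a sample is closer to the training distribution. The gating step below is a deliberately simplified reading of the abstract's idea (the paper's actual admission rule and threshold schedule may differ):

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Energy score: E(x) = -T * logsumexp(f(x)/T).
    Lower energy ~ more in-distribution."""
    logits = np.asarray(logits, dtype=float)
    return -T * np.log(np.sum(np.exp(logits / T), axis=-1))

def in_distribution_mask(logits, threshold):
    """Admit a sample for pseudo-labeling only if its energy is below the
    threshold, instead of thresholding softmax confidence."""
    return energy_score(logits) <= threshold

batch = np.array([[10.0, 0.0, 0.0],   # peaked logits -> low energy
                  [0.1, 0.0, 0.0]])   # flat logits  -> high energy
mask = in_distribution_mask(batch, threshold=-5.0)
```

Unlike a softmax confidence, the energy depends on the overall magnitude of the logits, which is what makes it usable as an in-distribution signal.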
The core issue in semi-supervised learning (SSL) lies in how to effectively leverage unlabeled data, whereas most existing methods tend to put a great emphasis on the utilization of high-confidence samples yet seldom fully explore the usage of low-confidence samples. In this paper, we aim to utilize low-confidence samples in a novel way with our proposed mutex-based consistency regularization, namely MutexMatch. Specifically, the high-confidence samples are required to exactly predict "what it is" by conventional True-Positive Classifier, while the low-confidence samples are employed to achieve a simpler goal -- to predict with ease "what it is not" by True-Negative Classifier. In this sense, we not only mitigate the pseudo-labeling errors but also make full use of the low-confidence unlabeled data by consistency of dissimilarity degree. MutexMatch achieves superior performance on multiple benchmark datasets, i.e., CIFAR-10, CIFAR-100, SVHN, STL-10, mini-ImageNet and Tiny-ImageNet. More importantly, our method further shows superiority when the amount of labeled data is scarce, e.g., 92.23% accuracy with only 20 labeled data on CIFAR-10. Our code and model weights have been released at https://github.com/NJUyued/MutexMatch4SSL.
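The "what it is not" idea can be made concrete: for a low-confidence sample, the classes with the smallest predicted probabilities serve as complementary (negative) targets. The sketch below is one plausible reading of that mechanism in NumPy; the choice of k, the loss form, and the function names are assumptions for illustration, not MutexMatch's exact True-Negative Classifier objective.

```python
import numpy as np

def complementary_labels(probs, k=2):
    """For a low-confidence prediction, pick the k classes with the lowest
    probabilities as 'what it is not' targets."""
    probs = np.asarray(probs, dtype=float)
    return np.argsort(probs)[:k]            # indices of the k least likely classes

def true_negative_loss(tn_probs, neg_classes):
    """Push the negative head's probability for excluded classes toward zero:
    -sum_{c in neg} log(1 - p_c)."""
    tn_probs = np.asarray(tn_probs, dtype=float)
    return float(-np.log(1.0 - tn_probs[neg_classes] + 1e-12).sum())

neg = complementary_labels([0.4, 0.3, 0.2, 0.1], k=2)   # two least likely classes
bad_head = true_negative_loss([0.1, 0.2, 0.3, 0.4], neg)     # mass on excluded classes
good_head = true_negative_loss([0.45, 0.45, 0.05, 0.05], neg)
```

Even when the top-1 prediction is unreliable, the bottom-k classes are usually safe to exclude, which is exactly the looser goal the abstract assigns to low-confidence data.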
Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. In this work, we unify the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, that guesses low-entropy labels for data-augmented unlabeled examples and mixes labeled and unlabeled data using MixUp. MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts. For example, on CIFAR-10 with 250 labels, we reduce error rate by a factor of 4 (from 38% to 11%) and by a factor of 2 on STL-10. We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success. We release all code used in our experiments.
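Two MixMatch ingredients are simple enough to write out: temperature sharpening of the guessed label, p_i^(1/T) renormalised, and MixUp with λ' = max(λ, 1-λ) so the mixed example stays closer to its first argument (both appear in the paper; the toy inputs below are illustrative).

```python
import numpy as np

def sharpen(p, T=0.5):
    """MixMatch label sharpening: p_i^(1/T), renormalised; T < 1 lowers entropy."""
    p = np.asarray(p, dtype=float) ** (1.0 / T)
    return p / p.sum()

def mixup(x1, y1, x2, y2, alpha=0.75, rng=np.random.default_rng(0)):
    """MixUp with lambda' = max(lambda, 1 - lambda), so the result stays
    closer to the first (x1, y1) pair, as in MixMatch."""
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)               # bias toward the first argument
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

q = sharpen([0.5, 0.3, 0.2], T=0.5)         # entropy drops, argmax unchanged
x_mix, y_mix = mixup(np.array([1.0]), np.array([1.0, 0.0]),
                     np.array([0.0]), np.array([0.0, 1.0]))
```

Sharpening is what implements the "guesses low-entropy labels" part of the abstract; MixUp then interpolates both inputs and (sharpened) targets.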
In recent years, great progress has been made in incorporating unlabeled data to overcome the problem of inefficient supervision via semi-supervised learning (SSL). Most state-of-the-art models are based on the idea of pursuing consistent model predictions on unlabeled data under input noise, which is called consistency regularization. Nonetheless, there is a lack of theoretical insight into the reason behind its success. To bridge the gap between theoretical and practical results, we propose a worst-case consistency regularization technique for SSL in this paper. Specifically, we first present a generalization bound for SSL consisting of the empirical loss terms observed on labeled and unlabeled training data separately. Motivated by this bound, we derive an SSL objective that minimizes the largest inconsistency between an original unlabeled sample and its multiple augmented variants. We then provide a simple but effective algorithm to solve the proposed minimax problem and theoretically prove that it converges to a stationary point. Experiments on five popular benchmark datasets validate the effectiveness of our proposed method.
The problem with fully supervised classification is that it requires a tremendous amount of annotated data; however, in many datasets a large portion of the data is unlabeled. To alleviate this problem, semi-supervised learning (SSL) leverages the knowledge of the classifier on the labeled domain and extrapolates it to the unlabeled domain, which has a distribution similar to that of the annotated data. The recent success of SSL methods crucially hinges on thresholded pseudo-labeling and thereby consistency regularization for the unlabeled domain. However, existing methods do not incorporate the uncertainty of the pseudo-labels or of the unlabeled samples into the training process, which arises from noisy labels or out-of-distribution samples owing to strong augmentations. Inspired by recent developments in SSL, our goal in this paper is to propose a novel unsupervised uncertainty-aware objective that relies on aleatoric and epistemic uncertainty quantification. With the proposed uncertainty-aware loss function, our approach outperforms or is on par with the state of the art over standard SSL benchmarks while being computationally lightweight. Our results outperform the state of the art on complex datasets such as CIFAR-100 and Mini-ImageNet.
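The aleatoric/epistemic split mentioned above can be illustrated with the standard mutual-information decomposition over stochastic forward passes (e.g. MC dropout): total predictive entropy = expected per-pass entropy (aleatoric) + a gap attributable to model disagreement (epistemic). This is a textbook decomposition, not necessarily the paper's exact estimator.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a discrete distribution (nats)."""
    p = np.asarray(p, dtype=float)
    return float(-(p * np.log(p + eps)).sum(axis=-1))

def decompose_uncertainty(mc_probs):
    """Given T stochastic forward passes, shape (T, num_classes), split
    predictive uncertainty: total = H(mean p); aleatoric = mean H(p);
    epistemic = total - aleatoric (the mutual information)."""
    mc_probs = np.asarray(mc_probs, dtype=float)
    total = entropy(mc_probs.mean(axis=0))
    aleatoric = float(np.mean([entropy(p) for p in mc_probs]))
    return total, aleatoric, total - aleatoric

# Passes that agree -> epistemic ~ 0; passes that disagree -> epistemic > 0.
_, _, e_agree = decompose_uncertainty([[0.9, 0.1], [0.9, 0.1]])
_, _, e_disagree = decompose_uncertainty([[0.9, 0.1], [0.1, 0.9]])
```

High epistemic uncertainty flags samples the model genuinely does not know about, which is the kind of signal an uncertainty-aware SSL loss can down-weight.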
Semi-supervised learning (SSL) has achieved great success in leveraging a large amount of unlabeled data to learn a promising classifier. A popular approach is pseudo-labeling, which generates pseudo labels only for those unlabeled data with high-confidence predictions. As for the low-confidence ones, existing methods often simply discard them because these unreliable pseudo labels may mislead the model. Nevertheless, we highlight that these data with low-confidence pseudo labels can still be beneficial to the training process. Specifically, although the class with the highest probability in the prediction is unreliable, we can assume that this sample is very unlikely to belong to the classes with the lowest probabilities. In this way, these data can also be very informative if we can effectively exploit these complementary labels, i.e., the classes that a sample does not belong to. Inspired by this, we propose a novel Contrastive Complementary Labeling (CCL) method that constructs a large number of reliable negative pairs based on the complementary labels and adopts contrastive learning to make use of all the unlabeled data. Extensive experiments demonstrate that CCL significantly improves the performance on top of existing methods. More critically, our CCL is particularly effective under label-scarce settings. For example, we yield an improvement of 2.43% over FixMatch on CIFAR-10 with only 40 labeled data.
This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning (S⁴L) and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that S⁴L and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels.
Minimizing the prediction uncertainty on unlabeled data is a key factor in achieving good performance in semi-supervised learning (SSL). The prediction uncertainty is typically expressed as the entropy computed from the transformed probabilities in the output space. Most existing works distill low-entropy predictions either by accepting the determinate class (with the largest probability) as the true label or by suppressing subtle predictions (with the smaller probabilities). Unarguably, these distillation strategies are usually heuristic and less informative for model training. From this discernment, this paper proposes a dual mechanism, named Adaptive Sharpening (ADS), which first applies a soft threshold to adaptively mask out determinate and negligible predictions, and then seamlessly sharpens the informed predictions, distilling certain predictions only together with the informed ones. More importantly, we theoretically analyze the traits of ADS by comparing it with various distillation strategies. Numerous experiments verify that ADS significantly improves state-of-the-art SSL methods when used as a plug-in. Our proposed ADS forges a cornerstone for future distillation-based SSL research.
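The mask-then-sharpen mechanism can be sketched as follows. The fixed thresholds and the power-sharpening form below are illustrative stand-ins; ADS itself derives its masking adaptively, which this toy version does not attempt.

```python
import numpy as np

def adaptive_sharpen(p, low=0.1, high=0.9, T=0.5):
    """Two-step sketch: (1) soft-threshold -- keep determinate predictions
    (max prob > high) untouched and zero out negligible classes (< low);
    (2) sharpen the remaining informed mass and renormalise."""
    p = np.asarray(p, dtype=float).copy()
    if p.max() > high:          # already determinate: distil as-is
        return p
    p[p < low] = 0.0            # mask negligible predictions
    p = p ** (1.0 / T)          # sharpen the informed predictions
    return p / p.sum()

determinate = adaptive_sharpen([0.95, 0.03, 0.02])       # returned unchanged
informed = adaptive_sharpen([0.5, 0.4, 0.05, 0.05])      # masked, then sharpened
```

The point of the dual mechanism is that neither a hard argmax nor uniform sharpening is applied everywhere: each prediction is treated according to how informative it is.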
A common classification task situation is where a large amount of data is available for training, but only a small portion is annotated with class labels. The goal of semi-supervised training, in this context, is to improve classification accuracy by leveraging not only the labeled data but also the large amount of unlabeled data. Recent works have achieved significant improvements by exploring the consistency constraints between differently augmented labeled and unlabeled data. Following this path, we propose a novel unsupervised objective that focuses on the less-studied relationship between high-confidence unlabeled data that are similar to each other. The newly proposed Pair Loss minimizes the statistical distance between high-confidence pseudo labels with similarity above a certain threshold. Combining the Pair Loss with the techniques developed by the MixMatch family, our proposed algorithm SimPLE shows significant performance gains over previous algorithms on CIFAR-100 and Mini-ImageNet, and is on par with the state-of-the-art methods on CIFAR-10 and SVHN. Furthermore, SimPLE also outperforms the state-of-the-art methods in the transfer learning setting, where models are initialized by weights pre-trained on ImageNet or DomainNet. The code is available at github.com/zijian-hu/simple.
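A rough sketch of such a Pair Loss: for each high-confidence pseudo-label, find other pseudo-labels whose similarity exceeds a threshold and penalise the remaining distance between the pair. The Bhattacharyya coefficient is used here as both the similarity and the (complement of the) distance; this is one plausible instantiation, not necessarily SimPLE's exact formulation.

```python
import numpy as np

def pair_loss(probs, conf_thresh=0.9, sim_thresh=0.8):
    """Sketch of a Pair Loss: over pairs anchored at a high-confidence
    pseudo-label whose Bhattacharyya similarity to another pseudo-label
    exceeds sim_thresh, penalise the residual distance 1 - BC."""
    probs = np.asarray(probs, dtype=float)
    loss, n = 0.0, 0
    for i in range(len(probs)):
        if probs[i].max() < conf_thresh:
            continue                                   # anchor must be confident
        for j in range(len(probs)):
            if i == j:
                continue
            bc = float(np.sum(np.sqrt(probs[i] * probs[j])))  # similarity in [0, 1]
            if bc >= sim_thresh:
                loss += 1.0 - bc                       # pull similar pairs together
                n += 1
    return loss / max(n, 1)

no_anchor = pair_loss(np.array([[0.5, 0.5], [0.5, 0.5]]))     # nothing confident
active = pair_loss(np.array([[0.95, 0.05], [0.9, 0.1], [0.5, 0.5]]))
```

The two thresholds play different roles: the confidence threshold selects trustworthy anchors, while the similarity threshold restricts the pull to pairs that plausibly share a class.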
Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that SSL algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and performance can degrade substantially when the unlabeled dataset contains out-of-distribution examples. To help guide SSL research towards real-world applicability, we make our unified reimplementation and evaluation platform publicly available at https://github.com/brain-research/realistic-ssl-evaluation. (32nd Conference on Neural Information Processing Systems, NeurIPS 2018)
Consistency regularization is one of the most widely-used techniques for semi-supervised learning (SSL). Generally, the aim is to train a model that is invariant to various data augmentations. In this paper, we revisit this idea and find that enforcing invariance by decreasing the distances between features from differently augmented images leads to improved performance. However, encouraging equivariance instead, by increasing the feature distance, improves performance further. To this end, we propose an improved consistency regularization framework with a simple yet effective technique, FeatDistLoss, which imposes consistency and equivariance on the classifier and the feature level, respectively. Experimental results show that our model defines a new state of the art across various datasets and settings and outperforms previous work by a significant margin, particularly in low-data regimes. Extensive experiments are conducted to analyze the method, and the code will be released.
Most semi-supervised learning methods over-sample labeled data when constructing training mini-batches. This paper studies whether this common practice improves learning, and how. We compare it to an alternative setting where each mini-batch is uniformly sampled from all the training data, labeled or not, which greatly reduces direct supervision from true labels in the typical low-label regime. However, this simpler setting can also be seen as more general, and even necessary, in multi-task problems where over-sampling labeled data would become intractable. Our experiments on semi-supervised CIFAR-10 image classification using FixMatch show a performance drop with the uniform sampling approach, which diminishes as the amount of labeled data or the training time increases. Further, we analyse the training dynamics to understand how over-sampling of labeled data compares to uniform sampling. Our main finding is that over-sampling is especially beneficial early in training, but gets less important in the later stages when more pseudo-labels become correct. Nevertheless, we also find that keeping some true labels remains important to avoid the accumulation of confirmation errors from incorrect pseudo-labels.
Semi-supervised learning (SSL) is one of the most promising paradigms for circumventing the expensive labeling cost of building a high-performance model. Most existing SSL methods conventionally assume that labeled and unlabeled data are drawn from the same (class) distribution. However, in practice, unlabeled data may include out-of-class samples: those that cannot be given one-hot encoded labels from the closed set of classes in the labeled data, i.e., the unlabeled data is an open set. In this paper, we introduce OpenCoS, a method for handling this realistic semi-supervised learning scenario based on a recent framework of self-supervised visual representation learning. Specifically, we first observe that the out-of-class samples in the open-set unlabeled dataset can be identified effectively via self-supervised contrastive learning. OpenCoS then utilizes this information to overcome the failure modes of existing state-of-the-art semi-supervised methods, by assigning one-hot pseudo-labels and soft labels to the identified in-class and out-of-class unlabeled data, respectively. Our extensive experimental results show the effectiveness of OpenCoS in fixing state-of-the-art semi-supervised methods to suit diverse scenarios involving open-set unlabeled data.
In this work, we propose Reciprocal Distribution Alignment (RDA) to address semi-supervised learning (SSL); it is a hyperparameter-free framework that is independent of confidence thresholds and works with both matched (conventional) and mismatched class distributions. Distribution mismatch is an often overlooked but more general SSL scenario in which the labeled and unlabeled data do not follow the identical class distribution. This may lead to the model not exploiting the labeled data reliably and drastically degrades the performance of SSL methods, which traditional distribution alignment cannot rescue. In RDA, we enforce a reciprocal alignment on the distributions of the predictions from two classifiers that predict pseudo-labels and complementary labels on the unlabeled data. These two distributions, carrying complementary information, can be used to regularize each other without any prior on the class distribution. Moreover, we theoretically show that RDA maximizes the input-output mutual information. Our approach achieves promising performance in SSL under a variety of mismatched-distribution scenarios, as well as in the conventional matched SSL setting. Our code is available at: https://github.com/njuyued/rda4robustssl.
Deep neural networks achieve remarkable performance on a wide range of tasks with the aid of large-scale labeled datasets. Yet these datasets are time-consuming and labor-intensive to obtain for realistic tasks. To mitigate the requirement for labeled data, self-training is widely used in semi-supervised learning by iteratively assigning pseudo labels to unlabeled samples. Despite its popularity, self-training is known to be unreliable and often leads to training instability. Our experimental studies further reveal that the bias in semi-supervised learning arises from both the problem itself and from inappropriate training with potentially incorrect pseudo labels, which accumulates errors over the iterative self-training process. To reduce this bias, we propose Debiased Self-Training (DST). First, the generation and utilization of pseudo labels are decoupled by two parameter-independent classifier heads to avoid direct error accumulation. Second, we estimate the worst case of the self-training bias, in which the pseudo-labeling function is accurate on labeled samples yet makes as many mistakes as possible on unlabeled samples. We then optimize the representations to improve the quality of the pseudo labels by avoiding this worst case. Extensive experiments justify that DST achieves an average improvement of 6.3% over state-of-the-art methods on standard semi-supervised learning benchmark datasets, and of 18.9% over FixMatch on 13 diverse tasks. Furthermore, DST can be seamlessly adapted to other self-training methods, helping to stabilize their training and balance performance across classes, both when training from scratch and when finetuning from pre-trained models.
Semi-supervised learning lately has shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning. By substituting simple noising operations with advanced data augmentation methods such as RandAugment and back-translation, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 5.43 with only 250 examples. Our method also combines well with transfer learning, e.g., when finetuning from BERT, and yields improvements in high-data regime, such as ImageNet, whether when there is only 10% labeled data or when a full labeled set with 1.3M extra unlabeled examples is used.
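The consistency-training core of this approach can be sketched as follows: sharpen the prediction on the original example with a temperature, keep only confident samples, and minimise the KL divergence to the prediction on the augmented version. The temperature, the threshold value, and the exact masking rule below are illustrative choices, not necessarily the paper's settings.

```python
import numpy as np

def uda_consistency_loss(orig_logits, aug_logits, T=0.4, conf_thresh=0.8):
    """UDA-style consistency: sharpened prediction on the original example
    (temperature T) serves as a fixed target; minimise KL to the prediction
    on the augmented version, masking out unconfident originals."""
    def softmax(z, t=1.0):
        z = np.asarray(z, dtype=float) / t
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    target = softmax(orig_logits, T)                 # sharpened, treated as fixed
    pred = softmax(aug_logits)
    mask = softmax(orig_logits).max(axis=-1) >= conf_thresh
    kl = np.sum(target * (np.log(target + 1e-12) - np.log(pred + 1e-12)), axis=-1)
    return float((kl * mask).sum() / max(mask.sum(), 1))

orig = np.array([[5.0, 0.0, 0.0]])
near = uda_consistency_loss(orig, np.array([[4.0, 0.0, 0.0]]))  # small divergence
far = uda_consistency_loss(orig, np.array([[0.0, 5.0, 0.0]]))   # large divergence
```

Because only the augmented branch receives gradients in practice, the quality of the augmentation (RandAugment, back-translation) determines how informative this loss is, which is the abstract's central claim.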
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity. Despite the promise, the performance of PLL often lags behind the supervised counterpart. In this work, we bridge the gap by addressing two key research challenges in PLL -- representation learning and label disambiguation -- in one coherent framework. Specifically, our proposed framework PiCO consists of a contrastive learning module along with a novel class prototype-based label disambiguation algorithm. PiCO produces closely aligned representations for examples from the same classes and facilitates label disambiguation. Theoretically, we show that these two components are mutually beneficial, and can be rigorously justified from an expectation-maximization (EM) algorithm perspective. Moreover, we study a challenging yet practical noisy partial label learning setup, where the ground-truth may not be included in the candidate set. To remedy this problem, we present an extension PiCO+ that performs distance-based clean sample selection and learns robust classifiers by a semi-supervised contrastive learning algorithm. Extensive experiments demonstrate that our proposed methods significantly outperform the current state-of-the-art approaches in standard and noisy PLL tasks and even achieve comparable results to fully supervised learning.