When constructing training mini-batches, most semi-supervised learning methods over-sample labeled data. This paper studies whether this common practice improves learning and how. We compare it to an alternative setting where each mini-batch is uniformly sampled from all the training data, labeled or not, which greatly reduces direct supervision from true labels in typical low-label regimes. However, this simpler setting can also be seen as more general and even necessary in multi-task problems where over-sampling labeled data would become intractable. Our experiments on semi-supervised CIFAR-10 image classification using FixMatch show a performance drop with the uniform sampling approach, which diminishes as the amount of labeled data or the training time increases. Further, we analyze the training dynamics to understand how over-sampling of labeled data compares to uniform sampling. Our main finding is that over-sampling is especially beneficial early in training, but becomes less important in the later stages as more pseudo-labels become correct. Nevertheless, we also find that keeping some true labels remains important to avoid the accumulation of confirmation errors from incorrect pseudo-labels.
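A minimal sketch of the two batching strategies contrasted above, assuming generic PyTorch datasets `labeled_ds` and `unlabeled_ds` (these names and the ratio `mu` are illustrative, not taken from the paper):

```python
# Sketch of labeled-data over-sampling vs. uniform sampling for SSL mini-batches.
from torch.utils.data import DataLoader, ConcatDataset

def oversampled_loaders(labeled_ds, unlabeled_ds, batch_size=64, mu=7):
    # Common SSL practice: every step draws a fixed quota of labeled examples,
    # so the (small) labeled set is revisited far more often than the unlabeled set.
    labeled_loader = DataLoader(labeled_ds, batch_size=batch_size,
                                shuffle=True, drop_last=True)
    unlabeled_loader = DataLoader(unlabeled_ds, batch_size=mu * batch_size,
                                  shuffle=True, drop_last=True)
    return labeled_loader, unlabeled_loader

def uniform_loader(labeled_ds, unlabeled_ds, batch_size=64):
    # Alternative studied in the paper: sample each batch uniformly from the union,
    # so labeled examples appear only in proportion to their share of the data.
    return DataLoader(ConcatDataset([labeled_ds, unlabeled_ds]),
                      batch_size=batch_size, shuffle=True, drop_last=True)
```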
This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning (S4L) and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that S4L and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels.
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance. This domain has seen fast progress recently, at the cost of requiring more complex methods. In this paper we propose FixMatch, an algorithm that is a significant simplification of existing SSL methods. FixMatch first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 labels (just 4 labels per class). We carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch's success. The code is available at https://github.com/google-research/fixmatch.
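A minimal PyTorch sketch of the unlabeled-data objective described above, assuming `model` returns class logits and that weak/strong views of the same images are already batched (the loss weighting and other details of the full method are omitted):

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, weak_batch, strong_batch, threshold=0.95):
    # Pseudo-labels come from the weakly-augmented view; no gradients flow
    # through this branch.
    with torch.no_grad():
        probs = torch.softmax(model(weak_batch), dim=-1)
        max_probs, pseudo_labels = probs.max(dim=-1)
        mask = (max_probs >= threshold).float()  # keep only confident predictions
    # The model must reproduce the pseudo-label on the strongly-augmented view.
    logits_strong = model(strong_batch)
    loss = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (loss * mask).mean()
```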
A common situation in classification tasks is that a large amount of data is available for training, but only a small portion of it is annotated with class labels. The goal of semi-supervised training in this context is to improve classification accuracy by leveraging not only the labeled data but also the large amount of unlabeled data. Recent works have achieved significant improvements by exploring consistency constraints between differently augmented views of labeled and unlabeled data. Following this path, we propose a novel unsupervised objective that focuses on the less studied relationship between high-confidence unlabeled samples that are similar to each other. The newly proposed Pair Loss minimizes the statistical distance between high-confidence pseudo-labels whose similarity exceeds a certain threshold. Our proposed SimPLE algorithm combines the Pair Loss with techniques developed by the MixMatch family, and shows significant performance gains over previous algorithms on CIFAR-100 and Mini-ImageNet while being on par with the state-of-the-art methods on CIFAR-10 and SVHN. Moreover, SimPLE also outperforms the state-of-the-art methods in the transfer learning setting, where models are initialized with weights pre-trained on ImageNet or on in-domain data. The code is available at github.com/zijian-hu/simple.
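An illustrative sketch of such a similarity-thresholded pair objective over pseudo-label distributions; the distance measure, threshold values, and tensor layout here are assumptions for illustration rather than the paper's exact formulation:

```python
import torch

def pair_loss(probs, conf_threshold=0.95, sim_threshold=0.9):
    # probs: (N, C) pseudo-label distributions for a batch of unlabeled samples.
    conf, _ = probs.max(dim=-1)
    sim = probs @ probs.t()  # pairwise similarity between distributions
    # A pair counts if the "anchor" is confident and the pair is sufficiently similar.
    valid = (conf.unsqueeze(1) >= conf_threshold) & (sim >= sim_threshold)
    valid.fill_diagonal_(False)
    if valid.sum() == 0:
        return probs.new_zeros(())
    # Penalize a statistical distance (here simply squared L2) between paired distributions.
    dist = ((probs.unsqueeze(1) - probs.unsqueeze(0)) ** 2).sum(dim=-1)
    return dist[valid].mean()
```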
Deep neural networks achieve remarkable performance on a wide range of tasks with the help of large-scale labeled datasets. Yet such datasets are time-consuming and labor-intensive to obtain for realistic tasks. To mitigate the requirement for labeled data, self-training is widely used in semi-supervised learning by iteratively assigning pseudo-labels to unlabeled samples. Despite its popularity, self-training is unreliable and often leads to training instability. Our experimental studies further show that the bias in semi-supervised learning comes both from the problem itself and from inappropriate training with potentially incorrect pseudo-labels, which accumulates errors during the iterative self-training process. To reduce these biases, we propose Debiased Self-Training (DST). First, the generation and utilization of pseudo-labels are decoupled by two parameter-independent classifier heads to avoid direct error accumulation. Second, we estimate the worst case of the self-training bias, in which the pseudo-labeling function is accurate on labeled samples yet makes as many mistakes as possible on unlabeled samples. We then optimize the representations to improve the quality of the pseudo-labels by avoiding this worst case. Extensive experiments show that DST achieves an average improvement of 6.3% over state-of-the-art methods on standard semi-supervised learning benchmark datasets and of 18.9% over FixMatch on 13 diverse tasks. Furthermore, DST can be seamlessly adapted to other self-training methods, helping to stabilize their training and balance performance across classes, both when training from scratch and when fine-tuning from pre-trained models.
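A rough sketch of the head-decoupling idea only (the worst-case bias estimation is not shown; the class, attribute names, and the exact division of roles between the heads are simplifying assumptions):

```python
import torch.nn as nn

class DecoupledHeads(nn.Module):
    # One head is trained on labeled data and serves as the pseudo-label generator;
    # a second, parameter-independent head consumes the pseudo-labels, so errors
    # are not fed straight back into the pseudo-labeler.
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone
        self.head_labeled = nn.Linear(feat_dim, num_classes)  # supervised; generates pseudo-labels
        self.head_pseudo = nn.Linear(feat_dim, num_classes)   # trained on pseudo-labels only

    def forward(self, x):
        feat = self.backbone(x)
        return self.head_labeled(feat), self.head_pseudo(feat)
```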
Semi-supervised learning approaches have become an active area of research to combat the challenge of obtaining large amounts of annotated data. Towards the goal of improving the performance of semi-supervised learning methods, we propose a novel framework, HierMatch, a semi-supervised approach that leverages hierarchical information to reduce labeling costs while performing as well as vanilla semi-supervised learning methods. Hierarchical information is often available as prior knowledge in the form of coarse labels (e.g., woodpecker) for images that also have fine-grained labels (e.g., downy woodpecker or golden-fronted woodpecker). However, the use of supervision from coarse-class labels to improve semi-supervised techniques has not been explored. In the absence of fine-grained labels, HierMatch exploits the label hierarchy and uses the coarse-class labels as a weak supervisory signal. Moreover, HierMatch is a generic method for improving any semi-supervised learning framework, which we demonstrate with results on the recent state-of-the-art techniques MixMatch and FixMatch. We evaluate the efficacy of HierMatch on two benchmark datasets, namely CIFAR-100 and NABirds. Compared to MixMatch, HierMatch can reduce the usage of fine-grained labels by 50% on CIFAR-100 with only a marginal drop of 0.59% in top-1 accuracy. Code: https://github.com/07agarg/hiermatch.
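One way such coarse-label supervision could be expressed, as a hedged sketch: marginalize the fine-class probabilities into their coarse parents and apply a standard classification loss against the coarse label. The mapping tensor `fine_to_coarse` and this particular loss are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def coarse_supervision_loss(fine_logits, coarse_labels, fine_to_coarse):
    # fine_logits: (N, C_fine); fine_to_coarse: long tensor of shape (C_fine,)
    # mapping each fine class to its coarse parent.
    fine_probs = fine_logits.softmax(dim=-1)
    num_coarse = int(fine_to_coarse.max().item()) + 1
    coarse_probs = torch.zeros(fine_probs.size(0), num_coarse,
                               device=fine_probs.device)
    coarse_probs.index_add_(1, fine_to_coarse, fine_probs)  # sum fine probs per coarse class
    # Supervise the probability mass assigned to the correct coarse class.
    return F.nll_loss(coarse_probs.clamp(min=1e-8).log(), coarse_labels)
```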
Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that SSL algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and performance can degrade substantially when the unlabeled dataset contains out-of-distribution examples. To help guide SSL research towards real-world applicability, we make our unified reimplementation and evaluation platform publicly available at https://github.com/brain-research/realistic-ssl-evaluation.
Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. In this work, we unify the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, that guesses low-entropy labels for data-augmented unlabeled examples and mixes labeled and unlabeled data using MixUp. MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts. For example, on CIFAR-10 with 250 labels, we reduce error rate by a factor of 4 (from 38% to 11%) and by a factor of 2 on STL-10. We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success. We release all code used in our experiments.
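A brief sketch of two of the ingredients named above, label sharpening and MixUp-style mixing; the helper names are illustrative, and the hyperparameter values are commonly reported MixMatch defaults rather than anything specific to this listing:

```python
import torch

def sharpen(p, T=0.5):
    # Lower the entropy of a guessed label distribution (temperature sharpening).
    p = p ** (1.0 / T)
    return p / p.sum(dim=-1, keepdim=True)

def mixup(x1, y1, x2, y2, alpha=0.75):
    # Mix inputs and soft targets; lambda is biased toward the first argument,
    # as in MixMatch, so the mixed example stays closer to its origin.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    lam = torch.max(lam, 1 - lam)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```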
Despite significant advances, the performance of state-of-the-art continual learning approaches hinges on the unrealistic scenario of fully labeled data. In this paper, we tackle this challenge and propose an approach for continual semi-supervised learning -- a setting where not all the data samples are labeled. An underlying issue in this scenario is the model forgetting representations of unlabeled data and overfitting the labeled ones. We leverage the power of nearest-neighbor classifiers to non-linearly partition the feature space and learn a strong representation for the current task, as well as distill relevant information from previous tasks. We perform a thorough experimental evaluation and show that our method outperforms all the existing approaches by large margins, setting a strong state of the art on the continual semi-supervised learning paradigm. For example, on CIFAR100 we surpass several others even when using at least 30 times less supervision (0.8% vs. 25% of annotations).
Training deep neural networks for image recognition typically requires large-scale human-annotated data. To reduce the reliance of deep neural solutions on labeled data, state-of-the-art semi-supervised methods have been proposed in the literature. Nonetheless, the use of such semi-supervised methods is very rare in the field of facial expression recognition (FER). In this paper, we present a comprehensive study of recently proposed state-of-the-art semi-supervised learning methods in the context of FER. We conduct a comparative study of eight semi-supervised learning methods when different amounts of labeled samples are used, and we also compare the performance of these methods against fully supervised training. Our study shows that training existing semi-supervised methods with as few as 250 labeled samples per class can yield performance comparable to that of fully supervised methods trained on the fully labeled datasets. To facilitate further research in this area, we make our code publicly available at: https://github.com/shuvenduroy/ssl_fer
The core issue in semi-supervised learning (SSL) lies in how to effectively leverage unlabeled data, whereas most existing methods tend to put a great emphasis on the utilization of high-confidence samples yet seldom fully explore the usage of low-confidence samples. In this paper, we aim to utilize low-confidence samples in a novel way with our proposed mutex-based consistency regularization, namely MutexMatch. Specifically, the high-confidence samples are required to exactly predict "what it is" by conventional True-Positive Classifier, while the low-confidence samples are employed to achieve a simpler goal -- to predict with ease "what it is not" by True-Negative Classifier. In this sense, we not only mitigate the pseudo-labeling errors but also make full use of the low-confidence unlabeled data by consistency of dissimilarity degree. MutexMatch achieves superior performance on multiple benchmark datasets, i.e., CIFAR-10, CIFAR-100, SVHN, STL-10, mini-ImageNet and Tiny-ImageNet. More importantly, our method further shows superiority when the amount of labeled data is scarce, e.g., 92.23% accuracy with only 20 labeled data on CIFAR-10. Our code and model weights have been released at https://github.com/NJUyued/MutexMatch4SSL.
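A hedged sketch of how a True-Negative Classifier objective for low-confidence samples might look: ask it only to flag the classes the sample is least likely to belong to. The choice of `k`, the multi-label binary loss, and the tensor names are illustrative assumptions, not MutexMatch's exact formulation:

```python
import torch
import torch.nn.functional as F

def true_negative_loss(tpc_probs, tnc_logits, k=3):
    # tpc_probs: (N, C) predictions of the conventional True-Positive Classifier.
    # For low-confidence samples, only require high "is-not" scores on the k
    # least likely classes.
    _, bottom_k = tpc_probs.topk(k, dim=-1, largest=False)
    neg_targets = torch.zeros_like(tpc_probs).scatter_(1, bottom_k, 1.0)
    # Each class is independently scored as "not this class" (1) or left unsupervised (0).
    return F.binary_cross_entropy_with_logits(tnc_logits, neg_targets)
```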
Semi-supervised learning (SSL) is one of the most promising paradigms for circumventing the expensive labeling cost of building a high-performance model. Most existing SSL methods conventionally assume that labeled and unlabeled data are drawn from the same (class) distribution. In practice, however, the unlabeled data may include out-of-class samples: samples that cannot be given one-hot encoded labels from the closed set of classes present in the labeled data, i.e., the unlabeled data is open-set. In this paper, we introduce OpenCoS, a method for handling this realistic semi-supervised learning scenario that builds on a recent framework for self-supervised visual representation learning. Specifically, we first observe that out-of-class samples in the open-set unlabeled dataset can be identified effectively via self-supervised contrastive learning. OpenCoS then uses this information to overcome the failure modes of existing state-of-the-art semi-supervised methods, by assigning one-hot pseudo-labels and soft labels to the identified in-class and out-of-class unlabeled data, respectively. Our extensive experimental results demonstrate the effectiveness of OpenCoS, fixing up state-of-the-art semi-supervised methods to suit diverse scenarios involving open-set unlabeled data.
We present Self Meta Pseudo Labels, a novel semi-supervised learning method similar to Meta Pseudo Labels but without the teacher model. We introduce a novel way to use a single model for both generating pseudo labels and classification, allowing us to store only one model in memory instead of two. Our method attains similar performance to the Meta Pseudo Labels method while drastically reducing memory usage.
Self-training is an effective approach to semi-supervised learning. The key idea is to let the learner itself iteratively generate "pseudo-supervision" for unlabeled instances based on its current hypothesis. In combination with consistency regularization, pseudo-labeling has shown promising performance in various domains, for example in computer vision. To account for the hypothetical nature of the pseudo-labels, these are commonly provided in the form of probability distributions. Still, one may argue that even a probability distribution represents an excessive level of informedness, as it suggests that the learner precisely knows the ground-truth conditional probabilities. In our approach, we therefore allow the learner to label instances in the form of credal sets, i.e., sets of (candidate) probability distributions. Thanks to this increased expressiveness, the learner is able to represent uncertainty and a lack of knowledge in a more flexible and more faithful manner. To learn from weakly labeled data of this kind, we leverage methods that have recently been proposed in the realm of so-called superset learning. In an exhaustive empirical evaluation, we compare our methodology to state-of-the-art self-supervision approaches, showing competitive to superior performance, especially in low-label scenarios with a high degree of uncertainty.
The problem with fully supervised classification is that it requires a tremendous amount of annotated data; however, in many datasets a large portion of the data is unlabeled. To alleviate this problem, semi-supervised learning (SSL) leverages the knowledge of the classifier on the labeled domain and extrapolates it to the unlabeled domain, which has a distribution similar to that of the annotated data. The recent success of SSL methods crucially hinges on thresholded pseudo-labeling and, in turn, consistency regularization on the unlabeled domain. However, existing methods do not incorporate into the training process the uncertainty of the pseudo-labels or of the unlabeled samples, which arises from noisy labels or from out-of-distribution samples caused by strong augmentations. Inspired by recent developments in SSL, our goal in this paper is to propose a novel unsupervised uncertainty-aware objective that relies on aleatoric and epistemic uncertainty quantification. Complementing recent SSL techniques with the proposed uncertainty-aware loss function, our approach outperforms or matches the state of the art on standard SSL benchmarks while being computationally lightweight. Our results surpass the state of the art on complex datasets such as CIFAR-100 and Mini-ImageNet.
Semi-supervised learning (SSL) has achieved great success in leveraging a large amount of unlabeled data to learn a promising classifier. A popular approach is pseudo-labeling that generates pseudo labels only for those unlabeled data with high-confidence predictions. As for the low-confidence ones, existing methods often simply discard them because these unreliable pseudo labels may mislead the model. Nevertheless, we highlight that these data with low-confidence pseudo labels can be still beneficial to the training process. Specifically, although the class with the highest probability in the prediction is unreliable, we can assume that this sample is very unlikely to belong to the classes with the lowest probabilities. In this way, these data can be also very informative if we can effectively exploit these complementary labels, i.e., the classes that a sample does not belong to. Inspired by this, we propose a novel Contrastive Complementary Labeling (CCL) method that constructs a large number of reliable negative pairs based on the complementary labels and adopts contrastive learning to make use of all the unlabeled data. Extensive experiments demonstrate that CCL significantly improves the performance on top of existing methods. More critically, our CCL is particularly effective under the label-scarce settings. For example, we yield an improvement of 2.43% over FixMatch on CIFAR-10 only with 40 labeled data.
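A hedged sketch of how reliable negative pairs could be built from complementary labels before feeding a contrastive loss; the pairing rule, the choice of `k`, and the tensor layout are illustrative assumptions rather than CCL's exact construction:

```python
import torch

def complementary_negative_mask(probs, k=3):
    # probs: (N, C) pseudo-label distributions over unlabeled samples.
    # Complementary labels: the k classes each sample is least likely to belong to.
    _, bottom_k = probs.topk(k, dim=-1, largest=False)
    comp = torch.zeros_like(probs).scatter_(1, bottom_k, 1.0).bool()  # (N, C)
    top1 = probs.argmax(dim=-1)                                       # (N,)
    # neg[i, j] is True if sample j's complementary labels include sample i's
    # predicted class, i.e., (i, j) can be treated as a reliable negative pair.
    neg = comp[:, top1].t()
    return neg | neg.t()  # symmetrize so the pair is negative in either direction
```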
Semi-supervised learning is becoming increasingly important because it can combine data carefully labeled by humans with abundant unlabeled data to train deep neural networks. Classic methods on semi-supervised learning that have focused on transductive learning have not been fully exploited in the inductive framework followed by modern deep learning. The same holds for the manifold assumption-that similar examples should get the same prediction. In this work, we employ a transductive label propagation method that is based on the manifold assumption to make predictions on the entire dataset and use these predictions to generate pseudo-labels for the unlabeled data and train a deep neural network. At the core of the transductive method lies a nearest neighbor graph of the dataset that we create based on the embeddings of the same network. Therefore our learning process iterates between these two steps. We improve performance on several datasets especially in the few labels regime and show that our work is complementary to current state of the art.
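A compact sketch of classic graph-based label propagation on a nearest-neighbor graph of network embeddings, in the spirit of the transductive step described above (the dense affinity matrix, hyperparameter values, and function name are simplifications for illustration):

```python
import torch
import torch.nn.functional as F

def propagate_labels(embeddings, labels, labeled_mask, k=50, alpha=0.99, iters=20):
    # embeddings: (N, D) features from the network; labels: (N,) long tensor,
    # valid only where labeled_mask is True.
    n = embeddings.size(0)
    dists = torch.cdist(embeddings, embeddings)
    knn = dists.topk(k + 1, largest=False).indices[:, 1:]  # k nearest neighbors, excluding self
    W = torch.zeros(n, n).scatter_(1, knn, 1.0)
    W = (W + W.t()) / 2                                     # symmetric affinity matrix
    d_inv_sqrt = W.sum(1).clamp(min=1e-8).rsqrt()
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]       # normalized graph
    num_classes = int(labels[labeled_mask].max().item()) + 1
    Y = torch.zeros(n, num_classes)
    Y[labeled_mask] = F.one_hot(labels[labeled_mask], num_classes).float()
    Z = Y.clone()
    for _ in range(iters):                                  # Z <- alpha * S Z + (1 - alpha) * Y
        Z = alpha * (S @ Z) + (1 - alpha) * Y
    return Z.argmax(dim=-1)                                 # pseudo-labels for every sample
```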
Recent state-of-the-art methods in semi-supervised learning (SSL) combine consistency regularization with confidence-based pseudo-labeling. To obtain high-quality pseudo-labels, a high confidence threshold is typically adopted. However, it has been shown that the softmax-based confidence scores of deep networks can be arbitrarily high for samples far from the training data, so even for high-confidence unlabeled samples the pseudo-labels may still be unreliable. In this work, we present a new perspective on pseudo-labeling: instead of relying on model confidence, we measure whether an unlabeled sample is likely to be "in-distribution", i.e., close to the current training data. To classify whether an unlabeled sample is "in-distribution" or "out-of-distribution", we adopt the energy score from the out-of-distribution detection literature. As training progresses, more unlabeled samples become in-distribution and contribute to training, and the combined labeled and pseudo-labeled data can better approximate the true distribution and improve the model. Experiments show that our energy-based pseudo-labeling method, while conceptually simple, significantly outperforms confidence-based methods on imbalanced SSL benchmarks and achieves competitive performance on class-balanced data. For example, it yields a 4-6% absolute accuracy improvement on CIFAR10-LT when the imbalance ratio is higher than 50. Further improvements are attained when it is combined with state-of-the-art long-tailed SSL methods.
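For reference, the energy score mentioned above can be computed directly from the classifier logits; a minimal sketch (the threshold value here is an arbitrary placeholder, not one taken from the paper):

```python
import torch

def energy_inlier_mask(logits, tau=-8.0):
    # Free energy of the logits: E(x) = -logsumexp(f(x)). Lower energy indicates
    # a more "in-distribution" sample; tau is an assumed threshold.
    energy = -torch.logsumexp(logits, dim=-1)
    return energy <= tau  # keep pseudo-labels only for in-distribution samples
```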
We present Noisy Student Training, a semi-supervised learning approach that works well even when labeled data is abundant. Noisy Student Training achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images. On robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to 83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces ImageNet-P mean flip rate from 27.8 to 12.2. Noisy Student Training extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning. On ImageNet, we first train an EfficientNet model on labeled images and use it as a teacher to generate pseudo labels for 300M unlabeled images. We then train a larger EfficientNet as a student model on the combination of labeled and pseudo labeled images. We iterate this process by putting back the student as the teacher. During the learning of the student, we inject noise such as dropout, stochastic depth, and data augmentation via RandAugment to the student so that the student generalizes better than the teacher. This work was conducted at Google.
We study different aspects of active learning with deep neural networks in a consistent and unified way. i) We investigate incremental and cumulative training modes which specify how the newly labeled data are used for training. ii) We study active learning w.r.t. the model configurations such as the number of epochs and neurons as well as the choice of batch size. iii) We consider in detail the behavior of query strategies and their corresponding informativeness measures and accordingly propose more efficient querying procedures. iv) We perform statistical analyses, e.g., on actively learned classes and test error estimation, that reveal several insights about active learning. v) We investigate how active learning with neural networks can benefit from pseudo-labels as proxies for actual labels.