The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%.
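The weight-averaging update described above is compact enough to sketch directly. The following is a minimal sketch assuming PyTorch; the tiny model, the decay value 0.999, and the helper name `update_teacher` are illustrative, not the paper's released code.

```python
import copy
import torch

def update_teacher(student: torch.nn.Module, teacher: torch.nn.Module,
                   alpha: float = 0.999) -> None:
    # Exponential moving average of weights: teacher <- alpha * teacher + (1 - alpha) * student
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(alpha).add_(s_param, alpha=1 - alpha)

student = torch.nn.Linear(10, 3)
teacher = copy.deepcopy(student)  # the teacher starts as a copy of the student
update_teacher(student, teacher)  # called after every optimizer step, not once per epoch
```

Because the average is over weights rather than per-example predictions, the teacher can be refreshed at every step, which is what removes the once-per-epoch bottleneck of Temporal Ensembling.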
In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.
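A minimal sketch of the ensembled target update, assuming NumPy; the array names and the momentum value 0.6 are illustrative. The accumulator is updated once per epoch and divided by a startup-bias correction term.

```python
import numpy as np

# Accumulated ensemble predictions Z, one row per training example
# (here: N examples, C classes), and momentum alpha.
N, C, alpha = 1000, 10, 0.6
Z = np.zeros((N, C))

def update_targets(Z: np.ndarray, probs: np.ndarray, epoch: int) -> np.ndarray:
    # Once per epoch: accumulate this epoch's network outputs...
    Z[:] = alpha * Z + (1 - alpha) * probs
    # ...and correct the startup bias, Adam-style, to get the training targets.
    return Z / (1 - alpha ** (epoch + 1))

probs = np.random.dirichlet(np.ones(C), size=N)  # stand-in for softmax outputs
targets = update_targets(Z, probs, epoch=0)
```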
Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that SSL algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and performance can degrade substantially when the unlabeled dataset contains out-of-distribution examples. To help guide SSL research towards real-world applicability, we make our unified reimplementation and evaluation platform publicly available at https://github.com/brain-research/realistic-ssl-evaluation.
We present Self Meta Pseudo Labels, a novel semi-supervised learning method similar to Meta Pseudo Labels but without the teacher model. We introduce a novel way to use a single model for both generating pseudo labels and classification, allowing us to store only one model in memory instead of two. Our method attains similar performance to the Meta Pseudo Labels method while drastically reducing memory usage.
We present Noisy Student Training, a semi-supervised learning approach that works well even when labeled data is abundant. Noisy Student Training achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images. On robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to 83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces ImageNet-P mean flip rate from 27.8 to 12.2. Noisy Student Training extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning. On ImageNet, we first train an EfficientNet model on labeled images and use it as a teacher to generate pseudo labels for 300M unlabeled images. We then train a larger EfficientNet as a student model on the combination of labeled and pseudo-labeled images. We iterate this process by putting back the student as the teacher. During the learning of the student, we inject noise such as dropout, stochastic depth, and data augmentation via RandAugment to the student so that the student generalizes better than the teacher.
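A hedged sketch of one teacher-student round, assuming PyTorch; the tiny models, random data, and the helper `self_training_round` are stand-ins for the paper's large-scale setup. The key asymmetry is that the teacher pseudo-labels in eval mode while the student trains with noise (here, dropout) active.

```python
import torch
import torch.nn.functional as F

def self_training_round(teacher, student, labeled, unlabeled, optimizer):
    """One Noisy Student round: a clean teacher pseudo-labels, a noised student trains."""
    teacher.eval()
    with torch.no_grad():                      # the teacher is NOT noised at inference
        pseudo = teacher(unlabeled).argmax(dim=1)
    student.train()                            # dropout etc. stay active: the student IS noised
    for x, y in [(labeled[0], labeled[1]), (unlabeled, pseudo)]:
        loss = F.cross_entropy(student(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return student                             # becomes the teacher of the next round

teacher = torch.nn.Sequential(torch.nn.Dropout(0.5), torch.nn.Linear(8, 4))
student = torch.nn.Sequential(torch.nn.Dropout(0.5), torch.nn.Linear(8, 4))  # equal-or-larger in the paper
opt = torch.optim.SGD(student.parameters(), lr=0.1)
labeled = (torch.randn(16, 8), torch.randint(0, 4, (16,)))
teacher = self_training_round(teacher, student, labeled, torch.randn(64, 8), opt)
```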
Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. In this work, we unify the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, that guesses low-entropy labels for data-augmented unlabeled examples and mixes labeled and unlabeled data using MixUp. MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts. For example, on CIFAR-10 with 250 labels, we reduce error rate by a factor of 4 (from 38% to 11%) and by a factor of 2 on STL-10. We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success. We release all code used in our experiments.
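The label-guessing step lends itself to a short sketch, here assuming PyTorch; the model, the K=2 augmented views, and the temperature are illustrative rather than the released code.

```python
import torch

def guess_labels(model, unlabeled_augs: list, T: float = 0.5) -> torch.Tensor:
    """Average predictions over K augmentations of an unlabeled batch, then sharpen."""
    with torch.no_grad():
        p = torch.stack([model(x).softmax(dim=1) for x in unlabeled_augs]).mean(dim=0)
    p = p ** (1.0 / T)                           # temperature sharpening lowers entropy
    return p / p.sum(dim=1, keepdim=True)

model = torch.nn.Linear(8, 4)
augs = [torch.randn(32, 8) for _ in range(2)]    # K=2 augmented views (stand-ins)
targets = guess_labels(model, augs)
# These soft targets and the labeled batch are then mixed with MixUp before training.
```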
This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning (S⁴L) and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that S⁴L and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels.
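A speculative sketch of the joint training signal, assuming PyTorch and a rotation-prediction pretext task (one of the self-supervised tasks the framework builds on); the stand-in tensors take the place of actual rotated images, and the heads and loss weight are illustrative.

```python
import torch
import torch.nn.functional as F

backbone = torch.nn.Linear(32, 16)          # stand-in feature extractor
cls_head = torch.nn.Linear(16, 10)          # supervised head (10 classes)
rot_head = torch.nn.Linear(16, 4)           # self-supervised head (4 rotations)

x_lab, y_lab = torch.randn(8, 32), torch.randint(0, 10, (8,))
x_unlab = torch.randn(8, 32)
rot_label = torch.randint(0, 4, (8,))       # stands in for which of {0°, 90°, 180°, 270°} was applied

sup_loss = F.cross_entropy(cls_head(backbone(x_lab)), y_lab)
self_loss = F.cross_entropy(rot_head(backbone(x_unlab)), rot_label)
loss = sup_loss + 1.0 * self_loss           # a single joint objective over both heads
```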
Modern deep learning has achieved great success in various fields, but it requires labeling large amounts of data, which is expensive and labor-intensive. Active learning (AL), which identifies the most informative samples to be labeled, is becoming increasingly important for maximizing the efficiency of the training process. Existing AL methods mostly use only a single, final fixed model to acquire the samples to be labeled. This strategy may not be good enough, because it does not consider the structural uncertainty of a model for the given training data when acquiring samples. In this study, we propose a novel acquisition criterion based on the temporal self-ensemble generated by conventional stochastic gradient descent (SGD) optimization. These self-ensemble models are obtained by capturing the intermediate network weights along the SGD iterations. Our acquisition function relies on a consistency measure between student and teacher models: a fixed number of temporal self-ensemble models serve as the student models, and the teacher model is constructed by averaging the student models. Using the proposed acquisition criterion, we present an AL algorithm, student-teacher consistency-based AL (ST-CoNAL). Experiments on image classification tasks on the CIFAR-10, CIFAR-100, Caltech-256, and Tiny ImageNet datasets demonstrate that the proposed ST-CoNAL performs significantly better than the existing acquisition methods. Moreover, extensive experiments show the robustness and effectiveness of our method.
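A hedged sketch of the acquisition idea, assuming PyTorch; the snapshot models, pool tensor, and scoring details are illustrative, and the paper's exact consistency measure may differ.

```python
import torch
import torch.nn.functional as F

def acquisition_scores(snapshots, teacher, pool: torch.Tensor) -> torch.Tensor:
    """Score unlabeled pool samples by student-teacher prediction inconsistency."""
    with torch.no_grad():
        p_teacher = teacher(pool).log_softmax(dim=1)
        # Sum KL(teacher || student) over the temporal self-ensemble snapshots;
        # high disagreement suggests high structural uncertainty, so label first.
        return sum(
            F.kl_div(s(pool).log_softmax(dim=1), p_teacher,
                     log_target=True, reduction="none").sum(dim=1)
            for s in snapshots
        )

snapshots = [torch.nn.Linear(8, 4) for _ in range(3)]   # captured along the SGD path
teacher = torch.nn.Linear(8, 4)
with torch.no_grad():                                   # teacher = average of student weights
    for name, p in teacher.named_parameters():
        p.copy_(torch.stack([dict(s.named_parameters())[name] for s in snapshots]).mean(0))
scores = acquisition_scores(snapshots, teacher, torch.randn(100, 8))
picked = scores.topk(10).indices                        # query the 10 most inconsistent samples
```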
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance. This domain has seen fast progress recently, at the cost of requiring more complex methods. In this paper we propose FixMatch, an algorithm that is a significant simplification of existing SSL methods. FixMatch first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 labels, just 4 labels per class. We carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch's success. The code is available at https://github.com/google-research/fixmatch.
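The unlabeled-data objective is compact enough to sketch directly. The following assumes PyTorch, with random tensors standing in for the weakly and strongly augmented views; the helper name is illustrative, and τ = 0.95 follows the paper's default threshold.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, weak: torch.Tensor, strong: torch.Tensor,
                            tau: float = 0.95) -> torch.Tensor:
    """Pseudo-label from the weak view; train on the strong view above threshold tau."""
    with torch.no_grad():
        probs = model(weak).softmax(dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= tau).float()         # keep only high-confidence pseudo-labels
    per_example = F.cross_entropy(model(strong), pseudo, reduction="none")
    return (mask * per_example).mean()

model = torch.nn.Linear(8, 4)
loss = fixmatch_unlabeled_loss(model, torch.randn(32, 8), torch.randn(32, 8))
```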
Most semi-supervised learning methods over-sample labeled data when constructing training mini-batches. This paper studies whether this common practice improves learning, and how. We compare it to an alternative setting where each mini-batch is sampled uniformly from all the training data, labeled or not, which greatly reduces the direct supervision from true labels in typical low-label regimes. However, this simpler setting can also be seen as more general, and even necessary in multi-task problems where over-sampling labeled data would become intractable. Our experiments on semi-supervised CIFAR-10 image classification using FixMatch show a performance drop with the uniform sampling approach that diminishes as the amount of labeled data or the training time increases. Further, we analyze the training dynamics to understand how over-sampling of labeled data compares to uniform sampling. Our main finding is that over-sampling is especially beneficial early in training, but less important in the later stages when more pseudo-labels become correct. Nevertheless, we also find that keeping some true labels remains important to avoid the accumulation of confirmation errors from incorrect pseudo-labels.
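A minimal sketch contrasting the two mini-batch constructions, in plain Python; the pool sizes and batch quotas are illustrative, not the paper's exact settings.

```python
import random

labeled = list(range(40))            # indices of the 40 labeled examples
unlabeled = list(range(40, 50_000))  # indices of the unlabeled pool

def oversampled_batch(n_lab: int = 32, n_unlab: int = 224) -> list:
    # Common practice: a fixed quota of labeled examples in every mini-batch.
    return random.sample(labeled, n_lab) + random.sample(unlabeled, n_unlab)

def uniform_batch(size: int = 256) -> list:
    # Alternative studied above: sample uniformly from ALL training data, so a
    # batch contains a labeled example only ~40/50000 of the time.
    return random.sample(labeled + unlabeled, size)
```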
Semi-supervised learning lately has shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically that produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning. By substituting simple noising operations with advanced data augmentation methods such as RandAugment and back-translation, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 5.43 with only 250 examples. Our method also combines well with transfer learning, e.g., when fine-tuning from BERT, and yields improvements in the high-data regime, such as ImageNet, whether only 10% of the data is labeled or a full labeled set with 1.3M extra unlabeled examples is used.
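A hedged sketch of the consistency term, assuming PyTorch; random tensors stand in for an input and its advanced augmentation (RandAugment, back-translation), and the function name is illustrative.

```python
import torch
import torch.nn.functional as F

def uda_consistency_loss(model, x: torch.Tensor, x_aug: torch.Tensor) -> torch.Tensor:
    """KL consistency between predictions on an example and its augmented version."""
    with torch.no_grad():                        # the target distribution is not backpropped
        target = model(x).softmax(dim=1)
    log_pred = model(x_aug).log_softmax(dim=1)   # x_aug: advanced augmentation of x
    return F.kl_div(log_pred, target, reduction="batchmean")

model = torch.nn.Linear(8, 4)
loss = uda_consistency_loss(model, torch.randn(32, 8), torch.randn(32, 8))
```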
Recent advances in deep learning rely on large labeled datasets to train high-capacity models. However, collecting large datasets in a time- and cost-efficient manner often results in label noise. We present a method for learning from noisy labels that leverages similarities between training examples in feature space, encouraging the prediction of each example to be similar to those of its nearest neighbors. Compared to training algorithms that use multiple models or distinct training stages, our method takes the form of a simple, additional regularization term. It can be interpreted as an inductive version of the classical, transductive label propagation algorithm. We thoroughly evaluate our method on datasets with both synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, WebVision, Clothing1M, mini-ImageNet-Red) noise, and achieve competitive or state-of-the-art accuracy on all of them.
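A hedged sketch of such a regularization term, assuming PyTorch; the choice of k, the similarity weighting, and the KL form are illustrative and may differ in detail from the paper's formulation.

```python
import torch
import torch.nn.functional as F

def neighbor_consistency_loss(features: torch.Tensor, logits: torch.Tensor,
                              k: int = 5) -> torch.Tensor:
    """Pull each prediction toward a similarity-weighted average of its k neighbors."""
    sim = F.normalize(features, dim=1) @ F.normalize(features, dim=1).T
    sim.fill_diagonal_(float("-inf"))            # an example is not its own neighbor
    vals, idx = sim.topk(k, dim=1)
    weights = vals.softmax(dim=1)                # similarity-weighted neighbors
    probs = logits.softmax(dim=1)
    target = (weights.unsqueeze(2) * probs[idx]).sum(dim=1)
    return F.kl_div(probs.clamp_min(1e-8).log(), target.detach(), reduction="batchmean")

loss = neighbor_consistency_loss(torch.randn(64, 128), torch.randn(64, 10))
```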
The authors of this work present an overview of recent semi-supervised learning methods and related works. Although neural networks have achieved remarkable success in a variety of applications, they face a few hard constraints, including the need for large amounts of labeled data. Therefore, semi-supervised learning, a learning scheme in which scarce labels and a large amount of unlabeled data are used to train models (e.g., deep neural networks), is becoming increasingly important. Based on the key assumptions of semi-supervised learning, namely the manifold assumption, the cluster assumption, and the continuity assumption, the work reviews recent semi-supervised learning methods. In particular, methods that use deep neural networks in a semi-supervised learning setting are the main focus of the discussion. In addition, the existing works are first classified and explained according to their underlying ideas, and holistic methods that unify the aforementioned ideas are then described in detail.
In addition to standard supervised learning with hard labels, auxiliary losses are commonly used in many supervised learning settings to improve a model's generalization. For example, knowledge distillation adds a second loss that trains the model to imitate a teacher, where the teacher may be a pretrained model that outputs a richer distribution than the labels. Similarly, in settings with limited labeled data, weak labeling information is used in the form of labeling functions; here, auxiliary losses are introduced to fit the labeling functions, which may be noisy rule-based approximations of the true labels. We address the problem of learning to combine these losses in a principled manner. We introduce AMAL, which uses meta-learning to learn instance-specific weights on a validation metric, so as to achieve an optimal mixing of the losses. Experiments in a number of knowledge distillation and rule-denoising domains show that AMAL provides noticeable gains over competitive baselines in those domains. We empirically analyze our method and share insights into the mechanisms through which it provides performance gains.
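A speculative sketch of instance-specific loss mixing, assuming PyTorch; in AMAL the weights come from meta-learning against a validation metric, which is omitted here, so the random weights below are pure stand-ins.

```python
import torch
import torch.nn.functional as F

# Per-instance mixing of a hard-label loss and a distillation loss; in AMAL the
# weights w would be produced by meta-learning and tuned on validation data.
logits = torch.randn(16, 10, requires_grad=True)
labels = torch.randint(0, 10, (16,))
teacher_probs = torch.randn(16, 10).softmax(dim=1)

w = torch.sigmoid(torch.randn(16))               # stand-in for meta-learned weights
ce = F.cross_entropy(logits, labels, reduction="none")
kd = F.kl_div(logits.log_softmax(dim=1), teacher_probs, reduction="none").sum(dim=1)
loss = (w * ce + (1 - w) * kd).mean()            # instance-specific loss mixture
```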
We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only "virtually" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward and back propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
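The single power-iteration approximation of the virtual adversarial direction can be sketched as follows, assuming PyTorch; ξ and ε are hyperparameters, and the helper name is illustrative.

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x: torch.Tensor, xi: float = 1e-6, eps: float = 8.0) -> torch.Tensor:
    """One power-iteration step finds the virtual adversarial direction; no labels used."""
    with torch.no_grad():
        p = model(x).softmax(dim=1)              # current predictive distribution
    d = torch.randn_like(x)                      # random initial direction
    d = (xi * F.normalize(d.flatten(1), dim=1).view_as(x)).requires_grad_(True)
    kl = F.kl_div(model(x + d).log_softmax(dim=1), p, reduction="batchmean")
    grad = torch.autograd.grad(kl, d)[0]         # direction of steepest KL increase
    r_adv = eps * F.normalize(grad.flatten(1), dim=1).view_as(x)
    return F.kl_div(model(x + r_adv).log_softmax(dim=1), p, reduction="batchmean")

model = torch.nn.Linear(8, 4)
loss = vat_loss(model, torch.randn(32, 8))
```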
We demonstrate, both theoretically and empirically, that adversarial robustness can significantly benefit from semi-supervised learning. Theoretically, we revisit the simple Gaussian model of Schmidt et al., which shows a sample-complexity gap between standard and robust classification. We prove that unlabeled data bridges this gap: a simple semi-supervised learning procedure (self-training) achieves high robust accuracy using the same number of labels required for achieving high standard accuracy. Empirically, we augment CIFAR-10 with 500K unlabeled images sourced from 80 Million Tiny Images and use robust self-training to outperform state-of-the-art robust accuracies in (i) $\ell_\infty$ robustness against several strong attacks via adversarial training and (ii) certified $\ell_2$ and $\ell_\infty$ robustness via randomized smoothing. On SVHN, adding the dataset's own extra training set with the labels removed provides gains of 4 to 10 points, within 1 point of the gain from using the extra labels.
Knowledge distillation is a popular technique for training a small student network to emulate a larger teacher model, such as an ensemble of networks. We show that while knowledge distillation can improve student generalization, it often does not work as commonly understood: there frequently remains a surprisingly large discrepancy between the predictive distributions of the teacher and the student, even in cases when the student has the capacity to match the teacher perfectly. We identify difficulties in optimization as a key reason why the student is unable to match the teacher. We also show how the details of the dataset used for distillation play a role in how closely the student matches the teacher, and that a closer match to the teacher paradoxically does not always lead to better student generalization.
Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this paper we empirically demonstrate that shallow feed-forward nets can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, in some cases the shallow neural nets can learn these deep functions using the same number of parameters as the original deep models. On the TIMIT phoneme recognition and CIFAR-10 image recognition tasks, shallow nets can be trained that perform similarly to complex, well-engineered, deeper convolutional architectures.
Training deep convolutional neural networks usually requires a large amount of labeled data. However, it is expensive and time-consuming to annotate data for medical image segmentation tasks. In this paper, we present a novel uncertainty-aware semi-supervised framework for left atrium segmentation from 3D MR images. Our framework can effectively leverage the unlabeled data by encouraging consistent predictions of the same input under different perturbations. Concretely, the framework consists of a student model and a teacher model, and the student model learns from the teacher model by minimizing a segmentation loss and a consistency loss with respect to the targets of the teacher model. We design a novel uncertainty-aware scheme to enable the student model to gradually learn from the meaningful and reliable targets by exploiting the uncertainty information. Experiments show that our method achieves high performance gains by incorporating the unlabeled data. Our method outperforms the state-of-the-art semi-supervised methods, demonstrating the potential of our framework for challenging semi-supervised problems.
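A hedged sketch of an uncertainty-masked consistency loss in this spirit, assuming PyTorch; the entropy threshold, the T=8 stochastic forward passes, and the MSE form are illustrative choices rather than the paper's exact configuration.

```python
import torch

def uncertainty_masked_consistency(student, teacher, x: torch.Tensor,
                                   T: int = 8, threshold: float = 0.5) -> torch.Tensor:
    """Consistency loss kept only where the teacher's MC-dropout uncertainty is low."""
    teacher.train()                              # keep dropout active for MC sampling
    with torch.no_grad():
        mc = torch.stack([teacher(x).softmax(dim=1) for _ in range(T)]).mean(dim=0)
        entropy = -(mc * mc.clamp_min(1e-8).log()).sum(dim=1)   # predictive entropy
        mask = (entropy < threshold).float()     # trust only confident teacher targets
    mse = ((student(x).softmax(dim=1) - mc) ** 2).sum(dim=1)
    return (mask * mse).sum() / mask.sum().clamp_min(1.0)

student = torch.nn.Sequential(torch.nn.Dropout(0.5), torch.nn.Linear(8, 4))
teacher = torch.nn.Sequential(torch.nn.Dropout(0.5), torch.nn.Linear(8, 4))
loss = uncertainty_masked_consistency(student, teacher, torch.randn(16, 8))
```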
Semi-supervised learning (SSL) has long proven to be an effective technique for training models with limited labels. In the existing literature, consistency-regularization-based methods, which force perturbed samples to have predictions similar to those of the original samples, have attracted much attention. However, we observe that the performance of such methods degrades drastically when labels become extremely scarce, e.g., 2 or 3 labels per class. Our empirical study finds that the main problem lies in the drift of semantic information during data augmentation. The problem can be alleviated when sufficient supervision is provided, but with little guidance, the incorrect regularization misleads the network and undermines the performance of the algorithm. To address this problem, we (1) propose an interpolation-based method to construct more reliable positive sample pairs, and (2) design a novel contrastive loss that guides the embeddings of the learned network to change linearly between samples, improving the network's discriminative ability by enlarging the margins of decision boundaries. Because no destructive regularization is introduced, the performance of our proposed algorithm is improved by a large margin. Specifically, the proposed algorithm outperforms the second-best algorithm (CoMatch), achieving 88.73% classification accuracy when only two labels are available for each class of the CIFAR-10 dataset, a 5.3% improvement. Moreover, we further demonstrate the generality of the proposed method by significantly improving the performance of existing state-of-the-art algorithms with our proposed strategy.
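A speculative sketch of the interpolation idea, assuming PyTorch; the pair construction and the linearity check below are illustrative, and the paper's actual contrastive loss is more involved.

```python
import torch

def interpolated_pair(x1: torch.Tensor, x2: torch.Tensor, lam: float = 0.7):
    """Build a positive pair from an example and its interpolation toward another,
    instead of a strong augmentation that may drift semantically."""
    x_mix = lam * x1 + (1 - lam) * x2
    return x1, x_mix

encoder = torch.nn.Linear(32, 16)                # stand-in embedding network
x1, x2 = torch.randn(8, 32), torch.randn(8, 32)
_, x_mix = interpolated_pair(x1, x2)
z1, z2, z_mix = encoder(x1), encoder(x2), encoder(x_mix)
# The contrastive objective encourages the embedding of the interpolated input to
# lie near the same interpolation of the embeddings, i.e. z_mix ≈ 0.7*z1 + 0.3*z2,
# so that embeddings change linearly between samples.
linearity_gap = (z_mix - (0.7 * z1 + 0.3 * z2)).norm()
```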