One paradigm for learning from few labeled examples while making best use of a large amount of unlabeled data is unsupervised pretraining followed by supervised fine-tuning. Although this paradigm uses unlabeled data in a task-agnostic way, in contrast to common approaches to semi-supervised learning for computer vision, we show that it is surprisingly effective for semi-supervised learning on ImageNet. A key ingredient of our approach is the use of big (deep and wide) networks during pretraining and fine-tuning. We find that the fewer the labels, the more this approach (task-agnostic use of unlabeled data) benefits from a bigger network. After fine-tuning, the big network can be further improved and distilled into a much smaller one with little loss in classification accuracy by using the unlabeled examples for a second time, but in a task-specific way. The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge. This procedure achieves 73.9% ImageNet top-1 accuracy with just 1% of the labels (≤13 labeled images per class) using ResNet-50, a 10× improvement in label efficiency over the previous state-of-the-art. With 10% of labels, ResNet-50 trained with our method achieves 77.5% top-1 accuracy, outperforming standard supervised training with all of the labels.
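As a minimal sketch of the third step (task-specific distillation on unlabeled data), the PyTorch snippet below trains a smaller student against the fine-tuned teacher's soft predictions. The model, loader, and temperature names are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """Cross-entropy between temperature-scaled teacher and student distributions."""
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Minimizing -sum p_teacher * log p_student transfers task-specific knowledge.
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()

def distill(teacher, student, unlabeled_loader, optimizer, temperature=1.0, device="cpu"):
    teacher.eval()
    student.train()
    for images in unlabeled_loader:          # unlabeled images only
        images = images.to(device)
        with torch.no_grad():
            t_logits = teacher(images)       # fine-tuned big network
        s_logits = student(images)           # smaller network being distilled
        loss = distillation_loss(s_logits, t_logits, temperature)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```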
This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over the previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100× fewer labels.
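For concreteness, here is a minimal sketch of the normalized-temperature cross-entropy (NT-Xent) contrastive loss used by SimCLR, applied to the outputs of the nonlinear projection head mentioned in point (2); the batch size and temperature value are illustrative.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss. z1, z2: [N, d] projections of two augmented views of N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # [2N, d], unit norm
    sim = z @ z.t() / temperature                         # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # a view is never its own positive
    # The positive for example i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```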
Diabetic retinopathy (DR) is one of the leading causes of blindness in the working-age population of developed countries, caused by a side effect of diabetes that reduces the blood supply to the retina. Deep neural networks have been widely used in automated systems for DR classification from fundus images. However, these models require a large number of annotated images, and in the medical domain annotation by experts is expensive, tedious, and time-consuming; as a result, only a limited number of annotated images are available. This paper presents a semi-supervised method that leverages unlabeled images together with labeled images to train a model for detecting diabetic retinopathy. The proposed method uses unsupervised pretraining via self-supervised learning, followed by supervised fine-tuning with a small set of labeled images and knowledge distillation to improve performance on the classification task. The method was evaluated on the EyePACS test set and the Messidor-2 dataset, achieving 0.94 and 0.89 AUC respectively while using only 2% of the labeled EyePACS training images.
Most existing work on few-shot learning relies on meta-learning a network on a large base dataset, which is typically from the same domain as the target dataset. We address the problem of cross-domain few-shot learning, where there is a large shift between the base and target domains. Cross-domain few-shot recognition with unlabeled target data is largely unexplored in the literature. STARTUP was the first method to tackle this problem using self-training. However, it uses a fixed teacher, pretrained on the labeled base dataset, to create soft labels for the unlabeled target samples. Since the base dataset and the unlabeled target dataset come from different domains, projecting target images onto the class space of the base dataset with a fixed pretrained model may be suboptimal. We propose a simple dynamic-distillation-based approach to make use of unlabeled images from the novel/base dataset. We impose consistency regularization by computing predictions on weakly augmented versions of unlabeled images with a teacher network and matching them against predictions on strongly augmented versions of the same images from a student network. The parameters of the teacher network are updated as an exponential moving average of the student network's parameters. We show that the proposed network learns representations that can easily adapt to the target domain, even though it has not been trained on target-domain-specific classes during pretraining. Our model outperforms the current state-of-the-art method by 3.6% on 5-shot classification on the BSCD-FSL benchmark, and shows competitive performance on traditional in-domain few-shot learning tasks.
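A minimal sketch of this consistency-regularization step, assuming image-classification logits and a PyTorch-style training loop; the temperature, momentum, and KL-based matching are illustrative choices rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Teacher parameters track an exponential moving average of the student's."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

def consistency_step(student, teacher, weak_imgs, strong_imgs, optimizer, temperature=1.0):
    # Teacher (initialized as a copy of the student) produces soft labels on
    # weakly augmented images; the student must match them on strong views.
    with torch.no_grad():
        soft_targets = F.softmax(teacher(weak_imgs) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(strong_imgs), dim=-1)
    loss = F.kl_div(student_log_probs, soft_targets, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)   # dynamic teacher, updated every step
    return loss.item()
```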
This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning (S⁴L) and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that S⁴L and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels.
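As an illustration of such a joint objective, the sketch below combines a supervised cross-entropy term with a rotation-prediction self-supervised term in the spirit of an S⁴L-Rotation-style method; the model/head decomposition, the loss weight, and applying the rotation loss only to the unlabeled batch are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def s4l_rotation_batch_loss(model, rot_head, cls_head, labeled_x, labels, unlabeled_x, w=1.0):
    """Joint loss: supervised cross-entropy on labeled images plus a
    rotation-prediction (self-supervised) loss on unlabeled images."""
    # Supervised term on the labeled batch.
    sup_loss = F.cross_entropy(cls_head(model(labeled_x)), labels)
    # Self-supervised term: rotate each unlabeled image by 0/90/180/270 degrees
    # and predict which rotation was applied.
    rotations = torch.cat([torch.rot90(unlabeled_x, k, dims=(2, 3)) for k in range(4)])
    rot_targets = torch.arange(4, device=unlabeled_x.device).repeat_interleave(unlabeled_x.size(0))
    rot_loss = F.cross_entropy(rot_head(model(rotations)), rot_targets)
    return sup_loss + w * rot_loss
```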
We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view. At the same time, we update the target network with a slow-moving average of the online network. While state-of-the-art methods rely on negative pairs, BYOL achieves a new state of the art without them. BYOL reaches 74.3% top-1 classification accuracy on ImageNet using a linear evaluation with a ResNet-50 architecture and 79.6% with a larger ResNet. We show that BYOL performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks. Our implementation and pretrained models are given on GitHub.
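A compact sketch of one BYOL-style update under these definitions: `online` is the online encoder plus projector, `predictor` is the extra head on the online side, and `target` is updated by a slow-moving average; hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def byol_loss(online_pred, target_proj):
    """Negative cosine similarity (equivalent to MSE between L2-normalized vectors)."""
    online_pred = F.normalize(online_pred, dim=-1)
    target_proj = F.normalize(target_proj, dim=-1)
    return 2 - 2 * (online_pred * target_proj).sum(dim=-1).mean()

def byol_step(online, predictor, target, view1, view2, optimizer, momentum=0.996):
    # Symmetrized loss: each view is predicted from the other.
    with torch.no_grad():
        t1, t2 = target(view1), target(view2)       # target projections (no gradient)
    p1, p2 = predictor(online(view1)), predictor(online(view2))
    loss = byol_loss(p1, t2) + byol_loss(p2, t1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():                            # slow-moving average update
        for t_p, o_p in zip(target.parameters(), online.parameters()):
            t_p.mul_(momentum).add_(o_p, alpha=1.0 - momentum)
    return loss.item()
```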
We present Noisy Student Training, a semi-supervised learning approach that works well even when labeled data is abundant. Noisy Student Training achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images. On robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to 83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces ImageNet-P mean flip rate from 27.8 to 12.2. Noisy Student Training extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning. On ImageNet, we first train an EfficientNet model on labeled images and use it as a teacher to generate pseudo labels for 300M unlabeled images. We then train a larger EfficientNet as a student model on the combination of labeled and pseudo labeled images. We iterate this process by putting back the student as the teacher. During the learning of the student, we inject noise such as dropout, stochastic depth, and data augmentation via RandAugment to the student so that the student generalizes better than the teacher.
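The core mechanics can be sketched as follows: an un-noised teacher produces soft pseudo labels for unlabeled images, and the noised, equal-or-larger student is trained against them; iterating and swapping the student in as the new teacher happens outside these helpers. The loader and tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate_pseudo_labels(teacher, unlabeled_loader, device="cpu"):
    """Run the (un-noised) teacher over unlabeled images and keep soft pseudo labels."""
    teacher.eval()
    images_out, labels_out = [], []
    for images in unlabeled_loader:
        probs = F.softmax(teacher(images.to(device)), dim=-1)
        images_out.append(images.cpu())
        labels_out.append(probs.cpu())
    return torch.cat(images_out), torch.cat(labels_out)

def soft_target_loss(student_logits, soft_targets):
    """Cross-entropy against the teacher's soft pseudo labels. Noise (dropout,
    stochastic depth, RandAugment) is applied to the student elsewhere."""
    return -(soft_targets * F.log_softmax(student_logits, dim=-1)).sum(dim=-1).mean()
```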
Although self-supervised representation learning (SSL) has received wide attention from the community, recent studies have found that its performance suffers a cliff-like drop when the model size decreases. Current methods mainly rely on contrastive learning to train the network. In this work, we propose a simple yet effective Distilled Contrastive Learning (DisCo) method to alleviate this problem by a large margin. Specifically, we find that the final embedding obtained by mainstream SSL methods contains the most informative content, and propose to distill this final embedding to maximally transfer the teacher's knowledge into a lightweight model by constraining the student's final embedding to be consistent with that of the teacher. In addition, we observe experimentally a phenomenon we call the distillation bottleneck, and propose to enlarge the embedding dimension to alleviate it. Our method does not introduce any extra parameters to the lightweight model during deployment. Experimental results show that our method achieves state-of-the-art results on all lightweight models. In particular, when ResNet-101/ResNet-50 is used as the teacher to teach EfficientNet-B0, the linear evaluation result of EfficientNet-B0 on ImageNet is very close to that of ResNet-101/ResNet-50, while the number of parameters of EfficientNet-B0 is only 9.4%/16.3% of that of ResNet-101/ResNet-50. Code is available at https://github.com/yuting-gao/disco-pytorch.
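A minimal sketch of the embedding-consistency term described above, assuming a mean-squared error between L2-normalized final embeddings; DisCo combines such a term with the student's own contrastive training, and the exact form of its consistency constraint may differ.

```python
import torch
import torch.nn.functional as F

def embedding_distillation_loss(student_embed, teacher_embed):
    """Pull the student's final embedding toward the teacher's for the same image.
    MSE between L2-normalized vectors is one common choice for such a constraint."""
    s = F.normalize(student_embed, dim=-1)
    t = F.normalize(teacher_embed, dim=-1)
    return F.mse_loss(s, t)
```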
Despite significant advances, the performance of state-of-the-art continual learning approaches hinges on the unrealistic scenario of fully labeled data. In this paper, we tackle this challenge and propose an approach for continual semi-supervised learning -- a setting where not all the data samples are labeled. An underlying issue in this scenario is the model forgetting representations of unlabeled data and overfitting the labeled ones. We leverage the power of nearest-neighbor classifiers to non-linearly partition the feature space and learn a strong representation for the current task, as well as distill relevant information from previous tasks. We perform a thorough experimental evaluation and show that our method outperforms all the existing approaches by large margins, setting a strong state of the art on the continual semi-supervised learning paradigm. For example, on CIFAR100 we surpass several others even when using at least 30 times less supervision (0.8% vs. 25% of annotations).
We present a self-supervised Contrastive Video Representation Learning (CVRL) method to learn spatiotemporal visual representations from unlabeled videos. Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away. We study what makes for good data augmentations for video self-supervised learning and find that both spatial and temporal information are crucial. We carefully design data augmentations involving spatial and temporal cues. Concretely, we propose a temporally consistent spatial augmentation method to impose strong spatial augmentations on each frame of the video while maintaining the temporal consistency across frames. We also propose a sampling-based temporal augmentation method to avoid overly enforcing invariance on clips that are distant in time. On Kinetics-600, a linear classifier trained on the representations learned by CVRL achieves 70.4% top-1 accuracy with a 3D-ResNet-50 (R3D-50) backbone, outperforming ImageNet supervised pre-training by 15.7% and SimCLR unsupervised pre-training by 18.8% using the same inflated R3D-50. The performance of CVRL can be further improved to 72.9% with a larger R3D-152 (2× filters) backbone, significantly closing the gap between unsupervised and supervised video representation learning. Our code and models will be available at https://github.com/tensorflow/models/tree/master/official/.
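To illustrate the temporally consistent spatial augmentation, the sketch below samples one set of crop/flip parameters per clip and applies it to every frame (torchvision-based); the scale/ratio values and the omission of color augmentation are assumptions.

```python
import torch
from torchvision import transforms
from torchvision.transforms import functional as TF

def temporally_consistent_crop_flip(frames, out_size=224):
    """Apply the *same* random crop and flip to every frame of a clip.
    frames: list of PIL images (or CHW tensors) sampled from one video clip."""
    i, j, h, w = transforms.RandomResizedCrop.get_params(
        frames[0], scale=(0.3, 1.0), ratio=(3 / 4, 4 / 3))
    flip = torch.rand(1).item() < 0.5
    out = []
    for frame in frames:
        frame = TF.resized_crop(frame, i, j, h, w, [out_size, out_size])
        if flip:
            frame = TF.hflip(frame)
        out.append(frame)
    return out
```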
Existing few-shot learning (FSL) methods rely on training with a large labeled dataset, which prevents them from leveraging abundant unlabeled data. From an information-theoretic perspective, we propose an effective unsupervised FSL method that learns representations in a self-supervised manner. Following the information-maximization principle, our method learns comprehensive representations by capturing the intrinsic structure of the data. Specifically, we maximize the mutual information (MI) between instances and their representations with a low-bias MI estimator to perform self-supervised pretraining. Unlike supervised pretraining, which focuses on the discriminative features of seen classes, our self-supervised model has less bias toward the seen classes, resulting in better generalization to unseen classes. We explain that supervised pretraining and self-supervised pretraining are in fact maximizing different MI objectives. Extensive experiments are further conducted to analyze their FSL performance under various training settings. Surprisingly, the results show that, under proper conditions, self-supervised pretraining can outperform supervised pretraining. Compared with state-of-the-art FSL methods, our method achieves comparable performance on widely used FSL benchmarks without using any labels of the base classes.
Feature regression is a simple way to distill large neural network models into smaller ones. We show that, with simple changes to the network architecture, regression can outperform more complex state-of-the-art approaches for knowledge distillation from self-supervised models. Surprisingly, adding a multi-layer perceptron head to the CNN backbone is beneficial even if it is used only during distillation and discarded in the downstream task. Deeper nonlinear projections can thus be used to accurately mimic the teacher without changing the inference architecture or inference time. Moreover, we use independent projection heads to distill multiple teacher networks simultaneously. We also find that feeding the same weakly augmented image to both the teacher and the student networks aids distillation. Experiments on the ImageNet dataset demonstrate the efficacy of the proposed changes in various self-supervised distillation settings.
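A small sketch of the architectural change described above: the student keeps its backbone for inference but regresses the teacher's features through a deeper MLP head used only during distillation. The layer sizes, feature normalization, and MSE objective are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentWithProjection(nn.Module):
    """Student backbone plus an MLP head used only during distillation; at
    inference the head is discarded and only the backbone is kept."""
    def __init__(self, backbone, feat_dim, teacher_dim, hidden_dim=2048):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.BatchNorm1d(hidden_dim), nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, teacher_dim))

    def forward(self, x):
        return self.head(self.backbone(x))

def regression_loss(student_out, teacher_feat):
    # Regress onto the teacher's features; both networks see the same weakly
    # augmented image, as suggested above.
    return F.mse_loss(F.normalize(student_out, dim=-1),
                      F.normalize(teacher_feat, dim=-1))
```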
Deep Bregman divergences measure the divergence of data points using neural networks that go beyond Euclidean distance, and are capable of capturing the divergence between distributions. In this paper, we propose deep Bregman divergences for contrastive learning of visual representations, where our goal is to improve the contrastive loss used in self-supervised learning by training an additional network based on functional Bregman divergence. In contrast to traditional contrastive learning methods, which are based solely on divergences between single points, our framework can capture the divergence between distributions, which improves the quality of the learned representations. We show that the combination of the conventional contrastive loss and our proposed divergence loss outperforms the baselines and most previous self-supervised and semi-supervised learning methods on multiple classification and object detection tasks and datasets. Moreover, the learned representations generalize well when transferred to other datasets and tasks. The source code and our models are available in the supplementary material and will be released with the paper.
We study semi-supervised learning (SSL) for vision transformers (ViT), an under-explored topic despite the wide adoption of the ViT architecture for different tasks. To tackle this problem, we propose a new SSL pipeline consisting of first un/self-supervised pretraining, followed by supervised fine-tuning, and finally semi-supervised fine-tuning. In the semi-supervised fine-tuning stage, we adopt an exponential moving average (EMA)-teacher framework instead of the popular FixMatch, since the former is more stable and delivers higher accuracy for semi-supervised vision transformers. In addition, we propose a probabilistic pseudo mixup mechanism to interpolate unlabeled samples and their pseudo labels for improved regularization, which is important for training ViTs with weak inductive bias. Our proposed method, dubbed Semi-ViT, achieves comparable or better performance than the CNN counterparts in the semi-supervised classification setting. Semi-ViT also enjoys the scalability benefits of ViTs, which can be readily scaled up to large models with increasingly higher accuracy. For example, our largest model, Semi-ViT-Huge, achieves an impressive 80% top-1 accuracy on ImageNet using only 1% of the labels, which is comparable to Inception-v4 trained with 100% of the ImageNet labels.
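As a rough illustration of mixing unlabeled samples with their pseudo labels, here is a plain mixup over an unlabeled batch and its soft pseudo-label distributions; the actual probabilistic pseudo mixup additionally accounts for pseudo-label confidence, which is not modeled here.

```python
import torch

def pseudo_mixup(images, pseudo_probs, alpha=0.8):
    """Mix unlabeled images and their (soft) pseudo labels with a Beta-sampled
    coefficient, as in standard mixup. A simplified stand-in for the paper's
    probabilistic pseudo mixup."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0), device=images.device)
    mixed_images = lam * images + (1 - lam) * images[perm]
    mixed_targets = lam * pseudo_probs + (1 - lam) * pseudo_probs[perm]
    return mixed_images, mixed_targets
```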
Supervised deep-learning-based methods produce accurate results for medical image segmentation. However, they require large labeled datasets, and obtaining them is a laborious task that requires clinical expertise. Semi/self-supervised learning methods address this limitation by exploiting unlabeled data along with limited annotated data. Recent self-supervised learning methods use contrastive losses to learn good global-level representations from unlabeled images and achieve high performance on popular natural image datasets like ImageNet. In pixel-level prediction tasks such as segmentation, it is crucial to also learn good local-level representations, together with global ones, to achieve better accuracy. However, the impact of existing local contrastive losses on learning good local representations remains limited, because similar and dissimilar local regions are defined based on random augmentations and spatial proximity rather than on the semantic labels of local regions, owing to the lack of large-scale expert annotations in the semi/self-supervised setting. In this paper, we propose a local contrastive loss to learn good pixel-level features useful for segmentation, by exploiting semantic label information obtained from pseudo labels of unlabeled images along with the limited annotated images. In particular, we define the proposed loss to encourage similar representations for pixels with the same pseudo label/label, while being dissimilar to the representations of pixels with different pseudo labels/labels in the dataset. We perform pseudo-label-based self-training and train the network by jointly optimizing the proposed contrastive loss on both the labeled and unlabeled sets and the segmentation loss on the labeled set only. We evaluate on three public cardiac and prostate datasets and obtain high segmentation performance.
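A hedged sketch of a pixel-level contrastive term of this kind: features of sampled pixels that share a (pseudo) label are pulled together and pushed away from pixels with different labels. The pixel sampling, temperature, and this particular supervised-contrastive formulation are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def local_contrastive_loss(pixel_feats, pixel_labels, temperature=0.1):
    """pixel_feats: [N, d] features of sampled pixels; pixel_labels: [N] labels
    or pseudo labels. Pixels sharing a (pseudo) label are treated as positives."""
    feats = F.normalize(pixel_feats, dim=-1)
    sim = feats @ feats.t() / temperature                               # [N, N]
    logits_mask = ~torch.eye(len(feats), dtype=torch.bool, device=feats.device)
    pos_mask = (pixel_labels.unsqueeze(0) == pixel_labels.unsqueeze(1)) & logits_mask
    # Log-softmax over all other pixels, averaged over each pixel's positives.
    log_prob = sim - torch.logsumexp(sim.masked_fill(~logits_mask, float("-inf")),
                                     dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return loss[pos_mask.any(dim=1)].mean()   # only pixels that have a positive
```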
The authors of this work present an overview of recent semi-supervised learning methods and related works. Although neural networks have achieved remarkable success in various applications, they are subject to a few strong constraints, including the need for large amounts of labeled data. Therefore, semi-supervised learning, a learning scheme in which scarce labels and a large amount of unlabeled data are used to train models (e.g., deep neural networks), is becoming increasingly important. Based on the key assumptions of semi-supervised learning, namely the manifold assumption, the cluster assumption, and the continuity assumption, this work reviews recent semi-supervised learning methods. In particular, methods that use deep neural networks in a semi-supervised learning setting are the main focus of the discussion. Existing works are first classified and explained according to their underlying ideas, and holistic approaches that unify these ideas are then described in detail.
We present techniques for incorporating coarse taxonomic labels into the training of image classifiers for fine-grained domains. Such labels can often be obtained with substantially less effort for fine-grained domains, such as the natural world, where categories are organized according to a biological taxonomy. On the Semi-iNat dataset, which spans three kingdoms, including phylum labels improves species-level classification accuracy by 6% in a transfer-learning setting with ImageNet-pretrained models. Using the hierarchical label structure with FixMatch, a state-of-the-art semi-supervised learning algorithm, improves performance by a further 1.3%. The relative gains are larger when more detailed labels such as class or order are provided, or when training from scratch. However, we find that most methods are not robust to the presence of out-of-domain data from novel categories. We propose a technique to select relevant data from a large pool of unlabeled images, guided by the hierarchy, which improves robustness. Overall, our experiments show that semi-supervised learning with coarse taxonomic labels is practical for training classifiers in fine-grained domains.
We conduct a rigorous evaluation of recent self- and semi-supervised ML techniques that leverage unlabeled data to improve downstream task performance, on three remote sensing tasks: riverbed segmentation, land cover mapping, and flood mapping. These methods are particularly valuable for remote sensing tasks, since unlabeled imagery is easy to access while obtaining ground-truth labels can be expensive. We quantify the performance improvements one can expect on these remote sensing segmentation tasks when additional unlabeled images (outside of the labeled dataset) are made available for training. We also design experiments to test the effectiveness of these techniques when the test set exhibits a domain shift relative to the training and validation sets.
Semi-supervised learning lately has shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically that produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning. By substituting simple noising operations with advanced data augmentation methods such as RandAugment and back-translation, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 5.43 with only 250 examples. Our method also combines well with transfer learning, e.g., when fine-tuning from BERT, and yields improvements in the high-data regime, such as ImageNet, both when only 10% of the labels are available and when a fully labeled set is augmented with 1.3M extra unlabeled examples.
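A minimal sketch of the unsupervised consistency term under these ideas: the prediction on an unlabeled example (sharpened and confidence-filtered) serves as a fixed target for the prediction on its strongly augmented version, e.g., produced by RandAugment or back-translation. The temperature and threshold values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, unlabeled_x, augmented_x, temperature=0.4, threshold=0.8):
    """KL divergence between the sharpened, confidence-filtered prediction on an
    original unlabeled example and the prediction on its augmented version."""
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x) / temperature, dim=-1)   # fixed target
        mask = probs.max(dim=-1).values >= threshold                  # keep confident examples
    log_probs_aug = F.log_softmax(model(augmented_x), dim=-1)
    per_example_kl = F.kl_div(log_probs_aug, probs, reduction="none").sum(dim=-1)
    return (per_example_kl * mask).sum() / mask.sum().clamp(min=1)
```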