This paper describes the principle of "general cyclical training" in machine learning, in which training begins and ends with "easy training" and the "hard training" happens during the middle epochs. We propose several manifestations for training neural networks, including algorithmic examples (via hyper-parameters and loss functions), data-based examples, and model-based examples. Specifically, we introduce several new techniques: cyclical weight decay, cyclical batch size, cyclical focal loss, cyclical softmax temperature, cyclical data augmentation, cyclical gradient clipping, and cyclical semi-supervised learning. In addition, we demonstrate that cyclical weight decay, cyclical softmax temperature, and cyclical gradient clipping (as three examples of this principle) are beneficial to the test accuracy of trained models. Furthermore, we discuss model-based examples (such as pretraining and knowledge distillation) from the perspective of general cyclical training and suggest some changes to typical training methodology. In summary, this paper defines the general cyclical training concept and discusses several specific ways in which the concept can be applied to training neural networks. In the spirit of reproducibility, the code used in our experiments is available at \url{https://github.com/lnsmith54/cfl}.
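The abstract does not give the exact schedules, so the following is a minimal sketch of one possible easy-hard-easy schedule, assuming a half-sine shape and applying it to weight decay; the shape, endpoints, and the choice of treating larger weight decay as "hard" are illustrative assumptions, not the paper's exact formulation.

```python
import math

def easy_hard_easy(step: int, total_steps: int, easy: float, hard: float) -> float:
    """Illustrative easy-hard-easy schedule: returns `easy` at the first and last step
    and `hard` at the midpoint, interpolated with a half-sine (an assumed shape)."""
    progress = step / max(1, total_steps - 1)          # training progress in [0, 1]
    return easy + (hard - easy) * math.sin(math.pi * progress)

# Example: cyclical weight decay, assuming larger weight decay corresponds to "hard" training.
total_steps = 10_000
for step in range(total_steps):
    weight_decay = easy_hard_easy(step, total_steps, easy=1e-5, hard=1e-3)
    # ... set optimizer.param_groups[0]["weight_decay"] = weight_decay before each update ...
```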
The cross-entropy softmax loss is the primary loss function used for training deep neural networks. On the other hand, the focal loss function has been shown to improve performance when the number of training samples per class is imbalanced, as in long-tailed datasets. In this paper, we introduce a novel cyclical focal loss and demonstrate that it is more universal than either the cross-entropy softmax loss or the focal loss. We describe the intuition behind the cyclical focal loss, and our experiments provide evidence that it delivers superior performance for balanced, imbalanced, or long-tailed datasets. We provide numerous experimental results for CIFAR-10/CIFAR-100, ImageNet, balanced and imbalanced 4,000-training-sample versions of CIFAR-10/CIFAR-100, and the ImageNet-LT and Places-LT datasets from the long-tailed recognition challenge. Implementing the cyclical focal loss function requires only a few lines of code and does not increase training time. In the spirit of reproducibility, our code is available at \url{https://github.com/lnsmith54/cfl}.
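The abstract does not spell out the loss formula, so the sketch below is only one plausible reading: a standard focal loss whose focusing parameter gamma follows the easy-hard-easy schedule sketched above (cross-entropy-like at the start and end, focal in the middle). The actual cyclical focal loss in the paper may combine the terms differently.

```python
import math
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float) -> torch.Tensor:
    """Standard focal loss; gamma = 0 recovers the cross-entropy softmax loss."""
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)   # log-prob of the true class
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt
    return loss.mean()

def scheduled_gamma(epoch: int, total_epochs: int, gamma_max: float = 2.0) -> float:
    """Assumed schedule: cross-entropy-like at the start and end, focal in the middle epochs."""
    return gamma_max * math.sin(math.pi * epoch / max(1, total_epochs - 1))

# Example usage with random data (4 classes, batch of 8).
logits = torch.randn(8, 4)
targets = torch.randint(0, 4, (8,))
loss = focal_loss(logits, targets, gamma=scheduled_gamma(epoch=30, total_epochs=100))
```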
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds": linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
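A sketch of the triangular policy described here (published implementations may differ in bookkeeping details): the learning rate ramps linearly from a lower bound to an upper bound and back over each cycle.

```python
def triangular_lr(iteration: int, step_size: int, base_lr: float, max_lr: float) -> float:
    """Triangular cyclical learning rate: one full cycle spans 2 * step_size iterations,
    rising linearly from base_lr to max_lr and then falling back to base_lr."""
    cycle = iteration // (2 * step_size)
    x = abs(iteration / step_size - 2 * cycle - 1)      # distance from the cycle midpoint, in [0, 1]
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

# The "reasonable bounds" estimate from the paper: run a few epochs while increasing the learning
# rate linearly and pick the bounds from where accuracy starts improving / begins to degrade.
lrs = [triangular_lr(i, step_size=2000, base_lr=1e-4, max_lr=1e-2) for i in range(8000)]
```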
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance. This domain has seen fast progress recently, at the cost of requiring more complex methods. In this paper we propose FixMatch, an algorithm that is a significant simplification of existing SSL methods. FixMatch first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with only 40 labels, i.e., just 4 labels per class. We carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch's success. The code is available at https://github.com/google-research/fixmatch.
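A minimal sketch of the unlabeled-data objective described in this abstract; `model`, `weak_batch`, and `strong_batch` are placeholders, and 0.95 is a commonly used confidence threshold rather than a value taken from this text.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, weak_batch, strong_batch, threshold: float = 0.95):
    """Pseudo-label the weakly augmented view, keep only high-confidence predictions,
    and train the strongly augmented view of the same images to match them."""
    with torch.no_grad():
        probs = F.softmax(model(weak_batch), dim=-1)
        confidence, pseudo_labels = probs.max(dim=-1)
        mask = (confidence >= threshold).float()        # discard low-confidence pseudo-labels
    logits_strong = model(strong_batch)
    per_example = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (per_example * mask).mean()
```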
This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning (S4L) and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that S4L and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels.
Image classification with small datasets has been an active research area in the recent past. However, as research in this scope is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
Re-initializing a neural network during training has been observed to improve generalization in recent works. Yet it is neither widely adopted in deep learning practice nor is it often used in state-of-the-art training protocols. This raises the question of when re-initialization works, and whether it should be used together with regularization techniques such as data augmentation, weight decay, and learning rate schedules. In this work, we conduct an extensive empirical comparison of standard training with a selection of re-initialization methods to answer this question, training over 15,000 models on a variety of image classification benchmarks. We first establish that such methods are consistently beneficial for generalization in the absence of any other regularization. However, when deployed alongside other carefully tuned regularization techniques, re-initialization methods offer little to no added benefit for generalization, although optimal generalization performance becomes less sensitive to the choice of learning rate and weight decay hyper-parameters. To investigate the impact of re-initialization methods on noisy data, we also consider learning under label noise. Surprisingly, in this case re-initialization significantly improves upon standard training, even in the presence of other carefully tuned regularization techniques.
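The abstract does not name the specific re-initialization schemes compared, so the following is only an illustrative sketch of the general idea: periodically resetting the parameters of the later layers while keeping the earlier layers.

```python
import torch.nn as nn

def reinitialize_last_layers(model: nn.Sequential, num_layers: int) -> None:
    """Illustrative re-initialization: reset the parameters of the last `num_layers` modules
    that define their own `reset_parameters` (e.g., Linear and Conv2d layers)."""
    resettable = [m for m in model if hasattr(m, "reset_parameters")]
    for module in resettable[-num_layers:]:
        module.reset_parameters()

# Example: a small classifier whose final layer is re-initialized partway through training.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
reinitialize_last_layers(model, num_layers=1)
```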
Knowledge distillation is a popular technique for training a small student network to emulate a larger teacher model, such as an ensemble of networks. We show that while knowledge distillation can improve student generalization, it often does not work as commonly understood: there frequently remains a surprisingly large discrepancy between the predictive distributions of the teacher and the student, even in cases when the student has the capacity to perfectly match the teacher. We identify difficulties in optimization as a key reason why the student is unable to match the teacher. We also show how the details of the dataset used for distillation play a role in how closely the student matches the teacher, and that matching the teacher more closely, paradoxically, does not always lead to better student generalization.
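For context, the teacher-matching objective at issue is typically the Hinton-style soft-target loss sketched below; the temperature value is an arbitrary illustrative choice. The paper's point is that students often fail to drive this kind of discrepancy to zero even when they have the capacity to do so.

```python
import torch
import torch.nn.functional as F

def soft_target_distillation_loss(student_logits, teacher_logits, temperature: float = 4.0):
    """Hinton-style knowledge distillation term: KL divergence between the teacher's and the
    student's temperature-softened predictive distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # The temperature**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```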
This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100× fewer labels.
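A compact sketch of the normalized-temperature cross-entropy (NT-Xent) contrastive loss underlying these results; the projection head, the augmentation pipeline, and the temperature value are omitted or assumed here.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent loss over a batch: z1 and z2 hold the projected embeddings of two augmented
    views of the same N images; each embedding's positive is its counterpart view."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)            # (2N, d), unit length
    sim = z @ z.t() / temperature                                  # pairwise similarities
    sim.fill_diagonal_(float("-inf"))                              # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```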
Jitendra Malik once said, "Supervision is the opium of the AI researcher". Most deep learning techniques heavily rely on extreme amounts of human labels to work effectively. In today's world, the rate of data creation greatly surpasses the rate of data annotation. Full reliance on human annotations is just a temporary means to solve current closed problems in AI. In reality, only a tiny fraction of data is annotated. Annotation Efficient Learning (AEL) is a study of algorithms to train models effectively with fewer annotations. To thrive in AEL environments, we need deep learning techniques that rely less on manual annotations (e.g., image, bounding-box, and per-pixel labels), but learn useful information from unlabeled data. In this thesis, we explore five different techniques for handling AEL.
Differential Privacy (DP) provides a formal privacy guarantee preventing adversaries with access to a machine learning model from extracting information about individual training points. The most popular DP training method, Differentially Private Stochastic Gradient Descent (DP-SGD), realizes this protection by injecting noise during training. However, previous works have found that DP-SGD often leads to a significant degradation in performance on standard image classification benchmarks. Furthermore, some authors have postulated that DP-SGD inherently performs poorly on large models, since the norm of the noise required to preserve privacy is proportional to the model dimension. In contrast, we demonstrate that DP-SGD on over-parameterized models can perform significantly better than previously thought. Combining careful hyper-parameter tuning with simple techniques to ensure signal propagation and improve the rate of convergence, we obtain a new SOTA without extra data on CIFAR-10 of 81.4% under (8, 10^{-5})-DP using a 40-layer Wide-ResNet, improving over the previous SOTA of 71.7%. When fine-tuning a pre-trained NFNet-F3, we achieve 83.8% top-1 accuracy on ImageNet under (0.5, 8 \cdot 10^{-7})-DP. Additionally, we achieve 86.7% top-1 accuracy under (8, 8 \cdot 10^{-7})-DP, which is only 4.3% below the current non-private SOTA for this task. We believe our results are a significant step towards closing the accuracy gap between private and non-private image classification.
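For reference, DP-SGD's noise-injection mechanism is roughly the per-step computation sketched below. This is a simplified sketch: it ignores privacy accounting and micro-batching, and clips per tensor, whereas the full algorithm clips the joint gradient norm over all parameters.

```python
import torch

def dp_sgd_gradient(per_example_grads, clip_norm: float, noise_multiplier: float) -> torch.Tensor:
    """Simplified DP-SGD gradient for one parameter tensor: clip every example's gradient to
    `clip_norm`, sum, add Gaussian noise scaled by noise_multiplier * clip_norm, then average."""
    clipped = []
    for g in per_example_grads:                         # one gradient tensor per example
        scale = min(1.0, clip_norm / (g.norm().item() + 1e-12))
        clipped.append(g * scale)
    total = torch.stack(clipped).sum(dim=0)
    noise = torch.randn_like(total) * noise_multiplier * clip_norm
    return (total + noise) / len(per_example_grads)
```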
Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. In this work, we unify the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, that guesses low-entropy labels for data-augmented unlabeled examples and mixes labeled and unlabeled data using MixUp. MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts. For example, on CIFAR-10 with 250 labels, we reduce error rate by a factor of 4 (from 38% to 11%) and by a factor of 2 on STL-10. We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success. We release all code used in our experiments.
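Two of the ingredients named here can be sketched compactly: entropy-lowering "sharpening" of guessed labels and MixUp between labeled and unlabeled examples. The temperature and Beta-distribution parameter below are commonly cited defaults, treated here as assumptions.

```python
import numpy as np
import torch

def sharpen(probs: torch.Tensor, T: float = 0.5) -> torch.Tensor:
    """Lower the entropy of a guessed label distribution by temperature scaling and renormalizing."""
    p = probs ** (1.0 / T)
    return p / p.sum(dim=-1, keepdim=True)

def mixup(x1, y1, x2, y2, alpha: float = 0.75):
    """MixUp of two (input, label-distribution) pairs; lam is kept >= 0.5 so the mix stays
    closer to the first argument, as MixMatch does when mixing labeled with unlabeled data."""
    lam = float(np.random.beta(alpha, alpha))
    lam = max(lam, 1.0 - lam)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2
```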
Semi-supervised learning methods have become an active research area for combating the challenge of obtaining large amounts of annotated data. Towards the goal of improving the performance of semi-supervised learning methods, we propose a novel framework, HierMatch, a semi-supervised approach that exploits hierarchical information to reduce labeling costs while performing as well as vanilla semi-supervised learning methods. Hierarchical information is often available as prior knowledge in the form of coarse labels (e.g., woodpecker) for images with fine-grained labels (e.g., downy woodpecker or golden-fronted woodpecker). However, the use of supervision from coarse-class labels to improve semi-supervised techniques has not been explored. In the absence of fine-grained labels, HierMatch exploits the label hierarchy and uses coarse-class labels as a weak supervisory signal. Additionally, HierMatch is a generic method for improving any semi-supervised learning framework, which we demonstrate using our results on the recent state-of-the-art techniques MixMatch and FixMatch. We evaluate the efficacy of HierMatch on two benchmark datasets, namely CIFAR-100 and NABirds. HierMatch can reduce the usage of fine-grained labels by 50% on CIFAR-100 with only a marginal drop of 0.59% in top-1 accuracy compared to MixMatch. Code: https://github.com/07agarg/hiermatch.
Semi-supervised learning lately has shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically that produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning. By substituting simple noising operations with advanced data augmentation methods such as RandAugment and back-translation, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 5.43 with only 250 examples. Our method also combines well with transfer learning, e.g., when fine-tuning from BERT, and yields improvements in the high-data regime, such as ImageNet, whether there is only 10% labeled data or a full labeled set with 1.3M extra unlabeled examples.
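A minimal sketch of the consistency-training term described here, with `strong_augment` standing in for an advanced augmentation policy such as RandAugment or back-translation; the function name and the KL form of the loss are assumptions.

```python
import torch
import torch.nn.functional as F

def uda_consistency_loss(model, unlabeled_batch, strong_augment):
    """Consistency term: the prediction on an unlabeled example (held fixed, no gradient) and the
    prediction on its strongly augmented version are pushed to agree via a KL divergence."""
    with torch.no_grad():
        target = F.softmax(model(unlabeled_batch), dim=-1)
    log_pred = F.log_softmax(model(strong_augment(unlabeled_batch)), dim=-1)
    return F.kl_div(log_pred, target, reduction="batchmean")
```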
The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%.
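The core of Mean Teacher is a one-line weight average applied after every training step; a sketch follows, with the decay value taken as an assumed typical setting.

```python
import torch

@torch.no_grad()
def update_mean_teacher(student, teacher, ema_decay: float = 0.999) -> None:
    """Mean Teacher update: move every teacher weight toward the corresponding student weight
    with an exponential moving average instead of averaging label predictions."""
    for s_param, t_param in zip(student.parameters(), teacher.parameters()):
        t_param.mul_(ema_decay).add_(s_param, alpha=1.0 - ema_decay)
```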
How to train an ideal teacher for knowledge distillation remains an open problem. It is widely observed that a teacher that minimizes the empirical risk does not necessarily yield the best-performing student, suggesting a fundamental discrepancy between the common practice in teacher network training and the distillation objective. To fill this gap, we propose SoTeacher, a novel student-oriented teacher network training framework, inspired by recent findings that the student's performance hinges on the teacher's ability to approximate the true label distribution of the training samples. Theoretically, we establish that (1) the empirical risk minimizer with a proper scoring rule can provably approximate the true label distribution of the training data if the hypothesis function is locally Lipschitz continuous around the training samples; and (2) when data augmentation is employed during training, an additional constraint is required, namely that the minimizer must produce consistent predictions across augmented views of the same training input. In light of our theory, SoTeacher renovates empirical risk minimization by incorporating Lipschitz regularization and consistency regularization. It is worth mentioning that SoTeacher is applicable to almost all teacher-student architecture pairs, requires no prior knowledge of the student at teacher-training time, and incurs almost no computational overhead. Experiments on two benchmark datasets confirm that SoTeacher can significantly and consistently improve student performance across various knowledge distillation algorithms and teacher-student pairs.
Semi-supervised learning (SSL) is one of the most promising paradigms for circumventing the expensive labeling cost of building high-performance models. Most existing SSL methods conventionally assume that the labeled and unlabeled data are drawn from the same (class) distribution. In practice, however, the unlabeled data may include out-of-class samples: those that cannot be assigned a one-hot encoded label from the closed set of classes present in the labeled data, i.e., the unlabeled data is open-set. In this paper, we introduce OpenCoS, a method for handling this realistic semi-supervised learning scenario based on a recent framework for self-supervised visual representation learning. Specifically, we first observe that out-of-class samples in the open-set unlabeled dataset can be effectively identified via self-supervised contrastive learning. OpenCoS then utilizes this information to overcome the failure modes of existing state-of-the-art semi-supervised methods, by assigning one-hot pseudo-labels and soft labels to the identified in-class and out-of-class unlabeled data, respectively. Our extensive experimental results demonstrate the effectiveness of OpenCoS in fixing state-of-the-art semi-supervised methods so that they are suitable for diverse scenarios involving open-set unlabeled data.
What is the best way to exploit extra data, be it unlabeled data from the same task or labeled data from a related task, to learn a given task? This paper formalizes the question using the theory of reference priors. Reference priors are objective, uninformative Bayesian priors that maximize the mutual information between the task and the weights of the model. Such priors enable the task to maximally affect the Bayesian posterior; for example, reference priors depend upon the number of samples available for learning the task, and for very small sample sizes the prior puts more probability mass on low-complexity models in the hypothesis space. This paper presents the first demonstration of reference priors for medium-scale deep networks and image-based data. We develop generalizations of reference priors and demonstrate applications to two problems. First, by using unlabeled data to compute the reference prior, we develop new Bayesian semi-supervised learning methods that remain effective even with very few samples per class. Second, by using labeled data from a source task to compute the reference prior, we develop a new transfer learning method that allows data from the target task to maximally affect the Bayesian posterior. Empirical validation of these methods is performed on image classification datasets. Code is available at https://github.com/grasp-lyrl/deep_reference_priors.
We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view. At the same time, we update the target network with a slow-moving average of the online network. While state-of-the-art methods rely on negative pairs, BYOL achieves a new state of the art without them. BYOL reaches 74.3% top-1 classification accuracy on ImageNet using a linear evaluation with a ResNet-50 architecture and 79.6% with a larger ResNet. We show that BYOL performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks. Our implementation and pretrained models are given on GitHub.
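The regression objective between the two networks can be sketched as follows; the predictor/projector architectures and the slow-moving-average update of the target network are omitted, and in practice the loss is symmetrized over the two augmented views.

```python
import torch
import torch.nn.functional as F

def byol_pair_loss(online_prediction: torch.Tensor, target_projection: torch.Tensor) -> torch.Tensor:
    """BYOL-style regression loss: the online network's prediction for one view is pulled toward
    the target network's (gradient-free) projection of the other view; equals 2 - 2 * cosine similarity."""
    p = F.normalize(online_prediction, dim=-1)
    z = F.normalize(target_projection.detach(), dim=-1)
    return (2.0 - 2.0 * (p * z).sum(dim=-1)).mean()
```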
Knowledge distillation (KD) is an effective tool for compressing deep classification models for edge devices. However, the performance of KD is affected by the large capacity gap between the teacher and student networks. Recent methods have resorted to a multiple teacher assistant (TA) setting for KD, which sequentially decreases the size of the teacher model to relatively bridge the size gap between these models. This paper proposes a new technique called Curriculum Expert Selection for Knowledge Distillation to efficiently enhance the learning of a compact student under the capacity-gap problem. The technique is built on the hypothesis that a student network should be guided gradually using a stratified teaching curriculum, since it learns easy (hard) data samples better from a lower (higher) capacity teacher network. Specifically, our method is a gradual TA-based KD technique that selects a single teacher per input image, based on a curriculum driven by the difficulty of classifying the image. In this work, we empirically verify our hypothesis and conduct rigorous experiments on the CIFAR-10, CIFAR-100, CINIC-10, and ImageNet datasets, showing improved accuracy on VGG-like models, ResNets, and WideResNet architectures.