This paper is concerned with ranking many pre-trained deep neural networks (DNNs), called checkpoints, for transfer learning to a downstream task. Thanks to the broad use of DNNs, we may easily collect hundreds of checkpoints from various sources. Which of them transfers best to our downstream task of interest? To answer this question thoroughly, we establish a neural checkpoint ranking benchmark (NeuCRaB) and study some intuitive ranking measures. These measures are generic, applying to checkpoints of different output types without requiring knowledge of how or on which dataset the checkpoints were pre-trained. They also incur low computational cost, making them practically meaningful. Our results suggest that the linear separability of the features extracted by a checkpoint is a strong indicator of transferability. We also arrive at a new ranking measure, NLEEP, which gives the best performance in our experiments.
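As an illustration of the "linear separability" finding, the sketch below (our own, not the paper's exact NLEEP measure, which additionally fits a Gaussian mixture to the features) ranks checkpoints by the cross-validated accuracy of a linear probe fitted to frozen features; the function and variable names are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def linear_separability_score(features: np.ndarray, labels: np.ndarray) -> float:
    """Cross-validated accuracy of a linear probe on frozen checkpoint features.

    Higher accuracy means the features are more linearly separable on the
    target task, which the paper reports as a strong indicator of transferability.
    """
    probe = LogisticRegression(max_iter=1000)
    return float(cross_val_score(probe, features, labels, cv=5).mean())

# Ranking a pool of checkpoints then reduces to extracting features once per
# checkpoint and sorting by this score, with no fine-tuning involved:
# scores = {name: linear_separability_score(feats[name], y) for name in feats}
```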
This paper addresses the important problem of ranking pre-trained deep neural networks and screening the most transferable ones for downstream tasks. It is challenging because the ground-truth model ranking for each task can only be generated by fine-tuning the pre-trained models on the target dataset, which is brute-force and computationally expensive. Recent advanced methods proposed several lightweight transferability metrics to predict the fine-tuning results. However, these approaches only capture static representations and neglect the fine-tuning dynamics. To this end, this paper proposes a new transferability metric, called \textbf{S}elf-challenging \textbf{F}isher \textbf{D}iscriminant \textbf{A}nalysis (\textbf{SFDA}), which has appealing benefits that existing works do not have. First, SFDA can embed the static features into a Fisher space and refine them for better separability between classes. Second, SFDA uses a self-challenging mechanism to encourage different pre-trained models to differentiate on hard examples. Third, SFDA can easily select multiple pre-trained models for model ensembling. Extensive experiments on $33$ pre-trained models over $11$ downstream tasks show that SFDA is efficient, effective, and robust in measuring the transferability of pre-trained models. For instance, compared with the state-of-the-art method NLEEP, SFDA demonstrates an average of $59.1\%$ gain while bringing a $22.5\times$ speedup in wall-clock time. The code will be available at \url{https://github.com/tencentarc/sfda}.
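The Fisher-space idea can be sketched with ordinary linear discriminant analysis: project the frozen features into the discriminant space and score how confidently the true class is recovered there. This is only a rough sketch of the separability component and omits SFDA's self-challenging mechanism; the names below are ours.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fisher_separability_score(features: np.ndarray, labels: np.ndarray) -> float:
    """Fit a shrinkage-regularized Fisher/LDA model on frozen target features
    and return the mean posterior probability of the true class, used as a
    cheap proxy for post-fine-tuning accuracy."""
    lda = LinearDiscriminantAnalysis(solver="eigen", shrinkage="auto")
    lda.fit(features, labels)
    posteriors = lda.predict_proba(features)
    col = {c: i for i, c in enumerate(lda.classes_)}
    true_cols = np.array([col[y] for y in labels])
    return float(posteriors[np.arange(len(labels)), true_cols].mean())
```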
Self-supervised visual representation learning has seen huge progress recently, but no large scale evaluation has compared the many models now available. We evaluate the transfer performance of 13 top self-supervised models on 40 downstream tasks, including many-shot and few-shot recognition, object detection, and dense prediction. We compare their performance to a supervised baseline and show that on most tasks the best self-supervised models outperform supervision, confirming the recently observed trend in the literature. We find ImageNet Top-1 accuracy to be highly correlated with transfer to many-shot recognition, but increasingly less so for few-shot, object detection and dense prediction. No single self-supervised method dominates overall, suggesting that universal pre-training is still unsolved. Our analysis of features suggests that top self-supervised learners fail to preserve colour information as well as supervised alternatives, but tend to induce better classifier calibration, and less attentive overfitting than supervised learners.
In recent years, with an increasing abundance of pre-trained models, the problem of selecting the best pre-trained checkpoint for a particular downstream classification task has been gaining attention. Although several methods have recently been proposed to tackle the selection problem (e.g., LEEP, H-score), these methods resort to applying heuristics that are not well motivated by learning theory. In this paper we present PACTran, a theoretically grounded family of metrics for pre-trained model selection and transferability measurement. We first show how to derive PACTran metrics from the optimal PAC-Bayesian bound under the transfer learning setting. We then empirically evaluate three metric instantiations of PACTran on a number of vision tasks (VTAB) as well as a language-and-vision (OKVQA) task. An analysis of the results shows that PACTran is a more consistent and effective transferability measure compared to existing selection methods.
Transferability estimation is an essential tool for selecting a pre-trained model, and the layers in it, for transfer learning, in order to maximize performance on the target task and prevent negative transfer. Existing estimation algorithms either require intensive training on the target task or have difficulty evaluating the transferability between layers. To this end, we propose a simple, efficient, and effective transferability measure named TransRate. With a single pass over the examples of the target task, TransRate measures transferability as the mutual information between the features of the target examples extracted by the pre-trained model and their labels. We overcome the challenge of efficient mutual information estimation by resorting to coding rate, an effective alternative to entropy. From the perspective of feature representation, the resulting TransRate evaluates both completeness (whether the features contain sufficient information for the target task) and compactness (whether the features of each class are compact enough for good generalization). Theoretically, we analyze the close connection between TransRate and the performance after transfer learning. Despite its extraordinary simplicity in 10 lines of code, TransRate performs remarkably well in extensive evaluations on 32 pre-trained models and 16 downstream tasks.
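A sketch of the coding-rate construction described above is given below; the exact normalization and distortion parameter may differ from the paper's implementation, so treat it as illustrative rather than authoritative.

```python
import numpy as np

def coding_rate(Z: np.ndarray, eps: float = 1e-4) -> float:
    """Coding rate of features Z (n samples x d dims), a tractable surrogate
    for entropy: 0.5 * logdet(I + d/(n*eps) * Z^T Z)."""
    n, d = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps)) * Z.T @ Z)
    return 0.5 * logdet

def transrate_score(Z: np.ndarray, y: np.ndarray, eps: float = 1e-4) -> float:
    """Mutual-information-style score: coding rate of all features minus the
    class-conditional rate (per-class rates weighted by class frequency)."""
    Z = Z - Z.mean(axis=0)                      # center the features
    marginal = coding_rate(Z, eps)
    conditional = sum(
        (np.sum(y == c) / len(y)) * coding_rate(Z[y == c], eps)
        for c in np.unique(y)
    )
    return marginal - conditional
```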
Model hubs with many pre-trained models (PTMs) have become a cornerstone of deep learning. Although built at a high cost, they remain \emph{under-exploited}: practitioners usually pick one PTM from the provided model hub by popularity and then fine-tune that PTM to solve the target task. This naive but common practice poses two obstacles to fully exploiting pre-trained model hubs: (1) PTM selection by popularity has no optimality guarantee; (2) only one PTM is used while the remaining PTMs are ignored. Ideally, to maximally exploit pre-trained model hubs, one would need to try all combinations of PTMs and extensively fine-tune each combination, which incurs exponentially many combinations and an unaffordable computational budget. In this paper, we propose a new paradigm of ranking and tuning pre-trained models: (1) LogME, proposed in our conference paper~\citep{you_logme:_2021}, estimates the maximum value of label evidence given features extracted by pre-trained models, and can rank all the PTMs in a model hub for various types of PTMs and tasks \emph{before fine-tuning}. (2) The best-ranked PTM can either be fine-tuned and deployed if we have no preference for the model's architecture, or the target PTM can be tuned by the top-K ranked PTMs via the proposed B-Tuning algorithm. The ranking part is based on the conference paper, and we complete its theoretical analysis in this paper, including the convergence proof of the heuristic evidence maximization procedure and the influence of feature dimension. The tuning part introduces a novel Bayesian Tuning (B-Tuning) method for tuning multiple PTMs, which surpasses dedicated methods designed for tuning homogeneous PTMs and sets a new state of the art for tuning heterogeneous PTMs. The new paradigm of exploiting PTM hubs may be of interest to a large audience across the machine learning community.
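For intuition, here is a deliberately naive sketch of the evidence that LogME maximizes: the marginal likelihood of one-vs-rest targets under Bayesian linear regression on the frozen features. LogME itself maximizes this evidence with a fast fixed-point algorithm rather than the grid search used below; the function names are assumptions.

```python
import numpy as np

def log_evidence(F: np.ndarray, t: np.ndarray, alpha: float, beta: float) -> float:
    """Log marginal likelihood of targets t under Bayesian linear regression on
    features F (n x d), with prior precision alpha and noise precision beta."""
    n, d = F.shape
    A = alpha * np.eye(d) + beta * F.T @ F            # posterior precision
    m = beta * np.linalg.solve(A, F.T @ t)            # posterior mean of the weights
    _, logdet_A = np.linalg.slogdet(A)
    return (0.5 * d * np.log(alpha) + 0.5 * n * np.log(beta)
            - 0.5 * beta * np.sum((t - F @ m) ** 2)
            - 0.5 * alpha * np.sum(m ** 2)
            - 0.5 * logdet_A - 0.5 * n * np.log(2 * np.pi))

def logme_style_score(F: np.ndarray, y: np.ndarray) -> float:
    """Average (per class, per sample) of the best evidence over a coarse grid
    of (alpha, beta); higher values should indicate a more transferable model."""
    grid = np.logspace(-3, 3, 13)
    scores = []
    for c in np.unique(y):
        t = (y == c).astype(float)                    # one-vs-rest target
        best = max(log_evidence(F, t, a, b) for a in grid for b in grid)
        scores.append(best / len(y))
    return float(np.mean(scores))
```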
We address the problem of ensemble selection in transfer learning: given a large pool of source models, we want to select an ensemble of models which, after fine-tuning on the target training set, yields the best performance on the target test set. Since fine-tuning all possible ensembles is computationally prohibitive, we aim to predict performance on the target dataset using computationally efficient transferability metrics. We propose several new transferability metrics designed for this task and evaluate them in a challenging and realistic transfer learning setup for semantic segmentation: we create a large and diverse pool of source models by considering a wide variety of datasets covering a broad range of image domains, two different architectures, and two pre-training schemes. Given this pool, we then automatically select a subset that forms a good ensemble on a given target dataset. We compare the ensembles selected by our method to two baselines that select a single source model, either (1) from the same pool as our method or (2) from a pool of large source models, each with capacity comparable to an ensemble. Averaged over 17 target datasets, we outperform these baselines by 6.0% and 2.5% relative, respectively.
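A generic greedy scheme of this kind can be written in a few lines; this sketch is not necessarily the paper's selection strategy and simply illustrates how a cheap transferability score makes ensemble selection tractable. `score_fn` is an assumed callable that scores a candidate ensemble on the target data without any fine-tuning.

```python
def greedy_ensemble_selection(pool, score_fn, ensemble_size=4):
    """Start from an empty ensemble and repeatedly add the source model whose
    inclusion maximizes the transferability score of the resulting ensemble."""
    selected, remaining = [], list(pool)
    while remaining and len(selected) < ensemble_size:
        best = max(remaining, key=lambda m: score_fn(selected + [m]))
        selected.append(best)
        remaining.remove(best)
    return selected
```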
Deep transfer learning has been widely used for knowledge transmission in recent years. The standard approach of pre-training and subsequently fine-tuning, or linear probing, has shown itself to be effective in many down-stream tasks. Therefore, a challenging and ongoing question arises: how to quantify cross-task transferability that is compatible with transferred results while keeping self-consistency? Existing transferability metrics are estimated on the particular model by conversing source and target tasks. They must be recalculated with all existing source tasks whenever a novel unknown target task is encountered, which is extremely computationally expensive. In this work, we highlight what properties should be satisfied and evaluate existing metrics in light of these characteristics. Building upon this, we propose Principal Gradient Expectation (PGE), a simple yet effective method for assessing transferability across tasks. Specifically, we use a restart scheme to calculate every batch gradient over each weight unit more than once, and then we take the average of all the gradients to get the expectation. Thus, the transferability between the source and target task is estimated by computing the distance of normalized principal gradients. Extensive experiments show that the proposed transferability metric is more stable, reliable and efficient than SOTA methods.
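Based only on the description above, a rough PyTorch sketch of the gradient-expectation idea might look as follows; the restart scheme, the normalization, and the distance are all simplified assumptions rather than the authors' exact procedure.

```python
import torch

def principal_gradient(model, loss_fn, loader, restarts=3):
    """Average the full parameter gradient over several passes ("restarts") of
    the data to approximate the gradient expectation for one task."""
    total, count = None, 0
    for _ in range(restarts):
        for inputs, targets in loader:
            model.zero_grad()
            loss_fn(model(inputs), targets).backward()
            g = torch.cat([p.grad.flatten() for p in model.parameters()
                           if p.grad is not None]).detach()
            total = g if total is None else total + g
            count += 1
    return total / count

def pge_style_transferability(model, loss_fn, source_loader, target_loader):
    """Negative distance between the normalized gradient expectations of the
    source and target tasks: a smaller gap suggests higher transferability."""
    gs = principal_gradient(model, loss_fn, source_loader)
    gt = principal_gradient(model, loss_fn, target_loader)
    gs, gt = gs / gs.norm(), gt / gt.norm()
    return -(gs - gt).norm().item()
```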
Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes: from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19-task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.
Human observers can learn to recognize new categories of images from a handful of examples, yet doing so with artificial ones remains an open challenge. We hypothesize that data-efficient recognition is enabled by representations which make the variability in natural signals more predictable. We therefore revisit and improve Contrastive Predictive Coding, an unsupervised objective for learning such representations. This new implementation produces features which support state-of-the-art linear classification accuracy on the ImageNet dataset. When used as input for non-linear classification with deep neural networks, this representation allows us to use 2-5× fewer labels than classifiers trained directly on image pixels. Finally, this unsupervised representation substantially improves transfer learning to object detection on the PASCAL VOC dataset, surpassing fully supervised pre-trained ImageNet classifiers.
Jitendra Malik once said, "Supervision is the opium of the AI researcher". Most deep learning techniques heavily rely on extreme amounts of human labels to work effectively. In today's world, the rate of data creation greatly surpasses the rate of data annotation. Full reliance on human annotations is just a temporary means to solve current closed problems in AI. In reality, only a tiny fraction of data is annotated. Annotation Efficient Learning (AEL) is a study of algorithms to train models effectively with fewer annotations. To thrive in AEL environments, we need deep learning techniques that rely less on manual annotations (e.g., image, bounding-box, and per-pixel labels), but learn useful information from unlabeled data. In this thesis, we explore five different techniques for handling AEL.
Computational pathology can lead to saving human lives, but models are annotation hungry and pathology images are notoriously expensive to annotate. Self-supervised learning has shown to be an effective method for utilizing unlabeled data, and its application to pathology could greatly benefit its downstream tasks. Yet, there are no principled studies that compare SSL methods and discuss how to adapt them for pathology. To address this need, we execute the largest-scale study of SSL pre-training on pathology image data, to date. Our study is conducted using 4 representative SSL methods on diverse downstream tasks. We establish that large-scale domain-aligned pre-training in pathology consistently outperforms ImageNet pre-training in standard SSL settings such as linear and fine-tuning evaluations, as well as in low-label regimes. Moreover, we propose a set of domain-specific techniques that we experimentally show leads to a performance boost. Lastly, for the first time, we apply SSL to the challenging task of nuclei instance segmentation and show large and consistent performance improvements under diverse settings.
Pre-training general-purpose visual features with convolutional neural networks without relying on annotations is a challenging and important task. Most recent efforts in unsupervised feature learning have focused on either small or highly curated datasets like ImageNet, whereas using non-curated raw datasets was found to decrease the feature quality when evaluated on a transfer task. Our goal is to bridge the performance gap between unsupervised methods trained on curated data, which are costly to obtain, and massive raw datasets that are easily available. To that effect, we propose a new unsupervised approach which leverages self-supervision and clustering to capture complementary statistics from large-scale data. We validate our approach on 96 million images from YFCC100M [42], achieving state-of-the-art results among unsupervised methods on standard benchmarks, which confirms the potential of unsupervised learning when only non-curated raw data are available. We also show that pre-training a supervised VGG-16 with our method achieves 74.9% top-1 classification accuracy on the validation set of ImageNet, which is an improvement of +0.8% over the same network trained from scratch. Our code is available at https://github.com/facebookresearch/DeeperCluster.
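The clustering half of such approaches is easy to sketch: cluster frozen (or periodically re-extracted) features and use the cluster assignments as pseudo-labels for a classification head. The snippet below is only a schematic of that clustering component under our own naming, not the full method, which pairs it with a self-supervision signal.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_pseudo_labels(features: np.ndarray, n_clusters: int = 1000) -> np.ndarray:
    """L2-normalize frozen features, cluster them with k-means, and return the
    cluster ids to be used as pseudo-labels for a classification head."""
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    kmeans = KMeans(n_clusters=n_clusters, n_init=10)
    return kmeans.fit_predict(features)
```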
Contrastive learning between multiple views of the data has recently achieved state of the art performance in the field of self-supervised representation learning. Despite its success, the influence of different view choices has been less studied. In this paper, we use theoretical and empirical analysis to better understand the importance of view selection, and argue that we should reduce the mutual information (MI) between views while keeping task-relevant information intact. To verify this hypothesis, we devise unsupervised and semi-supervised frameworks that learn effective views by aiming to reduce their MI. We also consider data augmentation as a way to reduce MI, and show that increasing data augmentation indeed leads to decreasing MI and improves downstream classification accuracy. As a byproduct, we achieve a new state-of-the-art accuracy on unsupervised pre-training for ImageNet classification (73% top-1 linear readout with a ResNet-50).
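The contrastive objective underlying this line of work is the InfoNCE loss between embeddings of two views of the same image; minimizing it maximizes a lower bound on the mutual information shared by the two view embeddings, while the abstract's argument concerns how much information the views themselves should share. The sketch below is the standard formulation, not the paper's specific framework.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE loss for a batch where z1[i] and z2[i] embed two views of the
    same image; every other pairing in the batch acts as a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature               # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)           # matched pairs are the positives
```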
Combining clustering and representation learning is one of the most promising approaches for unsupervised learning of deep neural networks. However, doing so naively leads to ill posed learning problems with degenerate solutions. In this paper, we propose a novel and principled learning formulation that addresses these issues. The method is obtained by maximizing the information between labels and input data indices. We show that this criterion extends standard cross-entropy minimization to an optimal transport problem, which we solve efficiently for millions of input images and thousands of labels using a fast variant of the Sinkhorn-Knopp algorithm. The resulting method is able to self-label visual data so as to train highly competitive image representations without manual labels. Our method achieves state of the art representation learning performance for AlexNet and ResNet-50 on SVHN, CIFAR-10, CIFAR-100 and ImageNet and yields the first self-supervised AlexNet that outperforms the supervised Pascal VOC detection baseline. Code and models are available.
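The core step, turning classifier scores into a balanced label assignment, can be sketched with a few Sinkhorn-Knopp iterations, following the common matrix-scaling formulation; variable names are ours, and the full method alternates this step with representation learning.

```python
import numpy as np

def sinkhorn_assignments(log_probs: np.ndarray, n_iter: int = 50) -> np.ndarray:
    """Scale an n x k score matrix into a balanced soft assignment (columns of
    Q sum to 1/n, rows to 1/k), which rules out the degenerate solution of
    assigning every sample to a single label."""
    Q = np.exp(log_probs).T                           # k x n
    Q /= Q.sum()
    k, n = Q.shape
    for _ in range(n_iter):
        Q /= Q.sum(axis=1, keepdims=True); Q /= k     # normalize over samples per label
        Q /= Q.sum(axis=0, keepdims=True); Q /= n     # normalize over labels per sample
    return (Q * n).T                                  # each row is a soft label summing to 1
```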
This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning (S4L) and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that S4L and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels.
We investigate methods for combining multiple self-supervised tasks (i.e., supervised tasks where data can be collected without manual labeling) in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks, even via a naïve multi-head architecture, always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
Image classification with small datasets has been an active research area in the recent past. However, as research in this scope is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
Previous work has proposed many new loss functions and regularizers that improve test accuracy on image classification tasks. However, it is unclear whether these loss functions learn better representations for downstream tasks. This paper studies how the choice of training objective affects the transferability of the hidden representations of convolutional neural networks trained on ImageNet. We show that many objectives lead to statistically significant improvements in ImageNet accuracy over vanilla softmax cross-entropy, but the resulting fixed feature extractors transfer substantially worse to downstream tasks, and the choice of loss has little effect when the networks are fully fine-tuned on the new tasks. Using centered kernel alignment to measure the similarity between hidden representations of networks, we find that differences among loss functions are apparent only in the last few layers of the network. We delve deeper into the representations of the penultimate layer and find that different objectives and hyperparameter combinations lead to dramatically different levels of class separation. Representations with higher class separation obtain higher accuracy on the original task, but their features are less useful for downstream tasks. Our results suggest a trade-off between learning invariant features for the original task and features relevant for transfer tasks.
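Linear centered kernel alignment, the similarity measure used in this analysis, has a compact closed form; a minimal implementation (following the standard definition, with names of our choosing) is:

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representations X (n x d1) and Y (n x d2) of the
    same n examples: 1 means identical up to rotation and isotropic scaling,
    values near 0 mean the representations are dissimilar."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    return float(hsic / (np.linalg.norm(X.T @ X, ord="fro")
                         * np.linalg.norm(Y.T @ Y, ord="fro")))
```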
Deep models must learn robust and transferable representations in order to perform well on new domains. While domain transfer methods (e.g., domain adaptation, domain generalization) have been proposed to learn transferable representations across domains, they are typically applied to ResNet backbones pre-trained on ImageNet. Thus, existing works pay little attention to the effects of pre-training on domain transfer tasks. In this paper, we provide a broad study and an in-depth analysis of pre-training for domain adaptation and generalization, covering network architectures, sizes, pre-training losses, and datasets. We observe that simply using a state-of-the-art backbone outperforms existing state-of-the-art domain adaptation baselines and sets new baselines on Office-Home and DomainNet, improving them by 10.7% and 5.5%, respectively. We hope that this work can provide more insights for future domain transfer research.