Does the dominant approach to learning representations (as a side effect of optimizing an expected cost for a single training distribution) remain a good approach when we are dealing with multiple distributions? Our thesis is that such scenarios are better served by representations that are "richer" than those obtained with a single optimization episode. This is supported by a collection of empirical results obtained with an apparently naïve ensembling technique: concatenating the representations obtained from multiple training episodes using the same data, model, algorithm, and hyper-parameters, but different random seeds. These independently trained networks perform similarly. Yet, in a number of scenarios involving new distributions, the concatenated representation performs substantially better than an equivalently sized network trained from scratch. This proves that the representations constructed by multiple training episodes are in fact different. Although their concatenation carries little additional information about the training task under the training distribution, it becomes substantially more informative when tasks or distributions change. Meanwhile, a single training episode is unlikely to yield such a redundant representation, because the optimization process has no reason to accumulate features that do not incrementally improve the training performance.
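The ensembling technique lends itself to a short sketch. Below is a hedged PyTorch illustration of the idea described above, concatenating the representations of networks trained from different random seeds and reading them out with a linear probe; the toy extractors, dimensions, and probe are hypothetical stand-ins, not the authors' code.

```python
# Hypothetical sketch: concatenate the penultimate-layer features of several
# networks trained from different random seeds, then fit a linear probe on
# the concatenated representation for a new task.
import torch
import torch.nn as nn

def concat_features(models: list[nn.Module], x: torch.Tensor) -> torch.Tensor:
    """Stack the representations of independently trained feature extractors."""
    with torch.no_grad():
        feats = [m(x) for m in models]      # each: (batch, d)
    return torch.cat(feats, dim=1)          # (batch, K * d)

# Toy example with K=4 "training episodes" (random weights stand in for
# networks trained to convergence with different seeds).
torch.manual_seed(0)
extractors = [nn.Sequential(nn.Linear(32, 16), nn.ReLU()) for _ in range(4)]
x = torch.randn(8, 32)
z = concat_features(extractors, x)          # (8, 64)
probe = nn.Linear(z.shape[1], 10)           # linear probe on the new task
print(probe(z).shape)                       # torch.Size([8, 10])
```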
There is often a dilemma between ease of optimization and robust out-of-distribution (OOD) generalization. For instance, many OOD methods rely on penalty terms whose optimization is challenging. They are either too strong to optimize reliably or too weak to achieve their goals. We propose to initialize the networks with a rich representation containing a palette of potentially useful features, ready to be used by even simple models. On the one hand, a rich representation provides a good initialization for the optimizer. On the other hand, it also provides an inductive bias that helps OOD generalization. Such a representation is constructed with the Rich Feature Construction (RFC) algorithm, also called the Bonsai algorithm, which consists of a succession of training episodes. During the discovery episodes, we craft a multi-objective optimization criterion and its associated datasets in a manner that prevents the network from using the features constructed in previous iterations. During the synthesis episodes, we use knowledge distillation to force the network to simultaneously represent all the previously discovered features. Initializing the networks with the Bonsai representation consistently helps six OOD methods achieve top performance on the ColoredMNIST benchmark. The same technique substantially outperforms comparable results on the WILDS Camelyon17 task, eliminates the high result variance that plagues other methods, and makes hyperparameter tuning and model selection more reliable.
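As a rough illustration of a synthesis episode, the hedged sketch below distills several frozen "discovered" feature extractors into a single student network by regressing onto their concatenated features; the MSE objective, the projection head, and all names are assumptions made for illustration, not the published RFC/Bonsai implementation.

```python
# Hedged sketch of a synthesis episode: distill the features of frozen
# "discovered" extractors into one student, so a single representation
# carries all previously discovered features. Objective is an assumption.
import torch
import torch.nn as nn

def synthesis_step(student, teachers, head, x, optimizer):
    """One distillation step: student features must predict all teacher features."""
    with torch.no_grad():
        target = torch.cat([t(x) for t in teachers], dim=1)  # frozen teachers
    pred = head(student(x))            # map student features to teacher space
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

teachers = [nn.Sequential(nn.Linear(32, 16), nn.ReLU()) for _ in range(3)]
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
head = nn.Linear(64, 3 * 16)           # projection onto concatenated targets
opt = torch.optim.Adam(list(student.parameters()) + list(head.parameters()), lr=1e-3)
synthesis_step(student, teachers, head, torch.randn(8, 32), opt)
```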
Self-supervised learning is a promising unsupervised learning framework that has achieved success with large floating-point networks. But such networks are not readily deployable to edge devices. To accelerate model deployment and bring the benefit of learned representations to such resource-limited devices for various downstream tasks, we propose a self-supervised learning method for binary networks that uses a moving target network. In particular, we propose to jointly train a randomly initialized classifier, attached to a pre-trained floating-point feature extractor, with a binary network. Moreover, we propose a feature similarity loss, dynamic loss balancing, and a modified multi-stage training scheme to further improve accuracy, and we call our method BURN. Our empirical validation over five downstream tasks using seven datasets shows that BURN outperforms self-supervised baselines for binary networks and sometimes outperforms supervised pre-training.
We demonstrate that self-learning techniques like entropy minimization and pseudo-labeling are simple and effective at improving the performance of a deployed computer vision model under systematic domain shifts. We conduct a wide range of large-scale experiments and show consistent improvements irrespective of the model architecture, the pre-training technique, or the type of distribution shift. At the same time, self-learning is simple to use in practice because it does not require knowledge of or access to the original training data or scheme, is robust to hyperparameter choices, is straightforward to implement, and requires only a few adaptation epochs. This makes self-learning techniques highly attractive for any practitioner who applies machine learning algorithms in the real world. We present state-of-the-art adaptation results on CIFAR10-C (8.5% error), ImageNet-C (22.0% mCE), ImageNet-R (17.4% error) and ImageNet-A (14.8% error), theoretically study the dynamics of self-supervised adaptation methods, and propose a new classification dataset (ImageNet-D) which is challenging even with adaptation.
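A minimal sketch of the two self-learning objectives mentioned above, hard pseudo-labeling and entropy minimization, might look as follows; the tiny linear model and the hyper-parameters are placeholders, not the paper's setup.

```python
# Minimal sketch of self-learning adaptation on unlabeled data from the
# shifted distribution: hard pseudo-labels or entropy minimization.
import torch
import torch.nn.functional as F

def adapt_step(model, x, optimizer, mode="pseudo_label"):
    logits = model(x)
    if mode == "pseudo_label":
        targets = logits.argmax(dim=1).detach()   # hard pseudo-labels
        loss = F.cross_entropy(logits, targets)
    else:                                         # entropy minimization
        probs = logits.softmax(dim=1)
        loss = -(probs * probs.log()).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = torch.nn.Linear(64, 10)                   # stand-in for a deployed model
opt = torch.optim.SGD(model.parameters(), lr=1e-4)
for _ in range(3):                                # a few adaptation steps
    adapt_step(model, torch.randn(32, 64), opt, mode="entropy")
```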
With the ever-growing model size and the limited availability of labeled training data, transfer learning has become an increasingly popular approach in many science and engineering domains. For classification problems, this work delves into the mystery of transfer learning through an intriguing phenomenon termed neural collapse (NC), where the last-layer features and classifiers of learned deep networks satisfy: (i) the within-class variability of the features collapses to zero, and (ii) the between-class feature means are maximally and equally separated. Through the lens of NC, our findings for transfer learning are the following: (i) when pre-training models, preventing intra-class variability collapse (to a certain extent) better preserves the intrinsic structures of the input data and leads to better model transferability; (ii) when fine-tuning models on downstream tasks, obtaining features with more NC on the downstream data results in better test accuracy on the given task. The above results not only demystify many widely used heuristics in model pre-training (e.g., data augmentation, projection head, self-supervised learning), but also lead to more efficient and principled fine-tuning methods on downstream tasks, which we demonstrate through extensive experimental results.
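To make criterion (i) concrete, the hedged sketch below computes a simple within-class variability statistic in the spirit of NC1; the exact metrics used in the neural collapse literature differ (they typically involve the pseudo-inverse of the between-class covariance), so this scatter ratio is only an illustrative proxy.

```python
# Hedged sketch: a within-class variability statistic in the spirit of NC1,
# comparing within-class scatter to between-class scatter of last-layer features.
import torch

def variability_collapse(features: torch.Tensor, labels: torch.Tensor) -> float:
    global_mean = features.mean(dim=0)
    within, between = 0.0, 0.0
    for c in labels.unique():
        fc = features[labels == c]
        mu_c = fc.mean(dim=0)
        within += ((fc - mu_c) ** 2).sum()
        between += len(fc) * ((mu_c - global_mean) ** 2).sum()
    return (within / between).item()   # -> 0 as within-class variability collapses

feats = torch.randn(100, 32)           # stand-in last-layer features
labels = torch.randint(0, 10, (100,))
print(variability_collapse(feats, labels))
```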
Image classification with small datasets has been an active research area in the recent past. However, as research in this scope is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
Despite recent progress made by self-supervised methods in representation learning with residual networks, they still underperform supervised learning on the ImageNet classification benchmark, limiting their applicability in performance-critical settings. Building on prior theoretical insights from Mitrovic et al. (2021), we propose ReLICv2, which combines an explicit invariance loss with a contrastive objective over a varied set of appropriately constructed data views. ReLICv2 achieves 77.1% top-1 classification accuracy on ImageNet using linear evaluation with a ResNet50 architecture and 80.6% with larger ResNet models, outperforming previous state-of-the-art self-supervised approaches by a wide margin. Most notably, ReLICv2 is the first representation learning method to consistently outperform the supervised baseline in a like-for-like comparison across a range of standard ResNet architectures. Finally, we show that despite using ResNet encoders, ReLICv2 is comparable to state-of-the-art self-supervised vision transformers.
We consider unsupervised domain adaptation (UDA), in which labeled data from a source domain (e.g., photographs) and unlabeled data from a target domain (e.g., sketches) are used to learn a classifier for the target domain. Conventional UDA methods (e.g., domain adversarial training) learn domain-invariant features in order to improve generalization to the target domain. In this paper, we show that contrastive pre-training, which learns features on unlabeled source and target data and then fine-tunes on labeled source data, is competitive with strong UDA methods. However, we find that contrastive pre-training does not learn domain-invariant features, diverging from conventional UDA intuition. We prove theoretically that contrastive pre-training can learn features that vary across domains but still generalize to the target domain by disentangling domain and class information. Our results suggest that domain invariance is not necessary for UDA. We empirically validate our theory on benchmark vision datasets.
Most of the recent few-shot learning algorithms are based on transfer learning, where a model is pre-trained using a large amount of source data and the pre-trained model is then updated using a small amount of target data. In transfer-based few-shot learning, sophisticated pre-training methods have been widely studied for universal and improved representations. However, there is little research on how to update the pre-trained model for few-shot learning. In this paper, we compare two popular updating methods, fine-tuning (i.e., updating the entire network) and linear probing (i.e., updating only the linear classifier), considering the distribution shift between the source and target data. We find that fine-tuning does better than linear probing as the number of samples increases, regardless of the distribution shift. Next, we investigate the effectiveness and ineffectiveness of data augmentation when fine-tuning pre-trained models. Our fundamental analyses suggest that the details of how pre-trained models are updated require careful consideration in order to achieve better few-shot performance.
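The two updating methods differ only in which parameters the optimizer sees, as the hedged sketch below illustrates; the backbone and classifier are generic stand-ins for a pre-trained model and a few-shot head.

```python
# Sketch of the two updating strategies: linear probing freezes the feature
# extractor and updates only the classifier; fine-tuning updates everything.
import torch
import torch.nn as nn

def build_optimizer(backbone: nn.Module, classifier: nn.Module,
                    method: str, lr: float = 1e-3) -> torch.optim.Optimizer:
    if method == "linear_probe":
        for p in backbone.parameters():
            p.requires_grad = False                  # freeze pre-trained features
        params = list(classifier.parameters())
    elif method == "fine_tune":
        params = list(backbone.parameters()) + list(classifier.parameters())
    else:
        raise ValueError(method)
    return torch.optim.SGD(params, lr=lr)

backbone = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # stand-in pre-trained model
classifier = nn.Linear(32, 5)                           # head for 5 target classes
opt = build_optimizer(backbone, classifier, "linear_probe")
```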
While the Transformer architecture has become the de facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and that a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
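The core idea, treating an image as a sequence of embedded patches, fits in a few lines; the sketch below shows an illustrative patch-embedding front end with ViT-Base-like sizes assumed, not the reference implementation.

```python
# Illustrative sketch of the ViT front end: split an image into fixed-size
# patches, linearly embed them, prepend a class token, and add position
# embeddings before the sequence enters a standard Transformer encoder.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, img=224, patch=16, dim=768):
        super().__init__()
        n = (img // patch) ** 2
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n + 1, dim))

    def forward(self, x):                                 # x: (B, 3, H, W)
        z = self.proj(x).flatten(2).transpose(1, 2)       # (B, N, dim) patch tokens
        cls = self.cls.expand(z.shape[0], -1, -1)
        return torch.cat([cls, z], dim=1) + self.pos      # (B, N+1, dim)

tokens = PatchEmbed()(torch.randn(2, 3, 224, 224))
print(tokens.shape)                                       # torch.Size([2, 197, 768])
```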
Deep transfer learning has been widely used for knowledge transmission in recent years. The standard approach of pre-training and subsequently fine-tuning, or linear probing, has shown itself to be effective in many downstream tasks. A challenging and ongoing question therefore arises: how can we quantify cross-task transferability in a way that is consistent with transfer results while remaining self-consistent? Existing transferability metrics are estimated on a particular model by relating the source and target tasks. They must be recalculated over all existing source tasks whenever a novel unknown target task is encountered, which is extremely computationally expensive. In this work, we highlight the properties such metrics should satisfy and evaluate existing metrics in light of these characteristics. Building upon this, we propose Principal Gradient Expectation (PGE), a simple yet effective method for assessing transferability across tasks. Specifically, we use a restart scheme to compute every batch gradient over each weight unit more than once, and then take the average of all the gradients to obtain the expectation. The transferability between the source and target task is then estimated by computing the distance between the normalized principal gradients. Extensive experiments show that the proposed transferability metric is more stable, reliable, and efficient than state-of-the-art methods.
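A hedged reading of the abstract suggests something like the sketch below: average per-batch gradients over several random restarts to form each task's principal gradient, then score transferability by the distance between the normalized gradients. The shared-seed restart, the cross-entropy loss, and the negated Euclidean distance are assumptions for illustration, not the paper's specification.

```python
# Hedged sketch of the Principal Gradient Expectation idea as described above.
import torch
import torch.nn as nn

def principal_gradient(make_model, batches, restarts=3):
    grads = []
    for r in range(restarts):
        torch.manual_seed(r)                 # assumed: shared init across tasks
        model = make_model()
        for x, y in batches:
            model.zero_grad()
            nn.functional.cross_entropy(model(x), y).backward()
            grads.append(torch.cat([p.grad.flatten() for p in model.parameters()]))
    return torch.stack(grads).mean(dim=0)    # expectation over restarts and batches

def transferability(make_model, source_batches, target_batches):
    gs = principal_gradient(make_model, source_batches)
    gt = principal_gradient(make_model, target_batches)
    gs, gt = gs / gs.norm(), gt / gt.norm()  # normalized principal gradients
    return -torch.dist(gs, gt).item()        # smaller distance -> more transferable

make_model = lambda: nn.Linear(16, 4)
batch = [(torch.randn(8, 16), torch.randint(0, 4, (8,)))]
print(transferability(make_model, batch, batch))
```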
The conventional recipe for maximizing model accuracy is to (1) train multiple models with various hyper-parameters and (2) pick the individual model that performs best on a held-out validation set, discarding the rest. In this paper, we revisit the second step of this procedure in the context of fine-tuning large pre-trained models, where fine-tuned models often appear to lie in a single low-error basin. We show that averaging the weights of multiple models fine-tuned with different hyper-parameter configurations often improves accuracy and robustness. Unlike a conventional ensemble, we may average many models without incurring any additional inference or memory cost; we call the result a "model soup". When fine-tuning large pre-trained models such as CLIP, ALIGN, and a ViT-G pre-trained on JFT, our soup recipe provides significant improvements over the best model in a hyper-parameter sweep on ImageNet. The resulting ViT-G model reaches 90.94% top-1 accuracy on ImageNet, a new state of the art. Furthermore, we show that the model soup approach extends to multiple image classification and natural language processing tasks, improves out-of-distribution performance, and improves zero-shot performance on new downstream tasks. Finally, we analytically relate the performance similarity of weight averaging and logit ensembling to the flatness of the loss and the confidence of the predictions, and validate this relation empirically. Code is available at https://github.com/mlfoundations/model-soups.
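A uniform soup reduces to averaging state dicts, as in the hedged sketch below; note that the paper's best results use a greedy variant that adds models only when they improve held-out accuracy, which this sketch omits.

```python
# Minimal sketch of a uniform "model soup": average the weights of models
# fine-tuned from the same pre-trained initialization with different
# hyper-parameters. No extra inference or memory cost relative to one model.
import copy
import torch
import torch.nn as nn

def uniform_soup(models: list[nn.Module]) -> nn.Module:
    soup = copy.deepcopy(models[0])
    state = soup.state_dict()
    for key in state:
        state[key] = torch.stack([m.state_dict()[key].float() for m in models]).mean(dim=0)
    soup.load_state_dict(state)
    return soup

finetuned = [nn.Linear(16, 4) for _ in range(3)]   # stand-ins for fine-tuned runs
soup = uniform_soup(finetuned)                     # a single averaged model
```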
The rapid development of large-scale pre-training has led to foundation models that can act as effective feature extractors for a variety of downstream tasks and domains. Motivated by this, we study the efficacy of pre-trained vision models as a foundation for downstream continual learning (CL) scenarios. Our goal is twofold. First, we want to understand the compute-accuracy trade-off between CL in the raw-data space and CL in the latent space of pre-trained encoders. Second, we investigate how the characteristics of the encoder, the pre-training algorithm and data, and the resulting latent space affect CL performance. To this end, we compare the efficacy of various pre-trained models in large-scale benchmarking scenarios with a vanilla replay setting applied in the latent and in the raw-data space. Notably, this study shows how transfer, forgetting, task similarity, and learning depend on the input data characteristics, and not necessarily on the CL algorithm. First, we show that under some circumstances reasonable CL performance can readily be achieved with a non-parametric classifier at negligible compute. We then show how models pre-trained on broader data provide better performance for various replay sizes. We explain this with the representational similarity and transfer properties of these representations. Finally, we show the effectiveness of self-supervised pre-training for downstream domains that are out of distribution relative to the pre-training domain. We point out and validate several research directions that can further increase the efficacy of latent CL, including representation ensembling. The diverse set of datasets used in this study can serve as a compute-efficient playground for further CL research. The codebase is available at https://github.com/oleksost/latent_CL.
This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over the previous state of the art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100× fewer labels.
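The contrastive objective at the heart of SimCLR, the normalized temperature-scaled cross-entropy (NT-Xent) loss, can be sketched compactly; the batch size, projection dimensionality, and temperature below are illustrative choices.

```python
# Sketch of the NT-Xent loss: two augmented views of each image are positives;
# every other example in the batch serves as a negative.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    z = F.normalize(torch.cat([z1, z2]), dim=1)        # (2N, d), unit norm
    sim = z @ z.t() / temperature                      # scaled cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))         # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)               # positive = the other view

z1, z2 = torch.randn(32, 128), torch.randn(32, 128)    # projections of two views
print(nt_xent(z1, z2).item())
```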
We propose self-adaptive training, a unified training algorithm that dynamically calibrates and enhances the training process using model predictions, without incurring extra computational cost, to advance both supervised and self-supervised learning of deep neural networks. We analyze the training dynamics of deep networks on training data corrupted by, e.g., random noise and adversarial examples. Our analysis shows that model predictions can amplify useful underlying information in the data, and that this phenomenon occurs even in the absence of any label information, highlighting that model predictions can substantially benefit the training process: self-adaptive training improves the generalization of deep networks under noise and enhances self-supervised representation learning. The analysis also sheds light on the understanding of deep learning, e.g., offering a potential explanation of the recently discovered double-descent phenomenon in empirical risk minimization and of the collapsing issue in state-of-the-art self-supervised learning algorithms. Experiments on the CIFAR, STL, and ImageNet datasets verify the effectiveness of our method in three applications: classification with label noise, selective classification, and linear evaluation. To facilitate future research, the code has been made publicly available at https://github.com/LayneH/self-adaptive-training.
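The calibration mechanism can be sketched as a running soft target per training example that drifts toward the model's own predictions; the EMA coefficient, loss form, and names below are hedged assumptions, not the released implementation.

```python
# Hedged sketch: keep a soft target per example, moved toward the model's own
# predictions, and train against that target instead of the (possibly noisy) label.
import torch
import torch.nn.functional as F

def self_adaptive_step(model, x, idx, soft_targets, optimizer, alpha=0.9):
    probs = model(x).softmax(dim=1)
    with torch.no_grad():                              # EMA of predictions
        soft_targets[idx] = alpha * soft_targets[idx] + (1 - alpha) * probs
    loss = -(soft_targets[idx] * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = torch.nn.Linear(32, 10)
labels = torch.randint(0, 10, (100,))                  # possibly noisy labels
soft_targets = F.one_hot(labels, 10).float()           # targets start at the labels
opt = torch.optim.SGD(model.parameters(), lr=0.1)
self_adaptive_step(model, torch.randn(16, 32), torch.arange(16), soft_targets, opt)
```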
Large pre-trained models such as CLIP or ALIGN offer consistent accuracy across a range of data distributions when performing zero-shot inference (i.e., without fine-tuning on a specific dataset). Although existing fine-tuning methods substantially improve accuracy on a given target distribution, they often reduce robustness to distribution shifts. We address this tension by introducing a simple and effective method for improving robustness while fine-tuning: ensembling the weights of the zero-shot and fine-tuned models (WiSE-FT). Compared to standard fine-tuning, WiSE-FT provides large accuracy improvements under distribution shift, while preserving high accuracy on the target distribution. On ImageNet and five derived distribution shifts, WiSE-FT improves accuracy under distribution shift by 4 to 6 percentage points (pp) over prior work while increasing ImageNet accuracy by 1.6 pp. WiSE-FT achieves similarly large robustness gains (2 to 23 pp) on a diverse set of further distribution shifts, and accuracy gains of 0.8 to 3.3 pp compared to standard fine-tuning on seven commonly used transfer learning datasets. These improvements come at no additional computational cost during fine-tuning or inference.
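Weight-space ensembling reduces to a linear interpolation of state dicts, as in the hedged sketch below; the mixing coefficient alpha and the toy modules are illustrative stand-ins for CLIP-scale models.

```python
# Sketch of WiSE-FT weight-space ensembling: interpolate between the zero-shot
# and fine-tuned weights; alpha trades target accuracy against robustness.
import copy
import torch.nn as nn

def wise_ft(zero_shot: nn.Module, fine_tuned: nn.Module, alpha: float = 0.5) -> nn.Module:
    merged = copy.deepcopy(zero_shot)
    state = {k: (1 - alpha) * v + alpha * fine_tuned.state_dict()[k]
             for k, v in zero_shot.state_dict().items()}
    merged.load_state_dict(state)
    return merged

zs, ft = nn.Linear(16, 4), nn.Linear(16, 4)   # stand-ins for zero-shot / fine-tuned
robust = wise_ft(zs, ft, alpha=0.5)           # one model, no extra inference cost
```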
We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view. At the same time, we update the target network with a slow-moving average of the online network. While state-of-the-art methods rely on negative pairs, BYOL achieves a new state of the art without them. BYOL reaches 74.3% top-1 classification accuracy on ImageNet using a linear evaluation with a ResNet-50 architecture and 79.6% with a larger ResNet. We show that BYOL performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks. Our implementation and pretrained models are given on GitHub.
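The two ingredients the abstract highlights, a prediction loss with a stop-gradient on the target branch and a slow-moving average update of the target network, can be sketched as follows; module shapes and the EMA rate are illustrative assumptions.

```python
# Sketch of the BYOL mechanics: regress the online prediction onto the target
# projection (no negatives), and update the target by a slow-moving average.
import torch
import torch.nn.functional as F

def byol_loss(online_pred, target_proj):
    p = F.normalize(online_pred, dim=1)
    z = F.normalize(target_proj.detach(), dim=1)   # no gradient to the target
    return (2 - 2 * (p * z).sum(dim=1)).mean()     # MSE between unit vectors

@torch.no_grad()
def ema_update(target, online, tau=0.99):
    for tp, op in zip(target.parameters(), online.parameters()):
        tp.mul_(tau).add_((1 - tau) * op)          # slow-moving average

online = torch.nn.Linear(32, 16)                   # stand-in online predictor
target = torch.nn.Linear(32, 16)                   # stand-in target projector
loss = byol_loss(online(torch.randn(8, 32)), target(torch.randn(8, 32)))
ema_update(target, online)
```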
Contrastive Language-Image Pre-trained (CLIP) models have the zero-shot ability to classify an image as belonging to "[CLASS]" by using the similarity between the image and the prompt sentence "a [CONTEXT] of [CLASS]". Based on exhaustive text cues in "[CONTEXT]", the CLIP model is aware of different contexts, e.g. background, style, viewpoint, and exhibits unprecedented robustness against a wide range of distribution shifts. However, recent works find that further fine-tuning of CLIP models improves accuracy but sacrifices robustness on downstream tasks. We conduct an empirical investigation to show that fine-tuning corrupts the context-aware ability of pre-trained CLIP features. To solve this problem, we propose Context-Aware Robust Fine-tuning (CAR-FT). CAR-FT regularizes the model during fine-tuning to capture the context information. Specifically, we use zero-shot prompt weights to obtain the context distribution contained in the image. By minimizing the Kullback-Leibler divergence (KLD) between the context distributions induced by the original and fine-tuned CLIP models, CAR-FT lets downstream tasks inherit the context-aware ability of CLIP and achieves both higher In-Distribution (ID) and Out-Of-Distribution (OOD) accuracy. The experimental results show that CAR-FT achieves superior robustness on five OOD test datasets of ImageNet, and meanwhile brings accuracy gains on nine downstream tasks. Additionally, CAR-FT surpasses previous Domain Generalization (DG) methods and attains 78.5% averaged accuracy on the DomainBed benchmark, establishing a new state of the art.
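A hedged sketch of the regularizer as described: context distributions come from scoring image features against the zero-shot prompt weights, and fine-tuning pays a KL penalty for moving that distribution; all shapes and names are assumptions, not the authors' code.

```python
# Hedged sketch of a CAR-FT-style penalty: KLD between the context
# distributions induced by the original and fine-tuned image features.
import torch
import torch.nn.functional as F

def context_kld(feats_orig, feats_ft, context_prompt_weights):
    """KL( original context distribution || fine-tuned context distribution )."""
    p = (feats_orig @ context_prompt_weights.t()).log_softmax(dim=1)
    q = (feats_ft @ context_prompt_weights.t()).log_softmax(dim=1)
    return F.kl_div(q, p, log_target=True, reduction="batchmean")

prompts = torch.randn(20, 512)      # zero-shot weights, one row per context prompt
f0, f1 = torch.randn(8, 512), torch.randn(8, 512)   # original / fine-tuned features
reg = context_kld(f0, f1, prompts)  # add to the task loss, e.g. loss + lam * reg
```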
Machine learning practitioners often have access to a spectrum of data: labeled data for the target task (often limited), unlabeled data, and auxiliary data, the many available labeled datasets for other tasks. We describe TAGLETS, a system built to study techniques for automatically exploiting all three types of data and creating high-quality, servable classifiers. The key components of TAGLETS are (1) auxiliary data organized according to a knowledge graph, (2) modules encapsulating different methods for exploiting auxiliary and unlabeled data, and (3) a distillation stage in which the ensembled modules are combined into a servable model. We compare TAGLETS with state-of-the-art transfer learning and semi-supervised learning methods on four image classification tasks. Our study covers a range of settings, varying the amount of labeled data and the semantic relatedness of the auxiliary data to the target task. We find that the intelligent incorporation of auxiliary and unlabeled data into multiple learning techniques enables TAGLETS to match, and most often significantly surpass, these alternatives. TAGLETS is available as an open-source system at github.com/BatsResearch/taglets.
Vision Transformers (ViT) have been shown to attain highly competitive performance on a wide range of vision applications, such as image classification, object detection, and semantic image segmentation. In comparison to convolutional neural networks, the weaker inductive bias of the Vision Transformer is generally found to cause an increased reliance on model regularization or data augmentation ("AugReg" for short) when training on smaller datasets. We conduct a systematic empirical study in order to better understand the interplay between the amount of training data, AugReg, model size, and compute budget. As one result of this study, we find that the combination of increased compute and AugReg can yield models with the same performance as models trained on an order of magnitude more training data: we train ViT models of various sizes on the public ImageNet-21k dataset that match or outperform their counterparts trained on the larger JFT-300M dataset.