We consider the problem of training a deep neural network on a given classification task, e.g. ImageNet-1k (IN1K), so that it excels both at that task and at other (future) transfer tasks. These two seemingly contradictory properties impose a trade-off between improving the model's generalization and maintaining its performance on the original task. Models trained with self-supervised learning (SSL) tend to generalize better than their supervised counterparts for transfer learning; yet, they still lag behind supervised models on IN1K. In this paper, we propose a supervised learning setup that leverages the best of both worlds. We enrich the common supervised training framework with two key components of recent SSL models: multi-scale crops for data augmentation and the use of an expendable projector head. We replace the last layer of class weights with class prototypes computed on the fly using a memory bank. We show that these three improvements lead to a much more favorable trade-off between the IN1K training task and 13 transfer tasks. Over all the explored configurations, we single out two models: T-Rex, which sets a new state of the art for transfer learning and outperforms top methods such as DINO and PAWS on IN1K, and T-Rex*, which matches the highly optimized RSB-A1 model on IN1K while performing better on transfer tasks. Project page and pretrained models: https://europe.naverlabs.com/t-rex
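The prototype-based last layer described above can be pictured with a short sketch. The following is a minimal, hypothetical illustration (the class name, bank size, and cosine-similarity temperature are assumptions, not the paper's exact recipe): class prototypes are averaged on the fly from a small per-class memory bank of recent features and used in place of a learned weight matrix.

```python
# Hypothetical sketch: class prototypes computed on the fly from a memory bank,
# used in place of a learned last-layer weight matrix (names are assumptions).
import torch
import torch.nn.functional as F

class PrototypeClassifier:
    def __init__(self, num_classes: int, feat_dim: int, bank_size: int = 16):
        # one small FIFO bank of projected features per class
        self.bank = torch.zeros(num_classes, bank_size, feat_dim)
        self.ptr = torch.zeros(num_classes, dtype=torch.long)

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor):
        # push normalized features of labeled samples into their class banks
        for f, y in zip(feats, labels):
            i = self.ptr[y].item()
            self.bank[y, i] = F.normalize(f, dim=0)
            self.ptr[y] = (i + 1) % self.bank.size(1)

    def logits(self, feats: torch.Tensor, temperature: float = 0.1):
        protos = F.normalize(self.bank.mean(dim=1), dim=1)            # (C, D) class prototypes
        return F.normalize(feats, dim=1) @ protos.t() / temperature   # cosine-similarity logits
```

In this sketch the prototypes are recomputed at every forward pass, so the classifier adapts as the memory bank is refreshed during training.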
Unsupervised image representations have significantly reduced the gap with supervised pretraining, notably with the recent achievements of contrastive learning methods. These contrastive methods typically work online and rely on a large number of explicit pairwise feature comparisons, which is computationally challenging. In this paper, we propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring to compute pairwise comparisons. Specifically, our method simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or "views") of the same image, instead of comparing features directly as in contrastive learning. Simply put, we use a "swapped" prediction mechanism where we predict the code of a view from the representation of another view. Our method can be trained with large and small batches and can scale to unlimited amounts of data. Compared to previous contrastive methods, our method is more memory efficient since it does not require a large memory bank or a special momentum network. In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements. We validate our findings by achieving 75.3% top-1 accuracy on ImageNet with ResNet-50, as well as surpassing supervised pretraining on all the considered transfer tasks.
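As a rough illustration of the swapped prediction mechanism, the sketch below predicts the code of one view from the representation of the other. It is deliberately simplified: the full method computes codes with Sinkhorn-Knopp iterations and an optional queue, which are replaced here by a plain sharpened softmax.

```python
# Illustrative sketch of a SwAV-style swapped prediction loss (simplified).
import torch
import torch.nn.functional as F

def swapped_prediction_loss(z1, z2, prototypes, temperature=0.1):
    """z1, z2: L2-normalized projections of two views, shape (B, D).
    prototypes: learnable matrix of shape (K, D)."""
    scores1 = z1 @ prototypes.t()                      # (B, K)
    scores2 = z2 @ prototypes.t()
    with torch.no_grad():
        q1 = F.softmax(scores1 / 0.05, dim=1)          # stand-in for Sinkhorn code assignment
        q2 = F.softmax(scores2 / 0.05, dim=1)
    p1 = F.log_softmax(scores1 / temperature, dim=1)
    p2 = F.log_softmax(scores2 / temperature, dim=1)
    # predict the code of one view from the representation of the other view
    return -0.5 * ((q1 * p2).sum(dim=1) + (q2 * p1).sum(dim=1)).mean()
```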
We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view. At the same time, we update the target network with a slow-moving average of the online network. While state-of-the-art methods rely on negative pairs, BYOL achieves a new state of the art without them. BYOL reaches 74.3% top-1 classification accuracy on ImageNet using a linear evaluation with a ResNet-50 architecture and 79.6% with a larger ResNet. We show that BYOL performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks. Our implementation and pretrained models are given on GitHub.
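The two ingredients described above, the prediction loss between online and target projections and the slow-moving average update of the target, can be summarized in a few lines. This is a minimal sketch under assumed shapes and a fixed momentum value, not the full training loop.

```python
# Minimal sketch of the BYOL-style update: the online network predicts the
# target projection, and the target is an exponential moving average (EMA).
import torch
import torch.nn.functional as F

def byol_loss(p_online, z_target):
    # negative cosine similarity between the online prediction and the target projection
    p = F.normalize(p_online, dim=1)
    z = F.normalize(z_target.detach(), dim=1)   # no gradients flow into the target
    return 2 - 2 * (p * z).sum(dim=1).mean()

@torch.no_grad()
def ema_update(target_net, online_net, tau=0.99):
    # slow-moving average of the online parameters
    for t, o in zip(target_net.parameters(), online_net.parameters()):
        t.mul_(tau).add_((1 - tau) * o)
```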
Contrastive learning has become a key component of self-supervised learning approaches for computer vision. By learning to embed two augmented versions of the same image close to each other and to push the embeddings of different images apart, one can train highly transferable visual representations. As revealed by recent studies, heavy data augmentation and large sets of negatives are both crucial in learning such representations. At the same time, data mixing strategies, either at the image or the feature level, improve both supervised and semi-supervised learning by synthesizing novel examples, forcing networks to learn more robust features. In this paper, we argue that an important aspect of contrastive learning, i.e. the effect of hard negatives, has so far been neglected. To get more meaningful negative samples, current top contrastive self-supervised learning approaches either substantially increase the batch sizes, or keep very large memory banks; increasing memory requirements, however, leads to diminishing returns in terms of performance. We therefore start by delving deeper into a top-performing framework and show evidence that harder negatives are needed to facilitate better and faster learning. Based on these observations, and motivated by the success of data mixing, we propose hard negative mixing strategies at the feature level, that can be computed on-the-fly with a minimal computational overhead. We exhaustively ablate our approach on linear classification, object detection, and instance segmentation and show that employing our hard negative mixing procedure improves the quality of visual representations learned by a state-of-the-art self-supervised learning method. Project page: https://europe.naverlabs.com/mochi
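A hedged sketch of feature-level hard negative mixing follows: for each query, the hardest negatives are selected by similarity and convexly mixed on the fly to synthesize additional negatives. The function name and the numbers of selected and synthesized negatives are illustrative assumptions.

```python
# Sketch: synthesize extra hard negatives by mixing the hardest ones per query.
import torch
import torch.nn.functional as F

def mix_hard_negatives(query, negatives, num_hard=64, num_synth=32):
    """query: (D,), negatives: (N, D); returns (num_synth, D) synthetic negatives."""
    q = F.normalize(query, dim=0)
    negs = F.normalize(negatives, dim=1)
    sims = negs @ q                                    # similarity of each negative to the query
    hard = negs[sims.topk(num_hard).indices]           # the hardest negatives
    i = torch.randint(0, num_hard, (num_synth,))
    j = torch.randint(0, num_hard, (num_synth,))
    lam = torch.rand(num_synth, 1)
    mixed = lam * hard[i] + (1 - lam) * hard[j]        # convex combinations in feature space
    return F.normalize(mixed, dim=1)                   # project back to the unit sphere
```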
We are interested in representation learning in self-supervised, supervised, or semi-supervised settings. Prior work applying the mean-shift idea to self-supervised learning generalizes the BYOL idea by pulling a query image not only closer to its other augmentation, but also to the nearest neighbors (NNs) of that other augmentation. We argue that learning can benefit from choosing far neighbors that are still semantically related to the query. We therefore propose to generalize the MSF algorithm by constraining the search space of nearest neighbors. We show that our method outperforms MSF in the SSL setting when the constraint uses a different image, and outperforms PAWS in the semi-supervised setting with fewer training resources when the constraint ensures that the NNs have the same pseudo-label as the query.
This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100× fewer labels.
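For concreteness, a compact version of the NT-Xent contrastive objective used in this style of training is sketched below; it treats the other augmented view of each image as the positive and all remaining samples in the batch as negatives. The projection head and augmentation pipeline are omitted, and the temperature value is an assumption.

```python
# Compact NT-Xent contrastive loss (simplified sketch).
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: projected embeddings of two augmented views, shape (B, D)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)          # (2B, D)
    sim = z @ z.t() / temperature                                # pairwise similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float('-inf'))                   # exclude self-similarity
    # the positive for sample i is the other view of the same image
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```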
Learning effective visual representations that generalize well without human supervision is a fundamental problem for applying machine learning to a wide variety of tasks. Recently, two families of self-supervised methods, contrastive learning and latent bootstrapping, exemplified by SimCLR and BYOL respectively, have made significant progress. In this work, we hypothesize that adding explicit information compression to these algorithms yields better and more robust representations. We verify this by developing SimCLR and BYOL formulations that are compatible with the Conditional Entropy Bottleneck (CEB) objective, allowing us to both measure and control the amount of compression in the learned representations and observe their impact on downstream tasks. Furthermore, we explore the relationship between Lipschitz continuity and compression, showing a tractable lower bound on the Lipschitz constant of the encoders we learn. As Lipschitz continuity is closely related to robustness, this provides a new explanation for why compressed models are more robust. Our experiments confirm that adding compression to SimCLR and BYOL significantly improves linear evaluation accuracy and model robustness across a wide range of domain shifts. In particular, the compressed version of BYOL achieves 76.0% top-1 linear evaluation accuracy on ImageNet with ResNet-50, and 78.8% with ResNet-50 2x.
Despite recent progress made by self-supervised methods in representation learning with residual networks, they still underperform supervised learning on the ImageNet classification benchmark, limiting their applicability in performance-critical settings. Building on prior theoretical insights from Mitrovic et al. (2021), we propose ReLICv2, which combines an explicit invariance loss with a contrastive objective over a varied set of appropriately constructed data views. ReLICv2 achieves 77.1% top-1 classification accuracy on ImageNet under linear evaluation with a ResNet-50 architecture and 80.6% with larger ResNet models, outperforming previous state-of-the-art self-supervised approaches by a wide margin. Most notably, ReLICv2 is the first representation learning method to consistently outperform the supervised baseline in a like-for-like comparison across a range of standard ResNet architectures. Finally, we show that despite using ResNet encoders, ReLICv2 is comparable to state-of-the-art self-supervised vision transformers.
Feature regression is a simple way to distill large neural network models into smaller ones. We show that with simple changes to the network architecture, regression can outperform more complex state-of-the-art approaches for knowledge distillation from self-supervised models. Surprisingly, adding a multi-layer perceptron head to the CNN backbone is beneficial even if it is used only during distillation and discarded in the downstream task. Deeper non-linear projections can thus be used to accurately mimic the teacher without changing the inference architecture or inference time. Moreover, we utilize independent projection heads to simultaneously distill multiple teacher networks. We also find that using the same weakly augmented image as input to both the teacher and student networks aids distillation. Experiments on the ImageNet dataset demonstrate the efficacy of the proposed changes in various self-supervised distillation settings.
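The architectural change described above, an MLP head on the student that is used only during distillation, can be sketched as follows. The layer widths and the choice of a normalized regression loss are assumptions for illustration.

```python
# Sketch of distillation by feature regression with a disposable MLP head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentWithHead(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, teacher_dim: int):
        super().__init__()
        self.backbone = backbone
        # projection head used only during distillation, then thrown away
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 2048), nn.BatchNorm1d(2048), nn.ReLU(inplace=True),
            nn.Linear(2048, teacher_dim),
        )

    def forward(self, x):
        return self.head(self.backbone(x))

def regression_loss(student_out, teacher_out):
    # regress normalized student projections onto the frozen teacher's features
    return F.mse_loss(F.normalize(student_out, dim=1),
                      F.normalize(teacher_out.detach(), dim=1))
```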
Neural network classifiers have become the de-facto choice in the current "pre-train then fine-tune" paradigm. In this paper, we investigate k-Nearest Neighbor (k-NN) classifiers, a classical model-free learning method from the pre-deep-learning era, as an augmentation to modern neural-network-based approaches. As a lazy learning method, k-NN simply aggregates the distances between a test image and its top-k neighbors in the training set. We adopt k-NN with pre-trained visual representations produced by either supervised or self-supervised methods in two steps: (1) leveraging k-NN predicted probabilities as indications of easy vs. hard examples during training, and (2) linearly interpolating the k-NN predicted distribution with that of the augmented classifier. Via extensive experiments on a wide range of classification tasks, our study reveals the generality and flexibility of k-NN integration with additional insights: (1) k-NN achieves competitive results, sometimes even outperforming standard linear classifiers; (2) incorporating k-NN is especially beneficial for tasks where parametric classifiers perform poorly and/or in low-data regimes. We hope these findings will encourage people to rethink the role of classical, pre-deep-learning methods in computer vision. Our code is available at: https://github.com/kmnp/nn-revisit.
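Step (2) above, linearly interpolating the k-NN distribution with the parametric classifier's prediction, can be written compactly. The soft-voting formulation and the temperature value below are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch: soft k-NN class distribution from a feature bank, blended with a classifier.
import torch
import torch.nn.functional as F

def knn_probs(query, bank_feats, bank_labels, num_classes, k=20, temperature=0.07):
    """Soft k-NN distribution over classes for a single query feature."""
    sims = F.normalize(query, dim=0) @ F.normalize(bank_feats, dim=1).t()   # (N,)
    vals, idx = sims.topk(k)
    weights = F.softmax(vals / temperature, dim=0)
    probs = torch.zeros(num_classes)
    probs.index_add_(0, bank_labels[idx], weights)      # weighted vote per class
    return probs

def combined_prediction(classifier_logits, knn_distribution, alpha=0.5):
    # convex combination of the two predictive distributions
    return alpha * F.softmax(classifier_logits, dim=-1) + (1 - alpha) * knn_distribution
```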
Self-supervised models have been shown to produce visual representations comparable to their supervised counterparts when trained on unlabeled data at scale. However, their efficacy catastrophically decreases in continual learning (CL) scenarios where data are presented to the model sequentially. In this paper, we show that self-supervised objectives can be seamlessly converted into distillation mechanisms for CL by adding a predictor network that maps the current state of the representations to their past state. This enables us to devise a framework for continual self-supervised visual representation learning that (i) significantly improves the quality of the learned representations, (ii) is compatible with several state-of-the-art self-supervised objectives, and (iii) requires little to no hyperparameter tuning. We demonstrate the effectiveness of our approach by training six popular self-supervised models in various CL settings.
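The distillation mechanism described above can be sketched as a predictor that maps the current model's projections onto those of a frozen copy saved before the current task. The dimensions and the cosine-similarity objective below are assumptions; the framework plugs the same idea into several SSL losses.

```python
# Minimal sketch (assumed names): distill the past state into the current one
# via a predictor over representations.
import torch
import torch.nn as nn
import torch.nn.functional as F

predictor = nn.Sequential(nn.Linear(256, 2048), nn.ReLU(inplace=True), nn.Linear(2048, 256))

def continual_distill_loss(z_current, z_past):
    """z_current: projections from the current model; z_past: projections from a
    frozen copy of the model saved before the current task (no gradients)."""
    p = F.normalize(predictor(z_current), dim=1)
    z = F.normalize(z_past.detach(), dim=1)
    return -(p * z).sum(dim=1).mean()   # e.g. a cosine-similarity objective
```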
Dimensionality reduction methods are unsupervised approaches that learn low-dimensional spaces in which some properties of the initial space, typically a notion of "neighborhood", are preserved. Such methods usually require propagation over large k-NN graphs or complicated optimization solvers. On the other hand, self-supervised learning approaches, typically used to learn representations from scratch, rely on simple and more scalable frameworks for learning. In this paper, we propose TLDR, a dimensionality reduction method for generic input spaces that ports the recent self-supervised learning framework of Zbontar et al. (2021) to the specific task of dimensionality reduction, over arbitrary representations. We propose to use nearest neighbors to build pairs from a training set, together with a redundancy reduction loss, to learn an encoder that produces representations invariant across such pairs. TLDR is a method that is simple, easy to train, and of broad applicability; it consists of an offline nearest neighbor computation step that can be highly approximated, followed by a straightforward learning process. Aiming for scalability, we focus on improving linear dimensionality reduction and show consistent gains on image and document retrieval tasks, e.g. gaining +4% mAP over PCA on ROxford for GeM-AP, improving the performance of DINO on ImageNet, or retaining it with a 10x compression.
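A rough sketch of the recipe: pairs are formed by a training point and one of its (offline-computed) nearest neighbors, and a Barlow Twins-style redundancy reduction loss trains the (typically linear) encoder to produce representations that are invariant across such pairs. The normalization details and the off-diagonal weight below are assumptions.

```python
# Sketch of a redundancy reduction loss over nearest-neighbor pairs.
import torch

def redundancy_reduction_loss(z_a, z_b, lamb=5e-3):
    """z_a, z_b: encoded point and encoded nearest neighbor, each of shape (B, D)."""
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)    # standardize along the batch
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    c = (z_a.t() @ z_b) / z_a.size(0)                  # (D, D) cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()     # push the diagonal towards 1
    off_diag = (c - torch.diag_embed(torch.diagonal(c))).pow(2).sum()  # decorrelate the rest
    return on_diag + lamb * off_diag
```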
Meta-learning has become a practical approach for few-shot image classification, where "a strategy to learn a classifier" is meta-learned on labeled base classes and can be applied to tasks with novel classes. We remove the requirement of base-class labels and learn generalizable embeddings via unsupervised meta-learning (UML). Specifically, task episodes are constructed with data augmentations of unlabeled base classes during meta-training, and we apply embedding-based classifiers to novel tasks with labeled few-shot examples during meta-testing. We observe two elements that play important roles in UML, namely the way to sample tasks and the way to measure similarities between instances. Thus, we obtain a strong baseline with two simple modifications: a sufficient sampling strategy that efficiently constructs multiple tasks per episode, together with a semi-normalized similarity. We then take advantage of the characteristics of tasks from two directions to obtain further improvements. First, synthesized confusing instances are incorporated to help extract more discriminative embeddings. Second, we utilize an additional task-specific embedding transformation as an auxiliary component during meta-training to promote the generalization ability of the pre-adapted embeddings. Experiments on few-shot learning benchmarks verify that our approaches outperform previous UML methods and achieve comparable or even better performance than their supervised variants.
We study unsupervised data selection for semi-supervised learning (SSL), where a large-scale unlabeled dataset is available and a small subset of it is budgeted for label acquisition. Existing SSL methods focus on learning a model that effectively integrates information from the given small labeled data and the large unlabeled data, whereas we focus on selecting the right data for annotation for SSL, without any label or task information. Intuitively, instances to be labeled should collectively have maximum diversity and coverage for the downstream task, and individually have maximum information propagation utility for SSL. We formalize these concepts in a three-step data-centric SSL method that improves stability and accuracy by 8% on CIFAR-10 (with 0.08% of the data labeled) and by 14% on ImageNet-1K (with 0.2% labeled). It is also a universal framework that works with various SSL methods, providing consistent performance gains. Our work demonstrates that a small amount of computation spent on carefully selecting the data to annotate brings large gains in annotation efficiency and model performance without changing the learning pipeline. Our fully unsupervised data selection can easily be extended to other weakly supervised learning settings.
Self-supervised learning (SSL) is rapidly closing the gap with supervised methods on large computer vision benchmarks. BARLOW TWINS learns embeddings by measuring the cross-correlation matrix between the outputs of two identical networks fed with distorted versions of a sample, and making it as close to the identity matrix as possible. BARLOW TWINS is competitive with state-of-the-art methods for self-supervised learning while being conceptually simpler, naturally avoiding trivial constant (i.e. collapsed) embeddings, and being robust to the training batch size.
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks such as segmentation and object detection. However, current methods are still primarily applied to curated datasets like ImageNet. In this paper, we first study how biases in the dataset affect existing methods. Our results show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed, and (iii) general versus domain-specific datasets. Second, given the generality of the approach, we try to realize further gains with minor modifications. We show that learning additional invariances (through the use of multi-scale cropping, stronger augmentations, and nearest neighbors) improves the representations. Finally, we observe that MoCo learns spatially structured representations when trained with a multi-crop strategy. The representations can be used for semantic segment retrieval and video instance segmentation without finetuning. Moreover, the results are on par with specialized models. We hope this work will serve as a useful study for other researchers. Code and models are available at https://github.com/wvanganebleke/revisiting-contrastive-ssl.
Computational pathology can lead to saving human lives, but models are annotation hungry and pathology images are notoriously expensive to annotate. Self-supervised learning has shown to be an effective method for utilizing unlabeled data, and its application to pathology could greatly benefit its downstream tasks. Yet, there are no principled studies that compare SSL methods and discuss how to adapt them for pathology. To address this need, we execute the largest-scale study of SSL pre-training on pathology image data, to date. Our study is conducted using 4 representative SSL methods on diverse downstream tasks. We establish that large-scale domain-aligned pre-training in pathology consistently out-performs ImageNet pre-training in standard SSL settings such as linear and fine-tuning evaluations, as well as in low-label regimes. Moreover, we propose a set of domain-specific techniques that we experimentally show leads to a performance boost. Lastly, for the first time, we apply SSL to the challenging task of nuclei instance segmentation and show large and consistent performance improvements under diverse settings.
Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes, from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.
Recent progress in self-supervised learning has demonstrated promising results across multiple visual tasks. An important ingredient in high-performing self-supervised methods is the use of data augmentation by training models to place different augmented views of the same image near each other in embedding space. However, commonly used augmentation pipelines treat images holistically, ignoring the semantic relevance of parts of an image (e.g. a subject versus a background), which can lead to the learning of spurious correlations. Our work addresses this problem by investigating a class of simple but highly effective "background augmentations", which encourage models to focus on semantically relevant content by discouraging them from focusing on image backgrounds. Through a systematic investigation, we show that background augmentations lead to substantial improvements in performance across a spectrum of state-of-the-art self-supervised methods (MoCo-v2, BYOL, SwAV) on a variety of tasks, with gains of approximately 1-2% on ImageNet, enabling performance on par with the supervised baseline. Further, we find the improvement in limited-label settings is even larger (up to 4.2%). Background augmentations also improve robustness to a number of distribution shifts, including natural adversarial examples, ImageNet-9, adversarial attacks, and ImageNet-Renditions. We also make progress on fully unsupervised saliency detection, in the process of generating the saliency masks used for background augmentations.
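One simple instance of a background augmentation, assuming a saliency mask is already available, is to composite the salient foreground onto a different background; the tensor shapes and the function name below are illustrative assumptions.

```python
# Sketch: paste the salient foreground of one image onto another background.
import torch

def background_swap(image, saliency_mask, background):
    """image, background: (3, H, W) float tensors; saliency_mask: (1, H, W) in [0, 1],
    with values near 1 on the (foreground) subject."""
    return saliency_mask * image + (1.0 - saliency_mask) * background
```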