Contrastive representation learning has proven to be an effective self-supervised learning method. Most successful approaches are based on the Noise Contrastive Estimation (NCE) paradigm and consider different views of an instance as positives, and other instances as noise that the positives should be contrasted with. However, all instances in a dataset are drawn from the same distribution and share underlying semantic information that should not be treated as noise. We argue that a good data representation contains the relations, or semantic similarity, between instances. Contrastive learning implicitly learns relations but treats negatives as noise, which harms the quality of the learned relations and therefore of the representation. To circumvent this issue, we propose a novel formulation of contrastive learning using the semantic similarity between instances, called Similarity Contrastive Estimation (SCE). Our training objective can be viewed as soft contrastive learning: we estimate a continuous distribution to push or pull instances based on their semantic similarities. The target similarity distribution is computed from weakly augmented instances and sharpened to eliminate irrelevant relations. Each weakly augmented instance is paired with a strongly augmented instance that contrasts its positive while maintaining the target similarity distribution. Experimental results show that our proposed SCE outperforms its baselines MoCov2 and ReSSL on various datasets and is competitive with state-of-the-art algorithms on the ImageNet linear evaluation protocol.
Contrastive representation learning has proven to be an effective self-supervised learning method for images and videos. Most successful approaches are based on Noise Contrastive Estimation (NCE) and use different views of an instance as positives that should be contrasted with other instances, called negatives, that are considered as noise. However, several instances in a dataset are drawn from the same distribution and share underlying semantic information. A good data representation should contain relations between the instances, or semantic similarity and dissimilarity, that contrastive learning harms by considering all negatives as noise. To circumvent this issue, we propose a novel formulation of contrastive learning using semantic similarity between instances called Similarity Contrastive Estimation (SCE). Our training objective is a soft contrastive one that brings the positives closer and estimates a continuous distribution to push or pull negative instances based on their learned similarities. We validate empirically our approach on both image and video representation learning. We show that SCE performs competitively with the state of the art on the ImageNet linear evaluation protocol for fewer pretraining epochs and that it generalizes to several downstream image tasks. We also show that SCE reaches state-of-the-art results for pretraining video representation and that the learned representation can generalize to video downstream tasks.
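To make the SCE objective above concrete, here is a minimal PyTorch sketch assuming a MoCo-style memory queue: a sharpened similarity distribution computed from the weak view serves as a soft target for the strong view. The mixing coefficient `lam`, the temperatures `tau`/`tau_t`, and the self-relation masking are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def sce_loss(z_strong, z_weak, queue, tau=0.1, tau_t=0.05, lam=0.5):
    """Soft contrastive loss in the spirit of SCE. z_strong, z_weak: (B, D)
    L2-normalized embeddings of strong/weak views; queue: (K, D) memory
    of other instances."""
    B = z_weak.size(0)
    cand = torch.cat([z_weak, queue], dim=0)              # (B+K, D) candidates
    with torch.no_grad():
        sim_t = z_weak @ cand.t() / tau_t                 # (B, B+K)
        idx = torch.arange(B, device=z_weak.device)
        sim_t[idx, idx] = float('-inf')                   # drop self-relations
        relations = F.softmax(sim_t, dim=1)               # sharpened by low tau_t
        one_hot = torch.zeros_like(relations)
        one_hot[idx, idx] = 1.0                           # the weak-view positive
        target = lam * one_hot + (1.0 - lam) * relations  # soft target distribution
    log_p = F.log_softmax(z_strong @ cand.t() / tau, dim=1)
    return -(target * log_p).sum(dim=1).mean()            # soft cross-entropy
```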
Self-supervised learning has recently achieved great success in representation learning without human annotations. The dominant method, contrastive learning, is generally based on instance discrimination tasks, i.e., individual samples are treated as independent categories. However, presuming all samples are different contradicts the natural grouping of similar samples in common visual datasets, e.g., multiple views of the same dog. To bridge the gap, this paper proposes an adaptive method that introduces soft inter-sample relations, namely Adaptive Soft Contrastive Learning (ASCL). More specifically, ASCL transforms the original instance discrimination task into a multi-instance soft discrimination task, and adaptively introduces inter-sample relations. As an effective and concise plug-in module for existing self-supervised learning frameworks, ASCL achieves the best results on several benchmarks in terms of both performance and efficiency. Code is available at https://github.com/mrchenfeng/ascl_icpr2022.
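The adaptive relaxation described above might look like the following sketch: the hard instance label is mixed with an inter-sample relation distribution, where sharper (more confident) distributions receive more weight. The entropy-based confidence measure is an assumption for illustration; the paper's exact weighting scheme may differ.

```python
import torch

def adaptive_soft_labels(one_hot, relations):
    """Hedged sketch of adaptive soft labels: mix the hard instance label
    with an inter-sample relation distribution, trusting sharper relation
    distributions more. one_hot, relations: (B, K); rows of `relations`
    sum to 1. The entropy-based confidence is illustrative only."""
    eps = 1e-12
    entropy = -(relations * (relations + eps).log()).sum(dim=1)     # (B,)
    max_ent = torch.log(torch.tensor(float(relations.size(1))))
    conf = (1.0 - entropy / max_ent).unsqueeze(1)                   # in [0, 1]
    return (1.0 - conf) * one_hot + conf * relations                # soft labels
```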
We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view. At the same time, we update the target network with a slow-moving average of the online network. While state-of-the-art methods rely on negative pairs, BYOL achieves a new state of the art without them. BYOL reaches 74.3% top-1 classification accuracy on ImageNet using a linear evaluation with a ResNet-50 architecture and 79.6% with a larger ResNet. We show that BYOL performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks. Our implementation and pretrained models are given on GitHub.
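A minimal sketch of the two ingredients named above, the slow-moving-average target update and the online prediction loss; the paper additionally symmetrizes the loss over the two views.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(target_net, online_net, m=0.996):
    """Slow-moving average: target weights trail the online weights."""
    for pt, po in zip(target_net.parameters(), online_net.parameters()):
        pt.mul_(m).add_(po, alpha=1.0 - m)

def byol_loss(p_online, z_target):
    """Negative cosine similarity between the online prediction of view 1
    and the target projection of view 2 (no negative pairs involved)."""
    p = F.normalize(p_online, dim=1)
    z = F.normalize(z_target.detach(), dim=1)   # stop-gradient on the target
    return 2.0 - 2.0 * (p * z).sum(dim=1).mean()
```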
Unsupervised image representations have significantly reduced the gap with supervised pretraining, notably with the recent achievements of contrastive learning methods. These contrastive methods typically work online and rely on a large number of explicit pairwise feature comparisons, which is computationally challenging. In this paper, we propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring to compute pairwise comparisons. Specifically, our method simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or "views") of the same image, instead of comparing features directly as in contrastive learning. Simply put, we use a "swapped" prediction mechanism where we predict the code of a view from the representation of another view. Our method can be trained with large and small batches and can scale to unlimited amounts of data. Compared to previous contrastive methods, our method is more memory efficient since it does not require a large memory bank or a special momentum network. In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements. We validate our findings by achieving 75.3% top-1 accuracy on ImageNet with ResNet-50, as well as surpassing supervised pretraining on all the considered transfer tasks.
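A condensed sketch of the swapped prediction mechanism: `scores` are dot products between normalized features and learned prototype vectors, Sinkhorn-Knopp turns them into soft cluster assignments ("codes"), and each view predicts the other view's codes. Distributed-training details and the queue are omitted, and the hyperparameters are common defaults rather than the paper's exact settings.

```python
import torch

@torch.no_grad()
def sinkhorn(scores, eps=0.05, n_iters=3):
    """Sinkhorn-Knopp normalization producing soft cluster assignments
    that are roughly uniform over prototypes across the batch."""
    q = torch.exp(scores / eps).t()               # (K, B)
    q /= q.sum()
    K, B = q.shape
    for _ in range(n_iters):
        q /= q.sum(dim=1, keepdim=True); q /= K   # normalize prototype rows
        q /= q.sum(dim=0, keepdim=True); q /= B   # normalize sample columns
    return (q * B).t()                            # (B, K), rows ~ assignments

def swapped_prediction(scores_1, scores_2, temp=0.1):
    """Predict the code of one view from the prototype scores of the other."""
    q1, q2 = sinkhorn(scores_1), sinkhorn(scores_2)
    p1 = torch.log_softmax(scores_1 / temp, dim=1)
    p2 = torch.log_softmax(scores_2 / temp, dim=1)
    return -0.5 * ((q1 * p2).sum(dim=1) + (q2 * p1).sum(dim=1)).mean()
```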
Contrastive learning (CL) is one of the most successful paradigms for self-supervised learning (SSL). In a principled way, it considers two augmented "views" of the same image as positives to be pulled closer, and all other images as negatives to be pushed apart. However, behind the impressive success of CL-based techniques, their formulation often relies on heavy-computation settings, including large sample batches, extensive training epochs, etc. We are thus motivated to tackle these issues and establish a simple, efficient, yet competitive baseline of contrastive learning. Specifically, we identify, from theoretical and empirical studies, a notable negative-positive-coupling (NPC) effect in the widely used InfoNCE loss, leading to unsuitable learning efficiency with respect to the batch size. By removing the NPC effect, we propose the decoupled contrastive learning (DCL) loss, which removes the positive term from the denominator and significantly improves learning efficiency. DCL achieves competitive performance with less sensitivity to sub-optimal hyperparameters, requiring neither the large batches of SimCLR, the momentum encoding of MoCo, nor long training epochs, and we demonstrate this robustness on various benchmarks. Notably, SimCLR with DCL achieves 68.2% ImageNet-1K top-1 accuracy using batch size 256 within 200 epochs of pre-training, outperforming its SimCLR baseline by 6.4%. Further, DCL can be combined with the SOTA contrastive learning method NNCLR to achieve 72.3% ImageNet-1K top-1 accuracy with batch size 512 in 400 epochs, which represents a new SOTA in contrastive learning. We believe DCL provides a valuable baseline for future contrastive SSL studies.
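A sketch of the decoupling for one anchor direction (the paper symmetrizes over both views and also proposes a weighted variant, DCLW): the positive logit stays in the numerator, while the log-sum-exp in the denominator runs over negatives only.

```python
import torch

def dcl_loss(z1, z2, temp=0.1):
    """Decoupled contrastive loss sketch: InfoNCE with the positive pair
    removed from the denominator. z1, z2: (B, D) L2-normalized embeddings
    of two views; loss shown for z1 as anchor."""
    B = z1.size(0)
    sim_12 = z1 @ z2.t() / temp                    # (B, B) cross-view similarities
    pos = sim_12.diag()                            # positive logits
    mask = torch.eye(B, dtype=torch.bool, device=z1.device)
    sim_11 = z1 @ z1.t() / temp                    # same-view similarities
    # negatives: everything except the positive pair and self-similarities
    neg = torch.cat([sim_12.masked_fill(mask, float('-inf')),
                     sim_11.masked_fill(mask, float('-inf'))], dim=1)
    # the positive term is decoupled out of the log-sum-exp
    return (-pos + torch.logsumexp(neg, dim=1)).mean()
```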
Many recent self-supervised learning methods have shown impressive performance on image classification and other tasks. A bewildering variety of techniques has been used, and it is not always clear what is responsible for their gains, especially when combined. Here, we treat the embeddings of images as point particles and view model optimization as a dynamic process on this particle system. Our dynamics combines an attractive force between similar images, a local dispersive force to avoid local collapse, and a global dispersive force to achieve a globally uniform distribution of particles. The dynamic perspective highlights the advantage of using a delayed-parameter image embedding (à la BYOL) together with multiple views of the same image. It also uses a purely dynamic local dispersive force (Brownian motion) that shows improved performance over other methods and does not require knowledge of other particle coordinates. The method is called MSBReg, which stands for (i) a Multiview centroid loss, which applies an attractive force to pull the embeddings of different image views toward their centroid, (ii) a Singular value loss, which pushes the particle system toward a spatially uniform density, and (iii) a Brownian diffusive loss. We evaluate the downstream classification performance of MSBReg on ImageNet as well as on transfer learning tasks including fine-grained classification, multi-class object classification, object detection, and instance segmentation. In addition, we show that applying our regularization terms to other methods further improves their performance and stabilizes training by preventing mode collapse.
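Since the abstract does not give the exact functional forms, the following is a loosely hedged sketch of two of the three regularizers: a multi-view centroid attraction and a singular-value-based global dispersion term (the Brownian diffusive loss is omitted). Both forms are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def centroid_loss(views):
    """Multi-view centroid attraction (hedged sketch): pull each view's
    embedding toward the centroid of all views of the same image, which is
    treated as a fixed target here. views: list of (B, D) L2-normalized
    embeddings."""
    centroid = F.normalize(torch.stack(views).mean(dim=0), dim=1).detach()
    return sum(2.0 - 2.0 * (v * centroid).sum(dim=1).mean()
               for v in views) / len(views)

def singular_value_loss(z):
    """Global dispersion (hedged sketch): push the singular values of the
    centered batch embedding matrix toward each other to encourage a
    spatially uniform density. The exact penalty in the paper may differ;
    this variance-of-singular-values form is an assumption."""
    s = torch.linalg.svdvals(z - z.mean(dim=0))
    return ((s / s.sum() - 1.0 / s.numel()) ** 2).sum()
```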
Despite the large family of augmentations, only a few cherry-picked robust augmentation policies benefit self-supervised image representation learning. In this paper, we propose a Directional Self-Supervised Learning paradigm (DSSL) that is compatible with significantly more augmentations. Specifically, we apply heavy augmentation policies after views have been lightly augmented with standard augmentations, to generate harder views (HV). HV usually has a larger deviation from the original image than the lightly augmented standard views (SV). Unlike previous methods that symmetrically maximize the similarity of all pairs of augmented views, DSSL treats the augmented views of the same instance as a partially ordered set (with SV↔SV, SV←HV), and then equips a directional objective function that respects the derived relations between views. DSSL can be easily implemented with a few lines of code and is highly flexible for popular self-supervised learning frameworks, including SimCLR, SimSiam, and BYOL. Extensive experimental results on CIFAR and ImageNet demonstrate that DSSL consistently improves various baselines with compatibility to a wider range of augmentations.
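One plausible reading of the directional objective, sketched below: standard views attract each other symmetrically, while hard views are pulled toward standard views without pulling them back, realized here with stop-gradient. How the paper actually implements the direction may differ from this sketch.

```python
import torch
import torch.nn.functional as F

def cos_dist(a, b):
    """2 - 2*cos similarity, averaged over the batch."""
    return 2.0 - 2.0 * (F.normalize(a, dim=1) * F.normalize(b, dim=1)).sum(dim=1).mean()

def dssl_loss(sv1, sv2, hv):
    """Hedged sketch of a directional objective over the partial order
    (SV <-> SV, SV <- HV): standard views attract symmetrically, while the
    hard view is pulled one-way toward the standard views (stop-gradient
    on the SV side). sv1, sv2, hv: (B, D) embeddings."""
    sym = 0.5 * (cos_dist(sv1, sv2.detach()) + cos_dist(sv2, sv1.detach()))
    directed = 0.5 * (cos_dist(hv, sv1.detach()) + cos_dist(hv, sv2.detach()))
    return sym + directed
```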
Despite recent progress in self-supervised methods for representation learning with residual networks, they still underperform supervised learning on the ImageNet classification benchmark, limiting their applicability in performance-critical settings. Building on prior theoretical insights from Mitrovic et al. (2021), we propose ReLICv2, which combines an explicit invariance loss with a contrastive objective over a varied set of appropriately constructed data views. ReLICv2 achieves 77.1% top-1 classification accuracy on ImageNet using linear evaluation with a ResNet-50 architecture, and 80.6% with larger ResNet models, outperforming previous state-of-the-art self-supervised approaches by a wide margin. Most notably, ReLICv2 is the first representation learning method to consistently outperform the supervised baseline in a like-for-like comparison across a range of standard ResNet architectures. Finally, we show that despite using ResNet encoders, ReLICv2 is comparable to state-of-the-art self-supervised vision transformers.
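A hedged sketch of the general recipe: a contrastive term paired with an explicit invariance penalty that makes the two views' candidate distributions agree. ReLICv2's multi-view construction and exact regularizer are not reproduced here; the KL form and weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def relic_style_loss(z1, z2, temp=0.1, alpha=1.0):
    """Contrastive objective plus an explicit invariance penalty in the
    spirit of ReLIC. z1, z2: (B, D) L2-normalized embeddings of two views;
    in-batch candidates serve as the comparison set."""
    logits_1 = z1 @ z2.t() / temp                  # (B, B)
    logits_2 = z2 @ z1.t() / temp
    labels = torch.arange(z1.size(0), device=z1.device)
    contrastive = 0.5 * (F.cross_entropy(logits_1, labels) +
                         F.cross_entropy(logits_2, labels))
    # invariance: the candidate distributions should agree across views
    invariance = F.kl_div(F.log_softmax(logits_1, dim=1),
                          F.softmax(logits_2, dim=1), reduction='batchmean')
    return contrastive + alpha * invariance
```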
Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state of the art performance in the unsupervised training of deep image models. Modern batch contrastive approaches subsume or significantly outperform traditional contrastive losses such as triplet, max-margin and the N-pairs loss. In this work, we extend the self-supervised batch contrastive approach to the fully-supervised setting, allowing us to effectively leverage label information. Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes. We analyze two possible versions of the supervised contrastive (SupCon) loss, identifying the best-performing formulation of the loss. On ResNet-200, we achieve top-1 accuracy of 81.4% on the ImageNet dataset, which is 0.8% above the best number reported for this architecture. We show consistent outperformance over cross-entropy on other datasets and two ResNet variants. The loss shows benefits for robustness to natural corruptions, and is more stable to hyperparameter settings such as optimizers and data augmentations. Our loss function is simple to implement and reference TensorFlow code is released at https://t.ly/supcon.
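The following sketch implements the commonly cited L_out form of the loss, in which every sample sharing the anchor's label contributes as a positive; view stacking and anchors without positives are simplified away.

```python
import torch

def supcon_loss(features, labels, temp=0.07):
    """Supervised contrastive loss (L_out form): all samples sharing a
    label act as positives for one another. features: (N, D) L2-normalized
    embeddings of the augmented batch; labels: (N,)."""
    N = features.size(0)
    sim = features @ features.t() / temp
    # numerical stability: subtract each row's max (cancels in log-prob)
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()
    self_mask = torch.eye(N, dtype=torch.bool, device=features.device)
    pos_mask = ((labels[:, None] == labels[None, :]) & ~self_mask).float()
    exp_sim = torch.exp(sim).masked_fill(self_mask, 0.0)
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))
    # average log-probability over each anchor's positives
    mean_log_prob_pos = (pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp_min(1)
    return -mean_log_prob_pos.mean()
```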
This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100× fewer labels.
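The core NT-Xent loss is easy to state in a few lines; this sketch uses in-batch negatives exactly as described, with the temperature as the single hyperparameter.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temp=0.5):
    """NT-Xent loss used by SimCLR: for each embedding, its other-view
    counterpart is the positive and the remaining 2B - 2 in-batch
    embeddings are negatives. z1, z2: (B, D) projections of two views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2B, D)
    sim = z @ z.t() / temp
    sim.fill_diagonal_(float('-inf'))                       # mask self-pairs
    B = z1.size(0)
    # positive for index i is i+B (and vice versa)
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets.to(z.device))
```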
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection. However, current methods are still primarily applied to curated datasets like ImageNet. In this paper, we first study how biases in the dataset affect existing methods. Our results show that current contrastive approaches work surprisingly well across: (i) object-centric versus scene-centric, (ii) uniform versus long-tailed, and (iii) general versus domain-specific datasets. Second, given the generality of the approach, we try to realize further gains with minor modifications. We show that learning additional invariances, through the use of multi-scale cropping, stronger augmentations, and nearest neighbors, improves the representations. Finally, we observe that MoCo learns spatially structured representations when trained with a multi-crop strategy. The representations can be used for semantic segment retrieval and video instance segmentation without finetuning. Moreover, the results are on par with specialized models. We hope this work will serve as a useful study for other researchers. Code and models are available at https://github.com/wvangansbeke/revisiting-contrastive-ssl.
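The multi-scale cropping tweak mentioned above can be sketched with torchvision as two global crops plus several low-resolution local crops; the scale ranges below follow common practice and are assumptions rather than the paper's exact values.

```python
import torchvision.transforms as T

def multi_crop(image, global_size=224, local_size=96, n_local=4):
    """Sketch of a multi-crop strategy: two standard global crops plus
    several cheaper low-resolution local crops of a PIL image."""
    global_t = T.Compose([
        T.RandomResizedCrop(global_size, scale=(0.2, 1.0)),
        T.RandomHorizontalFlip(),
        T.ToTensor()])
    local_t = T.Compose([
        T.RandomResizedCrop(local_size, scale=(0.05, 0.2)),
        T.RandomHorizontalFlip(),
        T.ToTensor()])
    return ([global_t(image) for _ in range(2)] +
            [local_t(image) for _ in range(n_local)])
```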
We focus on better understanding the critical factors of augmentation-invariant representation learning. We revisit MoCo v2 and BYOL and try to prove the authenticity of the following hypothesis: different frameworks bring representations with different characteristics even with the same pretext task. We establish the first benchmark for a fair comparison between MoCo v2 and BYOL, and observe: (i) sophisticated model configurations enable better adaptation to the pre-training dataset; (ii) mismatched optimization strategies between pre-training and fine-tuning hinder models from achieving competitive transfer performance. Given the fair benchmark, we conduct further investigation and find that the asymmetry of the network structure endows contrastive frameworks to work well under the linear evaluation protocol, while it may hurt transfer performance on long-tailed classification tasks. Moreover, negative samples do not make models more sensible in choosing data augmentations, nor does the asymmetric network structure. We believe our findings provide useful information for future work.
Self-supervised learning (SSL) is rapidly closing the gap with supervised methods on large computer vision benchmarks. Barlow Twins is competitive with state-of-the-art methods for self-supervised learning while being conceptually simpler, naturally avoiding trivial constant (i.e. collapsed) embeddings, and being robust to the training batch size.
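The objective reduces to a few lines: drive the cross-correlation matrix of the two views' embeddings toward the identity, so views of the same image agree while redundancy between embedding dimensions is minimized. The per-dimension standardization below stands in for the batch normalization used in the paper.

```python
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins objective: make the cross-correlation matrix between
    the two views' standardized embeddings as close to the identity as
    possible. z1, z2: (B, D) embeddings of two augmented views."""
    B = z1.size(0)
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = z1.t() @ z2 / B                                  # (D, D) cross-correlation
    on_diag = (torch.diagonal(c) - 1.0).pow(2).sum()     # invariance term
    off_diag = c.pow(2).sum() - torch.diagonal(c).pow(2).sum()  # redundancy term
    return on_diag + lam * off_diag
```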
Recently, contrastive learning has shown significant progress in learning visual representations from unlabeled data. The core idea is to train the backbone to be invariant to different augmentations of an instance. While most methods only maximize the feature similarity between two augmented views, we further generate more challenging training samples and force the model to keep predicting discriminative representations on these hard samples. In this paper, we propose MixSiam, a mixture-based approach upon the traditional Siamese network. On the one hand, we input two augmented images of an instance to the backbone and obtain a discriminative representation by performing an element-wise maximum of the two features. On the other hand, we take the mixture of these augmented images as input and expect the model prediction to be close to the discriminative representation. In this way, the model can access more variant data samples of an instance and keep predicting invariant discriminative representations for them. Thus, the learned model is more robust compared with previous contrastive learning methods. Extensive experiments on large-scale datasets show that MixSiam steadily improves the baseline and achieves competitive results with state-of-the-art methods. Our code will be released soon.
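A loosely hedged sketch of the two branches: the discriminative target is the element-wise maximum of the two views' features, and the model must predict it from a mixture of the augmented images. The fixed convex mixing below is an assumption; the paper may mix the images differently.

```python
import torch

def mixsiam_targets_and_input(x1, x2, f, alpha=0.5):
    """Hedged sketch of the two MixSiam branches. x1, x2: two augmented
    image batches of the same instances; f: the backbone encoder. The
    target (element-wise max of features) is detached; the mixed image is
    what the model must map close to that target."""
    with torch.no_grad():
        target = torch.maximum(f(x1), f(x2))    # discriminative representation
    mixed = alpha * x1 + (1 - alpha) * x2       # mixed input image (assumed rule)
    return mixed, target
```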
Contrastive self-supervised representation learning methods maximize the similarity between positive pairs while tending to minimize the similarity between negative pairs. However, in general, the interplay between negative pairs is ignored, as they are not treated with a special mechanism that handles them according to their specific differences and similarities. In this paper, we propose Extended Momentum Contrast (XMoCo), a self-supervised representation learning method founded upon the legacy of the momentum-encoder unit proposed in the MoCo family of configurations. To this end, we introduce a cross consistency regularization loss, with which we extend transformation consistency to dissimilar images (negative pairs). Under the cross consistency regularization rule, we argue that the semantic representations associated with any pair of images (positive or negative) should preserve their cross-similarity under pretext transformations. Moreover, we further regularize the training loss by enforcing a uniform distribution of similarity over the negative pairs across a batch. The proposed regularization can easily be added to existing self-supervised learning algorithms. Empirically, we report competitive performance on the standard ImageNet-1K linear evaluation classification benchmark. In addition, by transferring the learned representations to common downstream tasks, we show that using XMoCo with prevalently utilized augmentations can lead to improvements in the performance of such tasks. We hope the findings of this paper serve as a motivation for researchers to consider the important interplay among negative examples in self-supervised learning.
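Hedged sketches of the two regularizers described above; both functional forms (an MSE on the pairwise similarity structure, and an entropy-based uniformity term over negatives) are illustrative assumptions rather than the paper's definitions.

```python
import torch
import torch.nn.functional as F

def xmoco_style_regularizers(za, zb, temp=0.1):
    """za, zb: (B, D) L2-normalized embeddings of the same batch under two
    pretext transformations. Returns (i) a cross-consistency term: the
    pairwise similarity structure should be preserved across the two
    transformations; (ii) a uniformity term: the similarity distribution
    over negatives should be pushed toward uniform (via negative entropy)."""
    sim_a, sim_b = za @ za.t(), zb @ zb.t()            # (B, B) pairwise structure
    cross_consistency = (sim_a - sim_b).pow(2).mean()
    diag = torch.eye(za.size(0), dtype=torch.bool, device=za.device)
    neg = (za @ zb.t() / temp).masked_fill(diag, float('-inf'))
    p = F.softmax(neg, dim=1)                          # distribution over negatives
    uniformity = (p * (p + 1e-12).log()).sum(dim=1).mean()  # minimized -> uniform
    return cross_consistency, uniformity
```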
Self-supervised learning has shown its great potential for extracting powerful visual representations without human annotations. Various works address self-supervised learning from different perspectives: (1) contrastive learning methods (e.g., MoCo, SimCLR) utilize both positive and negative samples to guide the training direction; (2) asymmetric network methods (e.g., BYOL, SimSiam) get rid of negative samples by introducing a predictor network and the stop-gradient operation; (3) feature-decorrelation methods (e.g., Barlow Twins, VICReg) instead aim to reduce the redundancy between feature dimensions. These methods appear quite different in their loss functions, which are designed from various motivations. The final accuracy numbers also vary, with different networks and tricks utilized in different works. In this work, we demonstrate that these methods can be unified into the same form. Instead of comparing their loss functions, we derive a unified formula through gradient analysis. Furthermore, we conduct fair and detailed experiments to compare their performance. It turns out that there is little gap between these methods, and the use of a momentum encoder is the key factor that boosts performance. From this unified framework, we propose a simple but effective gradient form for self-supervised learning. It does not require a memory bank or a predictor network, yet still achieves state-of-the-art performance and can easily adopt other training strategies. Extensive experiments on linear evaluation and many downstream tasks also demonstrate its effectiveness. Code will be released.
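One way such a gradient form could look, as a hedged sketch: an alignment term with the momentum branch plus a repulsion term routed through a moving-average feature correlation matrix, so that neither a memory bank nor a predictor is needed. The class name and exact weighting are assumptions, not the paper's derivation.

```python
import torch
import torch.nn.functional as F

class UnifiedGradLoss:
    """Hedged sketch of a gradient-unified objective: align with the
    momentum branch, repel through a running estimate of E[v v^T]."""
    def __init__(self, dim, lam=1.0, rho=0.99):
        self.corr = torch.zeros(dim, dim)   # running feature correlation
        self.lam, self.rho = lam, rho

    def __call__(self, u, v):
        """u: (B, D) online embeddings; v: (B, D) momentum embeddings."""
        u = F.normalize(u, dim=1)
        v = F.normalize(v, dim=1).detach()
        with torch.no_grad():               # update the correlation estimate
            self.corr = self.corr.to(v.device)
            self.corr = self.rho * self.corr + \
                        (1 - self.rho) * (v.t() @ v) / v.size(0)
        align = -(u * v).sum(dim=1).mean()              # attraction to positive
        repel = (u @ self.corr * u).sum(dim=1).mean()   # u^T F u per sample
        return align + 0.5 * self.lam * repel
```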
Self-supervised models have been shown to produce visual representations comparable to or better than their supervised counterparts when trained offline on unlabeled data at scale. However, their efficacy degrades catastrophically in continual learning (CL) scenarios, where data is presented to the model sequentially. In this paper, we show that self-supervised loss functions can be seamlessly converted into distillation mechanisms for CL by adding a predictor network that maps the current state of the representations to their past state. This enables us to devise a framework for continual self-supervised visual representation learning that (i) significantly improves the quality of the learned representations, (ii) is compatible with several state-of-the-art self-supervised objectives, and (iii) needs little to no hyperparameter tuning. We demonstrate the effectiveness of our approach by training six popular self-supervised models in various CL settings.
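The distillation mechanism described above admits a very small sketch: a predictor maps the current representation to its past state, and the frozen past representation supervises it. A cosine objective is shown here; the framework is described as reusing whichever SSL loss is in play.

```python
import torch
import torch.nn.functional as F

def continual_distillation_loss(z_current, z_past, predictor):
    """Hedged sketch of predictor-based distillation for continual SSL.
    z_current: (B, D) embeddings from the current model; z_past: (B, D)
    embeddings from a frozen snapshot of the past encoder; predictor maps
    the current state of the representation to its past state."""
    p = F.normalize(predictor(z_current), dim=1)
    z = F.normalize(z_past.detach(), dim=1)     # past state is not updated
    return 2.0 - 2.0 * (p * z).sum(dim=1).mean()
```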