The standard approach to contrastive learning is to maximize the agreement between different views of the data. The views are ordered in pairs, such that they are either positive, encoding different views of the same object, or negative, corresponding to views of different objects. The supervisory signal comes from maximizing the total similarity over positive pairs, while negative pairs are needed to avoid collapse. In this work, we note that when sets are formed from the views of the data, an approach that considers only individual pairs cannot account for both intra-set and inter-set similarities, and thus limits the information content of the supervisory signal available for training representations. We propose to go beyond contrasting individual pairs of objects by contrasting objects as sets. To this end, we use combinatorial quadratic assignment theory, designed to evaluate set and graph similarities, and derive a set-contrastive objective as a regularizer for contrastive learning methods. We conduct experiments and demonstrate that our method improves the learned representations on metric learning and self-supervised classification tasks.
We present Consistent Assignment for Representation Learning (CARL), an unsupervised learning approach to learning visual representations that combines ideas from self-supervised contrastive learning and deep clustering. By viewing contrastive learning from a clustering perspective, CARL learns unsupervised representations by learning a set of general prototypes that serve as energy anchors to enforce that different views of a given image are assigned to the same prototype. Unlike contemporary work on contrastive learning with deep clustering, CARL proposes to learn the set of general prototypes in an online fashion using gradient descent, without resorting to non-differentiable algorithms or k-means to solve the cluster assignment problem. CARL surpasses its competitors on many representation learning benchmarks, including linear evaluation, semi-supervised learning, and transfer learning.
Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state of the art performance in the unsupervised training of deep image models. Modern batch contrastive approaches subsume or significantly outperform traditional contrastive losses such as triplet, max-margin and the N-pairs loss. In this work, we extend the self-supervised batch contrastive approach to the fully-supervised setting, allowing us to effectively leverage label information. Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes. We analyze two possible versions of the supervised contrastive (SupCon) loss, identifying the best-performing formulation of the loss. On ResNet-200, we achieve top-1 accuracy of 81.4% on the ImageNet dataset, which is 0.8% above the best number reported for this architecture. We show consistent outperformance over cross-entropy on other datasets and two ResNet variants. The loss shows benefits for robustness to natural corruptions, and is more stable to hyperparameter settings such as optimizers and data augmentations. Our loss function is simple to implement and reference TensorFlow code is released at https://t.ly/supcon.
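As a concrete reference, a commonly quoted form of the SupCon objective (the "summation outside the log" variant, which the paper finds to perform best) can be sketched as follows; the notation here is ours, reconstructed from the abstract's description rather than copied verbatim from the paper:

$$
\mathcal{L}^{\mathrm{sup}}_{\mathrm{out}} \;=\; \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)},
$$

where $z_i$ is the normalized embedding of anchor $i$, $A(i)$ is the set of all other samples in the batch, $P(i) \subseteq A(i)$ contains those sharing the anchor's label, and $\tau$ is a temperature.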
Deep Bregman divergences measure the divergence of data points using neural networks, going beyond Euclidean distance and capturing divergence over distributions. In this paper, we propose deep Bregman divergences for contrastive learning of visual representations, where our aim is to enhance the contrastive loss used in self-supervised learning by training an additional network based on a functional Bregman divergence. In contrast to traditional contrastive learning methods, which are based solely on divergences between single points, our framework can capture the divergence between distributions, which improves the quality of the learned representations. We show that combining the conventional contrastive loss with our proposed divergence loss outperforms baselines and most previous self-supervised and semi-supervised learning methods on multiple classification and object detection tasks and datasets. Furthermore, the learned representations generalize well when transferred to other datasets and tasks. The source code and our models are available in the supplementary material and will be released with the paper.
This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100× fewer labels.
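A minimal PyTorch-style sketch of the NT-Xent loss at the core of SimCLR may make the setup concrete; the tensor names and the default temperature are illustrative, not taken from the released code.

```python
# Minimal sketch of the NT-Xent (normalized temperature-scaled cross entropy)
# loss used in SimCLR-style training, assuming PyTorch.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: [N, D] projections of two augmented views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # [2N, D], unit norm
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    n = z1.size(0)
    sim.fill_diagonal_(float('-inf'))                    # exclude self-similarities
    # For row i, the positive is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

Each of the 2N augmented examples treats its counterpart view as the positive and the remaining 2N−2 examples as negatives, which is one reason larger batches help.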
Substantial recent research effort in deep metric learning (DML) has focused on designing complex pairwise-distance losses, which require convoluted schemes to ease optimization, such as sample mining or pair weighting. The standard cross-entropy loss for classification has been largely overlooked in DML. On the surface, cross-entropy may seem unrelated and irrelevant to metric learning, as it does not explicitly involve pairwise distances. However, we provide a theoretical analysis that links cross-entropy to several well-known and recent pairwise losses. Our connections are drawn from two different perspectives: one based on an explicit optimization insight, the other on discriminative and generative views of the mutual information between the labels and the learned features. First, we explicitly show that cross-entropy is an upper bound on a new pairwise loss that has a structure similar to various pairwise losses: it minimizes intra-class distances while maximizing inter-class distances. As a result, minimizing cross-entropy can be seen as an approximate bound-optimization (or majorize-minimize) algorithm for minimizing this pairwise loss. Second, we show that, more generally, minimizing cross-entropy is actually equivalent to maximizing the same mutual information to which we connect several well-known pairwise losses. Furthermore, we show that various standard pairwise losses can be explicitly related to one another via bound relationships. Our findings indicate that cross-entropy acts as a proxy for maximizing the mutual information, as pairwise losses do, without the need for complex sample-mining heuristics. Our experiments on four standard DML benchmarks strongly support our findings. We obtain state-of-the-art results, outperforming recent and complex DML methods.
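One standard way to see the mutual-information side of this argument (a sketch consistent with the abstract, not the paper's exact derivation): for labels $Y$ and learned features $Z$,

$$
I(Y; Z) \;=\; H(Y) - H(Y \mid Z) \;\ge\; H(Y) - \mathbb{E}_{(y,z)}\big[-\log q_\theta(y \mid z)\big],
$$

since the cross-entropy of any variational classifier $q_\theta(y \mid z)$ upper-bounds the conditional entropy $H(Y \mid Z)$. With $H(Y)$ fixed by the label distribution, minimizing cross-entropy therefore maximizes a lower bound on $I(Y; Z)$, the same quantity the pairwise losses discussed above target.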
We approach self-supervised learning of image representations from a statistical dependence perspective, proposing Self-Supervised Learning with the Hilbert-Schmidt Independence Criterion (SSL-HSIC). SSL-HSIC maximizes the dependence between the representations of transformed versions of an image and the image identity, while minimizing the kernelized variance of those representations. This framework yields a new understanding of InfoNCE, a variational lower bound on the mutual information (MI) between different transformations. While the MI itself is known to have pathologies that can lead to learning meaningless representations, its bound is much better behaved: we show that it implicitly approximates SSL-HSIC (with a slightly different regularizer). Our approach also gives insight into BYOL, a negative-free SSL method, since SSL-HSIC similarly learns local neighborhoods of samples. SSL-HSIC allows us to directly optimize statistical dependence in time linear in the batch size, without restrictive data assumptions or indirect mutual information estimators. Trained with or without a target network, SSL-HSIC matches the state of the art for standard linear evaluation on ImageNet, for semi-supervised learning, and for transfer to other classification and vision tasks such as semantic segmentation, depth estimation, and object recognition. Code is available at https://github.com/deepmind/ssl_hsic.
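For reference, the standard (biased) empirical HSIC estimator that this kind of objective builds on is, in notation of our choosing,

$$
\widehat{\mathrm{HSIC}}(Z, Y) \;=\; \frac{1}{(n-1)^2}\,\operatorname{tr}(K H L H), \qquad H = I_n - \tfrac{1}{n}\mathbf{1}\mathbf{1}^{\top},
$$

where $K_{ij} = k(z_i, z_j)$ is a kernel on the learned representations and $L_{ij} = l(y_i, y_j)$ a kernel on image identities. As described above, the SSL-HSIC objective maximizes the dependence term $\mathrm{HSIC}(Z, Y)$ while penalizing $\mathrm{HSIC}(Z, Z)$; the precise form and weight of the penalty follow the paper.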
Humans view the world through many sensory channels, e.g., the long-wavelength light channel, viewed by the left eye, or the high-frequency vibrations channel, heard by the right ear. Each view is noisy and incomplete, but important factors, such as physics, geometry, and semantics, tend to be shared between all views (e.g., a "dog" can be seen, heard, and felt). We investigate the classic hypothesis that a powerful representation is one that models view-invariant factors. We study this hypothesis under the framework of multiview contrastive learning, where we learn a representation that aims to maximize mutual information between different views of the same scene but is otherwise compact. Our approach scales to any number of views, and is view-agnostic. We analyze key properties of the approach that make it work, finding that the contrastive loss outperforms a popular alternative based on cross-view prediction, and that the more views we learn from, the better the resulting representation captures underlying scene semantics. Our approach achieves state-of-the-art results on image and video unsupervised learning benchmarks.
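The "maximize mutual information between views" goal is typically operationalized with the InfoNCE bound; a sketch in generic notation (the critic $s(\cdot,\cdot)$ and temperature $\tau$ are illustrative):

$$
I\big(v^{(1)}; v^{(2)}\big) \;\ge\; \log N - \mathcal{L}_{\mathrm{NCE}}, \qquad
\mathcal{L}_{\mathrm{NCE}} = -\,\mathbb{E}\left[\log \frac{\exp\!\big(s(v^{(1)}, v^{(2)})/\tau\big)}{\sum_{j=1}^{N} \exp\!\big(s(v^{(1)}, v^{(2)}_j)/\tau\big)}\right],
$$

where $s(\cdot,\cdot)$ scores a pair of embedded views and the sum runs over the one matching view and $N-1$ views of other scenes. With more than two sensory views, the same loss can be summed over pairs of views, which is one way the approach scales to an arbitrary number of views.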
Standard contrastive learning approaches usually require a large number of negatives for effective unsupervised learning and often exhibit slow convergence. We suspect that this behavior is due to the inferior selection of the negatives used to provide contrast to the positives. We counter this difficulty by taking inspiration from support vector machines (SVMs) and presenting max-margin contrastive learning (MMCL). Our approach selects negatives as the sparse support vectors obtained via a quadratic optimization problem, and contrastiveness is enforced by maximizing the decision margin. Since SVM optimization can be computationally demanding, especially in an end-to-end setting, we propose simplifications that alleviate the computational burden. We validate our approach on standard vision benchmark datasets, demonstrating state-of-the-art performance in unsupervised representation learning along with better empirical convergence properties.
Existing deep clustering methods rely on contrastive learning for representation learning, which requires negative examples to form an embedding space in which all instances are well separated. However, negative examples inevitably give rise to the class collision problem, which harms representation learning for clustering. In this paper, we explore non-contrastive representation learning for deep clustering, termed NCC, which is based on BYOL, a representative method that uses no negative examples. First, we propose to align one augmented view of an instance with the neighbors of the other view in the embedding space, dubbed the positive sampling strategy, which avoids the class collision problem caused by negative examples and thereby improves within-cluster compactness. Second, we propose to encourage alignment between two augmented views of a prototype and uniformity among all prototypes, named the prototypical contrastive loss, or ProtoCL, which maximizes inter-cluster distance. Moreover, we formulate NCC in an expectation-maximization (EM) framework, in which the E-step uses spherical k-means to estimate the pseudo-labels of instances and the distribution of prototypes from a target network, and the M-step uses the proposed losses to optimize an online network. As a result, NCC forms an embedding space in which all clusters are well separated and within-cluster examples are compact. Experimental results on several clustering benchmark datasets, including ImageNet-1K, demonstrate that NCC outperforms state-of-the-art methods by a significant margin.
Contrastive self-supervised learning (CSL) is a practical solution for learning meaningful visual representations from massive amounts of data in an unsupervised manner. Ordinary CSL embeds the features extracted from a neural network onto a specific topological structure. During training, the contrastive loss draws the different views of the same input together while pushing the embeddings of different inputs apart. One drawback of CSL is that the loss term requires a large number of negative samples to provide better bounds on mutual information. However, increasing the number of negative samples via a larger running batch size also magnifies the effect of false negatives: semantically similar samples are pushed apart from the anchor, degrading downstream performance. In this paper, we tackle this problem by introducing a simple but effective contrastive learning framework. The key insight is to employ a Siamese-style metric loss to match intra-prototype features while increasing the distance between inter-prototype features. We conduct extensive experiments on various benchmarks, and the results demonstrate the effectiveness of our method in improving the quality of visual representations. Specifically, our unsupervised pre-trained ResNet-50 with a linear probe surpasses the fully supervised trained counterpart on the ImageNet-1K dataset.
Self-supervised learning is a popular and powerful method for utilizing large amounts of unlabeled data, for which a wide variety of training objectives have been proposed in the literature. In this study, we perform a Bayesian analysis of state-of-the-art self-supervised learning objectives and propose a unified formulation based on likelihood learning. Our analysis suggests a simple method for integrating self-supervised learning with generative models, allowing for the joint training of these two seemingly distinct approaches. We refer to this combined framework as GEDI, which stands for GEnerative and DIscriminative training. Additionally, we demonstrate an instantiation of the GEDI framework by integrating an energy-based model with a cluster-based self-supervised learning model. Through experiments on synthetic and real-world data, including SVHN, CIFAR10, and CIFAR100, we show that GEDI outperforms existing self-supervised learning strategies in terms of clustering performance by a wide margin. We also demonstrate that GEDI can be integrated into a neural-symbolic framework to address tasks in the small data regime, where it can use logical constraints to further improve clustering and classification performance.
As a successful approach to self-supervised learning, contrastive learning aims to learn the invariant information shared among distortions of the input sample. Although contrastive learning has made continuous progress in sampling strategies and architecture design, two persistent defects remain: the interference of task-irrelevant information and sample inefficiency, both related to the recurring appearance of trivial constant solutions. From the perspective of dimensional analysis, we find that dimensional redundancy and dimensional confounders are the intrinsic issues behind these phenomena, and we provide experimental evidence to support our viewpoint. We further propose MetaMask, short for the dimensional Mask learned by Meta-learning, a simple yet effective method for learning representations that counter dimensional redundancy and confounders. MetaMask adopts a redundancy-reduction technique to address the dimensional redundancy issue, and innovatively introduces a dimensional mask to reduce the gradient effects of the specific dimensions containing the confounders; the mask is trained with a meta-learning paradigm whose objective is to improve the performance of the masked representations on a typical self-supervised task. We provide solid theoretical analyses showing that MetaMask obtains tighter risk bounds for downstream classification than typical contrastive methods. Empirically, our method achieves state-of-the-art performance on various benchmarks.
Self-supervised learning (SSL) has attracted much attention in deep learning research, sparking interest in both the computer vision and remote sensing communities. While there has been great success in computer vision, most of the potential of SSL in the domain of Earth observation remains locked. In this paper, we provide an introduction to, and a review of, the concepts and latest developments in SSL for computer vision in the context of remote sensing. Further, we provide a preliminary benchmark of modern SSL algorithms on popular remote sensing datasets, verifying the potential of SSL in remote sensing, along with an extended study on data augmentations. Finally, we identify a list of promising directions for future SSL research in Earth observation (SSL4EO), to pave the way for fruitful interaction between the two fields.
Self-supervised learning (SSL) is rapidly closing the gap with supervised methods on large computer vision benchmarks. BARLOW TWINS is competitive with state-of-the-art methods for self-supervised learning while being conceptually simpler, naturally avoiding trivial constant (i.e. collapsed) embeddings, and being robust to the training batch size.
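A compact PyTorch-style sketch of the redundancy-reduction objective behind BARLOW TWINS; the off-diagonal weight and variable names are illustrative rather than copied from the reference implementation.

```python
# Minimal sketch of a Barlow Twins-style redundancy-reduction loss, assuming PyTorch.
import torch

def barlow_twins_loss(z1, z2, lambd=5e-3, eps=1e-9):
    """z1, z2: [N, D] embeddings of two augmented views of the same batch."""
    n, d = z1.shape
    # Standardize each feature dimension across the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + eps)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + eps)
    c = (z1.t() @ z2) / n                                         # [D, D] cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()                # push matched dims to correlation 1
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()   # decorrelate distinct dims
    return on_diag + lambd * off_diag
```

Driving the cross-correlation matrix toward the identity is what avoids collapsed embeddings without requiring negatives or large batches.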
Contrastive representation learning has proven to be an effective self-supervised learning method. Most successful approaches are based on the noise-contrastive estimation (NCE) paradigm and consider different views of an instance as positives and other instances as noise against which the positives should be contrasted. However, all instances in a dataset are drawn from the same distribution and share underlying semantic information that should not be treated as noise. We argue that a good data representation contains the relations, or semantic similarities, between instances. Contrastive learning learns such relations implicitly, but treating negatives as noise is harmful to the quality of the learned relations and therefore to the quality of the representation. To circumvent this problem, we propose a novel formulation of contrastive learning that uses the semantic similarity between instances, called Similarity Contrastive Estimation (SCE). Our training objective can be viewed as soft contrastive learning: we estimate, for each instance, a continuous distribution for pushing or pulling instances based on their semantic similarity. The target similarity distribution is computed from weakly augmented instances and sharpened to eliminate irrelevant relations. Each weakly augmented instance is paired with a strongly augmented instance that contrasts with its positive while maintaining the target similarity distribution. Experimental results show that the proposed SCE outperforms its baselines MoCo v2 and ReSSL on various datasets and is competitive with state-of-the-art algorithms on the ImageNet linear evaluation protocol.
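A rough PyTorch-style sketch of the "soft contrastive" idea described above: build a sharpened similarity distribution from the weakly augmented views and train the strongly augmented views against it. This illustrates the general principle only; SCE's exact target construction, temperatures, and mixing coefficient differ and should be taken from the paper.

```python
# Hedged sketch of soft contrastive targets; all hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def soft_contrastive_targets(z_weak, temp_target=0.07, lam=0.5):
    """z_weak: [N, D] L2-normalized embeddings of weakly augmented views."""
    sim = z_weak @ z_weak.t() / temp_target
    sim.fill_diagonal_(float('-inf'))                 # drop self-relations
    relations = F.softmax(sim, dim=1)                 # sharpened inter-instance similarities
    one_hot = torch.eye(z_weak.size(0), device=z_weak.device)
    return lam * one_hot + (1 - lam) * relations      # hard positive mixed with soft relations

def soft_contrastive_loss(z_strong, z_weak, temp=0.1, **target_kwargs):
    z_strong = F.normalize(z_strong, dim=1)
    z_weak = F.normalize(z_weak, dim=1)
    targets = soft_contrastive_targets(z_weak, **target_kwargs).detach()
    logits = z_strong @ z_weak.t() / temp
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```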
Recent advanced unsupervised learning methods use a Siamese-like framework to compare two "views" of the same image for representation learning. Making the two views distinctive is at the core of guaranteeing that unsupervised methods learn meaningful information. However, such frameworks can be fragile and prone to overfitting if the augmentations used to generate the two views are not strong enough, leading to over-confidence on the training data. This drawback hinders the model from learning subtle variance and fine-grained information. To address this, in this work we aim to involve a notion of distance in the label space of unsupervised learning and let the model be aware of a soft degree of similarity between positive or negative pairs by mixing the input data space, so that the input and loss spaces work together. Despite its conceptual simplicity, we show empirically that with our solution, unsupervised image mixtures (Un-Mix), we can learn subtler, more robust, and more generalized representations from the transformed inputs and the corresponding new label space. Extensive experiments are conducted on CIFAR-10, CIFAR-100, STL-10, Tiny ImageNet, and standard ImageNet with popular unsupervised methods such as SimCLR, BYOL, MoCo V1 and V2, and SwAV. Our proposed image mixture and label assignment strategy obtains consistent improvements of 1-3% while following exactly the same hyperparameters and training procedures as the base methods. Code is publicly available at https://github.com/szq0214/un-mix.
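A hedged sketch of the image-mixture idea described above: mix each image with another image from the same batch (here, the batch in reverse order) and weight the resulting contrastive terms by the mixing coefficient. This is only an illustration of the principle; Un-Mix's exact pairing and loss assignment follow the paper and released code.

```python
# Illustrative input-mixture sketch, assuming PyTorch and a generic contrastive_loss(query, key).
import torch

def mix_batch(x, alpha=1.0):
    """x: [N, C, H, W]. Returns the mixed batch and the mixing coefficient."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * x + (1.0 - lam) * x.flip(0), lam   # pair image i with image N-1-i

def unmix_objective(contrastive_loss, q, k_mixed, lam):
    """q: embeddings of the unmixed view; k_mixed: embeddings of the mixed view.
    The mixed view acts as a soft positive of both images it contains."""
    return lam * contrastive_loss(q, k_mixed) + (1.0 - lam) * contrastive_loss(q.flip(0), k_mixed)
```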
Cross entropy loss has served as the main objective function for classification-based tasks. Widely deployed for learning neural network classifiers, it shows both effectiveness and a probabilistic interpretation. Recently, after the success of self-supervised contrastive representation learning methods, supervised contrastive methods have been proposed to learn representations and have shown superior and more robust performance, compared to solely training with cross entropy loss. However, cross entropy loss is still needed to train the final classification layer. In this work, we investigate the possibility of learning both the representation and the classifier using one objective function that combines the robustness of contrastive learning and the probabilistic interpretation of cross entropy loss. First, we revisit a previously proposed contrastive-based objective function that approximates cross entropy loss and present a simple extension to learn the classifier jointly. Second, we propose a new version of the supervised contrastive training that learns jointly the parameters of the classifier and the backbone of the network. We empirically show that our proposed objective functions show a significant improvement over the standard cross entropy loss with more training stability and robustness in various challenging settings.
Contrastive learning methods based on InfoNCE loss are popular in node representation learning tasks on graph-structured data. However, its reliance on data augmentation and its quadratic computational complexity might lead to inconsistency and inefficiency problems. To mitigate these limitations, in this paper, we introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL in short). Local-GCL consists of two key designs: 1) We fabricate the positive examples for each node directly using its first-order neighbors, which frees our method from the reliance on carefully-designed graph augmentations; 2) To improve the efficiency of contrastive learning on graphs, we devise a kernelized contrastive loss, which could be approximately computed in linear time and space complexity with respect to the graph size. We provide theoretical analysis to justify the effectiveness and rationality of the proposed methods. Experiments on various datasets with different scales and properties demonstrate that in spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
Teacher-free online knowledge distillation (KD) aims to collaboratively train an ensemble of multiple student models that distill knowledge from one another. Although existing online KD methods achieve desirable performance, they usually focus on class probabilities as the core knowledge type, ignoring valuable feature representational information. We present a Mutual Contrastive Learning (MCL) framework for online KD. The core idea of MCL is to perform mutual interaction and transfer of contrastive distributions among a cohort of networks in an online manner. MCL can aggregate cross-network embedding information and maximize a lower bound on the mutual information between two networks. This enables each network to learn extra contrastive knowledge from the others, leading to better feature representations and thus improved performance on visual recognition tasks. Beyond the final layer, we extend MCL to several intermediate layers assisted by auxiliary feature refinement modules, which further enhances the representational power for online KD. Experiments on image classification and transfer learning to visual recognition tasks show that MCL brings consistent performance gains over state-of-the-art online KD methods. This superiority demonstrates that MCL can guide a network to generate better feature representations. Our code is publicly available at https://github.com/winycg/mcl.