We study the problem of designing hard negative sampling distributions for unsupervised contrastive representation learning. We analyze a novel min-max framework that seeks a representation which minimizes the maximum (worst-case) generalized contrastive learning loss over all couplings (joint distributions between positive and negative samples), and we prove that the resulting min-max optimal representation is degenerate. This provides the first theoretical justification for incorporating additional regularization constraints on the couplings. We reinterpret the min-max problem through the lens of optimal transport theory and use regularized transport couplings to control the hardness of the negative examples. We prove that the recently proposed state-of-the-art hard negative sampling distribution is a special case corresponding to entropic regularization of the coupling.
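The entropic-regularization special case referenced at the end of this abstract is typically realized as an exponential tilting of the negative-sample distribution, estimated with importance weights inside the usual InfoNCE objective. A minimal PyTorch sketch for a single anchor follows; `beta` is a hypothetical name for the tilting/concentration parameter, and the exact estimator in the paper may differ.

```python
import torch
import torch.nn.functional as F

def hard_negative_nce(anchor, positive, negatives, tau=0.5, beta=1.0):
    """InfoNCE with exponentially tilted (hardness-weighted) negatives.

    anchor, positive: (D,) tensors; negatives: (M, D); all L2-normalized.
    beta = 0 recovers uniform negative weighting; larger beta concentrates
    weight on the negatives most similar to the anchor (the "hard" ones).
    """
    pos = torch.exp(anchor @ positive / tau)
    neg_sim = negatives @ anchor / tau                        # (M,) scaled similarities
    w = F.softmax(beta * neg_sim, dim=0)                      # importance weights ~ exp(beta * sim)
    neg = negatives.size(0) * (w * torch.exp(neg_sim)).sum()  # reweighted negative mass
    return -torch.log(pos / (pos + neg))
```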
Unsupervised contrastive learning (UCL) is a self-supervised learning technique that aims to learn a useful representation function by pulling positive samples close to each other while pushing negative samples far apart in the embedding space. To improve the performance of UCL, several works have introduced hard-negative unsupervised contrastive learning (H-UCL), which aims to select "hard" negative samples, in contrast to the random sampling strategy used in UCL. In another line of work, under the assumption that label information is available, supervised contrastive learning (SCL) has recently been developed by extending UCL to the fully supervised setting. In this paper, motivated by the effectiveness of hard-negative sampling strategies in H-UCL and the usefulness of label information in SCL, we propose a contrastive learning framework called hard-negative supervised contrastive learning (H-SCL). Our numerical results demonstrate the effectiveness of H-SCL over SCL and H-UCL on several image datasets. In addition, we show theoretically that, under certain conditions, the objective function of H-SCL can be bounded by the objective function of H-UCL but not by the objective function of UCL. Thus, minimizing the H-UCL loss can act as a proxy for minimizing the H-SCL loss, whereas minimizing the UCL loss cannot. Since we show numerically that H-SCL outperforms other contrastive learning methods, our theoretical result (bounding the H-SCL loss by the H-UCL loss) helps to explain why H-UCL outperforms UCL in practice.
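As a rough illustration of how label-based positives and hardness-weighted negatives can be combined, here is a PyTorch sketch of an H-SCL-style loss. It is an assumption-laden sketch (the tilting parameter `beta` and the per-anchor loop are my choices), not the paper's exact estimator.

```python
import torch
import torch.nn.functional as F

def h_scl_style_loss(z, labels, tau=0.5, beta=1.0):
    """Supervised contrastive loss with exponentially tilted (hard) negatives.

    z: (N, D) L2-normalized embeddings; labels: (N,) integer class labels.
    Positives share the anchor's label; negatives are reweighted toward
    those most similar to the anchor.
    """
    N = z.size(0)
    sim = z @ z.T / tau
    eye = torch.eye(N, dtype=torch.bool, device=z.device)
    pos_mask = (labels.view(-1, 1) == labels.view(1, -1)) & ~eye
    neg_mask = ~pos_mask & ~eye

    total, count = 0.0, 0
    for i in range(N):
        pos_sim, neg_sim = sim[i][pos_mask[i]], sim[i][neg_mask[i]]
        if pos_sim.numel() == 0 or neg_sim.numel() == 0:
            continue
        w = F.softmax(beta * neg_sim, dim=0)                    # hardness weights
        neg_term = neg_sim.numel() * (w * neg_sim.exp()).sum()  # tilted negative mass
        total = total - (pos_sim - torch.log(pos_sim.exp() + neg_term)).mean()
        count += 1
    return total / max(count, 1)
```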
Recent empirical works have successfully used unlabeled data to learn feature representations that are broadly useful in downstream classification tasks. Several of these methods are reminiscent of the well-known word2vec embedding algorithm: leveraging availability of pairs of semantically "similar" data points and "negative samples," the learner forces the inner product of representations of similar pairs with each other to be higher on average than with negative samples. The current paper uses the term contrastive learning for such algorithms and presents a theoretical framework for analyzing them by introducing latent classes and hypothesizing that semantically similar points are sampled from the same latent class. This framework allows us to show provable guarantees on the performance of the learned representations on the average classification task that is comprised of a subset of the same set of latent classes. Our generalization bound also shows that learned representations can reduce (labeled) sample complexity on downstream tasks. We conduct controlled experiments in both the text and image domains to support the theory.
Contrastive learning (CL) methods effectively learn data representations without label supervision, where the encoder contrasts each positive sample against multiple negative samples via a one-vs-many softmax cross-entropy loss. By leveraging large amounts of unlabeled image data, recent CL methods have achieved promising results when pretrained on ImageNet, a curated dataset with balanced image classes. However, they tend to yield worse performance when pretrained on images in the wild. In this paper, to further improve the performance of CL and enhance its robustness on uncurated datasets, we propose a doubly contrastive strategy that contrasts the positive (negative) samples of a query within themselves before deciding how strongly to pull (push) them. We realize this strategy with contrastive attraction and contrastive repulsion (CACR), which makes the query not only exert a greater force to attract more distant positive samples but also to repel closer negative samples. Theoretical analysis shows that CACR generalizes the behavior of CL by accounting for the difference between the distributions of the positive/negative samples, which are typically sampled independently of the query, and their true conditional distributions given the query. We demonstrate that this unique mechanism of positive attraction and negative repulsion helps remove the need for a uniform prior over the distribution of the data and its latent representations, which is especially beneficial when the dataset is less curated. Large-scale experiments on many standard vision tasks show that CACR not only consistently outperforms existing CL methods on benchmark datasets for representation learning, but also exhibits better robustness when pretrained on imbalanced image datasets.
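A minimal sketch of the attraction/repulsion weighting idea described above, for one query. The softmax-based weights and the `detach` are my simplifications; the paper derives the weights from conditional distributions given the query.

```python
import torch
import torch.nn.functional as F

def cacr_style_loss(query, positives, negatives, tau=0.5):
    """Doubly contrastive attraction-repulsion loss (illustrative sketch).

    query: (D,), positives: (P, D), negatives: (M, D), all L2-normalized.
    Farther positives receive larger attraction weights; closer negatives
    receive larger repulsion weights.
    """
    pos_sim = positives @ query                              # (P,)
    neg_sim = negatives @ query                              # (M,)

    w_pos = F.softmax(-pos_sim / tau, dim=0).detach()        # weight distant positives more
    w_neg = F.softmax(neg_sim / tau, dim=0).detach()         # weight close negatives more

    attraction = -(w_pos * pos_sim).sum()                    # pull weighted positives closer
    repulsion = (w_neg * neg_sim).sum()                      # push weighted negatives away
    return attraction + repulsion
```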
Standard contrastive learning approaches usually require a large number of negatives for effective unsupervised learning and often exhibit slow convergence. We suspect this behavior is due to the naive selection of the negatives used to provide contrast to the positives. We counter this difficulty by presenting Max-Margin Contrastive Learning (MMCL), drawing inspiration from support vector machines (SVMs). Our approach selects negatives as the sparse support vectors obtained via a quadratic optimization problem, and contrastiveness is enforced by maximizing the decision margin. Since SVM optimization can be computationally demanding, especially in an end-to-end setting, we present simplifications that alleviate the computational burden. We validate our approach on standard vision benchmark datasets, demonstrating state-of-the-art performance in unsupervised representation learning together with better empirical convergence.
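To make the support-vector idea concrete, here is an offline, per-anchor sketch using scikit-learn rather than the end-to-end quadratic program the paper solves; the `C` value and the linear kernel are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def svm_selected_negatives(pos_emb, neg_embs, C=1.0):
    """Select 'max-margin' negatives as the support vectors of an SVM that
    separates the anchor's positive view(s) from the candidate negatives.

    pos_emb: (P, D) array, labeled +1; neg_embs: (M, D) array, labeled -1.
    Returns indices into neg_embs of the negatives that act as support
    vectors, i.e. the sparse set that defines the decision margin.
    """
    X = np.vstack([pos_emb, neg_embs])
    y = np.array([1] * len(pos_emb) + [-1] * len(neg_embs))
    clf = SVC(kernel="linear", C=C).fit(X, y)
    return np.array([i - len(pos_emb) for i in clf.support_ if i >= len(pos_emb)])
```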
Contrastive representation learning has been outstandingly successful in practice. In this work, we identify two key properties related to the contrastive loss: (1) alignment (closeness) of features from positive pairs, and (2) uniformity of the induced distribution of the (normalized) features on the hypersphere. We prove that, asymptotically, the contrastive loss optimizes these properties, and analyze their positive effects on downstream tasks. Empirically, we introduce an optimizable metric to quantify each property. Extensive experiments on standard vision and language datasets confirm the strong agreement between both metrics and downstream task performance. Directly optimizing for these two metrics leads to representations with comparable or better performance at downstream tasks than contrastive learning.
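The two properties have simple closed-form estimators; below is a minimal PyTorch sketch following the standard definitions $\mathcal{L}_{align} = \mathbb{E}\,\|f(x)-f(y)\|_2^{\alpha}$ over positive pairs and $\mathcal{L}_{uniform} = \log \mathbb{E}\, e^{-t\|f(x)-f(y)\|_2^2}$ over all pairs.

```python
import torch

def align_loss(x, y, alpha=2):
    """x, y: (N, D) L2-normalized embeddings of N positive pairs."""
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniform_loss(x, t=2):
    """x: (N, D) L2-normalized embeddings; log of the mean Gaussian potential."""
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()
```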
The generalization of representations learned via contrastive learning depends on which features of the data are extracted. However, we observe that the contrastive loss does not always sufficiently guide which features are extracted, a behavior that can negatively impact performance on downstream tasks by inadvertently suppressing important predictive features. We find that feature extraction is influenced by the difficulty of the so-called instance discrimination task (i.e., the task of discriminating similar points from dissimilar points). Although harder tasks improve the representation of some features, the improvement comes at the cost of suppressing features that were previously well represented. In response, we propose implicit feature modification (IFM), a method for altering positive and negative samples so as to guide contrastive models toward capturing a wider variety of predictive features. Empirically, we observe that IFM reduces feature suppression and, as a result, improves performance on vision and medical imaging tasks. Code is available at: \url{https://github.com/joshr17/ifm}.
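My reading of the closed form for $\ell_2$-normalized embeddings is that an $\varepsilon$-budget latent perturbation reduces the positive similarity and increases each negative similarity by $\varepsilon$; the sketch below implements that perturbed InfoNCE term (illustrative only; the released code at the URL above is authoritative).

```python
import torch
import torch.nn.functional as F

def ifm_style_loss(anchor, positive, negatives, eps=0.1, tau=0.5):
    """InfoNCE on implicitly 'modified' samples: the positive is made eps
    harder and every negative eps closer, directly in the logits.

    anchor, positive: (D,); negatives: (M, D); all L2-normalized.
    """
    pos_logit = (anchor @ positive - eps) / tau                       # perturbed positive
    neg_logits = (negatives @ anchor + eps) / tau                     # perturbed negatives
    logits = torch.cat([pos_logit.view(1), neg_logits]).unsqueeze(0)  # (1, M+1)
    target = torch.zeros(1, dtype=torch.long, device=logits.device)   # positive sits at index 0
    return F.cross_entropy(logits, target)
```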
Recent investigations into noise contrastive estimation suggest, both empirically and theoretically, that while having more "negative samples" in the contrastive loss initially improves downstream classification performance, beyond a threshold it hurts downstream performance due to a "collision-coverage" trade-off. But is such a phenomenon inherent to contrastive learning? We show, in a simple theoretical setting where positive pairs are generated by sampling from the underlying latent class (introduced by Saunshi et al. (ICML 2019)), that the downstream performance of the representation optimizing the (population) contrastive loss in fact does not degrade with the number of negative samples. Along the way, we give a structural characterization of the optimal representation in our framework for noise contrastive estimation. We also provide empirical support for our theoretical results on the CIFAR-10 and CIFAR-100 datasets.
Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state of the art performance in the unsupervised training of deep image models. Modern batch contrastive approaches subsume or significantly outperform traditional contrastive losses such as triplet, max-margin and the N-pairs loss. In this work, we extend the self-supervised batch contrastive approach to the fully-supervised setting, allowing us to effectively leverage label information. Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes. We analyze two possible versions of the supervised contrastive (SupCon) loss, identifying the best-performing formulation of the loss. On ResNet-200, we achieve top-1 accuracy of 81.4% on the ImageNet dataset, which is 0.8% above the best number reported for this architecture. We show consistent outperformance over cross-entropy on other datasets and two ResNet variants. The loss shows benefits for robustness to natural corruptions, and is more stable to hyperparameter settings such as optimizers and data augmentations. Our loss function is simple to implement and reference TensorFlow code is released at https://t.ly/supcon.
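For reference, a compact PyTorch sketch of a one-view-per-sample SupCon loss, in the formulation that places the average over positives outside the log; for multi-view training one would stack all views and repeat the labels. This is a minimal re-implementation, not the released TensorFlow code.

```python
import torch

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive (SupCon) loss, averaging over positives outside the log.

    features: (N, D) L2-normalized embeddings (one view per sample).
    labels:   (N,) integer class labels.
    """
    sim = features @ features.T / temperature                    # (N, N) similarities
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()     # numerical stability

    labels = labels.view(-1, 1)
    self_mask = torch.eye(labels.size(0), device=features.device)
    pos_mask = (labels == labels.T).float().to(features.device) - self_mask  # positives, excluding self

    exp_sim = torch.exp(sim) * (1 - self_mask)                   # denominator excludes the anchor itself
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)

    pos_count = pos_mask.sum(dim=1).clamp(min=1)                 # anchors without positives contribute 0
    return -((pos_mask * log_prob).sum(dim=1) / pos_count).mean()
```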
Contrastive learning has recently shown immense potential in unsupervised visual representation learning. Existing studies in this track mainly focus on intra-image invariance learning, which typically uses rich intra-image transformations to construct positive pairs and then maximizes agreement with a contrastive loss. The merits of inter-image invariance, by contrast, remain much less explored. One major obstacle to exploiting inter-image invariance is that it is unclear how to reliably construct inter-image positive pairs and further derive effective supervision from them, since no pair annotations are available. In this work, we present a comprehensive empirical study to better understand the role of inter-image invariance learning from three main constituting components: pseudo-label maintenance, sampling strategy, and decision boundary design. To facilitate the study, we introduce a unified and generic framework that supports the integration of unsupervised intra- and inter-image invariance learning. Through carefully designed comparisons and analysis, multiple valuable observations are revealed: 1) online labels converge faster than offline labels; 2) semi-hard negative samples are more reliable and unbiased than hard negative samples; 3) a less stringent decision boundary is more favorable for inter-image invariance learning. With all the obtained recipes, our final model, namely InterCLR, shows consistent improvements over state-of-the-art intra-image invariance learning methods on multiple standard benchmarks. We hope this work will provide useful experience for devising effective unsupervised inter-image invariance learning. Code: https://github.com/open-mmlab/mmselfsup.
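Observation 2) above is commonly operationalized with a triplet-style "semi-hard" criterion; the stand-in sketch below selects negatives that are almost, but not quite, as similar to the anchor as its positive (the `margin` parameter and the exact rule are my assumptions, not necessarily InterCLR's).

```python
import torch

def semi_hard_negatives(anchor, candidates, pos_sim, margin=0.1):
    """Keep negatives whose similarity to the anchor falls just below that of
    the positive: harder than random negatives, but not harder than the positive.

    anchor: (D,), candidates: (M, D), L2-normalized; pos_sim: scalar similarity
    between the anchor and its positive.
    """
    sims = candidates @ anchor
    mask = (sims < pos_sim) & (sims > pos_sim - margin)
    return candidates[mask]
```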
Many datasets are biased, namely they contain easy-to-learn features that are highly correlated with the target class only in the dataset but not in the true underlying distribution of the data. For this reason, learning unbiased models from biased data has become a very relevant research topic in the last years. In this work, we tackle the problem of learning representations that are robust to biases. We first present a margin-based theoretical framework that allows us to clarify why recent contrastive losses (InfoNCE, SupCon, etc.) can fail when dealing with biased data. Based on that, we derive a novel formulation of the supervised contrastive loss (epsilon-SupInfoNCE), providing more accurate control of the minimal distance between positive and negative samples. Furthermore, thanks to our theoretical framework, we also propose FairKL, a new debiasing regularization loss, that works well even with extremely biased data. We validate the proposed losses on standard vision datasets including CIFAR10, CIFAR100, and ImageNet, and we assess the debiasing capability of FairKL with epsilon-SupInfoNCE, reaching state-of-the-art performance on a number of biased datasets, including real instances of biases in the wild.
Contrastive learning methods based on InfoNCE loss are popular in node representation learning tasks on graph-structured data. However, its reliance on data augmentation and its quadratic computational complexity might lead to inconsistency and inefficiency problems. To mitigate these limitations, in this paper, we introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL in short). Local-GCL consists of two key designs: 1) We fabricate the positive examples for each node directly using its first-order neighbors, which frees our method from the reliance on carefully-designed graph augmentations; 2) To improve the efficiency of contrastive learning on graphs, we devise a kernelized contrastive loss, which could be approximately computed in linear time and space complexity with respect to the graph size. We provide theoretical analysis to justify the effectiveness and rationality of the proposed methods. Experiments on various datasets with different scales and properties demonstrate that in spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
Contrastive learning relies on the assumption that positive pairs contain related views, e.g., patches of an image or co-occurring multimodal signals of a video, that share certain underlying information about an instance. But what if this assumption is violated? The literature suggests that contrastive learning produces suboptimal representations in the presence of noisy views, e.g., false positive pairs with no apparent shared information. In this work, we propose a new contrastive loss function that is robust against noisy views. We provide rigorous theoretical justification by showing connections to robust symmetric losses for noisy binary classification and by establishing a new contrastive bound based on the Wasserstein distance measure. The proposed loss is completely modality-agnostic and a simple drop-in replacement for the InfoNCE loss, which makes it easy to apply to existing contrastive frameworks. We show that our approach provides consistent improvements over the state of the art on image, video, and graph contrastive learning benchmarks that exhibit a variety of real-world noise patterns.
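One way to obtain such a symmetric, noise-tolerant drop-in replacement for InfoNCE is sketched below; this is my reconstruction under stated assumptions (the parameters `q` and `lam` are illustrative), not necessarily the paper's exact loss. As `q` approaches 0 it behaves like ordinary InfoNCE, while `q = 1` gives a bounded loss that is less sensitive to false positive pairs.

```python
import torch

def robust_contrastive_loss(anchor, positive, negatives, tau=0.5, q=0.5, lam=0.01):
    """Symmetric/robust InfoNCE variant for noisy views (illustrative sketch).

    anchor, positive: (D,); negatives: (M, D); all L2-normalized.
    """
    e_pos = torch.exp(anchor @ positive / tau)
    e_all = e_pos + torch.exp(negatives @ anchor / tau).sum()
    return -(e_pos ** q) / q + ((lam * e_all) ** q) / q
```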
Instance discrimination contrastive learning (CL) has achieved significant success in learning transferable representations. A hardness-aware property related to the temperature $\tau$ of the CL loss has been identified as playing an essential role in automatically concentrating on hard negative samples. However, previous work has also shown that there exists a uniformity-tolerance dilemma (UTD) in the CL loss, which leads to unexpected performance degradation. Specifically, a smaller temperature helps to learn separable embeddings but has less tolerance for semantically related samples, which may result in a suboptimal embedding space, and vice versa. In this paper, we propose a Model-Aware Contrastive Learning (MACL) strategy to escape the UTD. In the undertrained phases, there is less possibility that the high-similarity region around the anchor contains latent positive samples; thus adopting a smaller temperature in these phases imposes larger penalty strength on hard negative samples and improves the discrimination of the CL model. In contrast, a larger temperature in the well-trained phases helps to explore semantic structure thanks to the higher tolerance for latent positive samples. In our implementation, the temperature in MACL is designed to adapt to the alignment property, which reflects the confidence of the CL model. In addition, we revisit why contrastive learning requires a large number of negative samples from a unified gradient-reduction perspective. Based on MACL and these analyses, a new CL loss is proposed in this work to improve the learned representations and training with small batch sizes.
A prominent technique for self-supervised representation learning has been to contrast semantically similar and dissimilar pairs of samples. Without access to labels, dissimilar (negative) points are typically taken to be randomly sampled datapoints, implicitly accepting that these points may, in reality, actually have the same label. Perhaps unsurprisingly, we observe that sampling negative examples from truly different labels improves performance, in a synthetic setting where labels are available. Motivated by this observation, we develop a debiased contrastive objective that corrects for the sampling of same-label datapoints, even without knowledge of the true labels. Empirically, the proposed objective consistently outperforms the state-of-the-art for representation learning in vision, language, and reinforcement learning benchmarks. Theoretically, we establish generalization bounds for the downstream classification task.
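A minimal sketch of the debiased objective as described (single positive, M negatives); `tau_plus` is the assumed prior probability that a randomly drawn "negative" actually shares the anchor's label, and the clamp reflects the estimator's theoretical lower bound.

```python
import math
import torch

def debiased_contrastive_loss(anchor, positive, negatives, tau_plus=0.1, t=0.5):
    """Contrastive loss whose negative term is corrected for same-label collisions.

    anchor, positive: (D,); negatives: (M, D); all L2-normalized.
    """
    e_pos = torch.exp(anchor @ positive / t)
    e_neg = torch.exp(negatives @ anchor / t)                  # (M,)
    M = negatives.size(0)
    # subtract the expected contribution of false negatives, then renormalize
    Ng = (e_neg.sum() - M * tau_plus * e_pos) / (1.0 - tau_plus)
    Ng = torch.clamp(Ng, min=M * math.exp(-1.0 / t))           # keep the estimator positive
    return -torch.log(e_pos / (e_pos + Ng))
```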
Self-supervised learning has recently shown great potential in vision tasks through contrastive learning, which aims to discriminate each image, or instance, in the dataset. However, such instance-level learning ignores the semantic relationships between instances and sometimes undesirably repels the anchor from semantically similar samples, termed "false negatives". In this work, we show that the unfavorable effect of false negatives is more significant for large-scale datasets with more semantic concepts. To address this issue, we propose a novel self-supervised contrastive learning framework that incrementally detects and explicitly removes false negative samples. Specifically, following the training process, as the encoder gradually improves and the embedding space becomes more semantically structured, our method dynamically detects an increasing number of high-quality false negatives. Next, we discuss two strategies to explicitly remove the detected false negatives during contrastive learning. Extensive experiments show that our framework outperforms other self-supervised contrastive learning methods on multiple benchmarks in a limited-resource setup.
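As a toy illustration of the detect-and-remove step, the sketch below filters candidate negatives whose cosine similarity to the anchor exceeds a threshold; the threshold rule is a stand-in for the paper's progressive detection mechanism, which strengthens as the embedding space becomes more semantically structured.

```python
import torch

def eliminate_false_negatives(anchor, negatives, threshold=0.7):
    """Split candidate negatives into likely true negatives and likely false
    negatives (the latter to be excluded from the contrastive denominator).

    anchor: (D,), negatives: (M, D); all L2-normalized.
    """
    sims = negatives @ anchor                                  # cosine similarities
    keep = sims < threshold                                    # likely true negatives
    return negatives[keep], negatives[~keep]
```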
Contrastive Learning has recently achieved state-of-the-art performance in a wide range of tasks. Many contrastive learning approaches use mined hard negatives to make batches more informative during training but these approaches are inefficient as they increase epoch length proportional to the number of mined negatives and require frequent updates of nearest neighbor indices or mining from recent batches. In this work, we provide an alternative to hard negative mining in supervised contrastive learning, Tail Batch Sampling (TBS), an efficient approximation to the batch assignment problem that upper bounds the gap between the global and training losses, $\mathcal{L}^{Global} - \mathcal{L}^{Train}$. TBS \textbf{improves state-of-the-art performance} in sentence embedding (+0.37 Spearman) and code-search tasks (+2.2\% MRR), is easy to implement - requiring only a few additional lines of code, does not maintain external data structures such as nearest neighbor indices, is more computationally efficient when compared to the most minimal hard negative mining approaches, and makes no changes to the model being trained.
Instance-discriminative self-supervised representation learning has attracted attention thanks to its unsupervised nature and the informative feature representations it yields for downstream tasks. In practice, it commonly uses a larger number of negative samples than the number of supervised classes. However, there is an inconsistency in existing analyses: theoretically, a large number of negative samples degrades classification performance on a downstream supervised task, while empirically they improve performance. We provide a novel framework for analyzing this empirical result regarding negative samples using the coupon collector's problem. Our bound can implicitly incorporate the supervised loss of the downstream task into the self-supervised loss by increasing the number of negative samples. We confirm that our proposed analysis holds on real-world benchmark datasets.
Multi-label classification (MLC) is a prediction task in which each sample can have multiple labels. We propose a novel contrastive-learning-boosted multi-label prediction model based on a Gaussian mixture variational autoencoder (C-GMVAE), which learns a multimodal prior space and employs a contrastive loss. Many existing methods introduce extra, complex neural modules beyond the prediction module to capture label correlations. We find that by using contrastive learning in the supervised setting, we can effectively exploit label information and learn meaningful feature and label embeddings that capture label correlations and predictive power, without additional neural modules. Our method also adopts the idea of learning and aligning latent spaces for both features and labels. C-GMVAE imposes a Gaussian mixture structure on the latent space to alleviate posterior collapse and over-regularization, in contrast to previous works based on unimodal priors. C-GMVAE outperforms existing methods on multiple public datasets and can often match the full performance of other models with only 50% of the training data. Furthermore, we show that the learned embeddings provide insights for interpreting label-label interactions.
We propose a method for incorporating expert knowledge into contrastive representation learning for time series. Our approach employs expert features in place of the data transformations commonly used in previous contrastive learning approaches. We do this because time-series data frequently originate from industrial or medical domains, where expert features are often available from domain experts, whereas transformations are generally elusive for time-series data. We first propose two properties that useful time-series representations should fulfill and show that current representation learning approaches do not ensure these properties. We therefore devise ExpCLR, a novel contrastive learning objective that utilizes expert features to encourage both properties in the learned representation. Finally, we demonstrate on three real-world time-series datasets that ExpCLR surpasses several state-of-the-art methods for both unsupervised and semi-supervised representation learning.