We propose few-example clustering (FEC), a novel algorithm that performs contrastive learning to cluster a small number of examples. Our method consists of three steps: (1) generating candidate cluster assignments, (2) contrastive learning for each candidate assignment, and (3) selecting the best candidate. Based on the hypothesis that the contrastive learner trained with the ground-truth assignment trains faster than the others, in step (3) we select the candidate whose training loss is smallest in the early stage of learning. Extensive experiments on the mini-ImageNet and CUB-200-2011 datasets show that FEC outperforms the other baselines on average across various settings. FEC also exhibits an interesting learning curve in which clustering performance gradually increases and then sharply drops.
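A minimal sketch of the selection step (3) described above, assuming a hypothetical helper `train_contrastive(assignment, steps)` that trains one contrastive learner for a candidate assignment and returns its loss history; the early-stop horizon and averaging window are illustrative choices, not values from the abstract.

```python
import numpy as np

def select_best_assignment(candidate_assignments, train_contrastive, early_steps=100):
    """Pick the candidate assignment whose contrastive learner has the
    smallest training loss early in training (the selection heuristic
    sketched in the abstract)."""
    early_losses = []
    for assignment in candidate_assignments:
        loss_history = train_contrastive(assignment, steps=early_steps)  # hypothetical trainer
        early_losses.append(float(np.mean(loss_history[-10:])))          # loss near the early cutoff
    best = int(np.argmin(early_losses))
    return candidate_assignments[best], best
```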
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity. Despite the promise, the performance of PLL often lags behind the supervised counterpart. In this work, we bridge the gap by addressing two key research challenges in PLL -- representation learning and label disambiguation -- in one coherent framework. Specifically, our proposed framework PiCO consists of a contrastive learning module along with a novel class prototype-based label disambiguation algorithm. PiCO produces closely aligned representations for examples from the same classes and facilitates label disambiguation. Theoretically, we show that these two components are mutually beneficial, and can be rigorously justified from an expectation-maximization (EM) algorithm perspective. Moreover, we study a challenging yet practical noisy partial label learning setup, where the ground-truth may not be included in the candidate set. To remedy this problem, we present an extension PiCO+ that performs distance-based clean sample selection and learns robust classifiers by a semi-supervised contrastive learning algorithm. Extensive experiments demonstrate that our proposed methods significantly outperform the current state-of-the-art approaches in standard and noisy PLL tasks and even achieve comparable results to fully supervised learning.
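A hedged sketch of prototype-based label disambiguation in the spirit of the description above: within each example's candidate label set, the soft pseudo-target is moved toward the candidate whose class prototype is most similar to the example's embedding. The moving-average coefficient and the exact update rule are assumptions of this sketch, not the paper's specification.

```python
import torch
import torch.nn.functional as F

def disambiguate(pseudo_targets, embeddings, prototypes, candidate_mask, phi=0.9):
    """pseudo_targets: (N, C) current soft targets; embeddings: (N, D) L2-normalized;
    prototypes: (C, D) L2-normalized class prototypes; candidate_mask: (N, C) 0/1
    mask marking each example's candidate label set."""
    sims = embeddings @ prototypes.t()                                # (N, C)
    sims = sims.masked_fill(candidate_mask == 0, float('-inf'))       # restrict to candidates
    hard = F.one_hot(sims.argmax(dim=1), prototypes.size(0)).float()  # closest candidate prototype
    # Moving-average update keeps the pseudo-targets stable across iterations.
    return phi * pseudo_targets + (1 - phi) * hard
```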
As the dominant paradigm, fine-tuning a pre-trained model on target data is widely used in many deep learning applications, especially for small datasets. However, recent studies have empirically shown that training from scratch achieves a final performance no worse than this pre-training strategy once the number of training iterations is increased. In this work, we revisit this phenomenon from the perspective of generalization analysis, which is popular in learning theory. Our result reveals that the final prediction accuracy may have only a weak dependency on the pre-trained model, especially with a large number of training iterations. This observation motivates us to leverage the pre-training data for fine-tuning, since this data is also available at fine-tuning time. Our generalization result for using pre-training data shows that the final performance on the target task can be improved when an appropriate subset of pre-training data is included in fine-tuning. With the insight from this theoretical finding, we propose a novel selection strategy that chooses a subset of the pre-training data to help improve generalization on the target task. Extensive experimental results on image classification tasks over 8 benchmark datasets verify the effectiveness of the proposed data-selection-based fine-tuning pipeline.
Contrastive learning has recently shown immense potential in unsupervised visual representation learning. Existing studies in this track mainly focus on intra-image invariance learning, which typically uses rich intra-image transformations to construct positive pairs and then maximizes agreement with a contrastive loss. The merits of inter-image invariance, in contrast, remain much less explored. One major obstacle to exploiting inter-image invariance is that it is unclear how to reliably construct inter-image positive pairs, and further derive effective supervision from them, since no pair annotations are available. In this work, we present a comprehensive empirical study to better understand the role of inter-image invariance learning through its three main constituting components: pseudo-label maintenance, sampling strategy, and decision boundary design. To facilitate the study, we introduce a unified and generic framework that supports the integration of unsupervised intra- and inter-image invariance learning. Through carefully designed comparisons and analysis, multiple valuable observations are revealed: 1) online labels converge faster than offline labels; 2) semi-hard negative samples are more reliable and unbiased than hard negative samples; 3) a less stringent decision boundary is more favorable for inter-image invariance learning. With all the obtained recipes, our final model, namely InterCLR, shows consistent improvements over state-of-the-art intra-image invariance learning methods on multiple standard benchmarks. We hope this work will provide useful experience for devising effective unsupervised inter-image invariance learning. Code: https://github.com/open-mmlab/mmselfsup.
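An illustrative sketch of the "semi-hard" negative sampling idea referenced in observation 2): rather than taking the most similar (hardest) candidates, negatives are drawn from an intermediate band of the similarity ranking. The percentile band used here is an arbitrary placeholder and not the paper's configuration.

```python
import torch

def sample_semi_hard_negatives(anchor, candidates, n_neg=64, lower=0.5, upper=0.9):
    """anchor: (D,); candidates: (M, D); both L2-normalized.
    Returns indices of negatives drawn from the [lower, upper] similarity
    percentile band, skipping both the easiest and the very hardest candidates."""
    sims = candidates @ anchor                      # (M,) cosine similarities
    order = torch.argsort(sims)                     # least to most similar
    m = len(order)
    band = order[int(m * lower):int(m * upper)]     # intermediate ("semi-hard") band
    perm = torch.randperm(len(band))[:n_neg]
    return band[perm]
```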
In this paper, we address the problem of generalized category discovery (GCD), i.e., clustering unlabeled images by leveraging information from a set of seen classes, where the unlabeled images may contain both seen and unseen classes. The seen classes can be regarded as an implicit criterion for classes, which makes this setting different from unsupervised clustering, where the clustering criterion may be ambiguous. We mainly focus on discovering categories within fine-grained datasets, since this is one of the most direct applications of category discovery, i.e., helping experts discover novel concepts in an unlabeled dataset using the implicit criterion set by the seen classes. State-of-the-art methods for generalized category discovery leverage contrastive learning to learn representations, but the large inter-class similarity and intra-class variance pose a challenge for these methods, because negative examples may contain cues irrelevant for recognizing a category, so the algorithms may converge to a local minimum. We propose a novel method named XCon that helps the model mine useful information from the images by first partitioning the dataset into sub-datasets with k-means clustering and then performing contrastive learning on each sub-dataset, so as to learn fine-grained discriminative features. Experiments on fine-grained datasets show a clear improvement over the previous best methods, indicating the effectiveness of our method.
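A rough sketch of the data-partitioning idea just described, assuming self-supervised backbone features are available as a NumPy array `features` of shape (N, D); the k-means split uses scikit-learn, and the per-partition contrastive training loop is only indicated.

```python
import numpy as np
from sklearn.cluster import KMeans

def partition_dataset(features, n_partitions=8, seed=0):
    """Split the dataset into expert sub-datasets via k-means on backbone features."""
    km = KMeans(n_clusters=n_partitions, random_state=seed, n_init=10)
    partition_ids = km.fit_predict(features)
    return [np.where(partition_ids == k)[0] for k in range(n_partitions)]

# Contrastive learning would then be run within each partition, so that
# negatives share coarse visual cues and the loss focuses on fine-grained
# differences:
# for indices in partition_dataset(features):
#     run_contrastive_training(indices)   # hypothetical per-partition training step
```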
Recent methods for deep metric learning have been focusing on designing different contrastive loss functions between positive and negative pairs of samples so that the learned feature embedding is able to pull positive samples of the same class closer and push negative samples from different classes away from each other. In this work, we recognize that there is a significant semantic gap between features at the intermediate feature layer and class labels at the final output layer. To bridge this gap, we develop a contrastive Bayesian analysis to characterize and model the posterior probabilities of image labels conditioned on their feature similarity in a contrastive learning setting. This contrastive Bayesian analysis leads to a new loss function for deep metric learning. To improve the generalization capability of the proposed method onto new classes, we further extend the contrastive Bayesian loss with a metric variance constraint. Our experimental results and ablation studies demonstrate that the proposed contrastive Bayesian metric learning method significantly improves the performance of deep metric learning in both supervised and pseudo-supervised scenarios, outperforming existing methods by a large margin.
Deep learning has shown significant improvements over traditional machine learning methods in different domains such as image and speech recognition. Its success on benchmark datasets has been transferred to the real world by practitioners through pretrained models. Pretraining visual models with supervised learning requires a large amount of expensive data annotation. To tackle this limitation, DeepCluster, a simple and scalable method for pretraining visual representations, has been proposed. However, the underlying workings of this model remain unclear. In this paper, we analyze DeepCluster's internals and exhaustively evaluate the impact of various hyperparameters on three different datasets. Accordingly, we propose an explanation of why the algorithm works in practice. We also show that DeepCluster's convergence and performance depend heavily on the interplay between the quality of the randomly initialized filters of the convolutional layers and the selected number of clusters. Furthermore, we demonstrate that continuous clustering is not critical for DeepCluster's convergence; therefore, early stopping of the clustering phase reduces training time and allows the algorithm to scale to large datasets. Finally, we derive plausible hyperparameter selection criteria in a semi-supervised setting.
Jitendra Malik once said, "Supervision is the opium of the AI researcher". Most deep learning techniques heavily rely on extreme amounts of human labels to work effectively. In today's world, the rate of data creation greatly surpasses the rate of data annotation. Full reliance on human annotations is just a temporary means to solve current closed problems in AI. In reality, only a tiny fraction of data is annotated. Annotation Efficient Learning (AEL) is a study of algorithms to train models effectively with fewer annotations. To thrive in AEL environments, we need deep learning techniques that rely less on manual annotations (e.g., image, bounding-box, and per-pixel labels), but learn useful information from unlabeled data. In this thesis, we explore five different techniques for handling AEL.
This paper presents Prototypical Contrastive Learning (PCL), an unsupervised representation learning method that bridges contrastive learning with clustering. PCL not only learns low-level features for the task of instance discrimination, but more importantly, it encodes semantic structures discovered by clustering into the learned embedding space. Specifically, we introduce prototypes as latent variables to help find the maximum-likelihood estimation of the network parameters in an Expectation-Maximization framework. We iteratively perform E-step as finding the distribution of prototypes via clustering and M-step as optimizing the network via contrastive learning. We propose ProtoNCE loss, a generalized version of the InfoNCE loss for contrastive learning, which encourages representations to be closer to their assigned prototypes. PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks with substantial improvement in low-resource transfer learning. Code and pretrained models are available at https://github.com/salesforce/PCL.
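A simplified sketch of the prototype term in a ProtoNCE-style loss (one clustering granularity, per-cluster concentration phi), written in PyTorch. Tensor shapes and the handling of the instance-wise InfoNCE term are assumptions of this sketch; see the official code at https://github.com/salesforce/PCL for the full loss.

```python
import torch
import torch.nn.functional as F

def proto_nce(embeddings, prototypes, assignments, phi):
    """embeddings: (N, D) L2-normalized features
    prototypes: (K, D) L2-normalized cluster centroids
    assignments: (N,) index of the prototype assigned to each sample
    phi: (K,) estimated concentration of each cluster"""
    logits = embeddings @ prototypes.t() / phi.unsqueeze(0)  # (N, K)
    return F.cross_entropy(logits, assignments)              # pull samples toward their prototype

# Usage with random data, for shape checking only:
emb = F.normalize(torch.randn(32, 128), dim=1)
protos = F.normalize(torch.randn(10, 128), dim=1)
labels = torch.randint(0, 10, (32,))
phi = torch.full((10,), 0.1)
loss = proto_nce(emb, protos, labels, phi)
```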
Most existing work on few-shot learning relies on meta-learning over a large base dataset that is usually from the same domain as the target dataset. We tackle the problem of cross-domain few-shot learning, where there is a large shift between the base and target domains. Cross-domain few-shot recognition with unlabeled target data is largely unexplored in the literature. STARTUP was the first method to tackle this problem using self-training. However, it uses a fixed teacher pretrained on the labeled base dataset to create soft labels for the unlabeled target samples. Since the base dataset and the unlabeled dataset come from different domains, projecting the target images onto the class space of the base dataset with a fixed pretrained model can be sub-optimal. We propose a simple dynamic distillation-based approach to facilitate the use of unlabeled images from the novel/base dataset. We impose consistency regularization by computing predictions for weakly-augmented versions of the unlabeled images from the teacher network and matching them with predictions for strongly-augmented versions of the same images from the student network. The parameters of the teacher network are updated as an exponential moving average of the parameters of the student network. We show that the proposed network learns a representation that can be easily adapted to the target domain, even though it has not been trained on target-specific classes during the pretraining phase. Our model outperforms the current state-of-the-art method by 3.6% for 5-shot classification on the BSCD-FSL benchmark and shows competitive performance on the traditional in-domain few-shot learning task.
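A minimal sketch of the consistency step just described: the teacher scores a weakly-augmented view, the student matches it on a strongly-augmented view, and teacher weights track the student by exponential moving average. Model definitions and augmentations are assumed to exist elsewhere; the temperature and momentum values are placeholders.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_teacher(teacher, student, momentum=0.999):
    """Exponential moving average of the student's parameters into the teacher."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.data.mul_(momentum).add_(s_param.data, alpha=1 - momentum)

def consistency_loss(student, teacher, weak_view, strong_view, temperature=0.1):
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(weak_view) / temperature, dim=1)
    student_log_probs = F.log_softmax(student(strong_view) / temperature, dim=1)
    # Cross-entropy between the teacher's soft targets and the student's predictions.
    return -(teacher_probs * student_log_probs).sum(dim=1).mean()
```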
In this paper, we consider a highly general image recognition setting in which, given a labelled and an unlabelled set of images, the task is to categorize all images in the unlabelled set. Here, the unlabelled images may come from labelled classes or from novel ones. Existing recognition methods cannot handle this setting because they make several restrictive assumptions, such as the unlabelled instances coming only from known or only from unknown classes, or the number of unknown classes being known a priori. We address the more unconstrained setting, which we name 'Generalized Category Discovery', and challenge all these assumptions. We first establish strong baselines by taking state-of-the-art algorithms from novel category discovery and adapting them to this task. Next, we propose the use of vision transformers with contrastive representation learning for this open-world setting. We then introduce a simple yet effective semi-supervised k-means method to automatically cluster the unlabelled data into seen and unseen classes, substantially outperforming the baselines. Finally, we also propose a new approach to estimate the number of classes in the unlabelled data. We thoroughly evaluate our approach on public datasets including CIFAR10, CIFAR100 and ImageNet-100, as well as fine-grained datasets including CUB, Stanford Cars and Herbarium19, benchmarking them in this new setting to foster future research.
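A compact sketch of a semi-supervised k-means step of the kind described above: labelled samples are pinned to their class centroid, unlabelled samples are assigned to the nearest centroid, and centroids are re-estimated from both. The initialization scheme and the estimate of the total number of clusters K are assumptions of this sketch.

```python
import numpy as np

def ss_kmeans(x_lab, y_lab, x_unlab, K, n_iter=50, seed=0):
    """x_lab: (Nl, D) labelled features; y_lab: (Nl,) labels in 0..C-1 with C <= K;
    x_unlab: (Nu, D) unlabelled features with Nu >= K."""
    rng = np.random.default_rng(seed)
    # Initialize: random unlabelled points, then overwrite labelled-class centroids with class means.
    centroids = x_unlab[rng.choice(len(x_unlab), size=K, replace=False)].copy()
    for c in np.unique(y_lab):
        centroids[c] = x_lab[y_lab == c].mean(axis=0)
    for _ in range(n_iter):
        # Unlabelled points: nearest-centroid assignment.
        d = ((x_unlab[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        y_unlab = d.argmin(axis=1)
        # Centroid update uses labelled (fixed) and unlabelled (free) assignments.
        for k in range(K):
            members = np.concatenate([x_lab[y_lab == k], x_unlab[y_unlab == k]])
            if len(members) > 0:
                centroids[k] = members.mean(axis=0)
    return centroids, y_unlab
```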
In this work, we propose to use out-of-distribution samples, i.e., unlabeled samples from outside the target classes, to improve few-shot learning. Specifically, we exploit easily available out-of-distribution samples to drive the classifier to avoid irrelevant features by maximizing the distance from the prototypes to the out-of-distribution samples while minimizing the distance to the in-distribution samples (i.e., support and query data). Our approach is simple to implement, agnostic to the feature extractor, lightweight without any additional pre-training cost, and applicable to both inductive and transductive settings. Extensive experiments on various standard benchmarks demonstrate that the proposed method consistently improves the performance of pretrained networks with different architectures.
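A hedged sketch of the idea just stated: prototypes are pulled toward in-distribution (support/query) features and pushed away from easily available out-of-distribution features. The distance/similarity forms and the weighting are placeholder choices for illustration, not values from the abstract.

```python
import torch
import torch.nn.functional as F

def ood_refinement_loss(prototypes, in_dist_feats, in_dist_labels, ood_feats, weight=1.0):
    """prototypes: (C, D); in_dist_feats: (N, D); in_dist_labels: (N,);
    ood_feats: (M, D). Features are assumed L2-normalized."""
    # Pull: squared distance between each in-distribution feature and its class prototype.
    pull = ((in_dist_feats - prototypes[in_dist_labels]) ** 2).sum(dim=1).mean()
    # Push: average similarity between prototypes and out-of-distribution features.
    push = (F.normalize(prototypes, dim=1) @ F.normalize(ood_feats, dim=1).t()).mean()
    return pull + weight * push
```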
We introduce Consistent Assignment for Representation Learning (CARL), an unsupervised learning approach to learning visual representations that combines ideas from self-supervised contrastive learning and deep clustering. By viewing contrastive learning from a clustering perspective, CARL learns unsupervised representations by learning a set of general prototypes that serve as energy anchors to enforce that different views of a given image are assigned to the same prototype. Unlike contemporary work on contrastive learning with deep clustering, CARL learns this set of general prototypes in an online fashion by gradient descent, without requiring non-differentiable algorithms or k-means to solve the cluster assignment problem. CARL surpasses its competitors on many representation learning benchmarks, including linear evaluation, semi-supervised learning, and transfer learning.
We study unsupervised data selection for semi-supervised learning (SSL), where a large-scale unlabeled dataset is available and a small subset of it is budgeted for label acquisition. Existing SSL methods focus on learning a model that effectively integrates information from the given small labeled data and the large unlabeled data, whereas we focus on selecting the right data to annotate for SSL, without any label or task information. Intuitively, the instances to be labeled should collectively have maximum diversity and coverage of the downstream task, and individually have maximum information-propagation utility for SSL. We formalize these concepts in a three-step data-centric SSL approach that improves FixMatch in stability and accuracy by 8% on CIFAR-10 (0.08% labeled) and 14% on ImageNet-1K (0.2% labeled). It is also a universal framework that works with various SSL methods and provides consistent performance gains. Our work demonstrates that a small amount of computation spent on carefully selecting the data to annotate brings large gains in annotation efficiency and model performance without changing the learning pipeline. Our fully unsupervised data selection can be easily extended to other weakly supervised learning settings.
Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the performance differences among methods on datasets with limited domain differences, 2) a modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability for few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
While contrastive learning has recently brought notable benefits to deep clustering of unlabeled images by learning sample-specific discriminative visual features, its potential for explicitly inferring class decision boundaries is less well understood. This is because its instance discrimination strategy is not class-sensitive; therefore, the clusters derived in the resulting sample-specific feature space are not optimized to correspond to meaningful class decision boundaries. In this work, we address this problem by introducing Semantic Contrastive Learning (SCL). By formulating a semantic (cluster-aware) contrastive learning objective, SCL imposes an explicit, distance-based cluster structure on the unlabeled training data. Moreover, we introduce a clustering consistency condition to be satisfied jointly by instance visual similarities and cluster decision boundaries, and concurrently optimize both to reason about the hypotheses of the semantic ground-truth classes (unknown/unlabeled) through their consensus. This semantic contrastive learning approach to discovering unknown class decision boundaries offers considerable advantages for unsupervised object recognition tasks. Extensive experiments show that SCL outperforms state-of-the-art contrastive learning and deep clustering methods on six object recognition benchmarks, especially on the more challenging fine-grained and larger datasets.
This paper proposes Mutual Information Regularized Assignment (MIRA), a pseudo-labeling algorithm for unsupervised representation learning inspired by information maximization. We formulate online pseudo-labeling as an optimization problem to find pseudo-labels that maximize the mutual information between the label and data while being close to a given model probability. We derive a fixed-point iteration method and prove its convergence to the optimal solution. In contrast to baselines, MIRA combined with pseudo-label prediction enables a simple yet effective clustering-based representation learning without incorporating extra training techniques or artificial constraints such as sampling strategy, equipartition constraints, etc. With relatively small training epochs, representation learned by MIRA achieves state-of-the-art performance on various downstream tasks, including the linear/k-NN evaluation and transfer learning. Especially, with only 400 epochs, our method applied to ImageNet dataset with ResNet-50 architecture achieves 75.6% linear evaluation accuracy.
Self-supervised learning has recently shown great potential on vision tasks via contrastive learning, which aims to discriminate each image, or instance, in the dataset. However, such instance-level learning ignores the semantic relationships between instances and sometimes undesirably repels the anchor from semantically similar samples, termed "false negatives". In this work, we show that the unfavorable effect of false negatives is more significant for large-scale datasets with more semantic concepts. To address the issue, we propose a novel self-supervised contrastive learning framework that incrementally detects and explicitly removes false negative samples. Specifically, as training proceeds, our method dynamically detects an increasing number of high-quality false negatives, considering that the encoder gradually improves and the embedding space becomes more semantically structured. Next, we discuss two strategies to explicitly remove the detected false negatives during contrastive learning. Extensive experiments show that our framework outperforms other self-supervised contrastive learning methods on multiple benchmarks in limited-resource settings.
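A simplified sketch of false-negative removal in an InfoNCE-style loss: negatives whose similarity to the anchor exceeds a threshold are treated as likely false negatives and excluded from the denominator. Threshold-based detection is only one possible strategy and is an assumption of this sketch, as are the temperature and threshold values.

```python
import torch
import torch.nn.functional as F

def contrastive_loss_without_false_negatives(anchor, positive, negatives,
                                              temperature=0.2, fn_threshold=0.7):
    """anchor, positive: (N, D); negatives: (N, M, D); all L2-normalized."""
    pos = (anchor * positive).sum(dim=1, keepdim=True)               # (N, 1)
    neg = torch.einsum('nd,nmd->nm', anchor, negatives)              # (N, M)
    # Treat overly similar negatives as likely false negatives and drop them.
    neg = neg.masked_fill(neg > fn_threshold, float('-inf'))
    logits = torch.cat([pos, neg], dim=1) / temperature
    targets = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, targets)                          # positive sits at index 0
```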
Unsupervised image representations have significantly reduced the gap with supervised pretraining, notably with the recent achievements of contrastive learning methods. These contrastive methods typically work online and rely on a large number of explicit pairwise feature comparisons, which is computationally challenging. In this paper, we propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring to compute pairwise comparisons. Specifically, our method simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or "views") of the same image, instead of comparing features directly as in contrastive learning. Simply put, we use a "swapped" prediction mechanism where we predict the code of a view from the representation of another view. Our method can be trained with large and small batches and can scale to unlimited amounts of data. Compared to previous contrastive methods, our method is more memory efficient since it does not require a large memory bank or a special momentum network. In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements. We validate our findings by achieving 75.3% top-1 accuracy on ImageNet with ResNet-50, as well as surpassing supervised pretraining on all the considered transfer tasks.
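A condensed sketch of the swapped prediction mechanism described above: codes computed for one view (via a Sinkhorn-Knopp-style normalization) are predicted from the other view's prototype scores. Batch handling, epsilon, temperature, and the number of Sinkhorn iterations are illustrative values in this sketch, not the paper's exact configuration.

```python
import torch

@torch.no_grad()
def sinkhorn(scores, eps=0.05, n_iters=3):
    """scores: (B, K) prototype scores for one view; returns soft codes (B, K)."""
    q = torch.exp(scores / eps).t()              # (K, B)
    q = q / q.sum()
    K, B = q.shape
    for _ in range(n_iters):
        q = q / q.sum(dim=1, keepdim=True) / K   # normalize over prototypes
        q = q / q.sum(dim=0, keepdim=True) / B   # normalize over samples
    return (q * B).t()                           # rows sum to 1

def swapped_prediction_loss(scores_v1, scores_v2, temperature=0.1):
    q1, q2 = sinkhorn(scores_v1), sinkhorn(scores_v2)
    logp1 = torch.log_softmax(scores_v1 / temperature, dim=1)
    logp2 = torch.log_softmax(scores_v2 / temperature, dim=1)
    # Each view predicts the other view's codes ("swapped" prediction).
    return -0.5 * ((q1 * logp2).sum(dim=1).mean() + (q2 * logp1).sum(dim=1).mean())
```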
Few-shot learning remains a challenging problem, with unsatisfactory 1-shot accuracy on most real-world data. Here we present a different perspective on the data distribution in the feature space of a deep network and show how to exploit it for few-shot learning. First, we observe that nearest neighbors in the feature space have a high probability of belonging to the same class, while two random points from one class are generally not much closer to each other than to points from different classes. This observation suggests that classes in the feature space form sparse, loosely connected graphs rather than dense clusters. To exploit this property, we propose propagating the few labels to the unlabeled points and then using the kernel PCA reconstruction error as the decision boundary for each class's data distribution in the feature space. Using this approach, which we call "K-prop", we show largely improved few-shot learning performance (e.g., 83% accuracy for 1-shot 5-way classification on the RESISC45 satellite image dataset) on datasets for which the backbone network can be trained to a high nearest-neighbor same-class probability. We demonstrate this relationship using six different datasets.
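A hedged sketch of the classification rule just described: after label propagation, fit one kernel PCA per class on that class's (propagated) features and assign a test point to the class with the smallest reconstruction error. The kernel choice and the number of components are placeholder assumptions, not values from the abstract.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def fit_class_models(features, labels, n_components=5):
    """Fit one kernel PCA per class; n_components must be below the per-class sample count."""
    models = {}
    for c in np.unique(labels):
        kpca = KernelPCA(n_components=n_components, kernel='rbf',
                         fit_inverse_transform=True)
        kpca.fit(features[labels == c])
        models[c] = kpca
    return models

def predict(models, x):
    """x: (N, D). Returns the class with the minimum kernel-PCA reconstruction error."""
    classes = sorted(models)
    errors = []
    for c in classes:
        recon = models[c].inverse_transform(models[c].transform(x))
        errors.append(np.linalg.norm(x - recon, axis=1))
    return np.array(classes)[np.argmin(np.stack(errors, axis=1), axis=1)]
```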