We study unsupervised data selection for semi-supervised learning (SSL), where a large-scale unlabeled dataset is available and a small budget is allotted for acquiring labels on a subset of it. Existing SSL methods focus on learning a model that effectively integrates information from the given small labeled data and the large unlabeled data, whereas we focus on selecting the right data to annotate for SSL, without any label or task information. Intuitively, the instances to be labeled should collectively have maximum diversity and coverage of the downstream task, and individually have maximum information-propagation utility for SSL. We formalize these concepts in a three-step data-centric SSL method that improves FixMatch in stability and accuracy by 8% on CIFAR-10 (0.08% labeled) and by 14% on ImageNet-1K (0.2% labeled). It is also a universal framework that works with various SSL methods, delivering consistent performance gains. Our work shows that a small amount of compute spent on carefully selecting data for annotation brings large gains in annotation efficiency and model performance, without changing the learning pipeline. Our fully unsupervised data selection can be easily extended to other weakly supervised learning settings.
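To make the selection criterion concrete, the following is a minimal sketch of diversity-driven selection in a self-supervised feature space; it is not the paper's exact algorithm, and the function name and nearest-to-centroid rule are our illustrative assumptions. Clustering the unlabeled features and annotating one representative per cluster gives the labeled set collective coverage of the data distribution.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_for_labeling(features: np.ndarray, budget: int) -> np.ndarray:
    """Pick `budget` samples to annotate from (N, D) self-supervised features."""
    km = KMeans(n_clusters=budget, n_init=10).fit(features)
    chosen = []
    for k in range(budget):
        members = np.where(km.labels_ == k)[0]
        # one representative per cluster: the member nearest the centroid
        dists = np.linalg.norm(features[members] - km.cluster_centers_[k], axis=1)
        chosen.append(members[np.argmin(dists)])
    return np.array(chosen)
```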
This work studies the bias issue of pseudo-labeling, a naturally occurring and widespread phenomenon that is often overlooked by prior research. Pseudo-labels are generated when a classifier trained on source data is transferred to unlabeled target data. We observe heavily long-tailed pseudo-labels when the semi-supervised learning model FixMatch predicts on unlabeled data, even when the unlabeled data are curated to be balanced. Without intervention, the trained model inherits the bias of the pseudo-labels and ends up sub-optimal. To eliminate this model bias, we propose a simple yet effective method, DebiasMatch, comprising an adaptive debiasing module and an adaptive marginal loss. The strength of the debiasing and the size of the margins are adjusted automatically using an online-updated queue. Benchmarked on ImageNet-1K, DebiasMatch significantly surpasses the previous state of the art by more than 26% and 8.7% on semi-supervised learning (with 0.2% annotated data) and zero-shot learning tasks, respectively.
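As a rough illustration of how a queue-based debiasing step can work (in the spirit of logit adjustment; the class name, queue length, and temperature here are our assumptions, not the paper's exact module):

```python
from collections import deque

import torch
import torch.nn.functional as F

class PseudoLabelDebiaser:
    """Counteract long-tailed pseudo-labels with a running class marginal."""

    def __init__(self, num_classes: int, queue_len: int = 128, tau: float = 0.4):
        self.queue = deque(maxlen=queue_len)   # recent per-batch label marginals
        self.num_classes = num_classes
        self.tau = tau

    def __call__(self, logits: torch.Tensor) -> torch.Tensor:
        if self.queue:
            p_hat = torch.stack(list(self.queue)).mean(0)
        else:
            p_hat = torch.full((self.num_classes,), 1.0 / self.num_classes)
        # push logits away from over-predicted classes before pseudo-labeling
        debiased = logits - self.tau * torch.log(p_hat + 1e-8)
        self.queue.append(F.softmax(logits, dim=1).mean(0).detach())
        return debiased
```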
We introduce Consistent Assignment for Representation Learning (CARL), an unsupervised learning method for visual representations that combines ideas from self-supervised contrastive learning and deep clustering. Viewing contrastive learning from a clustering perspective, CARL learns unsupervised representations by learning a set of general prototypes that serve as energy anchors, enforcing that different views of a given image are assigned to the same prototype. Unlike contemporaneous work on contrastive learning with deep clustering, CARL learns this set of general prototypes in an online fashion with gradient descent, without resorting to non-differentiable algorithms or k-means to solve the cluster assignment problem. CARL surpasses its competitors on many representation learning benchmarks, including linear evaluation, semi-supervised learning, and transfer learning.
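A minimal sketch of consistent assignment over learnable prototypes, assuming L2-normalized embeddings and a symmetric cross-entropy between the two views' assignments (our simplification, not CARL's exact objective):

```python
import torch
import torch.nn.functional as F

def consistent_assignment_loss(z1, z2, prototypes, temp=0.1):
    """z1, z2: (B, D) normalized embeddings of two views; prototypes: (K, D)."""
    protos = F.normalize(prototypes, dim=1)
    p1 = F.softmax(z1 @ protos.t() / temp, dim=1)   # view-1 prototype assignments
    p2 = F.softmax(z2 @ protos.t() / temp, dim=1)   # view-2 prototype assignments
    # each view's assignment is the (stop-gradient) target for the other, so
    # both views are pushed toward the same prototype, end to end by SGD
    loss = -0.5 * ((p1.detach() * p2.clamp_min(1e-8).log()).sum(1).mean()
                   + (p2.detach() * p1.clamp_min(1e-8).log()).sum(1).mean())
    return loss
```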
Recent advances in contrastive learning have demonstrated excellent performance. However, the vast majority of methods are restricted to the closed-world setting. In this paper, we enrich the landscape of representation learning by tapping into an open-world setting, where unlabeled samples of novel classes can emerge naturally in the wild. To bridge the gap, we introduce a new learning framework, open-world contrastive learning (OpenCon). OpenCon tackles the challenge of learning compact representations for both known and novel classes, and facilitates novelty discovery along the way. We demonstrate the effectiveness of OpenCon on challenging benchmark datasets and establish competitive performance. On the ImageNet dataset, OpenCon outperforms the current best method by 11.9% and 7.4% in novel and overall classification accuracy, respectively. We hope our work opens new doors for future research on this important problem.
Contrastive learning has recently shown great potential for unsupervised visual representation learning. Existing studies in this line mainly focus on intra-image invariance learning: rich intra-image transformations are typically used to construct positive pairs, and a contrastive loss then maximizes their agreement. The merits of inter-image invariance, by contrast, remain much less explored. A major obstacle to exploiting inter-image invariance is that it is unclear how to reliably construct inter-image positive pairs, and how to further derive effective supervision from them, since no pair annotations are available. In this work, we present a comprehensive empirical study to better understand the role of inter-image invariance learning through its three main constituent components: pseudo-label maintenance, sampling strategy, and decision-boundary design. To facilitate the study, we introduce a unified, generic framework that supports integrating unsupervised intra- and inter-image invariance learning. Through carefully designed comparisons and analyses, several valuable observations are revealed: 1) online labels converge faster than offline labels; 2) semi-hard negative samples are more reliable and less biased than hard negative samples; 3) a less stringent decision boundary is more favorable for inter-image invariance learning. With all the obtained recipes, our final model, InterCLR, shows consistent improvements over state-of-the-art intra-image invariance learning methods on multiple standard benchmarks. We hope this work provides useful experience for devising effective unsupervised inter-image invariance learning. Code: https://github.com/open-mmlab/mmselfsup.
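Observation 2 can be made concrete with a small sketch of semi-hard negative mining; the band boundaries below are illustrative assumptions, not the paper's settings:

```python
import torch

def semi_hard_negatives(query, bank, num_neg=256, skip_hardest=64):
    """query: (D,) and bank: (N, D), both L2-normalized.
    Skip the most similar (hardest) candidates, which are the likeliest
    false negatives, and keep the next band as semi-hard negatives."""
    sims = bank @ query
    order = sims.argsort(descending=True)   # hardest first
    return order[skip_hardest:skip_hardest + num_neg]
```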
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity. Despite the promise, the performance of PLL often lags behind the supervised counterpart. In this work, we bridge the gap by addressing two key research challenges in PLL -- representation learning and label disambiguation -- in one coherent framework. Specifically, our proposed framework PiCO consists of a contrastive learning module along with a novel class prototype-based label disambiguation algorithm. PiCO produces closely aligned representations for examples from the same classes and facilitates label disambiguation. Theoretically, we show that these two components are mutually beneficial, and can be rigorously justified from an expectation-maximization (EM) algorithm perspective. Moreover, we study a challenging yet practical noisy partial label learning setup, where the ground-truth may not be included in the candidate set. To remedy this problem, we present an extension PiCO+ that performs distance-based clean sample selection and learns robust classifiers by a semi-supervised contrastive learning algorithm. Extensive experiments demonstrate that our proposed methods significantly outperform the current state-of-the-art approaches in standard and noisy PLL tasks and even achieve comparable results to fully supervised learning.
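A hedged sketch of the prototype-based disambiguation idea, assuming normalized embeddings and a moving-average prototype update (our simplification of the mechanism described above, not PiCO's exact algorithm):

```python
import torch
import torch.nn.functional as F

def disambiguate(z, candidate_mask, prototypes):
    """z: (B, D) normalized embeddings; candidate_mask: (B, C) bool;
    prototypes: (C, D) normalized. Returns a (B,) pseudo-label per example,
    restricted to its candidate set: the most similar candidate prototype."""
    sim = z @ prototypes.t()
    sim = sim.masked_fill(~candidate_mask, float("-inf"))
    return sim.argmax(dim=1)

def update_prototypes(prototypes, z, labels, momentum=0.99):
    """Moving-average update of class prototypes from current pseudo-labels."""
    for c in labels.unique():
        mean_z = z[labels == c].mean(0)
        prototypes[c] = F.normalize(
            momentum * prototypes[c] + (1 - momentum) * mean_z, dim=0)
    return prototypes
```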
We are interested in representation learning in self-supervised, supervised, and semi-supervised settings. Prior work applying the mean-shift idea to self-supervised learning (MSF) generalizes BYOL by pulling a query image closer not only to its other augmentation, but also to the nearest neighbors (NNs) of that augmentation. We argue that learning can benefit from choosing neighbors that are far away yet still semantically related to the query. We therefore propose to generalize the MSF algorithm by constraining the search space of nearest neighbors. We show that our method outperforms MSF in the SSL setting when the constraint uses a different augmentation of the image, and outperforms PAWS in the semi-supervised setting, with fewer training resources, when the constraint ensures that the NNs have the same pseudo-label as the query.
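A minimal sketch of the constrained neighbor search for the semi-supervised variant described above; the memory-bank interface and names are illustrative assumptions:

```python
import torch

def constrained_nn(query, bank, bank_labels, query_label, k=5):
    """query: (D,); bank: (N, D) normalized memory-bank embeddings.
    Restrict candidates to entries sharing the query's pseudo-label, so even
    distant neighbors remain semantically related to the query."""
    allowed = (bank_labels == query_label).nonzero(as_tuple=True)[0]
    sims = bank[allowed] @ query
    topk = sims.topk(min(k, allowed.numel())).indices
    return allowed[topk]
```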
Machine learning models often encounter samples that deviate from the training distribution. Failure to recognize an out-of-distribution (OOD) sample, and consequently assigning that sample to a class label, significantly compromises the reliability of a model. The problem has gained significant attention due to its importance for deploying models safely in open-world settings. Detecting OOD samples is challenging because of the intractability of modeling all possible unknown distributions. To date, several research domains have tackled the problem of detecting unfamiliar samples, including anomaly detection, novelty detection, one-class learning, open-set recognition, and out-of-distribution detection. Despite similar and shared concepts, out-of-distribution, open-set, and anomaly detection have been investigated independently. Accordingly, these research avenues have not cross-pollinated, creating research barriers. While some surveys intend to give an overview of these approaches, they seem to focus only on a specific domain without examining the relationships between domains. This survey aims to provide a cross-domain, comprehensive review of numerous eminent works in the respective areas while identifying their commonalities. Researchers can benefit from the overview of research advances across fields and develop future methodologies synergistically. Furthermore, to our knowledge, while there are surveys on anomaly detection or one-class learning, there is no comprehensive, up-to-date survey on out-of-distribution detection, which our survey covers extensively. Finally, adopting a unified cross-domain perspective, we discuss and shed light on future lines of research, intending to bring these fields closer together.
Semi-supervised learning (SSL) is one of the dominant approaches to addressing the annotation bottleneck of supervised learning. Recent SSL methods can effectively leverage a large repository of unlabeled data to improve performance while relying on a small set of labeled data. A common assumption in most SSL methods is that the labeled and unlabeled data come from the same underlying distribution. However, this is hardly the case in many real-world scenarios, which limits their applicability. In this work, we instead attempt to solve the recently proposed, challenging open-world SSL problem, which does not make this assumption. In open-world SSL, the objective is to recognize samples of known classes and, at the same time, detect and cluster samples belonging to novel classes present in the unlabeled data. This work introduces OpenLDN, which utilizes a pairwise similarity loss to discover novel classes. Using a bi-level optimization rule, this pairwise similarity loss exploits the information available in the labeled set to implicitly cluster novel-class samples while recognizing samples from the known classes. After discovering novel classes, OpenLDN transforms the open-world SSL problem into a standard SSL problem, achieving additional performance gains with existing SSL methods. Our extensive experiments demonstrate that OpenLDN outperforms current state-of-the-art methods on multiple popular classification benchmarks while offering a better accuracy/training-time trade-off.
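For intuition, a hedged sketch of a pairwise-similarity objective of this kind; OpenLDN obtains the similarity targets through bi-level optimization, which is omitted here, and the names are illustrative:

```python
import torch

def pairwise_similarity_loss(p, sim_targets):
    """p: (B, C) softmax predictions; sim_targets: (B, B) in [0, 1], with 1 for
    pairs believed to share a class. The agreement between two probability
    vectors (their dot product) is trained to match the pairwise target."""
    pairwise = p @ p.t()
    return torch.nn.functional.binary_cross_entropy(
        pairwise.clamp(1e-6, 1 - 1e-6), sim_targets)
```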
Meta-learning has become a practical approach to few-shot image classification, where "a strategy to learn a classifier" is meta-learned on labeled base classes and can be applied to tasks with novel classes. We remove the requirement of base-class labels and learn generalizable embeddings via Unsupervised Meta-Learning (UML). Specifically, task episodes are constructed with data augmentations of unlabeled base classes during meta-training, and we apply embedding-based classifiers to novel tasks with labeled few-shot examples during meta-testing. We observe that two elements play important roles in UML: the way tasks are sampled and the way similarity between instances is measured. We thus obtain a strong baseline with two simple modifications: a sufficient sampling strategy that efficiently constructs multiple tasks per episode, and a semi-normalized similarity. We then exploit the characteristics of tasks from two directions for further improvement. First, synthesized confusing instances are incorporated to help extract more discriminative embeddings. Second, we utilize an additional task-specific embedding transformation as an auxiliary component during meta-training to promote the generalization ability of the pre-adapted embeddings. Experiments on few-shot learning benchmarks verify that our approaches outperform previous UML methods and achieve comparable or even better performance than their supervised variants.
Semi-supervised learning (SSL) is one of the most promising paradigms for circumventing the expensive labeling cost of building high-performance models. Most existing SSL methods conventionally assume that labeled and unlabeled data are drawn from the same (class) distribution. In practice, however, unlabeled data may include out-of-class samples: samples that cannot be assigned a one-hot label from the closed set of classes in the labeled data, i.e., the unlabeled data is open-set. In this paper, we introduce OpenCoS, a method for handling this realistic semi-supervised learning scenario that builds on a recent framework for self-supervised visual representation learning. Specifically, we first observe that out-of-class samples in an open-set unlabeled dataset can be identified effectively via self-supervised contrastive learning. OpenCoS then uses this information to overcome the failure modes of existing state-of-the-art semi-supervised methods, assigning one-hot pseudo-labels and soft labels to the identified in-class and out-of-class unlabeled data, respectively. Our extensive experimental results show the effectiveness of OpenCoS in fixing state-of-the-art semi-supervised methods so that they suit diverse scenarios involving open-set unlabeled data.
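A rough sketch of the identification step, assuming a simple maximum class-similarity score in the contrastive feature space (the threshold and scoring rule are our assumptions, not OpenCoS's exact criterion):

```python
import torch
import torch.nn.functional as F

def out_of_class_mask(z_unlabeled, z_labeled, y_labeled, threshold=0.5):
    """All embeddings L2-normalized. Flags unlabeled samples whose embedding is
    far from every labeled class's mean embedding as out-of-class."""
    class_means = torch.stack([
        F.normalize(z_labeled[y_labeled == c].mean(0), dim=0)
        for c in y_labeled.unique()])
    max_sim = (z_unlabeled @ class_means.t()).max(dim=1).values
    return max_sim < threshold
```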
As an important data selection schema, active learning emerges as an essential component when iterating an Artificial Intelligence (AI) model. It becomes even more critical given the dominance in applications of deep neural network based models, which comprise large numbers of parameters and are data hungry. Despite its indispensable role in developing AI models, research on active learning is not as intensive as that on other research directions. In this paper, we present a review of active learning through deep active learning approaches from the following perspectives: 1) technical advancements in active learning, 2) applications of active learning in computer vision, 3) industrial systems leveraging or with potential to leverage active learning for data iteration, 4) current limitations and future research directions. We expect this paper to clarify the significance of active learning in a modern AI model manufacturing process and to bring additional research attention to active learning. By addressing data automation challenges and coping with automated machine learning systems, active learning will facilitate the democratization of AI technologies by boosting model production at scale.
Unsupervised image representations have significantly reduced the gap with supervised pretraining, notably with the recent achievements of contrastive learning methods. These contrastive methods typically work online and rely on a large number of explicit pairwise feature comparisons, which is computationally challenging. In this paper, we propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring the computation of pairwise comparisons. Specifically, our method simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or "views") of the same image, instead of comparing features directly as in contrastive learning. Simply put, we use a "swapped" prediction mechanism where we predict the code of a view from the representation of another view. Our method can be trained with large and small batches and can scale to unlimited amounts of data. Compared to previous contrastive methods, our method is more memory efficient since it does not require a large memory bank or a special momentum network. In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements. We validate our findings by achieving 75.3% top-1 accuracy on ImageNet with ResNet-50, as well as surpassing supervised pretraining on all the considered transfer tasks.
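The swapped prediction mechanism can be sketched as follows; the full method computes the target codes with an equipartition (Sinkhorn-Knopp) step, which this simplified version replaces with a plain softmax:

```python
import torch
import torch.nn.functional as F

def swapped_prediction_loss(z1, z2, prototypes, temp=0.1):
    """z1, z2: (B, D) normalized embeddings of two views; prototypes: (K, D)."""
    scores1, scores2 = z1 @ prototypes.t(), z2 @ prototypes.t()
    q1 = F.softmax(scores1 / temp, dim=1).detach()   # code of view 1 (target)
    q2 = F.softmax(scores2 / temp, dim=1).detach()   # code of view 2 (target)
    # predict each view's code from the *other* view's representation
    logp1 = F.log_softmax(scores1 / temp, dim=1)
    logp2 = F.log_softmax(scores2 / temp, dim=1)
    return -0.5 * ((q1 * logp2).sum(1).mean() + (q2 * logp1).sum(1).mean())
```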
This paper presents Prototypical Contrastive Learning (PCL), an unsupervised representation learning method that bridges contrastive learning with clustering. PCL not only learns low-level features for the task of instance discrimination, but more importantly, it encodes semantic structures discovered by clustering into the learned embedding space. Specifically, we introduce prototypes as latent variables to help find the maximum-likelihood estimation of the network parameters in an Expectation-Maximization framework. We iteratively perform E-step as finding the distribution of prototypes via clustering and M-step as optimizing the network via contrastive learning. We propose ProtoNCE loss, a generalized version of the InfoNCE loss for contrastive learning, which encourages representations to be closer to their assigned prototypes. PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks with substantial improvement in low-resource transfer learning. Code and pretrained models are available at https://github.com/salesforce/PCL.
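In its simplest form, the prototype term is an InfoNCE-style cross-entropy against the cluster assignment from the E-step; this sketch drops the per-cluster concentration weighting of the full ProtoNCE loss:

```python
import torch
import torch.nn.functional as F

def proto_nce(z, assignments, prototypes, temp=0.1):
    """z: (B, D) normalized embeddings; assignments: (B,) long tensor of
    cluster ids from k-means (the E-step); prototypes: (K, D) normalized
    centroids. Pulls each embedding toward its own prototype (the M-step)."""
    logits = z @ prototypes.t() / temp
    return F.cross_entropy(logits, assignments)
```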
Active learning (AL) algorithms aim to identify an optimal subset of data for annotation, such that deep neural networks (DNNs) achieve better performance when trained on this labeled subset. AL is especially impactful in industrial-scale settings, where data labeling costs are high and practitioners use every available tool to improve model performance. The recent success of self-supervised pretraining (SSP) highlights the importance of harnessing abundant unlabeled data to boost model performance. By combining AL with SSP, we can make use of unlabeled data while simultaneously labeling and training on particularly informative samples. In this work, we study the combination of AL and SSP on ImageNet. We find that performance on small toy datasets -- the typical benchmark setting in the literature -- is not representative of performance on ImageNet, owing to the class-imbalanced samples selected by an active learner. Among the existing baselines we test, popular AL algorithms fail to outperform random sampling across a variety of small- and large-scale settings. To remedy the class-imbalance problem, we propose Balanced Selection (BASE), a simple, scalable AL algorithm that consistently outperforms existing methods by selecting more class-balanced samples. Our code is available at: https://github.com/zeyademam/active_learning.
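As a hedged illustration of the balancing idea (BASE's actual criterion favors samples near the class decision boundaries; the least-confident rule below is our stand-in):

```python
import numpy as np

def balanced_select(probs: np.ndarray, budget: int) -> np.ndarray:
    """probs: (N, C) softmax outputs on the unlabeled pool. Splits the budget
    evenly across predicted classes, taking the least confident samples in
    each, so the labeled set stays class-balanced."""
    preds, conf = probs.argmax(1), probs.max(1)
    per_class = budget // probs.shape[1]
    chosen = []
    for c in range(probs.shape[1]):
        pool = np.where(preds == c)[0]
        chosen.extend(pool[np.argsort(conf[pool])][:per_class])
    return np.array(chosen)
```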
Jitendra Malik once said, "Supervision is the opium of the AI researcher". Most deep learning techniques heavily rely on extreme amounts of human labels to work effectively. In today's world, the rate of data creation greatly surpasses the rate of data annotation. Full reliance on human annotations is just a temporary means to solve current closed problems in AI. In reality, only a tiny fraction of data is annotated. Annotation Efficient Learning (AEL) is a study of algorithms to train models effectively with fewer annotations. To thrive in AEL environments, we need deep learning techniques that rely less on manual annotations (e.g., image, bounding-box, and per-pixel labels), but learn useful information from unlabeled data. In this thesis, we explore five different techniques for handling AEL.
This paper proposes Mutual Information Regularized Assignment (MIRA), a pseudo-labeling algorithm for unsupervised representation learning inspired by information maximization. We formulate online pseudo-labeling as an optimization problem to find pseudo-labels that maximize the mutual information between the label and data while being close to a given model probability. We derive a fixed-point iteration method and prove its convergence to the optimal solution. In contrast to baselines, MIRA combined with pseudo-label prediction enables a simple yet effective clustering-based representation learning without incorporating extra training techniques or artificial constraints such as sampling strategy, equipartition constraints, etc. With relatively small training epochs, representation learned by MIRA achieves state-of-the-art performance on various downstream tasks, including the linear/k-NN evaluation and transfer learning. Especially, with only 400 epochs, our method applied to ImageNet dataset with ResNet-50 architecture achieves 75.6% linear evaluation accuracy.
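One natural way to write the pseudo-labeling objective described here (our notation and trade-off parameter, not necessarily the paper's exact formulation):

$$\max_{\{q_i\}}\; H\!\Big(\tfrac{1}{N}\sum_{i=1}^{N} q_i\Big) \;-\; \tfrac{1}{N}\sum_{i=1}^{N} H(q_i) \;-\; \beta\,\tfrac{1}{N}\sum_{i=1}^{N}\mathrm{KL}\big(q_i \,\|\, p_i\big),$$

where $q_i$ is the pseudo-label distribution of sample $i$, $p_i$ the model probability, and $H$ the entropy; the first two terms estimate the mutual information between label and data, and $\beta$ controls closeness to the model.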
Can we automatically group images into semantically meaningful clusters when ground-truth annotations are absent? The task of unsupervised image classification remains an important and open challenge in computer vision. Several recent approaches have tried to tackle this problem in an end-to-end fashion. In this paper, we deviate from recent works and advocate a two-step approach where feature learning and clustering are decoupled. First, a self-supervised task from representation learning is employed to obtain semantically meaningful features. Second, we use the obtained features as a prior in a learnable clustering approach. In doing so, we remove the ability for cluster learning to depend on low-level features, which is present in current end-to-end learning approaches. Experimental evaluation shows that we outperform state-of-the-art methods by large margins, in particular +26.6% on CIFAR10, +25.0% on CIFAR100-20 and +21.3% on STL10 in terms of classification accuracy. Furthermore, our method is the first to perform well on a large-scale dataset for image classification. In particular, we obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime without the use of any ground-truth annotations. The code is made publicly available here.
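The second step can be sketched with a neighbor-consistency objective of the following form (an illustrative rendering, with an assumed entropy weight, rather than the exact published loss): nearest neighbors mined in the self-supervised feature space should receive the same cluster assignment.

```python
import torch

def neighbor_consistency_loss(p_anchor, p_neighbor, entropy_weight=5.0):
    """p_anchor, p_neighbor: (B, K) softmax cluster probabilities of an image
    and one of its mined nearest neighbors from the self-supervised features."""
    # neighbors should agree on a cluster ...
    consistency = -torch.log((p_anchor * p_neighbor).sum(1) + 1e-8).mean()
    # ... while the batch marginal stays spread out to avoid collapse
    mean_p = p_anchor.mean(0)
    entropy = -(mean_p * torch.log(mean_p + 1e-8)).sum()
    return consistency - entropy_weight * entropy
```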
Clustering is a fundamental machine learning task that has been studied extensively in the literature. Classic clustering methods follow the assumption that data are represented as features in vectorized form through various representation learning techniques. As data become increasingly intricate and complex, shallow (traditional) clustering methods can no longer handle high-dimensional data types. With the great success of deep learning, and especially deep unsupervised learning, many representation learning techniques with deep architectures have been proposed over the past decade. Recently, the concept of deep clustering -- jointly optimizing representation learning and clustering -- has been proposed and has attracted growing attention in the community. Motivated by the tremendous success of deep learning in clustering, one of the most fundamental machine learning tasks, and by the large number of recent advances in this direction, in this paper we conduct a comprehensive survey of deep clustering and its state-of-the-art methods. We summarize the essential components of deep clustering and categorize existing methods by the way they design interactions between deep representation learning and clustering. Moreover, this survey provides popular benchmark datasets, evaluation metrics, and open-source implementations to clearly illustrate the various experimental settings. Last but not least, we discuss practical applications of deep clustering and suggest challenging topics that deserve further study as future directions.
In novel class discovery (NCD), the goal is to find new classes in an unlabeled set, given a labeled set of known but different classes. Although NCD has recently attracted attention from the community, no framework has yet been proposed for heterogeneous tabular data, despite it being a very common data representation. In this paper, we propose TabularNCD, a new method for discovering novel classes in tabular data. We show a way to extract knowledge from already-known classes to guide the discovery of novel classes in tabular data containing heterogeneous variables. Part of this process is handled by a new method for defining pseudo-labels, and we follow recent findings in multi-task learning to optimize a joint objective function. Our method demonstrates that NCD is applicable not only to images but also to heterogeneous tabular data. Extensive experiments evaluate our method and demonstrate its effectiveness against 3 competitors on 7 diverse public classification datasets.