In this work, we propose to use out-of-distribution samples, i.e., unlabeled samples from outside the target classes, to improve few-shot learning. Specifically, we exploit easily available out-of-distribution samples to drive the classifier away from irrelevant features by maximizing the distance from prototypes to out-of-distribution samples while minimizing the distance to in-distribution samples (i.e., support and query data). Our approach is simple to implement, agnostic to the feature extractor, lightweight without any extra pre-training cost, and applicable to both inductive and transductive settings. Extensive experiments on various standard benchmarks demonstrate that the proposed method consistently improves the performance of pretrained networks with different architectures.
translated by Google Translate
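The core objective above (pull support/query features toward class prototypes, push out-of-distribution samples away) can be sketched as a toy loss. The function name, the hinge/margin form, and all values below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def prototype_ood_loss(prototypes, in_dist, ood, margin=1.0):
    # squared Euclidean distance from each sample to each class prototype
    d_in = ((in_dist[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    d_ood = ((ood[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    # pull in-distribution (support/query) samples toward their nearest
    # prototype; push OOD samples away, here via a hinge with a margin
    return d_in.min(axis=1).mean() + np.maximum(margin - d_ood.min(axis=1), 0.0).mean()
```

With this shape of objective, OOD samples that already sit far from every prototype contribute nothing, while OOD samples near a prototype incur a penalty.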
The focus of recent meta-learning research has been on the development of learning algorithms that can quickly adapt to test-time tasks with limited data and low computational cost. Few-shot learning is widely used as one of the standard benchmarks in meta-learning. In this work, we show that a simple baseline: learning a supervised or self-supervised representation on the meta-training set, followed by training a linear classifier on top of this representation, outperforms state-of-the-art few-shot learning methods. An additional boost can be achieved through the use of self-distillation. This demonstrates that using a good learned embedding model can be more effective than sophisticated meta-learning algorithms. We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
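The baseline this abstract describes, a linear classifier trained on top of a frozen embedding, can be sketched in a few lines. `frozen_embed` is a hypothetical stand-in for the pretrained backbone, and all hyperparameters are arbitrary:

```python
import numpy as np

def frozen_embed(x, w_embed):
    # stand-in for a frozen pretrained backbone: a fixed projection plus ReLU
    return np.maximum(x @ w_embed, 0.0)

def train_linear_classifier(feats, labels, n_classes, lr=0.1, steps=500):
    # plain softmax regression on top of the frozen features
    W = np.zeros((feats.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * feats.T @ (p - onehot) / len(feats)
    return W
```

On a toy two-cluster problem this recovers near-perfect accuracy, which mirrors the abstract's point: the quality of the embedding, not the meta-learner, does most of the work.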
Most existing works in few-shot learning rely on meta-learning a network on a large base dataset, which is usually from the same domain as the target dataset. We tackle the problem of cross-domain few-shot learning, where there is a large shift between the base and target domains. The problem of cross-domain few-shot recognition with unlabeled target data is largely unaddressed in the literature. STARTUP was the first method to tackle this problem using self-training. However, it uses a fixed teacher pretrained on the labeled base dataset to create soft labels for the unlabeled target samples. Since the base and target datasets come from different domains, projecting the target images into the class space of the base dataset with a fixed pretrained model may be sub-optimal. We propose a simple dynamic distillation-based approach to facilitate learning from unlabeled images of the novel/base dataset. We impose consistency regularization by computing predictions for weakly augmented versions of the unlabeled images from the teacher network and matching them with predictions for strongly augmented versions of the same images from the student network. The parameters of the teacher network are updated as an exponential moving average of the student network's parameters. We show that the proposed network learns representations that can be easily adapted to the target domain, even though it has not been trained on target-specific classes during the pretraining phase. Our model outperforms the current state-of-the-art method by 3.6% for 5-shot classification on the BSCD-FSL benchmark, and shows competitive performance on traditional in-domain few-shot learning tasks.
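The two mechanisms named above, the EMA teacher update and weak-to-strong consistency regularization, reduce to a few lines. The momentum value, the dictionary-of-parameters representation, and the function names are illustrative, not the paper's:

```python
import numpy as np

def ema_update(teacher, student, momentum=0.99):
    # teacher parameters follow an exponential moving average of the student's
    return {k: momentum * teacher[k] + (1 - momentum) * student[k] for k in teacher}

def consistency_loss(teacher_probs_weak, student_probs_strong):
    # cross-entropy between the teacher's prediction on a weakly augmented view
    # and the student's prediction on a strongly augmented view of the same image
    return -(teacher_probs_weak * np.log(student_probs_strong + 1e-12)).sum(-1).mean()
```

Because the teacher trails the student slowly, its pseudo-labels adapt to the target domain instead of staying frozen at the base-dataset solution, which is the contrast with STARTUP drawn above.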
Few-shot learning (FSL) aims to generate a classifier using limited labeled examples. Many existing works take the meta-learning approach, constructing a few-shot learner that can learn from few examples to generate a classifier. Typically, the few-shot learner is constructed or meta-trained by sampling multiple few-shot tasks in turn and optimizing the few-shot learner's performance in generating classifiers for those tasks. The performance is measured by how well the resulting classifiers classify the test (i.e., query) examples of those tasks. In this paper, we point out two potential weaknesses of this approach. First, the sampled query examples may not provide sufficient supervision for meta-training the few-shot learner. Second, the effectiveness of meta-learning diminishes sharply with increasing numbers of shots. To resolve these issues, we propose a novel meta-training objective for the few-shot learner, which encourages the few-shot learner to generate classifiers that perform like strong classifiers. Concretely, we associate each sampled few-shot task with a strong classifier trained with ample labeled examples. The strong classifiers can be seen as the target classifiers that we hope the few-shot learner will generate given few-shot examples, and we use them to supervise the few-shot learner. We present an efficient way to construct the strong classifiers, making our proposed objective an easy plug-and-play term for existing meta-learning-based FSL methods. We validate our approach, LastShot, in combination with many representative meta-learning methods. On several benchmark datasets, our approach leads to notable improvements across a variety of tasks. More importantly, with our approach, meta-learning-based FSL methods can outperform non-meta-learning-based methods at different numbers of shots.
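A hedged sketch of the plug-in objective described above: standard query cross-entropy plus a term pulling the few-shot classifier's predictions toward a strong classifier's. The KL form and the weight `alpha` are assumptions for illustration, not necessarily LastShot's exact term:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def strong_teacher_loss(fewshot_logits, strong_logits, query_labels, alpha=0.5):
    p_few = softmax(fewshot_logits)
    p_strong = softmax(strong_logits)
    # standard cross-entropy on the query examples
    ce = -np.log(p_few[np.arange(len(query_labels)), query_labels] + 1e-12).mean()
    # pull the few-shot classifier's predictions toward the strong classifier's
    kl = (p_strong * (np.log(p_strong + 1e-12) - np.log(p_few + 1e-12))).sum(-1).mean()
    return ce + alpha * kl
```

The second term supplies dense supervision even when the sampled query set is small, which is the first weakness the abstract identifies.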
Learning novel classes from very few labeled samples has attracted increasing attention in the machine learning area. Recent paradigms based on meta-learning or transfer learning show that acquiring a good feature space can be an effective solution for achieving favorable performance on few-shot tasks. In this paper, we propose a simple but effective paradigm that decouples the tasks of learning feature representations and classifiers, and learns the feature embedding architecture only from the base classes via a typical transfer-learning training strategy. To maintain both the generalization ability across base and novel classes and the discrimination ability within each class, we propose a dual-path feature learning scheme that effectively combines structural similarity with contrastive feature construction. In this way, intra-class alignment and inter-class uniformity can be well balanced, leading to improved performance. Experiments on three popular benchmarks show that, when combined with a simple prototype-based classifier, our method can still achieve promising results for both standard and generalized few-shot problems in either the inductive or transductive inference setting.
Existing few-shot learning (FSL) methods rely on training with a large labeled dataset, which prevents them from leveraging abundant unlabeled data. From an information-theoretic perspective, we propose an effective unsupervised FSL method that learns representations with self-supervision. Following the InfoMax principle, our method learns comprehensive representations by capturing the intrinsic structure of the data. Specifically, we maximize the mutual information (MI) between instances and their representations with a low-bias MI estimator to perform self-supervised pre-training. Unlike supervised pre-training, which focuses on the discriminative features of the seen classes, our self-supervised model has less bias toward the seen classes, resulting in better generalization to unseen classes. We explain that supervised pre-training and self-supervised pre-training are in fact maximizing different MI objectives. Extensive experiments are further conducted to analyze their FSL performance under various training settings. Surprisingly, the results show that self-supervised pre-training can outperform supervised pre-training under appropriate conditions. Compared with state-of-the-art FSL methods, our approach achieves comparable performance on widely used FSL benchmarks without any labels of the base classes.
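One common low-bias lower bound on the mutual information the abstract invokes is InfoNCE; the sketch below is a generic InfoNCE loss over instance/representation pairs, not necessarily the paper's exact estimator:

```python
import numpy as np

def info_nce(z, z_pos, temperature=0.1):
    # cosine similarities between each instance and every candidate representation
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    z_pos = z_pos / np.linalg.norm(z_pos, axis=1, keepdims=True)
    logits = z @ z_pos.T / temperature
    # the matching pair sits on the diagonal; all other pairs act as negatives
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(logp).mean()
```

Minimizing this loss maximizes a lower bound on the MI between the two views, which is the self-supervised pre-training objective described above.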
Few-shot learning aims to fast adapt a deep model from a few examples. While pre-training and meta-training can create deep models powerful for few-shot generalization, we find that pre-training and meta-training focus respectively on cross-domain transferability and cross-task transferability, which restricts their data efficiency in the entangled settings of domain shift and task shift. We thus propose the Omni-Training framework to seamlessly bridge pre-training and meta-training for data-efficient few-shot learning. Our first contribution is a tri-flow Omni-Net architecture. Besides the joint representation flow, Omni-Net introduces two parallel flows for pre-training and meta-training, responsible for improving domain transferability and task transferability respectively. Omni-Net further coordinates the parallel flows by routing their representations via the joint flow, enabling knowledge transfer across flows. Our second contribution is the Omni-Loss, which introduces a self-distillation strategy separately on the pre-training and meta-training objectives for boosting knowledge transfer throughout different training stages. Omni-Training is a general framework that accommodates many existing algorithms. Evaluations justify that our single framework consistently and clearly outperforms the individual state-of-the-art methods on both cross-task and cross-domain settings in a variety of classification, regression and reinforcement learning problems.
Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet. We confirm these results on another few-shot dataset that we introduce in this paper based on CIFAR100. Our code is publicly available at https://github.com/ElementAI/TADAM.
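Metric scaling amounts to multiplying the (negative) distances by a scale factor before the softmax; the toy function below shows how the scale sharpens the predictive distribution (the function name and the values of `alpha` are illustrative):

```python
import numpy as np

def scaled_metric_probs(query, prototypes, alpha):
    # squared Euclidean distance from each query to each class prototype
    d = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    logits = -alpha * d          # alpha is the metric-scaling factor
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

With a larger `alpha`, the same distances produce a much more peaked distribution, which in turn changes the effective gradients during episodic training, consistent with the abstract's claim that scaling changes the nature of parameter updates.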
The task of few-shot learning (FSL) aims to transfer the knowledge learned from base categories with sufficient labelled data to novel categories with scarce known information. It is currently an important research question and has great practical value in real-world applications. Although extensive previous efforts have been made on few-shot learning tasks, we emphasize that most existing methods did not take into account the distributional shift caused by sample selection bias in the FSL scenario. Such a selection bias can induce spurious correlation between the semantic causal features, which are causally and semantically related to the class label, and the other non-causal features. Critically, the former should be invariant across changes in distributions, highly related to the classes of interest, and thus generalize well to novel classes, while the latter are not stable to changes in the distribution. To resolve this problem, we propose a novel data augmentation strategy dubbed PatchMix that can break this spurious dependency by replacing the patch-level information and supervision of the query images with random gallery images from classes different from the query ones. We theoretically show that such an augmentation mechanism, different from existing ones, is able to identify the causal features. To further make these features discriminative enough for classification, we propose Correlation-guided Reconstruction (CGR) and a Hardness-Aware module for instance discrimination and easier discrimination between similar classes. Moreover, such a framework can be adapted to the unsupervised FSL scenario.
Few-shot learning, especially few-shot image classification, has received increasing attention in recent years and has witnessed significant progress. Some recent studies implicitly show that many generic techniques or "tricks", such as data augmentation, pre-training, knowledge distillation and self-supervision, may greatly boost the performance of few-shot learning methods. Moreover, different works may employ different software platforms, different training schedules, different backbone architectures and even different input image sizes, making fair comparison difficult and leaving practitioners struggling with reproducibility. To address these situations, a comprehensive library for few-shot learning (LibFewShot) is proposed by re-implementing seventeen state-of-the-art methods in a unified framework with the same single codebase in PyTorch. Furthermore, based on LibFewShot, we provide comprehensive evaluations on multiple benchmark datasets with multiple backbone architectures to evaluate common pitfalls and the effects of different training tricks. In addition, given the recent doubts about the necessity of a meta- or episodic-training mechanism, our evaluation results show that such a mechanism is still necessary, especially when combined with pre-training. We hope our work can not only lower the barriers for beginners to work on few-shot learning, but also eliminate the effects of nontrivial tricks to facilitate intrinsic research on few-shot learning. The source code is available from https://github.com/rl-vig/libfewshot.
Learning with limited data is a key challenge for visual recognition. Many few-shot learning methods address this challenge by learning an instance embedding function from seen classes and applying the function to instances from unseen classes with limited labels. This style of transfer learning is task-agnostic: the embedding function is not learned to be optimally discriminative with respect to the unseen classes, where discerning among them leads to the target task. In this paper, we propose a novel approach to adapt the instance embeddings to the target classification task with a set-to-set function, yielding embeddings that are task-specific and discriminative. We empirically investigated various instantiations of such set-to-set functions and observed that the Transformer is most effective, as it naturally satisfies key properties of our desired model. We denote this model FEAT (few-shot embedding adaptation w/ Transformer) and validate it on both the standard few-shot classification benchmark and four extended few-shot learning settings with essential use cases, i.e., cross-domain, transductive, generalized few-shot learning, and low-shot learning. It achieved consistent improvements over baseline models as well as previous methods, and established new state-of-the-art results on two benchmarks.
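A minimal stand-in for the set-to-set adaptation idea: one parameter-free self-attention pass over the support-set embeddings. Real FEAT uses learned query/key/value projections, which are omitted here; the function name and scale are illustrative:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adapt_embeddings(support, scale=1.0):
    # one self-attention pass: each support embedding is re-expressed as a
    # similarity-weighted combination of the whole support set
    attn = softmax(scale * support @ support.T, axis=-1)
    return attn @ support
```

Because every output embedding depends on the entire support set, the adapted representation is task-specific, which is the property the abstract argues a task-agnostic embedding lacks.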
Few-shot classification requires adapting knowledge learned from a large annotated base dataset to recognize novel unseen classes, each represented by only a few labeled examples. In such a scenario, pre-training a high-capacity network on the large dataset and then finetuning it on the few examples leads to severe overfitting. At the same time, training a simple linear classifier on top of "frozen" features learned from the large labeled dataset fails to adapt the model to the properties of the novel classes, effectively inducing underfitting. In this paper, we propose an alternative to these two popular strategies. First, our method pseudo-labels the entire large dataset using the linear classifier trained on the novel classes. This effectively "hallucinates" the novel classes in the large dataset, despite the novel categories not being present in the base database (novel and base classes are disjoint). Then, it finetunes the entire model with a distillation loss on the pseudo-labeled base examples, in addition to the standard cross-entropy loss on the novel dataset. This step effectively trains the network to recognize contextual and appearance cues that are useful for novel-category recognition, but using the entire large-scale base dataset, thereby overcoming the inherent data-scarcity problem of few-shot learning. Despite the simplicity of the approach, we show that our method outperforms the state-of-the-art on four well-established few-shot classification benchmarks.
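The two-step recipe above (pseudo-label the base set with the novel-class classifier, then fine-tune with cross-entropy plus distillation) can be written schematically. The loss weighting `lam`, the function names, and all tensor shapes are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def joint_loss(novel_logits, novel_labels, student_base_logits, novel_clf_base_logits, lam=1.0):
    # standard cross-entropy on the few labeled novel examples
    p = softmax(novel_logits)
    ce = -np.log(p[np.arange(len(novel_labels)), novel_labels] + 1e-12).mean()
    # soft pseudo-labels "hallucinated" on base images by the novel-class classifier
    soft = softmax(novel_clf_base_logits)
    q = softmax(student_base_logits)
    distill = -(soft * np.log(q + 1e-12)).sum(-1).mean()
    return ce + lam * distill
```

The distillation term lets the full base dataset contribute gradient signal for the novel classes, which is how the approach sidesteps the data scarcity of the few-shot set.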
Few-shot learning (FSL) is a central problem in meta-learning, where learners must efficiently learn from few labeled examples. Within FSL, feature pre-training has recently become an increasingly popular strategy to significantly improve generalization performance. However, the contribution of pre-training is often overlooked and understudied, with limited theoretical understanding of its impact on meta-learning performance. Further, pre-training requires a consistent set of global labels shared across training tasks, which may be unavailable in practice. In this work, we address the above issues by first showing the connection between pre-training and meta-learning. We discuss why pre-training yields more robust meta-representations and connect the theoretical analysis to existing works and empirical results. Secondly, we introduce Meta Label Learning (MeLa), a novel meta-learning algorithm that learns task relations by inferring global labels across tasks. This allows us to exploit pre-training for FSL even when global labels are unavailable or ill-defined. Lastly, we introduce an augmented pre-training procedure that further improves the learned meta-representation. Empirically, MeLa outperforms existing methods across a diverse range of benchmarks, in particular under a more challenging setting where the number of training tasks is limited and labels are task-specific. We also provide an extensive ablation study to highlight its key properties.
Semi-supervised learning (SSL) is one of the dominant approaches to address the annotation bottleneck of supervised learning. Recent SSL methods can effectively leverage a large repository of unlabeled data to improve performance while relying on a small set of labeled data. A common assumption in most SSL methods is that the labeled and unlabeled data come from the same underlying data distribution. However, this is hardly the case in many real-world scenarios, which limits their applicability. In this work, we instead attempt to solve the recently proposed and challenging open-world SSL problem, which does not make this assumption. In the open-world SSL problem, the objective is to recognize samples of known classes, and simultaneously detect and cluster samples belonging to novel classes present in the unlabeled data. This work introduces OpenLDN, which utilizes a pairwise similarity loss to discover novel classes. Using a bi-level optimization rule, this pairwise similarity loss exploits the information available in the labeled set to implicitly cluster novel-class samples, while simultaneously recognizing samples from known classes. After discovering novel classes, OpenLDN transforms the open-world SSL problem into a standard SSL problem to achieve additional performance gains using existing SSL methods. Our extensive experiments demonstrate that OpenLDN outperforms the current state-of-the-art methods on multiple popular classification benchmarks while providing a better accuracy/training-time trade-off.
Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the performance differences among methods on datasets with limited domain differences, 2) a modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability for few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
Few-shot visual recognition refers to recognizing novel visual concepts from a few labeled instances. Many few-shot visual recognition methods adopt the metric-based meta-learning paradigm, comparing the query representation with class representations to predict the category of query instances. However, current metric-based methods generally treat all instances equally and consequently often obtain biased class representations, since not all instances are equally significant when summarizing the instance-level representations into a class-level representation. For example, some instances may contain unrepresentative information, such as excessive background or content from unrelated concepts, which skews the results. To address these issues, we propose a novel metric-based meta-learning framework termed the instance-adaptive class representation learning network (ICRL-Net) for few-shot visual recognition. Specifically, we develop an adaptive instance revaluing network to address the biased-representation problem when generating the class representation, by learning and assigning adaptive weights to different instances according to their relative significance in the support set of the corresponding class. Additionally, we design an improved bilinear instance representation and incorporate two novel structural losses, i.e., an intra-class instance clustering loss and an inter-class representation distinguishing loss, to further regulate the instance revaluation process and refine the class representation. We conduct extensive experiments on four commonly adopted few-shot benchmarks: the miniImageNet, tieredImageNet, CIFAR-FS, and FC100 datasets. The experimental results, compared with the state-of-the-art approaches, demonstrate the superiority of our ICRL-Net.
Recent advances in contrastive learning have demonstrated remarkable performance. However, the vast majority of approaches are limited to the closed-world setting. In this paper, we enrich the landscape of representation learning by tapping into an open-world setting, where unlabeled samples from novel classes can naturally emerge in the wild. To bridge the gap, we introduce a new learning framework, open-world contrastive learning (OpenCon). OpenCon tackles the challenges of learning compact representations for both known and novel classes, and facilitates novelty discovery along the way. We demonstrate the effectiveness of OpenCon on challenging benchmark datasets and establish competitive performance. On the ImageNet dataset, OpenCon significantly outperforms the current best method by 11.9% and 7.4% on novel and overall classification accuracy, respectively. We hope that our work will open up new doors for future work to tackle this important problem.
Few-shot image classification is a challenging problem that aims to achieve the human level of recognition based on only a small number of training images. One main solution to few-shot image classification is deep metric learning. These methods, by classifying unseen samples according to their distances to the few seen samples in an embedding space learned by powerful deep neural networks, can avoid overfitting to the few training images and have achieved state-of-the-art performance. In this paper, we provide an up-to-date review of deep metric learning methods for few-shot image classification from 2018 to 2022, and categorize them into three groups according to the three stages of metric learning, namely learning feature embeddings, learning class representations, and learning distance measures. Under this taxonomy, we identify the novelties of the different methods and the problems they face. We conclude this review with a discussion of the current challenges and future trends in few-shot image classification.
Few-shot learning (FSL) aims to recognize novel queries with only a few support samples by leveraging prior knowledge from a base dataset. In this paper, we consider the domain shift problem in FSL and aim to address the domain gap between the support set and the query set. Different from previous cross-domain FSL work (CD-FSL) that considers the domain shift between base and novel classes, the new problem, termed cross-domain cross-set FSL (CDCS-FSL), requires few-shot learners not only to adapt to the new domain, but also to be consistent across different domains within each novel class. To this end, we propose a novel approach, namely StabPA, to learn prototypical compact and cross-domain aligned representations, so that the domain shift and few-shot learning can be addressed simultaneously. We evaluate our approach on two new CDCS-FSL benchmarks built from the DomainNet and Office-Home datasets, respectively. Remarkably, our approach outperforms multiple elaborated baselines by a large margin, e.g., improving 5-shot accuracy by 6.0 points on DomainNet. Code is available at https://github.com/wentaochen0813/cdcs-fsl
Conventional few-shot classification (FSC) aims to recognize samples from novel classes given limited labeled data. Recently, domain generalization FSC (DG-FSC) has been proposed with the goal of recognizing novel-class samples from unseen domains. DG-FSC poses a considerable challenge to many models due to the domain shift between the base classes (used in training) and the novel classes (encountered in evaluation). In this work, we make two novel contributions to tackle DG-FSC. Our first contribution is to propose Born-Again Network (BAN) episodic training and to comprehensively investigate its effectiveness for DG-FSC. As a specific form of knowledge distillation, BAN has been shown to improve generalization in conventional supervised classification with a closed-set setup. This improved generalization motivates us to study BAN for DG-FSC, and we show that BAN is promising for addressing the domain shift encountered in DG-FSC. Building on these encouraging findings, our second (major) contribution is to propose Few-Shot BAN (FS-BAN), a novel BAN approach for DG-FSC. Our proposed FS-BAN includes novel multi-task learning objectives: Mutual Regularization, Mismatched Teacher, and Meta-Control Temperature, each specifically designed to overcome the central and unique challenges of DG-FSC, namely overfitting and domain discrepancy. We analyze the different design choices of these techniques. We conduct comprehensive quantitative and qualitative analysis and evaluation using six datasets and three baseline models. The results demonstrate that our proposed FS-BAN consistently improves the generalization performance of the baseline models and achieves state-of-the-art accuracy for DG-FSC.