Few-shot learning (FSL) is a central problem in meta-learning, where learners must efficiently learn from few labeled examples. Within FSL, feature pre-training has recently become an increasingly popular strategy to significantly improve generalization performance. However, the contribution of pre-training is often overlooked and understudied, with limited theoretical understanding of its impact on meta-learning performance. Further, pre-training requires a consistent set of global labels shared across training tasks, which may be unavailable in practice. In this work, we address the above issues by first showing the connection between pre-training and meta-learning. We discuss why pre-training yields a more robust meta-representation and connect the theoretical analysis to existing works and empirical results. Second, we introduce Meta Label Learning (MeLa), a novel meta-learning algorithm that learns task relations by inferring global labels across tasks. This allows us to exploit pre-training for FSL even when global labels are unavailable or ill-defined. Lastly, we introduce an augmented pre-training procedure that further improves the learned meta-representation. Empirically, MeLa outperforms existing methods across a diverse range of benchmarks, in particular under a more challenging setting where the number of training tasks is limited and labels are task-specific. We also provide an extensive ablation study to highlight its key properties.
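As a rough illustration of what inferring global labels across tasks could look like, the sketch below represents each task-local class by the centroid of its examples in a shared embedding space and clusters the centroids across tasks; local classes assigned to the same cluster receive the same inferred global label. This is only a schematic reading of the abstract, not MeLa's actual algorithm, and `n_global` and the use of KMeans are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def infer_global_labels(task_class_embeddings, n_global):
    """Schematic sketch (an assumption, not MeLa's algorithm): cluster
    per-task class centroids in a shared embedding space so that local
    classes landing in the same cluster share an inferred global label."""
    centroids = np.stack(task_class_embeddings)   # one row per local class
    return KMeans(n_clusters=n_global, n_init=10).fit_predict(centroids)
```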
Few-shot learning (FSL) aims to generate a classifier using limited labeled examples. Many existing works take the meta-learning approach, constructing a few-shot learner that can learn from few-shot examples to generate a classifier. Typically, the few-shot learner is constructed or meta-trained by sampling multiple few-shot tasks in turn and optimizing the few-shot learner's performance in generating classifiers for those tasks. The performance is measured by how well the resulting classifiers classify the test (i.e., query) examples of those tasks. In this paper, we point out two potential weaknesses of this approach. First, the sampled query examples may not provide sufficient supervision for meta-training the few-shot learner. Second, the effectiveness of meta-learning diminishes sharply with the increasing number of shots. To resolve these issues, we propose a novel meta-training objective for the few-shot learner, which is to encourage the few-shot learner to generate classifiers that perform like strong classifiers. Concretely, we associate each sampled few-shot task with a strong classifier that is trained with ample labeled examples. The strong classifiers can be seen as the target classifiers that we hope the few-shot learner to generate given the few-shot examples, and we use the strong classifiers to supervise the few-shot learner. We present an efficient way to construct the strong classifier, making our proposed objective an easily plug-and-play term for existing meta-learning based FSL methods. We validate our approach, LastShot, in combination with many representative meta-learning methods. On several benchmark datasets, our approach leads to notable improvements across a variety of tasks. More importantly, with our approach, meta-learning based FSL methods can outperform non-meta-learning based methods at different numbers of shots.
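A minimal sketch of what such strong-classifier supervision could look like as a plug-in loss term, assuming a standard distillation formulation; the function name and the hyperparameters `alpha` and `tau` are illustrative, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def strong_teacher_loss(fs_logits, strong_logits, labels, alpha=0.5, tau=4.0):
    """Hypothetical plug-in objective in the spirit described above: the
    usual cross-entropy on query labels, plus a distillation term pushing
    the few-shot learner's classifier toward a strong classifier trained
    with ample labels."""
    ce = F.cross_entropy(fs_logits, labels)
    kd = F.kl_div(
        F.log_softmax(fs_logits / tau, dim=-1),
        F.softmax(strong_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau ** 2
    return (1 - alpha) * ce + alpha * kd
```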
Few-shot visual recognition refers to recognizing novel visual concepts from a few labeled instances. Many few-shot visual recognition methods adopt the metric-based meta-learning paradigm by comparing the query representation with class representations in order to predict the category of query instances. However, current metric-based methods generally treat all instances equally and consequently often obtain biased class representations, since not all instances are equally significant when summarizing instance-level representations into a class-level representation. For example, some instances may contain unrepresentative information, such as excessive background or content from unrelated concepts, which skews the results. To address the above issues, we propose a novel metric-based meta-learning framework termed instance-adaptive class representation learning network (ICRL-Net) for few-shot visual recognition. Specifically, we develop an adaptive instance revaluing network that addresses the biased representation issue when generating class representations, by learning and assigning adaptive weights to different instances according to their relative significance in the support set of the corresponding class. Additionally, we design an improved bilinear instance representation and incorporate two novel structural losses, i.e., an intra-class instance clustering loss and an inter-class representation distinguishing loss, to further regulate the instance revaluation process and refine the class representations. We conduct extensive experiments on four commonly adopted few-shot benchmarks: the miniImageNet, tieredImageNet, CIFAR-FS, and FC100 datasets. The experimental results, compared with state-of-the-art approaches, demonstrate the superiority of our ICRL-Net.
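The core idea of instance-adaptive class representations can be sketched as a weighted prototype, where a small scoring network replaces the plain per-class average; `weight_net` is a hypothetical module (e.g., an MLP ending in a scalar), and this is an illustration rather than the paper's full architecture:

```python
import torch

def adaptive_prototypes(support, labels, weight_net, n_way):
    """Weighted prototypes: score each support instance with a small
    network and take the softmax-weighted mean of each class's embeddings
    instead of a plain average."""
    protos = []
    for c in range(n_way):
        feats = support[labels == c]                        # (shots, dim)
        w = torch.softmax(weight_net(feats).squeeze(-1), dim=0)
        protos.append((w.unsqueeze(-1) * feats).sum(dim=0))
    return torch.stack(protos)                              # (n_way, dim)
```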
Meta-learning has become a practical approach to few-shot image classification, where "a strategy to learn a classifier" is meta-learned on labeled base classes and can be applied to tasks with novel classes. We remove the requirement of base class labels and learn generalizable embeddings via unsupervised meta-learning (UML). Specifically, task episodes are constructed with data augmentations of unlabeled base classes during meta-training, and we apply embedding-based classifiers to novel tasks with labeled few-shot examples during meta-testing. We observe that two elements play important roles in UML, namely the way tasks are sampled and the way similarity between instances is measured. We thus obtain a strong baseline with two simple modifications: a sufficient sampling strategy that efficiently constructs multiple tasks per episode, and a semi-normalized similarity. We then take advantage of task characteristics from two directions to obtain further improvements. First, synthesized confusing instances are incorporated to help extract more discriminative embeddings. Second, we utilize an additional task-specific embedding transformation as an auxiliary component during meta-training to promote the generalization ability of the pre-adapted embeddings. Experiments on few-shot learning benchmarks verify that our approach outperforms previous UML methods and achieves comparable or even better performance than its supervised variants.
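A minimal sketch of unsupervised episode construction, assuming each sampled unlabeled image acts as its own pseudo-class and support/query views come from stochastic augmentation; all names here are illustrative, not the authors' API:

```python
import torch

def make_unsupervised_episode(images, augment, n_way=5, n_views=2):
    """Build a pseudo-episode without base-class labels: sample n_way
    unlabeled images, treat each as its own pseudo-class, and create
    support/query views by data augmentation (`augment` is any stochastic
    image transform)."""
    idx = torch.randperm(len(images))[:n_way]
    support, query, labels = [], [], []
    for c, i in enumerate(idx):
        views = [augment(images[i]) for _ in range(n_views)]
        support.append(views[0])
        query.extend(views[1:])
        labels.extend([c] * (n_views - 1))
    return torch.stack(support), torch.stack(query), torch.tensor(labels)
```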
The focus of recent meta-learning research has been on the development of learning algorithms that can quickly adapt to test time tasks with limited data and low computational cost. Few-shot learning is widely used as one of the standard benchmarks in meta-learning. In this work, we show that a simple baseline: learning a supervised or self-supervised representation on the meta-training set, followed by training a linear classifier on top of this representation, outperforms state-of-the-art few-shot learning methods. An additional boost can be achieved through the use of self-distillation. This demonstrates that using a good learned embedding model can be more effective than sophisticated meta-learning algorithms. We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
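The described baseline is simple enough to sketch directly: freeze the meta-trained embedding, fit a linear classifier on the support set, and score it on the query set. The sketch below assumes `embed` maps an image to a feature vector; details such as the classifier's regularization are choices the paper tunes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fewshot_eval(embed, support_x, support_y, query_x, query_y):
    """Embedding-plus-linear-classifier baseline: extract frozen features
    for support and query examples, fit logistic regression on the support
    set, and report query accuracy."""
    zs = np.stack([embed(x) for x in support_x])
    zq = np.stack([embed(x) for x in query_x])
    clf = LogisticRegression(max_iter=1000).fit(zs, support_y)
    return clf.score(zq, query_y)
```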
Few-shot learning aims to fast adapt a deep model from a few examples. While pre-training and meta-training can create deep models powerful for few-shot generalization, we find that pre-training and meta-training focus respectively on cross-domain transferability and cross-task transferability, which restricts their data efficiency in the entangled settings of domain shift and task shift. We thus propose the Omni-Training framework to seamlessly bridge pre-training and meta-training for data-efficient few-shot learning. Our first contribution is a tri-flow Omni-Net architecture. Besides the joint representation flow, Omni-Net introduces two parallel flows for pre-training and meta-training, responsible for improving domain transferability and task transferability respectively. Omni-Net further coordinates the parallel flows by routing their representations via the joint flow, enabling knowledge transfer across flows. Our second contribution is the Omni-Loss, which introduces a self-distillation strategy separately on the pre-training and meta-training objectives for boosting knowledge transfer throughout different training stages. Omni-Training is a general framework that accommodates many existing algorithms. Evaluations show that our single framework consistently and clearly outperforms the individual state-of-the-art methods on both cross-task and cross-domain settings in a variety of classification, regression and reinforcement learning problems.
Few-shot image classification is a challenging problem that aims to achieve human-level recognition based on only a small number of training images. One main solution to few-shot image classification is deep metric learning. These methods, which classify unseen samples according to their distances to the few seen samples in an embedding space learned by powerful deep neural networks, can avoid overfitting to the few training images in few-shot image classification and have achieved state-of-the-art performance. In this paper, we provide an up-to-date review of deep metric learning methods for few-shot image classification from 2018 to 2022, and categorize them into three groups according to the three stages of metric learning, namely learning feature embeddings, learning class representations, and learning distance measures. Under this taxonomy, we identify the novelties of the different methods and the problems they face. We conclude this review with a discussion of current challenges and future trends in few-shot image classification.
Few-shot learning, especially few-shot image classification, has received increasing attention in recent years and has witnessed significant progress. Some recent studies implicitly show that many generic techniques or "tricks", such as data augmentation, pre-training, knowledge distillation, and self-supervision, may greatly boost the performance of a few-shot learning method. Moreover, different works may employ different software platforms, different training schedules, different backbone architectures, and even different input image sizes, making fair comparisons difficult and leaving practitioners struggling with reproducibility. To address these situations, we propose a comprehensive library for few-shot learning (LibFewShot) by re-implementing seventeen state-of-the-art frameworks in the same single codebase in PyTorch. Furthermore, based on LibFewShot, we provide comprehensive evaluations on multiple benchmark datasets with multiple backbone architectures to evaluate common pitfalls and the effects of different training tricks. In addition, given the recent doubts about the necessity of a meta- or episodic-training mechanism, our evaluation results show that such a mechanism is still necessary, especially when combined with pre-training. We hope our work can not only lower the barrier for beginners to work on few-shot learning, but also remove the effects of nontrivial tricks to facilitate intrinsic research on few-shot learning. The source code is available from https://github.com/rl-vig/libfewshot.
In this work, we propose to use out-of-distribution samples, i.e., unlabeled samples coming from outside the target classes, to improve few-shot learning. Specifically, we exploit easily available out-of-distribution samples to drive the classifier away from irrelevant features by maximizing the distance from prototypes to out-of-distribution samples while minimizing the distance to in-distribution samples (i.e., support and query data). Our approach is simple to implement, agnostic to the feature extractor, lightweight without any additional pre-training cost, and applicable to both inductive and transductive settings. Extensive experiments on various standard benchmarks demonstrate that the proposed method consistently improves the performance of pretrained networks with different architectures.
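A sketch of the described objective, assuming a margin-based formulation: in-distribution features are pulled toward their nearest prototype while out-of-distribution features are pushed at least a margin away from every prototype. The margin and the exact distance terms are assumptions, not the paper's precise loss:

```python
import torch
import torch.nn.functional as F

def ood_repulsion_loss(prototypes, in_dist, ood, margin=1.0):
    """Pull in-distribution features (support/query) toward their nearest
    prototype; push out-of-distribution features at least `margin` away
    from every prototype."""
    d_in = torch.cdist(in_dist, prototypes).min(dim=1).values
    d_ood = torch.cdist(ood, prototypes).min(dim=1).values
    return d_in.mean() + F.relu(margin - d_ood).mean()
```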
Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the performance differences among methods on datasets with limited domain differences, 2) a modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability for few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
The task of few-shot learning (FSL) aims to transfer the knowledge learned from base categories with sufficient labelled data to novel categories with scarce known information. It is currently an important research question with great practical value in real-world applications. Despite extensive previous efforts on few-shot learning tasks, we emphasize that most existing methods do not take into account the distributional shift caused by sample selection bias in the FSL scenario. Such a selection bias can induce spurious correlation between the semantic causal features, which are causally and semantically related to the class label, and the other non-causal features. Critically, the former should be invariant across changes in distributions, highly related to the classes of interest, and thus generalize well to novel classes, while the latter are not stable to changes in the distribution. To resolve this problem, we propose a novel data augmentation strategy dubbed PatchMix that can break this spurious dependency by replacing the patch-level information and supervision of the query images with random gallery images from classes different from the query's. We theoretically show that such an augmentation mechanism, different from existing ones, is able to identify the causal features. To further make these features discriminative enough for classification, we propose Correlation-guided Reconstruction (CGR) and a Hardness-Aware module for instance discrimination and easier discrimination between similar classes. Moreover, such a framework can be adapted to the unsupervised FSL scenario.
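A CutMix-style sketch of what patch-level replacement could look like; the grid size, the sampling scheme, and the handling of mixed supervision (omitted here) are assumptions rather than the paper's exact procedure:

```python
import torch

def patchmix(query, gallery, n_patches=4):
    """Replace a random grid patch of each query image with the patch at
    the same location from a random gallery image (assumed to come from a
    different class). Mixing of the supervision signal is omitted."""
    B, C, H, W = query.shape
    ph, pw = H // n_patches, W // n_patches
    out = query.clone()
    for b in range(B):
        g = torch.randint(len(gallery), (1,)).item()
        i = torch.randint(n_patches, (1,)).item() * ph
        j = torch.randint(n_patches, (1,)).item() * pw
        out[b, :, i:i + ph, j:j + pw] = gallery[g, :, i:i + ph, j:j + pw]
    return out
```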
Learning with limited data is a key challenge for visual recognition. Many few-shot learning methods address this challenge by learning an instance embedding function from seen classes and applying the function to instances from unseen classes with limited labels. This style of transfer learning is task-agnostic: the embedding function is not learned to be optimally discriminative with respect to the unseen classes, where discerning among them leads to the target task. In this paper, we propose a novel approach to adapt the instance embeddings to the target classification task with a set-to-set function, yielding embeddings that are task-specific and discriminative. We empirically investigated various instantiations of such set-to-set functions and observed that the Transformer is most effective, as it naturally satisfies key properties of our desired model. We denote this model as FEAT (few-shot embedding adaptation w/ Transformer) and validate it on both the standard few-shot classification benchmark and four extended few-shot learning settings with essential use cases, i.e., cross-domain, transductive, generalized few-shot learning, and low-shot learning. It achieved consistent improvements over baseline models as well as previous methods, and established new state-of-the-art results on two benchmarks.
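The set-to-set adaptation can be sketched with a single self-attention layer that contextualizes the support embeddings against one another; this mirrors the spirit of FEAT but is not its full model (the paper's Transformer block and training objective contain more components), and the hyperparameters are illustrative:

```python
import torch
import torch.nn as nn

class SetToSetAdapter(nn.Module):
    """One self-attention layer over the support set, making each
    embedding aware of the other classes in the task."""
    def __init__(self, dim=64, heads=1):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, support):                  # (n_way * shots, dim)
        s = support.unsqueeze(0)                 # treat the set as one sequence
        adapted, _ = self.attn(s, s, s)
        return self.norm(support + adapted.squeeze(0))
```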
Episodic learning is a popular practice among researchers and practitioners interested in few-shot learning. It consists of organising training into a series of learning problems (or episodes), each divided into small training and validation subsets to mimic the circumstances encountered during evaluation. But is this always necessary? In this paper, we investigate the usefulness of episodic learning in methods which use nonparametric approaches, such as nearest neighbours, at the level of the episode. For these methods, we not only show that the constraints imposed by episodic learning are not necessary, but that they actually lead to a data-inefficient way of exploiting training batches. We conduct extensive ablation experiments with Matching Networks and Prototypical Networks, two of the most popular methods that use nonparametric approaches at the level of the episode. Their "non-episodic" counterparts are considerably simpler, have fewer hyperparameters, and improve their performance on multiple few-shot classification datasets.
Domain generalization (DG) methods aim to develop models that generalize to settings where the test distribution differs from the training data. In this paper, we focus on the challenging problem of multi-source zero-shot DG, where labeled training data from multiple source domains is available but no data from the target domain can be accessed. Although this problem has become an important topic of research, surprisingly, the simple solution of pooling all source data together and training a single classifier is highly competitive on standard benchmarks. More importantly, even sophisticated approaches that explicitly optimize for invariance across domains do not necessarily provide non-trivial gains over ERM. In this paper, for the first time, we study the important link between pre-specified domain labels and generalization performance. Using a motivating case study and a new variant of a distributionally robust optimization algorithm, we first demonstrate how inferred custom domain groups can lead to consistent improvements over the original domain labels that come with the dataset. Subsequently, we introduce a general approach for multi-domain generalization, MulDEns, that uses an ERM-based deep ensembling backbone and performs implicit domain re-labeling through a meta-optimization algorithm. Through empirical studies on multiple standard benchmarks, we show that MulDEns does not require custom augmentation strategies or dataset-specific training processes, consistently outperforms ERM by significant margins, and produces state-of-the-art generalization performance, even when compared to existing approaches that exploit domain labels.
Learning novel classes from very few labeled samples has attracted increasing attention in the machine learning community. Recent paradigm-based studies built on meta-learning or transfer learning show that acquiring a good feature space can be an effective solution for achieving favorable performance on few-shot tasks. In this paper, we propose a simple but effective paradigm that decouples the learning of feature representations and classifiers, and learns the feature embedding architecture only from base classes via the typical transfer-learning training strategy. To maintain both generalization ability across base and novel classes and discrimination ability within each class, we propose a dual-path feature learning scheme that effectively combines structural similarity with contrastive feature construction. In this way, intra-class alignment and inter-class uniformity can be well balanced, leading to improved performance. Experiments on three popular benchmarks show that, when combined with a simple prototype-based classifier, our method can still achieve promising results for both standard and generalized few-shot problems in either an inductive or a transductive inference setting.
In this paper, we look at the problem of cross-domain few-shot classification, which aims to learn a classifier for previously unseen classes and domains from few labeled samples. Recent approaches broadly solve this problem by parameterizing their few-shot classifiers with task-agnostic and task-specific weights, where the former are typically learned on a large training set and the latter are dynamically predicted through an auxiliary network conditioned on a small support set. In this work, we focus on the estimation of the latter, and propose to learn the task-specific weights directly on the small support set, in contrast to dynamically estimating them. In particular, through systematic analysis, we show that task-specific weights realized as parametric adapters in matrix form, residually connected to multiple intermediate layers of a backbone network, significantly improve the performance of state-of-the-art models on the Meta-Dataset benchmark at a minor additional cost.
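A minimal sketch of such a matrix-form residual adapter: a 1x1 convolution, zero-initialized so the backbone starts unchanged, attached residually to an intermediate layer and optimized on the support set while the backbone stays frozen. This follows the description above but simplifies the paper's design:

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Task-specific residual adapter: a 1x1 convolution added to the
    output of a frozen intermediate layer; only the adapter is trained
    on each test task's support set."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        nn.init.zeros_(self.proj.weight)  # start as an identity mapping

    def forward(self, x):
        return x + self.proj(x)
```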
Recently, it has been observed that a transfer learning solution might be all we need to solve many few-shot learning benchmarks, raising important questions about when and how meta-learning algorithms should be deployed. In this paper, we seek to clarify these questions by 1. proposing a novel metric, the diversity coefficient, to measure the task diversity of a few-shot learning benchmark, and 2. comparing MAML and transfer learning under fair conditions (same architecture, same optimizer, and all models trained to convergence). Using the diversity coefficient, we show that the popular MiniImageNet and CIFAR-FS few-shot learning benchmarks have low diversity. This novel insight contextualizes claims that transfer learning solutions are better than meta-learned solutions as being confined to the low-diversity regime under a fair comparison. Specifically, we empirically find that a low diversity coefficient correlates with a high similarity between transfer learning and MAML-learned solutions, in terms of both meta-test accuracy and classification-layer similarity (using feature-based distance metrics such as SVCCA, PWCCA, CKA, and OPD). To further support our claim, we find that this meta-test accuracy parity holds even as the model size changes. Therefore, we conclude that in the low-diversity regime, MAML and transfer learning have equivalent meta-test performance when compared fairly. We also hope our work inspires more thoughtful construction and quantitative evaluation of meta-learning benchmarks in the future.
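Of the similarity measures listed, linear CKA is easy to state; the sketch below is a standard formulation of it (not the paper's code) for two feature matrices with one row per sample:

```python
import torch

def linear_cka(X, Y):
    """Linear centered kernel alignment between two feature matrices of
    shape (samples, features): the squared Frobenius norm of the centered
    cross-covariance, normalized by the self-covariance norms."""
    X = X - X.mean(dim=0)
    Y = Y - Y.mean(dim=0)
    cross = (X.T @ Y).norm() ** 2
    return cross / ((X.T @ X).norm() * (Y.T @ Y).norm())
```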
We propose a unified look at jointly learning multiple vision tasks and visual domains through universal representations, using a single deep neural network. Learning multiple problems simultaneously involves minimizing a weighted sum of multiple loss functions with different magnitudes and characteristics, which results in an unbalanced state where one loss dominates the optimization, and in poor results compared to learning a separate model for each problem. To this end, we propose distilling the knowledge of multiple task/domain-specific networks into a single deep neural network through small-capacity adapters. We rigorously show that universal representations achieve state-of-the-art performance in learning multiple dense prediction problems in NYU-v2 and Cityscapes, multiple image classification problems from diverse domains in the Visual Decathlon dataset, and cross-domain few-shot learning in MetaDataset. Finally, we conduct multiple analyses through ablation and qualitative studies.
Image classification with small datasets has been an active research area in the recent past. However, as research in this scope is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
Few-shot learning (FSL) methods typically assume clean support sets with accurately labeled samples when training on novel classes. This assumption can often be unrealistic: support sets, no matter how small, can still include mislabeled samples. Robustness to label noise is therefore essential for FSL methods to be practical, but this problem surprisingly remains largely unexplored. To address mislabeled samples in FSL settings, we make several technical contributions. (1) We offer simple yet effective feature aggregation methods, improving the prototypes used by ProtoNet, a popular FSL technique. (2) We describe a novel Transformer model for noisy few-shot learning (TraNFS). TraNFS leverages a transformer's attention mechanism to weigh mislabeled versus correct samples. (3) Finally, we extensively test these methods on noisy versions of MiniImageNet and TieredImageNet. Our results show that TraNFS is on par with leading FSL methods on clean support sets, yet outperforms them, by far, in the presence of label noise.
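As a hint of what a simple noise-robust aggregation could look like, the sketch below swaps the prototype's mean for an element-wise median, which blunts the effect of a single mislabeled support sample; this is an illustration, not necessarily the aggregator the paper proposes:

```python
import torch

def robust_prototypes(support, labels, n_way):
    """Element-wise median of each class's support embeddings, a simple
    alternative to the mean that is less sensitive to a mislabeled sample
    (illustrative; not necessarily the paper's exact aggregator)."""
    return torch.stack([support[labels == c].median(dim=0).values
                        for c in range(n_way)])
```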