Few-shot classification aims to perform classification with only a few labeled examples of the classes of interest. Although several approaches have been proposed, most existing few-shot learning (FSL) models assume that base and novel classes are drawn from the same data domain. Recognizing novel-class data in an unseen domain becomes the even more challenging task of domain-generalized few-shot classification. In this paper, we present a unified learning framework for domain-generalized few-shot classification, where base classes are drawn from multiple homogeneous source domains, while the novel classes to be recognized come from target domains unseen during training. By advancing meta-learning strategies, our learning framework exploits data across multiple source domains to capture domain-invariant features, and introduces FSL ability via a metric-learning-based mechanism across support and query data. We conduct extensive experiments to verify the effectiveness of our proposed learning framework, and show that learning from small but homogeneous source data performs preferably against learning from large-scale data. Moreover, we provide insights into the choice of backbone models for domain-generalized few-shot classification.
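The metric-learning mechanism across support and query data referred to above is commonly realized as prototype-based nearest-neighbor classification. A minimal sketch, with function and variable names of our own choosing (the paper's exact metric may differ):

```python
# Each class prototype is the mean of its support embeddings; a query is
# assigned to the class of the nearest prototype (Euclidean distance).
import math

def prototype(support_embeddings):
    """Mean of the support embeddings of one class."""
    dim = len(support_embeddings[0])
    return [sum(e[d] for e in support_embeddings) / len(support_embeddings)
            for d in range(dim)]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(query, prototypes):
    """Return the label of the prototype closest to the query embedding."""
    return min(prototypes, key=lambda label: euclidean(query, prototypes[label]))

# Toy 2-way 2-shot episode with 2-D embeddings.
protos = {
    "cat": prototype([[0.9, 0.1], [1.1, -0.1]]),
    "dog": prototype([[-1.0, 0.2], [-0.8, 0.0]]),
}
print(classify([0.8, 0.0], protos))  # prints "cat"
```

In the domain-generalized setting, the embedding network producing these vectors is what the meta-learned, domain-invariant feature extractor provides.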
Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the performance differences among methods on datasets with limited domain differences, 2) a modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability for few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
Existing meta-learning-based methods predict novel-class labels for (target-domain) test tasks via meta-knowledge learned from (source-domain) training tasks of base classes. However, most existing works may fail to generalize to novel classes due to the large domain discrepancy that can exist between domains. To address this issue, we propose a novel adversarial feature augmentation (AFA) method to bridge the domain gap in few-shot learning. The feature augmentation is designed to simulate distribution variations by maximizing the domain discrepancy. During adversarial training, a domain discriminator is learned by distinguishing the augmented features (unseen domain) from the original ones (seen domain), while the domain discrepancy is minimized to obtain an optimal feature encoder. The proposed method is a plug-and-play module that can be easily integrated into existing meta-learning-based few-shot learning methods. Extensive experiments on nine datasets demonstrate the superiority of our method for cross-domain few-shot classification compared with the state of the art. Code is available at https://github.com/youthhoo/afa_for_few_shot_learning
Given sufficient training data on the source domain, cross-domain few-shot learning (CD-FSL) aims at recognizing new classes with a small number of labeled examples on the target domain. The key to addressing CD-FSL is to narrow the domain gap and transfer the knowledge of a network trained on the source domain to the target domain. To help knowledge transfer, this paper introduces an intermediate domain generated by mixing images in the source and the target domain. Specifically, to generate the optimal intermediate domain for different target data, we propose a novel target guided dynamic mixup (TGDM) framework that leverages the target data to guide the generation of mixed images via dynamic mixup. The proposed TGDM framework contains a Mixup-3T network for learning classifiers and a dynamic ratio generation network (DRGN) for learning the optimal mix ratio. To better transfer the knowledge, the proposed Mixup-3T network contains three branches with shared parameters for classifying classes in the source domain, target domain, and intermediate domain. To generate the optimal intermediate domain, the DRGN learns to generate an optimal mix ratio according to the performance on auxiliary target data. Then, the whole TGDM framework is trained via bi-level meta-learning so that TGDM can rectify itself to achieve optimal performance on target data. Extensive experimental results on several benchmark datasets verify the effectiveness of our method.
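The intermediate domain described above rests on standard mixup: a mixed image is a convex combination of a source and a target image, with the mix ratio supplied per sample (in TGDM, by the DRGN). A minimal sketch with illustrative names, not the paper's implementation:

```python
# Pixel-wise convex combination of two images with a dynamic mix ratio:
# mixed = ratio * source + (1 - ratio) * target.
def dynamic_mixup(source_img, target_img, ratio):
    """Blend two same-shaped images; ratio in [0, 1] weights the source."""
    assert 0.0 <= ratio <= 1.0, "mix ratio must lie in [0, 1]"
    return [[ratio * s + (1.0 - ratio) * t for s, t in zip(srow, trow)]
            for srow, trow in zip(source_img, target_img)]

# Two tiny 2x2 "images": ratio 0.75 keeps the intermediate domain
# closer to the source than to the target.
src = [[1.0, 1.0], [1.0, 1.0]]
tgt = [[0.0, 0.0], [0.0, 0.0]]
mixed = dynamic_mixup(src, tgt, 0.75)
print(mixed)  # [[0.75, 0.75], [0.75, 0.75]]
```

The contribution of TGDM is making `ratio` a learned, target-guided quantity rather than a fixed or randomly drawn one.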
Most existing compound facial expression recognition (FER) methods rely on large-scale labeled compound expression data for training. However, collecting such data is labor-intensive and time-consuming. In this paper, we address the compound FER task in the cross-domain few-shot learning (FSL) setting, which requires only a few samples of compound expressions in the target domain. Specifically, we propose a novel cascaded decomposition network (CDNet), which cascades several learn-to-decompose modules with shared parameters based on a sequential decomposition mechanism, to obtain a transferable feature space. To alleviate the overfitting problem caused by the limited base classes in our task, a partial regularization strategy is designed to effectively exploit the best of both episodic training and batch training. By training across similar tasks on multiple basic expression datasets, CDNet learns a learn-to-decompose ability that can be easily adapted to identify unseen compound expressions. Extensive experiments on both in-the-lab and in-the-wild compound expression datasets demonstrate the superiority of our proposed CDNet against several state-of-the-art FSL methods. Code is available at: https://github.com/zouxinyi0625/cdnet.
How to handle domain shift when recognizing or segmenting visual data has been an important problem for the learning and vision communities. In this paper, we address domain-generalized semantic segmentation, in which the segmentation model is trained on multiple source domains and is expected to generalize to unseen data domains. We propose a novel meta-learning scheme with feature-disentanglement ability, which derives domain-invariant features for semantic segmentation with domain generalization guarantees. In particular, we introduce a class-specific feature critic module in our framework, enforcing the disentangled visual features with domain generalization guarantees. Finally, our quantitative results on benchmark datasets confirm the effectiveness and robustness of our proposed model, which performs favorably against state-of-the-art domain adaptation and generalization methods for segmentation.
Few-shot learning, especially few-shot image classification, has received increasing attention and witnessed significant advances in recent years. Some recent studies implicitly show that many generic techniques or "tricks", such as data augmentation, pre-training, knowledge distillation, and self-supervision, may greatly boost the performance of a few-shot learning method. Moreover, different works may employ different software platforms, different training schedules, different backbone architectures, and even different input image sizes, making fair comparison difficult and leaving practitioners struggling with reproducibility. To address these situations, we propose a comprehensive library for few-shot learning (LibFewShot) by re-implementing seventeen state-of-the-art few-shot learning methods in the same single codebase in PyTorch. Furthermore, based on LibFewShot, we provide comprehensive evaluations on multiple benchmark datasets with multiple backbone architectures to evaluate common pitfalls and the effects of different training tricks. In addition, given recent doubts on the necessity of a meta- or episodic-training mechanism, our evaluation results show that such a mechanism is still necessary, especially when combined with pre-training. We hope our work can not only lower the barriers for beginners to work on few-shot learning, but also remove the effects of nontrivial tricks to facilitate intrinsic research on few-shot learning. The source code is available from https://github.com/rl-vig/libfewshot.
Conventional few-shot classification (FSC) aims to recognize samples from novel classes given limited labeled data. Recently, domain-generalization FSC (DG-FSC) has been proposed, with the goal of recognizing novel-class samples from unseen domains. DG-FSC poses a considerable challenge to many models due to the domain shift between base classes (used in training) and novel classes (encountered in evaluation). In this work, we make two novel contributions to tackle DG-FSC. Our first contribution is to propose Born-Again Network (BAN) episodic training and comprehensively investigate its effectiveness for DG-FSC. As a specific form of knowledge distillation, BAN has been shown to improve generalization in conventional supervised classification with a closed-set setup. This improved generalization motivates us to study BAN for DG-FSC, and we show that BAN is promising for addressing the domain shift encountered in DG-FSC. Building on these encouraging findings, our second (major) contribution is to propose Few-Shot BAN (FS-BAN), a novel BAN approach for DG-FSC. Our proposed FS-BAN includes novel multi-task learning objectives: Mutual Regularization, Mismatched Teacher, and Meta-Control Temperature, each of which is specifically designed to overcome the central and unique challenges in DG-FSC, namely overfitting and domain discrepancy. We analyze the different design choices of these techniques. We conduct comprehensive quantitative and qualitative analysis and evaluation using six datasets and three baseline models. The results demonstrate that our proposed FS-BAN consistently improves the generalization performance of baseline models and achieves state-of-the-art accuracy for DG-FSC.
Cross-domain few-shot learning (CD-FSL), in which there are only a few target samples under extreme differences between the source and target domains, has recently attracted great attention. For CD-FSL, recent studies generally develop transfer-learning-based approaches that pre-train a neural network on popular labeled source-domain datasets and then transfer it to target-domain data. Although the labeled datasets may provide suitable initial parameters for the target data, the domain difference between the source and target might hinder fine-tuning on the target domain. This paper proposes a simple yet powerful method that re-randomizes the parameters fitted on the source domain before adapting to the target data. The re-randomization resets source-specific parameters of the source pre-trained model, thereby facilitating fine-tuning on the target domain and improving few-shot performance.
Few-shot visual recognition refers to recognizing novel visual concepts from a few labeled instances. Many few-shot visual recognition methods adopt the metric-based meta-learning paradigm, comparing the query representation with class representations to predict the category of query instances. However, current metric-based methods generally treat all instances equally and consequently often obtain biased class representations, considering that not all instances are equally significant when summarizing instance-level representations into a class-level representation. For example, some instances may contain unrepresentative information, such as excessive background or unrelated concepts, which skews the results. To address the above issues, we propose a novel metric-based meta-learning framework termed instance-adaptive class representation learning network (ICRL-Net) for few-shot visual recognition. Specifically, we develop an adaptive instance revaluing network to address the biased-representation issue when generating the class representation, by learning and assigning adaptive weights to different instances according to their relative significance in the support set of the corresponding class. Additionally, we design an improved bilinear instance representation and incorporate two novel structural losses, i.e., an intra-class instance clustering loss and an inter-class representation distinguishing loss, to further regulate the instance revaluing process and refine the class representation. We conduct extensive experiments on four commonly adopted few-shot benchmarks: the miniImageNet, tieredImageNet, CIFAR-FS, and FC100 datasets. The experimental results, compared with state-of-the-art approaches, demonstrate the superiority of our ICRL-Net.
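The instance-revaluing idea described above can be sketched as replacing the plain mean over support embeddings with a softmax-weighted mean, where each instance receives a scalar significance score. The scoring function here is a hypothetical stand-in for the learned revaluing network, and all names are ours:

```python
# Weighted class prototype: down-weight unrepresentative support instances.
import math

def softmax(scores):
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def weighted_prototype(embeddings, scores):
    """Weighted mean of support embeddings under softmax(scores)."""
    weights = softmax(scores)
    dim = len(embeddings[0])
    return [sum(w * e[d] for w, e in zip(weights, embeddings))
            for d in range(dim)]

# The third instance (e.g. excessive background) gets a low score and
# therefore barely shifts the class representation.
embs = [[1.0, 0.0], [1.2, 0.0], [5.0, 4.0]]
proto = weighted_prototype(embs, [2.0, 2.0, -4.0])
print(proto)  # close to the mean of the first two instances
```

A plain mean over `embs` would be dragged toward the outlier; the weighted version stays near the two representative instances, which is the bias the abstract describes.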
We present the design and baseline results of a new challenge in the ChaLearn meta-learning series, accepted at NeurIPS'22, focusing on "cross-domain" meta-learning. Meta-learning aims to leverage experience gained from previous tasks to solve new tasks efficiently (i.e., with better performance, less training data, and/or modest computational resources). While previous challenges in the series focused on within-domain few-shot learning problems, with the aim of learning N-way k-shot tasks efficiently (i.e., N-class classification problems with k training examples per class), this competition challenges participants to solve "any-way" and "any-shot" problems drawn from various domains (healthcare, ecology, biology, manufacturing, and others), chosen for their humanitarian and societal impact. To that end, we created Meta-Album, a meta-dataset of 40 image classification datasets from 10 domains, from which we carve out tasks with any number of "ways" (within the range 2-20) and any number of "shots" (within the range 1-20). The competition is with code submission, fully blind-tested on the CodaLab challenge platform. The code of the winners will be open-sourced, enabling the deployment of automated machine learning solutions for few-shot image classification across several domains.
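The "any-way any-shot" protocol described above amounts to sampling the number of ways in [2, 20] and shots in [1, 20] before building each episode's support set. A minimal sketch under an assumed dataset layout (the challenge's actual sampler and APIs may differ):

```python
# Sample an any-way, any-shot episode from a label -> examples mapping.
import random

def sample_episode(dataset, rng, min_way=2, max_way=20, min_shot=1, max_shot=20):
    """dataset: dict mapping class label -> list of examples."""
    n_way = rng.randint(min_way, min(max_way, len(dataset)))
    classes = rng.sample(sorted(dataset), n_way)
    # Cap shots by the smallest class so every class can supply k examples.
    k_shot = rng.randint(min_shot,
                         min(max_shot, min(len(dataset[c]) for c in classes)))
    support = {c: rng.sample(dataset[c], k_shot) for c in classes}
    return n_way, k_shot, support

# Toy dataset: 5 classes with 30 examples each.
toy = {f"class_{i}": list(range(30)) for i in range(5)}
rng = random.Random(0)
n_way, k_shot, support = sample_episode(toy, rng)
print(n_way, k_shot)  # a 2-5 way, 1-20 shot episode on this toy data
```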
Training models that generalize to new domains at test time is a problem of fundamental importance in machine learning. In this work, we encode this notion of domain generalization using a novel regularization function. We pose the problem of finding such a regularization function in a Learning to Learn (or meta-learning) framework. The objective of domain generalization is explicitly modeled by learning a regularizer that makes the model trained on one domain perform well on another domain. Experimental validations on computer vision and natural language datasets indicate that our method can learn regularizers that achieve good cross-domain generalization.
Prompt learning is one of the most effective and trending ways to adapt powerful vision-language foundation models like CLIP to downstream datasets by tuning learnable prompt vectors with very few samples. However, although prompt learning achieves excellent performance over in-domain data, it still faces the major challenge of generalizing to unseen classes and domains. Some existing prompt learning methods tackle this issue by adaptively generating different prompts for different tokens or domains, but neglect the ability of learned prompts to generalize to unseen domains. In this paper, we propose a novel prompt learning paradigm, called MetaPrompt, that directly generates a domain-invariant prompt generalizable to unseen domains. Specifically, a dual-modality prompt tuning network is proposed to generate prompts for inputs from both image and text modalities. More importantly, we propose a meta-learning-based prompt tuning algorithm that explicitly constrains the prompt tuned on a specific domain or class to also achieve good performance on another domain or class. Extensive experiments on 11 datasets for base-to-new generalization and four datasets for domain generalization demonstrate that our method consistently and significantly outperforms existing methods.
Domain generalization (DG) is the challenging and topical problem of learning models that generalize to novel testing domains with different statistics than a set of known training domains. The simple approach of aggregating data from all source domains and training a single deep neural network end-to-end on all the data provides a surprisingly strong baseline that surpasses many prior published methods. In this paper we build on this strong baseline by designing an episodic training procedure that trains a single deep network in a way that exposes it to the domain shift that characterises a novel domain at runtime. Specifically, we decompose a deep network into feature extractor and classifier components, and then train each component by simulating it interacting with a partner who is badly tuned for the current domain. This makes both components more robust, ultimately leading to our networks producing state-of-the-art performance on three DG benchmarks. Furthermore, we consider the pervasive workflow of using an ImageNet trained CNN as a fixed feature extractor for downstream recognition tasks. Using the Visual Decathlon benchmark, we demonstrate that our episodic-DG training improves the performance of such a general purpose feature extractor by explicitly training a feature for robustness to novel problems. This shows that DG training can benefit standard practice in computer vision.
Most meta-learning approaches assume the existence of a very large set of labeled data available for episodic meta-learning of base knowledge. This contrasts with the more realistic continual learning paradigm, in which data arrives incrementally in the form of tasks containing disjoint classes. In this paper, we consider this problem of incremental meta-learning (IML), in which classes are presented incrementally in discrete tasks. We propose an approach to IML, which we call Episodic Replay Distillation (ERD), that mixes classes from the current task with class exemplars from previous tasks when sampling episodes for meta-learning. These episodes are then used for knowledge distillation to minimize catastrophic forgetting. Experiments on four datasets demonstrate that ERD surpasses the state of the art. In particular, on the more challenging one-shot, long-task-sequence incremental meta-learning scenarios, we reduce the gap between IML and joint training from the current state of the art's 3.5% / 10.1% / 13.4% to 2.6% / 2.9% / 5.0% on Tiered-ImageNet / Mini-ImageNet / CIFAR100, respectively.
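The distillation step that ERD relies on can be illustrated with the standard knowledge-distillation loss: the student is trained to match the teacher's temperature-softened class distribution on replay episodes, which penalizes drift from earlier-task behavior. The temperature and names below are illustrative, not the paper's settings:

```python
# KL(teacher || student) on temperature-softened class distributions.
import math

def softmax(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    mx = max(scaled)
    exps = [math.exp(s - mx) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Zero when the student reproduces the teacher; grows with divergence."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical predictions give zero loss; diverging ones are penalized.
print(distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))      # 0.0
print(distillation_loss([2.0, 0.5, -1.0], [0.0, 2.0, 0.0]) > 0)   # True
```

Minimizing this loss on episodes that mix current-task classes with replayed exemplars is what counteracts catastrophic forgetting in the setup described above.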
Neural memory enables fast adaptation to new tasks with just a few training samples. Existing memory models store features only from the single last layer, which does not generalize well in the presence of a domain shift between training and test distributions. Rather than relying on a flat memory, we propose a hierarchical alternative that stores features at different semantic levels. We introduce a hierarchical prototype model, where each level of the prototype fetches corresponding information from the hierarchical memory. The model is thus able to flexibly rely on features at different semantic levels if the domain-shift circumstances so demand. We learn the model by a newly derived hierarchical variational inference framework, where the hierarchical memory and prototypes are jointly optimized. To explore and exploit the importance of different semantic levels, we further propose to learn the weights associated with the prototype at each level in a data-driven way, which enables the model to adaptively choose the most generalizable features. We conduct thorough ablation studies to demonstrate the effectiveness of each component in our model. New state-of-the-art performance on cross-domain few-shot classification and competitive performance on traditional few-shot classification further substantiate the benefit of hierarchical variational memory.
Learning with limited data is a key challenge for visual recognition. Many few-shot learning methods address this challenge by learning an instance embedding function from seen classes and applying the function to instances from unseen classes with limited labels. This style of transfer learning is task-agnostic: the embedding function is not learned to be optimally discriminative with respect to the unseen classes, where discerning among them leads to the target task. In this paper, we propose a novel approach to adapt the instance embeddings to the target classification task with a set-to-set function, yielding embeddings that are task-specific and discriminative. We empirically investigated various instantiations of such set-to-set functions and observed that the Transformer is most effective, as it naturally satisfies key properties of our desired model. We denote this model as FEAT (few-shot embedding adaptation w/ Transformer) and validate it on both the standard few-shot classification benchmark and four extended few-shot learning settings with essential use cases, i.e., cross-domain, transductive, generalized few-shot learning, and low-shot learning. It achieved consistent improvements over baseline models as well as previous methods, and established new state-of-the-art results on two benchmarks.
Few-shot learning aims to fast adapt a deep model from a few examples. While pre-training and meta-training can create deep models powerful for few-shot generalization, we find that pre-training and meta-training focus respectively on cross-domain transferability and cross-task transferability, which restricts their data efficiency in the entangled settings of domain shift and task shift. We thus propose the Omni-Training framework to seamlessly bridge pre-training and meta-training for data-efficient few-shot learning. Our first contribution is a tri-flow Omni-Net architecture. Besides the joint representation flow, Omni-Net introduces two parallel flows for pre-training and meta-training, responsible for improving domain transferability and task transferability respectively. Omni-Net further coordinates the parallel flows by routing their representations via the joint flow, enabling knowledge transfer across flows. Our second contribution is the Omni-Loss, which introduces a self-distillation strategy separately on the pre-training and meta-training objectives for boosting knowledge transfer throughout different training stages. Omni-Training is a general framework that accommodates many existing algorithms. Evaluations justify that our single framework consistently and clearly outperforms the individual state-of-the-art methods on both cross-task and cross-domain settings in a variety of classification, regression and reinforcement learning problems.
Few-shot learning (FSL) has attracted growing attention in computer vision due to its capability of training models without excessive data. FSL is challenging because the training and testing categories (the base vs. novel sets) can be largely diversified. Conventional transfer-based solutions, which aim to transfer knowledge learned from large labeled training sets to target testing sets, are limited, as critical adverse impacts of the shift in task distributions are not adequately addressed. In this paper, we extend the transfer-based solution by incorporating the concepts of metric learning and channel attention. To better exploit the feature representations extracted by the feature backbone, we propose a class-specific channel attention (CSCA) module, which learns to highlight the discriminative channels in each class by assigning a CSCA weight vector for each class. Unlike general attention modules designed to learn global class features, the CSCA module aims to learn local and class-specific features with very efficient computation. We evaluated the performance of the CSCA module on standard benchmarks, including miniImageNet, Tiered-ImageNet, CIFAR-FS, and CUB-200-2011. Experiments are performed in inductive and in-/cross-domain settings. We achieve new state-of-the-art results.
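The class-specific channel attention idea can be sketched as giving each class its own channel-weight vector that rescales a feature vector before that class's similarity is scored. The weights, names, and toy scoring rule below are illustrative assumptions, not the paper's learned values:

```python
# Element-wise channel re-weighting with a per-class weight vector.
def apply_channel_attention(features, class_weights):
    """Rescale each channel of a feature vector by a class's weight."""
    return [f * w for f, w in zip(features, class_weights)]

def score(features, class_weights):
    """Toy similarity score: sum of the attended channels."""
    return sum(apply_channel_attention(features, class_weights))

feats = [0.2, 0.9, 0.1, 0.8]      # 4-channel feature vector
w_bird = [0.1, 1.0, 0.1, 1.0]     # a class emphasizing channels 1 and 3
w_car = [1.0, 0.1, 1.0, 0.1]      # a class emphasizing channels 0 and 2
print(score(feats, w_bird) > score(feats, w_car))  # True
```

Because each class attends to its own discriminative channels, the same feature vector can score very differently across classes, which is the local, class-specific behavior the abstract contrasts with global attention modules.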
Few-shot fine-grained learning aims to classify a query image into one of a set of support categories with fine-grained differences. Although learning different objects' local differences via deep neural networks has achieved success, how to exploit the query-support cross-image object semantic relations in Transformer-based architectures remains under-explored in the few-shot fine-grained scenario. In this work, we propose a Transformer-based double-helix model, namely HelixFormer, to achieve cross-image object semantic relation mining in a bidirectional and symmetrical manner. HelixFormer consists of two steps: 1) a relation mining process (RMP) across different branches, and 2) a representation enhancement process (REP) within each individual branch. Through the designed RMP, each branch can extract fine-grained object-level cross-image semantic relation maps (CSRMs) using information from the other branch, ensuring better cross-image interaction in semantically related local object regions. Further, with the aid of CSRMs, the developed REP can strengthen the extracted features in the discovered semantically related local regions in each branch, boosting the model's ability to distinguish subtle feature differences of fine-grained objects. Extensive experiments conducted on five public fine-grained benchmarks demonstrate that HelixFormer can effectively enhance cross-image object semantic relation matching for recognizing fine-grained objects, achieving better performance than most state-of-the-art methods under both 1-shot and 5-shot scenarios. Our code is available at: https://github.com/jiakangyuan/helixformer