Meta-learning of shared initialization parameters has shown to be highly effective in solving few-shot learning tasks. However, extending the framework to many-shot scenarios, which may further enhance its practicality, has been relatively overlooked due to the technical difficulties of meta-learning over long chains of inner-gradient steps. In this paper, we first show that allowing the meta-learner to take more inner-gradient steps better captures the structure of heterogeneous and large-scale task distributions, and thus leads to better initialization points. Further, in order to increase the frequency of meta-updates even with excessively long inner-optimization trajectories, we propose to estimate the required shift of the task-specific parameters with respect to the change of the initialization parameters. By doing so, we can arbitrarily increase the frequency of meta-updates, which greatly improves meta-level convergence as well as the quality of the learned initializations. We validate our method on a heterogeneous set of large-scale tasks and show that the algorithm largely outperforms previous first-order meta-learning methods, as well as multi-task learning and fine-tuning baselines, in terms of both generalization performance and convergence.
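A minimal sketch of the trajectory-shifting idea described above, under the assumption of a first-order meta-gradient; the function name and interface are illustrative, not the authors' implementation. When the initialization is meta-updated midway through inner optimization, each task's current parameters are displaced by the same amount, so inner trajectories need not be restarted.

```python
import torch

def meta_step_with_trajectory_shift(init, task_params, task_losses, meta_lr=0.001):
    """One hypothetical meta-update over ongoing inner trajectories.

    init:        list of meta-initialization tensors (leaves, requires_grad=True)
    task_params: per-task lists of current inner parameters (leaves, requires_grad=True)
    task_losses: per-task callables mapping a parameter list to a scalar loss
    """
    # First-order meta-gradient: average the task gradients taken at the
    # current points of their inner trajectories (an assumption here,
    # mirroring first-order MAML-style updates).
    meta_grads = [torch.zeros_like(p) for p in init]
    for params, loss_fn in zip(task_params, task_losses):
        grads = torch.autograd.grad(loss_fn(params), params)
        for g_acc, g in zip(meta_grads, grads):
            g_acc += g / len(task_params)

    # Update the initialization and record how much it moved.
    deltas = []
    for p, g in zip(init, meta_grads):
        delta = -meta_lr * g
        p.data += delta
        deltas.append(delta)

    # Shift every ongoing task trajectory by the same displacement,
    # approximating how the task-specific parameters would have moved
    # had the inner optimization started from the new initialization.
    for params in task_params:
        for p, delta in zip(params, deltas):
            p.data += delta
```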
Regularization and transfer learning are two popular techniques to enhance generalization on unseen data, which is a fundamental problem of machine learning. Regularization techniques are versatile, as they are task- and architecture-agnostic, but they do not exploit large amounts of data. Transfer learning methods learn to transfer knowledge from one domain to another, but may not generalize across tasks and architectures, and may introduce new training costs for adapting to the target task. To bridge the gap between the two, we propose a transferable perturbation, MetaPerturb, which is meta-learned to improve generalization performance on unseen data. MetaPerturb is implemented as a set-based lightweight network that is agnostic to the size and order of its input and is shared across layers. We then propose a meta-learning framework that jointly trains the perturbation function over heterogeneous tasks. As MetaPerturb is a set function trained over diverse distributions across layers and tasks, it can generalize to heterogeneous tasks and architectures. We validate the efficacy and generality of MetaPerturb trained on a specific source domain and architecture by applying it to the training of diverse neural architectures on heterogeneous target datasets, against various regularization and fine-tuning baselines. The results show that networks trained with MetaPerturb significantly outperform the baselines on most tasks and architectures, with a negligible increase in parameter size and without hyperparameter tuning.
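A hedged sketch of what a size- and order-agnostic, layer-shared perturbation module could look like; the `ChannelPerturb` name, the channel statistics it reads, and the sigmoid gating are assumptions for illustration, not MetaPerturb's actual architecture.

```python
import torch
import torch.nn as nn

class ChannelPerturb(nn.Module):
    """Hypothetical layer-agnostic perturbation module: it reads
    permutation-invariant per-channel statistics of a feature map and
    returns a multiplicative perturbation, so the same small network can
    be shared across layers of any width."""

    def __init__(self, hidden=16):
        super().__init__()
        # Operates on per-channel (mean, std) pairs, so the parameter
        # count does not depend on the number of channels.
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):  # x: (batch, channels, height, width)
        stats = torch.stack(
            [x.mean(dim=(0, 2, 3)), x.std(dim=(0, 2, 3))], dim=-1
        )                                                # (channels, 2)
        scale = torch.sigmoid(self.net(stats)).view(1, -1, 1, 1)
        return x * scale                                 # perturbed feature map

# The same ChannelPerturb instance could be inserted after every
# convolutional block, regardless of its channel count.
```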
Meta-learning has been proposed as a framework to address the challenging few-shot learning setting. The key idea is to leverage a large number of similar few-shot tasks in order to learn how to adapt a base-learner to a new task for which only a few labeled samples are available. As deep neural networks (DNNs) tend to overfit using a few samples only, meta-learning typically uses shallow neural networks (SNNs), thus limiting its effectiveness. In this paper we propose a novel few-shot learning method called meta-transfer learning (MTL), which learns to adapt a deep NN for few-shot learning tasks. Specifically, "meta" refers to training multiple tasks, and "transfer" is achieved by learning scaling and shifting functions of DNN weights for each task. In addition, we introduce the hard task (HT) meta-batch scheme as an effective learning curriculum for MTL. We conduct experiments using (5-class, 1-shot) and (5-class, 5-shot) recognition tasks on two challenging few-shot learning benchmarks: miniImageNet and Fewshot-CIFAR100. Extensive comparisons to related works validate that our meta-transfer learning approach trained with the proposed HT meta-batch scheme achieves top performance. An ablation study also shows that both components contribute to fast convergence and high accuracy. [Fragment of the hard-class selection procedure from the paper's algorithm box: optimize θ by Eq. 3; optimize Φ_S{1,2} and θ by Eq. 4 and Eq. 5; then, for each sampled class k in T^(te), compute its accuracy Acc_k and return the class m with the lowest accuracy Acc_m.]
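One possible reading of the "scaling and shifting functions of DNN weights" in code: frozen pre-trained convolution weights are modulated by light per-channel scale and shift parameters, which are the only quantities learned per task. The class below is an illustrative sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleShiftConv2d(nn.Module):
    """Sketch of per-task scaling and shifting over a frozen pre-trained
    convolution: W' = W * alpha, b' = b + beta, with only alpha and beta
    trainable during meta-training."""

    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        self.weight = conv.weight.detach()                      # frozen DNN weights
        self.bias = conv.bias.detach() if conv.bias is not None else None
        self.stride, self.padding = conv.stride, conv.padding
        # Lightweight per-channel scaling and shifting parameters.
        self.alpha = nn.Parameter(torch.ones(conv.out_channels, 1, 1, 1))
        self.beta = nn.Parameter(torch.zeros(conv.out_channels))

    def forward(self, x):
        w = self.weight * self.alpha
        b = self.beta if self.bias is None else self.bias + self.beta
        return F.conv2d(x, w, b, stride=self.stride, padding=self.padding)
```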
Few-shot learning (FSL) is a central problem in meta-learning, where learners must efficiently learn from few labeled examples. Within FSL, feature pre-training has recently become an increasingly popular strategy to significantly improve generalization performance. However, the contribution of pre-training is often overlooked and understudied, with limited theoretical understanding of its impact on meta-learning performance. Further, pre-training requires a consistent set of global labels shared across training tasks, which may be unavailable in practice. In this work, we address the above issues by first showing the connection between pre-training and meta-learning. We discuss why pre-training yields more robust meta-representation and connect the theoretical analysis to existing works and empirical results. Secondly, we introduce Meta Label Learning (MeLa), a novel meta-learning algorithm that learns task relations by inferring global labels across tasks. This allows us to exploit pre-training for FSL even when global labels are unavailable or ill-defined. Lastly, we introduce an augmented pre-training procedure that further improves the learned meta-representation. Empirically, MeLa outperforms existing methods across a diverse range of benchmarks, in particular under a more challenging setting where the number of training tasks is limited and labels are task-specific. We also provide an extensive ablation study to highlight its key properties.
Model-agnostic meta-learning (MAML) is arguably one of the most popular meta-learning algorithms nowadays. Nevertheless, its performance on few-shot classification lags far behind many recent algorithms dedicated to the problem. In this paper, we point out several key facets of how to train MAML to excel in few-shot classification. First, we find that MAML needs a large number of gradient steps in its inner-loop update, which contradicts its common usage. Second, we find that MAML is sensitive to the class label assignments during meta-testing. Concretely, MAML meta-trains the initialization of an $N$-way classifier. During meta-testing, these $N$ ways then have $N!$ permutations to be paired with the $N$ novel classes. We find that these permutations lead to a huge variance in accuracy, making MAML unstable. Third, we investigate several ways to make MAML permutation-invariant, among which meta-training a single vector to initialize all $N$ weight vectors in the classification head performs best. On benchmark datasets such as MiniImageNet and TieredImageNet, our method, which we name UNICORN-MAML, performs on par with or even outperforms many recent few-shot classification algorithms, without sacrificing MAML's simplicity.
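The single-vector head initialization can be sketched as follows; `UnicornHead` and its interface are illustrative names, but the idea, one meta-trained vector replicated into all $N$ classifier weight vectors, follows the abstract.

```python
import torch
import torch.nn as nn

class UnicornHead(nn.Module):
    """Sketch of the permutation-invariant idea described above: a single
    meta-trained vector initializes all N weight vectors of the
    classification head, so no class/position pairing matters at
    meta-test time (names and details here are illustrative)."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(feat_dim))   # the single shared vector

    def init_task_head(self, n_way: int) -> torch.Tensor:
        # Every one of the N weight vectors starts from the same vector;
        # the inner-loop adaptation then differentiates them per class.
        return self.w.unsqueeze(0).repeat(n_way, 1)

# Example: build a 5-way linear head before inner-loop adaptation.
head = UnicornHead(feat_dim=64)
task_weights = head.init_task_head(n_way=5)            # shape (5, 64)
```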
Few-shot learning methods aim to train models that can easily adapt to previously unseen tasks based on only a few samples. One of the most popular and elegant few-shot learning approaches is Model-Agnostic Meta-Learning (MAML). The main idea behind this method is to learn general weights of a meta-model, which are further adapted to specific problems in a small number of gradient steps. However, the method's main limitation lies in the fact that the update procedure is realized by gradient-based optimization; consequently, MAML cannot always modify the weights to the required level in one or even a few gradient iterations. On the other hand, using many gradient steps results in a complex and time-consuming optimization procedure that is hard to train in practice and may lead to overfitting. In this paper, we propose HyperMAML, a novel generalization of MAML in which the training of the update procedure is also part of the model. Namely, in HyperMAML, instead of updating the weights with gradient descent, we use a trainable hypernetwork for this purpose. Consequently, in this framework the model can generate significant updates whose range is not limited to a fixed number of gradient steps. Experiments show that HyperMAML consistently outperforms MAML and performs comparably to other state-of-the-art techniques on a number of standard few-shot learning benchmarks.
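A minimal sketch of a hypernetwork-based update for the classification head, assuming pooled support features and a universal set of head weights; the architecture and names are illustrative, not HyperMAML's actual design.

```python
import torch
import torch.nn as nn

class UpdateHypernet(nn.Module):
    """Hypothetical hypernetwork update: instead of running gradient steps,
    a small network maps support-set features (with their labels) to an
    additive update for the classifier head weights."""

    def __init__(self, feat_dim: int, n_way: int, hidden: int = 128):
        super().__init__()
        self.n_way, self.feat_dim = n_way, feat_dim
        self.net = nn.Sequential(
            nn.Linear(feat_dim + n_way, hidden),
            nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, base_w, support_feats, support_labels):
        # base_w: (n_way, feat_dim) universal head weights
        # support_feats: (n_support, feat_dim); support_labels: (n_support,)
        one_hot = nn.functional.one_hot(support_labels, self.n_way).float()
        inp = torch.cat([support_feats, one_hot], dim=-1)
        per_class = torch.zeros_like(base_w)
        per_class = per_class.index_add(0, support_labels, self.net(inp))
        return base_w + per_class      # generated, not gradient-based, update
```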
Inspired by the concept of preconditioning, we propose a new method to increase the adaptation speed of gradient-based meta-learning methods without incurring extra parameters. We demonstrate that recasting the optimization problem into a non-linear least-squares formulation provides a principled way to actively enforce a $\textit{well-conditioned}$ parameter space for meta-learning models, based on the concepts of the condition number and local curvature. Our comprehensive evaluations show that the proposed method significantly outperforms its unconstrained counterpart, especially during the initial adaptation steps, while achieving comparable or better overall results on several few-shot classification tasks, creating the possibility of dynamically choosing the number of adaptation steps at inference time.
We introduce SubGD, a novel few-shot learning method based on the recent finding that stochastic gradient descent updates tend to live in a low-dimensional parameter subspace. In experimental and theoretical analyses, we show that models confined to a suitable predefined subspace generalize well for few-shot learning. A suitable subspace fulfills three criteria across the given tasks: it (a) allows reducing the training error by gradient flow, (b) leads to models that generalize well, and (c) can be identified by stochastic gradient descent. SubGD identifies these subspaces from an eigendecomposition of the auto-correlation matrix of update directions across different tasks. Demonstrably, we can identify low-dimensional suitable subspaces for few-shot learning of dynamical systems, which have varying properties described by one or a few parameters of the analytical system description. Such systems are ubiquitous among real-world applications in science and engineering. We experimentally corroborate the advantages of SubGD on three distinct dynamical-systems problem settings, where it significantly outperforms popular few-shot learning methods in terms of both sample efficiency and performance.
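The subspace identification and gradient projection can be sketched directly from the description above (flattened parameter vectors are assumed; a real network would need a more memory-efficient treatment of the auto-correlation matrix):

```python
import numpy as np

def subspace_from_updates(update_dirs, k):
    """Sketch of the subspace identification described above: eigendecompose
    the auto-correlation matrix of (flattened) update directions gathered
    from different training tasks and keep the top-k eigenvectors.
    Illustrative only; the full num_params x num_params matrix is built
    explicitly here, which is only feasible for small models."""
    U = np.stack(update_dirs)              # (num_tasks, num_params)
    C = U.T @ U / len(update_dirs)         # auto-correlation matrix
    eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    return eigvecs[:, -k:]                 # (num_params, k) subspace basis

def project_gradient(grad, basis):
    """Restrict a few-shot fine-tuning gradient to the learned subspace."""
    return basis @ (basis.T @ grad)
```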
A central capability of intelligent systems is the ability to continuously build upon previous experiences to speed up and enhance learning of new tasks. Two distinct research paradigms have studied this question. Meta-learning views this problem as learning a prior over model parameters that is amenable for fast adaptation on a new task, but typically assumes the tasks are available together as a batch. In contrast, online (regret based) learning considers a setting where tasks are revealed one after the other, but conventionally trains a single model without task-specific adaptation. This work introduces an online meta-learning setting, which merges ideas from both paradigms to better capture the spirit and practice of continual lifelong learning. We propose the follow the meta leader (FTML) algorithm which extends the MAML algorithm to this setting. Theoretically, this work provides an O(log T) regret guarantee with one additional higher order smoothness assumption (in comparison to the standard online setting). Our experimental evaluation on three different large-scale problems suggests that the proposed algorithm significantly outperforms alternatives based on traditional online learning approaches.
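A schematic sketch of one FTML-style round, under the assumptions that tasks arrive sequentially and that `maml_meta_step` and `adapt` helpers are available (both hypothetical here):

```python
import random

def ftml_round(meta_params, task_buffer, new_task, maml_meta_step, adapt,
               num_meta_steps=5):
    """Hedged sketch of a follow-the-meta-leader round: the meta
    initialization is refit on all tasks seen so far, and the newly
    revealed task is solved by adapting from that initialization."""
    task_buffer.append(new_task)
    for _ in range(num_meta_steps):
        # Meta-update on a mini-batch drawn from every task seen so far.
        batch = random.sample(task_buffer, min(4, len(task_buffer)))
        meta_params = maml_meta_step(meta_params, batch)
    task_params = adapt(meta_params, new_task)   # task-specific adaptation
    return meta_params, task_params
```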
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.
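For reference, the described objective corresponds to the following compact sketch, where each task exposes `support_loss` and `query_loss` callables (a hypothetical interface) and a single inner gradient step is used:

```python
import torch

def maml_meta_step(model_params, tasks, inner_lr=0.01, meta_lr=0.001):
    """Compact sketch of the MAML update described above. model_params are
    leaf tensors with requires_grad=True; data handling and loss functions
    are left abstract behind each task's callables."""
    meta_grads = [torch.zeros_like(p) for p in model_params]
    for task in tasks:
        # Inner loop: one gradient step on the support set, keeping the
        # graph so the outer gradient can flow through the adaptation.
        grads = torch.autograd.grad(task.support_loss(model_params),
                                    model_params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(model_params, grads)]
        # Outer loop: evaluate the adapted parameters on the query set.
        outer_grads = torch.autograd.grad(task.query_loss(adapted),
                                          model_params)
        for acc, g in zip(meta_grads, outer_grads):
            acc += g / len(tasks)
    # Meta-update of the shared initialization.
    with torch.no_grad():
        for p, g in zip(model_params, meta_grads):
            p -= meta_lr * g
    return model_params
```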
Modern machine learning requires system designers to specify aspects of the learning pipeline, such as losses, architectures, and optimizers. Meta-learning, or learning-to-learn, instead aims to learn those aspects, and promises to unlock greater capabilities with less manual effort. One particularly ambitious goal of meta-learning is to train general-purpose in-context learning algorithms from scratch, using only black-box models with minimal inductive bias. Such a model takes in training data, and produces test-set predictions across a wide range of problems, without any explicit definition of an inference model, training loss, or optimization algorithm. In this paper we show that Transformers and other black-box models can be meta-trained to act as general-purpose in-context learners. We characterize phase transitions between algorithms that generalize, algorithms that memorize, and algorithms that fail to meta-train at all, induced by changes in model size, number of tasks, and meta-optimization. We further show that the capabilities of meta-trained algorithms are bottlenecked by the accessible state size (memory) determining the next prediction, unlike standard models which are thought to be bottlenecked by parameter count. Finally, we propose practical interventions such as biasing the training distribution that improve the meta-training and meta-generalization of general-purpose learning algorithms.
We propose an adaptive curriculum training framework applicable to state-of-the-art meta-learning techniques for few-shot classification. Curriculum-based training generally attempts to achieve incremental concept learning by gradually increasing the training complexity. Since the goal of a meta-learner is to learn how to learn from as few samples as possible, the exact number of those samples (i.e., the size of the support set) is a natural proxy for the difficulty of a given task. We define a simple yet novel curriculum schedule that begins with a larger support size and progressively reduces it throughout training, eventually matching the desired shot size of the test setup. This proposed method boosts learning efficiency as well as generalization capability. Our experiments with the MAML algorithm on two few-shot image classification tasks show significant gains from the curriculum training framework. Ablation studies further confirm that the proposed method is independent of the model architecture as well as of the meta-learning hyperparameters.
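The curriculum itself reduces to a simple schedule over the support-set size; the linear annealing and the default values below are illustrative assumptions, not the paper's exact schedule.

```python
def support_size_schedule(epoch, total_epochs, start_shots=15, target_shots=1):
    """Sketch of the curriculum described above: start meta-training with a
    large support set and linearly anneal it down to the shot size used at
    test time."""
    progress = min(epoch / max(total_epochs - 1, 1), 1.0)
    shots = round(start_shots + progress * (target_shots - start_shots))
    return max(shots, target_shots)

# Example: a 5-way task sampled at epoch e would use
# support_size_schedule(e, total_epochs=60) support examples per class.
```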
A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on prior experience. Gradient (or optimization) based meta-learning has recently emerged as an effective approach for few-shot learning. In this formulation, meta-parameters are learned in the outer loop, while task-specific models are learned in the inner-loop, by using only a small amount of data from the current task. A key challenge in scaling these approaches is the need to differentiate through the inner loop learning process, which can impose considerable computational and memory burdens. By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner level optimization and not the path taken by the inner loop optimizer. This effectively decouples the meta-gradient computation from the choice of inner loop optimizer. As a result, our approach is agnostic to the choice of inner loop optimizer and can gracefully handle many gradient steps without vanishing gradients or memory constraints. Theoretically, we prove that implicit MAML can compute accurate meta-gradients with a memory footprint no more than that which is required to compute a single inner loop gradient and at no overall increase in the total computational cost. Experimentally, we show that these benefits of implicit MAML translate into empirical gains on few-shot image recognition benchmarks.
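A hedged sketch of the implicit meta-gradient computation: solve $(I + \lambda^{-1} H)x = g$ approximately with conjugate gradients, where $H$ is the Hessian of the inner loss at the adapted solution and $g$ the outer-loss gradient, using only Hessian-vector products. The interface (detached flat tensors, loss callables) is assumed for illustration.

```python
import torch

def implicit_meta_gradient(adapted_params, inner_loss, outer_grad,
                           reg_strength=1.0, cg_steps=5):
    """adapted_params: leaf tensors (requires_grad=True) at the inner solution.
    inner_loss: callable mapping those tensors to the inner training loss.
    outer_grad: detached, flattened gradient of the outer loss at the solution."""

    def hvp(vec):
        # (I + H / reg_strength) @ vec, using double backward for H @ vec.
        grads = torch.autograd.grad(inner_loss(adapted_params), adapted_params,
                                    create_graph=True)
        flat = torch.cat([g.reshape(-1) for g in grads])
        hv = torch.autograd.grad(flat @ vec, adapted_params)
        return vec + torch.cat([g.reshape(-1) for g in hv]) / reg_strength

    # Plain conjugate-gradient iterations on the linear system.
    x = torch.zeros_like(outer_grad)
    r = outer_grad.clone()
    p = r.clone()
    for _ in range(cg_steps):
        Ap = hvp(p)
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x    # approximate (flattened) meta-gradient
```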
Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, with endeavours to extend this knowledge without targeting the original task resulting in a catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern (1) a taxonomy and extensive overview of the state-of-the-art; (2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner; (3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks, considering Tiny Imagenet and large-scale unbalanced iNaturalist and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compare methods in terms of required memory, computation time and storage.
Few-shot learning aims to fast adapt a deep model from a few examples. While pre-training and meta-training can create deep models powerful for few-shot generalization, we find that pre-training and meta-training focus respectively on cross-domain transferability and cross-task transferability, which restricts their data efficiency in the entangled settings of domain shift and task shift. We thus propose the Omni-Training framework to seamlessly bridge pre-training and meta-training for data-efficient few-shot learning. Our first contribution is a tri-flow Omni-Net architecture. Besides the joint representation flow, Omni-Net introduces two parallel flows for pre-training and meta-training, responsible for improving domain transferability and task transferability respectively. Omni-Net further coordinates the parallel flows by routing their representations via the joint-flow, enabling knowledge transfer across flows. Our second contribution is the Omni-Loss, which introduces a self-distillation strategy separately on the pre-training and meta-training objectives for boosting knowledge transfer throughout different training stages. Omni-Training is a general framework to accommodate many existing algorithms. Evaluations justify that our single framework consistently and clearly outperforms the individual state-of-the-art methods on both cross-task and cross-domain settings in a variety of classification, regression and reinforcement learning problems.
We propose a unified look at jointly learning multiple vision tasks and visual domains through universal representations, a single deep neural network. Learning multiple problems simultaneously involves minimizing a weighted sum of multiple loss functions with different magnitudes and characteristics, which results in an unbalanced state in which one loss dominates the optimization and in poorer results compared with learning a separate model for each problem. To this end, we propose distilling the knowledge of multiple task- and domain-specific networks into a single deep neural network through small-capacity adapters. We rigorously show that universal representations achieve state-of-the-art performance in learning multiple dense prediction problems on NYU-v2 and Cityscapes, multiple image classification problems from diverse domains in the Visual Decathlon dataset, and cross-domain few-shot learning in MetaDataset. Finally, we also conduct multiple analyses through ablation and qualitative studies.
Model-agnostic meta-learning (MAML) is currently one of the dominant approaches for few-shot meta-learning. Despite its effectiveness, the optimization of MAML can be challenging due to the innate bilevel problem structure. Specifically, the loss landscape of MAML is much more complex, with possibly more saddle points and local minima, than its empirical-risk-minimization counterpart. To address this challenge, we leverage the recently invented sharpness-aware minimization and develop a sharpness-aware MAML approach that we term Sharp-MAML. We empirically demonstrate that Sharp-MAML and its computationally efficient variant can outperform popular existing MAML baselines (e.g., $+12\%$ accuracy on Mini-ImageNet). We complement the empirical study with a convergence-rate analysis and a generalization bound for Sharp-MAML. To the best of our knowledge, this is the first empirical and theoretical study of sharpness-aware minimization in the context of bilevel learning. The code is available at https://github.com/mominabbass/sharp-maml.
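One way the sharpness-aware outer update could look, as a hedged sketch (not necessarily the exact Sharp-MAML variant): perturb the initialization toward the locally worst case within a small ball, then apply the gradient computed at that perturbed point.

```python
import torch

def sam_meta_update(meta_params, meta_loss_fn, rho=0.05, meta_lr=0.001):
    """Hypothetical sharpness-aware outer update. meta_params are leaf
    tensors with requires_grad=True; meta_loss_fn maps a parameter list to
    the (already adapted) meta-objective value."""
    # Gradient of the meta-objective at the current initialization.
    grads = torch.autograd.grad(meta_loss_fn(meta_params), meta_params)
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    # Ascend to the worst-case point within a rho-ball around the init.
    perturbed = [p + rho * g / norm for p, g in zip(meta_params, grads)]

    # The gradient taken at the perturbed point drives the actual update.
    sharp_grads = torch.autograd.grad(meta_loss_fn(perturbed), perturbed)
    with torch.no_grad():
        for p, g in zip(meta_params, sharp_grads):
            p -= meta_lr * g
    return meta_params
```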
In this paper, we consider the framework of multi-task representation (MTR) learning, where the goal is to use source tasks to learn a representation that reduces the sample complexity of solving a target task. We first review recent advances in MTR theory and show that they can provide novel insights into popular meta-learning algorithms when analyzed within this framework. In particular, we highlight a fundamental difference between gradient-based and metric-based algorithms in practice and put forward a theoretical analysis to explain it. Finally, we use the derived insights to improve the performance of meta-learning methods via a new spectral-based regularization term and confirm its efficiency through experimental studies on few-shot classification benchmarks. To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of MTR theory into practice for the task of few-shot classification.
Recently, it has been observed that transfer learning solutions might be all we need to solve many few-shot learning benchmarks, raising important questions about when and how meta-learning algorithms should be deployed. In this paper, we seek to clarify these questions by 1. proposing a novel metric, the diversity coefficient, to measure the diversity of tasks in a few-shot learning benchmark, and 2. comparing MAML and transfer learning under fair conditions (same architecture, same optimizer, and all models trained to convergence). Using the diversity coefficient, we show that the popular MiniImageNet and CIFAR-FS few-shot learning benchmarks have low diversity. This novel insight contextualizes the claim that transfer learning solutions are better than meta-learned ones: it holds in the low-diversity regime under a fair comparison. Specifically, we empirically find that a low diversity coefficient correlates with a high similarity between the solutions learned by transfer learning and by MAML, both in meta-test accuracy and in classification-layer similarity (using feature-based distance metrics such as SVCCA, PWCCA, CKA, and OPD). To further support our claim, we find that this meta-test accuracy equivalence holds even as the model size varies. We therefore conclude that in the low-diversity regime, MAML and transfer learning have equivalent meta-test performance when compared fairly. We also hope our work inspires more thoughtful construction and quantitative evaluation of meta-learning benchmarks.
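A hedged sketch of a diversity-coefficient-style estimate, using linear CKA as the similarity between task feature representations; the paper's exact estimator and distance may differ.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA similarity between two (num_examples, num_features) matrices."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

def diversity_coefficient(task_features):
    """Illustrative diversity score: average pairwise dissimilarity
    (here 1 - CKA) between feature representations of sampled tasks."""
    dists = [1.0 - linear_cka(task_features[i], task_features[j])
             for i in range(len(task_features))
             for j in range(i + 1, len(task_features))]
    return float(np.mean(dists))
```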
Meta-learning algorithms for few-shot learning aim to train neural networks capable of generalizing to novel tasks using only a few examples. Early stopping is critical for performance, halting model training when it reaches optimal generalization to the new task distribution. Early-stopping mechanisms in meta-learning typically rely on measuring model performance on labeled examples from a meta-validation set drawn from the training (source) dataset. This is problematic in few-shot transfer learning settings, where the meta-test set comes from a different target dataset (OOD) and may exhibit a large distributional shift with respect to the meta-validation set. In this work, we propose Activation-Based Early stopping (ABE), an alternative to validation-based early stopping for meta-learning. Specifically, we analyze the evolution, during training, of the neural activations at each hidden layer on a small set of unlabeled support examples from a single task of the target task distribution, since this constitutes a minimal and reasonably accessible source of information about the target problem. Our experiments show that simple, label-agnostic statistics of the activations offer an effective way to estimate how target generalization evolves over time. At each hidden layer, we characterize the activation distribution by its first- and second-order moments, then further summarize them along the feature dimension, yielding a compact yet intuitive characterization in a four-dimensional space. Detecting when, over training time and at which layer, the target activation trajectory diverges from that of the source data allows us to perform early stopping and improve generalization across a large array of few-shot transfer learning settings, spanning different algorithms, source datasets, and target datasets.
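A sketch of the per-layer activation summary and a simple label-agnostic stopping rule; the four-dimensional signature follows the description above, while the divergence criterion below is an illustrative assumption rather than the paper's exact rule.

```python
import numpy as np

def layer_signature(activations):
    """Four-dimensional per-layer summary: first- and second-order moments
    over examples, aggregated again along the feature dimension.
    activations: array of shape (num_examples, num_features)."""
    mu = activations.mean(axis=0)          # per-feature mean
    sigma = activations.std(axis=0)        # per-feature std
    return np.array([mu.mean(), mu.std(), sigma.mean(), sigma.std()])

def stopping_epoch(source_traj, target_traj):
    """Illustrative stopping rule: pick the epoch at which the target
    activation trajectory is closest to the source one before drifting away.
    Each trajectory is a list of per-epoch layer signatures."""
    gaps = [np.linalg.norm(s - t) for s, t in zip(source_traj, target_traj)]
    return int(np.argmin(gaps))
```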