Model-Agnostic Meta-Learning (MAML), a popular gradient-based meta-learning framework, assumes that each task or instance contributes equally to the meta-learner. Hence, it fails to address the domain shift between base and novel classes in few-shot learning. In this work, we propose a novel robust meta-learning algorithm, NestedMAML, which learns to assign weights to training tasks or instances. We treat the weights as hyper-parameters and iteratively optimize them using a small set of validation tasks in a nested bi-level optimization approach (in contrast to the standard bi-level optimization in MAML). We then apply NestedMAML in the meta-training stage, which involves (1) several tasks sampled from a distribution different from the meta-test task distribution, or (2) some data samples with noisy labels. Extensive experiments on synthetic and real-world datasets demonstrate that NestedMAML effectively mitigates the effects of "unwanted" tasks or instances, leading to significant improvements over state-of-the-art robust meta-learning methods.
Current deep neural networks (DNNs) can easily overfit to biased training data with corrupted labels or class imbalance. The sample re-weighting strategy is commonly used to alleviate this issue by designing a weighting function mapping from training loss to sample weight, and then iterating between weight recalculation and classifier updating. Current approaches, however, need to manually pre-specify the weighting function as well as its additional hyper-parameters. This makes them fairly hard to apply generally in practice, since proper weighting schemes vary significantly with the investigated problem and training data. To address this issue, we propose a method capable of adaptively learning an explicit weighting function directly from data. The weighting function is an MLP with one hidden layer, constituting a universal approximator to almost any continuous function, which makes the method able to fit a wide range of weighting functions, including those assumed in conventional research. Guided by a small amount of unbiased meta-data, the parameters of the weighting function can be finely updated simultaneously with the learning process of the classifiers. Synthetic and real experiments substantiate the capability of our method to achieve proper weighting functions in class-imbalance and noisy-label cases, fully complying with the common settings in traditional methods, as well as in more complicated scenarios beyond the conventional cases. This naturally leads to better accuracy than other state-of-the-art methods. Source code is available at https://github.com/xjtushujun/meta-weight-net. (We call the training data biased when they are generated from a joint sample-label distribution that deviates from the distribution of the evaluation/test set [1].)
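As a rough illustration of the weighting function described above, the sketch below defines a one-hidden-layer MLP mapping a per-sample loss to a weight in [0, 1] and uses it to form a weighted training loss. This is a minimal reconstruction, not the released Meta-Weight-Net code (see the linked repository for that); the class name `MetaWeightNet`, the hidden width, and the toy batch are illustrative assumptions, and the meta-update that fits the weighting net on the unbiased meta-data would follow a bilevel scheme like the one sketched after the next abstract.

```python
import torch
import torch.nn as nn

class MetaWeightNet(nn.Module):
    """One-hidden-layer MLP: per-sample loss -> per-sample weight in [0, 1]."""
    def __init__(self, hidden_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, losses: torch.Tensor) -> torch.Tensor:
        # losses: (batch,) -> weights: (batch,)
        return self.net(losses.unsqueeze(1)).squeeze(1)

# Toy usage: weight a batch of per-sample cross-entropy losses (hypothetical shapes).
weight_net = MetaWeightNet()
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
per_sample_loss = nn.functional.cross_entropy(logits, targets, reduction="none")
weights = weight_net(per_sample_loss.detach())   # weights are a function of the loss values
weighted_loss = (weights * per_sample_loss).mean()
```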
Deep neural networks have been shown to be very powerful modeling tools for many supervised learning tasks involving complex input patterns. However, they can also easily overfit to training set biases and label noises. In addition to various regularizers, example reweighting algorithms are popular solutions to these problems, but they require careful tuning of additional hyperparameters, such as example mining schedules and regularization hyperparameters. In contrast to past reweighting methods, which typically consist of functions of the cost value of each example, in this work we propose a novel meta-learning algorithm that learns to assign weights to training examples based on their gradient directions. To determine the example weights, our method performs a meta gradient descent step on the current mini-batch example weights (which are initialized from zero) to minimize the loss on a clean unbiased validation set. Our proposed method can be easily implemented on any type of deep network, does not require any additional hyperparameter tuning, and achieves impressive performance on class imbalance and corrupted label problems where only a small amount of clean validation data is available.
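The reweighting step described above can be illustrated on a toy linear model. The following is a minimal sketch of my own (not the paper's implementation): per-example weights `eps` start at zero, one differentiable SGD step of the weighted training loss is taken, the clean-validation loss is evaluated at the resulting parameters, and the (clipped, normalized) negative gradient of that loss with respect to `eps` is used as the example weights. All tensor shapes and the regression objective are illustrative assumptions.

```python
import torch

torch.manual_seed(0)

# Toy data: a (possibly noisy) training mini-batch and a small clean validation set.
x_tr, y_tr = torch.randn(16, 5), torch.randn(16)
x_val, y_val = torch.randn(8, 5), torch.randn(8)

w = torch.zeros(5, requires_grad=True)       # model parameters (linear regressor)
eps = torch.zeros(16, requires_grad=True)    # per-example weights, initialized at zero
lr = 0.1

# 1) Weighted training loss and a differentiable "virtual" SGD step.
per_example_loss = (x_tr @ w - y_tr) ** 2
weighted_loss = (eps * per_example_loss).sum()
grad_w = torch.autograd.grad(weighted_loss, w, create_graph=True)[0]
w_virtual = w - lr * grad_w                  # still a function of eps

# 2) Clean validation loss at the virtual parameters.
val_loss = ((x_val @ w_virtual - y_val) ** 2).mean()

# 3) Meta-gradient step on eps: examples whose up-weighting reduces the
#    validation loss the most receive the largest weights.
grad_eps = torch.autograd.grad(val_loss, eps)[0]
weights = torch.clamp(-grad_eps, min=0.0)
if weights.sum() > 0:
    weights = weights / weights.sum()

# 4) Actual model update with the learned example weights
#    (loss recomputed so this step has its own autograd graph).
final_loss = (weights.detach() * (x_tr @ w - y_tr) ** 2).sum()
final_loss.backward()
with torch.no_grad():
    w -= lr * w.grad
```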
We introduce a framework based on bilevel programming that unifies gradient-based hyperparameter optimization and meta-learning. We show that an approximate version of the bilevel problem can be solved by taking into explicit account the optimization dynamics for the inner objective. Depending on the specific setting, the outer variables take either the meaning of hyperparameters in a supervised learning problem or parameters of a meta-learner. We provide sufficient conditions under which solutions of the approximate problem converge to those of the exact problem. We instantiate our approach for meta-learning in the case of deep learning where representation layers are treated as hyperparameters shared across a set of training episodes. In experiments, we confirm our theoretical findings, present encouraging results for few-shot learning and contrast the bilevel approach against classical approaches for learning-to-learn.
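A minimal sketch of the "differentiate through the inner optimization dynamics" idea on a toy instance where the outer variable is an L2-regularization strength: the inner gradient steps are unrolled with `create_graph=True`, so the validation loss can be differentiated with respect to the hyperparameter. The toy objective, step counts, and variable names are assumptions for illustration, not the paper's formulation.

```python
import torch

torch.manual_seed(0)
x_tr, y_tr = torch.randn(32, 5), torch.randn(32)
x_val, y_val = torch.randn(16, 5), torch.randn(16)

log_lam = torch.tensor(0.0, requires_grad=True)  # outer variable: log of the L2 strength
inner_lr, T = 0.1, 20

# Inner problem: ridge regression, solved approximately by T unrolled gradient steps.
w = torch.zeros(5, requires_grad=True)
for _ in range(T):
    train_loss = ((x_tr @ w - y_tr) ** 2).mean() + torch.exp(log_lam) * (w ** 2).sum()
    (g,) = torch.autograd.grad(train_loss, w, create_graph=True)
    w = w - inner_lr * g                          # keeps the graph back to log_lam

# Outer objective: validation loss of the (approximate) inner solution.
val_loss = ((x_val @ w - y_val) ** 2).mean()
(hyper_grad,) = torch.autograd.grad(val_loss, log_lam)
print("hypergradient d val_loss / d log_lambda:", hyper_grad.item())
```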
A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on prior experience. Gradient (or optimization) based meta-learning has recently emerged as an effective approach for few-shot learning. In this formulation, meta-parameters are learned in the outer loop, while task-specific models are learned in the inner-loop, by using only a small amount of data from the current task. A key challenge in scaling these approaches is the need to differentiate through the inner loop learning process, which can impose considerable computational and memory burdens. By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner level optimization and not the path taken by the inner loop optimizer. This effectively decouples the meta-gradient computation from the choice of inner loop optimizer. As a result, our approach is agnostic to the choice of inner loop optimizer and can gracefully handle many gradient steps without vanishing gradients or memory constraints. Theoretically, we prove that implicit MAML can compute accurate meta-gradients with a memory footprint no more than that which is required to compute a single inner loop gradient and at no overall increase in the total computational cost. Experimentally, we show that these benefits of implicit MAML translate into empirical gains on few-shot image recognition benchmarks.
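The implicit meta-gradient described above can be made concrete on a quadratic toy task, where the inner problem has a closed-form solution and the Hessian is explicit. The sketch below is an illustration under these toy assumptions, not the authors' implementation: it computes the meta-gradient $(I + \lambda^{-1}\nabla^2 \hat{L})^{-1}\nabla L_{\mathrm{test}}$ with conjugate gradient, using only Hessian-vector products (i.e., without differentiating through the inner optimization path), and checks it against the exact value.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam = 5, 2.0

# Toy quadratic task loss  L_task(w) = 0.5 w^T A w - b^T w  (A symmetric PD),
# so its Hessian is A and everything below can be checked in closed form.
M = rng.normal(size=(d, d))
A = M @ M.T + np.eye(d)
b = rng.normal(size=d)
theta = rng.normal(size=d)                     # meta-parameters

# Inner solution of  L_task(w) + (lam/2)||w - theta||^2  (closed form for this toy).
w_star = np.linalg.solve(A + lam * np.eye(d), b + lam * theta)

# Gradient of a toy outer/test loss  L_test(w) = 0.5 ||w - w_target||^2  at w*.
w_target = rng.normal(size=d)
g_test = w_star - w_target

def hvp(v):
    """Hessian-vector product of L_task; for a quadratic this is just A @ v."""
    return A @ v

def conjugate_gradient(matvec, rhs, iters=50, tol=1e-10):
    """Solve matvec(x) = rhs with CG, using only matrix-vector products."""
    x = np.zeros_like(rhs)
    r = rhs - matvec(x)
    p = r.copy()
    for _ in range(iters):
        Ap = matvec(p)
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

# Implicit meta-gradient: (I + (1/lam) * Hessian)^{-1} grad L_test(w*).
meta_grad = conjugate_gradient(lambda v: v + hvp(v) / lam, g_test)

# Sanity check against the exact expression for this quadratic toy problem.
exact = lam * np.linalg.solve(A + lam * np.eye(d), g_test)
assert np.allclose(meta_grad, exact, atol=1e-6)
```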
Model-agnostic meta-learning (MAML) is currently one of the dominant approaches for few-shot meta-learning. Despite its effectiveness, the optimization of MAML can be challenging due to its innate bilevel problem structure. Specifically, the loss landscape of MAML is much more complex, with possibly more saddle points and local minima, than its empirical-risk-minimization counterpart. To address this challenge, we leverage the recently proposed sharpness-aware minimization and develop a sharpness-aware MAML approach that we term Sharp-MAML. We empirically demonstrate that Sharp-MAML and its computationally efficient variant can outperform popular existing MAML baselines (e.g., +12% accuracy on Mini-ImageNet). We complement the empirical study with a convergence-rate analysis and a generalization bound for Sharp-MAML. To the best of our knowledge, this is the first empirical and theoretical study on sharpness-aware minimization in the context of bilevel learning. The code is available at https://github.com/mominabbass/sharp-maml.
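For context, the sharpness-aware update that Sharp-MAML builds on looks roughly as follows: ascend to a nearby worst-case point within a radius rho, then descend using the gradient evaluated there. The sketch shows plain SAM on a generic parameter vector; Sharp-MAML applies this kind of update to the MAML objective (at the lower level, the upper level, or both, depending on the variant). The function name and the toy objective are illustrative assumptions, not the released code.

```python
import torch

def sam_step(params, loss_fn, lr=0.01, rho=0.05):
    """One sharpness-aware update of `params` for a generic loss_fn(params)."""
    loss = loss_fn(params)
    (g,) = torch.autograd.grad(loss, params)
    eps = rho * g / (g.norm() + 1e-12)           # ascend to the worst nearby point
    perturbed_loss = loss_fn(params + eps)       # loss at the perturbed parameters
    (g_sharp,) = torch.autograd.grad(perturbed_loss, params)
    with torch.no_grad():
        params -= lr * g_sharp                   # descend with the "sharp" gradient
    return loss.item()

# Toy usage on a non-convex objective (illustrative only).
theta = torch.tensor([2.0, -1.0], requires_grad=True)
for _ in range(100):
    sam_step(theta, lambda p: (p ** 2).sum() + torch.sin(5 * p).sum())
```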
Few-shot classification (FSC) requires training models using only a few (typically 1 to 5) data points per class. Meta-learning has proven able to learn a parameterized model for FSC by training on various other classification tasks. In this work, we propose PLATINUM (semi-suPervised modeL Agnostic meTa-learnIng usiNg sUbmodular Mutual information), a novel semi-supervised model-agnostic meta-learning framework that uses submodular mutual information (SMI) functions to boost the performance of FSC. PLATINUM leverages unlabeled data in the inner and outer loops using SMI functions during meta-training and obtains richer meta-learned parameterizations for meta-testing. We study the performance of PLATINUM in two scenarios: (1) where the unlabeled data points belong to the same set of classes as the labeled set of a given episode, and (2) where there exist out-of-distribution classes that do not belong to the labeled set. We evaluate our method on various settings of the miniImageNet, tieredImageNet, and Fewshot-CIFAR100 datasets. Our experiments show that PLATINUM outperforms MAML and semi-supervised approaches such as pseudo-labeling for semi-supervised FSC, especially when the ratio of labeled examples per class is small.
Model-agnostic meta-learning (MAML) is one of the most successful meta-learning techniques. It uses gradient descent to learn what is common across tasks, enabling the model to learn a meta-initialization of its own parameters that can quickly adapt to new tasks with a small amount of labeled training data. A key challenge in few-shot learning is task uncertainty: although a strong prior can be obtained by meta-learning over a large number of tasks, a precise model for a new task cannot be guaranteed because its training set is usually too small. In this study, we first propose a new method that, when choosing initialization parameters, lets the task-specific learner adaptively learn to select the initialization that minimizes the loss on the new task. We then propose two improvements to the meta-loss: method 1 generates weights by comparing meta-loss differences, improving accuracy when there are few classes, while method 2 introduces the homoscedastic uncertainty of each task to weight the multiple losses on top of the original gradient descent, enhancing generalization to novel classes while ensuring improved accuracy. Compared with previous gradient-based meta-learning methods, our model performs better on regression tasks and few-shot classification, and it improves the robustness of the model to the learning rate and to the query sets at meta-test time.
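The second method above weights each task's loss by a learned homoscedastic uncertainty. A common formulation of this kind of weighting (my reconstruction in the spirit of standard uncertainty weighting, not necessarily the exact form used in the paper) learns a log-variance s_i per task and minimizes sum_i exp(-s_i) * L_i + s_i:

```python
import torch

def uncertainty_weighted_loss(task_losses, log_vars):
    """Weight per-task losses by learned homoscedastic uncertainty.

    total = sum_i exp(-s_i) * L_i + s_i, with s_i = log sigma_i^2 learnable,
    so tasks deemed noisier are automatically down-weighted.
    """
    total = 0.0
    for loss, s in zip(task_losses, log_vars):
        total = total + torch.exp(-s) * loss + s
    return total

# Toy usage: two task losses and their learnable log-variances.
log_vars = torch.zeros(2, requires_grad=True)
task_losses = [torch.tensor(1.3, requires_grad=True),
               torch.tensor(0.4, requires_grad=True)]
loss = uncertainty_weighted_loss(task_losses, log_vars)
loss.backward()   # gradients flow to both the task losses and the log-variances
```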
A central capability of intelligent systems is the ability to continuously build upon previous experiences to speed up and enhance learning of new tasks. Two distinct research paradigms have studied this question. Meta-learning views this problem as learning a prior over model parameters that is amenable to fast adaptation on a new task, but typically assumes the tasks are available together as a batch. In contrast, online (regret based) learning considers a setting where tasks are revealed one after the other, but conventionally trains a single model without task-specific adaptation. This work introduces an online meta-learning setting, which merges ideas from both paradigms to better capture the spirit and practice of continual lifelong learning. We propose the follow the meta leader (FTML) algorithm, which extends the MAML algorithm to this setting. Theoretically, this work provides an O(log T) regret guarantee with one additional higher-order smoothness assumption (in comparison to the standard online setting). Our experimental evaluation on three different large-scale problems suggests that the proposed algorithm significantly outperforms alternatives based on traditional online learning approaches.
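A minimal sketch of the follow-the-meta-leader idea on a toy stream of scalar regression tasks, under assumptions of my own (one inner gradient step per task, and a few meta-gradient steps per round as an approximation of the argmin over all tasks seen so far): at every round the newly revealed task is added to a buffer and the meta-initialization is re-fit on the whole buffer.

```python
import torch

torch.manual_seed(0)
inner_lr, meta_lr = 0.05, 0.05

def maml_objective(theta, x, y, inner_lr):
    """Task loss after one differentiable inner gradient step from theta."""
    loss = ((x * theta - y) ** 2).mean()
    (g,) = torch.autograd.grad(loss, theta, create_graph=True)
    adapted = theta - inner_lr * g
    return ((x * adapted - y) ** 2).mean()

theta = torch.tensor(0.0, requires_grad=True)  # meta-parameters (toy scalar model y = theta * x)
task_buffer = []                               # all tasks revealed so far

for t in range(20):
    # A new task arrives: y = a_t * x with a task-specific slope.
    a_t = torch.randn(()) * 2.0
    x = torch.randn(16)
    task_buffer.append((x, a_t * x))

    # "Follow the meta leader": a few meta-gradient steps on the sum of MAML
    # objectives over every task seen so far approximate the argmin.
    for _ in range(5):
        meta_loss = sum(maml_objective(theta, x_k, y_k, inner_lr)
                        for x_k, y_k in task_buffer)
        (meta_grad,) = torch.autograd.grad(meta_loss, theta)
        with torch.no_grad():
            theta -= meta_lr * meta_grad
```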
We derive a novel information-theoretic analysis of the generalization property of meta-learning algorithms. Concretely, our analysis provides a unified understanding of both the conventional learning-to-learn framework and the modern model-agnostic meta-learning (MAML) algorithm. Moreover, we provide a data-dependent generalization bound for a stochastic variant of MAML, which is non-vacuous for deep few-shot learning. Compared to previous bounds that depend on the squared norm of gradients, empirical validation on both simulated data and a well-known few-shot benchmark shows that our bound is orders of magnitude tighter in most situations.
We analyze a general class of bilevel problems in which the upper-level problem consists in the minimization of a smooth objective function and the lower-level problem is to find the fixed point of a smooth contraction map. This type of problem includes instances of meta-learning, equilibrium models, hyperparameter optimization, and data-poisoning adversarial attacks. Several recent works have proposed algorithms that warm-start the lower-level problem, i.e., they use the previous lower-level approximate solution as a starting point for the lower-level solver. This warm-start procedure allows one to improve the sample complexity in both the stochastic and deterministic settings, achieving in some cases the order-wise optimal sample complexity. However, there are situations, e.g., meta-learning and equilibrium models, in which the warm-start procedure is not well suited or is ineffective. In this work we show that, without warm-start, it is still possible to achieve order-wise (near-)optimal sample complexity. In particular, we propose a simple method that uses stochastic fixed-point iterations at the lower level and projected inexact gradient descent at the upper level, and that reaches an $\epsilon$-stationary point using $O(\epsilon^{-2})$ and $\tilde{O}(\epsilon^{-1})$ samples for the stochastic and deterministic settings, respectively. Finally, compared to methods that use warm-start, our approach yields a simpler analysis that does not need to study the coupled interactions between the upper- and lower-level iterates.
In recent years, a variety of gradient-based methods have been developed to solve bi-level optimization (BLO) problems in machine learning and computer vision. However, the theoretical correctness and practical effectiveness of these existing approaches always rely on certain restrictive conditions (e.g., the lower-level singleton, LLS), which can hardly be satisfied in real-world applications. Moreover, previous literature only proves theoretical results based on specific iteration strategies, and thus lacks a general recipe to uniformly analyze the convergence behavior of different gradient-based BLO methods. In this work, we formulate BLO from an optimistic bi-level viewpoint and establish a new gradient-based algorithmic framework, named Bi-level Descent Aggregation (BDA), to partially address the above issues. Specifically, BDA provides a modular structure that hierarchically aggregates the upper- and lower-level subproblems to generate our bi-level iterative dynamics. Theoretically, we establish a general convergence analysis template and derive a new proof recipe to investigate the essential theoretical properties of gradient-based BLO methods. Furthermore, this work systematically explores the convergence behavior of BDA in different optimization scenarios, i.e., considering the various solution qualities (global/local/stationary solutions) returned by solving the approximation subproblems. Extensive experiments verify our theoretical results and demonstrate the superiority of the proposed algorithm on hyper-parameter optimization and meta-learning tasks. Source code is available at https://github.com/vis-opt-group/bda.
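The aggregation idea can be illustrated with a toy bilevel problem: each lower-level step descends a convex combination of the lower- and upper-level gradients with respect to the lower-level variable, and the upper-level gradient is obtained by differentiating through the unrolled dynamics. This is a simplified reconstruction with a fixed aggregation coefficient (BDA itself uses a schedule and comes with a dedicated convergence analysis); all objectives and constants below are illustrative assumptions.

```python
import torch

def lower_obj(x, y):   # f(x, y): lower-level objective (toy quadratic)
    return ((y - x) ** 2).sum()

def upper_obj(x, y):   # F(x, y): upper-level objective (toy quadratic)
    return ((y - 1.0) ** 2).sum() + 0.1 * (x ** 2).sum()

x = torch.zeros(3, requires_grad=True)   # upper-level variable
alpha, step, K = 0.3, 0.1, 30

# Aggregated lower-level dynamics: each inner step descends a convex
# combination of the lower- and upper-level gradients w.r.t. y.
y = torch.zeros(3, requires_grad=True)
for _ in range(K):
    g_f = torch.autograd.grad(lower_obj(x, y), y, create_graph=True)[0]
    g_F = torch.autograd.grad(upper_obj(x, y), y, create_graph=True)[0]
    y = y - step * ((1 - alpha) * g_f + alpha * g_F)

# Upper-level update direction: differentiate F(x, y_K) through the unrolled dynamics.
outer = upper_obj(x, y)
(hyper_grad,) = torch.autograd.grad(outer, x)
print("gradient of the upper objective w.r.t. x:", hyper_grad)
```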
In this paper, we study the generalization properties of model-agnostic meta-learning (MAML) algorithms for supervised learning problems. We focus on the setting in which we train the MAML model over $m$ tasks, each with $n$ data points, and characterize its generalization error from two viewpoints. First, assuming the new task at test time is one of the training tasks, we show that, for strongly convex objective functions, the expected excess population loss is bounded by $\mathcal{O}(1/mn)$. Second, we consider the MAML algorithm's generalization to unseen tasks and show that the resulting generalization error depends on the total variation distance between the underlying distribution of the new task and the distribution of the tasks observed during training. Our proof techniques rely on the connections between algorithmic stability and the generalization bounds of algorithms. In particular, we propose a new definition of stability for meta-learning algorithms, which allows us to capture the roles of both the number of tasks $m$ and the number of data points per task $n$ in the generalization error of MAML.
Bilevel optimization, the problem of minimizing a value function that involves the arg-minimum of another function, appears in many areas of machine learning. In large-scale empirical risk minimization settings, where the number of samples is huge, it is crucial to develop stochastic methods, which only use a few samples at a time to make progress. However, computing the gradient of the value function involves solving a linear system, which makes it difficult to derive unbiased stochastic estimates. To overcome this problem we introduce a novel framework in which the solution of the inner problem, the solution of the linear system, and the main variable evolve at the same time. These directions are written as a sum, making it straightforward to derive unbiased estimates. The simplicity of our approach allows us to develop global variance-reduction algorithms, where the dynamics of all variables are subject to variance reduction. We demonstrate that SABA, an adaptation of the celebrated SAGA algorithm to our framework, has an $O(\frac{1}{T})$ convergence rate and achieves linear convergence under the Polyak-Łojasiewicz assumption. This is the first stochastic algorithm for bilevel optimization that verifies either of these properties. Numerical experiments validate the usefulness of our method.
Partial label learning (PLL) is a typical weakly supervised learning framework in which each training instance is associated with a candidate label set, among which only one label is valid. To solve the PLL problem, methods typically try to disambiguate the candidate sets either by using prior knowledge, such as structural information of the training data, or by refining the model outputs in a self-training manner. Unfortunately, these methods often fail to obtain favorable performance due to the lack of prior information or to unreliable predictions in the early stages of model training. In this paper, we propose a new framework for partial label learning with meta-objective-guided disambiguation (MOGD), which aims to recover the ground-truth label from the candidate label set by solving a meta objective on a small validation set. Specifically, to mitigate the negative impact of false-positive labels, we re-weight the candidate labels based on the meta loss on the validation set. The classifier is then trained by minimizing the weighted cross-entropy loss. The proposed method can be easily implemented with various deep networks using the ordinary SGD optimizer. Theoretically, we prove the convergence property of the meta objective and derive estimation error bounds for the proposed method. Extensive experiments on various benchmark datasets and real-world PLL datasets demonstrate that the proposed method achieves competitive performance compared with state-of-the-art methods.
Model-agnostic meta-learning (MAML) is arguably one of the most popular meta-learning algorithms today. Nevertheless, its performance on few-shot classification is far behind that of many recent algorithms dedicated to the problem. In this paper, we point out several key facets of how to train MAML to excel in few-shot classification. First, we find that MAML needs a large number of gradient steps in its inner-loop update, which contradicts its common usage. Second, we find that MAML is sensitive to the class-label assignments during meta-testing. Concretely, MAML meta-trains the initialization of an $N$-way classifier. During meta-testing, these $N$ ways then have "$N!$" different permutations of being paired with a few-shot task of $N$ novel classes. We find that these permutations lead to a huge variance in accuracy, making MAML unstable. Third, we investigate several ways of making MAML permutation-invariant, among which meta-training a single vector to initialize all of the $N$ weight vectors in the classification head performs best. On benchmark datasets such as MiniImageNet and TieredImageNet, our method, which we name UNICORN-MAML, performs on par with or even outperforms many recent few-shot classification algorithms, without sacrificing MAML's simplicity.
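The permutation-invariant head initialization described above is simple to sketch: a single meta-learned vector is copied into all N rows of the task's classification head, so relabeling the N novel classes cannot change the starting point, and the copies only become distinct during inner-loop adaptation. The class and method names below are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class SharedInitHead(nn.Module):
    """N-way linear head whose N weight vectors all start from one shared,
    meta-learned vector, making the initialization invariant to how the N
    novel classes are assigned to output slots."""
    def __init__(self, feat_dim: int, n_way: int):
        super().__init__()
        self.shared = nn.Parameter(torch.zeros(feat_dim))  # meta-learned single vector
        self.n_way = n_way

    def init_task_head(self) -> torch.Tensor:
        # Per-task head: the shared vector repeated N times; the copies only
        # diverge during inner-loop adaptation on the task's support set.
        return self.shared.unsqueeze(0).repeat(self.n_way, 1)

# Toy usage: build a 5-way head from 64-d features and score a batch.
head = SharedInitHead(feat_dim=64, n_way=5)
w_task = head.init_task_head()        # (5, 64), all rows identical at initialization
features = torch.randn(4, 64)
logits = features @ w_task.t()        # identical logits per class before adaptation
```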
Few-shot learning (FSL) is a central problem in meta-learning, where learners must efficiently learn from few labeled examples. Within FSL, feature pre-training has recently become an increasingly popular strategy to significantly improve generalization performance. However, the contribution of pre-training is often overlooked and understudied, with limited theoretical understanding of its impact on meta-learning performance. Further, pre-training requires a consistent set of global labels shared across training tasks, which may be unavailable in practice. In this work, we address the above issues by first showing the connection between pre-training and meta-learning. We discuss why pre-training yields more robust meta-representation and connect the theoretical analysis to existing works and empirical results. Secondly, we introduce Meta Label Learning (MeLa), a novel meta-learning algorithm that learns task relations by inferring global labels across tasks. This allows us to exploit pre-training for FSL even when global labels are unavailable or ill-defined. Lastly, we introduce an augmented pre-training procedure that further improves the learned meta-representation. Empirically, MeLa outperforms existing methods across a diverse range of benchmarks, in particular under a more challenging setting where the number of training tasks is limited and labels are task-specific. We also provide extensive ablation study to highlight its key properties.
Bilevel optimization (BO) can be used to solve a variety of important machine learning problems, including but not limited to hyperparameter optimization, meta-learning, continual learning, and reinforcement learning. Conventional BO methods need to differentiate through the lower-level optimization process with implicit differentiation, which requires expensive computations related to the Hessian matrix. Recently, there has been a quest for first-order methods for BO, but the methods proposed to date tend to be complicated and impractical for large-scale deep learning applications. In this work, we propose a simple first-order BO algorithm that depends only on first-order gradient information, requires no implicit differentiation, and is practical and efficient for large-scale non-convex functions. We provide a non-asymptotic convergence analysis of the proposed method to stationary points for non-convex objectives and present empirical results demonstrating its superior practical performance.
Meta-learning methods aim to build learning algorithms capable of quickly adapting to new tasks in low-data regimes. One of the main benchmarks for such algorithms is the few-shot learning problem. In this paper, we investigate a modification of the standard meta-learning pipeline that takes a multi-task approach during training. The proposed method simultaneously utilizes information from several meta-training tasks in a common loss function. The impact of each task on this loss function is controlled by a corresponding weight, and proper optimization of these weights can strongly influence the training of the entire model and improve the quality on test-time tasks. In this work, we propose and investigate the use of methods from the family of simultaneous perturbation stochastic approximation (SPSA) approaches for meta-train task weight optimization. We also compare the proposed algorithms with gradient-based methods and find that stochastic approximation gives the largest quality boost at test time. The proposed multi-task modification can be applied to almost any method that uses the meta-learning pipeline. In this paper, we study the effect of this modification on Prototypical Networks and model-agnostic meta-learning algorithms on the CIFAR-FS, FC100, tieredImageNet, and miniImageNet few-shot learning benchmarks. In these experiments, the multi-task modification demonstrated improvement over the original methods. The proposed SPSA-tracking algorithm shows the largest accuracy boost, which is competitive with state-of-the-art meta-learning methods. Our code is available online.
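A minimal sketch of the SPSA-style weight optimization described above: the meta-validation loss is treated as a black box, its gradient with respect to the task weights is estimated from two perturbed evaluations along a random Rademacher direction, and the weights are updated and re-normalized. The synthetic loss function and all constants are stand-ins of my own, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def spsa_gradient(loss_fn, w, c=0.05):
    """Two-evaluation SPSA estimate of d loss / d w using a random
    Rademacher perturbation (no analytic gradient of the meta-objective needed)."""
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    loss_plus = loss_fn(w + c * delta)
    loss_minus = loss_fn(w - c * delta)
    return (loss_plus - loss_minus) / (2 * c) * (1.0 / delta)

# Toy usage: optimize the weights of 4 meta-training tasks against a black-box
# meta-validation loss (here a synthetic function standing in for it).
def meta_val_loss(task_weights):
    target = np.array([0.4, 0.3, 0.2, 0.1])
    return float(np.sum((task_weights - target) ** 2))

w = np.full(4, 0.25)                  # start from uniform task weights
lr = 0.1
for _ in range(200):
    w = w - lr * spsa_gradient(meta_val_loss, w)
    w = np.clip(w, 0.0, None)
    w = w / w.sum()                   # keep the weights a valid distribution
```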
We introduce SubGD, a novel few-shot learning method that is based on the recent finding that stochastic gradient descent updates tend to live in a low-dimensional parameter subspace. In experimental and theoretical analyses, we show that models confined to a suitable predefined subspace generalize well for few-shot learning. A suitable subspace fulfills three criteria across the given tasks: it (a) allows reducing the training error by gradient flow, (b) leads to models that generalize well, and (c) can be identified by stochastic gradient descent. SubGD identifies these subspaces from an eigendecomposition of the auto-correlation matrix of update directions across different tasks. Demonstrably, we can identify low-dimensional suitable subspaces for few-shot learning of dynamical systems whose varying properties are described by one or a few parameters of the analytical system description. Such systems are ubiquitous in real-world applications in science and engineering. We experimentally corroborate the advantages of SubGD on three distinct dynamical systems problem settings, significantly outperforming popular few-shot learning methods both in terms of sample efficiency and performance.
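A minimal sketch of the subspace identification and projected fine-tuning described above, on synthetic update directions: the auto-correlation matrix of updates collected across tasks is eigendecomposed, the top eigenvectors span the subspace, and every gradient step on a new task is projected onto it. Dimensions, the synthetic data, and the toy objective are assumptions for illustration, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, k = 50, 5          # parameter dimension and subspace size (illustrative)

# 1) Collect fine-tuning update directions from several meta-training tasks
#    (here: synthetic updates that secretly live in a 5-d subspace plus noise).
basis_true = np.linalg.qr(rng.normal(size=(dim, k)))[0]
updates = [basis_true @ rng.normal(size=k) + 0.01 * rng.normal(size=dim)
           for _ in range(200)]

# 2) Eigendecomposition of the auto-correlation matrix of the updates.
C = sum(np.outer(u, u) for u in updates) / len(updates)
eigvals, eigvecs = np.linalg.eigh(C)
subspace = eigvecs[:, -k:]            # top-k eigenvectors span the learned subspace

# 3) Few-shot fine-tuning on a new task: project every gradient step onto the subspace.
def project(g):
    return subspace @ (subspace.T @ g)

theta = rng.normal(size=dim)
for _ in range(100):
    grad = theta - basis_true @ np.ones(k)       # toy gradient of 0.5 * ||theta - target||^2
    theta = theta - 0.1 * project(grad)          # update restricted to the identified subspace
```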