Many problems in machine learning involve bilevel optimization (BLO), including hyperparameter optimization, meta-learning, and dataset distillation. Bilevel problems consist of two nested sub-problems, called the outer and inner problems, respectively. In practice, often at least one of these sub-problems is overparameterized. In this case, there are many ways to choose among optima that achieve equivalent objective values. Inspired by recent studies of the implicit bias induced by optimization algorithms in single-level optimization, we investigate the implicit bias of gradient-based algorithms for bilevel optimization. We delineate two standard BLO methods -- cold-start and warm-start -- and show that the converged solution or long-run behavior depends to a large degree on these and other algorithmic choices, such as the hypergradient approximation. We also show that the inner solutions obtained by warm-start BLO can encode a surprising amount of information about the outer objective, even when the outer parameters are low-dimensional. We believe that implicit bias deserves as central a role in the study of bilevel optimization as it has attained in the study of single-level neural net optimization.
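To make the cold-start/warm-start distinction concrete, here is a minimal numpy sketch (illustrative only, not the paper's code), assuming a ridge-regularized least-squares inner problem and truncated inner optimization:

```python
import numpy as np

def inner_grad(w, lam, X, y):
    # Gradient of a hypothetical ridge-regularized inner objective.
    return X.T @ (X @ w - y) + lam * w

def solve_inner(w0, lam, X, y, steps=100, lr=0.01):
    w = w0.copy()
    for _ in range(steps):
        w -= lr * inner_grad(w, lam, X, y)
    return w  # approximate inner solution (truncated, not exact)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(20, 5)), rng.normal(size=20)
lam, w_warm = 0.1, np.zeros(5)
for outer_step in range(50):
    w_cold = solve_inner(np.zeros(5), lam, X, y)  # cold-start: fresh init
    w_warm = solve_inner(w_warm, lam, X, y)       # warm-start: reuse last solution
    # ... a hypergradient update on lam would go here; with truncated
    # inner solves, the two variants can converge to different solutions.
```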
We propose an algorithm for inexpensive gradient-based hyperparameter optimization that combines the implicit function theorem (IFT) with efficient inverse Hessian approximations. We present results on the relationship between the IFT and differentiating through optimization, motivating our algorithm. We use the proposed approach to train modern network architectures with millions of weights and millions of hyperparameters. We learn a data-augmentation network (where every weight is a hyperparameter tuned for validation performance) that outputs augmented training examples; we learn a distilled dataset where each feature in each datapoint is a hyperparameter; and we tune millions of regularization hyperparameters. Jointly tuning weights and hyperparameters with our approach is only a few times more costly in memory and compute than standard training.
• We scale IFT-based hyperparameter optimization to modern, large neural architectures, including AlexNet and LSTM-based language models.
• We demonstrate several uses for fitting hyperparameters almost as easily as weights, including per-parameter regularization, data distillation, and learned-from-scratch data augmentation methods.
• We explore how training-validation splits should change when tuning many hyperparameters.
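A minimal PyTorch sketch of the truncated Neumann-series approximation to the inverse-Hessian-vector product that such IFT-based hypergradients rely on (the function name and step size `alpha` are illustrative; `alpha` must be small enough for the series to converge):

```python
import torch

def neumann_ihvp(v, params, train_loss, steps=20, alpha=0.01):
    """Approximate H^{-1} v as alpha * sum_j (I - alpha*H)^j v,
    using only Hessian-vector products (no explicit Hessian)."""
    grads = torch.autograd.grad(train_loss, params, create_graph=True)
    p = [vi.clone() for vi in v]    # current term (I - alpha*H)^j v
    acc = [vi.clone() for vi in v]  # running sum of the series
    for _ in range(steps):
        hvp = torch.autograd.grad(grads, params, grad_outputs=p,
                                  retain_graph=True)
        p = [pi - alpha * hi for pi, hi in zip(p, hvp)]
        acc = [ai + pi for ai, pi in zip(acc, p)]
    return [alpha * ai for ai in acc]
```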
Influence functions efficiently estimate the effect of removing a single training data point on a model's learned parameters. While influence estimates align well with leave-one-out retraining for linear models, recent works have shown that this alignment is often poor in neural networks. In this work, we investigate the specific factors that cause this discrepancy by decomposing it into five separate terms. We study the contributions of each term on a variety of architectures and datasets, and how they vary with factors such as network width and training time. While practical influence function estimates may be a poor match to leave-one-out retraining in nonlinear networks, we show that they are often a good approximation to a different object we term the proximal Bregman response function (PBRF). Since the PBRF can still be used to answer many of the questions motivating influence functions, such as identifying influential or mislabeled examples, our results suggest that current algorithms for influence function estimation give more informative results than previous error analyses imply.
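For reference, the classical (damped) influence-function estimate whose alignment with leave-one-out retraining is analyzed above, in generic notation rather than the paper's:

$$\mathcal{I}(z, z_{\mathrm{test}}) \approx -\nabla_\theta L(z_{\mathrm{test}}, \hat\theta)^\top \big(H_{\hat\theta} + \lambda I\big)^{-1} \nabla_\theta L(z, \hat\theta),$$

where $H_{\hat\theta}$ is the training-loss Hessian at the learned parameters and $\lambda$ is a damping term commonly added in practice.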
We study a class of algorithms for solving bilevel optimization problems in both stochastic and deterministic settings when the inner-level objective is strongly convex. Specifically, we consider algorithms based on inexact implicit differentiation, and we exploit a warm-start strategy to amortize the estimation of the exact gradient. We then introduce a unified theoretical framework, inspired by the study of singularly perturbed systems (Habets, 1974), to analyze such amortized algorithms. Using this framework, our analysis shows these algorithms to match the computational complexity of oracle methods that have access to an unbiased estimate of the gradient, thus outperforming many existing results for bilevel optimization. We illustrate these findings on synthetic experiments and demonstrate the efficiency of these algorithms on hyperparameter optimization experiments involving several thousand variables.
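For context, the exact hypergradient that inexact implicit differentiation approximates when the inner objective $g$ is strongly convex (generic notation), with $y^*(x) = \arg\min_y g(x, y)$ and outer objective $f$:

$$\nabla F(x) = \nabla_x f(x, y^*(x)) - \nabla^2_{xy} g(x, y^*(x))\, \big[\nabla^2_{yy} g(x, y^*(x))\big]^{-1} \nabla_y f(x, y^*(x)).$$

The warm-start strategy amortizes the cost of the inner solve for $y^*(x)$ and of the associated linear system across outer iterations.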
A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on prior experience. Gradient (or optimization) based meta-learning has recently emerged as an effective approach for few-shot learning. In this formulation, meta-parameters are learned in the outer loop, while task-specific models are learned in the inner-loop, by using only a small amount of data from the current task. A key challenge in scaling these approaches is the need to differentiate through the inner loop learning process, which can impose considerable computational and memory burdens. By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner level optimization and not the path taken by the inner loop optimizer. This effectively decouples the meta-gradient computation from the choice of inner loop optimizer. As a result, our approach is agnostic to the choice of inner loop optimizer and can gracefully handle many gradient steps without vanishing gradients or memory constraints. Theoretically, we prove that implicit MAML can compute accurate meta-gradients with a memory footprint no more than that which is required to compute a single inner loop gradient and at no overall increase in the total computational cost. Experimentally, we show that these benefits of implicit MAML translate into empirical gains on few-shot image recognition benchmarks.
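A minimal sketch of the conjugate-gradient solve that makes this implicit meta-gradient practical, using only Hessian-vector products (names are illustrative; assumes the quadratic-penalty inner objective of implicit MAML and flattened parameters):

```python
import torch

def cg_solve(matvec, b, iters=10):
    """Solve A x = b for symmetric positive-definite A given only
    the matrix-vector product `matvec` (here A = I + H/lam)."""
    x = torch.zeros_like(b)
    r, p = b.clone(), b.clone()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# meta_grad = cg_solve(lambda v: v + inner_hvp(v) / lam, outer_grad),
# where `inner_hvp` is a hypothetical inner-loss Hessian-vector product.
```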
Learned optimizers are algorithms that can themselves be trained to solve optimization problems. In contrast to baseline optimizers (such as momentum or Adam), which use simple update rules derived from theoretical principles, learned optimizers use flexible, high-dimensional, nonlinear parameterizations. Although this can lead to better performance in certain settings, their inner workings remain a mystery. How is a learned optimizer able to outperform a well-tuned baseline? Has it learned a sophisticated combination of existing optimization techniques, or is it implementing completely new behavior? In this work, we address these questions through careful analysis and visualization of learned optimizers. We study learned optimizers trained from scratch on three disparate tasks, and discover that they have learned interpretable mechanisms, including: momentum, gradient clipping, learning rate schedules, and a new form of learning rate adaptation. Moreover, we show how the dynamics of learned optimizers enable these behaviors. Our results help elucidate the previously murky understanding of how learned optimizers work, and establish tools for interpreting future learned optimizers.
The dual tasks of quantum Hamiltonian learning and quantum Gibbs sampling are relevant to many important problems in physics and chemistry. In the low-temperature regime, algorithms for these tasks often suffer from intractabilities, for example due to poor sample or time complexity. To address such intractabilities, we introduce a generalization of quantum natural gradient descent to parameterized mixed states, as well as a robust first-order approximating algorithm, quantum-probabilistic mirror descent. We prove data sample efficiency for the dual tasks using tools from information geometry and quantum metrology, thus generalizing the seminal result of classical Fisher efficiency to a variational quantum algorithm for the first time. Our approaches extend previously sample-efficient techniques to allow for flexibility in model choice, including spectrally decomposed models such as quantum Hamiltonian-based models, which may circumvent intractable time complexities. Our first-order algorithm is derived using a novel quantum generalization of the classical mirror descent duality. Both results require a special choice of metric, namely, the Bogoliubov-Kubo-Mori metric. To test our proposed algorithms numerically, we compare their performance to existing baselines on the task of quantum Gibbs sampling for the transverse-field Ising model. Finally, we propose an initialization strategy that leverages geometric locality for modeling sequences of states, such as those arising from quantum stochastic processes. We demonstrate its effectiveness empirically for both real- and imaginary-time evolution, while defining a broader class of potential applications.
We analyze a class of bilevel problems in which the upper-level problem consists in the minimization of a smooth objective function and the lower-level problem is to find the fixed point of a smooth contraction map. This type of problem includes instances of meta-learning, equilibrium models, hyperparameter optimization, and data poisoning adversarial attacks. Several recent works have proposed algorithms that warm-start the lower-level problem, i.e., they use the previous lower-level approximate solution as a starting point for the lower-level solver. This warm-start procedure allows one to improve the sample complexity in both the stochastic and deterministic settings, achieving in some cases order-wise optimal sample complexity. However, there are situations, e.g., meta-learning and equilibrium models, in which the warm-start procedure is unsuitable or ineffective. In this work we show that without warm-start, it is still possible to achieve order-wise (near) optimal sample complexity. In particular, we propose a simple method that uses (stochastic) fixed-point iterations at the lower level and projected inexact gradient descent at the upper level, and that reaches an $\epsilon$-stationary point using $O(\epsilon^{-2})$ and $\tilde{O}(\epsilon^{-1})$ samples for the stochastic and the deterministic setting, respectively. Finally, compared to methods using warm-start, our approach yields a simpler analysis that does not need to study the coupled interactions between the upper-level and lower-level iterates.
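For context, with an inner fixed point $w(\theta)$ satisfying $w = \Phi(w, \theta)$ for a contraction $\Phi$ and outer objective $f(\theta) = E(w(\theta), \theta)$, the hypergradient being approximated is (generic notation):

$$\nabla f(\theta) = \nabla_\theta E + (\partial_\theta \Phi)^\top (I - \partial_w \Phi)^{-\top} \nabla_w E,$$

which the method above estimates with (stochastic) fixed-point iterations in place of exact solves.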
Unrolled computation graphs arise in many scenarios, including training RNNs, tuning hyperparameters through unrolled optimization, and training learned optimizers. Current approaches to optimizing parameters in such computation graphs suffer from high-variance gradients, bias, slow updates, or large memory usage. We introduce a method called Persistent Evolution Strategies (PES), which divides the computation graph into a series of truncated unrolls and performs an evolution-strategies-based update step after each unroll. PES eliminates bias from these truncations by accumulating correction terms over the entire sequence of unrolls. PES allows for rapid parameter updates, has low memory usage, is unbiased, and has reasonable variance characteristics. We experimentally demonstrate the advantages of PES compared to several other methods for gradient estimation on synthetic tasks, and show its applicability to training learned optimizers and tuning hyperparameters.
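A heavily simplified sketch of the PES estimator (non-antithetic, with illustrative names; `unroll` is a hypothetical function running one truncated unroll from `state` under the given parameters and returning a loss and a new state):

```python
import numpy as np

def pes_step(theta, states, xi_acc, unroll, sigma=0.1):
    """One truncated unroll for n perturbed particles; xi_acc holds the
    perturbations accumulated since the start of the full sequence and
    must be reset to zero at each episode boundary."""
    n = len(states)
    eps = np.random.normal(scale=sigma, size=(n, theta.size))
    losses = np.empty(n)
    for i in range(n):
        losses[i], states[i] = unroll(states[i], theta + eps[i])
    xi_acc += eps  # persist perturbations across truncations (the "P" in PES)
    # ES-style estimate with the *accumulated* noise, which removes
    # truncation bias in expectation.
    grad = (xi_acc * losses[:, None]).mean(axis=0) / sigma**2
    return grad, states, xi_acc
```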
Bilevel optimization (BO) can be used to solve a variety of important machine learning problems, including but not limited to hyperparameter optimization, meta-learning, continual learning, and reinforcement learning. Conventional BO methods need to differentiate through the low-level optimization process with implicit differentiation, which requires expensive computations related to the Hessian matrix. There has been a recent quest for first-order methods for BO, but the methods proposed to date tend to be complicated and impractical for large-scale deep learning applications. In this work, we propose a simple first-order BO algorithm that depends only on first-order gradient information, requires no implicit differentiation, and is practical and efficient for large-scale non-convex functions. We provide a non-asymptotic convergence analysis of the proposed method to stationary points for non-convex objectives and present empirical results showing its superior practical performance.
We introduce a class of first-order methods for smooth constrained optimization that are based on an analogy to non-smooth dynamical systems. Two distinctive features of our approach are that (i) projections or optimizations over the entire feasible set are avoided, in stark contrast to projected gradient methods or the Frank-Wolfe method, and (ii) iterates are allowed to become infeasible, which differs from active set or feasible direction methods, where the descent motion stops as soon as a new constraint is encountered. The resulting algorithmic procedure is simple to implement even when constraints are nonlinear, and is suitable for large-scale constrained optimization problems in which the feasible set fails to have a simple structure. The key underlying idea is that constraints are expressed in terms of velocities instead of positions, which has the algorithmic consequence that optimizations over feasible sets at each iteration are replaced with optimizations over local, sparse convex approximations. In particular, this means that at each iteration only constraints that are violated are taken into account. The result is a simplified suite of algorithms and an expanded range of possible applications in machine learning.
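A minimal sketch of the velocity-constraint idea for a single smooth inequality constraint $g(x) \le 0$ (the closed-form projection and the coefficient `alpha` are illustrative; the paper handles the general case via local, sparse convex approximations):

```python
import numpy as np

def constrained_velocity(grad_f, g_val, grad_g, alpha=1.0):
    """Return a descent velocity v near -grad_f satisfying the local
    constraint grad_g . v <= -alpha * g_val when g is violated."""
    d = -grad_f
    if g_val <= 0:          # feasible: the constraint is ignored entirely
        return d
    slack = grad_g @ d + alpha * g_val
    if slack <= 0:          # -grad_f already restores feasibility
        return d
    # Closed-form projection onto the half-space of admissible velocities.
    return d - (slack / (grad_g @ grad_g)) * grad_g
```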
We propose an efficient method for approximating natural gradient descent in neural networks which we call Kronecker-factored Approximate Curvature (K-FAC). K-FAC is based on an efficiently invertible approximation of a neural network's Fisher information matrix which is neither diagonal nor low-rank, and in some cases is completely non-sparse. It is derived by approximating various large blocks of the Fisher (corresponding to entire layers) as being the Kronecker product of two much smaller matrices. While only several times more expensive to compute than the plain stochastic gradient, the updates produced by K-FAC make much more progress optimizing the objective, which results in an algorithm that can be much faster than stochastic gradient descent with momentum in practice. And unlike some previously proposed approximate natural-gradient/Newton methods which use high-quality non-diagonal curvature matrices (such as Hessian-free optimization), K-FAC works very well in highly stochastic optimization regimes. This is because the cost of storing and inverting K-FAC's approximation to the curvature matrix does not depend on the amount of data used to estimate it, which is a feature typically associated only with diagonal or low-rank approximations to the curvature matrix.
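A minimal numpy sketch of the per-layer Kronecker-factored approximation for one fully connected layer (the damping value and vec-layout conventions are illustrative):

```python
import numpy as np

def kfac_layer_update(a, g, grad_W, damping=1e-3):
    """a: (batch, d_in) layer inputs; g: (batch, d_out) backpropagated
    output gradients; grad_W: (d_in, d_out) weight gradient."""
    A = a.T @ a / a.shape[0]   # input second-moment factor
    G = g.T @ g / g.shape[0]   # output-gradient second-moment factor
    A_inv = np.linalg.inv(A + damping * np.eye(A.shape[0]))
    G_inv = np.linalg.inv(G + damping * np.eye(G.shape[0]))
    # Approximating the Fisher block as a Kronecker product makes its
    # inverse cheap: applying it reduces to two small matrix products.
    return A_inv @ grad_W @ G_inv
```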
In this paper, we propose a novel Hessian-inverse-free Fully Single Loop Algorithm (FSLA) for bilevel optimization problems. Classical algorithms for bilevel optimization admit a computationally expensive double-loop structure. Recently, several single-loop algorithms have been proposed that alternately optimize the inner and outer variables. However, these algorithms have not achieved a fully single loop, as they ignore the loop needed to evaluate the hyper-gradient for a given inner and outer state. To develop a fully single-loop algorithm, we first study the structure of the hyper-gradient and identify a general approximation formulation of hyper-gradient computation that encompasses several previous common approaches, e.g., back-propagation through time, conjugate gradient, etc. Based on this formulation, we introduce a new state variable to maintain historical hyper-gradient information. Combining our new formulation with the alternating updates of the inner and outer variables, we propose an efficient fully single-loop algorithm. We theoretically show that the error generated by the new state can be bounded and that our algorithm converges at a rate of $O(\epsilon^{-2})$. Finally, we verify the efficacy of our algorithm empirically on multiple bilevel-optimization-based machine learning tasks.
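One common instance of the general hyper-gradient approximation formulation described above keeps a running estimate $v_k \approx [\nabla^2_{yy} g]^{-1} \nabla_y f$ and refreshes it with a single step per iteration (generic notation; the paper's exact state recursion may differ):

$$v_{k+1} = v_k - \beta \big( \nabla^2_{yy} g(x_k, y_k)\, v_k - \nabla_y f(x_k, y_k) \big), \qquad \widehat{\nabla} F(x_k) = \nabla_x f(x_k, y_k) - \nabla^2_{xy} g(x_k, y_k)\, v_{k+1}.$$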
Deep learning optimization involves minimizing a high-dimensional loss function over the weight space, a task often perceived as difficult due to inherent obstacles such as saddle points, local minima, ill-conditioning of the Hessian, and limited compute resources. In this paper, we provide a comprehensive review of 12 standard optimization methods successfully used in deep learning research, along with a theoretical assessment of the difficulties in numerical optimization drawn from the optimization literature.
Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques. In this work we study first-order methods when the inner optimization problem is convex but non-smooth. We show that the forward-mode differentiation of proximal gradient descent and proximal coordinate descent yields sequences of Jacobians that converge to the exact Jacobian. Using implicit differentiation, we show that it is possible to leverage the non-smoothness of the inner problem to speed up the computation. Finally, we provide a bound on the error made on the hypergradient when the inner optimization problem is solved approximately. Results on regression and classification problems reveal computational benefits for hyperparameter optimization, especially when multiple hyperparameters are required.
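A hedged numpy sketch of forward-mode differentiation of proximal gradient descent (ISTA) for the Lasso, propagating the tangent $\partial \beta / \partial \lambda$ alongside the iterates (step size and iteration count are illustrative):

```python
import numpy as np

def ista_with_jacobian(X, y, lam, n_iter=500):
    """Run ISTA on the Lasso while tracking d(beta)/d(lambda)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the smooth part
    beta = np.zeros(p)
    jac = np.zeros(p)                  # forward-mode tangent d(beta)/d(lambda)
    for _ in range(n_iter):
        z = beta - X.T @ (X @ beta - y) / L
        dz = jac - X.T @ (X @ jac) / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        on = np.abs(z) > lam / L       # support after soft-thresholding
        # Off-support coordinates have zero tangent; on-support, the
        # soft-threshold shifts the tangent by -sign(z)/L.
        jac = np.where(on, dz - np.sign(z) / L, 0.0)
    return beta, jac
```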
There is mounting empirical evidence of emergent phenomena in the capabilities of deep learning methods as we scale up datasets, model sizes, and training times. While there are some accounts of how these resources modulate statistical capacity, far less is known about their effect on the computational problem of model training. This work conducts such an exploration through the lens of learning $k$-sparse parities of $n$ bits, a canonical problem that poses a theoretical computational barrier. In this setting, we find that neural networks exhibit surprising phase transitions when scaling up dataset size and running time. In particular, we demonstrate empirically that with standard training, a variety of architectures learn sparse parities with $n^{O(k)}$ examples, with loss (and error) curves abruptly dropping after $n^{O(k)}$ iterations. These positive results nearly match known SQ lower bounds, even without an explicit sparsity prior. We elucidate the mechanisms of these phenomena with a theoretical analysis: we find that the phase transition in performance is not due to SGD "stumbling in the dark" until it finds the hidden set of features (a natural algorithm that also runs in $n^{O(k)}$ time); instead, we show that SGD gradually amplifies a Fourier gap in the population gradient.
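A minimal sketch of the $(n, k)$-sparse parity task studied above (the hidden subset `S` and all sizes are illustrative):

```python
import numpy as np

def sparse_parity_batch(n=50, S=(0, 1, 2), batch=256, rng=None):
    """Labels are the parity (product of +/-1 bits) over the hidden
    coordinate subset S with |S| = k; the learner never sees S."""
    rng = rng or np.random.default_rng()
    x = rng.choice([-1.0, 1.0], size=(batch, n))
    y = np.prod(x[:, list(S)], axis=1)
    return x, y
```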
In this work we explore the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD). As observed previously, long after performance has converged, networks continue to move through parameter space by a process of anomalous diffusion, in which the distance travelled grows as a power law in the number of gradient updates with a nontrivial exponent. We reveal an intricate interplay among the hyperparameters of optimization, the structure of the gradient noise, and the Hessian matrix at the end of training that explains this anomalous diffusion. To build this understanding, we first derive a continuous-time model for SGD with finite learning rates and batch sizes as an underdamped Langevin equation. We study this equation in the setting of linear regression, where we can derive exact, analytic expressions for the phase-space dynamics of the parameters and their instantaneous velocities from initialization to stationarity. Using the Fokker-Planck equation, we show that the key ingredient driving these dynamics is not the original training loss, but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents, which cause oscillations in phase space. We identify qualitative and quantitative predictions of this theory in the dynamics of a ResNet-18 model trained on ImageNet. Through the lens of statistical physics, we uncover a mechanistic origin for the anomalous limiting dynamics of deep neural networks trained with SGD.
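For orientation, the generic underdamped Langevin form underlying such continuous-time models (the paper derives its own coefficients in terms of the learning rate, batch size, and momentum):

$$d\theta_t = v_t\, dt, \qquad dv_t = -\gamma v_t\, dt - \nabla L(\theta_t)\, dt + \sqrt{2\gamma T}\, dB_t.$$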
A large class of non-smooth practical optimization problems can be written as minimization of a sum of smooth and partly smooth functions. We consider such structured problems that also depend on a parameter vector, and study the problem of differentiating their solution mapping with respect to the parameter, which has far-reaching applications in sensitivity analysis and parameter learning problems. We show that, under partial smoothness and other mild assumptions, automatic differentiation (AD) of the sequences generated by proximal splitting algorithms converges to the derivative of the solution mapping. For a variant of automatic differentiation, which we call fixed-point automatic differentiation (FPAD), we remedy the memory overhead problem of reverse-mode AD and, moreover, provide faster convergence theoretically. We numerically illustrate the convergence and convergence rates of AD and FPAD on Lasso and group Lasso problems, and demonstrate the working of FPAD on a prototypical practical image denoising problem by learning the regularization term.
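A minimal sketch of the fixed-point differentiation idea: with $x^\star = T(x^\star, \theta)$ and $T$ contractive in $x$, the Jacobian $J = \partial x^\star / \partial \theta$ solves $J = \partial_x T\, J + \partial_\theta T$ and can be obtained by iterating at the fixed point itself, avoiding reverse-mode memory growth (the dense Jacobians here are purely illustrative):

```python
import numpy as np

def fixed_point_jacobian(dT_dx, dT_dtheta, n_iter=100):
    """dT_dx: (d, d) and dT_dtheta: (d, m), both evaluated at the fixed
    point; returns J approximating dx*/dtheta."""
    J = np.zeros_like(dT_dtheta)
    for _ in range(n_iter):
        J = dT_dx @ J + dT_dtheta  # converges when spectral radius < 1
    return J
```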
Many machine learning problems encode their data as a matrix with a possibly very large number of rows and columns. In several applications like neuroscience, image compression or deep reinforcement learning, the principal subspace of such a matrix provides a useful, low-dimensional representation of individual data. Here, we are interested in determining the $d$-dimensional principal subspace of a given matrix from sample entries, i.e. from small random submatrices. Although a number of sample-based methods exist for this problem (e.g. Oja's rule \citep{oja1982simplified}), these assume access to full columns of the matrix or particular matrix structure such as symmetry and cannot be combined as-is with neural networks \citep{baldi1989neural}. In this paper, we derive an algorithm that learns a principal subspace from sample entries, can be applied when the approximate subspace is represented by a neural network, and hence can be scaled to datasets with an effectively infinite number of rows and columns. Our method consists in defining a loss function whose minimizer is the desired principal subspace, and constructing a gradient estimate of this loss whose bias can be controlled. We complement our theoretical analysis with a series of experiments on synthetic matrices, the MNIST dataset \citep{lecun2010mnist} and the reinforcement learning domain PuddleWorld \citep{sutton1995generalization} demonstrating the usefulness of our approach.
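For contrast with the sample-entry setting above, a minimal sketch of the classical Oja subspace rule \citep{oja1982simplified}, which requires access to full columns $x$ of the matrix:

```python
import numpy as np

def oja_subspace_step(W, x, lr=1e-2):
    """W: (n, d) current subspace basis; x: (n,) one full column."""
    y = W.T @ x
    # Hebbian term pulls W toward the data; the second term keeps
    # the basis approximately orthonormal.
    return W + lr * (np.outer(x, y) - W @ np.outer(y, y))
```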
Bilevel optimization has found extensive applications in modern machine learning problems such as hyperparameter optimization, neural architecture search, meta-learning, etc. While bilevel problems with a unique inner minimal point (e.g., where the inner objective is strongly convex) are well understood, bilevel problems with multiple inner minimal points remain challenging and open. Existing algorithms designed for such problems are applicable only to restricted situations and do not come with full convergence guarantees. In this paper, we adopt a reformulation of bilevel optimization as constrained optimization, and solve the problem via a primal-dual bilevel optimization (PDBO) algorithm. PDBO not only addresses the multiple-inner-minima challenge, but also features fully first-order efficiency without involving second-order Hessian and Jacobian computations, unlike most existing gradient-based bilevel algorithms. We further characterize the convergence rate of PDBO, which provides the first known non-asymptotic convergence guarantee for bilevel optimization with multiple inner minima. Our experiments demonstrate the desired performance of the proposed approach.
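The constrained reformulation referred to above is the standard value-function form (generic notation):

$$\min_{x, y}\; f(x, y) \quad \text{s.t.} \quad g(x, y) \le \min_{y'} g(x, y'),$$

to which a primal-dual method can then be applied without differentiating through the inner solution.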