Recent research has shown that the dynamics of an infinitely wide neural network (NN) trained by gradient descent can be characterized by the Neural Tangent Kernel (NTK) \citep{Jacot2018neural}. Under the squared loss, an infinite-width NN trained by gradient descent with an infinitesimally small learning rate is equivalent to kernel regression with the NTK \citep{arora2019Exact}. However, the equivalence is currently known only for ridge regression \citep{arora2019Harnessing}, while the equivalence between NNs and other kernel machines (KMs), e.g., support vector machines (SVMs), remains unknown. In this work, we therefore propose to establish the equivalence between NNs and SVMs; specifically, between an infinitely wide NN trained with the soft-margin loss and a standard soft-margin SVM with the NTK trained by subgradient descent. Our main theoretical results include establishing the equivalence between NNs and a broad family of $\ell_2$-regularized KMs with finite-width bounds, which cannot be handled by prior work, and showing that every finite-width NN trained with such a regularized loss function is approximately a KM. Furthermore, we demonstrate that our theory enables three practical applications: (i) \textit{non-vacuous} generalization bounds for NNs via the corresponding KMs; (ii) \textit{non-trivial} robustness certificates for infinite-width NNs (while existing robustness verification methods provide vacuous bounds); and (iii) infinite-width NNs that are intrinsically more robust than those obtained from previous kernel regression. Our code for the experiments is available at \url{https://github.com/leslie-ch/equiv-nn-svm}.
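As a concrete illustration of the NN-SVM link described above, here is a minimal sketch that plugs a closed-form two-layer ReLU NTK (the standard arc-cosine expressions, assuming unit-norm inputs) into an off-the-shelf soft-margin SVM. The helper `ntk_two_layer_relu` and all constants are our own illustrative choices, not code from the paper's repository.

```python
# A minimal NTK-SVM sketch (our illustration, not the paper's code).
import numpy as np
from sklearn.svm import SVC

def ntk_two_layer_relu(X1, X2):
    """Closed-form two-layer ReLU NTK for unit-norm rows (arc-cosine form)."""
    u = np.clip(X1 @ X2.T, -1.0, 1.0)                 # pairwise cosines
    k0 = (np.pi - np.arccos(u)) / np.pi               # arc-cosine kernel, degree 0
    k1 = (u * (np.pi - np.arccos(u)) + np.sqrt(1.0 - u**2)) / np.pi  # degree 1
    return u * k0 + k1                                # NTK = <x, x'> k0 + k1

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
X /= np.linalg.norm(X, axis=1, keepdims=True)         # unit-norm inputs
y = np.where(X[:, 0] >= 0, 1.0, -1.0)                 # toy labels

K = ntk_two_layer_relu(X, X)                          # precomputed Gram matrix
svm = SVC(C=1.0, kernel="precomputed").fit(K, y)      # standard soft-margin SVM
print(svm.score(K, y))                                # training accuracy
```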
In a series of recent theoretical works, it was shown that strongly overparameterized neural networks trained with gradient-based methods could converge exponentially fast to zero training loss, with their parameters hardly varying. In this work, we show that this "lazy training" phenomenon is not specific to overparameterized neural networks, and is due to a choice of scaling, often implicit, that makes the model behave as its linearization around the initialization, thus yielding a model equivalent to learning with positive-definite kernels. Through a theoretical analysis, we exhibit various situations where this phenomenon arises in non-convex optimization and we provide bounds on the distance between the lazy and linearized optimization paths. Our numerical experiments bring a critical note, as we observe that the performance of commonly used non-linear deep convolutional neural networks in computer vision degrades when trained in the lazy regime. This makes it unlikely that "lazy training" is behind the many successes of neural networks in difficult high dimensional tasks.
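The scaling mechanism described above can be seen in a toy experiment. Below is a hedged sketch, not the paper's setup: we train $\alpha f_w$ for a tiny two-layer tanh net with the step size rescaled by $1/\alpha^2$, and watch the relative weight movement shrink as $\alpha$ grows; architecture and constants are arbitrary illustrative choices.

```python
# Lazy-training toy: relative weight movement shrinks as the scale alpha grows.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 5))
y = rng.normal(size=32)

def relative_movement(alpha, steps=2000):
    W = rng.normal(size=(64, 5)) / np.sqrt(5)         # hidden layer
    a = rng.normal(size=64) / np.sqrt(64)             # output layer
    W0 = W.copy()
    lr = 0.1 / alpha**2                               # compensating step size
    for _ in range(steps):
        h = np.tanh(X @ W.T)                          # hidden activations
        resid = alpha * (h @ a) - y                   # model is alpha * f_w(x)
        a -= lr * alpha * (h.T @ resid) / len(X)
        W -= lr * alpha * ((resid[:, None] * (1 - h**2) * a).T @ X) / len(X)
    return np.linalg.norm(W - W0) / np.linalg.norm(W0)

for alpha in (1.0, 10.0, 100.0):
    print(alpha, relative_movement(alpha))            # movement drops with alpha
```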
The generalization mystery of overparameterized deep nets has motivated efforts to understand how gradient descent (GD) converges to low-loss solutions that generalize well. Real-life neural networks are initialized from small random values and trained with the cross-entropy loss for classification (unlike the "lazy" or "NTK" regime of training, where analysis has been more successful), and a recent sequence of results (Lyu and Li, 2020; Chizat and Bach, 2020; Ji and Telgarsky, 2020) provides theoretical evidence that GD may converge to the "max-margin" solution with zero loss, which presumably generalizes well. However, the global optimality of the margin has been proved only in some settings where the neural nets are infinitely or exponentially wide. The current paper is able to establish this global optimality for two-layer Leaky ReLU nets trained with gradient flow, regardless of width. The analysis also gives some theoretical justification for recent empirical findings (Kalimeris et al., 2019) on the so-called simplicity bias of GD towards linear or other "simple" classes of solutions, especially early in training. On the pessimistic side, the paper suggests that such results are fragile: a simple data manipulation can make gradient flow converge to a linear classifier with suboptimal margin.
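The margin-maximization bias is easiest to see in the linear special case, the setting of earlier implicit-bias results that the abstract builds on. A rough sketch (our own toy data; the hard-margin direction is approximated by a `LinearSVC` with a very large `C`):

```python
# Linear toy: GD on logistic loss aligns with the hard-margin SVM direction.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] >= 0, 1.0, -1.0)
X[:, 0] += 0.5 * y                                    # guarantees separability

w = np.zeros(2)
for _ in range(100000):
    m = np.minimum(y * (X @ w), 50.0)                 # clip to avoid overflow
    grad = -((y / (1.0 + np.exp(m)))[:, None] * X).mean(axis=0)
    w -= 0.1 * grad

svm = LinearSVC(C=1e6, fit_intercept=False, max_iter=100000).fit(X, y)
v = svm.coef_.ravel()
print(w / np.linalg.norm(w), v / np.linalg.norm(v))   # nearly parallel directions
```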
Neural networks trained to minimize the logistic (a.k.a. cross-entropy) loss with gradient-based methods are observed to perform well in many supervised classification tasks. Towards understanding this phenomenon, we analyze the training and generalization behavior of infinitely wide two-layer neural networks with homogeneous activations. We show that the limits of the gradient flow on exponentially tailed losses can be fully characterized as a max-margin classifier in a certain non-Hilbertian space of functions. In the presence of hidden low-dimensional structures, the resulting margin is independent of the ambient dimension, which leads to strong generalization bounds. In contrast, training only the output layer implicitly solves a kernel support vector machine, which a priori does not enjoy such an adaptivity. Our analysis of training is non-quantitative in terms of running time, but we prove computational guarantees in simplified settings by showing equivalences with online mirror descent. Finally, numerical experiments suggest that our analysis describes well the practical behavior of two-layer neural networks with ReLU activations and confirm the statistical benefits of this implicit bias.
Despite their overwhelming capacity to overfit, deep neural networks trained by specific optimization algorithms tend to generalize well to unseen data. Recently, researchers have explained this by investigating the implicit regularization effect of optimization algorithms. A remarkable piece of progress is the work (Lyu & Li, 2019), which proves that gradient descent (GD) maximizes the margin of homogeneous deep neural networks. Besides GD, adaptive algorithms such as AdaGrad, RMSProp, and Adam are popular owing to their rapid training process. However, theoretical guarantees for the generalization of adaptive optimization algorithms are still lacking. In this paper, we study the implicit regularization of adaptive optimization algorithms when they optimize the logistic loss on homogeneous deep neural networks. We prove that adaptive algorithms that adopt an exponential moving average strategy in the conditioner (such as Adam and RMSProp) can maximize the margin of the neural network, while AdaGrad, which directly sums the historical squared gradients in the conditioner, cannot. This indicates the superiority, in terms of generalization, of the exponential moving average strategy in the design of the conditioner. Technically, we provide a unified framework to analyze the convergent direction of adaptive optimization algorithms by constructing a novel adaptive gradient flow and surrogate margin. Our experiments well support the theoretical findings on the convergent direction of adaptive optimization algorithms.
Overparameterization is a key factor in explaining the global convergence of gradient descent (GD) for neural networks in the absence of convexity. Beside the well-studied lazy regime, infinite-width (mean-field) analyses have been developed for shallow networks using convex optimization techniques. To bridge the gap between the lazy and mean-field regimes, we study residual networks (ResNets) in which the residual block has a linear parameterization while still being nonlinear. Such ResNets admit both infinite-depth and infinite-width limits, encoding residual blocks in a reproducing kernel Hilbert space (RKHS). In this limit, we prove a local Polyak-Lojasiewicz inequality. Thus, every critical point is a global minimizer, a local convergence result for GD holds, and we retrieve the lazy regime. In contrast with other mean-field studies, it applies to both parametric and non-parametric cases under an expressivity condition on the residuals. Our analysis leads to a practical and quantified recipe: starting from a universal RKHS, random Fourier features are applied to obtain a finite-dimensional parameterization satisfying our expressivity condition with high probability.
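The "practical and quantified recipe" ends with random Fourier features, sketched below for the Gaussian kernel: drawing $W$ from the kernel's spectral measure, $\varphi(x)=\sqrt{2/m}\cos(Wx+b)$ gives inner products that approximate $k(x,x')$ with high probability. The kernel choice and constants are ours, for illustration only.

```python
# Random Fourier Features for the Gaussian kernel k(x, x') = exp(-g ||x-x'||^2).
import numpy as np

def rff(X, m, gamma, rng):
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], m))
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    return np.sqrt(2.0 / m) * np.cos(X @ W + b)       # feature map phi(x)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
gamma = 0.5
Phi = rff(X, m=4096, gamma=gamma, rng=rng)

sqdist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-gamma * sqdist)
print(np.abs(Phi @ Phi.T - K_exact).max())            # small with high probability
```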
How well does a classic deep net architecture like AlexNet or VGG19 classify on a standard dataset such as CIFAR-10 when its "width"-namely, number of channels in convolutional layers, and number of nodes in fully-connected internal layers -is allowed to increase to infinity? Such questions have come to the forefront in the quest to theoretically understand deep learning and its mysteries about optimization and generalization. They also connect deep learning to notions such as Gaussian processes and kernels. A recent paper [Jacot et al., 2018] introduced the Neural Tangent Kernel (NTK) which captures the behavior of fully-connected deep nets in the infinite width limit trained by gradient descent; this object was implicit in some other recent papers. An attraction of such ideas is that a pure kernel-based method is used to capture the power of a fully-trained deep net of infinite width. The current paper gives the first efficient exact algorithm for computing the extension of NTK to convolutional neural nets, which we call Convolutional NTK (CNTK), as well as an efficient GPU implementation of this algorithm. This results in a significant new benchmark for performance of a pure kernel-based method on CIFAR-10, being 10% higher than the methods reported in [Novak et al., 2019], and only 6% lower than the performance of the corresponding finite deep net architecture (once batch normalization etc. are turned off). Theoretically, we also give the first non-asymptotic proof showing that a fully-trained sufficiently wide net is indeed equivalent to the kernel regression predictor using NTK.
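The kernel-regression predictor in the equivalence above has the usual closed form $f(x) = K(x,X)\,K(X,X)^{-1}y$. A minimal sketch with the fully connected two-layer ReLU NTK follows; the CNTK itself requires the paper's convolutional recursion, which we do not reproduce here, and the data and jitter constant are our own.

```python
# Ridgeless kernel regression with a two-layer ReLU NTK (not the CNTK itself).
import numpy as np

def ntk(X1, X2):
    u = np.clip(X1 @ X2.T, -1.0, 1.0)
    k0 = (np.pi - np.arccos(u)) / np.pi
    k1 = (u * (np.pi - np.arccos(u)) + np.sqrt(1.0 - u**2)) / np.pi
    return u * k0 + k1

rng = np.random.default_rng(0)
Xtr = rng.normal(size=(100, 8)); Xtr /= np.linalg.norm(Xtr, axis=1, keepdims=True)
Xte = rng.normal(size=(20, 8));  Xte /= np.linalg.norm(Xte, axis=1, keepdims=True)
ytr = np.sin(3.0 * Xtr[:, 0])                         # toy regression target

coef = np.linalg.solve(ntk(Xtr, Xtr) + 1e-8 * np.eye(100), ytr)
pred = ntk(Xte, Xtr) @ coef                           # f(x) = K(x,X) K(X,X)^-1 y
print(pred[:5])
```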
Normalization methods such as batch [Ioffe and Szegedy, 2015], weight [Salimans and Kingma, 2016], instance [Ulyanov et al., 2016], and layer normalization [Ba et al., 2016] have been widely used in modern machine learning. Here, we study the weight normalization (WN) method [Salimans and Kingma, 2016] and a variant called reparametrized projected gradient descent (rPGD) for overparametrized least-squares regression. WN and rPGD reparametrize the weights with a scale $g$ and a unit vector $w$, so that the objective function becomes non-convex. We show that, compared to gradient descent on the original objective, this non-convex formulation has beneficial regularization effects. These methods adaptively regularize the weights and converge close to the minimum $\ell_2$-norm solution, even for initializations far from zero. For certain step sizes of $g$ and $w$, we show that they converge close to the minimum-norm solution. This is different from the behavior of gradient descent, which converges to the minimum-norm solution only when started at a point in the range of the feature matrix, and is thus more sensitive to initialization.
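A rough numerical sketch of the WN reparametrization and its chain-rule updates on an underdetermined least-squares problem. The step sizes here are arbitrary, not the tuned schedules the paper analyzes, so the final distance to the min-norm interpolant is only an informal check.

```python
# Weight-normalized GD, w = g * v / ||v||, on overparametrized least squares.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 50)) / np.sqrt(50)           # underdetermined system
y = rng.normal(size=20)
w_min = A.T @ np.linalg.solve(A @ A.T, y)             # minimum-l2-norm interpolant

g = 3.0                                               # scale initialized far from 0
v = rng.normal(size=50); v /= np.linalg.norm(v)       # direction (unit vector)
for _ in range(20000):
    w = g * v / np.linalg.norm(v)                     # WN reparametrization
    grad_w = A.T @ (A @ w - y) / len(y)               # gradient w.r.t. w
    g -= 0.1 * (v / np.linalg.norm(v)) @ grad_w       # chain rule through dw/dg
    P = np.eye(50) - np.outer(v, v) / (v @ v)         # dw/dv projects orth. to v
    v -= 0.1 * (g / np.linalg.norm(v)) * (P @ grad_w)

w = g * v / np.linalg.norm(v)
print(np.linalg.norm(A @ w - y), np.linalg.norm(w - w_min))
```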
A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version even for finite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions.
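The first-order Taylor expansion at the heart of the abstract is easy to write out. A small sketch (our own toy net, not the paper's code) comparing a two-layer ReLU network with its linearization $f_{\mathrm{lin}}(x) = f(x;\theta_0) + \nabla_\theta f(x;\theta_0)^\top(\theta-\theta_0)$ after a small parameter perturbation:

```python
# A net vs. its first-order Taylor expansion around the initial parameters.
import numpy as np

rng = np.random.default_rng(0)
m, d = 512, 4
W0 = rng.normal(size=(m, d)) / np.sqrt(d)             # hidden weights at init
a0 = rng.normal(size=m) / np.sqrt(m)                  # output weights at init
x = rng.normal(size=d)

def f(W, a):
    return a @ np.maximum(W @ x, 0.0)                 # two-layer ReLU net

h = np.maximum(W0 @ x, 0.0)
dW = ((W0 @ x > 0).astype(float) * a0)[:, None] * x   # df/dW at init
da = h                                                # df/da at init

dWp = 0.01 * rng.normal(size=W0.shape)                # a small parameter update
dap = 0.01 * rng.normal(size=a0.shape)
f_full = f(W0 + dWp, a0 + dap)
f_lin = f(W0, a0) + (dW * dWp).sum() + da @ dap       # linearized prediction
print(f_full - f_lin)                                 # small discrepancy
```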
The loss surface of an overparameterized neural network (NN) has many global minima with zero training error. We explain how common variants of the standard NN training procedure change the minimizer obtained. First, we make explicit how the magnitude of the initialization of a strongly overparameterized NN affects the minimizer and can degrade its final test performance, and we propose a strategy to limit this effect. Then we demonstrate that, for adaptive optimization such as AdaGrad, the obtained minimizer generally differs from the gradient descent (GD) minimizer. This adaptive minimizer is changed further by stochastic mini-batch training, even though in the non-adaptive case GD and stochastic GD result in essentially the same minimizer. Lastly, we explain that these effects remain relevant for less overparameterized NNs. While overparameterization has its benefits, our work highlights that it induces sources of error absent from underparameterized models.
Neural networks have achieved great empirical success in many fields. It has been observed that randomly initialized neural networks trained by first-order methods are able to achieve near-zero training loss, even though their loss landscapes are non-convex and non-smooth. There are few theoretical explanations for this phenomenon. Recently, this gap between practice and theory has been partially bridged by analyzing gradient descent (GD) and the heavy-ball method (HB) in the over-parameterized regime. In this work, we make further progress by considering Nesterov's accelerated gradient method (NAG) with a constant momentum parameter. We analyze its convergence for over-parameterized two-layer fully connected neural networks with ReLU activation. Specifically, we prove that the training error of NAG converges to zero at a non-asymptotic linear rate $(1-\Theta(1/\sqrt{\kappa}))^t$ after $t$ iterations, where $\kappa > 1$ is determined by the initialization and the architecture of the neural network. Moreover, we compare NAG with the existing convergence results for GD and HB. Our theoretical results show that NAG achieves an acceleration over GD and that its convergence rate is comparable to that of HB. Finally, numerical experiments validate the correctness of our theoretical analysis.
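For intuition on the rate, here is NAG versus plain GD on a quadratic with condition number $\kappa$, where the accelerated contraction $(1-\Theta(1/\sqrt{\kappa}))^t$ versus GD's $(1-\Theta(1/\kappa))^t$ is visible directly. This is only a textbook-style illustration; the neural-network analysis in the abstract is substantially more involved.

```python
# NAG vs. GD on a quadratic with condition number kappa.
import numpy as np

kappa, n = 100.0, 50
H = np.diag(np.linspace(1.0, kappa, n))               # eigenvalues in [1, kappa]
L = kappa                                             # smoothness constant
beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)  # momentum parameter

x_gd = np.ones(n)
x, x_prev = np.ones(n), np.ones(n)
for _ in range(200):
    x_gd = x_gd - (1.0 / L) * (H @ x_gd)              # plain gradient descent
    v = x + beta * (x - x_prev)                       # NAG look-ahead point
    x, x_prev = v - (1.0 / L) * (H @ v), x

print(np.linalg.norm(x_gd), np.linalg.norm(x))        # NAG is far closer to 0
```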
The neural tangent kernel (NTK) is a powerful tool for analyzing the training dynamics of neural networks and their generalization bounds. Research on the NTK has focused on typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp), e.g., StyleGAN and polynomial neural networks. In this work, we derive a finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks (PNNs). We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of the NTK. Based on our results, we elucidate the separation between PNNs and standard neural networks with respect to extrapolation and spectral bias. Our two key insights are that, compared to standard neural networks, PNNs can fit more complicated functions in the extrapolation regime and admit a slower eigenvalue decay of the associated NTK. Moreover, our theoretical results can be extended to other types of NNs-Hp, broadening the scope of our work. Our empirical results validate the separations for the wider class of NNs-Hp, which provides a good justification for a deeper understanding of neural architectures.
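For readers unfamiliar with the architecture class, a minimal Hadamard-product layer in the polynomial-net style looks as follows; this tiny forward pass is our own illustration of the kind of model whose NTK the abstract derives, not the paper's exact parameterization.

```python
# A single Hadamard-product layer: f(x) = a . ((U x) * (V x)), quadratic in x.
import numpy as np

rng = np.random.default_rng(0)
d, m = 6, 32
U = rng.normal(size=(m, d))
V = rng.normal(size=(m, d))
a = rng.normal(size=m) / m

x = rng.normal(size=d)
print(a @ ((U @ x) * (V @ x)))                        # elementwise (Hadamard) product
```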
Understanding the implicit bias of stochastic gradient descent (SGD) is one of the key challenges in deep learning, especially for overparametrized models, where the local minimizers of the loss function $L$ can form a manifold. Intuitively, with a sufficiently small learning rate $\eta$, SGD tracks gradient descent (GD) until it gets close to such a manifold, where the gradient noise prevents further convergence. In such a regime, Blanc et al. (2020) proved that SGD with label noise locally decreases a regularizer-like term, the sharpness of the loss, $\mathrm{tr}[\nabla^2 L]$. The current paper gives a general framework for such analysis by adapting ideas from Katzenberger (1991). It in principle allows a complete characterization of the regularization effect of SGD around such a manifold, i.e., the "implicit bias", using a stochastic differential equation (SDE) describing the limiting dynamics of the parameters, which is determined jointly by the loss function and the noise covariance. This yields some new results: (1) a global analysis of the implicit bias valid for $\eta^{-2}$ steps, in contrast to the local analysis of Blanc et al. (2020) that is only valid for $\eta^{-1.6}$ steps, and (2) allowing arbitrary noise covariance. As an application, we show that with arbitrarily large initialization, label-noise SGD can always escape the kernel regime and only requires $O(\kappa \ln d)$ samples to learn a $\kappa$-sparse overparametrized linear model in $\mathbb{R}^d$ (Woodworth et al., 2020), while GD initialized in the kernel regime requires $\Omega(d)$ samples. This upper bound is minimax optimal and improves the previous $\tilde{O}(\kappa^2)$ upper bound (HaoChen et al., 2020).
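The sharpness-reduction effect can be probed in two dimensions. A hedged toy sketch (our own construction): for $L(w)=(w_1 w_2 - y)^2$ with $y=1$, the zero-loss set is the hyperbola $w_1 w_2 = 1$, on which $\mathrm{tr}[\nabla^2 L] = 2(w_1^2+w_2^2)$ is minimized at $(1,1)$; adding label noise to $y$ should make SGD drift along the manifold toward that point. Step size, noise level, and iteration count are arbitrary.

```python
# Label-noise SGD drifting along a zero-loss manifold toward low sharpness.
import numpy as np

rng = np.random.default_rng(0)
w = np.array([4.0, 0.25])                             # sharp point on w1*w2 = 1
for _ in range(500000):
    y = 1.0 + 0.5 * rng.normal()                      # label noise
    r = w[0] * w[1] - y
    w -= 1e-2 * 2.0 * r * np.array([w[1], w[0]])      # SGD step on (w1*w2 - y)^2
print(w)                                              # expected to drift toward (1, 1)
```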
Iterative regularization is a classic idea in regularization theory that has recently become popular in machine learning. On the one hand, it allows one to design efficient algorithms controlling numerical and statistical accuracy at the same time. On the other hand, it allows one to shed light on the learning curves observed while training neural networks. In this paper, we focus on iterative regularization in the context of classification. After contrasting this setting with that of regression and inverse problems, we develop an iterative regularization approach based on the use of the hinge loss function. More precisely, we consider a diagonal approach for a family of algorithms for which we prove convergence as well as rates of convergence. Our approach compares favorably with other alternatives, as also confirmed in numerical simulations.
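To make the idea concrete, below is a simplified sketch of early-stopped subgradient descent on the hinge loss, where the iteration count plays the role of the regularization parameter. This is our own stripped-down caricature of the approach, not the paper's diagonal algorithm or its step-size schedule.

```python
# Early-stopped subgradient descent on the hinge loss as iterative regularization.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))
w_true = rng.normal(size=5)
y = np.where(X @ w_true >= 0, 1.0, -1.0)              # separable toy labels

w = np.zeros(5)
for t in range(1, 2001):                              # stopping time ~ regularization
    margins = y * (X @ w)
    sub = -(y[:, None] * X)[margins < 1.0].sum(axis=0) / len(y)  # hinge subgradient
    w -= (1.0 / np.sqrt(t)) * sub                     # diminishing step size
print(w / np.linalg.norm(w))
```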
A recent line of work studies overparametrized neural networks in the "kernel regime," i.e. when the network behaves during training as a kernelized linear predictor, and thus training with gradient descent has the effect of finding the minimum RKHS norm solution. This stands in contrast to other studies which demonstrate how gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms. Building on an observation by Chizat and Bach [2018], we show how the scale of the initialization controls the transition between the "kernel" (aka lazy) and "rich" (aka active) regimes and affects generalization properties in multilayer homogeneous models. We provide a complete and detailed analysis for a simple two-layer model that already exhibits an interesting and meaningful transition between the kernel and rich regimes, and we demonstrate the transition for more complex matrix factorization models and multilayer non-linear networks.
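The transition is often demonstrated on the diagonal model $f(x) = (u \odot u - v \odot v)^\top x$ from this line of work. A hedged sketch (our own constants): small initialization scale $\alpha$ yields an $\ell_1$-like, sparse implicit bias (rich regime), while large $\alpha$ yields an $\ell_2$/kernel bias.

```python
# Initialization scale alpha selects the regime in the diagonal model
# f(x) = (u*u - v*v) . x: small alpha -> sparse (rich), large alpha -> kernel.
import numpy as np

rng = np.random.default_rng(0)
d, n = 40, 20
X = rng.normal(size=(n, d))
w_star = np.zeros(d); w_star[:2] = 1.0                # 2-sparse ground truth
y = X @ w_star

def train(alpha, steps=200000):
    u = np.full(d, alpha); v = np.full(d, alpha)      # f = 0 at initialization
    lr = 1e-2 / max(alpha**2, 1.0)
    for _ in range(steps):
        g = X.T @ (X @ (u * u - v * v) - y) / n       # gradient w.r.t. w
        u, v = u - lr * 2.0 * u * g, v + lr * 2.0 * v * g
    return u * u - v * v

for alpha in (0.01, 10.0):
    print(alpha, np.round(train(alpha)[:4], 2))       # small alpha recovers sparsity
```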
Overparameterization refers to the important phenomenon where the width of a neural network is chosen such that learning algorithms can provably attain zero loss in non-convex training. Existing theory establishes such global convergence under various initialization strategies, training modifications, and width scalings. In particular, the state-of-the-art results require the width to scale quadratically with the number of training data under the standard initialization strategies used in practice for the best generalization performance. In contrast, the most recent results obtain linear scaling, but either require initializations that lead to "lazy training" or train only a single layer. In this work, we provide an analytical framework that lets us adopt standard initialization strategies, possibly avoid lazy training, and train all layers simultaneously in basic shallow neural networks, while attaining a desirable subquadratic scaling on the network width. We achieve these desiderata via a Polyak-Lojasiewicz condition, smoothness, and standard assumptions on the data, using tools from random matrix theory.
A common approach to training neural networks is to initialize all the weights to be independent Gaussian vectors. We observe that by instead initializing the weights into independent pairs, where each pair consists of two identical Gaussian vectors, we can significantly improve the convergence analysis. While a similar technique has been studied for random inputs [Daniely, NeurIPS 2020], it has not been analyzed with arbitrary inputs. Using this technique, we show how to significantly reduce the number of neurons required for two-layer ReLU networks, both in the underparameterized setting with the logistic loss, from roughly $\gamma^{-8}$ [Ji and Telgarsky, ICLR 2020] to $\gamma^{-2}$, where $\gamma$ denotes the separation margin with the neural tangent kernel, and in the overparameterized setting with the squared loss, from roughly $n^4$ [Song and Yang, 2019] to $n^2$, implicitly also improving the recent running-time bound of [Brand, Peng, Song and Weinstein, ITCS 2021]. For the underparameterized setting, we also prove new lower bounds that improve upon prior work and that, under certain assumptions, are best possible.
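The paired initialization itself is a one-liner. A minimal sketch of the construction (our own toy sizes): hidden weights come in identical Gaussian pairs with opposite output signs, so the network output is exactly zero at initialization for every input.

```python
# Paired initialization: identical Gaussian pairs with opposite output signs,
# so the network output is exactly zero at initialization.
import numpy as np

rng = np.random.default_rng(0)
m, d = 8, 5                                           # m pairs -> 2m hidden units
W_half = rng.normal(size=(m, d))
W = np.vstack([W_half, W_half])                       # identical weight pairs
a = np.hstack([np.ones(m), -np.ones(m)])              # opposite output weights

x = rng.normal(size=d)
print(a @ np.maximum(W @ x, 0.0))                     # exactly 0.0 for every x
```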
Understanding the properties of neural networks trained via stochastic gradient descent (SGD) is at the heart of the theory of deep learning. In this work, we take a mean-field view and consider a two-layer ReLU network trained via SGD for a univariate regularized regression problem. Our main result is that SGD is biased towards a simple solution: at convergence, the ReLU network implements a piecewise linear map of the inputs, and the number of "knot" points, i.e., points where the tangent of the ReLU network estimator changes, between two consecutive training inputs is at most three. In particular, as the number of neurons of the network grows, the SGD dynamics is captured by the solution of a gradient flow and, at convergence, the distribution of the weights approaches the unique minimizer of a related free energy, which has a Gibbs form. Our key technical contribution is the analysis of the estimator resulting from this minimizer: we show that its second derivative vanishes everywhere, except at some specific locations that represent the "knot" points. We also provide empirical evidence that, as predicted by our theory, knots may occur at locations distinct from the data points.
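The geometry behind "knots" is simple to write down. A short sketch (our own random weights): a two-layer ReLU net on scalar input is piecewise linear, and each hidden neuron contributes one potential knot where its activation flips, at $x = -b_j/w_j$.

```python
# A two-layer ReLU net on scalar input is piecewise linear; each hidden neuron
# contributes one potential "knot" at x = -b_j / w_j.
import numpy as np

rng = np.random.default_rng(0)
m = 20
w = rng.normal(size=m)                                # input weights
b = rng.normal(size=m)                                # biases
knots = np.sort(-b / w)                               # where ReLU(w_j x + b_j) flips
print(knots)                                          # the tangent changes only here
```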
A seminal work [Jacot et al., 2018] showed that training a neural network under a specific parameterization is equivalent to performing a particular kernel method as the width tends to infinity. This equivalence opened a promising direction for applying the rich literature on kernel methods to neural nets, which were much harder to tackle. The present survey covers key results on kernel convergence as the width goes to infinity, finite-width corrections, applications, and a discussion of the limitations of the corresponding method.
Understanding the black-box predictions of neural networks is challenging. To achieve this, early studies designed the influence function (IF) to measure the effect of removing a single training point on neural networks. However, the classic implicit Hessian-vector product (IHVP) method for computing IFs is fragile, and theoretical analysis of IFs in the context of neural networks is still lacking. To this end, we utilize neural tangent kernel (NTK) theory to compute IFs for neural networks trained with a regularized mean-squared loss, and we prove that the approximation error can be made arbitrarily small when the width of a two-layer ReLU network is sufficiently large. We analyze the error bound of the classic IHVP method in the over-parameterized regime to understand when and why it fails. In detail, our theoretical analysis reveals that (1) the accuracy of IHVP depends on the regularization term and is very low under weak regularization; (2) the accuracy of IHVP is significantly correlated with the probability density of the corresponding training points. We further borrow NTK theory to understand IFs better, including quantifying the complexity of influential samples and depicting the variation of IFs during the training dynamics. Numerical experiments on real-world data confirm our theoretical results and demonstrate our findings.
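The influence-function formula $\mathrm{IF}(z, z_{\mathrm{test}}) = -\nabla_\theta L(z_{\mathrm{test}})^\top H^{-1} \nabla_\theta L(z)$ is easiest to see where the Hessian is explicit. A minimal sketch on $\ell_2$-regularized linear regression, where the regularizer $\lambda$ keeps $H$ well conditioned (echoing point (1) of the abstract); the model and data are our own toy choices, not the paper's setting.

```python
# Influence function via an explicit (well-conditioned) Hessian inverse on
# l2-regularized linear regression.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 100, 10, 0.1
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
H = X.T @ X / n + lam * np.eye(d)                     # Hessian of the training loss
w = np.linalg.solve(H, X.T @ y / n)                   # ridge solution

x_test, y_test = rng.normal(size=d), 0.0
g_test = (x_test @ w - y_test) * x_test               # gradient of the test loss
g_0 = (X[0] @ w - y[0]) * X[0]                        # gradient at training point 0
print(-g_test @ np.linalg.solve(H, g_0))              # IF(z_0, z_test)
```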