There is growing interest in the generalization performance of large multilayer neural networks that can be trained to achieve zero training error while generalizing well on test data. This regime, known as "second descent", appears to contradict the conventional view that optimal model complexity should reflect an optimal balance between underfitting and overfitting, i.e., the bias-variance trade-off. This paper presents a VC-theoretical analysis of double descent and shows that it can be fully explained by classical VC generalization bounds. We illustrate the application of analytic VC bounds for modeling double descent in classification problems, using empirical results for several learning methods, such as SVM, least squares, and multilayer perceptron classifiers. In addition, we discuss several reasons for the misinterpretation of VC-theoretical results in the deep learning community.
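For reference, the classical VC generalization bound invoked above can be stated as follows (one standard form from Vapnik's theory for classification; the exact constants vary across presentations): with probability at least $1-\eta$, a classifier $f$ from a function class of VC dimension $h$ trained on $n$ samples satisfies

$$R(f) \;\le\; R_{\mathrm{emp}}(f) \;+\; \sqrt{\frac{h\left(\ln(2n/h) + 1\right) - \ln(\eta/4)}{n}}.$$

Analyses of this kind hinge on $h$ being the VC dimension of the function class the learning procedure actually searches, rather than the raw parameter count.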
We show that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.
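Model-wise double descent is easy to reproduce in miniature. The sketch below (our illustration, not the authors' code) fits min-norm least squares on random ReLU features of increasing width; test error typically rises as the width approaches the number of training samples and falls again past the interpolation threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 10
w_star = rng.normal(size=d)                        # ground-truth direction

def make_data(n):
    X = rng.normal(size=(n, d))
    return X, np.sin(X @ w_star) + 0.1 * rng.normal(size=n)

Xtr, ytr = make_data(n_train)
Xte, yte = make_data(n_test)

W = rng.normal(size=(d, 2000)) / np.sqrt(d)        # one wide random feature map
for width in [10, 50, 90, 100, 110, 200, 1000, 2000]:
    Ftr = np.maximum(Xtr @ W[:, :width], 0)        # random ReLU features
    Fte = np.maximum(Xte @ W[:, :width], 0)
    beta = np.linalg.pinv(Ftr) @ ytr               # min-norm least-squares fit
    print(width, round(np.mean((Fte @ beta - yte) ** 2), 3))
```

The peak near width = n_train and the subsequent descent are the model-wise phenomenon; the paper's effective model complexity generalizes this picture beyond parameter count.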
The weight norm $\|w\|$ and margin $\gamma$ enter learning theory through the normalized margin $\gamma/\|w\|$. Since standard neural net optimizers do not control the normalized margin, it is hard to test whether this quantity is causally related to generalization. This paper designs a series of experimental studies that explicitly control the normalized margin, addressing two central questions. First: does the normalized margin always have a causal effect on generalization? The paper finds that the answer is no: networks can be produced in which the normalized margin appears to bear no relationship to generalization, counter to the theory of Bartlett et al. (2017). Second: does the normalized margin ever have a causal effect on generalization? The paper finds that the answer is yes: in standard training setups, test performance closely tracks the normalized margin. The paper puts forward a Gaussian process model as a promising explanation for this behavior.
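For a linear classifier the quantity in question is directly computable; the snippet below (a minimal illustration on made-up separable data) measures the normalized margin $\min_i y_i \langle w, x_i \rangle / \|w\|_2$ of a fitted separator.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) + 2.0      # two linearly separable blobs
X[:100] *= -1
y = np.array([-1] * 100 + [1] * 100)

clf = LinearSVC(C=1e4, max_iter=10_000).fit(X, y)  # large C: near hard-margin
w, b = clf.coef_.ravel(), clf.intercept_[0]
margins = y * (X @ w + b)                # unnormalized per-example margins
print("normalized margin:", margins.min() / np.linalg.norm(w))
```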
Learning curves provide insight into the dependence of a learner's generalization performance on the training set size. This important tool can be used for model selection, to predict the effect of more training data, and to reduce the computational complexity of model training and hyperparameter tuning. This review recounts the origins of the term, provides a formal definition of the learning curve, and briefly covers basics such as its estimation. Our main contribution is a comprehensive overview of the literature regarding the shape of learning curves. We discuss empirical and theoretical evidence that supports well-behaved curves that often have the shape of a power law or an exponential. We consider the learning curves of Gaussian processes, the complex shapes they can display, and the factors influencing them. We draw specific attention to examples of learning curves that are ill-behaved, showing worse learning performance with more training data. To wrap up, we point out various open problems that warrant deeper empirical and theoretical investigation. All in all, our review underscores that learning curves are surprisingly diverse and no universal model can be identified.
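When a curve is well behaved, its shape can be summarized by fitting a parametric model; the sketch below (our illustration, with hypothetical error measurements) fits the commonly used power law $\varepsilon(n) = a\,n^{-b} + c$ with scipy.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a * n ** (-b) + c              # error = a*n^-b + irreducible floor c

# hypothetical measured test errors at increasing training-set sizes
sizes = np.array([100, 200, 400, 800, 1600, 3200])
errors = np.array([0.31, 0.24, 0.19, 0.16, 0.14, 0.13])

(a, b, c), _ = curve_fit(power_law, sizes, errors, p0=(1.0, 0.5, 0.1))
print(f"fit: {a:.2f} * n^-{b:.2f} + {c:.2f}")
print("predicted error at n=10000:", power_law(10_000, a, b, c))
```

Extrapolating such a fit is exactly the "predict the effect of more training data" use case mentioned above, and it fails precisely on the ill-behaved curves the review draws attention to.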
We study the generalization of over-parameterized classifiers where Empirical Risk Minimization (ERM) for learning leads to zero training error. In these over-parameterized settings there are many global minima with zero training error, some of which generalize better than others. We show that under certain conditions the fraction of "bad" global minima with a true error larger than $\epsilon$ decays to zero exponentially fast with the number of training data n. The bound depends on the distribution of the true error over the set of classifier functions used for the given classification problem, and does not necessarily depend on the size or complexity (e.g. the number of parameters) of the classifier function set. This might explain the unexpectedly good generalization even of highly over-parameterized Neural Networks. We support our mathematical framework with experiments on a synthetic data set and a subset of MNIST.
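The flavor of this claim can be checked in a toy one-dimensional setting (our construction, not the paper's experiment): with threshold classifiers $h_\theta(x) = \mathrm{sign}(x - \theta)$ and a true boundary at $0.5$, every threshold between the largest negative and smallest positive sample is a global minimum of the empirical risk, and the fraction of those interpolants with true error above $\epsilon$ shrinks rapidly with n.

```python
import numpy as np

rng = np.random.default_rng(0)
eps, trials = 0.05, 2000

for n in [10, 20, 40, 80, 160]:
    bad_fraction = []
    for _ in range(trials):
        x = rng.uniform(0, 1, size=n)
        lo = x[x <= 0.5].max(initial=0.0)   # largest negative example
        hi = x[x > 0.5].min(initial=1.0)    # smallest positive example
        # consistent thresholds = [lo, hi]; "bad" ones have |theta - 0.5| > eps
        good = max(0.0, min(hi, 0.5 + eps) - max(lo, 0.5 - eps))
        bad_fraction.append(1.0 - good / (hi - lo))
    print(n, round(np.mean(bad_fraction), 4))
```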
We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.
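A single denoising autoencoder layer of the kind stacked here can be sketched in a few lines of PyTorch (our sketch, not the authors' original implementation, which used a cross-entropy reconstruction objective for binary inputs): corrupt the input, reconstruct the clean version.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, d_in=784, d_hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(d_hidden, d_in), nn.Sigmoid())

    def forward(self, x, corruption=0.3):
        # masking noise: zero out a random subset of input dimensions
        mask = (torch.rand_like(x) > corruption).float()
        return self.decoder(self.encoder(x * mask))

model = DenoisingAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                       # stand-in for a minibatch of images
for _ in range(5):
    loss = nn.functional.mse_loss(model(x), x)  # reconstruct the *clean* input
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```

Stacking then proceeds layer by layer: each new autoencoder is trained on the (uncorrupted) codes produced by the previous one.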
The benefits of overparameterization for achieving superior generalization performance have been shown in several recent studies, justifying the trend of using larger models in practice. In the context of robust learning, however, the effect of neural network size has not been well studied. In this work, we find that in the presence of a substantial fraction of mislabeled examples, increasing the network size beyond some point can be harmful. In particular, the originally monotonic or "double descent" test loss curve (w.r.t. network width) turns into a U-shaped or double-U-shaped curve as the label noise increases, suggesting that models of intermediate size achieve the best generalization. We observe similar test-loss behavior when the network size is controlled by density via random pruning. We also take a closer look at the phenomenon through a bias-variance decomposition and theoretically characterize how label noise shapes the variance term. Similar behavior of the test loss can be observed even when state-of-the-art robust methods are adopted, indicating that limiting network size could further improve existing methods. Finally, we empirically examine the effect of network size on the smoothness of learned functions, and find that the originally negative correlation between size and smoothness is flipped by label noise.
Simpler models are preferable to more complex ones in many situations, and controlling this model complexity is the goal of many methods in machine learning, such as regularization, hyperparameter tuning, and architecture design. In deep learning, it has been difficult to understand the underlying mechanisms of complexity control, since many traditional measures are not well suited to deep neural networks. Here we develop the notion of geometric complexity, a measure of the variability of the model function computed using a discrete Dirichlet energy. Using a combination of theoretical arguments and empirical results, we show that many common training heuristics, such as parameter norm regularization, spectral norm regularization, flatness regularization, implicit gradient regularization, noise regularization, and the choice of parameter initialization, all act to control geometric complexity, providing a unifying framework in which to characterize the behavior of deep learning models.
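On a sample of data points, a discrete Dirichlet energy of the kind described reduces to the mean squared norm of the input-gradient of the model function; a minimal PyTorch sketch (our reading of the definition, shown for a scalar-output model) follows.

```python
import torch
import torch.nn as nn

def geometric_complexity(model, x):
    """Mean squared input-gradient norm over a batch (discrete Dirichlet energy)."""
    x = x.clone().requires_grad_(True)
    out = model(x).sum()                  # scalar output per example, summed
    (grad,) = torch.autograd.grad(out, x)
    return grad.pow(2).sum(dim=1).mean()

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
x = torch.randn(128, 10)
print(geometric_complexity(model, x).item())
```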
With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
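The central experiment, fitting random labels, is straightforward to reproduce at small scale; the sketch below (our illustration) trains a small MLP on data whose labels carry no signal and watches training accuracy climb toward 100%.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 32)                 # random inputs
y = torch.randint(0, 2, (512,))          # labels assigned completely at random

model = nn.Sequential(nn.Linear(32, 512), nn.ReLU(), nn.Linear(512, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    loss = nn.functional.cross_entropy(model(X), y)
    opt.zero_grad(); loss.backward(); opt.step()

acc = (model(X).argmax(dim=1) == y).float().mean()
print("train accuracy on random labels:", acc.item())   # approaches 1.0
```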
The goal in label-imbalanced and group-sensitive classification is to optimize relevant metrics, such as balanced error and equal opportunity. Classical methods, such as weighted cross-entropy, fail when training deep nets to the terminal phase of training (TPT), that is, training beyond zero training error. This observation has motivated the recent development of heuristic alternatives that follow the intuitive mechanism of promoting larger margins for minority classes. In contrast to previous heuristics, we follow a principled analysis explaining how different loss adjustments affect margins. First, we prove that for all linear classifiers trained in TPT, it is necessary to introduce multiplicative, rather than additive, logit adjustments in order to appropriately change the relative margins. To show this, we discover a connection of the multiplicative CE modification to cost-sensitive support vector machines. Perhaps counterintuitively, we also find that, at the start of training, the same multiplicative weights can actually harm the minority classes. Thus, while additive adjustments are ineffective in the TPT, we show that they can speed up convergence by countering the initial negative effect of the multiplicative weights. Motivated by these findings, we formulate the vector-scaling (VS) loss, which captures existing techniques as special cases. Moreover, we introduce a natural extension of the VS loss to group-sensitive classification, thus treating the two common types of imbalance (label/group) in a unified way. Importantly, our experiments on state-of-the-art datasets are fully consistent with our theoretical insights and confirm the superior performance of our algorithms. Finally, for imbalanced Gaussian-mixture data, we perform a generalization analysis, revealing tradeoffs between balanced / standard error and equal opportunity.
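A minimal implementation of a VS-style loss reads as follows (our sketch; the frequency-based parameterization of the multiplicative factors and additive offsets below is one common recipe and should be treated as an assumption, not the paper's exact configuration).

```python
import torch
import torch.nn.functional as F

def vs_loss(logits, targets, class_counts, gamma=0.3, tau=1.0):
    """Vector-scaling loss: CE on multiplicatively and additively adjusted logits."""
    counts = torch.as_tensor(class_counts, dtype=torch.float)
    delta = (counts / counts.max()) ** gamma         # multiplicative adjustment
    iota = tau * torch.log(counts / counts.sum())    # additive adjustment
    return F.cross_entropy(logits * delta + iota, targets)

logits = torch.randn(8, 3, requires_grad=True)
targets = torch.randint(0, 3, (8,))
loss = vs_loss(logits, targets, class_counts=[900, 90, 10])
loss.backward()
print(loss.item())
```

Setting gamma=0 and tau=0 recovers plain cross-entropy, while gamma=0 with tau>0 gives a purely additive logit adjustment, which is how the loss captures existing techniques as special cases.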
Gradient-based deep learning algorithms perform remarkably well in practice, but it is not well understood why they are able to generalize despite having many more parameters than training examples. Implicit bias is believed to be a key factor in their ability to generalize, and hence it has been widely studied in recent years. In this short survey, we explain the notion of implicit bias, review main results, and discuss their implications.
While overfitting and, more generally, double descent are ubiquitous in machine learning, increasing the number of parameters of the most widely used tensor network, the matrix product state (MPS), has generally led to monotonic improvement of test performance in previous studies. To better understand the generalization properties of architectures parameterized by MPS, we construct artificial data which can be exactly modeled by an MPS and train models with different numbers of parameters. We observe model overfitting for one-dimensional data, but also find that, for more complex data, overfitting is less significant, while with MNIST image data we do not find any signatures of overfitting. We speculate that the generalization properties of MPS depend on the properties of the data: with one-dimensional data (for which the MPS ansatz is the most suitable), MPS is prone to overfitting, while with more complex data, which cannot be fit by an MPS exactly, overfitting may be much less significant.
Recent research shows that the dynamics of an infinitely wide neural network (NN) trained by gradient descent can be characterized by the Neural Tangent Kernel (NTK) \citep{jacot2018neural}. Under the squared loss, an infinite-width NN trained by gradient descent with an infinitesimally small learning rate is equivalent to kernel regression with the NTK \citep{arora2019exact}. However, the equivalence is currently known only for ridge regression \citep{arora2019harnessing}, while the equivalence between NNs and other kernel machines (KMs), e.g., support vector machines (SVMs), remains unknown. Therefore, in this work, we propose to establish the equivalence between NNs and SVMs, and specifically, between an infinite-width NN trained by the soft-margin loss and the standard soft-margin SVM with the NTK, trained by subgradient descent. Our main theoretical results include establishing the equivalence between NNs and a broad family of $\ell_2$-regularized KMs with finite-width bounds, which cannot be handled by prior work, and showing that every finite-width NN trained by such regularized loss functions is approximately a KM. Furthermore, we demonstrate that our theory enables three practical applications, including (i) non-vacuous generalization bounds for NNs via the corresponding KM bounds; (ii) non-trivial robustness certificates for infinite-width NNs (while existing robustness verification methods provide vacuous bounds); and (iii) intrinsically more robust infinite-width NNs than those from previous kernel regression. Our code for the experiments is available at \url{https://github.com/leslie-ch/equiv-nn-svm}.
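The SVM side of the correspondence is easy to instantiate: with a closed form of the two-layer ReLU NTK (a standard expression for unit-norm inputs, up to an overall scale; treat the exact scaling as an assumption), one can train a soft-margin SVM on the precomputed kernel, as sketched below.

```python
import numpy as np
from sklearn.svm import SVC

def relu_ntk(A, B):
    """Two-layer ReLU NTK for row-wise unit-norm inputs (standard form, up to scale)."""
    u = np.clip(A @ B.T, -1.0, 1.0)
    phi = np.arccos(u)
    k1 = (np.sqrt(1 - u**2) + u * (np.pi - phi)) / np.pi  # second-layer term
    k0 = u * (np.pi - phi) / np.pi                        # first-layer term
    return k0 + k1

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # formula assumes unit norm
y = np.sign(X[:, 0] + 0.3 * X[:, 1])

svm = SVC(kernel="precomputed", C=1.0).fit(relu_ntk(X, X), y)
print("train accuracy:", svm.score(relu_ntk(X, X), y))
```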
The practical success of overparameterized neural networks has motivated the recent scientific study of interpolating methods, which perfectly fit their training data. Certain interpolating methods, including neural networks, can fit noisy training data without catastrophically bad test performance, in defiance of standard intuitions from statistical learning theory. Aiming to explain this, a body of recent work has studied $\textit{benign overfitting}$, a phenomenon where some interpolating methods approach Bayes optimality, even in the presence of noise. In this work we argue that, while benign overfitting has been instructive to study, many real interpolating methods like neural networks do not fit benignly: modest noise in the training set causes nonzero (but non-infinite) excess risk at test time, implying these models are neither benign nor catastrophic but rather fall in an intermediate regime. We call this intermediate regime $\textit{tempered overfitting}$, and we initiate its systematic study. We first explore this phenomenon in the context of kernel (ridge) regression (KR), by obtaining conditions on the ridge parameter and kernel eigenspectrum under which KR exhibits each of the three behaviors. We find that kernels with powerlaw spectra, including Laplace kernels and ReLU neural tangent kernels, exhibit tempered overfitting. We then empirically study deep neural networks through the lens of our taxonomy, and find that those trained to interpolation are tempered, while those stopped early are benign. We hope our work leads to a more refined understanding of overfitting in modern learning.
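The kernel-regression part of the taxonomy can be probed directly; the sketch below (our illustration) fits kernel ridge regression with a Laplace kernel to noisy labels at several ridge values, down to near-interpolation, and reports test error against the noiseless target.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, noise = 300, 5, 0.5

def laplace_kernel(A, B, bw=1.0):
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return np.exp(-dists / bw)

Xtr, Xte = rng.normal(size=(n, d)), rng.normal(size=(1000, d))
f = lambda X: np.sin(X[:, 0])                     # hypothetical clean target
ytr = f(Xtr) + noise * rng.normal(size=n)         # noisy training labels

Ktr, Kte = laplace_kernel(Xtr, Xtr), laplace_kernel(Xte, Xtr)
for ridge in [1e1, 1e-1, 1e-3, 1e-8]:             # 1e-8: near interpolation
    alpha = np.linalg.solve(Ktr + ridge * np.eye(n), ytr)
    err = np.mean((Kte @ alpha - f(Xte)) ** 2)    # excess risk vs. clean target
    print(f"ridge={ridge:.0e}  test MSE={err:.3f}")
```

Tempered behavior corresponds to the near-interpolation error settling at a nonzero level between the Bayes-optimal error and catastrophic blow-up.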
It is well known that overparameterized deep networks are able to perfectly fit the training data while at the same time showing good generalization performance. A common paradigm, drawn from intuition on linear regression, suggests that large networks are able to interpolate even noisy data without deviating significantly from the ground-truth signal. At present, a precise characterization of this phenomenon is missing. In this work, we present an empirical study of the sharpness of the loss landscape of deep networks as we systematically control the number of model parameters and training epochs. We extend our study to neighborhoods of the training data, as well as to cleanly and noisily labeled samples. Our findings show that the loss sharpness in the input space follows both model-wise and epoch-wise double descent, with worse peaks observed around noisily labeled samples. In contrast to existing intuition, small interpolating models fit both clean and noisy data sharply, while large models express a smooth and flat loss landscape.
A recent line of work studies overparametrized neural networks in the "kernel regime," i.e. when the network behaves during training as a kernelized linear predictor, and thus training with gradient descent has the effect of finding the minimum RKHS norm solution. This stands in contrast to other studies which demonstrate how gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms. Building on an observation by Chizat and Bach [2018], we show how the scale of the initialization controls the transition between the "kernel" (aka lazy) and "rich" (aka active) regimes and affects generalization properties in multilayer homogeneous models. We provide a complete and detailed analysis for a simple two-layer model that already exhibits an interesting and meaningful transition between the kernel and rich regimes, and we demonstrate the transition for more complex matrix factorization models and multilayer non-linear networks.
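The simple two-layer model admits a compact simulation; the sketch below (our construction in the spirit of the paper, using a diagonal squared parameterization $w = u \odot u - v \odot v$) shows how the initialization scale alpha steers gradient descent between a dense, minimum-l2-like interpolant (large alpha, kernel regime) and a sparse, minimum-l1-like one (small alpha, rich regime). The learning-rate scaling is our own choice for numerical stability.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                            # underdetermined: many interpolants
X = rng.normal(size=(n, d))
w_star = np.zeros(d); w_star[:3] = 1.0    # sparse ground truth
y = X @ w_star

for alpha in [10.0, 0.01]:                # large alpha: kernel; small: rich
    u, v = alpha * np.ones(d), alpha * np.ones(d)
    lr = 1e-4 / max(alpha * alpha, 1.0)   # damp the step for large inits
    for _ in range(100_000):
        r = X @ (u * u - v * v) - y       # residual of f(x) = <u*u - v*v, x>
        g = X.T @ r
        u, v = u - lr * 2 * u * g, v + lr * 2 * v * g
    w = u * u - v * v
    print(f"alpha={alpha}: l1={np.abs(w).sum():.2f}, l2={np.linalg.norm(w):.2f}")
```

A markedly larger l1/l2 ratio at large alpha is the signature of the dense kernel-regime solution, while the small-alpha run recovers something close to the sparse ground truth.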
The notion of neural collapse refers to several emergent phenomena that have been empirically observed across various canonical classification problems. During the terminal phase of training a deep neural network, the feature embeddings of all examples of the same class tend to collapse to a single representation, while the features of different classes tend to separate as much as possible. Neural collapse is often studied through a simplified model, called the unconstrained feature representation, in which the model is assumed to have "infinite expressivity" and can map each data point to any arbitrary representation. In this work, we propose a more realistic variant of the unconstrained feature representation that takes the limited expressivity of the network into account. Empirical evidence suggests that the memorization of noisy data points leads to a degradation (dilation) of the neural collapse. Using a model of the memorization-dilation (M-D) phenomenon, we show one mechanism by which different losses lead to different performances of the trained network on noisy data. Our analysis reveals why label smoothing, a modification of cross-entropy empirically observed to produce a regularization effect, leads to improved generalization in classification tasks.
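Label smoothing, the modification analyzed above, replaces one-hot targets with a mixture of the one-hot vector and the uniform distribution; a minimal sketch (ours) is below. Recent PyTorch versions also expose this directly via the label_smoothing argument of cross_entropy.

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, eps=0.1):
    """Cross-entropy against (1-eps)*one_hot + eps*uniform targets."""
    n_classes = logits.shape[1]
    log_probs = F.log_softmax(logits, dim=1)
    one_hot = F.one_hot(targets, n_classes).float()
    soft = (1 - eps) * one_hot + eps / n_classes
    return -(soft * log_probs).sum(dim=1).mean()

logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))
print(smoothed_cross_entropy(logits, targets).item())
print(F.cross_entropy(logits, targets, label_smoothing=0.1).item())  # matches
```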
We consider the problem of data classification where the training set consists of just a few data points. We explore this phenomenon mathematically and reveal key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities. The main thrust of our analysis is to reveal the influence on the model's generalisation capabilities of nonlinear feature transformations mapping the original data into high, and possibly infinite, dimensional spaces.
"Benign overfitting", where classifiers memorize noisy training data yet still achieve good generalization performance, has drawn great attention in the machine learning community. To explain this surprising phenomenon, a series of works have provided theoretical justification in overparameterized linear regression, classification, and kernel methods. However, it remained unclear whether benign overfitting still occurs in the presence of adversarial examples, i.e., examples with tiny and intentional perturbations that fool the classifiers. In this paper, we show that benign overfitting indeed occurs in adversarial training, a principled approach to defend against adversarial examples. In detail, we prove risk bounds for adversarially trained linear classifiers on a mixture of sub-Gaussian data under $\ell_p$ adversarial perturbations. Our result suggests that, under moderate perturbations, adversarially trained linear classifiers can achieve near-optimal standard and adversarial risks despite overfitting the noisy training data. Numerical experiments validate our theoretical findings.
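For a linear classifier the inner maximization of adversarial training has a closed form: under an $\ell_\infty$ perturbation of radius $\epsilon$, the worst-case margin is $y\,\langle w, x\rangle - \epsilon\|w\|_1$. The sketch below (our illustration on made-up sub-Gaussian mixture data, not the paper's exact setup) trains with this closed-form adversarial logistic loss by subgradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 200, 50, 0.05
mu = rng.normal(size=d) / np.sqrt(d)                 # class mean direction
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + 0.5 * rng.normal(size=(n, d))  # sub-Gaussian mixture

w = np.zeros(d)
for _ in range(5000):
    margins = y * (X @ w) - eps * np.abs(w).sum()    # worst-case l_inf margins
    p = 1.0 / (1.0 + np.exp(margins))                # logistic gradient weights
    grad = -(p * y) @ X + eps * p.sum() * np.sign(w) # includes ||w||_1 subgradient
    w -= 0.01 * grad / n

adv_acc = np.mean(y * (X @ w) - eps * np.abs(w).sum() > 0)
print("robust train accuracy:", adv_acc)
```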