In this preliminary work, we study the generalization properties of infinite ensembles of infinitely-wide neural networks. Amazingly, this model family admits tractable calculations for many information-theoretic quantities. We report analytical and empirical investigations in the search for signals that correlate with generalization.
A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version even for finite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions.
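The linearization itself is easy to reproduce at small scale. Below is a minimal sketch (not the authors' code; the two-layer tanh network, its width, and the random data are arbitrary choices) that compares a network's outputs after a small parameter perturbation with those of its first-order Taylor expansion around the initial parameters:

```python
import jax
import jax.numpy as jnp

def init_params(key, width=2048, d_in=4, d_out=1):
    k1, k2 = jax.random.split(key)
    # NTK-style parameterization: O(1) weights, explicit 1/sqrt(width) factors in the forward pass.
    return {"W1": jax.random.normal(k1, (d_in, width)),
            "W2": jax.random.normal(k2, (width, d_out))}

def f(params, x):
    h = jnp.tanh(x @ params["W1"] / jnp.sqrt(x.shape[-1]))
    return h @ params["W2"] / jnp.sqrt(params["W2"].shape[0])

params0 = init_params(jax.random.PRNGKey(0))
x = jax.random.normal(jax.random.PRNGKey(1), (8, 4))

# First-order Taylor expansion of f around params0:
# f_lin(params0 + delta, x) = f(params0, x) + J(params0, x) . delta
f0, jvp_fn = jax.linearize(lambda p: f(p, x), params0)

# A small hand-made parameter perturbation, standing in for a few gradient steps.
delta = jax.tree_util.tree_map(
    lambda w: 1e-2 * jax.random.normal(jax.random.PRNGKey(2), w.shape), params0)
params1 = jax.tree_util.tree_map(jnp.add, params0, delta)

exact = f(params1, x)        # original network at the perturbed parameters
linear = f0 + jvp_fn(delta)  # linearized (tangent) model at the same parameters
print(jnp.max(jnp.abs(exact - linear)))  # maximum discrepancy over the batch
```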
Recent works have suggested that finite Bayesian neural networks may sometimes outperform their infinite cousins because finite networks can flexibly adapt their internal representations. However, our theoretical understanding of how the learned hidden-layer representations of finite networks differ from the fixed representations of infinite networks remains incomplete. Perturbative finite-width corrections to networks have been studied, but the asymptotics of learned features have not been fully characterized. Here, we argue that the leading finite-width correction to the average feature kernel of any Bayesian network with a linear readout and Gaussian likelihood has a largely universal form. We illustrate this explicitly for three tractable network architectures: deep linear fully-connected and convolutional networks, and networks with a single nonlinear hidden layer. Our results begin to elucidate how task-relevant learning signals shape the hidden-layer representations of wide Bayesian neural networks.
The success of modern deep neural networks (DNNs) rests on their ability to transform inputs across multiple layers to build good high-level representations. Understanding this representation-learning process is therefore critical. However, we cannot use standard theoretical approaches involving infinite-width limits, because they eliminate representation learning. We therefore develop a new infinite-width limit, the representation learning limit, which exhibits representation learning mirroring that in finite-width networks while remaining extremely tractable. For example, in deep Gaussian processes the representation learning limit gives exactly multivariate Gaussian posteriors with a wide variety of kernels, including all isotropic (distance-dependent) kernels. We derive an elegant objective that describes how each network layer learns representations that interpolate between the inputs and the outputs. Finally, we use this limit and objective to develop a flexible, deep generalization of kernel methods, which we call deep kernel machines (DKMs). We show that DKMs can be scaled to large datasets using methods inspired by inducing-point methods from the Gaussian process literature, and that DKMs outperform other kernel-based methods.
Deep equilibrium networks (DEQs) are a promising way to construct models that trade off memory for compute. However, theoretical understanding of these models is still lacking compared to traditional networks, in part because of the repeated application of a single set of weights. We show that DEQs are sensitive to the higher-order statistics of the matrix families from which they are initialized. In particular, initializing with orthogonal or symmetric matrices allows for greater stability in training. This gives us a practical prescription for initialization that allows training with a broader range of initial weight scales.
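As a toy illustration of the role of the initialization family (a sketch only: real DEQ implementations use root-finding solvers such as Anderson or Broyden iterations and implicit differentiation, and the layer form, its scale, and the naive forward iteration below are illustrative assumptions), one can compare a single DEQ-style fixed-point layer under Gaussian versus orthogonal weight matrices:

```python
import numpy as np

def haar_orthogonal(n, rng):
    # Random orthogonal matrix via QR decomposition of a Gaussian matrix.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

def deq_forward(W, U, x, iters=100):
    """Naive fixed-point iteration for a single DEQ layer z* = tanh(W z* + U x)."""
    z = np.zeros(W.shape[0])
    for _ in range(iters):
        z = np.tanh(W @ z + U @ x)
    residual = np.linalg.norm(z - np.tanh(W @ z + U @ x))
    return z, residual

rng = np.random.default_rng(0)
n, d, scale = 256, 32, 1.2          # `scale` is the initial weight scale being varied

x = rng.standard_normal(d)
U = rng.standard_normal((n, d)) / np.sqrt(d)
inits = {
    "gaussian":   scale * rng.standard_normal((n, n)) / np.sqrt(n),  # i.i.d. Gaussian family
    "orthogonal": scale * haar_orthogonal(n, rng),                   # orthogonal family
}
for name, W in inits.items():
    _, res = deq_forward(W, U, x)
    print(f"{name:10s} fixed-point residual at scale {scale}: {res:.2e}")
```

Varying `scale` and the matrix family in this toy probes the same repeated application of one weight matrix that the abstract identifies as the source of sensitivity to higher-order initialization statistics.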
Deep neural networks are notoriously resistant to theoretical treatment. However, when the number of parameters in each layer tends to infinity, the network function is a Gaussian process (GP) and a quantitative predictive description becomes possible. The Gaussian approximation allows one to formulate criteria for selecting hyperparameters, such as the variances of the weights and biases, as well as the learning rate. These criteria rely on a notion of criticality defined for deep neural networks. In this work, we describe a new way to diagnose this criticality, both theoretically and empirically. To this end, we introduce partial Jacobians of a network, defined as derivatives of the preactivations in layer $l$ with respect to the preactivations in layer $l_0 < l$. These quantities are particularly useful when the network architecture involves many different layers. We discuss various properties of the partial Jacobians, such as their scaling with depth and their relation to the neural tangent kernel (NTK). We derive recurrence relations for the partial Jacobians and use them to analyze the criticality of deep MLP networks with (and without) LayerNorm. We find that normalization layers change the optimal values of the hyperparameters and the critical exponents. We argue that LayerNorm is more stable when applied to preactivations rather than activations, owing to the larger correlation depth.
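A minimal empirical sketch of the quantity just defined (assuming a plain tanh MLP with arbitrarily chosen width, depth, and $(\sigma_w, \sigma_b)$; this is an illustration, not the paper's code) computes the width-normalized squared norm of the partial Jacobian $\partial h^l / \partial h^{l_0}$ with automatic differentiation:

```python
import jax
import jax.numpy as jnp

def mlp_preactivations(ws, bs, x):
    """Return the list of preactivations h^1, ..., h^L of a tanh MLP."""
    hs, a = [], x
    for W, b in zip(ws, bs):
        h = a @ W + b
        hs.append(h)
        a = jnp.tanh(h)
    return hs

def partial_jacobian_norm(ws, bs, x, l0, l):
    """Empirical squared norm of dh^l / dh^{l0}, normalized by the width of layer l."""
    def from_l0(h_l0):
        a = jnp.tanh(h_l0)
        for W, b in zip(ws[l0 + 1:l + 1], bs[l0 + 1:l + 1]):
            h = a @ W + b
            a = jnp.tanh(h)
        return h
    h_l0 = mlp_preactivations(ws, bs, x)[l0]
    J = jax.jacobian(from_l0)(h_l0)          # shape (width_l, width_l0)
    return jnp.sum(J ** 2) / J.shape[0]

key = jax.random.PRNGKey(0)
depth, width, d_in = 8, 256, 16
sigma_w, sigma_b = 1.5, 0.1                  # hyperparameters whose criticality is being probed

keys = jax.random.split(key, 2 * depth + 1)
dims = [d_in] + [width] * depth
ws = [sigma_w * jax.random.normal(keys[i], (dims[i], dims[i + 1])) / jnp.sqrt(dims[i])
      for i in range(depth)]
bs = [sigma_b * jax.random.normal(keys[depth + i], (dims[i + 1],)) for i in range(depth)]

x = jax.random.normal(keys[-1], (d_in,))
for l in range(2, depth):
    print(l, float(partial_jacobian_norm(ws, bs, x, l0=1, l=l)))
# According to the abstract's framework, this quantity stays O(1) with depth at a
# critical (sigma_w, sigma_b) and grows or decays exponentially away from criticality.
```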
Estimating and optimizing Mutual Information (MI) is core to many problems in machine learning; however, bounding MI in high dimensions is challenging. To establish tractable and scalable objectives, recent work has turned to variational bounds parameterized by neural networks, but the relationships and tradeoffs between these bounds remain unclear. In this work, we unify these recent developments in a single framework. We find that the existing variational lower bounds degrade when the MI is large, exhibiting either high bias or high variance. To address this problem, we introduce a continuum of lower bounds that encompasses previous bounds and flexibly trades off bias and variance. On high-dimensional, controlled problems, we empirically characterize the bias and variance of the bounds and their gradients and demonstrate the effectiveness of our new bounds for estimation and representation learning.
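As a concrete instance of one member of this family, the sketch below (illustrative only: the correlated-Gaussian task and the fixed bilinear critic are assumptions; in practice the critic is a neural network trained on the bound) evaluates the InfoNCE lower bound on a problem whose true MI is known, showing the log-batch-size ceiling that produces high bias when the MI is large:

```python
import numpy as np
from scipy.special import logsumexp

def infonce_lower_bound(scores):
    """InfoNCE bound: mean_i [score(i, i) - logsumexp_j score(i, j)] + log(batch size).

    scores[i, j] is a critic value f(x_i, y_j); diagonal entries correspond to the
    jointly drawn (positive) pairs. The bound can never exceed log(batch size).
    """
    n = scores.shape[0]
    return np.mean(np.diag(scores) - logsumexp(scores, axis=1)) + np.log(n)

# Correlated Gaussian toy problem with known MI = -0.5 * d * log(1 - rho^2).
rng = np.random.default_rng(0)
d, rho, n = 20, 0.9, 512
true_mi = -0.5 * d * np.log(1 - rho**2)

x = rng.standard_normal((n, d))
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal((n, d))

# A simple fixed bilinear critic stands in for the learned neural-network critic.
scores = (x @ y.T) * rho / (1 - rho**2)
print(f"true MI = {true_mi:.2f} nats, InfoNCE estimate = {infonce_lower_bound(scores):.2f} "
      f"(capped at log n = {np.log(n):.2f})")
```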
We study in this paper the generalization error of models derived from multi-layer neural networks, in the regime where the size of the layers is comparable to the number of samples in the training data. We show that, in this regime, unbiased estimators have unacceptable performance for such nonlinear networks. We derive explicit generalization lower bounds for general biased estimators in the cases of linear regression and two-layer networks. In the linear case the bound is asymptotically tight. In the nonlinear case, we compare our bounds with an empirical study of the stochastic gradient descent algorithm. The analysis uses elements from the theory of large random matrices.
To better understand the theoretical behavior of large neural networks, several works have analyzed the case where a network's width tends to infinity. In this regime, the effect of random initialization and the process of training a neural network can be formally expressed with analytical tools such as Gaussian processes and neural tangent kernels. In this paper, we review methods for quantifying uncertainty in such infinite-width neural networks and compare them with their relationship to Gaussian processes in the Bayesian inference framework. We make use of several equivalence results along the way to obtain exact closed-form solutions for predictive uncertainty.
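The closed-form predictive uncertainty referred to above is ordinary GP regression with an NNGP kernel. The sketch below (a simplified illustration, not the paper's code: it uses the arc-cosine NNGP kernel of a one-hidden-layer ReLU network and an arbitrary toy regression task) computes the exact posterior mean and variance:

```python
import numpy as np

def relu_nngp_kernel(X1, X2, sigma_w=1.0, sigma_b=0.1):
    """NNGP kernel of an infinitely wide one-hidden-layer ReLU network (arc-cosine formula)."""
    d = X1.shape[1]
    K0_12 = sigma_b**2 + sigma_w**2 * (X1 @ X2.T) / d
    K0_11 = sigma_b**2 + sigma_w**2 * np.sum(X1**2, axis=1) / d
    K0_22 = sigma_b**2 + sigma_w**2 * np.sum(X2**2, axis=1) / d
    norm = np.sqrt(np.outer(K0_11, K0_22))
    theta = np.arccos(np.clip(K0_12 / norm, -1.0, 1.0))
    return sigma_b**2 + sigma_w**2 / (2 * np.pi) * norm * (
        np.sin(theta) + (np.pi - theta) * np.cos(theta))

def gp_posterior(K_train, K_cross, K_test, y, noise=1e-2):
    """Exact GP posterior mean and variance for regression with Gaussian observation noise."""
    L = np.linalg.cholesky(K_train + noise * np.eye(len(y)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, K_cross.T)
    mean = K_cross @ alpha
    var = np.diag(K_test) - np.sum(v**2, axis=0)
    return mean, var

# Toy 1D regression problem (arbitrary choice, for illustration only).
rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(20, 1))
y_train = np.sin(2 * X_train[:, 0]) + 0.1 * rng.standard_normal(20)
X_test = np.linspace(-3, 3, 50)[:, None]

K_train = relu_nngp_kernel(X_train, X_train)
K_cross = relu_nngp_kernel(X_test, X_train)
K_test = relu_nngp_kernel(X_test, X_test)
mean, var = gp_posterior(K_train, K_cross, K_test, y_train)
print(mean[:5], var[:5])  # predictive mean and variance at the first few test points
```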
We study the analogy between the renormalization group (RG) and deep neural networks, in which successive layers of neurons are analogous to successive steps along the RG flow. In particular, we explicitly compute the relative entropy, or Kullback-Leibler divergence, in the one- and two-dimensional Ising models under decimation RG, as well as in a feedforward neural network as a function of depth. We observe qualitatively identical behavior in both cases: a monotonic increase to a parameter-dependent asymptotic value. On the quantum field theory side, the monotonic increase confirms the connection between the relative entropy and the c-theorem. For the neural networks, the asymptotic behavior may have implications for various information-maximization methods in machine learning, as well as for disentangling compactness and generalizability. Furthermore, while both the two-dimensional Ising model and the random neural networks we consider exhibit non-trivial critical points, the relative entropy appears insensitive to the phase structure of either system. In this sense, more refined probes are required to fully elucidate the flow of information in these models.
At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit (16; 4; 7; 13; 6), thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function $f_\theta$ (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describing the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the non-linearity is non-polynomial. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function $f_\theta$ follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally, we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.
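At finite width, the NTK can be computed directly from parameter gradients. The following sketch (illustrative assumptions: a two-layer tanh network in the NTK parameterization, random data, scalar outputs) forms the empirical kernel and uses it for the kernel-regression prediction that, in the infinite-width limit, matches gradient-descent training on the squared loss:

```python
import jax
import jax.numpy as jnp

def f(params, x):
    """Scalar-output two-layer network in the NTK parameterization (illustrative widths)."""
    W1, W2 = params
    h = jnp.tanh(x @ W1 / jnp.sqrt(x.shape[-1]))
    return (h @ W2 / jnp.sqrt(W2.shape[0]))[..., 0]

def empirical_ntk(params, X1, X2):
    """Theta(x, x') = sum over parameters of df(x)/dtheta * df(x')/dtheta."""
    def grads(x):
        g = jax.grad(lambda p: f(p, x))(params)
        return jnp.concatenate([jnp.ravel(t) for t in jax.tree_util.tree_leaves(g)])
    return jax.vmap(grads)(X1) @ jax.vmap(grads)(X2).T

key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
width, d = 1024, 8
params = (jax.random.normal(k1, (d, width)), jax.random.normal(k2, (width, 1)))

X_train = jax.random.normal(k3, (32, d))
y_train = jnp.sin(X_train[:, 0])
X_test = jax.random.normal(k4, (8, d))

# Kernel (ridgeless) regression with the empirical NTK; with the limiting kernel this
# reproduces the result of fully training the network on the squared loss.
K_tt = empirical_ntk(params, X_train, X_train)
K_st = empirical_ntk(params, X_test, X_train)
pred = f(params, X_test) + K_st @ jnp.linalg.solve(
    K_tt + 1e-6 * jnp.eye(32), y_train - f(params, X_train))
print(pred)
```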
The Information Bottleneck theory provides a theoretical and computational framework for finding approximate minimum sufficient statistics. Analysis of the Stochastic Gradient Descent (SGD) training of a neural network on a toy problem has shown the existence of two phases, fitting and compression. In this work, we analyze the SGD training process of a Deep Neural Network on MNIST classification and confirm the existence of two phases of SGD training. We also propose a setup for estimating the mutual information for a Deep Neural Network through Variational Inference.
We introduce repriorisation, a data-dependent reparameterization that transforms a Bayesian neural network (BNN) posterior into a distribution whose KL divergence to the BNN prior vanishes as layer widths grow. The repriorisation map acts directly on parameters, and its analytic simplicity complements the known neural network Gaussian process (NNGP) behaviour of wide BNNs in function space. Exploiting repriorisation, we develop a Markov chain Monte Carlo (MCMC) posterior sampling algorithm that mixes faster the wider the BNN. This contrasts with the typically poor performance of MCMC in high dimensions. We observe up to 50x higher effective sample sizes for both fully connected and residual networks. Improvements are achieved at all widths, with the margin between the reparameterized and standard BNNs growing with layer width.
In deep learning, neural networks serve as noisy channels between input data and its representation. This perspective naturally relates deep learning with the pursuit of constructing channels with optimal performance in information transmission and representation. While considerable efforts are concentrated on realizing optimal channel properties during network optimization, we study a frequently overlooked possibility that neural networks can be initialized toward optimal channels. Our theory, consistent with experimental validation, identifies the primary mechanisms underlying this unknown possibility and suggests intrinsic connections between statistical physics and deep learning. Unlike conventional theories that characterize neural networks by applying the classic mean-field approximation, we offer analytic proof that this extensively applied simplification scheme is not valid for studying neural networks as information channels. To fill this gap, we develop a corrected mean-field framework applicable for characterizing the limiting behaviors of information propagation in neural networks without strong assumptions on inputs. Based on it, we propose an analytic theory to prove that mutual information maximization is realized between inputs and propagated signals when neural networks are initialized at dynamic isometry, a case where information transmits via norm-preserving mappings. These theoretical predictions are validated by experiments on real neural networks, suggesting the robustness of our theory against finite-size effects. Finally, we analyze our findings with information bottleneck theory to confirm the precise relations among dynamic isometry, mutual information maximization, and optimal channel properties in deep learning.
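Dynamic isometry can be illustrated with a deep linear toy model (an assumption made for simplicity; the abstract's analysis concerns nonlinear networks, where derivative factors also enter the input-output Jacobian): products of orthogonal weight matrices are norm-preserving, while products of i.i.d. Gaussian matrices are not:

```python
import numpy as np

def haar_orthogonal(n, rng):
    # Random orthogonal matrix via QR decomposition of a Gaussian matrix.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

rng = np.random.default_rng(0)
n, depth = 128, 50

# Input-output Jacobian of a deep *linear* network: the product of its weight matrices.
J_gauss = np.eye(n)
J_orth = np.eye(n)
for _ in range(depth):
    J_gauss = (rng.standard_normal((n, n)) / np.sqrt(n)) @ J_gauss
    J_orth = haar_orthogonal(n, rng) @ J_orth

for name, J in [("gaussian", J_gauss), ("orthogonal", J_orth)]:
    s = np.linalg.svd(J, compute_uv=False)
    print(f"{name:10s} singular values: min={s.min():.2e}, max={s.max():.2e}")
# The orthogonal product is norm-preserving (all singular values equal to 1, up to
# floating-point error), i.e. dynamically isometric, while the Gaussian product's
# singular-value spectrum spreads over many orders of magnitude as depth grows.
```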
We show that the input correlation matrix of typical classification datasets has an eigenspectrum in which, after a sharp initial drop, a large number of small eigenvalues are distributed uniformly over an exponentially large range. This structure is mirrored in networks trained on this data: we show that the Hessian and the Fisher Information Matrix (FIM) have eigenvalues that are spread uniformly over exponentially large ranges. We call such eigenspectra "sloppy" because the sets of weights corresponding to the small eigenvalues can be changed by large magnitudes without affecting the loss. Networks trained on atypical datasets with non-sloppy inputs do not share these traits, and deep networks trained on such datasets generalize poorly. Inspired by this, we study the hypothesis that sloppiness of inputs aids generalization in deep networks. We show that if the Hessian is sloppy, we can compute non-vacuous PAC-Bayes generalization bounds analytically. By exploiting our empirical observation that training predominantly takes place in the non-sloppy subspace of the FIM, we develop data-distribution-dependent PAC-Bayes priors that lead to accurate generalization bounds via numerical optimization.
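The "sloppy" signature described above is straightforward to probe. The sketch below (with synthetic data standing in for a real dataset; the exponentially decaying coordinate scales are an assumption made purely for illustration) computes the eigenspectrum of the input correlation matrix and measures how many orders of magnitude it spans:

```python
import numpy as np

def correlation_eigenspectrum(X):
    """Eigenvalues of the (uncentered) input correlation matrix X^T X / n, sorted descending."""
    C = X.T @ X / X.shape[0]
    return np.sort(np.linalg.eigvalsh(C))[::-1]

rng = np.random.default_rng(0)
n, d = 5000, 100

# Synthetic stand-in for image data: coordinates with exponentially decaying scales,
# mimicking the sloppy structure reported for typical classification datasets.
scales = np.exp(-np.linspace(0, 10, d))
X_sloppy = rng.standard_normal((n, d)) * scales

# Non-sloppy control: isotropic inputs.
X_iso = rng.standard_normal((n, d))

for name, X in [("sloppy-like", X_sloppy), ("isotropic", X_iso)]:
    ev = correlation_eigenspectrum(X)
    decades = np.log10(ev[0] / ev[-1])
    print(f"{name:12s} eigenvalues span ~{decades:.1f} orders of magnitude")
# For a real dataset one would pass the flattened inputs (e.g. image pixels) as X; the
# abstract reports an analogous uniform-in-log spread for the Hessian and FIM of trained networks.
```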
Estimating the generalization error (GE) of deep neural networks (DNNs) is an important task that often relies on the availability of held-out data. The ability to better predict GE from a single training set may yield overarching DNN design principles that reduce the reliance on trial and error, along with other performance-assessment advantages. In search of a quantity relevant to GE, we study the mutual information (MI) between the input and the final-layer representation, using the infinite-width DNN limit to bound the MI. An existing input-compression-based GE bound is used to link MI and GE. To the best of our knowledge, this represents the first empirical study of this bound. In our attempt to empirically falsify the theoretical bound, we find that it is often tight for the best-performing models. Furthermore, it detects randomization of the training labels in many cases, reflects robustness to test-time perturbations, and works well with only a few training samples. These results are promising, given that input compression is broadly applicable wherever MI can be estimated with confidence.
We present a variational approximation to the information bottleneck of Tishby et al. (1999). This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method "Deep Variational Information Bottleneck", or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack.
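A minimal sketch of the VIB objective (illustrative only: the linear encoder/decoder, toy shapes, and β value are assumptions; in the paper both maps are neural networks trained by stochastic gradient descent) shows the two terms and the reparameterization trick:

```python
import numpy as np

def vib_loss(x, y, enc_params, dec_W, beta=1e-3, rng=None):
    """One-sample estimate of the Deep VIB objective:
       E_q(z|x)[-log q(y|z)] + beta * KL(q(z|x) || N(0, I)).
    """
    W_mu, W_logvar = enc_params
    mu, logvar = x @ W_mu, x @ W_logvar           # stochastic encoder statistics
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps           # reparameterization trick
    logits = z @ dec_W                            # decoder q(y|z)
    log_probs = logits - np.log(np.sum(np.exp(logits), axis=1, keepdims=True))
    ce = -np.mean(log_probs[np.arange(len(y)), y])                       # distortion term
    kl = 0.5 * np.mean(np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1))  # rate term
    return ce + beta * kl

# Toy shapes; in practice the encoder and decoder are trained neural networks.
rng = np.random.default_rng(0)
n, d, k, classes = 64, 20, 8, 3
x = rng.standard_normal((n, d))
y = rng.integers(0, classes, size=n)
enc = (0.1 * rng.standard_normal((d, k)), 0.1 * rng.standard_normal((d, k)))
dec = 0.1 * rng.standard_normal((k, classes))
print(vib_loss(x, y, enc, dec, rng=rng))
```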
We study infinite limits of neural network quantum states ($\infty$-NNQS), which exhibit representation power through ensemble statistics as well as tractable gradient descent dynamics. Ensemble averages of Renyi entropies are expressed in terms of neural network correlators, and architectures that exhibit volume-law entanglement are presented. A general framework is developed for studying the gradient descent dynamics of neural network quantum states (NNQS), using a quantum state neural tangent kernel (QS-NTK). For $\infty$-NNQS the training dynamics simplify, since the QS-NTK becomes deterministic and constant. An analytic solution is derived for quantum state supervised learning, which allows an $\infty$-NNQS to recover any target wavefunction. Numerical experiments on finite and infinite NNQS in the transverse-field Ising model and the Fermi-Hubbard model show excellent agreement with theory. $\infty$-NNQS open up new opportunities for studying entanglement and training dynamics in other physics applications, such as finding ground states.
Understanding the functional principles of information processing in deep neural networks continues to be a challenge, in particular for networks with trained and thus non-random weights. To address this issue, we study the mapping between probability distributions implemented by a deep feed-forward network. We characterize this mapping as an iterated transformation of distributions, where the non-linearity in each layer transfers information between different orders of correlation functions. This allows us to identify essential statistics in the data, as well as different information representations that can be used by neural networks. Applied to an XOR task and to MNIST, we show that correlations up to second order predominantly capture the information processing in the internal layers, while the input layer also extracts higher-order correlations from the data. This analysis provides a quantitative and explainable perspective on classification.
The study of feature propagation at initialization in neural networks lies at the root of numerous initialization designs. An assumption very commonly made in the field states that the pre-activations are Gaussian. Although this convenient Gaussian hypothesis can be justified when the number of neurons per layer tends to infinity, it is challenged by both theoretical and experimental works for finite-width neural networks. Our major contribution is to construct a family of pairs of activation functions and initialization distributions that ensure that the pre-activations remain Gaussian throughout the network's depth, even in narrow neural networks. In the process, we discover a set of constraints that a neural network should fulfill to ensure Gaussian pre-activations. Additionally, we provide a critical review of the claims of the Edge of Chaos line of works and build an exact Edge of Chaos analysis. We also propose a unified view on pre-activations propagation, encompassing the framework of several well-known initialization procedures. Finally, our work provides a principled framework for answering the much-debated question: is it desirable to initialize the training of a neural network whose pre-activations are ensured to be Gaussian?
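The departure from Gaussianity in narrow networks is easy to observe numerically. The sketch below (an illustration with an arbitrarily chosen tanh MLP, not the paper's construction) propagates a fixed input through many independently initialized narrow networks and tracks the excess kurtosis of a pre-activation at each depth:

```python
import numpy as np

def excess_kurtosis(samples):
    s = (samples - samples.mean()) / samples.std()
    return np.mean(s**4) - 3.0             # zero for an exactly Gaussian distribution

rng = np.random.default_rng(0)
n_nets, width, depth = 20000, 8, 10        # deliberately narrow layers
sigma_w = 1.0

x = rng.standard_normal(width)             # one fixed input
# Propagate the same input through many independently initialized tanh networks
# and record the first neuron's pre-activation at each depth.
preacts = np.zeros((depth, n_nets))
for net in range(n_nets):
    a = x
    for layer in range(depth):
        W = sigma_w * rng.standard_normal((width, width)) / np.sqrt(width)
        h = a @ W
        preacts[layer, net] = h[0]
        a = np.tanh(h)

for layer in range(depth):
    print(f"layer {layer + 1:2d}: excess kurtosis = {excess_kurtosis(preacts[layer]):+.3f}")
# The first layer is exactly Gaussian over initializations (a linear map of a fixed input
# with Gaussian weights); deeper, narrow layers drift away from zero, which is the
# finite-width effect the constructed (activation, initialization) pairs aim to suppress.
```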