Graph neural networks (GNNs) have become the default machine learning model for relational datasets, including protein interaction networks, biological neural networks, and scientific collaboration graphs. We use tools from statistical physics and random matrix theory to precisely characterize generalization in simple graph convolution networks trained on the contextual stochastic block model. The derived curves are phenomenologically rich: they explain the distinction between learning on homophilic and heterophilic graphs, and they predict double descent, whose existence in GNNs has been questioned by recent work. Our results are the first to accurately explain the behavior not only of a stylized graph learning model but also of complex GNNs on messy real-world datasets. To wit, we use our analytic insights about homophily and heterophily to improve the performance of state-of-the-art graph neural networks on several heterophilic benchmarks by the simple addition of negative self-loop filters.
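As a concrete illustration of the negative self-loop idea on a heterophilic graph, the following sketch contrasts a positive and a negative self-loop filter on data drawn from a contextual stochastic block model. The generator, the filter form, and all parameter values are assumptions made for this illustration, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 50
y = rng.integers(0, 2, size=n)                               # two communities
mu = rng.normal(size=d) / np.sqrt(d)                         # class-mean direction
X = (2 * y[:, None] - 1) * mu + rng.normal(size=(n, d))      # CSBM node features

p_in, p_out = 0.02, 0.08                                     # heterophilic: unlike nodes connect more
P = np.where(y[:, None] == y[None, :], p_in, p_out)
A = np.triu(rng.random((n, n)) < P, 1)
A = (A | A.T).astype(float)

def filtered(X, A, self_loop):
    """Degree-normalized neighbor average plus a +1 or -1 weighted self-loop."""
    deg = np.maximum(A.sum(1, keepdims=True), 1.0)
    return A @ X / deg + self_loop * X

def separation(Z):
    """Distance between class means relative to the average within-class spread."""
    m0, m1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
    return np.linalg.norm(m1 - m0) / (0.5 * (Z[y == 0].std() + Z[y == 1].std()))

for loop in (+1, -1):
    print(f"self-loop {loop:+d}: class separation {separation(filtered(X, A, loop)):.2f}")
# On this heterophilic graph the neighbor average points toward the opposite class mean, so the
# negative self-loop reinforces the class signal while the positive self-loop partially cancels it.
```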
The stochastic block model (SBM) is a random graph model with planted clusters. It is widely employed as a canonical model to study clustering and community detection, and more generally provides a fertile ground to study the statistical and computational tradeoffs that arise in network and data sciences. This note surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery (a.k.a. detection). The main results discussed are the phase transition for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal distortion-SNR tradeoff for partial recovery, the learning of the SBM parameters, and the gap between information-theoretic and computational thresholds. The note also covers some of the algorithms developed in the quest of achieving the limits, in particular two-round algorithms via graph-splitting, semi-definite programming, linearized belief propagation, and classical and nonbacktracking spectral methods. A few open problems are also discussed.
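For readers who want to experiment with the model, here is a minimal sketch of the symmetric two-community SBM in the sparse regime, together with a check of the Kesten-Stigum condition $(a-b)^2 > 2(a+b)$ under which weak recovery (detection) is possible; the parameter values are illustrative.

```python
import numpy as np

def sample_sbm(n, a, b, rng):
    """Symmetric two-community SBM: within-community edge prob a/n, between-community b/n."""
    labels = rng.integers(0, 2, size=n)
    probs = np.where(labels[:, None] == labels[None, :], a / n, b / n)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    return labels, (upper | upper.T).astype(int)

rng = np.random.default_rng(0)
a, b, n = 12.0, 2.0, 2000
labels, A = sample_sbm(n, a, b, rng)
print("mean degree:", A.sum() / n)                           # concentrates around (a + b) / 2
print("above the Kesten-Stigum threshold:", (a - b) ** 2 > 2 * (a + b))
```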
The stochastic block model (SBM) is a random graph model in which different groups of vertices connect differently. It is widely employed as a canonical model to study clustering and community detection, and provides a fertile ground to study the information-theoretic and computational tradeoffs that arise in combinatorial statistics and, more generally, in data science. This monograph surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational regimes, and for various recovery requirements such as exact, partial and weak recovery. The main results discussed are the phase transition for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal SNR-mutual information tradeoff for partial recovery, and the gap between information-theoretic and computational thresholds. The monograph gives a principled derivation of the main algorithms developed in the quest of achieving the limits, in particular graph-splitting, semi-definite programming, (linearized) belief propagation, classical and nonbacktracking spectral methods, and graph powering. Extensions to other block models, such as geometric block models, and a few open problems are also discussed.
Pre-publication draft of a book to be published by Morgan & Claypool Publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.
Understanding how feature learning affects generalization is among the foremost goals of modern deep learning theory. Here, we study how the ability to learn representations affects the generalization performance of a simple class of models: deep Bayesian linear neural networks trained on unstructured Gaussian data. By comparing deep random feature models to deep networks in which all layers are trained, we provide a detailed characterization of the interplay between width, depth, data density, and prior mismatch. We show that both models display sample-wise double-descent behavior in the presence of label noise. Random feature models can also display model-wise double descent if there are narrow bottleneck layers, while deep networks do not show these divergences. Random feature models can have particular widths that are optimal for generalization at a given data density, while making neural networks as wide or as narrow as possible is always optimal. Moreover, we show that the leading-order corrections to the kernel-limit learning curve cannot distinguish between random feature models and deep networks in which all layers are trained. Taken together, our findings begin to elucidate how architectural details affect generalization performance in this simple class of deep regression models.
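The deep Bayesian networks themselves are not reproduced here, but the sample-wise double descent driven by label noise already appears in the shallowest special case, minimum-norm least squares on unstructured Gaussian data; the sketch below (with illustrative parameters) shows the test error blowing up near the interpolation point n = d and descending again beyond it.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_test, noise = 100, 2000, 0.5
w_star = rng.normal(size=d) / np.sqrt(d)                     # ground-truth linear teacher

def test_error(n_train):
    X = rng.normal(size=(n_train, d))
    y = X @ w_star + noise * rng.normal(size=n_train)
    w_hat = np.linalg.pinv(X) @ y                            # minimum-norm / least-squares solution
    X_te = rng.normal(size=(n_test, d))
    return np.mean((X_te @ w_hat - X_te @ w_star) ** 2)

for n in (25, 50, 90, 100, 110, 200, 400):
    print(f"n = {n:4d}   test MSE = {test_error(n):.3f}")
# The error peaks near n = d = 100 and descends again for larger n: sample-wise double descent.
```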
In recent years, several results in the supervised learning setting have suggested that classical statistical learning-theoretic measures, such as VC dimension, do not adequately explain the performance of deep learning models, prompting a slew of work in the infinite-width and iteration regimes. However, there is little theoretical explanation for the success of neural networks beyond the supervised setting. In this paper we argue that, under some distributional assumptions, classical learning-theoretic measures can sufficiently explain generalization of graph neural networks in the transductive setting. In particular, we provide a rigorous analysis of the performance of neural networks in the context of transductive inference, specifically by analyzing the generalization properties of graph convolutional networks for the problem of node classification. While VC dimension does result in trivial generalization error bounds in this setting as well, we show that transductive Rademacher complexity can explain the generalization properties of graph convolutional networks for the stochastic block model. We further use the generalization error bounds based on transductive Rademacher complexity to demonstrate the role of graph convolutions and network architectures in achieving smaller generalization error, and provide insights into when the graph structure can help learning. The findings of this paper could renew interest in studying the generalization of neural networks in terms of learning-theoretic measures, albeit in specific problems.
A key challenge in building a theoretical foundation for deep learning is the complex optimization dynamics of neural networks, arising from the high-dimensional interactions among a large number of network parameters. Such non-trivial dynamics lead to intriguing behaviors such as the phenomenon of "double descent" of the generalization error. The more commonly studied aspect of this phenomenon corresponds to model-wise double descent, where the test error exhibits a second descent as model complexity increases, beyond the classical U-shaped error curve. In this work, we investigate the origins of the less studied epoch-wise double descent, in which the test error undergoes a second descent as training time increases. By leveraging tools from statistical physics, we study a linear teacher-student setup that exhibits epoch-wise double descent similar to that in deep neural networks. In this setting, we derive closed-form analytical expressions for the evolution of the generalization error over training. We find that double descent can be attributed to distinct features being learned at different scales: as fast-learning features overfit, slower-learning features start to fit, resulting in a second descent of the test error. We validate our findings with numerical experiments, in which our theory accurately predicts the empirical results and remains consistent with observations in deep neural networks.
We consider increasingly complex models of matrix denoising and dictionary learning in the Bayes-optimal setting, in the challenging regime where the matrices to infer have a rank growing linearly with the system size. This is in contrast with most existing literature, which is concerned with the low-rank (i.e., constant-rank) regime. We first consider a class of rotationally invariant matrix denoising problems whose mutual information and minimum mean-square error are computable using standard techniques from random matrix theory. Next, we analyze the more challenging models of dictionary learning. To do so we introduce a novel combination of the replica method with random matrix theory, coined the spectral replica method. It allows us to conjecture variational formulas for the mutual information between the hidden representations and the noisy data of the dictionary learning problem, as well as for the overlaps quantifying the optimal reconstruction error. The proposed method reduces the number of degrees of freedom from $\Theta(N^2)$ (matrix entries) to $\Theta(N)$ (eigenvalues or singular values), and yields Coulomb-gas representations of the mutual information reminiscent of matrix models in physics. The main ingredients are the use of Harish-Chandra-Itzykson-Zuber spherical integrals, combined with a new replica-symmetric decoupling ansatz at the level of the probability distributions of the eigenvalues (or singular values) of certain overlap matrices.
A significant obstacle in the development of robust machine learning models is covariate shift, a form of distribution shift that occurs when the input distributions of the training and test sets differ while the conditional label distribution remains unchanged. Despite the prevalence of covariate shift in real-world applications, a theoretical understanding in the context of modern machine learning is still lacking. In this work, we examine the exact high-dimensional asymptotics of random feature regression under covariate shift and present a precise characterization of the limiting test error, bias, and variance in this setting. Our results motivate a natural partial order over covariate shifts that provides a sufficient condition for determining when the shift harms (or even helps) test performance. We find that overparameterized models exhibit enhanced robustness to covariate shift, providing one of the first theoretical explanations for this intriguing phenomenon. Additionally, our analysis reveals an exact linear relationship between in-distribution and out-of-distribution generalization performance, offering an explanation for this surprising recent empirical observation.
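A rough numerical counterpart (not the paper's exact asymptotic setting) is random feature ridge regression trained on one Gaussian input distribution and evaluated both in-distribution and after an anisotropic covariate shift; the feature map, the shift, and all parameter values below are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, n_train, n_test, lam = 50, 400, 300, 2000, 1e-3
W = rng.normal(size=(p, d)) / np.sqrt(d)                     # fixed random first-layer weights
beta = rng.normal(size=d) / np.sqrt(d)                       # linear target function

def features(X):
    return np.maximum(X @ W.T, 0.0)                          # ReLU random features

def fit_ridge(X, y):
    Phi = features(X)
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y)

X_tr = rng.normal(size=(n_train, d))
y_tr = X_tr @ beta + 0.1 * rng.normal(size=n_train)
a_hat = fit_ridge(X_tr, y_tr)

scales = np.ones(d); scales[: d // 5] = 3.0                  # inflate some input directions
X_id = rng.normal(size=(n_test, d))                          # in-distribution test inputs
X_od = rng.normal(size=(n_test, d)) * scales                 # covariate-shifted test inputs
for name, X_te in (("in-distribution    ", X_id), ("out-of-distribution", X_od)):
    mse = np.mean((features(X_te) @ a_hat - X_te @ beta) ** 2)
    print(f"{name} test MSE: {mse:.4f}")
```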
Graph convolutional networks (GCNs) are one of the most popular architectures for solving classification problems accompanied by graph information. We present a rigorous theoretical understanding of the effects of graph convolutions in multi-layer networks. We study these effects through the node classification problem of a non-linearly separable Gaussian mixture model coupled with a stochastic block model. First, we show that a single graph convolution expands the regime of the distance between the means at which multi-layer networks can classify the data by a factor of at least $1/\sqrt[4]{\mathbb{E}[\mathrm{deg}]}$, where $\mathbb{E}[\mathrm{deg}]$ denotes the expected degree of a node. Second, we show that with a slightly stronger graph density, two graph convolutions improve this factor to at least $1/\sqrt[4]{n}$, where $n$ is the number of nodes in the graph. Finally, we provide both theoretical and empirical insights into the performance of graph convolutions placed in different combinations among the layers of a network, concluding that the performance is mutually similar for all combinations of placements. We present extensive experiments on both synthetic and real-world data to illustrate our results.
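A quick numerical illustration (not the paper's proof, and with illustrative parameters) of the underlying mechanism: one degree-normalized graph convolution on the Gaussian-mixture-plus-SBM data averages away much of the within-class noise, roughly a $1/\sqrt{\mathbb{E}[\mathrm{deg}]}$ reduction per coordinate, which is what allows classification at a smaller distance between the means.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 20
y = rng.integers(0, 2, size=n)
mu = np.zeros(d); mu[0] = 0.3                                # small distance between class means
X = (2 * y[:, None] - 1) * mu + rng.normal(size=(n, d))

p_in, p_out = 0.04, 0.004                                    # homophilic SBM, E[deg] around 45
P = np.where(y[:, None] == y[None, :], p_in, p_out)
A = np.triu(rng.random((n, n)) < P, 1)
A = (A | A.T).astype(float) + np.eye(n)                      # add self-loops
X_conv = A @ X / A.sum(1, keepdims=True)                     # one degree-normalized graph convolution

def snr(Z):
    gap = np.linalg.norm(Z[y == 1].mean(0) - Z[y == 0].mean(0))
    return gap / Z.std()

print("mean degree:", A.sum() / n)
print("signal-to-noise before convolution:", round(snr(X), 2))
print("signal-to-noise after convolution: ", round(snr(X_conv), 2))   # larger by roughly sqrt(E[deg])
```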
Graph Neural Networks (graph NNs) are a promising deep learning approach for analyzing graph-structured data. However, it is known that they do not improve (or sometimes worsen) their predictive performance as we pile up many layers and add non-linearity. To tackle this problem, we investigate the expressive power of graph NNs via their asymptotic behaviors as the layer size tends to infinity. Our strategy is to generalize the forward propagation of a Graph Convolutional Network (GCN), which is a popular graph NN variant, as a specific dynamical system. In the case of a GCN, we show that when its weights satisfy the conditions determined by the spectra of the (augmented) normalized Laplacian, its output exponentially approaches the set of signals that carry information of the connected components and node degrees only for distinguishing nodes. Our theory enables us to relate the expressive power of GCNs with the topological information of the underlying graphs inherent in the graph spectra. To demonstrate this, we characterize the asymptotic behavior of GCNs on the Erdős–Rényi graph. We show that when the Erdős–Rényi graph is sufficiently dense and large, a broad range of GCNs on it suffers from the "information loss" in the limit of infinite layers with high probability. Based on the theory, we provide a principled guideline for weight normalization of graph NNs. We experimentally confirm that the proposed weight scaling enhances the predictive performance of GCNs in real data.
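The sketch below illustrates the mechanism under simplifying assumptions (an Erdős–Rényi graph and identity-like weight matrices scaled by a factor s, rather than trained GCN weights): when s times the second-largest eigenvalue magnitude of the augmented normalized adjacency is below one, repeated propagation drives the node signals exponentially fast toward the subspace that carries only connected-component and degree information.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, s, n_layers = 400, 0.05, 0.9, 30
A = np.triu(rng.random((n, n)) < p, 1)
A = (A | A.T).astype(float)

A_aug = A + np.eye(n)                                        # augmented adjacency (self-loops)
d_inv_sqrt = 1.0 / np.sqrt(A_aug.sum(1))
P = d_inv_sqrt[:, None] * A_aug * d_inv_sqrt[None, :]        # augmented normalized adjacency

eigvals, eigvecs = np.linalg.eigh(P)
lam = np.sort(np.abs(eigvals))[-2]                           # second-largest eigenvalue magnitude
print("contraction factor s * lambda:", round(s * lam, 3))

u = eigvecs[:, np.argmax(eigvals)][:, None]                  # top eigenvector: degree information only
X = rng.normal(size=(n, 8))                                  # random node signals
for _ in range(n_layers):
    X = s * (P @ X)                                          # linear GCN layer with weight scale s
print("distance to the degree subspace after", n_layers, "layers:",
      round(float(np.linalg.norm(X - u @ (u.T @ X))), 8))
```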
Current deep neural networks are highly overparameterized (up to billions of connection weights) and nonlinear. Yet they can fit data almost perfectly through variants of gradient descent algorithms and achieve unexpected levels of prediction accuracy without overfitting. These are formidable results that defy predictions of statistical learning and pose conceptual challenges for non-convex optimization. In this paper, we use methods from the statistical physics of disordered systems to analyze the computational consequences of overparameterization in non-convex binary neural network models, trained on data generated from a structurally simpler but "hidden" network. As the number of connection weights increases, we follow the changes in the geometric structure of the different minima of the error loss function and relate them to learning and generalization performance. A first transition happens at the so-called interpolation point, when solutions begin to exist (perfect fitting becomes possible). This transition reflects the properties of typical solutions, which however correspond to sharp minima and are hard to sample. After a gap, a second transition occurs, with the discontinuous appearance of a different kind of "atypical" structure: wide regions of weight space that are particularly solution-dense and have good generalization properties. The two kinds of solutions coexist, with the typical ones being exponentially more numerous, but empirically we find that efficient algorithms sample the atypical, rare ones. This suggests that the atypical phase transition is the relevant one for learning. Numerical tests of observables suggested by the theory on realistic networks are consistent with this scenario.
We investigate the representation power of graph neural networks in the semi-supervised node classification task under heterophily or low homophily, i.e., in networks where connected nodes may have different class labels and dissimilar features. Many popular GNNs fail to generalize to this setting, and are even outperformed by models that ignore the graph structure (e.g., multilayer perceptrons). Motivated by this limitation, we identify a set of key designs (ego- and neighbor-embedding separation, higher-order neighborhoods, and combination of intermediate representations) that boost learning from the graph structure under heterophily. We combine them into a graph neural network, H2GCN, which we use as the base method to empirically evaluate the effectiveness of the identified designs. Going beyond the traditional benchmarks with strong homophily, our empirical analysis shows that the identified designs increase the accuracy of GNNs by up to 40% and 27% over models without them on synthetic and real networks with heterophily, respectively, and yield competitive performance under homophily.
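A minimal sketch of the three identified designs, using plain degree-normalized averaging in place of the trained H2GCN layers; A is a symmetric adjacency matrix, X the node-feature matrix, and everything else is illustrative.

```python
import numpy as np

def h2gcn_style_embedding(A, X, rounds=2):
    """Ego/neighbor separation, strict 1- and 2-hop aggregation, concatenation of all rounds."""
    n = A.shape[0]
    A1 = A * (1 - np.eye(n))                                 # strict 1-hop neighborhood
    A2 = ((A1 @ A1) > 0) * (A1 == 0) * (1 - np.eye(n))       # strict 2-hop neighborhood
    def agg(M, H):                                           # degree-normalized neighbor average
        return M @ H / np.maximum(M.sum(1, keepdims=True), 1.0)
    reps, H = [X], X
    for _ in range(rounds):
        H = np.concatenate([agg(A1, H), agg(A2, H)], axis=1)  # neighbors only: no ego mixing
        reps.append(H)
    return np.concatenate(reps, axis=1)                      # combine intermediate representations

# Usage sketch: feed the embedding to any downstream node classifier.
rng = np.random.default_rng(0)
A = np.triu(rng.random((100, 100)) < 0.05, 1); A = (A | A.T).astype(float)
X = rng.normal(size=(100, 16))
print(h2gcn_style_embedding(A, X).shape)                     # (100, 16 + 32 + 64)
```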
Teacher-student models provide a framework in which the typical-case performance of high-dimensional supervised learning can be described in closed form. The assumption of Gaussian i.i.d. input data underlying the canonical teacher-student model may, however, be perceived as too restrictive to capture the behavior of realistic data sets. In this paper we introduce a Gaussian covariate generalization of the model, where the teacher and student can act on different spaces, generated with fixed, but generic feature maps. While still solvable in closed form, this generalization is able to capture the learning curves of a broad range of realistic data sets, thus redeeming the potential of the teacher-student framework. Our contribution is two-fold: first, we prove a rigorous formula for the asymptotic training loss and generalization error. Second, we present a number of situations where the learning curves of the model capture those of realistic data sets learned with kernel regression and classification, with out-of-the-box feature maps such as random projections or scattering transforms, or with pre-learned features, such as those learned by training multi-layer neural networks. We discuss both the power and the limitations of the framework.
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. Design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which emerged as a de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair experimental evaluation protocol and choose a proper collection of datasets. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions, which shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an efficient tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
This work in progress aims to provide a unified introduction to statistical learning, building up slowly from classical models such as the GMM and HMM to modern neural networks such as the VAE and diffusion models. There are many internet resources today that explain this or that new machine-learning algorithm in isolation, but they do not (and cannot, in so short a space) connect these algorithms with each other or with the classical literature on statistical models out of which the modern algorithms emerged. Also conspicuously lacking is a single notational system that, although unobjectionable to those already familiar with the material (like the author of these posts), poses a significant barrier to newcomers. My aim, likewise, is to assimilate the various models (as much as possible) into a single framework for inference and learning, showing how (and why) one model can be changed into another with minimal alteration (some of these connections are novel, others are from the literature). Some background is of course necessary. I assume the reader is familiar with basic multivariable calculus, probability and statistics, and linear algebra. The goal of this book is certainly not completeness, but rather a more-or-less straight-line path from the basics to the extremely powerful new models of the last decade. The goal, then, is to complement, not replace, comprehensive texts such as Bishop's \emph{Pattern Recognition and Machine Learning}, which is now 15 years old.
Generalized linear models for multi-class classification problems are among the fundamental building blocks of modern machine learning tasks. In this manuscript, we characterize the learning of a mixture of $K$ Gaussians with generic means and covariances via empirical risk minimization (ERM) with any convex loss and regularization. In particular, we prove exact asymptotics characterizing the ERM estimator in high dimensions, extending several previous results about Gaussian mixture classification in the literature. We exemplify our result in two tasks of interest in statistical learning: a) classification of a mixture with sparse means, where we study the efficiency of the $\ell_1$ penalty with respect to $\ell_2$; b) max-margin multi-class classification, where we characterize a phase transition for the multi-class logistic maximum-likelihood estimator with $K > 2$ classes. Finally, we discuss how our theory can be applied beyond the scope of synthetic data, showing that in different regimes Gaussian mixtures closely capture the learning curves of classification tasks on real data sets.
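A small synthetic illustration of task (a), not the asymptotic theory itself: classifying a two-Gaussian mixture with sparse means and comparing an $\ell_1$ against an $\ell_2$ penalty; the use of scikit-learn's LogisticRegression and all parameter values are assumptions of the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, k_sparse, n_train, n_test = 500, 10, 200, 5000
mu = np.zeros(d); mu[:k_sparse] = 1.0                        # sparse mean: few informative coordinates

def sample(n):
    y = rng.integers(0, 2, size=n)
    X = (2 * y[:, None] - 1) * mu + rng.normal(size=(n, d))
    return X, y

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(n_test)
for penalty, solver in (("l1", "liblinear"), ("l2", "lbfgs")):
    clf = LogisticRegression(penalty=penalty, solver=solver, C=0.1, max_iter=2000)
    clf.fit(X_tr, y_tr)
    print(f"{penalty} penalty: test accuracy {clf.score(X_te, y_te):.3f}")
# With few samples relative to the dimension, the l1 penalty should exploit the sparsity of the means.
```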
A central challenge of building more powerful Graph Neural Networks (GNNs) is the oversmoothing phenomenon, where increasing the network depth leads to homogeneous node representations and thus worse classification performance. While previous works have only demonstrated that oversmoothing is inevitable when the number of graph convolutions tends to infinity, in this paper, we precisely characterize the mechanism behind the phenomenon via a non-asymptotic analysis. Specifically, we distinguish between two different effects when applying graph convolutions -- an undesirable mixing effect that homogenizes node representations in different classes, and a desirable denoising effect that homogenizes node representations in the same class. By quantifying these two effects on random graphs sampled from the Contextual Stochastic Block Model (CSBM), we show that oversmoothing happens once the mixing effect starts to dominate the denoising effect, and the number of layers required for this transition is $O(\log N/\log (\log N))$ for sufficiently dense graphs with $N$ nodes. We also extend our analysis to study the effects of Personalized PageRank (PPR) on oversmoothing. Our results suggest that while PPR mitigates oversmoothing at deeper layers, PPR-based architectures still achieve their best performance at a shallow depth and are outperformed by the graph convolution approach on certain graphs. Finally, we support our theoretical results with numerical experiments, which further suggest that the oversmoothing phenomenon observed in practice may be exacerbated by the difficulty of optimizing deep GNN models.
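As a rough numerical companion (illustrative parameters, not the paper's non-asymptotic bounds), the sketch below tracks the two effects on a CSBM graph: with each degree-normalized graph convolution the within-class spread shrinks (denoising) but so does the gap between the class means (mixing), and with enough layers the node representations homogenize.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p_in, p_out = 2000, 16, 0.03, 0.005
y = rng.integers(0, 2, size=n)
mu = np.zeros(d); mu[0] = 1.0
X = (2 * y[:, None] - 1) * mu + rng.normal(size=(n, d))      # CSBM node features

P = np.where(y[:, None] == y[None, :], p_in, p_out)
A = np.triu(rng.random((n, n)) < P, 1)
A = (A | A.T).astype(float) + np.eye(n)
A_norm = A / A.sum(1, keepdims=True)                         # degree-normalized convolution operator

H = X
for layer in range(1, 16):
    H = A_norm @ H                                           # one graph convolution per layer
    gap = np.linalg.norm(H[y == 1].mean(0) - H[y == 0].mean(0))            # mixing: between classes
    spread = 0.5 * (H[y == 1].std(0).mean() + H[y == 0].std(0).mean())     # denoising: within classes
    print(f"layer {layer:2d}   class-mean gap {gap:.4f}   within-class spread {spread:.4f}")
# The within-class spread collapses quickly; the class-mean gap decays more slowly but also tends
# toward zero, so sufficiently deep stacks leave nearly identical node representations (oversmoothing).
```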
We investigate the analogy between the renormalization group (RG) and deep neural networks, in which successive layers of neurons are analogous to successive steps along the RG. In particular, we quantify the flow of information by explicitly computing the relative entropy, or Kullback-Leibler divergence, in both the one- and two-dimensional Ising models under decimation RG, as well as in a feedforward neural network as a function of depth. We observe qualitatively identical behavior: a monotonic increase to a parameter-dependent asymptotic value. On the quantum field theory side, the monotonic increase confirms the connection between the relative entropy and the c-theorem. For the neural networks, the asymptotic behavior may have implications for various information-maximization methods in machine learning, as well as for disentangling compactness and generalizability. Furthermore, while both the two-dimensional Ising model and the random neural networks we consider exhibit non-trivial critical points, the relative entropy appears insensitive to the phase structure of either system. In this sense, more refined probes are required in order to fully elucidate the flow of information in these models.
Graph neural networks (GNNs) have shown superiority in many prediction tasks over graphs due to their impressive capability of capturing nonlinear relations in graph-structured data. However, for node classification tasks, often only a marginal improvement of GNNs over their linear counterparts is observed. Previous works provide very little understanding of this phenomenon. In this work, we resort to Bayesian learning to deeply investigate the role of non-linearity in GNNs for node classification tasks. Given a graph generated from the statistical model CSBM, we observe that the maximum a posteriori estimate of a node label, given its own and its neighbors' attributes, consists of two types of non-linearity: a possibly nonlinear transformation of the node attributes and a ReLU-activated feature aggregation from neighbors. The latter surprisingly matches the type of non-linearity used in many GNN models. By further imposing Gaussian assumptions on the node attributes, we prove that the superiority of these ReLU activations is significant only when the node attributes are more informative than the graph structure, which nicely matches many previous empirical observations. A similar argument holds when there is a distribution shift of node attributes between the training and testing datasets. Finally, we verify our theory on both synthetic and real-world networks.