Graph Neural Networks (GNNs) have demonstrated great success in representing graph-structured data. The layer-wise graph convolution in GNNs is shown to be powerful at capturing graph topology. During this process, GNNs are usually guided by pre-defined kernels such as the Laplacian matrix, adjacency matrix, or their variants. However, the adoption of a pre-defined kernel may restrain the generalization to different graphs: a mismatch between the graph and the kernel leads to sub-optimal performance. For example, a GNN that focuses on low-frequency information may not achieve satisfactory performance when high-frequency information is significant for a graph, and vice versa. To solve this problem, in this paper we propose a novel framework, namely the Adaptive Kernel Graph Neural Network (AKGNN), which learns to adapt to the optimal graph kernel in a unified manner at the first attempt. In the proposed AKGNN, we first design a data-driven graph kernel learning mechanism, which adaptively modulates the balance between all-pass and low-pass filters by modifying the maximal eigenvalue of the graph Laplacian. Through this process, AKGNN learns the optimal threshold between high- and low-frequency signals to relieve the generalization problem. Later, we further reduce the number of parameters by a parameterization trick and enhance the expressive power by a global readout function. Extensive experiments are conducted on acknowledged benchmark datasets, and promising results demonstrate the outstanding performance of our proposed AKGNN in comparison with state-of-the-art GNNs. The source code is publicly available at https://github.com/jumxglhf/akgnn.
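A minimal sketch of the adaptive-kernel idea described in this abstract, under the assumption that a learnable per-layer maximal eigenvalue trades off an identity (all-pass) term against a normalized-adjacency (low-pass) term; the layer and parameter names are illustrative, not the authors' implementation:

```python
import torch
import torch.nn as nn

class AdaptiveKernelLayer(nn.Module):
    """Illustrative layer: a learned lambda_max balances all-pass and low-pass filtering."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.phi = nn.Parameter(torch.zeros(1))   # lambda_max = 1 + relu(phi) >= 1
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # adj_norm: dense D^{-1/2}(A+I)D^{-1/2}, shape [N, N]
        lam = 1.0 + torch.relu(self.phi)          # adaptively learned maximal eigenvalue
        all_pass = 2.0 - 2.0 / lam                # weight on the identity (all-pass) part
        low_pass = 2.0 / lam                      # weight on the aggregation (low-pass) part
        h = all_pass * x + low_pass * (adj_norm @ x)
        return torch.relu(self.lin(h))
```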
Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.
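A minimal sketch of the simplification described above, under standard assumptions (dense adjacency, symmetric normalization with self-loops); it is not the paper's released code:

```python
import numpy as np

def sgc_features(adj: np.ndarray, x: np.ndarray, k: int = 2) -> np.ndarray:
    """Return S^k X with S = D^{-1/2} (A + I) D^{-1/2}, i.e. the fixed low-pass filter."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    s = d_inv_sqrt @ a_hat @ d_inv_sqrt
    out = x
    for _ in range(k):                 # collapse K propagation steps, no nonlinearities
        out = s @ out
    return out  # feed into any linear classifier, e.g. sklearn's LogisticRegression
```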
Graph convolutional networks have become indispensable for deep learning on graph-structured data. Most existing graph convolutional networks share two big shortcomings. First, they are essentially low-pass filters, and thus ignore the potentially useful middle and high frequency bands of graph signals. Second, the bandwidth of existing graph convolutional filters is fixed: the parameters of a graph convolutional filter only transform the graph input without changing the curvature of the filter function. In reality, unless we have expert domain knowledge, we are uncertain whether the frequencies at a certain point should be retained or cut off. In this paper, we propose an Automatic Graph Convolutional Network (AutoGCN) to capture the full spectrum of graph signals and automatically update the bandwidth of the graph convolutional filters. While it is based on graph spectral theory, our AutoGCN is also localized in space and has a spatial form. Experimental results show that AutoGCN achieves significant improvements over baseline methods that act only as low-pass filters.
Graph Convolutional Networks (GCNs) and their variants were designed for unsigned graphs containing only positive links. Many existing GCNs are derived from the spectral-domain analysis of signals on (unsigned) graphs; in each convolutional layer they perform a low-pass filtering of the input features followed by a learnable linear transformation. Extending them to signed graphs with positive and negative links raises multiple issues, including computational irregularities and ambiguous frequency interpretation, which make the design of computationally efficient low-pass filters challenging. In this paper, we address these issues via a spectral analysis of signed graphs and propose two distinct graph neural networks, one that preserves only low-frequency information and one that also preserves high-frequency information. We further introduce the magnetic signed Laplacian and use its eigen-decomposition for the spectral analysis of directed signed graphs. We test our methods on node classification and link sign prediction tasks on signed graphs and achieve state-of-the-art performance.
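A hedged sketch of one way a Hermitian "magnetic signed" operator can be formed, following the usual magnetic-Laplacian construction: edge signs enter through the symmetrized term and edge direction through a complex phase. The paper's exact definition may differ in its details:

```python
import numpy as np

def magnetic_signed_laplacian(a: np.ndarray, q: float = 0.25) -> np.ndarray:
    """a: directed signed adjacency matrix; returns a Hermitian Laplacian-like operator."""
    a_sym = (a + a.T) / 2.0                                   # symmetrized, keeps edge signs
    theta = 2.0 * np.pi * q * (np.abs(a) - np.abs(a).T)       # direction-dependent phase
    h = a_sym * np.exp(1j * theta)                            # Hermitian "magnetic" adjacency
    d = np.diag(np.abs(a_sym).sum(axis=1))                    # degrees from magnitudes
    return d - h

# eigenvalues are real because the operator is Hermitian
lap = magnetic_signed_laplacian(np.array([[0.0, 1.0], [-1.0, 0.0]]))
evals = np.linalg.eigvalsh(lap)
```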
Recent works on transformer-based graph models demonstrate the inadequacy of the vanilla Transformer for graph representation learning. To understand this inadequacy, one needs to investigate whether a spectral analysis of the Transformer reveals insights into its expressive power. Similar studies have already established that a spectral analysis of graph neural networks (GNNs) provides an additional perspective on their expressiveness. In this work, we systematically study and establish the connection between the spatial and spectral domains in the Transformer setting. We further provide a theoretical analysis and prove that the spatial attention mechanism in the Transformer cannot effectively capture the desired frequency response, and hence inherently limits its expressiveness in spectral space. Therefore, we propose FeTA, a framework that performs attention over the entire graph spectrum (i.e., the actual frequency components of the graph), analogous to attention in spatial space. Empirical results show that FeTA provides consistent performance gains over the vanilla Transformer across all tasks on standard benchmarks, and can easily be extended to GNN-based models with low-pass characteristics (e.g., GAT).
Designing spectral convolutional networks is a challenging problem in graph learning. ChebNet, one of the early attempts, approximates the spectral graph convolutions using Chebyshev polynomials. GCN simplifies ChebNet by utilizing only the first two Chebyshev polynomials while still outperforming it on real-world datasets. GPR-GNN and BernNet demonstrate that the Monomial and Bernstein bases also outperform the Chebyshev basis in terms of learning the spectral graph convolutions. Such conclusions are counter-intuitive in the field of approximation theory, where it is established that the Chebyshev polynomial achieves the optimum convergent rate for approximating a function. In this paper, we revisit the problem of approximating the spectral graph convolutions with Chebyshev polynomials. We show that ChebNet's inferior performance is primarily due to illegal coefficients learnt by ChebNet approximating analytic filter functions, which leads to over-fitting. We then propose ChebNetII, a new GNN model based on Chebyshev interpolation, which enhances the original Chebyshev polynomial approximation while reducing the Runge phenomenon. We conducted an extensive experimental study to demonstrate that ChebNetII can learn arbitrary graph convolutions and achieve superior performance in both full- and semi-supervised node classification tasks. Most notably, we scale ChebNetII to the billion-scale graph ogbn-papers100M, showing that spectral-based GNNs have superior performance. Our code is available at https://github.com/ivam-he/ChebNetII.
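A hedged sketch of the Chebyshev-interpolation idea (illustrative names; assumes a scaled Laplacian with spectrum in [-1, 1] and at least two interpolation values): the filter's values at the Chebyshev nodes are the learnable quantities, and the expansion coefficients are recovered by interpolation, which tames the Runge phenomenon mentioned above:

```python
import numpy as np

def cheb_interp_filter(l_hat: np.ndarray, x: np.ndarray, gamma: np.ndarray) -> np.ndarray:
    """gamma: learned filter values at the K+1 Chebyshev nodes (K >= 1)."""
    k = len(gamma) - 1
    nodes = np.cos((np.arange(k + 1) + 0.5) * np.pi / (k + 1))          # Chebyshev points
    # interpolation: w_j = 2/(K+1) * sum_i gamma_i * T_j(node_i), with w_0 halved
    w = np.array([2.0 / (k + 1) * np.sum(gamma * np.cos(j * np.arccos(nodes)))
                  for j in range(k + 1)])
    w[0] *= 0.5
    # evaluate sum_j w_j * T_j(L_hat) x via the three-term recurrence
    t_prev, t_curr = x, l_hat @ x
    out = w[0] * t_prev + w[1] * t_curr
    for j in range(2, k + 1):
        t_prev, t_curr = t_curr, 2.0 * (l_hat @ t_curr) - t_prev
        out += w[j] * t_curr
    return out
```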
Graph convolutional networks (GCNs) are a powerful deep learning approach for graph-structured data. Recently, GCNs and subsequent variants have shown superior performance in various application areas on real-world datasets. Despite their success, most of the current GCN models are shallow, due to the over-smoothing problem. In this paper, we study the problem of designing and analyzing deep graph convolutional networks. We propose GCNII, an extension of the vanilla GCN model with two simple yet effective techniques: Initial residual and Identity mapping. We provide theoretical and empirical evidence that the two techniques effectively relieve the problem of over-smoothing. Our experiments show that the deep GCNII model outperforms the state-of-the-art methods on various semi- and full-supervised tasks. Code is available at https://github.com/chennnM/GCNII.
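A minimal sketch of one layer combining the two techniques named above (initial residual and identity mapping); the dense normalized adjacency and the simple decay schedule of the identity-mapping strength are simplifying assumptions, not the released code:

```python
import torch
import torch.nn as nn

class GCNIILayer(nn.Module):
    def __init__(self, dim, layer_index, alpha=0.1, lam=0.5):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=False)
        self.alpha = alpha
        # identity-mapping strength shrinks with depth (the paper uses beta_l = log(lam/l + 1))
        self.beta = lam / layer_index

    def forward(self, h, h0, adj_norm):
        support = (1 - self.alpha) * (adj_norm @ h) + self.alpha * h0   # initial residual
        out = (1 - self.beta) * support + self.beta * self.w(support)   # identity mapping
        return torch.relu(out)
```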
Graph learning aims to integrate node content with graph structure to learn node/graph representations. However, it has been found that many existing graph learning methods do not work well on data with a high level of heterophily, i.e., a large proportion of edges connecting nodes with different class labels. Recent efforts to address this issue focus on improving the message passing mechanism. However, it remains unclear whether heterophily truly hurts the performance of graph neural networks (GNNs). The key is to unfold the relationship between a node and its immediate neighbors, e.g., whether they are heterophilous or homophilous. From this perspective, we here study the role of a node's representation before/after the relationships between connected nodes are revealed. In particular, we propose an end-to-end framework that both learns the type of each edge (i.e., heterophilous/homophilous) and exploits the edge-type information to improve the expressiveness of graph neural networks. We implement this framework in two different ways. Specifically, to avoid passing messages over heterophilous edges, we can optimize the graph structure by removing the heterophilous edges identified by an edge classifier. Alternatively, the information about the existence of heterophilous neighbors can be exploited for feature learning; accordingly, a hybrid message passing approach is designed to aggregate homophilous neighbors and diversify heterophilous neighbors based on the edge classification, as sketched below. Extensive experiments demonstrate that the proposed framework significantly improves the performance of GNNs on multiple datasets across the full spectrum of homophily levels.
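A hedged sketch of the hybrid message passing variant described above, with illustrative names and a simplified edge classifier; it is one plausible reading of "aggregate homophilous neighbors, diversify heterophilous neighbors", not the paper's implementation:

```python
import torch
import torch.nn as nn

class HybridMessagePassing(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.edge_clf = nn.Sequential(nn.Linear(2 * dim, 1), nn.Sigmoid())  # P(edge is homophilous)
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, edge_index):
        src, dst = edge_index                                        # each of shape [E]
        p_homo = self.edge_clf(torch.cat([x[src], x[dst]], dim=-1))  # [E, 1]
        # aggregate when the edge looks homophilous, push away when it looks heterophilous
        msg = p_homo * x[src] - (1.0 - p_homo) * x[src]
        agg = torch.zeros_like(x).index_add_(0, dst, msg)            # sum messages per node
        return torch.relu(self.lin(x + agg))
```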
Graph neural networks (GNNs) have demonstrated superior performance for semi-supervised node classification on graphs, as a result of their ability to exploit node features and topological information simultaneously. However, most GNNs implicitly assume that the labels of nodes and their neighbors in a graph are the same or consistent, which does not hold in heterophilic graphs, where the labels of linked nodes may differ. Hence, when the topology is non-informative for label prediction, ordinary GNNs may perform significantly worse than simply applying multi-layer perceptrons (MLPs) on each node. To tackle the above problem, we propose a new $p$-Laplacian based GNN model, termed $^p$GNN, whose message passing mechanism is derived from a discrete regularization framework and can be theoretically interpreted as an approximation of polynomial graph filters defined on the spectral domain of $p$-Laplacians. The spectral analysis shows that the new message passing mechanism works simultaneously as low-pass and high-pass filters, thus making $^p$GNNs effective on both homophilic and heterophilic graphs. Empirical studies on real-world and synthetic datasets validate our findings and demonstrate that $^p$GNNs significantly outperform several state-of-the-art GNN architectures on heterophilic benchmarks while achieving competitive performance on homophilic benchmarks. Moreover, $^p$GNNs can adaptively learn aggregation weights and are robust to noisy edges.
Graph neural networks (GNNs) have boosted representation learning in various machine learning tasks. However, most existing GNNs that apply neighborhood aggregation usually perform poorly on heterophilic graphs, where adjacent nodes belong to different classes. In this paper, we show that in typical heterophilic graphs, edges may be directed, and whether to treat them as directed or undirected greatly affects the performance of GNN models. Furthermore, due to the limitation of heterophily, it is highly beneficial for a node to receive messages from similar nodes beyond its local neighborhood. These observations motivate us to develop a model that adaptively learns the directionality of the graph and exploits the underlying long-distance correlations between nodes. We first generalize the graph Laplacian to digraphs based on a proposed Feature-Aware PageRank algorithm, which simultaneously considers the graph directionality and long-distance feature similarity between nodes. The digraph Laplacian then defines a graph propagation matrix, leading to a model called {\em DiglacianGCN}. Based on this, we further exploit node proximity measured by commute times between nodes, in order to preserve the long-distance correlations of nodes at the topology level. Extensive experiments on ten datasets with different levels of homophily demonstrate the effectiveness of our method over existing solutions on the node classification task.
From the original, theoretically well-defined spectral graph convolution to the subsequent spatial message passing models, spatial locality (in the vertex domain) acts as a fundamental principle of most graph neural networks (GNNs). In spectral graph convolution, the filter is approximated by polynomials, where a $k$-order polynomial covers $k$-hop neighbors. In message passing, the various definitions of neighbors used in aggregation are in fact an extensive exploration of spatially local information. For learning node representations, topological distance seems necessary, since it characterizes the basic relations between nodes. However, for learning representations of the whole graph, is it still necessary? In this work, we show that such a principle is not needed, and that it hinders most existing GNNs from effectively encoding graph structure. By removing it, together with the limitation of polynomial filters, the resulting new architecture significantly boosts performance in learning graph representations. We also study the effect of the graph spectrum on the signal and interpret various existing improvements as different spectral smoothing techniques. This serves as a spatial understanding that quantitatively measures the effect of the spectrum on the input signal, in contrast to the well-known spectral understanding as high/low-pass filters. More importantly, it sheds light on developing powerful graph representation models.
Graph neural networks (GNNs) have shown remarkable performance on homophilic graph data while being far less impressive when handling non-homophilic graph data due to the inherent low-pass filtering property of GNNs. In general, since the real-world graphs are often a complex mixture of diverse subgraph patterns, learning a universal spectral filter on the graph from the global perspective as in most current works may still suffer from great difficulty in adapting to the variation of local patterns. On the basis of the theoretical analysis on local patterns, we rethink the existing spectral filtering methods and propose the \textbf{\underline{N}}ode-oriented spectral \textbf{\underline{F}}iltering for \textbf{\underline{G}}raph \textbf{\underline{N}}eural \textbf{\underline{N}}etwork (namely NFGNN). By estimating the node-oriented spectral filter for each node, NFGNN is provided with the capability of precise local node positioning via the generalized translated operator, thus discriminating the variations of local homophily patterns adaptively. Meanwhile, the utilization of re-parameterization brings a good trade-off between global consistency and local sensibility for learning the node-oriented spectral filters. Furthermore, we theoretically analyze the localization property of NFGNN, demonstrating that the signal after adaptive filtering is still positioned around the corresponding node. Extensive experimental results demonstrate that the proposed NFGNN achieves more favorable performance.
Increasing the depth of GCNs, which is expected to permit more expressiveness, has been shown to hurt performance, especially on node classification. The main reason lies in over-smoothing. The over-smoothing issue drives the output of GCNs toward a space that contains limited distinguishing information among nodes, leading to poor expressivity. Several works on refining the GCN architecture have been proposed, but it is still theoretically unknown whether these refinements are able to relieve over-smoothing. In this paper, we first theoretically analyze how general GCNs behave as the depth increases, including the generic GCN, GCN with bias, ResGCN, and APPNP. We find that all these models are characterized by a universal process: all nodes converge to a cuboid. Under this theorem, we propose DropEdge, which alleviates over-smoothing by randomly removing a certain number of edges at each training epoch. Theoretically, DropEdge either reduces the convergence speed of over-smoothing or relieves the information loss caused by dimension collapse. Experimental evaluations on simulated datasets visualize the difference in over-smoothing between different GCNs. Moreover, extensive experiments on several real benchmarks show that DropEdge consistently improves the performance of various shallow and deep GCNs.
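A minimal sketch of the edge-dropping step described above, assuming edges are stored as a 2 x E index tensor; resampling every epoch gives each epoch a different sparsified graph (illustrative, not the released implementation):

```python
import torch

def drop_edge(edge_index: torch.Tensor, drop_rate: float = 0.2) -> torch.Tensor:
    """Randomly keep a (1 - drop_rate) fraction of the edges for the current epoch."""
    num_edges = edge_index.size(1)
    keep_mask = torch.rand(num_edges) >= drop_rate
    return edge_index[:, keep_mask]

# usage: resample at every training epoch
# for epoch in range(num_epochs):
#     ei = drop_edge(full_edge_index, 0.2)
#     out = model(x, ei)
```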
This paper aims to provide a novel design of spectral graph neural networks with multi-scale framelet convolutions. In the spectral paradigm, spectral GNNs improve graph learning task performance by proposing various spectral filters in the spectral domain to capture both global and local graph structure information. Although existing spectral approaches show superior performance on some graphs, they suffer from a lack of flexibility and are fragile when the graph information is incomplete or perturbed. Our new framelet convolution incorporates filtering functions designed directly in the spectral domain to overcome these limitations. The proposed convolution shows great flexibility in cutting off spectral information and effectively mitigates the negative effect of noisy graph signals. Besides, to exploit the heterogeneity in real-world graph data, a heterogeneous graph neural network with our new framelet convolution provides a solution for embedding the intrinsic topological information of meta-paths with multi-level graph analysis. Extensive experiments are conducted on real-world heterogeneous graphs and homogeneous graphs under settings with noisy node features, achieving superior performance results.
Graph is a universal data structure that is widely used to organize data in the real world. Various real-world networks, such as traffic networks, social networks, and academic networks, can be represented by graphs. Recent years have witnessed the rapid development of representing the vertices of a network in a low-dimensional vector space, known as network representation learning. Representation learning can facilitate the design of new algorithms on graph data. In this survey, we conduct a comprehensive review of the current literature on network representation learning. Existing algorithms can be divided into three groups: shallow embedding models, heterogeneous network embedding models, and graph neural network based models. We review the state-of-the-art algorithms in each category and discuss the essential differences between these algorithms. One advantage of this survey is that we systematically study the theoretical foundations underlying the different categories of algorithms, which offers deep insights for better understanding the development of the network representation learning field.
Graph neural networks (GNNs) exploit signals from node features and the input graph topology to improve node classification task performance. However, these models tend to perform poorly on heterophilic graphs, where connected nodes have different labels. Recently, GNNs that work on graphs with varying levels of homophily have been proposed. Among them, models relying on polynomial graph filters have shown promise. We observe that the solutions of these polynomial graph filter models are also solutions of an overdetermined system of equations. This suggests that, in some instances, the model needs to learn a reasonably high-order polynomial. On investigation, we find that the proposed models are ineffective at learning such polynomials due to their design. To mitigate this issue, we perform an eigendecomposition of the graph and propose to learn multiple adaptive polynomial filters acting on different subsets of the spectrum. We theoretically and empirically demonstrate that our proposed model learns better filters, thereby improving classification accuracy. We study various aspects of our proposed model, including the dependency of performance on the number of eigencomponents utilized, the latent polynomial filters learned, and the performance of the individual polynomials on the node classification task. We further show that our model scales by evaluating it on large graphs. Our model achieves performance gains of up to 5% over state-of-the-art models and generally outperforms existing polynomial-filter-based approaches.
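A hedged sketch (illustrative names, not the paper's implementation) of the core idea above: eigendecompose the normalized Laplacian once, split the spectrum into subsets, and apply a separately parameterized low-order polynomial response to each subset:

```python
import numpy as np

def filtered_signal(lap_norm, x, coeffs_per_block, num_blocks=2):
    """coeffs_per_block: one list of polynomial coefficients per spectrum subset."""
    evals, evecs = np.linalg.eigh(lap_norm)                # spectrum of the normalized Laplacian
    x_spec = evecs.T @ x                                   # graph Fourier transform of features
    out_spec = np.zeros_like(x_spec)
    blocks = np.array_split(np.arange(len(evals)), num_blocks)
    for block, coeffs in zip(blocks, coeffs_per_block):
        lam = evals[block]
        gain = sum(c * lam**k for k, c in enumerate(coeffs))   # polynomial response on this subset
        out_spec[block] = gain[:, None] * x_spec[block]
    return evecs @ out_spec                                # back to the vertex domain
```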
Spectral graph neural networks are a kind of graph neural network (GNN) based on graph signal filters. Some models able to learn arbitrary spectral filters have emerged recently. However, few works analyze the expressive power of spectral GNNs. This paper theoretically studies the expressive power of spectral GNNs. We first prove that even spectral GNNs without nonlinearity can produce arbitrary graph signals, and give two conditions for reaching universality: 1) no multiple eigenvalues of the graph Laplacian and 2) no missing frequency components in the node features. We also establish a connection between the expressive power of spectral GNNs and the graph isomorphism (GI) test, which is often used to characterize the expressive power of spatial GNNs. Moreover, we study the difference in empirical performance among different spectral GNNs with the same expressive power from an optimization perspective, and motivate the use of an orthogonal basis whose weight function corresponds to the graph signal density in the spectrum. Inspired by the analysis, we propose JacobiConv, which uses the Jacobi basis due to its orthogonality and its flexibility to adapt to a wide range of weight functions. JacobiConv discards nonlinearity while outperforming all baselines on both synthetic and real-world datasets.
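A hedged sketch of filtering a signal with a learned combination of Jacobi polynomial bases; for clarity it works in the spectral domain via an explicit eigendecomposition, whereas a practical model would evaluate the polynomials of the propagation matrix by recurrence. The names and the (a, b) choice are illustrative, not the released JacobiConv code:

```python
import numpy as np
from scipy.special import eval_jacobi

def jacobi_filter(adj_norm, x, coeffs, a=1.0, b=1.0):
    """coeffs: learned combination weights for the Jacobi bases P_k^{(a,b)}."""
    evals, evecs = np.linalg.eigh(adj_norm)                    # spectrum assumed in [-1, 1]
    gain = sum(c_k * eval_jacobi(k, a, b, evals)               # learned filter response
               for k, c_k in enumerate(coeffs))
    return evecs @ (gain[:, None] * (evecs.T @ x))             # filter in the spectral domain
```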
The core operation of current Graph Neural Networks (GNNs) is the aggregation enabled by the graph Laplacian or message passing, which filters the neighborhood information of nodes. Though effective for various tasks, in this paper, we show that they are potentially a problematic factor underlying all GNN models for learning on certain datasets, as they force the node representations similar, making the nodes gradually lose their identity and become indistinguishable. Hence, we augment the aggregation operations with their dual, i.e. diversification operators that make the node more distinct and preserve the identity. Such augmentation replaces the aggregation with a two-channel filtering process that, in theory, is beneficial for enriching the node representations. In practice, the proposed two-channel filters can be easily patched on existing GNN methods with diverse training strategies, including spectral and spatial (message passing) methods. In the experiments, we observe desired characteristics of the models and significant performance boost upon the baselines on 9 node classification tasks.
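A minimal sketch of the two-channel filtering described above, pairing the usual low-pass aggregation with a high-pass diversification channel; the names are illustrative, not the paper's code:

```python
import torch
import torch.nn as nn

class TwoChannelLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_low = nn.Linear(in_dim, out_dim)
        self.lin_high = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        low = adj_norm @ x          # aggregation: smooths a node toward its neighbors
        high = x - adj_norm @ x     # diversification: emphasizes how a node differs from them
        return torch.relu(torch.cat([self.lin_low(low), self.lin_high(high)], dim=-1))
```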
Graph neural networks have shown significant success in the field of graph representation learning. Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations. Nevertheless, one layer of these neighborhood aggregation methods only considers immediate neighbors, and the performance decreases when going deeper to enable larger receptive fields. Several recent studies attribute this performance deterioration to the over-smoothing issue, which states that repeated propagation makes node representations of different classes indistinguishable. In this work, we study this observation systematically and develop new insights towards deeper graph neural networks. First, we provide a systematical analysis on this issue and argue that the key factor compromising the performance significantly is the entanglement of representation transformation and propagation in current graph convolution operations. After decoupling these two operations, deeper graph neural networks can be used to learn graph node representations from larger receptive fields. We further provide a theoretical analysis of the above observation when building very deep models, which can serve as a rigorous and gentle description of the over-smoothing issue. Based on our theoretical and empirical analysis, we propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields. A set of experiments on citation, coauthorship, and co-purchase datasets have confirmed our analysis and insights and demonstrated the superiority of our proposed methods.
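A minimal sketch of decoupling transformation from propagation as argued above: transform features once with an MLP, propagate the result K times, and adaptively gate the per-hop representations. The names and dimensions are illustrative, not the released DAGNN code:

```python
import torch
import torch.nn as nn

class DecoupledGNN(nn.Module):
    def __init__(self, in_dim, num_classes, k=10):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))
        self.hop_score = nn.Linear(num_classes, 1)    # learns how much each hop contributes
        self.k = k

    def forward(self, x, adj_norm):
        z = self.mlp(x)                               # transformation (no propagation yet)
        hops = [z]
        for _ in range(self.k):                       # propagation (no transformation)
            hops.append(adj_norm @ hops[-1])
        h = torch.stack(hops, dim=1)                  # [N, K+1, C]
        s = torch.sigmoid(self.hop_score(h))          # adaptive per-hop gates
        return (s * h).sum(dim=1)                     # adaptively combined receptive fields
```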
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.