Neural message passing is the basic feature-extraction unit for graph-structured data; it accounts for the influence of neighboring node features as information propagates from one layer to the next. We model this process by an interacting particle system with attractive and repulsive forces, together with the Allen-Cahn force arising in the modeling of phase transitions. The system is a reaction-diffusion process that can separate the particles into distinct clusters. This yields Allen-Cahn message passing (ACMP) for graph neural networks, in which the numerical iteration of the system's solution constitutes the message-passing propagation. The mechanism behind ACMP is the phase transition of the particles, which enables the formation of multiple clusters and thereby node classification by GNN prediction. ACMP can push the network depth to hundreds of layers, with a theoretically proven strict lower bound on the Dirichlet energy. It therefore provides a deep GNN model that avoids the common problem of oversmoothing. Experiments on various real-world node classification datasets with high homophily difficulty show that GNNs with ACMP achieve state-of-the-art performance without decay of the Dirichlet energy.
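To make the mechanism concrete, here is a minimal numerical sketch of one such reaction-diffusion update. The function name `acmp_step` and the coefficients `tau`, `beta`, `delta` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def acmp_step(X, A, tau=0.1, beta=0.5, delta=1.0):
    """One Euler step of an Allen-Cahn-style message-passing update.

    X: (n, d) node features; A: (n, n) adjacency matrix.
    The coupling term mixes attraction (mean of neighbors) with a
    repulsive self term (-beta * X); the Allen-Cahn force
    delta * X * (1 - X**2) drives each feature toward the double-well
    minima at +/-1, which is what separates particles into clusters.
    """
    deg = A.sum(axis=1, keepdims=True).clip(min=1.0)
    coupling = A @ X / deg - beta * X            # attractive/repulsive interaction
    allen_cahn = delta * X * (1.0 - X ** 2)      # phase-transition (double-well) force
    return X + tau * (coupling + allen_cahn)

# Toy usage: a 4-node path graph with 2-d features.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
X = 0.1 * np.random.randn(4, 2)
for _ in range(200):
    X = acmp_step(X, A)
print(np.round(X, 2))   # entries settle near the wells (shifted slightly by the coupling)
```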
We propose Graph-Coupled Oscillator Networks (GraphCON), a novel framework for learning on graphs. It is based on discretizations of a second-order system of ordinary differential equations (ODEs) that models a network of nonlinearly controlled and damped oscillators, coupled via the adjacency structure of the underlying graph. The flexibility of our framework permits any basic GNN layer (e.g., convolutional or attentional) as the coupling function, from which a multi-layer deep neural network is built up via the dynamics of the proposed ODEs. We relate the oversmoothing problem commonly encountered in GNNs to the stability of steady states of the underlying ODE, and show that zero-Dirichlet-energy steady states are not stable for our proposed ODEs. This indicates that the proposed framework mitigates the oversmoothing problem. Moreover, we prove that GraphCON mitigates the exploding- and vanishing-gradients problem, facilitating the training of deep multi-layer GNNs. Finally, we show that our approach offers performance competitive with the state of the art on a variety of graph-based learning tasks.
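A toy discretization may help illustrate the second-order dynamics. The tanh-of-aggregation coupling and the constants `dt`, `gamma`, `alpha` below are placeholder assumptions, not the paper's exact scheme.

```python
import numpy as np

def graphcon_step(X, Y, A_hat, dt=0.5, gamma=1.0, alpha=1.0):
    """One semi-implicit step of a coupled-oscillator update (a sketch).

    X holds the node states ("positions"), Y their "velocities".
    A normalized-adjacency aggregation through tanh stands in for the
    coupling function; any GNN layer could be substituted. gamma sets
    the restoring force and alpha the damping.
    """
    Y = Y + dt * (np.tanh(A_hat @ X) - gamma * X - alpha * Y)
    X = X + dt * Y
    return X, Y

# Toy usage on a triangle graph.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
A_hat = A / A.sum(axis=1, keepdims=True)
X, Y = np.random.randn(3, 4), np.zeros((3, 4))
for _ in range(50):
    X, Y = graphcon_step(X, Y, A_hat)
```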
Dynamical systems that minimize an energy are ubiquitous in geometry and physics. We propose a gradient-flow framework for GNNs in which the equations follow the direction of steepest descent of a learnable energy. This approach allows the evolution of a GNN to be interpreted from a multi-particle perspective as learning attractive and repulsive forces in feature space via the positive and negative eigenvalues of a symmetric "channel-mixing" matrix. We perform a spectral analysis of the solutions and conclude that gradient-flow graph convolutional models can induce dynamics dominated by the graph's high frequencies, which is desirable for heterophilic datasets. We also describe structural constraints on common GNN architectures that allow them to be interpreted as gradient flows. We conduct thorough ablation studies corroborating our theoretical analysis, and show competitive performance of simple and lightweight models on real-world homophilic and heterophilic datasets.
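The following sketch shows the basic gradient-flow step under these assumptions; symmetrizing an arbitrary matrix `W_raw` is one simple way to obtain the required channel-mixing matrix.

```python
import numpy as np

def gradient_flow_step(X, L, W_raw, tau=0.2):
    """One step of the gradient flow dX/dt = -L X W (a sketch).

    Symmetrizing W_raw makes the update a steepest descent of a
    quadratic energy: positive eigenvalues of W act as attraction
    (smoothing along edges), negative ones as repulsion (sharpening),
    which is the high-frequency behavior useful under heterophily.
    """
    W = 0.5 * (W_raw + W_raw.T)   # symmetric "channel-mixing" matrix
    return X - tau * (L @ X @ W)

# Toy usage: symmetric-normalized Laplacian of a triangle, 2 channels.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
d = A.sum(1)
L = np.eye(3) - A / np.sqrt(np.outer(d, d))
W_raw = np.array([[1.0, 0.3], [-0.2, -1.0]])   # mixed-sign spectrum
X = np.random.randn(3, 2)
for _ in range(10):
    X = gradient_flow_step(X, L, W_raw)   # repulsive directions grow, attractive ones decay
```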
Graph neural networks (GNNs) have demonstrated superior performance for semi-supervised node classification on graphs, owing to their ability to exploit node features and topological information simultaneously. However, most GNNs implicitly assume that the labels of a node and its neighbors are identical or consistent, which does not hold in heterophilic graphs, where the labels of linked nodes are likely to differ. Consequently, when the topology is non-informative for label prediction, ordinary GNNs can perform significantly worse than simply applying multi-layer perceptrons (MLPs) to each node. To tackle this problem, we propose a new $p$-Laplacian based GNN model, termed $^p$GNN, whose message-passing mechanism is derived from a discrete regularization framework and can be theoretically explained as an approximation of a polynomial graph filter defined on the spectral domain of $p$-Laplacians. Spectral analysis shows that the new message-passing mechanism acts simultaneously as a low-pass and a high-pass filter, making $^p$GNNs effective on both homophilic and heterophilic graphs. Empirical studies on real-world and synthetic datasets validate our findings and demonstrate that $^p$GNNs significantly outperform several state-of-the-art GNN architectures on heterophilic benchmarks while achieving competitive performance on homophilic benchmarks. Moreover, $^p$GNNs can adaptively learn aggregation weights and are robust to noisy edges.
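A naive dense sketch of one discrete $p$-Laplacian iteration, with `tau` and `eps` as assumed stabilizers, shows how the edge reweighting produces the combined low-/high-pass behavior.

```python
import numpy as np

def p_laplacian_step(X, A, p=1.5, tau=0.1, eps=1e-6):
    """One iteration of discrete p-Laplacian regularization (a sketch).

    Each edge (i, j) is reweighted by ||x_i - x_j||^(p-2): for p < 2,
    large feature differences get smaller weights, so sharp boundaries
    are preserved (high-pass behavior, useful under heterophily), while
    p = 2 recovers ordinary Laplacian smoothing.
    """
    n = A.shape[0]
    out = X.astype(float).copy()
    for i in range(n):
        for j in range(n):
            if A[i, j] > 0:
                diff = X[j] - X[i]
                w = (np.linalg.norm(diff) + eps) ** (p - 2)
                out[i] += tau * A[i, j] * w * diff
    return out
```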
Most graph neural networks (GNNs) use the message-passing paradigm, in which node features are propagated on the input graph. Recent works have pointed to the distortion of information flowing from distant nodes as a factor limiting the efficiency of message passing for tasks that rely on long-range interactions. This phenomenon, referred to as "over-squashing", has been heuristically attributed to graph bottlenecks, where the number of $k$-hop neighbors grows rapidly with $k$. We provide a precise description of the over-squashing phenomenon in GNNs and analyze how it arises from bottlenecks in the graph. For this purpose, we introduce a new edge-based combinatorial curvature and prove that negatively curved edges are responsible for the over-squashing issue. We also propose and experimentally test a curvature-based graph rewiring method to alleviate over-squashing.
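The abstract's new edge-based curvature is not reproduced here; as a simpler classical proxy for the idea that negatively curved edges mark bottlenecks, the sketch below computes the augmented Forman curvature of every edge.

```python
import numpy as np

def forman_curvature(A):
    """Augmented Forman curvature of each edge in an unweighted graph:
    F(i, j) = 4 - deg(i) - deg(j) + 3 * #triangles(i, j).
    Strongly negative values flag bottleneck edges, natural candidates
    for a curvature-based rewiring method.
    """
    deg = A.sum(axis=1)
    curv = {}
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j] > 0:
                tri = int((A[i] * A[j]).sum())   # common neighbors of i and j
                curv[(i, j)] = 4 - deg[i] - deg[j] + 3 * tri
    return curv

# Toy usage: two triangles joined by a single bridge edge (2, 3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(forman_curvature(A))   # the bridge (2, 3) gets the lowest value
```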
A prominent paradigm for graph neural networks is based on the message passing framework. In this framework, information communication is realized only between neighboring nodes. The challenge for approaches that use this paradigm is to ensure efficient and accurate \textit{long distance communication} between nodes, as deep convolutional networks are prone to over-smoothing. In this paper, we present a novel method based on time derivative graph diffusion (TIDE), with a learnable time parameter. Our approach allows the spatial extent of diffusion to be adapted across different tasks and network channels, thus enabling medium- and long-distance communication efficiently. Furthermore, we show that our architecture directly enables local message passing and thus inherits the expressive power of local message passing approaches. We show that on widely used graph benchmarks we achieve comparable performance, and on a synthetic mesh dataset we outperform state-of-the-art methods like GCN or GRAND by a significant margin.
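As an illustration of diffusion with a tunable time parameter (a dense spectral sketch, not the TIDE architecture itself):

```python
import numpy as np

def heat_diffusion(X, L, t):
    """Diffuse node features with the heat kernel exp(-t L).

    Small t keeps diffusion local; large t lets information travel far.
    Making t a learnable, per-channel parameter is the idea described
    above; a dense eigendecomposition is used here only for clarity.
    """
    lam, U = np.linalg.eigh(L)
    return U @ (np.exp(-t * lam)[:, None] * (U.T @ X))

# Toy usage: a unit spike on node 0 of a 5-node path, increasing time.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = np.diag(A.sum(1)) - A
X = np.eye(5)[:, :1]
for t in (0.1, 1.0, 10.0):
    print(t, np.round(heat_diffusion(X, L, t).ravel(), 3))
```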
Diffusion is the movement of molecules from a region of higher concentration to a region of lower concentration. It can be used to describe interactions among data points. In many machine learning problems, including transductive semi-supervised learning and few-shot learning, the relationship between labeled and unlabeled data points is a key component of high classification accuracy. In this paper, inspired by the convection-diffusion ODE, we propose a novel diffusion residual network (Diff-ResNet) that introduces the diffusion mechanism into the architecture of neural networks. Under a structured-data assumption, we prove that the diffusion mechanism can improve the distance-diameter ratio, thereby improving the separability of inter-class points and reducing the distance among local intra-class points. This property can easily be adopted by residual networks for constructing separable hyperplanes. Extensive experiments on semi-supervised graph node classification and few-shot image classification across various datasets validate the effectiveness of the proposed diffusion mechanism.
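A toy block interleaving a residual mapping with a diffusion step, under the assumed placeholders `W`, `step`, and `gamma`:

```python
import numpy as np

def diff_resnet_block(X, A_hat, W, step=0.5, gamma=0.5):
    """A residual step followed by a diffusion step (a sketch).

    The residual part transforms the features; the diffusion part moves
    each point toward the weighted average of its neighbors, shrinking
    intra-class distances, which is the mechanism credited above for
    improved separability.
    """
    X = X + step * np.tanh(X @ W)      # residual mapping
    X = X + gamma * (A_hat @ X - X)    # diffusion mechanism
    return X
```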
Message passing has evolved as an effective tool for designing graph neural networks (GNNs). However, most existing message-passing methods simply sum or average all neighboring features to update node representations. They are restricted by two problems: (i) a lack of interpretability for identifying the node features important to a GNN's prediction, and (ii) feature over-mixing, which leads to the over-smoothing problem of losing long-range dependencies and an inability to handle graphs under heterophily or low homophily. In this paper, we propose a node-level capsule graph neural network (NCGNN) to address these problems through an improved message-passing scheme. Specifically, NCGNN represents nodes as groups of node-level capsules, in which each capsule extracts distinctive features of its corresponding node. For each node-level capsule, a novel dynamic routing procedure is developed to adaptively select appropriate capsules for aggregation from a subgraph identified by the designed graph filter. NCGNN aggregates only the advantageous capsules and restricts irrelevant messages, avoiding over-mixing the features of interacting nodes. It can therefore relieve the over-smoothing issue and learn effective node representations over graphs with homophily or heterophily. Moreover, our proposed message-passing scheme is inherently interpretable and exempt from complex post-hoc explanations, as the graph filter and the dynamic routing procedure identify the subset of node features that are most significant to the model's prediction from the extracted subgraph. Extensive experiments on synthetic and real-world graphs demonstrate that NCGNN can well address the over-smoothing issue and produce better node representations for semi-supervised node classification. It outperforms the state of the art under both homophily and heterophily.
While graph neural networks (GNNs) have recently become the de facto standard for modeling relational data, they impose a strong assumption on the availability of node or edge features of the graph. In many real-world applications, however, features are only partially available; for example, in social networks, age and gender are available only for a small subset of users. We present a general approach for handling missing features in graph machine learning applications that is based on minimization of the Dirichlet energy and leads to a diffusion-type differential equation on the graph. The discretization of this equation produces a simple, fast, and scalable algorithm which we call Feature Propagation. We experimentally show that the proposed approach outperforms previous methods on seven common node classification benchmarks and can withstand surprisingly high rates of missing features: on average, we observe only around a 4% relative accuracy drop when 99% of the features are missing. Moreover, it takes only 10 seconds to run on a graph with $\sim$2.5M nodes and $\sim$123M edges on a single GPU.
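The algorithm described above is short enough to sketch directly; the iteration count and the symmetric normalization are assumptions.

```python
import numpy as np

def feature_propagation(X, A, known_mask, num_iters=40):
    """Reconstruct missing node features by Dirichlet-energy minimization.

    Diffuses features with the symmetrically normalized adjacency and
    re-imposes the observed values after every step. known_mask is a
    boolean array of X's shape marking the observed entries.
    """
    d = A.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.clip(d, 1, None)))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt
    X = X.copy()
    X[~known_mask] = 0.0                     # unknown entries start at zero
    X_obs = X.copy()
    for _ in range(num_iters):
        X = A_hat @ X
        X[known_mask] = X_obs[known_mask]    # clamp the observed features
    return X
```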
Graph Neural Networks (GNNs) have been demonstrated to be inherently susceptible to the problems of over-smoothing and over-squashing. These issues impair the ability of GNNs to model complex graph interactions by limiting their effectiveness at taking distant information into account. Our study reveals the key connection between the local graph geometry and the occurrence of both of these issues, thereby providing a unified framework for studying them at a local scale using Ollivier's Ricci curvature. Based on our theory, a number of principled methods are proposed to alleviate the over-smoothing and over-squashing issues.
A central challenge of building more powerful Graph Neural Networks (GNNs) is the oversmoothing phenomenon, where increasing the network depth leads to homogeneous node representations and thus worse classification performance. While previous works have only demonstrated that oversmoothing is inevitable when the number of graph convolutions tends to infinity, in this paper, we precisely characterize the mechanism behind the phenomenon via a non-asymptotic analysis. Specifically, we distinguish between two different effects when applying graph convolutions -- an undesirable mixing effect that homogenizes node representations in different classes, and a desirable denoising effect that homogenizes node representations in the same class. By quantifying these two effects on random graphs sampled from the Contextual Stochastic Block Model (CSBM), we show that oversmoothing happens once the mixing effect starts to dominate the denoising effect, and the number of layers required for this transition is $O(\log N/\log (\log N))$ for sufficiently dense graphs with $N$ nodes. We also extend our analysis to study the effects of Personalized PageRank (PPR) on oversmoothing. Our results suggest that while PPR mitigates oversmoothing at deeper layers, PPR-based architectures still achieve their best performance at a shallow depth and are outperformed by the graph convolution approach on certain graphs. Finally, we support our theoretical results with numerical experiments, which further suggest that the oversmoothing phenomenon observed in practice may be exacerbated by the difficulty of optimizing deep GNN models.
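A small simulation in the spirit of this analysis (with arbitrarily chosen parameters) makes the two effects visible: the between-class gap relative to the within-class spread typically first grows (denoising) and then shrinks (mixing).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q, mu = 200, 0.10, 0.02, 1.0    # CSBM: intra/inter edge probs, signal strength
y = rng.integers(0, 2, n)
P = np.where(y[:, None] == y[None, :], p, q)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T
X = mu * (2 * y[:, None] - 1) + rng.normal(size=(n, 1))   # class-dependent means plus noise

A_hat = (A + np.eye(n)) / (A + np.eye(n)).sum(1, keepdims=True)
for k in range(1, 11):
    X = A_hat @ X
    within = np.mean([X[y == c].std() for c in (0, 1)])    # shrinks: denoising effect
    between = abs(X[y == 0].mean() - X[y == 1].mean())     # shrinks: mixing effect
    print(k, round(between / (within + 1e-12), 2))
```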
Most graph neural network models rely on a particular message passing paradigm, where the idea is to iteratively propagate node representations of a graph to each node in the direct neighborhood. While very prominent, this paradigm leads to information propagation bottlenecks, as information is repeatedly compressed at intermediary node representations, which causes loss of information, making it practically impossible to gather meaningful signals from distant nodes. To address this issue, we propose shortest path message passing neural networks, where the node representations of a graph are propagated to each node in the shortest path neighborhoods. In this setting, nodes can communicate directly with each other even if they are not neighbors, breaking the information bottleneck and hence leading to more adequately learned representations. Theoretically, our framework generalizes message passing neural networks, resulting in provably more expressive models, and we show that some recent state-of-the-art models are special instances of this framework. Empirically, we verify the capacity of a basic model of this framework on dedicated synthetic experiments as well as on real-world graph classification and regression benchmarks, obtaining state-of-the-art results.
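A basic sketch of shortest-path neighborhoods via BFS and a per-distance weighted aggregation; the mean pooling and the scalar per-ring weights are simplifying assumptions.

```python
from collections import deque
import numpy as np

def sp_rings(adj_list, src, k_max):
    """BFS from src; return the nodes at each shortest-path distance d <= k_max."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj_list[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    rings = [[] for _ in range(k_max + 1)]
    for v, d in dist.items():
        if 0 < d <= k_max:
            rings[d].append(v)
    return rings

def sp_message_pass(X, adj_list, weights):
    """Add the mean features of each shortest-path ring, weighted per distance."""
    k_max = len(weights)
    out = X.copy()
    for u in range(len(adj_list)):
        rings = sp_rings(adj_list, u, k_max)
        for dd in range(1, k_max + 1):
            if rings[dd]:
                out[u] = out[u] + weights[dd - 1] * X[rings[dd]].mean(axis=0)
    return out

# Toy usage: a 5-node path, rings up to distance 2.
adj = [[1], [0, 2], [1, 3], [2, 4], [3]]
print(sp_message_pass(np.eye(5), adj, weights=[1.0, 0.5]))
```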
Graph learning aims to integrate node content with graph structure to learn node or graph representations. However, many existing graph learning methods have been found to work poorly on data with a high level of heterophily, i.e., where a large proportion of edges connect nodes with different class labels. Recent efforts to address this issue have focused on improving the message-passing mechanism. However, it remains unclear whether heterophily truly harms the performance of graph neural networks (GNNs). The key is to unfold the relationship between a node and its immediate neighbors, e.g., whether they are heterophilic or homophilic. From this perspective, we investigate the role heterophily plays in revealing the relationships between connected nodes before and after representations are learned. In particular, we propose an end-to-end framework that both learns the type of each edge (i.e., heterophilic or homophilic) and leverages edge-type information to improve the expressiveness of graph neural networks. We implement this framework in two different ways. Specifically, to avoid messages being passed through heterophilic edges, we can optimize the graph structure by removing the heterophilic edges identified by an edge classifier. Alternatively, information about the presence of heterophilic neighbors can be exploited for feature learning, so a hybrid message-passing approach is designed to aggregate homophilic neighbors and diversify heterophilic neighbors based on the edge classification. Extensive experiments demonstrate that the proposed framework significantly improves the performance of GNNs on multiple datasets across the full spectrum of homophily levels.
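The hybrid scheme can be sketched as follows, with an assumed boolean edge-type mask standing in for the edge classifier's output, and a simple add/subtract combination as one possible way to diversify.

```python
import numpy as np

def hybrid_message_passing(X, A, edge_is_homo):
    """Aggregate homophilic neighbors, diversify heterophilic ones.

    edge_is_homo: (n, n) boolean mask over edges (a stand-in for the
    learned edge classifier). Homophilic neighbors pull a node toward
    their mean; heterophilic neighbors push it away.
    """
    A_homo = A * edge_is_homo
    A_het = A * (~edge_is_homo)
    d_h = np.clip(A_homo.sum(1, keepdims=True), 1, None)
    d_x = np.clip(A_het.sum(1, keepdims=True), 1, None)
    return X + A_homo @ X / d_h - A_het @ X / d_x
```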
Due to the over-smoothing issue, most existing graph neural networks can only capture limited dependencies with their inherently finite aggregation layers. To overcome this limitation, we propose a new kind of graph convolution, called Graph Implicit Nonlinear Diffusion (GIND), which implicitly has access to infinite hops of neighbors while adaptively aggregating features with nonlinear diffusion to prevent over-smoothing. Notably, we show that the learned representation can be formalized as the minimizer of an explicit convex optimization objective. With this property, we can theoretically characterize the equilibrium of GIND from an optimization perspective. More interestingly, we can induce new structural variants by modifying the corresponding optimization objective. Specifically, we can embed prior properties into the equilibrium and introduce skip connections to promote training stability. Extensive experiments show that GIND is good at capturing long-range dependencies and performs well on both homophilic and heterophilic graphs with its nonlinear diffusion. Moreover, we show that the optimization-induced variants of our model can boost performance and improve training stability and efficiency. As a result, GIND obtains significant improvements on both node-level and graph-level tasks.
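A naive fixed-point sketch conveys the implicit idea (the actual GIND diffusion and its solver differ); convergence of this toy map requires it to be contractive, e.g. a small-norm `W`.

```python
import numpy as np

def implicit_diffusion(X, A_hat, W, iters=50, tol=1e-5):
    """Solve Z = A_hat @ tanh(Z @ W) + X by naive fixed-point iteration.

    The equilibrium Z effectively aggregates infinitely many hops of
    neighbors without stacking explicit layers, while the nonlinearity
    keeps the diffusion adaptive.
    """
    Z = X.copy()
    for _ in range(iters):
        Z_new = A_hat @ np.tanh(Z @ W) + X
        if np.linalg.norm(Z_new - Z) < tol:
            break
        Z = Z_new
    return Z
```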
Graph neural networks (GNNs) are widely used for graph-structured data processing due to their powerful representation ability. It is commonly believed that GNNs can implicitly remove non-predictive noise. However, an analysis of the implicit denoising effect in graph neural networks remains open. In this work, we conduct a comprehensive theoretical study and analyze when and why implicit denoising happens in GNNs. Specifically, we study the convergence properties of the noise matrix. Our theoretical analysis suggests that implicit denoising largely depends on the connectivity, the graph size, and the GNN architecture. Moreover, by extending the graph signal denoising problem, we formally define and propose the adversarial graph signal denoising (AGSD) problem. By solving such a problem, we derive a robust graph convolution that can enhance the smoothness of node representations and the implicit denoising effect. Extensive empirical evaluations verify our theoretical analyses and the effectiveness of our proposed model.
We investigate the representation power of graph neural networks in the semi-supervised node classification task under heterophily or low homophily, i.e., in networks where connected nodes may have different class labels and dissimilar features. Many popular GNNs fail to generalize to this setting, and are even outperformed by models that ignore the graph structure (e.g., multilayer perceptrons). Motivated by this limitation, we identify a set of key designs (ego- and neighbor-embedding separation, higher-order neighborhoods, and combination of intermediate representations) that boost learning from the graph structure under heterophily. We combine them into a graph neural network, H2GCN, which we use as the base method to empirically evaluate the effectiveness of the identified designs. Going beyond the traditional benchmarks with strong homophily, our empirical analysis shows that the identified designs increase the accuracy of GNNs by up to 40% and 27% over models without them on synthetic and real networks with heterophily, respectively, and yield competitive performance under homophily.
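The first two designs, plus concatenation in place of averaging, can be sketched in a few lines; the strict-2-hop construction and the mean normalization are assumptions.

```python
import numpy as np

def h2gcn_like_layer(X, A):
    """One round of the identified designs (a sketch): keep the ego
    embedding separate, aggregate strict 1-hop and strict 2-hop
    neighborhoods without self-loops, and concatenate rather than
    average, so neighbor information cannot wash out the ego features.
    Combining the intermediate outputs of several such rounds is the
    third design.
    """
    A1 = A.astype(float).copy()
    np.fill_diagonal(A1, 0)                 # no self-loops
    A2 = ((A1 @ A1) > 0).astype(float)      # reachable in two hops
    np.fill_diagonal(A2, 0)
    A2[A1 > 0] = 0                          # strictly 2-hop only
    d1 = np.clip(A1.sum(1, keepdims=True), 1, None)
    d2 = np.clip(A2.sum(1, keepdims=True), 1, None)
    return np.concatenate([X, A1 @ X / d1, A2 @ X / d2], axis=1)
```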
Graph neural networks (GNNs) have been shown to achieve competitive results in graph-related tasks such as node and graph classification, link prediction, and node and graph clustering in a variety of domains. Most GNNs use a message-passing framework and hence are called MPNNs. Despite their promising results, MPNNs have been reported to suffer from over-smoothing, over-squashing, and under-reaching. Graph rewiring and graph pooling have been proposed in the literature as solutions to these limitations. However, most state-of-the-art graph rewiring methods fail to preserve the global topology of the graph, are not differentiable (inductive), and require the tuning of hyperparameters. In this paper, we propose DiffWire, a novel framework for graph rewiring in MPNNs that is principled, fully differentiable, and parameter-free by leveraging the Lovász bound. Our approach provides a unified theory for graph rewiring by proposing two new, complementary layers in MPNNs: first, CT-Layer, a layer that learns commute times and uses them as a relevance function for edge re-weighting; second, GAP-Layer, a layer that optimizes the spectral gap, depending on the nature of the network and the task at hand. We empirically validate the value of our proposed approach and each of these layers separately with benchmark datasets for graph classification. DiffWire brings together the learnability of commute times and related definitions of curvature, opening the door to the development of more expressive MPNNs.
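Commute times themselves have a closed form via the Laplacian pseudoinverse, which the sketch below computes densely; CT-Layer learns a scalable approximation rather than this exact computation.

```python
import numpy as np

def commute_times(A):
    """Commute-time distances of all node pairs:
    CT(i, j) = vol(G) * (L+_ii + L+_jj - 2 * L+_ij),
    where L+ is the pseudoinverse of the graph Laplacian and vol(G)
    is the sum of degrees.
    """
    L = np.diag(A.sum(1)) - A
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return A.sum() * (d[:, None] + d[None, :] - 2 * Lp)
```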
The core operation of current Graph Neural Networks (GNNs) is the aggregation enabled by the graph Laplacian or message passing, which filters the neighborhood information of nodes. Though effective for various tasks, in this paper we show that this operation is potentially a problematic factor underlying all GNN models for learning on certain datasets, as it forces the node representations to be similar, making the nodes gradually lose their identity and become indistinguishable. Hence, we augment the aggregation operations with their dual, i.e., diversification operators that make the nodes more distinct and preserve their identity. Such augmentation replaces the aggregation with a two-channel filtering process that, in theory, is beneficial for enriching the node representations. In practice, the proposed two-channel filters can be easily patched onto existing GNN methods with diverse training strategies, including spectral and spatial (message passing) methods. In the experiments, we observe desired characteristics of the models and a significant performance boost upon the baselines on 9 node classification tasks.
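A minimal version of such a two-channel filter, with fixed mixing weights as an illustrative simplification:

```python
import numpy as np

def two_channel_filter(X, A_hat, alpha=0.5, beta=0.5):
    """Aggregation (low-pass) plus diversification (high-pass) channels.

    The first channel smooths each node toward its neighborhood mean;
    the second pushes it away from that mean, preserving its identity.
    """
    smooth = A_hat @ X           # aggregation channel
    sharpen = X - A_hat @ X      # diversification channel
    return alpha * smooth + beta * sharpen
```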
Graphs can model complex interactions between entities and arise naturally in many important applications. These applications can often be cast into standard graph learning tasks, in which a crucial step is to learn low-dimensional graph representations. Graph neural networks (GNNs) are currently the most popular model in graph embedding approaches. However, standard GNNs in the neighborhood aggregation paradigm suffer from limited discriminative power in distinguishing \emph{high-order} graph structures as opposed to \emph{low-order} structures. To capture high-order structures, researchers have resorted to motifs and developed motif-based GNNs. However, existing motif-based GNNs still suffer from less discriminative power on high-order structures. To overcome the above limitations, we propose MGNN, a novel framework to better capture high-order structures, hinging on our proposed motif redundancy minimization operator and injective motif combination. First, MGNN produces a set of node representations with respect to each motif. The next phase is our proposed redundancy minimization among motifs, which compares the motifs with each other and distills the features unique to each motif. Finally, MGNN performs the updating of node representations by combining multiple representations from different motifs. In particular, to enhance the discriminative power, MGNN utilizes an injective function to combine the representations with respect to different motifs. We further show with a theoretical analysis that our proposed architecture increases the expressive power of GNNs. We demonstrate that MGNN outperforms state-of-the-art methods on seven public benchmarks on node classification and graph classification tasks.
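A sketch of per-motif representations for the simplest pair of motifs (edge and triangle) and a sum-style combination; the redundancy-minimization operator is omitted and the weight matrices are placeholders.

```python
import numpy as np

def triangle_motif_adjacency(A):
    """Motif-induced adjacency for the triangle motif: entry (i, j)
    counts the triangles containing edge (i, j)."""
    return A * (A @ A)

def motif_combined_representation(X, A, W_edge, W_tri):
    """Produce one representation per motif and combine them with
    distinct linear maps, in the spirit of an injective combination."""
    A_tri = triangle_motif_adjacency(A)
    d_e = np.clip(A.sum(1, keepdims=True), 1, None)
    d_t = np.clip(A_tri.sum(1, keepdims=True), 1, None)
    h_edge = (A @ X / d_e) @ W_edge     # representation w.r.t. the edge motif
    h_tri = (A_tri @ X / d_t) @ W_tri   # representation w.r.t. the triangle motif
    return h_edge + h_tri
```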
Graph neural networks (GNNs) have been widely used under semi-supervised settings. Prior studies have mainly focused on finding appropriate graph filters (e.g., aggregation schemes) to generalize well for both homophilic and heterophilic graphs. Even though these approaches are essential and effective, they still suffer from the sparsity of initial node features inherent in the bag-of-words representation. In semi-supervised learning, where the training samples often fail to cover the entire dimensions of the graph filters (hyperplanes), this can precipitate over-fitting of specific dimensions in the first projection matrix. To deal with this problem, we suggest a simple and novel strategy: create additional space by flipping the initial features and hyperplane simultaneously. Training in both the original and the flipped space can provide precise updates of the learnable parameters. To the best of our knowledge, this is the first attempt to effectively moderate the overfitting problem in GNNs. Extensive experiments on real-world datasets demonstrate that the proposed technique improves node classification accuracy by up to 40.2%.