Despite the recent success of graph neural networks (GNNs), common architectures often exhibit significant limitations, including sensitivity to oversmoothing, long-range dependencies, and spurious edges, e.g., as can arise from graph heterophily or adversarial attacks. To at least partially address these issues within a simple, transparent framework, we consider a new family of GNN layers designed to mimic and integrate the update rules of two classical iterative algorithms, namely proximal gradient descent and iterative reweighted least squares (IRLS). The former defines an extensible base GNN architecture that is immune to oversmoothing while still capturing long-range dependencies by allowing arbitrary propagation steps. In contrast, the latter produces a novel attention mechanism that is explicitly anchored to an underlying end-to-end energy function, contributing stability with respect to edge uncertainty. When combined, we obtain an extremely simple yet robust model that we evaluate across disparate scenarios, including standardized benchmarks, adversarially perturbed graphs, graphs with heterophily, and graphs involving long-range dependencies. In doing so, we compare against SOTA GNN approaches that have been explicitly designed for the respective task, achieving competitive or superior node classification accuracy. Our code is available at https://github.com/fftyyy/twirls.
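To make the propagation-as-optimization idea concrete, here is a minimal NumPy sketch (an illustration under simplifying assumptions, not the authors' TWIRLS implementation): each propagation step is one gradient-descent update on a simple graph energy E(Y) = ||Y − X||² + λ·tr(YᵀLY), and arbitrarily many steps can be taken without the iterates collapsing to a constant, because the fidelity term anchors Y to the input features X.

```python
import numpy as np

def propagate(X, A, lam=1.0, step=0.1, k=16):
    """Run k gradient-descent steps on the graph energy
    E(Y) = ||Y - X||_F^2 + lam * tr(Y^T L Y),  with L = D - A.
    Each step Y <- Y - step * grad E(Y) behaves like one
    parameter-free propagation layer."""
    L = np.diag(A.sum(axis=1)) - A
    Y = X.copy()
    for _ in range(k):
        Y = Y - step * (2.0 * (Y - X) + 2.0 * lam * (L @ Y))
    return Y

# Tiny example: a 3-node path graph; endpoint features are pulled
# toward their neighbor but never collapse to a constant vector.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1.0], [0.0], [-1.0]])
Y = propagate(X, A)
```

With enough steps the iterates approach the closed-form minimizer (I + λL)⁻¹X, so depth can grow without oversmoothing; the paper's IRLS-derived attention would additionally reweight the edges inside L, which this sketch omits.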
It has been observed that graph neural networks (GNNs) sometimes struggle to maintain a healthy balance between modeling long-range dependencies across nodes and avoiding unintended consequences such as oversmoothed node representations. To address this issue (among other things), two separate strategies have recently been proposed, namely implicit and unfolded GNNs. The former treats node representations as the fixed points of a deep equilibrium model, which can efficiently facilitate arbitrary implicit propagation across the graph with a fixed memory footprint. In contrast, the latter involves treating graph propagation as unfolded gradient iterations applied to some graph-regularized energy function. Motivated by this setting, in this paper we carefully elucidate the similarities and differences of these methods, quantifying explicit situations where the solutions they produce are actually equivalent, and others where their behavior diverges. This includes analyses of convergence, representational capacity, and interpretability. We also provide empirical head-to-head comparisons across a variety of synthetic and public real-world benchmarks.
Heterogeneous graph neural networks (GNNs) achieve strong performance on node classification tasks in the semi-supervised learning setting. However, as in the simpler homogeneous GNN case, message-passing-based heterogeneous GNNs may struggle to balance between resisting the oversmoothing that occurs in deep models and capturing long-range dependencies in graph-structured data. Moreover, this trade-off is complicated in heterogeneous graphs by the heterogeneous relations among different types of nodes. To address these issues, we propose a novel heterogeneous GNN architecture in which layers are derived from optimization steps that descend a novel relation-aware energy function. The corresponding minimizer is fully differentiable with respect to the energy-function parameters, so bilevel optimization can be applied to effectively learn a functional form whose minimum provides optimal node representations for subsequent classification tasks. In particular, this approach allows us to model various heterogeneous relations among different node types while avoiding oversmoothing effects. Experimental results on 8 heterogeneous graph benchmarks demonstrate that our proposed method can achieve competitive node classification accuracy.
Due to the oversmoothing problem, most existing graph neural networks can only capture limited dependencies with their inherently finite number of aggregation layers. To overcome this limitation, we propose a new kind of graph convolution, called Graph Implicit Nonlinear Diffusion (GIND), which implicitly has access to infinite hops of neighbors while adaptively aggregating features with nonlinear diffusion to prevent oversmoothing. Notably, we show that the learned representation can be formalized as the minimizer of an explicit convex optimization objective. With this property, we can theoretically characterize the equilibrium of GIND from an optimization perspective. More interestingly, we can induce new structural variants by modifying the corresponding optimization objective. Specifically, we can embed prior properties into the equilibrium and introduce skip connections to promote training stability. Extensive experiments show that GIND is good at capturing long-range dependencies and performs well on both homophilic and heterophilic graphs with its nonlinear diffusion. Moreover, we show that the optimization-induced variants of our model can boost performance and improve training stability and efficiency. As a result, our GIND obtains significant improvements on both node-level and graph-level tasks.
A graph is a universal data structure widely used to organize data in the real world. Various practical networks, such as transportation networks and social and academic networks, can be represented by graphs. Recent years have witnessed the rapid development of representing the vertices of a network in a low-dimensional vector space, known as network representation learning. Representation learning can facilitate the design of new algorithms on graph data. In this survey, we conduct a comprehensive review of the current literature on network representation learning. Existing algorithms can be divided into three groups: shallow embedding models, heterogeneous network embedding models, and graph-neural-network-based models. We review state-of-the-art algorithms for each category and discuss the essential differences among them. One advantage of this survey is that we systematically study the theoretical foundations underlying the different categories of algorithms, which offers deep insights for better understanding the development of the network representation learning field.
Pre-publication draft of a book to be published by Morgan & Claypool publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.
Graph Neural Networks (GNNs) have been predominant for graph learning tasks; however, recent studies showed that a well-known graph algorithm, Label Propagation (LP), combined with a shallow neural network can achieve comparable performance to GNNs in semi-supervised node classification on graphs with high homophily. In this paper, we show that this approach falls short on graphs with low homophily, where nodes often connect to nodes of the opposite classes. To overcome this, we carefully design a combination of a base predictor with the LP algorithm that enjoys a closed-form solution as well as convergence guarantees. Our algorithm first learns the class compatibility matrix and then aggregates label predictions using the LP algorithm, weighted by class compatibilities. On a wide variety of benchmarks, we show that our approach achieves leading performance on graphs with various levels of homophily, while having orders of magnitude fewer parameters and requiring less execution time. Empirical evaluations demonstrate that simple adaptations of LP can be competitive in semi-supervised node classification in both homophily and heterophily regimes.
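The compatibility-weighted propagation described above can be sketched in a few lines. This is a hedged illustration, not the paper's exact algorithm: the update rule, damping, and renormalization here are simplifying assumptions.

```python
import numpy as np

def compatibility_lp(A, Y0, H, alpha=0.5, num_iters=10):
    """Label propagation weighted by a class-compatibility matrix H,
    where H[i, j] estimates how likely a class-i node links to a
    class-j node.  A: (n, n) adjacency; Y0: (n, c) initial beliefs."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    Y = Y0.copy()
    for _ in range(num_iters):
        # A neighbor believed to be class j contributes H[:, j]
        # evidence about this node's own class.
        Y = (1 - alpha) * Y0 + alpha * (A @ Y @ H.T) / deg
        Y = Y / np.maximum(Y.sum(axis=1, keepdims=True), 1e-12)
    return Y

# Heterophilous toy graph: two linked nodes that prefer opposite
# classes.  Node 0 is labeled class 0; node 1 is inferred as class 1.
A = np.array([[0., 1.], [1., 0.]])
Y0 = np.array([[1.0, 0.0], [0.5, 0.5]])
H = np.array([[0.0, 1.0], [1.0, 0.0]])  # classes link across, not within
Y = compatibility_lp(A, Y0, H)
```

With the identity matrix in place of H this reduces to ordinary homophilous label propagation, which is what makes the learned compatibility matrix the key ingredient for low-homophily graphs.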
As a powerful tool for modeling complex relationships, hypergraphs have gained popularity in the graph learning community. However, commonly used frameworks in deep hypergraph learning focus on hypergraphs with edge-independent vertex weights (EIVWs), without considering hypergraphs with edge-dependent vertex weights (EDVWs), which have more modeling power. To compensate for this, we present General Hypergraph Spectral Convolution (GHSC), a general learning framework that can not only handle both EDVW and EIVW hypergraphs but, more importantly, can theoretically and explicitly leverage existing powerful graph convolutional neural networks (GCNNs), largely easing the design of hypergraph neural networks. In this framework, the graph Laplacian of a given undirected GCNN is replaced with a unified hypergraph Laplacian that incorporates vertex weight information from a random-walk perspective by equating our defined generalized hypergraphs with simple undirected graphs. Extensive experiments from various domains, including social network analysis, visual object classification, and protein learning, demonstrate the state-of-the-art performance of the proposed framework.
Graph neural networks (GNNs) have demonstrated superior performance for semi-supervised node classification on graphs as a result of their ability to exploit node features and topological information simultaneously. However, most GNNs implicitly assume that the labels of nodes and their neighbors in a graph are the same or consistent, which does not hold in heterophilic graphs, where the labels of linked nodes are likely to differ. Hence, when the topology is non-informative for label prediction, ordinary GNNs may perform significantly worse than simply applying multi-layer perceptrons (MLPs) to each node. To tackle the above problem, we propose a new $p$-Laplacian-based GNN model, termed $^p$GNN, whose message-passing mechanism is derived from a discrete regularization framework and can be theoretically interpreted as an approximation of a polynomial graph filter defined on the spectral domain of $p$-Laplacians. The spectral analysis shows that the new message-passing mechanism works simultaneously as a low-pass and a high-pass filter, thus making $^p$GNNs effective on both homophilic and heterophilic graphs. Empirical studies on real-world and synthetic datasets validate our findings and demonstrate that $^p$GNNs significantly outperform several state-of-the-art GNN architectures on heterophilic benchmarks while achieving competitive performance on homophilic benchmarks. Moreover, $^p$GNNs can adaptively learn aggregation weights and are robust to noisy edges.
Graph neural networks (GNNs) are widely used for graph-structured data processing due to their powerful representation ability. It is commonly believed that GNNs can implicitly remove non-predictive noise. However, the analysis of the implicit denoising effect in graph neural networks remains open. In this work, we conduct a comprehensive theoretical study and analyze when and why implicit denoising happens in GNNs. Specifically, we study the convergence properties of the noise matrix. Our theoretical analysis suggests that implicit denoising largely depends on the connectivity, the graph size, and the GNN architecture. Moreover, we formally define and propose the adversarial graph signal denoising (AGSD) problem by extending the graph signal denoising problem. By solving such a problem, we derive a robust graph convolution that can enhance the smoothness of node representations and the implicit denoising effect. Extensive empirical evaluations verify our theoretical analyses and the effectiveness of our proposed model.
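A toy experiment makes the implicit-denoising claim tangible. The sketch below (purely illustrative, unrelated to the paper's formal analysis) applies repeated degree-normalized aggregation, the linear part of many GNN layers, to a noisy constant graph signal on a cycle and measures the residual error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Cycle graph on n nodes.
n = 200
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Clean constant signal plus i.i.d. Gaussian noise.
clean = np.ones(n)
noisy = clean + 0.5 * rng.standard_normal(n)

# Repeated degree-normalized aggregation acts as a low-pass filter
# that damps the high-frequency part of the noise.
P = A / A.sum(axis=1, keepdims=True)
x = noisy.copy()
for _ in range(10):
    x = P @ x

err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(x - clean)
```

The error drops substantially after a few aggregation rounds, consistent with the intuition that smoothing suppresses non-predictive noise; the abstract's point is that how much it drops depends on connectivity, graph size, and architecture.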
Graph neural networks (GNNs) have demonstrated superb performance in various applications. However, the working mechanism behind them remains mysterious. GNN models are designed to learn effective representations for graph-structured data, which intrinsically coincides with the principle of graph signal denoising (GSD). Algorithm unrolling, a "learning to optimize" technique, has attracted increasing attention due to its prospects for building efficient and interpretable neural network architectures. In this paper, we introduce a class of unrolled networks built from truncated optimization algorithms (e.g., gradient descent and proximal gradient descent) for GSD problems. They are shown to be tightly connected to many popular GNN models, in that the forward propagation in these GNNs is in fact an unrolled network serving a specific GSD problem. Furthermore, the training process of a GNN model can be seen as solving a bilevel optimization problem with a lower-level GSD problem. Such a connection brings a fresh view of GNNs, as we can try to understand their practical capabilities from their GSD counterparts, and it can also motivate the design of new GNN models. Based on the algorithm-unrolling perspective, an expressive model named UGDGNN, i.e., unrolled gradient descent GNN, is further proposed, which inherits appealing theoretical properties. Extensive numerical simulations on seven benchmark datasets demonstrate that UGDGNN can achieve superior or competitive performance over state-of-the-art models.
Graph convolutional networks (GCNs) are a powerful deep learning approach for graph-structured data. Recently, GCNs and subsequent variants have shown superior performance in various application areas on real-world datasets. Despite their success, most of the current GCN models are shallow, due to the over-smoothing problem. In this paper, we study the problem of designing and analyzing deep graph convolutional networks. We propose GCNII, an extension of the vanilla GCN model with two simple yet effective techniques: initial residual and identity mapping. We provide theoretical and empirical evidence that the two techniques effectively relieve the problem of over-smoothing. Our experiments show that the deep GCNII model outperforms the state-of-the-art methods on various semi- and full-supervised tasks. Code is available at https://github.com/chennnM/GCNII.
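The two techniques admit a compact sketch. The following NumPy code is a simplified reading of the layer update (not the reference implementation; the λ value, weight scale, and toy graph are illustrative assumptions): the initial residual mixes the input representation H0 back in with coefficient α, and the identity mapping shrinks each weight matrix toward the identity with a depth-decaying coefficient β.

```python
import numpy as np

def gcnii_layer(H, H0, A_hat, W, alpha, beta):
    """One layer:  H' = ReLU( ((1-alpha) A_hat H + alpha H0)
                              ((1-beta) I + beta W) ).
    alpha mixes the initial features back in (initial residual);
    beta shrinks the weight matrix toward the identity."""
    P = (1 - alpha) * (A_hat @ H) + alpha * H0
    M = (1 - beta) * np.eye(W.shape[0]) + beta * W
    return np.maximum(P @ M, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0., 1.], [1., 0.]])
A_tilde = A + np.eye(2)                    # add self-loops
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))  # symmetric normalization
H0 = np.abs(rng.standard_normal((2, 4)))   # input representation
W = 0.1 * rng.standard_normal((4, 4))
H = H0
for layer in range(1, 33):                 # a 32-layer stack
    beta = np.log(0.5 / layer + 1.0)       # beta_l decays with depth
    H = gcnii_layer(H, H0, A_hat, W, alpha=0.1, beta=beta)
```

Because α keeps injecting H0 at every depth and β → 0 keeps deep layers close to the identity, the stack can be made very deep without the representations being smoothed into a constant.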
Given the prevalence of large-scale graphs in real-world applications, the storage and time needed to train neural models on them have raised concerns. To alleviate these concerns, we propose and study the graph condensation problem for graph neural networks (GNNs). Specifically, we aim to condense the large, original graph into a small, synthetic, and highly informative graph, such that GNNs trained on the small graph and the large graph have comparable performance. We approach the condensation problem by imitating the GNN training trajectory on the original graph through the optimization of a gradient matching loss, and by designing a strategy to condense node features and structural information. Extensive experiments have demonstrated the effectiveness of the proposed framework in condensing different graph datasets into informative smaller graphs. In particular, we are able to approximate the original test accuracy by 95.3% on Reddit, 99.8% on Flickr, and 99.0% on Citeseer, while reducing their graph size by more than 99.9%, and the condensed graphs can be used to train various GNN architectures. Code is released at https://github.com/chandlerbang/gcond.
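The gradient matching loss at the heart of the approach can be illustrated with a deliberately simplified, non-graph sketch (a shared linear least-squares model rather than a GNN; this is an assumption for clarity, not the paper's objective): condensation compares the training gradient produced by the small synthetic set against the one produced by the full real set, and optimizes the synthetic data to close the gap.

```python
import numpy as np

def gradient_matching_loss(X_real, y_real, X_syn, y_syn, W):
    """Squared distance between the training gradients that the real
    data and the synthetic data induce on a shared model W (here a
    linear least-squares model for simplicity)."""
    def grad(X, y):
        return 2.0 * X.T @ (X @ W - y) / len(X)  # grad of mean sq. error
    return float(np.linalg.norm(grad(X_real, y_real)
                                - grad(X_syn, y_syn)) ** 2)

rng = np.random.default_rng(0)
X_real = rng.standard_normal((100, 3))
y_real = rng.standard_normal((100, 1))
W = rng.standard_normal((3, 1))

# Condensation would now optimize a small (X_syn, y_syn) to drive this
# loss toward zero across training steps; a copy of the real data
# matches its own gradient exactly.
perfect = gradient_matching_loss(X_real, y_real, X_real, y_real, W)
```

In the paper's setting the shared model is a GNN, the loss is matched along the training trajectory, and the synthetic adjacency is additionally parameterized as a function of the synthetic features.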
Deep learning has been shown to be successful in a number of domains, ranging from acoustics, images, to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, substantial research efforts have been devoted to applying deep learning methods to graphs, resulting in beneficial advances in graph analysis techniques. In this survey, we comprehensively review the different types of deep learning methods on graphs. We divide the existing methods into five categories based on their model architectures and training strategies: graph recurrent neural networks, graph convolutional networks, graph autoencoders, graph reinforcement learning, and graph adversarial methods. We then provide a comprehensive overview of these methods in a systematic manner mainly by following their development history. We also analyze the differences and compositions of different methods. Finally, we briefly outline the applications in which they have been used and discuss potential future research directions.
translated by 谷歌翻译
Graph neural networks (GNNs) exploit signals from node features and the input graph topology to improve node classification task performance. However, these models tend to perform poorly on heterophilic graphs, where connected nodes have different labels. GNNs that work across graphs with varying levels of homophily have recently been proposed. Among these, models relying on polynomial graph filters have shown promise. We observe that the solutions to these polynomial graph filter models are also solutions to an overdetermined system of equations, which suggests that, in some instances, the models need to learn a reasonably high-order polynomial. On investigation, we find that the proposed models are inefficient at learning such polynomials due to their designs. To mitigate this issue, we perform an eigendecomposition of the graph and propose to learn multiple adaptive polynomial filters acting on different subsets of the spectrum. We demonstrate theoretically and empirically that our proposed model learns better filters, thereby improving classification accuracy. We study various aspects of our proposed model, including its dependency on the number of eigencomponents utilized, the latent polynomial filters learned, and the performance of the individual polynomials on the node classification task. We further show that our model is scalable by evaluating it over large graphs. Our model achieves performance gains of up to 5% over state-of-the-art models and generally outperforms existing polynomial-filter-based approaches.
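A polynomial graph filter of the kind discussed above can be applied with nothing but matrix-vector products. The sketch below (a generic illustration of polynomial spectral filtering, not this paper's multi-filter model) shows a degree-1 filter h(λ) = 1 − λ/2 on the normalized Laplacian of a single edge: it preserves the smooth eigenvector (eigenvalue 0) and annihilates the rough one (eigenvalue 2).

```python
import numpy as np

def poly_filter(L, x, theta):
    """Apply the polynomial spectral filter sum_k theta[k] * L^k @ x
    using only matrix-vector products (no eigendecomposition)."""
    out = np.zeros_like(x)
    z = x.copy()
    for t in theta:
        out = out + t * z
        z = L @ z
    return out

# Normalized Laplacian of a single edge: eigenvalues 0 (constant
# vector) and 2 (alternating vector).
L = np.array([[1., -1.],
              [-1., 1.]])
low = np.array([1., 1.])     # smooth signal, eigenvalue 0
high = np.array([1., -1.])   # rough signal,  eigenvalue 2
y_low = poly_filter(L, low, [1.0, -0.5])   # filter h(lam) = 1 - lam/2
y_high = poly_filter(L, high, [1.0, -0.5])
```

The paper's observation is that a single such polynomial may need to be of high order to fit a complicated spectral response, which motivates learning several lower-order polynomials on separate spectral subsets instead.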
Graph neural networks (GNNs) work well when the graph structure is provided. However, this structure may not always be available in real-world applications. One solution to this problem is to infer a task-specific latent structure and then apply a GNN to the inferred graph. Unfortunately, the space of possible graph structures grows super-exponentially with the number of nodes, so the task-specific supervision may be insufficient for learning both the structure and the GNN parameters. In this work, we propose the Simultaneous Learning of Adjacency and GNN Parameters with Self-supervision, or SLAPS, a method that provides more supervision for inferring a graph structure through self-supervision. A comprehensive experimental study demonstrates that SLAPS scales to large graphs with hundreds of thousands of nodes and outperforms several models that have been proposed to learn task-specific graph structures on established benchmarks.
In the last few years, graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs. This emerging field has witnessed an extensive growth of promising techniques that have been applied with success to computer science, mathematics, biology, physics and chemistry. But for any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress. This led us in March 2020 to release a benchmark framework that i) comprises a diverse collection of mathematical and real-world graphs, ii) enables fair model comparison with the same parameter budget to identify key architectures, iii) has an open-source, easy-to-use and reproducible code infrastructure, and iv) is flexible for researchers to experiment with new theoretical ideas. As of December 2022, the GitHub repository has reached 2,000 stars and 380 forks, which demonstrates the utility of the proposed open-source framework through the wide usage by the GNN community. In this paper, we present an updated version of our benchmark with a concise presentation of the aforementioned framework characteristics, an additional medium-sized molecular dataset AQSOL, similar to the popular ZINC, but with a real-world measured chemical target, and discuss how this framework can be leveraged to explore new GNN designs and insights. As a proof of value of our benchmark, we study the case of graph positional encoding (PE) in GNNs, which was introduced with this benchmark and has since spurred interest in exploring more powerful PE for Transformers and GNNs in a robust experimental setting.
Contrastive learning methods based on InfoNCE loss are popular in node representation learning tasks on graph-structured data. However, its reliance on data augmentation and its quadratic computational complexity might lead to inconsistency and inefficiency problems. To mitigate these limitations, in this paper, we introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL in short). Local-GCL consists of two key designs: 1) We fabricate the positive examples for each node directly using its first-order neighbors, which frees our method from the reliance on carefully-designed graph augmentations; 2) To improve the efficiency of contrastive learning on graphs, we devise a kernelized contrastive loss, which could be approximately computed in linear time and space complexity with respect to the graph size. We provide theoretical analysis to justify the effectiveness and rationality of the proposed methods. Experiments on various datasets with different scales and properties demonstrate that in spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
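The first of the two designs, neighbors as positives, can be sketched directly. The code below is a hedged simplification: it keeps the quadratic all-pairs form for readability, whereas the paper's second design replaces it with a kernelized approximation that runs in linear time, which is not shown here.

```python
import numpy as np

def neighbor_contrastive_loss(Z, A, tau=0.5):
    """InfoNCE-style loss with first-order neighbors as positives and
    all other nodes as negatives (no graph augmentations needed)."""
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # cosine similarity
    S = np.exp(Z @ Z.T / tau)
    np.fill_diagonal(S, 0.0)                          # drop self-pairs
    pos = (S * A).sum(axis=1) / np.maximum(A.sum(axis=1), 1.0)
    return float(-np.log(pos / S.sum(axis=1)).mean())

# Two disjoint edges; embeddings that agree with the graph score a
# lower loss than embeddings that pair nodes across the two edges.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 1., 0.]])
Z_good = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
Z_bad = np.array([[1., 0.], [0., 1.], [1., 0.], [0., 1.]])
loss_good = neighbor_contrastive_loss(Z_good, A)
loss_bad = neighbor_contrastive_loss(Z_bad, A)
```

Minimizing this loss pulls linked nodes together in embedding space and pushes all other pairs apart, which is exactly the quantity the kernelized variant approximates at scale.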
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
Graph neural networks (GNNs), which take numerical node features and graph structure as inputs, have demonstrated superior performance on various supervised learning tasks with graph data. However, the numerical node features utilized by GNNs are commonly extracted from raw data that is of text or tabular (numeric/categorical) type in most real-world applications. The best models for such data types in most standard supervised learning settings with IID (non-graph) data are not simple neural network layers, and thus are not easily incorporated into a GNN. Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data, which are ensembled and stacked in multiple layers. Our layer-wise framework leverages bagging and stacking strategies to enjoy strong generalization, in a manner that effectively mitigates label leakage and overfitting. Across a variety of graph datasets with tabular/text node features, our method achieves comparable or superior performance relative to both tabular/text and graph neural network models, as well as existing state-of-the-art hybrid strategies that combine the two.