Over-squashing is the phenomenon of inefficient information propagation between distant nodes in a network. It is an important problem known to significantly impede the training of graph neural networks (GNNs), since a node's receptive field grows exponentially. To mitigate this problem, a preprocessing procedure known as rewiring is commonly applied to the input network. In this paper, we investigate the use of discrete analogues of classical geometric notions of curvature to model information flow on networks and to rewire them. We show that these classical notions achieve state-of-the-art performance in GNN training accuracy on a variety of real-world network datasets. Moreover, they exhibit a clear advantage in computational runtime over the current state of the art.
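To make the idea concrete, the following is a minimal sketch (assuming networkx; the abstract names no tooling) of the simplest classical discrete curvature, combinatorial Forman curvature, which is strongly negative precisely on bridge-like bottleneck edges:

```python
# Minimal sketch: combinatorial Forman curvature for unweighted graphs.
# This is the simplest variant; published rewiring methods may add
# triangle and weight terms on top of it.
import networkx as nx

def forman_curvature(G: nx.Graph, u, v) -> int:
    """F(u, v) = 4 - deg(u) - deg(v): very negative on bridge-like edges."""
    return 4 - G.degree[u] - G.degree[v]

G = nx.barbell_graph(5, 1)  # two cliques joined by a path: a clear bottleneck
curvatures = {e: forman_curvature(G, *e) for e in G.edges}
bottleneck = min(curvatures, key=curvatures.get)
print(bottleneck, curvatures[bottleneck])  # the bridge edge is the most negative
```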
Most graph neural networks (GNNs) use the message passing paradigm, in which node features are propagated on an input graph. Recent works have pointed to the distortion of information flowing from distant nodes as a factor limiting the efficiency of message passing for tasks relying on long-distance interactions. This phenomenon, known as "over-squashing", has been heuristically attributed to graph bottlenecks where the number of $k$-hop neighbors grows rapidly with $k$. We provide a precise description of the over-squashing phenomenon in GNNs and analyze how it arises from bottlenecks in the graph. For this purpose, we introduce a new edge-based combinatorial curvature and prove that negatively curved edges are responsible for the over-squashing problem. We also propose and experimentally test a curvature-based graph rewiring method to alleviate over-squashing.
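A simplified sketch of the curvature-guided rewiring recipe this line of work proposes: repeatedly locate the most negatively curved edge and add a supporting edge around it. The paper's actual method (SDRF) uses a Balanced Forman curvature and samples the added edge via a softmax; the greedy loop below is an illustrative stand-in that can be paired with the forman_curvature function sketched above:

```python
# Simplified curvature-guided rewiring: greedily support the most
# negatively curved edge with a new edge between its endpoints' neighbors.
import itertools
import networkx as nx

def rewire(G: nx.Graph, curvature, n_iters: int = 10) -> nx.Graph:
    G = G.copy()
    for _ in range(n_iters):
        u, v = min(G.edges, key=lambda e: curvature(G, *e))
        # candidate support edges: neighbor-of-u to neighbor-of-v, not yet present
        candidates = [(i, j) for i, j in itertools.product(G[u], G[v])
                      if i != j and not G.has_edge(i, j)]
        if not candidates:
            break
        G.add_edge(*candidates[0])  # greedy choice; SDRF samples instead
    return G
```

For example, `H = rewire(G, forman_curvature)` reuses the curvature function from the previous sketch.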
Graph Neural Networks (GNNs) have been successfully applied in many applications in computer science. Despite the success of deep learning architectures in other domains, deep GNNs still underperform their shallow counterparts. There are many open questions about deep GNNs, but over-smoothing and over-squashing are perhaps the most intriguing issues. When stacking multiple graph convolutional layers, the over-smoothing and over-squashing problems arise; they have been defined as the inability of GNNs to learn deep representations and to propagate information from distant nodes, respectively. Even though the widespread definitions of both problems are similar, these phenomena have been studied independently. This work strives to understand the underlying relationship between over-smoothing and over-squashing from a topological perspective. We show that both problems are intrinsically related to the spectral gap of the Laplacian of the graph. There is therefore a trade-off between them: we cannot simultaneously alleviate both over-smoothing and over-squashing. We also propose a Stochastic Jost and Liu curvature Rewiring (SJLR) algorithm based on a bound of Ollivier's Ricci curvature. SJLR is less expensive than previous curvature-based rewiring methods while retaining fundamental properties. Finally, we perform a thorough comparison of SJLR with previous techniques for alleviating over-smoothing or over-squashing, seeking to gain a better understanding of both problems.
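The curvature bound at the heart of SJLR can be computed locally per edge. A sketch of the Jost-Liu lower bound on Ollivier's Ricci curvature (constants per Jost & Liu, 2014; SJLR's stochastic add/drop step itself is not shown):

```python
# Jost-Liu lower bound on Ollivier-Ricci curvature for an edge (x, y):
# cheap to evaluate, since it needs only degrees and the triangle count.
import networkx as nx

def jost_liu_bound(G: nx.Graph, x, y) -> float:
    dx, dy = G.degree[x], G.degree[y]
    tri = len(set(G[x]) & set(G[y]))  # triangles through the edge (x, y)
    pos = lambda t: max(t, 0.0)       # (.)_+ in the bound
    return (-pos(1 - 1/dx - 1/dy - tri/min(dx, dy))
            - pos(1 - 1/dx - 1/dy - tri/max(dx, dy))
            + tri/max(dx, dy))
```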
Most graph neural network models rely on a particular message passing paradigm, where the idea is to iteratively propagate the node representations of a graph to each node in its direct neighborhood. While very prominent, this paradigm leads to information propagation bottlenecks, as information is repeatedly compressed at intermediary node representations, which causes loss of information and makes it practically impossible to gather meaningful signals from distant nodes. To address this issue, we propose shortest path message passing neural networks, where the node representations of a graph are propagated to each node in its shortest path neighborhoods. In this setting, nodes can communicate directly with each other even if they are not neighbors, breaking the information bottleneck and hence leading to more adequately learned representations. Theoretically, our framework generalizes message passing neural networks, resulting in provably more expressive models, and we show that some recent state-of-the-art models are special instances of this framework. Empirically, we verify the capacity of a basic model of this framework on dedicated synthetic experiments and on real-world graph classification and regression benchmarks, and obtain state-of-the-art results.
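As a hedged illustration of the mechanism, here is one plausible shortest-path message passing layer with a separate weight matrix per distance k ≤ K; the names and the mean aggregation are illustrative choices, not the paper's exact parameterization:

```python
# One shortest-path message passing layer: aggregate features per
# shortest-path distance k <= K, with a weight matrix W[k-1] per distance.
# Assumes nodes are labeled 0..n-1 so they can index rows of X directly.
import networkx as nx
import numpy as np

def sp_layer(G: nx.Graph, X: np.ndarray, W: list[np.ndarray], K: int) -> np.ndarray:
    out = np.zeros((X.shape[0], W[0].shape[1]))
    for v in G:
        dists = nx.single_source_shortest_path_length(G, v, cutoff=K)
        for k in range(1, K + 1):
            ring = [u for u, d in dists.items() if d == k]
            if ring:  # mean over the exact-k-hop shortest-path neighborhood
                out[v] += X[ring].mean(axis=0) @ W[k - 1]
    return out
```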
Graph Neural Networks (GNNs) have been shown to achieve competitive results on graph-related tasks such as node and graph classification, link prediction, and node and graph clustering, across a variety of domains. Most GNNs use the message passing framework and are hence called MPNNs. Despite promising results, MPNNs have been reported to suffer from over-smoothing, over-squashing, and under-reaching. Graph rewiring and graph pooling have been proposed in the literature as solutions to these limitations. However, most state-of-the-art graph rewiring methods fail to preserve the global topology of the graph, are not differentiable (inductive), and require the tuning of hyperparameters. In this paper, we propose DiffWire, a novel framework for graph rewiring in MPNNs that is principled, fully differentiable, and parameter-free, by leveraging the Lovász bound. Our approach provides a unified theory of graph rewiring through two novel, complementary layers for MPNNs: first, CT-Layer, a layer that learns commute times and uses them as a relevance function for edge re-weighting; second, GAP-Layer, a layer that optimizes the spectral gap, depending on the nature of the network and the task at hand. We empirically validate the value of the proposed approach, and of each of these layers separately, on benchmark datasets for graph classification. DiffWire connects the learnability of commute times to related definitions of curvature, opening the door to the development of more expressive MPNNs.
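The quantity CT-Layer learns has a closed form: the commute time between u and v is vol(G)(L⁺_uu + L⁺_vv − 2L⁺_uv), where L⁺ is the pseudoinverse of the graph Laplacian. A dense, non-differentiable reference sketch follows; DiffWire itself learns commute times differentiably rather than computing them this way:

```python
# Closed-form commute times from the Laplacian pseudoinverse.
import networkx as nx
import numpy as np

def commute_times(G: nx.Graph) -> np.ndarray:
    L = nx.laplacian_matrix(G).toarray().astype(float)
    Lp = np.linalg.pinv(L)
    vol = 2 * G.number_of_edges()          # vol(G) = sum of degrees
    d = np.diag(Lp)
    return vol * (d[:, None] + d[None, :] - 2 * Lp)  # CT[u, v]
```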
Hyperbolic space is emerging as a promising learning space for representation learning, owing to its exponentially growing volume. Compared with flat Euclidean space, curved hyperbolic space offers far more room for embeddings, particularly for datasets with implicit tree-like structures, such as hierarchies and power-law distributions. On the other hand, the structure of a real-world network is usually intricate, with some regions being tree-like, some being flat, and others being circular. Directly embedding heterogeneous structural networks into a homogeneous embedding space unavoidably brings inductive biases and distortions. Encouragingly, discrete curvature can describe the local structure of a node and its surroundings well, which motivates us to explicitly investigate the information conveyed by the network topology to improve geometric learning. To this end, we explore the properties of the local discrete curvature of the graph topology and the continuous global curvature of the embedding space. In addition, we propose a Hyperbolic Curvature-aware Graph Neural Network, HCGNN. In particular, HCGNN uses the discrete curvature to guide message passing among the surroundings and simultaneously adjusts the continuous curvature adaptively. Extensive experiments on node classification and link prediction tasks show that the proposed method outperforms various competitive models by a large margin on both high- and low-hyperbolicity graph data. Case studies further illustrate the efficacy of discrete curvature in finding local clusters and alleviating the distortion caused by hyperbolic geometry.
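A sketch of the basic hyperbolic primitive such models build on: the exponential map at the origin of a Poincaré ball of curvature −c, which carries Euclidean tangent features into hyperbolic space (the full HCGNN architecture is considerably richer than this):

```python
# Exponential map at the origin of a Poincaré ball with curvature -c:
# exp_0(v) = tanh(sqrt(c) * ||v||) * v / (sqrt(c) * ||v||).
import numpy as np

def expmap0(v: np.ndarray, c: float = 1.0) -> np.ndarray:
    sqrt_c = np.sqrt(c)
    norm = np.linalg.norm(v, axis=-1, keepdims=True).clip(min=1e-15)
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)
```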
In recent years, algorithms and neural architectures based on the Weisfeiler-Leman algorithm, a well-known heuristic for the graph isomorphism problem, have emerged as a powerful tool for machine learning with graphs and relational data. Here, we provide a comprehensive overview of the algorithm's use in machine learning settings, focusing on the supervised regime. We discuss the theoretical background, show how it can be used for supervised graph and node representation learning, discuss recent extensions, and outline the algorithm's connections to (permutation-)equivariant neural architectures. Moreover, we give an overview of current applications and future directions to stimulate further research.
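For reference, a compact sketch of the 1-WL color refinement at the center of this line of work: iteratively hash each node's color together with the multiset of its neighbors' colors, stopping once the number of color classes stops growing:

```python
# 1-WL color refinement (node coloring version of the Weisfeiler-Leman test).
import networkx as nx

def wl_colors(G: nx.Graph, T: int = 10) -> dict:
    colors = {v: 0 for v in G}  # uniform initial coloring
    for _ in range(T):
        new = {v: hash((colors[v], tuple(sorted(colors[u] for u in G[v]))))
               for v in G}
        # refinement only splits classes, so an equal count means stable
        if len(set(new.values())) == len(set(colors.values())):
            return new
        colors = new
    return colors
```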
Graph kernels have attracted a lot of attention during the last decade and have evolved into a rapidly developing branch of learning on structured data. The considerable research activity that has taken place in this field over the past 20 years has led to the development of dozens of graph kernels, each focusing on specific structural properties of graphs. Graph kernels have been applied successfully to a wide range of domains, from social networks to bioinformatics. The goal of this survey is to provide a unified view of the literature on graph kernels. In particular, we give an overview of a wide range of graph kernels. Moreover, we perform an experimental evaluation of several of these kernels on publicly available datasets and provide a comparative study. Finally, we discuss key applications of graph kernels and outline some challenges that remain open.
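As one concrete example of the kernels such surveys cover, a sketch of the classical shortest-path kernel: represent each graph by a histogram of its pairwise shortest-path lengths and compare graphs via the dot product of these histograms (the truncation at max_len is an illustrative simplification):

```python
# Shortest-path graph kernel via path-length histograms.
import networkx as nx
import numpy as np

def sp_histogram(G: nx.Graph, max_len: int = 10) -> np.ndarray:
    h = np.zeros(max_len + 1)
    for _, dists in nx.all_pairs_shortest_path_length(G):
        for d in dists.values():
            if 0 < d <= max_len:
                h[d] += 1
    return h

def sp_kernel(G1: nx.Graph, G2: nx.Graph) -> float:
    return float(sp_histogram(G1) @ sp_histogram(G2))
```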
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
Graph Neural Networks (GNNs) have been demonstrated to be inherently susceptible to the problems of over-smoothing and over-squashing. These issues impair the ability of GNNs to model complex graph interactions by limiting their effectiveness at taking distant information into account. Our study reveals the key connection between the local graph geometry and the occurrence of both of these issues, thereby providing a unified framework for studying them at a local scale using Ollivier's Ricci curvature. Based on our theory, a number of principled methods are proposed to alleviate the over-smoothing and over-squashing issues.
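For an edge (x, y), Ollivier's Ricci curvature is κ(x, y) = 1 − W₁(μ_x, μ_y), where μ_x is (in the simplest variant) the uniform distribution on the neighbors of x and W₁ is the Wasserstein distance. A sketch that solves the small optimal-transport linear program exactly with scipy; lazy-random-walk variants of μ are common and not shown:

```python
# Exact Ollivier-Ricci curvature of an edge via a small transport LP.
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def ollivier_ricci(G: nx.Graph, x, y) -> float:
    nbrs_x, nbrs_y = list(G[x]), list(G[y])
    # ground costs: shortest-path distances between the two neighborhoods
    C = np.array([[nx.shortest_path_length(G, a, b) for b in nbrs_y]
                  for a in nbrs_x], dtype=float)
    n, m = C.shape
    # transport plan pi has n*m variables with uniform marginals
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1      # row sums = 1/n
    for j in range(m):
        A_eq[n + j, j::m] = 1               # column sums = 1/m
    b_eq = np.concatenate([np.full(n, 1 / n), np.full(m, 1 / m)])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, method="highs")
    return 1.0 - res.fun                    # edge length d(x, y) = 1
```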
As observed in recent works, the quality of signal propagation in message-passing graph neural networks (GNNs) strongly influences their expressivity. In particular, for prediction tasks relying on long-range interactions, recursive aggregation of node features can lead to an undesired phenomenon called "over-squashing". We present a framework for analyzing over-squashing based on information contraction. Our analysis is guided by a model of reliable computation due to von Neumann, which lends a new insight into over-squashing as signal quenching in noisy computation graphs. Building on this, we propose a graph rewiring algorithm aimed at alleviating over-squashing. Our algorithm employs a random local edge flip primitive motivated by an expander graph construction. We compare the spectral expansion properties of our algorithm with those of an existing curvature-based non-local rewiring strategy. Synthetic experiments show that while our algorithm in general has a slower rate of expansion, it is overall computationally cheaper, preserves node degrees exactly, and never disconnects the graph.
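The flip primitive can be approximated with standard tooling: networkx's double_edge_swap replaces two edges with two new edges on the same four endpoints, so every node degree is preserved. Note this stand-in is non-local and, unlike the paper's algorithm, can disconnect the graph:

```python
# Degree-preserving edge swaps as a stand-in for the paper's local flips.
import networkx as nx

G = nx.random_regular_graph(d=4, n=50, seed=0)
degrees_before = dict(G.degree)
nx.double_edge_swap(G, nswap=100, max_tries=10_000, seed=0)
assert dict(G.degree) == degrees_before  # degrees exactly preserved
print(nx.is_connected(G))  # unlike the paper's method, not guaranteed True
```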
Pre-publication draft of a book to be published by Morgan & Claypool publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.
A prominent paradigm for graph neural networks is based on the message passing framework. In this framework, information communication is realized only between neighboring nodes. The challenge of approaches that use this paradigm is to ensure efficient and accurate long-distance communication between nodes, as deep convolutional networks are prone to over-smoothing. In this paper, we present a novel method based on time derivative graph diffusion (TIDE), with a learnable time parameter. Our approach allows us to adapt the spatial extent of diffusion across different tasks and network channels, thus enabling medium- and long-distance communication efficiently. Furthermore, we show that our architecture directly enables local message passing and thus inherits the expressive power of local message passing approaches. We show that on widely used graph benchmarks we achieve comparable performance, and that on a synthetic mesh dataset we outperform state-of-the-art methods like GCN or GRAND by a significant margin.
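The diffusion primitive behind TIDE can be sketched in closed form: features evolve by the graph heat equation, X(t) = exp(−tL)X, and the diffusion time t is the knob TIDE makes learnable per task and channel (here just a plain float, with a dense matrix exponential for brevity):

```python
# Heat diffusion of node features with an explicit time parameter t.
import networkx as nx
import numpy as np
from scipy.linalg import expm

def diffuse(G: nx.Graph, X: np.ndarray, t: float) -> np.ndarray:
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return expm(-t * L) @ X  # small t: local smoothing; large t: long range
```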
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. Design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which emerged as a de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair evaluation experimental protocol and choose a proper dataset collection. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions, which shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an efficient tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
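A sketch of the Jumping Knowledge combination the thesis builds on: keep every layer's node representations and merge them at the end. Concatenation is shown; max- and LSTM-based combinations are also used in the original architecture:

```python
# Jumping Knowledge aggregation by concatenating per-layer representations.
import numpy as np

def jumping_knowledge(layer_outputs: list[np.ndarray]) -> np.ndarray:
    # layer_outputs[l] has shape (num_nodes, d_l)
    return np.concatenate(layer_outputs, axis=1)
```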
By recursively summing node features over entire neighborhoods, spatial graph convolution operators have been heralded as key to the success of Graph Neural Networks (GNNs). Yet, despite the proliferation of GNN methods across tasks and applications, the impact of this aggregation operation on their performance has not been extensively analyzed. In fact, while efforts have mostly focused on optimizing the architecture of the neural network, fewer works have attempted to characterize (a) the different classes of spatial convolution operators, (b) how the choice of a particular class relates to properties of the data, and (c) its impact on the geometry of the embedding space. In this paper, we propose to answer all three questions by dividing existing operators into two main classes (symmetrized vs. row-normalized spatial convolutions) and showing how these translate into different implicit biases on the nature of the data. Finally, we show that this aggregation operator is in fact tunable, and we make explicit the regimes in which certain choices of operators, and therefore of embedding geometries, might be more appropriate.
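The two classes can be stated in two lines. For adjacency matrix A and degree matrix D (assuming no isolated nodes), the symmetrized operator is D^{-1/2} A D^{-1/2} and the row-normalized (random-walk) operator is D^{-1} A:

```python
# The two aggregation operator classes on the same adjacency matrix.
import numpy as np

def convolution_operators(A: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    d = A.sum(axis=1)                     # node degrees (assumed nonzero)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    D_inv = np.diag(1.0 / d)
    return D_inv_sqrt @ A @ D_inv_sqrt, D_inv @ A  # symmetrized, row-normalized
```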
Combinatorial optimization is a well-established area in operations research and computer science. Until recently, its methods focused on solving problem instances in isolation, ignoring that they often stem from related data distributions in practice. However, recent years have seen a surge of interest in using machine learning, and especially graph neural networks (GNNs), as a key building block for combinatorial tasks, either directly as solvers or by enhancing exact solvers. The inductive bias of GNNs effectively encodes combinatorial and relational input due to their invariance to permutations and their awareness of input sparsity. This paper presents a conceptual review of recent key advancements in this emerging field, aimed at optimization and machine learning researchers.
Message-passing neural networks (MPNNs) are the leading architecture for deep learning on graph-structured data, in large part due to their simplicity and scalability. Unfortunately, it has been shown that these architectures are limited in their expressive power. This paper proposes a novel framework called Equivariant Subgraph Aggregation Networks (ESAN) to address this issue. Our main observation is that while two graphs may not be distinguishable by an MPNN, they often contain distinguishable subgraphs. We therefore propose to represent each graph as a set of subgraphs derived by some predefined policy, and to process it using a suitable equivariant architecture. We develop novel variants of the 1-dimensional Weisfeiler-Leman (1-WL) test for graph isomorphism, and prove lower bounds on the expressiveness of ESAN in terms of these new WL variants. We further prove that our approach increases the expressive power of both MPNNs and more expressive architectures. Moreover, we provide theoretical results that describe how design choices such as the subgraph selection policy and the equivariant neural architecture affect our architecture's expressive power. To deal with the increased computational cost, we propose a subgraph sampling scheme, which can be viewed as a stochastic version of our framework. A comprehensive set of experiments on real and synthetic datasets demonstrates that our framework improves the expressive power and overall performance of popular GNN architectures.
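A toy sketch of the bag-of-subgraphs idea, using the node-deletion policy and mean pooling; `encode` stands in for any shared (equivariant) graph encoder and is a placeholder, not part of ESAN's API:

```python
# Bag-of-subgraphs embedding: derive subgraphs by node deletion,
# embed each with one shared encoder, and pool the results.
import networkx as nx
import numpy as np

def esan_style_embed(G: nx.Graph, encode) -> np.ndarray:
    bag = [nx.subgraph(G, set(G) - {v}) for v in G]   # node-deletion policy
    return np.mean([encode(S) for S in bag], axis=0)  # DeepSets-style pooling
```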
In recent years, graph neural networks (GNNs) have emerged as a promising tool for solving machine learning problems on graphs. Most GNNs are members of the family of message passing neural networks (MPNNs). There is a close connection between these models and the Weisfeiler-Leman (WL) test of isomorphism, an algorithm that can successfully test isomorphism for a broad class of graphs. Recently, much research has focused on measuring the expressive power of GNNs. For instance, it has been shown that standard MPNNs are at most as powerful as WL in terms of distinguishing non-isomorphic graphs. However, these studies have largely ignored the distances between the representations of nodes/graphs which are of paramount importance for learning tasks. In this paper, we define a distance function between nodes which is based on the hierarchy produced by the WL algorithm, and propose a model that learns representations which preserve those distances between nodes. Since the emerging hierarchy corresponds to a tree, to learn these representations, we capitalize on recent advances in the field of hyperbolic neural networks. We empirically evaluate the proposed model on standard node and graph classification datasets where it achieves competitive performance with state-of-the-art models.
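As a toy instantiation (the paper's exact construction may differ), one can let the distance between two nodes decay with the first WL iteration at which their colors diverge:

```python
# Toy WL-hierarchy distance: nodes that stay identically colored for more
# refinement rounds are considered closer, with distance decaying as 2^{-t}.
import networkx as nx

def wl_distance(G: nx.Graph, u, v, T: int = 5) -> float:
    colors = {w: 0 for w in G}
    for t in range(T):
        if colors[u] != colors[v]:
            return 2.0 ** -t  # colors diverged at iteration t
        colors = {w: hash((colors[w], tuple(sorted(colors[x] for x in G[w]))))
                  for w in G}
    return 0.0 if colors[u] == colors[v] else 2.0 ** -T
```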
Graph augmentation plays a vital role in regularizing Graph Neural Networks (GNNs), which leverage information exchange along the edges of a graph in the form of message passing. Due to their effectiveness, simple edge and node manipulations (e.g., addition and deletion) have been widely used in graph augmentation. Nevertheless, such common augmentation techniques can dramatically change the semantics of the original graph, leading to overaggressive augmentation and thus under-fitting in GNN learning. To address this problem arising from dropping or adding graph edges and nodes, we propose SoftEdge, which assigns random weights to a portion of the edges of a given graph for augmentation. The synthetic graph generated by SoftEdge maintains the same nodes and connectivity as the original graph, thus mitigating semantic changes to it. We empirically show that this simple method obtains superior accuracy to popular node and edge manipulation approaches, as well as notable resilience to accuracy degradation with GNN depth.
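SoftEdge as described reduces to a few lines: sample a subset of edges and soften their weights, leaving nodes and connectivity untouched (p and the uniform weight distribution are illustrative choices):

```python
# SoftEdge-style augmentation: random weights on a subset of edges,
# no edges or nodes added or removed.
import random
import networkx as nx

def soft_edge(G: nx.Graph, p: float = 0.3, seed: int = 0) -> nx.Graph:
    rng = random.Random(seed)
    H = G.copy()
    nx.set_edge_attributes(H, 1.0, "weight")
    for e in H.edges:
        if rng.random() < p:
            H.edges[e]["weight"] = rng.random()  # softened, never removed
    return H
```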
In the last few years, graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs. This emerging field has witnessed an extensive growth of promising techniques that have been applied with success to computer science, mathematics, biology, physics and chemistry. But for any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress. This led us in March 2020 to release a benchmark framework that i) comprises a diverse collection of mathematical and real-world graphs, ii) enables fair model comparison with the same parameter budget to identify key architectures, iii) has an open-source, easy-to-use and reproducible code infrastructure, and iv) is flexible for researchers to experiment with new theoretical ideas. As of December 2022, the GitHub repository has reached 2,000 stars and 380 forks, which demonstrates the utility of the proposed open-source framework through the wide usage by the GNN community. In this paper, we present an updated version of our benchmark with a concise presentation of the aforementioned framework characteristics, an additional medium-sized molecular dataset AQSOL, similar to the popular ZINC, but with a real-world measured chemical target, and discuss how this framework can be leveraged to explore new GNN designs and insights. As a proof of value of our benchmark, we study the case of graph positional encoding (PE) in GNNs, which was introduced with this benchmark and has since spurred interest in exploring more powerful PE for Transformers and GNNs in a robust experimental setting.
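As a pointer to the PE this benchmark introduced, a sketch of the Laplacian positional encoding: the k nontrivial eigenvectors of the normalized graph Laplacian, attached to nodes as extra input features:

```python
# Laplacian positional encodings from the normalized Laplacian spectrum.
import networkx as nx
import numpy as np

def laplacian_pe(G: nx.Graph, k: int = 8) -> np.ndarray:
    L = nx.normalized_laplacian_matrix(G).toarray()
    eigvals, eigvecs = np.linalg.eigh(L)
    # eigenvector signs are arbitrary; in practice they are randomly
    # flipped during training
    return eigvecs[:, 1:k + 1]  # skip the trivial constant eigenvector
```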