Graph kernels are historically the most widely used technique for graph classification tasks. However, these methods suffer from limited performance due to the hand-crafted combinatorial features of graphs. In recent years, graph neural networks (GNNs) have become the state-of-the-art method for downstream graph-related tasks thanks to their superior performance. Most GNNs are based on the message passing neural network (MPNN) framework. However, recent studies show that MPNNs cannot exceed the power of the Weisfeiler-Lehman (WL) algorithm in graph isomorphism testing. To address the limitations of existing graph kernel and GNN methods, in this paper we propose a novel GNN framework, termed \textit{Kernel Graph Neural Networks} (KerGNNs), which integrates graph kernels into the message passing process of GNNs. Inspired by convolution filters in convolutional neural networks (CNNs), KerGNNs adopt trainable hidden graphs as graph filters, which are combined with subgraphs to update node embeddings using graph kernels. In addition, we show that MPNNs can be viewed as special cases of KerGNNs. We apply KerGNNs to multiple graph-related tasks and use cross-validation to make fair comparisons with benchmarks. We show that our method achieves competitive performance compared with existing state-of-the-art methods, demonstrating the potential to increase the representation power of GNNs. We also show that the trained graph filters in KerGNNs can reveal the local graph structures of the dataset, which significantly improves model interpretability compared with conventional GNN models.
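As a rough illustration of the filtering idea, here is a minimal sketch in which each node's new features are the graph-kernel similarities between its 1-hop ego-subgraph and a set of small "hidden graph" filters. The random-walk kernel, the 1-hop neighbourhood, and the random symmetric filters are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def random_walk_kernel(A1, A2, steps=3):
    """Count weighted common walks via the direct-product (Kronecker) graph."""
    Ax = np.kron(A1, A2)
    ones = np.ones(Ax.shape[0])
    return sum(ones @ np.linalg.matrix_power(Ax, k) @ ones for k in range(1, steps + 1))

def kergnn_style_layer(A, hidden_graphs):
    """One KerGNN-style layer: each node's new feature vector is its
    ego-subgraph's kernel similarity to each hidden graph (one channel per filter)."""
    n = A.shape[0]
    out = np.zeros((n, len(hidden_graphs)))
    for v in range(n):
        ego = np.concatenate(([v], np.flatnonzero(A[v])))   # 1-hop ego subgraph
        A_ego = A[np.ix_(ego, ego)]
        for j, W in enumerate(hidden_graphs):               # W: learnable filter
            out[v, j] = random_walk_kernel(A_ego, W)
    return out

# toy usage: a 4-cycle and two random 3-node hidden graphs
A = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], float)
filters = [np.random.rand(3, 3) for _ in range(2)]
filters = [(W + W.T) / 2 for W in filters]                  # symmetric "graphs"
print(kergnn_style_layer(A, filters))
```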
The convolution operator at the core of many modern neural architectures can effectively be seen as performing a dot product between an input matrix and a filter. While this is readily applicable to data such as images, which can be represented as regular grids in Euclidean space, extending the convolution operator to work on graphs proves more challenging, due to their irregular structure. In this paper, we propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain. This allows us to define an entirely structural model that does not require computing the embedding of the input graph. Our architecture allows plugging in any type and number of graph kernels, and has the added benefit of providing some interpretability in terms of the structural masks learned during training, similar to what happens with convolutional masks in traditional convolutional neural networks. We perform an extensive ablation study to investigate the impact of the model hyper-parameters, and we show that our model achieves competitive performance on standard graph classification datasets.
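The "convolution as inner product" analogy is easiest to see with a kernel whose feature map is explicit. Below is a minimal sketch using a vertex-label histogram kernel between a subgraph "patch" and a small labelled "filter"; the kernel choice is an assumption made for illustration, since the architecture accepts arbitrary graph kernels:

```python
import numpy as np
from collections import Counter

def label_histogram(labels, vocabulary):
    """Explicit feature map: counts of each label over a shared vocabulary."""
    counts = Counter(labels)
    return np.array([counts[l] for l in vocabulary], float)

def vertex_histogram_kernel(labels1, labels2):
    """Graph kernel = dot product of the two explicit feature vectors."""
    vocab = sorted(set(labels1) | set(labels2))
    return label_histogram(labels1, vocab) @ label_histogram(labels2, vocab)

# "patch" = labels of a subgraph; "filter" = a small structural mask's labels
patch_labels  = ["C", "C", "O", "H"]
filter_labels = ["C", "O"]
print(vertex_histogram_kernel(patch_labels, filter_labels))  # -> 3.0 (2*1 + 1*1)
```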
Graph kernels have attracted a lot of attention during the last decade and have evolved into a rapidly developing branch of learning on structured data. The considerable research activity that occurred in the field during the past 20 years has resulted in the development of dozens of graph kernels, each focusing on specific structural properties of graphs. Graph kernels have been successfully applied to a wide range of domains, from social networks to bioinformatics. The goal of this survey is to provide a unifying view of the literature on graph kernels. In particular, we present an overview of a wide range of graph kernels. Moreover, we perform an experimental evaluation of several of those kernels on publicly available datasets and provide a comparative study. Finally, we discuss key applications of graph kernels and outline some challenges that remain to be addressed.
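For concreteness, here is a compact sketch of the Weisfeiler-Lehman subtree kernel, one of the most widely used kernels covered by such surveys: iterative neighbourhood relabelling followed by a dot product of label histograms (simplified; production implementations hash the compressed labels instead of keeping tuples):

```python
import numpy as np
from collections import Counter

def wl_subtree_features(adj, labels, iterations=2):
    """Accumulate a histogram of compressed labels over WL refinement rounds."""
    feats = Counter(labels)
    for _ in range(iterations):
        labels = [
            (labels[v],) + tuple(sorted(labels[u] for u in np.flatnonzero(adj[v])))
            for v in range(len(labels))
        ]
        feats.update(labels)
    return feats

def wl_kernel(adj1, labels1, adj2, labels2, iterations=2):
    f1 = wl_subtree_features(adj1, labels1, iterations)
    f2 = wl_subtree_features(adj2, labels2, iterations)
    return sum(f1[k] * f2[k] for k in f1)      # dot product of histograms

# toy usage: triangle vs. path, all vertices labelled 'a'
tri  = np.array([[0,1,1],[1,0,1],[1,1,0]])
path = np.array([[0,1,0],[1,0,1],[0,1,0]])
print(wl_kernel(tri, ['a']*3, path, ['a']*3))
```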
In recent years, algorithms and neural architectures based on the Weisfeiler-Leman algorithm, a well-known heuristic for the graph isomorphism problem, have emerged as a powerful tool for machine learning with graphs and relational data. Here, we give a comprehensive overview of the algorithm's use in a machine learning setting, focusing on the supervised regime. We discuss the theoretical background, show how to use it for supervised graph and node representation learning, discuss recent extensions, and outline the algorithm's connection to (permutation-)equivariant neural architectures. Moreover, we give an overview of current applications and future directions to stimulate further research.
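A minimal sketch of the underlying 1-WL colour refinement heuristic itself: repeatedly recolour each node by its own colour together with the multiset of its neighbours' colours, until the partition stabilises. Differing colour histograms certify non-isomorphism; equal histograms are inconclusive (the heuristic's well-known limitation, e.g., on regular graphs):

```python
from collections import Counter

def wl_refine(adj, rounds=None):
    """1-WL colour refinement on an adjacency-list graph; returns the
    stable colour histogram."""
    n = len(adj)
    colors = [0] * n
    for _ in range(rounds or n):
        signatures = [
            (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in range(n)
        ]
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures)))}
        new_colors = [palette[sig] for sig in signatures]
        if new_colors == colors:       # partition is stable
            break
        colors = new_colors
    return Counter(colors)

star  = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(wl_refine(star) != wl_refine(path4))  # True -> provably non-isomorphic
```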
Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.
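The maximally expressive architecture this abstract describes (what became known as GIN) realises WL-level power through sum aggregation over the neighbour multiset followed by an MLP: h_v ← MLP((1 + ε)·h_v + Σ_{u∈N(v)} h_u). A minimal numpy sketch of one such layer, with randomly initialised weights standing in for trained ones:

```python
import numpy as np

def gin_layer(A, H, W1, b1, W2, b2, eps=0.0):
    """One GIN update: injective sum aggregation, then a one-hidden-layer ReLU MLP."""
    agg = (1.0 + eps) * H + A @ H            # (1+eps)*self + sum of neighbours
    hidden = np.maximum(agg @ W1 + b1, 0.0)  # ReLU
    return hidden @ W2 + b2

# toy usage on a triangle with 2-d node features
rng = np.random.default_rng(0)
A = np.array([[0,1,1],[1,0,1],[1,1,0]], float)
H = rng.normal(size=(3, 2))
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
print(gin_layer(A, H, W1, b1, W2, b2).shape)  # (3, 2)
```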
Graph neural networks (GNNs) have recently become the dominant paradigm for machine learning with graphs. Research on GNNs has mainly focused on the family of message passing neural networks (MPNNs). Similar to the Weisfeiler-Leman (WL) test of isomorphism, these models follow an iterative neighborhood aggregation procedure to update vertex representations, and compute graph representations by aggregating the vertex representations. Although very successful, MPNNs have been studied intensively in the past few years. Thus, novel architectures are needed that will allow research in the field to move beyond MPNNs. In this paper, we propose a new graph neural network model, the so-called $\pi$-GNN, which learns a "soft" permutation (i.e., doubly stochastic) matrix for each graph, thereby projecting all graphs into a common vector space. The learned matrices impose a "soft" ordering on the vertices of the input graphs, and based on this ordering, the adjacency matrices are mapped into vectors. These vectors can be fed into fully connected or convolutional layers to deal with supervised learning tasks. In the case of large graphs, to make the model more efficient in terms of running time and memory, we further relax the doubly stochastic matrices to row stochastic matrices. We empirically evaluate the model on graph classification and graph regression datasets and show that it achieves performance competitive with state-of-the-art models.
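One common way to obtain such a "soft" permutation is Sinkhorn normalisation of a learned score matrix; the sketch below uses it to softly reorder an adjacency matrix and flatten it into a fixed-length vector. This is an illustrative reconstruction of the pipeline's shape, not the paper's exact parameterisation:

```python
import numpy as np

def sinkhorn(logits, iterations=20):
    """Alternate row/column normalisation to approach a doubly stochastic matrix."""
    P = np.exp(logits)
    for _ in range(iterations):
        P /= P.sum(axis=1, keepdims=True)   # rows sum to 1
        P /= P.sum(axis=0, keepdims=True)   # columns sum to 1
    return P

def soft_sort_adjacency(A, P):
    """Softly reorder vertices and flatten the adjacency into a vector that
    downstream fully connected / convolutional layers can consume."""
    return (P @ A @ P.T).ravel()

rng = np.random.default_rng(1)
A = np.array([[0,1,0],[1,0,1],[0,1,0]], float)
P = sinkhorn(rng.normal(size=(3, 3)))       # stand-in for learned scores
print(soft_sort_adjacency(A, P).shape)      # (9,)
```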
Pre-publication draft of a book to be published by Morgan & Claypool publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.
The most popular design paradigm for graph neural networks (GNNs) is 1-hop message passing, which iteratively aggregates features from 1-hop neighbors. However, the expressive power of 1-hop message passing is bounded by the Weisfeiler-Lehman (1-WL) test. Recently, researchers have extended 1-hop message passing to K-hop message passing by aggregating information from a node's K-hop neighbors simultaneously. However, no work has analyzed the expressive power of K-hop message passing. In this work, we theoretically characterize the expressive power of K-hop message passing. Specifically, we first formally differentiate two kinds of kernels of K-hop message passing which are often misused in previous works. We then characterize the expressive power of K-hop message passing by showing that it is more powerful than 1-hop message passing. Despite this higher expressive power, we show that K-hop message passing still cannot distinguish some simple regular graphs. To further enhance its expressive power, we introduce the KP-GNN framework, which improves K-hop message passing by leveraging the peripheral subgraph information in each hop. We prove that KP-GNN can distinguish almost all regular graphs, including some distance-regular graphs that cannot be distinguished by previous distance encoding methods. Experimental results verify the expressive power and effectiveness of KP-GNN. KP-GNN achieves competitive results across all benchmark datasets.
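A minimal sketch of plain K-hop message passing under the shortest-path-distance kernel (nodes at exactly distance k form the k-th hop; the other kernel the paper distinguishes instead uses walks of length exactly k), with one weight matrix per hop and the paper's peripheral-subgraph enhancement omitted:

```python
import numpy as np

def khop_message_passing(A, H, weights):
    """Aggregate features from exactly-k-hop neighbours for k = 1..K."""
    n = A.shape[0]
    reach = np.eye(n, dtype=bool)            # nodes reached within k-1 hops
    Ak = np.eye(n)
    out = np.zeros((n, weights[0].shape[1]))
    for Wk in weights:                       # one weight matrix per hop
        Ak = Ak @ A
        exactly_k = (Ak > 0) & ~reach        # reachable at hop k but not closer
        out += exactly_k.astype(float) @ H @ Wk
        reach |= exactly_k
    return out

# toy usage: path on 4 nodes, K = 2 hops, 2-d features to 3-d output
rng = np.random.default_rng(2)
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
H = rng.normal(size=(4, 2))
weights = [rng.normal(size=(2, 3)) for _ in range(2)]
print(khop_message_passing(A, H, weights).shape)  # (4, 3)
```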
Message passing neural networks (MPNNs) are the leading architecture for deep learning on graph-structured data, in large part due to their simplicity and scalability. Unfortunately, it has been shown that these architectures have limited expressive power. This paper proposes a novel framework called Equivariant Subgraph Aggregation Networks (ESAN) to address this issue. Our main observation is that while two graphs may not be distinguishable by an MPNN, they often contain distinguishable subgraphs. Thus, we propose to represent each graph as a set of subgraphs derived by some predefined policy, and to process it using a suitable equivariant architecture. We develop novel variants of the 1-dimensional Weisfeiler-Leman (1-WL) test for graph isomorphism, and prove lower bounds on the expressiveness of ESAN in terms of these new WL variants. We further prove that our approach increases the expressive power of both MPNNs and more expressive architectures. Moreover, we provide theoretical results that describe how design choices such as the subgraph selection policy and the equivariant neural architecture affect our architecture's expressive power. To deal with the increased computational cost, we propose a subgraph sampling scheme, which can be viewed as a stochastic version of our framework. A comprehensive set of experiments on real and synthetic datasets demonstrates that our framework improves the expressive power and overall performance of popular GNN architectures.
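A toy sketch of the bag-of-subgraphs view, using node deletion as one example of a predefined selection policy and a degree-statistics stand-in for the shared equivariant encoder (the actual architecture uses a GNN encoder and a more careful equivariant aggregation):

```python
import numpy as np

def node_deletion_bag(A):
    """One predefined policy: the bag of all node-deleted subgraphs."""
    n = A.shape[0]
    keep = [np.delete(np.arange(n), v) for v in range(n)]
    return [A[np.ix_(idx, idx)] for idx in keep]

def simple_graph_encoder(A_sub):
    """Stand-in for the shared subgraph encoder: crude degree statistics."""
    deg = A_sub.sum(axis=1)
    return np.array([deg.sum(), deg.max(), (deg == 0).sum()])

def esan_style_embedding(A):
    """DeepSets-style sum over the bag: encode each subgraph with a shared
    encoder, then sum, making the result invariant to subgraph order."""
    return sum(simple_graph_encoder(S) for S in node_deletion_bag(A))

A = np.array([[0,1,1],[1,0,1],[1,1,0]], float)  # triangle
print(esan_style_embedding(A))
```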
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. The design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which emerged as a de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair experimental evaluation protocol and choose a proper collection of datasets. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions, which shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an efficient tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
Graph neural networks (GNNs) are recently proposed neural network structures for processing graph-structured data. Due to the neighborhood aggregation strategy they adopt, existing GNNs focus on capturing node-level information and neglect high-level information. Existing GNNs therefore suffer from representational limitations caused by the local permutation invariance (LPI) problem. To overcome these limitations and enrich the features captured by GNNs, we propose a novel GNN framework, referred to as the two-level GNN (TL-GNN), which merges subgraph-level information with node-level information. Moreover, we provide a mathematical analysis of the LPI problem, which demonstrates that subgraph-level information is beneficial for overcoming the problems associated with LPI. A subgraph counting method based on a dynamic programming algorithm is also proposed, with time complexity O(n^3), where n is the number of nodes of the graph. Experiments show that TL-GNN outperforms existing GNNs and achieves state-of-the-art performance.
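To make the O(n^3) subgraph-counting budget concrete, here is the simplest instance of counting a subgraph pattern at that cost: triangles via dense matrix multiplication. The paper's dynamic programming method covers more subgraph types; this sketch only illustrates the complexity class:

```python
import numpy as np

def count_triangles(A):
    """Count triangles in O(n^3): trace(A^3) counts closed 3-walks,
    and each triangle is counted 6 times (3 start nodes x 2 directions)."""
    return int(np.trace(A @ A @ A)) // 6

A = np.array([[0,1,1,0],
              [1,0,1,0],
              [1,1,0,1],
              [0,0,1,0]], float)   # one triangle plus a pendant vertex
print(count_triangles(A))          # 1
```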
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
The Transformer architecture has recently gained attention in graph representation learning, as it naturally overcomes several limitations of graph neural networks (GNNs) by avoiding their strict structural inductive bias and instead encoding the graph structure only via positional encoding. Here, we show that the node representations generated by a Transformer with positional encoding do not necessarily capture the structural similarity between them. To address this issue, we propose the Structure-Aware Transformer, a class of simple and flexible graph Transformers built upon a new self-attention mechanism. This new self-attention incorporates structural information by extracting a subgraph representation rooted at each node before computing the attention. We propose several methods for automatically generating the subgraph representations and show theoretically that the resulting representations are at least as expressive as the subgraph representations themselves. Empirically, our method achieves state-of-the-art performance on five graph prediction benchmarks. Our structure-aware framework can leverage any existing GNN to extract the subgraph representations, and we show that it systematically improves performance relative to the base GNN model, successfully combining the advantages of GNNs and Transformers. Our code is available at https://github.com/borgwardtlab/sat.
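A toy sketch of the core mechanism: queries and keys are computed from rooted-subgraph representations rather than raw node features. The k-hop mean extractor below is a stand-in; the framework allows any GNN as the extractor, and the real model includes the usual multi-head machinery omitted here:

```python
import numpy as np

def khop_subgraph_mean(A, X, v, k=1):
    """Toy subgraph extractor: mean feature over v's k-hop rooted subgraph."""
    reach = (np.linalg.matrix_power(A + np.eye(len(A)), k)[v] > 0)
    return X[reach].mean(axis=0)

def structure_aware_attention(A, X, Wq, Wk, Wv):
    """Self-attention where Q and K come from subgraph representations."""
    S = np.stack([khop_subgraph_mean(A, X, v) for v in range(len(A))])
    Q, K, V = S @ Wq, S @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)          # row-wise softmax
    return attn @ V

rng = np.random.default_rng(3)
A = np.array([[0,1,0],[1,0,1],[0,1,0]], float)
X = rng.normal(size=(3, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
print(structure_aware_attention(A, X, Wq, Wk, Wv).shape)   # (3, 4)
```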
Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph (e.g., degree statistics or kernel functions). However, recent years have seen a surge in approaches that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. Here we provide a conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph neural networks. We review methods to embed individual nodes as well as approaches to embed entire (sub)graphs. In doing so, we develop a unified framework to describe these recent approaches, and we highlight a number of important applications and directions for future work.
Learning node embeddings that capture a node's position within the broader graph structure is crucial for many prediction tasks on graphs. However, existing Graph Neural Network (GNN) architectures have limited power in capturing the position/location of a given node with respect to all other nodes of the graph. Here we propose Position-aware Graph Neural Networks (P-GNNs), a new class of GNNs for computing position-aware node embeddings. P-GNN first samples sets of anchor nodes, computes the distance of a given target node to each anchor-set, and then learns a non-linear distance-weighted aggregation scheme over the anchor-sets. This way P-GNNs can capture positions/locations of nodes with respect to the anchor nodes. P-GNNs have several advantages: they are inductive, scalable, and can incorporate node feature information. We apply P-GNNs to multiple prediction tasks including link prediction and community detection. We show that P-GNNs consistently outperform state-of-the-art GNNs, with up to 66% improvement in terms of the ROC AUC score.
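A sketch of the positional signal only: one coordinate per anchor-set, given by a distance-weighted score to the nearest anchor in the set. The 1/(1+d) weighting follows the standard distance-weighting idea, but the full model additionally mixes anchor features through a learned aggregation that this sketch omits:

```python
import numpy as np
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source over an adjacency dict."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def pgnn_position_features(adj, anchor_sets):
    """One position-aware coordinate per anchor set: 1 / (1 + d(v, S))."""
    n = len(adj)
    feats = np.zeros((n, len(anchor_sets)))
    for j, S in enumerate(anchor_sets):
        dists = [bfs_distances(adj, a) for a in S]
        for v in range(n):
            d = min(dd.get(v, np.inf) for dd in dists)   # distance to nearest anchor
            feats[v, j] = 1.0 / (1.0 + d)
    return feats

# toy usage: 5-cycle with two (hypothetical) anchor sets
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(pgnn_position_features(adj, anchor_sets=[{0}, {1, 3}]))
```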
In recent years, graph neural networks (GNNs) have emerged as a promising tool for solving machine learning problems on graphs. Most GNNs are members of the family of message passing neural networks (MPNNs). There is a close connection between these models and the Weisfeiler-Leman (WL) test of isomorphism, an algorithm that can successfully test isomorphism for a broad class of graphs. Recently, much research has focused on measuring the expressive power of GNNs. For instance, it has been shown that standard MPNNs are at most as powerful as WL in terms of distinguishing non-isomorphic graphs. However, these studies have largely ignored the distances between the representations of nodes/graphs which are of paramount importance for learning tasks. In this paper, we define a distance function between nodes which is based on the hierarchy produced by the WL algorithm, and propose a model that learns representations which preserve those distances between nodes. Since the emerging hierarchy corresponds to a tree, to learn these representations, we capitalize on recent advances in the field of hyperbolic neural networks. We empirically evaluate the proposed model on standard node and graph classification datasets where it achieves competitive performance with state-of-the-art models.
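One simple way to turn the WL hierarchy into a node distance, shown for intuition only (the paper's actual definition and its hyperbolic embedding of the resulting tree are more involved): nodes whose colours diverge at an early refinement round are far apart, and nodes that stay indistinguishable get distance 0:

```python
def wl_colors_per_iteration(adj, iterations=3):
    """Record the colour vector produced by each 1-WL refinement round."""
    n = len(adj)
    colors, history = [0] * n, [[0] * n]
    for _ in range(iterations):
        sigs = [(colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in range(n)]
        palette = {s: i for i, s in enumerate(sorted(set(sigs)))}
        colors = [palette[s] for s in sigs]
        history.append(colors)
    return history

def wl_hierarchy_distance(history, u, v):
    """Simplified stand-in for a hierarchy-based distance: 2^-t, where t is
    the first round at which u and v receive different colours."""
    for t, colors in enumerate(history):
        if colors[u] != colors[v]:
            return 2.0 ** -t
    return 0.0

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path on 4 nodes
hist = wl_colors_per_iteration(adj)
print(wl_hierarchy_distance(hist, 0, 3), wl_hierarchy_distance(hist, 0, 1))  # 0.0 0.5
```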
Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs-a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DIFFPOOL, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DIFFPOOL learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DIFFPOOL yields an average improvement of 5-10% accuracy on graph classification benchmarks, compared to all existing pooling approaches, achieving a new state-of-the-art on four out of five benchmark data sets.
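The pooling step itself is compact: with soft assignments S (row-softmax over cluster scores), features and adjacency are coarsened as X' = S^T Z and A' = S^T A S. A minimal numpy sketch of one DIFFPOOL step, where Z and the assignment logits stand in for the outputs of the layer's embedding and pooling GNNs:

```python
import numpy as np

def softmax_rows(Z):
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def diffpool(A, Z, S_logits):
    """One DIFFPOOL step: soft-assign n nodes to c clusters, then coarsen."""
    S = softmax_rows(S_logits)        # (n, c) soft cluster assignment
    X_coarse = S.T @ Z                # (c, d) pooled node features
    A_coarse = S.T @ A @ S            # (c, c) coarsened adjacency
    return A_coarse, X_coarse

rng = np.random.default_rng(4)
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
Z = rng.normal(size=(4, 5))           # embeddings from the embedding GNN
S_logits = rng.normal(size=(4, 2))    # scores from the pooling GNN (2 clusters)
A2, X2 = diffpool(A, Z, S_logits)
print(A2.shape, X2.shape)             # (2, 2) (2, 5)
```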
Graphs can model complex interactions between entities and arise naturally in many important applications. These applications can often be cast into standard graph learning tasks, in which a crucial step is to learn low-dimensional graph representations. Graph neural networks (GNNs) are currently the most popular models among embedding methods. However, standard GNNs in the neighborhood aggregation paradigm suffer from limited discriminative power in distinguishing high-order graph structures, as opposed to low-order structures. To capture high-order structures, researchers have resorted to motifs and developed motif-based GNNs. However, existing motif-based GNNs still suffer from limited discriminative power on high-order structures. To overcome the above limitations, we propose MGNN, a novel framework to better capture high-order structures, hinging on our proposed motif redundancy minimization operator and injective motif combination. First, MGNN produces a set of node representations with respect to each motif. The next phase is our proposed redundancy minimization among motifs, which compares the motifs against each other and distills the features unique to each motif. Finally, MGNN performs the updating of node representations by combining multiple representations from different motifs. In particular, to enhance the discriminative power, MGNN utilizes an injective function to combine the representations with respect to different motifs. We further show, with a theoretical analysis, that our proposed architecture increases the expressive power of GNNs. We demonstrate that MGNN outperforms state-of-the-art methods on seven public benchmarks on both node classification and graph classification tasks.
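A sketch of only the first stage, per-motif node representations, using the standard motif-weighted adjacency construction for the triangle motif (entry (u, v) counts the triangles containing edge (u, v)); the redundancy minimization and injective combination steps are not sketched here:

```python
import numpy as np

def triangle_motif_adjacency(A):
    """Motif adjacency for triangles: common-neighbour counts masked to edges."""
    return (A @ A) * A

def per_motif_representations(A, X, motif_adjs):
    """One feature propagation per motif adjacency -> one representation set each."""
    return [M @ X for M in motif_adjs]

A = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], float)
X = np.eye(4)
reps = per_motif_representations(A, X, [A, triangle_motif_adjacency(A)])
print(reps[1])                        # nonzero only inside the triangle {0,1,2}
```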
Graphs are a universal data structure widely used to organize real-world data. Various real-world networks, such as transportation networks and social and academic networks, can be represented by graphs. Recent years have witnessed the rapid development of techniques for representing the vertices of a network in a low-dimensional vector space, known as network representation learning. Representation learning can facilitate the design of new algorithms on graph data. In this survey, we conduct a comprehensive review of the current literature on network representation learning. Existing algorithms can be categorized into three groups: shallow embedding models, heterogeneous network embedding models, and graph neural network based models. We review state-of-the-art algorithms for each category and discuss the essential differences between these algorithms. One advantage of this survey is that we systematically study the theoretical foundations underlying the different categories of algorithms, which offers deep insights for better understanding the development of the network representation learning field.
In recent years, graph neural networks (GNNs) have emerged as a powerful neural architecture for learning vector representations of nodes and graphs in a supervised, end-to-end fashion. Up to now, GNNs have only been evaluated empirically, showing promising results. The following work investigates GNNs from a theoretical point of view and relates them to the $1$-dimensional Weisfeiler-Leman graph isomorphism heuristic ($1$-WL). We show that GNNs have the same expressive power as the $1$-WL in terms of distinguishing non-isomorphic (sub-)graphs. Hence, both algorithms also have the same shortcomings. Based on this, we propose a generalization of GNNs, so-called $k$-dimensional GNNs ($k$-GNNs), which can take higher-order graph structures at multiple scales into account. These higher-order structures play an essential role in the characterization of social networks and molecular graphs. Our experimental evaluation confirms our theoretical findings as well as the usefulness of higher-order information in the tasks of graph classification and regression.
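A sketch of only the lifting used by the set-based $k$-GNN: vertices of the lifted graph are $k$-element vertex subsets, and two subsets are adjacent iff they share exactly $k-1$ elements; message passing then runs on this lifted graph. Initial features, which in the paper encode each subset's induced isomorphism type, are omitted here:

```python
import numpy as np
from itertools import combinations

def kgnn_set_graph(A, k=2):
    """Build the k-set graph: nodes are k-subsets of vertices; two subsets
    are neighbours iff their intersection has size k-1."""
    n = A.shape[0]
    sets = list(combinations(range(n), k))
    index = {s: i for i, s in enumerate(sets)}
    lifted = np.zeros((len(sets), len(sets)))
    for s, i in index.items():
        for t, j in index.items():
            if i < j and len(set(s) & set(t)) == k - 1:
                lifted[i, j] = lifted[j, i] = 1.0
    return sets, lifted

A = np.array([[0,1,0],[1,0,1],[0,1,0]], float)  # path on 3 nodes
sets, lifted = kgnn_set_graph(A, k=2)
print(sets)        # [(0, 1), (0, 2), (1, 2)]
print(lifted)      # every pair of 2-sets shares one vertex here
```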