Data mining algorithms increasingly face the challenge of dealing with complex objects. For graph data, a whole toolbox of data mining algorithms becomes available by defining a kernel function on instances of graphs. Graph kernels based on walks, subtrees and cycles in graphs have been proposed so far. As a general problem, these kernels are either computationally expensive or limited in their expressiveness. We aim to overcome this problem by defining expressive graph kernels based on paths. As the computation of all paths and of longest paths in a graph is NP-hard, we propose graph kernels based on shortest paths. These kernels are computable in polynomial time, retain expressivity and are still positive definite. In experiments on classification of graph models of proteins, our shortest-path kernels show significantly higher classification accuracy than walk-based kernels.
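For concreteness, here is a minimal sketch of a shortest-path kernel of this kind, assuming networkx graphs whose nodes carry a discrete "label" attribute and comparable node ids; the helper names are hypothetical.

```python
# A minimal sketch of a shortest-path kernel: count (label, label, distance)
# triples over all node pairs, then take the dot product of the two histograms.
import networkx as nx
from collections import Counter

def sp_features(G):
    """Count (label_u, label_v, shortest-path length) triples for all node pairs."""
    feats = Counter()
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    for u, row in lengths.items():
        for v, d in row.items():
            if u < v and d > 0:  # assumes comparable node ids; skip self-pairs
                a, b = sorted((G.nodes[u]["label"], G.nodes[v]["label"]))
                feats[(a, b, d)] += 1
    return feats

def sp_kernel(G1, G2):
    """Dot product of the shortest-path feature maps (a positive definite kernel)."""
    f1, f2 = sp_features(G1), sp_features(G2)
    return sum(c * f2[k] for k, c in f1.items())
```

With a delta kernel on labels and path lengths, as here, the feature map is explicit, which is what keeps the kernel positive definite and polynomial-time.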
In this article, we propose a family of efficient kernels for large graphs with discrete node labels. Key to our method is a rapid feature extraction scheme based on the Weisfeiler-Lehman test of isomorphism on graphs. It maps the original graph to a sequence of graphs, whose node attributes capture topological and label information. A family of kernels can be defined based on this Weisfeiler-Lehman sequence of graphs, including a highly efficient kernel comparing subtree-like patterns. Its runtime scales only linearly in the number of edges of the graphs and the length of the Weisfeiler-Lehman graph sequence. In our experimental evaluation, our kernels outperform state-of-the-art graph kernels on several graph classification benchmark data sets in terms of accuracy and runtime. Our kernels open the door to large-scale applications of graph kernels in various disciplines such as computational biology and social network analysis.
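A minimal sketch of the relabeling step at the heart of the Weisfeiler-Lehman subtree kernel is shown below, assuming networkx graphs with discrete "label" node attributes; the function names are hypothetical, and the compression table must be shared across all graphs being compared so that identical subtree patterns receive identical compressed labels.

```python
# One Weisfeiler-Lehman iteration: compress each node's label together with
# the sorted multiset of its neighbors' labels.
import networkx as nx
from collections import Counter

def wl_iteration(G, labels, table):
    """`table` maps signatures to compressed labels; it must be shared
    across all graphs (and iterations) being compared."""
    new = {}
    for v in G.nodes():
        sig = (labels[v], tuple(sorted(labels[u] for u in G[v])))
        new[v] = table.setdefault(sig, len(table))
    return new

def wl_subtree_features(G, h, table):
    """Histogram of (iteration, label) pairs over h WL iterations; the WL
    subtree kernel value is the dot product of two such histograms."""
    labels = {v: G.nodes[v]["label"] for v in G.nodes()}
    feats = Counter((0, l) for l in labels.values())
    for i in range(1, h + 1):
        labels = wl_iteration(G, labels, table)
        feats.update((i, l) for l in labels.values())
    return feats
```

Each iteration touches every edge a constant number of times, which is what gives the linear scaling in the number of edges and the sequence length claimed above.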
The purpose of this review is to introduce the reader to graph kernels and their application to classification problems in chemoinformatics. Graph kernels are functions that allow us to infer chemical properties of molecules, and they can help with tasks such as finding suitable compounds for drug design. The use of kernel methods is just one particular way of quantifying similarity between graphs. We restrict our discussion to this approach, even though popular alternatives have emerged in recent years, most notably graph neural networks.
Graph kernels have attracted a lot of attention during the last decade and have evolved into a rapidly developing branch of learning on structured data. The considerable research activity that took place in the field during the past 20 years has led to the development of dozens of graph kernels, each focusing on specific structural properties of graphs. Graph kernels have been applied successfully to a wide range of domains, from social networks to bioinformatics. The goal of this survey is to provide a unifying view of the literature on graph kernels. In particular, we give an overview of a wide range of graph kernels. Moreover, we perform an experimental evaluation of several of these kernels on publicly available datasets and provide a comparative study. Finally, we discuss key applications of graph kernels and outline some challenges that remain to be addressed.
Deep graph kernels
In this paper, we present Deep Graph Kernels, a unified framework to learn latent representations of sub-structures for graphs, inspired by the latest advancements in language modeling and deep learning. Our framework leverages the dependency information between sub-structures by learning their latent representations. We demonstrate instances of our framework on three popular graph kernels, namely Graphlet kernels, Weisfeiler-Lehman subtree kernels, and Shortest-Path graph kernels. Our experiments on several benchmark datasets show that Deep Graph Kernels achieve significant improvements in classification accuracy over state-of-the-art graph kernels.
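The composition described above, substructure counts weighted by learned substructure similarities, could look roughly like the following sketch; the names, and the assumption that the embeddings come from a word2vec-style model trained on substructure co-occurrences, are illustrative.

```python
# A minimal sketch of the Deep Graph Kernel form K(G, G') = f(G)^T M f(G'),
# where M is built from learned substructure embeddings.
import numpy as np

def deep_kernel(counts1, counts2, embeddings):
    """counts*: dicts mapping substructure ids (e.g., WL labels) to counts.
    embeddings: dict mapping every substructure id to a learned vector."""
    vocab = sorted(set(counts1) | set(counts2))
    f1 = np.array([counts1.get(s, 0) for s in vocab], dtype=float)
    f2 = np.array([counts2.get(s, 0) for s in vocab], dtype=float)
    E = np.stack([embeddings[s] for s in vocab])  # one row per substructure
    M = E @ E.T                                   # substructure similarity matrix
    return float(f1 @ M @ f2)
```

Setting M to the identity recovers the base kernel (e.g., the plain WL subtree kernel); the learned M is what lets dependent substructures reinforce each other.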
Graph kernels are historically the most widely used technique for graph classification tasks. However, these methods suffer from limited performance because of their hand-crafted combinatorial features of graphs. In recent years, graph neural networks (GNNs) have become the state-of-the-art method for downstream graph-related tasks due to their superior performance. Most GNNs are based on the message passing neural network (MPNN) framework. However, recent studies show that MPNNs cannot exceed the power of the Weisfeiler-Lehman (WL) algorithm in the graph isomorphism test. To address the limitations of existing graph kernel and GNN methods, in this paper we propose a novel GNN framework, termed \textit{Kernel Graph Neural Networks} (KerGNNs), which integrates graph kernels into the message passing process of GNNs. Inspired by convolution filters in convolutional neural networks (CNNs), KerGNNs adopt trainable hidden graphs as graph filters, which are combined with subgraphs to update node embeddings using graph kernels. In addition, we show that MPNNs can be viewed as special cases of KerGNNs. We apply KerGNNs to multiple graph-related tasks and use cross-validation to make fair comparisons with benchmarks. We show that our method achieves competitive performance compared with existing state-of-the-art methods, demonstrating the potential to increase the representation power of GNNs. We also show that the trained graph filters in KerGNNs can reveal the local graph structures of the dataset, which significantly improves model interpretability compared with conventional GNN models.
In recent years, algorithms and neural architectures based on the Weisfeiler-Leman algorithm, a well-known heuristic for the graph isomorphism problem, have emerged as a powerful tool for machine learning with graphs and relational data. Here we give a comprehensive overview of the algorithm's use in a machine learning setting, focusing on the supervised regime. We discuss the theoretical background, show how to use it for supervised graph and node representation learning, discuss recent extensions, and outline the algorithm's connection to (permutation-)equivariant neural architectures. Moreover, we give an overview of current applications and future directions to stimulate further research.
Feature extraction is an essential task in graph analytics. The resulting feature vectors, called graph descriptors, are used in downstream vector-space-based graph analysis models. This idea has proved fruitful in the past, with spectral-based graph descriptors providing state-of-the-art classification accuracy. However, known algorithms for computing meaningful descriptors do not scale to large graphs because: (1) they require storing the entire graph in memory, and (2) the end user has no control over the algorithm's runtime. In this paper, we present streaming algorithms to approximately compute three different graph descriptors that capture the essential structure of graphs. Operating on edge streams allows us to avoid storing the entire graph in memory, and controlling the sample size enables us to keep the runtime of our algorithms within desired bounds. We demonstrate the efficacy of the proposed descriptors by analyzing the approximation error and classification accuracy. Our scalable algorithms compute descriptors of graphs with millions of edges within minutes. Moreover, these descriptors yield predictive accuracy comparable to state-of-the-art methods while requiring only 25% as much memory.
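The paper's specific descriptors are not reproduced here, but the edge-stream setting they rely on can be illustrated with standard reservoir sampling, which caps memory at a user-chosen sample size; the descriptors are then estimated from the sampled edges only.

```python
# A minimal sketch of edge-stream sampling: keep a uniform sample of k edges
# from a stream of unknown length, using O(k) memory.
import random

def reservoir_sample_edges(edge_stream, k):
    """Classic reservoir sampling over an iterable of (u, v) edge tuples."""
    sample = []
    for i, e in enumerate(edge_stream):
        if i < k:
            sample.append(e)            # fill the reservoir first
        else:
            j = random.randrange(i + 1) # replace with decreasing probability
            if j < k:
                sample[j] = e
    return sample
```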
Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected or directed, with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state-of-the-art graph kernels and that their computation is highly efficient.
In this work, we propose a family of novel quantum kernels, namely the Hierarchical Aligned Quantum Jensen-Shannon Kernels (HAQJSK), for un-attributed graphs. Different from most existing classical graph kernels, the proposed HAQJSK kernels can incorporate hierarchically aligned structure information between graphs and transform graphs of arbitrary sizes into fixed-sized aligned graph structures, i.e., the Hierarchical Transitive Aligned Adjacency Matrix of vertices and the Hierarchical Transitive Aligned Density Matrix of the Continuous-Time Quantum Walk (CTQW). For a pair of graphs under comparison, the resulting HAQJSK kernels are defined by measuring the Quantum Jensen-Shannon Divergence (QJSD) between their transitively aligned graph structures. We show that the proposed HAQJSK kernels not only reflect richer intrinsic global graph characteristics in terms of the CTQW, but also address the drawback of neglecting structural correspondence information that arises in most existing R-convolution kernels. Furthermore, unlike the previous Quantum Jensen-Shannon Kernels associated with the QJSD and the CTQW, the proposed HAQJSK kernels simultaneously guarantee permutation invariance and positive definiteness, explaining the theoretical advantages of the HAQJSK kernels. Experiments indicate the effectiveness of the proposed kernels.
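The QJSD measure underlying these kernels has a compact definition in terms of von Neumann entropy; the following sketch computes it for two density matrices (the hierarchical alignment step of HAQJSK is not reproduced here).

```python
# A minimal sketch of the Quantum Jensen-Shannon Divergence between two
# density matrices rho and sigma (Hermitian, unit trace).
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -sum_i lambda_i log2 lambda_i over the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # drop numerically zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

def qjsd(rho, sigma):
    """QJSD(rho, sigma) = S((rho+sigma)/2) - (S(rho) + S(sigma)) / 2."""
    return von_neumann_entropy((rho + sigma) / 2) - \
        (von_neumann_entropy(rho) + von_neumann_entropy(sigma)) / 2
```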
In this paper, we propose a novel graph kernel, namely the Quantum-based Entropic Subtree Kernel (QESK), for graph classification. To this end, we commence by computing the Average Mixing Matrix (AMM) of the Continuous-time Quantum Walk (CTQW) evolved on each graph structure. Moreover, we show how this AMM matrix can be employed to compute a series of entropic subtree representations associated with the classical Weisfeiler-Lehman (WL) algorithm. For a pair of graphs, the QESK kernel is defined by computing the exponentiation of the negative Euclidean distance between their entropic subtree representations, theoretically resulting in a positive definite graph kernel. We show that the proposed QESK kernel not only encapsulates complicated intrinsic quantum-based structural characteristics of graph structures through the CTQW, but also theoretically addresses the shortcoming of ignoring the effects of unshared substructures that arises in state-of-the-art R-convolution graph kernels. Moreover, unlike classical R-convolution kernels, the proposed QESK can discriminate between isomorphic subtrees in terms of the global graph structure, theoretically explaining its effectiveness. Experiments indicate that the proposed QESK kernel can significantly outperform state-of-the-art graph kernels and graph deep learning methods on graph classification problems.
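The final kernel form stated above, the exponentiation of the negative Euclidean distance between representations, could be sketched as follows; h1, h2 (the per-graph entropic subtree vectors) and the scale lam are assumed inputs.

```python
# A minimal sketch of the stated kernel form: a Laplacian-type, positive
# definite kernel on the entropic subtree representation vectors.
import numpy as np

def qesk(h1, h2, lam=1.0):
    """K(G1, G2) = exp(-lam * ||h1 - h2||), with lam > 0."""
    return float(np.exp(-lam * np.linalg.norm(h1 - h2)))
```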
The Weisfeiler-Lehman (WL) test has been widely applied to graph kernels, metrics, and neural networks. However, it considers only the consistency of graphs, which gives it weak descriptive power for structural information and thus limits the performance gains of the methods that apply it. In addition, the similarity and distance between graphs defined by the WL test are only coarse measurements. To the best of our knowledge, this paper is the first to clarify these facts, and it defines a metric we call the Wasserstein WL subtree (WWLS) distance. We introduce the WL subtree as the structural information in the neighborhood of each node and assign it to every node. We then define a new graph embedding space based on the $L_1$-approximated tree edit distance ($L_1$-TED): the $L_1$ norm of the difference between node feature vectors in this space is the $L_1$-TED between the corresponding nodes. We further propose a fast algorithm for the graph embedding. Finally, we use the Wasserstein distance to lift the $L_1$-TED to the graph level. The WWLS can capture small structural changes that are difficult for traditional metrics to detect. We demonstrate its performance in several graph classification and metric validation experiments.
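The graph-level step, a Wasserstein distance between the two graphs' node feature sets under an $L_1$ ground metric, might be sketched with the POT optimal-transport library (an assumed dependency; the paper's own fast embedding algorithm is not reproduced here).

```python
# A minimal sketch: exact optimal transport between node feature sets with
# uniform node weights and L1 (cityblock) ground distances.
import numpy as np
import ot  # Python Optimal Transport (pip install pot)

def wasserstein_graph_distance(X1, X2):
    """X1, X2: arrays of per-node feature vectors, shapes (n1, d) and (n2, d)."""
    a = np.full(len(X1), 1.0 / len(X1))      # uniform weights over nodes
    b = np.full(len(X2), 1.0 / len(X2))
    M = ot.dist(X1, X2, metric="cityblock")  # pairwise L1 ground distances
    return ot.emd2(a, b, M)                  # exact optimal transport cost
```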
In this paper, we provide, to the best of our knowledge, the first comprehensive approach for incorporating various masking mechanisms into Transformer architectures in a scalable way. We show that recent results on linear causal attention (Choromanski et al., 2021) and log-linear RPE attention (Luo et al., 2021) are special cases of this general mechanism. However, by casting the problem as a topological (graph-based) modulation of unmasked attention, we obtain several results unknown before, including efficient d-dimensional RPE masking and graph-kernel masking. We leverage many mathematical techniques, ranging from spectral analysis through dynamic programming and random walks to new algorithms for solving Markov processes on graphs. We provide a corresponding empirical evaluation.
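The general mechanism, masking as a topological modulation of unmasked attention, can be written as an elementwise product of a mask with the unnormalized attention weights. The sketch below is the naive O(n²) formulation that the paper's algorithms avoid materializing; M could come from, e.g., a graph kernel over token positions.

```python
# A minimal sketch of masked attention as elementwise (Hadamard) modulation.
import numpy as np

def masked_attention(Q, K, V, M):
    """Q, K, V: (n, d) arrays; M: (n, n) nonnegative mask matrix.
    Assumes every row of M has at least one positive entry."""
    logits = Q @ K.T / np.sqrt(Q.shape[1])
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    weights = M * np.exp(logits)                   # topological modulation
    weights /= weights.sum(axis=1, keepdims=True)  # row-normalize
    return weights @ V
```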
The convolution operator at the core of many modern neural architectures can effectively be seen as performing a dot product between an input matrix and a filter. While this is readily applicable to data such as images, which can be represented as regular grids in Euclidean space, extending the convolution operator to work on graphs proves more challenging due to their irregular structure. In this paper, we propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain. This allows us to define an entirely structural model that does not require computing an embedding of the input graph. Our architecture allows plugging in any type and number of graph kernels and has the added benefit of providing some interpretability in terms of the structural masks learned during training, similarly to what happens with convolutional masks in traditional convolutional neural networks. We perform an extensive ablation study to investigate the effect of the model hyper-parameters, and we show that our model achieves competitive performance on standard graph classification datasets.
While (message-passing) graph neural networks have clear limitations in approximating permutation-equivariant functions over graphs or general relational data, more expressive higher-order graph neural networks do not scale to large graphs. They either operate on $k$-order tensors or consider all $k$-node subgraphs, implying an exponential dependence on $k$ in memory requirements, and they do not adapt to the sparsity of the graph. By introducing new heuristics for the graph isomorphism problem, we devise a class of universal, permutation-equivariant graph networks which, unlike previous architectures, offer fine-grained control between expressivity and scalability and adapt to the sparsity of the graph. These architectures vastly reduce computation times compared to standard higher-order networks in supervised node- and graph-level classification and regression regimes, while significantly improving over standard graph neural network and graph kernel architectures in terms of predictive performance.
This paper presents the Persistent Weisfeiler-Lehman Random walk scheme (abbreviated as PWLR) for graph representations, a novel mathematical framework which produces interpretable low-dimensional representations of graphs with discrete and continuous node features. The proposed scheme effectively combines normalized Weisfeiler-Lehman procedures, random walks on graphs, and persistent homology. We thereby integrate three distinct properties of graphs, namely local topological features, node degrees, and global topological invariants, while preserving stability against graph perturbations. This generalizes many variants of the Weisfeiler-Lehman procedure, which are primarily used to embed graphs with discrete node labels. Empirical results show that these representations can be efficiently utilized to produce results comparable to state-of-the-art techniques in classifying graphs with discrete node labels, and enhanced performance in classifying graphs with continuous node features.
A current goal in the graph neural network literature is to enable transformers to operate on graph-structured data, given their success on language and vision tasks. Since the transformer's original sinusoidal positional encodings (PEs) are not applicable to graphs, recent work has focused on developing graph PEs, rooted in spectral graph theory or various spatial features of a graph. In this work, we introduce a new graph PE, Graph Automaton PE (GAPE), based on weighted graph-walking automata (a novel extension of graph-walking automata). We compare the performance of GAPE with other PE schemes on both machine translation and graph-structured tasks, and we show that it generalizes several other PEs. An additional contribution of this study is a theoretical and controlled experimental comparison of many recent PEs in graph transformers, independent of the use of edge features.
Link prediction is a key problem for network-structured data. Link prediction heuristics use score functions, such as common neighbors and the Katz index, to measure the likelihood of links. They have obtained wide practical use due to their simplicity, interpretability, and, for some of them, scalability. However, every heuristic makes a strong assumption about when two nodes are likely to link, which limits its effectiveness on networks where that assumption fails. In this regard, a more reasonable approach is to learn a suitable heuristic from a given network instead of using predefined ones. By extracting a local subgraph around each target link, we aim to learn a function mapping the subgraph patterns to link existence, thus automatically learning a "heuristic" that suits the current network. In this paper, we study this heuristic learning paradigm for link prediction. First, we develop a novel γ-decaying heuristic theory. The theory unifies a wide range of heuristics in a single framework and proves that all these heuristics can be well approximated from local subgraphs. Our results show that local subgraphs preserve rich information related to link existence. Second, based on the γ-decaying theory, we propose a new method to learn heuristics from local subgraphs using a graph neural network (GNN). Our experimental results show unprecedented performance, working consistently well on a wide range of problems.
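As a reference point for the heuristics named above, here are sketches of the common-neighbors score and the Katz index on a networkx graph; the contribution of this paper is to learn such a score from enclosing subgraphs rather than fixing it in advance.

```python
# Minimal sketches of two classic link-prediction heuristics.
import networkx as nx
import numpy as np

def common_neighbors(G, u, v):
    """First-order heuristic: number of shared neighbors of u and v."""
    return len(set(G[u]) & set(G[v]))

def katz_index(G, u, v, beta=0.05):
    """Katz index: sum over all paths between u and v, discounted by beta^len,
    computed as (I - beta*A)^{-1} - I. Requires beta < 1/spectral_radius(A)."""
    nodes = list(G.nodes())
    A = nx.to_numpy_array(G, nodelist=nodes)
    S = np.linalg.inv(np.eye(len(nodes)) - beta * A) - np.eye(len(nodes))
    return S[nodes.index(u), nodes.index(v)]
```

Common neighbors uses only the 1-hop neighborhood, while Katz aggregates all path lengths; the γ-decaying theory explains why both can be approximated well from small local subgraphs.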
Most graph neural networks (GNNs) use the message passing paradigm, in which node features are propagated over the input graph. Recent works have pointed to the distortion of information flowing from distant nodes as a factor limiting the efficiency of message passing for tasks relying on long-distance interactions. This phenomenon, referred to as "over-squashing", has been heuristically attributed to graph bottlenecks where the number of $k$-hop neighbors grows rapidly with $k$. We provide a precise description of the over-squashing phenomenon in GNNs and analyze how it arises from bottlenecks in the graph. For this purpose, we introduce a new edge-based combinatorial curvature and prove that negatively curved edges are responsible for the over-squashing issue. We also propose and experimentally test a curvature-based graph rewiring method to alleviate the over-squashing problem.
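As a simple illustration of edge-based curvature, the sketch below computes the classical Forman curvature for unweighted graphs; this is a stand-in assumption for illustration only, as the paper defines a refined "balanced" variant that also accounts for triangles and 4-cycles.

```python
# A minimal sketch: classical Forman-Ricci curvature of an edge in an
# unweighted graph. Strongly negative values flag bottleneck-like edges,
# which are natural candidates for curvature-based rewiring.
import networkx as nx

def forman_curvature(G, u, v):
    """F(u, v) = 4 - deg(u) - deg(v) for an unweighted graph."""
    return 4 - G.degree(u) - G.degree(v)
```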
We propose a new graph neural network, which we call AgentNet, designed specifically for graph-level tasks. AgentNet is inspired by sublinear algorithms and has a computational complexity that is independent of the graph size. Its architecture differs fundamentally from that of known graph neural networks: in AgentNet, a collection of trained neural agents intelligently walk the graph and then collectively decide on the output. We provide an extensive theoretical analysis of AgentNet: we show that the agents can learn to systematically explore their neighborhood, and that AgentNet can distinguish some structures that even 3-WL cannot. Moreover, AgentNet is able to separate any two graphs that differ sufficiently in their subgraphs. We confirm these theoretical results with synthetic experiments on hard-to-distinguish graphs and on real-world graph classification tasks. In both cases, we compare favorably not only to standard GNNs but also to computationally more expensive GNN extensions.