Although graph neural networks (GNNs) have been successfully applied to node classification and link prediction tasks on graphs, learning graph-level representations remains a challenge. For graph-level representations, it is important to learn both the representations of neighboring nodes, i.e., aggregation, and graph structural information, and many graph pooling methods have been developed for this purpose. However, most existing pooling methods use k-hop neighborhoods without considering explicit structural information within a graph. In this paper, we propose Structure Prototype Guided Pooling (SPGP), which uses prior graph structures to overcome this limitation. SPGP formulates graph structures as learnable prototype vectors and computes the affinity between nodes and the prototype vectors. This leads to a novel node scoring scheme that prioritizes informative nodes while encapsulating the useful structures of the graph. Our experimental results show that SPGP outperforms state-of-the-art graph pooling methods on graph classification benchmark datasets in both accuracy and scalability.
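To make the prototype-affinity scoring concrete, here is a minimal dense-tensor sketch of the idea described above; the prototype count, cosine affinity, and tanh gating are our assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class PrototypeScorePool(nn.Module):
    def __init__(self, dim, num_prototypes=8, ratio=0.5):
        super().__init__()
        # Learnable structure-prototype vectors (hypothetical initialization).
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
        self.ratio = ratio

    def forward(self, x, adj):
        # Affinity of every node to every prototype (cosine similarity here).
        aff = torch.cosine_similarity(
            x.unsqueeze(1), self.prototypes.unsqueeze(0), dim=-1
        )                                  # (N, P)
        score = aff.max(dim=1).values      # score a node by its best prototype match
        k = max(1, int(self.ratio * x.size(0)))
        idx = score.topk(k).indices        # keep the top-k informative nodes
        # Gate kept features by their scores so gradients reach the prototypes.
        x_pool = x[idx] * torch.tanh(score[idx]).unsqueeze(-1)
        adj_pool = adj[idx][:, idx]        # induced subgraph of the kept nodes
        return x_pool, adj_pool

x, adj = torch.randn(10, 16), torch.eye(10)
x_p, adj_p = PrototypeScorePool(16)(x, adj)
```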
Advanced methods of applying deep learning to structured data such as graphs have been proposed in recent years. In particular, studies have focused on generalizing convolutional neural networks to graph data, which includes redefining the convolution and the downsampling (pooling) operations for graphs. The method of generalizing the convolution operation to graphs has been proven to improve performance and is widely used. However, the method of applying downsampling to graphs is still difficult to perform and has room for improvement. In this paper, we propose a graph pooling method based on self-attention. Self-attention using graph convolution allows our pooling method to consider both node features and graph topology. To ensure a fair comparison, the same training procedures and model architectures were used for the existing pooling methods and our method. The experimental results demonstrate that our method achieves superior graph classification performance on the benchmark datasets using a reasonable number of parameters.
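The mechanism lends itself to a compact sketch: a graph convolution produces one attention score per node from both features and topology, and the top-scoring nodes are kept. The dense adjacency and single linear scoring layer below are simplifications.

```python
import torch
import torch.nn as nn

class SelfAttentionPool(nn.Module):
    def __init__(self, dim, ratio=0.5):
        super().__init__()
        self.score_gcn = nn.Linear(dim, 1)   # stands in for a 1-channel GCN layer
        self.ratio = ratio

    def forward(self, x, adj):
        # Normalized adjacency so the score depends on topology, not just features.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        score = self.score_gcn((adj / deg) @ x).squeeze(-1)   # (N,)
        k = max(1, int(self.ratio * x.size(0)))
        idx = score.topk(k).indices
        # Gate surviving node features by their (squashed) attention scores.
        x_pool = x[idx] * torch.tanh(score[idx]).unsqueeze(-1)
        return x_pool, adj[idx][:, idx]

x, adj = torch.randn(10, 16), (torch.rand(10, 10) > 0.5).float()
x_p, adj_p = SelfAttentionPool(16)(x, adj)
```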
With the great success of deep learning in various domains, graph neural networks (GNNs) have also become the dominant approach for graph classification. Through a global readout operation that simply aggregates all node (or node-cluster) representations, existing GNN classifiers obtain a graph-level representation of an input graph and use it to predict its class label. However, such global aggregation does not consider the structural information of each node, which leads to a loss of information about the global structure. In particular, it limits discriminability by enforcing the same weight parameters of the classifier on all node representations; in practice, each of them contributes to the target classes differently depending on its structural semantics. In this work, we propose Structural Semantic Readout (SSRead) to summarize node representations at the position level, which allows position-specific weight parameters to be modeled for classification, as well as effectively capturing graph semantics relevant to the global structure. Given an input graph, SSRead aims to identify structurally meaningful positions by using the semantic alignment between its nodes and structural prototypes, which encode the prototypical features of each position. The structural prototypes are optimized to minimize the alignment cost over all training graphs, while the other GNN parameters are trained to predict the class labels. Our experimental results demonstrate that SSRead significantly improves the classification performance and interpretability of GNN classifiers, while being compatible with a variety of aggregation functions, GNN architectures, and learning frameworks.
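Below is a rough sketch of the position-level readout described above, with a hard nearest-prototype alignment standing in for the paper's optimized alignment; the prototype count and mean pooling per position are assumptions.

```python
import torch
import torch.nn as nn

class StructuralReadout(nn.Module):
    def __init__(self, dim, num_positions=4):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_positions, dim))

    def forward(self, x):                            # x: (N, D) node embeddings
        # Semantic alignment: each node picks the prototype it is closest to.
        dist = torch.cdist(x, self.prototypes)       # (N, P)
        assign = dist.argmin(dim=1)                  # position index per node
        summaries = []
        for p in range(self.prototypes.size(0)):
            mask = assign == p
            # Average the nodes aligned to position p (zeros if none matched).
            summaries.append(x[mask].mean(dim=0) if mask.any()
                             else torch.zeros(x.size(1)))
        # Concatenation keeps positions separate, so a downstream classifier
        # can learn position-specific weights.
        return torch.cat(summaries)                  # (P * D,)

g = StructuralReadout(16)(torch.randn(9, 16))        # (64,) graph representation
```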
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. The design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which emerged as a de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair experimental evaluation protocol and choose a proper collection of datasets. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions, which shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an efficient tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs, a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DIFFPOOL, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DIFFPOOL learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DIFFPOOL yields an average improvement of 5-10% accuracy on graph classification benchmarks, compared to all existing pooling approaches, achieving a new state-of-the-art on four out of five benchmark data sets.
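The core DIFFPOOL update can be written out in a few lines of dense-tensor code: a pooling network predicts a soft cluster assignment S, which coarsens both the features (S^T Z) and the adjacency (S^T A S). The linear layers below stand in for full GNNs to keep the sketch short.

```python
import torch
import torch.nn as nn

class DiffPoolLayer(nn.Module):
    def __init__(self, dim, num_clusters):
        super().__init__()
        self.embed = nn.Linear(dim, dim)            # stand-in for GNN_embed
        self.assign = nn.Linear(dim, num_clusters)  # stand-in for GNN_pool

    def forward(self, x, adj):
        z = torch.relu(self.embed(adj @ x))              # node embeddings
        s = torch.softmax(self.assign(adj @ x), dim=-1)  # (N, C) soft assignment
        x_coarse = s.t() @ z                             # cluster features: S^T Z
        adj_coarse = s.t() @ adj @ s                     # coarsened adjacency: S^T A S
        return x_coarse, adj_coarse

x, adj = torch.randn(12, 16), (torch.rand(12, 12) > 0.7).float()
x_c, adj_c = DiffPoolLayer(16, num_clusters=4)(x, adj)  # 12 nodes -> 4 clusters
```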
Graphs can model complex interactions between entities and arise naturally in many important applications. These applications can often be cast into standard graph learning tasks, in which a crucial step is to learn low-dimensional graph representations. Graph neural networks (GNNs) are currently the most popular model among graph embedding approaches. However, standard GNNs in the neighborhood aggregation paradigm suffer from limited discriminative power in distinguishing high-order graph structures as opposed to low-order structures. To capture high-order structures, researchers have resorted to motifs and developed motif-based GNNs. However, existing motif-based GNNs still suffer from less discriminative power on high-order structures. To overcome the above limitations, we propose MGNN, a novel framework to better capture high-order structures, hinging on our proposed motif redundancy minimization operator and injective motif combination. First, MGNN produces a set of node representations with respect to each motif. The next phase is our proposed redundancy minimization among motifs, which compares the motifs with each other and distills the features unique to each motif. Finally, MGNN updates node representations by combining multiple representations from different motifs. In particular, to enhance the discriminative power, MGNN utilizes an injective function to combine the representations with respect to different motifs. We further show with theoretical analysis that our proposed architecture increases the expressive power of GNNs. We demonstrate that MGNN outperforms state-of-the-art methods on seven public benchmarks on both node classification and graph classification tasks.
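Here is a loose sketch of the per-motif pipeline: one representation of each node per motif, derived from motif-based adjacency matrices (assumed precomputed by motif matching), combined through an MLP as an injective-style combination. The redundancy-minimization operator is omitted for brevity.

```python
import torch
import torch.nn as nn

class MotifCombine(nn.Module):
    def __init__(self, dim, num_motifs):
        super().__init__()
        self.per_motif = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_motifs))
        self.combine = nn.Sequential(nn.Linear(num_motifs * dim, dim), nn.ReLU())

    def forward(self, x, motif_adjs):      # motif_adjs: list of (N, N) matrices
        # One representation per motif: aggregate neighbors along each motif adjacency.
        reps = [torch.relu(f(a @ x)) for f, a in zip(self.per_motif, motif_adjs)]
        # Injective-style combination over the concatenated motif views.
        return self.combine(torch.cat(reps, dim=-1))

adjs = [(torch.rand(6, 6) > 0.5).float() for _ in range(3)]
h = MotifCombine(8, num_motifs=3)(torch.randn(6, 8), adjs)
```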
Graph neural networks (GNNs) have recently become the dominant paradigm for machine learning with graphs. Research on GNNs has mainly focused on the family of message passing neural networks (MPNNs). Similar to the Weisfeiler-Leman (WL) test of isomorphism, these models follow an iterative neighborhood aggregation procedure to update vertex representations, and update graph representations by aggregating the vertex representations. Although very successful, MPNNs have been studied intensively in the past few years. Thus, there is a need for novel architectures that will allow research in the field to break away from MPNNs. In this paper, we propose a new graph neural network model, the so-called $\pi$-GNN, which learns a "soft" permutation (i.e., doubly stochastic) matrix for each graph, thereby projecting all graphs into a common vector space. The learned matrices impose a "soft" ordering on the vertices of the input graphs, and based on this ordering, the adjacency matrices are mapped into vectors. These vectors can be fed into fully connected or convolutional layers to deal with supervised learning tasks. In the case of large graphs, to make the model more efficient in terms of running time and memory, we further relax the doubly stochastic matrices to row stochastic matrices. We empirically evaluate the model on graph classification and graph regression datasets and show that it achieves performance competitive with state-of-the-art models.
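A sketch of the soft-permutation idea, under our own assumptions: Sinkhorn normalization turns a learned score matrix into an approximately doubly stochastic matrix P, and P^T A P flattened gives a fixed-length vector in the common space. The scoring network and iteration count here are illustrative.

```python
import torch
import torch.nn as nn

def sinkhorn(logits, n_iters=10):
    # Alternate row/column normalization in log space for stability.
    for _ in range(n_iters):
        logits = logits - logits.logsumexp(dim=1, keepdim=True)
        logits = logits - logits.logsumexp(dim=0, keepdim=True)
    return logits.exp()                     # approximately doubly stochastic

class SoftPermutation(nn.Module):
    def __init__(self, dim, max_nodes):
        super().__init__()
        self.score = nn.Linear(dim, max_nodes)   # node -> slot scores

    def forward(self, x, adj):
        p = sinkhorn(self.score(x))              # (N, max_nodes)
        # Project the adjacency into the common, softly ordered space.
        return (p.t() @ adj @ p).flatten()       # fixed-length graph vector

x, adj = torch.randn(7, 16), (torch.rand(7, 7) > 0.6).float()
vec = SoftPermutation(16, max_nodes=10)(x, adj)  # (100,) regardless of graph size
```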
Synergistic drug combinations offer great potential for enhancing therapeutic efficacy and reducing adverse reactions. However, effective and synergistic drug combination prediction remains an open question, owing to unknown causal disease signaling pathways. Although various deep learning (AI) models have been proposed to quantitatively predict the synergism of drug combinations, the major limitation of existing deep learning methods is that they are inherently not interpretable, which makes the conclusions of AI models non-transparent to human experts and thus limits the robustness of the model conclusions and the ability to implement these models in real-world human healthcare. In this paper, we develop an interpretable graph neural network (GNN) that reveals the underlying essential therapeutic targets and the mechanism of synergy (MoS) by mining the most important sub-molecular networks. The key point of the interpretable GNN prediction model is a novel graph pooling layer, Self-Attention-based Node and Edge pooling (henceforth SANEpool), which can compute the attention scores (importance) of nodes and edges based on node features and graph topology. The proposed GNN model thus provides a systematic way to predict and interpret drug combination synergism based on the detected crucial sub-molecular networks. We evaluate molecular networks formed by genes from 46 core cancer signaling pathways and drug combinations from the NCI ALMANAC drug combination screening data. Experimental results demonstrate that 1) SANEpool achieves state-of-the-art performance among popular graph neural networks; and 2) the sub-molecular networks detected by SANEpool are self-interpretable and salient for identifying synergistic drug combinations.
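A simplified sketch of attention-based node-and-edge pooling in the spirit described above: attention scores are computed for nodes and derived for edges from their endpoints, then used to prune and reweight the graph. The exact scoring functions here are assumptions, not the paper's.

```python
import torch
import torch.nn as nn

class NodeEdgePool(nn.Module):
    def __init__(self, dim, ratio=0.5):
        super().__init__()
        self.node_att = nn.Linear(dim, 1)
        self.ratio = ratio

    def forward(self, x, adj):
        node_score = torch.sigmoid(self.node_att(x)).squeeze(-1)        # (N,)
        # Edge importance as the product of its endpoints' attention scores.
        edge_score = node_score.unsqueeze(0) * node_score.unsqueeze(1)  # (N, N)
        k = max(1, int(self.ratio * x.size(0)))
        idx = node_score.topk(k).indices
        # Keep top nodes; reweight surviving edges by their attention scores.
        return (x[idx] * node_score[idx].unsqueeze(-1),
                (adj * edge_score)[idx][:, idx])

x, adj = torch.randn(10, 16), (torch.rand(10, 10) > 0.5).float()
x_p, adj_p = NodeEdgePool(16)(x, adj)
```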
We focus on graph classification using a graph neural network (GNN) model that precomputes node features using a bank of neighborhood aggregation graph operators arranged in parallel. These GNN models have reduced training and inference time due to the precomputation, and they also differ from popular GNN variants that update node features through a sequential neighborhood aggregation procedure during training. We provide theoretical conditions under which a generic GNN model with parallel neighborhood aggregation (PA-GNN for short) is as discriminative as the well-known Weisfeiler-Lehman (WL) graph isomorphism test in distinguishing non-isomorphic graphs. Although PA-GNN models have no apparent relationship to the WL test, we show that the graph embeddings obtained from the two approaches are injectively related. We then propose a specialized PA-GNN model, called SPIN, that obeys the developed conditions. We demonstrate via numerical experiments that the developed model achieves state-of-the-art performance on many diverse real-world datasets while maintaining the discriminative power of the WL test and the computational advantage of preprocessing graphs before the training process.
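The precomputation step is easy to illustrate: powers of the normalized adjacency applied to the node features are computed once, so only the readout needs training. The choice of plain adjacency powers as the operator bank below is an assumption for the sake of the sketch.

```python
import torch
import torch.nn as nn

def precompute_features(x, adj, hops=3):
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    a_norm = adj / deg
    feats, h = [x], x
    for _ in range(hops):
        h = a_norm @ h                   # k-hop aggregation, done before training
        feats.append(h)
    return torch.cat(feats, dim=-1)      # (N, (hops + 1) * D)

x, adj = torch.randn(10, 8), (torch.rand(10, 10) > 0.6).float()
z = precompute_features(x, adj)          # fixed features, computed once
readout = nn.Linear(z.size(-1), 2)       # only this part needs training
```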
Graph kernels are historically the most widely-used technique for graph classification tasks. However, these methods suffer from limited performance because of the hand-crafted combinatorial features of graphs. In recent years, graph neural networks (GNNs) have become the state-of-the-art method for downstream graph-related tasks due to their superior performance. Most GNNs are based on the message passing neural network (MPNN) framework. However, recent studies show that MPNNs cannot exceed the power of the Weisfeiler-Lehman (WL) algorithm in graph isomorphism testing. To address the limitations of existing graph kernel and GNN methods, in this paper we propose a novel GNN framework, termed Kernel Graph Neural Networks (KerGNNs), which integrates graph kernels into the message passing process of GNNs. Inspired by convolution filters in convolutional neural networks (CNNs), KerGNNs adopt trainable hidden graphs as graph filters, which are combined with subgraphs to update node embeddings using graph kernels. In addition, we show that MPNNs can be viewed as special cases of KerGNNs. We apply KerGNNs to multiple graph-related tasks and use cross-validation to make fair comparisons with benchmarks. We show that our method achieves competitive performance compared with existing state-of-the-art methods, demonstrating the potential to increase the representation ability of GNNs. We also show that the trained graph filters in KerGNNs can reveal the local graph structures of the dataset, which significantly improves model interpretability compared with conventional GNN models.
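A condensed sketch of the graph-filter idea: each trainable hidden graph (learned adjacency plus node features) is compared against every node's 1-hop ego subgraph with a simple one-step random-walk kernel, and the kernel values form the node's new embedding. The hidden-graph size and the kernel choice are our assumptions.

```python
import torch
import torch.nn as nn

def walk_kernel(x1, a1, x2, a2):
    s = x1 @ x2.t()                       # node-feature similarity (n1, n2)
    return (s * (a1 @ s @ a2.t())).sum()  # one-step walk kernel on the product graph

class KerGNNLayer(nn.Module):
    def __init__(self, dim, out_dim, hidden_nodes=4):
        super().__init__()
        # Trainable hidden graphs: one adjacency and feature set per output channel.
        self.h_adj = nn.Parameter(torch.rand(out_dim, hidden_nodes, hidden_nodes))
        self.h_x = nn.Parameter(torch.randn(out_dim, hidden_nodes, dim))

    def forward(self, x, adj):
        out = []
        for v in range(x.size(0)):
            # 1-hop ego subgraph around node v (including v itself).
            ego = torch.unique(torch.cat([(adj[v] > 0).nonzero(as_tuple=True)[0],
                                          torch.tensor([v])]))
            xs, as_ = x[ego], adj[ego][:, ego]
            out.append(torch.stack([walk_kernel(xs, as_, self.h_x[f], self.h_adj[f])
                                    for f in range(self.h_x.size(0))]))
        return torch.stack(out)            # (N, out_dim): kernel values per filter

x, adj = torch.randn(8, 6), (torch.rand(8, 8) > 0.5).float()
h = KerGNNLayer(6, 4)(x, adj)
```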
Graph neural networks (GNNs) have been widely applied to various fields for learning over graph-structured data. They have shown significant improvements over traditional heuristic methods in various tasks such as node classification and graph classification. However, since GNNs heavily rely on smoothed node features rather than graph structure, they often show poor performance compared to simple heuristic methods in link prediction, where structural information, e.g., overlapped neighborhoods, degrees, and shortest paths, is crucial. To address this limitation, we propose Neighborhood Overlap-aware Graph Neural Networks (Neo-GNNs) that learn useful structural features from an adjacency matrix and estimate overlapped neighborhoods for link prediction. Our Neo-GNNs generalize neighborhood overlap-based heuristic methods and handle overlapped multi-hop neighborhoods. Our extensive experiments on Open Graph Benchmark (OGB) datasets demonstrate that Neo-GNNs consistently achieve state-of-the-art performance in link prediction. Our code is publicly available at https://github.com/seongjunyun/neo_gnns.
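A small sketch of the overlap generalization: a learned weight per node generalizes common-neighbor counting, and adjacency powers extend it to multi-hop overlap. The particular combination below is illustrative rather than the paper's exact estimator.

```python
import torch
import torch.nn as nn

class NeighborhoodOverlap(nn.Module):
    def __init__(self, num_nodes, hops=2):
        super().__init__()
        # Learnable importance per node (heuristics use e.g. 1/log(degree)).
        self.node_w = nn.Parameter(torch.ones(num_nodes))
        self.hop_w = nn.Parameter(torch.ones(hops))

    def forward(self, adj, i, j):
        score, a_pow = 0.0, adj
        for h in range(self.hop_w.size(0)):
            # Weighted overlap of the h-hop neighborhoods of nodes i and j.
            overlap = (a_pow[i] * a_pow[j] * self.node_w).sum()
            score = score + self.hop_w[h] * overlap
            a_pow = a_pow @ adj
        return torch.sigmoid(score)      # link probability from structure alone

adj = (torch.rand(12, 12) > 0.7).float()
p_link = NeighborhoodOverlap(12)(adj, 0, 5)
```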
Graph classification is a highly impactful task that plays a crucial role in a myriad of real-world applications such as molecular property prediction and protein function prediction. To handle new classes with limited labeled graphs, few-shot graph classification has become a bridge between existing graph classification solutions and practical usage. This work explores the potential of metric-based meta-learning for solving few-shot graph classification. We highlight the importance of considering structural characteristics in the solution and propose a novel framework that explicitly considers the global structure and the local structure of the input graph. An implementation upon GIN, named SMF-GIN, is tested on two datasets, Chembl and TRIANGLES, where extensive experiments validate the effectiveness of the proposed method. The Chembl dataset is constructed to fill the gap of the lack of a large-scale benchmark for few-shot graph classification evaluation, and is released together with the implementation of SMF-GIN at: https://github.com/jiangshunyu/SMF-GIN.
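A bare-bones sketch of the metric-based meta-learning setup that such methods build on: support-set graph embeddings (here from a stub encoder) are averaged into class prototypes, and a query graph is labeled by its nearest prototype. The structural attention over global and local views is omitted.

```python
import torch

def encode(x, adj):                       # stub for a GIN-style graph encoder
    return (adj @ x).relu().mean(dim=0)   # one aggregation step + mean readout

def few_shot_classify(support, labels, query):
    # support: list of (x, adj) graphs; labels: class id per support graph.
    embs = torch.stack([encode(x, a) for x, a in support])
    protos = torch.stack([embs[labels == c].mean(dim=0)
                          for c in labels.unique()])        # class prototypes
    q = encode(*query)
    return torch.cdist(q.unsqueeze(0), protos).argmin()     # nearest prototype

support = [(torch.randn(5, 8), torch.eye(5)) for _ in range(4)]
labels = torch.tensor([0, 0, 1, 1])
pred = few_shot_classify(support, labels, (torch.randn(6, 8), torch.eye(6)))
```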
Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest. While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs). The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs so that the GNN can learn useful local and global representations simultaneously. We systematically study pre-training on multiple graph classification datasets. We find that naïve strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading up to 9.4% absolute improvements in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction.
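A toy sketch of one node-level self-supervised strategy of the kind the paper studies, attribute masking: features of a few nodes are hidden and the GNN is trained to reconstruct them before graph-level fine-tuning. The one-layer encoder and mask rate are simplifications.

```python
import torch
import torch.nn as nn

encoder = nn.Linear(16, 16)               # stand-in for a full GNN encoder
decoder = nn.Linear(16, 16)               # predicts the masked attributes
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()])

x, adj = torch.randn(20, 16), (torch.rand(20, 20) > 0.7).float()
mask = torch.zeros(20, dtype=torch.bool)
mask[torch.randperm(20)[:3]] = True       # mask 15% of the nodes
x_in = x.clone()
x_in[mask] = 0.0                          # hide the masked attributes

h = torch.relu(encoder(adj @ x_in))       # node-level representations
loss = ((decoder(h)[mask] - x[mask]) ** 2).mean()
loss.backward(); opt.step()               # the encoder is later fine-tuned on labels
```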
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
Graph neural networks (GNNs) based on the message passing (MP) paradigm exchange information between 1-hop neighbors to build node representations at each layer. In principle, such networks cannot capture long-range interactions (LRI) that may be desired or necessary for learning a given task on graphs. Recently, there has been increasing interest in the development of Transformer-based methods for graphs that can consider full node connectivity beyond the original sparse structure, thus enabling the modeling of LRI. However, MP-GNNs that simply rely on 1-hop message passing often fare better in several existing graph benchmarks when combined with positional feature representations, thereby limiting the perceived utility and ranking of Transformer-like architectures. Here, we present the Long Range Graph Benchmark (LRGB) with 5 graph learning datasets: PascalVOC-SP, COCO-SP, PCQM-Contact, Peptides-func, and Peptides-struct, which arguably require LRI reasoning to achieve strong performance on the given task. We benchmark both baseline GNNs and Graph Transformer networks to verify that the models which capture long-range dependencies perform significantly better on these tasks. These datasets are therefore suitable for benchmarking and exploration of MP-GNN and Graph Transformer architectures that are intended to capture LRI.
Graph neural networks (GNNs) rely on the graph structure to define an aggregation strategy, in which each node updates its representation by combining information from its neighbors. A known limitation of GNNs is that, as the number of layers increases, information gets smoothed and squashed, node embeddings become indistinguishable, and performance is negatively affected. Consequently, practical GNN models employ few layers and can only leverage the graph structure in a limited neighborhood around each node. Inevitably, practical GNNs do not capture information depending on the global structure of the graph. While there have been several studies on the limitations and expressivity of GNNs, the question of whether practical applications on graph-structured data require global structural knowledge remains unanswered. In this work, we address this question empirically by giving access to global information to several GNN models and observing its impact on downstream performance. Our results show that global information can in fact provide significant benefits for common graph-related tasks. We further identify a novel regularization strategy that leads to an average accuracy improvement of more than 5% on all considered tasks.
Inferring missing links or detecting spurious ones based on observed graphs, known as link prediction, is a long-standing challenge in graph data analysis. With the recent advances in deep learning, graph neural networks have been used for link prediction and have achieved state-of-the-art performance. Nevertheless, existing methods developed for this purpose are typically discriminative, computing features of local subgraphs around two neighboring nodes and predicting potential links between them from the perspective of subgraph classification. In this formalism, the selection of enclosing subgraphs and heuristic structural features for subgraph classification significantly affects the performance of the methods. To overcome this limitation, this paper proposes a novel and radically different link prediction algorithm based on the network reconstruction theory, called GraphLP. Instead of sampling positive and negative links and heuristically computing the features of their enclosing subgraphs, GraphLP utilizes the feature learning ability of deep-learning models to automatically extract the structural patterns of graphs for link prediction under the assumption that real-world graphs are not locally isolated. Moreover, GraphLP explores high-order connectivity patterns to utilize the hierarchical organizational structures of graphs for link prediction. Our experimental results on all common benchmark datasets from different applications demonstrate that the proposed method consistently outperforms other state-of-the-art methods. Unlike the discriminative neural network models used for link prediction, GraphLP is generative, which provides a new paradigm for neural-network-based link prediction.
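A schematic sketch of the reconstruction view of link prediction, under our own assumptions about the architecture: the observed adjacency and its higher powers (high-order connectivity) are mixed into a reconstructed adjacency trained to reproduce the observed graph; scores of unobserved pairs then rank candidate links.

```python
import torch
import torch.nn as nn

class Reconstructor(nn.Module):
    def __init__(self, orders=3):
        super().__init__()
        # Learnable mixing weights over connectivity orders (A, A^2, A^3, ...).
        self.mix = nn.Parameter(torch.ones(orders) / orders)

    def forward(self, adj):
        powers, a = [], adj
        for _ in range(self.mix.size(0)):
            powers.append(a)
            a = a @ adj
        # Weighted combination of connectivity orders, squashed to [0, 1].
        return torch.sigmoid(sum(w * p for w, p in zip(self.mix, powers)))

adj = (torch.rand(15, 15) > 0.7).float()
recon = Reconstructor()(adj)
loss = nn.functional.binary_cross_entropy(recon, adj)  # reconstruction objective
```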
Graph Neural Networks (GNNs) have shown great potential in the field of graph representation learning. Standard GNNs define a local message-passing mechanism which propagates information over the whole graph domain by stacking multiple layers. This paradigm suffers from two major limitations, over-squashing and poor long-range dependencies, that can be solved using global attention but significantly increases the computational cost to quadratic complexity. In this work, we propose an alternative approach to overcome these structural limitations by leveraging the ViT/MLP-Mixer architectures introduced in computer vision. We introduce a new class of GNNs, called Graph MLP-Mixer, that holds three key properties. First, they capture long-range dependency and mitigate the issue of over-squashing as demonstrated on the Long Range Graph Benchmark (LRGB) and the TreeNeighbourMatch datasets. Second, they offer better speed and memory efficiency with a complexity linear to the number of nodes and edges, surpassing the related Graph Transformer and expressive GNN models. Third, they show high expressivity in terms of graph isomorphism as they can distinguish at least 3-WL non-isomorphic graphs. We test our architecture on 4 simulated datasets and 7 real-world benchmarks, and show highly competitive results on all of them.
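A skeletal sketch of the mixer idea on graphs: the graph is cut into patches (here by a trivial node split; the paper uses a proper graph partitioner and a GNN patch encoder), and MLPs alternately mix across patches and across channels, keeping the cost linear in the number of patches.

```python
import torch
import torch.nn as nn

class GraphMixerBlock(nn.Module):
    def __init__(self, num_patches, dim):
        super().__init__()
        self.token_mlp = nn.Sequential(nn.Linear(num_patches, num_patches), nn.GELU())
        self.chan_mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU())

    def forward(self, patches):                   # (num_patches, dim)
        # Token mixing: information flows between patches (long-range by design).
        patches = patches + self.token_mlp(patches.t()).t()
        # Channel mixing: per-patch feature transformation.
        return patches + self.chan_mlp(patches)

x = torch.randn(30, 16)                                     # node embeddings
patches = torch.stack([p.mean(dim=0) for p in x.chunk(6)])  # naive 6-patch split
out = GraphMixerBlock(6, 16)(patches).mean(dim=0)           # graph representation
```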
The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet, compared to mainstream GNN variants, it has not achieved competitive performance on popular leaderboards of graph-level prediction. Therefore, it remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture and can attain excellent results on a broad range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight for utilizing Transformers in graphs is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods to help Graphormer better model graph-structured data. Besides, we mathematically characterize the expressive power of Graphormer and show that, with our ways of encoding the structural information of graphs, many popular GNN variants can be covered as special cases of Graphormer.
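Two of the structural encodings are easy to distill into code: node degree added to the input (centrality encoding) and pairwise shortest-path distances turned into a learned additive attention bias (spatial encoding). The embedding sizes and single attention head below are simplifications.

```python
import torch
import torch.nn as nn

class StructuralAttention(nn.Module):
    def __init__(self, dim, max_degree=16, max_dist=8):
        super().__init__()
        self.deg_emb = nn.Embedding(max_degree, dim)     # centrality encoding
        self.dist_bias = nn.Embedding(max_dist, 1)       # spatial encoding
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))

    def forward(self, x, degree, spd):                   # spd: (N, N) hop distances
        x = x + self.deg_emb(degree.clamp(max=15))       # inject node centrality
        att = self.q(x) @ self.k(x).t() / x.size(-1) ** 0.5
        # Bias each attention logit by the learned shortest-path-distance term.
        att = att + self.dist_bias(spd.clamp(max=7)).squeeze(-1)
        return torch.softmax(att, dim=-1) @ self.v(x)

x = torch.randn(6, 16)
degree = torch.tensor([2, 3, 1, 2, 2, 4])
spd = torch.randint(0, 5, (6, 6))
out = StructuralAttention(16)(x, degree, spd)
```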
Graph-structured datasets usually have irregular graph sizes and connectivities, rendering the use of recent data augmentation techniques, such as Mixup, difficult. To tackle this challenge, we present the first Mixup-like graph augmentation method at the graph level, called Graph Transplant, which mixes irregular graphs in data space. To be well defined on various scales of the graph, our method identifies a sub-structure as a mix unit that can preserve local information. Since Mixup-based methods without special consideration of the context are prone to generating noisy samples, our method explicitly employs node saliency information to select meaningful subgraphs and adaptively determine the labels. We extensively validate our method with diverse GNN architectures on multiple graph classification benchmark datasets from a variety of graph domains of different sizes. Experimental results show the consistent superiority of our method over other basic data augmentation baselines. We also demonstrate that Graph Transplant enhances performance in terms of robustness and model calibration.
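A rough sketch of saliency-guided subgraph mixup: a salient subgraph of a donor graph replaces the least salient part of a host graph, and the mixed label weight follows the transplanted saliency mass. The gradient-based saliency and the attachment of cross edges are left abstract here, so both are assumptions.

```python
import torch

def transplant(x1, a1, x2, a2, sal1, sal2, k):
    # Donor: the k most salient nodes of graph 1; host loses its k least salient.
    keep = sal1.topk(k).indices
    drop = (-sal2).topk(k).indices
    host = torch.ones(x2.size(0), dtype=torch.bool)
    host[drop] = False
    x_new = torch.cat([x2[host], x1[keep]])
    # Block-diagonal adjacency; cross edges could be added by any attachment rule.
    n_h, n_d = int(host.sum()), k
    a_new = torch.zeros(n_h + n_d, n_h + n_d)
    a_new[:n_h, :n_h] = a2[host][:, host]
    a_new[n_h:, n_h:] = a1[keep][:, keep]
    # Adaptive label weight: salient mass contributed by the donor subgraph.
    lam = sal1[keep].sum() / (sal1[keep].sum() + sal2[host].sum())
    return x_new, a_new, lam

x1, a1 = torch.randn(8, 4), (torch.rand(8, 8) > 0.5).float()
x2, a2 = torch.randn(10, 4), (torch.rand(10, 10) > 0.5).float()
x_m, a_m, lam = transplant(x1, a1, x2, a2, torch.rand(8), torch.rand(10), k=3)
```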