Graph pre-training strategies have been attracting a surge of attention in the graph mining community, due to their flexibility in parameterizing graph neural networks (GNNs) without any label information. The key idea lies in encoding valuable information by predicting the masked graph signals extracted from the input graphs. To balance the importance of diverse graph signals (e.g., nodes, edges, subgraphs), existing approaches are mostly hand-engineered, introducing hyperparameters to re-weight the graph signals. However, human interventions with sub-optimal hyperparameters often inject additional bias and deteriorate the generalization performance in downstream applications. This paper addresses these limitations from a new perspective, i.e., deriving a curriculum for pre-training GNNs. We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs with diverse structures and disparate feature spaces. To comprehend heterogeneous graph signals at different granularities, we propose a curriculum learning paradigm that automatically re-weighs graph signals in order to ensure a good generalization in the target domain. Moreover, we shed new light on the problem of domain adaptation of relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs. Extensive experiments on a wealth of real graphs validate and verify the performance of MentorGNN.
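The abstract does not give implementation details, but its central mechanism, replacing hand-tuned re-weighting hyperparameters with weights learned during pre-training, can be sketched. A minimal sketch in PyTorch, assuming the per-granularity reconstruction losses are already computed; the class name `CurriculumWeighter` is illustrative, not from the paper:

```python
import torch
import torch.nn as nn

class CurriculumWeighter(nn.Module):
    """Softmax-normalized learnable weights over per-signal losses,
    standing in for hand-tuned re-weighting hyperparameters."""
    def __init__(self, num_signals: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_signals))

    def forward(self, signal_losses: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.logits, dim=0)
        return (weights * signal_losses).sum()

# Stand-ins for masked-signal reconstruction losses at three
# granularities: node, edge, and subgraph.
signal_losses = torch.tensor([0.9, 0.4, 1.3])
weighter = CurriculumWeighter(num_signals=3)
total_loss = weighter(signal_losses)  # differentiable, so the weighting adapts
```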
Given a resource-rich source graph and a resource-scarce target graph, how can we effectively transfer knowledge across graphs and ensure a good generalization performance? In many high-impact domains (e.g., brain networks and molecular graphs), collecting and annotating data is prohibitively expensive and time-consuming, which makes domain adaptation an attractive option to alleviate the label scarcity issue. In light of this, the state-of-the-art methods focus on deriving domain-invariant graph representation that minimizes the domain discrepancy. However, it has recently been shown that a small domain discrepancy loss may not always guarantee a good generalization performance, especially in the presence of disparate graph structures and label distribution shifts. In this paper, we present TRANSNET, a generic learning framework for augmenting knowledge transfer across graphs. In particular, we introduce a novel notion named trinity signal that can naturally formulate various graph signals at different granularity (e.g., node attributes, edges, and subgraphs). With that, we further propose a domain unification module together with a trinity-signal mixup scheme to jointly minimize the domain discrepancy and augment the knowledge transfer across graphs. Finally, comprehensive empirical results show that TRANSNET outperforms all existing approaches on seven benchmark datasets by a significant margin.
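The trinity-signal mixup is not specified beyond the abstract; the sketch below shows the generic mixup recipe applied to unified-domain signal embeddings and their soft labels, which is one plausible reading. The function `signal_mixup` and all tensor shapes are assumptions:

```python
import torch

def signal_mixup(h_src, h_tgt, y_src, y_tgt, alpha=0.2):
    """Mix source- and target-graph signal embeddings (and their soft
    labels) to augment cross-graph knowledge transfer. A sketch of the
    general mixup recipe, not TRANSNET's exact formulation."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    h_mix = lam * h_src + (1 - lam) * h_tgt
    y_mix = lam * y_src + (1 - lam) * y_tgt
    return h_mix, y_mix

# Embeddings produced by the (assumed) domain unification module.
h_src, h_tgt = torch.randn(32, 64), torch.randn(32, 64)
y_src = torch.nn.functional.one_hot(torch.randint(0, 5, (32,)), 5).float()
y_tgt = torch.nn.functional.one_hot(torch.randint(0, 5, (32,)), 5).float()
h_mix, y_mix = signal_mixup(h_src, h_tgt, y_src, y_tgt)
```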
Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest. While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs). The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs so that the GNN can learn useful local and global representations simultaneously. We systematically study pre-training on multiple graph classification datasets. We find that naïve strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading up to 9.4% absolute improvements in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction.
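To make the node-level versus graph-level distinction concrete, here is a self-contained sketch that combines an attribute-masking (node-level) loss with a graph-property (graph-level) loss on a toy dense GCN. It illustrates the idea of multi-granularity pre-training, not the paper's exact models or tasks:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def norm_adj(a):
    """Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2."""
    a = a + torch.eye(a.size(0))
    d = a.sum(1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)

class TinyGCN(nn.Module):
    """One dense GCN layer: H = ReLU(A_hat X W)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, a_hat, x):
        return F.relu(a_hat @ self.lin(x))

# Toy graph: 6 nodes with 8-dim features and one graph-level label.
a_hat = norm_adj(torch.bernoulli(torch.full((6, 6), 0.3)))
x = torch.randn(6, 8)

enc = TinyGCN(8, 16)
node_head = nn.Linear(16, 8)     # node level: reconstruct masked attributes
graph_head = nn.Linear(16, 2)    # graph level: predict a graph property

mask = torch.zeros(6, dtype=torch.bool)
mask[:2] = True                  # mask the attributes of two nodes
x_in = x.clone()
x_in[mask] = 0.0
h = enc(a_hat, x_in)
loss_node = F.mse_loss(node_head(h[mask]), x[mask])
loss_graph = F.cross_entropy(graph_head(h.mean(0, keepdim=True)), torch.tensor([1]))
loss = loss_node + loss_graph    # local and global signals trained jointly
```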
Due to the ubiquity of heterogeneous graphs in both academic and industrial settings, researchers have recently proposed many heterogeneous graph neural networks (HGNNs). In this paper, instead of pursuing a more powerful HGNN model, we are interested in devising a versatile plug-and-play module that accounts for the relational knowledge distilled from a pre-trained HGNN. To the best of our knowledge, we are the first to propose a HIgh-order RElational (HIRE) knowledge distillation framework on heterogeneous graphs, which can significantly boost prediction performance regardless of the model architecture of the HGNN. Concretely, our HIRE framework initially performs first-order node-level knowledge distillation, which encodes the semantics of the teacher HGNN with its prediction logits. Meanwhile, the second-order relation-level knowledge distillation imitates the relational correlations between node embeddings of different types generated by the teacher HGNN. Extensive experiments on various popular HGNN models and three real-world heterogeneous graphs demonstrate that our method obtains consistent and considerable performance enhancements, proving its effectiveness and generalization ability.
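A compact sketch of the two distillation terms: a softened-logit KL term for the first order, and a correlation-matching term for the second, assuming the relational statistic is a cosine-similarity matrix over per-node-type mean embeddings (the exact statistic used by HIRE may differ):

```python
import torch
import torch.nn.functional as F

def node_level_kd(student_logits, teacher_logits, tau=2.0):
    """First-order KD: match the teacher HGNN's softened prediction logits."""
    return F.kl_div(F.log_softmax(student_logits / tau, dim=-1),
                    F.softmax(teacher_logits / tau, dim=-1),
                    reduction="batchmean") * tau * tau

def relation_level_kd(student_emb, teacher_emb):
    """Second-order KD (sketch): match pairwise correlations between the
    per-node-type embeddings produced by teacher and student."""
    def corr(emb):                       # emb: [num_node_types, dim]
        z = F.normalize(emb, dim=-1)
        return z @ z.t()                 # cosine-similarity matrix
    return F.mse_loss(corr(student_emb), corr(teacher_emb))

# Toy tensors: 10 nodes with 3 classes, 4 node types with 16-dim embeddings.
s_logits, t_logits = torch.randn(10, 3), torch.randn(10, 3)
s_emb, t_emb = torch.randn(4, 16), torch.randn(4, 16)
loss = node_level_kd(s_logits, t_logits) + relation_level_kd(s_emb, t_emb)
```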
Deep learning on graphs has recently attracted significant interest. However, most of the work focuses on (semi-)supervised learning, resulting in shortcomings including heavy label reliance, poor generalization, and weak robustness. To address these issues, self-supervised learning (SSL), which extracts informative knowledge through well-designed pretext tasks without relying on manual labels, has become a promising and trending learning paradigm for graph data. Different from SSL in other domains such as computer vision and natural language processing, SSL on graphs has an exclusive background, design ideas, and taxonomies. Under the umbrella of graph self-supervised learning, we present a timely and comprehensive review of existing approaches that employ SSL techniques for graph data. We construct a unified framework that mathematically formalizes the paradigm of graph SSL. According to the objectives of pretext tasks, we divide these approaches into four categories: generation-based, auxiliary-property-based, contrast-based, and hybrid approaches. We further describe the applications of graph SSL across various research fields and summarize the commonly used datasets, evaluation benchmarks, performance comparisons, and open-source codes of graph SSL. Finally, we discuss the remaining challenges and potential future directions of this research field.
Graph neural networks (GNNs) have been widely applied to recommendation tasks and have obtained very appealing performance. However, most GNN-based recommendation methods suffer from data sparsity in practice. Meanwhile, pre-training techniques have achieved great success in mitigating data sparsity in various domains such as natural language processing (NLP) and computer vision (CV). Thus, graph pre-training has great potential to alleviate data sparsity in GNN-based recommendation. However, pre-training GNNs for recommendation faces unique challenges. For example, user-item interaction graphs in different recommendation tasks have distinct sets of users and items, and they often exhibit different properties. Therefore, the mechanisms commonly used in NLP and CV to transfer knowledge from pre-training tasks to downstream tasks, such as sharing learned embeddings or feature extractors, are not directly applicable to existing GNN-based recommendation models. To tackle these challenges, we delicately design an adaptive graph pre-training framework for localized collaborative filtering (ADAPT). It does not require transferring user/item embeddings and is able to capture both the common knowledge across different graphs and the uniqueness of each graph. Extensive experimental results demonstrate the effectiveness and superiority of ADAPT.
Graph machine learning has been extensively studied in both academia and industry. Although booming with a vast number of emerging methods and techniques, most of the literature is built on the in-distribution hypothesis, i.e., testing and training graph data are identically distributed. However, this in-distribution hypothesis can hardly be satisfied in many real-world graph scenarios where the model performance substantially degrades when there exist distribution shifts between testing and training graph data. To solve this critical problem, out-of-distribution (OOD) generalization on graphs, which goes beyond the in-distribution hypothesis, has made great progress and attracted ever-increasing attention from the research community. In this paper, we comprehensively survey OOD generalization on graphs and present a detailed review of recent advances in this area. First, we provide a formal problem definition of OOD generalization on graphs. Second, we categorize existing methods into three classes from conceptually different perspectives, i.e., data, model, and learning strategy, based on their positions in the graph machine learning pipeline, followed by detailed discussions for each category. We also review the theories related to OOD generalization on graphs and introduce the commonly used graph datasets for thorough evaluations. Finally, we share our insights on future research directions. This paper is the first systematic and comprehensive review of OOD generalization on graphs, to the best of our knowledge.
Graph neural networks are powerful deep learning tools for modeling graph-structured data and have demonstrated outstanding performance on numerous graph learning tasks. To address the data-noise and data-scarcity issues in deep graph learning, research on graph data augmentation has intensified recently. However, conventional data augmentation methods can hardly handle graph-structured data, which is defined in non-Euclidean space with multiple modalities. In this survey, we formally formulate the problem of graph data augmentation and further review the representative techniques and their applications in different deep learning problems. Specifically, we first propose a taxonomy of graph data augmentation techniques and then provide a structured review by categorizing related work according to the augmented information modality. Moreover, we summarize the applications of graph data augmentation in two representative problems in data-centric deep graph learning: (1) reliable graph learning, which focuses on enhancing the utility of the input graph as well as the model capacity via graph data augmentation; and (2) low-resource graph learning, which targets enlarging the labeled training data scale through graph data augmentation. For each problem, we also provide a hierarchical problem taxonomy and review the existing literature related to graph data augmentation. Finally, we point out promising research directions and challenges for future research.
Graph representation learning has emerged as a powerful technique for addressing real-world problems. Various downstream graph learning tasks have benefited from its recent developments, such as node classification, similarity search, and graph classification. However, prior arts on graph representation learning focus on domain-specific problems and train a dedicated model for each graph dataset, which is usually non-transferable to out-of-domain data. Inspired by the recent advances in pre-training from natural language processing and computer vision, we design Graph Contrastive Coding (GCC), a self-supervised graph neural network pre-training framework, to capture the universal network topological properties across multiple networks. We design GCC's pre-training task as subgraph instance discrimination in and across networks and leverage contrastive learning to empower graph neural networks to learn the intrinsic and transferable structural representations. We conduct extensive experiments on three graph learning tasks and ten graph datasets. The results show that GCC pre-trained on a collection of diverse datasets can achieve competitive or better performance compared to its task-specific, trained-from-scratch counterparts. This suggests that the pre-training and fine-tuning paradigm presents great potential for graph representation learning.
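GCC's subgraph instance discrimination reduces to a contrastive (InfoNCE-style) objective over subgraph embeddings. A minimal sketch, with random tensors standing in for the GNN encodings of two sampled views of each subgraph:

```python
import torch
import torch.nn.functional as F

def infonce(z_query, z_key, tau=0.07):
    """Subgraph instance discrimination (sketch): each query subgraph
    embedding must match its own key among all keys in the batch."""
    q = F.normalize(z_query, dim=-1)
    k = F.normalize(z_key, dim=-1)
    logits = q @ k.t() / tau                 # [B, B] similarity matrix
    labels = torch.arange(q.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# z_query / z_key would be GNN encodings of two random-walk-sampled
# views of the same subgraph; random tensors stand in here.
loss = infonce(torch.randn(64, 128), torch.randn(64, 128))
```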
Although substantial efforts have been made using graph neural networks (GNNs) for AI-driven drug discovery (AIDD), effective molecular representation learning remains an open challenge, especially in the case of insufficient labeled molecules. Recent studies suggest that big GNN models pre-trained by self-supervised learning on unlabeled datasets enable better transfer performance in downstream molecular property prediction tasks. However, they often require large-scale datasets and considerable computational resources, which is time-consuming, computationally expensive, and environmentally unfriendly. To alleviate these limitations, we propose a novel pre-training model for molecular representation learning, Bi-branch Masked Graph Transformer Autoencoder (BatmanNet). BatmanNet features two tailored and complementary graph autoencoders to reconstruct the missing nodes and edges from a masked molecular graph. Surprisingly, we discovered that masking a high proportion (60%) of the atoms and bonds achieves the best performance. We further propose an asymmetric graph-based encoder-decoder architecture for both nodes and edges, where a transformer-based encoder only takes the visible subset of nodes or edges, and a lightweight decoder reconstructs the original molecule from the latent representation and mask tokens. With this simple yet effective asymmetrical design, our BatmanNet can learn efficiently even from a much smaller-scale unlabeled molecular dataset to capture the underlying structural and semantic information, overcoming a major limitation of current deep neural networks for molecular representation learning. For instance, using only 250K unlabeled molecules as pre-training data, our BatmanNet with 2.575M parameters achieves a 0.5% improvement on the average AUC compared with the current state-of-the-art method with 100M parameters pre-trained on 11M molecules.
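The high-ratio masking and the asymmetric encoder/decoder split can be illustrated in a few lines. This is a sketch of the data flow only, assuming node-level masking with a learned mask token; BatmanNet's actual branches also mask edges:

```python
import torch

def mask_nodes(x, mask_ratio=0.6):
    """Hide a high proportion (60%) of nodes; only the visible subset is
    fed to the transformer-based encoder."""
    num_mask = int(mask_ratio * x.size(0))
    perm = torch.randperm(x.size(0))
    return perm[num_mask:], perm[:num_mask]   # visible indices, masked indices

x = torch.randn(20, 32)                       # 20 atoms with 32-dim features
vis_idx, msk_idx = mask_nodes(x)
x_visible = x[vis_idx]                        # encoder input (visible subset only)

# The lightweight decoder receives encoder outputs plus a learned mask
# token at the masked positions and reconstructs the original features.
mask_token = torch.zeros(1, 32)               # a learned nn.Parameter in practice
dec_in = torch.zeros(20, 32)
dec_in[vis_idx] = x_visible                   # in practice: encoder(x_visible)
dec_in[msk_idx] = mask_token                  # broadcast the token
```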
Molecular representation learning is crucial for the problem of molecular property prediction, where graph neural networks (GNNs) serve as an effective solution due to their structure modeling capabilities. Since labeled data is often scarce and expensive to obtain, it is a great challenge for GNNs to generalize in the extensive molecular space. Recently, the training paradigm of "pre-train, fine-tune" has been leveraged to improve the generalization capabilities of GNNs. It uses self-supervised information to pre-train the GNN, and then performs fine-tuning to optimize the downstream task with just a few labels. However, pre-training does not always yield statistically significant improvement, especially for self-supervised learning with random structural masking. In fact, the molecular structure is characterized by motif subgraphs, which are frequently occurring and influence molecular properties. To leverage the task-related motifs, we propose a novel paradigm of "pre-train, prompt, fine-tune" for molecular representation learning, named molecule continuous prompt tuning (MolCPT). MolCPT defines a motif prompting function that uses the pre-trained model to project the standalone input into an expressive prompt. The prompt effectively augments the molecular graph with meaningful motifs in the continuous representation space; this provides more structural patterns to aid the downstream classifier in identifying molecular properties. Extensive experiments on several benchmark datasets show that MolCPT efficiently generalizes pre-trained GNNs for molecular property prediction, with or without a few fine-tuning steps.
It is well known that the success of graph neural networks (GNNs) highly relies on abundant human-annotated data, which is laborious to obtain and not always available in practice. When only few labeled nodes are available, how to develop highly effective GNNs remains understudied. Although self-training has been shown to be powerful for semi-supervised learning, its application to graph-structured data may fail because (1) larger receptive fields are not leveraged to capture long-range node interactions, which exacerbates the difficulty of propagating feature-label patterns from labeled to unlabeled nodes; and (2) limited labeled data makes it challenging to learn well-separated decision boundaries for different node classes without explicitly capturing the underlying semantic structure. To address the challenge of capturing informative structural and semantic knowledge, we propose a new graph data augmentation framework, AGST (Augmented Graph Self-Training), which is built with two novel (i.e., structural and semantic) augmentation modules on top of a GST backbone. In this work, we investigate whether this novel framework can learn an effective graph predictive model with extremely limited labeled nodes. We conduct comprehensive evaluations of semi-supervised node classification under different scenarios of limited labeled-node data. The experimental results demonstrate the unique contributions of the novel data augmentation framework to node classification with few labeled data.
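The graph self-training (GST) backbone step can be sketched as standard confidence-thresholded pseudo-labeling; AGST's structural and semantic augmentation modules are omitted here, and the threshold value is an assumption:

```python
import torch
import torch.nn.functional as F

def pseudo_label(logits, threshold=0.9):
    """Self-training step (sketch): treat high-confidence predictions on
    unlabeled nodes as extra training labels for the next round."""
    probs = F.softmax(logits, dim=-1)
    conf, preds = probs.max(dim=-1)
    keep = conf >= threshold
    return preds[keep], keep

logits_unlabeled = torch.randn(100, 4)   # model predictions on unlabeled nodes
labels, keep = pseudo_label(logits_unlabeled)
# AGST additionally applies structural and semantic augmentation before
# each self-training round; this sketch shows only the backbone step.
```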
Graph neural networks (GNNs), a generalization of deep neural networks to graph data, have been widely used in various domains, ranging from drug discovery to recommender systems. However, GNNs in these applications are limited when there are few available samples. Meta-learning has been an important framework to address the lack of samples in machine learning, and in recent years researchers have started to apply meta-learning to GNNs. In this work, we provide a comprehensive survey of different meta-learning approaches involving GNNs on various graph problems, showing the power of using these two approaches together. We categorize the literature based on the proposed architectures, shared representations, and applications. Finally, we discuss several exciting future research directions and open problems.
Self-supervised learning has gradually emerged as a powerful technique for graph representation learning. However, transferable, generalizable, and robust representation learning on graph data remains a challenge for pre-training graph neural networks. In this paper, we propose a simple and effective self-supervised pre-training strategy, named Pairwise Half-graph Discrimination (PHD), that explicitly pre-trains a graph neural network at the graph level. PHD is designed as a simple binary classification task to discriminate whether two half-graphs come from the same source. Experiments demonstrate that PHD is an effective pre-training strategy that offers comparable or superior performance on 13 graph classification tasks compared with state-of-the-art strategies, and achieves notable improvements when combined with node-level strategies. Moreover, the visualization of the learned representations reveals that the PHD strategy indeed empowers the model to learn graph-level knowledge such as molecular scaffolds. These results establish PHD as a powerful and effective self-supervised strategy for graph-level representation learning.
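The pretext task itself is simple enough to sketch. Below, half-graphs are approximated by splitting node feature sets in half; the actual method splits the graph structure, so treat this purely as an illustration of the pairing and labeling scheme:

```python
import torch

def make_phd_pair(x_a, x_b, same_source: bool):
    """Pairwise half-graph discrimination (sketch): split node sets in half
    and ask a binary classifier whether two halves share a source graph."""
    half_a1, half_a2 = x_a.chunk(2, dim=0)
    half_b1, _ = x_b.chunk(2, dim=0)
    pair = (half_a1, half_a2) if same_source else (half_a1, half_b1)
    label = torch.tensor([1.0 if same_source else 0.0])
    return pair, label

x_a, x_b = torch.randn(10, 16), torch.randn(10, 16)  # node features of two graphs
pos_pair, y_pos = make_phd_pair(x_a, x_b, same_source=True)
neg_pair, y_neg = make_phd_pair(x_a, x_b, same_source=False)
# Each half would be encoded by the GNN, the two embeddings combined
# (e.g., concatenated), and scored with a binary cross-entropy loss.
```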
Unsupervised graph representation learning is a non-trivial topic for graph data. The success of contrastive learning and self-supervised learning in the unsupervised representation learning of structured data motivates similar attempts on graphs. Current unsupervised graph representation learning and pre-training methods using contrastive losses are mainly based on contrasting hand-crafted augmentations of graph data. However, graph data augmentation remains under-explored due to its unpredictable invariance. In this paper, we propose a novel collaborative graph neural network contrastive learning framework (CGCL), which uses multiple graph encoders to observe the graph. Features observed from different views act as graph augmentations for contrastive learning between the encoders, avoiding any perturbation that could compromise invariance. CGCL can handle both graph-level and node-level representation learning. Extensive experiments demonstrate the advantages of CGCL in unsupervised graph representation learning and show that hand-crafted data augmentation combinations are not necessary for graph representation learning.
Self-supervised graph neural networks (GNNs) are highly desirable due to the widespread label scarcity issue in real-world graph/network data. Graph contrastive learning (GCL), by training GNNs to maximize the correspondence between the representations of the same graph in its different augmented forms, may yield robust and transferable GNNs even without using labels. However, GNNs trained by traditional GCL often risk capturing redundant graph features and thus may be brittle and provide sub-par performance in downstream tasks. Here, we propose a novel principle, termed adversarial GCL (AD-GCL), which enables GNNs to avoid capturing redundant information during training by optimizing adversarial graph augmentation strategies used in GCL. We pair AD-GCL with theoretical explanations and design a practical instantiation based on trainable edge-dropping graph augmentation. We experimentally validate AD-GCL against state-of-the-art GCL methods, achieving performance gains of up to 14% in unsupervised, 6% in transfer, and 3% in semi-supervised learning settings across 18 different benchmark datasets for molecular property regression and classification and social network classification.
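A sketch of the trainable edge-dropping augmenter, using a Gumbel-softmax relaxation of per-edge Bernoulli keep decisions so that the augmenter's parameters receive gradients; this parameterization is an assumption, not AD-GCL's exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeDropAugmenter(nn.Module):
    """Per-edge keep probabilities with a Gumbel-softmax relaxation,
    so the augmenter itself is trainable."""
    def __init__(self, num_edges):
        super().__init__()
        self.edge_logits = nn.Parameter(torch.zeros(num_edges))

    def forward(self, tau=1.0):
        two_way = torch.stack([self.edge_logits, -self.edge_logits], dim=-1)
        keep = F.gumbel_softmax(two_way, tau=tau, hard=False)[:, 0]
        return keep  # soft edge weights in (0, 1), one per edge

augmenter = EdgeDropAugmenter(num_edges=50)
edge_weights = augmenter()
# Adversarial training alternates two updates: the augmenter maximizes the
# contrastive loss (drops as much as a regularizer allows), while the GNN
# encoder minimizes the same loss on the augmented views.
```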
Graph neural networks (GNNs) have received remarkable success in link prediction (GNNLP) tasks. Existing efforts first predefine the subgraph for the whole dataset and then apply GNNs to encode edge representations by leveraging the neighborhood structure induced by the fixed subgraph. The prominence of GNNLP methods significantly relies on the ad hoc subgraph. Since node connectivity in real-world graphs is complex, one shared subgraph is limited for all edges. Thus, the choices of subgraphs should be personalized to different edges. However, performing personalized subgraph selection is nontrivial since the potential selection space grows exponentially with the number of edges. Besides, the inference edges are not available during training in link prediction scenarios, so the selection process needs to be inductive. To bridge the gap, we introduce a Personalized Subgraph Selector (PS2) as a plug-and-play framework to automatically, personally, and inductively identify optimal subgraphs for different edges when performing GNNLP. PS2 is instantiated as a bi-level optimization problem that can be efficiently solved differentiably. Coupling GNNLP models with PS2, we suggest a brand-new angle towards GNNLP training: first identify the optimal subgraphs for edges, and then focus on training the inference model using the sampled subgraphs. Comprehensive experiments endorse the effectiveness of our proposed method across various GNNLP backbones (GCN, GraphSage, NGCF, LightGCN, and SEAL) and diverse benchmarks (Planetoid, OGB, and Recommendation datasets). Our code is publicly available at \url{https://github.com/qiaoyu-tan/PS2}
Transfer learning refers to the transfer of knowledge or information from a relevant source domain to a target domain. However, most existing transfer learning theories and algorithms focus on IID tasks, where the source/target samples are assumed to be independent and identically distributed. Very little effort is devoted to theoretically studying the knowledge transferability on non-IID tasks, e.g., cross-network mining. To bridge the gap, in this paper, we propose rigorous generalization bounds and algorithms for cross-network transfer learning from a source graph to a target graph. The crucial idea is to characterize the cross-network knowledge transferability from the perspective of the Weisfeiler-Lehman graph isomorphism test. To this end, we propose a novel Graph Subtree Discrepancy to measure the graph distribution shift between source and target graphs. Then the generalization error bounds on cross-network transfer learning, including both cross-network node classification and link prediction tasks, can be derived in terms of the source knowledge and the Graph Subtree Discrepancy across domains. This thereby motivates us to propose a generic graph adaptive network (GRADE) to minimize the distribution shift between source and target graphs for cross-network transfer learning. Experimental results verify the effectiveness and efficiency of our GRADE framework on both cross-network node classification and cross-domain recommendation tasks.
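The paper defines the Graph Subtree Discrepancy through the Weisfeiler-Lehman (WL) test. As a rough illustration, the sketch below compares normalized WL subtree-label histograms of two graphs with a total-variation distance; this is a simplified stand-in, not the paper's exact measure:

```python
from collections import Counter

def wl_histogram(adj, labels, iters=2):
    """WL subtree features for one graph: iteratively hash each node's label
    together with its sorted neighbor labels, counting all labels seen."""
    hist = Counter(labels)
    for _ in range(iters):
        labels = [hash((labels[v], tuple(sorted(labels[u] for u in adj[v]))))
                  for v in range(len(adj))]
        hist.update(labels)
    return hist

def subtree_discrepancy(adj_s, lab_s, adj_t, lab_t):
    """Total-variation distance between normalized WL subtree histograms,
    a simple proxy for the graph distribution shift across domains."""
    h_s, h_t = wl_histogram(adj_s, lab_s), wl_histogram(adj_t, lab_t)
    n_s, n_t = sum(h_s.values()), sum(h_t.values())
    keys = set(h_s) | set(h_t)
    return 0.5 * sum(abs(h_s[k] / n_s - h_t[k] / n_t) for k in keys)

# Adjacency lists and integer node labels for two toy graphs.
adj_s, lab_s = [[1, 2], [0], [0]], [0, 1, 1]   # a star
adj_t, lab_t = [[1], [0, 2], [1]], [1, 0, 1]   # a path
print(subtree_discrepancy(adj_s, lab_s, adj_t, lab_t))
```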
Heterogeneous graph convolutional networks have gained great popularity in tackling various network analysis tasks on heterogeneous network data, ranging from link prediction to node classification. However, most existing works ignore the relation heterogeneity of multiplex networks between multi-typed nodes and the differing importance of relations in meta-paths for node embedding, and thus can hardly capture heterogeneous structural signals across different relations. To tackle this challenge, this work proposes a Multiplex Heterogeneous Graph Convolutional Network (MHGCN) for heterogeneous network embedding. MHGCN automatically learns useful heterogeneous meta-path interactions of different lengths in multiplex heterogeneous networks through multi-layer convolutional aggregation. Additionally, we effectively integrate both multi-relation structural signals and attribute semantics into the learned node embeddings, under both unsupervised and semi-supervised learning paradigms. Extensive experiments on five real-world datasets with various network analysis tasks demonstrate the significant superiority of MHGCN over state-of-the-art embedding baselines in terms of all evaluation metrics.
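The core fusion step, a learned weighting over per-relation adjacency matrices followed by convolutional aggregation, can be sketched as follows; the layer details and the (semi-)supervised objectives are omitted, and this is not MHGCN's exact architecture:

```python
import torch
import torch.nn as nn

class MultiplexAggregation(nn.Module):
    """Fuse per-relation adjacency matrices with learnable weights,
    then apply simple dense convolution layers."""
    def __init__(self, num_relations, in_dim, hid_dim, layers=2):
        super().__init__()
        self.rel_weights = nn.Parameter(torch.ones(num_relations))
        self.lins = nn.ModuleList(
            [nn.Linear(in_dim if i == 0 else hid_dim, hid_dim)
             for i in range(layers)])

    def forward(self, rel_adjs, x):   # rel_adjs: [R, N, N], x: [N, in_dim]
        a = (self.rel_weights.view(-1, 1, 1) * rel_adjs).sum(0)  # learned fusion
        for lin in self.lins:         # multi-layer aggregation
            x = torch.relu(a @ lin(x))
        return x

rel_adjs = torch.bernoulli(torch.full((3, 8, 8), 0.2))  # 3 relations, 8 nodes
model = MultiplexAggregation(num_relations=3, in_dim=5, hid_dim=16)
out = model(rel_adjs, torch.randn(8, 5))
```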
Knowledge distillation has recently become a popular technique for improving the generalization ability of convolutional neural networks. However, its effect on graph neural networks has been less than satisfactory, since graph topology and node attributes may change dynamically, and a static teacher model is insufficient to guide student training in such cases. In this paper, we tackle this challenge by simultaneously training a group of graph neural networks in an online distillation fashion, where the group knowledge plays the role of a dynamic virtual teacher and the structural changes in the graph neural networks are effectively captured. To improve the distillation performance, two types of knowledge are transferred among the students to enhance each other: local knowledge reflecting information in the graph topology and node attributes, and global knowledge reflecting the predictions over classes. The global knowledge is transferred with KL-divergence, as in vanilla knowledge distillation, while the local knowledge is exploited through an efficient adversarial cyclic learning framework. Extensive experiments validate the effectiveness of our proposed online adversarial distillation approach.
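A sketch of the global-knowledge part of the scheme: each student is distilled toward the group-averaged soft predictions, which play the role of the dynamic virtual teacher. The adversarial cyclic transfer of local (topology/attribute) knowledge is omitted, and the temperature is an assumption:

```python
import torch
import torch.nn.functional as F

def group_distillation_loss(all_logits, tau=3.0):
    """Online distillation (sketch): each student matches the soft targets
    of the group average, a dynamic virtual teacher."""
    teacher = F.softmax(torch.stack(all_logits).mean(0) / tau, dim=-1).detach()
    loss = 0.0
    for logits in all_logits:
        loss = loss + F.kl_div(F.log_softmax(logits / tau, dim=-1),
                               teacher, reduction="batchmean") * tau * tau
    return loss / len(all_logits)

# Three student GNNs' class logits over the same batch of 32 nodes.
students = [torch.randn(32, 7, requires_grad=True) for _ in range(3)]
loss = group_distillation_loss(students)
loss.backward()
```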