In this paper, we study the problem of self-supervised learning for node representation learning on non-homophilous graphs. Existing self-supervised learning methods typically assume the graph is homophilous, where linked nodes often belong to the same class or have similar features. However, such an assumption of homophily does not always hold in real-world graphs. We address this problem by developing a Decoupled Self-supervised Learning (DSSL) framework for graph neural networks. DSSL imitates the generative process of nodes and links via latent variable modeling of the semantic structure, which decouples the different underlying semantics between different neighborhoods in the self-supervised node learning process. Our DSSL framework is encoder-agnostic and does not require pre-defined augmentations, and is thus flexible across different graphs. To effectively optimize the framework with latent variables, we derive the evidence lower bound of the self-supervised objective and develop a scalable training algorithm with variational inference. We provide theoretical analysis to justify that DSSL enjoys better downstream performance. Extensive experiments on various types of graph benchmarks demonstrate that our proposed framework can achieve significantly better performance compared with competitive self-supervised learning baselines.
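To make the optimization machinery concrete, the sketch below shows a generic evidence lower bound (ELBO) over per-node latent variables, trained with the reparameterization trick. The function names and the Gaussian likelihood are illustrative assumptions, not the paper's actual DSSL objective.

```python
import torch
import torch.nn.functional as F

def elbo(mu, logvar, x, decoder):
    """Generic evidence lower bound for per-node latents z ~ q(z|x) = N(mu, diag(exp(logvar))).

    ELBO = E_q[log p(x|z)] - KL(q(z|x) || N(0, I)).
    """
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)                  # reparameterization trick
    recon = decoder(z)                                    # hypothetical decoder network
    log_lik = -F.mse_loss(recon, x, reduction="sum")      # Gaussian likelihood up to a constant
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return log_lik - kl                                   # maximize this (minimize its negative)
```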
Graph Contrastive Learning (GCL) has recently drawn much research interest for learning generalizable node representations in a self-supervised manner. In general, the contrastive learning process in GCL is performed on top of the representations learned by a graph neural network (GNN) backbone, which transforms and propagates the node contextual information based on its local neighborhoods. However, nodes sharing similar characteristics may not always be geographically close, which poses a great challenge for unsupervised GCL efforts due to their inherent limitations in capturing such global graph knowledge. In this work, we address their inherent limitations by proposing a simple yet effective framework -- Simple Neural Networks with Structural and Semantic Contrastive Learning (S^3-CL). Notably, by virtue of the proposed structural and semantic contrastive learning algorithms, even a simple neural network can learn expressive node representations that preserve valuable global structural and semantic patterns. Our experiments demonstrate that the node representations learned by S^3-CL achieve superior performance on different downstream tasks compared with the state-of-the-art unsupervised GCL methods. Implementation and more experimental details are publicly available at \url{https://github.com/kaize0409/S-3-CL}.
Graph-level representations are critical in various real-world applications, such as predicting the properties of molecules. But in practice, precise graph annotations are generally very expensive and time-consuming. To address this issue, graph contrastive learning constructs an instance discrimination task that pulls together positive pairs (augmentation pairs of the same graph) and pushes away negative pairs (augmentation pairs of different graphs) for unsupervised representation learning. However, since for a query its negatives are uniformly sampled from all graphs, existing methods suffer from the critical sampling bias issue, i.e., the negatives may have the same semantic structure as the query, leading to performance degradation. To mitigate this sampling bias issue, in this paper we propose a Prototypical Graph Contrastive Learning (PGCL) approach. Specifically, PGCL models the underlying semantic structure of the graph data by clustering semantically similar graphs into the same group, and simultaneously encourages clustering consistency for different augmentations of the same graph. Then, given a query, it performs negative sampling by drawing graphs from clusters that differ from the query's cluster, which ensures semantic difference between the query and its negative samples. Moreover, for a query, PGCL further reweights its negative samples based on the distance between their prototypes (cluster centroids) and the query prototype, such that negatives with moderate prototype distance receive relatively large weights. This reweighting strategy proves more effective than uniform sampling. Experimental results on various graph benchmarks demonstrate the advantages of our PGCL over state-of-the-art methods. Code is publicly available at https://github.com/ha-lins/pgcl.
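As a rough illustration of the prototype-based reweighting idea, the sketch below uses a Gaussian bump around the mean prototype distance so that moderate-distance negatives get the largest weights; the weighting function and names are illustrative assumptions, not the paper's exact scheme.

```python
import torch

def reweight_negatives(query, negatives, query_proto, neg_protos, tau=0.5, sigma=0.1):
    """Reweight negatives by prototype distance so that negatives whose cluster
    prototype sits at a moderate distance from the query's prototype get the
    largest weights. All inputs are L2-normalized embedding tensors."""
    proto_dist = 1.0 - neg_protos @ query_proto                       # cosine distances
    weights = torch.exp(-(proto_dist - proto_dist.mean()) ** 2 / (2 * sigma ** 2))
    weights = weights / weights.sum() * len(weights)                  # normalize to mean 1
    logits = negatives @ query / tau                                  # similarity logits
    return torch.logsumexp(torch.log(weights + 1e-12) + logits, dim=0)
    # plug the returned value into an InfoNCE-style loss as the negative term
```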
Self-supervised learning, especially contrastive learning, on heterogeneous graphs can effectively get rid of the dependence on supervised data. Meanwhile, most existing representation learning methods embed heterogeneous graphs into a single geometric space, either Euclidean or hyperbolic. Such a single geometric view is usually insufficient to observe the complete picture of heterogeneous graphs, owing to their rich semantics and complex structures. Under these observations, this paper proposes a novel self-supervised learning method, termed Geometry Contrastive Learning (GCL), to better represent heterogeneous graphs when supervised data is unavailable. GCL views a heterogeneous graph from Euclidean and hyperbolic perspectives simultaneously, aiming to strongly merge the abilities of modeling rich semantics and complex structures, which is expected to bring more benefits to downstream tasks. GCL maximizes the mutual information between the two geometric views by contrasting representations at both the local-local and local-global semantic levels. Extensive experiments on four benchmark datasets show that the proposed approach outperforms strong baselines, including both unsupervised and supervised methods, on three tasks: node classification, node clustering, and similarity search.
Although considerable progress has been made in graph representation learning (GRL), it remains a challenge to extract and embed the rich topological structure and feature information in an adequate way. Most existing methods focus on local structure and fail to fully incorporate the global topological structure. To this end, we propose a novel Structure-Preserving Graph Representation Learning (SPGRL) method to fully capture the structural information of graphs. Specifically, to reduce the uncertainty and misinformation of the original graph, we construct a feature graph as a complementary view via the k-nearest neighbor method. The feature graph can be used for node-level contrast to capture local relations. Besides, we retain the global topological-structure information by maximizing the mutual information (MI) between the whole graph and the feature embeddings, which is theoretically reduced to exchanging the feature embeddings of the feature graph and the original graph to reconstruct themselves. Extensive experiments show that our method achieves quite superior performance on semi-supervised node classification tasks, as well as excellent robustness under noise perturbations on the graph structure or node features.
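The complementary feature-graph view described above can be sketched in a few lines; cosine similarity and a dense adjacency matrix are simplifying assumptions here, not necessarily the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def knn_feature_graph(x, k=10):
    """Build a k-nearest-neighbor graph over node features as a complementary
    view of the original graph (cosine similarity; returns a dense adjacency)."""
    x = F.normalize(x, dim=1)
    sim = x @ x.t()                                   # pairwise cosine similarity
    sim.fill_diagonal_(-float("inf"))                 # exclude self-loops
    idx = sim.topk(k, dim=1).indices                  # k most similar nodes per row
    adj = torch.zeros_like(sim).scatter_(1, idx, 1.0)
    return ((adj + adj.t()) > 0).float()              # symmetrize
```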
Contrastive learning methods based on InfoNCE loss are popular in node representation learning tasks on graph-structured data. However, its reliance on data augmentation and its quadratic computational complexity might lead to inconsistency and inefficiency problems. To mitigate these limitations, in this paper, we introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL in short). Local-GCL consists of two key designs: 1) We fabricate the positive examples for each node directly using its first-order neighbors, which frees our method from the reliance on carefully-designed graph augmentations; 2) To improve the efficiency of contrastive learning on graphs, we devise a kernelized contrastive loss, which could be approximately computed in linear time and space complexity with respect to the graph size. We provide theoretical analysis to justify the effectiveness and rationality of the proposed methods. Experiments on various datasets with different scales and properties demonstrate that in spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
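A minimal sketch of the first design, treating first-order neighbors as positives; note that this naive version keeps the quadratic all-pairs denominator that the paper's kernelized loss is designed to avoid.

```python
import torch
import torch.nn.functional as F

def neighbor_contrastive_loss(z, edge_index, tau=0.5):
    """Augmentation-free contrastive loss that treats each node's first-order
    neighbors as positives. The full softmax denominator here is O(N^2);
    Local-GCL replaces it with a kernelized linear-time approximation."""
    z = F.normalize(z, dim=1)
    logits = z @ z.t() / tau                          # all-pairs similarities
    log_prob = F.log_softmax(logits, dim=1)
    src, dst = edge_index                             # positive pairs: graph edges
    return -log_prob[src, dst].mean()
```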
Link prediction is an important task with wide applications in various domains. However, most existing link prediction methods assume that the given graph follows the homophily assumption, and design similarity-based heuristics or representation learning approaches to predict links. Many real-world graphs, though, are heterophilic graphs, where the homophily assumption does not hold, which challenges existing link prediction methods. Generally, in heterophilic graphs there are many latent factors causing link formation, and two linked nodes tend to be similar in one or two factors but may be dissimilar in the others, leading to low overall similarity. Thus, one way forward is to learn disentangled representations for each node, with each vector capturing the node's latent representation on one factor; this paves a way to model link formation in heterophilic graphs, resulting in better node representation learning and link prediction performance. However, work on this is rather limited. Therefore, in this paper, we study the novel problem of disentangled representation learning for link prediction on heterophilic graphs. We propose a novel framework, DisenLink, which can learn disentangled representations by modeling link formation and performing factor-aware message passing to facilitate link prediction. Extensive experiments on 13 real-world datasets demonstrate the effectiveness of DisenLink for link prediction on both heterophilic and homophilic graphs. Our code is available at https://github.com/sjz5202/disenlink
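To illustrate the factor-wise intuition (two linked nodes need only agree on some factors), here is a hedged sketch of scoring a link from chunked, disentangled embeddings; the chunking and max-aggregation are illustrative choices, not DisenLink's actual formulation.

```python
import torch

def factor_link_score(h_u, h_v, num_factors):
    """Score a link from disentangled embeddings split into `num_factors`
    chunks: two nodes need only agree on some factors, not all of them."""
    fu = h_u.view(num_factors, -1)
    fv = h_v.view(num_factors, -1)
    factor_sim = torch.cosine_similarity(fu, fv, dim=1)  # one score per factor
    return factor_sim.max()                              # link if any factor matches well
```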
Graph-based anomaly detection has been widely used for detecting malicious activities in real-world applications. Existing attempts to address this problem have so far focused on structural feature engineering or learning in the binary classification regime. In this work, we propose to leverage graph contrastive coding and present the supervised GCCAD model, which contrasts abnormal nodes with normal ones in terms of their distances to the global context (e.g., the average of all nodes). To handle scenarios with scarce labels, we further enable GCCAD as a self-supervised framework by designing a graph corruption strategy for generating synthetic node labels. To achieve the contrastive objective, we design a graph neural network encoder that can infer and further remove suspicious links during message passing, as well as learn the global context of the input graph. We conduct extensive experiments on four public datasets, demonstrating that 1) GCCAD significantly and consistently outperforms various advanced baselines, and 2) its self-supervised version without fine-tuning can achieve performance comparable to its fully supervised version.
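A minimal sketch of the core scoring idea, taking each node's distance to the global context as the mean embedding; this is illustrative only, and GCCAD's encoder and corruption strategy are not shown.

```python
import torch
import torch.nn.functional as F

def global_context_score(z):
    """Anomaly score as distance to the global context (mean of all node
    embeddings); anomalous nodes should sit farther from the average."""
    context = F.normalize(z.mean(dim=0, keepdim=True), dim=1)
    z = F.normalize(z, dim=1)
    return 1.0 - (z @ context.t()).squeeze(1)         # cosine distance per node
```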
Self-supervised learning of graph neural networks (GNNs) is in great need because of the widespread label scarcity issue in real-world graph/network data. Graph contrastive learning (GCL), by training GNNs to maximize the correspondence between the representations of the same graph in its different augmented forms, may yield robust and transferable GNNs even without using labels. However, GNNs trained by traditional GCL often risk capturing redundant graph features and thus may be brittle and provide sub-par performance in downstream tasks. Here, we propose a novel principle, termed adversarial GCL (AD-GCL), which enables GNNs to avoid capturing redundant information during training by optimizing the adversarial graph augmentation strategies used in GCL. We pair AD-GCL with theoretical explanations and design a practical instantiation based on trainable edge-dropping graph augmentation. We experimentally validate AD-GCL against state-of-the-art GCL methods, achieving performance gains of up to 14% in unsupervised, 6% in transfer, and 3% in semi-supervised learning settings overall, on 18 different benchmark datasets for molecule property regression and classification and social network classification.
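A hedged sketch of a trainable edge-dropping augmenter in the spirit of AD-GCL's instantiation; the MLP scorer and the binary-concrete (Gumbel-sigmoid) relaxation are common choices assumed here. In training, the augmenter would maximize the contrastive loss while the encoder minimizes it.

```python
import torch

class LearnableEdgeDrop(torch.nn.Module):
    """Trainable edge-dropping augmenter: each edge gets a keep-probability
    from an MLP over its endpoint embeddings, sampled with a binary-concrete
    relaxation so the augmenter can be trained adversarially against the encoder."""

    def __init__(self, dim, temp=1.0):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(2 * dim, dim), torch.nn.ReLU(), torch.nn.Linear(dim, 1))
        self.temp = temp

    def forward(self, z, edge_index):
        src, dst = edge_index
        logits = self.mlp(torch.cat([z[src], z[dst]], dim=1)).squeeze(1)
        u = torch.rand_like(logits)
        noise = torch.log(u + 1e-9) - torch.log(1 - u + 1e-9)   # logistic noise
        keep = torch.sigmoid((logits + noise) / self.temp)      # soft edge mask in (0, 1)
        return keep  # multiply onto edge weights during message passing
```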
Unsupervised graph representation learning is a non-trivial topic for graph data. The success of contrastive learning and self-supervised learning in unsupervised representation learning on structured data has inspired similar attempts on graphs. Current unsupervised graph representation learning and pre-training with contrastive losses are mainly based on the contrast between hand-crafted augmentations of graph data. However, graph data augmentation remains under-explored due to its unpredictable invariance. In this paper, we propose a novel collaborative graph neural network contrastive learning framework (CGCL), which uses multiple graph encoders to observe the graph. Features observed from the different views act as the graph augmentations for contrastive learning between the encoders, avoiding any perturbation and thereby guaranteeing invariance. CGCL can handle both graph-level and node-level representation learning. Extensive experiments demonstrate the advantages of CGCL in unsupervised graph representation learning and the non-necessity of hand-crafted data augmentation combinations for graph representation learning.
Recently, contrastive learning (CL) has emerged as a successful method for unsupervised graph representation learning. Most graph CL methods first perform stochastic augmentation on the input graph to obtain two graph views and maximize the agreement of representations in the two views. Despite the prosperous development of graph CL methods, the design of graph augmentation schemes, a crucial component in CL, remains rarely explored. We argue that the data augmentation schemes should preserve intrinsic structures and attributes of graphs, which will force the model to learn representations that are insensitive to perturbation on unimportant nodes and edges. However, most existing methods adopt uniform data augmentation schemes, like uniformly dropping edges and uniformly shuffling features, leading to suboptimal performance. In this paper, we propose a novel graph contrastive representation learning method with adaptive augmentation that incorporates various priors for topological and semantic aspects of the graph. Specifically, on the topology level, we design augmentation schemes based on node centrality measures to highlight important connective structures. On the node attribute level, we corrupt node features by adding more noise to unimportant node features, to enforce the model to recognize underlying semantic information. We perform extensive experiments of node classification on a variety of real-world datasets. Experimental results demonstrate that our proposed method consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts, which validates the effectiveness of the proposed contrastive framework with adaptive augmentation.
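The topology-level scheme lends itself to a short sketch. The version below uses degree as the centrality measure and folds the paper's base probability and cutoff into a single p_max parameter, so it approximates the described scheme rather than reproducing it exactly.

```python
import torch

def adaptive_edge_drop(edge_index, num_nodes, p_max=0.7):
    """Drop edges with probability inversely related to endpoint degree
    centrality, so important connective structure is perturbed less."""
    src, dst = edge_index
    deg = torch.bincount(torch.cat([src, dst]), minlength=num_nodes).float()
    cent = torch.log(deg[src] + 1) + torch.log(deg[dst] + 1)     # edge centrality
    norm = (cent.max() - cent) / (cent.max() - cent.mean() + 1e-9)
    p_drop = (norm * p_max).clamp(max=p_max)                      # low centrality => high drop prob
    keep = torch.rand_like(p_drop) >= p_drop
    return edge_index[:, keep]
```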
Self-supervised learning (SSL) of graph neural networks is emerging as a promising way of leveraging unlabeled data. Currently, most methods are based on contrastive learning adapted from the image domain, which requires view generation and a sufficient number of negative samples. In contrast, existing predictive models do not require negative sampling, but lack theoretical guidance on the design of pretext training tasks. In this work, we propose LaGraph, a theoretically grounded predictive SSL framework based on latent graph prediction. The learning objectives of LaGraph are derived as self-supervised upper bounds to objectives of predicting unobserved latent graphs. In addition to its improved performance, LaGraph provides explanations for the recent successes of predictive models that include invariance-based objectives. We provide theoretical analysis comparing LaGraph to related methods in different domains. Our experimental results demonstrate the superiority of LaGraph in performance and its robustness to a decreasing training sample size on both graph-level and node-level tasks.
Graphs are powerful representations for relations among objects, and have attracted plenty of attention. A fundamental challenge for graph learning is how to train an effective graph neural network (GNN) encoder without labels, which are expensive and time-consuming to obtain. Contrastive learning (CL) is one of the most popular paradigms to address this challenge, which trains GNNs by discriminating positive and negative node pairs. Despite the success of recent CL methods, two issues remain under-explored. First, how to reduce the semantic errors introduced by random topology-based data augmentations. Traditional CL defines positive and negative node pairs via node-level topological proximity, which is solely based on the graph topology regardless of the semantic information of node attributes, so some semantically similar nodes could be wrongly treated as negative pairs. Second, how to effectively model the multiplexity of real-world graphs, where nodes are connected by various relations and each relation could form a homogeneous graph layer. To address these issues, we propose a novel multiplex heterogeneous graph prototypical contrastive learning (X-GOAL) framework to extract node embeddings. X-GOAL comprises two components: the GOAL framework, which learns node embeddings for each homogeneous graph layer, and an alignment regularization, which jointly models different layers by aligning layer-specific node embeddings. Specifically, the GOAL framework captures node-level information by a succinct graph transformation technique, and captures cluster-level information by pulling nodes within the same semantic cluster closer in the embedding space. The alignment regularization aligns embeddings across layers at both the node level and the cluster level. We evaluate X-GOAL on a variety of real-world datasets and downstream tasks to demonstrate its effectiveness.
Existing graph contrastive learning methods rely on augmentation techniques based on random perturbations (e.g., randomly adding or dropping edges and nodes). Nevertheless, altering certain edges or nodes can unexpectedly change the graph characteristics, and choosing the optimal perturbing ratio for each dataset requires onerous manual tuning. In this paper, we introduce Implicit Graph Contrastive Learning (iGCL), which utilizes augmentations in the latent space learned from a Variational Graph Auto-Encoder by reconstructing graph topological structure. Importantly, instead of explicitly sampling augmentations from latent distributions, we further propose an upper bound for the expected contrastive loss to improve the efficiency of our learning algorithm. Thus, graph semantics can be preserved within the augmentations in an intelligent way without arbitrary manual design or prior human knowledge. Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art performance compared with other baselines, and ablation studies demonstrate the effectiveness of each module in iGCL.
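A minimal sketch of drawing augmentations in the latent space of a variational graph auto-encoder. Note that the explicit sampling shown here is exactly what the paper's upper bound avoids; it is included only to make the idea concrete.

```python
import torch

def latent_views(mu, logvar, num_views=2):
    """Draw augmented views directly in the latent space of a (pre-trained)
    variational graph auto-encoder instead of perturbing the input graph:
    each sample from N(mu, diag(exp(logvar))) is one implicit augmentation."""
    std = torch.exp(0.5 * logvar)
    return [mu + std * torch.randn_like(std) for _ in range(num_views)]
```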
Graph neural networks (GNNs) have achieved great success in learning graph representations, thereby facilitating various graph-related tasks. However, most GNN methods adopt a supervised learning setting, which is not always feasible in real-world applications due to the difficulty of obtaining labeled data. Hence, graph self-supervised learning has been attracting increasing attention. Graph contrastive learning (GCL) is a representative framework for self-supervised learning. In general, GCL learns node representations by contrasting semantically similar nodes (positive samples) and dissimilar nodes (negative samples) with anchor nodes. Without access to labels, positive samples are typically generated by data augmentation, and negative samples are uniformly sampled from the entire graph, which leads to a sub-optimal objective. Specifically, data augmentation naturally limits the number of positive samples involved in the process (typically only one positive sample is adopted). On the other hand, the random sampling process inevitably selects false negative samples (samples sharing the same semantics with the anchor). These issues limit the learning capability of GCL. In this work, we propose an enhanced objective to address the aforementioned issues. We first introduce an unachievable ideal objective that contains all positive samples and no false negative samples. This ideal objective is then transformed into a probabilistic form based on the distributions for sampling positive and negative samples. We then model these distributions with node similarity and derive the enhanced objective. Comprehensive experiments on various datasets demonstrate the effectiveness of the proposed enhanced objective under different settings.
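As one hedged way to picture a similarity-modelled objective (not the paper's derived form), the sketch below down-weights suspected false negatives by their similarity to the anchor.

```python
import torch
import torch.nn.functional as F

def similarity_weighted_infonce(z1, z2, tau=0.5):
    """Contrastive objective whose negative contributions are weighted by
    node similarity: likely false negatives (high similarity to the anchor)
    are down-weighted rather than treated as full negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t()                                 # cross-view similarities
    pos = sim.diag() / tau
    neg_weight = (1.0 - sim).detach().clamp(min=0)    # suspected false negatives get ~0 weight
    neg = torch.logsumexp(sim / tau + torch.log(neg_weight + 1e-12), dim=1)
    return (neg - pos).mean()
```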
It is well known that the success of graph neural networks (GNNs) relies heavily on abundant human-annotated data, which is laborious to obtain and not always available in practice. When only few labeled nodes are available, how to develop highly effective GNNs remains under-explored. Although self-training has been shown to be powerful for semi-supervised learning, its application to graph-structured data may fail because (1) larger receptive fields are not leveraged to capture long-range node interactions, which exacerbates the difficulty of propagating feature-label patterns from labeled nodes to unlabeled nodes, and (2) limited labeled data makes it challenging to learn well-separated decision boundaries between different node classes without explicitly capturing the underlying semantic structure. To address the challenges of capturing informative structural and semantic knowledge, we propose a new graph data augmentation framework, AGST (Augmented Graph Self-Training), which is built with two new (i.e., structural and semantic) augmentation modules on top of a GST backbone. In this work, we investigate whether this novel framework can learn an effective graph predictive model with extremely limited labeled nodes. We conduct comprehensive evaluations on semi-supervised node classification under different scenarios of limited labeled-node data. The experimental results demonstrate the unique contributions of the novel data augmentation framework to node classification with few labeled data.
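Graph self-training, the backbone the framework builds on, reduces to repeatedly promoting confident predictions to pseudo-labels; below is a minimal sketch of one such step, with thresholding as one common confidence criterion.

```python
import torch

def pseudo_label_expand(logits, labeled_mask, threshold=0.9):
    """One self-training step: promote confidently predicted unlabeled nodes
    to the (pseudo-)labeled set for the next training round."""
    probs = torch.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    new_mask = (~labeled_mask) & (conf > threshold)
    return pred, labeled_mask | new_mask              # pseudo-labels and expanded mask
```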
Graph machine learning has been extensively studied in both academia and industry. Although booming with a vast number of emerging methods and techniques, most of the literature is built on the in-distribution hypothesis, i.e., testing and training graph data are identically distributed. However, this in-distribution hypothesis can hardly be satisfied in many real-world graph scenarios where the model performance substantially degrades when there exist distribution shifts between testing and training graph data. To solve this critical problem, out-of-distribution (OOD) generalization on graphs, which goes beyond the in-distribution hypothesis, has made great progress and attracted ever-increasing attention from the research community. In this paper, we comprehensively survey OOD generalization on graphs and present a detailed review of recent advances in this area. First, we provide a formal problem definition of OOD generalization on graphs. Second, we categorize existing methods into three classes from conceptually different perspectives, i.e., data, model, and learning strategy, based on their positions in the graph machine learning pipeline, followed by detailed discussions for each category. We also review the theories related to OOD generalization on graphs and introduce the commonly used graph datasets for thorough evaluations. Finally, we share our insights on future research directions. This paper is the first systematic and comprehensive review of OOD generalization on graphs, to the best of our knowledge.
Inspired by the impressive success of contrastive learning (CL), a variety of graph augmentation strategies have been employed to learn node representations in a self-supervised manner. Existing methods construct the contrastive samples by adding perturbations to the graph structure or node attributes. Although impressive results are achieved, it is rather blind to the wealth of prior information assumed: with the increase of the perturbation degree applied on the original graph, 1) the similarity between the original graph and the generated augmented graph gradually decreases; 2) the discrimination between all nodes within each augmented view gradually increases. In this paper, we argue that both such prior information can be incorporated (differently) into the contrastive learning paradigm following our general ranking framework. In particular, we first interpret CL as a special case of learning to rank (L2R), which inspires us to leverage the ranking order among positive augmented views. Meanwhile, we introduce a self-ranking paradigm to ensure that the discriminative information among different nodes can be maintained and also be less altered to the perturbations of different degrees. Experiment results on various benchmark datasets verify the effectiveness of our algorithm compared with the supervised and unsupervised models.
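A minimal sketch of the ranking idea over perturbation degrees, using a pairwise margin loss as one illustrative instantiation of learning to rank; the aggregation and margin value are assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def view_ranking_loss(anchor, views_by_degree, margin=0.1):
    """Learning-to-rank over augmented views: a view produced by a milder
    perturbation should stay more similar to the anchor than a stronger one.
    `views_by_degree` is ordered from weakest to strongest perturbation."""
    anchor = F.normalize(anchor, dim=1)
    sims = [(F.normalize(v, dim=1) * anchor).sum(dim=1) for v in views_by_degree]
    loss = anchor.new_zeros(())
    for weak, strong in zip(sims, sims[1:]):          # enforce weak > strong + margin
        loss = loss + F.relu(strong - weak + margin).mean()
    return loss
```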
Deep learning on graphs has recently attracted significant interest. However, most works have focused on (semi-)supervised learning, resulting in shortcomings including heavy label reliance, poor generalization, and weak robustness. To address these issues, self-supervised learning (SSL), which extracts informative knowledge through well-designed pretext tasks without relying on manual labels, has become a promising and trending learning paradigm for graph data. Different from SSL in other domains like computer vision and natural language processing, SSL on graphs has an exclusive background, design ideas, and taxonomies. Under the umbrella of graph self-supervised learning, we present a timely and comprehensive review of existing approaches that employ SSL techniques for graph data. We construct a unified framework that mathematically formalizes the paradigm of graph SSL. According to the objectives of the pretext tasks, we divide these approaches into four categories: generation-based, auxiliary property-based, contrast-based, and hybrid approaches. We further describe the applications of graph SSL across various research fields and summarize the commonly used datasets, evaluation benchmarks, performance comparisons, and open-source codes of graph SSL. Finally, we discuss the remaining challenges and potential future directions in this research field.