It is well known that the success of graph neural networks (GNNs) highly depends on abundant human-annotated data, which is laborious to obtain and not always available in practice. How to develop highly effective GNNs when only a few labeled nodes are available remains under-studied. Although self-training has been shown to be powerful for semi-supervised learning, its application to graph-structured data may fail because (1) larger receptive fields are not leveraged to capture long-range node interactions, which exacerbates the difficulty of propagating feature-label patterns from labeled nodes to unlabeled nodes; and (2) limited labeled data makes it challenging to learn well-separated decision boundaries for different node classes without explicitly capturing the underlying semantic structure. To address the challenges of capturing informative structural and semantic knowledge, we propose a new graph data augmentation framework, AGST (Augmented Graph Self-Training), which is built with two novel (i.e., structural and semantic) augmentation modules on top of a decoupled GST backbone. In this work, we investigate whether this novel framework can learn an effective graph predictive model with extremely limited labeled nodes. We conduct comprehensive evaluations on semi-supervised node classification under different scenarios of limited labeled-node data, and the experimental results demonstrate the unique contributions of the novel data augmentation framework to node classification with few labeled data.
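To make the pseudo-labeling step concrete, below is a minimal sketch of the generic graph self-training (GST) loop that AGST builds its augmentation modules on top of -- not the authors' implementation; `model`, `features`, `adj`, and the index tensors are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def self_training_round(model, optimizer, features, adj, labels,
                        labeled_idx, unlabeled_idx, threshold=0.9):
    """One round: fit on labeled nodes, then pseudo-label confident ones."""
    model.train()
    optimizer.zero_grad()
    logits = model(features, adj)
    loss = F.cross_entropy(logits[labeled_idx], labels[labeled_idx])
    loss.backward()
    optimizer.step()

    # Promote high-confidence unlabeled nodes into the labeled set.
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(features, adj), dim=1)
    conf, pseudo = probs[unlabeled_idx].max(dim=1)
    picked = unlabeled_idx[conf > threshold]
    labels[picked] = pseudo[conf > threshold]
    return torch.cat([labeled_idx, picked])
```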
Inspired by the extensive success of deep learning, graph neural networks (GNNs) have been proposed to learn expressive node representations and have demonstrated promising performance in various graph learning tasks. However, existing endeavors predominately focus on the conventional semi-supervised setting where relatively abundant gold-labeled nodes are provided. This is often impractical, since data labeling is unbearably laborious and requires intensive domain knowledge, especially when considering the heterogeneity of graph-structured data. Under the few-shot semi-supervised setting, the performance of most existing GNNs is inevitably undermined by overfitting and oversmoothing issues, largely owing to the shortage of labeled data. In this paper, we propose a decoupled network architecture equipped with a novel meta-learning algorithm to solve this problem. In essence, our framework Meta-PN infers high-quality pseudo labels on unlabeled nodes via a meta-learned label propagation strategy, which effectively augments the scarce labeled data while enabling large receptive fields during training. Extensive experiments demonstrate that our approach offers easy and substantial performance gains compared with existing techniques on various benchmark datasets.
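The label propagation primitive that Meta-PN meta-learns to weight can be sketched as below (the meta-learning of the propagation strategy itself is the paper's contribution and is omitted); `adj_norm` is a hypothetical normalized adjacency matrix and `y_onehot` holds one-hot labels on labeled rows and zeros elsewhere.

```python
import torch

def propagate_labels(adj_norm, y_onehot, num_steps=10, alpha=0.1):
    """Personalized-PageRank-style propagation: diffuse labels over the
    graph, re-injecting the seed labels at every step."""
    y = y_onehot.clone()
    for _ in range(num_steps):
        y = (1 - alpha) * (adj_norm @ y) + alpha * y_onehot
    return y  # soft pseudo-labels for every node
```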
Graph Contrastive Learning (GCL) has recently drawn much research interest for learning generalizable node representations in a self-supervised manner. In general, the contrastive learning process in GCL is performed on top of the representations learned by a graph neural network (GNN) backbone, which transforms and propagates the node contextual information based on its local neighborhoods. However, nodes sharing similar characteristics may not always be geographically close, which poses a great challenge for unsupervised GCL efforts due to their inherent limitations in capturing such global graph knowledge. In this work, we address their inherent limitations by proposing a simple yet effective framework -- Simple Neural Networks with Structural and Semantic Contrastive Learning (S^3-CL). Notably, by virtue of the proposed structural and semantic contrastive learning algorithms, even a simple neural network can learn expressive node representations that preserve valuable global structural and semantic patterns. Our experiments demonstrate that the node representations learned by S^3-CL achieve superior performance on different downstream tasks compared with the state-of-the-art unsupervised GCL methods. Implementation and more experimental details are publicly available at \url{https://github.com/kaize0409/S-3-CL}.
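For reference, a generic node-level InfoNCE loss of the kind GCL methods optimize is sketched below; this is an illustrative baseline, not the exact structural/semantic objectives of S^3-CL. `z1` and `z2` are assumed to be L2-normalized embeddings of the same nodes under two views.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    sim = (z1 @ z2.t()) / temperature                     # (N, N) cross-view similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positive pair = same node index
    return F.cross_entropy(sim, targets)
```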
Graph neural networks are a powerful family of deep learning tools for modeling graph-structured data and have shown outstanding performance on numerous graph learning tasks. To address the data-noise and data-scarcity issues in deep graph learning, research on graph data augmentation has recently intensified. However, conventional data augmentation methods can hardly handle graph-structured data, which is defined in a non-Euclidean space and carries multiple modalities. In this survey, we formally formulate the problem of graph data augmentation and further review representative techniques and their applications in different deep graph learning problems. Specifically, we first propose a taxonomy of graph data augmentation techniques and then provide a structured review by categorizing related work according to the augmented information modality. Moreover, we summarize the applications of graph data augmentation in two representative problems of data-centric deep graph learning: (1) reliable graph learning, which focuses on enhancing the utility of the input graph as well as the model capacity via graph data augmentation; and (2) low-resource graph learning, which targets enlarging the scale of labeled training data through graph data augmentation. For each problem, we also provide a hierarchical problem taxonomy and review the existing literature related to graph data augmentation. Finally, we point out promising research directions and challenges for future work.
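As one concrete instance of the surveyed structural and feature-wise augmentation families, random edge dropping and feature masking can be sketched as follows (illustrative only; `edge_index` is a hypothetical (2, E) edge tensor):

```python
import torch

def drop_edges(edge_index, drop_prob=0.2):
    """DropEdge-style structural augmentation: remove a random edge subset."""
    keep = torch.rand(edge_index.size(1)) >= drop_prob
    return edge_index[:, keep]

def mask_features(x, mask_prob=0.1):
    """Feature-wise augmentation: randomly zero out feature entries."""
    return x * (torch.rand_like(x) >= mask_prob)
```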
Graph neural networks (GNNs) have achieved remarkable success in semi-supervised learning scenarios. The message-passing mechanism in graph neural networks helps unlabeled nodes gather supervision signals from their labeled neighbors. In this work, we investigate how consistency regularization, one of the most widely adopted semi-supervised learning methods, can help improve the performance of graph neural networks. We revisit two consistency regularization methods for graph neural networks: simple consistency regularization (SCR) and mean-teacher consistency regularization (MCR). We combine these consistency regularization methods with two state-of-the-art GNNs and conduct experiments on the ogbn-products dataset. With consistency regularization, the performance of state-of-the-art GNNs can be improved by 0.3% on the ogbn-products dataset, both with and without external data.
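A minimal sketch of a simple consistency term follows (one plausible instantiation, not necessarily the paper's exact SCR/MCR formulation; `model` and `augment` are hypothetical placeholders):

```python
import torch.nn.functional as F

def consistency_loss(model, features, adj, augment):
    """Penalize disagreement between predictions under two random views."""
    p1 = F.softmax(model(*augment(features, adj)), dim=1)
    p2 = F.softmax(model(*augment(features, adj)), dim=1)
    return F.mse_loss(p1, p2)
```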
Graph Neural Networks (GNNs) have been widely applied to the semi-supervised node classification task, where a key point lies in how to sufficiently leverage the limited but valuable label information. Most classical GNNs solely use the known labels for computing the classification loss at the output. In recent years, several methods have been designed to additionally utilize the labels at the input. Some of these methods augment the node features by concatenating or adding the one-hot encodings of labels, while others optimize the graph structure by assuming that neighboring nodes tend to have the same label. To make full use of the rich information carried by labels, in this paper we present a label-enhanced learning framework for GNNs, which first models each label as a virtual center for intra-class nodes and then jointly learns the representations of both nodes and labels. Our approach can not only smooth the representations of nodes belonging to the same class, but also explicitly encode the label semantics into the learning process of GNNs. Moreover, a training-node selection technique is provided to eliminate the potential label leakage issue and guarantee the model's generalization ability. Finally, an adaptive self-training strategy is proposed to iteratively enlarge the training set with more reliable pseudo labels and to distinguish the importance of each pseudo-labeled node during model training. Experimental results on both real-world and synthetic datasets demonstrate that our approach can not only consistently outperform the state of the art, but also effectively smooth the representations of intra-class nodes.
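The label-as-input idea the paper extends can be sketched as below: one-hot training labels are concatenated to the node features, with a random subset hidden so a node cannot simply copy its own target (the leakage that the paper's training-node selection guards against). All names are illustrative, not the paper's code.

```python
import torch
import torch.nn.functional as F

def label_augmented_features(x, labels, train_idx, num_classes, keep_prob=0.5):
    y = torch.zeros(x.size(0), num_classes, device=x.device)
    shown = train_idx[torch.rand(train_idx.size(0)) < keep_prob]
    y[shown] = F.one_hot(labels[shown], num_classes).float()
    return torch.cat([x, y], dim=1)  # features extended with partial labels
```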
Graph neural networks (GNNs) have exhibited strong representation power in many graph-based tasks. In particular, decoupled GNN structures (e.g., APPNP) have become popular due to their simplicity and performance advantages. However, end-to-end training of these GNNs makes them inefficient in computation and memory consumption. To address these limitations, in this work we propose an alternating optimization framework for graph neural networks that does not require end-to-end training. Extensive experiments under different settings demonstrate that the performance of the proposed algorithm is comparable to existing state-of-the-art algorithms, but with substantially better computational and memory efficiency. Additionally, we show that our framework can be leveraged to enhance existing decoupled GNNs.
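The decoupled design (as in APPNP-style models) that makes such non-end-to-end training possible can be sketched as follows: propagation is separated from the feature transformation, so the propagated features can be precomputed once and a plain MLP fitted on top. The paper's alternating-optimization algorithm itself is omitted; names are illustrative.

```python
import torch

def precompute_propagation(adj_norm, x, num_hops=10, alpha=0.1):
    """Run propagation once, outside the training loop."""
    h = x.clone()
    for _ in range(num_hops):
        h = (1 - alpha) * (adj_norm @ h) + alpha * x
    return h  # fixed input for a standard MLP classifier
```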
Graph neural networks (GNNs) have demonstrated their power in modeling graph-structured data. However, real-world graphs typically contain structural noise and have only a limited number of labeled nodes. The performance of GNNs degrades significantly when trained on such graphs, which hinders the adoption of GNNs in many applications. Hence, it is important to develop noise-resistant GNNs that work with limited labeled nodes. Work on this topic, however, remains rather limited. Therefore, we study the novel problem of developing robust GNNs on noisy graphs with limited labeled nodes. Our analysis shows that both noisy edges and limited labeled nodes can harm the message-passing mechanism of GNNs. To mitigate these issues, we propose a novel framework that adopts the noisy edges as supervision to learn a denoised and dense graph, which can down-weight or eliminate noisy edges and facilitate the message passing of GNNs to alleviate the issue of limited labeled nodes. The generated edges are further used to regularize the predictions of unlabeled nodes with label smoothness, so as to better train the GNN. Experimental results on real-world datasets demonstrate the robustness of the proposed framework on noisy graphs with limited labeled nodes.
Recent years have witnessed great success in handling graph-related tasks with Graph Neural Networks (GNNs). Despite their great academic success, Multi-Layer Perceptrons (MLPs) remain the primary workhorse for practical industrial applications. One reason for this academic-industrial gap is the neighborhood-fetching latency incurred by data dependency in GNNs, which makes GNNs hard to deploy in latency-sensitive applications that require fast inference. Conversely, without involving any feature aggregation, MLPs have no data dependency and infer much faster than GNNs, but their performance is less competitive. Motivated by these complementary strengths and weaknesses, we propose a Graph Self-Distillation on Neighborhood (GSDN) framework to reduce the gap between GNNs and MLPs. Specifically, the GSDN framework is based purely on MLPs, where structural information is only used implicitly as a prior to guide knowledge self-distillation between the neighborhood and the target, in place of the explicit neighborhood information propagation used in GNNs. As a result, GSDN enjoys the benefits of graph topology-awareness during training but has no data dependency at inference. Extensive experiments show that the performance of vanilla MLPs can be greatly improved with self-distillation: GSDN improves over stand-alone MLPs by 15.54% on average and outperforms state-of-the-art GNNs on six datasets. Regarding inference speed, GSDN infers 75x-89x faster than existing GNNs and 16x-25x faster than other inference acceleration methods.
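One plausible reading of the neighborhood self-distillation idea is sketched below (illustrative, not the authors' code): with no explicit propagation, the MLP's prediction for each node is pulled toward the detached average prediction over its neighbors, so the graph is used only as a training prior.

```python
import torch
import torch.nn.functional as F

def neighborhood_self_distill(logits, adj_norm, T=1.0):
    log_p_node = F.log_softmax(logits / T, dim=1)
    p_neigh = F.softmax((adj_norm @ logits).detach() / T, dim=1)
    return F.kl_div(log_p_node, p_neigh, reduction="batchmean")
```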
Graphs are widely used to model the relational structure of data, and research on graph machine learning (ML) has a broad spectrum of applications, from drug design over molecular graphs to friendship recommendation in social networks. Prevailing approaches to graph ML typically require abundant labeled instances to achieve satisfactory results, which is often infeasible in real-world scenarios since labeled data for newly emerging concepts (e.g., new categories of nodes) on graphs is limited. Although meta-learning has been applied to different few-shot graph learning problems, most existing efforts predominately assume that all the data from the seen classes is gold-labeled, and these methods may lose their efficacy when the seen data is weakly labeled with severe label noise. Therefore, we aim to investigate a novel problem, weakly-supervised graph meta-learning, to improve the model's robustness in terms of knowledge transfer. To achieve this goal, we propose a new graph meta-learning framework, Graph Hallucination Networks (Meta-GHN), in this paper. Based on a new robustness-enhanced episodic training paradigm, Meta-GHN is meta-learned to hallucinate clean node representations from weakly-labeled data and to extract highly transferable meta-knowledge, which enables the model to quickly adapt to unseen tasks with few labeled instances. Extensive experiments demonstrate the superiority of Meta-GHN over existing graph meta-learning studies on the task of weakly-supervised few-shot node classification.
In view of the ubiquity of heterogeneous graphs in both academia and industry, researchers have recently proposed many heterogeneous graph neural networks (HGNNs). In this paper, instead of pursuing a more powerful HGNN model, we are interested in devising a versatile plug-and-play module that distills the relational knowledge extracted from a pre-trained HGNN. To the best of our knowledge, we are the first to propose a HIgh-order RElational (HIRE) knowledge distillation framework on heterogeneous graphs, which can significantly boost prediction performance regardless of the model architecture of the HGNN. Concretely, our HIRE framework initially performs first-order node-level knowledge distillation, which encodes the semantics of the teacher HGNN via its prediction logits. Meanwhile, second-order relation-level knowledge distillation imitates the relational correlations between the node embeddings of different types generated by the teacher HGNN. Extensive experiments on various popular HGNN models and three real-world heterogeneous graphs demonstrate that our method obtains consistent and considerable performance improvements, proving its effectiveness and generalization ability.
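The two distillation terms can be sketched as below (shapes and names are illustrative; the teacher outputs are assumed detached): first-order KD matches softened prediction logits, while second-order KD matches the correlation structure between embeddings.

```python
import torch
import torch.nn.functional as F

def node_level_kd(student_logits, teacher_logits, T=2.0):
    """First-order KD: match the teacher's softened logits."""
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T

def relation_level_kd(student_emb, teacher_emb):
    """Second-order KD: match pairwise embedding correlations."""
    s = F.normalize(student_emb, dim=1)
    t = F.normalize(teacher_emb, dim=1)
    return F.mse_loss(s @ s.t(), t @ t.t())
```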
Graph structure learning (GSL), which aims to learn the adjacency matrix for graph neural networks (GNNs), has shown great potential in boosting the performance of GNNs. Most existing GSL works apply a joint learning framework where the estimated adjacency matrix and the GNN parameters are optimized for downstream tasks. However, GSL is essentially a link prediction task, whose goal may largely differ from that of the downstream task. The inconsistency of these two goals prevents GSL methods from learning the potentially optimal graph structure. Moreover, the joint learning framework suffers from scalability issues in terms of time and space during the estimation and optimization of the adjacency matrix. To mitigate these issues, we propose a graph structure refinement (GSR) framework with a pretrain-finetune pipeline. Specifically, the pre-training phase aims to comprehensively estimate the underlying graph structure through a multi-view contrastive learning framework with both intra- and inter-view link prediction tasks. Then, the graph structure is refined by adding and removing edges according to the edge probabilities estimated by the pre-trained model. Finally, the fine-tuning GNN is initialized with the pre-trained model and optimized toward downstream tasks. With the refined graph structure remaining static in the fine-tuning stage, GSR avoids estimating and optimizing the graph structure during fine-tuning, which brings great scalability and efficiency. Moreover, the fine-tuning GNN is boosted both by the migrated knowledge and by the refined graph. Extensive experiments are conducted to evaluate the effectiveness (best performance on six benchmark datasets), efficiency, and scalability (13.8x faster using 32.8% GPU memory compared to the best GSL baseline on Cora) of the proposed model.
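The refinement step can be sketched as simple thresholding on the estimated edge probabilities (a minimal sketch; `edge_prob` is a hypothetical dense (N, N) probability matrix produced by the pre-trained model):

```python
import torch

def refine_graph(adj, edge_prob, add_thresh=0.9, remove_thresh=0.1):
    refined = adj.clone()
    refined[(adj == 0) & (edge_prob > add_thresh)] = 1.0     # add likely missing edges
    refined[(adj == 1) & (edge_prob < remove_thresh)] = 0.0  # drop unlikely edges
    return refined  # kept static during fine-tuning
```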
In recent years, graph neural networks (GNNs) have achieved state-of-the-art performance for node classification. However, most existing GNNs suffer from the graph imbalance problem. In many real-world scenarios, node classes are imbalanced, with some majority classes making up the bulk of the graph. The message-propagation mechanism in GNNs further amplifies the dominance of those majority classes, resulting in sub-optimal classification performance. In this work, we seek to address this problem by generating instances of minority classes to balance the training data, extending previous oversampling-based techniques. This task is non-trivial, as those techniques were designed with the assumption that instances are independent; neglecting the relational information complicates the oversampling process. Furthermore, the node classification task is usually conducted in a semi-supervised setting with only a few labeled nodes, which provides insufficient supervision for generating minority instances, and generated new nodes of low quality would harm the trained classifier. In this work, we address these difficulties by synthesizing new nodes in a constructed embedding space, which encodes both node attributes and topology information. Furthermore, an edge generator is trained simultaneously to model the graph structure and provide relations for the new samples. To further improve data efficiency, we also explore synthesizing mixed "in-between" nodes to leverage nodes of majority classes during this oversampling process. Experiments on real-world datasets validate the effectiveness of our proposed framework.
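The core SMOTE-style interpolation in the learned embedding space can be sketched as below (the jointly trained edge generator that wires new nodes into the graph is omitted; names are illustrative):

```python
import torch

def smote_in_embedding_space(emb, minority_idx, num_new):
    """Synthesize minority-class embeddings by interpolating each sampled
    anchor with its nearest same-class neighbor."""
    anchors = minority_idx[torch.randint(len(minority_idx), (num_new,))]
    d = torch.cdist(emb[anchors], emb[minority_idx])
    d.scatter_(1, d.argmin(dim=1, keepdim=True), float("inf"))  # mask self-distance
    nearest = minority_idx[d.argmin(dim=1)]
    lam = torch.rand(num_new, 1, device=emb.device)
    return emb[anchors] + lam * (emb[nearest] - emb[anchors])
```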
This paper studies learning node representations with graph neural networks (GNNs) for unsupervised scenarios. Specifically, we derive a theoretical analysis and provide an empirical demonstration of the non-steady performance of GNNs over different graph datasets when the supervision signals are not appropriately defined: the performance of GNNs depends on both the node feature smoothness and the locality of the graph structure. To smooth the discrepancy in node proximity as measured by graph topology versus node features, we propose SAIL, a novel Self-Augmented graph contrastive Learning framework, with two complementary self-distillation regularization modules, i.e., intra- and inter-graph knowledge distillation. We demonstrate the competitive performance of SAIL on a variety of graph applications. Even with a single GNN layer, SAIL consistently performs competitively or better on various benchmark datasets compared with state-of-the-art baselines.
Graph Neural Networks (GNNs) have been a prevailing technique for tackling various analysis tasks on graph data. A key premise behind the remarkable performance of GNNs is complete and trustworthy initial graph descriptions (i.e., node features and graph structure). This premise is often violated, since real-world graphs are frequently incomplete due to various unavoidable factors. In particular, GNNs face greater challenges when both node features and graph structure are incomplete at the same time. Existing methods focus either on feature completion or on structure completion. They usually rely on the matching relationship between features and structure, or employ joint learning of node representation and feature (or structure) completion in the hope of achieving mutual benefit. However, recent studies confirm that mutual interference between features and structure degrades GNN performance. When both features and structure are incomplete, the mismatch between them caused by the randomness of the missing entries exacerbates this interference, which may trigger incorrect completions that negatively affect node representations. To this end, in this paper we propose a general GNN framework based on teacher-student distillation to improve the performance of GNNs on incomplete graphs, namely T2-GNN. To avoid interference between features and structure, we separately design feature-level and structure-level teacher models that provide targeted guidance to the student model (a base GNN, such as GCN) through distillation. We then design two personalized methods to obtain well-trained feature and structure teachers. To ensure that the knowledge of the teacher models is comprehensively and effectively distilled into the student, we further propose a dual distillation mode that enables the student to acquire as much expert knowledge as possible.
In this paper, we study the problem of self-supervised node representation learning on non-homophilous graphs. Existing self-supervised learning methods typically assume the graph is homophilous, where linked nodes often belong to the same class or have similar features. However, such an assumption of homophily does not always hold in real-world graphs. We address this problem by developing a decoupled self-supervised learning (DSSL) framework for graph neural networks. DSSL imitates a generative process of nodes and links via latent variable modeling of the semantic structure, which decouples the different underlying semantics between different neighborhoods in the self-supervised node learning process. Our DSSL framework is agnostic to the encoder, does not need predefined augmentations, and is thus flexible across different graphs. To effectively optimize the framework with latent variables, we derive the evidence lower bound of the self-supervised objective and develop a scalable training algorithm with variational inference. We provide a theoretical analysis to justify that DSSL enjoys better downstream performance. Extensive experiments on various types of graph benchmarks demonstrate that our proposed framework achieves significantly better performance compared with competitive self-supervised learning baselines.
Graph representation learning aims to integrate node contents with the graph structure to learn node/graph representations. However, it has been found that many existing graph learning methods do not work well on data with a high level of heterophily, i.e., a large proportion of edges connecting nodes with different class labels. Recent efforts to address this problem have focused on improving the message-passing mechanism. However, it remains unclear whether heterophily truly harms the performance of graph neural networks (GNNs). The key is to unfold the relationship between a node and its immediate neighbors, e.g., are they heterophilous or homophilous? From this perspective, we study the role that the relationships between connected nodes play before and after they are disclosed. In particular, we propose an end-to-end framework that both learns the type of each edge (i.e., heterophilous or homophilous) and leverages the edge-type information to improve the expressiveness of graph neural networks. We implement this framework in two different ways. Specifically, to avoid passing messages through heterophilous edges, we can optimize the graph structure by removing the heterophilous edges identified by an edge classifier. Alternatively, the information about the existence of heterophilous neighbors can be exploited for feature learning; accordingly, a mixed message-passing approach is designed to aggregate homophilous neighbors and diversify heterophilous neighbors based on the edge classification. Extensive experiments demonstrate that the proposed framework significantly improves the performance of GNNs on multiple datasets across the full spectrum of homophily levels.
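One possible realization of the mixed message passing described above is sketched below (illustrative only): homophilous neighbors are aggregated to smooth representations, while heterophilous neighbors contribute a diversifying (here, subtractive) term. `edge_is_homo` would come from the learned edge classifier, here a hypothetical (N, N) 0/1 mask.

```python
import torch

def mixed_aggregate(x, adj, edge_is_homo, beta=0.5):
    homo = adj * edge_is_homo            # homophilous edges only
    hetero = adj * (1 - edge_is_homo)    # heterophilous edges only
    deg_homo = homo.sum(1, keepdim=True).clamp(min=1)
    deg_het = hetero.sum(1, keepdim=True).clamp(min=1)
    return x + beta * (homo @ x) / deg_homo - beta * (hetero @ x) / deg_het
```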
While graph neural networks (GNNs) have demonstrated their efficacy in handling non-Euclidean structured data, they are hard to deploy in real applications due to the scalability constraints imposed by multi-hop data dependencies. Existing methods attempt to address this scalability issue by training multi-layer perceptrons (MLPs) with labels produced by a trained GNN. Even though the performance of MLPs can be improved significantly this way, two issues still prevent MLPs from outperforming GNNs and being used in practice: ignorance of graph structural information and sensitivity to node feature noise. In this paper, we propose learning NOise-robust Structure-aware MLPs On Graphs (NOSMOG) to overcome these challenges. Specifically, we first complement node content with position features to help MLPs capture graph structural information. We then design a novel representational similarity distillation strategy to inject structural node similarities into MLPs. Finally, we introduce adversarial feature augmentation to ensure stable learning against feature noise and further improve performance. Extensive experiments demonstrate that NOSMOG outperforms GNNs and state-of-the-art methods in both transductive and inductive settings across seven datasets, while maintaining competitive inference efficiency.
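One plausible form of the adversarial feature augmentation mentioned above is an FGSM-style perturbation of the node features (a sketch under that assumption, not necessarily NOSMOG's exact formulation; all names are illustrative):

```python
import torch
import torch.nn.functional as F

def adversarial_features(model, x, labels, idx, eps=0.01):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x)[idx], labels[idx])
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()  # worst-case perturbed features
```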
Graph neural networks (GNNs) have achieved great success in learning graph representations, thereby facilitating various graph-related tasks. However, most GNN methods adopt a supervised learning setting, which is not always feasible in real-world applications due to the difficulty of obtaining labeled data. Hence, graph self-supervised learning has been attracting increasing attention. Graph contrastive learning (GCL) is a representative framework for self-supervised learning. In general, GCL learns node representations by contrasting semantically similar nodes (positive samples) and dissimilar nodes (negative samples) against anchor nodes. Without access to labels, positive samples are typically generated by data augmentation, and negative samples are uniformly sampled from the entire graph, which leads to a sub-optimal objective. Specifically, data augmentation naturally limits the number of positive samples involved in the process (typically only one positive sample is adopted). On the other hand, the random sampling process inevitably selects false-negative samples (samples sharing the same semantics as the anchor). These problems limit the learning capability of GCL. In this work, we propose an enhanced objective that addresses the aforementioned issues. We first introduce an unachievable ideal objective that contains all positive samples and no false-negative samples. This ideal objective is then transformed into a probabilistic form based on the distributions for sampling positive and negative samples. We then model these distributions with node similarity and derive the enhanced objective. Comprehensive experiments on various datasets demonstrate the effectiveness of the proposed enhanced objective under different settings.
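As an illustrative take on such an enhanced objective (not the paper's exact derivation), likely false negatives can be down-weighted in the contrastive denominator using node similarity as a proxy:

```python
import torch
import torch.nn.functional as F

def weighted_nce(z1, z2, temperature=0.5):
    """z1, z2: L2-normalized embeddings of the same nodes under two views."""
    sim = (z1 @ z2.t()) / temperature
    with torch.no_grad():
        # Highly similar pairs are probable same-class (false negatives):
        # shrink their weight in the denominator.
        w = 1.0 - F.softmax(sim, dim=1)
        w.fill_diagonal_(1.0)  # positives keep full weight
    pos = sim.diag()
    denom = (w * sim.exp()).sum(dim=1)
    return -(pos - denom.log()).mean()
```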
Graph neural networks (GNNs) have achieved state-of-the-art performance in node classification, regression, and recommendation tasks. GNNs work well when rich, high-quality connectivity structure is available. However, this requirement is not satisfied in many real-world graphs, where node degrees follow power-law distributions and many nodes have few or noisy connections. The extreme case, in which a node may have no neighbors at all, is called the strict cold start (SCS) scenario and forces the prediction model to rely entirely on the node's input features. We propose Cold Brew, a teacher-student distillation approach, to address the SCS and noisy-neighbor settings. We also introduce the feature-contribution ratio (FCR), a metric that quantifies the feasibility of using inductive GNNs to solve the SCS problem and helps select the best architecture for SCS generalization. We experimentally show that FCR disentangles the contributions of the various components of graph datasets, and demonstrate the superior performance of Cold Brew on several public benchmarks and proprietary e-commerce datasets. The source code for our approach is available at: https://github.com/amazon-research/gnn-tail-generalization.