In this work, we introduce a hypergraph representation learning framework called Hypergraph Neural Networks (HNN) that jointly learns hyperedge embeddings along with a set of hyperedge-dependent embeddings for each node in the hypergraph. HNN derives multiple embeddings per node in the hypergraph where each embedding for a node is dependent on a specific hyperedge of that node. Notably, HNN is accurate, data-efficient, flexible with many interchangeable components, and useful for a wide range of hypergraph learning tasks. We evaluate the effectiveness of the HNN framework for hyperedge prediction and hypergraph node classification. We find that HNN achieves an overall mean gain of 7.72% and 11.37% across all baseline models and graphs for hyperedge prediction and hypergraph node classification, respectively.
Learning fair graph representations for downstream applications is becoming increasingly important, but existing work has mostly focused on improving fairness at the global level by modifying either the graph structure or the objective function without taking into account the local neighborhood of a node. In this work, we formally introduce the notion of neighborhood fairness and develop a computational framework for learning such locally fair embeddings. We argue that the notion of neighborhood fairness is more appropriate since GNN-based models operate at the local neighborhood level of a node. Our neighborhood fairness framework has two main components that are flexible for learning fair graph representations from arbitrary data: the first aims to construct fair neighborhoods for any arbitrary node in a graph, and the second enables adaptation of these fair neighborhoods to better capture certain application- or data-dependent constraints, such as allowing neighborhoods to be more biased towards certain attributes or neighbors in the graph. Furthermore, while link prediction has been extensively studied, we are the first to investigate the graph representation learning task of fair link classification. We demonstrate the effectiveness of the proposed neighborhood fairness framework for a variety of graph machine learning tasks including fair link prediction, link classification, and learning fair graph embeddings. Notably, our approach achieves not only better fairness but also higher accuracy in the majority of cases across a wide variety of graphs, problem settings, and metrics.
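To make the idea of constructing a fair neighborhood concrete, here is a minimal Python sketch, not the paper's actual algorithm: it rebalances a node's sampled neighbors across sensitive-attribute groups, with a hypothetical knob `alpha` that lets the neighborhood stay partially biased toward the original distribution, loosely mirroring the adaptation component described above.

```python
import random
from collections import defaultdict

def fair_neighborhood(graph, attr, node, size=10, alpha=0.0):
    """Resample `node`'s neighborhood so sensitive-attribute groups are roughly
    balanced; alpha in [0, 1] interpolates back toward the original (biased)
    neighbor distribution.  graph: node -> list of neighbors, attr: node -> group."""
    nbrs = graph[node]
    if not nbrs:
        return []
    by_group = defaultdict(list)
    for u in nbrs:
        by_group[attr[u]].append(u)

    sampled = []
    for g, members in by_group.items():
        fair_share = size / len(by_group)                 # equal share per group
        biased_share = size * len(members) / len(nbrs)    # original proportion
        k = round((1 - alpha) * fair_share + alpha * biased_share)
        sampled.extend(random.choices(members, k=k))      # sample with replacement
    random.shuffle(sampled)
    return sampled[:size]
```

With `alpha = 0` the neighborhood is maximally balanced; `alpha = 1` recovers the original proportions, which is one simple way to encode the "allow some bias toward certain attributes" constraint.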
Graph Neural Networks (GNNs) have become increasingly important in recent years due to their state-of-the-art performance on many important downstream applications. Existing GNNs have mostly focused on learning a single node representation, even though a node often exhibits polysemous behavior in different contexts. In this work, we develop a persona-based graph neural network framework called PersonaSAGE that learns multiple persona-based embeddings for each node in the graph. Such disentangled representations are more interpretable and useful than a single embedding. Furthermore, PersonaSAGE learns the appropriate set of persona embeddings for each node in the graph, and every node can have a different number of assigned persona embeddings. The framework is flexible, and its general design makes the learned embeddings widely applicable across domains. We evaluate our approach on publicly available benchmark datasets against a variety of baselines. The experiments demonstrate the effectiveness of PersonaSAGE for a variety of important tasks, including link prediction, where we achieve an average gain of 15% while remaining competitive for node classification. Finally, we also demonstrate the utility of PersonaSAGE with a case study on personalized recommendation of different entity types in a data management platform.
Graph representation learning is a fast-growing field whose main goal is to produce meaningful graph representations in a low-dimensional space. The learned embeddings have been successfully applied to a variety of prediction tasks, such as link prediction, node classification, clustering, and visualization. The collective effort of the graph learning community has produced hundreds of methods, but no single method excels under all evaluation metrics, such as prediction accuracy, running time, and scalability. This survey aims to evaluate all major categories of graph embedding methods by considering algorithmic variations, parameter selection, scalability, hardware and software platforms, downstream ML tasks, and diverse datasets. We organize graph embedding techniques using a taxonomy covering manual feature engineering, matrix factorization, shallow neural networks, and deep graph convolutional networks. We evaluate these categories of algorithms on node classification, link prediction, clustering, and visualization tasks using widely used benchmark graphs. We design our experiments on top of the PyTorch Geometric and DGL libraries and run them on different multicore CPU and GPU platforms. We rigorously examine the performance of the embedding methods under various performance metrics and summarize the results. Thus, this paper can serve as a comparative guide to help users select the methods best suited to their tasks.
Clustering is a fundamental problem in network analysis that finds closely connected groups of nodes and separates them from the rest of the graph, while link prediction aims to predict whether two nodes in a network are likely to be linked. The definitions of the two naturally suggest that clustering should play a positive role in achieving accurate link prediction. Yet researchers have long either ignored this positive relationship or leveraged it in inappropriate ways. In this article, we construct a simple but efficient clustering-driven link prediction framework (ClusterLP), with the goal of directly exploiting cluster structures to predict connections between nodes as accurately as possible in both undirected and directed graphs. Specifically, we propose that in undirected graphs it is easier to establish links between nodes with similar representation vectors and cluster tendencies, while in directed graphs nodes more easily point to nodes that are similar to their representation vectors and have greater influence within their own cluster. We customize the implementation of ClusterLP for undirected and directed graphs, respectively, and experimental results on the link prediction task using multiple real-world networks show that our models are highly competitive with existing baseline models. The implementations of ClusterLP and the baselines we use are available at https://github.com/ZINUX1998/ClusterLP.
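As a rough illustration (not ClusterLP's actual formulation), the undirected intuition above can be written as a score that rewards both similar representation vectors and similar cluster tendencies; the function name and the simple additive combination below are assumptions made for the sketch.

```python
import numpy as np

def undirected_link_score(z_u, z_v, c_u, c_v):
    """Score a candidate link (u, v): embedding similarity plus cluster-tendency
    similarity (c_* are soft cluster-membership vectors)."""
    return float(z_u @ z_v) + float(c_u @ c_v)

# toy usage with random 8-dim embeddings and 3 soft clusters
rng = np.random.default_rng(0)
z = rng.normal(size=(2, 8))
c = rng.random((2, 3)); c /= c.sum(axis=1, keepdims=True)
print(undirected_link_score(z[0], z[1], c[0], c[1]))
```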
As a powerful tool for modeling complex relationships, hypergraphs have gained popularity in the graph learning community. However, commonly used frameworks in deep hypergraph learning focus on hypergraphs with edge-independent vertex weights (EIVW), without considering hypergraphs with edge-dependent vertex weights (EDVWs), which have greater modeling power. To fill this gap, we propose General Hypergraph Spectral Convolution (GHSC), a general learning framework that can not only handle both EDVW and EIVW hypergraphs but, more importantly, can explicitly exploit existing powerful graph convolutional neural networks (GCNNs) in a theoretically grounded way, thereby largely easing the design of hypergraph neural networks. In this framework, the graph Laplacian of a given undirected GCNN is replaced by a unified hypergraph Laplacian that incorporates vertex weight information from a random-walk perspective by equating our defined generalized hypergraphs with simple undirected graphs. Extensive experiments from various domains, including social network analysis, visual object classification, and protein learning, demonstrate the state-of-the-art performance of the proposed framework.
Graphs can model complex interactions between entities and arise naturally in many important applications. These applications can often be cast as standard graph learning tasks, in which a key step is learning low-dimensional graph representations. Graph neural networks (GNNs) are currently the most popular class of embedding methods. However, standard GNNs based on the neighborhood-aggregation paradigm have limited discriminative power for distinguishing high-order graph structures, as opposed to low-order structures. To capture high-order structures, researchers have resorted to motifs and developed motif-based GNNs. However, existing motif-based GNNs still suffer from limited discriminative power on high-order structures. To overcome these limitations, we propose MGNN, a novel framework to better capture high-order structures, hinging on our proposed motif redundancy minimization operator and injective motif combination. First, MGNN generates a set of node representations with respect to each motif. The next phase is our proposed redundancy minimization among motifs, which compares the motifs with each other and distills the features unique to each motif. Finally, MGNN updates node representations by combining multiple representations from different motifs. In particular, to enhance discriminative power, MGNN utilizes an injective function to combine the representations with respect to different motifs. We further show, with theoretical analysis, that the proposed architecture increases the expressive power of GNNs. We demonstrate that MGNN outperforms state-of-the-art methods on seven public benchmarks for node classification and graph classification tasks.
There has been growing interest in applying machine learning techniques to relationally structured data based on an observed graph. Often, the observed graph does not perfectly represent the true relationships between nodes. In these settings, building a generative model conditioned on the observed graph allows graph uncertainty to be taken into account. Various existing techniques either rely on restrictive assumptions, fail to preserve topological properties within the samples, or are prohibitively expensive for larger graphs. In this work, we introduce the node copying model for constructing a distribution over graphs. A random graph is sampled by replacing each node's neighbors with those of another randomly sampled node. The sampled graphs preserve key characteristics of the graph structure without explicitly targeting them. Additionally, sampling from this model is extremely simple and scales linearly with the number of nodes. We show the usefulness of the copying model in three tasks. First, in node classification, a Bayesian formulation based on node copying achieves higher accuracy in sparse data settings. Second, we employ the proposed model to mitigate the effect of adversarial attacks on the graph topology. Finally, incorporating the model into a recommendation system setting improves recall over state-of-the-art methods.
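The sampling step is simple enough to sketch in a few lines of Python; this is a schematic version under the assumption that each node comes with a precomputed list of "similar" candidates (for example, nodes with the same predicted label), not the authors' exact implementation, and `eps` is a hypothetical copy probability.

```python
import numpy as np

def node_copying_sample(adj, similar, eps=0.1):
    """Draw one graph from a node-copying-style model: with probability eps a
    node's adjacency row is replaced by that of a randomly chosen similar node."""
    new_adj = adj.copy()
    for v in range(adj.shape[0]):
        if similar[v] and np.random.rand() < eps:
            donor = int(np.random.choice(similar[v]))
            new_adj[v] = adj[donor]          # copy the donor's neighborhood
    return new_adj
```

Each sample touches every node exactly once, which is why sampling scales linearly with the number of nodes.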
Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph (e.g., degree statistics or kernel functions). However, recent years have seen a surge in approaches that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. Here we provide a conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph neural networks. We review methods to embed individual nodes as well as approaches to embed entire (sub)graphs. In doing so, we develop a unified framework to describe these recent approaches, and we highlight a number of important applications and directions for future work.
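As a concrete instance of the random-walk family discussed in the review, here is a minimal DeepWalk-style sketch (truncated random walks followed by skip-gram). It assumes `networkx` and `gensim` are available and is meant as an illustration of the technique, not a reference implementation of any particular paper.

```python
import random
import networkx as nx
from gensim.models import Word2Vec   # assumes gensim >= 4 is installed

def random_walks(G, num_walks=10, walk_len=40):
    """Generate truncated random walks starting from every node."""
    walks, nodes = [], list(G.nodes())
    for _ in range(num_walks):
        random.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_len:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(random.choice(nbrs))
            walks.append([str(v) for v in walk])   # skip-gram expects token strings
    return walks

G = nx.karate_club_graph()
walks = random_walks(G)
# skip-gram over the walks, as in DeepWalk-style methods
model = Word2Vec(walks, vector_size=64, window=5, min_count=0, sg=1, epochs=5)
emb = {v: model.wv[str(v)] for v in G.nodes()}
```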
Network representation learning (NRL) methods have received significant attention over the last few years, owing to their success in several graph analysis problems, including node classification, link prediction, and clustering. Such methods aim to map each vertex of the network into a low-dimensional space in a way that preserves the structural information of the network. Of particular interest are methods based on random walks; these methods transform the network into a collection of node sequences, aiming to learn node representations by predicting the context of each node within the sequences. In this paper, we introduce a general framework to enhance the node embeddings acquired by random-walk-based methods with topic-based information. Similar to the concept of topical word embeddings in natural language processing, the proposed model first assigns each node to a latent community, leveraging various statistical graph models and community detection methods, and then learns the enhanced topic-aware representations. We evaluate our methodology on two downstream tasks: node classification and link prediction. Experimental results demonstrate that, by incorporating node and community embeddings, we are able to outperform a wide range of baseline NRL models.
Heterogeneous graph convolutional networks have gained great popularity for tackling various network analysis tasks on heterogeneous network data, ranging from link prediction to node classification. However, most existing works ignore the relation heterogeneity of multiplex networks between multi-typed nodes and the differing importance of relations in meta-paths for node embedding, and thus can hardly capture the heterogeneous structural signals across different relations. To tackle this challenge, this work proposes a Multiplex Heterogeneous Graph Convolutional Network (MHGCN) for heterogeneous network embedding. MHGCN can automatically learn the useful heterogeneous meta-path interactions of different lengths in multiplex heterogeneous networks through multi-layer convolutional aggregation. Additionally, we effectively integrate both multi-relational structural signals and attribute semantics into the learned node embeddings under both unsupervised and semi-supervised learning paradigms. Extensive experiments on five real-world datasets with various network analysis tasks demonstrate the clear superiority of MHGCN over state-of-the-art embedding baselines in terms of all evaluation metrics.
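One simple way to picture the core mechanism, fusing the relation-specific adjacency matrices of a multiplex graph with learnable weights and then stacking graph convolutions of increasing depth, is the NumPy sketch below; the weight vector `betas`, the layer parameters, and the final averaging of layer outputs are illustrative assumptions, not the released MHGCN code.

```python
import numpy as np

def normalize(A):
    """Symmetrically normalize an adjacency matrix with self-loops added."""
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def multiplex_gcn_forward(relation_adjs, X, betas, weights):
    """Fuse relation-specific adjacencies with weights `betas`, then apply a
    stack of graph convolutions; all layers share one output dimension so the
    layer outputs (capturing interactions of different lengths) can be averaged."""
    A = sum(b * A_r for b, A_r in zip(betas, relation_adjs))   # fused multiplex structure
    A_hat = normalize(A)
    H, layers = X, []
    for W in weights:
        H = np.maximum(A_hat @ H @ W, 0)   # ReLU graph convolution
        layers.append(H)
    return np.mean(layers, axis=0)

# toy usage: 3 relations over 5 nodes, 4-dim features, two 8-dim layers
rng = np.random.default_rng(0)
rel_adjs = [(rng.random((5, 5)) < 0.4).astype(float) for _ in range(3)]
X = rng.normal(size=(5, 4))
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8))]
Z = multiplex_gcn_forward(rel_adjs, X, betas=[0.5, 0.3, 0.2], weights=weights)
```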
Temporal graphs represent dynamic relationships between entities and occur in many real-life applications such as social networks, e-commerce, communication, road networks, and biological systems. They necessitate research beyond that devoted to static graphs in terms of both generative modeling and representation learning. In this survey, we comprehensively review the recent neural approaches to time-dependent graph representation learning and generative modeling proposed for handling temporal graphs. Finally, we identify the weaknesses of existing approaches and discuss the research directions proposed in our recently published paper TIGGER [24].
Learning representations of nodes in a low-dimensional space is a crucial task with numerous interesting applications in network analysis, including link prediction, node classification, and visualization. Two popular approaches to this problem are matrix factorization and random-walk-based models. In this paper, we aim to bring together the best of both worlds for learning node representations. In particular, we propose a weighted matrix factorization model that encodes random-walk information about the nodes of the network. The benefit of this novel formulation is that it enables us to utilize kernel functions without realizing the exact proximity matrix, thereby enhancing the expressiveness of existing matrix factorization methods and alleviating their computational complexity. We extend the approach with a multiple kernel learning formulation that provides the flexibility of learning the kernel as a linear combination of a dictionary of kernels in a data-driven fashion. We perform an empirical evaluation on real-world networks, showing that the proposed model outperforms baseline node embedding algorithms on downstream machine learning tasks.
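To ground the "factorize random-walk information" idea, here is a bare-bones sketch that builds an explicit random-walk proximity matrix and factorizes it with a truncated SVD; the paper's actual contribution, using kernel functions to avoid materializing the exact proximity matrix, is deliberately not reproduced here, and the window/averaging choices below are assumptions.

```python
import numpy as np

def walk_proximity(adj, window=3):
    """Average of random-walk transition-matrix powers up to `window`: a common
    proximity target for factorization-based node embeddings."""
    deg = adj.sum(axis=1, keepdims=True).astype(float)
    deg[deg == 0] = 1.0
    P = adj / deg
    M, Pk = np.zeros_like(P), np.eye(P.shape[0])
    for _ in range(window):
        Pk = Pk @ P
        M += Pk / window
    return M

def embed_by_svd(M, dim=16):
    """Factorize the proximity matrix with a truncated SVD to obtain embeddings."""
    U, s, _ = np.linalg.svd(M)
    return U[:, :dim] * np.sqrt(s[:dim])
```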
In recent years, research on graph embedding has received great attention. However, most of the work so far has focused on the embedding of single-layer graphs. The few studies addressing the representation problem for multilayer structures rely on the strong assumption that the inter-layer links are known, which limits the range of possible applications. Here we propose MultiplexSAGE, a generalization of the GraphSAGE algorithm that allows embedding multiplex networks. We show that MultiplexSAGE is able to reconstruct both the intra-layer and the inter-layer connectivity, outperforming GraphSAGE, which was designed for simple graphs. Next, through a comprehensive experimental analysis, we also shed light on the performance of the embedding in both simple and multiplex networks, showing that either the density of the graph or the randomness of the links strongly influences the quality of the embedding.
We investigate the representation power of graph neural networks in the semi-supervised node classification task under heterophily or low homophily, i.e., in networks where connected nodes may have different class labels and dissimilar features. Many popular GNNs fail to generalize to this setting, and are even outperformed by models that ignore the graph structure (e.g., multilayer perceptrons). Motivated by this limitation, we identify a set of key designs (ego- and neighbor-embedding separation, higher-order neighborhoods, and combination of intermediate representations) that boost learning from the graph structure under heterophily. We combine them into a graph neural network, H2GCN, which we use as the base method to empirically evaluate the effectiveness of the identified designs. Going beyond the traditional benchmarks with strong homophily, our empirical analysis shows that the identified designs increase the accuracy of GNNs by up to 40% and 27% over models without them on synthetic and real networks with heterophily, respectively, and yield competitive performance under homophily.
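A stripped-down NumPy sketch of the three designs, without the learned weight matrices and nonlinearities of the full H2GCN, looks roughly like this; it is meant only to make the design choices concrete, not to reproduce the released implementation.

```python
import numpy as np

def h2gcn_style_forward(adj, X, num_rounds=2):
    """Sketch of the three designs: (1) keep the ego embedding separate from
    neighbor aggregates, (2) aggregate over 1-hop and strictly 2-hop neighborhoods,
    (3) combine intermediate representations from all rounds at the end."""
    A1 = (adj > 0).astype(float)
    np.fill_diagonal(A1, 0)                           # exclude self (ego/neighbor separation)
    A2 = ((A1 @ A1) > 0).astype(float)
    np.fill_diagonal(A2, 0)
    A2[A1 > 0] = 0                                    # keep strictly 2-hop neighbors
    A1 /= np.maximum(A1.sum(axis=1, keepdims=True), 1)
    A2 /= np.maximum(A2.sum(axis=1, keepdims=True), 1)

    reps, H = [X], X
    for _ in range(num_rounds):
        H = np.concatenate([A1 @ H, A2 @ H], axis=1)  # neighbor-only aggregation, 2 hops
        reps.append(H)
    return np.concatenate(reps, axis=1)               # combination of intermediate representations
```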
Representation learning on graphs, also known as graph embedding, has demonstrated its significant impact on a series of machine learning applications such as classification, prediction, and recommendation. However, existing work has largely ignored the rich information contained in the properties (or attributes) of both nodes and edges in modern applications, e.g., those represented by property graphs. To date, most existing graph embedding methods either focus only on plain graphs with the graph topology, or consider attributes on nodes only. We propose PGE, a graph representation learning framework that incorporates both node and edge properties into the graph embedding procedure. PGE uses node clustering to assign biases that differentiate the neighbors of a node, and leverages multiple data-driven matrices to aggregate the property information of neighbors sampled according to the biasing strategy. PGE adopts the popular inductive model of neighborhood aggregation. We provide detailed analyses showing how PGE achieves better embedding results, and validate the performance of PGE against state-of-the-art embedding methods on benchmark applications such as node classification and link prediction over real-world datasets.
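The clustering-plus-biased-sampling step can be sketched as follows; the parameter names are hypothetical, whether same-cluster or different-cluster neighbors receive the larger bias is a data-dependent choice in the framework (here different-cluster neighbors are up-weighted), and scikit-learn is assumed for the clustering.

```python
import numpy as np
from sklearn.cluster import KMeans   # assumed available for the clustering step

def biased_neighbor_sample(graph, feats, node, k=10, n_clusters=4, bias=3.0, labels=None):
    """Sample up to k neighbors of `node`, up-weighting neighbors whose feature
    cluster differs from the node's own cluster (bias > 1)."""
    if labels is None:
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    nbrs = np.asarray(graph[node])
    if nbrs.size == 0:
        return nbrs
    w = np.where(labels[nbrs] != labels[node], bias, 1.0)
    return np.random.choice(nbrs, size=min(k, nbrs.size), replace=False, p=w / w.sum())
```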
Graph neural networks (GNNs) have been widely applied to various fields for learning over graph-structured data. They have shown significant improvements over traditional heuristic methods in various tasks such as node classification and graph classification. However, since GNNs rely heavily on smoothed node features rather than the graph structure, they often perform worse than simple heuristic methods in link prediction, where structural information, e.g., overlapped neighborhoods, degrees, and shortest paths, is crucial. To address this limitation, we propose Neighborhood Overlap-aware Graph Neural Networks (Neo-GNNs), which learn useful structural features from the adjacency matrix and estimate overlapped neighborhoods for link prediction. Our Neo-GNNs generalize neighborhood-overlap-based heuristic methods and handle overlapped multi-hop neighborhoods. Extensive experiments on Open Graph Benchmark (OGB) datasets demonstrate that Neo-GNNs consistently achieve state-of-the-art performance in link prediction. Our code is publicly available at https://github.com/seongjunyun/neo_gnns.
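The family of heuristics that Neo-GNNs generalize can be illustrated with a short NumPy sketch: with `max_hops=1` the score below reduces to plain common-neighbor counting, and larger values bring in overlapped multi-hop neighborhoods with a decaying weight (the decay scheme here is an assumption for illustration, not the learned weighting inside Neo-GNN).

```python
import numpy as np

def overlap_scores(adj, pairs, max_hops=2, decay=0.5):
    """Score candidate pairs by the (decayed) weight of their overlapping
    neighborhoods within up to `max_hops` hops."""
    A = (adj > 0).astype(float)
    reach, Ak = np.zeros_like(A), np.eye(A.shape[0])
    for k in range(1, max_hops + 1):
        Ak = Ak @ A
        reach += (decay ** (k - 1)) * Ak      # weight farther neighborhoods less
    return [float(reach[u] @ reach[v]) for u, v in pairs]
```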
In recent years, heterogeneous graph neural networks (HGNNs) have been blossoming, but the unique data processing and evaluation setups used by each work obstruct a full understanding of their advancements. In this work, we present a systematic reproduction of 12 recent HGNNs using their official codes, datasets, settings, and hyperparameters, revealing surprising findings about the progress of HGNNs. We find that simple homogeneous GNNs, e.g., GCN and GAT, are largely underestimated due to improper settings. GAT with proper inputs can generally match or outperform all existing HGNNs across various scenarios. To facilitate robust and reproducible HGNN research, we construct the Heterogeneous Graph Benchmark (HGB), consisting of 11 diverse datasets with three tasks. HGB standardizes the process of heterogeneous graph data splitting, feature processing, and performance evaluation. Finally, we introduce a simple but very strong baseline, Simple-HGN, which significantly outperforms all previous models on HGB, to accelerate future progress on HGNNs.
Given a graph learning task, such as link prediction, on a new graph dataset, how can we automatically select the best method as well as its hyperparameters (collectively called a model)? Model selection for graph learning has been largely ad hoc. A typical approach has been to apply popular methods to new datasets, but this is often suboptimal. On the other hand, systematically comparing models on the new graph quickly becomes too costly, or even impractical. In this work, we develop the first meta-learning approach for automatic graph machine learning, called AutoGML, which utilizes the prior performance of a large body of existing methods on benchmark graph datasets and carries over this prior experience to automatically select an effective model for the new graph, without any model training or evaluation. To capture the similarity across graphs from different domains, we introduce specialized meta-graph features that quantify the structural characteristics of a graph. We then design a meta-graph that represents the relations between models and graphs, and develop a graph meta-learner operating on the meta-graph, which estimates the relevance of each model to different graphs. Through extensive experiments, we show that using AutoGML to select a method for a new graph significantly outperforms consistently applying popular methods, as well as several existing meta-learners, while being extremely fast at test time.
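The snippet below is only a bare-bones stand-in for the idea of carrying over prior performance via graph meta-features: a 1-nearest-neighbor meta-learner over a handful of cheap structural statistics. AutoGML's actual meta-graph features and graph meta-learner are far richer, so every function and feature here should be read as a hypothetical simplification.

```python
import numpy as np
import networkx as nx

def meta_features(G):
    """A few cheap structural statistics of a graph (the real system uses a much
    richer, specialized meta-feature set)."""
    degs = np.array([d for _, d in G.degree()], dtype=float)
    return np.array([
        G.number_of_nodes(),
        G.number_of_edges(),
        degs.mean(),
        degs.std(),
        nx.density(G),
        nx.average_clustering(G),
    ])

def select_model(new_graph, bench_graphs, perf_table, model_names):
    """Pick the model that performed best on the benchmark graph whose
    meta-features are closest to the new graph's (1-NN over meta-features).
    perf_table[i][j]: past performance of model j on benchmark graph i."""
    feats = np.array([meta_features(G) for G in bench_graphs])
    mu, sd = feats.mean(axis=0), feats.std(axis=0) + 1e-9
    x = (meta_features(new_graph) - mu) / sd
    nearest = int(np.argmin(np.linalg.norm((feats - mu) / sd - x, axis=1)))
    return model_names[int(np.argmax(perf_table[nearest]))]
```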
Heterogeneous graphs have multiple node and edge types and are semantically richer than homogeneous graphs. To learn such complex semantics, many graph neural network approaches for heterogeneous graphs use metapaths to capture multi-hop interactions between nodes. Typically, features from non-target nodes are not incorporated into the learning procedure. However, there can be nonlinear, high-order interactions involving multiple nodes or edges. In this paper, we present Simplicial Graph Attention Network (SGAT), a simplicial complex approach that represents such high-order interactions by placing features from non-target nodes on the simplices. We then use attention mechanisms and upper adjacencies to generate representations. We empirically demonstrate the efficacy of our approach on node classification tasks with heterogeneous graph datasets, and further show SGAT's ability to extract structural information by employing random node features. Numerical experiments indicate that SGAT performs better than other current state-of-the-art heterogeneous graph learning methods.