Graph representation learning is a fast-growing field whose main goal is to produce meaningful graph representations in a low-dimensional space. The learned embeddings have been successfully applied to perform various prediction tasks, such as link prediction, node classification, clustering, and visualization. The collective effort of the graph community has produced hundreds of methods, but no single method excels under all evaluation metrics, such as prediction accuracy, running time, scalability, and so on. This survey aims to evaluate all major categories of graph embedding methods by considering algorithmic variations, parameter selection, scalability, hardware and software platforms, downstream ML tasks, and diverse datasets. We organize graph embedding techniques using a taxonomy that covers manual feature engineering, matrix factorization, shallow neural networks, and deep graph convolutional networks. We evaluate algorithms from these categories on node classification, link prediction, clustering, and visualization tasks using widely used benchmark graphs. We designed our experiments on top of the PyTorch Geometric and DGL libraries and ran them on different multicore CPU and GPU platforms. We rigorously scrutinize the performance of the embedding methods under various performance metrics and summarize the results. Thus, this paper can serve as a comparative guide to help users select the methods best suited to their tasks.
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph (e.g., degree statistics or kernel functions). However, recent years have seen a surge in approaches that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. Here we provide a conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph neural networks. We review methods to embed individual nodes as well as approaches to embed entire (sub)graphs. In doing so, we develop a unified framework to describe these recent approaches, and we highlight a number of important applications and directions for future work.
Pre-publication draft of a book to be published by Morgan & Claypool Publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.
A graph is a universal data structure that is widely used to organize real-world data. Various practical networks, such as transportation networks and social and academic networks, can be represented by graphs. Recent years have witnessed the rapid development of representing the vertices of a network as low-dimensional vectors, known as network representation learning. Representation learning can facilitate the design of new algorithms on graph data. In this survey, we provide a comprehensive review of the current literature on network representation learning. Existing algorithms can be divided into three groups: shallow embedding models, heterogeneous network embedding models, and graph neural network based models. We review the state-of-the-art algorithms in each category and discuss the essential differences between them. One advantage of this survey is that we systematically study the theoretical foundations underlying the algorithms in the different categories, which offers deep insights for better understanding the development of the network representation learning field.
Deep learning has been shown to be successful in a number of domains, ranging from acoustics, images, to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, substantial research efforts have been devoted to applying deep learning methods to graphs, resulting in beneficial advances in graph analysis techniques. In this survey, we comprehensively review the different types of deep learning methods on graphs. We divide the existing methods into five categories based on their model architectures and training strategies: graph recurrent neural networks, graph convolutional networks, graph autoencoders, graph reinforcement learning, and graph adversarial methods. We then provide a comprehensive overview of these methods in a systematic manner mainly by following their development history. We also analyze the differences and compositions of different methods. Finally, we briefly outline the applications in which they have been used and discuss potential future research directions.
Temporal graphs represent dynamic relationships among entities and occur in many real-life applications such as social networks, e-commerce, communication, road networks, biological systems, and many more. They necessitate research beyond the work related to static graphs in terms of their generative modeling and representation learning. In this survey, we comprehensively review the neural time-dependent graph representation learning and generative modeling approaches proposed in recent times for handling temporal graphs. Finally, we identify the weaknesses of existing approaches and discuss the research proposal of our recently published paper, TIGGER [24].
This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale for real world information networks which usually contain millions of nodes. In this paper, we propose a novel network embedding method called the "LINE," which is suitable for arbitrary types of information networks: undirected, directed, and/or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. The source code of the LINE is available online.
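As a rough illustration of the edge-sampling idea described above, the following is a minimal PyTorch sketch of a LINE-style second-order proximity objective trained on sampled edges with negative sampling. It is not the released LINE implementation; the class and parameter names (`LINESecondOrder`, `embed_dim`, the negative-sample shape) are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LINESecondOrder(nn.Module):
    def __init__(self, num_nodes, embed_dim=128):
        super().__init__()
        self.vertex = nn.Embedding(num_nodes, embed_dim)   # embedding of a node as a vertex
        self.context = nn.Embedding(num_nodes, embed_dim)  # embedding of a node as a context

    def forward(self, src, dst, neg):
        # src, dst: endpoints of edges sampled proportionally to their weights, shape (B,)
        # neg: negative context nodes drawn from a noise distribution, shape (B, K)
        u = self.vertex(src)                                  # (B, d)
        pos_score = (u * self.context(dst)).sum(-1)           # (B,)
        neg_score = torch.bmm(self.context(neg), u.unsqueeze(-1)).squeeze(-1)  # (B, K)
        # maximize log-sigmoid of positive edges, minimize it for negatives
        return -(F.logsigmoid(pos_score).mean() + F.logsigmoid(-neg_score).mean())

# Usage (hypothetical): sample a minibatch of edges, draw negatives (e.g. from a
# degree^0.75 distribution), and minimize the returned loss with SGD or Adam.
```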
Representation learning on graphs, also called graph embedding, has demonstrated its significant impact on a series of machine learning applications such as classification, prediction, and recommendation. However, existing work has largely ignored the rich information contained in the properties (or attributes) of both nodes and edges of graphs in modern applications, e.g., those modeled as property graphs. To date, most existing graph embedding methods either focus only on plain graphs with graph topology, or consider attributes on nodes only. We propose PGE, a graph representation learning framework that incorporates both node and edge properties into the graph embedding procedure. PGE uses node clustering to assign biases to differentiate a node's neighbors, and leverages multiple data-driven matrices to aggregate the property information of neighbors sampled based on the biased strategy. PGE adopts the popular inductive model of neighborhood aggregation. We provide a detailed analysis showing how PGE achieves better embedding results, and validate the performance of PGE against state-of-the-art embedding methods on benchmark applications, such as node classification and link prediction, on real-world datasets.
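To make the biased-sampling idea more concrete, here is a simplified sketch in the spirit of the description above: cluster nodes on their property vectors, then sample each node's neighbors with different weights depending on whether they share the node's cluster, before aggregating their properties. This is an illustrative assumption, not the PGE implementation; the function name, the bias values `b_same`/`b_diff`, and the use of k-means are all placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def biased_neighbor_sample(adj, features, n_clusters=8, b_same=0.5, b_diff=1.0, k=10):
    """Cluster nodes on their property vectors, then sample each node's neighbors
    with weight b_same (same cluster) or b_diff (different cluster)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    rng = np.random.default_rng(0)
    sampled = {}
    for v, nbrs in enumerate(adj):
        if not nbrs:
            sampled[v] = [v]
            continue
        w = np.array([b_same if labels[u] == labels[v] else b_diff for u in nbrs])
        size = min(k, len(nbrs))
        sampled[v] = list(rng.choice(nbrs, size=size, replace=False, p=w / w.sum()))
    return sampled  # neighborhood aggregation over the sampled nodes' properties follows
```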
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. Design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which emerged as a de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair experimental evaluation protocol and choose a proper collection of datasets. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions, which shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an efficient tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
A widely established set of unsupervised node embedding methods can be interpreted as consisting of two distinct steps: (i) the definition of a similarity matrix based on the graph of interest, followed by (ii) an explicit or implicit factorization of that matrix. Inspired by this viewpoint, we propose improvements to both steps of the framework. On the one hand, we propose encoding node similarities based on the free energy distance, which interpolates between the shortest-path and commute-time distances, thus providing additional flexibility. On the other hand, we propose a matrix factorization method based on a loss function that generalizes the loss of the skip-gram model to arbitrary similarity matrices. Compared to factorizations based on the widely used $\ell_2$ loss, this approach better preserves node pairs associated with higher similarity scores. Moreover, it can be easily implemented using advanced automatic differentiation toolkits and computed efficiently by leveraging GPU resources. Node clustering, node classification, and link prediction experiments on real-world datasets demonstrate the effectiveness of incorporating free-energy-based similarities as well as of the proposed matrix factorization, compared to state-of-the-art alternatives.
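The last point, that such a factorization can be written directly against an automatic differentiation toolkit, can be illustrated with a short sketch. The snippet below factorizes a precomputed node-similarity matrix `S` with a skip-gram-like log-sigmoid loss instead of an $\ell_2$ objective; it is a minimal, assumption-laden example (the uniform negative weighting and the optimizer settings are placeholders), not the paper's implementation, and `S` could for instance be derived from a free-energy-style distance.

```python
import torch
import torch.nn.functional as F

def factorize_similarity(S, dim=64, epochs=200, lr=0.05):
    """Factorize an (n x n) similarity tensor S with a skip-gram-like loss."""
    n = S.shape[0]
    U = torch.randn(n, dim, requires_grad=True)   # "input" node embeddings
    V = torch.randn(n, dim, requires_grad=True)   # "context" node embeddings
    opt = torch.optim.Adam([U, V], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        logits = U @ V.T                           # reconstructed pairwise scores
        # Skip-gram-style objective: S acts as soft positive weights, while all
        # pairs receive a uniform negative term (a simplifying assumption here).
        loss = -(S * F.logsigmoid(logits)).mean() - F.logsigmoid(-logits).mean()
        loss.backward()
        opt.step()
    return U.detach()
```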
A graph embedding is a transformation of the nodes of a graph into a set of vectors. A good embedding should capture the graph topology, node-to-node relationships, and other relevant information about the graph, its subgraphs, and its nodes. If these objectives are achieved, the embedding is a meaningful, understandable, compressed representation of the network that can be used with other machine learning tools such as node classification, community detection, or link prediction. The main challenge is ensuring that the embedding describes the properties of the graph well. As a result, selecting the best embedding is a challenging task that often requires domain experts. In this paper, we carry out an extensive series of experiments with selected graph embedding algorithms on both real-world and artificially generated networks. Based on these experiments, we formulate two general conclusions. First, if one needs to pick a single embedding algorithm before running experiments, node2vec is the best choice, as it performed best in our tests. Having said that, there is no single winner across all tests, and moreover, most embedding algorithms have hyperparameters that should be tuned rather than assigned randomly. Therefore, our main recommendation for practitioners is, if possible, to generate several embeddings of a problem and then use a general framework that provides tools for comparing unsupervised graph embeddings. This framework (recently introduced in the literature and readily available in a GitHub repository) assigns divergence scores to embeddings to help distinguish good ones from bad ones.
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
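The sample-and-aggregate step described above can be sketched in a few lines. The following NumPy snippet shows a single GraphSAGE-style layer with a mean aggregator over sampled neighbors; it is a simplified illustration under stated assumptions (fixed sample size, dense features, a single concatenation weight matrix `W`), not the reference implementation.

```python
import random
import numpy as np

def sample_neighbors(adj, node, k):
    """Sample k neighbors (with replacement); fall back to the node itself if isolated."""
    nbrs = adj[node]
    return random.choices(nbrs, k=k) if nbrs else [node]

def sage_mean_layer(features, adj, W, sample_size=10):
    """One GraphSAGE-style layer: W has shape (2 * in_dim, out_dim) and is applied
    to the concatenation [own features ; mean of sampled neighbor features]."""
    out = []
    for v in range(len(adj)):
        nbrs = sample_neighbors(adj, v, sample_size)
        h_neigh = features[nbrs].mean(axis=0)                            # mean aggregator
        h = np.maximum(np.concatenate([features[v], h_neigh]) @ W, 0)    # ReLU
        out.append(h)
    H = np.vstack(out)
    return H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)        # L2-normalize rows
```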
Learning representations of nodes in a low-dimensional space is a crucial task with many interesting applications in network analysis, including link prediction, node classification, and visualization. Two popular approaches to this problem are matrix factorization and random-walk based models. In this paper, we aim to bring the best of both worlds together for learning node representations. In particular, we propose a weighted matrix factorization model that encodes random-walk information about the nodes of the network. The benefit of this novel formulation is that it enables us to utilize kernel functions without realizing the exact proximity matrix, thereby enhancing the expressiveness of existing matrix factorization methods and mitigating their computational complexity. We extend the approach with a multiple-kernel learning formulation, which provides the flexibility of learning the kernel as a linear combination of a dictionary of kernels in a data-driven fashion. We perform an empirical evaluation on real-world networks, showing that the proposed model outperforms baseline node embedding algorithms in downstream machine learning tasks.
Heterogeneous graph convolutional networks have gained great popularity in tackling various network analysis tasks on heterogeneous network data, ranging from link prediction to node classification. However, most existing works ignore the relation heterogeneity of multiplex networks between multi-typed nodes and the differing importance of relations in meta-paths for node embedding, and can hardly capture the heterogeneous structural signals across different relations. To tackle this challenge, this work proposes a Multiplex Heterogeneous Graph Convolutional Network (MHGCN) for heterogeneous network embedding. Our MHGCN can automatically learn useful heterogeneous meta-path interactions of different lengths in multiplex heterogeneous networks through multi-layer convolution aggregation. Additionally, we effectively integrate both multi-relation structural signals and attribute semantics into the learned node embeddings with both unsupervised and semi-supervised learning paradigms. Extensive experiments on five real-world datasets with various network analysis tasks demonstrate the significant superiority of MHGCN over state-of-the-art embedding baselines in terms of all evaluation metrics.
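As a rough sketch of the multi-relation aggregation idea (an assumption-based illustration, not the released MHGCN code), one can learn a scalar importance per relation type, fuse the per-relation adjacency matrices into a single weighted graph, and then apply multi-layer graph convolution over the fused graph. The class name `MultiplexConv` and the layer-output averaging at the end are illustrative choices.

```python
import torch
import torch.nn as nn

class MultiplexConv(nn.Module):
    def __init__(self, num_relations, in_dim, out_dim, num_layers=2):
        super().__init__()
        # one learnable importance weight per relation type
        self.rel_weight = nn.Parameter(torch.ones(num_relations))
        self.layers = nn.ModuleList(
            nn.Linear(in_dim if i == 0 else out_dim, out_dim) for i in range(num_layers)
        )

    def forward(self, adjs, x):
        # adjs: list of dense (n x n) adjacency matrices, one per relation type
        # x: (n x in_dim) node attribute matrix
        A = sum(w * a for w, a in zip(self.rel_weight, adjs))  # fused weighted graph
        h, outs = x, []
        for layer in self.layers:
            h = A @ layer(h)                    # linear graph convolution on the fused graph
            outs.append(h)
        return torch.stack(outs).mean(dim=0)    # average layer outputs as node embeddings
```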
Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.
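The biased random walk procedure mentioned above is driven by a return parameter p and an in-out parameter q. Below is a short illustrative sketch of such a second-order biased walk using plain (unnormalized) transition weights; it omits the alias-table sampling used in the authors' implementation and assumes that `adj` maps each node to its neighbor list.

```python
import random

def biased_walk(adj, start, length, p=1.0, q=1.0):
    """One second-order biased walk; adj maps each node to a list of neighbors."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = adj[cur]
        if not nbrs:
            break
        if len(walk) == 1:
            walk.append(random.choice(nbrs))
            continue
        prev = walk[-2]
        weights = []
        for x in nbrs:
            if x == prev:            # returning to the previous node
                weights.append(1.0 / p)
            elif x in adj[prev]:     # staying at distance 1 from the previous node
                weights.append(1.0)
            else:                    # moving outward to distance 2
                weights.append(1.0 / q)
        walk.append(random.choices(nbrs, weights=weights, k=1)[0])
    return walk

# The collected walks are then typically fed to a skip-gram model
# (e.g. gensim's Word2Vec) to produce the node embeddings.
```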
Graph representation learning methods have opened new possibilities for addressing complex, real-world problems represented by graphs. However, many graphs used in these applications comprise millions of nodes and billions of edges and are beyond the capabilities of current methods and software implementations. We present GRAPE, a software resource for graph processing and representation learning that is able to scale to big graphs by using specialized and smart data structures, algorithms, and fast parallel implementations. Compared with state-of-the-art software resources, GRAPE shows improvements of orders of magnitude in empirical space and time complexity, as well as substantial and statistically significant improvements in edge-prediction and node-label prediction performance. Moreover, GRAPE provides more than 80,000 graphs from the literature and other sources, standardized interfaces allowing the direct integration of third-party libraries, 61 node embedding methods, 25 inference models, and 3 modular pipelines that allow fair and reproducible comparison of methods, as well as libraries for graph processing and embedding.
We study the problem of large-scale network embedding, which aims to learn low-dimensional latent representations for network mining applications. Recent research in the field of network embedding has led to large advances such as DeepWalk, LINE, NetMF, and NetSMF. However, the huge size of many real-world networks makes it computationally expensive to learn network embeddings from the entire network. In this work, we propose a novel network embedding method called "NES," which learns network embeddings from a small representative subgraph. NES leverages theories from graph sampling to efficiently construct a representative subgraph of smaller size, which can be used to make inferences about the full network, enabling a significant improvement in the efficiency of embedding learning. NES then efficiently computes the network embedding from this representative subgraph. Extensive experiments on networks of various scales and types show that NES achieves performance comparable to well-known methods with significant efficiency advantages.
Graph kernels have attracted a lot of attention during the last decade and have evolved into a rapidly developing branch of learning on structured data. During the past 20 years, the considerable research activity that occurred in the field resulted in the development of dozens of graph kernels, each focusing on specific structural properties of graphs. Graph kernels have been applied successfully to a wide range of domains, ranging from social networks to bioinformatics. The goal of this survey is to provide a unifying view of the literature on graph kernels. In particular, we present an overview of a wide range of graph kernels. Moreover, we perform an experimental evaluation of several of those kernels on publicly available datasets and provide a comparative study. Finally, we discuss key applications of graph kernels and outline some challenges that remain to be addressed.
Heterogeneous graph neural networks (HGNNs) have been blossoming in recent years, but the unique data processing and evaluation setups used by each work obstruct a full understanding of their advancements. In this work, we present a systematic reproduction of 12 recent HGNNs using their official codes, datasets, settings, and hyperparameters, revealing surprising findings about the progress of HGNNs. We find that simple homogeneous GNNs, e.g., GCN and GAT, are largely underestimated due to improper settings. GAT with proper inputs can generally match or outperform all existing HGNNs across various scenarios. To facilitate robust and reproducible HGNN research, we construct the Heterogeneous Graph Benchmark (HGB), consisting of 11 diverse datasets with three tasks. HGB standardizes the process of heterogeneous graph data splitting, feature processing, and performance evaluation. Finally, we introduce a simple but very strong baseline, Simple-HGN, which significantly outperforms all previous models on HGB, to accelerate the advancement of HGNNs in the future.