We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real-world applications such as network classification and anomaly detection.
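The pipeline the abstract describes (truncated random walks treated as sentences, then skip-gram training) can be summarized in a short sketch. The following is a minimal illustration, assuming networkx and gensim (>= 4.0) are available; the hyperparameter values are illustrative rather than the paper's exact settings.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walk(graph, start, walk_length):
    """Truncated uniform random walk starting at `start`."""
    walk = [start]
    while len(walk) < walk_length:
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(node) for node in walk]  # gensim expects string tokens

def deepwalk_embeddings(graph, walks_per_node=10, walk_length=40, dim=128, window=5):
    walks, nodes = [], list(graph.nodes())
    for _ in range(walks_per_node):
        random.shuffle(nodes)  # one pass over all vertices per "epoch"
        walks.extend(random_walk(graph, v, walk_length) for v in nodes)
    # Skip-gram (sg=1) with hierarchical softmax, as in the original paper
    model = Word2Vec(walks, vector_size=dim, window=window, min_count=0, sg=1, hs=1)
    return {node: model.wv[str(node)] for node in graph.nodes()}

if __name__ == "__main__":
    G = nx.karate_club_graph()
    emb = deepwalk_embeddings(G)
    print(len(emb), len(next(iter(emb.values()))))
```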
Network representation learning (NRL) methods have received significant attention over the past few years owing to their success in several graph analysis problems, including node classification, link prediction, and clustering. These methods aim to map each vertex of the network into a low-dimensional space in a way that preserves the structural information of the network. Of particular interest are random-walk-based methods; these methods transform the network into a collection of node sequences and aim to learn node representations by predicting the context of each node within the sequences. In this paper, we introduce a general framework to enhance the embeddings of nodes obtained by random-walk-based methods with topic information. Similar to the concept of topical word embeddings in natural language processing, the proposed model first assigns each node to a latent community, leveraging various statistical graph models and community detection methods, and then learns enhanced topic-aware representations. We evaluate our method on two downstream tasks: node classification and link prediction. The experimental results demonstrate that, by incorporating node and community embeddings, we are able to outperform a wide range of baseline NRL models.
Graph representation learning is a fast-growing field in which one of the main objectives is to produce meaningful representations of graphs in low-dimensional spaces. The learned embeddings have been successfully applied to a variety of prediction tasks, such as link prediction, node classification, clustering, and visualization. The collective effort of the graph-learning community has produced hundreds of methods, but no single method excels under all evaluation metrics, such as prediction accuracy, running time, scalability, and so on. This survey aims to evaluate all major classes of graph embedding methods by considering algorithmic variations, parameter selection, scalability, hardware and software platforms, downstream ML tasks, and diverse datasets. We organize graph embedding techniques using a taxonomy that covers manual feature engineering, matrix factorization, shallow neural networks, and deep graph convolutional networks. We evaluate these classes of algorithms on node classification, link prediction, clustering, and visualization tasks using widely used benchmark graphs. We design our experiments on top of the PyTorch Geometric and DGL libraries and run them on different multicore CPU and GPU platforms. We rigorously scrutinize the performance of the embedding methods under various performance metrics and summarize the results. Thus, this paper can serve as a comparative guide to help users select the methods best suited for their tasks.
This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale to real-world information networks, which usually contain millions of nodes. In this paper, we propose a novel network embedding method called "LINE," which is suitable for arbitrary types of information networks: undirected, directed, and/or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient and is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. The source code of LINE is available online.
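The edge-sampling idea can be illustrated with a small numpy sketch of the first-order LINE objective: edges are drawn in proportion to their weight, and each update combines one positive edge with a few negative samples. This is a simplification under stated assumptions (uniform negative sampling, no alias tables), not the paper's optimized implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def line_first_order(edges, weights, num_nodes, dim=64, steps=100_000,
                     neg_samples=5, lr=0.025, seed=0):
    """edges: list of (i, j) pairs; weights: matching edge weights."""
    rng = np.random.default_rng(seed)
    emb = rng.normal(scale=0.1, size=(num_nodes, dim))
    probs = np.asarray(weights, dtype=float)
    probs /= probs.sum()
    for _ in range(steps):
        i, j = edges[rng.choice(len(edges), p=probs)]  # weight-proportional edge sample
        # positive update: pull the two endpoints together
        g = lr * (1.0 - sigmoid(emb[i] @ emb[j]))
        gi, gj = g * emb[j], g * emb[i]
        emb[i] += gi
        emb[j] += gj
        # negative updates: push i away from randomly drawn nodes
        for n in rng.integers(0, num_nodes, size=neg_samples):
            g = -lr * sigmoid(emb[i] @ emb[n])
            gi, gn = g * emb[n], g * emb[i]
            emb[i] += gi
            emb[n] += gn
    return emb
```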
Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.
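The biased walk at the core of node2vec can be sketched directly from the description above: the unnormalized weight of the next step is 1/p for returning to the previous node, 1 for a node adjacent to the previous node, and 1/q otherwise. The parameters p and q follow the paper; the uniform edge weights and absence of alias tables are simplifications for illustration.

```python
import random
import networkx as nx

def node2vec_walk(graph, start, walk_length, p=1.0, q=1.0):
    walk = [start]
    while len(walk) < walk_length:
        cur = walk[-1]
        neighbors = list(graph.neighbors(cur))
        if not neighbors:
            break
        if len(walk) == 1:                       # first step: uniform choice
            walk.append(random.choice(neighbors))
            continue
        prev = walk[-2]
        weights = []
        for nbr in neighbors:
            if nbr == prev:                      # distance 0 from previous node
                weights.append(1.0 / p)
            elif graph.has_edge(prev, nbr):      # distance 1 from previous node
                weights.append(1.0)
            else:                                # distance 2 from previous node
                weights.append(1.0 / q)
        walk.append(random.choices(neighbors, weights=weights)[0])
    return walk

# Example: a BFS-like, locally biased walk (small p, large q keeps the walk local)
G = nx.karate_club_graph()
print(node2vec_walk(G, 0, walk_length=10, p=0.25, q=4.0))
```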
Learning representations of nodes in a low-dimensional space is a crucial task with many interesting applications in network analysis, including link prediction, node classification, and visualization. Two popular approaches to this problem are matrix factorization and random-walk-based models. In this paper, we aim to bring together the best of both worlds to learn node representations. In particular, we propose a weighted matrix factorization model that encodes random-walk information about the nodes of the network. The benefit of this novel formulation is that it enables us to utilize kernel functions without realizing the exact proximity matrix, which enhances the expressiveness of existing matrix factorization methods and alleviates their computational complexity. We extend the approach with a multiple kernel learning formulation, which provides the flexibility to learn the kernel as a linear combination of a dictionary of kernels in a data-driven fashion. We perform an empirical evaluation on real-world networks, showing that the proposed model outperforms baseline node embedding algorithms in downstream machine learning tasks.
Since the invention of word2vec [28,29], the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk [31] empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE [37], in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE [36] can be viewed as the joint factorization of multiple networks' Laplacians; (4) node2vec [16] is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.
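The closed-form matrix behind NetMF can be written out as a small numpy sketch: construct log(max((vol(G)/(bT)) (sum_{r=1}^{T} P^r) D^{-1}, 1)) with P = D^{-1}A, then factor it with a truncated SVD. This only illustrates the object being factorized; the paper's sparse and approximate variants for large context windows are omitted.

```python
import numpy as np

def netmf_embedding(A, dim=32, T=10, b=1.0):
    """A: dense adjacency matrix (assumes no isolated vertices); T: window size; b: negative samples."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    vol = deg.sum()
    D_inv = np.diag(1.0 / deg)
    P = D_inv @ A                          # random-walk transition matrix
    S = np.zeros_like(A)
    P_r = np.eye(A.shape[0])
    for _ in range(T):                     # sum of the first T powers of P
        P_r = P_r @ P
        S += P_r
    M = (vol / (b * T)) * S @ D_inv
    M_log = np.log(np.maximum(M, 1.0))     # element-wise truncated logarithm
    U, sigma, _ = np.linalg.svd(M_log)
    return U[:, :dim] * np.sqrt(sigma[:dim])
```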
Graph embedding, which represents local and global neighborhood information with numerical vectors, is a critical part of the mathematical modeling of a wide range of real-world systems. Among embedding algorithms, random-walk-based algorithms have proven to be very successful. These algorithms collect information by creating numerous random walks with a predefined number of steps. Creating the random walks is the most computationally demanding part of the embedding process, and the demand increases with the size of the network. Moreover, for real-world networks, treating all nodes on the same footing leads to an imbalanced-data problem because of the abundance of low-degree nodes. In this work, a computationally less demanding, node-connectivity-aware sampling method is proposed: the number of random walks started from each node is made proportional to its degree. The advantages of the proposed algorithm become even more pronounced when it is applied to large graphs. A comparative study using two networks, namely Cora and Citeseer, is presented. Compared with the fixed-number-of-walks case, the proposed method requires 50% less computational effort to reach the same accuracy in node classification and link prediction computations.
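The allocation rule described above, assigning walks in proportion to node degree rather than a fixed number per node, can be sketched as follows. The proportionality constant and the walk routine are assumptions for illustration, not the paper's exact procedure.

```python
import random

def degree_proportional_walks(graph, budget_per_edge=2, walk_length=40):
    """graph: a networkx-style graph. Allocates more walks to high-degree nodes."""
    walks = []
    for node in graph.nodes():
        # number of walks grows with the node's degree instead of being fixed
        n_walks = max(1, round(budget_per_edge * graph.degree(node)))
        for _ in range(n_walks):
            walk = [node]
            while len(walk) < walk_length:
                nbrs = list(graph.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(random.choice(nbrs))
            walks.append([str(v) for v in walk])
    return walks
```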
Pre-publication draft of a book to be published by Morgan & Claypool publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.
Graph kernels have attracted a lot of attention during the last decade and have evolved into a rapidly developing branch of learning on structured data. The considerable research activity that has occurred in the field over the past 20 years has resulted in the development of dozens of graph kernels, each focusing on specific structural properties of graphs. Graph kernels have been successfully applied to a wide range of domains, from social networks to bioinformatics. The goal of this survey is to provide a unifying view of the literature on graph kernels. In particular, we present an overview of a wide range of graph kernels. Moreover, we perform an experimental evaluation of several of these kernels on publicly available datasets and provide a comparative study. Finally, we discuss key applications of graph kernels and outline some challenges that remain to be addressed.
Graph embedding is the transformation of the nodes of a graph into a set of vectors. A good embedding should capture the graph topology, node-to-node relationships, and other relevant information about the graph, its subgraphs, and its nodes. If these objectives are achieved, an embedding is a meaningful, understandable, compressed representation of a network that can be used with other machine learning tools such as node classification, community detection, or link prediction. The main challenge is ensuring that the embedding describes the properties of the graph well. As a result, selecting the best embedding is a challenging task and often requires domain experts. In this paper, we perform an extensive series of experiments with selected graph embedding algorithms on real-world and artificially generated networks. Based on these experiments, we formulate two general conclusions. First, if one needs to pick one embedding algorithm before running the experiments, then node2vec is the best choice, as it performed best in our tests. That said, there is no single winner across all tests, and, additionally, most of the embedding algorithms have hyperparameters that should be tuned rather than assigned at random. Therefore, our main recommendation for practitioners is, if possible, to generate several embeddings for a problem and then use a general framework that provides tools for unsupervised graph embedding comparison. This framework (recently introduced in the literature and readily available in a GitHub repository) assigns a divergence score to embeddings to help distinguish good ones from bad ones.
Deep graph kernels
In this paper, we present Deep Graph Kernels, a unified framework to learn latent representations of sub-structures for graphs, inspired by the latest advancements in language modeling and deep learning. Our framework leverages the dependency information between sub-structures by learning their latent representations. We demonstrate instances of our framework on three popular graph kernels, namely Graphlet kernels, Weisfeiler-Lehman subtree kernels, and Shortest-Path graph kernels. Our experiments on several benchmark datasets show that Deep Graph Kernels achieve significant improvements in classification accuracy over state-of-the-art graph kernels.
Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph (e.g., degree statistics or kernel functions). However, recent years have seen a surge in approaches that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. Here we provide a conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph neural networks. We review methods to embed individual nodes as well as approaches to embed entire (sub)graphs. In doing so, we develop a unified framework to describe these recent approaches, and we highlight a number of important applications and directions for future work.
Graphs are a universal data structure widely used to organize real-world data. Various practical networks, such as transportation networks and social and academic networks, can be represented by graphs. Recent years have witnessed the rapid development of techniques for representing the vertices of a network in a low-dimensional vector space, known as network representation learning. Representation learning can facilitate the design of new algorithms on graph data. In this survey, we conduct a comprehensive review of the current literature on network representation learning. Existing algorithms can be divided into three groups: shallow embedding models, heterogeneous network embedding models, and graph-neural-network-based models. We review state-of-the-art algorithms in each category and discuss the essential differences between them. One strength of this survey is that we systematically study the theoretical foundations underlying the different categories of algorithms, which provides deep insights for a better understanding of how the field of network representation learning has developed.
Recent advances in complex network analysis have opened up a wide range of possibilities for applications in different fields. The power of network analysis depends on the node features. Topology-based node features are realizations of local and global spatial relations and of the node connectivity structure. Hence, collecting correct information on node characteristics and on the connectivity structure of neighboring nodes plays the most prominent role in node classification and link prediction in complex network analysis. The present work introduces a new feature abstraction method based on embedding anonymous random walks into vectors, namely the transition probability matrix (TPM). A node's feature vector consists of transition probabilities obtained from a set of walks within a predefined radius. The transition probabilities are directly related to the local connectivity structure and are therefore faithfully embedded into the feature vectors. The success of the proposed embedding method is tested on node identification/classification and link prediction on three commonly used real-world networks. In real-world networks, nodes with similar connectivity structures are common; therefore, transferring information obtained from similar networks to make predictions on new networks is a distinguishing feature that makes the proposed algorithm superior to state-of-the-art algorithms in cross-network generalization tasks.
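The anonymous-walk construction can be illustrated with a short sketch: each sampled walk is relabeled by the order in which vertices first appear, so walks with the same connectivity pattern collapse to the same key regardless of vertex identity. Using the empirical distribution over these patterns as a node feature is an illustrative simplification of the paper's transition-probability-matrix construction.

```python
import random
from collections import Counter

def anonymize(walk):
    """Map a walk like [7, 3, 7, 9] to its anonymous pattern (0, 1, 0, 2)."""
    first_seen = {}
    return tuple(first_seen.setdefault(v, len(first_seen)) for v in walk)

def anonymous_walk_profile(graph, node, num_walks=200, walk_length=5):
    """graph: a networkx-style graph. Returns a distribution over anonymous patterns."""
    counts = Counter()
    for _ in range(num_walks):
        walk = [node]
        while len(walk) < walk_length:
            nbrs = list(graph.neighbors(walk[-1]))
            if not nbrs:
                break
            walk.append(random.choice(nbrs))
        counts[anonymize(walk)] += 1
    total = sum(counts.values())
    return {pattern: c / total for pattern, c in counts.items()}
```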
Over the past two decades, we have witnessed a dramatic increase in valuable big data structured in the form of graphs or networks. To apply traditional machine learning and data analysis techniques to such data, it is necessary to transform graphs into vector-based representations that preserve their most essential structural properties. For this purpose, a large number of graph embedding methods have been proposed in the literature. Most of them produce general-purpose embeddings suitable for a variety of applications such as node clustering, node classification, graph visualization, and link prediction. In this paper, we propose two novel graph embedding algorithms based on random walks that are specifically designed for the node classification problem. The random-walk sampling strategies of the designed algorithms pay special attention to hubs, the high-degree nodes that play the most critical role in large-scale graphs. The proposed methods are experimentally evaluated by analyzing the classification performance of three classification algorithms trained on embeddings of real-world networks. The obtained results indicate that our methods considerably improve the predictive power of the examined classifiers compared to the currently most popular random-walk method for generating general-purpose graph embeddings (node2vec).
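As a loosely hedged illustration of a hub-sensitive sampling strategy, the sketch below biases each step of the walk toward higher-degree neighbors. This is an assumed example of the general idea only, not the specific sampling strategies proposed in the paper.

```python
import random

def hub_biased_walk(graph, start, walk_length=40):
    """graph: a networkx-style graph. Next step is drawn proportionally to neighbor degree."""
    walk = [start]
    while len(walk) < walk_length:
        nbrs = list(graph.neighbors(walk[-1]))
        if not nbrs:
            break
        weights = [graph.degree(n) for n in nbrs]   # favor hubs
        walk.append(random.choices(nbrs, weights=weights)[0])
    return walk
```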
Network embedding is an important method to learn low-dimensional representations of vertices in networks, aiming to capture and preserve the network structure. Almost all the existing network embedding methods adopt shallow models. However, since the underlying network structure is complex, shallow models cannot capture the highly non-linear network structure, resulting in sub-optimal network representations. Therefore, how to find a method that is able to effectively capture the highly non-linear network structure and preserve the global and local structure is an open yet important problem. To solve this problem, in this paper we propose a Structural Deep Network Embedding method, namely SDNE. More specifically, we first propose a semi-supervised deep model, which has multiple layers of non-linear functions, thereby being able to capture the highly non-linear network structure. Then we propose to exploit the first-order and second-order proximity jointly to preserve the network structure. The second-order proximity is used by the unsupervised component to capture the global network structure, while the first-order proximity is used as supervised information in the supervised component to preserve the local network structure. By jointly optimizing them in the semi-supervised deep model, our method can preserve both the local and global network structure and is robust to sparse networks. Empirically, we conduct experiments on five real-world networks, including a language network, a citation network and three social networks. The results show that compared to the baselines, our method can reconstruct the original network significantly better and achieves substantial gains in three applications, i.e. multi-label classification, link prediction and visualization.
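A compact PyTorch sketch of the two proximities described above: a deep autoencoder reconstructs each vertex's adjacency row (second-order proximity, with non-zero entries up-weighted), while a Laplacian-style penalty pulls the embeddings of connected vertices together (first-order proximity). The layer sizes and loss weights below are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

class SDNE(nn.Module):
    def __init__(self, num_nodes, hidden=256, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(num_nodes, hidden), nn.ReLU(),
                                     nn.Linear(hidden, dim))
        self.decoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, num_nodes))

    def forward(self, x):
        z = self.encoder(x)          # embedding
        return z, self.decoder(z)    # embedding and reconstructed adjacency row

def sdne_loss(adj, z, recon, beta=5.0, alpha=1e-4):
    # second-order term: reconstruct adjacency rows, up-weighting observed edges
    weight = torch.where(adj > 0, torch.full_like(adj, beta), torch.ones_like(adj))
    loss_2nd = ((recon - adj) ** 2 * weight).sum()
    # first-order term: connected vertices should have nearby embeddings
    loss_1st = (adj * torch.cdist(z, z) ** 2).sum()
    return loss_2nd + alpha * loss_1st

# Usage sketch (adj: dense float adjacency matrix of shape [n, n]):
#   model = SDNE(num_nodes=adj.shape[1])
#   z, recon = model(adj)
#   sdne_loss(adj, z, recon).backward()
```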
We study the problem of large-scale network embedding, which aims to learn low-dimensional latent representations for network mining applications. Recent research in the area of network embedding has led to large advances, such as DeepWalk, LINE, NetMF, and NetSMF. However, the huge size of many real-world networks makes learning a network embedding from the entire network computationally expensive. In this work, we propose a novel network embedding method called "NES", which learns network embeddings from a small representative subgraph. NES leverages theories from graph sampling to efficiently construct a representative subgraph of smaller size that can be used to make inferences about the full network, enabling significantly improved efficiency of embedding learning. NES then computes the network embedding from this representative subgraph efficiently. Extensive experiments on networks of various scales and types demonstrate that, compared with well-known methods, NES achieves comparable performance with a significant efficiency advantage.
Local graph neighborhood sampling is a fundamental computational problem that is at the heart of algorithms for node representation learning. Several works have presented algorithms for learning discrete node embeddings where graph nodes are represented by discrete features such as attributes of neighborhood nodes. Discrete embeddings offer several advantages compared to continuous word2vec-like node embeddings: ease of computation, scalability, and interpretability. We present LoNe Sampler, a suite of algorithms for generating discrete node embeddings by Local Neighborhood Sampling, and address two shortcomings of previous work. First, our algorithms have rigorously understood theoretical properties. Second, we show how to generate approximate explicit vector maps that avoid the expensive computation of a Gram matrix for the training of a kernel model. Experiments on benchmark datasets confirm the theoretical findings and demonstrate the advantages of the proposed methods.
This survey draws a broad panoramic picture of the state of the art (SOTA) in research on generative methods for the analysis of social media data. It fills a gap, as existing survey articles are either limited in scope or dated. We include two important aspects that are currently gaining importance in the mining and modeling of social media: dynamics and networks. Social dynamics are important for understanding the spread of influence or diseases, the formation of friendships, and so on; networks, on the other hand, can capture various complex relationships, providing additional insight and identifying important patterns that would otherwise go unnoticed.