Many interesting problems in machine learning are being revisited with new deep learning tools. For graph-based semi-supervised learning, a recent important development is graph convolutional networks (GCNs), which nicely integrate local vertex features and graph topology in the convolutional layers. Although the GCN model compares favorably with other state-of-the-art methods, its mechanisms are not clear and it still requires a considerable amount of labeled data for validation and model selection. In this paper, we develop deeper insights into the GCN model and address its fundamental limits. First, we show that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of over-smoothing with many convolutional layers. Second, to overcome the limits of the GCN model with shallow architectures, we propose both co-training and self-training approaches to train GCNs. Our approaches significantly improve GCNs in learning with very few labels, and exempt them from requiring additional labels for validation. Extensive experiments on benchmarks have verified our theory and proposals.
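To make the Laplacian-smoothing view concrete, the following minimal NumPy sketch (an illustration of the abstract's claim, not the authors' code) applies the renormalized GCN propagation A_hat = D^{-1/2}(A + I)D^{-1/2} repeatedly; as the number of smoothing steps grows, the feature rows of connected nodes converge toward one another, which is the over-smoothing concern raised above.

```python
import numpy as np

def gcn_smoothing(adj, features, steps):
    """Repeated GCN propagation, viewed as Laplacian smoothing.

    adj: (n, n) binary adjacency matrix; features: (n, d) node features.
    Each step multiplies by D^{-1/2} (A + I) D^{-1/2}, the renormalized
    adjacency used by GCNs.
    """
    a_tilde = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))  # degree^(-1/2)
    a_hat = a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    h = features
    for _ in range(steps):
        h = a_hat @ h                                # one smoothing step
    return h

# Toy example: two triangles joined by one edge. A few steps smooth
# features within communities; many steps make all rows nearly
# proportional to one another (over-smoothing).
adj = np.array([[0, 1, 1, 0, 0, 0], [1, 0, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0],
                [0, 0, 1, 0, 1, 1], [0, 0, 0, 1, 0, 1], [0, 0, 0, 1, 1, 0]])
x = np.random.rand(6, 4)
print(np.std(gcn_smoothing(adj, x, 1), axis=0))   # spread still large
print(np.std(gcn_smoothing(adj, x, 50), axis=0))  # spread collapses
```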
Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.
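The collapsed model amounts to a fixed filter S^K applied once to the features, followed by an ordinary linear classifier. A minimal sketch of that two-step pipeline (an illustration of the idea; the scikit-learn classifier is a stand-in of ours, not part of the paper's released code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sgc_features(adj, x, k):
    """Precompute S^k X, where S is the renormalized adjacency.

    Because the filter is fixed and has no parameters, this runs once
    up front; 'training' then reduces to fitting a linear classifier
    on the smoothed features.
    """
    a_tilde = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))
    s = a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    for _ in range(k):
        x = s @ x
    return x

# Usage (adj, x, labels, train_idx assumed given):
# x_smooth = sgc_features(adj, x, k=2)
# clf = LogisticRegression(max_iter=1000)
# clf.fit(x_smooth[train_idx], labels[train_idx])
```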
Graph convolutional networks (GCNs) are a powerful deep learning approach for graph-structured data. Recently, GCNs and subsequent variants have shown superior performance in various application areas on real-world datasets. Despite their success, most of the current GCN models are shallow, due to the over-smoothing problem. In this paper, we study the problem of designing and analyzing deep graph convolutional networks. We propose GCNII, an extension of the vanilla GCN model with two simple yet effective techniques: initial residual and identity mapping. We provide theoretical and empirical evidence that the two techniques effectively relieve the problem of over-smoothing. Our experiments show that the deep GCNII model outperforms the state-of-the-art methods on various semi- and full-supervised tasks. Code is available at https://github.com/chennnM/GCNII.
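A forward-pass sketch of one GCNII layer as described above (a transcription of the update rule for illustration, not the released code); H0 is the, possibly linearly projected, input representation, and beta is typically decayed with depth:

```python
import numpy as np

def gcnii_layer(a_hat, h, h0, w, alpha, beta):
    """One GCNII layer:

        H' = ReLU( ((1-alpha) * A_hat @ H + alpha * H0)
                   @ ((1-beta) * I + beta * W) )

    Initial residual: every layer mixes in the input representation H0.
    Identity mapping: the weight matrix is shrunk toward the identity
    (the paper sets beta = log(lambda / l + 1) at depth l), so a deep
    stack behaves like smoothing plus a small learned correction.
    """
    support = (1 - alpha) * (a_hat @ h) + alpha * h0    # initial residual
    w_eff = (1 - beta) * np.eye(w.shape[0]) + beta * w  # identity mapping
    return np.maximum(support @ w_eff, 0.0)             # ReLU
```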
As a powerful tool for modeling complex relationships, hypergraphs have gained popularity in the graph learning community. However, commonly used frameworks in deep hypergraph learning focus on hypergraphs with edge-independent vertex weights (EIVW), without considering hypergraphs with edge-dependent vertex weights (EDVW), which have more modeling power. To fill this gap, we propose General Hypergraph Spectral Convolution (GHSC), a general learning framework that can not only handle both EDVW and EIVW hypergraphs, but, more importantly, can explicitly utilize existing powerful graph convolutional neural networks (GCNNs) in a theoretically grounded way, thereby largely easing the design of hypergraph neural networks. In this framework, the graph Laplacian of a given undirected GCNN is replaced by a unified hypergraph Laplacian that incorporates vertex weight information from a random-walk perspective by equating our defined generalized hypergraphs with simple undirected graphs. Extensive experiments from various domains, including social network analysis, visual object classification, and protein learning, demonstrate the state-of-the-art performance of the proposed framework.
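For reference, the classical edge-independent (EIVW) normalized hypergraph Laplacian of Zhou et al., which the unified Laplacian above generalizes to edge-dependent vertex weights, can be sketched as follows (an illustration of the standard construction, not the GHSC code):

```python
import numpy as np

def hypergraph_laplacian(incidence, edge_weights):
    """Normalized hypergraph Laplacian for the EIVW case:

        L = I - D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2}

    incidence: (n_vertices, n_edges) 0/1 matrix H;
    edge_weights: (n_edges,) positive hyperedge weights W.
    """
    w = np.diag(edge_weights)
    d_v = incidence @ edge_weights        # vertex degrees
    d_e = incidence.sum(axis=0)           # hyperedge degrees
    dv_inv_sqrt = np.diag(1.0 / np.sqrt(d_v))
    de_inv = np.diag(1.0 / d_e)
    theta = dv_inv_sqrt @ incidence @ w @ de_inv @ incidence.T @ dv_inv_sqrt
    return np.eye(incidence.shape[0]) - theta
```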
In the field of semi-supervised learning, graph convolutional networks (GCNs), as a variant model of GNNs, have achieved promising results on non-Euclidean data by introducing convolution into GNNs. However, GCNs and their variant models cannot safely use the information of risky unlabeled data, which degrades the performance of semi-supervised learning. Therefore, we propose a safe GCN framework (Safe-GCN) to improve the learning performance. In Safe-GCN, we design an iterative process to label the unlabeled data. In each iteration, a GCN and its supervised version (S-GCN) are learned to find the unlabeled data with high confidence. The high-confidence unlabeled data and their pseudo-labels are then added to the label set. Finally, both the added unlabeled data and the labeled data are used to train an S-GCN, which can safely explore the risky unlabeled data and enables the safe use of an abundance of unlabeled data. The performance of Safe-GCN is evaluated on three well-known citation network datasets, and the obtained results demonstrate the effectiveness of this framework against several graph-based semi-supervised learning methods.
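The iterative labeling process can be pictured with the following schematic self-training loop (a paraphrase of the abstract; `train_model` is a placeholder for fitting a GCN/S-GCN, and the agreement between the two models is collapsed into a single confidence threshold here):

```python
import numpy as np

def self_training_loop(x, y, labeled_idx, unlabeled_idx,
                       train_model, n_iters=5, threshold=0.95):
    """Iterative pseudo-labeling in the spirit of Safe-GCN.

    train_model(x, y, idx) -> model with .predict_proba(x); placeholder.
    Each round, nodes predicted with confidence >= threshold are moved
    into the labeled set together with their pseudo-labels.
    """
    labeled, unlabeled, y = list(labeled_idx), list(unlabeled_idx), y.copy()
    for _ in range(n_iters):
        model = train_model(x, y, labeled)
        proba = model.predict_proba(x[unlabeled])  # rows follow `unlabeled`
        picked = []
        for u, p in zip(unlabeled, proba):
            if p.max() >= threshold:
                y[u] = p.argmax()                  # assign pseudo-label
                picked.append(u)
        labeled += picked
        unlabeled = [u for u in unlabeled if u not in set(picked)]
    # Final model trained on the enlarged, confidently labeled set.
    return train_model(x, y, labeled)
```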
Graphs are a universal data structure widely used to organize data in the real world. Various real networks, such as transportation, social, and academic networks, can be represented by graphs. Recent years have witnessed the rapid development of embedding the vertices of a network into a low-dimensional vector space, known as network representation learning. Representation learning can facilitate the design of new algorithms on graph data. In this survey, we conduct a comprehensive review of the current literature on network representation learning. Existing algorithms can be divided into three groups: shallow embedding models, heterogeneous network embedding models, and graph neural network models. We review the state-of-the-art algorithms for each category and discuss the essential differences between them. One advantage of the survey is that we systematically study the theoretical foundations underlying the different categories of algorithms, which offers deep insights for better understanding the development of the field of network representation learning.
Graph convolutional networks (GCNs) have proven to be a powerful concept that has been successfully applied to a large variety of tasks across many domains over the past years. In this work, we study the theory that paved the way for the definition of GCNs, including related parts of classical graph theory. We also discuss and experimentally demonstrate key properties and limitations of GCNs, such as those caused by the statistical dependency of samples introduced by the edges of the graph, which leads to biased estimates of the full gradient. Another limitation we discuss is the negative impact of mini-batch sampling on model performance. As a consequence, gradients are computed over the whole dataset during a parameter update, undermining scalability to large graphs. To account for this, we research alternative methods that allow only a subset of the data to be sampled at each iteration while still safely learning good parameters. We reproduce the results reported in the work of Kipf et al. and propose an implementation inspired by SIGN, a sampling-free mini-batch method. Eventually, we compare the two implementations on benchmark datasets, showing that they are comparable in terms of prediction accuracy on the task of semi-supervised node classification.
Graph convolution is the core of most Graph Neural Networks (GNNs) and usually approximated by message passing between direct (one-hop) neighbors. In this work, we remove the restriction of using only the direct neighbors by introducing a powerful, yet spatially localized graph convolution: Graph diffusion convolution (GDC). GDC leverages generalized graph diffusion, examples of which are the heat kernel and personalized PageRank. It alleviates the problem of noisy and often arbitrarily defined edges in real graphs. We show that GDC is closely related to spectral-based models and thus combines the strengths of both spatial (message passing) and spectral methods. We demonstrate that replacing message passing with graph diffusion convolution consistently leads to significant performance improvements across a wide range of models on both supervised and unsupervised tasks and a variety of datasets. Furthermore, GDC is not limited to GNNs but can trivially be combined with any graph-based model or algorithm (e.g. spectral clustering) without requiring any changes to the latter or affecting its computational complexity. Our implementation is available online.
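As a concrete instance, the personalized-PageRank variant of the diffusion matrix can be computed in closed form and then sparsified (a dense-matrix sketch of the construction; the paper also covers the heat kernel and sparse approximations):

```python
import numpy as np

def gdc_ppr(adj, alpha=0.15, eps=1e-4):
    """Graph diffusion convolution matrix with PPR coefficients:

        S = alpha * (I - (1 - alpha) * T)^{-1},

    with T the symmetrically normalized transition matrix (self-loops
    added). Entries below eps are pruned so S stays sparse; spreading
    mass over the neighborhood smooths out noisy or arbitrary edges.
    """
    n = adj.shape[0]
    a_tilde = adj + np.eye(n)
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))
    t = a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    s = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * t)
    s[s < eps] = 0.0                     # sparsification step
    return s

# S then replaces the adjacency in message passing, e.g. H' = S @ H @ W.
```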
We present a semi-supervised learning framework based on graph embeddings. Given a graph between instances, we train an embedding for each instance to jointly predict the class label and the neighborhood context in the graph. We develop both transductive and inductive variants of our method. In the transductive variant of our method, the class labels are determined by both the learned embeddings and input feature vectors, while in the inductive variant, the embeddings are defined as a parametric function of the feature vectors, so predictions can be made on instances not seen during training. On a large and diverse set of benchmark tasks, including text classification, distantly supervised entity extraction, and entity classification, we show improved performance over many of the existing models.
Graph convolution is a recent, scalable method for performing deep feature learning on attributed graphs by aggregating local node information over multiple layers. Such layers only consider attribute information of node neighbors in the forward model and do not incorporate knowledge of the global network structure into the learning task. In particular, the modularity function provides convenient information about the community structure of a network. In this work, we investigate the effect on the quality of learned representations of incorporating a community-structure-preserving objective into the graph convolutional model. We incorporate the objective in two ways: as an explicit regularization term in the cost function at the output layer, and as an additional loss term computed via an auxiliary layer. We report the effect of the community-structure-preserving terms in graph convolutional architectures. Experimental evaluation on two attributed networks shows that the incorporation of the community-preserving objective improves semi-supervised node classification accuracy in the sparse-label regime.
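One way to picture such a community-preserving term is a modularity-based regularizer added to the classification loss (a hedged sketch of the general idea; the paper's exact objective may differ):

```python
import numpy as np

def modularity_regularizer(adj, h):
    """Modularity-style penalty on node representations h of shape (n, d).

    B = A - d d^T / (2m) is the modularity matrix; tr(H^T B H) grows
    when densely connected nodes have similar rows, so adding the
    negated, scaled trace to the loss rewards community structure.
    """
    deg = adj.sum(axis=1)
    two_m = deg.sum()
    b = adj - np.outer(deg, deg) / two_m   # modularity matrix
    return -np.trace(h.T @ b @ h) / two_m  # minimize => preserve communities
```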
Web-based interactions can frequently be represented by attributed graphs, and node clustering in such graphs has recently received much attention. Multiple efforts have successfully applied graph convolutional networks (GCNs), though with some limits on accuracy, as GCNs have been shown to suffer from the over-smoothing problem. While other methods, notably those based on Laplacian smoothing, have reported better accuracy, a fundamental limitation of all this work is the lack of scalability. This paper addresses this open problem by relating Laplacian smoothing to generalized PageRank and applying a random-walk-based algorithm as a scalable graph filter. This forms the basis of our scalable deep clustering algorithm, RWSL, where, through a self-supervised mini-batch training mechanism, we simultaneously optimize a deep neural network for sample cluster assignment distributions and an autoencoder for cluster-oriented embeddings. Using 6 real-world datasets and 6 clustering metrics, we show that RWSL achieves results on par with several recent baselines. Most notably, we show that RWSL, unlike all other deep clustering frameworks, can continue to scale beyond graphs with more than one million nodes, i.e., handle web-scale graphs. We also demonstrate how RWSL can perform node clustering on a graph with 1.8 billion edges using only a single GPU.
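The scalable filter at the heart of this approach can be sketched as a power-iteration approximation of personalized-PageRank smoothing of the features (an illustration of the random-walk filter idea, not the RWSL implementation):

```python
import numpy as np

def ppr_filter(adj, x, alpha=0.15, n_iters=10):
    """Random-walk feature smoothing via power iteration:

        H_{t+1} = (1 - alpha) * D^{-1} A @ H_t + alpha * X

    Assumes no isolated nodes. Each iteration only touches the edges,
    so the filter scales to very large graphs and suits mini-batching.
    """
    d_inv = 1.0 / adj.sum(axis=1)
    p = adj * d_inv[:, None]               # row-stochastic transition matrix
    h = x.copy()
    for _ in range(n_iters):
        h = (1 - alpha) * (p @ h) + alpha * x
    return h
```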
This paper presents FLGC, a simple yet effective fully linear graph convolutional network for semi-supervised and unsupervised learning. Instead of relying on gradient descent, FLGC is based on computing a globally optimal closed-form solution with a decoupled procedure. We show that (1) FLGC is powerful in dealing with both graph-structured data and regular data, (2) training graph convolutional models with a closed-form solution improves computational efficiency without degrading performance, and (3) FLGC acts as a natural generalization of classical linear models to the non-Euclidean domain, e.g., ridge regression and subspace clustering. Furthermore, we implement semi-supervised FLGC and unsupervised FLGC by introducing an initial-residual strategy, which enables FLGC to aggregate long-range neighborhoods and alleviate over-smoothing. We compare our semi-supervised and unsupervised FLGC against many state-of-the-art methods on a variety of classification and clustering benchmarks, showing that the proposed FLGC models consistently outperform previous methods in terms of accuracy, robustness, and learning efficiency. The core code of our FLGC is released at https://github.com/angrycai/flgc.
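A hedged sketch of what such a fully linear, closed-form model can look like (a reading of the abstract, not the released FLGC code): propagate features with an initial-residual mix, then solve a ridge-regression classifier in closed form.

```python
import numpy as np

def flgc_like_fit(adj, x, y_onehot, labeled_idx, k=2, lam=1e-2, alpha=0.1):
    """Propagate, then solve W = (Z_L^T Z_L + lam I)^{-1} Z_L^T Y_L.

    The initial-residual mix (alpha) keeps a fraction of the raw
    features at every step, aggregating long-range neighborhoods while
    alleviating over-smoothing; no gradient descent is involved.
    """
    a_tilde = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))
    s = a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    z = x.copy()
    for _ in range(k):
        z = (1 - alpha) * (s @ z) + alpha * x       # initial residual
    z_l, y_l = z[labeled_idx], y_onehot[labeled_idx]
    w = np.linalg.solve(z_l.T @ z_l + lam * np.eye(z.shape[1]), z_l.T @ y_l)
    return z @ w                                    # class scores for all nodes
```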
Graph neural networks have shown significant success in the field of graph representation learning. Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations. Nevertheless, one layer of these neighborhood aggregation methods only considers immediate neighbors, and the performance decreases when going deeper to enable larger receptive fields. Several recent studies attribute this performance deterioration to the over-smoothing issue, which states that repeated propagation makes node representations of different classes indistinguishable. In this work, we study this observation systematically and develop new insights towards deeper graph neural networks. First, we provide a systematic analysis of this issue and argue that the key factor compromising the performance significantly is the entanglement of representation transformation and propagation in current graph convolution operations. After decoupling these two operations, deeper graph neural networks can be used to learn graph node representations from larger receptive fields. We further provide a theoretical analysis of the above observation when building very deep models, which can serve as a rigorous and gentle description of the over-smoothing issue. Based on our theoretical and empirical analysis, we propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields. A set of experiments on citation, co-authorship, and co-purchase datasets have confirmed our analysis and insights and demonstrated the superiority of our proposed methods.
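The decoupling argument can be summarized in a forward-only sketch (an illustration of ours; DAGNN itself learns adaptive per-hop scores rather than the naive mean used here):

```python
import numpy as np

def decoupled_forward(a_hat, x, mlp, k):
    """Decouple transformation from propagation.

    1. Transform: z = mlp(x), graph-independent (mlp is a placeholder
       callable mapping (n, d_in) -> (n, d_out)).
    2. Propagate: collect z, A_hat z, ..., A_hat^k z, i.e. views from
       receptive fields of growing radius, with no weights in between.
    3. Combine: a simple mean here; DAGNN instead learns adaptive
       scores so each node can weight its hops differently.
    """
    z = mlp(x)                            # transformation
    hops = [z]
    for _ in range(k):
        hops.append(a_hat @ hops[-1])     # parameter-free propagation
    return np.mean(hops, axis=0)          # naive combination
```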
Graph neural networks (GNNs) have shown great power in learning on attributed graphs. However, it is still a challenge for GNNs to utilize information faraway from the source node. Moreover, general GNNs require graph attributes as input, so they cannot be applied to plain graphs. In this paper, we propose new models named G-GNNs (Global information for GNNs) to address the above limitations. First, the global structure and attribute features of each node are obtained via unsupervised pre-training, which preserves the global information associated with the node. Then, using the global features and the original network attributes, we propose a parallel GNN framework to learn different aspects of these features. The proposed learning methods can be applied to both plain graphs and attributed graphs. Extensive experiments show that G-GNNs can outperform other state-of-the-art models on three standard evaluation graphs. In particular, our methods establish new benchmark records on Cora (84.31%) and PubMed (80.95%) for learning on attributed graphs.
Graph neural networks (GNNs) have achieved gains in representation learning for various machine learning tasks. However, most existing GNNs that apply neighborhood aggregation usually perform poorly on heterophilic graphs, where neighboring nodes belong to different classes. In this paper, we show that in typical heterophilic graphs, the edges may be directed, and whether the edges are treated as directed or undirected also greatly affects the performance of GNN models. Furthermore, due to the limitation of heterophily, messages from similar nodes beyond the local neighborhood are very beneficial to a node. These observations motivate us to develop a model that adaptively learns the directionality of the graph and exploits the underlying long-distance correlations between nodes. We first generalize the graph Laplacian to digraphs based on a proposed feature-aware PageRank algorithm, which simultaneously considers the graph directionality and the long-distance feature similarity between nodes. The digraph Laplacian then defines a graph propagation matrix, leading to a model called DiglacianGCN. Based on this, we further exploit node proximity measured by commute times between nodes, in order to preserve the long-distance correlations of nodes at the topology level. Extensive experiments on ten datasets with different levels of homophily demonstrate the effectiveness of our method over existing solutions on the node classification task.
Admittedly, graph convolutional networks (GCNs) have achieved excellent results on graph datasets such as social networks, citation networks, etc. However, the softmax used as the decision layer in these frameworks is optimized by gradient descent over thousands of iterations. Moreover, since the internal distribution of graph nodes is ignored, the decision layer may lead to unsatisfactory performance in semi-supervised learning with little label support. To address the cited problems, we propose a novel graph model with a non-gradient decision layer for graph mining. First, manifold learning is unified with label local-structure preservation to capture the topological information of the nodes. Moreover, owing to the non-gradient property, a closed-form solution is adopted as the decision layer of the GCN. In particular, a joint optimization method is designed for this graph model, which greatly accelerates the convergence of the model. Finally, extensive experiments show that the proposed model has achieved state-of-the-art performance compared with current models.
Graph convolutional networks (GCNs), similarly to convolutional neural networks (CNNs), are typically based on two main operations: spatial and pointwise convolutions. In the context of GCNs, unlike CNNs, a predetermined spatial operator based on the graph Laplacian is often chosen, allowing only the pointwise operations to be learned. However, learning a meaningful spatial operator is critical for developing more expressive GCNs and improving their performance. In this paper, we propose PathGCN, a novel approach to learning the spatial operator from random paths on the graph. We analyze the convergence of our method and its difference from existing GCNs. Furthermore, we discuss several options for combining our learned spatial operator with pointwise convolutions. Our extensive experiments on numerous datasets suggest that by properly learning both the spatial and pointwise convolutions, phenomena like over-smoothing can be inherently avoided, and new state-of-the-art performance is achieved.
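A rough sketch of what aggregating features along random paths can look like (a loose illustration of the idea only; PathGCN's actual operator, path sampling, and training procedure differ in detail, and the per-step weights below would be learned):

```python
import numpy as np

def random_path_aggregate(adj, x, weights, rng):
    """Combine, for each node, the features met along one random walk.

    weights: (path_len,) per-step coefficients, standing in for a
    learned spatial operator over path positions; x: (n, d) floats.
    """
    n = adj.shape[0]
    neighbors = [np.flatnonzero(adj[i]) for i in range(n)]
    out = np.zeros(x.shape)
    for i in range(n):
        v = i
        for k in range(len(weights)):
            out[i] += weights[k] * x[v]   # weighted feature at step k
            if len(neighbors[v]) == 0:
                break                     # dead end: stop the walk
            v = rng.choice(neighbors[v])  # next step of the random walk
    return out

# In practice one would average over several sampled paths per node:
# out = random_path_aggregate(adj, x, np.array([0.5, 0.3, 0.2]),
#                             np.random.default_rng(0))
```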
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. The design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which emerged as a de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair experimental evaluation protocol and choose a proper collection of datasets. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions, which shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an efficient tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
This paper studies the problem of cross-network node classification, to overcome the insufficiency of labeled data in a single network. It aims to leverage the label information in a partially labeled source network to help with node classification in a completely unlabeled or partially labeled target network. Existing single-network learning methods cannot solve this problem due to the domain shift across networks. Some multi-network learning methods heavily rely on the existence of cross-network connections and are thus inapplicable to this problem. To address this problem, we propose a novel graph transfer learning framework, ADAGCN, by leveraging the techniques of adversarial domain adaptation and graph convolution. It consists of two components: a semi-supervised learning component and an adversarial domain adaptation component. The former aims to learn class-discriminative node representations with the given label information of the source and target networks, while the latter contributes to mitigating the distribution divergence between the source and target domains to facilitate knowledge transfer. Extensive empirical evaluations on real-world datasets show that ADAGCN can successfully transfer class information under a low label rate on the source network and a substantial divergence between the source and target domains. The source code for reproducing the experimental results is available at https://github.com/daiquanyu/adagcn.