Admittedly, graph convolutional networks (GCNs) have achieved excellent results on graph datasets such as social networks and citation networks. However, the softmax decision layer in these frameworks is optimized by gradient descent over thousands of iterations. Moreover, because it ignores the internal distribution of graph nodes, the decision layer may yield unsatisfactory performance in semi-supervised learning with few supporting labels. To address these problems, we propose a novel graph model with a non-gradient decision layer for graph mining. First, manifold learning is unified with label local-structure preservation to capture the topological information of the nodes. Moreover, owing to the non-gradient property, a closed-form solution is employed as the decision layer of the GCN. In particular, a joint optimization method is designed for this graph model, which greatly accelerates its convergence. Finally, extensive experiments show that the proposed model achieves state-of-the-art performance in comparison with current models.
In the field of semi-supervised learning, graph convolutional networks (GCNs), as a variant of GNNs, achieve promising results on non-Euclidean data by introducing convolution into GNNs. However, GCNs and their variants cannot safely use the information of risky unlabeled data, which degrades the performance of semi-supervised learning. Therefore, we propose a safe GCN framework (Safe-GCN) to improve learning performance. In Safe-GCN, we design an iterative process to label the unlabeled data. In each iteration, a GCN and its supervised version (S-GCN) are learned to find unlabeled data with high confidence. The high-confidence unlabeled data and their pseudo-labels are then added to the label set. Finally, both the added unlabeled data and the originally labeled data are used to train the S-GCN, which can safely explore risky unlabeled data and enables safe use of large amounts of unlabeled data. The performance of Safe-GCN is evaluated on three well-known citation network datasets, and the obtained results demonstrate the effectiveness of this framework over several graph-based semi-supervised learning methods.
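A minimal sketch of the iterative pseudo-labeling loop described above. The GCN/S-GCN pair is stood in for by a single scikit-learn classifier, and the confidence threshold `tau` and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def safe_pseudo_label(X, y, labeled_idx, unlabeled_idx, n_iters=5, tau=0.95):
    """Iteratively move high-confidence unlabeled nodes into the labeled set."""
    y = np.array(y)
    labeled_idx = list(labeled_idx)
    unlabeled_idx = list(unlabeled_idx)
    clf = LogisticRegression(max_iter=1000)  # stand-in for the GCN / S-GCN
    for _ in range(n_iters):
        clf.fit(X[labeled_idx], y[labeled_idx])
        if not unlabeled_idx:
            break
        proba = clf.predict_proba(X[unlabeled_idx])
        confident = np.where(proba.max(axis=1) >= tau)[0]
        if confident.size == 0:
            break
        for j in confident:
            node = unlabeled_idx[j]
            y[node] = clf.classes_[proba[j].argmax()]  # assign pseudo-label
            labeled_idx.append(node)
        keep = set(confident.tolist())
        unlabeled_idx = [n for k, n in enumerate(unlabeled_idx) if k not in keep]
    return clf, y, labeled_idx
```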
Many interesting problems in machine learning are being revisited with new deep learning tools. For graph-based semisupervised learning, a recent important development is graph convolutional networks (GCNs), which nicely integrate local vertex features and graph topology in the convolutional layers. Although the GCN model compares favorably with other state-of-the-art methods, its mechanisms are not clear and it still requires a considerable amount of labeled data for validation and model selection. In this paper, we develop deeper insights into the GCN model and address its fundamental limits. First, we show that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of oversmoothing with many convolutional layers. Second, to overcome the limits of the GCN model with shallow architectures, we propose both co-training and self-training approaches to train GCNs. Our approaches significantly improve GCNs in learning with very few labels, and exempt them from requiring additional labels for validation. Extensive experiments on benchmarks have verified our theory and proposals.
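A numerical check of the observation above: the GCN propagation step equals Laplacian smoothing with coefficient 1 on the self-loop-augmented graph. The toy adjacency matrix and features are illustrative assumptions.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 3))

A_hat = A + np.eye(4)                                    # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))

gcn_prop = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H           # GCN propagation step

L_sym = np.eye(4) - D_inv_sqrt @ A_hat @ D_inv_sqrt      # symmetric normalized Laplacian
smoothing = (np.eye(4) - 1.0 * L_sym) @ H                # Laplacian smoothing, gamma = 1

assert np.allclose(gcn_prop, smoothing)                  # the two coincide
```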
Graph neural networks (GNNs) have shown strong representation power in many graph-based tasks. In particular, decoupled GNN structures such as APPNP have become popular because of their simplicity and performance advantages. However, end-to-end training of these GNNs makes them inefficient in computation and memory consumption. To address these limitations, in this work we propose an alternating optimization framework for graph neural networks that does not require end-to-end training. Extensive experiments under different settings demonstrate that the proposed algorithm performs comparably to existing state-of-the-art algorithms while offering better computational and memory efficiency. In addition, we show that our framework can be leveraged to enhance existing decoupled GNNs.
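For reference, a sketch of the decoupled propagation used by APPNP-style GNNs mentioned above: predictions are first produced by a feature transform, then diffused by a personalized-PageRank power iteration. The values of `alpha` and `K` are illustrative choices.

```python
import numpy as np

def appnp_propagate(A_norm, H, alpha=0.1, K=10):
    """A_norm: symmetrically normalized adjacency (with self-loops); H: initial predictions."""
    Z = H
    for _ in range(K):
        Z = (1 - alpha) * (A_norm @ Z) + alpha * H  # teleport back to the initial predictions
    return Z
```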
Owing to their remarkable power in processing graph-structured data, graph convolutional networks (GCNs) have been widely applied in various fields. Typical GCNs and their variants work under the homophily assumption (i.e., nodes of the same class are prone to connect to each other) while ignoring the heterophily that exists in many real-world networks (i.e., nodes of different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining intermediate representations, which introduces noise and irrelevant information. These methods, however, do not change the propagation mechanism, which works under the homophily assumption and is a fundamental part of GCNs. This makes it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a new propagation mechanism that can automatically adapt the propagation and aggregation process according to the homophily or heterophily between node pairs. To learn the propagation process adaptively, we introduce two measurements of the homophily degree between node pairs, learned from topological and attribute information, respectively. Then we incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end architecture, enabling it to go beyond the homophily assumption. More importantly, we theoretically prove that our model can constrain the similarity of node representations according to their homophily degree. Experiments on 7 real-world datasets demonstrate that this new approach outperforms state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
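A minimal sketch of the adaptive-propagation idea: each edge gets an estimated homophily score, and messages over likely heterophilous edges are down-weighted or negated. The cosine-similarity estimate and the signed mixing rule here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def homophily_weighted_propagate(A, X):
    norms = np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    S = (X / norms) @ (X / norms).T               # attribute-based homophily estimate
    W = A * (2.0 * np.clip(S, 0.0, 1.0) - 1.0)    # in [-1, 1]: negative = heterophilous edge
    deg = np.abs(W).sum(axis=1, keepdims=True) + 1e-12
    return (W / deg) @ X                          # signed, degree-normalized aggregation
```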
This paper presents FLGC, a simple yet effective fully linear graph convolutional network for semi-supervised and unsupervised learning. Instead of using gradient descent, FLGC computes a globally optimal closed-form solution with a decoupled procedure. We show that (1) FLGC is powerful in handling both graph-structured data and regular data, (2) training graph convolution models with a closed-form solution improves computational efficiency without degrading performance, and (3) FLGC acts as a natural generalization of classical linear models, such as ridge regression and subspace clustering, to non-Euclidean domains. Furthermore, we implement semi-supervised FLGC and unsupervised FLGC by introducing an initial-residual strategy, enabling FLGC to aggregate long-range neighborhoods and alleviate over-smoothing. We compare our semi-supervised and unsupervised FLGC with many state-of-the-art methods on a variety of classification and clustering benchmarks, showing that the proposed FLGC models consistently outperform previous methods in terms of accuracy, robustness, and learning efficiency. The core code of our FLGC is released at https://github.com/angrycai/flgc.
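A hedged sketch of a closed-form (non-gradient) graph classifier in the spirit of FLGC: propagate features with an initial-residual smoother, then fit a ridge-regression decision layer analytically. The symbols `alpha`, `K`, and `lam` are illustrative, not the paper's exact objective.

```python
import numpy as np

def closed_form_graph_classifier(A_norm, X, Y_onehot, labeled, alpha=0.1, K=8, lam=1e-2):
    Z = X
    for _ in range(K):
        Z = (1 - alpha) * (A_norm @ Z) + alpha * X   # initial-residual propagation
    Zl = Z[labeled]
    # ridge solution: W = (Zl^T Zl + lam * I)^{-1} Zl^T Y  -- no gradient descent
    W = np.linalg.solve(Zl.T @ Zl + lam * np.eye(Z.shape[1]), Zl.T @ Y_onehot[labeled])
    return (Z @ W).argmax(axis=1)                     # predictions for all nodes
```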
Graph convolutional networks (GCNs) have shown remarkable potential in exploring graph representations. However, the GCN aggregation mechanism fails to generalize to networks with heterophily, where most nodes have neighbors from different classes, a situation that commonly exists in real-world networks. To make the propagation and aggregation mechanism of GCNs suitable for both homophily and heterophily (or even their mixture), we introduce block modeling into the GCN framework so that it can realize "block-guided classified aggregation" and automatically learn the corresponding aggregation rules for neighbors of different classes. By incorporating block modeling into the aggregation process, GCNs can discriminately aggregate information from homophilous and heterophilous neighbors according to their homophily degree. We compared our algorithm with state-of-the-art methods that address the heterophily problem. Empirical results demonstrate the superiority of our new approach over existing methods on heterophilous datasets while maintaining competitive performance on homophilous datasets.
Graph convolutional networks (GCNs) are a powerful deep learning approach for graph-structured data. Recently, GCNs and subsequent variants have shown superior performance in various application areas on real-world datasets. Despite their success, most of the current GCN models are shallow, due to the over-smoothing problem. In this paper, we study the problem of designing and analyzing deep graph convolutional networks. We propose the GCNII, an extension of the vanilla GCN model with two simple yet effective techniques: Initial residual and Identity mapping. We provide theoretical and empirical evidence that the two techniques effectively relieve the problem of over-smoothing. Our experiments show that the deep GCNII model outperforms the state-of-the-art methods on various semi- and full-supervised tasks. Code is available at https://github.com/chennnM/GCNII.
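A sketch of a single GCNII layer combining its two techniques: the initial residual (mixing in the first-layer representation H0) and identity mapping (mixing the weight matrix with the identity). The values of `alpha` and `beta` are illustrative; the paper suggests decaying beta with depth, e.g. beta_l = log(lambda/l + 1).

```python
import numpy as np

def gcnii_layer(A_norm, H, H0, W, alpha=0.1, beta=0.5):
    support = (1 - alpha) * (A_norm @ H) + alpha * H0               # initial residual
    out = support @ ((1 - beta) * np.eye(W.shape[0]) + beta * W)    # identity mapping
    return np.maximum(out, 0.0)                                     # ReLU
```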
Graph neural networks (GNNs) demonstrate superior performance for semi-supervised node classification on graphs, as a result of their ability to exploit node features and topological information simultaneously. However, most GNNs implicitly assume that the labels of nodes and their neighbors in a graph are the same or consistent, which does not hold in heterophilic graphs, where the labels of linked nodes are likely to differ. Hence, when the topology is non-informative for label prediction, ordinary GNNs may perform significantly worse than simply applying multi-layer perceptrons (MLPs) on each node. To tackle the above problem, we propose a new $p$-Laplacian-based GNN model, termed $^p$GNN, whose message-passing mechanism is derived from a discrete regularization framework and can be theoretically explained as an approximation of polynomial graph filters defined on the spectral domain of $p$-Laplacians. The spectral analysis shows that the new message-passing mechanism works simultaneously as a low-pass and a high-pass filter, making $^p$GNNs effective on both homophilic and heterophilic graphs. Empirical studies on real-world and synthetic datasets validate our findings and demonstrate that $^p$GNNs significantly outperform several state-of-the-art GNN architectures on heterophilic benchmarks while achieving competitive performance on homophilic benchmarks. Moreover, $^p$GNNs can adaptively learn aggregation weights and are robust to noisy edges.
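As a concrete reading of the regularization view (a sketch using the standard discrete $p$-Dirichlet form, not necessarily the paper's exact normalization), the message passing can be seen as minimizing

$$ \min_{F}\;\; \frac{1}{2}\sum_{i,j} w_{ij}\left\| \frac{f_i}{\sqrt{d_i}} - \frac{f_j}{\sqrt{d_j}} \right\|_2^{p} \;+\; \mu \left\| F - X \right\|_F^{2}, $$

where $w_{ij}$ are edge weights, $d_i$ node degrees, and $X$ the input features. Setting $p = 2$ recovers ordinary Laplacian smoothing (a pure low-pass filter), while $p < 2$ penalizes large feature gradients less aggressively and thus tolerates the sharp, high-frequency transitions typical of heterophilic graphs.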
Node classification on attributed networks is a semi-supervised task that is crucial for network analysis. By decoupling the two key operations in graph convolutional networks (GCNs), namely feature transformation and neighborhood aggregation, some recent works on decoupled GCNs allow information to propagate deeper and achieve advanced performance. However, they follow the traditional structure-aware propagation strategy of GCNs, which makes it hard to capture the attribute correlation of nodes and leaves them sensitive to the structural noise described by edges whose two endpoints belong to different classes. To address these issues, we propose a new method called Propagation with Adaptive Mask then Training (PAMT). The key idea is to integrate an attribute-similarity mask into the structure-aware propagation process. In this way, PAMT can preserve the attribute correlation of adjacent nodes during propagation and effectively reduce the influence of structural noise. Moreover, we develop an iterative refinement mechanism to update the similarity mask during training, which improves the training performance. Extensive experiments on four real-world datasets demonstrate the superior performance and robustness of PAMT.
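A minimal sketch of attribute-similarity-masked propagation in the spirit of PAMT: edge weights are modulated by a feature-similarity mask so that likely inter-class ("noisy") edges contribute less. The cosine mask is an illustrative assumption, not the paper's exact construction.

```python
import numpy as np

def masked_propagate(A, X, K=4):
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    M = np.clip(Xn @ Xn.T, 0.0, 1.0)           # attribute-similarity mask
    Am = A * M                                  # down-weight dissimilar endpoints
    P = Am / (Am.sum(axis=1, keepdims=True) + 1e-12)  # row-normalized masked adjacency
    H = X
    for _ in range(K):
        H = P @ H                               # structure-aware, mask-modulated propagation
    return H
```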
Although graph representation learning (GRL) has made significant progress, it remains a challenge to extract and embed the rich topological-structure and feature information in an adequate way. Most existing methods focus on local structure and fail to fully incorporate the global topological structure. To this end, we propose a novel structure-preserving graph representation learning (SPGRL) method to fully capture the structural information of graphs. Specifically, to reduce the uncertainty and misinformation of the original graph, we construct a feature graph as a complementary view via the k-nearest-neighbor method. The feature graph can be used for node-level contrasting to capture local relations. Furthermore, we preserve global topological-structure information by maximizing the mutual information (MI) between the whole graph and the feature embeddings, which theoretically reduces to exchanging the feature embeddings of the feature graph and the original graph to reconstruct themselves. Extensive experiments show that our method achieves quite superior performance on semi-supervised node classification tasks and excellent robustness under noise perturbation on the graph structure or node features.
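A sketch of the complementary-view construction mentioned above: build a kNN feature graph from node attributes. The value of `k` and the cosine metric are illustrative choices.

```python
import numpy as np

def knn_feature_graph(X, k=5):
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = Xn @ Xn.T                               # cosine similarity between nodes
    np.fill_diagonal(S, -np.inf)                # exclude self-similarity
    A = np.zeros_like(S)
    nn = np.argsort(-S, axis=1)[:, :k]          # top-k most similar nodes
    rows = np.repeat(np.arange(X.shape[0]), k)
    A[rows, nn.ravel()] = 1.0
    return np.maximum(A, A.T)                   # symmetrize the adjacency
```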
Inspired by the extensive success of deep learning, graph neural networks (GNNs) have been proposed to learn expressive node representations and have demonstrated promising performance in various graph learning tasks. However, existing efforts mainly focus on the conventional semi-supervised setting where relatively abundant gold-labeled nodes are provided. This is often impractical, since data labeling is laborious and requires intensive domain knowledge, especially when considering the heterogeneity of graph-structured data. Under the few-shot semi-supervised setting, the performance of most existing GNNs is inevitably undermined by over-fitting and over-smoothing, largely owing to the shortage of labeled data. In this paper, we propose a decoupled network architecture equipped with a novel meta-learning algorithm to solve this problem. In essence, our framework, Meta-PN, infers high-quality pseudo-labels on unlabeled nodes via a meta-learned label-propagation strategy, which effectively augments the scarce labeled data while enabling a large receptive field during training. Extensive experiments demonstrate that our approach offers easy and substantial performance gains compared with existing techniques on various benchmark datasets.
Graph neural networks have shown significant success in the field of graph representation learning. Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations. Nevertheless, one layer of these neighborhood aggregation methods only considers immediate neighbors, and the performance decreases when going deeper to enable larger receptive fields. Several recent studies attribute this performance deterioration to the over-smoothing issue, which states that repeated propagation makes node representations of different classes indistinguishable. In this work, we study this observation systematically and develop new insights towards deeper graph neural networks. First, we provide a systematical analysis on this issue and argue that the key factor compromising the performance significantly is the entanglement of representation transformation and propagation in current graph convolution operations. After decoupling these two operations, deeper graph neural networks can be used to learn graph node representations from larger receptive fields. We further provide a theoretical analysis of the above observation when building very deep models, which can serve as a rigorous and gentle description of the over-smoothing issue. Based on our theoretical and empirical analysis, we propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields. A set of experiments on citation, coauthorship, and co-purchase datasets have confirmed our analysis and insights and demonstrated the superiority of our proposed methods.
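A hedged sketch of the decoupling described above: transform features once, propagate the result K steps, then combine the per-hop outputs. DAGNN learns the hop-combination weights adaptively; here they are replaced by fixed uniform weights for illustration, and `W` is a caller-supplied stand-in for the transformation.

```python
import numpy as np

def decoupled_forward(A_norm, X, W, K=10):
    H = X @ W                                   # transformation (no propagation)
    hops = [H]
    for _ in range(K):
        hops.append(A_norm @ hops[-1])          # propagation (no transformation)
    weights = np.full(K + 1, 1.0 / (K + 1))     # stand-in for learned hop attention
    return sum(w * h for w, h in zip(weights, hops))
```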
We present a semi-supervised learning framework based on graph embeddings. Given a graph between instances, we train an embedding for each instance to jointly predict the class label and the neighborhood context in the graph. We develop both transductive and inductive variants of our method. In the transductive variant of our method, the class labels are determined by both the learned embeddings and input feature vectors, while in the inductive variant, the embeddings are defined as a parametric function of the feature vectors, so predictions can be made on instances not seen during training. On a large and diverse set of benchmark tasks, including text classification, distantly supervised entity extraction, and entity classification, we show improved performance over many of the existing models.
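A hedged sketch of the joint objective described above: one embedding per instance feeds both a label predictor and a context (neighbor) predictor, and the two losses are summed. The parameters here are random stand-ins and no training loop is shown; this illustrates the objective, not the paper's sampling scheme.

```python
import numpy as np

def joint_loss(E, W_label, W_ctx, i, y_i, ctx_j):
    """E: node embeddings; i: instance; y_i: its class; ctx_j: a sampled graph neighbor."""
    def softmax(z):
        z = z - z.max()
        p = np.exp(z)
        return p / p.sum()
    p_label = softmax(E[i] @ W_label)           # class prediction from the embedding
    p_ctx = softmax(E[i] @ W_ctx)               # context (neighbor id) prediction
    return -np.log(p_label[y_i]) - np.log(p_ctx[ctx_j])  # supervised + context terms
```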
Graph Neural Networks (GNNs) have attracted increasing attention in recent years and have achieved excellent performance in semi-supervised node classification tasks. The success of most GNNs relies on one fundamental assumption, i.e., the original graph structure data is available. However, recent studies have shown that GNNs are vulnerable to the complex underlying structure of the graph, making it necessary to learn comprehensive and robust graph structures for downstream tasks, rather than relying only on the raw graph structure. In light of this, we seek to learn optimal graph structures for downstream tasks and propose a novel framework for semi-supervised classification. Specifically, based on the structural context information of graph and node representations, we encode the complex interactions in semantics and generate semantic graphs to preserve the global structure. Moreover, we develop a novel multi-measure attention layer to optimize the similarity rather than prescribing it a priori, so that the similarity can be adaptively evaluated by integrating measures. These graphs are fused and optimized together with GNN towards semi-supervised classification objective. Extensive experiments and ablation studies on six real-world datasets clearly demonstrate the effectiveness of our proposed model and the contribution of each component.
Designing spectral convolutional networks is a challenging problem in graph learning. ChebNet, one of the early attempts, approximates the spectral graph convolutions using Chebyshev polynomials. GCN simplifies ChebNet by utilizing only the first two Chebyshev polynomials while still outperforming it on real-world datasets. GPR-GNN and BernNet demonstrate that the Monomial and Bernstein bases also outperform the Chebyshev basis in terms of learning the spectral graph convolutions. Such conclusions are counter-intuitive in the field of approximation theory, where it is established that the Chebyshev polynomial achieves the optimum convergent rate for approximating a function. In this paper, we revisit the problem of approximating the spectral graph convolutions with Chebyshev polynomials. We show that ChebNet's inferior performance is primarily due to illegal coefficients learnt by ChebNet when approximating analytic filter functions, which leads to over-fitting. We then propose ChebNetII, a new GNN model based on Chebyshev interpolation, which enhances the original Chebyshev polynomial approximation while reducing the Runge phenomenon. We conducted an extensive experimental study to demonstrate that ChebNetII can learn arbitrary graph convolutions and achieve superior performance in both full- and semi-supervised node classification tasks. Most notably, we scale ChebNetII to the billion-scale graph ogbn-papers100M, showing that spectral-based GNNs have superior performance. Our code is available at https://github.com/ivam-he/ChebNetII.
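For reference, a sketch of an order-K Chebyshev spectral filter, the basis both ChebNet and ChebNetII build on: y = sum_k w_k T_k(L_hat) x, with L_hat the (symmetric) Laplacian rescaled to [-1, 1]. The coefficients `w` here are arbitrary inputs; ChebNetII constrains them via Chebyshev interpolation, which is not shown.

```python
import numpy as np

def chebyshev_filter(L, x, w):
    """L: symmetric graph Laplacian; x: signal; w: filter coefficients (len K+1)."""
    lam_max = np.linalg.eigvalsh(L).max()
    L_hat = (2.0 / lam_max) * L - np.eye(L.shape[0])   # rescale spectrum to [-1, 1]
    T_prev, T_cur = x, L_hat @ x                       # T_0(L)x and T_1(L)x
    y = w[0] * T_prev + w[1] * T_cur
    for k in range(2, len(w)):
        T_prev, T_cur = T_cur, 2.0 * (L_hat @ T_cur) - T_prev  # Chebyshev recurrence
        y = y + w[k] * T_cur
    return y
```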
Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.
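A sketch of the simplification described above: with nonlinearities removed and weights collapsed, the model reduces to a fixed K-step low-pass filter S^K X followed by a linear classifier. The value of `K` and the logistic-regression classifier are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sgc_features(A, X, K=2):
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]  # D^-1/2 (A + I) D^-1/2
    for _ in range(K):
        X = S @ X                                           # propagation, precomputed once
    return X

# usage sketch: clf = LogisticRegression().fit(sgc_features(A, X)[train], y[train])
```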
Graph neural networks (GNNs) have shown great success in representing graph-structured data. Layer-wise graph convolution in GNNs is shown to be powerful at capturing graph topology. During this process, GNNs are usually guided by pre-defined kernels such as the Laplacian matrix, the adjacency matrix, or their variants. However, adopting a pre-defined kernel may restrict generalization across different graphs: a mismatch between the graph and the kernel will lead to sub-optimal performance. For example, GNNs that focus on low-frequency information may not achieve satisfactory performance when high-frequency information is significant for a graph, and vice versa. To solve this problem, in this paper we propose a novel framework, namely the Adaptive Kernel Graph Neural Network (AKGNN), which, in a first attempt, learns to adapt to the optimal graph kernel in a unified manner. In the proposed AKGNN, we first design a data-driven graph kernel learning mechanism, which adaptively modulates the balance between all-pass and low-pass filters by modifying the maximal eigenvalue of the graph Laplacian. Through this process, AKGNN learns the optimal threshold between high- and low-frequency signals to alleviate the generality problem. Later, we further reduce the number of parameters by a parameterization trick and enhance the expressive power by a global readout function. Extensive experiments are conducted on acknowledged benchmark datasets, and promising results demonstrate the outstanding performance of our proposed AKGNN by comparison with state-of-the-art GNNs. The source code is publicly available at: https://github.com/jumxglhf/akgnn.
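A minimal sketch of the adaptive-kernel idea: blend an all-pass filter (the identity) with a low-pass filter (the normalized adjacency) through a learnable scalar. AKGNN parameterizes this balance via a modified maximal eigenvalue of the Laplacian; that machinery is abstracted here into the single mixing weight `alpha`.

```python
import numpy as np

def adaptive_kernel_propagate(A_norm, H, alpha):
    """alpha in [0, 1]: 1 = pure all-pass (keep H), 0 = pure low-pass (smooth H)."""
    return alpha * H + (1 - alpha) * (A_norm @ H)
```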
Graph-based clustering plays an important role in clustering tasks. As graph convolutional networks (GCNs), a variant of neural networks on graph-type data, have achieved impressive performance, it is attractive to find out whether GCNs can be used to augment graph-based clustering methods on non-graph data, i.e., general data. However, given $n$ samples, graph-based clustering methods usually require at least $O(n^2)$ time to build the graph, while graph convolution needs nearly $O(n^2)$ for a dense graph and $O(|\mathcal{E}|)$ for a sparse one with $|\mathcal{E}|$ edges. In other words, both graph-based clustering and GCNs suffer from severe inefficiency problems. To tackle this problem, and to further employ GCNs to promote the capability of graph-based clustering, we propose a novel anchor-based clustering method. Since the graph structure is not provided in the general clustering scenario, we first show how to convert a non-graph dataset into a graph by introducing a generative graph model, which is then used to construct a GCN. Anchors are generated from the original data to construct a bipartite graph, so that the computational complexity of graph convolution is reduced from $O(n^2)$ and $O(|\mathcal{E}|)$ to $O(n)$. The subsequent steps of clustering can be easily designed as $O(n)$ operations. Interestingly, the anchors naturally lead to a Siamese GCN architecture. The bipartite graph constructed by the anchors is dynamically updated to exploit the high-level information behind the data. Finally, we theoretically prove that the naive update will lead to degeneration, and a specific strategy is accordingly designed.
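A sketch of the anchor idea: pick m << n anchors (random samples here as a stand-in for k-means centroids, an illustrative choice), connect each sample to its k nearest anchors, and use the n x m bipartite matrix B for convolution-like smoothing. A full n x n graph is never materialized, so the cost stays O(nm), i.e., O(n) for fixed m.

```python
import numpy as np

def anchor_bipartite_smooth(X, m=32, k=3, rng=np.random.default_rng(0)):
    anchors = X[rng.choice(len(X), size=m, replace=False)]    # stand-in for k-means anchors
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)  # n x m squared distances
    B = np.zeros((len(X), m))
    nn = np.argsort(d2, axis=1)[:, :k]                         # k nearest anchors per sample
    rows = np.repeat(np.arange(len(X)), k)
    B[rows, nn.ravel()] = 1.0
    B /= B.sum(axis=1, keepdims=True)                          # row-normalize assignments
    anchor_feat = (B.T @ X) / (B.sum(axis=0)[:, None] + 1e-12)  # aggregate samples -> anchors
    return B @ anchor_feat                                      # diffuse back: O(n m) total
```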
Graphs are a universal data structure widely used to organize data in the real world. Various practical networks, such as transportation networks and social and academic networks, can be represented by graphs. Recent years have witnessed the rapid development of representing the vertices of a network in a low-dimensional vector space, known as network representation learning. Representation learning can facilitate the design of new algorithms on graph data. In this survey, we conduct a comprehensive review of the current literature on network representation learning. Existing algorithms can be divided into three groups: shallow embedding models, heterogeneous network embedding models, and graph-neural-network-based models. We review the state-of-the-art algorithms for each category and discuss the essential differences between them. One advantage of this survey is that we systematically study the theoretical foundations underlying the different categories of algorithms, which offers deep insights for better understanding the development of the network representation learning field.