Large-scale graphs are ubiquitous in real-world scenarios and can be trained with graph neural networks (GNNs) to generate representations for downstream tasks. Given the abundant information and complex topology of a large-scale graph, we argue that redundancy exists in such a graph and degrades training efficiency. Unfortunately, model scalability severely restricts the efficiency of training large-scale graphs via vanilla GNNs. Despite recent advances in sampling-based training methods, sampling-based GNNs generally overlook the redundancy issue, and training these models on large-scale graphs still requires an intolerable amount of time. Therefore, we propose to drop the redundancy and improve the efficiency of large-scale graph training with GNNs by rethinking the inherent characteristics of graphs. In this paper, we pioneer a once-for-all method, called DropReef, to drop the redundancy in large-scale graphs. Specifically, we first conduct preliminary experiments to explore potential redundancy in large-scale graphs. Next, we propose a metric to quantify the neighbor heterophily of all nodes in a graph. Based on both experimental and theoretical analysis, we reveal the redundancy in large-scale graphs, i.e., nodes with high neighbor heterophily and a large number of neighbors. Then, we propose DropReef to detect and drop the redundancy in large-scale graphs once and for all, helping reduce training time while ensuring no sacrifice in model accuracy. To demonstrate the effectiveness of DropReef, we apply it to state-of-the-art sampling-based GNNs for training large-scale graphs, owing to the high accuracy of such models. With DropReef leveraged, the training efficiency of these models can be greatly improved. DropReef is highly compatible and is performed offline, benefiting current and future state-of-the-art sampling-based GNNs to a great extent.
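The abstract does not give the exact heterophily metric or the drop criterion; as a hedged illustration, the sketch below assumes heterophily is the fraction of a node's neighbors carrying a different label, and drops nodes whose heterophily score and degree both exceed hypothetical thresholds (`het_thresh`, `deg_thresh`) in one offline pass.

```python
from typing import Dict, List, Set

def neighbor_heterophily(adj: Dict[int, List[int]], labels: Dict[int, int]) -> Dict[int, float]:
    """Fraction of each node's neighbors whose label differs from its own
    (an assumed stand-in for DropReef's heterophily metric)."""
    scores = {}
    for v, neighbors in adj.items():
        if not neighbors:
            scores[v] = 0.0
            continue
        diff = sum(1 for u in neighbors if labels[u] != labels[v])
        scores[v] = diff / len(neighbors)
    return scores

def drop_redundant_nodes(adj, labels, het_thresh=0.8, deg_thresh=50):
    """Offline, once-for-all removal of nodes that are both highly
    heterophilous and high-degree (thresholds are hypothetical)."""
    het = neighbor_heterophily(adj, labels)
    dropped: Set[int] = {v for v in adj if het[v] >= het_thresh and len(adj[v]) >= deg_thresh}
    pruned = {v: [u for u in nbrs if u not in dropped]
              for v, nbrs in adj.items() if v not in dropped}
    return pruned, dropped
```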
Sampling is a critical operation in the training of graph neural networks (GNNs) that helps reduce the cost. Previous literature has explored improving sampling algorithms via mathematical and statistical methods. However, there is a gap between sampling algorithms and hardware. Without considering the hardware, algorithm designers merely optimize sampling at the algorithm level, missing the great potential of promoting the efficiency of existing sampling algorithms by leveraging hardware features. In this paper, we first propose a unified programming model for mainstream sampling algorithms, termed GNNSampler, covering the key processes of sampling algorithms in various categories. Second, to leverage hardware features, we choose data locality as a case study and explore the data locality among nodes and their neighbors in a graph to alleviate irregular memory access in sampling. Third, we implement locality-aware optimizations for various sampling algorithms in GNNSampler to optimize the general sampling process. Finally, we emphasize experiments on large graph datasets to analyze the relevance among training time, accuracy, and hardware-level metrics. Extensive experiments show that our method is universal to mainstream sampling algorithms and helps significantly reduce training time, especially for large-scale graphs.
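The paper's concrete locality-aware optimization is not detailed in the abstract; the sketch below shows one common realization of the idea under that assumption: relabel nodes in BFS order so that a node and its neighbors receive nearby IDs, which makes neighbor lookups during sampling touch contiguous memory.

```python
from collections import deque
import random

def bfs_relabel(adj):
    """Relabel nodes in BFS order so that neighboring nodes get nearby IDs,
    improving data locality when neighbor lists are read during sampling."""
    order, seen = [], set()
    for seed in adj:
        if seed in seen:
            continue
        queue = deque([seed]); seen.add(seed)
        while queue:
            v = queue.popleft(); order.append(v)
            for u in adj[v]:
                if u not in seen:
                    seen.add(u); queue.append(u)
    new_id = {old: i for i, old in enumerate(order)}
    return {new_id[v]: sorted(new_id[u] for u in adj[v]) for v in adj}

def sample_neighbors(adj, node, fanout=10):
    """Uniform neighbor sampling; after relabeling, the sampled IDs tend to
    fall in a narrow range, which reduces irregular memory access."""
    nbrs = adj[node]
    return nbrs if len(nbrs) <= fanout else random.sample(nbrs, fanout)
```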
Graph neural networks (GNNs) have shown convincing performance in learning powerful node representations that preserve both node attributes and graph structural information. However, many GNNs encounter problems in effectiveness and efficiency when they are designed with deeper network structures or handle large-sized graphs. Several sampling algorithms have been proposed to improve and accelerate the training of GNNs, yet they ignore understanding the source of GNN performance gains. Measuring the information within graph data can help sampling algorithms keep high-value information while removing redundant information and even noise. In this paper, we propose a Metric-Guided (MEGUIDE) subgraph learning framework for GNNs. MEGUIDE employs two novel metrics, Feature Smoothness and Connection Failure Distance, to guide the subgraph sampling and mini-batch based training. Feature Smoothness is designed for analyzing the features of nodes in order to retain the most valuable information, while Connection Failure Distance can measure the structural information to control the size of subgraphs. We demonstrate the effectiveness and efficiency of MEGUIDE in training various GNNs on multiple datasets.
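The exact definitions of the two metrics live in the paper; the sketch below assumes a simple surrogate for Feature Smoothness (the distance between a node's features and its neighborhood mean) just to show how such a metric can rank nodes before subgraph sampling.

```python
import numpy as np

def feature_smoothness(x: np.ndarray, adj: dict) -> np.ndarray:
    """Per-node smoothness score: L2 distance between a node's feature vector
    and the mean feature vector of its neighbors (a hypothetical stand-in for
    MEGUIDE's Feature Smoothness metric)."""
    scores = np.zeros(x.shape[0])
    for v, nbrs in adj.items():
        if nbrs:
            scores[v] = np.linalg.norm(x[v] - x[list(nbrs)].mean(axis=0))
    return scores

def pick_informative_nodes(x, adj, budget):
    """Nodes whose features differ most from their neighborhood are assumed to
    carry the most new information and are kept first when sampling a subgraph."""
    scores = feature_smoothness(x, adj)
    return np.argsort(-scores)[:budget]
```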
Graph neural networks (GNNs) have achieved great success in many graph-based applications. However, the enormous size and high sparsity of graphs hinder their application in industrial scenarios. Although some scalable GNNs have been proposed for large-scale graphs, they adopt a fixed $k$-hop neighborhood for each node and thus face the over-smoothing issue when adopting large propagation depths within sparse regions. To tackle the above issue, we propose a new GNN architecture, Graph Attention Multi-Layer Perceptron (GAMLP), which can capture the underlying correlations between different scopes of graph knowledge. We have deployed GAMLP with the Angel platform, and we further evaluate GAMLP on real-world datasets and a large-scale industrial dataset. Extensive experiments on these 14 graph datasets demonstrate that GAMLP achieves state-of-the-art performance while enjoying high scalability and efficiency. Specifically, it delivers a 1.3\% improvement in predictive accuracy on our large-scale Tencent Video dataset while achieving up to a $50\times$ training speedup. Besides, it ranks first on the leaderboards of the largest homogeneous and heterogeneous graphs of the Open Graph Benchmark (i.e., OGBN-papers100M and OGBN-MAG).
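GAMLP's precise attention variants are not given in the abstract; the PyTorch sketch below follows the general decoupled recipe such a model implies: propagate features for several hops once, offline, then let a per-node attention weight the hops before an MLP classifier. Shapes and module names are illustrative assumptions.

```python
import torch
import torch.nn as nn

def precompute_hops(adj_norm: torch.Tensor, x: torch.Tensor, k: int):
    """Return [x, Ax, A^2 x, ..., A^k x], computed once outside training."""
    hops, h = [x], x
    for _ in range(k):
        h = adj_norm @ h
        hops.append(h)
    return torch.stack(hops, dim=1)              # (N, k+1, d)

class HopAttentionMLP(nn.Module):
    """Combine multi-hop features with node-wise attention, then classify.
    A simplified stand-in for GAMLP's attention over propagation scopes."""
    def __init__(self, d, num_classes, hidden=64):
        super().__init__()
        self.score = nn.Linear(d, 1)
        self.mlp = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_classes))

    def forward(self, hop_feats):                 # (N, K, d)
        alpha = torch.softmax(self.score(hop_feats).squeeze(-1), dim=1)
        combined = (alpha.unsqueeze(-1) * hop_feats).sum(dim=1)
        return self.mlp(combined)

# Usage on random data (shapes only):
# A = torch.eye(100); X = torch.randn(100, 32)
# logits = HopAttentionMLP(32, 7)(precompute_hops(A, X, k=3))
```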
Graph neural networks (GNNs) tend to suffer from high computational costs due to the exponentially growing scale of graph data and the number of model parameters, which restricts their utility in practical applications. To this end, some recent works focus on sparsifying GNNs with the lottery ticket hypothesis (LTH) to reduce inference costs while maintaining performance levels. However, LTH-based methods suffer from two major drawbacks: 1) they require exhaustive and iterative training of dense models, resulting in an extremely large training computation cost, and 2) they only trim graph structures and model parameters while ignoring the node feature dimension, where significant redundancy exists. To overcome the above limitations, we propose a comprehensive graph gradual pruning framework termed CGP. This is achieved by designing a during-training graph pruning paradigm that dynamically prunes GNNs within one training process. Unlike LTH-based methods, the proposed CGP approach requires no retraining, which significantly reduces the computation costs. Furthermore, we design a joint strategy to comprehensively trim all three core elements of GNNs: graph structures, node features, and model parameters. Meanwhile, aiming to refine the pruning operation, we introduce a regrowth process into our CGP framework to re-establish pruned but important connections. The proposed CGP is evaluated on the node classification task across 6 GNN architectures, including shallow models (GCN and GAT), shallow-but-deep-propagation models (SGC and APPNP), and deep models (GCNII and ResGCN), on a total of 14 real-world graph datasets, including large-scale graph datasets from the challenging Open Graph Benchmark. Experiments show that our proposed strategy greatly improves training and inference efficiency while matching or even exceeding the accuracy of existing methods.
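The abstract names the three pruned elements and a regrowth step but not the pruning criterion; the sketch below assumes plain magnitude-based masks applied on a gradual schedule during training, with a small random regrowth of pruned edges, purely to illustrate the co-pruning idea on 1-D value arrays.

```python
import numpy as np

def magnitude_mask(values: np.ndarray, prune_frac: float) -> np.ndarray:
    """Keep the largest-magnitude entries of a 1-D array, zeroing the smallest prune_frac."""
    if prune_frac <= 0:
        return np.ones_like(values, dtype=bool)
    thresh = np.quantile(np.abs(values), prune_frac)
    return np.abs(values) > thresh

def gradual_prune_step(adj_weights, feat_importance, model_weights, step, total_steps,
                       final_sparsity=0.5, regrow_frac=0.05):
    """One co-pruning step over graph structure, node features, and parameters
    (all passed as 1-D arrays), with a small random regrowth of previously
    pruned connections -- a hypothetical schedule, not the paper's exact rule."""
    frac = final_sparsity * (step + 1) / total_steps
    edge_mask = magnitude_mask(adj_weights, frac)
    feat_mask = magnitude_mask(feat_importance, frac)
    weight_mask = magnitude_mask(model_weights, frac)
    # Regrowth: randomly re-enable a few pruned edges so that important
    # connections removed too early can be re-established.
    pruned_idx = np.flatnonzero(~edge_mask)
    if pruned_idx.size:
        regrow = np.random.choice(pruned_idx,
                                  size=max(1, int(regrow_frac * pruned_idx.size)),
                                  replace=False)
        edge_mask[regrow] = True
    return edge_mask, feat_mask, weight_mask
```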
Graph neural networks (GNNs) have been demonstrated to be a powerful tool for analyzing non-Euclidean graph data. However, the lack of efficient distributed graph learning (GL) systems severely hinders the applications of GNNs, especially when graphs are big and GNNs are relatively deep. In this paper, we present GraphTheta, a novel distributed and scalable GL system implemented in a vertex-centric graph programming model. GraphTheta is the first GL system built upon distributed graph processing, with neural network operators implemented as user-defined functions. The system supports multiple training strategies and enables highly scalable large-graph learning on distributed (virtual) machines. To facilitate graph convolution implementations, GraphTheta puts forward a new GL abstraction named NN-TGAR to bridge the gap between graph processing and graph deep learning. A distributed graph engine is proposed to conduct stochastic gradient descent optimization with hybrid parallel execution. Moreover, in addition to global-batch and mini-batch training, we also provide support for a new cluster-batch training strategy. We evaluate GraphTheta using a number of datasets with network sizes ranging from small and moderate to large-scale. Experimental results show that GraphTheta scales well to 1,024 workers for training an in-house developed GNN on an industry-scale Alipay dataset of 1.4 billion nodes and 4.1 billion attributed edges, using a cluster of CPU virtual machines (dockers) with small memory (5$\sim$12GB) each. Moreover, GraphTheta obtains comparable or better prediction results than state-of-the-art GNN implementations, demonstrating its capability of learning GNNs as well as existing frameworks do, and it can outperform them by up to $2.02\times$ with better scalability. To the best of our knowledge, this work presents the largest edge-attributed GNN learning task in the literature.
Subgraph-based graph representation learning (SGRL) has recently been proposed to deal with some fundamental challenges encountered by canonical graph neural networks (GNNs), and has demonstrated advantages in many important data science applications such as link, relation, and motif prediction. However, current SGRL approaches suffer from scalability issues since they require extracting subgraphs for each training or testing query. Recent solutions that scale up canonical GNNs may not apply to SGRL. Here, we propose a novel framework, SUREL, for scalable SGRL by co-designing the learning algorithm and its system support. SUREL adopts walk-based decomposition of subgraphs and reuses the walks to form subgraphs, which substantially reduces the redundancy of subgraph extraction and supports parallel computation. Experiments over six homogeneous, heterogeneous, and higher-order graphs with millions of nodes and edges demonstrate the effectiveness and scalability of SUREL. In particular, compared to SGRL baselines, SUREL achieves a 10$\times$ speed-up with comparable or even better prediction performance; compared to canonical GNNs, SUREL achieves a 50% improvement in prediction accuracy.
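SUREL's actual walk-based decomposition involves dedicated data structures and parallel kernels; the sketch below only illustrates the reuse idea under simplifying assumptions: walks are sampled once per node and joined on demand to form the node set of a query subgraph, so no fresh subgraph extraction is needed per query.

```python
import random

def sample_walks(adj, num_walks=4, walk_len=3, seed=0):
    """Precompute a small set of random walks per node; these are reused for
    every query that touches the node, instead of extracting fresh subgraphs."""
    rng = random.Random(seed)
    walks = {}
    for v in adj:
        walks[v] = []
        for _ in range(num_walks):
            walk, cur = [v], v
            for _ in range(walk_len):
                if not adj[cur]:
                    break
                cur = rng.choice(adj[cur])
                walk.append(cur)
            walks[v].append(walk)
    return walks

def query_subgraph(walks, u, v):
    """Form the node set for a link query (u, v) by joining the two walk sets."""
    return {n for w in walks[u] + walks[v] for n in w}
```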
Graph Convolutional Networks (GCNs) are powerful models for learning representations of attributed graphs. To scale GCNs to large graphs, state-of-the-art methods use various layer sampling techniques to alleviate the "neighbor explosion" problem during minibatch training. We propose GraphSAINT, a graph sampling based inductive learning method that improves training efficiency and accuracy in a fundamentally different way. By changing perspective, GraphSAINT constructs minibatches by sampling the training graph, rather than the nodes or edges across GCN layers. Each iteration, a complete GCN is built from the properly sampled subgraph. Thus, we ensure a fixed number of well-connected nodes in all layers. We further propose a normalization technique to eliminate bias, and sampling algorithms for variance reduction. Importantly, we can decouple the sampling from the forward and backward propagation, and extend GraphSAINT with many architecture variants (e.g., graph attention, jumping connection). GraphSAINT demonstrates superior performance in both accuracy and training time on five large graphs, and achieves new state-of-the-art F1 scores for PPI (0.995) and Reddit (0.970).
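As a rough sketch of the approach described above, the snippet below implements the simplest node-sampler variant and estimates loss-normalization coefficients from sampling frequencies; GraphSAINT's edge and random-walk samplers and its exact normalization are more involved, so treat this only as an illustration of the recipe.

```python
import random
from collections import Counter

def sample_node_induced_subgraph(adj, budget, rng=random):
    """GraphSAINT-style node sampler (simplest variant): pick `budget` nodes
    and take the induced subgraph; one full GCN is trained on it per iteration."""
    nodes = set(rng.sample(list(adj), budget))
    return {v: [u for u in adj[v] if u in nodes] for v in nodes}

def estimate_node_norm(adj, budget, num_trials=200, rng=random):
    """Estimate per-node loss-normalization coefficients from how often each
    node appears across pre-sampled subgraphs (a sketch of bias elimination)."""
    counts = Counter()
    for _ in range(num_trials):
        counts.update(sample_node_induced_subgraph(adj, budget, rng).keys())
    return {v: num_trials / max(counts[v], 1) for v in adj}
```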
Graph neural networks (GNNs) are popular machine learning methods for modeling graph data. Many GNNs perform well on homophilic graphs while having unsatisfactory performance on heterophilic graphs. Recently, some researchers have turned their attention to designing GNNs for heterophilic graphs by adjusting the message passing mechanism or enlarging the receptive field of message passing. Different from existing works that mitigate the heterophily issue from the perspective of model design, we propose to study heterophilic graphs from an orthogonal perspective by rewiring the graph structure to reduce heterophily and make traditional GNNs perform better. Through comprehensive empirical studies and analysis, we verify the potential of rewiring methods. To fully exploit this potential, we propose a method named Deep Heterophily Graph Rewiring (DHGR) to rewire graphs by adding homophilic edges and pruning heterophilic edges. The detailed way of rewiring is determined by comparing the similarity of the label/feature distributions of node neighbors. Besides, we design a scalable implementation for DHGR to guarantee high efficiency. DHGR can easily be used as a plug-in module, i.e., a graph pre-processing step, for any GNN, including both homophilic and heterophilic GNNs, to boost their performance on node classification tasks. To the best of our knowledge, this is the first work studying graph rewiring for heterophilic graphs. Extensive experiments on 11 public graph datasets demonstrate the superiority of our proposed method.
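The abstract says rewiring is decided by comparing the similarity of neighbors' label/feature distributions; the sketch below assumes the label-distribution variant with cosine similarity and illustrative thresholds, and uses a quadratic all-pairs loop rather than the scalable implementation the paper designs.

```python
import numpy as np

def neighbor_label_distribution(adj, labels, num_classes):
    """Per-node distribution over the (known) labels of its neighbors."""
    dist = np.zeros((len(adj), num_classes))
    for v, nbrs in adj.items():
        for u in nbrs:
            if labels.get(u) is not None:
                dist[v, labels[u]] += 1
        if dist[v].sum() > 0:
            dist[v] /= dist[v].sum()
    return dist

def rewire(adj, dist, add_thresh=0.95, prune_thresh=0.2):
    """Prune edges whose endpoints have dissimilar neighbor-label distributions
    and add edges between highly similar (likely homophilic) pairs."""
    def cos(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0
    new_adj = {v: [u for u in nbrs if cos(dist[v], dist[u]) >= prune_thresh]
               for v, nbrs in adj.items()}
    nodes = list(adj)
    for i, v in enumerate(nodes):            # O(N^2) toy loop; the paper
        for u in nodes[i + 1:]:              # describes a scalable variant
            if u not in new_adj[v] and cos(dist[v], dist[u]) >= add_thresh:
                new_adj[v].append(u); new_adj[u].append(v)
    return new_adj
```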
Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields for their effectiveness in learning over graphs. To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training which distributes the workload of training across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training remain preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating various optimization techniques used in distributed GNN training. First, distributed GNN training is classified into several categories according to their workflows. In addition, their computational patterns and communication patterns, as well as the optimization techniques proposed by recent work are introduced. Second, the software frameworks and hardware platforms of distributed GNN training are also introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing the uniqueness of distributed GNN training. Finally, interesting issues and opportunities in this field are discussed.
Graph convolutional network (GCN) has been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer from either a high computational cost that exponentially grows with number of GCN layers, or a large space requirement for keeping the entire graph and the embedding of each node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as the following: at each step, it samples a block of nodes that associate with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search within this subgraph. This simple but effective strategy leads to significantly improved memory and computational efficiency while being able to achieve comparable test accuracy with previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M data with 2 million nodes and 61 million edges which is more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this data, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs 1961 seconds) and using much less memory (2.2GB vs 11.2GB). Furthermore, for training 4 layer GCN on this data, our algorithm can finish in around 36 minutes while all the existing GCN training algorithms fail to train due to the out-of-memory issue. Furthermore, Cluster-GCN allows us to train much deeper GCN without much time and memory overhead, which leads to improved prediction accuracy-using a 5-layer Cluster-GCN, we achieve state-of-the-art test F1 score 99.36 on the PPI dataset, while the previous best result was 98.71 by [16]. Our codes are publicly available at https://github.com/google-research/google-research/ tree/master/cluster_gcn.
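Cluster-GCN relies on a real graph clustering algorithm (METIS in the paper); the sketch below substitutes a trivial hash-based partition only to show how a cluster-restricted minibatch is formed, i.e., the neighborhood search is confined to the sampled cluster's induced subgraph.

```python
import random

def toy_partition(nodes, num_clusters):
    """Stand-in for METIS: assign integer node IDs to clusters by a simple hash.
    (Cluster-GCN uses a real graph clustering algorithm here.)"""
    return {v: v % num_clusters for v in nodes}

def cluster_minibatch(adj, assignment, num_clusters, rng=random):
    """Pick one cluster and return its induced subgraph; the GCN's neighborhood
    search for this minibatch is restricted to that subgraph."""
    c = rng.randrange(num_clusters)
    block = {v for v, cid in assignment.items() if cid == c}
    return {v: [u for u in adj[v] if u in block] for v in block}
```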
Graph neural networks (GNNs) have received remarkable success in link prediction (GNNLP) tasks. Existing efforts first predefine the subgraph for the whole dataset and then apply GNNs to encode edge representations by leveraging the neighborhood structure induced by the fixed subgraph. The prominence of GNNLP methods significantly relies on the adhoc subgraph. Since node connectivity in real-world graphs is complex, one shared subgraph is limited for all edges. Thus, the choices of subgraphs should be personalized to different edges. However, performing personalized subgraph selection is nontrivial since the potential selection space grows exponentially to the scale of edges. Besides, the inference edges are not available during training in link prediction scenarios, so the selection process needs to be inductive. To bridge the gap, we introduce a Personalized Subgraph Selector (PS2) as a plug-and-play framework to automatically, personally, and inductively identify optimal subgraphs for different edges when performing GNNLP. PS2 is instantiated as a bi-level optimization problem that can be efficiently solved differently. Coupling GNNLP models with PS2, we suggest a brand-new angle towards GNNLP training: by first identifying the optimal subgraphs for edges; and then focusing on training the inference model by using the sampled subgraphs. Comprehensive experiments endorse the effectiveness of our proposed method across various GNNLP backbones (GCN, GraphSage, NGCF, LightGCN, and SEAL) and diverse benchmarks (Planetoid, OGB, and Recommendation datasets). Our code is publicly available at \url{https://github.com/qiaoyu-tan/PS2}
Graph neural networks (GNNs) have received great attention due to their success in various graph-related learning tasks. Several GNN frameworks have then been developed for fast and easy implementation of GNN models. Despite their popularity, they are not well documented, and their implementations and system performance have not been well understood. In particular, unlike the traditional GNNs that are trained based on the entire graph in a full-batch manner, recent GNNs have been developed with different graph sampling techniques for mini-batch training of GNNs on large graphs. While they improve the scalability, their training times still depend on the implementations in the frameworks as sampling and its associated operations can introduce non-negligible overhead and computational cost. In addition, it is unknown how much the frameworks are 'eco-friendly' from a green computing perspective. In this paper, we provide an in-depth study of two mainstream GNN frameworks along with three state-of-the-art GNNs to analyze their performance in terms of runtime and power/energy consumption. We conduct extensive benchmark experiments at several different levels and present detailed analysis results and observations, which could be helpful for further improvement and optimization.
Message passing has developed into an effective tool for designing graph neural networks (GNNs). However, most existing methods for message passing simply sum or average all the neighboring features to update node representations. They are restricted by two problems: (i) a lack of interpretability to identify the node features significant to the prediction of GNNs, and (ii) feature over-mixing, which leads to the over-smoothing issue in capturing long-range dependencies and the inability to handle graphs under heterophily or low homophily. In this paper, we propose a Node-level Capsule Graph Neural Network (NCGNN) to address these problems with an improved message passing scheme. Specifically, NCGNN represents nodes as groups of node-level capsules, in which each capsule extracts distinctive features of its corresponding node. For each node-level capsule, a novel dynamic routing procedure is developed to adaptively select appropriate capsules for aggregation from a subgraph identified by the designed graph filter. NCGNN aggregates only the advantageous capsules and restrains irrelevant messages to avoid over-mixing the features of interacting nodes. Therefore, it can relieve the over-smoothing issue and learn effective node representations over graphs with either homophily or heterophily. Moreover, our proposed message passing scheme is inherently interpretable and exempt from complex post-hoc explanations, as the graph filter and the dynamic routing procedure identify the subset of node features that are most significant to the model prediction from the extracted subgraph. Extensive experiments on synthetic as well as real-world graphs demonstrate that NCGNN can well address the over-smoothing issue and produce better node representations for semi-supervised node classification. It outperforms the state of the art under both homophily and heterophily.
Graph neural networks (GNNs) have achieved great success on node classification tasks. However, despite the broad interest in developing and evaluating GNNs, they have been assessed with limited benchmark datasets. As a result, existing evaluations of GNNs lack fine-grained analysis across the various characteristics of graphs. Motivated by this, we conduct extensive experiments with a synthetic graph generator that can produce graphs with controlled characteristics for fine-grained analysis. Our empirical study clarifies the strengths and weaknesses of GNNs with respect to four major characteristics of real-world graphs with node class labels, namely 1) class-size distributions (balanced vs. imbalanced), 2) edge connection proportions between classes (homophilic vs. heterophilic), 3) attribute values (biased vs. random), and 4) graph sizes (small vs. large). In addition, to foster future research on GNNs, we publicly release our codebase, which allows users to evaluate various GNNs with various graphs. We hope this work offers interesting insights for future research.
Node classification is of great importance among various graph mining tasks. In practice, real-world graphs generally follow a long-tail distribution, where a large number of classes only consist of limited labeled nodes. Although graph neural networks (GNNs) have achieved significant improvements in node classification, their performance decreases substantially in such a scenario. The main reason can be attributed to the vast generalization gap between meta-training and meta-test caused by the task variance, i.e., the node-level and class-level variance induced by the different node/class distributions in meta-tasks. Therefore, to effectively alleviate the impact of task variance, we propose a task-adaptive node classification framework under the few-shot learning setting. Specifically, we first accumulate meta-knowledge across classes with abundant labeled nodes. Then we transfer such knowledge to the classes with limited labeled nodes via our proposed task-adaptive modules. In particular, to accommodate the different node/class distributions among meta-tasks, we propose three essential modules to perform \emph{node-level}, \emph{class-level}, and \emph{task-level} adaptations in each meta-task, respectively. In this way, our framework can conduct adaptations to different meta-tasks and thus advance the model's generalization performance on meta-test tasks. Extensive experiments on four prevalent node classification datasets demonstrate the superiority of our framework over the state-of-the-art baselines. Our code is available at https://github.com/songw-sw/tent.
Graph representation learning aims to integrate node contents with the graph structure to learn node/graph representations. Nevertheless, many existing graph learning methods are found not to work well on data with high heterophily levels, i.e., a large proportion of edges connecting nodes with different class labels. Recent efforts on this problem focus on improving the message passing mechanism. However, it remains unclear whether heterophily truly does harm to the performance of graph neural networks (GNNs). The key is to unfold the relationship between a node and its immediate neighbors, e.g., are they heterophilous or homophilous? From this perspective, we study the role that revealing the relations between connected nodes before and after message passing plays in heterophilic graph learning. In particular, we propose an end-to-end framework that both learns the type of each edge (i.e., heterophilous or homophilous) and leverages the edge-type information to improve the expressiveness of graph neural networks. We implement this framework in two different ways. Specifically, to avoid messages being passed through heterophilous edges, we can refine the graph structure by removing the heterophilous edges identified by an edge classifier. Alternatively, the information about the presence of heterophilous neighbors can be exploited for feature learning; accordingly, a hybrid message passing approach is designed to aggregate homophilous neighbors and diversify heterophilous neighbors based on the edge classification. Extensive experiments demonstrate that the proposed framework significantly improves the performance of GNNs on multiple datasets across the full spectrum of homophily levels.
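The abstract describes two instantiations; the PyTorch sketch below covers only the structure-refinement one, under the assumption that the edge classifier is a small MLP over concatenated endpoint features, trained on edges between labeled nodes (target 1 if the labels match) and then used to drop edges predicted heterophilous. The hybrid message-passing variant is not shown.

```python
import torch
import torch.nn as nn

class EdgeClassifier(nn.Module):
    """Predict whether an edge is homophilous from its endpoint features."""
    def __init__(self, d, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, edge_index):            # edge_index: (2, E) long tensor
        src, dst = edge_index
        return self.net(torch.cat([x[src], x[dst]], dim=-1)).squeeze(-1)

def refine_structure(clf, x, edge_index, thresh=0.5):
    """Keep only the edges the classifier deems homophilous (structure refinement)."""
    with torch.no_grad():
        keep = torch.sigmoid(clf(x, edge_index)) >= thresh
    return edge_index[:, keep]

# Training signal (sketch): for edges between two labeled training nodes, the
# target is 1 if their labels match and 0 otherwise; train with BCEWithLogitsLoss.
```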
Graph Neural Networks (GNNs) are powerful tools for graph representation learning. Despite their rapid development, GNNs also face some challenges, such as over-fitting, over-smoothing, and non-robustness. Previous works indicate that these problems can be alleviated by random dropping methods, which integrate augmented data into models by randomly masking parts of the input. However, some open problems of random dropping on GNNs remain to be solved. First, it is challenging to find a universal method that is suitable for all cases considering the divergence of different datasets and models. Second, augmented data introduced to GNNs causes incomplete coverage of parameters and an unstable training process. Third, there is no theoretical analysis on the effectiveness of random dropping methods on GNNs. In this paper, we propose a novel random dropping method called DropMessage, which performs dropping operations directly on the propagated messages during the message-passing process. More importantly, we find that DropMessage provides a unified framework for most existing random dropping methods, based on which we give a theoretical analysis of their effectiveness. Furthermore, we elaborate the superiority of DropMessage: it stabilizes the training process by reducing sample variance; it keeps information diversity from the perspective of information theory, enabling it to become a theoretical upper bound of other methods. To evaluate our proposed method, we conduct experiments that aim at multiple tasks on five public datasets and two industrial datasets with various backbone models. The experimental results show that DropMessage has the advantages of both effectiveness and generalization, and can significantly alleviate the problems mentioned above.
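A minimal PyTorch reading of the idea, not the authors' implementation: one mean-aggregation message-passing layer where an element-wise Bernoulli mask is applied directly to the per-edge message matrix (rather than to nodes, edges, or input features) before aggregation; the rescaling keeps the expected message unchanged.

```python
import torch
import torch.nn as nn

class DropMessageLayer(nn.Module):
    """Mean-aggregation message-passing layer that randomly drops individual
    elements of the propagated messages (rather than whole nodes or edges)."""
    def __init__(self, in_dim, out_dim, drop_rate=0.3):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.drop_rate = drop_rate

    def forward(self, x, edge_index):            # edge_index: (2, E) long tensor
        src, dst = edge_index
        msg = self.lin(x)[src]                   # one message per edge, (E, out_dim)
        if self.training:
            mask = torch.rand_like(msg) >= self.drop_rate
            msg = msg * mask / (1.0 - self.drop_rate)   # rescale to keep expectation
        out = torch.zeros(x.size(0), msg.size(1), device=x.device)
        deg = torch.zeros(x.size(0), device=x.device)
        out.index_add_(0, dst, msg)
        deg.index_add_(0, dst, torch.ones_like(dst, dtype=out.dtype))
        return out / deg.clamp(min=1).unsqueeze(-1)
```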
The goal of graph summarization is to represent large graphs in a structured and compact way. A graph summary based on equivalence classes preserves pre-defined features of a graph's vertex within a $k$-hop neighborhood, such as the vertex labels and edge labels. Based on these neighborhood characteristics, the vertex is assigned to an equivalence class. The calculation of the assigned equivalence class must be a permutation-invariant operation on the pre-defined features. This is achieved by sorting on the feature values, e.g., the edge labels, which is computationally expensive, and subsequently hashing the result. Graph Neural Networks (GNN) fulfill the permutation-invariance requirement. We formulate the problem of graph summarization as a subgraph classification task on the root vertex of the $k$-hop neighborhood. We adapt different GNN architectures, both based on the popular message-passing protocol and on alternative approaches, to perform the structural graph summarization task. We compare the different GNNs with a standard multi-layer perceptron (MLP) and a Bloom filter as a non-neural method. For our experiments, we consider four popular graph summary models on a large web graph. This resembles challenging multi-class vertex classification tasks with the number of classes ranging from $576$ to multiple hundreds of thousands. Our results show that the performances of the GNNs are close to each other. In three out of four experiments, the non-message-passing GraphMLP model outperforms the other GNNs. The performance of the standard MLP is extraordinarily good, especially in the presence of many classes. Finally, the Bloom filter outperforms all neural architectures by a large margin, except for the dataset with the fewest number of classes ($576$).
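For contrast with the GNN formulation, the classical non-neural assignment the abstract alludes to can be sketched as follows, assuming a 1-hop summary over vertex labels and (edge label, neighbor label) pairs: sorting gives permutation invariance, and hashing yields the equivalence class identifier.

```python
import hashlib

def equivalence_class(vertex_labels, out_edges):
    """Assign a vertex to a summary equivalence class from its 1-hop neighborhood:
    sort the vertex labels and the (edge label, neighbor label) pairs so the
    result is permutation invariant, then hash (the classical, non-neural way)."""
    key = (tuple(sorted(vertex_labels)), tuple(sorted(out_edges)))
    return hashlib.sha1(repr(key).encode()).hexdigest()

# Vertices with identical sorted neighborhood features receive the same hash,
# i.e., the same equivalence class, regardless of the order edges are stored in.
print(equivalence_class({"Person"}, [("knows", "Person"), ("worksAt", "Org")]))
```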
Social bots refer to automated accounts on social networks that attempt to behave like humans. Although graph neural networks (GNNs) have been widely applied to the field of social bot detection, a large amount of domain expertise and prior knowledge is heavily involved in the state-of-the-art approaches to design specialized neural network architectures for specific classification tasks. Involving oversized nodes and network layers in the model design, however, usually causes the over-smoothing problem and a lack of embedding discrimination. In this paper, we propose RoSGAS, a novel Reinforced and Self-supervised GNN Architecture Search framework, to adaptively pinpoint the most suitable multi-hop neighborhood and the number of layers in the GNN architecture. More specifically, we consider the social bot detection problem as a user-centric subgraph embedding and classification task. We exploit a heterogeneous information network to present the user connectivity by leveraging account metadata, relationships, behavioral features, and content features. RoSGAS uses a multi-agent deep reinforcement learning (RL) mechanism to navigate the search for the optimal neighborhood and network layers, learning the subgraph embedding for each target user individually. A nearest-neighbor mechanism is developed to accelerate the RL training process, and RoSGAS can learn more discriminative subgraph embeddings with the help of self-supervised learning. Experiments on 5 Twitter datasets show that RoSGAS outperforms the state-of-the-art approaches in terms of accuracy, training efficiency, and stability, and generalizes better when handling unseen samples.