The success of neural networks (NNs) across a broad range of applications has led to increased interest in understanding the underlying learning dynamics of these models. In this paper, we go beyond descriptions of the learning dynamics by adopting a graph perspective and investigating the relationship between the graph structure of NNs and their performance. Specifically, we propose (1) representing the neural network learning process as a time-evolving graph (i.e., a series of static graph snapshots across epochs), (2) capturing the structural changes of the NN during training in a simple temporal summary, and (3) leveraging the structural summary to predict the accuracy of the underlying NN in classification or regression tasks. For the dynamic graph representation of NNs, we explore structural representations of fully connected and convolutional layers, the key components of powerful NN models. Our analysis shows that simple summaries of graph statistics, such as weighted degree and eigenvector centrality, can be used to accurately predict the performance of NNs. For example, a weighted-degree-based summary constructed from 5 training epochs of a LeNet architecture achieves a classification accuracy of over 93%. Our findings hold for different NN architectures, including LeNet, VGG, AlexNet, and ResNet.
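As a rough illustration of the kind of structural summary described above (not the paper's exact pipeline), a fully connected layer can be viewed as a weighted bipartite graph and each neuron summarized by its weighted degree; the summary statistics below are an assumed, minimal choice:

```python
import numpy as np

def weighted_degree_summary(weight_matrix):
    """View a fully connected layer as a bipartite graph whose edge
    weights are |w_ij|, and summarize each neuron by its weighted
    degree (strength): the sum of absolute incident edge weights."""
    w = np.abs(weight_matrix)        # shape: (out_features, in_features)
    out_strength = w.sum(axis=1)     # weighted degree of output neurons
    in_strength = w.sum(axis=0)      # weighted degree of input neurons
    strengths = np.concatenate([out_strength, in_strength])
    # Compress per-neuron strengths into a fixed-size statistic vector.
    return np.array([strengths.mean(), strengths.std(),
                     strengths.min(), strengths.max()])

# One such summary per layer per epoch yields a short time series that
# a downstream meta-model could map to predicted accuracy.
layer = np.array([[0.5, -0.5], [1.0, 2.0]])
print(weighted_degree_summary(layer))
```

Taking one snapshot per epoch turns a training run into the time-evolving graph representation the abstract describes.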
Most human activities require collaboration within and across formal or informal teams. Our understanding of how the collaborative effort spent by teams relates to their performance is still a matter of debate. Teamwork results in a highly interconnected ecosystem of potentially overlapping components, where tasks are performed in interaction with team members and across other teams. To tackle this problem, we propose a graph neural network model designed to predict a team's performance while identifying the drivers that determine such an outcome. In particular, the model is based on three architectural channels: topological, centrality, and contextual, which capture different factors potentially shaping a team's success. We endow the model with two attention mechanisms to boost model performance and allow interpretability. The first mechanism allows pinpointing key members inside the team. The second mechanism allows us to quantify the contributions of the three drivers in determining the outcome performance. We test model performance on a wide range of domains, where it outperforms most of the classical and neural baselines considered. Moreover, we include specifically designed synthetic datasets to validate how the model disentangles the intended properties, on which our model outperforms the baselines.
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. Design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which emerged as a de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair experimental evaluation protocol and choose a proper collection of datasets. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions, which shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an efficient tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
Neural architecture search automates neural network design. Despite its success, it is computationally expensive and provides little insight into how to design a desirable architecture. Here, we propose a new way of searching for neural networks, in which we search neural architectures by rewiring the corresponding graphs and predict architecture performance from graph properties. Because we do not perform machine learning over the entire graph space and instead use predicted architecture performance to guide the search, the search process is remarkably efficient. We find that graph-based search can give a reasonable prediction of desirable architectures. We also find graph properties that effectively predict architecture performance. Our work proposes a new way of searching for neural architectures and provides insights into neural architecture design.
Deep learning has been shown to be successful in a number of domains, ranging from acoustics, images, to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, substantial research efforts have been devoted to applying deep learning methods to graphs, resulting in beneficial advances in graph analysis techniques. In this survey, we comprehensively review the different types of deep learning methods on graphs. We divide the existing methods into five categories based on their model architectures and training strategies: graph recurrent neural networks, graph convolutional networks, graph autoencoders, graph reinforcement learning, and graph adversarial methods. We then provide a comprehensive overview of these methods in a systematic manner mainly by following their development history. We also analyze the differences and compositions of different methods. Finally, we briefly outline the applications in which they have been used and discuss potential future research directions.
The convolution operator at the core of many modern neural architectures can effectively be seen as performing a dot product between an input matrix and a filter. While this is readily applicable to data such as images, which can be represented as regular grids in Euclidean space, extending the convolution operator to work on graphs proves more challenging due to their irregular structure. In this paper, we propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain. This allows us to define an entirely structural model that does not require computing an embedding of the input graph. Our architecture allows plugging in any type and number of graph kernels and has the added benefit of providing some interpretability in terms of the structural masks learned during the training process, similar to what happens with convolutional masks in traditional convolutional neural networks. We perform an extensive ablation study to investigate the impact of the model hyperparameters, and we show that our model achieves competitive performance on standard graph classification datasets.
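To make the "inner product on graphs" idea concrete, here is a minimal sketch of one of the simplest graph kernels, a vertex-label histogram kernel; it is an assumed illustrative choice, not the kernels used in the paper, and richer kernels (Weisfeiler-Lehman, shortest-path) follow the same inner-product template:

```python
from collections import Counter

def vertex_histogram_kernel(labels_g1, labels_g2):
    """A minimal graph kernel: represent each graph by the histogram of
    its node labels and return the dot product of the two histograms.
    Any positive semi-definite graph kernel can play the same role."""
    h1, h2 = Counter(labels_g1), Counter(labels_g2)
    return sum(h1[label] * h2[label] for label in h1.keys() & h2.keys())

# Two toy graphs described only by their node labels.
g1 = ["C", "C", "O", "H"]
g2 = ["C", "O", "O"]
print(vertex_histogram_kernel(g1, g2))   # 2*1 (C) + 1*2 (O) = 4
```

Replacing the image-filter dot product with such a kernel evaluation against a learned structural mask is the core move the abstract describes.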
Neural networks are at the pinnacle of artificial intelligence, as in recent years we have witnessed many novel architectures and learning and optimization techniques for deep learning. Capitalizing on the fact that neural networks inherently constitute multipartite graphs among neurons, we aim to analyze their structure directly in order to extract meaningful information that can improve the learning process. To our knowledge, graph mining techniques for enhancing learning in neural networks have not yet been investigated. In this paper, we propose an adapted version of the k-core structure for the complete weighted multipartite graph extracted from a deep learning architecture. Since a multipartite graph is a combination of bipartite graphs, which are in turn the incidence graphs of hypergraphs, we design the k-hypercore decomposition, the hypergraph analogue of k-core degeneracy. We apply k-hypercore to several neural network architectures, more specifically to convolutional neural networks and multilayer perceptrons, for image recognition tasks after very short training. We then use the information provided by the hypercore numbers of the neurons to re-initialize the weights of the neural network, thus biasing the gradient optimization scheme. Extensive experiments show that k-hypercore outperforms state-of-the-art initialization methods.
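For reference, the ordinary k-core decomposition that the paper's k-hypercore generalizes can be sketched with a simple peeling loop; this is a textbook illustration on an unweighted graph, not the weighted hypergraph variant the paper develops:

```python
def core_numbers(adj):
    """Compute k-core numbers by iterative peeling: repeatedly remove a
    minimum-degree node; the running maximum of removal-time degrees is
    the removed node's core number. `adj` maps node -> set of neighbours.
    The paper's k-hypercore is the hypergraph analogue of this degeneracy."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    core, k = {}, 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))      # min-degree node
        k = max(k, len(adj[v]))
        core[v] = k
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return core

# A triangle with a pendant node: the triangle forms the 2-core.
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(core_numbers(graph))
```

In the paper's setting, a neuron's (hyper)core number then serves as a structural score used to re-initialize its weights.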
Graph kernels are historically the most widely used technique for graph classification tasks. However, these methods suffer from limited performance because of the hand-crafted combinatorial features of graphs. In recent years, graph neural networks (GNNs) have become the state-of-the-art method for downstream graph-related tasks due to their superior performance. Most GNNs are based on the message passing neural network (MPNN) framework. However, recent studies show that MPNNs cannot exceed the power of the Weisfeiler-Lehman (WL) algorithm in graph isomorphism testing. To address the limitations of existing graph kernel and GNN methods, in this paper we propose a novel GNN framework, termed Kernel Graph Neural Networks (KerGNNs), which integrates graph kernels into the message passing process of GNNs. Inspired by convolution filters in convolutional neural networks (CNNs), KerGNNs adopt trainable hidden graphs as graph filters, which are combined with subgraphs to update node embeddings using graph kernels. In addition, we show that MPNNs can be viewed as special cases of KerGNNs. We apply KerGNNs to multiple graph-related tasks and use cross-validation to make fair comparisons with benchmarks. We show that our method achieves competitive performance compared with existing state-of-the-art methods, demonstrating the potential to increase the representation ability of GNNs. We also show that the trained graph filters in KerGNNs can reveal the local graph structures of the dataset, which significantly improves model interpretability compared with conventional GNN models.
Graph neural networks (GNNs) have recently grown in popularity in the field of artificial intelligence (AI) due to their unique ability to ingest relatively unstructured data types as input. Although some elements of the GNN architecture are conceptually similar in operation to traditional neural networks (and neural network variants), other elements represent a departure from traditional deep learning techniques. This tutorial exposes the power and novelty of GNNs to AI practitioners by collating and presenting details regarding the motivations, concepts, mathematics, and applications of the most common and performant variants of GNNs. Importantly, we present this tutorial concisely alongside practical examples, thereby providing a practical and accessible tutorial on the topic of GNNs.
Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.
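The simplification described above (the SGC model) reduces to precomputing k propagation steps with a fixed normalized adjacency and then fitting a plain linear classifier. A minimal numpy sketch of the propagation step, under the usual symmetric normalization with self-loops, is:

```python
import numpy as np

def sgc_features(adj, x, k=2):
    """Simple Graph Convolution precomputation: add self-loops,
    symmetrically normalize S = D^{-1/2} (A + I) D^{-1/2}, and apply S
    to the node features k times. A plain logistic regression trained on
    the result replaces the nonlinear multi-layer GCN."""
    a_hat = adj + np.eye(adj.shape[0])               # self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    s = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    for _ in range(k):                               # fixed low-pass filter
        x = s @ x
    return x

# Two connected nodes with identical features are fixed under propagation.
adj = np.array([[0.0, 1.0], [1.0, 0.0]])
x = np.array([[1.0], [1.0]])
print(sgc_features(adj, x, k=2))
```

Because `s` and `x` are independent of any trainable parameters, the k propagation steps can be computed once up front, which is where the reported speedups come from.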
Graphs are ubiquitous in nature and can therefore serve as models for many practical but also theoretical problems. For this purpose, they can be defined as many different types which suitably reflect the individual contexts of the represented problem. To address cutting-edge problems based on graph data, the research field of Graph Neural Networks (GNNs) has emerged. Despite the field's youth and the speed at which new models are developed, many recent surveys have been published to keep track of them. Nevertheless, no overview has yet gathered which GNNs can process which kinds of graph types. In this survey, we give a detailed overview of already existing GNNs and, unlike previous surveys, categorize them according to their ability to handle different graph types and properties. We consider GNNs operating on static and dynamic graphs of different structural constitutions, with or without node or edge attributes. Moreover, we distinguish between GNN models for discrete-time or continuous-time dynamic graphs and group the models according to their architecture. We find that there are still graph types that are not or only rarely covered by existing GNN models. We point out where models are missing and give potential reasons for their absence.
Graph data are widely utilized and studied, since they preserve both individual features and complex relations. By updating and aggregating node representations, graph neural network (GNN) models, which are able to capture structural information, are gaining popularity. In the financial context, graphs are constructed from real-world data, which leads to complex graph structures and therefore requires sophisticated methodology. In this work, we provide a comprehensive review of GNN models in recent financial contexts. We first categorize the commonly used financial graphs and summarize the feature processing step for each node. We then summarize the GNN methodology for each graph type and its applications in each area, and propose some potential research directions.
Graph-structured datasets usually have irregular graph sizes and connectivities, rendering the use of recent data augmentation techniques, such as Mixup, difficult. To tackle this challenge, we present the first Mixup-like graph augmentation method at the graph level, called Graph Transplant, which mixes irregular graphs in data space. To be well defined on various scales of the graph, our method identifies a sub-structure as a mix unit that can preserve local information. Since Mixup-based methods without special consideration of context are prone to generating noisy samples, our method explicitly employs node saliency information to select meaningful subgraphs and adaptively determine the labels. We extensively validate our method with diverse GNN architectures on multiple graph classification benchmark datasets from a variety of graph domains of different sizes. Experimental results show the consistent superiority of our method over other basic data augmentation baselines. We also demonstrate that Graph Transplant enhances performance in terms of robustness and model calibration.
The deep learning literature is continuously updated with new architectures and training techniques. However, weight initialization is overlooked by most recent research, despite some interesting findings regarding random weights. On the other hand, recent works have been approaching Network Science to understand the structure and dynamics of Artificial Neural Networks (ANNs) after training. Therefore, in this work, we analyze the centrality of neurons in randomly initialized networks. We show that a higher neuronal strength variance may decrease performance, while a lower neuronal strength variance usually improves it. A new method is then proposed to rewire neuronal connections according to a preferential attachment (PA) rule based on their strength, which significantly reduces the strength variance of layers initialized by common methods. In this sense, rewiring only reorganizes connections, while preserving the magnitude and distribution of the weights. We show through an extensive statistical analysis of image classification that, when using both simple and complex architectures and learning schedules, performance is improved in most cases, both during training and testing. Our results show that, in addition to their magnitude, the organization of the weights is also relevant for better initialization of deep ANNs.
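The "neuronal strength" statistic above can be sketched directly; the snippet below is a minimal illustration of the quantity and of the invariant the rewiring preserves (it shuffles weights within a layer rather than implementing the paper's preferential-attachment rule):

```python
import numpy as np

def neuron_strengths(weight_matrix):
    """Strength of an output neuron: the sum of the absolute weights of
    its incoming connections (its weighted degree in the layer graph)."""
    return np.abs(weight_matrix).sum(axis=1)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 128))   # a common random initialization

# Rewiring only reorganizes connections: shuffling all weights within a
# layer preserves their magnitude and distribution while changing which
# neuron each weight attaches to. This is the degree of freedom the
# paper's PA rule exploits to shrink the strength variance.
shuffled = rng.permutation(w.ravel()).reshape(w.shape)

print(neuron_strengths(w).var(), neuron_strengths(shuffled).var())
```

The paper's contribution is to choose that reorganization deliberately, via preferential attachment on strengths, rather than at random.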
Mood prediction plays a vital role in mental health and mood-aware computing. The complex nature of mood arises from its dependence on a person's physical health, mental state, and surroundings, which makes its prediction a challenging task. In this work, we utilize mobile sensing data to predict happiness and stress. In addition to a person's physiological features, we also incorporate the influence of the environment via weather and social networks. To this end, we leverage phone data to construct social networks and develop a machine learning architecture that aggregates information from multiple users of the graph network and integrates it with the temporal dynamics of the data to predict the mood of all users. The construction of the social networks incurs no additional cost in terms of EMAs or data collection from users, and raises no privacy concerns. We propose an architecture that automates the prediction of social-network influence among users and is capable of handling the dynamic distribution of real-life social networks, making it scalable to large-scale networks. Our extensive evaluation highlights the improvement provided by the social network integration. We further investigate the impact of the graph topology on the model's performance.
In the last few years, graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs. This emerging field has witnessed an extensive growth of promising techniques that have been applied with success to computer science, mathematics, biology, physics and chemistry. But for any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress. This led us in March 2020 to release a benchmark framework that i) comprises a diverse collection of mathematical and real-world graphs, ii) enables fair model comparison with the same parameter budget to identify key architectures, iii) has an open-source, easy-to-use and reproducible code infrastructure, and iv) is flexible for researchers to experiment with new theoretical ideas. As of December 2022, the GitHub repository has reached 2,000 stars and 380 forks, which demonstrates the utility of the proposed open-source framework through the wide usage by the GNN community. In this paper, we present an updated version of our benchmark with a concise presentation of the aforementioned framework characteristics, an additional medium-sized molecular dataset AQSOL, similar to the popular ZINC, but with a real-world measured chemical target, and discuss how this framework can be leveraged to explore new GNN designs and insights. As a proof of value of our benchmark, we study the case of graph positional encoding (PE) in GNNs, which was introduced with this benchmark and has since spurred interest in exploring more powerful PE for Transformers and GNNs in a robust experimental setting.
Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.
Pre-publication draft of a book to be published by Morgan & Claypool publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.
Graph Convolutional Networks (GCNs) achieve impressive performance due to their remarkable representation ability in learning graph information. However, GCNs, when implemented in deep networks, require expensive computation power, making them difficult to deploy on battery-powered devices. In contrast, Spiking Neural Networks (SNNs), which perform a bio-fidelity inference process, offer an energy-efficient neural architecture. In this work, we propose SpikingGCN, an end-to-end framework that aims to integrate the embedding of GCNs with the biofidelity characteristics of SNNs. The original graph data are encoded into spike trains based on the incorporation of graph convolution. We further model biological information processing by utilizing a fully connected layer combined with neuron nodes. In a wide range of scenarios (e.g., citation networks, image graph classification, and recommender systems), our experimental results show that the proposed method can achieve competitive performance against state-of-the-art approaches. Furthermore, we show that SpikingGCN on a neuromorphic chip can bring a clear advantage in energy efficiency to graph data analysis, demonstrating its great potential to build environment-friendly machine learning models.
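The spike-train encoding mentioned above can be illustrated with simple Bernoulli rate coding; this sketch assumes rate coding of graph-convolved features into binary spikes and is an illustration of the encoding idea, not the full SpikingGCN pipeline:

```python
import numpy as np

def rate_encode(features, num_steps, rng):
    """Bernoulli rate coding: treat each feature value in [0, 1] as a
    firing probability and sample an independent 0/1 spike at every
    time step, producing a spike train of shape (num_steps, *features)."""
    probs = np.clip(features, 0.0, 1.0)
    return (rng.random((num_steps,) + probs.shape) < probs).astype(np.int8)

rng = np.random.default_rng(42)
conv_out = np.array([0.0, 0.25, 1.0])     # graph-convolved node features
spikes = rate_encode(conv_out, num_steps=1000, rng=rng)
print(spikes.mean(axis=0))  # empirical firing rates approximate the inputs
```

Downstream spiking layers then operate on these binary trains, which is what makes the computation amenable to neuromorphic hardware.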