Recent advances in neuroimaging, together with algorithmic innovations in statistical learning on network data, offer a unique pathway for integrating brain structure and function, helping to reveal some of the brain's organizational principles at the system level. In this direction, we formulate a supervised graph representation learning framework that models the relationship between brain structural connectivity (SC) and functional connectivity (FC) via a graph encoder-decoder system, where SC serves as the input for predicting empirical FC. A graph convolutional encoder is trained to capture direct and indirect interactions between brain regions that mimic actual neural communication, and to integrate information from both the structural network topology and node-level (i.e., region-specific) attributes. The encoder learns node-level SC embeddings, which are combined to generate a (whole-brain) graph-level representation used to reconstruct the empirical FC network. The proposed end-to-end model uses a multi-objective loss function to jointly reconstruct FC networks and learn discriminative graph-level representations of the SC-to-FC mapping for downstream subject classification. Comprehensive experiments demonstrate that the learned representations of this relationship capture valuable information from the intrinsic properties of subjects' brain networks and lead to improved accuracy in classifying a large cohort of heavy drinkers and non-drinkers from the Human Connectome Project. Our work offers new insights into the relationship between brain networks and supports the promising prospect of using graph representation learning to learn more about human brain activity and function.
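To make the described pipeline concrete, here is a minimal sketch, under assumed shapes and loss weights of our own choosing (not the authors' implementation), of a graph convolutional encoder over SC, an inner-product-style decoder for FC, and a joint reconstruction-plus-classification objective:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_norm, x):
        # a_norm: symmetrically normalized SC adjacency (N x N); x: node features (N x F)
        return F.relu(self.lin(a_norm @ x))

class SC2FC(nn.Module):
    def __init__(self, feat_dim, hid_dim, n_classes):
        super().__init__()
        self.enc1 = GCNLayer(feat_dim, hid_dim)
        self.enc2 = GCNLayer(hid_dim, hid_dim)
        self.classifier = nn.Linear(hid_dim, n_classes)

    def forward(self, a_norm, x):
        z = self.enc2(a_norm, self.enc1(a_norm, x))  # node-level SC embeddings
        fc_hat = torch.tanh(z @ z.t())               # decoder: reconstructed FC matrix
        graph_emb = z.mean(dim=0)                    # readout to a graph-level embedding
        return fc_hat, self.classifier(graph_emb)

def joint_loss(fc_hat, fc_true, logits, label, alpha=0.5):
    # multi-objective loss: FC reconstruction plus subject classification
    rec = F.mse_loss(fc_hat, fc_true)
    clf = F.cross_entropy(logits.unsqueeze(0), label.view(1))  # label: scalar long tensor
    return rec + alpha * clf
```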
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. Design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which emerged as a de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair experimental evaluation protocol and choose a proper collection of datasets. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions, which shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an efficient tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
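As one concrete piece of the architectures studied, the sketch below (a simplified, hypothetical interface rather than the thesis code) shows Jumping Knowledge aggregation: every message-passing layer's output is retained and concatenated before the graph-level readout.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JKGraphClassifier(nn.Module):
    def __init__(self, in_dim, hid_dim, n_layers, n_classes):
        super().__init__()
        dims = [in_dim] + [hid_dim] * n_layers
        self.layers = nn.ModuleList([nn.Linear(d_in, d_out)
                                     for d_in, d_out in zip(dims[:-1], dims[1:])])
        self.out = nn.Linear(hid_dim * n_layers, n_classes)

    def forward(self, a_norm, x):
        hs, h = [], x
        for layer in self.layers:
            h = F.relu(layer(a_norm @ h))   # one message-passing step
            hs.append(h)
        h_jk = torch.cat(hs, dim=1)         # JK: concatenate all layer outputs per node
        return self.out(h_jk.mean(dim=0))   # mean readout -> graph-level logits
```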
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
Mapping the connectome of the human brain using structural or functional connectivity has become one of the most pervasive paradigms for neuroimaging analysis. Recently, Graph Neural Networks (GNNs) motivated from geometric deep learning have attracted broad interest due to their established power for modeling complex networked data. Despite their superior performance in many fields, there has not yet been a systematic study of how to design effective GNNs for brain network analysis. To bridge this gap, we present BrainGB, a benchmark for brain network analysis with GNNs. BrainGB standardizes the process by (1) summarizing brain network construction pipelines for both functional and structural neuroimaging modalities and (2) modularizing the implementation of GNN designs. We conduct extensive experiments on datasets across cohorts and modalities and recommend a set of general recipes for effective GNN designs on brain networks. To support open and reproducible research on GNN-based brain network analysis, we host the BrainGB website at https://braingb.us with models, tutorials, examples, as well as an out-of-box Python package. We hope that this work will provide useful empirical evidence and offer insights for future research in this novel and promising direction.
Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph (e.g., degree statistics or kernel functions). However, recent years have seen a surge in approaches that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. Here we provide a conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph neural networks. We review methods to embed individual nodes as well as approaches to embed entire (sub)graphs. In doing so, we develop a unified framework to describe these recent approaches, and we highlight a number of important applications and directions for future work.
Graph neural networks (GNNs) have recently grown in popularity in the field of artificial intelligence (AI) due to their unique ability to ingest relatively unstructured data types as input. Although some elements of the GNN architecture are conceptually similar in operation to traditional neural networks (and neural network variants), other elements represent a departure from traditional deep learning techniques. This tutorial exposes the power and novelty of GNNs to AI practitioners by collating and presenting details regarding the motivations, concepts, mathematics, and applications of the most common and performant variants of GNNs. Importantly, we present this tutorial concisely, alongside practical examples, thereby providing a practical and accessible tutorial on the topic of GNNs.
Pre-publication draft of a book to be published by Morgan & Claypool Publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.
Graph-theoretic analysis has become a standard tool for modeling functional and anatomical connectivity in the brain. With the advent of connectomics, the primary graphs or networks of interest are the structural connectome (derived from DTI tractography) and the functional connectome (derived from resting-state fMRI). However, most published connectome studies have focused on either structural or functional connectivity, yet the complementary information between them, when available in the same dataset, can be jointly leveraged to improve our understanding of the brain. To this end, we propose a function-constrained structural graph variational autoencoder (FCS-GVAE) capable of incorporating information from both functional and structural connectivity in an unsupervised fashion. This yields a joint low-dimensional embedding that establishes a unified spatial coordinate system for comparison across different subjects. We evaluate our approach using the publicly available OASIS-3 Alzheimer's disease (AD) dataset and show that a variational formulation is necessary to optimally encode functional brain dynamics. Furthermore, the proposed joint embedding approach distinguishes different patient sub-populations more accurately than approaches that do not use the complementary connectivity information.
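A rough sketch of a variational graph autoencoder in the spirit of the joint embedding described above, under the assumption that structural connectivity defines the message-passing graph and functional connectivity profiles act as node features (the details are ours, not the FCS-GVAE code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointGVAE(nn.Module):
    def __init__(self, n_nodes, hid_dim, lat_dim):
        super().__init__()
        self.gc1 = nn.Linear(n_nodes, hid_dim)   # node features = FC rows (length n_nodes)
        self.gc_mu = nn.Linear(hid_dim, lat_dim)
        self.gc_logvar = nn.Linear(hid_dim, lat_dim)

    def encode(self, sc_norm, fc):
        h = F.relu(self.gc1(sc_norm @ fc))       # graph convolution over the SC graph
        return self.gc_mu(sc_norm @ h), self.gc_logvar(sc_norm @ h)

    def forward(self, sc_norm, fc):
        mu, logvar = self.encode(sc_norm, fc)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        adj_hat = torch.sigmoid(z @ z.t())                       # inner-product decoder
        return adj_hat, mu, logvar

def vgae_loss(adj_hat, sc_bin, mu, logvar):
    # sc_bin: binarized structural adjacency as a float tensor in {0, 1}
    rec = F.binary_cross_entropy(adj_hat, sc_bin)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```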
Noninvasive medical neuroimaging has yielded many discoveries about brain connectivity. Several substantial techniques for mapping morphological, structural, and functional brain connectivity have been developed to create a comprehensive roadmap of neuronal activity in the human brain. Owing to their non-Euclidean data type, graph neural networks (GNNs) provide a clever way of learning deep graph structures, and they are rapidly becoming the state-of-the-art approach, leading to enhanced performance across a variety of network neuroscience tasks. Here, we review current GNN-based methods, highlighting the ways they have been used in several applications related to brain graphs, such as missing brain graph synthesis and disease classification. We conclude by charting a path toward better applications of GNN models in network neuroscience, for neurological disorder diagnosis and population graph integration. The list of papers cited in our work is available at https://github.com/basiralab/gnns-in-network-neuroscience.
Finding an appropriate representation of dynamic activity in the brain is crucial for many downstream applications. Due to its highly dynamic nature, temporally averaged fMRI (functional magnetic resonance imaging) can only provide a narrow view of brain activity. Previous works lack the ability to learn and interpret the latent dynamics within brain architectures. This paper builds an efficient graph neural network model that incorporates both region-mapped fMRI sequences and structural connectivity obtained from DWI (diffusion-weighted imaging) as inputs. We discover good representations of the latent brain dynamics by learning sample-level adaptive adjacency matrices and performing a novel multi-resolution inner-cluster smoothing. We also attribute inputs with integrated gradients, which enables us to infer, for each task, (1) highly involved brain connections and subnetworks, (2) temporal keyframes of the imaging sequences that characterize the task, and (3) subnetworks that discriminate between individual subjects. This ability to identify critical subnetworks that characterize signal states across heterogeneous tasks and individuals is of great importance to neuroscience and other scientific domains. Extensive experiments and ablation studies demonstrate the superiority and efficiency of our proposed method for spatio-temporal graph signal modeling, with insightful interpretations of brain dynamics.
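The following sketch illustrates one plausible reading of the adaptive-adjacency idea: an adjacency learned from node embeddings is mixed with the DWI-derived structural graph before graph convolution over the fMRI sequence. For brevity this sketch learns a single shared adjacency, whereas the paper adapts it per sample; all names and shapes here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGraphConv(nn.Module):
    def __init__(self, n_nodes, in_dim, out_dim, emb_dim=16):
        super().__init__()
        self.src = nn.Parameter(torch.randn(n_nodes, emb_dim))
        self.dst = nn.Parameter(torch.randn(n_nodes, emb_dim))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_struct, x_seq):
        # a_struct: (N, N) structural graph; x_seq: (T, N, in_dim) fMRI features per frame
        a_adapt = F.softmax(F.relu(self.src @ self.dst.t()), dim=1)  # learned adjacency
        a_mix = 0.5 * (a_struct + a_adapt)                           # mix with structure
        h = torch.einsum('ij,tjf->tif', a_mix, x_seq)                # propagate each frame
        return F.relu(self.lin(h))
```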
Geometric deep learning has made great strides toward generalizing the design of structure-aware neural networks from traditional domains to non-Euclidean ones, giving rise to graph neural networks (GNNs) that can be applied to graph-structured data arising in, for example, social networks, biochemistry, and materials science. Graph convolutional networks (GCNs) in particular, inspired by their Euclidean counterparts, have been successful in processing graph data by extracting structure-aware features. However, current GNN models are often constrained by various phenomena that limit their expressive power and their ability to generalize to more complex graph datasets. Most models essentially rely on low-pass filtering of graph signals via local averaging operations, leading to oversmoothing. Moreover, to avoid severe oversmoothing, most popular GCN-style networks tend to be shallow, with narrow receptive fields, leading to underreaching. Here, we propose a hybrid GNN framework that combines traditional GCN filters with band-pass filters defined via geometric scattering. We further introduce an attention framework that allows the model to locally attend, at the node level, to the combined information from the different filters. Our theoretical results establish the complementary benefits of the scattering filters for leveraging structural information in graphs, while our experiments show the benefits of our method on various learning tasks.
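A compact sketch of the hybrid idea, assuming lazy-random-walk wavelets Psi_j = P^(2^(j-1)) - P^(2^j) as band-pass filters and a softmax attention over the per-filter outputs (an illustration of the combination, not the authors' model):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def lazy_walk(a):
    d_inv = torch.diag(1.0 / a.sum(dim=1).clamp(min=1e-8))
    return 0.5 * (torch.eye(a.shape[0]) + a @ d_inv)   # lazy random walk P = (I + A D^-1)/2

class HybridScatterLayer(nn.Module):
    def __init__(self, in_dim, out_dim, n_scales=3):
        super().__init__()
        self.n_scales = n_scales
        self.lins = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(n_scales + 1)])
        self.att = nn.Linear(out_dim, 1)

    def forward(self, a, x):
        p = lazy_walk(a)
        outs = [torch.tanh(self.lins[0](p @ x))]        # low-pass (GCN-like) channel
        pk = p
        for j in range(1, self.n_scales + 1):
            p2k = pk @ pk                               # P^(2^j)
            outs.append(torch.tanh(self.lins[j]((pk - p2k) @ x)))  # band-pass wavelet Psi_j
            pk = p2k
        h = torch.stack(outs, dim=1)                    # (N, n_filters, out_dim)
        w = F.softmax(self.att(h).squeeze(-1), dim=1)   # node-level attention over filters
        return (w.unsqueeze(-1) * h).sum(dim=1)
```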
Understanding the interplay between the spatial and temporal characteristics of neural dynamics can contribute to our understanding of information processing in the human brain. Graph neural networks (GNNs) provide a new possibility for interpreting graph-structured signals such as those observed in complex brain networks. In our study, we compare different spatio-temporal GNN architectures and examine their ability to replicate neural activity distributions obtained in functional MRI (fMRI) studies. We evaluate the performance of the GNN models across a variety of scenarios in MRI studies and compare them to VAR models, which are currently the predominant tool for directed functional connectivity analysis. We show that, by learning localized functional interactions on an anatomically grounded substrate, GNN-based approaches scale robustly to large network studies even when the available data are scarce. By including anatomical connectivity as the substrate for information propagation, such GNNs also offer a multimodal perspective on directed connectivity analysis, providing novel possibilities for studying spatio-temporal dynamics in brain networks.
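For reference, here is a minimal VAR(1) baseline of the kind such GNNs are compared against, plus a crude structurally masked variant (illustrative only; the study's actual model orders and estimators may differ):

```python
import numpy as np

def fit_var1(x):
    # x: (T, N) regional time series; returns A such that x[t] ~ A @ x[t-1]
    past, future = x[:-1], x[1:]
    a_ls, *_ = np.linalg.lstsq(past, future, rcond=None)
    return a_ls.T                               # (N, N) directed coefficient matrix

def fit_var1_masked(x, struct_mask):
    # crude illustration: zero out coefficients not supported by anatomical connections
    return fit_var1(x) * struct_mask
```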
Graph autoencoders (GAE) and variational graph autoencoders (VGAE) have emerged as powerful methods for link prediction. Their performance is less impressive on community detection problems, where, according to recent and concurring experimental evaluations, they are often outperformed by simpler alternatives such as the Louvain method. It is currently unclear to what extent community detection can be improved with GAE and VGAE, especially in the absence of node features. Moreover, it is uncertain whether good performance on link prediction can be preserved at the same time. In this paper, we show that these two tasks can be jointly solved with high accuracy. To this end, we introduce and theoretically study a community-preserving message passing scheme, doping our GAE and VGAE encoders by considering both the initial graph structure and modularity-based prior communities when computing the embedding space. We also propose novel training and optimization strategies, including the introduction of a modularity-inspired regularizer to complement the existing reconstruction loss for joint link prediction and community detection. We demonstrate the empirical effectiveness of our approach, referred to as modularity-aware GAE and VGAE, through in-depth experimental validation on various real-world graphs.
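A hedged sketch of the loss-side idea: a modularity-inspired regularizer, computed from soft community memberships, is added to the usual GAE reconstruction loss. The encoder "doping" with prior communities is omitted, and the soft assignment head below is an assumption of ours.

```python
import torch
import torch.nn.functional as F

def modularity_regularizer(z, adj, n_comms, temperature=1.0):
    # Soft memberships would normally come from a learned head; as a stand-in we softmax
    # the first n_comms embedding dimensions (assumes latent dim >= n_comms).
    c = F.softmax(z[:, :n_comms] / temperature, dim=1)   # (N, K) soft memberships
    deg = adj.sum(dim=1)
    m = adj.sum() / 2.0
    b = adj - torch.outer(deg, deg) / (2.0 * m)          # modularity matrix B
    return torch.trace(c.t() @ b @ c) / (2.0 * m)        # soft modularity Q

def gae_loss(adj_hat, adj, z, n_comms=8, lam=0.1):
    # adj: float adjacency in {0, 1}; adj_hat: decoded edge probabilities
    rec = F.binary_cross_entropy(adj_hat, adj)           # standard GAE reconstruction loss
    return rec - lam * modularity_regularizer(z, adj, n_comms)  # reward high modularity
```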
Graphs can model complex interactions between entities and arise naturally in many important applications. These applications can often be cast into standard graph learning tasks, in which a crucial step is to learn low-dimensional graph representations. Graph neural networks (GNNs) are currently the most popular model among graph embedding approaches. However, standard GNNs in the neighborhood aggregation paradigm suffer from limited discriminative power in distinguishing high-order graph structures as opposed to low-order structures. To capture high-order structures, researchers have resorted to motifs and developed motif-based GNNs. Yet existing motif-based GNNs still suffer from limited discriminative power on high-order structures. To overcome these limitations, we propose Motif GNN (MGNN), a novel framework to better capture high-order structures, hinging on our proposed motif redundancy minimization operator and injective motif combination. First, MGNN produces a set of node representations with respect to each motif. The next phase is our proposed redundancy minimization among motifs, which compares the motifs with each other and distills the features unique to each motif. Finally, MGNN updates node representations by combining multiple representations from different motifs. In particular, to enhance discriminative power, MGNN uses an injective function to combine the representations with respect to the different motifs. We further show, with a theoretical analysis, that our proposed architecture increases the expressive power of GNNs. We demonstrate that MGNN outperforms state-of-the-art methods on seven public benchmarks for both node classification and graph classification tasks.
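A minimal sketch of motif-based message passing, assuming a triangle-motif adjacency and an MLP-based combination of per-motif representations; the redundancy-minimization operator of MGNN is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def triangle_motif_adjacency(a):
    # (A @ A) * A counts, for each existing edge, the triangles it participates in
    return (a @ a) * a

class MotifGNNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.edge_lin = nn.Linear(in_dim, out_dim)
        self.motif_lin = nn.Linear(in_dim, out_dim)
        self.combine = nn.Sequential(nn.Linear(2 * out_dim, out_dim), nn.ReLU(),
                                     nn.Linear(out_dim, out_dim))

    def forward(self, a, x):
        a_motif = triangle_motif_adjacency(a)
        h_edge = F.relu(self.edge_lin(a @ x))        # representation w.r.t. the edge motif
        h_tri = F.relu(self.motif_lin(a_motif @ x))  # representation w.r.t. the triangle motif
        return self.combine(torch.cat([h_edge, h_tri], dim=1))  # MLP combination of motifs
```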
In the last few years, graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs. This emerging field has witnessed an extensive growth of promising techniques that have been applied with success to computer science, mathematics, biology, physics and chemistry. But for any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress. This led us in March 2020 to release a benchmark framework that i) comprises a diverse collection of mathematical and real-world graphs, ii) enables fair model comparison with the same parameter budget to identify key architectures, iii) has an open-source, easy-to-use and reproducible code infrastructure, and iv) is flexible for researchers to experiment with new theoretical ideas. As of December 2022, the GitHub repository has reached 2,000 stars and 380 forks, which demonstrates the utility of the proposed open-source framework through the wide usage by the GNN community. In this paper, we present an updated version of our benchmark with a concise presentation of the aforementioned framework characteristics, an additional medium-sized molecular dataset AQSOL, similar to the popular ZINC, but with a real-world measured chemical target, and discuss how this framework can be leveraged to explore new GNN designs and insights. As a proof of value of our benchmark, we study the case of graph positional encoding (PE) in GNNs, which was introduced with this benchmark and has since spurred interest of exploring more powerful PE for Transformers and GNNs in a robust experimental setting.
Functional connectivity network (FCN) data from functional magnetic resonance imaging (fMRI) are increasingly used for the diagnosis of brain disorders. However, state-of-the-art studies typically construct the FCN using a single brain parcellation atlas at a single spatial scale, which largely neglects functional interactions across different spatial scales in hierarchical manners. In this study, we propose a novel framework for multiscale FCN analysis for brain disorder diagnosis. We first use a set of well-defined multiscale atlases to compute multiscale FCNs. We then utilize the biologically meaningful hierarchical relationships among brain regions across the multiscale atlases to perform nodal pooling over multiple spatial scales, namely "atlas-guided pooling". Accordingly, we propose a multiscale-atlas-based hierarchical graph convolutional network (MAHGCN), built on stacked layers of graph convolution and atlas-guided pooling, for comprehensive extraction of diagnostic information from multiscale FCNs. Experiments on neuroimaging data from 1792 subjects demonstrate the effectiveness of our proposed method in diagnosing Alzheimer's disease (AD), the prodromal stage of AD (i.e., mild cognitive impairment [MCI]), and autism spectrum disorder (ASD), with accuracies of 88.9%, 78.6%, and 72.7%, respectively. All results show significant advantages of our proposed method over other competing methods. This study not only demonstrates the feasibility of diagnosing brain disorders from resting-state fMRI empowered by deep learning, but also highlights that the functional interactions in the multiscale brain hierarchy are worth exploring and integrating into deep learning network architectures for a better understanding of the neuropathology of brain disorders.
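The sketch below illustrates atlas-guided pooling under the assumption that fine-to-coarse region membership is given by a fixed assignment matrix derived from the atlas hierarchy (names and shapes are ours, not the MAHGCN code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AtlasGuidedBlock(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_fine, x_fine, assign):
        # a_fine: (Nf, Nf) FCN at the fine scale; x_fine: (Nf, in_dim) node features
        # assign: (Nc, Nf) row-normalized membership of fine regions in coarse regions
        h = F.relu(self.lin(a_fine @ x_fine))    # graph convolution at the fine scale
        x_coarse = assign @ h                    # atlas-guided pooling of node features
        a_coarse = assign @ a_fine @ assign.t()  # pooled connectivity for the next scale
        return a_coarse, x_coarse
```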
Multimodal data provide complementary information about natural phenomena by integrating data from various domains with very different statistical properties. Capturing intra-modality and cross-modality information is an essential capability of multimodal learning methods. Geometry-aware data analysis approaches provide these capabilities by implicitly representing data of various modalities based on their underlying geometric structures. Moreover, in many applications, data are explicitly defined on intrinsic geometric structures. Deep learning on non-Euclidean domains is an emerging research field that has recently been investigated in many studies, yet most popular approaches are developed for unimodal data. This paper proposes a multimodal multi-scale graph wavelet convolutional network (M-GWCN) as an end-to-end network. M-GWCN simultaneously finds intra-modality representations by applying the multiscale graph wavelet transform, which provides helpful localization properties in the graph domain of each modality, and cross-modality representations by learning permutations that encode correlations among the various modalities. M-GWCN is not limited to homogeneous modalities with the same number of data points, nor does it require any prior knowledge indicating correspondences between modalities. Several semi-supervised node classification experiments have been conducted on three popular unimodal explicit graph datasets and five multimodal implicit ones. The experimental results indicate the superiority and effectiveness of the proposed method compared to spectral graph-domain convolutional neural networks and state-of-the-art multimodal methods.
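A small illustration of a multiscale graph wavelet filter bank built from the normalized Laplacian eigendecomposition, using an assumed band-pass kernel g(s*lambda) = s*lambda*exp(-s*lambda); the cross-modality permutation learning of M-GWCN is not shown.

```python
import torch

def graph_wavelet_features(a, x, scales=(1.0, 2.0, 4.0)):
    deg = a.sum(dim=1).clamp(min=1e-8)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    lap = torch.eye(a.shape[0]) - d_inv_sqrt @ a @ d_inv_sqrt  # normalized Laplacian
    lam, u = torch.linalg.eigh(lap)                            # spectral decomposition
    feats = []
    for s in scales:
        g = (s * lam) * torch.exp(-s * lam)                    # band-pass kernel g(s*lambda)
        psi = u @ torch.diag(g) @ u.t()                        # wavelet operator at scale s
        feats.append(psi @ x)                                  # localized filtering of features
    return torch.cat(feats, dim=1)                             # multiscale node representation
```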
Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing, along with a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas. We then summarize recent advances in developing basic GSP tools, including methods for sampling, filtering or graph learning. Next, we review progress in several application areas using GSP, including processing and analysis of sensor network data, biological data, and applications to image processing and machine learning.
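As a worked example of the core operations surveyed, the snippet below computes a graph Fourier transform and an ideal low-pass filtering of a signal on a graph (illustrative of the concepts only, not tied to any specific GSP toolbox):

```python
import numpy as np

def graph_fourier_lowpass(a, signal, k):
    deg = np.diag(a.sum(axis=1))
    lap = deg - a                           # combinatorial Laplacian L = D - A
    lam, u = np.linalg.eigh(lap)            # eigenvalues act as graph frequencies (ascending)
    s_hat = u.T @ signal                    # graph Fourier transform of the signal
    s_hat[k:] = 0.0                         # ideal low-pass: keep the k smoothest frequencies
    return u @ s_hat                        # inverse transform back to the vertex domain
```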
Deep learning has been shown to be successful in a number of domains, ranging from acoustics, images, to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, substantial research efforts have been devoted to applying deep learning methods to graphs, resulting in beneficial advances in graph analysis techniques. In this survey, we comprehensively review the different types of deep learning methods on graphs. We divide the existing methods into five categories based on their model architectures and training strategies: graph recurrent neural networks, graph convolutional networks, graph autoencoders, graph reinforcement learning, and graph adversarial methods. We then provide a comprehensive overview of these methods in a systematic manner mainly by following their development history. We also analyze the differences and compositions of different methods. Finally, we briefly outline the applications in which they have been used and discuss potential future research directions.
Inferring missing links or detecting spurious ones based on observed graphs, known as link prediction, is a long-standing challenge in graph data analysis. With the recent advances in deep learning, graph neural networks have been used for link prediction and have achieved state-of-the-art performance. Nevertheless, existing methods developed for this purpose are typically discriminative, computing features of local subgraphs around two neighboring nodes and predicting potential links between them from the perspective of subgraph classification. In this formalism, the selection of enclosing subgraphs and heuristic structural features for subgraph classification significantly affects the performance of the methods. To overcome this limitation, this paper proposes a novel and radically different link prediction algorithm based on the network reconstruction theory, called GraphLP. Instead of sampling positive and negative links and heuristically computing the features of their enclosing subgraphs, GraphLP utilizes the feature learning ability of deep-learning models to automatically extract the structural patterns of graphs for link prediction under the assumption that real-world graphs are not locally isolated. Moreover, GraphLP explores high-order connectivity patterns to utilize the hierarchical organizational structures of graphs for link prediction. Our experimental results on all common benchmark datasets from different applications demonstrate that the proposed method consistently outperforms other state-of-the-art methods. Unlike the discriminative neural network models used for link prediction, GraphLP is generative, which provides a new paradigm for neural-network-based link prediction.
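One hedged reading of this generative, reconstruction-based formulation: propagate the observed adjacency through a learned mixture of normalized higher-order powers and score candidate links from the reconstruction. The sketch is our interpretation for illustration, not the GraphLP code.

```python
import torch
import torch.nn as nn

class ReconstructionLinkPredictor(nn.Module):
    def __init__(self, max_order=3):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(max_order) / max_order)
        self.max_order = max_order

    def forward(self, a_obs):
        d_inv_sqrt = torch.diag(a_obs.sum(dim=1).clamp(min=1e-8).pow(-0.5))
        a_norm = d_inv_sqrt @ a_obs @ d_inv_sqrt
        rec, power = 0.0, torch.eye(a_obs.shape[0])
        for k in range(self.max_order):
            power = power @ a_norm                   # (k+1)-hop connectivity pattern
            rec = rec + self.weights[k] * power      # weighted high-order mixture
        return torch.sigmoid(rec)                    # edge-probability reconstruction

# Training could minimize F.binary_cross_entropy(model(a_obs), a_obs) over observed entries
# and rank unobserved pairs by their reconstructed probabilities.
```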