In applications such as social, energy, transportation, sensor, and neuronal networks, high-dimensional data naturally reside on the vertices of weighted graphs. The emerging field of signal processing on graphs merges algebraic and spectral graph theoretic concepts with computational harmonic analysis to process such signals on graphs. In this tutorial overview, we outline the main challenges of the area, discuss different ways to define graph spectral domains, which are the analogues to the classical frequency domain, and highlight the importance of incorporating the irregular structures of graph data domains when processing signals on graphs. We then review methods to generalize fundamental operations such as filtering, translation, modulation, dilation, and downsampling to the graph setting, and survey the localized, multiscale transforms that have been proposed to efficiently extract information from high-dimensional data on graphs. We conclude with a brief discussion of open issues and possible extensions.
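As a minimal illustration of the graph spectral domain discussed above (a hedged NumPy sketch, not code from the paper; the toy weighted path graph and variable names are assumptions), the graph Fourier transform of a signal can be computed from the eigendecomposition of the combinatorial Laplacian:

```python
import numpy as np

# Toy weighted path graph on 4 vertices (weights are illustrative assumptions)
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 2.0],
              [0.0, 0.0, 2.0, 0.0]])
L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian

# Eigenvectors of L play the role of Fourier modes; eigenvalues act as frequencies
lam, U = np.linalg.eigh(L)

x = np.array([1.0, 0.8, -0.2, 0.3])     # a signal on the vertices
x_hat = U.T @ x                          # graph Fourier transform
x_rec = U @ x_hat                        # inverse transform recovers the signal
print(lam)                               # graph "frequencies"
print(np.allclose(x, x_rec))             # True
```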
Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing, along with a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas. We then summarize recent advances in developing basic GSP tools, including methods for sampling, filtering or graph learning. Next, we review progress in several application areas using GSP, including processing and analysis of sensor network data, biological data, and applications to image processing and machine learning.
We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph. Our approach is based on defining scaling using the graph analogue of the Fourier domain, namely the spectral decomposition of the discrete graph Laplacian $L$. Given a wavelet generating kernel $g$ and a scale parameter $t$, we define the scaled wavelet operator $T_g^t = g(tL)$. The spectral graph wavelets are then formed by localizing this operator by applying it to an indicator function. Subject to an admissibility condition on $g$, this procedure defines an invertible transform. We explore the localization properties of the wavelets in the limit of fine scales. Additionally, we present a fast Chebyshev polynomial approximation algorithm for computing the transform that avoids the need for diagonalizing $L$. We highlight potential applications of the transform through examples of wavelets on graphs corresponding to a variety of different problem domains.
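The construction can be sketched in a few lines. The snippet below is an illustrative NumPy sketch, not the authors' code, and the kernel, scale, and toy graph are our own assumptions: it applies an approximation of $g(tL)$ to an indicator vector via the Chebyshev recurrence, so no eigendecomposition of $L$ is needed.

```python
import numpy as np

def chebyshev_apply(L, f, x, lmax, K=30, n_quad=200):
    """Approximate f(L) @ x with a degree-K Chebyshev expansion,
    avoiding an explicit eigendecomposition of L."""
    # Chebyshev coefficients of f on [0, lmax]
    theta = np.pi * (np.arange(n_quad) + 0.5) / n_quad
    y = 0.5 * lmax * (np.cos(theta) + 1.0)          # map [-1, 1] -> [0, lmax]
    c = np.array([2.0 / n_quad * np.sum(f(y) * np.cos(k * theta))
                  for k in range(K + 1)])
    # Three-term recurrence on the shifted operator 2L/lmax - I
    Lsh = (2.0 / lmax) * L - np.eye(L.shape[0])
    t_prev, t_curr = x, Lsh @ x
    out = 0.5 * c[0] * t_prev + c[1] * t_curr
    for k in range(2, K + 1):
        t_prev, t_curr = t_curr, 2.0 * (Lsh @ t_curr) - t_prev
        out += c[k] * t_curr
    return out

# Wavelet at scale t localized at vertex i: apply g(tL) to the indicator of i
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)            # 4-cycle graph (assumption)
L = np.diag(W.sum(1)) - W
lmax = 4.0                                           # upper bound on the spectrum of L
g = lambda lam: lam * np.exp(-lam)                   # example band-pass kernel, g(0) = 0
delta = np.zeros(4); delta[0] = 1.0
psi_t0 = chebyshev_apply(L, lambda lam: g(0.5 * lam), delta, lmax)   # scale t = 0.5
print(psi_t0)
```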
We introduce a novel harmonic analysis for functions defined on directed graphs, of which the random walk operator is the cornerstone. As a first step, we consider the set of eigenvectors of the random walk operator as a non-orthogonal Fourier-type basis for functions over directed graphs. We obtain a frequency interpretation by connecting the variation of the eigenvectors of the random walk operator, measured through their Dirichlet energy, to the real part of their associated eigenvalues. From this Fourier basis, we proceed further and build a multiscale analysis on directed graphs. By extending the construction of Coifman and Maggioni to directed graphs, we propose both a redundant wavelet transform and a decimated wavelet transform. The development of our harmonic analysis on directed graphs then leads us to consider semi-supervised learning problems and signal modeling problems on directed graphs, which highlight the efficiency of our framework.
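A hedged sketch of the first step described above (the toy directed graph and the normalization choice are our assumptions, not the paper's code): build the random walk operator $P = D_{\mathrm{out}}^{-1} A$ of a directed graph and order its eigenvectors by the real part of the associated eigenvalues, which serves as the frequency proxy mentioned in the abstract.

```python
import numpy as np

# Toy strongly connected directed graph (adjacency is an illustrative assumption)
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)     # random walk operator, row-stochastic

# Eigenvectors of P form a (generally non-orthogonal) Fourier-type basis
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)           # sort by real part of the eigenvalues
vals, vecs = vals[order], vecs[:, order]

x = np.array([1.0, 0.0, -1.0, 0.5])      # a signal on the vertices
x_hat = np.linalg.solve(vecs, x)         # coefficients in the non-orthogonal basis
print(vals.real)                         # real parts linked to a notion of frequency
print(np.allclose(vecs @ x_hat, x))      # True: the basis reconstructs the signal
```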
Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into networks used to model them. Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. The purpose of this paper is to overview different examples of geometric deep learning problems and present available solutions, key difficulties, applications, and future research directions in this nascent field.
Graph representation learning has many real-world applications, from super-resolution imaging and 3D computer vision to drug repurposing, protein classification, and social network analysis. An adequate representation of graph data is vital to the learning performance of a statistical or machine learning model for graph-structured data. In this paper, we propose a novel multiscale representation system for graph data, called decimated framelets, which forms a localized tight frame on the graph. The decimated framelet system allows storage of the graph data representation on a coarse-grained chain and processes the graph data at multiple scales where, at each scale, the data is stored on a subgraph. Based on this, we establish decimated G-framelet transforms for the decomposition and reconstruction of graph data at multiple resolutions via a constructive, data-driven filter bank. The graph framelets are built on a chain-based orthonormal basis that supports fast graph Fourier transforms. From this, we give a fast algorithm for the decimated G-framelet transform, or FGT, that has linear computational complexity O(N) for a graph of size N. The theory of decimated framelets and the FGT is verified with numerical examples for random graphs. The effectiveness is demonstrated in real-world applications, including multiresolution analysis for traffic networks and graph neural networks for graph classification tasks.
In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.
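A minimal sketch of the kind of spectral filter this formulation relies on (NumPy only, with randomly initialized weights standing in for learned parameters; the toy graph, sizes, and names are our assumptions, not the authors' implementation): a Chebyshev polynomial filter of the rescaled normalized Laplacian applied to a node-feature matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 5 nodes on a ring (an illustrative assumption)
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1.0

d = A.sum(axis=1)
L = np.eye(5) - (A / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]   # normalized Laplacian
lmax = 2.0                                                        # upper bound for its spectrum
L_tilde = (2.0 / lmax) * L - np.eye(5)                            # rescaled to [-1, 1]

def cheb_conv(X, L_tilde, Theta):
    """One Chebyshev graph convolution (K >= 2): sum_k T_k(L_tilde) @ X @ Theta[k]."""
    K = len(Theta)
    T_prev, T_curr = X, L_tilde @ X
    out = T_prev @ Theta[0] + T_curr @ Theta[1]
    for k in range(2, K):
        T_prev, T_curr = T_curr, 2.0 * (L_tilde @ T_curr) - T_prev
        out = out + T_curr @ Theta[k]
    return np.maximum(out, 0.0)                                   # ReLU nonlinearity

X = rng.normal(size=(5, 3))                   # 3 input features per node
Theta = rng.normal(size=(4, 3, 2)) * 0.1      # order-3 filter, 2 output features
H = cheb_conv(X, L_tilde, Theta)
print(H.shape)                                # (5, 2)
```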
Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications from the domains of network analysis, computational social science, or computer graphics. In this paper, we propose a unified framework allowing us to generalize CNN architectures to non-Euclidean domains (graphs and manifolds) and learn local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. We test the proposed method on standard tasks from the realms of image, graph, and 3D shape analysis and show that it consistently outperforms previous approaches.
In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance, spectral clustering appears slightly mysterious, and it is not obvious why it works at all or what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
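For readers who want the algorithm in a nutshell, here is a hedged NumPy/SciPy sketch of one common variant (normalized spectral clustering on a toy two-blob similarity graph; the data, the Gaussian similarity, and all parameter choices are our assumptions, not code from the tutorial):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)

# Toy data: two Gaussian blobs in the plane
X = np.vstack([rng.normal(loc=(0, 0), scale=0.3, size=(30, 2)),
               rng.normal(loc=(3, 3), scale=0.3, size=(30, 2))])

# Fully connected similarity graph with a Gaussian kernel
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-sq_dists / (2 * 0.5 ** 2))
np.fill_diagonal(W, 0.0)

# Symmetric normalized Laplacian L_sym = I - D^{-1/2} W D^{-1/2}
d = W.sum(axis=1)
L_sym = np.eye(len(X)) - (W / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]

# Embed each point with the eigenvectors of the k smallest eigenvalues, then run k-means
k = 2
vals, vecs = np.linalg.eigh(L_sym)
U = vecs[:, :k]
U = U / np.linalg.norm(U, axis=1, keepdims=True)   # row normalization (Ng-Jordan-Weiss style)
_, labels = kmeans2(U, k, minit='++')
print(labels)
```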
Markov chains are a class of probabilistic models that have found widespread application in the quantitative sciences. This is in part due to their versatility, but is compounded by the ease with which they can be probed analytically. This tutorial provides an in-depth introduction to Markov chains and explores their connection to graphs and random walks. We use tools from linear algebra and graph theory to describe the transition matrices of different types of Markov chains, with a particular focus on exploring the properties of the eigenvalues and eigenvectors corresponding to these matrices. The results presented are relevant to a number of methods in machine learning and data mining, which we describe at various stages. Rather than being a novel academic study in its own right, this text presents a collection of known results, together with some new concepts. Moreover, the tutorial focuses on offering intuition to readers rather than formal understanding, and only assumes a basic exposure to concepts from linear algebra and probability theory. It is therefore accessible to students and researchers from a wide variety of disciplines.
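As a small worked example in the spirit of this tutorial (the transition matrix below is an assumption chosen for illustration): the stationary distribution of an ergodic chain is the left eigenvector of the transition matrix with eigenvalue 1, which can also be reached by repeatedly applying the chain.

```python
import numpy as np

# Row-stochastic transition matrix of a small ergodic Markov chain (illustrative)
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Stationary distribution: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

# Cross-check with the power method: any start distribution converges to pi
mu = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    mu = mu @ P
print(pi)
print(np.allclose(mu, pi))   # True for this ergodic chain
```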
Spectral-based graph neural networks (SGNNs) have been attracting increasing attention in graph representation learning. However, existing SGNNs are limited to implementing graph filters with rigid transforms (e.g., the graph Fourier transform or predefined graph wavelet transforms) and cannot adapt to the signals residing on the graph and the task at hand. In this paper, we propose a novel class of graph neural networks that realizes graph filters with adaptive graph wavelets. Specifically, the adaptive graph wavelets are learned with neural network-parameterized lifting structures, in which structure-aware lifting operations (i.e., prediction and update operations) are developed to jointly consider graph structures and node features. We propose to lift based on diffusion wavelets to alleviate the structural information loss induced by partitioning non-bipartite graphs. By design, the locality and sparsity of the resulting wavelet transform, as well as the scalability of the lifting structure, are guaranteed. We further derive a soft-thresholding filtering operation by learning sparse graph representations in terms of the learned wavelets, yielding localized, efficient, and scalable wavelet-based graph filters. To ensure that the learned graph representations are invariant to node permutations, a layer is employed at the input of the network to reorder the nodes according to their local topological information. We evaluate the proposed networks on node-level and graph-level representation learning tasks on benchmark citation and bioinformatics graph datasets. Extensive experiments demonstrate the superiority of the proposed networks over existing SGNNs in terms of accuracy, efficiency, and scalability.
A link stream is a set of triplets $(t, u, v)$ indicating that $u$ and $v$ interacted at time $t$. Link streams model numerous datasets and their proper study is crucial in many applications. In practice, raw link streams are often aggregated or transformed into time series or graphs where decisions are made. Yet, it remains unclear how the dynamical and structural information of a raw link stream carries over into the transformed object. This work shows that it is possible to shed light on this question by studying link streams via algebraically linear graph and signal operators, for which we introduce a novel linear matrix framework for the analysis of link streams. We show that, due to their linearity, most methods in signal processing can be easily adopted by our framework to analyze the time/frequency information of link streams. However, the availability of linear graph methods to analyze relational/structural information is limited. We address this limitation by developing (i) a new basis for graphs that allows us to decompose them into structures at different resolution levels; and (ii) filters for graphs that allow us to change their structural information in a controlled manner. By plugging these developments and their time-domain counterparts into our framework, we are able to (i) obtain a new basis for link streams that allows us to represent them in a frequency-structure domain; and (ii) show that many interesting transformations of link streams, such as the aggregation of interactions or their embedding into a Euclidean space, can be seen as simple filters in our frequency-structure domain.
Graph neural networks (GNNs) have recently grown in popularity in the field of artificial intelligence (AI) due to their unique ability to ingest relatively unstructured data types as input. Although some elements of the GNN architecture are conceptually similar in operation to traditional neural networks (and neural network variants), other elements represent a departure from traditional deep learning techniques. This tutorial exposes the power and novelty of GNNs to AI practitioners by collating and presenting details regarding the motivations, concepts, mathematics, and applications of the most common and performant variants of GNNs. Importantly, we present this tutorial concisely, alongside practical examples, thereby providing a practical and accessible tutorial on the topic of GNNs.
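To fix ideas for one of the most common variants mentioned above, here is a hedged NumPy sketch of a single graph convolutional network (GCN) layer; the toy graph, feature sizes, and the random weights standing in for learned parameters are our assumptions, not material from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(2)

# One GCN layer: H' = ReLU( D_hat^{-1/2} A_hat D_hat^{-1/2} H W ), with A_hat = A + I
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)     # toy undirected graph (assumption)
A_hat = A + np.eye(4)                         # add self-loops
d_hat = A_hat.sum(axis=1)
A_norm = (A_hat / np.sqrt(d_hat)[:, None]) / np.sqrt(d_hat)[None, :]

H = rng.normal(size=(4, 3))                   # input node features
W = rng.normal(size=(3, 2)) * 0.1             # learnable weights (random stand-ins here)
H_next = np.maximum(A_norm @ H @ W, 0.0)      # message passing + nonlinearity
print(H_next.shape)                           # (4, 2)
```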
A basic premise in graph signal processing (GSP) is that a graph encoding pairwise (anti-)correlations of the target signal as edge weights is exploited for graph filtering. However, existing fast graph sampling schemes are designed and tested only for positive graphs describing positive correlations. In this paper, we show that for datasets with strong inherent anti-correlations, a suitable graph contains both positive and negative edges. In response, we propose a linear-time signed graph sampling method centered on the concept of balanced signed graphs. Specifically, given an empirical covariance data matrix $\bar{\mathbf{C}}$, we first learn a sparse inverse matrix (graph Laplacian) $\mathcal{L}$ corresponding to a signed graph $\mathcal{G}$. We define the eigenvectors of the Laplacian $\mathcal{L}_B$ of a balanced signed graph $\mathcal{G}_B$, which approximates $\mathcal{G}$ via edge weight augmentation, as graph frequency components. Next, we choose samples to minimize the low-pass filter reconstruction error in two steps. We first align all Gershgorin disc left-ends of $\mathcal{L}_B$ at its smallest eigenvalue $\lambda_{\min}(\mathcal{L}_B)$ via a similarity transform $\mathcal{L}_p = \mathbf{S} \mathcal{L}_B \mathbf{S}^{-1}$, leveraging a recent linear algebra theorem called Gershgorin disc perfect alignment (GDPA). We then sample on $\mathcal{L}_p$ using a previous fast Gershgorin disc alignment sampling (GDAS) scheme. Experimental results show that our signed graph sampling method noticeably outperforms existing fast sampling schemes on various datasets.
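To give a concrete feel for the Gershgorin machinery this method builds on (a toy NumPy sketch of two standard linear-algebra facts, not the authors' sampling code; the matrix and the scaling vector are assumptions): the smallest disc left-end $\min_i (L_{ii} - \sum_{j \ne i} |L_{ij}|)$ lower-bounds $\lambda_{\min}(L)$, and a diagonal similarity transform $\mathbf{S} L \mathbf{S}^{-1}$ moves the discs without changing the eigenvalues.

```python
import numpy as np

# A generalized Laplacian of a small signed graph with self-loops (illustrative values)
L = np.array([[ 2.0, -1.0,  0.8],
              [-1.0,  1.6,  0.5],
              [ 0.8,  0.5,  1.2]])

def disc_left_ends(M):
    """Gershgorin disc left-ends: diagonal entry minus the off-diagonal row radius."""
    radii = np.abs(M).sum(axis=1) - np.abs(np.diag(M))
    return np.diag(M) - radii

lam_min = np.linalg.eigvalsh(L).min()
print(disc_left_ends(L).min() <= lam_min)    # True: the smallest left-end bounds lambda_min

# A diagonal similarity transform S L S^{-1} keeps the eigenvalues but moves the discs;
# GDPA-style sampling exploits such transforms to align all left-ends at lambda_min.
s = np.array([1.0, 0.8, 1.3])                # illustrative scaling (not the GDPA scalars)
L_p = np.diag(s) @ L @ np.diag(1.0 / s)
print(np.allclose(np.sort(np.linalg.eigvals(L_p).real),
                  np.linalg.eigvalsh(L)))    # True: eigenvalues are unchanged
print(disc_left_ends(L_p))                   # left-ends shift under the transform
```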
Graph convolutional networks (GCNs) have proven to be a powerful concept that has been successfully applied to a large variety of tasks across many domains over the past years. In this work, we study the theory that paved the way to the definition of GCNs, including related parts of classical graph theory. We also discuss and experimentally demonstrate key properties and limitations of GCNs, such as those caused by the statistical dependency of samples introduced by the edges of the graph, which leads to biased estimates of the full gradient. Another limitation we discuss is the negative impact of minibatch sampling on model performance. As a consequence, gradients are computed on the whole dataset during parameter updates, undermining scalability to large graphs. To account for this, we research alternative methods that allow safely learning good parameters while sampling only a subset of the data per iteration. We reproduce the results reported in the work of Kipf et al. and propose an implementation inspired by SIGN, a sampling-free minibatch method. Finally, we compare the two implementations on a benchmark dataset, showing that they are comparable in terms of prediction accuracy for the task of semi-supervised node classification.
Pre-publication draft of a book to be published by Morgan & Claypool Publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.
Multimodal data provide complementary information about a natural phenomenon by integrating data from various domains with very different statistical properties. Capturing intra-modality and cross-modality information is an essential capability of multimodal learning methods. Geometry-aware data analysis approaches provide these capabilities by implicitly representing data of various modalities based on their underlying geometric structures. Moreover, in many applications, data are explicitly defined on an intrinsic geometric structure. Deep learning on non-Euclidean domains is an emerging research field that has recently been investigated in many studies, but most of the popular methods are developed for unimodal data. This paper proposes a multimodal multi-scale graph wavelet convolutional network (M-GWCN) as an end-to-end network. M-GWCN simultaneously finds intra-modality representations by applying multiscale graph wavelet transforms, which provide helpful localization properties in the graph domain of each modality, and cross-modality representations by learning permutations that encode correlations among the various modalities. M-GWCN is not limited to homogeneous modalities with the same number of data points, nor does it require any prior knowledge indicating correspondences between modalities. Several semi-supervised node classification experiments have been conducted on three popular unimodal explicit graph datasets and five multimodal implicit ones. The experimental results indicate the superiority and effectiveness of the proposed method compared with spectral graph-domain convolutional neural networks and state-of-the-art multimodal methods.
The scattering transform is a multilayered, wavelet-based transform initially introduced as a model of convolutional neural networks (CNNs) that has played a foundational role in our understanding of these networks' stability and invariance properties. Subsequently, there has been widespread interest in extending the success of CNNs to data sets with non-Euclidean structure, such as graphs and manifolds, leading to the emerging field of geometric deep learning. In order to improve our understanding of the architectures used in this new field, several papers have proposed generalizations of the scattering transform to non-Euclidean data structures such as undirected graphs and compact Riemannian manifolds. In this paper, we introduce a general, unified model for geometric scattering on measure spaces. Our proposed framework includes previous work on geometric scattering as special cases but also applies to more general settings such as directed graphs, signed graphs, and manifolds with boundary. We propose a new criterion that identifies to which groups a useful representation should be invariant, and show that this criterion is sufficient to guarantee that the scattering transform has desirable stability and invariance properties. Additionally, we consider finite measure spaces that are obtained by randomly sampling an unknown manifold. We propose two methods for constructing a data-driven graph on which the associated graph scattering transform approximates the scattering transform on the underlying manifold. Moreover, we use a diffusion-maps based approach to prove quantitative estimates on the rate of convergence of one of these approximations as the number of sample points tends to infinity. Finally, we showcase the utility of our method on spherical images, directed graphs, and high-dimensional single-cell data.
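As a point of reference for the graph case, here is a hedged NumPy sketch of one common graph instantiation of scattering built from lazy diffusion wavelets; it is not the measure-space framework introduced in the paper, and the toy graph, scales, and aggregation choices are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy undirected graph: a ring of 8 nodes (assumption)
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

P = 0.5 * (np.eye(n) + A / A.sum(axis=1, keepdims=True))   # lazy random walk operator

def diffusion_wavelets(P, J):
    """Psi_j = P^(2^(j-1)) - P^(2^j) for j = 1..J, plus the low-pass P^(2^J)."""
    Psi = [np.linalg.matrix_power(P, 2 ** (j - 1)) - np.linalg.matrix_power(P, 2 ** j)
           for j in range(1, J + 1)]
    return Psi, np.linalg.matrix_power(P, 2 ** J)

def scattering(x, Psi, low):
    """First- and second-order scattering coefficients of a graph signal x."""
    coeffs = [np.abs(low @ x).sum()]
    for j1, W1 in enumerate(Psi):
        u1 = np.abs(W1 @ x)                      # wavelet filtering + modulus
        coeffs.append(np.abs(low @ u1).sum())
        for W2 in Psi[j1 + 1:]:                  # second order: j2 > j1
            coeffs.append(np.abs(low @ np.abs(W2 @ u1)).sum())
    return np.array(coeffs)

Psi, low = diffusion_wavelets(P, J=3)
x = rng.normal(size=n)                           # a signal on the vertices
print(scattering(x, Psi, low))
```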
Extending computational harmonic analysis tools from the classical setting of regular lattices to the more general setting of graphs and networks is very important, and much research has been done recently. The generalized Haar-Walsh transform (GHWT) developed by Irion and Saito (2014) is a multiscale transform for signals on graphs, which is a generalization of the classical Haar and Walsh-Hadamard transforms. We propose the extended generalized Haar-Walsh transform (eGHWT), which is a generalization of the adapted time-frequency tilings of Thiele and Villemoes (1996). The eGHWT examines the efficiency not only of graph-domain partitions but also of "sequency-domain" partitions simultaneously. Consequently, the eGHWT and its associated best-basis selection algorithm for graph signals significantly improve the performance of the previous GHWT at a similar computational cost of $O(N \log N)$, where $N$ is the number of nodes of an input graph. While the GHWT best-basis algorithm seeks the most suitable orthonormal basis for a given task among more than $(1.5)^N$ possible orthonormal bases in $\mathbb{R}^N$, the eGHWT best-basis algorithm can find a better one by searching through more than $0.618 \cdot (1.84)^N$ possible orthonormal bases in $\mathbb{R}^N$. This article describes the details of the eGHWT best-basis algorithm and demonstrates its superiority using several examples, including genuine graph signals as well as conventional digital images viewed as graph signals. Furthermore, we show how the eGHWT can be extended to 2D signals and matrix-form data by viewing them as tensor products of the graphs generated from their columns and rows, and demonstrate its effectiveness for applications such as image approximation.
We study linear filters for processing signals supported on abstract topological spaces modeled as simplicial complexes, which may be interpreted as generalizations of graphs that account for nodes, edges, triangular faces, and so on. To process such signals, we develop simplicial convolutional filters defined as matrix polynomials of the lower and upper Hodge Laplacians. First, we study the properties of these filters and show that they are linear and shift-invariant, as well as permutation and orientation equivariant. These filters can also be implemented in a distributed fashion with low computational complexity, as they involve only (multiple rounds of) simplicial shifting between upper- and lower-adjacent simplices. Second, focusing on edge flows, we study the frequency responses of these filters and examine how the Hodge decomposition can be used to delineate gradient, curl, and harmonic frequencies. We discuss how these frequencies correspond to the lower- and upper-adjacent couplings and to the kernel of the Hodge Laplacian, respectively, and how they can be tuned independently by our filter designs. Third, we study different procedures for designing simplicial convolutional filters and discuss their relative advantages. Finally, we corroborate our simplicial filters in several applications: extracting different frequency components of a simplicial signal, denoising edge flows, and analyzing financial markets and traffic networks.
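To make the construction concrete, here is a hedged NumPy sketch (a toy simplicial complex of our own choosing, with illustrative filter coefficients rather than the authors' designs): an edge flow is filtered by a matrix polynomial in the lower and upper Hodge Laplacians built from the node-edge and edge-triangle incidence matrices.

```python
import numpy as np

# Toy simplicial complex: 4 nodes, 4 oriented edges, 1 triangle (0,1,2) (assumption)
# Edges: e0=(0,1), e1=(1,2), e2=(0,2), e3=(2,3)
B1 = np.array([[-1,  0, -1,  0],     # node-to-edge incidence matrix
               [ 1, -1,  0,  0],
               [ 0,  1,  1, -1],
               [ 0,  0,  0,  1]], dtype=float)
B2 = np.array([[ 1],                 # edge-to-triangle incidence matrix for (0,1,2)
               [ 1],
               [-1],
               [ 0]], dtype=float)
assert np.allclose(B1 @ B2, 0)       # boundary-of-boundary is zero

L_low = B1.T @ B1                    # lower Hodge Laplacian (edges coupled via shared nodes)
L_up = B2 @ B2.T                     # upper Hodge Laplacian (edges coupled via shared triangles)

def simplicial_filter(flow, h0, alpha, beta):
    """H f = h0*f + sum_k alpha[k] L_low^k f + sum_k beta[k] L_up^k f."""
    out = h0 * flow
    for k, a in enumerate(alpha, start=1):
        out = out + a * (np.linalg.matrix_power(L_low, k) @ flow)
    for k, b in enumerate(beta, start=1):
        out = out + b * (np.linalg.matrix_power(L_up, k) @ flow)
    return out

f = np.array([1.0, -0.5, 0.3, 2.0])                              # an edge flow
y = simplicial_filter(f, h0=1.0, alpha=[0.5, -0.1], beta=[0.3])  # illustrative coefficients
print(y)
```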