In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or word embeddings, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.
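As a concrete illustration of the kind of fast localized spectral filter described above, here is a minimal NumPy sketch of Chebyshev-polynomial filtering on the rescaled normalized Laplacian. It illustrates the general recipe only; it is not the authors' released implementation, the function and variable names are my own, and in an actual model the coefficients theta would be learned.

```python
import numpy as np

def chebyshev_filter(W, x, theta):
    """Filter a graph signal x with a K-tap Chebyshev spectral filter.

    W     : (n, n) symmetric adjacency matrix (assumes no isolated nodes)
    x     : (n,) graph signal
    theta : (K,) filter coefficients, K >= 2 (learned in an actual model)
    """
    n = W.shape[0]
    d_inv_sqrt = W.sum(axis=1) ** -0.5
    L = np.eye(n) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]   # normalized Laplacian
    lam_max = np.linalg.eigvalsh(L).max()
    L_tilde = 2.0 * L / lam_max - np.eye(n)                         # spectrum mapped to [-1, 1]

    T_prev, T_curr = x, L_tilde @ x                                 # T_0(L~)x and T_1(L~)x
    out = theta[0] * T_prev + theta[1] * T_curr
    for k in range(2, len(theta)):
        T_prev, T_curr = T_curr, 2.0 * L_tilde @ T_curr - T_prev    # Chebyshev recurrence
        out = out + theta[k] * T_curr
    return out

# toy usage on a 4-node path graph
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = chebyshev_filter(W, np.array([1.0, 0.0, 0.0, 0.0]), theta=np.array([0.5, 0.3, 0.1]))
```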
Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications in network analysis, computational social science, and computer graphics. In this paper, we propose a unified framework that generalizes CNN architectures to non-Euclidean domains (graphs and manifolds) and learns local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. We test the proposed method on standard tasks from the realms of image, graph, and 3D shape analysis and show that it consistently outperforms previous approaches.
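To make the idea of a unified non-Euclidean convolution more tangible, the following is a schematic NumPy sketch of a learnable patch operator in which continuous kernels (Gaussians here) are evaluated on pseudo-coordinates between neighboring points. The shapes, names, and toy pseudo-coordinates are illustrative assumptions rather than the paper's exact parameterization.

```python
import numpy as np

def gaussian_patch_conv(feats, nbrs, pseudo, mu, sigma, weights):
    """One convolution with a learnable 'patch operator' built from Gaussian kernels.

    feats   : (n, c_in) node features
    nbrs    : list of neighbor-index lists, one per node
    pseudo  : dict mapping (i, j) to a (d,) pseudo-coordinate vector u(i, j)
    mu      : (J, d) kernel means,  sigma : (J, d) kernel widths   (learnable)
    weights : (J, c_in, c_out) one linear map per kernel           (learnable)
    """
    n, c_out = feats.shape[0], weights.shape[2]
    out = np.zeros((n, c_out))
    for i in range(n):
        for j_node in nbrs[i]:
            u = pseudo[(i, j_node)]
            w = np.exp(-0.5 * np.sum(((u - mu) / sigma) ** 2, axis=1))   # kernel responses (J,)
            for j, wj in enumerate(w):
                out[i] += wj * (feats[j_node] @ weights[j])              # weighted neighbor message
    return out

# toy usage: 3 nodes on a path, 1-d pseudo-coordinate = degree difference
feats = np.eye(3)
nbrs = [[1], [0, 2], [1]]
pseudo = {(i, j): np.array([float(len(nbrs[j]) - len(nbrs[i]))])
          for i in range(3) for j in nbrs[i]}
out = gaussian_patch_conv(feats, nbrs, pseudo,
                          mu=np.zeros((2, 1)), sigma=np.ones((2, 1)),
                          weights=np.random.randn(2, 3, 4))
```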
Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into networks used to model them. Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. The purpose of this paper is to overview different examples of geometric deep learning problems and present available solutions, key difficulties, applications, and future research directions in this nascent field.
In applications such as social, energy, transportation, sensor, and neuronal networks, high-dimensional data naturally reside on the vertices of weighted graphs. The emerging field of signal processing on graphs merges algebraic and spectral graph theoretic concepts with computational harmonic analysis to process such signals on graphs. In this tutorial overview, we outline the main challenges of the area, discuss different ways to define graph spectral domains, which are the analogues to the classical frequency domain, and highlight the importance of incorporating the irregular structures of graph data domains when processing signals on graphs. We then review methods to generalize fundamental operations such as filtering, translation, modulation, dilation, and downsampling to the graph setting, and survey the localized, multiscale transforms that have been proposed to efficiently extract information from high-dimensional data on graphs. We conclude with a brief discussion of open issues and possible extensions.
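For readers new to the graph spectral domain mentioned above, the following NumPy sketch shows a graph Fourier transform and an ideal low-pass filter built from the eigenvectors of the combinatorial Laplacian; the function name and the choice of Laplacian are mine, used only to illustrate the concept.

```python
import numpy as np

def graph_fourier_lowpass(W, x, keep=4):
    """Project a graph signal onto its 'keep' lowest graph frequencies.

    The graph Fourier basis is taken to be the eigenvectors of the
    combinatorial Laplacian L = D - W, ordered by increasing eigenvalue.
    """
    L = np.diag(W.sum(axis=1)) - W
    lam, U = np.linalg.eigh(L)          # eigenvalues in ascending order
    x_hat = U.T @ x                     # graph Fourier transform
    x_hat[keep:] = 0.0                  # discard high-frequency content
    return U @ x_hat                    # inverse graph Fourier transform

# usage on a small ring graph
n = 8
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
smooth = graph_fourier_lowpass(W, np.random.randn(n), keep=3)
```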
Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing, along with a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas. We then summarize recent advances in developing basic GSP tools, including methods for sampling, filtering or graph learning. Next, we review progress in several application areas using GSP, including processing and analysis of sensor network data, biological data, and applications to image processing and machine learning.
Graph convolutional networks (GCNs) have proven to be a powerful concept that has been successfully applied to a wide variety of tasks across many domains over the past few years. In this work we study the theory that paved the way to the definition of GCNs, including the relevant parts of classical graph theory. We also discuss and experimentally demonstrate key properties and limitations of GCNs, such as those caused by the statistical dependency between samples introduced by the edges of the graph, which leads to a biased estimate of the full gradient. Another limitation we discuss is the negative impact of minibatch sampling on model performance; as a consequence, the gradient is computed over the entire dataset during each parameter update, which undermines scalability to large graphs. To address this, we investigate alternative methods that allow sampling only a portion of the data at each iteration while still safely learning good parameters. We reproduce the results reported in the work of Kipf et al. and propose an implementation inspired by SIGN, a sampling-free minibatch method. Finally, we compare the two implementations on a benchmark dataset, showing that they are comparable in terms of prediction accuracy on the semi-supervised node classification task.
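The sampling-free minibatch idea attributed to SIGN above can be illustrated with a short sketch, assuming the usual formulation in which diffused feature matrices are precomputed once so that the downstream classifier can be trained with ordinary node minibatches; this reflects my reading of the method, not the thesis's actual code.

```python
import numpy as np

def sign_features(A_hat, X, r=2):
    """Precompute the concatenated diffusion features used by SIGN-style models.

    A_hat : (n, n) normalized adjacency, e.g. D^-1/2 (A + I) D^-1/2
    X     : (n, f) node features
    r     : number of propagation steps to precompute
    Returns an (n, (r + 1) * f) matrix [X, A_hat X, ..., A_hat^r X].
    """
    feats, cur = [X], X
    for _ in range(r):
        cur = A_hat @ cur               # one hop of (sampling-free) propagation
        feats.append(cur)
    return np.concatenate(feats, axis=1)

# toy usage: because aggregation is done once up front, the downstream model
# is a plain per-node classifier and can be trained on ordinary node minibatches.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float) + np.eye(3)   # A + I
d_inv_sqrt = A.sum(axis=1) ** -0.5
A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
Z = sign_features(A_hat, np.random.randn(3, 5), r=2)                 # shape (3, 15)
```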
Spectral-based graph neural networks (SGNNs) have been attracting increasing attention in graph representation learning. However, existing SGNNs are limited to implementing graph filters with rigid transforms (e.g., the graph Fourier transform or predefined graph wavelet transforms) and cannot adapt to the signals residing on the graph and the task at hand. In this paper, we propose a novel class of graph neural networks that realizes graph filters with adaptive graph wavelets. Specifically, the adaptive graph wavelets are learned with neural network-parameterized lifting structures, in which structure-aware lifting operations (i.e., prediction and update operations) are developed to jointly consider graph structure and node features. We propose to lift based on diffusion wavelets to alleviate the structural information loss induced by partitioning non-bipartite graphs. By design, the locality and sparsity of the resulting wavelet transform, as well as the scalability of the lifting structure, are guaranteed. We further derive a soft-thresholding filtering operation by learning sparse graph representations in terms of the learned wavelets, yielding a localized, efficient, and scalable wavelet-based graph filter. To ensure that the learned graph representations are invariant to node permutations, a layer is employed at the input of the network to reorder the nodes according to their local topological information. We evaluate the proposed networks on both node-level and graph-level representation learning tasks on benchmark citation and bioinformatics graph datasets. Extensive experiments demonstrate the superiority of the proposed networks over existing SGNNs in terms of accuracy, efficiency, and scalability.
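To convey the predict/update mechanics underlying the lifting structures mentioned above, here is a minimal sketch of one generic lifting step applied to a graph signal split across two node sets. The paper's operators are learned and structure-aware; the fixed neighbor-averaging weights below are placeholders.

```python
import numpy as np

def lifting_step(x_even, x_odd, A_eo, predict_w=0.5, update_w=0.25):
    """One predict/update lifting step on a graph signal split into two node sets.

    x_even, x_odd : signal values on the 'even' and 'odd' node sets
    A_eo          : (n_even, n_odd) adjacency block between the two sets
    In an adaptive scheme the predict and update operators would be learned,
    structure-aware networks; here they are fixed neighbor averages.
    """
    deg_o = np.maximum(A_eo.sum(axis=0), 1.0)                 # odd-node degrees toward even set
    deg_e = np.maximum(A_eo.sum(axis=1), 1.0)                 # even-node degrees toward odd set
    detail = x_odd - predict_w * (A_eo.T @ x_even) / deg_o    # predict: odd values from even neighbors
    approx = x_even + update_w * (A_eo @ detail) / deg_e      # update: correct the coarse approximation
    return approx, detail                                      # coarse part, wavelet (detail) coefficients

# toy usage: a 4-cycle split into even nodes {0, 2} and odd nodes {1, 3}
A_eo = np.array([[1.0, 1.0],
                 [1.0, 1.0]])
coarse, detail = lifting_step(np.array([1.0, 3.0]), np.array([2.0, 0.0]), A_eo)
```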
Graph representation learning has many real-world applications, from image super-resolution and 3D computer vision to drug repurposing, protein classification, and social network analysis. An adequate representation of graph data is vital to the learning performance of statistical or machine learning models for graph-structured data. In this paper, we propose a novel multiscale representation system for graph data, called decimated framelets, which form a localized tight frame on the graph. The decimated framelet system allows the storage of the graph data representation on a coarse-grained chain and processes the graph data at multiple scales, where at each scale the data is stored on a subgraph. Building on this, we establish decimated G-framelet transforms for the decomposition and reconstruction of graph data at multiple resolutions via a constructive, data-driven filter bank. The graph framelets are built on a chain-based orthonormal basis that supports fast graph Fourier transforms. From this, we give a fast algorithm for the decimated G-framelet transform, or FGT, with linear computational complexity O(N) for a graph of size N. The theory of decimated framelets and the FGT is verified with numerical examples on random graphs, and their effectiveness is demonstrated in real-world applications, including multiresolution analysis of traffic networks and graph neural networks for graph classification tasks.
A number of problems can be formulated as prediction on graph-structured data. In this work, we generalize the convolution operator from regular grids to arbitrary graphs while avoiding the spectral domain, which allows us to handle graphs of varying size and connectivity. To move beyond a simple diffusion, filter weights are conditioned on the specific edge labels in the neighborhood of a vertex. Together with the proper choice of graph coarsening, we explore constructing deep neural networks for graph classification. In particular, we demonstrate the generality of our formulation in point cloud classification, where we set the new state of the art, and on a graph classification dataset, where we outperform other deep learning approaches. The source code is available at https://github.com/mys007/ecc.
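The edge-conditioned aggregation described above can be sketched as follows, assuming the common formulation in which a filter-generating network maps each edge label to a weight matrix applied to the neighbor's features; the averaging and the toy filter-generating map are illustrative, not the paper's exact architecture.

```python
import numpy as np

def ecc_layer(feats, edges, edge_labels, filter_net, bias):
    """Edge-conditioned convolution: per-edge weight matrices generated from edge labels.

    feats       : (n, c_in) node features
    edges       : list of (i, j) pairs, message flowing j -> i
    edge_labels : (len(edges), e) edge label vectors
    filter_net  : callable mapping an edge label to a (c_out, c_in) matrix
                  (a small MLP in the original formulation)
    bias        : (c_out,) bias vector
    """
    n, c_out = feats.shape[0], bias.shape[0]
    msgs = np.zeros((n, c_out))
    counts = np.zeros(n)
    for (i, j), lab in zip(edges, edge_labels):
        msgs[i] += filter_net(lab) @ feats[j]      # dynamically generated weights per edge
        counts[i] += 1
    return np.tile(bias, (n, 1)) + msgs / np.maximum(counts, 1.0)[:, None]

# toy filter-generating "network": a fixed linear map from a 1-d edge label
filter_net = lambda lab: np.outer(np.array([1.0, lab[0]]), np.array([1.0, 0.5]))
y = ecc_layer(np.random.randn(3, 2), [(0, 1), (1, 2)],
              np.array([[0.2], [0.7]]), filter_net, bias=np.zeros(2))
```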
Graph neural networks (GNNs) have recently grown in popularity in the field of artificial intelligence (AI) due to their unique ability to ingest relatively unstructured data types as input. Although some elements of GNN architectures are conceptually similar in operation to traditional neural networks (and neural network variants), other elements represent a departure from traditional deep learning techniques. This tutorial exposes the power and novelty of GNNs to AI practitioners by collating and presenting details regarding the motivations, concepts, mathematics, and applications of the most common and performant variants of GNNs. Importantly, we present this tutorial concisely alongside practical examples, thereby providing a practical and accessible introduction to the topic of GNNs.
We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph. Our approach is based on defining scaling using the graph analogue of the Fourier domain, namely the spectral decomposition of the discrete graph Laplacian L. Given a wavelet generating kernel g and a scale parameter t, we define the scaled wavelet operator T_g^t = g(tL). The spectral graph wavelets are then formed by localizing this operator by applying it to an indicator function. Subject to an admissibility condition on g, this procedure defines an invertible transform. We explore the localization properties of the wavelets in the limit of fine scales. Additionally, we present a fast Chebyshev polynomial approximation algorithm for computing the transform that avoids the need for diagonalizing L. We highlight potential applications of the transform through examples of wavelets on graphs corresponding to a variety of different problem domains.
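Written out, the construction summarized above amounts to the following condensed restatement in standard notation; the notation choices beyond those in the abstract are mine.

```latex
% Requires amsmath. L has eigenpairs (\lambda_\ell, \chi_\ell);
% \hat{f}(\ell) = \langle \chi_\ell, f \rangle is the graph Fourier coefficient of f;
% \delta_n is the indicator function of vertex n.
\begin{align}
  T_g^t f &= g(tL)\, f \;=\; \sum_{\ell} g(t\lambda_\ell)\, \hat{f}(\ell)\, \chi_\ell , \\
  \psi_{t,n} &= T_g^t\, \delta_n
    \qquad \text{(wavelet at scale } t \text{ centered at vertex } n\text{)}, \\
  \int_0^{\infty} \frac{g(x)^2}{x}\, dx &< \infty
    \qquad \text{(admissibility condition on } g \text{ for invertibility)}.
\end{align}
```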
We introduce ChebLieNet, a group-equivariant method on (anisotropic) manifolds. Surfing on the success of graph-based and group-based neural networks, we take advantage of recent developments in the field of geometric deep learning to derive a new approach for exploiting any anisotropies in data. Via discrete approximations of Lie groups, we develop a graph neural network made of anisotropic convolutional layers (Chebyshev convolutions), spatial pooling and unpooling layers, and global pooling layers. Group equivariance is achieved through equivariant and invariant operators on graphs whose edge weights are derived from anisotropic, left-invariant Riemannian distances. Thanks to its simple form, the Riemannian metric can model any anisotropy, both in the spatial and orientation domains. This control over the anisotropy of the Riemannian metric allows balancing the equivariance (anisotropic metric) of the graph convolutional layers against their invariance (isotropic metric). Hence, we open the door to a better understanding of anisotropic properties. Furthermore, we empirically demonstrate the existence of (data-dependent) sweet spots for the anisotropy parameters on CIFAR10. This crucial result is evidence of the benefit that can be obtained by exploiting anisotropic properties in data. We also evaluate the scalability of this approach on STL10 (image data) and ClimateNet (spherical data), showing its remarkable adaptability to a variety of tasks.
Multimodal data provide complementary information about a natural phenomenon by integrating data from various domains with very different statistical properties. Capturing the intra-modal and cross-modal information of multimodal data is a fundamental capability of multimodal learning methods. Geometry-aware data analysis approaches provide these capabilities by implicitly representing data of various modalities based on their underlying geometric structure. Moreover, in many applications data are explicitly defined on an intrinsic geometric structure. Deep learning on non-Euclidean domains is an emerging research field that has recently been investigated in many studies, but most popular approaches have been developed for unimodal data. This paper proposes a multimodal multiscale graph wavelet convolutional network (M-GWCN) as an end-to-end network. M-GWCN simultaneously finds intra-modal representations by applying multiscale graph wavelet transforms, which provide useful localization properties in the graph domain of each modality, and cross-modal representations by learning permutations that encode the correlations among the various modalities. M-GWCN is not limited to homogeneous modalities with the same number of data points, nor does it require any prior knowledge indicating correspondences between modalities. Several semi-supervised node classification experiments have been conducted on three popular unimodal explicit graph datasets and five multimodal implicit ones. The experimental results demonstrate the superiority and effectiveness of the proposed method compared with spectral graph-domain convolutional neural networks and state-of-the-art multimodal methods.
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. Design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which emerged as a de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair experimental evaluation protocol and choose a proper collection of datasets. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions, which shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an effective tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
Deep neural networks have enjoyed remarkable success for various vision tasks; however, it remains challenging to apply CNNs to domains lacking a regular underlying structure, such as 3D point clouds. Towards this we propose a novel convolutional architecture, termed SpiderCNN, to efficiently extract geometric features from point clouds. SpiderCNN is comprised of units called SpiderConv, which extend convolutional operations from regular grids to irregular point sets that can be embedded in R^n, by parametrizing a family of convolutional filters. We design the filter as a product of a simple step function that captures local geodesic information and a Taylor polynomial that ensures expressiveness. SpiderCNN inherits the multi-scale hierarchical architecture from classical CNNs, which allows it to extract semantic deep features. Experiments on ModelNet40 [4] demonstrate that SpiderCNN achieves state-of-the-art accuracy of 92.4% on standard benchmarks, and shows competitive performance on the segmentation task.
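The filter family described above (a radial step function multiplied by a polynomial) can be sketched as follows; the radial binning and the truncated monomial basis used here are simplifying assumptions meant only to convey the product structure, not the paper's exact parameterization.

```python
import numpy as np

def spider_filter(offsets, step_vals, taylor_coeffs, bin_edges):
    """Evaluate a SpiderConv-style filter at relative point offsets.

    offsets       : (m, 3) neighbor coordinates relative to the center point
    step_vals     : (B,) values of the radial step function (one per bin)
    taylor_coeffs : coefficients of a low-order polynomial in (x, y, z),
                    ordered as [1, x, y, z, xy, yz, xz, xyz]
    bin_edges     : (B + 1,) increasing radial bin boundaries
    Returns the (m,) filter responses step(r) * poly(x, y, z).
    """
    x, y, z = offsets[:, 0], offsets[:, 1], offsets[:, 2]
    r = np.linalg.norm(offsets, axis=1)
    bins = np.clip(np.searchsorted(bin_edges, r) - 1, 0, len(step_vals) - 1)
    step = step_vals[bins]                                   # coarse, geodesic-like part
    monomials = np.stack([np.ones_like(x), x, y, z,
                          x * y, y * z, x * z, x * y * z], axis=1)
    poly = monomials @ taylor_coeffs                         # smooth, expressive part
    return step * poly

# toy usage: 5 random neighbor offsets, 3 radial bins
resp = spider_filter(np.random.randn(5, 3) * 0.1,
                     step_vals=np.array([1.0, 0.5, 0.1]),
                     taylor_coeffs=np.random.randn(8),
                     bin_edges=np.array([0.0, 0.1, 0.2, 0.4]))
```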
Deep learning has been shown to be successful in a number of domains, ranging from acoustics, images, to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, substantial research efforts have been devoted to applying deep learning methods to graphs, resulting in beneficial advances in graph analysis techniques. In this survey, we comprehensively review the different types of deep learning methods on graphs. We divide the existing methods into five categories based on their model architectures and training strategies: graph recurrent neural networks, graph convolutional networks, graph autoencoders, graph reinforcement learning, and graph adversarial methods. We then provide a comprehensive overview of these methods in a systematic manner mainly by following their development history. We also analyze the differences and compositions of different methods. Finally, we briefly outline the applications in which they have been used and discuss potential future research directions.
We introduce a novel harmonic analysis for functions defined on the vertices of a directed graph, of which the random walk operator is the cornerstone. As a first step, we consider the set of eigenvectors of the random walk operator as a non-orthogonal Fourier-type basis for functions on directed graphs. We obtain a frequency interpretation by connecting the variation of the eigenvectors of the random walk operator, measured by their Dirichlet energy, to the real part of their associated eigenvalues. From this Fourier basis we go further and build a multiscale analysis on directed graphs: by extending the diffusion wavelet framework of Coifman and Maggioni to directed graphs, we propose both a redundant wavelet transform and a decimated wavelet transform. Our development of harmonic analysis on directed graphs thus leads us to consider a semi-supervised learning problem and a signal modeling problem on directed graphs, applications that highlight the efficiency of our framework.
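A minimal sketch of the central object above, assuming the standard definition of the random walk operator P = D_out^{-1} A; the ordering of eigenvectors by the real part of the eigenvalue follows the frequency interpretation described in the abstract, while the toy graph and variable names are mine.

```python
import numpy as np

# Directed adjacency: entry (i, j) = 1 means an edge i -> j.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
d_out = np.maximum(A.sum(axis=1), 1.0)
P = A / d_out[:, None]             # random walk operator P = D_out^{-1} A

# Generally complex, non-orthogonal eigenpairs; eigenvectors with Re(lambda)
# close to 1 vary slowly and play the role of low graph frequencies.
lam, V = np.linalg.eig(P)
order = np.argsort(-lam.real)      # low-to-high "frequency" ordering
lam, V = lam[order], V[:, order]
```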
Learned 3D representations of human faces are useful for computer vision problems such as 3D face tracking and reconstruction from images, as well as graphics applications such as character generation and animation. Traditional models learn a latent representation of a face using linear subspaces or higher-order tensor generalizations. Due to this linearity, they cannot capture extreme deformations and nonlinear expressions. To address this, we introduce a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. We introduce mesh sampling operations that enable a hierarchical mesh representation that captures non-linear variations in shape and expression at multiple scales within the model. In a variational setting, our model samples diverse realistic 3D faces from a multivariate Gaussian distribution. Our training data consists of 20,466 meshes of extreme expressions captured over 12 different subjects. Despite limited training data, our trained model outperforms state-of-the-art face models with 50% lower reconstruction error, while using 75% fewer parameters. We show that replacing the expression space of an existing state-of-the-art face model with our model achieves a lower reconstruction error. Our data, model and code are available at http://coma.is.tue.mpg.de/.
Predicting the evolution of a representative sample of a material with microstructure is a fundamental problem in homogenization. In this work we propose a graph convolutional neural network that utilizes the discretized representation of the initial microstructure directly, without segmentation or clustering. Compared to feature-based and pixel-based convolutional neural network models, the proposed method has a number of advantages: (a) it is deep in that it does not require featurization but can benefit from it, (b) it has a simple implementation using standard convolutional filters and layers, (c) it works natively on unstructured and structured grid data without interpolation (unlike pixel-based convolutional neural networks), and (d) it can preserve rotational invariance like other graph-based convolutional neural networks. We demonstrate the performance of the proposed network, comparing it to traditional pixel-based convolutional neural network models and feature-based graph convolutional neural networks on multiple large datasets.