Graph signal processing is a ubiquitous task in many applications such as sensor, social, transportation and brain networks, point cloud processing, and graph neural networks. Graph signals are often corrupted during sensing and therefore need to be restored. In this paper, we propose graph signal restoration methods based on deep algorithm unrolling (DAU). First, we present a graph signal denoiser obtained by unrolling iterations of the alternating direction method of multipliers (ADMM). We then propose a general restoration method for linear degradation by unrolling iterations of Plug-and-Play ADMM (PnP-ADMM). In the second approach, the unrolled ADMM-based denoiser is incorporated as a submodule, resulting in a nested DAU structure. The parameters in the proposed denoising/restoration methods are trained in an end-to-end manner. Our approach is interpretable and keeps the number of parameters small, since we only tune the regularization parameters. We overcome two main challenges of existing graph signal restoration methods: 1) the limited performance of convex optimization algorithms due to fixed parameters, which are often determined manually; 2) the large number of parameters of graph neural networks, which results in training difficulty. Several experiments on graph signal denoising and interpolation are conducted on synthetic and real-world data. The proposed methods show performance improvements over several existing techniques in terms of root mean squared error in both tasks.
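For concreteness, here is a minimal sketch of the first ingredient: unrolling ADMM iterations of a graph-Laplacian-regularized denoiser with the per-iteration parameters made learnable. The quadratic splitting, class name, and layer count below are illustrative assumptions rather than the paper's exact architecture.

```python
# A minimal sketch (not the paper's exact architecture) of unrolling ADMM iterations
# for graph-Laplacian-regularized denoising, with the per-iteration regularization
# weight `alpha` and penalty `rho` made learnable and trained end to end.
import torch
import torch.nn as nn

class UnrolledGraphADMMDenoiser(nn.Module):
    def __init__(self, num_layers: int = 10):
        super().__init__()
        # One (alpha, rho) pair per unrolled ADMM iteration; softplus keeps them positive.
        self.alpha = nn.Parameter(torch.zeros(num_layers))
        self.rho = nn.Parameter(torch.zeros(num_layers))

    def forward(self, y: torch.Tensor, L: torch.Tensor) -> torch.Tensor:
        """y: noisy graph signal (n,), L: combinatorial graph Laplacian (n, n)."""
        n = y.shape[0]
        I = torch.eye(n, dtype=y.dtype, device=y.device)
        x, z, u = y.clone(), y.clone(), torch.zeros_like(y)
        for a_raw, r_raw in zip(self.alpha, self.rho):
            a, r = nn.functional.softplus(a_raw), nn.functional.softplus(r_raw)
            # x-update: argmin_x 0.5||x - y||^2 + (r/2)||x - z + u||^2
            x = (y + r * (z - u)) / (1.0 + r)
            # z-update: argmin_z a * z^T L z + (r/2)||x + u - z||^2
            z = torch.linalg.solve(2.0 * a * L + r * I, r * (x + u))
            # dual update
            u = u + x - z
        return x
```

Training such a module end to end (e.g., with a mean-squared-error loss against clean reference signals) tunes only the per-iteration $(\alpha_k, \rho_k)$ pairs, which is what keeps the parameter count small.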
Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing, along with a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas. We then summarize recent advances in developing basic GSP tools, including methods for sampling, filtering or graph learning. Next, we review progress in several application areas using GSP, including processing and analysis of sensor network data, biological data, and applications to image processing and machine learning.
Since graph data collected from the real world is rarely noise-free, a practical representation of graphs should be robust to noise. Existing research usually focuses on feature smoothing while leaving the geometric structure untouched. Furthermore, most work imposes an L2-norm that pursues global smoothness, which limits the expressiveness of graph neural networks. This paper proposes a general scheme for regularizing graph data corrupted by both feature and structure noise, where the objective function is solved efficiently with the alternating direction method of multipliers (ADMM). The scheme allows multiple layers to be employed without concern for over-smoothing, and convergence to the optimal solution is guaranteed. Empirical studies demonstrate that our model achieves significantly better performance than popular graph convolutions, even in the case of heavy contamination.
Spectral-based graph neural networks (SGNNs) have been attracting increasing attention in graph representation learning. However, existing SGNNs are limited to implementing graph filters with rigid transforms (e.g., the graph Fourier transform or predefined graph wavelet transforms) and cannot adapt to the signals residing on the graphs and the tasks at hand. In this paper, we propose a novel graph neural network that implements graph filters with adaptive graph wavelets. Specifically, the adaptive graph wavelets are learned via neural-network-parameterized lifting structures, where structure-aware lifting operations (i.e., prediction and update operations) are developed to jointly consider the graph structure and node features. We propose lifting based on diffusion wavelets to alleviate the structural information loss induced by partitioning non-bipartite graphs. By design, the locality and sparsity of the resulting wavelet transform as well as the scalability of the lifting structure are guaranteed. We further learn sparse graph representations in the learned wavelet domain with a soft-thresholding filtering operation, yielding localized, efficient, and scalable wavelet-based graph filters. To ensure that the learned graph representations are invariant to node permutations, a layer is employed at the input of the network to reorder the nodes according to their local topological information. We evaluate the proposed network on benchmark citation and bioinformatics graph datasets in both node-level and graph-level representation learning tasks. Extensive experiments demonstrate the superiority of the proposed network over existing SGNNs in terms of accuracy, efficiency, and scalability.
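A minimal single-stage lifting sketch may help fix ideas: the nodes are split into an "even" set and an "odd" set; a predict step forms detail (high-pass) coefficients on the odd set from even-set neighbours, and an update step smooths the even set into approximation (low-pass) coefficients. In the paper both operators are structure-aware and parameterized by neural networks; here they are fixed adjacency-normalized averages purely for illustration.

```python
# Illustrative single lifting stage on a graph signal; the predict/update operators
# below are simple row-normalized adjacency averages, not the learned operators.
import numpy as np

def lifting_step(x, A, even_idx, odd_idx):
    """x: node signal (n,); A: adjacency matrix (n, n); even_idx/odd_idx: node partition."""
    A_oe = A[np.ix_(odd_idx, even_idx)]                                   # odd <- even connections
    P = A_oe / np.maximum(A_oe.sum(axis=1, keepdims=True), 1e-12)         # predict operator
    detail = x[odd_idx] - P @ x[even_idx]                                 # high-pass coefficients
    A_eo = A[np.ix_(even_idx, odd_idx)]
    U = 0.5 * A_eo / np.maximum(A_eo.sum(axis=1, keepdims=True), 1e-12)   # update operator
    approx = x[even_idx] + U @ detail                                     # low-pass coefficients
    return approx, detail
```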
Deep neural networks provide unprecedented performance gains in many real world problems in signal and image processing. Despite these gains, future development and practical deployment of deep networks is hindered by their blackbox nature, i.e., lack of interpretability, and by the need for very large training sets. An emerging technique called algorithm unrolling or unfolding offers promise in eliminating these issues by providing a concrete and systematic connection between iterative algorithms that are used widely in signal processing and deep neural networks. Unrolling methods were first proposed to develop fast neural network approximations for sparse coding. More recently, this direction has attracted enormous attention and is rapidly growing both in theoretic investigations and practical applications. The growing popularity of unrolled deep networks is due in part to their potential in developing efficient, high-performance and yet interpretable network architectures from reasonable size training sets. In this article, we review algorithm unrolling for signal and image processing. We extensively cover popular techniques for algorithm unrolling in various domains of signal and image processing including imaging, vision and recognition, and speech processing. By reviewing previous works, we reveal the connections between iterative algorithms and neural networks and present recent theoretical results. Finally, we provide a discussion on current limitations of unrolling and suggest possible future research directions.
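As a concrete instance of the sparse-coding origin mentioned above, here is a minimal LISTA-style sketch under standard assumptions: the dictionary A is given, and the layer matrices and thresholds are initialized from the usual ISTA derivation before being learned. Names and hyperparameters are illustrative.

```python
# A minimal LISTA-style sketch: unrolling ISTA iterations for sparse coding
# x ~ argmin ||y - A x||_2^2 + lam ||x||_1 into a feed-forward network whose
# per-layer matrices and thresholds are learned.
import torch
import torch.nn as nn

def soft_threshold(v, theta):
    return torch.sign(v) * torch.clamp(v.abs() - theta, min=0.0)

class LISTA(nn.Module):
    def __init__(self, A: torch.Tensor, num_layers: int = 16, lam: float = 0.1):
        super().__init__()
        m, n = A.shape
        L = torch.linalg.matrix_norm(A, ord=2).item() ** 2   # Lipschitz constant of the gradient
        W_e = A.T / L                                        # ISTA's fixed "encoder" initialization
        S = torch.eye(n) - A.T @ A / L                       # ISTA's fixed "state transition"
        self.W_e = nn.ParameterList([nn.Parameter(W_e.clone()) for _ in range(num_layers)])
        self.S = nn.ParameterList([nn.Parameter(S.clone()) for _ in range(num_layers)])
        self.theta = nn.Parameter(torch.full((num_layers,), lam / L))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(self.S[0].shape[0], dtype=y.dtype)
        for W_e, S, theta in zip(self.W_e, self.S, self.theta):
            x = soft_threshold(W_e @ y + S @ x, theta)       # one unrolled ISTA step
        return x
```

The design choice that makes unrolling attractive is visible here: the network has exactly one block per iteration, so its depth, parameter count, and interpretation are all inherited from the underlying algorithm.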
Multimodal data provide complementary information about natural phenomena by integrating data from various domains with very different statistical properties. Capturing the intra-modality and cross-modality information of multimodal data is a fundamental capability of multimodal learning methods. Geometry-aware data analysis approaches provide these capabilities by implicitly representing data of various modalities based on their underlying geometric structures. Moreover, in many applications data are explicitly defined on intrinsic geometric structures. Deep learning methods on non-Euclidean domains are an emerging research area that has recently been investigated in many studies. Most popular approaches, however, are developed for unimodal data. This paper proposes a multimodal multi-scale graph wavelet convolutional network (M-GWCN) as an end-to-end network. M-GWCN simultaneously finds intra-modality representations by applying multi-scale graph wavelet transforms, which provide useful localization properties in the graph domain of each modality, and cross-modality representations by learning permutations that encode the correlations among the various modalities. M-GWCN is not limited to homogeneous modalities with the same number of data points, nor does it require any prior knowledge indicating correspondences between modalities. Several semi-supervised node classification experiments have been conducted on three popular unimodal explicit graph datasets and five multimodal implicit ones. The experimental results indicate the superiority and effectiveness of the proposed method compared with spectral graph-domain convolutional neural networks and state-of-the-art multimodal methods.
3D point clouds acquired by scanning real-world objects or scenes have found a wide range of applications, including immersive telepresence, autonomous driving, and surveillance. They are often perturbed by noise or suffer from low density, which obstructs downstream tasks such as surface reconstruction and understanding. In this paper, we propose a novel paradigm of point set resampling for restoration, which learns a continuous gradient field of the point cloud that converges points toward the underlying surface. In particular, we represent a point cloud via its gradient field -- the gradient of the log-probability density function -- and enforce the gradient field to be continuous, which guarantees the continuity of the model for solvable optimization. Based on the continuous gradient field estimated via the proposed neural network, resampling a point cloud amounts to performing gradient-based Markov chain Monte Carlo (MCMC) on the input noisy or sparse point cloud. Furthermore, we propose point cloud restoration that essentially refines the intermediate resampled point cloud iteratively and introduces regularization into the gradient-based MCMC to accommodate various priors during the resampling process. Extensive experimental results demonstrate that the proposed point set resampling achieves state-of-the-art performance in representative restoration tasks, including point cloud denoising and upsampling.
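A minimal sketch of the gradient-based MCMC step may clarify the paradigm. The `score_net` below is a stand-in for the estimated gradient field, and the annealing schedule, iteration counts, and final noise-free ascent are illustrative assumptions rather than the paper's exact settings.

```python
# Gradient-based resampling sketch: points are moved by annealed Langevin-type MCMC
# steps along an estimated gradient of the log-density of the clean surface.
import torch

def langevin_resample(points, score_net, n_steps=30, step0=0.2, decay=0.95, noise_scale=1.0):
    """points: (N, 3) noisy/sparse point cloud; score_net(points) -> (N, 3) gradient field."""
    x = points.clone()
    step = step0
    for _ in range(n_steps):
        grad = score_net(x)                                # estimated gradient of log-density
        noise = torch.randn_like(x) * noise_scale
        x = x + step * grad + (2.0 * step) ** 0.5 * noise  # Langevin update
        step *= decay                                      # anneal the step size
    for _ in range(5):                                     # final noise-free ascent onto the surface
        x = x + step * score_net(x)
    return x
```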
Deconvolution is a widely used strategy to mitigate the blurring and noisy degradation of hyperspectral images~(HSI) generated by the acquisition devices. This issue is usually addressed by solving an ill-posed inverse problem. While investigating proper image priors can enhance the deconvolution performance, it is not trivial to handcraft a powerful regularizer and to set the regularization parameters. To address these issues, in this paper we introduce a tuning-free Plug-and-Play (PnP) algorithm for HSI deconvolution. Specifically, we use the alternating direction method of multipliers (ADMM) to decompose the optimization problem into two iterative sub-problems. A flexible blind 3D denoising network (B3DDN) is designed to learn deep priors and to solve the denoising sub-problem with different noise levels. A measure of 3D residual whiteness is then investigated to adjust the penalty parameters when solving the quadratic sub-problems, as well as a stopping criterion. Experimental results on both simulated and real-world data with ground-truth demonstrate the superiority of the proposed method.
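A minimal single-band sketch of the PnP-ADMM splitting described above, assuming a circular blur model so that the quadratic sub-problem has an FFT closed form. The Gaussian smoother stands in for the learned blind 3D denoising network (B3DDN), and the residual-whiteness parameter rule and stopping criterion are omitted.

```python
# PnP-ADMM deconvolution sketch: closed-form FFT solve for the data-fit sub-problem,
# plug-in denoiser for the prior sub-problem, scaled dual update.
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_admm_deconv(y, psf, rho=0.5, sigma=1.0, n_iters=30):
    """y: blurred noisy image (H, W); psf: blur kernel, centered and padded to (H, W)."""
    H_f = np.fft.fft2(np.fft.ifftshift(psf))         # transfer function of the blur
    Hty_f = np.conj(H_f) * np.fft.fft2(y)
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(n_iters):
        # x-update: (H^T H + rho I) x = H^T y + rho (z - u), solved in the Fourier domain
        rhs_f = Hty_f + rho * np.fft.fft2(z - u)
        x = np.real(np.fft.ifft2(rhs_f / (np.abs(H_f) ** 2 + rho)))
        # z-update: denoise x + u (stand-in Gaussian smoother instead of a deep prior)
        z = gaussian_filter(x + u, sigma=sigma)
        # dual update
        u = u + x - z
    return x
```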
In applications such as social, energy, transportation, sensor, and neuronal networks, high-dimensional data naturally reside on the vertices of weighted graphs. The emerging field of signal processing on graphs merges algebraic and spectral graph theoretic concepts with computational harmonic analysis to process such signals on graphs. In this tutorial overview, we outline the main challenges of the area, discuss different ways to define graph spectral domains, which are the analogues to the classical frequency domain, and highlight the importance of incorporating the irregular structures of graph data domains when processing signals on graphs. We then review methods to generalize fundamental operations such as filtering, translation, modulation, dilation, and downsampling to the graph setting, and survey the localized, multiscale transforms that have been proposed to efficiently extract information from high-dimensional data on graphs. We conclude with a brief discussion of open issues and possible extensions.
A 3D point cloud is typically constructed from depth measurements acquired by sensors at one or more viewpoints. The measurements suffer from both quantization and noise corruption. To improve quality, previous works denoise a point cloud \textit{a posteriori}, after projecting the imperfect depth data into 3D space. Instead, we enhance the depth measurements directly on the sensed images \textit{a priori}, before synthesizing the 3D point cloud. By enhancing close to the physical sensing process, we tailor our optimization to our depth formation model before the measurement errors are blurred by subsequent processing steps. Specifically, we model depth formation as a combined process of signal-dependent noise addition and non-uniform log-based quantization. The designed model (with fitted parameters) is validated using empirical data collected from an actual depth sensor. To enhance each pixel row in a depth image, we first encode intra-view similarities between available row pixels as edge weights via feature graph learning. We next establish inter-view similarities with another rectified depth image via viewpoint mapping and sparse linear interpolation. This leads to a maximum a posteriori (MAP) graph filtering objective that is convex and differentiable. We optimize the objective efficiently using accelerated gradient descent (AGD), where the optimal step size is approximated via the Gershgorin circle theorem (GCT). Experiments show that our method significantly outperforms recent point cloud denoising schemes and state-of-the-art image denoising schemes in two established point cloud quality metrics.
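A minimal sketch of the step-size idea, assuming the MAP objective reduces to a Laplacian-regularized quadratic $f(\mathbf{x}) = \|\mathbf{x} - \mathbf{y}\|_2^2 + \mu\, \mathbf{x}^\top \mathbf{L} \mathbf{x}$: the Gershgorin circle theorem gives a cheap upper bound on $\lambda_{\max}(\mathbf{L})$, hence on the gradient's Lipschitz constant, so accelerated gradient descent can run without an eigen-decomposition. The details below are illustrative, not the paper's exact formulation.

```python
# AGD with a Gershgorin-based step size for a Laplacian-regularized quadratic objective.
import numpy as np

def agd_graph_filter(y, L, mu=1.0, n_iters=100):
    # Gershgorin bound: lambda_max(L) <= max_i ( L_ii + sum_{j != i} |L_ij| )
    radii = np.sum(np.abs(L), axis=1) - np.abs(np.diag(L))
    lam_max_bound = np.max(np.diag(L) + radii)
    lipschitz = 2.0 + 2.0 * mu * lam_max_bound      # Lipschitz constant of grad f
    step = 1.0 / lipschitz

    x = y.copy()
    x_prev = y.copy()
    for k in range(1, n_iters + 1):
        v = x + (k - 1.0) / (k + 2.0) * (x - x_prev)   # Nesterov momentum
        grad = 2.0 * (v - y) + 2.0 * mu * (L @ v)
        x_prev = x
        x = v - step * grad
    return x
```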
Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications from the domains of network analysis, computational social science, or computer graphics. In this paper, we propose a unified framework that allows generalizing CNN architectures to non-Euclidean domains (graphs and manifolds) and learning local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. We test the proposed method on standard tasks from the realms of image, graph, and 3D shape analysis and show that it consistently outperforms previous approaches.
A fundamental premise in graph signal processing (GSP) is that pairwise (anti-)correlations of the target signal can be encoded as edge weights for graph filtering. However, existing fast graph sampling schemes are designed and tested only for positive graphs describing positive correlations. In this paper, we show that, for datasets with strong inherent anti-correlations, a suitable graph contains both positive and negative edges. In response, we propose a linear-time signed graph sampling method centered on the concept of balanced signed graphs. Specifically, given an empirical covariance data matrix $\bar{\mathbf{C}}$, we first learn a sparse inverse matrix (graph Laplacian) $\mathcal{L}$ corresponding to a signed graph $\mathcal{G}$. We define the eigenvectors of the Laplacian $\mathcal{L}_b$ of a balanced signed graph $\mathcal{G}_b$ -- approximating $\mathcal{G}$ via edge weight augmentation -- as graph frequency components. Next, we choose samples to minimize the low-pass filter reconstruction error in two steps. We first align all Gershgorin disc left-ends of the Laplacian $\mathcal{L}_b$ at the smallest eigenvalue $\lambda_{\min}(\mathcal{L}_b)$ via a similarity transform $\mathcal{L}_p = \mathbf{S} \mathcal{L}_b \mathbf{S}^{-1}$, leveraging a recent linear algebra theorem called Gershgorin disc perfect alignment (GDPA). We then sample on $\mathcal{L}_p$ using a previous fast Gershgorin disc alignment sampling (GDAS) scheme. Experimental results show that our signed graph sampling method noticeably outperforms existing fast sampling schemes on various datasets.
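A small numerical illustration of the disc-alignment step described above, assuming the GDPA scaling $\mathbf{S} = \text{diag}(1/v_1, \dots, 1/v_n)$ built from the first eigenvector $v$ of the balanced signed-graph Laplacian (the paper obtains $v$ with a fast solver; a dense eigen-solver is used here purely for illustration, and the subsequent GDAS sampling step is not shown).

```python
# GDPA illustration: after the diagonal similarity transform, every Gershgorin disc
# left-end of L_p sits at lambda_min(L_b).
import numpy as np

def gdpa_align(L_b):
    """L_b: Laplacian of a balanced signed graph (symmetric; first eigenvector has no zero entries)."""
    eigvals, eigvecs = np.linalg.eigh(L_b)
    lam_min, v = eigvals[0], eigvecs[:, 0]
    S = np.diag(1.0 / v)
    L_p = S @ L_b @ np.linalg.inv(S)              # similarity transform: same spectrum as L_b
    # Gershgorin disc left-ends of L_p: diagonal entry minus row radius
    radii = np.sum(np.abs(L_p), axis=1) - np.abs(np.diag(L_p))
    left_ends = np.diag(L_p) - radii              # per GDPA, each entry equals lam_min
    return L_p, lam_min, left_ends
```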
Graph neural networks (GNNs) have demonstrated superior performance in various applications. However, the working mechanism behind them remains mysterious. GNN models are designed to learn effective representations of graph-structured data, which intrinsically coincides with the principle of graph signal denoising (GSD). Algorithm unrolling, a "learning to optimize" technique, has attracted attention owing to its promise in building efficient and interpretable neural network architectures. In this paper, we introduce a class of unrolled networks built upon truncated optimization algorithms (e.g., gradient descent and proximal gradient descent) for GSD problems. They are shown to be tightly connected to many popular GNN models, in that the forward propagation in these GNNs is in fact an unrolled network serving specific GSD problems. Furthermore, the training process of a GNN model can be viewed as solving a bilevel optimization problem with a lower-level GSD problem. Such a connection brings a fresh view of GNNs, as we can try to understand their practical behavior from their GSD counterparts, and it can also motivate the design of new GNN models. Based on the algorithm unrolling perspective, an expressive model named UGDGNN, i.e., unrolled gradient descent GNN, is further proposed, which inherits appealing theoretical properties. Extensive numerical simulations on seven benchmark datasets demonstrate that UGDGNN can achieve superior or competitive performance compared with state-of-the-art models.
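To make the stated GNN-GSD connection concrete, here is a minimal sketch (not UGDGNN itself): a few truncated gradient-descent steps on the denoising objective $\|X - F\|_F^2 + \lambda\,\mathrm{tr}(X^\top L X)$ produce a propagation rule of the same shape as a GNN forward pass. Learnable feature transforms and nonlinearities are omitted for clarity.

```python
# K unrolled gradient-descent "layers" on a graph signal denoising objective.
import numpy as np

def unrolled_gsd(F, L, lam=1.0, step=0.1, K=4):
    """F: input node features (n, d); L: normalized graph Laplacian (n, n)."""
    X = F.copy()
    for _ in range(K):
        grad = 2.0 * (X - F) + 2.0 * lam * (L @ X)   # gradient of the GSD objective
        X = X - step * grad                          # one unrolled "layer"
    return X
```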
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
Air pollution monitoring platforms play a very important role in preventing and mitigating the effects of pollution. Recent advances in the field of graph signal processing make it possible to describe and analyze air pollution monitoring networks using graphs. One of the main applications is the reconstruction of the measured signal over the graph using a subset of sensors. Reconstructing a signal using information from neighboring sensors can help improve the quality of the network data; examples are filling in missing data at a node using correlated neighboring nodes, or correcting a drifting sensor using more accurate neighboring sensors. This paper compares several types of graph signal reconstruction methods applied to a real dataset of Spanish air pollution reference stations. The methods considered are Laplacian interpolation, graph-signal-processing low-pass-based reconstruction, and kernel-based graph signal reconstruction, compared on real air pollution datasets measuring O3, NO2, and PM10. The ability of the methods to reconstruct the pollutant signals is shown, as well as the computational cost of the reconstruction. The results indicate the superiority of the kernel-based graph signal reconstruction methods, as well as the difficulty these methods have in scaling to air pollution monitoring networks with a large number of low-cost sensors. However, we show that the scalability issue can be overcome with simple approaches, such as partitioning the network with a clustering algorithm.
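For concreteness, a minimal sketch of Laplacian interpolation, the first of the compared methods: the unknown node values are chosen to minimize the Laplacian quadratic form $x^\top L x$ subject to the observed sensor values, which reduces to a single linear solve. The graph Laplacian is assumed given, and the observed sub-block is assumed invertible.

```python
# Laplacian interpolation of missing graph-signal values from observed sensor nodes.
import numpy as np

def laplacian_interpolation(L, x_obs, obs_idx):
    """L: graph Laplacian (n, n); x_obs: observed values; obs_idx: indices of observed nodes."""
    n = L.shape[0]
    unk_idx = np.setdiff1d(np.arange(n), obs_idx)
    # Partition L into observed (S) and unknown (U) blocks and solve L_UU x_U = -L_US x_S
    L_UU = L[np.ix_(unk_idx, unk_idx)]
    L_US = L[np.ix_(unk_idx, obs_idx)]
    x = np.zeros(n)
    x[obs_idx] = x_obs
    x[unk_idx] = np.linalg.solve(L_UU, -L_US @ x_obs)
    return x
```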
We consider the problem of estimating the topologies of multiple networks from nodal observations, where the networks are assumed to be drawn from the same (unknown) random graph model. We adopt a graphon as our random graph model, a nonparametric model from which graphs of potentially different sizes can be drawn. The versatility of graphons allows us to tackle the joint inference problem even in cases where the graphs to be recovered contain different numbers of nodes and lack precise alignment across graphs. Our solution is based on combining a maximum likelihood penalty with graphon estimation schemes and can be used to augment existing network inference methods. The proposed joint network and graphon estimation is further enhanced by introducing a method that is robust to noisy graph sampling information. We validate our proposed approach by comparing its performance against competing methods on synthetic and real-world datasets.
Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into networks used to model them. Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. The purpose of this paper is to overview different examples of geometric deep learning problems and present available solutions, key difficulties, applications, and future research directions in this nascent field.
Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, superresolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs.
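The idea can be summarized in a few lines of code: a randomly initialized convolutional generator is fit to map a fixed noise input to the corrupted image, and early stopping yields the restored image. The tiny network and iteration count below are illustrative stand-ins, not the hourglass architectures and schedules used in the paper.

```python
# Deep-image-prior style denoising: the untrained network structure itself acts as the prior.
import torch
import torch.nn as nn

def dip_denoise(noisy, n_iters=1800, lr=0.01):
    """noisy: corrupted image tensor of shape (1, C, H, W) with values in [0, 1]."""
    C = noisy.shape[1]
    net = nn.Sequential(                       # stand-in generator
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, C, 3, padding=1), nn.Sigmoid(),
    )
    z = torch.randn(1, 32, noisy.shape[2], noisy.shape[3])   # fixed random input
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_iters):                   # early stopping acts as the regularizer
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(z), noisy)
        loss.backward()
        opt.step()
    return net(z).detach()
```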
Graph convolutional networks have become indispensable for deep learning on graph-structured data. Most existing graph convolutional networks share two big shortcomings. First, they are essentially low-pass filters and thus ignore the potentially useful middle and high frequency bands of graph signals. Second, the bandwidths of existing graph convolutional filters are fixed: the parameters of a graph convolutional filter only transform the graph inputs without changing the curvature of the filter function. In practice, unless we have expert domain knowledge, we are not certain whether the frequencies at a certain point should be retained or cut off. In this paper, we propose Automatic Graph Convolutional Networks (AutoGCN) to capture the full spectrum of graph signals and automatically update the bandwidth of the graph convolutional filters. While it is based on graph spectral theory, our AutoGCN is also localized in space and has a spatial form. Experimental results show that AutoGCN achieves significant improvements over baseline methods that act only as low-pass filters.
Graph convolutional networks (GCNs) have proven to be a powerful concept that has been successfully applied to a large variety of tasks across many domains over the past years. In this work we study the theory that paved the way for the definition of GCNs, including relevant parts of classical graph theory. We also discuss and experimentally demonstrate key properties and limitations of GCNs, such as those caused by the statistical dependency of samples introduced by the edges of the graph, which leads to biased estimates of the full gradient. Another limitation we discuss is the negative impact of minibatch sampling on model performance. As a consequence, the gradient is computed over the whole dataset during parameter updates, undermining scalability to large graphs. To address this, we investigate alternative methods that allow sampling only a subset of the data in each iteration while still safely learning good parameters. We reproduce the results reported in the work of Kipf et al. and propose an implementation inspired by SIGN, a sampling-free minibatch approach. Finally, we compare the two implementations on a benchmark dataset, demonstrating that they are comparable in terms of prediction accuracy on the semi-supervised node classification task.
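As a rough illustration of what a sampling-free minibatch pipeline looks like (in the spirit of SIGN): powers of a normalized adjacency operator applied to the node features are precomputed once, so training can use plain random minibatches of nodes without any neighborhood sampling. The hop count and minibatch size below are illustrative assumptions, and the downstream classifier is omitted.

```python
# Precompute multi-hop propagated features once; minibatching then needs no graph access.
import numpy as np

def precompute_sign_features(A_hat, X, r=3):
    """A_hat: normalized adjacency (n, n); X: node features (n, d); r: number of hops."""
    feats = [X]
    for _ in range(r):
        feats.append(A_hat @ feats[-1])          # one extra hop of propagation per power
    return np.concatenate(feats, axis=1)         # (n, (r + 1) * d), fixed before training

# Because propagation is precomputed, a minibatch is just a row slice, e.g.
# batch = sign_feats[np.random.choice(n, size=256, replace=False)]
```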