Over the last decade, signal processing on graphs has become a very active field of research. In particular, the number of applications in statistics or deep learning that use frames built on graphs (such as wavelets on graphs) has increased significantly. We consider in particular the problem of signal denoising on graphs via a data-driven wavelet tight frame methodology. This adaptive approach is based on a threshold calibrated using Stein's unbiased risk estimate (SURE), adapted to the tight frame representation. It can be extended to large graphs using Chebyshev-Jackson polynomial approximations, which allow fast computation of the wavelet coefficients without requiring the eigendecomposition of the Laplacian. However, the overcomplete nature of a tight frame transforms white noise into correlated noise. As a result, the covariance of the transformed noise appears in the divergence term of the SURE, which would in principle require computing and storing the frame, leading to impractical computations for large graphs. To estimate this covariance, we develop and analyze a Monte Carlo strategy based on fast transforms of zero-mean, unit-variance random variables. This new data-driven denoising methodology also finds a natural application in differential privacy. A comprehensive performance analysis is carried out on graphs of varying size, from both real and simulated data.
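The Monte Carlo covariance estimation described above can be sketched with a matrix-free probe of the frame analysis operator: the diagonal of W Wᵀ equals E[(Wz)²] entrywise when z has i.i.d. zero-mean, unit-variance entries. A minimal illustration, assuming only a black-box product `apply_W`; the function name, the Rademacher variables, and the sample size are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

def mc_frame_cov_diag(apply_W, n, n_mc=2000, seed=0):
    """Monte Carlo estimate of diag(W W^T) for a frame analysis operator W.

    Uses E[(W z)_i^2] = (W W^T)_{ii} for z with i.i.d. zero-mean,
    unit-variance entries (Rademacher here); only matrix-free products
    apply_W(z) are needed, never the frame W itself.
    """
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(n_mc):
        z = rng.choice([-1.0, 1.0], size=n)   # zero mean, unit variance
        wz = apply_W(z)
        acc = acc + wz ** 2                   # accumulate squared coefficients
    return acc / n_mc
```

When `apply_W` is a fast transform (e.g., a Chebyshev-approximated wavelet analysis), each probe costs one transform, so the estimate never materializes the frame.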
Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing, along with a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas. We then summarize recent advances in developing basic GSP tools, including methods for sampling, filtering or graph learning. Next, we review progress in several application areas using GSP, including processing and analysis of sensor network data, biological data, and applications to image processing and machine learning.
In applications such as social, energy, transportation, sensor, and neuronal networks, high-dimensional data naturally reside on the vertices of weighted graphs. The emerging field of signal processing on graphs merges algebraic and spectral graph theoretic concepts with computational harmonic analysis to process such signals on graphs. In this tutorial overview, we outline the main challenges of the area, discuss different ways to define graph spectral domains, which are the analogues to the classical frequency domain, and highlight the importance of incorporating the irregular structures of graph data domains when processing signals on graphs. We then review methods to generalize fundamental operations such as filtering, translation, modulation, dilation, and downsampling to the graph setting, and survey the localized, multiscale transforms that have been proposed to efficiently extract information from high-dimensional data on graphs. We conclude with a brief discussion of open issues and possible extensions.
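To make the graph spectral domain concrete: the eigendecomposition of the combinatorial Laplacian provides the analogue of the Fourier basis, with eigenvalues playing the role of frequencies. A minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def graph_fourier(W, f):
    """Graph Fourier transform of a signal f on a weighted graph W.

    Returns the Laplacian eigenvalues (the graph frequencies), the
    eigenvector basis U, and the GFT coefficients U^T f. The inverse
    transform is simply U @ coeffs.
    """
    L = np.diag(W.sum(axis=1)) - W   # combinatorial Laplacian L = D - W
    lam, U = np.linalg.eigh(L)       # ascending frequencies, Fourier basis
    return lam, U, U.T @ f
```

Filtering in the spectral domain then amounts to reweighting the coefficients by a kernel of the eigenvalues before inverting.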
We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph. Our approach is based on defining scaling using the graph analogue of the Fourier domain, namely the spectral decomposition of the discrete graph Laplacian L. Given a wavelet generating kernel g and a scale parameter t, we define the scaled wavelet operator T_g^t = g(tL). The spectral graph wavelets are then formed by localizing this operator by applying it to an indicator function. Subject to an admissibility condition on g, this procedure defines an invertible transform. We explore the localization properties of the wavelets in the limit of fine scales. Additionally, we present a fast Chebyshev polynomial approximation algorithm for computing the transform that avoids the need for diagonalizing L. We highlight potential applications of the transform through examples of wavelets on graphs corresponding to a variety of different problem domains.
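The Chebyshev idea can be sketched as follows: expand g in Chebyshev polynomials on [0, λ_max] and evaluate the expansion through the three-term recurrence applied to matrix-vector products with L, so no diagonalization is needed. A sketch of one standard variant (the quadrature used for the coefficients, the expansion order, and the names are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def cheb_op(L, f, g, lmax, m=40, K=200):
    """Approximate g(L) f via an order-m Chebyshev expansion of g on [0, lmax].

    Only matrix-vector products with L are used. Coefficients come from
    a cosine quadrature rule with K nodes; the spectrum is mapped to
    [-1, 1] through y = (x - a) / a with a = lmax / 2.
    """
    a = lmax / 2.0
    # Chebyshev coefficients of y -> g(a * (y + 1)) on [-1, 1]
    theta = (np.arange(K) + 0.5) * np.pi / K
    vals = g(a * (np.cos(theta) + 1.0))
    c = 2.0 / K * np.array([np.sum(vals * np.cos(k * theta))
                            for k in range(m + 1)])
    # Three-term recurrence T_k on the shifted operator (L - a I) / a
    t_old = f
    t_cur = (L @ f - a * f) / a
    out = 0.5 * c[0] * t_old + c[1] * t_cur
    for k in range(2, m + 1):
        t_new = 2.0 * (L @ t_cur - a * t_cur) / a - t_old
        out = out + c[k] * t_new
        t_old, t_cur = t_new, t_cur if False else t_cur  # placeholder
        t_old, t_cur = t_cur, t_new
    return out
```

For a sparse L with |E| edges, each recurrence step costs O(|E|), so the whole transform is O(m |E|).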
We introduce a novel harmonic analysis for functions defined on the vertices of a directed graph, of which the random walk operator is the cornerstone. As a first step, we consider the set of eigenvectors of the random walk operator as a nonorthogonal Fourier-type basis for functions over directed graphs. We obtain a frequency interpretation by linking the variation of the eigenvectors of the random walk operator, obtained from their Dirichlet energy, to the real part of their associated eigenvalues. From this Fourier basis we can go further and build multiscale analyses on directed graphs. By extending the diffusion wavelets framework of Coifman and Maggioni to directed graphs, we propose both a redundant wavelet transform and a decimated wavelet transform. The development of our harmonic analysis on directed graphs thus leads us to consider semi-supervised learning problems and signal modeling problems on directed graphs, highlighting the efficiency of our framework.
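The basic object here, the random walk operator of a directed graph, is simple to write down. A minimal sketch, assuming every vertex has positive out-degree (names illustrative):

```python
import numpy as np

def random_walk_operator(W):
    """Random walk operator P = D_out^{-1} W of a directed graph.

    Row i of W holds the weights of the out-edges of vertex i, so each
    row of P is a probability distribution. The (generally nonorthogonal)
    eigenvectors of P give a Fourier-type basis, and the real parts of
    the eigenvalues order them by a Dirichlet-energy notion of frequency.
    """
    d_out = W.sum(axis=1)
    return W / d_out[:, None]
```

Note that P is row-stochastic by construction, so the constant vector is always a right eigenvector with eigenvalue 1.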
We study a multivariate version of trend filtering, called Kronecker trend filtering or KTF, for the case in which the design points form a lattice in $d$ dimensions. KTF is a natural extension of univariate trend filtering (Steidl et al., 2006; Kim et al., 2009; Tibshirani, 2014), and is defined by minimizing a penalized least squares problem whose penalty term sums the absolute (higher-order) differences of the parameter estimates along each coordinate direction. The corresponding penalty operator can be written in terms of Kronecker products of univariate trend filtering penalty operators, hence the name Kronecker trend filtering. Equivalently, one can view KTF as an $\ell_1$-penalized basis regression problem, where the basis functions are tensor products of falling factorial functions, a piecewise polynomial (discrete spline) basis that underlies univariate trend filtering. This paper is a unification and extension of the results of Sadhanala et al. (2016, 2017). We develop a complete set of theoretical results that describe the behavior of $k^{\mathrm{th}}$-order Kronecker trend filtering for $k \geq 0$ and $d \geq 1$. This reveals a number of interesting phenomena, including the dominance of KTF over linear smoothers in estimating heterogeneously smooth functions, and a phase transition at $d = 2(k+1)$, a boundary past which (on the high-dimension-to-smoothness side) linear smoothers fail to be consistent entirely. We also leverage recent results on discrete splines from Tibshirani (2020), in particular, discrete spline interpolation results, which enable us to extend the KTF estimate to any off-lattice location in constant time (independent of the size of the lattice).
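The Kronecker structure of the penalty can be illustrated for a 2-d lattice: the penalty operator stacks Kronecker products of a univariate difference operator with the identity, one product per coordinate direction. A sketch using plain (k+1)-th order differences on an evenly spaced n x n lattice (a simplification; function names are illustrative):

```python
import numpy as np

def diff_op(n, k):
    """(k+1)-th order forward difference operator on n evenly spaced points."""
    D = np.eye(n)
    for _ in range(k + 1):
        D = D[1:] - D[:-1]   # each pass takes one more difference
    return D

def ktf_penalty(n, k):
    """KTF penalty operator for a vectorized n x n lattice signal.

    Stacks row-direction and column-direction univariate penalties:
    [kron(D, I); kron(I, D)], so the l1 norm of the product sums absolute
    (k+1)-th differences along each coordinate direction.
    """
    D = diff_op(n, k)
    I = np.eye(n)
    return np.vstack([np.kron(D, I), np.kron(I, D)])
```

The KTF estimate then solves min_theta ||y - theta||^2 / 2 + lam * ||ktf_penalty(n, k) @ theta||_1, a generalized lasso problem.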
Graph representation learning has many real-world applications, from super-resolution imaging and 3D computer vision to drug repurposing, protein classification, and social network analysis. An adequate representation of graph data is vital to the learning performance of statistical or machine learning models for graph-structured data. In this paper, we propose a novel multiscale representation system for graph data, called decimated framelets, which form a localized tight frame on the graph. The decimated framelet system allows storage of the graph data representation on a coarse-grained chain and processing of the graph data at multiple scales, where at each scale the data is stored on a subgraph. Based on this, we establish decimated G-framelet transforms for the decomposition and reconstruction of graph data at multiple resolutions via a constructive data-driven filter bank. The graph framelets are built on a chain-based orthonormal basis that supports fast graph Fourier transforms. From this, we give a fast algorithm for the decimated G-framelet transforms, or FGT, that has linear computational complexity O(N) for a graph of size N. The theory of decimated framelets and the FGT is verified with numerical examples on random graphs. The effectiveness in real-world applications is demonstrated, including multiresolution analysis for traffic networks and graph neural networks for graph classification tasks.
Spectral-based graph neural networks (SGNNs) have been attracting increasing attention in graph representation learning. However, existing SGNNs are limited to implementing graph filters with rigid transforms (e.g., the graph Fourier transform or predefined graph wavelet transforms) and cannot adapt to the signals residing on the graph and the task at hand. In this paper, we propose a novel class of graph neural networks that realize graph filters with adaptive graph wavelets. Specifically, the adaptive graph wavelets are learned with neural-network-parameterized lifting structures, where structure-aware lifting operations (i.e., prediction and update operations) are developed to jointly consider graph structure and node features. We propose to lift based on diffusion wavelets to alleviate the structural information loss induced by partitioning non-bipartite graphs. By design, the locality and sparsity of the resulting wavelet transforms, as well as the scalability of the lifting structures, are guaranteed. We further derive a soft-thresholding filtering operation by learning sparse graph representations in terms of the learned wavelets, yielding localized, efficient, and scalable wavelet-based graph filters. To ensure that the learned graph representations are invariant to node permutations, a layer is employed at the input of the network to reorder the nodes according to their local topological information. We evaluate the proposed networks on both node-level and graph-level representation learning tasks on benchmark citation and bioinformatics graph datasets. Extensive experiments demonstrate the superiority of the proposed networks over existing SGNNs in terms of accuracy, efficiency, and scalability.
We study a regression problem on a compact manifold M. In order to take advantage of the underlying geometry and topology of the data, the regression task is performed on the basis of the first several eigenfunctions of the Laplace-Beltrami operator of the manifold, regularized with topological penalties. The proposed penalties are based on the topology of the sublevel sets of either the eigenfunctions or the estimated function. The overall approach is shown to yield promising and competitive performance on various applications to both synthetic and real data sets. We also provide theoretical guarantees on the regression function estimates, in terms of both their prediction error and their smoothness (in a topological sense). Taken together, these results support the relevance of our approach in the case where the targeted function is "topologically smooth."
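Setting the topological penalties aside, the basic regression step, least squares on the first eigenfunctions of a (discretized) Laplace-Beltrami operator, reduces to a spectral projection. A toy sketch with a graph Laplacian standing in for the manifold operator (the discretization and names are illustrative assumptions):

```python
import numpy as np

def eigenbasis_regression(L, y, m):
    """Least-squares fit of observations y on the first m eigenvectors of L.

    L is a symmetric (graph) Laplacian serving as a discrete stand-in
    for the Laplace-Beltrami operator. Since the eigenbasis is
    orthonormal, the least-squares fit is a spectral projection.
    (The paper's topological penalties are omitted in this sketch.)
    """
    lam, U = np.linalg.eigh(L)
    B = U[:, :m]              # m lowest-frequency eigenfunctions
    return B @ (B.T @ y)      # project y onto their span
```

The number of eigenfunctions m plays the role of a smoothing parameter: small m enforces smoothness, large m lets the fit track the data.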
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2-D image fragments (e.g., blocks) into 3-D data arrays which we call "groups." Collaborative filtering is a special procedure developed to deal with these 3-D groups. We realize it using the three successive steps: 3-D transformation of a group, shrinkage of the transform spectrum, and inverse 3-D transformation. The result is a 3-D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.
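The core shrinkage step of collaborative filtering, transforming a 3-D group of similar blocks, shrinking the spectrum, and transforming back, can be sketched compactly. Here an FFT stands in for the paper's separable transforms and hard thresholding for its shrinkage; this is a simplification of one step, not the full grouping/aggregation pipeline:

```python
import numpy as np

def collaborative_filter_group(group, thresh):
    """Jointly filter a stack of similar image blocks.

    group: (G, B, B) array of G grouped B x B blocks. A 3-D transform
    over the whole stack exploits both within-block structure and the
    similarity across blocks; small coefficients (mostly noise) are
    zeroed before inverting. (FFT-based simplification of the
    collaborative filtering step.)
    """
    spec = np.fft.fftn(group)
    spec[np.abs(spec) < thresh] = 0.0     # hard-threshold the 3-D spectrum
    return np.real(np.fft.ifftn(spec))
```

In the full algorithm, the filtered blocks are returned to their positions and the overlapping estimates are aggregated by weighted averaging.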
This paper formulates a general cross-validation framework for signal denoising. The general framework is then applied to nonparametric regression methods such as trend filtering and dyadic CART. The resulting cross-validated versions are shown to attain nearly the same rates of convergence as are known for their optimally tuned analogues. There did not exist any previous theoretical analysis of cross-validated versions of trend filtering or dyadic CART. To illustrate the generality of the framework, we also propose and study cross-validated versions of two fundamental estimators: the lasso for high-dimensional linear regression and singular value thresholding for matrix estimation. Our general framework is inspired by the ideas of Chatterjee and Jafarov (2015) and is potentially applicable to a wide range of estimation methods that use tuning parameters.
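The flavor of such a framework can be illustrated with a toy sample-splitting scheme: fit on the even-indexed samples, score each candidate fit against the held-out odd-indexed samples, and return the tuning parameter with the best held-out error. A sketch with a moving-average smoother standing in for trend filtering or dyadic CART (the smoother, the splitting rule, and all names are illustrative, not the paper's scheme):

```python
import numpy as np

def moving_average(y, w):
    # Centered moving average with window 2w+1 (window shrinks at edges).
    return np.array([y[max(0, i - w): i + w + 1].mean() for i in range(len(y))])

def cv_choose_window(y, windows):
    """Pick a smoothing window by sample splitting.

    Fit on the even-indexed samples; since neighboring design points are
    close, the fitted value at even index i proxies for the held-out odd
    sample just after it. Return the window minimizing held-out MSE.
    """
    even, odd = y[0::2], y[1::2]
    errors = [np.mean((moving_average(even, w)[: len(odd)] - odd) ** 2)
              for w in windows]
    return windows[int(np.argmin(errors))]
```

For a noisy smooth signal, the degenerate window w = 0 pays the full noise variance twice on the held-out half, so a positive window wins.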
We consider the problem of estimating a multivariate function $f_0$ of bounded variation (BV), from noisy observations $y_i = f_0(x_i) + z_i$ made at random design points $x_i \in \mathbb{R}^d$, $i=1,\ldots,n$. We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i),f_0(x_j)$) at all neighboring cells $i,j$ in the Voronoi diagram. This is seen to be equivalent to a variational optimization problem that regularizes according to the usual continuum (measure-theoretic) notion of TV, once we restrict the domain to functions that are piecewise constant over the Voronoi diagram. The regression estimator under consideration hence performs (shrunken) local averaging over adaptively formed unions of Voronoi cells, and we refer to it as the Voronoigram, following the ideas in Koenker (2005), and drawing inspiration from Tukey's regressogram (Tukey, 1961). Our contributions in this paper span both the conceptual and theoretical frontiers: we discuss some of the unique properties of the Voronoigram in comparison to TV-regularized estimators that use other graph-based discretizations; we derive the asymptotic limit of the Voronoi TV functional; and we prove that the Voronoigram is minimax rate optimal (up to log factors) for estimating BV functions that are essentially bounded.
Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into networks used to model them. Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. The purpose of this paper is to overview different examples of geometric deep learning problems and present available solutions, key difficulties, applications, and future research directions in this nascent field.
Many modern data sets, from areas such as neuroimaging and geostatistics, come in the form of random samples of tensor data, which can be understood as noisy observations of smooth multidimensional random functions. Most traditional techniques from functional data analysis are plagued by the curse of dimensionality and quickly become intractable as the dimension of the domain increases. In this paper, we propose a framework for learning continuous representations from a sample of multidimensional functional data that is immune to several manifestations of the curse. These representations are constructed using a set of separable basis functions that are defined to optimally adapt to the data. We show that the resulting estimation problem can be solved efficiently by the tensor decomposition of a carefully defined reduced transformation of the data. Roughness regularization is incorporated using a penalty based on differential operators. Relevant theoretical properties are also established. The advantages of our method over competing approaches are demonstrated in a simulation study. We conclude with a real data application in neuroimaging.
We develop a multiscale approach to estimate high-dimensional probability distributions from a data set of physical fields or configurations observed in experiments or simulations. In this way we can estimate energy functions (or Hamiltonians) and efficiently generate new samples of many-body systems in various domains, from statistical physics to cosmology. Our method, the wavelet-conditional renormalization group (WC-RG), proceeds scale by scale, estimating models of the conditional probabilities of "fast degrees of freedom" conditioned on coarse-grained fields. These conditional probability distributions are modeled by energy functions associated with scale interactions and are represented in an orthogonal wavelet basis. WC-RG decomposes the microscopic energy function as a sum of interaction energies at all scales and can efficiently generate new samples by going from coarse to fine scales. Near phase transitions, it avoids the "critical slowing down" of direct estimation and sampling algorithms. This is explained theoretically by combining results from RG and wavelet theories, and is verified numerically for Gaussian and $\varphi^4$ field theories. We show that multiscale WC-RG energy-based models are more general than local potential models and can capture the physics of complex many-body interacting systems at all length scales. This is demonstrated for weak-gravitational-lensing fields reflecting dark matter distributions in cosmology, which include long-range interactions with long-tail probability distributions. WC-RG has a large number of potential applications in non-equilibrium systems, where the underlying distribution is not known \textit{a priori}. Finally, we discuss the connection between WC-RG and deep network architectures.
Quantum computing has the potential to revolutionize and change the way we live in and understand the world. This review aims to provide an accessible introduction to quantum computing with a focus on applications in statistics and data analysis. We start with an introduction to the basic concepts necessary to understand quantum computing and the differences between quantum and classical computing. We describe the core quantum subroutines that serve as the building blocks of quantum algorithms. We then review a range of quantum algorithms expected to deliver a computational advantage in statistics and machine learning. We highlight the challenges and opportunities in applying quantum computing to problems in statistics and discuss potential future research directions.
We consider the Bayesian calibration of models of block copolymer (BCP) self-assembly using image data produced by microscopy or X-ray scattering techniques. To account for the random long-range disorder in BCP equilibrium structures, we introduce auxiliary variables to represent this aleatory uncertainty. These variables, however, lead to an integrated likelihood for the high-dimensional image data that is generally intractable to evaluate. We tackle this challenging Bayesian inference problem using a likelihood-free approach based on measure transport, together with summary statistics for the image data. We also show that the expected information gain (EIG) from the observed data about the model parameters can be computed at no additional cost. Finally, we present a numerical case study based on the Ohta-Kawasaki model for diblock copolymer thin film self-assembly and top-down microscopy characterization. For calibration, we introduce several domain-specific energy- and Fourier-based summary statistics, and quantify their informativeness using the EIG. We demonstrate the power of the proposed approach for studying the effect of data corruptions and experimental designs on the calibration results.
In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance, spectral clustering appears slightly mysterious, and it is not obvious why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
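The simplest instance, bi-partitioning by the sign of the Fiedler vector of the unnormalized Laplacian, fits in a few lines; practical algorithms embed with k eigenvectors and run k-means on the rows. A minimal sketch (names illustrative):

```python
import numpy as np

def spectral_bipartition(W):
    """Split the vertices of a connected weighted graph into two groups.

    Computes the Fiedler vector (eigenvector of the second-smallest
    eigenvalue of the unnormalized Laplacian L = D - W) and partitions
    the vertices by its sign, a relaxation of the minimum-cut problem.
    """
    d = W.sum(axis=1)
    L = np.diag(d) - W
    _, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)
```

Which side gets label 0 or 1 is arbitrary, since eigenvectors are defined only up to sign.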
There has recently been intense activity on methods for embedding very high-dimensional and nonlinear data structures, much of it in the data science and machine learning literature. We survey this activity in four parts. In the first part, we cover nonlinear methods such as principal curves, multidimensional scaling, local linear methods, ISOMAP, graph-based methods and diffusion maps, kernel-based methods, and random projections. The second part is concerned with topological embedding methods, in particular the mapping of topological properties into persistence diagrams, and the Mapper algorithm. Another type of data set experiencing huge growth is very high-dimensional network data. The task considered in the third part is how to embed such data into a vector space of moderate dimension, so as to make the data amenable to traditional techniques such as clustering and classification. Arguably, this is where the contrast between algorithmic machine learning methods and statistical modeling (the so-called stochastic block model) is sharpest, and we discuss the pros and cons of both approaches. The final part of the survey deals with embedding into $\mathbb{R}^2$, i.e., visualization. Three methods are presented, building on the methods of the first, second, and third parts: $t$-SNE, UMAP, and LargeVis. They are illustrated and compared on two simulated data sets: one consisting of a triplet of noisy Ranunculoid curves, and one of networks of increasing complexity generated with stochastic block models and two types of nodes.
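Among the surveyed methods, classical multidimensional scaling is compact enough to sketch: double-center the squared distance matrix and embed with the top eigenvectors (a minimal sketch of the classical variant only; names illustrative):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS embedding of an n x n distance matrix D into k dims.

    Double-centering B = -0.5 * J D^2 J recovers a Gram matrix when D is
    Euclidean; the top-k spectral factorization of B gives coordinates
    whose pairwise distances best reproduce D.
    """
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J
    lam, U = np.linalg.eigh(B)
    idx = np.argsort(lam)[::-1][:k]       # top-k eigenpairs
    return U[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))
```

When D comes from points that truly live in k Euclidean dimensions, the embedding reproduces the distances exactly (up to a rigid motion).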