Data-driven neighborhood definitions and graph constructions are often used in machine learning and signal processing applications. k-nearest neighbor~(kNN) and $\epsilon$-neighborhood methods are among the most common methods used for neighborhood selection, due to their computational simplicity. However, the choice of the parameters associated with these methods, such as $k$ and $\epsilon$, remains ad hoc. We make two main contributions in this paper. First, we present an alternative view of neighborhood selection, showing that neighborhood construction is equivalent to a sparse signal approximation problem. Second, we propose an algorithm, non-negative kernel regression~(NNK), for obtaining neighborhoods that lead to better sparse representations. NNK bears similarities to the orthogonal matching pursuit approach to signal representation and possesses desirable geometric and theoretical properties. Experiments demonstrate (i) the robustness of the NNK algorithm for neighborhood and graph construction, (ii) its ability to adapt the number of neighbors to the data properties, and (iii) its superior performance in local neighborhood and graph-based machine learning tasks.
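A minimal sketch of the core step, assuming a Gaussian kernel: an initial set of kNN candidates is pruned by solving a non-negative least-squares (kernel regression) problem, and candidates receiving zero weight are dropped. Function names and parameter values here are illustrative, not the authors' reference implementation.

```python
import numpy as np
from scipy.optimize import nnls

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian kernel matrix between the rows of X and the rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nnk_neighbors(X, i, k=10, sigma=1.0):
    """Sparse, non-negative reconstruction weights of point i over kNN candidates."""
    dists = np.linalg.norm(X - X[i], axis=1)
    cand = np.argsort(dists)[1:k + 1]                          # kNN candidates (skip i)
    K_S = gaussian_kernel(X[cand], X[cand], sigma)             # kernel among candidates
    k_i = gaussian_kernel(X[cand], X[i:i + 1], sigma).ravel()  # kernel to the query
    # min_{theta >= 0} 0.5 theta^T K_S theta - k_i^T theta, recast via the
    # Cholesky factor K_S = L L^T as the NNLS problem ||L^T theta - L^{-1} k_i||^2.
    L = np.linalg.cholesky(K_S + 1e-8 * np.eye(len(cand)))
    theta, _ = nnls(L.T, np.linalg.solve(L, k_i))
    keep = theta > 1e-10                                       # zero-weight candidates pruned
    return cand[keep], theta[keep]

X = np.random.randn(200, 5)
nbrs, w = nnk_neighbors(X, i=0, k=15)
print(nbrs, w)  # typically fewer than 15 neighbors survive
```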
More and more systems are being designed by collecting large amounts of data and then directly optimizing system parameters using the data obtained. Often, this is done without analyzing the structure of the dataset. As task complexity, data size, and parameter counts grow into the millions or even billions, data summarization is becoming a major challenge. In this work, we study data summarization via dictionary learning~(DL), leveraging properties of the recently introduced non-negative kernel regression (NNK) graphs. Unlike previous DL techniques such as kSVD, our proposed NNK-Means learns geometric dictionaries with atoms that lie in the input data space. Experiments show that summarization using NNK-Means can provide better class separation compared to linear and kernel versions of kMeans and kSVD. Moreover, NNK-Means is scalable, with a runtime complexity similar to that of kMeans.
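A heavily simplified caricature of the alternation, assuming plain non-negative least-squares coding in the input space (the actual method performs NNK coding in kernel space) and mean-style atom updates; all names and sizes are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def nnk_means(X, n_atoms=10, n_iter=20, seed=0):
    """Toy alternation between non-negative sparse coding and mean-style
    atom updates: codes are constrained to be non-negative (as in NNK)
    and atoms are updated as weighted means of the points that use them
    (as in kMeans), so atoms stay in the input data space."""
    rng = np.random.default_rng(seed)
    D = X[rng.choice(len(X), n_atoms, replace=False)].copy()   # init atoms from data
    for _ in range(n_iter):
        W = np.stack([nnls(D.T, x)[0] for x in X])             # non-negative codes
        for a in range(n_atoms):                               # weighted-mean update
            if W[:, a].sum() > 1e-12:
                D[a] = (W[:, a][:, None] * X).sum(0) / W[:, a].sum()
    return D, W

X = np.random.randn(300, 2)
atoms, codes = nnk_means(X, n_atoms=5)
print(atoms.shape, codes.shape)  # (5, 2), (300, 5)
```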
We propose a family of learning algorithms based on a new form of regularization that allows us to exploit the geometry of the marginal distribution. We focus on a semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner. Some transductive graph learning algorithms and standard methods, including support vector machines and regularized least squares, can be obtained as special cases. We use properties of reproducing kernel Hilbert spaces to prove new Representer theorems that provide a theoretical basis for the algorithms. As a result (in contrast to purely graph-based approaches), we obtain a natural out-of-sample extension to novel examples and are thus able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semi-supervised algorithms are able to use unlabeled data effectively. Finally, we briefly discuss unsupervised and fully supervised learning within our general framework.
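A minimal sketch of one special case, Laplacian-regularized least squares, assuming a Gram matrix K over all (labeled and unlabeled) points and a graph Laplacian L capturing the marginal geometry; the regularization weights and the crude affinity construction in the usage lines are illustrative.

```python
import numpy as np

def laprls(K, L, y, labeled, gamma_A=1e-2, gamma_I=1e-2):
    """Laplacian-regularized least squares: minimize
    ||J(K a - y)||^2 + gamma_A a^T K a + gamma_I (K a)^T L (K a)
    over expansion coefficients a, where J zeroes out unlabeled rows.
    Setting the gradient to zero yields the linear system below."""
    n = K.shape[0]
    J = np.zeros((n, n))
    J[labeled, labeled] = 1.0
    return np.linalg.solve(J @ K + gamma_A * np.eye(n) + gamma_I * L @ K, J @ y)

n = 100
X = np.random.randn(n, 2)
K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))
W = K * (K > np.quantile(K, 0.9))        # crude sparse affinity graph
L = np.diag(W.sum(1)) - W
y = np.zeros(n)
labeled = np.arange(10)                  # only 10 labeled points
y[labeled] = np.sign(X[labeled, 0])
f = K @ laprls(K, L, y, labeled)         # predictions on all points
```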
Invoking the manifold assumption in machine learning requires knowledge of the manifold's geometry and dimension, and theory dictates how many samples are required. In application data, however, sampling may not be uniform, manifold properties are unknown and (possibly) non-pure; this implies that neighborhoods must adapt to the local structure. We introduce an algorithm for inferring adaptive neighborhoods for data given by a similarity kernel. Starting with a locally conservative neighborhood (Gabriel) graph, we sparsify it iteratively according to a weighted counterpart. In each step, a linear program yields minimal neighborhoods globally, and a volumetric statistic reveals neighbor outliers likely to violate manifold geometry. We apply adaptive neighborhoods to nonlinear dimensionality reduction, geodesic computation, and dimension estimation. Comparisons against standard algorithms, e.g., using k-nearest neighbors, demonstrate their usefulness.
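The locally conservative starting graph is easy to state in code. Below is a minimal, O(n^3) sketch of the Gabriel graph; the iterative LP-based sparsification that the algorithm applies afterwards is not shown.

```python
import numpy as np

def gabriel_graph(X):
    """Edges of the Gabriel graph: (i, j) is kept iff no third point falls
    inside the ball whose diameter is the segment from X[i] to X[j],
    i.e. iff d(i,k)^2 + d(j,k)^2 >= d(i,j)^2 for every other point k."""
    n = len(X)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if all(D2[i, k] + D2[j, k] >= D2[i, j]
                   for k in range(n) if k not in (i, j)):
                edges.append((i, j))
    return edges

X = np.random.randn(50, 2)
print(len(gabriel_graph(X)), "Gabriel edges among 50 points")
```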
In this work we study statistical properties of graph-based algorithms for multi-manifold clustering (MMC). In MMC the goal is to retrieve the multi-manifold structure underlying a given Euclidean data set when the data set is assumed to be obtained by sampling a distribution on a union of manifolds $\mathcal{M} = \mathcal{M}_1 \cup\dots \cup \mathcal{M}_N$ that may intersect with each other and that may have different dimensions. We investigate sufficient conditions that similarity graphs on data sets must satisfy in order for their corresponding graph Laplacians to capture the right geometric information to solve the MMC problem. More precisely, we provide high probability error bounds for the spectral approximation of a tensorized Laplacian on $\mathcal{M}$ with a suitable graph Laplacian built from the observations; the recovered tensorized Laplacian contains all geometric information of all the individual underlying manifolds. We provide an example of a family of similarity graphs, which we call annular proximity graphs with angle constraints, satisfying these sufficient conditions. We contrast our family of graphs with other constructions in the literature based on the alignment of tangent planes. Extensive numerical experiments expand the insights that our theory provides on the MMC problem.
The purpose of this review is to introduce the reader to graph kernels and their application to classification problems in cheminformatics. Graph kernels are functions that allow us to infer the chemical properties of molecules, and they can help with tasks such as finding compounds suitable for drug design. The use of kernel methods is just one particular way of quantifying similarity between graphs. We restrict our discussion to this approach, although popular alternatives have emerged in recent years, most notably graph neural networks.
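As a concrete, deliberately minimal instance of quantifying graph similarity with a kernel, the sketch below computes a vertex-label histogram kernel on toy molecules; more expressive families (e.g., Weisfeiler-Lehman or shortest-path kernels) refine the labels before histogramming, but the pattern is the same.

```python
from collections import Counter

def vertex_histogram_kernel(labels_g1, labels_g2):
    """Dot product of the vertex-label histograms of two graphs."""
    h1, h2 = Counter(labels_g1), Counter(labels_g2)
    return sum(h1[l] * h2[l] for l in h1.keys() & h2.keys())

# Two toy molecules represented as multisets of atom labels
print(vertex_histogram_kernel(["C", "C", "O", "H"], ["C", "O", "O"]))  # 2*1 + 1*2 = 4
```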
Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing, along with a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas. We then summarize recent advances in developing basic GSP tools, including methods for sampling, filtering or graph learning. Next, we review progress in several application areas using GSP, including processing and analysis of sensor network data, biological data, and applications to image processing and machine learning.
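To make one of the basic GSP tools mentioned above concrete, here is a minimal sketch of graph filtering: transform a graph signal into the Laplacian's eigenbasis, attenuate high graph frequencies, and transform back. The toy graph and cutoff are illustrative.

```python
import numpy as np

def graph_lowpass(A, x, cutoff):
    """Ideal low-pass graph filter: project a graph signal x onto the
    Laplacian eigenvectors whose eigenvalue (graph frequency) is below cutoff."""
    L = np.diag(A.sum(1)) - A              # combinatorial Laplacian
    lam, U = np.linalg.eigh(L)             # graph Fourier basis
    h = (lam < cutoff).astype(float)       # ideal low-pass frequency response
    return U @ (h * (U.T @ x))             # filter in the spectral domain

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
x = np.array([1.0, -1.0, 1.0, 5.0])
print(graph_lowpass(A, x, cutoff=1.5))     # smoothed version of x
```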
There has recently been intense activity in embedding very high-dimensional and nonlinear data structures, much of it in the data science and machine learning literature. We survey this activity in four parts. In the first part we cover nonlinear methods such as principal curves, multidimensional scaling, local linear methods, ISOMAP, graph-based methods and diffusion maps, kernel-based methods, and random projections. The second part is concerned with topological embedding methods, in particular mapping topological properties into persistence diagrams and the Mapper algorithm. Another type of data set that has seen tremendous growth is very high-dimensional network data. The task considered in the third part is how to embed such data into a vector space of moderate dimension, so as to make the data amenable to traditional techniques such as clustering and classification. Arguably, this is where the contrast between algorithmic machine learning methods and statistical modeling (so-called stochastic block modeling) is sharpest; in the paper we discuss the pros and cons of both approaches. The final part of the survey deals with embedding into $\mathbb{R}^2$, that is, with visualization. Three methods are presented, building on the methods of the first, second, and third parts: $t$-SNE, UMAP, and LargeVis. They are illustrated and compared on two simulated data sets, one consisting of a triplet of noisy ranunculoid curves, and the other of networks of increasing complexity generated with stochastic block models and two types of nodes.
In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance, spectral clustering appears slightly mysterious, and it is not obvious why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
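A minimal sketch of one of the common algorithms the tutorial derives (the variant based on the symmetric normalized Laplacian), assuming a precomputed affinity matrix W; parameter choices are illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_clustering(W, k):
    """Normalized spectral clustering: embed with the k smallest eigenvectors
    of the symmetric normalized Laplacian, row-normalize, then run k-means."""
    d = W.sum(1)
    D_isqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_sym = np.eye(len(W)) - D_isqrt @ W @ D_isqrt
    _, U = np.linalg.eigh(L_sym)
    V = U[:, :k]                                     # k smallest eigenvectors
    V /= np.linalg.norm(V, axis=1, keepdims=True) + 1e-12
    _, labels = kmeans2(V, k, minit="++", seed=0)
    return labels

X = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 6])
W = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))   # Gaussian affinity
print(spectral_clustering(W, 2))                     # recovers the two blobs
```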
We consider the problem of estimating a multivariate function $f_0$ of bounded variation (BV), from noisy observations $y_i = f_0(x_i) + z_i$ made at random design points $x_i \in \mathbb{R}^d$, $i=1,\ldots,n$. We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i),f_0(x_j)$) at all neighboring cells $i,j$ in the Voronoi diagram. This is seen to be equivalent to a variational optimization problem that regularizes according to the usual continuum (measure-theoretic) notion of TV, once we restrict the domain to functions that are piecewise constant over the Voronoi diagram. The regression estimator under consideration hence performs (shrunken) local averaging over adaptively formed unions of Voronoi cells, and we refer to it as the Voronoigram, following the ideas in Koenker (2005), and drawing inspiration from Tukey's regressogram (Tukey, 1961). Our contributions in this paper span both the conceptual and theoretical frontiers: we discuss some of the unique properties of the Voronoigram in comparison to TV-regularized estimators that use other graph-based discretizations; we derive the asymptotic limit of the Voronoi TV functional; and we prove that the Voronoigram is minimax rate optimal (up to log factors) for estimating BV functions that are essentially bounded.
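A minimal sketch of the estimator's structure, assuming unit edge weights (the paper weights each absolute difference by the measure of the shared Voronoi face) and using the generic convex solver cvxpy rather than a specialized TV routine; Voronoi adjacency is read off the Delaunay triangulation, since two Voronoi cells share a face exactly when their sites share a Delaunay edge.

```python
import numpy as np
import cvxpy as cp
from scipy.spatial import Delaunay

def voronoigram(X, y, lam=1.0):
    """Discrete-TV-regularized regression over the Voronoi diagram of the
    design points X, with unit edge weights as a simplification."""
    tri = Delaunay(X)
    edges = set()
    for s in tri.simplices:                      # Delaunay edges = Voronoi adjacency
        for a in range(len(s)):
            for b in range(a + 1, len(s)):
                edges.add((min(s[a], s[b]), max(s[a], s[b])))
    theta = cp.Variable(len(X))
    tv = sum(cp.abs(theta[i] - theta[j]) for i, j in edges)
    cp.Problem(cp.Minimize(cp.sum_squares(theta - y) + lam * tv)).solve()
    return theta.value

X = np.random.rand(80, 2)
y = (X[:, 0] > 0.5).astype(float) + 0.1 * np.random.randn(80)  # noisy step function
print(voronoigram(X, y, lam=0.5)[:5])
```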
Kernel matrices, as well as weighted graphs represented by them, are ubiquitous objects in machine learning, statistics and other related fields. The main drawback of using kernel methods (learning and inference using kernel matrices) is efficiency -- given $n$ input points, most kernel-based algorithms need to materialize the full $n \times n$ kernel matrix before performing any subsequent computation, thus incurring $\Omega(n^2)$ runtime. Breaking this quadratic barrier for various problems has therefore been a subject of extensive research efforts. We break the quadratic barrier and obtain $\textit{subquadratic}$ time algorithms for several fundamental linear-algebraic and graph processing primitives, including approximating the top eigenvalue and eigenvector, spectral sparsification, solving linear systems, local clustering, low-rank approximation, arboricity estimation and counting weighted triangles. We build on the recent Kernel Density Estimation framework, which (after preprocessing in time subquadratic in $n$) can return estimates of row/column sums of the kernel matrix. In particular, we develop efficient reductions from $\textit{weighted vertex}$ and $\textit{weighted edge sampling}$ on kernel graphs, $\textit{simulating random walks}$ on kernel graphs, and $\textit{importance sampling}$ on matrices to Kernel Density Estimation and show that we can generate samples from these distributions in $\textit{sublinear}$ (in the support of the distribution) time. Our reductions are the central ingredient in each of our applications and we believe they may be of independent interest. We empirically demonstrate the efficacy of our algorithms on low-rank approximation (LRA) and spectral sparsification, where we observe a $\textbf{9x}$ decrease in the number of kernel evaluations over baselines for LRA and a $\textbf{41x}$ reduction in the graph size for spectral sparsification.
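The paper builds on fast KDE data structures; the sketch below substitutes a plain Monte-Carlo estimate of the kernel row sums to show how weighted vertex sampling can proceed without materializing the $n \times n$ matrix, at a cost of $O(nm)$ rather than $O(n^2)$ kernel evaluations. The sample size $m$ and the bandwidth are illustrative.

```python
import numpy as np

def sample_vertices_by_degree(X, n_samples, m=64, sigma=1.0, seed=0):
    """Weighted vertex sampling on a Gaussian kernel graph: estimate each
    row sum (weighted degree) from m random reference points, then sample
    vertices proportionally to the estimated degrees."""
    rng = np.random.default_rng(seed)
    n = len(X)
    idx = rng.integers(0, n, size=(n, m))                 # m references per vertex
    d2 = ((X[:, None, :] - X[idx]) ** 2).sum(-1)
    deg_hat = n * np.exp(-d2 / (2 * sigma ** 2)).mean(1)  # estimated row sums
    p = deg_hat / deg_hat.sum()
    return rng.choice(n, size=n_samples, p=p)

X = np.random.randn(1000, 3)
print(sample_vertices_by_degree(X, n_samples=5))
```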
We consider the nonlinear inverse problem of learning a transition operator $\mathbf{A}$ from partial observations at different times, in particular from sparse observations of entries of its powers $\mathbf{A},\mathbf{A}^2,\cdots,\mathbf{A}^{T}$. This Spatio-Temporal Transition Operator Recovery problem is motivated by the recent interest in learning time-varying graph signals that are driven by graph operators depending on the underlying graph topology. We address the nonlinearity of the problem by embedding it into a higher-dimensional space of suitable block-Hankel matrices, where it becomes a low-rank matrix completion problem, even if $\mathbf{A}$ is of full rank. For both a uniform and an adaptive random space-time sampling model, we quantify the recoverability of the transition operator via suitable measures of incoherence of these block-Hankel embedding matrices. For graph transition operators these measures of incoherence depend on the interplay between the dynamics and the graph topology. We develop a suitable non-convex iterative reweighted least squares (IRLS) algorithm, establish its quadratic local convergence, and show that, in optimal scenarios, no more than $\mathcal{O}(rn \log(nT))$ space-time samples are sufficient to ensure accurate recovery of a rank-$r$ operator $\mathbf{A}$ of size $n \times n$. This establishes that spatial samples can be substituted by a comparable number of space-time samples. We provide an efficient implementation of the proposed IRLS algorithm with space complexity of order $O(r n T)$ and per-iteration time complexity linear in $n$. Numerical experiments for transition operators based on several graph models confirm that the theoretical findings accurately track empirical phase transitions, and illustrate the applicability and scalability of the proposed algorithm.
Subspace clustering is the classical problem of clustering a collection of data samples that lie approximately in several low-dimensional subspaces. Current state-of-the-art approaches to this problem are based on the self-expressive model, which represents each sample as a linear combination of the other samples. However, these approaches require a sufficiently spread-out collection of samples for accurate representation, which may not necessarily be accessible in many applications. In this paper, we shed light on this commonly overlooked issue and argue that the distribution of data within each subspace plays a critical role in the success of self-expressive models. Our proposed solution to this issue is motivated by the central role of data augmentation in the generalization power of deep neural networks. We propose two subspace clustering frameworks, for the unsupervised and semi-supervised settings, that use augmented samples as an enlarged dictionary to improve the quality of the self-expressive representation. We present an automatic augmentation strategy for the semi-supervised problem, using a few labeled samples, that relies on the fact that the data samples lie in a union of multiple linear subspaces. Experimental results confirm the effectiveness of data augmentation, as it significantly improves the performance of general self-expressive models.
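A minimal sketch of the self-expressive step with an enlarged dictionary, assuming a sparse ($\ell_1$) formulation solved per sample with scikit-learn's Lasso; the Gaussian-noise augmentation stand-in and the penalty weight are illustrative, not the paper's augmentation policy.

```python
import numpy as np
from sklearn.linear_model import Lasso

def self_expressive_matrix(X, X_aug, alpha=0.01):
    """Sparse self-expressive coefficients with an augmented dictionary.
    Each sample is regressed on all other (original + augmented) samples;
    clustering then proceeds, e.g., by spectral clustering on |C| + |C|^T."""
    D = np.vstack([X, X_aug])                          # enlarged dictionary
    C = np.zeros((len(X), len(D)))
    for i in range(len(X)):
        mask = np.ones(len(D), bool)
        mask[i] = False                                # exclude the sample itself
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(D[mask].T, X[i])                     # columns = dictionary atoms
        C[i, mask] = lasso.coef_
    return C

X = np.random.randn(40, 10)
X_aug = X + 0.05 * np.random.randn(*X.shape)           # stand-in augmentation
print(self_expressive_matrix(X, X_aug).shape)          # (40, 80)
```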
The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.
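LMNN is posed as a semidefinite program; a common lightweight alternative is projected subgradient descent on the metric $M$. The sketch below, with illustrative step size and push/pull weighting, implements one such step under the abstract's margin criterion; it is not the authors' solver, which exploits the sparsity of active constraints.

```python
import numpy as np

def lmnn_step(X, y, M, k=3, mu=0.5, lr=1e-3):
    """One projected-subgradient step on the LMNN objective: pull each point's
    k same-class target neighbors (fixed by Euclidean distance, as in LMNN)
    closer, push differently labeled "impostors" outside a unit margin, then
    project M back onto the PSD cone."""
    G = np.zeros_like(M)
    d2 = lambda a, b: (a - b) @ M @ (a - b)            # Mahalanobis distance
    for i in range(len(X)):
        same = [j for j in range(len(X)) if y[j] == y[i] and j != i]
        targets = sorted(same, key=lambda j: np.sum((X[i] - X[j]) ** 2))[:k]
        for j in targets:
            dij = X[i] - X[j]
            G += (1 - mu) * np.outer(dij, dij)         # pull target neighbors
            for l in np.where(y != y[i])[0]:           # impostors inside margin
                if d2(X[i], X[l]) < d2(X[i], X[j]) + 1:
                    dil = X[i] - X[l]
                    G += mu * (np.outer(dij, dij) - np.outer(dil, dil))
    lam, U = np.linalg.eigh(M - lr * G)                # project onto PSD cone
    return U @ np.diag(np.maximum(lam, 0)) @ U.T

X, y = np.random.randn(60, 4), np.repeat([0, 1, 2], 20)
M = np.eye(4)
for _ in range(30):
    M = lmnn_step(X, y, M)
```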
Shape constraints, such as non-negativity, monotonicity, convexity, or supermodularity, play a key role in various applications of machine learning and statistics. However, incorporating this side information into predictive models in a hard way (for example, at all points of an interval) is a notoriously challenging problem. We propose a unified and modular convex optimization framework, relying on second-order cone (SOC) tightening, to encode hard affine SDP constraints on function derivatives for models belonging to vector-valued reproducing kernel Hilbert spaces (vRKHSs). The modular nature of the proposed approach allows one to simultaneously handle multiple shape constraints and to tighten an infinite number of constraints into finitely many ones. We prove the convergence of the proposed scheme and of its adaptive variant, leveraging geometric properties of vRKHSs. Due to the covering-based construction of the tightening, the method is particularly well-suited to tasks with small to moderate input dimensions. The efficiency of the approach is illustrated in the context of shape optimization, robotics, and econometrics.
Transportation of measure provides a versatile approach for modeling complex probability distributions, with applications in density estimation, Bayesian inference, generative modeling, and beyond. Monotone triangular transport maps, approximations of the Knothe-Rosenblatt (KR) rearrangement, are a canonical choice for these tasks. Yet the representation and parameterization of such maps have a significant impact on their generality and expressiveness, and on the properties of the optimization problem that arises in learning a map from data (e.g., via maximum likelihood estimation). We present a general framework for representing monotone triangular maps via invertible transformations of smooth functions. We establish conditions on the transformation such that the associated infinite-dimensional minimization problem has no spurious local minima, i.e., all local minima are global minima; and we show, for target distributions satisfying certain tail conditions, that the unique global minimizer corresponds to the KR map. Given samples from the target, we then propose an adaptive algorithm that estimates a sparse semi-parametric approximation of the underlying KR map. We demonstrate how this framework can be applied to joint and conditional density estimation, likelihood-free inference, and structure learning of directed graphical models, with stable generalization performance across a range of sample sizes.
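For intuition, in one dimension the KR rearrangement pushing a distribution to a standard normal reduces to the quantile transform $S(x) = \Phi^{-1}(F(x))$; the sketch below estimates it from samples via the empirical CDF. The paper's contribution, smooth invertible parameterizations of the triangular multivariate map, is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def kr_map_1d(samples):
    """Estimate the increasing 1D map S(x) = Phi^{-1}(F(x)) that pushes the
    sample distribution to a standard normal, using the empirical CDF."""
    xs = np.sort(samples)
    n = len(xs)
    F = (np.arange(1, n + 1) - 0.5) / n         # empirical CDF at the sample points
    return lambda x: norm.ppf(np.interp(x, xs, F, left=0.5 / n, right=1 - 0.5 / n))

data = np.random.exponential(size=2000)          # skewed target distribution
S = kr_map_1d(data)
z = S(data)
print(z.mean(), z.std())                         # approximately 0 and 1
```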
We study the regression problem on a compact manifold $M$. In order to take advantage of the underlying geometry and topology of the data, the regression task is performed on the basis of the first few eigenfunctions of the Laplace-Beltrami operator of the manifold, regularized with topological penalties. The proposed penalties are based on the topology of the sublevel sets of either the eigenfunctions or the estimated function. The overall approach is shown to yield promising and competitive performance on various applications to both synthetic and real data sets. We also provide theoretical guarantees on the regression function estimates, in terms of both their prediction error and their smoothness (in a topological sense). Taken together, these results support the relevance of our approach in the case where the targeted function is "topologically smooth".
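A minimal sketch of the regression step, assuming a kNN graph Laplacian as a discrete surrogate for the Laplace-Beltrami operator and omitting the topological penalty; all graph parameters are illustrative.

```python
import numpy as np

def eigenfunction_regression(X, y, n_eig=10, k=8, sigma=1.0):
    """Least-squares regression of y on the first n_eig eigenvectors of a
    kNN graph Laplacian built from X (discrete "eigenfunctions")."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / (2 * sigma ** 2))
    nn = np.argsort(D2, axis=1)[:, 1:k + 1]        # k nearest neighbors per point
    mask = np.zeros_like(W, bool)
    mask[np.repeat(np.arange(len(X)), k), nn.ravel()] = True
    W = W * (mask | mask.T)                        # symmetrized kNN affinity
    L = np.diag(W.sum(1)) - W
    _, U = np.linalg.eigh(L)
    Phi = U[:, :n_eig]                             # truncated eigenbasis
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ beta                              # fitted values

t = np.linspace(0, 2 * np.pi, 200)
X = np.c_[np.cos(t), np.sin(t)]                    # circle: a compact 1-manifold
y = np.sin(3 * t) + 0.1 * np.random.randn(200)
print(np.abs(eigenfunction_regression(X, y, n_eig=15) - y).mean())
```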
Graph clustering is a fundamental problem in unsupervised learning, with numerous applications in computer science and in analysing real-world data. In many real-world applications, we find that the clusters have a significant high-level structure. This is often overlooked in the design and analysis of graph clustering algorithms, which make strong simplifying assumptions about the structure of the graph. This thesis addresses the natural question of whether the structure of clusters can be learned efficiently and describes four new algorithmic results for learning such structure in graphs and hypergraphs. All of the presented theoretical results are extensively evaluated on both synthetic and real-world datasets of different domains, including image classification and segmentation, migration networks, co-authorship networks, and natural language processing. These experimental results demonstrate that the newly developed algorithms are practical, effective, and immediately applicable for learning the structure of clusters in real-world data.
This is a tutorial and survey paper on the unification of spectral dimensionality reduction methods, kernel learning by semidefinite programming (SDP), Maximum Variance Unfolding (MVU) or Semidefinite Embedding (SDE), and their variants. We first explain how spectral dimensionality reduction methods can be unified as kernel principal component analysis (PCA) with different kernels. This unification can be interpreted as eigenfunction learning, or as representation of the kernel in terms of the distance matrix. Then, since the spectral methods are unified as kernel PCA, we say: let us learn the best kernel for unfolding the manifold of the data to its maximum variance. We first briefly introduce kernel learning by SDP for the transduction task. Then, we explain MVU in detail. Various versions of supervised MVU are explained: using nearest-neighbor graphs, by class-wise unfolding, by the Fisher criterion, and by colored MVU. We also explain the out-of-sample extension of MVU using eigenfunctions and kernel mapping. Finally, we introduce other variants of MVU, including action respecting embedding, relaxed MVU, and landmark MVU for big data.
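The unification can be made concrete in a few lines: each spectral method in the tutorial reduces to the following kernel PCA computation with its own choice of $K$ (for MVU, the kernel learned by the SDP). A minimal sketch, with an arbitrary polynomial kernel standing in for any such choice:

```python
import numpy as np

def kernel_pca(K, n_components=2):
    """Embedding from a (possibly learned) kernel matrix: double-center K,
    then scale its top eigenvectors by the square roots of the eigenvalues."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    lam, U = np.linalg.eigh(H @ K @ H)
    idx = np.argsort(lam)[::-1][:n_components]   # largest eigenvalues first
    return U[:, idx] * np.sqrt(np.maximum(lam[idx], 0))

X = np.random.randn(100, 5)
K = (X @ X.T + 1) ** 2                           # e.g. a polynomial kernel
print(kernel_pca(K).shape)                       # (100, 2)
```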
Decision forests (forests), in particular random forests and gradient-boosted trees, have demonstrated state-of-the-art accuracy compared to other methods in many supervised learning scenarios. In particular, forests dominate on tabular data, that is, when the feature space is unstructured, so that the signal is invariant to permutations of the feature indices. However, on structured data lying on a manifold (such as images, text, and speech), deep networks (networks), and in particular convolutional deep networks (ConvNets), tend to outperform forests. We conjecture that this is at least in part because the inputs to networks are not simply feature magnitudes, but also their indices. In contrast, naive forest implementations fail to explicitly consider feature indices. A recently proposed forest approach shows that, for each node, forests implicitly sample a random matrix from some specific distribution. These forests, like some classes of networks, learn by partitioning the feature space into convex polytopes corresponding to piecewise linear functions. We build on this approach and show that one can choose the distribution in a manifold-aware fashion so as to incorporate feature locality. We demonstrate empirical performance on data whose features live on three different manifolds: a torus, images, and time series. Moreover, we demonstrate its strength in a multivariate simulated setting, as well as its superiority in predicting surgical outcomes for epilepsy patients and in predicting movement direction from raw stereotactic EEG data from non-motor brain regions. In all simulations and real data, the manifold oblique random forest (MORF) algorithm outperforms approaches that ignore feature-space structure and challenges the performance of ConvNets. Moreover, MORF runs fast and maintains interpretability and theoretical justification.
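A sketch of the kind of manifold-aware distribution choice the abstract describes for image data: instead of arbitrary sparse oblique projections, sample contiguous rectangular patches, so that the candidate split feature at a tree node is a localized sum of pixels rather than a combination of unrelated indices. Names and patch-size limits are illustrative.

```python
import numpy as np

def patch_projection(height, width, rng, max_patch=5):
    """Sample one structured, sparse projection vector: an indicator of a
    random contiguous rectangular patch, acting on flattened images."""
    ph, pw = rng.integers(1, max_patch + 1, size=2)   # random patch size
    r = rng.integers(0, height - ph + 1)              # random top-left corner
    c = rng.integers(0, width - pw + 1)
    w = np.zeros((height, width))
    w[r:r + ph, c:c + pw] = 1.0
    return w.ravel()

rng = np.random.default_rng(0)
images = rng.random((32, 8 * 8))                      # 32 flattened 8x8 images
w = patch_projection(8, 8, rng)
split_feature = images @ w                            # candidate feature at a node
print(split_feature.shape)                            # (32,)
```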