Diffusion is the movement of molecules from regions of higher concentration to regions of lower concentration. It can be used to describe the interaction among data points. In many machine learning problems, including transductive semi-supervised learning and few-shot learning, the relationship between labeled and unlabeled data points is a key component of high classification accuracy. In this paper, inspired by convection-diffusion ordinary differential equations (ODEs), we propose a novel diffusion residual network (Diff-ResNet) that introduces a diffusion mechanism into the internals of neural networks. Under a structured-data assumption, we prove that the diffusion mechanism improves the distance-diameter ratio, which increases the separability of inter-class points and reduces the distances among local intra-class points. This property can be easily exploited by residual networks to construct separating hyperplanes. Extensive experiments on semi-supervised graph node classification and few-shot image classification over various datasets validate the effectiveness of the proposed diffusion mechanism.
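A minimal numerical sketch of the diffusion mechanism the abstract describes, assuming a simple Gaussian affinity and an explicit Euler step; the function names, affinity construction, and step size are illustrative, not the paper's implementation:

```python
import numpy as np

def diffusion_step(x, w, step=0.5):
    """One diffusion layer: move each point toward the weighted average
    of its neighbors, x_i <- x_i + step * sum_j w_ij (x_j - x_i).
    Illustrative sketch only."""
    # Row-normalize the affinity so the update is a convex combination.
    w = w / w.sum(axis=1, keepdims=True)
    return x + step * (w @ x - x)

# Two noisy clusters: repeated diffusion shrinks intra-class distances,
# making the classes easier to separate with a hyperplane.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
d2 = ((x[:, None] - x[None]) ** 2).sum(-1)
w = np.exp(-d2) * (d2 < 1.0)          # Gaussian affinity, local support
y = x
for _ in range(10):
    y = diffusion_step(y, w)
spread = lambda z: z[:20].std() + z[20:].std()
print(spread(x), spread(y))           # intra-cluster spread shrinks
```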
Neural message passing is a basic feature-extraction unit for graph-structured data that accounts for the influence of neighboring node features as the network propagates from one layer to the next. We model this process by an interacting particle system with attractive and repulsive forces, together with the Allen-Cahn force arising in the modeling of phase transitions. The system is a reaction-diffusion process that can separate the particles into different clusters. This yields Allen-Cahn message passing (ACMP) for graph neural networks, in which the numerical iteration of the solution constitutes the message-passing propagation. The mechanism behind ACMP is a phase transition of the particles that can form multiple clusters, enabling GNN prediction for node classification. ACMP can push the network depth to hundreds of layers, with a theoretically proven, strictly positive lower bound on the Dirichlet energy. It thus provides a deep GNN model that avoids the common GNN problem of oversmoothing. Experiments on various real-world node classification datasets with high homophily difficulty show that GNNs with ACMP achieve state-of-the-art performance without a decaying Dirichlet energy.
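A toy caricature of the reaction-diffusion dynamics behind ACMP, assuming plain graph diffusion as the attraction term; the paper's model also has learned repulsive couplings, which are omitted here, and all constants are illustrative:

```python
import numpy as np

def acmp_like_step(x, adj, alpha=0.2, delta=0.5, tau=0.1):
    """One explicit Euler step of a toy reaction-diffusion particle
    system in the spirit of ACMP: graph diffusion (neighbor attraction,
    the -Lx term) plus the bistable Allen-Cahn force x*(1 - x^2), whose
    wells at -1 and +1 keep features from collapsing to one constant."""
    deg = adj.sum(axis=1, keepdims=True)
    diffusion = alpha * (adj @ x - deg * x)   # graph Laplacian smoothing
    reaction = delta * x * (1.0 - x ** 2)     # Allen-Cahn phase-separation force
    return x + tau * (diffusion + reaction)

# Two disconnected edges with opposite initial signs: each component is
# driven into its own well, so the particles settle into two clusters
# instead of oversmoothing to a single value.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.array([[-0.4], [-0.6], [0.5], [0.3]])
for _ in range(400):
    x = acmp_like_step(x, adj)
print(x.ravel())
```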
There has recently been intense activity on the embedding of very high-dimensional and nonlinear data structures, much of it in the data science and machine learning literature. We survey this activity in four parts. In the first part we cover nonlinear methods such as principal curves, multidimensional scaling, local linear methods, ISOMAP, graph-based methods including diffusion maps, kernel-based methods, and random projections. The second part concerns topological embedding methods, in particular the mapping of topological properties into persistence diagrams, and the Mapper algorithm. Another type of data set of enormous growth is very high-dimensional network data. The task considered in the third part is how to embed such data into a vector space of moderate dimension, so as to make the data amenable to traditional techniques such as clustering and classification. Arguably, this is where the contrast between algorithmic machine learning methods and statistical modeling (so-called stochastic block modeling) is sharpest; in the paper we discuss the pros and cons of both approaches. The final part of the survey deals with embedding into $\mathbb{R}^2$, i.e., visualization. Three methods are presented: $t$-SNE, UMAP, and LargeVis, based on methods from the first, second, and third parts, respectively. They are illustrated and compared on two simulated data sets: one consisting of a triplet of noisy Ranunculoid curves, and one of networks of increasing complexity generated from stochastic block models with two types of nodes.
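As a concrete instance of one of the surveyed nonlinear methods, here is a minimal diffusion-map embedding; the kernel bandwidth, diffusion time, and data are illustrative assumptions:

```python
import numpy as np

def diffusion_map(x, eps=0.5, dim=2, t=1):
    """Minimal diffusion map: Gaussian kernel -> Markov transition
    matrix -> leading nontrivial eigenvectors scaled by eigenvalues^t."""
    d2 = ((x[:, None] - x[None]) ** 2).sum(-1)
    k = np.exp(-d2 / eps)                    # Gaussian kernel
    p = k / k.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
    vals, vecs = np.linalg.eig(p)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector; scale by eigenvalues^t.
    lam = vals.real[order[1:dim + 1]] ** t
    return vecs.real[:, order[1:dim + 1]] * lam

# A noisy circle embeds into 2-D diffusion coordinates.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 80)
x = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.02, (80, 2))
emb = diffusion_map(x)
print(emb.shape)
```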
In this work we study statistical properties of graph-based algorithms for multi-manifold clustering (MMC). In MMC the goal is to retrieve the multi-manifold structure underlying a given Euclidean data set when the data set is assumed to be obtained by sampling a distribution on a union of manifolds $\mathcal{M} = \mathcal{M}_1 \cup\dots \cup \mathcal{M}_N$ that may intersect with each other and that may have different dimensions. We investigate sufficient conditions that similarity graphs on data sets must satisfy in order for their corresponding graph Laplacians to capture the right geometric information to solve the MMC problem. Precisely, we provide high probability error bounds for the spectral approximation of a tensorized Laplacian on $\mathcal{M}$ with a suitable graph Laplacian built from the observations; the recovered tensorized Laplacian contains all geometric information of all the individual underlying manifolds. We provide an example of a family of similarity graphs, which we call annular proximity graphs with angle constraints, satisfying these sufficient conditions. We contrast our family of graphs with other constructions in the literature based on the alignment of tangent planes. Extensive numerical experiments expand the insights that our theory provides on the MMC problem.
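The basic mechanism by which a well-built similarity graph exposes the underlying manifolds can be seen in the simplest case: the multiplicity of the graph Laplacian's zero eigenvalue counts the connected components. A toy sketch with two well-separated sample clouds standing in for manifolds (thresholds and data are illustrative):

```python
import numpy as np

def graph_laplacian(x, radius):
    """Unnormalized Laplacian of an epsilon-proximity graph."""
    d = np.linalg.norm(x[:, None] - x[None], axis=-1)
    w = ((d < radius) & (d > 0)).astype(float)
    return np.diag(w.sum(1)) - w

rng = np.random.default_rng(2)
a = rng.normal(0.0, 0.1, (30, 2))     # samples near one "manifold"
b = rng.normal(5.0, 0.1, (30, 2))     # samples near another
L = graph_laplacian(np.vstack([a, b]), radius=1.0)
eigvals = np.sort(np.linalg.eigvalsh(L))
# Number of (numerically) zero eigenvalues = number of components.
n_components = int((eigvals < 1e-8).sum())
print(n_components)
```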
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks to learn operators that map between infinite-dimensional function spaces. We formulate the approximation of operators by compositions of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators. We prove a universal approximation theorem for our construction. Furthermore, we introduce four classes of operator parameterizations: graph-based operators, low-rank operators, multipole graph-based operators, and Fourier operators, and describe efficient algorithms for computing with each. The proposed neural operators are resolution-invariant: they share the same network parameters between different discretizations of the underlying function spaces and can be used for zero-shot super-resolution. Numerically, the proposed models show superior performance over existing machine-learning-based methodologies on Darcy flow and the Navier-Stokes equation, while being considerably faster than conventional PDE solvers.
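A sketch of a spectral-convolution layer in the spirit of the Fourier operator parameterization: transform to Fourier space, act on a truncated set of modes with learned weights, transform back. Shapes, the mode count, and the absence of a pointwise nonlinearity are illustrative assumptions, not the paper's exact code:

```python
import numpy as np

def fourier_layer(v, weights, k_max):
    """Apply learned complex weights to the lowest k_max Fourier modes
    of a 1-D signal v, zeroing the rest, and transform back."""
    v_hat = np.fft.rfft(v)                 # to Fourier space
    out = np.zeros_like(v_hat)
    out[:k_max] = weights * v_hat[:k_max]  # act on the lowest k_max modes
    return np.fft.irfft(out, n=len(v))     # back to physical space

# Because the parameters live on Fourier modes rather than grid points,
# the same weights apply at any resolution (resolution invariance).
rng = np.random.default_rng(0)
weights = rng.normal(size=8) + 1j * rng.normal(size=8)
grid_coarse = np.linspace(0, 2 * np.pi, 64, endpoint=False)
grid_fine = np.linspace(0, 2 * np.pi, 256, endpoint=False)
out_coarse = fourier_layer(np.sin(grid_coarse), weights, k_max=8)
out_fine = fourier_layer(np.sin(grid_fine), weights, k_max=8)
print(out_coarse.shape, out_fine.shape)
```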
Graph clustering is a fundamental problem in unsupervised learning, with numerous applications in computer science and in analysing real-world data. In many real-world applications, we find that the clusters have a significant high-level structure. This is often overlooked in the design and analysis of graph clustering algorithms, which make strong simplifying assumptions about the structure of the graph. This thesis addresses the natural question of whether the structure of clusters can be learned efficiently, and describes four new algorithmic results for learning such structure in graphs and hypergraphs. All of the presented theoretical results are extensively evaluated on both synthetic and real-world datasets from different domains, including image classification and segmentation, migration networks, co-authorship networks, and natural language processing. These experimental results demonstrate that the newly developed algorithms are practical, effective, and immediately applicable for learning the structure of clusters in real-world data.
Spectral methods that represent data points via a kernel matrix or a graph Laplacian matrix have become a primary tool for unsupervised data analysis. In many application scenarios, a spectral embedding parametrized by a neural network, trained on data samples, offers a promising way to achieve automatic out-of-sample extension as well as computational scalability. Such an approach was taken in the original paper of SpectralNet (Shaham et al. 2018), which we call SpecNet1. The current paper introduces a new neural network approach, named SpecNet2, to compute spectral embeddings, which optimizes an equivalent objective of the eigen-problem and removes the orthogonalization layer in SpecNet1. SpecNet2 also allows separating the sampling of rows and columns of the graph affinity matrix by tracking the neighbors of each data point through the gradient formula. Theoretically, we show that any local minimizer of the new orthogonalization-free objective reveals the leading eigenvectors. Furthermore, global convergence of this new orthogonalization-free objective under a batch-based gradient descent method is proved. Numerical experiments demonstrate the improved performance and computational efficiency of SpecNet2 on simulated data and image datasets.
This is an introductory machine learning course specifically developed for STEM students. Our goal is to provide interested readers with the basics needed to employ machine learning in their own projects, and to familiarize them with the terminology as a foundation for further reading of the relevant literature. In these lecture notes, we discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods without neural networks, such as principal component analysis, t-SNE, and clustering, as well as linear regression and linear classifiers. We continue with an introduction to both basic and advanced neural-network structures, such as dense feed-forward and convolutional neural networks, recurrent neural networks, restricted Boltzmann machines, (variational) autoencoders, and generative adversarial networks. Questions of interpretability of latent-space representations are discussed using the examples of dreaming and adversarial attacks. The final section is dedicated to reinforcement learning, where we introduce the basic notions of value functions and policy learning.
These notes were compiled as lecture notes for a course developed and taught at the University of Southern California. They should be accessible to a typical engineering graduate student with a strong background in Applied Mathematics. The main objective of these notes is to introduce a student who is familiar with concepts in linear algebra and partial differential equations to select topics in deep learning. These lecture notes exploit the strong connections between deep learning algorithms and the more conventional techniques of computational physics to achieve two goals. First, they use concepts from computational physics to develop an understanding of deep learning algorithms. Not surprisingly, many concepts in deep learning can be connected to similar concepts in computational physics, and one can utilize this connection to better understand these algorithms. Second, several novel deep learning algorithms can be used to solve challenging problems in computational physics. Thus, they offer someone who is interested in modeling physical phenomena a complementary set of tools.
In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance spectral clustering appears slightly mysterious, and it is not obvious why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
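A minimal version of the two-way case of the algorithms the tutorial derives, using the unnormalized Laplacian and the sign of the Fiedler vector; real k-way spectral clustering runs k-means on several eigenvectors, and the toy graph below is illustrative:

```python
import numpy as np

def spectral_clustering_2way(w):
    """Two-way spectral partition: build L = D - W, take the eigenvector
    of the second-smallest eigenvalue (Fiedler vector), split by sign."""
    L = np.diag(w.sum(1)) - w
    vals, vecs = np.linalg.eigh(L)   # ascending eigenvalues
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)

# Two dense blocks joined by a weak bridge: the Fiedler vector
# separates them even though k-means on raw adjacency rows might not.
w = np.zeros((6, 6))
w[:3, :3] = 1.0
w[3:, 3:] = 1.0
np.fill_diagonal(w, 0.0)
w[2, 3] = w[3, 2] = 0.1              # weak bridge between the blocks
labels = spectral_clustering_2way(w)
print(labels)
```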
Many interesting problems in machine learning are being revisited with new deep learning tools. For graph-based semisupervised learning, a recent important development is graph convolutional networks (GCNs), which nicely integrate local vertex features and graph topology in the convolutional layers. Although the GCN model compares favorably with other state-of-the-art methods, its mechanisms are not clear and it still requires a considerable amount of labeled data for validation and model selection. In this paper, we develop deeper insights into the GCN model and address its fundamental limits. First, we show that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of oversmoothing with many convolutional layers. Second, to overcome the limits of the GCN model with shallow architectures, we propose both co-training and self-training approaches to train GCNs. Our approaches significantly improve GCNs in learning with very few labels, and exempt them from requiring additional labels for validation. Extensive experiments on benchmarks have verified our theory and proposals.
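The Laplacian-smoothing view can be illustrated numerically: the GCN propagation X' = D^{-1/2}(A + I)D^{-1/2} X replaces each feature by a weighted average over a node and its neighbors, so stacking many layers drives features of connected nodes together (oversmoothing). The graph, features, and iteration count below are toy assumptions, and the learned weight matrix and nonlinearity are omitted:

```python
import numpy as np

def gcn_smooth(a, x):
    """One propagation step of a GCN with identity weights: symmetric
    normalization of the adjacency with added self-loops."""
    a_hat = a + np.eye(len(a))                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(1))
    s = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    return s @ x

a = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([[1.0], [0.0], [-1.0], [2.0]])
for _ in range(50):
    x = gcn_smooth(a, x)
# On a connected graph, repeated smoothing converges to a vector
# proportional to sqrt(degree): node features become indistinguishable.
print(x.ravel())
```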
We propose Graph-Coupled Oscillator Networks (GraphCON), a novel framework for learning on graphs. It is based on discretizations of a second-order system of ordinary differential equations (ODEs) that models a network of nonlinear controlled and damped oscillators, coupled via the adjacency structure of the underlying graph. The flexibility of our framework permits any basic GNN layer (e.g., convolutional or attentional) to serve as the coupling function, from which a multi-layer deep neural network is built up via the dynamics of the proposed ODEs. We relate the oversmoothing problem, commonly encountered in GNNs, to the stability of steady states of the underlying ODE, and show that zero-Dirichlet-energy steady states are not stable for our proposed ODEs. This suggests that the proposed framework mitigates the oversmoothing problem. Moreover, we prove that GraphCON mitigates the exploding and vanishing gradients problem, facilitating the training of deep multi-layer GNNs. Finally, we show that our approach offers competitive performance with respect to the state of the art on a variety of graph-based learning tasks.
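A sketch of a GraphCON-style layer: the second-order oscillator ODE X'' = σ(F(X)) − γX − αX' written as a first-order system in (X, Y = X') and stepped explicitly. The coupling here is a plain graph convolution standing in for "any GNN layer", and all constants, shapes, and the exact update order are illustrative assumptions:

```python
import numpy as np

def graphcon_step(x, y, adj_norm, w, dt=0.1, gamma=1.0, alpha=1.0):
    """One discretized step of damped, driven oscillators coupled
    through the graph: y is the velocity, x the node features."""
    coupling = np.tanh(adj_norm @ x @ w)      # stand-in GNN coupling F
    y = y + dt * (coupling - gamma * x - alpha * y)
    x = x + dt * y
    return x, y

# Tiny graph, random weights, many layers: the damped oscillatory
# dynamics keep features bounded rather than exploding or flatlining.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
adj_norm = adj / adj.sum(1, keepdims=True)
w = rng.normal(0, 0.5, (4, 4))
x, y = rng.normal(size=(3, 4)), np.zeros((3, 4))
for _ in range(100):
    x, y = graphcon_step(x, y, adj_norm, w)
print(np.abs(x).max())
```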
Invoking the manifold assumption in machine learning requires knowledge of the manifold's geometry and dimension, and theory dictates how many samples are required. However, in application data, sampling may not be uniform, manifold properties may be unknown, and the data may be (possibly) non-pure; this implies that neighborhoods must adapt to the local structure. We introduce an algorithm for inferring adaptive neighborhoods for data given by a similarity kernel. Starting with a locally conservative neighborhood (Gabriel) graph, we sparsify it iteratively according to a weighted counterpart. At each step, a linear program yields minimal neighborhoods globally, and a volumetric statistic reveals neighbor outliers likely to violate manifold geometry. We apply our adaptive neighborhoods to nonlinear dimensionality reduction, geodesic computation, and dimension estimation. A comparison against standard algorithms, e.g., using k-nearest neighbors, demonstrates their usefulness.
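The starting point of the pipeline, the Gabriel graph, has a simple definition: keep edge (i, j) iff no third point lies inside the ball whose diameter is the segment ij. A direct O(n^3) sketch of that definition (not the paper's algorithm or its subsequent sparsification):

```python
import numpy as np

def gabriel_graph(x):
    """Gabriel graph by the textbook criterion: edge (i, j) survives iff
    the open ball with diameter ij contains no other sample point."""
    n = len(x)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            mid = (x[i] + x[j]) / 2
            r2 = ((x[i] - x[j]) ** 2).sum() / 4          # squared radius
            inside = ((x - mid) ** 2).sum(1) < r2 - 1e-12
            inside[[i, j]] = False                       # endpoints don't count
            if not inside.any():
                adj[i, j] = adj[j, i] = True
    return adj

# Three collinear points: the long edge 0-2 is blocked by point 1,
# which is exactly the "locally conservative" behavior wanted.
x = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
adj = gabriel_graph(x)
print(adj.astype(int))
```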
Dynamical systems minimizing an energy are ubiquitous in geometry and physics. We propose a gradient flow framework for GNNs where the equations follow the direction of steepest descent of a learnable energy. This approach allows the evolution of a GNN to be interpreted from a multi-particle perspective as learning attractive and repulsive forces in feature space via the positive and negative eigenvalues of a symmetric "channel-mixing" matrix. We perform spectral analysis of the solutions and conclude that gradient-flow graph convolutional models can induce dynamics dominated by the graph high frequencies, which is desirable for heterophilic datasets. We also describe structural constraints on common GNN architectures allowing them to be interpreted as gradient flows. We perform thorough ablation studies corroborating our theoretical analysis and show competitive performance of simple and lightweight models on real-world homophilic and heterophilic datasets.
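A minimal sketch of the gradient-flow idea, assuming a linear activation: for a symmetric channel-mixing matrix W, the update X ← X + τAXW is gradient descent on the quadratic energy E(X) = −½ tr(XᵀAXW), whose positive/negative W-eigenvalues act as attraction/repulsion. All matrices, the normalization, and the step size are illustrative, not the paper's full model:

```python
import numpy as np

def graff_step(x, adj_norm, w_sym, tau=0.1):
    """Gradient-descent step on E(X) = -1/2 tr(X^T A X W): the gradient
    is -A X W when both A and W are symmetric."""
    return x + tau * adj_norm @ x @ w_sym

def energy(x, adj_norm, w_sym):
    return -0.5 * np.trace(x.T @ adj_norm @ x @ w_sym)

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
adj_norm = adj / adj.sum()            # crude symmetric normalization
w = rng.normal(size=(4, 4))
w_sym = (w + w.T) / 2                 # symmetry makes it a gradient flow
x = rng.normal(size=(3, 4))
e0 = energy(x, adj_norm, w_sym)
for _ in range(5):
    x = graff_step(x, adj_norm, w_sym)
print(e0, energy(x, adj_norm, w_sym))   # the energy does not increase
```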
The aim of this survey is to present an explanatory review of the approximation properties of deep neural networks. Specifically, we aim to understand how and why deep neural networks outperform other classical linear and nonlinear approximation methods. The survey consists of three chapters. In Chapter 1 we review the key ideas and concepts underlying deep networks and their compositional nonlinear structure. We formalize the neural network problem by casting it as an optimization problem for solving regression and classification tasks. We briefly discuss the stochastic gradient descent algorithm and the back-propagation formulas used for solving the optimization problem, and address a few issues related to the performance of neural networks, including the choice of activation functions, cost functions, overfitting, and regularization. In Chapter 2 we shift our focus to the approximation theory of neural networks. We start with an introduction to the concept of density in polynomial approximation, in particular studying the Stone-Weierstrass theorem for real-valued continuous functions. Then, within the framework of linear approximation, we review a few classical results on the density and convergence rates of feedforward networks, followed by more recent developments on the complexity of deep networks in approximating Sobolev functions. In Chapter 3, utilizing nonlinear approximation theory, we further elaborate on the approximation advantages of deep networks compared with other classical nonlinear approximation methods.
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. Design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which emerged as a de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair experimental evaluation protocol and choose a proper collection of datasets. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions, which shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an efficient tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
We study the Langevin dynamics of a physical system with manifold structure $\mathcal{M}\subset\mathbb{R}^p$, based on collected sample points $\{\mathsf{x}_i\}_{i=1}^n\subset\mathcal{M}$ that probe the unknown manifold $\mathcal{M}$. Through the diffusion map, we first learn the reaction coordinates $\{\mathsf{y}_i\}_{i=1}^n\subset\mathcal{N}$ corresponding to $\{\mathsf{x}_i\}_{i=1}^n$, where $\mathcal{N}$ is a manifold diffeomorphic to $\mathcal{M}$ and isometrically embedded in $\mathbb{R}^\ell$ with $\ell\ll p$. The induced Langevin dynamics on $\mathcal{N}$ in terms of the reaction coordinates captures the slow time-scale dynamics, such as conformational changes in biochemical reactions. To construct an efficient and stable approximation of the Langevin dynamics on $\mathcal{N}$, we leverage the corresponding Fokker-Planck equation on the manifold $\mathcal{N}$ in terms of the reaction coordinates $\mathsf{y}$. We propose an implementable, unconditionally stable, data-driven finite volume scheme for this Fokker-Planck equation, which automatically incorporates the manifold structure of $\mathcal{N}$. Furthermore, we provide a weighted $L^2$ convergence analysis of the finite volume scheme on $\mathcal{N}$. The proposed finite volume scheme leads to a Markov chain on $\{\mathsf{y}_i\}_{i=1}^n$ with approximated transition probabilities and jump rates between nearest-neighbor points. After an unconditionally stable explicit time discretization, the data-driven finite volume scheme gives an approximated Markov process for the Langevin dynamics on $\mathcal{N}$, and this approximated Markov process enjoys detailed balance, ergodicity, and other desirable properties.
An approach to semi-supervised learning is proposed that is based on a Gaussian random field model. Labeled and unlabeled data are represented as vertices in a weighted graph, with edge weights encoding the similarity between instances. The learning problem is then formulated in terms of a Gaussian random field on this graph, where the mean of the field is characterized in terms of harmonic functions, and is efficiently obtained using matrix methods or belief propagation. The resulting learning algorithms have intimate connections with random walks, electric networks, and spectral graph theory. We discuss methods to incorporate class priors and the predictions of classifiers obtained by supervised learning. We also propose a method of parameter learning by entropy minimization, and show the algorithm's ability to perform feature selection. Promising experimental results are presented for synthetic data, digit classification, and text classification tasks.
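The harmonic characterization of the field's mean has a compact closed form: on the unlabeled nodes, f_u = L_uu^{-1} W_ul f_l, so each unlabeled value is the weighted average of its neighbors. A minimal sketch on a toy path graph (the graph and labels are illustrative):

```python
import numpy as np

def harmonic_labels(w, labeled_idx, y_labeled):
    """Mean of the Gaussian random field: clamp labeled nodes and solve
    the harmonic system  L_uu f_u = W_ul f_l  on the unlabeled nodes."""
    n = len(w)
    u = np.setdiff1d(np.arange(n), labeled_idx)
    L = np.diag(w.sum(1)) - w                     # graph Laplacian
    f_u = np.linalg.solve(L[np.ix_(u, u)],
                          w[np.ix_(u, labeled_idx)] @ y_labeled)
    f = np.zeros(n)
    f[labeled_idx], f[u] = y_labeled, f_u
    return f

# A 4-node path with its endpoints labeled 0 and 1: the harmonic
# solution interpolates linearly along the path.
w = np.zeros((4, 4))
for i in range(3):
    w[i, i + 1] = w[i + 1, i] = 1.0
f = harmonic_labels(w, labeled_idx=np.array([0, 3]),
                    y_labeled=np.array([0.0, 1.0]))
print(f)
```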
Pre-publication draft of a book to be published by Morgan & Claypool publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.
We propose a family of learning algorithms based on a new form of regularization that allows us to exploit the geometry of the marginal distribution. We focus on a semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner. Some transductive graph learning algorithms and standard methods including support vector machines and regularized least squares can be obtained as special cases. We use properties of reproducing kernel Hilbert spaces to prove new Representer theorems that provide theoretical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we obtain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semi-supervised algorithms are able to use unlabeled data effectively. Finally we have a brief discussion of unsupervised and fully supervised learning within our general framework.
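A transductive special case of the geometric-regularization idea can be written in one closed form: minimize the labeled squared error plus a graph smoothness penalty over all node values, f = (J + λL)^{-1} J y. This graph-only sketch omits the RKHS norm and the kernel expansion that give the paper's methods their out-of-sample extension; the graph and labels are illustrative:

```python
import numpy as np

def laplacian_regularized_fit(w, labeled_idx, y_labeled, lam=0.1):
    """Minimize  sum_{i labeled} (f_i - y_i)^2 + lam * f^T L f  over all
    node values f.  J selects the labeled nodes; the stationarity
    condition gives the linear system (J + lam*L) f = J y."""
    n = len(w)
    L = np.diag(w.sum(1)) - w
    J = np.zeros((n, n))                  # selector of labeled nodes
    J[labeled_idx, labeled_idx] = 1.0
    y = np.zeros(n)
    y[labeled_idx] = y_labeled
    return np.linalg.solve(J + lam * L, J @ y)

# Path graph with endpoints labeled +1 and -1: the smoothness term
# fills in a monotone interpolation on the unlabeled middle nodes.
w = np.zeros((5, 5))
for i in range(4):
    w[i, i + 1] = w[i + 1, i] = 1.0
f = laplacian_regularized_fit(w, np.array([0, 4]), np.array([1.0, -1.0]))
print(np.round(f, 3))
```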