We propose a new framework for the sampling, compression, and analysis of distributions of point sets and other geometric objects embedded in Euclidean spaces. Nearest neighbors of points on a set of randomly selected rays are recorded into a tensor, called the RaySense signature. From the signature, statistical information about the data set, as well as certain geometrical information, can be extracted, independent of the ray set. We present a few examples illustrating applications of the proposed sampling strategy.
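The sampling step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name `raysense_signature`, the choice of random segment endpoints inside the bounding box, and the ray/sample counts are all assumptions made for the sketch.

```python
import numpy as np

def raysense_signature(points, n_rays=8, n_samples=16, seed=0):
    """Sketch of a RaySense-style signature: for each of several random
    line segments ("rays") through the data's bounding region, record the
    nearest data point at equispaced locations along the ray."""
    rng = np.random.default_rng(seed)
    lo, hi = points.min(axis=0), points.max(axis=0)
    sig = np.empty((n_rays, n_samples, points.shape[1]))
    for r in range(n_rays):
        a = rng.uniform(lo, hi)           # random segment endpoints (assumed scheme)
        b = rng.uniform(lo, hi)
        t = np.linspace(0.0, 1.0, n_samples)[:, None]
        ray_pts = (1 - t) * a + t * b     # equispaced samples along the ray
        # nearest neighbor in the point set for each sample location
        d2 = ((ray_pts[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        sig[r] = points[d2.argmin(axis=1)]
    return sig
```

Each slice of the resulting tensor records which data points the ray "sees", which is the raw material the signature's statistics are extracted from.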
We investigate the problem of identifying the boundary of a domain from points sampled within it. We introduce new estimators for the normal vector to the boundary, the distance of a point to the boundary, and a test for whether a point lies within a boundary strip. The estimators can be computed efficiently and are more accurate than those in the existing literature. We provide rigorous error estimates for the estimators. Furthermore, we use the detected boundary points to solve boundary-value problems for PDEs on point clouds. We prove error estimates for the Laplace and eikonal equations on point clouds. Finally, we provide a range of numerical experiments illustrating the performance of our boundary estimators, applications to PDEs on point clouds, and tests on image data sets.
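As a hedged toy illustration of the kind of estimator described above (the paper's actual estimators differ), one can score boundary proximity by the offset between a point and the centroid of its neighbors: interior points are surrounded roughly symmetrically, so the offset is large only near the boundary, and its direction doubles as a crude outward-normal estimate. The function name and radius parameter are invented for the sketch.

```python
import numpy as np

def boundary_score(points, r=0.25):
    """Toy boundary test: offset from each point to the centroid of its
    neighbors within radius r.  Large offsets flag likely boundary points;
    the offset direction is a rough outward-normal estimate."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    scores, normals = [], []
    for i in range(len(points)):
        nbrs = points[d[i] < r]            # includes the point itself
        offset = points[i] - nbrs.mean(axis=0)
        scores.append(np.linalg.norm(offset))
        normals.append(offset)
    return np.array(scores), np.array(normals)
```

On a grid filling the unit disk, the highest scores concentrate near the circle and the estimated normals point outward, which is the qualitative behavior the rigorous estimators quantify.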
Feedforward neural networks are known to be slow to train, and this has presented a bottleneck in deep learning applications for several decades. For instance, the gradient-based learning algorithms widely used to train neural networks tend to work slowly when all of the network parameters must be adjusted iteratively. To counter this, both researchers and practitioners have tried introducing randomness to reduce the learning requirement. Based on the original construction of Igelnik and Pao, single-layer neural networks with random input-layer weights and biases have seen success in practice, but the necessary theoretical justification has been lacking. In this paper, we begin to fill this theoretical gap. We provide a (corrected) rigorous proof that the Igelnik and Pao construction is a universal approximator for continuous functions on compact domains, with approximation error decaying asymptotically like $O(1/\sqrt{n})$ in the number $n$ of network nodes. We then extend this result to the non-asymptotic setting, showing that one can achieve any desired approximation error with high probability provided $n$ is sufficiently large. We further adapt this randomized neural network architecture to approximate functions on smooth, compact submanifolds of Euclidean space, providing theoretical guarantees in both the asymptotic and non-asymptotic forms. Finally, we illustrate our results on manifolds with numerical experiments.
Invoking the manifold hypothesis in machine learning requires knowledge of the manifold's geometry and dimension, with theory dictating how many samples are required. In applied data, however, the sampling may be non-uniform, the manifold's properties are unknown, and it may be non-pure; this implies that neighborhoods must adapt to the local structure. We introduce an algorithm for inferring adaptive neighborhoods for data given by a similarity kernel. Starting with a locally conservative neighborhood (Gabriel) graph, we iteratively sparsify it according to a weighted counterpart. At each step, a linear program yields minimal neighborhoods globally, and a volumetric statistic reveals neighbor outliers likely to violate the manifold's geometry. We apply the adaptive neighborhoods to nonlinear dimensionality reduction, geodesic computation, and dimension estimation. Comparisons against standard algorithms, e.g., those using k-nearest neighbors, demonstrate their usefulness.
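For reference, the Gabriel graph that serves as the starting neighborhood graph above can be computed by brute force: points $i,j$ are neighbors iff no third point lies strictly inside the circle having segment $ij$ as its diameter. This $O(n^3)$ version is for illustration only, and the function name is ours.

```python
import numpy as np
from itertools import combinations

def gabriel_edges(points, eps=1e-12):
    """Brute-force Gabriel graph of a point set (illustrative sketch)."""
    pts = np.asarray(points, float)
    edges = []
    for i, j in combinations(range(len(pts)), 2):
        mid = (pts[i] + pts[j]) / 2
        rad = np.linalg.norm(pts[i] - pts[j]) / 2
        # edge survives iff the diametral circle is empty of other points
        ok = all(np.linalg.norm(pts[k] - mid) >= rad - eps
                 for k in range(len(pts)) if k not in (i, j))
        if ok:
            edges.append((i, j))
    return edges
```

On a unit square with its center added, the diagonals are blocked by the center point while the sides and the corner-to-center "spokes" survive, showing how the construction is conservative about long edges.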
Experimental sciences have come to depend heavily on our ability to organize, interpret and analyze high-dimensional datasets produced from observations of a large number of variables governed by natural processes. Natural laws, conservation principles, and dynamical structure introduce intricate inter-dependencies among these observed variables, which in turn yield geometric structure, with fewer degrees of freedom, on the dataset. We show how fine-scale features of this structure in data can be extracted from \emph{discrete} approximations to quantum mechanical processes given by data-driven graph Laplacians and localized wavepackets. This data-driven quantization procedure leads to a novel, yet natural uncertainty principle for data analysis induced by limited data. We illustrate the new approach with algorithms and several applications to real-world data, including the learning of patterns and anomalies in social distancing and mobility behavior during the COVID-19 pandemic.
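A common concrete instance of the data-driven graph Laplacians referenced above is the Gaussian-kernel construction sketched below; the paper's precise normalization may differ, and the bandwidth parameter here is a placeholder.

```python
import numpy as np

def graph_laplacian(X, eps):
    """Unnormalized graph Laplacian L = D - W with Gaussian kernel
    weights between the sample points (one standard construction)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / eps)
    np.fill_diagonal(W, 0.0)            # no self-loops
    return np.diag(W.sum(axis=1)) - W
```

The resulting matrix is symmetric and positive semi-definite with zero row sums, which is what makes it a discrete stand-in for a (negative) Laplace operator in the quantization procedure.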
This paper proposes a mesh-free computational framework and machine learning theory for solving elliptic PDEs on unknown manifolds, identified with point clouds, based on diffusion maps (DM) and deep learning. The PDE solver is formulated as a supervised learning task that solves a least-squares regression problem imposing an algebraic equation approximating the PDE (and boundary conditions, if applicable). This algebraic equation involves a graph-Laplacian-type matrix obtained via a DM asymptotic expansion, which is a consistent estimator of second-order elliptic differential operators. The resulting numerical method solves a highly non-convex empirical risk minimization problem restricted to a hypothesis space of neural network solutions. In the well-posed elliptic PDE setting, when the hypothesis space consists of neural networks with either infinite width or depth, we show that the global minimizer of the empirical loss function is a consistent solution in the limit of large training data. When the hypothesis space is a two-layer neural network, we show that for a sufficiently large width, gradient descent can identify a global minimizer of the empirical loss function. Supporting numerical examples demonstrate the convergence of the solutions, ranging from simple manifolds with low and high co-dimensions to rough surfaces with and without boundaries. We also show that the proposed NN solver can robustly generalize the PDE solution on new data points, with generalization errors almost identical to the training errors, superseding a Nyström-based interpolation method.
Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.
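The permutation invariance that PointNet respects comes from combining a shared per-point function with a symmetric aggregation (max pooling). A minimal numpy sketch of that idea, not the actual architecture:

```python
import numpy as np

def pointnet_features(points, W1, W2):
    """Minimal PointNet-style symmetric function: a shared per-point MLP
    followed by a max pool over the point dimension.  Because every point
    passes through the same weights and max is order-independent, the
    global feature is invariant to permutations of the input points."""
    h = np.maximum(points @ W1, 0)   # shared ReLU layer, applied per point
    h = np.maximum(h @ W2, 0)
    return h.max(axis=0)             # symmetric aggregation
```

Reordering the rows of `points` leaves the output unchanged, which is exactly why such a network can consume raw point clouds without voxelization.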
We consider the problem of estimating a multivariate function $f_0$ of bounded variation (BV), from noisy observations $y_i = f_0(x_i) + z_i$ made at random design points $x_i \in \mathbb{R}^d$, $i=1,\ldots,n$. We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i),f_0(x_j)$) at all neighboring cells $i,j$ in the Voronoi diagram. This is seen to be equivalent to a variational optimization problem that regularizes according to the usual continuum (measure-theoretic) notion of TV, once we restrict the domain to functions that are piecewise constant over the Voronoi diagram. The regression estimator under consideration hence performs (shrunken) local averaging over adaptively formed unions of Voronoi cells, and we refer to it as the Voronoigram, following the ideas in Koenker (2005), and drawing inspiration from Tukey's regressogram (Tukey, 1961). Our contributions in this paper span both the conceptual and theoretical frontiers: we discuss some of the unique properties of the Voronoigram in comparison to TV-regularized estimators that use other graph-based discretizations; we derive the asymptotic limit of the Voronoi TV functional; and we prove that the Voronoigram is minimax rate optimal (up to log factors) for estimating BV functions that are essentially bounded.
In this paper, we propose Wasserstein Isometric Mapping (Wassmap), a nonlinear dimensionality reduction technique that provides solutions to some drawbacks in existing global nonlinear dimensionality reduction algorithms in imaging applications. Wassmap represents images via probability measures in Wasserstein space, then uses pairwise Wasserstein distances between the associated measures to produce a low-dimensional, approximately isometric embedding. We show that the algorithm is able to exactly recover parameters of some image manifolds including those generated by translations or dilations of a fixed generating measure. Additionally, we show that a discrete version of the algorithm retrieves parameters from manifolds generated from discrete measures by providing a theoretical bridge to transfer recovery results from functional data to discrete data. Testing of the proposed algorithms on various image data manifolds shows that Wassmap yields good embeddings compared with other global and local techniques.
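The pipeline, pairwise Wasserstein distances followed by a classical multidimensional-scaling embedding, can be illustrated in one dimension, where translates of a fixed measure are recovered exactly. The example and names are ours, using SciPy's 1D Wasserstein distance rather than a full image-space transport solver.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def classical_mds(D, dim=1):
    """Classical MDS on a distance matrix D (the embedding step of
    a Wassmap-like pipeline)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J            # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# "images" = translates of one fixed discrete 1D measure
base = np.array([0.0, 0.1, 0.3])
shifts = np.array([0.0, 1.0, 2.5, 4.0])
samples = [base + t for t in shifts]
D = np.array([[wasserstein_distance(u, v) for v in samples] for u in samples])
emb = classical_mds(D, dim=1).ravel()
```

Since the Wasserstein distance between two translates of the same measure equals the translation offset, the embedded coordinates reproduce the shift parameters up to a rigid motion.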
This paper addresses the need to process non-Euclidean data by introducing a geometric deep learning (GDL) framework for building universal feedforward-type models compatible with differentiable manifold geometries. We show that our GDL models can uniformly approximate any continuous target function on compact sets of controlled maximum diameter. We obtain curvature-dependent lower bounds on this maximum diameter and upper bounds on the depth of the approximating GDL models. Conversely, we find that between any two non-degenerate compact manifolds there always exists a continuous function that no "locally defined" GDL model can uniformly approximate. Our last main result identifies data-dependent conditions guaranteeing that the GDL model implementing our approximation breaks "the curse of dimensionality." We find that any "real-world" (i.e., finite) dataset always satisfies our condition and that, conversely, if the target function is smooth, any dataset satisfies our requirement. As applications, we confirm the universal approximation capabilities of the following GDL models: the hyperbolic feedforward networks of Ganea et al. (2018), the architecture implementing the deep Kalman filters of Krishnan et al. (2015), and deep softmax classifiers. We construct universal extensions/variants of the SPD-matrix regressor of Meyer et al. (2011) and the Procrustean regressor of Fletcher (2003). In the Euclidean setting, our results imply a quantitative version of the approximation theorem of Kidger and Lyons (2020) and a data-dependent version of the approximation rates of Yarotsky and Zhevnerchuk (2019).
We study the Langevin dynamics of a physical system with manifold structure $\mathcal{M} \subset \mathbb{R}^p$ based on collected sample points $\{\mathsf{x}_i\}_{i=1}^n \subset \mathcal{M}$ that probe the unknown manifold $\mathcal{M}$. Through diffusion maps, we first learn the reaction coordinates $\{\mathsf{y}_i\}_{i=1}^n \subset \mathcal{N}$ corresponding to $\{\mathsf{x}_i\}_{i=1}^n$, where $\mathcal{N}$ is a manifold diffeomorphic to $\mathcal{M}$ and isometrically embedded in $\mathbb{R}^\ell$ with $\ell \ll p$. The induced Langevin dynamics on $\mathcal{N}$ in terms of the reaction coordinates captures the slow-time-scale dynamics, such as conformational changes in biochemical reactions. To construct an efficient and stable approximation of the Langevin dynamics on $\mathcal{N}$, we leverage the corresponding Fokker-Planck equation on the manifold $\mathcal{N}$ in terms of the reaction coordinates $\mathsf{y}$. We propose an implementable, unconditionally stable, data-driven finite volume scheme for this Fokker-Planck equation, which automatically incorporates the manifold structure of $\mathcal{N}$. Furthermore, we provide a weighted $L^2$ convergence analysis of the finite volume scheme on $\mathcal{N}$. The proposed finite volume scheme leads to a Markov chain on $\{\mathsf{y}_i\}_{i=1}^n$ with approximated transition probabilities and jump rates between nearest-neighbor points. After an unconditionally stable explicit time discretization, the data-driven finite volume scheme gives an approximated Markov process for the Langevin dynamics on $\mathcal{N}$, and this approximated Markov process enjoys detailed balance, ergodicity, and other good properties.
Point cloud analysis without pose priors is very challenging in real applications, as the orientations of point clouds are often unknown. In this paper, we propose a brand new point-set learning framework PRIN, namely, Point-wise Rotation Invariant Network, focusing on rotation-invariant feature extraction in point cloud analysis. We construct spherical signals by density-aware adaptive sampling to deal with distorted point distributions in spherical space. Spherical voxel convolution and point re-sampling are proposed to extract rotation-invariant features for each point. In addition, we extend PRIN to a sparse version called SPRIN, which directly operates on sparse point clouds. Both PRIN and SPRIN can be applied to tasks ranging from object classification and part segmentation to 3D feature matching and label alignment. Results show that, on a dataset with randomly rotated point clouds, SPRIN demonstrates better performance than state-of-the-art methods without any data augmentation. We also provide thorough theoretical proof and analysis of the point-wise rotation invariance achieved by our methods. Our code is available at https://github.com/qq456cvb/sprin.
We introduce an algorithm for active function approximation based on nearest neighbor regression. Our Active Nearest Neighbor Regressor (ANNR) relies on the Voronoi-Delaunay framework from computational geometry to subdivide the space into cells with constant estimated function value, and selects novel query points in a way that takes the geometry of the function graph into account. We consider the recent state-of-the-art active function approximator called DEFER, which is based on incremental rectangular partitioning of the space, as the main baseline. ANNR addresses a number of limitations that arise from the space-subdivision strategy used in DEFER. We provide a computationally efficient implementation of our method, as well as theoretical halting guarantees. Empirical results show that ANNR outperforms the baseline on both closed-form functions and real-world examples, such as gravitational wave parameter inference and exploration of the latent space of generative models.
Persistent homology (PH) is one of the most popular methods in topological data analysis. Even though PH has been used in many different types of applications, the reasons behind its success remain elusive. In particular, it is not known for which classes of problems it is most effective, or to what extent it can detect geometric or topological features. The goal of this work is to identify some types of problems where PH performs as well as or better than other methods in data analysis. We consider three fundamental shape-analysis tasks: the detection of the number of holes, curvature, and convexity from 2D and 3D point clouds sampled from shapes. Experiments demonstrate that PH is successful in these tasks, outperforming several baselines, including PointNet, an architecture inspired precisely by the properties of point clouds. In addition, we observe that PH remains effective with limited computational resources and limited training data, as well as on out-of-distribution test data, including various data transformations and noise.
In this work we study statistical properties of graph-based algorithms for multi-manifold clustering (MMC). In MMC the goal is to retrieve the multi-manifold structure underlying a given Euclidean data set when the data are assumed to be obtained by sampling a distribution on a union of manifolds $\mathcal{M} = \mathcal{M}_1 \cup\dots \cup \mathcal{M}_N$ that may intersect with each other and that may have different dimensions. We investigate sufficient conditions that similarity graphs on data sets must satisfy in order for their corresponding graph Laplacians to capture the right geometric information to solve the MMC problem. Precisely, we provide high probability error bounds for the spectral approximation of a tensorized Laplacian on $\mathcal{M}$ with a suitable graph Laplacian built from the observations; the recovered tensorized Laplacian contains all geometric information of all the individual underlying manifolds. We provide an example of a family of similarity graphs, which we call annular proximity graphs with angle constraints, satisfying these sufficient conditions. We contrast our family of graphs with other constructions in the literature based on the alignment of tangent planes. Extensive numerical experiments expand the insights that our theory provides on the MMC problem.
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks to learn operators mapping between infinite-dimensional function spaces. We formulate the approximation of operators by compositions of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators. We prove a universal approximation theorem for our architecture. Furthermore, we introduce four classes of operator parameterizations: graph-based operators, low-rank operators, multipole graph-based operators, and Fourier operators, and describe efficient algorithms for computing with each one. The proposed neural operators are resolution-invariant: they share the same network parameters between different discretizations of the underlying function space and can be used for zero-shot super-resolution. Numerically, the proposed models show superior performance compared to existing machine-learning-based methodologies on Darcy flow and the Navier-Stokes equation, while being considerably faster than conventional PDE solvers.
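A single-channel, one-dimensional sketch of the Fourier operator parameterization mentioned above (a real Fourier layer acts on multi-channel functions with learned complex weight tensors; the names here are ours): transform to frequency space, apply learned multipliers to the lowest modes, zero the rest, and transform back.

```python
import numpy as np

def fourier_layer(v, weights):
    """One toy Fourier operator layer (1D, single channel).  Because the
    weights live on a fixed number of Fourier modes, the same parameters
    act on input sampled at any resolution -- the discretization
    invariance noted above."""
    vh = np.fft.rfft(v)
    out = np.zeros_like(vh)
    k = len(weights)
    out[:k] = vh[:k] * weights        # learned spectral multipliers
    return np.fft.irfft(out, n=len(v))
```

With unit weights and a band-limited input, the layer acts as the identity, and the very same weight vector can be applied to the function sampled on a finer grid, which is what enables zero-shot super-resolution.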
A common approach to modeling networks assigns each node to a position on a low-dimensional manifold where distance is inversely proportional to connection likelihood. More positive manifold curvature encourages more and tighter communities; negative curvature induces repulsion. We consistently estimate manifold type, dimension, and curvature from simply connected, complete Riemannian manifolds of constant curvature. We represent the graph as a noisy distance matrix based on the ties between cliques, then develop hypothesis tests to determine whether the observed distances could plausibly be embedded isometrically in each of the candidate geometries. We apply our approach to data-sets from economics and neuroscience.
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
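The modular scheme described above (random sampling to capture the range, compression to that subspace, then a deterministic factorization of the small matrix) can be written in a few lines of numpy; the oversampling parameter and names are conventional choices, not the paper's exact prescription.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Randomized partial SVD: (1) sample the range of A with a Gaussian
    test matrix, (2) compress A to the sampled subspace, (3) take a
    deterministic SVD of the small reduced matrix."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for sample range
    B = Q.T @ A                           # compressed matrix
    Uh, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Uh)[:, :k], s[:k], Vt[:k]
```

For a matrix of exact rank k the sampled subspace captures the full range with probability one, so the top-k singular triples are recovered essentially exactly; for noisy spectra the oversampling controls the accuracy, as the error analysis in the survey makes precise.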
We introduce an algorithm for computing geodesics on sampled manifolds that relies on simulation of quantum dynamics on a graph embedding of the sampled data. Our approach exploits classic results in semiclassical analysis and the quantum-classical correspondence, and forms a basis for techniques to learn the manifold from which a dataset is sampled, and subsequently for nonlinear dimensionality reduction of high-dimensional datasets. We illustrate the new algorithm with data sampled from model manifolds and with a clustering demonstration based on COVID-19 mobility data. Finally, our method reveals interesting connections between the discretization provided by data sampling and quantization.
Learning classifiers that are robust to adversarial examples has received a great deal of recent attention. A major drawback of the standard robust learning framework is that it imposes an artificial robustness radius $r$ that applies to all inputs. This ignores the fact that data may be highly heterogeneous, in which case it is plausible that robustness regions should be larger in some regions of the data and smaller in others. In this paper, we address this limitation by proposing a new limit classifier, called the neighborhood optimal classifier, that extends the Bayes optimal classifier outside its support by using the label of the closest in-support point. We then argue that this classifier maximizes the size of its robustness regions subject to the constraint of having accuracy equal to that of the Bayes optimal classifier. We then present sufficient conditions under which general non-parametric methods that can be represented as weight functions converge towards this limit, and show that both nearest-neighbor and kernel classifiers satisfy them under certain conditions.
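A toy version of the extension step described above, assuming the in-support points and their (Bayes optimal) labels are given: outside the support, predict the label of the nearest in-support point. Names and the brute-force search are ours, for illustration.

```python
import numpy as np

def extend_by_nearest_support(support_pts, support_labels):
    """Return a classifier that labels any query point with the label of
    its closest in-support point (the extension used by the neighborhood
    optimal classifier, in caricature)."""
    def classify(x):
        d = np.linalg.norm(support_pts - x, axis=1)
        return support_labels[d.argmin()]
    return classify
```

The robustness region of each prediction is then the set of perturbations that do not change which support point is closest, so its size adapts to the local geometry of the data rather than being fixed by a single radius $r$.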