We describe a measure quantization procedure, i.e., an algorithm that finds the best approximation of a target probability law (and, more generally, of a signed measure of finite variation) by a sum of Q Dirac masses (Q being the quantization parameter). The procedure minimizes a statistical distance between the original measure and its quantized version; the distance is built from a negative definite kernel and, if necessary, can be computed on the fly and fed to a stochastic optimization algorithm (such as SGD, Adam, ...). We investigate theoretically the fundamental question of the existence of an optimal measure quantizer and identify the kernel properties required to guarantee suitable behavior. We test the procedure, called HEMQ, on several databases: multi-dimensional Gaussian mixtures, Wiener space cubature, Italian wine cultivars and the MNIST image database. The results indicate that the HEMQ algorithm is robust and versatile and, for the class of Huber-energy kernels, matches the expected intuitive behavior.
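As a point of reference only, the following Python/PyTorch sketch mimics the kind of optimization described above: it quantizes an empirical target by Q uniformly weighted Dirac masses, minimizing a Huber-energy-type kernel discrepancy with Adam on mini-batches. The kernel form, the parameter a, the batch size and the learning rate are illustrative assumptions, not the paper's exact setup (the paper's setting is more general, allowing signed measures and non-uniform weights).

import torch

# Minimal sketch (not the authors' implementation): quantize an empirical target
# measure by Q uniformly weighted Dirac masses via stochastic minimization of a
# kernel-based discrepancy.
def kernel_discrepancy(x, y, a=1.0):
    # negative definite Huber-energy-type kernel k(u, v) = -sqrt(a^2 + |u - v|^2)
    sq = lambda u, v: ((u[:, None, :] - v[None, :, :]) ** 2).sum(-1)
    k = lambda u, v: -torch.sqrt(a ** 2 + sq(u, v))
    # V-statistic estimate of E k(X,X') + E k(Y,Y') - 2 E k(X,Y) (>= 0 up to a constant)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

torch.manual_seed(0)
target = torch.randn(5000, 2)                   # stand-in for samples of the target law
Q = 50                                          # quantization parameter
atoms = torch.randn(Q, 2, requires_grad=True)   # positions of the Q Dirac masses
opt = torch.optim.Adam([atoms], lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    idx = torch.randint(0, target.shape[0], (256,))
    loss = kernel_discrepancy(atoms, target[idx])   # discrepancy against a mini-batch
    loss.backward()
    opt.step()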
The sequence of moments of a vector-valued random variable can characterize its law. We study the analogous problem for path-valued random variables, that is, stochastic processes, by using so-called robust signature moments. This allows us to derive a maximum-mean-discrepancy-type metric for laws of stochastic processes and to study the topology it induces on the space of such laws. This metric can be kernelized using the signature kernel, which makes it efficiently computable. As an application, we provide a non-parametric two-sample hypothesis test for laws of stochastic processes.
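For reference, once the metric is kernelized with a signature kernel $k_{\mathrm{sig}}$ on path space, it takes the usual (squared) MMD form; the notation below is mine, not the paper's:

$d(\mu,\nu)^2 = \mathbb{E}_{X,X'\sim\mu}\big[k_{\mathrm{sig}}(X,X')\big] + \mathbb{E}_{Y,Y'\sim\nu}\big[k_{\mathrm{sig}}(Y,Y')\big] - 2\,\mathbb{E}_{X\sim\mu,\,Y\sim\nu}\big[k_{\mathrm{sig}}(X,Y)\big].$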
Comparing probability distributions is at the crux of many machine learning algorithms. Maximum mean discrepancies (MMD) and optimal transport (OT) distances are two classes of distances between probability measures that have attracted abundant attention in recent years. This paper establishes some conditions under which the Wasserstein distance can be controlled by MMD norms. Our work is motivated by compressive statistical learning (CSL) theory, a general framework for resource-efficient large-scale learning in which the training data is summarized in a single vector (called a sketch) that captures the information relevant to the considered learning task. Inspired by existing results in CSL, we introduce the Hölder Lower Restricted Isometric Property (Hölder LRIP) and show that this property comes with interesting guarantees for compressive statistical learning. Based on the relations between the MMD and the Wasserstein distance, we provide guarantees for compressive statistical learning by introducing and studying the concept of Wasserstein learnability of the learning task, that is, when some task-specific metric between probability distributions can be bounded by a Wasserstein distance.
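Schematically, the two controls mentioned above read as follows; the constants, the exponents and the choice of the order-1 Wasserstein distance are placeholders of mine, not the paper's precise statement:

$\mathsf{W}_1(\mu,\nu) \le C\,\|\mu-\nu\|_{\kappa}^{\gamma}$ (Wasserstein distance controlled by an MMD norm), and $d_{\mathrm{task}}(\mu,\nu) \le C'\,\mathsf{W}_1(\mu,\nu)^{\gamma'}$ (task-specific metric bounded by a Wasserstein distance).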
We introduce and study a novel model-selection strategy for Bayesian learning, based on optimal transport, along with its associated predictive posterior law: the Wasserstein population barycenter of the posterior law over models. We first show how this estimator, termed Bayesian Wasserstein barycenter (BWB), arises naturally in a general, parameter-free Bayesian model-selection framework, when the considered Bayesian risk is the Wasserstein distance. Examples are given, illustrating how the BWB extends some classic parametric and non-parametric selection strategies. Furthermore, we also provide explicit conditions granting the existence and statistical consistency of the BWB, and discuss some of its general and specific properties, providing insights into its advantages compared to usual choices, such as the model average estimator. Finally, we illustrate how this estimator can be computed using the stochastic gradient descent (SGD) algorithm in Wasserstein space introduced in a companion paper arXiv:2201.04232v2 [math.OC], and provide a numerical example for experimental validation of the proposed method.
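For orientation, when the Bayesian risk is the squared 2-Wasserstein distance, the estimator described above solves a population barycenter problem of the schematic form below (the notation is mine, not the paper's):

$\hat{\mu}_{\mathrm{BWB}} \in \operatorname{arg\,min}_{\mu \in \mathcal{P}_2(\mathcal{X})}\; \mathbb{E}_{\theta \sim \Pi(\cdot \mid \mathrm{data})}\big[ W_2^2(\mu, \mu_\theta) \big],$

where $\mu_\theta$ denotes the model distribution indexed by $\theta$ and $\Pi(\cdot \mid \mathrm{data})$ the posterior law over models.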
Shape constraints, such as non-negativity, monotonicity, convexity or supermodularity, play a key role in various applications of machine learning and statistics. However, incorporating this side information into predictive models in a hard fashion (for instance, at all points of an interval) is a notoriously challenging problem. We propose a unified and modular convex optimization framework, relying on second-order cone (SOC) tightening, to encode hard affine SDP constraints on function derivatives for models belonging to vector-valued reproducing kernel Hilbert spaces (vRKHSs). The modular nature of the proposed approach allows one to simultaneously handle multiple shape constraints and to tighten an infinite number of constraints into finitely many. We prove the convergence of the proposed scheme and that of an adaptive variant, leveraging geometric properties of vRKHSs. Owing to the covering-based tightening construction, the method is particularly well suited to tasks with small to moderate input dimensions. The efficiency of the approach is illustrated in the context of shape optimization, robotics and econometrics.
Discrepancy measures between probability distributions, often termed statistical distances, are ubiquitous in probability theory, statistics and machine learning. To mitigate the curse of dimensionality when estimating these distances from data, recent work has proposed smoothing out local irregularities in the measured distributions via convolution with a Gaussian kernel. Motivated by the scalability of this framework to high dimensions, we investigate the structural and statistical behavior of the Gaussian-smoothed $p$-Wasserstein distance $\mathsf{W}_p^{(\sigma)}$, for arbitrary $p \geq 1$. After establishing basic metric and topological properties of $\mathsf{W}_p^{(\sigma)}$, we explore the asymptotic statistical behavior of $\mathsf{W}_p^{(\sigma)}(\hat{\mu}_n, \mu)$, where $\hat{\mu}_n$ is the empirical distribution of $n$ independent observations from $\mu$. We prove that $\mathsf{W}_p^{(\sigma)}$ enjoys a parametric empirical convergence rate of $n^{-1/2}$, which contrasts with the $n^{-1/d}$ rate for the unsmoothed $\mathsf{W}_p$ when $d \geq 3$. Our proof relies on controlling $\mathsf{W}_p^{(\sigma)}$ by a $p$th-order smooth Sobolev distance $\mathsf{d}_p^{(\sigma)}$ and deriving the limit distribution of $\sqrt{n}\,\mathsf{d}_p^{(\sigma)}(\hat{\mu}_n,\mu)$ for all dimensions $d$. As applications, we provide asymptotic guarantees for two-sample testing and minimum distance estimation using $\mathsf{W}_p^{(\sigma)}$, with experiments for $p=2$ using $\mathsf{d}_2^{(\sigma)}$.
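For readers unfamiliar with the notation, the Gaussian-smoothed distance is the Wasserstein distance between the Gaussian-convolved measures (a standard definition, displayed here for reference):

$\mathsf{W}_p^{(\sigma)}(\mu,\nu) = \mathsf{W}_p\big(\mu * \mathcal{N}_\sigma,\ \nu * \mathcal{N}_\sigma\big), \qquad \mathcal{N}_\sigma = \mathcal{N}(0, \sigma^2 I_d).$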
Many applications in computational science and statistical inference require the computation of expectations with respect to complex high-dimensional distributions with unknown normalization constants, as well as the estimation of these constants. Here we develop a method based on generating samples from a simple base distribution, transporting them along the flow generated by a velocity field, and performing averages along these flow lines. This non-equilibrium importance sampling (NEIS) strategy is straightforward to implement and can be used for computations with arbitrary target distributions. On the theory side, we discuss how to tailor the velocity field to the target and establish conditions under which the proposed estimator is a perfect estimator with zero variance. We also draw connections between NEIS and approaches based on mapping the base distribution onto the target via a transport map. On the computational side, we show how to represent the velocity field by a deep neural network and train it towards the zero-variance optimum. These results are illustrated numerically on high-dimensional examples, where we show that training the velocity field can decrease the variance of the NEIS estimator by up to six orders of magnitude compared to a vanilla estimator. We also show that NEIS performs better on these examples than Neal's annealed importance sampling (AIS).
We adapt concepts originally developed in the areas of multidimensional scaling and dimensionality reduction for multivariate data to the functional setting. We focus on classical scaling and Isomap, prototypical methods that have played important roles in these areas, and showcase their use in the context of functional data analysis. In the process, we highlight the crucial role played by the ambient metric.
We present new classes of positive definite kernels on non-standard spaces that are integrally strictly positive definite or characteristic. In particular, we discuss radial kernels on separable Hilbert spaces, and introduce broad classes of kernels on Banach spaces and on metric spaces of strong negative type. The general results are used to give explicit classes of kernels on separable $L^p$ spaces and on sets of measures.
Generative deep neural networks used in machine learning, such as variational auto-encoders (VAEs) and generative adversarial networks (GANs), produce new objects each time they are invoked, under the constraint of remaining similar to some list of examples given as input. However, this behavior differs from that of a human artist, who evolves their style over time and seldom returns to their initial creations. We investigate a situation where a VAE is used to sample from a probability measure described by some empirical dataset. Building on recent works on Radon-Sobolev statistical distances, we propose a numerical paradigm, to be used in conjunction with the generative algorithm, that satisfies the following two requirements: the objects created do not repeat, and they evolve to fill the entire target probability measure.
Wasserstein gradient flows on probability measures have found a host of applications in various optimization problems. They typically arise as the continuum limit of exchangeable particle systems evolving under some mean-field interaction involving a gradient-type potential. However, in many problems, such as in multi-layer neural networks, the so-called particles are edge weights on large graphs whose nodes are exchangeable. Such large graphs are known to converge, as their size grows to infinity, to continuum limits called graphons. We show that the Euclidean gradient flow of a suitable function of the edge weights converges to a novel continuum limit given by a curve on the space of graphons that can appropriately be described as a gradient flow or, more technically, a curve of maximal slope. Several natural functions on graphons, such as homomorphism functions and the scalar entropy, are covered by our set-up, and examples are worked out in detail.
We consider a general non-linear model in which the signal is a finite mixture of an unknown, possibly growing, number of features issued from a continuous dictionary parameterized by a real non-linear parameter. The signal is observed with Gaussian (possibly correlated) noise in either a continuous or a discrete setting. We propose an off-the-grid optimization method, that is, a method that does not use any discretization scheme on the parameter space, to estimate both the non-linear parameters of the features and the linear parameters of the mixture. We use recent results on the geometry of off-the-grid methods to give a minimal separation on the true underlying non-linear parameters such that interpolating certificate functions can be constructed. Using also tail bounds for suprema of Gaussian processes, we bound the prediction error with high probability. Assuming that the certificate functions can be constructed, our prediction error bound is, up to log factors, similar to the rates attained by the Lasso predictor in the linear regression model. We also establish convergence rates that quantify, with high probability, the quality of estimation of both the linear and the non-linear parameters.
In machine learning and statistics, it is often desirable to reduce the dimensionality of a sample of data points in a high-dimensional space $\mathbb{R}^d$. This paper introduces a dimensionality reduction method where the embedding coordinates are the eigenvectors of a positive semi-definite kernel obtained as the solution of an infinite-dimensional analogue of a semi-definite program. This embedding is adaptive and non-linear. We discuss this problem under weak and strong smoothness assumptions on the learned kernel. A main feature of our approach is the existence, in both cases, of an out-of-sample extension formula for the embedding coordinates. This extrapolation formula yields an extension of the kernel matrix to a data-dependent Mercer kernel function. Our empirical results indicate that this embedding method is more robust to the influence of outliers than spectral embedding methods.
The convergence, in the mean-field limit, to solutions of conservative stochastic partial differential equations is shown with optimal rate of convergence. As a second main result, a quantitative central limit theorem for this SPDE is derived, again with optimal rate of convergence. The results apply in particular to the convergence, in the mean-field scaling, of stochastic gradient descent dynamics in overparametrized, shallow neural networks to solutions of SPDEs. It is shown that the inclusion of fluctuations in the limiting SPDE improves the rate of convergence and retains information about the fluctuations of stochastic gradient descent.
In this paper we prove Gamma-convergence of a nonlocal perimeter of Minkowski type to a local anisotropic perimeter. The nonlocal model describes the regularizing effect of adversarial training in binary classifications. The energy essentially depends on the interaction between two distributions modelling likelihoods for the associated classes. We overcome typical strict regularity assumptions for the distributions by only assuming that they have bounded $BV$ densities. In the natural topology coming from compactness, we prove Gamma-convergence to a weighted perimeter with weight determined by an anisotropic function of the two densities. Despite being local, this sharp interface limit reflects classification stability with respect to adversarial perturbations. We further apply our results to deduce Gamma-convergence of the associated total variations, to study the asymptotics of adversarial training, and to prove Gamma-convergence of graph discretizations for the nonlocal perimeter.
Neural networks trained to minimize the logistic (a.k.a. cross-entropy) loss with gradient-based methods are observed to perform well in many supervised classification tasks. Towards understanding this phenomenon, we analyze the training and generalization behavior of infinitely wide two-layer neural networks with homogeneous activations. We show that the limits of the gradient flow on exponentially tailed losses can be fully characterized as a max-margin classifier in a certain non-Hilbertian space of functions. In the presence of hidden low-dimensional structures, the resulting margin is independent of the ambient dimension, which leads to strong generalization bounds. In contrast, training only the output layer implicitly solves a kernel support vector machine, which a priori does not enjoy such adaptivity. Our analysis of training is non-quantitative in terms of running time, but we prove computational guarantees in simplified settings by showing equivalences with online mirror descent. Finally, numerical experiments suggest that our analysis describes well the practical behavior of two-layer neural networks with ReLU activations and confirm the statistical benefits of this implicit bias.
Classical asymptotic theory for statistical inference usually involves calibrating a statistic by fixing the dimension $d$ while letting the sample size $n$ increase to infinity. Recently, much effort has been dedicated towards understanding how these methods behave in high-dimensional settings, where $d$ and $n$ both increase to infinity together. This often leads to different inference procedures, depending on the assumptions about the dimensionality, leaving the practitioner in a bind: given a dataset with 100 samples in 20 dimensions, should they calibrate by assuming $n \gg d$, or $d/n \approx 0.2$? This paper considers the goal of dimension-agnostic inference: developing methods whose validity does not depend on any assumption on $d$ versus $n$. We introduce an approach that uses variational representations of existing test statistics along with sample splitting and self-normalization to produce a new test statistic with a Gaussian limiting distribution, regardless of how $d$ scales with $n$. The resulting statistic can be viewed as a careful modification of degenerate U-statistics, dropping diagonal blocks and retaining off-diagonal blocks. We exemplify our technique for some classical problems including one-sample mean and covariance testing, and show that our tests have minimax rate-optimal power against appropriate local alternatives. In most settings, our cross U-statistic matches the high-dimensional power of the corresponding (degenerate) U-statistic up to a $\sqrt{2}$ factor.
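The following Python sketch conveys the flavor of such a sample-split, self-normalized statistic for one-sample mean testing; the specific splitting and studentization details are my reading of the construction, not the authors' code.

import numpy as np

# Minimal sketch (my assumptions): a cross-type statistic for H0: E[X] = 0 in R^d,
# designed to have an approximately N(0, 1) limit regardless of how d scales with n.
def cross_mean_statistic(X, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.permutation(X)                 # random split into two halves
    m = X.shape[0] // 2
    X1, X2 = X[:m], X[m:2 * m]
    direction = X2.mean(axis=0)            # mean direction estimated from half 2
    u = X1 @ direction                     # project half 1 onto that direction
    return np.sqrt(m) * u.mean() / u.std(ddof=1)   # studentized, ~N(0, 1) under H0

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))             # n = 200 samples in d = 50 dimensions
print(cross_mean_statistic(X))             # should look like a standard normal draw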
Quantifying the deviation of a probability distribution is challenging when the target distribution is defined by a density with an intractable normalizing constant. The kernel Stein discrepancy (KSD) was proposed to address this problem and has been applied to various tasks including diagnosing approximate MCMC samplers and goodness-of-fit testing for unnormalized statistical models. This article investigates a convergence control property of the diffusion kernel Stein discrepancy (DKSD), an instance of the KSD proposed by Barp et al. (2019). We extend the result of Gorham and Mackey (2017), which showed that the KSD controls the bounded-Lipschitz metric, to functions of polynomial growth. Specifically, we prove that the DKSD controls the integral probability metric defined by a class of pseudo-Lipschitz functions, a polynomial generalization of Lipschitz functions. We also provide practical sufficient conditions on the reproducing kernel for the stated property to hold. In particular, we show that the DKSD detects non-convergence in moments with an appropriate kernel.
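For concreteness, in the special case of the Langevin Stein operator with score $s_p = \nabla \log p$ and reproducing kernel $k$ (the DKSD replaces the score term by a diffusion-matrix-weighted version), the KSD admits the standard closed form, displayed here for reference:

$k_p(x,y) = s_p(x)^\top s_p(y)\, k(x,y) + s_p(x)^\top \nabla_y k(x,y) + s_p(y)^\top \nabla_x k(x,y) + \nabla_x \cdot \nabla_y k(x,y),$
$\mathrm{KSD}^2(\mu \,\|\, p) = \mathbb{E}_{X,X' \sim \mu}\big[k_p(X,X')\big].$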
We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD). We present two distribution-free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.
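Since the abstract notes that the MMD can be computed in quadratic time, a minimal Python sketch of the standard unbiased (U-statistic) estimator of MMD^2 is given below; the Gaussian RBF kernel and its bandwidth are illustrative choices, not tied to the paper's experiments.

import numpy as np

# Minimal sketch (not the authors' code): quadratic-time unbiased estimate of MMD^2.
def mmd2_unbiased(X, Y, bandwidth=1.0):
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    m, n = len(X), len(Y)
    # drop diagonal terms to obtain the unbiased within-sample averages
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(300, 5))
Y = rng.normal(0.5, 1.0, size=(300, 5))    # shifted distribution
print(mmd2_unbiased(X, Y))                 # noticeably positive when the laws differ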
The workhorse of machine learning is stochastic gradient descent. To access stochastic gradients, it is common to iterate over the input/output pairs of a training dataset. Interestingly, it appears that one does not need full supervision to access stochastic gradients, which is the main motivation of this paper. After formalizing the "active labeling" problem, which focuses on active learning with partial supervision, we provide a streaming technique that provably minimizes the ratio of generalization error over the number of samples. We illustrate our technique in depth for robust regression.