We introduce a new framework for nonlinear sufficient dimension reduction in which both the predictor and the response are distributional data, modeled as members of a metric space. Our key step to achieving nonlinear sufficient dimension reduction is to build universal kernels on the metric spaces, which results in reproducing kernel Hilbert spaces for the predictor and response that are rich enough to characterize the conditional independence that determines sufficient dimension reduction. For univariate distributions, we construct the universal kernel using the Wasserstein distance via its well-known quantile representation. For multivariate distributions, we resort to the recently developed sliced Wasserstein distance for this purpose. Because the sliced Wasserstein distance can be computed through the quantile representation of univariate Wasserstein distances, the computational cost for multivariate distributions is kept at a manageable level. The method is applied to several data sets, including fertility and mortality distribution data and Calgary temperature data.
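As a concrete illustration of the quantile representation that keeps the computation manageable, here is a minimal sketch (not the authors' implementation) of the one-dimensional $p$-Wasserstein distance between equal-size samples, and of the Monte Carlo sliced Wasserstein distance built from it:

```python
import numpy as np

def wasserstein_1d(x, y, p=2):
    """p-Wasserstein distance between two 1D empirical samples of equal size.

    For equal-size samples, the quantile representation reduces to sorting:
    W_p^p = mean(|sort(x) - sort(y)|^p).
    """
    x, y = np.sort(x), np.sort(y)
    return np.mean(np.abs(x - y) ** p) ** (1.0 / p)

def sliced_wasserstein(X, Y, n_projections=100, p=2, rng=None):
    """Monte Carlo estimate of the sliced p-Wasserstein distance between
    multivariate samples X, Y of shape (n, d)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)          # uniform direction on the sphere
        total += wasserstein_1d(X @ theta, Y @ theta, p) ** p
    return (total / n_projections) ** (1.0 / p)
```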
With the rapid development of data collection techniques, complex data objects that are not in the Euclidean space are frequently encountered in new statistical applications. The Fr\'echet regression model (Petersen & M\"uller 2019) provides a promising framework for regression analysis with metric space-valued responses. In this paper, we introduce a flexible sufficient dimension reduction (SDR) method for Fr\'echet regression to achieve two purposes: to mitigate the curse of dimensionality caused by high-dimensional predictors, and to provide a visual inspection tool for Fr\'echet regression. Our approach is flexible enough to turn any existing SDR method for Euclidean (X,Y) into one for Euclidean X and metric space-valued Y. The basic idea is to first map the metric space-valued random object $Y$ to a real-valued random variable $f(Y)$ using a class of functions, and then perform classical SDR on the transformed data. If the class of functions is sufficiently rich, then we are guaranteed to uncover the Fr\'echet SDR space. We show that such a class, which we call an ensemble, can be generated by a universal kernel. We establish the consistency and asymptotic convergence rate of the proposed methods. The finite-sample performance of the proposed methods is illustrated through simulation studies for several commonly encountered metric spaces, including the Wasserstein space, the space of symmetric positive definite matrices, and the sphere. We illustrate the data visualization aspect of our method by exploring human mortality distribution data across countries and by studying the distribution of hematoma density.
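To make the ensemble idea concrete, the following is a minimal sketch under illustrative assumptions: the ensemble consists of Gaussian-type kernel functions centered at observed responses (with a user-supplied `metric` on the response space), and sliced inverse regression stands in for the classical SDR method; neither choice is prescribed by the abstract.

```python
import numpy as np

def sir_candidate(Z, f_y, n_slices=5):
    """Sliced-inverse-regression candidate matrix for one scalar response f_y.

    Z is assumed centered and whitened."""
    n, p = Z.shape
    order = np.argsort(f_y)
    M = np.zeros((p, p))
    for sl in np.array_split(order, n_slices):
        m = Z[sl].mean(0)
        M += (len(sl) / n) * np.outer(m, m)
    return M

def frechet_sdr(X, Y, metric, d=1, gamma=1.0, n_slices=5):
    """Estimate d SDR directions for Euclidean X and metric-space-valued Y."""
    n, p = X.shape
    # Whiten X, as in classical SIR (assumes p >= 2).
    Xc = X - X.mean(0)
    w, V = np.linalg.eigh(np.cov(X, rowvar=False))
    sig_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    Z = Xc @ sig_inv_sqrt
    # Ensemble of real-valued transforms f_j(y) = exp(-gamma * metric(y, Y_j)^2),
    # one per observed response; a universal kernel guarantees richness.
    M = np.zeros((p, p))
    for j in range(n):
        f_y = np.array([np.exp(-gamma * metric(Y[i], Y[j]) ** 2)
                        for i in range(n)])
        M += sir_candidate(Z, f_y, n_slices)
    # Leading eigenvectors of the averaged candidate matrix, mapped back
    # to the original coordinates.
    evals, evecs = np.linalg.eigh(M / n)
    return sig_inv_sqrt @ evecs[:, np.argsort(evals)[::-1][:d]]
```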
In this work, we present formulations of the Kullback-Leibler and R\'enyi divergences via the alpha log-determinant (log-det) divergences between Hilbert-Schmidt operators, in two different settings: (i) Gaussian measures defined on reproducing kernel Hilbert spaces (RKHS); and (ii) Gaussian processes with square integrable sample paths. For characteristic kernels, the first setting yields divergences between arbitrary Borel probability measures on a complete, separable metric space. We show that the alpha log-det divergences are continuous in the Hilbert-Schmidt norm, which allows us to apply laws of large numbers for Hilbert space-valued random variables. Consequently, we show that, in both settings, the infinite-dimensional divergences can be consistently and efficiently estimated from their finite-dimensional versions, using finite-dimensional Gram matrices/Gaussian measures and finite sample data, with dimension-independent sample complexities in all cases. RKHS methodology plays a central role in the theoretical analysis in both settings. Numerical experiments illustrate the mathematical formulations.
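The following hedged sketch illustrates the finite-dimensional estimation idea only in spirit: it replaces the RKHS with explicit random Fourier features (an assumption of this sketch, not the paper's Gram-matrix construction) and evaluates a regularized KL divergence between the two resulting Gaussians in closed form.

```python
import numpy as np

def rff(X, W, b):
    """Random Fourier features approximating a Gaussian kernel."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

def gaussian_kl(m1, C1, m2, C2, gamma=1e-3):
    """KL( N(m1, C1 + gamma I) || N(m2, C2 + gamma I) ); gamma regularizes."""
    k = len(m1)
    A = C1 + gamma * np.eye(k)
    B = C2 + gamma * np.eye(k)
    Binv = np.linalg.inv(B)
    dm = m2 - m1
    return 0.5 * (np.trace(Binv @ A) + dm @ Binv @ dm - k
                  + np.linalg.slogdet(B)[1] - np.linalg.slogdet(A)[1])

# Usage on two toy distributions (hypothetical data for illustration).
rng = np.random.default_rng(0)
d, D = 2, 200                      # input dimension, number of features
W = rng.standard_normal((d, D))    # frequencies for a unit-bandwidth kernel
b = rng.uniform(0, 2 * np.pi, D)
X = rng.standard_normal((500, d))          # sample from P
Y = rng.standard_normal((500, d)) + 1.0    # sample from Q (shifted)
FX, FY = rff(X, W, b), rff(Y, W, b)
print(gaussian_kl(FX.mean(0), np.cov(FX, rowvar=False),
                  FY.mean(0), np.cov(FY, rowvar=False)))
```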
We present new classes of positive definite kernels on non-standard spaces that are integrally strictly positive definite or characteristic. In particular, we discuss radial kernels on separable Hilbert spaces, and introduce broad classes of kernels on Banach spaces and on metric spaces of strong negative type. The general results are used to give explicit classes of kernels on separable $L^p$ spaces and on sets of measures.
We address the consistency of a kernel ridge regression estimate of the conditional mean embedding (CME), which is an embedding of the conditional distribution of $Y$ given $X$ into a target reproducing kernel Hilbert space $\mathcal{H}_Y$. The CME allows us to take conditional expectations of target RKHS functions, and has been employed in nonparametric causal and Bayesian inference. We address the misspecified setting, where the target CME is in the space of Hilbert-Schmidt operators acting from an input interpolation space between $\mathcal{H}_X$ and $L_2$ to $\mathcal{H}_Y$. This space of operators is shown to be isomorphic to a newly defined vector-valued interpolation space. Using this isomorphism, we derive a novel and adaptive statistical learning rate for the empirical CME estimator under the misspecified setting. Our analysis shows that our rates match the optimal $O(\log n / n)$ rate without assuming $\mathcal{H}_Y$ to be finite dimensional. We further establish a lower bound on the learning rate, which shows that the obtained upper bound is optimal.
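For concreteness, here is a minimal sketch of the standard kernel ridge regression estimator of the CME, the estimator whose consistency is analyzed; the Gaussian kernel and regularization values are illustrative choices.

```python
import numpy as np

def gauss_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def cme_weights(X, x, lam=1e-2, gamma=1.0):
    """Weights w(x) such that E[f(Y) | X = x] ~= sum_i w_i(x) f(Y_i)
    for any f in the target RKHS."""
    n = X.shape[0]
    K = gauss_kernel(X, X, gamma)
    kx = gauss_kernel(X, x[None, :], gamma)[:, 0]
    return np.linalg.solve(K + n * lam * np.eye(n), kx)

# Usage: estimate E[cos(Y) | X = 0.5] on hypothetical data Y = sin(3X) + noise.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 1))
Y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
w = cme_weights(X, np.array([0.5]))
print(w @ np.cos(Y))   # approximately E[cos(Y) | X = 0.5]
```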
We present a novel class of projected methods to perform statistical analysis on a data set of probability distributions on the real line, with the 2-Wasserstein metric. We focus in particular on principal component analysis (PCA) and regression. To define these models, we exploit a representation of the Wasserstein space closely related to its weak Riemannian structure, by mapping the data to a suitable linear space and using a metric projection operator to constrain the results in the Wasserstein space. By carefully choosing the tangent point, we are able to derive fast empirical methods that exploit a constrained B-spline approximation. As a byproduct of our approach, we are also able to derive faster routines for previous work on PCA for distributions. By means of simulation studies, we compare our methods to previously proposed ones, showing that the projected methods achieve comparable performance and remain extremely flexible even under misspecification. Several theoretical properties of the models are investigated, and asymptotic consistency is proven. Two real-world applications to Covid-19 mortality in the United States and to wind speed forecasting are discussed.
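A crude sketch of the underlying tangent-space construction for one-dimensional distributions, with row-wise sorting standing in for the paper's B-spline metric projection (an assumption of this sketch, not the authors' operator):

```python
import numpy as np

def quantile_pca(samples, n_grid=100, k=2):
    """PCA of 1D distributions via their quantile functions.

    samples: list of 1D arrays, one per distribution."""
    q = np.linspace(0.01, 0.99, n_grid)
    Q = np.stack([np.quantile(s, q) for s in samples])   # (n_dists, n_grid)
    mean = Q.mean(0)
    U, S, Vt = np.linalg.svd(Q - mean, full_matrices=False)
    scores = U[:, :k] * S[:k]
    recon = mean + scores @ Vt[:k]
    # Enforce monotonicity of reconstructed quantile functions: a crude
    # proxy for projecting back onto the Wasserstein space.
    recon = np.sort(recon, axis=1)
    return scores, recon
```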
We propose a new autoregressive model for the statistical analysis of multivariate distributional time series. The data of interest consist of a collection of multiple series of probability measures supported over a bounded interval of the real line, indexed by distinct time instants. The probability measures are modeled as random objects in the Wasserstein space. We establish the autoregressive model in the tangent space at the Lebesgue measure by first centering all the raw measures so that their Fr\'echet means become the Lebesgue measure. Using the theory of iterated random function systems, results on the existence, uniqueness, and stationarity of the solution of such a model are provided. We also propose a consistent estimator for the model coefficients. In addition to the analysis of simulated data, the proposed model is illustrated on two real data sets made of observations from age distributions in different countries and from the bike-sharing network in Paris. Finally, due to the positivity and boundedness constraints that we impose on the model coefficients, the proposed estimator learned under these constraints naturally has a sparse structure. The sparsity allows the proposed model to be applied to learning a graph of temporal dependency from multivariate distributional time series.
In nonparametric independence testing, we observe i.i.d.\ data $\{(X_i,Y_i)\}_{i=1}^n$, where $X \in \mathcal{X}, Y \in \mathcal{Y}$ lie in any general spaces, and we wish to test the null that $X$ is independent of $Y$. Modern test statistics such as the kernel Hilbert-Schmidt Independence Criterion (HSIC) and Distance Covariance (dCov) have intractable null distributions due to the degeneracy of the underlying U-statistics. Thus, in practice, one often resorts to using permutation testing, which provides a nonasymptotic guarantee at the expense of recalculating the quadratic-time statistics (say) a few hundred times. This paper provides a simple but nontrivial modification of HSIC and dCov (called xHSIC and xdCov, pronounced ``cross'' HSIC/dCov) so that they have a limiting Gaussian distribution under the null, and thus do not require permutations. This requires building on the newly developed theory of cross U-statistics by Kim and Ramdas (2020), and in particular developing several nontrivial extensions of the theory in Shekhar et al. (2022), which developed an analogous permutation-free kernel two-sample test. We show that our new tests, like the originals, are consistent against fixed alternatives, and minimax rate optimal against smooth local alternatives. Numerical simulations demonstrate that compared to the full dCov or HSIC, our variants have the same power up to a $\sqrt 2$ factor, giving practitioners a new option for large problems or data-analysis pipelines where computation, not sample size, could be the bottleneck.
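For reference, a minimal sketch of the classical quadratic-time (biased) HSIC statistic and the permutation calibration whose repeated recomputation the cross-statistics avoid; the Gaussian kernel is an illustrative choice, and this is not the paper's xHSIC.

```python
import numpy as np

def hsic(X, Y, gamma=1.0):
    """Biased quadratic-time HSIC for X, Y given as (n, d) arrays."""
    n = len(X)
    def gram(A):
        d2 = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(gram(X) @ H @ gram(Y) @ H) / n ** 2

def permutation_pvalue(X, Y, n_perm=200, rng=None):
    """Permutation p-value: the quadratic statistic is recomputed n_perm times."""
    rng = np.random.default_rng(rng)
    obs = hsic(X, Y)
    null = [hsic(X, Y[rng.permutation(len(Y))]) for _ in range(n_perm)]
    return (1 + sum(v >= obs for v in null)) / (1 + n_perm)
```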
Testing the significance of a variable or group of variables $X$ for predicting a response $Y$, given additional covariates $Z$, is a ubiquitous task in statistics. A simple but common approach is to specify a linear model, and then test whether the regression coefficient for $X$ is non-zero. However, when the model is misspecified, the test may have poor power, for example when $X$ is involved in complex interactions, or lead to many false rejections. In this work we study the problem of testing the model-free null of conditional mean independence, i.e. that the conditional mean of $Y$ given $X$ and $Z$ does not depend on $X$. We propose a simple and general framework that can leverage flexible nonparametric or machine learning methods, such as additive models or random forests, to yield both robust error control and high power. The procedure involves using these methods to perform regressions, first to estimate a form of projection of $Y$ on $X$ and $Z$ using one half of the data, and then to estimate the expected conditional covariance between this projection and $Y$ on the remaining half of the data. While the approach is general, we show that a version of our procedure using spline regression achieves what we show is the minimax optimal rate in this nonparametric testing problem. Numerical experiments demonstrate the effectiveness of our approach both in terms of maintaining Type I error control, and power, compared to several existing approaches.
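A hedged sketch of the two-step, sample-splitting procedure described above; the random forest learners, the studentization, and the in-sample centering on the second half are assumptions of this sketch, not the authors' exact specification.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def conditional_mean_test_stat(X, Z, Y):
    """X, Z: (n, dx), (n, dz) covariate arrays; Y: (n,) response."""
    n = len(Y)
    half = n // 2
    XZ = np.hstack([X, Z])
    # Step 1: learn a projection f(x, z) ~ E[Y | X = x, Z = z] on one half.
    f = RandomForestRegressor(random_state=0).fit(XZ[:half], Y[:half])
    fxz = f.predict(XZ[half:])
    # Step 2: on the other half, estimate the expected conditional covariance
    # between f(X, Z) and Y given Z via two regressions on Z alone.
    g = RandomForestRegressor(random_state=0).fit(Z[half:], fxz)
    h = RandomForestRegressor(random_state=0).fit(Z[half:], Y[half:])
    resid_f = fxz - g.predict(Z[half:])
    resid_y = Y[half:] - h.predict(Z[half:])
    prods = resid_f * resid_y
    # Studentized statistic: large values suggest the conditional mean of Y
    # given X and Z does depend on X.
    return np.sqrt(len(prods)) * prods.mean() / prods.std()
```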
We introduce the Conditional Independence Regression CovariancE (CIRCE), a measure of conditional independence for multivariate continuous-valued variables. CIRCE applies as a regularizer in settings where we wish to learn neural features $\varphi(X)$ of data $X$ to estimate a target $Y$, while being conditionally independent of a distractor $Z$ given $Y$. Both $Z$ and $Y$ are assumed to be continuous-valued but relatively low dimensional, whereas $X$ and its features may be complex and high dimensional. Relevant settings include domain-invariant learning, fairness, and causal learning. The procedure requires just a single ridge regression from $Y$ to kernelized features of $Z$, which can be done in advance. It is then only necessary to enforce independence of $\varphi(X)$ from residuals of this regression, which is possible with attractive estimation properties and consistency guarantees. By contrast, earlier measures of conditional feature dependence require multiple regressions for each step of feature learning, resulting in more severe bias and variance, and greater computational cost. When sufficiently rich features are used, we establish that CIRCE is zero if and only if $\varphi(X) \perp \!\!\! \perp Z \mid Y$. In experiments, we show superior performance to previous methods on challenging benchmarks, including learning conditionally invariant image features.
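A rough numpy sketch of the regularizer structure described above; the specific feature maps and the squared cross-covariance penalty are assumptions of this sketch rather than CIRCE's exact definition.

```python
import numpy as np

def circe_style_penalty(phi_X, feat_Z, feat_Y, lam=1e-2):
    """phi_X: (n, p) learned features of X; feat_Z, feat_Y: (n, q), (n, r)
    fixed (e.g. kernelized) features of Z and Y."""
    n = len(phi_X)
    # Ridge regression from Y-features to Z-features; as the abstract notes,
    # this step does not depend on phi and can be done in advance.
    W = np.linalg.solve(feat_Y.T @ feat_Y + lam * np.eye(feat_Y.shape[1]),
                        feat_Y.T @ feat_Z)
    resid = feat_Z - feat_Y @ W
    # Penalize dependence between phi(X) and the residuals via the squared
    # Frobenius norm of their empirical cross-covariance.
    C = (phi_X - phi_X.mean(0)).T @ (resid - resid.mean(0)) / n
    return (C ** 2).sum()
```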
We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD). We present two distribution-free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.
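A minimal sketch of the quadratic-time unbiased estimator of MMD$^2$, with a Gaussian kernel assumed for illustration:

```python
import numpy as np

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased estimate of MMD^2 between samples X (m, d) and Y (n, d)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    # Off-diagonal sums give the unbiased within-sample terms.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()
```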
We consider the analysis of probability distributions through their associated covariance operators from reproducing kernel Hilbert spaces. We show that the von Neumann entropy and relative entropy of these operators are intimately related to the usual notions of Shannon entropy and relative entropy, and share many of their properties. They come together with efficient estimation algorithms from various oracles on the probability distributions. We also consider product spaces and show that for tensor product kernels, we can define notions of mutual information and joint entropy, which can characterize independence perfectly, but conditional independence only partially. We finally show how these new notions of relative entropy lead to new upper bounds on log partition functions, which can be used together with convex optimization within variational inference methods, providing a new family of probabilistic inference methods.
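To make the operator-entropy connection concrete: the uncentered empirical covariance operator of kernel features shares its nonzero spectrum with the normalized Gram matrix $K/n$, so its von Neumann entropy $-\operatorname{tr}(C \log C)$ can be computed from Gram eigenvalues, as in this minimal sketch (Gaussian kernel assumed, so that the trace is one).

```python
import numpy as np

def von_neumann_entropy(X, gamma=1.0):
    """Von Neumann entropy of the empirical kernel covariance operator."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2) / n          # tr(K/n) = 1 for a Gaussian kernel
    evals = np.linalg.eigvalsh(K)
    evals = evals[evals > 1e-12]         # drop numerical zeros
    return -(evals * np.log(evals)).sum()
```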
We extend two methods originally developed for multidimensional scaling and dimension reduction of multivariate data to the functional setting. We focus on classical scaling and Isomap, prototypical methods that have played important roles in these areas, and showcase their use in the context of functional data analysis. In the process, we highlight the crucial role played by the ambient metric.
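For reference, a minimal sketch of classical scaling, the finite-sample core that the functional extension builds on: embed items from their pairwise ambient-metric distances via double centering and an eigendecomposition.

```python
import numpy as np

def classical_scaling(D, k=2):
    """D: (n, n) pairwise distance matrix; returns (n, k) coordinates."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J        # double-centered squared distances
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:k]  # top-k eigenpairs
    return evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0))
```

For functional data, `D` would hold pairwise distances between curves in the chosen ambient metric, e.g. the $L^2$ distance between discretized functions.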
Interacting particle or agent systems that display a rich variety of swarming behaviours are ubiquitous in science and engineering. A fundamental and challenging goal is to understand the link between individual interaction rules and swarming. In this paper, we study the data-driven discovery of a second-order particle swarming model that describes the evolution of $N$ particles in $\mathbb{R}^d$ under radial interactions. We propose a learning approach that models the latent radial interaction function as Gaussian processes, which can simultaneously fulfill two inference goals: one is the nonparametric inference of the interaction function with pointwise uncertainty quantification, and the other one is the inference of unknown scalar parameters in the non-collective friction forces of the system. We formulate the learning problem as a statistical inverse problem and provide a detailed analysis of recoverability conditions, establishing that a coercivity condition is sufficient for recoverability. Given data collected from $M$ i.i.d. trajectories with independent Gaussian observational noise, we provide a finite-sample analysis, showing that our posterior mean estimator converges in a reproducing kernel Hilbert space norm, at an optimal rate in $M$ equal to that of classical one-dimensional kernel ridge regression. As a byproduct, we show we can obtain a parametric learning rate in $M$ for the posterior marginal variance using the $L^{\infty}$ norm, and the rate could also involve $N$ and $L$ (the number of observation time instances for each trajectory), depending on the condition number of the inverse problem. Numerical results on systems that exhibit different swarming behaviors demonstrate efficient learning of our approach from scarce noisy trajectory data.
Sliced mutual information (SMI) is defined as the average of mutual information (MI) terms between one-dimensional random projections of the random variables. It serves as a surrogate measure of dependence to classical MI that preserves many of its properties but is more scalable to high dimensions. However, a quantitative characterization of how SMI itself, and estimation rates thereof, depend on the ambient dimension, which is crucial to understanding scalability, remains obscure. This work extends the original SMI definition to $k$-SMI, which considers projections onto $k$-dimensional subspaces, and provides a multifaceted account of its dependence on dimension. Using a new result on the continuity of differential entropy in the 2-Wasserstein metric, we derive sharp bounds on the error of Monte Carlo (MC)-based estimates of $k$-SMI, with explicit dependence on $k$ and the ambient dimension, revealing their interplay with the number of samples. We then combine the MC integrator with a neural estimation framework to provide an end-to-end $k$-SMI estimator, for which optimal convergence rates are established. We also explore asymptotics of the population $k$-SMI as dimension grows, providing Gaussian approximation results with a residual that decays under appropriate moment bounds. Our theory is validated with numerical experiments and applied to sliced InfoGAN, which altogether provides a comprehensive quantitative account of the scalability question of $k$-SMI, including SMI as a special case when $k = 1$.
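A minimal sketch of the Monte Carlo structure of a $k$-SMI estimator: average an MI estimate over random $k$-dimensional projections of each variable. A joint-Gaussian MI formula stands in for the paper's neural and MC estimators (an assumption of this sketch).

```python
import numpy as np

def gaussian_mi(U, V):
    """MI under a joint-Gaussian assumption:
    -0.5 * log( det(S) / (det(S_U) det(S_V)) )."""
    k = U.shape[1]
    S = np.cov(np.hstack([U, V]), rowvar=False)
    Su, Sv = S[:k, :k], S[k:, k:]
    return -0.5 * (np.linalg.slogdet(S)[1]
                   - np.linalg.slogdet(Su)[1] - np.linalg.slogdet(Sv)[1])

def k_smi(X, Y, k=2, n_projections=50, rng=None):
    """Monte Carlo k-SMI between samples X (n, dx), Y (n, dy), dx, dy >= k."""
    rng = np.random.default_rng(rng)
    vals = []
    for _ in range(n_projections):
        A, _ = np.linalg.qr(rng.standard_normal((X.shape[1], k)))
        B, _ = np.linalg.qr(rng.standard_normal((Y.shape[1], k)))
        vals.append(gaussian_mi(X @ A, Y @ B))
    return float(np.mean(vals))
```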
Discrepancy measures between probability distributions, often termed statistical distances, are ubiquitous in probability theory, statistics, and machine learning. To combat the curse of dimensionality when estimating these distances from data, recent work has proposed smoothing out local irregularities in the measured distributions via convolution with a Gaussian kernel. Motivated by the scalability of this framework to high dimensions, we investigate the structural and statistical behavior of the Gaussian-smoothed $p$-Wasserstein distance $\mathsf{W}_p^{(\sigma)}$, for arbitrary $p \geq 1$. After establishing basic metric and topological properties of $\mathsf{W}_p^{(\sigma)}$, we explore the asymptotic statistical behavior of $\mathsf{W}_p^{(\sigma)}(\hat{\mu}_n, \mu)$, where $\hat{\mu}_n$ is the empirical distribution of $n$ independent observations from $\mu$. We prove that $\mathsf{W}_p^{(\sigma)}$ enjoys a parametric empirical convergence rate of $n^{-1/2}$, which contrasts with the $n^{-1/d}$ rate of the unsmoothed $\mathsf{W}_p$ when $d \geq 3$. Our proof relies on controlling $\mathsf{W}_p^{(\sigma)}$ by a $p$th-order smooth Sobolev distance $\mathsf{d}_p^{(\sigma)}$ and deriving the limit distribution of $\sqrt{n}\,\mathsf{d}_p^{(\sigma)}(\hat{\mu}_n, \mu)$ for all dimensions $d$. As applications, we provide asymptotic guarantees for two-sample testing and minimum distance estimation using $\mathsf{W}_p^{(\sigma)}$, with experiments for $p = 2$ using $\mathsf{d}_2^{(\sigma)}$.
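A minimal sketch of the smoothing idea: adding independent Gaussian noise to samples yields draws from the smoothed measures, after which an ordinary empirical Wasserstein distance is computed. This is shown in one dimension, where $\mathsf{W}_p$ has a closed quantile form; the one-draw-per-point scheme is a Monte Carlo simplification of this sketch, not the paper's construction.

```python
import numpy as np

def smooth_wasserstein_1d(x, y, sigma=0.5, p=2, rng=None):
    """Approximate Gaussian-smoothed p-Wasserstein distance between two
    equal-size 1D samples x and y."""
    rng = np.random.default_rng(rng)
    xs = x + sigma * rng.standard_normal(x.shape)   # draw from mu * N(0, sigma^2)
    ys = y + sigma * rng.standard_normal(y.shape)   # draw from nu * N(0, sigma^2)
    xs, ys = np.sort(xs), np.sort(ys)               # quantile coupling
    return np.mean(np.abs(xs - ys) ** p) ** (1.0 / p)
```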
Comparing probability distributions is at the crux of many machine learning algorithms. Maximum mean discrepancies (MMD) and optimal transport (OT) distances are two classes of distances between probability measures that have attracted abundant attention in the past years. This paper establishes some conditions under which the Wasserstein distance can be controlled by MMD norms. Our work is motivated by compressive statistical learning (CSL) theory, a general framework for resource-efficient large-scale learning in which the training data is summarized in a single vector (called a sketch) that captures the information relevant to the learning task at hand. Inspired by existing results in CSL, we introduce the H\"older lower restricted isometry property (H\"older LRIP) and show that this property comes with interesting guarantees for compressive statistical learning. Based on the relations between the MMD and the Wasserstein distance, we provide guarantees for compressive statistical learning by introducing and studying the concept of Wasserstein learnability of the learning task, that is, when some task-specific metric between probability distributions can be bounded by a Wasserstein distance.
We consider autocovariance operators of a stationary stochastic process on a Polish space that is embedded into a reproducing kernel Hilbert space. We investigate how empirical estimates of these operators converge along realizations of the process under various conditions. In particular, we examine ergodic and strongly mixing processes and obtain several asymptotic results as well as finite sample error bounds. We provide applications of our theory in terms of consistency results for kernel PCA with dependent data and the conditional mean embedding of transition probabilities. Finally, we use our approach to examine the nonparametric estimation of Markov transition operators and highlight how our theory can give a consistency analysis for a large family of spectral analysis methods including kernel-based dynamic mode decomposition.
Kernel methods have been among the most popular techniques in machine learning, where learning tasks are solved using properties of reproducing kernel Hilbert spaces (RKHS). In this paper, we propose a novel data analysis framework based on the reproducing kernel Hilbert $C^*$-module (RKHM) and a kernel mean embedding (KME) in RKHM. Since RKHM contains richer information than RKHS or vector-valued RKHS (vvRKHS), analysis with RKHM enables us to capture and extract structural properties of data such as functional data. We develop a branch of RKHM theory suitable for data analysis, including a representer theorem and the injectivity and universality of the proposed KME. We also show that RKHM generalizes RKHS and vvRKHS. We then provide concrete procedures for employing RKHM and the proposed KME in data analysis.
The idea of slicing divergences has been proven successful for comparing two probability measures in various machine learning applications, including generative modeling, and consists in computing the expected value of a "base divergence" between one-dimensional random projections of the two measures. However, the topological, statistical, and computational consequences of this technique have not yet been fully established. In this paper, we aim at bridging this gap and derive various theoretical properties of sliced probability divergences. First, we show that slicing preserves the metric axioms and the weak continuity of the divergence, which implies that sliced divergences share similar topological properties. We then make the results more precise in the case where the base divergence belongs to the class of integral probability metrics. On the other hand, we establish that, under mild conditions, the sample complexity of a sliced divergence does not depend on the problem dimension. We finally apply our general results to several base divergences, and illustrate our theory on both synthetic and real data experiments.
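A minimal sketch of the generic slicing construction: average a user-supplied one-dimensional base divergence over random projection directions.

```python
import numpy as np

def sliced_divergence(X, Y, base_div, n_projections=100, rng=None):
    """Monte Carlo sliced divergence between samples X, Y of shape (n, d).

    `base_div` is any function of two 1D samples, e.g. a 1D Wasserstein
    distance or a 1D MMD estimate."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    vals = []
    for _ in range(n_projections):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)   # uniform direction on the sphere
        vals.append(base_div(X @ theta, Y @ theta))
    return float(np.mean(vals))
```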