We study the unsupervised learning of non-invertible observation functions in nonlinear state-space models. Assuming abundant data of the observation process along with the distribution of the state process, we introduce a nonparametric generalized moment method to estimate the observation function via constrained regression. The major challenge comes from the non-invertibility of the observation function and the lack of data pairs between the state and the observation. We address the fundamental issue of identifiability from quadratic loss functionals and show that the function space of identifiability is the closure of an RKHS intrinsic to the state process. Numerical results show that the first two moments and temporal correlations, together with upper and lower bounds, can identify functions ranging from piecewise polynomials to smooth functions, leading to convergent estimators. The limitations of this approach, such as non-identifiability due to symmetry and stationarity, are also discussed.
We provide a complete characterization of the identifiability of the interaction kernel in mean-field equations of interacting particle systems. The key is to identify the function space on which the probabilistic loss functional has a unique minimizer. We consider two data-adaptive $L^2$ spaces, one with the Lebesgue measure and the other with an exploration measure intrinsic to the mean field. For each $L^2$ space, the Fréchet derivative of the loss functional leads to a positive semi-definite integral operator; hence, identifiability holds on the eigenspaces associated with the nonzero eigenvalues of the integral operator, and the function space of identifiability is the $L^2$-closure of the RKHS associated with the integral operator. Moreover, identifiability holds on the full $L^2$ space if and only if the integral operator is strictly positive. Consequently, the inverse problem is ill-posed and requires regularization. In the context of truncated SVD regularization, we demonstrate numerically that the weighted $L^2$ space is preferable to the unweighted $L^2$ space, as it leads to more accurate regularized estimators.
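The truncated-SVD regularization mentioned above can be sketched on a generic ill-posed linear system. The following is a minimal numpy sketch on synthetic data; the operator, its spectrum, and the noise level are illustrative assumptions, not the paper's mean-field setting, and a single Euclidean inner product stands in for the weighted and unweighted $L^2$ structures.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD regularization: invert A only on the span of its
    k largest singular values, discarding directions where a small
    singular value would amplify the noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k, :].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(0)
n = 30
# synthetic compact operator with rapidly decaying spectrum (illustrative)
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
A = U @ np.diag(s) @ V.T
x_true = V @ (2.0 ** -np.arange(n))   # signal concentrated in leading directions
b = A @ x_true + 1e-6 * rng.standard_normal(n)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]  # unregularized: noise blows up
x_tsvd = tsvd_solve(A, b, k=15)                 # regularized estimate
```

Even at noise level $10^{-6}$, the unregularized solve amplifies the noise by the inverse of the smallest singular values, while the truncated solve stays close to the truth; the choice of the truncation level $k$ plays the role of the regularization parameter.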
Kernels are efficient in representing nonlocal dependence, and they are widely used to design operators between function spaces. Thus, learning kernels in operators from data is an inverse problem of general interest. Due to the nonlocal dependence, the inverse problem can be severely ill-posed with a data-dependent singular inversion operator. The Bayesian approach overcomes the ill-posedness through a non-degenerate prior. However, a fixed non-degenerate prior leads to a divergent posterior mean when the observation noise becomes small, if the data induces a perturbation in the eigenspace of zero eigenvalues of the inversion operator. We introduce a data-adaptive prior to achieve a stable posterior whose mean always has a small-noise limit. The data-adaptive prior's covariance is the inversion operator with a hyper-parameter selected adaptively from the data by the L-curve method. Furthermore, we provide a detailed analysis of the computational practice of the data-adaptive prior, and demonstrate it on Toeplitz matrices and integral operators. Numerical tests show that a fixed prior can lead to a divergent posterior mean in the presence of any of four types of errors: discretization error, model error, partial observation, and a wrong noise assumption. In contrast, the data-adaptive prior always attains posterior means with small-noise limits.
The estimation of cumulative distribution functions (CDFs) is an important learning task with a great variety of downstream applications, such as risk assessments in predictions and decision making. In this paper, we study functional regression of contextual CDFs where each data point is sampled from a linear combination of context dependent CDF basis functions. We propose functional ridge-regression-based estimation methods that estimate CDFs accurately everywhere. In particular, given $n$ samples with $d$ basis functions, we show estimation error upper bounds of $\widetilde{O}(\sqrt{d/n})$ for fixed design, random design, and adversarial context cases. We also derive matching information theoretic lower bounds, establishing minimax optimality for CDF functional regression. Furthermore, we remove the burn-in time in the random design setting using an alternative penalized estimator. Then, we consider agnostic settings where there is a mismatch in the data generation process. We characterize the error of the proposed estimators in terms of the mismatched error, and show that the estimators are well-behaved under model mismatch. Finally, to complete our study, we formalize infinite dimensional models where the parameter space is an infinite dimensional Hilbert space, and establish self-normalized estimation error upper bounds for this setting.
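A toy version of regressing an empirical CDF onto a basis of CDFs can be sketched as follows. The two basis CDFs, the mixture weights, and the grid-based ridge solve below are illustrative assumptions for a context-free special case, not the paper's contextual estimator or its theoretical guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
# two basis CDFs on [0, 1]: F1(t) = t (uniform) and F2(t) = t^2 (illustrative)
theta_true = np.array([0.3, 0.7])
n = 50_000
mask = rng.random(n) < theta_true[0]
y = np.where(mask, rng.random(n), np.sqrt(rng.random(n)))  # sqrt(U) has CDF t^2

t = np.linspace(0.0, 1.0, 101)
Phi = np.column_stack([t, t**2])                  # basis CDFs on a grid
F_emp = (y[:, None] <= t[None, :]).mean(axis=0)   # empirical CDF on the grid

lam = 1e-6                                        # small ridge penalty
theta_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(2), Phi.T @ F_emp)
F_hat = Phi @ theta_hat
```

The ridge solve projects the empirical CDF onto the span of the basis, so the fitted CDF is accurate everywhere on the grid at roughly the $\sqrt{d/n}$ rate discussed in the abstract (here with $d=2$).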
We propose a new autoregressive model for the statistical analysis of multivariate distributional time series. The data of interest consist of multiple series of probability measures supported on a bounded interval of the real line and indexed by distinct time instants. The probability measures are modeled as random objects in the Wasserstein space. We establish the autoregressive model in the tangent space at the Lebesgue measure by first centering all the raw measures so that their Fréchet mean becomes the Lebesgue measure. Using the theory of iterated random function systems, results on the existence, uniqueness, and stationarity of the solution of such a model are provided. We also propose a consistent estimator of the model coefficients. In addition to the analysis of simulated data, the proposed model is illustrated with two real data sets, one made of age distributions in different countries and the other of the bike-sharing network in Paris. Finally, due to the positivity and boundedness constraints that we impose on the model coefficients, the proposed estimator learned under these constraints naturally has a sparse structure. This sparsity allows the application of the proposed model to learning a graph of temporal dependency from multivariate distributional time series.
We study tensor-on-tensor regression, where the goal is to connect tensor responses to tensor covariates with a low Tucker-rank parameter tensor/matrix, without prior knowledge of its intrinsic rank. We propose the Riemannian gradient descent (RGD) and Riemannian Gauss-Newton (RGN) methods and cope with the challenge of unknown rank by studying the effect of rank over-parameterization. We provide the first convergence guarantees for general tensor-on-tensor regression by showing that RGD and RGN respectively converge linearly and quadratically to a statistically optimal estimate in both rank correctly-parameterized and over-parameterized settings. Our theory reveals an intriguing phenomenon: Riemannian optimization methods naturally adapt to over-parameterization without modifications to their implementation. We also give the first rigorous evidence for the statistical-computational gap in scalar-on-tensor regression under the low-degree polynomials framework. Our theory demonstrates a ``blessing of statistical-computational gap'' phenomenon: in a wide range of scenarios in tensor-on-tensor regression for tensors of order three or higher, the computationally required sample size matches what is needed under moderate rank over-parameterization when considering computationally feasible estimators, while there are no such benefits in the matrix settings. This shows that moderate rank over-parameterization is essentially ``cost-free'' in terms of sample size in tensor-on-tensor regression of order three or higher. Finally, we conduct simulation studies to show the advantages of our proposed methods and to corroborate our theoretical findings.
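The flavor of first-order low-rank methods with rank over-parameterization can be seen in a stripped-down matrix-sensing sketch: iterative gradient steps followed by SVD truncation at an over-specified rank. The instance below is synthetic, and the algorithm is a simplified relative of RGD (projected gradient with an SVD retraction), not the paper's tensor implementation.

```python
import numpy as np

def svd_truncate(X, r):
    """Retract onto the set of rank-<=r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(1)
d, r_true, r_over, n = 20, 2, 4, 4000
X_star = rng.standard_normal((d, r_true)) @ rng.standard_normal((r_true, d))
A = rng.standard_normal((n, d, d))             # Gaussian measurement matrices
y = np.einsum('nij,ij->n', A, X_star)          # noiseless observations

X = np.zeros((d, d))
for _ in range(300):
    resid = np.einsum('nij,ij->n', A, X) - y
    grad = np.einsum('n,nij->ij', resid, A) / n  # gradient of the least-squares loss
    X = svd_truncate(X - grad, r_over)           # over-parameterized rank r_over > r_true
rel_err = np.linalg.norm(X - X_star) / np.linalg.norm(X_star)
```

Even though the truncation rank is over-specified (4 instead of the true rank 2), the iteration still converges to the rank-2 truth, echoing the adaptation-to-over-parameterization phenomenon described in the abstract.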
We derive minimax testing errors in a distributed framework where the data are split over multiple machines and their communication to a central machine is limited to $b$ bits. We investigate both the $d$-dimensional and infinite-dimensional signal detection problems under Gaussian white noise. We also derive distributed testing algorithms reaching the theoretical lower bounds. Our results show that distributed testing is subject to fundamentally different phenomena that are not observed in distributed estimation. Among our findings, we show that testing protocols that have access to shared randomness can perform strictly better in some regimes than those that do not. We also observe that consistent nonparametric distributed testing is always possible, even with only $1$ bit of communication, and the corresponding test outperforms the best local test using only the information available at a single local machine. Furthermore, we derive adaptive nonparametric distributed testing strategies and the corresponding theoretical lower bounds.
Interacting particle or agent systems that display a rich variety of swarming behaviours are ubiquitous in science and engineering. A fundamental and challenging goal is to understand the link between individual interaction rules and swarming. In this paper, we study the data-driven discovery of a second-order particle swarming model that describes the evolution of $N$ particles in $\mathbb{R}^d$ under radial interactions. We propose a learning approach that models the latent radial interaction function as a Gaussian process, which can simultaneously fulfill two inference goals: one is the nonparametric inference of the interaction function with pointwise uncertainty quantification, and the other is the inference of unknown scalar parameters in the non-collective friction forces of the system. We formulate the learning problem as a statistical inverse problem and provide a detailed analysis of recoverability conditions, establishing that a coercivity condition is sufficient for recoverability. Given data collected from $M$ i.i.d. trajectories with independent Gaussian observational noise, we provide a finite-sample analysis, showing that our posterior mean estimator converges in a reproducing kernel Hilbert space norm at an optimal rate in $M$ equal to the one in classical one-dimensional kernel ridge regression. As a byproduct, we show that we can obtain a parametric learning rate in $M$ for the posterior marginal variance in the $L^{\infty}$ norm, and the rate can also involve $N$ and $L$ (the number of observation time instances for each trajectory), depending on the condition number of the inverse problem. Numerical results on systems that exhibit different swarming behaviors demonstrate efficient learning of our approach from scarce noisy trajectory data.
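The Gaussian-process component of such an approach can be illustrated with generic GP regression on a scalar function. The kernel, bandwidth, noise level, and target function below are illustrative assumptions for a self-contained sketch, not the paper's interaction-kernel posterior.

```python
import numpy as np

def gp_posterior(x_train, y_train, x_test, noise_var=0.01, h=0.5):
    """Posterior mean and pointwise variance of a zero-mean GP with an RBF
    kernel, conditioned on noisy scalar observations (generic sketch)."""
    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :])**2 / (2 * h**2))
    K = k(x_train, x_train) + noise_var * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    # pointwise posterior variance: prior variance k(x,x)=1 minus the reduction
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 3.0, 40)
y_train = np.sin(x_train) + 0.05 * rng.standard_normal(40)  # noisy observations
x_test = np.linspace(0.2, 2.8, 10)
mean, var = gp_posterior(x_train, y_train, x_test)
```

The posterior mean is the kernel ridge regression estimate, and the posterior variance provides the pointwise uncertainty quantification referred to in the abstract.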
In this paper, we propose a new {\it \underline{R}ecursive} {\it \underline{I}mportance} {\it \underline{S}ketching} algorithm for {\it \underline{R}ank}-constrained least squares {\it \underline{O}ptimization} (RISRO). The key step of RISRO is recursive importance sketching, a new sketching framework based on deterministically designed recursive projections, which differs significantly from the randomized sketching in the literature \citep{Mahoney2011randomized, Woodruff2014sketching}. Several existing algorithms in the literature can be reinterpreted under this new sketching framework, and RISRO offers clear advantages over them. RISRO is easy to implement and computationally efficient, where the core procedure in each iteration is to solve a dimension-reduced least squares problem. We establish local quadratic-linear and quadratic rates of convergence for RISRO under some mild conditions. We also discover the connection of RISRO to the Riemannian Gauss-Newton algorithm on fixed-rank matrices. The effectiveness of RISRO is demonstrated in two applications in machine learning and statistics: low-rank matrix trace regression and phase retrieval. Simulation studies demonstrate the superior numerical performance of RISRO.
This paper provides estimation and inference methods for an identified set's boundary (i.e., support function) where the selection among a very large number of covariates is based on modern regularized tools. I characterize the boundary using a semiparametric moment equation. Combining Neyman-orthogonality and sample splitting ideas, I construct a root-N consistent, uniformly asymptotically Gaussian estimator of the boundary and propose a multiplier bootstrap procedure to conduct inference. I apply this result to the partially linear model, the partially linear IV model and the average partial derivative with an interval-valued outcome.
Bayesian neural networks attempt to combine the strong predictive performance of neural networks with formal quantification of the uncertainty associated with the predictive output in the Bayesian framework. However, it remains unclear how to endow the parameters of the network with a prior distribution that is meaningful when lifted into the output space of the network. A possible solution is proposed that enables the user to posit an appropriate Gaussian process covariance function for the task at hand. Our approach constructs a prior distribution for the parameters of the network, called a ridgelet prior, that approximates the posited Gaussian process in the output space of the network. In contrast to existing work on the connection between neural networks and Gaussian processes, our analysis is non-asymptotic and provides finite-sample-size error bounds. This establishes that a Bayesian neural network can approximate any Gaussian process whose covariance function is sufficiently regular. Our experimental assessment is limited to a proof of concept, where we demonstrate that the ridgelet prior can outperform an unstructured prior on regression problems for which a suitable Gaussian process prior can be provided.
We present a novel class of projected methods to perform statistical analysis on a data set of probability distributions on the real line, with the 2-Wasserstein metric. We focus in particular on Principal Component Analysis (PCA) and regression. To define these models, we exploit a representation of the Wasserstein space closely related to its weak Riemannian structure, by mapping the data to a suitable linear space and using a metric projection operator to constrain the results in the Wasserstein space. By carefully choosing the tangent point, we are able to derive fast empirical methods that exploit a constrained B-spline approximation. As a byproduct of our approach, we are also able to derive faster routines for previous approaches to PCA for distributions. By means of simulation studies, we compare our methods with previously proposed ones, showing that our projected PCA has similar performance and that the projected regression is extremely flexible even under misspecification. Several theoretical properties of the models are investigated, and asymptotic consistency is proven. Two real-world applications to Covid-19 mortality in the US and wind speed forecasting are discussed.
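The linearization that makes Wasserstein PCA tractable on the real line can be sketched via quantile functions: for measures on $\mathbb{R}$, the $L^2$ distance between quantile functions equals the 2-Wasserstein distance. The toy sketch below (a location-scale family of uniform distributions, plain PCA, no B-spline basis or monotonicity projection) is a simplified stand-in for the projected construction.

```python
import numpy as np

rng = np.random.default_rng(0)
q = np.linspace(0.01, 0.99, 99)            # common grid of probability levels
# toy data: uniform distributions U(m_i, m_i + s_i), whose quantile
# functions are the monotone lines Q_i(q) = m_i + s_i * q
m = rng.normal(0.0, 1.0, 50)
s = rng.uniform(0.5, 1.5, 50)
Q = m[:, None] + s[:, None] * q[None, :]   # each row: one quantile function

# PCA in the linearized (quantile) representation; the row mean of Q is the
# quantile function of the Fréchet mean of the data set
Qc = Q - Q.mean(axis=0)
_, sv, components = np.linalg.svd(Qc, full_matrices=False)
explained = (sv**2) / (sv**2).sum()
```

Because this toy family is exactly two-dimensional in the quantile representation, the first two principal components capture essentially all of the variance; for general data, PCA scores would additionally need to be projected back to the set of monotone quantile functions, which is the metric projection step of the paper.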
Quantifying the deviation of a probability distribution is challenging when the target distribution is defined by a density with an intractable normalizing constant. The kernel Stein discrepancy (KSD) was proposed to address this problem and has been applied to various tasks including diagnosing approximate MCMC samplers and goodness-of-fit testing for unnormalized statistical models. This article investigates a convergence control property of the diffusion kernel Stein discrepancy (DKSD), an instance of the KSD proposed by Barp et al. (2019). We extend the result of Gorham and Mackey (2017), which showed that the KSD controls the bounded-Lipschitz metric, to functions of polynomial growth. Specifically, we prove that the DKSD controls the integral probability metric defined by a class of pseudo-Lipschitz functions, a polynomial generalization of Lipschitz functions. We also provide practical sufficient conditions on the reproducing kernel for the stated property to hold. In particular, we show that the DKSD detects non-convergence in moments with an appropriate kernel.
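A one-dimensional KSD with a standard normal target can be computed in closed form from the score function and an RBF kernel; the following sketch uses the standard Stein-kernel formula (the bandwidth, sample sizes, and target are illustrative choices, and this is the plain KSD rather than the diffusion variant analyzed in the paper).

```python
import numpy as np

def ksd_gaussian_target(x, h=1.0):
    """V-statistic estimate of the squared kernel Stein discrepancy between
    the sample x and a standard normal target, with RBF kernel
    k(x,y) = exp(-(x-y)^2 / (2 h^2)). The score of N(0,1) is s(x) = -x."""
    sc = -x
    d = x[:, None] - x[None, :]
    k = np.exp(-d**2 / (2 * h**2))
    dkx = -d / h**2 * k                   # dk/dx
    dky = d / h**2 * k                    # dk/dy
    dkxy = (1 / h**2 - d**2 / h**4) * k   # d^2 k / dx dy
    # Stein kernel: u(x,y) = s(x)s(y)k + s(x) dk/dy + s(y) dk/dx + d2k/dxdy
    u = sc[:, None] * sc[None, :] * k + sc[:, None] * dky + sc[None, :] * dkx + dkxy
    return u.mean()

rng = np.random.default_rng(0)
good = rng.standard_normal(500)        # matches the target
bad = rng.standard_normal(500) + 2.0   # wrong mean
ksd_good = ksd_gaussian_target(good)
ksd_bad = ksd_gaussian_target(bad)
```

Since the Stein kernel is positive semi-definite, the V-statistic is nonnegative; it is close to zero for the well-matched sample and clearly bounded away from zero for the shifted one, which is the discrepancy behavior whose convergence-control properties the paper studies.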
We study a double robust Bayesian inference procedure on the average treatment effect (ATE) under unconfoundedness. Our Bayesian approach involves a correction term for prior distributions adjusted by the propensity score. We prove asymptotic equivalence of our Bayesian estimator and efficient frequentist estimators by establishing a new semiparametric Bernstein-von Mises theorem under double robustness; i.e., the lack of smoothness of conditional mean functions can be compensated by high regularity of the propensity score and vice versa. Consequently, the resulting Bayesian point estimator internalizes the bias correction as the frequentist-type doubly robust estimator, and the Bayesian credible sets form confidence intervals with asymptotically exact coverage probability. In simulations, we find that this corrected Bayesian procedure leads to significant bias reduction of point estimation and accurate coverage of confidence intervals, especially when the dimensionality of covariates is large relative to the sample size and the underlying functions become complex. We illustrate our method in an application to the National Supported Work Demonstration.
Comparing probability distributions is at the crux of many machine learning algorithms. Maximum Mean Discrepancies (MMD) and Optimal Transport distances (OT) are two classes of distances between probability measures that have attracted abundant attention in the past years. This paper establishes some conditions under which the Wasserstein distance can be controlled by MMD norms. Our work is motivated by the compressive statistical learning (CSL) theory, a general framework for resource-efficient large-scale learning in which the training data is summarized in a single vector (called a sketch) that captures the information relevant to the learning task at hand. Inspired by existing results in CSL, we introduce the Hölder Lower Restricted Isometric Property (Hölder LRIP) and show that this property comes with interesting guarantees for compressive statistical learning. Based on the relations between the MMD and the Wasserstein distance, we provide guarantees for compressive statistical learning by introducing and studying the concept of Wasserstein learnability of the learning task, that is, when some task-specific metric between probability distributions can be bounded by a Wasserstein distance.
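The MMD side of the comparison is straightforward to estimate from samples. Below is a minimal sketch of the standard unbiased estimator of the squared MMD with a Gaussian RBF kernel on one-dimensional data; the bandwidth and the toy distributions are illustrative choices, and the bound on the Wasserstein distance by MMD norms is the paper's theoretical contribution, not something this snippet demonstrates.

```python
import numpy as np

def mmd2_unbiased(x, y, h=1.0):
    """Unbiased estimator of the squared MMD between samples x and y,
    with Gaussian RBF kernel k(a,b) = exp(-(a-b)^2 / (2 h^2))."""
    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :])**2 / (2 * h**2))
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    n, m = len(x), len(y)
    np.fill_diagonal(kxx, 0.0)   # drop diagonal terms for unbiasedness
    np.fill_diagonal(kyy, 0.0)
    return kxx.sum() / (n * (n - 1)) + kyy.sum() / (m * (m - 1)) - 2 * kxy.mean()

rng = np.random.default_rng(0)
x1 = rng.standard_normal(500)
x2 = rng.standard_normal(500)        # same distribution as x1
y3 = rng.standard_normal(500) + 3.0  # shifted distribution
mmd_same = mmd2_unbiased(x1, x2)
mmd_diff = mmd2_unbiased(x1, y3)
```

The estimator fluctuates around zero for two samples of the same distribution and is clearly positive for the shifted pair; such kernel embeddings are also the building block of the sketches used in compressive statistical learning.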
In non-smooth stochastic optimization, we establish the non-convergence of the stochastic subgradient descent (SGD) to the critical points recently called active strict saddles by Davis and Drusvyatskiy. Such points lie on a manifold $M$ where the function $f$ has a direction of second-order negative curvature. Off this manifold, the norm of the Clarke subdifferential of $f$ is lower-bounded. We require two conditions on $f$. The first assumption is a Verdier stratification condition, which is a refinement of the popular Whitney stratification. It allows us to establish a reinforced version of the projection formula of Bolte \emph{et al.} for Whitney stratifiable functions, which is of independent interest. The second assumption, termed the angle condition, allows us to control the distance of the iterates to $M$. When $f$ is weakly convex, our assumptions are generic. Consequently, generically in the class of definable weakly convex functions, the SGD converges to a local minimizer.
We consider the sparse moment problem of learning a $k$-spike mixture in any dimension from its noisy moment information. We measure the accuracy of the learned mixtures using transportation distance. Previous algorithms either assume certain separation conditions, use more moments for recovery, or run in (super-)exponential time. Our algorithm for the one-dimensional problem (also called the sparse Hausdorff moment problem) is a robust version of the classic Prony's method, and our contribution mainly lies in the analysis. We adopt a global and much tighter analysis than previous work (which analyzes the perturbation of the intermediate results of Prony's method). A useful technical ingredient is a connection between the linear systems defined by Vandermonde matrices and Schur polynomials, which allows us to provide tight perturbation bounds independent of the separation and may be useful in other contexts. To tackle the higher-dimensional problem, we first solve the two-dimensional problem by extending the one-dimensional algorithm and analysis to the complex numbers. Our algorithm for the higher-dimensional case determines the coordinates of each spike by aligning a one-dimensional projection of the mixture onto a random vector with a set of two-dimensional projections of the mixture. Our results have applications to learning topic models and Gaussian mixtures, implying improved sample complexity results or running times over prior work.
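The classic (noiseless) Prony's method that the one-dimensional algorithm robustifies can be sketched in a few lines: build a Hankel system from the moments to find the polynomial annihilating the spike locations, take its roots, then solve a Vandermonde system for the weights. The spikes and weights below are an illustrative example.

```python
import numpy as np

def prony(moments, k):
    """Recover a k-spike measure sum_j w_j * delta_{x_j} on the line from
    its first 2k moments m_t = sum_j w_j x_j^t, t = 0, ..., 2k-1
    (classic noiseless Prony's method)."""
    m = np.asarray(moments, dtype=float)
    # Hankel system for the coefficients of the annihilating polynomial
    # p(x) = x^k + c_{k-1} x^{k-1} + ... + c_0, which vanishes at every x_j
    H = np.array([[m[t + j] for j in range(k)] for t in range(k)])
    c = np.linalg.solve(H, -m[k:2 * k])
    # roots of p are the spike locations
    x = np.sort(np.real(np.roots(np.concatenate(([1.0], c[::-1])))))
    # weights from the Vandermonde system m_t = sum_j w_j x_j^t, t < k
    V = np.vander(x, k, increasing=True).T   # V[t, j] = x_j^t
    w = np.linalg.solve(V, m[:k])
    return x, w

x_true = np.array([-0.5, 0.2, 0.8])
w_true = np.array([0.2, 0.5, 0.3])
moments = [(w_true * x_true**t).sum() for t in range(6)]
x_hat, w_hat = prony(moments, 3)
```

With exact moments the recovery is numerically exact; the paper's contribution is showing how a robust version of this pipeline degrades gracefully under moment noise, without separation assumptions.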
In a mixed generalized linear model, the objective is to learn multiple signals from unlabeled observations: each sample comes from exactly one signal, but it is not known which one. We consider the prototypical problem of estimating two statistically independent signals in a mixed generalized linear model with Gaussian covariates. Spectral methods are a popular class of estimators which output the top two eigenvectors of a suitable data-dependent matrix. However, despite the wide applicability, their design is still obtained via heuristic considerations, and the number of samples $n$ needed to guarantee recovery is super-linear in the signal dimension $d$. In this paper, we develop exact asymptotics on spectral methods in the challenging proportional regime in which $n, d$ grow large and their ratio converges to a finite constant. By doing so, we are able to optimize the design of the spectral method, and combine it with a simple linear estimator, in order to minimize the estimation error. Our characterization exploits a mix of tools from random matrices, free probability and the theory of approximate message passing algorithms. Numerical simulations for mixed linear regression and phase retrieval display the advantage enabled by our analysis over existing designs of spectral methods.
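A basic spectral method for mixed linear regression can be sketched as follows: form a data-dependent matrix with the preprocessing $T(y) = y^2$ (a standard heuristic choice, not the optimized design derived in the paper) and read off its top two eigenvectors. The signals, noise level, and dimensions are illustrative toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20000, 10
beta1 = np.eye(d)[0]   # two orthogonal ground-truth signals (toy choice)
beta2 = np.eye(d)[1]
z = rng.integers(0, 2, n)                        # latent mixture label
X = rng.standard_normal((n, d))                  # Gaussian covariates
B = np.where(z[:, None] == 0, beta1, beta2)
y = np.einsum('nd,nd->n', X, B) + 0.1 * rng.standard_normal(n)

# spectral matrix (1/n) sum_i T(y_i) x_i x_i^T with T(y) = y^2; its
# population version is c*I + beta1 beta1^T + beta2 beta2^T, so the top
# two eigenvectors span the signal subspace
D = (X * (y**2)[:, None]).T @ X / n
eigvals, eigvecs = np.linalg.eigh(D)
U = eigvecs[:, -2:]                              # top-two eigenvectors
```

The top-two eigenspace aligns with the span of the two signals; optimizing the preprocessing function and combining the result with a linear estimator, as the paper does, further reduces the estimation error in the proportional regime.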
Koopman operators are infinite-dimensional operators that globally linearize nonlinear dynamical systems, making their spectral information valuable for understanding dynamics. However, Koopman operators can have continuous spectra and infinite-dimensional invariant subspaces, so that computing their spectral information is a considerable challenge. This paper describes data-driven algorithms with rigorous convergence guarantees for computing spectral information of Koopman operators from trajectory data. We introduce residual dynamic mode decomposition (ResDMD), which provides the first scheme for computing the spectra and pseudospectra of general Koopman operators without spectral pollution. Using the resolvent operator and ResDMD, we also compute smoothed approximations of spectral measures associated with measure-preserving dynamical systems. We prove explicit convergence theorems for our algorithms, which can achieve high-order convergence even for chaotic systems when computing the density of the continuous spectrum and the discrete spectrum. We demonstrate our algorithms on the tent map, the Gauss iterated map, the nonlinear pendulum, the double pendulum, the Lorenz system, and an $11$-dimensional extended Lorenz system. Finally, we provide kernelized variants of our algorithms for dynamical systems with a high-dimensional state space. This allows us to compute the spectral measure associated with the dynamics of a protein molecule with a 20,046-dimensional state space, and to compute nonlinear Koopman modes with error bounds for turbulent flow past aerofoils with Reynolds number $>10^5$ and a 295,122-dimensional state space.
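The core objects, the EDMD matrix and the ResDMD-style residual of a candidate eigenpair, can be sketched on a linear toy system where the dictionary of observables is simply the state coordinates. The system and the noise-free snapshot data are illustrative assumptions; for real nonlinear systems a richer dictionary or the paper's kernelized variant would be needed.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, -0.2], [0.2, 0.9]])  # toy linear dynamics x' = A x
M = 500
X = rng.standard_normal((M, 2))               # snapshots
Y = X @ A_true.T                              # one-step advanced snapshots

# EDMD with the coordinate dictionary psi(x) = x: fit Psi' ~ Psi K
G = X.T @ X / M                               # Gram matrix
Amat = X.T @ Y / M                            # cross matrix
L = Y.T @ Y / M                               # second Gram matrix, used by ResDMD
K = np.linalg.solve(G, Amat)
eigvals, eigvecs = np.linalg.eig(K)

def resdmd_residual(lam, g):
    """ResDMD-style residual ||Psi' g - lam * Psi g|| / ||Psi g|| of a
    candidate eigenpair, computable from the Gram matrices alone."""
    num = g.conj() @ (L - lam * Amat.conj().T - np.conj(lam) * Amat
                      + abs(lam)**2 * G) @ g
    den = g.conj() @ G @ g
    return np.sqrt(abs(num / den))

residuals = [resdmd_residual(eigvals[i], eigvecs[:, i]) for i in range(2)]
```

On this exactly linear system the recovered eigenvalues are $0.9 \pm 0.2i$ and the residuals vanish to numerical precision; on genuinely nonlinear data, large residuals are exactly how ResDMD flags spurious (spectrally polluted) eigenvalues.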
We study stochastic approximation procedures for approximately solving a $d$-dimensional linear fixed-point equation based on observing a trajectory of length $n$ from an ergodic Markov chain. We first exhibit a non-asymptotic bound of the order $t_{\mathrm{mix}} \tfrac{d}{n}$ on the squared error of the last iterate of a standard scheme, where $t_{\mathrm{mix}}$ is a mixing time. We then prove a non-asymptotic instance-dependent bound on a suitably averaged sequence of iterates, with a leading term that matches the local asymptotic minimax limit, including sharp dependence on the parameters $(d, t_{\mathrm{mix}})$ in the higher-order terms. We complement these upper bounds with a non-asymptotic minimax lower bound that establishes the instance-optimality of the averaged SA estimator. We derive corollaries of these results for policy evaluation with Markov noise (covering the TD($\lambda$) family of algorithms for all $\lambda \in [0,1)$) and for linear autoregressive models. Our instance-dependent characterizations open the door to the design of fine-grained model selection procedures for hyperparameter tuning (e.g., choosing the value of $\lambda$ when running the TD($\lambda$) algorithm).
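The averaging phenomenon behind these bounds can be sketched for a generic linear fixed-point problem with i.i.d. (rather than Markov) noise; the matrices, step size, and noise scale below are illustrative assumptions, not the paper's setting or rates.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eta = 5, 20000, 0.1
A_bar = np.diag(np.linspace(0.5, 1.5, d))   # target: solve A_bar @ theta = b_bar
theta_star = np.arange(1.0, d + 1.0)
b_bar = A_bar @ theta_star

theta = np.zeros(d)
theta_sum = np.zeros(d)
for t in range(n):
    A_t = A_bar + 0.1 * rng.standard_normal((d, d))  # noisy observation of A_bar
    b_t = b_bar + 0.1 * rng.standard_normal(d)       # noisy observation of b_bar
    theta = theta - eta * (A_t @ theta - b_t)        # stochastic approximation step
    if t >= n // 2:                                   # Polyak-Ruppert tail averaging
        theta_sum += theta
theta_avg = theta_sum / (n - n // 2)
```

The last iterate keeps fluctuating at a scale set by the constant step size, while the tail average smooths those fluctuations and converges to the fixed point; the paper quantifies exactly this effect, with sharp dependence on the dimension and the mixing time of the underlying Markov data.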