Interacting particle or agent systems that display a rich variety of swarming behaviours are ubiquitous in science and engineering. A fundamental and challenging goal is to understand the link between individual interaction rules and swarming. In this paper, we study the data-driven discovery of a second-order particle swarming model that describes the evolution of $N$ particles in $\mathbb{R}^d$ under radial interactions. We propose a learning approach that models the latent radial interaction function as a Gaussian process, which simultaneously fulfills two inference goals: nonparametric inference of the interaction function with pointwise uncertainty quantification, and inference of the unknown scalar parameters in the non-collective friction forces of the system. We formulate the learning problem as a statistical inverse problem and provide a detailed analysis of recoverability conditions, establishing that a coercivity condition is sufficient for recoverability. Given data collected from $M$ i.i.d. trajectories with independent Gaussian observational noise, we provide a finite-sample analysis showing that our posterior mean estimator converges in a reproducing kernel Hilbert space norm at an optimal rate in $M$, equal to that of classical one-dimensional kernel ridge regression. As a byproduct, we show that the posterior marginal variance attains a parametric learning rate in $M$ in the $L^{\infty}$ norm, and that the rate may also involve $N$ and $L$ (the number of observation time instances per trajectory), depending on the condition number of the inverse problem. Numerical results on systems exhibiting different swarming behaviors demonstrate that our approach learns efficiently from scarce, noisy trajectory data.
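To make the first inference goal concrete, here is a minimal NumPy sketch of GP regression producing a posterior mean and pointwise variance for a radial function. The function `phi_true`, the bandwidth, and the synthetic noisy evaluations are illustrative stand-ins; the paper's actual estimator conditions on trajectory data through the particle dynamics rather than on direct evaluations of the interaction function.

```python
import numpy as np

def rbf(a, b, ell=0.3, sig=1.0):
    """Squared-exponential covariance between two sets of 1D inputs."""
    d = a[:, None] - b[None, :]
    return sig**2 * np.exp(-0.5 * (d / ell)**2)

rng = np.random.default_rng(0)
phi_true = lambda r: np.exp(-r) - 0.5 * np.exp(-2 * r)   # hypothetical radial kernel

# Synthetic noisy evaluations standing in for trajectory-derived information.
r_obs = rng.uniform(0.1, 4.0, size=30)
noise = 0.05
y = phi_true(r_obs) + noise * rng.standard_normal(30)

# GP posterior mean and pointwise variance on a test grid.
r_test = np.linspace(0.0, 4.0, 200)
K = rbf(r_obs, r_obs) + noise**2 * np.eye(30)
K_star = rbf(r_test, r_obs)
post_mean = K_star @ np.linalg.solve(K, y)               # kernel-ridge-type estimate
post_var = rbf(r_test, r_test).diagonal() - np.einsum(
    'ij,ji->i', K_star, np.linalg.solve(K, K_star.T))    # pointwise uncertainty
```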
This paper studies the learning of linear operators between infinite-dimensional Hilbert spaces. The training data consist of pairs of random input vectors in a Hilbert space and their noisy images under an unknown self-adjoint linear operator. Assuming that the operator is diagonalizable in a known basis, this work solves the equivalent inverse problem of estimating the operator's eigenvalues given the data. Adopting a Bayesian approach, the theoretical analysis establishes posterior contraction rates in the infinite-data limit with Gaussian priors that are not directly linked to the forward map of the inverse problem. The main results also include learning-theoretic generalization error guarantees for a wide range of distribution shifts. These convergence rates quantify the effects of data smoothness and of true eigenvalue decay or growth, for compact or unbounded operators respectively, on sample complexity. Numerical evidence supports the theory in both the diagonal and non-diagonal settings.
Recent years have witnessed growing interest in adopting flexible machine learning models for instrumental variable (IV) regression, but the development of uncertainty quantification methodology is still lacking. In this work, we propose a novel quasi-Bayesian procedure for IV regression, building on the recently developed kernelized IV models and the dual/minimax formulation of IV regression. We analyze the frequentist behavior of the proposed method by establishing minimax optimal contraction rates in $L_2$ and Sobolev norms, and discussing the frequentist validity of credible balls. We further derive a scalable inference algorithm that can be extended to work with wide neural network models. Empirical evaluation shows that our method produces informative uncertainty estimates on complex high-dimensional problems.
Kernels are efficient in representing nonlocal dependence, and they are widely used to design operators between function spaces. Thus, learning kernels in operators from data is an inverse problem of general interest. Due to the nonlocal dependence, the inverse problem can be severely ill-posed with a data-dependent singular inversion operator. The Bayesian approach overcomes the ill-posedness through a non-degenerate prior. However, a fixed non-degenerate prior leads to a divergent posterior mean when the observation noise becomes small, if the data induce a perturbation in the eigenspace of zero eigenvalues of the inversion operator. We introduce a data-adaptive prior to achieve a stable posterior whose mean always has a small-noise limit. The data-adaptive prior's covariance is the inversion operator, with a hyper-parameter selected adaptively to the data by the L-curve method. Furthermore, we provide a detailed analysis of the computational practice of the data-adaptive prior, and demonstrate it on Toeplitz matrices and integral operators. Numerical tests show that a fixed prior can lead to a divergent posterior mean in the presence of any of four types of error: discretization error, model error, partial observation, and a wrong noise assumption. In contrast, the data-adaptive prior always attains posterior means with small-noise limits.
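As a rough illustration of the L-curve mechanics on a Toeplitz example, the following sketch sweeps the hyper-parameter of a standard identity-covariance Tikhonov posterior mean and picks the corner of the log-log residual-versus-solution-norm curve. The paper's data-adaptive prior uses the inversion operator itself as covariance, so this is a simplified stand-in, with problem sizes and constants chosen arbitrarily.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
n = 50
# Ill-conditioned Toeplitz forward map (discrete convolution with a smooth kernel).
col = np.exp(-0.5 * (np.arange(n) / 3.0)**2)
A = toeplitz(col)
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = A @ x_true + 0.01 * rng.standard_normal(n)

# Tikhonov posterior means over a grid of hyper-parameters lambda.
lams = np.logspace(-8, 0, 60)
res_norm, sol_norm = [], []
for lam in lams:
    x_lam = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
    res_norm.append(np.log(np.linalg.norm(A @ x_lam - y)))
    sol_norm.append(np.log(np.linalg.norm(x_lam)))

# L-curve: pick lambda at the corner (maximum curvature of the log-log curve).
r, s = np.array(res_norm), np.array(sol_norm)
dr, ds = np.gradient(r), np.gradient(s)
d2r, d2s = np.gradient(dr), np.gradient(ds)
curvature = (dr * d2s - ds * d2r) / (dr**2 + ds**2)**1.5
lam_star = lams[np.argmax(curvature)]
```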
Classical asymptotic theory for statistical inference usually involves calibrating a statistic by fixing the dimension $d$ while letting the sample size $n$ increase to infinity. Recently, much effort has been dedicated towards understanding how these methods behave in high-dimensional settings, where $d$ and $n$ both increase to infinity together. This often leads to different inference procedures, depending on the assumptions about the dimensionality, leaving the practitioner in a bind: given a dataset with 100 samples in 20 dimensions, should they calibrate by assuming $n \gg d$, or $d/n \approx 0.2$? This paper considers the goal of dimension-agnostic inference: developing methods whose validity does not depend on any assumption about $d$ versus $n$. We introduce an approach that uses variational representations of existing test statistics along with sample splitting and self-normalization to produce a new test statistic with a Gaussian limiting distribution, regardless of how $d$ scales with $n$. The resulting statistic can be viewed as a careful modification of degenerate U-statistics, dropping diagonal blocks and retaining off-diagonal blocks. We exemplify our technique for some classical problems including one-sample mean and covariance testing, and show that our tests have minimax rate-optimal power against appropriate local alternatives. In most settings, our cross U-statistic matches the high-dimensional power of the corresponding (degenerate) U-statistic up to a $\sqrt{2}$ factor.
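A minimal sketch of the cross statistic for one-sample mean testing, under the assumption that it suffices to split the sample, project the first half onto the mean of the second half, and studentize; the permutation seed and the one-sided convention are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def cross_mean_test(X, seed=0):
    """Dimension-agnostic one-sample mean test via a sample-split cross statistic.

    Splits X into halves, forms h_i = <X_i, mean of second half> for i in the
    first half, and studentizes; under H0: E[X] = 0 the statistic is
    asymptotically N(0, 1) regardless of how d scales with n.
    """
    rng = np.random.default_rng(seed)
    X = X[rng.permutation(len(X))]
    n1 = len(X) // 2
    X1, X2 = X[:n1], X[n1:]
    h = X1 @ X2.mean(axis=0)              # off-diagonal-block projections
    stat = np.sqrt(n1) * h.mean() / h.std(ddof=1)
    return stat, 1 - norm.cdf(stat)       # one-sided p-value

rng = np.random.default_rng(42)
X_null = rng.standard_normal((100, 20))   # n = 100, d = 20, H0 true
X_alt = X_null + 0.5 / np.sqrt(20)        # small shift in every coordinate
print(cross_mean_test(X_null), cross_mean_test(X_alt))
```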
In this paper, we consider coefficient-based regularized distribution regression, which aims to regress from probability measures to real-valued responses over a reproducing kernel Hilbert space (RKHS), where the regularization is put on the coefficients and the kernels are assumed to be indefinite. The algorithm involves two stages of sampling: the first-stage sample consists of distributions, and the second-stage sample is obtained from these distributions. The asymptotic behavior of the algorithm across different regularity ranges of the regression function is comprehensively studied, and learning rates are derived via integral operator techniques. We obtain optimal rates under some mild conditions, which match the minimax optimal rates of one-stage sampling. Compared with kernel methods for distribution regression in the literature, the algorithm under consideration does not require the kernel to be symmetric and positive semi-definite, and hence provides a simple paradigm for designing indefinite kernel methods, enriching the theme of distribution regression. To the best of our knowledge, this is the first result for distribution regression with indefinite kernels, and our algorithm can improve on the saturation effect.
Many problems in causal inference and economics can be formulated in the framework of conditional moment models, which characterize the target function through a collection of conditional moment restrictions. For nonparametric conditional moment models, efficient estimation often relies on preimposed conditions on various measures of ill-posedness of the hypothesis space, which are hard to validate when flexible models are used. In this work, we address this issue by proposing a procedure that automatically learns representations with controlled measures of ill-posedness. Our method approximates a linear representation defined by the spectral decomposition of a conditional expectation operator, which can be used for kernelized estimators and is known to facilitate minimax optimal estimation in certain settings. We show this representation can be efficiently estimated from data, and establish $L_2$ consistency for the resulting estimator. We evaluate the proposed method on proximal causal inference tasks, exhibiting promising performance on high-dimensional, semi-synthetic data.
We address the problem of causal effect estimation in the presence of unobserved confounding, but where proxies for the latent confounder(s) are observed. In this setting, we propose two kernel-based methods for nonlinear causal effect estimation: (a) a two-stage regression approach, and (b) a maximum moment restriction approach. We focus on the proximal causal learning setting, but our methods can be used to solve a wider class of inverse problems characterized by a Fredholm integral equation. In particular, we provide a unifying view of two-stage and moment restriction approaches for solving this problem in a nonlinear setting. We provide consistency guarantees for each algorithm, and demonstrate that these approaches achieve competitive results on synthetic data and on data simulating a real-world task. In particular, our approach outperforms earlier methods that are not suited to leveraging proxy variables.
Kernel Stein discrepancy (KSD) is a widely used kernel-based nonparametric measure of discrepancy between probability measures. It is often employed in the scenario where a user has a collection of samples from a candidate probability measure and wishes to compare them against a specified target probability measure. A useful property of KSD is that it may be calculated from samples of the candidate measure alone, without knowledge of the normalising constant of the target measure. KSD has been employed in a range of settings including goodness-of-fit testing, parametric inference, MCMC output assessment, and generative modelling. Two main issues with current KSD methodology are (i) the lack of applicability beyond the finite-dimensional Euclidean setting, and (ii) a lack of clarity about what influences KSD performance. This paper provides a novel spectral representation of KSD that remedies both, making KSD applicable to Hilbert-space-valued data and revealing the impact of kernel and Stein operator choice on the KSD. We demonstrate the efficacy of the proposed methodology by performing goodness-of-fit tests for various Gaussian and non-Gaussian functional models in a number of synthetic data experiments.
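Before the Hilbert-space extension, the standard finite-dimensional KSD is already easy to state. The sketch below is a hedged NumPy implementation of the U-statistic estimate with an RBF kernel and the Langevin Stein operator, for a target whose score is known (here a standard Gaussian); the bandwidth and sample sizes are illustrative.

```python
import numpy as np

def ksd_rbf(X, score, h=1.0):
    """U-statistic estimate of squared KSD with kernel k(x,y)=exp(-||x-y||^2/(2h^2))
    and the Langevin Stein operator; `score` maps points to grad log p."""
    n, d = X.shape
    S = score(X)                                   # (n, d) score evaluations
    diff = X[:, None, :] - X[None, :, :]           # (n, n, d) pairwise differences
    sq = (diff**2).sum(-1)
    K = np.exp(-0.5 * sq / h**2)
    # Stein kernel u_p(x_i, x_j), assembled term by term (all scaled by K):
    term1 = S @ S.T                                # s(x_i) . s(x_j)
    term2 = (S[:, None, :] * diff).sum(-1) / h**2  # s(x_i) . grad_y k(x_i,x_j) / k
    term3 = -(S[None, :, :] * diff).sum(-1) / h**2 # s(x_j) . grad_x k(x_i,x_j) / k
    term4 = d / h**2 - sq / h**4                   # trace of the mixed Hessian / k
    U = K * (term1 + term2 + term3 + term4)
    return (U.sum() - np.trace(U)) / (n * (n - 1)) # drop diagonal: U-statistic

rng = np.random.default_rng(0)
score = lambda X: -X                               # target N(0, I): grad log p(x) = -x
print(ksd_rbf(rng.standard_normal((200, 2)), score))        # small: samples match p
print(ksd_rbf(rng.standard_normal((200, 2)) + 1.0, score))  # larger: shifted samples
```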
We consider autocovariance operators of a stationary stochastic process on a Polish space that is embedded into a reproducing kernel Hilbert space. We investigate how empirical estimates of these operators converge along realizations of the process under various conditions. In particular, we examine ergodic and strongly mixing processes and obtain several asymptotic results as well as finite sample error bounds. We provide applications of our theory in terms of consistency results for kernel PCA with dependent data and the conditional mean embedding of transition probabilities. Finally, we use our approach to examine the nonparametric estimation of Markov transition operators and highlight how our theory can give a consistency analysis for a large family of spectral analysis methods including kernel-based dynamic mode decomposition.
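One concrete consequence of the RKHS embedding is that Hilbert-Schmidt norms of empirical autocovariance operators reduce to Gram-matrix computations. The sketch below illustrates this for the lag-1 uncentered autocovariance operator of an AR(1) process; the kernel, bandwidth, and lag are chosen for illustration.

```python
import numpy as np

def rbf_gram(X, Y, ell=1.0):
    """Gram matrix of an RBF kernel between two sets of points."""
    d2 = ((X[:, None] - Y[None, :])**2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# AR(1) process on R, embedded via an RBF kernel feature map phi.
rng = np.random.default_rng(0)
n, tau = 1000, 1
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
X = x[:, None]

# Empirical (uncentered) lag-tau autocovariance operator
#   C_tau = (1/m) sum_t phi(x_{t+tau}) (x) phi(x_t),  m = n - tau.
# Its squared Hilbert-Schmidt norm is computable purely from Gram matrices:
#   ||C_tau||_HS^2 = (1/m^2) sum_{s,t} k(x_{s+tau}, x_{t+tau}) k(x_s, x_t).
m = n - tau
K0 = rbf_gram(X[:m], X[:m])          # k(x_s, x_t)
K1 = rbf_gram(X[tau:], X[tau:])      # k(x_{s+tau}, x_{t+tau})
hs_norm_sq = (K0 * K1).sum() / m**2
print(hs_norm_sq)
```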
The development of data-informed predictive models for dynamical systems is of widespread interest in many disciplines. We present a unifying framework for blending mechanistic and machine-learning approaches to identify dynamical systems from noisily and partially observed data. We compare pure data-driven learning with hybrid models which incorporate imperfect domain knowledge. Our formulation is agnostic to the chosen machine learning model, is presented in both continuous- and discrete-time settings, and is compatible both with model errors that exhibit substantial memory and with those that are memoryless. First, we study memoryless linear (with respect to parametric dependence) model error from a learning-theory perspective, defining excess risk and generalization error. For ergodic continuous-time systems, we prove that both excess risk and generalization error are bounded above by terms that diminish with the square root of $T$, the time interval over which training data are specified. Second, we study scenarios that benefit from modeling with memory, proving universal approximation theorems for two classes of continuous-time recurrent neural networks (RNNs): both can learn memory-dependent model error. In addition, we connect one class of RNNs to reservoir computing, thereby relating learning of memory-dependent error to recent work on supervised learning between Banach spaces using random features. Numerical results are given (Lorenz '63, Lorenz '96 multiscale systems) to compare purely data-driven and hybrid approaches, finding that hybrid methods are less data-hungry and more parametrically efficient. Finally, we demonstrate numerically how data assimilation can be leveraged to learn hidden dynamics from noisy, partially observed data, and illustrate the challenges of representing memory by this approach, and of training such models.
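A toy version of the hybrid idea, under the assumption that exact time derivatives are available: simulate Lorenz '63, posit an imperfect mechanistic model that omits the $-\beta z$ term, and fit the residual model error by ridge regression on a small polynomial dictionary standing in for the machine-learning component (the memoryless setting; RNNs would replace the dictionary when memory matters). All names and sizes here are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz63(t, u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def f_imperfect(U, sigma=10.0, rho=28.0):
    # Hypothetical imperfect mechanistic model: the -beta*z damping is missing.
    x, y, z = U.T
    return np.stack([sigma * (y - x), x * (rho - z) - y, x * y], axis=-1)

# Training data: a trajectory and (here, exactly known) time derivatives.
sol = solve_ivp(lorenz63, (0, 20), [1.0, 1.0, 1.0], max_step=0.01, dense_output=True)
t = np.linspace(1, 20, 2000)
U = sol.sol(t).T
dU = np.array([lorenz63(0, u) for u in U])

# Hybrid model dx/dt = f_imperfect(x) + g(x): fit g by ridge on a dictionary.
x, y, z = U.T
Phi = np.column_stack([x, y, z, x * y, x * z, y * z, x * x, y * y, z * z])
residual = dU - f_imperfect(U)                  # model error to be learned
coef = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(9), Phi.T @ residual)
print(np.abs(Phi @ coef - residual).max())      # training error of the correction
```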
Stochastic kriging has been widely employed for simulation metamodeling to predict the response surface of complex simulation models. However, its use is limited to cases where the design space is low-dimensional, because in general the sample complexity (i.e., the number of design points required for stochastic kriging to produce an accurate prediction) grows exponentially in the dimensionality of the design space. The large sample size results in both a prohibitive sample cost for running the simulation model and a severe computational challenge due to the need to invert large covariance matrices. Based on tensor Markov kernels and sparse grid experimental designs, we develop a novel methodology that dramatically alleviates the curse of dimensionality. We show that the sample complexity of the proposed methodology grows only slightly in the dimensionality, even under model misspecification. We also develop fast algorithms that compute stochastic kriging in its exact form without any approximation schemes. We demonstrate via extensive numerical experiments that our methodology can handle problems with design spaces of more than 10,000 dimensions, improving both prediction accuracy and computational efficiency by orders of magnitude relative to typical alternative methods in practice.
We study generic inference on identified linear functionals of nonunique nuisances, defined as solutions to underidentified conditional moment restrictions. This problem appears in a variety of applications, including nonparametric instrumental variable models, proximal causal inference under unmeasured confounding, and missing-not-at-random data with shadow variables. Although the linear functionals of interest, such as average treatment effects, are identifiable under suitable conditions, the nonuniqueness of nuisances poses serious challenges for statistical inference, since in this setting common nuisance estimators can be unstable and lack fixed limits. In this paper, we propose penalized minimax estimators for the nuisance functions and show that they enable valid inference in this challenging setting. The proposed nuisance estimators can accommodate flexible function classes, and importantly they converge to fixed limits determined by the penalization, regardless of whether the nuisances are unique or not. We use the penalized nuisance estimators to form a debiased estimator for the linear functional of interest and prove its asymptotic normality under generic high-level conditions, which provides asymptotically valid confidence intervals.
A key assumption in the theory of nonlinear adaptive control is that the uncertainty of the system can be expressed in the linear span of a set of known basis functions. While this assumption leads to efficient algorithms, it limits applications to very specific classes of systems. We introduce a novel nonparametric adaptive algorithm that learns an infinite-dimensional density over parameters to cancel an unknown disturbance in a reproducing kernel Hilbert space. Surprisingly, the resulting control input admits an analytical expression that enables its implementation despite its underlying infinite-dimensional structure. While this adaptive input is rich and expressive (subsuming, for example, traditional linear parameterizations), its computational complexity grows linearly with time, making it comparatively more expensive than its parametric counterparts. Leveraging the theory of random Fourier features, we provide an efficient randomized implementation that recovers the complexity of classical parametric methods while provably retaining the expressivity of the nonparametric input. In particular, our explicit bounds depend only polynomially on the underlying parameters of the system, allowing our proposed algorithms to scale efficiently to high-dimensional systems. As an illustration of the method, we demonstrate the ability of the randomized approximation algorithm to learn a predictive model of a 60-dimensional system consisting of ten point masses interacting through Newtonian gravitation.
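The randomized implementation rests on the classical random Fourier feature construction. As a hedged sketch, the following shows finite-dimensional features whose inner products approximate an RBF kernel uniformly at rate $O(1/\sqrt{D})$, which is the mechanism that restores parametric-style complexity; the dimensions and bandwidth are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 5, 2000, 1.0          # input dim, number of features, kernel bandwidth

# Random Fourier features for k(x, y) = exp(-||x - y||^2 / (2 sigma^2)):
# with w ~ N(0, I / sigma^2) and b ~ Unif[0, 2 pi], E[phi(x) . phi(y)] = k(x, y).
W = rng.standard_normal((d, D)) / sigma
b = rng.uniform(0.0, 2.0 * np.pi, D)
phi = lambda X: np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = rng.standard_normal((100, d))
K_exact = np.exp(-0.5 * ((X[:, None] - X[None, :])**2).sum(-1) / sigma**2)
K_approx = phi(X) @ phi(X).T
print(np.abs(K_exact - K_approx).max())   # uniform error decays like O(1/sqrt(D))
```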
Linear partial differential equations (PDEs) are an important, widely applied class of mechanistic models, describing physical processes such as heat transfer, electromagnetism, and wave propagation. In practice, specialized numerical methods based on discretization are used to solve PDEs. They generally use an estimate of the unknown model parameters and, if available, physical measurements for initialization. Such solvers are often embedded into larger scientific models or analyses with a downstream application such that error quantification plays a key role. However, by entirely ignoring parameter and measurement uncertainty, classical PDE solvers may fail to produce consistent estimates of their inherent approximation error. In this work, we approach this problem in a principled fashion by interpreting solving linear PDEs as physics-informed Gaussian process (GP) regression. Our framework is based on a key generalization of a widely-applied theorem for conditioning GPs on a finite number of direct observations to observations made via an arbitrary bounded linear operator. Crucially, this probabilistic viewpoint allows us to (1) quantify the inherent discretization error; (2) propagate uncertainty about the model parameters to the solution; and (3) condition on noisy measurements. Demonstrating the strength of this formulation, we prove that it strictly generalizes methods of weighted residuals, a central class of PDE solvers including collocation, finite volume, pseudospectral, and (generalized) Galerkin methods such as finite element and spectral methods. This class can thus be directly equipped with a structured error estimate and the capability to incorporate uncertain model parameters and observations. In summary, our results enable the seamless integration of mechanistic models as modular building blocks into probabilistic models.
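The key generalization, conditioning a GP on observations made via a bounded linear operator, has a transparent finite-dimensional analogue. In the sketch below the operator is a simple local-averaging matrix standing in for, e.g., PDE residuals at collocation points, and the affine Gaussian conditioning formulas are the exact ones the framework generalizes; the grid, kernel, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(0, 1, n)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.1)**2)   # prior covariance of u

# Observations made via a bounded linear operator L (here: local averages,
# a stand-in for e.g. PDE residuals at collocation points), y = L u + noise.
m = 10
L = np.zeros((m, n))
for i in range(m):
    L[i, i * (n // m):(i + 1) * (n // m)] = 1.0 / (n // m)
u_true = np.sin(2 * np.pi * x)
noise = 1e-3
y = L @ u_true + noise * rng.standard_normal(m)

# Affine Gaussian conditioning: posterior over u given y = L u + eps.
G = L @ K @ L.T + noise**2 * np.eye(m)
post_mean = K @ L.T @ np.linalg.solve(G, y)
post_cov = K - K @ L.T @ np.linalg.solve(G, L @ K)
```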
We study a class of dynamical systems modelled as Markov chains that admit an invariant distribution via the corresponding transfer, or Koopman, operator. While data-driven algorithms to reconstruct such operators are well known, their relationship with statistical learning is largely unexplored. We formalize a framework to learn the Koopman operator from finite data trajectories of the dynamical system. We consider the restriction of this operator to a reproducing kernel Hilbert space and introduce a notion of risk, from which different estimators naturally arise. We link the risk with the estimation of the spectral decomposition of the Koopman operator. These observations motivate a reduced-rank operator regression (RRR) estimator. We derive learning bounds for the proposed estimator, holding in both the i.i.d. and non-i.i.d. settings, the latter in terms of mixing coefficients. Our results suggest RRR may be advantageous over other widely used estimators, as confirmed in numerical experiments for both forecasting and mode decomposition.
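For intuition, here is a linear-feature sketch of the reduced-rank regression estimator on snapshot pairs; the paper's RRR estimator acts on an RKHS and its spectral analysis is more delicate, so the feature map, ridge term, and toy low-rank dynamics below are all illustrative.

```python
import numpy as np

def rrr(X, Y, rank, reg=1e-6):
    """Reduced-rank regression: argmin ||Y - X B||_F^2 s.t. rank(B) <= rank.

    Computed by projecting the (ridge-regularized) OLS fit onto the leading
    principal directions of the fitted values X @ B_ols."""
    B_ols = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]          # projector onto top-`rank` directions
    return B_ols @ P

# Snapshot pairs (x_t, x_{t+1}) from a noisy linear system with low-rank dynamics.
rng = np.random.default_rng(0)
d, n, r = 10, 500, 3
A = (rng.standard_normal((d, r)) @ rng.standard_normal((r, d))) / d
X = rng.standard_normal((n, d))
Y = X @ A.T + 0.01 * rng.standard_normal((n, d))
B = rrr(X, Y, rank=r)                    # estimate of the transfer matrix A^T
print(np.linalg.norm(B - A.T))
```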
This survey aims to provide an introduction to linear models and the theories behind them. Our goal is to give a rigorous introduction to readers with prior exposure to ordinary least squares. In machine learning, the output is usually a nonlinear function of the input. Deep learning even aims to find nonlinear dependencies with many layers, which requires a large amount of computation. However, most of these algorithms build upon simple linear models. We then describe linear models from different views and find the properties and theories behind the models. The linear model is the main technique in regression problems, and its primary tool is the least squares approximation, which minimizes a sum of squared errors. This is a natural choice when we are interested in finding the regression function that minimizes the corresponding expected squared error. This survey is primarily a summary of the purpose and significance of important theories behind linear models, e.g., distribution theory and minimum-variance estimators. We first describe ordinary least squares from three different points of view, upon which we disturb the model with random noise and Gaussian noise. Through Gaussian noise, the model gives rise to a likelihood, so we introduce the maximum likelihood estimator. It also develops some distribution theory via this Gaussian disturbance. The distribution theory of least squares will help us answer various questions and introduce related applications. We then prove that least squares is the best unbiased linear model in the sense of mean squared error, and most importantly, that it actually approaches the theoretical limit. We end up with linear models with the Bayesian approach and beyond.
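As a pocket version of several threads in the survey (the normal equations, the maximum likelihood estimator under Gaussian noise, and the distribution theory that yields standard errors), consider the following sketch; the design, coefficients, and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + 0.3 * rng.standard_normal(n)   # Gaussian disturbance

# OLS solves the normal equations X^T X beta = X^T y; under Gaussian noise
# this coincides with the maximum likelihood estimator.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Unbiased noise-variance estimate and the classical covariance of beta_hat,
# sigma^2 (X^T X)^{-1}, from which t-statistics and intervals follow.
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - p)
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov_beta))
print(beta_hat, se)
```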
In this work, we consider the linear inverse problem $y = Ax + \epsilon$, where $A \colon X \to Y$ is a known linear operator between the separable Hilbert spaces $X$ and $Y$, $x$ is a random variable in $X$, and $\epsilon$ is a zero-mean random process in $Y$. This setting covers several inverse problems in imaging, including denoising, deblurring, and X-ray tomography. Within the classical regularization framework, we focus on the case where the regularization functional is not given a priori, but learned from data. Our first result is a characterization of the optimal generalized Tikhonov regularizer with respect to the mean squared error. We find that it is completely independent of the forward operator $A$ and depends only on the mean and covariance of $x$. Then, we consider the problem of learning the regularizer from a finite training set in two different frameworks: one supervised, based on samples of both $x$ and $y$, and one unsupervised, based only on samples of $x$. In both cases, we prove generalization bounds under some weak assumptions on the distributions of $x$ and $\epsilon$, including the case of sub-Gaussian variables. Our bounds hold in infinite-dimensional spaces, thereby showing that finer and finer discretizations do not make this learning problem harder. The results are validated through numerical simulations.
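The characterization has a direct finite-dimensional reading: the optimal affine (Tikhonov-type) estimator depends on the signal distribution only through its mean and covariance, which can be learned from samples of $x$ alone (the unsupervised framework). A sketch, with the noise level assumed known and all sizes illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 30
A = rng.standard_normal((d, d)) / np.sqrt(d)       # known forward operator

# Ground-truth signal model: x ~ N(mu, C) with a smooth (decaying) covariance.
mu = np.ones(d)
C = np.array([[0.9**abs(i - j) for j in range(d)] for i in range(d)])
noise = 0.1

# Unsupervised learning of the regularizer: estimate mu and C from samples of x
# alone; the optimal generalized Tikhonov estimator never needs samples of y.
X_train = rng.multivariate_normal(mu, C, size=2000)
mu_hat = X_train.mean(axis=0)
C_hat = np.cov(X_train.T)

# Apply the learned affine estimator to a new observation y = A x + eps.
x_new = rng.multivariate_normal(mu, C)
y = A @ x_new + noise * rng.standard_normal(d)
G = A @ C_hat @ A.T + noise**2 * np.eye(d)
x_hat = mu_hat + C_hat @ A.T @ np.linalg.solve(G, y - A @ mu_hat)
print(np.linalg.norm(x_hat - x_new) / np.linalg.norm(x_new))
```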
Learning mappings between infinite-dimensional function spaces has achieved empirical success in many disciplines of machine learning, including generative modeling, functional data analysis, causal inference, and multi-agent reinforcement learning. In this paper, we study the statistical limit of learning a Hilbert-Schmidt operator between two infinite-dimensional Sobolev reproducing kernel Hilbert spaces. We establish an information-theoretic lower bound in terms of the Sobolev Hilbert-Schmidt norm, and show that a regularization that learns the spectral components below the bias contour and ignores those above the variance contour can achieve the optimal learning rate. At the same time, the spectral components between the bias and variance contours give us flexibility in designing computationally feasible machine learning algorithms. Based on this observation, we develop a multilevel kernel operator learning algorithm that is optimal when learning linear operators between infinite-dimensional function spaces.
Generalised Bayesian inference updates prior beliefs using a loss function rather than a likelihood, and can therefore be used to confer robustness against possible mis-specification of the likelihood. Here we consider generalised Bayesian inference with a Stein discrepancy as a loss function, motivated by applications in which the likelihood contains an intractable normalisation constant. In this context, the Stein discrepancy circumvents evaluation of the normalisation constant and produces generalised posteriors that are either closed form or accessible using standard Markov chain Monte Carlo. On a theoretical level, we show consistency, asymptotic normality, and bias-robustness of the generalised posterior, highlighting how these properties are impacted by the choice of Stein discrepancy. We then provide numerical experiments on a range of intractable distributions, including applications to kernel-based exponential family models and non-Gaussian graphical models.
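A grid-based sketch of such a generalised posterior for a 1D unnormalised Gaussian precision model, using a kernel Stein discrepancy as the loss; the loss weight, prior, and scaling conventions here are illustrative choices rather than the calibrations studied in the paper.

```python
import numpy as np

def ksd2(x, theta, h=1.0):
    """V-statistic estimate of squared KSD in 1D for the unnormalised model
    p_theta(x) proportional to exp(-theta x^2 / 2), with score s(x) = -theta x."""
    s = -theta * x
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * d**2 / h**2)
    U = K * (s[:, None] * s[None, :] + s[:, None] * d / h**2
             - s[None, :] * d / h**2 + 1.0 / h**2 - d**2 / h**4)
    return U.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0 / np.sqrt(2.0), size=300)   # true precision theta = 2

# Generalised posterior on a grid: pi_n(theta) ~ pi(theta) exp(-w n KSD^2);
# the weight w is a user choice (calibration rules exist in the literature).
thetas = np.linspace(0.5, 4.0, 200)
prior = np.exp(-0.5 * (thetas - 1.0)**2)            # hypothetical N(1, 1) prior
loss = np.array([ksd2(x, th) for th in thetas])
logpost = np.log(prior) - 1.0 * len(x) * loss
post = np.exp(logpost - logpost.max())
post /= post.sum() * (thetas[1] - thetas[0])
print(thetas[np.argmax(post)])                      # mode close to theta = 2
```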