Given $n$ i.i.d. samples drawn from an unknown distribution $P$, when is it possible to produce a larger set of $n+m$ samples which cannot be distinguished from $n+m$ i.i.d. samples drawn from $P$? (Axelrod et al. 2019) formalized this question as the sample amplification problem, and gave optimal amplification procedures for discrete distributions and Gaussian location models. However, these procedures and the associated lower bounds are tailored to the specific distribution classes, and a general statistical understanding of sample amplification remains largely open. In this work, we place the sample amplification problem on a firm statistical foundation by deriving generally applicable amplification procedures, lower bound techniques, and connections to existing statistical notions. Our techniques apply to a large class of distributions including the exponential family, and establish a rigorous connection between sample amplification and distribution learning.
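For orientation, here is a minimal Python sketch of the naive "learn then generate" amplification baseline for the Gaussian location model $\mathcal{N}(\theta, I_d)$; this is only an illustrative baseline, not the optimal procedure from the paper.

```python
import numpy as np

def amplify_gaussian_location(samples, m, seed=0):
    """Naive 'learn then generate' amplification baseline for N(theta, I_d):
    fit the mean, draw m synthetic points, and shuffle them in with the
    originals. Illustrative only -- not the paper's optimal procedure."""
    rng = np.random.default_rng(seed)
    theta_hat = samples.mean(axis=0)              # MLE of the location
    extra = rng.standard_normal((m, samples.shape[1])) + theta_hat
    amplified = np.vstack([samples, extra])
    rng.shuffle(amplified)                        # hide which rows are synthetic
    return amplified
```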
A common phenomenon in high-dimensional and nonparametric statistical models is that rate-optimal estimators balance squared bias and variance. Although this balancing is widely observed, little is known about whether methods exist that could avoid the trade-off between bias and variance. We propose a general strategy to obtain lower bounds on the variance of any estimator whose bias is smaller than a prespecified bound. This shows to which extent the bias-variance trade-off is unavoidable, and allows us to quantify the loss of performance for methods that do not obey it. The approach is based on a number of abstract lower bounds for the variance involving the change of expectation with respect to different probability measures, together with information measures such as the Kullback-Leibler or chi-squared divergence. Some of these inequalities rely on a new concept of information matrices. In the second part of the article, the abstract lower bounds are applied to several statistical models, including the Gaussian white noise model, a boundary estimation problem, the Gaussian sequence model, and the high-dimensional linear regression model. For these specific statistical applications, different types of bias-variance trade-offs occur, and their strength varies considerably. For the trade-off between integrated squared bias and integrated variance in the Gaussian white noise model, we combine the general strategy for lower bounds with a reduction technique. This allows us to link the original problem to the bias-variance trade-off for estimators with additional symmetry properties in a simpler statistical model. In the Gaussian sequence model, different phase transitions of the bias-variance trade-off occur. Although there is a non-trivial interplay between bias and variance, the rates of the squared bias and the variance do not have to be balanced in order to achieve the minimax estimation rate.
The problem of estimating a linear functional based on observational data is canonical in both the causal inference and bandit literatures. We analyze a broad class of two-stage procedures that first estimate the treatment effect function, and then use this quantity to estimate the linear functional. We prove non-asymptotic upper bounds on the mean-squared error of such procedures: these bounds reveal that, in order to obtain non-asymptotically optimal procedures, the error in estimating the treatment effect should be minimized in a certain weighted $L^2$-norm. We analyze a two-stage procedure based on constrained regression in this weighted norm, and establish its instance-dependent optimality in finite samples via matching non-asymptotic local minimax lower bounds. These results show that the optimal non-asymptotic risk, in addition to depending on the asymptotically efficient variance, depends on the weighted-norm distance between the true outcome function and its approximation by the richest function class supported by the sample size.
We study stochastic approximation procedures for approximately solving a $d$-dimensional linear fixed point equation based on observing a trajectory of length $n$ from an ergodic Markov chain. We first exhibit a non-asymptotic bound of the order $t_{\mathrm{mix}} \tfrac{d}{n}$ on the squared error of the last iterate of a standard scheme, where $t_{\mathrm{mix}}$ is a mixing time. We then prove a non-asymptotic instance-dependent bound on a suitably averaged sequence of iterates, with a leading term that matches the local asymptotic minimax limit, including sharp dependence on the parameters $(d, t_{\mathrm{mix}})$ in the higher-order terms. We complement these upper bounds with a non-asymptotic minimax lower bound that establishes the instance-optimality of the averaged SA estimator. We derive corollaries of these results for policy evaluation with Markov noise -- covering the TD($\lambda$) family of algorithms for all $\lambda \in [0,1)$ -- and for linear autoregressive models. Our instance-dependent characterizations open the door to the design of fine-grained model selection procedures for hyperparameter tuning (e.g., choosing the value of $\lambda$ when running the TD($\lambda$) algorithm).
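As an illustration of the estimator being analyzed, the following is a minimal sketch of linear stochastic approximation with Polyak-Ruppert averaging for a fixed point equation $\bar{A}\theta = \bar{b}$, fed with (possibly Markovian) observations $(A_t, b_t)$; the constant step size is an assumption made for the sketch.

```python
import numpy as np

def averaged_linear_sa(obs, d, step=0.1):
    """Linear stochastic approximation with Polyak-Ruppert averaging.
    `obs` yields pairs (A_t, b_t) observed along the trajectory; the
    averaged iterate is the estimator whose instance-dependent risk
    the paper characterizes."""
    theta = np.zeros(d)
    avg = np.zeros(d)
    for t, (A_t, b_t) in enumerate(obs, start=1):
        theta = theta + step * (b_t - A_t @ theta)   # SA update
        avg += (theta - avg) / t                     # running average
    return avg
```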
Testing the significance of a variable or group of variables $X$ for predicting a response $Y$, given additional covariates $Z$, is a ubiquitous task in statistics. A simple but common approach is to specify a linear model, and then test whether the regression coefficient for $X$ is non-zero. However, when the model is misspecified, the test may have poor power, for example when $X$ is involved in complex interactions, or lead to many false rejections. In this work we study the problem of testing the model-free null of conditional mean independence, i.e. that the conditional mean of $Y$ given $X$ and $Z$ does not depend on $X$. We propose a simple and general framework that can leverage flexible nonparametric or machine learning methods, such as additive models or random forests, to yield both robust error control and high power. The procedure involves using these methods to perform regressions, first to estimate a form of projection of $Y$ on $X$ and $Z$ using one half of the data, and then to estimate the expected conditional covariance between this projection and $Y$ on the remaining half of the data. While the approach is general, we show that a version of our procedure using spline regression achieves what we show is the minimax optimal rate in this nonparametric testing problem. Numerical experiments demonstrate the effectiveness of our approach both in terms of maintaining Type I error control, and power, compared to several existing approaches.
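A schematic Python sketch of this two-stage recipe follows, using random forests for the regressions; the exact construction in the paper differs in its details, and the helper below (including the choice of `RandomForestRegressor` and the one-sided Gaussian calibration) is our illustrative assumption.

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor

def cond_mean_indep_test(X, Z, Y, seed=0):
    """Schematic two-stage test of H0: E[Y | X, Z] does not depend on X.
    X, Z are 2-D covariate arrays, Y a 1-D response. A hedged sketch of
    the general recipe (sample splitting + flexible regressions), not
    the paper's exact procedure."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Y))
    a, b = np.array_split(idx, 2)
    XZ = np.column_stack([X, Z])

    # Stage 1 (first half): estimate E[Y|X,Z], its projection onto Z,
    # and E[Y|Z].
    f_hat = RandomForestRegressor(random_state=0).fit(XZ[a], Y[a])
    proj = RandomForestRegressor(random_state=0).fit(Z[a], f_hat.predict(XZ[a]))
    m_hat = RandomForestRegressor(random_state=0).fit(Z[a], Y[a])

    # Stage 2 (second half): estimate the expected conditional covariance
    # between the projection residual and the response residual.
    r_f = f_hat.predict(XZ[b]) - proj.predict(Z[b])
    r_y = Y[b] - m_hat.predict(Z[b])
    prods = r_f * r_y
    stat = np.sqrt(len(prods)) * prods.mean() / prods.std()
    return stat, 1 - norm.cdf(stat)   # one-sided p-value
```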
We revisit the problem of tolerant distribution testing. That is, given samples from an unknown distribution $p$ over $\{1, \dots, n\}$, is it $\varepsilon_1$-close to or $\varepsilon_2$-far from a reference distribution $q$ (in total variation distance)? Despite significant interest over the past decade, this problem is well understood only in the extreme cases. In the noiseless setting (i.e., $\varepsilon_1 = 0$) the sample complexity is $\Theta(\sqrt{n})$, strongly sublinear in the domain size. At the other end of the spectrum, when $\varepsilon_1 = \varepsilon_2/2$, the sample complexity jumps to the barely sublinear $\Theta(n/\log n)$. However, very little is known about the intermediate regime. We fully characterize the price of tolerance in distribution testing as a function of $n$, $\varepsilon_1$, $\varepsilon_2$, up to a single $\log n$ factor. Specifically, we show the sample complexity to be \[\tilde\Theta\left(\frac{\sqrt{n}}{\varepsilon_2^{2}} + \frac{n}{\log n} \cdot \max\left\{\frac{\varepsilon_1}{\varepsilon_2^2}, \left(\frac{\varepsilon_1}{\varepsilon_2^2}\right)^{2}\right\}\right),\] providing a smooth trade-off between the two previously known cases. We also provide a similar characterization for the problem of tolerant equivalence testing, where both $p$ and $q$ are unknown. Surprisingly, in both cases, the main quantity dictating the sample complexity is the ratio $\varepsilon_1/\varepsilon_2^2$, and not the more intuitive $\varepsilon_1/\varepsilon_2$. Of particular technical interest is our lower bound framework, which involves novel approximation-theoretic tools required to handle the asymmetry between $\varepsilon_1$ and $\varepsilon_2$, a challenge absent from previous work.
Classical asymptotic theory for statistical inference usually involves calibrating a statistic by fixing the dimension $d$ while letting the sample size $n$ increase to infinity. Recently, much effort has been dedicated towards understanding how these methods behave in high-dimensional settings, where $d$ and $n$ both increase to infinity together. This often leads to different inference procedures, depending on the assumptions about the dimensionality, leaving the practitioner in a bind: given a dataset with 100 samples in 20 dimensions, should they calibrate by assuming $n \gg d$, or $d/n \approx 0.2$? This paper considers the goal of dimension-agnostic inference; developing methods whose validity does not depend on any assumption on $d$ versus $n$. We introduce an approach that uses variational representations of existing test statistics along with sample splitting and self-normalization to produce a new test statistic with a Gaussian limiting distribution, regardless of how $d$ scales with $n$. The resulting statistic can be viewed as a careful modification of degenerate U-statistics, dropping diagonal blocks and retaining off-diagonal blocks. We exemplify our technique for some classical problems including one-sample mean and covariance testing, and show that our tests have minimax rate-optimal power against appropriate local alternatives. In most settings, our cross U-statistic matches the high-dimensional power of the corresponding (degenerate) U-statistic up to a $\sqrt{2}$ factor.
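To make the construction concrete, here is a minimal sketch of the cross U-statistic for one-sample mean testing ($H_0: \mathbb{E}[X] = 0$), as we read the recipe: split the sample, project each point of one half onto the mean of the other half (the retained off-diagonal blocks), and self-normalize.

```python
import numpy as np
from scipy.stats import norm

def cross_u_mean_test(X, seed=0):
    """Dimension-agnostic one-sample mean test via a cross U-statistic.
    X is an (n, d) sample array. A sketch under our reading of the
    paper's recipe; details of studentization may differ."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    a, b = np.array_split(idx, 2)
    h = X[a] @ X[b].mean(axis=0)       # off-diagonal-block projections
    stat = np.sqrt(len(h)) * h.mean() / h.std()
    return stat, 1 - norm.cdf(stat)    # Gaussian limit regardless of d vs n
```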
We consider the problem of estimating a multivariate function $f_0$ of bounded variation (BV), from noisy observations $y_i = f_0(x_i) + z_i$ made at random design points $x_i \in \mathbb{R}^d$, $i=1,\ldots,n$. We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i),f_0(x_j)$) at all neighboring cells $i,j$ in the Voronoi diagram. This is seen to be equivalent to a variational optimization problem that regularizes according to the usual continuum (measure-theoretic) notion of TV, once we restrict the domain to functions that are piecewise constant over the Voronoi diagram. The regression estimator under consideration hence performs (shrunken) local averaging over adaptively formed unions of Voronoi cells, and we refer to it as the Voronoigram, following the ideas in Koenker (2005), and drawing inspiration from Tukey's regressogram (Tukey, 1961). Our contributions in this paper span both the conceptual and theoretical frontiers: we discuss some of the unique properties of the Voronoigram in comparison to TV-regularized estimators that use other graph-based discretizations; we derive the asymptotic limit of the Voronoi TV functional; and we prove that the Voronoigram is minimax rate optimal (up to log factors) for estimating BV functions that are essentially bounded.
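A minimal computational sketch of this estimator appears below, with two simplifications we assume for brevity: unit edge weights (the paper weights edges by the geometry of the shared Voronoi facets) and an off-the-shelf convex solver (`cvxpy`).

```python
import numpy as np
import cvxpy as cp
from scipy.spatial import Voronoi

def voronoigram(x, y, lam=1.0):
    """Sketch of a Voronoigram-style estimator: least squares with a
    discrete TV penalty over neighboring Voronoi cells. x is an (n, d)
    design array with d >= 2; unit edge weights are a simplification."""
    vor = Voronoi(x)
    edges = vor.ridge_points              # pairs of neighboring cells
    theta = cp.Variable(len(y))
    tv = cp.sum(cp.abs(theta[edges[:, 0]] - theta[edges[:, 1]]))
    obj = cp.Minimize(0.5 * cp.sum_squares(y - theta) + lam * tv)
    cp.Problem(obj).solve()
    return theta.value                    # fitted values f_hat(x_i)
```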
We study the fundamental task of outlier-robust mean estimation for heavy-tailed distributions in the presence of sparsity. Specifically, given a small number of corrupted samples from a high-dimensional heavy-tailed distribution whose mean $\mu$ is guaranteed to be sparse, the goal is to efficiently compute a hypothesis that accurately approximates $\mu$ with high probability. Prior work had obtained efficient algorithms for robust sparse mean estimation of light-tailed distributions. In this work, we give the first sample-efficient and polynomial-time robust sparse mean estimator for heavy-tailed distributions under mild moment assumptions. Our algorithm achieves the optimal asymptotic error using a number of samples scaling logarithmically with the ambient dimension. Importantly, the sample complexity of our method is optimal as a function of the failure probability $\tau$, having an additive $\log(1/\tau)$ dependence. Our algorithm leverages the stability-based approach from the algorithmic robust statistics literature, with crucial (and necessary) adaptations required in our setting. Our analysis may be of independent interest, involving the delicate design of a (non-spectral) decomposition for positive semi-definite matrices satisfying certain sparsity properties.
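For orientation only, here is a naive coordinate-wise baseline (winsorized means plus hard thresholding to the top-$k$ coordinates); this is emphatically not the paper's stability-based estimator, which rests on a more delicate (non-spectral) matrix decomposition, but it fixes the problem setup.

```python
import numpy as np

def naive_robust_sparse_mean(X, k, trim=0.1):
    """Naive baseline for robust sparse mean estimation: coordinate-wise
    winsorized means, then keep the k largest coordinates. Not the
    paper's algorithm; shown only to illustrate the task."""
    lo, hi = np.quantile(X, [trim, 1 - trim], axis=0)
    mu = np.clip(X, lo, hi).mean(axis=0)      # winsorized coordinate means
    keep = np.argsort(np.abs(mu))[-k:]        # support of the k largest entries
    out = np.zeros_like(mu)
    out[keep] = mu[keep]
    return out
```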
We study the problem of high-dimensional sparse mean estimation in the presence of an $\epsilon$-fraction of adversarial outliers. Prior work obtained sample- and computationally efficient algorithms for this task for identity-covariance subgaussian distributions. In this work, we develop the first efficient algorithms for robust sparse mean estimation without a priori knowledge of the covariance. For distributions on $\mathbb{R}^d$ with "certifiably bounded" $t$-th moments and sufficiently light tails, our algorithm achieves error of $O(\epsilon^{1-1/t})$ with sample complexity $m = (k \log(d))^{O(t)}/\epsilon^{2-2/t}$. For the special case of the Gaussian distribution, our algorithm achieves near-optimal error of $\tilde O(\epsilon)$ with sample complexity $m = O(k^4 \mathrm{polylog}(d))/\epsilon^2$. Our algorithms follow the Sum-of-Squares based proofs-to-algorithms approach. We complement our upper bounds with Statistical Query and low-degree polynomial testing lower bounds, providing evidence that the sample-time-error trade-offs achieved by our algorithms are qualitatively best possible.
We study the continuity property of generalized entropy, as a function of the underlying probability distribution and defined with respect to an action space and a loss function, and use this property to answer a basic question in statistical learning theory: the excess risk analysis of various learning methods. We first derive bounds on the entropy difference of two distributions in terms of several commonly used f-divergences, the Wasserstein distance, a distance that depends on the action space and the loss function, and the Bregman divergence generated by the entropy, which also induces bounds in terms of the Euclidean distance between the two distributions. Examples are given alongside each general result, comparisons are made with existing entropy difference bounds, and new mutual information upper bounds are derived based on the new results. We then apply the entropy difference bounds to statistical learning theory. It is shown that the excess risks in the two popular learning paradigms, frequentist learning and Bayesian learning, can both be studied via the continuity of different forms of generalized entropy. The analysis is then extended to the continuity of generalized conditional entropy. The extension provides performance bounds for Bayesian decision making with mismatched distributions. It also leads to excess risk bounds for a third learning paradigm, in which the decision rule is optimally designed under the projection of the empirical distribution onto a predefined family of distributions. We thus establish, through the continuity of generalized entropy, a unified method of excess risk analysis for the three major paradigms of statistical learning.
We study non-parametric estimation of the value function of an infinite-horizon $\gamma$-discounted Markov reward process (MRP) using observations from a single trajectory. We provide non-asymptotic guarantees for a general family of kernel-based multi-step temporal difference (TD) estimates, including canonical $K$-step look-ahead TD for $K = 1, 2, \ldots$ and the TD$(\lambda)$ family for $\lambda \in [0,1)$ as special cases. Our bounds capture its dependence on Bellman fluctuations, mixing time of the Markov chain, any mis-specification in the model, as well as the choice of weight function defining the estimator itself, and reveal some delicate interactions between mixing time and model mis-specification. For a given TD method applied to a well-specified model, its statistical error under trajectory data is similar to that of i.i.d. sample transition pairs, whereas under mis-specification, temporal dependence in data inflates the statistical error. However, any such deterioration can be mitigated by increased look-ahead. We complement our upper bounds by proving minimax lower bounds that establish optimality of TD-based methods with appropriately chosen look-ahead and weighting, and reveal some fundamental differences between value function estimation and ordinary non-parametric regression.
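As a toy tabular analogue of the kernel-based estimators analyzed here, the following sketch implements $K$-step look-ahead TD from a single trajectory; the constant step size and the finite state space are our simplifying assumptions.

```python
import numpy as np

def k_step_td(states, rewards, num_states, K=3, gamma=0.9, alpha=0.05):
    """Tabular K-step look-ahead TD from a single trajectory: a toy
    analogue of the kernel-based multi-step TD estimates in the paper.
    states[t] is the state index at time t, rewards[t] the reward."""
    V = np.zeros(num_states)
    for t in range(len(rewards) - K):
        # K-step bootstrapped target: discounted rewards plus tail value
        target = sum(gamma**i * rewards[t + i] for i in range(K))
        target += gamma**K * V[states[t + K]]
        V[states[t]] += alpha * (target - V[states[t]])
    return V
```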
Statistical divergences (SDs), which quantify the dissimilarity between probability distributions, are a fundamental component of statistical inference and machine learning. A modern method for estimating these divergences relies on parametrizing an empirical variational form by a neural network (NN) and optimizing over the parameter space. Such neural estimators are abundantly used in practice, but the corresponding performance guarantees are partial and call for further exploration. In particular, there is a fundamental tradeoff between the two sources of error involved: approximation and empirical estimation. While the former needs the NN class to be rich and expressive, the latter relies on controlling complexity. We explore this tradeoff for estimators based on shallow NNs by means of non-asymptotic error bounds, focusing on four popular $\mathsf{f}$-divergences -- Kullback-Leibler, chi-squared, squared Hellinger, and total variation. Our analysis relies on non-asymptotic function approximation theorems and tools from empirical process theory. The bounds reveal the tension between the NN size and the number of samples, and enable characterizing scaling rates thereof that ensure consistency. For compactly supported distributions, we further show that neural estimators of the first three divergences above, with appropriate NN growth rates, approach the minimax rate-optimal performance, achieving the parametric rate up to logarithmic factors.
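A minimal sketch of such a shallow-NN estimator for the Kullback-Leibler case, via the Donsker-Varadhan variational form $\mathrm{KL}(P\|Q) \ge \mathbb{E}_P[f] - \log \mathbb{E}_Q[e^f]$; PyTorch, the network width, and the optimizer settings are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

def neural_kl_estimate(x_p, x_q, hidden=64, steps=500, lr=1e-3):
    """Donsker-Varadhan neural estimate of KL(P||Q) from sample tensors
    x_p (n, d) and x_q (m, d). Shallow ReLU network, as in the regime
    the paper analyzes; hyperparameters here are illustrative."""
    d = x_p.shape[1]
    f = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, 1))
    opt = torch.optim.Adam(f.parameters(), lr=lr)
    for _ in range(steps):
        # DV objective: E_P[f] - log E_Q[exp f]
        log_mean_exp = torch.logsumexp(f(x_q), dim=0) - math.log(len(x_q))
        dv = f(x_p).mean() - log_mean_exp.squeeze()
        opt.zero_grad()
        (-dv).backward()          # maximize the variational lower bound
        opt.step()
    return dv.item()
```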
We consider the problem of robustly testing the norm of a high-dimensional sparse signal vector under two different observation models. In the first model, we are given $n$ i.i.d. samples from the distribution $\mathcal{N}\left(\theta,I_d\right)$ (with unknown $\theta$), of which a small fraction has been arbitrarily corrupted. Under the promise that $\|\theta\|_0\le s$, we want to correctly distinguish whether $\|\theta\|_2=0$ or $\|\theta\|_2>\gamma$, for some input parameter $\gamma>0$. We show that any algorithm for this task requires $n=\Omega\left(s\log\frac{ed}{s}\right)$ samples, which is tight up to logarithmic factors. We also extend our results to other common notions of sparsity, namely, $\|\theta\|_q\le s$ for any $0 < q < 2$. In the second observation model that we consider, the data is generated according to a sparse linear regression model, where the covariates are i.i.d. Gaussian and the regression coefficient (signal) is known to be $s$-sparse. Here too we assume that an $\epsilon$-fraction of the data is arbitrarily corrupted. We show that any algorithm that reliably tests the norm of the regression coefficient requires at least $n=\Omega\left(\min(s\log d,{1}/{\gamma^4})\right)$ samples. Our results show that the complexity of testing in these two settings significantly increases under robustness constraints. This is in line with the recent observations made in robust mean testing and robust covariance testing.
Triangular flows, also known as Knothe-Rosenblatt measure couplings, comprise an important building block of normalizing flow models for generative modeling and density estimation, including popular autoregressive flow models such as real-valued non-volume preserving transformation models (Real NVP). We present statistical guarantees and sample complexity bounds for triangular flow statistical models. In particular, we establish the statistical consistency and the finite-sample convergence rates of the Kullback-Leibler estimator of the Knothe-Rosenblatt measure coupling, using tools from empirical process theory. Our results highlight the anisotropic geometry of the function classes at play in triangular flows, shed light on optimal coordinate ordering, and lead to statistical guarantees for Jacobian flows. We conduct numerical experiments on synthetic data to illustrate the practical implications of our theoretical findings.
We investigate clustering of data generated by mixtures of Gaussians that share a common but unknown, and potentially ill-conditioned, covariance matrix. We first consider Gaussian mixtures with two equally-sized components, and derive a Max-Cut integer program based on maximum likelihood estimation. We prove that its solutions achieve the optimal misclassification rate, up to logarithmic factors, when the number of samples grows linearly in the dimension. However, solving the Max-Cut problem appears to be computationally intractable. To overcome this, we develop an efficient spectral algorithm that attains the optimal rate but requires a quadratic sample size. Although this sample complexity is worse than that of the Max-Cut problem, we conjecture that no polynomial-time method can perform better. Furthermore, we gather numerical and theoretical evidence supporting the existence of a statistical-computational gap. Finally, we generalize the Max-Cut program to a $k$-means program that handles multi-component mixtures with possibly unequal weights. It enjoys similar optimality guarantees for mixtures of distributions that satisfy a transportation-cost inequality, a class encompassing Gaussian and strongly log-concave distributions.
This paper studies the statistical properties of principal components regression with Laplacian eigenmaps (PCR-LE), a method for nonparametric regression based on Laplacian eigenmaps (LE). PCR-LE works by projecting the vector of observed responses ${\bf y} = (y_1,\ldots,y_n)$ onto a subspace spanned by certain eigenvectors of a neighborhood graph Laplacian. We show that PCR-LE achieves minimax rates of convergence for random design regression over Sobolev spaces. Under sufficient smoothness conditions on the design density $p$, PCR-LE achieves the optimal rates for both estimation (where the optimal rate in squared $L^2$ norm is known to be $n^{-2s/(2s+d)}$) and goodness-of-fit testing ($n^{-4s/(4s+d)}$). We also show that PCR-LE is \emph{manifold adaptive}: that is, we consider the situation where the design is supported on a manifold of small intrinsic dimension $m$, and give upper bounds establishing that PCR-LE achieves the faster minimax estimation ($n^{-2s/(2s+m)}$) and testing ($n^{-4s/(4s+m)}$) rates of convergence. Interestingly, these rates are almost always much faster than the known rates of convergence of graph Laplacian eigenvectors to their population counterparts; in other words, for this problem, regression with estimated features appears to be statistically easier than estimation of the features themselves. We support these theoretical results with empirical evidence.
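A compact sketch of PCR-LE: build a neighborhood-graph Laplacian from the design points, then project the response vector onto its smoothest eigenvectors. The Gaussian-kernel graph and the dense eigendecomposition below are simplifications for illustration.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def pcr_le(x, y, n_eigs=10, bandwidth=0.5):
    """Sketch of principal components regression with Laplacian eigenmaps:
    regress y onto the bottom eigenvectors of a kernel graph Laplacian
    built from the (n, d) design array x."""
    W = np.exp(-cdist(x, x) ** 2 / (2 * bandwidth**2))  # kernel weight matrix
    L = np.diag(W.sum(axis=1)) - W                      # unnormalized Laplacian
    _, U = eigh(L)                                      # eigenvectors, ascending
    V = U[:, :n_eigs]                                   # smoothest eigenvectors
    return V @ (V.T @ y)                                # projected responses
```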
Approximate message passing (AMP) has emerged as an effective iterative paradigm for solving high-dimensional statistical problems. However, prior AMP theory fell short of predicting the AMP dynamics when the number of iterations surpasses $o\big(\frac{\log n}{\log\log n}\big)$ (with $n$ the problem dimension). To address this inadequacy, this paper develops a non-asymptotic framework for understanding AMP in spiked matrix estimation. Built upon a new decomposition of AMP updates and controllable residual terms, we lay out an analysis recipe to characterize the finite-sample behavior of AMP in the presence of an independent initialization, which is further generalized to allow for spectral initialization. As two concrete consequences of the proposed analysis recipe: (i) when solving $\mathbb{Z}_2$ synchronization, we predict the behavior of spectrally initialized AMP for up to $O\big(\frac{n}{\mathrm{poly}\log n}\big)$ iterations, showing that the algorithm succeeds without a subsequent refinement stage (as conjectured recently by \citet{celentano2021local}); (ii) we characterize the non-asymptotic behavior of AMP in sparse PCA (in the spiked Wigner model) for a broad range of signal-to-noise ratios.
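For concreteness, a sketch of the AMP iteration in the $\mathbb{Z}_2$ synchronization setting, with spectral initialization, tanh denoisers, and the Onsager correction; the normalization of the observed matrix $Y$ (rank-one signal plus Wigner noise) is an assumption of the sketch.

```python
import numpy as np

def amp_z2(Y, iters=20):
    """AMP for Z_2 synchronization in a spiked Wigner model, sketched with
    tanh denoisers and spectral initialization. Scaling conventions for Y
    are assumed; the initialization scale is a heuristic."""
    n = Y.shape[0]
    _, V = np.linalg.eigh(Y)
    x = V[:, -1] * np.sqrt(n)          # spectral initialization
    x_prev = np.zeros(n)
    for _ in range(iters):
        b = np.mean(1.0 - np.tanh(x) ** 2)                # Onsager coefficient
        x, x_prev = Y @ np.tanh(x) - b * np.tanh(x_prev), x
    return np.sign(x)                   # estimated +/-1 labels
```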
We present improved differentially private algorithms for identity testing of high-dimensional distributions. Specifically, for $d$-dimensional Gaussian distributions with known covariance $\Sigma$, we can test whether the distribution comes from $\mathcal{N}(\mu^*, \Sigma)$ for some fixed $\mu^*$, or from some $\mathcal{N}(\mu, \Sigma)$ with total variation distance at least $\alpha$ from $\mathcal{N}(\mu^*, \Sigma)$, under $(\varepsilon, 0)$-differential privacy, using only \[\tilde{O}\left(\frac{d^{1/2}}{\alpha^2} + \frac{d^{1/3}}{\alpha^{4/3} \cdot \varepsilon^{2/3}} + \frac{1}{\alpha \cdot \varepsilon}\right)\] samples if the algorithm is allowed to be computationally inefficient, and only \[\tilde{O}\left(\frac{d^{1/2}}{\alpha^2} + \frac{d^{1/4}}{\alpha \cdot \varepsilon}\right)\] samples for a computationally efficient algorithm. We also provide a matching lower bound showing that our computationally inefficient algorithm has optimal sample complexity. We further extend our algorithms to various related problems, including mean testing of Gaussians with bounded but unknown covariance, uniformity testing of product distributions over $\{-1, 1\}^d$, and tolerant testing. Our results improve over the previous best work of Canonne et al.~\cite{CanonneKMUZ20}, and even our computationally efficient algorithm matches the optimal non-private sample complexity of $O\left(\frac{\sqrt{d}}{\alpha^2}\right)$ in many standard parameter settings. In addition, our results show that, surprisingly, private identity testing of $d$-dimensional Gaussians can be done with fewer samples than private identity testing of discrete distributions over a domain of size $d$~\cite{AcharyaSZ18}, refuting a conjectured lower bound of~\cite{CanonneKMUZ20}.
Mixtures of regression are a powerful class of models for regression learning with respect to a highly uncertain and heterogeneous response variable of interest. In addition to being a rich predictive model for the response given some covariates, the parameters in this model class provide useful information about the heterogeneity in the data population, which is represented by the conditional distributions for the response given the covariates associated with a number of distinct but latent subpopulations. In this paper, we investigate conditions of strong identifiability, rates of convergence for conditional density and parameter estimation, and the Bayesian posterior contraction behavior arising in finite mixture of regression models, under exact-fitted and over-fitted settings and when the number of components is unknown. This theory is applicable to common choices of link functions and families of conditional distributions employed by practitioners. We provide simulation studies and data illustrations, which shed some light on the parameter learning behavior found in several popular regression mixture models reported in the literature.
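As a minimal instance of this model class, here is a plain EM sketch for a $K$-component mixture of linear regressions with Gaussian noise of known common variance; the paper's scope (general link functions, over-fitted settings, unknown $K$, Bayesian posteriors) goes well beyond this.

```python
import numpy as np

def em_mix_linreg(X, y, K=2, iters=100, sigma2=1.0, seed=0):
    """Plain EM for a K-component mixture of linear regressions with a
    known common Gaussian noise variance -- a minimal sketch of the model
    class studied in the paper."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    betas = rng.normal(size=(K, d))
    pis = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibilities under Gaussian conditional densities
        loglik = -0.5 * (y[:, None] - X @ betas.T) ** 2 / sigma2
        R = pis * np.exp(loglik - loglik.max(axis=1, keepdims=True))
        R /= R.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component, then mixing weights
        for k in range(K):
            w = np.sqrt(R[:, k])
            betas[k] = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
        pis = R.mean(axis=0)
    return betas, pis
```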