We consider the problem of constructing minimax rate-optimal estimators for a doubly robust nonparametric functional that has witnessed applications across the causal inference and conditional independence testing literature. Minimax rate-optimal estimators for such functionals are typically constructed through higher-order bias corrections of plug-in and one-step type estimators and, in turn, depend on estimators of nuisance functions. In this paper, we consider a parallel question of interest regarding the optimality and/or sub-optimality of plug-in and one-step bias-corrected estimators for this doubly robust functional. Specifically, we verify that by using undersmoothing and sample splitting techniques when constructing nuisance function estimators, one can achieve minimax rates of convergence in all H\"older smoothness classes of the nuisance functions (i.e. the propensity score and outcome regression), provided that the marginal density of the covariates is sufficiently regular. Additionally, by establishing suitable lower bounds for these classes of estimators, we show that undersmoothing the nuisance function estimators is necessary to obtain minimax optimal rates of convergence.
Testing the significance of a variable or group of variables $X$ for predicting a response $Y$, given additional covariates $Z$, is a ubiquitous task in statistics. A simple but common approach is to specify a linear model, and then test whether the regression coefficient for $X$ is non-zero. However, when the model is misspecified, the test may have poor power, for example when $X$ is involved in complex interactions, or lead to many false rejections. In this work we study the problem of testing the model-free null of conditional mean independence, i.e. that the conditional mean of $Y$ given $X$ and $Z$ does not depend on $X$. We propose a simple and general framework that can leverage flexible nonparametric or machine learning methods, such as additive models or random forests, to yield both robust error control and high power. The procedure involves using these methods to perform regressions, first to estimate a form of projection of $Y$ on $X$ and $Z$ using one half of the data, and then to estimate the expected conditional covariance between this projection and $Y$ on the remaining half of the data. While the approach is general, we show that a version of our procedure using spline regression achieves what we show is the minimax optimal rate in this nonparametric testing problem. Numerical experiments demonstrate the effectiveness of our approach both in terms of maintaining Type I error control, and power, compared to several existing approaches.
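To make the two-regression structure concrete, here is a minimal Python sketch of the split-sample procedure described above, instantiated with random forests; the function name `pcm_test`, the use of `RandomForestRegressor`, and the one-sided studentized rejection rule are illustrative assumptions rather than the paper's exact construction (which, in particular, uses spline regression for its optimality result).

```python
# A minimal sketch of the split-sample conditional mean independence test:
# half 1 estimates a projection f(x, z) of Y on X and Z; half 2 estimates the
# expected conditional covariance between this projection and Y. X, Z are 2-D
# arrays of shape (n, d_x), (n, d_z); details here are illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor

def pcm_test(X, Z, Y, alpha=0.05, seed=0):
    n = len(Y)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    a, b = idx[: n // 2], idx[n // 2 :]          # two halves of the data

    XZ = np.column_stack([X, Z])
    # Half 1: estimate g(x, z) ~ E[Y | X, Z], then its projection onto Z.
    g = RandomForestRegressor(random_state=0).fit(XZ[a], Y[a])
    m = RandomForestRegressor(random_state=0).fit(Z[a], g.predict(XZ[a]))
    # f(x, z) = g(x, z) - m(z): the part of E[Y | X, Z] not explained by Z.
    f_b = g.predict(XZ[b]) - m.predict(Z[b])

    # Half 2: estimate the expected conditional covariance between f and Y.
    R = f_b * (Y[b] - m.predict(Z[b]))
    T = np.sqrt(len(b)) * R.mean() / R.std()     # studentized statistic
    return T, T > norm.ppf(1 - alpha)            # reject for large T
```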
This paper studies the statistical properties of principal components regression with Laplacian eigenmaps (PCR-LE), a method for nonparametric regression based on Laplacian eigenmaps (LE). PCR-LE works by projecting a vector of observed responses ${\bf y} = (y_1, \ldots, y_n)$ onto a subspace spanned by certain eigenvectors of a neighborhood graph Laplacian. We show that PCR-LE achieves minimax rates of convergence for random design regression over Sobolev spaces. Under sufficient smoothness conditions on the design density $p$, PCR-LE achieves the optimal rates for both estimation (where the known optimal rate in squared $L^2$ norm is $n^{-2s/(2s+d)}$) and goodness-of-fit testing ($n^{-4s/(4s+d)}$). We also show that PCR-LE is \emph{manifold adaptive}: that is, we consider the situation where the design is supported on a manifold of small intrinsic dimension $m$, and give upper bounds establishing that PCR-LE achieves the faster minimax estimation ($n^{-2s/(2s+m)}$) and testing ($n^{-4s/(4s+m)}$) rates of convergence. Interestingly, these rates are almost always faster than the known rates of convergence of graph Laplacian eigenvectors; in other words, for this problem, regression with estimated features appears to be easier, statistically speaking, than estimating the features themselves. We support these theoretical results with empirical evidence.
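The projection step described above is simple to express in code. The following sketch assumes a symmetrized $k$-nearest-neighbor graph, the unnormalized graph Laplacian, and a user-chosen number of eigenvectors $K$; in the paper, the choice of $K$ is tied to the smoothness $s$ and dimension $d$.

```python
# A schematic implementation of PCR-LE: build a neighborhood graph Laplacian
# and project the responses onto the span of its bottom K eigenvectors.
import numpy as np
from scipy.spatial.distance import squareform, pdist

def pcr_le(X, y, k=10, K=20):
    n = len(y)
    D = squareform(pdist(X))
    # Symmetrized kNN adjacency matrix.
    W = np.zeros((n, n))
    nbrs = np.argsort(D, axis=1)[:, 1 : k + 1]
    for i in range(n):
        W[i, nbrs[i]] = 1.0
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(axis=1)) - W               # unnormalized graph Laplacian
    # Eigenvectors for the K smallest eigenvalues of L.
    _, V = np.linalg.eigh(L)
    V_K = V[:, :K]
    # Project the responses onto the span of these eigenvectors.
    return V_K @ (V_K.T @ y)                     # fitted values at the design points
```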
The problem of estimating a linear functional based on observational data is canonical in both the causal inference and bandit literatures. We analyze a broad class of two-stage procedures that first estimate the treatment effect function, and then use this quantity to estimate the linear functional. We prove non-asymptotic upper bounds on the mean-squared error of such procedures: these bounds reveal that in order to obtain non-asymptotically optimal procedures, the error in estimating the treatment effect should be minimized in a certain weighted $L^2$-norm. We analyze a two-stage procedure based on constrained regression in this weighted norm, and establish its instance-dependent optimality in finite samples via matching non-asymptotic local minimax lower bounds. These results show that the optimal non-asymptotic risk, in addition to depending on the asymptotically efficient variance, depends on the distance, in the weighted norm, between the true outcome function and its approximation by the richest function class supported by the sample size.
We consider the problem of estimating a multivariate function $f_0$ of bounded variation (BV), from noisy observations $y_i = f_0(x_i) + z_i$ made at random design points $x_i \in \mathbb{R}^d$, $i=1,\ldots,n$. We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i),f_0(x_j)$) at all neighboring cells $i,j$ in the Voronoi diagram. This is seen to be equivalent to a variational optimization problem that regularizes according to the usual continuum (measure-theoretic) notion of TV, once we restrict the domain to functions that are piecewise constant over the Voronoi diagram. The regression estimator under consideration hence performs (shrunken) local averaging over adaptively formed unions of Voronoi cells, and we refer to it as the Voronoigram, following the ideas in Koenker (2005), and drawing inspiration from Tukey's regressogram (Tukey, 1961). Our contributions in this paper span both the conceptual and theoretical frontiers: we discuss some of the unique properties of the Voronoigram in comparison to TV-regularized estimators that use other graph-based discretizations; we derive the asymptotic limit of the Voronoi TV functional; and we prove that the Voronoigram is minimax rate optimal (up to log factors) for estimating BV functions that are essentially bounded.
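A schematic implementation follows, using the fact that pairs of neighboring Voronoi cells are exactly the edges of the Delaunay triangulation (for $d \geq 2$). Unit edge weights are used here for simplicity; the Voronoigram's actual weights involve the $(d-1)$-dimensional measure of shared cell boundaries. The generic convex solver (cvxpy) is an illustrative choice, not an efficient one.

```python
# A sketch of the Voronoigram: discrete TV regularization over the edges of
# the Voronoi adjacency graph, recovered here via the Delaunay triangulation.
import numpy as np
import cvxpy as cp
from scipy.spatial import Delaunay

def voronoigram(X, y, lam=1.0):
    tri = Delaunay(X)
    # Collect Voronoi-adjacent pairs (i, j) from the Delaunay simplices.
    edges = set()
    for simplex in tri.simplices:
        for i in range(len(simplex)):
            for j in range(i + 1, len(simplex)):
                edges.add((min(simplex[i], simplex[j]), max(simplex[i], simplex[j])))
    edges = np.array(sorted(edges))
    theta = cp.Variable(len(y))
    # Unit-weight surrogate for the weighted absolute differences penalty.
    tv = cp.norm1(theta[edges[:, 0]] - theta[edges[:, 1]])
    cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - theta) + lam * tv)).solve()
    return theta.value                            # piecewise-constant fit over Voronoi cells
```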
Classical asymptotic theory for statistical inference usually involves calibrating a statistic by fixing the dimension $d$ while letting the sample size $n$ increase to infinity. Recently, much effort has been dedicated towards understanding how these methods behave in high-dimensional settings, where $d$ and $n$ both increase to infinity together. This often leads to different inference procedures, depending on the assumptions about the dimensionality, leaving the practitioner in a bind: given a dataset with 100 samples in 20 dimensions, should they calibrate by assuming $n \gg d$, or $d/n \approx 0.2$? This paper considers the goal of dimension-agnostic inference; developing methods whose validity does not depend on any assumption on $d$ versus $n$. We introduce an approach that uses variational representations of existing test statistics along with sample splitting and self-normalization to produce a new test statistic with a Gaussian limiting distribution, regardless of how $d$ scales with $n$. The resulting statistic can be viewed as a careful modification of degenerate U-statistics, dropping diagonal blocks and retaining off-diagonal blocks. We exemplify our technique for some classical problems including one-sample mean and covariance testing, and show that our tests have minimax rate-optimal power against appropriate local alternatives. In most settings, our cross U-statistic matches the high-dimensional power of the corresponding (degenerate) U-statistic up to a $\sqrt{2}$ factor.
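For the one-sample mean testing example, the cross U-statistic admits a very short implementation: each term pairs a point from the first half with the sample mean of the second half (the retained off-diagonal blocks), and the average is self-normalized. The sketch below is a minimal version of this construction; the one-sided rejection rule follows from the Gaussian limiting distribution.

```python
# A sketch of the cross U-statistic for one-sample mean testing (H0: E[X] = 0),
# where X is an array of shape (n, d). Sample splitting plus self-normalization
# yields a N(0,1) null limit regardless of how d scales with n.
import numpy as np
from scipy.stats import norm

def cross_u_test(X, alpha=0.05):
    n = len(X)
    X1, X2 = X[: n // 2], X[n // 2 :]
    h = X1 @ X2.mean(axis=0)                     # <X_i, mean of second half>
    T = np.sqrt(len(X1)) * h.mean() / h.std()    # self-normalized statistic
    return T, T > norm.ppf(1 - alpha)            # reject for large T
```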
For high-dimensional and nonparametric statistical models, rate-optimal estimators typically balance squared bias and variance. Although this balancing is widely observed, little is known about whether there exist methods that could avoid the trade-off between bias and variance. We propose a general strategy for obtaining lower bounds on the variance of any estimator whose bias is smaller than a prespecified bound. This shows to what extent the bias-variance trade-off is unavoidable, and allows one to quantify the loss of performance for methods that do not obey it. The approach is based on a number of abstract lower bounds for the variance, involving the change of expectation with respect to different probability measures as well as information measures such as the Kullback-Leibler or chi-square divergence. Some of these inequalities rely on a new concept of information matrices. In the second part of the article, the abstract lower bounds are applied to several statistical models, including the Gaussian white noise model, a boundary estimation problem, the Gaussian sequence model, and the high-dimensional linear regression model. For these specific statistical applications, different types of bias-variance trade-offs occur, whose strength varies considerably. For the trade-off between integrated squared bias and integrated variance in the Gaussian white noise model, we combine the general strategy for lower bounds with a reduction technique. This allows us to link the original problem to the bias-variance trade-off for estimators with additional symmetry properties in a simpler statistical model. In the Gaussian sequence model, different phase transitions of the bias-variance trade-off occur. Although there is a non-trivial interplay between bias and variance, the rates of the squared bias and the variance do not have to be balanced in order to achieve the minimax estimation rate.
This paper studies the estimation and inference of the average treatment effect (ATE) using deep neural networks (DNNs) under the potential outcomes framework. Under some regularity conditions, the observed response can be formulated as the response of a mean regression problem with both the confounding variables and the treatment indicator as the independent variables. Using this formulation, we investigate two methods for ATE estimation and inference based on the estimated mean regression function via DNN regression with a specific network architecture. We show that both DNN estimates of the ATE are consistent with dimension-free consistency rates under some assumptions on the underlying true mean regression model. Our model assumptions accommodate a potentially complicated dependence structure of the observed covariates, including latent factors and nonlinear interactions between the treatment indicator and the confounding variables. We also establish the asymptotic normality of our estimators for precise inference and uncertainty quantification, based on the idea of sample splitting. Simulation studies and a real data application justify our theoretical findings and support our DNN estimation and inference methods.
Weighted nearest neighbor (WNN) estimators are popularly used as flexible and easy-to-implement nonparametric tools for mean regression estimation. The bagging technique is an elegant way to form WNN estimators with weights automatically generated for the nearest neighbors; we name the resulting estimator the distributional nearest neighbors (DNN) estimator for easy reference. Yet, this estimator lacks distributional results, which limits its application to statistical inference. Moreover, when the mean regression function has higher-order smoothness, DNN does not achieve the optimal nonparametric convergence rate, mainly because of the bias issue. In this work, we provide an in-depth technical analysis of DNN, based on which we suggest a bias reduction approach for the DNN estimator by linearly combining two DNN estimators with different subsampling scales, resulting in the novel two-scale DNN (TDNN) estimator. The two-scale DNN estimator has an equivalent representation as a WNN estimator, with weights admitting explicit forms and some being negative. We prove that, thanks to the use of negative weights, the two-scale DNN estimator enjoys the optimal nonparametric rate of convergence in estimating the regression function under a fourth-order smoothness condition. We further go beyond estimation and establish that both DNN and two-scale DNN are asymptotically normal as the subsampling scales and sample size diverge to infinity. For practical implementation, we also provide variance estimators and a distribution estimator for the two-scale DNN, using the jackknife and bootstrap techniques. These estimators can be exploited for constructing valid confidence intervals for nonparametric inference of the regression function. The theoretical results and appealing finite-sample performance of the suggested two-scale DNN method are illustrated with several numerical examples.
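The following sketch illustrates the construction under stated assumptions: the DNN weights are the usual bagged 1-nearest-neighbor weights $\binom{n-i}{s-1}/\binom{n}{s}$ on the ordered neighbors, and the two combination weights are solved numerically so as to cancel a leading bias term assumed to scale as $s^{-2/d}$; the paper's closed-form combination weights and regularity conditions are more precise.

```python
# A sketch of the DNN and two-scale DNN (TDNN) estimators at a point x.
import numpy as np
from scipy.special import comb

def dnn(X, y, x, s):
    n = len(y)
    order = np.argsort(np.linalg.norm(X - x, axis=1))
    i = np.arange(1, n + 1)
    w = comb(n - i, s - 1) / comb(n, s)          # weight on the i-th nearest neighbor
    return w @ y[order]

def tdnn(X, y, x, s1, s2):
    d = X.shape[1]
    # Solve w1 + w2 = 1 and w1*s1^(-2/d) + w2*s2^(-2/d) = 0, so the leading
    # bias terms of the two DNN estimators cancel (one weight is negative).
    A = np.array([[1.0, 1.0], [s1 ** (-2 / d), s2 ** (-2 / d)]])
    w1, w2 = np.linalg.solve(A, np.array([1.0, 0.0]))
    return w1 * dnn(X, y, x, s1) + w2 * dnn(X, y, x, s2)
```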
We study the convergence, in Hellinger and related distances, of nonparametric density estimators based on measure transport. These estimators represent the measure of interest as the pushforward of a chosen reference distribution under a transport map, where the map is chosen via a maximum likelihood objective (equivalently, minimizing the empirical Kullback-Leibler loss) or a penalized version thereof. We establish concentration inequalities for a general class of penalized measure-transport estimators by combining techniques from M-estimation with analytical properties of the transport-based representation of the density. We then demonstrate the implications of our theory for the case of triangular Knothe-Rosenblatt (KR) transports on the $d$-dimensional unit cube, and show that both penalized and unpenalized versions of such estimators achieve minimax optimal convergence rates over H\"older classes of densities. Specifically, we establish optimal rates for unpenalized nonparametric maximum likelihood estimation over bounded H\"older-type balls, and then for certain Sobolev-penalized estimators and sieved wavelet estimators.
We provide results that exactly quantify how data augmentation affects the convergence rate and variance of estimates. They lead to some unexpected findings: Contrary to common intuition, data augmentation may increase rather than decrease the uncertainty of estimates, such as the empirical prediction risk. Our main theoretical tool is a limit theorem for functions of randomly transformed, high-dimensional random vectors. The proof draws on work in probability on noise stability of functions of many variables. The pathological behavior we identify is not a consequence of complex models, but can occur even in the simplest settings -- one of our examples is a ridge regressor with two parameters. On the other hand, our results also show that data augmentation can have real, quantifiable benefits.
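A hedged Monte Carlo sketch of the kind of comparison discussed above: the variability of the empirical prediction risk of a two-parameter ridge regressor, with and without a simple noise-injection augmentation. The model, the augmentation, and all parameter values here are illustrative and are not the paper's exact example.

```python
# Compare the spread (across replications) of the empirical prediction risk
# of a two-parameter ridge regressor, with and without data augmentation.
import numpy as np

def risk_spread(augment, n=50, reps=2000, lam=1.0, noise=0.5, seed=0):
    rng = np.random.default_rng(seed)
    beta_star = np.array([1.0, -1.0])
    risks = []
    for _ in range(reps):
        X = rng.normal(size=(n, 2))
        y = X @ beta_star + rng.normal(size=n)
        Xa, ya = X, y
        if augment:                               # noise-injection augmentation
            Xa = np.vstack([X, X + noise * rng.normal(size=X.shape)])
            ya = np.concatenate([y, y])
        beta = np.linalg.solve(Xa.T @ Xa + lam * np.eye(2), Xa.T @ ya)
        risks.append(np.mean((y - X @ beta) ** 2))  # empirical prediction risk
    return np.std(risks)                          # variability across replications

print(risk_spread(augment=False), risk_spread(augment=True))
```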
Statistical divergences (SDs), which quantify the dissimilarity between probability distributions, are a basic constituent of statistical inference and machine learning. A modern method for estimating such divergences relies on parameterizing an empirical variational form by a neural network (NN) and optimizing over the parameter space. Such neural estimators are abundantly used in practice, but corresponding performance guarantees are partial and call for further exploration. In particular, there is a fundamental tradeoff between the two sources of error involved: approximation and empirical estimation. While the former needs the NN class to be rich and expressive, the latter relies on controlling complexity. We explore this tradeoff for estimators based on shallow NNs by means of non-asymptotic error bounds, focusing on four popular $\mathsf{f}$-divergences: Kullback-Leibler, chi-squared, squared Hellinger, and total variation. Our analysis relies on non-asymptotic function approximation theorems and tools from empirical process theory. The bounds reveal the tension between the NN size and the number of samples, and enable characterizing scaling rates thereof that ensure consistency. For compactly supported distributions, we further show that neural estimators of the first three divergences above, with appropriate NN growth rates, are near minimax rate-optimal, achieving the parametric rate up to logarithmic factors.
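As a concrete instance of the variational approach, the sketch below estimates the Kullback-Leibler divergence via its Donsker-Varadhan form $\mathrm{KL}(P\|Q) = \sup_f \mathbb{E}_P[f] - \log \mathbb{E}_Q[e^f]$, with the supremum taken over a one-hidden-layer (shallow) ReLU network; the width, optimizer, and iteration budget are illustrative choices.

```python
# A sketch of a shallow-NN neural estimator of KL(P||Q) from samples x_p, x_q
# (torch tensors of shape (n, d) and (m, d)) via the Donsker-Varadhan bound.
import math
import torch
import torch.nn as nn

def neural_kl(x_p, x_q, width=64, steps=500, lr=1e-3):
    f = nn.Sequential(nn.Linear(x_p.shape[1], width), nn.ReLU(), nn.Linear(width, 1))
    opt = torch.optim.Adam(f.parameters(), lr=lr)

    def dv_objective():
        # E_P[f] - log E_Q[exp f]: a lower bound on KL(P||Q) for any f.
        return (f(x_p).mean()
                - torch.logsumexp(f(x_q).squeeze(-1), dim=0) + math.log(len(x_q)))

    for _ in range(steps):
        loss = -dv_objective()                    # maximize the bound
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return dv_objective().item()
```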
We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and the minimum value $f^*$ of a smooth and strongly convex regression function $f$ from the observations contaminated by random noise. Our estimator $\boldsymbol{z}_n$ of the minimizer $\boldsymbol{x}^*$ is based on a version of the projected gradient descent with the gradient estimated by a regularized local polynomial algorithm. Next, we propose a two-stage procedure for estimation of the minimum value $f^*$ of regression function $f$. At the first stage, we construct an accurate enough estimator of $\boldsymbol{x}^*$, which can be, for example, $\boldsymbol{z}_n$. At the second stage, we estimate the function value at the point obtained in the first stage using a rate optimal nonparametric procedure. We derive non-asymptotic upper bounds for the quadratic risk and optimization error of $\boldsymbol{z}_n$, and for the risk of estimating $f^*$. We establish minimax lower bounds showing that, under certain choice of parameters, the proposed algorithms achieve the minimax optimal rates of convergence on the class of smooth and strongly convex functions.
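A schematic version of the first stage is given below: projected gradient descent in which each gradient is estimated from noisy function evaluations by a local linear fit. The paper's estimator is a regularized local polynomial algorithm with a specific query scheme; the bandwidth, step size, and Euclidean-ball projection here are illustrative simplifications.

```python
# Projected gradient descent with the gradient estimated by a local linear
# fit to noisy evaluations of f around the current iterate. `query(x)` is
# assumed to return a noisy observation of f(x).
import numpy as np

def estimate_gradient(query, x, h, m, rng):
    U = rng.uniform(-h, h, size=(m, len(x)))     # local design around the iterate
    y = np.array([query(x + u) for u in U])      # noisy function evaluations
    A = np.column_stack([np.ones(m), U])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None) # local linear fit
    return coef[1:]                              # fitted slope = gradient estimate

def projected_gd(query, x0, radius, h=0.1, eta=0.1, iters=100, m=100, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - eta * estimate_gradient(query, x, h, m, rng)
        nrm = np.linalg.norm(x)
        if nrm > radius:                          # project onto {||x|| <= radius}
            x = radius * x / nrm
    return x                                      # estimate z_n of the minimizer x*
```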
We propose new adaptive hypothesis tests for polyhedral cone (e.g., monotonicity, convexity) and equality (e.g., parametric, semiparametric) restrictions on a structural function in a nonparametric instrumental variables (NPIV) model. Our test statistic is based on a modified leave-one-out sample analog of a quadratic distance between the restricted and unrestricted sieve estimators. We provide computationally simple, data-driven choices of sieve tuning parameters and adjusted chi-squared critical values. Our test adapts to the unknown smoothness of alternative functions in the presence of an unknown degree of endogeneity and unknown strength of the instruments. It attains the adaptive minimax rate of testing in $L^2$. That is, the sum of its type I error uniformly over the composite null and its type II error uniformly over nonparametric alternative models cannot be improved by any other hypothesis test for NPIV models of unknown regularities. Data-driven confidence sets in $L^2$ are obtained by inverting the adaptive test. Simulations confirm that our adaptive test controls size and that its finite-sample power greatly exceeds that of existing non-adaptive tests for monotonicity and parametric restrictions in NPIV models. Empirical applications to testing for shape restrictions of differentiated products demand and of Engel curves are presented.
We study uniform consistency in nonparametric mixture models, as well as closely related mixture of regression (also known as mixed regression) models, where the regression functions are allowed to be nonparametric and the error distributions are assumed to be convolutions of a Gaussian density. We construct uniformly consistent estimators under general conditions, while simultaneously highlighting several pain points in extending existing pointwise consistency results to uniform results. The resulting analysis turns out to be nontrivial, and several novel technical tools are developed along the way. In the case of mixed regression, we prove $L^1$ convergence of the regression functions while allowing the component regression functions to intersect arbitrarily often, which presents additional technical challenges. We also consider generalizations to general (i.e., non-convolutional) nonparametric mixtures.
We study a multivariate version of trend filtering, called Kronecker trend filtering or KTF, for the case in which the design points form a lattice in $d$ dimensions. KTF is a natural extension of univariate trend filtering (Steidl et al., 2006; Kim et al., 2009; Tibshirani, 2014), and is defined by minimizing a penalized least squares problem whose penalty term sums the absolute (higher-order) differences of the parameter to be estimated along each of the coordinate directions. The corresponding penalty operator can be written in terms of Kronecker products of univariate trend filtering penalty operators, hence the name Kronecker trend filtering. Equivalently, one can view KTF as an $\ell_1$-penalized basis regression problem in which the basis functions are tensor products of falling factorial functions, a piecewise polynomial (discrete spline) basis that underlies univariate trend filtering. This paper unifies and extends the results of Sadhanala et al. (2016, 2017). We develop a complete set of theoretical results that describe the behavior of $k^{\mathrm{th}}$ order Kronecker trend filtering for $k \geq 0$ and $d \geq 1$. This reveals a number of interesting phenomena, including the dominance of KTF over linear smoothers in estimating heterogeneously smooth functions, and a phase transition at $d = 2(k+1)$, a boundary past which (on the high dimension-to-smoothness side) linear smoothers fail to be consistent entirely. We also leverage recent results on discrete splines from Tibshirani (2020), in particular discrete spline interpolation results, which enable us to extend the KTF estimate to any off-lattice location in constant time (independent of the size of the lattice $n$).
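The penalized least squares problem is easy to write down explicitly for $d = 2$; the sketch below builds the $(k+1)$-st order univariate difference operator and applies it along each coordinate direction via Kronecker products, solving the resulting problem with a generic convex solver (an illustrative choice, not an efficient one).

```python
# A sketch of Kronecker trend filtering on an N x N lattice for d = 2 and
# order k, with responses Y of shape (N, N). Requires cvxpy.
import numpy as np
import cvxpy as cp

def diff_operator(N, k):
    D = np.eye(N)
    for _ in range(k + 1):
        D = np.diff(D, axis=0)                   # (k+1)-st order differences
    return D

def ktf_2d(Y, k=0, lam=1.0):
    N = Y.shape[0]
    D = diff_operator(N, k)
    I = np.eye(N)
    theta = cp.Variable(N * N)
    # Sum of absolute differences along each coordinate direction.
    penalty = cp.norm1(np.kron(D, I) @ theta) + cp.norm1(np.kron(I, D) @ theta)
    obj = 0.5 * cp.sum_squares(Y.flatten() - theta) + lam * penalty
    cp.Problem(cp.Minimize(obj)).solve()
    return theta.value.reshape(N, N)
```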
This paper investigates the stability of deep ReLU neural networks for nonparametric regression under the assumption that the noise has only a finite p-th moment. We unveil how the optimal rate of convergence depends on p, the degree of smoothness and the intrinsic dimension in a class of nonparametric regression functions with hierarchical composition structure when both the adaptive Huber loss and deep ReLU neural networks are used. This optimal rate of convergence cannot be obtained by ordinary least squares but can be achieved by the Huber loss with a properly chosen parameter that adapts to the sample size, smoothness, and moment parameters. A concentration inequality for the adaptive Huber ReLU neural network estimators with allowable optimization errors is also derived. To establish a matching lower bound within the class of neural network estimators using the Huber loss, we employ a different strategy from the traditional route: constructing a deep ReLU network estimator that has a better empirical loss than the true function, where the difference between these two functions furnishes a lower bound. This step is related to the Huberization bias, yet more critically to the approximability of deep ReLU networks. As a result, we also contribute some new results on the approximation theory of deep ReLU neural networks.
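As an illustration of the loss under study, the sketch below fits a deep ReLU network with a Huber loss whose robustification parameter grows with the sample size; the $n^{1/4}$ scaling is an illustrative placeholder, whereas the paper's calibration adapts to the sample size, smoothness, and moment parameters.

```python
# Nonparametric regression with a deep ReLU network under an adaptive Huber
# loss, with X, y torch tensors of shape (n, d) and (n,).
import torch
import torch.nn as nn

def fit_huber_relu(X, y, width=64, depth=3, epochs=200, lr=1e-3, c=1.0):
    n, d = X.shape
    layers = [nn.Linear(d, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 1))
    net = nn.Sequential(*layers)
    # Robustification parameter grows with n (illustrative scaling).
    loss_fn = nn.HuberLoss(delta=c * n ** 0.25)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        loss = loss_fn(net(X).squeeze(-1), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return net
```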
In this article, we aim to provide a general and complete understanding of semi-supervised (SS) causal inference for treatment effects. Specifically, we consider two such estimands: (a) the average treatment effect and (b) the quantile treatment effect, as prototype cases, in an SS setting characterized by two available data sets: (i) a labeled data set of size $n$, providing observations of a response, a set of high-dimensional covariates, and a binary treatment indicator; and (ii) an unlabeled data set of size much larger than $n$, for which the response is not observed. Using these two data sets, we develop a family of SS estimators that are: (1) more robust, and (2) more efficient than their supervised counterparts based on the labeled data set only. Beyond the 'standard' double robustness results (in terms of consistency) that can also be achieved by supervised methods, we further establish root-$n$ consistency and asymptotic normality of our SS estimators whenever the propensity score in the model is correctly specified, without requiring specific forms of the nuisance functions involved. Such an improvement in robustness arises from the use of the massive unlabeled data, and it is therefore generally not attainable in a purely supervised setting. In addition, our estimators are shown to be semiparametrically efficient as long as all the nuisance functions are correctly specified. Moreover, as an illustration of the nuisance estimators, we consider inverse-probability-weighting type kernel smoothing estimators involving unknown covariate transformation mechanisms, and establish their uniform convergence rates in high-dimensional scenarios, which are novel and should be of independent interest. Numerical results on both simulated and real data validate the advantage of our methods over their supervised counterparts with respect to robustness and efficiency.
We study stochastic approximation procedures for approximately solving a $d$-dimensional linear fixed point equation based on observing a trajectory of length $n$ from an ergodic Markov chain. We first exhibit a non-asymptotic bound of the order $t_{\mathrm{mix}} \tfrac{d}{n}$ on the squared error of the last iterate of a standard scheme, where $t_{\mathrm{mix}}$ is a mixing time. We then prove a non-asymptotic instance-dependent bound on a suitably averaged sequence of iterates, with a leading term that matches the local asymptotic minimax limit, including sharp dependence on the parameters $(d, t_{\mathrm{mix}})$ in the higher-order terms. We complement these upper bounds with a non-asymptotic minimax lower bound that establishes the instance-optimality of the averaged SA estimator. We derive corollaries of these results for policy evaluation with Markov noise (covering the TD($\lambda$) family of algorithms for all $\lambda \in [0,1)$) and for linear autoregressive models. Our instance-dependent characterizations open the door to the design of fine-grained model selection procedures for hyperparameter tuning (e.g., choosing the value of $\lambda$ when running the TD($\lambda$) algorithm).
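A minimal sketch of the averaged stochastic approximation estimator for a linear fixed point $\theta^* = \bar{A}\theta^* + \bar{b}$ is given below, with Polyak-Ruppert averaging of the iterates; how the pairs $(A_t, b_t)$ are read off the Markov trajectory is problem-specific and is abstracted into an iterator here, and the constant step size is an illustrative choice.

```python
# Averaged SA for a linear fixed point theta = A theta + b, where
# `observations` yields noisy pairs (A_t, b_t) along the Markov trajectory.
import numpy as np

def averaged_sa(observations, d, eta=0.1):
    theta = np.zeros(d)
    avg = np.zeros(d)
    for t, (A_t, b_t) in enumerate(observations, start=1):
        theta = theta + eta * (A_t @ theta + b_t - theta)   # SA update
        avg += (theta - avg) / t                            # running average of iterates
    return avg                                              # averaged SA estimate
```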
We study non-parametric estimation of the value function of an infinite-horizon $\gamma$-discounted Markov reward process (MRP) using observations from a single trajectory. We provide non-asymptotic guarantees for a general family of kernel-based multi-step temporal difference (TD) estimates, including canonical $K$-step look-ahead TD for $K = 1, 2, \ldots$ and the TD$(\lambda)$ family for $\lambda \in [0,1)$ as special cases. Our bounds capture its dependence on Bellman fluctuations, mixing time of the Markov chain, any mis-specification in the model, as well as the choice of weight function defining the estimator itself, and reveal some delicate interactions between mixing time and model mis-specification. For a given TD method applied to a well-specified model, its statistical error under trajectory data is similar to that of i.i.d. sample transition pairs, whereas under mis-specification, temporal dependence in data inflates the statistical error. However, any such deterioration can be mitigated by increased look-ahead. We complement our upper bounds by proving minimax lower bounds that establish optimality of TD-based methods with appropriately chosen look-ahead and weighting, and reveal some fundamental differences between value function estimation and ordinary non-parametric regression.
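As a tabular simplification of the $K$-step look-ahead idea (the paper's estimators are kernel-based and non-parametric), the sketch below updates a value table using $K$-step returns computed along a single trajectory; the constant step size is an illustrative choice.

```python
# Tabular K-step look-ahead TD for value estimation in a discounted MRP from
# a single trajectory, given integer-coded `states` and observed `rewards`.
import numpy as np

def k_step_td(states, rewards, n_states, gamma, K, alpha=0.05):
    V = np.zeros(n_states)
    for t in range(len(states) - K):
        # K-step return: discounted rewards plus a bootstrapped tail value.
        G = sum(gamma ** i * rewards[t + i] for i in range(K))
        G += gamma ** K * V[states[t + K]]
        V[states[t]] += alpha * (G - V[states[t]])
    return V
```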