We consider the problem of estimating the factors of a rank-one matrix from i.i.d. Gaussian, rank-one measurements that are nonlinearly transformed and corrupted by noise. Considering two prototypical choices for the nonlinearity, we study the convergence properties of a natural alternating update rule for this non-convex optimization problem starting from a random initialization. We show sharp convergence guarantees for a sample-split version of the algorithm by deriving a deterministic recursion that is accurate even in high-dimensional problems. Notably, while the infinite-sample population update is uninformative and suggests exact recovery in a single step, the algorithm, and our deterministic prediction of it, converges geometrically fast from a random initialization. Our sharp, non-asymptotic analysis also exposes several other fine-grained properties of this problem, including how the nonlinearity and the noise level affect the convergence behavior. On a technical level, our results are enabled by showing that the empirical error sequence is predicted by our deterministic recursion up to fluctuations of order $n^{-1/2}$ when each iteration is run with $n$ observations. Our techniques leverage leave-one-out tools originating in the literature on high-dimensional $M$-estimation, and provide an avenue for the sharp analysis of higher-order iterative algorithms from a random initialization in other high-dimensional optimization problems with random data.
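As a rough illustration of the sample-split alternating updates analyzed above, the following sketch runs alternating least squares on rank-one Gaussian measurements. It uses a plain linear measurement model and assumed problem sizes rather than either of the paper's specific nonlinearities; each half-step consumes a fresh batch of $n$ observations, and the overlap with the planted factor grows across iterations.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, n = 50, 10, 2000                       # dimension, iterations, fresh samples per half-step
mu_star = rng.standard_normal(d); mu_star /= np.linalg.norm(mu_star)
nu_star = rng.standard_normal(d); nu_star /= np.linalg.norm(nu_star)

def fresh_batch(n):
    # rank-one Gaussian measurements X_i = a_i b_i^T, noisy linear observations
    A = rng.standard_normal((n, d))
    B = rng.standard_normal((n, d))
    y = (A @ mu_star) * (B @ nu_star) + 0.1 * rng.standard_normal(n)
    return A, B, y

mu = rng.standard_normal(d); mu /= np.linalg.norm(mu)     # random initialization
for t in range(T):
    A, B, y = fresh_batch(n)                 # sample splitting: a fresh batch per half-step
    Z = B * (A @ mu)[:, None]                # fix mu, solve least squares in nu
    nu = np.linalg.lstsq(Z, y, rcond=None)[0]
    A, B, y = fresh_batch(n)
    Z = A * (B @ nu)[:, None]                # fix nu, solve least squares in mu
    mu = np.linalg.lstsq(Z, y, rcond=None)[0]
    mu /= np.linalg.norm(mu)
    print(t, abs(mu @ mu_star))              # overlap with the planted factor
```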
Approximate message passing (AMP) is an effective iterative paradigm for solving high-dimensional statistical problems. However, prior AMP theory, which focused mostly on high-dimensional asymptotics, falls short of predicting the AMP dynamics when the number of iterations exceeds $o\big(\frac{\log n}{\log\log n}\big)$ (with $n$ the problem dimension). To address this inadequacy, this paper develops a non-asymptotic framework for understanding AMP in spiked matrix estimation. Built upon a new decomposition of the AMP updates and controllable residual terms, we lay out an analysis recipe to characterize the finite-sample behavior of AMP in the presence of an independent initialization, which is further generalized to allow for spectral initializations. As two concrete consequences of the proposed analysis recipe: (i) when solving $\mathbb{Z}_2$ synchronization, we predict the behavior of spectrally initialized AMP for up to $O\big(\frac{n}{\mathrm{poly}\log n}\big)$ iterations, showing that the algorithm succeeds without the need of a subsequent refinement stage (as conjectured recently by \citet{celentano2021local}); (ii) we characterize the non-asymptotic behavior of AMP in sparse PCA (in the spiked Wigner model) for a broad range of signal-to-noise ratios.
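A minimal sketch of AMP for $\mathbb{Z}_2$ synchronization in the spiked Wigner model, assuming a plain $\tanh$ denoiser in place of an exactly tuned posterior-mean denoiser and an assumed SNR; the Onsager correction and the spectral initialization follow the standard recipe discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, T = 2000, 1.5, 30                     # dimension, assumed SNR, AMP iterations
x = rng.choice([-1.0, 1.0], size=n)           # planted Z2 signal
W = rng.standard_normal((n, n)) / np.sqrt(n)
W = (W + W.T) / np.sqrt(2)                    # symmetric (GOE-like) noise
Y = (lam / n) * np.outer(x, x) + W            # spiked Wigner observation

eta = np.tanh                                 # simplified denoiser for +/-1 signs
z = np.linalg.eigh(Y)[1][:, -1] * np.sqrt(n)  # spectral initialization
z_prev = np.zeros(n)
for t in range(T):
    b = np.mean(1.0 - eta(z) ** 2)            # Onsager correction term
    z, z_prev = Y @ eta(z) - b * eta(z_prev), z
    print(t, round(abs(np.mean(eta(z) * x)), 3))   # overlap with the planted signs
```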
We study stochastic approximation procedures for approximately solving a $d$-dimensional linear fixed point equation based on observing a trajectory of length $n$ from an ergodic Markov chain. We first exhibit a non-asymptotic bound of order $t_{\mathrm{mix}} \tfrac{d}{n}$ on the squared error of the last iterate of a standard scheme, where $t_{\mathrm{mix}}$ is a mixing time. We then prove a non-asymptotic instance-dependent bound on a suitably averaged sequence of iterates, with a leading term that matches the local asymptotic minimax limit, including sharp dependence on the parameters $(d, t_{\mathrm{mix}})$ in the higher-order terms. We complement these upper bounds with a non-asymptotic minimax lower bound that establishes the instance-optimality of the averaged SA estimator. We derive corollaries of these results for policy evaluation with Markov noise (covering the TD($\lambda$) family of algorithms for all $\lambda \in [0,1)$) and for linear autoregressive models. Our instance-dependent characterizations open the door to the design of fine-grained model selection procedures for hyperparameter tuning (e.g., choosing the value of $\lambda$ when running the TD($\lambda$) algorithm).
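A minimal sketch of the averaged stochastic approximation scheme in the special case of TD(0), run on an assumed randomly generated Markov reward process; the trajectory drives the linear SA updates, and a running Polyak-Ruppert average produces the estimator whose leading-order error the theory above characterizes.

```python
import numpy as np

rng = np.random.default_rng(2)
S, d, n, gamma, step = 20, 5, 20_000, 0.9, 0.05
P = rng.dirichlet(np.ones(S), size=S)            # ergodic transition kernel
r = rng.standard_normal(S)                       # state rewards
Phi = rng.standard_normal((S, d))                # linear feature map

theta = np.zeros(d)
theta_bar = np.zeros(d)
s = 0
for k in range(1, n + 1):
    s_next = rng.choice(S, p=P[s])
    # TD(0): one linear stochastic-approximation step driven by the Markov trajectory
    td_err = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    theta += step * td_err * Phi[s]
    theta_bar += (theta - theta_bar) / k         # running Polyak-Ruppert average
    s = s_next
print(theta_bar)                                 # averaged iterate: instance-optimal leading term
```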
Autoencoders are a popular model in many branches of machine learning and lossy data compression. However, their fundamental limits, the performance of gradient methods and the features learnt during optimization remain poorly understood, even in the two-layer setting. In fact, earlier work has considered either linear autoencoders or specific training regimes (leading to vanishing or diverging compression rates). Our paper addresses this gap by focusing on non-linear two-layer autoencoders trained in the challenging proportional regime in which the input dimension scales linearly with the size of the representation. Our results characterize the minimizers of the population risk, and show that such minimizers are achieved by gradient methods; their structure is also unveiled, thus leading to a concise description of the features obtained via training. For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders. Finally, while the results are proved for Gaussian data, numerical simulations on standard datasets display the universality of the theoretical predictions.
Modern neural networks are often operated in a strongly overparametrized regime: they comprise so many parameters that they can interpolate the training set, even if the actual labels are replaced by purely random ones. Despite this, they achieve good prediction error on unseen data: interpolating the training set does not lead to a large generalization error. Moreover, overparametrization appears to be beneficial in that it simplifies the optimization landscape. Here we study these phenomena in the context of two-layer neural networks in the neural tangent (NT) regime. We consider a simple data model, with isotropic covariates in $d$ dimensions and $N$ hidden neurons. We assume that both the sample size $n$ and the dimension $d$ are large, and that they are polynomially related. Our first main result is a characterization of the eigenstructure of the empirical NT kernel in the overparametrized regime. This characterization implies, as a corollary, that the minimum eigenvalue of the empirical NT kernel is bounded away from zero as soon as $Nd \gg n$, and therefore the network can exactly interpolate arbitrary labels in the same regime. Our second main result is a characterization of the generalization error of NT ridge regression, including, as a special case, min-$\ell_2$-norm interpolation. We prove that, as soon as $Nd \gg n$, the test error is well approximated by that of kernel ridge regression with respect to the infinite-width kernel. The latter is in turn well approximated by the error of polynomial ridge regression, whereby the regularization parameter is increased by a "self-induced" term related to the high-degree components of the activation function. The polynomial degree depends on the sample size and the dimension (in particular on $\log n/\log d$).
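A small sketch of the objects above under assumed toy sizes: it forms the empirical NT kernel of a two-layer ReLU network (first-layer gradients only), checks that its minimum eigenvalue is bounded away from zero, and runs NT kernel ridge regression.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, N, lam = 200, 30, 400, 1e-3               # samples, input dim, hidden neurons, ridge
X = rng.standard_normal((n, d)) / np.sqrt(d)    # roughly isotropic, unit-norm covariates
y = np.sign(X[:, 0]) + 0.1 * rng.standard_normal(n)   # toy labels
W = rng.standard_normal((N, d))                 # random first-layer weights

def nt_kernel(A, B):
    # empirical NT kernel of a two-layer ReLU net, first-layer gradients only:
    # K(x, x') = <x, x'> * (1/N) * sum_j 1{w_j.x > 0} 1{w_j.x' > 0}
    act_A = (A @ W.T > 0).astype(float)
    act_B = (B @ W.T > 0).astype(float)
    return (A @ B.T) * (act_A @ act_B.T) / N

K = nt_kernel(X, X)
print("min eigenvalue of empirical NT kernel:", np.linalg.eigvalsh(K)[0])
alpha = np.linalg.solve(K + lam * np.eye(n), y)        # NT kernel ridge regression
X_test = rng.standard_normal((50, d)) / np.sqrt(d)
y_hat = nt_kernel(X_test, X) @ alpha                   # test predictions
```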
We study the problem of policy evaluation with linear function approximation, and present efficient and practical algorithms that come with strong optimality guarantees. We begin by proving lower bounds that establish baselines on both the deterministic error and the stochastic error in this problem. In particular, we prove an oracle complexity lower bound on the deterministic error in an instance-dependent norm associated with the stationary distribution of the transition kernel, and use the local asymptotic minimax machinery to prove an instance-dependent lower bound on the stochastic error in the i.i.d. observation model. Existing algorithms fail to match at least one of these lower bounds: to illustrate, we analyze a variance-reduced variant of temporal difference learning, showing in particular that it fails to achieve the oracle complexity lower bound. To remedy this issue, we develop an accelerated, variance-reduced fast temporal difference algorithm (VRFTD) that simultaneously matches both lower bounds and attains a strong notion of instance-optimality. Finally, we extend the VRFTD algorithm to the setting with Markovian observations, and provide instance-dependent convergence results that match those in the i.i.d. setting up to a multiplicative factor proportional to the mixing time of the chain. Our theoretical guarantees of optimality are corroborated by numerical experiments.
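The exact VRFTD algorithm combines acceleration with variance reduction; as a hedged stand-in, the sketch below implements a plain SVRG-style variance-reduced TD update on synthetic i.i.d. transitions, illustrating the anchor-plus-correction structure that such methods share.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n, gamma, step, epochs = 8, 20_000, 0.9, 0.05, 10
# pre-collected i.i.d. transitions (phi, r, phi_next) from a synthetic generator
Phi = rng.standard_normal((n, d))
Phi_next = 0.5 * Phi + 0.5 * rng.standard_normal((n, d))
R = rng.standard_normal(n)

def g(i, th):
    # negative TD update direction for transition i
    return Phi[i] * (Phi[i] @ th - gamma * (Phi_next[i] @ th) - R[i])

theta_tilde = np.zeros(d)
for _ in range(epochs):
    # anchor: full-batch TD direction at theta_tilde (the variance-reduction pivot)
    mu = Phi.T @ (Phi @ theta_tilde - gamma * (Phi_next @ theta_tilde) - R) / n
    theta = theta_tilde.copy()
    for _ in range(n):
        i = rng.integers(n)
        theta -= step * (g(i, theta) - g(i, theta_tilde) + mu)   # SVRG-corrected step
    theta_tilde = theta
print(theta_tilde)
```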
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space. Focusing on a stochastic query model that provides noisy evaluations of the operator, we analyze a variance-reduced stochastic approximation scheme, and establish non-asymptotic bounds for both the operator defect and the estimation error, measured in an arbitrary semi-norm. In contrast to worst-case guarantees, our bounds are instance-dependent, and achieve the local asymptotic minimax risk non-asymptotically. For linear operators, contractivity can be relaxed to multi-step contractivity, so that the theory can be applied to problems such as average-reward policy evaluation in reinforcement learning. We illustrate the theory via applications to stochastic shortest path problems, two-player zero-sum Markov games, as well as policy evaluation and $Q$-learning for tabular Markov decision processes.
The Lasso is a method of high-dimensional regression that is routinely used when the number of covariates $p$ is of the same order as, or larger than, the number of observations $n$. Classical asymptotic normality theory does not apply to this model for two fundamental reasons: $(1)$ the regularized risk is non-smooth; $(2)$ the distance between the estimator $\widehat{\boldsymbol{\theta}}$ and the true parameter vector $\boldsymbol{\theta}^*$ cannot be neglected. As a consequence, the standard perturbative arguments that are the traditional basis for asymptotic normality fail. On the other hand, the Lasso estimator can be precisely characterized in the regime in which both $n$ and $p$ are large and $n/p$ is of order one. This characterization was first obtained in the case of Gaussian designs with i.i.d. covariates: here we generalize it to Gaussian correlated designs with non-singular covariance structure. This characterization is expressed in terms of a simpler ``fixed design'' model. We establish non-asymptotic bounds on the distance between the distributions of various quantities in the two models, which hold uniformly over signals $\boldsymbol{\theta}^*$ in a suitable sparsity class. As an application, we study the distribution of the debiased Lasso and show that a degrees-of-freedom correction is necessary for computing valid confidence intervals.
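A hedged sketch of the debiasing step under a correlated Gaussian design, with an assumed AR(1)-style covariance and the oracle precision matrix used purely for illustration; the degrees-of-freedom correction appears as the $n/(n - \widehat{\mathrm{df}})$ factor inflating the usual debiasing step.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n, p, s, sigma = 400, 500, 10, 0.5
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # correlated design
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
theta_star = np.zeros(p); theta_star[:s] = 1.0
y = X @ theta_star + sigma * rng.standard_normal(n)

lam = 2 * sigma * np.sqrt(np.log(p) / n)
theta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_

# debiased Lasso with a simple degrees-of-freedom proxy (df = size of the active set);
# the oracle precision matrix Sigma^{-1} stands in for an estimated one
df = np.count_nonzero(theta_hat)
theta_d = theta_hat + (n / (n - df)) * np.linalg.inv(Sigma) @ X.T @ (y - X @ theta_hat) / n
# theta_d is approximately Gaussian around theta_star coordinate-wise, enabling CIs
```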
We investigate clustering problems in which the data come from a mixture of Gaussians that share a common but unknown, and potentially ill-conditioned, covariance matrix. We first consider Gaussian mixtures with two equally-sized components and derive a Max-Cut integer program based on maximum likelihood estimation. We prove that its solutions achieve the optimal misclassification rate when the number of samples grows linearly in the dimension, up to a logarithmic factor. However, solving the Max-Cut problem appears to be computationally intractable. To overcome this, we develop an efficient spectral algorithm that attains the optimal rate but requires a quadratic sample size. Although this sample complexity is worse than that of the Max-Cut problem, we conjecture that no polynomial-time method can perform better. Furthermore, we gather numerical and theoretical evidence supporting the existence of a statistical-computational gap. Finally, we generalize the Max-Cut program to a $k$-means program that handles multi-component mixtures with possibly unequal weights. It enjoys similar optimality guarantees for mixtures of distributions that satisfy a transportation-cost inequality, encompassing Gaussian and strongly log-concave distributions.
Meta-learning, or learning-to-learn, seeks to design algorithms that can utilize previous experience to rapidly learn new skills or adapt to new environments. Representation learning, a key tool for performing meta-learning, learns a data representation that can transfer knowledge across multiple tasks, which is essential in regimes where data is scarce. Despite a recent surge of interest in the practice of meta-learning, the theoretical underpinnings of meta-learning algorithms are lacking, especially in the context of learning transferable representations. In this paper, we focus on the problem of multi-task linear regression, in which multiple linear regression models share a common, low-dimensional linear representation. Here, we provide provably fast, sample-efficient algorithms that address the dual challenges of (1) learning a common set of features from multiple, related tasks and (2) transferring this knowledge to new, unseen tasks. Both are central to the general problem of meta-learning. Finally, we complement these results with information-theoretic lower bounds on the sample complexity of learning these linear features.
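One natural moment-based estimator for the shared representation, sketched under assumed toy dimensions: the top-$r$ eigenvectors of $\frac{1}{n}\sum_i y_i^2 x_i x_i^\top$ estimate the column span of the common linear representation, after which a new task is solved by least squares in the learned $r$-dimensional feature space. This is a sketch of the general approach, not necessarily the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(6)
d, r, n_tasks, n_per = 50, 3, 40, 100
B = np.linalg.qr(rng.standard_normal((d, r)))[0]      # shared low-dim representation
X, y = [], []
for _ in range(n_tasks):
    w = rng.standard_normal(r)                        # task-specific weights
    Xt = rng.standard_normal((n_per, d))
    y.append(Xt @ (B @ w) + 0.1 * rng.standard_normal(n_per))
    X.append(Xt)
X, y = np.vstack(X), np.concatenate(y)

# moment estimator: top-r eigenvectors of (1/n) sum_i y_i^2 x_i x_i^T recover col(B)
M = (X * y[:, None] ** 2).T @ X / len(y)
B_hat = np.linalg.eigh(M)[1][:, -r:]

# transfer to a new, unseen task: least squares in the learned r-dim feature space
w_new = rng.standard_normal(r)
X_new = rng.standard_normal((200, d))
y_new = X_new @ (B @ w_new) + 0.1 * rng.standard_normal(200)
coef = np.linalg.lstsq(X_new @ B_hat, y_new, rcond=None)[0]
print(np.linalg.norm(B_hat @ coef - B @ w_new))       # small if col(B_hat) ~ col(B)
```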
In the negative perceptron problem we are given $n$ data points $({\boldsymbol x}_i, y_i)$, where ${\boldsymbol x}_i$ is a $d$-dimensional vector and $y_i \in \{+1,-1\}$ is a binary label. The data are not linearly separable and hence we content ourselves with finding a linear classifier with the largest possible \emph{negative} margin. In other words, we want to find a unit-norm vector ${\boldsymbol \theta}$ that maximizes $\min_{i \le n} y_i \langle {\boldsymbol \theta}, {\boldsymbol x}_i \rangle$. This is a non-convex optimization problem (it is equivalent to finding a maximum-norm vector in a polytope), and we study its typical properties under two random models for the data. We consider the proportional asymptotics in which $n, d \to \infty$ with $n/d \to \delta$, and prove upper and lower bounds on the maximum margin $\kappa_{\text{s}}(\delta)$ or, equivalently, on its inverse function $\delta_{\text{s}}(\kappa)$. In other words, $\delta_{\text{s}}(\kappa)$ is the overparametrization threshold: for $n/d \le \delta_{\text{s}}(\kappa) - \varepsilon$ a classifier achieving vanishing training error exists with high probability, while for $n/d \ge \delta_{\text{s}}(\kappa) + \varepsilon$ it does not. Our bounds on $\delta_{\text{s}}(\kappa)$ match to the leading order as $\kappa \to -\infty$. We then analyze a linear programming algorithm to find a solution, and characterize the corresponding threshold $\delta_{\text{lin}}(\kappa)$. We observe a gap between the interpolation threshold $\delta_{\text{s}}(\kappa)$ and the linear programming threshold $\delta_{\text{lin}}(\kappa)$, raising the question of the behavior of other algorithms.
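A hedged stand-in for the linear-programming approach mentioned above (the exact LP analyzed in the paper may differ): the sketch maximizes the margin subject to a box constraint that replaces the non-convex unit-sphere constraint, then projects back to the sphere.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
n, d = 300, 100                                   # proportional regime, n/d = 3
X = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n)               # labels independent of X: not separable

# LP: maximize kappa s.t. y_i <theta, x_i> >= kappa, ||theta||_inf <= 1/sqrt(d);
# decision variables are (theta, kappa), and linprog minimizes, so we use -kappa
c = np.zeros(d + 1); c[-1] = -1.0
A_ub = np.hstack([-y[:, None] * X, np.ones((n, 1))])   # kappa - y_i x_i^T theta <= 0
b_ub = np.zeros(n)
bounds = [(-1 / np.sqrt(d), 1 / np.sqrt(d))] * d + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
theta = res.x[:d] / np.linalg.norm(res.x[:d])     # project back to the unit sphere
print("negative margin achieved:", np.min(y * (X @ theta)))
```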
The matrix normal model, the family of Gaussian matrix-variate distributions whose covariance matrix is the Kronecker product of two lower-dimensional factors, is frequently used to model matrix-variate data. The tensor normal model generalizes this family to Kronecker products of three or more factors. We study the estimation of the Kronecker factors of the covariance matrix in the matrix and tensor normal models. We show non-asymptotic bounds for the error achieved by the maximum likelihood estimator (MLE) in several natural metrics. In contrast to existing bounds, our results do not rely on the factors being well-conditioned or sparse. For the matrix normal model, all our bounds are minimax optimal up to logarithmic factors, and for the tensor normal model our bounds for the largest factor and for the overall covariance matrix are minimax optimal up to logarithmic factors, provided there are enough samples for any estimator to obtain constant Frobenius error. In the same regimes as our sample complexity bounds, we show that an iterative procedure for computing the MLE, known as the flip-flop algorithm, converges linearly with high probability. Our main tool is geodesic strong convexity in the geometry on positive-definite matrices induced by the Fisher information metric. This strong convexity is determined by the expansion of certain random quantum channels. We also provide numerical evidence that combining the flip-flop algorithm with a simple shrinkage estimator can improve performance in the undersampled regime.
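A minimal sketch of the flip-flop iteration for the matrix normal MLE, on synthetic data with assumed factor sizes; each half-step whitens by the current estimate of the other Kronecker factor, and the factors are compared only up to the inherent scale ambiguity.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(8)
p, q, n, iters = 10, 15, 50, 20
A_true = 0.3 + np.eye(p)                     # row covariance factor (p x p)
B_true = np.diag(np.linspace(1.0, 3.0, q))   # column covariance factor (q x q)
As, Bs = np.real(sqrtm(A_true)), np.real(sqrtm(B_true))
X = np.array([As @ rng.standard_normal((p, q)) @ Bs for _ in range(n)])

# flip-flop iteration: alternately update each Kronecker factor given the other,
# using the current inverse as a whitening matrix
A_hat, B_hat = np.eye(p), np.eye(q)
for _ in range(iters):
    B_inv = np.linalg.inv(B_hat)
    A_hat = sum(Xi @ B_inv @ Xi.T for Xi in X) / (n * q)
    A_inv = np.linalg.inv(A_hat)
    B_hat = sum(Xi.T @ A_inv @ Xi for Xi in X) / (n * p)

# Kronecker factors are identified only up to scale; compare trace-normalized factors
print(np.linalg.norm(A_hat / np.trace(A_hat) - A_true / np.trace(A_true)))
```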
Testing the significance of a variable or group of variables $X$ for predicting a response $Y$, given additional covariates $Z$, is a ubiquitous task in statistics. A simple but common approach is to specify a linear model, and then test whether the regression coefficient for $X$ is non-zero. However, when the model is misspecified, the test may have poor power, for example when $X$ is involved in complex interactions, or lead to many false rejections. In this work we study the problem of testing the model-free null of conditional mean independence, i.e. that the conditional mean of $Y$ given $X$ and $Z$ does not depend on $X$. We propose a simple and general framework that can leverage flexible nonparametric or machine learning methods, such as additive models or random forests, to yield both robust error control and high power. The procedure involves using these methods to perform regressions, first to estimate a form of projection of $Y$ on $X$ and $Z$ using one half of the data, and then to estimate the expected conditional covariance between this projection and $Y$ on the remaining half of the data. While the approach is general, we show that a version of our procedure using spline regression achieves the minimax optimal rate in this nonparametric testing problem. Numerical experiments demonstrate the effectiveness of our approach, both in maintaining Type I error control and in power, compared to several existing approaches.
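A minimal sketch of the two-stage procedure described above, using random forests for all regressions and assumed toy data; the first half estimates the projection of $Y$ on $(X, Z)$, and the second half estimates the expected conditional covariance via a product of residuals, yielding an approximately standard normal statistic under the null.

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(9)
n = 2000
Z = rng.standard_normal((n, 2))
X = rng.standard_normal((n, 1))
Y = Z[:, 0] + 0.5 * X[:, 0] ** 2 + rng.standard_normal(n)   # Y depends on X nonlinearly

idx = rng.permutation(n); a, b = idx[: n // 2], idx[n // 2 :]
XZ = np.hstack([X, Z])

# stage 1 (first half): estimate the projection f(x, z) ~ E[Y | X, Z]
f_hat = RandomForestRegressor(n_estimators=200, random_state=0).fit(XZ[a], Y[a])
f_b = f_hat.predict(XZ[b])

# stage 2 (second half): regress both Y and the projection on Z alone, then
# estimate the expected conditional covariance via the product of residuals
m_hat = RandomForestRegressor(n_estimators=200, random_state=1).fit(Z[b], Y[b])
h_hat = RandomForestRegressor(n_estimators=200, random_state=2).fit(Z[b], f_b)
prod = (f_b - h_hat.predict(Z[b])) * (Y[b] - m_hat.predict(Z[b]))
stat = np.sqrt(len(b)) * prod.mean() / prod.std()
print("one-sided p-value:", 1 - norm.cdf(stat))             # small => reject the null
```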
Successful deep learning models often involve training neural network architectures that contain more parameters than the number of training samples. Such overparametrized models have been extensively studied in recent years, and the virtues of overparametrization have been established from both statistical and computational perspectives, via the double-descent phenomenon and via the structural properties of the optimization landscape, respectively. Despite the remarkable success of deep learning architectures in the overparametrized regime, it is also well known that these models are highly vulnerable to small adversarial perturbations of their inputs. Even when adversarially trained, their performance on perturbed inputs (robust generalization) is considerably worse than their best attainable performance on benign inputs (standard generalization). It is thus imperative to understand how overparametrization fundamentally affects robustness. In this paper, we provide a precise characterization of the role of overparametrization in robustness by focusing on random features regression models (two-layer neural networks with random first-layer weights). We consider a regime where the sample size, the input dimension and the number of parameters grow in proportion to each other, and derive an asymptotically exact formula for the robust generalization error when the model is adversarially trained. Our developed theory reveals the nontrivial effect of overparametrization on robustness and indicates that for adversarially trained random features models, high overparametrization can hurt robust generalization.
The problem of estimating a linear functional based on observational data is canonical in both the causal inference and bandit literatures. We analyze a broad class of two-stage procedures that first estimate the treatment effect function and then use this quantity to estimate the linear functional. We prove non-asymptotic upper bounds on the mean-squared error of such procedures: these bounds reveal that, in order to obtain non-asymptotically optimal procedures, the error in estimating the treatment effect should be minimized in a certain weighted $L^2$-norm. We analyze a two-stage procedure based on regression constrained in this weighted norm, and establish its instance-dependent optimality in finite samples via matching non-asymptotic local minimax lower bounds. These results show that the optimal non-asymptotic risk, in addition to depending on the asymptotically efficient variance, depends on the weighted-norm distance between the true outcome function and its approximation by the richest function class supported by the sample size.
Tensor decomposition serves as a powerful primitive in statistics and machine learning. In this paper, we focus on using power iteration to decompose an overcomplete random tensor. Past work studying the properties of tensor power iteration either requires a non-trivial data-independent initialization, or is restricted to the undercomplete regime. Moreover, several papers implicitly suggest that logarithmically many iterations (in terms of the input dimension) are sufficient for the power method to recover one of the tensor components. In this paper, we analyze the dynamics of tensor power iteration from random initialization in the overcomplete regime. Surprisingly, we show that polynomially many steps are necessary for convergence of tensor power iteration to any of the true components, which refutes the previous conjecture. On the other hand, our numerical experiments suggest that tensor power iteration successfully recovers tensor components for a broad range of parameters, even though it takes at least polynomially many steps to converge. To further complement our empirical evidence, we prove that a popular objective function for tensor decomposition is strictly increasing along the power iteration path. Our proof is based on the Gaussian conditioning technique, which has been applied to analyze the approximate message passing (AMP) algorithm. The major ingredient of our argument is a conditioning lemma that allows us to generalize AMP-type analysis to non-proportional limit and polynomially many iterations of the power method.
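A short sketch of tensor power iteration on an overcomplete random third-order tensor with assumed sizes; consistent with the discussion above, the overlap with the nearest component can be watched improving only slowly over many iterations from a random start.

```python
import numpy as np

rng = np.random.default_rng(10)
d, m, iters = 50, 150, 500                    # overcomplete: m components > d dimensions
A = rng.standard_normal((m, d))
A /= np.linalg.norm(A, axis=1, keepdims=True) # unit-norm components a_k
T = np.einsum("ki,kj,kl->ijl", A, A, A)       # T = sum_k a_k (tensor cubed)

u = rng.standard_normal(d); u /= np.linalg.norm(u)   # random initialization
for t in range(iters):
    u = np.einsum("ijl,j,l->i", T, u, u)      # power update: u <- T(I, u, u)
    u /= np.linalg.norm(u)

print("best overlap with a component:", np.max(np.abs(A @ u)))
```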
Motivated by the problem of online canonical correlation analysis, we propose the \emph{Stochastic Scaled-Gradient Descent} (SSGD) algorithm for minimizing the expectation of a stochastic function over a generic Riemannian manifold. SSGD generalizes the idea of projected stochastic gradient descent and allows the use of scaled stochastic gradients instead of stochastic gradients. In the special case of a spherical constraint, which arises in generalized eigenvector problems, we establish a non-asymptotic finite-sample bound of $\sqrt{1/T}$, and show that this rate is minimax optimal, up to a polylogarithmic factor of relevant parameters. On the asymptotic side, a novel trajectory-averaging argument allows us to achieve local asymptotic normality with a rate that matches that of Ruppert-Polyak-Juditsky averaging. We bring these ideas together in an application to online canonical correlation analysis, deriving, for the first time in the literature, an optimal one-time-scale algorithm with an explicit rate of local asymptotic convergence to normality. Numerical studies of canonical correlation analysis are also provided for synthetic data.
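A hedged sketch of the scaled-gradient idea in the spherical special case, for a synthetic generalized eigenvector problem: stochastic gradients of the Rayleigh numerator are scaled by a running estimate of the constraint matrix, the iterate is retracted to the sphere, and a trajectory average is maintained. Details such as the step-size schedule are assumptions, not the paper's exact prescription.

```python
import numpy as np

rng = np.random.default_rng(11)
d, T, eta = 20, 20_000, 0.5
Qa = rng.standard_normal((d, d)); A_pop = Qa @ Qa.T / d + np.eye(d)
Qb = rng.standard_normal((d, d)); B_pop = Qb @ Qb.T / d + np.eye(d)
La, Lb = np.linalg.cholesky(A_pop), np.linalg.cholesky(B_pop)

theta = rng.standard_normal(d); theta /= np.linalg.norm(theta)
B_hat = np.eye(d)                                  # running estimate of the constraint matrix
theta_sum = np.zeros(d)
for t in range(1, T + 1):
    a = La @ rng.standard_normal(d)                # stream with E[a a^T] = A_pop
    b = Lb @ rng.standard_normal(d)                # stream with E[b b^T] = B_pop
    B_hat += (np.outer(b, b) - B_hat) / (t + 1)
    g = (a @ theta) * a                            # stochastic gradient of theta^T A theta / 2
    theta += (eta / np.sqrt(t)) * np.linalg.solve(B_hat, g)   # scaled gradient step
    theta /= np.linalg.norm(theta)                 # retraction onto the sphere
    theta_sum += theta
theta_bar = theta_sum / T                          # trajectory average

# compare with the leading generalized eigenvector of (A_pop, B_pop)
w, V = np.linalg.eig(np.linalg.solve(B_pop, A_pop))
v = np.real(V[:, np.argmax(np.real(w))])
print(abs(theta_bar @ v) / (np.linalg.norm(theta_bar) * np.linalg.norm(v)))
```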
We consider interpolation learning in high-dimensional linear regression with Gaussian data, and prove a generic uniform convergence guarantee on the generalization error of interpolators in an arbitrary hypothesis class in terms of the class's Gaussian width. Applying the generic bound to Euclidean norm balls recovers the consistency result of Bartlett et al. (2020) for minimum-norm interpolators, and confirms a prediction of Zhou et al. (2020) for near-minimal-norm interpolators in the special case of Gaussian data. We demonstrate the generality of the bound by applying it to the simplex, obtaining a novel consistency result for minimum $\ell_1$-norm interpolators (basis pursuit). Our results show how norm-based generalization bounds can explain and be used to analyze benign overfitting, at least in some settings.
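A small sketch computing the two interpolators covered by the discussion above, on assumed synthetic data: the minimum $\ell_2$-norm interpolator via the pseudoinverse, and the minimum $\ell_1$-norm interpolator (basis pursuit) via the standard LP reformulation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(12)
n, p = 100, 400                                   # overparametrized: p >> n
X = rng.standard_normal((n, p))
theta_star = np.zeros(p); theta_star[:5] = 1.0
y = X @ theta_star + 0.1 * rng.standard_normal(n)

# minimum l2-norm interpolator: pseudoinverse solution of X theta = y
theta_l2 = np.linalg.pinv(X) @ y

# minimum l1-norm interpolator (basis pursuit) as an LP: theta = u - v with u, v >= 0,
# minimize sum(u + v) subject to X(u - v) = y
c = np.ones(2 * p)
res = linprog(c, A_eq=np.hstack([X, -X]), b_eq=y,
              bounds=[(0, None)] * (2 * p), method="highs")
theta_l1 = res.x[:p] - res.x[p:]

for name, th in [("l2", theta_l2), ("l1", theta_l1)]:
    print(name, "train residual:", np.linalg.norm(X @ th - y),
          " estimation error:", np.linalg.norm(th - theta_star))
```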
We study non-parametric estimation of the value function of an infinite-horizon $\gamma$-discounted Markov reward process (MRP) using observations from a single trajectory. We provide non-asymptotic guarantees for a general family of kernel-based multi-step temporal difference (TD) estimates, including canonical $K$-step look-ahead TD for $K = 1, 2, \ldots$ and the TD$(\lambda)$ family for $\lambda \in [0,1)$ as special cases. Our bounds capture the dependence of the estimation error on Bellman fluctuations, the mixing time of the Markov chain, any mis-specification in the model, as well as the choice of weight function defining the estimator itself, and reveal some delicate interactions between mixing time and model mis-specification. For a given TD method applied to a well-specified model, its statistical error under trajectory data is similar to that of i.i.d. sample transition pairs, whereas under mis-specification, temporal dependence in the data inflates the statistical error. However, any such deterioration can be mitigated by increased look-ahead. We complement our upper bounds by proving minimax lower bounds that establish optimality of TD-based methods with appropriately chosen look-ahead and weighting, and reveal some fundamental differences between value function estimation and ordinary non-parametric regression.
In many modern applications of deep learning the neural network has many more parameters than the data points used for its training. Motivated by those practices, a large body of recent theoretical research has been devoted to studying overparameterized models. One of the central phenomena in this regime is the ability of the model to interpolate noisy data, but still have test error lower than the amount of noise in that data. arXiv:1906.11300 characterized for which covariance structure of the data such a phenomenon can happen in linear regression if one considers the interpolating solution with minimum $\ell_2$-norm and the data has independent components: they gave a sharp bound on the variance term and showed that it can be small if and only if the data covariance has high effective rank in a subspace of small co-dimension. We strengthen and complete their results by eliminating the independence assumption and providing sharp bounds for the bias term. Thus, our results apply in a much more general setting than those of arXiv:1906.11300, e.g., kernel regression, and not only characterize how the noise is damped but also which part of the true signal is learned. Moreover, we extend the result to the setting of ridge regression, which allows us to explain another interesting phenomenon: we give general sufficient conditions under which the optimal regularization is negative.