Sparse shrunk additive models and sparse random feature models have been developed separately as methods for learning low-order functions, where there are few interactions between variables, but neither offers computational efficiency. On the other hand, $\ell_2$-based shrunk additive models are efficient but do not offer feature selection, as the resulting coefficient vectors are dense. Inspired by the success of the iterative magnitude pruning technique in finding lottery tickets of neural networks, we propose a new method, Sparser Random Feature Models via IMP (ShRIMP), to efficiently fit high-dimensional data with inherent low-dimensional structure in the form of sparse variable dependencies. Our method can be viewed as a combined process for constructing and finding sparse lottery tickets for two-layer dense networks. We explain the observed benefits of ShRIMP through a refined analysis of the generalization error for thresholded basis pursuit and the resulting bounds. Through function-approximation experiments on both synthetic data and real-world benchmark datasets, we show that ShRIMP obtains better than or competitive test accuracy compared to state-of-the-art sparse feature and additive methods such as SRFE-S, SSAM, and SALSA. Meanwhile, ShRIMP performs feature selection with low computational complexity and is robust to the pruning rate, indicating robustness in the structure of the obtained subnetworks. By noting a correspondence between our model and weight/neuron subnetworks, we gain insight into the lottery ticket hypothesis through ShRIMP.
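The iterative magnitude pruning loop at the heart of this approach can be illustrated in a few lines. The following is a minimal sketch, not the authors' ShRIMP implementation: it fits a random-feature ridge regression, repeatedly drops the smallest-magnitude coefficients, and refits on the surviving features. All names, the feature map, and the hyperparameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a function of 10-dimensional input that depends on only 2 variables.
n, d, m = 200, 10, 300          # samples, input dimension, random features
X = rng.uniform(-1, 1, (n, d))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2

# Random cosine features (a two-layer network with random first layer).
W = rng.normal(size=(d, m))
b = rng.uniform(0, 2 * np.pi, m)
A = np.cos(X @ W + b)

def ridge_fit(A, y, lam=1e-6):
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ y)

# Iterative magnitude pruning: drop the smallest 20% of coefficients
# (by magnitude) each round, then refit on the surviving features.
support = np.arange(m)
for _ in range(8):
    c = ridge_fit(A[:, support], y)
    keep = np.argsort(np.abs(c))[int(0.2 * len(c)):]
    support = support[np.sort(keep)]

c = ridge_fit(A[:, support], y)
residual = np.linalg.norm(A[:, support] @ c - y) / np.linalg.norm(y)
print(len(support), round(residual, 3))
```

Pruning by coefficient magnitude here plays the role of feature selection: the surviving features define a sparse subnetwork of the original dense model.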
We provide (high-probability) bounds on the condition number of random feature matrices. In particular, we show that if the complexity ratio $N/m$, where $N$ is the number of data points and $m$ is the number of random features, scales like $\log^{-1}(N)$ or $\log(m)$, then the random feature matrix is well-conditioned. This result holds without regularization and relies on establishing various concentration bounds between the dependent components of the random feature matrix. Additionally, we derive bounds on the restricted isometry constant of the random feature matrix. We prove that the risk associated with regression problems using a random feature matrix exhibits the double descent phenomenon and that this is an effect of the double descent behavior of the condition number. The risk bounds include the underparameterized setting using the least squares problem and the overparameterized setting using the minimum norm interpolation problem or a sparse regression problem. For the least squares or sparse regression cases, we show that the risk decreases as $m$ and $N$ increase, even in the presence of bounded or random noise. The risk bounds match the optimal scaling in the literature, and the constants in our results are explicit and independent of the dimension of the data.
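A quick numerical check of this conditioning behavior is easy to set up. The sketch below uses illustrative choices of feature map and scaling, not the paper's exact setup: it computes the condition number of a random cosine-feature matrix as the number of features $m$ grows past the number of data points $N$. The poor conditioning near the interpolation threshold $m = N$ and the improvement as $m/N$ grows reflect the double descent behavior of the condition number.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_feature_matrix(N, m, d, rng):
    """N data points in d dimensions mapped through m random cosine features."""
    X = rng.normal(size=(N, d)) / np.sqrt(d)
    W = rng.normal(size=(d, m))
    b = rng.uniform(0, 2 * np.pi, m)
    return np.cos(X @ W + b) / np.sqrt(m)

N, d = 100, 20
conds = {}
for m in (100, 400, 1600):       # m = N is the interpolation threshold
    A = random_feature_matrix(N, m, d, rng)
    s = np.linalg.svd(A, compute_uv=False)
    conds[m] = s[0] / s[-1]      # condition number = largest / smallest singular value
    print(m, round(conds[m], 2))
```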
We study tensor-on-tensor regression, where the goal is to connect tensor responses to tensor covariates with a low Tucker rank parameter tensor/matrix, without prior knowledge of its intrinsic rank. We propose Riemannian gradient descent (RGD) and Riemannian Gauss-Newton (RGN) methods and address the challenge of unknown rank by studying the effect of rank over-parameterization. We provide the first convergence guarantees for general tensor-on-tensor regression by showing that RGD and RGN converge linearly and quadratically, respectively, to a statistically optimal estimate in both rank correctly-parameterized and over-parameterized settings. Our theory reveals an intriguing phenomenon: Riemannian optimization methods naturally adapt to over-parameterization without modifications to their implementation. We also give the first rigorous evidence for the statistical-computational gap in scalar-on-tensor regression under the low-degree polynomial framework. Our theory demonstrates a "blessing of the statistical-computational gap" phenomenon: in a wide range of scenarios in tensor-on-tensor regression for tensors of order three or higher, the computationally required sample size matches what is needed under moderate rank over-parameterization when considering computationally feasible estimators, while no such benefit exists in the matrix setting. This shows that moderate rank over-parameterization is essentially cost-free in terms of the sample size in tensor-on-tensor regression of order three or higher. Finally, we conduct simulation studies to show the advantages of our proposed methods and to corroborate our theoretical findings.
We consider the sparse moment problem of learning a $k$-spike mixture in any dimension from its noisy moment information. We use the transportation distance to measure the accuracy of the learned mixture. Previous algorithms either assume certain separation conditions, use more moments than necessary for recovery, or run in (super-)exponential time. Our algorithm for the one-dimensional problem (also known as the sparse Hausdorff moment problem) is a robust version of the classic Prony's method, and our contribution mainly lies in the analysis: we carry out a global and much tighter analysis than previous work, which analyzed the perturbation of the intermediate results of Prony's method. A useful technical ingredient is a connection between the linear system defined by a Vandermonde matrix and Schur polynomials, which allows us to provide tight perturbation bounds independent of the separation and may be useful in other contexts. To tackle the high-dimensional problem, we first solve the two-dimensional problem by extending the one-dimensional algorithm and analysis to complex numbers. Our algorithm for the high-dimensional case determines the coordinates of each spike by aligning a one-dimensional projection of the mixture onto a random vector with a set of two-dimensional projections of the mixture. Our results have applications to learning topic models and Gaussian mixtures, implying improved sample complexity or running time over prior work.
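For intuition, the classic (non-robust) Prony's method that this algorithm builds on can be run on exact moments in a few lines. The sketch below recovers a 2-spike mixture from its first four moments by solving a Hankel system for the coefficients of the polynomial whose roots are the spike locations; the robustness to noisy moments analyzed in the paper is not reproduced here, and the spike values are illustrative.

```python
import numpy as np

# True 2-spike mixture on [0, 1].
locs_true = np.array([0.2, 0.7])
wts_true = np.array([0.4, 0.6])
k = 2

# Noiseless moments m_j = sum_i w_i * x_i^j for j = 0..2k-1.
mom = np.array([wts_true @ locs_true**j for j in range(2 * k)])

# Prony: the monic polynomial p(x) = x^2 + c[1] x + c[0] with the spikes
# as roots satisfies the Hankel system  H c = -b.
H = np.array([[mom[0], mom[1]],
              [mom[1], mom[2]]])
b = mom[2:4]
c = np.linalg.solve(H, -b)
locs = np.sort(np.roots([1.0, c[1], c[0]]).real)

# Recover the weights from the Vandermonde system V w = (m_0, ..., m_{k-1}).
V = np.vander(locs, N=k, increasing=True).T   # rows: locs^0, locs^1
wts = np.linalg.solve(V, mom[:k])

print(np.round(locs, 6), np.round(wts, 6))
```

The Vandermonde system in the last step is exactly the linear system whose perturbation the paper controls via Schur polynomials.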
In this paper, we propose the {\it \underline{R}ecursive} {\it \underline{I}mportance} {\it \underline{S}ketching} algorithm for {\it \underline{R}ank} constrained least squares {\it \underline{O}ptimization} (RISRO). The key step of RISRO is recursive importance sketching, a new sketching framework based on deterministically designed recursive projections, which significantly differs from the randomized sketching in the literature \citep{mahoney2011randomized,woodruff2014sketching}. Several existing algorithms in the literature can be reinterpreted under this new sketching framework, and RISRO offers clear advantages over them. RISRO is easy to implement and computationally efficient: the core procedure in each iteration is to solve a dimension-reduced least squares problem. We establish local quadratic-linear and quadratic rates of convergence for RISRO under some mild conditions. We also discover a deep connection between RISRO and the Riemannian Gauss-Newton algorithm on fixed-rank matrices. The effectiveness of RISRO is demonstrated in two applications in machine learning and statistics: low-rank matrix trace regression and phase retrieval. Simulation studies demonstrate the superior numerical performance of RISRO.
Bayesian neural networks attempt to combine the strong predictive performance of neural networks with the formal quantification of uncertainty in predictive output associated with the Bayesian framework. However, it remains unclear how to endow the parameters of the network with a prior distribution that is meaningful when lifted into the output space of the network. A possible solution is proposed that enables the user to posit an appropriate Gaussian process covariance function for the task at hand. Our approach constructs a prior distribution for the parameters of the network, called a ridgelet prior, that approximates the posited Gaussian process in the output space of the network. In contrast to existing work on the connection between neural networks and Gaussian processes, our analysis is non-asymptotic and provides finite-sample-size error bounds. This establishes that a Bayesian neural network can approximate any Gaussian process whose covariance function is sufficiently regular. Our experimental assessment is limited to a proof of concept, where we demonstrate that the ridgelet prior can outperform an unstructured prior on regression problems for which a suitable Gaussian process prior can be provided.
We study the unsupervised learning of non-invertible observation functions in nonlinear state space models. Assuming abundant data of the observation process along with the distribution of the state process, we introduce a nonparametric generalized moment method to estimate the observation function via constrained regression. The major challenges come from the non-invertibility of the observation function and the lack of data pairs between the state and the observation. We address the fundamental issue of identifiability from quadratic loss functionals and show that the function space of identifiability is the closure of an RKHS intrinsic to the state process. Numerical results show that the first two moments and temporal correlations, along with upper and lower bounds, can identify functions ranging from piecewise polynomials to smooth functions, leading to convergent estimators. The limitations of this method, such as non-identifiability due to symmetry and stationarity, are also discussed.
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
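The modular framework described above reduces, in its simplest form, to a two-stage prototype: a randomized range finder followed by a small deterministic factorization. Below is a minimal sketch using a Gaussian test matrix and no power iterations; the function and parameter names are ours, and the test problem (an exactly low-rank matrix) is the easiest case for the method.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_svd(A, k, p=10, rng=rng):
    """Prototype randomized SVD: sample the range of A with a Gaussian
    test matrix, orthonormalize, then run a small dense SVD."""
    m, n = A.shape
    Omega = rng.normal(size=(n, k + p))    # random test matrix, p = oversampling
    Q, _ = np.linalg.qr(A @ Omega)         # orthonormal basis for the sampled range
    B = Q.T @ A                            # (k + p) x n reduced matrix
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_b)[:, :k], s[:k], Vt[:k]

# Exactly rank-5 input: the sampled subspace captures the range almost surely.
m, n, r = 300, 200, 5
A = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
U, s, Vt = randomized_svd(A, k=r)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(round(err, 8))
```

For matrices whose singular values decay slowly, the survey's power-iteration variant (applying the scheme to $(AA^*)^q A$) sharpens the computed basis at the cost of extra passes over the data.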
In a mixed generalized linear model, the objective is to learn multiple signals from unlabeled observations: each sample comes from exactly one signal, but it is not known which one. We consider the prototypical problem of estimating two statistically independent signals in a mixed generalized linear model with Gaussian covariates. Spectral methods are a popular class of estimators which output the top two eigenvectors of a suitable data-dependent matrix. However, despite the wide applicability, their design is still obtained via heuristic considerations, and the number of samples $n$ needed to guarantee recovery is super-linear in the signal dimension $d$. In this paper, we develop exact asymptotics on spectral methods in the challenging proportional regime in which $n, d$ grow large and their ratio converges to a finite constant. By doing so, we are able to optimize the design of the spectral method, and combine it with a simple linear estimator, in order to minimize the estimation error. Our characterization exploits a mix of tools from random matrices, free probability and the theory of approximate message passing algorithms. Numerical simulations for mixed linear regression and phase retrieval display the advantage enabled by our analysis over existing designs of spectral methods.
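As a concrete instance of the class of spectral methods discussed here, the following sketch runs a standard spectral estimator for phase retrieval (one of the two models in the numerical simulations): the leading eigenvector of the $y^2$-weighted sample covariance aligns with the hidden signal. The weighting function and sample sizes are illustrative defaults, not the optimized design derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

d, n = 50, 5000
beta = rng.normal(size=d)
beta /= np.linalg.norm(beta)               # unit-norm hidden signal
X = rng.normal(size=(n, d))                # Gaussian covariates
y = np.abs(X @ beta)                       # phase retrieval: the sign is lost

# Spectral method: leading eigenvector of D = (1/n) sum_i y_i^2 x_i x_i^T.
# In expectation D = I + 2 beta beta^T, so the top eigenvector is beta.
D = (X * (y**2)[:, None]).T @ X / n
eigvals, eigvecs = np.linalg.eigh(D)
v = eigvecs[:, -1]                         # eigenvector of the largest eigenvalue

overlap = abs(v @ beta)                    # |cosine| between estimate and signal
print(round(overlap, 3))
```

The paper's exact asymptotics concern the regime where $n/d$ converges to a constant; the sketch above uses a comfortable $n/d = 100$ so that the alignment is visibly strong.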
This paper provides estimation and inference methods for an identified set's boundary (i.e., support function) where the selection among a very large number of covariates is based on modern regularized tools. I characterize the boundary using a semiparametric moment equation. Combining Neyman-orthogonality and sample splitting ideas, I construct a root-N consistent, uniformly asymptotically Gaussian estimator of the boundary and propose a multiplier bootstrap procedure to conduct inference. I apply this result to the partially linear model, the partially linear IV model and the average partial derivative with an interval-valued outcome.
Learning curves provide insight into the dependence of a learner's generalization performance on the training set size. This important tool can be used for model selection, to predict the effect of more training data, and to reduce the computational complexity of model training and hyperparameter tuning. This review recounts the origins of the term, provides a formal definition of the learning curve, and briefly covers basics such as its estimation. Our main contribution is a comprehensive overview of the literature regarding the shape of learning curves. We discuss empirical and theoretical evidence that supports well-behaved curves that often have the shape of a power law or an exponential. We consider the learning curves of Gaussian processes, the complex shapes they can display, and the factors influencing them. We draw specific attention to examples of learning curves that are ill-behaved, showing worse learning performance with more training data. To wrap up, we point out various open problems that warrant deeper empirical and theoretical investigation. All in all, our review underscores that learning curves are surprisingly diverse and no universal model can be identified.
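The power-law shape discussed above is typically estimated by fitting a line in log-log space. A minimal sketch on a synthetic error curve follows; the exponent and prefactor are arbitrary illustrative choices.

```python
import numpy as np

# Synthetic learning curve: error(n) = a * n^(-b) with a = 2, b = 0.5.
sizes = np.array([50, 100, 200, 400, 800, 1600])
errors = 2.0 * sizes ** -0.5

# A power law is linear in log-log space: log err = log a - b log n,
# so ordinary least squares on the logs recovers the parameters.
slope, intercept = np.polyfit(np.log(sizes), np.log(errors), 1)
a_hat, b_hat = np.exp(intercept), -slope
print(round(a_hat, 3), round(b_hat, 3))
```

Extrapolating the fitted curve to a larger training set size gives the predicted benefit of more data, one of the model-selection uses mentioned above; the review's caveat is that real curves are not always this well-behaved.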
The spectra of random feature matrices provide essential information on the conditioning of the linear system used in random feature regression problems and are thus connected to the consistency and generalization of random feature models. Random feature matrices are asymmetric rectangular nonlinear matrices depending on two input variables, the data and the weights, which can make their characterization challenging. We consider two settings for the two input variables: either both are random variables, or one is a random variable and the other is well-separated, i.e., there is a minimum distance between points. With conditions on the dimension, the complexity ratio, and the sampling variance, we show that the singular values of these matrices concentrate near their full expectation and near one with high probability. In particular, since the dimension depends only on the logarithm of the number of random weights or the number of data points, our complexity bounds can be achieved even in moderate dimensions for many practical settings. The theoretical results are verified with numerical experiments.
In this paper, we leverage over-parameterization to design regularization-free algorithms for the high-dimensional single index model, and we provide theoretical guarantees for the induced implicit regularization phenomenon. Specifically, we study both vector and matrix single index models where the link function is nonlinear and unknown, the signal parameter is either a sparse vector or a low-rank symmetric matrix, and the response variable can be heavy-tailed. To gain a better understanding of the role of implicit regularization without excessive technicality, we assume that the distribution of the covariates is known a priori. For both the vector and matrix settings, we construct an over-parameterized least-squares loss function by employing the score function transform and a robust truncation step designed specifically for heavy-tailed data. We propose to estimate the true parameter by applying regularization-free gradient descent to the loss function. When the initialization is close to the origin and the stepsize is sufficiently small, we prove that the obtained solution achieves minimax optimal statistical rates of convergence in both the vector and matrix cases. In addition, our experimental results support our theoretical findings and also demonstrate that our method empirically outperforms classical methods with explicit regularization in terms of both the $\ell_2$-statistical rate and variable selection consistency.
Despite the ubiquitous use of stochastic optimization algorithms in machine learning, the precise impact of these algorithms and their dynamics on generalization performance in realistic non-convex settings is still poorly understood. While recent work has revealed connections between generalization and heavy-tailed behavior in stochastic optimization, that work relied mainly on continuous-time approximations, and a rigorous treatment of the original discrete-time iterations has yet to be carried out. To bridge this gap, we present novel bounds linking generalization to the lower tail exponent of the transition kernel associated with the optimizer around a local minimum, in both discrete-time and continuous-time settings. To achieve this, we first prove a data- and algorithm-dependent generalization bound in terms of the celebrated Fernique-Talagrand functional applied to the trajectory of the optimizer. Then, we specialize this result by exploiting the Markovian structure of stochastic optimizers and derive bounds in terms of their (data-dependent) transition kernels. We support our theory with empirical results from a variety of neural networks, showing correlations between the generalization error and lower tail exponents.
We provide a complete characterization of the identifiability of the interaction kernel in the mean-field equations of interacting particle systems. The key is to identify the function spaces on which a probabilistic quadratic loss functional has a unique minimizer. We consider two data-adaptive $L^2$ spaces: one with the Lebesgue measure and the other with an exploration measure intrinsic to the mean-field process. For each $L^2$ space, the Fréchet derivative of the loss functional leads to a semi-positive-definite integral operator; hence, identifiability holds on the eigenspaces corresponding to the nonzero eigenvalues of the integral operator, and the function space of identifiability is the $L^2$-closure of the RKHS associated with the integral operator. Furthermore, identifiability holds on the full $L^2$ space if and only if the integral operator is strictly positive. Consequently, the inverse problem is ill-posed and requires regularization. In the context of truncated-SVD regularization, we show numerically that the weighted $L^2$ space is preferable to the unweighted $L^2$ space, as it leads to more accurate regularized estimators.
As a special infinite-order vector autoregressive (VAR) model, the vector autoregressive moving average (VARMA) model can capture much richer temporal patterns than the widely used finite-order VAR model. However, its practicality has long been hindered by its non-identifiability, computational intractability, and relative difficulty of interpretation. This paper introduces a novel infinite-order VAR model which not only avoids the drawbacks of the VARMA model but also inherits its favorable temporal patterns. As another attractive feature, the temporal and cross-sectional dependence structures of this model can be interpreted separately, since they are characterized by different sets of parameters. For high-dimensional time series, this separation motivates us to impose sparsity on the parameters determining the cross-sectional dependence. As a result, greater statistical efficiency and interpretability can be achieved without sacrificing any temporal information. We introduce an $\ell_1$-regularized estimator for the proposed model and derive the corresponding nonasymptotic error bounds. An efficient block coordinate descent algorithm and a consistent model order selection method are developed. The merits of the proposed approach are supported by simulation studies and a real-world macroeconomic data analysis.
In this paper, we consider the estimation of a low Tucker rank tensor from a number of noisy linear measurements. The general problem covers many specific examples arising from applications, including tensor regression, tensor completion, and tensor PCA/SVD. We consider an efficient Riemannian Gauss-Newton (RGN) method for low Tucker rank tensor estimation. Different from the generic (super)linear convergence guarantee of RGN in the literature, we prove the first local quadratic convergence guarantee of RGN for low-rank tensor estimation in the noisy setting under some regularity conditions and provide the corresponding estimation error upper bounds. A deterministic estimation error lower bound, which matches the upper bound, is provided that demonstrates the statistical optimality of RGN. The merit of RGN is illustrated through two machine learning applications: tensor regression and tensor SVD. Finally, we provide the simulation results to corroborate our theoretical findings.
Understanding generalization in modern machine learning settings has been one of the major challenges in statistical learning theory. In this context, recent years have witnessed the development of various generalization bounds suggesting different complexity notions, such as the mutual information between the data sample and the algorithm output, the compressibility of the hypothesis space, and the fractal dimension of the hypothesis space. While these bounds illuminate the problem at hand from different angles, the complexity notions they suggest appear seemingly unrelated, which restricts their high-level impact. In this study, we prove novel generalization bounds through the lens of rate-distortion theory and explicitly relate the notions of mutual information, compressibility, and fractal dimension in a single framework. Our approach consists of (i) defining a generalized notion of compressibility by using source coding concepts, and (ii) showing that the "compression error rate" can be linked to the generalization error both in expectation and with high probability. We show that in the "lossless compression" setting we recover and improve existing mutual-information-based bounds, whereas a "lossy compression" scheme allows us to link generalization to the rate-distortion dimension, a particular notion of fractal dimension. Our results bring a more unified perspective on generalization and open up several future research directions.
The matrix-based R\'enyi's entropy allows us to directly quantify information measures from given data, without explicit estimation of the underlying probability distribution. This intriguing property makes it widely applied in statistical inference and machine learning tasks. However, this information theoretical quantity is not robust against noise in the data, and is computationally prohibitive in large-scale applications. To address these issues, we propose a novel measure of information, termed low-rank matrix-based R\'enyi's entropy, based on low-rank representations of infinitely divisible kernel matrices. The proposed entropy functional inherits the specialty of the original definition to directly quantify information from data, but enjoys additional advantages including robustness and effective calculation. Specifically, our low-rank variant is more sensitive to informative perturbations induced by changes in underlying distributions, while being insensitive to uninformative ones caused by noises. Moreover, low-rank R\'enyi's entropy can be efficiently approximated by random projection and Lanczos iteration techniques, reducing the overall complexity from $\mathcal{O}(n^3)$ to $\mathcal{O}(n^2 s)$ or even $\mathcal{O}(ns^2)$, where $n$ is the number of data samples and $s \ll n$. We conduct large-scale experiments to evaluate the effectiveness of this new information measure, demonstrating superior results compared to matrix-based R\'enyi's entropy in terms of both performance and computational efficiency.
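For reference, the full-rank matrix-based R\'enyi's entropy that the proposed low-rank variant approximates can be computed directly from the eigenvalues of a trace-normalized kernel matrix; this eigendecomposition is the $\mathcal{O}(n^3)$ baseline. A minimal sketch follows, with an illustrative Gaussian kernel and arbitrary data scales.

```python
import numpy as np

rng = np.random.default_rng(3)

def matrix_renyi_entropy(K, alpha=2.0):
    """Matrix-based Renyi entropy S_alpha(A) = (1 - alpha)^-1 log2 tr(A^alpha),
    where A is the kernel matrix normalized to unit trace."""
    A = K / np.trace(K)
    lam = np.linalg.eigvalsh(A)
    lam = np.clip(lam, 0.0, None)          # guard tiny negative eigenvalues
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

def gram(X, sigma=1.0):
    """Gaussian-kernel Gram matrix of the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

# Tightly clustered samples give a near-rank-one Gram matrix (entropy near 0);
# well-spread samples approach the maximum log2(n) (here log2(64) = 6).
n = 64
spread_small = matrix_renyi_entropy(gram(0.1 * rng.normal(size=(n, 3))))
spread_large = matrix_renyi_entropy(gram(3.0 * rng.normal(size=(n, 3))))
print(round(spread_small, 3), round(spread_large, 3))
```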
We study a double robust Bayesian inference procedure on the average treatment effect (ATE) under unconfoundedness. Our Bayesian approach involves a correction term for prior distributions adjusted by the propensity score. We prove asymptotic equivalence of our Bayesian estimator and efficient frequentist estimators by establishing a new semiparametric Bernstein-von Mises theorem under double robustness; i.e., the lack of smoothness of conditional mean functions can be compensated by high regularity of the propensity score and vice versa. Consequently, the resulting Bayesian point estimator internalizes the bias correction as the frequentist-type doubly robust estimator, and the Bayesian credible sets form confidence intervals with asymptotically exact coverage probability. In simulations, we find that this corrected Bayesian procedure leads to significant bias reduction of point estimation and accurate coverage of confidence intervals, especially when the dimensionality of covariates is large relative to the sample size and the underlying functions become complex. We illustrate our method in an application to the National Supported Work Demonstration.
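The frequentist-type doubly robust (AIPW) estimator that the Bayesian point estimator internalizes can be sketched as follows. The simulation design, the Newton-step logistic fit, and all constants are our illustrative choices, not the paper's; the true average treatment effect is set to 2.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated observational data with a known ATE of 2.
n = 4000
x = rng.normal(size=(n, 2))
X = np.column_stack([np.ones(n), x])                 # design with intercept
p_true = 1 / (1 + np.exp(-0.5 * x[:, 0]))            # propensity depends on x1
T = (rng.uniform(size=n) < p_true).astype(float)
Y = 1 + 2 * T + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)

def logistic_fit(X, t, iters=25):
    """Newton's method for logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ w))
        H = (X * (p * (1 - p))[:, None]).T @ X       # Hessian
        w += np.linalg.solve(H, X.T @ (t - p))       # Newton step
    return w

# Nuisance fits: propensity score and the two arm-specific outcome regressions.
p_hat = 1 / (1 + np.exp(-X @ logistic_fit(X, T)))
b1, *_ = np.linalg.lstsq(X[T == 1], Y[T == 1], rcond=None)
b0, *_ = np.linalg.lstsq(X[T == 0], Y[T == 0], rcond=None)
mu1, mu0 = X @ b1, X @ b0

# AIPW: outcome-model contrast plus propensity-weighted residual correction.
psi = mu1 - mu0 + T * (Y - mu1) / p_hat - (1 - T) * (Y - mu0) / (1 - p_hat)
ate = psi.mean()
print(round(ate, 2))
```

The double robustness shows up in the influence function `psi`: the estimate stays consistent if either the outcome regressions or the propensity model is correctly specified.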