The amounts of data that need to be transmitted, processed, and stored by modern deep neural networks have reached truly enormous volumes in the last few years, calling for the invention of new paradigms in both hardware and software development. One of the most promising and rapidly advancing frontiers here is the creation of new numerical formats. In this work we focus on the family of block floating point numerical formats due to their combination of wide dynamic range, numerical accuracy, and efficient hardware implementation of inner products using simple integer arithmetic. These formats are characterized by a block of mantissas with a shared scale factor. The basic Block Floating Point (BFP) format quantizes the block scales into the nearest powers of two on the right. Its simple modification, Scaled BFP (SBFP), stores the same scales in full precision and thus allows higher accuracy. In this paper, we study the statistical behavior of both these formats rigorously. We develop asymptotic bounds on the inner product error in SBFP- and BFP-quantized normally distributed vectors. Next, we refine those asymptotic results to finite-dimensional settings and derive high-dimensional tight bounds for the same errors. Based on the obtained results, we introduce a performance measure assessing the accuracy of any block format. This measure allows us to determine the optimal parameters, such as the block size, yielding the highest accuracy. In particular, we show that if the precision of the BFP format is fixed at 4 bits, the optimal block size becomes 64. All theoretical derivations are supported by numerical experiments and studies on the weights of publicly available pretrained neural networks.
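As a concrete illustration of the two formats, the following minimal sketch (not the authors' reference implementation) quantizes a block of values with a shared scale: BFP rounds the scale to a power of two (rounding up is one common convention), while SBFP keeps it in full precision. The 4-bit mantissas and block size of 64 mirror the setting highlighted in the abstract.

```python
# Minimal sketch of BFP vs SBFP quantization of a single block of values.
# Mantissa width, block size, and the rounding convention are illustrative choices.
import numpy as np

def quantize_block(x, mantissa_bits=4, power_of_two_scale=True):
    """Quantize a 1-D block with a shared scale and signed integer mantissas."""
    qmax = 2 ** (mantissa_bits - 1) - 1            # e.g. 7 for 4-bit mantissas
    scale = np.max(np.abs(x)) / qmax               # full-precision shared scale (SBFP)
    if scale == 0:
        return np.zeros_like(x)
    if power_of_two_scale:                         # BFP: round the scale up to the next power of two
        scale = 2.0 ** np.ceil(np.log2(scale))
    mantissas = np.clip(np.round(x / scale), -qmax, qmax)
    return mantissas * scale                       # dequantized block

rng = np.random.default_rng(0)
block = rng.standard_normal(64)                    # block size 64, as in the abstract
for p2 in (True, False):
    err = np.linalg.norm(block - quantize_block(block, 4, p2)) / np.linalg.norm(block)
    print("BFP " if p2 else "SBFP", "relative error:", round(float(err), 4))
```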
Classical asymptotic theory for statistical inference usually involves calibrating a statistic by fixing the dimension $d$ while letting the sample size $n$ increase to infinity. Recently, much effort has been dedicated towards understanding how these methods behave in high-dimensional settings, where $d$ and $n$ both increase to infinity together. This often leads to different inference procedures, depending on the assumptions about the dimensionality, leaving the practitioner in a bind: given a dataset with 100 samples in 20 dimensions, should they calibrate by assuming $n \gg d$, or $d/n \approx 0.2$? This paper considers the goal of dimension-agnostic inference: developing methods whose validity does not depend on any assumption on $d$ versus $n$. We introduce an approach that uses variational representations of existing test statistics along with sample splitting and self-normalization to produce a new test statistic with a Gaussian limiting distribution, regardless of how $d$ scales with $n$. The resulting statistic can be viewed as a careful modification of degenerate U-statistics, dropping diagonal blocks and retaining off-diagonal blocks. We exemplify our technique for some classical problems including one-sample mean and covariance testing, and show that our tests have minimax rate-optimal power against appropriate local alternatives. In most settings, our cross U-statistic matches the high-dimensional power of the corresponding (degenerate) U-statistic up to a $\sqrt{2}$ factor.
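To make the sample-splitting and self-normalization idea concrete, here is a hedged sketch for one-sample mean testing (is $\mathbb{E}[X] = 0$?): the first half of the sample estimates a direction, the second half is projected onto it, and the studentized projection is approximately standard normal under the null. The even split and this particular studentization are illustrative choices, not the paper's exact construction.

```python
# Sketch of a cross statistic built from sample splitting and self-normalization.
import numpy as np

def cross_mean_test(X):
    """Return a studentized cross statistic that is approximately N(0,1) when E[X] = 0."""
    n = X.shape[0]
    X1, X2 = X[: n // 2], X[n // 2 :]
    direction = X1.mean(axis=0)                 # direction estimated from the first half
    proj = X2 @ direction                       # project the second half onto it
    return np.sqrt(len(proj)) * proj.mean() / proj.std(ddof=1)

rng = np.random.default_rng(1)
print(cross_mean_test(rng.standard_normal((200, 50))))          # null: close to N(0,1)
print(cross_mean_test(rng.standard_normal((200, 50)) + 0.3))    # shifted mean: large value
```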
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
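The proto-algorithm described above (sample the range, orthonormalize, compress, then factor deterministically) is short enough to sketch directly. The oversampling amount and the number of power iterations below are illustrative parameter choices.

```python
# Sketch of a randomized range finder followed by a small deterministic SVD.
import numpy as np

def randomized_svd(A, k, oversample=10, n_iter=2):
    m, n = A.shape
    Omega = np.random.default_rng(0).standard_normal((n, k + oversample))
    Y = A @ Omega                                  # random sampling of the range of A
    for _ in range(n_iter):                        # optional power iterations for accuracy
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                         # orthonormal basis capturing most of A's action
    B = Q.T @ A                                    # compress A to the identified subspace
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

A = np.random.default_rng(1).standard_normal((500, 300))
U, s, Vt = randomized_svd(A, k=20)
print(np.linalg.norm(A - (U * s) @ Vt, 2))         # spectral-norm approximation error
```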
We propose a goodness-of-fit test for the degree-corrected stochastic block model (DCSBM). The test is based on an adjusted chi-square statistic for measuring equality among groups of $n$ multinomial distributions with $d_1, \dots, d_n$ observations. In the context of network models, the number of multinomials ($n$) grows much faster than the number of observations ($d_i$), which corresponds to the degree of node $i$, so the setting deviates from classical asymptotics. We show that the statistic attains its limiting distribution under the null as long as the harmonic mean of $\{d_i\}$ grows to infinity. When applied sequentially, the test can also be used to determine the number of communities. The test operates on a compressed version of the adjacency matrix, conditional on the degrees, and is hence highly scalable to large sparse networks. We incorporate a novel idea of compressing the rows based on a $(K+1)$-community assignment when testing for $K$ communities. This approach increases the power in sequential applications without sacrificing computational efficiency, and we prove its consistency in recovering the number of communities. Since the test statistic does not rely on a specific alternative, its utility goes beyond sequential testing, and it can be used to simultaneously test against a wide range of alternatives outside the DCSBM family. In particular, we prove that the test is consistent against a general family of latent-variable network models with community structure.
We obtain concentration inequalities for sums of independent and identically distributed random variables with heavy-tailed distributions. Our concentration results concern random variables whose distributions satisfy $\mathbb{P}(X > t) \leq \mathrm{e}^{-I(t)}$, where $I: \mathbb{R} \rightarrow \mathbb{R}$ is an increasing function such that $I(t)/t \rightarrow \alpha \in [0, \infty)$ as $t \rightarrow \infty$. Our main theorem not only recovers some existing results, such as the concentration of sums of sub-Weibull random variables, but also produces new results for sums of random variables with heavier tails. We show that the concentration inequalities we obtain are sharp enough to provide large deviation results for sums of independent random variables as well. Our analyses, which are based on a standard truncation argument, simplify, unify, and generalize existing results on the concentration and large deviations of heavy-tailed random variables.
Deep ResNets are recognized for achieving state-of-the-art results in machine learning tasks. However, the remarkable performance of these architectures relies on a training procedure that needs to be carefully crafted to avoid vanishing or exploding gradients, particularly as the depth $L$ increases. No consensus has emerged on how to mitigate this issue, although a widely discussed strategy consists in scaling the output of each layer by a factor $\alpha_L$. We show in a probabilistic setting that, with standard i.i.d. initializations, the only non-trivial dynamics is obtained for $\alpha_L = 1/\sqrt{L}$ (other choices lead either to explosion or to an identity mapping). This scaling factor corresponds in the continuous-time limit to a neural stochastic differential equation, contrary to the widespread interpretation that deep ResNets are discretizations of neural ordinary differential equations. By contrast, in the latter regime, stability is obtained with specific correlated initializations and $\alpha_L = 1/L$. Our analysis suggests a strong interplay between the scaling and the regularity of the weights as a function of the layer index. Finally, in a series of experiments, we exhibit a continuous range of regimes driven by these two parameters, which jointly impact performance before and after training.
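A toy sketch of the layer-wise scaling discussed above: a deep residual stack with i.i.d. Gaussian weights in which each residual branch is multiplied by $\alpha_L$. The width, nonlinearity, and depths are illustrative; the sketch only shows how the choice of $\alpha_L$ changes the magnitude of the forward pass, not the paper's full analysis.

```python
# Forward pass of a scaled residual stack: x_{l+1} = x_l + alpha_L * tanh(W_l x_l).
import numpy as np

def forward(x, depth, alpha, rng):
    d = x.shape[0]
    for _ in range(depth):
        W = rng.standard_normal((d, d)) / np.sqrt(d)     # standard i.i.d. initialization
        x = x + alpha * np.tanh(W @ x)                   # residual update scaled by alpha_L
    return x

x0 = np.random.default_rng(0).standard_normal(64)
for depth in (10, 100, 1000):
    for label, alpha in [("1/L", 1.0 / depth), ("1/sqrt(L)", 1.0 / np.sqrt(depth)), ("1", 1.0)]:
        out = forward(x0, depth, alpha, np.random.default_rng(1))
        ratio = np.linalg.norm(out) / np.linalg.norm(x0)
        print(f"L={depth:5d}  alpha_L={label:9s}  |x_L|/|x_0| = {ratio:.3f}")
```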
We propose a new procedure for online change-point detection. Our approach expands on the idea of maximizing a discrepancy measure between the pre-change and post-change distributions. This leads to a flexible procedure suitable for both parametric and nonparametric scenarios. We prove guarantees on the average run length of the procedure and on its expected detection delay. The efficiency of the algorithm is illustrated through numerical experiments on synthetic and real-world data sets.
We provide results that exactly quantify how data augmentation affects the convergence rate and variance of estimates. They lead to some unexpected findings: Contrary to common intuition, data augmentation may increase rather than decrease the uncertainty of estimates, such as the empirical prediction risk. Our main theoretical tool is a limit theorem for functions of randomly transformed, high-dimensional random vectors. The proof draws on work in probability on noise stability of functions of many variables. The pathological behavior we identify is not a consequence of complex models, but can occur even in the simplest settings -- one of our examples is a ridge regressor with two parameters. On the other hand, our results also show that data augmentation can have real, quantifiable benefits.
We study synchronous Q-learning with Polyak-Ruppert averaging (a.k.a. averaged Q-learning) in a $\gamma$-discounted MDP. We establish asymptotic normality for the averaged iterate $\bar{\boldsymbol{Q}}_T$. Furthermore, we show that $\bar{\boldsymbol{Q}}_T$ is actually a regular asymptotically linear (RAL) estimator of the optimal Q-value function $\boldsymbol{Q}^*$ with the most efficient influence function. This implies that the averaged Q-learning iterate has the smallest asymptotic variance among all RAL estimators. In addition, we bound the $\ell_{\infty}$ error $\mathbb{E}\|\bar{\boldsymbol{Q}}_T - \boldsymbol{Q}^*\|_{\infty}$, showing that it matches the instance-dependent lower bound as well as the optimal minimax complexity lower bound. As a byproduct, we find that the Bellman noise has sub-Gaussian coordinates with variance $\mathcal{O}((1-\gamma)^{-1})$ instead of the prevailing $\mathcal{O}((1-\gamma)^{-2})$ under the standard bounded-reward assumption. The sub-Gaussian result has the potential to improve the sample complexity of many RL algorithms. In short, our theoretical analysis shows that averaged Q-learning is statistically efficient.
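For concreteness, here is a hedged sketch of synchronous Q-learning with Polyak-Ruppert averaging on a tiny randomly generated MDP; the MDP, step-size schedule, and horizon are illustrative choices rather than the paper's exact setting.

```python
# Synchronous Q-learning with a running Polyak-Ruppert average of the iterates.
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, T = 5, 3, 0.9, 10000
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] is a distribution over next states
R = rng.uniform(size=(S, A))                 # bounded rewards

Q = np.zeros((S, A))
Q_bar = np.zeros((S, A))
for t in range(1, T + 1):
    # synchronous update: one sampled transition for every (s, a) pair
    next_states = np.array([[rng.choice(S, p=P[s, a]) for a in range(A)] for s in range(S)])
    target = R + gamma * Q[next_states].max(axis=2)
    eta = 1.0 / (1.0 + (1.0 - gamma) * t)    # rescaled linear step size (illustrative)
    Q += eta * (target - Q)
    Q_bar += (Q - Q_bar) / t                 # Polyak-Ruppert average of the iterates

Q_star = np.zeros((S, A))                    # reference optimal Q-values via value iteration
for _ in range(2000):
    Q_star = R + gamma * np.einsum("sap,p->sa", P, Q_star.max(axis=1))
print("sup-norm error of averaged iterate:", float(np.abs(Q_bar - Q_star).max()))
```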
Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties. In this paper, we study the relationship between random, wide, fully connected, feedforward networks with more than one hidden layer and Gaussian processes with a recursive kernel definition. We show that, under broad conditions, as we make the architecture increasingly wide, the implied random function converges in distribution to a Gaussian process, formalising and extending existing results by Neal (1996) to deep networks. To evaluate convergence rates empirically, we use maximum mean discrepancy. We then compare finite Bayesian deep networks from the literature to Gaussian processes in terms of the key predictive quantities of interest, finding that in some cases the agreement can be very close. We discuss the desirability of Gaussian process behaviour and review non-Gaussian alternative models from the literature.
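The recursive kernel mentioned above can be written down explicitly for ReLU activations; the following hedged sketch computes a standard NNGP kernel of this form (the arc-cosine recursion). The weight and bias variances and the depth are illustrative, and this is a generic construction rather than the paper's specific experimental setup.

```python
# Recursive NNGP kernel for a fully connected ReLU network (arc-cosine recursion).
import numpy as np

def nngp_kernel(X, depth=3, sigma_w2=2.0, sigma_b2=0.0):
    K = sigma_b2 + sigma_w2 * (X @ X.T) / X.shape[1]          # input-layer kernel
    for _ in range(depth):
        d = np.sqrt(np.outer(np.diag(K), np.diag(K)))
        theta = np.arccos(np.clip(K / d, -1.0, 1.0))
        K = sigma_b2 + sigma_w2 * d * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
    return K

X = np.random.default_rng(0).standard_normal((5, 10))
print(np.round(nngp_kernel(X), 3))
```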
We study the group testing problem, where the goal is to identify a set of $k$ individuals infected with a rare disease, based on the outcomes of pooled tests which return a positive result whenever the tested group contains at least one infected individual. We consider two different simple random procedures for assigning individuals to tests: the constant-column design and the Bernoulli design. Our first set of results concerns the fundamental statistical limits. For the constant-column design, we give a new information-theoretic lower bound which implies that the proportion of correctly identifiable infected individuals undergoes a sharp "all-or-nothing" phase transition when the number of tests crosses a particular threshold. For the Bernoulli design, we determine the exact number of tests required to solve the associated detection problem (where the goal is to distinguish a group testing instance from pure noise), improving both the upper and lower bounds of Truong, Aldridge, and Scarlett (2020). For both group testing models, we also study the power of computationally efficient (polynomial-time) inference procedures. We determine the exact number of tests required by the class of low-degree polynomial algorithms to solve the detection problem. This provides evidence for an inherent computational-statistical gap in both the detection and recovery problems at small sparsity levels. Notably, our evidence is contrary to that of Iliopoulos and Zadik (2021), who predicted the absence of such a gap in the Bernoulli design.
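A hedged illustration of the Bernoulli design: each individual joins each pooled test independently with some probability, and a pool is positive iff it contains an infected individual. The decoder below (COMP, which declares healthy anyone appearing in a negative test) is a standard baseline used only to make the simulation concrete; it is not the inference procedure analyzed in the paper, and all parameter choices are illustrative.

```python
# Simulation of the Bernoulli group testing design with a simple COMP decoder.
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 2000, 20, 400                       # population size, number infected, number of tests
p = np.log(2) / k                             # a common per-test inclusion probability
infected = np.zeros(n, dtype=bool)
infected[rng.choice(n, size=k, replace=False)] = True

design = rng.random((m, n)) < p               # Bernoulli test-assignment matrix
outcomes = design.astype(int) @ infected.astype(int) > 0   # positive iff a pool contains an infected member

# COMP decoding: declare healthy anyone who appears in at least one negative test
appears_in_negative = design[~outcomes].any(axis=0)
declared_infected = ~appears_in_negative
print("false negatives:", int((infected & ~declared_infected).sum()),
      "false positives:", int((~infected & declared_infected).sum()))
```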
A significant obstacle in the development of robust machine learning models is covariate shift, a form of distribution shift that occurs when the input distributions of the training and test sets differ while the conditional label distributions remain the same. Despite the prevalence of covariate shift in real-world applications, a theoretical understanding in the context of modern machine learning has remained lacking. In this work, we examine the exact high-dimensional asymptotics of random feature regression under covariate shift and present a precise characterization of the limiting test error, bias, and variance in this setting. Our results motivate a natural partial order over covariate shifts that provides a sufficient condition for determining when the shift will harm (or even help) test performance. We find that overparameterized models exhibit enhanced robustness to covariate shift, providing one of the first theoretical explanations for this intriguing phenomenon. Additionally, our analysis reveals an exact linear relationship between the in-distribution and out-of-distribution generalization performance, offering an explanation for this surprising recent empirical observation.
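A hedged sketch of the random features regression model studied above, evaluated under a simple covariate shift (train and test inputs drawn with different scales while the label rule is unchanged). The dimensions, ridge penalty, and the particular shift are illustrative; the paper's analysis is asymptotic and exact rather than simulation-based.

```python
# Random features (ReLU) ridge regression evaluated in- and out-of-distribution.
import numpy as np

rng = np.random.default_rng(0)
d, N, n_train, n_test, lam = 30, 300, 400, 400, 1e-2
W = rng.standard_normal((d, N)) / np.sqrt(d)          # random (untrained) first-layer weights
beta = rng.standard_normal(d) / np.sqrt(d)            # ground-truth linear label rule

def sample(n, scale):
    X = rng.standard_normal((n, d)) * scale           # covariate shift = different input scale
    return X, X @ beta + 0.1 * rng.standard_normal(n) # conditional label rule is unchanged

Xtr, ytr = sample(n_train, scale=1.0)
Xin, yin = sample(n_test, scale=1.0)                  # fresh in-distribution test set
Xout, yout = sample(n_test, scale=1.5)                # covariate-shifted test set

F = np.maximum(Xtr @ W, 0.0)                          # ReLU random features
a = np.linalg.solve(F.T @ F + lam * np.eye(N), F.T @ ytr)   # ridge-trained second layer
for name, (X, y) in [("in-distribution", (Xin, yin)), ("out-of-distribution", (Xout, yout))]:
    pred = np.maximum(X @ W, 0.0) @ a
    print(name, "test error:", round(float(np.mean((pred - y) ** 2)), 4))
```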
Random forests remain among the most popular off-the-shelf supervised learning algorithms. Despite their well-documented empirical success, until recently few theoretical results were available to describe their performance and behavior. In this work, we push beyond recent work on consistency and asymptotic normality by establishing rates of convergence for random forests and other supervised learning ensembles. We develop the notion of generalized U-statistics and show that within this framework, random forest predictions can remain asymptotically normal for larger subsample sizes than previously established. We also provide Berry-Esseen bounds in order to quantify the rate at which this convergence occurs, making precise the roles of the subsample size and the number of trees in determining the distribution of random forest predictions.
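A hedged sketch of the subsampling structure that makes such an ensemble an (incomplete, generalized) U-statistic: each tree is fit on a random subsample drawn without replacement, and the prediction is the average over trees. The data-generating process, subsample size, and tree settings are illustrative.

```python
# Subsampled tree ensemble: average of a base learner over random subsamples.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n, subsample, n_trees = 1000, 100, 300
X = rng.uniform(-1, 1, size=(n, 2))
y = np.sin(np.pi * X[:, 0]) + 0.5 * rng.standard_normal(n)
x0 = np.array([[0.25, 0.0]])                     # prediction point of interest

preds = []
for _ in range(n_trees):
    idx = rng.choice(n, size=subsample, replace=False)          # subsample without replacement
    tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X[idx], y[idx])
    preds.append(tree.predict(x0)[0])

print("ensemble prediction at x0:", round(float(np.mean(preds)), 3),
      "spread across trees:", round(float(np.std(preds)), 3))
```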
We develop new theoretical results on matrix perturbation to shed light on the impact of architecture on the performance of a deep network. In particular, we explain analytically what deep learning practitioners have long observed empirically: the parameters of some deep architectures (e.g., residual networks, ResNets, and Dense networks, DenseNets) are easier to optimize than others (e.g., convolutional networks, ConvNets). Building on our earlier work connecting deep networks with continuous piecewise-affine splines, we develop an exact local linear representation of a deep network layer for a family of modern deep networks that includes ConvNets at one end of a spectrum and ResNets, DenseNets, and other networks with skip connections at the other. For regression and classification tasks that optimize the squared-error loss, we show that the optimization loss surface of a modern deep network is piecewise quadratic in the parameters, with local shape governed by the singular values of a matrix that is a function of the local linear representation. We develop new perturbation results for how the singular values of matrices of this sort behave as we add a fraction of the identity and multiply by certain diagonal matrices. A direct application of our perturbation results explains analytically why a network with skip connections (such as a ResNet or DenseNet) is easier to optimize than a ConvNet: thanks to its more stable singular values and smaller condition number, the local loss surface of such a network is less erratic, less eccentric, and features local minima that are more accommodating to gradient-based optimization. Our results also shed new light on the impact of different nonlinear activation functions on a deep network's singular values, regardless of its architecture.
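As a toy numerical illustration of the perturbation phenomenon described above (not the paper's formal result), adding a fraction of the identity, which is the linear-algebra footprint of a skip connection, stabilizes the singular values of a random layer map and shrinks its condition number. The matrix size and the fraction of the identity are illustrative.

```python
# Condition number of a random layer map with and without an identity (skip) term.
import numpy as np

rng = np.random.default_rng(0)
d = 200
A = rng.standard_normal((d, d)) / np.sqrt(d)     # toy stand-in for a layer's local linear map
B = np.eye(d) + 0.25 * A                         # same map behind a skip connection

for name, M in [("no skip  ", A), ("with skip", B)]:
    s = np.linalg.svd(M, compute_uv=False)
    print(name, "condition number =", round(float(s[0] / s[-1]), 1))
```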
This paper derives confidence intervals (CIs) and time-uniform confidence sequences (CSs) for the classical problem of estimating an unknown mean from bounded observations. We present a general approach for deriving concentration bounds that can be seen as a generalization (and improvement) of the celebrated Chernoff method. At its core, it is based on deriving a new class of composite nonnegative martingales, with strong connections to testing by betting and the method of mixtures. We show how to extend these ideas to sampling without replacement, another heavily studied problem. In all cases, our bounds are adaptive to the unknown variance, and they empirically vastly outperform existing approaches based on Hoeffding's or empirical Bernstein's inequalities and their recent supermartingale generalizations. In short, we establish a new state of the art for four fundamental problems: CSs and CIs for bounded means, when sampling with or without replacement.
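To make the "testing by betting" idea concrete, here is a hedged, simplified sketch of a betting-style confidence interval for the mean of $[0,1]$-valued observations: each candidate mean is tested with a nonnegative capital process, and the interval keeps every candidate whose capital never reaches $1/\alpha$ (Ville's inequality controls the error). The constant bet fraction and the grid are simplifications of the paper's adaptive (predictable) bets.

```python
# Betting-style confidence interval for the mean of [0, 1]-valued data.
import numpy as np

def betting_ci(x, alpha=0.05, grid_size=200, bet_fraction=0.5):
    grid = np.linspace(0.001, 0.999, grid_size)
    keep = []
    for m in grid:
        lam = bet_fraction * min(1.0 / m, 1.0 / (1.0 - m))   # keeps both capital processes nonnegative
        capital = 0.5 * (np.cumprod(1.0 + lam * (x - m)) +   # bet that the mean exceeds m ...
                         np.cumprod(1.0 - lam * (x - m)))    # ... hedged with the opposite bet
        if capital.max() < 1.0 / alpha:                      # never reached the rejection threshold
            keep.append(m)
    return (min(keep), max(keep)) if keep else None

rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=500)                                 # true mean 2/7, roughly 0.286
print(betting_ci(x))
```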
We consider $K$-armed stochastic bandits and bounds on the cumulative regret up to time $T$. We are interested in strategies that simultaneously achieve a distribution-free regret of optimal order $\sqrt{KT}$ and a distribution-dependent regret that is optimal, i.e., matching the $\kappa \ln T$ lower bound of Lai and Robbins (1985) and Burnetas and Katehakis (1996), where $\kappa$ is the optimal problem-dependent constant. This constant $\kappa$ depends on the model $\mathcal{D}$ considered (the family of possible distributions over the arms). Ménard and Garivier (2017) provided strategies achieving such a bi-optimality in the parametric case of models given by one-dimensional exponential families, while Lattimore (2016, 2018) did so for the family of (sub-)Gaussian distributions with variance less than $1$. We extend this result to the non-parametric case of all distributions over $[0,1]$. We do so by combining the MOSS strategy of Audibert and Bubeck (2009), which enjoys a distribution-free regret of optimal order $\sqrt{KT}$, and the KL-UCB strategy of Cappé et al. (2013), for which we provide the first analysis of an optimal distribution-dependent $\kappa \ln T$ regret. We were able to obtain this non-parametric bi-optimality result while striving to simplify the proofs (of previously known regret bounds as well as of the new analyses carried out); a second merit of this contribution is therefore to provide proofs of classical regret bounds for index-based strategies for $K$-armed stochastic bandits.
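For reference, here is a hedged sketch of the MOSS index policy mentioned above (Audibert and Bubeck, 2009); the combined strategy analyzed in the paper also involves KL-UCB indices, which are omitted here. The arm distributions and horizon are illustrative.

```python
# MOSS index policy on a toy Bernoulli bandit.
import numpy as np

rng = np.random.default_rng(0)
means, T = np.array([0.4, 0.5, 0.6]), 20000
K = len(means)
counts, sums = np.zeros(K), np.zeros(K)

for t in range(T):
    if t < K:
        arm = t                                               # pull each arm once
    else:
        bonus = np.sqrt(np.maximum(np.log(T / (K * counts)), 0.0) / counts)
        arm = int(np.argmax(sums / counts + bonus))           # MOSS index
    reward = float(rng.random() < means[arm])                 # Bernoulli rewards in [0, 1]
    counts[arm] += 1
    sums[arm] += reward

print("regret:", round(float(T * means.max() - sums.sum()), 1), "pulls:", counts.astype(int))
```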
Distributed Mean Estimation (DME) is a central building block in federated learning, where clients send local gradients to a parameter server for averaging and updating the model. Due to communication constraints, clients often use lossy compression techniques to compress the gradients, resulting in estimation inaccuracies. DME is more challenging when clients have diverse network conditions, such as constrained communication budgets and packet losses. In such settings, DME techniques often incur a significant increase in the estimation error, leading to degraded learning performance. In this work, we propose a robust DME technique named EDEN that naturally handles heterogeneous communication budgets and packet losses. We derive appealing theoretical guarantees for EDEN and evaluate it empirically. Our results demonstrate that EDEN consistently improves over state-of-the-art DME techniques.
We study the distribution of singular values of products of random matrices pertinent to the analysis of deep neural networks. The matrices resemble products of sample covariance matrices; however, an important difference is that the population covariance matrices, which in statistics and random matrix theory are assumed to be non-random, or random but independent of the random data matrix, are now certain functions of the random data matrices (the synaptic weight matrices in deep neural network terminology). The problem has been treated in recent work [25, 13] by using techniques of free probability theory. Since, however, free probability theory deals with population covariance matrices that are independent of the data matrices, its applicability has to be justified. A justification was given in [22] for Gaussian data matrices with independent entries, a standard analytical model of free probability, by using a version of the techniques of random matrix theory. In this paper, using another, more streamlined version of random matrix theory techniques, we generalize the results of [22] to the case where the entries of the synaptic weight matrices are just independent identically distributed random variables with zero mean and finite fourth moment. This, in particular, extends the property of so-called macroscopic universality to the random matrices under consideration.
Successful deep learning models often involve training neural network architectures that contain more parameters than the number of training samples. Such overparameterized models have been extensively studied in recent years, and the virtues of overparameterization have been established from both statistical and computational perspectives, through the double-descent phenomenon and through structural properties of the optimization landscape. Despite the remarkable success of deep learning architectures in the overparameterized regime, it is also well known that these models are highly vulnerable to small adversarial perturbations of their inputs. Even when adversarially trained, their performance on perturbed inputs (robust generalization) is considerably worse than their best attainable performance on benign inputs (standard generalization). It is therefore imperative to understand how overparameterization fundamentally affects robustness. In this paper, we provide a precise characterization of the role of overparameterization in robustness by focusing on random features regression models (two-layer neural networks with random first-layer weights). We consider a regime where the sample size, the input dimension, and the number of parameters grow in proportion to each other, and derive an asymptotically exact formula for the robust generalization error when the model is adversarially trained. Our developed theory reveals the nontrivial effect of overparameterization on robustness and indicates that for adversarially trained random features models, high overparameterization can hurt robust generalization.
Sliced mutual information (SMI) is defined as an average of mutual information (MI) terms between one-dimensional random projections of the random variables. It serves as a surrogate measure of dependence to classic MI that preserves many of its properties but is more scalable to high dimensions. However, a quantitative characterization of how SMI itself and estimation rates thereof depend on the ambient dimension, which is crucial to the understanding of scalability, remains obscure. This work extends the original SMI definition to $k$-SMI, which considers projections to $k$-dimensional subspaces, and provides a multifaceted account of its dependence on dimension. Using a new result on the continuity of differential entropy in the 2-Wasserstein metric, we derive sharp bounds on the error of Monte Carlo (MC)-based estimates of $k$-SMI, with explicit dependence on $k$ and the ambient dimension, revealing their interplay with the number of samples. We then combine the MC integrator with a neural estimation framework to provide an end-to-end $k$-SMI estimator, for which optimal convergence rates are established. We also explore asymptotics of the population $k$-SMI as the dimension grows, providing Gaussian approximation results with a residual that decays under appropriate moment bounds. Our theory is validated by numerical experiments and is applied to sliced InfoGAN, which altogether provides a comprehensive quantitative account of the scalability question of $k$-SMI, including SMI as a special case when $k = 1$.
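A hedged sketch of the Monte Carlo averaging behind $k$-SMI: average a mutual information estimate over random $k$-dimensional projections of $X$ and $Y$. The Gaussian plug-in MI estimator below is only a stand-in for illustration (the paper analyzes the MC error and pairs the integrator with a neural estimation framework), and all parameters are illustrative.

```python
# Monte Carlo estimate of k-SMI using a Gaussian plug-in MI estimate per projection.
import numpy as np

def gaussian_mi(a, b):
    """Plug-in MI estimate assuming joint Gaussianity of the projected variables."""
    logdet = lambda z: np.linalg.slogdet(np.cov(z, rowvar=False))[1]
    return 0.5 * (logdet(a) + logdet(b) - logdet(np.hstack([a, b])))

def k_smi(X, Y, k=2, n_proj=200, seed=0):
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_proj):
        A, _ = np.linalg.qr(rng.standard_normal((X.shape[1], k)))   # random k-dimensional frame
        B, _ = np.linalg.qr(rng.standard_normal((Y.shape[1], k)))
        vals.append(gaussian_mi(X @ A, Y @ B))
    return float(np.mean(vals))

rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 10))
Y = X + rng.standard_normal((5000, 10))            # dependent pair
print(k_smi(X, Y, k=2), k_smi(X, rng.standard_normal((5000, 10)), k=2))
```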