Jacobian and Hessian regularization aim to reduce the magnitude of the first- and second-order partial derivatives with respect to neural network inputs, and they are predominantly used to ensure the adversarial robustness of image classifiers. In this work, we generalize previous efforts by extending the target matrix from zero to any matrix that admits efficient matrix-vector products. The proposed paradigm allows us to construct novel regularization terms that enforce symmetry or diagonality on square Jacobian and Hessian matrices. However, the major challenge for Jacobian and Hessian regularization has been high computational complexity. We introduce Lanczos-based spectral norm minimization to tackle this difficulty. This technique uses a parallelized implementation of the Lanczos algorithm and is capable of effective and stable regularization of large Jacobian and Hessian matrices. Theoretical justifications and empirical evidence are provided for the proposed paradigm and technique. We carry out exploratory experiments to validate the effectiveness of our novel regularization terms and comparative experiments to evaluate Lanczos-based spectral norm minimization against prior methods. Results show that the proposed methodologies are advantageous for a wide range of tasks.
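To make the core computational tool concrete, here is a minimal NumPy sketch (not the paper's parallelized implementation; all names are ours) of Lanczos-based estimation of a spectral norm from matrix-vector products alone, by running the Lanczos iteration on the symmetric operator $v \mapsto J^\top(Jv)$:

```python
import numpy as np

def lanczos_spectral_norm(matvec, rmatvec, dim, num_iters=20, seed=0):
    """Estimate the spectral norm of an operator J given only
    matvec(v) = J v and rmatvec(u) = J^T u, by applying Lanczos
    to the symmetric operator v -> J^T (J v)."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(dim)
    q /= np.linalg.norm(q)
    q_prev, beta = np.zeros(dim), 0.0
    alphas, betas = [], []
    for _ in range(num_iters):
        w = rmatvec(matvec(q))                 # w = J^T J q
        alpha = q @ w
        w = w - alpha * q - beta * q_prev
        alphas.append(alpha)
        beta = np.linalg.norm(w)
        betas.append(beta)
        if beta < 1e-10:                       # Krylov space is exhausted
            break
        q_prev, q = q, w / beta
    # The tridiagonal matrix's top eigenvalue approximates lambda_max(J^T J).
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return float(np.sqrt(max(np.linalg.eigvalsh(T).max(), 0.0)))

# Usage on an explicit matrix (in practice matvec/rmatvec would be
# autodiff Jacobian-vector products):
A = np.random.default_rng(1).standard_normal((50, 30))
est = lanczos_spectral_norm(lambda v: A @ v, lambda u: A.T @ u, 30)
assert np.isclose(est, np.linalg.norm(A, 2), rtol=1e-3)
```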
Existing score-based generative models (SGMs) can be categorized into constrained SGMs (CSGMs) and unconstrained SGMs (USGMs) according to their parameterization. CSGMs model the probability density as a Boltzmann distribution and parameterize their predictions as the negative gradient of some scalar-valued energy function, whereas USGMs adopt flexible architectures that directly estimate scores without explicitly modeling an energy function. In this paper, we demonstrate that the architectural constraints of CSGMs may limit their score-matching ability. In addition, we show that the inability of USGMs to preserve the property of conservativeness can lead to severe sampling inefficiency and degraded sampling performance in practice. To address these issues, we propose Quasi-Conservative Score-based Generative Models (QCSGMs), which retain the advantages of both CSGMs and USGMs. Our theoretical derivations show that the training objective of QCSGMs can be integrated efficiently into the training process by leveraging the Hutchinson trace estimator. Experimental results on the CIFAR-10, CIFAR-100, ImageNet, and SVHN datasets validate the effectiveness of QCSGMs. Finally, we justify the advantage of QCSGMs with an example of a one-layer autoencoder.
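Since the abstract highlights the Hutchinson trace estimator as the device that makes the QCSGM objective tractable, the following hedged PyTorch sketch shows that estimator applied to the trace of a score network's input Jacobian; score_fn and every other name here is our own placeholder, not the paper's API.

```python
import torch

def hutchinson_jacobian_trace(score_fn, x, num_probes=1):
    """Unbiased estimate of tr(ds/dx) for a score s = score_fn(x) of
    shape (batch, d), via Hutchinson's estimator E_v[v^T (ds/dx) v]
    with Rademacher probe vectors v."""
    x = x.requires_grad_(True)
    s = score_fn(x)
    estimate = 0.0
    for _ in range(num_probes):
        v = torch.randint_like(s, 2) * 2.0 - 1.0            # entries +-1
        vjp = torch.autograd.grad(s, x, grad_outputs=v,
                                  create_graph=True)[0]     # v^T (ds/dx)
        estimate = estimate + (vjp * v).flatten(1).sum(dim=1)
    return estimate / num_probes
```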
We study the effect of mini-batching on the loss landscape of deep neural networks using spiked, field-dependent random matrix theory. We show that the magnitudes of the extremal eigenvalues of the batch Hessian are larger than those of the empirical Hessian, and we obtain similar results for the generalized Gauss-Newton approximation of the Hessian. As a consequence of our theorems, we derive an analytical expression for the maximal learning rate as a function of batch size, informing practical training regimens for both stochastic gradient descent (linear scaling) and adaptive algorithms such as Adam (square-root scaling) on smooth, non-convex deep neural networks. While the linear scaling rule for stochastic gradient descent has been derived before under more restrictive conditions, which we generalize, the square-root scaling rule for adaptive optimizers is, to our knowledge, completely novel. For stochastic second-order and adaptive methods, we derive that the minimal damping coefficient is proportional to the ratio of the learning rate to the batch size. We validate our claims with VGG/WideResNet architectures on the CIFAR-100 and ImageNet datasets. Based on our investigation of the sub-sampled Hessian, we develop a stochastic Lanczos-quadrature-based, on-the-fly learning rate and momentum learner, which avoids the need for expensive multiple evaluations of these key hyper-parameters and shows good preliminary results on a pre-activation ResNet architecture on CIFAR-100.
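The two scaling rules stated in this abstract are simple enough to state as code; the helper below is purely illustrative (our own function, with no constants taken from the paper):

```python
def scaled_lr(base_lr, base_batch, new_batch, optimizer="sgd"):
    """Learning-rate scaling with batch size, per the rules above:
    linear for SGD, square-root for adaptive optimizers such as Adam."""
    ratio = new_batch / base_batch
    return base_lr * (ratio if optimizer == "sgd" else ratio ** 0.5)

# Tuned at batch size 128 with lr 0.1, then moving to batch size 512:
# scaled_lr(0.1, 128, 512, "sgd")  -> 0.4
# scaled_lr(0.1, 128, 512, "adam") -> 0.2
```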
Adversarial training (AT) has become a popular choice for training robust networks. However, it tends to sacrifice clean accuracy to obtain satisfactory robustness, and it suffers from a large generalization error. To address these concerns, we propose Smooth Adversarial Training (SAT), guided by our analysis of the eigenspectrum of the loss Hessian. We find that curriculum learning, a scheme that emphasizes starting "easy" and gradually ramping up the "difficulty" of training, smooths the adversarial loss landscape for a suitably chosen difficulty metric. We present a general formulation of curriculum learning in the adversarial setting and propose two difficulty metrics, based on the maximal Hessian eigenvalue (H-SAT) and the softmax probability (P-SAT). We demonstrate that SAT stabilizes network training even for large perturbation norms and allows the network to operate on a better clean-accuracy-versus-robustness trade-off curve than AT. This leads to significant improvements in both clean accuracy and robustness compared to AT, TRADES, and other baselines. To highlight a few results: our best model improves clean and robust accuracy by 6% and 1%, respectively, on CIFAR-100 compared to AT, and on Imagenette, a ten-class subset of ImageNet, it outperforms AT by 23% and 3% in clean and robust accuracy, respectively.
It is a highly desirable property for deep networks to be robust against small input changes. One popular way to achieve this property is by designing networks with a small Lipschitz constant. In this work, we propose a new technique for constructing such Lipschitz networks that has a number of desirable properties: it can be applied to any linear network layer (fully-connected or convolutional), it provides formal guarantees on the Lipschitz constant, it is easy to implement and efficient to run, and it can be combined with any training objective and optimization method. In fact, our technique is the first in the literature to achieve all of these properties simultaneously. Our main contribution is a rescaling-based weight matrix parameterization that guarantees each network layer has a Lipschitz constant of at most one and that results in learned weight matrices that are close to orthogonal. Hence we call such layers Almost-Orthogonal Lipschitz (AOL). In the context of image classification with certified robust accuracy, experiments and ablation studies confirm that AOL layers achieve results comparable to most existing methods, while being easier to implement and more broadly applicable, because they do not require computationally expensive matrix orthogonalization or inversion steps as part of the network architecture. We provide code at https://github.com/berndprach/aol.
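Our reading of the rescaling-based parameterization is sketched below in NumPy: a parameter matrix P is post-multiplied by a diagonal rescaling computed from |P^T P|, which provably caps the spectral norm at one. The epsilon guard and names are our own, so consult the released code for the authors' exact construction.

```python
import numpy as np

def aol_rescale(P, eps=1e-12):
    """AOL-style rescaling (as we understand it): with
    D_jj = (sum_i |P^T P|_ji)^(-1/2), the matrix P @ D has
    spectral norm at most 1."""
    T = np.abs(P.T @ P)
    d = 1.0 / np.sqrt(T.sum(axis=1) + eps)
    return P * d                       # scales column j of P by d[j]

W = aol_rescale(np.random.default_rng(0).standard_normal((64, 32)))
assert np.linalg.norm(W, 2) <= 1.0 + 1e-9
```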
The highly non-linear nature of deep neural networks makes them susceptible to adversarial examples and gives them unstable gradients, which hinders interpretability. However, existing methods for addressing these issues, such as adversarial training, are expensive and often sacrifice predictive accuracy. In this work, we consider curvature, a mathematical quantity that encodes the degree of non-linearity. Using it, we demonstrate low-curvature neural networks (LCNNs) that obtain drastically lower curvature than standard models while exhibiting similar predictive performance, which leads to improved robustness and stable gradients at the cost of only a marginal increase in training time. To achieve this, we minimize a data-independent upper bound on the curvature of a neural network that decomposes overall curvature in terms of the curvatures and slopes of its constituent layers. To minimize this bound efficiently, we introduce two novel architectural components: first, a non-linearity called centered softplus, a stable variant of the softplus non-linearity, and second, a Lipschitz-constrained batch normalization layer. Our experiments show that LCNNs have lower curvature, more stable gradients, and increased off-the-shelf adversarial robustness compared to their standard high-curvature counterparts, without harming predictive performance. Our approach is easy to use and can readily be incorporated into existing neural network models.
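As a hedged illustration of the first component, here is what we take "centered softplus" to mean: softplus shifted so that it passes through the origin, keeping its slope in (0, 1) (hence 1-Lipschitz) and its curvature bounded. The paper's exact parameterization may differ.

```python
import math
import torch
import torch.nn.functional as F

def centered_softplus(x, beta=1.0):
    """Softplus shifted so that phi(0) = 0; its derivative is
    sigmoid(beta * x), so the function is 1-Lipschitz, and its second
    derivative is bounded by beta / 4."""
    return F.softplus(x, beta=beta) - math.log(2.0) / beta
```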
We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation. Typically, enforcing invertibility requires partitioning dimensions or restricting network architectures. In contrast, our approach only requires adding a simple normalization step during training, already available in standard frameworks. Invertible ResNets define a generative model which can be trained by maximum likelihood on unlabeled data. To compute likelihoods, we introduce a tractable approximation to the Jacobian log-determinant of a residual block. Our empirical evaluation shows that invertible ResNets perform competitively with both state-of-the-art image classifiers and flow-based generative models, something that has not been previously achieved with a single architecture.
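Because each residual block x -> x + g(x) is constrained to have a contractive g, the block can be inverted by simple fixed-point iteration, which is the basis of the generative direction. A minimal sketch (iteration count is our own choice):

```python
import torch

def invert_residual_block(g, y, num_iters=50):
    """Solve y = x + g(x) for x via the fixed-point iteration
    x <- y - g(x), which converges whenever Lip(g) < 1 (enforced in
    invertible ResNets through spectral normalization)."""
    x = y.clone()
    for _ in range(num_iters):
        x = y - g(x)
    return x
```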
We pursue a line of research that seeks to regularize the spectral norm of the Jacobian of the input-output mapping of deep neural networks. While previous work relies on upper-bounding techniques, we provide a scheme that targets the exact spectral norm. We demonstrate that, compared to previous spectral-regularization techniques, our algorithm improves generalization performance while maintaining strong protection against natural and adversarial noise. Moreover, we revisit some of the earlier reasoning about the strong adversarial protection that Jacobian regularization provides and show that it can be misleading.
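A minimal autograd sketch of targeting the exact spectral norm (our own illustrative implementation, not the authors' algorithm): power iteration on J^T J using only vector-Jacobian products, with a double-backprop trick supplying the Jacobian-vector products.

```python
import torch

def jacobian_spectral_norm(f, x, num_iters=10):
    """Differentiable estimate of the top singular value of the
    Jacobian J = df/dx at input x, via power iteration on J^T J."""
    x = x.detach().requires_grad_(True)
    y = f(x)
    u = torch.randn_like(y)
    u = u / u.norm()
    for _ in range(num_iters):
        v = torch.autograd.grad(y, x, grad_outputs=u,
                                create_graph=True)[0]        # v = J^T u
        v = v / (v.norm() + 1e-12)
        # J v by differentiating <J^T w, v> with respect to w:
        w = torch.ones_like(y, requires_grad=True)
        jtw = torch.autograd.grad(y, x, grad_outputs=w,
                                  create_graph=True)[0]
        u = torch.autograd.grad((jtw * v).sum(), w,
                                create_graph=True)[0]        # u = J v
        sigma = u.norm()
        u = u / (sigma + 1e-12)
    return sigma   # can be added to the training loss and backpropagated
```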
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
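The paper's proto-algorithm for the model problem fits in a few lines of NumPy. The sketch below follows the sample-then-compress recipe with our own defaults for oversampling and power iterations (re-orthonormalize between power steps if the spectrum decays slowly):

```python
import numpy as np

def randomized_svd(A, k, oversample=10, power_iters=2, seed=0):
    """Randomized rank-k SVD: (1) sample the range of A with a Gaussian
    test matrix, (2) orthonormalize, (3) compress A to the subspace,
    (4) run a small deterministic SVD."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Y = A @ Omega
    for _ in range(power_iters):       # sharpen the captured subspace
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for the range
    B = Q.T @ A                        # (k + oversample) x n reduced matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]
```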
Network data are commonly collected in a variety of applications, representing either directly measured or statistically inferred connections between features of interest. In an increasing number of domains, these networks are collected over time, such as interactions among users of a social media platform on different days, or across multiple subjects, as in multi-subject studies of brain connectivity. When analyzing multiple large networks, dimensionality reduction techniques are often used to embed the networks in a more tractable low-dimensional space. To this end, we develop a framework for principal components analysis (PCA) on collections of networks via a specialized tensor decomposition that we term Semi-Symmetric Tensor PCA, or SS-TPCA. We derive computationally efficient algorithms for computing our proposed SS-TPCA decomposition and establish the statistical efficiency of our approach under a standard low-rank signal-plus-noise model. Remarkably, we show that SS-TPCA achieves the same estimation accuracy as classical matrix PCA, with error proportional to the square root of the number of vertices in the network rather than the number of edges, as one might expect. Our framework inherits many of the strengths of classical PCA and is suitable for a wide range of unsupervised learning tasks, including identifying principal networks, isolating meaningful changepoints or outlying observations, and characterizing the "variability network" of the most varying edges. Finally, we demonstrate the effectiveness of our proposal on simulated data and on an example from empirical legal studies. The techniques used to establish our main consistency results are surprisingly straightforward and may find use in a variety of other network analysis problems.
In this paper, we consider both first- and second-order techniques to address continuous optimization problems arising in machine learning. In the first-order case, we propose a framework for transitioning from deterministic or semi-deterministic to stochastic quadratic regularization methods. Leveraging the two-phase nature of stochastic optimization, we propose a novel first-order algorithm with adaptive sampling and adaptive step size. In the second-order case, we propose a novel stochastic damped L-BFGS method that improves on previous algorithms in the highly non-convex context of deep learning. Both algorithms are evaluated on well-known deep learning datasets and exhibit promising performance.
Certified robustness is a desirable property for deep neural networks in safety-critical applications, and popular training algorithms can certify the robustness of a neural network by computing a global bound on its Lipschitz constant. However, such a bound is often loose: it tends to over-regularize the neural network and degrade its natural accuracy. A tighter Lipschitz bound may provide a better trade-off between natural and certified accuracy, but is generally hard to compute exactly due to the non-convexity of the network. In this work, we propose an efficient and trainable local Lipschitz upper bound that considers the interactions between the activation functions (e.g., ReLU) and the weight matrices. Specifically, when computing the induced norm of a weight matrix, we eliminate the corresponding rows and columns where the activation function is guaranteed to be constant in the neighborhood of each given data point, which yields a provably tighter bound than the global Lipschitz constant of the neural network. Our method can be used as a plug-in module to tighten the Lipschitz bound in many certifiable training algorithms. Furthermore, we propose to clip activation functions (e.g., ReLU and MaxMin) with a learnable upper threshold and a sparsity loss to help the network achieve an even tighter local Lipschitz bound. Experimentally, we show that our method consistently outperforms state-of-the-art methods in both clean and certified accuracy on the MNIST, CIFAR-10, and TinyImageNet datasets with various network architectures.
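A simplified, hedged sketch of the row/column elimination idea for a single ReLU layer: interval bounds identify neurons that are provably inactive on an l2 ball around the data point, and induced norms are then taken over the surviving rows and columns only. The paper's bound is tighter than this product-of-norms version; the sketch is meant only to convey the mechanism.

```python
import numpy as np

def local_lipschitz_bound(W1, b1, W2, x, radius):
    """Local Lipschitz upper bound for x |-> W2 relu(W1 x + b1) on an
    l2 ball of the given radius around x. Neurons whose pre-activation
    is provably <= 0 on the ball contribute a zero row/column and are
    dropped before taking induced (spectral) norms."""
    z = W1 @ x + b1
    upper = z + radius * np.linalg.norm(W1, axis=1)   # worst case per neuron
    keep = upper > 0                                  # possibly-active neurons
    if not keep.any():
        return 0.0
    # The ReLU Jacobian is diag(d) with d in [0, 1], zero outside `keep`, so
    # ||W2 diag(d) W1||_2 <= ||W2[:, keep]||_2 * ||W1[keep, :]||_2.
    return np.linalg.norm(W2[:, keep], 2) * np.linalg.norm(W1[keep, :], 2)
```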
We investigate whether the Wigner semicircle and Marchenko-Pastur distributions, often used in theoretical analyses of deep neural networks, match empirically observed spectral densities. We find that, even allowing for outliers, the observed spectral shapes strongly deviate from these theoretical predictions. This raises major questions about the usefulness of these models in deep learning. We further show that theoretical results, such as the layered nature of critical points, strongly depend on using the exact form of these limiting spectral densities. We consider two new matrix ensembles, the random Wigner/Wishart ensemble product and the percolated Wigner/Wishart ensemble, both of which better match the observed spectra. They also give large discrete spectral peaks at the origin, providing a theoretical explanation for the observation that various optima can be connected by one-dimensional paths of low loss values. We further show that, in the case of a random matrix product, the weight of the discrete spectral component at 0 depends on the ratio of the dimensions of the weight matrices.
It is often desirable to reduce the dimensionality of a large dataset by projecting it onto a low-dimensional subspace. Matrix sketching has emerged as a powerful technique for performing such dimensionality reduction very efficiently. Even though there is an extensive literature on the worst-case performance of sketching, existing guarantees are typically very different from what is observed in practice. We exploit recent developments in the spectral analysis of random matrices to develop novel techniques that provide provably accurate expressions for the expected value of random projection matrices obtained via sketching. These expressions can be used to characterize the performance of dimensionality reduction in a variety of common machine learning tasks, ranging from low-rank approximation to iterative stochastic optimization. Our results apply to several popular sketching methods, including Gaussian and Rademacher sketches, and enable precise analysis of these methods in terms of the spectral properties of the data. Empirical results show that the expressions we derive reflect the actual performance of these sketching methods, down to lower-order effects and even constant factors.
We propose an efficient method for approximating natural gradient descent in neural networks which we call Kronecker-factored Approximate Curvature (K-FAC). K-FAC is based on an efficiently invertible approximation of a neural network's Fisher information matrix which is neither diagonal nor low-rank, and in some cases is completely non-sparse. It is derived by approximating various large blocks of the Fisher (corresponding to entire layers) as being the Kronecker product of two much smaller matrices. While only several times more expensive to compute than the plain stochastic gradient, the updates produced by K-FAC make much more progress optimizing the objective, which results in an algorithm that can be much faster than stochastic gradient descent with momentum in practice. And unlike some previously proposed approximate natural-gradient/Newton methods which use high-quality non-diagonal curvature matrices (such as Hessian-free optimization), K-FAC works very well in highly stochastic optimization regimes. This is because the cost of storing and inverting K-FAC's approximation to the curvature matrix does not depend on the amount of data used to estimate it, which is a feature typically associated only with diagonal or low-rank approximations to the curvature matrix.
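The computational point in the last sentences rests on a standard Kronecker identity: a Kronecker-factored block is inverted by inverting its two small factors. A minimal NumPy sketch in our own notation, assuming symmetric factors A (input-side) and G (output-side):

```python
import numpy as np

def kfac_inverse_vector_product(A, G, V):
    """Apply F^{-1} to a gradient matrix V of shape (out, in), where
    F = A (kron) G with symmetric A (in x in) and G (out x out), using
    (A kron G)^{-1} vec(V) = vec(G^{-1} V A^{-1}); only the two small
    factors are ever solved against."""
    return np.linalg.solve(G, np.linalg.solve(A, V.T).T)
```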
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm. Here, given access to a matrix $A$ through matrix-vector products, an accuracy parameter $\epsilon$, and a target rank $k$, the goal is to find a rank-$k$ matrix $Z$ with orthonormal columns such that $\|A(I - ZZ^\top)\|_{S_p} \leq (1+\epsilon)\min_{U^\top U = I_k}\|A(I - UU^\top)\|_{S_p}$, where $\|M\|_{S_p}$ denotes the $\ell_p$ norm of the singular values of $M$. For the special cases of $p = 2$ (Frobenius norm) and $p = \infty$ (spectral norm), Musco and Musco (NeurIPS 2015) obtained an algorithm based on Krylov methods that uses $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products, improving on the naive $\tilde{O}(k/\epsilon)$ dependence obtainable by the power method, where $\tilde{O}$ suppresses poly$(\log(dk/\epsilon))$ factors. Our main result is an algorithm that uses only $\tilde{O}(kp^{1/6}/\epsilon^{1/3})$ matrix-vector products and works for all $p \geq 1$. For $p = 2$, our bound improves the previous $\tilde{O}(k/\epsilon^{1/2})$ bound to $\tilde{O}(k/\epsilon^{1/3})$. Since the Schatten-$p$ and Schatten-$\infty$ norms are the same up to a $(1+\epsilon)$-factor when $p \geq (\log d)/\epsilon$, our bound recovers the result of Musco and Musco for $p = \infty$. Further, we prove a matrix-vector query lower bound of $\Omega(1/\epsilon^{1/3})$ for any fixed constant $p \geq 1$, showing that, surprisingly, $\tilde{\Theta}(1/\epsilon^{1/3})$ is the optimal complexity for constant $k$. To obtain our results, we introduce several new techniques, including optimizing over multiple Krylov subspaces simultaneously, and pinching inequalities for partitioned operators. Our lower bound for $p \in [1,2]$ uses the Araki-Lieb-Thirring trace inequality, whereas for $p > 2$ we appeal to a norm-compression inequality for aligned partitioned operators.
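For orientation, here is the Musco-Musco-style block Krylov baseline that this abstract improves upon, as a hedged NumPy sketch (this is not the new Schatten-$p$ algorithm; block width and depth are our own choices):

```python
import numpy as np

def block_krylov_low_rank(A, k, depth=4, seed=0):
    """Rank-k approximation from the block Krylov subspace
    [A O, (A A^T) A O, ..., (A A^T)^depth A O] for a random O."""
    rng = np.random.default_rng(seed)
    Y = A @ rng.standard_normal((A.shape[1], k))
    blocks = [Y]
    for _ in range(depth):
        Y = A @ (A.T @ Y)
        blocks.append(Y)
    Q, _ = np.linalg.qr(np.hstack(blocks))    # basis of the Krylov space
    B = Q.T @ A                               # project A into the subspace
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    Z = Q @ Ub[:, :k]                         # A ~ Z @ (Z.T @ A)
    return Z, s[:k], Vt[:k]
```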
Inserting an SVD meta-layer into neural networks is prone to make the covariance ill-conditioned, which could harm the model in terms of training stability and generalization ability. In this paper, we systematically study how to improve covariance conditioning by enforcing orthogonality on the Pre-SVD layer. We first investigate existing orthogonal treatments of the weights and find that, although these techniques improve conditioning, they hurt performance. To avoid such a side effect, we propose the Nearest Orthogonal Gradient (NOG) and Optimal Learning Rate (OLR). The effectiveness of our methods is validated in two applications: decorrelated Batch Normalization (BN) and Global Covariance Pooling (GCP). Extensive experiments on visual recognition demonstrate that our methods can simultaneously improve covariance conditioning and generalization. Combining them with orthogonal weight treatments can further boost performance. Moreover, we show that our orthogonality techniques can benefit generative models, yielding better latent disentanglement, through a series of experiments on various benchmarks. Code is available at: \href{https://github.com/KingJamesSong/OrthoImproveCond}{https://github.com/KingJamesSong/OrthoImproveCond}.
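We read NOG as a projection of the gradient onto the nearest orthogonal matrix, which has the classical closed form below (the polar factor from an SVD, i.e., the orthogonal Procrustes solution); the authors' exact update should be checked against the released code.

```python
import numpy as np

def nearest_orthogonal(G):
    """Return the orthogonal matrix closest to G in Frobenius norm:
    if G = U S V^T, then argmin over orthogonal O of ||G - O||_F
    is U V^T."""
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    return U @ Vt
```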
We describe an algorithm that learns two-layer residual units using rectified linear unit (ReLU) activation: suppose the input $\mathbf{x}$ is from a distribution with support space $\mathbb{R}^d$ and the ground-truth generative model is a residual unit of this type, given by $\mathbf{y} = \boldsymbol{B}^\ast\left[\left(\boldsymbol{A}^\ast\mathbf{x}\right)^+ + \mathbf{x}\right]$, where ground-truth network parameters $\boldsymbol{A}^\ast \in \mathbb{R}^{d\times d}$ represent a full-rank matrix with nonnegative entries and $\boldsymbol{B}^\ast \in \mathbb{R}^{m\times d}$ is full-rank with $m \geq d$ and for $\boldsymbol{c} \in \mathbb{R}^d$, $[\boldsymbol{c}^{+}]_i = \max\{0, c_i\}$. We design layer-wise objectives as functionals whose analytic minimizers express the exact ground-truth network in terms of its parameters and nonlinearities. Following this objective landscape, learning residual units from finite samples can be formulated using convex optimization of a nonparametric function: for each layer, we first formulate the corresponding empirical risk minimization (ERM) as a positive semi-definite quadratic program (QP), then we show the solution space of the QP can be equivalently determined by a set of linear inequalities, which can then be efficiently solved by linear programming (LP). We further prove the strong statistical consistency of our algorithm, and demonstrate its robustness and sample efficiency through experimental results on synthetic data and a set of benchmark regression datasets.
The matrix normal model, the family of Gaussian matrix-variate distributions whose covariance matrix is the Kronecker product of two lower-dimensional factors, is frequently used to model matrix-variate data. The tensor normal model generalizes this family to Kronecker products of three or more factors. We study the estimation of the Kronecker factors of the covariance matrix in the matrix and tensor normal models. We show nonasymptotic bounds for the error achieved by the maximum likelihood estimator (MLE) in several natural metrics. In contrast to existing bounds, our results do not rely on the factors being well-conditioned or sparse. For the matrix normal model, all our bounds are minimax optimal up to logarithmic factors, and for the tensor normal model our bounds for the largest factor and for the overall covariance matrix are minimax optimal up to constant factors, provided there are enough samples for any estimator to obtain constant Frobenius error. In the same regimes as our sample complexity bounds, we show that an iterative procedure for computing the MLE, known as the flip-flop algorithm, converges linearly with high probability. Our main tool is geodesic strong convexity in the geometry on positive-definite matrices induced by the Fisher information metric. This strong convexity is determined by the expansion of certain random quantum channels. We also provide numerical evidence that combining the flip-flop algorithm with a simple shrinkage estimator can improve performance in the undersampled regime.
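The flip-flop iteration analyzed in this abstract alternates closed-form updates of the two Kronecker factors. A minimal NumPy sketch for mean-zero matrix-variate samples (initialization and iteration count are our own choices):

```python
import numpy as np

def flip_flop_mle(X, num_iters=50):
    """Flip-flop MLE for the matrix normal model: X has shape (n, p, q),
    each sample assumed mean zero with covariance Sigma_r (kron) Sigma_c."""
    n, p, q = X.shape
    Sigma_r, Sigma_c = np.eye(p), np.eye(q)
    for _ in range(num_iters):
        Cinv = np.linalg.inv(Sigma_c)
        Sigma_r = sum(Xi @ Cinv @ Xi.T for Xi in X) / (n * q)
        Rinv = np.linalg.inv(Sigma_r)
        Sigma_c = sum(Xi.T @ Rinv @ Xi for Xi in X) / (n * p)
    return Sigma_r, Sigma_c
```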
Spectral methods that represent data points by eigenvectors of kernel matrices or graph Laplacian matrices have been a primary tool in unsupervised data analysis. In many application scenarios, parametrizing the spectral embedding by a neural network that can be trained over batches of data samples offers a promising way to achieve automatic out-of-sample extension as well as computational scalability. Such an approach was taken in the original paper on SpectralNet (Shaham et al., 2018), which we call SpecNet1. The current paper introduces a new neural network approach, named SpecNet2, to compute spectral embeddings; it optimizes an equivalent objective of the eigen-problem and removes the orthogonalization layer of SpecNet1. SpecNet2 also allows separating the sampling of rows and columns of the graph affinity matrix by tracking the neighbors of each data point through the gradient formula. Theoretically, we show that any local minimizer of the new orthogonalization-free objective reveals the leading eigenvectors. Furthermore, global convergence of a batch-based gradient descent method on this new orthogonalization-free objective is proved. Numerical experiments demonstrate the improved performance and computational efficiency of SpecNet2 on simulated data and image datasets.
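SpecNet2's exact loss is not reproduced here; as a hedged stand-in, the sketch below illustrates the orthogonalization-free principle with a classical surrogate: gradient descent on ||A - Y Y^T||_F^2, whose minimizers span the leading eigenspace of a positive semi-definite matrix A without any orthogonalization step.

```python
import numpy as np

def leading_subspace_gd(A, k, lr=1e-2, num_iters=2000, seed=0):
    """Gradient descent on f(Y) = ||A - Y Y^T||_F^2 for PSD A; no
    orthogonalization is needed, and at convergence the columns of Y
    span the top-k eigenspace of A. Keep lr small relative to
    1 / lambda_max(A) for stability."""
    rng = np.random.default_rng(seed)
    Y = 0.1 * rng.standard_normal((A.shape[0], k))
    for _ in range(num_iters):
        Y -= lr * 4.0 * (Y @ (Y.T @ Y) - A @ Y)   # gradient of f(Y)
    return Y
```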