In this paper we look into the conjecture of Entezari et al. (2021) which states that if the permutation invariance of neural networks is taken into account, then there is likely no loss barrier to the linear interpolation between SGD solutions. First, we observe that neuron alignment methods alone are insufficient to establish low-barrier linear connectivity between SGD solutions due to a phenomenon we call variance collapse: interpolated deep networks suffer a collapse in the variance of their activations, causing poor performance. Next, we propose REPAIR (REnormalizing Permuted Activations for Interpolation Repair) which mitigates variance collapse by rescaling the preactivations of such interpolated networks. We explore the interaction between our method and the choice of normalization layer, network width, and depth, and demonstrate that using REPAIR on top of neuron alignment methods leads to 60%-100% relative barrier reduction across a wide variety of architecture families and tasks. In particular, we report a 74% barrier reduction for ResNet50 on ImageNet and 90% barrier reduction for ResNet18 on CIFAR10.
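As a concrete illustration of the correction described above, the sketch below rescales the per-channel preactivation statistics of an interpolated network toward the interpolation of the endpoint networks' statistics. The function name and the use of a calibration batch are assumptions of this sketch, not the authors' reference implementation.

```python
import torch

def repair_preacts(pre_a, pre_b, pre_interp, alpha=0.5, eps=1e-5):
    """Rescale the interpolated network's preactivations so that each channel's
    mean/std match the interpolation of the endpoint networks' statistics.
    All tensors are (batch, channels) preactivations collected on a calibration set."""
    mu_a, std_a = pre_a.mean(0), pre_a.std(0)
    mu_b, std_b = pre_b.mean(0), pre_b.std(0)
    mu_i, std_i = pre_interp.mean(0), pre_interp.std(0)
    # Target statistics: the alpha-interpolation of the endpoints' statistics.
    mu_t = (1 - alpha) * mu_a + alpha * mu_b
    std_t = (1 - alpha) * std_a + alpha * std_b
    # Standardize the interpolated preactivations, then map them onto the targets.
    return (pre_interp - mu_i) / (std_i + eps) * std_t + mu_t
```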
In this paper, we conjecture that if the permutation invariance of neural networks is taken into account, SGD solutions will likely have no barrier along the linear interpolation between them. Although this is a bold conjecture, we show how extensive empirical attempts fall short of refuting it. We further provide preliminary theoretical results to support our conjecture. Our conjecture has implications for the lottery ticket hypothesis, distributed training, and ensemble methods.
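One common way to make the notion of "barrier" concrete is the maximum excess loss along the linear path between two solutions, as in the sketch below (NumPy; `loss_fn` is a hypothetical callable mapping a flat parameter vector to a scalar loss).

```python
import numpy as np

def linear_barrier(theta_a, theta_b, loss_fn, num_points=25):
    """Height of the loss barrier on the segment between two solutions.
    theta_a, theta_b: flat parameter vectors; loss_fn: params -> scalar loss."""
    alphas = np.linspace(0.0, 1.0, num_points)
    path_losses = np.array(
        [loss_fn((1 - a) * theta_a + a * theta_b) for a in alphas]
    )
    # Barrier = maximum loss on the path above the interpolated endpoint losses.
    endpoint_interp = (1 - alphas) * path_losses[0] + alphas * path_losses[-1]
    return float(np.max(path_losses - endpoint_interp))
```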
The success of deep learning is due to our ability to solve certain classes of massive non-convex optimization problems with relative ease. Even though non-convex optimization is NP-hard, simple algorithms (often variants of stochastic gradient descent) are surprisingly effective at fitting large neural networks in practice. We argue that, once all of the permutation symmetries of hidden units are taken into account, neural network loss landscapes contain (nearly) a single basin. We introduce three algorithms to permute the units of one model so that they align with the units of a reference model. This transformation produces a functionally equivalent set of weights that lies in an approximately convex basin near the reference model. Experimentally, we demonstrate the single-basin phenomenon across a variety of model architectures and datasets, including the first (to our knowledge) such demonstration between independently trained ResNet models on CIFAR-10 and CIFAR-100. Furthermore, we identify intriguing phenomena relating model width and training time to mode connectivity across a variety of models and datasets. Finally, we discuss shortcomings of the single-basin theory, including a counterexample to the linear mode connectivity hypothesis.
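A minimal sketch of one possible alignment step (weight matching for a single hidden layer via the Hungarian algorithm). The exact matching objective used by the three algorithms above, and the propagation of the permutation to neighbouring layers, are simplified away here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_hidden_units(W_ref, W_model):
    """Permute the rows (hidden units) of W_model to best match W_ref.
    W_ref, W_model: (hidden, in) weight matrices of the same layer."""
    # Similarity between every pair of reference / model units.
    similarity = W_ref @ W_model.T                 # (hidden, hidden)
    row_ind, col_ind = linear_sum_assignment(similarity, maximize=True)
    perm = np.empty_like(col_ind)
    perm[row_ind] = col_ind                        # reference unit i <- model unit perm[i]
    return W_model[perm], perm
```

In a full network, the same permutation must also be applied to the columns of the next layer's weight matrix so that the permuted model stays functionally equivalent.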
We systematize the study of the deep neural network landscape by basing it on the geometry of the space of implemented functions rather than of parameter space. Grouping classifiers into equivalence classes, we develop a standardized parameterization in which all symmetries are removed, resulting in a toroidal topology. On this space we explore the error landscape rather than the loss. This allows us to derive meaningful notions of the flatness of minimizers and of the geodesic paths connecting them. Using different optimization algorithms that sample minimizers with different flatness, we study mode connectivity and relative distances. Testing a variety of state-of-the-art architectures and benchmark datasets, we confirm the correlation between flatness and generalization performance; we further show that in function space minima are closer to each other, and that the barriers along the geodesics connecting them are small. We also find that minimizers found by variants of gradient descent can be connected by zero-error paths composed of two straight lines in parameter space, i.e., polygonal chains with a single bend. We observe similar qualitative results in neural networks with binary weights and activations, providing one of the first results on connectivity in this setting. Our results hinge on the removal of symmetries, and they are in remarkable agreement with the rich phenomenology described by some analytical studies of simple shallow models.
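One of the symmetries removed in such a standardized parameterization is the rescaling invariance of units that feed into normalization layers. A minimal sketch of removing it, under the assumption (made for this sketch) that every unit's output is normalized downstream, is to project each incoming weight vector onto the unit sphere.

```python
import numpy as np

def remove_scale_symmetry(W, eps=1e-12):
    """Project each unit's incoming weight vector onto the unit sphere.
    For units whose output passes through a normalization layer, rescaling the
    incoming weights leaves the implemented function unchanged, so the
    unit-norm weights serve as a canonical representative of the class.
    W: (units, fan_in) weight matrix."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W / (norms + eps)
```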
Neural networks trained with stochastic gradient descent (SGD) starting from different random initializations typically end up being functionally very similar, raising the question of whether there are meaningful differences between different SGD solutions. Entezari et al. recently conjectured that, despite the different initializations, the solutions found by SGD lie in the same loss valley once the permutation invariance of neural networks is taken into account. Concretely, they hypothesize that any two solutions found by SGD can be permuted so that the linear interpolation between their parameters forms a path along which the loss does not increase significantly. Here, we use a simple yet powerful algorithm to find such permutations, which allows us to obtain direct empirical evidence that the hypothesis is true for fully connected networks. Strikingly, we find that this already holds for two networks at initialization: averaging their random, but suitably permuted, initializations performs significantly above chance. In contrast, for convolutional architectures our evidence suggests that the hypothesis does not hold. Especially in the large learning rate regime, SGD appears to discover diverse modes.
The recent emergence of new algorithms for permuting models into functionally equivalent regions of the solution space has shed some light on the complexity of error surfaces, and on some promising properties like mode connectivity. However, finding the right permutation is challenging, and current optimization techniques are not differentiable, which makes it difficult to integrate them into a gradient-based optimization and often leads to sub-optimal solutions. In this paper, we propose a Sinkhorn re-basin network with the ability to obtain the transportation plan that better suits a given objective. Unlike the current state-of-the-art, our method is differentiable and, therefore, easy to adapt to any task within the deep learning domain. Furthermore, we show the advantage of our re-basin method by proposing a new cost function that allows performing incremental learning by exploiting the linear mode connectivity property. The benefit of our method is compared against similar approaches from the literature, under several conditions, for both optimal transport finding and linear mode connectivity. The effectiveness of our continual learning method based on re-basin is also shown for several common benchmark datasets, providing experimental results that are competitive with state-of-the-art results from the literature.
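A minimal sketch of the differentiable relaxation underlying such an approach: the Sinkhorn operator maps a square score matrix to an approximately doubly-stochastic matrix by alternating row and column normalization, with a temperature controlling how close the result is to a hard permutation. The simple (non-log-domain) form and the parameter names are assumptions of this sketch.

```python
import torch

def sinkhorn(scores, tau=0.1, num_iters=20):
    """Soft permutation matrix from a square score matrix via Sinkhorn normalization."""
    P = torch.exp(scores / tau)
    for _ in range(num_iters):
        P = P / P.sum(dim=1, keepdim=True)   # normalize rows
        P = P / P.sum(dim=0, keepdim=True)   # normalize columns
    return P  # approximately doubly stochastic; gradients flow through it
```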
We perform an empirical study of the behaviour of deep networks when pushing their activation functions to become fully linear in some of their feature channels through a sparsity prior on the overall number of nonlinear units in the network. To measure the depth of the resulting partially linearized network, we compute the average number of active nonlinearities encountered along a path in the network graph. In experiments on CNNs with sparsified PReLUs on typical image classification tasks, we make several observations: under sparsity pressure, the remaining nonlinear units organize into distinct structures, forming core networks of near-constant effective depth and width, which in turn depend on task difficulty. We consistently observe a slow decay of performance with depth until the onset of a rapid collapse in accuracy, allowing for surprisingly shallow networks at moderate losses in accuracy that outperform baseline networks of similar depth, even after increasing width to a comparable number of parameters. In terms of training, we observe a nonlinear advantage: reducing nonlinearity after training leads to better performance than doing so before training, in line with previous findings on linearized training, but with a gap that depends on task difficulty and vanishes for easy problems.
The choice of activation functions and their motivation is a long-standing issue within the neural network community. Neuronal representations within artificial neural networks are commonly understood as logits, representing the log-odds score of presence of features within the stimulus. We derive logit-space operators equivalent to probabilistic Boolean logic-gates AND, OR, and XNOR for independent probabilities. Such theories are important to formalize more complex dendritic operations in real neurons, and these operations can be used as activation functions within a neural network, introducing probabilistic Boolean-logic as the core operation of the neural network. Since these functions involve taking multiple exponents and logarithms, they are computationally expensive and not well suited to be directly used within neural networks. Consequently, we construct efficient approximations named $\text{AND}_\text{AIL}$ (the AND operator Approximate for Independent Logits), $\text{OR}_\text{AIL}$, and $\text{XNOR}_\text{AIL}$, which utilize only comparison and addition operations, have well-behaved gradients, and can be deployed as activation functions in neural networks. Like MaxOut, $\text{AND}_\text{AIL}$ and $\text{OR}_\text{AIL}$ are generalizations of ReLU to two-dimensions. While our primary aim is to formalize dendritic computations within a logit-space probabilistic-Boolean framework, we deploy these new activation functions, both in isolation and in conjunction to demonstrate their effectiveness on a variety of tasks including image classification, transfer learning, abstract reasoning, and compositional zero-shot learning.
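For reference, the exact logit-space AND of two independent events, which $\text{AND}_\text{AIL}$ approximates with only comparison and addition operations, follows directly from the probabilistic definition; the sketch below shows the exact operator, not the paper's approximation.

```python
import math

def logit_and_exact(x, y):
    """Exact logit-space AND of two independent events given their logits.
    p_and = sigmoid(x) * sigmoid(y); return logit(p_and)."""
    p = (1.0 / (1.0 + math.exp(-x))) * (1.0 / (1.0 + math.exp(-y)))
    return math.log(p / (1.0 - p))
```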
Viewing neural network models in terms of their loss landscapes has a long history in the statistical mechanics approach to learning, and in recent years it has received attention within machine learning. Among other things, local metrics (such as the smoothness of the loss landscape) have been shown to correlate with global properties of the model (such as good generalization performance). Here, we perform a detailed empirical analysis of the loss landscape structure of thousands of neural network models, systematically varying the learning task, the model architecture, and/or the quantity/quality of data. By considering a range of metrics that attempt to capture different aspects of the loss landscape, we demonstrate that the best test accuracy is obtained when the loss landscape is globally well connected, ensembles of trained models are more similar to one another, and models converge to locally smooth regions. We also show that globally poorly connected landscapes can arise when models are small or when they are trained on lower-quality data, and that, if the loss landscape is globally poorly connected, training to zero loss can actually lead to worse test accuracy. Our detailed empirical results shed light on phases of learning (and the consequent double-descent behavior), fundamental versus incidental determinants of good generalization, the roles of load-like and temperature-like parameters in the learning process, the different influences of model and data on the loss landscape, and the relationship between local and global metrics, all topics of recent interest.
The architecture and the parameters of neural networks are often optimized independently, which requires costly retraining of the parameters whenever the architecture is modified. In this work we focus on growing the architecture without requiring costly retraining. We propose a method that adds new neurons during training without affecting what has already been learned, while improving the training dynamics. We achieve the latter by maximizing the gradients of the new weights, and we efficiently find the optimal initialization via singular value decomposition (SVD). We call this technique Gradient Maximizing Growth (GradMax) and demonstrate its effectiveness on a variety of vision tasks and architectures.
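A heavily simplified sketch of one way such a growing step could look, under assumptions made for this sketch (and not necessarily matching the paper's exact scheme): the new neurons' outgoing weights start at zero so the learned function is unchanged, and their incoming weights are taken from the top singular vectors of a gradient/input cross-covariance, which maximizes the gradient norm of the outgoing weights under a linear approximation.

```python
import numpy as np

def grow_neurons(h_in, grad_out, num_new, scale=1e-3):
    """Initialize num_new neurons inserted between two layers.
    h_in:     (batch, fan_in)  inputs to the grown layer
    grad_out: (batch, fan_out) loss gradients w.r.t. the next layer's preactivations
    Returns (W_in_new, W_out_new)."""
    # Cross-covariance between upstream gradients and layer inputs.
    G = grad_out.T @ h_in                               # (fan_out, fan_in)
    _, _, Vt = np.linalg.svd(G, full_matrices=False)
    W_in_new = scale * Vt[:num_new]                     # incoming weights: top right singular vectors
    W_out_new = np.zeros((grad_out.shape[1], num_new))  # outgoing weights: zero keeps the function unchanged
    return W_in_new, W_out_new
```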
Batch Normalization (BatchNorm) is a widely adopted technique that enables faster and more stable training of deep neural networks (DNNs). Despite its pervasiveness, the exact reasons for BatchNorm's effectiveness are still poorly understood. The popular belief is that this effectiveness stems from controlling the change of the layers' input distributions during training to reduce the so-called "internal covariate shift". In this work, we demonstrate that such distributional stability of layer inputs has little to do with the success of BatchNorm. Instead, we uncover a more fundamental impact of BatchNorm on the training process: it makes the optimization landscape significantly smoother. This smoothness induces a more predictive and stable behavior of the gradients, allowing for faster training.
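For context, a minimal sketch of the BatchNorm transform itself (training-mode statistics only, omitting running averages and the convolutional reshaping).

```python
import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    """BatchNorm over a (batch, features) tensor, training-mode statistics only."""
    mu = x.mean(dim=0)
    var = x.var(dim=0, unbiased=False)
    x_hat = (x - mu) / torch.sqrt(var + eps)   # standardize each feature over the batch
    return gamma * x_hat + beta                # learned per-feature scale and shift
```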
There is mounting empirical evidence of gains in the capabilities of deep learning methods as we scale up datasets, model sizes, and training times. Although there are some accounts of how these resources modulate statistical capacity, far less is known about their effect on the computational problem of model training. This work explores that question through the lens of learning a $k$-sparse parity of $n$ bits, a canonical problem that poses a theoretical computational barrier. In this setting, we find that neural networks exhibit surprising phase transitions as the dataset size and running time are scaled up. In particular, we demonstrate empirically that with standard training, a variety of architectures learn sparse parities with $n^{O(k)}$ examples, with loss (and error) curves abruptly dropping after $n^{O(k)}$ iterations. These positive results nearly match known SQ lower bounds, even without explicit sparsity priors. We elucidate the mechanism behind these phenomena with a theoretical analysis: we find that the phase transition in performance is not due to SGD "stumbling in the dark" until it finds the hidden set of features (a natural algorithm that also runs in $n^{O(k)}$ time); instead, we show that SGD gradually amplifies a Fourier gap in the population gradient.
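A minimal sketch of the learning problem in question: uniform $n$-bit inputs, each labeled by the parity of a fixed hidden subset of $k$ coordinates.

```python
import numpy as np

def sparse_parity_dataset(num_samples, n=50, k=3, seed=0):
    """(X, y, support) for the k-sparse parity problem on n bits."""
    rng = np.random.default_rng(seed)
    support = rng.choice(n, size=k, replace=False)     # hidden relevant coordinates
    X = rng.integers(0, 2, size=(num_samples, n))      # uniform n-bit inputs
    y = X[:, support].sum(axis=1) % 2                  # parity over the hidden support
    return X, y, support
```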
Understanding why deep networks are able to classify data in large dimension remains a challenge. It has been proposed that they do so by becoming stable to diffeomorphisms, yet existing empirical measurements support that this is often not the case. We revisit this question by defining a maximum-entropy distribution on diffeomorphisms, which allows us to study typical diffeomorphisms of a given norm. We confirm that stability toward diffeomorphisms does not correlate strongly with performance on benchmark datasets. By contrast, we find that the stability toward diffeomorphisms relative to that of generic transformations, $R_f$, correlates remarkably with the test error $\epsilon_t$. It is of order unity at initialization but decreases by several decades during the training of state-of-the-art architectures. For CIFAR10 and 15 known architectures, we find $\epsilon_t \approx 0.2\sqrt{R_f}$, suggesting that obtaining a small $R_f$ is important to achieve good performance. We study how $R_f$ depends on the size of the training set and compare it to a simple model of invariant learning.
We study whether a neural network optimizes to the same, linearly connected minimum under different samples of SGD noise (e.g., random data order and augmentation). We find that standard vision models become stable to SGD noise in this way early in training. From then on, the outcome of optimization is determined to a linearly connected region. We use this technique to study iterative magnitude pruning (IMP), the procedure used by work on the lottery ticket hypothesis to identify subnetworks that could have trained in isolation to full accuracy. We find that these subnetworks only reach full accuracy when they are stable to SGD noise, which either occurs at initialization for small-scale settings (MNIST) or early in training for large-scale settings (ResNet-50 and Inception-v3 on ImageNet).
We analyze the generalization performance of iterate averaging and explain it using a Gaussian process perturbation model between the true and batch risk surfaces on high-dimensional quadratics. We derive three phenomena from our theoretical results: (1) the importance of combining iterate averaging (IA) with large learning rates and regularization for improved regularization; (2) a justification for less frequent averaging; (3) that we expect adaptive gradient methods to work equally well as, or better than, their non-adaptive counterparts with iterate averaging. Inspired by these results, together with the importance of appropriate regularization for the diversity of the iterate solutions, we propose two adaptive algorithms with iterate averaging. These give significantly better results than stochastic gradient descent (SGD), require less tuning, and do not require early stopping or validation-set monitoring. We showcase the efficacy of our approaches on the CIFAR-10/100, ImageNet, and Penn Treebank datasets, across a variety of modern and classical network architectures.
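A minimal sketch of plain iterate averaging on top of SGD in PyTorch; the two adaptive algorithms proposed above differ in the base optimizer and in details of the averaging schedule, and the names here are illustrative.

```python
import copy
import torch

def sgd_with_iterate_averaging(model, loss_fn, data_loader, lr=0.1, avg_every=5):
    """Train with SGD while keeping a running average of the iterates every avg_every steps."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    avg_model, num_averaged = copy.deepcopy(model), 0
    for step, (x, y) in enumerate(data_loader):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        if step % avg_every == 0:                      # less frequent averaging
            num_averaged += 1
            for p_avg, p in zip(avg_model.parameters(), model.parameters()):
                p_avg.data += (p.data - p_avg.data) / num_averaged
    return avg_model  # evaluate with the averaged weights
```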
Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the lottery ticket hypothesis: dense, randomly-initialized, feed-forward networks contain subnetworks (winning tickets) that, when trained in isolation, reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.
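A condensed sketch of the iterative magnitude pruning loop used to find such winning tickets; `train` is a hypothetical stand-in for full training from the given (masked) weights, and the layer structure is flattened into a single weight vector for brevity.

```python
import numpy as np

def iterative_magnitude_pruning(init_weights, train, rounds=5, prune_frac=0.2):
    """Return a binary mask identifying a candidate winning ticket.
    init_weights: flat array of initial weights; train(weights, mask) -> trained weights."""
    mask = np.ones_like(init_weights)
    for _ in range(rounds):
        # Train the current subnetwork starting from the ORIGINAL initialization (rewinding).
        trained = train(init_weights * mask, mask)
        surviving = np.flatnonzero(mask)
        # Prune the smallest-magnitude surviving weights.
        order = surviving[np.argsort(np.abs(trained)[surviving])]
        mask[order[: int(prune_frac * surviving.size)]] = 0.0
    return mask
```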
Training with a dataset's true labels rather than random labels leads to faster optimization and better generalization. This difference has been attributed to a notion of alignment between inputs and labels in natural datasets. We find that training neural networks with different architectures and optimizers on random or true labels enforces the same relationship between hidden representations and training labels, shedding light on why neural network representations transfer so successfully. We first highlight why aligned features promote transfer, and show in a classical synthetic transfer problem that alignment is the determining factor for positive and negative transfer to similar and dissimilar tasks. We then investigate a variety of neural network architectures and find that (a) alignment emerges across a wide range of architectures and optimizers, with more alignment arising from depth, (b) alignment increases for layers closer to the output, and (c) existing high-performance deep CNNs exhibit high levels of alignment.
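One standard way to quantify alignment between hidden representations and labels is kernel-target alignment between the feature Gram matrix and the label Gram matrix; this choice of measure is an assumption of the sketch below and not necessarily the exact quantity used in the study above.

```python
import numpy as np

def kernel_target_alignment(H, Y):
    """Alignment between a feature Gram matrix and the label Gram matrix.
    H: (n, d) hidden representations; Y: (n, c) one-hot labels."""
    K = H @ H.T                                   # representation kernel
    L = Y @ Y.T                                   # label kernel
    num = np.sum(K * L)                           # Frobenius inner product <K, L>
    return num / (np.linalg.norm(K) * np.linalg.norm(L))
```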
Neural networks in the lazy training regime converge to kernel machines. Can neural networks in the rich feature-learning regime learn a kernel machine with a data-dependent kernel? We demonstrate that this can indeed happen due to a phenomenon we term silent alignment, which requires that the tangent kernel of a network evolve in eigenstructure while small and before the loss appreciably decreases, and grow only in overall scale afterwards. We show that such an effect takes place in homogeneous neural networks with small initialization and whitened data. We provide an analytical treatment of this effect in the linear-network case. In general, we find that the kernel develops a low-rank contribution in the early phase of training and then evolves in overall scale, yielding a function equivalent to a kernel regression solution with the final network's tangent kernel. The early spectral learning of the kernel depends on the depth. We also demonstrate that non-whitened data can weaken the silent alignment effect.
Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts. For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR, SVHN, COCO, and significant improvements on ImageNet. Our code and models are available at https://github.com/szagoruyko/wide-residual-networks.
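For reference, a minimal pre-activation wide residual block in PyTorch, where the widening factor `k` multiplies the channel count; this is a simplified sketch of the WRN building block that omits dropout and the strided/projection variants.

```python
import torch.nn as nn

class WideBasicBlock(nn.Module):
    """BN-ReLU-Conv x2 residual block with a channel-widening factor k."""
    def __init__(self, base_channels=16, k=10):
        super().__init__()
        width = base_channels * k
        self.bn1 = nn.BatchNorm2d(width)
        self.conv1 = nn.Conv2d(width, width, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(width)
        self.conv2 = nn.Conv2d(width, width, kernel_size=3, padding=1, bias=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv1(self.relu(self.bn1(x)))
        out = self.conv2(self.relu(self.bn2(out)))
        return x + out   # identity shortcut
```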
Deep neural networks may easily memorize noisy labels present in real-world data, which degrades their ability to generalize. It is therefore important to track and evaluate the robustness of models against noisy label memorization. We propose a metric, called susceptibility, to gauge such memorization for neural networks. Susceptibility is simple and easy to compute during training. Moreover, it does not require access to ground-truth labels and it only uses unlabeled data. We empirically show the effectiveness of our metric in tracking memorization on various architectures and datasets and provide theoretical insights into the design of the susceptibility metric. Finally, we show through extensive experiments on datasets with synthetic and real-world label noise that one can utilize susceptibility and the overall training accuracy to distinguish models that maintain a low memorization on the training set and generalize well to unseen clean data.