It is commonly believed that network pruning not only reduces the computational cost of deep networks but also prevents overfitting by decreasing model capacity. However, our work surprisingly finds that network pruning can sometimes even aggravate overfitting. We report an unexpected sparse double descent phenomenon: as we increase model sparsity via network pruning, test performance first gets worse (due to overfitting), then gets better (due to relieved overfitting), and finally gets worse again (due to forgetting useful information). While recent studies have focused on model over-parameterization, they fail to realize that sparsity can also cause double descent. In this paper, we make three main contributions. First, we report the novel sparse double descent phenomenon through extensive experiments. Second, we propose a novel learning-distance interpretation of this phenomenon: the $\ell_{2}$ learning distance of sparse models (from initialized parameters to final parameters) may correlate well with the sparse double descent curve and reflect generalization better than minima flatness. Third, in the context of sparse double descent, a winning ticket in the lottery ticket hypothesis surprisingly does not always win.
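The $\ell_{2}$ learning distance referred to above can be computed directly from two parameter snapshots. A minimal sketch, assuming parameters are stored as plain dictionaries (the helper and the toy model below are illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

def l2_learning_distance(init_params, final_params, mask=None):
    """L2 distance between the initial and final parameter vectors.

    If a binary pruning mask is supplied, only the surviving weights
    are compared, which is the natural choice for a sparse model.
    """
    dist_sq = 0.0
    for name, w0 in init_params.items():
        diff = final_params[name] - w0
        if mask is not None and name in mask:
            diff = diff * mask[name]
        dist_sq += diff.pow(2).sum().item()
    return dist_sq ** 0.5

# Toy usage: snapshot at initialization, train, snapshot again, measure.
model = nn.Linear(10, 2)
init = {k: v.detach().clone() for k, v in model.named_parameters()}
# ... training would happen here ...
final = {k: v.detach().clone() for k, v in model.named_parameters()}
print(l2_learning_distance(init, final))
```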
Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the lottery ticket hypothesis: dense, randomly-initialized, feed-forward networks contain subnetworks (winning tickets) that, when trained in isolation, reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.
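The winning-ticket search described here is usually realized as iterative magnitude pruning with a rewind to the original initialization. A hedged sketch of that loop (the `train_fn` callback and the global-threshold details are assumptions, not the authors' code):

```python
import copy
import torch

def find_winning_ticket(model, train_fn, rounds=5, prune_frac=0.2):
    """Iterative magnitude pruning with weight rewinding to initialization.

    Each round: train, prune the smallest surviving weights globally,
    then reset the remaining weights to their original initial values.
    """
    init_state = copy.deepcopy(model.state_dict())
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        train_fn(model)  # assumed to train `model` in place
        # Global magnitude threshold over the currently unpruned weights.
        scores = torch.cat([(p.abs() * masks[n]).flatten()
                            for n, p in model.named_parameters() if n in masks])
        n_unpruned = int(sum(m.sum() for m in masks.values()))
        k = max(1, int(prune_frac * n_unpruned))
        threshold = torch.kthvalue(scores[scores > 0], k).values
        for n, p in model.named_parameters():
            if n in masks:
                masks[n] = (p.abs() > threshold).float() * masks[n]
        # Rewind the surviving weights to their initialization (the "ticket").
        model.load_state_dict(init_state)
        with torch.no_grad():
            for n, p in model.named_parameters():
                if n in masks:
                    p.mul_(masks[n])
    return model, masks
```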
We study whether a neural network optimizes to the same, linearly connected minimum under different samples of SGD noise (e.g., random data order and augmentation). We find that standard vision models become stable to SGD noise in this way early in training. From then on, the outcome of optimization is determined to a linearly connected region. We use this technique to study iterative magnitude pruning (IMP), the procedure used by work on the lottery ticket hypothesis to identify subnetworks that could have trained in isolation to full accuracy. We find that these subnetworks only reach full accuracy when they are stable to SGD noise, which either occurs at initialization for small-scale settings (MNIST) or early in training for large-scale settings (ResNet-50 and Inception-v3 on ImageNet).
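The stability test described above reduces to linearly interpolating the weights of two runs and checking whether error rises along the path. A rough sketch under that reading (`evaluate_error` and the two trained models are assumed inputs; this is not the paper's code):

```python
import copy

def linear_interpolation_barrier(model_a, model_b, evaluate_error, steps=11):
    """Maximum rise in error along the line between two weight vectors.

    A barrier near zero means the two SGD runs ended up in the same
    linearly connected region, i.e. training was stable to SGD noise.
    """
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    endpoint_err = 0.5 * (evaluate_error(model_a) + evaluate_error(model_b))
    probe = copy.deepcopy(model_a)
    barrier = 0.0
    for i in range(steps):
        alpha = i / (steps - 1)
        # Interpolate floating-point tensors only; keep integer buffers as-is.
        mixed = {k: ((1 - alpha) * v + alpha * sd_b[k]) if v.is_floating_point() else v
                 for k, v in sd_a.items()}
        probe.load_state_dict(mixed)
        barrier = max(barrier, evaluate_error(probe) - endpoint_err)
    return barrier
```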
The lottery ticket hypothesis (LTH) has attracted attention because it can explain why over-parameterized models often exhibit high generalization ability. It is known that when we use iterative magnitude pruning (IMP), an algorithm for finding sparse networks with high generalization ability that can be trained independently from the initial weights, called winning tickets, an initially large learning rate does not work well in deep neural networks such as ResNets. However, since an initially large learning rate generally helps the optimizer converge to flatter minima, we hypothesize that winning tickets have relatively sharp minima, which is considered a disadvantage in terms of generalization ability. In this paper, we confirm this hypothesis and show that PAC-Bayesian theory can provide an explicit understanding of the relationship between LTH and generalization behavior. On the basis of our experimental findings that flatness is useful for improving accuracy and robustness to label noise, and that the distance from the initial weights is deeply involved in winning tickets, we offer a PAC-Bayesian analysis of winning tickets using a PAC-Bayes bound with a spike-and-slab distribution. Finally, we revisit existing algorithms for finding winning tickets from a PAC-Bayesian perspective and provide new insights into these methods.
Network pruning is a widely used technique for effectively compressing deep neural networks with little performance degradation during inference. Iterative magnitude pruning (IMP) is one of the most established approaches to network pruning, consisting of several iterative training and pruning steps in which a significant share of the network's performance is lost after pruning and then recovered in the subsequent retraining phase. Although commonly used as a benchmark reference, it is often argued that a) it reaches suboptimal states by not incorporating sparsification into the training phase, b) its global selection criterion fails to properly determine optimal layer-wise pruning rates, and c) its iterative nature makes it slow and uncompetitive. In light of recently proposed retraining techniques, we investigate these claims through rigorous and consistent experiments in which we compare IMP to pruning-during-training algorithms, evaluate proposed modifications of its selection criterion, and study the number of iterations and the total training time actually required. We find that IMP with SLR retraining can outperform state-of-the-art pruning-during-training approaches with little or no computational overhead, that the global magnitude selection criterion is largely competitive with more complex approaches, and that only a few retraining epochs are needed in practice to achieve most of the sparsity-versus-performance trade-off of IMP. Our goals are both to demonstrate that basic IMP can already provide state-of-the-art pruning results, on par with or even outperforming more complex or heavily parameterized approaches, and to establish a more realistic yet easily realizable baseline for future research.
The benefits of over-parameterization in achieving superior generalization performance have been shown in several recent studies, justifying the trend of using larger models in practice. In the context of robust learning, however, the effect of neural network size has not been well studied. In this work, we find that in the presence of a substantial fraction of mislabeled examples, increasing the network size beyond some point can be harmful. In particular, the originally monotonic or "double descent" test loss curve (w.r.t. network width) turns into a U-shaped or a double-U-shaped curve when label noise increases, suggesting that the best generalization is achieved by some model of intermediate size. We observe a similar test loss behavior when network size is controlled by density through random pruning. We also take a closer look at the phenomenon through bias-variance decomposition and theoretically characterize how label noise shapes the variance term. A similar behavior of the test loss can be observed even when state-of-the-art robust methods are applied, indicating that limiting the network size could further improve existing methods. Finally, we empirically examine the effect of network size on the smoothness of learned functions and find that the originally negative correlation between size and smoothness is flipped by label noise.
Many applications require sparse neural networks due to space or inference time restrictions. There is a large body of work on training dense networks to yield sparse networks for inference, but this limits the size of the largest trainable sparse model to that of the largest trainable dense model. In this paper we introduce a method to train sparse neural networks with a fixed parameter count and a fixed computational cost throughout training, without sacrificing accuracy relative to existing dense-to-sparse training methods. Our method updates the topology of the sparse network during training by using parameter magnitudes and infrequent gradient calculations. We show that this approach requires fewer floating-point operations (FLOPs) to achieve a given level of accuracy compared to prior techniques. We demonstrate state-of-the-art sparse training results on a variety of networks and datasets, including ResNet-50, MobileNets on ImageNet-2012, and RNNs on WikiText-103. Finally, we provide some insights into why allowing the topology to change during the optimization can overcome local minima encountered when the topology remains static.
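The topology update sketched in this abstract (drop by weight magnitude, regrow by gradient magnitude, at a constant parameter count) can be illustrated per layer roughly as follows; this is a simplified rendering of the idea rather than the authors' implementation:

```python
import torch

def update_topology(weight, grad, mask, update_frac=0.3):
    """One drop/grow step on a single layer at constant sparsity.

    Drops the smallest-magnitude active weights and regrows the same
    number of currently inactive connections with the largest gradients.
    """
    n_active = int(mask.sum().item())
    n_update = max(1, int(update_frac * n_active))

    # Drop: smallest-magnitude weights among the active ones.
    active_scores = torch.where(mask.bool(), weight.abs(),
                                torch.full_like(weight, float("inf")))
    drop_idx = torch.topk(active_scores.flatten(), n_update, largest=False).indices
    mask.view(-1)[drop_idx] = 0.0

    # Grow: largest-gradient connections among the inactive ones.
    inactive_scores = torch.where(mask.bool(), torch.full_like(grad, -float("inf")),
                                  grad.abs())
    grow_idx = torch.topk(inactive_scores.flatten(), n_update, largest=True).indices
    mask.view(-1)[grow_idx] = 1.0
    weight.data.view(-1)[grow_idx] = 0.0  # newly grown weights start at zero
    return mask
```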
Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin, 2019), and find that with optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization.
Neural network pruning, the task of reducing the size of a network by removing parameters, has been the subject of a great deal of work in recent years. We provide a meta-analysis of the literature, including an overview of approaches to pruning and consistent findings in the literature. After aggregating results across 81 papers and pruning hundreds of models in controlled conditions, our clearest finding is that the community suffers from a lack of standardized benchmarks and metrics. This deficiency is substantial enough that it is hard to compare pruning techniques to one another or determine how much progress the field has made over the past three decades. To address this situation, we identify issues with current practices, suggest concrete remedies, and introduce ShrinkBench, an open-source framework to facilitate standardized evaluations of pruning methods. We use ShrinkBench to compare various pruning techniques and show that its comprehensive evaluation can prevent common pitfalls when comparing pruning methods.
We show that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.
Recent works on sparse neural network training (sparse training) have shown that a compelling trade-off between performance and efficiency can be achieved by training intrinsically sparse neural networks from scratch. Existing sparse training methods usually strive to find the best sparse subnetwork in a single run, without involving any expensive dense or pre-training steps. For instance, as one of the most prominent directions, dynamic sparse training (DST) is able to reach competitive performance with dense training by iteratively evolving the sparse topology during training. In this paper, we argue that it is better to allocate the limited resources to create multiple low-loss sparse subnetworks and superpose them into a stronger one, instead of spending all resources on finding a single subnetwork. To achieve this, two desiderata are required: (1) efficiently producing many low-loss subnetworks, the so-called cheap tickets, within one training process and limited to the standard training time used for dense training; (2) effectively superposing these cheap tickets into one stronger subnetwork without going beyond the constrained parameter budget. To corroborate our conjecture, we propose a novel sparse training method called \textbf{Sup-tickets}, which can satisfy both desiderata simultaneously in a single sparse-to-sparse training process. Across various modern architectures on CIFAR-10/100 and ImageNet, we show that Sup-tickets integrates seamlessly with existing sparse training methods and demonstrates consistent performance improvements.
Pruning large neural networks to create high-quality, independently trainable sparse masks, which can maintain similar performance to their dense counterparts, is very desirable due to the reduced space and time complexity. As research effort is focused on increasingly sophisticated pruning methods that lead to sparse subnetworks trainable from scratch, we argue for an orthogonal, under-explored theme: improving training techniques for pruned sub-networks, i.e. sparse training. Apart from the popular belief that only the quality of sparse masks matters for sparse training, in this paper we demonstrate an alternative opportunity: one can carefully customize the sparse training techniques to deviate from the default dense network training protocols, consisting of introducing "ghost" neurons and skip connections at the early stage of training, and strategically modifying the initialization as well as labels. Our new sparse training recipe is generally applicable to improving training from scratch with various sparse masks. By adopting our newly curated techniques, we demonstrate significant performance gains across various popular datasets (CIFAR-10, CIFAR-100, TinyImageNet), architectures (ResNet-18/32/104, VGG-16, MobileNet), and sparse mask options (lottery ticket, SNIP/GRASP, SynFlow, or even random pruning), compared to the default training protocols, especially at high sparsity levels. Code is at https://github.com/VITA-Group/ToST.
Lottery tickets (LTs) can discover accurate and sparse subnetworks that can be trained in isolation to match the performance of dense networks. Ensembling, in parallel, is one of the oldest time-proven tricks in machine learning for improving performance by combining the outputs of multiple independent models. However, the benefit of ensembling in the context of LTs is diluted, since ensembling does not directly lead to stronger sparse subnetworks but instead leverages their predictions to make better decisions. In this work, we first observe that directly averaging the weights of adjacent learned subnetworks significantly boosts the performance of LTs. Encouraged by this observation, we further propose an alternative way to perform an "ensemble" over the subnetworks identified by iterative magnitude pruning via a simple interpolation strategy. We call our method Lottery Pools. In contrast to the naive ensemble, which brings no performance gain to each individual subnetwork, Lottery Pools yields much stronger sparse subnetworks than the original LTs without requiring any extra training or inference cost. Across various modern architectures on CIFAR-10/100 and ImageNet, we show that our method achieves significant performance gains in both in-distribution and out-of-distribution scenarios. Impressively, evaluated with VGG-16 and ResNet-18, the produced sparse subnetworks outperform the original LTs by up to 1.88% on CIFAR-100 and 2.36% on CIFAR-100-C; the resulting dense network surpasses the pre-trained dense model on CIFAR-100 and by more than 2.22% on CIFAR-100-C.
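The interpolation strategy can be pictured as a convex combination of the state dicts of subnetworks from adjacent IMP rounds, with a mask re-applied so the pooled model stays sparse. A hedged sketch of that idea (not the released Lottery Pools code):

```python
def interpolate_tickets(state_dicts, mask=None, alphas=None):
    """Convexly combine the weights of several IMP subnetworks.

    `state_dicts` are checkpoints from adjacent pruning rounds; an optional
    binary mask is re-applied so the pooled model keeps the target sparsity.
    """
    if alphas is None:
        alphas = [1.0 / len(state_dicts)] * len(state_dicts)
    pooled = {}
    for key, ref in state_dicts[0].items():
        if not ref.is_floating_point():
            pooled[key] = ref.clone()  # keep integer buffers untouched
            continue
        pooled[key] = sum(a * sd[key] for a, sd in zip(alphas, state_dicts))
        if mask is not None and key in mask:
            pooled[key] = pooled[key] * mask[key]
    return pooled
```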
The lottery ticket hypothesis (LTH) has shown that dense models contain highly sparse subnetworks (i.e., winning tickets) that can be trained in isolation to full accuracy. Despite many exciting efforts, there is one "commonsense" that is rarely challenged: a winning ticket is found by iterative magnitude pruning (IMP), and hence the resulting pruned subnetworks have only unstructured sparsity. This gap limits the appeal of winning tickets in practice, since highly irregular sparse patterns are challenging to accelerate on hardware. Meanwhile, directly substituting structured pruning for unstructured pruning hurts performance more severely and usually fails to find winning tickets. In this paper, we demonstrate the first positive result that structurally sparse winning tickets can in general be found effectively. The core idea is to append "post-processing techniques" after each round of (unstructured) IMP, to enforce the formation of structural sparsity. Specifically, we first "refill" pruned elements back into some channels deemed important, and then "regroup" non-zero elements to create flexible group-wise structural patterns. Both our identified channel-wise and group-wise structural subnetworks win the lottery, with substantial inference speedups readily supported by existing hardware. Extensive experiments, conducted on diverse datasets across multiple network backbones, consistently validate our proposal, showing that the hardware acceleration roadblock of LTH is now removed. Specifically, the structural winning tickets obtain up to {64.93%, 64.84%, 60.23%} runtime savings at {36%~80%, 74%, 58%} sparsity on {CIFAR, Tiny-ImageNet, ImageNet}, while maintaining comparable accuracy. Code is at https://github.com/vita-group/structure-lth.
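The "refill" step can be thought of as converting an unstructured mask into a channel-structured one: rank output channels by how much unstructured weight survives in them, then densify roughly that many top channels and drop the rest. A hedged sketch of that post-processing idea (a simplification, not the code in the linked repository):

```python
import torch

def refill_channels(weight, unstructured_mask):
    """Turn an unstructured conv mask into a channel-wise structured one.

    Channels are ranked by surviving L1 mass; roughly the same number of
    weights stays active, but whole channels are now either dense or removed.
    """
    out_ch = weight.shape[0]
    per_channel = weight.numel() // out_ch
    budget = int(unstructured_mask.sum().item())
    n_keep = max(1, budget // per_channel)

    # Importance of each output channel = L1 norm of its surviving weights.
    scores = (weight.abs() * unstructured_mask).flatten(1).sum(dim=1)
    keep = torch.topk(scores, n_keep).indices

    structured_mask = torch.zeros_like(unstructured_mask)
    structured_mask[keep] = 1.0  # refill: densify the important channels
    return structured_mask
```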
In this paper, we conjecture that if the permutation invariance of neural networks is taken into account, SGD solutions will likely have no barrier in the linear interpolation between them. Although it is a bold conjecture, we show how extensive empirical attempts fall short of refuting it. We further provide a preliminary theoretical result to support our conjecture. Our conjecture has implications for the lottery ticket hypothesis, distributed training, and ensemble methods.
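For a single hidden layer, accounting for permutation invariance before interpolating amounts to finding the permutation of one model's hidden units that best aligns with the other's, e.g. by solving an assignment problem on weight similarity. A toy sketch with SciPy's Hungarian solver (a simplification of the matching procedures studied in this line of work):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_one_hidden_layer(w1_a, w2_a, w1_b, w2_b):
    """Permute the hidden units of a 2-layer MLP B to match MLP A.

    w1_*: (hidden, in) first-layer weights, w2_*: (out, hidden) second layer.
    The permutation maximizes the total similarity between matched units.
    """
    # Similarity between A's unit i and B's unit j, using both adjacent layers.
    sim = w1_a @ w1_b.T + w2_a.T @ w2_b
    _, perm = linear_sum_assignment(-sim)  # maximize total similarity
    return w1_b[perm], w2_b[:, perm]

# After aligning, linear interpolation between A and the permuted B can be
# evaluated for a loss barrier, as the conjecture suggests.
rng = np.random.default_rng(0)
w1_a, w2_a = rng.normal(size=(8, 4)), rng.normal(size=(3, 8))
w1_b, w2_b = rng.normal(size=(8, 4)), rng.normal(size=(3, 8))
w1_b_aligned, w2_b_aligned = align_one_hidden_layer(w1_a, w2_a, w1_b, w2_b)
```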
Pruning refers to the elimination of trivial weights from neural networks. The sub-networks within an overparameterized model produced after pruning are often called Lottery tickets. This research aims to generate winning lottery tickets from a set of lottery tickets that can achieve accuracy similar to the original unpruned network. We introduce a novel winning ticket called Cyclic Overlapping Lottery Ticket (COLT) by data splitting and cyclic retraining of the pruned network from scratch. We apply a cyclic pruning algorithm that keeps only the overlapping weights of different pruned models trained on different data segments. Our results demonstrate that COLT can achieve accuracies similar to those obtained by the unpruned model while maintaining high sparsities. We show that the accuracy of COLT is on par with the winning tickets of the Lottery Ticket Hypothesis (LTH) and, at times, is better. Moreover, COLTs can be generated using fewer iterations than tickets generated by the popular Iterative Magnitude Pruning (IMP) method. In addition, we also notice that COLTs generated on large datasets can be transferred to small ones without compromising performance, demonstrating their generalizing capability. We conduct all our experiments on the CIFAR-10, CIFAR-100 & TinyImageNet datasets and report performance superior to state-of-the-art methods.
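The overlap idea can be illustrated as intersecting magnitude-pruning masks obtained from models trained on different data segments, so that only weights surviving in every split are kept for the next cyclic retraining round. A hedged sketch of that mask intersection (not the authors' released code):

```python
import torch

def magnitude_mask(weight, sparsity):
    """Binary mask keeping the largest-magnitude fraction of weights."""
    n_keep = max(1, int(weight.numel() * (1.0 - sparsity)))
    threshold = torch.topk(weight.abs().flatten(), n_keep).values.min()
    return (weight.abs() >= threshold).float()

def overlapping_ticket(weights_per_split, sparsity=0.8):
    """Intersect magnitude-pruning masks of models trained on different splits.

    Only connections that survive pruning in every data segment are kept;
    the result is the candidate mask for the next round of cyclic retraining.
    """
    masks = [magnitude_mask(w, sparsity) for w in weights_per_split]
    overlap = masks[0]
    for m in masks[1:]:
        overlap = overlap * m
    return overlap
```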
Re-initializing a neural network during training has been observed to improve generalization in recent works. Yet it is neither widely adopted in deep learning practice nor often used in state-of-the-art training protocols. This raises the question of when re-initialization works, and whether it should be used together with regularization techniques such as data augmentation, weight decay and learning rate schedules. In this work, we conduct an extensive empirical comparison of standard training against a selection of re-initialization methods to answer this question, training over 15,000 models on a variety of image classification benchmarks. We first establish that such methods are consistently beneficial for generalization in the absence of any other regularization. However, when deployed alongside other carefully tuned regularization techniques, re-initialization methods offer little to no added benefit for generalization, although optimal generalization performance becomes less sensitive to the choice of learning rate and weight decay hyperparameters. To investigate the impact of re-initialization methods on noisy data, we also consider learning under label noise. Surprisingly, in this case, re-initialization significantly improves upon standard training, even in the presence of other carefully tuned regularization techniques.
Deep neural networks may easily memorize noisy labels present in real-world data, which degrades their ability to generalize. It is therefore important to track and evaluate the robustness of models against noisy label memorization. We propose a metric, called susceptibility, to gauge such memorization for neural networks. Susceptibility is simple and easy to compute during training. Moreover, it does not require access to ground-truth labels and it only uses unlabeled data. We empirically show the effectiveness of our metric in tracking memorization on various architectures and datasets and provide theoretical insights into the design of the susceptibility metric. Finally, we show through extensive experiments on datasets with synthetic and real-world label noise that one can utilize susceptibility and the overall training accuracy to distinguish models that maintain a low memorization on the training set and generalize well to unseen clean data.
Pre-training serves as a broadly adopted starting point for transfer learning on various downstream tasks. Recent investigations of the lottery ticket hypothesis (LTH) demonstrate that such enormous pre-trained models can be replaced by extremely sparse subnetworks (a.k.a. matching subnetworks) without sacrificing transferability. However, practical security-critical applications usually pose more challenging requirements beyond standard transfer, which also demand these subnetworks to overcome adversarial vulnerability. In this paper, we formulate a more rigorous concept, Double-Win Lottery Tickets, in which a located subnetwork from a pre-trained model can be independently transferred to diverse downstream tasks and reach both the same standard and robust generalization, under both standard and adversarial training regimes, as the full pre-trained model can do. We comprehensively examine various pre-training mechanisms and find that robust pre-training tends to craft sparser double-win lottery tickets with performance superior to their standard counterparts. For example, on downstream CIFAR-10/100 datasets, we identify double-win matching subnetworks from standard, fast adversarial, and adversarial pre-training at 89.26%/73.79%, 89.26%/79.03%, and 91.41%/83.22% sparsity, respectively. Furthermore, we observe that the obtained double-win lottery tickets are more data-efficient to transfer under practical data-limited (e.g., 1% and 10%) downstream schemes. Our results show that the benefits of robust pre-training are amplified by the lottery ticket scheme as well as the data-limited transfer setting. Codes are available at https://github.com/vita-group/double-win-lth.
Viewing neural network models in terms of their loss landscapes has a long history in the statistical mechanics approach to learning, and in recent years it has received attention within machine learning proper. Among other things, local metrics (such as the smoothness of the loss landscape) have been shown to correlate with global properties of the model (such as good generalization performance). Here, we perform a detailed empirical analysis of the loss landscape structure of thousands of neural network models, systematically varying learning tasks, model architectures, and/or the quantity/quality of data. By considering a range of metrics that attempt to capture different aspects of the loss landscape, we demonstrate that the best test accuracy is obtained when: the loss landscape is globally well-connected; ensembles of trained models are more similar to each other; and models converge to locally smooth regions. We also show that globally poorly-connected landscapes can arise when models are small or when they are trained with lower-quality data; and that, if the loss landscape is globally poorly-connected, then training to zero loss can actually lead to worse test accuracy. Our detailed empirical results shed light on phases of learning (and consequent double descent behavior), fundamental versus incidental determinants of good generalization, the roles of load-like and temperature-like parameters in the learning process, different influences of model and data on the loss landscape, and the relationships between local and global metrics, all topics of recent interest.