FasterAI is a PyTorch-based library aiming to facilitate the use of deep neural network compression techniques such as sparsification, pruning, knowledge distillation, or regularization. The library is built to enable quick implementation and experimentation. In particular, the compression techniques leverage the callback systems of libraries such as fastai and PyTorch Lightning to provide a user-friendly, high-level API. The main asset of FasterAI is that it is lightweight yet powerful, and simple to use. Indeed, because it has been developed in a very granular way, users can create thousands of unique experiments from different combinations of parameters. In this paper, we focus on the sparsifying capabilities of FasterAI, which represent the core of the library. Sparsifying a neural network with FasterAI requires only a single line of code in a traditional training loop, yet allows state-of-the-art techniques, such as Lottery Ticket Hypothesis experiments, to be performed.
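The callback-based workflow can be illustrated with a minimal, self-contained PyTorch sketch. This is not FasterAI's actual API: the SparsifyCallback class, its target_sparsity argument, and the epoch-end hook below are assumptions made purely to illustrate how a single callback call can sparsify a model inside an otherwise standard training loop.

```python
# Minimal sketch of callback-style magnitude sparsification in a standard
# PyTorch training loop. Names (SparsifyCallback, target_sparsity) are
# illustrative assumptions, not FasterAI's actual API.
import torch
import torch.nn as nn

class SparsifyCallback:
    """Zeroes out the smallest-magnitude weights after every epoch."""
    def __init__(self, target_sparsity: float):
        self.target_sparsity = target_sparsity  # fraction of weights to remove

    def on_epoch_end(self, model: nn.Module) -> None:
        for module in model.modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                w = module.weight.data
                k = int(w.numel() * self.target_sparsity)
                if k == 0:
                    continue
                threshold = w.abs().flatten().kthvalue(k).values
                w.mul_((w.abs() > threshold).float())  # keep only large weights

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
callback = SparsifyCallback(target_sparsity=0.5)

for epoch in range(3):
    x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))  # dummy batch
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    callback.on_epoch_end(model)  # the single extra call that sparsifies
```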
Introducing sparsity in a neural network is an efficient way to reduce its complexity while keeping its performance almost intact. In most cases, sparsity is introduced using a three-stage pipeline: 1) train the model to convergence, 2) prune the model according to some criterion, 3) fine-tune the pruned model to recover performance. The last two steps are usually performed iteratively, leading to reasonable results but also to a time-consuming and complex process. In our work, we propose to get rid of the first step of the pipeline and to combine the two other steps in a single pruning-training cycle, allowing the model to jointly learn the optimal weights while being pruned. We do this by introducing a novel pruning schedule, named One Cycle Pruning, which starts pruning at the beginning of training and continues until the end. Adopting such a schedule not only leads to better-performing pruned models but also drastically reduces the training budget required to prune a model. Experiments are conducted on a variety of architectures (VGG-16 and ResNet-18) and datasets (CIFAR-10, CIFAR-100 and Caltech-101), and for relatively high sparsity values (80%, 90%, 95% of the weights removed). Our results show that, under a fixed training budget, One Cycle Pruning consistently outperforms commonly used pruning schedules such as One-Shot Pruning, Iterative Pruning and Automated Gradual Pruning.
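To make the idea of the schedule concrete, the sketch below contrasts a sparsity ramp that starts at the very first step and reaches the target at the last step with a classical one-shot schedule. The cosine-shaped curve is an illustrative assumption, not necessarily the exact functional form used by One Cycle Pruning.

```python
# Illustrative sparsity schedules over a single training run. The smooth ramp
# here is an assumption; the paper's exact curve may differ.
import math

def one_cycle_sparsity(step: int, total_steps: int, final_sparsity: float) -> float:
    """Sparsity ramps smoothly from 0 at step 0 to final_sparsity at the end."""
    t = step / max(total_steps - 1, 1)
    return final_sparsity * (1 - math.cos(math.pi * t)) / 2  # cosine-shaped ramp

def one_shot_sparsity(step: int, total_steps: int, final_sparsity: float) -> float:
    """Classical one-shot: no pruning until the very last step."""
    return final_sparsity if step == total_steps - 1 else 0.0

total_steps, target = 10, 0.9
for step in range(total_steps):
    print(step,
          round(one_cycle_sparsity(step, total_steps, target), 3),
          one_shot_sparsity(step, total_steps, target))
```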
Sparsity has become one of the promising methods to compress and accelerate deep neural networks (DNNs). Among different categories of sparsity, structured sparsity has gained more attention due to its efficient execution on modern accelerators. In particular, N:M sparsity is attractive because there are already hardware accelerator architectures that can leverage certain forms of N:M structured sparsity to yield higher compute efficiency. In this work, we focus on N:M sparsity and extensively study and evaluate various training recipes for N:M sparsity in terms of the trade-off between model accuracy and compute cost (FLOPs). Building upon this study, we propose two new decay-based pruning methods, namely "pruning mask decay" and "sparse structure decay". Our evaluations indicate that these proposed methods consistently deliver state-of-the-art (SOTA) model accuracy, comparable to unstructured sparsity, on a Transformer-based model for a translation task. The improvement in sparse model accuracy from the new training recipes comes at the cost of a marginal increase in total training compute (FLOPs).
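As a concrete illustration of the N:M pattern itself (independently of the decay-based recipes proposed above), the sketch below builds a 2:4 mask by keeping, in every group of M=4 consecutive weights, the N=2 entries with the largest magnitude.

```python
import torch

def nm_mask(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Return a binary mask keeping the n largest-magnitude weights in every
    group of m consecutive weights along the last dimension."""
    assert weight.shape[-1] % m == 0, "last dim must be divisible by m"
    groups = weight.abs().reshape(-1, m)          # one row per group of m weights
    topk = groups.topk(n, dim=1).indices          # positions to keep in each group
    mask = torch.zeros_like(groups)
    mask.scatter_(1, topk, 1.0)                   # mark kept positions
    return mask.reshape(weight.shape)

w = torch.randn(8, 16)
mask = nm_mask(w, n=2, m=4)
sparse_w = w * mask                               # 2:4 structured-sparse weights
print(mask.sum().item() / mask.numel())           # -> 0.5 density
```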
We show for the first time that large-scale generative pretrained transformer (GPT) family models can be pruned to at least 50% sparsity in one-shot, without any retraining, at minimal loss of accuracy. This is achieved via a new pruning method called SparseGPT, specifically designed to work efficiently and accurately on massive GPT-family models. When executing SparseGPT on the largest available open-source models, OPT-175B and BLOOM-176B, we can reach 60% sparsity with negligible increase in perplexity: remarkably, more than 100 billion weights from these models can be ignored at inference time. SparseGPT generalizes to semi-structured (2:4 and 4:8) patterns, and is compatible with weight quantization approaches.
While network sparsity has emerged as a promising direction to overcome the size burden of neural networks, it remains an open problem to preserve model accuracy while achieving significant speedups on general CPUs. In this paper, we propose the novel concept of a $1\times N$ block sparsity pattern (block pruning) to break this limitation. In particular, consecutive $N$ output kernels with the same input channel index are grouped into one block, which serves as the basic pruning granularity of our pruning pattern. Our $1\times N$ sparsity pattern prunes these blocks when they are considered unimportant. We also provide a workflow of filter rearrangement that first rearranges the weight matrix in the output channel dimension to derive more influential blocks for accuracy improvement, and then applies a similar rearrangement to the next-layer weights in the input channel dimension to ensure correct convolution operations. Moreover, the output computation after our $1\times N$ block sparsity can be realized via parallelized block-wise vectorized operations, leading to significant speedups on general CPU-based platforms. The efficacy of our pruning pattern is demonstrated by experiments on ILSVRC-2012. For example, at 50% sparsity and $N=4$, our pattern obtains an improvement of about 3.0% over filter pruning in the top-1 accuracy of MobileNet-V2. Meanwhile, it achieves 56.04 ms inference time on a Cortex-A7 CPU, outperforming weight pruning. Code is available at https://github.com/lmbxmu/1xN.
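A hedged sketch of the $1\times N$ grouping for a convolution weight of shape (C_out, C_in, k, k): every N consecutive output kernels sharing an input-channel index form a block, and the lowest-scoring blocks are zeroed. Using the block L1 norm as the importance score and omitting the filter-rearrangement step are simplifying assumptions.

```python
import torch

def one_by_n_prune(weight: torch.Tensor, n: int = 4, sparsity: float = 0.5) -> torch.Tensor:
    """Zero out 1xN blocks (n consecutive output kernels with the same input
    channel index) with the smallest L1 norm. Returns the pruned weight."""
    c_out, c_in, kh, kw = weight.shape
    assert c_out % n == 0, "output channels must be divisible by n"
    # Group kernels into blocks of shape (n, kh, kw) per input channel.
    blocks = weight.reshape(c_out // n, n, c_in, kh, kw).permute(0, 2, 1, 3, 4)
    scores = blocks.abs().sum(dim=(2, 3, 4))             # (c_out // n, c_in) block L1 norms
    k = int(scores.numel() * sparsity)
    threshold = scores.flatten().kthvalue(k).values if k > 0 else scores.min() - 1
    mask = (scores > threshold).float()[:, :, None, None, None]
    pruned = (blocks * mask).permute(0, 2, 1, 3, 4).reshape(c_out, c_in, kh, kw)
    return pruned

w = torch.randn(16, 8, 3, 3)
pw = one_by_n_prune(w, n=4, sparsity=0.5)
print((pw == 0).float().mean().item())                   # roughly 0.5
```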
Modern deep neural networks are often too large to be used in many practical scenarios. Neural network pruning is an important technique for reducing the size of such models and accelerating inference. Gibbs pruning is a novel framework for expressing and designing neural network pruning methods. Combining approaches from statistical physics and stochastic regularization, it can train and prune a network simultaneously, so that the learned weights and the pruning mask adapt well to each other. It can be used for structured or unstructured pruning, and we propose a number of specific methods for each. We compare our proposed methods with a number of contemporary neural network pruning methods and find that Gibbs pruning outperforms them. In particular, we achieve a new state-of-the-art result for pruning ResNet-56 on the CIFAR-10 dataset.
Network pruning is a widely used technique for effectively compressing deep neural networks with little to no degradation in performance during inference. Iterative Magnitude Pruning (IMP) is one of the most established approaches to network pruning, consisting of several iterative training and pruning steps, where a significant amount of the network's performance is lost after pruning and then recovered in the subsequent retraining phase. While commonly used as a benchmark reference, it is often argued that a) it reaches suboptimal states by not incorporating sparsification into the training phase, b) its global selection criterion fails to properly determine optimal layer-wise pruning rates, and c) its iterative nature makes it slow and uncompetitive. In light of recently proposed retraining techniques, we investigate these claims through rigorous and consistent experiments in which we compare IMP to pruning-during-training algorithms, evaluate proposed modifications of its selection criterion, and study the number of iterations and the total training time actually required. We find that IMP with SLR retraining can outperform state-of-the-art pruning-during-training approaches with no or only little computational overhead, that the global magnitude selection criterion is largely competitive with more complex approaches, and that only few retraining epochs are needed in practice to achieve most of the sparsity-versus-performance trade-off of IMP. Our goals are both to demonstrate that this basic approach can already provide state-of-the-art pruning results, even outperforming far more complex or heavily parameterized methods, and to establish a more realistic yet easily attainable baseline for future research.
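A minimal sketch of the IMP loop with the global magnitude criterion discussed above; the per-round sparsity targets, the plain retraining loop, and the dummy data are assumptions for illustration (the specific retraining schedule, such as SLR, is not modeled).

```python
import torch
import torch.nn as nn

def global_magnitude_mask(model: nn.Module, sparsity: float) -> dict:
    """Compute one global threshold over all prunable weights and return
    per-parameter binary masks."""
    all_w = torch.cat([p.detach().abs().flatten()
                       for _, p in model.named_parameters() if p.dim() > 1])
    k = int(all_w.numel() * sparsity)
    threshold = all_w.kthvalue(max(k, 1)).values
    return {name: (p.detach().abs() > threshold).float()
            for name, p in model.named_parameters() if p.dim() > 1}

def apply_masks(model: nn.Module, masks: dict) -> None:
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])

# Iterative magnitude pruning: prune a bit more each round, then retrain.
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

for target in (0.5, 0.75, 0.9):                   # sparsity schedule per IMP round
    masks = global_magnitude_mask(model, target)
    apply_masks(model, masks)
    for _ in range(100):                          # retraining phase (dummy data)
        x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
        apply_masks(model, masks)                 # keep pruned weights at zero
```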
Pruning refers to the elimination of trivial weights from neural networks. The sub-networks within an overparameterized model produced after pruning are often called Lottery tickets. This research aims to generate winning lottery tickets from a set of lottery tickets that can achieve accuracy similar to that of the original unpruned network. We introduce a novel winning ticket called Cyclic Overlapping Lottery Ticket (COLT) by data splitting and cyclic retraining of the pruned network from scratch. We apply a cyclic pruning algorithm that keeps only the overlapping weights of different pruned models trained on different data segments. Our results demonstrate that COLT can achieve accuracies similar to those obtained by the unpruned model while maintaining high sparsities. We show that the accuracy of COLT is on par with the winning tickets of the Lottery Ticket Hypothesis (LTH) and, at times, is better. Moreover, COLTs can be generated using fewer iterations than tickets generated by the popular Iterative Magnitude Pruning (IMP) method. In addition, we also notice that COLTs generated on large datasets can be transferred to small ones without compromising performance, demonstrating their generalization capability. We conduct all our experiments on the Cifar-10, Cifar-100 & TinyImageNet datasets and report performance superior to the state-of-the-art methods.
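The core of COLT, keeping only the weights that survive pruning on every data split, amounts to intersecting binary masks. Below is a minimal sketch under the assumption of simple per-split magnitude masks; the random weight tensors stand in for models trained on different data segments, and the cyclic retraining from scratch is not shown.

```python
import torch

def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Binary mask that keeps the largest-magnitude weights."""
    k = int(weight.numel() * sparsity)
    thr = weight.abs().flatten().kthvalue(max(k, 1)).values
    return (weight.abs() > thr).float()

# Masks from models pruned on different data segments (random stand-ins here).
mask_split_a = magnitude_mask(torch.randn(256, 256), sparsity=0.8)
mask_split_b = magnitude_mask(torch.randn(256, 256), sparsity=0.8)

# COLT-style overlap: keep only weights retained on *both* splits; the
# resulting ticket would then be retrained from scratch.
overlap_mask = mask_split_a * mask_split_b
print(overlap_mask.mean().item())   # overlap density, <= the per-split density
```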
Neural models are known to be over-parameterized, and recent work has shown that sparse text-to-speech (TTS) models can outperform dense models. Although a plethora of sparse methods has been proposed for other domains, such methods have rarely been applied in TTS. In this work, we seek to answer the question: what are the characteristics of selected sparse techniques in terms of performance and model complexity? We compare a Tacotron2 baseline and the results of applying five sparsification techniques. We then evaluate performance in terms of naturalness, intelligibility and prosody, while reporting model size and training time. Complementary to prior research, we find that pruning before or during training can achieve performance similar to pruning after training while being trained faster, and that removing entire neurons degrades performance much more than removing parameters. To the best of our knowledge, this is the first work to compare sparsity paradigms in text-to-speech synthesis.
Network sparsity receives popularity mostly due to its capability of reducing network complexity. Extensive studies have explored gradient-driven sparsity. Typically, these methods are constructed under the premise of weight independence, which is contrary to the fact that weights mutually influence each other; thus, their performance remains to be improved. In this paper, we propose to further optimize gradient-driven sparsity (OptG) by solving this independence paradox. Our motivation comes from recent advances in supermask training, which show that a sparse subnetwork can be located in a randomly initialized network by simply updating the mask values, without modifying any weights. We prove that supermask training accumulates the weight gradients and can partly solve the independence paradox. Consequently, OptG integrates supermask training into gradient-driven sparsity, and a specialized mask optimizer is designed to solve the independence paradox. Experiments show that OptG clearly surpasses many existing state-of-the-art competitors. Our code is available at \url{https://github.com/zyxxmu/OptG}.
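Supermask training, on which OptG builds, can be sketched as learning a score per weight and keeping only the top-scoring fraction in the forward pass while gradients flow straight through to the scores. The frozen random weights and the straight-through estimator below follow the common supermask recipe and are assumptions about the details; OptG's specialized mask optimizer is not reproduced here.

```python
import torch
import torch.nn as nn

class TopKMask(torch.autograd.Function):
    """Binary top-k mask over scores; gradients pass straight through."""
    @staticmethod
    def forward(ctx, scores, density):
        k = max(int(scores.numel() * density), 1)
        thr = scores.flatten().kthvalue(scores.numel() - k + 1).values
        return (scores >= thr).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None               # straight-through estimator

class SupermaskLinear(nn.Linear):
    def __init__(self, in_f, out_f, density=0.1):
        super().__init__(in_f, out_f, bias=False)
        self.weight.requires_grad_(False)      # weights stay frozen
        self.scores = nn.Parameter(torch.randn_like(self.weight) * 0.01)
        self.density = density

    def forward(self, x):
        mask = TopKMask.apply(self.scores, self.density)
        return nn.functional.linear(x, self.weight * mask)

layer = SupermaskLinear(784, 10, density=0.1)
opt = torch.optim.SGD([layer.scores], lr=0.1)  # only the mask scores are optimized
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(layer(x), y)
loss.backward()
opt.step()
```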
Many applications require sparse neural networks due to space or inference time restrictions. There is a large body of work on training dense networks to yield sparse networks for inference, but this limits the size of the largest trainable sparse model to that of the largest trainable dense model. In this paper we introduce a method to train sparse neural networks with a fixed parameter count and a fixed computational cost throughout training, without sacrificing accuracy relative to existing dense-to-sparse training methods. Our method updates the topology of the sparse network during training by using parameter magnitudes and infrequent gradient calculations. We show that this approach requires fewer floating-point operations (FLOPs) to achieve a given level of accuracy compared to prior techniques. We demonstrate state-of-the-art sparse training results on a variety of networks and datasets, including ResNet-50, MobileNets on Imagenet-2012, and RNNs on WikiText-103. Finally, we provide some insights into why allowing the topology to change during the optimization can overcome local minima encountered when the topology remains static.
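A minimal sketch of one topology-update step in the spirit described above: drop the lowest-magnitude active connections and regrow the same number of connections where the (infrequently computed) dense gradient magnitude is largest. The update fraction and the random stand-in tensors are assumptions for illustration.

```python
import torch

def drop_and_grow(weight: torch.Tensor, mask: torch.Tensor,
                  dense_grad: torch.Tensor, update_frac: float = 0.3) -> torch.Tensor:
    """One drop/grow step: returns an updated binary mask with the same sparsity."""
    n_active = int(mask.sum().item())
    n_update = int(n_active * update_frac)
    if n_update == 0:
        return mask
    # Drop: among active weights, remove the smallest magnitudes.
    active_mag = torch.where(mask.bool(), weight.abs(),
                             torch.full_like(weight, float("inf")))
    drop_idx = active_mag.flatten().topk(n_update, largest=False).indices
    new_mask = mask.clone().flatten()
    new_mask[drop_idx] = 0.0
    # Grow: among inactive weights, activate those with the largest gradient magnitude.
    inactive_grad = torch.where(new_mask.bool().reshape(weight.shape),
                                torch.zeros_like(dense_grad), dense_grad.abs())
    grow_idx = inactive_grad.flatten().topk(n_update).indices
    new_mask[grow_idx] = 1.0
    return new_mask.reshape(mask.shape)

w, g = torch.randn(128, 128), torch.randn(128, 128)
mask = (torch.rand_like(w) < 0.1).float()        # ~90% sparse to start
mask = drop_and_grow(w, mask, g)
print(mask.mean().item())                        # overall density is preserved
```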
In recent years, neural character animation has emerged and offers an automatic method for animating virtual characters, whose motions are synthesized by a neural network. Controlling these motions in real time with a user-defined control signal is also an important task in video games. Solutions based on fully-connected layers (MLPs) and Mixture-of-Experts (MoE) have given impressive results for generating and controlling various motions with close-range interactions between the environment and the virtual character. However, a major shortcoming of fully-connected layers is their computational and memory cost, which may lead to sub-optimized solutions. In this work, we apply pruning algorithms to compress an MLP-MoE neural network in the context of interactive character animation, which reduces its number of parameters and accelerates its computation time, at the cost of a trade-off between this acceleration and the quality of the synthesized motions. This work demonstrates that, with the same number of experts and parameters, the pruned model produces fewer motion artifacts than the dense model, and that the learned high-level motion features are similar for both.
Deep neural networks (DNNs) are effective in solving many real-world problems. Larger DNN models usually exhibit better quality (e.g., accuracy, precision), but their excessive computation results in long inference time. Model sparsification can reduce the computation and memory cost while maintaining model quality. Most existing sparsification algorithms remove weights unidirectionally, while others randomly or greedily explore a small subset of weights in each layer for pruning. The limitations of these algorithms reduce the achievable level of sparsity. In addition, many algorithms still require a pre-trained dense model and thus suffer from a large memory footprint. In this paper, we propose a novel scheduled grow-and-prune (GaP) methodology that does not require pre-training a dense model. It addresses the shortcomings of the previous works by repeatedly growing a subset of layers to dense and then pruning them back to sparse after some training. Experiments show that models pruned with the proposed methods match or beat the quality of highly optimized dense models at 80% sparsity on a variety of tasks, such as image classification, object detection, 3D object segmentation and translation. They also outperform other state-of-the-art (SOTA) methods for model sparsification. As an example, a 90% non-uniform sparse ResNet-50 model obtained via GaP achieves 77.9% top-1 accuracy on ImageNet, improving over the previous SOTA result by 1.5%. All code will be publicly released.
This paper presents a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting. Inspired by network pruning techniques, we exploit redundancies in large deep networks to free up parameters that can then be employed to learn new tasks. By performing iterative pruning and network re-training, we are able to sequentially "pack" multiple tasks into a single network while ensuring minimal drop in performance and minimal storage overhead. Unlike prior work that uses proxy losses to maintain accuracy on older tasks, we always optimize for the task at hand. We perform extensive experiments on a variety of network architectures and large-scale datasets, and observe much better robustness against catastrophic forgetting than prior work. In particular, we are able to add three fine-grained classification tasks to a single ImageNet-trained VGG-16 network and achieve accuracies close to those of separately trained networks for each task. Code available at https://github.com/arunmallya/packnet
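A compact sketch of the bookkeeping behind the "packing" idea: each task claims a share of the still-free weights after pruning, and inference for task t uses the weights owned by tasks 1 through t. The PackedLayer class and its methods are illustrative names, and the per-task training and pruning steps themselves are omitted.

```python
import torch

class PackedLayer:
    """Tracks which weights belong to which task (illustrative bookkeeping only)."""
    def __init__(self, shape):
        self.weight = torch.randn(shape)
        self.owner = torch.zeros(shape, dtype=torch.long)   # 0 = free, k = task k

    def claim_for_task(self, task_id: int, keep_frac: float):
        """After training a task, keep the largest free weights for it; the rest stay free."""
        free = self.owner == 0
        scores = torch.where(free, self.weight.abs(),
                             torch.full_like(self.weight, -1.0))
        k = int(free.sum().item() * keep_frac)
        idx = scores.flatten().topk(k).indices
        flat_owner = self.owner.flatten()
        flat_owner[idx] = task_id
        self.owner = flat_owner.reshape(self.weight.shape)

    def weight_for_task(self, task_id: int) -> torch.Tensor:
        """Inference mask: weights owned by this task or any earlier task."""
        mask = (self.owner > 0) & (self.owner <= task_id)
        return self.weight * mask.float()

layer = PackedLayer((64, 64))
layer.claim_for_task(1, keep_frac=0.5)   # task 1 keeps half of all weights
layer.claim_for_task(2, keep_frac=0.5)   # task 2 keeps half of the remaining free ones
print(layer.weight_for_task(1).ne(0).float().mean().item())  # ~0.5
print(layer.weight_for_task(2).ne(0).float().mean().item())  # ~0.75
```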
By forcing at most N non-zero values among consecutive weights, the recent N:M network sparsity has received increasing attention for its two attractive advantages: 1) promising performance at high sparsity; 2) significant speedups on the NVIDIA A100 GPU. Recent studies require an expensive training phase or heavy gradient computation. In this paper, we show that N:M learning can be naturally characterized as a combinatorial problem that searches for the best combination candidate within a finite collection. Motivated by this characterization, we solve N:M sparsity in an efficient divide-and-conquer manner. First, we divide the weight vector into $C_{\text{M}}^{\text{N}}$ combination subsets of a fixed size N. Then, we conquer the combinatorial problem by assigning each combination a learnable score that is jointly optimized with its associated weights. We prove that the introduced scoring mechanism can well model the relative importance between combination subsets. By gradually removing low-scored subsets, N:M fine-grained sparsity can be efficiently optimized during the normal training phase. Comprehensive experiments demonstrate that our learning best combination (LBC) performs consistently better than off-the-shelf N:M sparsity methods. Our code is released at \url{https://github.com/zyxxmu/LBC}.
Neural network pruning-the task of reducing the size of a network by removing parameters-has been the subject of a great deal of work in recent years. We provide a meta-analysis of the literature, including an overview of approaches to pruning and consistent findings in the literature. After aggregating results across 81 papers and pruning hundreds of models in controlled conditions, our clearest finding is that the community suffers from a lack of standardized benchmarks and metrics. This deficiency is substantial enough that it is hard to compare pruning techniques to one another or determine how much progress the field has made over the past three decades. To address this situation, we identify issues with current practices, suggest concrete remedies, and introduce ShrinkBench, an open-source framework to facilitate standardized evaluations of pruning methods. We use ShrinkBench to compare various pruning techniques and show that its comprehensive evaluation can prevent common pitfalls when comparing pruning methods.
The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.
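A short sketch of the criterion described above: rank the filters of a convolutional layer by the L1 norm of their weights and keep only the largest ones, which shrinks the layer (and its output feature maps) rather than producing irregular sparsity. The keep ratio is arbitrary, and rewiring the next layer's input channels is only indicated in a comment.

```python
import torch
import torch.nn as nn

def prune_filters_l1(conv: nn.Conv2d, keep_ratio: float = 0.66) -> nn.Conv2d:
    """Return a smaller Conv2d keeping the filters with the largest L1 norm."""
    l1 = conv.weight.detach().abs().sum(dim=(1, 2, 3))     # one score per filter
    n_keep = max(int(conv.out_channels * keep_ratio), 1)
    keep = l1.topk(n_keep).indices.sort().values
    new_conv = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                         stride=conv.stride, padding=conv.padding,
                         bias=conv.bias is not None)
    with torch.no_grad():
        new_conv.weight.copy_(conv.weight[keep])
        if conv.bias is not None:
            new_conv.bias.copy_(conv.bias[keep])
    # NOTE: the next layer's input channels must be sliced with `keep` as well.
    return new_conv

conv = nn.Conv2d(64, 128, 3, padding=1)
pruned = prune_filters_l1(conv, keep_ratio=0.66)
print(pruned.weight.shape)   # torch.Size([84, 64, 3, 3])
```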
Most existing pruning works are resource-intensive, requiring retraining or fine-tuning of the pruned models for accuracy. We propose a retraining-free pruning method based on hyperspherical learning and loss penalty terms. The proposed loss penalty term pushes some of the model weights far from zero, while the rest of the weight values are pushed near zero and can be safely pruned with no need for retraining and a negligible accuracy drop. In addition, our proposed method can instantly recover the accuracy of a pruned model by replacing the pruned values with their mean value. Our method obtains state-of-the-art results in retraining-free pruning and is evaluated on ResNet-18/50 and MobileNetV2 with the ImageNet dataset. One can easily get a 50% pruned ResNet-18 model with a 0.47% accuracy drop. With fine-tuning, the experimental results show that our method can significantly boost the accuracy of the pruned models compared with existing works. For example, the accuracy of a 70% pruned (except the first convolutional layer) MobileNetV2 model only drops 3.5%, much less than the 7%~10% accuracy drop with conventional methods.
Network pruning is an effective approach to reduce network complexity with an acceptable performance compromise. Existing studies achieve the sparsity of neural networks via time-consuming weight tuning or complex searching on networks with expanded width, which greatly limits the applications of network pruning. In this paper, we show that high-performing and sparse sub-networks, which we term "lottery jackpots", exist in pre-trained models with expanded width, without the need for weight tuning. For example, we obtain a lottery jackpot that has only 10% of the parameters yet still reaches the performance of the original dense VGGNet-19, without any modification of the pre-trained weights on CIFAR-10. Furthermore, we observe that the sparse masks derived from many existing pruning criteria have a high overlap with the searched mask of our lottery jackpot, among which magnitude-based pruning yields the mask most similar to ours. Based on this insight, we initialize our sparse mask with magnitude-based pruning, which leads to at least a 3x reduction of the lottery jackpot search while achieving comparable or better performance. Specifically, our magnitude-based lottery jackpot removes 90% of the weights in ResNet-50, while easily obtaining more than 70% top-1 accuracy using only 10 search epochs on ImageNet. Our code is available at https://github.com/zyxxmu/lottery-jackpots.
Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin, 2019), and find that with optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization.