Despite their superior performance on many computer vision tasks, deep convolutional neural networks are well known to require compression before they can be deployed on resource-constrained devices. Most existing network pruning methods require laborious human effort and prohibitive computational resources, especially when the constraints change. This practically limits the application of model compression when models must be deployed on diverse devices. Moreover, existing methods are still challenged by missing theoretical guidance. In this paper, we propose an information-theory-inspired strategy for automatic model compression. The principle behind our method is the information bottleneck theory: hidden representations should compress information with each other. We therefore introduce the normalized Hilbert-Schmidt Independence Criterion (nHSIC) on network activations as a stable and generalized indicator of layer importance. When a certain resource constraint is given, we combine the HSIC indicator with the constraint to convert the architecture search problem into a linear programming problem with quadratic constraints. Such a problem is easily solved by a convex optimization method within a few seconds. We also provide a rigorous proof showing that optimizing the normalized HSIC simultaneously minimizes the mutual information between different layers. Without any search process, our method achieves a better compression trade-off than state-of-the-art compression algorithms. For instance, with ResNet-50, we achieve a 45.3% FLOPs reduction with 75.75% top-1 accuracy on ImageNet. Code is available at https://github.com/mac-automl/itpruner/tree/master.
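A minimal sketch of the nHSIC indicator described above, computed from two layers' activations with Gaussian kernels and a median bandwidth heuristic (illustrative assumptions; the paper's estimator and kernel choices may differ):

```python
import numpy as np

def gaussian_kernel(x, sigma=None):
    """Pairwise Gaussian kernel matrix for activations x of shape (n, d)."""
    sq_dists = np.sum(x**2, 1)[:, None] + np.sum(x**2, 1)[None, :] - 2 * x @ x.T
    if sigma is None:                      # median heuristic for the bandwidth
        sigma = np.sqrt(np.median(sq_dists[sq_dists > 0]) / 2)
    return np.exp(-sq_dists / (2 * sigma**2))

def hsic(k, l):
    """Biased HSIC estimator from two kernel matrices."""
    n = k.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    return np.trace(k @ h @ l @ h) / (n - 1) ** 2

def normalized_hsic(x, y):
    """nHSIC between two sets of layer activations, each (n_samples, features)."""
    k, l = gaussian_kernel(x), gaussian_kernel(y)
    return hsic(k, l) / np.sqrt(hsic(k, k) * hsic(l, l) + 1e-12)

# Toy usage: activations of two layers for 64 samples.
act_a = np.random.randn(64, 256)
act_b = np.random.randn(64, 128)
print(normalized_hsic(act_a, act_b))
```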
To bridge the ever-increasing gap between the complexity of deep neural networks and hardware capability, network quantization has attracted growing research attention. The latest trend of mixed-precision quantization exploits the hardware's multiple bit-width arithmetic operations to unleash the full potential of network quantization. However, this also results in a difficult integer programming formulation, and most existing approaches resort to an extremely time-consuming search process even with various relaxations. Instead of solving the original integer programming problem, we propose to optimize a proxy metric, the concept of network orthogonality, which is highly correlated with the loss of the integer programming problem but is easy to optimize with linear programming. This approach reduces the search time and the required data amount by orders of magnitude with little compromise on quantization accuracy. Specifically, we obtain 72.08% top-1 accuracy on ResNet-18 at 6.7MB without any searching iterations. Given the high efficiency and low data dependency of our algorithm, we apply it to post-training quantization, which achieves 71.27% top-1 accuracy on MobileNetV2 at only 1.5MB. Our code is available at https://github.com/mac-automl/oppq.
In the past few years, the performance of neural networks has improved significantly at the cost of an ever-increasing number of floating point operations (FLOPs). However, more FLOPs can be a problem when computational resources are limited. As an attempt to solve this problem, pruning filters is a common solution, but most existing pruning methods do not preserve the model accuracy efficiently and therefore require a large number of finetuning epochs. In this paper, we propose an automatic pruning method that learns which neurons to preserve in order to maintain the model accuracy while reducing the FLOPs to a predefined target. To accomplish this task, we introduce a trainable bottleneck that requires only one single epoch with 25.6% (CIFAR-10) or 7.49% (ILSVRC2012) of the dataset to learn which filters to prune. Experiments on various architectures and datasets show that the proposed method can not only preserve the accuracy after pruning but also outperform existing methods after finetuning. We achieve a 52.00% FLOPs reduction on ResNet-50, with 47.51% top-1 accuracy right after pruning and a state-of-the-art (SOTA) accuracy of 76.63% after finetuning on ILSVRC-2012. Code is available (link anonymized for review).
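A hedged sketch of a trainable per-channel gate of the kind described above: sigmoid gates scale each filter's output and a penalty drives the soft kept-FLOPs ratio toward a target. The module names, the sigmoid relaxation, and the loss weighting are illustrative assumptions, not the paper's exact bottleneck design.

```python
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    def __init__(self, num_channels):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_channels))  # one gate per filter

    def forward(self, x):                      # x: (N, C, H, W)
        gate = torch.sigmoid(self.logits)      # soft keep-probability per channel
        return x * gate.view(1, -1, 1, 1)

    def keep_ratio(self):
        return torch.sigmoid(self.logits).mean()

def flops_penalty(gates, target_ratio, weight=10.0):
    """Penalize deviation of the (soft) kept-FLOPs ratio from the target."""
    ratio = torch.stack([g.keep_ratio() for g in gates]).mean()
    return weight * (ratio - target_ratio) ** 2

# Usage inside a training step (a real task loss replaces out.mean()):
gates = [ChannelGate(64), ChannelGate(128)]
feat = torch.randn(2, 64, 8, 8)
out = gates[0](feat)
loss = out.mean() + flops_penalty(gates, target_ratio=0.5)
loss.backward()
```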
Neural network pruning has achieved remarkable performance in reducing the complexity of deep network models. Recent network pruning methods usually focus on removing unimportant or redundant filters from the network. In this paper, by exploring the similarities between feature maps, we propose a novel filter pruning method, Central Filter (CF), which suggests that a filter is approximately equal to a set of other filters after appropriate adjustments. Our method is based on the finding that the average similarity between feature maps changes very little, regardless of the number of input images. Based on this finding, we build similarity graphs on feature maps and compute the closeness centrality of each node to select the central filters. Moreover, we design a method to directly adjust the weights in the next layer corresponding to the central filters, effectively minimizing the error caused by pruning. Through experiments on various benchmark networks and datasets, CF yields state-of-the-art performance. For example, with ResNet-56, CF reduces approximately 39.7% of the FLOPs by removing 47.1% of the parameters, with even a 0.33% accuracy improvement on CIFAR-10. With GoogLeNet, CF reduces approximately 63.2% of the FLOPs by removing 55.6% of the parameters, with only a 0.35% loss in top-1 accuracy on CIFAR-10. With ResNet-50, CF reduces approximately 47.9% of the FLOPs by removing 36.9% of the parameters, with only a 1.07% loss in top-1 accuracy on ImageNet. The code is available at https://github.com/8ubpshlr23/centrter.
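A simplified sketch of the central-filter selection step described above: build a similarity graph over a layer's averaged feature maps and rank filters by closeness centrality. The cosine similarity measure and the 0.7 edge threshold are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np
import networkx as nx

def select_central_filters(feature_maps, num_keep, threshold=0.7):
    """feature_maps: (C, H, W) average feature map per filter for one layer."""
    c = feature_maps.shape[0]
    flat = feature_maps.reshape(c, -1)
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12)
    sim = flat @ flat.T                          # cosine similarity between maps

    g = nx.Graph()
    g.add_nodes_from(range(c))
    for i in range(c):
        for j in range(i + 1, c):
            if sim[i, j] > threshold:            # connect sufficiently similar filters
                g.add_edge(i, j, weight=sim[i, j])

    centrality = nx.closeness_centrality(g)      # central nodes "cover" many others
    ranked = sorted(centrality, key=centrality.get, reverse=True)
    return sorted(ranked[:num_keep])             # indices of filters to keep

maps = np.random.rand(16, 8, 8)
print(select_central_filters(maps, num_keep=8))
```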
Neural network pruning offers a promising prospect to facilitate deploying deep neural networks on resource-limited devices. However, existing methods are still challenged by the training inefficiency and labor cost in pruning designs, due to missing theoretical guidance of non-salient network components. In this paper, we propose a novel filter pruning method by exploring the High Rank of feature maps (HRank). Our HRank is inspired by the discovery that the average rank of multiple feature maps generated by a single filter is always the same, regardless of the number of image batches CNNs receive. Based on HRank, we develop a method that is mathematically formulated to prune filters with low-rank feature maps. The principle behind our pruning is that low-rank feature maps contain less information, and thus pruned results can be easily reproduced. Besides, we experimentally show that weights with high-rank feature maps contain more important information, such that even when a portion is not updated, very little damage would be done to the model performance. Without introducing any additional constraints, HRank leads to significant improvements over the state-of-the-arts in terms of FLOPs and parameters reduction, with similar accuracies. For example, with ResNet-110, we achieve a 58.2%-FLOPs reduction by removing 59.2% of the parameters, with only a small loss of 0.14% in top-1 accuracy on CIFAR-10. With ResNet-50, we achieve a 43.8%-FLOPs reduction by removing 36.7% of the parameters, with only a loss of 1.17% in the top-1 accuracy on ImageNet. The code is available at https://github.com/lmbxmu/HRank.
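A minimal sketch of the HRank statistic described above: the average matrix rank of each filter's feature maps over a small batch, collected with forward hooks. The hook placement, batch size, and use of torchvision's ResNet-18 are illustrative choices.

```python
import torch
import torchvision.models as models

@torch.no_grad()
def average_feature_map_rank(conv_output):
    """conv_output: (N, C, H, W) -> per-filter average rank, shape (C,)."""
    n, c, h, w = conv_output.shape
    ranks = torch.linalg.matrix_rank(conv_output.reshape(n * c, h, w).float())
    return ranks.reshape(n, c).float().mean(dim=0)

model = models.resnet18(weights=None).eval()
ranks = {}

def make_hook(name):
    def hook(_module, _inp, out):
        ranks[name] = average_feature_map_rank(out)
    return hook

# Attach hooks to a couple of convolutions and run a small batch.
model.layer1[0].conv1.register_forward_hook(make_hook("layer1.0.conv1"))
model.layer2[0].conv1.register_forward_hook(make_hook("layer2.0.conv1"))
with torch.no_grad():
    model(torch.randn(4, 3, 224, 224))

for name, r in ranks.items():
    print(name, r[:5])  # filters with the lowest average rank are pruning candidates
```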
Deep learning technologies have demonstrated remarkable effectiveness in a wide range of tasks, and deep learning holds the potential to advance a multitude of applications, including in edge computing, where deep models are deployed on edge devices to enable instant data processing and response. A key challenge is that while the application of deep models often incurs substantial memory and computational costs, edge devices typically offer only very limited storage and computational capabilities that may vary substantially across devices. These characteristics make it difficult to build deep learning solutions that unleash the potential of edge devices while complying with their constraints. A promising approach to addressing this challenge is to automate the design of effective deep learning models that are lightweight, require only little storage, and incur only low computational overhead. This survey offers comprehensive coverage of studies on design automation techniques for deep learning models targeting edge computing. It provides an overview and comparison of the key metrics that are commonly used to quantify models in terms of effectiveness, lightness, and computational cost. The survey then covers three categories of state-of-the-art deep model design automation techniques: automated neural architecture search, automated model compression, and joint automated design and compression. Finally, the survey covers open issues and directions for future research.
Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources. The redundancy in feature maps is an important characteristic of those successful CNNs, but has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that could fully reveal information underlying intrinsic features. The proposed Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed to stack Ghost modules, and then the lightweight GhostNet can be easily established. Experiments conducted on benchmarks demonstrate that the proposed Ghost module is an impressive alternative of convolution layers in baseline models, and our GhostNet can achieve higher recognition performance (e.g. 75.7% top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012 classification dataset. Code is available at https://github.com/huawei-noah/ghostnet.
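A simplified Ghost module in the spirit described above: a primary convolution produces the intrinsic feature maps, and a cheap depth-wise convolution generates the extra "ghost" maps. The hyperparameters (ratio, kernel sizes) follow common reference settings but are assumptions here, not the exact published configuration.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch, out_ch, ratio=2, kernel_size=1, dw_size=3):
        super().__init__()
        init_ch = -(-out_ch // ratio)              # ceil(out_ch / ratio) intrinsic maps
        cheap_ch = init_ch * (ratio - 1)
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size, padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(                # depth-wise conv: the "cheap" operation
            nn.Conv2d(init_ch, cheap_ch, dw_size, padding=dw_size // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True))
        self.out_ch = out_ch

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)
        return torch.cat([intrinsic, ghost], dim=1)[:, :self.out_ch]

m = GhostModule(16, 32)
print(m(torch.randn(1, 16, 56, 56)).shape)  # -> torch.Size([1, 32, 56, 56])
```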
Channel pruning is widely used to reduce the complexity of deep network models. Recent pruning methods usually identify which parts of the network to prune by proposing a channel importance criterion. However, recent studies have shown that these criteria do not work well in all cases. In this paper, we propose a novel Feature Shift Minimization (FSM) method to compress CNN models, which evaluates feature shift by converging the information of both features and filters. Specifically, we first investigate the compression efficiency of some prevalent methods at different layer depths and then propose the concept of feature shift. We then introduce an approximation method to estimate the magnitude of the feature shift, since it is difficult to compute directly. Besides, we present a distribution-optimization algorithm to compensate for the accuracy loss and improve the network compression efficiency. The proposed method yields state-of-the-art performance on various benchmark networks and datasets, as verified by extensive experiments. The code is available at https://github.com/lscgx/fsm.
Low-rankness plays an important role in traditional machine learning, but is not so popular in deep learning. Most previous low-rank network compression methods compress the networks by approximating pre-trained models and re-training. However, the optimal solution in the Euclidean space may be quite different from the one in the low-rank manifold. A well-pre-trained model is not a good initialization for the model with low-rank constraints. Thus, the performance of a low-rank compressed network degrades significantly. Compared to other network compression methods such as pruning, low-rank methods attract less attention in recent years. In this paper, we devise a new training method, low-rank projection with energy transfer (LRPET), that trains low-rank compressed networks from scratch and achieves competitive performance. First, we propose to alternately perform stochastic gradient descent training and projection onto the low-rank manifold. Compared to re-training on the compact model, this enables full utilization of model capacity since solution space is relaxed back to Euclidean space after projection. Second, the matrix energy (the sum of squares of singular values) reduction caused by projection is compensated by energy transfer. We uniformly transfer the energy of the pruned singular values to the remaining ones. We theoretically show that energy transfer eases the trend of gradient vanishing caused by projection. Third, we propose batch normalization (BN) rectification to cut off its effect on the optimal low-rank approximation of the weight matrix, which further improves the performance. Comprehensive experiments on CIFAR-10 and ImageNet have justified that our method is superior to other low-rank compression methods and also outperforms recent state-of-the-art pruning methods. Our code is available at https://github.com/BZQLin/LRPET.
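A hedged sketch of the projection-with-energy-transfer step described above: truncate a weight matrix to rank r by SVD, then rescale the kept singular values so the total matrix energy is preserved. A single global rescaling is one simple way to realize the transfer; the paper's exact transfer rule, rank selection, and handling of convolutional weights may differ.

```python
import torch

def low_rank_project_with_energy_transfer(w, rank):
    """w: 2-D weight matrix (out_features, in_features); returns a rank-`rank` matrix."""
    u, s, vh = torch.linalg.svd(w, full_matrices=False)
    total_energy = (s ** 2).sum()
    s_kept = s[:rank]
    kept_energy = (s_kept ** 2).sum()
    # Transfer the pruned energy back onto the kept singular values via rescaling.
    scale = torch.sqrt(total_energy / (kept_energy + 1e-12))
    return u[:, :rank] @ torch.diag(s_kept * scale) @ vh[:rank]

w = torch.randn(256, 512)
w_lr = low_rank_project_with_energy_transfer(w, rank=64)
print(torch.linalg.matrix_rank(w_lr).item(),                 # 64
      (torch.linalg.svdvals(w) ** 2).sum().item(),
      (torch.linalg.svdvals(w_lr) ** 2).sum().item())        # energies match
```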
Filter pruning has been widely used for neural network compression because of the practical acceleration it achieves. To date, most existing filter pruning works explore the importance of filters via intra-channel information. In this paper, starting from an inter-channel perspective, we propose to perform efficient filter pruning using channel independence, a metric that measures the correlation among different feature maps. Less independent feature maps are interpreted as containing less useful information/knowledge, and hence their corresponding filters can be pruned without affecting model capacity. We systematically investigate the quantification metric, measuring scheme, and sensitivity/reliability of channel independence in the context of filter pruning. Our evaluation results on different models across various datasets show the superior performance of our approach. Notably, on the CIFAR-10 dataset our solution brings 0.75% and 0.94% accuracy increases over the baseline ResNet-56 and ResNet-110 models, respectively, while the model size and FLOPs are reduced by 42.8% and 47.4% (for ResNet-56) and 48.3% and 52.1% (for ResNet-110), respectively. On the ImageNet dataset, our approach achieves 40.8% and 44.8% storage and computation reductions, respectively, while reaching 74.8% top-1 accuracy (a 0.15% accuracy increase). The code is available at https://github.com/eclipsess/chip_neurivs2021.
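A hedged sketch of a channel-independence score in the spirit described above: for each feature map, measure how much the nuclear norm of the layer's (channels x pixels) activation matrix drops when that map is zeroed out; maps whose removal barely changes the norm are treated as less independent. The norm choice, single-input usage, and absence of batching are illustrative simplifications.

```python
import torch

@torch.no_grad()
def channel_independence(feature_maps):
    """feature_maps: (C, H, W) activations of one layer for one input."""
    c = feature_maps.shape[0]
    mat = feature_maps.reshape(c, -1).float()          # rows = channels
    full_norm = torch.linalg.matrix_norm(mat, ord='nuc')
    scores = torch.empty(c)
    for i in range(c):
        reduced = mat.clone()
        reduced[i] = 0.0                               # remove channel i
        scores[i] = full_norm - torch.linalg.matrix_norm(reduced, ord='nuc')
    return scores                                      # low score -> prune candidate

fm = torch.randn(16, 8, 8)
print(channel_independence(fm))
```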
Deploying convolutional neural networks (CNNs) on mobile devices is difficult due to the limited memory and computation resources. We aim to design efficient neural networks for heterogeneous devices, including CPUs and GPUs, by exploiting the redundancy in feature maps, which has rarely been investigated in neural architecture design. For CPU-like devices, we propose a novel CPU-efficient Ghost (C-Ghost) module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that can fully reveal the information underlying the intrinsic features. The proposed C-Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. C-Ghost bottlenecks are designed to stack C-Ghost modules, and then the lightweight C-GhostNet can be easily established. We further consider efficient networks for GPU devices. Without involving too many GPU-inefficient operations (e.g., depth-wise convolution) in the building stage, we propose to exploit stage-wise feature redundancy to formulate a GPU-efficient Ghost (G-Ghost) stage structure. The features in a stage are split into two parts, where the first part is processed using the original block with fewer output channels to generate the intrinsic features, and the other part is generated using cheap operations that exploit stage-wise redundancy. Experiments conducted on benchmarks demonstrate the effectiveness of the proposed C-Ghost module and G-Ghost stage. C-GhostNet and G-GhostNet achieve the optimal trade-off between accuracy and latency for CPUs and GPUs, respectively. Code is available at https://github.com/huawei-noah/cv-backbones.
Convolutional neural networks (CNNs) have been applied to numerous Internet of Things (IoT) devices for multiple downstream tasks. However, as the amount of data on edge devices grows, CNNs can hardly complete some tasks in time with limited computing and storage resources. Recently, filter pruning has been regarded as an effective technique to compress and accelerate CNNs, but existing methods rarely prune CNNs from the perspective of compressing high-dimensional tensors. In this paper, we propose a novel theory to find redundant information in three-dimensional tensors, namely Quantified Similarity between Feature Maps (QSFM), and use this theory to guide the filter pruning procedure. We evaluate QSFM on datasets (CIFAR-10, CIFAR-100, and ILSVRC-12) and on edge devices, demonstrating that the proposed method can find the redundant information in neural networks with comparable compression and a tolerable accuracy drop. Without any fine-tuning operation, QSFM can compress ResNet-56 on CIFAR-10 significantly (48.7% of FLOPs and 57.9% of parameters) with only a 0.54% loss in top-1 accuracy. For practical applications on edge devices, QSFM can accelerate MobileNet-V2 inference by 1.53x with only a 1.23% loss in ILSVRC-12 top-1 accuracy.
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31× FLOPs reduction and 16.63× compression on VGG-16, with only 0.52% top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1% top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
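A simplified sketch of ThiNet-style greedy channel selection: given, for a set of sampled spatial locations, each input channel's contribution to the next layer's pre-activation, greedily discard the channels whose removal changes the output least. The collection of the per-channel contribution tensor is assumed to happen elsewhere, and names here are illustrative.

```python
import torch

def greedy_channel_removal(contributions, num_remove):
    """contributions: (num_samples, C) per-channel contributions whose sum over C
    equals the next layer's output at each sampled location."""
    c = contributions.shape[1]
    removed = []
    removed_sum = torch.zeros(contributions.shape[0])
    for _ in range(num_remove):
        best_err, best_ch = None, None
        for ch in range(c):
            if ch in removed:
                continue
            # Reconstruction error if `ch` were also removed (squared sum of removed parts).
            err = ((removed_sum + contributions[:, ch]) ** 2).sum()
            if best_err is None or err < best_err:
                best_err, best_ch = err, ch
        removed.append(best_ch)
        removed_sum += contributions[:, best_ch]
    return sorted(removed)          # indices of channels to prune in the current layer

contrib = torch.randn(1024, 64)     # e.g., 1024 sampled locations, 64 channels
print(greedy_channel_removal(contrib, num_remove=32))
```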
The mainstream approach for filter pruning is usually either to force a hard-coded importance estimation upon a computation-heavy pretrained model to select "important" filters, or to impose a hyperparameter-sensitive sparse constraint on the loss objective to regularize the network training. In this paper, we present a novel filter pruning method, dubbed dynamic-coded filter fusion (DCFF), to derive compact CNNs in a computation-economical and regularization-free manner for efficient image classification. Each filter in our DCFF is firstly given an inter-similarity distribution with a temperature parameter as a filter proxy, on top of which, a fresh Kullback-Leibler divergence based dynamic-coded criterion is proposed to evaluate the filter importance. In contrast to simply keeping high-score filters in other methods, we propose the concept of filter fusion, i.e., the weighted averages using the assigned proxies, as our preserved filters. We obtain a one-hot inter-similarity distribution as the temperature parameter approaches infinity. Thus, the relative importance of each filter can vary along with the training of the compact CNN, leading to dynamically changeable fused filters without both the dependency on the pretrained model and the introduction of sparse constraints. Extensive experiments on classification benchmarks demonstrate the superiority of our DCFF over the compared counterparts. For example, our DCFF derives a compact VGGNet-16 with only 72.77M FLOPs and 1.06M parameters while reaching top-1 accuracy of 93.47% on CIFAR-10. A compact ResNet-50 is obtained with 63.8% FLOPs and 58.6% parameter reductions, retaining 75.60% top-1 accuracy on ILSVRC-2012. Our code, narrower models and training logs are available at https://github.com/lmbxmu/DCFF.
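A hedged sketch of the filter-fusion idea described above: each preserved filter is a weighted average of all filters in the layer, with weights from a temperature-controlled softmax over negative pairwise distances, and a KL-divergence-to-uniform score used to pick which proxies to keep. The distance choice, the KL reference distribution, and the fixed temperature are assumptions; the actual method anneals the temperature during training.

```python
import torch
import torch.nn.functional as F

def fuse_filters(weight, num_keep, temperature=1.0):
    """weight: (C_out, C_in, k, k) conv weights -> fused (num_keep, C_in, k, k)."""
    c_out = weight.shape[0]
    flat = weight.reshape(c_out, -1)
    dist = torch.cdist(flat, flat)                         # pairwise Euclidean distances
    proxy = F.softmax(-dist / temperature, dim=1)          # inter-similarity distributions

    uniform = torch.full((c_out,), 1.0 / c_out)
    kl = (proxy * (proxy / uniform).log()).sum(dim=1)      # KL(proxy || uniform) per filter
    keep = torch.topk(kl, num_keep).indices                # most "distinctive" filters

    fused = proxy[keep] @ flat                             # weighted averages as new filters
    return fused.reshape(num_keep, *weight.shape[1:]), keep

w = torch.randn(64, 32, 3, 3)
fused_w, kept = fuse_filters(w, num_keep=32)
print(fused_w.shape, kept[:5])
```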
The goal of filter pruning is to search for unimportant filters to remove in order to make convolutional neural networks (CNNs) efficient without sacrificing performance in the process. The challenge lies in finding information that can help determine how important or relevant each filter is with respect to the final output of the neural network. In this work, we share our observation that the batch normalization (BN) parameters of pre-trained CNNs can be used to estimate the feature distribution of the activation outputs, without processing the training data. Based on this observation, we propose a simple yet effective filter pruning method that evaluates the importance of each filter using the BN parameters of pre-trained CNNs. Experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance, with and without fine-tuning, in terms of the trade-off between the accuracy drop and the reduction in computational complexity.
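A hedged sketch of a BN-parameter-based importance score in the spirit of the method above: assuming each channel's BN output is roughly N(beta, gamma^2) and is followed by ReLU, rank filters by the expected positive activation computed in closed form from gamma and beta alone. This specific score is an illustrative choice, not necessarily the paper's exact criterion.

```python
import math
import torch
import torch.nn as nn

def bn_importance(bn: nn.BatchNorm2d):
    gamma = bn.weight.detach().abs()
    beta = bn.bias.detach()
    z = beta / (gamma + 1e-12)
    phi = torch.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)   # standard normal pdf
    cdf = 0.5 * (1 + torch.erf(z / math.sqrt(2)))             # standard normal cdf
    return beta * cdf + gamma * phi    # E[ReLU(X)] for X ~ N(beta, gamma^2), per channel

bn = nn.BatchNorm2d(64)
nn.init.normal_(bn.weight, 0.5, 0.1)
nn.init.normal_(bn.bias, 0.0, 0.1)
scores = bn_importance(bn)
print(scores.topk(8).indices)          # highest-scoring filters to keep
```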
Previous works utilized "smaller-norm-less-important" criterion to prune filters with smaller norm values in a convolutional neural network. In this paper, we analyze this norm-based criterion and point out that its effectiveness depends on two requirements that are not always met: (1) the norm deviation of the filters should be large; (2) the minimum norm of the filters should be small. To solve this problem, we propose a novel filter pruning method, namely Filter Pruning via Geometric Median (FPGM), to compress the model regardless of those two requirements. Unlike previous methods, FPGM compresses CNN models by pruning filters with redundancy, rather than those with "relatively less" importance. When applied to two image classification benchmarks, our method validates its usefulness and strengths. Notably, on CIFAR-10, FPGM reduces more than 52% FLOPs on ResNet-110 with even 2.69% relative accuracy improvement. Moreover, on ILSVRC-2012, FPGM reduces more than 42% FLOPs on ResNet-101 without top-5 accuracy drop, which has advanced the state-of-the-art. Code is publicly available on GitHub: https://github.com/he-y/filter-pruning-geometric-median
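A minimal sketch of the FPGM selection rule: within a layer, the filters closest to the geometric median of all filters (approximated here, as is common in practice, by the smallest total Euclidean distance to the other filters) are treated as redundant and pruned. This is a sketch of the selection step only, not the full training pipeline.

```python
import torch

def fpgm_prune_indices(weight, num_prune):
    """weight: (C_out, C_in, k, k) conv weights -> indices of filters to prune."""
    flat = weight.reshape(weight.shape[0], -1)
    dist = torch.cdist(flat, flat)                # pairwise distances between filters
    total_dist = dist.sum(dim=1)                  # distance to all other filters
    # Filters with the smallest total distance lie nearest the geometric median.
    return torch.topk(total_dist, num_prune, largest=False).indices

w = torch.randn(64, 32, 3, 3)
print(fpgm_prune_indices(w, num_prune=16))
```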
Filter pruning methods introduce structural sparsity by removing selected filters and are thus particularly effective for reducing complexity. Previous works empirically prune networks from the perspective that filters with smaller norms contribute less to the final results. However, such criteria have been proven to be sensitive to the distribution of filters, and the accuracy may be hard to recover since the capacity gap is fixed once pruned. In this paper, we propose a novel filter pruning method called Asymptotic Soft Cluster Pruning (ASCP) to identify the redundancy of a network based on the similarity of its filters. Each filter of the over-parameterized network is first distinguished by clustering and then reconstructed to manually introduce redundancy into it. Several clustering guidelines are also proposed to better preserve the feature extraction ability. After reconstruction, filters are allowed to be updated to eliminate the effect of incorrect selection. Besides, various decay strategies of the pruning rate are adopted to stabilize the pruning process and improve the final performance. By gradually generating more identical filters within each cluster, ASCP can remove them through a channel addition operation with almost no accuracy drop. Extensive experiments on the CIFAR-10 and ImageNet datasets show that our method can achieve competitive results compared with many state-of-the-art algorithms.
Convolutional neural networks (CNNs) have a certain amount of parameter redundancy, and filter pruning aims to remove the redundant filters, offering the possibility of applying CNNs on terminal devices. However, previous works pay more attention to designing evaluation criteria of filter importance and then prune less important filters with a fixed pruning rate or a fixed number to reduce the redundancy of convolutional neural networks, without considering how many filters it is most reasonable to reserve for each layer. From this perspective, we propose a new filter pruning method by searching for the proper number of filters (SNF). SNF is dedicated to searching for the most reasonable number of reserved filters for each layer and then pruning filters with specific criteria. It can tailor the most suitable network structure for different FLOPs budgets. Filter pruning with our method leads to state-of-the-art (SOTA) accuracy on CIFAR-10 and achieves competitive performance on ImageNet ILSVRC-2012. Based on the ResNet-56 network, it achieves a 0.14% increase in top-1 accuracy with a 52.94% FLOPs reduction on CIFAR-10. The pruned ResNet-110 on CIFAR-10 also improves top-1 accuracy by 0.03% while reducing 68.68% of the FLOPs. For ImageNet, we set the pruning rate to a 52.10% FLOPs reduction with only a 0.74% drop in top-1 accuracy. The code is available at https://github.com/pk-l/snf.
To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that it is essential to prune neurons in the entire neuron network jointly based on a unified goal: minimizing the reconstruction error of important responses in the "final response layer" (FRL), which is the secondto-last layer before classification, for a pruned network to retrain its predictive power. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, and formulate network pruning as a binary integer optimization problem and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with least importance, and then fine-tuned to retain its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.
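A hedged sketch of the importance-propagation rule described above: starting from per-neuron importance scores in the final response layer, scores are propagated backward through each layer as s_prev = |W|^T s_next, and the lowest-scoring neurons in each layer are pruned. Fully connected layers and the specific shapes used here are illustrative.

```python
import torch

def propagate_importance(weights, final_scores):
    """weights: list of (out, in) matrices from first to last layer;
    final_scores: (out_last,) importance of final-response-layer neurons."""
    scores = [final_scores]
    s = final_scores
    for w in reversed(weights):
        s = w.detach().abs().t() @ s          # NISP-style backward propagation
        scores.append(s)
    return list(reversed(scores))             # scores[i] aligns with the input of weights[i]

w1, w2 = torch.randn(256, 512), torch.randn(128, 256)
frl_scores = torch.rand(128)                   # e.g., from a feature-ranking method
all_scores = propagate_importance([w1, w2], frl_scores)
# Prune the lowest-scoring 30% of neurons feeding the second layer.
prune_mask = all_scores[1] < all_scores[1].quantile(0.3)
print(prune_mask.sum().item())
```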
Since sparse neural networks usually contain many zero weights, these unnecessary network connections can potentially be eliminated without degrading network performance. Therefore, well-designed sparse neural networks have the potential to significantly reduce FLOPs and computational resources. In this work, we propose a new automatic pruning method, sparse connectivity learning (SCL). Specifically, a weight is re-parameterized as an element-wise multiplication of a trainable weight variable and a binary mask. Thus, the network connectivity is fully described by the binary mask, which is modulated by a unit step function. We theoretically prove the fundamental principle of using a straight-through estimator (STE) for network pruning. This principle is that the proxy gradients of the STE should be positive, ensuring that the mask variables converge at their minima. After finding that the leaky ReLU, softplus, and identity STEs satisfy this principle, we propose to adopt the identity STE in SCL for discrete mask relaxation. We find that the mask gradients of different features are very unbalanced; hence, we propose to normalize the mask gradient of each feature to optimize mask variable training. In order to automatically train sparse masks, we include the total number of network connections as a regularization term in our objective function. As SCL does not require pruning criteria or hyperparameters defined for network layers by designers, the network is explored in a larger hypothesis space to achieve optimized sparse connectivity for the best performance. SCL overcomes the limitations of existing automatic pruning methods. Experimental results demonstrate that SCL can automatically learn and select important network connections for various baseline network structures. Deep learning models trained by SCL outperform SOTA human-designed and automatic pruning methods in terms of sparsity, accuracy, and FLOPs reduction.
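A hedged sketch of the masked-weight formulation described above: each weight is multiplied by a binary mask obtained from a unit step function on a trainable mask variable, with an identity straight-through estimator (STE) for the backward pass, and the number of active connections added as a regularizer. Per-feature mask-gradient normalization is omitted here, and the layer and initialization choices are illustrative.

```python
import torch
import torch.nn as nn

class StepIdentitySTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, m):
        return (m >= 0).float()          # unit step: 1 keeps the connection, 0 removes it

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out                  # identity STE: pass the gradient straight through

class SparseLinear(nn.Module):
    def __init__(self, in_f, out_f):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f) * 0.01)
        self.mask_var = nn.Parameter(torch.full((out_f, in_f), 0.1))  # start fully connected
        self.bias = nn.Parameter(torch.zeros(out_f))

    def forward(self, x):
        mask = StepIdentitySTE.apply(self.mask_var)
        return nn.functional.linear(x, self.weight * mask, self.bias)

    def num_connections(self):
        return StepIdentitySTE.apply(self.mask_var).sum()

layer = SparseLinear(32, 16)
x, y = torch.randn(8, 32), torch.randn(8, 16)
loss = ((layer(x) - y) ** 2).mean() + 1e-4 * layer.num_connections()  # sparsity regularizer
loss.backward()
print(int(layer.num_connections().item()), layer.mask_var.grad.abs().sum().item() > 0)
```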