We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2×, while keeping the accuracy within 1% of the original model.
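The core idea can be illustrated with a minimal sketch: reshape a 4D filter bank into a matrix, truncate its SVD, and split the layer into two cheaper linear maps. This is only a schematic of the general low-rank approach, not the authors' exact approximation scheme; the layer shape and rank below are illustrative.

```python
import numpy as np

def low_rank_factor(W, rank):
    """W: (out_ch, in_ch*kh*kw) filter matrix; returns A, B with W ~ A @ B."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # (out_ch, rank)
    B = Vt[:rank, :]             # (rank, in_ch*kh*kw)
    return A, B

W = np.random.randn(96, 3 * 11 * 11)   # e.g. a first conv layer, flattened
A, B = low_rank_factor(W, rank=8)
print("relative error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
# Applying B then A costs O(rank*(in+out)) per position instead of O(in*out).
```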
We present a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio while simultaneously achieving the desired overall compression. Our algorithm hinges on the idea of compressing each convolutional (or fully-connected) layer by slicing its channels into multiple groups and decomposing each group via low-rank decomposition. At the core of our algorithm is the derivation of layer-wise error bounds from the Eckart-Young-Mirsky theorem. We then leverage these bounds to frame the compression problem as an optimization problem where we wish to minimize the maximum compression error across layers, and propose an efficient algorithm towards its solution. Our experiments indicate that our method outperforms existing low-rank compression approaches across a wide range of networks and datasets. We believe that our results open up new avenues for future research into the global performance-size trade-offs of modern neural networks. Our code is available at https://github.com/lucaslie/torchprune.
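The Eckart-Young-Mirsky theorem states that the best rank-k approximation error in the Frobenius norm equals the square root of the sum of the discarded squared singular values; computing this per layer gives the kind of layer-wise error bound the optimization trades off. A minimal sketch with made-up layer shapes:

```python
import numpy as np

def rank_k_error(W, k):
    """Exact Frobenius error of the best rank-k approximation of W."""
    s = np.linalg.svd(W, compute_uv=False)
    return np.sqrt(np.sum(s[k:] ** 2))

layers = {"conv1": np.random.randn(64, 576), "fc": np.random.randn(10, 512)}
for name, W in layers.items():
    errs = {k: rank_k_error(W, k) for k in (1, 4, 16)}
    print(name, {k: round(e, 2) for k, e in errs.items()})
```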
We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values, resulting in 32× memory savings. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58× faster convolutional operations (in terms of the number of high-precision operations) and 32× memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, by more than 16% in top-1 accuracy. Our code is available at: http://allenai.org/plato/xnornet.
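The binary-weight approximation has a simple closed form: each filter W is replaced by αB with B = sign(W) and α = mean(|W|), the least-squares-optimal scale. A minimal sketch (the filter shape is illustrative):

```python
import numpy as np

def binarize(W):
    """Approximate W ~ alpha * sign(W); alpha = mean(|W|) minimizes ||W - alpha*B||."""
    alpha = np.abs(W).mean()
    B = np.sign(W)
    return alpha, B

W = np.random.randn(3, 3, 64)   # one filter bank, illustrative shape
alpha, B = binarize(W)
print("approx error:", np.linalg.norm(W - alpha * B) / np.linalg.norm(W))
```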
While machine learning is traditionally a resource-intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensure a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
The task of compressing pre-trained deep neural networks has attracted wide interest from the research community due to its great benefit of freeing practitioners from data access requirements. In this domain, low-rank approximation is a promising method, but existing solutions consider a restricted design space and fail to explore it efficiently, leading to severe accuracy degradation and limited achievable compression ratios. To address these limitations, this work proposes the SVD-NAS framework, which couples the domains of low-rank approximation and neural architecture search. SVD-NAS generalises and expands the design choices of previous works by introducing the low-rank architecture space, LR-space, a more fine-grained design space for low-rank approximation. Afterwards, this work proposes a gradient-based search for efficiently traversing the LR-space. This finer and more thorough exploration of the possible design choices results in improved accuracy as well as reductions in the parameters, FLOPs, and latency of the CNN model. Results demonstrate that SVD-NAS achieves 2.06-12.85pp higher accuracy on ImageNet than state-of-the-art methods under the data-limited problem setting. SVD-NAS is open-sourced at https://github.com/yu-zhewen/svd-nas.
Deep neural networks used for image classification often use convolutional filters to extract distinguishing features before passing them to a linear classifier. Most of the interpretability literature focuses on providing semantic meaning to convolutional filters in order to explain a model's reasoning process and confirm that it uses relevant information from the input domain. Fully connected layers can be studied by decomposing their weight matrices with the singular value decomposition, in effect studying the correlations between the rows of each matrix to discover the dynamics of the map. In this work we define a singular value decomposition for the weight tensor of a convolutional layer, which provides an analogous understanding of the correlations between filters, exposing the dynamics of the convolutional map. We validate our definition using recent results from random matrix theory. By applying the decomposition across the linear layers of an image classification network, we suggest a framework in which interpretability methods can be applied using hypergraphs to model class separation. Rather than looking to the activations to explain the network, we use the singular vectors with the largest corresponding singular values for each linear layer to identify the features most important to the network. We illustrate our method with examples and introduce the DeepDataProfiler library, the analysis tool used in this study.
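One natural way to realize such a decomposition is to unfold the kernel tensor along its output axis and take an ordinary matrix SVD; the paper's precise tensor definition may differ, so the sketch below, with a made-up kernel shape, is only indicative of the kind of spectrum analysis involved.

```python
import numpy as np

K = np.random.randn(128, 64, 3, 3)   # hypothetical conv kernel (out, in, kh, kw)
M = K.reshape(K.shape[0], -1)        # unfold along output axis: (128, 64*3*3)
s = np.linalg.svd(M, compute_uv=False)
print("top-5 singular values:", np.round(s[:5], 3))
# Number of singular values needed to capture 99% of the spectrum's energy:
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
print("effective rank (99% energy):", int(np.searchsorted(energy, 0.99)) + 1)
```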
In this paper, we present an approach for minimizing the computational complexity of trained convolutional neural networks (ConvNets). The idea is to approximate all elements of a given ConvNet, replacing the original convolutional filters and parameters (pooling and bias coefficients, and the activation function) with efficient low-complexity approximations. The low-complexity convolutional filters are obtained through a binary (zero-one) linear programming scheme based on the Frobenius norm over sets of dyadic rationals. The resulting matrices allow multiplication-free computations, requiring only addition and bit-shift operations. Such low-complexity structures pave the way for low-power, efficient hardware designs. We apply our approach to three use cases of different complexity: (i) a "light" but efficient ConvNet for face detection (with around 1000 parameters); (ii) another for hand-written digit classification (with more than 180000 parameters); and (iii) a significantly larger ConvNet, AlexNet, with ≈1.2 million parameters. We evaluate the overall performance on the respective tasks at different levels of approximation. In all considered applications, very low-complexity approximations are derived while maintaining an almost equal classification performance.
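A hedged sketch of the dyadic idea (not the paper's binary-programming search): snap each weight to the nearest dyadic rational k/2^b, so a multiply by w becomes an integer multiply by k plus a b-bit right shift, and k itself can be realized with adds and shifts. The precision b below is an assumption.

```python
import numpy as np

def to_dyadic(W, b):
    """Round each weight to the nearest k / 2^b; return (numerators, approximation)."""
    k = np.round(W * (1 << b)).astype(int)
    return k, k / float(1 << b)

W = np.random.uniform(-1, 1, size=(5, 5))
k, W_hat = to_dyadic(W, b=4)                      # denominators fixed at 2^4
print("max abs error:", np.abs(W - W_hat).max())  # bounded by 2^-5
```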
High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1× and 3.1× speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice the speedups obtained with non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce a 20-layer Deep Residual Network (ResNet) to 18 layers while improving the accuracy from 91.25% to 92.60%, which is still slightly higher than that of the original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by ∼1%. Our source code can be found at https://github.com/wenwei202/caffe/tree/scnn.
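The structure regularizer in this line of work is typically a group Lasso over whole structures such as filters; a minimal sketch of the per-filter penalty and its gradient follows, with an illustrative regularization weight (not the paper's setting).

```python
import numpy as np

def group_lasso(W, lam=1e-3, eps=1e-12):
    """W: (out_ch, in_ch, kh, kw); one group per output filter."""
    norms = np.sqrt((W ** 2).sum(axis=(1, 2, 3)) + eps)   # per-filter L2 norm
    penalty = lam * norms.sum()                            # add to training loss
    grad = lam * W / norms[:, None, None, None]            # d(penalty)/dW
    return penalty, grad

W = np.random.randn(16, 8, 3, 3)
penalty, grad = group_lasso(W)
print(round(penalty, 3), grad.shape)   # filters whose norm shrinks to 0 can be removed
```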
In this paper, we present a novel method for neural network weight compression. In our method, we store the weight tensors as sparse, quantized matrix factors, whose product is computed on the fly during inference to generate the target model's weights. We use a projected gradient descent method to find the quantized and sparse factorizations of the weight tensors. We show that this approach can be seen as a unification of weight SVD, vector quantization, and sparse PCA. Combined with end-to-end fine-tuning, our method exceeds, or is on par with, previous state-of-the-art methods in terms of the trade-off between accuracy and model size. It is applicable in the moderate compression regime, unlike vector quantization and extreme compression schemes.
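A hedged sketch of the projected-gradient idea: alternate gradient steps on the reconstruction error of W ≈ UV with projections that sparsify and quantize the factors. The sparsity level, quantization grid, step size, and iteration count below are assumptions, not the paper's settings.

```python
import numpy as np

def project(M, keep=0.5, levels=16):
    """Hard-threshold to the `keep` fraction of largest entries, then quantize uniformly."""
    thr = np.quantile(np.abs(M), 1 - keep)
    M = np.where(np.abs(M) >= thr, M, 0.0)
    scale = np.abs(M).max() / (levels // 2) + 1e-12
    return np.round(M / scale) * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
U, V = rng.standard_normal((64, 8)), rng.standard_normal((8, 64))
for _ in range(200):
    R = U @ V - W                                      # reconstruction residual
    U, V = U - 1e-3 * R @ V.T, V - 1e-3 * U.T @ R      # gradient steps
    U, V = project(U), project(V)                      # projection steps
print("rel. error:", np.linalg.norm(U @ V - W) / np.linalg.norm(W))
```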
Low-rankness plays an important role in traditional machine learning, but is not so popular in deep learning. Most previous low-rank network compression methods compress the networks by approximating pre-trained models and re-training. However, the optimal solution in the Euclidean space may be quite different from the one in the low-rank manifold. A well-pre-trained model is not a good initialization for the model with low-rank constraints. Thus, the performance of a low-rank compressed network degrades significantly. Compared to other network compression methods such as pruning, low-rank methods have attracted less attention in recent years. In this paper, we devise a new training method, low-rank projection with energy transfer (LRPET), that trains low-rank compressed networks from scratch and achieves competitive performance. First, we propose to alternately perform stochastic gradient descent training and projection onto the low-rank manifold. Compared to re-training on the compact model, this enables full utilization of model capacity since the solution space is relaxed back to the Euclidean space after projection. Second, the matrix energy (the sum of squares of singular values) reduction caused by projection is compensated by energy transfer. We uniformly transfer the energy of the pruned singular values to the remaining ones. We theoretically show that energy transfer eases the trend of gradient vanishing caused by projection. Third, we propose batch normalization (BN) rectification to cut off its effect on the optimal low-rank approximation of the weight matrix, which further improves the performance. Comprehensive experiments on CIFAR-10 and ImageNet have justified that our method is superior to other low-rank compression methods and also outperforms recent state-of-the-art pruning methods. Our code is available at https://github.com/BZQLin/LRPET.
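The projection-plus-energy-transfer step has a compact form: truncate the SVD to rank k, then uniformly rescale the kept singular values so the total energy (sum of squared singular values) is preserved. A minimal sketch, with an illustrative matrix size and rank:

```python
import numpy as np

def project_with_energy_transfer(W, k):
    """Rank-k projection of W, rescaled so total singular-value energy is unchanged."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    scale = np.sqrt((s ** 2).sum() / (s[:k] ** 2).sum())   # transfer pruned energy
    return (U[:, :k] * (s[:k] * scale)) @ Vt[:k, :]

W = np.random.randn(32, 32)
W_lr = project_with_energy_transfer(W, k=8)
print(np.allclose((np.linalg.svd(W_lr, compute_uv=False) ** 2).sum(),
                  (np.linalg.svd(W, compute_uv=False) ** 2).sum()))   # True
```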
In this paper, we compress convolutional neural network (CNN) weights post-training via transform quantization. Previous CNN quantization techniques tend to ignore the joint statistics of weights and activations, producing sub-optimal CNN performance at a given quantization bit-rate, or consider their joint statistics only during training and do not facilitate efficient compression of already-trained CNN models. We optimally transform (decorrelate) and quantize the weights post-training using a rate-distortion framework to improve compression at any given quantization bit-rate. Transform quantization unifies quantization and dimensionality reduction (decorrelation) techniques in a single framework to facilitate low bit-rate compression of CNNs and efficient inference in the transform domain. We first introduce a theory of rate and distortion for CNN quantization and pose optimum quantization as a rate-distortion optimization problem. We then show that this problem can be solved using optimal bit-depth allocation together with the optimal End-to-end Learned Transform (ELT) derived in this paper. Experiments demonstrate that transform quantization advances the state of the art in CNN compression in both retrained and non-retrained quantization scenarios. In particular, we find that transform quantization with retraining is able to compress CNN models such as AlexNet, ResNet, and DenseNet to very low bit-rates (1-2 bits).
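A hedged sketch of the transform-quantization pipeline, with a plain SVD-based decorrelating transform standing in for the paper's learned ELT: decorrelate the weights, quantize uniformly in the transform domain, and invert the transform. The fixed bit-depth is an assumption; the paper allocates bit-depths optimally.

```python
import numpy as np

def transform_quantize(W, bits=2):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    C = U * s                                       # coefficients in the basis Vt
    step = (C.max() - C.min()) / (2 ** bits - 1)    # uniform quantizer at this bit-rate
    C_hat = np.round(C / step) * step
    return C_hat @ Vt                               # inverse transform

W = np.random.randn(64, 64)
W_hat = transform_quantize(W, bits=2)
print("rel. distortion:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```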
To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that it is essential to prune neurons in the entire network jointly based on a unified goal: minimizing the reconstruction error of important responses in the "final response layer" (FRL), which is the second-to-last layer before classification, for a pruned network to retain its predictive power. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with the least importance, and then fine-tuned to retain its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.
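In the spirit of NISP, importance scores can be propagated backward through weight magnitudes, s_prev = |W|^T s_next; the paper's exact propagation rule and feature-ranking step may differ. A sketch with made-up layer sizes and an illustrative 30% pruning ratio:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((256, 512))   # layer mapping 512 -> 256 neurons
W2 = rng.standard_normal((128, 256))   # layer mapping 256 -> 128 (the FRL)

s_frl = rng.uniform(size=128)          # importance of final responses (from ranking)
s_mid = np.abs(W2).T @ s_frl           # propagate one layer back
s_in = np.abs(W1).T @ s_mid            # ... and one more

prune = np.argsort(s_in)[: int(0.3 * s_in.size)]   # least-important neurons
print("would prune", prune.size, "of", s_in.size, "neurons")
```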
Despite their tremendous successes, convolutional neural networks (CNNs) incur high computational/storage costs and are vulnerable to adversarial perturbations. Recent works on robust model compression address these challenges by combining model compression techniques with adversarial training. But these methods are unable to improve throughput (frames per second) on real-life hardware while simultaneously preserving robustness to adversarial perturbations. To overcome this problem, we propose the method of Generalized Depthwise-Separable (GDWS) convolution: an efficient, universal, post-training approximation of a standard 2D convolution. GDWS dramatically improves the throughput of a standard pre-trained network on real-life hardware while preserving its robustness. Lastly, GDWS scales to large problem sizes since it operates on pre-trained models and does not require any additional training. We establish the optimality of GDWS as a 2D convolution approximator and present exact algorithms for constructing optimal GDWS convolutions under complexity and error constraints. We demonstrate the effectiveness of GDWS via extensive experiments on the CIFAR-10, SVHN, and ImageNet datasets. Our code can be found at https://github.com/hsndbk4/gdws.
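The special case of a rank-1, one-filter-per-channel approximation conveys the flavor of such post-training approximations: each input channel's (C_out × k·k) kernel slice is SVD-approximated by one depthwise filter and one pointwise column. GDWS itself generalizes this with more filters per channel and optimal error/complexity allocation; the sketch below shows only the simplest case, with an illustrative kernel shape.

```python
import numpy as np

def dws_approx(K):
    """Rank-1 depthwise-separable approximation of a conv kernel K: (C_out, C_in, kh, kw)."""
    C_out, C_in, kh, kw = K.shape
    dw = np.zeros((C_in, kh, kw))   # one depthwise filter per input channel
    pw = np.zeros((C_out, C_in))    # pointwise (1x1) weights
    for c in range(C_in):
        U, s, Vt = np.linalg.svd(K[:, c].reshape(C_out, kh * kw), full_matrices=False)
        pw[:, c] = U[:, 0] * s[0]
        dw[c] = Vt[0].reshape(kh, kw)
    return dw, pw

K = np.random.randn(64, 32, 3, 3)
dw, pw = dws_approx(K)
K_hat = np.einsum('oc,chw->ochw', pw, dw)   # reconstruct for an error check
print("rel. error:", np.linalg.norm(K - K_hat) / np.linalg.norm(K))
```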
Convolutional neural networks (CNNs) have been widely applied. But as CNNs grow, the number of arithmetic operations and the memory footprint also increase. Furthermore, typical non-linear activation functions do not allow the operations encoded by consecutive layers to be associated, preventing the simplification of intermediate steps by combining them. We present a new activation function that allows associativity between the sequential layers of a CNN. Even though our activation function is non-linear, it can be represented by a sequence of linear operations in the conformal model of Euclidean geometry. In this domain, operations such as, but not limited to, convolution, average pooling, and dropout remain linear. We take advantage of associativity to combine all the "conformal layers" and make the cost of inference constant regardless of the depth of the network.
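The payoff of associativity can be seen already with plain linear maps: a stack of layers collapses offline into a single matrix, so inference cost no longer depends on depth. The conformal-model construction that makes the proposed non-linear activation admit such a linear representation is not shown; the sketch below assumes all layers are already linear.

```python
import numpy as np

rng = np.random.default_rng(2)
layers = [rng.standard_normal((32, 32)) for _ in range(10)]   # ten "layers"

M = np.linalg.multi_dot(layers)   # combine once, offline
x = rng.standard_normal(32)
y_fast = M @ x                    # constant cost at inference, regardless of depth

y_slow = x
for L in reversed(layers):        # equivalent layer-by-layer pass
    y_slow = L @ y_slow
print(np.allclose(y_fast, y_slow))   # True
```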
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software/hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.
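Network slimming is commonly realized by placing an L1 penalty on the batch-normalization scale factors during training and then pruning channels whose learned scales are small. A minimal PyTorch sketch; the penalty weight, pruning ratio, and randomly initialized scales are illustrative stand-ins for trained values.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(64)
nn.init.uniform_(bn.weight, 0.0, 1.0)        # stand-in for trained channel scales

l1_penalty = 1e-4 * bn.weight.abs().sum()    # add this term to the training loss

with torch.no_grad():
    gammas = bn.weight.abs()
    thresh = torch.quantile(gammas, 0.7)     # prune the bottom 70% of channels
    keep = (gammas > thresh).nonzero().flatten()
print(f"keeping {keep.numel()} of {gammas.numel()} channels")
```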
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35× to 49× without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9× to 13×; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35×, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49×, from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has 3× to 4× layerwise speedup and 3× to 7× better energy efficiency.
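The trained-quantization stage clusters the surviving weights so each weight stores only a small centroid index (e.g. 5 bits → 32 shared values); pruning and Huffman coding, the other two stages, are not shown. A minimal 1-D k-means sketch with illustrative sizes:

```python
import numpy as np

def kmeans_1d(w, k=32, iters=20):
    """Cluster scalar weights into k shared values; returns (centroids, indices)."""
    centroids = np.linspace(w.min(), w.max(), k)   # linear initialization
    for _ in range(iters):
        idx = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(idx == j):
                centroids[j] = w[idx == j].mean()
    return centroids, idx

w = np.random.randn(10_000)        # stand-in for the pruned, nonzero weights
centroids, idx = kmeans_1d(w)      # each idx entry fits in 5 bits
print("quantization MSE:", np.mean((w - centroids[idx]) ** 2))
```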
Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources. The redundancy in feature maps is an important characteristic of those successful CNNs, but has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that could fully reveal information underlying intrinsic features. The proposed Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed to stack Ghost modules, and then the lightweight GhostNet can be easily established. Experiments conducted on benchmarks demonstrate that the proposed Ghost module is an impressive alternative to convolution layers in baseline models, and our GhostNet can achieve higher recognition performance (e.g. 75.7% top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012 classification dataset. Code is available at https://github.com/huawei-noah/ghostnet.
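A hedged sketch of a Ghost module: a primary convolution produces the intrinsic feature maps, and a cheap depthwise convolution generates the ghost maps, which are concatenated. Kernel sizes and the ratio are illustrative, and the official implementation at the repository above differs in details (e.g. normalization and activation layers).

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, inp, oup, ratio=2, kernel=1, cheap_kernel=3):
        super().__init__()
        init_ch = oup // ratio   # intrinsic feature maps
        self.primary = nn.Conv2d(inp, init_ch, kernel, padding=kernel // 2)
        self.cheap = nn.Conv2d(init_ch, oup - init_ch, cheap_kernel,
                               padding=cheap_kernel // 2, groups=init_ch)  # depthwise

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)   # intrinsic + ghost maps

x = torch.randn(1, 16, 32, 32)
print(GhostModule(16, 32)(x).shape)   # torch.Size([1, 32, 32, 32])
```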
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed (either explicitly or implicitly) to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
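The basic randomized range finder from this line of work fits in a few lines: sample Y = AΩ with a Gaussian test matrix, orthonormalize to get Q, and take the SVD of the small projected matrix B = QᵀA. The oversampling parameter p = 10 is a standard choice; the input matrix below is exactly low-rank by construction, so the recovery is essentially exact.

```python
import numpy as np

def randomized_svd(A, k, p=10):
    """Approximate top-k SVD of A via a randomized range finder."""
    m, n = A.shape
    Omega = np.random.randn(n, k + p)    # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)       # orthonormal basis for the sampled range
    B = Q.T @ A                          # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

A = np.random.randn(2000, 30) @ np.random.randn(30, 300)   # rank-30 input
U, s, Vt = randomized_svd(A, k=20)
exact = np.linalg.svd(A, compute_uv=False)[:20]
print("max singular-value gap:", np.abs(s - exact).max())
```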
Deploying convolutional neural networks (CNNs) on mobile devices is difficult due to the limited memory and computation resources. We aim to design efficient neural networks for heterogeneous devices, including CPUs and GPUs, by exploiting the redundancy in feature maps, which has rarely been investigated in neural architecture design. For CPU-like devices, we propose a novel CPU-efficient Ghost (C-Ghost) module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that fully reveal the information underlying the intrinsic features. The proposed C-Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. C-Ghost bottlenecks are designed to stack C-Ghost modules, from which the lightweight C-GhostNet can easily be established. We further consider efficient networks for GPU devices. Without involving too many GPU-inefficient operations (e.g., depth-wise convolution) in a building stage, we propose to utilize stage-wise feature redundancy to formulate a GPU-efficient Ghost (G-Ghost) stage structure. The features in a stage are split into two parts, where the first part is processed by the original block with fewer output channels to generate the intrinsic features, and the other part is generated by cheap operations that exploit the stage-wise redundancy. Experiments conducted on benchmarks demonstrate the effectiveness of the proposed C-Ghost module and G-Ghost stage. C-GhostNet and G-GhostNet achieve the optimal trade-off between accuracy and latency for CPUs and GPUs, respectively. Code is available at https://github.com/huawei-noah/cv-backbones.