In this paper, we present a novel method for compressing neural network weights. In our approach, weight tensors are stored as sparse, quantized matrix factors whose product is computed on the fly during inference to generate the target model's weights. We use a projected gradient descent method to find quantized and sparse factorizations of the weight tensors. We show that this approach can be viewed as a unification of weight SVD, vector quantization, and sparse PCA. Combined with end-to-end fine-tuning, our method matches or exceeds previous state-of-the-art methods in terms of the trade-off between accuracy and model size. Unlike vector quantization and extreme-compression approaches, our method targets the moderate-compression regime.
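For illustration, the alternation described here (a gradient step on the matrix factors followed by a projection onto a sparse, quantized set) can be sketched as follows; the rank, learning rate, sparsity level, and codebook are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def project_sparse_quantized(M, sparsity, codebook):
    """Keep the largest-magnitude entries, then snap them to the nearest codeword."""
    out = np.zeros_like(M)
    k = int((1.0 - sparsity) * M.size)
    keep = np.argsort(np.abs(M), axis=None)[-k:]
    vals = M.flat[keep]
    idx = np.abs(vals[:, None] - codebook[None, :]).argmin(axis=1)
    out.flat[keep] = codebook[idx]
    return out

def factorize_pgd(W, rank=32, steps=200, lr=1e-2, sparsity=0.8,
                  codebook=np.linspace(-1, 1, 16)):
    """Projected gradient descent on ||W - U V||_F^2 with a sparse, quantized V."""
    m, n = W.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(m, rank))
    V = rng.normal(scale=0.1, size=(rank, n))
    for _ in range(steps):
        R = U @ V - W                      # reconstruction residual
        U -= lr * R @ V.T                  # gradient step on U
        V -= lr * U.T @ R                  # gradient step on V
        V = project_sparse_quantized(V, sparsity, codebook)  # projection step
    return U, V
```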
We propose a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio while simultaneously achieving the desired overall compression. Our algorithm hinges on the idea of compressing each convolutional (or fully connected) layer by slicing its channels into multiple groups and decomposing each group via low-rank decomposition. At the core of our algorithm is the derivation of layer-wise error bounds from the Eckart-Young-Mirsky theorem. We then leverage these bounds to frame the compression problem as an optimization problem in which we wish to minimize the maximum compression error across layers, and we propose an efficient algorithm towards its solution. Our experiments indicate that our method outperforms existing low-rank compression approaches across a wide range of networks and datasets. We believe that our results open up new avenues for future research into the global performance-size trade-offs of modern neural networks. Our code is available at https://github.com/lucaslie/torchprune.
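The layer-wise error bounds rest on the Eckart-Young-Mirsky theorem, which gives the Frobenius error of the best rank-k approximation in closed form; a minimal sketch of that quantity (not the paper's per-group optimization) is:

```python
import numpy as np

def rank_k_error(W, k):
    """Eckart-Young-Mirsky: the best rank-k Frobenius error equals the norm of
    the discarded singular values."""
    s = np.linalg.svd(W, compute_uv=False)
    return np.sqrt(np.sum(s[k:] ** 2))

# Toy check on a random "layer": truncated SVD attains exactly this error.
W = np.random.default_rng(0).normal(size=(64, 128))
U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 16
W_k = (U[:, :k] * s[:k]) @ Vt[:k]
assert np.isclose(np.linalg.norm(W - W_k), rank_k_error(W, k))
```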
While machine learning is traditionally a resource-intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensuring a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
The record-breaking performance of deep neural networks (DNNs) comes with heavy parameterization, leading to external dynamic random-access memory (DRAM) for storage. The prohibitive energy of DRAM accesses makes it non-trivial to deploy DNNs on resource-constrained devices, calling for minimizing weight and data movement to improve energy efficiency. We present SmartDeal (SD), an algorithmic framework that trades higher-cost memory storage/access for lower-cost computation in order to aggressively boost storage and energy efficiency in both inference and training. The core of SD is a novel weight decomposition with structural constraints, carefully crafted to unleash the hardware-efficiency potential. Specifically, we decompose each weight tensor as the product of a small basis matrix and a large, structurally sparse coefficient matrix whose non-zeros are quantized to powers of 2. The resulting sparse and quantized DNNs enjoy greatly reduced energy for data movement and weight storage, incurring only minimal overhead to recover the original weights thanks to sparse bit-operations and cost-favorable computations. Beyond inference, we take a further leap to embrace energy-efficient training, introducing innovative techniques to address the unique roadblocks that arise in training while preserving the SD structure. We also design a dedicated hardware accelerator that fully utilizes the SD structure to improve real energy efficiency and latency. We conduct experiments on multiple tasks, models, and datasets in different settings. The results show that: 1) applied to inference, SD achieves up to 2.44x energy efficiency, as evaluated via real hardware implementations; 2) applied to training, SD reduces storage and training energy by 10.56x and 4.48x, respectively, with negligible accuracy loss compared with state-of-the-art training baselines. Our source code is available online.
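A rough sketch of the kind of decomposition described here, fitting W ≈ B · C with a small basis B and a sparse coefficient matrix C whose non-zeros are snapped to powers of two, might look like the following; the alternating least-squares fit and all hyperparameters are illustrative assumptions, not the published SmartDeal algorithm:

```python
import numpy as np

def to_power_of_two(x, min_exp=-8, max_exp=0):
    """Snap values to signed powers of two within an exponent range."""
    sign = np.sign(x)
    mag = np.clip(np.abs(x), 2.0 ** min_exp, 2.0 ** max_exp)
    exp = np.clip(np.round(np.log2(mag)), min_exp, max_exp)
    return sign * (2.0 ** exp)

def smartdeal_like_decompose(W, basis_dim=8, sparsity=0.7, iters=5):
    """Illustrative alternating fit of W ~= B @ C with a small basis B and a
    sparse, power-of-two coefficient matrix C (not the published algorithm)."""
    rng = np.random.default_rng(0)
    B = rng.normal(scale=0.1, size=(W.shape[0], basis_dim))
    for _ in range(iters):
        C, *_ = np.linalg.lstsq(B, W, rcond=None)        # fit coefficients
        thr = np.quantile(np.abs(C), sparsity)           # prune small entries
        C = np.where(np.abs(C) >= thr, to_power_of_two(C), 0.0)
        B, *_ = np.linalg.lstsq(C.T, W.T, rcond=None)    # refit basis
        B = B.T
    return B, C
```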
Low-rankness plays an important role in traditional machine learning, but is not so popular in deep learning. Most previous low-rank network compression methods compress networks by approximating pre-trained models and re-training. However, the optimal solution in the Euclidean space may be quite different from the one on the low-rank manifold. A well-pre-trained model is not a good initialization for a model with low-rank constraints. Thus, the performance of a low-rank compressed network degrades significantly. Compared to other network compression methods such as pruning, low-rank methods have attracted less attention in recent years. In this paper, we devise a new training method, low-rank projection with energy transfer (LRPET), that trains low-rank compressed networks from scratch and achieves competitive performance. First, we propose to alternately perform stochastic gradient descent training and projection onto the low-rank manifold. Compared to re-training on the compact model, this enables full utilization of model capacity, since the solution space is relaxed back to Euclidean space after projection. Second, the matrix energy (the sum of squares of singular values) reduction caused by projection is compensated by energy transfer: we uniformly transfer the energy of the pruned singular values to the remaining ones. We theoretically show that energy transfer eases the trend of gradient vanishing caused by projection. Third, we propose batch normalization (BN) rectification to cut off its effect on the optimal low-rank approximation of the weight matrix, which further improves performance. Comprehensive experiments on CIFAR-10 and ImageNet show that our method is superior to other low-rank compression methods and also outperforms recent state-of-the-art pruning methods. Our code is available at https://github.com/BZQLin/LRPET.
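One plausible reading of the projection-plus-energy-transfer step, in which the weight matrix is truncated to a target rank and the retained singular values are rescaled so the total energy is preserved, is sketched below; the exact transfer rule in LRPET may differ:

```python
import torch

def low_rank_project_with_energy_transfer(W, rank):
    """Project a weight matrix onto rank `rank` and rescale the kept singular
    values so the total energy (sum of squared singular values) is preserved.
    Sketch of the projection/energy-transfer step only; training alternates
    this with ordinary SGD updates."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    kept = S[:rank]
    total = (S ** 2).sum()
    kept_energy = (kept ** 2).sum()
    scale = torch.sqrt(total / kept_energy)      # multiplicative energy transfer
    return (U[:, :rank] * (kept * scale)) @ Vh[:rank]
```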
Model quantization enables the deployment of deep neural networks under resource-constrained devices. Vector quantization aims at reducing the model size by indexing model weights with full-precision embeddings, i.e., codewords, while the index needs to be restored to 32-bit during computation. Binary and other low-precision quantization methods can reduce the model size up to 32$\times$, however, at the cost of a considerable accuracy drop. In this paper, we propose an efficient framework for ternary quantization to produce smaller and more accurate compressed models. By integrating hyperspherical learning, pruning and reinitialization, our proposed Hyperspherical Quantization (HQ) method reduces the cosine distance between the full-precision and ternary weights, thus reducing the bias of the straight-through gradient estimator during ternary quantization. Compared with existing work at similar compression levels ($\sim$30$\times$, $\sim$40$\times$), our method significantly improves the test accuracy and reduces the model size.
In recent years, large pre-trained Transformer networks have shown dramatic improvements on many natural language understanding tasks. However, the enormous size of these models poses significant challenges for fine-tuning and online deployment due to latency and cost constraints. New hardware that supports N:M semi-structured sparsity and low-precision integer computation is a promising solution for improving DNN model efficiency. However, few studies have systematically investigated to what extent pre-trained Transformer networks benefit from the combination of these techniques, or how best to compress each component of the Transformer. We propose a flexible compression framework, NxMiFormer, that performs simultaneous sparsification and quantization using ADMM and STE-based QAT. Furthermore, we present an inexpensive, heuristic-driven search algorithm that identifies promising heterogeneous compression configurations satisfying a compression-ratio constraint. When evaluated on the GLUE suite of NLU benchmarks, our approach achieves up to 93% compression of the encoders of a BERT model while retaining 98.2% of the original model's accuracy and fully leveraging the hardware's capabilities. Heterogeneous configurations found by the search heuristic maintain 99.5% of the baseline accuracy while still compressing the model by 87.5%.
Inference time, model size, and accuracy are three key factors in deep model compression. Most of the existing work addresses these three key factors separately as it is difficult to optimize them all at the same time. For example, low-bit quantization aims at obtaining a faster model; weight sharing quantization aims at improving compression ratio and accuracy; and mixed-precision quantization aims at balancing accuracy and inference time. To simultaneously optimize bit-width, model size, and accuracy, we propose pruning ternary quantization (PTQ): a simple, effective, symmetric ternary quantization method. We integrate L2 normalization, pruning, and the weight decay term to reduce the weight discrepancy in the gradient estimator during quantization, thus producing highly compressed ternary weights. Our method brings the highest test accuracy and the highest compression ratio. For example, it produces a 939kb (49$\times$) 2bit ternary ResNet-18 model with only 4\% accuracy drop on the ImageNet dataset. It compresses 170MB Mask R-CNN to 5MB (34$\times$) with only 2.8\% average precision drop. Our method is verified on image classification, object detection/segmentation tasks with different network structures such as ResNet-18, ResNet-50, and MobileNetV2.
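A minimal sketch of symmetric ternary quantization in this spirit (prune the smallest-magnitude weights, map the rest to {-a, +a} with a least-squares scale) is shown below; the threshold and scale choices are illustrative rather than the paper's exact recipe:

```python
import torch

def ternary_quantize(w, sparsity=0.5):
    """Symmetric ternary quantization sketch: prune the smallest-magnitude
    weights to zero and map the rest to {-a, +a}, with a chosen to minimize
    the L2 error on the surviving weights."""
    thr = torch.quantile(w.abs().flatten(), sparsity)
    mask = (w.abs() > thr).float()
    a = (w.abs() * mask).sum() / mask.sum().clamp(min=1)  # optimal scale for a fixed mask
    return a * torch.sign(w) * mask
```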
In this paper, we compress convolutional neural network (CNN) weights via transform quantization. Previous CNN quantization techniques tend to ignore the joint statistics of weights and activations, producing sub-optimal CNN performance at a given quantization bit-rate, or consider their joint statistics only during training and do not facilitate efficient compression of already-trained CNN models. We optimally transform (decorrelate) and quantize the weights post-training using a rate-distortion framework to improve compression at any given quantization bit-rate. Transform quantization unifies quantization and dimensionality-reduction (decorrelation) techniques in a single framework to facilitate low bit-rate compression of CNNs and efficient inference in the transform domain. We first introduce a theory of rate and distortion for CNN quantization and pose optimal quantization as a rate-distortion optimization problem. We then show that this problem can be solved with an optimal bit-depth allocation using the optimal End-to-end Learned Transform (ELT) derived in this paper. Experiments demonstrate that transform quantization advances the state of the art in CNN compression in both retrained and non-retrained quantization scenarios. In particular, we find that transform quantization with retraining is able to compress CNN models such as AlexNet, ResNet, and DenseNet to very low bit-rates (1-2 bits).
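To make the decorrelate-then-quantize idea concrete, the sketch below applies a simple KLT (eigenvectors of the row covariance) and a variance-based bit allocation before uniform quantization; the actual method instead learns an end-to-end transform (ELT) and solves a rate-distortion problem, so treat this purely as an assumption-laden illustration:

```python
import numpy as np

def uniform_quantize(x, bits):
    if bits <= 0:
        return np.zeros_like(x)
    lo, hi = x.min(), x.max()
    if hi == lo:
        return np.full_like(x, lo)
    step = (hi - lo) / (2 ** bits - 1)
    return lo + np.round((x - lo) / step) * step

def transform_quantize_layer(W, avg_bits=2.0):
    """Illustrative transform-domain quantization: decorrelate the rows of W
    with a KLT, allocate more bits to higher-variance transform coefficients,
    and quantize uniformly."""
    C = np.cov(W, rowvar=True)                     # row covariance
    _, T = np.linalg.eigh(C)                       # KLT basis (columns)
    Z = T.T @ W                                    # transform-domain coefficients
    var = Z.var(axis=1) + 1e-12
    bits = np.round(avg_bits + 0.5 * np.log2(var / np.exp(np.mean(np.log(var)))))
    bits = np.clip(bits, 0, 8).astype(int)         # simple variance-based allocation
    Zq = np.stack([uniform_quantize(z, b) for z, b in zip(Z, bits)])
    return T @ Zq                                  # reconstruct the weights
```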
Mixed-precision deep neural networks achieve the energy efficiency and throughput needed for hardware deployment, particularly when resources are limited, without sacrificing accuracy. However, the optimal per-layer bit precision that preserves accuracy is not easy to find, especially given the large number of models, datasets, and quantization techniques, which creates an enormous search space. To tackle this difficulty, a body of literature has emerged recently, and several frameworks achieving promising accuracy results have been proposed. In this article, we first summarize the quantization techniques commonly used in the literature. We then present a thorough survey of mixed-precision frameworks, categorized by their optimization techniques (such as reinforcement learning) and quantization techniques (such as deterministic rounding). Furthermore, we discuss the advantages and shortcomings of each framework and present a side-by-side comparison. We conclude with guidelines for future mixed-precision frameworks.
We consider the problem of model compression for deep neural networks (DNNs) in the challenging post-training setting, in which we are given an accurate trained model and must compress it without any retraining, based only on a small amount of calibration input data. This problem has become popular in view of emerging software and hardware support for executing models compressed via pruning and/or quantization with speedup, and well-performing solutions have been proposed independently for each of the two compression approaches. In this paper, we introduce a new compression framework that covers both weight pruning and quantization in a unified setting, is time- and space-efficient, and considerably improves upon the practical performance of existing post-training methods. At the technical level, our approach is based on the first exact and efficient realization of the classical Optimal Brain Surgeon (OBS) framework of [LeCun, Denker, and Solla, 1990] at the scale of modern DNNs, which we further extend to cover weight quantization. This is enabled by a series of algorithmic developments that may be of independent interest. From the practical perspective, our experimental results show that the framework improves significantly upon the compression-accuracy trade-offs of existing post-training methods, and that it even enables accurate joint application of pruning and quantization in the post-training setting.
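The classical OBS step that this framework realizes exactly at scale removes the weight with the smallest saliency and compensates the remaining weights using the inverse Hessian; a textbook sketch of a single such step is shown below (the paper's contribution lies in making this exact and efficient for modern DNNs and extending it to quantization):

```python
import numpy as np

def obs_prune_one(w, H_inv):
    """One Optimal Brain Surgeon step: pick the weight with the smallest
    saliency w_q^2 / (2 [H^-1]_qq), zero it, and compensate the remaining
    weights with delta_w = -(w_q / [H^-1]_qq) * H^-1 e_q."""
    diag = np.diag(H_inv)
    saliency = w ** 2 / (2.0 * diag)
    q = int(np.argmin(saliency))
    w = w - (w[q] / H_inv[q, q]) * H_inv[:, q]   # compensating update
    w[q] = 0.0                                    # enforce exact removal
    return w, q
```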
We show for the first time that large-scale generative pretrained transformer (GPT) family models can be pruned to at least 50% sparsity in one-shot, without any retraining, at minimal loss of accuracy. This is achieved via a new pruning method called SparseGPT, specifically designed to work efficiently and accurately on massive GPT-family models. When executing SparseGPT on the largest available open-source models, OPT-175B and BLOOM-176B, we can reach 60% sparsity with negligible increase in perplexity: remarkably, more than 100 billion weights from these models can be ignored at inference time. SparseGPT generalizes to semi-structured (2:4 and 4:8) patterns, and is compatible with weight quantization approaches.
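The 2:4 semi-structured pattern mentioned above keeps the two largest-magnitude weights in every group of four; the sketch below shows only that target pattern, whereas SparseGPT itself selects and compensates the pruned weights using approximate second-order information:

```python
import torch

def mask_2_of_4(W):
    """Keep the two largest-magnitude weights in every contiguous group of four
    (the 2:4 semi-structured pattern). Assumes the number of elements is
    divisible by 4; illustrative of the pattern only."""
    groups = W.reshape(-1, 4)
    idx = groups.abs().topk(2, dim=1).indices
    mask = torch.zeros_like(groups)
    mask.scatter_(1, idx, 1.0)
    return (groups * mask).reshape(W.shape)
```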
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three-stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35× to 49× without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine-tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9× to 13×; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35×, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49×, from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has a 3× to 4× layerwise speedup and 3× to 7× better energy efficiency.
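The trained-quantization (weight-sharing) stage can be sketched as plain k-means over the surviving weights with a linearly initialized codebook; the full pipeline additionally retrains the centroids with accumulated gradients and Huffman-codes the indices:

```python
import numpy as np

def kmeans_weight_sharing(w, bits=5, iters=20):
    """Weight sharing: cluster the surviving weights of a layer (1-D array)
    into 2^bits centroids and store per-weight indices plus a small codebook."""
    k = 2 ** bits
    centroids = np.linspace(w.min(), w.max(), k)     # linear initialization
    for _ in range(iters):
        assign = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = w[assign == j].mean()
    return centroids[assign], assign, centroids
```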
We propose a novel structured pruning algorithm for neural networks: the iterative Sparse Structured Pruning algorithm, dubbed i-SpaSP. Inspired by ideas from sparse signal recovery, i-SpaSP operates by iteratively identifying a larger set of important parameter groups (e.g., filters or neurons) within a network that contribute most to the residual between the pruned and dense network outputs, then thresholding these groups down to a smaller, pre-defined pruning ratio. For both two-layer and multi-layer network architectures with ReLU activations, we show that the error induced by pruning decays polynomially, where the degree of this polynomial becomes arbitrarily large depending on the sparsity of the dense network's hidden representations. In our experiments, i-SpaSP is evaluated across a variety of datasets (i.e., MNIST and ImageNet) and architectures (i.e., feed-forward networks, ResNet34, and MobileNetV2), where it is shown to discover high-performing sub-networks and to improve upon the pruning efficiency of provable baseline methodologies by several orders of magnitude. Put simply, i-SpaSP is easy to implement via automatic differentiation, achieves strong empirical results, comes with theoretical convergence guarantees, and is efficient, distinguishing itself as one of the few computationally efficient, practical, and provable pruning algorithms.
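An illustrative version of the group-selection idea, scoring each hidden neuron by how strongly its rank-one contribution aligns with the residual between the dense and pruned outputs, is sketched below; it is a simplification under assumed inputs, not the exact i-SpaSP procedure or its provable variant:

```python
import numpy as np

def select_important_neurons(H, W_out, keep_ratio=0.5, rounds=3):
    """Iteratively keep the hidden neurons (columns of H, rows of W_out) whose
    contributions H[:, j] W_out[j, :] best explain the residual between the
    dense output H @ W_out and the current pruned output."""
    n = H.shape[1]
    k = max(1, int(keep_ratio * n))
    keep = np.zeros(n, dtype=bool)
    dense_out = H @ W_out
    for _ in range(rounds):
        pruned_out = H[:, keep] @ W_out[keep] if keep.any() else np.zeros_like(dense_out)
        residual = dense_out - pruned_out
        # alignment of each neuron's rank-one contribution with the residual
        scores = np.abs(np.einsum("ij,jk,ik->j", H, W_out, residual))
        keep = np.zeros(n, dtype=bool)
        keep[np.argsort(scores)[-k:]] = True
    return keep
```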
The recent focus on the efficiency of deep neural networks (DNNs) has led to significant work on model compression approaches, of which weight pruning is one of the most popular. At the same time, there is rapidly growing computational support for efficiently executing the unstructured models obtained via pruning. However, most existing pruning methods minimize only the number of remaining weights, i.e. the size of the model, rather than optimizing for inference time. We address this gap by introducing SPDY, a new compression method that automatically determines layer-wise sparsity targets achieving a desired inference speedup on a given system, while minimizing accuracy loss. SPDY is composed of two new techniques: the first is an efficient dynamic programming algorithm for solving the speedup-constrained layer-wise compression problem given a set of layer-wise sensitivity scores; the second is a local search procedure for determining accurate layer-wise sensitivity scores. Experiments across popular vision and language models show that SPDY guarantees the speedup while recovering higher accuracy relative to existing strategies, in both one-shot and gradual pruning scenarios, and is compatible with most existing pruning approaches. We also extend our approach to the recently proposed task of pruning with very little data, where we achieve the best known accuracy recovery when pruning to the GPU-supported 2:4 sparsity pattern.
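The speedup-constrained layer-wise assignment can be illustrated with a small dynamic program over discretized runtime, given per-layer sensitivity scores and timings for each candidate compression level; the discretization and inputs below are assumptions for illustration, not SPDY's exact algorithm:

```python
import numpy as np

def dp_layer_profiles(errors, times, time_budget, resolution=100):
    """Pick one compression level per layer to minimize the summed sensitivity
    scores subject to a total (discretized) runtime budget. `errors[l][c]` and
    `times[l][c]` are the score and measured time of layer l at level c."""
    L, T = len(errors), resolution
    scale = T / time_budget
    INF = float("inf")
    best = np.full(T + 1, INF); best[0] = 0.0
    choice = [[None] * (T + 1) for _ in range(L)]
    for l in range(L):
        new = np.full(T + 1, INF)
        for c, (e, t) in enumerate(zip(errors[l], times[l])):
            dt = int(np.ceil(t * scale))
            for used in range(T + 1 - dt):
                if best[used] + e < new[used + dt]:
                    new[used + dt] = best[used] + e
                    choice[l][used + dt] = (c, used)
        best = new
    # backtrack from the lowest-error feasible state
    end = int(np.argmin(best))
    levels = []
    for l in reversed(range(L)):
        c, end = choice[l][end]
        levels.append(c)
    return list(reversed(levels))
```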
Deep learning is ubiquitous in our daily life, including self-driving cars, virtual assistants, social network services, healthcare services, face recognition, and more, yet deep neural networks demand substantial computational resources during training and inference. The machine learning community has mainly focused on model-level optimizations such as architectural compression of deep learning models, while the systems community has focused on implementation-level optimizations. In between, various arithmetic-level optimization techniques have been proposed in the arithmetic community. This article provides a survey of resource-efficient deep learning techniques at the model, arithmetic, and implementation levels, and identifies the research gaps for resource-efficient deep learning across these three levels. Based on our definition of resource-efficiency metrics, our survey clarifies the influence of lower-level techniques and discusses future trends in resource-efficient deep learning research.
In this paper, we develop a new algorithm, Annealed Skewed SGD - AskewSGD - for training deep neural networks (DNNs) with quantized weights. First, we formulate the training of quantized neural networks (QNNs) as a smoothed sequence of interval-constrained optimization problems. Then, we propose a new first-order stochastic method, AskewSGD, to solve each constrained optimization subproblem. Unlike algorithms with active sets and feasible directions, AskewSGD avoids projections or optimization under the entire feasible set and allows iterates that are infeasible. The numerical complexity of AskewSGD is comparable to existing approaches for training QNNs, such as the straight-through gradient estimator used in BinaryConnect, or other state of the art methods (ProxQuant, LUQ). We establish convergence guarantees for AskewSGD (under general assumptions for the objective function). Experimental results show that the AskewSGD algorithm performs better than or on par with state of the art methods in classical benchmarks.
Binary neural networks (BNNs) have demonstrated their ability to solve complex tasks with accuracy comparable to that of full-precision deep neural networks (DNNs), while also reducing computational power and storage requirements and increasing processing speed. These properties make them an attractive alternative for developing and deploying DNN-based applications on Internet-of-Things (IoT) devices. Despite recent improvements, however, their fixed compression factor may prove insufficient for some devices with very limited resources. In this work, we propose sparse binary neural networks (SBNNs), a novel model and training scheme that introduces sparsity into BNNs together with a new quantization function for binarizing the network's weights. The proposed SBNN achieves high compression factors and reduces the number of operations and parameters at inference time. We also provide tools to assist SBNN design while respecting hardware resource constraints. We study the generalization properties of our method for different compression factors through a set of experiments on linear and convolutional networks over three datasets. Our experiments confirm that SBNNs can achieve high compression rates without compromising generalization, while further reducing the operations of BNNs, making SBNNs a viable option for deploying DNNs on cheap, low-cost, resource-limited IoT devices and sensors.
We study whether SGD-based optimization of deep neural networks (DNNs) can be adapted to produce models that are both highly accurate and easily compressible. We propose a new compression-aware minimizer, called CrAM, which modifies the SGD training iteration in a principled way to produce models whose local loss behavior is stable under compression operations such as weight pruning or quantization. Experimental results on standard image classification tasks show that CrAM produces dense models that are more accurate than standard SGD-type baselines, yet are surprisingly stable under weight pruning: for instance, for ResNet50 on ImageNet, CrAM-trained models can lose up to 70% of their weights in one shot with only minor accuracy loss.
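One plausible reading of such a compression-aware step is to evaluate the gradient at a temporarily pruned copy of the weights and then apply it to the dense weights; the sketch below follows that reading and is not the exact CrAM update rule (the batch layout and the magnitude-pruning proxy are assumptions):

```python
import torch

def compression_aware_step(model, loss_fn, batch, optimizer, sparsity=0.5):
    """Sketch: take the gradient at a magnitude-pruned copy of the weights and
    apply it to the dense weights, so the loss stays stable under pruning."""
    # 1) prune in place, remembering the original dense weights
    originals = []
    for p in model.parameters():
        originals.append(p.detach().clone())
        k = int(sparsity * p.numel())
        if k > 0:
            thr = p.detach().abs().flatten().kthvalue(k).values
            p.data[p.detach().abs() <= thr] = 0.0
    # 2) gradient at the compressed point (batch assumed to be (inputs, targets))
    optimizer.zero_grad()
    loss = loss_fn(model(batch[0]), batch[1])
    loss.backward()
    # 3) restore dense weights and take the step with the compressed-point gradient
    for p, w in zip(model.parameters(), originals):
        p.data.copy_(w)
    optimizer.step()
    return loss.item()
```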
Data clipping is crucial for reducing noise in quantization operations and improving the accuracy of quantization-aware training (QAT). Current practice relies on heuristics to set the clipping-threshold scalars and cannot be shown to be optimal. We propose Optimally Clipped Tensors And Vectors (OCTAV), a recursive algorithm for determining MSE-optimal clipping scalars. Derived from the fast Newton-Raphson method, OCTAV finds optimal clipping scalars on the fly, for every tensor, at every iteration of the QAT routine. Thus, the QAT algorithm is formulated with provably minimum quantization noise at each step. In addition, we reveal limitations of common gradient estimation techniques in QAT and propose magnitude-aware differentiation to further improve accuracy. Experimentally, OCTAV-enabled QAT achieves state-of-the-art accuracy on multiple tasks, including training-from-scratch and retraining ResNets and MobileNets on ImageNet, as well as fine-tuning BERT models, where OCTAV-enabled QAT consistently preserves accuracy at low precision (4-to-6-bit). Our results require no modifications to the baseline training recipe beyond the insertion of quantization operations where appropriate.
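Assuming the usual MSE model (uniform quantization noise inside the clipping range, squared clipping error outside it), the MSE-optimal clipping scalar satisfies a simple fixed point that can be iterated as below; the exact constants and the Newton-Raphson formulation used by OCTAV may differ:

```python
import numpy as np

def mse_optimal_clip(x, bits=4, iters=20):
    """Fixed-point iteration for an MSE-optimal clipping scalar, assuming
    quantization noise ~ s^2 * 4^-bits / 3 inside the clipping range and
    squared clipping error outside it."""
    a = np.abs(x.ravel())
    s = a.mean() + 1e-12                      # any positive initialization
    q = 4.0 ** (-bits) / 3.0
    for _ in range(iters):
        inside = a <= s
        num = a[~inside].sum()
        den = q * inside.sum() + (~inside).sum()
        s = num / max(den, 1e-12)
    return s
```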