Despite their tremendous successes, convolutional neural networks (CNNs) incur high computational/storage costs and are vulnerable to adversarial perturbations. Recent works on robust model compression address these challenges by combining model compression techniques with adversarial training. However, these methods are unable to improve throughput (frames per second) on real-life hardware while simultaneously preserving robustness to adversarial perturbations. To overcome this problem, we propose the method of Generalized Depthwise-Separable (GDWS) convolution, an efficient, universal, post-training approximation of a standard 2D convolution. GDWS dramatically improves the throughput of standard pre-trained networks on real-life hardware while preserving their robustness. Moreover, GDWS scales to large problem sizes since it operates on pre-trained models and requires no additional training. We establish the optimality of GDWS as a 2D convolution approximator and propose exact algorithms for constructing optimal GDWS convolutions under complexity and error constraints. We demonstrate the effectiveness of GDWS via extensive experiments on the CIFAR-10, SVHN, and ImageNet datasets. Our code can be found at https://github.com/hsndbk4/gdws.
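As a rough illustration of such a post-training depthwise-separable approximation (not the exact GDWS construction, whose optimal algorithms are given in the paper), the sketch below factorizes a standard convolution kernel into a depthwise plus pointwise pair via per-input-channel truncated SVD; the fixed per-channel rank is a simplifying assumption.

```python
import torch

def dws_approximate(W, rank=1):
    """Approximate a conv weight W (Cout, Cin, k, k) with a depthwise kernel
    D (Cin*rank, 1, k, k) and a pointwise kernel P (Cout, Cin*rank, 1, 1) via
    per-input-channel truncated SVD. Illustrative sketch only; requires
    rank <= min(Cout, k*k)."""
    Cout, Cin, k, _ = W.shape
    D = torch.zeros(Cin * rank, 1, k, k)
    P = torch.zeros(Cout, Cin * rank, 1, 1)
    for c in range(Cin):
        M = W[:, c].reshape(Cout, k * k)
        U, S, Vh = torch.linalg.svd(M, full_matrices=False)
        for r in range(rank):
            D[c * rank + r, 0] = (S[r] * Vh[r]).reshape(k, k)  # spatial (depthwise) filter
            P[:, c * rank + r, 0, 0] = U[:, r]                 # 1x1 channel mixing
    return D, P  # apply with F.conv2d(x, D, groups=Cin) followed by F.conv2d(., P)
```

When the per-channel rank equals min(Cout, k*k), the depthwise-then-pointwise pair reproduces the original layer exactly; smaller ranks trade error for throughput.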
Despite their tremendous successes across a variety of tasks, the vulnerability of deep neural networks to imperceptible adversarial perturbations hinders their deployment in the real world. Recently, works on randomized ensembles have reported significant improvements in adversarial robustness over standard adversarially trained (AT) models with minimal computational overhead, making them a promising solution for safety-critical, resource-limited applications. However, this impressive performance raises the question: are these robustness gains genuinely provided by randomized ensembles? In this work, we address this question both theoretically and empirically. Theoretically, we first establish that commonly adopted robustness evaluation methods such as adaptive PGD provide a false sense of security in this setting. We then propose a theoretically sound and efficient adversarial attack algorithm (ARC) that can compromise randomized ensembles even in cases where adaptive PGD fails to do so. We conduct comprehensive experiments across a variety of network architectures, training schemes, datasets, and norms to support our claims, and empirically demonstrate that randomized ensembles are in fact more vulnerable to $\ell_p$-bounded adversarial perturbations than standard AT models. Our code can be found at https://github.com/hsndbk4/arc.
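Purely for context on the evaluation setting, the sketch below shows the kind of adaptive baseline the abstract contrasts against: a PGD variant that ascends the expected loss over the ensemble's sampling distribution. This is a generic expectation-over-sampling stand-in, not the ARC algorithm; the step sizes and budget are assumptions.

```python
import torch

def adaptive_pgd_on_randomized_ensemble(models, probs, loss_fn, x, y,
                                        eps=8/255, alpha=2/255, steps=10):
    """Generic adaptive PGD against a randomized ensemble (l_inf threat model):
    average gradients over the ensemble's sampling distribution. Sketch only."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = sum(p * loss_fn(m(x_adv), y) for m, p in zip(models, probs))
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)   # project onto the eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)            # keep a valid image
    return x_adv
```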
Low-rankness plays an important role in traditional machine learning, but is not so popular in deep learning. Most previous low-rank network compression methods compress the networks by approximating pre-trained models and re-training. However, the optimal solution in the Euclidean space may be quite different from the one in the low-rank manifold. A well-pre-trained model is not a good initialization for the model with low-rank constraints. Thus, the performance of a low-rank compressed network degrades significantly. Compared to other network compression methods such as pruning, low-rank methods have attracted less attention in recent years. In this paper, we devise a new training method, low-rank projection with energy transfer (LRPET), that trains low-rank compressed networks from scratch and achieves competitive performance. First, we propose to alternately perform stochastic gradient descent training and projection onto the low-rank manifold. Compared to re-training on the compact model, this enables full utilization of model capacity since solution space is relaxed back to Euclidean space after projection. Second, the matrix energy (the sum of squares of singular values) reduction caused by projection is compensated by energy transfer. We uniformly transfer the energy of the pruned singular values to the remaining ones. We theoretically show that energy transfer eases the trend of gradient vanishing caused by projection. Third, we propose batch normalization (BN) rectification to cut off its effect on the optimal low-rank approximation of the weight matrix, which further improves the performance. Comprehensive experiments on CIFAR-10 and ImageNet have justified that our method is superior to other low-rank compression methods and also outperforms recent state-of-the-art pruning methods. Our code is available at https://github.com/BZQLin/LRPET.
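A minimal sketch of the projection-with-energy-transfer step as described above: project a weight matrix onto the low-rank manifold by SVD truncation, then rescale the surviving singular values so the total matrix energy (sum of squared singular values) is preserved. The uniform rescaling is one plausible reading of "uniformly transfer the energy", assumed here for illustration.

```python
import torch

def lrpet_projection(W, rank):
    """Low-rank projection with energy transfer (sketch). Truncate W to `rank`
    singular values, then scale the kept ones so the sum of squared singular
    values matches that of the original matrix."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    kept = S[:rank]
    scale = torch.sqrt(S.pow(2).sum() / kept.pow(2).sum())  # energy transfer factor
    return (U[:, :rank] * (kept * scale)) @ Vh[:rank]
```

In the training loop described in the abstract, this projection would alternate with ordinary SGD steps on the full (relaxed) parameter space.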
With the proliferation of mobile devices and the Internet of Things, deep learning models are increasingly being deployed on devices with limited computing resources and memory, and are exposed to the threat of adversarial noise. Such devices must therefore learn deep models that are both lightweight and robust. However, current deep learning solutions struggle to learn models that possess both properties without degrading one or the other. It is well known that fully connected layers contribute most of the parameters of a convolutional neural network. We perform a separable structural transformation of the fully connected layer to reduce its parameters, in which the large weight matrix of the fully connected layer is decoupled into the tensor product of several small separable matrices. Notably, data such as images no longer need to be flattened before being fed into the fully connected layer, which preserves the valuable spatial geometric information of the data. Furthermore, to further improve lightweightness and robustness, we propose joint constraints of sparsity and condition number imposed on these separable matrices. We evaluate the proposed method on MLP, VGG-16, and Vision Transformer. Experimental results on datasets such as ImageNet, SVHN, CIFAR-100, and CIFAR-10 show that we successfully reduce the number of network parameters by 90% while the loss in robust accuracy is less than 1.5%, which outperforms SOTA methods based on the original fully connected layers. Interestingly, it achieves an overwhelming advantage even at high compression rates, e.g., 200x.
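A minimal sketch of the separable structural transformation described above, under the assumption that the large fully connected weight matrix is replaced by a Kronecker-style product of two small factors so that image-like inputs need not be flattened. The factor shapes and initialization are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SeparableLinear(nn.Module):
    """A fully connected layer whose (o1*o2) x (d1*d2) weight matrix is never
    materialized: it is the tensor product of two small factors A (o1 x d1) and
    B (o2 x d2), so inputs stay as (d1, d2) matrices and keep their spatial
    structure. Sketch with assumed shapes."""
    def __init__(self, d1, d2, o1, o2):
        super().__init__()
        self.A = nn.Parameter(torch.randn(o1, d1) * 0.02)
        self.B = nn.Parameter(torch.randn(o2, d2) * 0.02)

    def forward(self, x):              # x: (batch, d1, d2), e.g. an unflattened feature map
        return self.A @ x @ self.B.T   # (batch, o1, o2); a Kronecker-structured map acting on vec(x)
```

The parameter count drops from d1*d2*o1*o2 to d1*o1 + d2*o2, which is where the large reductions quoted in the abstract come from.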
Weight pruning is an effective model compression technique for tackling the challenge of achieving real-time deep neural network (DNN) inference on mobile devices. However, prior pruning schemes have limited application scenarios due to accuracy degradation, difficulty in leveraging hardware acceleration, and restrictions on certain types of DNN layers. In this paper, we propose a general, fine-grained structured pruning scheme and corresponding compiler optimizations that are applicable to any type of DNN layer while achieving high accuracy and high hardware inference performance. With the flexibility of applying different pruning schemes to different layers enabled by our compiler optimizations, we further explore the new problem of determining the best-suited pruning scheme per layer, considering the different acceleration and accuracy trade-offs of the various schemes. Two pruning-scheme mapping methods, one search-based and one rule-based, are proposed to automatically derive the best-suited pruning scheme and block size for each layer of any given DNN. Experimental results demonstrate that our pruning-scheme mapping methods, together with the general fine-grained structured pruning scheme, outperform state-of-the-art DNN optimization frameworks with up to 2.48$\times$ and 1.73$\times$ DNN inference acceleration on the CIFAR-10 and ImageNet datasets without accuracy loss.
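For intuition, the snippet below sketches one simple member of the fine-grained structured family: block-based column pruning, where within every block of columns of a layer's weight matrix only a fixed number of columns is kept, so the surviving non-zeros follow a regular pattern a compiler can exploit. The block size and keep count are the kind of per-layer choices the proposed mapping methods would select; the exact scheme here is an assumption.

```python
import torch

def block_column_prune(W, block_size=16, keep=4):
    """Zero all but the `keep` highest-L2-norm columns inside every block of
    `block_size` consecutive columns of a 2D weight matrix (sketch of one
    fine-grained structured sparsity pattern)."""
    W = W.clone()
    for start in range(0, W.shape[1], block_size):
        blk = W[:, start:start + block_size]
        drop = blk.norm(dim=0).argsort(descending=True)[keep:]  # columns to remove in this block
        blk[:, drop] = 0.0                                      # in-place on the view updates W
    return W
```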
While machine learning is traditionally a resource intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensure a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday's applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
We propose a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio while simultaneously achieving the desired overall compression. Our algorithm hinges on the idea of compressing each convolutional (or fully connected) layer by slicing its channels into multiple groups and decomposing each group via low-rank decomposition. At the core of our algorithm is the derivation of layer-wise error bounds from the Eckart-Young-Mirsky theorem. We then leverage these bounds to frame the compression problem as an optimization problem in which we wish to minimize the maximum compression error across layers, and propose an efficient algorithm towards a solution. Our experiments indicate that our method outperforms existing low-rank compression approaches across a wide range of networks and datasets. We believe that our results open up new avenues for future research into the global performance-size trade-offs of modern neural networks. Our code is available at https://github.com/lucaslie/torchprune.
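A small sketch of the Eckart-Young-Mirsky quantity that such layer-wise error bounds build on: the relative Frobenius error of the best rank-r approximation of a (reshaped) layer matrix is determined entirely by its discarded singular values. The per-group slicing from the paper is omitted here.

```python
import torch

def relative_low_rank_error(W, rank):
    """Eckart-Young-Mirsky: the best rank-`rank` approximation of W has relative
    Frobenius error sqrt(sum of discarded sigma_i^2 / sum of all sigma_i^2)."""
    S = torch.linalg.svdvals(W)
    return torch.sqrt(S[rank:].pow(2).sum() / S.pow(2).sum())

# Schematic use: choose per-layer ranks so the worst-case relative error is
# minimized subject to a global size budget (a simplification of the paper's
# optimization problem).
```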
The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software/hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.
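The channel-level sparsity referred to above is imposed through the scaling factors of batch normalization layers (an L1 penalty during training), after which channels with small factors are pruned against a global threshold. A minimal sketch, assuming a PyTorch model whose convolutions are each followed by BatchNorm2d:

```python
import torch
import torch.nn as nn

def slimming_penalty(model, lam=1e-4):
    """L1 sparsity penalty on BatchNorm scaling factors; add this to the task loss."""
    return lam * sum(m.weight.abs().sum()
                     for m in model.modules() if isinstance(m, nn.BatchNorm2d))

def global_channel_masks(model, prune_fraction=0.7):
    """Keep-masks per BN layer: prune channels whose |gamma| lies in the lowest
    `prune_fraction` of all scaling factors network-wide."""
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    thresh = torch.quantile(gammas, prune_fraction)
    return [m.weight.detach().abs() > thresh
            for m in model.modules() if isinstance(m, nn.BatchNorm2d)]
```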
Mixed-precision deep neural networks achieve the energy efficiency and throughput needed for hardware deployment, particularly when resources are limited, without sacrificing accuracy. However, the optimal per-layer bit precision that preserves accuracy is not easy to find, especially given the large search space created by the abundance of models, datasets, and quantization techniques. To tackle this difficulty, a body of literature has emerged recently, and several frameworks achieving promising accuracy results have been proposed. In this paper, we first summarize the quantization techniques commonly used in the literature. We then present a thorough survey of mixed-precision frameworks, categorized by their optimization techniques, such as reinforcement learning, and their quantization techniques, such as deterministic rounding. Furthermore, the advantages and shortcomings of each framework are discussed and presented side by side. We finally provide guidelines for future mixed-precision frameworks.
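For concreteness, the snippet below shows the simplest of the quantization techniques such surveys categorize: deterministic (round-to-nearest) symmetric uniform quantization of a tensor to a given bit width, which is the per-layer knob a mixed-precision framework assigns.

```python
import torch

def uniform_quantize(x, bits):
    """Deterministic symmetric uniform quantization of x to `bits` bits
    (round-to-nearest, per-tensor scale). Sketch for illustration."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max() / qmax
    return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale
```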
Compression has become one of the essential deep learning research topics, especially for edge devices with limited computing power and storage capacity. Among the major compression techniques, low-rank compression via matrix factorization is known to suffer from two problems. First, it requires extensive tuning. Second, the resulting compression performance is often not impressive. In this work, we propose a low-rank compression method that utilizes a modified beam search for automatic rank selection and a modified stable rank for compression-friendly training. The resulting BSR (Beam-search and Stable Rank) algorithm requires tuning only a single hyperparameter for the desired compression ratio. The performance of BSR in terms of the accuracy-compression trade-off curve turns out to be superior to previously known low-rank compression methods. Furthermore, BSR can perform on par with or better than state-of-the-art structured pruning methods. As with pruning, BSR can be easily combined with quantization for additional compression.
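The stable rank mentioned above (which BSR modifies for compression-friendly training) has a simple closed form; a minimal sketch:

```python
import torch

def stable_rank(W):
    """Stable (numerical) rank ||W||_F^2 / ||W||_2^2 of a weight matrix: the
    quantity whose modified form BSR uses during compression-friendly training."""
    S = torch.linalg.svdvals(W)
    return (S.pow(2).sum() / S[0].pow(2)).item()
```

A matrix with a small stable rank is well approximated by a low-rank factorization, which is why regularizing it during training makes the subsequent rank truncation less damaging.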
Deep learning technologies have demonstrated remarkable effectiveness in a wide variety of tasks, and deep learning holds the potential to advance a multitude of applications, including in edge computing, where deep models are deployed on edge devices to enable instant data processing and response. A key challenge is that while the application of deep models often incurs substantial memory and computational costs, edge devices typically offer only very limited storage and computational capabilities, which may vary substantially across devices. These characteristics make it difficult to build deep learning solutions that unleash the potential of edge devices while complying with their constraints. A promising approach to addressing this challenge is to automate the design of effective deep learning models that are lightweight, require only a little storage, and incur only low computational overheads. This survey offers comprehensive coverage of studies of design automation techniques for deep learning models targeting edge computing. It provides an overview and comparison of the key metrics commonly used to quantify models in terms of effectiveness, lightness, and computational cost. The survey then covers three state-of-the-art categories of deep model design automation techniques: automated neural architecture search, automated model compression, and joint automated design and compression. Finally, the survey covers open issues and directions for future research.
We consider the problem of model compression for deep neural networks (DNNs) in the challenging post-training setting, in which we are given an accurate trained model and must compress it without any retraining, based only on a small amount of calibration input data. This problem has become popular in view of the emerging software and hardware support for executing models compressed via pruning and/or quantization with speedup, and well-performing solutions have been proposed independently for both compression approaches. In this paper, we introduce a new compression framework that covers both weight pruning and quantization in a unified setting, is time- and space-efficient, and considerably improves upon the practical performance of existing post-training methods. At the technical level, our approach is based on the first exact and efficient realization of the classical Optimal Brain Surgeon (OBS) framework of [LeCun, Denker and Solla, 1990] at the scale of modern DNNs, which we further extend to cover weight quantization. This is enabled by a series of algorithmic developments that may be of independent interest. From the practical perspective, our experimental results show that it can significantly improve the compression-accuracy trade-offs of existing post-training methods, and that it even enables the accurate joint application of pruning and quantization in a post-training setting.
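The classical Optimal Brain Surgeon rule realized by this framework removes, at each step, the weight with the smallest saliency w_q^2 / (2 [H^{-1}]_{qq}) and applies a closed-form compensating update to the remaining weights. A per-step sketch (dense inverse Hessian, small dimension, illustration only; the paper's contribution is making this exact and efficient at modern scale):

```python
import torch

def obs_step(w, H_inv):
    """One Optimal Brain Surgeon step: pick the weight with minimal saliency
    w_q^2 / (2*[H^-1]_qq), zero it, and update the rest to compensate."""
    saliency = w.pow(2) / (2.0 * torch.diag(H_inv))
    q = torch.argmin(saliency)
    w = w - (w[q] / H_inv[q, q]) * H_inv[:, q]   # compensating update (also zeros w[q])
    w[q] = 0.0                                   # enforce exactly
    return w, int(q)
```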
The deployment of convolutional neural networks (CNNs) on edge devices is hindered by the huge gap between performance requirements and available processing power. While recent research has made significant strides in developing network pruning methods to reduce the computational overhead of CNNs, there still remains considerable accuracy loss, especially at high pruning ratios. Questioning whether architectures designed for non-pruned networks may be ineffective for pruned networks, we propose to search architectures for pruning methods by defining a new search space and a novel search objective. To improve the generalization of pruned networks, we propose two novel PrunedConv and PrunedLinear operations. Specifically, these operations mitigate the problem of unstable gradients by regularizing the objective function of the pruned networks. The proposed search objective enables us to train architecture parameters with respect to the pruned weight elements. Quantitative analyses demonstrate that our searched architectures outperform those used in state-of-the-art pruned networks on CIFAR-10 and ImageNet. In terms of hardware efficiency, PR-DARTS increases the accuracy of MobileNet-V2 from 73.44% to 81.35% (a +7.91% improvement) and runs 3.87$\times$ faster.
The record-breaking performance of deep neural networks (DNNs) comes with heavy parameterization, leading to external dynamic random-access memory (DRAM) for storage. The prohibitive energy of DRAM accesses makes it non-trivial to deploy DNNs on resource-constrained devices, calling for minimizing weight and data movement to improve energy efficiency. We present SmartDeal (SD), an algorithmic framework that trades higher-cost memory storage/access for lower-cost computation, in order to aggressively boost storage and energy efficiency in both inference and training. The core of SD is a novel weight decomposition with structural constraints, carefully crafted to unleash the hardware-efficiency potential. Specifically, we decompose each weight tensor into the product of a small basis matrix and a large structurally sparse coefficient matrix whose non-zeros are quantized to powers of 2. The resulting sparse and quantized DNNs enjoy greatly reduced energy for data movement and weight storage, with only minimal overhead to recover the original weights thanks to sparse bit-operations and cost-favorable computations. Beyond inference, we take another leap to embrace energy-efficient training, introducing innovative techniques to address the unique roadblocks arising in training while preserving the SD structure. We also design a dedicated hardware accelerator to fully utilize the SD structure and improve real energy efficiency and latency. We conduct experiments on multiple tasks, models, and datasets in different settings. The results show that: 1) applied to inference, SD achieves up to 2.44x energy efficiency as evaluated via real hardware implementations; 2) applied to training, SD leads to 10.56x and 4.48x reductions in storage and training energy, respectively, with negligible accuracy loss compared to state-of-the-art training baselines. Our source code is available online.
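A fragment illustrating the coefficient structure described above: non-zero coefficients are snapped to signed powers of two so that reconstructing weights needs only shifts and adds. The exponent range and the basis-times-coefficients reconstruction shown in the trailing comment are assumptions of this sketch, not the paper's exact procedure.

```python
import torch

def quantize_to_power_of_two(C, min_exp=-8, max_exp=0):
    """Quantize the non-zeros of a (sparse) coefficient matrix to signed powers
    of two; zeros stay zero. Sketch with an assumed exponent range."""
    exp = torch.round(torch.log2(C.abs().clamp_min(2.0 ** min_exp))).clamp(min_exp, max_exp)
    q = torch.sign(C) * torch.pow(2.0, exp)
    return torch.where(C == 0, torch.zeros_like(C), q)

# Schematic reconstruction: W_hat = B @ quantize_to_power_of_two(C), with B a
# small basis matrix and C structurally sparse.
```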
Neural network pruning offers a promising prospect to facilitate deploying deep neural networks on resource-limited devices. However, existing methods are still challenged by the training inefficiency and labor cost in pruning designs, due to missing theoretical guidance of non-salient network components. In this paper, we propose a novel filter pruning method by exploring the High Rank of feature maps (HRank). Our HRank is inspired by the discovery that the average rank of multiple feature maps generated by a single filter is always the same, regardless of the number of image batches CNNs receive. Based on HRank, we develop a method that is mathematically formulated to prune filters with low-rank feature maps. The principle behind our pruning is that low-rank feature maps contain less information, and thus pruned results can be easily reproduced. Besides, we experimentally show that weights with high-rank feature maps contain more important information, such that even when a portion is not updated, very little damage would be done to the model performance. Without introducing any additional constraints, HRank leads to significant improvements over the state-of-the-arts in terms of FLOPs and parameters reduction, with similar accuracies. For example, with ResNet-110, we achieve a 58.2%-FLOPs reduction by removing 59.2% of the parameters, with only a small loss of 0.14% in top-1 accuracy on CIFAR-10. With ResNet-50, we achieve a 43.8%-FLOPs reduction by removing 36.7% of the parameters, with only a loss of 1.17% in the top-1 accuracy on ImageNet. The code is available at https://github.com/lmbxmu/HRank.
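A minimal sketch of the filter score described above: the average matrix rank of the feature maps a channel produces over a small set of input batches; channels with the lowest average rank are the pruning candidates.

```python
import torch

def average_feature_map_rank(feature_maps):
    """HRank-style score. feature_maps: (N, C, H, W) activations of one layer.
    Returns one score per channel: the average matrix rank of its HxW maps."""
    N, C, _, _ = feature_maps.shape
    ranks = torch.stack([torch.linalg.matrix_rank(feature_maps[n, c].float())
                         for n in range(N) for c in range(C)])
    return ranks.view(N, C).float().mean(dim=0)   # prune the channels with the lowest scores
```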
Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin, 2019), and find that with optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization.
Efficiently approximating local curvature information of the loss function is a key tool for the optimization and compression of deep neural networks. Yet, most existing methods for approximating second-order information have high computational or storage costs, which can limit their practicality. In this work, we investigate matrix-free, linear-time approaches for estimating inverse-Hessian vector products (IHVPs) for the case when the Hessian can be approximated as a sum of rank-one matrices, as in the classic approximation of the Hessian by the empirical Fisher matrix. We propose two new algorithms as part of a framework called M-FAC: the first algorithm is tailored towards network compression and can compute the IHVP for dimension $d$, if the Hessian is given as a sum of $m$ rank-one matrices, using $O(dm^2)$ precomputation, $O(dm)$ cost for computing the IHVP, and query cost $O(m)$ for any single element of the inverse Hessian. The second algorithm targets an optimization setting, where we wish to compute the product between the inverse Hessian, estimated over a sliding window of optimization steps, and a given gradient direction, as required for preconditioned SGD. We give an algorithm with cost $O(dm + m^2)$ for computing the IHVP and $O(dm + m^3)$ for adding or removing any gradient from the sliding window. These two algorithms yield state-of-the-art results for network pruning and for optimization with lower computational overhead relative to existing second-order methods. Implementations are available at [9] and [17].
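To make the rank-$m$ Hessian structure concrete, here is a naive Sherman-Morrison sketch of an inverse-empirical-Fisher vector product with $O(dm^2)$ setup and $O(dm)$ per query, the complexity regime of the first algorithm. It is not the optimized M-FAC implementation, and the damping value is an assumption.

```python
import torch

def ihvp_setup(G, damp=1e-4):
    """Precompute vectors for products with the inverse of the damped empirical
    Fisher F = damp*I + (1/m) * sum_i g_i g_i^T via recursive Sherman-Morrison.
    G: (m, d) matrix of gradients. Cost O(d*m^2)."""
    m, d = G.shape
    U = torch.zeros(m, d)   # U[k] = F_k^{-1} g_{k+1} (up to the recursion order)
    c = torch.zeros(m)
    for k in range(m):
        u = G[k] / damp
        for j in range(k):
            u = u - U[j] * (torch.dot(U[j], G[k]) / c[j])
        U[k] = u
        c[k] = m + torch.dot(G[k], u)
    return U, c

def ihvp_query(U, c, v, damp=1e-4):
    """Apply the approximate inverse Hessian to v in O(d*m)."""
    x = v / damp
    for k in range(U.shape[0]):
        x = x - U[k] * (torch.dot(U[k], v) / c[k])
    return x
```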
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems.This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry.The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
In this paper, we introduce a novel method of neural network weight compression. In our method, we store weight tensors as sparse, quantized matrix factors whose product is computed on the fly during inference to generate the target model's weights. We use projected gradient descent methods to find quantized and sparse factorizations of the weight tensors. We show that this approach can be seen as a unification of weight SVD, vector quantization, and sparse PCA. Combined with end-to-end fine-tuning, our method exceeds or is on par with previous state-of-the-art methods in terms of the trade-off between accuracy and model size. Our method is applicable both in moderate compression regimes, unlike vector quantization, and in extreme compression regimes.
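As a rough illustration of the projection step inside such a projected-gradient-descent scheme (the sparsity level and the uniform codebook here are assumptions, not the paper's configuration): after each gradient step, each factor is projected back onto a set of sparse, quantized matrices.

```python
import torch

def project_sparse_quantized(M, sparsity=0.9, bits=4):
    """Project a factor onto (approximately) the set of matrices that are both
    sparse and uniformly quantized: keep the largest-magnitude entries, then
    snap them to a symmetric `bits`-bit grid. Sketch only."""
    k = max(1, int((1 - sparsity) * M.numel()))
    thresh = M.abs().flatten().topk(k).values.min()
    mask = (M.abs() >= thresh).to(M.dtype)
    qmax = 2 ** (bits - 1) - 1
    scale = (M * mask).abs().max().clamp_min(1e-12) / qmax
    return torch.round(M / scale).clamp(-qmax, qmax) * scale * mask
```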
Model compression and model defense for deep neural networks (DNNs) have been extensively and individually studied. Considering the co-importance of model compactness and robustness in practical applications, several prior works have explored to improve the adversarial robustness of the sparse neural networks. However, the structured sparse models obtained by the exiting works suffer severe performance degradation for both benign and robust accuracy, thereby causing a challenging dilemma between robustness and structuredness of the compact DNNs. To address this problem, in this paper, we propose CSTAR, an efficient solution that can simultaneously impose the low-rankness-based Compactness, high STructuredness and high Adversarial Robustness on the target DNN models. By formulating the low-rankness and robustness requirement within the same framework and globally determining the ranks, the compressed DNNs can simultaneously achieve high compression performance and strong adversarial robustness. Evaluations for various DNN models on different datasets demonstrate the effectiveness of CSTAR. Compared with the state-of-the-art robust structured pruning methods, CSTAR shows consistently better performance. For instance, when compressing ResNet-18 on CIFAR-10, CSTAR can achieve up to 20.07% and 11.91% improvement for benign accuracy and robust accuracy, respectively. For compressing ResNet-18 with 16x compression ratio on Imagenet, CSTAR can obtain 8.58% benign accuracy gain and 4.27% robust accuracy gain compared to the existing robust structured pruning method.