For the practical design of deep neural networks on mobile devices, it is essential to consider the constraints imposed by computational resources as well as the inference latency of various applications. Among approaches to deep network acceleration, pruning is a widely adopted practice for balancing computational resource consumption and accuracy: unimportant connections can be removed either channel-wise or randomly, with minimal impact on model accuracy. Channel pruning instantly yields a significant latency reduction, while random weight pruning is more flexible for balancing latency and accuracy. In this paper, we present a unified framework with Joint Channel pruning and Weight pruning (JCW), which achieves a better Pareto frontier between latency and accuracy than previous model compression approaches. To fully optimize the trade-off between latency and accuracy, we develop a tailored multi-objective evolutionary algorithm within the JCW framework, which enables a single search to obtain optimal candidate architectures for various deployment requirements. Extensive experiments demonstrate that JCW achieves a better trade-off between latency and accuracy than various state-of-the-art pruning methods on the ImageNet classification dataset. Our code is available at https://github.com/jcw-anonymous/jcw.
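The core of such a multi-objective search is Pareto-dominance filtering over (latency, error) pairs. The sketch below is a minimal illustration of that idea, not the authors' implementation: the candidate encoding (per-layer channel keep-ratios and weight sparsities), the mutation scheme, and the `evaluate` hook are assumptions, and the initial population is assumed to carry pre-evaluated `latency` and `error` fields.

```python
import random

def dominates(a, b):
    """Candidate a dominates b if it is no worse on both objectives
    (latency, error) and strictly better on at least one."""
    return (a["latency"] <= b["latency"] and a["error"] <= b["error"]
            and (a["latency"] < b["latency"] or a["error"] < b["error"]))

def pareto_front(population):
    """Keep only non-dominated candidates."""
    return [c for c in population
            if not any(dominates(o, c) for o in population if o is not c)]

def evolve(population, evaluate, generations=50, mutation_rate=0.1):
    """Illustrative loop: perturb channel widths / weight sparsities,
    evaluate offspring, and keep the Pareto front across objectives."""
    for _ in range(generations):
        offspring = []
        for parent in population:
            child = {
                # per-layer channel keep-ratios and weight sparsities
                "channels": [max(0.1, min(1.0, r + random.uniform(-mutation_rate, mutation_rate)))
                             for r in parent["channels"]],
                "sparsity": [max(0.0, min(0.95, s + random.uniform(-mutation_rate, mutation_rate)))
                             for s in parent["sparsity"]],
            }
            child["latency"], child["error"] = evaluate(child)
            offspring.append(child)
        population = pareto_front(population + offspring)
    return population
```

Because the whole front is kept rather than a single scalarized optimum, one search can serve multiple deployment latency targets, as the abstract describes.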
Deep learning has demonstrated excellent effectiveness in a variety of tasks, and it has the potential to advance a multitude of applications, including edge computing, where deep models are deployed on edge devices to enable instant data processing and response. A key challenge is that while the application of deep models usually incurs substantial memory and computational costs, edge devices typically offer only very limited storage and computational capabilities, which may vary substantially across devices. These characteristics make it difficult to build deep learning solutions that unleash the potential of edge devices while complying with their constraints. A promising approach to addressing this challenge is to automate the design of effective deep learning models that are lightweight, require only little storage, and incur only low computational overhead. This survey provides comprehensive coverage of techniques for automating the design of deep learning models for edge computing. It offers an overview and comparison of the key metrics commonly used to quantify models in terms of effectiveness, lightness, and computational cost. The survey then covers three state-of-the-art categories of deep design automation techniques: automated neural architecture search, automated model compression, and joint automated design and compression. Finally, the survey covers open issues and directions for future research.
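As a concrete instance of the "lightness" and "computational cost" metrics such surveys compare, the following PyTorch sketch counts trainable parameters and approximates multiply-accumulate operations (MACs) for convolutional and linear layers via forward hooks. It is an illustrative approximation under assumed input shapes, not a metric definition taken from the survey.

```python
import torch
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    """Number of trainable parameters -- a common 'lightness' metric."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

def count_conv_fc_macs(model: nn.Module, input_shape=(1, 3, 224, 224)) -> int:
    """Rough MAC count for Conv2d/Linear layers, gathered with forward hooks."""
    macs = 0
    hooks = []

    def conv_hook(module, inputs, output):
        nonlocal macs
        # per output element: (in_channels / groups) * kernel_h * kernel_w MACs
        kh, kw = module.kernel_size
        macs += output.numel() * (module.in_channels // module.groups) * kh * kw

    def linear_hook(module, inputs, output):
        nonlocal macs
        macs += output.numel() * module.in_features

    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            hooks.append(m.register_forward_hook(conv_hook))
        elif isinstance(m, nn.Linear):
            hooks.append(m.register_forward_hook(linear_hook))

    with torch.no_grad():
        model(torch.zeros(input_shape))
    for h in hooks:
        h.remove()
    return macs
```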
Ad relevance modeling plays a critical role in online advertising systems, including Microsoft Bing. To leverage powerful transformers in such a low-latency setting, many existing approaches perform the ad-side computation offline. While effective, these approaches are unable to serve cold-start ads, resulting in poor relevance predictions for such ads. This work aims to design a new, low-latency BERT via structured pruning to empower real-time online inference for cold-start ad relevance on a CPU platform. Our challenge is that previous methods typically prune all layers of a transformer to a high, uniform sparsity, producing models that cannot achieve a satisfactory inference speed at an acceptable accuracy. In this paper, we propose SwiftPruner, an efficient framework that leverages evolution-based search to automatically find the best-performing layer-wise sparse BERT model under a desired latency constraint. Unlike existing evolutionary algorithms that perform random mutation, we propose a reinforced mutator with a latency-aware multi-objective reward to conduct better mutations and efficiently search the large space of layer-wise sparse models. Extensive experiments demonstrate that our method consistently achieves a higher ROC AUC and lower latency than uniform-sparsity baselines and state-of-the-art search methods. Remarkably, under our latency requirement of 1900us, SwiftPruner achieves a 0.86% higher AUC than the state-of-the-art uniform-sparsity baseline of BERT-Mini on a large real-world dataset. Online A/B testing shows that our model also reduces the proportion of defective cold-start ads while achieving a satisfactory real-time serving latency.
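A hedged sketch of the two ingredients named above: a latency-aware reward, and a learned (rather than uniform-random) choice of which layer to mutate. The 1900us target mirrors the constraint quoted in the abstract; the reward shape, the `logits` parameterization, and the step size are illustrative assumptions, not SwiftPruner's actual formulas.

```python
import math
import random

def latency_aware_reward(auc: float, latency_us: float,
                         target_us: float = 1900.0, penalty: float = 1.0) -> float:
    """Full credit for AUC when the model meets the latency target, and a
    multiplicative penalty growing the further the model exceeds it."""
    if latency_us <= target_us:
        return auc
    return auc * (target_us / latency_us) ** penalty

def mutate(sparsities, logits, step=0.05):
    """Pick the layer to mutate with probability softmax(logits) -- a
    stand-in for the reinforced mutator -- and perturb its sparsity."""
    exp = [math.exp(l) for l in logits]
    total = sum(exp)
    probs = [e / total for e in exp]
    layer = random.choices(range(len(sparsities)), weights=probs, k=1)[0]
    child = list(sparsities)
    child[layer] = min(0.95, max(0.0, child[layer] + random.choice([-step, step])))
    return child, layer
```

In a full system the `logits` would be updated from the reward of past mutations, so the mutator learns which layers are worth perturbing.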
In this paper, we propose a novel meta learning approach for automatic channel pruning of very deep neural networks. We first train a PruningNet, a kind of meta network, which is able to generate weight parameters for any pruned structure given the target network. We use a simple stochastic structure sampling method for training the PruningNet. Then, we apply an evolutionary procedure to search for good-performing pruned networks. The search is highly efficient because the weights are directly generated by the trained PruningNet and we do not need any finetuning at search time. With a single PruningNet trained for the target network, we can search for various Pruned Networks under different constraints with little human participation. Compared to the state-of-the-art pruning methods, we have demonstrated superior performances on MobileNet V1/V2 and ResNet. Codes are available on https://github.com/liuzechun/MetaPruning. This work was done while Zechun Liu and Haoyuan Mu were interns at Megvii Technology.
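To make the PruningNet idea concrete, here is a minimal sketch under assumed shapes and encodings: a small MLP maps the sampled per-layer widths to a weight tensor that is cropped to the sampled pruned shape, alongside the stochastic structure sampling step. The real PruningNet is per-layer and more elaborate; everything named here is illustrative.

```python
import random
import torch
import torch.nn as nn

class PruningNet(nn.Module):
    """Sketch of a meta network: maps an encoding of the sampled per-layer
    channel widths to a flat weight vector, reshaped and cropped into the
    pruned layer's convolution kernel."""
    def __init__(self, num_layers, max_out, max_in, ksize=3, hidden=64):
        super().__init__()
        self.max_out, self.max_in, self.ksize = max_out, max_in, ksize
        self.mlp = nn.Sequential(
            nn.Linear(num_layers, hidden), nn.ReLU(),
            nn.Linear(hidden, max_out * max_in * ksize * ksize),
        )

    def forward(self, width_encoding, out_c, in_c):
        flat = self.mlp(width_encoding)
        full = flat.view(self.max_out, self.max_in, self.ksize, self.ksize)
        return full[:out_c, :in_c]  # crop to the sampled pruned shape

def sample_widths(max_channels, num_layers):
    """Stochastic structure sampling: a random width per layer."""
    return [random.randint(1, max_channels) for _ in range(num_layers)]

# usage sketch: encode sampled widths as normalized floats, generate weights
# widths = sample_widths(64, num_layers=8)
# enc = torch.tensor([w / 64 for w in widths], dtype=torch.float32)
```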
Neural architecture search (NAS) has attracted growing interest. To reduce the search cost, recent work has explored weight sharing across models and made major progress in one-shot NAS. However, it has been observed that a model with higher one-shot accuracy does not necessarily perform better when trained stand-alone. To address this issue, this paper proposes a progressive automatic design of the search space, named Pad-NAS. Unlike previous approaches in which all layers share the same operation search space within the supernet, we formulate a progressive search strategy based on operation pruning and build a layer-wise operation search space. In this way, Pad-NAS can automatically design the operations for each layer and achieve a trade-off between search-space quality and model diversity. During the search, we also take into account the hardware platform constraints of efficient neural network deployment. Extensive experiments on ImageNet show that our method can achieve state-of-the-art performance.
Weight pruning is an effective model compression technique for addressing the challenge of achieving real-time deep neural network (DNN) inference on mobile devices. However, prior pruning schemes have limited application scenarios due to accuracy degradation, difficulty in leveraging hardware acceleration, and/or restrictions on certain types of DNN layers. In this paper, we propose a general, fine-grained structured pruning scheme and corresponding compiler optimizations that are applicable to any type of DNN layer while achieving both high accuracy and high hardware inference performance. With the flexibility of applying different pruning schemes to different layers enabled by our compiler optimizations, we further probe into the new problem of determining the best-suited pruning scheme, considering the different acceleration and accuracy performance of the various schemes. Two pruning scheme mapping methods, one search-based and the other rule-based, are proposed to automatically derive the best-suited pruning regularity and block size for each layer of any given DNN. Experimental results demonstrate that our pruning scheme mapping methods, together with the general fine-grained structured pruning scheme, outperform the state-of-the-art DNN optimization framework with up to 2.48× and 1.73× DNN inference acceleration on the CIFAR-10 and ImageNet datasets without accuracy loss.
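The sketch below illustrates one member of the family of fine-grained structured schemes discussed here: block-based column pruning, where each block of rows keeps its own strongest columns, leaving a regularity that compiler-generated kernels can exploit. The block size and the L2 saliency are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def block_column_prune(weight: np.ndarray, block_rows: int, keep_ratio: float) -> np.ndarray:
    """Split the weight matrix into groups of rows; inside each block,
    zero entire columns with the smallest L2 norm. Different blocks may
    keep different columns, which is finer-grained than channel pruning
    yet still regular enough for hardware acceleration."""
    out = weight.copy()
    rows, cols = weight.shape
    keep = max(1, int(cols * keep_ratio))
    for r0 in range(0, rows, block_rows):
        block = out[r0:r0 + block_rows]
        col_norms = np.linalg.norm(block, axis=0)
        pruned_cols = np.argsort(col_norms)[:cols - keep]  # weakest columns
        block[:, pruned_cols] = 0.0
    return out
```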
Recently, neural architecture search has achieved great success on classification tasks for mobile devices. The backbone network for object detection is usually obtained on the image classification task. However, an architecture searched through the classification task is sub-optimal because of the gap between the image classification and object detection tasks. Meanwhile, work on backbone network architecture search for mobile-device object detection is limited, mainly because the backbone always requires expensive ImageNet pre-training. Accordingly, it is necessary to study network architecture search for mobile-device object detection without expensive pre-training. In this work, we propose a mobile object detection backbone network architecture search algorithm, an evolutionary optimization method based on non-dominated sorting for NAS scenarios. It can quickly search for the backbone network architecture within given constraints, and it better addresses the sub-optimality of linearly combining accuracy and computational cost into a single objective. The proposed approach can search backbone networks with different depths, widths, or expansion sizes via a technique of weight mapping, making it possible to use NAS for mobile-device detection tasks far more efficiently. In our experiments, we verify the effectiveness of the proposed approach on YoloX-Lite, a lightweight version of the target detection framework. Under similar computational complexity, the accuracy of the backbone network architecture we search for is 2.0% mAP higher than MobileDet. Our improved backbone network can reduce the computational effort while improving the accuracy of the object detection network. To prove its effectiveness, a series of ablation studies have been carried out and the working mechanism has been analyzed in detail.
Model compression is a critical technique to efficiently deploy neural network models on mobile devices, which have limited computation resources and tight power budgets. Conventional model compression techniques rely on hand-crafted heuristics and rule-based policies that require domain experts to explore the large design space, trading off among model size, speed, and accuracy, which is usually sub-optimal and time-consuming. In this paper, we propose AutoML for Model Compression (AMC), which leverages reinforcement learning to provide the model compression policy. This learning-based compression policy outperforms the conventional rule-based policy by achieving a higher compression ratio, better preserving accuracy, and freeing human labor. Under 4× FLOPs reduction, we achieved 2.7% better accuracy than the hand-crafted model compression policy for VGG-16 on ImageNet. We applied this automated, push-the-button compression pipeline to MobileNet and achieved a 1.81× speedup of measured inference latency on an Android phone and a 1.43× speedup on the Titan XP GPU, with only 0.1% loss of ImageNet Top-1 accuracy.
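A hedged skeleton of the layer-by-layer RL episode that AMC-style methods use: the agent emits a continuous keep-ratio per layer, the ratio is clamped so the FLOPs budget remains satisfiable, and validation accuracy serves as the reward. `agent.act`, `agent.update`, `compress`, and `evaluate` are assumed user-supplied hooks, and the 0.05 floor is an illustrative minimum width, not AMC's exact constraint logic.

```python
from dataclasses import dataclass

@dataclass
class LayerState:
    """Illustrative per-layer features the agent observes (a subset of
    what AMC-style methods feed their agent)."""
    index: int
    params: int
    flops: int

def run_episode(layers, agent, compress, evaluate, flops_target):
    """One episode: walk the layers, pick a keep-ratio each, enforce the
    FLOPs budget, then reward the final accuracy."""
    used = 0.0
    actions = []
    for layer in layers:
        # clamp so total kept FLOPs cannot exceed the target budget
        max_keep = min(1.0, (flops_target - used) / layer.flops)
        action = max(0.05, min(max_keep, agent.act(layer)))
        used += action * layer.flops
        actions.append(action)
    accuracy = evaluate(compress(actions))   # reward = validation accuracy
    agent.update(actions, reward=accuracy)
    return actions, accuracy
```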
This paper proposed a Soft Filter Pruning (SFP) method to accelerate the inference procedure of deep Convolutional Neural Networks (CNNs). Specifically, the proposed SFP enables the pruned filters to be updated when training the model after pruning. SFP has two advantages over previous works: (1) Larger model capacity. Updating previously pruned filters provides our approach with larger optimization space than fixing the filters to zero. Therefore, the network trained by our method has a larger model capacity to learn from the training data. (2) Less dependence on the pretrained model. Large capacity enables SFP to train from scratch and prune the model simultaneously. In contrast, previous filter pruning methods should be conducted on the basis of the pre-trained model to guarantee their performance. Empirically, SFP from scratch outperforms the previous filter pruning methods. Moreover, our approach has been demonstrated effective for many advanced CNN architectures. Notably, on ILSVRC-2012, SFP reduces more than 42% FLOPs on ResNet-101 with even 0.2% top-5 accuracy improvement, which has advanced the state-of-the-art. Code is publicly available on GitHub: https://github.com/he-y/softfilter-pruning
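The mechanism is easy to state in code. Below is a minimal PyTorch sketch, assuming an L2-norm filter saliency and a fixed per-layer prune ratio: after each epoch the weakest filters are zeroed but left trainable, so later gradient updates can revive them, which is what distinguishes "soft" from hard pruning.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def soft_filter_prune(model: nn.Module, prune_ratio: float = 0.3):
    """Zero the Conv2d filters with the smallest L2 norm in every layer.
    The zeroed filters are NOT frozen: the next epoch's gradients may
    restore them."""
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            norms = m.weight.data.flatten(1).norm(p=2, dim=1)  # per-filter L2
            k = int(m.out_channels * prune_ratio)
            if k > 0:
                _, idx = norms.topk(k, largest=False)  # weakest filters
                m.weight.data[idx] = 0.0

# assumed usage inside a training loop:
# for epoch in range(epochs):
#     train_one_epoch(model, loader, optimizer)
#     soft_filter_prune(model, prune_ratio=0.3)
```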
Mixed-precision deep neural networks achieve the energy efficiency and throughput needed for hardware deployment, particularly when resources are limited, without sacrificing accuracy. However, the optimal per-layer bit precision that preserves accuracy is not easy to find, especially given the huge search space created by the numerous models, datasets, and quantization techniques. To tackle this difficulty, a body of literature has emerged recently, and several frameworks achieving promising accuracy results have been proposed. In this paper, we first summarize the quantization techniques commonly used in the literature. Then, we present a thorough survey of the mixed-precision frameworks, categorized according to their optimization techniques, such as reinforcement learning, and their quantization techniques, such as deterministic rounding. Furthermore, the advantages and shortcomings of each framework are discussed, and a juxtaposition is presented. We finally give guidelines for future mixed-precision frameworks.
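As a concrete example of the quantization techniques such a survey summarizes, here is a minimal sketch of symmetric uniform quantization with deterministic round-to-nearest; mixed precision then amounts to assigning a different bit-width per layer. The scale handling is illustrative, not any particular framework's kernel.

```python
import numpy as np

def quantize_uniform(x: np.ndarray, num_bits: int):
    """Symmetric uniform quantization with deterministic rounding.
    Returns the dequantized tensor and the scale, so the quantization
    error is directly visible."""
    qmax = 2 ** (num_bits - 1) - 1                    # e.g. 127 for 8 bits
    scale = np.abs(x).max() / qmax if x.size else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)  # round-to-nearest
    return q * scale, scale

# mixed precision simply assigns a different num_bits per layer, e.g.:
# bits = {"conv1": 8, "conv2": 4, "fc": 8}   # hypothetical assignment
```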
It is difficult to deploy convolutional neural networks (CNNs) on mobile devices due to their limited memory and computational resources. We aim to design efficient neural networks for heterogeneous devices, including CPUs and GPUs, by exploiting the redundancy in feature maps, which has rarely been investigated in neural architecture design. For CPU-like devices, we propose a novel CPU-efficient Ghost (C-Ghost) module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that can fully reveal the information underlying the intrinsic features. The proposed C-Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. C-Ghost bottlenecks are designed to stack C-Ghost modules, and then the lightweight C-GhostNet can be easily established. We further consider efficient networks for GPU devices. Without involving too many GPU-inefficient operations (e.g., depth-wise convolution) in a building stage, we propose to utilize stage-wise feature redundancy to formulate a GPU-efficient Ghost (G-Ghost) stage structure. The features in a stage are split into two parts, where the first part is processed by the original block with fewer output channels to generate the intrinsic features, and the other part is generated by cheap operations that exploit the stage-wise redundancy. Experiments conducted on benchmarks demonstrate the effectiveness of the proposed C-Ghost module and the G-Ghost stage. C-GhostNet and G-GhostNet can achieve the optimal trade-off between accuracy and latency on CPUs and GPUs, respectively. Code is available at https://github.com/huawei-noah/cv-backbones.
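A minimal PyTorch sketch of a C-Ghost-style module, under assumed hyper-parameters (ratio 2, a 3×3 depth-wise cheap operation): a 1×1 convolution produces the intrinsic maps, a depth-wise convolution derives the ghost maps from them, and the two halves are concatenated. This illustrates the idea, not the repository's implementation.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Intrinsic features from a regular conv + ghost features from a
    cheap depth-wise conv. Assumes out_ch is divisible by ratio."""
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        intrinsic = out_ch // ratio
        ghost = out_ch - intrinsic
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, intrinsic, kernel_size=1, bias=False),
            nn.BatchNorm2d(intrinsic), nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(intrinsic, ghost, dw_kernel, padding=dw_kernel // 2,
                      groups=intrinsic, bias=False),  # depth-wise: cheap
            nn.BatchNorm2d(ghost), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary(x)
        return torch.cat([intrinsic, self.cheap(intrinsic)], dim=1)
```

The cheap branch touches only the intrinsic maps, so roughly half of the output channels cost a fraction of a full convolution, which is where the CPU savings come from.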
Compressing neural network architectures is important to allow the deployment of models to embedded or mobile devices, and pruning and quantization are the major approaches to compress neural networks nowadays. Both methods benefit when compression parameters are selected specifically for each layer. Finding good combinations of compression parameters, so-called compression policies, is hard, as the problem spans an exponentially large search space. Effective compression policies consider the influence of the specific hardware architecture on the used compression methods. We propose an algorithmic framework called Galen to search for such policies using reinforcement learning, utilizing pruning and quantization, thus providing automatic compression for neural networks. Contrary to other approaches, we use inference latency measured on the target hardware device as an optimization goal. With that, the framework supports the compression of models specific to a given hardware target. We validate our approach using three different reinforcement learning agents for pruning, quantization, and joint pruning and quantization. Besides proving the functionality of our approach, we were able to compress a ResNet18 for CIFAR-10, on an embedded ARM processor, to 20% of the original inference latency without significant loss of accuracy. Moreover, we can demonstrate that a joint search and compression using pruning and quantization is superior to an individual search for policies using a single compression method.
The recent focus on the efficiency of deep neural networks (DNNs) has led to significant work on model compression approaches, of which weight pruning is one of the most popular. At the same time, there is rapidly growing computational support for efficiently executing the unstructured-sparse models obtained via pruning. Yet, most existing pruning methods minimize just the number of remaining weights, i.e. the size of the model, rather than optimizing for inference time. We address this gap by introducing SPDY, a new compression method which automatically determines layer-wise sparsity targets achieving a desired inference speedup on a given system, while minimizing accuracy loss. SPDY is composed of two new techniques: the first is an efficient dynamic programming algorithm for solving the speedup-constrained layer-wise compression problem, given a set of layer sensitivity scores; the second is a local search procedure for determining accurate layer sensitivity scores. Experiments across popular vision and language models show that SPDY guarantees recovery of higher accuracy relative to existing strategies, both for one-shot and gradual pruning scenarios, and is compatible with most existing pruning approaches. We also extend our approach to the recently-proposed task of pruning with very little data, where we achieve the best known accuracy recovery when pruning to the GPU-supported 2:4 sparsity pattern.
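The first technique lends itself to a short sketch: a dynamic program over discretized time that, given per-layer (time, sensitivity) profiles for each sparsity choice, picks the per-layer choices minimizing total sensitivity under a latency budget. The profile format and integer time units are assumptions for illustration, not SPDY's exact formulation.

```python
def allocate_sparsities(costs, errors, time_budget):
    """costs[l][c]  -> integer time units of layer l at sparsity choice c
    errors[l][c] -> sensitivity score (loss increase) of layer l at c
    Returns per-layer choices minimizing total sensitivity subject to
    total time <= time_budget, or None if the budget is infeasible."""
    L = len(costs)
    # best[t] = (min total error for layers processed so far at time t, choices)
    best = {0: (0.0, [])}
    for l in range(L):
        nxt = {}
        for t, (err, picks) in best.items():
            for c, (ct, ce) in enumerate(zip(costs[l], errors[l])):
                t2 = t + ct
                if t2 > time_budget:
                    continue
                cand = (err + ce, picks + [c])
                if t2 not in nxt or cand[0] < nxt[t2][0]:
                    nxt[t2] = cand
        best = nxt
    if not best:
        return None
    return min(best.values(), key=lambda v: v[0])[1]
```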
In recent years, deep learning (DL) has developed rapidly in both industry and academia. However, finding the optimal hyper-parameters of a DL model often requires high computational cost and human expertise. To mitigate this issue, evolutionary computation (EC), as a powerful heuristic search approach, has shown significant merits in the automated design of DL models, so-called evolutionary deep learning (EDL). This paper aims to analyze EDL from the perspective of automated machine learning (AutoML). Specifically, we first illuminate EDL from the perspectives of machine learning and EC, and regard EDL as an optimization problem. Following the DL pipeline, we systematically introduce EDL methods ranging from feature engineering and model generation to model deployment with a new taxonomy (i.e., what and how to evolve/optimize), focusing on discussions of solution representations and search paradigms for handling the optimization problem via EC. Finally, key applications, open issues, and potentially promising lines of future research are suggested. This survey reviews the state of the art in EDL and offers insightful guidelines for its development.
Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin, 2019), and find that with optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization.
In recent years, image restoration tasks have witnessed great performance improvements driven by the development of large, deep models. Despite their outstanding performance, the heavy computation required by deep models limits the application of image restoration. To lift this limitation, it is necessary to reduce the size of the networks while maintaining accuracy. Recently, N:M structured pruning has appeared as one of the effective and practical pruning approaches for making a model efficient under an accuracy constraint. However, it fails to account for the different computational complexities and performance requirements of the different layers of an image restoration network. To further optimize the trade-off between efficiency and restoration accuracy, we propose a novel pruning method that determines the pruning ratio of the N:M structured sparsity for each layer. Extensive experimental results on super-resolution and deblurring tasks demonstrate the efficacy of our method, which outperforms previous pruning methods. A PyTorch implementation of the proposed method will be publicly available at https://github.com/junghunoh/sls_cvpr2r2022.
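For reference, the sketch below implements plain N:M magnitude pruning — keep the N largest-magnitude weights in every group of M — which is the building block whose per-layer ratio the proposed method selects. It assumes the weight count is divisible by M and uses 2:4 as the default, GPU-friendly pattern.

```python
import torch

def n_m_prune(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Within every group of M consecutive weights (in flattened order),
    keep the N largest-magnitude entries and zero the rest."""
    out_shape = weight.shape
    groups = weight.reshape(-1, m)                 # (num_groups, M)
    idx = groups.abs().argsort(dim=1)              # ascending magnitude
    mask = torch.ones_like(groups)
    mask.scatter_(1, idx[:, : m - n], 0.0)         # drop the M-N smallest
    return (groups * mask).reshape(out_shape)
```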
Structured channel pruning has been shown to significantly accelerate inference time for convolution neural networks (CNNs) on modern hardware, with a relatively minor loss of network accuracy. Recent works permanently zero these channels during training, which we observe to significantly hamper final accuracy, particularly as the fraction of the network being pruned increases. We propose Soft Masking for cost-constrained Channel Pruning (SMCP) to allow pruned channels to adaptively return to the network while simultaneously pruning towards a target cost constraint. By adding a soft mask re-parameterization of the weights and channel pruning from the perspective of removing input channels, we allow gradient updates to previously pruned channels and the opportunity for the channels to later return to the network. We then formulate input channel pruning as a global resource allocation problem. Our method outperforms prior works on both the ImageNet classification and PASCAL VOC detection datasets.
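A hedged sketch of the soft-mask re-parameterization idea: a learnable per-input-channel mask is hardened to 0/1 in the forward pass while a straight-through estimator keeps gradients flowing to the soft values, so "pruned" channels can later return to the network. The sigmoid parameterization and 0.5 threshold are illustrative assumptions, not SMCP's exact formulation.

```python
import torch
import torch.nn as nn

class SoftMaskedConv(nn.Module):
    """Convolution whose input channels are gated by a learnable mask,
    hardened in the forward pass with a straight-through estimator."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
        self.mask_logits = nn.Parameter(torch.zeros(in_ch))

    def forward(self, x):
        soft = torch.sigmoid(self.mask_logits)
        hard = (soft > 0.5).float()
        mask = hard + soft - soft.detach()   # straight-through estimator
        return self.conv(x * mask.view(1, -1, 1, 1))
```

Because the gradient reaches `mask_logits` even for channels currently at zero, a channel whose usefulness grows during training can cross the threshold and rejoin the network, matching the adaptive behavior the abstract describes.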
The record-breaking performance of deep neural networks (DNNs) comes with heavy parameterization, leading to external dynamic random access memory (DRAM) for storage. The prohibitive energy of DRAM accesses makes it non-trivial to deploy DNNs on resource-constrained devices, calling for minimizing weight and data movement to improve energy efficiency. We present SmartDeal (SD), an algorithmic framework that trades higher-cost memory storage/access for lower-cost computation, in order to aggressively boost storage and energy efficiency in both inference and training. The core of SD is a novel weight decomposition with structural constraints, carefully crafted to unleash the hardware-efficiency potential. Specifically, we decompose each weight tensor as the product of a small basis matrix and a large, structurally sparse coefficient matrix whose non-zeros are quantized to powers of 2. The resulting sparse and quantized DNNs enjoy greatly reduced energy for data movement and weight storage, incurring only minimal overhead to recover the original weights thanks to the sparse bit-operations and cost-favorable computations. Beyond inference, we take another leap to embrace energy-efficient training, introducing innovative techniques to address the unique obstacles arising in training while preserving the SD structure. We also design a dedicated hardware accelerator to fully utilize the SD structure to improve the real energy efficiency and latency. We conduct experiments on multiple tasks, models, and datasets in different settings. Results show that: 1) applied to inference, SD achieves up to 2.44× energy efficiency, as evaluated via real hardware implementations; 2) applied to training, SD leads to 10.56× and 4.48× reduction in the storage and training energy, respectively, with negligible accuracy loss compared to state-of-the-art training baselines. Our source code is available online.
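The power-of-2 constraint on the coefficient matrix is the piece that turns multiplications into bit-shifts on hardware. Below is a minimal sketch of rounding non-zeros to the nearest signed power of two, with an assumed exponent range; the full decomposition W ≈ B·C and its training machinery are beyond this illustration.

```python
import numpy as np

def quantize_pow2(x: np.ndarray, min_exp: int = -8, max_exp: int = 0) -> np.ndarray:
    """Round each non-zero entry to the nearest signed power of two,
    so multiplying by it reduces to a bit-shift. Zeros stay zero,
    preserving the structural sparsity of the coefficient matrix."""
    sign = np.sign(x)
    mag = np.abs(x)
    out = np.zeros_like(x)
    nz = mag > 0
    exp = np.clip(np.round(np.log2(mag[nz])), min_exp, max_exp)
    out[nz] = sign[nz] * np.power(2.0, exp)
    return out

# illustrative decomposition shapes (assumed, not the paper's notation):
# B: (out, r) small dense basis; C: (r, in) sparse, power-of-2 coefficients
```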
In this paper, we propose differentiable channel sparsity search (DCSS) for convolutional neural networks. Unlike traditional channel pruning algorithms, which require users to manually set the pruning ratio for each convolutional layer, DCSS automatically searches for the optimal combination of sparsities. Inspired by differentiable architecture search (DARTS), we draw lessons from continuous relaxation and leverage gradient information to balance computational cost and metrics. Since directly applying the DARTS scheme incurs shape mismatch and excessive memory consumption, we introduce a novel technique named weight sharing inside a filter. This technique elegantly eliminates the problem of shape mismatch with negligible additional resources. We conduct comprehensive experiments not only on image classification but also on fine-grained tasks, including semantic segmentation and image super-resolution, to verify the effectiveness of DCSS. Compared with previous network pruning approaches, DCSS achieves state-of-the-art results on image classification. The experimental results on semantic segmentation and image super-resolution indicate that task-specific search achieves better performance than transferring slimmed models, demonstrating the wide applicability and high efficiency of DCSS.
Convolutional neural networks (CNNs) are used in numerous real-world applications such as vision-based autonomous driving and video content analysis. To run CNN inference on various target devices, hardware-aware neural architecture search (NAS) is crucial. A key requirement of efficient hardware-aware NAS is the fast evaluation of inference latencies in order to rank different architectures. While building a latency predictor for each target device has been commonly used in the state of the art, this is a very time-consuming process that lacks scalability in the presence of extremely diverse devices. In this work, we address the scalability challenge by exploiting latency monotonicity: the architecture latency rankings on different devices are often correlated. When strong latency monotonicity exists, we can reuse architectures searched for one proxy device on new target devices without losing optimality. In the absence of strong latency monotonicity, we propose an efficient proxy adaptation technique to significantly boost the latency monotonicity. Finally, we validate our approach and conduct experiments with devices of different platforms on multiple mainstream search spaces, including MobileNet-V2, MobileNet-V3, NAS-Bench-201, ProxylessNAS, and FBNet. Our results highlight that, by using just one proxy device, we can find almost the same Pareto-optimal architectures as existing per-device NAS, while avoiding the prohibitive cost of building a latency predictor for each device. GitHub: https://github.com/ren-research/oneproxy.
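Latency monotonicity is typically quantified by a rank correlation across devices. The sketch below computes the Spearman coefficient (no tie handling, adequate for illustration) of the same architectures' latencies measured on two devices; values near 1 suggest one device can serve as a proxy for the other.

```python
def rank(values):
    """Simple ranking (ties broken by order) -- adequate for a sketch."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman_rho(lat_a, lat_b):
    """Spearman rank correlation of two latency lists for the same
    architectures, measured on devices A and B."""
    n = len(lat_a)
    ra, rb = rank(lat_a), rank(lat_b)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```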