Modern convolutional networks apply the same operations to every pixel in an image. However, not all image regions are equally important. To address this inefficiency, we propose a method for dynamically applying convolutions conditioned on the input image. We introduce a residual block in which a small gating branch learns which spatial positions should be evaluated. These discrete gating decisions are trained end-to-end using the Gumbel-Softmax trick, in combination with a sparsity criterion. Our experiments on CIFAR, ImageNet and MPII show that our method focuses better on regions of interest and achieves better accuracy than existing approaches, at a lower computational complexity. Moreover, we provide an efficient CUDA implementation of our dynamic convolutions using a gather-scatter approach, achieving a significant inference speedup with MobileNetV2 residual blocks. On human pose estimation, an inherently spatially sparse task, processing speed is increased by 60% with no loss in accuracy.
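To make the gating idea concrete, the following is a minimal PyTorch sketch of a spatially gated residual block in the spirit of this abstract: a small gating branch predicts per-position keep/skip logits that are discretized with the straight-through Gumbel-Softmax trick. The module layout, layer sizes, and the name `SpatiallyGatedResBlock` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatiallyGatedResBlock(nn.Module):
    """Residual block with a per-pixel execution gate (illustrative sketch)."""
    def __init__(self, channels, tau=1.0):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Lightweight gating branch: one 1x1 conv -> 2 logits (skip, execute) per position.
        self.gate = nn.Conv2d(channels, 2, kernel_size=1)
        self.tau = tau

    def forward(self, x):
        logits = self.gate(x)                                        # (N, 2, H, W)
        if self.training:
            # Straight-through Gumbel-Softmax: discrete in the forward pass,
            # differentiable surrogate in the backward pass.
            mask = F.gumbel_softmax(logits, tau=self.tau, hard=True, dim=1)[:, 1:2]
        else:
            mask = (logits[:, 1:2] > logits[:, 0:1]).float()
        # A sparsity loss on mask.mean() would implement the training-time sparsity criterion.
        # Dense masking is shown here; the paper's CUDA kernels would only evaluate
        # the convolutions at the selected positions.
        return x + mask * self.body(x)

block = SpatiallyGatedResBlock(32)
print(block(torch.randn(2, 32, 16, 16)).shape)   # torch.Size([2, 32, 16, 16])
```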
SegBlocks reduces the computational cost of existing neural networks by dynamically adjusting the processing resolution of image regions based on their complexity. Our method splits the image into blocks and downsamples blocks of low complexity, reducing both the number of operations and the memory consumption. A lightweight policy network, which selects the complex regions, is trained using reinforcement learning. In addition, we introduce several modules implemented in CUDA to process images in blocks. Most importantly, our novel BlockPad module prevents the feature discontinuities at block borders from which existing methods suffer, while keeping memory consumption under control. Our experiments on the Cityscapes, CamVid and Mapillary Vistas datasets for semantic segmentation show that dynamically processing images offers a better accuracy-versus-complexity trade-off compared to static baselines of similar complexity. For instance, our method reduces the number of floating-point operations of SwiftNet-RN18 by 60% and increases the inference speed by 50%, with only a 0.3% decrease in mIoU accuracy on Cityscapes.
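The sketch below illustrates the basic block-wise processing pattern the abstract describes: the image is split into blocks and low-complexity blocks are processed at half resolution. The helper names and the toy "policy" are assumptions; the BlockPad handling of block borders is deliberately omitted.

```python
import torch
import torch.nn.functional as F

def split_into_blocks(img, block=128):
    """Split a (C, H, W) image into a grid of (C, block, block) patches (H, W divisible by block)."""
    c, h, w = img.shape
    patches = img.unfold(1, block, block).unfold(2, block, block)   # (C, H/b, W/b, b, b)
    return patches.permute(1, 2, 0, 3, 4).contiguous()              # (H/b, W/b, C, b, b)

def process_blocks(patches, keep_high_res, net):
    """Run `net` at full resolution on complex blocks and at half resolution on the rest."""
    outputs = []
    for idx, patch in enumerate(patches.flatten(0, 1)):
        x = patch.unsqueeze(0)
        if not keep_high_res[idx]:
            x = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
        y = net(x)
        if not keep_high_res[idx]:
            y = F.interpolate(y, scale_factor=2.0, mode="bilinear", align_corners=False)
        outputs.append(y)
        # Note: feature discontinuities at block borders would need the paper's
        # BlockPad-style padding from neighbouring blocks, not shown here.
    return outputs

# Toy usage: a 1x1 conv stands in for the segmentation network, and the
# "policy" simply marks the first two blocks as complex.
net = torch.nn.Conv2d(3, 8, 1)
patches = split_into_blocks(torch.randn(3, 256, 256), block=128)
keep = torch.tensor([True, True, False, False])
print([o.shape for o in process_blocks(patches, keep, net)])
```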
In this paper we propose BlockCopy, a scheme that accelerates frame-based CNNs to process video more efficiently than standard frame-by-frame processing. To this end, a lightweight policy network determines the important regions in an image, and operations are applied only to the selected regions, using custom block-sparse convolutions. Features of non-selected regions are simply copied from the preceding frame, reducing the number of computations and the latency. The execution policy is trained online using reinforcement learning, without requiring ground-truth annotations. Our generic framework is demonstrated on dense prediction tasks such as pedestrian detection, instance segmentation and semantic segmentation, using both state-of-the-art networks (Center and Scale Predictor, MGAN, SwiftNet) and standard baseline networks (Mask R-CNN, DeepLabV3+). BlockCopy achieves significant FLOPS savings and inference speedup with minimal impact on accuracy.
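A minimal sketch of the copy-versus-compute decision at block level: for each spatial block, features are either recomputed by the backbone or copied from the previous frame according to the policy's decision. The function and variable names are hypothetical; the policy network and block-sparse kernels are not modelled.

```python
import torch

def blockcopy_step(frame_blocks, prev_features, exec_mask, backbone):
    """For each spatial block, recompute features if exec_mask is set,
    otherwise reuse (copy) the features from the previous frame."""
    new_features = []
    for block, prev, run in zip(frame_blocks, prev_features, exec_mask):
        if run:
            new_features.append(backbone(block.unsqueeze(0)).squeeze(0))
        else:
            new_features.append(prev)          # copied: no computation spent on this block
    return new_features

# Toy usage with a 1x1 conv standing in for the per-block backbone.
backbone = torch.nn.Conv2d(3, 16, 1)
blocks_t0 = [torch.randn(3, 64, 64) for _ in range(4)]
feats_t0 = [backbone(b.unsqueeze(0)).squeeze(0) for b in blocks_t0]

blocks_t1 = [torch.randn(3, 64, 64) for _ in range(4)]
exec_mask = [True, False, False, True]         # policy output: recompute blocks 0 and 3
feats_t1 = blockcopy_step(blocks_t1, feats_t0, exec_mask, backbone)
print(feats_t1[1].shape)                       # copied unchanged from frame t0
```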
Deep convolutional neural networks (CNNs) are often of sophisticated design, with numerous learnable parameters, for the sake of accuracy. To alleviate the expensive cost of deploying them on mobile devices, recent works have made great efforts to excavate the redundancy in pre-defined architectures. Nevertheless, the redundancy in the input resolution of modern CNNs has not been fully investigated; that is, the resolution of the input image is fixed. In this paper, we observe that the smallest resolution needed to accurately predict a given image differs from image to image, even with the same neural network. To this end, we propose a novel dynamic-resolution network (DRNet) in which the input resolution is determined dynamically for each input sample. A resolution predictor with negligible computational cost is explored and optimized jointly with the desired network. Specifically, the predictor learns the smallest resolution that can retain, and even exceed, the original recognition accuracy for each image. During inference, each input image is resized to its predicted resolution to minimize the overall computational burden. We then conduct extensive experiments on several benchmark networks and datasets. The results show that our DRNet can be embedded into any off-the-shelf network architecture to obtain a considerable reduction in computational complexity. For instance, DR-ResNet-50 achieves similar performance with about 34% less computation, and gains 1.4% in accuracy with 10% less computation, compared to the original ResNet-50 on ImageNet.
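The following sketch shows the inference pattern the abstract describes: a tiny predictor, run on a cheap fixed low resolution, picks one of a few candidate resolutions per image, and each image is resized accordingly before the main classifier. The candidate set, the predictor architecture, and all names are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResolutionPredictor(nn.Module):
    """Tiny network that picks one of a few candidate input resolutions per image."""
    def __init__(self, candidates=(128, 160, 224)):
        super().__init__()
        self.candidates = candidates
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, len(candidates)),
        )

    def forward(self, x):
        # The predictor itself runs on a cheap, fixed low resolution.
        logits = self.net(F.interpolate(x, size=64, mode="bilinear", align_corners=False))
        return logits.argmax(dim=1)             # index of the chosen resolution per image

def dynamic_resolution_inference(images, predictor, classifier):
    choices = predictor(images)
    outputs = []
    for img, c in zip(images, choices):
        r = predictor.candidates[int(c)]
        resized = F.interpolate(img.unsqueeze(0), size=r, mode="bilinear", align_corners=False)
        outputs.append(classifier(resized))     # classifier must be resolution-agnostic
    return torch.cat(outputs)

# Toy usage: a global-pooling classifier head works at any input resolution.
classifier = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                           nn.Flatten(), nn.Linear(16, 10))
preds = dynamic_resolution_inference(torch.randn(4, 3, 224, 224),
                                     ResolutionPredictor(), classifier)
print(preds.shape)  # torch.Size([4, 10])
```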
While machine learning is traditionally a resource-intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensuring a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
Dynamic neural networks are an emerging research topic in deep learning. Compared to static models, which have fixed computational graphs and parameters at the inference stage, dynamic networks can adapt their structures or parameters to different inputs, leading to notable advantages in terms of accuracy, computational efficiency, adaptiveness, and so on. In this survey, we comprehensively review this rapidly developing area by dividing dynamic networks into three main categories: 1) instance-wise dynamic models that process each instance with data-dependent architectures or parameters; 2) spatial-wise dynamic networks that conduct adaptive computation with respect to different spatial locations of image data; and 3) temporal-wise dynamic models that perform adaptive inference along the temporal dimension of sequential data such as videos and text. The important research problems of dynamic networks, e.g., architecture design, decision-making schemes, optimization techniques and applications, are reviewed systematically. Finally, we discuss the open problems in this field together with interesting directions for future research.
Deploying convolutional neural networks (CNNs) on mobile devices is difficult due to the limited memory and computation resources. We aim to design efficient neural networks for heterogeneous devices, including CPUs and GPUs, by exploiting the redundancy in feature maps, which has rarely been investigated in neural architecture design. For CPU-like devices, we propose a novel CPU-efficient Ghost (C-Ghost) module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that fully reveal the information underlying the intrinsic features. The proposed C-Ghost module can be used as a plug-and-play component to upgrade existing convolutional neural networks. C-Ghost bottlenecks are designed to stack C-Ghost modules, from which the lightweight C-GhostNet can easily be built. We further consider efficient networks for GPU devices. Without involving too many GPU-inefficient operations (e.g., depth-wise convolutions) in a building stage, we propose to exploit stage-wise feature redundancy to formulate a GPU-efficient Ghost (G-Ghost) stage structure. The features in a stage are split into two parts: the first part is processed by the original blocks with fewer output channels to generate the intrinsic features, and the other part is generated by cheap operations that exploit stage-wise redundancy. Experiments conducted on benchmarks demonstrate the effectiveness of the proposed C-Ghost module and G-Ghost stage. C-GhostNet and G-GhostNet achieve the best trade-off between accuracy and latency on CPUs and GPUs, respectively. Code is available at https://github.com/huawei-noah/cv-backbones.
While deeper convolutional networks are needed to achieve maximum accuracy in visual perception tasks, for many inputs shallower networks are sufficient. We exploit this observation by learning to skip convolutional layers on a per-input basis. We introduce SkipNet, a modified residual network that uses a gating network to selectively skip convolutional blocks based on the activations of the previous layer. We formulate the dynamic skipping problem in the context of sequential decision making and propose a hybrid learning algorithm that combines supervised learning and reinforcement learning to address the challenges of non-differentiable skipping decisions. We show SkipNet reduces computation by 30-90% while preserving the accuracy of the original model on four benchmark datasets and outperforms the state-of-the-art dynamic networks and static compression methods. We also qualitatively evaluate the gating policy to reveal a relationship between image scale and saliency and the number of layers skipped.
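A hedged sketch of the gating pattern the abstract describes: a small gate computed from the incoming activations decides whether a residual block is executed or bypassed. The gate design and the name `GatedResidualBlock` are illustrative assumptions; SkipNet's actual hybrid supervised/reinforcement-learning training of the hard decision is not shown.

```python
import torch
import torch.nn as nn

class GatedResidualBlock(nn.Module):
    """Residual block whose execution is controlled by a scalar gate
    computed from the previous layer's activations (illustrative sketch)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Feed-forward gate: global pooling followed by a linear layer -> 1 logit per sample.
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 1))

    def forward(self, x):
        g = torch.sigmoid(self.gate(x))                 # soft gate value in [0, 1]
        execute = (g > 0.5).float()                     # hard skip decision
        if not self.training and execute.sum() == 0:
            return x                                    # the whole batch skips this block
        # The hard decision is non-differentiable; the paper trains it with a hybrid
        # of supervised learning and reinforcement learning, omitted here.
        return x + self.body(x) * execute.view(-1, 1, 1, 1)

block = GatedResidualBlock(32)
print(block(torch.randn(2, 32, 16, 16)).shape)   # torch.Size([2, 32, 16, 16])
```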
Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources. The redundancy in feature maps is an important characteristic of those successful CNNs, but has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that could fully reveal information underlying intrinsic features. The proposed Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed to stack Ghost modules, and then the lightweight GhostNet can be easily established. Experiments conducted on benchmarks demonstrate that the proposed Ghost module is an impressive alternative to convolution layers in baseline models, and our GhostNet can achieve higher recognition performance (e.g. 75.7% top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012 classification dataset. Code is available at https://github.com/huawei-noah/ghostnet.
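A minimal sketch of the Ghost-module idea: an ordinary convolution produces a few intrinsic feature maps, and a cheap depthwise convolution generates additional "ghost" maps that are concatenated with them. Parameter names and defaults are assumptions; see the repository linked above for the reference implementation.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost module sketch: intrinsic features from an ordinary convolution,
    plus cheap depthwise 'ghost' features, concatenated to the full width."""
    def __init__(self, in_ch, out_ch, ratio=2, kernel_size=1, cheap_kernel=3):
        super().__init__()
        intrinsic = out_ch // ratio                 # the few expensive channels
        ghost = out_ch - intrinsic                  # the rest are generated cheaply
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, intrinsic, kernel_size, padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(intrinsic), nn.ReLU(inplace=True),
        )
        # Cheap operation: depthwise convolution over the intrinsic maps.
        self.cheap = nn.Sequential(
            nn.Conv2d(intrinsic, ghost, cheap_kernel, padding=cheap_kernel // 2,
                      groups=intrinsic, bias=False),
            nn.BatchNorm2d(ghost), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary(x)
        return torch.cat([intrinsic, self.cheap(intrinsic)], dim=1)

m = GhostModule(16, 64)
print(m(torch.randn(1, 16, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```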
Since sparse neural networks usually contain many zero weights, these unnecessary network connections can potentially be eliminated without degrading network performance. Well-designed sparse neural networks therefore have the potential to significantly reduce FLOPs and computational resources. In this work, we propose a new automatic pruning method, Sparse Connectivity Learning (SCL). Specifically, each weight is re-parameterized as an element-wise multiplication of a trainable weight variable and a binary mask. The network connectivity is thus fully described by the binary mask, which is modulated by a unit step function. We theoretically prove the fundamental principle of using a straight-through estimator (STE) for network pruning: the proxy gradient of the STE should be positive, which ensures that the mask variables converge at their minima. After finding that the Leaky ReLU, Softplus and Identity STEs satisfy this principle, we propose to adopt the Identity STE in SCL for discrete mask relaxation. We also find that the mask gradients of different features are very unbalanced, and therefore propose to normalize the mask gradient of each feature to optimize mask-variable training. To train sparse masks automatically, we include the total number of network connections as a regularization term in our objective function. Since SCL does not require pruning criteria or hyperparameters defined by designers for the network layers, the network is explored in a larger hypothesis space to achieve optimized sparse connectivity for the best performance. SCL overcomes the limitations of existing automatic pruning methods. Experimental results demonstrate that SCL can automatically learn and select important network connections for various baseline network structures. Deep learning models trained with SCL outperform state-of-the-art human-designed and automatic pruning methods in sparsity, accuracy, and FLOPs reduction.
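The re-parameterization described above can be sketched in a few lines: each weight is multiplied by a binary mask produced by a unit step on a mask variable, with an identity straight-through estimator for the backward pass and a connection-count regularizer. This is a minimal sketch under those assumptions; class names, initializations and the loss are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class BinaryMaskSTE(torch.autograd.Function):
    """Unit step in the forward pass, identity straight-through estimator in the backward pass."""
    @staticmethod
    def forward(ctx, mask_var):
        return (mask_var >= 0).float()
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output                      # identity STE: pass gradients straight through

class SparseLinear(nn.Module):
    """Linear layer whose connectivity is learned through per-weight mask variables."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.mask_var = nn.Parameter(torch.full((out_features, in_features), 0.01))

    def forward(self, x):
        mask = BinaryMaskSTE.apply(self.mask_var)
        return x @ (self.weight * mask).t()

    def connection_penalty(self):
        # Regularization term: the number of active connections (differentiable via the STE).
        return BinaryMaskSTE.apply(self.mask_var).sum()

layer = SparseLinear(8, 4)
out = layer(torch.randn(2, 8))
loss = out.pow(2).mean() + 1e-3 * layer.connection_penalty()
loss.backward()
print(layer.mask_var.grad.shape)               # gradients reach the mask variables through the STE
```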
In this paper, we propose a new approach for model acceleration by exploiting the spatial sparsity in visual data. We observe that the final prediction in vision Transformers is based only on a subset of the most informative tokens, which is sufficient for accurate image recognition. Based on this observation, we propose a dynamic token sparsification framework to prune redundant tokens progressively and dynamically, conditioned on the input, in order to accelerate vision Transformers. Specifically, we devise a lightweight prediction module that estimates an importance score for each token given the current features. This module is added to different layers to prune redundant tokens hierarchically. Although the framework is inspired by our observation of the sparse attention in vision Transformers, we find that the idea of adaptive and asymmetric computation can be a general solution for accelerating various architectures. We extend our method to hierarchical models, including CNNs and hierarchical vision Transformers, as well as to more complex dense prediction tasks that require structured feature maps, by formulating a more generic dynamic spatial sparsification framework with progressive sparsification and asymmetric computation for different spatial locations. By applying lightweight fast paths to less informative features and more expressive slow paths to more important locations, we maintain the structure of the feature maps while significantly reducing the overall computation. Extensive experiments demonstrate the effectiveness of our framework on various modern architectures and different visual recognition tasks. Our results clearly show that dynamic spatial sparsification offers a new and more effective dimension for model acceleration. Code is available at https://github.com/raoyongming/dynamicvit
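A hedged sketch of the token scoring and pruning step described above: a small MLP scores each token and, at inference time, only the top-scoring fraction is kept for later layers. The `TokenPruner` name, the keep ratio and the MLP are assumptions; the paper's differentiable training relaxation (attention masking with Gumbel-Softmax) is omitted.

```python
import torch
import torch.nn as nn

class TokenPruner(nn.Module):
    """Scores tokens with a small MLP and keeps the top-scoring fraction (inference-style)."""
    def __init__(self, dim, keep_ratio=0.7):
        super().__init__()
        self.score = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim // 4),
                                   nn.GELU(), nn.Linear(dim // 4, 1))
        self.keep_ratio = keep_ratio

    def forward(self, tokens):
        # tokens: (batch, num_tokens, dim); the class token (index 0) is always kept.
        cls_tok, patches = tokens[:, :1], tokens[:, 1:]
        scores = self.score(patches).squeeze(-1)                 # (batch, num_patches)
        n_keep = max(1, int(patches.shape[1] * self.keep_ratio))
        idx = scores.topk(n_keep, dim=1).indices                 # indices of the kept tokens
        idx = idx.unsqueeze(-1).expand(-1, -1, patches.shape[-1])
        kept = torch.gather(patches, 1, idx)
        return torch.cat([cls_tok, kept], dim=1)

pruner = TokenPruner(dim=192, keep_ratio=0.7)
x = torch.randn(2, 1 + 196, 192)                                 # CLS + 14x14 patch tokens
print(pruner(x).shape)                                           # torch.Size([2, 138, 192])
```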
Deep learning technologies have demonstrated remarkable effectiveness in a wide variety of tasks, and deep learning holds the potential to advance a multitude of applications, including edge computing, where deep models are deployed on edge devices to enable instant data processing and response. A key challenge is that, while the application of deep models often incurs substantial memory and computational costs, edge devices typically offer only very limited storage and computational capabilities, which may vary substantially across devices. These characteristics make it difficult to build deep learning solutions that unleash the potential of edge devices while complying with their constraints. A promising approach to addressing this challenge is to automate the design of effective deep learning models that are lightweight, require little storage, and incur low computational overhead. This survey provides comprehensive coverage of design automation techniques for deep learning models targeting edge computing. It gives an overview and comparison of the key metrics commonly used to quantify models in terms of effectiveness, lightness, and computational cost. The survey then covers three categories of state-of-the-art deep model design automation techniques: automated neural architecture search, automated model compression, and joint automated design and compression. Finally, it covers open issues and directions for future research.
Dynamic model pruning is a recent direction that allows the inference of a different sub-network for each input sample during deployment. However, current dynamic methods rely on learning continuous channel gating through regularization, by inducing a sparsity loss. This formulation introduces complexity in balancing different losses (e.g., the task loss and the regularization loss). In addition, regularization-based methods lack a transparent trade-off hyperparameter selection to realize a computational budget. Our contribution is two-fold: 1) decoupled task and pruning training, and 2) a simple hyperparameter selection that enables FLOPs-reduction estimation before training. Inspired by the Hebbian theory in neuroscience, "neurons that fire together wire together", we propose a method that predicts a mask to process k filters in a layer based on the activations of its previous layer. We pose the problem as a self-supervised binary classification problem. Each mask-predictor module is trained to predict, for each filter in the current layer, whether its log-likelihood belongs to the top-k activated filters. The value k is estimated dynamically for each input based on a novel criterion using the mass of heatmaps. We report experiments on several neural architectures, such as VGG, ResNet and MobileNet, on the CIFAR and ImageNet datasets. On CIFAR, we reach accuracy similar to SOTA methods with 15% and 24% higher FLOPs reduction. Similarly, on ImageNet, we achieve a lower drop in accuracy with up to 13% improvement in FLOPs reduction.
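To make the self-supervised formulation concrete, here is a rough sketch of one mask predictor: it looks at the previous layer's pooled activations and is trained to predict which filters of the next convolution will be among its top-k most activated ones. The k is fixed here and all names are hypothetical; the paper's per-input estimation of k from heatmap mass is not modelled.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FilterMaskPredictor(nn.Module):
    """Predicts, from the previous layer's pooled activations, which filters of the
    next convolution are likely to be among its top-k most activated ones."""
    def __init__(self, prev_channels, next_filters):
        super().__init__()
        self.net = nn.Linear(prev_channels, next_filters)

    def forward(self, prev_feat):
        pooled = prev_feat.mean(dim=(2, 3))          # (batch, prev_channels)
        return self.net(pooled)                      # one logit per next-layer filter

def topk_targets(next_feat, k):
    """Self-supervised targets: 1 for the k most activated filters of the next layer."""
    strength = next_feat.abs().mean(dim=(2, 3))      # (batch, next_filters)
    idx = strength.topk(k, dim=1).indices
    targets = torch.zeros_like(strength)
    return targets.scatter_(1, idx, 1.0)

# Toy training step for one predictor (at inference, only filters with positive
# logits would be executed).
conv = nn.Conv2d(16, 32, 3, padding=1)
predictor = FilterMaskPredictor(prev_channels=16, next_filters=32)
x = torch.randn(4, 16, 28, 28)
loss = F.binary_cross_entropy_with_logits(predictor(x), topk_targets(conv(x), k=8))
loss.backward()
print(float(loss))
```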
We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeViT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU. We release the code at https://github.com/facebookresearch/LeViT.
We propose a multigrid-in-channels (MGIC) approach that tackles the quadratic growth of the number of parameters with respect to the number of channels in standard convolutional neural networks (CNNs). Our approach thereby addresses the redundancy in CNNs that is also revealed by the recent success of lightweight CNNs. Lightweight CNNs can achieve accuracy comparable to standard CNNs with fewer parameters; however, the number of weights still scales quadratically with the width of the CNN. Our MGIC architectures replace each CNN block with an MGIC counterpart that utilizes a hierarchy of nested grouped convolutions of small group size to address this problem. Hence, our proposed architectures scale linearly with respect to the network's width while retaining the full coupling of the channels, as in standard CNNs. Our extensive experiments on image classification, segmentation, and point cloud classification show that applying this strategy to different architectures such as ResNet and MobileNetV3 reduces the number of parameters while obtaining similar or better accuracy.
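The scaling argument in this abstract is easy to check numerically: a standard convolution's parameter count grows quadratically with width, while grouped convolutions with a fixed group size grow only linearly. The sketch below illustrates just that comparison; MGIC's nested multigrid hierarchy, which restores the cross-group coupling, is not reproduced here.

```python
import torch.nn as nn

def n_params(module):
    return sum(p.numel() for p in module.parameters())

group_size = 8
for width in (64, 128, 256):
    full = nn.Conv2d(width, width, 3, padding=1, bias=False)
    grouped = nn.Conv2d(width, width, 3, padding=1, bias=False, groups=width // group_size)
    print(f"width={width:4d}  full conv: {n_params(full):8d}  "
          f"grouped (group size {group_size}): {n_params(grouped):7d}")
# Doubling the width multiplies the full convolution's parameters by ~4,
# but the fixed-group-size convolution's parameters only by ~2.
```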
Light-weight convolutional neural networks (CNNs) suffer performance degradation as their low computational budgets constrain both the depth (number of convolution layers) and the width (number of channels) of CNNs, resulting in limited representation capability. To address this issue, we present Dynamic Convolution, a new design that increases model complexity without increasing the network depth or width. Instead of using a single convolution kernel per layer, dynamic convolution aggregates multiple parallel convolution kernels dynamically based upon their attentions, which are input dependent. Assembling multiple kernels is not only computationally efficient due to the small kernel size, but also has more representation power since these kernels are aggregated in a non-linear way via attention. By simply using dynamic convolution for the state-of-the-art architecture MobileNetV3-Small, the top-1 accuracy of ImageNet classification is boosted by 2.9% with only 4% additional FLOPs and 2.9 AP gain is achieved on COCO keypoint detection.
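A minimal sketch of the aggregation step: K parallel kernels are mixed with input-dependent attention weights before a single convolution is applied. The per-sample loop and the `DynamicConv2d` name are illustrative simplifications of the idea in the abstract, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Aggregates K convolution kernels with input-dependent attention weights."""
    def __init__(self, in_ch, out_ch, kernel_size=3, K=4):
        super().__init__()
        self.K, self.padding = K, kernel_size // 2
        self.kernels = nn.Parameter(torch.randn(K, out_ch, in_ch, kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(K, out_ch))
        # Squeeze-and-excitation style attention over the K kernels.
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, K))

    def forward(self, x):
        att = F.softmax(self.attn(x), dim=1)                # (batch, K)
        outputs = []
        for i in range(x.shape[0]):                          # per-sample kernel mixing
            w = torch.einsum("k,koihw->oihw", att[i], self.kernels)
            b = att[i] @ self.bias
            outputs.append(F.conv2d(x[i:i + 1], w, b, padding=self.padding))
        return torch.cat(outputs)

layer = DynamicConv2d(16, 32, K=4)
print(layer(torch.randn(2, 16, 28, 28)).shape)              # torch.Size([2, 32, 28, 28])
```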
Convolutional neural network (CNN) compression is crucial for deploying these models on resource-limited edge devices. Existing channel pruning algorithms for CNNs have achieved plenty of success on complex models. They approach the pruning problem from different perspectives and use different metrics to guide the pruning process. However, these metrics mainly focus on the model's "output" or "weights" and neglect its "interpretation" information. To fill this gap, we propose to address the channel pruning problem from a novel perspective by leveraging the interpretations of a model to steer the pruning process, thereby utilizing information from both the inputs and the outputs of the model. Existing interpretation methods, however, cannot be deployed to achieve our goal, as they are either inefficient for pruning or may predict non-coherent explanations. We tackle this challenge by introducing a selector model that predicts real-time, smooth saliency masks for the pruned models. We parameterize the distribution of the explanatory masks with radial basis function (RBF)-like functions to incorporate the geometric prior of natural images into our selector model's inductive bias. We can thus obtain compact representations of the explanations to reduce the computational cost of our pruning method. We leverage our selector model to steer the network pruning by maximizing the similarity between the explanatory representations of the pruned and original models. Extensive experiments on the CIFAR-10 and ImageNet benchmark datasets demonstrate the efficacy of our proposed method. Our implementation is available at \url{https://github.com/alii-ganjj/interpretationssteerpruning}
Structured channel pruning has been shown to significantly accelerate inference time for convolution neural networks (CNNs) on modern hardware, with a relatively minor loss of network accuracy. Recent works permanently zero these channels during training, which we observe to significantly hamper final accuracy, particularly as the fraction of the network being pruned increases. We propose Soft Masking for cost-constrained Channel Pruning (SMCP) to allow pruned channels to adaptively return to the network while simultaneously pruning towards a target cost constraint. By adding a soft mask re-parameterization of the weights and channel pruning from the perspective of removing input channels, we allow gradient updates to previously pruned channels and the opportunity for the channels to later return to the network. We then formulate input channel pruning as a global resource allocation problem. Our method outperforms prior works on both the ImageNet classification and PASCAL VOC detection datasets.
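A hedged sketch of the soft-masking idea: the forward pass uses weights whose input channels are masked according to an importance score, but a straight-through pass lets gradients reach all weights, so a channel pruned in one iteration can re-enter the network later. The importance measure, the fixed keep ratio, and the class name are stand-ins; SMCP additionally solves a global cost-constrained allocation across layers, not shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftMaskedConv2d(nn.Module):
    """Convolution with a soft input-channel mask: the forward pass uses masked weights,
    but gradients flow to all weights, so pruned channels can return later."""
    def __init__(self, in_ch, out_ch, kernel_size=3, keep=0.75):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.keep = keep

    def input_channel_mask(self):
        # Stand-in importance score: L2 norm of the weights attached to each input channel.
        importance = self.conv.weight.detach().pow(2).sum(dim=(0, 2, 3))
        n_keep = max(1, int(self.conv.in_channels * self.keep))
        mask = torch.zeros_like(importance)
        mask[importance.topk(n_keep).indices] = 1.0
        return mask.view(1, -1, 1, 1)

    def forward(self, x):
        w = self.conv.weight
        masked_w = w * self.input_channel_mask()
        # Straight-through: forward uses the masked weights, backward treats the masking
        # as identity, so even currently-pruned channels keep receiving updates.
        w_eff = w + (masked_w - w).detach()
        return F.conv2d(x, w_eff, self.conv.bias, padding=self.conv.padding)

layer = SoftMaskedConv2d(16, 32, keep=0.75)
layer(torch.randn(2, 16, 28, 28)).mean().backward()
print((layer.conv.weight.grad.abs().sum(dim=(0, 2, 3)) > 0).all())   # all input channels get gradients
```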
Currently, the neural network architecture design is mostly guided by the indirect metric of computation complexity, i.e., FLOPs. However, the direct metric, e.g., speed, also depends on other factors such as memory access cost and platform characteristics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design. Accordingly, a new architecture is presented, called ShuffleNet V2. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff.
This paper presents X3D, a family of efficient video networks that progressively expand a tiny 2D image classification architecture along multiple network axes, in space, time, width and depth. Inspired by feature selection methods in machine learning, a simple stepwise network expansion approach is employed that expands a single axis in each step, such that good accuracy to complexity trade-off is achieved. To expand X3D to a specific target complexity, we perform progressive forward expansion followed by backward contraction. X3D achieves state-of-the-art performance while requiring 4.8× and 5.5× fewer multiply-adds and parameters for similar accuracy as previous work. Our most surprising finding is that networks with high spatiotemporal resolution can perform well, while being extremely light in terms of network width and parameters. We report competitive accuracy at unprecedented efficiency on video classification and detection benchmarks. Code will be available at: https://github.com/facebookresearch/SlowFast.