Although capsule networks excel at modeling the positional relationships among features in deep neural networks for visual recognition tasks, they are computationally expensive and ill-suited to running on mobile devices. The bottleneck is the computational complexity of the dynamic routing mechanism used between capsules. On the other hand, neural networks such as XNOR-Net are fast and computationally efficient but suffer from relatively low accuracy because of the information loss incurred during binarization. This paper proposes new fully connected (FC) layers by xnorizing the linear projection outside or inside the dynamic routing within the CapsFC layer. Specifically, our proposed FC layers come in two versions: XNODR (Xnorizing the Linear Projection Outside Dynamic Routing) and XNIDR (Xnorizing the Linear Projection Inside Dynamic Routing). To test their generalization, we insert them into MobileNet V2 and ResNet-50, respectively. Experiments on three datasets, MNIST, CIFAR-10, and MultiMNIST, validate their effectiveness. The results show that both XNODR and XNIDR help networks achieve high accuracy with lower FLOPs and fewer parameters (e.g., 95.32% accuracy on CIFAR-10 with 2.99M parameters and 311.22M FLOPs).
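For context on where binarization would enter the routing pipeline, the following is a minimal numpy sketch of Sabour-style dynamic routing in which the capsule projection matrix is replaced by its sign scaled by the mean absolute weight. The shapes, the squash form, and where the binarization is applied are illustrative assumptions, not the XNODR/XNIDR implementation.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Standard capsule squashing nonlinearity.
    norm2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def xnorize(W):
    # Approximate a real-valued projection by a binary matrix and a scale,
    # in the XNOR-Net spirit: W ≈ alpha * sign(W).
    return np.mean(np.abs(W)) * np.sign(W)

def dynamic_routing(u, W, iters=3, binarize=True):
    # u: (n_in, d_in) lower-level capsule outputs
    # W: (n_out, n_in, d_out, d_in) projection matrices
    Wb = xnorize(W) if binarize else W
    u_hat = np.einsum('jiok,ik->jio', Wb, u)      # predictions, (n_out, n_in, d_out)
    b = np.zeros(Wb.shape[:2])                     # routing logits
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=0, keepdims=True)  # coupling coefficients
        s = np.einsum('ji,jio->jo', c, u_hat)
        v = squash(s)                              # (n_out, d_out)
        b = b + np.einsum('jio,jo->ji', u_hat, v)  # agreement update
    return v

v = dynamic_routing(np.random.randn(6, 8), np.random.randn(4, 6, 16, 8))
print(v.shape)  # (4, 16)
```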
We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32× memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58× faster convolutional operations (in terms of number of the high precision operations) and 32× memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy. Our code is available at: http://allenai.org/plato/xnornet.
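The reported speed-up comes from replacing floating-point multiply-accumulates with XNOR and popcount over bit-packed ±1 vectors. Below is a small Python sketch of that identity for a single dot product; the bit encoding and the alpha/beta scaling shown here are illustrative assumptions, not the paper's kernels.

```python
import numpy as np

def binarize(x):
    # Map a real vector to {-1, +1} plus one scaling factor (W ≈ alpha * sign(W)).
    return np.sign(x), np.mean(np.abs(x))

def xnor_dot(a_bits, b_bits):
    # For vectors in {-1, +1} encoded as bits (1 for +1, 0 for -1),
    # dot(a, b) = 2 * popcount(XNOR(a, b)) - n.
    n = a_bits.size
    same = ~(a_bits ^ b_bits) & 1
    return 2 * int(same.sum()) - n

w, x = np.random.randn(64), np.random.randn(64)
bw, alpha = binarize(w)
bx, beta = binarize(x)

bits_w = (bw > 0).astype(np.uint8)
bits_x = (bx > 0).astype(np.uint8)
# The bit-wise form reproduces the dot product of the sign vectors exactly ...
assert xnor_dot(bits_w, bits_x) == int(np.dot(bw, bx))
# ... and, scaled by alpha and beta, approximates the real-valued dot product.
print(alpha * beta * xnor_dot(bits_w, bits_x), np.dot(w, x))
```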
Deploying convolutional neural networks (CNNs) on mobile devices is difficult due to the limited memory and computation resources. We aim to design efficient neural networks for heterogeneous devices, including CPUs and GPUs, by exploiting the redundancy in feature maps, which has rarely been investigated in neural architecture design. For CPU-like devices, we propose a novel CPU-efficient Ghost (C-Ghost) module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that can fully reveal the information underlying the intrinsic features. The proposed C-Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. C-Ghost bottlenecks are designed to stack C-Ghost modules, and then a lightweight C-GhostNet can be easily established. We further consider efficient networks for GPU devices. Without involving too many GPU-inefficient operations (e.g., depth-wise convolution) within a building stage, we propose to utilize stage-wise feature redundancy to formulate a GPU-efficient Ghost (G-Ghost) stage structure. The features in a stage are split into two parts, where the first part is processed by the original block with fewer output channels to produce the intrinsic features, and the other part is generated with cheap operations by exploiting the stage-wise redundancy. Experiments conducted on benchmarks demonstrate the effectiveness of the proposed C-Ghost module and G-Ghost stage. C-GhostNet and G-GhostNet achieve the best trade-off between accuracy and latency on CPU and GPU, respectively. Code is available at https://github.com/huawei-noah/cv-backbones.
Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources. The redundancy in feature maps is an important characteristic of those successful CNNs, but has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that could fully reveal information underlying intrinsic features. The proposed Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed to stack Ghost modules, and then the lightweight GhostNet can be easily established. Experiments conducted on benchmarks demonstrate that the proposed Ghost module is an impressive alternative of convolution layers in baseline models, and our GhostNet can achieve higher recognition performance (e.g. 75.7% top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012 classification dataset. Code is available at https://github.com/huawei-noah/ghostnet.
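A minimal PyTorch sketch of the Ghost-module idea described above: a primary convolution produces a few intrinsic feature maps, and a cheap depthwise convolution derives the remaining "ghost" maps from them. The ratio of 2, the kernel sizes, and the BN/ReLU placement are assumptions rather than the reference GhostNet code.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Produce `out_channels` feature maps while running a full conv for only a fraction of them."""
    def __init__(self, in_channels, out_channels, ratio=2, dw_kernel=3):
        super().__init__()
        intrinsic = out_channels // ratio            # maps from the expensive conv
        ghost = out_channels - intrinsic             # maps from cheap operations
        self.primary = nn.Sequential(
            nn.Conv2d(in_channels, intrinsic, kernel_size=1, bias=False),
            nn.BatchNorm2d(intrinsic), nn.ReLU(inplace=True))
        # Cheap linear transformation: a depthwise conv over the intrinsic maps.
        self.cheap = nn.Sequential(
            nn.Conv2d(intrinsic, ghost, dw_kernel, padding=dw_kernel // 2,
                      groups=intrinsic, bias=False),
            nn.BatchNorm2d(ghost), nn.ReLU(inplace=True))

    def forward(self, x):
        primary = self.primary(x)
        return torch.cat([primary, self.cheap(primary)], dim=1)

y = GhostModule(16, 64)(torch.randn(1, 16, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```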
We present a multigrid-in-channels (MGIC) approach that addresses the quadratic growth of the number of parameters with respect to the number of channels in standard convolutional neural networks (CNNs). Our approach thereby also addresses the redundancy in CNNs that is revealed by the success of lightweight CNNs. Lightweight CNNs can achieve accuracy comparable to standard CNNs with fewer parameters; however, the number of weights still scales quadratically with the network's width. Our MGIC architectures replace each CNN block with an MGIC counterpart that exploits a hierarchy of nested grouped convolutions with small group sizes to address this issue. Hence, our proposed architectures scale linearly with respect to the network's width while retaining the full coupling of the channels, as in standard CNNs. Extensive experiments on image classification, segmentation, and point cloud classification show that applying this strategy to architectures such as ResNet and MobileNetV3 reduces the number of parameters while achieving similar or better accuracy.
Capsule networks (CapsNets) are an emerging trend in image processing. In contrast to convolutional neural networks, CapsNets are not vulnerable to object deformation, since the relative spatial information of objects is preserved throughout the network. However, their complexity, mainly tied to the capsule structure and the dynamic routing mechanism, makes it almost unreasonable to deploy a CapsNet in its original form on devices powered by small microcontroller units (MCUs). In an era in which intelligence is rapidly shifting from the cloud to the edge, this high complexity poses a serious challenge to the adoption of CapsNets at the edge. To address this issue, we present an API for executing quantized CapsNets on ARM Cortex-M and RISC-V MCUs. Our software kernels extend ARM CMSIS-NN and RISC-V PULP-NN to support capsule operations with 8-bit integer operands. Alongside them, we propose a framework to perform post-training quantization of CapsNets. The results show a reduction in memory footprint of almost 75%, with accuracy loss ranging from 0.07% to 0.18%. In terms of throughput, our ARM Cortex-M API enables the execution of mid-sized capsule and capsule layers within only 119.94 and 90.60 milliseconds (ms), respectively (STM32H755ZIT6U, Cortex-M7 @ 480 MHz). For the GAP-8 SoC (RISC-V RV32IMCXpulp @ 170 MHz), the latency drops to 7.02 and 38.03 ms, respectively.
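As an illustration of the post-training quantization step mentioned above, here is a minimal numpy sketch of symmetric per-tensor 8-bit quantization of a single weight tensor. The scheme and rounding choices are assumptions on my part, not the kernels of the CMSIS-NN/PULP-NN extension.

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization: map the real range onto int8 [-127, 127].
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(8, 16).astype(np.float32)   # e.g. one capsule projection matrix
q, s = quantize_int8(w)
print("max abs error:", np.max(np.abs(w - dequantize(q, s))))
print("bytes:", w.nbytes, "->", q.nbytes)        # the 4x shrink behind the ~75% footprint reduction
```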
To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that it is essential to prune neurons in the entire neural network jointly based on a unified goal: minimizing the reconstruction error of important responses in the "final response layer" (FRL), which is the second-to-last layer before classification, for a pruned network to retain its predictive power. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, and formulate network pruning as a binary integer optimization problem and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with the least importance, and then fine-tuned to retain its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.
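The propagation rule at the heart of NISP can be illustrated with a short numpy sketch for fully connected layers: a neuron's importance is the magnitude-weighted sum of the importances of the neurons it feeds. The exact normalization and the handling of convolutional and pooling layers in the paper are not reproduced here.

```python
import numpy as np

def propagate_importance(weights, final_scores):
    """weights: list of matrices of shape (out_l, in_l), first to last layer.
    final_scores: importance of the final response layer's neurons, shape (out_last,)."""
    scores = [None] * (len(weights) + 1)
    scores[-1] = final_scores
    for l in range(len(weights) - 1, -1, -1):
        # Importance of a layer's inputs = |W|^T times importance of its outputs.
        scores[l] = np.abs(weights[l]).T @ scores[l + 1]
    return scores

Ws = [np.random.randn(32, 64), np.random.randn(16, 32)]
scores = propagate_importance(Ws, final_scores=np.random.rand(16))
keep = np.argsort(scores[0])[-32:]   # e.g. keep the 32 most important input neurons
print(sorted(keep.tolist()))
```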
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31× FLOPs reduction and 16.63× compression on VGG-16, with only 0.52% top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1% top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
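A simplified numpy sketch of the selection criterion described above, in a dense-layer analogue: channels are kept greedily so that the next layer's output is reconstructed as well as possible from the surviving channels alone. ThiNet's location sampling and its least-squares weight refitting are omitted, and the function name is mine.

```python
import numpy as np

def thinet_select(acts, W_next, keep):
    """acts: (N, C) activations feeding the next layer; W_next: (C, D) next-layer weights.
    Greedily pick `keep` channels whose use best reconstructs the next layer's output."""
    full = acts @ W_next
    selected, remaining = [], list(range(acts.shape[1]))
    for _ in range(keep):
        errs = []
        for c in remaining:
            idx = selected + [c]
            # Reconstruction error of the next layer using only the chosen channels.
            errs.append(np.sum((full - acts[:, idx] @ W_next[idx, :]) ** 2))
        best = remaining[int(np.argmin(errs))]
        selected.append(best)
        remaining.remove(best)
    return sorted(selected)

acts, W_next = np.random.randn(128, 32), np.random.randn(32, 10)
print(thinet_select(acts, W_next, keep=16))
```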
Model binarization is an effective method for compressing neural networks and accelerating their inference. However, there is still a significant performance gap between 1-bit models and their 32-bit counterparts. Empirical studies show that binarization causes information loss in both forward and backward propagation. We propose a novel Distribution-sensitive Information Retention Network (DIR-Net) that retains information in the forward and backward propagation by improving internal propagation and introducing external representations. DIR-Net mainly relies on three technical contributions: (1) Information Maximized Binarization (IMB): minimizing the information loss and the binarization error of weights/activations simultaneously through weight balancing and standardization; (2) Distribution-sensitive Two-stage Estimator (DTE): retaining the information of gradients with a distribution-sensitive soft approximation that jointly considers updating capability and gradient accuracy; (3) Representative Binarization-aware Distillation (RBD): retaining representation information by distilling the representations between the full-precision and binarized networks. DIR-Net investigates both the forward and backward processes of BNNs from a unified information perspective, providing new insight into the mechanism of network binarization. The three techniques in our DIR-Net are versatile and effective and can be applied in various structures to improve BNNs. Comprehensive experiments on image classification and object detection tasks show that our DIR-Net consistently outperforms state-of-the-art binarization methods under mainstream and compact architectures such as ResNet, VGG, EfficientNet, DARTS, and MobileNet. Moreover, we execute DIR-Net on real-world resource-limited devices, achieving an 11.1x storage saving and a 5.4x speedup.
We introduce a method to train Quantized Neural Networks (QNNs) - neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
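Training with 1-bit weights and activations hinges on a surrogate gradient for the non-differentiable sign function. Below is a minimal PyTorch sketch of the straight-through estimator with hard-tanh style clipping, a standard choice for this; the QNN paper's full training recipe involves more than this single piece.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)                 # forward pass uses 1-bit values

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: pass the gradient where |x| <= 1, zero it elsewhere.
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)

x = torch.randn(4, requires_grad=True)
BinarizeSTE.apply(x).sum().backward()
print(x.grad)   # 1 where |x| <= 1, else 0
```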
While machine learning is traditionally a resource intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensure a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday's applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
Convolution has been the core operation of modern deep neural networks. It is well known that convolutions can be implemented in the Fourier transform domain. In this paper, we propose to use binary block Walsh-Hadamard transforms (WHT) instead of the Fourier transform. We use WHT-based binary layers to replace some of the regular convolution layers in deep neural networks. We utilize both one-dimensional (1-D) and two-dimensional (2-D) binary WHTs in this paper. In both the 1-D and 2-D layers, we compute the binary WHT of the input feature maps and denoise the WHT-domain coefficients using a nonlinearity obtained by combining soft-thresholding with the tanh function. After denoising, we compute the inverse WHT. We use 1D-WHT to replace the $1\times 1$ convolutional layers, and 2D-WHT layers can replace the $3\times 3$ convolution layers and Squeeze-and-Excitation layers. 2D-WHT layers with trainable weights can also be inserted before the Global Average Pooling (GAP) layers to assist the dense layers. In this way, we can significantly reduce the number of trainable parameters. In this paper, we implement the WHT layers in MobileNet-V2, MobileNet-V3-Large, and ResNet to significantly reduce the number of parameters with negligible accuracy loss. Moreover, according to our speed tests, the 2D-FWHT layer runs substantially faster than the regular $3\times 3$ convolution, with about 19.51% less RAM usage in an NVIDIA Jetson Nano experiment.
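To make the pipeline concrete, here is a small numpy sketch of a 1-D fast Walsh-Hadamard transform followed by a denoising nonlinearity and the inverse transform. The particular combination of soft-thresholding and tanh, and the fixed threshold, are my own guesses at the spirit of the layer rather than the authors' definition.

```python
import numpy as np

def fwht(x):
    # Fast Walsh-Hadamard transform; len(x) must be a power of two.
    x = x.copy().astype(np.float64)
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def denoise(c, thresh):
    # Soft-threshold the WHT coefficients, then squash with tanh (illustrative combination).
    shrunk = np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)
    return np.tanh(shrunk)

x = np.random.randn(8)
coeffs = fwht(x)
y = fwht(denoise(coeffs, thresh=0.5)) / len(x)   # inverse WHT = forward WHT divided by N
print(x, y)
```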
Capsule networks (CapsNets) aim to parse images into a hierarchy of components consisting of objects, parts, and their relations. Despite their potential, they are computationally expensive, which constitutes a major drawback that limits the effective use of these networks on more complex datasets. Current CapsNet models only compare their performance against capsule baselines and fall short of deep CNN-based models on complex tasks. This paper proposes an efficient way of learning capsules that detect atomic parts of an input image through a group of sub-capsules, onto which the input vector is projected. Subsequently, we present the Wasserstein Embedding module, which first measures the dissimilarity between the input and the components modeled by the sub-capsules, and then finds their degree of alignment based on a learned optimal transport. This strategy leverages a new insight of defining the agreement between the input and sub-capsules based on the similarity between their respective component distributions. Our proposed model (i) is lightweight, allowing capsules to be applied to more complex vision tasks, and (ii) performs better than or on par with CNN-based models on these challenging tasks. Our experimental results suggest that Wasserstein Embedding Capsules (WECapsules) are more robust to affine transformations, scale effectively to larger datasets, and outperform CNN and CapsNet models on several vision tasks.
The use of convolutional neural networks (CNNs) has significantly improved several image processing tasks, such as image classification and object detection. Like ResNet and EfficientNet, many architectures achieved outstanding results on at least one dataset when they were created. A critical factor in training concerns the regularization of the network, which prevents the structure from overfitting. This work analyzes several regularization methods developed over the past few years, showing significant improvements for different CNN models. The work is divided into three main areas: the first, called "data augmentation", covers all techniques that focus on applying changes to the input data. The second, named "internal changes", describes procedures that modify the feature maps generated by the neural network or the kernels. The last one, called "label", concerns transforming the labels of a given input. This work presents two main differences compared with other available surveys on regularization: (i) the papers collected in the manuscript are no older than five years, and (ii) the second distinction concerns reproducibility, i.e., all works referred to here have their code available in public repositories or have been directly implemented in some framework, such as TensorFlow or Torch.
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software/hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.
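Network slimming is commonly implemented by placing an L1 penalty on the BatchNorm scaling factors during training and then pruning channels whose factors fall below a global threshold. A short PyTorch sketch of the thresholding step is below; the sparsity term during training and the surgery that rebuilds a thinner network are omitted, and the helper name and prune ratio are mine.

```python
import torch
import torch.nn as nn

def channel_masks(model, prune_ratio=0.5):
    # Gather the absolute BatchNorm scaling factors (gamma) across the network ...
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    # ... pick a global threshold, and keep only channels above it.
    threshold = torch.quantile(gammas, prune_ratio)
    return {name: (m.weight.detach().abs() > threshold)
            for name, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)}

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU(),
                      nn.Conv2d(16, 32, 3), nn.BatchNorm2d(32))
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        nn.init.uniform_(m.weight)   # stand-in for gammas learned under the L1 penalty
masks = channel_masks(model)
print({k: int(v.sum()) for k, v in masks.items()})   # surviving channels per BN layer
```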
Deep learning technologies have demonstrated remarkable effectiveness in a wide range of tasks, and deep learning holds the potential to advance a multitude of applications, including in edge computing, where deep models are deployed on edge devices to enable instant data processing and response. A key challenge is that while the application of deep models often incurs substantial memory and computational costs, edge devices typically offer only very limited storage and computational capabilities, which may vary substantially across devices. These characteristics make it difficult to build deep learning solutions that unleash the potential of edge devices while complying with their constraints. A promising approach to addressing this challenge is to automate the design of effective deep learning models that are lightweight, require little storage, and incur low computational overhead. This survey provides comprehensive coverage of techniques for automating the design of deep learning models for edge computing. It offers an overview and comparison of the key metrics commonly used to quantify models in terms of effectiveness, lightness, and computational cost. The survey then covers three categories of state-of-the-art techniques for automated deep model design: automated neural architecture search, automated model compression, and joint automated design and compression. Finally, the survey covers open issues and directions for future research.
In this paper, we propose neural networks that address the stability and field-of-view problems of convolutional neural networks (CNNs). As an alternative to increasing the network's depth or width to improve performance, we propose integral-based, spatially nonlocal operators related to the global weighted Laplacian, the fractional Laplacian, and the inverse fractional Laplacian operators, which arise in several problems in the physical sciences. The forward propagation of such networks is inspired by partial integro-differential equations (PIDEs). We test the effectiveness of the proposed neural architectures on benchmark image classification datasets and a semantic segmentation task in autonomous driving. Moreover, we investigate the extra computational cost of these dense operators and the stability of the forward propagation of the proposed neural networks.
Capsule networks are a class of neural networks that achieve promising results on many computer vision tasks. However, baseline capsule networks have failed to reach state-of-the-art results on more complex datasets due to their high computation and memory requirements. We address this problem by proposing a new network architecture called Momentum Capsule Network (MoCapsNet). MoCapsNets are inspired by Momentum ResNets, a type of network that applies reversible residual building blocks. Reversible networks allow the activations of the forward pass to be recomputed during backpropagation, so memory requirements can be drastically reduced. In this paper, we provide a framework for how reversible residual building blocks can be applied to capsule networks. We show that MoCapsNet beats the accuracy of baseline capsule networks on MNIST, SVHN, CIFAR-10, and CIFAR-100 while using less memory. The source code is available at https://github.com/moejoe95/mocapsnet.
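The memory saving comes from invertibility: intermediate activations need not be stored because they can be recomputed from a block's output during backpropagation. A small numpy sketch of the momentum residual update used by Momentum ResNets, together with its exact inverse, is below; the residual function and the value of gamma are placeholders, not the capsule blocks used in MoCapsNet.

```python
import numpy as np

def f(x):                       # placeholder residual function (a capsule block in MoCapsNet)
    return np.tanh(x)

def momentum_forward(x, v, gamma=0.9):
    v_next = gamma * v + (1.0 - gamma) * f(x)
    x_next = x + v_next
    return x_next, v_next

def momentum_inverse(x_next, v_next, gamma=0.9):
    # Invert the update exactly, so activations can be recomputed instead of stored.
    x = x_next - v_next
    v = (v_next - (1.0 - gamma) * f(x)) / gamma
    return x, v

x0, v0 = np.random.randn(4), np.zeros(4)
x1, v1 = momentum_forward(x0, v0)
xr, vr = momentum_inverse(x1, v1)
print(np.allclose(x0, xr), np.allclose(v0, vr))   # True True
```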
From the moment neural networks came to dominate image processing, the computational complexity required to solve a target task has skyrocketed. Against this unsustainable trend, many strategies have been developed that ambitiously target performance preservation: promoting sparse topologies, for example, allows deploying deep neural network models on embedded, resource-constrained devices. Recently, capsule networks were introduced to enhance the explainability of a model, where each capsule is an explicit representation of an object or of its parts. These models show promising results on toy datasets, but their low scalability prevents deployment on more complex tasks. In this work, we explore sparsity, by reducing the number of capsules, to improve their computational efficiency. We show how pruning a capsule network achieves high generalization with lower memory requirements, less computational effort, and shorter inference and training times.