Recent work has shown that binarized neural networks (BNNs) can greatly reduce computational cost and memory footprint, facilitating model deployment on resource-constrained devices. However, BNNs suffer from severe accuracy degradation compared with their full-precision counterparts. Research aiming to close this accuracy gap has so far mostly focused on specific network architectures with few or no 1x1 convolutional layers, for which standard binarization methods do not work well. Since 1x1 convolutions are common in the design of modern architectures (e.g., GoogLeNet, ResNet, DenseNet), developing a method to binarize them effectively is crucial for BNNs to be more widely adopted. In this work, we propose an "Elastic-Link" (EL) module to enrich the information flow within a BNN by adaptively adding real-valued input features to the output features of subsequent convolutions. The proposed EL module is easy to implement and can be used in combination with other methods for BNNs. We demonstrate that adding EL to BNNs yields a significant improvement on the challenging large-scale ImageNet dataset. For example, we raise the top-1 accuracy of binarized ResNet26 from 57.9% to 64.0%. EL also aids convergence when training binarized MobileNet, for which a top-1 accuracy of 56.4% is achieved. Finally, integrated with ReActNet, it yields a new state-of-the-art result of 71.9% top-1 accuracy.
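As a rough illustration of the idea, the following PyTorch sketch shows one plausible form of an EL-style connection. The gating and alignment details are assumptions (the abstract only states that real-valued input features are adaptively added to the convolution output), so this is not the authors' reference design.

```python
import torch
import torch.nn as nn

class ElasticLink(nn.Module):
    """Hypothetical sketch of an EL-style connection: the real-valued input of a
    binary convolution is scaled by a learned channel-wise gate and added to the
    convolution's output. Only the overall idea (adaptively re-injecting
    real-valued input features) comes from the abstract; the gate design is assumed."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Parameter(torch.zeros(1, channels, 1, 1))  # learned per-channel gate

    def forward(self, x_real: torch.Tensor, conv_out: torch.Tensor) -> torch.Tensor:
        # assumes x_real and conv_out share the same shape (no stride/width change)
        return conv_out + torch.sigmoid(self.gate) * x_real
```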
Deploying convolutional neural networks (CNNs) on mobile devices is difficult due to the limited memory and computation resources. We aim to design efficient neural networks for heterogeneous devices, including CPUs and GPUs, by exploiting the redundancy in feature maps, which has rarely been investigated in neural architecture design. For CPU-like devices, we propose a novel CPU-efficient Ghost (C-Ghost) module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that could fully reveal information underlying the intrinsic features. The proposed C-Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. C-Ghost bottlenecks are designed to stack C-Ghost modules, and then the lightweight C-GhostNet can be easily established. We further consider efficient networks for GPU devices. Without involving too many GPU-inefficient operations (e.g., depth-wise convolution) in a building stage, we propose to utilize stage-wise feature redundancy to formulate a GPU-efficient Ghost (G-Ghost) stage structure. The features in a stage are split into two parts, where the first part is processed by the original blocks with fewer output channels to generate the intrinsic features, and the other part is generated by cheap operations that exploit stage-wise redundancy. Experiments conducted on benchmarks demonstrate the effectiveness of the proposed C-Ghost module and G-Ghost stage. C-GhostNet and G-GhostNet can achieve the optimal trade-off between accuracy and latency for CPUs and GPUs, respectively. Code is available at https://github.com/huawei-noah/cv-backbones.
Convolutional neural networks are built upon the convolution operation, which extracts informative features by fusing spatial and channel-wise information together within local receptive fields. In order to boost the representational power of a network, several recent approaches have shown the benefit of enhancing spatial encoding. In this work, we focus on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We demonstrate that by stacking these blocks together, we can construct SENet architectures that generalise extremely well across challenging datasets. Crucially, we find that SE blocks produce significant performance improvements for existing state-of-the-art deep architectures at minimal additional computational cost. SENets formed the foundation of our ILSVRC 2017 classification submission which won first place and significantly reduced the top-5 error to 2.251%, achieving a ∼25% relative improvement over the winning entry of 2016.
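A minimal PyTorch sketch of the SE block described above (global-average-pool squeeze, two-layer bottleneck excitation with sigmoid gating, channel-wise rescaling). The reduction ratio of 16 and the layer names are illustrative choices, not the reference implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: global average pooling ("squeeze") followed by
    a two-layer bottleneck MLP with sigmoid gating ("excitation") that rescales
    each channel of the input feature map."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))                                 # squeeze: (N, C)
        e = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))   # excitation: (N, C)
        return x * e.view(n, c, 1, 1)                          # channel-wise recalibration
```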
While machine learning is traditionally a resource intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensure a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources. The redundancy in feature maps is an important characteristic of those successful CNNs, but has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that could fully reveal information underlying intrinsic features. The proposed Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed to stack Ghost modules, and then the lightweight GhostNet can be easily established. Experiments conducted on benchmarks demonstrate that the proposed Ghost module is an impressive alternative of convolution layers in baseline models, and our GhostNet can achieve higher recognition performance (e.g. 75.7% top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012 classification dataset. Code is available at https://github.com/huawei-noah/ghostnet.
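A hedged PyTorch sketch of the Ghost module idea: a primary convolution produces a small set of intrinsic maps, cheap depth-wise convolutions produce the remaining "ghost" maps, and the two are concatenated. The 1x1 primary kernel, the ratio of 2, and the layer layout are illustrative assumptions rather than the released GhostNet code.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost module sketch: primary conv -> intrinsic maps, cheap depth-wise conv ->
    ghost maps, concatenated along the channel dimension.
    Assumes out_ch is divisible by ratio so the group convolution is valid."""
    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2, dw_size: int = 3):
        super().__init__()
        init_ch = out_ch // ratio          # intrinsic feature maps
        new_ch = out_ch - init_ch          # ghost maps generated by cheap operations
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, new_ch, dw_size, padding=dw_size // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(new_ch), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)
```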
A binary neural network (BNN) is an extremely quantized version of a convolutional neural network (CNN), with all features and weights mapped to just 1 bit. Although BNNs save a large amount of memory and computation, making CNNs applicable to edge or mobile devices, they suffer from degraded network performance due to the reduced representation capability after binarization. In this paper, we propose a new replaceable and easy-to-use convolution module, RepConv, which enhances feature maps by replicating the input or output along the channel dimension by $\beta$ times without extra parameters or convolutional computation cost. We also define a set of RepTran rules to apply RepConv throughout BNN modules such as binary convolution, fully connected layers, and batch normalization. Experiments show that after the RepTran transformation, a set of highly cited BNNs achieve universally better performance than their original BNN versions. For example, the top-1 accuracy of Rep-ReCU-ResNet-20, i.e., a RepBconv-enhanced ReCU-ResNet-20, reaches 88.97% on CIFAR-10, 1.47% higher than the original network. Rep-AdamBNN-ReActNet-A achieves 71.342% top-1 accuracy on ImageNet, a state-of-the-art result for BNNs. Code and models are available at: https://github.com/imfinethanks/rep_adambnn.
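The channel-replication step at the core of RepConv can be stated in a few lines; the sketch below shows only the repeat operation. Whether it is applied to the input or the output of a binary convolution, and how downstream layers consume the widened tensor, follows the paper's RepTran rules and is not reproduced here.

```python
import torch

def replicate_channels(x: torch.Tensor, beta: int = 2) -> torch.Tensor:
    """Repeat a feature map beta times along the channel dimension, turning an
    (N, C, H, W) tensor into (N, beta*C, H, W) at no parameter or convolution cost."""
    return x.repeat(1, beta, 1, 1)

x = torch.randn(1, 16, 8, 8)
y = replicate_channels(x, beta=2)   # shape: (1, 32, 8, 8)
```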
We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32× memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58× faster convolutional operations (in terms of number of the high precision operations) and 32× memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy. Our code is available at: http://allenai.org/plato/xnornet.
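A short PyTorch sketch of the Binary-Weight-Network approximation described above: each real-valued filter W is approximated by alpha * sign(W), where alpha is the mean absolute value of that filter.

```python
import torch

def binarize_weights(w: torch.Tensor):
    """Binary-Weight-Network style approximation: W ≈ alpha * B, with B = sign(W)
    and alpha the mean absolute value of each filter (dim 0 of a conv weight tensor)."""
    alpha = w.abs().mean(dim=(1, 2, 3), keepdim=True)   # per-filter scaling factor
    b = torch.sign(w)                                   # binary weights
    return alpha * b, alpha, b

# usage sketch: replace the real-valued conv weight with alpha * sign(W)
w = torch.randn(64, 32, 3, 3)                # (out_ch, in_ch, kH, kW)
w_approx, alpha, b = binarize_weights(w)
```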
This paper studies the problem of designing compact binary architectures for vision multi-layer perceptrons (MLPs). We provide extensive analysis on the difficulty of binarizing vision MLPs and find that previous binarization methods perform poorly due to limited capacity of binary MLPs. In contrast with traditional CNNs that utilize convolutional operations with large kernel sizes, the fully-connected (FC) layers in MLPs can be treated as convolutional layers with kernel size $1\times1$. Thus, the representation ability of the FC layers is limited when they are binarized, which places restrictions on the capability of spatial mixing and channel mixing on the intermediate features. To this end, we propose to improve the performance of the binary MLP (BiMLP) model by enriching the representation ability of binary FC layers. We design a novel binary block that contains multiple branches to merge a series of outputs from the same stage, and also a universal shortcut connection that encourages the information flow from the previous stage. The downsampling layers are also carefully designed to reduce the computational complexity while maintaining the classification performance. Experimental results on benchmark dataset ImageNet-1k demonstrate the effectiveness of the proposed BiMLP models, which achieve state-of-the-art accuracy compared to prior binary CNNs. The MindSpore code is available at \url{https://gitee.com/mindspore/models/tree/master/research/cv/BiMLP}.
We introduce a method to train Quantized Neural Networks (QNNs) - neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
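A minimal PyTorch sketch of the deterministic binarization with a straight-through estimator used to train such networks: the forward pass applies sign(x), and the backward pass lets gradients through only where |x| <= 1 (the clipped-identity STE). This is a generic sketch, not the authors' exact training code.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Deterministic binarization with the straight-through estimator: forward uses
    sign(x); backward passes the gradient through unchanged for |x| <= 1 and
    cancels it elsewhere (the "hard tanh" STE)."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

x = torch.randn(4, requires_grad=True)
y = BinarizeSTE.apply(x)
y.sum().backward()    # gradients flow only where |x| <= 1
```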
We present a multigrid-in-channels (MGIC) approach that addresses the quadratic growth of the number of parameters with respect to the number of channels in standard convolutional neural networks (CNNs). Our approach thereby addresses the redundancy in CNNs that is also revealed by the success of lightweight CNNs. Lightweight CNNs can achieve accuracy comparable to that of standard CNNs with far fewer parameters; however, the number of weights still scales quadratically with the CNN's width. Our MGIC architectures replace each CNN block with an MGIC counterpart that exploits a hierarchy of nested grouped convolutions with small group size to address this issue. Hence, our proposed architectures scale linearly with respect to the network's width while retaining the full coupling of channels, as in standard CNNs. Our extensive experiments on image classification, segmentation, and point-cloud classification show that applying this strategy to different architectures such as ResNet and MobileNetV3 reduces the number of parameters while obtaining similar or better accuracy.
Top-1 ImageNet optimization promotes enormous networks that may be impractical in inference settings. Binary neural networks (BNNs) have the potential to significantly lower the compute intensity, but existing models suffer from low quality. To overcome this deficiency, we propose PokeConv, a binary convolution block that improves the quality of BNNs through techniques such as adding multiple residual paths and tuning the activation function. We apply it to ResNet-50 and optimize ResNet's initial convolutional layer, which is hard to binarize. We name the resulting network family PokeBNN. These techniques are chosen to yield favorable improvements in both top-1 accuracy and network cost. To enable joint optimization of cost together with accuracy, we define arithmetic computation effort (ACE), a hardware- and energy-inspired cost metric for quantized and binarized networks. We also identify a need to optimize an under-explored hyper-parameter controlling the binarization gradient approximation. We establish a new, strong state-of-the-art (SOTA) on top-1 accuracy together with the commonly used CPU64 cost, ACE cost, and network size metrics. ReActNet-Adam, the previous SOTA among BNNs, achieves 70.5% top-1 accuracy at 7.9 ACE. A small PokeBNN reaches 70.5% top-1 at more than 3x reduced cost; a larger PokeBNN reaches 75.6% top-1 at 7.8 ACE, an accuracy improvement of more than 5% without increasing the cost. The PokeBNN implementation in JAX/Flax and the reproduction instructions are open sourced.
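A hedged sketch of how an ACE-style cost could be computed, assuming each multiply-accumulate is charged the product of its operand bit-widths and the per-layer totals are summed; the paper's exact accounting may include further terms.

```python
def ace_cost(macs_by_precision):
    """Hedged sketch of an ACE-style metric: each MAC is assumed to cost
    (activation bit-width x weight bit-width), summed over all layers."""
    return sum(n_macs * a_bits * w_bits
               for n_macs, a_bits, w_bits in macs_by_precision)

# e.g., a network with 1e9 binary MACs (1-bit x 1-bit) and 1e8 8-bit x 8-bit MACs
layers = [(1.0e9, 1, 1), (1.0e8, 8, 8)]
print(ace_cost(layers))   # 1.0e9 * 1 + 1.0e8 * 64
```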
This paper studies binary neural networks (BNNs), in which both weights and activations are binarized to 1-bit values, greatly reducing memory usage and computational complexity. Since modern deep neural networks have sophisticated designs with complex architectures for high accuracy, the diversity of weight and activation distributions is very high. Therefore, the conventional sign function cannot effectively binarize the full-precision values in BNNs. To this end, we propose a simple yet effective approach called AdaBin to adaptively obtain the optimal binary sets $\{b_1, b_2\}$ ($b_1, b_2 \in \mathbb{R}$) of weights and activations for each layer instead of a fixed set (i.e., $\{-1, +1\}$). In this way, the proposed method can better fit different distributions and increase the representation ability of binarized features. In practice, we use the center position and distance of 1-bit values to define a new binary quantization function. For the weights, we propose an equalization method to align the symmetric center of the binary distribution to the real-valued distribution and minimize their Kullback-Leibler divergence. Meanwhile, we introduce a gradient-based optimization method to obtain these two parameters for activations, which are jointly trained in an end-to-end manner. Experimental results on benchmark models and datasets demonstrate that the proposed AdaBin achieves state-of-the-art performance. For instance, we obtain a 66.4% top-1 accuracy on ImageNet using the ResNet-18 architecture and a 69.4 mAP on PASCAL VOC using SSD300.
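A hedged sketch of a center/distance binary quantizer in the spirit of AdaBin: values are mapped onto the two-element set {b1, b2} = {beta - alpha, beta + alpha} according to which side of the center beta they fall on. How beta and alpha are actually obtained (equalization for weights, gradient-based learning for activations) is not reproduced here; the statistics below are purely illustrative.

```python
import torch

def adabin_quantize(x: torch.Tensor, beta: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
    """Map values to {beta - alpha, beta + alpha} by which side of the center beta
    they fall on; beta (center) and alpha (distance) would be learned or derived."""
    return beta + alpha * torch.sign(x - beta)

x = torch.randn(2, 8, 4, 4)
beta = x.mean()                       # illustrative center
alpha = (x - beta).abs().mean()       # illustrative distance
xb = adabin_quantize(x, beta, alpha)  # values in {beta - alpha, beta + alpha}
```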
Existing binary neural networks (BNNs) mainly operate on local convolutions with binarized features. However, such simple bit operations lack the ability of modeling contextual dependencies, which is critical for learning discriminative deep representations in vision models. In this work, we tackle this issue by presenting new designs of binary neural modules that enable BNNs to learn effective contextual dependencies. First, we propose a binary multi-layer perceptron (MLP) block as an alternative to binary convolution blocks to directly model contextual dependencies. Both short-range and long-range feature dependencies are modeled by binary MLPs, where the former provides local inductive bias and the latter breaks the limited receptive field of binary convolutions. Second, to improve the robustness of binary models with contextual dependencies, we compute contextual dynamic embeddings to determine the binarization thresholds in general binary convolutional blocks. Armed with our binary MLP blocks and the improved binary convolution, we build BNNs with explicit contextual dependency modeling, termed BCDNet. On the standard ImageNet-1K classification benchmark, BCDNet achieves 72.3% top-1 accuracy and outperforms leading binary methods by a large margin. In particular, the proposed BCDNet exceeds the state-of-the-art ReActNet-A by 2.9% top-1 accuracy with similar operations. Our code is available at https://github.com/sense-gvt/bcdn
In standard Convolutional Neural Networks (CNNs), the receptive fields of artificial neurons in each layer are designed to share the same size. It is well-known in the neuroscience community that the receptive field sizes of visual cortical neurons are modulated by the stimulus, which has been rarely considered in constructing CNNs. We propose a dynamic selection mechanism in CNNs that allows each neuron to adaptively adjust its receptive field size based on multiple scales of input information. A building block called Selective Kernel (SK) unit is designed, in which multiple branches with different kernel sizes are fused using softmax attention that is guided by the information in these branches. Different attentions on these branches yield different sizes of the effective receptive fields of neurons in the fusion layer. Multiple SK units are stacked to a deep network termed Selective Kernel Networks (SKNets). On the ImageNet and CIFAR benchmarks, we empirically show that SKNet outperforms the existing state-of-the-art architectures with lower model complexity. Detailed analyses show that the neurons in SKNet can capture target objects with different scales, which verifies the capability of neurons for adaptively adjusting their receptive field sizes according to the input. The code and models are available at https://github.com/implus/SKNet.
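A simplified PyTorch sketch of a two-branch Selective Kernel unit (the paper's branches additionally use grouped convolutions, BN, and ReLU): branch outputs are fused by summation, squeezed by global pooling and a compact FC layer, and a per-branch softmax selects how much each kernel size contributes.

```python
import torch
import torch.nn as nn

class SKUnit(nn.Module):
    """Two-branch Selective Kernel sketch: a 3x3 branch and a dilated 3x3 branch
    (5x5 receptive field). Softmax attention over the branch dimension selects the
    effective receptive field per channel."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2, bias=False)
        d = max(channels // reduction, 8)
        self.fc = nn.Linear(channels, d)
        self.fcs = nn.ModuleList([nn.Linear(d, channels) for _ in range(2)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.stack([self.branch3(x), self.branch5(x)], dim=1)  # (N, 2, C, H, W)
        u = feats.sum(dim=1)                          # fuse
        s = u.mean(dim=(2, 3))                        # global pooling: (N, C)
        z = torch.relu(self.fc(s))                    # compact descriptor
        attn = torch.stack([fc(z) for fc in self.fcs], dim=1)           # (N, 2, C)
        attn = torch.softmax(attn, dim=1).unsqueeze(-1).unsqueeze(-1)   # (N, 2, C, 1, 1)
        return (feats * attn).sum(dim=1)              # select
```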
Model binarization is an effective method for compressing neural networks and accelerating their inference. However, a significant performance gap still exists between 1-bit models and their 32-bit counterparts. Empirical studies show that binarization causes information loss in both forward and backward propagation. We propose a novel Distribution-sensitive Information Retention Network (DIR-Net) that retains information in the forward and backward propagation by improving internal propagation and introducing external representations. DIR-Net mainly relies on three technical contributions: (1) Information Maximized Binarization (IMB): minimizing the information loss and the binarization error of weights/activations simultaneously through weight balance and standardization; (2) Distribution-sensitive Two-stage Estimator (DTE): retaining the information of gradients through a distribution-sensitive soft approximation that jointly considers the updating capability and accurate gradients; (3) Representation Binarization-aware Distillation (RBD): retaining the representation information by distilling the representations between the full-precision and binarized networks. DIR-Net investigates both the forward and backward processes of BNNs from a unified information perspective, providing new insight into the mechanism of network binarization. The three techniques in our DIR-Net are versatile and effective and can be applied in various structures to improve BNNs. Comprehensive experiments on image classification and object detection tasks show that our DIR-Net consistently outperforms state-of-the-art binarization approaches under mainstream and compact architectures such as ResNet, VGG, EfficientNet, DARTS, and MobileNet. Additionally, we deploy DIR-Net on real-world resource-limited devices, achieving 11.1x storage saving and 5.4x speedup.
Existing multi-scale solutions run the risk of simply increasing the receptive field size while neglecting small receptive fields. Thus, it is a challenging problem to effectively construct adaptive neural networks for recognizing objects at various spatial scales. To tackle this issue, we first introduce a new attention dimension, i.e., depth, in addition to the existing attention dimensions such as channel, spatial, and branch, and present a novel selective depth attention network to symmetrically handle multi-scale objects in various vision tasks. Specifically, within each stage of a given neural network, i.e., ResNet, the blocks output hierarchical feature maps that share the same resolution but have different receptive field sizes. Based on this structural property, we design a stage-wise building module, namely SDA, which includes a trunk branch and an SE-like attention branch. The block outputs of the trunk branch are fused to guide their depth attention allocation through the attention branch. Under the proposed attention mechanism, we can dynamically select different depth features, which contributes to adaptively adjusting the receptive field size for variable-sized input objects. In this way, cross-block information interaction leads to long-range dependencies along the depth direction. Compared with other multi-scale approaches, our SDA method combines multiple receptive fields from previous blocks into the stage output, thus offering a wider and richer effective receptive field. Moreover, our method can serve as a pluggable module for other multi-scale networks as well as attention networks, coined as SDA-$x$Net. Their combination further extends the range of the effective receptive field and enables interpretable neural networks. Our source code is available at \url{https://github.com/qingbeiguo/sda-xnet.git}.
Recently, low-precision deep learning accelerators (DLAs) have become popular due to their advantages in chip area and energy consumption, yet low-precision quantized models on these DLAs suffer severe accuracy degradation. One way to achieve both high accuracy and efficient inference is to deploy high-precision neural networks on low-precision DLAs, which has rarely been studied. In this paper, we propose the parallel low-precision quantization (PalQuant) method, which approximates high-precision computations by learning parallel low-precision representations from scratch. In addition, we present a novel cyclic shuffle module to boost cross-group information communication between parallel low-precision groups. Extensive experiments demonstrate that PalQuant outperforms state-of-the-art quantization methods in both accuracy and inference speed; for instance, for ResNet-18 network quantization, PalQuant obtains 0.52% higher accuracy and 1.78$\times$ speedup simultaneously over the 4-bit counterpart on a state-of-the-art 2-bit accelerator. Code is available at \url{https://github.com/huqinghao/palquant}.
We propose Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks. Given an intermediate feature map, our module sequentially infers attention maps along two separate dimensions, channel and spatial, then the attention maps are multiplied to the input feature map for adaptive feature refinement. Because CBAM is a lightweight and general module, it can be integrated into any CNN architectures seamlessly with negligible overheads and is end-to-end trainable along with base CNNs. We validate our CBAM through extensive experiments on ImageNet-1K, MS COCO detection, and VOC 2007 detection datasets. Our experiments show consistent improvements in classification and detection performances with various models, demonstrating the wide applicability of CBAM. The code and models will be publicly available.
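A compact PyTorch sketch of CBAM as described: channel attention from average- and max-pooled descriptors passed through a shared MLP, followed by spatial attention from a 7x7 convolution over the channel-wise average and max maps. Layer names and the reduction ratio are illustrative.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """CBAM sketch: sequential channel attention (shared MLP over avg/max-pooled
    descriptors) and spatial attention (7x7 conv over channel-wise avg/max maps)."""
    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                     # avg-pooled descriptor
        mx = self.mlp(x.amax(dim=(2, 3)))                      # max-pooled descriptor
        x = x * torch.sigmoid(avg + mx).view(n, c, 1, 1)       # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)    # (N, 2, H, W)
        return x * torch.sigmoid(self.spatial(s))              # spatial attention
```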
While residual connections enable the training of very deep neural networks, they are unfriendly for online inference due to their multi-branch topology. This encourages many researchers to design DNNs without residual connections at inference time. For example, RepVGG re-parameterizes a multi-branch topology into a VGG-like (single-branch) model at deployment time and shows great performance when the network is relatively shallow. However, RepVGG cannot transform a ResNet into a VGG equivalently, because the re-parameterization method can only be applied to linear blocks and the non-linear layers (ReLU) have to be placed outside the residual connection, which results in limited representation ability, especially for deeper networks. In this paper, we aim to remedy this problem and propose to remove the residual connections in a vanilla ResNet equivalently by a reserving-and-merging (RM) operation on ResBlocks. Specifically, the RM operation allows the input feature maps to pass through the block while reserving their information, and merges all the information at the end of each block, which removes the residual connection without changing the original output. As a plug-in method, the RM operation has three main advantages: 1) its implementation naturally enables high-ratio network pruning; 2) it helps break the depth limitation of RepVGG; 3) it leads to a better accuracy-speed trade-off network (RMNet) compared with ResNet and RepVGG. We believe the ideology of the RM operation can inspire many insights on model design for the community in the future. Code is available at: https://github.com/fxmeng/rmnet.
We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8%) than recent MobileNet [12] on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves ∼13× actual speedup over AlexNet while maintaining comparable accuracy.
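The channel shuffle operation at the heart of ShuffleNet is a pure reshape-transpose-reshape, sketched below in PyTorch.

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """ShuffleNet channel shuffle: reshape the channel dimension into
    (groups, channels_per_group), transpose, and flatten back so that information
    can flow across the groups of a pointwise group convolution."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

x = torch.randn(1, 12, 8, 8)
y = channel_shuffle(x, groups=3)   # channels interleaved across the 3 groups
```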