Binary neural networks use the sign function to binarize real values, and its non-differentiable nature inevitably introduces large gradient errors during back-propagation. Although many hand-designed soft functions have been proposed to approximate the gradient, their mechanism is still unclear, and a large performance gap remains between binary models and their full-precision counterparts. To address this, we propose to treat network binarization as a binary classification problem and use a multi-layer perceptron (MLP) as the classifier. The MLP-based classifier can theoretically fit any continuous function and can be learned adaptively to binarize the network and back-propagate gradients without any specific soft function. From this viewpoint, we further show that even a simple linear function can outperform previous complex soft functions. Extensive experiments demonstrate that the proposed method yields surprisingly good performance on image classification and human pose estimation tasks. Specifically, we achieve 65.7% top-1 accuracy with ResNet-34 on the ImageNet dataset, an absolute improvement of 2.8%. When evaluated on the challenging Microsoft COCO keypoint dataset, the proposed method enables a binary network to reach 60.6 mAP for the first time, comparable to some full-precision methods.
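To make the gradient-path discussion above concrete, the sketch below implements a sign binarizer whose backward pass is a simple clipped-linear surrogate, the kind of plain linear function the abstract argues can already work well. It is an illustrative stand-in written with standard PyTorch autograd, not the paper's learned MLP classifier.

```python
import torch

class SignWithLinearBackward(torch.autograd.Function):
    """Forward: binarize with sign. Backward: clipped-linear surrogate gradient
    (illustrative stand-in; the paper's MLP-based gradient is not reproduced here)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass the incoming gradient through unchanged inside [-1, 1], zero it outside.
        return grad_output * (x.abs() <= 1).float()

x = torch.randn(8, requires_grad=True)
SignWithLinearBackward.apply(x).sum().backward()
print(x.grad)  # 1 where |x| <= 1, 0 elsewhere
```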
Model binarization is an effective method for compressing neural networks and accelerating their inference. However, a significant performance gap remains between 1-bit and 32-bit models. Empirical studies show that binarization causes information loss in both forward and backward propagation. We propose a novel Distribution-sensitive Information Retention Network (DIR-Net), which retains information in the forward and backward passes by improving internal propagation and introducing external representations. DIR-Net rests on three technical contributions: (1) Information Maximized Binarization (IMB): minimizing the information loss and the binarization error of weights/activations simultaneously through weight balancing and standardization; (2) Distribution-sensitive Two-stage Estimator (DTE): retaining gradient information with a distribution-sensitive soft approximation that jointly considers the updating capability and accurate gradients; (3) Representation Binarization-aware Distillation (RBD): retaining representation information by distilling the representations between the full-precision and binarized networks. DIR-Net investigates both the forward and backward processes of BNNs from a unified information perspective, providing new insight into the mechanism of network binarization. The three techniques in our DIR-Net are versatile and effective and can be applied in various structures to improve BNNs. Comprehensive experiments on image classification and object detection tasks show that our DIR-Net consistently outperforms state-of-the-art binarization methods under both mainstream and compact architectures (e.g., ResNet, VGG, EfficientNet, DARTS, and MobileNet). In addition, we deploy our DIR-Net on a real-world resource-limited device, achieving an 11.1× storage saving and a 5.4× speedup.
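The intuition behind IMB can be sketched with a toy example: zero-centering (balancing) and standardizing the weights before taking the sign pushes the binarized values toward an even split of -1/+1, which maximizes their information entropy. The snippet below illustrates only that intuition, under assumed names; it is not DIR-Net's exact formulation.

```python
import numpy as np

def balanced_standardized_sign(w, eps=1e-8):
    """Zero-center and standardize weights before binarization (illustrative only)."""
    w = w - w.mean()         # balancing: roughly half the values fall on each side of 0
    w = w / (w.std() + eps)  # standardization: stabilizes the scale across layers
    return np.sign(w)

w = np.random.randn(1024) * 0.05 + 0.02   # weights with a non-zero mean
b = balanced_standardized_sign(w)
print("fraction of +1:", (b > 0).mean())  # close to 0.5, i.e., near-maximal entropy
```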
This paper studies binary neural networks (BNNs), in which both weights and activations are binarized to 1-bit values, greatly reducing the memory usage and computational complexity. Since modern deep neural networks are of sophisticated design with complex architectures for high accuracy, the diversity of weight and activation distributions is very high. Therefore, the conventional sign function cannot be well used to effectively binarize the full-precision values in BNNs. To this end, we present a simple yet effective approach called AdaBin to adaptively obtain the optimal binary sets $\{b_1, b_2\}$ ($b_1, b_2 \in \mathbb{R}$) of weights and activations, instead of a fixed set (i.e., $\{-1, +1\}$). In this way, the proposed method can better fit different distributions and increase the representation ability of binarized features. In practice, we use the center position and the distance of the 1-bit values to define a new binary quantization function. For the weights, we propose an equalization method to align the symmetric center of the binary distribution with that of the real-valued distribution and minimize their Kullback-Leibler divergence. Meanwhile, we introduce a gradient-based optimization method to obtain these two parameters for activations, which are jointly trained in an end-to-end manner. Experimental results on benchmark models and datasets demonstrate that the proposed AdaBin achieves state-of-the-art performance. For instance, we obtain 66.4% top-1 accuracy on ImageNet with the ResNet-18 architecture and 69.4 mAP on PASCAL VOC with SSD300.
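One way to read the center/distance parameterization above: with a center $\beta$ and a distance $\alpha$, the two binary values are $b_1=\beta-\alpha$ and $b_2=\beta+\alpha$, and each real value is mapped to the nearer of the two. The sketch below encodes that reading with assumed parameter values; AdaBin's KL-based equalization and end-to-end training of the parameters are omitted.

```python
import numpy as np

def adaptive_binarize(x, beta, alpha):
    """Map x onto the adaptive binary set {beta - alpha, beta + alpha} (illustrative)."""
    return np.where(x < beta, beta - alpha, beta + alpha)

x = np.random.randn(6)
# beta/alpha would normally be derived from the distribution or learned end to end;
# the values here are just examples.
print(adaptive_binarize(x, beta=0.1, alpha=0.8))
```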
Binary neural networks (BNNs) show great promise for real-world embedded devices. As one of the critical steps to achieve a powerful BNN, scale factor calculation plays an essential role in reducing the performance gap to their real-valued counterparts. However, existing BNNs neglect the intrinsic bilinear relationship between the real-valued weights and the scale factors, resulting in sub-optimal models caused by an insufficient training process. To address this issue, Recurrent Bilinear Optimization for BNNs (RBONN) is proposed to improve the learning process by associating the intrinsic bilinear variables in the back-propagation process. Our work is the first attempt to optimize BNNs from the bilinear perspective. Specifically, we employ a recurrent optimization and Density-ReLU to sequentially backtrack the sparse real-valued weight filters, which will be sufficiently trained and reach their performance limit based on a controllable learning process. We obtain a robust RBONN, which shows impressive performance over state-of-the-art BNNs on various models and datasets. In particular, RBONN has superior generalization performance on the object detection task. Our code is open-sourced at https://github.com/stevetsui/rbonn.
Binary neural networks (BNNs) binarize the original full-precision weights and activations to 1-bit with the sign function. Since the gradient of the conventional sign function is almost zero everywhere and cannot be used for back-propagation, several attempts have been made to alleviate the optimization difficulty by using approximate gradients. However, those approximations corrupt the main direction of the factual gradients. To this end, we propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs, namely frequency domain approximation (FDA). The proposed approach does not affect the low-frequency information of the original sign function, which occupies most of the overall energy, and the high-frequency coefficients are ignored to avoid huge computational overhead. In addition, we embed a noise adaptation module into the training phase to compensate for the approximation error. Experiments on several benchmark datasets and neural architectures illustrate that the binary network learned with our method achieves state-of-the-art accuracy. Code will be available at \url{https://gitee.com/mindspore/models/tree/master/research/cv/fda-bnn}.
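For reference, the sign function (viewed as a square wave) expands as $\operatorname{sign}(x)\approx\frac{4}{\pi}\sum_{k=0}^{n}\frac{\sin\big((2k+1)\omega x\big)}{2k+1}$ within one period, and the derivative of the truncated series, a sum of cosines, can serve as a surrogate gradient. The sketch below shows such a truncation; the period $\omega$ and the number of terms are illustrative choices, and the paper's noise adaptation module is not included.

```python
import numpy as np

def sign_fourier(x, n_terms=5, omega=1.0):
    """Truncated Fourier (square-wave) series approximating sign(x) for |x| < pi/omega."""
    c = 2 * np.arange(n_terms) + 1                      # odd harmonics 1, 3, 5, ...
    return (4 / np.pi) * np.sum(np.sin(c[:, None] * omega * x) / c[:, None], axis=0)

def sign_fourier_grad(x, n_terms=5, omega=1.0):
    """Derivative of the truncated series: a sum of cosines used as a surrogate gradient."""
    c = 2 * np.arange(n_terms) + 1
    return (4 * omega / np.pi) * np.sum(np.cos(c[:, None] * omega * x), axis=0)

x = np.linspace(-2.0, 2.0, 5)
print(sign_fourier(x))       # roughly -1 for negative x, +1 for positive x
print(sign_fourier_grad(x))  # non-zero almost everywhere, unlike the true sign gradient
```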
A binary neural network (BNN) is the extremely quantized version of a convolutional neural network (CNN), with all features and weights mapped to just 1 bit. Although BNNs save substantial memory and computation, making CNNs applicable to edge or mobile devices, they suffer from degraded network performance due to the reduced representation ability after binarization. In this paper, we propose a new replaceable and easy-to-use convolution module, RepConv, which enhances feature maps by replicating the input or output along the channel dimension by $\beta$ times without extra parameters or convolution computation cost. We also define a set of RepTran rules to apply RepConv throughout BNN modules, such as binary convolution, fully-connected layers, and batch normalization. Experiments show that after the RepTran transformation, a set of highly cited BNNs achieve universally better performance than their original versions. For example, the top-1 accuracy of Rep-ReCU-ResNet-20, i.e., a RepBconv-enhanced ReCU-ResNet-20, reaches 88.97% on CIFAR-10, 1.47% higher than the original network. Rep-AdamBNN-ReActNet-A achieves 71.342% top-1 accuracy on ImageNet, a state-of-the-art result for BNNs. Code and models are available at: https://github.com/imfinethanks/rep_adambnn.
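The replication itself is a one-line tensor operation; a minimal sketch of the channel-wise repeat (with $\beta$ treated here simply as an assumed hyper-parameter) looks like this:

```python
import torch

def rep_channels(x, beta=2):
    """Replicate a feature map beta times along the channel dimension:
    (N, C, H, W) -> (N, beta * C, H, W), adding no parameters."""
    return x.repeat(1, beta, 1, 1)

x = torch.randn(1, 16, 8, 8)
print(rep_channels(x, beta=2).shape)  # torch.Size([1, 32, 8, 8])
```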
A key enabler for deploying convolutional neural networks on resource-constrained embedded systems is the binary neural network (BNN). BNNs save memory and simplify computation by binarizing both features and weights. Unfortunately, binarization is inevitably accompanied by a severe drop in accuracy. To reduce the accuracy gap between binary and full-precision networks, many repair methods have been proposed recently, which we classify and put into a single overview in this chapter. The repair methods are divided into two main branches, training techniques and network topology changes, which can further be split into smaller categories. The latter category introduces additional cost (energy consumption or extra chip area) for an embedded system, while the former does not. From our overview, we observe that progress has been made in reducing the accuracy gap, but BNN papers do not agree on which repair methods should be used to obtain highly accurate BNNs. Therefore, this chapter contains an empirical review that evaluates the benefits of many repair methods on the ResNet-20 & CIFAR-10 and ResNet-18 & CIFAR-100 benchmarks. We find three repair categories to be most beneficial: feature binarizers, feature normalization, and double residual. Based on this review, we discuss future directions and research opportunities. We sketch the benefits and costs associated with BNNs on embedded systems, since it remains to be seen whether BNNs can close the accuracy gap while remaining highly energy-efficient on resource-constrained embedded systems.
Deploying convolutional neural networks (CNNs) on mobile devices is difficult due to the limited memory and computation resources. We aim to design efficient neural networks for heterogeneous devices, including CPUs and GPUs, by exploiting the redundancy in feature maps, which has rarely been investigated in neural architecture design. For CPU-like devices, we propose a novel CPU-efficient Ghost (C-Ghost) module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that can fully reveal the information underlying the intrinsic features. The proposed C-Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. C-Ghost bottlenecks are designed to stack C-Ghost modules, and then the lightweight C-GhostNet can be easily established. We further consider efficient networks for GPU devices. Without involving too many GPU-inefficient operations (e.g., depth-wise convolution) in the building stage, we propose to utilize stage-wise feature redundancy to formulate a GPU-efficient Ghost (G-Ghost) stage structure. The features in a stage are split into two parts, where the first part is processed by the original block with fewer output channels to generate the intrinsic features, and the other part is generated by cheap operations that exploit the stage-wise redundancy. Experiments conducted on benchmarks demonstrate the effectiveness of the proposed C-Ghost module and the G-Ghost stage. C-GhostNet and G-GhostNet can achieve the optimal trade-off between accuracy and latency for CPUs and GPUs, respectively. Code is available at https://github.com/huawei-noah/cv-backbones.
Shift neural networks reduce computation complexity by removing expensive multiplication operations and quantizing continuous weights into low-bit discrete values; compared with conventional neural networks, they are fast and energy-efficient. However, existing shift networks are sensitive to the weight initialization and also yield degraded performance caused by the vanishing-gradient and weight-sign-freezing problems. To address these issues, we propose a low-bit re-parameterization, a novel technique for training low-bit shift networks. Our method decomposes a discrete parameter in a sign-sparse-shift 3-fold manner. In this way, it efficiently learns a low-bit network whose weight dynamics are similar to those of a full-precision network and insensitive to the weight initialization. The proposed training method pushes the boundaries of shift neural networks and yields 3-bit shift networks that are competitive with their full-precision counterparts in terms of top-1 accuracy on ImageNet.
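The 3-fold decomposition can be pictured as the product of a sign factor in $\{-1,+1\}$, a sparsity factor in $\{0,1\}$, and a power-of-two magnitude. The sketch below reconstructs a discrete shift weight from such factors; the factor names and the encoding of the shift exponent are assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

def compose_shift_weight(sign_bit, keep_bit, shift_exp):
    """Reconstruct a discrete shift weight as sign * sparse * 2^(-shift) (illustrative)."""
    return sign_bit * keep_bit * np.power(2.0, -shift_exp)

sign_bit = np.array([+1, -1, +1, -1])   # in {-1, +1}
keep_bit = np.array([1, 1, 0, 1])       # in {0, 1}; a zero prunes the weight
shift_exp = np.array([0, 1, 2, 3])      # non-negative integer shift amounts
print(compose_shift_weight(sign_bit, keep_bit, shift_exp))  # [ 1.    -0.5    0.    -0.125]
```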
Although considerable progress has been obtained in neural network quantization for efficient inference, existing methods are not scalable to heterogeneous devices as one dedicated model needs to be trained, transmitted, and stored for one specific hardware setting, incurring considerable costs in model training and maintenance. In this paper, we study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one. With this representation, we can theoretically achieve any precision network for on-demand service while only needing to train and maintain one model. To this end, we propose a simple once quantization-aware training (QAT) scheme for obtaining high-performance vertical-layered models. Our design incorporates a cascade downsampling mechanism which allows us to obtain multiple quantized networks from one full precision source model by progressively mapping the higher precision weights to their adjacent lower precision counterparts. Then, with networks of different bit-widths from one source model, multi-objective optimization is employed to train the shared source model weights such that they can be updated simultaneously, considering the performance of all networks. By doing this, the shared weights will be optimized to balance the performance of different quantized models, thus making the weights transferable among different bit widths. Experiments show that the proposed vertical-layered representation and developed once QAT scheme are effective in embodying multiple quantized networks into a single one and allow one-time training, and it delivers comparable performance as that of quantized models tailored to any specific bit-width. Code will be available.
Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources. The redundancy in feature maps is an important characteristic of those successful CNNs, but has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that could fully reveal information underlying intrinsic features. The proposed Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed to stack Ghost modules, and then the lightweight GhostNet can be easily established. Experiments conducted on benchmarks demonstrate that the proposed Ghost module is an impressive alternative of convolution layers in baseline models, and our GhostNet can achieve higher recognition performance (e.g. 75.7% top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012 classification dataset. Code is available at https://github.com/huawei-noah/ghostnet.
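A minimal PyTorch sketch of the Ghost idea: a primary convolution produces a few intrinsic feature maps, a cheap depth-wise convolution generates the "ghost" maps from them, and the two sets are concatenated. The kernel sizes and ratio below are illustrative defaults, not the exact GhostNet configuration.

```python
import torch
import torch.nn as nn

class GhostModuleSketch(nn.Module):
    """Produce out_ch feature maps: part from a normal conv (intrinsic maps),
    the rest from a cheap depth-wise conv applied to the intrinsic maps."""

    def __init__(self, in_ch, out_ch, ratio=2):
        super().__init__()
        intrinsic = out_ch // ratio
        self.primary = nn.Conv2d(in_ch, intrinsic, kernel_size=1, bias=False)
        self.cheap = nn.Conv2d(intrinsic, out_ch - intrinsic, kernel_size=3,
                               padding=1, groups=intrinsic, bias=False)

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)            # cheap linear transformation
        return torch.cat([intrinsic, ghost], dim=1)

y = GhostModuleSketch(16, 32)(torch.randn(1, 16, 8, 8))
print(y.shape)  # torch.Size([1, 32, 8, 8])
```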
We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32× memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58× faster convolutional operations (in terms of number of the high precision operations) and 32× memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy. Our code is available at: http://allenai.org/plato/xnornet.
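The Binary-Weight-Network approximation described above can be written as $W \approx \alpha B$ with $B=\operatorname{sign}(W)$ and $\alpha$ the mean absolute value of the weights; a tiny sketch of that factorization:

```python
import numpy as np

def binary_weight_approx(w):
    """Approximate a real-valued filter w by alpha * sign(w), where
    alpha = mean(|w|) minimizes the L2 reconstruction error for B = sign(w)."""
    alpha = np.abs(w).mean()
    b = np.sign(w)
    return alpha, b

w = np.random.randn(3, 3)
alpha, b = binary_weight_approx(w)
print("reconstruction error:", np.linalg.norm(w - alpha * b))
```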
We introduce a method to train Quantized Neural Networks (QNNs), neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
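The bit-wise replacement of arithmetic mentioned above rests on the identity that, for two ±1 vectors packed as bits (1 encoding +1, 0 encoding -1), the dot product equals $n - 2\,\mathrm{popcount}(a \oplus b)$. A small pure-Python sketch of that identity (not the paper's GPU kernel):

```python
def pack_bits(vec):
    """Pack a +/-1 vector into an integer bit-mask (1 encodes +1, 0 encodes -1)."""
    bits = 0
    for i, v in enumerate(vec):
        if v > 0:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, b_bits, n):
    """Dot product of two packed +/-1 vectors via XOR + popcount."""
    return n - 2 * bin(a_bits ^ b_bits).count("1")

a = [+1, -1, +1, +1, -1]
b = [+1, +1, -1, +1, -1]
print(binary_dot(pack_bits(a), pack_bits(b), len(a)))  # 1
print(sum(x * y for x, y in zip(a, b)))                # 1, same result
```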
Relying on the premise that the performance of a binary neural network can be largely restored by reducing the quantization error between the full-precision weight vectors and their corresponding binary vectors, existing works on network binarization frequently adopt the idea of model robustness to reach that goal. However, robustness remains an ill-defined concept without solid theoretical support. In this work, we introduce Lipschitz continuity, a well-defined functional property, as the rigorous criterion for defining the model robustness of BNNs. We then propose to retain Lipschitz continuity as a regularization term to improve model robustness. In particular, while popular Lipschitz-involved regularization methods often collapse in BNNs due to their extreme sparsity, we design retention matrices to approximate the spectral norms of the targeted weight matrices, which can be deployed as an approximation of the Lipschitz constant of BNNs without the exact Lipschitz constant computation (NP-hard). Our experiments demonstrate that our BNN-specific regularization method can effectively enhance the robustness of BNNs (witnessed on ImageNet-C), achieving state-of-the-art performance on CIFAR and ImageNet.
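Spectral-norm regularization of the kind discussed above is commonly built on power iteration, which estimates a weight matrix's largest singular value from a few matrix-vector products. The sketch below shows only that generic estimator; the paper's retention matrices and their BNN-specific construction are not reproduced.

```python
import numpy as np

def spectral_norm_power_iteration(w, n_iter=20, eps=1e-12):
    """Estimate the largest singular value (spectral norm) of matrix w by power iteration."""
    u = np.random.randn(w.shape[0])
    for _ in range(n_iter):
        v = w.T @ u
        v /= np.linalg.norm(v) + eps
        u = w @ v
        u /= np.linalg.norm(u) + eps
    return u @ w @ v  # Rayleigh-quotient-style estimate of sigma_max

w = np.random.randn(64, 32)
print(spectral_norm_power_iteration(w), np.linalg.svd(w, compute_uv=False)[0])
```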
This paper studies the problem of designing compact binary architectures for vision multi-layer perceptrons (MLPs). We provide extensive analysis on the difficulty of binarizing vision MLPs and find that previous binarization methods perform poorly due to the limited capacity of binary MLPs. In contrast with traditional CNNs that utilize convolutional operations with large kernel size, fully-connected (FC) layers in MLPs can be treated as convolutional layers with kernel size $1\times1$. Thus, the representation ability of the FC layers will be limited when being binarized, which places restrictions on the capability of spatial mixing and channel mixing on the intermediate features. To this end, we propose to improve the performance of binary MLP (BiMLP) model by enriching the representation ability of binary FC layers. We design a novel binary block that contains multiple branches to merge a series of outputs from the same stage, and also a universal shortcut connection that encourages the information flow from the previous stage. The downsampling layers are also carefully designed to reduce the computational complexity while maintaining the classification performance. Experimental results on benchmark dataset ImageNet-1k demonstrate the effectiveness of the proposed BiMLP models, which achieve state-of-the-art accuracy compared to prior binary CNNs. The MindSpore code is available at \url{https://gitee.com/mindspore/models/tree/master/research/cv/BiMLP}.
Model quantization enables the deployment of deep neural networks under resource-constrained devices. Vector quantization aims at reducing the model size by indexing model weights with full-precision embeddings, i.e., codewords, while the index needs to be restored to 32-bit during computation. Binary and other low-precision quantization methods can reduce the model size up to 32$\times$, however, at the cost of a considerable accuracy drop. In this paper, we propose an efficient framework for ternary quantization to produce smaller and more accurate compressed models. By integrating hyperspherical learning, pruning and reinitialization, our proposed Hyperspherical Quantization (HQ) method reduces the cosine distance between the full-precision and ternary weights, thus reducing the bias of the straight-through gradient estimator during ternary quantization. Compared with existing work at similar compression levels ($\sim$30$\times$, $\sim$40$\times$), our method significantly improves the test accuracy and reduces the model size.
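For intuition, the sketch below pairs a plain threshold-based ternary quantizer (the $\Delta = 0.7\cdot\mathrm{mean}(|w|)$ rule from ternary weight networks is used as a stand-in, not HQ's hyperspherical procedure) with the cosine distance between full-precision and ternary weights that the method aims to reduce.

```python
import numpy as np

def ternarize(w):
    """Threshold-based ternary quantization to {-alpha, 0, +alpha} (illustrative stand-in)."""
    delta = 0.7 * np.abs(w).mean()          # stand-in threshold rule
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask

def cosine_distance(a, b):
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

w = np.random.randn(512)
w_t = ternarize(w)
print("cosine distance:", cosine_distance(w, w_t))
```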
Data-free quantization is a task that compresses neural networks to low bit-widths without access to the original training data. Most existing data-free quantization methods cause severe performance degradation due to inaccurate activation clipping ranges and quantization errors, especially for low bit-widths. In this paper, we present a simple yet effective data-free quantization method with accurate activation clipping and adaptive batch normalization. Accurate activation clipping (AAC) improves model accuracy by exploiting the accurate activation information from the full-precision model. Adaptive batch normalization is proposed to address the quantization error arising from distribution changes by adaptively updating the batch normalization layers. Extensive experiments demonstrate that the proposed data-free quantization method yields surprisingly good performance, achieving 64.33% top-1 accuracy for ResNet-18 on the ImageNet dataset, an absolute improvement of 3.7% over the existing state-of-the-art methods.
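One common way to set an activation clipping range without data, which conveys the flavor of the approach above, is to derive it from the full-precision model's batch-normalization statistics, e.g. $[\mu - k\sigma,\ \mu + k\sigma]$, and then quantize uniformly within it. The sketch below is that generic recipe with an assumed $k$; it is not the paper's exact AAC procedure.

```python
import numpy as np

def clip_range_from_bn(running_mean, running_var, k=3.0):
    """Derive a per-tensor activation clipping range from stored BN statistics."""
    sigma = np.sqrt(running_var)
    return float((running_mean - k * sigma).min()), float((running_mean + k * sigma).max())

def uniform_quantize(x, lo, hi, bits=8):
    """Uniformly quantize x to the given number of bits inside [lo, hi]."""
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    q = np.clip(np.round((x - lo) / scale), 0, levels)
    return q * scale + lo

mean, var = np.random.randn(16) * 0.1, np.abs(np.random.randn(16)) + 0.5
lo, hi = clip_range_from_bn(mean, var)
print(uniform_quantize(np.random.randn(4, 16), lo, hi, bits=4))
```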
Adder Neural Network (AdderNet) provides a new way for developing energy-efficient neural networks by replacing the expensive multiplications in convolution with cheaper additions (i.e., l1-norm). To achieve higher hardware efficiency, it is necessary to further study the low-bit quantization of AdderNet. Due to the limitation that the commutative law in multiplication does not hold in l1-norm, the well-established quantization methods on convolutional networks cannot be applied on AdderNets. Thus, the existing AdderNet quantization techniques propose to use only one shared scale to quantize both the weights and activations simultaneously. Admittedly, such an approach can keep the commutative law in the l1-norm quantization process, while the accuracy drop after low-bit quantization cannot be ignored. To this end, we first thoroughly analyze the difference on distributions of weights and activations in AdderNet and then propose a new quantization algorithm by redistributing the weights and the activations. Specifically, the pre-trained full-precision weights in different kernels are clustered into different groups, then the intra-group sharing and inter-group independent scales can be adopted. To further compensate the accuracy drop caused by the distribution difference, we then develop a lossless range clamp scheme for weights and a simple yet effective outliers clamp strategy for activations. Thus, the functionality of full-precision weights and the representation ability of full-precision activations can be fully preserved. The effectiveness of the proposed quantization method for AdderNet is well verified on several benchmarks, e.g., our 4-bit post-training quantized adder ResNet-18 achieves a 66.5% top-1 accuracy on the ImageNet with comparable energy efficiency, which is about 8.5% higher than that of the previous AdderNet quantization methods.
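The redistribution idea can be pictured with a toy example: kernels are grouped by magnitude so each group shares a scale, and activations are clamped at a percentile before quantization to limit the influence of outliers. The grouping criterion and percentile below are illustrative assumptions, not the paper's exact clustering or clamp schemes.

```python
import numpy as np

def group_scales(kernels, n_groups=2):
    """Assign each kernel to a group by its max |w| and share one scale per group."""
    mags = np.array([np.abs(k).max() for k in kernels])
    order = np.argsort(mags)
    scales = np.zeros(len(kernels))
    for g in np.array_split(order, n_groups):
        scales[g] = mags[g].max()          # shared scale within the group
    return scales

def clamp_outliers(x, pct=99.0):
    """Clamp activation outliers at a percentile before quantization."""
    t = np.percentile(np.abs(x), pct)
    return np.clip(x, -t, t)

kernels = [np.random.randn(3, 3) * s for s in (0.1, 0.12, 1.0, 1.1)]
print(group_scales(kernels))
print(clamp_outliers(np.random.randn(1000) * 2).max())
```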
Recently, low-precision deep learning accelerators (DLAs) have become popular due to their advantages in chip area and energy consumption, yet low-precision quantized models on these DLAs suffer severe accuracy degradation. One way to achieve both high accuracy and efficient inference is to deploy high-precision neural networks on low-precision DLAs, which is rarely studied. In this paper, we propose the parallel low-precision quantization (PalQuant) method, which approximates high-precision computations by learning parallel low-precision representations from scratch. In addition, we present a novel cyclic shuffle module to boost cross-group information communication between parallel low-precision groups. Extensive experiments demonstrate that PalQuant is superior to state-of-the-art quantization methods in both accuracy and inference speed; for instance, for ResNet-18 network quantization, PalQuant obtains 0.52% higher accuracy and a 1.78$\times$ speedup simultaneously over its 4-bit counterpart on a state-of-the-art 2-bit accelerator. Code is available at \url{https://github.com/huqinghao/palquant}.
Binary neural networks are the extreme case of network quantization, which has long been thought of as a potential edge machine learning solution. However, the significant accuracy gap to the full-precision counterparts restricts their creative potential for mobile applications. In this work, we revisit the potential of binary neural networks and focus on a compelling but unanswered problem: how can a binary neural network achieve the crucial accuracy level (e.g., 80%) on ILSVRC-2012 ImageNet? We achieve this goal by enhancing the optimization process from three complementary perspectives: (1) We design a novel binary architecture BNext based on a comprehensive study of binary architectures and their optimization process. (2) We propose a novel knowledge-distillation technique to alleviate the counter-intuitive overfitting problem observed when attempting to train extremely accurate binary models. (3) We analyze the data augmentation pipeline for binary networks and modernize it with up-to-date techniques from full-precision models. The evaluation results on ImageNet show that BNext, for the first time, pushes the binary model accuracy boundary to 80.57% and significantly outperforms all the existing binary networks. Code and trained models are available at: https://github.com/hpi-xnor/BNext.git.