The high memory consumption and computational costs of recurrent neural network language models (RNNLMs) limit their wider application on resource-constrained devices. In recent years, neural network quantization techniques that can produce extremely low-bit compression, for example binarized RNNLMs, are gaining increasing research interest. Directly training quantized neural networks is difficult. By formulating quantized RNNLM training as an optimization problem, a novel method to train quantized RNNLMs from scratch using the alternating direction method of multipliers (ADMM) is presented. This method can also flexibly adjust the trade-off between compression rate and model performance using tied low-bit quantization tables. Experiments on two tasks, Penn Treebank (PTB) and Switchboard (SWBD), suggest the proposed ADMM quantization achieves model size compression factors of up to 31 over the full-precision baseline RNNLMs. Up to 5 times faster convergence in model training is also obtained over the baseline binarized RNNLM quantization. Index Terms: language models, recurrent neural networks, quantization, alternating direction method of multipliers.
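The abstract does not give the update equations; a minimal sketch of ADMM-style alternating updates for training a network constrained to a low-bit quantization table might look as follows (the uniform table, hyperparameters and function names are illustrative assumptions, not the paper's exact formulation):

```python
import torch
from itertools import cycle

def project_to_table(w, table):
    # Nearest-entry projection onto a (tied) low-bit quantization table,
    # e.g. table = torch.tensor([-0.5, 0.0, 0.5]) for ternary weights.
    dist = (w.unsqueeze(-1) - table).abs()
    return table[dist.argmin(dim=-1)]

def admm_quantized_training(model, loss_fn, data_loader, table, rho=1e-3,
                            outer_steps=20, inner_steps=100, lr=1e-2):
    # Auxiliary quantized copies G and scaled dual variables U, one per parameter.
    G = [project_to_table(p.detach(), table) for p in model.parameters()]
    U = [torch.zeros_like(p) for p in model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    batches = cycle(data_loader)
    for _ in range(outer_steps):
        # W-step: minimize task loss + augmented-Lagrangian penalty by SGD.
        for _ in range(inner_steps):
            x, y = next(batches)
            loss = loss_fn(model(x), y)
            for p, g, u in zip(model.parameters(), G, U):
                loss = loss + (rho / 2) * (p - g + u).pow(2).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
        # G-step: project the shifted weights onto the quantization table.
        # U-step: dual ascent on the residual.
        with torch.no_grad():
            for p, g, u in zip(model.parameters(), G, U):
                g.copy_(project_to_table(p + u, table))
                u.add_(p - g)
    return G  # the quantized weights that would actually be deployed
```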
State-of-the-art neural network language models (NNLMs), represented by long short-term memory recurrent networks (LSTM-RNNs) and Transformers, are becoming highly complex. They are prone to overfitting and poor generalization when limited training data is available. To this end, this paper proposes an overall full Bayesian learning framework containing three methods to account for the underlying uncertainty in LSTM-RNN and Transformer LMs. The uncertainty over their model parameters, choices of neural activations and hidden output representations is modelled using Bayesian, Gaussian process and variational LSTM-RNN or Transformer LMs, respectively. Efficient inference approaches are used to automatically select the optimal network-internal components to which Bayesian learning is applied, using neural architecture search. A minimal number of Monte Carlo parameter samples is also used. These allow the computational costs incurred in Bayesian NNLM training and evaluation to be minimized. Experiments are conducted on two tasks: AMI meeting transcription and Oxford-BBC Lip Reading Sentences 2 (LRS2) overlapped speech recognition, using state-of-the-art LF-MMI trained efficient TDNN systems with data augmentation, speaker adaptation, and multi-channel audio beamforming for overlapped speech. Consistent performance improvements over the baseline LSTM-RNN and Transformer LMs with point estimates of model parameters and dropout regularization were obtained on both tasks in terms of perplexity and word error rate (WER). In particular, on the LRS2 data, absolute WER reductions of up to 1.3% and 1.2% (12.1% and 11.3% relative) were obtained over the baseline LSTM-RNN and Transformer LMs, respectively, after model combination between the Bayesian NNLMs and their respective baselines.
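As a rough illustration of the Monte Carlo evaluation mentioned above, the Bayesian predictive distribution can be approximated by averaging a small number of stochastic forward passes; the sketch below is generic (names and the use of dropout as the stochastic component are assumptions), not the paper's Bayesian, Gaussian-process or variational parameterizations:

```python
import torch

def mc_predictive(lm, x, n_samples=3):
    # Approximate the Bayesian predictive distribution by averaging a small
    # number of stochastic forward passes (sampled weights or active dropout),
    # matching the point that only a minimal number of Monte Carlo parameter
    # samples is needed to keep evaluation costs low.
    lm.train()  # keep the stochastic components switched on at evaluation time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(lm(x), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(dim=0)  # averaged next-word distribution
```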
Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD is redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during this compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. In these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270× to 600× without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep Gradient Compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile. The code is available at: https://github.com/synxlin/deep-gradient-compression.
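The four accuracy-preserving techniques are named but not formalized above; a simplified sketch of top-k sparsification with local gradient accumulation, momentum correction and momentum factor masking is given below (an illustrative reconstruction rather than the code in the linked repository; warm-up training and local gradient clipping are omitted):

```python
import torch

class GradCompressor:
    def __init__(self, compress_ratio=0.001, momentum=0.9):
        self.compress_ratio = compress_ratio
        self.momentum = momentum
        self.velocity = {}   # momentum correction: momentum is accumulated locally
        self.residual = {}   # unsent gradient mass stays accumulated locally

    def compress(self, name, grad):
        v = self.velocity.setdefault(name, torch.zeros_like(grad))
        r = self.residual.setdefault(name, torch.zeros_like(grad))
        v.mul_(self.momentum).add_(grad)   # local momentum
        r.add_(v)                          # local gradient accumulation
        # Exchange only the largest-magnitude entries (e.g. top 0.1%).
        k = max(1, int(grad.numel() * self.compress_ratio))
        idx = torch.topk(r.abs().flatten(), k).indices
        values = r.flatten()[idx].clone()
        # Momentum factor masking: clear the sent coordinates in both buffers so
        # stale momentum does not push delayed updates in the wrong direction.
        r.view(-1)[idx] = 0.0
        v.view(-1)[idx] = 0.0
        return values, idx, grad.shape     # sparse message to all-reduce

    @staticmethod
    def decompress(values, idx, shape):
        out = torch.zeros(shape, dtype=values.dtype).flatten()
        out[idx] = values
        return out.view(shape)
```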
Recurrent neural networks (RNN) are the backbone of many text and speech applications. These architectures are typically made up of several computationally complex components such as non-linear activation functions, normalization, bi-directional dependence and attention. In order to maintain good accuracy, these components are frequently run using full-precision floating-point computation, making them slow, inefficient and difficult to deploy on edge devices. In addition, the complex nature of these operations makes them challenging to quantize using standard quantization methods without a significant performance drop. We present a quantization-aware training method for obtaining a highly accurate integer-only recurrent neural network (iRNN). Our approach supports layer normalization, attention, and an adaptive piecewise linear (PWL) approximation of activation functions, to serve a wide range of state-of-the-art RNNs. The proposed method enables RNN-based language models to run on edge devices with $2\times$ improvement in runtime, and $4\times$ reduction in model size while maintaining similar accuracy as its full-precision counterpart.
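To illustrate the kind of piecewise-linear activation replacement involved, a fixed-knot PWL approximation of the sigmoid is sketched below; the paper's adaptive PWL learns the segments, and in an integer-only deployment the slopes and intercepts would themselves be quantized (the knot positions here are arbitrary assumptions):

```python
import torch

def piecewise_linear_sigmoid(x, knots=(-4.0, -2.0, 0.0, 2.0, 4.0)):
    # Approximate sigmoid by linear interpolation between a few knots, so the
    # activation becomes a compare-and-multiply-add instead of an exponential.
    knots = torch.tensor(knots)
    vals = torch.sigmoid(knots)
    x_c = torch.clamp(x, knots[0], knots[-1])          # saturate outside the range
    seg = torch.clamp(torch.bucketize(x_c, knots) - 1, 0, len(knots) - 2)
    x0, x1 = knots[seg], knots[seg + 1]
    y0, y1 = vals[seg], vals[seg + 1]
    return y0 + (y1 - y0) * (x_c - x0) / (x1 - x0)

y = piecewise_linear_sigmoid(torch.linspace(-6.0, 6.0, steps=25))
```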
Compressing deep learning models is of fundamental importance for deploying them on edge devices. Incorporating hardware models and application constraints during compression maximizes the benefits, but makes the compression tailored to one specific case; hence, compression needs to be automated. Searching for the best compression-method parameters is treated as an optimization problem. This paper introduces a Multi-Objective Hardware-Aware Quantization (MOHAQ) method that considers both hardware efficiency and inference error as objectives for mixed-precision quantization. The method makes evaluating candidate solutions in a large search space feasible by relying on two steps. First, post-training quantization is applied for fast solution evaluation. Second, we propose a search technique called "beacon-based search" that retrains only selected solutions in the search space and uses them as beacons to estimate the effect of retraining on other solutions. To evaluate the optimization potential, we chose a speech recognition model using the TIMIT dataset. The model is based on Simple Recurrent Units (SRU) due to their considerable speedup over other recurrent units. We applied our method on two platforms: SiLago and Bitfusion. Experimental evaluation shows that the SRU model can be compressed by up to 8x through post-training quantization without any significant increase in error, with only a 1.5 percentage point increase. On SiLago, the inference-only search found solutions that achieve 80% and 64% of the maximum possible speedup and energy saving, respectively, with a 0.5 percentage point increase in error. On Bitfusion, under a small-SRAM-size constraint, beacon-based search reduced the error gain of the inference-only search by 4 percentage points and increased the achievable speedup to 47x compared to the Bitfusion baseline.
We introduce a method to train Quantized Neural Networks (QNNs), neural networks with extremely low precision (e.g., 1-bit) weights and activations at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6 bits as well, which enables gradient computation using only bit-wise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4 bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
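A common way to realize the 1-bit weights and activations described above is a sign function with a straight-through gradient estimator; the sketch below is a generic illustration of that pattern, not the released QNN code:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    # Forward uses sign(x); backward passes the gradient straight through,
    # clipped to |x| <= 1 as is commonly done for binarized networks.
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)

def binary_linear(x, weight_fp):
    # Full-precision latent weights are kept for the update; the binarized copy
    # is what the layer actually multiplies with.
    wb = BinarizeSTE.apply(weight_fp)
    xb = BinarizeSTE.apply(x)          # 1-bit activations as well
    return xb @ wb.t()                 # in deployment this becomes XNOR + popcount
```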
Mixed-precision quantization has been widely applied on deep neural networks (DNNs) as it leads to significantly better efficiency-accuracy tradeoffs compared to uniform quantization. Meanwhile, determining the exact precision of each layer remains challenging. Previous attempts on bit-level regularization and pruning-based dynamic precision adjustment during training suffer from noisy gradients and unstable convergence. In this work, we propose Continuous Sparsification Quantization (CSQ), a bit-level training method to search for mixed-precision quantization schemes with improved stability. CSQ stabilizes the bit-level mixed-precision training process with a bi-level gradual continuous sparsification on both the bit values of the quantized weights and the bit selection in determining the quantization precision of each layer. The continuous sparsification scheme enables fully-differentiable training without gradient approximation while achieving an exact quantized model in the end. A budget-aware regularization of total model size enables the dynamic growth and pruning of each layer's precision towards a mixed-precision quantization scheme of the desired size. Extensive experiments show CSQ achieves better efficiency-accuracy tradeoff than previous methods on multiple models and datasets.
We report on aggressive quantization strategies that greatly accelerate inference of Recurrent Neural Network Transducers (RNN-T). We use a 4-bit integer representation for both weights and activations and apply quantization-aware training (QAT) to retrain the full model (acoustic encoder and language model) and achieve near-iso-accuracy. We show that customized quantization schemes tailored to the local properties of the network are essential for achieving good performance while limiting the computational overhead of QAT. Density-ratio language model fusion has been shown to improve accuracy on RNN-T workloads, but it severely increases the computational cost of inference. We show that our quantization strategies enable hypothesis search with large beam widths while achieving streaming-compatible runtimes and a 7.6$\times$ compression ratio of the full model compared to the full-precision model. Through hardware simulation, we estimate a 3.4$\times$ acceleration of the end-to-end quantized RNN-T (including LM fusion) from FP16 to INT4, resulting in a Real-Time Factor (RTF) of 0.06. On the NIST Hub5 2000, Hub5 2001 and RT-03 test sets, we retain most of the gains associated with LM fusion, improving the average WER by more than 1.5%.
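A bare-bones example of symmetric 4-bit fake quantization as used in quantization-aware training is shown below; the customized, locally tailored schemes and calibration procedures referred to in the abstract are not reproduced, and the max-based scale is an illustrative choice:

```python
import torch

def fake_quant_int4(x, scale):
    # Symmetric signed 4-bit quantize-dequantize; the straight-through trick
    # x + (q - x).detach() keeps gradients flowing to x despite the rounding.
    q = torch.clamp(torch.round(x / scale), -8, 7) * scale
    return x + (q - x).detach()

w = torch.randn(512, 512, requires_grad=True)
scale = w.detach().abs().max() / 7      # simple per-tensor max calibration
w_q = fake_quant_int4(w, scale)         # used in place of w during QAT
```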
In this paper, we present a novel neural network weight compression method. In our method, we store the weight tensors as sparse, quantized matrix factors whose product is computed on the fly during inference to generate the target model's weights. We use projected gradient descent methods to find the quantized and sparse factorization of the weight tensors. We show that this approach can be seen as a unification of weight SVD, vector quantization and sparse PCA. Combined with end-to-end fine-tuning, our method exceeds or is on par with previous state-of-the-art methods in terms of the trade-off between accuracy and model size. Our method is applicable in moderate compression regimes, unlike vector quantization, as well as in extreme compression regimes.
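The abstract does not spell out the factor structure; the toy sketch below alternates gradient steps with projections onto a quantized grid for one factor and a hard sparsity constraint for the other, purely to illustrate the projected-gradient-descent idea (rank, projections and names are assumptions):

```python
import torch

def project_quantize(a, n_levels=16):
    # Projection onto a uniform grid with n_levels levels (toy quantizer).
    scale = a.abs().max() / (n_levels // 2) + 1e-12
    return torch.clamp(torch.round(a / scale), -(n_levels // 2), n_levels // 2 - 1) * scale

def project_sparse(b, keep_ratio=0.1):
    # Keep only the largest-magnitude entries (hard-thresholding projection).
    k = max(1, int(b.numel() * keep_ratio))
    thresh = b.abs().flatten().topk(k).values.min()
    return b * (b.abs() >= thresh)

def factorize(W, rank=64, steps=500, lr=1e-2):
    # Projected gradient descent for W ~ A @ B with A quantized and B sparse.
    m, n = W.shape
    A = torch.randn(m, rank) * 0.1
    B = torch.randn(rank, n) * 0.1
    for _ in range(steps):
        A.requires_grad_(True)
        B.requires_grad_(True)
        loss = (A @ B - W).pow(2).mean()
        gA, gB = torch.autograd.grad(loss, (A, B))
        with torch.no_grad():
            A = project_quantize(A - lr * gA)   # gradient step, then projection
            B = project_sparse(B - lr * gB)
    return A, B
```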
In this paper, a new methodology is proposed that allows for the low-complexity development of neural network (NN) based equalizers for mitigating impairments in high-speed coherent optical transmission systems. In this work, we provide a comprehensive description and comparison of various deep model compression approaches that have been applied to feed-forward and recurrent NN designs. Additionally, we evaluate the influence of these strategies on the performance of each NN equalizer. Quantization, weight clustering, pruning and other cutting-edge strategies for model compression are considered. In this work, we propose and evaluate a Bayesian-optimization-assisted compression, in which the compression hyperparameters are chosen to simultaneously reduce complexity and improve performance. In conclusion, the trade-off between the complexity and performance of each compression approach is evaluated using both simulated and experimental data to complete the analysis. By utilizing the optimal compression approach, we show that it is possible to design an NN-based equalizer with better performance than the conventional digital back-propagation (DBP) equalizer with only one step per span. This is achieved by reducing the number of multipliers used in the NN equalizer after applying the weight clustering and pruning algorithms. Furthermore, we demonstrate that an NN-based equalizer can also achieve superior performance while still maintaining the same degree of complexity as the full electronic chromatic dispersion compensation block. We conclude our analysis by highlighting open questions and existing challenges, as well as future research directions.
Representing deep neural networks (DNNs) in low precision is a promising approach to enable efficient acceleration and memory savings. Previous methods for training DNNs in low precision usually keep a copy of the weights in high precision during the weight updates. Directly training with low-precision weights leads to accuracy degradation due to complex interactions between the low-precision number system and the learning algorithm. To address this issue, we develop a co-designed low-precision training framework, termed LNS-Madam, in which we jointly design a logarithmic number system (LNS) and a multiplicative weight update algorithm (Madam). We prove that LNS-Madam results in low quantization error during weight updates, leading to stable convergence even when the precision is limited. We further propose a hardware design of LNS-Madam that resolves practical challenges in implementing an efficient datapath for LNS computations. Our implementation effectively reduces the energy overhead incurred by LNS-to-integer conversion and partial-sum accumulation. Experimental results show that LNS-Madam achieves accuracy comparable to its full-precision counterparts with only 8 bits on popular computer vision and natural language tasks. Compared with a full-precision floating-point implementation, LNS-Madam reduces energy consumption by over 90%.
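A rough sketch of a multiplicative (Madam-style) weight update is shown below; such an update becomes a simple additive shift of the exponent when weights are stored in a logarithmic number system. The per-tensor gradient normalization and the omission of the LNS datapath are simplifying assumptions, not the paper's exact algorithm:

```python
import torch

def madam_style_update(w, grad, lr=0.01):
    # Multiplicative update: each weight is scaled by a factor close to 1, so in
    # a logarithmic number system only the (low-precision) exponent shifts.
    g_norm = grad / (grad.pow(2).mean().sqrt() + 1e-12)   # RMS-normalized gradient
    return w * torch.exp(-lr * torch.sign(w) * g_norm)
```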
Pre-trained language models such as BERT have shown remarkable effectiveness in various natural language processing tasks. However, these models usually contain millions of parameters, which prevents their practical deployment on resource-constrained devices. Knowledge distillation, weight pruning and quantization are known to be the main directions in model compression. However, the compact models obtained via knowledge distillation may suffer from a significant accuracy drop even for relatively small compression ratios. On the other hand, only a few quantization attempts are specifically devoted to natural language processing tasks. They suffer from small compression ratios or large error rates, since manual setting of hyperparameters is required and fine-grained subgroup-wise quantization is not supported. In this paper, we propose an automatic mixed-precision quantization framework designed for BERT that can conduct quantization and pruning simultaneously at a subgroup-wise level. Specifically, our proposed method leverages differentiable neural architecture search to automatically assign the scale and precision for the parameters in each subgroup, while pruning out redundant groups of parameters. Extensive evaluations on BERT downstream tasks reveal that our proposed method achieves the same performance as the baselines with a much smaller model size. We also demonstrate the feasibility of obtaining extremely lightweight models by combining our solution with orthogonal methods such as DistilBERT.
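A generic sketch of differentiable bit-width selection of the kind described above follows: a softmax-relaxed choice over candidate precisions for one parameter group, trained jointly with the weights. The subgroup-wise formulation and the pruning of redundant groups from the paper are not reproduced, and all names are illustrative:

```python
import torch
import torch.nn as nn

class DifferentiableBitSearch(nn.Module):
    # Architecture parameters alpha weight the candidate bit-widths; after the
    # search, the argmax bit-width would be kept for deployment.
    def __init__(self, candidate_bits=(2, 4, 8)):
        super().__init__()
        self.candidate_bits = candidate_bits
        self.alpha = nn.Parameter(torch.zeros(len(candidate_bits)))

    @staticmethod
    def fake_quant(w, bits):
        qmax = 2 ** (bits - 1) - 1
        scale = w.detach().abs().max() / qmax + 1e-12
        q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
        return w + (q - w).detach()          # straight-through rounding

    def forward(self, w):
        probs = torch.softmax(self.alpha, dim=0)
        # Softly mix the fake-quantized versions so the precision choice is
        # differentiable and can be trained with the task loss.
        return sum(p * self.fake_quant(w, b)
                   for p, b in zip(probs, self.candidate_bits))
```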
Long Short-Term Memory (LSTM) recurrent networks are frequently used for tasks involving time-sequential data such as speech recognition. Unlike previous LSTM accelerators that either exploit spatial weight sparsity or temporal activation sparsity, this paper proposes a new accelerator called "Spartus" that exploits spatio-temporal sparsity to achieve ultra-low-latency inference. Spatial sparsity is induced using a new Column-Balanced Targeted Dropout (CBTD) structured pruning method, which generates structured sparse weight matrices with balanced workloads. The pruned networks running on the Spartus hardware achieve up to 96% and 94% weight sparsity with negligible accuracy loss on the TIMIT and LibriSpeech datasets. To induce temporal sparsity in LSTMs, we extend the previous DeltaGRU method to a DeltaLSTM method. Combining spatio-temporal sparsity with CBTD and DeltaLSTM saves weight memory accesses and the associated arithmetic operations. The Spartus architecture is scalable and supports real-time online speech recognition when implemented on small and large FPGAs. The per-sample latency of Spartus for a single DeltaLSTM layer of 1024 neurons averages 1 us. Exploiting the spatio-temporal sparsity of our test LSTM network on the TIMIT dataset leads Spartus to a 46x speedup over its theoretical hardware performance, achieving 9.4 TOp/s effective batch-1 throughput and 1.1 TOp/s/W power efficiency.
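The delta-thresholding idea behind DeltaGRU/DeltaLSTM can be sketched as follows (an illustrative reconstruction; the threshold value and names are assumptions):

```python
import torch

def delta_encode(x_t, x_prev, threshold=0.1):
    # Temporal sparsity: only components of the input (or hidden state) that
    # changed by more than `threshold` since the last update are propagated;
    # the remaining deltas are zero, so the corresponding matrix-vector
    # products and memory accesses can be skipped in hardware.
    delta = x_t - x_prev
    mask = delta.abs() >= threshold
    delta = delta * mask                 # sparse delta vector
    x_ref = x_prev + delta               # reference state carried to the next step
    return delta, x_ref, mask

# Inside a DeltaLSTM-style step the pre-activation is then updated incrementally,
# roughly  m_t = m_{t-1} + W @ delta_x + U @ delta_h,
# so zero deltas translate directly into skipped operations.
```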
Quantization has become a predominant approach for model compression, enabling deployment of large models trained on GPUs onto smaller form-factor devices for inference. Quantization-aware training (QAT) optimizes model parameters with respect to the end task while simulating quantization error, leading to better performance than post-training quantization. Approximation of gradients through the non-differentiable quantization operator is typically achieved using the straight-through estimator (STE) or additive noise. However, STE-based methods suffer from instability due to biased gradients, whereas existing noise-based methods cannot reduce the resulting variance. In this work, we incorporate exponentially decaying quantization-error-aware noise together with a learnable scale of task loss gradient to approximate the effect of a quantization operator. We show this method combines gradient scale and quantization noise in a better optimized way, providing finer-grained estimation of gradients at each weight and activation layer's quantizer bin size. Our controlled noise also contains an implicit curvature term that could encourage flatter minima, which we show is indeed the case in our experiments. Experiments training ResNet architectures on the CIFAR-10, CIFAR-100 and ImageNet benchmarks show that our method obtains state-of-the-art top-1 classification accuracy for uniform (non mixed-precision) quantization, out-performing previous methods by 0.5-1.2% absolute.
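A generic noise-based relaxation of the rounding operator, in the spirit of the approach described above, is sketched below; the exact noise schedule and the learnable scale of the task-loss gradient are not reproduced, so this is an assumption-laden illustration rather than the paper's method:

```python
import math
import torch

def noisy_fake_quant(x, step, train_frac, n_bits=8):
    # Replace hard rounding with quantization-error-bounded noise that decays
    # exponentially over training: early on the smooth noisy surrogate dominates,
    # and the forward pass anneals towards the true rounded values.
    qmax = 2 ** (n_bits - 1) - 1
    z = torch.clamp(x / step, -qmax - 1, qmax)
    err = (torch.round(z) - z).detach()          # per-element rounding error
    decay = math.exp(-5.0 * train_frac)          # train_frac in [0, 1]
    noise = decay * err * torch.rand_like(z)     # decaying, error-aware noise
    z_soft = z + noise + (1.0 - decay) * err     # anneal towards hard rounding
    return z_soft * step
```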
Embedding tables are usually huge in click-through rate (CTR) prediction models. To train and deploy the CTR models efficiently and economically, it is necessary to compress their embedding tables at the training stage. To this end, we formulate a novel quantization training paradigm to compress the embeddings from the training stage, termed low-precision training (LPT). Also, we provide theoretical analysis on its convergence. The results show that stochastic weight quantization has a faster convergence rate and a smaller convergence error than deterministic weight quantization in LPT. Further, to reduce the accuracy degradation, we propose adaptive low-precision training (ALPT) that learns the step size (i.e., the quantization resolution) through gradient descent. Experiments on two real-world datasets confirm our analysis and show that ALPT can significantly improve the prediction accuracy, especially at extremely low bit widths. For the first time in CTR models, we successfully train 8-bit embeddings without sacrificing prediction accuracy. The code of ALPT is publicly available.
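A generic learnable-step-size fake quantizer applied to an embedding table is sketched below to illustrate "learning the quantization resolution by gradient descent"; ALPT's stochastic-rounding analysis and exact estimator are not reproduced, and all names are illustrative:

```python
import torch
import torch.nn as nn

class LearnedStepQuantEmbedding(nn.Module):
    # Embedding table whose quantization step size (resolution) is itself a
    # parameter trained by gradient descent alongside the embeddings.
    def __init__(self, num_embeddings, dim, n_bits=8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_embeddings, dim) * 0.01)
        self.step = nn.Parameter(torch.tensor(0.01))
        self.qmax = 2 ** (n_bits - 1) - 1

    def forward(self, ids):
        w = self.weight[ids]
        z = w / self.step
        # Straight-through rounding, then clamping to the signed integer range;
        # gradients reach both the embedding entries and the step size.
        z_q = torch.clamp(z + (torch.round(z) - z).detach(), -self.qmax - 1, self.qmax)
        return z_q * self.step

emb = LearnedStepQuantEmbedding(num_embeddings=10_000, dim=16)
vectors = emb(torch.tensor([3, 17, 42]))   # quantize-dequantized embedding rows
```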
While machine learning is traditionally a resource intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensure a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
Mixed-precision deep neural networks achieve the energy efficiency and throughput needed for hardware deployment, particularly when resources are limited, without sacrificing accuracy. However, the optimal per-layer bit precision that preserves accuracy is not easily found, especially given the abundance of models, datasets and quantization techniques, which creates an enormous search space. To tackle this difficulty, a body of literature has emerged recently, and several frameworks achieving promising accuracy results have been proposed. In this article, we first summarize the quantization techniques commonly used in the literature. We then present a thorough survey of mixed-precision frameworks, classified according to their optimization techniques, such as reinforcement learning, and their quantization techniques, such as deterministic rounding. Furthermore, the advantages and shortcomings of each framework are discussed and juxtaposed. We finally give guidelines for future mixed-precision frameworks.
Increasing the size of a neural network typically improves accuracy but also increases the memory and compute requirements for training the model. We introduce methodology for training deep neural networks using half-precision floating point numbers, without losing model accuracy or having to modify hyperparameters. This nearly halves memory requirements and, on recent GPUs, speeds up arithmetic. Weights, activations, and gradients are stored in IEEE half-precision format. Since this format has a narrower range than single-precision, we propose three techniques for preventing the loss of critical information. Firstly, we recommend maintaining a single-precision copy of weights that accumulates the gradients after each optimizer step (this copy is rounded to half-precision for the forward- and back-propagation). Secondly, we propose loss-scaling to preserve gradient values with small magnitudes. Thirdly, we use half-precision arithmetic that accumulates into single-precision outputs, which are converted to half-precision before storing to memory. We demonstrate that the proposed methodology works across a wide variety of tasks and modern large scale (exceeding 100 million parameters) model architectures, trained on large datasets.
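A condensed sketch of the three techniques (FP32 master weights, loss scaling, and FP32 accumulation of FP16 gradients) is given below, assuming a model already converted to half precision and an optimizer constructed over the FP32 master copies; production code would rely on framework AMP utilities and dynamic loss scaling:

```python
import torch

def mixed_precision_step(model_fp16, master_params, optimizer, x, y, loss_fn,
                         loss_scale=1024.0):
    # Forward/backward in half precision; scale the loss so small gradients do
    # not underflow the FP16 representable range.
    out = model_fp16(x.half())
    loss = loss_fn(out.float(), y)
    (loss * loss_scale).backward()

    with torch.no_grad():
        # Unscale the FP16 gradients into the FP32 master copies, then update.
        for p_half, p_master in zip(model_fp16.parameters(), master_params):
            p_master.grad = p_half.grad.float() / loss_scale
        optimizer.step()                 # the optimizer holds the FP32 masters
        optimizer.zero_grad()
        # Round the updated master weights back to FP16 for the next forward pass.
        for p_half, p_master in zip(model_fp16.parameters(), master_params):
            p_half.copy_(p_master.half())
            p_half.grad = None
    return loss.item()
```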
Beam search, which is the dominant ASR decoding algorithm for end-to-end models, generates tree-structured hypotheses. However, recent studies have shown that decoding with hypothesis merging can achieve a more efficient search with comparable or better performance. But the full context in recurrent networks is not compatible with hypothesis merging. We propose to use vector-quantized long short-term memory units (VQ-LSTM) in the prediction network of RNN transducers. By training the discrete representation jointly with the ASR network, hypotheses can be actively merged to generate lattices. Our experiments on the Switchboard corpus show that the proposed VQ RNN transducers improve ASR performance over transducers with regular prediction networks, while also producing dense lattices with a low oracle word error rate (WER) for the same beam size. Additional language model rescoring experiments also demonstrate the effectiveness of the proposed lattice generation scheme.
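A minimal vector-quantization layer with a straight-through gradient, of the kind that could discretize the prediction-network output so that hypotheses sharing a code index become mergeable, is sketched below; the auxiliary codebook and commitment losses needed to train such a quantizer, and the paper's exact integration into the RNN transducer, are omitted:

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    # Nearest-neighbour codebook lookup with a straight-through gradient.
    def __init__(self, num_codes=256, dim=640):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim) * 0.02)

    def forward(self, h):
        # h: (batch, dim) continuous prediction-network output
        d = torch.cdist(h, self.codebook)     # distances to every code vector
        idx = d.argmin(dim=-1)                # discrete symbol per hypothesis
        h_q = self.codebook[idx]
        # Forward uses the code vector; backward copies gradients to h.
        return h + (h_q - h).detach(), idx
```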
Deep learning has achieved promising results in a wide range of AI applications. Larger datasets and models consistently yield better performance. However, this generally comes at the cost of longer training time, more computation and more communication. In this survey, we aim to provide a clear sketch of the optimizations for large-scale deep learning with regard to model accuracy and model efficiency. We survey the algorithms most commonly used for optimization, elaborate on the debatable topic of the generalization gap that arises in large-batch training, and review the state-of-the-art strategies for addressing communication overhead and reducing memory footprint.