Large-scale distributed training requires significant communication bandwidth for gradient exchange, which limits the scalability of multi-node training and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find that 99.9% of the gradient exchange in distributed SGD is redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during this compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets, including Cifar10, ImageNet, Penn Treebank, and the Librispeech Corpus. In these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270× to 600× without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB and that of DeepSpeech from 488MB to 0.74MB. Deep Gradient Compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile devices. The code is available at: https://github.com/synxlin/deep-gradient-compression.
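As a rough sketch of the mechanism described above (not the authors' implementation), the snippet below combines top-k sparsification with local accumulation and momentum correction; the function names, the momentum value, and the exact masking rule are assumptions for illustration.

```python
import numpy as np

def dgc_step(grad, velocity, accumulated, k, momentum=0.9):
    """Sparsify with momentum correction: accumulate the momentum-corrected
    update locally and transmit only its k largest-magnitude entries."""
    velocity = momentum * velocity + grad              # local momentum (correction happens here)
    accumulated += velocity                            # gradient accumulation for unsent entries
    idx = np.argpartition(np.abs(accumulated), -k)[-k:]
    sparse = np.zeros_like(accumulated)
    sparse[idx] = accumulated[idx]                     # the part that gets communicated
    accumulated[idx] = 0.0                             # sent entries are cleared locally
    velocity[idx] = 0.0                                # momentum factor masking on sent entries
    return sparse, velocity, accumulated

rng = np.random.default_rng(0)
dim, k = 10_000, 10                                    # ~99.9% of entries stay local each step
velocity, accumulated = np.zeros(dim), np.zeros(dim)
for _ in range(3):
    sparse, velocity, accumulated = dgc_step(rng.normal(size=dim), velocity, accumulated, k)
print(np.count_nonzero(sparse), "of", dim, "entries communicated")
```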
High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad, which uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {−1, 0, 1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet does not incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available.
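A minimal sketch of the ternarization step implied by the description: stochastic, unbiased rounding to {−1, 0, 1} scaled by the per-tensor maximum. Layer-wise application and the paper's gradient-clipping details are omitted, and the function name is an assumption.

```python
import numpy as np

def ternarize(grad, rng=np.random.default_rng()):
    """Map a gradient tensor to s * sign(grad) * b, where b ~ Bernoulli(|grad| / s)
    and s is the per-tensor maximum magnitude, so E[output] == grad (unbiased)."""
    s = np.max(np.abs(grad))
    if s == 0:
        return np.zeros_like(grad)
    prob = np.abs(grad) / s                  # keep-probability proportional to magnitude
    b = rng.random(grad.shape) < prob        # Bernoulli mask
    return s * np.sign(grad) * b

g = np.array([0.02, -0.5, 0.4, -0.01])
print(ternarize(g))                          # entries are in {-0.5, 0.0, 0.5}
```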
Unlike traditional distributed machine learning, federated learning stores data locally for training and then aggregates the models on the server, which avoids the data security problems that may arise in traditional distributed machine learning. However, during the training process, the transmission of model parameters can impose a significant load on the network bandwidth. It has been pointed out that the vast majority of model parameters are redundant during model parameter transmission. Building on this observation, we explore the distribution of selected subsets of model parameters and propose a deep hierarchical quantization compression algorithm, which further compresses the model and reduces the network load of data transmission through hierarchical quantization of the model parameters. We also adopt a dynamic sampling strategy for client selection to accelerate the convergence of the model. Experimental results on different public datasets demonstrate the effectiveness of our algorithm.
Deep learning has achieved promising results in a wide range of AI applications. Larger datasets and models consistently yield better performance, but generally at the cost of longer training time, more computation, and more communication. In this survey, we aim to provide a clear sketch of large-scale deep learning optimization from the perspectives of model accuracy and model efficiency. We survey the algorithms most commonly used for optimization, elaborate on the debatable topic of the generalization gap that arises in large-batch training, and review the state-of-the-art strategies for addressing communication overhead and reducing the memory footprint.
Training wide and deep neural networks (DNNs) requires large amounts of storage resources such as memory, because intermediate activation data must be kept in memory during forward propagation and then retrieved for backward propagation. However, due to hardware design constraints, state-of-the-art accelerators such as GPUs are equipped with only very limited memory capacity, which significantly limits the maximum batch size and the achievable speedup when training large-scale DNNs. Traditional memory-saving techniques suffer from performance overhead or are constrained by limited interconnect bandwidth or specific interconnect technologies. In this paper, we propose a novel memory-efficient CNN training framework (called COMET) that leverages error-bounded lossy compression to significantly reduce the memory requirement of training, allowing larger models to be trained or training to be accelerated. Unlike state-of-the-art solutions that adopt image-based lossy compressors (such as JPEG) to compress the activation data, our framework deliberately adopts error-bounded lossy compression with a strict error-control mechanism. Specifically, we provide a theoretical analysis of how compression errors propagate from the altered activation data to the gradients, and empirically investigate the impact of the altered gradients on the training process. Based on these analyses, we optimize the error-bounded lossy compression and propose an adaptive error-control scheme for activation-data compression. We evaluate our design against state-of-the-art solutions with five widely adopted CNNs and the ImageNet dataset. Experiments show that our proposed framework can reduce the training memory consumption by up to 13.5× over baseline training and by 1.8× over another state-of-the-art compression-based framework, with little or no accuracy loss.
Increasing the size of a neural network typically improves accuracy but also increases the memory and compute requirements for training the model. We introduce a methodology for training deep neural networks using half-precision floating-point numbers, without losing model accuracy or having to modify hyperparameters. This nearly halves memory requirements and, on recent GPUs, speeds up arithmetic. Weights, activations, and gradients are stored in IEEE half-precision format. Since this format has a narrower range than single precision, we propose three techniques for preventing the loss of critical information. First, we recommend maintaining a single-precision copy of the weights that accumulates the gradients after each optimizer step (this copy is rounded to half precision for the forward and backward propagation). Second, we propose loss scaling to preserve gradient values with small magnitudes. Third, we use half-precision arithmetic that accumulates into single-precision outputs, which are converted to half precision before storing to memory. We demonstrate that the proposed methodology works across a wide variety of tasks and modern large-scale (exceeding 100 million parameters) model architectures, trained on large datasets.
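The toy loop below illustrates the described recipe (an FP32 master copy of the weights, loss scaling, and unscaling/updating in single precision) on a one-sample linear model; `toy_grad`, the loss-scale value, and the learning rate are placeholders, not the paper's setup.

```python
import numpy as np

def sgd_step_mixed_precision(master_w, grad_fn, x, y, lr=0.01, loss_scale=1024.0):
    """One SGD step with an FP32 master copy, FP16 compute, and loss scaling."""
    w16 = master_w.astype(np.float16)                  # half-precision copy used for fwd/bwd
    grad16 = grad_fn(w16, x, y, loss_scale)            # backward pass on the scaled loss (FP16)
    grad32 = grad16.astype(np.float32) / loss_scale    # unscale in single precision
    master_w -= lr * grad32                            # update accumulates in FP32
    return master_w

# placeholder gradient of a scaled squared loss for a linear model: d/dw [scale * 0.5*(w.x - y)^2]
def toy_grad(w16, x, y, scale):
    err = np.float16(w16 @ x - y)
    return (np.float16(scale) * err * x).astype(np.float16)

w = np.zeros(3, dtype=np.float32)
x, y = np.array([1.0, 2.0, 3.0], dtype=np.float16), np.float16(4.0)
for _ in range(100):
    w = sgd_step_mixed_precision(w, toy_grad, x, y)
print(w @ x.astype(np.float32))                        # approaches 4.0
```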
The high memory consumption and computational cost of recurrent neural network language models (RNNLMs) limit their wider application on resource-constrained devices. In recent years, neural network quantization techniques capable of producing extremely low-bit compression, such as binarized RNNLMs, have been gaining increasing research interest. Directly training quantized neural networks is difficult. By formulating the training of quantized RNNLMs as an optimization problem, this paper presents a novel method to train quantized RNNLMs from scratch using the alternating direction method of multipliers (ADMM). Using tied low-bit quantization tables, this method can also flexibly adjust the trade-off between compression ratio and model performance. Experiments on two tasks, Penn Treebank (PTB) and Switchboard (SWBD), suggest that the proposed ADMM quantization achieves model-size compression factors of up to 31× over the full-precision baseline RNNLMs. Up to 5× faster convergence in model training is also obtained over the baseline binarized RNNLM quantization. Index terms: language models, recurrent neural networks, quantization, alternating direction method of multipliers.
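As a hedged illustration of the ADMM splitting pattern the abstract refers to (real-valued weights, a quantized copy constrained to a tied low-bit table, and a dual variable), the sketch below uses a placeholder quadratic loss and an assumed three-entry table; it is not the paper's RNNLM training procedure.

```python
import numpy as np

def project_to_quant_table(w, table):
    """Map each weight to the nearest entry of a tied low-bit quantization table."""
    table = np.asarray(table)
    return table[np.argmin(np.abs(w[:, None] - table[None, :]), axis=1)]

def admm_quantization_round(w, q, u, table, rho=1e-2, lr=0.1, grad_fn=None):
    """One ADMM round: (1) update real weights against loss + proximal term,
    (2) project onto the quantization table, (3) dual update."""
    grad = grad_fn(w) if grad_fn else np.zeros_like(w)
    w = w - lr * (grad + rho * (w - q + u))        # (1) primal step (single gradient step here)
    q = project_to_quant_table(w + u, table)       # (2) quantization step
    u = u + w - q                                  # (3) dual accumulates the constraint violation
    return w, q, u

rng = np.random.default_rng(5)
w = rng.normal(size=6)
q, u = np.zeros_like(w), np.zeros_like(w)
table = [-0.5, 0.0, 0.5]                           # assumed low-bit tied table
for _ in range(50):
    w, q, u = admm_quantization_round(w, q, u, table, grad_fn=lambda w: 0.1 * w)
print(q)                                           # entries drawn from the table
```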
Modern deep learning models are often trained in parallel over a collection of distributed machines to reduce training time. In such settings, communication of the model updates between machines becomes a significant performance bottleneck, and various lossy compression techniques have been proposed to alleviate this problem. In this work, we introduce a new, simple yet theoretically and practically effective compression technique: natural compression (NC). Our technique is applied individually to all entries of the to-be-compressed update vector and works by randomized rounding to the nearest (negative or positive) power of two, which can be computed in a "natural" way by ignoring the mantissa. We show that, compared to no compression, NC increases the second moment of the compressed vector by not more than the tiny factor $\frac{9}{8}$, which means that the effect of NC on the convergence speed of popular training algorithms, such as distributed SGD, is negligible. However, the communication savings enabled by NC are substantial, leading to a 3-4× improvement in overall theoretical running time. For applications requiring more aggressive compression, we generalize NC to natural dithering, which we prove is much better than the common random dithering technique. Our compression operators can be used on their own or in combination with existing operators, yielding more aggressive combined effects and new state-of-the-art results in theory and practice.
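A small sketch of the core operator as described: each nonzero entry is randomly rounded to one of the two nearest powers of two, with probabilities chosen so the result is unbiased. The handling of zeros and the vectorized implementation are my own choices rather than the paper's.

```python
import numpy as np

def natural_compression(v, rng=np.random.default_rng()):
    """Randomly round each entry to one of the two nearest powers of two,
    with probabilities chosen so the rounding is unbiased (E[out] == v)."""
    out = np.zeros_like(v, dtype=np.float64)
    nz = v != 0
    a = np.abs(v[nz])
    lo = 2.0 ** np.floor(np.log2(a))           # nearest power of two below |v|
    hi = 2.0 * lo                              # nearest power of two above |v|
    p_up = (a - lo) / (hi - lo)                # rounding-up probability (unbiasedness)
    rounded = np.where(rng.random(a.shape) < p_up, hi, lo)
    out[nz] = np.sign(v[nz]) * rounded
    return out

v = np.array([0.3, -1.7, 5.0, 0.0])
print(natural_compression(v))                  # e.g. [ 0.25 -2.    4.    0.  ]
```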
With the rapid growth of big data, distributed machine learning (ML) has been widely applied to train large models. Stochastic gradient descent (SGD) is arguably the workhorse algorithm of ML. Distributed ML models trained by SGD involve a large amount of gradient communication, which limits the scalability of distributed ML. Thus, it is important to compress the gradients in order to reduce communication. In this paper, we propose FastSGD, a fast compressed SGD framework for distributed ML. To achieve a high compression ratio at low cost, FastSGD represents the gradients as key-value pairs and compresses both the gradient keys and values in linear time complexity. For gradient-value compression, FastSGD first uses a reciprocal mapper to transform the original values into reciprocal values; it then leverages logarithmic quantization to further reduce the reciprocal values to small integers. Finally, FastSGD filters the reduced gradient integers with a given threshold. For gradient-key compression, FastSGD provides an adaptive fine-grained delta encoding method to store the gradient keys with fewer bits. Extensive experiments on practical ML models and datasets demonstrate that, compared with state-of-the-art methods, FastSGD achieves compression ratios of up to 4 orders of magnitude and accelerates the convergence time by up to 8×.
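The abstract names the stages but not their exact formulas, so the sketch below is one plausible reading: values are mapped to reciprocals, log-quantized to small integers, and filtered by a threshold, while the sorted keys are delta-encoded. The base, the threshold, and the helper names are assumptions.

```python
import numpy as np

def compress_values(grad, base=2.0, threshold=16):
    """Reciprocal mapping followed by logarithmic quantization; integers whose
    magnitude exceeds `threshold` are dropped (they correspond to tiny gradients)."""
    nz_idx = np.flatnonzero(grad)
    recip = 1.0 / np.abs(grad[nz_idx])                           # small grads -> large reciprocals
    levels = np.round(np.log(recip) / np.log(base)).astype(int)  # log quantization to small ints
    keep = levels <= threshold                                   # filter: drop near-zero gradients
    keys = nz_idx[keep]
    signs = np.sign(grad[nz_idx])[keep].astype(np.int8)
    return keys, levels[keep], signs                             # decode as sign * base**(-level)

def delta_encode(keys):
    """Store sorted keys as gaps from the previous key, which need fewer bits."""
    return np.diff(keys, prepend=0)

g = np.array([0.5, 0.0, -0.001, 0.03, 0.0, 0.2])
keys, levels, signs = compress_values(g)
print(delta_encode(keys), levels, signs)
```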
Federated Learning allows multiple parties to jointly train a deep learning model on their combined data, without any of the participants having to reveal their local data to a centralized server. This form of privacy-preserving collaborative learning however comes at the cost of a significant communication overhead during training. To address this problem, several compression methods have been proposed in the distributed training literature that can reduce the amount of required communication by up to three orders of magnitude. These existing methods however are only of limited utility in the Federated Learning setting, as they either only compress the upstream communication from the clients to the server (leaving the downstream communication uncompressed) or only perform well under idealized conditions such as iid distribution of the client data, which typically cannot be found in Federated Learning. In this work, we propose Sparse Ternary Compression (STC), a new compression framework that is specifically designed to meet the requirements of the Federated Learning environment. STC extends the existing compression technique of top-k gradient sparsification with a novel mechanism to enable downstream compression as well as ternarization and optimal Golomb encoding of the weight updates. Our experiments on four different learning tasks demonstrate that STC distinctively outperforms Federated Averaging in common Federated Learning scenarios where clients either a) hold non-iid data, b) use small batch sizes during training, or where c) the number of clients is large and the participation rate in every communication round is low. We furthermore show that even if the clients hold iid data and use medium-sized batches for training, STC still behaves Pareto-superior to Federated Averaging in the sense that it achieves fixed target accuracies on our benchmarks within both fewer training iterations and a smaller communication budget. These results advocate for a paradigm shift in Federated optimization towards high-frequency low-bitwidth communication, in particular in bandwidth-constrained learning environments.
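A compact sketch of the compression operator described: top-k sparsification followed by ternarization of the surviving entries to a single shared magnitude. The lossless Golomb encoding of positions and the downstream-compression mechanism are omitted, and the function name is illustrative.

```python
import numpy as np

def sparse_ternary_compress(update, k):
    """Keep the k largest-magnitude entries and replace them by +/- mu, where mu is
    their mean magnitude; everything else becomes zero."""
    idx = np.argpartition(np.abs(update), -k)[-k:]   # top-k coordinates
    mu = np.mean(np.abs(update[idx]))                # single shared magnitude
    compressed = np.zeros_like(update)
    compressed[idx] = mu * np.sign(update[idx])      # values are in {-mu, 0, +mu}
    return compressed

rng = np.random.default_rng(1)
u = rng.normal(size=1000)
c = sparse_ternary_compress(u, k=10)
print(np.count_nonzero(c), np.unique(np.abs(c[c != 0])))
```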
Decentralized distributed learning is the key to enabling large-scale machine learning (training) on edge devices utilizing private user-generated local data, without relying on the cloud. However, the practical realization of such on-device training is limited by the communication bottleneck, the computational complexity of training deep models, and significant data-distribution skew across devices. Many feedback-based compression techniques have been proposed in the literature to reduce the communication cost, and a few works propose algorithmic changes that improve convergence rates to aid performance in the presence of skewed data distributions. To the best of our knowledge, no work in the literature applies and demonstrates computationally efficient training techniques such as quantization and pruning for peer-to-peer decentralized learning setups. In this paper, we analyze and show the convergence of low-precision decentralized training, which aims to reduce the computational complexity of training and inference. Furthermore, we study the effect of the degree of skew and of communication compression on low-precision decentralized training over various computer vision and natural language processing (NLP) tasks. Our experiments show that 8-bit decentralized training incurs minimal accuracy loss compared to its full-precision counterpart, even with heterogeneous data. However, when low-precision training is accompanied by sparse communication compression, we observe a 1-2% drop in accuracy. The proposed low-precision decentralized training reduces computational complexity, memory usage, and communication cost while trading off less than 1% accuracy on IID and non-IID data. In particular, with higher skew values, we observe an increase in accuracy (~0.5%) with low-precision training, indicating the regularization effect of quantization.
The ability to scale out training workloads has been one of the key performance enablers of deep learning. The main scaling approach is data-parallel GPU-based training, which has been supported efficiently by hardware and software for fast GPU communication, in particular via bandwidth overprovisioning. This support comes at a price: there is an order-of-magnitude cost difference between "cloud-grade" servers and their "consumer-grade" counterparts, even though server-grade and consumer-grade GPUs can have similar computational envelopes. In this paper, we investigate whether the expensive hardware-overprovisioning approach can be supplanted by algorithmic and system design, and propose a framework called CGX, which provides efficient software support for communication compression. We show that this framework is able to remove the communication bottleneck from consumer-grade multi-GPU systems in the absence of hardware support: when training modern models to full accuracy, our framework enables 2-3× automatic speedups on a commodity system with 8 consumer-grade NVIDIA RTX 3090 GPUs, and allows it to surpass the throughput of an NVIDIA DGX-1 server, which has similar peak FLOPS but benefits from bandwidth overprovisioning.
Many applications require sparse neural networks due to space or inference time restrictions. There is a large body of work on training dense networks to yield sparse networks for inference, but this limits the size of the largest trainable sparse model to that of the largest trainable dense model. In this paper we introduce a method to train sparse neural networks with a fixed parameter count and a fixed computational cost throughout training, without sacrificing accuracy relative to existing dense-to-sparse training methods. Our method updates the topology of the sparse network during training by using parameter magnitudes and infrequent gradient calculations. We show that this approach requires fewer floating-point operations (FLOPs) to achieve a given level of accuracy compared to prior techniques. We demonstrate state-of-the-art sparse training results on a variety of networks and datasets, including ResNet-50, MobileNets on ImageNet-2012, and RNNs on WikiText-103. Finally, we provide some insights into why allowing the topology to change during the optimization can overcome local minima encountered when the topology remains static.
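A schematic of the drop-and-grow topology update described above for a flat weight vector: prune the smallest-magnitude active weights and activate the coordinates with the largest (infrequently computed) dense gradients, keeping the parameter count fixed. The names and the swap count are illustrative, not the paper's exact schedule.

```python
import numpy as np

def update_topology(weights, mask, dense_grad, n_swap):
    """Drop the n_swap active weights with the smallest magnitude and grow
    connections where the (infrequently computed) dense gradient is largest."""
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(~mask.astype(bool))
    drop = active[np.argsort(np.abs(weights[active]))[:n_swap]]          # smallest active weights
    grow = inactive[np.argsort(-np.abs(dense_grad[inactive]))[:n_swap]]  # largest inactive gradients
    mask[drop] = 0
    mask[grow] = 1
    weights[drop] = 0.0
    weights[grow] = 0.0                  # newly grown connections start at zero
    return weights, mask

rng = np.random.default_rng(2)
w = rng.normal(size=20) * (rng.random(20) < 0.3)   # a sparse weight vector
mask = (w != 0).astype(np.int8)
g = rng.normal(size=20)                            # stand-in dense gradient
w, mask = update_topology(w, mask, g, n_swap=2)
print(mask.sum(), "active weights (total unchanged)")
```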
Communication compression is a crucial technique for modern distributed learning systems to alleviate their communication bottlenecks over slower networks. Despite recent intensive studies of gradient compression for data-parallel training, compressing the activations of models trained with pipeline parallelism remains an open problem. In this paper, we propose AC-SGD, a novel activation compression algorithm for communication-efficient pipeline-parallel training over slow networks. Different from previous efforts on activation compression, instead of compressing activation values directly, AC-SGD compresses the changes of the activations. This allows us to show, to our knowledge for the first time, that one can still achieve an $O(1/\sqrt{T})$ convergence rate for non-convex objectives under activation compression, without making assumptions on gradient unbiasedness that do not hold for deep learning models with non-linear activation functions. We then show that AC-SGD can be optimized and implemented efficiently, without additional end-to-end runtime overhead. We evaluate AC-SGD for fine-tuning language models with up to 1.5 billion parameters, compressing the activations to 2-4 bits. AC-SGD provides up to 4.3× end-to-end speedup in slower networks, without sacrificing model quality. Moreover, we also show that AC-SGD can be combined with state-of-the-art gradient compression algorithms to enable "end-to-end communication compression": all communications between machines, including model gradients, forward activations, and backward gradients, are compressed into lower precision. This provides up to 4.9× end-to-end speedup, without sacrificing model quality.
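The sketch below conveys the central idea, compressing the change of the activations rather than the activations themselves, using a simple uniform quantizer as a stand-in for the paper's compressor; the class name and the shared-reference bookkeeping are assumptions for illustration.

```python
import numpy as np

def quantize_uniform(x, bits=2):
    """Simple uniform quantizer used here as a stand-in compressor."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((x - lo) / scale) * scale + lo

class ActivationDeltaCompressor:
    """Send quantize(activation - reference) instead of the raw activation,
    and keep the reconstructed activation as the next reference on both sides."""
    def __init__(self, shape, bits=2):
        self.reference = np.zeros(shape)
        self.bits = bits

    def compress(self, activation):
        delta = activation - self.reference
        recon = self.reference + quantize_uniform(delta, self.bits)
        self.reference = recon           # the receiver maintains the same reference
        return recon

comp = ActivationDeltaCompressor(shape=(4,), bits=2)
a1 = np.array([0.1, 0.5, -0.2, 0.9])
a2 = a1 + 0.01                           # activations change slowly across iterations
print(comp.compress(a1))
print(comp.compress(a2))                 # the second step quantizes only a small delta
```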
Parallel implementations of stochastic gradient descent (SGD) have received significant research attention, thanks to its excellent scalability properties. A fundamental barrier when parallelizing SGD is the high bandwidth cost of communicating gradient updates between nodes; consequently, several lossy compression heuristics have been proposed, by which nodes only communicate quantized gradients. Although effective in practice, these heuristics do not always converge. In this paper, we propose Quantized SGD (QSGD), a family of compression schemes with convergence guarantees and good practical performance. QSGD allows the user to smoothly trade off communication bandwidth and convergence time: nodes can adjust the number of bits sent per iteration, at the cost of possibly higher variance. We show that this trade-off is inherent, in the sense that improving it past some threshold would violate information-theoretic lower bounds. QSGD guarantees convergence for convex and non-convex objectives, under asynchrony, and can be extended to stochastic variance-reduced techniques. When applied to training deep neural networks for image classification and automated speech recognition, QSGD leads to significant reductions in end-to-end training time. For instance, on 16 GPUs, we can train the ResNet-152 network to full accuracy on ImageNet 1.8× faster than the full-precision variant; the reduction in training time to the same target accuracy is 2.7×. Further, even computationally-heavy architectures such as Inception and ResNet can benefit from the reduction in communication: on 16 GPUs, QSGD reduces the end-to-end convergence time of ResNet-152 by approximately 2×. Networks trained with QSGD can converge to virtually the same accuracy as full-precision variants, and gradient quantization may even slightly improve accuracy in some settings.
Related Work. One line of related research studies the communication complexity of convex optimization. In particular, [40] studied two-processor convex minimization in the same model, provided a lower bound of Ω(n(log n + log(1/ε))) bits on the communication cost of n-dimensional convex problems, and proposed a non-stochastic algorithm for strongly convex problems, whose communication cost is within a log factor of the lower bound. By contrast, our focus is on stochastic gradient methods. Recent work [5] focused on lower bounds on the number of communication rounds necessary for convex learning. Buckwild! [10] was the first to consider the convergence guarantees of low-precision SGD. It gave upper bounds on the error probability of SGD, assuming unbiased stochastic quantization, convexity, and gradient sparsity, and showed significant speedup when solving convex problems on CPUs. QSGD refines these results by focusing on the trade-off between communication and convergence. We view quantization as an independent source of variance for SGD, which allows us to employ standard convergence results [7]. The main differences from Buckwild! [...]
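A minimal sketch of the stochastic quantization at the heart of QSGD: each coordinate is normalized by the vector norm and randomly rounded to one of s+1 levels so that the quantizer is unbiased. The encoding of the resulting integers into a short bit string is omitted.

```python
import numpy as np

def qsgd_quantize(v, s, rng=np.random.default_rng()):
    """Stochastically quantize v to s levels per sign: each |v_i|/||v|| is rounded
    up or down to a multiple of 1/s with probabilities that make it unbiased."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return np.zeros_like(v)
    scaled = np.abs(v) / norm * s
    lower = np.floor(scaled)
    prob_up = scaled - lower                       # unbiased rounding probability
    levels = lower + (rng.random(v.shape) < prob_up)
    return norm * np.sign(v) * levels / s          # E[result] == v

v = np.array([0.3, -0.1, 0.7, 0.0])
print(qsgd_quantize(v, s=4))                       # entries are sign * norm * {0, 1/4, ..., 1}
```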
The explosion of data and the increase in model size have driven remarkable progress in large-scale machine learning, but they also make model training time-consuming and model storage difficult. To address these problems in distributed model training settings with high computational efficiency and device constraints, two main difficulties remain. On the one hand, the communication cost of exchanging information, e.g., stochastic gradients among different workers, is a key bottleneck for distributed training efficiency. On the other hand, models with fewer parameters are easier to store and communicate, but risk degrading model performance. To simultaneously balance communication cost, model capacity, and model performance, we propose quantized composite mirror descent adaptive subgradient (QCMD AdaGrad) and quantized regularized dual averaging adaptive subgradient (QRDA AdaGrad) for distributed training. Specifically, we explore the combination of gradient quantization and sparse models to reduce the communication cost per iteration in distributed training. An adaptive learning-rate matrix based on the quantized gradients is constructed to strike a balance among communication cost, accuracy, and model sparsity. Moreover, we theoretically find that a large quantization error introduces extra noise, which affects the convergence and sparsity of the model. Therefore, a threshold quantization strategy with relatively small error is adopted in QCMD AdaGrad and QRDA AdaGrad to improve the signal-to-noise ratio and preserve the sparsity of the model. Both theoretical analysis and empirical results demonstrate the efficacy and efficiency of the proposed algorithms.
When the available hardware cannot meet the memory and compute requirements to efficiently train high-performing machine learning models, a compromise in either the training quality or the model complexity is needed. In Federated Learning (FL), nodes are orders of magnitude more constrained than traditional server-grade hardware and are often battery-powered, severely limiting the complexity of models that can be trained under this paradigm. While most research has focused on designing better aggregation strategies to improve convergence rates and alleviate the communication costs of FL, fewer efforts have been devoted to speeding up on-device training. Such a stage, which repeats hundreds of times (i.e., every round) and can involve thousands of devices, accounts for the majority of the time required to train federated models and for the total energy consumption on the client side. In this work, we present the first study of the unique aspects that arise when introducing sparsity at training time in FL workloads. We then propose ZeroFL, a framework that relies on highly sparse operations to accelerate on-device training. Models trained with ZeroFL and 95% sparsity achieve up to 2.3% higher accuracy compared to a baseline obtained by adapting a state-of-the-art sparse training framework to the FL setting.
Large-batch SGD is important for scaling training of deep neural networks. However, without fine-tuning hyperparameter schedules, the generalization of the model may be hampered. We propose to use batch augmentation: replicating instances of samples within the same batch with different data augmentations. Batch augmentation acts as a regularizer and an accelerator, increasing both generalization and performance scaling for a fixed budget of optimization steps. We analyze the effect of batch augmentation on gradient variance and show that it empirically improves convergence for a wide variety of networks and datasets. Our results show that batch augmentation reduces the number of necessary SGD updates to achieve the same accuracy as the state-of-the-art. Overall, this simple yet effective method enables faster training and better generalization by allowing more computational resources to be used concurrently. Large-batch training of neural networks: recent approaches by [10], [8], [41] and others show that by adapting the optimization regime (i.e., the hyperparameter schedule), large-batch training can achieve equally good generalization.
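A tiny sketch of the batch-augmentation transform itself: every sample is replicated M times within the batch, each copy with an independently drawn augmentation. The flip-plus-noise augmentation and function names are placeholders, not the paper's pipeline.

```python
import numpy as np

def augment(x, rng):
    """Placeholder augmentation: random horizontal flip plus small noise."""
    if rng.random() < 0.5:
        x = x[:, ::-1]
    return x + rng.normal(scale=0.01, size=x.shape)

def batch_augment(batch, m, rng=np.random.default_rng()):
    """Replicate every sample in the batch M times, each copy with an
    independently drawn augmentation, yielding an M-times larger batch."""
    return np.stack([augment(x, rng) for x in batch for _ in range(m)])

batch = np.random.rand(8, 32, 32)        # 8 images of 32x32
big_batch = batch_augment(batch, m=4)
print(big_batch.shape)                   # (32, 32, 32): same samples, 4 augmentations each
```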
Due to the explosion in the size of training datasets, distributed learning has received growing interest in recent years. One of the major bottlenecks is the communication cost between the central server and local workers. While error-feedback compression has been proven to reduce communication costs with stochastic gradient descent (SGD), there is far less work on communication-efficient adaptive gradient methods, which are widely used for training large-scale machine learning models. In this paper, we propose a new communication-compressed AMSGrad for distributed nonconvex optimization problems, which is provably efficient. Our proposed distributed learning framework features an effective gradient compression strategy and a worker-side model-update design. We prove that the proposed communication-efficient distributed adaptive gradient method converges to a first-order stationary point with the same iteration complexity as uncompressed vanilla AMSGrad in the stochastic nonconvex optimization setting. Experiments on various benchmarks back up our theory.
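The abstract does not spell out the compression operator, so the sketch below shows only the generic error-feedback pattern it builds on: compress the gradient plus the carried error, transmit the compressed part, and keep the remainder locally. The top-k compressor and all names are assumptions; the AMSGrad-side update is omitted.

```python
import numpy as np

def topk(v, k):
    """Keep the k largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def error_feedback_step(grad, error, k):
    """Compress (grad + carried error); whatever the compressor discarded is
    carried over to the next iteration instead of being lost."""
    corrected = grad + error
    compressed = topk(corrected, k)        # this is what the worker transmits
    new_error = corrected - compressed     # locally retained residual
    return compressed, new_error

rng = np.random.default_rng(3)
error = np.zeros(1000)
for step in range(5):
    grad = rng.normal(size=1000)
    sent, error = error_feedback_step(grad, error, k=50)
print("residual magnitude carried locally:", np.linalg.norm(error).round(2))
```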
We introduce a method to train Quantized Neural Networks (QNNs): neural networks with extremely low precision (e.g., 1-bit) weights and activations at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6 bits as well, which enables gradient computation using only bit-wise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved accuracy comparable to their 32-bit counterparts using only 4 bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
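A toy sketch of two ingredients such training typically relies on: binarized weights in the forward pass and a straight-through estimator that passes gradients to the real-valued latent weights (zeroed outside the clip range). The single-layer example, the learning rate, and the deterministic sign binarization are illustrative assumptions, not the paper's full procedure.

```python
import numpy as np

def binarize(w):
    """Deterministic binarization used in the forward pass: sign(w) in {-1, +1}."""
    return np.where(w >= 0, 1.0, -1.0)

def ste_backward(grad_wrt_binary, w, clip=1.0):
    """Straight-through estimator: pass the gradient through the sign function,
    zeroing it where |w| exceeds the clip range."""
    return grad_wrt_binary * (np.abs(w) <= clip)

# toy forward/backward for one binarized linear layer
rng = np.random.default_rng(4)
w_real = rng.uniform(-1.5, 1.5, size=4)       # real-valued latent weights kept for updates
x = rng.normal(size=4)
y_hat = binarize(w_real) @ x                  # forward pass uses 1-bit weights
grad_y = 1.0                                  # pretend upstream gradient
grad_w_binary = grad_y * x                    # gradient w.r.t. the binarized weights
w_real -= 0.1 * ste_backward(grad_w_binary, w_real)
print(binarize(w_real))
```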