Modern deep learning models are often trained in parallel over a collection of distributed machines to reduce training time. In such settings, communication of model updates between machines becomes a significant performance bottleneck, and various lossy compression techniques have been proposed to alleviate this problem. In this work, we introduce a new, simple, yet theoretically and practically effective compression technique: natural compression (NC). Our technique is applied individually to each entry of the to-be-compressed update vector and works by randomized rounding to the nearest (negative or positive) power of two, which can be computed in a "natural" way by ignoring the mantissa. We show that, compared to no compression, NC increases the second moment of the compressed vector by at most the tiny factor $\frac{9}{8}$, which means that the effect of NC on the convergence speed of popular training algorithms, such as distributed SGD, is negligible. However, the communication savings enabled by NC are substantial, leading to a $3$-$4\times$ improvement in overall theoretical running time. For applications requiring more aggressive compression, we generalize NC to natural dithering, which we prove is substantially better than the common random dithering technique. Our compression operators can be used on their own or combined with existing operators, yielding more aggressive combined compression and setting a new state of the art in both theory and practice.
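The rounding rule is simple enough to state in a few lines. Below is a minimal NumPy sketch of the natural compression operator in its standard unbiased formulation; function and variable names are ours, not from any official implementation:

```python
import numpy as np

def natural_compression(x: np.ndarray, rng=None) -> np.ndarray:
    """Unbiased rounding of each entry to one of the two nearest powers of two."""
    rng = rng or np.random.default_rng()
    out = np.zeros_like(x, dtype=float)
    nz = x != 0
    a = np.abs(x[nz])
    low = 2.0 ** np.floor(np.log2(a))   # power of two just below (or equal to) |x|
    # Round up to 2*low with probability (|x| - low) / low; this makes the
    # compressor unbiased: E[C(x)] = x.
    p = (a - low) / low
    rounded = np.where(rng.random(a.shape) < p, 2.0 * low, low)
    out[nz] = np.sign(x[nz]) * rounded
    return out

x = np.array([0.3, -1.7, 5.0, 0.0])
print(natural_compression(x))   # entries land on signed powers of two, e.g. ±0.25, ±2.0
```

Only the sign and exponent of each entry need to be transmitted, which is where the communication savings over full-precision floats come from.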
Parallel implementations of stochastic gradient descent (SGD) have received significant research attention, thanks to its excellent scalability properties. A fundamental barrier when parallelizing SGD is the high bandwidth cost of communicating gradient updates between nodes; consequently, several lossy compression heuristics have been proposed, by which nodes only communicate quantized gradients. Although effective in practice, these heuristics do not always converge. In this paper, we propose Quantized SGD (QSGD), a family of compression schemes with convergence guarantees and good practical performance. QSGD allows the user to smoothly trade off communication bandwidth and convergence time: nodes can adjust the number of bits sent per iteration, at the cost of possibly higher variance. We show that this trade-off is inherent, in the sense that improving it past some threshold would violate information-theoretic lower bounds. QSGD guarantees convergence for convex and non-convex objectives, under asynchrony, and can be extended to stochastic variance-reduced techniques. When applied to training deep neural networks for image classification and automated speech recognition, QSGD leads to significant reductions in end-to-end training time. For instance, on 16 GPUs, we can train the ResNet-152 network to full accuracy on ImageNet 1.8× faster than the full-precision variant; the speedup in time to the same target accuracy is 2.7×. Further, even computationally-heavy architectures such as Inception and ResNet can benefit from the reduction in communication: on 16 GPUs, QSGD reduces the end-to-end convergence time of ResNet-152 by approximately 2×. Networks trained with QSGD can converge to virtually the same accuracy as full-precision variants, and gradient quantization may even slightly improve accuracy in some settings. Related Work. One line of related research studies the communication complexity of convex optimization. In particular, [40] studied two-processor convex minimization in the same model, provided a lower bound of Ω(n(log n + log(1/ε))) bits on the communication cost of n-dimensional convex problems, and proposed a non-stochastic algorithm for strongly convex problems, whose communication cost is within a log factor of the lower bound. By contrast, our focus is on stochastic gradient methods. Recent work [5] focused on lower bounds for the number of communication rounds necessary for convex learning. Buckwild! [10] was the first to consider the convergence guarantees of low-precision SGD. It gave upper bounds on the error probability of SGD, assuming unbiased stochastic quantization, convexity, and gradient sparsity, and showed significant speedup when solving convex problems on CPUs. QSGD refines these results by focusing on the trade-off between communication and convergence. We view quantization as an independent source of variance for SGD, which allows us to employ standard convergence results [7]. The main differences from Buckwild! ...
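As a rough illustration of the quantization at QSGD's core, here is a hedged NumPy sketch of unbiased stochastic quantization onto s levels per entry; the full scheme additionally encodes the resulting integers compactly (e.g., with Elias coding), which is omitted here:

```python
import numpy as np

def qsgd_quantize(v: np.ndarray, s: int, rng=None) -> np.ndarray:
    """Unbiased quantization of v onto levels sign(v_i) * ||v|| * {0, 1/s, ..., 1}."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(v)
    if norm == 0:
        return np.zeros_like(v)
    ratio = np.abs(v) / norm * s            # position of |v_i| on the level grid, in [0, s]
    floor = np.floor(ratio)
    # Round up with probability equal to the fractional part -> E[output] = v.
    levels = floor + (rng.random(v.shape) < (ratio - floor))
    return np.sign(v) * norm * levels / s

g = np.array([0.1, -0.5, 0.2])
print(qsgd_quantize(g, s=4))
```

Increasing s sends more bits per coordinate and lowers the variance of the estimate, which is exactly the bandwidth/convergence-time trade-off the abstract describes.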
Over the last few years, various communication compression techniques have emerged as an indispensable tool for alleviating the communication bottleneck in distributed learning. However, although {\em biased} compressors often show superior performance in practice, very little is known about them in comparison with the much more studied and understood {\em unbiased} compressors. In this work we study three classes of biased compression operators, two of which are new, and their performance when applied to (stochastic) gradient descent and distributed (stochastic) gradient descent. We show for the first time that biased compressors can lead to linear convergence rates in both the single-node and distributed settings. We prove that distributed compressed SGD with an error-feedback mechanism enjoys the ergodic rate $\mathcal{O}\left(\delta L \exp\left[-\frac{\mu K}{\delta L}\right] + \frac{C + \delta D}{K\mu}\right)$, where $\delta \ge 1$ is a compression parameter which grows when more compression is applied, $L$ and $\mu$ are the smoothness and strong convexity constants, $C$ captures stochastic gradient noise ($C = 0$ if full gradients are computed on each node), and $D$ captures the variance of the gradients at the optimum ($D = 0$ for over-parameterized models). Further, via a theoretical study of several synthetic and empirical distributions of communicated gradients, we shed light on why and by how much biased compressors outperform their unbiased variants. Finally, we propose several new biased compressors with promising theoretical guarantees and practical performance.
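For concreteness, a canonical example of a biased compressor is Top-K, which keeps only the K largest-magnitude entries and zeros out the rest; a minimal sketch follows (Top-K is a representative operator from the literature, not necessarily one of the paper's three classes):

```python
import numpy as np

def top_k(x: np.ndarray, k: int) -> np.ndarray:
    """Biased Top-K sparsifier: keep the k entries of largest magnitude."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest-magnitude entries
    out[idx] = x[idx]
    return out

print(top_k(np.array([0.1, -3.0, 0.5, 2.0]), k=2))   # keeps -3.0 and 2.0
```

Unlike an unbiased compressor, E[top_k(x)] ≠ x in general, which is why analyses of such operators need mechanisms like error feedback.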
Training deep neural networks on large datasets can often be accelerated by using multiple compute nodes. This approach, known as distributed training, relies on specialized message-passing protocols such as Ring All-Reduce. However, running these protocols at scale requires reliable high-speed networking that is only available in dedicated clusters. In contrast, many real-world applications, such as federated learning and cloud-based distributed training, operate on unreliable devices with unstable network bandwidth. As a result, these applications are restricted to using parameter servers or gossip-based averaging protocols. In this work, we lift this restriction by proposing Moshpit All-Reduce, an iterative averaging protocol that converges exponentially to the global average. We demonstrate the efficiency of our protocol for distributed optimization with strong theoretical guarantees. Experiments show a 1.3× speedup for ImageNet training compared to competitive gossip-based strategies, and a 1.5× speedup when training from scratch on preemptible nodes.
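A toy simulation illustrates the averaging dynamics behind such protocols: if peers are repeatedly partitioned into random groups and each group replaces its values by the group mean, the global mean is preserved exactly while the spread around it shrinks quickly. This sketch models only the averaging, not the fault-tolerant protocol itself:

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.normal(size=64)          # one scalar parameter per peer
global_mean = values.mean()           # preserved exactly by every group-averaging round

for round_ in range(5):
    perm = rng.permutation(len(values))
    for group in perm.reshape(-1, 8): # random groups of 8 peers each
        values[group] = values[group].mean()
    # maximum deviation from the global average shrinks rapidly per round
    print(round_, np.abs(values - global_mean).max())
```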
Recent advances in distributed optimization and learning have shown that communication compression is one of the most effective means of reducing communication. While there have been many results on convergence rates under communication compression, theoretical lower bounds are still missing. Analyses of algorithms with communication compression have attributed convergence to two abstract properties: the unbiased property or the contractive property. They can be applied with either unidirectional compression (only messages from workers to the server are compressed) or bidirectional compression. In this paper, we consider distributed stochastic algorithms for minimizing smooth and non-convex objective functions under communication compression. We establish convergence lower bounds for algorithms using either unbiased or contractive compressors, in both the unidirectional and bidirectional settings. To close the gap between the lower bounds and existing upper bounds, we further propose NEOLITHIC, an algorithm that nearly attains our lower bounds (up to logarithmic factors) under mild conditions. Our results also show that using contractive bidirectional compression can yield iterative methods that converge as fast as those using unbiased unidirectional compression. Experimental results validate our findings.
In contrast to training traditional machine learning (ML) models in data centers, federated learning (FL) trains ML models over local datasets held by resource-constrained, heterogeneous edge devices. Existing FL algorithms aim to learn a single global model for all participating devices, which may not be helpful to all of them, due to the heterogeneity of the data across devices. Recently, Hanzely and Richtárik (2020) proposed a new formulation for training personalized FL models aimed at balancing the trade-off between the traditional global model and local models that could be trained by individual devices using their private data only. They derived a new algorithm, called loopless local gradient descent (L2GD), to solve it, and showed that this algorithm enjoys improved communication complexity in regimes where more personalization is required. In this paper, we equip their L2GD algorithm with a bidirectional compression mechanism to further reduce the communication bottleneck between the local devices and the server. Unlike other compression-based algorithms used in the FL setting, our compressed L2GD algorithm operates on a probabilistic communication protocol, where communication does not happen on a fixed schedule. Moreover, our compressed L2GD algorithm maintains a convergence rate similar to vanilla SGD without compression. To validate the efficiency of our algorithm, we perform diverse numerical experiments on both convex and non-convex problems, using various compression techniques.
Training large neural networks requires distributing learning across multiple workers, where the cost of communicating gradients can be a significant bottleneck. SIGNSGD alleviates this problem by transmitting just the sign of each minibatch stochastic gradient. We prove that it can get the best of both worlds: compressed gradients and SGD-level convergence rate. The relative ℓ1/ℓ2 geometry of gradients, noise and curvature informs whether SIGNSGD or SGD is theoretically better suited to a particular problem. On the practical side we find that the momentum counterpart of SIGNSGD is able to match the accuracy and convergence speed of ADAM on deep ImageNet models. We extend our theory to the distributed setting, where the parameter server uses majority vote to aggregate gradient signs from each worker, enabling 1-bit compression of worker-server communication in both directions. Using a theorem by Gauss (1823) we prove that majority vote can achieve the same reduction in variance as full precision distributed SGD. Thus, there is great promise for sign-based optimisation schemes to achieve fast communication and fast convergence. Code to reproduce experiments is to be found at https://github.com/jxbz/signSGD.
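A minimal sketch of the distributed scheme described above: workers transmit coordinate-wise signs, and the server returns the sign of their sum (the majority vote), so both directions cost one bit per coordinate. Names are illustrative:

```python
import numpy as np

def majority_vote_step(x, worker_grads, lr=0.1):
    """One distributed SIGNSGD step with server-side majority vote."""
    signs = np.sign(worker_grads)            # each worker sends 1 bit per coordinate
    aggregated = np.sign(signs.sum(axis=0))  # server: majority vote over workers
    return x - lr * aggregated               # server broadcasts 1 bit per coordinate

x = np.zeros(4)
grads = np.array([[0.2, -1.0, 0.3, 0.0],
                  [0.5, -0.1, -0.2, 0.1],
                  [0.1, -0.4, 0.6, 0.2]])   # stochastic gradients of 3 workers
print(majority_vote_step(x, grads))
```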
Distributed mean estimation (DME) is a central building block in federated learning, where clients send local gradients to a parameter server for averaging and updating the model. Due to communication constraints, clients often use lossy compression techniques to compress the gradients, resulting in estimation inaccuracies. DME is more challenging when clients have diverse network conditions, such as constrained communication budgets and packet losses. In such settings, DME techniques often incur a significant increase in the estimation error, leading to degraded learning performance. In this work, we propose a robust DME technique named EDEN that naturally handles heterogeneous communication budgets and packet losses. We derive appealing theoretical guarantees for EDEN and evaluate it empirically. Our results demonstrate that EDEN consistently improves over state-of-the-art DME techniques.
Due to the explosion in the size of training datasets, distributed learning has received growing interest in recent years. One of the major bottlenecks is the communication cost between the central server and the local workers. While error-feedback compression has been shown to reduce communication costs with stochastic gradient descent (SGD), far less is known about communication-efficient adaptive gradient methods, which are widely used for training large-scale machine learning models. In this paper, we propose a new communication-compressed AMSGrad for distributed nonconvex optimization problems, which is provably efficient. Our proposed distributed learning framework features an effective gradient compression strategy and a worker-side model update design. We prove that the proposed communication-efficient distributed adaptive gradient method converges to a first-order stationary point with the same iteration complexity as uncompressed vanilla AMSGrad in the stochastic nonconvex optimization setting. Experiments on various benchmarks back up our theory.
We develop and analyze MARINA: a new communication-efficient method for non-convex distributed learning over heterogeneous datasets. MARINA employs a novel communication compression strategy based on compressing gradient differences that is reminiscent of, but different from, the strategy employed in the DIANA method of Mishchenko et al. (2019). Unlike virtually all competing distributed first-order methods, including DIANA, ours is based on a carefully designed biased gradient estimator, which is the key to its superior theoretical and practical performance. The communication complexity bounds we prove for MARINA are evidently better than those of all previous first-order methods. Further, we develop and analyze two variants of MARINA: VR-MARINA and PP-MARINA. The first method is designed for the case when the local loss functions owned by the clients are either of finite-sum or expectation form, and the second method allows for partial participation of the clients, a feature important in federated learning. All our methods are superior to previous state-of-the-art methods in terms of oracle/communication complexity. Finally, we provide convergence analyses of all our methods for problems satisfying the Polyak-Łojasiewicz condition.
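The compression-of-differences idea can be sketched in a few lines. The following simplified single-worker view uses rand-K as the unbiased compressor; the names and the probability-p full synchronization reflect our reading of the scheme and are illustrative only:

```python
import numpy as np

def rand_k(x, k, rng):
    """Unbiased rand-K sparsifier: keep k random coordinates, rescaled by n/k."""
    out = np.zeros_like(x)
    idx = rng.choice(len(x), size=k, replace=False)
    out[idx] = x[idx] * len(x) / k   # rescaling keeps the estimator unbiased
    return out

def marina_gradient_estimate(g_prev, grad_new, grad_old, p, rng, k=1):
    if rng.random() < p:             # occasional full (uncompressed) synchronization
        return grad_new
    # usual step: transmit only a compressed gradient *difference*
    return g_prev + rand_k(grad_new - grad_old, k, rng)

rng = np.random.default_rng(0)
print(marina_gradient_estimate(np.zeros(3), np.array([1.0, 2.0, 3.0]),
                               np.zeros(3), p=0.2, rng=rng))
```

Because consecutive gradients are close for smooth objectives, the differences being compressed are small, which is what keeps the estimator's error under control.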
Deep learning has achieved promising results in a wide range of AI applications. Larger datasets and models consistently yield better performance; however, this generally comes at the cost of longer training times, with more computation and communication. In this survey, we aim to provide a clear sketch of large-scale deep learning optimization in terms of both model accuracy and model efficiency. We survey the algorithms most commonly used for optimization, elaborate on the debatable topic of the generalization gap that arises in large-batch training, and review state-of-the-art strategies for addressing communication overhead and reducing memory footprint.
We introduce a framework, Artemis, to tackle the problem of learning in a distributed or federated setting with communication constraints and partial device participation. Several workers (randomly sampled) perform the optimization process using a central server to aggregate their computations. To alleviate the communication cost, Artemis allows compressing the information sent in both directions (from the workers to the server and conversely), combined with a memory mechanism. It improves on existing algorithms that only consider unidirectional compression (to the server), or use very strong assumptions on the compression operators, and often do not take partial device participation into account. We provide fast rates of convergence (linear up to a threshold) under weak assumptions on the stochastic gradients (noise variance bounded only at the optimal point) in the non-i.i.d. setting, highlight the impact of memory for unidirectional and bidirectional compression, and analyze Polyak-Ruppert averaging. We use convergence in distribution to derive a lower bound on the asymptotic variance, which highlights the practical limits of compression. We propose two approaches to tackle the challenging case of partial device participation and provide experimental results to demonstrate the validity of our analysis.
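The memory mechanism for uplink compression can be sketched as follows: each worker keeps a reference vector h, transmits only a compressed difference g - h, and both sides update h in lockstep. Variable names are ours, and the downlink direction (handled analogously in Artemis) is omitted:

```python
import numpy as np

def uplink_message(g, h, compress, alpha=0.5):
    """One worker-to-server exchange with a memory term h."""
    delta = compress(g - h)   # only the compressed difference is transmitted
    g_hat = h + delta         # server-side reconstruction of the gradient g
    h_new = h + alpha * delta # memory update, mirrored on worker and server
    return g_hat, h_new

# toy unbiased compressor: keep a random half of the coordinates, rescaled by 2
rng = np.random.default_rng(0)
compress = lambda x: np.where(rng.random(x.shape) < 0.5, 2.0 * x, 0.0)
g_hat, h = uplink_message(np.array([1.0, -2.0, 0.5]), np.zeros(3), compress)
print(g_hat, h)
```

As training converges, g - h shrinks, so the compression error shrinks with it; this is what enables fast rates even in the non-i.i.d. setting.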
Many areas of deep learning benefit from using increasingly larger neural networks trained on public data, as is the case for pre-trained models in NLP and computer vision. Training such models requires a lot of computational resources (e.g., HPC clusters) that are not available to small research groups and independent researchers. One way to address this is for several smaller groups to pool their computational resources together and train a model that benefits all participants. Unfortunately, in this case, any participant can jeopardize the entire training run by sending incorrect updates, whether deliberately or by mistake. Training in the presence of such peers requires specialized distributed training algorithms with Byzantine tolerance. These algorithms often sacrifice efficiency by introducing redundant communication or by passing all updates through a trusted server, making them impractical for large-scale deep learning, where models can have billions of parameters. In this work, we propose a novel protocol for secure (Byzantine-tolerant) decentralized training that emphasizes communication efficiency.
We develop a new approach to tackle communication constraints in a distributed learning problem with a central server. We propose and analyze a new algorithm that performs bidirectional compression and achieves the same convergence rate as algorithms using only uplink (from the local workers to the central server) compression. To obtain this improvement, we design MCM, an algorithm such that the downlink compression only impacts local models, while the global model is preserved. As a result, and contrary to previous works, the gradients on the local servers are computed on perturbed models. Consequently, the convergence proofs are more challenging and require a precise control of this perturbation. To ensure it, MCM additionally combines model compression with a memory mechanism. This analysis opens new doors, e.g., incorporating worker-dependent randomized models and partial participation.
High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad that uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {−1, 0, 1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet doesn't incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available.
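A hedged sketch of the core ternarization step: each coordinate is mapped to s_t · {−1, 0, +1}, where s_t is the gradient's maximum magnitude and the probability of a nonzero value is |g_i|/s_t, which keeps the estimate unbiased. Layer-wise ternarizing and gradient clipping are omitted:

```python
import numpy as np

def terngrad(g: np.ndarray, rng=None) -> np.ndarray:
    """Unbiased ternarization: each entry becomes s * {-1, 0, +1}."""
    rng = rng or np.random.default_rng()
    s = np.abs(g).max()                       # shared scaling factor s_t
    if s == 0:
        return np.zeros_like(g)
    keep = rng.random(g.shape) < np.abs(g) / s  # Bernoulli(|g_i| / s)
    return s * np.sign(g) * keep              # E[output_i] = g_i

print(terngrad(np.array([0.1, -0.8, 0.4])))
```

Only the scalar s and two bits per coordinate need to be communicated, which is where the aggressive reduction in communication time comes from.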
In distributed training of deep neural networks, parallel mini-batch SGD is widely used to speed up the training process by using multiple workers. It uses multiple workers to sample local stochastic gradients in parallel, aggregates all gradients in a single server to obtain the average, and updates each worker's local model using an SGD update with the averaged gradient. Ideally, parallel mini-batch SGD can achieve a linear speed-up of the training time (with respect to the number of workers) compared with SGD over a single worker. However, such linear scalability in practice is significantly limited by the growing demand for gradient communication as more workers are involved. Model averaging, which periodically averages individual models trained over parallel workers, is another common practice used for distributed training of deep neural networks since (Zinkevich et al. 2010; McDonald, Hall, and Mann 2010). Compared with parallel mini-batch SGD, the communication overhead of model averaging is significantly reduced. Impressively, a large body of experimental work has verified that model averaging can still achieve a good speed-up of the training time as long as the averaging interval is carefully controlled. However, it remains a mystery in theory why such a simple heuristic works so well. This paper provides a thorough and rigorous theoretical study on why model averaging can work as well as parallel mini-batch SGD with significantly less communication overhead.
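A toy sketch of the model-averaging scheme under study: each worker runs H local SGD steps and models are averaged only once per round, so communication drops by a factor of roughly H relative to parallel mini-batch SGD. All names and the squared-loss setup are illustrative:

```python
import numpy as np

def local_sgd(workers_data, w0, lr=0.1, H=10, rounds=20):
    """Model averaging: H local SGD steps per worker between averaging rounds."""
    models = [w0.copy() for _ in workers_data]
    for _ in range(rounds):
        for w, data in zip(models, workers_data):
            for _ in range(H):                 # H local steps, no communication
                x, y = data[np.random.randint(len(data))]
                w -= lr * 2 * (w @ x - y) * x  # SGD step on the squared loss
        avg = np.mean(models, axis=0)          # single communication per round
        models = [avg.copy() for _ in models]
    return models[0]

rng = np.random.default_rng(0)
data = [[(rng.normal(size=2), 1.0) for _ in range(20)] for _ in range(4)]
print(local_sgd(data, w0=np.zeros(2)))
```

Setting H = 1 recovers parallel mini-batch SGD; the paper's question is how large H can be made before convergence degrades.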
We study first-order optimization algorithms under the constraint that the descent direction is quantized using a budget of $r$ bits per dimension, where $r \in (0,\infty)$. We propose computationally efficient optimization algorithms with convergence rates matching information-theoretic performance lower bounds for: (i) smooth and strongly-convex objectives with access to an exact gradient oracle, and (ii) general convex and non-smooth objectives with access to a noisy subgradient oracle. The key to these algorithms is a polynomial-complexity source coding scheme that embeds a vector into a random subspace before quantizing it. These embeddings are such that, with high probability, their projections along any of the canonical directions of the transformed space are small. As a consequence, quantizing these embeddings and then applying an inverse transform to the original space yields a source coding method with optimal covering efficiency while utilizing only $r$ bits per dimension. Our algorithms guarantee optimality for arbitrary values of the bit budget $r$, which includes the sub-linear budget regime ($r < 1$) as well as the high-budget regime ($r \geq 1$), while requiring $O\left(n^2\right)$ multiplications, where $n$ is the dimension. We also propose an efficient relaxation of this coding scheme using Hadamard subspaces that significantly improves the performance of gradient sparsification schemes. Numerical simulations validate our theoretical claims. Our implementation is available at https://github.com/rajarshisaha95/distoptconstrocncomm.
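A simplified illustration of the "embed into a random subspace, then quantize" principle: a random orthogonal transform spreads a vector's energy roughly evenly across coordinates, so a uniform scalar quantizer with $r$ bits per dimension wastes little of its range. The paper's actual coding scheme and its Hadamard relaxation are more refined than this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 64, 3                                     # dimension, bits per coordinate
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))     # random orthogonal matrix

x = np.zeros(n); x[0] = 1.0                      # worst case for naive quantization
y = Q @ x                                        # energy now spread across coordinates
scale = np.abs(y).max()
levels = 2 ** r - 1
q = np.round((y / scale + 1) / 2 * levels)       # uniform quantizer on [-scale, scale]
y_hat = (q / levels * 2 - 1) * scale             # dequantize
x_hat = Q.T @ y_hat                              # inverse transform to original space
print(np.linalg.norm(x - x_hat))                 # small reconstruction error
```

Without the rotation, all of x's energy sits in one coordinate and a shared-scale quantizer would represent it poorly; after the rotation, every coordinate has comparable magnitude.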
Sign-based algorithms (e.g. signSGD) have been proposed as a biased gradient compression technique to alleviate the communication bottleneck in training large neural networks across multiple workers. We show simple convex counter-examples where signSGD does not converge to the optimum. Further, even when it does converge, signSGD may generalize poorly when compared with SGD. These issues arise because of the biased nature of the sign compression operator. We then show that using error-feedback, i.e. incorporating the error made by the compression operator into the next step, overcomes these issues. We prove that our algorithm (EF-SGD) with arbitrary compression operator achieves the same rate of convergence as SGD without any additional assumptions. Thus EF-SGD achieves gradient compression for free. Our experiments thoroughly substantiate the theory and show that error-feedback improves both convergence and generalization. Code can be found at https://github.com/epfml/error-feedback-SGD.
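The error-feedback loop itself fits in a few lines: the compression error is carried over and added back before the next compression, so even a biased compressor such as scaled sign loses nothing permanently. A minimal sketch, with illustrative names:

```python
import numpy as np

def ef_sgd_step(x, g, e, lr=0.1):
    """One error-feedback step with a scaled-sign compressor."""
    corrected = lr * g + e                              # add back previous error
    p = np.sign(corrected) * np.abs(corrected).mean()   # biased (scaled-sign) compression
    return x - p, corrected - p                         # new iterate, new residual error

x, e = np.zeros(3), np.zeros(3)
for g in [np.array([1.0, -0.2, 0.1])] * 5:
    x, e = ef_sgd_step(x, g, e)
print(x, e)   # the residual e stays bounded while x tracks the true updates
```

The key invariant is that whatever the compressor discards in one step is re-injected in the next, which is why convergence matches uncompressed SGD.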
Federated learning is a distributed framework according to which a model is trained over a set of devices, while keeping data localized. This framework faces several systems-oriented challenges, which include (i) a communication bottleneck, since a large number of devices upload their local updates to a parameter server, and (ii) scalability, as the federated network consists of millions of devices. Due to these systems challenges, as well as issues related to statistical heterogeneity of data and privacy concerns, designing a provably efficient federated learning method is of significant importance yet remains challenging. In this paper, we present FedPAQ, a communication-efficient Federated Learning method with Periodic Averaging and Quantization. FedPAQ relies on three key features: (1) periodic averaging, where models are updated locally at devices and only periodically averaged at the server; (2) partial device participation, where only a fraction of devices participate in each round of the training; and (3) quantized message-passing, where the edge nodes quantize their updates before uploading to the parameter server. These features address the communications and scalability challenges in federated learning. We also show that FedPAQ achieves near-optimal theoretical guarantees for strongly convex and non-convex loss functions, and empirically demonstrate the communication-computation tradeoff provided by our method.
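A hedged sketch combining the three features named above: per-round device sampling, local updates with periodic averaging, and a (here deliberately crude) quantizer applied to the uploaded model difference. The quantizer and all names are placeholders, not the paper's exact scheme:

```python
import numpy as np

def fedpaq_round(w_global, grad_fn, n_devices=100, frac=0.1,
                 local_steps=5, lr=0.05, rng=None):
    """One round: sample devices, run local steps, upload quantized differences."""
    rng = rng or np.random.default_rng()
    chosen = rng.choice(n_devices, size=int(frac * n_devices), replace=False)
    updates = []
    for d in chosen:
        w = w_global.copy()
        for _ in range(local_steps):                # local updates, no communication
            w -= lr * grad_fn(d, w)
        diff = w - w_global
        # crude 1-bit quantizer standing in for the paper's quantization step
        updates.append(np.sign(diff) * np.abs(diff).mean())
    return w_global + np.mean(updates, axis=0)      # periodic averaging at the server

# toy usage: device d pulls the model toward the all-d vector
print(fedpaq_round(np.zeros(3), lambda d, w: w - d))
```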
Communication is one of the key bottlenecks in the distributed training of large-scale machine learning models, and lossy compression of exchanged information, such as stochastic gradients or models, is one of the most effective tools for alleviating this issue. Among the most studied compression techniques is the class of unbiased compression operators, whose variance is bounded by a multiple of the squared norm of the vector we wish to compress. By design, this variance may remain high and vanishes only if the input vector approaches zero. However, unless the model being trained is over-parameterized, there is no a priori reason for the vectors we wish to compress to approach zero throughout the iterations of classical methods such as distributed compressed {\sf SGD}, which has an adverse effect on the convergence speed. Due to this issue, several more elaborate and seemingly very different algorithms have been proposed recently, with the goal of circumventing this problem. These methods are based on the idea of compressing the {\em difference} between the vector we would normally wish to compress and some auxiliary vector that changes throughout the iterative process. In this work, we take a step back and develop a unified framework for studying such methods, both conceptually and theoretically. Our framework incorporates methods that compress both gradients and models, using unbiased and biased compressors, and sheds light on the construction of the auxiliary vectors. Furthermore, our general framework can lead to improvements of several existing algorithms and can produce new algorithms. Finally, we perform several numerical experiments to illustrate and support our theoretical findings.