Sparsely activated transformers, such as Mixture of Experts (MoE), have received great interest due to their outrageous scaling capability, which enables dramatic increases in model size without significant increases in computational cost. To achieve this, MoE models replace the feed-forward sub-layer in the transformer with a Mixture-of-Experts sub-layer and use a gating network to route each token to its assigned experts. Since the common practice for efficient training of such models requires distributing experts and tokens across different machines, this routing strategy often incurs a huge cross-machine communication cost, because a token and its assigned experts may reside on different machines. In this paper, we propose Gating Dropout, which allows tokens to ignore the gating network and stay on their local machines, thus reducing cross-machine communication. Similar to traditional dropout, we also show that Gating Dropout has a regularization effect during training, resulting in improved generalization performance. We validate the effectiveness of Gating Dropout on multilingual machine translation tasks. Our results demonstrate that Gating Dropout improves a state-of-the-art MoE model with faster wall-clock convergence and better BLEU scores across a variety of model sizes and datasets.
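To make the routing override concrete, below is a minimal single-process sketch of the gating-dropout idea, assuming a toy top-1 MoE layer. The names `local_expert_id` and `gate_dropout_p`, and the per-token Bernoulli override, are illustrative assumptions rather than the paper's implementation; the actual saving comes from skipping the All2All dispatch in a distributed setup.

```python
import torch
import torch.nn as nn


class GatingDropoutMoE(nn.Module):
    def __init__(self, d_model, num_experts, local_expert_id=0, gate_dropout_p=0.2):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        )
        self.local_expert_id = local_expert_id  # the expert assumed to live on "this" machine
        self.gate_dropout_p = gate_dropout_p

    def forward(self, x):                                  # x: (tokens, d_model)
        expert_idx = self.gate(x).argmax(dim=-1)           # top-1 routing decision
        if self.training and self.gate_dropout_p > 0:
            # With probability p, ignore the gate and keep the token on the local
            # expert; in a distributed setting this skips the All2All dispatch.
            drop = torch.rand(x.size(0), device=x.device) < self.gate_dropout_p
            local = torch.full_like(expert_idx, self.local_expert_id)
            expert_idx = torch.where(drop, local, expert_idx)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):          # dispatch tokens to their experts
            mask = expert_idx == e
            if mask.any():
                out[mask] = expert(x[mask])
        return out
```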
Sparse expert models are a thirty-year-old concept re-emerging as a popular architecture in deep learning. This class of architectures includes Mixture-of-Experts, Switch Transformers, Routing Networks, BASE layers, and others, all unified by the idea that each example is acted upon by only a subset of the parameters. By doing so, sparsity decouples the parameter count from the compute per example, allowing for extremely large yet efficient models. The resulting models have demonstrated significant improvements across diverse domains such as natural language processing, computer vision, and speech recognition. We review the concept of sparse expert models, provide a basic description of the common algorithms, contextualize the advances of the deep learning era, and conclude by highlighting areas for future work.
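As a reference point for the algorithms surveyed, here is a minimal sketch of the common sparsely gated MoE layer: a learned router scores each token, only the top-k experts run, and their outputs are mixed with the renormalized gate probabilities. This is a generic illustration with assumed sizes, not any particular paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Top2MoE(nn.Module):
    def __init__(self, d_model, num_experts, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                                    # x: (tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)            # (tokens, num_experts)
        topk_p, topk_i = probs.topk(self.k, dim=-1)          # keep only k experts per token
        topk_p = topk_p / topk_p.sum(dim=-1, keepdim=True)   # renormalize the kept gates
        out = torch.zeros_like(x)
        for slot in range(self.k):
            idx, w = topk_i[:, slot], topk_p[:, slot]
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    out[mask] += w[mask, None] * expert(x[mask])
        return out


tokens = torch.randn(16, 64)
layer = Top2MoE(d_model=64, num_experts=8)
print(layer(tokens).shape)   # torch.Size([16, 64])
```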
The sparse Mixture of Experts (MoE) has received great interest due to its promising scaling capability with affordable computational overhead. MoE converts dense layers into sparse experts and utilizes a gated routing network to activate experts conditionally. However, as the number of experts grows, MoE with its outrageous parameter count suffers from overfitting and sparse data allocation. Such problems are especially severe on tasks with limited data, thus hindering the progress of MoE models in improving performance by scaling up. In this work, we propose the Mixture of Expert Clusters, a general approach that enables expert layers to learn more diverse and appropriate knowledge by imposing variance-based constraints in the routing stage. We further propose a cluster-level expert dropout strategy designed specifically for the expert-cluster structure. Our experiments show that MoEC improves performance on machine translation and natural language understanding tasks, and raises the performance ceiling for scaling up experts under limited data. We also verify that MoEC plays a positive role in mitigating overfitting and sparse data allocation.
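A hedged sketch of what a cluster-level expert dropout could look like, under the assumption that experts are grouped into clusters and a random subset inside each cluster is masked out of the router during training, so tokens re-route to surviving experts of the same cluster. The grouping, the rate, and the keep-one-survivor rule are illustrative choices, not MoEC's exact method (and the variance-based routing constraint is not shown).

```python
import torch


def cluster_expert_dropout(router_logits, cluster_ids, drop_p=0.25, training=True):
    """router_logits: (tokens, num_experts); cluster_ids: (num_experts,) cluster label per expert."""
    if not training or drop_p == 0:
        return router_logits
    num_experts = router_logits.size(-1)
    drop = torch.rand(num_experts, device=router_logits.device) < drop_p
    # never drop *all* experts of a cluster: keep at least one survivor per cluster
    for c in cluster_ids.unique():
        members = (cluster_ids == c).nonzero(as_tuple=True)[0]
        if drop[members].all():
            drop[members[0]] = False
    return router_logits.masked_fill(drop, float("-inf"))


logits = torch.randn(10, 8)                        # 10 tokens, 8 experts
clusters = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])  # two clusters of four experts
masked = cluster_expert_dropout(logits, clusters)
expert_choice = masked.argmax(dim=-1)              # top-1 routing over surviving experts
```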
In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely activated model, with an outrageous number of parameters but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs, and training instability; we address these issues with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques help wrangle the instabilities, and we show that sparse models can, for the first time, be trained in lower-precision (bfloat16) formats. We design models based on T5-Base and T5-Large to obtain up to 7x speedups in pre-training with the same computational resources. These improvements extend to multilingual settings, where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training models with up to a trillion parameters on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over the T5-XXL model.
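A minimal sketch of top-1 ("switch") routing with a capacity limit, where tokens that overflow an expert's capacity are left to the residual path. The capacity factor and the overflow handling are common conventions assumed here for illustration, not taken from the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SwitchRouter(nn.Module):
    def __init__(self, d_model, num_experts, capacity_factor=1.25):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.num_experts = num_experts
        self.capacity_factor = capacity_factor

    def forward(self, x):                          # x: (tokens, d_model)
        probs = F.softmax(self.gate(x), dim=-1)
        gate_p, expert_idx = probs.max(dim=-1)     # each token goes to exactly one expert
        capacity = max(1, int(self.capacity_factor * x.size(0) / self.num_experts))
        keep = torch.zeros(x.size(0), dtype=torch.bool, device=x.device)
        for e in range(self.num_experts):
            slots = (expert_idx == e).nonzero(as_tuple=True)[0]
            keep[slots[:capacity]] = True          # tokens past the capacity are dropped
        return expert_idx, gate_p, keep
```

Tokens with `keep` set to False would simply pass through the layer via the residual connection.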
Sparsely gated Mixture of Experts (MoE) models have been shown to be a compute-efficient method to scale model capacity for multilingual machine translation. However, for low-resource tasks, MoE models severely over-fit. We show effective regularization strategies, namely dropout techniques for MoE layers in EOM and FOM, Conditional MoE Routing and Curriculum Learning methods that prevent over-fitting and improve the performance of MoE models on low-resource tasks without adversely affecting high-resource tasks. On a massively multilingual machine translation benchmark, our strategies result in about +1 chrF++ improvement in very low resource language pairs. We perform an extensive analysis of the learned MoE routing to better understand the impact of our regularization methods and how we can improve them.
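As one concrete reading of the dropout-style regularizers mentioned above, the sketch below masks the combined MoE output for a random fraction of tokens so that they fall back on the residual connection. The rate, the inverted-dropout rescaling, and the placement of the mask are assumptions for illustration, not the exact EOM/FOM definitions.

```python
import torch


def expert_output_masking(moe_output, p=0.2, training=True):
    """moe_output: (tokens, d_model), the combined output of the MoE sub-layer."""
    if not training or p == 0:
        return moe_output
    keep = (torch.rand(moe_output.size(0), 1, device=moe_output.device) >= p).to(moe_output.dtype)
    return moe_output * keep / (1.0 - p)   # inverted-dropout style rescaling (an assumption)


x = torch.randn(16, 512)        # residual stream
moe_out = torch.randn(16, 512)  # stand-in for the MoE sub-layer output
y = x + expert_output_masking(moe_out, p=0.2)
```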
Mixture of Experts (MoE) has become popular thanks to its success in improving model quality, especially for Transformers. By routing tokens through a sparse gate to a few experts, each of which contains only part of the full model, MoE keeps the model size unchanged while significantly reducing per-token computation, thereby effectively scaling neural networks. However, we find that the current approach of jointly training the experts and the sparse gate harms model accuracy, lowering the efficiency of expensive large-scale model training. In this work, we propose a Dense-To-Sparse gate (DTS-Gate) for MoE training. Specifically, instead of using a permanently sparse gate, DTS-Gate begins as a dense gate that routes tokens to all experts, then gradually and adaptively becomes sparser, routing to fewer and fewer experts. MoE with DTS-Gate naturally decouples the training of the experts and the sparse gate by first training all experts and then learning the sparse gate. Experiments show that, compared with the state-of-the-art Switch gate in a GPT-MoE (1.5B) model on the OpenWebText dataset (40GB), DTS-Gate obtains a 2.0x speedup to reach the same validation perplexity, as well as higher FLOPs efficiency with a 1.42x speedup.
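A hedged sketch of a dense-to-sparse gating schedule: early in training every token is weighted over all experts, and the number of active experts is annealed down to a sparse top-k. The linear schedule and the final k are illustrative assumptions rather than DTS-Gate's exact adaptive rule.

```python
import torch
import torch.nn.functional as F


def dense_to_sparse_gate(router_logits, step, total_anneal_steps, final_k=1):
    """router_logits: (tokens, num_experts). Returns per-token expert weights."""
    num_experts = router_logits.size(-1)
    # linearly shrink the number of active experts from num_experts to final_k
    frac = min(step / max(total_anneal_steps, 1), 1.0)
    k = max(final_k, round(num_experts - frac * (num_experts - final_k)))
    probs = F.softmax(router_logits, dim=-1)
    topk_p, topk_i = probs.topk(k, dim=-1)
    weights = torch.zeros_like(probs).scatter_(-1, topk_i, topk_p)
    return weights / weights.sum(dim=-1, keepdim=True)   # renormalize over the active experts


logits = torch.randn(4, 16)
dense = dense_to_sparse_gate(logits, step=0, total_anneal_steps=10_000)       # all 16 experts active
sparse = dense_to_sparse_gate(logits, step=10_000, total_anneal_steps=10_000)  # top-1 only
```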
Compared to conventional bilingual translation systems, massively multilingual machine translation is appealing because a single model can translate into multiple languages and benefit from knowledge transfer for low resource languages. On the other hand, massively multilingual models suffer from the curse of multilinguality, unless scaling their size massively, which increases their training and inference costs. Sparse Mixture-of-Experts models are a way to drastically increase model capacity without the need for a proportional amount of computing. The recently released NLLB-200 is an example of such a model. It covers 202 languages but requires at least four 32GB GPUs just for inference. In this work, we propose a pruning method that allows the removal of up to 80% of experts with a negligible loss in translation quality, which makes it feasible to run the model on a single 32GB GPU. Further analysis suggests that our pruning metrics allow us to identify language-specific experts and prune non-relevant experts for a given language pair.
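A hedged sketch of utilization-based expert pruning: expert activation counts are collected on data for a given language pair and only the most-used experts are kept. The activation-count metric, the top-1 assumption, and the keep ratio are illustrative stand-ins; the paper evaluates its own pruning metrics.

```python
import torch


def expert_usage(router_logits_batches):
    """router_logits_batches: iterable of (tokens, num_experts) routing logits."""
    counts = None
    for logits in router_logits_batches:
        choice = logits.argmax(dim=-1)                      # top-1 expert per token
        c = torch.bincount(choice, minlength=logits.size(-1))
        counts = c if counts is None else counts + c
    return counts


def prune_experts(counts, keep_ratio=0.2):
    """Return the indices of the experts to keep (e.g. the 20% most used)."""
    keep_n = max(1, int(keep_ratio * counts.numel()))
    return counts.topk(keep_n).indices.sort().values


batches = [torch.randn(128, 64) for _ in range(10)]   # fake routing logits, 64 experts
keep = prune_experts(expert_usage(batches), keep_ratio=0.2)
print(keep)   # indices of the most-used experts for this (hypothetical) language pair
```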
Mixture-of-Experts (MoE) parallelism is a recent advancement that scales up the model size with constant computational cost. MoE selects different sets of parameters (i.e., experts) for each incoming token, resulting in a sparsely-activated model. Despite several successful applications of MoE, its training efficiency degrades significantly as the number of experts increases. The routing stage in MoE relies on the efficiency of the All2All communication collective, which suffers from network congestion and has poor scalability. To mitigate these issues, we introduce SMILE, which exploits heterogeneous network bandwidth and splits a single-step routing into bi-level routing. Our experimental results show that the proposed method obtains a 2.5x speedup over Switch Transformer in terms of pretraining throughput on the Colossal Clean Crawled Corpus without losing any convergence speed.
As the training of giant dense models hits the boundary of the availability and capability of today's hardware resources, Mixture-of-Experts (MoE) models have become one of the most promising model architectures, owing to their significant training cost reduction compared with quality-equivalent dense models. Their training cost savings have been demonstrated from encoder-decoder models (prior works) to a 5x saving for autoregressive language models (this work along with parallel explorations). However, due to the much larger model scale and the unique architecture, how to provide fast MoE model inference remains challenging and unsolved, limiting its practical use. To address this, we present DeepSpeed-MoE, part of the DeepSpeed library, which includes novel MoE architecture designs and model compression techniques that reduce MoE model size by up to 3.7x, as well as a highly optimized inference system that provides 7.3x better latency and cost than existing MoE inference solutions. DeepSpeed-MoE offers unprecedented scale and efficiency to serve massive MoE models with up to 4.5x faster and 9x cheaper inference compared with quality-equivalent dense models. We hope our innovations and systems help open a promising path toward new directions in the large-model landscape, a shift from dense to sparse MoE models, where training and deploying higher-quality models with fewer resources becomes more widely practical.
Training large, deep neural networks to convergence can be prohibitively expensive. As a result, often only a small selection of popular, dense models are reused across different contexts and tasks. Increasingly, sparsely activated models, which seek to decouple model size from computation costs, are becoming an attractive alternative to dense models. Although more efficient in terms of quality and computation cost, sparse models remain data-hungry and costly to train from scratch in the large scale regime. In this work, we propose sparse upcycling -- a simple way to reuse sunk training costs by initializing a sparsely activated Mixture-of-Experts model from a dense checkpoint. We show that sparsely upcycled T5 Base, Large, and XL language models and Vision Transformer Base and Large models, respectively, significantly outperform their dense counterparts on SuperGLUE and ImageNet, using only ~50% of the initial dense pretraining sunk cost. The upcycled models also outperform sparse models trained from scratch on 100% of the initial dense pretraining computation budget.
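A minimal sketch of the upcycling initialization: each expert of a new MoE layer starts as a copy of the dense checkpoint's feed-forward block, and the router is freshly initialized. The module layout is illustrative, since real T5/ViT checkpoints have their own parameter structure.

```python
import copy

import torch.nn as nn


def upcycle_ffn_to_moe(dense_ffn: nn.Module, d_model: int, num_experts: int) -> nn.ModuleDict:
    """Turn one dense FFN block into an MoE layer whose experts start as copies of it."""
    experts = nn.ModuleList(copy.deepcopy(dense_ffn) for _ in range(num_experts))
    router = nn.Linear(d_model, num_experts)     # new, randomly initialized gate
    return nn.ModuleDict({"router": router, "experts": experts})


dense_ffn = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))
moe_layer = upcycle_ffn_to_moe(dense_ffn, d_model=512, num_experts=8)
```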
Code completion is a valuable topic in both academia and industry. Recently, large-scale mono-programming-lingual (MonoPL) pre-training models have been proposed to boost the performance of code completion. However, the code completion on low-resource programming languages (PL) is difficult for the data-driven paradigm, while there are plenty of developers using low-resource PLs. On the other hand, there are few studies exploring the effects of multi-programming-lingual (MultiPL) pre-training for the code completion, especially the impact on low-resource programming languages. To this end, we propose the MultiCoder to enhance the low-resource code completion via MultiPL pre-training and MultiPL Mixture-of-Experts (MoE) layers. We further propose a novel PL-level MoE routing strategy (PL-MoE) for improving the code completion on all PLs. Experimental results on CodeXGLUE and MultiCC demonstrate that 1) the proposed MultiCoder significantly outperforms the MonoPL baselines on low-resource programming languages, and 2) the PL-MoE module further boosts the performance on six programming languages. In addition, we analyze the effects of the proposed method in detail and explore the effectiveness of our method in a variety of scenarios.
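A hedged sketch of one possible language-level routing scheme in the spirit of PL-MoE: each example is dispatched to the expert associated with its programming language instead of (or in addition to) a learned token router. The fixed language-to-expert table `PL_TO_EXPERT` is an illustrative assumption, not the paper's routing strategy.

```python
import torch
import torch.nn as nn

# hypothetical mapping from programming language to expert index
PL_TO_EXPERT = {"python": 0, "java": 1, "go": 2, "ruby": 3, "php": 4, "javascript": 5}


class PLRoutedMoE(nn.Module):
    def __init__(self, d_model, num_experts):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x, pl: str):     # x: (tokens, d_model); pl: language tag of the example
        return self.experts[PL_TO_EXPERT[pl]](x)


layer = PLRoutedMoE(d_model=256, num_experts=len(PL_TO_EXPERT))
out = layer(torch.randn(32, 256), pl="go")
```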
The ever-increasing scale of model sizes, together with continuing improvements in performance, heralds the arrival of the era of big models. In this report, we explore what makes large-model training work by diving into the training objectives and the training methodologies. Specifically, the training objectives describe how to leverage web-scale data to develop extremely capable, enormous models based on self-supervised learning, while the training methodologies, built on distributed training, describe how to make large-model training a reality. We summarize the existing training methodologies into three main categories: training parallelism, memory-saving techniques, and model-sparsity designs. Training parallelism can be categorized into data, pipeline, and tensor parallelism according to the dimension along which parallelism occurs. Memory-saving techniques are orthogonal and complementary to training parallelism, and model-sparsity designs further scale up the model size at a constant computational cost. A continuously updated list of big-model training resources is available at https://github.com/qhliu26/bm-training.
We present MegaBlocks, a system for efficient Mixture-of-Experts (MoE) training on GPUs. Our system is motivated by the limitations of current frameworks, which restrict the dynamic routing in MoE layers to satisfy the constraints of existing software and hardware. These formulations force a tradeoff between model quality and hardware efficiency, as users must choose between dropping tokens from the computation or wasting computation and memory on padding. To address these limitations, we reformulate MoE computation in terms of block-sparse operations and develop new block-sparse GPU kernels that efficiently handle the dynamism present in MoEs. Our approach never drops tokens and maps efficiently to modern hardware, enabling end-to-end training speedups of up to 40% over MoEs trained with the state-of-the-art Tutel library and 2.4x over DNNs trained with the highly-optimized Megatron-LM framework.
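To illustrate the "dropless" reformulation (without reproducing MegaBlocks' block-sparse kernels), the sketch below groups tokens by their assigned expert and runs one variable-size matmul per expert, so nothing is dropped and nothing is padded. Function and variable names are illustrative.

```python
import torch
import torch.nn as nn


def dropless_moe_forward(x, expert_idx, experts):
    """x: (tokens, d_model); expert_idx: (tokens,); experts: list of modules."""
    order = expert_idx.argsort()                       # permute tokens so each expert's group is contiguous
    counts = torch.bincount(expert_idx, minlength=len(experts))
    out_sorted = torch.empty_like(x)
    start = 0
    for e, n in enumerate(counts.tolist()):            # one variable-size matmul per expert
        if n:
            out_sorted[start:start + n] = experts[e](x[order[start:start + n]])
        start += n
    out = torch.empty_like(x)
    out[order] = out_sorted                            # undo the permutation
    return out


experts = [nn.Linear(64, 64) for _ in range(4)]
x = torch.randn(100, 64)
idx = torch.randint(0, 4, (100,))
y = dropless_moe_forward(x, idx, experts)
```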
Mixture-of-Experts layers (MoEs) enable efficient scaling of language models through conditional computation. This paper presents a detailed empirical study of how autoregressive MoE language models compare with dense models across a wide range of settings: in- and out-of-domain language modeling, zero- and few-shot priming, and full fine-tuning. With the exception of fine-tuning, we find MoEs to be substantially more compute-efficient. At more modest training budgets, MoEs can match the performance of dense models using roughly 4x less compute. This gap narrows at scale, but our largest MoE model (1.1T parameters) consistently outperforms a compute-equivalent dense model (6.7B parameters). Overall, this performance gap varies significantly across tasks and domains, suggesting that MoE and dense models generalize in different ways that deserve further study. We make our code and models publicly available for research use.
There has been recent success in pre-training on monolingual data and fine-tuning for machine translation (MT), but it remains unclear how best to leverage a pre-trained model for a given MT task. This paper studies the benefits and drawbacks of freezing parameters when fine-tuning pre-trained models for MT. We focus on 1) fine-tuning BART, a model pre-trained only on English monolingual data, and 2) fine-tuning mBART, a model pre-trained on monolingual data from 25 languages. For BART we obtain the best performance by freezing most of the model parameters and adding extra positional embeddings. For mBART we match the performance of naive fine-tuning for most language pairs with the encoder and most of the decoder frozen. The encoder-decoder attention parameters are the most important to fine-tune. When restricting ourselves to an out-of-domain training set for Vietnamese-to-English, we see the largest improvements over the baseline.
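A minimal sketch of parameter freezing during fine-tuning, assuming PyTorch modules. Which parameter groups to leave trainable (here, anything whose name matches a keyword such as normalization layers or positional embeddings) is exactly the design question studied above, so the split shown is an assumption, not the paper's recommended recipe.

```python
import torch.nn as nn


def freeze_most_parameters(model: nn.Module, trainable_keywords=("embed_positions", "layer_norm")):
    # freeze everything except parameters whose name contains one of the keywords
    for name, param in model.named_parameters():
        param.requires_grad = any(k in name for k in trainable_keywords)
    n_train = sum(p.numel() for p in model.parameters() if p.requires_grad)
    n_total = sum(p.numel() for p in model.parameters())
    print(f"trainable: {n_train / n_total:.1%} of {n_total} parameters")


# Demonstrated on a toy encoder; with the transformers library installed one could
# pass BartForConditionalGeneration.from_pretrained("facebook/bart-large") instead.
toy = nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model=512, nhead=8), num_layers=2)
freeze_most_parameters(toy, trainable_keywords=("norm",))
```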
Recently, the Mixture-of-Experts (MoE for short) architecture has achieved remarkable success in increasing the model capacity of large-scale language models. However, MoE requires incorporating significantly more parameters than the base model being extended. In this paper, we propose building a parameter-efficient MoE architecture by sharing information across experts. We adopt the matrix product operator (MPO, a tensor decomposition from quantum many-body physics) to reconstruct the parameter matrices in the expert layer, and increase the model capacity of pre-trained language models by sharing the parameters of the central tensor (containing the core information) among different experts, while the specificity of each expert is achieved through its auxiliary tensors (complementing the central tensor). To address the unbalanced optimization issue, we further design a gradient-mask strategy for the MPO-based MoE architecture. Extensive experiments based on T5 and GPT-2 show improved performance and efficiency of the pre-trained language models (with a 27.2x reduction in total parameters for superior model performance compared with the Switch Transformer). Our code is publicly available at https://github.com/RUCAIBox/MPOE.
Multilingual NMT has become an attractive solution for deploying MT in production. But to match bilingual quality, it comes at the cost of larger and slower models. In this work, we consider several ways to make multilingual NMT faster at inference time without degrading its quality. We experiment with several "light decoder" architectures in two 20-language multi-parallel settings: small-scale on TED Talks and large-scale on ParaCrawl. Our experiments show that combining a shallow decoder with vocabulary filtering yields more than twice faster inference with no loss in translation quality. We validate our findings with BLEU and chrF (on 380 language pairs), robustness evaluation, and human evaluation.
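A hedged sketch of vocabulary filtering at inference time: the output projection is restricted to a candidate subset of the target vocabulary, which shrinks the softmax. How the candidate set is built (here just a random stand-in tensor) is an illustrative assumption; the study pairs this with a shallow decoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def filtered_logits(decoder_state, output_proj: nn.Linear, candidate_ids: torch.Tensor):
    """decoder_state: (batch, d_model); candidate_ids: (num_candidates,) vocabulary subset."""
    w = output_proj.weight[candidate_ids]                     # (num_candidates, d_model)
    b = output_proj.bias[candidate_ids] if output_proj.bias is not None else None
    logits = F.linear(decoder_state, w, b)                    # softmax over the small subset only
    return logits, candidate_ids                              # map predictions back via candidate_ids


proj = nn.Linear(512, 32_000)                      # full target vocabulary
candidates = torch.randint(0, 32_000, (2_000,))    # stand-in for a per-sentence candidate set
logits, ids = filtered_logits(torch.randn(8, 512), proj, candidates)
next_token = ids[logits.argmax(dim=-1)]            # predicted token ids in the full vocabulary
```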
Multilingual machine translation suffers from negative interference across languages. A common solution is to relax parameter sharing with language-specific modules like adapters. However, adapters of related languages are unable to transfer information, and their total number of parameters becomes prohibitively expensive as the number of languages grows. In this work, we overcome these drawbacks using hyper-adapters -- hyper-networks that generate adapters from language and layer embeddings. While past work had poor results when scaling hyper-networks, we propose a rescaling fix that significantly improves convergence and enables training larger hyper-networks. We find that hyper-adapters are more parameter efficient than regular adapters, reaching the same performance with up to 12 times less parameters. When using the same number of parameters and FLOPS, our approach consistently outperforms regular adapters. Also, hyper-adapters converge faster than alternative approaches and scale better than regular dense networks. Our analysis shows that hyper-adapters learn to encode language relatedness, enabling positive transfer across languages.
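A hedged sketch of the hyper-adapter idea: a small hyper-network maps a language embedding and a layer embedding to the weights of a bottleneck adapter for that (language, layer) pair. The sizes, the single shared linear hyper-network, and the residual placement are illustrative assumptions, not the paper's exact parameterization or rescaling fix.

```python
import torch
import torch.nn as nn


class HyperAdapter(nn.Module):
    def __init__(self, num_langs, num_layers, d_model=512, bottleneck=64, d_embed=32):
        super().__init__()
        self.lang_emb = nn.Embedding(num_langs, d_embed)
        self.layer_emb = nn.Embedding(num_layers, d_embed)
        self.d_model, self.bottleneck = d_model, bottleneck
        # one shared hyper-network emits both adapter projection matrices
        self.hyper = nn.Linear(2 * d_embed, 2 * d_model * bottleneck)

    def forward(self, x, lang_id, layer_id):                  # x: (tokens, d_model)
        z = torch.cat([self.lang_emb(lang_id), self.layer_emb(layer_id)], dim=-1)
        w = self.hyper(z)
        w_down = w[: self.d_model * self.bottleneck].view(self.bottleneck, self.d_model)
        w_up = w[self.d_model * self.bottleneck:].view(self.d_model, self.bottleneck)
        h = torch.relu(x @ w_down.t())                        # down-project
        return x + h @ w_up.t()                               # up-project plus residual


adapter = HyperAdapter(num_langs=20, num_layers=6)
y = adapter(torch.randn(10, 512), lang_id=torch.tensor(3), layer_id=torch.tensor(1))
```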
Transfer learning, where a model is first pre-trained on a data-rich task before being finetuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
We present an open-source toolkit for neural machine translation (NMT). The new toolkit is mainly based on the Transformer architecture (Vaswani et al., 2017), along with many other improvements detailed below, in order to create a self-contained, easy-to-use, consistent, and comprehensive framework for machine translation tasks across various domains. It is designed to support both bilingual and multilingual translation tasks, from building a model on the corpora of interest to inferring new predictions or packaging the model into a JIT format ready for serving.