Inference from large autoregressive models like Transformers is slow - decoding K tokens takes K serial runs of the model. In this work we introduce speculative decoding - an algorithm to sample from autoregressive models faster without any changes to the outputs, by computing several tokens in parallel. At the heart of our approach lie the observations that (1) hard language-modeling tasks often include easier subtasks that can be approximated well by more efficient models, and (2) using speculative execution and a novel sampling method, we can make exact decoding from the large models faster, by running them in parallel on the outputs of the approximation models, potentially generating several tokens concurrently, and without changing the distribution. Our method supports existing off-the-shelf models without retraining or architecture changes. We demonstrate it on T5-XXL and show a 2X-3X acceleration compared to the standard T5X implementation, with identical outputs.
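The core of the method is a modified rejection-sampling step that keeps the target model's distribution exact. Below is a minimal sketch of that per-token step, assuming the draft and target distributions at the current position are available as numpy arrays (`p_probs`, `q_probs`, and `speculative_step` are illustrative names, not the paper's code); in the full algorithm the draft model proposes several tokens, the target model scores all of them in a single parallel forward pass, and tokens are accepted left to right until the first rejection.

```python
import numpy as np

def speculative_step(p_probs, q_probs, drafted_token, rng):
    """Accept or reject one token proposed by the draft (approximation) model.

    p_probs: target-model distribution over the vocabulary at this position.
    q_probs: draft-model distribution that `drafted_token` was sampled from.
    Accepting with probability min(1, p/q) and resampling rejections from the
    normalized residual max(0, p - q) leaves the target distribution unchanged.
    """
    p, q = p_probs[drafted_token], q_probs[drafted_token]
    if rng.random() < min(1.0, p / q):
        return drafted_token, True
    residual = np.maximum(p_probs - q_probs, 0.0)
    residual /= residual.sum()
    return int(rng.choice(len(p_probs), p=residual)), False
```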
Recent advances in Transformer-based large language models (LLMs) have led to performance improvements on many tasks. These gains come with a drastic increase in model size, which can make inference slow and costly. In practice, however, the generations produced by LLMs are composed of varying levels of difficulty: while some predictions truly benefit from the model's full capacity, other continuations are more trivial and can be solved with reduced compute. In this work, we introduce Confident Adaptive Language Modeling (CALM), a framework for dynamically allocating different amounts of compute per input and per generation timestep. Early-exit decoding involves several challenges that we address here, such as: (1) what confidence measure to use; (2) connecting sequence-level constraints to local per-token exit decisions; and (3) attending back to hidden representations that are missing because earlier tokens exited early. Through theoretical analysis and empirical experiments on three diverse text generation tasks, we demonstrate the efficacy of our framework in reducing compute - a potential speedup of up to $\times 3$ - while maintaining high performance.
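As a rough illustration of confidence-based early exiting (a sketch of the general idea, not the paper's implementation), the snippet below stops running decoder layers for the current token once an intermediate prediction looks confident enough; `layers`, `lm_head`, and `threshold` are hypothetical stand-ins, and the confidence measure shown is the gap between the top two softmax probabilities, one of the options the paper considers.

```python
import torch

def early_exit_decode_step(hidden, layers, lm_head, threshold):
    """Run decoder layers for one token, exiting early when confident.

    hidden: the token's current hidden state; layers: the decoder layer stack
    (schematic callables); lm_head: shared output projection used to read off
    an intermediate prediction; threshold: confidence threshold calibrated offline.
    """
    for layer in layers:
        hidden = layer(hidden)
        probs = torch.softmax(lm_head(hidden), dim=-1)
        top2 = torch.topk(probs, k=2).values
        confidence = (top2[..., 0] - top2[..., 1]).item()  # gap between top-2 probabilities
        if confidence > threshold:
            break  # skip the remaining layers for this token
    return hidden, probs.argmax(dim=-1)
```

The sketch omits the part that challenge (3) above refers to: when a later token attends back to a layer that an earlier token skipped, the missing hidden states have to be filled in (e.g., copied from the exit layer).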
Transfer learning, where a model is first pre-trained on a data-rich task before being finetuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
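For concreteness, the text-to-text format turns every task into string-to-string prediction with a task prefix, so a single model and training objective cover translation, classification, and summarization alike; the examples below paraphrase the paper's illustration (exact prefix strings vary by task).

```python
# Each task becomes (input text, target text); the model is trained the same way on all of them.
examples = [
    ("translate English to German: That is good.", "Das ist gut."),
    ("cola sentence: The course is jumping well.", "not acceptable"),
    ("summarize: state authorities dispatched emergency crews tuesday ...",
     "six people hospitalized after a storm ..."),
]
```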
Fusion-in-Decoder (FiD) is a powerful retrieval-augmented language model that sets the state-of-the-art on many knowledge-intensive NLP tasks. However, FiD suffers from very expensive inference. We show that the majority of inference time results from memory bandwidth constraints in the decoder, and propose two simple changes to the FiD architecture to speed up inference by 7x. The faster decoder inference then allows for a much larger decoder. We denote FiD with the above modifications as FiDO, and show that it strongly improves performance over existing FiD models for a wide range of inference budgets. For example, FiDO-Large-XXL performs faster inference than FiD-Base and achieves better performance than FiD-Large.
Transformer-based neural models are used in many AI applications. Training these models is expensive, as it requires large amounts of GPU resources over long durations. It is also challenging because typical data such as sentences have variable lengths, and the Transformer's computation patterns are more complex than those of convolutional neural networks. Existing systems either focus only on model inference or optimize only for BERT-like encoder models. In this paper, we present LightSeq2, a system that accelerates training of general Transformer models on GPUs. We propose a series of GPU optimization techniques tailored to the specific computation flow and memory access patterns of Transformer models. LightSeq2 supports many model architectures, including BERT (encoder-only), GPT (decoder-only), Transformer (encoder-decoder), and vision Transformers. Our experiments across a variety of models and benchmarks show that LightSeq2 is consistently faster (1.4-3.5x) than previous systems on different GPUs. In particular, it achieves a 308% training speedup compared with existing systems on a large public machine translation benchmark (WMT14 English-German).
The past several years have witnessed the success of Transformer-based models, whose scale and application scenarios continue to grow aggressively. The current landscape of Transformer models is increasingly diverse: model sizes vary drastically, with the largest reaching enormous parameter counts; model characteristics differ because of the sparsity introduced by Mixture-of-Experts; target application scenarios can be latency-critical or throughput-oriented; and deployment hardware can be single- or multi-GPU systems with different types of memory and storage. With this growing diversity and the fast-evolving pace of Transformer models, designing a high-performance and efficient inference system is extremely challenging. In this paper, we present DeepSpeed Inference, a comprehensive system solution for Transformer model inference that addresses these challenges. DeepSpeed Inference consists of (1) a multi-GPU inference solution that minimizes latency while maximizing the throughput of both dense and sparse Transformer models when they fit in aggregate GPU memory, and (2) a heterogeneous inference solution that leverages CPU and NVMe memory in addition to GPU memory and compute, enabling high inference throughput for large models that do not fit in aggregate GPU memory. DeepSpeed Inference reduces latency by up to 7x over the state of the art for latency-oriented scenarios and increases throughput by more than 1.5x for throughput-oriented scenarios. Moreover, by leveraging hundreds of GPUs it enables model inference at an unprecedented parameter scale under real-time latency constraints. It can serve models 25x larger than GPU-only solutions while delivering 84 TFLOPS (over 50% of A6000 peak).
Large Transformer models yield impressive results on many tasks, but are expensive to train, or even to fine-tune, and so slow at decoding that their use and study become out of reach. We address this problem by leveraging sparsity. We study sparse variants for all layers in the Transformer and propose Scaling Transformers, a family of Transformer models that use sparse layers to scale efficiently and perform unbatched decoding much faster than the standard Transformer as we scale up the model size. Surprisingly, the sparse layers are enough to obtain the same perplexity as a standard Transformer with the same number of parameters. We also integrate with prior sparsity approaches to attention, enabling fast inference on long sequences even with limited memory. This results in performance competitive with the state of the art on long-text summarization.
In deep learning, models typically reuse the same parameters for all inputs. Mixture-of-Experts (MoE) models defy this and instead select different parameters for each incoming example. The result is a sparsely activated model - with an outrageous number of parameters - but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs, and training instability; we address these issues with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques help tame the instabilities, and we show for the first time that large sparse models can be trained in lower-precision (bfloat16) formats. We design models based on T5-Base and T5-Large and obtain up to 7x increases in pre-training speed with the same computational resources. These improvements extend to multilingual settings, where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training models with up to trillions of parameters on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over the T5-XXL model.
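A minimal numpy sketch of the simplified top-1 ("switch") routing: each token is sent to the single expert with the highest router probability, and that expert's output is scaled by the router probability so routing stays differentiable. Capacity factors, the load-balancing loss, and expert parallelism are omitted, and the names are illustrative rather than taken from the paper's code.

```python
import numpy as np

def switch_route(tokens, router_w, experts):
    """tokens: (n, d) token representations; router_w: (d, num_experts) router weights;
    experts: list of callables, each mapping a (d,) vector to a (d,) vector."""
    logits = tokens @ router_w                          # (n, num_experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)               # softmax over experts
    outputs = np.empty_like(tokens)
    for i, tok in enumerate(tokens):
        e = int(probs[i].argmax())                      # top-1 expert for this token
        outputs[i] = probs[i, e] * experts[e](tok)      # gate the expert output by its router probability
    return outputs
```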
As the training of giant dense models hits the limits of the availability and capability of today's hardware resources, Mixture-of-Experts (MoE) models have become one of the most promising model architectures, thanks to their significantly lower training cost compared to quality-equivalent dense models. Their training cost savings have been demonstrated from encoder-decoder models (prior work) to a 5x saving for autoregressive language models (this work, along with parallel explorations). However, due to the much larger model size and unique architecture, how to provide fast MoE model inference remains challenging and unsolved, limiting its practical use. To address this, we present DeepSpeed-MoE, part of the DeepSpeed library, comprising novel MoE architecture designs and model compression techniques that reduce MoE model size by up to 3.7x, together with a highly optimized inference system that provides 7.3x better latency and cost compared to existing MoE inference solutions. DeepSpeed-MoE offers unprecedented scale and efficiency, serving massive MoE models with up to 4.5x faster and 9x cheaper inference compared to quality-equivalent dense models. We hope our innovations and systems help open a promising path toward a new direction in the large-model landscape: a shift from dense to sparse MoE models, where training and deploying higher-quality models with fewer resources becomes more widely practical.
State-of-the-art poetry generation systems are often complex. They either consist of task-specific model pipelines, incorporate prior knowledge in the form of manually created constraints or both. In contrast, end-to-end models would not suffer from the overhead of having to model prior knowledge and could learn the nuances of poetry from data alone, reducing the degree of human supervision required. In this work, we investigate end-to-end poetry generation conditioned on styles such as rhyme, meter, and alliteration. We identify and address lack of training data and mismatching tokenization algorithms as possible limitations of past attempts. In particular, we successfully pre-train and release ByGPT5, a new token-free decoder-only language model, and fine-tune it on a large custom corpus of English and German quatrains annotated with our styles. We show that ByGPT5 outperforms other models such as mT5, ByT5, GPT-2 and ChatGPT, while also being more parameter efficient and performing favorably compared to humans. In addition, we analyze its runtime performance and introspect the model's understanding of style conditions. We make our code, models, and datasets publicly available.
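"Token-free" here means the model consumes raw UTF-8 bytes instead of subword tokens, which side-steps the tokenization mismatches mentioned above and lets the model see sub-word structure such as rhyme directly; a trivial illustration:

```python
text = "Rhyme and meter live at the character level."
byte_ids = list(text.encode("utf-8"))  # vocabulary of at most 256 byte values (plus any special ids)
print(byte_ids[:8])                    # [82, 104, 121, 109, 101, 32, 97, 110]
```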
Large language models have demonstrated outstanding performance on a wide range of tasks such as question answering and code generation. On a high level, given an input, a language model can be used to automatically complete the sequence in a statistically-likely way. Based on this, users prompt these models with language instructions or examples, to implement a variety of downstream tasks. Advanced prompting methods can even imply interaction between the language model, a user, and external tools such as calculators. However, to obtain state-of-the-art performance or adapt language models for specific tasks, complex task- and model-specific programs have to be implemented, which may still require ad-hoc interaction. Based on this, we present the novel idea of Language Model Programming (LMP). LMP generalizes language model prompting from pure text prompts to an intuitive combination of text prompting and scripting. Additionally, LMP allows constraints to be specified over the language model output. This enables easy adaption to many tasks, while abstracting language model internals and providing high-level semantics. To enable LMP, we implement LMQL (short for Language Model Query Language), which leverages the constraints and control flow from an LMP prompt to generate an efficient inference procedure that minimizes the number of expensive calls to the underlying language model. We show that LMQL can capture a wide range of state-of-the-art prompting methods in an intuitive way, especially facilitating interactive flows that are challenging to implement with existing high-level APIs. Our evaluation shows that we retain or increase the accuracy on several downstream tasks, while also significantly reducing the required amount of computation or cost in the case of pay-to-use APIs (13-85% cost savings).
Diffusion models have quickly become the go-to paradigm for generative modelling of perceptual signals (such as images and sound) through iterative refinement. Their success hinges on the fact that the underlying physical phenomena are continuous. For inherently discrete and categorical data such as language, various diffusion-inspired alternatives have been proposed. However, the continuous nature of diffusion models conveys many benefits, and in this work we endeavour to preserve it. We propose CDCD, a framework for modelling categorical data with diffusion models that are continuous both in time and input space. We demonstrate its efficacy on several language modelling tasks.
Recent advances in self-supervised learning associated with the Transformer architecture have enabled natural language processing (NLP) models to reach extremely low perplexity. Such powerful models demand ever-increasing model sizes, and thus large amounts of computation and memory. In this paper, we propose an efficient inference framework for large-scale generative language models. As the key to reducing model size, we quantize weights with a non-uniform quantization method. Quantized matrix multiplications are then accelerated by our proposed kernel, called nuQmm, which allows a wide trade-off between compression ratio and accuracy. The proposed nuQmm reduces not only the latency of each GPU but also that of the entire inference pipeline for large LMs, because a high compression ratio (via low-bit quantization) lowers the minimum required number of GPUs. We demonstrate that nuQmm can accelerate inference of the GPT-3 (175B) model by about 14.4x and reduce energy consumption by 93%.
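The abstract does not spell out the quantization scheme, so the sketch below shows only a generic form of non-uniform (codebook-based) weight quantization, not the nuQmm kernel itself: each weight row keeps a few full-precision levels learned from the data and stores a small integer code per weight.

```python
import numpy as np

def quantize_row(w, n_levels=16, iters=20):
    """Non-uniform quantization of one weight row via 1-D k-means.

    Returns integer codes and the learned level table; with 16 levels each
    weight costs only 4 bits plus a shared table per row."""
    levels = np.quantile(w, np.linspace(0.0, 1.0, n_levels))   # data-dependent, non-uniform levels
    for _ in range(iters):
        codes = np.abs(w[:, None] - levels[None, :]).argmin(axis=1)
        for k in range(n_levels):
            if np.any(codes == k):
                levels[k] = w[codes == k].mean()
    return codes.astype(np.uint8), levels

def dequantize_row(codes, levels):
    return levels[codes]  # a table lookup recovers approximate full-precision weights
```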
Getting the most out of limited resources allows advances in natural language processing (NLP) research and practice while remaining conservative with resources. Those resources may be data, time, storage, or energy. Recent work in NLP has yielded interesting results from scaling; however, using scale alone to improve results means that resource consumption also scales. That relationship motivates research into efficient methods that require fewer resources to achieve similar results. This survey covers methods and findings on efficiency in NLP, aiming to guide new researchers in the field and inspire the development of new methods.
Training large, deep neural networks to convergence can be prohibitively expensive. As a result, often only a small selection of popular, dense models are reused across different contexts and tasks. Increasingly, sparsely activated models, which seek to decouple model size from computation costs, are becoming an attractive alternative to dense models. Although more efficient in terms of quality and computation cost, sparse models remain data-hungry and costly to train from scratch in the large scale regime. In this work, we propose sparse upcycling -- a simple way to reuse sunk training costs by initializing a sparsely activated Mixture-of-Experts model from a dense checkpoint. We show that sparsely upcycled T5 Base, Large, and XL language models and Vision Transformer Base and Large models, respectively, significantly outperform their dense counterparts on SuperGLUE and ImageNet, using only ~50% of the initial dense pretraining sunk cost. The upcycled models also outperform sparse models trained from scratch on 100% of the initial dense pretraining computation budget.
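A rough sketch of the upcycling initialization, assuming a simple dict-of-arrays checkpoint with hypothetical parameter names: every expert starts as an exact copy of the dense feed-forward block, all other weights are copied verbatim, and the router is the only newly initialized component.

```python
import numpy as np

def upcycle_dense_checkpoint(dense_params, num_experts, d_model, rng):
    """Build a Mixture-of-Experts checkpoint from a dense one (illustrative layout)."""
    moe_params = dict(dense_params)                 # attention, norms, embeddings copied as-is
    for name in ("mlp/wi", "mlp/wo"):               # dense feed-forward weights (hypothetical names)
        w = dense_params[name]
        moe_params[name] = np.stack([w.copy() for _ in range(num_experts)])  # (num_experts, ...)
    moe_params["router/w"] = 0.02 * rng.standard_normal((d_model, num_experts))  # fresh router
    return moe_params
```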
The Transformer is a deep learning language model widely used for natural language processing (NLP) services in data centers. Among Transformer models, the Generative Pre-trained Transformer (GPT) has achieved remarkable performance in text generation, or natural language generation (NLG), which requires processing a large input context in the summarization stage, followed by a generation stage that produces one word at a time. Conventional platforms such as GPUs are specialized for the parallel processing of large inputs in the summarization stage, but their performance degrades significantly in the generation stage because of its sequential nature. An efficient hardware platform is therefore needed to address the high latency caused by the sequential nature of text generation. In this paper, we present DFX, a multi-FPGA accelerator that executes GPT-2 model inference end-to-end with low latency and high throughput in both the summarization and generation stages. DFX uses model parallelism and an optimized dataflow that is model- and hardware-aware for fast, simultaneous execution across devices. Its compute cores operate on custom instructions and provide GPT-2 operations end-to-end. We implement the proposed hardware architecture on four Xilinx Alveo U280 FPGAs and utilize all channels of the high-bandwidth memory (HBM) and the maximum number of compute resources for high hardware efficiency. DFX achieves 5.58x speedup and 3.99x energy efficiency over four NVIDIA V100 GPUs on the modern GPT-2 model. DFX is also more cost-effective than the GPU appliance, suggesting that it is a promising solution for text generation workloads in cloud data centers.
In this paper, we propose a new generative model, Step-unrolled Denoising Autoencoders (SUNDAE), that does not rely on autoregressive modeling. Similarly to denoising diffusion techniques, SUNDAE is applied repeatedly to a sequence of tokens, starting from random inputs and improving them each time until convergence. We present a simple new improvement operator that converges in fewer iterations than diffusion methods, while qualitatively producing better samples on natural language datasets. SUNDAE achieves state-of-the-art results (among non-autoregressive methods) on the WMT'14 English-to-German translation task, and good qualitative results on unconditional language modeling on the Colossal Clean Common Crawl dataset and on a dataset of Python code from GitHub. The non-autoregressive nature of SUNDAE opens up possibilities beyond left-to-right prompted generation, by filling in arbitrary blank patterns in a template.
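As a rough sketch of this kind of non-autoregressive sampling loop (the paper's exact improvement operator and schedule are not reproduced here): start from random tokens and repeatedly re-sample every position in parallel from the model's predictions until the sequence stops changing.

```python
import numpy as np

def iterative_denoise_sample(logits_fn, seq_len, vocab_size, steps, rng, temperature=1.0):
    """logits_fn maps a token array of shape (seq_len,) to logits of shape (seq_len, vocab_size).
    Every position is updated in parallel at each step, unlike left-to-right decoding."""
    tokens = rng.integers(vocab_size, size=seq_len)       # start from random tokens
    for _ in range(steps):
        logits = logits_fn(tokens) / temperature
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        new_tokens = np.array([rng.choice(vocab_size, p=p) for p in probs])
        if np.array_equal(new_tokens, tokens):            # converged
            break
        tokens = new_tokens
    return tokens
```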
We study the problem of efficient generative inference for Transformer models, in one of its most challenging settings: large deep models, with tight latency targets and long sequence lengths. Better understanding of the engineering tradeoffs for inference for large Transformer-based models is important as use cases of these models are growing rapidly throughout application areas. We develop a simple analytical model for inference efficiency to select the best multi-dimensional partitioning techniques optimized for TPU v4 slices based on the application requirements. We combine these with a suite of low-level optimizations to achieve a new Pareto frontier on the latency and model FLOPS utilization (MFU) tradeoffs on 500B+ parameter models that outperforms the FasterTransformer suite of benchmarks. We further show that with appropriate partitioning, the lower memory requirements of multiquery attention (i.e. multiple query heads share single key/value head) enables scaling up to 32x larger context lengths. Finally, we achieve a low-batch-size latency of 29ms per token during generation (using int8 weight quantization) and a 76% MFU during large-batch-size processing of input tokens, while supporting a long 2048-token context length on the PaLM 540B parameter model.
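A minimal numpy sketch of the multiquery attention layout mentioned above: every query head attends against a single shared key/value head, so the key/value cache shrinks by a factor of the number of heads, which is what relieves memory pressure at long context lengths (causal masking and batching are omitted).

```python
import numpy as np

def multiquery_attention(q, k, v):
    """q: (heads, seq, d_head) per-head queries; k, v: (seq, d_head), one head shared by all queries.
    Returns per-head outputs of shape (heads, seq, d_head)."""
    d = q.shape[-1]
    scores = np.einsum("hqd,kd->hqk", q, k) / np.sqrt(d)  # every head reuses the same keys
    scores -= scores.max(-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(-1, keepdims=True)
    return np.einsum("hqk,kd->hqd", weights, v)           # and the same values
```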
State space models have been shown to be effective at modeling long-range dependencies, especially on sequence classification tasks. In this work we focus on autoregressive sequence modeling over English books, GitHub source code, and ArXiv mathematics articles. Based on recent developments around the effectiveness of gated activation functions, we propose a new layer named Gated State Space (GSS) and show that it trains significantly faster than the diagonal version of S4 (i.e., DSS) on TPUs, is fairly competitive with well-tuned Transformer-based baselines, and exhibits zero-shot generalization to longer inputs, while being straightforward to implement. Finally, we show that leveraging self-attention to model local dependencies improves the performance of GSS even further.
As major progress is made in open-ended text generation, measuring how close machine-generated text is to human language remains a critical open problem. We introduce MAUVE, a comparison measure for open-ended text generation that directly compares the learned distribution of a text generation model to the distribution of human-written text using divergence frontiers. MAUVE scales to modern text generation models by computing information divergences in a quantized embedding space. Through an extensive empirical study on three open-ended generation tasks, we find that MAUVE identifies known properties of generated text, scales naturally with model size, and correlates with human judgments, with fewer restrictions than existing distributional evaluation metrics.
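In symbols, following the divergence-frontier construction the abstract refers to (a sketch of the definition, with $P$ the human-text distribution, $Q$ the model distribution, both estimated over a quantized embedding space, and $c>0$ a scaling constant):

```latex
R_\lambda = \lambda P + (1-\lambda) Q, \qquad \lambda \in (0,1)
\mathcal{C}(P,Q) = \bigl\{ \bigl( e^{-c\,\mathrm{KL}(Q \,\|\, R_\lambda)},\; e^{-c\,\mathrm{KL}(P \,\|\, R_\lambda)} \bigr) : \lambda \in (0,1) \bigr\}
\mathrm{MAUVE}(P,Q) = \text{area under the curve } \mathcal{C}(P,Q)
```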