Large Transformer models yield impressive results on many tasks, but they are expensive to train, even fine-tuning is slow, and decoding is so slow that their use and study become out of reach. We address this problem by leveraging sparsity. We study sparse variants of all layers in the Transformer and propose Scaling Transformers, a family of Transformer models that use sparse layers to scale efficiently and perform unbatched decoding much faster than the standard Transformer as we scale up the model size. Surprisingly, the sparse layers are enough to obtain the same perplexity as a standard Transformer with the same number of parameters. We also integrate with prior sparse approaches to attention, enabling fast inference on long sequences even with limited memory. This results in performance competitive with the state of the art on long-text summarization.
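To make the idea of a sparse feed-forward layer concrete, here is a minimal NumPy sketch in which a small controller picks one block of hidden units per token, so only a fraction of the layer's parameters are touched at decoding time. The block size, the controller, and the hard argmax selection are illustrative assumptions, not the exact Scaling Transformer design.

```python
import numpy as np

def sparse_ffn(x, W_in, W_out, controller, block_size=64):
    """Feed-forward layer that activates one block of hidden units per token.

    x: (seq, d_model); W_in: (d_model, d_ff); W_out: (d_ff, d_model)
    controller: (d_model, d_ff // block_size) scores one block per token.
    Illustrative sketch only -- not the exact Scaling Transformer controller.
    """
    seq, d_model = x.shape
    # Controller picks one hidden block per token (hard selection at inference).
    block_id = (x @ controller).argmax(axis=-1)              # (seq,)
    out = np.zeros_like(x)
    for t in range(seq):
        s = block_id[t] * block_size
        h = np.maximum(x[t] @ W_in[:, s:s + block_size], 0)  # ReLU on the active block only
        out[t] = h @ W_out[s:s + block_size, :]
    return out

rng = np.random.default_rng(0)
d_model, d_ff, seq = 16, 256, 8
y = sparse_ffn(rng.normal(size=(seq, d_model)),
               rng.normal(size=(d_model, d_ff)) * 0.1,
               rng.normal(size=(d_ff, d_model)) * 0.1,
               rng.normal(size=(d_model, d_ff // 64)))
print(y.shape)  # (8, 16)
```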
In deep learning, models typically reuse the same parameters for all inputs. Mixture-of-Experts (MoE) models defy this and instead select different parameters for each incoming example. The result is a sparsely activated model, with an outrageous number of parameters but a constant computational cost. However, despite several notable successes of MoE, complexity, communication costs, and training instability have hindered widespread adoption; we address these issues with the Switch Transformer. We simplify the MoE routing algorithm and design intuitively improved models with reduced communication and computational costs. Our proposed training techniques help tame the instabilities, and we show that large sparse models can, for the first time, be trained in lower-precision (bfloat16) formats. We design models based on T5-Base and T5-Large that obtain up to 7x speedups in pre-training with the same computational resources. These improvements extend to multilingual settings, where we measure gains over mT5-Base across all 101 languages. Finally, we advance the current scale of language models by pre-training models of up to a trillion parameters on the "Colossal Clean Crawled Corpus", achieving a 4x speedup over the T5-XXL model.
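A minimal sketch of the top-1 (Switch-style) routing idea: a learned router sends each token to a single expert, so parameter count grows with the number of experts while per-token compute stays roughly constant. This omits capacity factors, the load-balancing auxiliary loss, and all distributed-communication details; weight shapes and names are assumptions for illustration.

```python
import numpy as np

def switch_route(x, W_router, experts):
    """Top-1 (Switch-style) routing sketch: each token goes to a single expert.

    x: (tokens, d_model); W_router: (d_model, n_experts);
    experts: list of (W1, W2) feed-forward weights, one pair per expert.
    """
    logits = x @ W_router                              # (tokens, n_experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)
    expert_id = probs.argmax(-1)                       # hard top-1 choice per token
    gate = probs[np.arange(len(x)), expert_id]         # router probability scales the output
    out = np.zeros_like(x)
    for e, (W1, W2) in enumerate(experts):
        sel = expert_id == e
        if sel.any():                                  # only selected tokens pay for expert e
            h = np.maximum(x[sel] @ W1, 0)
            out[sel] = gate[sel, None] * (h @ W2)
    return out

rng = np.random.default_rng(0)
d, n_exp, toks = 16, 4, 32
experts = [(rng.normal(size=(d, 4 * d)) * 0.1, rng.normal(size=(4 * d, d)) * 0.1)
           for _ in range(n_exp)]
print(switch_route(rng.normal(size=(toks, d)), rng.normal(size=(d, n_exp)), experts).shape)
```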
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose BIGBIRD, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BIGBIRD is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BIGBIRD drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.
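The sparse pattern described above can be sketched as a boolean attention mask combining a few global tokens, a sliding window, and random connections. This is a token-level illustration only; BIGBIRD actually operates on blocks of tokens, and the sizes below are arbitrary assumptions.

```python
import numpy as np

def bigbird_mask(seq_len, n_global=2, window=3, n_random=2, seed=0):
    """Boolean attention mask combining global, sliding-window and random links."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    # Global tokens (e.g. CLS) attend everywhere and are attended to by everyone.
    mask[:n_global, :] = True
    mask[:, :n_global] = True
    for i in range(seq_len):
        # Sliding window around each position.
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True
        # A few random connections per row.
        mask[i, rng.integers(0, seq_len, size=n_random)] = True
    return mask

m = bigbird_mask(16)
print(m.shape, int(m.sum()), "of", m.size, "entries kept")
```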
Getting the most out of limited resources allows advances in natural language processing (NLP) research and practice while conserving those resources. The resources in question may be data, time, storage, or energy. Recent work in NLP has produced interesting results from scaling; however, using scale alone to improve results means that resource consumption also scales. This relationship motivates research into efficient methods that require fewer resources to achieve similar results. This survey covers methods and findings on efficiency in NLP, aiming to guide new researchers in the field and inspire the development of new methods.
Sparse expert models are a thirty-year-old concept that has re-emerged as a popular architecture in deep learning. This class of architectures includes Mixture-of-Experts, Switch Transformers, Routing Networks, BASE layers, and others, all with the unifying idea that each example is acted on by only a subset of the parameters. In doing so, sparsity decouples the parameter count from the compute per example, allowing for extremely large but efficient models. The resulting models have demonstrated significant improvements across diverse domains such as natural language processing, computer vision, and speech recognition. We review the concept of sparse expert models, provide a basic description of the common algorithms, contextualize the advances of the deep learning era, and conclude by highlighting areas for future work.
Transfer learning, where a model is first pre-trained on a data-rich task before being finetuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
Transformers are slow and memory-hungry on long sequences, since the time and memory complexity of self-attention is quadratic in the sequence length. Approximate attention methods have tried to address this problem by trading off model quality for lower computational complexity, but often fail to achieve wall-clock speedups. We argue that a missing principle is making attention algorithms IO-aware, accounting for reads and writes between levels of the GPU memory hierarchy. We propose FlashAttention, an IO-aware exact attention algorithm that uses tiling to reduce the number of memory reads/writes between GPU high-bandwidth memory (HBM) and GPU on-chip SRAM. We analyze the IO complexity of FlashAttention, showing that it requires fewer HBM accesses than standard attention and is optimal for a range of SRAM sizes. We also extend FlashAttention to block-sparse attention, yielding an approximate attention algorithm that is faster than any existing approximate attention method. FlashAttention trains Transformers faster than existing baselines: a 15% end-to-end wall-clock speedup on BERT-large (seq. length 512) compared to the MLPerf 1.1 training speed record, and a 3x speedup on GPT-2. FlashAttention and block-sparse FlashAttention enable longer context in Transformers, yielding higher-quality models (0.7 better perplexity on GPT-2 and a 6.4-point lift on long-document classification) and entirely new capabilities: the first Transformers to achieve better-than-chance performance on the Path-X challenge (seq. length 16K, 61.4% accuracy) and Path-256 (seq. length 64K, 63.1% accuracy).
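The tiling idea can be illustrated without any GPU code: attention is computed block by block with a running ("online") softmax, so the full sequence-by-sequence score matrix is never materialized. The sketch below is numerically exact but is pure NumPy, so it demonstrates the algorithm rather than the IO savings.

```python
import numpy as np

def tiled_attention(Q, K, V, block=64):
    """Exact attention computed block-by-block with a running (online) softmax.

    Equivalent to softmax(Q K^T / sqrt(d)) V, but only ever holds one
    (block x block) tile of scores -- the same idea FlashAttention uses to keep
    the working set in on-chip SRAM. Pure-NumPy sketch; no real IO management.
    """
    seq, d = Q.shape
    out = np.zeros_like(Q)
    scale = 1.0 / np.sqrt(d)
    for qs in range(0, seq, block):
        q = Q[qs:qs + block]                                   # (bq, d)
        m = np.full(q.shape[0], -np.inf)                       # running row max
        l = np.zeros(q.shape[0])                               # running normalizer
        acc = np.zeros_like(q)                                 # running weighted sum of V
        for ks in range(0, seq, block):
            s = (q @ K[ks:ks + block].T) * scale               # one tile of scores
            m_new = np.maximum(m, s.max(axis=1))
            p = np.exp(s - m_new[:, None])
            correction = np.exp(m - m_new)                     # rescale old statistics
            l = l * correction + p.sum(axis=1)
            acc = acc * correction[:, None] + p @ V[ks:ks + block]
            m = m_new
        out[qs:qs + block] = acc / l[:, None]
    return out

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(256, 32)) for _ in range(3))
s = Q @ K.T / np.sqrt(32)
ref = (np.exp(s - s.max(-1, keepdims=True)) /
       np.exp(s - s.max(-1, keepdims=True)).sum(-1, keepdims=True)) @ V
print(np.allclose(tiled_attention(Q, K, V), ref))  # True
```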
We introduce the Block-Recurrent Transformer, which applies a transformer layer in a recurrent fashion along a sequence and has linear complexity with respect to sequence length. Our recurrent cell operates on blocks of tokens rather than single tokens during training, and leverages parallel computation within a block to make efficient use of accelerator hardware. The cell itself is strikingly simple: it is merely a transformer layer that uses self-attention and cross-attention to efficiently compute a recurrent function over a large set of state vectors and tokens. Our design was inspired in part by LSTM cells, and it uses LSTM-style gates, but it scales the typical LSTM cell up by several orders of magnitude. Our implementation of recurrence has the same cost, in both computation time and parameter count, as a conventional transformer layer, but offers dramatically improved perplexity on language-modeling tasks over very long sequences. Our model outperforms a long-range Transformer-XL baseline by a wide margin while running twice as fast. We demonstrate its effectiveness on PG19 (books), arXiv papers, and GitHub source code. Our code has been released as open source.
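A heavily simplified sketch of the recurrent cell's state update: the carried state vectors cross-attend to the current block of tokens and are mixed with their previous values through an LSTM-style gate. Head counts, gating details, and the token-side self-attention and feed-forward sub-layers are all omitted; names and shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def recurrent_block_step(state, block, Wq, Wk, Wv, Wg):
    """One recurrent step over a block of tokens (simplified sketch).

    state: (n_state, d) carried between blocks; block: (block_len, d) tokens.
    The state cross-attends to the token block and is mixed with its old value
    through a sigmoid gate, giving only the flavour of the Block-Recurrent cell.
    """
    d = state.shape[1]
    q, k, v = state @ Wq, block @ Wk, block @ Wv
    attn = softmax(q @ k.T / np.sqrt(d)) @ v            # (n_state, d) summary of the block
    gate = 1 / (1 + np.exp(-(state @ Wg)))              # LSTM-style "forget" gate
    return gate * state + (1 - gate) * attn             # gated state update

rng = np.random.default_rng(0)
d, n_state, block_len = 32, 8, 16
Ws = [rng.normal(size=(d, d)) * 0.1 for _ in range(4)]
state = np.zeros((n_state, d))
for _ in range(4):                                      # process 4 consecutive blocks
    state = recurrent_block_step(state, rng.normal(size=(block_len, d)), *Ws)
print(state.shape)  # (8, 32)
```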
We study the problem of efficient generative inference for Transformer models, in one of its most challenging settings: large deep models, with tight latency targets and long sequence lengths. Better understanding of the engineering tradeoffs for inference for large Transformer-based models is important as use cases of these models are growing rapidly throughout application areas. We develop a simple analytical model for inference efficiency to select the best multi-dimensional partitioning techniques optimized for TPU v4 slices based on the application requirements. We combine these with a suite of low-level optimizations to achieve a new Pareto frontier on the latency and model FLOPS utilization (MFU) tradeoffs on 500B+ parameter models that outperforms the FasterTransformer suite of benchmarks. We further show that with appropriate partitioning, the lower memory requirements of multiquery attention (i.e. multiple query heads share single key/value head) enables scaling up to 32x larger context lengths. Finally, we achieve a low-batch-size latency of 29ms per token during generation (using int8 weight quantization) and a 76% MFU during large-batch-size processing of input tokens, while supporting a long 2048-token context length on the PaLM 540B parameter model.
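Multiquery attention, which the abstract credits with enabling much longer contexts, is easy to sketch: every query head shares one key/value projection, so the key/value cache shrinks by a factor of the head count. A single-layer NumPy illustration under assumed shapes:

```python
import numpy as np

def multiquery_attention(x, Wq, Wk, Wv, n_heads):
    """Multi-query attention sketch: many query heads, one shared key/value head.

    x: (seq, d_model); Wq: (d_model, n_heads * d_head); Wk, Wv: (d_model, d_head).
    The key/value cache is n_heads times smaller than in multi-head attention.
    """
    seq, _ = x.shape
    d_head = Wk.shape[1]
    q = (x @ Wq).reshape(seq, n_heads, d_head)           # per-head queries
    k, v = x @ Wk, x @ Wv                                # single shared K and V
    scores = np.einsum('shd,td->hst', q, k) / np.sqrt(d_head)
    scores += np.triu(np.full((seq, seq), -1e9), k=1)    # causal mask for decoding
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    out = np.einsum('hst,td->shd', w, v)                 # every head reads the same V
    return out.reshape(seq, n_heads * d_head)

rng = np.random.default_rng(0)
d_model, d_head, heads, seq = 64, 16, 4, 10
y = multiquery_attention(rng.normal(size=(seq, d_model)),
                         rng.normal(size=(d_model, heads * d_head)),
                         rng.normal(size=(d_model, d_head)),
                         rng.normal(size=(d_model, d_head)), heads)
print(y.shape)  # (10, 64)
```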
Mixture-of-Experts (MoE) layers enable efficient scaling of language models through conditional computation. This paper presents a detailed empirical study of how autoregressive MoE language models compare with dense models in a wide range of settings: out-of-domain language modeling, zero- and few-shot priming, and full fine-tuning. With the exception of fine-tuning, we find MoEs to be substantially more compute-efficient. At more modest training budgets, MoEs can match the performance of dense models using ~4x less compute. This gap narrows at scale, but our largest MoE model (1.1T parameters) consistently outperforms a compute-equivalent dense model (6.7B parameters). Overall, this performance gap varies significantly across tasks and domains, suggesting that MoE and dense models generalize differently in ways that warrant further study. We make our code and models publicly available for research use.
State space models (SSMs) have demonstrated state-of-the-art sequence modeling performance in some modalities, but underperform attention in language modeling. Moreover, despite scaling nearly linearly in sequence length instead of quadratically, SSMs are still slower than Transformers due to poor hardware utilization. In this paper, we make progress on understanding the expressivity gap between SSMs and attention in language modeling, and on reducing the hardware barrier between SSMs and attention. First, we use synthetic language modeling tasks to understand the gap between SSMs and attention. We find that existing SSMs struggle with two capabilities: recalling earlier tokens in the sequence and comparing tokens across the sequence. To understand the impact on language modeling, we propose a new SSM layer, H3, that is explicitly designed for these abilities. H3 matches attention on the synthetic languages and comes within 0.4 PPL of Transformers on OpenWebText. Furthermore, a hybrid 125M-parameter H3-attention model that retains two attention layers surprisingly outperforms Transformers on OpenWebText by 1.0 PPL. Next, to improve the efficiency of training SSMs on modern hardware, we propose FlashConv. FlashConv uses a fused block FFT algorithm to improve efficiency on sequences up to 8K, and introduces a novel state passing algorithm that exploits the recurrent properties of SSMs to scale to longer sequences. FlashConv yields 2$\times$ speedup on the long-range arena benchmark and allows hybrid language models to generate text 1.6$\times$ faster than Transformers. Using FlashConv, we scale hybrid H3-attention language models up to 1.3B parameters on the Pile and find promising initial results, achieving lower perplexity than Transformers and outperforming Transformers in zero- and few-shot learning on a majority of tasks in the SuperGLUE benchmark.
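The core primitive behind these SSM layers is a long convolution, which can be computed in O(L log L) with an FFT rather than O(L^2) directly. The sketch below shows only that primitive; H3's specific layer structure and FlashConv's fused block FFT and state-passing algorithm are not reproduced here.

```python
import numpy as np

def fft_long_conv(u, kernel):
    """Causal long convolution via FFT, the core primitive behind SSM layers.

    u, kernel: (seq_len,) input signal and SSM-derived filter.
    Zero-padding to 2*seq_len makes the circular FFT convolution linear/causal.
    """
    n = len(u)
    f = np.fft.rfft(u, 2 * n) * np.fft.rfft(kernel, 2 * n)
    return np.fft.irfft(f, 2 * n)[:n]                  # keep the causal part

rng = np.random.default_rng(0)
u, kern = rng.normal(size=4096), rng.normal(size=4096) * 0.01
y = fft_long_conv(u, kern)
# Check against the direct O(L^2) convolution on the same positions.
direct = np.convolve(u, kern)[:4096]
print(np.allclose(y, direct))  # True
```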
State space models have been shown to be effective at modeling long-range dependencies, especially on sequence classification tasks. In this work we focus on autoregressive sequence modeling over English books, GitHub source code, and arXiv mathematics articles. Building on recent developments around the effectiveness of gated activation functions, we propose a new layer named Gated State Space (GSS) and show that it trains significantly faster than the diagonal version of S4 (i.e., DSS) on TPUs, is fairly competitive with well-tuned Transformer-based baselines, and exhibits zero-shot generalization to longer inputs while being straightforward to implement. Finally, we show that leveraging self-attention to model local dependencies further improves the performance of GSS.
We show for the first time that large-scale generative pretrained transformer (GPT) family models can be pruned to at least 50% sparsity in one-shot, without any retraining, at minimal loss of accuracy. This is achieved via a new pruning method called SparseGPT, specifically designed to work efficiently and accurately on massive GPT-family models. When executing SparseGPT on the largest available open-source models, OPT-175B and BLOOM-176B, we can reach 60% sparsity with negligible increase in perplexity: remarkably, more than 100 billion weights from these models can be ignored at inference time. SparseGPT generalizes to semi-structured (2:4 and 4:8) patterns, and is compatible with weight quantization approaches.
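To make the 2:4 semi-structured pattern concrete, here is a magnitude-based sketch that zeroes the two smallest weights in every group of four. Note this shows only the pattern: SparseGPT itself uses second-order information and compensating weight updates, not plain magnitude pruning.

```python
import numpy as np

def prune_2_4(W):
    """Enforce a 2:4 semi-structured sparsity pattern by magnitude.

    In every group of 4 consecutive weights along the input dimension, the two
    smallest-magnitude weights are zeroed. Illustrates the pattern only; it is
    not SparseGPT's Hessian-based pruning method.
    """
    out_dim, in_dim = W.shape
    assert in_dim % 4 == 0
    groups = W.reshape(out_dim, in_dim // 4, 4)
    # Keep the 2 largest-magnitude entries in each group of 4.
    keep = np.argsort(np.abs(groups), axis=-1)[..., 2:]
    mask = np.zeros_like(groups, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=-1)
    return (groups * mask).reshape(out_dim, in_dim)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
Wp = prune_2_4(W)
print((Wp == 0).mean())  # 0.5 -- exactly 50% sparsity
```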
Despite achieving state-of-the-art performance on many NLP tasks, the high energy cost and long inference delay prevent Transformer-based pretrained language models (PLMs) from seeing broader adoption including for edge and mobile computing. Efficient NLP research aims to comprehensively consider computation, time and carbon emission for the entire life-cycle of NLP, including data preparation, model training and inference. In this survey, we focus on the inference stage and review the current state of model compression and acceleration for pretrained language models, including benchmarks, metrics and methodology.
Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L^2) to O(L log L), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.
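The reversible-residual idea is compact enough to show directly. With toy stand-ins for the attention (F) and feed-forward (G) sub-layers, the block's inputs can be recomputed exactly from its outputs, which is why activations need to be stored only once rather than once per layer:

```python
import numpy as np

def rev_forward(x1, x2, F, G):
    """Reversible residual block: activations can be recomputed, not stored."""
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_inverse(y1, y2, F, G):
    """Recover the block's inputs exactly from its outputs."""
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

# Toy sub-functions standing in for the attention and feed-forward sub-layers.
rng = np.random.default_rng(0)
A, B = rng.normal(size=(16, 16)) * 0.1, rng.normal(size=(16, 16)) * 0.1
F = lambda h: np.tanh(h @ A)
G = lambda h: np.tanh(h @ B)

x1, x2 = rng.normal(size=(4, 16)), rng.normal(size=(4, 16))
y1, y2 = rev_forward(x1, x2, F, G)
r1, r2 = rev_inverse(y1, y2, F, G)
print(np.allclose(r1, x1) and np.allclose(r2, x2))  # True -- no need to cache x1, x2
```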
Few-shot in-context learning (ICL) enables pre-trained language models to perform a previously unseen task without any gradient-based training, by feeding a small number of training examples as part of the input. ICL incurs substantial computation, memory, and storage costs because it involves processing all of the training examples every time a prediction is made. Parameter-efficient fine-tuning (PEFT) (e.g., adapter modules, prompt tuning, sparse update methods, etc.) offers an alternative paradigm in which a small set of parameters is trained to enable the model to perform the new task. In this paper, we rigorously compare few-shot ICL and PEFT and demonstrate that the latter offers better accuracy at dramatically lower computational cost. Along the way, we introduce a new PEFT method called (IA)$^3$ that scales activations by learned vectors, attaining stronger performance while introducing only a relatively small number of new parameters. We also propose a simple recipe based on the T0 model, called T-Few, that can be applied to new tasks without task-specific tuning or modification. We validate the effectiveness of T-Few on completely unseen tasks by applying it to the RAFT benchmark, attaining super-human performance for the first time and outperforming the state of the art by 6% absolute. All code used in our experiments is publicly available.
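A minimal sketch of the (IA)$^3$ idea: the only trainable parameters for a new task are three vectors that rescale the keys, the values, and the inner feed-forward activation element-wise, while all weight matrices stay frozen. Single-head, unmasked, and with assumed shapes; this is not the full T-Few recipe.

```python
import numpy as np

def ia3_attention_ffn(x, Wq, Wk, Wv, W1, W2, l_k, l_v, l_ff):
    """(IA)^3-style parameter-efficient adaptation sketch.

    Only the learned vectors l_k, l_v, l_ff (element-wise rescalings of keys,
    values, and the inner FFN activation) would be trained for a new task; all
    weight matrices stay frozen. Single-head and unmasked for brevity.
    """
    d = Wq.shape[1]
    q, k, v = x @ Wq, (x @ Wk) * l_k, (x @ Wv) * l_v     # rescale keys and values
    s = q @ k.T / np.sqrt(d)
    a = np.exp(s - s.max(-1, keepdims=True))
    attn = (a / a.sum(-1, keepdims=True)) @ v
    h = np.maximum(attn @ W1, 0) * l_ff                  # rescale the FFN activation
    return h @ W2

rng = np.random.default_rng(0)
d, d_ff, seq = 32, 64, 10
frozen = [rng.normal(size=s) * 0.1 for s in [(d, d), (d, d), (d, d), (d, d_ff), (d_ff, d)]]
# The only new parameters: three vectors, initialized to 1 (identity behaviour).
l_k, l_v, l_ff = np.ones(d), np.ones(d), np.ones(d_ff)
print(ia3_attention_ffn(rng.normal(size=(seq, d)), *frozen, l_k, l_v, l_ff).shape)  # (10, 32)
```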
Transformer-based language models use the attention mechanism to achieve substantial performance improvements on nearly all natural language processing (NLP) tasks, and similar attention structures have been studied extensively in several other domains. While the attention mechanism significantly boosts model performance, its quadratic complexity prevents efficient processing of long sequences. Recent work has focused on removing this computational inefficiency and has shown that Transformer-based models can still achieve competitive results without the attention layers. A pioneering study proposed FNET, which replaces the attention layers with Fourier Transform (FT) layers in the Transformer encoder architecture. FNET accelerates the training process by eliminating the computational burden of the attention mechanism while achieving performance competitive with the original Transformer encoder model. However, the FNET model ignores fundamental properties of the FT that classical signal processing could exploit to further improve model efficiency. We propose different methods for deploying the FT efficiently in Transformer encoder models. Our proposed architectures have fewer model parameters, shorter training times, lower memory usage, and some additional performance improvements. We demonstrate these improvements through extensive experiments on common benchmarks.
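The FNET building block that the proposed variants start from can be sketched in a few lines: token mixing is a parameter-free 2D Fourier transform (keeping the real part), followed by the usual feed-forward sub-layer. This is a simplified stand-in, not any of the specific architectures proposed in the paper.

```python
import numpy as np

def fourier_mixing(x):
    """FNet-style token mixing: 2D FFT over sequence and hidden dimensions,
    keeping only the real part. A parameter-free stand-in for self-attention."""
    return np.fft.fft2(x).real            # (seq, d_model) -> (seq, d_model)

def fnet_encoder_block(x, W1, W2, eps=1e-6):
    """One simplified encoder block: Fourier mixing + feed-forward, each with a
    residual connection and layer normalization (sketch, not the exact FNet)."""
    def layer_norm(h):
        mu, var = h.mean(-1, keepdims=True), h.var(-1, keepdims=True)
        return (h - mu) / np.sqrt(var + eps)
    h = layer_norm(x + fourier_mixing(x))
    ff = np.maximum(h @ W1, 0) @ W2
    return layer_norm(h + ff)

rng = np.random.default_rng(0)
d, d_ff, seq = 64, 128, 32
y = fnet_encoder_block(rng.normal(size=(seq, d)),
                       rng.normal(size=(d, d_ff)) * 0.1,
                       rng.normal(size=(d_ff, d)) * 0.1)
print(y.shape)  # (32, 64)
```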
Real-world data is high-dimensional: a book, image, or musical performance can easily contain hundreds of thousands of elements even after compression. However, the most commonly used autoregressive models, Transformers, are prohibitively expensive to scale to the number of inputs and layers needed to capture this long-range structure. We develop Perceiver AR, an autoregressive, modality-agnostic architecture which uses cross-attention to map long-range inputs to a small number of latents while also maintaining end-to-end causal masking. Perceiver AR can directly attend to over a hundred thousand tokens, enabling practical long-context density estimation without the need for hand-crafted sparsity patterns or memory mechanisms. When trained on images or music, Perceiver AR generates outputs with clear long-term coherence and structure. Our architecture also obtains state-of-the-art likelihood on long-sequence benchmarks, including 64 x 64 ImageNet images and PG-19 books.
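The key structural trick is that queries come only from a small set of latents (taken from the last positions of the sequence in Perceiver AR), while keys and values cover the whole input, so the attention cost scales with latents times sequence length rather than quadratically. A single-head sketch under assumed shapes:

```python
import numpy as np

def perceiver_ar_cross_attend(x, n_latents, Wq, Wk, Wv):
    """Cross-attention from a few latents to a long input (Perceiver AR flavour).

    The latents are the last n_latents positions of the input, and each latent
    may only attend to positions at or before its own, preserving causality.
    Single-head sketch; the full model stacks further self-attention layers
    over the latents afterwards.
    """
    seq, _ = x.shape
    d = Wq.shape[1]
    q = x[-n_latents:] @ Wq                     # (n_latents, d): queries from the tail
    k, v = x @ Wk, x @ Wv                       # (seq, d): keys/values over the full input
    s = q @ k.T / np.sqrt(d)                    # (n_latents, seq) -- not (seq, seq)
    latent_pos = np.arange(seq - n_latents, seq)[:, None]
    s = np.where(np.arange(seq)[None, :] <= latent_pos, s, -1e9)   # causal mask
    a = np.exp(s - s.max(-1, keepdims=True))
    return (a / a.sum(-1, keepdims=True)) @ v   # (n_latents, d)

rng = np.random.default_rng(0)
d_in, d, seq, n_latents = 32, 32, 4096, 64
lat = perceiver_ar_cross_attend(rng.normal(size=(seq, d_in)),
                                n_latents,
                                *(rng.normal(size=(d_in, d)) * 0.1 for _ in range(3)))
print(lat.shape)  # (64, 32) -- cost scales with n_latents * seq, not seq**2
```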
Attention-based neural networks, such as Transformers, have become ubiquitous in numerous applications, including computer vision, natural language processing, and time-series analysis. In all kinds of attention networks, the attention maps are crucial as they encode semantic dependencies between input tokens. However, most existing attention networks perform modeling or reasoning based on representations, wherein the attention maps of different layers are learned separately without explicit interactions. In this paper, we propose a novel and generic evolving attention mechanism, which directly models the evolution of inter-token relationships through a chain of residual convolutional modules. The major motivations are twofold. On the one hand, the attention maps in different layers share transferable knowledge, thus adding a residual connection can facilitate the information flow of inter-token relationships across layers. On the other hand, there is naturally an evolutionary trend among attention maps at different abstraction levels, so it is beneficial to exploit a dedicated convolution-based module to capture this process. Equipped with the proposed mechanism, the convolution-enhanced evolving attention networks achieve superior performance in various applications, including time-series representation, natural language understanding, machine translation, and image classification. Especially on time-series representation tasks, Evolving Attention-enhanced Dilated Convolutional (EA-DC-) Transformer outperforms state-of-the-art models significantly, achieving an average of 17% improvement compared to the best SOTA. To the best of our knowledge, this is the first work that explicitly models the layer-wise evolution of attention maps. Our implementation is available at https://github.com/pkuyym/EvolvingAttention
Training large, deep neural networks to convergence can be prohibitively expensive. As a result, often only a small selection of popular, dense models are reused across different contexts and tasks. Increasingly, sparsely activated models, which seek to decouple model size from computation costs, are becoming an attractive alternative to dense models. Although more efficient in terms of quality and computation cost, sparse models remain data-hungry and costly to train from scratch in the large scale regime. In this work, we propose sparse upcycling -- a simple way to reuse sunk training costs by initializing a sparsely activated Mixture-of-Experts model from a dense checkpoint. We show that sparsely upcycled T5 Base, Large, and XL language models and Vision Transformer Base and Large models, respectively, significantly outperform their dense counterparts on SuperGLUE and ImageNet, using only ~50% of the initial dense pretraining sunk cost. The upcycled models also outperform sparse models trained from scratch on 100% of the initial dense pretraining computation budget.
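A sketch of the upcycling initialization itself: every expert's feed-forward weights start as a copy of the dense checkpoint's weights and a fresh router is added, so at initialization the MoE layer computes the same function as the dense layer it replaces. Which layers to upcycle, router initialization, and training details follow the paper rather than this sketch.

```python
import numpy as np

def upcycle_dense_ffn(dense_W1, dense_W2, n_experts):
    """Sparse upcycling sketch: build an MoE layer from a dense checkpoint.

    Each expert's feed-forward weights start as an exact copy of the dense
    layer's weights, and a router is freshly initialized; the rest of the
    network (attention, embeddings) would be reused unchanged.
    """
    experts = [(dense_W1.copy(), dense_W2.copy()) for _ in range(n_experts)]
    d_model = dense_W1.shape[0]
    router = np.zeros((d_model, n_experts))      # new, untrained routing weights
    return experts, router

rng = np.random.default_rng(0)
d, d_ff = 64, 256
W1, W2 = rng.normal(size=(d, d_ff)), rng.normal(size=(d_ff, d))
experts, router = upcycle_dense_ffn(W1, W2, n_experts=8)
# At initialization every expert computes the same function as the dense layer,
# so the upcycled model starts from (roughly) the dense model's quality.
x = rng.normal(size=(4, d))
print(np.allclose(np.maximum(x @ experts[3][0], 0) @ experts[3][1],
                  np.maximum(x @ W1, 0) @ W2))   # True
```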