Transformer-based NLP models are trained using hundreds of millions or even billions of parameters, limiting their applicability in computationally constrained environments. While the number of parameters generally correlates with performance, it is not clear whether the entire network is needed for a downstream task. Building on recent work on pruning and distilling pre-trained models, we explore strategies for dropping layers in pre-trained models and observe the effect of pruning on downstream GLUE tasks. We are able to prune BERT, RoBERTa and XLNet models by up to 40% while maintaining up to 98% of their original performance. Additionally, we show that our pruned models are on par with models built using knowledge distillation, both in terms of size and performance. Our experiments yield interesting observations, such as: (i) the lower layers are the most critical for maintaining downstream task performance, (ii) certain tasks, such as paraphrase detection and sentence similarity, are more robust to dropping layers, and (iii) models trained with different objective functions exhibit different learning patterns with respect to layer dropping.
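A minimal sketch of the layer-dropping idea, assuming the HuggingFace `transformers` BERT implementation; the paper studies several dropping strategies, and this only illustrates removing the top k layers before task-specific fine-tuning.

```python
# Drop the top k transformer layers of a pre-trained encoder (illustrative sketch).
import torch
from transformers import BertModel, BertTokenizer

def drop_top_layers(model: BertModel, k: int) -> BertModel:
    """Keep only the lower layers; the pruned model is then fine-tuned as usual."""
    kept = model.encoder.layer[: model.config.num_hidden_layers - k]
    model.encoder.layer = torch.nn.ModuleList(kept)
    model.config.num_hidden_layers = len(kept)
    return model

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = drop_top_layers(BertModel.from_pretrained("bert-base-uncased"), k=4)

inputs = tokenizer("Layer dropping keeps the lower layers.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
print(hidden.shape)
```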
Language model pre-training, such as BERT, has significantly improved the performances of many natural language processing tasks. However, pre-trained language models are usually computationally expensive, so it is difficult to efficiently execute them on resource-restricted devices. To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of the Transformer-based models. By leveraging this new KD method, the plenty of knowledge encoded in a large "teacher" BERT can be effectively transferred to a small "student" Tiny-BERT. Then, we introduce a new two-stage learning framework for TinyBERT, which performs Transformer distillation at both the pretraining and task-specific learning stages. This framework ensures that TinyBERT can capture the general-domain as well as the task-specific knowledge in BERT. TinyBERT 4 with 4 layers is empirically effective and achieves more than 96.8% the performance of its teacher BERT BASE on GLUE benchmark, while being 7.5x smaller and 9.4x faster on inference. TinyBERT 4 is also significantly better than 4-layer state-of-the-art baselines on BERT distillation, with only ∼28% parameters and ∼31% inference time of them. Moreover, TinyBERT 6 with 6 layers performs on-par with its teacher BERT BASE.
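A hedged sketch of the layer-wise Transformer distillation objective described above: MSE between mapped student/teacher attention matrices and between linearly projected student hidden states and the teacher's hidden states. The layer mapping, projection, and dummy shapes are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def transformer_distillation_loss(student_attn, teacher_attn,
                                  student_hidden, teacher_hidden, proj):
    """Each argument is a list of tensors, one per (already mapped) layer pair."""
    attn_loss = sum(F.mse_loss(sa, ta) for sa, ta in zip(student_attn, teacher_attn))
    hidden_loss = sum(F.mse_loss(proj(sh), th)           # project student dim -> teacher dim
                      for sh, th in zip(student_hidden, teacher_hidden))
    return attn_loss + hidden_loss

# Dummy shapes: 4 student layers, batch 2, 12 heads, seq 16, hidden 312 -> 768.
proj = torch.nn.Linear(312, 768)
s_attn = [torch.rand(2, 12, 16, 16) for _ in range(4)]
t_attn = [torch.rand(2, 12, 16, 16) for _ in range(4)]
s_hid = [torch.rand(2, 16, 312) for _ in range(4)]
t_hid = [torch.rand(2, 16, 768) for _ in range(4)]
loss = transformer_distillation_loss(s_attn, t_attn, s_hid, t_hid, proj)
```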
Post-processing of static embeddings has become popular for improving their performance on lexical and sequence-level tasks. However, post-processing of contextualized embeddings is an under-studied problem. In this work, we question the usefulness of post-processing for contextualized embeddings obtained from differently trained language models. More specifically, we standardize individual neuron activations using z-score and min-max normalization, and remove top principal components using the all-but-the-top method. Additionally, we apply unit-length normalization to the word representations. Across a diverse set of pre-trained models, we show that post-processing unveils important information in the representations for both lexical tasks (such as word similarity and analogy) and sequence classification tasks. Our findings raise interesting points for research studies that use contextualized representations, and suggest z-score normalization as an essential step to consider when using them in an application.
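A small illustration (not the authors' code) of two of the post-processing operations mentioned above, applied to a matrix of contextualized vectors: per-dimension z-score normalization and unit-length normalization.

```python
import numpy as np

def zscore(embeddings: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Standardize each embedding dimension (neuron) to zero mean and unit variance."""
    mean = embeddings.mean(axis=0, keepdims=True)
    std = embeddings.std(axis=0, keepdims=True)
    return (embeddings - mean) / (std + eps)

def unit_length(embeddings: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Scale each word vector to unit L2 norm."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / (norms + eps)

vectors = np.random.randn(1000, 768)        # placeholder: token vectors from one layer
processed = unit_length(zscore(vectors))
```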
In recent years, Transformers have greatly advanced the state of the art in natural language processing (NLP), but they impose very large compute and storage requirements. We observe that the design process of Transformers (pre-training a foundation model on a large dataset in a self-supervised manner and subsequently fine-tuning it for different downstream tasks) leads to task-specific models that are highly over-parameterized, which adversely affects both accuracy and inference efficiency. We propose AxFormer, a systematic framework that applies accuracy-driven approximations to create optimized Transformer models for a given downstream task. AxFormer combines two key optimizations: accuracy-driven pruning and selective hard attention. Accuracy-driven pruning identifies and removes parts of the fine-tuned Transformer that hinder performance on the given downstream task. Sparse hard attention optimizes the attention blocks in selected layers by eliminating irrelevant word aggregations, helping the model focus only on the relevant parts of the input. In effect, AxFormer produces models that are more accurate while also being faster and smaller. Our experiments on GLUE and SQuAD tasks show that AxFormer models are up to 4.5% more accurate, while also being up to 2.5x faster and up to 3.2x smaller than conventional fine-tuned models. In addition, we demonstrate that AxFormer can be combined with prior efforts such as distillation or quantization to achieve further efficiency gains.
Transformer-based language models are applied to a wide range of natural language processing applications. However, they are inefficient and difficult to deploy. In recent years, many compression algorithms have been proposed to improve the implementation efficiency of large Transformer-based models on target hardware. In this work, we present a new method for training sparse pre-trained Transformer language models by integrating weight pruning and model distillation. These sparse pre-trained models can be used for transfer learning on a wide range of tasks while maintaining their sparsity pattern. We demonstrate our method on three known architectures to create sparse pre-trained BERT-Base, BERT-Large, and DistilBERT. We show how the compressed sparse pre-trained models we trained transfer their knowledge to five different downstream natural language tasks with minimal accuracy loss. Moreover, we show how to further compress the sparse models' weights to 8-bit precision using quantization-aware training. For example, with our sparse pre-trained BERT-Large fine-tuned on SQuADv1.1 and quantized to 8 bits, we achieve a compression ratio of 40x for the encoder with less than 1% accuracy loss. To the best of our knowledge, our results show the best compression-to-accuracy ratio for BERT-Base, BERT-Large, and DistilBERT.
Pre-trained language models (PreLMs) are revolutionizing natural language processing across all benchmarks. However, their enormous size is prohibitive for small labs or for deployment on mobile devices. Approaches such as pruning and distillation reduce the model size but typically retain the same model architecture. In contrast, we explore distilling pre-trained language models into a more efficient architecture, the continual multiplication of words (CMOW), which embeds each word as a matrix and uses matrix multiplication to encode sequences. We extend the CMOW architecture and its CMOW/CBOW-Hybrid variant with a bidirectional component for more expressive power, one-shot representations for general (task-agnostic) distillation during pre-training, and a two-sequence encoding scheme that facilitates downstream tasks on sentence pairs, such as sentence similarity and natural language inference. Our matrix-based bidirectional CMOW/CBOW-Hybrid model is competitive with DistilBERT on question similarity and recognizing textual entailment, while using only half as many parameters and being three times faster at inference. We match or exceed the scores of ELMo, except on the sentiment analysis task SST-2 and the linguistic acceptability task CoLA. Compared with previous cross-architecture distillation approaches, however, we demonstrate a doubling of the score on detecting linguistic acceptability. This suggests that matrix-based embeddings can be used to distill large pre-trained models into competitive models, and it motivates further research in this direction.
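A toy sketch of the matrix-embedding idea: each word is embedded as a small d x d matrix and a sequence is encoded by multiplying those matrices in order; a bidirectional variant would additionally compose them in reverse. The class name and dimensions are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CMOWEncoder(nn.Module):
    def __init__(self, vocab_size: int, d: int = 20):
        super().__init__()
        self.d = d
        self.embedding = nn.Embedding(vocab_size, d * d)   # one d x d matrix per word

    def forward(self, token_ids):                          # token_ids: (batch, seq_len)
        batch, seq_len = token_ids.shape
        mats = self.embedding(token_ids).view(batch, seq_len, self.d, self.d)
        out = torch.eye(self.d, device=token_ids.device).repeat(batch, 1, 1)
        for t in range(seq_len):
            out = torch.bmm(out, mats[:, t])               # order-sensitive composition
        return out.flatten(1)                              # (batch, d*d) sequence encoding

encoder = CMOWEncoder(vocab_size=30000)
sentence_vec = encoder(torch.randint(0, 30000, (4, 12)))   # 4 sequences of length 12
```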
Pre-trained Language Models (PLMs) have achieved great success in various natural language processing (NLP) tasks under the pre-training and fine-tuning paradigm. With large quantities of parameters, PLMs are computation-intensive and resource-hungry. Hence, model pruning has been introduced to compress large-scale PLMs. However, most prior approaches only consider task-specific knowledge for downstream tasks and ignore the essential task-agnostic knowledge during pruning, which can cause catastrophic forgetting and lead to poor generalization. To maintain both task-agnostic and task-specific knowledge in the pruned model, we propose ContrAstive Pruning (CAP) under the pre-training and fine-tuning paradigm. It is designed as a general framework, compatible with both structured and unstructured pruning. Unified in contrastive learning, CAP enables the pruned model to learn task-agnostic knowledge from the pre-trained model and task-specific knowledge from the fine-tuned model. In addition, to better retain the performance of the pruned model, the snapshots (i.e., the intermediate models at each pruning iteration) also serve as effective supervision for pruning. Our extensive experiments show that adopting CAP consistently yields significant improvements, especially in extremely high-sparsity scenarios. With only 3% of the model parameters retained (i.e., 97% sparsity), CAP successfully achieves 99.2% and 96.3% of the original BERT performance on the QQP and MNLI tasks. Moreover, our probing experiments show that models pruned by CAP tend to achieve better generalization ability.
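A hedged sketch of the contrastive signal described above, written as a generic InfoNCE loss that pulls each pruned-model sentence representation toward the corresponding teacher (pre-trained or fine-tuned) representation and pushes it away from other in-batch examples; this is not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(pruned_repr, teacher_repr, temperature=0.1):
    """pruned_repr, teacher_repr: (batch, dim) sentence representations."""
    z1 = F.normalize(pruned_repr, dim=-1)
    z2 = F.normalize(teacher_repr, dim=-1)
    logits = z1 @ z2.t() / temperature          # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)      # i-th pruned example matches i-th teacher

loss = contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))
```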
As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models on the edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pre-training phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pre-training, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.
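A hedged sketch of the three-part objective the abstract describes (masked-LM loss, soft-target distillation, and cosine alignment of hidden states); the tensor shapes and loss weights below are illustrative, not the released training code.

```python
import torch
import torch.nn.functional as F

def distil_loss(student_logits, teacher_logits, student_hidden, teacher_hidden,
                mlm_labels, temperature=2.0, alpha_ce=5.0, alpha_mlm=2.0, alpha_cos=1.0):
    # Soft-target distillation: KL between temperature-scaled distributions.
    ce = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Standard masked language modeling loss on the hard labels (-100 = not masked).
    mlm = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                          mlm_labels.view(-1), ignore_index=-100)
    # Cosine loss aligning student and teacher hidden-state directions.
    target = torch.ones(student_hidden.size(0) * student_hidden.size(1),
                        device=student_hidden.device)
    cos = F.cosine_embedding_loss(student_hidden.flatten(0, 1),
                                  teacher_hidden.flatten(0, 1), target)
    return alpha_ce * ce + alpha_mlm * mlm + alpha_cos * cos

# Dummy tensors: batch 2, seq 8, vocab 30522, hidden 768.
s_logits, t_logits = torch.randn(2, 8, 30522), torch.randn(2, 8, 30522)
s_hid, t_hid = torch.randn(2, 8, 768), torch.randn(2, 8, 768)
labels = torch.full((2, 8), -100)
labels[:, 3] = 42                      # pretend one masked position per sequence
loss = distil_loss(s_logits, t_logits, s_hid, t_hid, labels)
```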
Despite achieving state-of-the-art performance on many NLP tasks, the high energy cost and long inference delay prevent Transformer-based pretrained language models (PLMs) from seeing broader adoption including for edge and mobile computing. Efficient NLP research aims to comprehensively consider computation, time and carbon emission for the entire life-cycle of NLP, including data preparation, model training and inference. In this survey, we focus on the inference stage and review the current state of model compression and acceleration for pretrained language models, including benchmarks, metrics and methodology.
Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.
Distilling state-of-the-art Transformer models into lightweight student models is an effective way to reduce computation cost at inference time. The student models are typically compact Transformers with fewer parameters, while expensive operations such as self-attention persist. Therefore, the improved inference speed may still be unsatisfactory for real-time or high-volume use cases. In this paper, we aim to further push the limit of inference speed by distilling teacher models into bigger, sparser student models: bigger in that they scale up to billions of parameters; sparser in that most of the model parameters are n-gram embeddings. Our experiments on six single-sentence text classification tasks show that these student models retain 97% of the RoBERTa-Large teacher performance on average, while achieving up to 600x speed-up on both GPUs and CPUs at inference time. Further investigation shows that our pipeline is also helpful for sentence-pair classification tasks and in domain generalization settings.
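A toy sketch of the kind of student model described above: most parameters live in a potentially huge n-gram embedding table, and classification is a simple aggregation of the embeddings of the n-grams present in the input. The vocabulary size and input packing are placeholders.

```python
import torch
import torch.nn as nn

class NGramBagClassifier(nn.Module):
    def __init__(self, num_ngrams: int, dim: int, num_classes: int):
        super().__init__()
        # Almost all parameters sit in this embedding table; no self-attention anywhere.
        self.embedding = nn.EmbeddingBag(num_ngrams, dim, mode="mean")
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, ngram_ids, offsets):
        return self.classifier(self.embedding(ngram_ids, offsets))

model = NGramBagClassifier(num_ngrams=1_000_000, dim=256, num_classes=2)

# Two inputs packed as a flat id list plus offsets (the EmbeddingBag convention).
ids = torch.tensor([3, 17, 42, 7, 7, 99])
offsets = torch.tensor([0, 3])        # first input = ids[0:3], second = ids[3:]
logits = model(ids, offsets)          # shape: (2, num_classes)
```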
We introduce BitFit, a sparse fine-tuning method in which only the bias terms of the model (or a subset of them) are modified. We show that with small to medium training data, applying BitFit to pre-trained BERT models is competitive with (and sometimes better than) fine-tuning the entire model. For larger data, the method is competitive with other sparse fine-tuning methods. Besides their practical utility, these findings are relevant to the question of understanding the commonly used fine-tuning process: they support the hypothesis that fine-tuning is mainly about exposing knowledge induced by language-modeling training, rather than learning new task-specific linguistic knowledge.
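A minimal sketch of bias-only fine-tuning in the spirit of BitFit, applied to a HuggingFace BERT classifier; selecting parameters by the substring "bias" (plus the freshly initialized classification head) is a simplification of the paper's setup.

```python
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

for name, param in model.named_parameters():
    # Train only the bias vectors and the newly initialized classification head.
    param.requires_grad = ("bias" in name) or name.startswith("classifier")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable / total:.4%} of {total}")

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
```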
A wide variety of efficiency methods exist for natural language processing (NLP) tasks, such as pruning, distillation, dynamic inference, and quantization. We can view an efficiency method as an operator applied to a model. Naturally, we can construct a pipeline of multiple efficiency methods, i.e., apply multiple operators to a model sequentially. In this paper, we study the plausibility of this idea and, more importantly, the commutativity and cumulativeness of efficiency operators. We make two interesting observations: (1) efficiency operators are commutative: the order of efficiency methods within the pipeline has little impact on the final result; (2) efficiency operators are also cumulative: the final result of combining several efficiency methods can be estimated by combining the results of the individual methods. These observations deepen our understanding of efficiency operators and provide useful guidelines for their real-world applications.
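A toy numerical illustration (not from the paper) of the cumulative observation: the effect of a pipeline is estimated by composing the effects measured for each operator in isolation, and commutativity means the order should not matter much. The numbers are made up.

```python
baseline_acc = 0.88
# Hypothetical accuracy deltas measured for each operator applied on its own.
effect = {"distillation": -0.010, "pruning": -0.015, "quantization": -0.005}

def estimate_pipeline(ops):
    # Summing individual deltas approximates the pipeline result; since addition is
    # order-independent, the estimate is the same for any ordering of `ops`.
    return baseline_acc + sum(effect[o] for o in ops)

print(estimate_pipeline(["pruning", "quantization", "distillation"]))  # ~0.85
```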
We perform a knowledge distillation (KD) benchmark from task-specific BERT-base teacher models to various student models: BiLSTM, CNN, BERT-Tiny, BERT-Mini, and BERT-Small. Our experiments cover 12 datasets grouped into two tasks: text classification and sequence labeling in the Indonesian language. We also compare various aspects of distillation, including the use of word embeddings and unlabeled data augmentation. Our experiments show that, despite the rising popularity of Transformer-based models, BiLSTM and CNN student models provide the best trade-off between performance and computational resources (CPU, RAM, and storage) compared to pruned BERT models. We further propose some quick wins for producing small NLP models through an efficient KD training mechanism involving simple choices of loss function, word embeddings, and unlabeled data preparation.
Quantization, knowledge distillation, and pruning are among the most popular methods for neural network compression in NLP. Independently, these methods reduce model size and can accelerate inference, but their relative benefits and combinatorial interactions have not been rigorously studied. For each of the eight possible subsets of these techniques, we compare accuracy versus model-size trade-offs across six BERT architectures and eight GLUE tasks. We find that quantization and distillation consistently provide greater benefit than pruning. Surprisingly, combining multiple methods together, including multiple pruning techniques with quantization, rarely yields diminishing returns. Instead, we observe complementary and super-multiplicative reductions in model size. Our work quantitatively demonstrates that combining compression methods can synergistically reduce model size, and that practitioners should prioritize (1) quantization, (2) knowledge distillation, and (3) pruning to maximize the accuracy versus model-size trade-off.
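A sketch of chaining two of the compression operators discussed above on an already fine-tuned (or distilled) BERT classifier: unstructured magnitude pruning of the linear layers followed by post-training dynamic quantization to 8 bits. The 50% sparsity level is an arbitrary illustration.

```python
import torch
from torch.nn.utils import prune
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=2)

# (1) Unstructured magnitude pruning of every linear layer.
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")   # make the pruned (zeroed) weights permanent

# (2) 8-bit dynamic quantization of the (now sparse) linear layers.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)
```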
Language models pre-trained on biomedical corpora, such as BioBERT, have recently shown promising results on downstream biomedical tasks. On the other hand, many existing pre-trained models are resource-intensive and computationally heavy owing to factors such as embedding size, hidden dimension, and number of layers. The natural language processing (NLP) community has developed many strategies to compress these models using techniques such as pruning, quantization, and knowledge distillation, resulting in models that are faster, smaller, and subsequently easier to use in practice. By the same token, in this paper we introduce six lightweight models, namely BioDistilBERT, BioTinyBERT, BioMobileBERT, DistilBioBERT, TinyBioBERT, and CompactBioBERT, obtained either by knowledge distillation from a biomedical teacher or by continual learning on the PubMed dataset via the Masked Language Modelling (MLM) objective. We evaluate all of our models on three biomedical tasks and compare them with BioBERT-v1.1 to create efficient lightweight models that perform on par with their larger counterparts. All models will be publicly available on our HuggingFace profile at https://huggingface.co/nlpie, and the code used to run the experiments will be available at https://github.com/nlpie-research/compact-biomedical-transformers.
While a lot of work has been done on understanding the representations learned within deep NLP models and the knowledge they capture, little attention has been paid to individual neurons. We propose a technique called Linguistic Correlation Analysis to extract salient neurons in a model with respect to any extrinsic property, with the goal of understanding how such knowledge is preserved within neurons. We carry out a fine-grained analysis to answer the following questions: (i) can we identify subsets of neurons in the network that capture specific linguistic properties? (ii) how localized or distributed are these neurons across the network? (iii) how redundantly is the information preserved? (iv) how does fine-tuning pre-trained models towards downstream NLP tasks affect the learned linguistic knowledge? (v) how do architectures differ in learning different linguistic properties? Our data-driven, quantitative analysis illuminates interesting findings: (i) we discover small subsets of neurons that can predict different linguistic tasks, (ii) neurons that capture basic lexical information (such as suffixes) are localized in the lower-most layers, (iii) neurons that learn complex concepts (such as syntactic roles) are found predominantly in the middle and higher layers, (iv) salient linguistic neurons relocate from higher to lower layers during transfer learning, as the network preserves the higher layers for task-specific information, (v) we find interesting differences between pre-trained models in how linguistic information is preserved within them, and (vi) concepts exhibit similar neuron distributions across different languages in multilingual transformer models. Our code is publicly available as part of the NeuroX toolkit.
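A rough sketch of a probing setup in the spirit described above: fit a regularized linear classifier from neuron activations to a linguistic property and rank neurons by the magnitude of their learned weights. The elastic-net probe and the placeholder data mirror the general idea, not the toolkit's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

activations = np.random.randn(5000, 768)       # placeholder: token activations of one layer
labels = np.random.randint(0, 2, size=5000)    # placeholder property, e.g. "is past tense"

probe = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=2000)
probe.fit(activations, labels)

salience = np.abs(probe.coef_).sum(axis=0)     # per-neuron importance for the property
top_neurons = np.argsort(salience)[::-1][:20]  # the 20 most salient neurons
```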
Large transformer models can highly improve Answer Sentence Selection (AS2) tasks, but their high computational costs prevent their use in many real-world applications. In this paper, we explore the following research question: How can we make the AS2 models more accurate without significantly increasing their model complexity? To address the question, we propose a Multiple Heads Student architecture (named CERBERUS), an efficient neural network designed to distill an ensemble of large transformers into a single smaller model. CERBERUS consists of two components: a stack of transformer layers that is used to encode inputs, and a set of ranking heads; unlike in traditional distillation techniques, each of them is trained by distilling a different large transformer architecture in a way that preserves the diversity of the ensemble members. The resulting model captures the knowledge of heterogeneous transformer models by using just a few extra parameters. We show the effectiveness of CERBERUS on three English datasets for AS2; our proposed approach outperforms all single-model distillations we consider, rivaling the state-of-the-art large AS2 models that have 2.7x more parameters and run 2.5x slower. Code for our model is available at https://github.com/amazon-research/wqa-cerberus
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
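A minimal sketch of the "one additional output layer" fine-tuning recipe the abstract describes, using the HuggingFace `transformers` Trainer; the SST-2 dataset and the hyperparameters are illustrative choices.

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Adds a freshly initialized classification head on top of the pre-trained encoder.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=2)

dataset = load_dataset("glue", "sst2")
encoded = dataset.map(lambda ex: tokenizer(ex["sentence"], truncation=True),
                      batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-sst2", num_train_epochs=3,
                           per_device_train_batch_size=32),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```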
Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations and longer training times. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT (Devlin et al., 2019). Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large. The code and the pretrained models are available at https://github.com/google-research/ALBERT.
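A small sketch of the factorized embedding parameterization, one of ALBERT's two parameter-reduction techniques: token embeddings use a small dimension E and are projected up to the hidden size H, replacing a V x H table with V x E + E x H parameters. The sizes below are illustrative.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 30000, 128, 768   # V, E, H

factorized = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),   # V x E = 3.84M parameters
    nn.Linear(embed_dim, hidden_dim),      # E x H ~ 0.1M parameters
)
# A standard V x H embedding table would instead need ~23M parameters.
vectors = factorized(torch.randint(0, vocab_size, (2, 10)))   # shape: (2, 10, 768)
```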