Language model bias has become a significant research area in the NLP community. Many debiasing techniques have been proposed, but bias removal remains an open problem. We present a novel framework for inspecting bias in pre-trained transformer-based language models via movement pruning. Given a model and a debiasing objective, our framework finds a subset of the model that contains less bias than the original. We implement our framework by pruning the model while fine-tuning it on the debiasing objective. Only the pruning scores are optimized; these parameters are coupled with the model's weights and act as gates. We experiment with pruning attention heads, an important building block of transformers: we prune square blocks, and we also establish a new way of pruning entire heads. Finally, we demonstrate the use of our framework with gender bias and, based on our findings, propose an improvement to an existing debiasing method. Additionally, we rediscover a bias-performance trade-off: the better the model performs, the more bias it contains.
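A minimal sketch of the gating mechanism this abstract describes, not the authors' implementation: each attention head receives a learnable pruning score that acts as a gate, and only these scores are optimized while the frozen model is fine-tuned on a debiasing objective. The class name, the straight-through estimator, and the threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HeadGate(nn.Module):
    """One learnable pruning score per attention head, acting as a 0/1 gate."""
    def __init__(self, num_heads: int, threshold: float = 0.0):
        super().__init__()
        self.scores = nn.Parameter(torch.ones(num_heads))  # pruning scores, the only trained parameters
        self.threshold = threshold

    def forward(self, head_outputs: torch.Tensor) -> torch.Tensor:
        # head_outputs: (batch, num_heads, seq_len, head_dim)
        hard = (self.scores > self.threshold).float()      # hard 0/1 decision per head
        gate = hard + self.scores - self.scores.detach()   # straight-through estimator
        return head_outputs * gate.view(1, -1, 1, 1)

gate = HeadGate(num_heads=12)
heads = torch.randn(2, 12, 16, 64)     # stand-in for per-head attention outputs
gate(heads).sum().backward()           # a (debiasing) loss would flow to the scores like this
print(gate.scores.grad.shape)          # torch.Size([12])
```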
Multi-head self-attention is a key component of the Transformer, a state-of-the-art architecture for neural machine translation. In this work we evaluate the contribution made by individual attention heads in the encoder to the overall performance of the model and analyze the roles played by them. We find that the most important and confident heads play consistent and often linguistically-interpretable roles. When pruning heads using a method based on stochastic gates and a differentiable relaxation of the L0 penalty, we observe that specialized heads are last to be pruned. Our novel pruning method removes the vast majority of heads without seriously affecting performance. For example, on the English-Russian WMT dataset, pruning 38 out of 48 encoder heads results in a drop of only 0.15 BLEU.
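The stochastic-gate idea lends itself to a compact sketch. The following is a hedged illustration (not the released code) of gating heads with the hard-concrete distribution and its relaxed L0 penalty; the hyperparameter values are common defaults rather than the paper's.

```python
import torch
import torch.nn as nn

class HardConcreteGate(nn.Module):
    """Stochastic gates with a differentiable relaxation of the L0 penalty."""
    def __init__(self, num_gates: int, beta: float = 2 / 3,
                 gamma: float = -0.1, zeta: float = 1.1):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(num_gates))
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self) -> torch.Tensor:
        if self.training:   # sample a noisy, stretched sigmoid gate per head
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        else:               # deterministic gate at evaluation time
            s = torch.sigmoid(self.log_alpha)
        return (s * (self.zeta - self.gamma) + self.gamma).clamp(0.0, 1.0)

    def expected_l0(self) -> torch.Tensor:
        # Expected number of open gates: the relaxed L0 term added to the loss.
        shift = self.beta * torch.log(torch.tensor(-self.gamma / self.zeta))
        return torch.sigmoid(self.log_alpha - shift).sum()

gates = HardConcreteGate(num_gates=48)   # e.g. 8 heads x 6 encoder layers
mask = gates()                           # multiply each head's output by its gate
penalty = gates.expected_l0()            # add lambda * penalty to the translation loss
print(mask.shape, float(penalty))
```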
Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.
Transformer-based language models are applied to a wide range of tasks in natural language processing. However, they are inefficient and difficult to deploy. In recent years, many compression algorithms have been proposed to improve the implementation efficiency of large Transformer-based models on target hardware. In this work we present a new method for training sparse pre-trained Transformer language models by integrating weight pruning and model distillation. These sparse pre-trained models can be used for transfer learning on a wide range of tasks while maintaining their sparsity pattern. We demonstrate our method on three known architectures to create sparse pre-trained BERT-Base, BERT-Large, and DistilBERT. We show how the compressed sparse pre-trained models transfer their knowledge to five different downstream natural language tasks with minimal accuracy loss. Moreover, we show how to further compress the sparse models' weights to 8-bit precision using quantization-aware training. For example, with our sparse pre-trained BERT-Large fine-tuned on SQuAD v1.1 and quantized to 8 bits, we achieve a compression ratio of 40x for the encoder with less than 1% accuracy loss. To the best of our knowledge, our results show the best compression-to-accuracy ratio for BERT-Base, BERT-Large, and DistilBERT.
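As an illustration of the weight-pruning half of this recipe, the sketch below applies unstructured magnitude pruning to every Linear layer; the masks installed by torch.nn.utils.prune keep the sparsity pattern fixed during subsequent fine-tuning. The 85% ratio and the toy model are assumptions, and the distillation and quantization-aware training steps are not shown.

```python
import torch
from torch.nn.utils import prune

def magnitude_prune_linear_layers(model: torch.nn.Module, amount: float = 0.85):
    """Zero out the smallest-magnitude weights of every Linear layer.

    The masks installed here keep the sparsity pattern fixed while the
    remaining weights are fine-tuned, e.g. for transfer to a downstream task.
    """
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)
    return model

# Usage on a toy model; with a real checkpoint this would be e.g. a BERT encoder.
toy = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
magnitude_prune_linear_layers(toy, amount=0.85)
sparsity = float((toy[0].weight == 0).float().mean())
print(f"layer-0 sparsity: {sparsity:.2f}")
```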
Attention is a powerful and ubiquitous mechanism for allowing neural models to focus on particular salient pieces of information by taking their weighted average when making predictions. In particular, multi-headed attention is a driving force behind many recent state-of-the-art natural language processing (NLP) models such as Transformer-based MT models and BERT. These models apply multiple attention mechanisms in parallel, with each attention "head" potentially focusing on different parts of the input, which makes it possible to express sophisticated functions beyond the simple weighted average. In this paper we make the surprising observation that even if models have been trained using multiple heads, in practice, a large percentage of attention heads can be removed at test time without significantly impacting performance. In fact, some layers can even be reduced to a single head. We further examine greedy algorithms for pruning down models, and the potential speed, memory efficiency, and accuracy improvements obtainable therefrom. Finally, we analyze the results with respect to which parts of the model are more reliant on having multiple heads, and provide precursory evidence that training dynamics play a role in the gains provided by multi-head attention. Code to replicate our experiments is provided at https://github.com/pmichel31415/are-16-heads-really-better-than-1
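To make the head-ablation setup concrete, here is a small self-contained sketch (not the paper's code) of multi-head attention whose heads can be switched off at test time with a 0/1 mask; dimensions and the printed comparison are purely illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskableMultiHeadAttention(nn.Module):
    """Self-attention whose individual heads can be switched off at test time."""
    def __init__(self, d_model: int = 64, num_heads: int = 8):
        super().__init__()
        assert d_model % num_heads == 0
        self.h, self.dk = num_heads, d_model // num_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor, head_mask: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (batch, heads, time, head_dim)
        q, k, v = (z.view(b, t, self.h, self.dk).transpose(1, 2) for z in (q, k, v))
        att = F.softmax(q @ k.transpose(-2, -1) / self.dk ** 0.5, dim=-1)
        ctx = att @ v                                # (batch, heads, time, head_dim)
        ctx = ctx * head_mask.view(1, self.h, 1, 1)  # ablate the selected heads
        return self.out(ctx.transpose(1, 2).reshape(b, t, d))

attn = MaskableMultiHeadAttention()
x = torch.randn(2, 5, 64)
keep_all = torch.ones(8)
drop_three = torch.tensor([1., 1., 0., 1., 0., 1., 1., 0.])
delta = (attn(x, keep_all) - attn(x, drop_three)).abs().mean()
print(f"mean output change after removing 3 of 8 heads: {delta.item():.4f}")
```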
Transformers have greatly advanced the state of the art in natural language processing (NLP) in recent years, but they impose very large computation and storage requirements. We observe that the design process of Transformers (pre-training a foundation model on a large dataset in a self-supervised manner and subsequently fine-tuning it for different downstream tasks) leads to task-specific models that are highly over-parameterized, adversely affecting both accuracy and inference efficiency. We propose AxFormer, a systematic framework that applies accuracy-driven approximations to create optimized Transformer models for a given downstream task. AxFormer combines two key optimizations: accuracy-driven pruning and selective hard attention. Accuracy-driven pruning identifies and removes parts of the fine-tuned Transformer that hinder performance on the given downstream task. Selective hard attention optimizes the attention blocks in selected layers by eliminating irrelevant word aggregations, helping the model focus only on the relevant parts of the input. In effect, AxFormer yields models that are more accurate while also being faster and smaller. Our experiments on GLUE and SQuAD tasks show that AxFormer models are up to 4.5% more accurate, while being up to 2.5x faster and up to 3.2x smaller than conventional fine-tuned models. In addition, we demonstrate that AxFormer can be combined with prior efforts such as distillation or quantization to achieve further efficiency gains.
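A rough sketch of what selective hard attention could look like in isolation (an assumption about the mechanism, not the AxFormer implementation): only the k largest attention weights per query are kept and renormalized, so each token aggregates information from a few relevant words only.

```python
import torch

def hard_attention(attn: torch.Tensor, k: int = 4) -> torch.Tensor:
    """Keep only the k largest attention weights per query and renormalize."""
    kth_largest = attn.topk(k, dim=-1).values[..., -1, None]   # per-query threshold
    kept = torch.where(attn >= kth_largest, attn, torch.zeros_like(attn))
    return kept / kept.sum(dim=-1, keepdim=True)

attn = torch.softmax(torch.randn(2, 12, 16, 16), dim=-1)   # (batch, heads, query, key)
sparse_attn = hard_attention(attn, k=4)
print((sparse_attn > 0).float().sum(dim=-1).unique())       # k keys kept per query (barring ties)
```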
The attention mechanism is considered the backbone of the widely-used Transformer architecture. It contextualizes the input by computing input-specific attention matrices. We find that this mechanism, while powerful and elegant, is not as important as typically thought for pretrained language models. We introduce PAPA, a new probing method that replaces the input-dependent attention matrices with constant ones -- the average attention weights over multiple inputs. We use PAPA to analyze several established pretrained Transformers on six downstream tasks. We find that without any input-dependent attention, all models achieve competitive performance -- an average relative drop of only 8% from the probing baseline. Further, little or no performance drop is observed when replacing half of the input-dependent attention matrices with constant (input-independent) ones. Interestingly, we show that better-performing models lose more from applying our method than weaker models, suggesting that the utilization of the input-dependent attention mechanism might be a factor in their success. Our results motivate research on simpler alternatives to input-dependent attention, as well as on methods for better utilization of this mechanism in the Transformer architecture.
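The constant-attention probe is easy to illustrate. Below is a minimal sketch (assuming a fixed sequence length and per-head maps collected offline; this is not the PAPA code) of replacing the input-dependent attention map with the average map over several inputs.

```python
import torch

def average_attention(attn_matrices):
    """Average per-head attention maps over many inputs.

    attn_matrices: list of (heads, seq, seq) maps collected from a corpus of
    fixed-length inputs; the result is the constant, input-independent map
    that stands in for the usual softmax(QK^T) attention.
    """
    return torch.stack(attn_matrices, dim=0).mean(dim=0)

# Toy usage: three fake attention maps for a 4-head, length-6 model.
maps = [torch.softmax(torch.randn(4, 6, 6), dim=-1) for _ in range(3)]
constant_attn = average_attention(maps)
values = torch.randn(4, 6, 16)        # per-head value vectors for one new input
context = constant_attn @ values      # attention applied without looking at the input
print(context.shape)                  # torch.Size([4, 6, 16])
```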
There is an ongoing debate in the NLP community about whether modern language models contain linguistic knowledge, recovered through so-called probes. In this paper, we study whether linguistic knowledge is a necessary condition for the good performance of modern language models, which we call the rediscovery hypothesis. First, we show that language models that are significantly compressed but perform well on their pre-training objective retain good scores when probed for linguistic structure. This result supports the rediscovery hypothesis and leads to the second contribution of our paper: an information-theoretic framework that relates the language modeling objective to linguistic information. This framework also provides a metric for measuring the impact of linguistic information on the word prediction task. We reinforce our analytical results with experiments on synthetic and real NLP tasks in English.
The size of pre-trained language models makes them challenging and expensive to use when there are multiple desired downstream tasks. In this work, we adopt recent strategies for model pruning to explore whether it is possible to prune a single encoder so that it can be used for multiple tasks. We allocate a fixed parameter budget and compare pruning a single model with a multitask objective against the best ensemble of single-task pruned models. We find that, under two pruning strategies (element-wise and rank pruning), the approach with the multitask objective outperforms training models separately when averaged across all tasks, and it is competitive on each individual task. Further analysis finds that using a multitask objective during pruning can also be an effective method for reducing model size for low-resource tasks.
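To illustrate the fixed-parameter-budget setting, here is a hedged sketch of element-wise magnitude pruning of a shared encoder down to a budget; the 30% budget, the toy encoder, and the choice of a single global threshold are assumptions, and the multitask fine-tuning loop over the summed task losses is not shown.

```python
import torch

def prune_to_budget(model: torch.nn.Module, keep_ratio: float = 0.3) -> None:
    """Element-wise magnitude pruning of a shared encoder to a fixed budget.

    One global threshold is chosen so that only `keep_ratio` of all Linear
    weights survive; the surviving encoder would then be fine-tuned on the
    sum of the task losses.
    """
    weights = [m.weight for m in model.modules() if isinstance(m, torch.nn.Linear)]
    all_mags = torch.cat([w.detach().abs().flatten() for w in weights])
    k = int(len(all_mags) * (1.0 - keep_ratio))
    threshold = all_mags.kthvalue(k).values if k > 0 else all_mags.min() - 1
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).float())

toy_encoder = torch.nn.Sequential(torch.nn.Linear(32, 32), torch.nn.GELU(), torch.nn.Linear(32, 32))
prune_to_budget(toy_encoder, keep_ratio=0.3)
kept = sum(int((m.weight != 0).sum()) for m in toy_encoder if isinstance(m, torch.nn.Linear))
print("surviving weights:", kept)
```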
Pre-trained language models achieve superior performance, but they are computationally expensive due to their large size. Techniques such as pruning and knowledge distillation (KD) have been developed to reduce their size and latency. In most structural pruning methods, the pruning units, such as attention heads and feed-forward hidden dimensions, only span a small model structure space and limit the structures that the pruning algorithm can explore. In this work, we propose Gradient-based Intra-attention pruning (GRAIN), which inspects fine intra-attention structures, and allows different heads to have different sizes. Intra-attention pruning greatly expands the searching space of model structures and yields highly heterogeneous structures. We further propose structure regularization to encourage generating more regular structures, which achieves higher speedups than heterogeneous ones. We also integrate KD into the pruning process with a gradient separation strategy to reduce the interference of KD with the pruning process. GRAIN is evaluated on a variety of tasks. Results show that it notably outperforms other methods at the same or similar model size. Even under extreme compression where only $3\%$ weights in transformers remain, the pruned model is still competitive.
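A hedged sketch of gradient-based importance scoring for fine intra-attention structures, in the spirit of the description above (not the GRAIN implementation): each output column of a projection, i.e. one dimension inside a head, is scored by the accumulated |weight x gradient|, so different heads can end up keeping different numbers of dimensions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, num_heads = 64, 8
head_dim = d_model // num_heads
query = nn.Linear(d_model, d_model)   # stand-in for one query projection of a pruned layer

x = torch.randn(4, 10, d_model)
loss = query(x).pow(2).mean()          # stand-in for a task (or distillation) loss
loss.backward()

# |w * grad| summed over the input dimension gives one score per output column,
# i.e. per intra-head dimension of the query projection.
col_scores = (query.weight * query.weight.grad).abs().sum(dim=1)
per_head = col_scores.view(num_heads, head_dim)
keep = per_head > per_head.median()    # heterogeneous structure: heads keep different dims
print(keep.sum(dim=1))                 # number of surviving dimensions in each head
```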
In recent years, Transformer models such as BERT have seen increasing adoption in natural language processing, and even in computer vision. However, due to their size, their adoption in resource-constrained computing environments has been limited. This paper proposes a novel pruning algorithm that compresses Transformer models by eliminating redundant attention heads. We apply the A* search algorithm to obtain a pruned model with minimal accuracy guarantees. Our results show that the method can eliminate as much as 40% of the attention heads in BERT Transformer models with almost no loss in accuracy.
Large pre-trained language models are successfully used for a variety of tasks across many languages. With this ever-increasing usage, the risk of harmful side effects also rises, for example through the reproduction and reinforcement of stereotypes. However, detecting and mitigating these harms is often hard to do when dealing with multiple languages or when considering different biases, and it becomes computationally expensive. To address this, we present FairDistillation: a cross-lingual method based on knowledge distillation that constructs smaller language models while controlling for specific biases. We find that our distillation method does not negatively affect downstream performance on most tasks and successfully mitigates stereotypical and representational harms. We demonstrate that FairDistillation can create fairer language models at considerably lower cost than alternative approaches.
Transformer-based NLP models are trained with hundreds of millions or even billions of parameters, limiting their applicability in computationally constrained environments. While the number of parameters generally correlates with performance, it is not clear whether the entire network is required for a downstream task. Motivated by recent work on pruning and distilling pre-trained models, we explore strategies for dropping layers from pre-trained models and observe the effect of pruning on downstream GLUE tasks. We are able to prune BERT, RoBERTa, and XLNet models by up to 40% while maintaining up to 98% of their original performance. Additionally, we show that our pruned models are on par with models built using knowledge distillation, both in terms of size and performance. Our experiments yield interesting observations, for example: (i) the lower layers are the most important for maintaining downstream task performance, (ii) some tasks, such as paraphrase detection and sentence similarity, are more robust to the dropping of layers, and (iii) models trained with different objective functions exhibit different learning patterns with respect to layer dropping.
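A minimal sketch of the layer-dropping strategy, assuming a HuggingFace BERT-style checkpoint; keeping the bottom six layers mirrors the finding that lower layers matter most, but the exact setup here is an assumption rather than the paper's recipe.

```python
import torch
from transformers import AutoModel

def keep_bottom_layers(model, num_to_keep: int = 6):
    """Drop the top encoder layers of a pre-trained BERT-style model."""
    model.encoder.layer = torch.nn.ModuleList(model.encoder.layer[:num_to_keep])
    model.config.num_hidden_layers = num_to_keep
    return model

model = AutoModel.from_pretrained("bert-base-uncased")
model = keep_bottom_layers(model, num_to_keep=6)   # 12 -> 6 layers, then fine-tune as usual
print(sum(p.numel() for p in model.parameters()) / 1e6, "M parameters")
```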
Language model pre-training, such as BERT, has significantly improved the performance of many natural language processing tasks. However, pre-trained language models are usually computationally expensive, so it is difficult to efficiently execute them on resource-restricted devices. To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of Transformer-based models. By leveraging this new KD method, the plenty of knowledge encoded in a large "teacher" BERT can be effectively transferred to a small "student" TinyBERT. Then, we introduce a new two-stage learning framework for TinyBERT, which performs Transformer distillation at both the pre-training and task-specific learning stages. This framework ensures that TinyBERT can capture the general-domain as well as the task-specific knowledge in BERT. TinyBERT_4 with 4 layers is empirically effective and achieves more than 96.8% of the performance of its teacher BERT_BASE on the GLUE benchmark, while being 7.5x smaller and 9.4x faster at inference. TinyBERT_4 is also significantly better than 4-layer state-of-the-art baselines on BERT distillation, with only ~28% of their parameters and ~31% of their inference time. Moreover, TinyBERT_6 with 6 layers performs on par with its teacher BERT_BASE.
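A hedged sketch of what a Transformer distillation objective of this kind can look like (not the released TinyBERT code): hidden states and attention maps of mapped layer pairs are matched with MSE, and the prediction layer with a soft cross-entropy. The uniform layer mapping, the absence of the learned hidden-state projection, and the temperature are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def transformer_distillation_loss(student_hidden, teacher_hidden,
                                  student_attn, teacher_attn,
                                  student_logits, teacher_logits,
                                  temperature: float = 1.0):
    """Match hidden states, attention maps, and soft predictions of mapped layers."""
    hidden_loss = sum(F.mse_loss(s, t) for s, t in zip(student_hidden, teacher_hidden))
    attn_loss = sum(F.mse_loss(s, t) for s, t in zip(student_attn, teacher_attn))
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    pred_loss = F.cross_entropy(student_logits / temperature, soft_targets)
    return hidden_loss + attn_loss + pred_loss

# Toy shapes: 4 student layers mapped onto 4 selected teacher layers.
sh = [torch.randn(2, 8, 312) for _ in range(4)]
th = [torch.randn(2, 8, 312) for _ in range(4)]   # teacher states assumed already projected to 312 dims
sa = [torch.rand(2, 12, 8, 8) for _ in range(4)]
ta = [torch.rand(2, 12, 8, 8) for _ in range(4)]
loss = transformer_distillation_loss(sh, th, sa, ta, torch.randn(2, 3), torch.randn(2, 3))
print(float(loss))
```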
We introduce BitFit, a sparse fine-tuning method in which only the bias terms of the model (or a subset of them) are modified. We show that with small-to-medium training data, applying BitFit to pre-trained BERT models is competitive with (and sometimes better than) fine-tuning the entire model. For larger data, the method is competitive with other sparse fine-tuning methods. Besides their practical utility, these findings are relevant to the question of understanding the commonly used fine-tuning process: they support the hypothesis that fine-tuning is mainly about exposing knowledge induced by language-modeling training, rather than learning new task-specific linguistic knowledge.
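The BitFit recipe reduces to a few lines. A minimal sketch, assuming a HuggingFace BERT classifier and the common choice of also leaving the classification head trainable:

```python
import torch
from transformers import AutoModelForSequenceClassification

def apply_bitfit(model: torch.nn.Module) -> None:
    """Freeze everything except bias terms (and, here, the classifier head)."""
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias") or name.startswith("classifier")

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
apply_bitfit(model)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.4%}")   # roughly 0.1% of the parameters
```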
Probing is a popular approach to discern what linguistic information is contained in the representations of pre-trained language models. However, the mechanism for selecting the probe model has recently been the subject of intense debate, as it is not clear whether probes are merely extracting information or modeling the linguistic property themselves. To address this challenge, this paper introduces a novel probing method that formulates probing as a prompting task. We conduct experiments on five probing tasks and show that our method is comparable to or better than diagnostic probes at extracting information, while learning much less on its own. We further combine probing via prompting with attention head pruning to analyze where the model stores linguistic information in its architecture. We then examine the usefulness of a specific linguistic property for pre-training by removing the heads that are essential to that property and evaluating the performance of the resulting model on language modeling.
The multi-head self-attention mechanism of the Transformer has recently been investigated thoroughly. On the one hand, researchers are interested in understanding why and how Transformers work. On the other hand, they propose new attention-augmentation methods to make Transformers more accurate, efficient, and interpretable. In this paper, we combine these two lines of research in a human-in-the-loop pipeline that first finds important task-specific attention patterns. Those patterns are then applied not only to the original model but also to smaller models, as a human-guided knowledge distillation process. The benefits of our pipeline are demonstrated in a case study on the extractive summarization task. After finding three meaningful attention patterns in the popular BERTSum model, experiments show that when we inject such patterns, both the original and the smaller models improve in performance and, arguably, in interpretability.
Large Transformer-based language models have demonstrated outstanding performance in natural language processing. Considering the transferability of the knowledge these models acquire in one domain, and the close relationship between natural language and high-level programming languages such as C/C++, this work studies how to leverage (large) Transformer-based language models to detect software vulnerabilities, and how good such models are at vulnerability detection tasks. In this regard, a systematic, cohesive framework is first presented that covers source code translation, model preparation, and inference. Then, an empirical analysis is performed using software vulnerability datasets of C/C++ source code with multiple vulnerabilities corresponding to library function calls, pointer usage, array usage, and arithmetic expressions. Our empirical results demonstrate the good performance of the language models in vulnerability detection. Moreover, these language models achieve better performance metrics, such as F1 score, than contemporary models, namely bidirectional long short-term memory and bidirectional gated recurrent units. Experimenting with language models is always challenging because of the required computational resources, platforms, libraries, and dependencies. Therefore, this paper also analyzes popular platforms for efficiently fine-tuning these models and presents recommendations for choosing among them.
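As an illustration of the inference stage of such a pipeline, the sketch below runs a Transformer sequence classifier over a C snippet; the checkpoint name is a placeholder (a generic BERT, not a model actually fine-tuned for vulnerability detection), and the label names are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"   # placeholder; in practice, fine-tuned on a labeled vulnerability dataset
tok = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

snippet = "void copy(char *dst, char *src) { strcpy(dst, src); }"   # classic unchecked copy
inputs = tok(snippet, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(dict(zip(["not_vulnerable", "vulnerable"], probs[0].tolist())))
```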
Growing awareness of bias patterns in natural language processing resources such as BERT has motivated many metrics that quantify "bias" and "fairness". However, comparing the results of different metrics, and of the works that evaluate with them, remains difficult, if not outright impossible. We survey the existing literature on fairness metrics for pre-trained language models and experimentally evaluate their compatibility, covering both bias in the language models themselves and bias in their downstream tasks. We do so through a mix of a traditional literature survey, correlation analysis, and empirical evaluations. We find that many metrics are not compatible and depend heavily on (i) the templates, (ii) the attribute and target seeds, and (iii) the choice of embeddings. These results indicate that fairness or bias evaluation remains challenging for contextualized language models, and at the very least highly subjective. To improve future comparisons and fairness evaluations, we recommend avoiding embedding-based metrics and focusing on fairness evaluation in downstream tasks.
Recent works on the Lottery Ticket Hypothesis have shown that pre-trained language models (PLMs) contain smaller matching subnetworks (winning tickets) which are capable of reaching accuracy comparable to the original models. However, these tickets are proved to be not robust to adversarial examples, and even worse than their PLM counterparts. To address this problem, we propose a novel method based on learning binary weight masks to identify robust tickets hidden in the original PLMs. Since the loss is not differentiable for the binary mask, we assign the hard concrete distribution to the masks and encourage their sparsity using a smoothing approximation of L0 regularization. Furthermore, we design an adversarial loss objective to guide the search for robust tickets and ensure that the tickets perform well both in accuracy and robustness. Experimental results show the significant improvement of the proposed method over previous work on adversarial robustness evaluation.
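A toy sketch of the combined objective described above: a relaxed binary mask over the weights of a small frozen classifier is trained with a loss on clean examples plus a loss on adversarially perturbed examples. The linear model, the FGSM perturbation, the epsilon value, and the sigmoid relaxation standing in for the hard-concrete masks are all assumptions, not the paper's setup.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
weight = torch.randn(2, 10)                               # frozen "pre-trained" weights
mask_logits = torch.zeros(2, 10, requires_grad=True)      # smoothed stand-in for hard-concrete masks
x = torch.randn(32, 10, requires_grad=True)
y = torch.randint(0, 2, (32,))
epsilon = 0.1

def forward(inputs):
    # The mask gates the frozen weights; only mask_logits are trainable.
    return inputs @ (weight * torch.sigmoid(mask_logits)).t()

clean_loss = F.cross_entropy(forward(x), y)
x_grad, = torch.autograd.grad(clean_loss, x, retain_graph=True)
x_adv = (x + epsilon * x_grad.sign()).detach()            # FGSM-perturbed inputs
adv_loss = F.cross_entropy(forward(x_adv), y)
sparsity = torch.sigmoid(mask_logits).mean()              # smoothed substitute for the L0 term
total = clean_loss + adv_loss + 0.1 * sparsity
total.backward()                                          # gradients reach the mask; frozen weights get none
print(float(total), mask_logits.grad.abs().mean().item())
```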