Probing is a popular approach for discerning what linguistic information is contained in the representations of pre-trained language models. However, the choice of probe model has recently become the subject of intense debate, as it is unclear whether probes merely extract information or themselves model the linguistic property. To address this challenge, this paper introduces a novel probing method by formulating probing as a prompting task. We conduct experiments on five probing tasks and show that our method is comparable or better at extracting information than diagnostic probes while learning much less on its own. We further combine probing via prompting with attention head pruning to analyze where the model stores linguistic information in its architecture. We then examine the usefulness of a specific linguistic property for pre-training by removing the heads that are essential to that property and evaluating the resulting model's performance on language modeling.
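To make the head-pruning step concrete, here is a minimal, hypothetical sketch (assuming Hugging Face transformers and bert-base-uncased; the head indices are placeholders, not the ones identified in the paper) of removing attention heads and re-checking a masked-token prediction:

```python
# Hypothetical sketch: prune attention heads judged important for a linguistic
# property, then re-run masked-token prediction to see how the model degrades.
# Assumes Hugging Face `transformers` and PyTorch; the head indices are placeholders.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Placeholder: heads (layer -> head indices) deemed essential for some property
# by a prior attribution step, e.g. the probing-via-prompting analysis.
heads_to_prune = {2: [0, 5], 7: [3]}
model.prune_heads(heads_to_prune)  # physically removes those heads from the network

inputs = tokenizer("The cat [MASK] on the mat.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
top_id = int(logits[0, mask_pos].argmax())
print(tokenizer.decode([top_id]))  # inspect how the prediction changes after pruning
```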
Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.
There is an ongoing debate in the NLP community over whether modern language models contain linguistic knowledge that can be recovered through so-called probes. In this paper, we study whether linguistic knowledge is a necessary condition for the good performance of modern language models, which we call the rediscovery hypothesis. First, we show that language models that are significantly compressed but perform well on their pre-training objective retain good scores when probed for linguistic structure. This result supports the rediscovery hypothesis and leads to the second contribution of our paper: an information-theoretic framework tied to the language modeling objective. The framework also provides a metric for measuring the impact of linguistic information on the word prediction task. We reinforce our analytical results with synthetic and real NLP tasks in English.
Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks. Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline. We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena. We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline.
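The edge-probing setup can be illustrated with a small sketch: a lightweight classifier predicts a span label from pooled, frozen token representations. Dimensions, pooling choice, and label count below are illustrative assumptions rather than the authors' released implementation.

```python
# Minimal edge-probing-style classifier sketch (PyTorch). A span is represented by
# mean-pooling frozen contextual token vectors; a small MLP predicts its label.
import torch
import torch.nn as nn

class SpanProbe(nn.Module):
    def __init__(self, hidden_dim: int = 768, num_labels: int = 10):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim, 256), nn.ReLU(), nn.Linear(256, num_labels)
        )

    def forward(self, token_reps: torch.Tensor, start: int, end: int) -> torch.Tensor:
        # token_reps: (seq_len, hidden_dim) frozen representations for one sentence
        span_vec = token_reps[start:end].mean(dim=0)  # simple pooling over the span
        return self.classifier(span_vec)

# Usage with dummy data: 12 tokens, probe the span [3, 6).
probe = SpanProbe()
reps = torch.randn(12, 768)          # stand-in for frozen encoder outputs
logits = probe(reps, 3, 6)
print(logits.shape)                  # torch.Size([10])
```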
Petroni et al. (2019) demonstrated that world facts can be retrieved from a pre-trained language model by expressing them as cloze-style prompts, interpreting the model's prediction accuracy as a lower bound on the amount of factual information it encodes. Subsequent work has attempted to tighten this estimate by searching for better prompts, using a disjoint set of facts as training data. In this work, we make two complementary contributions to better understand these factual probing techniques. First, we propose OptiPrompt, a novel and efficient method that optimizes prompts directly in continuous embedding space. We find that this simple approach is able to predict an additional 6.4% of facts in the LAMA benchmark. Second, we raise a more important question: can we really interpret these probing results as a lower bound? Is it possible that these prompt-search methods learn from the training data? We find, somewhat surprisingly, that the training data used by these methods contains certain regularities of the underlying fact distribution, and that all existing prompting methods, including ours, can exploit them to achieve better fact prediction. We conduct a set of control experiments to disentangle "learning" from "learning to recall", providing a more detailed picture of what different prompts can reveal about pre-trained language models.
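A hedged sketch of the core idea of optimizing a prompt directly in continuous embedding space, with the language model kept frozen and only the prompt vectors receiving gradients. The prompt length, learning rate, and the single (subject, object) pair are placeholders for illustration only.

```python
# Sketch of optimizing a prompt directly in continuous embedding space: the LM is
# frozen and only the prompt vectors receive gradients. Prompt length, learning rate,
# and the single (subject, object) training pair are illustrative assumptions.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
for p in model.parameters():
    p.requires_grad = False

emb = model.get_input_embeddings()                      # frozen token embedding table
prompt = torch.nn.Parameter(torch.randn(5, emb.embedding_dim) * 0.02)
optimizer = torch.optim.Adam([prompt], lr=3e-3)

subj_ids = tokenizer("Paris", add_special_tokens=False, return_tensors="pt").input_ids
obj_id = tokenizer("France", add_special_tokens=False).input_ids[0]
mask_emb = emb(torch.tensor([[tokenizer.mask_token_id]]))

for step in range(100):
    # Input layout: [subject tokens] + [learned prompt vectors] + [MASK]
    inputs_embeds = torch.cat([emb(subj_ids), prompt.unsqueeze(0), mask_emb], dim=1)
    logits = model(inputs_embeds=inputs_embeds).logits
    loss = torch.nn.functional.cross_entropy(
        logits[0, -1].unsqueeze(0), torch.tensor([obj_id])
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```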
Contextual word representations derived from large-scale neural language models are successful across a diverse set of NLP tasks, suggesting that they encode useful and transferable features of language. To shed light on the linguistic knowledge they capture, we study the representations produced by several recent pretrained contextualizers (variants of ELMo, the OpenAI transformer language model, and BERT) with a suite of seventeen diverse probing tasks. We find that linear models trained on top of frozen contextual representations are competitive with state-of-the-art task-specific models in many cases, but fail on tasks requiring fine-grained linguistic knowledge (e.g., conjunct identification). To investigate the transferability of contextual word representations, we quantify differences in the transferability of individual layers within contextualizers, especially between recurrent neural networks (RNNs) and transformers. For instance, higher layers of RNNs are more task-specific, while transformer layers do not exhibit the same monotonic trend. In addition, to better understand what makes contextual word representations transferable, we compare language model pretraining with eleven supervised pretraining tasks. For any given task, pretraining on a closely related task yields better performance than language model pretraining (which is better on average) when the pretraining dataset is fixed. However, language model pretraining on more data gives the best results.
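A minimal example of the linear-probing recipe described above, assuming bert-base-uncased and a toy sentence with hand-assigned part-of-speech tags; a real probe would be trained on an annotated corpus.

```python
# Minimal linear probe over frozen contextual representations: extract one layer's
# token vectors and fit a logistic-regression tagger. Toy sentence and hand-written
# tags are placeholders; a real probe uses an annotated corpus.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def token_vectors(sentence: str, layer: int = 8):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[layer][0]   # (seq_len, hidden_dim)
    return hidden.numpy()

X = token_vectors("The cat sat on the mat .")
y = ["DET", "NOUN", "VERB", "ADP", "DET", "NOUN", "PUNCT"]  # placeholder POS tags
# Drop the [CLS]/[SEP] rows so the vectors align with the word-level labels.
probe = LogisticRegression(max_iter=1000).fit(X[1:-1], y)
print(probe.score(X[1:-1], y))
```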
Progress in pre-trained language models has led to impressive results on downstream natural language understanding tasks. Recent work probing pre-trained language models has revealed a wide range of linguistic properties encoded in their contextualized representations. However, it is still unclear whether they encode the semantic knowledge that is crucial to symbolic inference methods. We propose a methodology for probing linguistic information for logical inference in pre-trained language model representations. Our probing datasets cover a list of linguistic phenomena required by major symbolic inference systems. We find that (i) pre-trained language models encode several types of linguistic information for inference, but some types of information are only weakly encoded, and (ii) language models can effectively learn the missing linguistic information through fine-tuning. Overall, our findings provide insights into which aspects of linguistic information for logical inference are captured by language models and their pre-training procedures. Moreover, we have demonstrated the potential of language models to serve as semantic and background knowledge bases for supporting symbolic inference methods.
Large pre-trained neural networks such as BERT have had great recent success in NLP, motivating a growing body of research investigating what aspects of language they are able to learn from unlabeled data. Most recent analysis has focused on model outputs (e.g., language model surprisal) or internal vector representations (e.g., probing classifiers). Complementary to these works, we propose methods for analyzing the attention mechanisms of pre-trained models and apply them to BERT. BERT's attention heads exhibit patterns such as attending to delimiter tokens, specific positional offsets, or broadly attending over the whole sentence, with heads in the same layer often exhibiting similar behaviors. We further show that certain attention heads correspond well to linguistic notions of syntax and coreference. For example, we find heads that attend to the direct objects of verbs, determiners of nouns, objects of prepositions, and coreferent mentions with remarkably high accuracy. Lastly, we propose an attention-based probing classifier and use it to further demonstrate that substantial syntactic information is captured in BERT's attention. Code will be released at https://github.com/clarkkev/attention-analysis. We use the English base-sized model.
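As an illustration of this kind of attention analysis, the sketch below measures how much attention mass each head places on the [SEP] token, one of the patterns reported above. The model name and example sentence are assumptions.

```python
# Sketch of head-level attention analysis: for every head, measure the average
# attention mass placed on the [SEP] token. Model name and example are assumptions.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

inputs = tokenizer("The keys to the cabinet are on the table.", return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions   # one (1, heads, seq, seq) tensor per layer

sep_index = int((inputs.input_ids[0] == tokenizer.sep_token_id).nonzero()[0])
for layer, attn in enumerate(attentions):
    # Average over all query positions of the attention each head puts on [SEP].
    sep_mass = attn[0, :, :, sep_index].mean(dim=-1)
    print(f"layer {layer}:", [round(v, 2) for v in sep_mass.tolist()])
```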
Transformer-based language models have recently achieved remarkable results on many natural language tasks. However, leaderboard performance is usually achieved by leveraging massive amounts of training data and rarely by encoding explicit linguistic knowledge into neural models. This has led many to question the relevance of linguistics to modern natural language processing. In this thesis, I present several case studies to illustrate that theoretical linguistics and neural language models remain relevant to each other. First, language models are useful to linguists by providing an objective tool for measuring semantic distance, which is difficult to do with traditional methods. On the other hand, linguistic theory contributes to language modeling research by providing frameworks and data sources for probing our language models on specific aspects of language understanding. This thesis contributes three studies exploring different aspects of the syntax-semantics interface in language models. In the first part of the thesis, I apply language models to the problem of word class flexibility. Using mBERT as a source of semantic distance measurements, I present evidence in favor of analyzing word class flexibility as a directional process. In the second part of the thesis, I propose a method for measuring surprisal in the intermediate layers of language models. My experiments show that sentences containing morphosyntactic anomalies trigger surprisal earlier in a language model than semantic and commonsense anomalies. Finally, in the third part of the thesis, I adapt several psycholinguistic studies to show that language models contain knowledge of argument structure constructions. In summary, my thesis develops new connections between natural language processing, linguistic theory, and psycholinguistics to offer new perspectives on the interpretation of language models.
Pre-trained Language Model (PLM) has become a representative foundation model in the natural language processing field. Most PLMs are trained with linguistic-agnostic pre-training tasks on the surface form of the text, such as the masked language model (MLM). To further empower the PLMs with richer linguistic features, in this paper, we aim to propose a simple but effective way to learn linguistic features for pre-trained language models. We propose LERT, a pre-trained language model that is trained on three types of linguistic features along with the original MLM pre-training task, using a linguistically-informed pre-training (LIP) strategy. We carried out extensive experiments on ten Chinese NLU tasks, and the experimental results show that LERT could bring significant improvements over various comparable baselines. Furthermore, we also conduct analytical experiments in various linguistic aspects, and the results prove that the design of LERT is valid and effective. Resources are available at https://github.com/ymcui/LERT
Although considerable work has been done on understanding the representations learned in deep NLP models and the knowledge they capture, little attention has been paid to individual neurons. We present a technique called Linguistic Correlation Analysis that extracts salient neurons in a model with respect to any external property, with the goal of understanding how such knowledge is preserved within neurons. We perform a fine-grained analysis to answer the following questions: (i) can we identify subsets of neurons in the network that capture specific linguistic properties? (ii) how localized or distributed are neurons across the network? (iii) how redundantly is the information preserved? (iv) how does fine-tuning pre-trained models on downstream NLP tasks affect the learned linguistic knowledge? (v) how do architectures differ in learning different linguistic properties? Our data-driven, quantitative analysis illuminates interesting findings: (i) we discover small subsets of neurons that can predict different linguistic tasks; (ii) neurons that capture basic lexical information (such as suffixes) are localized in the lowermost layers, whereas those that learn complex concepts (such as syntactic roles) are predominantly found in the middle and higher layers; (iii) during transfer learning, salient linguistic neurons relocate from higher to lower layers, as the network reserves the higher layers for task-specific information; (iv) we find interesting differences between pre-trained models in how linguistic information is preserved; and (v) we find that concepts exhibit similar neuron distributions across different languages in multilingual transformer models. Our code is publicly available as part of the NeuroX toolkit.
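One simple way to approximate such a neuron-level analysis is to train a sparse linear probe and rank neurons by the magnitude of their learned weights. The sketch below uses synthetic data with a few planted informative neurons; it is an illustrative stand-in, not the NeuroX implementation.

```python
# Sketch of a simple neuron-ranking analysis: train a sparse linear probe for a
# property, then rank neurons by the magnitude of their learned weights. The data
# is synthetic with a few planted informative neurons; real analyses use corpora.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 768))              # stand-in for frozen token representations
informative = [12, 300, 512]                 # neurons made predictive on purpose
y = (X[:, informative].sum(axis=1) > 0).astype(int)

probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
ranking = np.argsort(-np.abs(probe.coef_[0]))
print("top neurons:", ranking[:10])          # the planted neurons should rank near the top
```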
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
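The "one additional output layer" recipe corresponds to the standard sequence-classification fine-tuning setup; a hedged sketch with toy data and placeholder hyperparameters follows.

```python
# Hedged sketch of the standard fine-tuning recipe: a classification head on top of
# pre-trained BERT, trained end-to-end. Texts, labels, and hyperparameters are toys.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

texts = ["a delightful film", "a tedious mess"]        # toy sentiment examples
labels = torch.tensor([1, 0])

inputs = tokenizer(texts, padding=True, return_tensors="pt")
outputs = model(**inputs, labels=labels)               # loss is computed internally
outputs.loss.backward()
optimizer.step()
print(float(outputs.loss))
```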
Pre-trained language models are designed to learn contextual representations of text data, and they have become the mainstream backbone for natural language processing and code modeling. Using probes, a technique for studying the linguistic properties of hidden vector spaces, previous work has shown that these pre-trained language models encode simple linguistic properties in their hidden representations. However, none of the previous work has assessed whether these models encode the whole grammatical structure of a programming language. In this paper, we demonstrate the existence of a syntactic subspace, lying in the hidden representations of pre-trained language models, which contains the syntactic information of the programming language. We show that this subspace can be extracted from the models' representations and define a novel probing method, the AST-Probe, that can recover the whole abstract syntax tree (AST) of an input code snippet. In our experiments, we show that this syntactic subspace exists in five state-of-the-art pre-trained language models. In addition, we highlight that the middle layers of the models are the ones that encode most of the AST information. Finally, we estimate the optimal size of this syntactic subspace and show that its dimension is substantially lower than that of the models' representation space. This suggests that pre-trained language models use only a small part of their representation space to encode the syntactic information of programming languages.
Pre-trained text encoders have rapidly advanced the state of the art on many NLP tasks. We focus on one such model, BERT, and aim to quantify where linguistic information is captured within the network. We find that the model represents the steps of the traditional NLP pipeline in an interpretable and localizable way, and that the regions responsible for each step appear in the expected sequence: POS tagging, parsing, NER, semantic roles, then coreference. Qualitative analysis reveals that the model can and often does adjust this pipeline dynamically, revising lower-level decisions on the basis of disambiguating information from higher-level representations.
Long-distance agreement, a phenomenon taken as evidence for syntactic structure, is increasingly used to assess the syntactic generalization of neural language models. Much work has shown that transformers are capable of high accuracy on varied agreement tasks, but the mechanisms by which the models accomplish this behavior are still not well understood. To better understand transformers' internal workings, this work contrasts how they handle two superficially similar but theoretically distinct agreement phenomena: subject-verb and object-past participle agreement in French. Using probing and counterfactual analysis methods, our experiments show that i) the agreement task suffers from several confounders that partially call into question the conclusions drawn so far, and ii) transformers handle subject-verb and object-past participle agreement in a way that is consistent with their modeling in theoretical linguistics.
In this paper, we seek to measure how much information a component in a neural network could extract from the representations fed into it. Our work stands in contrast to prior probing work, most of which investigates how much information a model's representations contain. This shift in perspective leads us to propose a new principle for probing, the architectural bottleneck principle: In order to estimate how much information a given component could extract, a probe should look exactly like the component. Relying on this principle, we estimate how much syntactic information is available to transformers through our attentional probe, a probe that exactly resembles a transformer's self-attention head. Experimentally, we find that, in three models (BERT, ALBERT, and RoBERTa), a sentence's syntax tree is mostly extractable by our probe, suggesting these models have access to syntactic information while composing their contextual representations. Whether this information is actually used by these models, however, remains an open question.
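The architectural bottleneck principle suggests a probe shaped exactly like a self-attention head. A sketch of what such an attentional probe could look like (dimensions, data, and training signal are illustrative assumptions, not the paper's implementation):

```python
# Sketch of an "attentional probe": query/key projections and a score matrix,
# mirroring the shape of one self-attention head, trained so that each token
# attends to its syntactic head. Dimensions and data are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionalProbe(nn.Module):
    def __init__(self, hidden_dim: int = 768, head_dim: int = 64):
        super().__init__()
        self.query = nn.Linear(hidden_dim, head_dim)
        self.key = nn.Linear(hidden_dim, head_dim)

    def forward(self, reps: torch.Tensor) -> torch.Tensor:
        # reps: (seq_len, hidden_dim) frozen contextual vectors for one sentence
        q, k = self.query(reps), self.key(reps)
        return q @ k.T / (k.shape[-1] ** 0.5)   # row i scores candidate heads of token i

probe = AttentionalProbe()
reps = torch.randn(10, 768)                     # stand-in for encoder outputs
gold_heads = torch.randint(0, 10, (10,))        # placeholder dependency heads
loss = nn.functional.cross_entropy(probe(reps), gold_heads)
loss.backward()
```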
Through in-context learning (ICL), large-scale language models are effective few-shot learners without additional model fine-tuning. However, the ICL performance does not scale well with the number of available training samples as it is limited by the inherent input length constraint of the underlying language model. Meanwhile, many studies have revealed that language models are also powerful feature extractors, allowing them to be utilized in a black-box manner and enabling the linear probing paradigm, where lightweight discriminators are trained on top of the pre-extracted input representations. This paper proposes prompt-augmented linear probing (PALP), a hybrid of linear probing and ICL, which leverages the best of both worlds. PALP inherits the scalability of linear probing and the capability of enforcing language models to derive more meaningful representations via tailoring input into a more conceivable form. Throughout in-depth investigations on various datasets, we verified that PALP significantly enhances the input representations closing the gap between ICL in the data-hungry scenario and fine-tuning in the data-abundant scenario with little training overhead, potentially making PALP a strong alternative in a black-box scenario.
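A hedged sketch of the prompt-augmented linear probing idea: wrap the input in a task template, take the frozen representation at the [MASK] position, and train a linear classifier on top. The template, model, and toy labels are assumptions, not the PALP reference implementation.

```python
# Sketch of prompt-augmented linear probing: wrap the input in a task template,
# take the frozen representation at the [MASK] position, and fit a linear classifier.
# Template, model, and toy labels are assumptions, not the PALP reference code.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def encode(text: str):
    prompted = f"Review: {text} Overall it was [MASK]."   # task-oriented template
    inputs = tokenizer(prompted, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero()[0]
    return hidden[mask_pos].squeeze(0).numpy()            # representation at [MASK]

texts = ["great acting and a sharp script", "boring and far too long"]
labels = [1, 0]                                           # toy sentiment labels
clf = LogisticRegression(max_iter=1000).fit([encode(t) for t in texts], labels)
print(clf.predict([encode("an absolute delight")]))
```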
Many efforts have been made to understand what grammatical knowledge (e.g., the ability to identify a token's part of speech) is encoded in large pre-trained language models (LMs). This is typically done with "edge probing" (EP) tests: supervised classification tasks that predict the grammatical properties of a span (whether it has a particular part of speech, for instance) using only the token representations from the LM encoder. However, most NLP applications fine-tune these LM encoders for specific tasks. Here, we ask: if an LM is fine-tuned, does its encoding of linguistic information change, as measured by EP tests? Specifically, we focus on the task of question answering (QA) and conduct experiments on multiple datasets. We find that EP test results do not change significantly when the fine-tuned model performs well, nor in adversarial settings where the model is forced to learn wrong correlations. From similar findings, several recent papers conclude that fine-tuning does not change the linguistic knowledge in the encoder, but they do not provide an explanation. We find that the EP models themselves are prone to exploiting spurious correlations in the EP datasets. When this dataset bias is corrected, we do see an improvement in the EP test results.
Pre-trained language models for programming languages have shown a powerful ability to process many Software Engineering (SE) tasks, e.g., program synthesis, code completion, and code search. However, it remains to be seen what is behind their success. Recent studies have examined how pre-trained models can effectively learn syntax information based on Abstract Syntax Trees. In this paper, we figure out what role the self-attention mechanism plays in understanding code syntax and semantics based on AST and static analysis. We focus on a well-known representative code model, CodeBERT, and study how it can learn code syntax and semantics by the self-attention mechanism and Masked Language Modelling (MLM) at the token level. We propose a group of probing tasks to analyze CodeBERT. Based on AST and static analysis, we establish the relationships among the code tokens. First, our results show that CodeBERT can acquire syntax and semantics knowledge through self-attention and MLM. Second, we demonstrate that the self-attention mechanism pays more attention to dependence-relationship tokens than to other tokens. Different attention heads play different roles in learning code semantics; we show that some of them are weak at encoding code semantics. Different layers have different competencies to represent different code properties. Deep CodeBERT layers can encode the semantic information that requires some complex inference in the code context. More importantly, we show that our analysis is helpful, and we leverage our conclusions to improve CodeBERT. We show an alternative approach for pre-training models, which makes full use of the current pre-training strategy, i.e., MLM, to learn code syntax and semantics, instead of combining features from different code data formats, e.g., data-flow, running-time states, and program outputs.
Prompt tuning is an extremely effective tool for adapting pre-trained models to downstream tasks. However, standard prompt-based methods mainly consider settings with sufficient data for the downstream task. It remains unclear whether this advantage transfers to the few-shot regime, where only limited data is available for each downstream task. Although some works have demonstrated the potential of prompt tuning in few-shot settings, the mainstream approaches of searching for discrete prompts or tuning soft prompts with limited data remain very challenging. Through extensive empirical studies, we find that there is still a gap between prompt tuning and full fine-tuning in few-shot learning. To bridge this gap, we propose a new prompt-tuning framework called Soft Template Tuning (STT). STT combines manual and automatic prompts and treats downstream classification tasks as masked language modeling tasks. A comprehensive evaluation across different settings shows that STT can close the gap between fine-tuning and prompt-based methods without introducing additional parameters. Significantly, it can even outperform the time- and resource-consuming fine-tuning approach on sentiment classification tasks.