Language models have been shown to be very effective in predicting brain recordings of subjects experiencing complex language stimuli. For a deeper understanding of this alignment, it is important to understand how the detailed processing of linguistic information compares between the human brain and language models. In NLP, linguistic probing tasks have revealed a hierarchy of information processing in neural language models that progresses from simple to complex with an increase in depth. On the other hand, in neuroscience, the strongest alignment with high-level language brain regions has consistently been observed in the middle layers. These findings leave an open question as to what linguistic information actually underlies the observed alignment between brains and language models. We investigate this question via a direct approach, in which we eliminate information related to specific linguistic properties in the language model representations and observe how this intervention affects the alignment with fMRI brain recordings obtained while participants listened to a story. We investigate a range of linguistic properties (surface, syntactic and semantic) and find that the elimination of each one results in a significant decrease in brain alignment across all layers of a language model. These findings provide direct evidence for the role of specific linguistic information in the alignment between brain and language models, and open new avenues for mapping the joint information processing in both systems.
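The abstract above describes a two-step recipe that can be pictured with a short sketch: fit a voxelwise ridge encoding model on features from one language-model layer, eliminate a given linguistic property by residualizing out what a linear model can predict from that property, and compare brain alignment before and after. The residualization step, array names, shapes and hyperparameters below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def remove_property(layer_reps, property_labels):
    """Project out the part of the representations that a linear model
    can predict from the property labels (e.g., one-hot POS tags)."""
    probe = Ridge(alpha=1.0).fit(property_labels, layer_reps)
    return layer_reps - probe.predict(property_labels)

def brain_alignment(features, fmri, n_folds=5):
    """Cross-validated mean voxelwise correlation of a ridge encoding model."""
    scores = []
    for train, test in KFold(n_folds).split(features):
        enc = Ridge(alpha=100.0).fit(features[train], fmri[train])
        pred = enc.predict(features[test])
        r = [np.corrcoef(pred[:, v], fmri[test][:, v])[0, 1]
             for v in range(fmri.shape[1])]
        scores.append(np.nanmean(r))
    return float(np.mean(scores))

# layer_reps: (n_samples, d_model) LM features aligned to the fMRI acquisition
# pos_onehot: (n_samples, n_tags)  a surface/syntactic property to eliminate
# fmri:       (n_samples, n_voxels) recordings for the same stimulus
# drop = brain_alignment(layer_reps, fmri) - brain_alignment(
#     remove_property(layer_reps, pos_onehot), fmri)
```

A drop in the cross-validated score after the removal step is the kind of evidence the abstract refers to when it reports a significant decrease in brain alignment for each eliminated property.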
Pretrained language models that have been trained to predict the next word over billions of text documents have been shown to also significantly predict brain recordings of people comprehending language. Understanding the reasons behind the observed similarities between language in machines and language in the brain can lead to more insight into both systems. Recent works suggest that the prediction of the next word is a key mechanism that contributes to the alignment between the two. What is not yet understood is whether prediction of the next word is necessary for this observed alignment or simply sufficient, and whether there are other shared mechanisms or information that are similarly important. In this work, we take a first step towards a better understanding via two simple perturbations of a popular pretrained language model. The first perturbation is to improve the model's ability to predict the next word in the specific naturalistic stimulus text that the brain recordings correspond to. We show that this indeed improves the alignment with the brain recordings. However, this improved alignment may also be due to improved word-level or multi-word-level semantics for the specific world described by the stimulus narrative. We aim to disentangle the contribution of next-word prediction and semantic knowledge via our second perturbation: scrambling the word order at inference time, which reduces the ability to predict the next word but maintains any newly learned word-level semantics. By comparing the alignment with brain recordings of these differently perturbed models, we show that improvements in alignment with brain recordings are due to more than improvements in next-word prediction and word-level semantics.
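The second perturbation, scrambling word order at inference time, can be illustrated with a minimal sketch. The choice of GPT-2 through the Hugging Face transformers API and the sentence-level shuffle are assumptions made for illustration, not the paper's exact setup.

```python
import random
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def scramble(sentence, seed=0):
    """Shuffle word order while keeping the bag of words intact."""
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

@torch.no_grad()
def next_word_loss(text):
    ids = tok(text, return_tensors="pt").input_ids
    return model(ids, labels=ids).loss.item()  # mean next-word cross-entropy

stimulus = "the lady slipped quietly out of the house before dawn"
print(next_word_loss(stimulus), next_word_loss(scramble(stimulus)))
```

Scrambling typically raises the loss (worse next-word prediction) while leaving word-level semantics untouched, which is exactly the dissociation the second perturbation exploits.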
Building systems that achieve a deeper understanding of language is one of the central goals of natural language processing (NLP). Towards this goal, recent works have begun to train language models on narrative datasets which require extracting the most critical information by integrating across long contexts. However, it is still an open question whether these models are learning a deeper understanding of the text, or if the models are simply learning a heuristic to complete the task. This work investigates this further by turning to the one language processing system that truly understands complex language: the human brain. We show that training language models for deeper narrative understanding results in richer representations that have improved alignment to human brain activity. We further find that the improvements in brain alignment are larger for character names than for other discourse features, which indicates that these models are learning important narrative elements. Taken together, these results suggest that this type of training can indeed lead to deeper language understanding. These findings have consequences both for cognitive neuroscience by revealing some of the significant factors behind brain-NLP alignment, and for NLP by highlighting that understanding of long-range context can be improved beyond language modeling.
How related are the representations learned by neural language models, translation models, and language tagging tasks? We answer this question by adapting an encoder-decoder transfer learning method from computer vision to investigate the structure among 100 different feature spaces extracted from the hidden representations of various networks trained on language tasks. This method reveals a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings. We call this low-dimensional structure a language representation embedding because it encodes the relationships between representations needed to process language for a variety of NLP tasks. We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded with fMRI. Additionally, we find that the principal dimensions of this structure can be used to create a metric that highlights the brain's natural language processing hierarchy. This suggests that the embedding captures some part of the brain's natural language representational structure.
Deep learning has recently made remarkable progress in natural language processing. Yet, the resulting algorithms remain far from the language abilities of the human brain. Predictive coding theory offers a potential explanation for this discrepancy: while deep language algorithms are optimized to predict adjacent words, the human brain would be tuned to make long-range and hierarchical predictions. To test this hypothesis, we analyzed the fMRI brain signals of 304 subjects, each listening to 70 minutes of short stories. After confirming that the activations of deep language algorithms map onto brain responses, we show that enhancing these models with long-range forecast representations improves their brain mapping. The results further reveal a hierarchy of predictions in the brain, whereby frontoparietal cortices forecast representations that are more abstract and more distant in time than those of temporal cortices. Overall, this study strengthens predictive coding theory and suggests a critical role of long-range and hierarchical predictions in natural language processing.
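The forecast-enhancement idea can be sketched as concatenating each word's features with a summary of the features of the upcoming words and checking whether the enhanced features explain the fMRI signal better. The window size, mean pooling and ridge encoder below are assumptions for illustration, not the study's actual implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def add_forecast(features, window=7):
    """features: (n_words, d). Append the mean of the next `window` word vectors
    (the final words reuse the last vector to keep the shape fixed)."""
    n = len(features)
    future = np.stack([features[i + 1:i + 1 + window].mean(axis=0)
                       if i + 1 < n else features[-1]
                       for i in range(n)])
    return np.hstack([features, future])

# X: (n_words, d) base LM features; y_voxel: (n_words,) one voxel's response
# base  = cross_val_score(Ridge(alpha=100.0), X, y_voxel, cv=5).mean()
# ahead = cross_val_score(Ridge(alpha=100.0), add_forecast(X), y_voxel, cv=5).mean()
# a positive (ahead - base) "forecast gain" is the signature discussed above
```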
Neural language models (NLMs) have made tremendous progress in the past few years, achieving impressive performance on a variety of linguistic tasks. Leveraging this, research in neuroscience has started to use NLMs to study neural activity in the human brain during language processing. However, many questions remain unanswered regarding which factors determine the ability of a neural language model to capture brain activity (aka its "brain score"). Here, we take a first step in this direction and examine the impact of test loss, training corpus and model architecture (comparing GloVe, LSTM, GPT-2 and BERT) on the prediction of participants' fMRI timecourses. We find that (1) untrained versions of each model already explain a significant amount of signal in the brain by capturing the similarity of brain responses across identical words, with the untrained LSTM outperforming the transformer-based models, being less impacted by contextual effects; (2) training NLP models improves brain scores in the same brain regions irrespective of the model's architecture; (3) perplexity (test loss) is not a good predictor of brain score; and (4) training data have a strong influence on the outcome; in particular, off-the-shelf models may lack the statistical power to detect brain activations. Overall, we outline the impact of model-training choices and suggest good practices for future studies aiming at explaining the human language system using neural language models.
Contextual word representations derived from large-scale neural language models are successful across a diverse set of NLP tasks, suggesting that they encode useful and transferable features of language. To shed light on the linguistic knowledge they capture, we study the representations produced by several recent pretrained contextualizers (variants of ELMo, the OpenAI transformer language model, and BERT) with a suite of seventeen diverse probing tasks. We find that linear models trained on top of frozen contextual representations are competitive with state-of-the-art task-specific models in many cases, but fail on tasks requiring fine-grained linguistic knowledge (e.g., conjunct identification). To investigate the transferability of contextual word representations, we quantify differences in the transferability of individual layers within contextualizers, especially between recurrent neural networks (RNNs) and transformers. For instance, higher layers of RNNs are more task-specific, while transformer layers do not exhibit the same monotonic trend. In addition, to better understand what makes contextual word representations transferable, we compare language model pretraining with eleven supervised pretraining tasks. For any given task, pretraining on a closely related task yields better performance than language model pretraining (which is better on average) when the pretraining dataset is fixed. However, language model pretraining on more data gives the best results.
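The probing setup described here reduces to a linear classifier trained on frozen representations. The sketch below assumes precomputed per-token features and a part-of-speech task purely for illustration; the actual suite covers seventeen tasks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# X_train, X_test: (n_tokens, d) frozen token representations from one layer
# y_train, y_test: (n_tokens,)   linguistic labels, e.g. part-of-speech tags
def probe_layer(X_train, y_train, X_test, y_test):
    """Train a linear probe on frozen features and report test accuracy."""
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))

# Running probe_layer once per layer traces how task-relevant information is
# distributed across depth, e.g. the RNN vs. transformer trends noted above.
```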
People constantly use language to learn about the world. Computational linguists have capitalized on this fact to build large language models (LLMs) that acquire co-occurrence-based knowledge from language corpora. LLMs achieve impressive performance on many tasks, but the robustness of their world knowledge has been questioned. Here, we ask: do LLMs acquire generalized knowledge about real-world events? Using curated sets of minimal sentence pairs (n=1215), we tested whether LLMs are more likely to generate plausible event descriptions compared to their implausible counterparts. We found that LLMs systematically distinguish possible and impossible events (The teacher bought the laptop vs. The laptop bought the teacher) but fall short of human performance when distinguishing likely and unlikely events (The nanny tutored the boy vs. The boy tutored the nanny). In follow-up analyses, we show that (i) LLM scores are driven by both plausibility and surface-level sentence features, (ii) LLMs generalize well across syntactic sentence variants (active vs passive) but less well across semantic sentence variants (synonymous sentences), (iii) some, but not all LLM deviations from ground-truth labels align with crowdsourced human judgments, and (iv) explicit event plausibility information emerges in middle LLM layers and remains high thereafter. Overall, our analyses reveal a gap in LLMs' event knowledge, highlighting their limitations as generalized knowledge bases. We conclude by speculating that the differential performance on impossible vs. unlikely events is not a temporary setback but an inherent property of LLMs, reflecting a fundamental difference between linguistic knowledge and world knowledge in intelligent systems.
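The minimal-pair comparison can be made concrete with a short, hedged sketch: score each member of a pair with an autoregressive LM and count the pair as correct when the plausible sentence receives the higher summed log-probability. The model choice (GPT-2 via Hugging Face) and the scoring details are assumptions rather than the study's exact protocol.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def sentence_logprob(sentence):
    ids = tok(sentence, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss          # mean negative log-likelihood
    return -loss.item() * (ids.shape[1] - 1)    # total log-probability

plausible = "The teacher bought the laptop."
implausible = "The laptop bought the teacher."
print(sentence_logprob(plausible) > sentence_logprob(implausible))
```

Accuracy is then the fraction of pairs for which the plausible member wins, computed separately for the possible vs. impossible and likely vs. unlikely contrasts.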
Similar to how differences in the proficiency of the cardiovascular and musculoskeletal systems predict an individual's athletic ability, differences in how the same brain region encodes information across individuals may explain differences in their behavior. However, when studying how the brain encodes information, researchers choose different neuroimaging tasks (e.g., language or motor tasks), which can rely on processing different types of information and can modulate different brain regions. We hypothesize that individual differences in how information is encoded in the brain are task-specific and predict different behavioral measures. We propose a framework that uses encoding models to identify individual differences in brain encoding and to test whether these differences can predict behavior. We evaluate our framework using task fMRI data. Our results show that individual differences revealed by encoding models are a powerful tool for predicting behavior, and that researchers should optimize their choice of task and encoding model for the behavior they are interested in.
Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.
Probing methods allow one to obtain a partial picture of the linguistic phenomena stored in the inner layers of a neural network, using external classifiers and statistical analysis. Pretrained transformer-based language models are widely used both for natural language understanding (NLU) and natural language generation (NLG) tasks, making them the most common choice for downstream applications. However, little analysis has been done as to whether these models are trained enough or contain knowledge correlated with linguistic theory. We introduce a chronological probing study of transformer English models such as MultiBERT and T5. We sequentially compare the information about the language learned by the models in the process of training on corpora. The results show that 1) linguistic information is acquired at the early stages of training, and 2) both language models demonstrate the ability to capture various features from different levels of language, including morphology, syntax and even discourse, while they may also fail inconsistently on phenomena that are considered easy. We also introduce an open-source framework for chronological probing research that is compatible with other transformer-based models. https://github.com/ekaterinavoloshina/Chronology_probing
Transformer-based language models have recently achieved remarkable results on many natural language tasks. However, leaderboard performance is usually achieved by leveraging massive amounts of training data, and rarely by explicitly encoding linguistic knowledge into neural models. This has led many to question the relevance of linguistics for modern natural language processing. In this dissertation, I present several case studies to illustrate that theoretical linguistics and neural language models are still relevant to each other. First, language models are useful to linguists by providing an objective tool to measure semantic distance, which is difficult to do with traditional methods. On the other hand, linguistic theory contributes to language modeling research by providing frameworks and sources of data to probe our language models for specific aspects of language understanding. This thesis contributes three studies exploring different aspects of the syntax-semantics interface in language models. In the first part of the thesis, I apply language models to the problem of word class flexibility. Using mBERT as a source of semantic distance measurement, I present evidence in favour of analyzing word class flexibility as a directional process. In the second part of the thesis, I propose a method to measure surprisal at intermediate layers of language models. My experiments show that sentences containing morphosyntactic anomalies trigger surprisal earlier in language models than semantic and commonsense anomalies do. Finally, in the third part of the thesis, I adapt several psycholinguistic studies to show that language models contain knowledge of argument structure constructions. In summary, my thesis builds new connections between natural language processing, linguistic theory, and psycholinguistics to offer new perspectives on the interpretation of language models.
We study the extent to which verb alternation classes, as described by Levin (1993), are encoded in word- and sentence-level prediction tasks. We follow and expand upon the experiments of Kann et al. (2019), which aim to probe whether static embeddings encode verbs' frame-selectional properties. At both the word and sentence levels, we find that contextual embeddings from PLMs not only outperform non-contextual embeddings, but achieve astonishingly high accuracy on tasks across most alternation classes. Additionally, we find evidence that the middle layers of PLMs on average achieve better performance than the lower layers across all probing tasks.
There is an ongoing debate in the NLP community about whether modern language models contain linguistic knowledge, recovered through so-called probes. In this paper, we study whether linguistic knowledge is a necessary condition for the good performance of modern language models, which we call the rediscovery hypothesis. First, we show that language models that are significantly compressed but perform well on their pretraining objective retain good scores when probed for linguistic structures. This result supports the rediscovery hypothesis and leads to the second contribution of our paper: an information-theoretic framework that relates linguistic information and the language modeling objective. This framework also provides a metric to measure the impact of linguistic information on the word prediction task. We reinforce our analytical results with experiments on both synthetic and real NLP tasks in English.
Grounding language in vision is an active field of research seeking to enrich text-based representations of word meanings with visual-perceptual knowledge. Despite many grounding attempts, it remains unclear how to inject visual knowledge into word embeddings in a way that preserves a proper balance between textual and visual knowledge. Some common questions are the following: Is visual grounding beneficial for abstract words, or is its contribution limited to concrete words? What is the best way to bridge the gap between text and vision? How much do we gain by visually grounding textual embeddings? The present study addresses these questions by proposing a simple yet very effective grounding approach for pretrained word embeddings. Our model aligns textual embeddings with vision while largely preserving the distributional statistics that characterize word usage in text corpora. By applying the learned alignment, we are able to generate visually grounded embeddings for unseen words, including abstract words. A series of evaluations on word similarity benchmarks shows that visual grounding is beneficial not only for concrete words but also for abstract words. We also show that our method of visual grounding offers advantages for contextualized embeddings, but only when these are trained on corpora of relatively modest size. Code and grounded embeddings for English are available at https://github.com/Hazel1994/Visually_Grounded_Word_Embeddings_2.
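One simple way to realize the alignment described above, sketched here under assumptions about the data and the mixing scheme, is to learn a linear map from textual embeddings to visual features on the words that have paired images and then apply it to every word, including unseen and abstract ones.

```python
import numpy as np
from sklearn.linear_model import Ridge

# E_text: (n_seen_words, d_text) pretrained embeddings of words with paired images
# V_img:  (n_seen_words, d_img)  visual features for those same words
def learn_alignment(E_text, V_img, alpha=1.0):
    """Fit a ridge-regression map from the textual to the visual space."""
    return Ridge(alpha=alpha).fit(E_text, V_img)

def ground(embeddings, alignment, mix=0.5):
    """Blend each textual embedding with its predicted visual counterpart,
    keeping most of the original distributional information."""
    visual = alignment.predict(embeddings)
    return np.hstack([(1 - mix) * embeddings, mix * visual])

# grounded = ground(E_all_words, learn_alignment(E_text, V_img))
```

Because the map is applied on top of the textual embeddings rather than replacing them, the grounded vectors retain the corpus statistics while adding a visual component, which is the balance the abstract emphasizes.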
We propose a novel framework for analyzing how latent concepts learned in pretrained language models are encoded. It uses clustering to discover the encoded concepts and explains them by aligning them with a large set of human-defined concepts. Our analysis of seven transformer language models reveals interesting insights: i) the latent space within the learned representations overlaps with different linguistic concepts to varying degrees, ii) the lower layers of the models are dominated by lexical concepts (e.g., affixation), whereas core linguistic concepts (e.g., morphological or syntactic relations) are better represented in the middle and higher layers, and iii) some of the encoded concepts are multi-faceted and cannot be adequately explained using the existing human-defined concepts.
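The cluster-then-align analysis can be sketched as follows, with the clustering algorithm, purity threshold and annotation format all assumed for illustration: group token representations from one layer with k-means and keep only the clusters dominated by a single human-defined concept.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def discover_concepts(token_reps, token_annotations, n_clusters=50, purity=0.9):
    """token_reps: (n_tokens, d) hidden states from one layer.
    token_annotations: one human-defined concept label (e.g., a POS tag) per token."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(token_reps)
    aligned = {}
    for c in range(n_clusters):
        labels = [token_annotations[i] for i in np.where(clusters == c)[0]]
        if not labels:
            continue
        concept, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= purity:   # cluster dominated by one concept
            aligned[c] = concept            # otherwise it stays unexplained
    return aligned
```

Running this per layer gives the kind of depth profile reported above, and clusters that never pass the purity check correspond to the multi-faceted concepts that existing annotations cannot explain.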
Progress in pretrained language models has led to impressive results on downstream tasks of natural language understanding. Recent work probing pretrained language models has revealed a wide range of linguistic properties encoded in their contextualized representations. However, it is unclear whether they encode the semantic knowledge that is crucial for symbolic inference methods. We propose a methodology for probing pretrained language model representations for linguistic information relevant to logical inference. Our probing datasets cover a list of linguistic phenomena required by major symbolic inference systems. We find that (i) pretrained language models encode several types of linguistic information for inference, but some types of information are only weakly encoded, and (ii) language models can effectively learn the missing linguistic information through fine-tuning. Overall, our findings provide insights into which aspects of linguistic information for logical inference language models and their pretraining procedures capture. Moreover, we demonstrate language models' potential as semantic and background knowledge bases for supporting symbolic inference methods.
Long-distance agreement, a phenomenon often taken as evidence for syntactic structure, is increasingly used to assess the syntactic generalization of neural language models. Much work has shown that transformers are capable of high accuracy in varied agreement tasks, but the mechanisms by which the models accomplish this behavior are still not well understood. To better understand transformers' inner workings, this work contrasts how they handle two superficially similar but theoretically distinct agreement phenomena: subject-verb and object-past-participle agreement in French. Using probing and counterfactual analysis methods, our experiments show that i) the agreement task suffers from several confounders that partially call into question the conclusions drawn so far, and ii) transformers handle subject-verb and object-past-participle agreement in a way that is consistent with their modeling in theoretical linguistics.
The objective of pretrained language models is to learn contextual representations of textual data. Pretrained language models have become mainstream in natural language processing and code modeling. Using probes, a technique for studying the linguistic properties of hidden vector spaces, previous works have shown that these pretrained language models encode simple linguistic properties in their hidden representations. However, none of the previous work assessed whether these models encode the whole grammatical structure of a programming language. In this paper, we demonstrate the existence of a syntactic subspace, lying in the hidden representations of pretrained language models, which contains the syntactic information of the programming language. We show that this subspace can be extracted from the models' representations and define a novel probing method, the AST-Probe, that enables recovering the whole abstract syntax tree (AST) of an input code snippet. In our experiments, we show that this syntactic subspace exists in five state-of-the-art pretrained language models. In addition, we highlight that the middle layers of the models are the ones that encode most of the AST information. Finally, we estimate the optimal size of this syntactic subspace and show that its dimension is substantially lower than that of the models' representation spaces. This suggests that pretrained language models use a small portion of their representation space to encode the syntactic information of programming languages.
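A syntactic-subspace probe in this spirit can be sketched as a low-rank projection trained so that squared distances between projected token vectors match tree distances between the corresponding AST nodes. The shapes, hyperparameters and the omission of padding masks below are simplifying assumptions, not the AST-Probe's actual implementation.

```python
import torch

def train_subspace_probe(token_reps, tree_dists, k=128, d=768, steps=2000, lr=1e-3):
    """token_reps: (n, L, d) token vectors per code snippet (padded to length L),
    tree_dists:  (n, L, L) pairwise AST distances for the same tokens."""
    B = torch.randn(d, k, requires_grad=True)    # basis of the candidate subspace
    opt = torch.optim.Adam([B], lr=lr)
    for _ in range(steps):
        proj = token_reps @ B                            # (n, L, k) projections
        diff = proj.unsqueeze(2) - proj.unsqueeze(1)     # (n, L, L, k)
        pred = (diff ** 2).sum(-1)                       # predicted squared distances
        loss = (pred - tree_dists).abs().mean()          # padding mask omitted for brevity
        opt.zero_grad(); loss.backward(); opt.step()
    return B  # the columns of B span the estimated syntactic subspace
```

Sweeping the rank k and watching where the fit stops improving gives an estimate of the subspace size, which is one way to operationalize the claim that its dimension is far below that of the full representation space.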
Interpreting the meaning of a visual scene requires not only identifying its constituent objects, but also a rich semantic characterization of object interrelations. Here, we study the neural mechanisms underlying visuo-semantic transformation by applying modern computational techniques to a large-scale 7T fMRI dataset of human brain responses elicited by complex natural scenes. Using semantic embeddings obtained by applying linguistic deep learning models to human-generated scene descriptions, we identify a widely distributed network of brain regions that encode semantic scene descriptions. Importantly, these semantic embeddings explain the activity of these regions better than traditional object category labels. Moreover, they are effective predictors of activity even though participants were not actively engaged in a semantic task, suggesting that visuo-semantic transformation is a default mode of vision. In support of this view, we show that highly accurate reconstructions of scene captions can be produced directly from patterns of brain activity. Finally, a recurrent convolutional neural network trained on semantic embeddings further outperforms the semantic embeddings themselves in predicting brain activity, providing a mechanistic model of the brain's visuo-semantic transformation. Together, these experimental and computational results suggest that transforming visual input into a rich semantic scene description may be a central objective of the visual system, and that focusing on this new objective may lead to improved models of visual information processing in the human brain.