Many studies have recently investigated the properties of contextual language models, but surprisingly, only a few of them consider those properties in terms of semantic similarity. In this paper, we first focus on these properties for English by exploiting a probing mechanism in the controlled context of SemCor and WordNet paradigmatic relations. We then propose to adapt the same method to a more open setting to characterize the differences between static and contextual models.
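As a rough sketch of the kind of probing setup described here (not the authors' code), one can embed word pairs with a contextual model and train a linear probe to predict whether a paradigmatic relation holds; the toy pairs below stand in for SemCor/WordNet supervision, and bert-base-uncased is an assumed checkpoint:

```python
# Illustrative probing sketch (assumptions: toy PAIRS, bert-base-uncased).
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def word_vec(word, sentence):
    """Mean-pool the contextual vectors of `word`'s subtokens in `sentence`."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    sub = tok(word, add_special_tokens=False)["input_ids"]
    seq = enc["input_ids"][0].tolist()
    for i in range(len(seq) - len(sub) + 1):           # first matching span
        if seq[i:i + len(sub)] == sub:
            return hidden[i:i + len(sub)].mean(0).numpy()
    raise ValueError(f"{word} not found")

# toy stand-in for contexts labeled with a paradigmatic relation (1) or not (0)
PAIRS = [
    ("car", "she parked the car outside", "automobile", "he washed the automobile", 1),
    ("car", "she parked the car outside", "banana", "he peeled the banana", 0),
    ("dog", "the dog barked loudly", "animal", "the animal slept", 1),
    ("dog", "the dog barked loudly", "guitar", "he tuned the guitar", 0),
]
X = np.stack([np.abs(word_vec(w1, c1) - word_vec(w2, c2))
              for w1, c1, w2, c2, _ in PAIRS])
y = [label for *_, label in PAIRS]
probe = LogisticRegression(max_iter=1000).fit(X, y)    # the probing classifier
```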
Natural Language Understanding has seen an increasing number of publications in the last few years, especially after robust word embedding models became prominent, when they proved themselves able to capture and represent semantic relationships from massive amounts of data. Nevertheless, traditional models often fall short on intrinsic linguistic issues such as polysemy and homonymy. Any expert system that makes use of natural language at its core can be affected by a weak semantic representation of text, resulting in inaccurate outcomes based on poor decisions. To mitigate such issues, we propose a novel approach called Most Suitable Sense Annotation (MSSA), which disambiguates and annotates each word by its specific sense, considering the semantic effects of its context. Our approach brings three main contributions to the semantic representation scenario: (i) an unsupervised technique that disambiguates and annotates words by their senses, (ii) a multi-sense embeddings model that can be extended to any traditional word embeddings algorithm, and (iii) a recurrent methodology that allows our models to be re-used and their representations refined. We test our approach on six different benchmarks for the word similarity task, showing that it produces state-of-the-art results and outperforms several more complex systems.
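A minimal sketch of the disambiguate-then-embed idea, with NLTK's Lesk algorithm standing in for MSSA's own unsupervised disambiguation step (an assumption, not the paper's procedure):

```python
# Sketch: annotate tokens with a sense, then learn one vector per sense.
# Lesk is a stand-in disambiguator here, not the MSSA algorithm itself.
import nltk
from gensim.models import Word2Vec
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

nltk.download("wordnet")
nltk.download("punkt")

def sense_annotate(sentence):
    tokens = word_tokenize(sentence.lower())
    annotated = []
    for token in tokens:
        synset = lesk(tokens, token)                    # context-driven sense pick
        annotated.append(synset.name() if synset else token)  # e.g. 'bank.n.01'
    return annotated

corpus = [
    "I deposited the money at the bank",
    "The boat drifted toward the river bank",
]
sentences = [sense_annotate(s) for s in corpus]
# any traditional word embeddings algorithm can now train on sense tokens
model = Word2Vec(sentences, vector_size=50, min_count=1, epochs=50)
```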
Transformer-based language models have recently achieved remarkable results on many natural language tasks. However, leaderboard performance is usually achieved by leveraging massive amounts of training data, and rarely by encoding explicit linguistic knowledge into neural models. This has led many to question the relevance of linguistics to modern natural language processing. In this dissertation, I present several case studies to illustrate that theoretical linguistics and neural language models remain relevant to each other. First, language models are useful to linguists by providing an objective tool to measure semantic distance, which is difficult to do with traditional methods. On the other hand, linguistic theory contributes to language modeling research by providing frameworks and data sources to probe our language models for specific aspects of language understanding. This thesis contributes three studies that explore different aspects of the syntax-semantics interface in language models. In the first part of the thesis, I apply language models to the problem of word class flexibility. Using mBERT as a source of semantic distance measurements, I present evidence in favor of analyzing word class flexibility as a directional process. In the second part of the thesis, I propose a method to measure surprisal at intermediate layers of language models. My experiments show that sentences containing morphosyntactic anomalies trigger surprisal earlier in language models than semantic and commonsense anomalies do. Finally, in the third part of the thesis, I adapt several psycholinguistic studies to show that language models contain knowledge of argument structure constructions. In summary, my thesis develops new connections between natural language processing, linguistic theory, and psycholinguistics to provide fresh perspectives for the interpretation of language models.
The relationship between words in a sentence often tells us more about the underlying semantic content of a document than its actual words, individually. In this work, we propose two novel algorithms, called Flexible Lexical Chain II and Fixed Lexical Chain II. These algorithms combine the semantic relations derived from lexical chains, prior knowledge from lexical databases, and the robustness of the distributional hypothesis in word embeddings as building blocks forming a single system. In short, our approach has three main contributions: (i) a set of techniques that fully integrate word embeddings and lexical chains; (ii) a more robust semantic representation that considers the latent relation between words in a document; and (iii) lightweight word embeddings models that can be extended to any natural language task. We assess the knowledge of pre-trained models to evaluate their robustness in the document classification task. The proposed techniques are tested against seven word embeddings algorithms using five different machine learning classifiers over six scenarios in the document classification task. Our results show that the integration of lexical chains and word embedding representations sustains state-of-the-art results, even against more complex systems.
We present a qualitative analysis of the (possibly erroneous) outputs of contextualized embedding-based methods for detecting diachronic semantic change. First, we introduce an ensemble method outperforming previously described contextualized approaches. This method is used as a basis for an in-depth analysis of the degrees of semantic change predicted for English words across five decades. Our findings show that contextualized methods can often predict high change scores for words which have not undergone any real diachronic semantic shift in the lexicographic sense of the term (or at least the status of these shifts is questionable). Such challenging cases are discussed in detail, and a linguistic typology of them is proposed. We conclude that pre-trained contextualized language models are prone to confound changes in lexicographic senses with changes in contextual variance, which naturally stems from their distributional nature, but is different from the types of issues observed in methods based on static embeddings. Additionally, they often merge together syntactic and semantic aspects of lexical entities. We propose a range of possible future solutions to these problems.
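For reference, a sketch of the prototype-style change score such contextualized methods build on: average a word's contextual vectors in each time slice and take the cosine distance between the averages (sentences and checkpoint below are placeholders):

```python
# Prototype-style change score: cosine distance between a word's average
# contextual vectors in two time slices. Sentences are toy placeholders.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def prototype(word, sentences):
    vecs = []
    for s in sentences:
        enc = tok(s, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]
        for pos, tid in enumerate(enc["input_ids"][0].tolist()):
            if tok.decode([tid]).strip() == word:      # single-subtoken words only
                vecs.append(hidden[pos])
    return torch.stack(vecs).mean(0)

old = prototype("cell", ["the monk slept in his cell", "a damp prison cell"])
new = prototype("cell", ["my cell battery died", "call me on my cell"])
change_score = 1 - torch.cosine_similarity(old, new, dim=0).item()
```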
State-of-the-art approaches for metaphor detection compare the literal, or core, meaning of words with their contextual meaning, using sequential metaphor classifiers based on neural networks. The signal that represents the literal meaning is often given by (non-contextual) word embeddings. However, metaphorical expressions evolve over time due to various reasons, such as cultural and societal influences. Metaphorical expressions are known to co-evolve with language and literal word meanings, and even, to some extent, to drive this evolution. This raises the question of whether different, possibly time-specific, representations of literal meanings may affect the metaphor detection task. To the best of our knowledge, this is the first study to examine the metaphor detection task with a detailed exploratory analysis in which different temporal and static word embeddings are used to account for different representations of literal meanings. Our experimental analysis is based on three popular benchmarks for metaphor detection and word embeddings extracted from different corpora and temporally aligned with different state-of-the-art approaches. The results suggest that different word embeddings do impact the metaphor detection task, and that some temporal word embeddings slightly outperform static methods on some performance measures. However, the results also suggest that temporal word embeddings may provide representations of a word's core meaning that are too close to its metaphorical meaning, thus confusing the classifier. Overall, the interaction between temporal language evolution and metaphor detection appears to be tiny in the benchmark datasets used in our experiments. This suggests that future work on the computational analysis of this important linguistic phenomenon should first start by creating a new dataset in which this interaction is better represented.
Post-processing of static embeddings has been shown to improve their performance on both lexical and sequence-level tasks. However, post-processing of contextualized embeddings is an under-studied problem. In this work, we question the usefulness of post-processing for contextualized embeddings obtained from different pre-trained language models. More specifically, we standardize individual neuron activations using z-score and min-max normalization, and remove top principal components using the all-but-the-top method. Additionally, we apply unit-length normalization to word representations. On a diverse set of pre-trained models, we show that post-processing unwraps vital information present in the representations for both lexical tasks (such as word similarity and analogy) and sequence classification tasks. Our findings raise interesting points in relation to research studies that use contextualized representations, and suggest z-score normalization as an essential step to consider when using them in an application.
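The four operations named above are standard enough to sketch directly in NumPy; the choice of d = dim // 100 for all-but-the-top is the common heuristic, not necessarily the paper's setting:

```python
# The four post-processing operations over a matrix X of contextual vectors,
# shape (num_words, dim). d for all-but-the-top is an assumed heuristic.
import numpy as np

def zscore(X):
    return (X - X.mean(0)) / (X.std(0) + 1e-8)           # per-neuron standardization

def minmax(X):
    mn, mx = X.min(0), X.max(0)
    return (X - mn) / (mx - mn + 1e-8)                    # per-neuron [0, 1] scaling

def all_but_the_top(X, d=None):
    d = d or max(1, X.shape[1] // 100)
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)     # dominant directions
    return Xc - Xc @ Vt[:d].T @ Vt[:d]                    # drop top-d components

def unit_length(X):
    return X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
```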
Patent data is an important source of knowledge for innovation research, while the technological similarity between pairs of patents is a key metric for patent analysis. Recently, researchers have been using patent vector space models based on different NLP embedding models to calculate the technological similarity between pairs of patents, in order to help better understand innovation, patent landscaping, technology mapping, and patent quality evaluation. To the best of our knowledge, there is no comprehensive survey that builds the big picture of the performance of embedding models for calculating patent similarity indicators. Therefore, in this study we provide an overview of the accuracy of these algorithms based on patent classification performance. In a detailed discussion, we report the performance of the top three algorithms at the section, class, and subclass levels. The results based on the first claims of patents indicate that Patent BERT (BERT-for-Patents) and TF-IDF-weighted word embeddings have the best accuracy for calculating sentence embeddings at the subclass level. The performance of the models varies across different classification levels and sections, which suggests that researchers in patent analysis can leverage the results of this study to select the most appropriate model based on the specific sections of patent data they use.
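A sketch of the underlying similarity computation: embed the first claims of two patents and compare them by cosine similarity. The MiniLM checkpoint and the claims below are generic stand-ins, not models or data ranked in the study:

```python
# Technological similarity proxy: cosine similarity of claim embeddings.
# Checkpoint and claim texts are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
claim_a = "A battery electrode comprising a lithium compound ..."
claim_b = "An electrode for a rechargeable lithium cell ..."
emb = model.encode([claim_a, claim_b], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]).item())   # higher => more similar patents
```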
A growing body of research in natural language processing (NLP) and natural language understanding (NLU) is investigating human-like knowledge learned or encoded in the word embeddings of large language models. This is a step towards understanding what knowledge language models capture that resembles the human understanding of language and communication. Here, we investigate the affect of words (i.e., valence, arousal, dominance) and how it is encoded in word embeddings pre-trained in large neural networks. We use human-labeled datasets as ground truth and perform a variety of correlational and classification tests on four types of word embeddings. The embeddings vary in whether they are static or contextualized, and in how much affect-specific information is prioritized during the pre-training and fine-tuning phases. Our analyses show that word embeddings from the vanilla BERT model do not saliently encode the affect information of English words. Only when the BERT model is fine-tuned on emotion-related tasks, or when extra contextualized information from emotion-rich contexts is included, do the corresponding embeddings encode more relevant affect information.
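One of the correlational tests can be sketched as a regression from word vectors to human valence ratings, checking the held-out Pearson correlation; the ratings and vectors below are tiny placeholders (real affect lexicons such as NRC-VAD rate thousands of words):

```python
# Sketch of a correlational test: does a linear map from embeddings
# recover human valence ratings? Data below are placeholders.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
ratings = {"joy": 0.98, "delight": 0.95, "table": 0.50,
           "chair": 0.48, "grief": 0.06, "horror": 0.04}
vectors = {w: rng.standard_normal(300) for w in ratings}  # stand-in embeddings

words = list(ratings)
X = np.stack([vectors[w] for w in words])
y = np.array([ratings[w] for w in words])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
reg = Ridge().fit(Xtr, ytr)
r, _ = pearsonr(yte, reg.predict(Xte))   # high r => valence is linearly encoded
```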
We propose a new unsupervised method for lexical substitution using pre-trained language models. Compared with previous approaches that use the generative capability of language models to predict substitutes, our method retrieves substitutes based on the similarity of contextualized and decontextualized word embeddings, i.e., the average contextual representations of a word in multiple contexts. We conduct experiments in English and Italian, and show that our method substantially outperforms strong baselines and establishes a new state of the art without any explicit supervision or fine-tuning. We further show that our method performs particularly well at predicting low-frequency substitutes, and also generates a diverse list of substitute candidates, reducing morphophonetic or morphosyntactic biases induced by article-noun agreement.
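A sketch of the retrieval idea under toy assumptions (candidate list, sample sentences, and checkpoint are ours): rank candidates by cosine between the target's contextualized vector and each candidate's decontextualized vector, i.e., its contextual vectors averaged over sample sentences:

```python
# Similarity-based substitute retrieval under toy assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def ctx_vec(sentence, word):
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    for pos, tid in enumerate(enc["input_ids"][0].tolist()):
        if tok.decode([tid]).strip() == word:
            return hidden[pos]
    raise ValueError(word)

def decontextualized(word, sample_sentences):
    """Average a word's contextual vectors over several contexts."""
    return torch.stack([ctx_vec(s, word) for s in sample_sentences]).mean(0)

target = ctx_vec("the bright kid solved it quickly", "bright")
candidates = {
    "smart": decontextualized("smart", ["a smart move", "she is very smart"]),
    "shiny": decontextualized("shiny", ["a shiny coin", "the shiny new car"]),
}
ranked = sorted(candidates, reverse=True,
                key=lambda w: torch.cosine_similarity(target, candidates[w], dim=0).item())
print(ranked)   # 'smart' should outrank 'shiny' for this context
```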
Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as "fill-in-the-blank" cloze statements. Language models have many advantages over structured knowledge bases: they require no schema engineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answering against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to recall factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at https://github.com/facebookresearch/LAMA.
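The cloze-style querying is easy to reproduce with off-the-shelf tooling; a minimal sketch using the Hugging Face fill-mask pipeline with the paper's Dante prompt (the checkpoint choice is ours):

```python
# Cloze-style relational probing with a fill-mask pipeline.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-cased")
for pred in fill("Dante was born in [MASK]."):
    print(f'{pred["token_str"]:>12}  {pred["score"]:.3f}')  # top completions
```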
Contextualized word embeddings in language models have brought great advances to NLP. Intuitively, sentential information is integrated into the representation of words, which can help with the modeling of polysemous words. However, context sensitivity also leads to variance in representations, which may break the semantic consistency of synonyms. We quantify how much the contextualized embeddings of each word sense vary across contexts in typical pre-trained models. The results show that contextualized embeddings can be highly consistent across contexts. In addition, part of speech, the number of word senses, and sentence length all have an influence on the variance of sense representations. Interestingly, we find that word representations are position-biased: the first words in different contexts tend to be more similar. We analyze this phenomenon and also propose a simple way to alleviate this bias in distance-based word sense disambiguation settings.
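The variance measurement can be sketched as a self-similarity score: the average pairwise cosine similarity of one word's contextual vectors across sentences sharing a sense (toy sentences, assumed checkpoint):

```python
# Self-similarity of a word sense: mean off-diagonal pairwise cosine
# similarity of its contextual vectors; lower => more contextual variance.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def ctx_vec(sentence, word):
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    for pos, tid in enumerate(enc["input_ids"][0].tolist()):
        if tok.decode([tid]).strip() == word:
            return hidden[pos]
    raise ValueError(word)

river_sense = ["they walked along the river bank",
               "the bank of the stream was muddy",
               "fish hid under the bank of the river"]
vecs = torch.stack([ctx_vec(s, "bank") for s in river_sense])
sims = torch.cosine_similarity(vecs.unsqueeze(1), vecs.unsqueeze(0), dim=-1)
n = vecs.shape[0]
self_similarity = (sims.sum() - n) / (n * (n - 1))   # exclude the diagonal 1s
```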
There is an ongoing debate in the NLP community on whether modern language models contain linguistic knowledge, recovered through so-called probes. In this paper, we study whether linguistic knowledge is a necessary condition for the good performance of modern language models, which we call the \textit{rediscovery hypothesis}. In the first place, we show that language models that are significantly compressed but perform well on their pretraining objective retain good scores when probed for linguistic structures. This result supports the rediscovery hypothesis and leads us to the second contribution of our paper: an information-theoretic framework that relates language modeling objectives with linguistic information. This framework also provides a metric to measure the impact of linguistic information on the word prediction task. We reinforce our analytical results with various experiments, both on synthetic and on real NLP tasks in English.
News articles both shape and reflect public opinion across the political spectrum. Analyzing them for social bias can thus provide valuable insights, such as prevailing stereotypes in society and the media, which are often adopted by NLP models trained on respective data. Recent work has relied on word embedding bias measures, such as WEAT. However, several representation issues of embeddings can harm the measures' accuracy, including low-resource settings and token frequency differences. In this work, we study what kind of embedding algorithm serves best to accurately measure types of social bias known to exist in US online news articles. To cover the whole spectrum of political bias in the US, we collect 500k articles and review psychology literature with respect to expected social bias. We then quantify social bias using WEAT along with embedding algorithms that account for the aforementioned issues. We compare how models trained with the algorithms on news articles represent the expected social bias. Our results suggest that the standard way to quantify bias does not align well with knowledge from psychology. While the proposed algorithms reduce the gap, they still do not fully match the literature.
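WEAT itself is compact enough to sketch; below is a hedged NumPy version of its effect size over target sets X, Y and attribute sets A, B, with random vectors standing in for trained embeddings:

```python
# WEAT (Caliskan et al., 2017): association s(w, A, B) and effect size d.
# The random matrices are placeholders for real word vectors.
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def s(w, A, B):
    """Differential association of word w with attribute sets A and B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    sx = [s(x, A, B) for x in X]
    sy = [s(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

rng = np.random.default_rng(0)
X, Y, A, B = (rng.standard_normal((8, 300)) for _ in range(4))
print(weat_effect_size(X, Y, A, B))   # ~0 for unbiased random vectors
```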
People constantly use language to learn about the world. Computational linguists have capitalized on this fact to build large language models (LLMs) that acquire co-occurrence-based knowledge from language corpora. LLMs achieve impressive performance on many tasks, but the robustness of their world knowledge has been questioned. Here, we ask: do LLMs acquire generalized knowledge about real-world events? Using curated sets of minimal sentence pairs (n=1215), we tested whether LLMs are more likely to generate plausible event descriptions compared to their implausible counterparts. We found that LLMs systematically distinguish possible and impossible events (The teacher bought the laptop vs. The laptop bought the teacher) but fall short of human performance when distinguishing likely and unlikely events (The nanny tutored the boy vs. The boy tutored the nanny). In follow-up analyses, we show that (i) LLM scores are driven by both plausibility and surface-level sentence features, (ii) LLMs generalize well across syntactic sentence variants (active vs. passive) but less well across semantic sentence variants (synonymous sentences), (iii) some, but not all, LLM deviations from ground-truth labels align with crowdsourced human judgments, and (iv) explicit event plausibility information emerges in middle LLM layers and remains high thereafter. Overall, our analyses reveal a gap in LLMs' event knowledge, highlighting their limitations as generalized knowledge bases. We conclude by speculating that the differential performance on impossible vs. unlikely events is not a temporary setback but an inherent property of LLMs, reflecting a fundamental difference between linguistic knowledge and world knowledge in intelligent systems.
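A minimal sketch of the minimal-pair test, scoring each sentence by its total GPT-2 log-probability; GPT-2 and this scoring scheme are stand-ins for the models and metrics the paper evaluates:

```python
# Score each sentence of a minimal pair by its total LM log-probability.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def logprob(sentence):
    ids = tok(sentence, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        out = model(ids, labels=ids)           # loss = mean NLL over n-1 tokens
    return -out.loss.item() * (ids.shape[1] - 1)

plausible = "The teacher bought the laptop."
implausible = "The laptop bought the teacher."
print(logprob(plausible) > logprob(implausible))   # expected: True
```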
We use paraphrases as a unique source of data to analyze contextualized embeddings, with a particular focus on BERT. Because paraphrases naturally encode consistent word and phrase semantics, they provide a unique lens for investigating properties of embeddings. Using the alignments of the Paraphrase Database, we study words within paraphrases as well as phrase representations. We find that contextual embeddings effectively handle polysemous words, but in many cases give synonyms surprisingly different representations. We confirm previous findings that BERT is sensitive to word order, but find a slightly different pattern from prior work in terms of the level of contextualization across BERT's layers.
In the paper, we test two different approaches to the unsupervised word sense disambiguation task for Polish. In both methods, we use neural language models to predict words similar to those being disambiguated, and on the basis of these words we predict the partition of word senses in different ways. In the first method, we cluster selected similar words, while in the second, we cluster vectors representing their subsets. The evaluation was carried out on texts annotated with plWordNet senses and provides relatively good results (F1 = 0.68 for all ambiguous words). The results are significantly better than those obtained by the unsupervised approach with neural models of \cite{waw:myk:17:sense}, and are at the level of the supervised approach presented there. The proposed methods may be a way of solving the word sense disambiguation problem for languages that lack sense-annotated data.
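A rough sketch in the spirit of the first variant, with English toy contexts and a multilingual checkpoint standing in for the Polish setup: mask the ambiguous word, collect the model's top substitutes per occurrence, and cluster the occurrences into induced senses:

```python
# Substitute-based sense induction sketch; contexts and checkpoint are
# illustrative stand-ins for the Polish data and models used in the paper.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer
from transformers import pipeline

fill = pipeline("fill-mask", model="xlm-roberta-base")

contexts = [
    "She sat on the <mask> of the river.",    # occurrence 1 of the ambiguous word
    "He opened an account at the <mask>.",    # occurrence 2
]
# represent each occurrence by its space-joined top-k predicted substitutes
subs = [" ".join(p["token_str"].strip() for p in fill(c, top_k=10))
        for c in contexts]
X = CountVectorizer().fit_transform(subs)
senses = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```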
Concrete/abstract words are used in a growing number of psychological and neurophysiological studies. For several languages, large dictionaries have been created manually. This is a very time-consuming and costly process. To generate large high-quality dictionaries of concrete/abstract words automatically, expert assessments obtained on smaller samples need to be extrapolated. The research question that arises is how small such samples can be while still allowing reliable extrapolation. In this paper, we present a method for automatically ranking words by their concreteness, and propose an approach that significantly reduces the required volume of expert assessments. The method has been evaluated on a large test set for English. The quality of the constructed dictionaries is comparable to that of the expert-built ones, and the correlation between predictions and expert ratings is higher than in state-of-the-art methods.
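One way to sketch the extrapolation step: score unrated words from the embedding-space neighbours of a small expert-rated seed set (ratings and vectors below are placeholders; real lexicons rate thousands of words):

```python
# Extrapolating concreteness from a small expert-rated seed via k-NN in
# embedding space. Seed ratings and vectors are placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
seed = {"stone": 4.9, "dog": 4.8, "freedom": 1.4, "idea": 1.2}  # expert ratings
vec = {w: rng.standard_normal(300) for w in [*seed, "pebble", "justice"]}

knn = KNeighborsRegressor(n_neighbors=2, weights="distance")
knn.fit(np.stack([vec[w] for w in seed]), np.array(list(seed.values())))
for w in ["pebble", "justice"]:
    print(w, knn.predict(vec[w][None])[0])   # extrapolated concreteness score
```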
It is well known that typical word embedding methods such as Word2Vec and GloVe have the property that the meaning can be composed by adding up the embeddings (additive compositionality). Several theories have been proposed to explain additive compositionality, but the following questions remain unanswered: (Q1) the assumptions of those theories do not hold for practical word embeddings, and (Q2) while ordinary additive compositionality can be seen as an AND operation of word meanings, it is not well understood how other operations, such as OR and NOT, can be computed by the embeddings. We address these issues with the idea of frequency-weighted centering at its core. This paper proposes a post-processing method for bridging the gap between practical word embeddings and the assumptions of theories about additive compositionality as an answer to (Q1). It also gives a method for taking the OR or NOT of meanings by linear operations on word embeddings as an answer to (Q2). Moreover, we confirm experimentally that the accuracy of the AND operation, i.e., ordinary additive compositionality, can be improved by our post-processing method (a 3.5x improvement in top-100 accuracy) and that the OR and NOT operations can be performed correctly.
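A sketch of the core post-processing idea under placeholder data: subtract the frequency-weighted mean vector from every embedding, after which additive (AND) composition behaves as the theory assumes; the paper's OR and NOT constructions are further linear operations not reproduced here:

```python
# Frequency-weighted centering over a placeholder embedding matrix V with
# placeholder unigram counts; AND composition is then plain vector addition.
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((1000, 100))           # placeholder (vocab, dim) matrix
freq = rng.integers(1, 10_000, size=1000)      # placeholder word frequencies

p = freq / freq.sum()                          # unigram distribution
V_centered = V - p @ V                         # subtract frequency-weighted mean

def and_op(i, j):
    """AND of word meanings i and j as additive composition (post-centering)."""
    return V_centered[i] + V_centered[j]
```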
Deep Learning and Machine Learning based models have become extremely popular in text processing and information retrieval. However, the non-linear structures present inside the networks make these models largely inscrutable. A significant body of research has focused on increasing the transparency of these models. This article provides a broad overview of research on the explainability and interpretability of natural language processing and information retrieval methods. More specifically, we survey approaches that have been applied to explain word embeddings, sequence modeling, attention modules, transformers, BERT, and document ranking. The concluding section suggests some possible directions for future research on this topic.