In this paper, we present a novel approach for generating low-dimensional vector embeddings of the noun and verb synsets in WordNet, such that the hypernymy-hyponymy relation is preserved in the embeddings. We call these embeddings sense spectra (and each individual embedding a sense spectrum). To create labels suitable for training sense spectra, we design a new similarity measure for noun and verb synsets in WordNet. We call this similarity measure hypernym intersection similarity (HIS), since it compares the common and unique hypernyms of two synsets. Our experiments show that HIS outperforms three similarity measures in WordNet on the noun and verb pairs of the SimLex-999 dataset. Moreover, to the best of our knowledge, sense spectra provide the first dense synset embeddings that preserve the semantic relations of WordNet.
Translated by Google Translate
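The abstract does not spell out the HIS formula; as a minimal sketch, assuming a Jaccard-style combination in which shared hypernyms raise the score and unique hypernyms lower it, the comparison could look like this (the toy hypernym sets are illustrative, not real WordNet closures):

```python
def his_similarity(hypernyms_a, hypernyms_b):
    """Hypothetical sketch of a hypernym-intersection similarity:
    common hypernyms raise the score, unique ones lower it."""
    common = hypernyms_a & hypernyms_b
    unique = (hypernyms_a | hypernyms_b) - common
    return len(common) / (len(common) + len(unique))

# Toy hypernym closures (illustrative only, not real WordNet output).
dog = {"canine", "carnivore", "mammal", "animal", "entity"}
cat = {"feline", "carnivore", "mammal", "animal", "entity"}
print(his_similarity(dog, cat))  # 4 shared / (4 shared + 2 unique)
```

In a real implementation the sets would be the transitive hypernym closures of two synsets, e.g. as returned by a WordNet API.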
Natural Language Understanding has seen an increasing number of publications in the last few years, especially after robust word embedding models became prominent and proved themselves able to capture and represent semantic relationships from massive amounts of data. Nevertheless, traditional models often fall short on intrinsic issues of linguistics, such as polysemy and homonymy. Any expert system that makes use of natural language at its core can be affected by a weak semantic representation of text, resulting in inaccurate outcomes based on poor decisions. To mitigate such issues, we propose a novel approach called Most Suitable Sense Annotation (MSSA), which disambiguates and annotates each word by its specific sense, considering the semantic effects of its context. Our approach brings three main contributions to the semantic representation scenario: (i) an unsupervised technique that disambiguates and annotates words by their senses, (ii) a multi-sense embeddings model that can be extended to any traditional word embeddings algorithm, and (iii) a recurrent methodology that allows our models to be re-used and their representations refined. We test our approach on six different benchmarks for the word similarity task, showing that it produces state-of-the-art results and outperforms several more complex state-of-the-art systems.
Proponents of the Distributed Morphology framework have proposed two levels of word formation: a low level, which results in a loose input-output semantic relationship, and a high level, which results in a tight input-output semantic relationship. In this work, we propose to test the validity of this hypothesis in the context of Hebrew word embeddings. If the two-level hypothesis is borne out, we expect state-of-the-art Hebrew word embeddings to encode (1) a noun, (2) a denominal derived from it (via a high-level operation), and (3) a verb related to the noun (via a low-level operation on the noun's root), such that the denominal (2) is closer to the noun (1) in the embedding space than the related verb (3) is. We report that this hypothesis is verified by four embedding models of Hebrew: fastText, GloVe, Word2Vec, and AlephBERT. This suggests that word embedding models are able to capture complex and fine-grained semantic properties that are morphologically motivated.
Idioms are unlike most phrases. First, the words in an idiom have non-canonical meanings. Second, the non-canonical meanings of the words in an idiom are contingent on the presence of the other words in the idiom. Linguistic theories differ on whether these two properties depend on one another, as well as on whether special theoretical machinery is needed to accommodate idioms. We define two measures that correspond to the two properties above and implement them using BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019). We show that idioms fall at the expected intersection of the two dimensions, but that the dimensions themselves are not correlated. Our results suggest that special machinery for handling idioms may not be warranted.
Transformer-based language models have recently achieved remarkable results on many natural language tasks. However, leaderboard performance is typically achieved by leveraging massive amounts of training data, and rarely by encoding explicit linguistic knowledge into neural models. This has led many to question the relevance of linguistics for modern natural language processing. In this thesis, I present several case studies to illustrate that theoretical linguistics and neural language models remain relevant to each other. First, language models are useful to linguists by providing an objective tool for measuring semantic distance, which is difficult to do using traditional methods. On the other hand, linguistic theory contributes to language modeling research by providing frameworks and sources of data for probing language models for specific aspects of language understanding. This thesis contributes three studies that explore different aspects of the syntax-semantics interface in language models. In the first part of the thesis, I apply language models to the problem of word class flexibility. Using mBERT as a source of semantic distance measurements, I present evidence in favor of analyzing word class flexibility as a directional process. In the second part of the thesis, I propose a method for measuring surprisal at intermediate layers of language models. My experiments show that sentences containing morphosyntactic anomalies trigger surprisal earlier in language models than semantic and commonsense anomalies do. Finally, in the third part of the thesis, I adapt several psycholinguistic studies to show that language models contain knowledge of argument structure constructions. In summary, my thesis develops new connections between natural language processing, linguistic theory, and psycholinguistics to provide fresh perspectives on the interpretation of language models.
The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to "debias" the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
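The geometric step described above, removing a word vector's component along a gender direction, can be sketched as a simple projection. The `gender_dir` vector below is a made-up toy direction; the paper derives the actual direction from gender-definitional word pairs:

```python
import numpy as np

def remove_gender_component(vec, gender_direction):
    # Project vec onto the unit gender direction and subtract that component.
    g = gender_direction / np.linalg.norm(gender_direction)
    return vec - np.dot(vec, g) * g

# Toy 3-d example: a hypothetical "receptionist" vector with a gender component.
gender_dir = np.array([1.0, 0.0, 0.0])
receptionist = np.array([0.4, 0.2, 0.7])
neutralized = remove_gender_component(receptionist, gender_dir)
print(neutralized)  # the component along gender_dir is now zero
```

The full method also equalizes definitional pairs around the neutralized words; this sketch shows only the neutralization step.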
The relationship between words in a sentence often tells us more about the underlying semantic content of a document than its actual words, individually. In this work, we propose two novel algorithms, called Flexible Lexical Chain II and Fixed Lexical Chain II. These algorithms combine the semantic relations derived from lexical chains, prior knowledge from lexical databases, and the robustness of the distributional hypothesis in word embeddings as building blocks forming a single system. In short, our approach has three main contributions: (i) a set of techniques that fully integrate word embeddings and lexical chains; (ii) a more robust semantic representation that considers the latent relation between words in a document; and (iii) lightweight word embeddings models that can be extended to any natural language task. We intend to assess the knowledge of pre-trained models to evaluate their robustness in the document classification task. The proposed techniques are tested against seven word embeddings algorithms using five different machine learning classifiers over six scenarios in the document classification task. Our results show that the integration of lexical chains and word embedding representations sustains state-of-the-art results, even against more complex systems.
In the decade since 2010, successes in artificial intelligence have been at the forefront of computer science and technology, and vector space models have solidified a position at the forefront of artificial intelligence. At the same time, quantum computers have become much more powerful, and announcements of major advances are frequently in the news. The mathematical techniques underlying these areas have more in common than is sometimes realized. Vector spaces took a position at the axiomatic heart of quantum mechanics in the 1930s, and this adoption was a key motivation for the derivation of logic and probability from the linear geometry of vector spaces. Quantum interactions between particles are modeled using the tensor product, which is also used to express objects and operations in artificial neural networks. This paper describes some of these common mathematical areas, including examples of how they are used in artificial intelligence (AI), particularly in automated reasoning and natural language processing (NLP). Techniques discussed include vector spaces, scalar products, subspaces and implication, orthogonal projection and negation, dual vectors, density matrices, positive operators, and tensor products. Application areas include information retrieval, categorization and implication, modeling word senses and disambiguation, inference in knowledge bases, and semantic composition. Some of these approaches can potentially be implemented on quantum hardware. Many of the practical steps in this implementation are at early stages, and some are already realized. Explaining some of the common mathematical tools can help researchers in both AI and quantum computing further exploit these overlaps, recognizing and exploring new directions along the way.
We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including: part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.
Current language models can generate high-quality text. Are they simply copying text they have seen before, or have they learned generalizable linguistic abstractions? To tease apart these possibilities, we introduce RAVEN, a suite of analyses for assessing the novelty of generated text, focusing on sequential structure (n-grams) and syntactic structure. We apply these analyses to four neural language models (an LSTM, a Transformer, Transformer-XL, and GPT-2). For local structure (e.g., individual dependencies), model-generated text is significantly less novel than our baseline of human-generated text from each model's test set. For larger-scale structure (e.g., overall sentence structure), model-generated text is as novel as or even more novel than the human-generated baseline, but models still sometimes copy substantially, in some cases duplicating passages over 1,000 words long from the training set. We also perform extensive manual analysis, showing that GPT-2's novel text is usually well-formed morphologically and syntactically but has fairly frequent semantic issues (e.g., being self-contradictory).
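An n-gram novelty check in the spirit of this analysis (a simplified sketch, not RAVEN's exact metric) can be written as the fraction of generated n-grams that never occur in the training text:

```python
def ngrams(tokens, n):
    # All contiguous n-token windows, as a set of tuples.
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_novelty(generated, training, n):
    # Fraction of generated n-grams that do not appear in the training text.
    gen = ngrams(generated, n)
    return len(gen - ngrams(training, n)) / len(gen) if gen else 0.0

# Toy texts for illustration.
generated = "the cat sat on the mat".split()
training = "the cat sat by the door".split()
print(ngram_novelty(generated, training, 2))  # 3 of 5 bigrams are novel
```

Running this over a range of n values separates local copying (low novelty at small n) from larger-scale copying (low novelty at large n).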
Deep Learning and Machine Learning based models have become extremely popular in text processing and information retrieval. However, the non-linear structures present inside the networks make these models largely inscrutable. A significant body of research has focused on increasing the transparency of these models. This article provides a broad overview of research on the explainability and interpretability of natural language processing and information retrieval methods. More specifically, we survey approaches that have been applied to explain word embeddings, sequence modeling, attention modules, transformers, BERT, and document ranking. The concluding section suggests some possible directions for future research on this topic.
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in
We present a qualitative analysis of the (possibly erroneous) outputs of contextualized embedding-based methods for detecting diachronic semantic change. First, we introduce an ensemble method that outperforms previously described contextualized approaches. This method is used as the basis for an in-depth analysis of the degree of semantic change predicted for English words across five decades. Our findings show that contextualized methods can often predict high change scores for words which have not undergone any actual diachronic semantic shift in the lexicographic sense of the term (or at least the status of these shifts is questionable). Such challenging cases are discussed in detail, and a linguistic categorization of them is proposed. We conclude that pre-trained contextualized language models are prone to confound changes in lexicographic senses with changes in contextual variance, which naturally stems from their distributional nature but differs from the types of issues observed in methods based on static embeddings. Additionally, they often merge together the syntactic and semantic aspects of lexical entities. We propose a range of possible future solutions to these issues.
It is well known that typical word embedding methods such as Word2Vec and GloVe have the property that meaning can be composed by adding up the embeddings (additive compositionality). Several theories have been proposed to explain additive compositionality, but the following questions remain unanswered: (Q1) the assumptions of those theories do not hold for practical word embeddings; (Q2) ordinary additive compositionality can be seen as an AND operation on word meanings, but it is not well understood how other operations, such as OR and NOT, can be computed by the embeddings. We address these issues with a method whose core idea is frequency-weighted centering. This paper proposes a post-processing method that bridges the gap between practical word embeddings and the theoretical assumptions behind additive compositionality, as an answer to (Q1). It also gives a method for taking the OR or NOT of meanings via linear operations on word embeddings, as an answer to (Q2). Moreover, we confirm experimentally that the accuracy of the AND operation, i.e., ordinary additive compositionality, can be improved by our post-processing method (a 3.5x improvement in top-100 accuracy) and that the OR and NOT operations can be performed correctly.
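A minimal sketch of frequency-weighted centering, assuming hypothetical word frequencies (the paper's exact post-processing may differ in detail): each embedding row has the frequency-weighted mean vector subtracted from it, so the weighted mean of the centered embeddings becomes zero.

```python
import numpy as np

def frequency_weighted_center(E, freqs):
    # Subtract the word-frequency-weighted mean vector from every embedding row.
    p = freqs / freqs.sum()   # normalize frequencies into weights
    return E - p @ E          # p @ E is the weighted mean vector

# Toy embedding matrix (4 words x 3 dims) with made-up frequencies.
E = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [2.0, 2.0, 0.0],
              [1.0, 1.0, 1.0]])
freqs = np.array([10.0, 5.0, 1.0, 4.0])
C = frequency_weighted_center(E, freqs)
p = freqs / freqs.sum()
print(p @ C)  # the weighted mean of the centered embeddings is ~0
```

After centering, composing meanings by addition (the AND operation) operates on vectors whose frequency-weighted mean is zero, which is the property the theory assumes.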
Representations in large language models contain multiple types of gender information. We focus on two types of such signals in English texts: factual gender information, which is a grammatical or semantic property, and gender bias, which is a correlation between a word and a specific gender. We can disentangle the model's embeddings and identify components that encode each type of information. Our goal is to reduce stereotypical bias in the representations while preserving the factual gender signal. Our filtering method shows that it is possible to reduce the bias of gender-neutral profession names without serious deterioration of language modeling capability. These findings can be applied to language generation to mitigate reliance on stereotypes while preserving gender agreement in core aspects.
Grammatical cues are sometimes redundant with word meanings in natural language. For instance, English word order rules constrain the word order of a sentence like "The dog chewed the bone," even though the status of "dog" as agent and "bone" as patient can be inferred from world knowledge and plausibility. Quantifying how often this kind of redundancy occurs, and how the level of redundancy varies across typologically diverse languages, can shed light on the function and evolution of grammar. To that end, we conducted a behavioral experiment in English and Russian and a cross-linguistic computational analysis measuring the redundancy of grammatical cues in transitive clauses extracted from naturalistic text. Subjects, verbs, and objects extracted from naturally occurring sentences (presented in random order and with morphological markers removed) were shown to English and Russian speakers (n = 484), who were asked to identify which noun was the agent of the action. Accuracy was high in both languages (about 89% in English and 87% in Russian). Next, we trained a neural network classifier on a similar task: predicting which nominal in a subject-verb-object triad is the subject. Across 30 languages from eight language families, performance was consistently high: a median accuracy of 87%, comparable to the accuracy observed in the human experiments. We conclude that grammatical cues such as word order are necessary for conveying agenthood and patienthood in only about 10-15% of naturalistic sentences; nevertheless, they can (a) provide an important source of redundancy and (b) be crucial for conveying intended meanings that cannot be inferred from the words alone, including descriptions of human interactions, where roles are often reversible (e.g., Ray helped Lu / Lu helped Ray), and expressions of non-prototypical meanings (e.g., "The bone chewed the dog.").
Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks. Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline. We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena. We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline.
We propose a model for metaphor interpretation that uses word embeddings trained on a relatively large corpus. Our system handles nominal metaphors, such as "time is money." It generates a ranked list of potential interpretations for a given metaphor. Candidate meanings are drawn from collocations of the topic ("time") and vehicle ("money") components, extracted from a dependency-parsed corpus. We explore adding candidates derived from word association norms (common human responses to cues). Our ranking procedure considers the similarity between candidate interpretations and the metaphor components, measured in a semantic vector space. Finally, a clustering algorithm removes semantically related duplicates, thereby allowing other candidate interpretations to attain higher ranks. We evaluate on different sets of annotated metaphors, with encouraging preliminary results.
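The similarity-based ranking step can be sketched with toy vectors; the candidate words and 2-d embeddings below are hypothetical, whereas the real system draws candidates from parsed corpora and association norms and uses high-dimensional trained embeddings:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def rank_interpretations(topic, vehicle, candidates):
    # Score each candidate by its average cosine similarity
    # to the topic vector and the vehicle vector.
    scored = [(word, (cosine(v, topic) + cosine(v, vehicle)) / 2)
              for word, v in candidates.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Hypothetical 2-d vectors for "time is money" (illustrative only).
topic = np.array([1.0, 0.0])     # "time"
vehicle = np.array([0.0, 1.0])   # "money"
candidates = {"valuable": np.array([1.0, 1.0]),
              "liquid": np.array([1.0, -1.0])}
ranking = rank_interpretations(topic, vehicle, candidates)
print(ranking[0][0])  # "valuable" scores highest, being close to both
```

A candidate close to both components ("valuable") outranks one close to only the topic ("liquid"), which mirrors the intuition that a good interpretation relates to both halves of the metaphor.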
Observing that for certain NLP tasks, such as semantic role prediction or thematic fit estimation, random embeddings perform as well as pretrained embeddings, we explore which settings allow for this and examine where most of the learning is encoded: in the word embeddings, the semantic role embeddings, or "the network." We find nuanced answers, depending on the task and its relation to the training objective. We examine these representation learning aspects in multi-task learning, where role prediction and role filling are supervised tasks, while several thematic fit tasks lie outside the model's direct supervision. We observe a non-monotonic relation between quality scores on some tasks and training data size. To better understand this observation, we analyze these results using simplified per-verb versions of the tasks.
As the deployment of natural language processing (NLP) in daily life expands, social biases inherited by NLP models have become more severe and problematic. Previous studies have shown that word embeddings trained on human-generated corpora exhibit strong gender bias that can produce discriminatory results in downstream tasks. Previous debiasing methods focus mainly on modeling the bias and only implicitly consider semantic information, while completely overlooking the complex underlying causal structure among the bias and semantic components. To address these issues, we propose a novel methodology that leverages a causal inference framework to effectively remove gender bias. The proposed method allows us to construct and analyze the complex causal mechanisms facilitating gender information flow, while retaining oracle semantic information within word embeddings. Our comprehensive experiments show that the proposed method achieves state-of-the-art results in gender debiasing tasks. In addition, our methods yield better performance in word similarity evaluation and various extrinsic downstream NLP tasks.