Concrete/abstract words are used in a growing number of psychological and neurophysiological studies. For several languages, large dictionaries have been created manually, which is a very time-consuming and costly process. To generate large, high-quality dictionaries of concrete/abstract words automatically, expert assessments obtained on smaller samples need to be extrapolated. The research question that arises is how small such a sample can be while still permitting reliable extrapolation. In this paper, we present a method for automatically ranking words by their concreteness and propose an approach that significantly reduces the amount of expert assessment required. The method was evaluated on a large English test set. The quality of the constructed dictionaries is comparable to that of expert-built ones, and the correlation between predictions and expert ratings is higher than for state-of-the-art methods.
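A minimal sketch of one common way such extrapolation can be done, assuming (as an illustration, not the paper's stated method) that words are represented by pre-trained embeddings and a ridge regressor is fit on the small expert-rated sample:

```python
# Extrapolating concreteness ratings from a small expert-rated sample.
# Assumption for illustration: pre-trained word embeddings + ridge regression.
import numpy as np
from sklearn.linear_model import Ridge

def fit_concreteness(embeddings: dict, expert_ratings: dict, alpha: float = 1.0):
    """embeddings: word -> np.ndarray; expert_ratings: word -> float (expert score)."""
    words = [w for w in expert_ratings if w in embeddings]
    X = np.stack([embeddings[w] for w in words])
    y = np.array([expert_ratings[w] for w in words])
    return Ridge(alpha=alpha).fit(X, y)

def extrapolate(model, embeddings: dict, vocabulary):
    """Predict concreteness for unrated words to build the large dictionary."""
    return {w: float(model.predict(embeddings[w].reshape(1, -1))[0])
            for w in vocabulary if w in embeddings}
```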
In scientific research, the method is an indispensable means for solving scientific problems and a critical object of study in its own right. With the development of science, a large number of scientific methods are being proposed, modified, and used. Authors describe the details of a method in the abstract and body of a paper, and the key entities in academic literature that reflect a method's name are called method entities. Exploring the various method entities in large volumes of academic literature helps scholars understand existing methods, select appropriate methods for their research tasks, and propose new ones. Moreover, the evolution of method entities can reveal the development of a discipline and facilitate knowledge discovery. This paper therefore provides a systematic review of methodological and empirical work, focusing on the extraction of method entities from full-text academic literature and on efforts to build knowledge services from the extracted entities. Definitions of the key concepts involved in this review are given first. Based on these definitions, we systematically review the approaches and indicators used to extract and evaluate method entities, focusing on the pros and cons of each approach. We also survey how the extracted method entities are used to build new applications. Finally, the limitations of existing work and potential next steps are discussed.
Natural Language Understanding has seen an increasing number of publications in the last few years, especially after robust word embeddings models became prominent, when they proved themselves able to capture and represent semantic relationships from massive amounts of data. Nevertheless, traditional models often fall short on intrinsic linguistic issues such as polysemy and homonymy. Any expert system that makes use of natural language at its core can be affected by a weak semantic representation of text, resulting in inaccurate outcomes based on poor decisions. To mitigate such issues, we propose a novel approach called Most Suitable Sense Annotation (MSSA), which disambiguates and annotates each word by its specific sense, considering the semantic effects of its context. Our approach brings three main contributions to the semantic representation scenario: (i) an unsupervised technique that disambiguates and annotates words by their senses, (ii) a multi-sense embeddings model that can be extended to any traditional word embeddings algorithm, and (iii) a recurrent methodology that allows our models to be re-used and their representations refined. We test our approach on six different benchmarks for the word similarity task, showing that it produces state-of-the-art results and outperforms several more complex state-of-the-art systems.
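A hedged sketch of the general recipe behind multi-sense embeddings: annotate each token with a sense, then train a standard word-embedding model on the annotated corpus. The simplified Lesk-style overlap below is a stand-in for illustration, not the authors' MSSA disambiguation procedure:

```python
# Sense-annotate tokens, then train word2vec so each sense gets its own vector.
# Requires: nltk.download("wordnet")
from nltk.corpus import wordnet as wn
from gensim.models import Word2Vec

def annotate_token(word, context_words):
    """Pick the synset whose gloss overlaps most with the context (simplified Lesk)."""
    best, best_overlap = None, -1
    for synset in wn.synsets(word):
        overlap = len(set(synset.definition().lower().split()) & set(context_words))
        if overlap > best_overlap:
            best, best_overlap = synset, overlap
    return f"{word}#{best.name()}" if best else word

def sense_annotate(sentences):
    """Replace each token by a word#sense label so senses receive separate vectors."""
    return [[annotate_token(w, sent) for w in sent] for sent in sentences]

corpus = [["the", "bank", "approved", "the", "loan"],
          ["we", "sat", "on", "the", "river", "bank"]]
model = Word2Vec(sense_annotate(corpus), vector_size=50, min_count=1, epochs=20)
```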
We study the effectiveness of Feature Density (FD), computed with different linguistically-backed feature preprocessing methods, for estimating dataset complexity, which in turn is used to compare the potential performance of machine learning (ML) classifiers prior to any training. We hypothesize that estimating dataset complexity allows the number of required experimental iterations to be reduced. This makes it possible to optimize the resource-intensive training of ML models, which is becoming a serious issue due to the growth of available dataset sizes and the ever-increasing popularity of models based on deep neural networks (DNNs). The problem of a constantly growing demand for more powerful computational resources also affects the environment, given the startling CO2 emissions caused by training large-scale ML models. The study was conducted on multiple datasets, including popular ones such as the Yelp business review dataset used for training typical sentiment analysis models, as well as more recent datasets that attempt to tackle the problem of cyberbullying, which is both a serious social problem and a more complex problem from the point of view of linguistic representation. We use cyberbullying datasets collected for multiple languages, namely English, Japanese, and Polish. The differences in the linguistic complexity of these datasets also allow us to discuss the efficacy of linguistically-backed word preprocessing.
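A minimal sketch of one way feature density can be operationalised; the ratio of unique features to their total occurrences used here is an assumption, since the abstract does not define the measure, and the paper's linguistically-backed preprocessing pipelines are not reproduced:

```python
# Feature density of a text collection for word n-gram features (assumed definition:
# unique features divided by total feature occurrences).
from sklearn.feature_extraction.text import CountVectorizer

def feature_density(texts, ngram_range=(1, 1)):
    counts = CountVectorizer(ngram_range=ngram_range).fit_transform(texts)
    unique_features = counts.shape[1]     # size of the feature set
    total_occurrences = counts.sum()      # all feature tokens in the corpus
    return unique_features / total_occurrences

docs = ["this movie was great", "this movie was terrible", "great acting"]
print(feature_density(docs), feature_density(docs, ngram_range=(1, 2)))
```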
Predicting the outcome of a process requires modeling the system's dynamics and observing its states. In the context of social behavior, sentiment is a state of the system. Affect Control Theory (ACT) uses affective meanings to represent potential interactions. ACT is a generative theory of culture and behavior based on a three-dimensional affective lexicon. Traditionally, affective meanings have been quantified using survey data, which are fed into regression models to explain social behavior. The lexicons used in these surveys are limited because of their prohibitive cost. This paper uses a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) model to develop a replacement for these surveys. The model achieves state-of-the-art accuracy in estimating affective meanings, expanding the affective lexicon and allowing more behaviors to be explained.
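A minimal sketch of the underlying idea: fine-tune BERT with a three-way regression head so that a concept's text maps to three affective dimensions. The base model, target ratings, and training details below are illustrative assumptions, not the paper's exact setup:

```python
# BERT with a 3-output regression head for affective dimensions.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3, problem_type="regression")

concepts = ["a hero", "a criminal"]
ratings = torch.tensor([[2.8, 2.0, 1.1], [-2.3, 0.4, 0.6]])  # illustrative survey targets

batch = tokenizer(concepts, padding=True, return_tensors="pt")
outputs = model(**batch, labels=ratings)   # MSE loss because problem_type="regression"
outputs.loss.backward()                    # an optimiser step would follow in training
```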
Idiomatic expressions can be problematic for natural language processing applications because their meaning cannot be inferred from their constituent words. A lack of successful methodological approaches and sufficiently large datasets has prevented the development of machine learning methods for detecting idioms, especially for expressions that do not occur in the training set. We present an approach called MICE that uses contextual embeddings for this purpose. We introduce a new dataset of multi-word expressions with literal and idiomatic meanings and use it to train classifiers based on two state-of-the-art contextual word embeddings: ELMo and BERT. We show that deep neural networks using both embeddings perform better than existing approaches and are able to detect idiomatic word use even for expressions that are not present in the training set. We demonstrate the cross-lingual transfer of the developed models and analyze the size of the required dataset.
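A hedged sketch of the core idea: take the contextual embedding of the target expression and train a classifier to label its use as literal or idiomatic. The simplified BERT-plus-logistic-regression pipeline below is a stand-in for the paper's deep neural classifier over ELMo and BERT features:

```python
# Classify literal vs. idiomatic use from the expression's contextual embedding.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
enc = AutoModel.from_pretrained("bert-base-multilingual-cased")

def expression_vector(sentence: str, expression: str):
    """Mean-pool the contextual vectors of the tokens belonging to the expression."""
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**inputs).last_hidden_state[0]          # (seq_len, dim)
    expr_ids = tok(expression, add_special_tokens=False)["input_ids"]
    positions = [i for i, t in enumerate(inputs["input_ids"][0].tolist()) if t in expr_ids]
    return hidden[positions].mean(dim=0).numpy()

train_examples = [  # (sentence, target expression, 1 = idiomatic / 0 = literal)
    ("he finally kicked the bucket last year", "kicked the bucket", 1),
    ("she kicked the bucket across the yard", "kicked the bucket", 0),
]
X = [expression_vector(s, e) for s, e, _ in train_examples]
y = [label for _, _, label in train_examples]
clf = LogisticRegression(max_iter=1000).fit(X, y)
```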
Multiple choice questions (MCQs) are widely used in digital learning systems, as they allow for automating the assessment process. However, due to the increased digital literacy of students and the advent of social media platforms, MCQ tests are widely shared online, and teachers are continuously challenged to create new questions, which is an expensive and time-consuming task. A particularly sensitive aspect of MCQ creation is to devise relevant distractors, i.e., wrong answers that are not easily identifiable as being wrong. This paper studies how a large existing set of manually created answers and distractors for questions over a variety of domains, subjects, and languages can be leveraged to help teachers in creating new MCQs, by the smart reuse of existing distractors. We built several data-driven models based on context-aware question and distractor representations, and compared them with static feature-based models. The proposed models are evaluated with automated metrics and in a realistic user test with teachers. Both automatic and human evaluations indicate that context-aware models consistently outperform a static feature-based approach. For our best-performing context-aware model, on average 3 distractors out of the 10 shown to teachers were rated as high-quality distractors. We create a performance benchmark, and make it public, to enable comparison between different approaches and to introduce a more standardized evaluation of the task. The benchmark contains a test of 298 educational questions covering multiple subjects & languages and a 77k multilingual pool of distractor vocabulary for future research.
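A hedged sketch of distractor reuse by similarity ranking: encode the new question (with its correct answer) and the candidate distractors from an existing pool, then propose the nearest candidates. The encoder and pool below are illustrative assumptions; the paper's context-aware models are trained specifically for this task:

```python
# Rank reusable distractors from a pool by semantic similarity to the question context.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

question = "Which planet is known as the Red Planet? Answer: Mars"
distractor_pool = ["Venus", "Jupiter", "photosynthesis", "the French Revolution", "Saturn"]

q_vec = encoder.encode(question, convert_to_tensor=True)
pool_vecs = encoder.encode(distractor_pool, convert_to_tensor=True)
scores = util.cos_sim(q_vec, pool_vecs)[0]
ranked = sorted(zip(distractor_pool, scores.tolist()), key=lambda x: -x[1])
print(ranked[:3])   # candidates most related to the question context
```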
Transformer-based language models have recently achieved remarkable results on many natural language tasks. However, leaderboard performance is typically achieved by exploiting massive amounts of training data, and only rarely by encoding explicit linguistic knowledge into neural models. This has led many to question the relevance of linguistics to modern natural language processing. In this thesis, I present several case studies illustrating that theoretical linguistics and neural language models remain relevant to each other. First, language models are useful to linguists by providing an objective tool for measuring semantic distance, which is difficult to do using traditional methods. On the other hand, linguistic theory contributes to language modeling research by providing frameworks and data sources for probing language models on specific aspects of language understanding. This thesis contributes three studies exploring different aspects of the syntax-semantics interface in language models. In the first part of the thesis, I apply language models to the problem of word class flexibility. Using mBERT as a source of semantic distance measurements, I present evidence in favor of analyzing word class flexibility as a directional process. In the second part of the thesis, I propose a method for measuring surprisal at the intermediate layers of language models. My experiments show that sentences containing morphosyntactic anomalies trigger surprisal earlier in language models than semantic and commonsense anomalies. Finally, in the third part of the thesis, I adapt several psycholinguistic studies to show that language models contain knowledge of argument structure constructions. In summary, my thesis develops new connections between natural language processing, linguistic theory, and psycholinguistics to offer new perspectives for the interpretation of language models.
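A hedged illustration of the standard surprisal quantity, the negative log-probability of each token given its context, computed here with an off-the-shelf causal language model; the thesis extends this idea to the intermediate layers of the model, which is not reproduced in this sketch:

```python
# Per-token surprisal (-log p) under GPT-2, purely for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")

def token_surprisals(sentence: str):
    ids = tok(sentence, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = lm(ids).logits                          # (1, seq_len, vocab)
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    surprisal = -logprobs[torch.arange(targets.size(0)), targets]
    return list(zip(tok.convert_ids_to_tokens(targets.tolist()), surprisal.tolist()))

print(token_surprisals("The cat chased the dog"))
```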
Continuous word representations, trained on large unlabeled corpora, are useful for many natural language processing tasks. Popular models that learn such representations ignore the morphology of words by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character n-grams. A vector representation is associated with each character n-gram, and words are represented as the sum of these representations. Our method is fast, allowing models to be trained on large corpora quickly, and allows us to compute word representations for words that did not appear in the training data. We evaluate our word representations on nine different languages, both on word similarity and analogy tasks. By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks.
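A small illustration of the representation described above: a word is broken into character n-grams (with boundary markers), and its vector is the sum of the n-gram vectors, which also yields vectors for out-of-vocabulary words. The n-gram range and dimensionality here are only example settings:

```python
# Word vectors as sums of character n-gram vectors.
import numpy as np

def char_ngrams(word: str, n_min: int = 3, n_max: int = 6):
    padded = f"<{word}>"
    return [padded[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

def word_vector(word: str, ngram_vectors: dict, dim: int = 100):
    """Sum the vectors of the word's character n-grams (skipping unseen n-grams)."""
    vecs = [ngram_vectors[g] for g in char_ngrams(word) if g in ngram_vectors]
    return np.sum(vecs, axis=0) if vecs else np.zeros(dim)

print(char_ngrams("where"))   # ['<wh', 'whe', 'her', 'ere', 're>', '<whe', ...]
```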
We use paraphrases as a unique source of data for analyzing contextualized embeddings, with a particular focus on BERT. Because paraphrases naturally encode consistent word and phrase semantics, they provide a distinctive lens for investigating properties of embeddings. Using alignments from the Paraphrase Database, we study words within paraphrases as well as phrase representations. We find that contextual embeddings handle polysemous words effectively, but in many cases give synonyms surprisingly different representations. We confirm previous findings that BERT is sensitive to word order, but find a slightly different pattern from prior work regarding the degree of contextualization across BERT's layers.
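A minimal sketch of the kind of probe described here: compare the contextual vector of the same (or an aligned) word in two paraphrastic sentences via cosine similarity, layer by layer. The model choice and first-subword pooling are assumptions made for illustration:

```python
# Layer-wise cosine similarity of a word's contextual vectors across a paraphrase pair.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

def word_states(sentence, word):
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        layers = model(**inputs).hidden_states            # tuple of (1, seq_len, dim)
    wid = tok(word, add_special_tokens=False)["input_ids"][0]
    pos = inputs["input_ids"][0].tolist().index(wid)      # first subword of the word
    return [layer[0, pos] for layer in layers]

a = word_states("the movie was fantastic", "fantastic")
b = word_states("the film was fantastic", "fantastic")
for i, (u, v) in enumerate(zip(a, b)):
    print(f"layer {i}: {torch.cosine_similarity(u, v, dim=0).item():.3f}")
```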
We present RuDSI, a new benchmark for word sense induction (WSI) in Russian. The dataset was created using manual annotation and semi-automatic clustering of Word Usage Graphs (WUGs). Unlike previous WSI datasets for Russian, RuDSI is fully data-driven (based on texts from the Russian National Corpus), with no external word senses imposed on annotators. Depending on the parameters of graph clustering, different derivative datasets can be produced from the raw annotation. We report the performance that several baseline WSI methods obtain on RuDSI and discuss possibilities for improving these scores.
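A hedged sketch of how senses can be induced from a Word Usage Graph: nodes are usages of a target word, edge weights are annotator similarity judgements, and clusters of the thresholded graph act as induced senses. The clustering choice shown (connected components over a threshold) is only one of several options and not necessarily the one used for RuDSI:

```python
# Induce senses by clustering a thresholded Word Usage Graph.
import networkx as nx

def cluster_wug(judgements, threshold=2.5):
    """judgements: list of (usage_i, usage_j, similarity on an assumed 1-4 scale)."""
    g = nx.Graph()
    g.add_nodes_from({u for pair in judgements for u in pair[:2]})
    g.add_weighted_edges_from((i, j, w) for i, j, w in judgements if w >= threshold)
    return [set(c) for c in nx.connected_components(g)]

judgements = [("u1", "u2", 4), ("u2", "u3", 3.5), ("u4", "u5", 4), ("u3", "u4", 1)]
print(cluster_wug(judgements))   # two induced senses: {u1, u2, u3} and {u4, u5}
```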
Even in highly developed countries, as many as 15-30% of the population can only understand texts written using a basic vocabulary. Their understanding of everyday texts is limited, which prevents them from taking an active role in society and making informed decisions regarding healthcare, legal representation, or democratic choice. Lexical simplification is a natural language processing task that aims to make text understandable to everyone by replacing complex vocabulary and expressions with simpler ones, while preserving the original meaning. It has attracted considerable attention over the last 20 years, and fully automatic lexical simplification systems have been proposed for various languages. The main obstacle to progress in the field is the lack of high-quality datasets for building and evaluating lexical simplification systems. We present a new benchmark dataset for lexical simplification in English, Spanish, and (Brazilian) Portuguese, and provide details about data selection and annotation procedures. This is the first dataset that allows a direct comparison of lexical simplification systems across three languages. To showcase the usability of the dataset, we adapt two state-of-the-art lexical simplification systems with different architectures (neural vs. non-neural) to all three languages (English, Spanish, and Brazilian Portuguese) and evaluate their performance on our new dataset. For a fairer comparison, we use several evaluation measures that capture different aspects of system efficacy and discuss their strengths and weaknesses. We find that the state-of-the-art neural lexical simplification system outperforms the state-of-the-art non-neural system in all three languages. More importantly, we find that the state-of-the-art neural lexical simplification system performs significantly better for English than for Spanish and Portuguese.
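A hedged sketch of a typical neural lexical-simplification step: mask the complex word and let a masked language model propose in-context substitution candidates, which would then be filtered and ranked for simplicity. This mirrors the general shape of neural systems evaluated on such benchmarks, not any specific system from the paper:

```python
# Generate in-context substitution candidates for a complex word with a masked LM.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def substitution_candidates(sentence: str, complex_word: str, top_k: int = 10):
    masked = sentence.replace(complex_word, fill.tokenizer.mask_token, 1)
    return [c["token_str"] for c in fill(masked, top_k=top_k)
            if c["token_str"].lower() != complex_word.lower()]

print(substitution_candidates(
    "The committee will convene next week to discuss the proposal.", "convene"))
```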
Grammatical Error Correction (GEC) is the task of automatically detecting and correcting errors in text. The task not only includes the correction of grammatical errors, such as missing prepositions and mismatched subject-verb agreement, but also orthographic and semantic errors, such as misspellings and word choice errors respectively. The field has seen significant progress in the last decade, motivated in part by a series of five shared tasks, which drove the development of rule-based methods, statistical classifiers, statistical machine translation, and finally neural machine translation systems, which represent the current dominant state of the art. In this survey paper, we condense the field into a single article and first outline some of the linguistic challenges of the task, introduce the most popular datasets that are available to researchers (for both English and other languages), and summarise the various methods and techniques that have been developed with a particular focus on artificial error generation. We next describe the many different approaches to evaluation as well as concerns surrounding metric reliability, especially in relation to subjective human judgements, before concluding with an overview of recent progress and suggestions for future work and remaining challenges. We hope that this survey will serve as a comprehensive resource for researchers who are new to the field or who want to be kept apprised of recent developments.
Wikidata is a frequently updated, community-driven, and multilingual knowledge graph. Hence, Wikidata is an attractive basis for entity linking, which is evident from the recent increase in published papers. This survey focuses on four subjects: (1) Which Wikidata entity linking datasets exist, how widely used are they, and how were they constructed? (2) Do the characteristics of Wikidata matter for the design of entity linking datasets, and if so, how? (3) How do current entity linking approaches exploit the specific characteristics of Wikidata? (4) Which Wikidata characteristics remain unexploited by existing entity linking approaches? This survey shows that current Wikidata-specific entity linking datasets do not differ in their annotation schemes from those for other knowledge graphs. Thus, the potential for multilingual and time-dependent datasets, which would be a natural fit for Wikidata, has not been realized. Furthermore, we show that most entity linking approaches use Wikidata in the same way as any other knowledge graph, missing the chance to leverage Wikidata-specific characteristics to increase quality. Almost all approaches use specific properties such as labels and sometimes descriptions, but ignore characteristics such as the hyper-relational structure. Hence, there is still room for improvement, for example by incorporating hyper-relational graph embeddings or type information. Many approaches also include information from Wikipedia, which is easily combined with Wikidata and provides valuable textual information that Wikidata lacks.
Neural machine translation (NMT) models have proven effective on large bilingual datasets. However, existing methods and techniques show that a model's performance is highly dependent on the number of examples in the training data. For many languages, having such an amount of parallel data is a far-fetched dream. Taking inspiration from monolingual speakers learning new languages with bilingual dictionaries, we investigate the applicability of bilingual dictionaries for languages with extremely low or no bilingual corpora. In this paper, we explore methods that use bilingual dictionaries with an NMT model to improve translation for extremely low-resource languages. We extend this work to multilingual systems that exhibit zero-shot properties. We detail the impact of dictionary quality, training dataset size, language family, and other factors on translation quality. Results on multiple low-resource test languages show that our bilingual dictionary based approach outperforms the baselines.
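A hedged sketch of one simple way a bilingual dictionary is commonly injected into an NMT pipeline: generating synthetic target-side text by word-for-word dictionary substitution, which can then serve as additional (noisy) training pairs. The paper's own methods may differ; the entries below are purely illustrative:

```python
# Word-for-word dictionary substitution to create synthetic parallel data.
def dictionary_translate(sentence: str, bilingual_dict: dict) -> str:
    """Unknown words are copied through unchanged."""
    return " ".join(bilingual_dict.get(tok, tok) for tok in sentence.split())

toy_dict = {"water": "agua", "is": "es", "life": "vida"}   # illustrative entries
synthetic_pairs = [(s, dictionary_translate(s, toy_dict))
                   for s in ["water is life", "life is good"]]
print(synthetic_pairs)
```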
Future work sentences (FWS) are the particular sentences in academic papers that contain the author's description of their proposed follow-up research direction. This paper presents methods to automatically extract FWS from academic papers and classify them according to the different future directions embodied in the paper's content. FWS recognition methods will enable subsequent researchers to locate future work sentences more accurately and quickly and reduce the time and cost of acquiring the corpus. Current work on the automatic identification of future work sentences is relatively sparse, and existing research cannot accurately identify FWS in academic papers, which precludes large-scale data mining. Furthermore, there are many aspects to the content of future work, and subdividing this content is conducive to analyzing specific development directions. In this paper, Natural Language Processing (NLP) is used as a case study, and FWS are extracted from academic papers and classified into different types. We manually build an annotated corpus with six different types of FWS. Then, automatic recognition and classification of FWS are implemented using machine learning models, and the performance of these models is compared based on the evaluation metrics. The results show that the Bernoulli Bayesian model has the best performance in the automatic recognition task, with the Macro F1 reaching 90.73%, and the SCIBERT model has the best performance in the automatic classification task, with the weighted average F1 reaching 72.63%. Finally, we extract keywords from FWS to gain a deeper understanding of the key content described in FWS, and we also demonstrate that the content described in FWS is reflected in subsequent research work by measuring the similarity between future work sentences and the abstracts.
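A minimal sketch of the recognition step using a Bernoulli naive Bayes classifier over binary bag-of-words features, as named above; the sentences, labels, and preprocessing below are illustrative and do not reproduce the paper's corpus or features:

```python
# Binary recognition of future work sentences with Bernoulli naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline

sentences = [
    "In future work, we plan to extend the model to multilingual corpora.",
    "We will explore better pretraining objectives in subsequent studies.",
    "The model achieves 90.1 F1 on the benchmark dataset.",
    "Table 2 reports the results of the ablation study.",
]
labels = [1, 1, 0, 0]   # 1 = future work sentence (FWS), 0 = other (illustrative)

fws_recognizer = make_pipeline(CountVectorizer(binary=True), BernoulliNB())
fws_recognizer.fit(sentences, labels)
print(fws_recognizer.predict(["We leave cross-lingual evaluation for future work."]))
```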
This paper presents a comprehensive survey of corpora and lexical resources available for Turkish. We review a wide range of resources, focusing on those that are publicly available. In addition to providing information on the available linguistic resources, we present a set of recommendations and identify gaps in the data that can be used for research and for building applications in Turkish linguistics and natural language processing.
The relationship between words in a sentence often tells us more about the underlying semantic content of a document than its actual words, individually. In this work, we propose two novel algorithms, called Flexible Lexical Chain II and Fixed Lexical Chain II. These algorithms combine the semantic relations derived from lexical chains, prior knowledge from lexical databases, and the robustness of the distributional hypothesis in word embeddings as building blocks forming a single system. In short, our approach has three main contributions: (i) a set of techniques that fully integrate word embeddings and lexical chains; (ii) a more robust semantic representation that considers the latent relation between words in a document; and (iii) lightweight word embeddings models that can be extended to any natural language task. We intend to assess the knowledge of pre-trained models to evaluate their robustness in the document classification task. The proposed techniques are tested against seven word embeddings algorithms using five different machine learning classifiers over six scenarios in the document classification task. Our results show the integration between lexical chains and word embeddings representations sustain state-of-the-art results, even against more complex systems.
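A hedged sketch of the building block the two algorithms share: grouping a document's words into chains of semantically related terms by thresholding embedding similarity, then representing each chain by the average of its word vectors. The greedy chaining and threshold here are illustrative; chain construction in the paper also draws on lexical databases:

```python
# Greedy lexical chaining over word embeddings.
import numpy as np

def build_chains(words, vectors, threshold=0.5):
    """Attach each word to the first chain it is similar enough to, else start a new one."""
    chains = []   # each chain: {"words": [...], "centroid": np.ndarray}
    for w in words:
        v = vectors[w]
        for chain in chains:
            c = chain["centroid"]
            sim = float(v @ c) / (np.linalg.norm(v) * np.linalg.norm(c))
            if sim >= threshold:
                chain["words"].append(w)
                chain["centroid"] = np.mean([vectors[x] for x in chain["words"]], axis=0)
                break
        else:
            chains.append({"words": [w], "centroid": v})
    return chains
```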
Ensuring proper punctuation and letter casing is a key pre-processing step before applying complex natural language processing algorithms. This is especially significant for text sources where punctuation and casing are missing, such as the raw output of automatic speech recognition systems. In addition, short text messages and micro-blogging platforms offer unreliable and often erroneous punctuation and casing. This survey offers an overview of both historical and state-of-the-art techniques for restoring punctuation and correcting word casing. Furthermore, current challenges and research directions are highlighted.
African languages still lag behind in the progress of natural language processing techniques, one reason being the lack of representative data; techniques that can transfer information between languages can help mitigate this lack-of-data problem. This paper trains Setswana and Sepedi monolingual word vectors and uses VecMap to create Setswana-Sepedi cross-lingual word embeddings. Word embeddings are word vectors that represent words as continuous floating-point numbers, where semantically similar words are mapped to nearby points in an n-dimensional space. The idea of word embeddings is based on the distributional hypothesis that semantically similar words are distributed in similar contexts (Harris, 1954). Cross-lingual embeddings leverage monolingual embeddings by learning a shared vector space for two separately trained monolingual vector sets, such that words with similar meanings are represented by similar vectors. In this paper, we investigate cross-lingual embeddings for Setswana-Sepedi monolingual word vectors. We use the unsupervised cross-lingual embedding method in VecMap to train the Setswana-Sepedi cross-lingual embeddings. We evaluate the quality of the Setswana-Sepedi cross-lingual word representations using a semantic evaluation task. For the semantic similarity task, we translate the WordSim and SimLex datasets into Setswana and Sepedi. We release this dataset as part of this work for other researchers. We evaluate the intrinsic quality of the embeddings to determine whether there is improvement in the semantic representation of the word embeddings.
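A hedged sketch of the mapping step at the core of cross-lingual embedding methods such as VecMap: learn an orthogonal matrix (via Procrustes) that maps source vectors onto target vectors for a seed dictionary, then apply it to the whole source space. VecMap's unsupervised mode bootstraps this seed dictionary automatically; the random vectors below are stand-ins for real Setswana and Sepedi embeddings:

```python
# Orthogonal Procrustes mapping between two embedding spaces.
import numpy as np

def orthogonal_mapping(src_vecs: np.ndarray, tgt_vecs: np.ndarray) -> np.ndarray:
    """src_vecs[i] and tgt_vecs[i] are embeddings of a translation pair (rows)."""
    u, _, vt = np.linalg.svd(tgt_vecs.T @ src_vecs)
    return u @ vt        # W such that src @ W.T is aligned with the target space

rng = np.random.default_rng(0)
setswana = rng.normal(size=(5, 100))   # placeholder source vectors
sepedi = rng.normal(size=(5, 100))     # placeholder target vectors
W = orthogonal_mapping(setswana, sepedi)
mapped = setswana @ W.T                # source vectors projected into the target space
```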