Keyword extraction is the task of finding a few interesting phrases in a text document, which provides a list of the document's main topics. Most existing graph-based models use co-occurrence links as cohesion indicators to model the relationships between syntactic elements. However, a word may have different forms of expression within a document, and may also have several synonyms; such information cannot be captured using co-occurrence information alone. In this paper, we enhance a graph-based ranking model by leveraging word embeddings as background knowledge to add semantic information to the word graph. Our approach is evaluated on established benchmark datasets, and the empirical results show that word embedding neighborhood information improves model performance.
translated by 谷歌翻译
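A minimal sketch of the idea above (the function names, threshold, and toy 2-d vectors are illustrative assumptions, not the paper's implementation): word pairs whose embedding cosine similarity exceeds a threshold receive a weighted semantic edge in the word graph, alongside the usual co-occurrence edges.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense word vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def add_semantic_edges(graph, vectors, threshold=0.7):
    # Connect every pair of candidate words whose embedding similarity
    # exceeds the threshold, weighting the edge by that similarity.
    words = list(vectors)
    for i, w1 in enumerate(words):
        for w2 in words[i + 1:]:
            sim = cosine(vectors[w1], vectors[w2])
            if sim >= threshold:
                graph.setdefault(w1, {})[w2] = sim
                graph.setdefault(w2, {})[w1] = sim
    return graph

# Toy 2-d "embeddings": "car" and "automobile" point the same way,
# so they get a semantic edge even if they never co-occur.
vecs = {"car": [1.0, 0.1], "automobile": [0.9, 0.2], "banana": [0.0, 1.0]}
g = add_semantic_edges({}, vecs)
```

A graph-based ranker (e.g. PageRank-style scoring) can then run over the union of co-occurrence and semantic edges.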
With the development of Internet technology, information overload has become increasingly apparent, and users must spend a great deal of time obtaining the information they need. Keyphrases that summarize document information, however, are very helpful for users to acquire and understand documents quickly. For academic resources, most existing research extracts keyphrases from the title and abstract. We find that the title information in the references also contains author-assigned keyphrases. Therefore, this paper uses reference information and applies two typical unsupervised extraction methods (TF*IDF and TextRank), two representative traditional supervised learning algorithms (Naïve Bayes and Conditional Random Fields), and a supervised deep learning model (BiLSTM-CRF) to analyze the specific performance of reference information for keyphrase extraction, improving the quality of keyphrase recognition from the perspective of expanding the source text. Experimental results show that reference information can improve the precision, recall, and F1 of automatic keyphrase extraction to a certain extent. This demonstrates the usefulness of reference information for keyphrase extraction from academic papers and provides a new idea for subsequent research on automatic keyphrase extraction.
translated by 谷歌翻译
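As a reference point for the unsupervised baselines named above, a minimal TF*IDF keyword scorer might look as follows (a pure-Python sketch with scikit-learn-style smoothed IDF; the tokenization and toy corpus are illustrative, not the paper's setup):

```python
import math
from collections import Counter

def tfidf_scores(doc_tokens, corpus):
    # doc_tokens: tokens of the target document (possibly expanded with
    # reference titles); corpus: list of token lists used to compute IDF.
    n_docs = len(corpus)
    tf = Counter(doc_tokens)
    scores = {}
    for term, freq in tf.items():
        df = sum(1 for d in corpus if term in d)
        idf = math.log((1 + n_docs) / (1 + df)) + 1  # smoothed IDF
        scores[term] = (freq / len(doc_tokens)) * idf
    return scores

doc = ["keyword", "extraction", "keyword", "ranking"]
corpus = [doc, ["graph", "ranking", "model"], ["deep", "learning"]]
scores = tfidf_scores(doc, corpus)
```

Here "keyword" scores highest (frequent in the document, absent elsewhere), while "ranking" is penalized for also appearing in another document.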
In this paper, we introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications. In particular, we propose two innovative unsupervised methods for keyword and sentence extraction, and show that the results obtained compare favorably with previously published results on established benchmarks.
translated by 谷歌翻译
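The core of TextRank can be sketched as PageRank run over an undirected word co-occurrence graph (a simplified illustration rather than the authors' exact formulation; the window size and damping factor follow common defaults):

```python
def cooccurrence_graph(tokens, window=2):
    # Link words that appear within `window` positions of each other.
    graph = {}
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            u, v = w, tokens[j]
            if u == v:
                continue
            graph.setdefault(u, set()).add(v)
            graph.setdefault(v, set()).add(u)
    return graph

def textrank(graph, damping=0.85, iters=50):
    # Power iteration of PageRank on an undirected graph:
    # each node shares its score equally among its neighbours.
    nodes = list(graph)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            rank = sum(score[m] / len(graph[m]) for m in graph[n])
            new[n] = (1 - damping) / len(nodes) + damping * rank
        score = new
    return score

tokens = "graph ranking model ranks graph nodes".split()
score = textrank(cooccurrence_graph(tokens))
```

The highest-scoring words (here the well-connected "graph"/"ranks") become keyword candidates; for sentence extraction the same iteration runs on a sentence-similarity graph instead.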
Finding an expert plays a crucial role in driving successful collaborations and speeding up high-quality research development and innovation. However, the rapid growth of scientific publications and digital expertise data makes identifying the right experts a challenging problem. Existing approaches for finding experts on a given topic can be categorized into information-retrieval techniques based on vector space models, document language models, and graph-based models. In this paper, we propose ExpFinder, a new ensemble model for expert finding that integrates a novel N-gram vector space model, denoted nVSM, with a graph-based model, denoted μCO-HITS, a proposed variant of the CO-HITS algorithm. The key of nVSM is to exploit a recent inverse-document-frequency weighting approach for N-gram words, and ExpFinder incorporates nVSM into μCO-HITS to achieve expert finding. We comprehensively evaluate ExpFinder on four different datasets against six expert-finding models. The evaluation results show that ExpFinder is a highly effective model for expert finding, significantly outperforming all the compared models by 19% to 160.2%.
Automatic keyword extraction (AKE) has gained more importance with the increasing amount of digital textual data that modern computing systems process. It has various applications in information retrieval (IR) and natural language processing (NLP), including text summarisation, topic analysis and document indexing. This paper proposes a simple but effective post-processing-based universal approach to improve the performance of any AKE method, via an enhanced level of semantic-awareness supported by PoS-tagging. To demonstrate the performance of the proposed approach, we considered word types retrieved from a PoS-tagging step and two representative sources of semantic information -- specialised terms defined in one or more context-dependent thesauri, and named entities in Wikipedia. The above three steps can simply be added to the end of any AKE method as part of a post-processor, which re-evaluates all candidate keywords following some context-specific and semantic-aware criteria. For five state-of-the-art (SOTA) AKE methods, our experimental results with 17 selected datasets showed that the proposed approach improved their performance both consistently (up to 100\% in terms of improved cases) and significantly (between 10.2\% and 53.8\%, with an average of 25.8\%, in terms of F1-score and across all five methods), especially when all three enhancement steps are used. Our results have profound implications considering the ease of applying our proposed approach to any AKE method and of further extending it.
translated by 谷歌翻译
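The post-processing idea can be illustrated with the PoS-filtering step alone (a toy sketch: the small `TAGS` lookup stands in for a real PoS tagger, and the allowed-tag set is an assumption, not the paper's exact criteria):

```python
# Tiny stand-in for a PoS tagger; a real post-processor would call an
# NLP toolkit's tagger instead of a hand-written dictionary.
TAGS = {"neural": "ADJ", "network": "NOUN", "quickly": "ADV",
        "learns": "VERB", "semantic": "ADJ", "graph": "NOUN"}
ALLOWED = {"NOUN", "ADJ"}

def postprocess(candidates):
    # Re-evaluate candidate keywords: keep a candidate only if every
    # word in it carries an allowed (noun/adjective) tag.
    return [c for c in candidates
            if all(TAGS.get(w) in ALLOWED for w in c.split())]

kept = postprocess(["neural network", "quickly learns", "semantic graph"])
```

Because the filter only consumes a ranked candidate list, it can be appended to the output of any AKE method, which is what makes the approach universal.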
In the past few decades, there has been an explosion in the amount of available data produced from various sources with different topics. The availability of this enormous data necessitates us to adopt effective computational tools to explore the data. This leads to an intense growing interest in the research community to develop computational methods focused on processing this text data. A line of study focused on condensing the text so that we are able to get a higher level of understanding in a shorter time. The two important tasks to do this are keyword extraction and text summarization. In keyword extraction, we are interested in finding the key important words from a text. This makes us familiar with the general topic of a text. In text summarization, we are interested in producing a short-length text which includes important information about the document. The TextRank algorithm, an unsupervised learning method that is an extension of PageRank (the base algorithm of the Google search engine for searching and ranking pages), has shown its efficacy in large-scale text mining, especially for text summarization and keyword extraction. This algorithm can automatically extract the important parts of a text (keywords or sentences) and declare them as the result. However, this algorithm neglects the semantic similarity between the different parts. In this work, we improved the results of the TextRank algorithm by incorporating the semantic similarity between parts of the text. Aside from keyword extraction and text summarization, we develop a topic clustering algorithm based on our framework which can be used individually or as a part of generating the summary to overcome coverage problems.
Automatic event detection from news corpora is a crucial task for mining fast-evolving structured knowledge. Since real-world events have different granularities, from top-level themes to key events and then to event mentions corresponding to concrete actions, there are generally two lines of research: (1) theme detection identifies the major themes from a news corpus (e.g., "2019 Hong Kong protests" vs. "2020 U.S. presidential election"), which have very distinct semantics; (2) action extraction extracts the lowest-level actions from a single document (e.g., "the police hit the protester's left arm"), which cannot convey an understanding of the event as a whole. In this paper, we propose a new task at the intermediate level, key event detection, which aims to detect from a news corpus key events (e.g., "HK airport protest on Aug. 12-14"), each happening at a particular time/location and focusing on the same theme. This task can bridge event understanding and structuring, and is inherently challenging because of the thematic and temporal closeness of key events and the scarcity of labeled data due to the fast-evolving nature of news articles. To address these challenges, we develop an unsupervised key event detection framework, EvMine, which (1) extracts temporally frequent peak phrases using a novel ttf-itf score, (2) merges peak phrases into event-indicative feature sets by detecting communities from our designed peak-phrase graph, which captures document co-occurrence, semantic similarity, and temporal closeness signals, and (3) iteratively retrieves documents related to each key event by training a classifier with pseudo labels automatically generated from the event-indicative feature sets, and refines the detected key events using the retrieved documents. Extensive experiments and case studies show that EvMine outperforms all baseline methods and its ablations on two real-world news corpora.
Assigning qualified, unbiased and interested reviewers to paper submissions is vital for maintaining the integrity and quality of the academic publishing system and providing valuable reviews to authors. However, matching thousands of submissions with thousands of potential reviewers within a limited time is a daunting challenge for a conference program committee. Prior efforts based on topic modeling have suffered from losing the specific context that helps define the topics in a publication or submission abstract. Moreover, in some cases, topics identified are difficult to interpret. We propose an approach that learns from each abstract published by a potential reviewer the topics studied and the explicit context in which the reviewer studied the topics. Furthermore, we contribute a new dataset for evaluating reviewer matching systems. Our experiments show a significant, consistent improvement in precision when compared with the existing methods. We also use examples to demonstrate why our recommendations are more explainable. The new approach has been deployed successfully at top-tier conferences in the last two years.
Measuring the semantic similarity of different texts has many important applications in digital humanities research, such as information retrieval, document clustering, and text summarization. The performance of different methods depends on the length of the texts, the domain, and the language. This study focuses on experimenting with some of the current approaches on Finnish, a morphologically rich language. At the same time, we propose a simple method, TFW2V, which shows high efficiency in handling both long text documents and limited amounts of data. Furthermore, we design an objective evaluation method which can be used as a framework for benchmarking text similarity approaches.
Keyphrase extraction is a fundamental task in natural language processing that usually consists of two main parts: candidate keyphrase extraction and keyphrase importance estimation. From the perspective of how humans understand documents, we typically measure the importance of a phrase according to its syntactic accuracy, information saliency, and concept consistency. However, most existing keyphrase extraction approaches focus on only part of these aspects, which leads to biased results. In this paper, we propose a new approach that estimates the importance of keyphrases from multiple perspectives (called KIEMP) and further improves the performance of keyphrase extraction. Specifically, KIEMP estimates phrase importance with three modules: a chunking module that measures its syntactic accuracy, a ranking module that checks its information saliency, and a matching module that judges the concept (i.e., topic) consistency between a phrase and the whole document. These three modules are seamlessly joined together in an end-to-end multi-task learning model, which helps the three parts enhance each other and balances the effects of the three perspectives. Experimental results on six benchmark datasets show that KIEMP outperforms existing state-of-the-art keyphrase extraction approaches in most cases.
When medical researchers conduct a systematic review (SR), screening studies is the most time-consuming process: researchers read thousands of medical articles and manually label them as relevant or irrelevant. Screening prioritization (i.e., document ranking) is an approach for assisting researchers by providing a ranking of the documents in which relevant documents are ranked higher than irrelevant ones. Seed-driven document ranking (SDR) uses a known relevant document (i.e., a seed) as a query and generates such rankings. Previous SDR work tried to identify different term weights in the query document and used them in a retrieval model to compute ranking scores. Alternatively, we formulate the SDR task as finding documents similar to the query document and generate rankings based on similarity scores. We propose a document-matching measure named Mirror Matching, which computes matching scores between medical abstract texts by incorporating common writing patterns, such as background, method, result, and conclusion. We conduct experiments on the CLEF eHealth 2019 Task 2 TAR dataset, and the empirical results show that this simple approach achieves higher performance than traditional and neural retrieval models on Average Precision and Precision-focused metrics.
The relationship between words in a sentence often tells us more about the underlying semantic content of a document than its actual words, individually. In this work, we propose two novel algorithms, called Flexible Lexical Chain II and Fixed Lexical Chain II. These algorithms combine the semantic relations derived from lexical chains, prior knowledge from lexical databases, and the robustness of the distributional hypothesis in word embeddings as building blocks forming a single system. In short, our approach has three main contributions: (i) a set of techniques that fully integrate word embeddings and lexical chains; (ii) a more robust semantic representation that considers the latent relation between words in a document; and (iii) lightweight word embeddings models that can be extended to any natural language task. We intend to assess the knowledge of pre-trained models to evaluate their robustness in the document classification task. The proposed techniques are tested against seven word embeddings algorithms using five different machine learning classifiers over six scenarios in the document classification task. Our results show that the integration of lexical chains and word embedding representations sustains state-of-the-art results, even against more complex systems.
Instead of mining coherent topics from a given text corpus in a completely unsupervised manner, seed-guided topic discovery methods leverage user-provided seed words to extract distinctive and coherent topics so that the mined topics can better cater to the user's interest. To model the semantic correlation between words and seeds for discovering topic-indicative terms, existing seed-guided approaches utilize different types of context signals, such as document-level word co-occurrences, sliding window-based local contexts, and generic linguistic knowledge brought by pre-trained language models. In this work, we analyze and show empirically that each type of context information has its value and limitation in modeling word semantics under seed guidance, but combining three types of contexts (i.e., word embeddings learned from local contexts, pre-trained language model representations obtained from general-domain training, and topic-indicative sentences retrieved based on seed information) allows them to complement each other for discovering quality topics. We propose an iterative framework, SeedTopicMine, which jointly learns from the three types of contexts and gradually fuses their context signals via an ensemble ranking process. Under various sets of seeds and on multiple datasets, SeedTopicMine consistently yields more coherent and accurate topics than existing seed-guided topic discovery approaches.
Deep Learning and Machine Learning based models have become extremely popular in text processing and information retrieval. However, the non-linear structures present inside the networks make these models largely inscrutable. A significant body of research has focused on increasing the transparency of these models. This article provides a broad overview of research on the explainability and interpretability of natural language processing and information retrieval methods. More specifically, we survey approaches that have been applied to explain word embeddings, sequence modeling, attention modules, transformers, BERT, and document ranking. The concluding section suggests some possible directions for future research on this topic.
Multi-document summarization (MDS) is an effective tool for information aggregation that generates an informative and concise summary from a cluster of topic-related documents. Ours is the first survey to systematically overview recent deep-learning-based MDS models. We propose a novel taxonomy summarizing the design strategies of neural networks and conduct a comprehensive summary of the state of the art. We highlight the differences between various objective functions that are rarely discussed in the existing literature. Finally, we propose several directions related to this new and exciting field.
Natural Language Understanding has seen an increasing number of publications in the last few years, especially after robust word embeddings models became prominent, when they proved themselves able to capture and represent semantic relationships from massive amounts of data. Nevertheless, traditional models often fall short in intrinsic issues of linguistics, such as polysemy and homonymy. Any expert system that makes use of natural language in its core, can be affected by a weak semantic representation of text, resulting in inaccurate outcomes based on poor decisions. To mitigate such issues, we propose a novel approach called Most Suitable Sense Annotation (MSSA), that disambiguates and annotates each word by its specific sense, considering the semantic effects of its context. Our approach brings three main contributions to the semantic representation scenario: (i) an unsupervised technique that disambiguates and annotates words by their senses, (ii) a multi-sense embeddings model that can be extended to any traditional word embeddings algorithm, and (iii) a recurrent methodology that allows our models to be re-used and their representations refined. We test our approach on six different benchmarks for the word similarity task, showing that our approach can produce state-of-the-art results and outperforms several more complex state-of-the-art systems.
Most unsupervised NLP models represent each word with a single point or a single region in semantic space, while existing multi-sense word embeddings cannot represent longer word sequences like phrases or sentences. We propose a novel embedding method for text sequences (phrases or sentences) in which each sequence is represented by a distinct set of multi-mode codebook embeddings that capture the different semantic facets of its meaning. The codebook embeddings can be viewed as cluster centers which summarize the distribution of possibly co-occurring words in a pre-trained word embedding space. We introduce an end-to-end trainable neural model that directly predicts the set of cluster centers from the input text sequence at test time. Our experiments show that the per-sentence codebook embeddings significantly improve performance on unsupervised sentence similarity and extractive summarization benchmarks. In phrase similarity experiments, we discover that the multi-facet embeddings provide interpretable semantic representations but do not outperform the single-facet baseline.
In this digital era, automated systems that represent information in document format in different natural languages are in use in almost every discipline. As a result, there is a growing interest in better solutions for finding, organizing, and analyzing these documents. In this paper, we propose a system that clusters text documents using encyclopedic knowledge (EK) and neural word embeddings. EK enables the representation of related concepts, and neural word embeddings allow us to handle the context of relatedness. During the clustering process, all the text documents pass through a preprocessing stage. Enriched text document features are extracted from each document by mapping with EK and a word embedding model, and TF-IDF weighted vectors of the enriched features are generated. Finally, the text documents are clustered using the popular spherical k-means algorithm. The proposed system is tested on an Amharic text corpus and Amharic Wikipedia data. The test results show that using EK together with word embeddings for document clustering improves the average accuracy over using EK alone. Furthermore, varying the size of the classes has a significant effect on accuracy.
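The final clustering step above can be sketched as follows: spherical k-means is ordinary k-means on unit-normalized vectors, with cosine (dot-product) assignment and re-normalized mean centroids (a pure-Python illustration; the toy 2-d vectors stand in for the TF-IDF feature vectors of the enriched documents):

```python
import math
import random

def normalize(v):
    # Project a vector onto the unit sphere.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n else v

def spherical_kmeans(vectors, k, iters=20, seed=0):
    # Assignment maximises the dot product (cosine on unit vectors);
    # each centroid is the re-normalised mean of its members.
    vectors = [normalize(v) for v in vectors]
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    assign = [0] * len(vectors)
    for _ in range(iters):
        for i, v in enumerate(vectors):
            assign[i] = max(range(k),
                            key=lambda c: sum(a * b for a, b in zip(v, centroids[c])))
        for c in range(k):
            members = [vectors[i] for i in range(len(vectors)) if assign[i] == c]
            if members:
                mean = [sum(col) / len(members) for col in zip(*members)]
                centroids[c] = normalize(mean)
    return assign

# Two obvious directions in 2-d: the first two vectors vs. the last two.
vecs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.95]]
assign = spherical_kmeans(vecs, 2)
```

Normalizing to unit length means cluster membership depends only on vector direction, which suits TF-IDF vectors where document length should not dominate.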
Academic research is an exploratory activity to solve problems that have never been solved before. By this nature, each academic research work must conduct a literature review to distinguish its novelties from what has already been addressed by prior works. In natural language processing, this literature review is usually conducted under the "Related Work" section. Given the rest of a research paper and a list of cited papers, the task of automatic related work generation aims to generate the "Related Work" section automatically. Although this task was proposed over 10 years ago, it was not until recently that it was cast as a variant of the scientific multi-document summarization problem. However, even today, the problems of automatic related work generation and citation text generation are not yet standardized. In this survey, we conduct a meta-study comparing the existing literature on related work generation from the perspectives of problem formulation, dataset collection, methodological approaches, performance evaluation, and future prospects, in order to give the reader insight into the progress of state-of-the-art research and into how future studies can be conducted. We also survey related research fields that we suggest future work should consider integrating.
The scientific world is changing at a rapid pace, with new technologies being developed and new trends emerging at increasing frequency. This paper presents a framework for conducting scientific analyses of academic publications, which is crucial for monitoring research trends and identifying potential innovations. The framework adopts and combines various natural language processing techniques, such as word embeddings and topic modeling. Word embeddings are used to capture the semantic meanings of domain-specific words. We propose two novel scientific-publication embeddings, PUB-G and PUB-W, which are capable of learning the semantic meanings of general as well as domain-specific words in various research fields. Thereafter, topic modeling is used to identify clusters of research topics within these broader research fields. We curated a publication dataset consisting of two conferences and two journals, spanning 1995 to 2020, from two research domains. Experimental results show that our PUB-G and PUB-W embeddings are superior to other baseline embeddings in terms of topic coherence.