In the past few decades, there has been an explosion in the amount of data produced by a variety of sources on a variety of topics. The availability of this enormous amount of data calls for effective computational tools to explore it, which has led to intense and growing interest in the research community in developing methods for processing text data. One line of study focuses on condensing text so that a higher-level understanding can be reached in less time. Two important tasks toward this goal are keyword extraction and text summarization. In keyword extraction, we are interested in finding the key words of a text, which convey its general topic. In text summarization, we are interested in producing a short text that contains the important information of the document. The TextRank algorithm, an unsupervised method extending PageRank (the algorithm at the core of Google's original search engine for ranking pages), has shown its efficacy in large-scale text mining, especially for text summarization and keyword extraction: it automatically extracts the important parts of a text (keywords or sentences) and returns them as the result. However, this algorithm neglects the semantic similarity between the different parts. In this work, we improve the results of the TextRank algorithm by incorporating the semantic similarity between parts of the text. Aside from keyword extraction and text summarization, we develop a topic clustering algorithm based on our framework, which can be used on its own or as part of summary generation to overcome coverage problems.
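As a rough illustration of this idea, the sketch below ranks sentences with PageRank over a graph whose edges are cosine similarities between sentence embeddings. The encoder producing the vectors and the cosine-weighted graph are assumptions for illustration, not the exact construction used in this work.

```python
# A minimal sketch of embedding-based TextRank for sentence extraction,
# assuming precomputed sentence vectors from any encoder (an assumption,
# not this work's exact construction).
import numpy as np
import networkx as nx

def summarize(sentences, vectors, k=3):
    """Rank sentences by PageRank over a semantic-similarity graph."""
    vecs = np.asarray(vectors, dtype=float)
    unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sim = np.clip(unit @ unit.T, 0.0, None)   # non-negative cosine similarities
    np.fill_diagonal(sim, 0.0)                # no self-loops
    graph = nx.from_numpy_array(sim)          # edge weights = similarity
    scores = nx.pagerank(graph, weight="weight")
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]  # keep document order
```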
In this paper, we introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications. In particular, we propose two innovative unsupervised methods for keyword and sentence extraction, and show that the results obtained compare favorably with previously published results on established benchmarks.
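The following is a minimal sketch of TextRank-style keyword extraction: words become graph nodes, co-occurrence within a sliding window adds edges, and PageRank scores the nodes. The window size and the absence of part-of-speech filtering are simplifications, not the paper's exact settings.

```python
# A small sketch of TextRank-style keyword extraction; in the paper,
# candidates are typically filtered by part of speech first (omitted here).
import networkx as nx

def extract_keywords(tokens, window=4, top_n=10):
    """Score words with PageRank over a co-occurrence graph."""
    graph = nx.Graph()
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window, len(tokens))):
            if tokens[i] != tokens[j]:
                graph.add_edge(tokens[i], tokens[j])
    scores = nx.pagerank(graph)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

tokens = "graph based ranking model for keyword extraction from text".split()
print(extract_keywords(tokens, top_n=3))
```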
We address the problem of unsupervised extractive document summarization, especially for long documents. We model the unsupervised problem as a sparse auto-regression problem and approximate the resulting combinatorial problem via a convex, norm-constrained problem. We solve it using a dedicated Frank-Wolfe algorithm. To generate a summary with $k$ sentences, the algorithm only needs to perform $\mathcal{O}(k)$ iterations, making it extremely efficient. We explain how to avoid explicitly computing the full gradient and how to include sentence embedding information. We evaluate our approach against two other unsupervised methods using both lexical (standard) ROUGE scores and semantic (embedding-based) ones. Our method achieves better results on the two datasets and is especially effective when combined with embeddings for highly paraphrased summaries.
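A hedged sketch of the Frank-Wolfe idea follows: on an L1 ball, each linear-minimization step activates at most one coordinate, so roughly $k$ iterations suffice to select $k$ sentences. The surrogate objective (reconstructing the document centroid from sentence embeddings) is an illustrative stand-in, not the paper's exact sparse auto-regressive formulation.

```python
# Illustrative Frank-Wolfe over an L1 ball: each step touches at most one
# coordinate, so ~k iterations yield at most k selected sentences. The
# centroid-reconstruction objective is an assumption for this sketch.
import numpy as np

def frank_wolfe_select(X, k, radius=1.0):
    """X: (n_sentences, dim) embeddings; returns sparse selection weights."""
    n = X.shape[0]
    w = np.zeros(n)                      # reconstruction weight per sentence
    target = X.mean(axis=0)              # document centroid to reconstruct
    for t in range(k):
        grad = X @ (X.T @ w - target)    # gradient of 0.5*||X^T w - target||^2
        i = int(np.argmax(np.abs(grad))) # L1-ball vertex touches one coordinate
        s = np.zeros(n)
        s[i] = -radius * np.sign(grad[i])
        gamma = 2.0 / (t + 2.0)          # standard Frank-Wolfe step size
        w = (1 - gamma) * w + gamma * s  # at most one new nonzero per round
    return w                             # nonzero entries mark chosen sentences
```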
In the scenario of unsupervised extractive summarization, learning high-quality sentence representations is essential for selecting salient sentences from the input document. Previous studies focus more on employing statistical approaches or pre-trained language models (PLMs) to extract sentence embeddings, while ignoring the rich information inherent in the heterogeneous types of interaction between words and sentences. In this paper, we are the first to propose an unsupervised extractive summarization method with heterogeneous graph embeddings (HGEs) for Chinese documents. A heterogeneous text graph is constructed to capture different granularities of interaction by incorporating graph structural information. Moreover, our proposed graph is general and flexible, as additional nodes such as keywords can be easily integrated. Experimental results demonstrate that our method consistently outperforms the strong baseline on three summarization datasets.
Keyword extraction is the task of finding a few interesting phrases in a text document, providing a list of the main topics of the document. Most existing graph-based models use co-occurrence links as cohesion indicators to model the relationships of syntactic elements. However, a word may have different forms of expression within a document, and it may also have several synonyms. Simply using co-occurrence information cannot capture this. In this paper, we enhance a graph-based ranking model by leveraging word embeddings as background knowledge to add semantic information to the word graph. Our approach is evaluated on established benchmark datasets, and the empirical results show that word-embedding neighborhood information improves model performance.
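A small sketch of this kind of embedding-augmented word graph: edge weights blend co-occurrence counts with embedding cosine similarity before PageRank ranks candidates. The blending coefficient alpha and the toy random embeddings are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch: word-graph edges blend co-occurrence counts with embedding
# cosine similarity; alpha and the random toy embeddings are assumptions.
import numpy as np
import networkx as nx

def build_word_graph(tokens, embeddings, window=3, alpha=0.5):
    """Edges blend co-occurrence counts with embedding cosine similarity."""
    graph = nx.Graph()
    for i, u in enumerate(tokens):
        for v in tokens[i + 1 : i + window]:
            if u == v:
                continue
            cooc = graph.get_edge_data(u, v, {"cooc": 0})["cooc"] + 1
            eu, ev = embeddings[u], embeddings[v]
            sim = float(eu @ ev / (np.linalg.norm(eu) * np.linalg.norm(ev)))
            graph.add_edge(u, v, cooc=cooc, weight=alpha * cooc + (1 - alpha) * sim)
    return graph

rng = np.random.default_rng(0)
tokens = "ranking graph model graph ranking semantics".split()
emb = {w: rng.random(8) for w in set(tokens)}   # toy embeddings
scores = nx.pagerank(build_word_graph(tokens, emb), weight="weight")
print(sorted(scores, key=scores.get, reverse=True)[:3])
```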
The relationship between words in a sentence often tells us more about the underlying semantic content of a document than its actual words, individually. In this work, we propose two novel algorithms, called Flexible Lexical Chain II and Fixed Lexical Chain II. These algorithms combine the semantic relations derived from lexical chains, prior knowledge from lexical databases, and the robustness of the distributional hypothesis in word embeddings as building blocks forming a single system. In short, our approach has three main contributions: (i) a set of techniques that fully integrate word embeddings and lexical chains; (ii) a more robust semantic representation that considers the latent relation between words in a document; and (iii) lightweight word embedding models that can be extended to any natural language task. We assess the knowledge of pre-trained models to evaluate their robustness in the document classification task. The proposed techniques are tested against seven word embedding algorithms using five different machine learning classifiers over six scenarios in the document classification task. Our results show that the integration of lexical chains and word embedding representations sustains state-of-the-art results, even against more complex systems.
With the advent and popularity of big data mining and modern massive text analytics, automatic text summarization has become prominent for extracting and retrieving important information from documents. This study investigates aspects of automatic text summarization from the perspectives of single and multiple documents. Summarization is the task of condensing voluminous text articles into short, summarized versions; the text is reduced in size while key informational elements and the meaning of the original document are preserved. This study introduces the Latent Dirichlet Allocation (LDA) approach for topic modeling over medical science journal articles on topics related to genes and diseases. The pyLDAvis web-based interactive visualization tool was used to visualize the selected topics; the visualization provides an overarching view of the main topics while allowing depth of meaning to be attributed to the prevalence of individual topics. This study presents a novel approach to summarizing single and multiple documents. The results suggest that ranking document sentences by their topic-prevalence probabilities in the processed documents, using an extractive summarization technique, outperforms ranking purely by their terms. The pyLDAvis visualization demonstrates the flexibility of exploring the terms of the topics fitted by the LDA model. The topic modeling results show prevalence within topics 1 and 2, an association indicating a similarity between the terms of topics 1 and 2 in this study. The efficacy of the LDA and extractive summarization methods was measured using Latent Semantic Analysis (LSA) and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) to assess the reliability and validity of the models.
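A minimal gensim sketch in the spirit of this pipeline: fit an LDA model, then read off a sentence's topic distribution as a prevalence-based score. The toy corpus, topic count, and scoring rule are illustrative assumptions, not the study's exact setup.

```python
# Minimal LDA topic-modeling sketch with gensim; corpus, topic count, and
# the topic-prevalence scoring below are illustrative assumptions.
from gensim import corpora, models

docs = [["gene", "expression", "disease", "pathway"],
        ["protein", "disease", "mutation", "gene"],
        ["summary", "document", "sentence", "topic"]]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)

# Score a sentence by the prevalence of its topics in the fitted model.
bow = dictionary.doc2bow(["gene", "disease"])
print(lda.get_document_topics(bow))   # e.g. [(0, 0.8...), (1, 0.1...)]

# Visualization (assumes pyLDAvis is installed):
# import pyLDAvis.gensim_models
# pyLDAvis.gensim_models.prepare(lda, corpus, dictionary)
```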
Natural Language Understanding has seen an increasing number of publications in the last few years, especially after robust word embeddings models became prominent, when they proved themselves able to capture and represent semantic relationships from massive amounts of data. Nevertheless, traditional models often fall short on intrinsic issues of linguistics, such as polysemy and homonymy. Any expert system that makes use of natural language at its core can be affected by a weak semantic representation of text, resulting in inaccurate outcomes based on poor decisions. To mitigate such issues, we propose a novel approach called Most Suitable Sense Annotation (MSSA) that disambiguates and annotates each word by its specific sense, considering the semantic effects of its context. Our approach brings three main contributions to the semantic representation scenario: (i) an unsupervised technique that disambiguates and annotates words by their senses, (ii) a multi-sense embeddings model that can be extended to any traditional word embeddings algorithm, and (iii) a recurrent methodology that allows our models to be re-used and their representations refined. We test our approach on six different benchmarks for the word similarity task, showing that it produces state-of-the-art results and outperforms several more complex state-of-the-art systems.
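MSSA's exact procedure is not reproduced here; as a hedged stand-in, the sketch below annotates each word with a WordNet sense using NLTK's classic Lesk algorithm, which likewise chooses a sense from the word's context.

```python
# Sense-annotation sketch using NLTK's Lesk algorithm as a stand-in for
# MSSA's disambiguation step (an assumption, not the paper's method).
# Requires: nltk.download("wordnet"), nltk.download("punkt")
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

def annotate_senses(sentence):
    """Tag each token with a WordNet sense chosen from its context."""
    tokens = word_tokenize(sentence)
    annotated = []
    for tok in tokens:
        synset = lesk(tokens, tok)          # context-driven sense choice
        annotated.append((tok, synset.name() if synset else None))
    return annotated

print(annotate_senses("The bank raised interest rates"))
# e.g. [('bank', 'bank.n.07'), ('interest', 'interest.n.01'), ...]; senses vary
```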
Measuring the semantic similarity of different texts has many important applications in digital humanities research, such as information retrieval, document clustering, and text summarization. The performance of different methods depends on the length of the text, the domain, and the language. This study focuses on experimenting with some current approaches for Finnish, a morphologically rich language. At the same time, we propose a simple method, TFW2V, which shows high efficiency in handling both long text documents and limited amounts of data. Furthermore, we design an objective evaluation method that can be used as a framework for benchmarking text similarity approaches.
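The abstract does not spell out TFW2V; one plausible reading of the name, sketched below under that assumption, is TF-IDF-weighted averaging of Word2Vec vectors followed by cosine similarity between document vectors.

```python
# Hedged sketch: TF-IDF-weighted Word2Vec document similarity. This is one
# plausible reading of "TFW2V", not the paper's confirmed formulation.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from gensim.models import Word2Vec

docs = ["old text about castles and kings", "modern text about phones"]
tokenized = [d.split() for d in docs]

w2v = Word2Vec(tokenized, vector_size=50, min_count=1, seed=0)
tfidf = TfidfVectorizer()
weights = tfidf.fit_transform(docs)
vocab = tfidf.get_feature_names_out()

def doc_vector(doc_idx):
    """Average word vectors weighted by each word's TF-IDF score."""
    row = weights[doc_idx].toarray().ravel()
    vecs = [row[i] * w2v.wv[w] for i, w in enumerate(vocab)
            if row[i] > 0 and w in w2v.wv]
    return np.mean(vecs, axis=0)

a, b = doc_vector(0), doc_vector(1)
print(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
```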
Modern multi-document summarization (MDS) methods are based on transformer architectures. They generate state-of-the-art summaries but lack explainability. We focus on graph-based transformer models for MDS, as they have gained recent popularity, and aim to improve the explainability of graph-based MDS by analyzing their attention weights. In a graph-based MDS model such as GraphSum, vertices represent the textual units, while the edges form a similarity graph over the units. We compare GraphSum's performance using different textual units, i.e., sentences versus paragraphs, on two news benchmark datasets, namely WikiSum and MultiNews. Our experiments show that paragraph-level representations provide the best summarization performance. Thus, we subsequently focus on analyzing the paragraph-level attention weights of GraphSum's multi-heads and decoding layers in order to improve the explainability of a transformer-based MDS model. As a reference metric, we calculate the ROUGE scores between the input paragraphs and each sentence in the generated summary, which indicate source origin information via text similarity. We observe a high correlation between the attention weights and this reference metric, especially on the later decoding layers of the transformer architecture. Finally, we investigate whether the generated summaries follow a pattern of positional bias by extracting which paragraph provided the most information for each generated summary. Our results show that there is a high correlation between the position in the summary and the source origin.
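A sketch of the reference metric described above: ROUGE-2 between each input paragraph and a generated sentence, correlated against per-paragraph attention weights. The attention values here are hypothetical stand-ins for weights read out of a model such as GraphSum.

```python
# Sketch of the source-origin reference metric; the attention vector is a
# hypothetical stand-in for decoder attention weights from a real model.
import numpy as np
from rouge_score import rouge_scorer
from scipy.stats import pearsonr

scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True)

def source_origin_scores(paragraphs, summary_sentence):
    """ROUGE-2 F1 of the generated sentence against each input paragraph."""
    return np.array([scorer.score(p, summary_sentence)["rouge2"].fmeasure
                     for p in paragraphs])

paragraphs = ["the storm hit the coast overnight causing heavy floods",
              "officials announced new funding for local schools",
              "residents were evacuated as rivers kept rising"]
rouge = source_origin_scores(paragraphs, "the storm caused floods on the coast")
attention = np.array([0.7, 0.1, 0.2])  # hypothetical per-paragraph attention
print(pearsonr(rouge, attention))      # high correlation: attention tracks origin
```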
Over the past decades, knowledge-aware methods have boosted a range of natural language processing applications. With the gathered momentum, knowledge has recently been infused into document summarization, one of those natural language processing applications. Previous works report that knowledge-infused document summarization excels at generating superior digests, especially with respect to informativeness, coherence, and fact consistency. This paper pursues the first systematic survey of state-of-the-art methodologies for embedding knowledge into document summarization. In particular, we propose novel taxonomies to recapitulate knowledge and knowledge embeddings under the document summarization view. We further explore how knowledge embeddings are incorporated into the learning architectures of document summarization models, especially deep learning models. Finally, we discuss the challenges of this topic and future directions.
Multi-document summarization (MDS) is an effective tool for information aggregation that generates an informative and concise summary from a cluster of topic-related documents. Our survey, the first of its kind, systematically overviews recent deep-learning-based MDS models. We propose a novel taxonomy summarizing the design strategies of neural networks and conduct a comprehensive summary of the state of the art. We highlight the differences among various objective functions, which are rarely discussed in the existing literature. Finally, we propose several directions pertaining to this new and exciting field.
Effectively exploring enormous amounts of data in order to make decisions, similar to answering complicated questions, is challenging in many real-world application scenarios. In this context, automatic summarization has significant importance, as it provides the foundation for big data analytics. Traditional summarization approaches optimize the system to produce a short static summary that fits all users without considering the subjectivity aspect of summarization, i.e., what is deemed valuable differs for different users, making these approaches impractical in real-world use cases. This paper proposes an interactive concept-based summarization model, called Adaptive Summaries, that helps users make their desired summary instead of producing a single inflexible summary. The system gradually learns from the information users provide while they interact with it through feedback in an iterative loop. Users can choose to reject or accept actions for including a concept in the summary, based on the importance of the concept from the user's perspective, together with a confidence interface for the feedback. The proposed approach guarantees interactive speed to keep the user engaged in the process. Furthermore, it eliminates the need for reference summaries, which is a challenging issue for summarization tasks. Evaluations show that Adaptive Summaries helps users make high-quality summaries based on their preferences by maximizing the user-desired content in the generated summaries.
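A hedged sketch of such an accept/reject feedback loop: each piece of user feedback, scaled by its confidence, reweights a concept, and sentences are re-scored accordingly. The additive update rule is an illustrative assumption, not the paper's exact objective.

```python
# Illustrative accept/reject concept-feedback loop; the additive,
# confidence-scaled update is an assumption, not the paper's exact rule.
def update_concept_weights(weights, concept, accepted, confidence):
    """Shift a concept's weight up or down by the feedback confidence."""
    delta = confidence if accepted else -confidence
    weights[concept] = max(0.0, weights.get(concept, 1.0) + delta)
    return weights

def score_sentence(sentence_concepts, weights):
    """Re-score a sentence as the sum of its concepts' current weights."""
    return sum(weights.get(c, 1.0) for c in sentence_concepts)

weights = {}
weights = update_concept_weights(weights, "budget", accepted=True, confidence=0.8)
weights = update_concept_weights(weights, "sports", accepted=False, confidence=0.5)
print(score_sentence({"budget", "policy"}, weights))  # budget boosted to 1.8
```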
Academic research is an exploratory activity that solves problems that have never been resolved before. By its nature, each academic research work is required to include a literature review to distinguish its novelties from those already addressed by prior works. In natural language processing, this literature review is usually conducted under the "Related Work" section. Given the rest of a research paper and a list of cited papers, the task of automatic related work generation aims to generate the "Related Work" section automatically. Although this task was proposed over ten years ago, it received little attention until very recently, when it was cast as a variant of the scientific multi-document summarization problem. However, even today, the problems of automatic related work generation and citation text generation are not yet standardized. In this survey, we conduct a meta-study comparing the existing literature on related work generation from the perspectives of problem formulation, dataset collection, methodological approaches, performance evaluation, and future prospects, to provide the reader with insight into the progress of state-of-the-art studies and how future research can be conducted. We also survey related research fields that we suggest future work consider integrating.
Nowadays, time-stamped web documents related to a general news query flood the Internet, and timeline summarization targets concisely summarizing the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper, we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose extracting the feature of event-level attention in its generation process, with sequential information retained, and using it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist in extracting the summary, where the extracted summary also comes in time sequence. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
Despite recent improvements in abstractive summarization, most current approaches generate summaries that are not factually consistent with the source document, severely restricting their trust and usage in real-world applications. Recent works have shown promising improvements in factuality error identification using text or dependency-arc entailment; however, they do not consider the entire semantic graph simultaneously. To this end, we propose FactGraph, a method that decomposes the document and the summary into structured meaning representations (MRs), which are more suitable for factuality evaluation. MRs describe core semantic concepts and their relations, aggregating the main content of the document and the summary in a canonical form and reducing data sparsity. FactGraph encodes such graphs using a graph encoder augmented with structure-aware adapters to capture interactions among the concepts based on the graph connectivity, along with text representations from an adapter-based text encoder. Experiments on different benchmarks for evaluating factuality show that FactGraph outperforms previous approaches by up to 15%. Furthermore, FactGraph improves performance on identifying content-verifiability errors and better captures subsentence-level factual inconsistencies.
Document summarization aims to create a precise and coherent summary of a text document. Many deep learning summarization models are developed mainly for English, often requiring a large training corpus and efficient pre-trained language models and tools. However, summarization models for low-resource Indian languages are often limited by rich morphological variation, syntax, and semantic differences. In this paper, we propose GAE-ISumm, an unsupervised Indic summarization model that extracts summaries from text documents. In particular, our proposed model, GAE-ISumm, uses a Graph Autoencoder (GAE) to learn text representations and a document summary jointly. We also provide a manually-annotated Telugu summarization dataset, TELSUM, to experiment with our model GAE-ISumm. Further, we experiment with the most publicly available Indian language summarization datasets to investigate the effectiveness of GAE-ISumm on other Indian languages. Our experiments with GAE-ISumm in seven languages yield the following observations: (i) it is competitive with or better than state-of-the-art results on all datasets, (ii) it reports benchmark results on TELSUM, and (iii) the inclusion of positional and cluster information in the proposed model improves the performance of summaries.
Long documents such as academic articles and business reports have been the standard format for detailing important issues and complicated subjects that require extra attention. An automatic summarization system that can effectively condense long documents into short and concise texts encapsulating the most important information would thus be significant in aiding reader comprehension. Recently, with the advent of neural architectures, significant research efforts have been made to advance automatic text summarization systems, and a large body of research has addressed the challenge of extending these systems to the long-document domain. In this survey, we provide a comprehensive overview of research on long-document summarization, together with a systematic evaluation of the three main components of its research setting: benchmark datasets, summarization models, and evaluation metrics. For each component, we organize the literature within the context of long-document summarization and conduct empirical analyses to broaden the perspective on current research progress. The empirical analyses include a study of the intrinsic characteristics of benchmark datasets, a multi-dimensional analysis of summarization models, and a review of summarization evaluation metrics. Based on the overall findings, we conclude by proposing possible directions for future exploration in this rapidly growing field.
As free online encyclopedias with massive volumes of content, Wikipedia and Wikidata are key to many natural language processing (NLP) tasks, such as information retrieval, knowledge base construction, machine translation, text classification, and text summarization. In this paper, we introduce WikiDes, a novel dataset for generating short descriptions of Wikipedia articles as a text summarization problem. The dataset consists of over 80k English samples covering 6987 topics. We set up a two-phase summarization method, description generation (Phase I) and candidate ranking (Phase II), as a strong approach that relies on transfer and contrastive learning. For description generation, T5 and BART show their superiority compared to other small-scale pre-trained models. By applying contrastive learning with the diverse input from beam search, the metric-based ranking models outperform the direct description generation models by up to 22 ROUGE on the topic-exclusive and topic-independent splits. Furthermore, the resulting descriptions in Phase II are supported by human evaluation at over 45.33%, against 23.66% for Phase I, when compared with the gold descriptions. In terms of sentiment analysis, the generated descriptions cannot effectively capture all sentiment polarities from the paragraphs, while the gold descriptions do this task better. The automatically generated descriptions reduce the human effort required to create them and enrich Wikidata-based knowledge graphs. Our paper shows a practical impact on Wikipedia and Wikidata, since there are thousands of missing descriptions. Finally, we expect WikiDes to be a useful dataset for related works in capturing salient information from short paragraphs. The curated dataset is publicly available at https://github.com/declare-lab/wikides.
Text summarization methods have always attracted much attention. In recent years, deep learning has been applied to text summarization, and the results have proven very effective. However, most deep-learning-based text summarization methods require large-scale datasets, which are difficult to obtain in practical applications. This paper proposes an unsupervised extractive text summarization method based on multi-round computation. Building on a directed-graph algorithm, we change the traditional approach of computing sentence rankings in a single pass to multi-round computation, where the summary sentences are dynamically optimized after each round to better match the characteristics of the text. Experiments were conducted on four datasets, collectively covering Chinese, English, long, and short texts. The experimental results show that our method performs better than baseline methods and other unsupervised methods, and is robust across different datasets.
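A hedged sketch of the multi-round idea: after each round the top-ranked sentence is taken and its influence removed from the graph before re-ranking, so later rounds adapt to what the summary already covers. The dampening scheme is an illustrative assumption, not the paper's exact update.

```python
# Multi-round extractive ranking sketch; zeroing a chosen sentence's edges
# between rounds is an illustrative assumption, not the paper's exact rule.
import numpy as np
import networkx as nx

def multi_round_summary(sim, k):
    """sim: (n, n) sentence-similarity matrix; returns k sentence indices."""
    sim = sim.copy()
    chosen = []
    for _ in range(k):
        graph = nx.from_numpy_array(sim, create_using=nx.DiGraph)
        scores = nx.pagerank(graph, weight="weight")
        for i in chosen:
            scores[i] = -1.0                # never re-pick a chosen sentence
        best = max(scores, key=scores.get)
        chosen.append(best)
        sim[best, :] = 0.0                  # drop its influence next round
        sim[:, best] = 0.0
    return sorted(chosen)
```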