Document summarization provides an instrument for quickly understanding a collection of text documents and has several real-life applications. With the growth of online text data, numerous summarization models have been proposed recently. Sequence-to-sequence (Seq2Seq) based neural summarization models are the most widely used in the summarization field owing to their high performance, since both the semantic and the structural information of the text are adequately taken into account during encoding. However, existing extractive summarization models pay little attention to, and make little use of, central topic information to assist summary generation, so these models cannot ensure that the generated summaries stay on the primary topics. A lengthy document can span multiple topics, and a single summary cannot do justice to all of them. Therefore, the key to generating a high-quality summary is to identify a document's central topics and build the summary around them, especially for long documents. We propose a topic-aware encoding scheme for document summarization to deal with this issue. The model effectively combines syntactic-level and topic-level information to construct a comprehensive sentence representation. Specifically, a neural topic model is added to the neural-based sentence-level representation learning to adequately consider the central topic information for capturing the critical content of the original document. Experimental results on three public datasets show that our model outperforms state-of-the-art models.
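As an illustration of the fusion step described above, here is a minimal sketch in PyTorch, assuming `sent_vecs` come from a sentence encoder and `topic_dist` from a trained neural topic model (both stand-ins; the gating design is illustrative rather than the paper's exact architecture):

```python
import torch
import torch.nn as nn

class TopicAwareSentenceEncoder(nn.Module):
    """Fuse sentence-level vectors with a document topic vector (illustrative)."""

    def __init__(self, sent_dim: int, num_topics: int):
        super().__init__()
        self.topic_proj = nn.Linear(num_topics, sent_dim)  # map topic distribution into sentence space
        self.gate = nn.Linear(2 * sent_dim, sent_dim)      # gate controlling how much topic info flows in

    def forward(self, sent_vecs: torch.Tensor, topic_dist: torch.Tensor) -> torch.Tensor:
        # sent_vecs: (num_sentences, sent_dim), topic_dist: (num_topics,)
        topic_vec = self.topic_proj(topic_dist)                   # (sent_dim,)
        topic_vecs = topic_vec.unsqueeze(0).expand_as(sent_vecs)  # broadcast to every sentence
        g = torch.sigmoid(self.gate(torch.cat([sent_vecs, topic_vecs], dim=-1)))
        return g * sent_vecs + (1 - g) * topic_vecs               # topic-aware sentence representation
```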
Document summarization condenses a long document into a short version that retains the salient information and accurately describes its semantics. The main issue is how to make the output summary semantically consistent with the input document. To reach this goal, researchers have recently focused on supervised end-to-end hybrid approaches, which contain an extractor module and an abstractor module. Among them, the extractor identifies the salient sentences from the input document, and the abstractor generates a summary from those salient sentences. Such models successfully keep the generated summary consistent with the reference summary through various strategies (e.g., reinforcement learning). There are two semantic gaps when training hybrid models (one between the document and the extracted sentences, and another between the extracted sentences and the summary). However, they are not explicitly considered in existing methods, which usually causes semantic bias in the summaries. To mitigate the above issue, this paper presents a new Reinforced Semantic-Symmetry learning Model for document summarization (ReSyM). ReSyM introduces a semantic-consistency reward in the extractor to bridge the first gap, and a semantic dual reward is designed to bridge the second gap in the abstractor. The whole document summarization process is implemented via reinforcement learning with a hybrid reward mechanism (combining the above two rewards). Moreover, a comprehensive sentence representation learning method is presented to fully capture the information from the original document. A series of experiments have been conducted on two widely used benchmark datasets, CNN/Daily Mail and BigPatent. The results demonstrate the superiority of ReSyM over state-of-the-art baselines in terms of various evaluation metrics.
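A toy sketch of the hybrid reward idea, assuming some embedding function has already produced fixed-size semantic vectors for the document, the extracted sentences, and the generated summary (the paper's actual reward design is more involved):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def hybrid_reward(doc_vec, extracted_vec, summary_vec, alpha=0.5):
    """Combine the two semantic-gap rewards into one scalar (illustrative weighting)."""
    extractor_reward = cosine(doc_vec, extracted_vec)       # gap 1: document vs. extracted sentences
    abstractor_reward = cosine(extracted_vec, summary_vec)  # gap 2: extracted sentences vs. summary
    return alpha * extractor_reward + (1 - alpha) * abstractor_reward
```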
As a fundamental task of natural language generation, document summarization aims to produce a short and coherent summary for a given document. Controllable summarization, especially of length, is an important issue for some practical applications; in particular, how to trade off the length constraint against information integrity. In this paper, we propose an Adaptive Length Controlling Optimization (ALCO) method that leverages a two-stage abstractive summarization model via reinforcement learning. ALCO incorporates the length constraint into the sentence extraction stage to penalize over-length extracted sentences. Meanwhile, a saliency estimation mechanism is designed to preserve the salient information in the generated sentences. A series of experiments have been conducted on the commonly used benchmark dataset CNN/Daily Mail. The results show that ALCO performs better than popular baselines in terms of length controllability and content preservation.
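A toy sketch of how a length penalty might enter the extraction-stage reward; `salience_score` and `budget` are hypothetical stand-ins for the paper's saliency estimation and length constraint:

```python
def length_controlled_reward(salience_score: float,
                             extracted_len: int,
                             budget: int,
                             penalty_weight: float = 0.1) -> float:
    """Reward salient extractions but penalize exceeding the length budget (illustrative)."""
    overflow = max(0, extracted_len - budget)  # tokens beyond the allowed length
    return salience_score - penalty_weight * overflow

# e.g. a salient but over-long extraction is penalized:
# length_controlled_reward(0.9, extracted_len=120, budget=100) -> 0.9 - 0.1 * 20 = -1.1
```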
We introduce extreme summarization, a new single-document summarization task which does not favor extractive strategies and calls for an abstractive modeling approach. The idea is to create a short, one-sentence news summary answering the question "What is the article about?". We collect a real-world, large scale dataset for this task by harvesting online articles from the British Broadcasting Corporation (BBC). We propose a novel abstractive model which is conditioned on the article's topics and based entirely on convolutional neural networks. We demonstrate experimentally that this architecture captures long-range dependencies in a document and recognizes pertinent content, outperforming an oracle extractive system and state-of-the-art abstractive approaches when evaluated automatically and by humans.
Nowadays, time-stamped web documents related to general news queries flood the Internet, and timeline summarization targets concisely summarizing the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper, we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose to extract the feature of event-level attention in its generation process, with sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist in extractive summarization, where the extracted summary also comes in time sequence. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
Recently, neural topic models (NTMs) have been incorporated into pre-trained language models (PLMs) to capture global semantic information for text summarization. However, in these methods, there remain limitations in how global semantic information is captured and integrated. In this paper, we propose a novel model, the graph contrastive topic enhanced language model (GRETEL), which incorporates a graph contrastive topic model with a pre-trained language model to fully leverage both global and local contextual semantics for long document extractive summarization. To better capture and incorporate global semantic information into PLMs, the graph contrastive topic model integrates a hierarchical transformer encoder and graph contrastive learning to fuse semantic information from the global document context and the gold summary. In this way, GRETEL encourages the model to efficiently extract salient sentences that are related to the gold summary, rather than redundant sentences that cover sub-optimal topics. Experimental results on general-domain and biomedical datasets demonstrate that our proposed method outperforms SOTA methods.
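The contrastive component here pulls document-context representations toward those of the gold summary. A minimal InfoNCE-style sketch, assuming `doc_vecs` and `summary_vecs` are paired batch representations; GRETEL's graph construction and hierarchical encoder are not reproduced:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(doc_vecs: torch.Tensor, summary_vecs: torch.Tensor, tau: float = 0.1):
    """InfoNCE: each document representation should match its own gold-summary representation."""
    doc_vecs = F.normalize(doc_vecs, dim=-1)
    summary_vecs = F.normalize(summary_vecs, dim=-1)
    logits = doc_vecs @ summary_vecs.t() / tau    # (batch, batch) similarity matrix
    targets = torch.arange(doc_vecs.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, targets)
```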
One key challenge in multi-document summarization (MDS), and one that distinguishes it from single-document summarization (SDS), is capturing the relations among the input documents. Few existing MDS works address this issue. One effective way is to encode document positional information to help the model capture cross-document relations. However, existing MDS models (such as Transformer-based models) only consider token-level positional information. Moreover, these models fail to capture the linguistic structure of sentences, which inevitably causes confusion in the generated summaries. Therefore, in this paper we propose document-aware positional encoding and linguistic-guided encoding, which can be fused with the Transformer architecture for MDS. For document-aware positional encoding, we introduce a general protocol to guide the selection of document positional encoding functions. For linguistic-guided encoding, we propose embedding syntactic dependency relations into a dependency-relation mask, with a simple but effective non-linear encoding learner for feature learning. Extensive experiments show that the proposed model can generate summaries of high quality.
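A minimal sketch of the document-aware idea, under the assumption that the document-level term reuses the standard sinusoidal form; the paper's general protocol for choosing the encoding function is not reproduced here:

```python
import numpy as np

def sinusoidal(pos: int, dim: int) -> np.ndarray:
    """Standard Transformer sinusoidal encoding for a single position."""
    enc = np.zeros(dim)
    for i in range(0, dim, 2):
        angle = pos / (10000 ** (i / dim))
        enc[i] = np.sin(angle)
        if i + 1 < dim:
            enc[i + 1] = np.cos(angle)
    return enc

def document_aware_encoding(token_pos: int, doc_index: int, dim: int) -> np.ndarray:
    # Sum a token-level and a document-level sinusoidal term, so each token carries
    # both its position within a document and which document it came from.
    return sinusoidal(token_pos, dim) + sinusoidal(doc_index, dim)
```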
Multi-document summarization (MDS) is an effective tool for information aggregation that generates an informative and concise summary from a cluster of topic-related documents. Our survey is the first to provide a systematic overview of recent deep-learning-based MDS models. We propose a novel taxonomy summarizing the design strategies of neural networks and conduct a comprehensive summary of the state of the art. We highlight the differences between various objective functions that are rarely discussed in the existing literature. Finally, we propose several future directions pertaining to this new and exciting field.
In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling key-words, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time. Our work shows that many of our proposed models contribute to further improvement in performance. We also propose a new dataset consisting of multi-sentence summaries, and establish performance benchmarks for further research.
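The basic building block in such models is attentional decoding over encoder states. A minimal additive (Bahdanau-style) attention sketch; the paper's specific extensions, such as the switching pointer for rare words, are omitted:

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Bahdanau-style attention over encoder hidden states (the basic mechanism)."""

    def __init__(self, enc_dim: int, dec_dim: int, attn_dim: int):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim)
        self.w_dec = nn.Linear(dec_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, enc_states: torch.Tensor, dec_state: torch.Tensor):
        # enc_states: (src_len, enc_dim), dec_state: (dec_dim,)
        scores = self.v(torch.tanh(self.w_enc(enc_states) + self.w_dec(dec_state))).squeeze(-1)
        weights = torch.softmax(scores, dim=0)                      # attention over source positions
        context = (weights.unsqueeze(-1) * enc_states).sum(dim=0)   # weighted context vector
        return context, weights
```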
Bidirectional Encoder Representations from Transformers (BERT; Devlin et al. 2019) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings.
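A minimal sketch of the extractive head described above, assuming per-sentence [CLS] vectors have already been gathered from the BERT encoder; layer counts and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class InterSentenceScorer(nn.Module):
    """Stack Transformer layers over per-sentence [CLS] vectors, then score each sentence."""

    def __init__(self, hidden: int = 768, layers: int = 2, heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.classifier = nn.Linear(hidden, 1)

    def forward(self, cls_vecs: torch.Tensor) -> torch.Tensor:
        # cls_vecs: (batch, num_sentences, hidden) -- one BERT [CLS] vector per sentence
        h = self.encoder(cls_vecs)                             # let sentences attend to each other
        return torch.sigmoid(self.classifier(h)).squeeze(-1)   # extraction probability per sentence
```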
Celebrity endorsement is one of the most important strategies in brand communication. Nowadays, more and more companies try to build a vivid persona for themselves, so their brand identity communication should conform to certain characteristics of humans and regulations. However, previous works mostly stopped at assumptions rather than proposing a specific way of matching brands with celebrities. In this paper, we propose a brand-celebrity matching model (BCM) based on natural language processing (NLP) techniques. Given a brand and a celebrity, we first obtain descriptive documents about them from the Internet, then summarize these documents, and finally calculate the degree of matching between the brand and the celebrity to determine whether they match. According to the experimental results, our proposed model outperforms the best baselines by 0.362 in F1 score and 6.3% in accuracy, which indicates the effectiveness and application value of our model in the real world. Moreover, to the best of our knowledge, the proposed BCM model is the first work to use NLP to solve the endorsement problem, so it can provide some novel research ideas and methodologies for subsequent works.
Long documents such as academic articles and business reports have been the standard format for detailing important issues and complicated subjects that require extra attention. An automatic summarization system that can effectively condense long documents into short and concise texts encapsulating the most important information would thus be significant in aiding reader comprehension. Recently, with the advent of neural architectures, significant research efforts have been made to advance automatic text summarization systems, along with a large body of research on the challenges of extending these systems to the long document domain. In this survey, we provide a comprehensive overview of the research on long document summarization, together with a systematic evaluation of the three principal components of its research setting: benchmark datasets, summarization models, and evaluation metrics. For each component, we organize the literature within the context of long document summarization and conduct an empirical analysis to broaden the perspective on current research progress. The empirical analysis includes a study of the intrinsic characteristics of benchmark datasets, a multi-dimensional analysis of summarization models, and a review of summarization evaluation metrics. Based on the overall findings, we conclude by proposing possible directions for future exploration in this rapidly growing field.
(Source) code summarization aims to automatically generate summaries/comments in natural language for a given code snippet. Such summaries play a key role in helping developers understand and maintain source code. Existing code summarization techniques can be categorized into extractive methods and abstractive methods. Extractive methods use retrieval techniques to extract a subset of important statements and keywords from the code snippet, and generate a summary that preserves the factual details in those important statements and keywords. However, such a subset may miss identifier or entity naming, and consequently, the naturalness of the generated summaries is usually poor. Abstractive methods can generate human-written-like summaries by leveraging encoder-decoder models from the neural machine translation domain. Nevertheless, the generated summaries often miss important factual details. To generate human-written-like summaries with preserved factual details, we propose a novel extractive-and-abstractive framework. The extractive module in the framework performs the task of extractive code summarization, which takes in the code snippet and predicts the important statements containing key factual details. The abstractive module in the framework performs the task of abstractive code summarization, which takes in the entire code snippet and the important statements in parallel, and generates a succinct and human-written-like natural language summary. We evaluate the effectiveness of our technique, called EACS, by conducting extensive experiments on three datasets involving six programming languages. Experimental results show that EACS significantly outperforms state-of-the-art techniques in terms of all three widely used metrics, including BLEU, METEOR, and ROUGE-L.
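A schematic sketch of the extract-then-abstract pipeline; `extractive_model` and `abstractive_model` are hypothetical placeholders for the trained EACS modules, and the interface shown is not the paper's actual API:

```python
def eacs_summarize(code_snippet: str, extractive_model, abstractive_model) -> str:
    """Extract-then-abstract pipeline: keep factual statements, then phrase naturally."""
    important_statements = extractive_model.predict(code_snippet)  # statements with key factual details
    # The abstractor conditions on both the whole snippet and the extracted statements in parallel.
    return abstractive_model.generate(code=code_snippet, focus=important_statements)
```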
Long document summarization is an important and difficult task in the field of natural language processing. Good performance on long document summarization reveals a model's understanding of human language. Currently, most research focuses on how to modify the attention mechanism of the Transformer to achieve higher ROUGE scores; the study of data pre-processing and post-processing is relatively rare. In this paper, we use two pre-processing methods and a post-processing method, and analyze the effect of these methods on various long document summarization models.
Text summarization is recognised as one of the NLP downstream tasks and it has been extensively investigated in recent years. It can assist people with perceiving information rapidly from the Internet, including news articles, social posts, videos, etc. However, limitations of most existing models have emerged, including unfaithfulness and factual errors, even as most existing research works attempt to develop summarization models that produce better output. In this paper, we propose a novel model, named Knowledge-aware Abstractive Text Summarization, which leverages the advantages offered by Knowledge Graphs to enhance the standard Seq2Seq model. On top of that, Knowledge Graph triplets are extracted from the source text and utilised to provide keywords with relational information, producing coherent and factually errorless summaries. We conduct extensive experiments using real-world data sets. The results reveal that the proposed framework can effectively utilise the information from the Knowledge Graph and significantly reduce the factual errors in the summary.
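A schematic sketch of injecting KG triplets as relational keywords into the model input; `extract_triplets` stands in for an off-the-shelf open information extraction step, and the input format shown is hypothetical, not the paper's exact scheme:

```python
def knowledge_aware_input(source_text: str, extract_triplets) -> str:
    """Prepend flattened (head, relation, tail) triplets so the Seq2Seq model sees relational keywords."""
    triplets = extract_triplets(source_text)  # e.g. [("BERT", "developed by", "Google"), ...]
    kg_hints = " ; ".join(f"{h} {r} {t}" for h, r, t in triplets)
    return kg_hints + " [SEP] " + source_text  # hypothetical input format
```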
Automatic summarization of legal texts is an important and still challenging task, since legal documents are often long and have unusual structure and style. Recent advances in deep models have enabled training end-to-end summarization of natural text with differentiable losses, but these models show limited results when applied to the legal domain. In this paper, we propose to use reinforcement learning to train current deep summarization models in order to improve their performance in the legal domain. To this end, we adopt the proximal policy optimization method and introduce novel reward functions that encourage the generation of candidate summaries satisfying both lexical and semantic criteria. We apply our method to training different summarization backbones and observe consistent and significant performance gains across three public legal datasets.
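A toy sketch of a reward combining a lexical criterion (ROUGE-L-style LCS recall) with a semantic one (embedding cosine); the vectors are assumed to come from some sentence embedder, and the paper's actual reward functions differ:

```python
import numpy as np

def lcs_len(a: list, b: list) -> int:
    """Longest common subsequence length (the core of ROUGE-L)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def summary_reward(candidate: str, reference: str, cand_vec, ref_vec, w: float = 0.5) -> float:
    cand_toks, ref_toks = candidate.split(), reference.split()
    lexical = lcs_len(cand_toks, ref_toks) / max(len(ref_toks), 1)  # ROUGE-L-style recall
    semantic = float(np.dot(cand_vec, ref_vec) /
                     (np.linalg.norm(cand_vec) * np.linalg.norm(ref_vec) + 1e-8))
    return w * lexical + (1 - w) * semantic
```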
Text summarization is a user-preference based task, i.e., for one document, users often have different priorities for summary. As a key aspect of customization in summarization, granularity is used to measure the semantic coverage between the summary and source document. However, developing systems that can generate summaries with customizable semantic coverage is still an under-explored topic. In this paper, we propose the first unsupervised multi-granularity summarization framework, GranuSum. We take events as the basic semantic units of the source documents and propose to rank these events by their salience. We also develop a model to summarize input documents with given events as anchors and hints. By inputting different numbers of events, GranuSum is capable of producing multi-granular summaries in an unsupervised manner. Meanwhile, we annotate a new benchmark GranuDUC that contains multiple summaries at different granularities for each document cluster. Experimental results confirm the substantial superiority of GranuSum on multi-granularity summarization over strong baselines. Further, by exploiting the event information, GranuSum also exhibits state-of-the-art performance under the conventional unsupervised abstractive setting. Dataset for this paper can be found at: https://github.com/maszhongming/GranuDUC
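A schematic sketch of granularity control by event count; `rank_events_by_salience` and `summarize_with_anchors` are hypothetical stand-ins for GranuSum's components:

```python
def multi_granularity_summaries(documents, rank_events_by_salience, summarize_with_anchors,
                                granularities=(1, 3, 5)):
    """Produce coarse-to-fine summaries by varying how many salient events anchor the summarizer."""
    events = rank_events_by_salience(documents)  # most salient events first
    return {k: summarize_with_anchors(documents, events[:k]) for k in granularities}
```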
Academic research is an exploratory activity for solving problems that have never been resolved before. By this nature, each academic research work is required to perform a literature review to distinguish its novelties from what has already been addressed by prior works. In natural language processing, this literature review is usually conducted under the "Related Work" section. Given the rest of a research paper and a list of cited papers, the task of automatic related work generation aims to automatically generate the "Related Work" section. Although this task was proposed over 10 years ago, it has only recently been regarded as a variant of the scientific multi-document summarization problem. However, even today, the problems of automatic related work generation and citation text generation have not been standardized. In this survey, we conduct a meta-study comparing the existing literature on related work generation from the perspectives of problem formulation, dataset collection, methodological approaches, performance evaluation, and future prospects, in order to give readers insight into the progress of state-of-the-art studies and how future research can be conducted. We also survey relevant research fields that we suggest future work consider integrating.
Document summarization aims to create a precise and coherent summary of a text document. Many deep learning summarization models are developed mainly for English, often requiring a large training corpus and efficient pre-trained language models and tools. However, summarization models for low-resource Indian languages are often limited by rich morphological variation, syntax, and semantic differences. In this paper, we propose GAE-ISumm, an unsupervised Indic summarization model that extracts summaries from text documents. In particular, our proposed model, GAE-ISumm, uses a Graph Autoencoder (GAE) to learn text representations and a document summary jointly. We also provide a manually-annotated Telugu summarization dataset, TELSUM, to experiment with our model GAE-ISumm. Further, we experiment with the most publicly available Indian language summarization datasets to investigate the effectiveness of GAE-ISumm on other Indian languages. Our experiments with GAE-ISumm in seven languages make the following observations: (i) it is competitive or better than state-of-the-art results on all datasets, (ii) it reports benchmark results on TELSUM, and (iii) the inclusion of positional and cluster information in the proposed model improved the performance of summaries.
Wikipedia is an important free source of intelligible knowledge. Despite that, the Brazilian Portuguese Wikipedia still lacks descriptions for many subjects. In an effort to expand the Brazilian Wikipedia, we contribute PLSum, a framework for generating wiki-like abstractive summaries from multiple descriptive websites. The framework has an extractive stage followed by an abstractive one. In particular, for the abstractive stage, we fine-tune and compare two recent variations of the Transformer neural network, PTT5 and Longformer. To fine-tune and evaluate the models, we created a dataset with thousands of examples, linking reference websites to Wikipedia. Our results show that it is possible to generate meaningful abstractive summaries from Brazilian Portuguese web content.