Recently, neural topic models (NTMs) have been incorporated into pre-trained language models (PLMs) to capture global semantic information for text summarization. However, there remain limitations in the way these methods capture and integrate the global semantic information. In this paper, we propose a novel model, the Graph Contrastive Topic Enhanced Language Model (GRETEL), which combines a graph contrastive topic model with a pre-trained language model to fully leverage both global and local contextual semantics for long document extractive summarization. To better capture and incorporate global semantic information into the PLM, the graph contrastive topic model integrates a hierarchical transformer encoder and graph contrastive learning to fuse semantic information from the global document context and the gold summary. To this end, GRETEL encourages the model to efficiently extract salient sentences that are topically related to the gold summary, rather than redundant sentences covering sub-optimal topics. Experimental results on both general-domain and biomedical datasets demonstrate that our proposed method outperforms SOTA methods.
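As a rough illustration of the contrastive objective such a model might use, the sketch below aligns document-side and summary-side topic representations with an InfoNCE-style loss; the in-batch pairing of each document with its gold summary and the temperature value are assumptions for illustration, not details taken from GRETEL.

```python
# Minimal sketch: InfoNCE-style alignment of paired topic representations.
import torch
import torch.nn.functional as F

def info_nce(doc_topics, sum_topics, temperature=0.1):
    """doc_topics, sum_topics: (batch, dim) paired topic representations."""
    doc = F.normalize(doc_topics, dim=-1)
    summ = F.normalize(sum_topics, dim=-1)
    logits = doc @ summ.t() / temperature                     # (batch, batch) similarities
    targets = torch.arange(doc.size(0), device=doc.device)    # positives on the diagonal
    return F.cross_entropy(logits, targets)
```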
Document summarization provides an instrument for understanding a collection of text documents more quickly and has several real-life applications. With the growth of online text data, numerous summarization models have been proposed recently. Sequence-to-sequence (Seq2Seq) neural summarization models are the most widely used in the summarization field owing to their high performance, because both the semantic and structural information in the text is adequately considered during encoding. However, existing extractive summarization models pay little attention to, and make little use of, central topic information to assist summary generation, so they cannot ensure that the generated summary stays on the main topic. A lengthy document can span multiple topics, and a single summary cannot do justice to all of them. The key to generating a high-quality summary is therefore to determine the central topic and build the summary around it, especially for long documents. We propose a topic-aware encoding for document summarization to handle this issue. The model effectively combines syntactic-level and topic-level information to build a comprehensive sentence representation. Specifically, a neural topic model is added to the neural-based sentence-level representation learning to adequately consider the central topic information for capturing the critical content of the original document. Experimental results on three public datasets show that our model outperforms state-of-the-art models.
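A minimal sketch of how a topic distribution might be fused with a sentence embedding to form the "comprehensive sentence representation" described above; the concatenate-and-project fusion and all dimensions are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class TopicAwareSentenceEncoder(nn.Module):
    """Fuses a sentence embedding with a document-level topic distribution."""
    def __init__(self, sent_dim, num_topics, out_dim):
        super().__init__()
        self.fuse = nn.Linear(sent_dim + num_topics, out_dim)

    def forward(self, sent_emb, topic_dist):
        # sent_emb: (batch, sent_dim); topic_dist: (batch, num_topics)
        return torch.tanh(self.fuse(torch.cat([sent_emb, topic_dist], dim=-1)))
```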
Document summarization condenses a long document into a short version with salient information and an accurate semantic description. The main problem is how to make the output summary semantically consistent with the input document. To reach this goal, researchers have recently focused on supervised end-to-end hybrid approaches that contain an extractor module and an abstractor module: the extractor identifies salient sentences in the input document, and the abstractor generates the summary from those salient sentences. Such models successfully maintain consistency between the generated summary and the reference summary via various strategies (e.g., reinforcement learning). There are two semantic gaps when training a hybrid model (one between the document and the extracted sentences, the other between the extracted sentences and the summary). However, existing methods do not consider them explicitly, which often leads to a semantic bias in the summary. To alleviate this problem, this paper proposes a new reinforcing semantic-symmetry learning model for document summarization (ReSyM). ReSyM introduces a semantic-consistency reward in the extractor to bridge the first gap, and a semantic dual reward designed to bridge the second gap in the abstractor. The whole document summarization process is implemented via reinforcement learning with a hybrid reward mechanism that combines the two rewards above. In addition, a comprehensive sentence representation learning method is presented to fully capture the information of the original document. A series of experiments has been conducted on two benchmark datasets, CNN/Daily Mail and BigPatent. By comparing ReSyM with state-of-the-art baselines on various evaluation metrics, the results demonstrate its superiority.
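The hybrid reward can be pictured as a weighted blend of the two semantic rewards; the sketch below is a hypothetical form using cosine similarities, with the weight alpha assumed rather than taken from ReSyM.

```python
import torch.nn.functional as F

def hybrid_reward(doc_repr, extract_repr, summary_repr, alpha=0.5):
    """Blend the extractor's semantic-consistency reward (document vs. extracted
    sentences) with the abstractor's dual reward (extracted sentences vs. summary)."""
    r_extract = F.cosine_similarity(doc_repr, extract_repr, dim=-1)
    r_abstract = F.cosine_similarity(extract_repr, summary_repr, dim=-1)
    return alpha * r_extract + (1 - alpha) * r_abstract
```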
Bidirectional Encoder Representations from Transformers (BERT; Devlin et al. 2019) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings.
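The two-optimizer fine-tuning schedule can be sketched as below; the learning rates and the `encoder`/`decoder` parameter-name prefixes are illustrative assumptions, and the paper's separate warmup schedules are omitted here.

```python
import torch

def build_optimizers(model, enc_lr=2e-5, dec_lr=1e-3):
    """Separate optimizers so the pretrained encoder is updated more gently
    than the randomly initialized decoder."""
    enc_params = [p for n, p in model.named_parameters() if n.startswith("encoder")]
    dec_params = [p for n, p in model.named_parameters() if n.startswith("decoder")]
    return (torch.optim.Adam(enc_params, lr=enc_lr),
            torch.optim.Adam(dec_params, lr=dec_lr))
```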
We introduce extreme summarization, a new single-document summarization task which does not favor extractive strategies and calls for an abstractive modeling approach. The idea is to create a short, one-sentence news summary answering the question "What is the article about?". We collect a real-world, large scale dataset for this task by harvesting online articles from the British Broadcasting Corporation (BBC). We propose a novel abstractive model which is conditioned on the article's topics and based entirely on convolutional neural networks. We demonstrate experimentally that this architecture captures long-range dependencies in a document and recognizes pertinent content, outperforming an oracle extractive system and state-of-the-art abstractive approaches when evaluated automatically and by humans.
Long documents such as academic articles and business reports have been the standard format for detailing important issues and complicated subjects that require extra attention. An automatic summarization system that can effectively condense long documents into short, concise texts encapsulating the most important information would therefore be significant in aiding readers' comprehension. Recently, with the advent of neural architectures, significant research effort has been devoted to advancing automatic text summarization systems, together with a substantial amount of work on the challenges of extending these systems to the long document domain. In this survey, we provide a comprehensive overview of research on long document summarization, as well as a systematic evaluation of the three principal components of its research setting: benchmark datasets, summarization models, and evaluation metrics. For each component, we organize the literature within the context of long document summarization and conduct an empirical analysis to broaden the perspective on current research progress. The empirical analysis includes a study of the intrinsic characteristics of benchmark datasets, a multi-dimensional analysis of summarization models, and a review of summarization evaluation metrics. Based on the overall findings, we conclude by proposing directions that may drive future exploration in this rapidly growing field.
Nowadays, time-stamped web documents related to general news queries flood the Internet, and timeline summarization targets concisely summarizing the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper, we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose to extract the feature of event-level attention in its generation process with sequential information retained and use it to simulate the evolutionary attention of the ground truth summary. The event-level attention can also be used to assist in extracting the summary, where the extracted summary also comes in time sequence. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
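The content-dependency graph over events could be built, in the simplest case, by thresholding pairwise similarity of event representations; this toy construction (thresholded cosine similarity) is an assumption for illustration, not UTS's actual encoder.

```python
import numpy as np

def event_graph(event_embeddings, threshold=0.5):
    """Connect events whose content representations are similar enough.
    event_embeddings: (num_events, dim) array; returns a binary adjacency."""
    E = event_embeddings / np.linalg.norm(event_embeddings, axis=1, keepdims=True)
    sim = E @ E.T
    adj = (sim >= threshold).astype(float)
    np.fill_diagonal(adj, 0.0)   # no self-loops
    return adj
```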
Multi-document scientific summarization (MDSS) aims to generate coherent and concise summaries for clusters of topic-relevant scientific papers. The task requires a precise understanding of paper content and accurate modeling of cross-paper relationships. Knowledge graphs convey compact and interpretable structured information about documents, which makes them ideal for both content modeling and relationship modeling. In this paper, we propose KGSum, an MDSS model that centres on knowledge graphs during both encoding and decoding. Specifically, in the encoding process, two graph-based modules are proposed to incorporate knowledge graph information into paper encoding; in the decoding process, we propose a two-stage decoder that first generates the summary's knowledge graph in the form of descriptive sentences and then generates the final summary. Empirical results show that the proposed architecture brings substantial improvements over baselines on the Multi-XScience dataset.
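The two-stage decoding can be pictured as the pipeline below, where `encode`, `gen_kg_text`, and `gen_summary` are placeholder callables standing in for model components; nothing here reflects KGSum's actual interfaces.

```python
def two_stage_decode(encode, gen_kg_text, gen_summary, papers):
    """Stage 1 verbalizes the summary's knowledge graph as descriptive
    sentences; stage 2 conditions the final summary on both the encoded
    papers and that intermediate text."""
    memory = encode(papers)
    kg_sentences = gen_kg_text(memory)          # stage 1: entity-relation sentences
    return gen_summary(memory, kg_sentences)    # stage 2: final summary
```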
Recently, discrete latent variable models have received a surge of interest in both Natural Language Processing (NLP) and Computer Vision (CV), attributed to their comparable performance to the continuous counterparts in representation learning, while being more interpretable in their predictions. In this paper, we develop a topic-informed discrete latent variable model for semantic textual similarity, which learns a shared latent space for sentence-pair representation via vector quantization. Compared with previous models limited to local semantic contexts, our model can explore richer semantic information via topic modeling. We further boost the performance of semantic similarity by injecting the quantized representation into a transformer-based language model with a well-designed semantic-driven attention mechanism. We demonstrate, through extensive experiments across various English language datasets, that our model is able to surpass several strong neural baselines in semantic textual similarity tasks.
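The core vector-quantization step, mapping continuous latents onto a discrete codebook by nearest neighbour, might look like the following sketch; the Euclidean distance metric and shapes are generic assumptions.

```python
import torch

def quantize(z, codebook):
    """Nearest-neighbour vector quantization.
    z: (batch, dim) continuous latents; codebook: (K, dim) entries."""
    dists = torch.cdist(z, codebook)   # (batch, K) pairwise distances
    idx = dists.argmin(dim=-1)         # discrete code per example
    return codebook[idx], idx          # quantized vectors and their codes
```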
Multi-document summarization (MDS) is an effective tool for information aggregation that generates an informative and concise summary from a cluster of topic-related documents. Our survey is the first to systematically overview recent deep-learning-based MDS models. We propose a novel taxonomy summarizing the design strategies of neural networks and provide a comprehensive overview of the state of the art. We highlight the differences between various objective functions, which are rarely discussed in the existing literature. Finally, we propose several directions pertaining to this new and exciting field.
Unsupervised summarization methods have achieved remarkable results by incorporating representations from pre-trained language models. However, existing methods fail to consider efficiency and effectiveness at the same time when the input document is extremely long. To address this problem, in this paper we propose an efficient Coarse-to-Fine Facet-Aware Ranking (C2F-FAR) framework for unsupervised long document summarization, based on semantic blocks. A semantic block refers to a run of continuous sentences in the document that describe the same facet. Specifically, we convert the one-step ranking method into a hierarchical, multi-granularity, two-stage ranking. In the coarse-level stage, we propose a new segmentation algorithm to split the document into facet-aware semantic blocks and then filter out insignificant blocks. In the fine-level stage, we select salient sentences in each block and then extract the final summary from the selected sentences. We evaluate our framework on four long document summarization datasets: Gov-Report, BillSum, arXiv, and PubMed. Our C2F-FAR achieves new state-of-the-art unsupervised summarization results on Gov-Report and BillSum. In addition, our method is 4-28 times faster than previous methods.
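A toy version of the two-stage ranking: stage one keeps the semantic blocks closest to a document centroid, stage two picks the most central sentences inside each surviving block. The centroid-based scoring and the cut-off parameters are assumptions, not the paper's scoring functions.

```python
import numpy as np

def coarse_to_fine(blocks, centroid, top_blocks=3, sents_per_block=2):
    """blocks: list of (block_centroid_vec, [(sent_vec, sent_text), ...])."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    # Coarse stage: keep the blocks most similar to the document centroid.
    kept = sorted(blocks, key=lambda b: cos(b[0], centroid), reverse=True)[:top_blocks]
    # Fine stage: take the most central sentences within each kept block.
    summary = []
    for block_vec, sents in kept:
        ranked = sorted(sents, key=lambda s: cos(s[0], block_vec), reverse=True)
        summary.extend(text for _, text in ranked[:sents_per_block])
    return summary
```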
In a citation graph, adjacent paper nodes share related scientific terms and topics. The graph thus conveys unique structure information of document-level relatedness that can be utilized in the paper summarization task, for exploring beyond the intra-document information. In this work, we focus on leveraging citation graphs to improve scientific paper extractive summarization under different settings. We first propose a Multi-granularity Unsupervised Summarization model (MUS) as a simple and low-cost solution to the task. MUS finetunes a pre-trained encoder model on the citation graph by link prediction tasks. Then, the abstract sentences are extracted from the corresponding paper considering multi-granularity information. Preliminary results demonstrate that citation graph is helpful even in a simple unsupervised framework. Motivated by this, we next propose a Graph-based Supervised Summarization model (GSS) to achieve more accurate results on the task when large-scale labeled data are available. Apart from employing the link prediction as an auxiliary task, GSS introduces a gated sentence encoder and a graph information fusion module to take advantage of the graph information to polish the sentence representation. Experiments on a public benchmark dataset show that MUS and GSS bring substantial improvements over the prior state-of-the-art model.
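The link-prediction auxiliary task on the citation graph can be sketched as a binary objective over positive (cited) and negative (random) paper pairs; the dot-product scorer below is a common choice and an assumption here, not necessarily the models' exact head.

```python
import torch
import torch.nn.functional as F

def link_prediction_loss(node_emb, pos_pairs, neg_pairs):
    """Binary link prediction: adjacent papers should score high, random
    pairs low. pos_pairs/neg_pairs: (m, 2) long tensors of node indices."""
    def score(pairs):
        return (node_emb[pairs[:, 0]] * node_emb[pairs[:, 1]]).sum(-1)
    pos, neg = score(pos_pairs), score(neg_pairs)
    labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
    return F.binary_cross_entropy_with_logits(torch.cat([pos, neg]), labels)
```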
The query-focused text summarization (QFTS) task aims at building systems that generate summaries of text documents based on a given query. A key challenge in addressing this task is the lack of large labeled data for training summarization models. In this paper, we address this challenge by exploring a series of domain adaptation techniques. Given the recent success of pre-trained transformer models on a wide range of natural language processing tasks, we utilize such models to generate abstractive summaries for the QFTS task in both single-document and multi-document scenarios. For domain adaptation, we apply a variety of techniques to pre-trained transformer-based summarization models, including transfer learning, weakly supervised learning, and distant supervision. Extensive experiments on six datasets show that our proposed approach is very effective at generating abstractive summaries for the QFTS task while setting new state-of-the-art results on a set of automatic and human evaluation metrics.
Contrastive learning models have achieved great success in unsupervised visual representation learning, where they maximize the similarity between feature representations of different views of the same image while minimizing the similarity between views of different images. In text summarization, the output summary is a shorter form of the input document and the two share similar meanings. In this paper, we propose a contrastive learning model for supervised abstractive text summarization, in which we view a document, its gold summary, and its model-generated summaries as different views of the same meaning and maximize the similarity between them during training. We improve over a strong sequence-to-sequence text generation model (i.e., BART) on three different summarization datasets. Human evaluation also shows that our model achieves better faithfulness ratings than its counterpart trained without the contrastive objective.
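A minimal sketch of the three-view agreement objective: pull the representations of the document, its gold summary, and a generated summary toward each other. The pairwise cosine formulation is an assumed simplification, not the paper's exact training objective.

```python
import torch.nn.functional as F

def view_agreement_loss(doc_repr, gold_repr, gen_repr):
    """Treat the document, gold summary, and generated summary as views of
    the same meaning; minimize their pairwise cosine distances."""
    return (1 - F.cosine_similarity(doc_repr, gold_repr, dim=-1)
            + 1 - F.cosine_similarity(doc_repr, gen_repr, dim=-1)
            + 1 - F.cosine_similarity(gold_repr, gen_repr, dim=-1)).mean()
```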
Compared with single-document summarization, abstractive multi-document summarization (MDS) poses challenges for the representation and coverage of its lengthy and linked sources. This study develops a Parallel Hierarchical Transformer (PHT) with attention alignment for MDS. By incorporating word-level and paragraph-level multi-head attention, PHT's hierarchical architecture can better handle dependencies at both the token and document levels. To guide decoding towards better coverage of the source documents, an attention-alignment mechanism is then introduced to calibrate beam search with predicted optimal attention distributions. Based on the WikiSum data, a comprehensive evaluation is conducted to test the improvements the proposed architecture brings to MDS. By better handling intra- and cross-document information, both ROUGE and human evaluation results suggest that our hierarchical model generates higher-quality summaries at relatively low computational cost.
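Attention-aligned beam calibration can be sketched as penalizing hypotheses whose accumulated source attention strays from a predicted optimal distribution; the L1 penalty and weight gamma below are hypothetical, not the paper's exact calibration.

```python
import numpy as np

def attention_aligned_score(beam_logprob, attn_dist, target_dist, gamma=0.3):
    """Rescore a beam hypothesis by how far its accumulated source-attention
    distribution deviates from the predicted optimal one."""
    misalignment = np.abs(attn_dist - target_dist).sum()
    return beam_logprob - gamma * misalignment
```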
Despite recent improvements in abstractive summarization, most current approaches generate summaries that are not factually consistent with the source document, severely restricting their trustworthiness and use in real-world applications. Recent work has shown promising improvements in identifying factual errors using textual or dependency-arc entailment; however, these approaches do not consider the entire semantic graph simultaneously. To this end, we propose FactGraph, a method that decomposes the document and the summary into structured meaning representations (MRs), which are more suitable for factuality evaluation. MRs describe core semantic concepts and their relations, aggregating the main content of both document and summary in a canonical form and reducing data sparsity. FactGraph encodes such graphs using a graph encoder augmented with structure-aware adapters to capture interactions among concepts based on the graph connectivity, along with text representations from an adapter-based text encoder. Experiments on different benchmarks for evaluating factuality show that FactGraph outperforms previous approaches by up to 15%. Furthermore, FactGraph improves performance in identifying content-verifiability errors and better captures subsentence-level factual inconsistencies.
Modern multi-document summarization (MDS) methods are based on transformer architectures. They generate state-of-the-art summaries, but lack explainability. We focus on graph-based transformer models for MDS as they gained recent popularity. We aim to improve the explainability of the graph-based MDS by analyzing their attention weights. In a graph-based MDS such as GraphSum, vertices represent the textual units, while the edges form some similarity graph over the units. We compare GraphSum's performance utilizing different textual units, i.e., sentences versus paragraphs, on two news benchmark datasets, namely WikiSum and MultiNews. Our experiments show that paragraph-level representations provide the best summarization performance. Thus, we subsequently focus on analyzing the paragraph-level attention weights of GraphSum's multi-heads and decoding layers in order to improve the explainability of a transformer-based MDS model. As a reference metric, we calculate the ROUGE scores between the input paragraphs and each sentence in the generated summary, which indicate source origin information via text similarity. We observe a high correlation between the attention weights and this reference metric, especially on the later decoding layers of the transformer architecture. Finally, we investigate if the generated summaries follow a pattern of positional bias by extracting which paragraph provided the most information for each generated summary. Our results show that there is a high correlation between the position in the summary and the source origin.
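The reference-metric analysis boils down to correlating, per generated sentence, the attention mass each source paragraph received with its ROUGE overlap; the Pearson correlation below is an assumed stand-in for the paper's exact statistic.

```python
import numpy as np

def attention_rouge_correlation(attn_weights, rouge_scores):
    """attn_weights, rouge_scores: (num_summary_sents, num_paragraphs) arrays.
    Returns the per-sentence correlation between attention mass over source
    paragraphs and their ROUGE overlap with that sentence."""
    return np.array([np.corrcoef(a, r)[0, 1]
                     for a, r in zip(attn_weights, rouge_scores)])
```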
Though many algorithms can be used to automatically summarize legal case decisions, most fail to incorporate domain knowledge about how important sentences in a legal decision relate to a representation of its document structure. For example, analysis of a legal case summarization dataset demonstrates that sentences serving different types of argumentative roles in the decision appear in different sections of the document. In this work, we propose an unsupervised graph-based ranking model that uses a reweighting algorithm to exploit properties of the document structure of legal case decisions. We also explore the impact of using different methods to compute the document structure. Results on the Canadian Legal Case Law dataset show that our proposed method outperforms several strong baselines.
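A PageRank-style ranker in which edges are rescaled by a per-sentence structural weight gives the flavour of the reweighting idea; the specific reweighting form, damping factor, and iteration count below are assumptions, not the paper's algorithm.

```python
import numpy as np

def reweighted_textrank(sim, section_weight, d=0.85, iters=50):
    """Power-iteration sentence ranking over a similarity graph, with edges
    rescaled by a document-structure weight per target sentence.
    sim: (n, n) sentence similarity matrix; section_weight: (n,) weights."""
    W = sim * section_weight[None, :]                 # boost structurally salient targets
    W = W / (W.sum(axis=1, keepdims=True) + 1e-9)     # row-stochastic transition matrix
    n = W.shape[0]
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - d) / n + d * W.T @ scores
    return scores
```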
Academic research is an exploratory activity that solves problems never addressed before. By its nature, every piece of academic research must include a literature review to distinguish the novelties that have not been addressed by prior work. In natural language processing, this literature review is usually conducted under the "Related Work" section. Given the rest of a research paper and a list of cited papers, the task of automatic related work generation aims to generate the "Related Work" section automatically. Although this task was proposed over ten years ago, it only recently came to be regarded as a variant of the scientific multi-document summarization problem. Even today, however, the problems of automatic related work generation and citation text generation have not been standardized. In this survey, we conduct a meta-study comparing the existing literature on related work generation from the perspectives of problem formulation, dataset collection, methodological approach, performance evaluation, and future prospects, to give readers insight into the progress of state-of-the-art research and into how future studies can be conducted. We also survey related research fields that we suggest future work should consider integrating.
Document summarization aims to create a precise and coherent summary of a text document. Many deep learning summarization models are developed mainly for English, often requiring a large training corpus and efficient pre-trained language models and tools. However, English summarization models for low-resource Indian languages are often limited by rich morphological variation, syntax, and semantic differences. In this paper, we propose GAE-ISumm, an unsupervised Indic summarization model that extracts summaries from text documents. In particular, our proposed model, GAE-ISumm uses Graph Autoencoder (GAE) to learn text representations and a document summary jointly. We also provide a manually-annotated Telugu summarization dataset TELSUM, to experiment with our model GAE-ISumm. Further, we experiment with the most publicly available Indian language summarization datasets to investigate the effectiveness of GAE-ISumm on other Indian languages. Our experiments of GAE-ISumm in seven languages make the following observations: (i) it is competitive or better than state-of-the-art results on all datasets, (ii) it reports benchmark results on TELSUM, and (iii) the inclusion of positional and cluster information in the proposed model improved the performance of summaries.
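At its core, a graph autoencoder encodes nodes with a graph convolution and reconstructs the adjacency from embedding inner products; the one-layer sketch below assumes a pre-normalized adjacency matrix and is not GAE-ISumm's full model, which also learns the summary jointly.

```python
import torch
import torch.nn as nn

class GraphAutoencoder(nn.Module):
    """One-layer GAE: encode nodes via a normalized-adjacency convolution,
    then decode edge probabilities from the inner product of embeddings."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj_norm):
        # x: (n, in_dim) node features; adj_norm: (n, n) normalized adjacency.
        z = torch.relu(self.lin(adj_norm @ x))   # encoder: A_hat X W
        recon = torch.sigmoid(z @ z.t())         # decoder: edge probabilities
        return z, recon
```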