In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling keywords, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time. Our work shows that many of our proposed models contribute to further improvement in performance. We also propose a new dataset consisting of multi-sentence summaries, and establish performance benchmarks for further research.
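As a concrete reference for the attentional encoder-decoder setup these models build on, here is a minimal PyTorch sketch of additive (Bahdanau-style) attention; the module names and dimensions are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Minimal sketch of additive attention for an encoder-decoder RNN."""
    def __init__(self, d_enc=256, d_dec=256, d_attn=128):
        super().__init__()
        self.W_h = nn.Linear(d_enc, d_attn, bias=False)   # projects encoder states
        self.W_s = nn.Linear(d_dec, d_attn, bias=False)   # projects decoder state
        self.v = nn.Linear(d_attn, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: [B, S, d_enc]; dec_state: [B, d_dec]
        scores = self.v(torch.tanh(self.W_h(enc_states) +
                                   self.W_s(dec_state).unsqueeze(1))).squeeze(-1)
        attn = torch.softmax(scores, dim=-1)                       # [B, S]
        context = torch.bmm(attn.unsqueeze(1), enc_states).squeeze(1)  # weighted sum
        return context, attn
```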
We introduce extreme summarization, a new single-document summarization task which does not favor extractive strategies and calls for an abstractive modeling approach. The idea is to create a short, one-sentence news summary answering the question "What is the article about?". We collect a real-world, large-scale dataset for this task by harvesting online articles from the British Broadcasting Corporation (BBC). We propose a novel abstractive model which is conditioned on the article's topics and based entirely on convolutional neural networks. We demonstrate experimentally that this architecture captures long-range dependencies in a document and recognizes pertinent content, outperforming an oracle extractive system and state-of-the-art abstractive approaches when evaluated automatically and by humans.
Nowadays, time-stamped web documents related to a general news query flood the Internet, and timeline summarization targets concisely summarizing the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper, we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose to extract the event-level attention in its generation process, with the sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist extractive summarization, where the extracted summary likewise comes in chronological order. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
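The pointer-generator and coverage ideas can be made concrete with a small PyTorch sketch of one decoding step. This is our illustration under simplifying assumptions (source OOV handling via an extended vocabulary is omitted), not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pointer_generator_step(vocab_logits, attn_weights, src_ids, p_gen, coverage):
    """One decoding step of a pointer-generator with coverage (sketch only).
    Shapes: vocab_logits [B, V], attn_weights [B, S], src_ids [B, S] (long,
    source token ids within the vocab), p_gen [B, 1], coverage [B, S]."""
    vocab_dist = F.softmax(vocab_logits, dim=-1)              # generator distribution
    copy_dist = torch.zeros_like(vocab_dist).scatter_add(     # project attention onto vocab
        1, src_ids, attn_weights)
    final_dist = p_gen * vocab_dist + (1.0 - p_gen) * copy_dist
    # Coverage loss penalizes re-attending to already-covered source positions.
    cov_loss = torch.sum(torch.minimum(attn_weights, coverage), dim=-1)
    new_coverage = coverage + attn_weights
    return final_dist, cov_loss, new_coverage
```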
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
Bidirectional Encoder Representations from Transformers (BERT; Devlin et al. 2019) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings.
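The two-optimizer fine-tuning schedule can be sketched as follows. The learning rates and warmup steps shown are illustrative placeholders, and the parameter-name prefix used to split encoder from decoder parameters is an assumption about the model's naming.

```python
import torch

def build_two_optimizers(model, lr_enc=2e-3, lr_dec=0.1,
                         warmup_enc=20000, warmup_dec=10000):
    """Sketch of separate optimizers: the pretrained encoder gets a smaller peak
    learning rate and a longer warmup than the randomly initialized decoder."""
    enc_params = [p for n, p in model.named_parameters() if n.startswith("encoder")]
    dec_params = [p for n, p in model.named_parameters() if not n.startswith("encoder")]
    opt_enc = torch.optim.Adam(enc_params, lr=lr_enc, betas=(0.9, 0.999))
    opt_dec = torch.optim.Adam(dec_params, lr=lr_dec, betas=(0.9, 0.999))

    def lr_lambda(step, warmup):  # inverse-sqrt schedule with linear warmup
        step = max(step, 1)
        return min(step ** -0.5, step * warmup ** -1.5)

    sch_enc = torch.optim.lr_scheduler.LambdaLR(opt_enc, lambda s: lr_lambda(s, warmup_enc))
    sch_dec = torch.optim.lr_scheduler.LambdaLR(opt_dec, lambda s: lr_lambda(s, warmup_dec))
    return (opt_enc, sch_enc), (opt_dec, sch_dec)
```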
This paper offers a comprehensive review of the research on natural language generation (NLG) over the past two decades, especially in relation to data-to-text and text-to-text deep learning methods, as well as new applications of NLG technology. The survey aims to (a) give an up-to-date synthesis of research on the core tasks of NLG and the architectures adopted in the field; (b) detail the various NLG tasks and datasets, and draw attention to the challenges of NLG evaluation, focusing on the different evaluation methods and their relationships; and (c) highlight some future directions and relatively recent research issues that arise from the growing synergy between NLG and other areas of artificial intelligence, such as computer vision, text, and computational creativity.
Recent neural sequence-to-sequence models with a copy mechanism have achieved remarkable progress in various text generation tasks. These models address the out-of-vocabulary problem and facilitate the generation of rare words. However, as observed in previous copy models, it is difficult for the model to identify which words should be copied and which should be generated, and the generated text tends to lack abstraction. In this paper, we propose a novel supervised approach for copy networks that helps the model decide which words need to be copied and which need to be generated. Specifically, we redefine the objective function, leveraging the source sequence and the target vocabulary as guidance for copying. Experimental results on data-to-text generation and abstractive summarization tasks verify that our approach improves copy quality and raises the degree of abstraction.
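One plausible way to supervise the copy/generate switch as described (an assumption on our part, not the paper's exact objective) is an auxiliary binary loss derived from the source sequence and the generator vocabulary:

```python
import torch
import torch.nn.functional as F

def copy_supervision_loss(p_gen, tgt_ids, src_ids, vocab_ids):
    """Hedged sketch: target tokens that appear in the source but not in the
    generator vocabulary are labelled 'copy' (0), everything else 'generate' (1).
    Shapes: p_gen [B, T, 1] (sigmoid output), tgt_ids [B, T], src_ids [B, S],
    vocab_ids [V]."""
    in_src = (tgt_ids.unsqueeze(-1) == src_ids.unsqueeze(1)).any(-1)          # [B, T]
    in_vocab = (tgt_ids.unsqueeze(-1) == vocab_ids.view(1, 1, -1)).any(-1)    # [B, T]
    copy_label = in_src & ~in_vocab          # must be copied
    gen_label = (~copy_label).float()        # 1 = generate, 0 = copy
    return F.binary_cross_entropy(p_gen.squeeze(-1), gen_label)
```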
The word alignment task, despite its prominence in the era of statistical machine translation (SMT), is niche and under-explored today. In this two-part tutorial, we argue for the continued relevance of word alignment. The first part provides a historical background to word alignment as a core component of the traditional SMT pipeline. We zero in on GIZA++, an unsupervised, statistical word aligner with surprising longevity. Jumping forward to the era of neural machine translation (NMT), we show how insights from word alignment inspired the attention mechanism fundamental to present-day NMT. The second part shifts to a survey approach. We cover neural word aligners, showing the slow but steady progress towards surpassing GIZA++ performance. Finally, we cover present-day applications of word alignment, from cross-lingual annotation projection to improving translation.
Document summarization provides an instrument for understanding a collection of text documents more quickly and has several real-life applications. With the growth of online text data, many summarization models have been proposed in recent years. Sequence-to-sequence (Seq2Seq) neural summarization models are the most widely used in the field owing to their strong performance, because both the semantic and the structural information of the text is taken into account during encoding. However, existing extractive summarization models pay little attention to, and make little use of, central topic information to assist summary generation, so the model cannot ensure that the generated summary stays on the main topic. A lengthy document can span multiple topics, and a single summary cannot do justice to all of them. The key to generating a high-quality summary is therefore to identify the central topic and build the summary around it, especially for long documents. We propose a topic-aware encoding scheme for document summarization to deal with this issue. The model effectively combines syntactic-level and topic-level information to construct a comprehensive sentence representation. Specifically, a neural topic model is added to the neural sentence-level representation learning so that central topic information is fully considered when capturing the key content of the original document. Experimental results on three public datasets show that our model outperforms state-of-the-art models.
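A minimal sketch of fusing a neural-topic-model distribution with sentence representations for extractive scoring might look like the following; the module names and dimensions are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class TopicAwareSentenceEncoder(nn.Module):
    """Sketch: concatenate each sentence vector with the document's topic
    distribution, fuse them, and score sentences for extraction."""
    def __init__(self, d_sent=768, n_topics=50):
        super().__init__()
        self.fuse = nn.Linear(d_sent + n_topics, d_sent)
        self.score = nn.Linear(d_sent, 1)

    def forward(self, sent_vecs, topic_dist):
        # sent_vecs: [n_sents, d_sent]; topic_dist: [n_topics] for the document
        topics = topic_dist.unsqueeze(0).expand(sent_vecs.size(0), -1)
        fused = torch.tanh(self.fuse(torch.cat([sent_vecs, topics], dim=-1)))
        return torch.sigmoid(self.score(fused)).squeeze(-1)   # extraction probabilities
```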
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training to better exploit the GPU hardware and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
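A gated convolutional block of the kind described can be sketched in a few lines of PyTorch; this is a simplified, non-causal illustration rather than the paper's full encoder-decoder architecture.

```python
import torch
import torch.nn as nn

class GLUConvBlock(nn.Module):
    """Sketch of one gated convolutional layer: a 1-D convolution producing
    2*d channels, split into content and gate halves (GLU), plus a residual."""
    def __init__(self, d_model=256, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(d_model, 2 * d_model, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):                           # x: [batch, seq_len, d_model]
        h = self.conv(x.transpose(1, 2))            # [batch, 2*d_model, seq_len]
        h = nn.functional.glu(h, dim=1)             # gated linear unit
        return x + h.transpose(1, 2)                # residual connection

block = GLUConvBlock()
print(block(torch.randn(2, 10, 256)).shape)         # torch.Size([2, 10, 256])
```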
Compared with single-document summarization, abstractive multi-document summarization (MDS) poses challenges in both the representation and the coverage of its lengthy and interlinked sources. This study develops a Parallel Hierarchical Transformer (PHT) with attention alignment for MDS. By incorporating word-level and paragraph-level multi-head attention, the hierarchical architecture of PHT handles dependencies at both the token and the document level better. To guide decoding towards better coverage of the source documents, an attention-alignment mechanism is then introduced to calibrate beam search with the predicted optimal attention distribution. Based on the WikiSum data, a comprehensive evaluation is conducted to test the improvements of the proposed architecture on MDS. By better handling intra- and cross-document information, both ROUGE scores and human evaluation suggest that our hierarchical model generates summaries of higher quality at a relatively low computational cost.
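The attention-alignment calibration of beam search could, for example, be realized as a rescoring penalty on the divergence between a hypothesis's accumulated attention and the predicted optimal distribution; the sketch below is our hedged reading of the idea, not the paper's exact formula.

```python
import torch

def align_rescore(beam_logprob, attn_so_far, target_attn, weight=1.0):
    """Rescore one beam hypothesis so its accumulated attention over source
    paragraphs stays close to a predicted 'optimal' distribution (sketch).
    attn_so_far, target_attn: 1-D tensors over source units."""
    attn = attn_so_far / attn_so_far.sum()
    penalty = torch.sum(target_attn * (target_attn.clamp_min(1e-8).log()
                                       - attn.clamp_min(1e-8).log()))  # KL(target || attn)
    return beam_logprob - weight * penalty
```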
Multi-document summarization (MDS) is an effective tool for information aggregation that generates an informative and concise summary from a cluster of topic-related documents. Our survey is the first to systematically overview recent deep-learning-based MDS models. We propose a novel taxonomy summarizing the design strategies of the neural networks and give a comprehensive overview of the state of the art. We highlight the differences between various objective functions that are rarely discussed in the existing literature. Finally, we propose several directions pertaining to this new and exciting field.
Copy mechanisms allow sequence-to-sequence models to select words from the input and place them directly in the output, and they are finding increasing use in abstractive summarization. However, since Chinese sentences have no explicit delimiters, most existing Chinese abstractive summarization models can only copy characters, which makes copying inefficient. To address this problem, we propose a lexicon-constrained copying network that models multi-granularity in both the encoder and the decoder. On the source side, words and characters are aggregated into the same input memory using a Transformer-based encoder. On the target side, the decoder can copy either a character or a multi-character word at each time step, and the decoding process is guided by a word-enhanced search algorithm that facilitates parallel computation and encourages the model to copy more words. In addition, we adopt a word selector to integrate keyword information. Experimental results on a Chinese social media dataset show that our model can run either standalone or with the word selector; both forms outperform previous character-based models and achieve competitive performance.
Academic research is an exploratory activity that tackles problems never solved before. By its nature, every piece of academic research must include a literature review that distinguishes its novelties from what has already been addressed by prior work. In natural language processing, this literature review is usually conducted in a "Related Work" section. Given the rest of a research paper and a list of cited papers, the task of automatic related work generation aims to generate the "Related Work" section automatically. Although the task was proposed over ten years ago, it received little attention until recently, when it was cast as a variant of the scientific multi-document summarization problem. Even today, however, the problems of automatic related work generation and citation text generation have not been standardized. In this survey, we conduct a meta-study comparing the existing literature on related work generation from the perspectives of problem formulation, dataset collection, methodological approach, performance evaluation, and future prospects, to give readers insight into the progress of state-of-the-art research and into how future studies can be conducted. We also survey related research fields that we suggest future work should consider integrating.
Despite the success of neural sequence-to-sequence models for abstractive text summarization, they have several shortcomings, such as reproducing factual details inaccurately and tending to repeat themselves. We propose a hybrid pointer-generator network to address these shortcomings of inaccurately reproduced factual details and phrase repetition. We augment the attention-based sequence-to-sequence model with a hybrid pointer-generator network that can generate out-of-vocabulary words and improves the accuracy of reproducing factual details, together with a coverage mechanism that discourages repetition. The model produces reasonable output text that preserves the conceptual integrity and factual information of the input article. For evaluation, we primarily employ "BANSData", a widely adopted public Bangla dataset. In addition, we prepared a large-scale dataset called "BANS-133" consisting of 133k Bangla news articles paired with human-generated summaries. With the proposed model we achieved ROUGE-1 and ROUGE-2 scores of 0.66 and 0.41 on the "BANSData" dataset, and 0.67 and 0.42 on the "BANS-133" dataset, respectively. We showed that the proposed system surpasses previous state-of-the-art Bengali abstractive summarization techniques and demonstrated its stability on these datasets. The "BANS-133" dataset and the code base will be made publicly available for research.
The rapid advancement of AI technology has made text generation tools like GPT-3 and ChatGPT increasingly accessible, scalable, and effective. This can pose a serious threat to the credibility of various forms of media, including scientific literature and news sources, if these technologies are used for plagiarism. Despite the development of automated methods for paraphrase identification, detecting this type of plagiarism remains a challenge due to the disparate nature of the datasets on which these methods are trained. In this study, we review traditional and current approaches to paraphrase identification and propose a refined typology of paraphrases. We also investigate how this typology is represented in popular datasets and how under-representation of certain types of paraphrases impacts detection capabilities. Finally, we outline new directions for future research and datasets in the pursuit of more effective paraphrase detection using AI.
(Source) code summarization aims to automatically generate a natural-language summary/comment for a given code snippet. Such summaries play a key role in helping developers understand and maintain source code. Existing code summarization techniques can be divided into extractive and abstractive methods. Extractive methods use retrieval techniques to extract a subset of important statements and keywords from the code snippet and generate a summary that preserves the factual details contained in them. However, such a subset may miss identifiers or entity names, so the naturalness of the resulting summary is usually poor. Abstractive methods can generate human-written-like summaries by leveraging encoder-decoder models from the neural machine translation domain; however, the generated summaries often miss important factual details. To generate human-written-like summaries that preserve factual details, we propose a novel extractive-and-abstractive framework. The extractive module performs extractive code summarization: it takes the code snippet as input and predicts the important statements that contain the key factual details. The abstractive module performs abstractive code summarization: it takes the entire code snippet and the important statements in parallel as input and generates a concise, human-written-like natural language summary. We evaluate the effectiveness of our technique, called EACS, with extensive experiments on three datasets covering six programming languages. The results show that EACS significantly outperforms state-of-the-art techniques on all three widely used metrics, namely BLEU, METEOR, and ROUGE-L.
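At a high level, the extract-then-abstract pipeline can be sketched as below; `extractor` and `abstracter` are hypothetical callables standing in for the two modules, not released EACS components.

```python
def summarize_code(code_snippet, extractor, abstracter, top_k=3):
    """Sketch of an extract-then-abstract pipeline in the spirit of the
    described framework (illustrative only)."""
    statements = code_snippet.splitlines()
    # Extractive module: score each statement and keep the most important ones.
    scores = extractor(statements)                         # one score per statement
    ranked = sorted(range(len(statements)), key=lambda i: scores[i], reverse=True)
    important = [statements[i] for i in sorted(ranked[:top_k])]   # keep original order
    # Abstractive module: condition on the full snippet plus the extracted statements.
    return abstracter(code=code_snippet, focus="\n".join(important))
```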
With recent advances in speech technology and the introduction of intelligent assistants such as Amazon Alexa, Apple Siri, and Google Home, an increasing number of users interact with various applications through voice commands. E-commerce companies typically display short product titles on their webpages, either human-curated or algorithmically generated, when brevity is required. However, these titles differ from natural spoken language. For example, "Lucky Charms Gluten Free Cereal, 20.5 oz Box Lucky Charms Gluten Free" may be shown on a webpage, but a similar title cannot be used in a voice-based text-to-speech application. In such conversational systems, an easy-to-comprehend sentence such as "20.5 ounces of Lucky Charms gluten-free cereal" is preferred. Unlike display devices, where images and detailed product information can be presented to the user, interaction with a voice assistant requires a short product title that conveys the most important information. We propose eBERT, a sequence-to-sequence approach that further pre-trains BERT embeddings on an e-commerce product description corpus and then fine-tunes the resulting model to generate short, natural spoken-language titles from input web titles. Extensive experiments on a real-world industry dataset, together with human evaluation of the model outputs, show that eBERT summarization outperforms comparable baseline models. Owing to the model's efficacy, a version of it has been deployed in a real-world setting.
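As a rough stand-in for the described setup, a BERT-based encoder-decoder can be assembled with the Hugging Face transformers library as sketched below. This generic BERT2BERT model is not the released eBERT and would need the domain pre-training and fine-tuning described above before producing sensible titles.

```python
from transformers import BertTokenizerFast, EncoderDecoderModel

# Generic BERT2BERT stand-in (not eBERT); intended only to show the wiring.
tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased")
model.config.decoder_start_token_id = tok.cls_token_id
model.config.pad_token_id = tok.pad_token_id

# Fine-tuning on (web title, spoken title) pairs would go here.
inputs = tok("Lucky Charms Gluten Free Cereal, 20.5 oz Box", return_tensors="pt")
ids = model.generate(inputs.input_ids, max_length=16, num_beams=4)
print(tok.decode(ids[0], skip_special_tokens=True))
```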
Rewriting approaches to text summarization combine extractive and abstractive methods, using an abstractive model to improve the conciseness and readability of extractive summaries. Existing rewriting systems take each extractive sentence as the sole input, which keeps them focused but can lose necessary background knowledge and discourse context. In this paper, we investigate contextualized rewriting, which consumes the entire document and takes the summary context into account. We formalize contextualized rewriting as a seq2seq problem with group-tag alignments, introducing group tags as a solution for modeling the alignments and identifying the extracted sentences through content-based addressing. Results show that our approach significantly outperforms non-contextualized rewriting systems without requiring reinforcement learning, achieving strong ROUGE improvements over multiple extractors.
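Group-tag alignment might be realized by adding a shared tag embedding to the token embeddings on both the source and target sides; the sketch below is an assumption about one plausible implementation, not the authors' code.

```python
import torch
import torch.nn as nn

class GroupTagEmbedding(nn.Module):
    """Sketch: tokens of the k-th extracted source sentence and of the k-th
    rewritten target sentence share group tag k, whose embedding is added to
    the token embeddings so the rewriter knows which sentence it is rewriting."""
    def __init__(self, vocab_size=30000, d_model=512, max_groups=32):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.tag = nn.Embedding(max_groups, d_model)   # tag 0 = "not extracted"

    def forward(self, token_ids, group_ids):
        # token_ids, group_ids: [B, T] with aligned group indices
        return self.tok(token_ids) + self.tag(group_ids)
```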