In long document controllable summarization, where labeled data is scarce, pretrained models struggle to adapt to the task and effectively respond to user queries. In this paper, we introduce Socratic pretraining, a question-driven, unsupervised pretraining objective specifically designed to improve controllability in summarization tasks. By training a model to generate and answer relevant questions in a given context, Socratic pretraining enables the model to more effectively adhere to user-provided queries and identify relevant content to be summarized. We demonstrate the effectiveness of this approach through extensive experimentation on two summarization domains, short stories and dialogue, and multiple control strategies: keywords, questions, and factoid QA pairs. Our pretraining method relies only on unlabeled documents and a question generation system and outperforms pre-finetuning approaches that use additional supervised data. Furthermore, our results show that Socratic pretraining cuts task-specific labeled data requirements in half, is more faithful to user-provided queries, and achieves state-of-the-art performance on QMSum and SQuALITY.
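The abstract does not spell out the objective's exact format; as a rough, illustrative sketch only, the snippet below shows one way question-driven pretraining pairs could be built from unlabeled text, assuming an external question-generation system. The control token, the passage segmentation, and the crude "answer" heuristic are assumptions, not the paper's actual recipe.

```python
# Illustrative sketch only (not the authors' code): building question-driven
# pretraining pairs from unlabeled documents, given some external
# question-generation system. Control token, segmentation, and the crude
# "answer" heuristic are assumptions.

def build_question_driven_examples(document, generate_questions, max_questions=3):
    """Create (source, target) pairs asking a seq2seq model to pose and
    answer questions about each passage of an unlabeled document."""
    examples = []
    for passage in document.split("\n\n"):                 # naive passage segmentation
        questions = generate_questions(passage)[:max_questions]
        if not questions:
            continue
        source = "<ask&answer> " + passage                 # hypothetical control token
        answer = passage.split(".")[0] + "."               # crude stand-in "answer"
        target = " ".join(f"Q: {q}" for q in questions) + " A: " + answer
        examples.append({"source": source, "target": target})
    return examples

# Toy stand-in for a real question-generation model.
def toy_question_generator(passage):
    words = passage.split()
    return [f"What is said about {words[0]}?"] if words else []

doc = "Socratic pretraining generates questions from raw text.\n\nOnly unlabeled documents are required."
for ex in build_question_driven_examples(doc, toy_question_generator):
    print(ex["source"], "->", ex["target"])
```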
Human evaluation is the foundation upon which the evaluation of both summarization systems and automatic metrics rests. However, existing human evaluation protocols and benchmarks for summarization either exhibit low inter-annotator agreement or lack the scale needed to draw statistically significant conclusions, and an in-depth analysis of human evaluation is lacking. In this work, we address the shortcomings of existing summarization evaluation along the following axes: 1) We propose a modified summarization salience protocol, Atomic Content Units (ACUs), which relies on fine-grained semantic units and allows for high inter-annotator agreement. 2) We curate the Robust Summarization Evaluation (RoSE) benchmark, a large human evaluation dataset consisting of over 22k summary-level annotations over state-of-the-art systems on three datasets. 3) We compare our ACU protocol with three other human evaluation protocols, underscoring potential confounding factors in evaluation setups. 4) We evaluate existing automatic metrics using the collected human annotations across evaluation protocols and demonstrate how our benchmark leads to more statistically stable and significant results. Furthermore, our findings have important implications for evaluating large language models (LLMs), as we show that LLMs adjusted by human feedback (e.g., GPT-3.5) may overfit unconstrained human evaluation, which is affected by the annotators' prior, input-agnostic preferences, calling for more robust, targeted evaluation methods.
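As a minimal sketch of how fine-grained unit matching can be aggregated into summary- and system-level scores (the exact normalization used by the ACU protocol may differ):

```python
# Minimal sketch of aggregating fine-grained unit judgments, assuming each
# atomic content unit is marked present (1) or absent (0) in a system summary.
# The exact normalization used by the ACU protocol may differ.

from statistics import mean

def acu_score(unit_judgments):
    """Fraction of reference ACUs judged present in one summary."""
    return sum(unit_judgments) / len(unit_judgments) if unit_judgments else 0.0

def system_score(per_summary_judgments):
    """Average summary-level ACU score across a system's outputs."""
    return mean(acu_score(j) for j in per_summary_judgments)

# Toy example: two summaries, three reference ACUs each.
print(round(system_score([[1, 1, 0], [1, 0, 0]]), 2))  # 0.5
```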
Very large language models such as GPT-3 have shown impressive performance across a wide variety of tasks, including text summarization. In this paper, we show that this strong performance extends to opinion summarization. We explore several pipeline methods for applying GPT-3 to summarize a large collection of user reviews in a zero-shot fashion, notably approaches based on recursive summarization and selecting salient content to summarize through supervised clustering or extraction. On two datasets, an aspect-oriented summarization dataset of hotel reviews and a generic summarization dataset of Amazon and Yelp reviews, we show that the GPT-3 models achieve very strong performance in human evaluation. We argue that standard evaluation metrics do not reflect this, and evaluate against several new measures targeting faithfulness, factuality, and genericity to contrast these different methods.
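A rough sketch of the recursive summarization idea mentioned above, with a stub standing in for the GPT-3 call; chunk sizes and prompt handling are assumptions, not the paper's pipeline.

```python
# Rough sketch of recursive summarization: split the review collection into
# chunks, summarize each chunk, then summarize the chunk summaries until one
# summary remains. `summarize` stands in for a large-language-model call.

def chunks(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

def recursive_summarize(reviews, summarize, size=8):
    layer = list(reviews)
    while len(layer) > 1:
        layer = [summarize("\n".join(group)) for group in chunks(layer, size)]
    return layer[0] if layer else ""

# Toy stand-in "model": keep the first sentence of its input.
toy_summarize = lambda text: text.split(".")[0] + "."
print(recursive_summarize(["Great pool. Loud AC.", "Friendly staff."], toy_summarize))
```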
State-of-the-art summarization models still struggle to be factually consistent with the input text. A model-agnostic way to address this problem is post-editing the generated summaries. However, existing approaches typically fail to remove entity errors if a suitable input entity replacement is not available or may insert erroneous content. In our work, we focus on removing extrinsic entity errors, or entities not in the source, to improve consistency while retaining the summary's essential information and form. We propose to use sentence-compression data to train the post-editing model to take a summary with extrinsic entity errors marked with special tokens and output a compressed, well-formed summary with those errors removed. We show that this model improves factual consistency while maintaining ROUGE, improving entity precision by up to 30% on XSum, and that this model can be applied on top of another post-editor, improving entity precision by up to a total of 38%. We perform an extensive comparison of post-editing approaches that demonstrate trade-offs between factual consistency, informativeness, and grammaticality, and we analyze settings where post-editors show the largest improvements.
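A simplified illustration of the input-preparation step: entities absent from the source are wrapped in marker tokens before being passed to the post-editor. A real system would rely on a NER model and a learned editor; the marker tokens and the substring check here are assumptions.

```python
# Simplified illustration of the post-editor's input preparation: entities in
# the summary that never appear in the source are wrapped in marker tokens so
# a compression-trained editor can learn to drop them. A real system would use
# a NER model; the marker tokens and substring check are assumptions.

def mark_extrinsic_entities(summary, source, entities):
    marked = summary
    for entity in entities:
        if entity.lower() not in source.lower():        # extrinsic: absent from source
            marked = marked.replace(entity, f"<mark> {entity} </mark>")
    return marked

source = "The company reported higher profits this quarter."
summary = "Acme Corp reported higher profits, CEO Jane Doe said."
print(mark_extrinsic_entities(summary, source, ["Acme Corp", "Jane Doe"]))
```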
This paper introduces the shared task of summarizing documents in several creative domains, namely literary texts, movie scripts, and television scripts. Summarizing these creative documents requires making complex literary interpretations, as well as understanding non-trivial temporal dependencies in texts containing varied styles of plot development and narrative structure. This poses unique challenges that remain underexplored in text summarization systems. In this shared task, we introduce four sub-tasks and their corresponding datasets, focusing on summarizing books, movie scripts, primetime television scripts, and daytime soap opera scripts. We detail the process of curating these datasets for the task, as well as the metrics used for the evaluation of the submissions. As part of the CREATIVESUMM workshop at COLING 2022, the shared task attracted 18 submissions in total. We discuss the submissions and the baselines for each sub-task in this paper, along with directions for facilitating future work in the field.
We present FOLIO, a human-annotated, open-domain, and logically complex and diverse dataset for natural language (NL) reasoning, equipped with first-order logic (FOL) annotations. FOLIO consists of 1,435 examples (unique conclusions), each paired with one of 487 sets of premises that serve as rules for deductive reasoning about the validity of each conclusion. The logical correctness of the premises and conclusions is ensured by their parallel FOL annotations, which are automatically verified by our FOL inference engine. Beyond the main NL reasoning task, the NL-FOL pairs in FOLIO also constitute a new NL-FOL translation dataset that uses FOL as the logical form. In extensive experiments, we systematically evaluate the FOL reasoning ability of medium-sized language models (BERT, RoBERTa) fine-tuned on FOLIO and of large language models (GPT-NeoX, OPT, GPT-3, Codex) prompted with few-shot examples. For NL-FOL translation, we experiment with GPT-3 and Codex. Our results show that one of the most capable publicly available large language models (LLMs), GPT-3 davinci, performs only slightly better than random on a subset of FOLIO, and is especially poor at predicting the correct truth values for False and Unknown conclusions. Our dataset and code are available at https://github.com/yale-lily/folio.
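A hypothetical record layout for one NL/FOL example in the spirit described above; the field names are illustrative, not the dataset's actual schema.

```python
# Hypothetical shape of one NL/FOL example; field names are illustrative,
# not the dataset's actual schema.
example = {
    "premises": [
        "All dogs are mammals.",
        "Rex is a dog.",
    ],
    "premises_fol": [
        "forall x. (Dog(x) -> Mammal(x))",
        "Dog(rex)",
    ],
    "conclusion": "Rex is a mammal.",
    "conclusion_fol": "Mammal(rex)",
    "label": "True",  # True / False / Unknown
}
```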
Factual consistency is an essential quality of text summarization models in practical settings. Existing work on evaluating this dimension can be broadly divided into two lines of research, entailment-based and question answering (QA)-based metrics. However, the differing experimental setups proposed in recent work have led to contrasting conclusions about which paradigm performs best. In this work, we conduct an extensive comparison of entailment-based and QA-based metrics, showing that carefully choosing the components of a QA-based metric is critical to performance. Building on those insights, we propose an optimized metric, which we call QAFactEval, that yields an average improvement over previous QA-based metrics on the SummaC factual consistency benchmark. Our solution also improves over the best entailment-based metric and achieves state-of-the-art performance on this benchmark. Furthermore, we find that QA-based and entailment-based metrics offer complementary signals, and we combine the two into a single learned metric for a further boost. Through qualitative and quantitative analyses, we identify question generation and answerability classification as two key components for future work on QA-based metrics.
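A conceptual sketch of the QA-based metric pipeline (answer selection from the summary, question generation, answering against the source, answer comparison), with stand-in components rather than the learned models a metric like QAFactEval actually uses.

```python
# Conceptual sketch of a QA-based factual consistency metric: select answer
# candidates from the summary, generate a question for each, answer the
# question from the source, and compare the two answers. The component
# functions are stand-ins for the learned models a metric like QAFactEval uses.

def token_f1(pred, gold):
    p, g = pred.lower().split(), gold.lower().split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if not common:
        return 0.0
    prec, rec = common / len(p), common / len(g)
    return 2 * prec * rec / (prec + rec)

def qa_consistency(summary, source, select_answers, gen_question, answer):
    scores = []
    for span in select_answers(summary):            # answer candidates from the summary
        question = gen_question(summary, span)      # question targeting that span
        source_answer = answer(question, source)    # answer the question from the source
        scores.append(token_f1(source_answer, span))
    return sum(scores) / len(scores) if scores else 0.0

# Toy stand-ins just to show the data flow.
score = qa_consistency(
    "Paris is the capital of France.",
    "France's capital city is Paris.",
    select_answers=lambda s: ["Paris"],
    gen_question=lambda s, span: f"What is {span}?",
    answer=lambda q, src: "Paris",
)
print(round(score, 2))  # 1.0
```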
Query-focused summarization (QFS) aims to produce summaries that answer particular questions of interest, enabling greater user control and personalization. While recently released datasets such as QMSum or AQuaMuSe facilitate research efforts in QFS, the field lacks a comprehensive study of the broad space of applicable modeling methods. In this paper, we conduct a systematic exploration of QFS, considering two general classes of methods: two-stage extractive pipelines and end-to-end models. Within these categories, we investigate existing methods and present two model extensions that achieve state-of-the-art performance on the QMSum dataset by margins of up to 3.38 ROUGE-1, 3.72 ROUGE-2, and 3.28 ROUGE-L. Through quantitative experiments, we highlight the trade-offs between different model configurations and explore transfer abilities between summarization tasks. Code and checkpoints are publicly available at https://github.com/salesforce/query-focused-sum.
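A minimal extract-then-abstract sketch for QFS; the lexical-overlap ranker and the prompt format are stand-ins, not the models studied in the paper.

```python
import re

# Minimal extract-then-abstract sketch: rank utterances by lexical overlap
# with the query, keep the top k, and hand them to an abstractive model.
# The overlap ranker and the prompt format are stand-ins.

def tokens(text):
    return set(re.findall(r"\w+", text.lower()))

def overlap_score(query, utterance):
    q, u = tokens(query), tokens(utterance)
    return len(q & u) / (len(q) or 1)

def two_stage_qfs(query, utterances, abstractor, top_k=5):
    ranked = sorted(utterances, key=lambda u: overlap_score(query, u), reverse=True)
    return abstractor(f"Query: {query}\nContext: " + " ".join(ranked[:top_k]))

summary = two_stage_qfs(
    "What did the team decide about the budget?",
    ["We agreed to cut the budget by ten percent.",
     "Lunch was pizza.",
     "The budget decision was unanimous."],
    abstractor=lambda prompt: prompt.splitlines()[-1],  # toy "model": echo the context line
    top_k=2,
)
print(summary)
```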
Natural language processing researchers have identified limitations in the evaluation methodology for generation tasks, raising new questions about the validity of automatic metrics and of crowdworker judgments. Meanwhile, efforts to improve generation models tend to focus on simple n-gram overlap metrics (e.g., BLEU, ROUGE). We argue that new advances in models and metrics should each more directly benefit and inform the other. We therefore propose bidimensional leaderboards (Billboards), a generalization of competitive leaderboards that simultaneously tracks progress in language generation tasks and in the metrics used to evaluate them. Unlike conventional unidimensional leaderboards that sort submitted systems by a predetermined metric, a Billboard accepts both generators and evaluation metrics as competing entries. A Billboard automatically creates an ensemble metric that selects and linearly combines a few metrics based on a global analysis across generators. Furthermore, metrics are ranked based on their correlations with human judgments. We release four Billboards for machine translation, summarization, and image captioning. We show that a linear ensemble of a few diverse metrics sometimes substantially outperforms existing metrics in isolation. Our mixed-effects model analysis shows that most automatic metrics, especially reference-based ones, overrate machine-generated text relative to human-generated text, demonstrating the importance of updating metrics as generation models become stronger (and perhaps more similar to humans) in the future.
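A simplified stand-in for the two core mechanisms described above, ranking metrics by correlation with human judgments and fitting a linear ensemble of metric scores; the actual Billboard analysis is more involved and global across generators.

```python
import numpy as np

# Simplified stand-in for two Billboard mechanisms: ranking metrics by their
# correlation with human judgments, and fitting a linear ensemble of metric
# scores to those judgments.

def rank_metrics(metric_scores, human_scores):
    """metric_scores: dict of metric name -> per-system score array."""
    corrs = {name: np.corrcoef(scores, human_scores)[0, 1]
             for name, scores in metric_scores.items()}
    return sorted(corrs.items(), key=lambda kv: kv[1], reverse=True)

def fit_ensemble(metric_scores, human_scores):
    names = list(metric_scores)
    X = np.column_stack([metric_scores[n] for n in names])
    weights, *_ = np.linalg.lstsq(X, np.asarray(human_scores), rcond=None)
    return dict(zip(names, weights))

human = [0.8, 0.6, 0.4]                                   # toy human judgments per system
metrics = {"rouge": np.array([0.5, 0.4, 0.3]),
           "bleu": np.array([0.2, 0.3, 0.1])}
print(rank_metrics(metrics, human))
print(fit_ensemble(metrics, human))
```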
Community Question Answering (CQA) fora such as Stack Overflow and Yahoo! Answers contain a rich resource of answers to a wide range of community-based questions. Each question thread can receive a large number of answers approaching the question from different perspectives. One goal of answer summarization is to produce a summary that reflects the range of answer perspectives. A major obstacle to abstractive answer summarization is the absence of a dataset providing supervision for producing such summaries. Recent works have proposed heuristics to create such data, but these are noisy and do not cover all perspectives present in the answers. This work introduces a novel dataset of 4,631 CQA threads for answer summarization, curated by professional linguists. Our pipeline collects annotations for all subtasks involved in answer summarization, including selecting answer sentences relevant to the question, grouping these sentences by perspective, summarizing each perspective, and generating an overall summary. We analyze and benchmark state-of-the-art models on these subtasks and introduce a novel unsupervised approach for multi-perspective data augmentation that further boosts overall summarization performance according to automatic evaluation. Finally, we propose reinforcement learning rewards to improve factual consistency and answer coverage, and analyze areas for further improvement.
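A hypothetical shape for one annotated thread, mirroring the subtasks listed above; the field names and content are illustrative only, not the dataset's actual schema.

```python
# Hypothetical shape of one annotated CQA thread, mirroring the subtasks
# described above (sentence selection, perspective grouping, per-perspective
# summaries, overall summary). Field names and content are illustrative only.
thread = {
    "question": "How do I keep my sourdough starter alive while traveling?",
    "answers": [
        "Put it in the fridge; it slows fermentation a lot.",
        "I dry mine into flakes and rehydrate it later.",
    ],
    "relevant_sentences": [
        "Put it in the fridge; it slows fermentation a lot.",
        "I dry mine into flakes and rehydrate it later.",
    ],
    "perspectives": {
        "refrigerate": ["Put it in the fridge; it slows fermentation a lot."],
        "dry it out": ["I dry mine into flakes and rehydrate it later."],
    },
    "perspective_summaries": {
        "refrigerate": "Refrigeration slows the starter enough to survive a trip.",
        "dry it out": "Drying the starter and rehydrating it later also works.",
    },
    "overall_summary": "Either refrigerate the starter or dry it for the trip.",
}
```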