Recent advances in deep learning research, such as transformers, have bolstered the ability of automated agents to generate creative texts similar to those a human would write. By default, transformer decoders can only generate new text with respect to previously generated text. The output distribution of candidate tokens at any position is conditioned on previously selected tokens using a self-attention mechanism to emulate the property of autoregression. This is inherently limiting for tasks such as controllable story generation, where it may be necessary to condition on future plot events when writing a story. In this work, we propose Future Sight, a method for finetuning a pretrained generative transformer on the task of future conditioning. Transformer decoders are typically pretrained on the task of completing a context, one token at a time, by means of self-attention. Future Sight additionally enables a decoder to attend to an encoded future plot event. This motivates the decoder to expand on the context in a way that logically concludes with the provided future. During inference, the future plot event can be written by a human author to steer the generated narrative in a certain direction. We evaluate the efficacy of our approach with human evaluators on a story generation task.
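The mechanism described above can be pictured as a standard decoder block augmented with cross-attention over the encoded future event. The PyTorch sketch below is an illustrative assumption about the architecture, not the authors' released implementation; module names and dimensions are invented for clarity.

```python
# A minimal sketch of a future-conditioned decoder block: causal self-attention
# over previously generated tokens, plus cross-attention to an encoded future
# plot event. Names and sizes are assumptions for illustration.
import torch
import torch.nn as nn

class FutureConditionedBlock(nn.Module):
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
        self.ln1, self.ln2, self.ln3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, x, future_enc):
        # Causal self-attention emulates ordinary autoregressive decoding.
        T = x.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h, _ = self.self_attn(x, x, x, attn_mask=causal)
        x = self.ln1(x + h)
        # Cross-attention lets every position peek at the encoded future event.
        h, _ = self.cross_attn(x, future_enc, future_enc)
        x = self.ln2(x + h)
        return self.ln3(x + self.ff(x))

block = FutureConditionedBlock()
tokens = torch.randn(1, 16, 768)   # decoder hidden states for the context so far
future = torch.randn(1, 8, 768)    # encoding of a human-written future plot event
out = block(tokens, future)        # shape (1, 16, 768)
```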
We explore story generation: creative systems that can build coherent and fluent passages of text about a topic. We collect a large dataset of 300K human-written stories paired with writing prompts from an online forum. Our dataset enables hierarchical story generation, where the model first generates a premise and then transforms it into a passage of text. We gain further improvements with a novel form of model fusion that improves the relevance of the story to the prompt, and by adding a new gated multi-scale self-attention mechanism to model long-range context. Experiments show large improvements over strong baselines on both automated and human evaluations. Human judges prefer stories generated by our approach over those from a strong non-hierarchical model by a factor of two to one.
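As a rough illustration of the two-stage hierarchy (prompt, then premise, then passage), the sketch below chains an off-the-shelf generator twice. The GPT-2 checkpoint and prompt wording are stand-ins; the paper's model fusion and gated multi-scale self-attention are not reproduced here.

```python
# A hedged sketch of hierarchical generation: stage 1 produces a short premise,
# stage 2 conditions the full passage on it. Model and prompts are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def generate_story(topic: str) -> str:
    # Stage 1: generate a premise conditioned on the writing prompt.
    premise = generator(f"Writing prompt: {topic}\nPremise:",
                        max_new_tokens=40, do_sample=True)[0]["generated_text"]
    # Stage 2: transform the premise into a passage of text.
    story = generator(premise + "\nStory:",
                      max_new_tokens=200, do_sample=True)[0]["generated_text"]
    return story

print(generate_story("A lighthouse keeper finds a message in a bottle."))
```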
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the low level of interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging research hotspot. A diverse range of approaches has emerged in the past 3-4 years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Automated storytelling has long captured the attention of researchers because of the ubiquity of narratives in everyday life. However, when generating narratives with neural language models, maintaining coherence and staying on course toward a specific ending are challenging. In this paper, we introduce Story generation with Reader Models (StoRM), a framework in which a reader model is used to reason about how a human reader believes the story should progress. A reader model is what a human reader believes about the concepts, entities, and relations of the fictional story world. We show how an explicit reader model, represented as a knowledge graph, provides story coherence, as well as controllability in the form of achieving a given story-world goal. Experiments show that our model produces significantly more coherent and on-topic stories, outperforming baselines on dimensions including plot plausibility and staying on topic. Our system also outperforms outline-guided story generation baselines in composing given concepts without requiring them to be ordered.
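The reader-model idea can be made concrete as a knowledge graph of believed triples that is updated as the story unfolds and queried for goal satisfaction. The toy sketch below is an assumption for illustration only; StoRM infers triples from generated text rather than receiving them directly.

```python
# An illustrative reader model: the reader's beliefs about the story world are
# tracked as (subject, relation, object) triples, and a story-world goal is
# simply a triple that must eventually hold. Not the StoRM codebase.
class ReaderModel:
    def __init__(self):
        self.triples = set()  # the reader's current beliefs about the story world

    def update(self, new_triples):
        # In the paper, triples would be inferred from generated story text;
        # here they are supplied directly for clarity.
        self.triples |= set(new_triples)

    def goal_reached(self, goal_triple):
        return goal_triple in self.triples

reader = ReaderModel()
reader.update([("knight", "travels_to", "castle")])
reader.update([("knight", "rescues", "princess")])
print(reader.goal_reached(("knight", "rescues", "princess")))  # True
```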
Recent advances in deep neural language models, combined with the capacity of large-scale datasets, have accelerated the development of natural language generation systems that produce fluent and coherent texts (to various degrees of success) in a multitude of tasks and application contexts. However, controlling the output of these models for desired user needs remains an open challenge. This is crucial not only for customizing the content and style of the generated language, but also for the safe and reliable deployment of these models in the real world. We present an extensive survey on the emerging topic of constrained neural language generation, in which we formally define and categorize the problems of natural language generation by distinguishing between conditions and constraints (the latter being testable conditions on the output text rather than the input), present constrained text generation tasks, and review existing methods and evaluation metrics for constrained text generation. Our aim is to highlight recent progress and trends in this emerging field, informing on the most promising directions and limitations toward advancing the state of the art of constrained neural language generation research.
Storytelling and narrative are fundamental to human experience, intertwined with our social and cultural engagement. As such, researchers have long attempted to create systems that can generate stories automatically. In recent years, powered by deep learning and massive data resources, automatic story generation has shown significant advances. However, considerable challenges, like the need for global coherence in generated stories, still hamper generative models from reaching the same storytelling ability as human narrators. To tackle these challenges, many studies seek to inject structured knowledge into the generation process, which is referred to as structure knowledge-enhanced story generation. Incorporating external knowledge can enhance the logical coherence among story events, achieve better knowledge grounding, and alleviate over-generalization and repetition problems in stories. This survey provides the latest and most comprehensive review of this research field: (i) we present a systematic taxonomy regarding how existing methods integrate structured knowledge into story generation; (ii) we summarize the story corpora, structured knowledge datasets, and evaluation metrics involved; (iii) we give multidimensional insights into the challenges of knowledge-enhanced story generation and shed light on promising directions for future study.
The advent of large pre-trained generative language models has provided a common framework for AI story generation by sampling the model to create sequences that continue a story. However, sampling alone is insufficient for story generation. In particular, it is hard to direct a language model to create stories that reach a specific goal event. We present two automated techniques grounded in deep reinforcement learning and reward shaping to control the plot of computer-generated stories. The first utilizes proximal policy optimization to fine-tune an existing transformer-based language model to generate text continuations that are also goal-seeking. The second extracts a knowledge graph from the unfolding story, which is used by a policy network with graph attention to select a candidate continuation generated by the language model. We report automated metrics pertaining to how often stories achieve a given goal event, as well as human participant rankings of coherence and overall story quality, compared to baselines and ablations.
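The reward-shaping component can be illustrated with a toy shaped reward that scores a sampled continuation by how much closer it moves the story to the goal event. The bag-of-words overlap below is a deliberately crude stand-in for the paper's reward signal, and the PPO policy update itself is omitted.

```python
# A hedged sketch of reward shaping for goal-directed story generation:
# continuations earn positive reward for making *progress* toward the goal
# event, not just for raw similarity. Illustrative only.
def shaped_reward(continuation: str, goal_event: str, prev_overlap: float) -> float:
    cont_words = set(continuation.lower().split())
    goal_words = set(goal_event.lower().split())
    overlap = len(cont_words & goal_words) / max(len(goal_words), 1)
    # Shaping: reward is the improvement over the previous step's overlap.
    return overlap - prev_overlap

goal = "the knight rescues the princess from the tower"
r1 = shaped_reward("the knight rides toward the tower", goal, prev_overlap=0.0)
r2 = shaped_reward("the knight rescues the princess", goal, prev_overlap=r1)
print(r1, r2)  # events that approach the goal earn positive shaped reward
```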
Automated story generation approaches based on neural language models suffer from two important limitations. First, language-model-based story generators generally do not work toward a given goal or ending. Second, they often lose coherence as the story gets longer. We propose a novel approach to automated story generation that treats the problem as one of generative question answering. Our proposed story generation system starts with a sentence encapsulating the final event of the story. The system then iteratively (1) analyzes the text describing the most recent event, (2) generates a question about "why" a character is doing what they are doing in that event, and then (3) attempts to generate another, preceding event that answers this question.
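The iterative loop in steps (1)-(3) can be sketched as follows. The `ask_why` and `answer_with_prior_event` helpers are hypothetical stand-ins for the system's question-generation and question-answering models.

```python
# A schematic sketch of backward, question-answering-driven story generation:
# starting from the final event, repeatedly ask "why?" and generate a preceding
# event that answers the question. Helper functions are hypothetical stubs.
def ask_why(event: str) -> str:
    # Stand-in for the system's question-generation model.
    return f"Why does the character do this: {event!r}?"

def answer_with_prior_event(question: str) -> str:
    # Stand-in for the system's question-answering event generator.
    return f"[an earlier event answering: {question}]"

def generate_backward(final_event: str, n_events: int) -> list:
    events = [final_event]
    for _ in range(n_events - 1):
        question = ask_why(events[0])               # step (2)
        prior = answer_with_prior_event(question)   # step (3)
        events.insert(0, prior)                     # the story grows back-to-front
    return events

print(generate_backward("The dragon finally surrenders.", 3))
```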
Recent advances in text-to-image synthesis have led to large pretrained transformers with excellent capabilities to generate visualizations from a given text. However, these models are ill-suited for specialized tasks like story visualization, which requires an agent to produce a sequence of images given a corresponding sequence of captions that together form a narrative. Moreover, we find that the story visualization task does not accommodate generalization to unseen plots and characters in new narratives. Hence, we first propose the task of story continuation, where the generated visual story is conditioned on a source image, allowing for better generalization to narratives with new characters. Then, we enhance or "retro-fit" pretrained text-to-image synthesis models with task-specific modules for (a) sequential image generation and (b) copying relevant elements from an initial frame. We then explore full-model finetuning of the pretrained model, as well as prompt-based tuning for parameter-efficient adaptation. We evaluate our approach, StoryDALL-E, on two existing datasets, PororoSV and FlintstonesSV, and introduce a new dataset, DiDeMoSV, collected from a video-captioning dataset. We also develop a model, StoryGANc, based on generative adversarial networks (GANs) for story continuation, and compare it with the StoryDALL-E model to demonstrate the advantages of our approach. We show that our retro-fitting approach outperforms GAN-based models for story continuation and facilitates copying of visual elements from the source image, thereby improving continuity in the generated visual story. Finally, our analysis suggests that pretrained transformers struggle to comprehend narratives containing several characters. Overall, our work demonstrates that pretrained text-to-image synthesis models can be adapted for complex and low-resource tasks like story continuation.
The prediction of event sequences is crucial for many real-world applications in information retrieval and natural language processing. Within event sequence prediction, future event generation (FEG) is a challenging task because it requires not only fluent text generation but also commonsense reasoning to maintain the logical coherence of the entire event story. In this paper, we propose COEP, a novel explainable FEG framework. It highlights and integrates two types of event knowledge: sequential knowledge of direct event-event relations, and inferential knowledge that reflects the intermediate character psychology between events (such as intents, causes, and reactions), which intrinsically pushes the story forward. To alleviate the knowledge-forgetting issue, we design two modules, IM and GM, one for each type of knowledge, which are combined via prompt tuning. First, IM focuses on understanding inferential knowledge to generate commonsense explanations and provide soft prompt vectors for GM. We also design a contrastive discriminator for better generalization ability. Second, GM generates future events by modeling direct sequential knowledge under the guidance of IM. Automatic and human evaluations demonstrate that our approach can generate more coherent, specific, and logical future events.
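The coupling between IM and GM via prompt tuning can be pictured as IM emitting soft prompt vectors that are prepended to GM's input embeddings. The sketch below is an illustrative assumption; the dimensions, module names, and linear projection are invented.

```python
# A hedged sketch of soft-prompt coupling: an inference module maps a pooled
# event representation to a short sequence of soft prompt vectors, which are
# prepended to the generation module's token embeddings. Illustrative only.
import torch
import torch.nn as nn

d_model, n_prompt = 768, 8

class InferenceModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(d_model, n_prompt * d_model)

    def forward(self, event_repr):                 # (batch, d_model)
        # Map the event representation to n_prompt soft prompt vectors.
        return self.proj(event_repr).view(-1, n_prompt, d_model)

im = InferenceModule()
event = torch.randn(2, d_model)                    # pooled encoding of past events
soft_prompt = im(event)                            # (2, 8, 768)
token_embeds = torch.randn(2, 20, d_model)         # GM's input token embeddings
gm_input = torch.cat([soft_prompt, token_embeds], dim=1)  # prepend the prompts
print(gm_input.shape)                              # torch.Size([2, 28, 768])
```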
Connecting vision and language plays an essential role in generative intelligence. For this reason, large research efforts have been devoted to image captioning, i.e., describing images with syntactically and semantically meaningful sentences. Starting from 2015, the task has generally been addressed with pipelines composed of a visual encoder and a language model for text generation. Over these years, both components have evolved considerably through the exploitation of object regions and attributes, the introduction of multimodal connections, fully attentive approaches, and BERT-like early-fusion strategies. However, regardless of the impressive results, research in image captioning has not yet reached a conclusive answer. This work aims to provide a comprehensive overview of image captioning approaches, from visual encoding and text generation to training strategies, datasets, and evaluation metrics. In this respect, we quantitatively compare many relevant state-of-the-art approaches to identify the most impactful technical innovations in architectures and training strategies. Moreover, many variants of the problem and its open challenges are discussed. The ultimate goal of this work is to serve as a tool for understanding the existing literature and highlighting future directions for a research area where computer vision and natural language processing can find an optimal synergy.
This survey draws a broad panoramic picture of the state of the art (SOTA) of research on generative methods for analyzing social media data. It fills a gap, as existing survey articles are either narrower in scope or dated. We include two aspects that are currently gaining importance in mining and modeling social media: dynamics and networks. Social dynamics are important for understanding the spreading of influence or diseases, the formation of friendships, and so on; networks, on the other hand, can capture various complex relationships, providing additional insight and identifying important patterns that would otherwise go unnoticed.
Pre-trained language models (PLMs) fail to generate long-form narrative text because they do not consider global structure. As a result, the generated texts are often incohesive, repetitive, or lacking in content. Recent work in story generation has reintroduced explicit content planning in the form of prompts, keywords, or semantic frames. Trained on large parallel corpora, these models can generate more logical event sequences and thus more contentful stories. However, these intermediate representations are often not in natural language and cannot be utilized by PLMs without fine-tuning. We propose generating story plots using off-the-shelf PLMs while maintaining the benefit of content planning to produce cohesive and contentful stories. Our proposed method, ScratchPlot, first prompts a PLM to compose a content plan. Then, we generate the story's body and ending conditioned on the content plan. Furthermore, we use an additional PLM to rank the generated (story, ending) pairs. We benchmark our method against various baselines and achieve superior results in both human and automatic evaluation.
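A generate-and-rank pipeline in this spirit can be sketched with off-the-shelf GPT-2 checkpoints: prompt for a content plan, sample several candidates conditioned on it, and keep the candidate that a second PLM scores as most probable. The prompt wording and the perplexity-based ranker are assumptions, not the paper's exact recipe.

```python
# A hedged sketch of a ScratchPlot-style pipeline: plan, generate, then rank
# candidates by a second PLM's perplexity (lower is better). Illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer, pipeline

tok = GPT2Tokenizer.from_pretrained("gpt2")
ranker = GPT2LMHeadModel.from_pretrained("gpt2")
writer = pipeline("text-generation", model="gpt2")

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = ranker(ids, labels=ids).loss
    return float(loss.exp())

def plan_then_write(topic: str, n_candidates: int = 3) -> str:
    # Step 1: prompt the PLM for a content plan in natural language.
    plan = writer(f"Story plan for '{topic}':", max_new_tokens=40,
                  do_sample=True)[0]["generated_text"]
    # Step 2: generate several story candidates conditioned on the plan.
    candidates = [writer(plan + "\nStory:", max_new_tokens=150,
                         do_sample=True)[0]["generated_text"]
                  for _ in range(n_candidates)]
    # Step 3: rank candidates with a second PLM; lowest perplexity wins.
    return min(candidates, key=perplexity)
```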
The task of inserting text into a specified position in a passage, known as fill in the blank (FitB), is useful for a variety of applications in which writers interact with a natural language generation (NLG) system to craft text. While prior work has tackled this problem with models trained specifically for the task, a more useful model is one that can effectively perform _both_ FitB and continuation. In this work, we evaluate the feasibility of using a single model for both tasks. We show that models pre-trained with a FitB-style objective are capable of both tasks, while models pre-trained for continuation are not. Finally, we show how FitB models can be easily fine-tuned to allow fine-grained control over the length and word choice of the generation.
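A FitB-style objective can be demonstrated with T5's span-infilling sentinels, where `<extra_id_0>` marks the blank to be filled. This is an illustrative setup, not necessarily the exact models evaluated in the paper.

```python
# A small runnable illustration of fill-in-the-blank with a span-infilling PLM:
# T5's sentinel token marks the blank, and the model generates the missing span.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

text = "The detective opened the door and <extra_id_0> before calling for backup."
ids = tok(text, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=False))
# The output contains <extra_id_0> followed by the model's predicted blank text.
```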
The goal of building dialogue agents that can converse with humans naturally has been a long-standing dream of researchers since the early days of artificial intelligence. The well-known Turing Test proposed to judge the ultimate validity of an artificial intelligence agent by the indistinguishability of its dialogues from humans'. It should come as no surprise that human-level dialogue systems are very challenging to build. But, while early efforts on rule-based systems found limited success, the emergence of deep learning has enabled great advances on this topic. In this thesis, we focus on methods that address the numerous issues underlying the gap between artificial conversational agents and human-level interlocutors. These methods were proposed and experimented with in ways inspired by general state-of-the-art AI methodologies, but they also target the characteristics that dialogue systems possess.
This paper offers a comprehensive review of the research on natural language generation (NLG) over the past two decades, especially in relation to data-to-text and text-to-text deep learning methods, as well as new applications of NLG technology. This survey aims to (a) give the latest synthesis of research on the core tasks of NLG, as well as the architectures adopted in the field; (b) detail the various NLG tasks and datasets, and draw attention to the challenges in NLG evaluation, focusing on different evaluation methods and their relationships; (c) highlight some future emphases and relatively recent research issues that arise due to the increasing synergy between NLG and other areas of artificial intelligence, such as computer vision, text, and computational creativity.
Controlled automated story generation seeks to generate natural language stories satisfying constraints from natural language critiques or preferences. Existing methods to control for story preference utilize prompt engineering, which is labor intensive and often inconsistent. They may also use logit-manipulation methods, which require annotated datasets to exist for the desired attributes. To address these issues, we first train a contrastive bi-encoder model to align stories with corresponding human critiques, named CARP, building a general-purpose preference model. This is subsequently used as a reward function to fine-tune a generative language model via reinforcement learning. However, simply fine-tuning a generative language model with a contrastive reward model does not always reliably result in a story generation system capable of generating stories that meet user preferences. To increase story generation robustness, we further fine-tune the contrastive reward model using a prompt-learning technique. A human participant study is then conducted comparing generations from our full system, ablations, and two baselines. We show that the full fine-tuning pipeline results in a story generator preferred over an LLM 20x as large, as well as over logit-based methods. This motivates the use of contrastive learning for general-purpose human preference modeling.
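The contrastive bi-encoder can be sketched as two encoders trained with a symmetric InfoNCE loss, as in CLIP; the diagonal similarities then serve as scalar rewards during RL fine-tuning. The linear encoders below are stand-ins for the transformers CARP actually uses, and the temperature value is an assumption.

```python
# A minimal sketch of a contrastive bi-encoder preference model: separate story
# and critique encoders, symmetric InfoNCE over aligned pairs. Illustrative,
# not the released CARP implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiEncoder(nn.Module):
    def __init__(self, d_in=768, d_out=256):
        super().__init__()
        self.story_enc = nn.Linear(d_in, d_out)     # stand-in for a transformer
        self.critique_enc = nn.Linear(d_in, d_out)  # stand-in for a transformer

    def forward(self, story_feats, critique_feats):
        s = F.normalize(self.story_enc(story_feats), dim=-1)
        c = F.normalize(self.critique_enc(critique_feats), dim=-1)
        return s @ c.T  # pairwise similarity logits

model = BiEncoder()
stories = torch.randn(8, 768)    # pooled story representations
critiques = torch.randn(8, 768)  # pooled critique representations
logits = model(stories, critiques) / 0.07    # assumed temperature
targets = torch.arange(8)                    # aligned pairs lie on the diagonal
loss = (F.cross_entropy(logits, targets) +
        F.cross_entropy(logits.T, targets)) / 2
loss.backward()
# At RL time, the diagonal similarity can serve as a scalar reward for PPO.
```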
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG and, in turn, to improvements in downstream tasks such as abstractive summarization, dialogue generation, and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented on measuring and mitigating hallucinated text, but these have never before been reviewed in a comprehensive manner. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks: abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated text in NLG.
Large pre-trained language models are capable of generating varied and fluent texts. Starting from a prompt, these models generate a narrative that can develop unpredictably. Existing methods for controllable text generation, which guide the narrative in the text toward a user-specified direction, require creating a training corpus and an additional time-consuming training procedure. This paper proposes and investigates Collocation2Text, a plug-and-play method for automatic controllable text generation in Russian that does not require fine-tuning. The method is based on two interacting models: the autoregressive language model ruGPT-3 and the autoencoding language model ruRoBERTa. The idea of the method is to shift the output distribution of the autoregressive model according to the output distribution of the autoencoding model, so as to ensure a coherent transition of the narrative toward guide phrases, which can contain single words or collocations. The autoencoding model, which is able to take into account both the left and right contexts of a token, "tells" the autoregressive model which tokens are the most or least logical at the current generation step, increasing or decreasing the probabilities of the corresponding tokens. Experiments on generating news articles using the proposed method showed its effectiveness for automatically producing fluent texts that contain coherent transitions between user-specified phrases.
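The core logit-shifting step can be sketched as mixing the autoregressive model's next-token log-probabilities with the masked LM's distribution for the same slot. In the sketch below, random logits stand in for ruGPT-3 and ruRoBERTa outputs, and the mixing weight `alpha` is an assumed hyperparameter.

```python
# A hedged sketch of plug-and-play logit shifting: the autoregressive LM's
# distribution is pulled toward tokens the masked LM (which sees the guide
# phrase to the right) finds logical. Illustrative, not the paper's code.
import torch
import torch.nn.functional as F

vocab_size, alpha = 50_000, 0.5

ar_logits = torch.randn(vocab_size)   # autoregressive LM: left context only
mlm_logits = torch.randn(vocab_size)  # masked LM: left and right context

# Shift the autoregressive distribution toward the masked LM's preferences.
mixed_log_probs = (1 - alpha) * F.log_softmax(ar_logits, dim=-1) \
                  + alpha * F.log_softmax(mlm_logits, dim=-1)
next_token = torch.multinomial(mixed_log_probs.softmax(dim=-1), 1)
print(next_token.item())
```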
GPT-2 is frequently adapted in story generation models because it provides powerful generative capability. However, it still fails to generate consistent stories and lacks diversity. Current story generation models inject additional information, such as plots or commonsense knowledge, into GPT-2 to guide the generation process. These approaches focus on improving the generation quality of stories, whereas our work considers both quality and diversity. We explore combining BERT and GPT-2 to build a variational autoencoder (VAE), and extend it by adding additional objectives to learn global features such as story topic and discourse relations. Our evaluations show that our enhanced VAE provides a better quality-diversity trade-off, generates less repetitive story content, and learns a more informative latent variable.
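The encoder-decoder coupling can be pictured as mapping BERT's pooled output to a Gaussian latent whose reparameterized sample conditions GPT-2, e.g. as a prefix embedding. The sketch below uses invented dimensions and omits the additional topic and discourse objectives.

```python
# A compact sketch of a BERT-encoder / GPT-2-decoder VAE: pooled encoding ->
# Gaussian latent -> reparameterized sample -> decoder prefix, plus the KL
# regularizer. Names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class StoryVAE(nn.Module):
    def __init__(self, d_model=768, d_latent=32):
        super().__init__()
        self.to_mu = nn.Linear(d_model, d_latent)
        self.to_logvar = nn.Linear(d_model, d_latent)
        self.to_prefix = nn.Linear(d_latent, d_model)  # injects z into the decoder

    def forward(self, pooled):               # pooled BERT encoding, (batch, 768)
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return self.to_prefix(z), kl         # prefix for GPT-2, KL regularizer

vae = StoryVAE()
pooled = torch.randn(4, 768)                 # stand-in for BERT [CLS] outputs
prefix, kl = vae(pooled)
print(prefix.shape, kl.item())               # torch.Size([4, 768]) and a scalar
```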