Neural text generation models are likely to suffer from the low-diversity problem. Various decoding strategies and training-based methods have been proposed to promote diversity only by exploiting contextual features, but rarely do they consider incorporating syntactic structure clues. In this work, we propose using linguistic annotation, i.e., part-of-speech (POS), to guide text generation. In detail, we introduce a POS-guided softmax to explicitly model two posterior probabilities: (i) the next POS, and (ii) the next token sampled from the vocabulary of the target POS. A POS-guided sampling strategy is further proposed to address the low-diversity problem by enriching the diversity of POS. Extensive experiments and human evaluations show that, compared with existing state-of-the-art methods, our POS-guided softmax and sampling (POSG) can generate more diverse text while maintaining comparable quality.
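To make the two-stage factorization concrete, here is a minimal sketch of POS-guided sampling in the spirit described above; the tag set, vocabularies, and logits are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabularies grouped by POS tag (illustrative only).
pos_vocab = {
    "NOUN": ["cat", "dog", "tree"],
    "VERB": ["runs", "sleeps"],
    "ADJ":  ["quick", "lazy"],
}

def pos_guided_sample(pos_logits, token_logits_by_pos):
    """Sample a token in two stages: first a POS tag, then a token
    restricted to that tag's vocabulary."""
    pos_tags = list(token_logits_by_pos)              # order matches pos_logits
    pos_probs = np.exp(pos_logits - pos_logits.max())
    pos_probs /= pos_probs.sum()
    tag = rng.choice(pos_tags, p=pos_probs)           # (i) next POS
    logits = token_logits_by_pos[tag]
    tok_probs = np.exp(logits - logits.max())
    tok_probs /= tok_probs.sum()
    word = rng.choice(pos_vocab[tag], p=tok_probs)    # (ii) next token given POS
    return tag, word

pos_logits = np.array([1.2, 0.4, -0.3])               # NOUN, VERB, ADJ
token_logits = {t: rng.normal(size=len(v)) for t, v in pos_vocab.items()}
print(pos_guided_sample(pos_logits, token_logits))
```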
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints in practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the lower level of interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches have emerged in the last 3-4 years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing, and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g., to generate a story). The counter-intuitive empirical observation is that even though the use of likelihood as a training objective leads to high-quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration: output text that is bland, incoherent, or gets stuck in repetitive loops. To address this, we propose Nucleus Sampling, a simple but effective method to draw considerably higher-quality text out of neural language models than previous decoding strategies. Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass. To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition. Our results show that (1) maximization is an inappropriate decoding objective for open-ended text generation, (2) the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation, and (3) Nucleus Sampling is currently the best available decoding strategy for generating long-form text that is both high-quality, as measured by human evaluation, and as diverse as human-written text. Context (the paper's example prompt): In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
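The truncation step is simple to state in code. Below is a minimal sketch of top-p (nucleus) sampling over raw next-token logits; the toy logits are illustrative.

```python
import numpy as np

def nucleus_sample(logits, p=0.95, rng=None):
    """Top-p (nucleus) sampling: keep the smallest set of most-likely tokens
    whose cumulative probability exceeds p, renormalize, and sample."""
    rng = rng or np.random.default_rng(0)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                   # most to least likely
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    nucleus = order[:cutoff]                          # the dynamic nucleus
    return rng.choice(nucleus, p=probs[nucleus] / probs[nucleus].sum())

logits = np.array([3.0, 2.5, 0.1, -1.0, -5.0])        # toy next-token logits
print(nucleus_sample(logits, p=0.9))                  # tail tokens are never drawn
```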
Text generation is of great importance to many natural language processing applications. However, maximization-based decoding methods (e.g., beam search) of neural language models often lead to degenerate solutions: the generated text is unnatural and contains undesirable repetitions. Existing approaches introduce stochasticity via sampling or modify training objectives to decrease the probabilities of certain tokens (e.g., unlikelihood training). However, they often lead to solutions that lack coherence. In this work, we show that an underlying reason for model degeneration is the anisotropic distribution of token representations. We present a contrastive solution: (i) SimCTG, a contrastive training objective to calibrate the model's representation space, and (ii) a decoding method, contrastive search, to encourage diversity while maintaining coherence in the generated text. Extensive experiments and analyses on three benchmarks from two languages demonstrate that our proposed approach significantly outperforms current state-of-the-art text generation methods as evaluated by both human and automatic metrics.
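As a rough illustration of the decoding side, here is a sketch of one contrastive-search step, assuming the caller supplies the model's top-k candidate probabilities and hidden representations; the variable names are ours, not the paper's API.

```python
import numpy as np

def contrastive_search_step(probs, cand_ids, cand_states, prev_states, alpha=0.6):
    """One contrastive-search step: among top-k candidate tokens, pick the one
    that balances model confidence against a degeneration penalty, defined as
    the candidate representation's maximum cosine similarity to the
    representations of previously generated tokens."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = []
    for i, tok in enumerate(cand_ids):
        confidence = probs[tok]
        penalty = max(cos(cand_states[i], h) for h in prev_states)
        scores.append((1 - alpha) * confidence - alpha * penalty)
    return cand_ids[int(np.argmax(scores))]

rng = np.random.default_rng(0)
probs = np.array([0.5, 0.3, 0.2])                  # toy next-token distribution
states = rng.normal(size=(3, 8))                   # toy candidate hidden states
prev = rng.normal(size=(4, 8))                     # toy context representations
print(contrastive_search_step(probs, [0, 1, 2], states, prev))
```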
Despite advances in generating fluent texts, existing pre-trained models tend to attach incoherent event sequences to involved entities when generating narratives such as stories and news. We conjecture that such issues result from representing entities as static embeddings of superficial words, while neglecting to model their ever-changing states, i.e., the information they carry as the text unfolds. We therefore extend the Transformer model to dynamically conduct entity state updates and sentence realization for narrative generation. We propose a contrastive framework to learn the state representations in a discrete space, and insert additional attention layers into the decoder to better exploit these states. Experiments on two narrative datasets show that, with the guidance of meaningful entity states, our model can generate more coherent and diverse narratives than strong baselines.
Personalized natural language generation for explainable recommendation plays a key role in justifying why a recommendation might match a user's interests. Existing models usually control the generation process by soft constraints (e.g., aspect planning). While promising, these methods struggle to generate specific information correctly, which prevents generated explanations from being informative and diverse. In this paper, we propose UCEPIC, an explanation generation model that unifies aspect planning and lexical constraints for controllable personalized generation. Specifically, we first pre-train a non-personalized text generator via the proposed robust insertion process so that the model is able to generate sentences containing lexical constraints. Then, we demonstrate a method for incorporating aspect planning and personalized references into the insertion process to obtain personalized explanations. Compared with previous work controlled by soft constraints, UCEPIC incorporates specific information from keyphrases and thus largely improves the diversity and informativeness of generated explanations. Extensive experiments on RateBeer and Yelp show that UCEPIC can generate high-quality and diverse explanations for recommendation.
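Why insertion helps with lexical constraints: if generation starts from the keyphrases themselves and only ever inserts tokens around them, the constraints are satisfied by construction. The toy loop below illustrates just that mechanism; UCEPIC's insertion model of course predicts the inserted tokens, whereas this sketch draws placeholder fillers at random.

```python
import random

def toy_insertion_generation(keyphrases, filler, steps=4, seed=0):
    """Toy illustration of insertion-based generation under lexical
    constraints: start from the keyphrases and repeatedly insert a token
    into a random slot, so the keyphrases survive by construction."""
    rng = random.Random(seed)
    tokens = list(keyphrases)
    for _ in range(steps):
        slot = rng.randrange(len(tokens) + 1)
        tokens.insert(slot, rng.choice(filler))
    return " ".join(tokens)

print(toy_insertion_generation(["hoppy", "citrus finish"],
                               ["a", "beer", "with", "nice"]))
```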
Enhancing the diversity of sentences that describe video content is an important problem raised in recent video captioning research. In this paper, we explore this problem from a novel perspective of customizing video captions by imitating exemplar sentence syntax. Specifically, given a video and any syntactically valid exemplar sentence, we introduce a new task of Syntax-Customized Video Captioning (SCVC), which aims to generate a caption that not only semantically describes the video content but also syntactically imitates the given exemplar sentence. To tackle the SCVC task, we propose a novel video captioning model, in which a hierarchical sentence syntax encoder is first designed to extract the syntactic structure of the exemplar sentence, and a syntax-conditioned caption decoder is then designed to generate the syntactically structured caption expressing the video semantics. As there are no ground-truth syntax-customized video captions available, we tackle this challenge by proposing a new training strategy, which leverages traditional pairwise video captioning data and our collected exemplar sentences to accomplish model learning. Extensive experiments, in terms of semantic, syntactic, fluency, and diversity evaluations, clearly demonstrate our model's capability to generate syntax-varied and semantics-coherent video captions that well imitate different exemplar sentences with rich diversity.
The dominant paradigm of neural text generation is left-to-right decoding with autoregressive language models. However, constrained or controllable generation under complex lexical constraints requires foresight to plan ahead for feasible future paths. Drawing inspiration from the A* search algorithm, we propose NeuroLogic A*esque, a decoding algorithm that incorporates heuristic estimates of future cost. We develop lookahead heuristics that are efficient for large-scale language models, making our method a drop-in replacement for common techniques such as beam search and top-k sampling. To enable constrained generation, we build on NeuroLogic decoding (Lu et al., 2021), combining its flexibility in handling logical constraints with A*esque estimates of future constraint satisfaction. Our approach outperforms competitive baselines on five generation tasks and achieves new state-of-the-art performance on table-to-text generation, constrained machine translation, and keyword-constrained generation. The improvements are particularly notable on tasks that require complex constraint satisfaction, as well as in few-shot and zero-shot settings. NeuroLogic A*esque illustrates the power of decoding for improving and enabling new capabilities of large-scale language models.
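The A* analogy can be sketched as ranking candidates by accumulated score plus a heuristic, i.e., g + h. In the sketch below, the future-cost function is a crude stand-in for the paper's lookahead heuristics (which score feasible continuations with the LM itself); all names are ours.

```python
import numpy as np

def astar_esque_rank(cand_texts, cand_logprobs, future_cost, lam=1.0):
    """Rank candidate continuations by accumulated log-probability (g) plus a
    heuristic estimate of future cost (h), in the spirit of A* search."""
    scores = [lp + lam * future_cost(t) for lp, t in zip(cand_logprobs, cand_texts)]
    return [cand_texts[i] for i in np.argsort(scores)[::-1]]

keyword = "river"                                  # a lexical constraint
def toy_future_cost(prefix):                       # hypothetical estimate of
    return 0.0 if keyword in prefix else -2.0      # future constraint satisfaction

cands = ["the river flows", "the cat sat"]
print(astar_esque_rank(cands, np.array([-1.3, -0.9]), toy_future_cost))
```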
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation, and data-to-text generation. However, it is also apparent that deep-learning-based generation is prone to hallucinating unintended text, which degrades system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented on measuring and mitigating hallucinated text, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges of the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks: abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated text in NLG.
Recent studies have determined that the learned token embeddings of large-scale neural language models degenerate into an anisotropic, narrow cone shape. This phenomenon, called the representation degeneration problem, fosters an increase in the overall similarity between token embeddings that negatively affects model performance. Although existing methods address the degeneration problem based on observations of the phenomena it triggers and improve text generation performance, the training dynamics of token embeddings behind the degeneration problem remain unexplored. In this study, we analyze the training dynamics of token embeddings, focusing on rare token embeddings. We demonstrate that a specific part of the gradient of rare token embeddings is the key cause of the degeneration problem for all tokens during the training stage. Based on this analysis, we propose a novel method called Adaptive Gradient Gating (AGG). AGG addresses the degeneration problem by gating the specific part of the gradient for rare token embeddings. Experimental results on language modeling, word similarity, and machine translation tasks quantitatively and qualitatively verify the effectiveness of AGG.
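The mechanism can be pictured as masking part of the embedding-gradient update. The snippet below is a schematic toy, not the paper's actual gating rule (AGG gates only a specific gradient component and adapts the gate during training); here a fixed gate is applied to the whole gradient of rare tokens purely to show the idea.

```python
import numpy as np

def gate_rare_gradients(embed_grad, token_counts, threshold=10, gate=0.1):
    """Schematic gradient gating: scale down the embedding gradient of rare
    tokens (here, tokens seen fewer than `threshold` times). Both the rarity
    criterion and the fixed gate value are illustrative assumptions."""
    rare = token_counts < threshold            # boolean mask over the vocab
    scaled = embed_grad.copy()
    scaled[rare] *= gate                       # dampen rare-token updates
    return scaled

grad = np.random.default_rng(0).normal(size=(5, 4))   # vocab of 5, dim 4
counts = np.array([100, 50, 3, 2, 80])                # toy token frequencies
print(gate_rare_gradients(grad, counts))
```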
Non-autoregressive Transformer (NAT) is a family of text generation models that aims to reduce decoding latency by predicting the whole sentence in parallel. However, this latency reduction sacrifices the ability to capture left-to-right dependencies, making NAT learning very challenging. In this paper, we present theoretical and empirical analyses to reveal the challenges of NAT learning and propose a unified perspective to understand existing successes. First, we show that simply training NAT by maximizing likelihood can lead to an approximation of the marginal distributions while dropping all dependencies between tokens, where the dropped information can be measured by the dataset's conditional total correlation. Second, we formalize many previous objectives in a unified framework and show that their success can be concluded as maximizing the likelihood of a proxy distribution, which reduces the information loss. Empirical studies show that our perspective can explain phenomena in NAT learning and guide the design of new training methods.
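In symbols (our notation, consistent with the description above): for a target sequence Y = (y_1, ..., y_T) and source X, the dropped dependency information is the conditional total correlation, which is exactly the best-case KL gap of any fully factorized model:

```latex
% Dependency information dropped by a fully factorized (NAT-style) model:
C(Y \mid X) = \sum_{t=1}^{T} H(y_t \mid X) - H(y_1, \dots, y_T \mid X)

% Maximum-likelihood NAT training, q(Y|X) = \prod_t q(y_t|X), can at best
% match the per-position marginals, leaving a KL gap of exactly C(Y|X):
\min_{q}\; \mathrm{KL}\Bigl(p_{\mathrm{data}}(Y \mid X) \,\Big\|\, \prod_{t} q(y_t \mid X)\Bigr) = C(Y \mid X)
```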
This paper offers a comprehensive review of research on Natural Language Generation (NLG) over the past two decades, especially in relation to data-to-text generation and text-to-text generation deep learning methods, as well as new applications of NLG technology. This survey aims to (a) give an up-to-date synthesis of NLG core tasks, as well as the architectures adopted in the field; (b) detail meticulously the various NLG tasks and datasets, and draw attention to the challenges in NLG evaluation, focusing on different evaluation methods and their relationships; and (c) highlight some future emphases and relatively recent research issues that arise due to the increasing synergy between NLG and other areas of artificial intelligence, such as computer vision, text, and computational creativity.
As major progress is made in open-ended text generation, measuring how close machine-generated text is to human language remains a critical question. We introduce MAUVE, a comparison measure for open-ended text generation, which directly compares the learned distribution of a text generation model to the distribution of human-written text using divergence frontiers. MAUVE scales up to modern text generation models by computing information divergences in a quantized embedding space. Through an extensive empirical study on three open-ended generation tasks, we find that MAUVE identifies known properties of generated text, scales naturally with model size, and correlates with human judgments, with fewer restrictions than existing distributional evaluation metrics.
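A stripped-down sketch of the quantization idea, assuming pre-computed text embeddings and scikit-learn: jointly cluster both samples, compare the resulting histograms, and evaluate divergences against a mixture. The real MAUVE sweeps the mixture weight to trace a divergence frontier and reports the area under it; this toy computes a single frontier point.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantized_divergence(human_emb, model_emb, k=8, lam=0.5, seed=0):
    """Simplified MAUVE-style comparison: jointly quantize both embedding
    sets with k-means, form cluster histograms P (human) and Q (model),
    and measure KL divergences against their mixture R."""
    all_emb = np.vstack([human_emb, model_emb])
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(all_emb)
    n = len(human_emb)
    P = np.bincount(labels[:n], minlength=k) / n
    Q = np.bincount(labels[n:], minlength=k) / len(model_emb)
    R = lam * P + (1 - lam) * Q                       # mixture distribution
    eps = 1e-12
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return kl(P, R), kl(Q, R)                         # one frontier point

rng = np.random.default_rng(0)
human = rng.normal(0.0, 1.0, size=(200, 16))          # toy "embeddings"
model = rng.normal(0.3, 1.2, size=(200, 16))
print(quantized_divergence(human, model))
```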
In almost all text generation applications, word sequences are constructed in a left-to-right (L2R) or right-to-left (R2L) manner, as natural language sentences are written either L2R or R2L. However, we find that the natural language written order is not essential for text generation. In this paper, we propose Spiral Language Modeling (SLM), a general approach that enables one to construct natural language sentences beyond the L2R and R2L orders. SLM allows one to form a sentence by starting from an arbitrary token inside the resulting text and expanding the rest of the tokens around the selected one. It makes the decoding order a new optimization objective besides the language model perplexity, which further improves the diversity and quality of the generated text. Furthermore, SLM makes it possible to manipulate the text construction process by selecting a proper starting token. SLM also introduces generation orderings as additional regularization to improve model robustness in low-resource scenarios. Experiments on 8 widely studied neural machine translation (NMT) tasks show that SLM achieves up to a 4.7 BLEU increase compared to the conventional L2R decoding approach.
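To visualize what such an order looks like, the toy below enumerates one possible spiral construction order for a fixed target sentence; in SLM the order itself is optimized by the model rather than fixed to simple alternation as here.

```python
def spiral_order(tokens, start):
    """Yield tokens in one possible spiral generation order for a target
    sentence, beginning at index `start` and alternating sides outward."""
    left, right = start - 1, start + 1
    order = [tokens[start]]
    while left >= 0 or right < len(tokens):
        if right < len(tokens):
            order.append(tokens[right]); right += 1
        if left >= 0:
            order.append(tokens[left]); left -= 1
    return order

sentence = "the quick brown fox jumps".split()
print(spiral_order(sentence, start=2))  # grows outward from "brown"
```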
Connecting vision and language plays an essential role in generative intelligence. For this reason, large research efforts have been devoted to image captioning, i.e., describing images with syntactically and semantically meaningful sentences. Starting from 2015, the task has generally been addressed with pipelines composed of a visual encoder and a language model for text generation. During these years, both components have considerably evolved through the exploitation of object regions and attributes, the introduction of multi-modal connections, fully-attentive approaches, and BERT-like early-fusion strategies. However, regardless of the impressive results, research in image captioning has not yet reached a conclusive answer. This work aims at providing a comprehensive overview of image captioning approaches, from visual encoding and text generation to training strategies, datasets, and evaluation metrics. In this respect, we quantitatively compare many relevant state-of-the-art approaches to identify the most impactful technical innovations in architectures and training strategies. Moreover, many variants of the problem and its open challenges are discussed. The final goal of this work is to serve as a tool for understanding the existing literature and highlighting the future directions for a research area where computer vision and natural language processing can find an optimal synergy.
Automated audio captioning is a cross-modal translation task for describing the content of audio clips with natural language sentences. This task has attracted increasing attention, and substantial progress has been made in recent years. Captions generated by existing models are generally faithful to the content of audio clips; however, these machine-generated captions are often deterministic (e.g., generating a fixed caption for a given audio clip), simple (e.g., using common words and simple grammar), and generic (e.g., generating the same caption for similar audio clips). When people are asked to describe the content of an audio clip, different people tend to focus on different sound events and describe an audio clip diversely from various aspects using distinct words and grammar. We believe that an audio captioning system should have the ability to generate diverse captions, either for a fixed audio clip or across similar audio clips. To this end, we propose an adversarial training framework based on a conditional generative adversarial network (C-GAN) to improve the diversity of audio captioning systems. A caption generator and two hybrid discriminators compete and are learned jointly, where the caption generator can be any standard encoder-decoder captioning model used to generate captions, and the hybrid discriminators assess the generated captions from different criteria, such as their naturalness and semantics. We conduct experiments on the Clotho dataset. The results show that our proposed model can generate captions with better diversity compared to state-of-the-art methods.
Current language models can generate high-quality text. Are they simply copying text they have seen before, or have they learned generalizable linguistic abstractions? To tease apart these possibilities, we introduce RAVEN, a suite of analyses for assessing the novelty of generated text, focusing on sequential structure (n-grams) and syntactic structure. We apply these analyses to four neural language models (an LSTM, a Transformer, Transformer-XL, and GPT-2). For local structure (e.g., individual dependencies), model-generated text is substantially less novel than our baseline of human-generated text from each model's test set. For larger-scale structure (e.g., overall sentence structure), model-generated text is as novel or even more novel than the human-generated baseline, but models still sometimes copy substantially, in some cases duplicating passages over 1,000 words long from the training set. We also perform extensive manual analysis, showing that GPT-2's novel text is usually well-formed morphologically and syntactically but has reasonably frequent semantic issues (e.g., being self-contradictory).
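The sequential-structure side of such an analysis is easy to sketch: compute the fraction of generated n-grams absent from the training text. A toy version follows (RAVEN additionally analyzes syntactic novelty, which this does not cover):

```python
def ngram_novelty(generated, training, n=4):
    """Fraction of n-grams in the generated text that never occur in the
    training text, a toy measure of sequential-structure novelty."""
    grams = lambda toks: {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    gen, train = grams(generated.split()), grams(training.split())
    return len(gen - train) / max(len(gen), 1)

train_text = "the cat sat on the mat and the dog slept on the rug"
gen_text = "the cat sat on the rug and the bird sang"
print(f"novel 4-grams: {ngram_novelty(gen_text, train_text):.2f}")
```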
Although lyrics generation has achieved significant progress in recent years, it has limited practical applications because the generated lyrics cannot be performed without composing compatible melodies. In this work, we bridge this practical gap by proposing a song rewriting system which rewrites the lyrics of an existing song such that the generated lyrics are compatible with the rhythm of the existing melody and thus singable. In particular, we propose SongRewriter, a controllable Chinese lyric generation and editing system which assists users without prior knowledge of melody composition. The system is trained by a randomized multi-level masking strategy which produces a unified model for generating entirely new lyrics or editing a few fragments. To improve the controllability of the generation process, we further incorporate a keyword prompt to control the lexical choices of the content and propose novel decoding constraints and a vowel modeling task to enable flexible end and internal rhyme schemes. While prior rhyming metrics are mainly for rap lyrics, we propose three novel rhyming evaluation metrics for song lyrics. Both automatic and human evaluations show that the proposed model performs better than state-of-the-art models in both content and rhyming quality. Our code and models, implemented with the MindSpore Lite tool, will be made available.
A long-standing issue with paraphrase generation is how to obtain reliable supervision signals. In this paper, we propose an unsupervised paradigm based on the assumption that the probabilities of generating two sentences with the same meaning given the same context should be the same. Inspired by this fundamental idea, we propose a pipelined system consisting of paraphrase candidate generation based on contextual language models, candidate filtering using scoring functions, and paraphrase model training based on the selected candidates. The proposed paradigm offers merits over existing paraphrase generation methods: (1) using the context regularizer on meanings, the model is able to generate massive amounts of high-quality paraphrase pairs; and (2) using human-interpretable scoring functions to select paraphrase pairs from candidates, the proposed framework provides a channel for developers to intervene in the data generation process, leading to a more controllable model. Experimental results across different tasks and datasets demonstrate the effectiveness of the proposed model in both supervised and unsupervised settings.
Open-ended text generation with autoregressive language models (LMs) is one of the core tasks in natural language processing. However, maximization-based decoding methods (e.g., greedy/beam search) often lead to the degeneration problem, i.e., the generated text is unnatural and contains undesirable repetitions. Existing solutions to this problem either introduce randomness prone to incoherence or require a look-ahead mechanism that demands extra computational overhead. In this study, we formulate open-ended text generation from a new perspective, i.e., we view it as an exploration process within a directed graph. Thereby, we understand the phenomenon of degeneration as circular loops within the directed graph. Based on our formulation, we propose a novel decoding method, momentum decoding, which encourages the LM to greedily explore new nodes outside the current graph. Meanwhile, it also allows the LM to return to existing nodes with a momentum downgraded by a pre-defined resistance function. We extensively test our approach on three benchmarks from different domains through automatic and human evaluations. The results show that momentum decoding performs comparably with the current state of the art while requiring notably less inference time and computation (FLOPs). Furthermore, we conduct a detailed analysis to reveal the merits and inner workings of our approach. Our code and other related resources are publicly available at https://github.com/gmftbyGMFTBY/MomentumDecoding.
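A minimal sketch of one decoding step under this view, assuming raw next-token log-probabilities; "visited nodes" are approximated by previously generated tokens, and the linear resistance function is a hypothetical stand-in for the paper's pre-defined one.

```python
from collections import Counter
import numpy as np

def momentum_decode_step(logprobs, generated, resistance=0.6):
    """One step of a momentum-style greedy decoder: the LM greedily picks the
    best next token, but tokens that would re-enter an already visited node
    are penalized in proportion to how often they were visited, discouraging
    circular loops."""
    penalized = logprobs.copy()
    for tok, count in Counter(generated).items():
        penalized[tok] -= resistance * count   # hypothetical resistance function
    return int(np.argmax(penalized))

logprobs = np.array([-0.1, -0.8, -1.0, -2.5])      # toy next-token log-probs
print(momentum_decode_step(logprobs, generated=[0, 0, 1]))  # picks 2, escaping the loop
```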