The wave of pre-trained language models has steadily improved the quality of machine-generated conversation, yet some generated responses still suffer from excessive repetition: sometimes repeating words from the interlocutor's utterance, sometimes repeating words within self-generated responses, or both. Inappropriately repeated words can significantly degrade the quality of generated text. Penalized sampling is one popular solution, reducing the sampling probability of already-present words during inference; however, it is highly vulnerable to an inappropriate static setting. Setting the penalty too high can produce strange and unrealistic sentences, while setting it too low does little to suppress repetition. To address the shortcomings of the above methods, we design a context-aware classifier that explicitly decides when to allow repetition and when to apply penalized sampling. This classifier can be easily integrated with existing decoding methods, reducing repetition where appropriate while preserving the diversity of the text. Experimental results demonstrate that our method can generate higher-quality and more authentic dialogues.
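To make the trade-off concrete, here is a minimal sketch of CTRL-style penalized sampling gated by a context-aware classifier, along the lines described above. The classifier interface (allow_repeat_fn), the penalty value, and all names are illustrative assumptions, not the paper's implementation.

    import torch

    def gated_penalized_sampling(logits, generated_ids, allow_repeat_fn, theta=1.2):
        """Apply a CTRL-style repetition penalty, but only to tokens that a
        context-aware classifier flags as undesirable repetitions.

        logits: (vocab_size,) next-token logits
        generated_ids: token ids already produced in the dialogue so far
        allow_repeat_fn: callable(token_id) -> bool, True if repeating this
            token is acceptable in the current context (assumed interface)
        theta: static penalty weight (>1 discourages repetition)
        """
        logits = logits.clone()
        for tok in set(generated_ids):
            if allow_repeat_fn(tok):
                continue  # classifier says repetition is fine here
            # penalize: shrink positive logits, amplify negative ones
            if logits[tok] > 0:
                logits[tok] /= theta
            else:
                logits[tok] *= theta
        probs = torch.softmax(logits, dim=-1)
        return torch.multinomial(probs, num_samples=1).item()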
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, and in turn to improved performance in downstream tasks such as abstractive summarization, dialogue generation, and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been conducted on measuring and mitigating hallucinated text, but these efforts have never been comprehensively reviewed before. In this survey, we thus provide a broad overview of the research progress and challenges of the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucination in the following downstream tasks: abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated text in NLG.
Long-range context modeling is crucial to both dialogue understanding and generation. The most popular method for dialogue context representation is to concatenate the last-$k$ previous utterances. However, this method may not be ideal for conversations containing long-range dependencies. In this work, we propose DialoGX, a novel encoder-decoder based framework for conversational response generation with a generalized and explainable context representation that can look beyond the last-$k$ utterances. Hence the method is adaptive to conversations with long-range dependencies. The main idea of our approach is to identify and utilize the most relevant historical utterances instead of the last-$k$ utterances in chronological order. We study the effectiveness of our proposed method on both dialogue generation (open-domain) and understanding (DST) tasks. DialoGX achieves comparable performance with the state-of-the-art models on DailyDialog dataset. We also observe performance gain in existing DST models with our proposed context representation strategy on MultiWOZ dataset. We justify our context representation through the lens of psycholinguistics and show that the relevance score of previous utterances agrees well with human cognition which makes DialoGX explainable as well.
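The core selection idea can be sketched in a few lines: score the history utterances by relevance to the current query and keep the top ones, in chronological order, instead of the last-$k$ window. The sentence encoder and cosine scoring below are placeholders, not DialoGX's actual components.

    import numpy as np

    def select_relevant_context(history_embs, query_emb, top_n=4):
        """Pick the most relevant previous utterances instead of the last-k.

        history_embs: (num_utterances, dim) unit-normalized embeddings from
            any sentence encoder (placeholder), in chronological order
        query_emb: (dim,) embedding of the current utterance
        Returns indices of selected utterances, kept in chronological order.
        """
        scores = history_embs @ query_emb      # cosine similarity
        top = np.argsort(-scores)[:top_n]      # highest relevance first
        return sorted(top.tolist())            # restore chronology

    # usage: feed the concatenation of the selected utterances (plus the
    # current query) to the encoder-decoder instead of the last-k window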
Language is the principal tool for human communication, and humor is among its most attractive aspects. Producing natural language with computers, also known as Natural Language Generation (NLG), has been widely used in dialogue systems, chatbots, machine translation, and computer-aided creation such as idea generation and scriptwriting. However, the humorous aspect of natural language is relatively under-investigated, especially in the era of pre-trained language models. In this work, we aim to preliminarily test whether NLG can generate humor as humans do. We build a new dataset consisting of numerous digitized Chinese comical crosstalk scripts (called C$^3$ for short) for a popular Chinese performing art called "Xiangsheng", dating back to the 1800s. (For the convenience of non-Chinese speakers, we refer to "Xiangsheng" as "crosstalk" in this paper.) We benchmark various generation approaches, including training Seq2seq models, fine-tuning middle-scale PLMs, and large-scale PLMs (with and without fine-tuning). Moreover, we conduct a human evaluation, showing that 1) large-scale pre-training largely improves crosstalk generation quality; and 2) even the scripts generated by the best PLM are far from our expectations, reaching only 65% of the quality of human-created crosstalk. We conclude that humor generation can be largely improved using large-scale PLMs, but is still in its infancy. The data and benchmark code are publicly available at https://github.com/anonno2/crosstalk-generation.
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used Transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the limited interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using Transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches has emerged in the last three to four years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing, and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Recent advances in pre-trained language models have significantly improved neural response generation. However, existing methods usually view the dialogue context as a linear sequence of tokens and learn to generate the next word through token-level self-attention. Such token-level encoding hinders the exploration of discourse-level coherence among utterances. This paper presents DialogBERT, a novel conversational response generation model that enhances previous PLM-based dialogue models. DialogBERT employs a hierarchical Transformer architecture. To efficiently capture discourse-level coherence among utterances, we propose two training objectives, including masked utterance regression and distributed utterance order ranking, in analogy to the original BERT training. Experiments on three multi-turn conversation datasets show that our approach remarkably outperforms baselines such as BART and DialoGPT in terms of quantitative evaluation. Human evaluation suggests that DialogBERT generates more coherent, informative, and human-like responses than the baselines by significant margins.
In this paper, we propose to leverage a unique characteristic of dialogues, namely the commonsense knowledge shared among participants, to resolve the difficulties of summarizing them. We present SICK, a framework that uses commonsense inferences as additional context. Compared to previous work that relies solely on the input dialogue, SICK uses an external knowledge model to generate a rich set of commonsense inferences and selects the most probable one with a similarity-based selection method. Built on SICK, SICK++ uses commonsense as supervision, adding the task of generating commonsense inferences when summarizing dialogues in a multi-task learning setting. Experimental results show that with injected commonsense knowledge, our framework generates more informative and consistent summaries than existing methods.
Personalized chatbots focus on endowing chatbots with a consistent personality to behave like real users and further act as personal assistants. Previous studies have explored generating implicit user profiles from the user's dialogue history for building personalized chatbots. However, these studies only use the response generation loss to train the entire model, making it prone to the problem of data sparsity. Besides, they overemphasize the quality of the final generated response while ignoring the correlations and fusion among the utterances in the user's dialogue history, leading to rough data representations and performance degradation. To tackle these problems, we propose MCP, a self-supervised learning framework for capturing better representations from users' dialogue history for personalized chatbots. Specifically, we apply contrastive sampling methods to leverage the supervised signals hidden in user dialogue history and generate pre-training samples for enhancing the model. We design three pre-training tasks based on three types of contrastive pairs from user dialogue history, namely response pairs, sequence augmentation pairs, and user pairs. We pre-train the utterance encoder and the history encoder towards the contrastive objectives and use these pre-trained encoders to generate user profiles during personalized response generation. Experimental results on two real-world datasets show that our proposed model MCP significantly outperforms existing methods.
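For intuition, here is a generic in-batch contrastive (InfoNCE) loss of the kind such pre-training tasks typically optimize; the temperature, batch construction, and encoders are assumptions rather than MCP's exact objective.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(anchor, positive, temperature=0.07):
        """Generic in-batch contrastive loss: each anchor's positive is the
        same-index row of `positive`; all other rows act as negatives.

        anchor, positive: (batch, dim) encoder outputs, e.g. two augmented
            views of the same user's dialogue history (assumed setup)
        """
        a = F.normalize(anchor, dim=-1)
        p = F.normalize(positive, dim=-1)
        logits = a @ p.t() / temperature            # (batch, batch) similarities
        labels = torch.arange(a.size(0), device=a.device)  # diagonal = positives
        return F.cross_entropy(logits, labels)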
Repetition is a response that repeats words from the previous speaker's utterance in a dialogue. As noted in linguistic studies, repetition is essential for building trust with others. In this work, we focus on repetition generation. To the best of our knowledge, this is the first neural approach to address repetition generation. We propose weighted label smoothing, a smoothing method for explicitly learning which words to repeat during fine-tuning, and a repetition scoring method that can output more suitable repetitions during decoding. We conducted automatic and human evaluations in which these methods were applied to the pre-trained language model T5 to generate repetitions. The experimental results show that our methods outperformed baselines in both evaluations.
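As a hedged sketch of what a weighted label-smoothing loss might look like: instead of spreading the smoothing mass uniformly over the vocabulary, extra mass is placed on tokens from the previous speaker's utterance, so the model explicitly learns which words are worth repeating. The weighting scheme below is an assumption, not necessarily the paper's.

    import torch
    import torch.nn.functional as F

    def weighted_label_smoothing_loss(logits, target, repeat_ids,
                                      eps=0.1, repeat_share=0.5):
        """Cross-entropy with label smoothing whose smoothing mass is biased
        toward candidate repetition tokens.

        logits: (vocab,) scores for one target position
        target: gold token id
        repeat_ids: token ids appearing in the previous speaker's utterance
            (candidates worth repeating; the split is an assumption)
        eps: total smoothing mass; repeat_share of it goes to repeat_ids
        """
        vocab = logits.size(0)
        # uniform share of the smoothing mass over the whole vocabulary
        dist = torch.full((vocab,), eps * (1 - repeat_share) / vocab)
        uniq = list(set(repeat_ids))
        if uniq:
            # extra mass concentrated on repeatable tokens
            dist[uniq] += eps * repeat_share / len(uniq)
        dist[target] += 1.0 - eps
        return -(dist * F.log_softmax(logits, dim=-1)).sum()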
Keyphrase generation aims to produce a set of phrases summarizing the essentials of a given document. Conventional methods normally apply an encoder-decoder architecture to generate keyphrases for an input document; because they are designed to focus only on the current document, they inevitably omit crucial corpus-level information carried by other similar documents, i.e., cross-document dependencies and latent topics. In this paper, we propose CDKGen, a Transformer-based keyphrase generator, which extends the Transformer's global attention with cross-document attention networks to incorporate available documents as references, so as to generate better keyphrases under the guidance of topic information. On top of the proposed Transformer + cross-document attention architecture, we also adopt a copy mechanism to enhance our model by selecting appropriate words from documents to deal with out-of-vocabulary words in keyphrases. Experimental results on five benchmark datasets illustrate the validity and effectiveness of our model, which achieves state-of-the-art performance on all datasets. Further analyses confirm that the proposed model is able to generate keyphrases consistent with the references while keeping sufficient diversity. The code of CDKGen is available at https://github.com/SVAIGBA/CDKGen.
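The copy mechanism mentioned above follows a well-known pattern; a minimal pointer-generator-style sketch is given below, with the caveat that CDKGen's exact variant may differ.

    import torch

    def copy_mechanism_dist(p_vocab, attn, src_ids, p_gen):
        """Pointer-generator-style output distribution: mix the decoder's
        vocabulary distribution with copying source tokens via attention.
        (Classic copy mechanism; CDKGen's exact variant may differ.)

        p_vocab: (vocab,) generation distribution
        attn: (src_len,) attention over source tokens
        src_ids: (src_len,) LongTensor of source token ids
        p_gen: scalar in [0, 1], probability of generating vs. copying
        """
        out = p_gen * p_vocab
        # scatter copy probability onto the vocabulary positions of the
        # source tokens; duplicates accumulate
        out = out.index_add(0, src_ids, (1 - p_gen) * attn)
        return out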
Crosstalk is a traditional Chinese theatrical performing art. It is typically performed by two performers in the form of a dialogue. With the typical characteristics of dialogue, crosstalk is also designed to amuse the audience. In this study, we introduce CrossDial, the first open-source dataset containing the most classic Chinese crosstalk collected from the web. In addition, we define two new tasks, provide two benchmarks, and investigate the ability of current dialogue generation models in the field of crosstalk generation. Experimental results and case studies show that crosstalk generation is challenging for straightforward approaches and remains an interesting topic for future work.
This work combines information about the dialogue history, encoded by pre-trained models, with the meaning representation of the current system utterance to realize contextual language generation in task-oriented dialogues. We utilize a pre-trained multi-context encoding model for context representations in a model trained from scratch, and exploit the immediately preceding user utterance for contextual generation in a model fine-tuned from pre-trained GPT-2. Experiments on two datasets show that contextual information encoded by pre-trained models improves the performance of response generation on both automatic metrics and human evaluation. The presented contextual generator enables a greater variety of responses that better fit the ongoing dialogue. Analyzing the context size shows that longer contexts do not automatically lead to better performance, but the immediately preceding user utterance plays an essential role in contextual generation. In addition, we propose a re-ranker for the GPT-based generation model. Experiments show that the responses selected by the re-ranker yield significant improvements on automatic metrics.
State-of-the-art dialogue models still often stumble over factual accuracy and self-contradiction. Anecdotally, they have been observed to fail to maintain character identity throughout the discourse; more specifically, they may take on the role of their interlocutor. In this work, we formalize and quantify this deficiency, and show through human evaluation experiments that it is indeed a problem. In contrast, we show that discriminative models trained specifically to recognize who is speaking can perform well; furthermore, these can be used as automated metrics. Finally, we evaluate a variety of mitigation methods, including changes to model architecture, training protocol, and decoding strategy. Our best models reduce mistaken-identity issues by nearly 65% according to human annotators, while simultaneously improving engagingness. Despite these results, we find that maintaining character identity remains a challenging problem.
Current language models achieve low perplexity, but their generations still suffer from toxic responses, repetitiveness, and contradictions. The standard language modeling setup fails to address these issues. In this paper, we introduce a new architecture, Director, consisting of a unified generator-classifier with both a language modeling head and a classification head for each output token. Training is conducted jointly using standard language modeling data as well as data labeled with desirable and undesirable sequences. Experiments in multiple settings show that the model has competitive training and decoding speed compared to standard language models while yielding superior results, alleviating known issues while maintaining generation quality. It also outperforms existing model-guiding approaches in terms of both accuracy and efficiency.
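To make the generator-classifier idea concrete, here is a simplified decoding step that mixes the language modeling head with the per-token classification head; the mixing weight, the head shapes, and the greedy selection are assumptions based on the description above.

    import torch
    import torch.nn.functional as F

    def director_next_token(hidden, lm_head, cls_head, gamma=1.0):
        """One decoding step of a unified generator-classifier.

        hidden: (dim,) decoder state at the current position
        lm_head: maps hidden -> (vocab,) language modeling logits
        cls_head: maps hidden -> (vocab,) logits that each candidate token
            leads to a desirable continuation (simplified; the actual head
            design may differ)
        gamma: weight of the classifier signal
        """
        lm_logp = F.log_softmax(lm_head(hidden), dim=-1)
        # per-token probability of being "desirable", in log space
        cls_logp = F.logsigmoid(cls_head(hidden))
        combined = lm_logp + gamma * cls_logp
        return int(torch.argmax(combined))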
Pre-trained language models (PLMs) have marked a huge leap in neural dialogue modeling. While PLMs are pre-trained on large-scale text corpora, they are usually fine-tuned on scarce dialogue data with specific domain knowledge and dialogue styles. However, tailoring the language models while fully utilizing the prior knowledge in large pre-trained models remains a challenge. In this paper, we present a novel approach to pre-trained dialogue modeling that casts the dialogue generation problem as a prompt-learning task. Instead of fine-tuning on limited dialogue data, our approach, DialogPrompt, learns continuous prompt embeddings optimized for dialogue contexts, which appropriately elicit knowledge from the large pre-trained model. To encourage the model to better utilize the prompt embeddings, the prompt encoder is designed to be conditioned on the input dialogue context. Experiments on popular conversation datasets show that our approach significantly outperforms the fine-tuning baseline and generic prompt-learning methods. Furthermore, human evaluations strongly support the superiority of DialogPrompt in terms of response generation quality.
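A minimal sketch of a prompt encoder conditioned on the dialogue context, producing continuous prompt embeddings to prepend to a frozen PLM's input embeddings; the pooling scheme and sizes are assumptions, not DialogPrompt's actual design.

    import torch
    import torch.nn as nn

    class ContextConditionedPrompt(nn.Module):
        """Produce continuous prompt embeddings conditioned on the dialogue
        context (a sketch; the real prompt encoder may differ)."""

        def __init__(self, dim, prompt_len):
            super().__init__()
            self.prompt_len = prompt_len
            self.proj = nn.Linear(dim, prompt_len * dim)

        def forward(self, context_hidden):           # (batch, seq, dim)
            pooled = context_hidden.mean(dim=1)      # (batch, dim) summary
            p = self.proj(pooled)                    # (batch, prompt_len*dim)
            return p.view(-1, self.prompt_len, context_hidden.size(-1))

    # usage: prepend to the PLM's token embeddings before decoding, e.g.
    # inputs = torch.cat([prompt(ctx_hidden), plm_input_embs], dim=1)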
This paper offers a comprehensive review of research on Natural Language Generation (NLG) over the past two decades, especially with respect to deep learning methods for data-to-text and text-to-text generation, as well as new applications of NLG technology. This survey aims to (a) give an up-to-date synthesis of NLG core tasks and the architectures adopted in the field; (b) detail the various NLG tasks and datasets, and draw attention to the challenges of NLG evaluation, focusing on the different evaluation methods and their relationships; and (c) highlight some future directions and relatively recent research issues that arise from the increasing synergies between NLG and other areas of artificial intelligence, such as computer vision, text, and computational creativity.
Text generation is of great importance to many natural language processing applications. However, maximization-based decoding methods (e.g., beam search) for neural language models often lead to degenerate solutions: the generated text is unnatural and contains undesirable repetitions. Existing approaches introduce stochasticity via sampling or modify the training objective to decrease the probabilities of certain tokens (e.g., unlikelihood training). However, they often lead to solutions that lack coherence. In this work, we show that an underlying reason for model degeneration is the anisotropic distribution of token representations. We present a contrastive solution: (i) SimCTG, a contrastive training objective to calibrate the model's representation space, and (ii) a decoding method, contrastive search, to encourage diversity while maintaining coherence in the generated text. Extensive experiments and analyses on three benchmarks across two languages demonstrate that our proposed approach significantly outperforms current state-of-the-art text generation methods as evaluated by both human and automatic metrics.
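The contrastive search decoding rule is compact enough to state directly: each top-k candidate is scored by model confidence minus a degeneration penalty, namely its maximum cosine similarity to the representations of previously generated tokens. The sketch below follows that published formulation, with tensor shapes as assumptions.

    import torch
    import torch.nn.functional as F

    def contrastive_search_step(probs, cand_ids, cand_hidden, prev_hidden, alpha=0.6):
        """Select the next token per the contrastive search rule:
        score(v) = (1 - alpha) * p(v | x_<t) - alpha * max cos-sim(h_v, h_<t).

        probs: (vocab,) next-token distribution from the LM
        cand_ids: (k,) top-k candidate token ids
        cand_hidden: (k, dim) hidden states if each candidate is appended
        prev_hidden: (t, dim) hidden states of the tokens generated so far
        """
        cand = F.normalize(cand_hidden, dim=-1)
        prev = F.normalize(prev_hidden, dim=-1)
        # degeneration penalty: max similarity to any previous token
        penalty = (cand @ prev.t()).max(dim=-1).values       # (k,)
        scores = (1 - alpha) * probs[cand_ids] - alpha * penalty
        return int(cand_ids[scores.argmax()])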
Knowledge-grounded dialogue systems powered by large language models often generate responses that, while fluent, are not attributable to a relevant source of information. Progress towards models that do not exhibit this issue requires evaluation metrics that can quantify its prevalence. To this end, we introduce the Benchmark for Evaluation of Grounded INteraction (BEGIN), comprised of 12k dialogue turns generated by neural dialogue systems trained on three knowledge-grounded dialogue corpora. We collect human annotations assessing the extent to which the models' responses can be attributed to the given background information. We then use BEGIN to analyze eight evaluation metrics. We find that these metrics rely on spurious correlations, do not reliably distinguish attributable abstractive responses from unattributable ones, and perform substantially worse when the knowledge source is longer. Our findings underscore the need for more sophisticated and robust evaluation metrics for knowledge-grounded dialogue. We make BEGIN publicly available at https://github.com/google/BEGIN-dataset.
A good empathetic dialogue system should first track and understand a user's emotion and then reply with an appropriate emotion. However, current approaches to this task either focus on improving the understanding of users' emotions or on proposing better responding strategies, and very few works consider both at the same time. Our work attempts to fill this gap. Inspired by task-oriented dialogue systems, we propose a novel empathetic response generation model with emotion-aware dialogue management. The emotion-aware dialogue management contains two parts: (1) emotion state tracking maintains the current emotion state of the user, and (2) empathetic dialogue policy selection predicts a target emotion and the user's intent based on the results of emotion state tracking. The predicted information is then used to guide the generation of responses. Experimental results show that dynamically managing different information can help the model generate more empathetic responses compared with several baselines under both automatic and human evaluations.
Recent advances in deep neural language models, combined with the capacity of large-scale datasets, have accelerated the development of natural language generation systems that produce fluent and coherent texts (to various degrees of success) in a multitude of tasks and application contexts. However, controlling the output of these models for desired user needs remains an open challenge. This is crucial not only for customizing the content and style of the generated language, but also for their safe and reliable deployment in the real world. We present an extensive survey on the emerging topic of constrained neural language generation, in which we formally define and categorize natural language generation problems by distinguishing between conditions and constraints (the latter being testable conditions on the output text rather than the input), present constrained text generation tasks, and review existing methods and evaluation metrics for constrained text generation. Our aim is to highlight recent progress and trends in this emerging field, informing on the most promising directions and limitations towards advancing the state of the art of constrained neural language generation research.