We present a novel approach to generating scripts by using agents with different personality types. To manage character interactions within the script, we employ a simulated dramatic network. Automatic and human evaluation on multiple criteria shows that our approach outperforms a vanilla-GPT2-based baseline. We further introduce a new metric to evaluate dialogue consistency based on natural language inference and demonstrate its effectiveness.
translated by Google Translate
We have a Christmas gift for Harry Potter fans all over the world. In this paper, we present Harry Potter Dialogue (HPD), a dataset that helps train Harry Potter-like dialogue agents. Such a task is typically viewed as a variant of personalized dialogue agents, but it differs significantly in three respects: 1) Harry lived in a virtual world of wizards, thus, real-world commonsense may not apply to Harry's conversations; 2) Harry's behavior is strongly linked to background information in conversations: the scene, his attributes, and his relationships to other speakers; and 3) such backgrounds are dynamically altered as the storyline goes on. The HPD dataset, as the first dataset to facilitate the study of dialogue agent construction for characters within a story, provides rich contextual information about each dialogue session, such as scenes, character attributes, and relations. More importantly, all the background information changes over the course of the story. In addition, HPD supports both dialogue generation and retrieval tasks. We evaluate baselines such as DialoGPT and BoB to determine the extent to which they can generate Harry Potter-like responses. The experimental results disappoint us in that although the generated responses are fluent, they still seem out of character for Harry. Besides, we validate the currently most robust dialogue agent, ChatGPT, which in some cases also fails to generate plausible Harry Potter-like responses. Our results suggest that there is much scope for future research.
In this paper, we propose the novel task of theatrical cue generation from dialogues, using a large-scale play-script dataset. With over one million lines of dialogues and cues, we approach the cue generation problem as a controlled text generation task and show how it can be used to enhance the impact of dialogues, using a language model conditioned on a dialogue/cue discriminator. We also explore the use of topic keywords and emotions for controlled text generation. Extensive quantitative and qualitative experiments show that language models can be successfully used to generate plausible and attribute-controlled texts in highly specialized domains such as play scripts. Supplementary materials are available at: https://catlab-team.github.io/cuegen.
Crosstalk is a traditional Chinese theatrical performing art. It is typically performed by two performers in the form of a dialogue. Beyond the typical characteristics of dialogue, crosstalk is also designed to amuse the audience. In this study, we introduce CrossDial, the first open-source dataset containing the most classic Chinese crosstalk scripts collected from the web. In addition, we define two new tasks, provide two benchmarks, and investigate the capability of current dialogue generation models in the field of crosstalk generation. Experimental results and case studies show that crosstalk generation is challenging for straightforward approaches and remains an interesting topic for future work.
In recent years, dialogue systems have attracted significant interest from both academia and industry. In particular, the discipline of open-domain dialogue systems, a.k.a. chatbots, has gained great momentum. However, a long-standing challenge that troubles researchers is the lack of effective automatic evaluation metrics, which hinders current research. The common practice for evaluating the performance of open-domain dialogue models involves extensive human evaluation of the final deployed models, which is both time- and cost-intensive. Moreover, a recent trend in building open-domain chatbots involves pre-training dialogue models on large amounts of social media conversation data. However, the information contained in social media conversations may be offensive and inappropriate, and indiscriminate use can result in insensitive and toxic generative models. This paper describes the data, baselines, and results obtained for Track 5 of the Dialogue System Technology Challenge 10 (DSTC10).
Many classical fairy tales, novels, and play scripts use dialogue to advance the storyline and establish characters. We present the first study to explore whether machines can understand and generate dialogue in stories, which requires capturing the traits of different characters and the relationships between them. To this end, we propose two new tasks, Masked Dialogue Generation and Dialogue Speaker Recognition, i.e., generating missing dialogue turns and predicting the speakers of specified dialogue turns, respectively. We build a new dataset, DialStory, consisting of 105K Chinese stories containing a large number of dialogues, to support evaluation. We show the difficulty of the proposed tasks by testing existing models with automatic and manual evaluation on DialStory. Furthermore, we propose learning explicit character representations to improve performance on these tasks. Extensive experiments and case studies show that our approach generates more coherent and informative dialogue and achieves higher speaker-recognition accuracy than strong baselines.
A dialogue system without consistent responses is not engaging. In this study, we build a dialogue system that can respond in accordance with a given persona setting in order to achieve consistency. Considering the trend of rapidly growing language models, we propose an approach based on prompt tuning, which has a low learning cost, applied to pre-trained large-scale language models. The results of automatic and manual evaluations in both English and Japanese show that a dialogue system with more natural and personalized responses can be built using fewer computational resources than fine-tuning.
State-of-the-art dialogue models still struggle with factual accuracy and self-contradiction. Anecdotally, they have been observed to fail to maintain character identity throughout a discourse; more specifically, they may take on the role of their interlocutor. In this work, we formalize and quantify this deficiency, and show via human evaluation experiments that it is indeed a problem. In contrast, we show that discriminative models trained specifically to recognize who is speaking can perform well; moreover, these can be used as automatic metrics. Finally, we evaluate a variety of mitigation methods, including changes to model architecture, training protocol, and decoding strategy. According to human annotators, our best models reduce mistaken-identity issues by nearly 65% while also improving engagingness. Despite these results, we find that maintaining character identity remains a challenging problem.
This paper proposes a simple method for controllable text generation based on weighting logits with a free-form classifier, namely CAIF sampling. Using an arbitrary text classifier, we adjust a small fraction of the language model's logits to steer text generation away from the classifier's predictions. We experiment with toxicity avoidance and sentiment control tasks and show that the proposed method significantly outperforms the recent PPLM, GeDi, and DExperts approaches on PPL and attribute-accuracy metrics based on an external classifier of the generated texts. In addition, compared with other approaches, it is easier to implement and tune, and it has significantly fewer restrictions and requirements.
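The classifier-guided logit reweighting idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the vocabulary, logits, classifier probabilities, and the exact combination rule (adding a scaled log-probability of the "safe" outcome) are all toy assumptions.

```python
import math

def caif_step(lm_logits, p_undesired, alpha=5.0):
    """One decoding step of classifier-guided logit reweighting (toy sketch).

    lm_logits: dict token -> language-model logit
    p_undesired: dict token -> classifier probability that appending the
        token yields the undesired attribute (e.g. toxicity)
    alpha: guidance strength; larger values steer harder away from it
    """
    # Boost tokens the classifier considers safe: add alpha * log P(safe).
    adjusted = {t: lm_logits[t] + alpha * math.log(1.0 - p_undesired[t])
                for t in lm_logits}
    # Softmax over the adjusted logits (max-shifted for numerical stability).
    z = max(adjusted.values())
    exps = {t: math.exp(v - z) for t, v in adjusted.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

# Toy example: "rude" is likely under the LM but flagged by the classifier.
lm = {"rude": 2.0, "kind": 1.5, "the": 1.0}
tox = {"rude": 0.9, "kind": 0.05, "the": 0.1}
probs = caif_step(lm, tox)
```

After reweighting, probability mass moves from the flagged token to the safe ones, while tokens the classifier is indifferent about keep roughly their original ranking.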
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the low interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging research hotspot. A diverse range of approaches has emerged in the past 3-4 years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Existing reference-free metrics have obvious limitations for evaluating controlled text generation models. Unsupervised metrics can only provide a task-agnostic evaluation result which correlates weakly with human judgments, whereas supervised ones may overfit task-specific data with poor generalization ability to other datasets. In this paper, we propose an unsupervised reference-free metric called CTRLEval, which evaluates controlled text generation from different aspects by formulating each aspect into multiple text infilling tasks. On top of these tasks, the metric assembles the generation probabilities from a pre-trained language model without any model training. Experimental results show that our metric has higher correlations with human judgments than other baselines, while obtaining better generalization of evaluating generated texts from different models and with different qualities.
The wave of pre-trained language models has continually improved the quality of machine-generated conversations; however, some generated responses still suffer from excessive repetition, sometimes repeating words from the input utterance, sometimes repeating words within the self-generated response, or both. Improper repetition of words can significantly degrade the quality of generated texts. Penalized sampling is one popular solution, which reduces the sampling probability of already-present words during inference; however, it is highly sensitive to an inappropriate static setting. Setting the penalty too high can produce strange and unrealistic sentences, while setting it too low renders the suppression of repetition negligible. To address these drawbacks, we design a context-aware classifier to explicitly decide when to allow repetition and when to apply penalized sampling. Such a classifier can easily be integrated with existing decoding methods, reducing repetition where appropriate while preserving the diversity of the text. Experimental results show that our method can generate higher-quality and more realistic dialogues.
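The gated penalized sampling described above can be sketched as follows. This is a toy illustration under stated assumptions: the `allow_repeat` flag stands in for the paper's learned context-aware classifier, and the penalty rule shown is the standard CTRL-style penalized sampling, not necessarily the authors' exact formulation.

```python
import math

def penalized_probs(logits, history, allow_repeat, penalty=1.3):
    """Apply penalized sampling only when the (context-aware) classifier
    decides repetition should be suppressed at this step.

    logits: dict token -> logit
    history: tokens already present in the context or response
    allow_repeat: classifier decision (True = leave the logits untouched)
    """
    adjusted = dict(logits)
    if not allow_repeat:
        for t in history:
            if t in adjusted:
                # CTRL-style penalty: divide positive logits, multiply
                # negative ones, shrinking repeated tokens' probability.
                v = adjusted[t]
                adjusted[t] = v / penalty if v > 0 else v * penalty
    z = max(adjusted.values())
    exps = {t: math.exp(v - z) for t, v in adjusted.items()}
    s = sum(exps.values())
    return {t: e / s for t, e in exps.items()}

# Toy step: "again" already appeared; compare the two classifier decisions.
logits = {"hello": 2.0, "world": 1.0, "again": 1.8}
p_plain = penalized_probs(logits, ["again"], allow_repeat=True)
p_gated = penalized_probs(logits, ["again"], allow_repeat=False)
```

When the classifier allows repetition the distribution is untouched, so legitimate repeats (names, echo questions) keep their probability; otherwise the repeated token's mass is redistributed to the rest of the vocabulary.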
Pre-trained language models (PLMs) have marked a huge leap in neural dialogue modeling. While PLMs are pre-trained on large-scale text corpora, they are usually fine-tuned on scarce dialogue data with specific domain knowledge and dialogue styles. However, tailoring the language model while fully exploiting the prior knowledge in the large pre-trained model remains a challenge. In this paper, we present a novel approach to pre-trained dialogue modeling that casts the dialogue generation problem as a prompt-learning task. Instead of fine-tuning on limited dialogue data, our approach, DialogPrompt, learns continuous prompt embeddings optimized for dialogue contexts, which elicit knowledge from the large pre-trained model. To encourage the model to better utilize the prompt embeddings, the prompt encoder is designed to be conditioned on the input dialogue context. Experiments on popular conversation datasets show that our approach significantly outperforms fine-tuning baselines and generic prompt-learning methods. Furthermore, human evaluations strongly support the superiority of DialogPrompt with regard to response generation quality.
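The core mechanics of context-conditioned prompt tuning can be sketched with plain arrays. This is a minimal sketch, assuming a frozen embedding table, a single linear map as the "prompt encoder", and toy dimensions; the real system would use a Transformer LM and train only the prompt-encoder parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_prompt, n_ctx = 8, 4, 6

# Stand-in for the frozen pre-trained model's embedding table: in prompt
# tuning, these weights receive no gradient updates.
frozen_embeddings = rng.normal(size=(100, d_model))

# Tiny "prompt encoder" conditioned on the dialogue context: here just a
# linear map from the mean context embedding to n_prompt prompt vectors.
W_prompt = rng.normal(size=(d_model, n_prompt * d_model))  # the only trainable part

context_ids = np.array([3, 14, 15, 9, 2, 6])       # toy token ids
ctx = frozen_embeddings[context_ids]                # (n_ctx, d_model)
prompts = (ctx.mean(axis=0) @ W_prompt).reshape(n_prompt, d_model)

# The frozen LM would consume [prompt vectors; context embeddings].
lm_input = np.concatenate([prompts, ctx], axis=0)   # (n_prompt + n_ctx, d_model)
```

Because the prompts are computed from the context rather than fixed, different dialogue histories produce different soft prompts, while the expensive pre-trained weights stay untouched.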
Recent advances in pre-trained language models have significantly improved neural response generation. However, existing methods usually view the dialogue context as a linear sequence of tokens and learn to generate the next word through token-level self-attention. Such token-level encoding hinders the exploration of discourse-level coherence among utterances. This paper presents DialogBERT, a novel conversational response generation model that enhances previous PLM-based dialogue models. DialogBERT employs a hierarchical Transformer architecture. To efficiently capture discourse-level coherence among utterances, we propose two training objectives, masked utterance regression and distributed utterance order ranking, in analogy to the original BERT training. Experiments on three multi-turn conversation datasets show that our approach remarkably outperforms baselines such as BART and DialoGPT in quantitative evaluation. Human evaluation suggests that DialogBERT generates more coherent, informative, and human-like responses than the baselines, by significant margins.
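The hierarchical encoding idea can be sketched in two stages. This is a toy sketch with assumed shapes: mean pooling stands in for the utterance-level Transformer encoder, and a single self-attention step stands in for the context-level encoder.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8

# Three utterances of different lengths, as token-embedding matrices.
utterances = [rng.normal(size=(n, d)) for n in (5, 3, 7)]

# Level 1: an utterance encoder compresses each utterance into one vector.
utt_vecs = np.stack([u.mean(axis=0) for u in utterances])     # (3, d)

# Level 2: a context encoder attends over utterance vectors, so coherence
# is modeled at the discourse level instead of over one flat token stream.
scores = utt_vecs @ utt_vecs.T / np.sqrt(d)                   # (3, 3)
attn = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
context_states = attn @ utt_vecs                              # (3, d)
```

The two proposed objectives then operate on this upper level: masked utterance regression reconstructs a dropped `utt_vecs` row, and utterance order ranking scores permutations of the rows.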
Language is the principal tool for human communication, and humor is one of its most attractive parts. Producing natural language like humans using computers, a.k.a. Natural Language Generation (NLG), has been widely used in dialogue systems, chatbots, and machine translation, as well as in computer-aided creation such as idea generation and scriptwriting. However, the humorous aspect of natural language is relatively under-investigated, especially in the era of pre-trained language models. In this work, we aim to preliminarily test whether NLG can generate humor as humans do. We build a new dataset consisting of numerous digitized Chinese comical crosstalk scripts (called C$^3$ for short) of a popular Chinese performing art called "Xiangsheng", dating back to the 1800s. (For the convenience of non-Chinese speakers, we use "crosstalk" for "Xiangsheng" in this paper.) We benchmark various generation approaches, including training Seq2seq models from scratch, fine-tuning middle-scale PLMs, and large-scale PLMs (with and without fine-tuning). Moreover, we conduct a human assessment, showing that 1) large-scale pre-training largely improves crosstalk generation quality; and 2) even the scripts generated by the best PLM are far from our expectations, reaching only 65% of the quality of human-created crosstalk. We conclude that humor generation can be largely improved by large-scale PLMs but is still in its infancy. The data and benchmarking code are publicly available at \url{https://github.com/anonno2/crosstalk-generation}.
In this paper, we propose an end-to-end emotion-aware conversational agent based on two models: a reply-emotion prediction model, which leverages the context of the dialogue to predict an appropriate emotion for the agent to express in its reply; and a text generation model conditioned on the predicted emotion and the context of the dialogue, which produces a reply that is appropriate to both. Furthermore, we propose using an emotion classification model to assess the emotion expressed by the agent during model development, which enables us to evaluate the agent automatically. Both automatic and human evaluation results show that explicitly guiding the text generation model with a pre-defined set of sentences leads to clear improvements, both in the expressed emotion and in the quality of the generated text.
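The two-stage pipeline described above can be sketched with stand-in functions. This is a toy sketch: the keyword rule and template replies below are assumptions replacing the paper's learned emotion predictor and conditioned generator, and the emotion labels are hypothetical.

```python
def predict_emotion(context: str) -> str:
    """Stand-in for the reply-emotion prediction model: the real system
    would be a classifier trained on dialogue contexts."""
    sad_cues = {"lost", "sorry", "miss", "sad"}
    return "empathetic" if sad_cues & set(context.lower().split()) else "cheerful"

def generate_reply(context: str, emotion: str) -> str:
    """Stand-in for the emotion-conditioned generator: templates replace
    a text generation model conditioned on (context, emotion)."""
    openers = {
        "empathetic": "I'm sorry to hear that.",
        "cheerful": "That's great!",
    }
    return openers[emotion] + " Tell me more."

# End-to-end toy run: predict the reply emotion, then condition on it.
ctx = "I lost my keys this morning"
emo = predict_emotion(ctx)
reply = generate_reply(ctx, emo)
```

In the paper's setup, a third model (an emotion classifier applied to `reply`) would check that the generated text actually expresses `emo`, closing the automatic evaluation loop.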
Research on Automatic Story Generation (ASG) relies heavily on both human and automatic evaluation. However, there is no consensus on which human evaluation criteria to use, nor any analysis of how well automatic criteria correlate with them. In this paper, we propose to re-evaluate ASG evaluation. We introduce a set of six orthogonal and comprehensive human criteria, carefully motivated by the social sciences literature. We also present HANNA, an annotated dataset of 1,056 stories produced by ten different ASG systems. HANNA allows us to quantitatively evaluate the correlations of 72 automatic metrics with the human criteria. Our analysis highlights the weaknesses of current metrics for ASG and enables us to formulate practical recommendations for ASG evaluation.
Adversarial attacks in NLP challenge the way we look at language models. The goal of this kind of adversarial attack is to modify the input text to fool a classifier while maintaining the original meaning of the text. Although most existing adversarial attacks claim to fulfill the constraint of semantics preservation, careful scrutiny shows otherwise. We show that the problem lies in the text encoders used to determine the similarity of adversarial examples, specifically in the way they are trained. Unsupervised training methods make these encoders more susceptible to problems with antonym recognition. To overcome this, we introduce a simple, fully supervised sentence embedding technique called Semantics-Preserving-Encoder (SPE). The results show that our solution minimizes the variation in the meaning of the adversarial examples generated. It also significantly improves the overall quality of adversarial examples, as confirmed by human evaluators. Furthermore, it can be used as a component in any existing attack to speed up its execution while maintaining similar attack success.
Knowledge-grounded dialogue systems powered by large language models often generate responses that, while fluent, are not attributable to a relevant source of information. Progress towards models that do not exhibit this issue requires evaluation metrics that can quantify its prevalence. To this end, we introduce the Benchmark for Evaluation of Grounded INteraction (BEGIN), comprised of 12k dialogue turns generated by neural dialogue systems trained on three knowledge-grounded dialogue corpora. We collect human annotations assessing the extent to which the models' responses can be attributed to the given background information. We then use BEGIN to analyze eight evaluation metrics. We find that these metrics rely on spurious correlations, do not reliably distinguish attributable abstractive responses from unattributable ones, and perform substantially worse when the knowledge source is longer. Our findings underscore the need for more sophisticated and robust evaluation metrics for knowledge-grounded dialogue. We make BEGIN publicly available at https://github.com/google/BEGIN-dataset.
Steering language generation towards objectives or away from undesired content has been a long-standing goal in utilizing language models (LMs). Recent work has demonstrated reinforcement learning and weighted decoding as effective approaches for achieving a higher level of language control and quality, each with its own pros and cons. In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding. Specifically, we adopt the actor-critic framework to train an LM-steering critic from non-differentiable reward models. Similar to weighted decoding, our method freezes the language model and manipulates the output token distribution using the trained critic, improving training efficiency and stability. Evaluation of our method on three controlled generation tasks, namely topic control, sentiment control, and detoxification, shows that our approach generates more coherent and well-controlled texts than previous methods. In addition, CriticControl demonstrates superior generalization ability in zero-shot settings. Human evaluation studies also corroborate our findings.
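The critic-weighted decoding step can be sketched as follows. This is a minimal sketch under assumptions: the exponential reweighting rule and the toy per-token critic values are illustrative stand-ins for the trained critic, not the paper's exact formulation.

```python
import math

def critic_reweighted(lm_probs, critic_values, beta=1.0):
    """One weighted-decoding step: the frozen LM's token distribution is
    multiplied by exp(beta * critic_value) and renormalized.

    lm_probs: dict token -> probability under the frozen language model
    critic_values: dict token -> critic's value estimate for appending it
    beta: steering strength (beta=0 recovers the original LM distribution)
    """
    weighted = {t: p * math.exp(beta * critic_values[t])
                for t, p in lm_probs.items()}
    s = sum(weighted.values())
    return {t: v / s for t, v in weighted.items()}

# Toy step on a sentiment-control task: the critic prefers "good".
lm_probs = {"good": 0.5, "bad": 0.4, "ok": 0.1}
values = {"good": 1.0, "bad": -1.0, "ok": 0.0}
steered = critic_reweighted(lm_probs, values, beta=1.0)
```

Because only the critic is trained and the LM stays frozen, the reward signal shifts probability mass at decoding time without touching the base model's weights.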