Long-range context modeling is crucial to both dialogue understanding and generation. The most popular method for dialogue context representation is to concatenate the last-$k$ previous utterances. However, this method may not be ideal for conversations containing long-range dependencies. In this work, we propose DialoGX, a novel encoder-decoder based framework for conversational response generation with a generalized and explainable context representation that can look beyond the last-$k$ utterances. Hence, the method adapts to conversations with long-range dependencies. The main idea of our approach is to identify and utilize the most relevant historical utterances instead of the last-$k$ utterances in chronological order. We study the effectiveness of our proposed method on both dialogue generation (open-domain) and understanding (DST) tasks. DialoGX achieves comparable performance with the state-of-the-art models on the DailyDialog dataset. We also observe performance gains in existing DST models with our proposed context representation strategy on the MultiWOZ dataset. We justify our context representation through the lens of psycholinguistics and show that the relevance score of previous utterances agrees well with human cognition, which makes DialoGX explainable as well.
Recent advances in pre-trained language models have greatly improved neural response generation. However, existing methods usually view the dialogue context as a linear sequence of tokens and learn to generate the next word through token-level self-attention. Such token-level encoding hinders the exploration of discourse-level coherence among utterances. This paper presents DialogBERT, a novel conversational response generation model that enhances previous PLM-based dialogue models. DialogBERT employs a hierarchical Transformer architecture. To efficiently capture discourse-level coherence among utterances, we propose two training objectives, masked utterance regression and distributed utterance order ranking, analogous to the original BERT training. Experiments on three multi-turn conversation datasets show that our approach remarkably outperforms baselines such as BART and DialoGPT in terms of quantitative evaluation. Human evaluation suggests that DialogBERT generates more coherent, informative, and human-like responses than the baselines by significant margins.
The goal of building dialogue agents that can converse with humans naturally has been a long-standing dream of researchers since the early days of artificial intelligence. The well-known Turing Test proposed to judge the ultimate validity of an artificial intelligence agent on the indistinguishability of its dialogues from humans'. It should come as no surprise that human-level dialogue systems are very challenging to build. But while early efforts on rule-based systems found limited success, the emergence of deep learning enabled great advances on this topic. In this thesis, we focus on methods that address the numerous issues underlying the gap between artificial conversational agents and human-level interlocutors. These methods were proposed and experimented with in ways inspired by general state-of-the-art AI methodologies, but they also targeted the characteristics that dialogue systems possess.
This work combines information about the dialogue history, encoded by a pre-trained model, with a meaning representation of the current system utterance to realize contextual language generation in task-oriented dialogues. We utilize a pre-trained multi-context model for context representation in a model trained from scratch, and leverage the immediately preceding user utterance for contextual generation in a model adapted from the pre-trained GPT-2. Experiments with two datasets show that the contextual information encoded by pre-trained models improves the performance of response generation in both automatic metrics and human evaluation. Our presented contextual generator enables a higher variety of responses that better fit the ongoing dialogue. Analysing the context size shows that longer contexts do not automatically lead to better performance, but the immediately preceding user utterance plays an essential role in contextual generation. In addition, we also propose a re-ranker for the GPT-based generation model. Experiments show that the responses selected by the re-ranker yield a significant improvement on automatic metrics.
Medical dialogue generation is an important yet challenging task. Most previous works rely on the attention mechanism and large-scale pre-trained language models. However, these methods often fail to acquire pivotal information from a long dialogue history to yield accurate and informative responses, because medical entities are usually scattered across multiple utterances along with complex relationships among them. To mitigate this problem, we propose a medical response generation model with Pivotal Information Recalling (MedPIR), which is built on two components, a knowledge-aware dialogue graph encoder and a recall-enhanced generator. The knowledge-aware dialogue graph encoder constructs a dialogue graph by exploiting the knowledge relations between entities in the utterances and encodes it with a graph attention network. Then, the recall-enhanced generator strengthens the usage of this pivotal information by generating a summary of the dialogue before producing the actual response. Experimental results on two large-scale medical dialogue datasets show that MedPIR outperforms strong baselines in BLEU scores and the medical-entity F1 measure.
Knowledge-driven dialogue generation has recently achieved remarkable breakthroughs. Compared with general dialogue systems, superior knowledge-grounded dialogue systems can generate more informative and knowledgeable responses given the knowledge provided in advance. However, in real applications, the dialogue system cannot be provided with the corresponding knowledge beforehand. To solve this problem, we design a knowledge-driven dialogue system named DRKQG (Dynamically Retrieving Knowledge via Query Generation for informative dialogue responses). Specifically, the system can be divided into two modules: a query generation module and a dialogue generation module. First, a time-aware mechanism is utilized to capture context information and generate queries for knowledge retrieval. Then, we integrate a copy mechanism and Transformers, which allows the response generation module to produce responses derived from both the context and the retrieved knowledge. Experimental results at LIC2022, the Language and Intelligence Technology Competition, show that our module outperforms the baseline model by a large margin on automatic evaluation metrics, while the human evaluation by the Baidu linguistics team shows that our system achieves impressive results in factual correctness and knowledgeability.
We introduce AARGH, a task-oriented dialog system combining retrieval and generative approaches in a single model, aiming to improve dialog management and the lexical diversity of outputs. The model features a new response selection method based on an action-aware training objective and a simplified single-encoder retrieval architecture, which allows us to build an end-to-end retrieval-augmented generation model where retrieval and generation share most of their parameters. On the MultiWOZ dataset, we show that our approach produces more diverse outputs while maintaining or improving state tracking and context-to-response generation performance compared to state-of-the-art baselines.
We present a novel architecture for explainable modeling of task-oriented dialogues with discrete latent variables that represent dialogue actions. Our model is based on variational recurrent neural networks (VRNN) and requires no explicit annotation of semantic information. Unlike previous works, our approach models the system and user turns separately and performs database query modeling, which makes the model applicable to task-oriented dialogues while producing easily interpretable latent variables. We show that our model outperforms previous approaches in terms of perplexity and BLEU on three datasets, and we propose a way to measure dialogue success without the need for expert annotation. Finally, we propose a novel way to explain the semantics of the latent variables with respect to system actions.
Pre-trained language models (PLMs) have marked a huge leap in neural dialogue modeling. While PLMs are pre-trained on large text corpora, they are usually fine-tuned on scarce dialogue data with specific domain knowledge and dialogue styles. However, tailoring the language models while fully utilizing the prior knowledge in large pre-trained models remains a challenge. In this paper, we present a novel approach to pre-trained dialogue modeling that casts the dialogue generation problem as a prompt-learning task. Instead of fine-tuning on limited dialogue data, our approach, DialogPrompt, learns continuous prompt embeddings optimized for dialogue contexts, which elicit knowledge from the large pre-trained model. To encourage the model to better utilize the prompt embeddings, the prompt encoder is designed to be conditioned on the input dialogue context. Experiments on popular conversation datasets show that our approach significantly outperforms the fine-tuning baseline and generic prompt-learning methods. Furthermore, human evaluations strongly support the superiority of DialogPrompt in terms of response generation quality.
In this paper, we propose to leverage a unique characteristic of dialogues, the commonsense knowledge shared among participants, to resolve the difficulties in summarizing them. We present SICK, a framework that uses commonsense inferences as additional context. Compared to previous work that solely relies on the input dialogue, SICK uses an external knowledge model to generate a rich set of commonsense inferences and selects the most probable one with a similarity-based selection method. Built upon SICK, SICK++ utilizes commonsense as supervision, where the task of generating commonsense inferences is added on top of summarizing the dialogue in a multi-task learning setting. Experimental results show that, with injected commonsense knowledge, our framework generates more informative and consistent summaries than existing methods.
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
Understanding toxicity in user conversations is undoubtedly an important problem. As argued in previous work, addressing "covert" or implicit cases of toxicity is particularly hard and requires context. Very few previous studies have analysed the influence of conversational context on human perception or on automated detection models. We dive deeper into both these directions. We start by analysing existing contextual datasets and conclude that toxicity labelling by humans is in general influenced by the conversational structure, polarity, and topic of the context. We then propose to bring these findings into computational detection models by introducing (a) neural architectures for contextual toxicity detection that are aware of the conversational structure, and (b) data augmentation strategies that can help model contextual toxicity detection. Our results show the encouraging potential of neural architectures that are aware of the conversation structure. We also show that such models can benefit from synthetic data, especially in the social media domain.
The wave of pre-trained language models has been continuously improving the quality of machine-generated conversations. However, some generated responses still suffer from excessive repetition, sometimes repeating words from the input utterance, sometimes repeating words within the self-generated response, or both. Improperly repeated words can significantly degrade the quality of the generated text. Penalized sampling is one popular solution, reducing the sampling probability of already-present words during inference; however, it is highly vulnerable to an inappropriate static setting: setting the penalty too high can yield strange and unrealistic sentences, while setting it too low renders the suppression of repetition ineffective. To remedy the shortcomings of the above methods, we design a context-aware classifier to explicitly decide when to allow repetition and when to employ penalized sampling. Such a classifier can be easily integrated with existing decoding methods, reducing repetition where appropriate while preserving the diversity of the text. Experimental results demonstrate that our method can generate higher-quality and more authentic dialogues.
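The interplay between the penalty and the gate can be sketched as follows: a CTRL-style repetition penalty shrinks the logits of already-generated tokens, but only when the gate — the paper's trained context-aware classifier, stubbed here as a plain boolean — disallows repetition. The names, penalty value, and toy vocabulary are illustrative, not the paper's implementation.

```python
def penalize_logits(logits: dict[str, float], generated: list[str],
                    allow_repetition: bool, penalty: float = 1.3) -> dict[str, float]:
    """Penalized sampling (CTRL-style): shrink the logit of every token that has
    already been generated -- unless the gate decides repetition is acceptable
    here (e.g. an entity that legitimately recurs)."""
    if allow_repetition:          # gate open: leave the distribution untouched
        return dict(logits)
    out = dict(logits)
    for tok in set(generated):
        if tok in out:
            # Dividing a positive logit shrinks it; multiplying a negative
            # logit makes it more negative -- both lower the token's probability.
            out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out

logits = {"park": 2.0, "the": 1.0, "likes": -0.4, "report": -0.5}
generated = ["the", "beagle", "likes", "the", "park"]
penalized = penalize_logits(logits, generated, allow_repetition=False)
```

In a full decoder the gate would be queried once per decoding step, conditioned on the context, before sampling from the (possibly penalized) distribution.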
We present a large, tunable neural conversational response generation model, DIALOGPT (dialogue generative pre-trained transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains over a period spanning from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain a performance close to human both in terms of automatic and human evaluation in single-turn dialogue settings. We show that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response generation and the development of more intelligent open-domain dialogue systems.
Interview has been regarded as one of the most crucial steps for recruitment. To fully prepare for the interview with the recruiters, job seekers usually practice with mock interviews between each other. However, such a mock interview with peers is generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are not likely to behave like a real interviewer. Due to the rapid growth of online recruitment in recent years, recruiters tend to have online interviews, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from the online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) the interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and dialog generator, so that most parameters can be trained with ungrounded dialogs as well as the resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
An important aspect of human conversation that machines should emulate is conversing with empathy, which means understanding the user's emotions and responding to them appropriately. Recent neural conversation models that attempt to generate empathetic responses either focus on conditioning the output on a given emotion or incorporate the current user emotional state. However, these approaches do not factor in how the user would feel upon reading the generated response. Therefore, in this paper, we propose sentiment look-ahead, a novel perspective on empathy that models the future user emotional state. In short, sentiment look-ahead is a reward function under a reinforcement learning framework that provides a higher reward to the generation model when the generated utterance improves the user's emotion. We implement and evaluate three different implementations of sentiment look-ahead and empirically show that our proposed approach can generate more empathetic, relevant, and fluent responses than other competitive baselines such as multi-task learning.
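The reward described above can be sketched as a sentiment delta: score the user's emotional state before and after the system's response, and reward the policy for the improvement. The lexicon scorer below is a crude stand-in for the trained sentiment model such a system would actually use, and every name and word list is illustrative.

```python
import re

POSITIVE = {"great", "happy", "thanks", "glad", "love", "better"}
NEGATIVE = {"sad", "angry", "terrible", "worse", "hate", "upset"}

def sentiment(utterance: str) -> float:
    """Stand-in lexicon scorer in [-1, 1]; a real system would use a classifier."""
    words = re.findall(r"\w+", utterance.lower())
    hits = [1 if w in POSITIVE else -1 for w in words if w in POSITIVE | NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

def lookahead_reward(user_before: str, user_after: str) -> float:
    """RL reward: positive when the user's next utterance reads more positive
    than the utterance the system was responding to."""
    return sentiment(user_after) - sentiment(user_before)

r = lookahead_reward("I am so upset about my exam.", "Thanks, I feel better now!")
print(r)  # → 2.0
```

During training, this scalar would be fed to a policy-gradient update so that responses which lift the user's sentiment are reinforced.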
Personalization is very important for dialogue agents to generate more specific, consistent, and engaging responses. However, most current dialogue personalization approaches rely on explicit persona descriptions during inference, which severely limits their application. In this paper, we propose a novel approach that learns to predict persona information based on the dialogue history, personalizing the dialogue agent without relying on any explicit persona description during inference. Experimental results on the PersonaChat dataset show that the proposed method can improve the consistency of generated responses when conditioning on the predicted profile of the dialogue agent (i.e., "self persona"), and improve the engagingness of generated responses when conditioning on the predicted persona of the dialogue partner (i.e., "their persona"). We also find that a trained persona prediction model can be successfully transferred to other datasets and help generate more relevant responses.
Conversational agents have become an integral part of the general population's life for simple task-enabling situations. However, these systems have yet to have any social impact on diverse and minority groups, for example, helping people with neurological disorders such as ALS, or people with speech, language, and social communication disorders. Language model technology can play a huge role in helping these users carry out daily communication and social interactions. To enable this population, we build a dialogue system that can be controlled by users through cues or keywords. We build models that can suggest relevant cues in the dialogue response context, which are used to control response generation and can speed up communication. We also introduce a keyword loss to constrain the model output. We show, both qualitatively and quantitatively, that our models can effectively induce keywords into the model's responses without degrading their quality. In the context of usage by people with degenerative disorders, we present a human evaluation of our cue/keyword predictor and the controllable dialogue system, and show that our models perform significantly better than models without control. Our study demonstrates that keyword control over end-to-end response generation models is powerful and can enable and empower users with degenerative disorders in their day-to-day communication.
Personalized chatbots focus on endowing the chatbots with a consistent personality to behave like real users and further act as personal assistants. Previous studies have explored generating implicit user profiles from the user's dialogue history for building personalized chatbots. However, these studies only use the response generation loss to train the entire model, and are thus prone to suffer from the problem of data sparsity. Besides, they overemphasize the final generated response's quality while ignoring the correlations and fusion within the user's dialogue history, leading to rough data representations and performance degradation. To tackle these problems, we propose a self-supervised learning framework, MCP, for capturing better representations from users' dialogue history for personalized chatbots. Specifically, we apply contrastive sampling methods to leverage the supervised signals hidden in user dialogue history and generate pre-training samples for enhancing the model. We design three pre-training tasks based on three types of contrastive pairs from user dialogue history, namely response pairs, sequence augmentation pairs, and user pairs. We pre-train the utterance encoder and the history encoder towards the contrastive objectives and use these pre-trained encoders to generate user profiles during personalized response generation. Experimental results on two real-world datasets show that our proposed model MCP yields a significant improvement over existing methods.
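One plausible reading of such a contrastive construction — an augmented view of the same user's history as the positive, another user's history as the negative — can be sketched as below. The augmentation (random utterance dropout), the pairing scheme, and every name here are illustrative assumptions for one of the three pair types, not MCP's exact procedure.

```python
import random

def augment(history: list[str], drop_prob: float = 0.3, seed: int = 0) -> list[str]:
    """Sequence-augmentation view: randomly drop utterances from the history."""
    rng = random.Random(seed)
    kept = [u for u in history if rng.random() > drop_prob]
    return kept or history[:1]  # never return an empty view

def contrastive_triples(users: dict[str, list[str]]):
    """Yield (anchor, positive, negative) triples: the positive is an augmented
    view of the same user's history, the negative is another user's history."""
    names = list(users)
    for i, name in enumerate(names):
        other = names[(i + 1) % len(names)]
        yield users[name], augment(users[name], seed=i), users[other]

users = {
    "alice": ["hi", "i love hiking", "any trail tips?"],
    "bob": ["hello", "i fix bikes for a living"],
}
for anchor, pos, neg in contrastive_triples(users):
    print(len(anchor), len(pos), len(neg))
```

An encoder would then be trained to pull the anchor and positive embeddings together while pushing the negative away (e.g., with an InfoNCE-style loss).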
Dialogue is an essential part of human communication and cooperation. Existing research mainly focuses on short dialogue scenarios in a one-on-one fashion. However, multi-person interactions in the real world, such as meetings or interviews, frequently run to several thousand words. There is still a lack of corresponding research and powerful tools to understand and process such long dialogues. Therefore, in this work, we present a pre-training framework for long dialogue understanding and summarization. Considering the nature of long conversations, we propose a window-based denoising approach for generative pre-training. For a dialogue, it corrupts a window of text with dialogue-inspired noise and guides the model to reconstruct this window based on the content of the remaining conversation. Furthermore, to process longer input, we augment the model with sparse attention, which is combined with conventional attention in a hybrid manner. We conduct extensive experiments on five datasets of long dialogues, covering the tasks of dialogue summarization, abstractive question answering, and topic segmentation. Experimentally, we show that our pre-trained model DialogLM significantly surpasses state-of-the-art models across datasets and tasks. The source code and all pre-trained models are available in our GitHub repository (https://github.com/microsoft/DialogLM).
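The window-based denoising objective can be sketched as a data-preparation step: pick a window of turns, replace it with a mask, and train the model to regenerate the window from the rest of the conversation. The separator and mask tokens below are illustrative, and this sketch shows only plain window masking; DialogLM's actual corruption applies several dialogue-inspired noise types inside the window.

```python
def make_denoising_example(turns: list[str], start: int, size: int,
                           sep: str = " </s> ", mask: str = "<mask>"):
    """Return (corrupted_input, target): the window turns[start:start+size] is
    replaced by a single mask token; the model must regenerate the window from
    the remaining dialogue."""
    window = turns[start:start + size]
    corrupted = turns[:start] + [mask] + turns[start + size:]
    return sep.join(corrupted), sep.join(window)

meeting = [
    "A: Let's review the budget.",
    "B: Marketing is over by 10%.",
    "A: Can we cut the ad spend?",
    "B: Yes, I'll draft a revised plan.",
]
src, tgt = make_denoising_example(meeting, start=1, size=2)
print(src)
print(tgt)
```

Each long conversation yields many such (input, target) pairs, one per sampled window, which is what makes the objective usable for generative pre-training on scarce long-dialogue data.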