We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect the way we communicate in daily life and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on the DailyDialog dataset and hope it benefits the research field of dialog systems.
In this paper, we introduce PanGu-Bot, a Chinese pre-trained open-domain dialogue generation model based on the large pre-trained language model (PLM) PanGu-alpha (Zeng et al., 2021). Unlike other pre-trained dialogue models trained on massive amounts of dialogue data, we aim to build a powerful dialogue model with relatively little data and computational cost by inheriting the valuable linguistic capabilities and knowledge of the PLM. To this end, we train PanGu-Bot from the large PLM PanGu-alpha, which has been shown to perform well on a variety of Chinese natural language tasks. We investigate different aspects of the responses generated by PanGu-Bot, including response quality, knowledge, and safety. We show that PanGu-Bot outperforms state-of-the-art Chinese dialogue systems (CDIALGPT (Wang et al., 2020), EVA (Zhou et al., 2021), EVA2.0 (Gu et al., 2022)) with respect to the above three aspects. We also demonstrate that PanGu-Bot can easily be deployed to generate emotional responses without further training. Throughout our empirical analysis, we also point out that the response quality, knowledge correctness, and safety of PanGu-Bot are still far from perfect, and further exploration is indispensable for building reliable and intelligent dialogue systems. Our model and code will be available at https://github.com/huawei-noah/pretretaining-language-model/tree/master/master/pangu-bot.
Empathy is a vital factor that contributes to mutual understanding and joint problem-solving. In recent years, a growing number of studies have recognized the benefits of empathy and started to incorporate empathy in conversational systems. We refer to this topic as empathetic conversational systems. To identify the critical gaps and future opportunities in this topic, this paper examines this rapidly growing field using five review dimensions: (i) conceptual empathy models and frameworks, (ii) adopted empathy-related concepts, (iii) datasets and algorithmic techniques developed, (iv) evaluation strategies, and (v) state-of-the-art approaches. The findings show that most studies have centered on the use of the EMPATHETICDIALOGUES dataset, and the text-based modality dominates research in this field. Studies mainly focused on extracting features from the messages of the users and the conversational systems, with minimal emphasis on user modeling and profiling. Notably, studies that have incorporated emotion causes, external knowledge, and affect matching in the response generation models have obtained significantly better results. For implementation in diverse real-world settings, we recommend that future studies address key gaps in areas of detecting and authenticating emotions at the entity level, handling multimodal inputs, displaying more nuanced empathetic behaviors, and encompassing additional dialogue system features.
The goal of building dialogue agents that can converse with humans naturally has been a long-standing dream of researchers since the early days of artificial intelligence. The well-known Turing Test proposed to judge the ultimate validity of an artificial intelligence agent by the indistinguishability of its dialogues from humans'. It should come as no surprise that human-level dialogue systems are very challenging to build. But, while early efforts on rule-based systems found limited success, the emergence of deep learning has enabled great advances on this topic. In this thesis, we focus on methods that address the numerous issues that have maintained the gap between artificial conversational agents and human-level interlocutors. These methods were proposed and experimented with in ways inspired by general state-of-the-art AI methodologies, but they also target the characteristics specific to dialogue systems.
Many dialogue systems (DSs) lack characteristics humans have, such as emotion perception, factuality, and informativeness. Enhancing DSs with knowledge alleviates this problem, but, as many ways of doing so exist, keeping track of all proposed methods is difficult. Here, we present the first survey of knowledge-enhanced DSs. We define three categories of systems - internal, external, and hybrid - based on the knowledge they use. We survey the motivation for enhancing DSs with knowledge, the datasets used, and methods for knowledge search, knowledge encoding, and knowledge incorporation. Finally, we propose how to improve existing systems based on theories from linguistics and cognitive science.
Recent advances in neural approaches to language processing have sparked a resurgence of interest in building intelligent open-domain chatbots. However, even state-of-the-art neural chatbots cannot produce satisfactory responses for every turn in a dialog. A practical solution is to generate multiple response candidates for the same context and then perform response ranking/selection to determine which candidate is best. Previous work on response selection typically trains response rankers on synthetic data formed from existing dialogs, using the ground-truth response as the single appropriate response and constructing inappropriate responses via random selection or adversarial methods. In this work, we curate a dataset in which responses from multiple response generators produced for the same dialog context are manually annotated as appropriate (positive) or inappropriate (negative). We argue that such training data better matches actual use cases, enabling models to rank responses effectively. With this new dataset, we conduct a systematic evaluation of state-of-the-art response selection methods and demonstrate that both strategies, using multiple positive candidates and using manually verified hard negative candidates, bring significant performance improvements compared to adversarial training data; e.g., Recall@1 increases by 3% and 13%, respectively.
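The Recall@1 metric mentioned above measures how often a ranker places an appropriate response at the top of the candidate list. A minimal sketch, with fabricated candidates and labels purely for illustration (none of this data comes from the paper):

```python
# Hypothetical sketch: Recall@1 for response selection with
# human-annotated appropriate/inappropriate candidates.

def recall_at_1(ranked_batches):
    """ranked_batches: list of candidate lists, each sorted by the
    ranker's score (best first); a candidate is (text, is_appropriate)."""
    hits = sum(1 for cands in ranked_batches if cands[0][1])
    return hits / len(ranked_batches)

# Two dialog contexts, candidates already sorted by ranker score.
batches = [
    [("Sure, 7pm works for me.", True), ("Bananas.", False)],
    [("lol", False), ("Yes, the meeting is at noon.", True)],
]
print(recall_at_1(batches))  # 0.5: the top candidate is appropriate in 1 of 2 contexts
```

With human labels, a context can contribute multiple positives, so the metric credits any appropriate top-ranked candidate rather than only the single ground-truth response.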
Creating agents that can both respond appropriately to conversations and understand complex human linguistic tendencies and social cues has been a long-standing challenge in the NLP community. A recent pillar of research centers on emotion recognition in conversation (ERC), a subfield of emotion recognition that focuses on conversations or dialogues containing two or more utterances. In this work, we explore an approach to ERC that exploits the use of neural embeddings for utterances in conversations along with complex structures. We implement our approach in a framework called Probabilistic Soft Logic (PSL), a declarative templating language based on first-order logic rules which, when combined with data, define a particular class of graphical models. Moreover, PSL provides functionality for incorporating the results of neural models into PSL models. This allows our model to take advantage of advanced neural methods, such as sentence embeddings, as well as logical reasoning over the structure of a dialogue. We compare our method against state-of-the-art purely neural ERC systems and see an improvement of nearly 20%. With these results, we provide an extensive qualitative and quantitative analysis of the DailyDialog conversation dataset.
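The general idea of combining neural evidence with weighted soft rules can be illustrated with a toy example. This is not the paper's actual PSL program; the "emotion inertia" rule and the weight below are assumptions made purely for illustration:

```python
# Illustrative sketch (not the paper's actual PSL model): a soft
# "emotion inertia" rule blended with neural per-utterance probabilities.

def blend(neural_probs, prev_label, rule_weight=0.3):
    """neural_probs: dict emotion -> probability from a neural classifier.
    Softly boost the previous utterance's emotion, then renormalize,
    mimicking how a weighted logic rule nudges the neural evidence."""
    scores = {e: p + (rule_weight if e == prev_label else 0.0)
              for e, p in neural_probs.items()}
    z = sum(scores.values())
    return {e: s / z for e, s in scores.items()}

probs = {"happiness": 0.4, "neutral": 0.45, "anger": 0.15}
blended = blend(probs, prev_label="happiness")
print(max(blended, key=blended.get))  # happiness now outranks neutral
```

In an actual PSL system, such rules are expressed declaratively and jointly optimized over the whole conversation graph rather than applied one utterance at a time.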
In recent years, dialogue systems have attracted significant interest from both academia and industry. In particular, the discipline of open-domain dialogue systems, a.k.a. chatbots, has gained great momentum. However, a long-standing challenge that troubles researchers is the lack of effective automatic evaluation metrics, which is an impediment to current research. The common practice for evaluating the performance of open-domain dialogue models involves extensive human evaluation of the final deployed models, which is both time- and cost-intensive. Moreover, a recent trend in building open-domain chatbots involves pre-training dialogue models with a large amount of social media conversation data. However, the information contained in social media conversations may be offensive and inappropriate; indiscriminate use can result in insensitive and toxic generative models. This paper describes the data, baselines, and results obtained for Track 5 of the Dialogue System Technology Challenge 10 (DSTC10).
Chit-chat models are known to have several problems: they lack specificity, do not display a consistent personality, and are often not very captivating. In this work we present the task of making chit-chat more engaging by conditioning on profile information. We collect data and train models to condition on (i) their given profile information and (ii) information about the person they are talking to, resulting in improved dialogues, as measured by next utterance prediction. Since (ii) is initially unknown, our model is trained to engage its partner with personal topics, and we show the resulting dialogue can be used to predict profile information about the interlocutors.
Recent advances in pre-trained language models have significantly improved neural response generation. However, existing methods usually view the dialogue context as a linear sequence of tokens and learn to generate the next word through token-level self-attention. Such token-level encoding hinders the exploration of discourse-level coherence among utterances. This paper presents DialogBERT, a novel conversational response generation model that enhances previous PLM-based dialogue models. DialogBERT employs a hierarchical Transformer architecture. To efficiently capture discourse-level coherence among utterances, we propose two training objectives, including masked utterance regression and distributed utterance order ranking, in analogy to the original BERT training. Experiments on three multi-turn conversation datasets show that our approach remarkably outperforms baselines such as BART and DialoGPT in terms of quantitative evaluation. Human evaluation suggests that DialogBERT generates more coherent, informative, and human-like responses than the baselines, with significant margins.
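The contrast between flat token-level encoding and hierarchical encoding can be shown with a minimal data-flow sketch. DialogBERT uses Transformer encoders at both levels; here each stage is stubbed with mean pooling purely to show the two-stage shape flow, so all of the code below is an illustrative assumption, not the model itself:

```python
# Minimal numpy sketch of hierarchical (two-stage) dialogue encoding,
# as opposed to flattening the context into one token sequence.
import numpy as np

def encode_utterance(token_embs):          # (n_tokens, d) -> (d,)
    return token_embs.mean(axis=0)         # stand-in for an utterance encoder

def encode_context(utt_vecs):              # (n_utts, d) -> (d,)
    return utt_vecs.mean(axis=0)           # stand-in for a context encoder

d = 8
dialog = [np.random.rand(5, d), np.random.rand(3, d), np.random.rand(7, d)]
utt_vecs = np.stack([encode_utterance(u) for u in dialog])   # (3, d)
context_vec = encode_context(utt_vecs)                       # (d,)
print(utt_vecs.shape, context_vec.shape)  # (3, 8) (8,)
```

The point of the hierarchy is that utterances of different lengths are first compressed into fixed-size vectors, and discourse-level objectives (like utterance order ranking) then operate on that utterance-level sequence.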
Natural Language Generation (NLG) represents a large collection of tasks in the field of NLP. While many of these tasks have been tackled well by the cross-entropy (CE) loss, the task of dialog generation poses a few unique challenges for this loss function. First, CE loss assumes that for any given input, the only possible output is the one available as the ground truth in the training dataset. In general, this is not true for any task, as there can be multiple semantically equivalent sentences, each with a different surface form. This problem gets exacerbated further for the dialog generation task, as there can be multiple valid responses (for a given context) that not only have different surface forms but are also not semantically equivalent. Second, CE loss does not take the context into consideration while processing the response and, hence, treats all ground truths with equal importance irrespective of the context. But we may want our final agent to avoid certain classes of responses (e.g., bland, non-informative, or biased responses) and give relatively higher weight to more context-specific responses. To circumvent these shortcomings of the CE loss, in this paper we propose a novel loss function, CORAL, that directly optimizes recently proposed estimates of human preference for generated responses. Using CORAL, we can train dialog generation models without assuming the non-existence of responses other than the ground truth. Also, the CORAL loss is computed based on both the context and the response. Extensive comparisons on two benchmark datasets show that the proposed methods outperform strong state-of-the-art baseline models of different sizes.
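The second shortcoming above, that CE weighs all ground truths equally regardless of context, can be sketched with a toy preference-weighted objective. This is not the actual CORAL loss; the reward values and token probabilities below are fabricated for illustration only:

```python
# Hedged sketch of the idea behind a preference-weighted objective:
# each (context, response) pair's negative log-likelihood is scaled
# by an estimated human-preference reward, unlike plain cross-entropy.
import math

def ce_loss(token_probs):
    """Per-token negative log-likelihood, averaged over the response."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

def preference_weighted_loss(token_probs, reward):
    # reward in [0, 1]: low for bland/off-context responses,
    # high for context-specific ones.
    return reward * ce_loss(token_probs)

good = preference_weighted_loss([0.7, 0.8, 0.9], reward=0.9)
bland = preference_weighted_loss([0.7, 0.8, 0.9], reward=0.2)
print(good > bland)  # True: the context-specific response dominates the gradient
```

Scaling the loss this way makes high-reward responses contribute more to the gradient, so the model concentrates its capacity on the responses humans actually prefer instead of fitting every ground truth equally.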
We have a Christmas gift for Harry Potter fans all over the world. In this paper, we present Harry Potter Dialogue (HPD), a dataset that helps train Harry Potter-like dialogue agents. Such a task is typically viewed as a variant of personalized dialogue agents, but it differs significantly in three respects: 1) Harry lives in a virtual world of wizards, so real-world commonsense may not apply to his conversations; 2) Harry's behavior is strongly linked to background information in conversations: the scene, his attributes, and his relationships to other speakers; and 3) such backgrounds are dynamically altered as the storyline goes on. The HPD dataset, as the first dataset to facilitate the study of dialogue agent construction for characters within a story, provides rich contextual information about each dialogue session, such as scenes, character attributes, and relations. More importantly, all the background information changes over the course of the story. In addition, HPD can support both dialogue generation and retrieval tasks. We evaluate baselines such as DialoGPT and BOB to determine the extent to which they can generate Harry Potter-like responses. The experimental results disappoint us: although the generated responses are fluent, they still seem out of character for Harry. Besides, we validate the current most robust dialogue agent, ChatGPT, which also fails to generate plausible Harry-Potter-like responses in some cases. Our results suggest that there is much scope for future research.
We present SODA: the first publicly available, million-scale high-quality social dialogue dataset. Using SODA, we train COSMO: a generalizable conversation agent outperforming previous best-performing agents on both in- and out-of-domain datasets. In contrast to most existing crowdsourced, small-scale dialogue corpora, we distill 1.5M socially-grounded dialogues from a pre-trained language model (InstructGPT; Ouyang et al., 2022). Dialogues are distilled by contextualizing social commonsense knowledge from a knowledge graph (Atomic10x; West et al., 2022). Human evaluation shows that dialogues in SODA are more consistent, specific, and (surprisingly) natural than prior human-authored datasets - e.g., DailyDialog (Li et al., 2017), BlendedSkillTalk (Smith et al., 2020). In addition, extensive evaluations show that COSMO is significantly more natural and consistent on unseen datasets than best-performing dialogue models - e.g., GODEL (Peng et al., 2022), BlenderBot (Roller et al., 2021), DialoGPT (Zhang et al., 2020). Furthermore, it is sometimes even preferred to the original human-written gold responses. We make our data, models, and code public.
Generative open-domain dialogue systems can benefit from external knowledge, but the lack of external knowledge resources and the difficulty of finding relevant knowledge limit the development of this technology. To this end, we propose a knowledge-driven dialogue task using dynamic service information. Specifically, we use a large number of service APIs that can provide high coverage and spatiotemporal sensitivity as external knowledge sources. The dialogue system generates queries to request external services along with user information, obtains relevant knowledge, and generates responses based on this knowledge. To implement this approach, we collect and release the first open-domain Chinese service knowledge dialogue dataset, DuSinc. Meanwhile, we construct a baseline model, PLATO-SINC, which realizes automatic utilization of service information for dialogue. Both automatic evaluation and human evaluation show that our proposed new method can significantly improve the effect of open-domain dialogue, and the session-level overall score in human evaluation improves by 59.29% compared with the dialogue pre-training model PLATO-2. The dataset and baseline model will be open-sourced.
Conversational AI has become an increasingly prominent and practical application of machine learning. However, existing conversational AI techniques still suffer from various limitations. One such limitation is a lack of well-developed methods for incorporating auxiliary information that could help a model understand conversational context better. In this paper, we explore how persona-based information could help improve the quality of response generation in conversations. First, we provide a literature review focusing on the current state-of-the-art methods that utilize persona information. We evaluate two strong baseline methods, the Ranking Profile Memory Network and the Poly-Encoder, on the NeurIPS ConvAI2 benchmark dataset. Our analysis elucidates the importance of incorporating persona information into conversational systems. Additionally, our study highlights several limitations with current state-of-the-art methods and outlines challenges and future research directions for advancing personalized conversational AI technology.
In this paper, we describe a data-driven approach to developing Emily, an emotion-affective open-domain chatbot. The proposed data-augmentation approach explicitly models positive-transition (PT) emotion data from multi-turn dialogues. We construct a dialogue corpus using the PT emotion data and release it for public use. By fine-tuning a pre-trained dialogue model with the produced PT-enhanced dialogues, we are able to develop an emotion-affective open-domain chatbot that exhibits close-to-human performance on various emotion-affective metrics. We evaluate Emily against several state-of-the-art (SOTA) open-domain chatbots and show the effectiveness of the proposed approach.
We propose a novel problem in end-to-end learning of task-oriented dialogs (TOD) in which the dialogue system mimics a troubleshooting agent that helps a user by diagnosing their problem (e.g., a car that won't start). Such dialogs are grounded in domain-specific flowcharts, which the agent is supposed to follow during the conversation. Our task exposes novel technical challenges for neural TOD, such as grounding an utterance to the flowchart without explicit annotations, referring to additional manual pages when the user asks a clarification question, and following unseen flowcharts at test time. We release a dataset (FloDial) consisting of 2,738 dialogs grounded on 12 different troubleshooting flowcharts. We also design a neural model, FloNet, which uses a retrieval-augmented generation architecture to learn these dialogs. Our experiments find that FloNet can do zero-shot transfer to unseen flowcharts, and it sets a strong baseline for future research.
Humans usually converse by making use of prior knowledge about a topic and background information about the people they are talking to. However, existing conversational agents and datasets do not consider such comprehensive information, so they are limited in generating utterances where knowledge and persona are properly fused. To address this issue, we introduce a call For Customized conversation (FoCus) dataset, where the customized answers are built with the user's persona and Wikipedia knowledge. To evaluate the ability of pre-trained language models to generate informative and customized utterances, we utilize BART and GPT-2 as well as Transformer-based models. We assess their generation abilities with automatic scores and conduct human evaluations for qualitative results. We examine whether the models reflect adequate persona and knowledge with our two proposed sub-tasks, persona grounding (PG) and knowledge grounding (KG). Moreover, we show that the utterances of our data are constructed with the correct knowledge and persona through grounding quality assessment.
Many sociolinguistic cues are used for conversation analysis, such as emotion, sentiment, and dialogue acts. One fundamental social cue is politeness, which linguistically possesses properties useful for conversation analysis. This short paper presents some brief findings on politeness in emotional dialogue acts, where we correlate the relational bonds between these sociolinguistic cues. We find that utterances with the emotions anger and disgust are more likely to be impolite, while happy and sad ones tend to be polite. A similar phenomenon occurs with dialogue acts: Inform and Commissive acts contain many polite utterances, as opposed to Question and Directive acts. Finally, we conclude with future work on these findings.
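The kind of co-occurrence analysis described can be sketched with a simple tally over labeled utterances. The labels below are fabricated for illustration and are not from the paper's data:

```python
# Toy sketch of correlating politeness with emotion labels: tally how
# often each emotion co-occurs with polite vs. impolite utterances.
from collections import Counter

labels = [  # (emotion, is_polite) - fabricated example annotations
    ("anger", False), ("anger", False), ("disgust", False),
    ("happiness", True), ("happiness", True), ("sadness", True),
]
counts = Counter(labels)
impolite_rate = {
    emo: counts[(emo, False)] / (counts[(emo, False)] + counts[(emo, True)])
    for emo in {e for e, _ in labels}
}
print(impolite_rate["anger"], impolite_rate["happiness"])  # 1.0 0.0
```

On real data, the same per-label rates (or a chi-squared test over the full contingency table) would quantify how strongly each emotion or dialogue act is associated with impoliteness.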
We introduce AARGH, an end-to-end task-oriented dialog system that combines retrieval and generative approaches in a single model, aiming to improve dialog management and the lexical diversity of outputs. The model features a new response selection method based on an action-aware training objective and a simplified single-encoder retrieval architecture, which allows us to build an end-to-end retrieval-augmented generation model in which retrieval and generation share most of their parameters. On the MultiWOZ dataset, we show that, compared to state-of-the-art baselines, our approach produces more diverse outputs while maintaining or improving state tracking and context-to-response generation performance.