Responding with multi-modal content has been recognized as an essential capability for an intelligent conversational agent. In this paper, we introduce the MMDialog dataset to better facilitate multi-modal conversation. MMDialog is composed of a curated set of 1.08 million real-world dialogues with 1.53 million unique images across 4,184 topics. MMDialog has two main and unique advantages. First, it is the largest multi-modal conversation dataset, with 8x more dialogues than prior datasets. Second, it covers a massive range of topics, generalizing to the open domain. To build an engaging dialogue system with this dataset, we propose and formalize two response-production tasks based on retrieval and generative scenarios. In addition, we build two baselines for the above tasks with state-of-the-art techniques and report their experimental performance. We also propose a novel evaluation metric, MM-Relevance, to measure multi-modal responses. Our dataset and scripts are available at https://github.com/victorsungo/MMDialog.
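The abstract does not spell out how MM-Relevance is computed. The snippet below is only a hypothetical Python sketch of one way a multi-modal relevance score could be built from CLIP features, assuming the generated and reference responses (text plus images) are each pooled into a CLIP embedding and compared by cosine similarity; the checkpoint choice and the mean pooling are assumptions, not the paper's definition.

```python
# Hypothetical sketch of a CLIP-based multi-modal relevance score.
# The actual MM-Relevance metric is defined in the MMDialog paper;
# the pooling and model choice here are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_response(text: str, image_paths: list[str]) -> torch.Tensor:
    """Encode the textual and visual parts of a response and mean-pool them."""
    feats = []
    if text:
        inputs = processor(text=[text], return_tensors="pt", truncation=True)
        feats.append(model.get_text_features(**inputs))
    for path in image_paths:
        inputs = processor(images=Image.open(path), return_tensors="pt")
        feats.append(model.get_image_features(**inputs))
    emb = torch.cat(feats, dim=0).mean(dim=0)
    return emb / emb.norm()

def mm_relevance_sketch(pred, ref) -> float:
    """Cosine similarity between pooled CLIP embeddings of two responses."""
    return torch.dot(embed_response(*pred), embed_response(*ref)).item()

# Example with placeholder paths:
# score = mm_relevance_sketch(("Here is my cat!", ["generated.jpg"]),
#                             ("Look at this kitten.", ["reference.jpg"]))
```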
As sharing images in instant messaging is a crucial factor, there has been active research on learning an image-text multi-modal dialogue model. However, training a well-generalized multi-modal dialogue model is challenging because existing multi-modal dialogue datasets contain a small amount of data, limited topics, and a restricted variety of images per dialogue. In this paper, we present a multi-modal dialogue dataset creation pipeline that matches large-scale images to dialogues based on CLIP similarity. Using this automatic pipeline, we propose a large-scale multi-modal dialogue dataset, DialogCC, which covers diverse real-world topics and various images per dialogue. With extensive experiments, we demonstrate that training a multi-modal dialogue model with our dataset can improve generalization performance. Additionally, existing models trained with our dataset achieve state-of-the-art performance on image and text retrieval tasks. The source code and the dataset will be released after publication.
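As a rough illustration of the CLIP-similarity matching the pipeline describes (not the authors' released code), the Python sketch below scores every utterance against every candidate image and keeps the pairs above a threshold; the checkpoint name and the 0.25 threshold are illustrative assumptions.

```python
# Minimal sketch: match candidate images to dialogue utterances by CLIP
# cosine similarity. Threshold and checkpoint are placeholder assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def match_images_to_utterances(utterances, image_paths, threshold=0.25):
    """Return (utterance_idx, image_path, score) triples above the threshold."""
    text_in = processor(text=utterances, return_tensors="pt",
                        padding=True, truncation=True)
    image_in = processor(images=[Image.open(p) for p in image_paths],
                         return_tensors="pt")
    with torch.no_grad():
        t = model.get_text_features(**text_in)
        v = model.get_image_features(**image_in)
    t = t / t.norm(dim=-1, keepdim=True)
    v = v / v.norm(dim=-1, keepdim=True)
    sims = t @ v.T  # (num_utterances, num_images) cosine similarities
    matches = []
    for u, i in (sims >= threshold).nonzero(as_tuple=False).tolist():
        matches.append((u, image_paths[i], sims[u, i].item()))
    return matches
```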
In this paper, we introduce PanGu-Bot, a Chinese pre-trained open-domain dialogue generation model built upon the large pre-trained language model (PLM) PanGu-alpha (Zeng et al., 2021). Different from other pre-trained dialogue models trained on massive amounts of dialogue data, we aim to build a powerful dialogue model with relatively little data and computational cost by inheriting the valuable linguistic capabilities and knowledge of the PLM. To this end, we train PanGu-Bot from the large PLM PanGu-alpha, which has been shown to perform well on a variety of Chinese natural language tasks. We investigate different aspects of the responses generated by PanGu-Bot, including response quality, knowledge, and safety. We show that PanGu-Bot outperforms state-of-the-art Chinese dialogue systems (CDialGPT (Wang et al., 2020), EVA (Zhou et al., 2021), EVA2.0 (Gu et al., 2022)) with respect to the above three aspects. We also demonstrate that PanGu-Bot can easily be deployed to generate emotional responses without further training. Throughout our empirical analysis, we also point out that PanGu-Bot's response quality, knowledge correctness, and safety are still far from perfect, and further exploration is indispensable for building reliable and intelligent dialogue systems. Our model and code will be made available at https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/PanGu-Bot.
The goal of building dialogue agents that can converse with humans naturally has been a long-standing dream of researchers since the early days of artificial intelligence. The well-known Turing Test proposed to judge the ultimate validity of an artificial intelligence agent by the indistinguishability of its dialogues from humans'. It should come as no surprise that human-level dialogue systems are very challenging to build. But, while early efforts on rule-based systems found limited success, the emergence of deep learning enabled great advances on this topic. In this thesis, we focus on methods that address the numerous issues underlying the gap between artificial conversational agents and human-level interlocutors. These methods were proposed and experimented with in ways inspired by general state-of-the-art AI methodologies, but they also targeted the characteristics specific to dialogue systems.
Intelligent dialogue systems that aim to communicate with humans harmoniously in natural language are of great value for advancing human-machine interaction in the era of artificial intelligence. With increasingly complex human-computer interaction requirements (e.g., multi-modal inputs, time sensitivity), traditional text-based dialogue systems can hardly meet the demand for more vivid and convenient interaction. Consequently, visual-context-augmented dialogue systems (VAD), which have the potential to communicate with humans by perceiving and understanding multi-modal information (i.e., visual context in images or videos and textual dialogue history), have become a major research paradigm. Benefiting from the consistency and complementarity between visual and textual context, VAD has the potential to generate engaging and context-aware responses. To depict the development of VAD, we first characterize its concept and unique features, and then present its generic system architecture to illustrate the system workflow. Subsequently, several research challenges and representative works are studied in detail, followed by a summary of authoritative benchmarks. We conclude the paper by raising open issues and promising research trends for VAD, e.g., the cognitive mechanisms of human-machine dialogue under cross-modal dialogue contexts, and knowledge-enhanced cross-modal semantic interaction.
This paper describes the solution of the Lingjing team for NLPCC-2022 Shared Task 4, Multi-modal Dialogue Understanding and Generation (MDUG). The MDUG task can be divided into two phases: multi-modal context understanding and response generation. To make full use of visual information for scene understanding and dialogue generation, we propose a scene-aware prompt for the MDUG task. Specifically, we adopt a multi-task strategy to jointly model scene and session multi-modal understanding. Visual captions are used to capture scene information, while fixed-type template prompts based on scene-aware and session-aware labels are used to further improve dialogue generation performance. Extensive experimental results show that the proposed method achieves state-of-the-art (SOTA) performance compared with other competitive methods, and we rank 1st in all three subtasks of this MDUG competition.
Knowledge-driven dialogue generation has recently achieved remarkable breakthroughs. Compared with general dialogue systems, superior knowledge-grounded dialogue systems can generate more informative and knowledgeable responses with pre-provided knowledge. However, in practical applications, the dialogue system cannot be provided with the corresponding knowledge in advance. To address this problem, we design a knowledge-driven dialogue system named DRKQG (Dynamically Retrieving Knowledge via Query Generation for informative dialogue responses). Specifically, the system can be divided into two modules: a query generation module and a dialogue generation module. First, a time-aware mechanism is used to capture contextual information, from which a query can be generated to retrieve knowledge. Then, we integrate a copy mechanism and Transformers, which allows the response generation module to produce responses derived from both the context and the retrieved knowledge (see the sketch below). Experimental results at LIC2022, the Language and Intelligence Technology Competition, show that our module outperforms baseline models by a large margin on automatic evaluation metrics, while human evaluation by the Baidu linguistics team shows that our system achieves impressive results in being factually correct and knowledgeable.
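For readers unfamiliar with the copy mechanism mentioned above, here is a minimal, generic Python sketch in the pointer-generator style: the decoder mixes its vocabulary distribution with an attention-derived copy distribution over source tokens. This illustrates the general technique only, not DRKQG's exact module.

```python
# Generic copy/pointer mixing: p_final = p_gen * P_vocab + (1 - p_gen) * P_copy.
# Shapes and naming are illustrative; this is not the authors' implementation.
import torch

def mix_copy_distribution(vocab_probs, copy_attn, src_token_ids, p_gen):
    """
    vocab_probs:   (batch, vocab_size) softmax over the output vocabulary
    copy_attn:     (batch, src_len)    attention over source (context + knowledge)
    src_token_ids: (batch, src_len)    vocabulary ids of the source tokens
    p_gen:         (batch, 1)          probability of generating vs. copying
    """
    final = p_gen * vocab_probs
    # Scatter copy probabilities onto the vocabulary positions of source tokens.
    final = final.scatter_add(1, src_token_ids, (1.0 - p_gen) * copy_attn)
    return final

# batch, vocab, src_len = 2, 100, 7
# out = mix_copy_distribution(torch.softmax(torch.randn(batch, vocab), -1),
#                             torch.softmax(torch.randn(batch, src_len), -1),
#                             torch.randint(0, vocab, (batch, src_len)),
#                             torch.sigmoid(torch.randn(batch, 1)))
```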
Real human conversation data are complex, heterogeneous, and noisy, and building an open-domain dialogue system from such data remains a challenging task. In fact, such conversation data still contain a wealth of information and knowledge that has not been fully explored. In this paper, we show that existing open-domain dialogue generation methods, which memorize context-response paired data with autoregressive or encoder-decoder models, underutilize the training data. Different from current approaches that use external knowledge, we explore a retrieval-generation training framework that can exploit heterogeneous and noisy training data by treating it as "evidence". In particular, we use BERTScore for retrieval, which yields better quality of evidence and generation. Experiments on publicly available datasets show that our method helps models generate better responses, even though such training data are usually regarded as low quality. This performance gain is comparable to, or even better than, that obtained by enlarging the training set. We also find that model performance is positively correlated with the relevance of the retrieved evidence. Moreover, our method performs well in zero-shot experiments, which indicates that it may be more robust to real-world data.
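To make the retrieval step concrete, the Python sketch below ranks candidate training utterances as "evidence" for a new context by BERTScore F1, using the public bert-score package; the pooling and top-k choices are illustrative assumptions, not the authors' exact setup.

```python
# Hedged sketch: rank candidate evidence by BERTScore against the context.
# pip install bert-score
from bert_score import score

def retrieve_evidence(context: str, candidate_pool: list[str], top_k: int = 3):
    """Rank candidates by BERTScore F1 against the dialogue context."""
    refs = [context] * len(candidate_pool)
    _, _, f1 = score(candidate_pool, refs, lang="en", verbose=False)
    ranked = sorted(zip(candidate_pool, f1.tolist()), key=lambda x: -x[1])
    return ranked[:top_k]

# evidence = retrieve_evidence("Any plans for the weekend?",
#                              ["I'm going hiking on Saturday.",
#                               "The forecast says rain all weekend.",
#                               "I fixed the bug yesterday."])
```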
In information-seeking dialogues, users converse with an agent to ask a series of questions that are often under- or over-specified. An ideal agent would first identify that it is in such a situation by searching its underlying knowledge sources, and then engage with the user appropriately to resolve it. However, most existing studies either fail to incorporate such agent-side initiative or incorporate it only artificially. In this work, we present INSCIT (pronounced "Insight"), a dataset for information-seeking dialogues with mixed-initiative interactions. It contains 4.7K user-agent turns from 805 human-human conversations, in which the agent searches Wikipedia and either asks for clarification or provides relevant information to address user queries. We define two subtasks, namely evidence passage identification and response generation, as well as a new human evaluation protocol to assess model performance. We report results of two strong baselines based on state-of-the-art models for conversational knowledge identification and open-domain question answering. Both models significantly underperform humans and fail to generate coherent and informative responses, suggesting ample room for improvement in future studies.
Recent advances in pre-trained language models have significantly improved neural response generation. However, existing methods usually view the dialogue context as a linear sequence of tokens and learn to generate the next word through token-level self-attention. Such token-level encoding hinders the exploration of discourse-level coherence among utterances. This paper presents DialogBERT, a novel conversational response generation model that enhances previous PLM-based dialogue models. DialogBERT employs a hierarchical Transformer architecture. To efficiently capture discourse-level coherence among utterances, we propose two training objectives, masked utterance regression and distributed utterance order ranking, in analogy to the original BERT training. Experiments on three multi-turn conversation datasets show that our approach remarkably outperforms baselines such as BART and DialoGPT in terms of quantitative evaluation. Human evaluation suggests that DialogBERT generates more coherent, informative, and human-like responses than the baselines, with significant margins.
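A minimal Python sketch of the hierarchical idea: each utterance is encoded into a vector, and a second, discourse-level Transformer attends over utterance vectors rather than a flat token sequence. Dimensions, layer counts, and the use of a pooled [CLS] vector are assumptions; the paper's training objectives (masked utterance regression, utterance order ranking) are not implemented here.

```python
# Sketch of a two-level (token-level + discourse-level) dialogue encoder.
# Configuration values are illustrative, not the paper's settings.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class HierarchicalDialogueEncoder(nn.Module):
    def __init__(self, plm_name="bert-base-uncased", num_context_layers=2):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(plm_name)
        self.utterance_encoder = AutoModel.from_pretrained(plm_name)
        hidden = self.utterance_encoder.config.hidden_size
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.context_encoder = nn.TransformerEncoder(layer, num_context_layers)

    def forward(self, utterances: list[str]) -> torch.Tensor:
        # Token-level encoding: one [CLS] vector per utterance.
        batch = self.tokenizer(utterances, padding=True, truncation=True,
                               return_tensors="pt")
        cls_vecs = self.utterance_encoder(**batch).last_hidden_state[:, 0]
        # Discourse-level encoding over the sequence of utterance vectors.
        return self.context_encoder(cls_vecs.unsqueeze(0)).squeeze(0)

# ctx = HierarchicalDialogueEncoder()(["Hi, how are you?",
#                                      "Great, just got back from a trip.",
#                                      "Oh nice, where did you go?"])
```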
Generative open-domain dialogue systems can benefit from external knowledge, but the lack of external knowledge resources and the difficulty of finding relevant knowledge limit the development of this technology. To this end, we propose a knowledge-driven dialogue task using dynamic service information. Specifically, we use a large number of service APIs, which provide high coverage and spatiotemporal sensitivity, as the external knowledge source. The dialogue system generates queries to request external services along with user information, obtains the relevant knowledge, and generates responses based on this knowledge. To implement this approach, we collect and release the first open-domain Chinese service-knowledge dialogue dataset, DuSinc. Meanwhile, we build a PLATO-based baseline model that automatically exploits service information in dialogue. Both automatic and human evaluation show that our proposed method significantly improves open-domain dialogue, with a 59.29% improvement in the session-level overall score in human evaluation compared with the dialogue pre-training model PLATO-2. The dataset and the baseline model will be open-sourced.
Crosstalk is a traditional Chinese theatrical performing art. It is typically performed by two performers in the form of a dialogue. With the typical characteristics of dialogue, crosstalk is also designed to amuse the audience. In this study, we introduce CrossDial, the first open-source dataset containing the most classic Chinese crosstalk collected from the Web. In addition, we define two new tasks, provide two benchmarks, and investigate the ability of current dialogue generation models in the field of crosstalk generation. Experimental results and case studies show that crosstalk generation is challenging for straightforward approaches and remains an interesting topic for future work.
Recent video+language datasets cover domains where the interaction is highly structured, such as instructional videos, or where the interaction is scripted, such as TV shows. Both of these properties can lead to spurious cues that models exploit rather than learning to ground language. In this paper, we present GrOunded footbAlL commentaries (GOAL), a novel dataset of football (or `soccer') highlights videos with transcribed live commentaries in English. As the course of a game is unpredictable, so are commentaries, which makes them a unique resource to investigate dynamic language grounding. We also provide state-of-the-art baselines for the following tasks: frame reordering, moment retrieval, live commentary retrieval and play-by-play live commentary generation. Results show that SOTA models perform reasonably well in most tasks. We discuss the implications of these results and suggest new tasks for which GOAL can be used. Our codebase is available at: https://gitlab.com/grounded-sport-convai/goal-baselines.
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
Language is the principal tool for human communication, and humor is one of its most attractive parts. Producing natural language with computers, a.k.a. natural language generation (NLG), has been widely used in dialogue systems, chatbots, machine translation, and computer-aided creation such as idea generation and scriptwriting. However, the humor aspect of natural language is relatively under-investigated, especially in the era of pre-trained language models. In this work, we aim to preliminarily test whether NLG can generate humor as humans do. We build a new dataset consisting of numerous digitized Chinese comical crosstalk scripts (called C$^3$ for short) for a popular Chinese performing art called "Xiangsheng", dating back to the 1800s. (For the convenience of non-Chinese speakers, we refer to "Xiangsheng" as "crosstalk" in this paper.) We benchmark various generation approaches, including training Seq2Seq models, fine-tuning middle-scale PLMs, and using large-scale PLMs (with and without fine-tuning). Moreover, we conduct a human evaluation, showing that 1) large-scale pre-training largely improves crosstalk generation quality; and 2) even the scripts generated by the best PLM are far from our expectations, achieving only 65% of the quality of human-created crosstalk. The data and benchmark code are publicly available at https://github.com/anonno2/crosstalk-generation.
Knowledge-grounded dialogue systems are challenging to build due to the lack of training data and heterogeneous knowledge sources. Existing systems perform poorly on unseen topics because of the limited topics covered in the training data. In addition, heterogeneous knowledge sources make it difficult for systems to generalize to other tasks, because knowledge sources in different knowledge representations require different knowledge encoders. To address these challenges, we present PLUG, a language model that homogenizes different knowledge sources into a unified knowledge source for knowledge-grounded dialogue generation tasks. PLUG is pre-trained on a dialogue generation task conditioned on a unified essential knowledge representation. It can generalize to different downstream knowledge-grounded dialogue generation tasks with a few training examples. Empirical evaluation on two benchmarks shows that our model generalizes well across different knowledge-grounded dialogue tasks. It achieves performance comparable to state-of-the-art methods in the fully supervised setting and significantly outperforms other methods in zero-shot and few-shot settings.
Conversational AI has become an increasingly prominent and practical application of machine learning. However, existing conversational AI techniques still suffer from various limitations. One such limitation is a lack of well-developed methods for incorporating auxiliary information that could help a model understand conversational context better. In this paper, we explore how persona-based information could help improve the quality of response generation in conversations. First, we provide a literature review focusing on the current state-of-the-art methods that utilize persona information. We evaluate two strong baseline methods, the Ranking Profile Memory Network and the Poly-Encoder, on the NeurIPS ConvAI2 benchmark dataset. Our analysis elucidates the importance of incorporating persona information into conversational systems. Additionally, our study highlights several limitations with current state-of-the-art methods and outlines challenges and future research directions for advancing personalized conversational AI technology.
Empathy is a vital factor that contributes to mutual understanding and joint problem-solving. In recent years, a growing number of studies have recognized the benefits of empathy and started to incorporate empathy in conversational systems. We refer to this topic as empathetic conversational systems. To identify the critical gaps and future opportunities in this topic, this paper examines this rapidly growing field using five review dimensions: (i) conceptual empathy models and frameworks, (ii) adopted empathy-related concepts, (iii) datasets and algorithmic techniques developed, (iv) evaluation strategies, and (v) state-of-the-art approaches. The findings show that most studies have centered on the use of the EMPATHETICDIALOGUES dataset, and the text-based modality dominates research in this field. Studies mainly focused on extracting features from the messages of the users and the conversational systems, with minimal emphasis on user modeling and profiling. Notably, studies that have incorporated emotion causes, external knowledge, and affect matching in the response generation models, have obtained significantly better results. For implementation in diverse real-world settings, we recommend that future studies should address key gaps in areas of detecting and authenticating emotions at the entity level, handling multimodal inputs, displaying more nuanced empathetic behaviors, and encompassing additional dialogue system features.
Developing conversational agents that interact with patients and provide preliminary clinical advice has attracted increasing attention due to its huge application potential, especially during the COVID-19 pandemic. However, training end-to-end neural dialogue systems is restricted by the insufficient quantity of medical dialogue corpora. In this work, we make the first attempt to build and release a large-scale, high-quality medical dialogue dataset related to 12 common gastrointestinal diseases, named MedDG, with more than 17K conversations collected from an online health consultation community. Five different categories of entities, including diseases, symptoms, attributes, tests, and medicines, are annotated in each conversation of MedDG. To push forward future research on building expert-sensitive medical dialogue systems, we propose two kinds of medical dialogue tasks based on the MedDG dataset: next-entity prediction and doctor response generation. To acquire a clear comprehension of these two medical dialogue tasks, we implement several state-of-the-art benchmarks and design two dialogue models that additionally take the predicted entities into consideration. Experimental results show that pre-trained language models and other baselines struggle on both tasks, with poor performance on our dataset, and that response quality can be enhanced with the help of auxiliary entity information. From human evaluation, a simple retrieval model outperforms several state-of-the-art generative models, indicating that there is still a large room for improvement in generating meaningful responses.
Chatbots are expected to be knowledgeable across multiple domains, e.g. for daily chit-chat, exchange of information, and grounding in emotional situations. To effectively measure the quality of such conversational agents, a model-based automatic dialogue evaluation metric (ADEM) is expected to perform well across multiple domains. Despite significant progress, an ADEM that works well in one domain does not necessarily generalize to another. This calls for a dedicated network architecture for domain generalization. To tackle the multi-domain dialogue evaluation task, we propose a Panel of Experts (PoE), a multitask network that consists of a shared transformer encoder and a collection of lightweight adapters. The shared encoder captures the general knowledge of dialogues across domains, while each adapter specializes in one specific domain and serves as a domain expert. To validate the idea, we construct a high-quality multi-domain dialogue dataset leveraging data augmentation and pseudo-labeling. The PoE network is comprehensively assessed on 16 dialogue evaluation datasets spanning a wide range of dialogue domains. It achieves state-of-the-art performance in terms of mean Spearman correlation over all the evaluation datasets. It exhibits better zero-shot generalization than existing state-of-the-art ADEMs and the ability to easily adapt to new domains with few-shot transfer learning.
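A hedged Python sketch of the adapter idea behind the Panel of Experts: a shared encoder provides general dialogue features, and a small bottleneck adapter per domain specializes them for scoring. Sizes, the scoring head, and the routing scheme are assumptions; see the paper for the actual PoE architecture.

```python
# Sketch of per-domain bottleneck adapters over a shared encoder's features.
# Hidden sizes, bottleneck width, and the linear scorer are placeholders.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project, non-linearity, up-project, residual connection."""
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

class PanelOfExpertsHead(nn.Module):
    """One lightweight adapter plus a scoring head per dialogue domain."""
    def __init__(self, domains, hidden_size: int = 768):
        super().__init__()
        self.adapters = nn.ModuleDict({d: BottleneckAdapter(hidden_size) for d in domains})
        self.scorers = nn.ModuleDict({d: nn.Linear(hidden_size, 1) for d in domains})

    def forward(self, shared_features: torch.Tensor, domain: str) -> torch.Tensor:
        # shared_features: pooled output of the shared dialogue encoder.
        return self.scorers[domain](self.adapters[domain](shared_features))

# head = PanelOfExpertsHead(["chitchat", "knowledge", "empathy"])
# quality = head(torch.randn(4, 768), domain="chitchat")  # per-dialogue scores
```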