Chatbots are expected to be knowledgeable across multiple domains, e.g. for daily chit-chat, exchange of information, and grounding in emotional situations. To effectively measure the quality of such conversational agents, a model-based automatic dialogue evaluation metric (ADEM) is expected to perform well across multiple domains. Despite significant progress, an ADEM that works well in one domain does not necessarily generalize to another. This calls for a dedicated network architecture for domain generalization. To tackle the multi-domain dialogue evaluation task, we propose a Panel of Experts (PoE), a multitask network that consists of a shared transformer encoder and a collection of lightweight adapters. The shared encoder captures the general knowledge of dialogues across domains, while each adapter specializes in one specific domain and serves as a domain expert. To validate the idea, we construct a high-quality multi-domain dialogue dataset leveraging data augmentation and pseudo-labeling. The PoE network is comprehensively assessed on 16 dialogue evaluation datasets spanning a wide range of dialogue domains. It achieves state-of-the-art performance in terms of mean Spearman correlation over all the evaluation datasets. It exhibits better zero-shot generalization than existing state-of-the-art ADEMs and the ability to easily adapt to new domains with few-shot transfer learning.
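To make the shared-encoder-plus-adapters design concrete, here is a minimal PyTorch sketch (module names, sizes, and the pooling/scoring head are our own illustrative choices, not the PoE release): each domain expert is a small bottleneck adapter attached to a shared transformer encoder, and a dialogue is scored by routing its encoding through the adapter of the matching domain.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Lightweight bottleneck adapter: down-project, nonlinearity, up-project,
    with a residual connection."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))

class PanelOfExperts(nn.Module):
    """A shared encoder plus one adapter per dialogue domain. The shared
    encoder captures domain-general knowledge; each adapter specializes
    in a single domain."""
    def __init__(self, hidden_size: int, domains: list):
        super().__init__()
        # Stand-in for a shared pre-trained transformer encoder.
        self.shared_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden_size, nhead=8,
                                       batch_first=True),
            num_layers=2)
        self.experts = nn.ModuleDict({d: Adapter(hidden_size) for d in domains})
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, token_embeddings: torch.Tensor, domain: str) -> torch.Tensor:
        h = self.shared_encoder(token_embeddings)
        h = self.experts[domain](h)           # route through the domain expert
        return self.scorer(h.mean(dim=1))     # pooled dialogue-quality score

poe = PanelOfExperts(hidden_size=256, domains=["chitchat", "knowledge", "empathy"])
x = torch.randn(4, 32, 256)                   # (batch, tokens, hidden)
print(poe(x, domain="chitchat").shape)        # torch.Size([4, 1])
```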
Chatbots are designed to carry out human-like conversations across different domains, such as general chit-chat, knowledge exchange, and persona-grounded conversations. To measure the quality of such conversational agents, a dialogue evaluator is expected to conduct the assessment across domains as well. However, most state-of-the-art automatic dialogue evaluation metrics (ADMs) are not designed for multi-domain evaluation. We are motivated to design a general and robust framework, MDD-Eval, to address this problem. Specifically, we first train a teacher evaluator with human-annotated data to acquire a rating skill that tells good dialogue responses from bad ones in a particular domain, and then adopt a self-training strategy to train a new evaluator with teacher-annotated multi-domain data, which helps the new evaluator generalize across multiple domains. MDD-Eval is extensively assessed on six dialogue evaluation benchmarks. Empirical results show that the MDD-Eval framework achieves strong performance, with a 7% absolute improvement over the state-of-the-art ADMs in terms of mean Spearman correlation scores across all the evaluation benchmarks.
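A schematic of the teacher/self-training recipe described above may help; the class interfaces (`fit`, `rate`) and the confidence filter are hypothetical stand-ins rather than the MDD-Eval implementation:

```python
def self_training(teacher, student, labeled_in_domain, unlabeled_multi_domain,
                  confidence_threshold=0.9):
    """Schematic teacher/student self-training loop (interfaces assumed)."""
    # 1) The teacher acquires its rating skill from human-annotated,
    #    single-domain data.
    teacher.fit(labeled_in_domain)

    # 2) The teacher pseudo-labels dialogue responses from many domains,
    #    keeping only confident judgments.
    pseudo_labeled = []
    for context, response in unlabeled_multi_domain:
        score, confidence = teacher.rate(context, response)
        if confidence >= confidence_threshold:
            pseudo_labeled.append((context, response, score))

    # 3) The student is trained on the teacher-annotated multi-domain data,
    #    which is what lets it generalize across domains.
    student.fit(pseudo_labeled)
    return student
```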
Recently, there has been a surge of interest in applying pre-trained language models (Pr-LMs) to automatic open-domain dialogue evaluation. Pr-LMs offer a promising direction for addressing the multi-domain evaluation challenge. Yet, the impact of different Pr-LMs on the performance of automatic metrics is not well understood. This paper examines 8 different Pr-LMs and studies their impact on three typical automatic dialogue evaluation metrics across three different dialogue evaluation benchmarks. Specifically, we analyze how the choice of Pr-LM affects the performance of the automatic metrics. Extensive correlation analyses are performed for each metric to assess the effects of different Pr-LMs along various axes, including pre-training objectives, dialogue evaluation criteria, model size, and cross-dataset robustness. This study serves as the first comprehensive assessment of the effects of different Pr-LMs on automatic dialogue evaluation.
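The core of such a study is a per-backbone correlation analysis. The toy sketch below (with random stand-in scores; `scipy` provides `spearmanr`) shows the shape of the computation: for each Pr-LM backbone, correlate the metric's scores with human ratings:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
human_ratings = rng.uniform(1, 5, size=200)      # per-response quality ratings

backbones = ["bert-base", "roberta-base", "electra-base"]
for name in backbones:
    # Stand-in for the metric's scores when built on this backbone; the real
    # study uses metric outputs on dialogue evaluation benchmarks.
    metric_scores = human_ratings + rng.normal(0, 1.0, size=200)
    rho, pval = spearmanr(metric_scores, human_ratings)
    print(f"{name}: Spearman rho = {rho:.3f} (p = {pval:.1e})")
```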
In recent years, dialogue systems have attracted significant interest from academia and industry. In particular, the discipline of open-domain dialogue systems, a.k.a. chatbots, has gained great momentum. Yet a long-standing challenge that troubles researchers is the lack of effective automatic evaluation metrics, which hampers current research. Common practice in assessing the performance of open-domain dialogue models involves extensive human evaluation of the final deployed models, which is both time- and cost-intensive. Moreover, a recent trend in building open-domain chatbots involves pre-training dialogue models on large amounts of social media conversation data. However, the information contained in social media conversations may be offensive and inappropriate, and indiscriminate use of such data can result in insensitive and toxic generative models. This paper describes the data, baselines, and results obtained for Track 5 of the Dialogue System Technology Challenge 10 (DSTC10).
The goal of building dialogue agents that can converse with humans naturally has been a long-standing dream of researchers since the early days of artificial intelligence. The well-known Turing Test proposed to judge the ultimate validity of an artificial intelligence agent by the indistinguishability of its dialogues from humans'. It should come as no surprise that human-level dialogue systems are very challenging to build. But while early efforts on rule-based systems found limited success, the emergence of deep learning has enabled great advances on this topic. In this thesis, we focus on methods that address the numerous issues underlying the gap between artificial conversational agents and human-level interlocutors. These methods were proposed and experimented with in ways inspired by general state-of-the-art AI methodologies, but they also target the characteristics that dialogue systems possess.
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the low interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches has emerged in the last 3-4 years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges the field is facing and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, and in turn to improvements in downstream tasks such as abstractive summarization, dialogue generation, and data-to-text generation. However, it is also apparent that deep-learning-based generation is prone to hallucinate unintended text, which degrades system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented on measuring and mitigating hallucinated text, but these have never before been reviewed in a comprehensive manner. In this survey, we therefore provide a broad overview of the research progress and challenges of the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucination in the following downstream tasks: abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated text in NLG.
This paper introduces a novel self-supervised fine-grained dialogue evaluation framework (SelF-Eval). The core idea is to model the correlation between turn-level quality and whole-dialogue quality. We first propose a novel automatic data construction method that can assign fine-grained scores to arbitrary dialogue data. We then train SelF-Eval with a multi-level contrastive learning schema that helps the model distinguish different score levels. Experimental results on multiple benchmarks show that SelF-Eval is highly consistent with human evaluations and outperforms state-of-the-art models. We provide a detailed analysis of the experiments in this paper. Our code and data will be released on GitHub.
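As a hedged illustration of a multi-level contrastive objective in this spirit (a margin-ranking formulation of our own, not necessarily the paper's exact loss): responses assigned a higher fine-grained score level should be ranked above lower-level ones by at least a margin.

```python
import torch

def multi_level_ranking_loss(pred_scores: torch.Tensor,
                             level_labels: torch.Tensor,
                             margin: float = 0.1) -> torch.Tensor:
    """pred_scores: (N,) model scores; level_labels: (N,) integer quality levels.
    Penalizes any pair where a higher-level sample fails to outscore a
    lower-level one by at least `margin`."""
    losses = []
    for i in range(len(pred_scores)):
        for j in range(len(pred_scores)):
            if level_labels[i] > level_labels[j]:
                losses.append(torch.relu(margin - (pred_scores[i] - pred_scores[j])))
    return torch.stack(losses).mean()

scores = torch.tensor([0.9, 0.2, 0.5], requires_grad=True)
levels = torch.tensor([2, 0, 1])
print(multi_level_ranking_loss(scores, levels))   # differentiable scalar loss
```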
In this paper, we introduce PanGu-Bot, a Chinese pre-trained open-domain dialogue generation model built upon the large pre-trained language model (PLM) PanGu-alpha (Zeng et al., 2021). Different from other pre-trained dialogue models trained from scratch over massive amounts of dialogue data, we aim to build a powerful dialogue model with relatively less data and computational cost by inheriting valuable language capabilities and knowledge from the PLM. To this end, we train PanGu-Bot from the large PLM PanGu-alpha, which has been shown to perform well on a variety of Chinese natural language tasks. We investigate different aspects of the responses generated by PanGu-Bot, including response quality, knowledge, and safety. We show that PanGu-Bot outperforms state-of-the-art Chinese dialogue systems (CDialGPT (Wang et al., 2020), EVA (Zhou et al., 2021), EVA2.0 (Gu et al., 2022)) w.r.t. the above three aspects. We also demonstrate that PanGu-Bot can be easily deployed to generate emotional responses without further training. Throughout our empirical analysis, we also point out that PanGu-Bot's response quality, knowledge correctness, and safety are still far from perfect, and further exploration is indispensable for building reliable and intelligent dialogue systems. Our model and code will be made available at https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/PanGu-Bot.
In natural language processing (NLP), the context of a word or sentence plays an essential role. Contextual information, such as the semantic representation of a passage or the historical dialogue, forms an essential part of a conversation and of a precise understanding of the present phrase or sentence. However, standard attention mechanisms typically generate weights using query and key but ignore context, forming a Bi-Attention framework, despite their great success in modeling sequence alignment. This Bi-Attention mechanism does not explicitly model the interactions between the contexts, queries, and keys of target sequences, missing important contextual information and resulting in poor attention performance. Accordingly, a novel and general triple-attention (Tri-Attention) framework expands the standard Bi-Attention mechanism and explicitly interacts query, key, and context by incorporating context as the third dimension in calculating relevance scores. Four variants of Tri-Attention are generated by expanding the two-dimensional vector-based additive, dot-product, scaled dot-product, and bilinear operations in Bi-Attention to tensor operations for Tri-Attention. Extensive experiments on three NLP tasks demonstrate that Tri-Attention outperforms about 30 state-of-the-art non-attention, standard Bi-Attention, and contextual Bi-Attention approaches and pre-trained neural language models.
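A minimal sketch of one Tri-Attention variant (scaled dot-product style) follows; the exact tensor parameterization in the paper may differ, but the idea is that the relevance score becomes a three-way interaction between query, key, and a context vector:

```python
import torch
import torch.nn.functional as F

def tri_attention(Q, K, V, c):
    """Q: (B, Lq, D), K/V: (B, Lk, D), c: (B, D) context representation.
    Three-way elementwise interaction: score_ij = sum_d Q_id * K_jd * c_d."""
    d = Q.size(-1)
    Kc = K * c.unsqueeze(1)                      # modulate keys by the context
    scores = Q @ Kc.transpose(1, 2) / d ** 0.5   # (B, Lq, Lk) relevance scores
    return F.softmax(scores, dim=-1) @ V

B, Lq, Lk, D = 2, 5, 7, 16
out = tri_attention(torch.randn(B, Lq, D), torch.randn(B, Lk, D),
                    torch.randn(B, Lk, D), torch.randn(B, D))
print(out.shape)  # torch.Size([2, 5, 16])
```

Setting `c` to a vector of ones recovers ordinary scaled dot-product Bi-Attention, which makes the role of the third dimension easy to see.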
Learning high-quality dialogue representations is essential for solving a variety of dialogue-oriented tasks, especially considering that dialogue systems often suffer from data scarcity. In this paper, we introduce Dialogue Sentence Embedding (DSE), a self-supervised contrastive learning method that learns effective dialogue representations suitable for a wide range of dialogue tasks. DSE learns from dialogues by taking consecutive utterances of the same dialogue as positive pairs for contrastive learning. Despite its simplicity, DSE achieves significantly better representation capability than other dialogue representation models and universal sentence representation models. We evaluate DSE on five downstream dialogue tasks that examine dialogue representations at different semantic granularities. Experiments in few-shot and zero-shot settings show that DSE outperforms the baselines by a large margin; for example, it achieves a 13% average performance improvement over the strongest unsupervised baseline in 1-shot intent classification on 6 datasets. We also provide an analysis of the benefits and limitations of our model.
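The training signal is easy to sketch: consecutive utterances of the same dialogue become positive pairs, and an InfoNCE-style loss with in-batch negatives pulls them together. The encoder outputs below are random stand-ins; only the pair construction and loss follow the abstract:

```python
import torch
import torch.nn.functional as F

def info_nce(anchor_emb, positive_emb, temperature=0.05):
    """anchor_emb, positive_emb: (B, D); row i of each is a positive pair,
    and all other rows in the batch serve as negatives."""
    a = F.normalize(anchor_emb, dim=-1)
    p = F.normalize(positive_emb, dim=-1)
    logits = a @ p.t() / temperature          # (B, B) cosine similarities
    labels = torch.arange(len(a))             # the diagonal holds positives
    return F.cross_entropy(logits, labels)

dialogue = ["hi, how are you?", "pretty good, just got back from a run.",
            "nice! how far did you go?", "about five kilometers."]
# Consecutive utterances -> positive pairs: (u0,u1), (u1,u2), (u2,u3)
pairs = list(zip(dialogue[:-1], dialogue[1:]))
emb = torch.randn(len(pairs), 128)            # stand-in for encoder(anchors)
emb_pos = emb + 0.1 * torch.randn_like(emb)   # stand-in for encoder(positives)
print(info_nce(emb, emb_pos))
```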
Large language models have shown impressive few-shot results on a wide range of tasks. However, when knowledge is key for such results, as is the case for tasks such as question answering and fact checking, massive parameter counts seem to be needed to store that knowledge. Retrieval-augmented models are known to excel at knowledge-intensive tasks without needing as many parameters, but it is unclear whether they work in few-shot settings. In this work we present Atlas, a carefully designed and pre-trained retrieval-augmented language model able to learn knowledge-intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT, and Natural Questions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy on Natural Questions using only 64 examples, outperforming a 540B-parameter model despite having 50x fewer parameters.
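For intuition, a toy retrieve-then-read pipeline is sketched below. Atlas itself uses a dense retriever and a sequence-to-sequence reader; here the retriever is a bag-of-words scorer and the reader is left as a placeholder prompt:

```python
from collections import Counter
import math

def overlap_score(query: str, doc: str) -> float:
    """Toy lexical-overlap retriever score (not Atlas's dense retriever)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values()) / math.sqrt(len(q) * len(d) or 1)

index = ["paris is the capital of france",
         "the eiffel tower is in paris",
         "mount fuji is in japan"]

def retrieve_then_read(question: str, k: int = 2) -> str:
    top_docs = sorted(index, key=lambda d: overlap_score(question, d),
                      reverse=True)[:k]
    # A real reader (a seq2seq PLM) would generate the answer conditioned on
    # question + retrieved docs; we just return the assembled prompt.
    return "\n".join(top_docs) + f"\nQ: {question}\nA:"

print(retrieve_then_read("what is the capital of france?"))
```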
Recent advances in pre-trained language models have significantly improved neural response generation. However, existing methods usually view the dialogue context as a linear sequence of tokens and learn to generate the next word through token-level self-attention. Such token-level encoding hinders the exploration of discourse-level coherence among utterances. This paper presents DialogBERT, a novel conversational response generation model that enhances previous PLM-based dialogue models. DialogBERT employs a hierarchical Transformer architecture. To efficiently capture discourse-level coherence among utterances, we propose two training objectives, masked utterance regression and distributed utterance order ranking, in analogy to the original BERT training. Experiments on three multi-turn conversation datasets show that our approach remarkably outperforms baselines such as BART and DialoGPT in terms of quantitative evaluation. Human evaluation suggests that DialogBERT generates more coherent, informative, and human-like responses than the baselines by significant margins.
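A schematic of the hierarchical encoding is shown below (dimensions, layer counts, and mean-pooling are illustrative choices, not the released DialogBERT code): a token-level encoder produces one vector per utterance, and a second encoder models the utterance sequence at the discourse level.

```python
import torch
import torch.nn as nn

class HierarchicalDialogueEncoder(nn.Module):
    def __init__(self, d_model: int = 256):
        super().__init__()
        utt_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.utterance_encoder = nn.TransformerEncoder(utt_layer, num_layers=2)
        ctx_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.context_encoder = nn.TransformerEncoder(ctx_layer, num_layers=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: (batch, n_utterances, n_tokens, d_model) token embeddings."""
        b, u, t, d = x.shape
        token_states = self.utterance_encoder(x.view(b * u, t, d))
        utt_vectors = token_states.mean(dim=1).view(b, u, d)  # pool per utterance
        return self.context_encoder(utt_vectors)   # discourse-level states

enc = HierarchicalDialogueEncoder()
print(enc(torch.randn(2, 4, 16, 256)).shape)        # torch.Size([2, 4, 256])
```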
Recently, pre-training methods have shown remarkable success in task-oriented dialog (TOD) systems. However, most existing pre-trained models for TOD focus on either dialog understanding or dialog generation, but not both. In this paper, we propose SPACE-3, a novel unified semi-supervised pre-trained conversation model that learns from large-scale dialog corpora with limited annotations and can be effectively fine-tuned on a wide range of downstream dialog tasks. Specifically, SPACE-3 consists of four successive components in a single transformer to maintain the task flow in TOD systems: (i) a dialog encoding module to encode the dialog history, (ii) a dialog understanding module to extract semantic vectors from either user queries or system responses, (iii) a dialog policy module to generate a policy vector that contains the high-level semantics of the response, and (iv) a dialog generation module to produce appropriate responses. We design a dedicated pre-training objective for each component. Concretely, we pre-train the dialog encoding module with span mask language modeling to learn contextualized dialog information. To capture structured dialog semantics, we pre-train the dialog understanding module via a novel tree-induced semi-supervised contrastive learning objective with the help of extra dialog annotations. In addition, we pre-train the dialog policy module for policy optimization by minimizing the L2 distance between its output policy vector and the semantic vector of the response. Finally, the dialog generation module is pre-trained by language modeling. Results show that SPACE-3 achieves state-of-the-art performance on eight downstream dialog benchmarks, including intent prediction, dialog state tracking, and end-to-end dialog modeling. We also show that SPACE-3 has a stronger few-shot ability than existing models under low-resource settings.
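The policy pre-training signal, in particular, is simple to sketch: the policy module's output vector is pulled toward the semantic vector of the gold response with an L2 (here, mean-squared) objective. The vectors below are random stand-ins for the two module outputs:

```python
import torch
import torch.nn.functional as F

policy_vector = torch.randn(8, 256, requires_grad=True)  # from the policy module
response_semantics = torch.randn(8, 256)                 # from the understanding module

# Mean-squared L2 distance between policy vector and response semantics.
policy_loss = F.mse_loss(policy_vector, response_semantics)
policy_loss.backward()
print(policy_loss.item())
```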
Recent advances in NLP have been brought about by a range of large-scale pre-trained language models (PLMs). These PLMs have brought significant performance gains on a range of NLP tasks, circumventing the need to customize complex designs for specific tasks. However, most current work focuses on fine-tuning PLMs on domain-specific datasets, ignoring the fact that the domain gap can lead to overfitting and even performance drops. It is therefore practically important to find an appropriate method to effectively adapt PLMs to a target domain of interest, and a range of methods have recently been proposed to achieve this purpose. Early surveys on domain adaptation are not suitable for PLMs, because PLMs exhibit behavior far more sophisticated than that of traditional models trained from scratch, and domain adaptation of PLMs needs to be redesigned to take effect. This paper aims to provide a survey of these newly proposed methods and to shed light on how to apply traditional machine learning methods to newly evolved and future technologies. By examining the issues of deploying PLMs for downstream tasks, we propose a taxonomy of domain adaptation approaches from a machine learning system view, covering methods for input augmentation, model optimization, and personalization. We discuss and compare these methods and suggest promising future research directions.
Pre-trained language models (PLMs) have marked a huge leap in neural dialogue modeling. While PLMs are pre-trained on large-scale text corpora, they are usually fine-tuned on scarce dialogue data with specific domain knowledge and dialogue styles. However, tailoring the language models while fully utilizing the prior knowledge in the large pre-trained models remains a challenge. In this paper, we present a novel approach to pre-trained dialogue modeling that casts the dialogue generation problem as a prompt-learning task. Instead of fine-tuning on limited dialogue data, our approach, DialogPrompt, learns continuous prompt embeddings optimized for dialogue contexts, which elicit knowledge from the large pre-trained model. To encourage the model to better utilize the prompt embeddings, the prompt encoder is designed to be conditioned on the input dialogue context. Experiments on popular conversation datasets show that our approach significantly outperforms the fine-tuning baseline and generic prompt-learning methods. Furthermore, human evaluations strongly support the superiority of DialogPrompt with regard to response generation quality.
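A hedged sketch of context-conditioned prompt learning follows (the shapes, the mean-pooling choice, and the two-layer prompt encoder are illustrative, not the paper's architecture): a small encoder maps the dialogue context to continuous prompt embeddings that are prepended to the frozen PLM's input.

```python
import torch
import torch.nn as nn

class ContextPromptEncoder(nn.Module):
    """Maps a pooled dialogue-context representation to a sequence of
    continuous prompt embeddings."""
    def __init__(self, d_model: int = 768, n_prompt_tokens: int = 10):
        super().__init__()
        self.n = n_prompt_tokens
        self.proj = nn.Sequential(
            nn.Linear(d_model, d_model), nn.Tanh(),
            nn.Linear(d_model, n_prompt_tokens * d_model))

    def forward(self, context_embeddings: torch.Tensor) -> torch.Tensor:
        """context_embeddings: (B, L, D) -> prompts: (B, n_prompt_tokens, D)."""
        pooled = context_embeddings.mean(dim=1)
        return self.proj(pooled).view(-1, self.n, context_embeddings.size(-1))

ctx = torch.randn(4, 20, 768)                 # embedded dialogue context
prompts = ContextPromptEncoder()(ctx)         # (4, 10, 768)
plm_input = torch.cat([prompts, ctx], dim=1)  # prepend before the frozen PLM
print(plm_input.shape)                        # torch.Size([4, 30, 768])
```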
In this paper, we propose to formulate the task-oriented dialogue system as a purely natural language generation task, so as to fully leverage large-scale pre-trained models like GPT-2 and to simplify complicated delexicalization preprocessing. However, directly applying this method suffers heavily from dialogue entity inconsistency caused by the removal of delexicalized tokens, as well as from the catastrophic forgetting problem of the pre-trained model during fine-tuning, leading to unsatisfactory performance. To alleviate these problems, we design a novel GPT-Adapter-CopyNet network, which incorporates lightweight adapter and CopyNet modules into GPT-2 to achieve better performance on transfer learning and dialogue entity generation. Experimental results on the DSTC8 Track 1 benchmark and the MultiWOZ dataset demonstrate that our proposed approach significantly outperforms baseline models, with remarkable performance in both automatic and human evaluations.
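The CopyNet side of the design can be illustrated in a few lines: the output distribution mixes the generator's vocabulary distribution with copy attention over source tokens (e.g., dialogue entities) via a learned gate. The tensors below are toy stand-ins, not the paper's GPT-Adapter-CopyNet weights:

```python
import torch

vocab_size, src_len = 100, 12
p_vocab = torch.softmax(torch.randn(vocab_size), dim=-1)  # generator distribution
attn = torch.softmax(torch.randn(src_len), dim=-1)        # copy attention over source
src_token_ids = torch.randint(0, vocab_size, (src_len,))  # vocab ids of source tokens
p_gen = torch.sigmoid(torch.randn(()))                    # generate-vs-copy gate

# Final distribution: generate with prob p_gen, copy with prob (1 - p_gen).
p_final = p_gen * p_vocab
p_final = p_final.scatter_add(0, src_token_ids, (1 - p_gen) * attn)
print(p_final.sum())  # ~1.0: still a valid probability distribution
```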
Knowledge-grounded dialogue systems powered by large language models often generate responses that, while fluent, are not attributable to a relevant source of information. Progress towards models that do not exhibit this issue requires evaluation metrics that can quantify its prevalence. To this end, we introduce the Benchmark for Evaluation of Grounded INteraction (BEGIN), comprising 12k dialogue turns generated by neural dialogue systems trained on three knowledge-grounded dialogue corpora. We collect human annotations assessing the extent to which the models' responses can be attributed to the given background information. We then use BEGIN to analyze eight evaluation metrics. We find that these metrics rely on spurious correlations, do not reliably distinguish attributable abstractive responses from unattributable ones, and perform substantially worse when the knowledge source is longer. Our findings underscore the need for more sophisticated and robust evaluation metrics for knowledge-grounded dialogue. We make BEGIN publicly available at https://github.com/google/BEGIN-dataset.
This paper offers a comprehensive review of research on Natural Language Generation (NLG) over the past two decades, in particular deep learning methods for data-to-text and text-to-text generation, as well as new applications of NLG technology. The survey aims to (a) give an up-to-date synthesis of NLG core tasks and the architectures adopted in the field; (b) detail the various NLG tasks and datasets, and draw attention to the challenges in NLG evaluation, focusing on different evaluation methods and their relationships; and (c) highlight some future emphases and relatively recent research issues that arise from the increasing synergy between NLG and other areas of artificial intelligence, such as computer vision, text, and computational creativity.
Personalized chatbots focus on endowing chatbots with a consistent personality so that they behave like real users and can further act as personal assistants. Previous studies have explored generating implicit user profiles from a user's dialogue history for building personalized chatbots. However, these studies use only the response generation loss to train the entire model, making it prone to data sparsity. Besides, they overemphasize the quality of the final generated response while ignoring the correlations and fusion within the user's dialogue history, leading to coarse data representations and performance degradation. To tackle these problems, we propose MCP, a self-supervised learning framework for capturing better representations of a user's dialogue history for personalized chatbots. Specifically, we apply contrastive sampling methods to leverage the supervised signals hidden in user dialogue history and generate pre-training samples for enhancing the model. We design three pre-training tasks based on three types of contrastive pairs drawn from user dialogue history: response pairs, sequence augmentation pairs, and user pairs. We pre-train the utterance encoder and the history encoder with these contrastive objectives and use the pre-trained encoders to generate user profiles during personalized response generation. Experimental results on two real-world datasets show that our proposed model MCP significantly outperforms existing methods.
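A schematic of the three contrastive pair types follows; the construction details (e.g., the particular augmentation used) are simplified illustrations rather than the paper's exact recipe:

```python
def build_contrastive_pairs(user_histories):
    """user_histories: {user_id: [response_1, response_2, ...]}"""
    response_pairs, sequence_pairs, user_pairs = [], [], []
    for uid, history in user_histories.items():
        # 1) Response pairs: two responses produced by the same user.
        for i in range(len(history) - 1):
            response_pairs.append((history[i], history[i + 1]))
        # 2) Sequence augmentation pairs: a history and an augmented view
        #    of it (here, simply dropping the last utterance).
        if len(history) > 1:
            sequence_pairs.append((history, history[:-1]))
        # 3) User pairs: two disjoint halves of the same user's history.
        mid = len(history) // 2
        user_pairs.append((history[:mid], history[mid:]))
    return response_pairs, sequence_pairs, user_pairs

histories = {"u1": ["i love hiking", "any trail tips?", "thanks!"],
             "u2": ["coffee or tea?", "definitely coffee"]}
rp, sp, up = build_contrastive_pairs(histories)
print(len(rp), len(sp), len(up))  # 3 2 2
```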