Generalizing dialogue state tracking (DST) to new data is particularly challenging due to the strong reliance on abundant and fine-grained supervision during training. Sample sparsity, distributional shift, and the occurrence of new concepts and topics frequently lead to severe degradation during inference. In this paper we propose a training strategy to build extractive DST models without fine-grained manual span labels. Two novel input-level dropout methods mitigate the negative impact of sample sparsity. We propose a new model architecture with a unified encoder that supports value as well as slot independence by leveraging the attention mechanism. We combine the strengths of triple copy strategy DST and value matching to benefit from complementary predictions without violating the principle of ontology independence. Our experiments demonstrate that an extractive DST model can be trained without manual span labels. Our architecture and training strategies improve robustness towards sample sparsity, new concepts, and topics, leading to state-of-the-art performance on a range of benchmarks. We further highlight our model's ability to effectively learn from non-dialogue data.
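The input-level dropout idea can be sketched generically. The snippet below is a minimal illustration, not the paper's actual method: the function name, the `[UNK]` marker, and the dropout rate are all assumptions. It randomly hides input tokens so a tracker cannot over-rely on memorized surface values:

```python
import random

def token_dropout(tokens, p=0.1, unk="[UNK]", seed=None):
    """Replace each input token with an unknown marker with probability p.

    Hiding surface tokens at the input level forces the tracker to rely
    on context rather than on memorized slot values.
    """
    rng = random.Random(seed)
    return [unk if rng.random() < p else t for t in tokens]

tokens = "i need a cheap hotel in the north".split()
noised = token_dropout(tokens, p=0.3, seed=0)
```

In practice such noising would be applied on the fly during training, so each epoch sees a different corrupted view of the same sparse data.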
Dialogue state tracking models play an important role in task-oriented dialogue systems. However, most of them model slot types conditionally independently given the input. We discover that this may cause the model to be confused by slot types that share the same data type. To mitigate this issue, we propose TripPy-MRF and TripPy-LSTM, which model the slots jointly. Our results show that they are able to alleviate the confusion mentioned above and push the state of the art on the dataset from 58.7 to 61.3. Our implementation is available at https://github.com/ctinray/trippy-joint.
When communicating with users, a task-oriented dialogue system has to track the user's needs at each turn according to the conversation history. This process, called dialogue state tracking (DST), is crucial because it directly informs the downstream dialogue policy. DST has received significant interest in recent years, with the text-to-text paradigm emerging as a popular approach. In this review paper, we first present the task and its associated datasets. Then, considering the large number of recent publications, we identify the highlights and advances of research in 2021-2022. Although neural approaches have enabled significant progress, we argue that some key aspects of dialogue systems, such as generalizability, are still underexplored. To motivate future studies, we propose several research avenues.
Dialogue state tracking (DST) aims to convert the dialogue history into dialogue states which consist of slot-value pairs. As condensed structural information memorizing all history information, the dialogue state in the last turn is typically adopted as the input for predicting the current state by DST models. However, these models tend to keep the predicted slot values unchanged, which is defined as state momentum in this paper. Specifically, the models struggle to update slot values that need to be changed and correct wrongly predicted slot values in the last turn. To this end, we propose MoNET to tackle state momentum via noise-enhanced training. First, the previous state of each turn in the training data is noised via replacing some of its slot values. Then, the noised previous state is used as the input to learn to predict the current state, improving the model's ability to update and correct slot values. Furthermore, a contrastive context matching framework is designed to narrow the representation distance between a state and its corresponding noised variant, which reduces the impact of noised state and makes the model better understand the dialogue history. Experimental results on MultiWOZ datasets show that MoNET outperforms previous DST methods. Ablations and analysis verify the effectiveness of MoNET in alleviating state momentum and improving anti-noise ability.
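The state-noising step described above can be sketched as follows. This is a simplified illustration of the idea only, not MoNET's implementation: the function name, slot names, and candidate pools are invented for the example.

```python
import random

def noise_state(prev_state, value_pool, p=0.3, seed=None):
    """Replace a fraction of slot values in the previous dialogue state
    with alternative values drawn from a per-slot candidate pool."""
    rng = random.Random(seed)
    noised = dict(prev_state)
    for slot, value in prev_state.items():
        candidates = [v for v in value_pool.get(slot, []) if v != value]
        if candidates and rng.random() < p:
            noised[slot] = rng.choice(candidates)
    return noised

prev = {"hotel-area": "north", "hotel-price": "cheap"}
pool = {"hotel-area": ["north", "south", "east"],
        "hotel-price": ["cheap", "expensive"]}
noised = noise_state(prev, pool, p=1.0, seed=0)  # every slot gets noised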
Task-oriented dialogue systems typically employ a dialogue state tracker (DST) to successfully complete conversations. Recent state-of-the-art DST implementations rely on schemata of diverse services to improve model robustness and handle zero-shot generalization to new domains [1]; however, such approaches [2,3] typically require multiple large Transformer models and long input sequences to perform well. We propose a single multi-task BERT-based model that jointly solves the three DST tasks of intent prediction, requested slot prediction, and slot filling. Moreover, we propose an efficient and parsimonious encoding of the dialogue history and service schemata that is shown to further improve performance. Evaluation on the SGD dataset shows that our approach outperforms the baseline SGP-DST and performs well compared to the state of the art, while being significantly more computationally efficient. Extensive ablation studies are performed to examine the contributing factors to the success of our model.
Virtual assistants such as Google Assistant, Alexa and Siri provide a conversational interface to a large number of services and APIs spanning multiple domains. Such systems need to support an ever-increasing number of services with possibly overlapping functionality. Furthermore, some of these services have little to no training data available. Existing public datasets for task-oriented dialogue do not sufficiently capture these challenges since they cover few domains and assume a single static ontology per domain. In this work, we introduce the Schema-Guided Dialogue (SGD) dataset, containing over 16k multi-domain conversations spanning 16 domains. Our dataset exceeds the existing task-oriented dialogue corpora in scale, while also highlighting the challenges associated with building large-scale virtual assistants. It provides a challenging testbed for a number of tasks including language understanding, slot filling, dialogue state tracking and response generation. Along the same lines, we present a schema-guided paradigm for task-oriented dialogue, in which predictions are made over a dynamic set of intents and slots, provided as input, using their natural language descriptions. This allows a single dialogue system to easily support a large number of services and facilitates simple integration of new services without requiring additional training data. Building upon the proposed paradigm, we release a model for dialogue state tracking capable of zero-shot generalization to new APIs, while remaining competitive in the regular setting.
Dialogue State Tracking (DST), a key component of task-oriented conversation systems, represents user intentions by determining the values of pre-defined slots in an ongoing dialogue. Existing approaches use hand-crafted templates and additional slot information to fine-tune and prompt large pre-trained language models and elicit slot values from the dialogue context. Significant manual effort and domain knowledge is required to design effective prompts, limiting the generalizability of these approaches to new domains and tasks. In this work, we propose DiSTRICT, a generalizable in-context tuning approach for DST that retrieves highly relevant training examples for a given dialogue to fine-tune the model without any hand-crafted templates. Experiments with the MultiWOZ benchmark datasets show that DiSTRICT outperforms existing approaches in various zero-shot and few-shot settings using a much smaller model, thereby providing an important advantage for real-world deployments that often have limited resource availability.
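The retrieve-then-tune idea can be illustrated with a toy retriever. DiSTRICT's actual retriever is learned; the bag-of-words cosine similarity below is merely a stand-in to show the mechanism, and all names in the snippet are assumptions.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(cnt * b[tok] for tok, cnt in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Return the k training utterances most similar to the query."""
    q = Counter(query.lower().split())
    return sorted(corpus, key=lambda d: -cosine(q, Counter(d.lower().split())))[:k]

corpus = ["i want a cheap hotel in the north", "play some jazz music"]
top = retrieve("book a cheap hotel", corpus, k=1)
```

The retrieved examples would then be prepended to the dialogue context during fine-tuning, replacing any hand-crafted template.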
With the development of pre-trained language models, remarkable success has been witnessed in dialogue understanding (DU). However, current DU approaches usually employ independent models for each distinct DU task without considering shared knowledge across different tasks. In this paper, we propose a unified generative dialogue understanding framework, named UniDU, to achieve effective information exchange across diverse DU tasks. Here, we reformulate all DU tasks into a unified prompt-based generative model paradigm. More importantly, a novel model-agnostic multi-task training strategy (MATS) is introduced to dynamically adapt the weights of diverse tasks for best knowledge sharing during training, according to the nature and amount of available data of each task. Experiments on ten DU datasets covering five fundamental DU tasks show that the proposed UniDU framework largely outperforms task-specific well-designed methods on all tasks. MATS also reveals the knowledge-sharing structure of these tasks. Finally, UniDU obtains promising performance on unseen dialogue domains, showing great potential for generalization.
Robust state tracking for task-oriented dialogue systems currently remains restricted to a few popular languages. This paper shows that given a large-scale dialogue dataset in one language, we can automatically produce an effective semantic parser for other languages using machine translation. We propose automatic translation of dialogue datasets with alignment to ensure faithful translation of slot values, eliminating the costly human supervision used in previous benchmarks. We also propose a new contextual semantic parsing model, which encodes the formal slots and values, and only the last agent and user utterances. We show that this succinct representation reduces the compounding effect of translation errors without harming accuracy in practice. We evaluate our approach on several dialogue state tracking benchmarks. On the RiSAWOZ, CrossWOZ, CrossWOZ-ZH, and MultiWOZ-ZH datasets we improve the state of the art by 11%, 17%, 20%, and 0.3% in joint goal accuracy. We present a comprehensive error analysis for all three datasets, showing that erroneous annotations can obscure judgments of model quality. Finally, we created RiSAWOZ English and German datasets using the proposed methodology. On these datasets, accuracy is within 11% of the original, indicating that high-accuracy multilingual dialogue datasets are possible without relying on expensive human annotation.
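The alignment requirement above can be made concrete with a small consistency check. The paper's alignment operates during machine translation itself; the sketch below (function and variable names are assumptions) only verifies post hoc that translated slot values still occur verbatim in the translated utterance, which span-based annotation depends on:

```python
def misaligned_values(translated_utterance, translated_values):
    """Return slot values that no longer appear verbatim in the translated
    utterance; a non-empty result signals an unfaithful translation."""
    u = translated_utterance.lower()
    return [v for v in translated_values if v.lower() not in u]

missing = misaligned_values("ich suche ein günstiges hotel im norden",
                            ["günstiges", "norden"])
```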
With data-centric decision making becoming the norm, seamless access to databases is critical, and there is extensive research on creating effective Text-to-SQL (Text2SQL) models for database access. Natural language is one of the best interfaces for bridging the gap between data and results, enabling effective database access especially for non-technical users, and it can open doors for users regardless of their proficiency with query languages. Even though many deep-learning-based algorithms have been proposed and studied, solving the data-querying problem with natural language in real-world scenarios remains very challenging. One reason is the use of different datasets across studies, each with its own limitations and assumptions. At the same time, we lack a thorough understanding of these proposed models and their limitations with respect to the specific datasets they are trained on. In this paper, we present a holistic overview of 24 neural network models studied in recent years, including their architectures involving convolutional neural networks, recurrent neural networks, pointer networks, reinforcement learning, generative models, and more. We also give an overview of the 11 datasets that are widely used to train models for Text2SQL technologies, and discuss future application possibilities of Text2SQL for seamless data querying.
Training dialogue systems often entails dealing with noisy training examples and unexpected user inputs. Despite their prevalence, an accurate survey of dialogue noise is currently lacking, as is a clear sense of the impact of each noise type on task performance. This paper addresses this gap by first constructing a taxonomy of noise encountered by dialogue systems. In addition, we run a series of experiments to show how different models behave when subjected to varying levels and types of noise. Our results reveal that models are quite robust to label errors commonly tackled by existing denoising algorithms, but that performance suffers from dialogue-specific noise. Driven by these observations, we design a data cleaning algorithm specialized for conversational settings and apply it as a proof-of-concept for targeted dialogue denoising.
The goal of building dialogue agents that can converse with humans naturally has been a long-standing dream of researchers since the early days of artificial intelligence. The well-known Turing Test proposed to judge the ultimate validity of an artificial intelligence agent by the indistinguishability of its dialogues from humans'. It should come as no surprise that human-level dialogue systems are very challenging to build. But, while early efforts on rule-based systems found limited success, the emergence of deep learning enabled great advances on this topic. In this thesis, we focus on methods that address the numerous issues that have been imposing the gap between artificial conversational agents and human-level interlocutors. These methods were proposed and experimented with in ways that were inspired by general state-of-the-art AI methodologies, but they also targeted the characteristics that dialogue systems possess.
Dialogue state tracking (DST) is a core sub-module of a dialogue system, which aims to extract the appropriate belief state (domain-slot-value) from system and user utterances. Most previous studies have attempted to improve performance by increasing the size of the pre-trained model or using additional features such as graph relations. In this study, we propose dialogue state tracking with entity adaptive pre-training (DSTEA), a system in which key entities in a sentence are trained more intensively by the encoder of the DST model. DSTEA extracts important entities from input dialogues in four ways and then applies selective knowledge masking to train the model effectively. Although DSTEA conducts only pre-training without directly injecting additional knowledge into the DST model, it achieves better performance than the best-known benchmark models on MultiWOZ 2.0, 2.1, and 2.2. The effectiveness of DSTEA is verified through various comparative experiments on entity types and different adaptive settings.
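Selective knowledge masking can be illustrated with a small sketch. This is a generic stand-in rather than DSTEA's procedure (the function name, the `[MASK]` marker, and the entity list are assumptions): only tokens belonging to key entities are masked, so a masked-language-model objective concentrates on slot-value-like spans.

```python
def entity_mask(tokens, entities, mask="[MASK]"):
    """Mask only the tokens that belong to key entities, focusing
    pre-training on entity spans instead of uniformly random tokens."""
    entity_tokens = set()
    for entity in entities:
        entity_tokens.update(entity.lower().split())
    return [mask if t.lower() in entity_tokens else t for t in tokens]

tokens = "book the golden palace hotel for two nights".split()
masked = entity_mask(tokens, ["golden palace"])
```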
The MultiWOZ 2.0 dataset has greatly stimulated research in task-oriented dialogue systems. However, its state annotations contain substantial noise, which hinders a proper evaluation of model performance. To address this issue, massive efforts were devoted to correcting the annotations, and three improved versions (i.e., MultiWOZ 2.1-2.3) have been released. Nonetheless, there are still plenty of incorrect and inconsistent annotations. This work introduces MultiWOZ 2.4, which refines the annotations in the validation and test sets of MultiWOZ 2.1. The annotations in the training set remain unchanged (identical to MultiWOZ 2.1) to elicit robust, noise-resilient model training. We benchmark eight state-of-the-art dialogue state tracking models on MultiWOZ 2.4. All of them demonstrate much higher performance than on MultiWOZ 2.1.
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
Text-to-SQL parsing is an essential and challenging task. Its goal is to convert a natural language (NL) question to its corresponding structured query language (SQL) query based on the evidence provided by relational databases. Early text-to-SQL parsing systems from the database community achieved noticeable progress at the cost of heavy human engineering and user interaction with the system. In recent years, deep neural networks have significantly advanced this task via neural generation models, which automatically learn a mapping function from the input NL question to the output SQL query. Subsequently, large pre-trained language models have taken the state of the art in text-to-SQL parsing to a new level. In this survey, we present a comprehensive review of deep learning approaches for text-to-SQL parsing. First, we introduce the text-to-SQL parsing corpora, which can be categorized as single-turn and multi-turn. Second, we provide a systematic overview of pre-trained language models and existing text-to-SQL parsing methods. Third, we present the challenges faced by text-to-SQL parsing and explore some potential future directions in this field.
Spoken language understanding (SLU), a core component of task-oriented dialogue systems, is expected to infer with short latency when facing impatient human users. Existing work increases inference speed by designing non-autoregressive models for single-turn SLU tasks, but fails to apply to multi-turn SLU where dialogue history is involved. An intuitive idea is to concatenate all historical utterances and utilize a non-autoregressive model directly. However, this approach seriously misses salient historical information and suffers from the uncoordinated-slot problem. To overcome these shortcomings, we propose a novel model for multi-turn SLU named Salient History Attention with Layer-Refined Transformer (SHA-LRT), which consists of an SHA module, a Layer-Refined Mechanism (LRM), and a Slot Label Generation (SLG) task. SHA captures salient historical information of the current dialogue from both historical utterances and results via a well-designed history-attention mechanism. LRM predicts preliminary SLU results from the Transformer's intermediate states and utilizes them to guide the final prediction, while SLG supplies sequential dependency information to the non-autoregressive encoder. Experiments on public datasets show that our model significantly improves multi-turn SLU performance (by 17.5% overall) while accelerating the inference process (nearly 15x) over state-of-the-art baselines, and is also effective on single-turn SLU tasks.
As an important component of task-oriented dialogue systems, dialogue state tracking (DST) aims to track human-machine interactions and generate state representations for managing the dialogue. Representations of dialogue states depend on the domain ontology and the user's goals. In several task-oriented dialogues with a limited scope of objectives, dialogue states can be represented as a set of slot-value pairs. As the capabilities of dialogue systems expand to support increasing naturalness in communication, incorporating dialogue act processing into dialogue model design becomes essential. The lack of such consideration limits the scalability of dialogue state tracking models to dialogues with specific objectives and ontologies. To address this issue, we formulate and incorporate dialogue acts, and leverage recent advances in machine reading comprehension to predict both categorical and non-categorical types of slots for multi-domain dialogue state tracking. Experimental results show that our models can improve the overall accuracy of dialogue state tracking on the MultiWOZ 2.1 dataset, and demonstrate that incorporating dialogue acts can guide dialogue state design for future task-oriented dialogue systems.
Prior work has demonstrated that data augmentation is useful for improving dialogue state tracking. However, there are many types of user utterances, while prior methods considered only the simplest one for augmentation, raising concerns about poor generalization capability. In order to better cover diverse dialogue acts and control the generation quality, this paper proposes controllable user dialogue act augmentation (CUDA-DST) to augment user utterances with diverse behaviors. With the augmented data, different state trackers gain improvement and show better robustness, achieving state-of-the-art performance on MultiWOZ 2.1.
Designed to track user goals in dialogues, a dialogue state tracker is an essential component of a dialogue system. However, research on dialogue state tracking has largely been limited to unimodality, in which slots and slot values are constrained by knowledge domains (e.g., a restaurant domain with slots for restaurant name and price range) and are defined by specific database schemata. In this paper, we propose to extend the definition of dialogue state tracking to multimodality. Specifically, we introduce a novel dialogue state tracking task to track the information of visual objects that are mentioned in video-grounded dialogues. Each new dialogue utterance may introduce a new video segment, new visual objects, or new object attributes, and a state tracker is required to update these information slots accordingly. We created a new synthetic benchmark and designed a novel baseline, the Video-Dialogue Transformer Network (VDTN), for this task. VDTN combines both object-level and segment-level features and learns contextual dependencies between videos and dialogues to generate multimodal dialogue states. We optimized VDTN for a state generation task as well as a self-supervised video understanding task that recovers video segment or object representations. Finally, we trained VDTN to use the decoded states in a response prediction task. Together with comprehensive ablation and qualitative analysis, we discovered interesting insights towards building more capable multimodal dialogue systems.