This paper summarizes our submission to Task 2 of the second track of the Tenth Dialog System Technology Challenge (DSTC10), "Knowledge-grounded Task-oriented Dialogue Modeling on Spoken Conversations". Similar to the previous year's iteration, the task consists of three subtasks: detecting whether a turn is knowledge seeking, selecting the relevant knowledge document, and finally generating a grounded response. This year, the focus lies on adapting the system to noisy ASR transcripts. We explore different approaches to make the models more robust to this type of input and to adapt the generated responses to the style of spoken conversations. For the latter, we obtain the best results with a noisy channel model, which additionally reduces the number of short and generic responses. Our best system achieved the first and the third rank in the human evaluation of the challenge.
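The abstract does not spell out how the noisy channel model is applied; one common realization is to rerank candidate responses with a reverse (channel) model p(context | response) and a language-model prior, which disfavors short, generic outputs. The sketch below is a minimal illustration under that assumption; the scoring callables, weights, and length normalization are placeholders rather than the authors' setup.

```python
from typing import Callable, List

def noisy_channel_rerank(
    context: str,
    candidates: List[str],
    log_p_direct: Callable[[str, str], float],   # log p(response | context)
    log_p_channel: Callable[[str, str], float],  # log p(context | response)
    log_p_prior: Callable[[str], float],         # log p(response) from a language model
    lam: float = 1.0,
    mu: float = 0.3,
) -> List[str]:
    """Rerank candidates with a noisy-channel-style score:
    log p(y|x) + lam * log p(x|y) + mu * log p(y), length-normalized so that
    short, generic candidates are not automatically favored."""
    def score(y: str) -> float:
        n_tokens = max(len(y.split()), 1)
        s = log_p_direct(y, context) + lam * log_p_channel(context, y) + mu * log_p_prior(y)
        return s / n_tokens
    return sorted(candidates, key=score, reverse=True)

# Toy usage with dummy scorers; a real system would plug in trained seq2seq / LM scorers.
dummy = lambda text: -1.0 * len(text.split())
ranked = noisy_channel_rerank(
    "is there free wifi at the hotel?",
    ["Yes.", "Yes, the hotel offers free wifi in all rooms."],
    log_p_direct=lambda y, x: dummy(y),
    log_p_channel=lambda x, y: dummy(x),
    log_p_prior=lambda y: dummy(y),
)
print(ranked)
```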
Building robust dialogue systems for spoken conversations is more challenging than for written ones. In this regard, DSTC10-Track2-Task2 was proposed, which aims to build a task-oriented dialogue (TOD) system that incorporates unstructured external knowledge in spoken conversations, extending DSTC9-Track1. This paper introduces our system, which contains four advanced methods: data construction, negative sampling, post-training, and style transfer. We first automatically construct a large training set, since DSTC10-Track2 does not release an official training set. For the knowledge selection task, we propose weighted negative sampling to train the model in a more fine-grained manner. We also employ post-training and style transfer for the response generation task, so that the model generates appropriate responses in a style similar to that of the target responses. In experiments, we investigate the effects of weighted negative sampling, post-training, and style transfer. Our model ranked 7th out of 16 teams in the objective evaluation and 6th in the human evaluation.
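The abstract does not say how the negatives are weighted; a plausible reading is that harder negatives (e.g., knowledge snippets about the same entity or domain as the gold one) receive larger weights in the selection loss than randomly sampled ones. A minimal sketch under that assumption, with illustrative cross-encoder scores and weight values:

```python
import torch
import torch.nn.functional as F

def weighted_selection_loss(scores: torch.Tensor,
                            labels: torch.Tensor,
                            weights: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy over knowledge candidates where harder negatives
    (e.g., snippets about the same entity as the gold one) get larger weights.

    scores:  (num_candidates,) raw logits from a cross-encoder
    labels:  (num_candidates,) 1.0 for the gold snippet, 0.0 otherwise
    weights: (num_candidates,) per-candidate weights; e.g. 1.0 for the positive
             and random negatives, >1.0 for same-entity negatives
    """
    per_example = F.binary_cross_entropy_with_logits(scores, labels, reduction="none")
    return (weights * per_example).mean()

# Toy example: candidate 0 is gold, candidate 1 is a same-entity (hard) negative,
# candidate 2 is a random negative.
scores = torch.tensor([2.1, 1.7, -0.5])
labels = torch.tensor([1.0, 0.0, 0.0])
weights = torch.tensor([1.0, 2.0, 1.0])
print(weighted_selection_loss(scores, labels, weights))
```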
Task-oriented dialogue systems have been plagued by the difficulty of obtaining large-scale, high-quality annotated conversations. Moreover, most publicly available datasets only include written conversations, which are insufficient to reflect actual human behavior in practical spoken dialogue systems. In this paper, we propose Task-oriented Dialogue Data Augmentation (TOD-DA), a novel model-agnostic data augmentation paradigm to improve the robustness of task-oriented dialogue modeling. TOD-DA consists of two modules: 1) Dialogue Enrichment, which expands the training data of task-oriented conversations to alleviate data sparsity, and 2) a Spoken Conversation Simulator, which imitates spoken-style expressions and speech recognition errors at various granularities to bridge the gap between written and spoken conversations. With this design, our approach ranked first in both tasks of DSTC10 Track 2, a benchmark for task-oriented dialogue modeling on spoken conversations, demonstrating the superiority and effectiveness of the proposed TOD-DA.
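The Spoken Conversation Simulator is described only at a high level; the sketch below illustrates the general idea of injecting word-level ASR-like noise (deletions, homophone-style substitutions, disfluent repetitions) into written utterances. The confusion table and noise rates are made-up placeholders, not the TOD-DA recipe.

```python
import random

# Hypothetical confusion pairs standing in for acoustically similar words;
# a real simulator would derive these from ASR confusions or phonetic similarity.
CONFUSIONS = {
    "there": ["their", "they're"],
    "four": ["for", "fore"],
    "to": ["two", "too"],
}

def simulate_asr_noise(utterance: str, sub_p=0.10, del_p=0.05, rep_p=0.05, seed=None) -> str:
    """Corrupt a written utterance with word-level substitutions, deletions,
    and repetitions to mimic spoken-style ASR transcripts."""
    rng = random.Random(seed)
    noisy = []
    for word in utterance.lower().split():
        r = rng.random()
        if r < del_p:
            continue                                        # drop the word
        if r < del_p + rep_p:
            noisy.extend([word, word])                      # disfluency-like repetition
            continue
        if r < del_p + rep_p + sub_p and word in CONFUSIONS:
            noisy.append(rng.choice(CONFUSIONS[word]))      # homophone substitution
            continue
        noisy.append(word)
    return " ".join(noisy)

print(simulate_asr_noise("i need a taxi to the hotel for four people", seed=0))
```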
In previous work, knowledge selection has mainly relied on language-model-based approaches or knowledge ranking. However, approaches that simply rely on a language model take all knowledge snippets as sequential input, even though in most cases the knowledge carries no sequential information. Knowledge-ranking methods, on the other hand, exploit the dialogue history and each given knowledge snippet, but not the relations among the knowledge snippets themselves. In the Tenth Dialog System Technology Challenge (DSTC10), we participated in the second track, knowledge-grounded task-oriented dialogue modeling on spoken conversations. To handle the problems above, we modified the training methods of the SOTA models for the first and third subtasks, and for the second subtask we propose a Graph-Knowledge Selector (GKS), which combines a graph-attention-based model with the language model used for knowledge selection. GKS makes knowledge selection decisions in a dialogue by simultaneously considering each knowledge embedding generated by the language model, without sequential features. GKS also exploits a considerable amount of knowledge in decision making, bringing the relations among knowledge snippets into the selection process. GKS outperforms several SOTA models proposed on the knowledge selection dataset from the Ninth Dialog System Technology Challenge (DSTC9).
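The abstract gives no architectural details; the sketch below only illustrates the general pattern it describes, scoring knowledge snippets jointly through an attention layer over their order-free embeddings instead of concatenating them into one sequence. The dimensions, pooling, and single attention layer are assumptions for illustration, not the GKS architecture.

```python
import torch
import torch.nn as nn

class GraphAttentionSelector(nn.Module):
    """Scores each knowledge snippet while attending over all other snippets,
    so the selection decision can use relations between snippets instead of
    treating them as one long sequence."""
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.scorer = nn.Linear(dim, 1)

    def forward(self, snippet_emb: torch.Tensor, dialog_emb: torch.Tensor) -> torch.Tensor:
        # snippet_emb: (num_snippets, dim) pooled LM embeddings of the candidates
        # dialog_emb:  (dim,) pooled LM embedding of the dialogue context
        x = (snippet_emb + dialog_emb).unsqueeze(0)          # (1, num_snippets, dim)
        fused, _ = self.attn(x, x, x)                        # fully connected attention over snippets
        return self.scorer(fused).squeeze(-1).squeeze(0)     # (num_snippets,) selection logits

model = GraphAttentionSelector()
logits = model(torch.randn(5, 768), torch.randn(768))
print(logits.softmax(-1))
```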
Performance of spoken language understanding (SLU) can be degraded with automatic speech recognition (ASR) errors. We propose a novel approach to improve SLU robustness by randomly corrupting clean training text with an ASR error simulator, followed by self-correcting the errors and minimizing the target classification loss in a joint manner. In the proposed error simulator, we leverage confusion networks generated from an ASR decoder without human transcriptions to generate a variety of error patterns for model training. We evaluate our approach on the DSTC10 challenge targeted for knowledge-grounded task-oriented conversational dialogues with ASR errors. Experimental results show the effectiveness of our proposed approach, boosting the knowledge-seeking turn detection (KTD) F1 significantly from 0.9433 to 0.9904. Knowledge cluster classification is boosted from 0.7924 to 0.9333 in Recall@1. After knowledge document re-ranking, our approach shows significant improvement in all knowledge selection metrics, from 0.7358 to 0.7806 in Recall@1, from 0.8301 to 0.9333 in Recall@5, and from 0.7798 to 0.8460 in MRR@5 on the test set. In the recent DSTC10 evaluation, our approach demonstrates significant improvement in knowledge selection, boosting Recall@1 from 0.495 to 0.7144 compared to the official baseline. Our source code is released on GitHub at https://github.com/yctam/dstc10_track2_task2.git.
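A minimal sketch of the kind of joint objective described above, combining a self-correction (reconstruction) loss on the corrupted input with the target classification loss; the two-head model, GRU encoder, and loss weighting are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class JointCorrectionClassifier(nn.Module):
    """Toy two-head model: one head reconstructs the clean tokens from corrupted
    input (self-correction), the other predicts the target label; both losses
    are minimized jointly."""
    def __init__(self, vocab_size=1000, dim=256, num_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.correction_head = nn.Linear(dim, vocab_size)
        self.cls_head = nn.Linear(dim, num_labels)

    def forward(self, noisy_ids, clean_ids, label, alpha=1.0):
        hidden, _ = self.encoder(self.embed(noisy_ids))           # (B, T, dim)
        corr_logits = self.correction_head(hidden)                # (B, T, vocab)
        cls_logits = self.cls_head(hidden.mean(dim=1))            # (B, num_labels)
        corr_loss = nn.functional.cross_entropy(
            corr_logits.flatten(0, 1), clean_ids.flatten())       # self-correction loss
        cls_loss = nn.functional.cross_entropy(cls_logits, label) # target classification loss
        return cls_loss + alpha * corr_loss                       # joint objective

model = JointCorrectionClassifier()
noisy = torch.randint(0, 1000, (2, 8))
clean = torch.randint(0, 1000, (2, 8))
label = torch.tensor([1, 0])
print(model(noisy, clean, label))
```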
Spoken language understanding (SLU) tasks have been studied for many decades in the speech research community, but have not received as much attention as lower-level tasks like speech and speaker recognition. In particular, there are not nearly as many SLU task benchmarks, and many of the existing ones use data that is not freely available to all researchers. Recent work has begun to introduce such benchmark datasets for several tasks. In this work, we introduce several new annotated SLU benchmark tasks based on freely available speech data, which complement existing benchmarks and address gaps in the SLU evaluation landscape. We contribute four tasks: question answering and summarization involve inference over longer speech sequences; named entity localization addresses the speech-specific task of locating the targeted content in the signal; dialog act classification identifies the function of a given speech utterance. We follow the blueprint of the Spoken Language Understanding Evaluation (SLUE) benchmark suite. In order to facilitate the development of SLU models that leverage the success of pre-trained speech representations, we will be publishing for each task (i) annotations for a relatively small fine-tuning set, (ii) annotated development and test sets, and (iii) baseline models for easy reproducibility and comparisons. In this work, we present the details of data collection and annotation and the performance of the baseline models. We also perform sensitivity analysis of pipeline models' performance (speech recognizer + text model) to the speech recognition accuracy, using more than 20 state-of-the-art speech recognition models.
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
Knowledge-grounded dialogue systems powered by large language models often generate responses that, while fluent, are not attributable to a relevant source of information. Progress towards models that do not exhibit this issue requires evaluation metrics that can quantify its prevalence. To this end, we introduce the Benchmark for Evaluation of Grounded INteraction (BEGIN), comprised of 12k dialogue turns generated by neural dialogue systems trained on three knowledge-grounded dialogue corpora. We collect human annotations assessing the extent to which the models' responses can be attributed to the given background information. We then use BEGIN to analyze eight evaluation metrics. We find that these metrics rely on spurious correlations, do not reliably distinguish attributable abstractive responses from unattributable ones, and perform substantially worse when the knowledge source is longer. Our findings underscore the need for more sophisticated and robust evaluation metrics for knowledge-grounded dialogue. We make BEGIN publicly available at https://github.com/google/BEGIN-dataset.
Providing conversation models with background knowledge has been shown to make open-domain dialogues more informative and engaging. Existing models treat knowledge selection as a sentence ranking or classification problem in which each sentence is handled individually, ignoring the internal semantic connections between sentences in the background document. In this work, we propose to automatically convert background knowledge documents into document semantic graphs and then perform knowledge selection over these graphs. Our document semantic graphs preserve sentence-level information through sentence nodes and provide concept connections between sentences. We jointly apply multi-task learning for sentence-level and concept-level knowledge selection and show that it improves sentence-level selection. Our experiments show that our semantic-graph-based knowledge selection improves over the sentence selection baseline on both the knowledge selection task and the end-to-end response generation task on HollE, and improves generalization to unseen topics in WoW.
Recent approaches to knowledge-grounded dialogue generate responses by incorporating information from an external text document. These methods do not require the exact document to be known during training and rely on a retrieval system to fetch relevant documents from a large index. The document used to generate a response is modeled as a latent variable whose prior probability needs to be estimated. Models such as RAG marginalize the document probabilities over the documents retrieved from the index to define a log-likelihood loss function that is optimized end-to-end. In this paper, we develop a variational approach to this technique in which we instead maximize the evidence lower bound (ELBO). Using a collection of three publicly available open-domain dialogue datasets, we demonstrate how the posterior distribution, which has information from the ground-truth response, allows for a better approximation of the objective function during training. To overcome the challenges associated with sampling over a large knowledge collection, we develop an efficient approach to approximate the ELBO. To the best of our knowledge, we are the first to apply variational training to open-domain unsupervised knowledge-grounded dialogue systems.
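The abstract invokes the ELBO without writing it out; for a dialogue context x, response y, and latent knowledge document z, the bound being described has the standard form (the notation is a reconstruction based on the description above, not taken from the paper):

```latex
\log p(y \mid x) \;\ge\;
\mathbb{E}_{z \sim q(z \mid x, y)}\big[\log p(y \mid x, z)\big]
\;-\; \mathrm{KL}\big(q(z \mid x, y)\,\|\,p(z \mid x)\big)
```

Here q(z | x, y) is the posterior over documents that also conditions on the ground-truth response, and p(z | x) is the retrieval prior; conditioning the posterior on y is what the abstract credits with the better approximation of the objective during training.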
We present a novel problem within end-to-end learning of task-oriented dialogues (TOD), in which the dialogue system mimics a troubleshooting agent that helps a user by diagnosing their problem (e.g., a car that does not start). Such dialogues are grounded in domain-specific flowcharts, which the agent is supposed to follow during the conversation. Our task exposes novel technical challenges for neural TOD, such as grounding utterances to the flowchart without explicit annotation, referring to additional manual pages when the user asks clarification questions, and following unseen flowcharts at test time. We release a dataset (FloDial) consisting of 2,738 dialogues grounded in 12 different troubleshooting flowcharts. We also design a neural model, FloNet, which uses a retrieval-augmented generation architecture to train the dialogue agent. Our experiments find that FloNet can perform zero-shot transfer to unseen flowcharts, setting a strong baseline for future research.
This work presents a new dialogue dataset, CookDial, that facilitates research on task-oriented dialogue systems with an understanding of task knowledge. The corpus contains 260 human-to-human task-oriented dialogues in which an agent, given a recipe document, guides the user to cook a dish. Dialogues in CookDial exhibit two unique features: (i) procedural alignment between the dialogue flow and the supporting document; and (ii) complex agent decision-making that involves segmenting long sentences, paraphrasing hard instructions, and resolving coreference in the dialogue context. In addition, we identify three challenging (sub)tasks for the envisioned task-oriented dialogue system: (1) user question understanding, (2) agent action frame prediction, and (3) agent response generation. For each of these tasks we develop a neural baseline model, which we evaluate on the CookDial dataset. We publicly release the CookDial dataset, comprising rich annotations of both the dialogues and the recipe documents, to stimulate further research on domain-specific document-grounded dialogue systems.
While rich open-domain textual data are generally available and may include interesting phenomena (humor, sarcasm, empathy, etc.), most of them are designed for language processing tasks and are usually in a non-conversational format. In this work, we take a step towards automatically generating conversational data using Generative Conversational Networks, aiming to benefit from the breadth of available language and knowledge data and to train open-domain social conversational agents. We evaluate our approach on conversations with and without knowledge on the Topical Chat dataset, using automatic metrics and human evaluators. Our results show that for conversations without knowledge grounding, GCN can generalize from the seed data and produce novel conversations that are less relevant but more engaging, and that for knowledge-grounded conversations it can produce conversations that are more knowledge-focused, fluent, and engaging. Specifically, we show that for open-domain conversations using 10% of the seed data, our approach performs close to the baseline that uses 100% of the data, while for knowledge-grounded conversations it achieves the same using only 1% of the data, in terms of human ratings of engagingness, fluency, and relevance.
Most research on task oriented dialog modeling is based on written text input. However, users interact with practical dialog systems often using speech as input. Typically, systems convert speech into text using an Automatic Speech Recognition (ASR) system, introducing errors. Furthermore, these systems do not address the differences in written and spoken language. The research on this topic is stymied by the lack of a public corpus. Motivated by these considerations, our goal in hosting the speech-aware dialog state tracking challenge was to create a public corpus or task which can be used to investigate the performance gap between the written and spoken forms of input, develop models that could alleviate this gap, and establish whether Text-to-Speech-based (TTS) systems are a reasonable surrogate for the more labor-intensive human data collection. We created three spoken versions of the popular written-domain MultiWOZ task -- (a) TTS-Verbatim: written user inputs were converted into speech waveforms using a TTS system, (b) Human-Verbatim: humans spoke the user inputs verbatim, and (c) Human-paraphrased: humans paraphrased the user inputs. Additionally, we provided different forms of ASR output to encourage wider participation from teams that may not have access to state-of-the-art ASR systems. These included ASR transcripts, word time stamps, and latent representations of the audio (audio encoder outputs). In this paper, we describe the corpus, report results from participating teams, provide preliminary analyses of their results, and summarize the current state-of-the-art in this domain.
Recently, a category of task-oriented dialogue (TOD) datasets has been collected via Wizard-of-Oz simulation games. However, the Wizard-of-Oz data are in fact simulated and thus fundamentally different from real-life conversations, which are noisier and more casual. Recently, the SereTOD challenge was organized and released the MobileCS dataset, which consists of real-world dialogue transcripts between real users and customer-service staff from China Mobile. Based on the MobileCS dataset, the SereTOD challenge has two tasks, which not only evaluate the construction of the dialogue system itself but also examine information extraction from dialogue transcripts, which is crucial for building the knowledge base for TOD. This paper mainly presents a baseline study on these two tasks with the MobileCS dataset. We introduce how the two baselines are constructed, the problems encountered, and the results. We anticipate that the baselines can facilitate exciting future research on building human-robot dialogue systems for real-life tasks.
Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., I don't know) regardless of the input. We suggest that the traditional objective function, i.e., the likelihood of output (response) given input (message) is unsuited to response generation tasks. Instead we propose using Maximum Mutual Information (MMI) as the objective function in neural models. Experimental results demonstrate that the proposed MMI models produce more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets and in human evaluations. Example likelihood-ranked candidate lists for inputs such as "What are you doing?", "What is your name?", and "How old are you?" show generic responses like "I don't know." scoring higher than specific answers like "My name is Robert." or "Twenty-five."
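The MMI criterion the abstract refers to replaces the standard likelihood objective; in the forms usually given for it (reconstructed here, not quoted from the paper), the decoder selects

```latex
\hat{T} = \arg\max_{T}\;\big\{\log p(T \mid S) - \lambda \log p(T)\big\}
\qquad\text{or}\qquad
\hat{T} = \arg\max_{T}\;\big\{(1-\lambda)\log p(T \mid S) + \lambda \log p(S \mid T)\big\}
```

where S is the input message, T the response, and λ controls either the penalty on high-prior (generic) responses or the weight given to the reverse model.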
Collecting data for training dialogue systems can be extremely costly due to the involvement of human participants and the need for extensive annotation. In document-grounded dialogue systems in particular, human experts need to carefully read unstructured documents in order to answer users' questions. As a result, existing document-grounded dialogue datasets are relatively small and hamper the effective training of dialogue systems. In this paper, we propose an automatic data augmentation technique for document-grounded dialogue via generative dialogue models. The dialogue models, consisting of a user bot and an agent bot, can synthesize diverse dialogues given an input document, and these dialogues are then used to train a downstream model. When supplementing the original dataset, our method achieves significant improvements over traditional data augmentation methods. We also achieve strong performance in the low-resource setting.
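The abstract only names the user bot and agent bot; the loop below sketches the general pattern of synthesizing a document-grounded dialogue by alternating two generative models and adding the result to the training data. The callables and toy bots are placeholders, not the authors' models.

```python
from typing import Callable, List, Tuple

def synthesize_dialogue(document: str,
                        user_bot: Callable[[str, List[Tuple[str, str]]], str],
                        agent_bot: Callable[[str, List[Tuple[str, str]]], str],
                        num_turns: int = 4) -> List[Tuple[str, str]]:
    """Alternate a user bot and an agent bot, both conditioned on the same
    document and the dialogue so far, to produce a synthetic training dialogue."""
    history: List[Tuple[str, str]] = []
    for _ in range(num_turns):
        user_utt = user_bot(document, history)                       # user bot asks about the document
        agent_utt = agent_bot(document, history + [("user", user_utt)])
        history += [("user", user_utt), ("agent", agent_utt)]
    return history

# Toy bots standing in for fine-tuned generative models.
doc = "The museum is open 9am-5pm. Admission is free on Sundays."
user = lambda d, h: f"Question {len(h) // 2 + 1} about: {d.split('.')[0]}?"
agent = lambda d, h: f"According to the document: {d.split('.')[len(h) // 2 % 2]}."
print(synthesize_dialogue(doc, user, agent, num_turns=2))
```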
Incorporating external knowledge into the response generation process is essential to building more helpful and reliable dialog agents. However, collecting knowledge-grounded conversations is often costly, calling for a better pre-trained model for grounded dialog generation that generalizes well w.r.t. different types of knowledge. In this work, we propose KPT (Keyword-guided Pre-Training), a novel self-supervised pre-training method for grounded dialog generation without relying on extra knowledge annotation. Specifically, we use a pre-trained language model to extract the most uncertain tokens in the dialog as keywords. With these keywords, we construct two kinds of knowledge and pre-train a knowledge-grounded response generation model, aiming at handling two different scenarios: (1) the knowledge should be faithfully grounded; (2) it can be selectively used. For the former, the grounding knowledge consists of keywords extracted from the response. For the latter, the grounding knowledge is additionally augmented with keywords extracted from other utterances in the same dialog. Since the knowledge is extracted from the dialog itself, KPT can be easily performed on a large volume and variety of dialogue data. We considered three data sources (open-domain, task-oriented, conversational QA) with a total of 2.5M dialogues. We conduct extensive experiments on various few-shot knowledge-grounded generation tasks, including grounding on dialog acts, knowledge graphs, persona descriptions, and Wikipedia passages. Our comprehensive experiments and analyses demonstrate that KPT consistently outperforms state-of-the-art methods on these tasks with diverse grounding knowledge.
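The keyword-extraction step is the core of KPT as described above; the sketch below shows that step with a generic token-scoring callable standing in for the pre-trained language model. The scorer and the fixed top-k cutoff are assumptions for illustration, not the paper's configuration.

```python
from typing import Callable, List, Tuple

def extract_keywords(tokens: List[str],
                     token_log_prob: Callable[[List[str], int], float],
                     top_k: int = 3) -> List[str]:
    """Pick the tokens the language model assigns the lowest probability to
    (i.e., is most uncertain about); these serve as grounding keywords."""
    scored: List[Tuple[float, str]] = [
        (token_log_prob(tokens, i), tok) for i, tok in enumerate(tokens)
    ]
    scored.sort(key=lambda pair: pair[0])          # lowest log-prob = most uncertain
    return [tok for _, tok in scored[:top_k]]

# Toy scorer: pretend longer (rarer) words are less predictable for the LM.
toy_scorer = lambda toks, i: -float(len(toks[i]))
utterance = "the cheapest flight to reykjavik departs at noon".split()
print(extract_keywords(utterance, toy_scorer))     # ['reykjavik', 'cheapest', 'departs']
```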
Grammatical Error Correction (GEC) is the task of automatically detecting and correcting errors in text. The task not only includes the correction of grammatical errors, such as missing prepositions and mismatched subject-verb agreement, but also orthographic and semantic errors, such as misspellings and word choice errors respectively. The field has seen significant progress in the last decade, motivated in part by a series of five shared tasks, which drove the development of rule-based methods, statistical classifiers, statistical machine translation, and finally neural machine translation systems which represent the current dominant state of the art. In this survey paper, we condense the field into a single article and first outline some of the linguistic challenges of the task, introduce the most popular datasets that are available to researchers (for both English and other languages), and summarise the various methods and techniques that have been developed with a particular focus on artificial error generation. We next describe the many different approaches to evaluation as well as concerns surrounding metric reliability, especially in relation to subjective human judgements, before concluding with an overview of recent progress and suggestions for future work and remaining challenges. We hope that this survey will serve as a comprehensive resource for researchers who are new to the field or who want to be kept apprised of recent developments.
We introduce AARGH, a task-oriented dialogue system that combines retrieval and generative approaches in a single model, aiming to improve dialogue management and the lexical diversity of outputs. The model features a new response selection method based on an action-aware training objective and a simplified single-encoder retrieval architecture, which allows us to build an end-to-end retrieval-augmented generation model in which retrieval and generation share most of their parameters. On the MultiWOZ dataset, we show that our approach produces more diverse outputs while maintaining or improving state tracking and context-to-response generation performance compared to state-of-the-art baselines.