Knowledge-grounded dialogue systems are challenging to build due to the lack of training data and heterogeneous knowledge sources. Existing systems perform poorly on unseen topics because of the limited topics covered in the training data. In addition, heterogeneous knowledge sources make it difficult for systems to generalize to other tasks, because knowledge sources in different knowledge representations require different knowledge encoders. To address these challenges, we present PLUG, a language model that homogenizes different knowledge sources into a unified knowledge representation for knowledge-grounded dialogue generation tasks. PLUG is pre-trained on a dialogue generation task conditioned on a unified essential knowledge representation. It can generalize to different downstream knowledge-grounded dialogue generation tasks with only a few training examples. Empirical evaluation on two benchmarks shows that our model generalizes well across different knowledge-grounded tasks: it achieves performance comparable to state-of-the-art methods in the fully supervised setting and significantly outperforms other methods in zero-shot and few-shot settings.
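As a rough illustration of the knowledge homogenization described above, the sketch below flattens heterogeneous knowledge (KG triples, persona sentences, document snippets) into a single text string that is prepended to the dialogue context of a seq2seq model. The serialization format, separators, and the t5-small backbone are assumptions for illustration; the abstract does not specify PLUG's actual unified representation.

```python
# A minimal sketch (not PLUG itself) of linearizing heterogeneous knowledge
# into one text sequence before seq2seq dialogue generation. Formats,
# separators, and the backbone model are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM


def linearize_knowledge(triples=None, persona=None, passages=None):
    """Flatten KG triples, persona sentences, and document snippets into one string."""
    parts = []
    for head, rel, tail in (triples or []):
        parts.append(f"{head} {rel} {tail}")
    parts.extend(persona or [])
    parts.extend(passages or [])
    return " | ".join(parts)


tokenizer = AutoTokenizer.from_pretrained("t5-small")          # stand-in backbone
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

knowledge = linearize_knowledge(
    triples=[("Inception", "directed_by", "Christopher Nolan")],
    passages=["Inception is a 2010 science fiction film."],
)
history = "Who directed Inception?"
inputs = tokenizer(f"knowledge: {knowledge} dialogue: {history}", return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```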
Real human conversation data are complex, heterogeneous, and noisy, and building open-domain dialogue systems from such data remains a challenging task. In fact, such conversation data still contain a wealth of information and knowledge, yet they have not been fully explored. In this paper, we show that existing open-domain dialogue generation methods, which memorize context-response paired data with autoregressive or encoder-decoder language models, underutilize the training data. Different from current approaches that use external knowledge, we explore a retrieval-generation training framework that can leverage heterogeneous and noisy training data by treating them as "evidence". In particular, we use BERTScore for retrieval, which yields better quality of both the evidence and the generation. Experiments on publicly available datasets show that our method helps models generate better responses, even when such training data are usually regarded as low quality. This performance gain is comparable to, or even better than, that obtained by enlarging the training set. We also find that model performance is positively correlated with the relevance of the retrieved evidence. Moreover, our method performs well in zero-shot experiments, which indicates that it may be more robust to real-world data.
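The abstract states that BERTScore is used for retrieval. The sketch below is a minimal illustration under assumed inputs: it ranks candidate utterances from a noisy pool by BERTScore similarity to the dialogue context and keeps the best one as "evidence". The pool contents, top-1 selection, and downstream concatenation are illustrative assumptions.

```python
# A minimal sketch of BERTScore-based evidence retrieval: rank candidates from
# a (noisy) training pool by semantic similarity to the dialogue context and
# keep the best one as extra context for generation.
from bert_score import score

context = "I just adopted a puppy and have no idea how to train it."
candidate_pool = [
    "Crate training works well for young dogs if you keep sessions short.",
    "The stock market dropped sharply today.",
    "Reward-based training with small treats helps puppies learn commands.",
]

# BERTScore expects paired lists, so repeat the context for every candidate.
_, _, f1 = score(candidate_pool, [context] * len(candidate_pool), lang="en")
best_idx = int(f1.argmax())
evidence = candidate_pool[best_idx]

# The retrieved evidence would then be concatenated with the context and fed
# to a seq2seq generator (omitted here).
print(f"Selected evidence: {evidence} (F1={f1[best_idx].item():.3f})")
```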
Dialogue summarization has recently garnered significant attention due to its wide range of applications. However, existing methods for summarizing dialogues are suboptimal because they do not take into account the inherent structure of dialogue and rely heavily on labeled data, which can lead to poor performance in new domains. In this work, we propose DIONYSUS (dynamic input optimization in pre-training for dialogue summarization), a pre-trained encoder-decoder model for summarizing dialogues in any new domain. To pre-train DIONYSUS, we create two pseudo summaries for each dialogue example: one is produced by a fine-tuned summarization model, and the other is a collection of dialogue turns that convey important information. We then choose one of these pseudo summaries based on the difference in information distribution across different types of dialogues. This selected pseudo summary serves as the objective for pre-training DIONYSUS using a self-supervised approach on a large dialogue corpus. Our experiments show that DIONYSUS outperforms existing methods on six datasets, as demonstrated by its ROUGE scores in zero-shot and few-shot settings.
Incorporating external knowledge into the response generation process is essential to building more helpful and reliable dialog agents. However, collecting knowledge-grounded conversations is often costly, calling for a better pre-trained model for grounded dialog generation that generalizes well w.r.t. different types of knowledge. In this work, we propose KPT (Keyword-guided Pre-Training), a novel self-supervised pre-training method for grounded dialog generation without relying on extra knowledge annotation. Specifically, we use a pre-trained language model to extract the most uncertain tokens in the dialog as keywords. With these keywords, we construct two kinds of knowledge and pre-train a knowledge-grounded response generation model, aiming at handling two different scenarios: (1) the knowledge should be faithfully grounded; (2) it can be selectively used. For the former, the grounding knowledge consists of keywords extracted from the response. For the latter, the grounding knowledge is additionally augmented with keywords extracted from other utterances in the same dialog. Since the knowledge is extracted from the dialog itself, KPT can be easily performed on a large volume and variety of dialogue data. We considered three data sources (open-domain, task-oriented, conversational QA) with a total of 2.5M dialogues. We conduct extensive experiments on various few-shot knowledge-grounded generation tasks, including grounding on dialog acts, knowledge graphs, persona descriptions, and Wikipedia passages. Our comprehensive experiments and analyses demonstrate that KPT consistently outperforms state-of-the-art methods on these tasks with diverse grounding knowledge.
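The keyword-extraction step described above (picking the most uncertain tokens under a pre-trained LM) can be illustrated as follows. This is a minimal sketch assuming GPT-2 as the scoring model and token-level negative log-likelihood as the uncertainty measure; KPT's exact model, granularity, and selection rule may differ.

```python
# A minimal sketch of keyword extraction via language-model uncertainty: score
# each token by its negative log-likelihood under a causal LM and treat the
# highest-loss (most surprising) tokens as keywords. Model choice and cut-off
# are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

utterance = "My favorite band is playing at the jazz festival in Montreal."
enc = tokenizer(utterance, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits  # (1, seq_len, vocab)

# Negative log-likelihood of each token given its left context.
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
targets = enc["input_ids"][0, 1:]
nll = -log_probs[torch.arange(targets.size(0)), targets]

top_k = 3
keyword_ids = targets[nll.topk(top_k).indices]
keywords = [tokenizer.decode([int(t)]).strip() for t in keyword_ids]
print("Extracted keywords:", keywords)
```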
Recent advances in large-scale pre-training provide large models with the potential to learn knowledge from the raw text. It is thus natural to ask whether it is possible to leverage these large models as knowledge bases for downstream tasks. In this work, we answer the aforementioned question in unsupervised knowledge-grounded conversation. We explore various methods that best elicit knowledge from large models. Our human study indicates that, though hallucinations exist, large models possess the unique advantage of being able to output common sense and summarize facts that cannot be directly retrieved from the search engine. To better exploit such generated knowledge in dialogue generation, we treat the generated knowledge as a noisy knowledge source and propose the posterior-based reweighing as well as the noisy training strategy. Empirical results on two benchmarks show advantages over the state-of-the-art methods.
Generative open-domain dialogue systems can benefit from external knowledge, but the lack of external knowledge resources and the difficulty of finding relevant knowledge limit the development of this technique. To this end, we propose a knowledge-driven dialogue task using dynamic service information. Specifically, we use a large number of service APIs that provide high coverage and spatio-temporal sensitivity as external knowledge sources. The dialogue system generates queries to request external services together with user information, obtains the relevant knowledge, and generates responses based on this knowledge. To implement this approach, we collect and release the first open-domain Chinese service-knowledge dialogue dataset, DuSinc. Meanwhile, we build a baseline model, PLATO-SINC, which realizes automatic utilization of service information in dialogue. Both automatic and human evaluations show that the proposed method significantly improves open-domain dialogue: compared with the dialogue pre-training model PLATO-2, the session-level overall score in human evaluation improves by 59.29%. The dataset and the baseline model will be open-sourced.
We introduce GODEL (Grounded Open Dialogue Language Model), a large pre-trained language model for dialogue. In contrast with earlier models such as DialoGPT, GODEL leverages a new grounded pre-training phase designed to better support adapting GODEL to a wide range of downstream dialogue tasks that require information external to the current conversation (e.g., a database or document) to produce good responses. Experiments on an array of benchmarks covering task-oriented dialogue, conversational QA, and grounded open-domain dialogue show that GODEL outperforms state-of-the-art pre-trained dialogue models in few-shot fine-tuning setups, in terms of both human and automatic evaluation. A novel feature of our evaluation methodology is the introduction of a notion of utility, which assesses the usefulness of a response (extrinsic evaluation) in addition to its communicative features (intrinsic evaluation). We show that extrinsic evaluation offers improved inter-annotator agreement and correlation with automated metrics. Code and data processing scripts are publicly available.
Existing studies in conversational AI mostly treat task-oriented dialogue (TOD) and question answering (QA) as separate tasks. Toward the goal of building a conversational agent that can complete user tasks and support information seeking, it is important to construct a system that handles both TOD and QA with access to various external knowledge sources. In this work, we propose a new task, Open-Book TOD (OB-TOD), which combines TOD with QA and expands the external knowledge sources to include both explicit knowledge sources (e.g., the Web) and implicit knowledge sources (e.g., pre-trained language models). We create a new dataset, OB-MultiWOZ, in which TOD sessions are enriched with QA-like information-seeking experiences grounded on external knowledge. We propose a unified model, OPERA (Open-book End-to-end Task-oriented Dialogue), which can appropriately access explicit and implicit external knowledge to tackle the defined task. Experimental results show that OPERA outperforms closed-book baselines and illustrate the value of both knowledge types.
Building open-domain dialogue systems capable of rich, human-like conversation is one of the fundamental challenges in language generation. However, even with recent advances in the field, existing open-domain generative models fail to capture and utilize external knowledge, leading to repetitive or generic responses to unseen utterances. Current work on knowledge-grounded dialogue generation primarily focuses on persona incorporation or on searching fact-based structured knowledge sources such as Wikipedia. Our method takes a broader and simpler approach, which aims to improve the raw conversational ability of a system by mimicking human response behavior through casual interactions found on social media. The model uses a joint retriever-generator setup, querying a filtered set of Reddit comment data to serve as additional context for a seq2seq generator. Automatic and human evaluations on open-domain dialogue datasets demonstrate the effectiveness of our approach.
Recent advances in large-scale pre-training, such as GPT-3, allow seemingly high-quality text to be generated from a given prompt. However, such generation systems often suffer from the problem of hallucinated facts and are not inherently designed to incorporate useful external information. Grounded generation models appear to offer a remedy, but their training typically relies on rarely available parallel data in which information-relevant documents are provided. We propose a framework that alleviates this data constraint by jointly training a grounded generator and a document retriever on the language-model signal. The model learns to reward retrieval of the documents with the highest utility in generation and attentively combines them with a mixture-of-experts (MoE) ensemble to generate the follow-on text. We demonstrate that both the generator and the retriever can benefit from this joint training and work synergistically to produce more informative and relevant text in both prose and dialogue generation.
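A much-simplified way to see how a retriever can be trained on the language-model signal alone is the RAG-style marginalization sketched below: the generator likelihood is marginalized over the top-k documents weighted by retriever scores, so documents that help generation receive larger gradients. The toy linear modules and tensors are assumptions for illustration, and the sketch does not reproduce the paper's mixture-of-experts decoding.

```python
# A minimal, simplified sketch of joint retriever-generator training with a
# language-model signal: marginalize the generator likelihood over top-k
# documents weighted by retriever scores, so gradients flow into both modules.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
k, hidden, vocab, tgt_len = 4, 16, 100, 5

# Stand-in modules: the retriever scores each document, the generator produces
# token logits conditioned on (context, document).
retriever = torch.nn.Linear(hidden, 1)
generator = torch.nn.Linear(hidden, vocab)

doc_embeddings = torch.randn(k, hidden)              # top-k retrieved documents
context_plus_doc = torch.randn(k, tgt_len, hidden)   # fused (context, doc) states
target = torch.randint(vocab, (tgt_len,))            # gold response tokens

doc_log_prior = F.log_softmax(retriever(doc_embeddings).squeeze(-1), dim=0)      # (k,)
token_log_probs = F.log_softmax(generator(context_plus_doc), dim=-1)             # (k, tgt_len, vocab)
seq_log_likelihood = token_log_probs[:, torch.arange(tgt_len), target].sum(-1)   # (k,)

# Marginal likelihood over documents; documents that help generation get upweighted.
loss = -torch.logsumexp(doc_log_prior + seq_log_likelihood, dim=0)
loss.backward()  # gradients reach both retriever and generator parameters
print(float(loss))
```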
In an information-seeking conversation, a user converses with an agent to ask a series of questions that are often under- or over-specified. An ideal agent would first identify that it is in such a situation by searching its underlying knowledge source, and then interact with the user appropriately to resolve it. However, most existing studies either fail to incorporate such agent-side initiative or do so only artificially. In this work, we present INSCIT (pronounced Insight), a dataset for information-seeking conversations with mixed-initiative interactions. It contains 4.7K user-agent turns from 805 human conversations in which the agent searches over Wikipedia and either asks for clarification or provides relevant information to address user queries. We define two subtasks, namely evidence passage identification and response generation, as well as a new human evaluation protocol to assess model performance. We report results of two strong baselines based on state-of-the-art models for conversational knowledge identification and open-domain question answering. Both models significantly underperform and fail to produce coherent and informative responses, suggesting ample room for improvement in future research.
Knowledge-grounded dialogue systems powered by large language models often generate responses that, while fluent, are not attributable to a relevant source of information. Progress towards models that do not exhibit this issue requires evaluation metrics that can quantify its prevalence. To this end, we introduce the Benchmark for Evaluation of Grounded INteraction (BEGIN), comprised of 12k dialogue turns generated by neural dialogue systems trained on three knowledge-grounded dialogue corpora. We collect human annotations assessing the extent to which the models' responses can be attributed to the given background information. We then use BEGIN to analyze eight evaluation metrics. We find that these metrics rely on spurious correlations, do not reliably distinguish attributable abstractive responses from unattributable ones, and perform substantially worse when the knowledge source is longer. Our findings underscore the need for more sophisticated and robust evaluation metrics for knowledge-grounded dialogue. We make BEGIN publicly available at https://github.com/google/BEGIN-dataset.
In this paper, we introduce PanGu-Bot, a Chinese pre-trained open-domain dialogue generation model built on the large pre-trained language model (PLM) PanGu-alpha (Zeng et al., 2021). Different from other pre-trained dialogue models trained on massive amounts of dialogue data, we aim to build a powerful dialogue model with relatively little data and computation cost by inheriting the valuable language capabilities and knowledge of the PLM. To this end, we train PanGu-Bot from the large PLM PanGu-alpha, which has been shown to perform well on a variety of Chinese natural language tasks. We investigate different aspects of the responses generated by PanGu-Bot, including response quality, knowledge, and safety. We show that PanGu-Bot outperforms state-of-the-art Chinese dialogue systems (CDialGPT (Wang et al., 2020), EVA (Zhou et al., 2021), EVA2.0 (Gu et al., 2022)) w.r.t. the above three aspects. We also demonstrate that PanGu-Bot can easily be deployed to generate emotional responses without further training. Throughout the empirical analysis, we also point out that PanGu-Bot's response quality, knowledge correctness, and safety are still far from perfect, and further exploration is indispensable for building reliable and intelligent dialogue systems. Our model and code will be made available at https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/PanGu-Bot.
Task-oriented dialogue systems (TDSs) are assessed mainly in an offline setting or through human evaluation. The evaluation is often limited to single turns or is very time-intensive. As an alternative, user simulators that mimic user behavior allow us to consider a broad set of user goals and to generate human-like conversations for simulation-based evaluation. Employing existing user simulators to evaluate TDSs is challenging, because user simulators are primarily designed to optimize dialogue policies for TDSs and have limited evaluation capabilities. Moreover, the evaluation of user simulators itself is an open challenge. In this work, we propose a metaphorical user simulator for end-to-end TDS evaluation, where a simulator is defined to be metaphorical if it simulates the user's analogical thinking in interactions with the system. We also propose a tester-based evaluation framework to generate variants, i.e., dialogue systems with different capabilities. Our user simulator constructs a metaphorical user model that assists the simulator in reasoning by referring to prior knowledge when encountering new items. We estimate the quality of simulators by examining the simulated interactions between simulators and variants. Our experiments are conducted on three TDS datasets. The metaphorical user simulator shows better consistency with manual evaluation than an agenda-based simulator and a seq2seq model on all three datasets. Our tester framework demonstrates efficiency and better generalizability and scalability, because it is applicable to conversations in multiple domains and to multiple tasks, such as dialogue recommendation and e-commerce dialogues.
Conversational recommender systems (CRS) have become an emerging research topic that seeks to make recommendations through interactive conversations, and they typically consist of a generation module and a recommendation module. Prior work on CRS tends to incorporate more external, domain-specific knowledge such as item reviews to improve performance. However, collecting and annotating external domain-specific information requires substantial human effort and hurts generality, and too much extra knowledge also makes it harder to balance among the different sources. We therefore propose to fully discover and extract the internal knowledge from the context. We capture both entity-level and context-level representations to jointly model user preferences for recommendation, where a time-aware attention is designed to emphasize recently appearing items in the entity-level representation. We further use a pre-trained BART to initialize the generation module in order to alleviate data scarcity and enhance context modeling. In addition to experiments on a popular dataset (ReDial), we also include a multi-domain dataset (OpenDialKG) to show the effectiveness of our model. Experiments on both datasets show that our model achieves better performance on most evaluation metrics with less external knowledge, and that it generalizes well to other domains. Additional analyses on the recommendation and generation tasks demonstrate the effectiveness of our model in different scenarios.
Large language models have shown impressive few-shot results on a wide range of tasks. However, when knowledge is key to such results, as is the case for tasks such as question answering and fact checking, massive parameter counts seem to be needed to store the knowledge. Retrieval-augmented models are known to excel at knowledge-intensive tasks without requiring as many parameters, but it is unclear whether they work in few-shot settings. In this work we present Atlas, a carefully designed and pre-trained retrieval-augmented language model able to learn knowledge-intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT, and NaturalQuestions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy on Natural Questions using only 64 examples, outperforming a 540B-parameter model despite having 50x fewer parameters.
The quality of knowledge retrieval is crucial in knowledge-intensive conversations. Two common strategies to improve the retrieval quality are finetuning the retriever or generating a self-contained query, while they encounter heavy burdens on expensive computation and elaborate annotations. In this paper, we propose an unsupervised query enhanced approach for knowledge-intensive conversations, namely QKConv. There are three modules in QKConv: a query generator, an off-the-shelf knowledge selector, and a response generator. Without extra supervision, the end-to-end joint training of QKConv explores multiple candidate queries and utilizes corresponding selected knowledge to yield the target response. To evaluate the effectiveness of the proposed method, we conducted comprehensive experiments on conversational question-answering, task-oriented dialogue, and knowledge-grounded conversation. Experimental results demonstrate that QKConv achieves state-of-the-art performance compared to unsupervised methods and competitive performance compared to supervised methods.
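A minimal sketch of the three-module pipeline named above (query generator, off-the-shelf knowledge selector, response generator) is shown below, assuming a t5-small backbone and a BM25 selector; QKConv's end-to-end joint training over multiple candidate queries is not reproduced here.

```python
# A minimal sketch of a query-enhanced pipeline: generate a query from the
# dialogue context, select knowledge with an off-the-shelf (non-trainable)
# selector, then generate a response conditioned on context and knowledge.
# Backbone, prompts, and corpus are illustrative assumptions.
from rank_bm25 import BM25Okapi
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

knowledge_corpus = [
    "The Louvre is the world's most-visited museum, located in Paris.",
    "Mount Everest is the highest mountain above sea level.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in knowledge_corpus])

context = "I am visiting Paris next month. What museum should I see?"

# 1) Generate a search query from the dialogue context.
query_ids = model.generate(
    **tokenizer("generate query: " + context, return_tensors="pt"), max_new_tokens=16
)
query = tokenizer.decode(query_ids[0], skip_special_tokens=True)

# 2) Select knowledge with the off-the-shelf selector.
scores = bm25.get_scores(query.lower().split())
knowledge = knowledge_corpus[int(scores.argmax())]

# 3) Generate the response conditioned on context and selected knowledge.
response_ids = model.generate(
    **tokenizer(f"knowledge: {knowledge} context: {context}", return_tensors="pt"),
    max_new_tokens=32,
)
print(tokenizer.decode(response_ids[0], skip_special_tokens=True))
```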
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
This paper presents an ontology-aware pre-trained language model (OPAL) for end-to-end task-oriented dialogue (TOD). Unlike chit-chat dialogue models, task-oriented dialogue models fulfill at least two task-specific modules: a dialogue state tracker (DST) and a response generator (RG). The dialogue state consists of domain-slot-value triples, which are regarded as the user's constraints for searching domain-related databases. Large-scale task-oriented dialogue data with annotated dialogue states are usually inaccessible, which hinders the development of pre-trained language models for task-oriented dialogue. We propose a simple yet effective pre-training method to alleviate this problem, which consists of two pre-training phases. The first phase pre-trains on large-scale contextual text data, where the structured information of the text is extracted by an information extraction tool. To bridge the gap between the pre-training method and downstream tasks, we design two pre-training tasks: ontology-like triple recovery and next-text generation, which simulate DST and RG, respectively. The second phase fine-tunes the pre-trained model on TOD data. Experimental results show that our proposed method achieves competitive performance on the CamRest676 and MultiWOZ benchmarks even without any TOD data.
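To make the two pre-training tasks more concrete, the sketch below builds one ontology-like triple-recovery example and one next-text-generation example from plain text plus triples produced by an information-extraction tool. The serialization format, prompts, and masking rule are assumptions for illustration, not OPAL's exact scheme.

```python
# A minimal sketch (under assumed formats) of constructing the two kinds of
# pre-training examples from ordinary text and IE-extracted triples.
def build_pretraining_examples(sentence, next_sentence, extracted_triples):
    """extracted_triples: list of (subject, relation, object) from an IE tool."""
    # Task 1: mask the structured information and ask the model to recover the
    # triples (a proxy for dialogue state tracking).
    masked = sentence
    for _subj, _rel, obj in extracted_triples:
        masked = masked.replace(obj, "[MASK]")
    triple_recovery = {
        "source": f"recover triples: {masked}",
        "target": " ; ".join(f"{s} - {r} - {o}" for s, r, o in extracted_triples),
    }
    # Task 2: generate the following text conditioned on the current text and
    # its triples (a proxy for response generation).
    next_text_generation = {
        "source": f"context: {sentence} triples: {triple_recovery['target']}",
        "target": next_sentence,
    }
    return triple_recovery, next_text_generation


examples = build_pretraining_examples(
    "Barack Obama was born in Honolulu.",
    "He served as the 44th president of the United States.",
    [("Barack Obama", "born_in", "Honolulu")],
)
print(examples[0]["source"], "->", examples[0]["target"])
```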
Dialogue systems can leverage large pre-trained language models and knowledge to generate fluent and informative responses. However, these models are still prone to produce hallucinated responses not supported by the input source, which greatly hinders their application. The heterogeneity between external knowledge and dialogue context challenges representation learning and source integration, and further contributes to unfaithfulness. To handle this challenge and generate more faithful responses, this paper presents RHO ($\rho$) utilizing the representations of linked entities and relation predicates from a knowledge graph (KG). We propose (1) local knowledge grounding to combine textual embeddings with the corresponding KG embeddings; and (2) global knowledge grounding to equip RHO with multi-hop reasoning abilities via the attention mechanism. In addition, we devise a response re-ranking technique based on walks over KG sub-graphs for better conversational reasoning. Experimental results on OpenDialKG show that our approach significantly outperforms state-of-the-art methods on both automatic and human evaluation by a large margin, especially in hallucination reduction (17.54% in FeQA).
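The "local knowledge grounding" idea described above (combining textual embeddings with the corresponding KG embeddings) can be sketched as follows: the token embeddings of a linked entity mention are fused with a projected KG entity vector. The dimensions, additive fusion, random projection, and entity-linking step are illustrative assumptions rather than RHO's exact formulation.

```python
# A minimal sketch of fusing token embeddings with a projected KG entity
# embedding over the entity mention span. All sizes and the fusion operator
# are illustrative assumptions.
import torch

text_dim, kg_dim, seq_len = 32, 8, 6
token_embeddings = torch.randn(seq_len, text_dim)   # from the LM embedding layer
kg_entity_embedding = torch.randn(kg_dim)           # e.g. a graph embedding of the linked entity
entity_token_positions = [2, 3]                      # mention span produced by entity linking

projection = torch.randn(kg_dim, text_dim)           # stand-in for a learned projection
grounded = token_embeddings.clone()
grounded[entity_token_positions] += kg_entity_embedding @ projection

# `grounded` then replaces the plain token embeddings as input to the encoder.
print(grounded.shape)
```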