Exemplar-based generative models for open-domain conversation leverage both generative and retrieval models to produce responses grounded on exemplars provided by a retriever. However, they often either ignore the retrieved exemplars while generating responses or produce responses over-fitted to the retrieved exemplars. In this paper, we argue that these drawbacks derive from the one-to-many problem of open-domain conversation. When the retrieved exemplar is relevant to the given context yet significantly different from the gold response, the exemplar-based generative model is trained to ignore the exemplar, since it is not helpful for generating the gold response. On the other hand, when the retrieved exemplar is lexically similar to the gold response, the generative model is trained to rely heavily on it. Therefore, we propose a training method that selects exemplars that are semantically relevant to the gold response but lexically distanced from it, to mitigate the above drawbacks. In the training phase, our proposed method first uses the gold response instead of the dialogue context as a query to select exemplars that are semantically related to the gold response. It then eliminates the exemplars that are lexically similar to the gold response, to alleviate the generative model's dependence on them. The remaining exemplars could still be irrelevant to the given context, since they were retrieved based on the gold response; our method therefore further utilizes relevance scores between the given context and the exemplars to penalize the irrelevant ones. Extensive experiments demonstrate that our proposed training method alleviates the drawbacks of existing exemplar-based generative models and significantly improves performance in terms of appropriateness and informativeness.
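A minimal sketch of the exemplar-selection procedure described above might look as follows, assuming a sentence-transformers encoder for semantic similarity and simple Jaccard overlap as the lexical filter; the checkpoint, thresholds, and function names are illustrative, not the paper's implementation.

```python
# A sketch of the exemplar-selection idea above; all names and thresholds
# are illustrative assumptions, not the paper's actual code.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def jaccard(a: str, b: str) -> float:
    """Lexical overlap between two whitespace-tokenized strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def select_exemplars(gold_response: str, context: str, candidates: list[str],
                     top_k: int = 8, max_lexical_overlap: float = 0.5):
    # 1) Query with the gold response (not the context) for semantic relevance.
    gold_emb = encoder.encode(gold_response, convert_to_tensor=True)
    cand_embs = encoder.encode(candidates, convert_to_tensor=True)
    sims = util.cos_sim(gold_emb, cand_embs)[0]
    top = sims.argsort(descending=True)[:top_k].tolist()

    # 2) Drop exemplars lexically close to the gold response, so the
    #    generator cannot learn to simply copy them.
    kept = [candidates[i] for i in top
            if jaccard(candidates[i], gold_response) < max_lexical_overlap]
    if not kept:
        return []

    # 3) Score survivors against the context; during training these scores
    #    would penalize exemplars irrelevant to the given context.
    ctx_emb = encoder.encode(context, convert_to_tensor=True)
    kept_embs = encoder.encode(kept, convert_to_tensor=True)
    relevance = util.cos_sim(ctx_emb, kept_embs)[0].tolist()
    return list(zip(kept, relevance))
```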
Building open-domain dialogue systems capable of rich human-like conversational ability is one of the fundamental challenges in language generation. However, even with recent advances in the field, existing open-domain generative models fail to capture and utilize external knowledge, leading to repetitive or generic responses to unseen utterances. Current work on knowledge-grounded dialogue generation primarily focuses on persona incorporation or searching structured, fact-based knowledge sources such as Wikipedia. Our method takes a broader and simpler approach, which aims to improve the raw conversational ability of the system by mimicking the human response behavior found in casual interactions on social media. The model utilizes a joint retriever-generator setup, querying a filtered set of comment data from Reddit to serve as additional context for a seq2seq generator. Automatic and human evaluations on open-domain dialogue datasets demonstrate the effectiveness of our approach.
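As a rough illustration of the retriever-generator setup, the sketch below simply prepends retrieved comments to the dialogue context before feeding a seq2seq generator; the checkpoint and separator format are placeholder choices, and a real system would fine-tune the generator on such concatenated inputs.

```python
# Illustrative only: retrieved Reddit comments serve as extra context for a
# seq2seq generator. An off-the-shelf, non-fine-tuned checkpoint is used as
# a stand-in here.
from transformers import pipeline

generator = pipeline("text2text-generation", model="facebook/bart-base")

def respond(dialogue_context: str, retrieved_comments: list[str]) -> str:
    extra = " | ".join(retrieved_comments)
    prompt = f"context: {dialogue_context} retrieved: {extra}"
    return generator(prompt, max_length=64)[0]["generated_text"]
```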
Real human conversation data are complex, heterogeneous, and noisy, and building open-domain dialogue systems from such data remains a challenging task. In fact, such dialogue data still contain a wealth of information and knowledge which, however, has not been fully explored. In this paper, we show that existing open-domain dialogue generation methods, which memorize context-response paired data with autoregressive or encoder-decoder language models, underutilize the training data. Unlike current approaches that use external knowledge, we explore a retrieval-generation training framework that can exploit heterogeneous and noisy training data by treating it as "evidence". In particular, we use BERTScore for retrieval, which yields better quality of both the evidence and the generation. Experiments on publicly available datasets show that our method helps models generate better responses, even when such training data are usually regarded as low-quality. This performance gain is comparable to, or even better than, the gain obtained by enlarging the training set. We also find that model performance is positively correlated with the relevance of the retrieved evidence. Moreover, our method performs well in zero-shot experiments, which indicates that it may be more robust to real-world data.
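The following sketch shows BERTScore-based evidence retrieval in its simplest brute-force form, using the bert-score package; the toy corpus is illustrative, and a practical system would score against an indexed training set rather than a Python list.

```python
# Brute-force evidence retrieval scored with BERTScore (bert-score package).
from bert_score import score

def retrieve_evidence(query: str, corpus: list[str], top_k: int = 3):
    """Rank corpus utterances as 'evidence' by BERTScore F1 against the query."""
    _, _, f1 = score(corpus, [query] * len(corpus), lang="en", verbose=False)
    ranked = sorted(zip(corpus, f1.tolist()), key=lambda x: -x[1])
    return ranked[:top_k]

evidence = retrieve_evidence(
    "I just adopted a puppy and have no idea how to train it.",
    ["Crate training works well for young dogs.",
     "The weather is nice today.",
     "Reward-based training helps puppies learn commands."])
```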
Dialogue models are able to generate coherent and fluent responses, but they can still be challenging to control and may produce non-engaging, unsafe results. This unpredictability diminishes user trust and can hinder the use of the models in the real world. To address this, we introduce DialGuide, a novel framework for controlling dialogue model behavior using natural language rules, or guidelines. These guidelines provide information about the context they are applicable to and what should be included in the response, allowing the models to generate responses that are more closely aligned with the developer's expectations and intent. We evaluate DialGuide on three tasks in open-domain dialogue response generation: guideline selection, response generation, and response entailment verification. Our dataset contains 10,737 positive and 15,467 negative dialogue context-response-guideline triplets across two domains - chit-chat and safety. We provide baseline models for the tasks and benchmark their performance. We also demonstrate that DialGuide is effective in the dialogue safety domain, producing safe and engaging responses that follow developer guidelines.
Endowing chatbots with a consistent persona is essential for agents to deliver human-like interactions. However, existing personalized approaches commonly generate responses based on static, predefined personas depicted with textual descriptions, which may severely restrict the interactivity between humans and chatbots, especially when the agent needs to answer queries excluded from the predefined personas - the so-called out-of-predefined persona problem (OOP for simplicity). To alleviate the problem, in this paper we propose a novel retrieval-to-prediction paradigm consisting of two subcomponents: (1) a Persona Retrieval Model (PRM), which retrieves personas from a global collection based on a Natural Language Inference (NLI) model, such that the inferred personas are consistent with the predefined personas; and (2) a posterior transformer (PS-Transformer), which adopts a persona posterior distribution that further considers the actual personas used in the ground-truth response, maximally mitigating the gap between training and inference. In addition, we present a dataset called IT-ConvAI2 that first highlights the OOP problem in personalized dialogue. Extensive experiments on both IT-ConvAI2 and ConvAI2 demonstrate that our proposed model yields considerable improvements in both automatic metrics and human evaluations.
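A minimal sketch of the NLI-based filtering at the heart of a persona retrieval model might look as follows, using the off-the-shelf roberta-large-mnli checkpoint; the threshold and the consistency rule are illustrative assumptions, not the paper's exact formulation.

```python
# Keep candidate personas an NLI model does not judge contradictory to the
# predefined persona. Threshold and rule are illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def retrieve_consistent_personas(predefined_persona: str, candidates: list[str],
                                 threshold: float = 0.5) -> list[str]:
    kept = []
    for cand in candidates:
        enc = tok(predefined_persona, cand, return_tensors="pt")
        with torch.no_grad():
            probs = nli(**enc).logits.softmax(-1)[0]
        # Label order for roberta-large-mnli: contradiction, neutral, entailment.
        if probs[0].item() < threshold:  # keep if not judged contradictory
            kept.append(cand)
    return kept
```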
Recent advances in large-scale pre-training such as GPT-3 allow seemingly high-quality text to be generated from a given prompt. However, such generation systems often suffer from problems of hallucinated facts and are not inherently designed to incorporate useful external information. Grounded generation models appear to offer remedies, but their training typically relies on rarely-available parallel data in which information-relevant documents are provided. We propose a framework that alleviates this data constraint by jointly training a grounded generator and a document retriever on the language model signal. The model learns to reward retrieval of the documents with the highest utility in generation and attentively combines them using a Mixture-of-Experts (MoE) ensemble to generate follow-on text. We demonstrate that both the generator and the retriever can take advantage of this joint training, working synergistically to produce more informative and relevant text in both prose and dialogue generation.
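The joint training signal can be pictured as marginalizing the generator likelihood over retrieved documents, so that gradients reach both components; the sketch below shows this mixture-over-documents loss with illustrative tensor shapes (the paper's MoE merging and training schedule are not reproduced here).

```python
# Mixture-over-documents loss: documents useful for generation get rewarded
# because gradients flow through the retriever scores. Shapes illustrative.
import torch

def joint_loss(doc_scores: torch.Tensor, token_logprobs: torch.Tensor) -> torch.Tensor:
    """
    doc_scores:     (k,)   retriever scores for k retrieved documents
    token_logprobs: (k, T) generator log p(y_t | x, z_i, y_<t) per document
    """
    log_p_docs = torch.log_softmax(doc_scores, dim=0)  # log p(z_i | x)
    seq_logprobs = token_logprobs.sum(dim=1)           # log p(y | x, z_i)
    # log p(y | x) = logsumexp_i [ log p(z_i | x) + log p(y | x, z_i) ]
    marginal = torch.logsumexp(log_p_docs + seq_logprobs, dim=0)
    return -marginal
```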
We introduce AARGH, a task-oriented dialogue system that combines retrieval and generative approaches in a single model, aiming to improve dialogue management and the lexical diversity of outputs. The model features a new response selection method based on an action-aware training objective and a simplified single-encoder retrieval architecture, which allows us to build an end-to-end retrieval-augmented generation model in which retrieval and generation share most of their parameters. On the MultiWOZ dataset, we show that our approach produces more diverse outputs while maintaining or improving state tracking and context-to-response generation performance, compared with state-of-the-art baselines.
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
In open-domain dialogue, intelligent agents should exhibit the use of knowledge; however, there are few convincing demonstrations of this to date. The most popular sequence-to-sequence models typically "generate and hope" generic utterances that can be memorized in the weights of the model when mapping from input utterance(s) to output, rather than employing recalled knowledge as context. Use of knowledge has so far proved difficult, in part because of the lack of a supervised learning benchmark task which exhibits knowledgeable open dialogue with clear grounding. To that end we collect and release a large dataset with conversations directly grounded with knowledge retrieved from Wikipedia. We then design architectures capable of retrieving knowledge, reading and conditioning on it, and finally generating natural responses. Our best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while our new benchmark allows for measuring further improvements in this important research direction.
Knowledge-driven dialogue generation has recently made remarkable breakthroughs. Compared with general dialogue systems, superior knowledge-grounded dialogue systems can produce more informative and knowledgeable responses given pre-provided knowledge. However, in practical applications, the dialogue system cannot be provided with the corresponding knowledge in advance. To address this problem, we design a knowledge-driven dialogue system named DRKQG (Dynamically Retrieving Knowledge via Query Generation for informative dialogue responses). Specifically, the system can be divided into two modules: a query generation module and a dialogue generation module. First, a time-aware mechanism is utilized to capture context information, and a query can be generated for retrieving knowledge. Then, we integrate a copy mechanism with Transformers, which allows the response generation module to produce responses derived from both the context and the retrieved knowledge. Experimental results at LIC2022, the Language and Intelligence Technology Competition, show that our module outperforms the baseline model by a large margin on automatic evaluation metrics, while human evaluation by the Baidu linguistics team shows that our system achieves impressive results in factual correctness and knowledgeability.
Recent advances in large-scale pre-training provide large models with the potential to learn knowledge from the raw text. It is thus natural to ask whether it is possible to leverage these large models as knowledge bases for downstream tasks. In this work, we answer the aforementioned question in unsupervised knowledge-grounded conversation. We explore various methods that best elicit knowledge from large models. Our human study indicates that, though hallucinations exist, large models possess the unique advantage of being able to output common sense and summarize facts that cannot be directly retrieved from the search engine. To better exploit such generated knowledge in dialogue generation, we treat the generated knowledge as a noisy knowledge source and propose posterior-based reweighing as well as a noisy training strategy. Empirical results on two benchmarks show advantages over the state-of-the-art methods.
Large language models can produce fluent dialogue but often hallucinate factual inaccuracies. While retrieval-augmented models help alleviate this problem, they still face the difficult challenge of simultaneously reasoning to provide correct knowledge and generating conversation. In this work, we propose a modular model, Knowledge to Response (K2R), for incorporating knowledge into conversational agents, which breaks down this problem into two easier steps. K2R first generates a knowledge sequence, given a dialogue context, as an intermediate step. After this "reasoning step", the model then attends to its own generated knowledge sequence, as well as the dialogue context, to produce the final response. In detailed experiments, we find that such a model hallucinates less in knowledge-grounded dialogue tasks and has advantages in terms of interpretability and modularity. In particular, it can be used to fuse QA and dialogue systems together, enabling dialogue agents to give knowledgeable answers, or QA models to give conversational responses, in a zero-shot setting.
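A toy version of the two-step decomposition is sketched below, with a generic instruction-following seq2seq model standing in for the paper's trained modules; the prompts and checkpoint are illustrative.

```python
# Two-step "knowledge then response" generation with a stand-in model.
from transformers import pipeline

generate = pipeline("text2text-generation", model="google/flan-t5-base")

def k2r_respond(dialogue_context: str) -> str:
    # Step 1 ("reasoning step"): generate an intermediate knowledge sequence.
    knowledge = generate(
        f"Generate a relevant fact for this dialogue:\n{dialogue_context}"
    )[0]["generated_text"]
    # Step 2: condition the final response on both dialogue and knowledge.
    return generate(
        f"Dialogue:\n{dialogue_context}\nKnowledge: {knowledge}\nResponse:"
    )[0]["generated_text"]
```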
Implicit knowledge, such as common sense, is key to fluid human conversation. Current neural response generation (RG) models are trained to generate responses directly, omitting unstated implicit knowledge. In this paper, we present Think-Before-Speaking (TBS), a generative approach that first externalizes implicit commonsense knowledge (think) and then uses this knowledge to generate responses (speak). We expect that externalizing implicit knowledge allows more efficient learning, produces more informative responses, and enables more explainable models. We analyze different choices for collecting knowledge-aligned dialogues, representing implicit knowledge, and transitioning between knowledge and dialogue. Empirical results show that TBS models outperform end-to-end and knowledge-augmented RG baselines on most automatic metrics and, as evaluated by human annotators, generate more informative, specific, and commonsense-following responses. TBS also generates knowledge that makes sense and is relevant to the dialogue around 85% of the time.
Complex dialogue mappings (CDM), including one-to-many and many-to-one mappings, tend to make dialogue models generate incoherent or dull responses, and modeling these mappings remains a huge challenge for neural dialogue systems. To alleviate these problems, methods like introducing external information, reconstructing the optimization function, and manipulating data samples are proposed, while they primarily focus on avoiding training with CDM, inevitably weakening the model's ability to understand CDM in human conversations and limiting further improvements in model performance. This paper proposes a Sentence Semantic \textbf{Seg}mentation guided \textbf{C}onditional \textbf{V}ariational \textbf{A}uto-\textbf{E}ncoder (SegCVAE) method which can model and take advantage of the CDM data. Specifically, to tackle the incoherence problem caused by one-to-many, SegCVAE uses response-related prominent semantics to constrain the latent variable. To mitigate the non-diverse problem brought by many-to-one, SegCVAE segments multiple prominent semantics to enrich the latent variables. Three novel components, Internal Separation, External Guidance, and Semantic Norms, are proposed to achieve SegCVAE. On dialogue generation tasks, both the automatic and human evaluation results show that SegCVAE achieves new state-of-the-art performance.
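For reference, the sketch below shows the standard conditional-VAE objective that SegCVAE builds on: a reconstruction loss plus a KL term between a response-aware posterior and a context-only prior. SegCVAE's Internal Separation, External Guidance, and Semantic Norms refine how these latents are constrained and are not reproduced here; shapes are illustrative.

```python
# Negative ELBO for a conditional VAE over dialogue responses.
import torch
import torch.nn.functional as F

def cvae_negative_elbo(recon_logits, targets, mu_post, logvar_post,
                       mu_prior, logvar_prior):
    # recon_logits: (B, T, V) decoder logits; targets: (B, T) token ids
    recon = F.cross_entropy(recon_logits.transpose(1, 2), targets)
    # KL( N(mu_post, var_post) || N(mu_prior, var_prior) ), summed over dims
    kl = 0.5 * torch.sum(
        logvar_prior - logvar_post
        + (logvar_post.exp() + (mu_post - mu_prior) ** 2) / logvar_prior.exp()
        - 1.0,
        dim=-1,
    ).mean()
    return recon + kl
```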
Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit nonparametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) -models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, and another which can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state of the art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.
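RAG has a reference implementation in Hugging Face Transformers; below is a minimal usage sketch with the published rag-sequence-nq checkpoint. The dummy index keeps the example lightweight; a real Wikipedia index is needed for sensible outputs.

```python
# Minimal RAG-Sequence usage via the Hugging Face reference implementation.
from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever)

inputs = tokenizer("who holds the record in 100m freestyle", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```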
Medical dialogue generation is an important yet challenging task. Most previous works rely on the attention mechanism and large-scale pretrained language models. However, these methods often fail to acquire pivotal information from the long dialogue history needed to yield an accurate and informative response, because medical entities usually scatter throughout multiple utterances along with complex relationships between them. To mitigate this problem, we propose a medical response generation model with Pivotal Information Recalling (MedPIR), which is built on two components: a knowledge-aware dialogue graph encoder and a recall-enhanced generator. The knowledge-aware dialogue graph encoder constructs a dialogue graph by exploiting the knowledge relationships between entities in the utterances and encodes it with a graph attention network. The recall-enhanced generator then strengthens the usage of this pivotal information by generating a summary of the dialogue before producing the actual response. Experimental results on two large-scale medical dialogue datasets show that MedPIR outperforms strong baselines in BLEU scores and medical entity F1 measure.
The quality of knowledge retrieval is crucial in knowledge-intensive conversations. Two common strategies to improve the retrieval quality are finetuning the retriever or generating a self-contained query, but they incur heavy burdens of expensive computation and elaborate annotation. In this paper, we propose an unsupervised query enhanced approach for knowledge-intensive conversations, namely QKConv. There are three modules in QKConv: a query generator, an off-the-shelf knowledge selector, and a response generator. Without extra supervision, the end-to-end joint training of QKConv explores multiple candidate queries and utilizes corresponding selected knowledge to yield the target response. To evaluate the effectiveness of the proposed method, we conducted comprehensive experiments on conversational question-answering, task-oriented dialogue, and knowledge-grounded conversation. Experimental results demonstrate that QKConv achieves state-of-the-art performance compared to unsupervised methods and competitive performance compared to supervised methods.
Knowledge-grounded dialogue systems are challenging to build due to the lack of training data and heterogeneous knowledge sources. Existing systems perform poorly on unseen topics due to the limited topics covered in the training data. In addition, heterogeneous knowledge sources make it challenging for systems to generalize to other tasks, because knowledge sources in different knowledge representations require different knowledge encoders. To address these challenges, we present PLUG, a language model that homogenizes different knowledge sources into a unified knowledge representation for knowledge-grounded dialogue generation tasks. PLUG is pre-trained on a dialogue generation task conditioned on a unified essential knowledge representation. It can generalize to different downstream knowledge-grounded dialogue generation tasks with a few training examples. Empirical evaluation on two benchmarks shows that our model generalizes well across different knowledge-grounded tasks: it achieves performance comparable with state-of-the-art methods in a fully supervised setting and significantly outperforms other methods in zero-shot and few-shot settings.
Persona-based dialogue systems aim to generate consistent responses based on historical context and predefined persona. Unlike conventional dialogue generation, the persona-based dialogue needs to consider both dialogue context and persona, posing a challenge for coherent training. Specifically, this requires a delicate weight balance between context and persona. To achieve that, in this paper, we propose an effective framework with Persona-Adaptive Attention (PAA), which adaptively integrates the weights from the persona and context information via our designed attention. In addition, a dynamic masking mechanism is applied to the PAA to not only drop redundant information in context and persona but also serve as a regularization mechanism to avoid overfitting. Experimental results demonstrate the superiority of the proposed PAA framework compared to the strong baselines in both automatic and human evaluation. Moreover, the proposed PAA approach can perform equivalently well in a low-resource regime, achieving results similar to the larger models trained in the full-data setting with only 20% to 30% of the data. To fully exploit the effectiveness of our design, we designed several variants for handling the weighted information in different ways, showing the necessity and sufficiency of our weighting and masking designs.
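The adaptive weighting idea can be sketched as two cross-attention streams (persona and context) fused by a learned per-position gate, as below; this is a simplified stand-in for the actual PAA layer (plain dropout approximates the dynamic masking, and all names are illustrative).

```python
# Gated fusion of persona- and context-attended decoder states.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, d_model: int, n_heads: int = 8, p_drop: float = 0.1):
        super().__init__()
        self.attn_persona = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attn_context = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, 1)
        self.dropout = nn.Dropout(p_drop)  # crude stand-in for dynamic masking

    def forward(self, decoder_states, persona_enc, context_enc):
        p, _ = self.attn_persona(decoder_states, persona_enc, persona_enc)
        c, _ = self.attn_context(decoder_states, context_enc, context_enc)
        # Per-position weight balancing persona vs. context information.
        w = torch.sigmoid(self.gate(torch.cat([p, c], dim=-1)))
        return self.dropout(w * p + (1.0 - w) * c)
```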
For open-domain conversational question answering (CQA), it is important to retrieve the most relevant passages to answer a question, but this is challenging compared with standard passage retrieval because it requires understanding the full conversational context rather than a single query. Moreover, re-training well-established retrievers such as search engines that were originally developed for non-conversational queries can be expensive. To facilitate their use, we develop a query rewriting model, CONQRR, that rewrites a conversational question in context into a standalone question. It is trained with a novel reward function to directly optimize towards retrieval and can be adapted to any fixed black-box retriever using reinforcement learning. We show that CONQRR achieves state-of-the-art results on a recent open-domain CQA dataset combining conversations from three different sources. We also conduct extensive experiments to show the effectiveness of CONQRR for any given fixed retriever.
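The reinforcement-learning recipe can be sketched as a REINFORCE update in which the reward comes from the fixed black-box retriever; the rewriter interface and the retriever_recall reward function below are assumed placeholders, not CONQRR's actual code.

```python
# REINFORCE update for a query rewriter against a fixed black-box retriever.
import torch

def reinforce_step(rewriter, optimizer, conversation, gold_passage,
                   retriever_recall):
    # Sample a standalone rewrite and keep its (differentiable) log-probability.
    rewrite, logprob = rewriter.sample(conversation)   # assumed interface
    # Reward: did the fixed retriever surface the gold passage for this query?
    reward = retriever_recall(rewrite, gold_passage)   # e.g., recall@k in [0, 1]
    loss = -reward * logprob                           # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```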