Although the conditional variational autoencoder (CVAE) model can produce more diverse responses than traditional Seq2Seq models, the responses are often weakly relevant to the input or logically inconsistent with the query. We conduct a causal analysis to study the underlying reasons and provide a method for finding mediators and mitigating confounding bias in dialogues. Specifically, we propose predicting mediators to preserve relevant information and automatically incorporating the mediators into the generation process. Furthermore, a dynamic Topic Graph-Guided Conditional Variational Autoencoder (TGG-CVAE) model is used to complement the semantic space and reduce confounding bias in responses. Extensive experiments demonstrate that the proposed model can generate relevant and informative responses and outperforms the state of the art on both automatic metrics and human evaluation.
Conditional variational models, using either continuous or discrete latent variables, are powerful for open-domain dialogue response generation. However, previous works show that continuous latent variables tend to reduce the coherence of generated responses. In this paper, we also find that discrete latent variables have difficulty capturing more diverse expressions. To tackle these problems, we combine the merits of both continuous and discrete latent variables and propose a Hybrid Latent Variable (HLV) method. Specifically, HLV constrains the global semantics of responses through discrete latent variables and enriches responses with continuous latent variables. Thus, we diversify the generated responses while maintaining relevance and coherence. In addition, we propose the Conditional Hybrid Variational Transformer (CHVT), which constructs and utilizes HLV with transformers for dialogue generation. Through fine-grained symbolic-level semantic information and additive Gaussian mixing, we construct the distribution of continuous variables, prompting the generation of diverse expressions. Meanwhile, to maintain relevance and coherence, the discrete latent variable is optimized by self-separation training. Experimental results on two dialogue generation datasets (DailyDialog and Opensubtitles) show that CHVT is superior to the traditional transformer-based variational mechanism with respect to diversity, relevance, and coherence metrics. Moreover, we also demonstrate the benefit of applying HLV to fine-tuning two pre-trained dialogue models (PLATO and BART-base).
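As a rough illustration of the hybrid latent variable idea, the PyTorch sketch below combines a discrete code (via a Gumbel-softmax relaxation) with a Gaussian variable through additive mixing; the layer sizes, the relaxation, and the mixing scheme are assumptions for illustration, not the exact CHVT parameterization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridLatent(nn.Module):
    """Hybrid latent variable: a discrete code anchors global semantics,
    a continuous Gaussian variable adds fine-grained diversity.
    Sizes and mixing are illustrative assumptions."""

    def __init__(self, hidden: int, n_codes: int = 20, z_dim: int = 64):
        super().__init__()
        self.code_logits = nn.Linear(hidden, n_codes)  # discrete posterior logits
        self.code_emb = nn.Embedding(n_codes, z_dim)   # embedding per discrete code
        self.mu = nn.Linear(hidden, z_dim)             # continuous posterior mean
        self.logvar = nn.Linear(hidden, z_dim)         # continuous posterior log-variance

    def forward(self, h: torch.Tensor, tau: float = 1.0):
        # Discrete latent: differentiable sample via Gumbel-softmax.
        logits = self.code_logits(h)
        c = F.gumbel_softmax(logits, tau=tau, hard=True) @ self.code_emb.weight
        # Continuous latent: reparameterization trick.
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Additive mixing: one plausible reading of "additive Gaussian mixing".
        return c + z, logits, mu, logvar
```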
Complex dialogue mappings (CDM), including one-to-many and many-to-one mappings, tend to make dialogue models generate incoherent or dull responses, and modeling these mappings remains a huge challenge for neural dialogue systems. To alleviate these problems, methods such as introducing external information, reconstructing the optimization function, and manipulating data samples have been proposed, but they primarily focus on avoiding training with CDM, inevitably weakening the model's ability to understand CDM in human conversations and limiting further improvements in model performance. This paper proposes a Sentence Semantic \textbf{Seg}mentation guided \textbf{C}onditional \textbf{V}ariational \textbf{A}uto-\textbf{E}ncoder (SegCVAE) method that can model and take advantage of CDM data. Specifically, to tackle the incoherence problem caused by one-to-many mappings, SegCVAE uses response-related prominent semantics to constrain the latent variable. To mitigate the low-diversity problem caused by many-to-one mappings, SegCVAE segments multiple prominent semantics to enrich the latent variables. Three novel components, Internal Separation, External Guidance, and Semantic Norms, are proposed to realize SegCVAE. On dialogue generation tasks, both automatic and human evaluation results show that SegCVAE achieves new state-of-the-art performance.
Current work on personalized dialogue mainly helps the agent exhibit a consistent personality and produce more informative responses. However, we find that the responses generated by most previous models tend to be self-centered, paying little attention to the user in the dialogue. Moreover, we argue that human-like conversation is essentially built on inferring information about the other party's persona. Motivated by this, we propose a novel personalized dialogue generator that detects an implicit user persona. Because it is difficult to collect a large number of detailed personas for each user, we attempt to model the user's potential persona and its representation from the dialogue history alone, without external knowledge. Perception and fader latent variables are conceived using conditional variational inference. The two latent variables simulate the process by which people become aware of each other's personas and produce corresponding expressions in conversation. Finally, posterior-discriminated regularization is proposed to enhance the training procedure. Empirical studies demonstrate that, compared with state-of-the-art methods, our approach is more attentive to the user's persona and achieves a considerable boost across all evaluations.
Personalized dialogue generation is crucial for natural and human-like conversation. Typically, personalized dialogue generation models involve generating responses conditioned on both the dialogue history and a representation of the interlocutor's persona/personality. Since obtaining persona/personality representations for every interlocutor is impractical, recent works have explored the possibility of generating personalized dialogue by conditioning the model on dialogue examples corresponding to a given persona. However, in practical implementations, a sufficient number of corresponding dialogue examples is also rarely available. Hence, in this paper, we propose a Dual Latent Variable Generator (DLVGen) capable of generating personalized dialogue without any persona/personality information or any corresponding dialogue examples. Unlike prior work, DLVGen models the latent distribution over potential responses as well as the latent distribution over the agent's potential persona. During inference, latent variables are sampled from both distributions and fed into the decoder. Empirical results show that DLVGen is capable of generating diverse responses that accurately incorporate the agent's persona.
We investigate response generation for multi-turn dialogue in generative-based chatbots. Existing generative models based on RNNs (Recurrent Neural Networks) usually employ the last hidden state to summarize sequences, which makes models unable to capture the subtle variability observed in different dialogues and to distinguish between dialogues that are similar in composition. In this paper, we propose a Pseudo-Variational Gated Recurrent Unit (PVGRU) component that requires no posterior knowledge: it introduces a recurrent summarizing variable into the GRU, which aggregates the accumulated distribution variations of subsequences. PVGRU can perceive subtle semantic variability through summarizing variables that are optimized by the devised distribution consistency and reconstruction objectives. In addition, we build a Pseudo-Variational Hierarchical Dialogue (PVHD) model based on PVGRU. Experimental results demonstrate that PVGRU can broadly improve the diversity and relevance of responses on two benchmark datasets.
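To make the recurrent summarizing variable concrete, here is a minimal PyTorch sketch of one PVGRU-style step; the update rule, dimensions, and sampling scheme are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class PVGRUCell(nn.Module):
    """One step of a GRU augmented with a recurrent summarizing variable v
    that accumulates the distributional variation of the subsequence seen
    so far. Illustrative assumption, not PVGRU's exact update."""

    def __init__(self, input_size: int, hidden_size: int, v_size: int = 64):
        super().__init__()
        self.gru = nn.GRUCell(input_size + v_size, hidden_size)
        self.mu = nn.Linear(hidden_size, v_size)
        self.logvar = nn.Linear(hidden_size, v_size)

    def forward(self, x: torch.Tensor, h: torch.Tensor, v: torch.Tensor):
        # Condition the ordinary GRU update on the previous summarizing variable.
        h_new = self.gru(torch.cat([x, v], dim=-1), h)
        # "Pseudo-variational": the next summarizing variable is sampled from a
        # distribution parameterized by the new hidden state alone, so no
        # posterior network is needed (matching "without posterior knowledge").
        mu, logvar = self.mu(h_new), self.logvar(h_new)
        v_new = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return h_new, v_new
```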
At present, end-to-end open-domain dialogue systems based on deep learning remain black-box models, which makes data-driven models prone to generating irrelevant content. Specifically, due to the lack of prior knowledge to guide training, latent variables become entangled with different semantics in the latent space. To address this problem, this paper proposes to exploit generative models through a cognitive approach involving mesoscopic-scale feature disentanglement. In particular, the model integrates macro-level guiding category knowledge with micro-level open-domain dialogue data during training and exploits prior knowledge in the latent space, enabling the model to disentangle latent variables at the mesoscopic scale. In addition, we propose a new metric for open-domain dialogue that can objectively evaluate the interpretability of the latent space distribution. Finally, we validate our model on different datasets and experimentally demonstrate that it can generate higher-quality and more interpretable dialogues than other models.
Conversational recommender systems (CRSs) often utilize external knowledge graphs (KGs) to introduce rich semantic information and recommend relevant items through natural language dialogues. However, the original KGs employed in existing CRSs are often incomplete and sparse, which limits the reasoning capability in recommendation. Moreover, only a few existing studies exploit the dialogue context to dynamically refine knowledge from KGs for better recommendation. To address the above issues, we propose the Variational Reasoning over Incomplete KGs Conversational Recommender (VRICR). Our key idea is to incorporate the large dialogue corpus that naturally accompanies CRSs to enhance the incomplete KGs, and to perform dynamic knowledge reasoning conditioned on the dialogue context. Specifically, we treat the dialogue-specific subgraphs of KGs as latent variables with categorical priors for adaptive knowledge graph refactoring. We propose a variational Bayesian method to approximate posterior distributions over dialogue-specific subgraphs, which not only leverages the dialogue corpus for restructuring missing entity relations but also dynamically selects knowledge based on the dialogue context. Finally, we infuse the dialogue-specific subgraphs to decode the recommendation and responses. We conduct experiments on two benchmark CRS datasets. Experimental results confirm the effectiveness of our proposed method.
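One way to picture a dialogue-specific subgraph as a latent variable is a relaxed binary inclusion decision per candidate edge; the PyTorch sketch below uses a Gumbel-softmax (binary-concrete) relaxation as an illustrative stand-in for VRICR's variational Bayesian machinery.

```python
import torch
import torch.nn.functional as F

def sample_subgraph(edge_logits: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Treat a dialogue-specific subgraph as a latent variable: each candidate
    edge gets [exclude, include] logits conditioned on the dialogue context,
    and a relaxed categorical sample yields a differentiable soft edge mask.
    edge_logits: (n_edges, 2). Returns a (n_edges,) mask in [0, 1]."""
    soft = F.gumbel_softmax(edge_logits, tau=tau, hard=False)
    return soft[:, 1]  # probability mass on "include" per edge
```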
Open-domain dialogue systems aim to interact with humans through natural language text in an open-ended way. However, the widely successful neural networks may not work well for dialogue systems, as they tend to generate generic responses. In this work, we propose an Equal-size Hard Expectation-Maximization (EqHard-EM) algorithm to train a multi-decoder model for diverse dialogue generation. Our algorithm assigns samples to decoders in a hard manner and imposes an equal-assignment constraint to ensure that all decoders are well trained. We provide detailed theoretical analysis to justify our approach. Furthermore, experiments on two large-scale open-domain dialogue datasets verify that our EqHard-EM algorithm generates high-quality diverse responses.
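To illustrate the equal-size hard E-step, the PyTorch sketch below greedily assigns each sample to the decoder where its loss is lowest, subject to a per-decoder capacity; this greedy scheme is an assumed stand-in for the paper's exact balanced-assignment procedure.

```python
import torch

def equal_size_hard_assign(losses: torch.Tensor) -> torch.Tensor:
    """Equal-size hard E-step sketch. losses: (batch, n_decoders), each entry
    the loss of a sample under one decoder. Returns a (batch,) assignment with
    at most ceil(batch / n_decoders) samples per decoder, so every decoder
    keeps receiving training signal."""
    batch, n_dec = losses.shape
    capacity = -(-batch // n_dec)  # ceil division
    counts = [0] * n_dec
    assign = torch.full((batch,), -1, dtype=torch.long)
    # Visit (sample, decoder) pairs from lowest loss to highest; take a pair
    # if the sample is unassigned and the decoder still has capacity.
    for idx in torch.argsort(losses.flatten()).tolist():
        i, j = divmod(idx, n_dec)
        if assign[i] == -1 and counts[j] < capacity:
            assign[i] = j
            counts[j] += 1
    return assign
```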
Endowing chatbots with a consistent personality is crucial for agents to deliver human-like interactions. However, existing personalized approaches typically generate responses based on a static, predefined persona depicted in a textual description, which may severely limit the interactivity between humans and chatbots, especially when the agent needs to answer queries excluded from the predefined persona, the so-called out-of-predefined-persona problem (OOP for simplicity). To alleviate this problem, in this paper we propose a novel retrieval-to-prediction paradigm consisting of two sub-components: (1) a Persona Retrieval Model (PRM), which retrieves personas from a global collection based on a Natural Language Inference (NLI) model so that the inferred personas are consistent with the predefined ones; and (2) a posterior transformer (PS-Transformer), which adopts a persona posterior distribution that further takes into account the actual personas used in the ground-truth responses, maximally alleviating the gap between training and inference. Furthermore, we present a dataset named IT-ConvAI2 that first highlights the OOP problem in personalized dialogue. Extensive experiments on both IT-ConvAI2 and ConvAI2 demonstrate that our proposed model yields significant improvements on both automatic metrics and human evaluation.
This paper offers a comprehensive review of research on Natural Language Generation (NLG) over the past two decades, especially in relation to deep learning methods for data-to-text and text-to-text generation, as well as new applications of NLG technology. This survey aims to (a) give the latest synthesis of the core tasks in NLG and the architectures adopted in the field; (b) detail the various NLG tasks and datasets, and draw attention to the challenges in NLG evaluation, focusing on different evaluation methods and their relationships; (c) highlight some future directions and relatively recent research issues that arise from the increasing synergy between NLG and other areas of artificial intelligence, such as computer vision, text, and computational creativity.
In the past few years, the superiority of variational autoencoders has been witnessed in a variety of text generation tasks. However, due to the sequential nature of text, autoregressive decoders tend to ignore latent variables and degenerate into simple language models, a phenomenon known as the KL vanishing problem, which worsens further when VAEs are combined with transformer-based structures. To ameliorate this problem, we propose Della, a novel variational transformer framework. Della learns a series of layer-wise latent variables, each inferred from those of the lower layers and tightly coupled with the hidden states via low-rank tensor product. In this way, Della forces these posterior latent variables to be fused deeply along the whole computation path, thereby incorporating more information. Theoretically, we can regard our method as entangling latent variables to avoid the decay of posterior information through the layers, enabling Della to achieve higher non-zero KL values even without any annealing or thresholding tricks. Experiments on four unconditional and three conditional generation tasks show that Della can better alleviate KL vanishing and improve both quality and diversity compared with several strong baselines.
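A minimal PyTorch sketch of fusing a layer-wise latent variable with a transformer layer's hidden states through a low-rank bilinear interaction; the rank-r factorization here is one plausible reading of "low-rank tensor product", not Della's exact design.

```python
import torch
import torch.nn as nn

class LayerLatentFusion(nn.Module):
    """Couple a layer-wise latent variable z with the layer's hidden states h
    through a rank-r bilinear interaction, then fuse residually."""

    def __init__(self, hidden: int, z_dim: int, rank: int = 8):
        super().__init__()
        self.U = nn.Linear(hidden, rank, bias=False)  # project hidden states
        self.V = nn.Linear(z_dim, rank, bias=False)   # project the latent
        self.out = nn.Linear(rank, hidden, bias=False)

    def forward(self, h: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq, hidden); z: (batch, z_dim)
        inter = self.U(h) * self.V(z).unsqueeze(1)  # elementwise in rank space
        return h + self.out(inter)  # latent is fused into every position
```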
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, which in turn has improved downstream tasks such as abstractive summarization, dialogue generation, and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinating unintended text, which degrades system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented that measure and mitigate hallucinated text, but they have never been reviewed in a comprehensive manner. In this survey, we therefore provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks: abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated text in NLG.
As the functionality of dialogue systems evolves, hybrid dialogue systems that both accomplish user-specific goals and engage in open-topic chitchat with users are attracting growing attention. Existing research learns both tasks concurrently with a multi-task fusion technique but ignores the negative transfer phenomenon induced by their distinct textual styles. We therefore use contrastive learning based on a latent variable model to decouple the different textual genres in the latent space. We devise supervised and self-supervised positive and negative sample constructions for diverse datasets. In addition, to capitalize on the style information contained in the decoupled latent variables, we employ a style prefix that further incorporates the latent variables to control the generation of responses with varying styles. We performed extensive experiments on three dialogue datasets, including a hybrid dialogue dataset and two task-oriented dialogue datasets. The experimental results demonstrate that our method can mitigate the negative style transfer issue and achieves state-of-the-art performance on multiple dialogue datasets.
Personalized chatbots focus on endowing chatbots with a consistent personality so that they behave like real users and can further act as personal assistants. Previous studies have explored generating implicit user profiles from a user's dialogue history for building personalized chatbots. However, these studies only use the response generation loss to train the entire model, which makes them prone to data sparsity. Besides, they overemphasize the quality of the final generated response while ignoring the correlations and fusion among the utterances in the user's dialogue history, leading to rough data representations and performance degradation. To tackle these problems, we propose MCP, a self-supervised learning framework for capturing better representations from users' dialogue history for personalized chatbots. Specifically, we apply contrastive sampling methods to leverage the supervised signals hidden in user dialogue history and generate pre-training samples for enhancing the model. We design three pre-training tasks based on three types of contrastive pairs from user dialogue history, namely response pairs, sequence augmentation pairs, and user pairs. We pre-train the utterance encoder and the history encoder on the contrastive objectives and use these pre-trained encoders to generate user profiles for personalized response generation. Experimental results on two real-world datasets show that our proposed model MCP significantly improves over existing methods.
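Such contrastive objectives are commonly instantiated as an InfoNCE loss over in-batch pairs; the PyTorch sketch below shows one such instantiation under that assumption, not MCP's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor, tau: float = 0.07):
    """anchor, positive: (batch, dim) encodings of paired items from the same
    user's history (e.g., a response pair); every other in-batch item serves
    as a negative. Each of the three pre-training tasks would instantiate
    this loss with a different pairing."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / tau  # (batch, batch) cosine similarities
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)  # diagonal entries are the positives
```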
The wave of pre-trained language models has continually improved the quality of machine-generated conversations; however, some generated responses still suffer from excessive repetition, sometimes repeating words from the input utterance, sometimes repeating words within the self-generated response, or both. Improperly repeated words can significantly degrade the quality of the generated text. Penalized sampling is a popular solution that reduces the sampling probability of already-present words during inference; however, it is highly vulnerable to an inappropriate static setting. Setting it too high can yield strange and unrealistic sentences, while setting it too low renders the suppression of repetition trivial. To address these shortcomings, we design a context-aware classifier to explicitly decide when to allow repetition and when to employ penalized sampling. Such a classifier can easily be integrated with existing decoding methods, reducing repetition where appropriate while preserving the diversity of the text. Experimental results demonstrate that our method can generate higher-quality and more authentic dialogues.
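A minimal PyTorch sketch of classifier-gated penalized sampling: a CTRL-style penalty is applied only when an external context-aware classifier deems repetition undesirable. The penalty value and threshold are hypothetical defaults, and the classifier itself is outside this snippet.

```python
import torch

def gated_repetition_penalty(logits: torch.Tensor,
                             generated_ids: torch.Tensor,
                             allow_repeat_prob: float,
                             penalty: float = 1.3,
                             threshold: float = 0.5) -> torch.Tensor:
    """logits: (vocab,) next-token scores; generated_ids: tokens emitted so
    far; allow_repeat_prob: the context-aware classifier's probability that
    repetition is acceptable at this step."""
    if allow_repeat_prob >= threshold:
        return logits  # classifier allows repetition: leave scores untouched
    logits = logits.clone()
    seen = torch.unique(generated_ids)
    # CTRL-style penalty on previously seen tokens, sign-aware so that both
    # positive and negative scores are pushed down.
    pos = logits[seen] > 0
    logits[seen] = torch.where(pos, logits[seen] / penalty, logits[seen] * penalty)
    return logits
```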
Customer reviews usually contain much information about one's online shopping experience. While positive reviews are beneficial to a store, negative ones largely influence consumers' decisions and may lead to a decline in sales. Therefore, it is of vital importance to reply to each negative review carefully and persuasively and to minimize its disadvantageous effect. Recent studies consider leveraging generation models to help sellers respond. However, this problem has not been explored in depth, since reviews may contain multiple aspects of issues that should be addressed accordingly and persuasively. In this work, we propose a multi-source multi-aspect attentive generation model for persuasive response generation. The proposed model appropriately obtains and utilizes various sources of information to generate more informative and persuasive responses. A multi-aspect attentive network is proposed to automatically attend to different aspects in a review and ensure that most of the issues are tackled. Extensive experiments on two real-world datasets demonstrate that our approach outperforms state-of-the-art methods, and online tests prove that our deployed system significantly enhances the efficiency with which stores handle negative reviews.
Medical dialogue generation is an important yet challenging task. Most previous works rely on the attention mechanism and large-scale pre-trained language models. However, these methods often fail to acquire pivotal information from long dialogue histories to yield accurate and informative responses, since medical entities usually scatter across multiple utterances with complex relationships between them. To mitigate this problem, we propose a medical response generation model with Pivotal Information Recalling (MedPIR), which is built on two components, i.e., a knowledge-aware dialogue graph encoder and a recall-enhanced generator. The knowledge-aware dialogue graph encoder constructs a dialogue graph by exploiting the knowledge relations between entities in the utterances and encodes it with a graph attention network. The recall-enhanced generator then strengthens the usage of this pivotal information by generating a summary of the dialogue before producing the actual response. Experimental results on two large-scale medical dialogue datasets show that MedPIR outperforms strong baselines in BLEU scores and medical entity F1 measures.
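For concreteness, a single graph-attention layer of the kind such a dialogue graph encoder could apply over medical-entity nodes might look like the PyTorch sketch below; the sizes and activation choices are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single graph-attention layer over entity nodes. x: (n, dim) node
    states; adj: (n, n) 0/1 adjacency that should include self-loops so
    every node attends to at least itself."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        self.attn = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.proj(x)
        n = h.size(0)
        # Score every node pair from the concatenation of their states.
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.attn(pairs).squeeze(-1))
        e = e.masked_fill(adj == 0, float('-inf'))  # attend to neighbors only
        return F.elu(torch.softmax(e, dim=-1) @ h)
```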
Knowledge-driven dialogue generation has recently made remarkable breakthroughs. Compared with general dialogue systems, knowledge-grounded dialogue systems can produce more informative and knowledgeable responses with pre-provided knowledge. However, in real-world applications, the dialogue system cannot be provided with corresponding knowledge in advance. To solve this problem, we design a knowledge-driven dialogue system named DRKQG (\emph{Dynamically Retrieving Knowledge via Query Generation for informative dialogue responses}). Specifically, the system can be divided into two modules: a query generation module and a dialogue generation module. First, a time-aware mechanism is utilized to capture context information, and queries can be generated for knowledge retrieval. Then, we integrate a copy mechanism and transformers, which allows the response generation module to produce responses derived from the context and the retrieved knowledge. Experimental results on LIC2022, the Language and Intelligence Technology Competition, show that our module outperforms the baseline model by a large margin on automatic evaluation metrics, while human evaluation by the Baidu linguistics team shows that our system achieves impressive results in factual correctness and knowledgeability.
Recently, discrete latent variable models have received a surge of interest in both Natural Language Processing (NLP) and Computer Vision (CV), owing to their performance being comparable to their continuous counterparts in representation learning while being more interpretable in their predictions. In this paper, we develop a topic-informed discrete latent variable model for semantic textual similarity, which learns a shared latent space for sentence-pair representations via vector quantization. Compared with previous models limited to local semantic contexts, our model can explore richer semantic information via topic modeling. We further boost the performance of semantic similarity by injecting the quantized representation into a transformer-based language model with a well-designed semantic-driven attention mechanism. We demonstrate, through extensive experiments across various English language datasets, that our model is able to surpass several strong neural baselines in semantic textual similarity tasks.
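A compact PyTorch sketch of the vector-quantization step, assuming VQ-VAE-style codebook and commitment losses with a straight-through estimator; the codebook size and loss weight are illustrative defaults, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Snap encoder outputs to their nearest codebook entry, giving a shared
    discrete latent space for sentence-pair representations."""

    def __init__(self, n_codes: int = 512, dim: int = 256, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, dim)
        self.beta = beta  # commitment loss weight

    def forward(self, z: torch.Tensor):
        # z: (batch, dim). Find the nearest code for each vector.
        dists = torch.cdist(z, self.codebook.weight)  # (batch, n_codes)
        codes = dists.argmin(dim=-1)
        q = self.codebook(codes)
        # Codebook loss pulls codes toward encodings; commitment loss keeps
        # the encoder near its assigned codes.
        loss = F.mse_loss(q, z.detach()) + self.beta * F.mse_loss(z, q.detach())
        q = z + (q - z).detach()  # straight-through estimator
        return q, codes, loss
```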