Task-oriented dialogue systems (TDSs) are assessed mainly in offline settings or through human evaluation. The evaluation is often limited to single turns or is very time-intensive. As an alternative, user simulators that mimic user behaviour allow us to consider a broad set of user goals and to generate human-like conversations for simulated evaluation. Employing existing user simulators to evaluate TDSs is challenging, because user simulators are primarily designed to optimize dialogue policies for TDSs and have limited evaluation capabilities. Moreover, the evaluation of user simulators is itself an open challenge. In this work, we propose a metaphorical user simulator for end-to-end TDS evaluation, where a simulator is defined as metaphorical if it simulates the user's analogical thinking during interactions with the system. We also propose a tester-based evaluation framework to generate variants, i.e., dialogue systems with different capabilities. Our user simulator constructs a metaphorical user model that assists the simulator in reasoning by referring to prior knowledge when encountering new items. We estimate the quality of a simulator by examining the simulated interactions between the simulator and the variants. Our experiments are conducted on three TDS datasets. Compared with an agenda-based simulator and a seq2seq model, the metaphorical user simulator shows better consistency with manual evaluation on all three datasets. Our tester framework demonstrates efficiency as well as better generalizability and scalability, since it is applicable to conversations in multiple domains and to multiple tasks, such as conversational recommendation and e-commerce dialogues.
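As a rough illustration of the tester-based framework (the names, the success heuristic, and the goal format below are our own assumptions, not the paper's implementation), system variants with different capabilities can be ranked by mean simulated success, and a simulator is considered reliable when this ranking agrees with manual evaluation:

```python
import random
from dataclasses import dataclass
from typing import List

@dataclass
class Variant:
    """A dialogue-system variant with a hypothetical capability level in [0, 1]."""
    name: str
    capability: float

def simulate_dialogue(variant: Variant, user_goal: dict, max_turns: int = 10) -> float:
    """Toy stand-in for one simulator-system interaction, returning a success score.
    A real metaphorical simulator would reason over prior knowledge about items;
    here success is simply a noisy function of the variant's capability."""
    score = 0.0
    for _ in range(max_turns):
        if random.random() < variant.capability:
            score += 1.0 / max_turns
    return score

def rank_variants(variants: List[Variant], goals: List[dict]) -> List[str]:
    """Rank variants by mean simulated success over a set of user goals."""
    means = {v.name: sum(simulate_dialogue(v, g) for g in goals) / len(goals)
             for v in variants}
    return sorted(means, key=means.get, reverse=True)

if __name__ == "__main__":
    random.seed(0)
    systems = [Variant("full-system", 0.9), Variant("no-NLU", 0.6), Variant("random-policy", 0.3)]
    goals = [{"domain": "restaurant", "constraints": {"area": "centre"}}] * 50
    # A simulator is judged useful if this ranking agrees with manual evaluation.
    print(rank_variants(systems, goals))
```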
Conversational recommender systems (CRS) have become an emerging research topic, aiming to make recommendations through interactive conversations, and typically consist of a generation module and a recommendation module. Prior work on CRS tends to incorporate external and domain-specific knowledge, such as item reviews, to improve performance. However, collecting and annotating such external domain-specific information requires considerable human effort and hurts generality, while too much extra knowledge makes it difficult to balance the different sources. We therefore propose to fully discover and extract the internal knowledge contained in the context. We capture both entity-level and context-level representations to jointly model user preferences for recommendation, where a time-aware attention is designed to emphasize recently appearing items in the entity-level representations. We further use a pre-trained BART to initialize the generation module, in order to alleviate data scarcity and enhance context modelling. In addition to experiments on the popular ReDial dataset, we also include a multi-domain dataset (OpenDialKG) to show the effectiveness of our model. Experiments on both datasets show that our model achieves better performance on most evaluation metrics with less external knowledge, and generalizes well to other domains. Additional analyses of the recommendation and generation tasks demonstrate the effectiveness of our model in different scenarios.
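As a rough illustration (the formulation below is our own simplification, not the paper's exact attention), recency can be folded into attention by penalizing entities mentioned in earlier turns:

```python
import torch
import torch.nn.functional as F

def time_aware_attention(entity_emb: torch.Tensor,
                         turn_ids: torch.Tensor,
                         query: torch.Tensor,
                         decay: float = 0.1) -> torch.Tensor:
    """Attend over entities mentioned in the dialogue, boosting recent ones.

    entity_emb: (n, d) embeddings of mentioned entities
    turn_ids:   (n,) turn indices at which each entity appeared (larger = more recent)
    query:      (d,) user-preference query vector
    Returns a (d,) entity-level preference representation.
    """
    relevance = entity_emb @ query                                  # content relevance
    recency = decay * (turn_ids.float() - turn_ids.float().max())   # <= 0; 0 for the latest turn
    weights = F.softmax(relevance + recency, dim=0)                 # earlier turns are penalized
    return weights @ entity_emb

# Example: three entities, the third mentioned most recently.
prefs = time_aware_attention(torch.randn(3, 8), torch.tensor([1, 4, 9]), torch.randn(8))
```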
User simulators (USs) are commonly used to train task-oriented dialogue systems (DSs) via reinforcement learning. The interactions often take place at the semantic level for efficiency, but there remains a gap between semantic actions and natural language, which causes a mismatch between training and deployment environments. Incorporating a natural language generation (NLG) module with a US during training can partly address this problem. However, since the US's policy and the NLG module are optimized separately, the simulated user utterances may not be natural enough in a given context. In this work, we propose a generative transformer-based user simulator (GenTUS). GenTUS consists of an encoder-decoder structure, which means it can jointly optimize the user policy and natural language generation. GenTUS generates both semantic actions and natural language utterances, preserving interpretability and enhancing language variation. In addition, by representing the input and output as word sequences and by using a large pre-trained language model, we achieve generality in the feature representation. We evaluate GenTUS with automatic metrics and human evaluation. Our results show that GenTUS generates more natural language and can be transferred to an unseen ontology in a zero-shot fashion. Furthermore, its behaviour can be further shaped with reinforcement learning, opening the door to training specialized user simulators.
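A small sketch of the kind of word-sequence interface such a simulator relies on: one encoder-decoder model reads the user goal, the last system action, and the history, and emits the user's semantic action followed by the utterance. The tag names and field layout here are illustrative assumptions, not the exact GenTUS format.

```python
def build_input(user_goal: dict, system_action: list, history: list) -> str:
    """Serialize the structured simulator state into a flat word sequence."""
    goal_str = " ; ".join(f"{d} {s} {v}" for d, s, v in user_goal["constraints"])
    act_str = " ; ".join(f"{i} {d} {s} {v}" for i, d, s, v in system_action)
    return f"<goal> {goal_str} <system_action> {act_str} <history> {' '.join(history)}"

def parse_output(decoded: str):
    """Split a decoded sequence into the semantic action and the utterance."""
    act_part, utterance = decoded.split("<utterance>", maxsplit=1)
    action = [tuple(a.split()) for a in act_part.replace("<user_action>", "").split(";") if a.strip()]
    return action, utterance.strip()

source = build_input(
    user_goal={"constraints": [("restaurant", "area", "centre"), ("restaurant", "food", "thai")]},
    system_action=[("request", "restaurant", "area", "?")],
    history=["Hello, how can I help you?"],
)
# `source` would be fed to a fine-tuned pre-trained encoder-decoder LM (e.g. BART),
# trained to emit something like the following joint output:
action, utterance = parse_output(
    "<user_action> inform restaurant area centre <utterance> Somewhere central, please."
)
```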
Existing studies in conversational AI mostly treat task-oriented dialogue (TOD) and question answering (QA) as separate tasks. Towards the goal of building a conversational agent that can complete user tasks while also supporting information seeking, it is important to build a system that can handle both TOD and QA with access to various external knowledge sources. In this work, we propose a new task, Open-Book TOD (OB-TOD), which combines TOD with QA and extends the external knowledge sources to include both explicit knowledge sources (e.g., the Web) and implicit knowledge sources (e.g., pre-trained language models). We create a new dataset, OB-MultiWOZ, in which TOD sessions are enriched with QA-like information-seeking experiences grounded on external knowledge. We propose a unified model, OPERA (open-book end-to-end task-oriented dialogue), which can appropriately access explicit and implicit external knowledge to tackle the defined task. Experimental results demonstrate OPERA's superior performance compared with closed-book baselines and illustrate the value of both types of knowledge.
Task-oriented dialogue systems (TODs) continue to rise in popularity as various industries find ways to effectively harness their capabilities, saving both time and money. However, even state-of-the-art TODs have not yet reached their full potential. TODs are typically designed primarily to complete the task at hand, so measures of task resolution take priority. Other conversational quality attributes that may point to the success, or otherwise, of the dialogue can be overlooked. This can lead to interactions between humans and dialogue systems that leave the user dissatisfied or frustrated. This paper examines the literature on evaluation frameworks for dialogue systems and on the role of conversational quality attributes in dialogue systems, looking at whether, how, and where they are used, and how they correlate with the performance of the dialogue system.
Empathy is a vital factor that contributes to mutual understanding, and joint problem-solving. In recent years, a growing number of studies have recognized the benefits of empathy and started to incorporate empathy in conversational systems. We refer to this topic as empathetic conversational systems. To identify the critical gaps and future opportunities in this topic, this paper examines this rapidly growing field using five review dimensions: (i) conceptual empathy models and frameworks, (ii) adopted empathy-related concepts, (iii) datasets and algorithmic techniques developed, (iv) evaluation strategies, and (v) state-of-the-art approaches. The findings show that most studies have centered on the use of the EMPATHETICDIALOGUES dataset, and the text-based modality dominates research in this field. Studies mainly focused on extracting features from the messages of the users and the conversational systems, with minimal emphasis on user modeling and profiling. Notably, studies that have incorporated emotion causes, external knowledge, and affect matching in the response generation models, have obtained significantly better results. For implementation in diverse real-world settings, we recommend that future studies should address key gaps in areas of detecting and authenticating emotions at the entity level, handling multimodal inputs, displaying more nuanced empathetic behaviors, and encompassing additional dialogue system features.
In spoken dialogue systems, we aim to deploy artificial intelligence to build automated dialogue agents that can converse with humans. Dialogue systems are increasingly designed to go beyond merely imitating conversation and to improve from such interactions over time. In this survey, we give a broad overview of the methods developed to build dialogue systems over the years. Different use cases for dialogue systems, ranging from task-based systems to open-domain chatbots, motivate and require specific systems. Starting from simple rule-based systems, research has progressed towards increasingly complex architectures trained on massive corpora of data, such as deep-learning systems. Motivated by the intuition of human-like conversation, progress has been made towards incorporating emotion into the natural language generator, for instance through reinforcement learning. While we observe a trend of marginal improvements on some metrics, we find that there is limited justification for the metrics themselves and that evaluation practices are not uniform. To conclude, we flag these issues and highlight possible research directions.
Compared with the CrossWOZ (Chinese) and MultiWOZ (English) datasets, which contain coarse-grained information, there is no dataset that properly handles fine-grained and hierarchical slot information. In this paper, we release a Cantonese knowledge-driven dialogue dataset for restaurants (KddRES) in Hong Kong, which grounds the information of multi-turn conversations in specific restaurants. Our corpus contains 0.8k conversations, collected from 10 restaurants of various styles in different regions. In addition, we design fine-grained slots and intents to better capture semantic information. Benchmark experiments and data statistics show the diversity and rich annotations of our dataset. We believe the publication of KddRES is a necessary complement to current dialogue datasets and is particularly suitable and valuable for small and medium-sized enterprises (SMEs), for example for building a customized dialogue system for each restaurant. The corpus and benchmark models are publicly available.
Conversational recommender systems (CRS) that interact with users in natural language utilize recommendation dialogues collected with the help of paired humans, where one plays the role of a seeker and the other that of a recommender. These recommendation dialogues include items and entities that reveal the seeker's preferences in natural language. However, in order to precisely model the seeker's preferences and respond consistently, CRS mainly rely on the explicitly annotated items and entities that appear in the dialogues, often exploiting domain knowledge. In this work, we investigate the INSPIRED dataset, which contains dialogues for sociable conversational recommendation, where items and entities were explicitly annotated using automatic keyword- or pattern-matching techniques. In doing so, we found a substantial number of cases in which items and entities were either wrongly annotated or missing annotations altogether. It remains unclear, however, to what extent the quality of such annotations matters, and what the relative impact of poor versus improved annotations is on the overall effectiveness of a CRS in terms of the consistency and quality of its responses. In this regard, we first manually fixed the annotations and removed the noise in the INSPIRED dataset. Second, we evaluated the performance of several benchmark CRS using both versions of the dataset. Our analyses show that with the improved version of the dataset, i.e., INSPIRED2, various benchmark CRS perform better and the dialogues are richer in knowledge concepts compared with the original version. We publicly release the improved dataset (INSPIRED2) at https://github.com/ahtsham58/inspired2.
We present AARGH, an end-to-end task-oriented dialogue system that combines retrieval and generative approaches in a single model, aiming to improve dialogue management and the lexical diversity of outputs. The model features a new response selection method based on an action-aware training objective and a simplified single-encoder retrieval architecture, which allows us to build an end-to-end retrieval-augmented generation model in which retrieval and generation share most of their parameters. On the MultiWOZ dataset, we show that our approach produces more diverse outputs while maintaining or improving state tracking and context-to-response generation performance compared with state-of-the-art baselines.
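Below is a toy sketch of the core idea, a retriever and a generator sharing one encoder; the layer choices, pooling, and scoring head are our assumptions, not the AARGH architecture:

```python
import torch
import torch.nn as nn

class SharedEncoderRAG(nn.Module):
    """Minimal sketch: one encoder is shared between response retrieval and
    response generation, so most parameters are shared."""

    def __init__(self, vocab: int = 1000, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)   # shared encoder
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def encode(self, ids: torch.Tensor) -> torch.Tensor:
        _, h = self.encoder(self.embed(ids))
        return h[-1]                                               # (batch, hidden) pooled state

    def retrieve(self, context: torch.Tensor, candidates: torch.Tensor) -> int:
        """Pick the candidate (N, L) whose encoding best matches the (1, L) context."""
        ctx = self.encode(context)                                 # (1, hidden)
        cand = self.encode(candidates)                             # (N, hidden)
        return int((cand @ ctx.squeeze(0)).argmax())

    def generate_logits(self, context: torch.Tensor, retrieved: torch.Tensor) -> torch.Tensor:
        """Condition generation on the context concatenated with the retrieved response."""
        enc_state = self.encode(torch.cat([context, retrieved], dim=1)).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(retrieved), enc_state)
        return self.out(dec_out)                                   # (1, L, vocab) next-token logits

model = SharedEncoderRAG()
context = torch.randint(0, 1000, (1, 12))
candidates = torch.randint(0, 1000, (5, 8))
best = model.retrieve(context, candidates)
logits = model.generate_logits(context, candidates[best : best + 1])
```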
Conversational recommender systems (CRSs) often utilize external knowledge graphs (KGs) to introduce rich semantic information and recommend relevant items through natural language dialogues. However, original KGs employed in existing CRSs are often incomplete and sparse, which limits the reasoning capability in recommendation. Moreover, only few of existing studies exploit the dialogue context to dynamically refine knowledge from KGs for better recommendation. To address the above issues, we propose the Variational Reasoning over Incomplete KGs Conversational Recommender (VRICR). Our key idea is to incorporate the large dialogue corpus naturally accompanied with CRSs to enhance the incomplete KGs; and perform dynamic knowledge reasoning conditioned on the dialogue context. Specifically, we denote the dialogue-specific subgraphs of KGs as latent variables with categorical priors for adaptive knowledge graphs refactor. We propose a variational Bayesian method to approximate posterior distributions over dialogue-specific subgraphs, which not only leverages the dialogue corpus for restructuring missing entity relations but also dynamically selects knowledge based on the dialogue context. Finally, we infuse the dialogue-specific subgraphs to decode the recommendation and responses. We conduct experiments on two benchmark CRSs datasets. Experimental results confirm the effectiveness of our proposed method.
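For readers unfamiliar with the variational treatment sketched above, the bound below (our notation, not taken from the paper) is the standard evidence lower bound such a method optimizes: the dialogue-specific subgraph G is a latent variable with a prior conditioned on the dialogue context c, an approximate posterior q(G | c, y) is learned from the dialogue corpus, and the recommendations and responses y are decoded from the inferred subgraph.

```latex
% Illustrative evidence lower bound for variational reasoning over a
% dialogue-specific subgraph G (our notation, not the paper's):
%   c - dialogue context,  y - target recommendation/response,
%   p(G | c)    - categorical prior over subgraphs,
%   q(G | c, y) - approximate posterior learned from the dialogue corpus.
\[
\log p(y \mid c) \;\ge\;
\mathbb{E}_{q(G \mid c, y)}\bigl[\log p(y \mid c, G)\bigr]
\;-\;
\mathrm{KL}\bigl(q(G \mid c, y) \,\|\, p(G \mid c)\bigr)
\]
```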
End-to-end task bots are typically learned over a static and usually limited-size corpus. However, when deployed in dynamic, changing, and open environments to interact with users, task bots tend to fail when confronted with data that deviate from the training corpus, i.e., out-of-distribution samples. In this paper, we study the problem of automatically adapting task bots to changing environments by learning from human-bot interactions with minimum or zero human annotations. We propose SL-AGENT, a novel self-learning framework for building end-to-end task bots. SL-AGENT consists of a dialog model and a pre-trained reward model to predict the quality of an agent response. It enables task bots to automatically adapt to changing environments by learning from the unlabeled human-bot dialog logs accumulated after deployment via reinforcement learning with the incorporated reward model. Experimental results on four well-studied dialog tasks show the effectiveness of SL-AGENT to automatically adapt to changing environments, using both automatic and human evaluations. We will release code and data for further research.
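The following is a minimal sketch, under assumed interfaces (the `dialog_model.log_prob` method and a callable `reward_model` are our own placeholders, not the SL-AGENT code), of how a pre-trained reward model can turn unlabeled human-bot logs into a reinforcement-learning update:

```python
import torch

def self_learning_step(dialog_model, reward_model, optimizer, logged_batch):
    """One REINFORCE-style update from a batch of deployed human-bot dialog logs."""
    contexts, responses = logged_batch                       # token id tensors from the logs
    with torch.no_grad():
        rewards = reward_model(contexts, responses)          # (B,) predicted response quality
        baseline = rewards.mean()                            # simple variance-reduction baseline
    log_probs = dialog_model.log_prob(responses, contexts)   # (B,) log p(response | context)
    loss = -((rewards - baseline) * log_probs).mean()        # reinforce highly rated responses
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```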
Even though machine learning has become the major scene in dialogue research community, the real breakthrough has been blocked by the scale of data available. To address this fundamental obstacle, we introduce the Multi-Domain Wizard-of-Oz dataset (MultiWOZ), a fully-labeled collection of human-human written conversations spanning over multiple domains and topics. At a size of 10k dialogues, it is at least one order of magnitude larger than all previous annotated task-oriented corpora. The contribution of this work apart from the open-sourced dataset labelled with dialogue belief states and dialogue actions is two-fold: firstly, a detailed description of the data collection procedure along with a summary of data structure and analysis is provided. The proposed data-collection pipeline is entirely based on crowd-sourcing without the need of hiring professional annotators; secondly, a set of benchmark results of belief tracking, dialogue act and response generation is reported, which shows the usability of the data and sets a baseline for future studies.
Diverse data formats and ontologies of task-oriented dialogue (TOD) datasets hinder us from developing general dialogue models that perform well on many datasets and studying knowledge transfer between datasets. To address this issue, we present ConvLab-3, a flexible dialogue system toolkit based on a unified TOD data format. In ConvLab-3, different datasets are transformed into one unified format and loaded by models in the same way. As a result, the cost of adapting a new model or dataset is significantly reduced. Compared to the previous releases of ConvLab (Lee et al., 2019b; Zhu et al., 2020b), ConvLab-3 allows developing dialogue systems with much more datasets and enhances the utility of the reinforcement learning (RL) toolkit for dialogue policies. To showcase the use of ConvLab-3 and inspire future work, we present a comprehensive study with various settings. We show the benefit of pre-training on other datasets for few-shot fine-tuning and RL, and encourage evaluating policy with diverse user simulators.
This paper aims to provide a radical rundown on Conversation Search (ConvSearch), an approach to enhance information retrieval in which users engage in a dialogue for their information-seeking tasks. In this survey, we predominantly focus on the human-interactive characteristics of ConvSearch systems, highlighting the operations of the action modules, namely the retrieval system, question answering, and the recommender system. We label various ConvSearch research problems in knowledge bases, natural language processing, and dialogue management systems along with the action modules. We further categorize the ConvSearch framework, with its application directed toward the biomedical and healthcare fields for the utilization of clinical social technology. Finally, we conclude by discussing the challenges and issues of ConvSearch, particularly in biomedicine. Our main aim is to provide an integrated and unified vision of the ConvSearch components from different fields, which benefits the information-seeking process in healthcare systems.
Conversational recommender systems (CRS) aim to employ natural language conversations to suggest suitable products to users. Understanding user preferences for prospective items and learning efficient item representations are crucial for CRS. Despite various attempts, earlier studies mostly learned item representations based on individual conversations, ignoring item popularity embodied among all others. Besides, they still need support in efficiently capturing user preferences since the information reflected in a single conversation is limited. Inspired by collaborative filtering, we propose a collaborative augmentation (COLA) method to simultaneously improve both item representation learning and user preference modeling to address these issues. We construct an interactive user-item graph from all conversations, which augments item representations with user-aware information, i.e., item popularity. To improve user preference modeling, we retrieve similar conversations from the training corpus, where the involved items and attributes that reflect the user's potential interests are used to augment the user representation through gate control. Extensive experiments on two benchmark datasets demonstrate the effectiveness of our method. Our code and data are available at https://github.com/DongdingLin/COLA.
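A minimal sketch of the gate-control fusion step described above, with dimensions and module layout as our own assumptions rather than the COLA implementation: a learned gate decides how much of the signal aggregated from similar retrieved conversations is mixed into the user representation of the current conversation.

```python
import torch
import torch.nn as nn

class GatedPreferenceAugment(nn.Module):
    """Gate-controlled fusion of the current-conversation user representation
    with a representation aggregated from retrieved similar conversations."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, current_user: torch.Tensor, retrieved_user: torch.Tensor) -> torch.Tensor:
        # current_user, retrieved_user: (batch, dim)
        g = torch.sigmoid(self.gate(torch.cat([current_user, retrieved_user], dim=-1)))
        return g * current_user + (1.0 - g) * retrieved_user

fuse = GatedPreferenceAugment()
user_repr = fuse(torch.randn(4, 128), torch.randn(4, 128))
```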
Any organization needs to improve its products, services, and processes. In this context, engaging with customers and understanding their journey is essential. Organizations have leveraged various techniques and technologies to support customer engagement, from call centres to chatbots and virtual agents. Recently, these systems have used Machine Learning (ML) and Natural Language Processing (NLP) to analyze large volumes of customer feedback and engagement data. The goal is to understand customers in context and provide meaningful answers across various channels. Despite multiple advances in Conversational Artificial Intelligence (AI) and Recommender Systems (RS), it is still challenging to understand the intent behind customer questions during the customer journey. To address this challenge, in this paper, we study and analyze recent work on Conversational Recommender Systems (CRS) in general and, more specifically, on chatbot-based CRS. We introduce a pipeline to contextualize the input utterances in conversations. We then take the next step towards leveraging reverse feature engineering to link the contextualized input and the learning model to support intent recognition. Since performance evaluation is carried out with different ML models, we use transformer-based models to evaluate the proposed approach on a labelled dialogue dataset (MSDialogue) of question-answering interactions between information seekers and answer providers.
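As a hedged illustration of the final intent-recognition step, the snippet below classifies a contextualized utterance with a transformer encoder via the Hugging Face transformers library; the checkpoint and label names are placeholders (the dataset's own intent tags would replace them in practice), and the classification head would first need fine-tuning on the labelled dialogues.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative label subset; not the dataset's official tag set.
LABELS = ["original_question", "further_details", "potential_answer", "follow_up"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)  # the classification head is randomly initialized until fine-tuned

def predict_intent(context: str, utterance: str) -> str:
    """Pair the conversation context with the current utterance and classify it."""
    inputs = tokenizer(context, utterance, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(predict_intent("How do I reset my password?", "Did you try the account settings page?"))
```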
Many efforts have been made to construct dialog systems for different types of conversations, such as task-oriented dialog (TOD) and open-domain dialog (ODD). To better mimic human-level conversations that usually fuse various dialog modes, it is essential to build a system that can effectively handle both TOD and ODD and access different knowledge sources. To address the lack of available data for the fused task, we propose a framework for automatically generating dialogues that combine knowledge-grounded ODDs and TODs in various settings. Additionally, we introduce a unified model PivotBot that is capable of appropriately adopting TOD and ODD modes and accessing different knowledge sources in order to effectively tackle the fused task. Evaluation results demonstrate the superior ability of the proposed model to switch seamlessly between TOD and ODD tasks.
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
Conversational recommender systems (CRS) aim to proactively elicit user preferences and recommend high-quality items through natural language conversations. Typically, a CRS consists of a recommendation module to predict the items preferred by the user and a conversation module to generate appropriate responses. To develop an effective CRS, it is essential to seamlessly integrate the two modules. Existing works either design semantic alignment strategies or share knowledge resources and representations between the two modules. However, these approaches still rely on different architectures or techniques to develop the two modules, making effective module integration difficult. To address this problem, we propose a unified CRS model named UniCRS based on knowledge-enhanced prompt learning. Our approach unifies the recommendation and conversation subtasks into the prompt learning paradigm and utilizes knowledge-enhanced prompts based on a fixed pre-trained language model (PLM) to fulfil both subtasks in a unified way. In the prompt design, we include fused knowledge representations, task-specific soft tokens, and the dialogue context, which provide sufficient contextual information to adapt the PLM to the CRS task. In addition, for the recommendation subtask, we also incorporate the generated response template as an important part of the prompt, to enhance the information interaction between the two subtasks. Extensive experiments on two public CRS datasets demonstrate the effectiveness of our approach.
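A minimal sketch, under our own assumptions about shapes and ordering (not the UniCRS layout), of how such a knowledge-enhanced prompt could be assembled in embedding space before being prepended to the frozen PLM's input:

```python
from typing import Optional
import torch

def build_unified_prompt(knowledge_vecs: torch.Tensor,
                         soft_tokens: torch.Tensor,
                         context_vecs: torch.Tensor,
                         template_vecs: Optional[torch.Tensor] = None) -> torch.Tensor:
    """Concatenate prompt components in embedding space; the result is prepended
    to the frozen PLM's input embeddings."""
    parts = [knowledge_vecs, soft_tokens, context_vecs]
    if template_vecs is not None:          # recommendation subtask only
        parts.append(template_vecs)
    return torch.cat(parts, dim=0)         # (total_prompt_len, hidden)

hidden = 768
prompt = build_unified_prompt(
    knowledge_vecs=torch.randn(16, hidden),   # fused KG-derived representations
    soft_tokens=torch.randn(8, hidden),       # learnable, task-specific tokens
    context_vecs=torch.randn(64, hidden),     # encoded dialogue context
    template_vecs=torch.randn(20, hidden),    # generated response template
)
```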