In real-world dialogue systems, the generated responses must satisfy several interlocking constraints: being informative, truthful, and easy to control. The two predominant paradigms in language generation, neural language modeling and rule-based generation, both struggle to satisfy these constraints. Even the best neural models are prone to hallucination and omission of information, while existing formalisms for rule-based generation make it difficult to write grammars that are both flexible and fluent. We describe a hybrid architecture for dialogue response generation that combines the strengths of both approaches. The architecture has two components. The first is a rule-based content selection model defined using a new formal framework called dataflow transduction, which uses declarative rules to transform a dialogue agent's computation (represented as a dataflow graph) into a context-free grammar representing the space of contextually acceptable responses. The second is a constrained decoding procedure that uses these grammars to restrict the output of a neural language model, which selects fluent utterances. The resulting system outperforms both rule-based and learned approaches in human evaluations of fluency, relevance, and truthfulness.
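The two-stage design above (a grammar produced by dataflow transduction, then grammar-constrained decoding with a neural LM) can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation: the grammar is flattened into a prefix trie of licensed responses, and `toy_lm_score` is a hypothetical stand-in for querying a real language model.

```python
from math import log

def build_trie(responses):
    """Build a prefix trie over the set of grammar-licensed responses."""
    trie = {}
    for tokens in responses:
        node = trie
        for tok in tokens:
            node = node.setdefault(tok, {})
        node["<eos>"] = {}
    return trie

def toy_lm_score(prefix, token):
    # Hypothetical stand-in for a neural LM's log P(token | prefix);
    # here it simply prefers shorter tokens.
    return -log(len(token) + 1)

def constrained_decode(trie):
    """At each step, choose the best-scoring token among those the grammar allows."""
    prefix, node = [], trie
    while True:
        allowed = list(node.keys())
        best = max(allowed, key=lambda t: toy_lm_score(prefix, t))
        if best == "<eos>":
            return prefix
        prefix.append(best)
        node = node[best]

# A toy "grammar" licensing two paraphrases of the same content.
grammar_responses = [
    ["your", "meeting", "is", "at", "3", "pm"],
    ["the", "meeting", "starts", "at", "3", "pm"],
]
print(" ".join(constrained_decode(build_trie(grammar_responses))))
```

In a real system the trie would be replaced by an incremental recognizer for the induced context-free grammar, and the toy scores by the language model's next-token log-probabilities.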
We explore the use of large pretrained language models as few-shot semantic parsers. The goal in semantic parsing is to generate a structured meaning representation given a natural language input. However, language models are trained to generate natural language. To bridge the gap, we use language models to paraphrase inputs into a controlled, English-like sublanguage that can be automatically mapped to the target meaning representation. Our results show that, with only a small amount of data and very little code for converting into English-like representations, our blueprint for rapidly bootstrapping semantic parsers yields surprisingly effective performance on multiple community tasks, greatly exceeding baseline methods also trained on the same limited data.
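As a minimal sketch of this bridging idea, assuming hypothetical canonical patterns and a stubbed-out prompted LM (`paraphrase_with_lm` below is not a real model call): the model rewrites the user request into a controlled English-like form, and deterministic rules map that form onto a meaning representation.

```python
import re

# Hypothetical canonical forms and their mapping to a target meaning representation.
CANONICAL_TO_MR = [
    (re.compile(r'^create an event called "(.+)" on (.+)$'),
     lambda m: f'CreateEvent(name="{m.group(1)}", date="{m.group(2)}")'),
    (re.compile(r'^list events on (.+)$'),
     lambda m: f'FindEvents(date="{m.group(1)}")'),
]

def paraphrase_with_lm(utterance: str) -> str:
    # Stand-in for a few-shot prompted LM that outputs a canonical utterance.
    return 'create an event called "standup" on monday'

def parse(utterance: str) -> str:
    canonical = paraphrase_with_lm(utterance)
    for pattern, build in CANONICAL_TO_MR:
        match = pattern.match(canonical)
        if match:
            return build(match)
    raise ValueError(f"no canonical pattern matched: {canonical!r}")

print(parse("put a standup on my calendar for monday"))
```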
With decision making becoming increasingly data-centric, seamless access to databases is critical. There has been extensive research on building effective text-to-SQL (Text2SQL) models for accessing data in databases. Natural language is one of the best interfaces for bridging the gap between data and results, providing effective database access especially for non-technical users. It opens the door to, and generates great interest among, users regardless of whether they are proficient in technical skills or query languages. Even though many deep-learning-based algorithms have been proposed and studied, solving data querying problems with natural language in real-world scenarios remains very challenging. One reason is that different studies use different datasets, each with its own limitations and assumptions. At the same time, we lack a thorough understanding of these proposed models and their limitations with respect to the specific datasets they are trained on. In this paper, we attempt to present a holistic overview of 24 neural network models studied over the past few years, including architectures involving convolutional neural networks, recurrent neural networks, pointer networks, reinforcement learning, generative models, and more. We also give an overview of the 11 datasets that are widely used to train models for Text2SQL techniques, and discuss future application possibilities of Text2SQL techniques for seamless data querying.
Recent work has shown that a schema-guided approach to dialogue management can be effective for creating robust, customizable virtual agents capable of acting as friendly peers or task assistants. However, successful applications of these methods in open-ended, mixed-initiative domains remain elusive, especially in medical domains such as virtual standardized patients, where such complex interactions are common and require more extensive and flexible dialogue management capabilities than previous systems provide. In this paper, we describe a general-purpose schema-guided dialogue management framework used to develop SOPHIE, a virtual standardized cancer patient that allows doctors to conveniently practice interactions with patients. We conducted a crowdsourced evaluation of conversations between medical students and SOPHIE. Our agent was judged to produce responses that are natural, emotionally appropriate, and consistent with her role as a cancer patient. Furthermore, it significantly outperformed an end-to-end neural model fine-tuned on a corpus of human standardized patient dialogues, demonstrating the advantages of the schema-guided approach.
Recent advances in deep neural language models, combined with the capacity of large-scale datasets, have accelerated the development of natural language generation systems that produce fluent and coherent text (with varying degrees of success) across a variety of tasks and application contexts. However, controlling the output of these models to meet desired user needs remains an open challenge. This is crucial not only for customizing the content and style of the generated language, but also for its safe and reliable deployment in the real world. We present an extensive survey on the emerging topic of constrained neural language generation, in which we formally define and categorize natural language generation problems by distinguishing between conditions and constraints (the latter being testable conditions on the output text rather than the input), present constrained text generation tasks, and review existing methods and evaluation metrics for constrained text generation. Our aim is to highlight recent progress and trends in this emerging field in order to point out the most promising directions and the limitations that stand in the way of advancing the state of the art in constrained neural language generation research.
We present a chatbot implementing a novel dialogue management approach based on logical inference. Instead of framing conversation as a series of response generation tasks, we model conversation as a collaborative inference process in which speakers share information to synthesize new knowledge in real time. Our chatbot pipeline accomplishes this modeling in three broad stages. The first stage translates user utterances into a symbolic predicate representation. The second stage then combines this structured representation with a larger knowledge base to synthesize new predicates using efficient graph matching. In the third and final stage, our bot selects a small subset of predicates and translates them into an English response. This approach lends itself to understanding the latent semantics of user inputs, taking initiative flexibly, and producing responses that are coherent with the dialogue context.
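A minimal sketch of this three-stage pipeline, with toy keyword rules, a tiny rule base, and a template standing in for the bot's actual parser, knowledge base, and realizer (all names here are hypothetical):

```python
# Stage 1: map the user utterance to symbolic predicates (toy keyword rules).
def utterance_to_predicates(utterance):
    preds = set()
    if "dog" in utterance:
        preds.add(("has_pet", "user", "dog"))
    return preds

# Stage 2: combine with a knowledge base via simple pattern matching to infer
# new predicates (a stand-in for efficient graph matching).
KB_RULES = [
    # if X has_pet dog, infer X likes animals
    (("has_pet", None, "dog"), lambda subj, obj: ("likes", subj, "animals")),
]

def infer(predicates):
    inferred = set()
    for (rel, subj, obj) in predicates:
        for (pat_rel, _, pat_obj), build in KB_RULES:
            if rel == pat_rel and obj == pat_obj:
                inferred.add(build(subj, obj))
    return inferred

# Stage 3: select a predicate and realize it as English via a template.
TEMPLATES = {"likes": "It sounds like {} might like {}!"}

def respond(utterance):
    new = infer(utterance_to_predicates(utterance))
    if not new:
        return "Tell me more."
    rel, subj, obj = sorted(new)[0]
    return TEMPLATES[rel].format("you" if subj == "user" else subj, obj)

print(respond("I just got a dog"))
```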
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
Many real-world applications of language models (LMs), such as code autocomplete and writing assistance, involve human-LM interaction, but the main LM benchmarks are non-interactive, where a system produces output without human intervention. To evaluate human-LM interaction, we develop a framework, Human-AI Language-based Interaction Evaluation (H-LINE), that expands non-interactive evaluation along three dimensions, capturing (i) the interactive process, not only the final output; (ii) the first-person subjective experience, not just a third-party assessment; and (iii) notions of preference beyond quality. We then design five tasks ranging from goal-oriented to open-ended to capture different forms of interaction. On four state-of-the-art LMs (three variants of OpenAI's GPT-3 and AI21's J1-Jumbo), we find that non-interactive performance does not always result in better human-LM interaction and that first-person and third-party metrics can diverge, suggesting the importance of examining the nuances of human-LM interaction.
User simulators (USs) are commonly used to train task-oriented dialogue systems (DSs) via reinforcement learning. The interactions usually take place at the semantic level for efficiency, but there is still a gap from semantic actions to natural language, which causes a mismatch between training and deployment environments. Incorporating a natural language generation (NLG) module with a US during training can partly address this issue. However, since the US's policy and NLG are optimized separately, the simulated user utterances may not be natural enough in a given context. In this work, we propose a generative transformer-based user simulator (GenTUS). GenTUS consists of an encoder-decoder structure, which means it can jointly optimize the user policy and natural language generation. GenTUS generates both semantic actions and natural language utterances, preserving interpretability and enhancing language variation. In addition, by representing inputs and outputs as word sequences and by using a large pretrained language model, we achieve generalizability in feature representation. We evaluate GenTUS with automatic metrics and human evaluation. Our results show that GenTUS generates more natural language and is able to transfer to unseen ontologies in a zero-shot fashion. Furthermore, its behavior can be further shaped with reinforcement learning, opening the door to training specialized user simulators.
We present BlenderBot 3, a 175B-parameter dialogue model capable of open-domain conversation with access to the internet and a long-term memory, and trained on a large number of user-defined tasks. We release both the model weights and code, and have also deployed the model on a public web page to interact with organic users. This technical report describes how the model was built (architecture, model, and training scheme) and the details of its deployment, including its safety mechanisms. Human evaluations show it is superior to existing open-domain dialogue agents, including its predecessors (Roller et al., 2021; Komeili et al., 2022). Finally, we detail our plan for continual learning using the data collected from deployment, which will also be publicly released. The goal of this research program is thus to enable the community to study ever-improving responsible agents that learn through interaction.
We introduce BenchCLAMP, a benchmark for evaluating constrained language model parsing, which produces semantic outputs from input text by prompting or fine-tuning language models with constrained decoding. Currently, developers of pretrained language models evaluate them on classification, span extraction, and free-text generation tasks. Language parsing is neglected in language model evaluation because of the complexity of handling task-specific architectures and representations. Recent work has shown that generation from prompted or fine-tuned language models can perform well when the output is constrained to be a valid semantic representation. BenchCLAMP includes context-free grammars for six semantic parsing datasets with varied output meaning representations, as well as a constrained decoding interface that generates only outputs covered by these grammars. We provide low-, medium-, and high-resource splits for each dataset, allowing accurate comparison of various language models under different data regimes. Our benchmark supports both prompt-based learning and fine-tuning, and provides an easy-to-use toolkit for language model developers to evaluate semantic parsing.
Functionality and dialogue experience are two important factors of task-oriented dialogue systems. Conventional approaches with closed schema (e.g., conversational semantic parsing) often fail as both the functionality and dialogue experience are strongly constrained by the underlying schema. We introduce a new paradigm for task-oriented dialogue - Dialog2API - to greatly expand the functionality and provide seamless dialogue experience. The conversational model interacts with the environment by generating and executing programs triggering a set of pre-defined APIs. The model also manages the dialogue policy and interacts with the user through generating appropriate natural language responses. By allowing generating free-form programs, Dialog2API supports composite goals by combining different APIs, whereas unrestricted program revision provides natural and robust dialogue experience. To facilitate Dialog2API, the core model is provided with API documents, an execution environment and optionally some example dialogues annotated with programs. We propose an approach tailored for the Dialog2API, where the dialogue states are represented by a stack of programs, with the most recently mentioned program on top of the stack. Dialog2API can work with many application scenarios such as software automation and customer service. In this paper, we construct a dataset for AWS S3 APIs and present evaluation results of in-context learning baselines.
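A minimal sketch of the stack-of-programs dialogue state described above, with hypothetical program contents and a toy execution environment standing in for the real API environment:

```python
from dataclasses import dataclass, field

@dataclass
class Program:
    source: str              # free-form program triggering pre-defined APIs
    result: object = None    # filled in after execution

@dataclass
class DialogueState:
    stack: list = field(default_factory=list)

    def push(self, source: str) -> Program:
        prog = Program(source)
        self.stack.append(prog)
        return prog

    def revise_top(self, new_source: str):
        # Unrestricted revision of the most recently mentioned program.
        self.stack[-1].source = new_source

    def execute_top(self, env):
        prog = self.stack[-1]
        prog.result = env(prog.source)   # env maps program text to API calls
        return prog.result

# Toy environment standing in for the real API execution environment.
fake_env = lambda src: f"executed: {src}"

state = DialogueState()
state.push('s3.list_buckets()')
state.revise_top('s3.list_buckets(region="us-east-1")')
print(state.execute_top(fake_env))
```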
This work combines information about the dialogue history, encoded by pretrained models, with a meaning representation of the current system utterance to realize contextual language generation in task-oriented dialogues. We use the pretrained multi-context ConveRT model for context representation in a model trained from scratch, and use the immediately preceding user utterance for context generation in a model adapted from the pretrained GPT-2. Two experiments across multiple datasets show that contextual information encoded by pretrained models improves the performance of response generation in both automatic metrics and human evaluation. Our contextual generator enables a greater variety of responses that better fit the ongoing dialogue. Analyzing the context size shows that longer context does not automatically lead to better performance, but the immediately preceding user utterance plays an essential role in contextual generation. In addition, we propose a re-ranker for the GPT-based generation model. Experiments show that the responses selected by the re-ranker yield significant improvements on automatic metrics.
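For the re-ranking step mentioned above, here is a minimal sketch under assumed components: a generator proposes several surface realizations for one semantic input, and a hypothetical context-overlap scorer (standing in for the actual re-ranker) picks the candidate that best fits the preceding user utterance.

```python
def toy_generate_candidates(meaning):
    # Stand-in for sampling several responses from a GPT-style generator.
    return [
        "There are 3 hotels in the centre.",
        "I found 3 hotels in the centre of town. Would you like to book one?",
        "3 hotels.",
    ]

def toy_context_score(context, candidate):
    # Stand-in re-ranker: reward lexical overlap with the preceding user
    # utterance and lightly penalize overly short replies.
    overlap = len(set(context.lower().split()) & set(candidate.lower().split()))
    return overlap + 0.1 * len(candidate.split())

def rerank(context, meaning):
    candidates = toy_generate_candidates(meaning)
    return max(candidates, key=lambda c: toy_context_score(context, c))

user_turn = "Can you find me a hotel in the centre of town?"
print(rerank(user_turn, "inform(count=3, area=centre)"))
```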
Computational notebooks, such as Jupyter notebooks, are interactive computing environments that are ubiquitous among data scientists to perform data wrangling and analytic tasks. To measure the performance of AI pair programmers that automatically synthesize programs for those tasks given natural language (NL) intents from users, we build ARCADE, a benchmark of 1082 code generation problems using the pandas data analysis framework in data science notebooks. ARCADE features multiple rounds of NL-to-code problems from the same notebook. It requires a model to understand rich multi-modal contexts, such as existing notebook cells and their execution states as well as previous turns of interaction. To establish a strong baseline on this challenging task, we develop PaChiNCo, a 62B code language model (LM) for Python computational notebooks, which significantly outperforms public code LMs. Finally, we explore few-shot prompting strategies to elicit better code with step-by-step decomposition and NL explanation, showing the potential to improve the diversity and explainability of model predictions.
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints in practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the lower level of interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches have emerged in the recent 3-4 years, targeting different CTG tasks which may require different types of controlled constraints. In this paper, we present a systematic critical review on the common tasks, main approaches and evaluation methods in this area. Finally, we discuss the challenges that the field is facing, and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
We present Sparrow, an information-seeking dialogue agent trained to be more helpful, correct, and harmless than prompted language model baselines. We use reinforcement learning from human feedback to train our models, with human raters helping to judge the agent's behavior. First, to make our agent more helpful and harmless, we break down the requirements for good dialogue into natural language rules the agent should follow, and ask raters about each rule separately. We show that this breakdown allows us to collect more targeted human judgments of agent behavior and enables more efficient rule-conditioned reward models. Second, our agent provides evidence from sources supporting its factual claims when collecting preference judgments over model statements. For factual questions, the evidence Sparrow provides supports its claims 78% of the time. Raters prefer Sparrow over baselines, and Sparrow is also more resilient to adversarial probing by humans, violating our rules only 8% of the time when probed. Finally, we conduct extensive analyses showing that although our model learns to follow our rules, it can still exhibit distributional biases.
Task-oriented semantic parsing is increasingly being used in user-facing applications, making measuring the calibration of parsing models especially important. We examine the calibration characteristics of six models across three model families on two common English semantic parsing datasets, finding that many models are reasonably well-calibrated and that there is a trade-off between calibration and performance. Based on confidence scores across three models, we propose and release new challenge splits of the two datasets we examine. We then illustrate the ways a calibrated model can be useful in balancing common trade-offs in task-oriented parsing. In a simulated annotator-in-the-loop experiment, we show that using model confidence allows us to improve the accuracy on validation programs by 9.6% (absolute) with annotator interactions on only 2.2% of tokens. Using sequence-level confidence scores, we then examine how we can optimize the trade-off between a parser's usability and safety. We show that confidence-based thresholding can reduce the number of incorrect low-confidence programs executed by 76%; however, this comes at a cost to usability. We propose the DidYouMean system which balances usability and safety. We conclude by calling for calibration to be included in the evaluation of semantic parsing systems, and release a library for computing calibration metrics.
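A minimal sketch of confidence-based thresholding in this spirit: programs whose sequence-level confidence falls below a threshold are routed to a confirmation step instead of being executed. The threshold, numbers, and triage policy here are illustrative, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class ParsedProgram:
    program: str
    confidence: float  # sequence-level confidence from a calibrated parser

def triage(parse: ParsedProgram, threshold: float = 0.7):
    if parse.confidence >= threshold:
        return ("execute", parse.program)
    # Low confidence: ask the user to confirm instead of executing a
    # potentially incorrect program (trading some usability for safety).
    return ("confirm", f"Did you mean: {parse.program}?")

print(triage(ParsedProgram('CreateEvent(name="standup")', 0.93)))
print(triage(ParsedProgram('DeleteEvent(name="standup")', 0.41)))
```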
This work presents a new dialogue dataset, CookDial, that facilitates research on task-oriented dialogue systems grounded in procedural task knowledge. The corpus contains 260 human-to-human task-oriented dialogues in which an agent, given a recipe document, guides the user to cook a dish. Dialogues in CookDial exhibit two unique features: (i) procedural alignment between the dialogue flow and the supporting document; and (ii) complex agent decision-making that involves segmenting long sentences, paraphrasing hard instructions, and resolving coreference in the dialogue context. In addition, we identify three challenging (sub)tasks for the envisioned task-oriented dialogue system: (1) user question understanding, (2) agent action frame prediction, and (3) agent response generation. For each of these tasks, we develop a neural baseline model that we evaluate on the CookDial dataset. We publicly release the CookDial dataset, comprising rich annotations of both the dialogues and the recipe documents, to stimulate further research on domain-specific document-grounded dialogue systems.
Even though machine learning has become the major scene in the dialogue research community, the real breakthrough has been blocked by the scale of data available. To address this fundamental obstacle, we introduce the Multi-Domain Wizard-of-Oz dataset (MultiWOZ), a fully-labeled collection of human-human written conversations spanning over multiple domains and topics. At a size of 10k dialogues, it is at least one order of magnitude larger than all previous annotated task-oriented corpora. The contribution of this work apart from the open-sourced dataset labelled with dialogue belief states and dialogue actions is two-fold: firstly, a detailed description of the data collection procedure along with a summary of data structure and analysis is provided. The proposed data-collection pipeline is entirely based on crowd-sourcing without the need to hire professional annotators; secondly, a set of benchmark results of belief tracking, dialogue act and response generation is reported, which shows the usability of the data and sets a baseline for future studies.
Most users of low-code platforms, such as Excel and PowerApps, write programs in domain-specific formula languages to carry out nontrivial tasks. Often, users can write most of the program they want, but introduce small mistakes that yield broken formulas. These mistakes, which can be both syntactic and semantic, are hard for low-code users to identify and fix, even though they can often be resolved with just a few edits. We formalize the problem of producing such edits as the last-mile repair problem. To address this problem, we developed LaMirage, a last-mile repair engine generator that combines symbolic and neural techniques to perform last-mile repair in low-code formula languages. LaMirage takes a grammar and a set of domain-specific constraints/rules, which jointly approximate the target language, and uses them to generate a repair engine that can fix formulas in that language. To tackle the challenges of localizing errors and ranking candidate repairs, LaMirage leverages neural techniques, whereas it relies on symbolic methods to generate candidate repairs. This combination allows LaMirage to find repairs that satisfy the provided grammar and constraints, and then pick the most natural repair. We compare LaMirage against state-of-the-art neural and symbolic approaches on 400 real Excel and PowerFx formulas, where LaMirage outperforms all baselines. We release these benchmarks to encourage subsequent work in low-code domains.
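A minimal sketch of the generate-and-rank idea described above (not LaMirage itself): symbolic rules enumerate small candidate edits, a crude well-formedness check stands in for the grammar and constraints, and a simple length-difference heuristic stands in for the neural ranker.

```python
# Hypothetical edit rules that produce candidate repairs for a broken formula.
CANDIDATE_EDITS = [
    lambda f: f + ")" if f.count("(") > f.count(")") else None,  # close paren
    lambda f: f.replace(";", ","),                               # wrong separator
    lambda f: f.replace(",,", ","),                              # doubled comma
]

def is_well_formed(formula: str) -> bool:
    # Crude stand-in for checking the formula against the language's grammar.
    return formula.count("(") == formula.count(")") and ";" not in formula

def naturalness(original: str, repaired: str) -> float:
    # Stand-in for the neural ranker: prefer repairs that change little.
    return -abs(len(repaired) - len(original))

def last_mile_repair(formula: str) -> str:
    candidates = []
    for edit in CANDIDATE_EDITS:
        repaired = edit(formula)
        if repaired and is_well_formed(repaired):
            candidates.append(repaired)
    if not candidates:
        return formula
    return max(candidates, key=lambda r: naturalness(formula, r))

print(last_mile_repair('=SUM(A1:A10'))              # missing closing parenthesis
print(last_mile_repair('=IF(A1>0; "yes", "no")'))   # wrong argument separator
```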