尽管改善神经对话代理的事实准确性是大量研究的对象,但在神经对话的环境中,沟通的另一个重要方面是对无知的透明度。在这项工作中,我们分析了最新的聊天模型在多大程度上是语言校准的,因为它们的疑问(或信心)的口头表达与该模型的响应实际上是不正确(或正确)的可能性相匹配。 。我们发现这些模型的校准很差,但是我们表明可以准确预测正确性的可能性。通过将这种元认知特征纳入可控生成模型的训练中,我们获得了具有大大改进语言校准的对话代理。尽管改善神经对话代理的事实准确性是大量研究的对象,但在神经对话的环境中,沟通的另一个重要方面是对无知的透明度。在这项工作中,我们分析了最新的聊天模型在多大程度上是语言校准的,因为它们的疑问(或信心)的口头表达与该模型的响应实际上是不正确(或正确)的可能性相匹配。 。我们发现这些模型的校准很差,但是我们表明可以准确预测正确性的可能性。通过将这种元认知特征纳入可控生成模型的训练中,我们获得了具有大大改进语言校准的对话代理。
translated by 谷歌翻译
我们介绍了Sparrow,这是一个寻求信息的对话代理,与提示的语言模型基线相比,训练有素,更有帮助,正确和无害。我们使用从人类反馈中的强化学习来培训我们的模型,以帮助人类评估者判断代理人的行为。首先,为了使我们的代理人更有帮助和无害,我们将良好对话的要求分解为代理人应遵循的自然语言规则,并分别向评估者询问每个规则。我们证明,这种崩溃使我们能够收集对代理行为的更多针对性的人类判断,并允许更有效的规则条件奖励模型。其次,我们的代理商在收集对模型声明的偏好判决时提供了支持事实主张的来源的证据。对于事实问题,麻雀提供的证据支持了78%的时间。比基线比基线更享受麻雀,同时对人类的对抗性探测更具弹性,在探测时只有8%的时间违反了我们的规则。最后,我们进行了广泛的分析,表明尽管我们的模型学会遵守我们的规则,但它可以表现出分布偏见。
translated by 谷歌翻译
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
translated by 谷歌翻译
Humans gather information through conversations involving a series of interconnected questions and answers. For machines to assist in information gathering, it is therefore essential to enable them to answer conversational questions. We introduce CoQA, a novel dataset for building Conversational Question Answering systems. 1 Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage. We analyze CoQA in depth and show that conversational questions have challenging phenomena not present in existing reading comprehension datasets, e.g., coreference and pragmatic reasoning. We evaluate strong dialogue and reading comprehension models on CoQA. The best system obtains an F1 score of 65.4%, which is 23.4 points behind human performance (88.8%), indicating there is ample room for improvement. We present CoQA as a challenge to the community at https://stanfordnlp. github.io/coqa.
translated by 谷歌翻译
Task-oriented semantic parsing is increasingly being used in user-facing applications, making measuring the calibration of parsing models especially important. We examine the calibration characteristics of six models across three model families on two common English semantic parsing datasets, finding that many models are reasonably well-calibrated and that there is a trade-off between calibration and performance. Based on confidence scores across three models, we propose and release new challenge splits of the two datasets we examine. We then illustrate the ways a calibrated model can be useful in balancing common trade-offs in task-oriented parsing. In a simulated annotator-in-the-loop experiment, we show that using model confidence allows us to improve the accuracy on validation programs by 9.6% (absolute) with annotator interactions on only 2.2% of tokens. Using sequence-level confidence scores, we then examine how we can optimize trade-off between a parser's usability and safety. We show that confidence-based thresholding can reduce the number of incorrect low-confidence programs executed by 76%; however, this comes at a cost to usability. We propose the DidYouMean system which balances usability and safety. We conclude by calling for calibration to be included in the evaluation of semantic parsing systems, and release a library for computing calibration metrics.
translated by 谷歌翻译
In open-domain dialogue intelligent agents should exhibit the use of knowledge, however there are few convincing demonstrations of this to date. The most popular sequence to sequence models typically "generate and hope" generic utterances that can be memorized in the weights of the model when mapping from input utterance(s) to output, rather than employing recalled knowledge as context. Use of knowledge has so far proved difficult, in part because of the lack of a supervised learning benchmark task which exhibits knowledgeable open dialogue with clear grounding. To that end we collect and release a large dataset with conversations directly grounded with knowledge retrieved from Wikipedia. We then design architectures capable of retrieving knowledge, reading and conditioning on it, and finally generating natural responses. Our best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while our new benchmark allows for measuring further improvements in this important research direction.
translated by 谷歌翻译
为了实现长文档理解的构建和测试模型,我们引入质量,具有中文段的多项选择QA DataSet,具有约5,000个令牌的平均长度,比典型的当前模型更长。与经过段落的事先工作不同,我们的问题是由阅读整个段落的贡献者编写和验证的,而不是依赖摘要或摘录。此外,只有一半的问题是通过在紧缩时间限制下工作的注释器来应答,表明略读和简单的搜索不足以一直表现良好。目前的模型在此任务上表现不佳(55.4%),并且落后于人类性能(93.5%)。
translated by 谷歌翻译
我们研究语言模型是否可以评估自己主张的有效性,并预测他们能够正确回答的问题。我们首先表明,当以正确的格式提供时,较大的模型在多样化的多项选择和True/False问题上进行了很好的校准。因此,我们可以通过要求模型首先提出答案,然后评估其答案正确的概率“ p(true)”来对开放式采样任务进行自我评估。我们发现在各种任务中,P(true)的表现,校准和缩放令人鼓舞。当我们允许模型考虑自己的许多样本之前,在预测一种特定可能性的有效性之前,自我评估的性能进一步改善。接下来,我们研究是否可以培训模型来预测“ P(ik)”,即“我知道”问题的概率,而无需参考任何特定提出的答案。模型在预测P(IK)方面表现良好,并且在跨任务中部分概括,尽管它们在新任务上的P(IK)校准方面遇到了困难。预测的p(IK)概率在存在相关的原始材料的情况下以及对数学单词问题解决方案的提示也适当增加。我们希望这些观察结果为培训更诚实的模型提供了基础,并研究了诚实对模型模仿人类写作以外的其他目标培训的案例的普遍性。
translated by 谷歌翻译
我们提出了Blenderbot 3,这是一个175B参数对话模型,能够通过访问Internet和长期内存进行开放域对话,并接受了大量用户定义的任务的培训。我们同时发布了模型权重和代码,还将模型部署在公共网页上,以与有机用户进行交互。该技术报告描述了该模型的构建方式(建筑,模型和培训计划)以及其部署的细节,包括安全机制。人类评估表明,它优于现有的开放域对话代理,包括其前身(Roller等,2021; Komeili等,2022)。最后,我们使用部署收集的数据详细介绍了持续学习的计划,该数据也将公开发布。因此,该研究计划的目标是使社区能够研究通过互动学习的不断改进的负责任的代理商。
translated by 谷歌翻译
大型语言模型可以产生流畅的对话,但往往是幻觉的事实不准确。虽然检索式增强的模型有助于缓解这个问题,但他们仍然面临着推理的艰难挑战,以便同时提供正确的知识和产生对话。在这项工作中,我们提出了一种模块化模型,知识响应(K2R),将知识纳入会话代理商,这将这个问题分解为两个更简单的步骤。 K2R首先生成一个知识序列,给定对话背景作为中间步骤。在此“推理步骤”之后,该模型随后参加自己生成的知识序列,以及对话背景,以产生最终的响应。在详细的实验中,我们发现这种模型在知识接地的对话任务中少幻觉,并且在可解释性和模块化方面具有优势。特别地,它可以用来将QA和对话系统一起融合在一起,以使对话代理能够提供知识渊博的答案,或者QA模型,以在零拍摄设置中给出对话响应。
translated by 谷歌翻译
最先进的对话模型仍然对事实准确性和自我矛盾甚至困难。轶事,他们已被观察到在整个话语中未能维持性质身份;更具体地,可能会涉及其对话者的作用。在这项工作中,我们正规化和量化这种缺陷,并通过人类评估实验表明这确实是一个问题。相比之下,我们展示了专门识别谁在谈话的歧视模型可以表现良好;此外,这些可以用作自动指标。最后,我们评估了各种缓解方法,包括模型架构,培训协议和解码策略的变化。根据人类的注释者,我们最好的车型减少了近65%的误认为是近65%,同时提高了参与度。尽管有这些结果,但我们发现维持性格身份仍然是一个具有挑战性的问题。
translated by 谷歌翻译
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
translated by 谷歌翻译
Question Answering (QA) is a growing area of research, often used to facilitate the extraction of information from within documents. State-of-the-art QA models are usually pre-trained on domain-general corpora like Wikipedia and thus tend to struggle on out-of-domain documents without fine-tuning. We demonstrate that synthetic domain-specific datasets can be generated easily using domain-general models, while still providing significant improvements to QA performance. We present two new tools for this task: A flexible pipeline for validating the synthetic QA data and training downstream models on it, and an online interface to facilitate human annotation of this generated data. Using this interface, crowdworkers labelled 1117 synthetic QA pairs, which we then used to fine-tune downstream models and improve domain-specific QA performance by 8.75 F1.
translated by 谷歌翻译
Visual understanding goes well beyond object recognition. With one glance at an image, we can effortlessly imagine the world beyond the pixels: for instance, we can infer people's actions, goals, and mental states. While this task is easy for humans, it is tremendously difficult for today's vision systems, requiring higher-order cognition and commonsense reasoning about the world. We formalize this task as Visual Commonsense Reasoning. Given a challenging question about an image, a machine must answer correctly and then provide a rationale justifying its answer.Next, we introduce a new dataset, VCR, consisting of 290k multiple choice QA problems derived from 110k movie scenes. The key recipe for generating non-trivial and highquality problems at scale is Adversarial Matching, a new approach to transform rich annotations into multiple choice questions with minimal bias. Experimental results show that while humans find VCR easy (over 90% accuracy), state-of-the-art vision models struggle (∼45%).To move towards cognition-level understanding, we present a new reasoning engine, Recognition to Cognition Networks (R2C), that models the necessary layered inferences for grounding, contextualization, and reasoning. R2C helps narrow the gap between humans and machines (∼65%); still, the challenge is far from solved, and we provide analysis that suggests avenues for future work.
translated by 谷歌翻译
最近已被证明大型语言模型在各种任务集中获得合理的零射普通化(Brown等,2020)。它已经假设这是语言模型的隐式多任务学习的结果,在语言模型中的预押(Radford等,2019)。可以通过明确的多任务学习直接引起零拍常规化?为了以缩放测试这个问题,我们开发一个系统,以便轻松地将任何自然语言任务映射到人类可读的提示表单中。我们转换一组大量的监督数据集,每个数据集都有多个提示,具有不同的措辞。这些提示的数据集允许基准测试模型执行完全看不见的任务的能力。我们介绍了一个普拉克尔编码器 - 解码器模型(Raffel等,2020; Lester等,2021),覆盖各种任务。该模型在多个标准数据集中达到强大的零点性能,通常优于其尺寸的型号超过16倍。此外,我们的方法对来自Big-替补基准测试的任务子集具有强烈性能,优于其尺寸的6倍。所有提示和培训的型号都可以在https://github.com/ bigscience-workshop / protectsource / httpsource / https://huggingface.co/bigscience/t0pp。
translated by 谷歌翻译
变形金刚目前是自然语言理解(NLU)任务的最新技术,容易产生未校准的预测或极端概率,从而根据其输出相对困难而做出不同的决策过程。在本文中,我们建议构建几个电感Venn - 持续预测因子(IVAP),这些预测因子(IVAP)可以根据预先训练的变压器的选择在最小的假设下可以很好地校准。我们在一组不同的NLU任务上测试了它们的性能,并表明它们能够产生均匀分布在[0,1]间隔的概率预测的良好概率预测,同时均保留了原始模型的预测准确性。
translated by 谷歌翻译
鉴于大型语言模型的广泛能力,应该有可能朝着一般的文本的助手工作,这些助手与人类价值一致,这意味着它是有帮助,诚实的和无害的。在此方向上的初始遗传,我们研究简单的基线技术和评估,例如提示。我们发现,从模型规模增加适度的干预措施的好处,概括为各种对准评估,并不会损害大型模型的性能。接下来,我们调查与对齐,比较仿制,二进制歧视和排名偏好建模相关的几个培训目标的缩放趋势。我们发现排名优先级模型比模仿学习更好地表现得多,并且通常以模型大小更有利地缩放。相比之下,二进制歧视通常与模仿学习非常类似地执行和缩放。最后,我们研究了一种“偏好模型预训练阶段的培训阶段,其目的是在对人偏好的芬明时提高样本效率。
translated by 谷歌翻译
人类在对话中提出的问题通常包含上下文依赖性,即对先前对话转弯的明确或隐式引用。这些依赖性采用核心发挥的形式(例如,通过代词使用)或椭圆形,并且可以使自动化系统的理解难以理解。促进对问题的理解和后续治疗方法的一种方法是将其重写为不受欢迎的形式,即可以理解的形式而没有对话性上下文。我们提出了Coqar,Coqar是一种语料库,其中包含$ 4.5 $ k的对话中的对话询问数据集COQA,总计$ 53 $ K的后续提问 - 答案对。每个原始问题都在至少2个脱离台面重写中手动注释。 COQAR可用于监督三个任务的监督:问题释义,问题重写和会话问题回答。为了评估Coqar重写的质量,我们进行了几项实验,包括培训和评估这三个任务的模型。我们的结果支持以下想法:问题重写可以用作问题回答模型的预处理步骤,从而提高其性能。
translated by 谷歌翻译
会话问题应答(CQA)系统旨在为用户提供自然语言答案,以信息寻求对话。现有的CQA基准测试与预先收集的人类谈话进行比较模型,使用在会话历史中提供的地面真理答案。它仍然尚不清楚我们是否可以依赖于模型开发的这种静态评估,以及当前系统是否能够充分地概括为现实世界的人机对话。在这项工作中,我们开展了最先进的CQA系统的大规模人类评估,人类评估人员与模型交谈并判断了答案的正确性。我们发现,人机对话的分布与人类谈话的分配急剧不同,并且在模型排名方面存在人和金历史评估之间的分歧。我们进一步调查了如何改进自动评估,并提出基于预测历史的问题重写机制,与人类判断更好地相关。最后,我们讨论了各种建模策略和未来方向对更好的会话问题应答系统的影响。
translated by 谷歌翻译
Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training using specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalizations: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B and further exploit these insights to train OPT-IML 30B and 175B, which are instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four different evaluation benchmarks with diverse tasks and input formats -- PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks but is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework.
translated by 谷歌翻译