Probabilistic models of language understanding are interpretable and structured: models of metaphor understanding, for example, describe inference over latent topics and features. However, these models are hand-designed for a specific task. Large language models (LLMs) can perform many tasks through in-context learning, but they lack the clear structure of probabilistic models. In this paper, we use chain-of-thought prompts to introduce structures from probabilistic models into LLMs. These prompts lead the model to infer latent variables and reason about their relationships in order to choose appropriate paraphrases for metaphors. The latent variables and relationships chosen are informed by theories of metaphor understanding from cognitive psychology. We apply these prompts to the two largest versions of GPT-3 and show that they can improve paraphrase selection.
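As a rough illustration of the kind of prompt the abstract describes, here is a minimal sketch of a chain-of-thought exemplar that names the latent topic and the transferred feature before a paraphrase is chosen. The exemplar wording, option format, and helper function are illustrative assumptions, not the prompts used in the paper.

```python
# A hypothetical psychologically-informed chain-of-thought exemplar: the
# reasoning states the latent topic and the feature transferred by the
# metaphor before committing to a paraphrase. Not the paper's actual prompt.
EXEMPLAR = """\
Metaphor: "My lawyer is a shark."
Paraphrase options: (a) My lawyer swims well. (b) My lawyer is aggressive.
Reasoning: the latent topic is the lawyer; the feature a shark transfers is
aggressiveness, not swimming ability.
Answer: (b)
"""

def build_metaphor_prompt(metaphor: str, options: list[str]) -> str:
    """Append a new metaphor and its paraphrase options after the worked exemplar."""
    opts = " ".join(f"({chr(97 + i)}) {o}" for i, o in enumerate(options))
    return f"{EXEMPLAR}\nMetaphor: {metaphor}\nParaphrase options: {opts}\nReasoning:"
```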
Abstract reasoning is a key ability for an intelligent system. Large language models achieve high performance on abstract reasoning tasks, but exhibit many flaws. However, human abstract reasoning is also imperfect, and depends on our knowledge of and beliefs about the content of the reasoning problem. For example, humans reason much more reliably about logical rules that are grounded in everyday situations than about arbitrary rules concerning abstract attributes. The training experiences of language models similarly endow them with prior expectations that reflect human knowledge and beliefs. We therefore hypothesized that language models would show human-like content effects on abstract reasoning problems. We explored this hypothesis across three logical reasoning tasks: natural language inference, judging the logical validity of syllogisms, and the Wason selection task (Wason, 1968). We find that state-of-the-art large language models (with 7 or 70 billion parameters; Hoffmann et al., 2022) reflect many of the same patterns observed in humans on these tasks: like humans, the models reason more effectively about believable situations than about unrealistic or abstract ones. Our findings have implications for understanding these cognitive effects and the factors that contribute to language model performance.
Recent advances in neural network language models have shown that expressive meaning representations can be derived by leveraging linguistic associations in large-scale natural language data. These potentially gestalt representations have enabled state-of-the-art performance in many practical applications. It would appear that we are on a path towards empirically deriving a robust and expressive computable semantics. A key question that arises is whether language data alone can enable computers to understand the necessary truths about the physical world. This question must be addressed, because our future interactions with intelligent machines depend on our technology correctly representing and processing the concepts (objects, properties, and processes) that humans commonly observe. After reviewing existing protocols, the aim of this work is to explore this question using a novel and tightly controlled reasoning test, and to highlight what such models may learn directly from pure linguistic data.
Pragmatics is an essential part of communication, but it remains unclear what mechanisms underlie human pragmatic communication and whether NLP systems capture pragmatic language understanding. To investigate both these questions, we perform a fine-grained comparison of language models and humans on seven pragmatic phenomena, using zero-shot prompting on an expert-curated set of English materials. We ask whether models (1) select pragmatic interpretations of speaker utterances, (2) make similar error patterns as humans, and (3) use similar linguistic cues as humans to solve the tasks. We find that the largest models achieve high accuracy and match human error patterns: within incorrect responses, models favor the literal interpretation of an utterance over heuristic-based distractors. We also find evidence that models and humans are sensitive to similar linguistic cues. Our results suggest that even paradigmatic pragmatic phenomena may be solved without explicit representations of other agents' mental states, and that artificial models can be used to gain mechanistic insights into human pragmatic processing.
Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks (Brown et al., 2020). It has been hypothesized that this is a consequence of implicit multitask learning in language model pretraining (Radford et al., 2019). Can zero-shot generalization instead be directly induced by explicit multitask learning? To test this question at scale, we develop a system for easily mapping any natural language task into a human-readable prompted form. We convert a large set of supervised datasets, each with multiple prompts with diverse wording. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks. We fine-tune a pretrained encoder-decoder model (Raffel et al., 2020; Lester et al., 2021) on this multitask mixture covering a wide variety of tasks. The model attains strong zero-shot performance on several standard datasets, often outperforming models up to 16x its size. Further, our approach attains strong performance on a subset of tasks from the BIG-bench benchmark, outperforming models up to 6x its size. All prompts and trained models are available at https://github.com/bigscience-workshop/promptsource and https://huggingface.co/bigscience/T0pp.
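A minimal sketch of the prompted-form idea, under the assumption of hypothetical NLI templates: the wording below is not taken from the released promptsource templates, it only illustrates how one labeled example can be rendered under several differently phrased prompts.

```python
# Hypothetical prompt templates for a natural language inference example.
# Each template rephrases the same supervised example as a different
# human-readable prompt, in the spirit of the multitask prompted setup.
NLI_TEMPLATES = [
    "Premise: {premise}\nHypothesis: {hypothesis}\n"
    "Does the premise entail the hypothesis? Answer yes, no, or maybe.",
    "{premise}\nQuestion: is it true that \"{hypothesis}\"? yes, no, or maybe?",
]

def render_prompts(example: dict) -> list[str]:
    """Render one labeled example under every template."""
    return [template.format(**example) for template in NLI_TEMPLATES]

if __name__ == "__main__":
    example = {"premise": "A dog is running on the beach.",
               "hypothesis": "An animal is outdoors."}
    for prompt in render_prompts(example):
        print(prompt, end="\n---\n")
```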
Large language models that are capable of zero or few-shot prompting approaches have given rise to the new research area of prompt engineering. Recent advances showed that, for example, Chain-of-Thought (CoT) prompts can improve arithmetic or common sense tasks significantly. We explore how such approaches fare on legal reasoning tasks and take the COLIEE entailment task, based on the Japanese Bar exam, for testing zero-shot/few-shot and fine-tuning approaches. Our findings show that while CoT prompting and fine-tuning with explanations show improvements, the best results are produced by prompts derived from specific legal reasoning techniques such as IRAC (Issue, Rule, Application, Conclusion). Based on our experiments we improve the 2021 best result from 0.7037 accuracy to 0.8148 accuracy and beat the 2022 best system of 0.6789 accuracy with an accuracy of 0.7431.
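To make the IRAC idea concrete, here is a minimal sketch of an IRAC-structured prompt for a statutory entailment question. The wording is an assumption for illustration only, not the exact prompt used in the COLIEE experiments.

```python
# A hypothetical IRAC-structured prompt (Issue, Rule, Application, Conclusion)
# for a bar-exam style entailment question. Illustrative only.
IRAC_TEMPLATE = """\
Article: {article}
Statement: {statement}

Issue: state the legal question raised by the statement.
Rule: restate the relevant rule from the article.
Application: apply the rule to the facts in the statement.
Conclusion: answer "entailed" or "not entailed".
"""

def build_irac_prompt(article: str, statement: str) -> str:
    """Fill the IRAC template with a statute excerpt and a candidate statement."""
    return IRAC_TEMPLATE.format(article=article, statement=statement)
```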
Language models such as GPT-3 have caused a stir in the research community. Some studies have found that GPT-3 has some creative abilities and makes mistakes that are on par with human behaviour. This paper answers a related question: who is GPT-3? We administered two validated measurement tools to GPT-3 to assess its personality, the values it holds, and its self-reported demographics. Our results show that GPT-3 scores similarly to human samples in terms of personality and, when provided with a model response memory, in terms of the values it holds. We provide the first evidence of a psychological assessment of the GPT-3 model, thereby adding to our understanding of this model. We close with suggestions for future research that brings social science closer to language models and vice versa.
Children acquiring English make systematic errors on subject control sentences even after they have reached near-adult competence (C. Chomsky, 1969), possibly due to heuristics based on semantic roles (Maratsos, 1974). Given the advanced fluency of large generative language models, we ask whether model outputs are consistent with these heuristics, and to what degree different models are consistent with each other. We find that models can be categorized by behavior into three separate groups, with broad differences between the groups. The outputs of models in the largest group are consistent with positional heuristics that succeed on subject control but fail on object control. This result is surprising, given that object control is orders of magnitude more frequent in the text data used to train such models. We examine to what degree the models are sensitive to prompting with agent-patient information, finding that raising the salience of agent and patient relations results in significant changes in the outputs of most models. Based on this observation, we leverage an existing dataset of semantic proto-role annotations (White, et al. 2020) to explore the connections between control and labeling event participants with properties typically associated with agents and patients.
Many real-world applications of language models (LMs), such as code autocomplete and writing assistance, involve human-LM interaction, but the main LM benchmarks are non-interactive, where a system produces output without human intervention. To evaluate human-LM interaction, we develop a framework, Human-AI Language-based Interaction Evaluation (H-LINE), that expands non-interactive evaluation along three dimensions, capturing (i) the interactive process, not only the final output; (ii) the first-person subjective experience, not just a third-party assessment; and (iii) notions of preference beyond quality. We then design five tasks ranging from goal-oriented to open-ended to capture different forms of interaction. On four state-of-the-art LMs (three variants of OpenAI's GPT-3 and AI21's J1-Jumbo), we find that non-interactive performance does not always result in better human-LM interaction and that first-person and third-party metrics can diverge, suggesting the importance of examining the nuances of human-LM interaction.
We fine-tune GPT-3 to answer long-form questions using a text-based web-browsing environment, which allows the model to search and navigate the web. By setting up the task so that it can be performed by humans, we are able to train models on the task using imitation learning, and then optimize answer quality with human feedback. To make human evaluation of factual accuracy easier, models must collect references while browsing in support of their answers. We train and evaluate our models on ELI5, a dataset of questions asked by Reddit users. Our best model is obtained by fine-tuning GPT-3 using behavior cloning, and then performing rejection sampling against a reward model trained to predict human preferences. This model's answers are preferred by humans 56% of the time to those of our human demonstrators, and 69% of the time to the highest-voted answer from Reddit.
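A minimal sketch of the rejection-sampling step described above, assuming stand-in `generate_answer` and `reward` callables for the behavior-cloned policy and the learned preference model; this is not the actual WebGPT pipeline.

```python
from typing import Callable

# Best-of-n rejection sampling: draw several candidate answers and keep the
# one that a (stand-in) reward model scores highest. Illustrative sketch only.
def best_of_n(question: str,
              generate_answer: Callable[[str], str],
              reward: Callable[[str, str], float],
              n: int = 16) -> str:
    candidates = [generate_answer(question) for _ in range(n)]
    return max(candidates, key=lambda answer: reward(question, answer))
```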
We explore how generating a chain of thought, a series of intermediate reasoning steps, significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain-of-thought prompting, where a few chain-of-thought demonstrations are provided as exemplars in the prompt. Experiments on three large language models show that chain-of-thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain-of-thought exemplars achieves state-of-the-art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier.
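A minimal sketch of chain-of-thought prompting as described above: worked exemplars with intermediate reasoning steps are prepended to the target question. The exemplar and helper below are only a sketch in the style of the paper's demonstrations, not its exact prompts.

```python
# One chain-of-thought exemplar with explicit intermediate reasoning steps.
COT_EXEMPLARS = """\
Q: There are 3 cars in the parking lot and 2 more cars arrive. How many cars
are in the parking lot?
A: There are 3 cars originally. 2 more arrive. 3 + 2 = 5. The answer is 5.
"""

def build_cot_prompt(question: str) -> str:
    """Prepend the worked exemplars so the model imitates step-by-step reasoning."""
    return f"{COT_EXEMPLARS}\nQ: {question}\nA:"

if __name__ == "__main__":
    print(build_cot_prompt("A bakery sold 23 cakes in the morning and 18 in "
                           "the afternoon. How many cakes did it sell in total?"))
```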
We present an exploratory qualitative study to understand how writers interact with next-phrase suggestions. While there has been some quantitative research on the effects of suggestion systems on writing, there is little qualitative work on understanding how writers interact with suggestion systems and how this affects their writing process, particularly for non-native English writers. We conducted a study asking amateur writers to each write two movie reviews, one with suggestions and one without. We found that writers interact with next-phrase suggestions in various complex ways: writers were able to abstract multiple parts of a suggestion and incorporate them into their writing, even when they disagreed with the suggestion as a whole. The suggestion system also had various effects on the writing process, contributing to different aspects of the process in distinct ways. We propose a writer-suggestion interaction model for writing with GPT-2 on a movie review writing task, describe ways in which the model can be used in future research, and outline opportunities for research and design.
Reasoning is a fundamental aspect of human intelligence that plays a crucial role in activities such as problem solving, decision making, and critical thinking. In recent years, large language models (LLMs) have made significant progress in natural language processing, and it has been observed that these models may exhibit reasoning abilities when they are sufficiently large. However, it is not yet clear to what extent LLMs are capable of reasoning. This paper provides a comprehensive overview of the current state of knowledge on reasoning in LLMs, including techniques for improving and eliciting reasoning in these models, methods and benchmarks for evaluating reasoning abilities, findings and implications of previous research in this field, and suggestions on future directions. Our aim is to provide a detailed and up-to-date review of this topic and stimulate meaningful discussion and future work.
The most prominent tasks in emotion analysis are assigning emotions to texts and understanding how emotions manifest in language. An important observation for natural language processing is that emotions can be communicated implicitly by referring to events alone, even without explicitly mentioning an emotion name. In psychology, the class of emotion theories known as appraisal theories aims to explain the link between events and emotions. Appraisals can be formalized as variables that measure the cognitive evaluations people make of events they consider relevant. They include evaluations of whether an event is novel, whether the person considers themselves responsible, whether the event is aligned with their own goals, and many others. Such appraisals explain which emotions develop from an event; for example, a novel situation can elicit surprise, and one with uncertain consequences may elicit fear. We analyze the suitability of appraisal theories for emotion analysis in text, with the goal of understanding whether annotators can reliably reconstruct appraisal concepts, whether these concepts can be predicted by text classifiers, and whether appraisal concepts help identify emotion categories. To achieve this, we compile a corpus by asking people to write texts describing events that triggered particular emotions and to disclose their appraisals. We then ask readers to reconstruct the emotions and appraisals from the texts. This setup allows us to measure whether emotions and appraisals can be recovered purely from text, and provides a human baseline against which to judge model performance. Our comparison of text classification methods with human annotators shows that both can reliably detect emotions and appraisals with similar performance. We further show that appraisal concepts improve the classification of emotions in text.
We present Sparrow, an information-seeking dialogue agent trained to be more helpful, correct, and harmless compared to prompted language model baselines. We use reinforcement learning from human feedback to train our model, with additions that help human raters judge the agent's behaviour. First, to make our agent more helpful and harmless, we break down the requirements for good dialogue into natural language rules the agent should follow, and ask raters about each rule separately. We demonstrate that this breakdown enables us to collect more targeted human judgements of agent behaviour and allows for more efficient rule-conditional reward models. Second, our agent provides evidence from sources supporting factual claims when collecting preference judgements over model statements. For factual questions, the evidence Sparrow provides supports its responses 78% of the time. Sparrow is preferred over baselines more often, while being more resilient to adversarial probing by humans, violating our rules only 8% of the time when probed. Finally, we conduct extensive analyses showing that although our model learns to follow our rules, it can still exhibit distributional biases.
Large language models (LLMs) have exploded in popularity in the past few years and have achieved undeniably impressive results on benchmarks as varied as question answering and text summarization. We provide a simple new prompting strategy that leads to yet another supposedly "super-human" result, this time outperforming humans at common sense ethical reasoning (as measured by accuracy on a subset of the ETHICS dataset). Unfortunately, we find that relying on average performance to judge capabilities can be highly misleading. LLM errors differ systematically from human errors in ways that make it easy to craft adversarial examples, or even perturb existing examples to flip the output label. We also observe signs of inverse scaling with model size on some examples, and show that prompting models to "explain their reasoning" often leads to alarming justifications of unethical actions. Our results highlight how human-like performance does not necessarily imply human-like understanding or reasoning.
This article explores the natural language generation capabilities of large language models as applied to two types of learning resources common in programming courses. Using OpenAI Codex as the large language model, we create programming exercises (including sample solutions and test cases) and code explanations, assessing these both qualitatively and quantitatively. Our results suggest that a majority of the automatically generated content is both novel and sensible, and in some cases usable as is. When creating exercises, we find that it is remarkably easy to influence both the programming concepts they contain and the contextual themes, simply by providing keywords as model input. Our analysis suggests that large-scale generative machine learning models are a useful tool for instructors, although some oversight is still needed to ensure the quality of the generated content before it is delivered to students. We further discuss the implications of OpenAI Codex and similar tools for introductory programming education, and highlight future research directions that have the potential to improve the quality of the educational experience for instructors and students alike.
Human reasoning can often be understood as an interplay between two systems: the intuitive and associative ("System 1") and the deliberative and logical ("System 2"). Neural sequence models, which are increasingly successful at performing complex, structured tasks, exhibit the advantages and failure modes of System 1: they are fast and learn patterns from data, but are often inconsistent and incoherent. In this work, we seek a lightweight, training-free means of improving existing System 1-like sequence models by adding System 2-inspired logical reasoning. We explore several variations on this theme in which candidate generations from a neural sequence model are checked by a symbolic reasoning module, which can accept or reject them. Our approach uses neural inference to mediate between the neural System 1 and the logical System 2. Results on robust story generation and grounded instruction following show that this approach can increase the coherence and accuracy of neurally-based generations.
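A minimal sketch of the generate-then-check loop the abstract describes, with `propose` standing in for the neural System 1 and `is_consistent` for the symbolic System 2 check; both are assumptions for illustration, not the paper's implementation.

```python
from typing import Callable, Optional

# Dual-system sketch: sample candidate generations from a neural proposer and
# keep the first one that the symbolic consistency check accepts.
def dual_system_generate(propose: Callable[[], str],
                         is_consistent: Callable[[str], bool],
                         max_tries: int = 10) -> Optional[str]:
    for _ in range(max_tries):
        candidate = propose()
        if is_consistent(candidate):
            return candidate
    return None  # no candidate satisfied the symbolic constraints
```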
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.
The Winograd Schema Challenge, a set of twin sentences involving pronoun reference disambiguation that seem to require the use of commonsense knowledge, was proposed by Hector Levesque in 2011. By 2019, a number of AI systems based on large pre-trained transformer-based language models and fine-tuned on these kinds of problems achieved accuracy better than 90%. In this paper, we review the history of the Winograd Schema Challenge and assess its significance.