Multi-hop reading comprehension requires not only the ability to reason over raw text but also the ability to combine multiple pieces of evidence. We propose a novel learning approach that helps language models better understand difficult multi-hop questions and perform "complex, compositional" reasoning. Our model first learns to decompose each multi-hop question into several sub-questions using a trainable question decomposer. Instead of answering these sub-questions, we directly concatenate them with the original question and context, and leverage a reading comprehension model to predict the answer in a sequence-to-sequence manner. By using the same language model for these two components, our best separate/unified T5-base variants outperform the baseline by 7.2/6.1 absolute F1 points on a hard subset of the DROP dataset.
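A minimal sketch of this decompose-then-concatenate setup, assuming Hugging Face transformers is available; the checkpoint names, the "decompose:" prefix, and the prompt layout are illustrative placeholders, not the paper's released artifacts.

```python
# Sketch of the decompose-then-read pipeline (hypothetical checkpoints; in
# practice both components are fine-tuned from the same T5-base model).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
decomposer = T5ForConditionalGeneration.from_pretrained("t5-base")  # fine-tuned decomposer in practice
reader = T5ForConditionalGeneration.from_pretrained("t5-base")      # fine-tuned reader in practice

def decompose(question: str) -> str:
    """Generate sub-questions for a multi-hop question as a single string."""
    inputs = tokenizer("decompose: " + question, return_tensors="pt")
    out = decomposer.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(out[0], skip_special_tokens=True)

def answer(question: str, context: str) -> str:
    """Concatenate sub-questions with the original question and context, then read."""
    sub_questions = decompose(question)
    prompt = f"question: {question} sub-questions: {sub_questions} context: {context}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    out = reader.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```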
Powerful generative models have led to recent progress in question generation (QG). However, it is difficult to measure advances in QG research since there are no standardized resources that allow a uniform comparison among approaches. In this paper, we introduce QG-Bench, a multilingual and multidomain benchmark for QG that unifies existing question answering datasets by converting them to a standard QG setting. It includes general-purpose datasets such as SQuAD for English, datasets from ten domains and two styles, as well as datasets in eight different languages. Using QG-Bench as a reference, we perform an extensive analysis of the capabilities of language models for the task. First, we propose robust QG baselines based on fine-tuning generative language models. Then, we complement automatic evaluation based on standard metrics with an extensive manual evaluation, which in turn sheds light on the difficulty of evaluating QG models. Finally, we analyse both the domain adaptability of these models as well as the effectiveness of multilingual models in languages other than English. QG-Bench is released along with the fine-tuned models presented in the paper https://github.com/asahi417/lm-question-generation, which are also available as a demo https://autoqg.net/.
Answering complex questions that require making latent decisions is a challenging task, especially when limited supervision is available. Recent works leverage the capabilities of large language models (LMs) to perform complex question answering in a few-shot setting by demonstrating how to output intermediate rationalizations while solving the complex question in a single pass. We introduce "Successive Prompting", where we iteratively break down a complex task into a simple task, solve it, and then repeat the process until we get the final solution. Successive prompting decouples the supervision for decomposing complex questions from the supervision for answering simple questions, allowing us to (1) have multiple opportunities to query in-context examples at each reasoning step, (2) learn question decomposition separately from question answering, including using synthetic data, and (3) use bespoke (fine-tuned) components for reasoning steps where a large LM does not perform well. The intermediate supervision is typically manually written, which can be expensive to collect. We introduce a way to generate a synthetic dataset which can be used to bootstrap a model's ability to decompose and answer intermediate questions. Our best model (with successive prompting) achieves an improvement of ~5% absolute F1 on a few-shot version of the DROP dataset when compared with a state-of-the-art model with the same supervision.
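A rough sketch of the successive-prompting loop, under the assumption that a single prompted LM call (`lm`, a stub here) handles decomposition, sub-question answering, and the final answer; the prompt wording and the "none" stopping signal are invented for illustration.

```python
# Illustrative sketch of successive prompting (not the authors' code).
def lm(prompt: str) -> str:
    raise NotImplementedError("plug in an actual LM call here")

def successive_prompting(question: str, context: str, max_steps: int = 8) -> str:
    history = []  # (sub_question, sub_answer) pairs produced so far
    for _ in range(max_steps):
        steps = " ".join(f"Q: {q} A: {a}" for q, a in history)
        # Decomposition step: ask for the next simple sub-question.
        sub_q = lm(f"{context}\n{question}\nPrevious steps: {steps}\nNext sub-question:")
        if sub_q.strip().lower() == "none":      # decomposer signals completion
            break
        # Answering step: solve the simple sub-question (possibly with a bespoke model).
        sub_a = lm(f"{context}\nQuestion: {sub_q}\nAnswer:")
        history.append((sub_q, sub_a))
    # Final answer conditioned on all intermediate steps.
    steps = " ".join(f"Q: {q} A: {a}" for q, a in history)
    return lm(f"{context}\n{question}\nSteps: {steps}\nFinal answer:")
```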
Question-answering datasets require a broad set of reasoning skills. We show how to use question decompositions to teach language models these broad reasoning skills in a robust fashion. Specifically, we use widely available QDMR representations to programmatically create hard-to-cheat synthetic contexts for real questions in six multi-step reasoning datasets. These contexts are carefully designed to avoid reasoning shortcuts prevalent in real contexts that prevent models from learning the right skills. This results in a pretraining dataset, named TeaBReaC, containing 525K multi-step questions (with associated formal programs) covering about 900 reasoning patterns. We show that pretraining standard language models (LMs) on TeaBReaC before fine-tuning them on target datasets improves their performance by up to 13 F1 points across 4 multi-step QA datasets, with gains of up to 21 points on more complex questions. The resulting models also demonstrate higher robustness, with a 5-8 F1 point improvement on two contrast sets. Furthermore, TeaBReaC pretraining substantially improves model performance and robustness even when starting with numerate LMs pretrained using recent methods (e.g., PReasM, POET). Our work thus shows how to effectively use decomposition-guided contexts to robustly teach multi-step reasoning.
Step-by-step reasoning approaches like chain-of-thought (CoT) have proved to be a very effective technique to induce reasoning capabilities in large language models. However, the success of the CoT approach depends primarily on model size, and often billion-parameter-scale models are needed to get CoT to work. In this paper, we propose a knowledge distillation approach that leverages the step-by-step CoT reasoning capabilities of larger models and distils these reasoning abilities into smaller models. Our approach, Decompositional Distillation, learns a semantic decomposition of the original problem into a sequence of subproblems and uses it to train two models: a) a problem decomposer that learns to decompose the complex reasoning problem into a sequence of simpler sub-problems and b) a problem solver that uses the intermediate subproblems to solve the overall problem. On a multi-step math word problem dataset (GSM8K), we improve the performance of GPT-2 variants by up to 35% when distilled with our approach compared to CoT. We show that using our approach, it is possible to train a GPT-2-large model (775M) that can outperform a 10X larger GPT-3 (6B) model trained using CoT reasoning. Finally, we also demonstrate that our problem decomposition approach can be used as an alternative to CoT prompting, boosting GPT-3 performance by 40% compared to CoT prompts.
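A compact sketch of the two-component student setup, with `student_generate` as a stand-in for calls to the fine-tuned decomposer and solver models; the prompt formats are assumptions.

```python
# Sketch of decomposer + solver inference (stubbed student models).
def student_generate(model_name: str, prompt: str) -> str:
    raise NotImplementedError("call the corresponding fine-tuned student model here")

def solve(problem: str) -> str:
    # a) decomposer: produce a sequence of simpler sub-problems
    sub_problems = student_generate("decomposer", f"Decompose: {problem}")
    # b) solver: use the intermediate sub-problems to produce the final answer
    return student_generate(
        "solver", f"Problem: {problem}\nSub-problems: {sub_problems}\nAnswer:"
    )
```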
Answering complex questions often requires multi-step reasoning in order to obtain the final answer. Most research into decompositions of complex questions involves open-domain systems, which have shown success in using these decompositions for improved retrieval. In the machine reading setting, however, when decompositions are helpful remains understudied. We conduct experiments on decompositions in machine reading to unify recent work in this space, using a range of models and datasets. We find that decompositions can be helpful in the few-shot case, giving several points of improvement in exact match scores. However, we also show that when models are given access to datasets with around a few hundred or more examples, decompositions are not helpful (and can actually be detrimental). Thus, our analysis implies that models can learn decompositions implicitly even with limited data.
Effective multi-hop question answering (QA) requires reasoning over multiple scattered paragraphs and providing explanations for the answers. Most existing approaches cannot provide an interpretable reasoning process to show how these models arrive at an answer. In this paper, we propose a question decomposition method for multi-hop QA based on Abstract Meaning Representation (QDAMR), which achieves interpretable reasoning by decomposing a multi-hop question into simpler sub-questions and answering them in order. Since annotating decompositions is expensive, we first delegate the complexity of understanding the multi-hop question to an AMR parser. We then decompose the multi-hop question by segmenting the corresponding AMR graph according to the required reasoning type. Finally, we generate the sub-questions with an AMR-to-text generation model and answer them with an off-the-shelf QA model. Experimental results on HotpotQA show that our approach is competitive in terms of interpretable reasoning and that the sub-questions generated by QDAMR are well-formed, outperforming existing question-decomposition-based multi-hop QA approaches.
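A high-level sketch of such a pipeline, with every component stubbed out (AMR parser, graph segmentation rules, AMR-to-text generator, off-the-shelf QA model); the `[ANS]` placeholder used to thread an earlier answer into the next sub-question is an illustrative convention, not necessarily the paper's.

```python
# Stubbed pipeline: parse -> segment -> verbalize sub-questions -> answer in order.
def amr_parse(question: str): ...
def segment_amr(graph, reasoning_type: str): ...
def amr_to_text(subgraph) -> str: ...
def qa_model(question: str, context: str) -> str: ...

def qdamr_answer(question: str, context: str, reasoning_type: str) -> str:
    graph = amr_parse(question)                     # delegate question understanding to AMR
    subgraphs = segment_amr(graph, reasoning_type)  # split by required reasoning type
    answer = ""
    for sg in subgraphs:
        # Thread the previous hop's answer into the next sub-question (assumed convention).
        sub_q = amr_to_text(sg).replace("[ANS]", answer)
        answer = qa_model(sub_q, context)
    return answer
```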
On questions whose answers require commonsense knowledge, language models (e.g., GPT-3) have been used to generate text expressing background knowledge that helps improve performance. However, using such models is costly. In this work, we train smaller language models to generate useful intermediate context, referred to here as elaborations. Our framework alternates between updating two language models, an elaboration generator and an answer predictor, allowing each to influence the other. Using less than 0.5% of GPT-3's parameters, our model outperforms alternatives of similar size and closes the gap with GPT-3 on four commonsense question answering benchmarks. Human evaluations show that the quality of the generated elaborations is high.
Large language models can produce fluent dialogue but often hallucinate factual inaccuracies. While retrieval-augmented models help alleviate this problem, they still face the difficult challenge of reasoning to simultaneously provide correct knowledge and generate a conversational response. In this work, we propose a modular model, Knowledge to Response (K2R), for incorporating knowledge into conversational agents, which breaks this problem into two simpler steps. K2R first generates a knowledge sequence, given the dialogue context, as an intermediate step. After this "reasoning step", the model then attends to its own generated knowledge sequence, as well as the dialogue context, to produce the final response. In detailed experiments, we find that such a model hallucinates less in knowledge-grounded dialogue tasks and has advantages in interpretability and modularity. In particular, it can be used to fuse QA and dialogue systems together, enabling a dialogue agent to give knowledgeable answers, or a QA model to give conversational responses in a zero-shot setting.
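A minimal sketch of the two-step generate-knowledge-then-respond pattern, with `generate` standing in for any seq2seq dialogue model; the prompt strings are assumptions.

```python
# Sketch of the K2R-style two-step decomposition (stubbed model call).
def generate(prompt: str) -> str:
    raise NotImplementedError("plug in a seq2seq dialogue model here")

def respond(dialogue_context: str) -> str:
    # "Reasoning step": produce an intermediate knowledge sequence.
    knowledge = generate(f"{dialogue_context}\nRelevant knowledge:")
    # Response step: attend to the generated knowledge plus the dialogue context.
    return generate(f"{dialogue_context}\nKnowledge: {knowledge}\nResponse:")
```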
Pre-trained language models (PTLMs) have been shown to perform well on natural language tasks. Many prior works leverage structured commonsense, in the form of entities linked by labeled relations in knowledge graphs (KGs), to assist PTLMs. Retrieval approaches use a KG as a separate static module, which limits coverage since KGs contain finite knowledge. Generative approaches train PTLMs on KG triples to increase the scale at which knowledge can be obtained. However, training on symbolic KG entities limits their applicability in tasks involving natural language text, where they ignore the overall context. To mitigate this, we propose a CommonSense Contextualizer (CoSe-Co) conditioned on a sentence as input, making it generically usable in tasks for generating knowledge relevant to the overall context of the input text. To train CoSe-Co, we propose a new dataset comprising sentence and commonsense knowledge pairs. The knowledge inferred by CoSe-Co is diverse and contains novel entities not present in the underlying KG. We augment the generated knowledge in multiple-choice QA and open-ended commonsense reasoning tasks, improving over the current best methods on the CSQA, ARC, QASC, and OBQA datasets. We also demonstrate its applicability in improving baseline models for a paraphrase generation task.
Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications, such as medical diagnosis, negotiation, etc. This paper provides a comprehensive survey of cutting-edge research on reasoning with language model prompting. We introduce relevant research works with comparisons and summaries and provide systematic resources to help beginners. We also discuss potential reasons why such reasoning abilities emerge and highlight future research directions.
This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning, fine-tuning language models on a collection of tasks described via instructions, substantially improves zero-shot performance on unseen tasks. We take a 137B-parameter pretrained language model and instruction-tune it on over 60 NLP tasks verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of the 25 tasks we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that the number of tasks and model scale are key components of the success of instruction tuning.
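A toy illustration of how a single labeled example can be verbalized with several instruction templates to build instruction-tuning data; the templates below are invented for illustration and are not FLAN's released templates.

```python
# Illustrative instruction-template verbalization for one NLI example.
TEMPLATES = [
    "Premise: {premise}\nHypothesis: {hypothesis}\nDoes the premise entail the hypothesis?",
    "{premise}\nBased on the paragraph above, can we conclude that \"{hypothesis}\"?",
    "Read the premise and decide whether the hypothesis follows from it.\n"
    "Premise: {premise}\nHypothesis: {hypothesis}",
]

def verbalize(premise: str, hypothesis: str, label: str):
    """Yield (input, target) pairs for instruction tuning."""
    for template in TEMPLATES:
        yield template.format(premise=premise, hypothesis=hypothesis), label

pairs = list(verbalize("A dog is running in the park.", "An animal is outside.", "yes"))
```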
Multi-hop reasoning requires aggregating multiple documents to answer a complex question. Existing methods usually decompose the multi-hop question into simpler single-hop questions to illustrate an explainable reasoning process. However, they ignore grounding on the supporting facts of each reasoning step, which tends to produce inaccurate decompositions. In this paper, we propose an interpretable stepwise reasoning framework that incorporates both single-hop supporting sentence identification and single-hop question generation at each intermediate step, and utilizes the inference of the current hop for the next until reasoning out the final result. We employ a unified reader model for both intermediate hop reasoning and final hop inference and adopt joint optimization for more accurate and robust multi-hop reasoning. We conduct experiments on two benchmark datasets, HotpotQA and 2WikiMultiHopQA. The results show that our method can effectively boost performance and also yields a better interpretable reasoning process without decomposition supervision.
Existing work on generating hints in intelligent tutoring systems (ITS) has focused mostly on manual and non-personalized feedback. In this work, we explore generating personalized feedback in an ITS. Our personalized feedback can pinpoint correct, incorrect, or missing phrases in a student answer and guide the student toward the correct answer by asking questions in natural language. Our approach combines causal analysis to decompose the student answer with a text-similarity-based NLP transformer model to identify the correct and the incorrect or missing parts. We train few-shot neural question generation and question re-ranking models to show questions addressing the components missing from the student answer, which steer the student toward the correct answer. When tested on a real dialogue-based ITS, our model significantly outperforms both simple and strong baselines in terms of student learning gains. Finally, we show that our personalized corrective feedback system has the potential to improve generative question answering systems.
Language models (LMs) have been shown to possess commonsense knowledge of the physical world, which is essential for completing tasks in everyday situations. However, whether LMs have the capability to generate grounded, executable plans for embodied tasks remains an open question. This is very challenging because LMs do not have an "eye" or "hand" to perceive a realistic environment. In this work, we present the first study on this important research question. We first propose a novel problem formulation named G-PlanET, which takes as input a high-level goal and a table of objects in a specific environment. The expected output is a plan consisting of step-by-step instructions for an agent to execute. To enable the study of this problem, we establish an evaluation protocol and design a dedicated metric to assess the quality of plans. In our extensive experiments, we show that adding flattened tables for encoding environments and using an iterative decoding strategy can both improve the grounded planning ability of LMs. Our analysis of the results also leads to interesting non-trivial findings.
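A sketch of one plausible input linearization, a high-level goal plus a flattened object table fed to a seq2seq planner; the field names and separator format are assumptions, not the paper's exact encoding.

```python
# Illustrative flattening of an environment object table into the planner input.
def flatten_table(objects: list[dict]) -> str:
    rows = []
    for i, obj in enumerate(objects):
        cells = " | ".join(f"{k}: {v}" for k, v in obj.items())
        rows.append(f"row {i}: {cells}")
    return " ; ".join(rows)

def build_planner_input(goal: str, objects: list[dict]) -> str:
    return f"goal: {goal} environment: {flatten_table(objects)}"

example = build_planner_input(
    "put a chilled apple on the table",
    [{"object": "apple", "location": "counter"},
     {"object": "fridge", "location": "kitchen", "state": "closed"}],
)
```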
Solving a complex problem from scratch is often challenging, but it is much easier if we can access other similar problems together with their solutions, a paradigm known as case-based reasoning (CBR). We propose a neuro-symbolic CBR approach (CBR-KBQA) for question answering over large knowledge bases. CBR-KBQA consists of a nonparametric memory that stores cases (questions and logical forms) and a parametric model that can generate a logical form for a new question by retrieving the cases relevant to it. On several KBQA datasets that contain complex questions, CBR-KBQA achieves competitive performance. For example, on the ComplexWebQuestions dataset, CBR-KBQA outperforms the current state of the art by 11% accuracy. Furthermore, we show that CBR-KBQA is capable of using new cases without any further training: by incorporating a few human-labeled examples in the case memory, CBR-KBQA is able to successfully generate logical forms containing unseen KB entities as well as relations.
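A sketch of the case-retrieval step in a CBR-style setup: embed the new question, pull the nearest stored (question, logical form) cases, and prepend them to the generation prompt. The sentence encoder and the case-memory schema are placeholders.

```python
# Stubbed case retrieval plus prompt construction for a CBR-style KBQA reader.
import math

def embed(text: str) -> list[float]:
    raise NotImplementedError("use any sentence encoder here")

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve_cases(question: str, case_memory: list[dict], k: int = 3) -> list[dict]:
    # Each case is assumed to carry "question", "logical_form", and "embedding" keys.
    q_vec = embed(question)
    ranked = sorted(case_memory, key=lambda c: cosine(q_vec, c["embedding"]), reverse=True)
    return ranked[:k]

def build_prompt(question: str, cases: list[dict]) -> str:
    demos = "\n".join(f"Q: {c['question']}\nLF: {c['logical_form']}" for c in cases)
    return f"{demos}\nQ: {question}\nLF:"
```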
There has been great progress in unifying various table-to-text tasks using a single encoder-decoder model trained via multi-task learning (Xie et al., 2022). However, existing methods typically encode task information with a simple dataset name as a prefix to the encoder. This not only limits the effectiveness of multi-task learning, but also hinders the model's ability to generalize to new domains or tasks that were not seen during training, which is crucial for real-world applications. In this paper, we propose compositional task configurations, a set of prompts prepended to the encoder to improve cross-task generalization of unified models. We design the task configurations to explicitly specify the task type, as well as its input and output types. We show that this not only allows the model to better learn shared knowledge across different tasks at training, but also allows us to control the model by composing new configurations that apply novel input-output combinations in a zero-shot manner. We demonstrate via experiments over ten table-to-text tasks that our method outperforms the UnifiedSKG baseline by noticeable margins in both in-domain and zero-shot settings, with average improvements of +0.5 and +12.6 from using a T5-large backbone, respectively.
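A sketch of what a compositional task configuration prefix could look like; the bracketed prompt tokens are assumptions rather than the paper's released configuration strings.

```python
# Illustrative compositional task configuration prepended to the encoder input.
def build_input(task_type: str, input_type: str, output_type: str,
                table: str, text: str) -> str:
    config = f"[task: {task_type}] [input: {input_type}] [output: {output_type}]"
    return f"{config} {text} {table}"

# Composing a new, unseen configuration at inference time (zero-shot control):
zero_shot = build_input(
    "question answering", "table + passage", "short answer",
    table="col: year | champion row 1: 2010 | Spain",
    text="Who won in 2010?",
)
```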
Large language models have shown impressive few-shot results on a wide range of tasks. However, when knowledge is key for such results, as is the case for tasks such as question answering and fact checking, massive parameter counts to store knowledge seem to be needed. Retrieval-augmented models are known to excel at knowledge-intensive tasks without requiring as many parameters, but it is unclear whether they work in few-shot settings. In this work, we present Atlas, a carefully designed and pre-trained retrieval-augmented language model able to learn knowledge-intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT, and NaturalQuestions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy on Natural Questions using only 64 examples, outperforming a 540B-parameter model despite having 50x fewer parameters.
Recently, there has been significant progress in teaching language models to perform step-by-step reasoning to solve complex numerical reasoning tasks. Chain-of-thoughts prompting (CoT) is by far the state-of-the-art method for these tasks. CoT uses language models to perform both reasoning and computation in the multi-step `thought' process. To disentangle computation from reasoning, we propose `Program of Thoughts' (PoT), which uses language models (mainly Codex) to express the reasoning process as a program. The computation is relegated to an external computer, which executes the generated programs to derive the answer. We evaluate PoT on five math word problem datasets (GSM, AQuA, SVAMP, TabMWP, MultiArith) and three financial-QA datasets (FinQA, ConvFinQA, TATQA) for both few-shot and zero-shot setups. Under both few-shot and zero-shot settings, PoT can show an average performance gain over CoT by around 12\% across all the evaluated datasets. By combining PoT with self-consistency decoding, we can achieve SoTA performance on all math problem datasets and near-SoTA performance on financial datasets. All of our data and code are released in Github\footnote{\url{https://github.com/wenhuchen/Program-of-Thoughts}}.
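A minimal sketch of the program-of-thoughts pattern: the model writes a Python program that stores its result in a variable (here assumed to be `ans`), and the interpreter, not the LM, does the computation. `lm` is a stub, and a real deployment would sandbox the generated code before executing it.

```python
# Sketch: delegate computation to the Python interpreter (prompt wording assumed).
def lm(prompt: str) -> str:
    raise NotImplementedError("plug in a code-generation model here")

def pot_answer(question: str):
    program = lm(
        "Write a Python program that computes the answer to the question "
        f"and stores it in a variable named ans.\nQuestion: {question}\nProgram:"
    )
    namespace: dict = {}
    exec(program, namespace)   # executed by the interpreter, not the LM
    return namespace.get("ans")
```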
Current state-of-the-art generative models for open-domain question answering (ODQA) focus on generating direct answers from unstructured textual information. However, a large amount of world knowledge is stored in structured databases and must be accessed using query languages such as SQL. Furthermore, query languages can answer questions that require complex reasoning, while also providing full explainability. In this paper, we propose a hybrid framework that takes both textual and tabular evidence as input and generates either a direct answer or a SQL query, depending on which form can better answer the question. The generated SQL query can then be executed on the associated database to obtain the final answer. To the best of our knowledge, this is the first paper that applies Text2SQL to ODQA tasks. Empirically, we demonstrate that on several ODQA datasets the hybrid method consistently outperforms, by a large margin, baseline models that only take homogeneous input. Specifically, we achieve state-of-the-art performance on the OpenSQuAD dataset using a T5-base model. In a detailed analysis, we demonstrate that being able to generate structured SQL queries consistently brings gains, especially for questions that require complex reasoning.
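A sketch of the answer-or-SQL branching described above, assuming the SQL case can be detected from the generated string and executed with the standard sqlite3 module; `generate` is a stub for the seq2seq model.

```python
# Illustrative hybrid decoding: direct answer vs. executable SQL query.
import sqlite3

def generate(question: str, evidence: str) -> str:
    raise NotImplementedError("plug in the seq2seq model over textual + tabular evidence")

def hybrid_answer(question: str, evidence: str, db_path: str):
    output = generate(question, evidence)
    if output.strip().lower().startswith("select"):   # model chose a SQL query
        with sqlite3.connect(db_path) as conn:
            return conn.execute(output).fetchall()
    return output                                     # model chose a direct answer
```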