Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG): models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, and another which can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state of the art on three open-domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.
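For reference, the two formulations differ only in where the marginalization over the top-k retrieved passages happens. In the paper's notation, with retriever p_η(z|x) and generator p_θ:

```latex
p_{\text{RAG-Sequence}}(y \mid x) \;\approx\; \sum_{z \in \text{top-}k} p_\eta(z \mid x) \prod_{i=1}^{N} p_\theta(y_i \mid x, z, y_{1:i-1})

p_{\text{RAG-Token}}(y \mid x) \;\approx\; \prod_{i=1}^{N} \sum_{z \in \text{top-}k} p_\eta(z \mid x)\, p_\theta(y_i \mid x, z, y_{1:i-1})
```

RAG-Sequence treats a single passage as responsible for the entire output, while RAG-Token lets each generated token draw on a different passage.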
We propose an end-to-end training method for retrieval-augmented open-domain question answering systems that combine information from multiple retrieved documents when generating answers. We model the retrieval decision as a latent variable over sets of relevant documents. Since marginalizing over sets of retrieved documents is computationally hard, we approximate this using an expectation-maximization algorithm. We iteratively estimate the value of our latent variable (the sets of relevant documents for a given question) and then use this estimate to update the retriever and reader parameters. We hypothesize that such end-to-end training allows the training signal to flow to the reader, and then to the retriever, better than stage-wise training. This results in a retriever that is able to select more relevant documents for a question, and a reader that is trained on more accurate documents to generate answers. Experiments on three benchmark datasets demonstrate that our proposed method outperforms all existing approaches of comparable size by 2-3 absolute exact-match points, achieving new state-of-the-art results. Our results also demonstrate the feasibility of learning to retrieve to improve answering, without explicit supervision of retrieval decisions.
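Schematically, with the set of relevant documents Z as the latent variable, the objective and the alternating steps can be rendered as follows (a generic EM sketch matching the abstract's description, not the paper's exact notation):

```latex
\log p(y \mid x) \;=\; \log \sum_{Z} p_\phi(Z \mid x)\, p_\theta(y \mid x, Z)
% E-step: estimate the posterior over document sets, given the answer
q(Z) \;\propto\; p_\phi(Z \mid x)\, p_\theta(y \mid x, Z)
% M-step: update retriever (phi) and reader (theta) parameters
\max_{\phi,\theta}\; \mathbb{E}_{q(Z)}\bigl[\log p_\phi(Z \mid x) + \log p_\theta(y \mid x, Z)\bigr]
```

The E-step is where the answer supervision leaks back into retrieval: document sets that help the reader produce the gold answer get higher posterior weight, even though no retrieval labels exist.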
Large language models have shown impressive few-shot results on a variety of tasks. However, when knowledge is key to such results, as in tasks like question answering and fact checking, massive parameter counts to store that knowledge seem to be required. Retrieval-augmented models are known to excel at knowledge-intensive tasks without needing as many parameters, but it is unclear whether they work in few-shot settings. In this work, we present Atlas, a carefully designed and pre-trained retrieval-augmented language model able to learn knowledge-intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT, and Natural Questions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy on Natural Questions using only 64 examples, outperforming a 540B-parameter model despite having 50x fewer parameters.
Entities, as important carriers of real-world knowledge, play a key role in many NLP tasks. We focus on incorporating entity knowledge into an encoder-decoder framework for informative text generation. Existing approaches tried to index, retrieve, and read external documents as evidence, but they suffered from a large computational overhead. In this work, we propose an encoder-decoder framework with an entity memory, namely EDMem. The entity knowledge is stored in the memory as latent representations, and the memory is pre-trained on Wikipedia along with encoder-decoder parameters. To precisely generate entity names, we design three decoding methods to constrain entity generation by linking entities in the memory. EDMem is a unified framework that can be used on various entity-intensive question answering and generation tasks. Extensive experimental results show that EDMem outperforms both memory-based auto-encoder models and non-memory encoder-decoder models.
Knowledge-intensive language tasks (KILT) usually require a large body of information to provide correct answers. A popular paradigm for solving this problem is to combine a search system with a machine reader, where the former retrieves supporting evidence and the latter examines it to produce answers. Recently, the reader component has witnessed significant advances with the help of large-scale pre-trained generative models. Meanwhile, most existing solutions for the search component rely on the traditional "index-retrieve-then-rank" pipeline, which suffers from a large memory footprint and difficulty in end-to-end optimization. Inspired by recent efforts to build model-based IR models, we propose to replace the traditional multi-step search pipeline with a novel single-step generative model, which can greatly simplify the search process and be optimized in an end-to-end manner. We show that a strong generative retrieval model can be learned with a set of adequately designed pre-training tasks, and adopted to improve a variety of downstream KILT tasks with further fine-tuning. We name the pre-trained generative retrieval model CorpusBrain, as all information about the corpus is encoded in its parameters without the need to construct an additional index. Empirical results show that CorpusBrain can significantly outperform strong baselines on the retrieval task of the KILT benchmark, establishing new state-of-the-art performance. We also show that CorpusBrain works well under zero- and low-resource settings.
Knowledge-intensive tasks, such as open-domain question answering (QA), require access to a large amount of world or domain knowledge. A common approach to knowledge-intensive tasks is to employ a retrieve-then-read pipeline, which first retrieves a handful of relevant contextual documents from an external corpus such as Wikipedia, and then predicts an answer conditioned on the retrieved documents. In this paper, we present a novel perspective for solving knowledge-intensive tasks by replacing the document retriever with a large language model generator. We call our method generate-then-read (GenRead): it first prompts a large language model to generate contextual documents based on a given question, and then reads the generated documents to produce the final answer. Furthermore, we propose a clustering-based prompting method that selects diverse prompts, resulting in generated documents that cover different perspectives and thus better recall acceptable answers. We conduct extensive experiments on three different knowledge-intensive tasks, including open-domain QA, fact checking, and dialogue systems. Notably, GenRead achieves exact-match scores of 71.6 and 54.4 on TriviaQA and WebQ, significantly outperforming the state-of-the-art retrieve-then-read pipeline DPR-FiD by +4.0 and +3.9, without retrieving any documents from any external knowledge source. Finally, we demonstrate that model performance can be further improved by combining retrieval and generation.
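A minimal sketch of the clustering-based prompting idea, under stated assumptions: `embed` and `llm_generate` are hypothetical stand-ins for an encoder and an LLM call, and the prompt template is ours. Demonstration (question, document) pairs are clustered, and one prompt is built per cluster so the generated documents cover different perspectives:

```python
import random
from sklearn.cluster import KMeans

def clustered_generate(question, pool, k=5, shots=5, seed=0):
    """pool: list of (q, doc) demonstration pairs; embed / llm_generate
    are hypothetical stand-ins for an encoder and an LLM completion call."""
    vecs = [embed(q + " " + d) for q, d in pool]          # embed each demo pair
    labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(vecs)
    rng = random.Random(seed)
    docs = []
    for c in range(k):                                     # one prompt per cluster
        demos = [pool[i] for i in range(len(pool)) if labels[i] == c]
        sample = rng.sample(demos, min(shots, len(demos)))
        prompt = "".join(f"Question: {q}\nDocument: {d}\n\n" for q, d in sample)
        prompt += f"Question: {question}\nDocument:"
        docs.append(llm_generate(prompt))                  # one document per cluster
    return docs                                            # k diverse contexts for the reader
```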
A key component of fact verification is the retrieval of evidence, often from multiple documents. Recent approaches use dense representations and condition the retrieval of each document on the previously retrieved ones. The latter step is performed over all the documents in the collection, requiring storing their dense representations in an index, thus incurring a high memory footprint. An alternative paradigm is retrieve-and-rerank, where documents are retrieved using methods such as BM25, their sentences are reranked, and further documents are retrieved conditioned on these sentences, reducing the memory requirements. However, such approaches can be brittle as they rely on heuristics and assume hyperlinks between documents. We propose a novel retrieve-and-rerank method for multi-hop retrieval that consists of a retriever that jointly scores documents in the knowledge source and sentences from previously retrieved documents using an autoregressive formulation, and is guided by a proof system based on natural logic that dynamically terminates the retrieval process if the evidence is deemed sufficient. This method is competitive with current state-of-the-art methods on FEVER, HoVer and FEVEROUS-S, while using 5 to 10 times less memory than competing systems. Evaluation on an adversarial dataset indicates improved stability of our approach compared to commonly deployed threshold-based methods. Finally, the proof system helps humans predict model decisions correctly more often than using the evidence alone.
We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance.
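Since the abstract singles out sentence shuffling and text infilling, here is a minimal sketch of both corruptions. The Poisson(λ=3) span lengths, the single-mask-per-span behaviour, and the roughly 30% masking budget follow the paper's reported setup; the 15% span-start probability is our simplification:

```python
import numpy as np

MASK = "<mask>"

def sentence_permutation(sentences, rng):
    """Randomly shuffle the order of the original sentences."""
    perm = list(sentences)
    rng.shuffle(perm)
    return perm

def text_infilling(tokens, rng, mask_ratio=0.3, lam=3.0):
    """Replace token spans with a single <mask> each; span lengths are drawn
    from Poisson(lam), and a zero-length span just inserts a <mask>."""
    out, i = [], 0
    budget = int(mask_ratio * len(tokens))      # roughly how many tokens to corrupt
    while i < len(tokens):
        if budget > 0 and rng.random() < 0.15:  # heuristically start a span here
            span = int(rng.poisson(lam))
            out.append(MASK)                    # whole span collapses to one mask
            i += span
            budget -= max(span, 1)
        else:
            out.append(tokens[i])
            i += 1
    return out

rng = np.random.default_rng(0)
print(text_infilling("the quick brown fox jumps over the lazy dog".split(), rng))
```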
Recent work on open-domain question answering refers to an external knowledge base using a retriever model, optionally reranks passages with a separate reranker model, and generates an answer using yet another reader model. Despite performing related tasks, these models have separate parameters and are only weakly coupled during training. In this work, we propose to cast the retriever and the reranker as hard-attention mechanisms applied sequentially within the transformer architecture, and to feed the resulting computed representations to the reader. In this singular model architecture, the hidden representations are progressively refined from the retriever to the reranker to the reader, which makes more efficient use of model capacity and also leads to better gradient flow when trained in an end-to-end manner. We also propose a pre-training method to effectively train this architecture. We evaluate our model on the Natural Questions and TriviaQA open datasets and, for a fixed parameter budget, our model outperforms the previous state-of-the-art models by 1.0 and 0.7 exact-match scores.
As demonstrated by GPT-3 and T5, transformers grow in capability as their parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation; these models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker, and generation using only ground truth on the target sequence output. We find large gains on four diverse tasks: zero-shot slot filling, question answering, fact checking, and dialogue, with relative gains of 9% to 34% over the previous state of the art on the respective KILT leaderboards. We make our code available as open source at https://github.com/IBM/kgi-slot-filling/tree/re2g.
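The point about "sources with incomparable scores" is that reranking makes the raw BM25 and dense scores irrelevant: candidates are pooled by document identity and reordered by a single cross-encoder score. A hedged sketch, with `cross_encoder_score` as a hypothetical stand-in:

```python
def ensemble_rerank(query, bm25_hits, dense_hits, top_n=5):
    """bm25_hits / dense_hits: lists of (doc_id, text, score). The raw scores
    live on incomparable scales, so we drop them and let one reranker decide."""
    pooled = {}
    for doc_id, text, _ in bm25_hits + dense_hits:   # union of both candidate sets
        pooled[doc_id] = text                         # dedupe by document id
    scored = [(cross_encoder_score(query, text), doc_id, text)
              for doc_id, text in pooled.items()]     # one comparable score per doc
    scored.sort(reverse=True)
    return scored[:top_n]                             # passages handed to the generator
```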
Large language models can produce fluent dialogue, but often hallucinate factual inaccuracies. While retrieval-augmented models help alleviate this problem, they still face the difficult challenge of simultaneously reasoning to provide correct knowledge and generating conversation. In this work, we propose a modular model, Knowledge to Response (K2R), for incorporating knowledge into conversational agents, which breaks this problem into two easier steps. K2R first generates a knowledge sequence, given the dialogue context, as an intermediate step. After this "reasoning step", the model then attends to its own generated knowledge sequence, as well as the dialogue context, to produce the final response. In detailed experiments, we find that such a model hallucinates less in knowledge-grounded dialogue tasks and has advantages in terms of interpretability and modularity. In particular, it can be used to fuse QA and dialogue systems together, enabling dialogue agents to give knowledgeable answers, or QA models to give conversational responses, in a zero-shot setting.
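A hedged sketch of the two-step decomposition; the callables and the delimiter token are our naming, not the paper's API:

```python
def k2r_respond(dialogue_context, knowledge_model, response_model):
    """knowledge_model / response_model: hypothetical seq2seq decode
    callables (text in, text out)."""
    # Step 1 ("reasoning step"): predict a knowledge sequence from the context.
    knowledge = knowledge_model(dialogue_context)
    # Step 2: produce the final response conditioned on both the context and
    # the model's own generated knowledge.
    response = response_model(dialogue_context + " [knowledge] " + knowledge)
    return knowledge, response
```

The same interface explains the zero-shot fusion claim: plugging a QA model in as `knowledge_model` yields a dialogue agent with knowledgeable answers without retraining either component.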
The task of information retrieval is an important component of many natural language processing systems, such as open-domain question answering. While traditional methods were based on hand-crafted features, continuous representations based on neural networks have recently obtained competitive results. A challenge of using such methods is obtaining supervised data to train the retriever model, corresponding to pairs of queries and supporting documents. In this paper, we propose a technique to learn retriever models, inspired by knowledge distillation, that does not require annotated pairs of queries and documents. Our approach leverages the attention scores of a reader model, used to solve the task based on retrieved documents, to obtain synthetic labels for the retriever. We evaluate our method on question answering, obtaining state-of-the-art results.
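A minimal PyTorch sketch of the distillation signal, assuming per-passage retriever scores and an aggregated per-passage reader attention mass are already computed; treating the reader distribution as the target of a KL loss is our reading of the abstract, not a verbatim reproduction of the paper's objective:

```python
import torch
import torch.nn.functional as F

def distillation_loss(retriever_scores, reader_attention):
    """Push the retriever's distribution over passages toward the (detached)
    distribution implied by the reader's aggregated attention scores."""
    target = F.softmax(reader_attention.detach(), dim=-1)  # synthetic labels
    log_pred = F.log_softmax(retriever_scores, dim=-1)     # retriever distribution
    return F.kl_div(log_pred, target, reduction="batchmean")

# toy usage: 4 questions, 10 retrieved passages each
loss = distillation_loss(torch.randn(4, 10), torch.rand(4, 10))
```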
Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system greatly by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks. The code and trained models have been released at https://github.com/facebookresearch/DPR.
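For reference, the dual-encoder objective DPR trains with is a contrastive loss with in-batch negatives; a minimal PyTorch sketch (the encoders themselves are omitted):

```python
import torch
import torch.nn.functional as F

def in_batch_negatives_loss(q_emb, p_emb):
    """q_emb: (B, d) question embeddings; p_emb: (B, d) embeddings of each
    question's gold passage. Every other passage in the batch serves as a
    negative, so the correct "class" for question i is passage i."""
    sim = q_emb @ p_emb.T                     # (B, B) dot-product similarities
    labels = torch.arange(q_emb.size(0))      # diagonal entries are positives
    return F.cross_entropy(sim, labels)

loss = in_batch_negatives_loss(torch.randn(8, 768), torch.randn(8, 768))
```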
Retrieval-augmented generation models have shown state-of-the-art performance across many knowledge-intensive NLP tasks, such as open-domain question answering and fact verification. These models are trained to produce the final output given retrieved passages, which can be irrelevant to the original query, leading to learning spurious cues or answer memorization. This work introduces a method to incorporate the evidentiality of passages (whether a passage contains the correct evidence to support the output) into training the generator. We introduce a multi-task learning framework that jointly generates the final output and predicts the evidentiality of each passage, leveraging a new task-agnostic method to obtain silver evidentiality labels for supervision. Our experiments on five datasets across three knowledge-intensive tasks show that our new evidentiality-guided generator significantly outperforms its direct counterpart of the same model size and advances the state of the art on FaVIQ-Ambig. We attribute these improvements to the auxiliary multi-task learning and the silver evidentiality mining technique.
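A hedged sketch of what such a multi-task objective can look like: the usual sequence generation loss plus a per-passage binary evidentiality prediction supervised by the mined silver labels. The mixing weight alpha and the shape of the heads are our assumptions, not the paper's specification:

```python
import torch
import torch.nn.functional as F

def evidentiality_multitask_loss(gen_logits, gen_targets,
                                 evid_logits, silver_labels, alpha=0.5):
    """gen_logits: (B, T, V) generator outputs; gen_targets: (B, T) token ids;
    evid_logits: (B, K) one logit per retrieved passage; silver_labels: (B, K)
    in {0, 1}, the mined evidentiality labels. alpha: assumed mixing weight."""
    gen_loss = F.cross_entropy(gen_logits.transpose(1, 2), gen_targets)
    evid_loss = F.binary_cross_entropy_with_logits(evid_logits,
                                                   silver_labels.float())
    return gen_loss + alpha * evid_loss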
To address the increasing demands of real-world applications, research on knowledge-intensive NLP (KI-NLP) should advance by capturing the challenges of a truly open-domain environment: web-scale knowledge, lack of structure, inconsistent quality, and noise. To this end, we propose a new setup for evaluating existing KI-NLP tasks in which we generalize the background corpus to a universal web snapshot. We repurpose KILT, a standard KI-NLP benchmark initially developed for Wikipedia, and ask systems to use a subset of CCNet, the Sphere corpus, as the knowledge source. Compared to Wikipedia, Sphere is orders of magnitude larger and better reflects the full breadth of knowledge on the internet. We find that despite potential gaps in coverage, challenges of scale, lack of structure, and lower quality, retrieval from Sphere enables a state-of-the-art retrieval system to match and even outperform Wikipedia-based models on several KILT tasks, even when we aggressively filter content that looks like Wikipedia. We also observe that a single dense passage index over Wikipedia can outperform its sparse BM25 counterpart, while on Sphere this is not yet achievable. To encourage further research in this area and to minimize the community's reliance on proprietary, black-box search engines, we will share our indices, evaluation metrics, and infrastructure.
Much recent research on information retrieval has focused on how to transfer from one task (typically with abundant supervised data) to various other tasks where supervision is limited, with the implicit assumption that it is possible to generalize from one task to all the rest. However, this overlooks the fact that there are many diverse and unique retrieval tasks, each targeting different search intents, queries, and search domains. In this paper, we suggest working on few-shot retrieval, where each task comes with a short description and a few examples. To amplify the power of a few examples, we propose prompt-based query generation for retrievers (Promptagator), which leverages a large language model (LLM) as a few-shot query generator and creates task-specific retrievers based on the generated data. Powered by the LLM's generalization ability, Promptagator makes it possible to create task-specific end-to-end retrievers based on just a few examples, without using Natural Questions or MS MARCO to train question generators or dual encoders. Surprisingly, prompting the LLM with no more than 8 examples allows dual encoders to outperform heavily engineered models trained on MS MARCO, such as ColBERT v2, by more than 1.2 nDCG on average across 11 retrieval sets. Further training standard-size rerankers on the same generated data brings an additional 5.0-point nDCG improvement. Our study establishes that query generation can be far more effective than previously observed, especially when a small amount of task-specific knowledge is given.
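A hedged sketch of the query-generation step: the task description and up to 8 (document, query) examples are formatted into a prompt, and the LLM produces a synthetic query for each corpus document. The template wording and the `llm` callable are our assumptions:

```python
def make_query_gen_prompt(task_description, examples, new_document):
    """examples: up to ~8 (document, query) pairs supplied with the task."""
    parts = [task_description.strip(), ""]
    for doc, query in examples:
        parts += [f"Document: {doc}", f"Query: {query}", ""]
    parts += [f"Document: {new_document}", "Query:"]
    return "\n".join(parts)

def generate_training_pairs(task_description, examples, corpus, llm):
    # One synthetic (query, document) pair per corpus document; these pairs
    # then become the training data for the task-specific dual encoder.
    return [(llm(make_query_gen_prompt(task_description, examples, doc)), doc)
            for doc in corpus]
```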
Recent work on open-domain question answering has shown that there is a large discrepancy in model performance between novel test questions and those that largely overlap with training questions. However, it is unclear which aspects of novel questions make them challenging. Drawing on studies of systematic generalization, we introduce and annotate questions according to three categories that measure different levels and kinds of generalization: training-set overlap, compositional generalization (comp-gen), and novel-entity generalization (novel-entity). When evaluating six popular parametric and non-parametric models, we find that, for the established Natural Questions and TriviaQA datasets, even the strongest model performance on comp-gen/novel-entity questions is 13.1/5.4% and 9.6/1.5% lower, respectively, than on the full test set, indicating the challenge posed by these types of questions. Furthermore, we show that while non-parametric models can handle questions containing novel entities relatively well, they struggle with those requiring compositional generalization. Lastly, we find that the key factors behind question difficulty are: cascading errors from the retrieval component, the frequency of the question pattern, and the frequency of the entity.
Large pre-trained language models have recently enabled open-ended generation frameworks (e.g., prompt-to-text NLG) to tackle a variety of tasks going beyond the traditional data-to-text generation. While this framework is more general, it is under-specified and often leads to a lack of controllability, restricting its real-world usage. We propose a new grounded keys-to-text generation task: the task is to generate a factual description about an entity given a set of guiding keys and grounding passages. To address this task, we introduce a new dataset, called EntDeGen. Inspired by recent QA-based evaluation measures, we propose an automatic metric, MAFE, for the factual correctness of generated descriptions. Our EntDescriptor model is equipped with strong rankers to fetch helpful passages and generate entity descriptions. Experimental results show a good correlation (60.14) between our proposed metric and human judgments of factuality. Our rankers significantly improved the factual correctness of generated descriptions (15.95% and 34.51% relative gains in recall and precision). Finally, our ablation study highlights the benefit of combining keys and groundings.
Recent methods for knowledge-grounded dialogue generate responses by incorporating information from external textual documents. These methods do not require the exact document to be known during training, and rely on a retrieval system to fetch relevant documents from a large index. The documents used to generate the responses are modeled as latent variables whose prior probabilities need to be estimated. Models such as RAG marginalize the document probabilities over the documents retrieved from the index to define a log-likelihood loss function that is optimized end-to-end. In this paper, we develop a variational approach to the above technique in which we instead maximize the evidence lower bound (ELBO). Using a collection of three publicly available open-conversation datasets, we demonstrate how the posterior distribution, which has information from the ground-truth response, allows for a better approximation of the objective function during training. To overcome the challenges associated with sampling over a large knowledge collection, we develop an efficient approach to approximate the ELBO. To the best of our knowledge, we are the first to apply variational training to open-scale unsupervised knowledge-grounded dialogue systems.
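For reference, the standard ELBO for such a latent-document model, with dialogue context x, response y, and latent document z (a generic rendering consistent with the abstract, not the paper's exact notation):

```latex
\log p(y \mid x) \;=\; \log \sum_{z} p(z \mid x)\, p(y \mid x, z)
\;\ge\; \mathbb{E}_{q(z \mid x, y)}\bigl[\log p(y \mid x, z)\bigr]
\;-\; \mathrm{KL}\bigl(q(z \mid x, y)\,\big\|\,p(z \mid x)\bigr)
```

Because the posterior q conditions on the ground-truth response y, it can place mass on documents the prior retriever would miss, which is exactly the training-time advantage the abstract describes.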
Systems for knowledge-intensive tasks such as open-domain question answering (QA) usually consist of two stages: efficient retrieval of relevant documents from a large corpus and detailed reading of the selected documents to generate answers. Retrievers and readers are usually modeled separately, which necessitates a cumbersome implementation and is hard to train and adapt in an end-to-end fashion. In this paper, we revisit this design and eschew the separate architecture and training in favor of a single Transformer that performs Retrieval as Attention (ReAtt), and end-to-end training solely based on supervision from the end QA task. We demonstrate for the first time that a single model trained end-to-end can achieve both competitive retrieval and QA performance, matching or slightly outperforming state-of-the-art separately trained retrievers and readers. Moreover, end-to-end adaptation significantly boosts its performance on out-of-domain datasets in both supervised and unsupervised settings, making our model a simple and adaptable solution for knowledge-intensive tasks. Code and models are available at https://github.com/jzbjyb/ReAtt.