Answering natural language questions using information from tables (TableQA) is of considerable recent interest. In many applications, tables occur not in isolation, but embedded in unstructured text. Often, a question is best answered by matching parts of it to table cell contents or to spans of the unstructured text, and extracting the answer from either source. This leads to the new space of TextTableQA problems introduced by the HybridQA dataset. Existing adaptations of table representations to transformer-based reading comprehension (RC) architectures fail to tackle the distinct modalities of the two representations through a single system. Training such systems is further challenged by the need for distant supervision: to reduce cognitive burden, training instances usually include just the question and answer, the latter matching multiple table rows and text spans. This leads to a noisy multi-instance training regime involving not only rows of the table, but also spans of linked text. We respond to these challenges by proposing MITQA, a new TextTableQA system that explicitly models the different but closely related probability spaces of table row selection and text span selection. Our experiments indicate the superiority of our approach compared with recent baselines. The proposed method is currently at the top of the HybridQA leaderboard with a held-out test set, achieving 21% absolute improvement on both EM and F1 scores over previously published results.
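The abstract does not come with an implementation; the following pure-Python sketch only illustrates the core idea of keeping table-row and text-span candidates in separate, separately normalized probability spaces and extracting the answer from whichever source scores highest. All names and scores below are hypothetical.

```python
import math

def softmax(scores):
    """Normalize raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def answer_from_hybrid(row_candidates, span_candidates):
    """Pick an answer from table rows or linked-text spans, each scored
    in its own probability space (hypothetical interface).

    row_candidates / span_candidates: lists of (answer_text, raw_score).
    """
    row_probs = softmax([s for _, s in row_candidates])
    span_probs = softmax([s for _, s in span_candidates])
    # Pair each candidate answer with its source-specific probability.
    scored = [(p, a, "table") for (a, _), p in zip(row_candidates, row_probs)]
    scored += [(p, a, "text") for (a, _), p in zip(span_candidates, span_probs)]
    prob, answer, source = max(scored)
    return answer, source, prob

# Toy usage: a table-row candidate wins over the text spans.
rows = [("1969", 2.3), ("1972", 0.4)]
spans = [("Apollo 11", 1.1), ("1969", 1.9)]
print(answer_from_hybrid(rows, spans))
```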
Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system greatly by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks. The code and trained models have been released at https://github.com/facebookresearch/DPR.
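As a rough illustration of the dual-encoder training objective described above (a sketch under assumptions, not the released DPR code), the snippet below scores every question against every passage in a batch and trains with in-batch negatives: row i's positive passage is passage i, and the other passages serve as negatives. The linear layers are stand-ins for the two BERT encoders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, batch = 128, 4
q_enc = nn.Linear(300, dim)   # stand-in question encoder
p_enc = nn.Linear(300, dim)   # stand-in passage encoder

q_feats = torch.randn(batch, 300)  # pretend question features
p_feats = torch.randn(batch, 300)  # aligned positive passages

q = q_enc(q_feats)                 # (batch, dim) question embeddings
p = p_enc(p_feats)                 # (batch, dim) passage embeddings

# Similarity matrix: entry (i, j) is sim(question i, passage j).
# The diagonal holds the positives; off-diagonal entries are
# in-batch negatives, so the loss is a softmax cross-entropy.
sim = q @ p.T
labels = torch.arange(batch)
loss = F.cross_entropy(sim, labels)
loss.backward()
print(float(loss))
```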
Multi-hop reasoning (i.e., reasoning across two or more documents) is a key ingredient for NLP models that leverage large corpora to exhibit broad knowledge. To retrieve evidence passages, multi-hop models must contend with a fast-growing search space across hops, represent complex queries that combine multiple information needs, and resolve ambiguity about the best order in which to hop between training passages. We tackle these problems via Baleen, a system that improves the accuracy of multi-hop retrieval while learning robustly from weak training signals in the many-hop setting. To tame the search space, we propose condensed retrieval, a pipeline that summarizes the passages retrieved after each hop into a single compact context. To model complex queries, we introduce a focused late interaction retriever that allows different parts of the same query representation to match disparate relevant passages. Lastly, to infer the hopping dependencies among unordered training passages, we devise latent hop ordering, a weak-supervision strategy in which the trained retriever itself selects the sequence of hops. We evaluate Baleen on retrieval for two-hop question answering and many-hop claim verification, establishing state-of-the-art performance.
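The condensed-retrieval loop the abstract describes can be sketched schematically: after each hop, retrieved passages are condensed into a few relevant facts that are appended to the query state, keeping the context compact as hops accumulate. The retriever and condenser below are toy word-overlap stand-ins, not Baleen's neural components.

```python
# Toy corpus: each passage is a list of sentences.
CORPUS = [
    ["Baleen studies multi-hop retrieval.", "It condenses evidence each hop."],
    ["HoVer is a many-hop claim verification dataset.", "It uses Wikipedia."],
]

def retrieve(query, k=2):
    # Stand-in retriever: rank passages by word overlap with the query.
    words = set(query.lower().split())
    ranked = sorted(
        CORPUS,
        key=lambda p: -len(words & set(" ".join(p).lower().split())))
    return ranked[:k]

def condense(query, passages):
    # Stand-in condenser: keep only sentences sharing a word with the
    # query, mimicking the summarize-into-compact-context step.
    words = set(query.lower().split())
    return [s for p in passages for s in p
            if words & set(s.lower().split())]

def multi_hop_retrieval(question, num_hops=2):
    facts = []  # compact context accumulated across hops
    for _ in range(num_hops):
        # The query grows only by condensed facts, not whole passages.
        query = question + " " + " ".join(facts)
        for sent in condense(query, retrieve(query)):
            if sent not in facts:
                facts.append(sent)
    return facts

print(multi_hop_retrieval("What does Baleen condense each hop?"))
```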
Recent work on open domain question answering (QA) assumes strong supervision of the supporting evidence and/or assumes a blackbox information retrieval (IR) system to retrieve evidence candidates. We argue that both are suboptimal, since gold evidence is not always available, and QA is fundamentally different from IR. We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs and without any IR system. In this setting, evidence retrieval from all of Wikipedia is treated as a latent variable. Since this is impractical to learn from scratch, we pre-train the retriever with an Inverse Cloze Task. We evaluate on open versions of five QA datasets. On datasets where the questioner already knows the answer, a traditional IR system such as BM25 is sufficient. On datasets where a user is genuinely seeking an answer, we show that learned retrieval is crucial, outperforming BM25 by up to 19 points in exact match.
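The Inverse Cloze Task used above for retriever pre-training can be sketched as a simple data-construction step: a sentence is held out from a passage and treated as a pseudo-question whose positive "evidence" is the rest of the passage. A minimal, hypothetical version:

```python
import random

def inverse_cloze_examples(passages, seed=0):
    """Build (pseudo-question, pseudo-evidence) pairs for retriever
    pre-training: one sentence is held out as the query and its
    surrounding context becomes the retrieval target (sketch)."""
    rng = random.Random(seed)
    examples = []
    for sentences in passages:
        if len(sentences) < 2:
            continue  # need at least one sentence of remaining context
        i = rng.randrange(len(sentences))
        query = sentences[i]
        evidence = " ".join(sentences[:i] + sentences[i + 1:])
        examples.append((query, evidence))
    return examples

passages = [
    ["ORQA learns retrieval latently.", "It needs no gold evidence.",
     "Wikipedia is the evidence corpus."],
]
print(inverse_cloze_examples(passages))
```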
Machine reading comprehension (MRC) is a long-standing topic in natural language processing (NLP). The MRC task aims to answer a question based on a given context. Recent studies focus on multi-hop MRC, a more challenging extension of MRC in which answering a question requires disjoint pieces of information spread across the context. Due to the complexity and importance of multi-hop MRC, a large number of studies have focused on this topic in recent years; it is therefore necessary and worthwhile to review the related literature. This study aims to investigate recent advances in multi-hop MRC approaches based on 31 studies from 2018 to 2022. In this regard, first the multi-hop MRC problem definition will be introduced, then the 31 models will be reviewed in detail with a strong focus on their multi-hop aspects and categorized based on their main techniques. Finally, a fine-grained comprehensive comparison of the models and techniques will be presented.
Question answering (QA) is one of the most challenging problems in natural language processing (NLP). A QA system attempts to produce an answer for a given question; these answers can be generated from unstructured or structured text. QA is therefore considered an important research area that can be used to evaluate text-understanding systems. A large volume of QA research has been devoted to the English language, investigating the most advanced techniques and achieving state-of-the-art results. However, research on Arabic question answering has progressed at a much slower pace, due to the scarcity of research efforts on Arabic QA and the lack of large benchmark datasets. Recently, many pre-trained language models have achieved high performance on many Arabic NLP problems. In this work, we evaluate state-of-the-art pre-trained transformer models for Arabic QA using four reading-comprehension datasets: the Arabic-SQuAD, ARCD, AQAD, and TyDiQA-GoldP datasets. We fine-tune and compare the performance of the AraBERTv2-base model, the AraBERTv0.2-large model, and the AraELECTRA model. Finally, we provide an analysis to understand and interpret the low performance obtained by some of the models.
Researchers produce thousands of scholarly documents containing valuable technical knowledge. The community faces the laborious task of reading these documents to identify, extract, and synthesize information. To automate information gathering, document-level question answering (QA) offers a flexible framework where human-posed questions can be adapted to extract diverse knowledge. Finetuning QA systems requires access to labeled data (tuples of context, question and answer). However, data curation for document QA is uniquely challenging because the context (i.e. answer evidence passage) needs to be retrieved from potentially long, ill-formatted documents. Existing QA datasets sidestep this challenge by providing short, well-defined contexts that are unrealistic in real-world applications. We present a three-stage document QA approach: (1) text extraction from PDF; (2) evidence retrieval from extracted texts to form well-posed contexts; (3) QA to extract knowledge from contexts to return high-quality answers -- extractive, abstractive, or Boolean. Using QASPER for evaluation, our detect-retrieve-comprehend (DRC) system achieves a +7.19 improvement in Answer-F1 over existing baselines while delivering superior context selection. Our results demonstrate that DRC holds tremendous promise as a flexible framework for practical scientific document QA.
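The three-stage detect-retrieve-comprehend design reads as a simple composition. The sketch below wires the stages together with toy stand-ins; none of the function names or behaviors are taken from the paper's released code, and a real system would use a PDF parser, a trained retriever, and a QA model at the respective steps.

```python
def extract_text(pdf_path):
    # Stage 1 (detect): stand-in for PDF text extraction.
    return ["Transformers are neural sequence models.",
            "QASPER contains questions over NLP papers."]

def retrieve_evidence(question, paragraphs, k=1):
    # Stage 2 (retrieve): stand-in lexical ranker that forms a
    # well-posed context from the extracted paragraphs.
    words = set(question.lower().split())
    ranked = sorted(paragraphs,
                    key=lambda p: -len(words & set(p.lower().split())))
    return " ".join(ranked[:k])

def answer(question, context):
    # Stage 3 (comprehend): stand-in reader; the real system returns
    # extractive, abstractive, or Boolean answers from the context.
    return context

def document_qa(question, pdf_path):
    paragraphs = extract_text(pdf_path)
    context = retrieve_evidence(question, paragraphs)
    return answer(question, context)

print(document_qa("What does QASPER contain?", "paper.pdf"))
```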
Current state-of-the-art generative models for open-domain question answering (ODQA) focus on generating direct answers from unstructured textual information. However, a large amount of world knowledge is stored in structured databases, and needs to be accessed using query languages such as SQL. Furthermore, query languages can answer questions that require complex reasoning, while also providing full explainability. In this paper, we propose a hybrid framework that takes both textual and tabular evidence as input and generates either direct answers or SQL queries, depending on which form can better answer the question. The generated SQL queries can then be executed on the associated databases to obtain the final answers. To the best of our knowledge, this is the first paper that applies Text2SQL to the ODQA task. Empirically, we demonstrate that on several ODQA datasets, the hybrid method consistently outperforms baseline models that take only homogeneous input, by a large margin. Specifically, we achieve state-of-the-art performance on the OpenSQuAD dataset using a T5-base model. In a detailed analysis, we demonstrate that being able to generate structured SQL queries consistently brings gains, especially for questions that require complex reasoning.
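The "direct answer or SQL" dispatch described above can be sketched with sqlite3: if the generated string looks like a query, execute it on the linked database; otherwise return it verbatim. The two model outputs below are hypothetical stand-ins for what the generator would produce.

```python
import sqlite3

def resolve(generated, db):
    """Return a final answer from either a direct answer string or a
    generated SQL query (a sketch of the dispatch, not the paper's code)."""
    if generated.strip().lower().startswith("select"):
        rows = db.execute(generated).fetchall()
        return rows[0][0] if rows else None
    return generated  # the model chose to answer directly from text

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE games (team TEXT, wins INTEGER)")
db.execute("INSERT INTO games VALUES ('Leeds', 12), ('York', 9)")

# Two hypothetical generator outputs: one SQL query, one direct answer.
print(resolve("SELECT team FROM games ORDER BY wins DESC LIMIT 1", db))  # Leeds
print(resolve("Paris", db))                                              # Paris
```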
The objective of automatic question answering (QA) systems is to provide answers to user queries in a time-efficient manner. The answers are usually found in databases (or knowledge bases) or in a collection of documents commonly referred to as a corpus. Over the past few decades, knowledge acquisition has proliferated, and consequently the number of new scientific articles in the biomedical domain has grown exponentially. As a result, it has become difficult to keep track of all the information in the domain, even for domain experts. With the improvements in commercial search engines, users can type in their queries and obtain a small set of the most relevant documents, and in some cases relevant snippets from those documents. However, manually finding the required information or answers can still be tedious and time-consuming. This necessitates the development of efficient QA systems that aim to provide exact and precise answers to natural language questions in the biomedical domain. In this paper, we introduce the basic methodologies used for developing general-domain QA systems, followed by a thorough investigation of the different aspects of biomedical QA systems, including benchmark datasets and several proposed approaches, using both structured databases and collections of texts. We also explore the limitations of current systems and potential avenues for further advancement.
Table-and-text hybrid question answering (HybridQA) is a widely used and challenging NLP task commonly applied in the financial and scientific domains. Early research focused on migrating methods from other QA tasks to HybridQA, while with further research more and more HybridQA-specific methods have been presented. Despite the rapid development of HybridQA, a systematic survey summarizing the main techniques and advancing further research is still missing. We therefore present this work to summarize the current HybridQA benchmarks and methods, and then analyze the challenges and future directions of this task. The contributions of this paper are threefold: (1) the first survey, to the best of our knowledge, covering benchmarks, methods, and challenges for HybridQA; (2) a systematic investigation with a reasoned comparison of existing systems to articulate their advantages and shortcomings; (3) a detailed analysis of challenges along four important dimensions to shed light on future directions.
Multi-hop machine reading comprehension is a challenging task with the aim of answering a question based on disjoint pieces of information across different passages. Evaluation metrics and datasets are a vital part of multi-hop MRC, since it is not possible to train and evaluate models without them; moreover, the challenges posed by datasets are often an important motivation for improving existing models. Given the increasing attention to this field, it is necessary and worthwhile to review them in detail. This study aims to present a comprehensive survey of recent advances in multi-hop MRC evaluation metrics and datasets. In this regard, first the multi-hop MRC problem definition will be presented, then the evaluation metrics will be investigated with respect to their multi-hop aspects. In addition, 15 multi-hop datasets from 2017 to 2022 are reviewed in detail, with a comprehensive analysis at the end. Finally, open issues in this field are discussed.
Large pre-trained language models have recently enabled open-ended generation frameworks (e.g., prompt-to-text NLG) to tackle a variety of tasks going beyond the traditional data-to-text generation. While this framework is more general, it is under-specified and often leads to a lack of controllability, restricting its real-world usage. We propose a new grounded keys-to-text generation task: the task is to generate a factual description about an entity given a set of guiding keys, and grounding passages. To address this task, we introduce a new dataset, called EntDeGen. Inspired by recent QA-based evaluation measures, we propose an automatic metric, MAFE, for factual correctness of generated descriptions. Our EntDescriptor model is equipped with strong rankers to fetch helpful passages and generate entity descriptions. Experimental results show a good correlation (60.14) between our proposed metric and human judgments of factuality. Our rankers significantly improved the factual correctness of generated descriptions (15.95% and 34.51% relative gains in recall and precision). Finally, our ablation study highlights the benefit of combining keys and groundings.
Existing question answering (QA) datasets fail to train QA systems to perform complex reasoning and provide explanations for answers. We introduce HOTPOTQA, a new dataset with 113k Wikipedia-based question-answer pairs with four key features: (1) the questions require finding and reasoning over multiple supporting documents to answer; (2) the questions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas; (3) we provide sentence-level supporting facts required for reasoning, allowing QA systems to reason with strong supervision and explain the predictions; (4) we offer a new type of factoid comparison questions to test QA systems' ability to extract relevant facts and perform necessary comparison. We show that HOTPOTQA is challenging for the latest QA systems, and the supporting facts enable models to improve performance and make explainable predictions.
In open-book question answering (OBQA) tasks, selecting the relevant passages and sentences from distracting information is crucial for reasoning out the answer to a question. The HotpotQA dataset is designed to teach and evaluate systems on both passage ranking and sentence selection. Many existing frameworks use separate models to select relevant passages and sentences respectively. Such systems not only have high complexity in terms of model parameters, but also fail to take advantage of training the two tasks together, even though one task can be beneficial to the other. In this work, we present a simple yet effective framework to address these limitations by jointly ranking passages and selecting sentences. Furthermore, we propose consistency and similarity constraints to promote the correlation and interaction between passage ranking and sentence selection. Experiments show that our framework achieves results competitive with previous systems and outperforms the baseline by 28% in terms of exact matching of relevant sentences on the HotpotQA dataset.
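One way to read the joint objective with a consistency constraint: tie the two task losses together and penalize disagreement between a passage's ranking score and the scores of its own sentences. The tensors and the 0.1 weight below are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

# Toy scores for 4 passages with 3 sentences each (hypothetical shapes,
# as if produced by a shared encoder).
passage_logits = torch.randn(4, requires_grad=True)
sentence_logits = torch.randn(4, 3, requires_grad=True)
gold_passage = torch.tensor(1)            # index of the gold passage
gold_sentences = torch.tensor([[0., 0., 0.],
                               [1., 0., 1.],
                               [0., 0., 0.],
                               [0., 0., 0.]])

# Task 1: passage ranking as classification over passages.
rank_loss = F.cross_entropy(passage_logits.unsqueeze(0),
                            gold_passage.unsqueeze(0))
# Task 2: sentence selection as per-sentence binary classification.
sent_loss = F.binary_cross_entropy_with_logits(sentence_logits,
                                               gold_sentences)
# Consistency: a passage's score should agree with its best sentence.
consistency = F.mse_loss(passage_logits,
                         sentence_logits.max(dim=1).values)

loss = rank_loss + sent_loss + 0.1 * consistency
loss.backward()
print(float(loss))
```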
In real-world question answering scenarios, hybrid forms combining both tabular and textual content have attracted more and more attention, among which numerical reasoning problems are some of the most typical and challenging. Existing methods usually adopt an encoder-decoder framework to represent the hybrid content and generate answers. However, such a framework cannot capture the rich relationships among numerical values, table schemas, and textual information on the encoder side, and its decoder uses a simple predefined operator classifier that is not flexible enough to handle numerical reasoning processes with diverse expressions. To address these problems, this paper proposes a Relational Graph enhanced Hybrid table-text Numerical reasoning model with Tree decoder (RegHNT). It models numerical question answering over table-text hybrid content as an expression-tree generation task. Moreover, we propose a novel relational graph modeling method that models the alignment between questions, tables, and paragraphs. We validated our model on the publicly available table-text hybrid QA benchmark TAT-QA. The proposed RegHNT significantly outperforms the baseline models and achieves state-of-the-art results; we publicly released the source code and data at https://github.com/lfy79001/RegHNT (2022-05-05).
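The abstract frames numerical table-text QA as generating an expression tree rather than picking a single predefined operator. The toy snippet below only shows what evaluating such a decoded tree looks like (a sketch of the output format, not the model); the example values are made up.

```python
# An expression tree as nested tuples: (operator, left, right), with
# numbers at the leaves copied from the table or text.
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def evaluate(node):
    """Recursively evaluate a decoded expression tree."""
    if isinstance(node, tuple):
        op, left, right = node
        return OPS[op](evaluate(left), evaluate(right))
    return node  # leaf: a numerical value

# Hypothetical decode for a relative-change question:
# (120.5 - 98.0) / 98.0
tree = ("/", ("-", 120.5, 98.0), 98.0)
print(round(evaluate(tree), 4))  # 0.2296
```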
Retrieval-augmented generation models have shown state-of-the-art performance across many knowledge-intensive NLP tasks such as open question answering and fact verification. These models are trained to generate the final output given retrieved passages that can be irrelevant to the original query, leading to learning spurious cues or answer memorization. This work introduces a method to incorporate the evidentiality of passages, i.e., whether a passage contains correct evidence to support the output, into training the generator. We introduce a multi-task learning framework to jointly generate the final output and predict the evidentiality of each passage, leveraging a new task-agnostic method to obtain silver evidentiality labels for supervision. Our experiments on five datasets across three knowledge-intensive tasks show that our new evidentiality-guided generator significantly outperforms its direct counterpart of the same model size and advances the state of the art on FaVIQ-Ambig. We attribute these improvements to both the auxiliary multi-task learning and the silver evidentiality mining techniques.
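A minimal sketch of the multi-task objective described above: the generator's usual sequence loss plus a per-passage evidentiality classification loss against (silver) labels. The shapes, the shared-parameters assumption, and the 0.5 weight are illustrative only.

```python
import torch
import torch.nn.functional as F

# Hypothetical model outputs for one example with 3 retrieved passages.
gen_logits = torch.randn(5, 100, requires_grad=True)  # 5 tokens, vocab 100
gold_tokens = torch.randint(0, 100, (5,))
evid_logits = torch.randn(3, requires_grad=True)      # one score per passage
silver_evid = torch.tensor([1., 0., 0.])              # mined silver labels

# Primary task: generate the final output.
gen_loss = F.cross_entropy(gen_logits, gold_tokens)
# Auxiliary task: predict whether each passage is evidential.
evid_loss = F.binary_cross_entropy_with_logits(evid_logits, silver_evid)

# Joint training keeps generation primary; evidentiality is auxiliary.
loss = gen_loss + 0.5 * evid_loss
loss.backward()
print(float(gen_loss), float(evid_loss))
```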
Beyond topical relevance, passage ranking for open-domain factoid question answering also requires a passage to contain an answer (answerability). While a few recent studies have incorporated some reading capability into rankers to account for answerability, ranking is still hampered by the noisy nature of the training data typically available in this area, which treats any passage containing the answer entity as a positive sample. However, the answer entity in a passage is not necessarily related to the given question. To address this problem, we propose a generative adversarial neural network based approach for passage reranking, called PReGAN, which incorporates a discriminator on answerability in addition to topical relevance. The goal is to force the generator to rank higher the passages that are both topically relevant and contain an answer. Experiments on five public datasets show that PReGAN can better rank appropriate passages, which in turn boosts the effectiveness of QA systems, and outperforms existing approaches without using external data.
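The adversarial setup can be caricatured in a few lines: the generator proposes a ranking, and its reward combines a relevance discriminator with the additional answerability discriminator. Everything below (scores, the product-of-signals reward, the policy-gradient-style update) is a toy stand-in rather than the paper's training procedure.

```python
import torch

# Toy scores for 4 candidate passages (higher is better).
gen_scores = torch.randn(4, requires_grad=True)     # generator's ranking scores
relevance = torch.tensor([0.9, 0.2, 0.7, 0.1])      # relevance discriminator
answerability = torch.tensor([0.1, 0.8, 0.9, 0.2])  # answerability discriminator

# Reward passages that the discriminators accept on BOTH criteria.
reward = relevance * answerability                  # passage 2 wins (0.63)

# Push generator probability mass toward high-reward passages.
probs = torch.softmax(gen_scores, dim=0)
gen_loss = -(probs * reward).sum()
gen_loss.backward()
print(probs.detach(), float(gen_loss))
```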
In conversational machine reading, systems need to interpret natural language rules, answer high-level questions such as "May I qualify for VA health care benefits?", and ask follow-up clarification questions whose answers are necessary to answer the original question. However, existing works assume the rule text is provided for each user question, which neglects the essential retrieval step in real scenarios. In this work, we propose and investigate an open-retrieval setting of conversational machine reading. In the open-retrieval setting, the relevant rule texts are unknown, so a system needs to retrieve question-relevant evidence from a collection of rule texts and answer the user's high-level question according to multiple retrieved rule texts in a conversational manner. We propose MUDERN, a multi-passage discourse-aware entailment reasoning network that extracts the conditions in rule texts through discourse segmentation and conducts multi-passage entailment reasoning to either answer the user's question directly or raise a clarification follow-up question to inquire for more information. On our newly created OR-ShARC dataset, MUDERN achieves state-of-the-art performance, outperforming existing single-passage conversational machine reading models as well as a new multi-passage conversational machine reading baseline by a large margin. In addition, we conduct in-depth analyses that provide new insights into this new setting and our model.
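The decision step described above (answer directly vs. ask a follow-up) can be sketched as a function over per-condition entailment states; the condition extraction and entailment scoring that would produce these states are assumed, and the example conditions are hypothetical.

```python
def decide(condition_states):
    """Given entailment states for each rule condition
    ('satisfied', 'violated', 'unknown'), either answer the user's
    high-level question or ask about an unresolved condition (sketch)."""
    if any(s == "violated" for s in condition_states.values()):
        return ("answer", "no")        # some rule condition fails
    unknown = [c for c, s in condition_states.items() if s == "unknown"]
    if unknown:
        # Ask a clarification follow-up about the first open condition.
        return ("ask", f"Clarify: {unknown[0]}?")
    return ("answer", "yes")           # all conditions satisfied

# Toy run: one condition is unresolved, so the system asks about it.
states = {"served on active duty": "satisfied",
          "discharged honorably": "unknown"}
print(decide(states))  # ('ask', 'Clarify: discharged honorably?')
```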
Table question answering (table QA) refers to providing precise answers to users' questions from tables. With the recent works on table QA in recent years, there is still a lack of a comprehensive survey on this research topic. Hence, we aim to provide an overview of the available datasets and representative methods in table QA. We classify existing methods for table QA into five categories according to their techniques: semantic-parsing-based, generative, extractive, matching-based, and retriever-reader-based methods. Moreover, as table QA is still a challenging task for existing methods, we also identify and outline several key challenges and discuss potential future directions of table QA.
The paper presents an open-domain question answering system for Romanian that answers COVID-19 related questions. The QA system pipeline involves automatic question processing, automatic query generation, web searching for the top 10 most relevant documents, and answer extraction using a BERT model for extractive QA trained on our manually created COVID-19 dataset. The paper presents the QA system, its integration with Romanian language technologies, the COVID-19 dataset, and different evaluations of the QA performance.