Hybrid tabular-textual question answering (QA) requires reasoning over heterogeneous information, and the types of reasoning are mainly divided into numerical reasoning and span extraction. Although numerical reasoning is the main challenge of the task compared to extractive QA, current methods simply use an LSTM to autoregressively decode program sequences, where each decoding step produces either an operator or an operand. However, step-by-step decoding suffers from exposure bias, and the accuracy of program generation drops sharply as decoding progresses. In this paper, we propose a non-autoregressive program generation framework, which facilitates program generation in parallel. Our framework, which independently generates complete program tuples containing both operators and operands, can significantly boost the speed of program generation while addressing the error accumulation issue. Our experiments on the MultiHiertt dataset show that our model brings large improvements (+7.97 EM and +6.38 F1 points) over the strong baseline, establishing new state-of-the-art performance, while being much faster (21x) in program generation. The performance drop of our method is also significantly smaller than that of the baseline as the number of numerical reasoning steps increases.
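To make the tuple-at-once idea concrete, here is a minimal sketch of non-autoregressive program decoding (toy vocabularies and a stand-in scorer, not the paper's architecture): every slot emits a complete (operator, operand, operand) tuple without conditioning on earlier outputs, so all slots can be decoded in parallel.

```python
import random

# Toy vocabularies; a real model scores operands drawn from the table/text.
OPERATORS = ["add", "subtract", "multiply", "divide"]
OPERANDS = ["3.2", "5.0", "revenue_2020", "revenue_2021"]

def score(slot_key, candidate):
    """Stand-in for a learned scoring head (deterministic pseudo-random)."""
    rng = random.Random(hash((slot_key, candidate)))
    return rng.random()

def decode_non_autoregressive(num_slots=3):
    # Every slot emits a complete (operator, operand, operand) tuple and no
    # slot conditions on previously generated tokens, so slots are decodable
    # in parallel and exposure bias cannot accumulate across steps.
    program = []
    for slot in range(num_slots):
        op = max(OPERATORS, key=lambda c: score(("op", slot), c))
        a1 = max(OPERANDS, key=lambda c: score(("arg1", slot), c))
        a2 = max(OPERANDS, key=lambda c: score(("arg2", slot), c))
        program.append((op, a1, a2))
    return program

print(decode_non_autoregressive())
```

Because no slot consumes a previously generated token, a wrong early prediction cannot cascade into later ones, which is exactly the error-accumulation issue the paper targets.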
In real-world question answering scenarios, hybrid forms combining both tabular and textual content have attracted more and more attention, among which numerical reasoning problems are among the most typical and challenging. Existing methods usually adopt an encoder-decoder framework to represent the hybrid content and generate answers. However, this framework cannot capture the rich relationships among numerical values, table schema, and text information on the encoder side, and the decoder uses a simple predefined operator classifier that is not flexible enough to handle numerical reasoning processes with diverse expressions. To address these problems, this paper proposes a Relational Graph enhanced Hybrid table-text Numerical reasoning model with Tree decoder (RegHNT). It models numerical question answering over table-text hybrid content as an expression tree generation task. Moreover, we propose a novel relational graph modeling method, which models the alignment between questions, tables, and paragraphs. We validate our model on the publicly available table-text hybrid QA benchmark TAT-QA. The proposed RegHNT significantly outperforms the baseline model and achieves state-of-the-art results. We have publicly released the source code and data at https://github.com/lfy79001/reghnt (2022-05-05).
Long-form numerical reasoning in financial analysis aims to generate a reasoning program to calculate the correct answer for a given question. Previous work followed a retriever-generator framework, where the retriever selects key facts from a long-form document and the generator produces a reasoning program based on the retrieved facts. However, it treated all facts equally without considering the different contributions of facts with and without numbers. Meanwhile, program consistency was ignored under supervised training, resulting in lower training accuracy and diversity. To solve these problems, we propose APOLLO to improve the long-form numerical reasoning framework. For the retriever, we adopt a number-aware negative sampling strategy to make the retriever more discriminative on key numerical facts. For the generator, we design consistency-based reinforcement learning and a target program augmentation strategy based on the consistency of program execution results. Experimental results on the FinQA and ConvFinQA leaderboards verify the effectiveness of our proposed method, achieving new state-of-the-art results.
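A simplified illustration of number-aware negative sampling (assumed data structures and weights; APOLLO's actual implementation may differ): negatives that contain numbers are drawn with higher probability, giving the retriever harder numerical distractors to discriminate against.

```python
import random
import re

NUMBER_RE = re.compile(r"\d")

def sample_negatives(candidate_facts, gold_facts, k=2, number_weight=3.0):
    """Weighted sampling that prefers number-bearing facts as hard negatives."""
    negatives = [f for f in candidate_facts if f not in gold_facts]
    weights = [number_weight if NUMBER_RE.search(f) else 1.0 for f in negatives]
    return random.choices(negatives, weights=weights, k=k)

facts = [
    "Revenue grew to $1,200 million in 2020.",
    "The company is headquartered in Boston.",
    "Operating costs were $800 million.",
    "The CEO joined in March.",
]
print(sample_negatives(facts, gold_facts={facts[0]}))
```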
Numerical reasoning in the financial domain, i.e., performing quantitative analysis and summarizing the information in financial reports, can greatly improve business efficiency and reduce costs of billions of dollars. Here, we present a numerical reasoning question answering system that answers numerical reasoning questions over financial text and table data sources, consisting of a retriever module, a generator module, and an ensemble module. Specifically, in addition to retrieving whole rows of data, we innovatively design a cell retriever that retrieves gold cells, avoiding bringing irrelevant and similar cells from the same row into the input of the generator module. In the generator module, we utilize multiple generators to produce programs, which are the operation steps for answering the question. Finally, in the ensemble module, we integrate multiple programs and select the best program as the output of the system. On the final private test set of the FinQA competition, our system achieves an execution accuracy of 69.79.
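One plausible reading of the ensemble module, sketched below with a hypothetical tuple-based program format rather than the system's actual DSL, is to execute every candidate program and keep one whose execution result agrees with the majority.

```python
from collections import Counter

def execute(program):
    """Tiny interpreter for programs like [("subtract", 1200.0, 800.0)];
    real systems execute FinQA-style programs over retrieved numbers."""
    result = None
    for op, a, b in program:
        a = result if a == "#prev" else a  # chain onto the previous step
        if op == "add": result = a + b
        elif op == "subtract": result = a - b
        elif op == "multiply": result = a * b
        elif op == "divide": result = a / b
    return result

def ensemble(programs):
    results = [execute(p) for p in programs]
    majority, _ = Counter(results).most_common(1)[0]
    # Return the first program whose execution result matches the majority.
    return next(p for p, r in zip(programs, results) if r == majority)

candidates = [
    [("subtract", 1200.0, 800.0)],
    [("subtract", 1200.0, 800.0)],
    [("divide", 1200.0, 800.0)],
]
print(ensemble(candidates))
```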
Document Visual Question Answering (VQA) aims to understand visually rich documents to answer questions in natural language, an emerging research topic in both natural language processing and computer vision. In this work, we introduce a new document VQA dataset named TAT-DQA, which consists of 3,067 document pages comprising semi-structured tables and unstructured text, together with 16,558 question-answer pairs, built by extending the TAT-QA dataset. These documents are sampled from real-world financial reports and contain a large number of numbers, which means that discrete reasoning capability is required to answer questions on this dataset. Based on TAT-DQA, we further develop a novel model named MHST that takes into account information in multiple modalities, including text, layout, and visual image, to intelligently address different types of questions with corresponding strategies, i.e., extraction or reasoning. Extensive experiments show that the MHST model significantly outperforms the baseline methods, demonstrating its effectiveness. However, the performance still lags far behind that of expert humans. We expect that our new TAT-DQA dataset will facilitate research on deep understanding of visually rich documents combining vision and language, especially for scenarios requiring discrete reasoning. We also hope the proposed model will inspire researchers to design more advanced document VQA models in the future.
Table-and-text hybrid question answering (HybridQA) is a widely used and challenging NLP task commonly applied in the financial and scientific domains. Early research focused on migrating methods from other QA tasks to HybridQA, while with further research, more and more HybridQA-specific methods have been presented. Despite the rapid development of HybridQA, a systematic survey summarizing the main techniques and guiding further research is still missing. We therefore present this work to summarize the current HybridQA benchmarks and methods, and then analyze the challenges and future directions of this task. The contributions of this paper are threefold: (1) to the best of our knowledge, the first survey of HybridQA, covering benchmarks, methods, and challenges; (2) a systematic investigation with a reasoned comparison of existing systems to articulate their advantages and shortcomings; (3) a detailed analysis of challenges along four important dimensions to shed light on future directions.
Structured tabular data exist across nearly all fields. Reasoning tasks over these data aim to answer questions or determine the truthfulness of hypothesis sentences by understanding the semantic meaning of a table. While previous works have devoted significant effort to the tabular reasoning task, they always assume sufficient labeled data. However, constructing reasoning samples over tables (and related text) is labor-intensive, especially when the reasoning process is complex, and when labeled data is insufficient, model performance suffers a severe decline. In this paper, we propose a unified framework for unsupervised complex tabular reasoning (UCTR), which generates sufficient and diverse synthetic data with complex logic for tabular reasoning tasks, assuming no human-annotated data at all. We first utilize a random sampling strategy to collect diverse programs of different types and execute them on tables via a "Program-Executor" module. To bridge the gap between programs and natural language sentences, we design a powerful "NL-Generator" module that generates natural language sentences with complex logic from these programs. Since a table often occurs with its surrounding texts, we further propose novel "Table-to-Text" and "Text-to-Table" operators to handle joint table-text reasoning scenarios. This way, we can adequately exploit unlabeled table resources to obtain a well-performing reasoning model under an unsupervised setting. Our experiments cover different tasks (question answering and fact verification) and different domains (general and specific), showing that our unsupervised methods can achieve up to 93% of the performance of supervised models. We also find that UCTR can substantially boost supervised performance in low-resource domains as a data augmentation technique. Our code is available at https://github.com/leezythu/UCTR.
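A stripped-down sketch of the "Program-Executor" plus "NL-Generator" pipeline (template-based here, whereas the paper trains a neural generator; the table, programs, and templates are illustrative): a randomly sampled program is executed on the table, and its exact result supervises a synthesized sentence for free.

```python
import random

TABLE = {
    "team": ["A", "B", "C"],
    "points": [88, 95, 79],
}

PROGRAMS = {
    "max": lambda col: max(TABLE[col]),
    "min": lambda col: min(TABLE[col]),
    "sum": lambda col: sum(TABLE[col]),
}

TEMPLATES = {  # a real system would paraphrase these with a trained generator
    "max": "The highest {col} in the table is {val}.",
    "min": "The lowest {col} in the table is {val}.",
    "sum": "The total {col} of all rows is {val}.",
}

def synthesize_example():
    op = random.choice(list(PROGRAMS))
    val = PROGRAMS[op]("points")
    sentence = TEMPLATES[op].format(col="points", val=val)
    # The executed value makes the label exact, so the pair can supervise
    # fact verification (sentence is TRUE) or QA (answer = val) unsupervised.
    return sentence, val

print(synthesize_example())
```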
Fact verification has attracted a lot of research attention recently, e.g., in journalism, marketing, and policymaking, as misinformation and disinformation online can sway one's opinion and affect one's actions. While fact-checking is a hard task in general, in many cases, false statements can be easily debunked based on analytics over tables with reliable information. Hence, table-based fact verification has recently emerged as an important and growing research area. Yet, progress has been limited due to the lack of datasets that can be used to pre-train language models (LMs) to be aware of common table operations, such as aggregating a column or comparing tuples. To bridge this gap, in this paper we introduce PASTA, a novel state-of-the-art framework for table-based fact verification via pre-training with synthesized sentence-table cloze questions. In particular, we design six types of common sentence-table cloze tasks, including Filter, Aggregation, Superlative, Comparative, Ordinal, and Unique, based on which we synthesize a large corpus consisting of 1.2 million sentence-table pairs from WikiTables. PASTA uses a recent pre-trained LM, DeBERTaV3, and further pretrains it on our corpus. Our experimental results show that PASTA achieves new state-of-the-art performance on two table-based fact verification benchmarks: TabFact and SEM-TAB-FACTS. In particular, on the complex set of TabFact, which contains multiple operations, PASTA largely outperforms the previous state of the art by 4.7 points (85.6% vs. 80.9%), and the gap between PASTA and human performance on the small TabFact test set is narrowed to just 1.5 points (90.6% vs. 92.1%).
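A toy version of the cloze synthesis for two of the six operation types (hand-rolled templates over an assumed two-column table; the paper's generation rules are richer): the table determines the masked answer exactly, so pre-training data needs no human labels.

```python
rows = [("Alice", 31), ("Bob", 45), ("Carol", 27)]  # (name, age)

def superlative_cloze(rows, value_col="age"):
    """Sentence-table cloze pair: the model fills [MASK] with the entity
    holding the largest value, teaching the Superlative operation."""
    answer = max(rows, key=lambda r: r[1])[0]
    sentence = f"[MASK] has the highest {value_col} in the table."
    return sentence, answer

def aggregation_cloze(rows):
    """Cloze pair for the Aggregation operation (here: a row count)."""
    return "There are [MASK] people in the table.", len(rows)

print(superlative_cloze(rows))
print(aggregation_cloze(rows))
```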
Current state-of-the-art generative models for open-domain question answering (ODQA) focus on generating direct answers from unstructured textual information. However, a large amount of world knowledge is stored in structured databases and requires query languages such as SQL to access. Moreover, query languages can answer questions that require complex reasoning while providing full explainability. In this paper, we propose a hybrid framework that takes both textual and tabular evidence as input and generates either a direct answer or a SQL query, depending on which form can better answer the question. The generated SQL query can then be executed on the associated database to obtain the final answer. To the best of our knowledge, this is the first paper that applies Text2SQL to the ODQA task. Empirically, we demonstrate that on several ODQA datasets, the hybrid method consistently outperforms baseline models that take only homogeneous input, by a large margin. Specifically, we achieve state-of-the-art performance on the OpenSQuAD dataset using a T5-base model. In a detailed analysis, we demonstrate that being able to generate structured SQL queries consistently brings gains, especially for questions that require complex reasoning.
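The execute-if-SQL behavior can be mimicked in a few lines (the SELECT-prefix check and the schema are invented for illustration; the actual system is a seq2seq generator trained to emit either form):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (name TEXT, population INTEGER)")
conn.executemany("INSERT INTO city VALUES (?, ?)",
                 [("Springfield", 167000), ("Shelbyville", 74000)])

def answer(generated: str):
    """If the generator emitted a SQL query, run it on the associated
    database; otherwise the generated text itself is the direct answer."""
    if generated.upper().startswith("SELECT"):
        return conn.execute(generated).fetchone()[0]
    return generated

# A direct textual answer and a structured query, as the model might produce:
print(answer("Springfield"))
print(answer("SELECT population FROM city WHERE name = 'Springfield'"))
```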
Label smoothing is a regularization technique widely used in supervised learning to improve the generalization of models on various tasks, such as image classification and machine translation. However, the effectiveness of label smoothing in multi-hop question answering (MHQA) has yet to be well studied. In this paper, we systematically analyze the role of label smoothing on various modules of MHQA and propose F1 smoothing, a novel label smoothing technique specifically designed for machine reading comprehension (MRC) tasks. We evaluate our method on the HotpotQA dataset and demonstrate its superiority over several strong baselines, including models that utilize complex attention mechanisms. Our results suggest that label smoothing can be effective in MHQA, but the choice of smoothing strategy can significantly affect performance.
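For reference, standard label smoothing replaces the one-hot target with a mixture of the one-hot vector and a uniform distribution, q_i = (1 - eps) * 1[i = gold] + eps / K. A minimal sketch follows (generic label smoothing, not the proposed F1 smoothing):

```python
import math

def smoothed_targets(gold_index, num_classes, epsilon=0.1):
    """q_i = (1 - eps) * 1[i == gold] + eps / K; rows sum to 1."""
    uniform = epsilon / num_classes
    targets = [uniform] * num_classes
    targets[gold_index] += 1.0 - epsilon
    return targets

def cross_entropy(log_probs, targets):
    """Cross-entropy between soft targets and model log-probabilities."""
    return -sum(q * lp for q, lp in zip(targets, log_probs))

# Example: a 4-way span-start classification with gold class 2.
log_probs = [math.log(p) for p in [0.1, 0.2, 0.6, 0.1]]
print(cross_entropy(log_probs, smoothed_targets(2, 4)))
```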
Multi-hop reasoning requires aggregating multiple documents to answer a complex question. Existing methods usually decompose the multi-hop question into simpler single-hop questions to solve the problem and illustrate the interpretable reasoning process. However, they ignore the grounding of the supporting facts at each reasoning step, which tends to produce inaccurate decompositions. In this paper, we propose an interpretable stepwise reasoning framework that incorporates both single-hop supporting sentence identification and single-hop question generation at each intermediate step, and utilizes the inference of the current hop for the next until reasoning out the final result. We employ a unified reader model for both intermediate hop reasoning and final hop inference, and adopt joint optimization for more accurate and robust multi-hop reasoning. We conduct experiments on two benchmark datasets, HotpotQA and 2WikiMultiHopQA. The results show that our method can effectively boost performance and also yields a better interpretable reasoning process without decomposition supervision.
Answering natural language questions using information from tables (TableQA) is of considerable recent interest. In many applications, tables are not isolated but embedded in unstructured text. A question is often best answered by matching parts of it with table cell contents or unstructured text spans, and extracting the answer from either source. This leads to the new space of TextTableQA problems introduced by the HybridQA dataset. Existing adaptations of table representations to transformer-based reading comprehension (RC) architectures fail to tackle the distinct modalities of the two representations through a single system. Training such systems is further challenged by the need for distant supervision: to reduce the cognitive burden, training instances usually include only the question and the answer, where the answer matches multiple table rows and text passages. This leads to a noisy multi-instance training regime that involves not only rows of the table but also spans of linked text. We respond to these challenges by proposing MITQA, a new TextTableQA system that explicitly models the distinct but closely related probability spaces of table row selection and text span selection. Our experiments demonstrate the superiority of our approach compared with recent baselines. The proposed method is currently at the top of the HybridQA leaderboard with a held-out test set, achieving a 21% absolute improvement in both EM and F1 over previously published results.
We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. SpanBERT consistently outperforms BERT and our better-tuned baselines, with substantial gains on span selection tasks such as question answering and coreference resolution. In particular, with the same training data and model size as BERT-large, our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0 respectively. We also achieve a new state of the art on the OntoNotes coreference resolution task (79.6% F1), strong performance on the TACRED relation extraction benchmark, and even gains on GLUE. Our code and pre-trained models are available at https://github.com/facebookresearch/SpanBERT.
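The span masking scheme is simple to reproduce; following the paper, span lengths are drawn from a geometric distribution (p = 0.2, clipped at 10 tokens) until roughly 15% of the tokens are masked. A minimal sketch:

```python
import random

def sample_span_length(p=0.2, max_len=10):
    """Geometric(p) span length, clipped at max_len (SpanBERT uses p=0.2)."""
    length = 1
    while random.random() >= p and length < max_len:
        length += 1
    return length

def mask_spans(tokens, budget=0.15, mask="[MASK]"):
    """Mask contiguous spans until roughly `budget` of tokens are replaced."""
    tokens = list(tokens)
    target = max(1, int(len(tokens) * budget))
    masked = set()
    while len(masked) < target:
        length = sample_span_length()
        start = random.randrange(len(tokens))
        for i in range(start, min(start + length, len(tokens))):
            masked.add(i)
    return [mask if i in masked else t for i, t in enumerate(tokens)]

print(mask_spans("the quick brown fox jumps over the lazy dog".split()))
```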
Solving math word problems requires deductive reasoning over the quantities in the text. Various recent research efforts mostly relied on sequence-to-sequence or sequence-to-tree models to generate mathematical expressions without explicitly performing relational reasoning between quantities in the given context. While empirically effective, such approaches typically do not provide explanations for the generated expressions. In this work, we view the task as a complex relation extraction problem and propose a novel approach that presents explainable deductive reasoning steps to iteratively construct the target expression, where each step involves a primitive operation over two quantities that defines their relation. Through extensive experiments on four benchmark datasets, we show that the proposed model significantly outperforms existing strong baselines. We further demonstrate that the deductive procedure not only presents more explainable steps but also enables more accurate predictions on questions that require more complex reasoning.
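The iterative construction can be sketched as a loop that, at each step, applies the best-scoring primitive operation to a pair of quantities and appends the result as a new quantity (the scorer below is a stub; the paper learns it):

```python
from itertools import combinations

def deduce(quantities, steps, score):
    """Each step applies one primitive operation to a pair of quantities and
    appends the result as a new quantity, yielding an explainable trace."""
    trace = []
    for _ in range(steps):
        best = max(
            ((op, i, j) for i, j in combinations(range(len(quantities)), 2)
             for op in ("+", "-", "*", "/")),
            key=lambda c: score(c, quantities),
        )
        op, i, j = best
        a, b = quantities[i], quantities[j]
        value = {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]
        trace.append((a, op, b, value))
        quantities.append(value)  # the new quantity is reusable later
    return trace

# Stub scorer preferring multiplication of the two largest quantities.
def toy_score(candidate, qs):
    op, i, j = candidate
    return (op == "*") * (qs[i] + qs[j])

print(deduce([3.0, 4.0, 2.0], steps=2, score=toy_score))
```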
As a rising star in the field of natural language processing, question answering (Q&A) systems are widely used in all walks of life. Compared with other scenarios, applications in financial scenarios have strong requirements on the traceability and interpretability of Q&A systems. In addition, since the demand for artificial intelligence technology has shifted from the initial computational intelligence to cognitive intelligence, this research mainly focuses on the financial numerical reasoning dataset FinQA. In the shared task, the goal is to generate the reasoning program and the final answer given a financial report containing both text and tables. We use a method based on the DeBERTa pre-trained language model, with additional optimizations including multi-model fusion and training set combination. We finally obtain an execution accuracy of 68.99 and a program accuracy of 64.53, ranking 4th in the 2022 FinQA Challenge.
In open-book question answering (OBQA) tasks, selecting relevant passages and sentences out of distracting information is crucial for reasoning out the answer to a question. The HotpotQA dataset is designed to teach and evaluate systems to do both passage ranking and sentence selection. Many existing frameworks use separate models to select relevant passages and sentences respectively. Such systems not only have high complexity in terms of model parameters, but also fail to exploit the advantage of training the two tasks together, since one task can be beneficial to the other. In this work, we present a simple yet effective framework that addresses these limitations by jointly ranking passages and selecting sentences. Furthermore, we propose consistency and similarity constraints to promote the correlation and interaction between passage ranking and sentence selection. Experiments show that our framework achieves competitive results with previous systems and outperforms the baseline by 28% in terms of exact match of relevant sentences on the HotpotQA dataset.
Parsing natural language questions into executable logical forms is a useful and interpretable way to perform question answering on structured data such as knowledge bases (KB) or databases (DB). However, existing approaches to semantic parsing cannot adapt to both modalities, as they suffer from the exponential growth of logical form candidates and can hardly generalize to unseen data. In this work, we propose Uni-Parser, a unified semantic parser for question answering (QA) on both KB and DB. We introduce the primitive (relation and entity in KB; table name, column name, and cell value in DB) as an essential element in our framework. The number of primitives grows linearly with the number of retrieved relations in KB and DB, saving us from dealing with exponentially many logical form candidates. We leverage the generator to predict final logical forms by altering and composing top-ranked primitives with different operations (e.g., select, where, count). With the search space sufficiently pruned by a contrastive primitive ranker, the generator is empowered to capture the composition of primitives, enhancing its generalization ability. We achieve competitive results on multiple KB and DB QA benchmarks more efficiently, especially in the compositional and zero-shot settings.
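A loose illustration of composing ranked primitives into a logical form (SQL-flavored and entirely hypothetical; Uni-Parser's grammar and generator are learned, not rule-based):

```python
def compose(primitives, operation):
    """Assemble top-ranked primitives into a candidate logical form.
    `primitives` is an assumed (table, column, value) triple."""
    table, column, value = primitives
    if operation == "select":
        return f"SELECT {column} FROM {table} WHERE name = '{value}'"
    if operation == "count":
        return f"SELECT COUNT(*) FROM {table} WHERE {column} = '{value}'"
    raise ValueError(f"unknown operation: {operation}")

# Primitives a ranker might surface for "How many cities are in Texas?"
print(compose(("city", "state", "Texas"), "count"))
```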
Table question answering (Table QA) refers to providing precise answers to a user's questions from tables. In recent years there have been many works on Table QA, but a comprehensive survey of this research topic is lacking. We therefore aim to provide an overview of the available datasets and representative methods in Table QA. We classify existing methods into five categories according to their techniques: semantic-parsing-based, generative, extractive, matching-based, and retriever-based methods. Moreover, since Table QA remains a challenging task for existing methods, we also identify and outline several key challenges and discuss potential future directions for Table QA.
Transformers are widely used in NLP tasks. However, current approaches to leveraging transformers to understand language expose one weak spot: number understanding. In some scenarios, numbers frequently occur, especially in semi-structured data like tables. But current approaches to number-rich tasks with transformer-based language models abandon or lose some of the numeracy information - e.g., breaking numbers into sub-word tokens - which leads to many number-related errors. In this paper, we propose the LUNA framework, which improves the numerical reasoning and calculation capabilities of transformer-based language models. With the number plugins NumTok and NumBed, LUNA represents each number as a whole for model input. With number pre-training, including regression loss and model distillation, LUNA bridges the gap between number and vocabulary embeddings. To the best of our knowledge, this is the first work that explicitly injects numeracy capability into language models using number plugins. Besides evaluating toy models on toy tasks, we evaluate LUNA on three large-scale transformer models (RoBERTa, BERT, TabBERT) over three different downstream tasks (TATQA, TabFact, CrediTrans), and observe that the performance of the language models is consistently improved by LUNA. The augmented models also improve the official baseline of TAT-QA (EM: 50.15 -> 59.58) and achieve SOTA performance on CrediTrans (F1 = 86.17).
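The number-as-a-whole idea behind the NumTok plugin can be approximated with a tokenizer that lifts each number out as a single placeholder token paired with its float value (an illustrative sketch, not LUNA's implementation; the number regex is deliberately simple):

```python
import re

NUM_RE = re.compile(r"\d[\d,]*\.?\d*")

def tokenize_with_numbers(text):
    """Split out each number as one [NUM] token plus its float value, so the
    model never sees a number shattered into sub-word pieces."""
    tokens, values = [], []
    pos = 0
    for m in NUM_RE.finditer(text):
        tokens.extend(text[pos:m.start()].split())
        tokens.append("[NUM]")
        values.append(float(m.group().replace(",", "")))
        pos = m.end()
    tokens.extend(text[pos:].split())
    return tokens, values

print(tokenize_with_numbers("Revenue rose from 1,200.5 to 1,450 million."))
```

The extracted values would then feed a numeric embedding module (NumBed in the paper's terminology) instead of the ordinary word-piece embedding table.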
Machine reading comprehension (MRC) is a long-standing topic in natural language processing (NLP). The MRC task aims to answer a question based on a given context. Recent studies focus on multi-hop MRC, a more challenging extension of MRC in which answering a question requires gathering disjoint pieces of information across the context. Due to the complexity and importance of multi-hop MRC, a large number of studies have focused on this topic in recent years; it is therefore necessary and worthwhile to review the related literature. This study investigates recent advances in multi-hop MRC approaches based on 31 studies from 2018 to 2022. First, the multi-hop MRC problem definition is introduced; then the 31 models are reviewed in detail with a strong focus on their multi-hop aspects and categorized based on their main techniques. Finally, a fine-grained comprehensive comparison of the models and techniques is presented.