Conversational question-answer pair generation is the task of automatically constructing a large-scale conversational question answering dataset from input passages. In this paper, we introduce a novel framework that extracts answer-worthy phrases from a passage and then generates the corresponding questions while taking the previous conversation into account. In particular, our framework revises the extracted answers after the questions are generated, so that each answer exactly matches its paired question. Experimental results show that this simple answer-revision approach significantly improves the quality of the synthetic data. We further demonstrate that our framework can be effectively used for domain adaptation of conversational question answering.
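A minimal sketch of the extract / generate / revise pipeline described above is given below; the question generator and the answer extractor are passed in as callables, and their names and signatures are illustrative assumptions rather than the paper's implementation.

```python
# A minimal sketch of the extract / generate / revise pipeline described above.
# `generate_question` and `extract_answer` are placeholders for trained models;
# their signatures are assumptions made for illustration, not the paper's API.
def build_conversation(passage, answer_candidates, generate_question, extract_answer):
    """answer_candidates: answer-worthy phrases extracted from the passage upstream."""
    history, pairs = [], []
    for candidate in answer_candidates:
        # Generate a question conditioned on the passage, the tentative answer,
        # and the conversation so far.
        question = generate_question(passage, candidate, history)
        # Answer revision: re-extract the answer for the generated question so the
        # final answer exactly matches what the question actually asks.
        revised_answer = extract_answer(question, passage)
        pairs.append((question, revised_answer))
        history.append((question, revised_answer))
    return pairs
```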
Pre-trained language models have significantly improved the performance of downstream language understanding tasks, including extractive question answering, by providing high-quality contextualized word embeddings. However, training question-answering models still requires large amounts of domain-specific annotated data. In this work, we propose RGX, a cooperative self-training framework for automatically generating more non-trivial question-answer pairs to improve model performance. RGX is built on a masked answer extraction task with an interactive learning environment containing an answer-entity Recognizer, a question Generator, and an answer eXtractor. Given a passage with a masked entity, the generator produces a question around the entity, and the extractor is trained to recover the masked entity given the generated question and the raw text. The framework allows question generation and answering models to be trained on any text corpus without annotation. Experimental results show that RGX outperforms state-of-the-art (SOTA) pre-trained language models and transfer-learning approaches on standard question-answering benchmarks, yielding new SOTA performance under the given model size and transfer-learning settings.
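Below is a hedged sketch of one round of RGX-style synthetic pair generation as I read the abstract: recognize an answer entity, generate a question around it, and keep the pair only if the extractor recovers the entity. The Hugging Face checkpoints, prompt format, and the consistency filter are assumptions, not the paper's exact setup.

```python
# A hedged sketch of one round of RGX-style synthetic data generation.
# Checkpoints, prompt format, and the consistency filter are assumptions for illustration.
from transformers import pipeline

recognizer = pipeline("ner", aggregation_strategy="simple")     # answer-entity recognizer (generic NER stand-in)
generator = pipeline("text2text-generation", model="t5-base")   # question generator (untuned placeholder)
extractor = pipeline("question-answering")                      # answer extractor being self-trained

def synthesize_pairs(passage):
    pairs = []
    for entity in recognizer(passage):
        answer = entity["word"]
        prompt = f"generate question: answer: {answer} context: {passage}"
        question = generator(prompt, max_new_tokens=48)[0]["generated_text"]
        # Keep the pair only if the extractor recovers the entity from the generated
        # question and the raw passage (a simple self-consistency check).
        prediction = extractor(question=question, context=passage)["answer"]
        if prediction.strip().lower() == answer.strip().lower():
            pairs.append({"context": passage, "question": question, "answer": answer})
    return pairs
```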
Humans gather information through conversations involving a series of interconnected questions and answers. For machines to assist in information gathering, it is therefore essential to enable them to answer conversational questions. We introduce CoQA, a novel dataset for building Conversational Question Answering systems. Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage. We analyze CoQA in depth and show that conversational questions have challenging phenomena not present in existing reading comprehension datasets, e.g., coreference and pragmatic reasoning. We evaluate strong dialogue and reading comprehension models on CoQA. The best system obtains an F1 score of 65.4%, which is 23.4 points behind human performance (88.8%), indicating there is ample room for improvement. We present CoQA as a challenge to the community at https://stanfordnlp.github.io/coqa.
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the question-answer pairs as training data for a language model for a state-of-the-art QA system based on BERT. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed with subjects (or objects) and predicates while objects (or subjects) are considered as answers. Experimenting on five extractive QA datasets demonstrates that our technique achieves on-par performance with existing state-of-the-art QA systems with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
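As a toy illustration of the triple-to-question construction described above, the sketch below turns OpenIE-style triples into synthetic QA pairs; the wh-templates are a deliberate simplification of whatever templates PIE-QG actually uses.

```python
# A toy sketch of turning OpenIE triples into synthetic QA pairs in the spirit of the
# approach above; the question templates are my own simplification, not PIE-QG's.
def triples_to_qa(triples):
    """triples: iterable of (subject, predicate, object) strings."""
    qa_pairs = []
    for subj, pred, obj in triples:
        # Question formed from subject + predicate, with the object as the answer ...
        qa_pairs.append((f"{subj} {pred} what?", obj))
        # ... and the symmetric variant, with the subject as the answer.
        qa_pairs.append((f"Who {pred} {obj}?", subj))
    return qa_pairs

print(triples_to_qa([("Marie Curie", "discovered", "polonium")]))
# [('Marie Curie discovered what?', 'polonium'), ('Who discovered polonium?', 'Marie Curie')]
```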
Powerful generative models have led to recent progress in question generation (QG). However, it is difficult to measure advances in QG research since there are no standardized resources that allow a uniform comparison among approaches. In this paper, we introduce QG-Bench, a multilingual and multi-domain benchmark for QG that unifies existing question answering datasets by converting them to a standard QG setting. It includes general-purpose datasets such as SQuAD for English, datasets from ten domains and two styles, as well as datasets in eight different languages. Using QG-Bench as a reference, we perform an extensive analysis of the capabilities of language models for the task. First, we propose robust QG baselines based on fine-tuning generative language models. Then, we complement automatic evaluation based on standard metrics with an extensive manual evaluation, which in turn sheds light on the difficulty of evaluating QG models. Finally, we analyse both the domain adaptability of these models as well as the effectiveness of multilingual models in languages other than English. QG-Bench is released along with the fine-tuned models presented in the paper (https://github.com/asahi417/lm-question-generation), which are also available as a demo (https://autoqg.net/).
Questions asked by humans in conversation often contain contextual dependencies, i.e., explicit or implicit references to previous dialogue turns. These dependencies take the form of coreference (e.g., through pronoun use) or ellipsis, and can make understanding difficult for automated systems. One way to facilitate the understanding and subsequent treatment of a question is to rewrite it into an out-of-context form, i.e., a form that can be understood without the conversational context. We propose CoQAR, a corpus containing 4.5k conversations from the conversational question-answering dataset CoQA, for a total of 53k follow-up question-answer pairs. Each original question is manually annotated with at least two out-of-context rewrites. CoQAR can be used for the supervised learning of three tasks: question paraphrasing, question rewriting, and conversational question answering. To assess the quality of CoQAR's rewrites, we conduct several experiments that include training and evaluating models for these three tasks. Our results support the idea that question rewriting can be used as a preprocessing step for question-answering models, thereby improving their performance.
This paper presents a new UNIfied pre-trained Language Model (UNILM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UNILM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UNILM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at https://github.com/microsoft/unilm.
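The self-attention masks mentioned above are easy to picture concretely; the short sketch below builds the three mask patterns for a toy source/target split (the tensor layout and sizes are my own illustrative choices, not UniLM's actual implementation).

```python
# Illustrative construction of the three self-attention mask patterns (1 = attention allowed).
# The layout (source tokens first, then target tokens) is an assumption for this sketch.
import torch

def unilm_style_masks(src_len, tgt_len):
    n = src_len + tgt_len
    bidirectional = torch.ones(n, n)                 # every token attends to every token
    unidirectional = torch.tril(torch.ones(n, n))    # left-to-right LM: only the prefix
    # Sequence-to-sequence: source attends within the source; target attends to the
    # full source plus the already-generated target prefix.
    seq2seq = torch.zeros(n, n)
    seq2seq[:src_len, :src_len] = 1
    seq2seq[src_len:, :src_len] = 1
    seq2seq[src_len:, src_len:] = torch.tril(torch.ones(tgt_len, tgt_len))
    return bidirectional, unidirectional, seq2seq

_, _, s2s = unilm_style_masks(src_len=3, tgt_len=2)
print(s2s)   # rows are queries, columns are keys
```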
Existing metrics for evaluating the quality of automatically generated questions such as BLEU, ROUGE, BERTScore, and BLEURT compare the reference and predicted questions, providing a high score when there is a considerable lexical overlap or semantic similarity between the candidate and the reference questions. This approach has two major shortcomings. First, we need expensive human-provided reference questions. Second, it penalises valid questions that may not have high lexical or semantic similarity to the reference questions. In this paper, we propose a new metric, RQUGE, based on the answerability of the candidate question given the context. The metric consists of a question-answering and a span scorer module, in which we use pre-trained models from the existing literature, and therefore, our metric can be used without further training. We show that RQUGE has a higher correlation with human judgment without relying on the reference question. RQUGE is shown to be significantly more robust to several adversarial corruptions. Additionally, we illustrate that we can significantly improve the performance of QA models on out-of-domain datasets by fine-tuning on the synthetic data generated by a question generation model and re-ranked by RQUGE.
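As a rough sketch of the reference-free idea above (answer the candidate question, then score the predicted span against the gold answer), the snippet below uses a generic QA pipeline and a token-F1 stand-in for the trained span scorer; neither corresponds to RQUGE's actual components.

```python
# A hedged sketch of a reference-free, answerability-based score in the spirit of RQUGE.
# The generic QA pipeline and the token-F1 span scorer are stand-ins, not the paper's modules.
from transformers import pipeline

qa = pipeline("question-answering")

def token_f1(prediction, gold):
    p, g = prediction.lower().split(), gold.lower().split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def answerability_score(candidate_question, context, gold_answer):
    # No reference question is needed: the candidate is judged by whether a QA model,
    # reading the context, recovers the gold answer from it.
    predicted = qa(question=candidate_question, context=context)["answer"]
    return token_f1(predicted, gold_answer)
```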
Answering natural language questions using information from tables (TableQA) is of considerable recent interest. In many applications, tables do not occur in isolation but are embedded in unstructured text. A question is often best answered by matching parts of it to table cell contents or to unstructured text spans, and extracting the answer from either source. This leads to a new space of TextTableQA problems introduced by the HybridQA dataset. Existing adaptations of table representations to transformer-based reading comprehension (RC) architectures fail to address the diverse modalities of the two representations through a single system. Training such systems is further challenged by the need for distant supervision. To reduce the cognitive burden, training instances usually include only the question and the answer, with the latter matching multiple table rows and text passages. This leads to a noisy multi-instance training regime involving not only rows of the table, but also spans of linked text. We respond to these challenges by proposing MITQA, a new TextTableQA system that explicitly models the distinct but closely related probability spaces of table row selection and text span selection. Our experiments demonstrate the superiority of our approach over recent baselines. The method currently tops the HybridQA leaderboard with a held-out test set, achieving a 21% absolute improvement in both EM and F1 over previously published results.
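One way to picture the two probability spaces mentioned above is the toy combination below, which ranks (row, span) candidates through a simple factorization; this is my own illustration of the idea, not MITQA's actual objective.

```python
# A toy illustration of jointly ranking table-row and text-span candidates by combining
# log-probabilities from two separate scorers. This factorization is my own simplification.
import math

def best_row_and_span(row_probs, span_probs_per_row):
    """row_probs: {row_id: P(row)}; span_probs_per_row: {row_id: {span: P(span | row)}}."""
    best, best_logprob = None, -math.inf
    for row_id, p_row in row_probs.items():
        for span, p_span in span_probs_per_row.get(row_id, {}).items():
            logprob = math.log(p_row) + math.log(p_span)
            if logprob > best_logprob:
                best, best_logprob = (row_id, span), logprob
    return best

print(best_row_and_span({0: 0.7, 1: 0.3},
                        {0: {"1998": 0.6}, 1: {"2004": 0.9}}))   # -> (0, '1998')
```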
Most works on modeling the conversation history in Conversational Question Answering (CQA) report their main results on common CQA benchmarks. Although existing models show impressive results on CQA leaderboards, it is unclear whether they remain robust under changes in setting (sometimes to more realistic ones), training data size (e.g., from large to small collections), and domain. In this work, we design and conduct the first large-scale robustness study of history modeling approaches for CQA. We find that high benchmark scores do not necessarily translate into strong robustness, and that various methods perform very differently under different settings. Equipped with the insights from our study, we design a novel prompt-based history modeling approach and demonstrate its strong robustness across a variety of settings. Our approach is inspired by existing methods that highlight historical answers in the passage. However, instead of highlighting by modifying the passage token embeddings, we add textual prompts directly to the passage text. Our approach is simple, easy to plug into practically any model, and highly effective, so we recommend it as a starting point for future model developers. We also hope that our study and insights will raise awareness of the importance of robustness-focused evaluation, beyond obtaining high leaderboard scores, leading to better CQA systems.
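A minimal sketch of the textual highlighting idea follows: previous-turn answers are wrapped with plain-text markers inside the passage instead of being marked in the embedding space. The marker strings are my own choice, not the paper's exact prompts.

```python
# A minimal sketch of prompt-based history highlighting: mark previous-turn answers with
# plain text inside the passage. The marker format is an assumption, not the paper's prompt.
def highlight_history(passage, history_answers):
    for turn, answer in enumerate(history_answers, start=1):
        if answer and answer in passage:
            passage = passage.replace(answer, f"<answer {turn}> {answer} </answer>", 1)
    return passage

context = "The Nile flows through Egypt and empties into the Mediterranean Sea."
print(highlight_history(context, ["Egypt"]))
# The Nile flows through <answer 1> Egypt </answer> and empties into the Mediterranean Sea.
```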
Neural approaches have become very popular in the domain of Question Answering; however, they require a large amount of annotated data. Furthermore, they often yield very good performance, but only in the domain they were trained on. In this work we propose a novel approach that combines data augmentation via question-answer generation with Active Learning to improve performance in low-resource settings, where the target domains are diverse in terms of difficulty and similarity to the source domain. We also investigate Active Learning for question answering at different stages, overall reducing the human annotation effort. For this purpose, we consider target domains in realistic settings, with an extremely low amount of annotated samples but many unlabeled documents, which we assume can be obtained with little effort. Additionally, we assume that a sufficient amount of labeled data from the source domain is available. We perform extensive experiments to find the best setup for incorporating domain experts. Our findings show that our novel approach, where humans are incorporated as early as possible in the process, boosts performance in the low-resource, domain-specific setting, allowing for low-labeling-effort question answering systems in new, specialized domains. They further demonstrate how human annotation affects the performance of QA depending on the stage at which it is performed.
Conversational question answering requires the ability to interpret a question correctly. However, because coreference and ellipsis are difficult to understand in everyday conversation, current models remain unsatisfactory. Although generative methods have made remarkable progress, they are still plagued by semantic incompleteness. This paper proposes an action-based approach to recover the complete expression of a question. Specifically, we first locate the positions of coreference or ellipsis in the question while assigning a corresponding action to each candidate span. We then search the conversational context for phrases that match the candidate clues. Finally, based on the predicted action, we decide whether to replace the coreference or to fill in the ellipsis with the matched information. We demonstrate the effectiveness of our method on both English and Chinese utterance rewriting tasks, improving the state-of-the-art EM (exact match) by 3.9% and also improving ROUGE-L on the Restoration-200K dataset.
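To make the replace-or-supplement decision concrete, here is a toy sketch that applies predicted actions to candidate spans of a question; the action inventory and data layout are my own illustration of the mechanism, not the paper's specification.

```python
# A toy sketch of action-based utterance rewriting: each candidate position carries an
# action and a phrase matched from the dialogue context. Action names are illustrative.
def rewrite(question_tokens, actions):
    """actions: list of (position, action, matched_phrase); action is
    "REPLACE" (resolve a coreference) or "INSERT" (recover an ellipsis)."""
    tokens = list(question_tokens)
    # Apply edits right-to-left so earlier positions stay valid.
    for position, action, phrase in sorted(actions, key=lambda a: a[0], reverse=True):
        if action == "REPLACE":
            tokens[position] = phrase
        elif action == "INSERT":
            tokens.insert(position, phrase)
    return " ".join(tokens)

print(rewrite(["When", "was", "it", "founded", "?"], [(2, "REPLACE", "the university")]))
# When was the university founded ?
```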
Question Answering (QA) is a growing area of research, often used to facilitate the extraction of information from within documents. State-of-the-art QA models are usually pre-trained on domain-general corpora like Wikipedia and thus tend to struggle on out-of-domain documents without fine-tuning. We demonstrate that synthetic domain-specific datasets can be generated easily using domain-general models, while still providing significant improvements to QA performance. We present two new tools for this task: A flexible pipeline for validating the synthetic QA data and training downstream models on it, and an online interface to facilitate human annotation of this generated data. Using this interface, crowdworkers labelled 1117 synthetic QA pairs, which we then used to fine-tune downstream models and improve domain-specific QA performance by 8.75 F1.
Question answering (QA) is one of the most challenging problems in natural language processing (NLP). QA systems attempt to produce an answer for a given question, and these answers can be generated from unstructured or structured text. QA is therefore considered an important research area for evaluating text-understanding systems. A large volume of QA research has been devoted to the English language, investigating the most advanced techniques and achieving state-of-the-art results. However, research on Arabic question answering has progressed at a much slower pace due to the scarcity of research efforts and the lack of large benchmark datasets. Recently, many pre-trained language models have delivered high performance on many Arabic NLP problems. In this work, we evaluate state-of-the-art pre-trained transformer models for Arabic QA using four reading comprehension datasets: Arabic-SQuAD, ARCD, AQAD, and TyDiQA-GoldP. We fine-tune and compare the performance of the AraBERTv2-base, AraBERTv0.2-large, and AraELECTRA models. Finally, we provide an analysis to understand and explain the low performance results obtained by some models.
Existing pre-training methods for extractive Question Answering (QA) generate cloze-like queries different from natural questions in syntax structure, which could overfit pre-trained models to simple keyword matching. In order to address this problem, we propose a novel Momentum Contrastive pRe-training fOr queStion anSwering (MCROSS) method for extractive QA. Specifically, MCROSS introduces a momentum contrastive learning framework to align the answer probability between cloze-like and natural query-passage sample pairs. Hence, the pre-trained models can better transfer the knowledge learned in cloze-like samples to answering natural questions. Experimental results on three benchmarking QA datasets show that our method achieves noticeable improvement compared with all baselines in both supervised and zero-shot scenarios.
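Under a heavy simplification, the alignment idea above can be sketched as minimizing a divergence between the answer distributions produced for a cloze-style query and its natural-question counterpart over the same passage, with one encoder updated as a momentum (EMA) copy; this is only my reading of the abstract, not MCROSS's actual objective.

```python
# A heavily simplified sketch of aligning answer probabilities between cloze-style and
# natural question-passage pairs, with a momentum (EMA) copy of the encoder. This is an
# illustration of the general idea only, not the MCROSS objective.
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(online_encoder, momentum_encoder, m=0.999):
    # Exponential moving average of the online encoder's parameters.
    for p_online, p_momentum in zip(online_encoder.parameters(), momentum_encoder.parameters()):
        p_momentum.mul_(m).add_(p_online, alpha=1 - m)

def alignment_loss(natural_start_logits, cloze_start_logits):
    # Push the answer-start distribution for the natural question toward the one obtained
    # for the corresponding cloze query over the same passage (KL divergence).
    log_p_natural = F.log_softmax(natural_start_logits, dim=-1)
    p_cloze = F.softmax(cloze_start_logits, dim=-1)
    return F.kl_div(log_p_natural, p_cloze, reduction="batchmean")
```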
Fine-tuned language models use greedy decoding to answer reading comprehension questions with relative success. However, this approach does not ensure that the answer is a span in the given passage, nor does it guarantee that it is the most probable one. Does greedy decoding actually perform worse than an algorithm that does adhere to these properties? To study the performance and optimality of greedy decoding, we present exact-extract, a decoding algorithm that efficiently finds the most probable answer span in the context. We compare the performance of T5 with both decoding algorithms on zero-shot and few-shot extractive question answering. When no training examples are available, exact-extract significantly outperforms greedy decoding. However, greedy decoding quickly converges towards the performance of exact-extract with the introduction of a few training examples, becoming more extractive and increasingly likely to generate the most probable span as the training set grows. We also show that self-supervised training can bias the model towards extractive behavior, increasing performance in the zero-shot setting without resorting to annotated examples. Overall, our results suggest that pretrained language models are so good at adapting to extractive question answering that it is often enough to fine-tune on a small training set for the greedy algorithm to emulate the optimal decoding strategy.
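For intuition, a brute-force version of the span-restricted decoding idea is sketched below: score every candidate span of the passage as a decoder output and keep the most probable one. The paper's exact-extract computes this efficiently; the O(n^2) enumeration, the T5 checkpoint, and the prompt format here are illustrative assumptions only.

```python
# A brute-force illustration of finding the most probable answer span under an
# encoder-decoder LM. exact-extract computes this efficiently; this sketch does not.
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

@torch.no_grad()
def span_log_prob(prompt, span):
    inputs = tokenizer(prompt, return_tensors="pt")
    labels = tokenizer(span, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss          # mean negative log-likelihood per token
    return -loss.item() * labels.shape[1]               # total log-probability of the span

def most_probable_span(question, passage, max_span_words=8):
    words = passage.split()
    prompt = f"question: {question} context: {passage}"
    candidates = {" ".join(words[i:j])
                  for i in range(len(words))
                  for j in range(i + 1, min(i + 1 + max_span_words, len(words) + 1))}
    return max(candidates, key=lambda span: span_log_prob(prompt, span))
```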
The general QA field has been developing its methodology with the Stanford Question Answering Dataset (SQuAD) as the major reference benchmark. However, compiling factual questions is accompanied by time- and labour-consuming annotation, limiting the training data's potential size. We present the WikiOmnia dataset, a new publicly available set of QA-pairs and corresponding Russian Wikipedia article summary sections, composed with a fully automated generative pipeline. The dataset includes every available article from Wikipedia for the Russian language. The WikiOmnia pipeline is available open-source and is also tested for creating SQuAD-formatted QA on other domains, like news texts, fiction, and social media. The resulting dataset includes two parts: raw data on the whole Russian Wikipedia (7,930,873 QA pairs with paragraphs for ruGPT-3 XL and 7,991,040 QA pairs with paragraphs for ruT5-large) and cleaned data with strict automatic verification (over 160,000 QA pairs with paragraphs for ruGPT-3 XL and over 3,400,000 QA pairs with paragraphs for ruT5-large).
Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system greatly by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks. The code and trained models have been released at https://github.com/facebookresearch/DPR.
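A compact sketch of dual-encoder retrieval is shown below, using the released DPR question and context encoders from the Hugging Face hub (checkpoint names as I recall them); the scoring loop and top-k selection are simplified for illustration.

```python
# A compact sketch of dense dual-encoder retrieval with dot-product similarity.
# Checkpoint names are the DPR releases as I recall them on the Hugging Face hub.
import torch
from transformers import (DPRContextEncoder, DPRContextEncoderTokenizer,
                          DPRQuestionEncoder, DPRQuestionEncoderTokenizer)

q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base").eval()
c_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
c_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base").eval()

@torch.no_grad()
def retrieve(question, passages, k=3):
    q_emb = q_enc(**q_tok(question, return_tensors="pt")).pooler_output             # (1, d)
    p_emb = c_enc(**c_tok(passages, return_tensors="pt",
                          padding=True, truncation=True)).pooler_output             # (n, d)
    scores = (q_emb @ p_emb.T).squeeze(0)            # dot-product similarity, as in DPR
    top = torch.topk(scores, k=min(k, len(passages)))
    return [(passages[i], float(s)) for s, i in zip(top.values, top.indices)]
```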
Current research in extractive question answering (EQA) models the single-span extraction setting, where a single answer span is the label to predict for a given question-passage pair. This setting is natural for general-domain EQA, since most questions in the general domain can be answered with a single span. Following general-domain EQA models, current biomedical EQA (BioEQA) models adopt the single-span extraction setting with post-processing steps. In this paper, we investigate the question distributions across the general and biomedical domains and find that biomedical questions are more likely to require list-type answers (multiple answers) than factoid-type answers (a single answer). This calls for models that can provide multiple answers to a question. Based on this preliminary study, we propose a sequence-tagging approach for BioEQA, which is a multi-span extraction setting. Our approach directly tackles questions with a variable number of phrases as their answers and can learn to determine the number of answers for a question from the training data. Our experimental results on the BioASQ 7b and 8b list-type questions outperform the best-performing existing models without requiring post-processing steps. The source code and resources are freely available for download at https://github.com/dmis-lab/seqtagqa
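To show how a tagging formulation naturally yields a variable number of answers, the sketch below decodes standard BIO tags over passage tokens into a list of answer phrases; the tag scheme is assumed to be plain BIO, and the tagger itself is omitted.

```python
# A small sketch of turning a BIO tag sequence into multiple answer spans.
def decode_spans(tokens, tags):
    """tags[i] in {"B", "I", "O"}; returns the list of tagged answer phrases."""
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                      # a new answer span starts here
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:        # continue the current span
            current.append(token)
        else:                               # "O" (or a stray "I") closes any open span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

print(decode_spans("aspirin reduces fever and inflammation".split(),
                   ["B", "O", "B", "O", "B"]))
# ['aspirin', 'fever', 'inflammation']
```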
Collecting data for training dialogue systems can be extremely expensive due to the involvement of human participants and the need for extensive annotation. In document-grounded dialogue systems in particular, human experts must carefully read unstructured documents to answer users' questions. As a result, existing document-grounded dialogue datasets are relatively small in scale and hamper the effective training of dialogue systems. In this paper, we propose an automatic data augmentation technique grounded on documents through a generative dialogue model. The dialogue model consists of a user bot and an agent bot that can synthesize diverse dialogues given an input document, which are then used to train downstream models. When supplementing the original dataset, our method achieves significant improvements over traditional data augmentation methods. We also achieve good performance in the low-resource setting.