While exam-style questions are a fundamental educational tool serving a variety of purposes, manually constructing questions is a complex process that requires training, experience, and resources. To reduce the expenses associated with manual construction and to satisfy the need for a continuous supply of new questions, automatic question generation (QG) techniques can be used. However, compared to automatic question answering (QA), QG is a more challenging task. In this work, we fine-tune a multilingual T5 (mT5) transformer in a multi-task setting for QA, QG, and answer extraction tasks using Turkish QA datasets. To the best of our knowledge, this is the first academic work that performs automated text-to-text question generation from Turkish texts. Evaluation results show that the proposed multi-task setting achieves state-of-the-art Turkish question answering and question generation performance on the TQuADv1 and TQuADv2 datasets and the XQuAD Turkish split. Source code and pre-trained models are available at https://github.com/obss/turkish-question-generation.
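As a rough illustration of the multi-task text-to-text setup described above, the following is a minimal sketch using Hugging Face transformers and the public google/mt5-small checkpoint; the task prefixes and field separators are illustrative assumptions, not necessarily the exact format used in the linked repository.

```python
# Minimal multi-task formatting sketch for mT5 (prefixes are assumptions).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

def format_example(task: str, context: str, question: str = "", answer: str = ""):
    """Map QA, QG, and answer extraction onto (source, target) text pairs."""
    if task == "qa":       # question answering
        return f"question: {question} context: {context}", answer
    if task == "qg":       # answer-aware question generation
        return f"generate question: answer: {answer} context: {context}", question
    if task == "ans_ext":  # answer extraction
        return f"extract answers: {context}", answer
    raise ValueError(task)

src, tgt = format_example("qg", context="Ankara Türkiye'nin başkentidir.", answer="Ankara")
batch = tokenizer(src, text_target=tgt, return_tensors="pt")
loss = model(**batch).loss  # standard seq2seq cross-entropy; backprop in a training loop
```

Because all three tasks share one encoder-decoder, a single model can be trained on a mixture of such pairs, which is what makes the multi-task setting attractive.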
Question answering (QA) is one of the most challenging problems in natural language processing (NLP). QA systems attempt to produce answers to given questions, and these answers can be generated from unstructured or structured text. QA is therefore considered an important research area that can be used to evaluate text-understanding systems. A large volume of QA research has been devoted to the English language, investigating the newest techniques and achieving state-of-the-art results. However, research on Arabic question answering has progressed at a much slower pace due to the scarcity of research efforts in Arabic QA and the lack of large benchmark datasets. Recently, many pre-trained language models have delivered high performance on Arabic NLP problems. In this work, we evaluate state-of-the-art pre-trained transformer models for Arabic QA using four reading-comprehension datasets: Arabic-SQuAD, ARCD, AQAD, and TyDiQA-GoldP. We fine-tune and compare the performance of the AraBERTv2-base model, the AraBERTv0.2-large model, and the AraELECTRA model. Finally, we provide an analysis to understand and explain the low-performance results obtained by some models.
Powerful generative models have led to recent progress in question generation (QG). However, it is difficult to measure advances in QG research since there are no standardized resources that allow a uniform comparison among approaches. In this paper, we introduce QG-Bench, a multilingual and multidomain benchmark for QG that unifies existing question answering datasets by converting them to a standard QG setting. It includes general-purpose datasets such as SQuAD for English, datasets from ten domains and two styles, as well as datasets in eight different languages. Using QG-Bench as a reference, we perform an extensive analysis of the capabilities of language models for the task. First, we propose robust QG baselines based on fine-tuning generative language models. Then, we complement automatic evaluation based on standard metrics with an extensive manual evaluation, which in turn sheds light on the difficulty of evaluating QG models. Finally, we analyse both the domain adaptability of these models as well as the effectiveness of multilingual models in languages other than English. QG-Bench is released along with the fine-tuned models presented in the paper (https://github.com/asahi417/lm-question-generation), which are also available as a demo (https://autoqg.net/).
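For concreteness, converting a SQuAD-style QA record into the standard QG setting looks roughly like this; the <hl> answer-highlighting convention is common in QG work, though QG-Bench's exact preprocessing may differ in detail.

```python
# Turn a SQuAD-style record into an (input, target) pair for answer-aware QG.
def qa_to_qg(record: dict) -> tuple:
    context, question = record["context"], record["question"]
    answer = record["answers"]["text"][0]
    start = record["answers"]["answer_start"][0]
    end = start + len(answer)
    highlighted = f"{context[:start]}<hl> {answer} <hl>{context[end:]}"
    return f"generate question: {highlighted}", question

src, tgt = qa_to_qg({
    "context": "Ankara is the capital of Turkey.",
    "question": "What is the capital of Turkey?",
    "answers": {"text": ["Ankara"], "answer_start": [0]},
})
print(src)  # generate question: <hl> Ankara <hl> is the capital of Turkey.
```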
The General QA field has been developing the methodology referencing the Stanford Question answering dataset (SQuAD) as the significant benchmark. However, compiling factual questions is accompanied by time- and labour-consuming annotation, limiting the training data's potential size. We present the WikiOmnia dataset, a new publicly available set of QA-pairs and corresponding Russian Wikipedia article summary sections, composed with a fully automated generative pipeline. The dataset includes every available article from Wikipedia for the Russian language. The WikiOmnia pipeline is available open-source and is also tested for creating SQuAD-formatted QA on other domains, like news texts, fiction, and social media. The resulting dataset includes two parts: raw data on the whole Russian Wikipedia (7,930,873 QA pairs with paragraphs for ruGPT-3 XL and 7,991,040 QA pairs with paragraphs for ruT5-large) and cleaned data with strict automatic verification (over 160,000 QA pairs with paragraphs for ruGPT-3 XL and over 3,400,000 QA pairs with paragraphs for ruT5-large).
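The generate-then-verify idea behind such a pipeline can be sketched as below, with placeholder callables standing in for the ruGPT-3 XL / ruT5-large generators and the verifying QA model; the concrete prompts and filtering rules are assumptions, not the paper's exact procedure.

```python
# Schematic generate-then-verify loop for building synthetic QA data.
from typing import Callable, Iterable

def build_dataset(paragraphs: Iterable[str],
                  generate_qa: Callable[[str], tuple],
                  answer: Callable[[str, str], str]) -> list:
    kept = []
    for paragraph in paragraphs:
        question, gold = generate_qa(paragraph)
        # Strict automatic verification: keep the pair only if the answer
        # occurs in the paragraph and a QA model reproduces it.
        if gold in paragraph and answer(question, paragraph).strip() == gold.strip():
            kept.append({"paragraph": paragraph, "question": question, "answer": gold})
    return kept
```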
This paper presents a new UNIfied pre-trained Language Model (UNILM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UNILM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UNILM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at https://github.com/microsoft/unilm.
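The unifying trick is the self-attention mask. A small PyTorch sketch of the three mask patterns (1 = may attend, 0 = blocked), under the simplifying assumption of one source and one target segment:

```python
import torch

def unilm_mask(src_len: int, tgt_len: int, mode: str) -> torch.Tensor:
    n = src_len + tgt_len
    if mode == "bidirectional":   # BERT-style: every token sees every token
        return torch.ones(n, n)
    if mode == "unidirectional":  # GPT-style: token i sees tokens 0..i
        return torch.tril(torch.ones(n, n))
    if mode == "seq2seq":         # source is bidirectional, target is causal
        mask = torch.zeros(n, n)
        mask[:, :src_len] = 1.0   # every token may attend to the source
        mask[src_len:, src_len:] = torch.tril(torch.ones(tgt_len, tgt_len))
        return mask
    raise ValueError(mode)

print(unilm_mask(2, 3, "seq2seq"))
```

With the shared Transformer, only this mask changes between the three pre-training objectives.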
The amount of data and information available on the internet has grown over the past decade. This digitization has created the need for automated answering systems that can extract fruitful information from redundant and transitional knowledge sources. Such systems are designed to use natural language understanding (NLU) to surface the most prominent answers to user queries from this giant knowledge source, and thus depend on the question answering (QA) field. Question answering involves, but is not limited to, steps such as mapping a user question to a relevant query, retrieving relevant information, and finding the best-suited answer from the retrieved information. Recent improvements in deep learning models have brought convincing performance gains on all of these tasks. In this review work, the research directions of the QA field are analyzed based on the type of question, answer type, source of evidence and answers, and modeling approach. This is followed by open challenges in the field, such as automatic question generation, similarity detection, and low resource availability for certain languages. Finally, a survey of available datasets and evaluation measures is presented.
Knowledge base question answering (KBQA) aims to answer natural language questions with the help of an external knowledge base. The core idea is to find the link between the internal knowledge of the question and the known triples of the knowledge base. The KBQA task pipeline contains several steps, including entity recognition, relation extraction, and entity linking, and this pipelined approach means that errors in any step will inevitably propagate to the final prediction. To address this issue, this paper proposes a corpus generation and retrieval method (CGRM) built on a pre-trained language model (PLM) and a knowledge graph (KG). First, based on the mT5 model, we design two new pre-training tasks, knowledge-masked language modeling and paragraph-based question generation, to obtain the knowledge-enhanced T5 (kT5) model. Second, after preprocessing the knowledge graph with a series of heuristic rules, the kT5 model generates natural-language QA pairs based on the processed triples. Finally, we answer questions directly by retrieving over the synthetic dataset. We test our method on the NLPCC-ICCPOL 2016 KBQA dataset, and the results show that our framework improves the performance of KBQA and that this straightforward method is competitive with state-of-the-art approaches.
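To make the corpus-generation step concrete, here is a toy stand-in for verbalizing preprocessed triples into QA pairs; the paper does this with the knowledge-enhanced kT5 model, so the hand-written template below only illustrates the data flow.

```python
# Verbalize <head, relation, tail> triples into a synthetic QA corpus.
def triple_to_qa(head: str, relation: str, tail: str) -> dict:
    return {"question": f"What is the {relation} of {head}?", "answer": tail}

corpus = [triple_to_qa(*t) for t in [
    ("Alan Turing", "birthplace", "London"),
    ("Alan Turing", "field", "computer science"),
]]
print(corpus[0])  # QA then reduces to retrieval over this synthetic corpus.
```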
Conversational question answering requires the ability to interpret a question correctly. However, due to the difficulty of understanding co-references and ellipses in everyday conversation, current models remain unsatisfactory. Although generative methods have made remarkable progress, they are still troubled by semantic incompleteness. This paper proposes an action-based approach to recover the complete expression of a question. Specifically, we first locate the positions of co-references or ellipses in the question and assign a corresponding action to each candidate span. We then search the conversational context for phrases that match the candidate clues. Finally, according to the predicted actions, we decide whether to replace a co-reference or to supplement an ellipsis with the matched information. We demonstrate the effectiveness of our method on English and Chinese utterance rewriting tasks, improving the state-of-the-art EM (exact match) and ROUGE-L scores, including a 3.9% EM gain on the Restoration-200K dataset.
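A toy illustration of applying such predicted actions to a question; the action names, span encoding, and example are assumptions made for clarity, not the paper's exact scheme.

```python
# Apply span-level actions (right-to-left so character offsets stay valid).
def apply_actions(question: str, edits: list) -> str:
    for e in sorted(edits, key=lambda e: e["start"], reverse=True):
        if e["action"] == "REPLACE":   # resolve a co-reference
            question = question[:e["start"]] + e["phrase"] + question[e["end"]:]
        elif e["action"] == "INSERT":  # supplement an ellipsis
            question = question[:e["start"]] + e["phrase"] + " " + question[e["start"]:]
    return question

print(apply_actions("Why did he resign?",
                    [{"action": "REPLACE", "start": 8, "end": 10, "phrase": "the CEO"}]))
# -> Why did the CEO resign?
```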
Automatic question answering is an important yet challenging task in e-commerce, since users post millions of questions about the products they are interested in buying. Hence, there is a great demand for automatic answer generation systems that provide quick responses using relevant information about the product. Three sources of knowledge are available for answering a user-posted query: reviews, duplicate or similar questions, and specifications. Effectively utilizing these sources greatly helps in answering complex questions. However, exploiting them poses two main challenges: (i) the presence of irrelevant information and (ii) sentiment ambiguity in reviews and similar questions. Through this work, we propose a novel pipeline (MSQAP) that utilizes the rich information present in the aforementioned sources by separately performing relevance and ambiguity prediction before generating a response. Experimental results show that our relevance-prediction model (BERT-QA) outperforms all other variants and improves on the BERT-base baseline by 12.36% in F1 score. Our generation model (T5-QA) outperforms the baselines in all content-preservation metrics such as BLEU and ROUGE, with an average improvement of 35.02% in ROUGE and 198.75% in BLEU compared to the highest-performing baseline (HSSC-q). Human evaluation of our pipeline shows that our method achieves a 30.7% improvement in accuracy over the generation model (T5-QA) alone, so the full pipeline-based approach (MSQAP) provides more accurate answers. To the best of our knowledge, this is the first work in the e-commerce domain that automatically generates natural-language answers by combining the information present in specifications, similar questions, and review data.
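The overall shape of such a pipeline can be summarized in a few lines; every function name here is a placeholder rather than the paper's actual interface.

```python
# Filter candidate snippets for relevance and sentiment ambiguity, then
# condition the answer generator on whatever survives.
def answer_question(question, reviews, similar_qa, specs,
                    is_relevant, is_ambiguous, generate):
    candidates = reviews + similar_qa + specs
    context = [c for c in candidates
               if is_relevant(question, c) and not is_ambiguous(c)]
    return generate(question, context)
```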
The ability to convey relevant and faithful information is critical for many tasks in conditional generation and yet remains elusive for neural seq2seq models, whose outputs often reveal hallucinations and fail to correctly cover important details. In this work, we advocate planning as a useful intermediate representation for rendering conditional generation less opaque and more grounded. Our work proposes a new conceptualization of text plans as a sequence of question-answer (QA) pairs. We enhance existing datasets (e.g., for summarization) with a QA blueprint operating as a proxy for both content selection (i.e., what to say) and planning (i.e., in what order). We obtain blueprints automatically by exploiting state-of-the-art question generation technology and convert input-output pairs into input-blueprint-output tuples. We develop Transformer-based models, each varying in how they incorporate the blueprint into the generated output (e.g., as a global plan or iteratively). Evaluation across metrics and datasets demonstrates that blueprint models are more factual than alternatives which do not resort to planning and allow tighter control of the generated output.
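As a sketch, constructing an input-blueprint-output tuple might look like the following; the plan serialization ("Q: ... A: ..." plus a separator token) is an illustrative assumption rather than the paper's format.

```python
# Build a training pair in which the model must produce the QA-pair plan
# before the output text itself.
def make_blueprint_example(document: str, summary: str, qa_pairs: list) -> tuple:
    blueprint = " ".join(f"Q: {q} A: {a}" for q, a in qa_pairs)
    return document, f"{blueprint} [SUMMARY] {summary}"

src, tgt = make_blueprint_example(
    document="... long source document ...",
    summary="The bridge reopened after repairs.",
    qa_pairs=[("What reopened?", "the bridge"), ("Why was it closed?", "repairs")],
)
print(tgt)
```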
Question answering systems these days typically use template-based language generation. Though adequate for domain-specific tasks, such systems are too restrictive and predefined for domain-independent use. This paper proposes a system that outputs a full-length answer given a question and an extracted factoid answer (a short span such as a named entity) as input. Our system uses constituency and dependency parse trees of questions. A transformer-based grammar error correction model, GECToR (2020), is used as a post-processing step for better fluency. We compare our system against (i) a modified pointer-generator (SOTA) and (ii) fine-tuned DialoGPT. We also test our approach on existential (yes-no) questions, with better results. Our model generates more accurate and fluent answers than state-of-the-art (SOTA) approaches. Evaluation is done on the NewsQA and SQuAD datasets, with increases of 0.4 and 0.9 percentage points in ROUGE score, respectively. Inference time is also reduced by 85% compared with the SOTA. The improved datasets used for our evaluation will be released as part of this research contribution.
Since the radiology reports needed for clinical practice and research are written and stored as free-text narratives, extracting relevant information for further analysis is difficult. In this context, natural language processing (NLP) techniques can facilitate automatic information extraction and the transformation of free-text formats into structured data. In recent years, deep learning (DL)-based models have been adapted for NLP experiments with promising results. Although DL models based on artificial neural networks (ANN) and convolutional neural networks (CNN) have significant potential, these models still face some limitations to implementation in clinical practice. Transformers, another new DL architecture, have been increasingly used to improve the process. Therefore, in this study, we propose a transformer-based fine-grained named entity recognition (NER) architecture for clinical information extraction. We collected 88 abdominal ultrasound reports in free-text format and annotated them based on our developed information schema. The text-to-text transfer transformer (T5) model and SciFive, a pre-trained domain-specific adaptation of T5, were fine-tuned to extract entities and relations and to transform the input into a structured format. Our transformer-based models in this study outperformed previously applied approaches such as ANN and CNN models, with ROUGE-1, ROUGE-2, ROUGE-L, and BLEU scores of 0.816, 0.668, 0.528, and 0.743, respectively, while providing an interpretable structured report.
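A single training pair under this text-to-text framing might look like the following; the report sentence is invented and the target serialization is an assumption, since the paper defines its own information schema.

```python
# Free-text report in, linearized structured entities out; fine-tuning
# T5/SciFive on such pairs is ordinary seq2seq training.
report = "Liver is enlarged with a 12 mm hyperechoic lesion in segment IV."
source = f"extract clinical entities: {report}"
target = ("organ: liver | finding: enlarged | lesion: hyperechoic | "
          "size: 12 mm | location: segment IV")
print(source)
print(target)
```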
Question Generation (QG), as a challenging Natural Language Processing task, aims at generating questions based on given answers and context. Existing QG methods mainly focus on building or training models for specific QG datasets. These works are subject to two major limitations: (1) They are dedicated to specific QG formats (e.g., answer-extraction or multi-choice QG); therefore, if we want to address a new format of QG, a re-design of the QG model is required. (2) Optimal performance is only achieved on the dataset they were just trained on. As a result, we have to train and keep various QG models for different QG datasets, which is resource-intensive and ungeneralizable. To solve the problems, we propose a model named Unified-QG based on lifelong learning techniques, which can continually learn QG tasks across different datasets and formats. Specifically, we first build a format-convert encoding to transform different kinds of QG formats into a unified representation. Then, a method named STRIDER (SimilariTy RegularIzed Difficult Example Replay) is built to alleviate catastrophic forgetting in continual QG learning. Extensive experiments were conducted on 8 QG datasets across 4 QG formats (answer-extraction, answer-abstraction, multi-choice, and boolean QG) to demonstrate the effectiveness of our approach. Experimental results demonstrate that our Unified-QG can effectively and continually adapt to QG tasks when datasets and formats vary. In addition, we verify the ability of a single trained Unified-QG model in improving 8 Question Answering (QA) systems' performance through generating synthetic QA data.
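A toy version of the format-convert encoding idea: each QG flavor is rendered into one shared textual scheme so that a single model can keep learning across formats. The tags below are assumptions, not the paper's exact scheme.

```python
# Render four QG formats into one unified input representation.
def encode(fmt: str, context: str, answer: str = "", options: list = ()) -> str:
    if fmt == "answer-extraction":
        return f"[EXTRACT] answer: {answer} context: {context}"
    if fmt == "answer-abstraction":
        return f"[ABSTRACT] answer: {answer} context: {context}"
    if fmt == "multi-choice":
        return f"[CHOICE] answer: {answer} options: {' | '.join(options)} context: {context}"
    if fmt == "boolean":
        return f"[BOOL] answer: {answer} context: {context}"
    raise ValueError(fmt)

print(encode("boolean", context="The liver is enlarged.", answer="yes"))
```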
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the question-answer pairs as training data for a language model for a state-of-the-art QA system based on BERT. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed with subjects (or objects) and predicates while objects (or subjects) are considered as answers. Experimenting on five extractive QA datasets demonstrates that our technique achieves on-par performance with existing state-of-the-art QA systems with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
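The triple-to-question step can be pictured with a crude template pair; real OpenIE output and PIE-QG's paraphrasing stage are considerably messier than this hand-made example.

```python
# From one <subject, predicate, object> triple, form a question from each
# side, with the opposite argument as the answer. Deliberately crude
# placeholders for the actual question-formation rules.
def triple_to_qa_pairs(subj: str, pred: str, obj: str) -> list:
    return [
        {"question": f"Who or what {pred} {obj}?", "answer": subj},
        {"question": f"{subj} {pred} whom or what?", "answer": obj},
    ]

for qa in triple_to_qa_pairs("Marie Curie", "discovered", "polonium"):
    print(qa)
```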
Open-Domain Generative Question Answering has achieved impressive performance in English by combining document-level retrieval with answer generation. These approaches, which we refer to as GenQA, can generate complete sentences, effectively answering both factoid and non-factoid questions. In this paper, we extend GenQA to the multilingual and cross-lingual settings. For this purpose, we first introduce GenTyDiQA, an extension of the TyDiQA dataset with well-formed and complete answers for Arabic, Bengali, English, Japanese, and Russian. Based on GenTyDiQA, we design a cross-lingual generative model that produces full-sentence answers by exploiting passages written in multiple languages, including languages different from the question. Our cross-lingual generative system outperforms answer sentence selection baselines for all 5 languages and monolingual generative pipelines for three out of five languages studied.
We introduce an approach for the answer-aware question generation problem. Instead of only relying on the capability of strong pre-trained language models, we observe that the information of answers and questions can be found in some relevant sentences in the context. Based on that, we design a model which includes two modules: a selector and a generator. The selector forces the model to focus more on the sentences relevant to an answer so as to provide implicit local information. The generator generates questions by implicitly combining local information from the selector and global information from the whole context encoded by the encoder. The model is trained jointly to take advantage of latent interactions between the two modules. Experimental results on two benchmark datasets show that our model is better than strong pre-trained models for the question generation task. The code is also available (shorturl.at/lV567).
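A structural sketch of the selector-generator idea: score sentences for relevance to the answer, then let the decoder combine that local signal with the global context encoding. Shapes and layers below are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SelectorGenerator(nn.Module):
    def __init__(self, hidden: int = 256, vocab: int = 32000):
        super().__init__()
        self.selector = nn.Linear(hidden, 1)  # per-sentence relevance score
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, sent_states, ctx_states):
        # sent_states: (B, n_sent, H) sentence encodings; ctx_states: (B, T, H)
        weights = torch.sigmoid(self.selector(sent_states))        # soft selection
        local = (weights * sent_states).mean(dim=1, keepdim=True)  # local info
        global_ = ctx_states.mean(dim=1, keepdim=True)             # global info
        dec_out, _ = self.decoder(local + global_)                 # fuse both views
        return self.out(dec_out)  # next-token logits (toy one-step decoder)

model = SelectorGenerator()
print(model(torch.randn(2, 5, 256), torch.randn(2, 40, 256)).shape)  # (2, 1, 32000)
```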
We introduce GODEL (Grounded Open Dialogue Language Model), a large pre-trained language model for dialog. In contrast with earlier models such as DialoGPT, GODEL leverages a new phase of grounded pre-training designed to better support adapting GODEL to a wide range of downstream dialog tasks that require information external to the current conversation (e.g., a database or document) to produce good responses. Experiments against an array of benchmarks that encompass task-oriented dialog, conversational QA, and grounded open-ended dialog show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups, in terms of both human and automatic evaluation. A novel feature of our evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses (extrinsic evaluation) in addition to their communicative features (intrinsic evaluation). We show that extrinsic evaluation offers improved inter-annotator agreement and correlation with automated metrics. Code and data processing scripts are publicly available.
In recent years, low-resource machine reading comprehension (MRC) has made significant progress, with models obtaining remarkable performance on various language datasets. However, none of these models have been customized for the Urdu language. This work explores the semi-automated creation of the Urdu Question Answering Dataset (UQuAD1.0), which combines machine-translated SQuAD with human-generated samples drawn from Wikipedia articles and Urdu RC worksheets from Cambridge O-level books. UQuAD1.0 is a large-scale Urdu dataset for extractive machine reading comprehension tasks, consisting of 49k question-answer pairs in question, passage, and answer format. In UQuAD1.0, 45,000 QA pairs were produced by machine translation of the original SQuAD1.0 and approximately 4,000 pairs via crowdsourcing. In this study, we used two types of MRC models: a rule-based baseline and advanced transformer-based models. We found that the latter outperform the former; we therefore focus on transformer-based architectures. Using XLM-RoBERTa and multilingual BERT, we obtain F1 scores of 0.66 and 0.63, respectively.
The query-focused text summarization (QFTS) task aims at building systems that generate summaries of text documents based on a given query. A key challenge in addressing this task is the lack of large labeled data for training summarization models. In this paper, we address this challenge by exploring a series of domain adaptation techniques. Given the recent success of pre-trained transformer models across a wide range of natural language processing tasks, we utilize such models to generate abstractive summaries for the QFTS task in both single-document and multi-document scenarios. For domain adaptation, we apply a variety of techniques to pre-trained transformer-based summarization models, including transfer learning, weakly supervised learning, and distant supervision. Extensive experiments on six datasets show that our proposed approach is very effective at generating abstractive summaries for the QFTS task, while setting new state-of-the-art results on a set of automatic and human evaluation metrics.
Existing general machine translation and natural language generation evaluation metrics have several issues to which question answering (QA) systems are indifferent. To build robust QA systems, we need equivalently robust evaluation systems that can verify whether a model's prediction for a question is similar to the ground-truth annotation. The ability to compare similarity based on semantics rather than pure string overlap is important for comparing models fairly and for reflecting more realistic acceptance criteria in real-life applications. We build upon the first paper, to our knowledge, to use transformer-based model metrics to assess semantic answer similarity, which achieve higher correlation with human judgement in cases of no lexical overlap. We propose cross-encoder augmented bi-encoder and BERTScore models for semantic answer similarity, trained on a new dataset consisting of name pairs of US-American public figures. As far as we are aware, we provide the first dataset of co-referent name string pairs along with their similarities, which can be used for training.
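In code, the bi-encoder vs. cross-encoder contrast looks like this; the checkpoints are generic public sentence-transformers models standing in for the paper's fine-tuned ones, which are trained on the name-pair dataset it introduces.

```python
# Score two co-referent answer strings with no lexical overlap.
from sentence_transformers import SentenceTransformer, CrossEncoder, util

bi = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
cross = CrossEncoder("cross-encoder/stsb-roberta-base")

pred, gold = "JFK", "John F. Kennedy"  # same referent, zero string overlap
e1, e2 = bi.encode([pred, gold])       # independent embeddings
print("bi-encoder cosine:", util.cos_sim(e1, e2).item())
print("cross-encoder score:", cross.predict([(pred, gold)])[0])  # joint encoding
```

Exact-match or token-overlap metrics would score this pair at zero, which is precisely the failure mode that semantic answer similarity is meant to address.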