Structured and grounded representations of text are typically formalized as closed information extraction: the problem of extracting an exhaustive set of (subject, relation, object) triplets that are consistent with a predefined set of entities and relations from a knowledge base schema. Most existing works are pipelines prone to error accumulation, and all approaches are applicable only to unrealistically small numbers of entities and relations. We introduce GenIE (Generative Information Extraction), the first end-to-end autoregressive formulation of closed information extraction. GenIE naturally exploits the language knowledge of a pre-trained transformer by autoregressively generating relations and entities in textual form. Thanks to a new bi-level constrained generation strategy, only triplets consistent with the predefined knowledge base schema are produced. Our experiments show that GenIE is state-of-the-art on closed information extraction, generalizes from fewer training data points than baselines, and scales to a previously unmanageable number of entities and relations. With this work, closed information extraction becomes practical in realistic scenarios, providing new opportunities for downstream tasks. Finally, this work paves the way towards a unified end-to-end approach to the core tasks of information extraction. Code and models are available at https://github.com/epfl-dlab/genie.
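A bi-level constrained generation strategy of this kind is typically realized with a prefix trie over the valid entity and relation names, so that at each decoding step only tokens that can extend some legal name are allowed. Below is a minimal, illustrative sketch of that general mechanism; the token ids and trie contents are made up for the example, and GenIE's actual implementation lives in the linked repository:

```python
# Minimal sketch of trie-based constrained decoding: the decoder may only
# emit tokens that extend a legal (schema-consistent) entity/relation name.
# Token ids and trie contents below are illustrative, not GenIE's code.

class Trie:
    def __init__(self, sequences):
        self.root = {}
        for seq in sequences:
            node = self.root
            for tok in seq:
                node = node.setdefault(tok, {})

    def allowed_next(self, prefix):
        """Return the token ids that can legally extend `prefix`."""
        node = self.root
        for tok in prefix:
            if tok not in node:
                return []
            node = node[tok]
        return list(node.keys())

# Suppose the KB schema only contains these (token-id) spellings of entities:
entity_trie = Trie([[12, 7, 3], [12, 9], [44, 3]])

print(entity_trie.allowed_next([12]))  # -> [7, 9]: the only valid continuations
print(entity_trie.allowed_next([44]))  # -> [3]
```

At generation time, the logits of all tokens outside `allowed_next(prefix)` are masked to negative infinity, which guarantees every produced name exists in the schema.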
State-of-the-art neural methods for open information extraction (OpenIE) usually extract triplets (or tuples) iteratively in an autoregressive or predicate-based manner so as not to produce duplicates. In this work, we propose a different approach to the problem that can be equally or more successful. Namely, we present a novel single-pass method for OpenIE inspired by object detection algorithms from computer vision. We use an order-agnostic loss based on bipartite matching that forces unique predictions, and a Transformer-based encoder-only architecture for sequence labeling. Compared with state-of-the-art models on standard benchmarks, the proposed approach is faster and shows superior or similar performance in terms of both quality metrics and inference time. Our model sets a new state of the art on CaRB under the OIE2016 evaluation while being faster at inference than the previous state of the art. We also evaluate a multilingual version of the model in the zero-shot setting for two languages and introduce a strategy for generating synthetic multilingual data to fine-tune the model for each specific language. In this setting, we show a 15% performance improvement on multilingual Re-OIE2016, reaching 75% F1 for both Portuguese and Spanish. Code and models are available at https://github.com/sberbank-ai/detie.
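The order-agnostic loss based on bipartite matching works by first assigning each gold triple to its cheapest prediction and only then computing the loss, so the model is not penalized for emitting triples in a different order. A minimal sketch under an assumed dot-product cost and assumed shapes; this illustrates the general DETR-style idea, not DetIE's actual code:

```python
# Illustrative sketch of an order-agnostic loss via bipartite matching.
# The cost function and tensor shapes are assumptions for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

def matching_loss(pred_scores, gold_onehot):
    """pred_scores: (num_queries, num_classes) predicted triple-label scores.
    gold_onehot:  (num_gold, num_classes) one-hot gold triples.
    Each gold triple is matched to its cheapest prediction, making the loss
    invariant to the order in which triples are predicted."""
    # cost[i, j] = negated score the i-th prediction assigns the j-th gold
    cost = -pred_scores @ gold_onehot.T
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

preds = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
gold = np.array([[0, 1], [1, 0]], dtype=float)   # two gold triples
print(matching_loss(preds, gold))  # pred 0 -> gold 1, pred 1 -> gold 0
```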
Relation extraction is an important but challenging task that aims to extract all hidden relational facts from text. With the development of deep language models, relation extraction methods have achieved good performance on various benchmarks. However, we observe two shortcomings of previous methods: first, no unified framework works well across the various relation extraction settings; second, external knowledge is not effectively exploited as background information. In this work, we propose a knowledge-enhanced generative model to mitigate these two issues. Our generative model is a unified framework that sequentially generates relational triplets under various relation extraction settings and explicitly exploits relevant knowledge from a knowledge graph (KG) to resolve ambiguities. Our model achieves superior performance on multiple benchmarks and settings, including WebNLG, NYT10, and TACRED.
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, and in turn to improvements in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented on measuring and mitigating hallucinated text, but these have never before been reviewed in a comprehensive manner. In this survey, we thus provide a broad overview of the research progress and challenges of the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks: abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
We introduce ReFinED, an efficient end-to-end entity linking model that uses fine-grained entity types and entity descriptions to perform linking. The model performs mention detection, fine-grained entity typing, and entity disambiguation for all mentions in a document in a single forward pass, making it more than 60 times faster than existing approaches. ReFinED also surpasses state-of-the-art performance on standard entity linking datasets by an average of 3.7 F1 points. The model is able to generalize to large-scale knowledge bases such as Wikidata (which has 15 times more entities than Wikipedia) and to perform zero-shot entity linking. The combination of speed, accuracy, and scale makes ReFinED an effective and cost-efficient system for extracting entities from web-scale datasets, on which the model has been successfully deployed. Our code and pre-trained models are available at https://github.com/alexa/refined
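A single-forward-pass design of this kind generally amounts to one shared encoder with separate lightweight heads for mention detection, typing, and disambiguation, so all three predictions come out of the same pass over the document. The sketch below illustrates that pattern with toy dimensions; it is an assumption about the general architecture, not ReFinED's implementation:

```python
# Sketch of a single-pass multi-head design: one shared encoder, three heads.
# Layer types and sizes are toy assumptions for illustration only.
import torch
import torch.nn as nn

class SinglePassEL(nn.Module):
    def __init__(self, hidden=64, n_types=8, n_entities=100):
        super().__init__()
        self.encoder = nn.GRU(16, hidden, batch_first=True)
        self.mention_head = nn.Linear(hidden, 3)          # BIO mention tags
        self.type_head = nn.Linear(hidden, n_types)       # fine-grained types
        self.entity_head = nn.Linear(hidden, n_entities)  # disambiguation

    def forward(self, x):
        h, _ = self.encoder(x)  # one pass over the whole document
        return self.mention_head(h), self.type_head(h), self.entity_head(h)

model = SinglePassEL()
tokens = torch.randn(1, 12, 16)  # a 12-token "document" with toy features
mentions, types, entities = model(tokens)
print(mentions.shape, types.shape, entities.shape)
```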
Wikidata is a frequently updated, community-driven, and multilingual knowledge graph. Hence, Wikidata is an attractive basis for entity linking, which is evident from the recent increase in published papers. This survey focuses on four subjects: (1) Which Wikidata entity linking datasets exist, how widely used are they, and how were they constructed? (2) Do the characteristics of Wikidata matter for the design of entity linking datasets, and if so, how? (3) How do current entity linking approaches exploit the specific characteristics of Wikidata? (4) Which Wikidata characteristics are left unexploited by existing entity linking approaches? This survey reveals that current Wikidata-specific entity linking datasets do not differ in their annotation scheme from datasets for other knowledge graphs. Thus, the potential for multilingual and time-dependent datasets, which would be a natural fit for Wikidata, is not realized. Furthermore, we show that most entity linking approaches use Wikidata in the same way as any other knowledge graph, missing the chance to leverage Wikidata-specific characteristics to increase quality. Almost all approaches employ specific properties such as labels, and sometimes descriptions, but ignore characteristics such as the hyper-relational structure. Hence, there is still room for improvement, for example, by including hyper-relational graph embeddings or type information. Many approaches also include information from Wikipedia, which is easily combinable with Wikidata and provides valuable textual information that Wikidata lacks.
Knowledge base question answering (KBQA) aims to answer questions over a knowledge base (KB). Early studies mainly focused on answering simple questions over KBs and achieved great success. However, their performance on complex questions is far from satisfactory. Therefore, in recent years, researchers have proposed numerous novel methods that look into the challenges of answering complex questions. In this survey, we review recent advances in KBQA with a focus on solving complex questions, which usually contain multiple subjects, express compound relations, or involve numerical operations. In detail, we begin by introducing the complex KBQA task and relevant background. Then, we describe benchmark datasets for complex KBQA and introduce the construction process of these datasets. Next, we present the two mainstream categories of complex KBQA methods, namely semantic parsing-based (SP-based) methods and information retrieval-based (IR-based) methods. Specifically, we illustrate their procedures with flow designs and discuss their major differences and similarities. After that, we summarize the challenges that these two categories of methods encounter when answering complex questions and explicate the advanced solutions and techniques used in existing work. Finally, we conclude and discuss several promising directions for future research on complex KBQA.
Large transformer-based pre-trained language models have achieved impressive performance on a variety of knowledge-intensive tasks and can capture factual knowledge in their parameters. We argue that storing large amounts of knowledge in the model parameters is sub-optimal given the ever-growing amounts of knowledge and resource requirements. We posit that a more efficient alternative is to provide the model with explicit access to contextually relevant structured knowledge and to train it to use that knowledge. We present LM-CORE, a general framework to achieve this, which allows decoupling of language model training from the external knowledge source and allows the latter to be updated without affecting the already trained model. Experimental results show that LM-CORE, having access to external knowledge, achieves significant and robust outperformance over state-of-the-art knowledge-enhanced language models on knowledge probing tasks; can effectively handle knowledge updates; and performs well on two downstream tasks. We also present a thorough error analysis highlighting the successes and failures of LM-CORE.
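The decoupling described here follows the general retrieve-then-read pattern: knowledge lives in an external, swappable store consulted at input time rather than baked into the parameters, so updating the store requires no retraining. A minimal sketch of that pattern with an invented toy store; this is an illustration of the general idea, not LM-CORE's actual interface:

```python
# Minimal retrieve-then-read sketch: knowledge is kept outside the model and
# prepended to its input. Store contents and formatting are toy assumptions.

KNOWLEDGE_STORE = {  # can be edited/updated without retraining the model
    "Marie Curie": ["Marie Curie was born in Warsaw.",
                    "Marie Curie won two Nobel Prizes."],
}

def retrieve(query: str, k: int = 2) -> list[str]:
    facts = [f for ent, fs in KNOWLEDGE_STORE.items() if ent in query for f in fs]
    return facts[:k]

def build_input(query: str) -> str:
    context = " ".join(retrieve(query))
    return f"knowledge: {context} question: {query}"

print(build_input("Where was Marie Curie born?"))
```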
Intelligently extracting and linking complex scientific information from unstructured text is a challenging endeavor particularly for those inexperienced with natural language processing. Here, we present a simple sequence-to-sequence approach to joint named entity recognition and relation extraction for complex hierarchical information in scientific text. The approach leverages a pre-trained large language model (LLM), GPT-3, that is fine-tuned on approximately 500 pairs of prompts (inputs) and completions (outputs). Information is extracted either from single sentences or across sentences in abstracts/passages, and the output can be returned as simple English sentences or a more structured format, such as a list of JSON objects. We demonstrate that LLMs trained in this way are capable of accurately extracting useful records of complex scientific knowledge for three representative tasks in materials chemistry: linking dopants with their host materials, cataloging metal-organic frameworks, and general chemistry/phase/morphology/application information extraction. This approach represents a simple, accessible, and highly flexible route to obtaining large databases of structured knowledge extracted from unstructured text. An online demo is available at http://www.matscholar.com/info-extraction.
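The fine-tuning data described here consists of prompt/completion pairs, with completions optionally in a structured JSON format. A hypothetical example of what one such training record might look like; the field names, separators, and dopant/host schema are illustrative assumptions, not the authors' exact format:

```python
# Hypothetical prompt/completion fine-tuning record in the style the abstract
# describes (extraction returned as a list of JSON objects). Illustrative only.
import json

example = {
    "prompt": "Extract dopant-host pairs:\n"
              "ZnO films doped with 2% Al showed higher conductivity.\n\n###\n\n",
    "completion": json.dumps([
        {"dopant": "Al", "host": "ZnO"}
    ]) + " END",
}

# One JSON record per line is the usual format for this kind of fine-tuning.
with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```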
Relation extraction (RE) is a fundamental task in natural language processing. RE seeks to transform raw, unstructured text into structured knowledge by identifying relational information between entity pairs found in text. RE has numerous uses, such as knowledge graph completion, text summarization, question answering, and search queries. The history of RE methods can be divided into four phases: pattern-based RE, statistical-based RE, neural-based RE, and large language model-based RE. This survey begins with an overview of some exemplary works from the early phases of RE, highlighting their limitations and shortcomings to put subsequent progress in context. Next, we review popular benchmarks and critically examine the metrics used to evaluate RE performance. We then discuss distant supervision, the paradigm that has shaped the development of modern RE methods. Finally, we review recent work with a focus on denoising and training methods.
As an important fine-grained sentiment analysis problem, aspect-based sentiment analysis (ABSA), which aims to analyze and understand people's opinions at the aspect level, has been attracting considerable interest over the last decade. To handle ABSA in different scenarios, various tasks have been introduced for analyzing different sentiment elements and their relations, including the aspect term, aspect category, opinion term, and sentiment polarity. Unlike early ABSA works focusing on a single sentiment element, many compound ABSA tasks involving multiple elements have been studied in recent years to capture more complete aspect-level sentiment information. However, a systematic review of the various ABSA tasks and their corresponding solutions is still lacking, which we aim to fill in this survey. More specifically, we provide a new taxonomy for ABSA that organizes existing studies along the axes of the concerned sentiment elements, with an emphasis on recent advances in compound ABSA tasks. From the perspective of solutions, we summarize the utilization of pre-trained language models for ABSA, which has improved ABSA performance to a new stage. Besides, techniques for building more practical ABSA systems in cross-domain/lingual scenarios are discussed. Finally, we review some emerging topics and discuss open challenges to outline potential future directions of ABSA.
Relation extraction (RE) is a sub-discipline of information extraction (IE) which focuses on the prediction of a relational predicate from a natural-language input unit (such as a sentence, a clause, or even a short paragraph consisting of multiple sentences and/or clauses). Together with named-entity recognition (NER) and disambiguation (NED), RE forms the basis for many advanced IE tasks such as knowledge-base (KB) population and verification. In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE by encoding structured information about the sentences' principal units, such as subjects, objects, verbal phrases, and adverbials, into various forms of vectorized (and hence unstructured) representations of the sentences. Our main conjecture is that the decomposition of long and possibly convoluted sentences into multiple smaller clauses via OpenIE even helps to fine-tune context-sensitive language models such as BERT (and its plethora of variants) for RE. Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models compared to existing RE approaches. Our best results reach F1 scores of 92% and 71% on KnowledgeNet and FewRel, respectively, proving the effectiveness of our approach on competitive benchmarks.
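The enrichment idea can be pictured as concatenating OpenIE tuples to the raw sentence before feeding it to a context-sensitive encoder. The sketch below stubs the OpenIE call and assumes a simple [SEP]-joined input format; it illustrates the general idea, not the paper's exact encoding:

```python
# Sketch of enriching an encoder's input with OpenIE clause decompositions.
# The stubbed OpenIE output and the [SEP]-joined format are assumptions.

def openie_tuples(sentence: str) -> list[tuple[str, str, str]]:
    # Stub standing in for a real OpenIE system's output on this sentence.
    return [("Bell", "makes", "electric products"),
            ("Bell", "is based in", "Los Angeles")]

def enriched_input(sentence: str) -> str:
    parts = [" ".join(t) for t in openie_tuples(sentence)]
    return sentence + " [SEP] " + " [SEP] ".join(parts)

s = "Bell, based in Los Angeles, makes and distributes electric products."
print(enriched_input(s))  # raw sentence followed by its smaller clauses
```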
Recent work on entity disambiguation (ED) has typically neglected structured knowledge base (KB) facts, relying instead on a limited subset of KB information such as entity descriptions or types. This limits the range of contexts in which entities can be disambiguated. To allow the use of all KB facts, as well as descriptions and types, we introduce an ED model that links entities by reasoning over a symbolic knowledge base in a fully differentiable fashion. Our model surpasses state-of-the-art baselines on six well-established ED datasets on average. By allowing access to all KB information, our model relies less on popularity-based entity priors and improves performance by 12.7 F1 on the challenging ShadowLink dataset, which emphasizes infrequent and ambiguous entities.
Triplet extraction aims to extract entities and their corresponding relations in unstructured text. Most existing methods train an extraction model on high-quality training data, and hence are incapable of extracting relations that were not observed during training. Generalizing the model to unseen relations typically requires fine-tuning on synthetic training data which is often noisy and unreliable. In this paper, we argue that reducing triplet extraction to a template filling task over a pre-trained language model can equip the model with zero-shot learning capabilities and enable it to leverage the implicit knowledge in the language model. Embodying these ideas, we propose a novel framework, ZETT (ZEro-shot Triplet extraction by Template infilling), that is based on end-to-end generative transformers. Our experiments show that without any data augmentation or pipeline systems, ZETT can outperform previous state-of-the-art models with 25% fewer parameters. We further show that ZETT is more robust in detecting entities and can be incorporated with automatically generated templates for relations.
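Template infilling reduces triplet extraction to a fill-in-the-blanks problem for a text-to-text model: a per-relation template with slots for the head and tail entities is completed given the context. A minimal sketch using off-the-shelf T5 sentinel tokens; the template text and model choice are illustrative assumptions, not ZETT's setup:

```python
# Sketch of the template-infilling reduction: the model fills entity slots in
# a relation template conditioned on the context sentence.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

context = "Marie Curie was born in Warsaw."
template = "<extra_id_0> was born in <extra_id_1>"  # relation: place_of_birth
inputs = tokenizer(context + " " + template, return_tensors="pt")

out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=False))
# The sentinel fills give a candidate (head, tail) pair for the relation;
# an untuned t5-small will of course produce noisy fills.
```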
In recent years, there has been a surge of generation-based information extraction work, which allows a more direct use of pre-trained language models and efficiently captures output dependencies. However, previous generative methods using lexical representation do not naturally fit document-level relation extraction (DocRE), where there are multiple entities and relational facts. In this paper, we investigate the root cause of the underwhelming performance of existing generative DocRE models and discover that the culprit is the inadequacy of the training paradigm, not the capacities of the models. We propose to generate a symbolic and ordered sequence from the relation matrix, which is deterministic and easier for the model to learn. Moreover, we design a parallel row generation method to process overlong target sequences. Besides, we introduce several negative sampling strategies to improve the performance with balanced signals. Experimental results on four datasets show that our proposed method can improve the performance of generative DocRE models. We have released our code at https://github.com/ayyyq/DORE.
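Generating a symbolic, ordered sequence from the relation matrix can be pictured as a fixed row-major traversal that emits one (head, relation, tail) unit per filled cell, giving the model a single deterministic target order. A sketch under an assumed tagging scheme, not DORE's exact target format:

```python
# Sketch of linearizing a document-level relation matrix into one fixed,
# deterministic target sequence. The <...> tagging scheme is an assumption.

def linearize(relation_matrix, entities, relations):
    """relation_matrix[i][j] holds a relation id or None.
    Row-major traversal yields one fixed, easy-to-learn target order."""
    tokens = []
    for i, head in enumerate(entities):
        for j, tail in enumerate(entities):
            rel = relation_matrix[i][j]
            if rel is not None:
                tokens += [f"<{head}>", f"<{relations[rel]}>", f"<{tail}>"]
    return " ".join(tokens)

entities = ["Einstein", "Ulm", "ETH"]
relations = {0: "born_in", 1: "educated_at"}
matrix = [[None, 0, 1], [None, None, None], [None, None, None]]
print(linearize(matrix, entities, relations))
# <Einstein> <born_in> <Ulm> <Einstein> <educated_at> <ETH>
```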
To effectively train accurate Relation Extraction models, sufficient and properly labeled data is required. Adequately labeled data is difficult to obtain, and annotating such data is a tricky undertaking. Previous works have shown that either accuracy has to be sacrificed or the task is extremely time-consuming, if done accurately. We propose an approach for quickly producing high-quality datasets for the task of Relation Extraction. Neural models trained to do Relation Extraction on the created datasets achieve very good results and generalize well to other datasets. In our study, we were able to annotate 10,022 sentences for 19 relations in a reasonable amount of time, and trained a commonly used baseline model for each relation.
Automatic International Classification of Diseases (ICD) coding aims to assign multiple ICD codes to a medical note with an average of 3,000+ tokens. This task is challenging due to the high-dimensional space of multi-label assignment (155,000+ ICD code candidates) and the long-tail challenge: many ICD codes are infrequently assigned, yet infrequent ICD codes are clinically important. This study addresses the long-tail challenge by transforming this multi-label classification task into an autoregressive generation task. Specifically, we first introduce a novel pretraining objective to generate free-text diagnoses and procedures using the SOAP structure, the medical logic physicians use for note documentation. Second, instead of directly predicting over the high-dimensional space of ICD codes, our model generates the lower-dimensional text descriptions, from which ICD codes are then inferred. Third, we design a novel prompt template for multi-label classification. We evaluate our Generation with Prompt model on the full code assignment benchmark (MIMIC-III-full) and the few-shot ICD code assignment benchmark (MIMIC-III-few). Experiments on MIMIC-III-few show that our model achieves a macro F1 of 30.2, which substantially outperforms the previous MIMIC-III-full SOTA model (macro F1 4.3) and the model specifically designed for the few/zero-shot setting (macro F1 18.7). Finally, we design a novel ensemble learner, a cross-attention reranker with prompts, to integrate previous SOTA predictions with our best few-shot coding predictions. Experiments on MIMIC-III-full show that our ensemble learner substantially improves both macro and micro F1, from 10.4 to 14.6 and from 58.2 to 59.1, respectively.
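The prompt-based reformulation replaces direct prediction over the huge ICD label space with generation of lower-dimensional text descriptions that are then mapped back to codes. A hypothetical sketch of that two-step pattern; the template wording and the toy description-to-code map are assumptions, not the paper's template:

```python
# Hypothetical sketch: cast multi-label ICD coding as text generation, then
# map generated diagnosis descriptions back to codes. Illustrative only.

DESC_TO_ICD = {
    "essential hypertension": "I10",
    "type 2 diabetes mellitus": "E11.9",
}

def build_prompt(note: str) -> str:
    return f"{note}\nThe diagnoses are:"

def decode_codes(generated: str) -> list[str]:
    # Map generated free-text descriptions back to the ICD label space.
    return [code for desc, code in DESC_TO_ICD.items() if desc in generated.lower()]

note = "Patient with long-standing essential hypertension, on metformin."
print(build_prompt(note))
print(decode_codes("Essential hypertension; Type 2 diabetes mellitus"))
# -> ['I10', 'E11.9']
```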
We study the fact-checking problem, which aims to identify the veracity of a given claim. Specifically, we focus on the task of Fact Extraction and VERification (FEVER) and its accompanying dataset. The task consists of retrieving relevant documents (and sentences) from Wikipedia and verifying whether the information in the documents supports or refutes a given claim. This task is essential and can serve as a building block for applications such as fake news detection and medical claim verification. In this paper, we aim at a better understanding of the challenges of the task by presenting the literature in a structured and comprehensive way. We describe the proposed methods by analyzing the technical perspectives of the different approaches and discussing the performance results on the FEVER dataset, which is the most well-studied and formally structured dataset for the fact extraction and verification task. We also conduct the largest experimental study to date for identifying beneficial loss functions for the sentence retrieval component. Our analysis indicates that sampling negative sentences is important for improving performance and decreasing computational complexity. Finally, we describe open issues and future challenges, and we motivate future research on the task.
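Negative sampling for the sentence retrieval component typically pairs each gold evidence sentence with sampled non-evidence sentences and trains the retriever to rank the gold sentence higher. An illustrative sketch with a margin loss; this is a generic formulation, not a specific system from the survey:

```python
# Generic sketch of negative sampling for sentence retrieval training:
# each gold evidence sentence is paired with k sampled negatives, and a
# margin (hinge) loss pushes the gold sentence's score above the negatives'.
import random

def hinge_loss(score_pos, score_neg, margin=1.0):
    return max(0.0, margin - score_pos + score_neg)

def training_pairs(gold_sentence, candidate_pool, k=2):
    negatives = random.sample(
        [s for s in candidate_pool if s != gold_sentence], k)
    return [(gold_sentence, neg) for neg in negatives]

pool = ["Paris is in France.", "Cats are mammals.", "The sky is blue."]
for pos, neg in training_pairs("Paris is in France.", pool):
    # A real retriever would score each sentence against the claim;
    # fixed toy scores are used here for illustration.
    print(hinge_loss(score_pos=0.8, score_neg=0.3), pos, "||", neg)
```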
A major drawback of modern neural OpenIE systems and benchmarks is that they prioritize high coverage of information in extractions over the compactness of their constituents. This severely limits the usefulness of OpenIE extractions in many downstream tasks. The utility of extractions can be improved if they are compact and share constituents. To this end, we study the problem of identifying compact extractions with neural-based methods. We propose CompactIE, an OpenIE system that uses a novel pipelined approach to produce compact extractions with overlapping constituents. It first detects the constituents of the extractions and then links them to build extractions. We train our system on compact extractions obtained by processing existing benchmarks. Our experiments on the CaRB and Wire57 datasets indicate that CompactIE finds 1.5x-2x more compact extractions than previous systems, with high precision, establishing a new state-of-the-art performance in compact OpenIE.
Knowledge graphs (KGs) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKGs) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly added relations, which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts at such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.