In recent years, there has been a surge of generation-based information extraction work, which allows more direct use of pre-trained language models and efficiently captures output dependencies. However, previous generative methods that use lexical representations do not naturally fit document-level relation extraction (DocRE), where there are multiple entities and relational facts. In this paper, we investigate the root cause of the underwhelming performance of existing generative DocRE models and discover that the culprit is the inadequacy of the training paradigm rather than the capacity of the models. We propose to generate from the relation matrix a symbolic and ordered sequence, which is deterministic and easier for the model to learn. Moreover, we design a parallel row-generation method to process overlong target sequences. In addition, we introduce several negative sampling strategies to improve performance with balanced signals. Experimental results on four datasets show that our proposed method improves the performance of generative DocRE models. We have released our code at https://github.com/ayyyq/DORE.
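To make the proposed target format concrete, here is a minimal sketch of matrix linearization and per-row target splitting, with hypothetical entity/relation marker names; it illustrates the idea, not the released DORE code.

```python
# Hypothetical sketch of a DORE-style symbolic target (not the released code).
# R[i][j] is the set of relation ids holding between entity i and entity j.

def linearize_relation_matrix(R):
    """Turn a relation matrix into a deterministic, symbolic sequence.

    Cells are visited in a fixed (head, tail, relation) order, so the same
    document always yields the same target, which is easier to learn than
    free-form lexical generation."""
    tokens = []
    for i, row in enumerate(R):
        for j, rels in enumerate(row):
            for r in sorted(rels):
                tokens.extend([f"<e{i}>", f"<r{r}>", f"<e{j}>"])
    return " ".join(tokens)

def row_targets(R):
    """Split the target by head entity so rows can be generated in parallel,
    replacing one overlong sequence with several short ones."""
    return [
        " ".join(f"<e{i}> <r{r}> <e{j}>"
                 for j, rels in enumerate(row) for r in sorted(rels))
        for i, row in enumerate(R)
    ]

# Example: three entities; entity 0 relates to entity 2 via relation 5.
R = [[set(), set(), {5}], [set(), set(), set()], [set(), set(), set()]]
print(linearize_relation_matrix(R))  # -> "<e0> <r5> <e2>"
```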
Relation extraction is an important but challenging task that aims to extract all hidden relational facts from text. With the development of deep language models, relation extraction methods have achieved good performance on various benchmarks. However, we observe two shortcomings of previous methods: first, there is no unified framework that works well under various relation extraction settings; second, external knowledge is not effectively utilized as background information. In this work, we propose a knowledge-enhanced generative model to mitigate these two issues. Our generative model is a unified framework that sequentially generates relational triplets under various relation extraction settings, and it explicitly utilizes relevant knowledge from a knowledge graph (KG) to resolve ambiguity. Our model achieves superior performance on multiple benchmarks and settings, including WebNLG, NYT10, and TACRED.
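One plausible way to "explicitly utilize relevant knowledge from a KG" in a seq2seq extractor is to retrieve facts about candidate entities and append them to the encoder input; the sketch below is an assumed illustration of that pattern, not the paper's implementation.

```python
# Assumed sketch: augment the input of a generative triplet extractor with
# retrieved KG facts so the decoder can resolve ambiguous entities.

def augment_with_kg(text, candidate_entities, kg, max_facts_per_entity=3):
    """kg maps an entity name to a list of (relation, object) facts."""
    facts = []
    for ent in candidate_entities:
        for rel, obj in kg.get(ent, [])[:max_facts_per_entity]:
            facts.append(f"{ent} {rel} {obj}")
    return text + (" [KG] " + " ; ".join(facts) if facts else "")

kg = {"Paris": [("capital_of", "France")]}
src = augment_with_kg("He moved to Paris in 2001.", ["Paris"], kg)
print(src)  # He moved to Paris in 2001. [KG] Paris capital_of France
# The augmented string is then fed to the encoder of the generative model.
```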
Document-level relation extraction (DRE) aims to recognize the relations between two entities, where an entity may correspond to multiple mentions that span beyond sentence boundaries. Few previous studies have investigated mention integration, which can be problematic because coreferential mentions do not contribute equally to a specific relation. Moreover, prior efforts mainly focus on entity-level reasoning rather than capturing the global interactions between entity pairs. In this paper, we propose two novel techniques, Context-Guided Mention Integration and Inter-pair Reasoning (CGM2IR), to improve DRE. Instead of simply applying average pooling, the contexts are used to guide the integration of coreferential mentions in a weighted-sum manner. In addition, inter-pair reasoning executes an iterative algorithm on an entity-pair graph to model the interdependency of relations. We evaluate our CGM2IR model on three widely used benchmark datasets, namely DocRED, CDR, and GDA. Experimental results show that our model outperforms previous state-of-the-art models.
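The contrast with average pooling fits in a few lines; the sketch below (assumed shapes and names, not the authors' code) weights each coreferential mention by its affinity to a context vector before summing.

```python
# Assumed sketch of context-guided mention integration.
import torch
import torch.nn.functional as F

def integrate_mentions(mention_vecs, context_vec):
    """mention_vecs: (num_mentions, dim); context_vec: (dim,).
    Returns one entity vector as a context-weighted sum of its mentions,
    so mentions relevant to this context receive larger weights."""
    scores = mention_vecs @ context_vec    # (num_mentions,)
    weights = F.softmax(scores, dim=0)
    return weights @ mention_vecs          # (dim,)

mentions = torch.randn(4, 768)  # four coreferential mentions of one entity
context = torch.randn(768)      # e.g., pooled context of the entity pair
print(integrate_mentions(mentions, context).shape)  # torch.Size([768])
```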
Triple extraction is an essential task in information extraction for natural language processing and knowledge graph construction. In this paper, we revisit end-to-end triple extraction via sequence generation. Since generative triple extraction may struggle to capture long-term dependencies and can produce unfaithful triples, we introduce a novel model: contrastive triple extraction with a generative transformer. Specifically, we introduce a single shared transformer module for encoder-decoder-based generation. To generate faithful results, we propose a novel triplet contrastive training objective. Moreover, we introduce two mechanisms to further improve model performance, namely batch-wise dynamic attention masking and triple-wise calibration. Experimental results on three datasets (i.e., NYT, WebNLG, and MIE) show that our method outperforms the baselines.
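The triplet contrastive objective can be sketched as an InfoNCE-style loss in which the gold triplet must outscore corrupted ones; this is an assumed illustration of the training signal, not the paper's code.

```python
# Assumed sketch of a triplet contrastive objective.
import torch
import torch.nn.functional as F

def triplet_contrastive_loss(anchor, gold_vec, corrupt_vecs, temperature=0.1):
    """anchor: (dim,), e.g., a pooled decoder state; gold_vec: (dim,), the
    encoding of the ground-truth triplet; corrupt_vecs: (k, dim), encodings
    of corrupted triplets (e.g., with a swapped head or relation)."""
    candidates = torch.cat([gold_vec.unsqueeze(0), corrupt_vecs], dim=0)
    logits = F.cosine_similarity(anchor.unsqueeze(0), candidates, dim=-1)
    labels = torch.zeros(1, dtype=torch.long)  # the gold triplet is index 0
    return F.cross_entropy((logits / temperature).unsqueeze(0), labels)

anchor, gold = torch.randn(256), torch.randn(256)
negatives = torch.randn(8, 256)
print(triplet_contrastive_loss(anchor, gold, negatives).item())
```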
Generative Knowledge Graph Construction (KGC) refers to methods that leverage the sequence-to-sequence framework for building knowledge graphs, which is flexible and can be adapted to a wide range of tasks. In this study, we summarize the recent compelling progress in generative knowledge graph construction. We present the advantages and weaknesses of each paradigm in terms of different generation targets and provide theoretical insight and empirical analysis. Based on this review, we suggest promising research directions for the future. Our contributions are threefold: (1) we present a detailed, complete taxonomy of generative KGC methods; (2) we provide a theoretical and empirical analysis of generative KGC methods; (3) we propose several research directions that can be developed in the future.
Relation extraction (RE) is a fundamental task in natural language processing. RE seeks to transform raw, unstructured text into structured knowledge by identifying relational information between entity pairs in text. RE has many uses, such as knowledge graph completion, text summarization, question answering, and search queries. The history of RE methods can be divided into four phases: pattern-based RE, statistics-based RE, neural-based RE, and large language model-based RE. This survey begins with an overview of some exemplary works from the early phases of RE, highlighting their limitations and shortcomings to put later progress in context. Next, we review popular benchmarks and critically examine the metrics used to evaluate RE performance. We then discuss distant supervision, the paradigm that shaped the development of modern RE methods. Finally, we review recent work, focusing on denoising and pre-training methods.
Document-level relation extraction aims to extract relations among entities within a document. In contrast to its sentence-level counterpart, document-level relation extraction requires inference over multiple sentences to extract complex relational triples. Previous research normally completes reasoning through information propagation on mention-level or entity-level document graphs, regardless of the correlations between relations. In this paper, we propose a novel document-level relation extraction model based on a Masked Image Reconstruction network (DRE-MIR), which models inference as a masked image reconstruction problem to capture the correlations between relations. Specifically, we first leverage an encoder module to obtain entity features and construct an entity-pair matrix based on those features. We then treat the entity-pair matrix as an image, randomly mask it, and restore it through an inference module to capture the correlations between relations. We evaluate our model on three public document-level relation extraction datasets, i.e., DocRED, CDR, and GDA. Experimental results demonstrate that our model achieves state-of-the-art performance on these three datasets and exhibits excellent robustness against noise during inference.
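The masked-reconstruction idea translates almost directly into code; the sketch below (assumed shapes, with a stand-in MLP where the paper uses its inference module) masks random cells of the entity-pair matrix and penalizes reconstruction error on exactly those cells.

```python
# Assumed sketch of masked entity-pair-matrix reconstruction.
import torch
import torch.nn as nn

dim, n = 128, 6
pair_matrix = torch.randn(n, n, dim)   # cell (i, j): features of pair (e_i, e_j)

mask = torch.rand(n, n) < 0.15         # randomly mask ~15% of the cells
masked = pair_matrix.clone()
masked[mask] = 0.0

# Stand-in "inference module": any network restoring the masked matrix;
# a per-cell MLP is used here purely for illustration.
inference_module = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
restored = inference_module(masked)

# Loss only on masked cells, so the module must exploit the surrounding
# unmasked pairs, i.e., the correlations between relations.
loss = nn.functional.mse_loss(restored[mask], pair_matrix[mask])
print(loss.item())
```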
Compared with pre-trained language representation models such as BERT, knowledge-enhanced language representation models have been shown to be more effective in knowledge base construction tasks (i.e., relation extraction). These knowledge-enhanced language models incorporate knowledge into pre-training to generate representations of entities or relations. However, existing methods typically represent each entity with a separate embedding. As a result, these methods struggle to represent out-of-vocabulary entities; a large number of parameters, on top of the underlying token model (i.e., a transformer), must be used; and the number of entities that can be handled is limited in practice due to memory constraints. Moreover, existing models still struggle to represent entities and relations simultaneously. To address these problems, we propose a new pre-training model that learns representations of entities and relations from token spans and span pairs in the text, respectively. By encoding spans efficiently with span modules, our model can represent both entities and their relations while requiring fewer parameters than existing models. We pre-trained our model with the knowledge graph extracted from Wikipedia and tested it on a broad range of supervised and unsupervised information extraction tasks. The results show that our model learns better representations for both entities and relations than baselines, and in supervised settings, fine-tuning our model consistently outperforms RoBERTa and achieves competitive results on information extraction tasks.
The structured and grounded representation of text is typically formalized as closed information extraction, the problem of extracting an exhaustive set of (subject, relation, object) triplets consistent with a predefined set of entities and relations from a knowledge base schema. Most existing works are pipelines prone to error accumulation, and all approaches are applicable only to unrealistically small numbers of entities and relations. We introduce GenIE (Generative Information Extraction), the first end-to-end autoregressive formulation of closed information extraction. GenIE naturally exploits the language knowledge of pre-trained transformers by autoregressively generating relations and entities in textual form. Thanks to a new bi-level constrained generation strategy, only triplets consistent with the predefined knowledge base schema are produced. Our experiments show that GenIE is state-of-the-art on closed information extraction, generalizes from fewer training data points than baselines, and scales to a previously unmanageable number of entities and relations. With this work, closed information extraction becomes practical in realistic scenarios, providing new opportunities for downstream tasks. Finally, this work paves the way toward a unified end-to-end approach to the core tasks of information extraction. Code and models are available at https://github.com/epfl-dlab/genie.
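Constrained generation of this kind is commonly implemented with a prefix trie over the admissible names; the sketch below shows that generic mechanism (plain Python, not the GenIE codebase): the tokens generated so far select a trie node whose children are the only tokens allowed next.

```python
# Generic sketch of trie-constrained decoding over a fixed schema.

def build_trie(sequences):
    trie = {}
    for seq in sequences:
        node = trie
        for tok in seq:
            node = node.setdefault(tok, {})
        node["<end>"] = {}
    return trie

def allowed_next_tokens(trie, prefix):
    """Walk the trie with the tokens generated so far; the keys of the
    reached node are the only tokens the decoder may emit next."""
    node = trie
    for tok in prefix:
        if tok not in node:
            return []  # prefix is not a valid schema element
        node = node[tok]
    return list(node.keys())

# Tokenized names of the relations permitted by the KB schema.
relations = [["capital", "of"], ["part", "of"], ["born", "in"]]
trie = build_trie(relations)
print(allowed_next_tokens(trie, []))        # ['capital', 'part', 'born']
print(allowed_next_tokens(trie, ["part"]))  # ['of']
```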
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, and in turn to improved performance on downstream tasks such as abstractive summarization, dialogue generation, and data-to-text generation. However, it is also apparent that deep learning-based generation is prone to hallucinating unintended text, which degrades system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented on measuring and mitigating hallucinated text, but these have never been reviewed in a comprehensive manner before. In this survey, we therefore provide a broad overview of the research progress and challenges of the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucination in the following downstream tasks: abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated text in NLG.
Incorporating prior knowledge into pre-trained language models has proven effective for knowledge-driven NLP tasks, such as entity typing and relation extraction. Current pre-training procedures usually inject external knowledge into models by using knowledge masking, knowledge fusion, and knowledge replacement. However, the factual information contained in the input sentences has not been fully mined, and the injected external knowledge has not been rigorously verified. As a result, the contextual information cannot be fully exploited, extra noise may be introduced, or the amount of knowledge injected is limited. To address these issues, we propose MLRIP, which modifies the knowledge masking strategies proposed by ERNIE-Baidu and introduces a two-stage entity replacement strategy. Extensive experiments with comprehensive analyses illustrate the superiority of MLRIP over BERT-based models on military knowledge-driven NLP tasks.
We present a joint model for entity-level relation extraction from documents. In sharp contrast to other approaches, which focus on local intra-sentence mention pairs and thus require mention-level annotations, our model operates at the entity level. To this end, it follows a multi-task approach that builds on coreference resolution and gathers relevant signals via multi-level representations combining global entity and local mention information. We achieve state-of-the-art relation extraction results on the DocRED dataset and report the first entity-level end-to-end relation extraction results for future reference. Finally, our experimental results suggest that a joint approach is on par with task-specific learning, though more efficient thanks to shared parameters and training steps.
Triplet extraction aims to extract entities and their corresponding relations from unstructured text. Most existing methods train an extraction model on high-quality training data and are hence incapable of extracting relations that were not observed during training. Generalizing the model to unseen relations typically requires fine-tuning on synthetic training data, which is often noisy and unreliable. In this paper, we argue that reducing triplet extraction to a template-filling task over a pre-trained language model can equip the model with zero-shot learning capabilities and enable it to leverage the implicit knowledge in the language model. Embodying these ideas, we propose a novel framework, ZETT (ZEro-shot Triplet extraction by Template infilling), that is based on end-to-end generative transformers. Our experiments show that without any data augmentation or pipeline systems, ZETT can outperform previous state-of-the-art models with 25% fewer parameters. We further show that ZETT is more robust in detecting entities and can be incorporated with automatically generated templates for relations.
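Template infilling reduces extraction to choosing the relation whose filled template the language model scores highest; the sketch below shows the control flow with a dummy scorer standing in for the T5-style infilling model (the template strings and function names are assumptions, not ZETT's actual templates).

```python
# Assumed sketch of template infilling for zero-shot triplet extraction.

TEMPLATES = {
    "founded_by": "<head> was founded by <tail>.",
    "located_in": "<head> is located in <tail>.",
}

def extract(sentence, fill_and_score):
    """fill_and_score(sentence, template) -> (head, tail, score), assumed to
    be backed by an encoder-decoder LM filling the <head>/<tail> slots."""
    best = None
    for relation, template in TEMPLATES.items():
        head, tail, score = fill_and_score(sentence, template)
        if best is None or score > best[-1]:
            best = (head, relation, tail, score)
    return best[:3]

def dummy_scorer(sentence, template):  # stand-in so the sketch runs
    return ("Apple", "Steve Jobs", 0.9 if "founded" in template else 0.1)

print(extract("Apple was founded by Steve Jobs.", dummy_scorer))
# ('Apple', 'founded_by', 'Steve Jobs')
```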
Structuring output into database-like tables, comprising values organized in horizontal rows and vertical columns identifiable by name, can cover a wide range of NLP tasks. Following this observation, we propose a framework for text-to-table neural models applicable to problems such as extraction of line items, joint entity and relation extraction, or knowledge base population. The permutation-based decoder we propose is a generalized sequential method that comprehends information from all cells in the table. Training maximizes the expected log-likelihood of the table's content over all random permutations of the factorization order. During content inference, we exploit the model's ability to generate cells in any order by searching over possible orderings to maximize the model's confidence and to avoid the substantial error accumulation that other sequential models are prone to. Experiments demonstrate the high practical value of the framework, which establishes state-of-the-art results on several challenging datasets, outperforming previous solutions by up to 15%.
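The permutation objective can be written down compactly; the sketch below (assumed interfaces, with a dummy model so it runs) estimates the expected negative log-likelihood of a table's cells over sampled factorization orders.

```python
# Assumed sketch of permutation-based training for table generation.
import random

def permutation_nll(cells, cell_log_prob, num_samples=4):
    """cells: list of (row, column, value); cell_log_prob(cell, revealed)
    returns the model's log-probability of the cell's value given the
    cells already revealed under the sampled order."""
    total = 0.0
    for _ in range(num_samples):
        order = random.sample(cells, len(cells))  # one random generation order
        revealed = []
        for cell in order:
            total -= cell_log_prob(cell, revealed)
            revealed.append(cell)
    return total / num_samples

cells = [(0, "name", "ACME"), (0, "amount", "12.00")]
print(permutation_nll(cells, lambda cell, revealed: -0.5))  # 1.0
```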
Entities, as important carriers of real-world knowledge, play a key role in many NLP tasks. We focus on incorporating entity knowledge into an encoder-decoder framework for informative text generation. Existing approaches tried to index, retrieve, and read external documents as evidence, but they suffered from a large computational overhead. In this work, we propose an encoder-decoder framework with an entity memory, namely EDMem. The entity knowledge is stored in the memory as latent representations, and the memory is pre-trained on Wikipedia along with encoder-decoder parameters. To precisely generate entity names, we design three decoding methods to constrain entity generation by linking entities in the memory. EDMem is a unified framework that can be used on various entity-intensive question answering and generation tasks. Extensive experimental results show that EDMem outperforms both memory-based auto-encoder models and non-memory encoder-decoder models.
Recent work on entity disambiguation (ED) has typically neglected structured knowledge base (KB) facts and instead relied on a limited subset of KB information, such as entity descriptions or types. This limits the range of contexts in which entities can be disambiguated. To allow the use of all KB facts, as well as descriptions and types, we introduce an ED model that links entities by reasoning over a symbolic knowledge base in a fully differentiable manner. On average, our model surpasses state-of-the-art baselines on six well-established ED datasets. By allowing access to all KB information, our model is less reliant on popularity-based entity priors and improves performance by 12.7 F1 on the challenging ShadowLink dataset, which emphasizes infrequent and ambiguous entities.
In document-level event extraction (DEE) tasks, event arguments are always scattered across sentences (the across-sentence issue), and multiple events may lie in one document (the multi-event issue). In this paper, we argue that the relation information among event arguments is of great significance for addressing the above two issues, and we propose a new DEE framework that models relation dependencies, called Relation-augmented Document-level Event Extraction (ReDEE). More specifically, this framework features a novel, tailored transformer named the Relation-Augmented Attention Transformer (RAAT). RAAT is scalable to capture multi-scale and multi-amount argument relations. To further leverage relation information, we introduce a separate event relation prediction task and adopt multi-task learning to explicitly enhance event extraction performance. Extensive experiments demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on two public datasets. Our code is available at https://github.com/TencentYoutuResearch/RAAT.
We propose a new framework, Translation between Augmented Natural Languages (TANL), to solve many structured prediction language tasks, including joint entity and relation extraction, nested named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, and dialogue state tracking. Instead of tackling the problem by training task-specific discriminative classifiers, we frame it as a translation task between augmented natural languages, from which the task-relevant information can be easily extracted. Our approach can match or outperform task-specific models on all tasks, and in particular achieves new state-of-the-art results on joint entity and relation extraction (the CoNLL04, ADE, NYT, and ACE2005 datasets), relation classification (FewRel and TACRED), and semantic role labeling (CoNLL-2005 and CoNLL-2012). We accomplish this while using the same architecture and hyperparameters for all tasks, even when training a single model to solve all tasks at the same time (multi-task learning). Finally, we show that our framework can also significantly improve performance in low-resource regimes, thanks to better use of label semantics.
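Because the output is ordinary text with inline annotations, decoding reduces to string parsing; the sketch below parses a TANL-style bracketed format (the exact surface format here is a simplified assumption) back into entities and relations.

```python
# Assumed sketch: parse TANL-style augmented natural language output.
import re

def decode_tanl(output):
    """Parse segments like '[ Tolkien | person | author of = The Hobbit ]'
    into (mention, type, [(relation, tail), ...]) tuples."""
    results = []
    for seg in re.findall(r"\[(.*?)\]", output):
        parts = [p.strip() for p in seg.split("|")]
        mention = parts[0]
        ent_type = parts[1] if len(parts) > 1 else None
        relations = []
        for p in parts[2:]:
            rel, _, tail = p.partition("=")
            relations.append((rel.strip(), tail.strip()))
        results.append((mention, ent_type, relations))
    return results

out = "[ Tolkien | person | author of = The Hobbit ] wrote [ The Hobbit | book ]."
print(decode_tanl(out))
# [('Tolkien', 'person', [('author of', 'The Hobbit')]),
#  ('The Hobbit', 'book', [])]
```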
Document-level relation extraction faces two overlooked challenges: the long-tail problem and the multi-label problem. Previous work focuses mainly on obtaining better contextual representations for entity pairs, hardly addressing the above challenges. In this paper, we analyze the co-occurrence correlation of relations and introduce it into the DocRE task for the first time. We argue that the correlations can not only transfer knowledge between data-rich relations and data-scarce ones to assist in the training of long-tailed relations, but also reflect the semantic distance that guides the classifier in identifying semantically close relations for multi-label entity pairs. Specifically, we use relation embeddings as a medium and propose two co-occurrence prediction sub-tasks, from coarse- and fine-grained perspectives, to capture relation correlations. Finally, the learned correlation-aware embeddings are used to guide the extraction of relational facts. Extensive experiments on two popular DocRE datasets show that our method achieves superior results compared to baselines. Insightful analysis also demonstrates the potential of relation correlations to address the above challenges.
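The raw signal behind both co-occurrence sub-tasks is a relation co-occurrence statistic computed from the training labels; the sketch below (an assumed illustration, not the paper's code) builds such a matrix from multi-label entity pairs.

```python
# Assumed sketch: relation co-occurrence counts from multi-label entity pairs.
from collections import Counter
from itertools import combinations

def cooccurrence_matrix(samples, num_relations):
    """samples: one set of relation ids per annotated entity pair."""
    counts = Counter()
    for rels in samples:
        for a, b in combinations(sorted(rels), 2):
            counts[(a, b)] += 1
    matrix = [[0.0] * num_relations for _ in range(num_relations)]
    for (a, b), c in counts.items():
        matrix[a][b] = matrix[b][a] = float(c)
    return matrix

# Entity pairs labeled with multiple relations at once.
pairs = [{0, 3}, {0, 3, 5}, {1, 2}]
print(cooccurrence_matrix(pairs, 6)[0][3])  # 2.0: relations 0 and 3 co-occur twice
```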
Neural language representation models such as BERT pre-trained on large-scale corpora can well capture rich semantic patterns from plain text, and be fine-tuned to consistently improve the performance of various NLP tasks. However, the existing pre-trained language models rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better language understanding. We argue that informative entities in KGs can enhance language representation with external knowledge. In this paper, we utilize both large-scale textual corpora and KGs to train an enhanced language representation model (ERNIE), which can take full advantage of lexical, syntactic, and knowledge information simultaneously. The experimental results have demonstrated that ERNIE achieves significant improvements on various knowledge-driven tasks, and meanwhile is comparable with the state-of-the-art model BERT on other common NLP tasks. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/ERNIE.