Document-level relation extraction (DRE) aims to identify the relations between two entities, where an entity may correspond to multiple mentions that span sentence boundaries. Few previous studies have investigated mention integration, which can be problematic because coreferential mentions do not contribute equally to a specific relation. Moreover, prior efforts mainly focus on entity-level reasoning rather than capturing the global interactions between entity pairs. In this paper, we propose two novel techniques, Context-Guided Mention Integration and Inter-pair Reasoning (CGM2IR), to improve DRE. Instead of simply applying average pooling, the contexts are used to guide the integration of coreferential mentions in a weighted-sum manner. In addition, inter-pair reasoning executes an iterative algorithm on the entity-pair graph to model the interdependency of relations. We evaluate our CGM2IR model on three widely used benchmark datasets, namely DocRED, CDR, and GDA. Experimental results show that our model outperforms previous state-of-the-art models.
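The weighted-sum mention integration described above can be sketched as a simple attention-style pooling, in which a context vector scores each coreferential mention. This is an illustrative assumption of the idea, not the authors' implementation; all names and shapes are hypothetical:

```python
import numpy as np

def context_guided_pool(mentions: np.ndarray, context: np.ndarray) -> np.ndarray:
    """mentions: (num_mentions, dim); context: (dim,). Returns a (dim,) entity embedding."""
    scores = mentions @ context               # how relevant each mention is to the context
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ mentions                 # weighted sum instead of plain average pooling

mentions = np.eye(2)                          # two toy mention embeddings
context = np.array([2.0, 0.0])                # context most similar to the first mention
entity = context_guided_pool(mentions, context)
```

With these toy inputs, the first mention receives the larger softmax weight, so the pooled entity embedding leans toward it rather than being an unweighted average.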
Document-level relation extraction aims to extract the relations among entities within a document. Compared with its sentence-level counterpart, document-level relation extraction requires inference over multiple sentences to extract complex relational triples. Previous research typically performs reasoning through information propagation on mention-level or entity-level document graphs, regardless of the correlations between relations. In this paper, we propose a novel document-level relation extraction model based on a Masked Image Reconstruction network (DRE-MIR), which models inference as a masked image reconstruction problem to capture the correlations between relations. Specifically, we first leverage an encoder module to obtain entity features and construct an entity-pair matrix based on these features. We then treat the entity-pair matrix as an image, randomly mask it, and restore it through an inference module to capture the correlations between relations. We evaluate our model on three public document-level relation extraction datasets, i.e., DocRED, CDR, and GDA. Experimental results show that our model achieves state-of-the-art performance on these three datasets and exhibits excellent robustness against noise during inference.
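A minimal sketch of the mask step in the mask-and-restore idea above, applied to an entity-pair feature matrix; the restoring inference module is model-specific and omitted. The function name and mask ratio are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_entity_pair_matrix(matrix: np.ndarray, mask_ratio: float = 0.3):
    """Randomly zero out entries of an (n_entities, n_entities, dim) feature matrix."""
    mask = rng.random(matrix.shape[:2]) < mask_ratio  # which entity pairs to hide
    masked = matrix.copy()
    masked[mask] = 0.0                                # hidden pairs must be reconstructed
    return masked, mask

pairs = rng.normal(size=(4, 4, 8))                    # toy entity-pair feature "image"
masked, mask = mask_entity_pair_matrix(pairs)
```

The reconstruction objective would then train an inference module to recover `pairs` from `masked`, forcing it to exploit correlations between the relations of different entity pairs.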
Document-level relation extraction (DocRE) aims to identify semantic labels among entities within a single document. One major challenge of DocRE is to dig decisive details regarding a specific entity pair from long text. However, in many cases, only a fraction of text carries required information, even in the manually labeled supporting evidence. To better capture and exploit instructive information, we propose a novel expLicit syntAx Refinement and Subsentence mOdeliNg based framework (LARSON). By introducing extra syntactic information, LARSON can model subsentences of arbitrary granularity and efficiently screen instructive ones. Moreover, we incorporate refined syntax into text representations which further improves the performance of LARSON. Experimental results on three benchmark datasets (DocRED, CDR, and GDA) demonstrate that LARSON significantly outperforms existing methods.
We present a joint model for entity-level relation extraction from documents. In contrast to other approaches - which focus on local intra-sentence mention pairs and thus require mention-level annotations - our model operates at the entity level. To do so, it follows a multi-task approach that builds on coreference resolution and gathers relevant signals via multi-level representations combining global entity and local mention information. We achieve state-of-the-art relation extraction results on the DocRED dataset and report the first entity-level end-to-end relation extraction results for future reference. Finally, our experimental results suggest that a joint approach is on par with task-specific learning, though more efficient due to shared parameters and training steps.
Document-level relation extraction faces two overlooked challenges: the long-tail problem and the multi-label problem. Previous work focuses mainly on obtaining better contextual representations for entity pairs, hardly addressing the above challenges. In this paper, we analyze the co-occurrence correlation of relations and introduce it into the DocRE task for the first time. We argue that the correlations can not only transfer knowledge between data-rich relations and data-scarce ones to assist in the training of tailed relations, but also reflect semantic distance, guiding the classifier to identify semantically close relations for multi-label entity pairs. Specifically, we use relation embeddings as a medium and propose two co-occurrence prediction sub-tasks from both coarse- and fine-grained perspectives to capture relation correlations. Finally, the learned correlation-aware embeddings are used to guide the extraction of relational facts. Substantial experiments on two popular DocRE datasets are conducted, and our method achieves superior results compared to baselines. Insightful analysis also demonstrates the potential of relation correlations to address the above challenges.
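The coarse-grained relation co-occurrence statistics described above can be gathered by counting label pairs over the label sets of multi-label entity pairs. This is a toy illustration of the statistic, not the paper's sub-task formulation; the relation names are hypothetical:

```python
from collections import Counter
from itertools import combinations

def relation_cooccurrence(pair_label_sets):
    """Count how often two relation labels co-occur on the same entity pair."""
    counts = Counter()
    for labels in pair_label_sets:
        for a, b in combinations(sorted(labels), 2):  # sorted: one canonical order per pair
            counts[(a, b)] += 1
    return counts

# toy multi-label entity pairs, each annotated with a set of relations
labels = [
    {"capital_of", "located_in"},
    {"located_in", "part_of"},
    {"capital_of", "located_in"},
]
cooc = relation_cooccurrence(labels)
```

Frequently co-occurring label pairs such as `("capital_of", "located_in")` are exactly the semantically close relations that such correlation-aware embeddings would be expected to pull together.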
The ability to capture complex linguistic structures and long-term dependencies among words in a passage is essential for discourse-level relation extraction (DRE) tasks. Graph neural networks (GNNs), one approach to encoding dependency graphs, have been shown to be effective in prior RE work. However, relatively little attention has been paid to the receptive fields of GNNs, which can be crucial for cases requiring discourse understanding over very long texts. In this work, we leverage the idea of graph pooling and propose a pooling-unpooling framework for the DRE task. The pooling branch reduces the graph size, enabling the GNN to obtain a larger receptive field within fewer layers; the unpooling branch restores the pooled graph to its original resolution so that representations of entity mentions can be extracted. We propose Clause Matching (CM), a novel linguistically inspired graph pooling method for NLP tasks. Experiments on two DRE datasets show that our model significantly improves over baselines when modeling long-term dependencies is required, demonstrating the effectiveness of the pooling-unpooling framework and our CM pooling method.
Pre-trained models that enhance language representations with knowledge have proven more effective than language models such as BERT in knowledge base construction tasks (i.e., relation extraction). These knowledge-enhanced language models incorporate knowledge into pre-training to generate representations of entities or relations. However, existing methods typically represent each entity with a separate embedding. As a result, they struggle to represent out-of-vocabulary entities, a large number of parameters must be used on top of the underlying token model (i.e., the Transformer), and the number of entities that can be handled is limited in practice due to memory constraints. Moreover, existing models still have difficulty representing entities and relations simultaneously. To address these problems, we propose a new pre-trained model that learns representations of entities and relations from spans and span pairs in text, respectively. By encoding spans efficiently with a span module, our model can represent both entities and their relations while requiring fewer parameters than existing models. We pre-trained our model with a knowledge graph extracted from Wikipedia and tested it on a broad range of supervised and unsupervised information extraction tasks. The results show that our model learns better representations of entities and relations than the baselines, while in supervised settings, fine-tuning our model consistently outperforms RoBERTa and achieves competitive results on information extraction tasks.
Document-level relation extraction aims to identify the relations between entities across a whole document. Efforts to capture long-range dependencies have relied heavily on implicitly powerful representations learned through (graph) neural networks, which make the model less transparent. To address this challenge, in this paper we propose LogiRE, a novel probabilistic model for document-level relation extraction that learns logic rules. LogiRE treats logic rules as latent variables and consists of two modules: a rule generator and a relation extractor. The rule generator generates logic rules potentially contributing to the final prediction, and the relation extractor outputs the final prediction based on the generated logic rules. The two modules can be efficiently optimized with the expectation-maximization (EM) algorithm. By introducing logic rules into neural networks, LogiRE can explicitly capture long-range dependencies and enjoys better interpretability. Empirical results show that LogiRE significantly outperforms several strong baselines in terms of relation performance (1.8 F1 score) and logical consistency (over 3.3 logic score). Our code is available at https://github.com/rudongyu/logire.
Machine reading comprehension (MRC) is a long-standing topic in natural language processing (NLP). The MRC task aims to answer a question based on a given context. Recent studies focus on multi-hop MRC, a more challenging extension of MRC in which answering a question requires combining disjoint pieces of information from across the context. Due to the complexity and importance of multi-hop MRC, a large number of studies have addressed this topic in recent years, so it is necessary and worthwhile to review the related literature. This study investigates recent advances in multi-hop MRC approaches based on 31 studies from 2018 to 2022. First, the multi-hop MRC problem definition is introduced; then the 31 models are reviewed in detail, with a strong focus on their multi-hop aspects, and categorized according to their main techniques. Finally, a fine-grained, comprehensive comparison of the models and techniques is presented.
Incorporating prior knowledge into pre-trained language models has proven effective for knowledge-driven NLP tasks such as entity typing and relation extraction. Current pre-training procedures usually inject external knowledge into models through knowledge masking, knowledge fusion, and knowledge replacement. However, the factual information contained in the input sentences has not been fully mined, and the injected external knowledge has not been rigorously verified. As a result, the contextual information cannot be fully exploited and extra noise will be introduced, or the amount of injected knowledge is limited. To address these issues, we propose MLRIP, which modifies the knowledge masking strategies proposed by ERNIE-Baidu and introduces a two-stage entity replacement strategy. Extensive experiments with comprehensive analyses illustrate the superiority of MLRIP over BERT-based models on military knowledge-driven NLP tasks.
Relation extraction (RE) is a fundamental task in natural language processing. RE seeks to transform raw, unstructured text into structured knowledge by identifying relational information between entity pairs in text. RE has numerous uses, such as knowledge graph completion, text summarization, question answering, and search queries. The history of RE methods can be divided into four phases: pattern-based RE, statistics-based RE, neural-based RE, and RE with large language models. This survey begins with an overview of a few exemplary works from the early phases of RE, highlighting their limitations and shortcomings to contextualize progress. Next, we review popular benchmarks and critically examine the metrics used to evaluate RE performance. We then discuss distant supervision, a paradigm that has shaped the development of modern RE methods. Finally, we review recent work focusing on denoising and training methods.
In recent years, there has been a surge of generation-based information extraction work, which allows more direct use of pre-trained language models and efficiently captures output dependencies. However, previous generative methods using lexical representation do not naturally fit document-level relation extraction (DocRE), where there are multiple entities and relational facts. In this paper, we investigate the root cause of the underwhelming performance of the existing generative DocRE models and discover that the culprit is the inadequacy of the training paradigm, rather than the capacities of the models. We propose to generate a symbolic and ordered sequence from the relation matrix, which is deterministic and easier for the model to learn. Moreover, we design a parallel row generation method to process overlong target sequences. In addition, we introduce several negative sampling strategies to improve performance with balanced signals. Experimental results on four datasets show that our proposed method improves the performance of generative DocRE models. We have released our code at https://github.com/ayyyq/DORE.
The advent of modeling contextual information using models such as BERT, ELMo, and Flair has significantly improved representation learning for words. It has also given SOTA results in almost every NLP task - machine translation, text summarization, and named entity recognition, to name a few. In this work, in addition to using these dominant context-aware representations, we propose a Knowledge Aware Representation Learning (KARL) network for Named Entity Recognition (NER). We discuss the challenges of leveraging existing methods to incorporate world knowledge and show how our proposed methods can overcome them. KARL is based on a Transformer encoder that utilizes a large knowledge base represented as fact triplets, converts them into a graph context, and extracts the essential entity information residing inside to generate contextualized triplet representations for feature augmentation. Experimental results show that augmentation with KARL can considerably boost the performance of our NER system and achieve significantly better results than existing approaches in the literature on three publicly available NER datasets, namely CoNLL 2003, CoNLL++, and OntoNotes v5. We also observe better generalization and application to real-world settings on entities unseen by KARL.
In document-level event extraction (DEE) tasks, event arguments are always scattered across sentences (the across-sentence issue), and multiple events may lie in one document (the multi-event issue). In this paper, we argue that the relation information of event arguments is of great significance for addressing the above two issues, and we propose a new DEE framework that can model relation dependencies, called Relation-augmented Document-level Event Extraction (ReDEE). More specifically, this framework features a novel, tailored transformer named the Relation-Augmented Attention Transformer (RAAT). RAAT is scalable to capture multi-scale and multi-amount argument relations. To further leverage relation information, we introduce a separate event relation prediction task and adopt a multi-task learning method to explicitly enhance event extraction performance. Extensive experiments demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on two public datasets. Our code is available at https://github.com/tencentyouturesearch/raat.
Document-level relation extraction (RE) aims to identify the relations between entities throughout an entire document. It requires complex reasoning skills to synthesize various knowledge such as coreference and commonsense. Large-scale knowledge graphs (KGs) contain a wealth of real-world facts and can provide valuable knowledge for document-level RE. In this paper, we propose an entity knowledge injection framework to enhance current document-level RE models. Specifically, we introduce coreference distillation to inject coreference knowledge, endowing the model with more general coreference reasoning ability. We also employ representation reconciliation to inject factual knowledge and aggregate KG representations and document representations into a unified space. Experiments on two benchmark datasets validate the generalization of our entity knowledge injection framework and the consistent improvements it brings to multiple document-level RE models.
Neural language representation models such as BERT pre-trained on large-scale corpora can well capture rich semantic patterns from plain text, and be fine-tuned to consistently improve the performance of various NLP tasks. However, the existing pre-trained language models rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better language understanding. We argue that informative entities in KGs can enhance language representation with external knowledge. In this paper, we utilize both large-scale textual corpora and KGs to train an enhanced language representation model (ERNIE), which can take full advantage of lexical, syntactic, and knowledge information simultaneously. The experimental results have demonstrated that ERNIE achieves significant improvements on various knowledge-driven tasks, and meanwhile is comparable with the state-of-the-art model BERT on other common NLP tasks. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/ERNIE.
Knowledge graph (KG) link prediction aims to infer new facts based on existing facts in the KG. Recent studies have shown that using the graph neighborhood of a node via graph neural networks (GNNs) provides more useful information compared to just using the query information. Conventional GNNs for KG link prediction follow the standard message-passing paradigm on the entire KG, which leads to over-smoothing of representations and also limits their scalability. On a large scale, it becomes computationally expensive to aggregate useful information from the entire KG for inference. To address the limitations of existing KG link prediction frameworks, we propose a novel retrieve-and-read framework, which first retrieves a relevant subgraph context for the query and then jointly reasons over the context and the query with a high-capacity reader. As part of our exemplar instantiation for the new framework, we propose a novel Transformer-based GNN as the reader, which incorporates graph-based attention structure and cross-attention between query and context for deep fusion. This design enables the model to focus on salient context information relevant to the query. Empirical results on two standard KG link prediction datasets demonstrate the competitive performance of the proposed method.
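The retrieve step of the retrieve-and-read framework above can be approximated by extracting a k-hop neighborhood around the query entity. This is a plain BFS sketch under an assumed adjacency-list input; the paper's actual retriever is more sophisticated, and all names here are illustrative:

```python
from collections import deque

def retrieve_khop_subgraph(adj, seed, hops=2):
    """Collect all nodes within `hops` edges of `seed` via breadth-first search."""
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:          # do not expand beyond the hop limit
            continue
        for nbr in adj.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen

# toy KG as an adjacency list; "q" is the query entity
adj = {"q": ["a", "b"], "a": ["c"], "c": ["d"]}
context = retrieve_khop_subgraph(adj, "q", hops=2)
```

The reader model would then reason jointly over this small retrieved context and the query, instead of message-passing over the entire KG.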
The development of deep neural networks has improved representation learning in various domains, including textual, graph structural, and relational triple representations. This development opened the door to new relation extraction beyond the traditional text-oriented relation extraction. However, research on the effectiveness of considering multiple heterogeneous domain information simultaneously is still under exploration, and if a model can take advantage of integrating heterogeneous information, it is expected to make a significant contribution to many real-world problems. This thesis works on Drug-Drug Interactions (DDIs) from the literature as a case study and realizes relation extraction utilizing heterogeneous domain information. First, a deep neural relation extraction model is prepared and its attention mechanism is analyzed. Next, a method to combine the drug molecular structure information and drug description information with the input sentence information is proposed, and the effectiveness of utilizing drug molecular structures and drug descriptions for the relation extraction task is shown. Then, in order to further exploit the heterogeneous information, drug-related items, such as protein entries, medical terms, and pathways, are collected from multiple existing databases and a new data set in the form of a knowledge graph (KG) is constructed. A link prediction task on the constructed data set is conducted to obtain embedding representations of drugs that contain the heterogeneous domain information. Finally, a method that integrates the input sentence information and the heterogeneous KG information is proposed. The proposed model is trained and evaluated on a widely used data set, and as a result, it is shown that utilizing heterogeneous domain information significantly improves the performance of relation extraction from the literature.
Relation extraction is a fundamental problem in natural language processing. Most existing models are designed for relation extraction in the general domain, but their performance on specific domains (e.g., biomedicine) remains unclear. To fill this gap, this paper conducts an empirical study of relation extraction on biomedical research articles. Specifically, we consider both sentence-level and document-level relation extraction and run several state-of-the-art methods on multiple benchmark datasets. Our results show that (1) current document-level relation extraction methods have strong generalization ability, and (2) existing methods require a large amount of labeled data for model fine-tuning in biomedicine. These observations may inspire people in this field to develop more effective models for biomedical relation extraction.
Multi-hop machine reading comprehension is a challenging task in natural language processing, requiring greater reasoning ability across multiple documents. Spectral models based on graph convolutional networks grant inferring abilities and lead to competitive results. However, some of them still face the challenge of presenting their reasoning in a human-understandable way. Inspired by the concept of grandmother cells in cognitive neuroscience, a spatial graph attention framework named ClueReader is proposed in this paper, imitating that procedure. The model is designed to assemble semantic features into multi-level representations and automatically concentrate or attenuate information for reasoning via the attention mechanism. The name ClueReader is a metaphor for the pattern of the model: the subjects of queries are regarded as the start points of clues, the reasoning entities as bridge points, and the latent candidate entities as the grandmother cells, with the clues ending in the candidate entities. The proposed model allows us to visualize the reasoning graph and analyze the importance of the edges connecting two entities as well as the selectivity of the mention and candidate nodes, which is easier to comprehend empirically. Official evaluations on the open-domain multi-hop reading dataset WikiHop and the drug-drug interaction dataset MedHop prove the validity of our approach and suggest the model's applicability in the molecular biology domain.