Extracting spatial relations from text is a fundamental task in natural language understanding, while previous studies treat it only as a classification task and, due to the lack of information, ignore those spatial relations with null roles. To address this issue, we first cast spatial relation extraction as a generation task and propose a novel hybrid model, HMCGR, for it. HMCGR contains a generation model and a classification model, where the former can generate those null-role relations and the latter can extract those non-null-role relations, so that the two complement each other. Moreover, a reflexivity evaluation mechanism is applied to further improve accuracy based on the reflexivity principle of spatial relations. Experimental results on SpaceEval show that HMCGR significantly outperforms the SOTA baselines.
translated by Google Translate
As an important fine-grained sentiment analysis problem, aspect-based sentiment analysis (ABSA), aiming to analyze and understand people's opinions at the aspect level, has been attracting considerable interest in the last decade. To handle ABSA in different scenarios, various tasks are introduced for analyzing different sentiment elements and their relations, including the aspect term, aspect category, opinion term, and sentiment polarity. Unlike early ABSA works focusing on a single sentiment element, many compound ABSA tasks involving multiple elements have been studied in recent years for capturing more complete aspect-level sentiment information. However, a systematic review of various ABSA tasks and their corresponding solutions is still lacking, which we aim to fill in this survey. More specifically, we provide a new taxonomy for ABSA which organizes existing studies along the axes of concerned sentiment elements, with an emphasis on recent advances in compound ABSA tasks. From the perspective of solutions, we summarize the utilization of pre-trained language models for ABSA, which has improved the performance of ABSA to a new stage. Besides, techniques for building more practical ABSA systems in cross-domain/lingual scenarios are discussed. Finally, we review some emerging topics and discuss some open challenges, and outline potential future directions of ABSA.
Event extraction (EE), which acquires structured event knowledge from text, can be divided into two subtasks: event type classification and element extraction (i.e., identifying triggers and arguments under different role patterns). As different event types always own distinct extraction patterns (i.e., role patterns), previous work on EE usually follows an isolated learning paradigm, performing element extraction independently for different event types. This ignores the meaningful associations among event types and argument roles, leading to relatively poor performance for infrequent types/roles. This paper proposes a novel neural association framework for the EE task. Given a document, it first performs type classification via constructing a document-level graph to associate sentence nodes of different types, and adopts a graph attention network to learn sentence embeddings. Then, element extraction is achieved by building a universal argument-role schema, with an argument inheritance mechanism to enhance the role preference of extracted elements. As such, our model takes type and role associations into account during EE, enabling implicit information sharing among them. Experimental results show that our approach consistently outperforms most state-of-the-art EE methods on both subtasks. Particularly, for types/roles with less training data, the performance is superior to existing methods.
Event extraction (EE) is an essential task of information extraction, which aims to extract structured event information from unstructured text. Most prior work focuses on extracting flat events while neglecting overlapped or nested ones. Models for overlapped and nested EE include several successive stages to extract event triggers and arguments, which suffer from error propagation. Therefore, we design a simple yet effective tagging scheme and model to formulate EE as word-word relation recognition, called OneEE. The relations between trigger and argument words are simultaneously recognized in one stage with parallel grid tagging, yielding a very fast event extraction speed. The model is equipped with an adaptive event fusion module to generate event-aware representations and a distance-aware predictor to integrate relative distance information for word-word relation recognition, which are empirically demonstrated to be effective mechanisms. Experiments on three overlapped and nested EE benchmarks, namely FewFC, Genia11, and Genia13, show that OneEE achieves state-of-the-art (SOTA) results. Moreover, the inference speed of OneEE is faster than those of the baselines under the same conditions, and it can be further improved since it supports parallel inference.
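The one-stage word-relation view behind OneEE can be illustrated with a toy grid decoder (a minimal sketch of the general idea, not the authors' code; the tag set and the `decode_grid` helper are made up for illustration):

```python
# Hypothetical tag set for word-pair cells; the paper's scheme is richer.
NONE, TRIG_SPAN, ARG_SPAN, TRIG_ARG = 0, 1, 2, 3

def decode_grid(words, grid):
    """Decode a word-pair tag grid into (trigger, argument) links.

    grid[i][j] holds one relation tag for the word pair (i, j); all
    pairs are classified in parallel, so decoding is a single pass
    rather than a pipeline of stages.
    """
    links = []
    n = len(words)
    for i in range(n):
        for j in range(n):
            if grid[i][j] == TRIG_ARG:
                links.append((words[i], words[j]))
    return links

words = ["fired", "the", "CEO"]
grid = [[NONE] * 3 for _ in range(3)]
grid[0][2] = TRIG_ARG            # "fired" (trigger) -> "CEO" (argument)
print(decode_grid(words, grid))  # [('fired', 'CEO')]
```

Because every cell is classified independently, this decoding trivially parallelizes, which is the source of the speed advantage described above.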
Causality extraction for biomedical entities is one of the most complex tasks in biomedical text mining, which involves two kinds of information: entity relations and entity functions. One feasible approach is to take relation extraction and function detection as two independent subtasks. However, this separate learning ignores the intrinsic correlation between them and leads to unsatisfactory performance. In this paper, we propose a joint learning model that combines entity relation extraction and entity function detection to exploit their commonality and capture their inter-relationship, so as to improve the performance of biomedical causality extraction. Meanwhile, during model training, different function types are assigned different weights in the loss function. Specifically, the penalty coefficient for negative function instances is increased to effectively improve the precision of function detection. Experimental results on the BioCreative-V Track 4 corpus show that our joint learning model outperforms the separate models in BEL statement extraction, achieving F1 scores of 58.4% and 37.3% on the test set in the stage-2 and stage-1 evaluations, respectively. This demonstrates that our joint learning system reaches state-of-the-art performance in stage 2 compared with other systems.
Aspect Sentiment Triplet Extraction (ASTE) has become an emerging task in sentiment analysis research, aiming to extract triplets of the aspect term, its corresponding opinion term, and its associated sentiment polarity from a given sentence. Recently, many neural-network-based models with different tagging schemes have been proposed, but almost all of them have limitations: they rely heavily on 1) the prior assumption that each word is associated with only a single role (e.g., aspect term or opinion term) and 2) word-level interactions, treating each opinion/aspect as a set of independent words. Hence, they perform poorly on complex ASTE cases, such as a word associated with multiple roles or an aspect/opinion term with multiple words. We therefore propose a novel approach, Span TAgging and Greedy infErence (STAGE), to extract sentiment triplets at the span level, where each span may consist of multiple words and play different roles simultaneously. To this end, this paper formulates the ASTE task as a multi-class span classification problem. Specifically, STAGE generates more accurate aspect sentiment triplet extractions by exploring span-level information and constraints, and consists of two components, namely a span tagging scheme and a greedy inference strategy. The former tags all possible candidate spans based on a newly defined tagging set. The latter retrieves the aspect/opinion term with the maximum length from the candidate sentiment snippet to output sentiment triplets. Furthermore, we propose a simple but effective model based on STAGE, which outperforms the state-of-the-art models by a large margin on four widely used datasets. Moreover, STAGE can be easily generalized to other pair/triplet extraction tasks, which also demonstrates the superiority of the proposed scheme.
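The span tagging and greedy inference components can be sketched roughly as follows (the `enumerate_spans` and `greedy_longest` helpers are hypothetical illustrations; the paper's actual tagging set and span scoring are richer):

```python
def enumerate_spans(tokens, max_len=3):
    """All candidate spans (start, end inclusive) of up to max_len words.
    In STAGE-style models, every such span would receive a tag from the
    tagging set; here we only enumerate them."""
    return [(i, j) for i in range(len(tokens))
            for j in range(i, min(i + max_len, len(tokens)))]

def greedy_longest(candidates):
    """Greedy inference: among overlapping candidate spans, keep the one
    with maximum length, mirroring the 'maximum length' retrieval rule
    described in the abstract."""
    chosen = []
    for span in sorted(candidates, key=lambda s: s[1] - s[0], reverse=True):
        if all(span[1] < c[0] or span[0] > c[1] for c in chosen):
            chosen.append(span)
    return sorted(chosen)

tokens = ["the", "battery", "life", "is", "great"]
print(len(enumerate_spans(tokens)))              # 12 candidate spans
print(greedy_longest([(1, 2), (1, 1), (4, 4)]))  # [(1, 2), (4, 4)]
```

The greedy step prefers "battery life" (span (1, 2)) over the shorter "battery" (span (1, 1)), which is exactly the behavior multi-word aspect terms require.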
In document-level event extraction (DEE) tasks, event arguments are always scattered across sentences (the serial problem), and multiple events may lie in one document (the multi-event problem). In this paper, we argue that the relation information of event arguments is of great significance for addressing the above two problems, and propose a new DEE framework that can model relation dependencies, called Relation-augmented Document-level Event Extraction (ReDEE). More specifically, this framework features a novel, tailored transformer named Relation-augmented Attention Transformer (RAAT). RAAT is scalable to capture multi-scale and multi-amount argument relations. To further leverage relation information, we introduce a separate event-relation prediction task and adopt multi-task learning to explicitly enhance event extraction performance. Extensive experiments demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on two public datasets. Our code is available at https://github.com/tencentyouturesearch/raat.
Aspect Sentiment Triplet Extraction (ASTE) aims to extract the spans of aspects, opinions, and their sentiment relations as sentiment triplets. Existing works usually formulate span detection as a 1D token tagging problem and model sentiment recognition with a 2D tagging matrix of token pairs. Moreover, by leveraging the representations of pretrained language encoders (PLEs) such as BERT, they can achieve better performance. However, they simply use PLEs as feature extractors to build their modules, and never take a deep look at what specific knowledge the PLEs contain. In this paper, we argue that instead of further designing modules to capture the inductive bias of ASTE, PLEs themselves contain "enough" features for 1D and 2D tagging: (1) token representations contain the contextual meaning of the token itself, so this level of features carries the necessary information for 1D tagging; (2) the attention matrices of different PLE layers can further capture the multi-level linguistic knowledge existing in token pairs, which benefits 2D tagging; (3) furthermore, with simple transformations, these two kinds of features can also be easily converted to the 2D tagging matrix and the 1D tagging sequence, respectively, which further improves the tagging results. In this way, PLEs can serve as a natural tagging framework and achieve a new state of the art, which is verified by extensive experiments and in-depth analyses.
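Point (2) — reading the attention matrices of different PLE layers as token-pair features for 2D tagging — can be sketched as a simple tensor stacking (an illustrative assumption about the feature layout, not the paper's full transformation):

```python
import numpy as np

def pair_features(attentions):
    """Stack per-layer attention maps into a token-pair feature tensor.

    attentions: list of L arrays of shape (n, n), one per PLE layer.
    Returns an (n, n, L) tensor, so each token pair (i, j) gets an
    L-dimensional feature vector that a 2D tagging head can classify.
    """
    return np.stack(attentions, axis=-1)

n_tokens, n_layers = 4, 3
maps = [np.random.rand(n_tokens, n_tokens) for _ in range(n_layers)]
print(pair_features(maps).shape)  # (4, 4, 3)
```

In a real model the per-head maps would be kept separate as well, giving each pair a layers-times-heads feature vector, but the stacking idea is the same.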
Incorporating prior knowledge into pre-trained language models has proven effective for knowledge-driven NLP tasks, such as entity typing and relation extraction. Current pre-training procedures usually inject external knowledge into models by using knowledge masking, knowledge fusion, and knowledge replacement. However, the factual information contained in the input sentences has not been fully mined, and the injected external knowledge has not been rigorously verified. As a result, the contextual information cannot be fully exploited and extra noise will be introduced, or the amount of injected knowledge is limited. To address these issues, we propose MLRIP, which modifies the knowledge masking strategy proposed by ERNIE-Baidu and introduces a two-stage entity replacement strategy. Extensive experiments with comprehensive analyses illustrate the superiority of MLRIP over BERT-based models on military knowledge-driven NLP tasks.
Zero-shot relation triplet extraction (ZeroRTE) aims to extract relation triplets from unstructured texts under the zero-shot setting, where the relation sets at the training and testing stages are disjoint. The previous state-of-the-art method handles this challenging task by leveraging pretrained language models to generate data as additional training samples, which increases the training cost and severely constrains the model performance. To address these issues, we propose a novel method named PCRED for ZeroRTE with Potential Candidate Relation Selection and Entity Boundary Detection. The remarkable characteristic of PCRED is that it does not rely on additional data and still achieves promising performance. The model adopts a relation-first paradigm, recognizing unseen relations through candidate relation selection. With this approach, the semantics of relations are naturally infused into the context. Entities are subsequently extracted based on the context and the semantics of relations. We evaluate our model on two ZeroRTE datasets. The experimental results show that our method consistently outperforms previous works. Our code will be available at https://anonymous.4open.science/r/PCRED.
Neural language representation models such as BERT pre-trained on large-scale corpora can well capture rich semantic patterns from plain text, and be fine-tuned to consistently improve the performance of various NLP tasks. However, the existing pre-trained language models rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better language understanding. We argue that informative entities in KGs can enhance language representation with external knowledge. In this paper, we utilize both large-scale textual corpora and KGs to train an enhanced language representation model (ERNIE), which can take full advantage of lexical, syntactic, and knowledge information simultaneously. The experimental results have demonstrated that ERNIE achieves significant improvements on various knowledge-driven tasks, and meanwhile is comparable with the state-of-the-art model BERT on other common NLP tasks. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/ERNIE.
The advent of contextual-information modeling with models such as BERT, ELMo, and Flair has significantly improved representation learning for text. It has also given SOTA results on almost every NLP task: machine translation, text summarization, and named entity recognition, to name a few. In this work, in addition to using these dominant context-aware representations, we propose a Knowledge-Aware Representation Learning (KARL) network for named entity recognition (NER). We discuss the challenges of leveraging existing methods to incorporate world knowledge and show how our proposed method can be used to overcome them. KARL is based on a transformer encoder that utilizes a large knowledge base represented as fact triplets, converts them into a graph context, and extracts the essential entity information residing inside to generate contextualized triplet representations for feature augmentation. Experimental results show that the augmentation with KARL can considerably boost the performance of our NER system and achieve significantly better results than existing approaches in the literature on three public NER datasets, namely CoNLL 2003, CoNLL++, and OntoNotes v5. We also observe better generalization and application to real-world settings on unseen entities with KARL.
Information Extraction (IE) aims to extract structured information from heterogeneous sources. IE from natural language texts includes sub-tasks such as Named Entity Recognition (NER), Relation Extraction (RE), and Event Extraction (EE). Most IE systems require a comprehensive understanding of sentence structure, implied semantics, and domain knowledge to perform well; thus, IE tasks always need adequate external resources and annotations. However, it takes time and effort to obtain more human annotations. Low-Resource Information Extraction (LRIE) strives to use unsupervised data, reducing the required resources and human annotation. In practice, existing systems either utilize self-training schemes to generate pseudo labels, which causes a gradual drift problem, or leverage consistency regularization methods, which inevitably suffer from confirmation bias. To alleviate confirmation bias due to the lack of feedback loops in existing LRIE learning paradigms, we develop a Gradient Imitation Reinforcement Learning (GIRL) method to encourage pseudo-labeled data to imitate the gradient descent direction on labeled data, which can force pseudo-labeled data to achieve optimization capabilities similar to those of labeled data. Based on how well the pseudo-labeled data imitates the instructive gradient descent direction obtained from labeled data, we design a reward to quantify the imitation process and bootstrap the optimization capability of pseudo-labeled data through trial and error. Beyond learning paradigms, GIRL is not limited to specific sub-tasks, and we leverage GIRL to solve all IE sub-tasks (named entity recognition, relation extraction, and event extraction) in low-resource settings (semi-supervised IE and few-shot IE).
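The gradient-imitation idea — rewarding pseudo-labeled batches whose gradient direction matches the instructive direction computed on labeled data — can be sketched with a cosine-similarity reward (`imitation_reward` is a hypothetical helper; the paper's exact reward formulation may differ):

```python
import numpy as np

def imitation_reward(grad_pseudo, grad_labeled):
    """Cosine similarity between the gradient from a pseudo-labeled
    batch and the gradient from labeled data: 1.0 means the pseudo
    labels pull the model in exactly the instructive direction,
    0.0 means they are uninformative (orthogonal)."""
    gp = np.asarray(grad_pseudo, dtype=float)
    gl = np.asarray(grad_labeled, dtype=float)
    denom = np.linalg.norm(gp) * np.linalg.norm(gl)
    return float(gp @ gl / denom) if denom else 0.0

print(imitation_reward([1.0, 0.0], [1.0, 0.0]))  # 1.0 (perfect imitation)
print(imitation_reward([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```

In a reinforcement-learning loop, this scalar would score each pseudo-labeled batch so that high-reward batches contribute more to the update, closing the feedback loop the abstract describes.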
Frame semantic parsing is a fundamental NLP task, which consists of three subtasks: frame identification, argument identification, and role classification. Most previous studies tend to neglect the relations between different subtasks and arguments, and pay little attention to the ontological frame knowledge defined in FrameNet. In this paper, we propose a Knowledge-guided Incremental semantic parser with Double-graph (KID). We first introduce the Frame Knowledge Graph (FKG), a heterogeneous graph of frames and FEs (frame elements) built on the frame knowledge, so that we can derive knowledge-enhanced representations for frames and FEs. Besides, we propose the Frame Semantic Graph (FSG) to represent the frame semantic structures extracted from the text with a graph structure. In this way, we can turn frame semantic parsing into an incremental graph-construction problem to strengthen the interactions between subtasks and the relations between arguments. Our experiments show that KID outperforms the previous state-of-the-art method by 1.7 F1 score on two FrameNet datasets. Our code is available at https://github.com/pkunlp-icler/kid.
Prior works on Information Extraction (IE) typically predict different tasks and instances (e.g., event triggers, entities, roles, relations) independently, while neglecting their interactions and leading to model inefficiency. In this work, we introduce a joint IE framework, HighIE, that learns and predicts multiple IE tasks by integrating high-order cross-task and cross-instance dependencies. Specifically, we design two categories of high-order factors: homogeneous factors and heterogeneous factors. Then, these factors are utilized to jointly predict labels of all instances. To address the intractability problem of exact high-order inference, we incorporate a high-order neural decoder that is unfolded from a mean-field variational inference method. The experimental results show that our approach achieves consistent improvements on three IE tasks compared with our baseline and prior work.
Open Information Extraction (OpenIE) aims to extract relational tuples from open-domain sentences. Traditional rule-based or statistical models have been developed based on syntactic structures of sentences, identified by syntactic parsers. However, previous neural OpenIE models under-explore the useful syntactic information. In this paper, we model both constituency and dependency trees into word-level graphs, and enable neural OpenIE to learn from the syntactic structures. To better fuse heterogeneous information from both graphs, we adopt multi-view learning to capture multiple relationships from them. Finally, the finetuned constituency and dependency representations are aggregated with sentential semantic representations for tuple generation. Experiments show that both constituency and dependency information, and the multi-view learning are effective.
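Modeling a dependency tree as a word-level graph, as described above, can be sketched as a minimal adjacency construction (`dependency_graph` is an illustrative helper, not the authors' code; a real system would also attach dependency labels and constituency edges):

```python
def dependency_graph(heads):
    """Turn a dependency parse (heads[i] = head index of word i,
    -1 for the root) into an undirected word-level adjacency map,
    the form in which syntactic structure can be fed to a graph
    encoder for OpenIE."""
    n = len(heads)
    adj = {i: set() for i in range(n)}
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i].add(h)
            adj[h].add(i)
    return adj

# "cats chase mice": "chase" is the root; both nouns attach to it.
print(dependency_graph([1, -1, 1]))  # {0: {1}, 1: {0, 2}, 2: {1}}
```

The constituency tree can be converted into a second word-level graph in the same spirit, and multi-view learning then fuses the two adjacency structures.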
With the rapid development of information technology, online platforms have produced enormous text resources. As a specific form of Information Extraction (IE), Event Extraction (EE) has gained increasing popularity because of its ability to automatically extract events from human language. However, there is a limited literature survey on event extraction. Existing review works either spend much effort describing the details of various approaches or focus on a particular field. This study provides a comprehensive overview of state-of-the-art event extraction methods and their applications from text, covering both closed-domain and open-domain event extraction. A characteristic of this survey is that it provides an overview of moderate complexity, avoiding too many details of particular approaches. This study focuses on discussing the common characters, application fields, advantages, and disadvantages of representative works, ignoring the specificities of individual approaches. Finally, we summarize the common issues, current solutions, and future research directions. We hope this work can help researchers and practitioners obtain a quick overview of recent event extraction.
State-of-the-art neural methods for Open Information Extraction (OpenIE) usually extract triplets (or tuples) iteratively in an autoregressive or predicate-based manner so as not to produce duplicates. In this work, we propose a different approach to this problem that can be equally or even more successful. Namely, we present a novel single-pass method for OpenIE inspired by object detection algorithms from computer vision. We use an order-agnostic loss based on bipartite matching that forces unique predictions, and a transformer-based encoder-only architecture for sequence labeling. The proposed approach is faster and shows superior or comparable performance in terms of both quality metrics and inference time compared with state-of-the-art models on standard benchmarks. Our model sets a new state-of-the-art performance on CARB under the OIE2016 evaluation, with faster inference than the previous state of the art. We also evaluate a multilingual version of our model in a zero-shot setting for two languages and introduce a strategy for generating synthetic multilingual data to fine-tune the model for each particular language. In this setting, we show a 15% performance improvement on multilingual Re-OIE2016, reaching an F1 of 75% for both Portuguese and Spanish. Code and models are available at https://github.com/sberbank-ai/detie.
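The order-agnostic loss based on bipartite matching can be sketched as follows (brute force over permutations for clarity; practical implementations use the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`, and the cost entries would come from the model's prediction-vs-gold scores):

```python
from itertools import permutations

def order_agnostic_loss(cost):
    """Minimum total cost over all prediction-to-gold assignments,
    so the loss is invariant to the order in which the model emits
    its tuples. cost[i][j] = cost of matching prediction i to gold
    tuple j."""
    n = len(cost)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# Two predictions, two gold tuples; the cheaper assignment is chosen.
cost = [[0.1, 0.9],
        [0.8, 0.2]]
print(round(order_agnostic_loss(cost), 2))  # 0.3
```

Because each gold tuple is consumed by exactly one prediction, the matching itself discourages duplicate extractions, which is why no iterative decoding is needed.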
As an essential component of human cognition, cause-effect relations appear frequently in text, and curating causal relations from text helps build causal networks for predictive tasks. Existing causality extraction techniques include knowledge-based, statistical machine learning (ML)-based, and deep learning-based approaches. Each method has its advantages and weaknesses. For example, knowledge-based methods are understandable but require extensive manual domain knowledge and have poor cross-domain applicability. Statistical machine learning methods are more automated because of natural language processing (NLP) toolkits. However, feature engineering is labor-intensive, and the toolkits may lead to error propagation. In the past few years, deep learning techniques have attracted substantial attention from NLP researchers because of their powerful representation learning ability and the rapid increase in computational resources. Their limitations include high computational costs and a lack of adequate annotated training data. In this paper, we conduct a comprehensive survey of causality extraction. We initially introduce the main forms existing in causality extraction: explicit intra-sentential causality, implicit causality, and inter-sentential causality. Next, we list the benchmark datasets and modeling assessment methods for causality extraction. Then, we present a structured overview of the three techniques with their representative systems. Finally, we highlight existing open challenges and potential directions.
Triplet extraction aims to extract entities and their corresponding relations in unstructured text. Most existing methods train an extraction model on high-quality training data, and hence are incapable of extracting relations that were not observed during training. Generalizing the model to unseen relations typically requires fine-tuning on synthetic training data, which is often noisy and unreliable. In this paper, we argue that reducing triplet extraction to a template filling task over a pre-trained language model can equip the model with zero-shot learning capabilities and enable it to leverage the implicit knowledge in the language model. Embodying these ideas, we propose a novel framework, ZETT (ZEro-shot Triplet extraction by Template infilling), that is based on end-to-end generative transformers. Our experiments show that without any data augmentation or pipeline systems, ZETT can outperform previous state-of-the-art models with 25% fewer parameters. We further show that ZETT is more robust in detecting entities and can be incorporated with automatically generated templates for relations.
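The template-infilling readout — recovering a triplet's entities from a relation template filled in by the generative model — can be sketched as follows (the template format and the `parse_filled_template` helper are made-up illustrations, not necessarily ZETT's exact format):

```python
import re

def parse_filled_template(template, filled):
    """Recover (head, tail) from a relation template that a generative
    model has filled in. The template marks entity slots with <head>
    and <tail>; everything else must match literally."""
    pattern = re.escape(template)
    pattern = pattern.replace(re.escape("<head>"), "(?P<head>.+?)")
    pattern = pattern.replace(re.escape("<tail>"), "(?P<tail>.+)")
    m = re.fullmatch(pattern, filled)
    return (m.group("head"), m.group("tail")) if m else None

template = "<head> is the capital of <tail>"
print(parse_filled_template(template, "Paris is the capital of France"))
# ('Paris', 'France')
```

Because each relation has its own template, scoring the model's ability to fill a template doubles as zero-shot relation classification: the relation whose filled template the model finds most probable wins.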