Recognizing unseen relations with no training instances is a challenging task in the real world. In this paper, we propose a prompt-based model with semantic knowledge augmentation (ZS-SKA) to recognize unseen relations under the zero-shot setting. Following a new word-level sentence translation rule, we generate augmented instances from instances with seen relations. We design prompts based on an external knowledge graph to integrate semantic knowledge information learned from seen relations. Instead of using the actual label set in the prompt template, we construct weighted virtual label words. By generating representations of both seen and unseen relations from the augmented instances and prompts, we compute distances through a prototypical network to predict unseen relations. Extensive experiments on three public datasets show that ZS-SKA outperforms state-of-the-art methods under the zero-shot scenario. Our experimental results also demonstrate the effectiveness and robustness of ZS-SKA.
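As a minimal sketch of the prototype-matching step described above (PyTorch, with the prompt-based encoder abstracted away; this is not the authors' exact ZS-SKA architecture), each relation's instances are averaged into a prototype and a query is assigned to the nearest one:

```python
# Sketch of prototypical-network classification: embeddings are assumed to come
# from a prompt-based encoder; here random tensors stand in for them.
import torch

def build_prototypes(support_emb: torch.Tensor, support_labels: torch.Tensor) -> torch.Tensor:
    """Average the embeddings of each relation's instances into a prototype.

    support_emb:    [num_instances, hidden_dim]
    support_labels: [num_instances] integer relation ids
    """
    num_relations = int(support_labels.max().item()) + 1
    prototypes = torch.stack([
        support_emb[support_labels == r].mean(dim=0) for r in range(num_relations)
    ])
    return prototypes  # [num_relations, hidden_dim]

def predict(query_emb: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Assign each query to the relation whose prototype is closest (Euclidean distance)."""
    distances = torch.cdist(query_emb, prototypes)  # [num_queries, num_relations]
    return distances.argmin(dim=-1)

# Toy usage: 3 relations, 4 augmented instances each.
support_emb = torch.randn(12, 768)
support_labels = torch.tensor([0] * 4 + [1] * 4 + [2] * 4)
prototypes = build_prototypes(support_emb, support_labels)
print(predict(torch.randn(5, 768), prototypes))
```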
We propose a zero-shot learning relation classification (ZSLRC) framework that improves on state-of-the-art frameworks through its ability to recognize novel relations that are not present in the training data. The zero-shot learning approach mimics the way humans learn and recognize new concepts without prior knowledge. To achieve this, ZSLRC uses a modified advanced prototypical network that utilizes weighted side (auxiliary) information. ZSLRC's side information is constructed from keywords, hypernyms of named entities, and labels and their synonyms. ZSLRC also includes an automatic hypernym extraction framework that acquires hypernyms of various named entities directly from the web. ZSLRC improves on state-of-the-art few-shot learning relation classification methods that rely on labeled training data, and is therefore applicable even in real-world scenarios where some relations have no corresponding labeled training examples. We show results from extensive experiments on two public datasets (NYT and FewRel) and demonstrate that ZSLRC significantly outperforms state-of-the-art methods on supervised learning, few-shot learning, and zero-shot learning tasks. Our experimental results also demonstrate the effectiveness and robustness of our proposed model.
Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
We study the problem of few-shot fine-grained entity typing (FET), where only a few annotated entity mentions with context are given for each entity type. Recently, prompt-based tuning has demonstrated superior performance in few-shot scenarios by formulating the entity typing classification task as a "fill-in-the-blank" problem. This allows effective utilization of the strong language modeling capability of pre-trained language models (PLMs). Despite the success of current prompt-based tuning approaches, two major challenges remain: (1) the verbalizer in the prompt is either manually designed or constructed from an external knowledge base without considering the target corpus and label hierarchy information, and (2) current approaches mainly utilize the representation power of PLMs but do not explore their generation power acquired through extensive general-domain pre-training. In this work, we propose a novel framework for few-shot FET consisting of two modules: (1) an entity type label interpretation module that automatically learns to relate type labels to the vocabulary by jointly leveraging the few-shot instances and the label hierarchy, and (2) a type-based contextualized instance generator that produces new instances based on the given instances to enlarge the training set for better generalization. On three benchmark datasets, our model outperforms existing methods by significant margins. The code can be found at https://github.com/teapot123/fine-graining-entity-typing.
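The "fill-in-the-blank" formulation can be illustrated with a small hedged sketch using a masked language model; the prompt wording and the hand-picked verbalizer below are placeholders, whereas the paper learns the label-to-vocabulary mapping automatically from the few-shot data and the label hierarchy:

```python
# Sketch of prompt-based entity typing: score label words at the [MASK] position.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hand-picked placeholder verbalizer (type -> candidate label words).
verbalizer = {"person": ["person", "man"], "location": ["place", "city"]}

def type_entity(sentence: str, mention: str) -> str:
    prompt = f"{sentence} In this sentence, {mention} is a {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    scores = {}
    for label, words in verbalizer.items():
        word_ids = tokenizer.convert_tokens_to_ids(words)
        scores[label] = logits[0, mask_pos, word_ids].mean().item()  # average label-word logit
    return max(scores, key=scores.get)

print(type_entity("Barack Obama was born in Hawaii.", "Hawaii"))
```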
Recent advances in large pre-trained language models (PLMs) have led to impressive gains on natural language understanding (NLU) tasks with task-specific fine-tuning. However, directly tuning a PLM relies heavily on large amounts of labeled instances, which are usually hard to obtain. Prompt-based tuning of PLMs has proven valuable for various few-shot tasks. Existing works on prompt-based tuning for few-shot NLU mainly focus on deriving proper label words with a verbalizer or generating prompt templates that elicit semantics from the PLM. In addition, conventional data augmentation methods have also been verified to be useful for few-shot tasks. However, there are currently few data augmentation methods designed for the prompt-based tuning paradigm. We therefore study the new problem of data augmentation for prompt-based few-shot learners. Since label semantics are crucial for prompt-based tuning, we propose a novel label-guided data augmentation method, PromptDA, which exploits rich label semantic information for data augmentation. Extensive experimental results on few-shot text classification tasks demonstrate that our proposed framework achieves superior performance by effectively leveraging label semantics and data augmentation for natural language understanding.
How can we extend a pre-trained model to many language understanding tasks, without labeled or additional unlabeled data? Pre-trained language models (PLMs) have been effective for a wide range of NLP tasks. However, existing approaches either require fine-tuning on downstream labeled datasets or manually constructing proper prompts. In this paper, we propose nonparametric prompting PLM (NPPrompt) for fully zero-shot language understanding. Unlike previous methods, NPPrompt uses only pre-trained language models and does not require any labeled data or additional raw corpus for further fine-tuning, nor does it rely on humans to construct a comprehensive set of prompt label words. We evaluate NPPrompt against previous major few-shot and zero-shot learning methods on diverse NLP tasks: including text classification, text entailment, similar text retrieval, and paraphrasing. Experimental results demonstrate that our NPPrompt outperforms the previous best fully zero-shot method by big margins, with absolute gains of 12.8% in accuracy on text classification and 18.9% on the GLUE benchmark.
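A hedged sketch of the core nonparametric idea: building the label-word set from the PLM's own embedding space by taking nearest neighbors of the label name, with no labeled data or manual curation. The model choice and the sub-word averaging below are illustrative assumptions, not NPPrompt's exact aggregation scheme:

```python
# Sketch: find label words as nearest neighbors of the label name in the PLM's
# input embedding space, then use them as an automatic verbalizer.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
embeddings = model.get_input_embeddings().weight.detach()   # [vocab_size, hidden_dim]

def neighbor_label_words(label_name: str, k: int = 10):
    """Return the k vocabulary tokens closest to the label-name embedding."""
    ids = tokenizer(" " + label_name, add_special_tokens=False)["input_ids"]
    anchor = embeddings[ids].mean(dim=0, keepdim=True)       # average sub-word embeddings
    sims = torch.nn.functional.cosine_similarity(anchor, embeddings)
    top_ids = sims.topk(k).indices.tolist()
    return tokenizer.convert_ids_to_tokens(top_ids)

print(neighbor_label_words("sports"))
```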
Machine learning methods, especially deep neural networks, have achieved great success, but many of them often rely on a number of labeled samples for training. In real-world applications, we often need to address sample shortage due to, for example, dynamic contexts with emerging prediction targets and costly sample annotation. Therefore, low-resource learning, which aims to learn robust prediction models without sufficient resources (especially training samples), is now being widely investigated. Among all low-resource learning studies, many prefer to utilize auxiliary information in the form of a knowledge graph (KG), which is becoming increasingly popular for knowledge representation, to reduce the reliance on labeled samples. In this survey, we comprehensively review 90 papers on two major low-resource learning settings: zero-shot learning (ZSL), where the classes to predict have never appeared in training, and few-shot learning (FSL), where the classes to predict have only a small number of labeled samples available. We first introduce the KGs used in ZSL and FSL studies as well as existing and potential KG construction solutions, and then systematically categorize and summarize KG-aware ZSL and FSL methods, dividing them into different paradigms such as mapping-based, data augmentation, propagation-based, and optimization-based. We next present different applications, including KG-augmented prediction tasks in computer vision and natural language processing as well as KG completion tasks, and some typical evaluation resources for each task. We finally discuss challenges and future directions on aspects such as new learning and reasoning paradigms and the construction of high-quality KGs.
Triplet extraction aims to extract entities and their corresponding relations in unstructured text. Most existing methods train an extraction model on high-quality training data, and hence are incapable of extracting relations that were not observed during training. Generalizing the model to unseen relations typically requires fine-tuning on synthetic training data which is often noisy and unreliable. In this paper, we argue that reducing triplet extraction to a template infilling task over a pre-trained language model can equip the model with zero-shot learning capabilities and enable it to leverage the implicit knowledge in the language model. Embodying these ideas, we propose a novel framework, ZETT (ZEro-shot Triplet extraction by Template infilling), that is based on end-to-end generative transformers. Our experiments show that without any data augmentation or pipeline systems, ZETT can outperform previous state-of-the-art models with 25% fewer parameters. We further show that ZETT is more robust in detecting entities and can be incorporated with automatically generated templates for relations.
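A hedged sketch of the template-infilling idea with a T5-style model: the relation template's entity slots are replaced by sentinel tokens and the model generates the fillers. The template wording and the absence of candidate-relation ranking make this illustrative rather than ZETT's full procedure:

```python
# Sketch: fill head/tail slots of a relation template with a seq2seq model.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

def infill(sentence: str, template: str) -> str:
    """template uses <extra_id_0>/<extra_id_1> where the head/tail entities should go."""
    source = f"{sentence} {template}"
    inputs = tokenizer(source, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=20)
    return tokenizer.decode(output_ids[0], skip_special_tokens=False)

sentence = "Marie Curie was born in Warsaw."
template = "<extra_id_0> has the place of birth <extra_id_1>."
print(infill(sentence, template))  # generated spans fill the two sentinel slots
```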
Few-shot named entity recognition (NER) is imperative for entity tagging in limited-resource domains and has therefore received proper attention in recent years. Existing few-shot methods are mostly evaluated under in-domain settings. In contrast, little is known about how these inherently faithful models perform in cross-domain NER using a few labeled in-domain examples. This paper proposes a two-step rationale-centric data augmentation method to improve the model's generalization ability. Results on several datasets show that our model-agnostic method significantly improves performance on cross-domain NER tasks compared to previous state-of-the-art methods, including counterfactual data augmentation and prompt-tuning methods. Our code is available at \url{https://github.com/lifan-yuan/factmix}.
The inception of modeling contextual information using models such as BERT, ELMo, and Flair has significantly improved representation learning for words. It has also given SOTA results in almost every NLP task: machine translation, text summarization, and named entity recognition, to name a few. In this work, in addition to using these dominant context-aware representations, we propose a Knowledge Aware Representation Learning (KARL) network for named entity recognition (NER). We discuss the challenges of leveraging existing methods to incorporate world knowledge and show how our proposed method can overcome them. KARL is based on a Transformer encoder that utilizes a large knowledge base represented as fact triplets, converts them into a graph context, and extracts the essential entity information residing within to generate contextualized triplet representations for feature augmentation. Experimental results show that augmentation with KARL can considerably boost the performance of our NER system and achieve significantly better results than existing approaches in the literature on three public NER datasets, namely CoNLL 2003, CoNLL++, and OntoNotes v5. We also observe better generalization and applicability to real-world settings on entities unseen by KARL.
With the success of the prompt-tuning paradigm in Natural Language Processing (NLP), various prompt templates have been proposed to further stimulate specific knowledge for serving downstream tasks, e.g., machine translation, text generation, relation extraction, and so on. Existing prompt templates are mainly shared among all training samples and carry only task-description information. However, training samples are quite diverse. A shared task description cannot stimulate the unique task-related information in each training sample, especially for tasks with a finite label space. To exploit such unique task-related information, we imitate the human decision process, which seeks contrastive attributes between objective facts and their potential counterfactuals. Thus, we propose the \textbf{C}ounterfactual \textbf{C}ontrastive \textbf{Prompt}-Tuning (CCPrompt) approach for many-class classification, e.g., relation classification, topic classification, and entity typing. Compared with simple classification tasks, these tasks have more complex finite label spaces and are more demanding for prompts. First of all, we prune the finite label space to construct fact-counterfactual pairs. Then, we exploit the contrastive attributes by projecting training instances onto every fact-counterfactual pair. We further set up global prototypes corresponding to all contrastive attributes and use them to select valid contrastive attributes as additional tokens in the prompt template. Finally, a simple Siamese representation learning scheme is employed to enhance the robustness of the model. We conduct experiments on relation classification, topic classification, and entity typing tasks in both the fully supervised setting and the few-shot setting. The results indicate that our model outperforms former baselines.
Knowledge extraction (KE), which aims to extract structural information from unstructured text, often suffers from data scarcity and emerging unseen types, i.e., low-resource scenarios. Many neural approaches to low-resource KE have been widely investigated and have achieved impressive performance. In this paper, we present a literature review of KE in low-resource scenarios and categorize existing works into three paradigms: (1) exploiting higher-resource data, (2) exploiting stronger models, and (3) exploiting data and models together. In addition, we describe promising applications and outline some potential directions for future research. We hope our survey can help both the academic and industrial communities better understand this field, inspire more ideas, and boost broader applications.
Stance detection aims to identify whether the author of a text is in favor of, against, or neutral towards a target. The main challenge of this task is two-fold: few-shot learning resulting from the varying targets, and the lack of contextual information about the targets. Existing works mainly address the second issue by designing attention-based models or introducing noisy external knowledge, while the first issue remains under-explored. In this paper, inspired by the potential capabilities of pre-trained language models (PLMs), we propose to introduce prompt-based fine-tuning for stance detection. PLMs can provide essential contextual information for the targets and enable few-shot learning via prompts. Considering the crucial role of the target in stance detection, we design target-aware prompts and propose a novel verbalizer. Instead of mapping each label to a concrete word, our verbalizer maps each label to a vector and picks the label that best captures the correlation between the stance and the target. Moreover, to alleviate the possible defects of handling varying targets with a single handcrafted prompt, we propose to distill the information learned from multiple prompts. Experimental results show the superior performance of our proposed model in both the full-data and few-shot scenarios.
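A minimal sketch of the "label as a vector, not a word" verbalizer idea: the [MASK] position's hidden state from a target-aware prompt is scored against learnable label vectors. The prompt wording and the random initialization of the label vectors are placeholder assumptions, not the paper's exact design:

```python
# Sketch: target-aware prompt + label vectors instead of fixed label words.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
# One learnable vector per label: favor / against / neutral (randomly initialized here).
label_vectors = nn.Parameter(torch.randn(3, encoder.config.hidden_size))

def stance_logits(text: str, target: str) -> torch.Tensor:
    prompt = f"{text} The stance towards {target} is {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    hidden = encoder(**inputs).last_hidden_state               # [1, seq_len, hidden]
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    mask_state = hidden[0, mask_pos]                            # [hidden]
    return label_vectors @ mask_state                           # similarity to each label vector

print(stance_logits("Wind farms ruin the landscape.", "renewable energy").argmax())
```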
Few-shot relation extraction (FSRE) aims at recognizing unseen relations by learning with merely a handful of annotated instances. To generalize to new relations more effectively, this paper proposes a novel pipeline for the FSRE task based on queRy-information guided Attention and adaptive Prototype fuSion, namely RAPS. Specifically, RAPS first derives the relation prototype with the query-information guided attention module, which exploits rich interactive information between the support instances and the query instances to obtain more accurate initial prototype representations. Then RAPS combines the derived initial prototype with the relation information through the adaptive prototype fusion mechanism to get the integrated prototype for both training and prediction. Experiments on the benchmark dataset FewRel 1.0 show a significant improvement of our method over state-of-the-art methods.
To reduce human annotation for relation extraction (RE) tasks, distantly supervised approaches have been proposed, yet they struggle with low performance. In this work, we propose a novel DSRE-NLI framework, which considers both distant supervision from existing knowledge bases and indirect supervision from pretrained language models for other tasks. DSRE-NLI energizes an off-the-shelf natural language inference (NLI) engine with a semi-automatic relation verbalization (SARV) mechanism to provide indirect supervision, and further consolidates the distant annotations to benefit multi-class RE models. The NLI-based indirect supervision acquires only one relation verbalization template from humans as a semantically general template for each relation, and the template set is then enriched with high-quality textual patterns automatically mined from the distantly annotated corpus. With two simple and effective data consolidation strategies, the quality of the training data is substantially improved. Extensive experiments demonstrate that the proposed framework significantly improves SOTA performance (by up to 7.73% of F1) on distantly supervised RE benchmark datasets.
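A minimal sketch of NLI-based indirect supervision for RE (not the full DSRE-NLI pipeline): the sentence serves as the premise, a relation template filled with the entity pair serves as the hypothesis, and the relation with the highest entailment probability is predicted. The templates and the NLI model choice below are illustrative:

```python
# Sketch: score candidate relations by entailment with an off-the-shelf NLI model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
entail_idx = model.config.label2id.get("ENTAILMENT", 2)  # index of the entailment class

# Illustrative relation verbalization templates.
templates = {
    "place_of_birth": "{head} was born in {tail}.",
    "employer": "{head} works for {tail}.",
}

def classify_relation(sentence: str, head: str, tail: str) -> str:
    scores = {}
    for relation, template in templates.items():
        hypothesis = template.format(head=head, tail=tail)
        inputs = tokenizer(sentence, hypothesis, return_tensors="pt")
        with torch.no_grad():
            probs = model(**inputs).logits.softmax(dim=-1)
        scores[relation] = probs[0, entail_idx].item()
    return max(scores, key=scores.get)

print(classify_relation("Obama joined Harvard Law School.", "Obama", "Harvard Law School"))
```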
Metric-based meta-learning is one of the de facto standards in few-shot learning. It is composed of representation learning and metric calculation designs. Previous works construct class representations in different ways, varying from mean output embedding to covariance and distributions. However, using point embeddings lacks expressivity and cannot capture class information robustly, while complex statistical modeling makes metric design difficult. In this work, we use tensor fields ("areas") to model classes from the geometrical perspective for few-shot learning. We present a simple and effective method, dubbed hypersphere prototypes (HyperProto), where class information is represented by hyperspheres with dynamic sizes and two sets of learnable parameters: the hypersphere's center and radius. Extending from points to areas, hyperspheres are much more expressive than embeddings. Moreover, it is more convenient to perform metric-based classification with hypersphere prototypes than with statistical modeling, as we only need to calculate the distance from a data point to the surface of the hypersphere. Following this idea, we also develop two variants of prototypes under other measurements. Extensive experiments and analysis on few-shot learning tasks across NLP and CV and comparison with 20+ competitive baselines demonstrate the effectiveness of our approach.
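A small sketch of the hypersphere-prototype metric described above: each class is a learnable center plus radius, and a query is scored by its distance to the sphere's surface, | ||x - c|| - r |, rather than to a point prototype (the encoder producing x is abstracted away):

```python
# Sketch: classify by distance to each class hypersphere's surface.
import torch
import torch.nn as nn

class HyperspherePrototypes(nn.Module):
    def __init__(self, num_classes: int, dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, dim))  # one center per class
        self.radii = nn.Parameter(torch.ones(num_classes))          # one radius per class

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: [batch, dim] -> logits [batch, num_classes] (negative surface distance)."""
        center_dist = torch.cdist(x, self.centers)                  # [batch, num_classes]
        surface_dist = (center_dist - self.radii.abs()).abs()       # distance to the shell
        return -surface_dist                                        # closer surface -> larger logit

prototypes = HyperspherePrototypes(num_classes=5, dim=768)
logits = prototypes(torch.randn(8, 768))
print(logits.argmax(dim=-1))
```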
Language models (LMs) have proven to be useful in various downstream applications, such as summarization, translation, question answering, and text classification. Because of the vast amount of information they can store, LMs are becoming increasingly important tools in artificial intelligence. In this work, we present ProP (Prompting as Probing), which utilizes GPT-3, a large language model originally proposed by OpenAI in 2020, to perform the task of knowledge base construction (KBC). ProP implements a multi-step approach that combines a variety of prompting techniques to achieve this. Our results show that manual prompt curation is essential, that the LM must be encouraged to give answer sets of variable lengths, in particular including empty answer sets, that true/false questions are a useful device for increasing the precision of the LM-generated suggestions, that the size of the LM is a crucial factor, and that a dictionary of entity aliases improves the LM score. Our evaluation study indicates that these proposed techniques can substantially enhance the quality of the final predictions: ProP won track 2 of the LM-KBC competition, outperforming the baseline by 36.4 percentage points. Our implementation is available at https://github.com/hemile/iswc-challenge.
Zero-shot relation triplet extraction (ZeroRTE) aims to extract relation triplets from unstructured texts under the zero-shot setting, where the relation sets at the training and testing stages are disjoint. The previous state-of-the-art method handles this challenging task by leveraging pretrained language models to generate data as additional training samples, which increases the training cost and severely constrains the model performance. To address the above issues, we propose a novel method named PCRED for ZeroRTE with Potential Candidate Relation Selection and Entity Boundary Detection. The remarkable characteristic of PCRED is that it does not rely on additional data and still achieves promising performance. The model adopts a relation-first paradigm, recognizing unseen relations through candidate relation selection. With this approach, the semantics of relations are naturally infused in the context. Entities are subsequently extracted based on the context and the semantics of relations. We evaluate our model on two ZeroRTE datasets. The experiment results show that our method consistently outperforms previous works. Our code will be available at https://anonymous.4open.science/r/PCRED.
Contrastive learning has become a new paradigm for unsupervised sentence embeddings. Previous studies focus on instance-wise contrastive learning, attempting to construct positive pairs with textual data augmentation. In this paper, we propose a novel Contrastive learning method with Prompt-derived Virtual semantic Prototypes (ConPVP). Specifically, with the help of prompts, we construct a virtual semantic prototype for each instance, and derive negative prototypes by using the negative form of the prompts. Using a prototypical contrastive loss, we enforce the anchor sentence embedding to be close to its corresponding semantic prototypes, and far apart from the negative prototypes as well as the prototypes of other sentences. Extensive experimental results on semantic textual similarity, transfer, and clustering tasks demonstrate the effectiveness of our proposed model compared to strong baselines. Code is available at https://github.com/lemon0830/promptCSE.
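A hedged sketch of the prototypical contrastive objective: each anchor sentence embedding is pulled toward its own prompt-derived virtual prototype and pushed away from the negative-prompt prototype and the prototypes of other sentences in the batch. The encoders and prompt construction are abstracted away, and the temperature value is an assumption:

```python
# Sketch of an InfoNCE-style prototypical contrastive loss over a batch.
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(anchors, prototypes, neg_prototypes, tau: float = 0.05):
    """anchors, prototypes, neg_prototypes: [batch, dim] sentence/prototype embeddings."""
    anchors = F.normalize(anchors, dim=-1)
    prototypes = F.normalize(prototypes, dim=-1)
    neg_prototypes = F.normalize(neg_prototypes, dim=-1)

    pos_and_inbatch = anchors @ prototypes.t() / tau            # diagonal = positive pairs
    negatives = (anchors * neg_prototypes).sum(-1, keepdim=True) / tau
    logits = torch.cat([pos_and_inbatch, negatives], dim=1)     # [batch, batch + 1]
    targets = torch.arange(anchors.size(0))                     # each anchor matches its own prototype
    return F.cross_entropy(logits, targets)

loss = prototypical_contrastive_loss(torch.randn(16, 768), torch.randn(16, 768), torch.randn(16, 768))
print(loss.item())
```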
Neural language representation models such as BERT pre-trained on large-scale corpora can well capture rich semantic patterns from plain text, and be fine-tuned to consistently improve the performance of various NLP tasks. However, the existing pre-trained language models rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better language understanding. We argue that informative entities in KGs can enhance language representation with external knowledge. In this paper, we utilize both large-scale textual corpora and KGs to train an enhanced language representation model (ERNIE), which can take full advantage of lexical, syntactic, and knowledge information simultaneously. The experimental results have demonstrated that ERNIE achieves significant improvements on various knowledge-driven tasks, and meanwhile is comparable with the state-of-the-art model BERT on other common NLP tasks. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/ERNIE.