We present a simple yet effective self-training approach, named STAD, for low-resource relation extraction. The approach first classifies the automatically annotated instances into two groups according to the probabilities predicted by the teacher model: confident instances and uncertain instances. In contrast to most previous studies, which mainly exploit only the confident instances for self-training, we make use of the uncertain instances as well. To this end, we propose a method to identify ambiguous but useful instances among the uncertain ones, and then divide the relations into a candidate-label set and a negative-label set for each ambiguous instance. Next, we propose a set-negative training method on the negative-label sets of the ambiguous instances, together with positive training on the confident instances. Finally, a joint training method is proposed to build the final relation extraction system on all the data. Experimental results on two widely used datasets, SemEval 2010 Task-8 and Re-TACRED, under low-resource settings show that this new self-training approach indeed achieves significant and consistent improvements compared with several competitive self-training systems. The code is publicly available at https://github.com/jjyunlp/stad
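As a concrete illustration of the set-negative training idea, the sketch below combines standard cross-entropy on confident instances with a negative-learning term that pushes probability mass away from each ambiguous instance's negative-label set. It is a minimal PyTorch sketch under our own assumptions; the function names, loss weighting, and exact form of the negative term are not taken from the authors' released code.

```python
import torch
import torch.nn.functional as F

def positive_loss(logits, labels):
    """Standard cross-entropy on confidently pseudo-labeled instances."""
    return F.cross_entropy(logits, labels)

def set_negative_loss(logits, negative_mask, eps=1e-7):
    """Negative training on ambiguous instances.

    negative_mask: (batch, num_relations), 1 for relations in the instance's
    negative-label set. We minimize -log(1 - p_k) for those relations, i.e.
    the model is only told which labels the instance is *not*.
    """
    probs = logits.softmax(dim=-1)
    neg_log = -torch.log(1.0 - probs + eps)                    # (batch, num_relations)
    per_instance = (neg_log * negative_mask).sum(-1) / negative_mask.sum(-1).clamp(min=1)
    return per_instance.mean()

def joint_loss(conf_logits, conf_labels, ambig_logits, ambig_neg_mask, alpha=1.0):
    """Joint objective over a mixed batch of confident and ambiguous instances."""
    return positive_loss(conf_logits, conf_labels) + alpha * set_negative_loss(ambig_logits, ambig_neg_mask)
```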
The shortage of labeled data has been a long-standing obstacle for relation extraction. Semi-supervised relation extraction (SSRE), which annotates unlabeled samples as additional training data, has been shown to be a promising approach. Almost all prior studies along this line adopt multiple models to make the annotations more reliable by taking the intersection set of the results predicted by these models. However, the difference set, which contains rich information about the unlabeled data, has been overlooked by prior studies. In this paper, we propose to learn not only from the consensus but also from the disagreement among different models in SSRE. To this end, we develop a simple and general multi-teacher distillation (MTD) framework that can be easily integrated into any existing SSRE method. Specifically, we first let the teachers, corresponding to the multiple models in the SSRE method, select samples from the intersection set of the last iteration as usual to augment the labeled data. We then transfer the class distributions on the difference set as soft labels to guide the student. Finally, we use the trained student model for prediction. Experimental results on two public datasets demonstrate that our framework significantly boosts the performance of the base SSRE methods at fairly low computational cost.
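A minimal sketch of the distillation step on the difference set is given below: the teachers' class distributions are averaged into a soft label and the student is trained with a KL-divergence loss against it. The aggregation by averaging and the temperature scaling are our assumptions for illustration, not necessarily the exact MTD formulation.

```python
import torch
import torch.nn.functional as F

def mtd_soft_labels(teacher_logits_list, temperature=2.0):
    """Average the teachers' class distributions on a difference-set sample."""
    probs = [F.softmax(logits / temperature, dim=-1) for logits in teacher_logits_list]
    return torch.stack(probs, dim=0).mean(dim=0)

def distillation_loss(student_logits, soft_labels, temperature=2.0):
    """KL divergence between the student's distribution and the teachers' soft label."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, soft_labels, reduction="batchmean") * temperature ** 2
```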
Information Extraction (IE) aims to extract structured information from heterogeneous sources. IE from natural language texts includes sub-tasks such as Named Entity Recognition (NER), Relation Extraction (RE), and Event Extraction (EE). Most IE systems require a comprehensive understanding of sentence structure, implied semantics, and domain knowledge to perform well; thus, IE tasks always need adequate external resources and annotations. However, it takes time and effort to obtain more human annotations. Low-Resource Information Extraction (LRIE) strives to use unsupervised data, reducing the required resources and human annotation. In practice, existing systems either utilize self-training schemes to generate pseudo labels that will cause the gradual drift problem, or leverage consistency regularization methods which inevitably possess confirmation bias. To alleviate confirmation bias due to the lack of feedback loops in existing LRIE learning paradigms, we develop a Gradient Imitation Reinforcement Learning (GIRL) method to encourage pseudo-labeled data to imitate the gradient descent direction on labeled data, which can force pseudo-labeled data to achieve better optimization capabilities similar to labeled data. Based on how well the pseudo-labeled data imitates the instructive gradient descent direction obtained from labeled data, we design a reward to quantify the imitation process and bootstrap the optimization capability of pseudo-labeled data through trial and error. Beyond the learning paradigm, GIRL is not limited to specific sub-tasks, and we leverage GIRL to solve all IE sub-tasks (named entity recognition, relation extraction, and event extraction) in low-resource settings (semi-supervised IE and few-shot IE).
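The core of the gradient imitation reward can be illustrated with a short PyTorch sketch: the reward for a pseudo-labeled batch is the cosine similarity between the gradient it induces and the gradient induced by a labeled batch. This is a simplified sketch; the loss_fn signature and the surrounding policy-gradient loop are assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def flat_grad(loss, params):
    """Flatten the gradient of a loss with respect to the model parameters."""
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    return torch.cat([g.reshape(-1) for g in grads if g is not None])

def gradient_imitation_reward(model, loss_fn, labeled_batch, pseudo_batch):
    """Reward = cosine similarity between the pseudo-labeled gradient and the
    labeled gradient; higher means the pseudo-labeled data imitates the
    gradient descent direction of the labeled data more closely."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_labeled = flat_grad(loss_fn(model, labeled_batch), params)
    g_pseudo = flat_grad(loss_fn(model, pseudo_batch), params)
    return F.cosine_similarity(g_labeled, g_pseudo, dim=0)
```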
Named entity recognition (NER) is an important task in natural language processing. However, traditional supervised NER requires large-scale annotated datasets. Distant supervision has been proposed to alleviate the heavy demand for datasets, but datasets constructed in this way are extremely noisy and suffer from a serious unlabeled-entity problem. The cross-entropy (CE) loss function is highly sensitive to unlabeled data, leading to severe performance degradation. As an alternative, we propose a new loss function called NRCES to cope with this problem. A sigmoid term is used to mitigate the negative impact of noise. In addition, we balance the convergence and noise tolerance of the model with respect to the samples and the training process. Experiments on synthetic and real-world datasets demonstrate that our approach shows strong robustness in the case of severe unlabeled-entity problems, achieving new state-of-the-art results on real-world datasets.
As an extensively studied task in the field of natural language processing (NLP), aspect-based sentiment analysis (ABSA) is the task of predicting the sentiment expressed in a text with respect to the corresponding aspect. Unfortunately, most languages lack sufficient annotated resources, so an increasing number of researchers have focused on cross-lingual aspect-based sentiment analysis (XABSA). However, most recent studies only concentrate on cross-lingual data alignment rather than model alignment. To this end, we propose a novel framework, CL-XABSA: Contrastive Learning for Cross-lingual Aspect-Based Sentiment Analysis. Based on contrastive learning, we close the distance between samples with the same label in different semantic spaces, thereby achieving convergence of the semantic spaces of different languages. Specifically, we design two contrastive strategies, token-level contrastive learning of token embeddings (TL-CTE) and sentiment-level contrastive learning of token embeddings (SL-CTE), to regularize the semantic spaces of the source and target languages so that they become more uniform. Since our framework can receive datasets in multiple languages during training, it can be adapted not only to the XABSA task but also to multilingual aspect-based sentiment analysis (MABSA). To further improve model performance, we perform knowledge distillation, exploiting the unlabeled data of the target language. In the distillation XABSA task, we further explore the comparative effectiveness of different data (source dataset, translated dataset, and code-switched dataset). The results show that the proposed method achieves certain improvements on the three tasks of XABSA, distillation XABSA, and MABSA. For reproducibility, the code of our paper is available at https://github.com/gklmip/cl-xabsa
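The token-level contrastive strategy can be sketched as a standard supervised contrastive objective over token embeddings pooled across source- and target-language sentences, pulling together tokens that share a label. This is an illustrative sketch only; CL-XABSA's exact TL-CTE/SL-CTE formulations may differ.

```python
import torch
import torch.nn.functional as F

def token_level_contrastive_loss(token_embs, token_labels, temperature=0.1):
    """Supervised contrastive loss over tokens from both languages: tokens with
    the same label are positives, all other tokens act as negatives."""
    z = F.normalize(token_embs, dim=-1)                       # (n_tokens, dim)
    sims = z @ z.t() / temperature                            # (n, n)
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (token_labels.unsqueeze(0) == token_labels.unsqueeze(1)) & ~eye
    log_prob = sims - torch.logsumexp(sims.masked_fill(eye, float("-inf")), dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts.clamp(min=1)
    return loss[pos_counts > 0].mean()
```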
As many fine-tuned pre-trained language models (PLMs) with promising performance are generously released, it is crucial to study better ways of reusing these models, since doing so can largely reduce the retraining computational cost and the potential environmental side effects. In this paper, we explore a model reuse paradigm, knowledge amalgamation (KA). Without human annotation, KA aims to merge the knowledge from different teachers, each specializing in a different classification problem, into a versatile student model. To achieve this, we design a Model Uncertainty-aware Knowledge Amalgamation (MUKA) framework, which identifies the potentially adequate teacher using Monte-Carlo dropout to estimate the golden supervision for guiding the student. Experimental results demonstrate that MUKA achieves substantial improvements over baselines on benchmark datasets. Further analysis shows that MUKA generalizes well to complicated settings with multiple teacher models, heterogeneous teachers, and even cross-dataset teachers.
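A small sketch of the Monte-Carlo-dropout step: each teacher is run several times with dropout active, and the entropy of the averaged prediction serves as an uncertainty estimate for deciding which teacher should supervise a given instance. The per-instance selection by minimum entropy is our assumption; MUKA's actual criterion may be more elaborate. Each teacher is assumed to be a callable returning class logits.

```python
import torch

@torch.no_grad()
def mc_dropout_uncertainty(teacher, inputs, n_samples=10):
    """MC-dropout predictive distribution and its entropy for one teacher."""
    teacher.train()  # keep dropout layers active at inference time
    probs = torch.stack([teacher(inputs).softmax(dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)                                   # (batch, num_classes)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

def pick_teacher(teachers, inputs):
    """Pick, per instance, the teacher whose MC-dropout prediction is least uncertain."""
    stats = [mc_dropout_uncertainty(t, inputs) for t in teachers]
    entropies = torch.stack([e for _, e in stats], dim=0)            # (num_teachers, batch)
    return entropies.argmin(dim=0)
```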
Existing distantly supervised relation extractors usually rely on noisy data for both model training and evaluation, which may lead to garbage-in-garbage-out systems. To alleviate the problem, we study whether a small clean dataset can help improve the quality of distantly supervised models. We show that, besides enabling a more convincing evaluation of models, a small clean dataset also helps us build more robust denoising models. Specifically, we propose a new criterion for clean instance selection based on influence functions. It collects sample-level evidence for recognizing good instances (which is more informative than loss-level evidence). We also propose a teacher-student mechanism for controlling the purity of intermediate results when bootstrapping the clean set. The whole approach is model-agnostic and demonstrates strong performance on denoising both real (NYT) and synthetic noisy datasets.
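As a rough illustration of how a small clean set can provide sample-level evidence, the sketch below scores a candidate noisy instance by how well its gradient aligns with the gradient of the clean set. This is a simplified first-order proxy written under our own assumptions; the paper's influence-function criterion additionally involves the inverse Hessian, which is omitted here.

```python
import torch

def alignment_score(model, loss_fn, candidate_batch, clean_batch):
    """Dot product between the candidate instance's gradient and the clean-set
    gradient; a high score suggests the instance moves the model in the same
    direction as the clean data and is likely a good instance to keep."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_cand = torch.autograd.grad(loss_fn(model, candidate_batch), params, retain_graph=True)
    g_clean = torch.autograd.grad(loss_fn(model, clean_batch), params)
    return sum((a * b).sum() for a, b in zip(g_cand, g_clean))
```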
Pre-trained Language Models (PLMs) have been applied in NLP tasks and achieve promising results. Nevertheless, the fine-tuning procedure needs labeled data of the target domain, making it difficult to learn in low-resource and non-trivial labeled scenarios. To address these challenges, we propose Prompt-based Text Entailment (PTE) for low-resource named entity recognition, which better leverages knowledge in the PLMs. We first reformulate named entity recognition as a text entailment task. The original sentence with entity type-specific prompts is fed into PLMs to get entailment scores for each candidate. The entity type with the top score is then selected as the final label. Then, we inject tagging labels into prompts and treat words as basic units instead of n-gram spans to reduce the time complexity of generating candidates by n-gram enumeration. Experimental results demonstrate that the proposed method PTE achieves competitive performance on the CoNLL03 dataset, and performs better than fine-tuned counterparts on the MIT Movie and Few-NERD datasets in low-resource settings.
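The entailment-scoring step can be illustrated with a short, model-agnostic sketch: each entity type is verbalized as a prompt, an entailment model scores the (sentence, prompt) pair, and the highest-scoring type becomes the prediction. The entailment_score callable and the prompt templates are hypothetical placeholders, not the paper's exact wording.

```python
from typing import Callable, Dict

def classify_entity(sentence: str, span: str,
                    type_prompts: Dict[str, str],
                    entailment_score: Callable[[str, str], float]) -> str:
    """Return the entity type whose prompt is most entailed by the sentence."""
    scores = {
        etype: entailment_score(sentence, template.format(span=span))
        for etype, template in type_prompts.items()
    }
    return max(scores, key=scores.get)

# Illustrative templates (hypothetical, not the paper's exact prompts):
type_prompts = {
    "PER": "{span} is a person.",
    "ORG": "{span} is an organization.",
    "LOC": "{span} is a location.",
    "O":   "{span} is not a named entity.",
}
```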
Named entity recognition (NER) is the task of extracting specific types of named entities from text. Current NER models often rely on human-annotated datasets, requiring extensive involvement of experts in the target domains and entities. This work introduces an ask-to-generate approach, which automatically generates NER datasets by asking simple natural language questions that reflect the needs for entity types (e.g., "Which disease?") to an open-domain question answering system. Without using any in-domain resources (i.e., training sentences, labels, or in-domain dictionaries), our models trained solely on the generated datasets largely outperform weakly-supervised models on six benchmarks across four different domains. Surprisingly, on NCBI-disease, our model achieves 75.5 F1, even outperforming the previous best weakly-supervised model, which utilizes a rich in-domain dictionary provided by domain experts, by 4.1 F1 points. Formulating the needs of NER with natural language also allows us to build NER models for fine-grained entity types such as awards, where our model even outperforms fully supervised models. On three few-shot NER benchmarks, our model achieves new state-of-the-art performance.
Most NER methods rely on extensive labeled data for model training, and they struggle in low-resource scenarios with limited training data. Compared with resource-rich source domains, existing dominant approaches usually encounter the challenge that the target domain has a different label set, which can be characterized as class transfer and domain transfer. In this paper, we propose a lightweight tuning paradigm for low-resource NER via pluggable prompting (LightNER). Specifically, we construct a unified learnable verbalizer of entity categories to generate the entity span sequence and entity categories without any label-specific classifiers, thus addressing the class transfer problem. We further propose a pluggable guidance module that incorporates learnable parameters into the self-attention layers as guidance, which can re-modulate the attention and adapt the pre-trained weights. Note that we only tune the inserted modules while keeping all parameters of the pre-trained language model fixed, making our approach lightweight and flexible for low-resource scenarios and enabling better cross-domain knowledge transfer. Experimental results show that LightNER can obtain comparable performance in the standard supervised setting and outperform strong baselines in low-resource settings. The code is available at https://github.com/zjunlp/deepke/tree/main/main/example/ner/few-shot
Label noise is ubiquitous in various machine learning scenarios such as self-labeling with model predictions and erroneous data annotation. Many existing approaches are based on heuristics such as sample losses, which might not be flexible enough to achieve optimal solutions. Meta learning based methods address this issue by learning a data selection function, but can be hard to optimize. In light of these pros and cons, we propose Selection-Enhanced Noisy label Training (SENT) that does not rely on meta learning while having the flexibility of being data-driven. SENT transfers the noise distribution to a clean set and trains a model to distinguish noisy labels from clean ones using model-based features. Empirically, on a wide range of tasks including text classification and speech recognition, SENT improves performance over strong baselines under the settings of self-training and label corruption.
We present FREDo, a few-shot document-level relation extraction (FSDLRE) benchmark. As opposed to existing benchmarks, which are built on sentence-level relation extraction corpora, we argue that document-level corpora provide more realism, particularly regarding none-of-the-above (NOTA) distributions. Therefore, we propose a set of FSDLRE tasks and construct a benchmark based on two existing supervised learning datasets, DocRED and SciERC. We adapt the state-of-the-art sentence-level method MNAV to the document level and further develop it to improve domain adaptation. We find that FSDLRE is a challenging setting with interesting new characteristics, such as the ability to sample NOTA instances from the support set. The data, code, and trained models are available online (https://github.com/nicpopovic/fredo).
Distantly-Supervised Named Entity Recognition (DS-NER) effectively alleviates the data scarcity problem in NER by automatically generating training samples. Unfortunately, the distant supervision may induce noisy labels, thus undermining the robustness of the learned models and restricting the practical application. To relieve this problem, recent works adopt self-training teacher-student frameworks to gradually refine the training labels and improve the generalization ability of NER models. However, we argue that the performance of the current self-training frameworks for DS-NER is severely underestimated by their plain designs, including both inadequate student learning and coarse-grained teacher updating. Therefore, in this paper, we make the first attempt to alleviate these issues by proposing: (1) adaptive teacher learning comprised of joint training of two teacher-student networks and considering both consistent and inconsistent predictions between two teachers, thus promoting comprehensive student learning. (2) fine-grained student ensemble that updates each fragment of the teacher model with a temporal moving average of the corresponding fragment of the student, which enhances consistent predictions on each model fragment against noise. To verify the effectiveness of our proposed method, we conduct experiments on four DS-NER datasets. The experimental results demonstrate that our method significantly surpasses previous SOTA methods.
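The fine-grained student ensemble can be sketched as a per-fragment exponential moving average: each named fragment of the teacher (e.g., the embedding layer, each encoder block, or the classifier head) is updated from the corresponding student fragment with its own decay. The fragment keying by the first name component and the default decay below are assumptions for illustration.

```python
import torch

@torch.no_grad()
def fragment_ema_update(teacher, student, fragment_decays, default_decay=0.999):
    """Update each fragment of the teacher with a temporal moving average of
    the corresponding fragment of the student."""
    teacher_params = dict(teacher.named_parameters())
    for name, student_param in student.named_parameters():
        fragment = name.split(".")[0]                      # crude fragment key, e.g. "encoder"
        decay = fragment_decays.get(fragment, default_decay)
        teacher_params[name].mul_(decay).add_(student_param, alpha=1.0 - decay)
```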
Cross-lingual machine reading comprehension (xMRC) is challenging due to the lack of training data in low-resource languages. Recent approaches use training data only in a resource-rich language like English to fine-tune large-scale cross-lingual pre-trained language models. Due to the big difference between languages, a model fine-tuned only on the source language may not perform well on target languages. Interestingly, we observe that while the top-1 results predicted by previous approaches may often fail to hit the ground-truth answer, the correct answer is often contained in the top-k predicted results. Based on this observation, we develop a two-stage approach to enhance the model performance. The first stage targets recall: we design a hard-learning (HL) algorithm to maximize the likelihood that the top-k predictions contain the accurate answer. The second stage focuses on precision: an answer-aware contrastive learning (AA-CL) mechanism is developed to learn the fine difference between the accurate answer and other candidates. Extensive experiments show that our model significantly outperforms a series of strong baselines on two cross-lingual MRC benchmark datasets.
We propose a new framework, Translation between Augmented Natural Languages (TANL), to solve many structured prediction language tasks, including joint entity and relation extraction, nested named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, and dialogue state tracking. Instead of tackling the problem by training task-specific discriminative classifiers, we frame it as a translation task between augmented natural languages, from which the task-relevant information can be easily extracted. Our approach can match or outperform task-specific models on all tasks, and in particular achieves new state-of-the-art results on joint entity and relation extraction (the CoNLL04, ADE, NYT, and ACE2005 datasets), relation classification (FewRel and TACRED), and semantic role labeling (CoNLL-2005 and CoNLL-2012). We accomplish this while using the same architecture and hyperparameters for all tasks, and even when training a single model to solve all tasks at the same time (multi-task learning). Finally, we show that our framework can also significantly improve performance in low-resource regimes, thanks to better use of label semantics.
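To make the augmented-natural-language formulation concrete, the snippet below shows an illustrative input/output pair for joint entity and relation extraction. The bracketed markup is a paraphrase of the style of annotation described in the paper, not a verbatim specification of its format.

```python
# Illustrative TANL-style example (approximate markup, for intuition only).
source = "Tolkien's epic novel The Lord of the Rings was published in 1954."
target = ("[ Tolkien | person ]'s epic novel "
          "[ The Lord of the Rings | book | author = Tolkien ] "
          "was published in 1954.")
# A seq2seq model is trained to translate `source` into `target`; entities and
# relations are then recovered by parsing the brackets out of the generated text.
```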
Open Relation Extraction (OpenRE) aims to discover novel relations from open domains. Previous OpenRE methods mainly suffer from two problems: (1) Insufficient capacity to discriminate between known and novel relations. When extending conventional test settings to a more general setting where test data might also come from seen classes, existing approaches have a significant performance decline. (2) Secondary labeling must be performed before practical application. Existing methods cannot label human-readable and meaningful types for novel relations, which is urgently required by the downstream tasks. To address these issues, we propose the Active Relation Discovery (ARD) framework, which utilizes relational outlier detection for discriminating known and novel relations and involves active learning for labeling novel relations. Extensive experiments on three real-world datasets show that ARD significantly outperforms previous state-of-the-art methods on both conventional and our proposed general OpenRE settings. The source code and datasets will be available for reproducibility.
Extracting structured information from HTML documents is a long-studied problem, with applications including knowledge base construction, faceted search, and personalized recommendation. Prior works rely on a few human-labeled web pages from each target website, or on human-labeled web pages from some seed websites, to train a transferable extraction model that generalizes to unseen target websites. Noisy content, low site-level consistency, and the lack of inter-annotator agreement make labeling web pages a time-consuming and expensive ordeal. We develop LEAST, a label-efficient self-training method for semi-structured web documents, to overcome these limitations. LEAST utilizes a few human-labeled pages to pseudo-annotate a large number of unlabeled web pages from the target vertical. It trains a transferable web extraction model on both human-labeled and pseudo-labeled samples using self-training. To mitigate error propagation due to noisy training samples, LEAST re-weights each training sample based on its estimated label accuracy and incorporates it into training. To the best of our knowledge, this is the first work to propose end-to-end training of transferable web extraction models utilizing only a few human-labeled pages. Experiments on a large-scale public dataset show that, using fewer than ten human-labeled pages from each seed website for training, a LEAST-trained model outperforms previous state-of-the-art models by more than 26 average F1 points on unseen websites, reducing the number of human-labeled pages needed to reach comparable performance by more than 10x.
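The re-weighting step can be sketched as a weighted cross-entropy over pseudo-labeled pages, where each sample's weight reflects its estimated label accuracy. How LEAST estimates that accuracy is not shown here, and the normalization scheme below is our own assumption.

```python
import torch
import torch.nn.functional as F

def reweighted_pseudo_label_loss(logits, pseudo_labels, estimated_accuracy):
    """Cross-entropy over pseudo-labeled samples, each weighted by its
    estimated label accuracy (a confidence-like score in [0, 1])."""
    per_sample = F.cross_entropy(logits, pseudo_labels, reduction="none")
    weights = estimated_accuracy / estimated_accuracy.sum().clamp(min=1e-12)
    return (weights * per_sample).sum()
```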
Relation extraction (RE), which has relied on structurally annotated corpora for model training, has been particularly challenging in low-resource scenarios and domains. Recent literature has tackled low-resource RE by self-supervised learning, where the solution involves pretraining the relation embedding by RE-based objective and finetuning on labeled data by classification-based objective. However, a critical challenge to this approach is the gap in objectives, which prevents the RE model from fully utilizing the knowledge in pretrained representations. In this paper, we aim at bridging the gap and propose to pretrain and finetune the RE model using consistent objectives of contrastive learning. Since in this kind of representation learning paradigm, one relation may easily form multiple clusters in the representation space, we further propose a multi-center contrastive loss that allows one relation to form multiple clusters to better align with pretraining. Experiments on two document-level RE datasets, BioRED and Re-DocRED, demonstrate the effectiveness of our method. Particularly, when using 1% end-task training data, our method outperforms PLM-based RE classifier by 10.5% and 5.8% on the two datasets, respectively.
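A minimal sketch of a multi-center contrastive loss is given below: each relation owns several learnable centers, an instance is attracted to the best-matching center of its own relation, and all centers act as the contrastive denominator, so one relation can occupy multiple clusters. The exact formulation in the paper may differ; the shapes and temperature here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def multi_center_contrastive_loss(embeddings, labels, centers, temperature=0.1):
    """embeddings: (batch, dim); labels: (batch,); centers: (num_relations, num_centers, dim)."""
    embeddings = F.normalize(embeddings, dim=-1)
    centers = F.normalize(centers, dim=-1)
    # Similarity of each instance to every center: (batch, num_relations, num_centers)
    sims = torch.einsum("bd,rcd->brc", embeddings, centers) / temperature
    # Positive score: best-matching center of the instance's own relation.
    pos = sims[torch.arange(len(labels)), labels].max(dim=-1).values
    # Denominator: log-sum-exp over all centers of all relations.
    denom = torch.logsumexp(sims.reshape(len(labels), -1), dim=-1)
    return (denom - pos).mean()
```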
Natural Language Inference (NLI), or Recognizing Textual Entailment (RTE), aims at predicting the relation between a pair of sentences (premise and hypothesis) as entailment, contradiction, or semantic independence. Although deep learning models have shown promising performance for NLI in recent years, they rely on large-scale, expensive human-annotated datasets. Semi-supervised learning (SSL) is a popular technique for reducing the reliance on human annotation by leveraging unlabeled data for training. However, despite its substantial success on single-sentence classification tasks, where the challenge in making use of unlabeled data is to assign "good enough" pseudo-labels, for NLI tasks the nature of unlabeled data is more complex: one of the sentences in the pair (usually the hypothesis) along with the class label are missing from the data and require human annotation, which makes SSL for NLI more challenging. In this paper, we propose a novel way to incorporate unlabeled data in SSL for NLI, where we use a conditional language model, BART, to generate the hypotheses for the unlabeled sentences (used as premises). Our experiments show that our SSL framework successfully exploits unlabeled data and substantially improves performance on four NLI datasets in low-resource settings. We release our code at: https://github.com/msadat3/SSL_for_NLI.
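A minimal sketch of the hypothesis-generation step with Hugging Face Transformers is shown below. It assumes a BART model that has already been fine-tuned to map a premise and a target class to a hypothesis; the "label: premise" input format and the checkpoint name are our assumptions, not the paper's exact conditioning scheme.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def generate_hypothesis(premise: str, label: str, max_length: int = 64) -> str:
    """Generate a hypothesis for an unlabeled premise, conditioned on a class label."""
    inputs = tokenizer(f"{label}: {premise}", return_tensors="pt", truncation=True)
    output_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=max_length)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# The resulting (premise, generated hypothesis, label) triples are added to the
# training pool for semi-supervised NLI.
```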
Two key obstacles in biomedical relation extraction (RE) are the scarcity of annotations and the prevalence of instances without explicitly pre-defined labels due to low annotation coverage. Existing approaches, which treat biomedical RE as a multi-class classification task, often result in poor generalization in low-resource settings and cannot make selective predictions on unknown cases, instead guessing from seen relations, which hinders their applicability. We present NBR, which converts biomedical RE into a natural language inference (NLI) formulation through indirect supervision. By converting relations to natural language hypotheses, NBR is capable of exploiting semantic cues to alleviate annotation scarcity. By incorporating a ranking-based loss that implicitly calibrates abstinent instances, NBR learns a clearer decision boundary and is instructed to abstain on uncertain instances. Extensive experiments on three widely-used biomedical RE benchmarks, namely ChemProt, DDI and GAD, verify the effectiveness of NBR in both full-set and low-resource regimes. Our analysis demonstrates that indirect supervision benefits biomedical RE even when a domain gap exists, and combining NLI knowledge with biomedical knowledge leads to the best performance gains.
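The ranking-style objective and the abstention rule can be sketched as follows: the entailment score of the gold relation's hypothesis is pushed above every other hypothesis by a margin, and at inference time the model abstains when even the best hypothesis scores below a threshold. The margin form and the threshold rule are illustrative assumptions, not NBR's exact calibration of abstinent instances.

```python
import torch
import torch.nn.functional as F

def nli_ranking_loss(scores, gold, margin=1.0):
    """scores: (batch, num_relations) entailment scores of the verbalized
    hypotheses; gold: (batch,) index of the annotated relation."""
    gold_scores = scores.gather(1, gold.unsqueeze(1))                   # (batch, 1)
    violations = F.relu(margin - gold_scores + scores)                  # (batch, num_relations)
    mask = torch.ones_like(scores).scatter(1, gold.unsqueeze(1), 0.0)   # ignore the gold column
    return (violations * mask).mean()

def predict_or_abstain(scores, threshold=0.0):
    """Abstain (return -1) when even the best hypothesis is not entailed confidently enough."""
    best_scores, best_rel = scores.max(dim=-1)
    return torch.where(best_scores > threshold, best_rel, torch.full_like(best_rel, -1))
```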