We present FREDo, a few-shot document-level relation extraction (FSDLRE) benchmark. As opposed to existing benchmarks which are built on sentence-level relation extraction corpora, we argue that document-level corpora provide more realism, particularly regarding none-of-the-above (NOTA) distributions. We therefore propose a set of FSDLRE tasks and construct a benchmark based on two existing supervised learning datasets, DocRED and SciERC. We adapt the state-of-the-art sentence-level method MNAV to the document level and further develop it for improved domain adaptation. We find FSDLRE to be a challenging setting with interesting new characteristics, such as the ability to sample NOTA instances from the support set. Data, code, and trained models are available online (https://github.com/nicpopovic/fredo).
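To make the NOTA-sampling property concrete, here is a minimal sketch of how an episode sampler could draw NOTA instances from the support documents themselves; the data layout and field names are our own assumptions, not FREDo's actual format:

```python
import random

def build_episode(support_docs, target_relations, n_nota=5):
    """Build a few-shot episode where NOTA candidates come from the
    support documents themselves: any entity pair whose gold label is
    not one of the episode's target relations."""
    support, nota_pool = [], []
    for doc in support_docs:
        for pair in doc["entity_pairs"]:
            if pair["relation"] in target_relations:
                support.append(pair)    # positive support instance
            else:
                nota_pool.append(pair)  # candidate NOTA instance
    # NOTA instances drawn from the same documents match the surface
    # distribution of the positives far better than random text would.
    nota = random.sample(nota_pool, min(n_nota, len(nota_pool)))
    return support, nota
```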
Event argument extraction (EAE) has been well studied at the sentence level but remains under-explored at the document level. In this paper, we study capturing event arguments that are actually spread across the sentences of a document. Prior work largely assumes full access to rich document-level supervision, ignoring the fact that argument supervision within documents is limited. To fill this gap, we present a few-shot document-level event argument extraction benchmark based on DocEE, the largest document-level event extraction dataset. We first define the new problem and reconstruct the corpus with a novel N-Way-D-Doc sampling strategy instead of the traditional N-Way-K-Shot one. We then adapt advanced document-level neural models to the few-shot setting to provide baseline results under in-domain and cross-domain settings. Since argument extraction depends on context from multiple sentences and the learning process is limited to very few examples, we find the task to be highly challenging, with substantially low performance. Considering that few-shot DocAE is closely related to practical use under low-resource regimes, we hope this benchmark encourages more research in this direction. Our data and code will be available online.
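A rough sketch of how N-Way-D-Doc sampling differs from N-Way-K-Shot; the corpus layout and field names are our own illustrative assumptions:

```python
import random

def n_way_d_doc_episode(corpus, event_types, n_way=5, d_doc=2):
    """N-Way-D-Doc sampling: pick N event types, then D whole documents
    per type. All argument annotations inside a sampled document enter
    the support set together, so the shot count per class varies,
    unlike N-Way-K-Shot, which samples exactly K isolated instances."""
    types = random.sample(event_types, n_way)
    support = {}
    for t in types:
        docs = [doc for doc in corpus if t in doc["event_types"]]
        support[t] = random.sample(docs, min(d_doc, len(docs)))
    return support
```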
Metric-based meta-learning is one of the de facto standards in few-shot learning. It consists of representation learning and metric calculation designs. Previous works construct class representations in different ways, varying from mean output embeddings to covariances and distributions. However, using point embeddings in space lacks expressivity and cannot capture class information robustly, while complex statistical modeling poses difficulty for metric design. In this work, we use tensor fields ("areas") to model classes from a geometrical perspective for few-shot learning. We present a simple and effective method, dubbed hypersphere prototypes (HyperProto), where class information is represented by hyperspheres of dynamic size with two sets of learnable parameters: the hypersphere's center and radius. Extending from points to areas, hyperspheres are much more expressive than embeddings. Moreover, it is more convenient to perform metric-based classification with hypersphere prototypes than with statistical modeling, as we only need to calculate the distance from a data point to the surface of the hypersphere. Following this idea, we also develop two variants of prototypes under other measurements. Extensive experiments and analysis on few-shot learning tasks across NLP and CV, and comparison with 20+ competitive baselines, demonstrate the effectiveness of our approach.
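The metric itself fits in a few lines. A minimal sketch of classification with hypersphere prototypes, following the description above (the training of the center and radius parameters is omitted):

```python
import numpy as np

def hypersphere_distance(x, centers, radii):
    """Signed distance from query embedding x to the *surface* of each
    class hypersphere: ||x - c|| - r (negative means x lies inside).
    The predicted class is the sphere whose surface is closest."""
    return np.linalg.norm(centers - x, axis=1) - radii

# Toy usage: 3 classes in a 4-dimensional embedding space.
rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 4))    # learnable centers
radii = np.abs(rng.normal(size=3))   # learnable radii, kept positive
query = rng.normal(size=4)
pred = int(np.argmin(hypersphere_distance(query, centers, radii)))
```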
We present a joint model for entity-level relation extraction from documents. In sharp contrast to other approaches - which focus on local intra-sentence mention pairs and therefore require mention-level annotations - our model operates at the entity level. To do so, it follows a multi-task approach that builds on coreference resolution and gathers relevant signals via multi-level representations combining global entity and local mention information. We achieve state-of-the-art relation extraction results on the DocRED dataset and report the first entity-level end-to-end relation extraction results for future reference. Finally, our experimental results show that a joint approach is on par with task-specific learning, while being more efficient due to shared parameters and training steps.
Relation extraction is a fundamental problem in natural language processing. Most existing models are designed for relation extraction in the general domain, yet their performance on specific domains (e.g., biomedicine) remains unclear. To fill this gap, this paper presents an empirical study of relation extraction on biomedical research articles. Specifically, we consider both sentence-level and document-level relation extraction, and run several state-of-the-art methods on a number of benchmark datasets. Our results show that (1) current document-level relation extraction methods have strong generalization ability, and (2) existing methods require a large amount of labeled data for model fine-tuning in biomedicine. Our observations may inspire the community to develop more effective models for biomedical relation extraction.
Document-level relation extraction (DRE) aims to identify relations between two entities, where an entity may correspond to multiple mentions that span sentence boundaries. Few previous studies have investigated mention integration, which can be problematic because coreferential mentions do not contribute equally to a specific relation. Moreover, prior efforts mainly focus on entity-level reasoning rather than capturing the global interactions between entity pairs. In this paper, we propose two novel techniques, context-guided mention integration and interaction reasoning (CGM2IR), to improve DRE. Instead of simply applying average pooling, the context is used to guide the integration of coreferential mentions in a weighted-sum manner. In addition, the interaction reasoning executes an iterative algorithm on an entity-pair graph to model the interdependency of relations. We evaluate our CGM2IR model on three widely used benchmark datasets, namely DocRED, CDR, and GDA. Experimental results show that our model outperforms previous state-of-the-art models.
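A sketch of what context-guided mention integration could look like, with our own naming; the paper's actual scoring function may differ:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def context_guided_pool(mention_embs, context_emb):
    """Integrate coreferential mention embeddings into one entity
    embedding. Each mention is scored against the (pair-specific)
    context, so mentions irrelevant to this relation get low weight,
    unlike uniform average pooling."""
    scores = mention_embs @ context_emb   # (num_mentions,)
    weights = softmax(scores)
    return weights @ mention_embs         # weighted sum, shape (dim,)
```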
In recent years, pre-trained language models have revolutionized the NLP world, achieving state-of-the-art performance on various downstream tasks. However, in many cases these models do not perform well when labeled data is scarce and the model is expected to perform in a zero- or few-shot setting. Recently, several works have shown that a second phase of pre-training, better aligned with the downstream task, can lead to improved results, especially in scarce-data settings. Here, we propose to leverage sentiment-carrying discourse markers to generate large-scale weakly-labeled data, which in turn can be used to adapt language models for sentiment analysis. Extensive experimental results show the value of our approach on various benchmark datasets, including the finance domain. Code, models, and data are available at https://github.com/ibm/tslm-discourse-markers.
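A toy rendering of the weak-labeling recipe; the marker lists and the labeling rule here are illustrative guesses at the general idea, not the paper's exact heuristics:

```python
POSITIVE_MARKERS = {"fortunately", "thankfully", "luckily"}
NEGATIVE_MARKERS = {"unfortunately", "sadly", "regrettably"}

def weak_label(sentence):
    """Assign a weak sentiment label when the sentence opens with a
    sentiment-carrying discourse marker; the marker itself is removed
    so the model cannot shortcut on it."""
    first, _, rest = sentence.strip().partition(" ")
    marker = first.lower().rstrip(",")
    if marker in POSITIVE_MARKERS:
        return rest, "positive"
    if marker in NEGATIVE_MARKERS:
        return rest, "negative"
    return None

print(weak_label("Fortunately, the loan was approved."))
# ('the loan was approved.', 'positive')
```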
It has been experimentally demonstrated that humans are able to learn in a manner that allows them to make predictions on categories for which they have not seen any examples (Malaviya et al., 2022). Sucholutsky and Schonlau (2020) have recently presented a machine learning approach that aims to do the same. They utilise synthetically generated data and demonstrate that it is possible to achieve sub-linear scaling and develop models that can learn to recognise N classes from M training samples where M is less than N - aka less-than-one shot learning. Their method was, however, defined for univariate or simple multivariate data (Sucholutsky et al., 2021). We extend it to work on large, high-dimensional and real-world datasets and empirically validate it in this new and challenging setting. We apply this method to learn previously unseen NLP tasks from very few examples (4, 8 or 16). We first generate compact, sophisticated less-than-one shot representations called soft-label prototypes which are fitted on training data, capturing the distribution of different classes across the input domain space. We then use a modified k-Nearest Neighbours classifier to demonstrate that soft-label prototypes can classify data competitively, even outperforming much more computationally complex few-shot learning methods.
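A minimal sketch of the classification step, assuming a distance-weighted variant of kNN over soft-label prototypes; the authors' modified classifier may weight neighbours differently:

```python
import numpy as np

def classify(query, prototypes, soft_labels, k=3, eps=1e-8):
    """prototypes: (P, dim) locations fitted on training data;
    soft_labels: (P, C) class distributions attached to each prototype.
    Combine the soft labels of the k nearest prototypes, weighted by
    inverse distance, and return the argmax class."""
    d = np.linalg.norm(prototypes - query, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + eps)
    scores = w @ soft_labels[nearest]   # (C,)
    return int(np.argmax(scores))
```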
Relation extraction (RE), which has relied on structurally annotated corpora for model training, has been particularly challenging in low-resource scenarios and domains. Recent literature has tackled low-resource RE by self-supervised learning, where the solution involves pretraining the relation embedding by RE-based objective and finetuning on labeled data by classification-based objective. However, a critical challenge to this approach is the gap in objectives, which prevents the RE model from fully utilizing the knowledge in pretrained representations. In this paper, we aim at bridging the gap and propose to pretrain and finetune the RE model using consistent objectives of contrastive learning. Since in this kind of representation learning paradigm, one relation may easily form multiple clusters in the representation space, we further propose a multi-center contrastive loss that allows one relation to form multiple clusters to better align with pretraining. Experiments on two document-level RE datasets, BioRED and Re-DocRED, demonstrate the effectiveness of our method. Particularly, when using 1% end-task training data, our method outperforms PLM-based RE classifier by 10.5% and 5.8% on the two datasets, respectively.
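A sketch of a multi-center contrastive loss in the spirit described above (our own formulation; the paper's exact loss may differ): each embedding is pulled toward the nearest center of its own relation and pushed away from all other centers.

```python
import torch
import torch.nn.functional as F

def multi_center_contrastive_loss(z, labels, centers, tau=0.1):
    """z: (B, dim) relation embeddings; labels: (B,) relation ids;
    centers: (R, K, dim), i.e. K learnable centers per relation.
    Using the *nearest* own center as the positive lets one relation
    occupy several clusters in the representation space."""
    z = F.normalize(z, dim=-1)
    c = F.normalize(centers, dim=-1)
    sims = torch.einsum("bd,rkd->brk", z, c) / tau          # (B, R, K)
    pos = sims[torch.arange(len(z)), labels].max(-1).values  # nearest own center
    denom = torch.logsumexp(sims.flatten(1), dim=-1)         # over all centers
    return (denom - pos).mean()
```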
Relation extraction (RE) is a fundamental task in natural language processing. RE seeks to turn raw, unstructured text into structured knowledge by identifying relational information between entity pairs in text. RE has many uses, such as knowledge-graph completion, text summarization, question answering, and search queries. The history of RE methods can be divided into four phases: pattern-based RE, statistics-based RE, neural RE, and large-language-model RE. This survey begins with an overview of some exemplary works from the early phases of RE, highlighting their limitations and shortcomings to contextualize progress. Next, we review popular benchmarks and critically examine the metrics used to evaluate RE performance. We then discuss distant supervision, the paradigm that shaped the development of modern RE methods. Finally, we review recent work focusing on denoising and pre-training methods.
Practices in the built environment have become more digitalized with the rapid development of modern design and construction technologies. However, the requirement of practitioners or scholars to gather complicated professional knowledge in the built environment has not been satisfied yet. In this paper, more than 80,000 paper abstracts in the built environment field were obtained to build a knowledge graph, a knowledge base storing entities and their connective relations in a graph-structured data model. To ensure the retrieval accuracy of the entities and relations in the knowledge graph, two well-annotated datasets have been created, containing 2,000 instances and 1,450 instances each in 29 relations for the named entity recognition task and relation extraction task respectively. These two tasks were solved by two BERT-based models trained on the proposed dataset. Both models attained an accuracy above 85% on these two tasks. More than 200,000 high-quality relations and entities were obtained using these models to extract all abstract data. Finally, this knowledge graph is presented as a self-developed visualization system to reveal relations between various entities in the domain. Both the source code and the annotated dataset can be found here: https://github.com/HKUST-KnowComp/BEKG.
We propose a zero-shot learning relation classification (ZSLRC) framework that improves on state-of-the-art frameworks through its ability to recognize novel relations not present in the training data. Zero-shot learning approaches mimic the way humans learn and recognize new concepts without prior knowledge. To this end, ZSLRC uses modified advanced prototypical networks to exploit weighted side (auxiliary) information. ZSLRC's side information is constructed from keywords, hypernyms of named entities, and labels and their synonyms. ZSLRC also includes an automatic hypernym extraction framework that acquires hypernyms of various named entities directly from the web. ZSLRC improves on state-of-the-art few-shot relation classification methods that rely on labeled training data, and is therefore applicable even in real-world scenarios where some relations have no corresponding labeled training examples. We show results from extensive experiments on two public datasets (NYT and FewRel), demonstrating that ZSLRC significantly outperforms state-of-the-art methods on supervised learning, few-shot learning, and zero-shot learning tasks. Our experimental results also demonstrate the effectiveness and robustness of the proposed model.
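A sketch of how weighted side information might enter the prototype; the blending scheme and fixed coefficient here are our own illustration of the idea:

```python
import numpy as np

def side_info_prototype(support_embs, side_embs, side_weights, alpha=0.5):
    """Blend the mean of support-instance embeddings with a weighted
    average of side-information embeddings (keywords, entity hypernyms,
    label synonyms). With no support instances (zero-shot), the
    prototype comes from the side information alone."""
    side = (side_weights / side_weights.sum()) @ side_embs
    if len(support_embs) == 0:
        return side
    return alpha * support_embs.mean(axis=0) + (1 - alpha) * side
```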
Few-shot relation extraction (FSRE) aims at recognizing unseen relations by learning with merely a handful of annotated instances. To generalize to new relations more effectively, this paper proposes a novel pipeline for the FSRE task based on queRy-information guided Attention and adaptive Prototype fuSion, namely RAPS. Specifically, RAPS first derives the relation prototype via the query-information guided attention module, which exploits rich interactive information between the support instances and the query instances to obtain more accurate initial prototype representations. RAPS then combines the derived initial prototype with the relation information via the adaptive prototype fusion mechanism to obtain the integrated prototype for both training and prediction. Experiments on the benchmark dataset FewRel 1.0 show a significant improvement of our method over state-of-the-art methods.
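A minimal sketch of the two modules, with a fixed fusion coefficient standing in for RAPS's learned adaptive fusion:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def raps_prototype(support_embs, query_embs, relation_emb, beta=0.5):
    """Query-information guided attention: weight each support instance
    by its similarity to the mean query embedding, then fuse the
    attended initial prototype with the relation-description embedding."""
    q = query_embs.mean(axis=0)
    attn = softmax(support_embs @ q)        # attention over support set
    init_proto = attn @ support_embs        # query-guided prototype
    return beta * init_proto + (1 - beta) * relation_emb
```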
We introduce ReFinED, an efficient end-to-end entity linking model that uses fine-grained entity types and entity descriptions to perform linking. The model performs mention detection, fine-grained entity typing, and entity disambiguation for all mentions in a document in a single forward pass, making it more than 60 times faster than existing approaches. ReFinED also surpasses state-of-the-art performance on standard entity linking datasets by an average of 3.7 F1. The model is capable of generalizing to large-scale knowledge bases such as Wikidata (which has 15 times more entities than Wikipedia) and of zero-shot entity linking. The combination of speed, accuracy, and scale makes ReFinED an effective and cost-efficient system for extracting entities from web-scale datasets, for which the model has been successfully deployed. Our code and pre-trained models are available at https://github.com/alexa/refined
In recent years, there has been a surge of generation-based information extraction work, which allows a more direct use of pre-trained language models and efficiently captures output dependencies. However, previous generative methods using lexical representations do not naturally fit document-level relation extraction (DocRE), where there are multiple entities and relational facts. In this paper, we investigate the root cause of the underwhelming performance of existing generative DocRE models and discover that the culprit is the inadequacy of the training paradigm rather than the capacities of the models. We propose to generate a symbolic and ordered sequence from the relation matrix, which is deterministic and easier for the model to learn. Moreover, we design a parallel row generation method to process overlong target sequences. In addition, we introduce several negative sampling strategies to improve performance with balanced signals. Experimental results on four datasets show that our proposed method improves the performance of generative DocRE models. We have released our code at https://github.com/ayyyq/DORE.
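A toy rendering of the relation-matrix linearization; the symbol vocabulary here is our own, not necessarily DORE's:

```python
def linearize_relation_matrix(entities, matrix, null="NONE"):
    """Turn an entity-pair relation matrix into a symbolic, ordered
    target sequence: rows and columns follow entity order, so the
    output is deterministic and the model never has to choose among
    equivalent orderings."""
    tokens = []
    for i, head in enumerate(entities):
        for j, tail in enumerate(entities):
            rel = matrix[i][j]
            if i != j and rel != null:
                tokens.append(f"<{head}> <{rel}> <{tail}>")
    return " ".join(tokens)

ents = ["Marie_Curie", "Poland", "physics"]
m = [["NONE", "born_in", "field"],
     ["NONE", "NONE", "NONE"],
     ["NONE", "NONE", "NONE"]]
print(linearize_relation_matrix(ents, m))
# <Marie_Curie> <born_in> <Poland> <Marie_Curie> <field> <physics>
```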
Large language models have shown impressive few-shot results on a wide range of tasks. However, when knowledge is key to such results, as in tasks like question answering and fact checking, massive parameter counts seem to be needed to store that knowledge. Retrieval-augmented models are known to excel at knowledge-intensive tasks without requiring as many parameters, but it is unclear whether they work in few-shot settings. In this work, we present Atlas, a carefully designed and pre-trained retrieval-augmented language model able to learn knowledge-intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT, and Natural Questions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy on Natural Questions using only 64 examples, outperforming a 540B-parameter model despite having 50x fewer parameters.
Relation extraction (RE) is a sub-discipline of information extraction (IE) which focuses on the prediction of a relational predicate from a natural-language input unit (such as a sentence, a clause, or even a short paragraph consisting of multiple sentences and/or clauses). Together with named-entity recognition (NER) and disambiguation (NED), RE forms the basis for many advanced IE tasks such as knowledge-base (KB) population and verification. In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE by encoding structured information about the sentences' principal units, such as subjects, objects, verbal phrases, and adverbials, into various forms of vectorized (and hence unstructured) representations of the sentences. Our main conjecture is that the decomposition of long and possibly convoluted sentences into multiple smaller clauses via OpenIE even helps to fine-tune context-sensitive language models such as BERT (and its plethora of variants) for RE. Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models compared to existing RE approaches. Our best results reach 92% and 71% of F1 score for KnowledgeNet and FewRel, respectively, proving the effectiveness of our approach on competitive benchmarks.
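One simple way to encode OpenIE output into the model input is to append the extracted clauses to the sentence; this is a sketch of the general idea, not necessarily the exact enrichment used in the paper:

```python
def enrich_with_openie(sentence, triples):
    """Append OpenIE clause decompositions to the raw sentence before
    feeding it to a BERT-style model, making the subject, predicate,
    and object of each clause explicit to the encoder."""
    clauses = " [SEP] ".join(f"{s} ; {p} ; {o}" for s, p, o in triples)
    return f"{sentence} [SEP] {clauses}"

print(enrich_with_openie(
    "Bell, based in LA, makes electronic and computer products.",
    [("Bell", "is based in", "LA"),
     ("Bell", "makes", "electronic and computer products")]))
```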
We study the fact-checking problem, which aims to identify the veracity of a given claim. Specifically, we focus on the task of Fact Extraction and VERification (FEVER) and its accompanying dataset. The task consists of retrieving relevant documents (and sentences) from Wikipedia and verifying whether the information in them supports or refutes a given claim. This task is essential and can serve as a building block for applications such as fake news detection and medical claim verification. In this paper, we aim at a better understanding of the challenges of the task by presenting the literature in a structured and comprehensive way. We describe the proposed methods by analyzing the technical perspectives of the different approaches and discussing the performance results on the FEVER dataset, which is the most well-studied and formally structured dataset for the fact extraction and verification task. We also conduct the largest experimental study to date on identifying beneficial loss functions for the sentence retrieval component. Our analysis indicates that sampling negative sentences is important for improving performance and decreasing computational complexity. Finally, we describe open issues and future challenges, and we motivate future research on the task.
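A sketch of negative-sentence sampling for the retrieval component, rendered as a generic pairwise-ranking setup; the study itself compares several concrete loss functions:

```python
import random

def sample_training_pairs(claim, evidence_sents, doc_sents, n_neg=4):
    """For the sentence-retrieval component, pair each gold evidence
    sentence with a handful of negatives sampled from the same
    retrieved documents, instead of ranking against every sentence;
    fewer pairs per claim means lower training cost."""
    negatives = [s for s in doc_sents if s not in evidence_sents]
    pairs = []
    for pos in evidence_sents:
        for neg in random.sample(negatives, min(n_neg, len(negatives))):
            pairs.append((claim, pos, neg))  # e.g. for a ranking loss
    return pairs
```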
We propose a new framework, Translation between Augmented Natural Languages (TANL), for solving many structured prediction language tasks, including joint entity and relation extraction, nested named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, and dialogue state tracking. Instead of tackling these problems by training task-specific discriminative classifiers, we frame them as translation tasks between augmented natural languages, from which the task-relevant information can be easily extracted. Our approach can match or outperform task-specific models on all tasks, and in particular achieves new state-of-the-art results on joint entity and relation extraction (CoNLL04, ADE, NYT, and ACE2005 datasets), relation classification (FewRel and TACRED), and semantic role labeling (CoNLL-2005 and CoNLL-2012). We accomplish this while using the same architecture and hyperparameters for all tasks, and even when training a single model to solve all tasks at the same time (multi-task learning). Finally, we show that our framework can also significantly improve performance in low-resource regimes, thanks to better use of label semantics.
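As an illustration, the augmented-natural-language format for joint entity and relation extraction looks roughly like the following (the example mirrors the paper's style; the decoding snippet is our own sketch):

```python
import re

# Input sentence and a TANL-style augmented output for joint entity
# and relation extraction: entity spans are bracketed with their type
# and, optionally, a relation pointing at another entity.
source = "Tolkien's epic novel The Lord of the Rings was published in 1954."
target = ("[ Tolkien | person ]'s epic novel "
          "[ The Lord of the Rings | book | author = Tolkien ] "
          "was published in 1954.")

# Decoding the structured output is plain pattern matching.
for span, anno in re.findall(r"\[ (.+?) \| (.+?) \]", target):
    parts = [p.strip() for p in anno.split("|")]
    print(span, "->", parts)  # entity type, plus optional 'relation = tail'
```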
To effectively train accurate Relation Extraction models, sufficient and properly labeled data is required. Adequately labeled data is difficult to obtain, and annotating it is a tricky undertaking. Previous works have shown that either accuracy has to be sacrificed or, if done accurately, the task is extremely time-consuming. We propose an approach for quickly producing high-quality datasets for the task of Relation Extraction. Neural models trained on the created datasets achieve very good results and generalize well to other datasets. In our study, we were able to annotate 10,022 sentences for 19 relations in a reasonable amount of time, and trained a commonly used baseline model for each relation.