Entity linking (EL) is the process of linking entity mentions appearing in text with their corresponding entities in a knowledge base. The EL features of entities (e.g., prior probability, relatedness score, and entity embedding) are usually estimated based on Wikipedia. However, for newly emerging entities (EEs) that have just been discovered in news, they may still not be included in Wikipedia, so the EL features these EEs require cannot be obtained from Wikipedia, and EL models will always fail to link ambiguous mentions with such EEs correctly because their EL features are missing. To deal with this problem, in this paper we focus on the new task of learning EL features for emerging entities in a general way. We propose a novel approach called STAMO that learns high-quality EL features for EEs automatically, requiring just a small number of labeled documents for each EE collected from the Web, since it can further exploit the knowledge hidden in the unlabeled data. STAMO is mainly based on self-training, which lets it be flexibly integrated with any EL feature or EL model, but also makes it easily suffer from the error reinforcement problem caused by mislabeled data. Instead of adopting common self-training strategies that try to throw the mislabeled data away, we regard self-training as a multiple-optimization process with respect to the EL features of EEs, and propose intra-slot and inter-slot optimizations to alleviate the error reinforcement problem implicitly. We construct two EL datasets involving selected EEs to evaluate the quality of the EL features learned for EEs, and the experimental results show that our approach significantly outperforms other baseline methods for learning EL features.
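As a rough illustration of the self-training backbone that STAMO builds on (not the paper's intra-slot/inter-slot optimizations), the sketch below bootstraps an EE's feature vector from a few labeled documents plus confident pseudo-labels; the hashed bag-of-words encoder and the confidence threshold `tau` are illustrative assumptions:

```python
# Minimal self-training sketch for bootstrapping an emerging entity's (EE)
# feature vector. All helper names and hyperparameters are illustrative
# assumptions, not the STAMO implementation.
import numpy as np

def embed(doc: str) -> np.ndarray:
    # Stand-in document encoder: hashed bag-of-words, just for the sketch.
    v = np.zeros(64)
    for tok in doc.lower().split():
        v[hash(tok) % 64] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def self_train(labeled_docs, unlabeled_docs, init_feature, n_rounds=5, tau=0.8):
    """Iteratively refine an EE's feature vector using confident pseudo-labels."""
    feature = init_feature.copy()
    for _ in range(n_rounds):
        # 1) Score unlabeled documents against the current EE feature.
        scored = [(d, cosine(embed(d), feature)) for d in unlabeled_docs]
        # 2) Keep only confident pseudo-labels (threshold tau).
        confident = [d for d, s in scored if s >= tau]
        # 3) Re-estimate the feature from labeled + confident pseudo-labeled docs.
        feature = np.mean([embed(d) for d in labeled_docs + confident], axis=0)
    return feature

docs_lab = ["acme robotics launches new robot", "acme robotics files for ipo"]
docs_unl = ["the robot maker acme robotics expands", "unrelated stock news"]
feat = self_train(docs_lab, docs_unl, init_feature=embed(docs_lab[0]))
```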
Entity linking aims to link ambiguous mentions to their corresponding entities in a knowledge base, which is significant for various downstream applications, e.g., knowledge base completion, question answering, and information extraction. While great efforts have been devoted to this task, most of these studies follow the assumption that large-scale labeled data is available. However, when the labeled data is insufficient for specific domains due to labor-intensive annotation work, the performance of existing algorithms suffers an intolerable decline. In this paper, we endeavor to solve the problem of few-shot entity linking, which only requires a minimal amount of labeled data and is more practical in real situations. Specifically, we first propose a novel weak supervision strategy to generate non-trivial synthetic entity pairs based on mention rewriting. Since the quality of the synthetic data has a critical impact on effective model training, we further design a meta-learning mechanism to automatically assign different weights to each synthetic entity pair. In this way, we can deeply exploit rich and valuable semantic information to derive a well-trained entity linking model under the few-shot setting. Experiments on real-world datasets show that the proposed method can extensively improve the state-of-the-art few-shot entity linking model and achieve impressive performance when only a small amount of labeled data is available. Moreover, we also demonstrate the model's outstanding transferability.
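A toy sketch of the mention-rewriting idea behind the weak supervision strategy: synthetic (context, mention, entity) pairs are generated by swapping an entity's surface form for one of its aliases. The alias table and sentence are fabricated, and the paper's rewriting and meta-learning weighting are considerably more elaborate:

```python
# Toy weak supervision via mention rewriting: create synthetic training
# pairs by replacing an entity's surface form with one of its aliases.
# The alias table is a fabricated stand-in for a KB alias dictionary.
ALIASES = {
    "Q90": ["Paris", "the French capital"],
    "Q84": ["London", "the British capital"],
}

def rewrite_pairs(sentence: str, surface: str, entity_id: str):
    """Yield synthetic (rewritten sentence, new mention, entity) pairs."""
    pairs = []
    for alias in ALIASES.get(entity_id, []):
        if alias != surface and surface in sentence:
            pairs.append((sentence.replace(surface, alias), alias, entity_id))
    return pairs

print(rewrite_pairs("She moved to Paris in 2019.", "Paris", "Q90"))
# [('She moved to the French capital in 2019.', 'the French capital', 'Q90')]
```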
Wikidata is a frequently updated, community-driven, and multilingual knowledge graph. Hence, Wikidata is an attractive basis for entity linking, which is evident from the recent increase in published papers. This survey focuses on four subjects: (1) Which Wikidata entity linking datasets exist, how widely are they used, and how were they constructed? (2) Do the characteristics of Wikidata matter for the design of entity linking datasets, and if so, how? (3) How do current entity linking approaches exploit the specific characteristics of Wikidata? (4) Which Wikidata characteristics are unexploited by existing entity linking approaches? This survey reveals that current Wikidata-specific entity linking datasets do not differ in their annotation scheme from schemes used for other knowledge graphs. Thus, the potential for multilingual and time-dependent datasets, which would be a natural fit for Wikidata, has not been realized. Furthermore, we show that most entity linking approaches use Wikidata in the same way as any other knowledge graph, missing the chance to leverage Wikidata-specific characteristics to increase quality. Almost all approaches employ specific properties such as labels, and sometimes descriptions, but ignore characteristics such as the hyper-relational structure. Hence, there is still room for improvement, for example by including hyper-relational graph embeddings or type information. Many approaches also include information from Wikipedia, which is easily combinable with Wikidata and provides valuable textual information that Wikidata lacks.
Open information extraction (OIE) methods extract plenty of OIE triples <noun phrase, relation phrase, noun phrase> from unstructured text, and these triples compose large open knowledge bases (OKBs). The noun phrases and relation phrases in such OKBs are not canonicalized, which leads to scattered and redundant facts. Two views of knowledge (i.e., a fact view based on the fact triple and a context view based on the fact triple's source context) provide complementary information that is vital to the task of OKB canonicalization, which clusters synonymous noun phrases and relation phrases into the same group and assigns them unique identifiers. However, these two views of knowledge have so far been leveraged in isolation by existing works. In this paper, we propose CMVC, a novel unsupervised framework that leverages these two views of knowledge jointly to canonicalize OKBs without the need for manually annotated labels. To achieve this goal, we propose a multi-view CH K-means clustering algorithm that mutually reinforces the clustering of the view-specific embeddings learned from each view by taking their different clustering qualities into account. To further enhance the canonicalization performance, we propose a training data optimization strategy within each particular view to refine the learned view-specific embeddings in an iterative manner. In addition, we propose a Log-Jump algorithm that predicts the optimal number of clusters in a data-driven way without requiring any labels. We demonstrate the superiority of our framework through extensive experiments on multiple real-world OKB datasets against state-of-the-art methods.
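To convey the multi-view intuition (not the actual CH K-means algorithm), the following sketch clusters each view separately, uses a quality score to pick the more reliable view, and lets it re-seed the other view's clustering; the random embeddings and the silhouette-score quality proxy are stand-in assumptions:

```python
# Rough sketch of mutual reinforcement across two views: cluster each view,
# then let the higher-quality view's assignments seed the weaker view's
# k-means. Embeddings, the quality proxy, and the reconciliation rule are
# illustrative assumptions, not the paper's CH K-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
fact_view = rng.normal(size=(200, 16))     # embeddings from fact triples
context_view = rng.normal(size=(200, 16))  # embeddings from source contexts

def cluster(view, k):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(view)
    return km.labels_, silhouette_score(view, km.labels_)

k = 8
labels_f, q_f = cluster(fact_view, k)
labels_c, q_c = cluster(context_view, k)

# Seed the weaker view's k-means with centroids induced by the stronger view.
strong, weak = (labels_f, context_view) if q_f >= q_c else (labels_c, fact_view)
seeds = np.stack([weak[strong == i].mean(axis=0) for i in range(k)])
labels_joint = KMeans(n_clusters=k, init=seeds, n_init=1).fit_predict(weak)
```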
Information Extraction (IE) aims to extract structured information from heterogeneous sources. IE from natural language texts includes sub-tasks such as Named Entity Recognition (NER), Relation Extraction (RE), and Event Extraction (EE). Most IE systems require a comprehensive understanding of sentence structure, implied semantics, and domain knowledge to perform well; thus, IE tasks always need adequate external resources and annotations. However, it takes time and effort to obtain more human annotations. Low-Resource Information Extraction (LRIE) strives to use unsupervised data, reducing the required resources and human annotation. In practice, existing systems either utilize self-training schemes to generate pseudo labels, which causes the gradual drift problem, or leverage consistency regularization methods, which inevitably possess confirmation bias. To alleviate confirmation bias due to the lack of feedback loops in existing LRIE learning paradigms, we develop a Gradient Imitation Reinforcement Learning (GIRL) method that encourages pseudo-labeled data to imitate the gradient descent direction on labeled data, which can force pseudo-labeled data to achieve optimization capabilities similar to those of labeled data. Based on how well the pseudo-labeled data imitates the instructive gradient descent direction obtained from labeled data, we design a reward to quantify the imitation process and bootstrap the optimization capability of pseudo-labeled data through trial and error. Beyond the learning paradigm, GIRL is not limited to specific sub-tasks, and we leverage GIRL to solve all IE sub-tasks (named entity recognition, relation extraction, and event extraction) in low-resource settings (semi-supervised IE and few-shot IE).
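A minimal sketch of the gradient-imitation reward on a toy logistic-regression objective: the reward for a pseudo-labeled batch is the cosine similarity between its gradient and the labeled batch's gradient. The model, loss, and data below are simplified assumptions, not GIRL's actual setup:

```python
# Gradient-imitation reward on a toy model: higher reward means the
# pseudo-labeled batch pushes the parameters in the same direction as
# the labeled batch. Everything here is a simplified stand-in.
import numpy as np

def grad_logreg(w, X, y):
    """Gradient of mean binary cross-entropy for logistic regression."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def imitation_reward(w, X_lab, y_lab, X_pse, y_pse):
    g_lab = grad_logreg(w, X_lab, y_lab)   # instructive direction
    g_pse = grad_logreg(w, X_pse, y_pse)   # pseudo-labeled direction
    return float(g_lab @ g_pse /
                 (np.linalg.norm(g_lab) * np.linalg.norm(g_pse) + 1e-9))

rng = np.random.default_rng(1)
w = rng.normal(size=8)
X_lab, y_lab = rng.normal(size=(32, 8)), rng.integers(0, 2, 32)
X_pse, y_pse = rng.normal(size=(64, 8)), rng.integers(0, 2, 64)
print(imitation_reward(w, X_lab, y_lab, X_pse, y_pse))
```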
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
Named entity linking (NEL) in news is a challenging endeavour due to the frequency of unseen and emerging entities, which necessitates the use of unsupervised or zero-shot methods. However, such methods tend to come with caveats, such as not integrating a knowledge base (e.g., Wikidata) suitable for emerging entities, a lack of scalability, and poor interpretability. Here, we consider person disambiguation in Quotebank, a massive corpus of speaker-attributed quotations from the news, and investigate the suitability of intuitive, lightweight, and scalable heuristics for NEL in web-scale corpora. Our best performing heuristic disambiguates 94% and 63% of the mentions on Quotebank and the AIDA-CoNLL benchmark, respectively. Additionally, the proposed heuristics compare favourably to the state-of-the-art unsupervised and zero-shot methods, Eigenthemes and mGENRE, respectively, thereby serving as strong baselines for unsupervised and zero-shot entity linking.
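For intuition, a lightweight heuristic of the kind studied here might simply pick the most prominent candidate sharing the mention's surface form; in the sketch below, the tiny candidate table and the sitelink-count popularity proxy are illustrative assumptions, not the paper's exact heuristics:

```python
# Toy popularity heuristic for person disambiguation: among candidates
# matching the surface form, pick the one with the most Wikidata
# sitelinks. The candidate table is illustrative.
CANDIDATES = {
    "Michael Jordan": [
        {"qid": "Q41421", "sitelinks": 180},   # the basketball player
        {"qid": "Q3308285", "sitelinks": 40},  # the ML researcher
    ],
}

def link_by_popularity(mention: str):
    cands = CANDIDATES.get(mention, [])
    return max(cands, key=lambda c: c["sitelinks"])["qid"] if cands else None

print(link_by_popularity("Michael Jordan"))  # Q41421
```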
Modern entity linking (EL) systems entrench popularity bias, yet there is no dataset focusing on tail and emerging entities in languages other than English. We present Hansel, a new benchmark in Chinese that fills the vacancy of non-English few-shot and zero-shot EL challenges. Hansel's test set is human-annotated and reviewed, and was created with a novel method for collecting zero-shot EL datasets. It covers over 10K diverse documents spanning news, social media posts, and other web articles, with Wikidata as the target knowledge base. We demonstrate that existing state-of-the-art EL systems perform poorly on Hansel (R@1 of 36.6% in the few-shot setting). We then establish a strong baseline that scores an R@1 of 46.2% in the zero-shot setting on our dataset. We also show that our baseline achieves competitive results on the TAC-KBP2015 Chinese entity linking task.
We introduce ReFinED, an efficient end-to-end entity linking model that uses fine-grained entity types and entity descriptions to perform linking. The model performs mention detection, fine-grained entity typing, and entity disambiguation for all mentions within a document in a single forward pass, making it more than 60 times faster than existing approaches. ReFinED also surpasses state-of-the-art performance on standard entity linking datasets by an average of 3.7 F1 points. The model is able to generalise to large-scale knowledge bases such as Wikidata (which has 15 times more entities than Wikipedia) and to perform zero-shot entity linking. The combination of speed, accuracy, and scale makes ReFinED an effective and cost-efficient system for extracting entities from web-scale datasets, for which the model has been successfully deployed. Our code and pre-trained models are available at https://github.com/alexa/refined
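A rough sketch of the single-forward-pass design: one shared encoder feeds three lightweight heads for mention detection, fine-grained typing, and description-based disambiguation. The shapes and heads below are assumptions for illustration, not ReFinED's actual architecture:

```python
# Single-pass multi-head sketch: all three EL sub-tasks are computed
# from the same shared token states. All shapes/heads are assumed.
import numpy as np

rng = np.random.default_rng(5)
SEQ, HID, N_TYPES, N_ENTS = 12, 32, 6, 100

hidden = rng.normal(size=(SEQ, HID))            # shared encoder output
W_span = rng.normal(size=(HID, 2))              # mention start/end logits
W_type = rng.normal(size=(HID, N_TYPES))        # fine-grained type logits
entity_table = rng.normal(size=(N_ENTS, HID))   # entity description embeddings

span_logits = hidden @ W_span                   # (SEQ, 2) mention detection
type_logits = hidden @ W_type                   # (SEQ, N_TYPES) typing
ent_scores = hidden @ entity_table.T            # (SEQ, N_ENTS) disambiguation

print(span_logits.shape, type_logits.shape, ent_scores.shape)
```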
Entity linking faces significant challenges, such as prolific variations and prevalent ambiguities, especially in high-value domains with myriad entities. Standard classification approaches suffer from the annotation bottleneck and cannot effectively handle unseen entities. Zero-shot entity linking has emerged as a promising direction for generalizing to new entities, but it still requires example gold mentions during training and canonical descriptions for all entities, both of which are rarely available outside of Wikipedia. In this paper, we explore Knowledge-RIch Self-Supervision (KRISS) for entity linking, which leverages readily available domain knowledge. In training, it generates self-supervised mention examples on unlabeled text using a domain ontology and trains a contextual encoder using contrastive learning. For inference, it samples self-supervised mentions as prototypes for each entity and conducts linking by mapping the test mention to the most similar prototype. Our approach subsumes zero-shot and few-shot methods, and can easily incorporate entity descriptions and gold mention labels if available. Using biomedicine as a case study, we conduct extensive experiments on seven standard datasets spanning biomedical literature and clinical notes. Without using any labeled information, our method produces KRISSBERT, a universal entity linker for four million UMLS entities that attains a new state of the art, outperforming prior self-supervised methods by over 20 absolute points in accuracy.
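A minimal sketch of the prototype-based inference step: each entity keeps a few mention embeddings as prototypes, and a test mention links to the entity owning the most similar prototype. The random unit vectors below are stand-ins for the outputs of a contrastively trained encoder:

```python
# Prototype nearest-neighbor linking: map a test mention embedding to the
# entity with the most similar prototype. Embeddings are random stand-ins.
import numpy as np

def nearest_prototype(test_vec, prototypes):
    """prototypes: dict entity_id -> array of shape (n_prototypes, dim)."""
    best_entity, best_sim = None, -np.inf
    for entity, protos in prototypes.items():
        sims = protos @ test_vec  # assumes unit-normalized embeddings
        if sims.max() > best_sim:
            best_entity, best_sim = entity, float(sims.max())
    return best_entity, best_sim

rng = np.random.default_rng(2)
def unit(x): return x / np.linalg.norm(x, axis=-1, keepdims=True)

prototypes = {f"UMLS:C{i:07d}": unit(rng.normal(size=(5, 32))) for i in range(3)}
test_mention = unit(rng.normal(size=32))
print(nearest_prototype(test_mention, prototypes))
```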
In scientific research, the method is an indispensable means to solve scientific problems and a critical research object. With the advancement of science, many scientific methods are being proposed, modified, and used in academic research. Authors describe details of a method in the abstract and body text, and the key entities in academic literature that reflect a method's name are called method entities. Exploring diverse method entities in a large volume of academic literature helps scholars understand existing methods, select appropriate methods for research tasks, and propose new methods. Furthermore, the evolution of method entities can reveal the development of a discipline and facilitate knowledge discovery. Therefore, this article offers a systematic review of methodological and empirical works focusing on extracting method entities from full-text academic literature and on efforts to build knowledge services using these extracted method entities. Definitions of the key concepts involved in this review are first proposed. Based on these definitions, we systematically review the approaches and indicators used to extract and evaluate method entities, with a strong focus on the pros and cons of each approach. We also survey how extracted method entities are used to build new applications. Finally, the limitations of existing works and potential next steps are discussed.
Machine learning methods, especially deep neural networks, have achieved great success, but many of them often rely on a large number of labeled samples for training. In real-world applications, we often need to address sample shortage caused by, e.g., dynamic contexts with emerging prediction targets and costly sample annotation. Therefore, low-resource learning, which aims to learn robust prediction models with insufficient resources (especially training samples), is now being widely investigated. Among all the low-resource learning studies, many prefer to utilize auxiliary information in the form of a Knowledge Graph (KG), which is becoming more and more popular for knowledge representation, to reduce the reliance on labeled samples. In this survey, we comprehensively review over 90 papers on KG-aware research for two major low-resource learning settings: zero-shot learning (ZSL), where the classes to predict have never appeared in training, and few-shot learning (FSL), where the classes to predict have only a small number of labeled samples available. We first introduce the KGs used in ZSL and FSL studies as well as existing and potential KG construction solutions, and then systematically categorize and summarize KG-aware ZSL and FSL methods, dividing them into different paradigms such as mapping-based, data augmentation, propagation-based, and optimization-based. We next present different applications, including both KG-augmented prediction tasks in computer vision and natural language processing as well as KG completion tasks, and some typical evaluation resources for each task. We finally discuss some challenges and future directions on aspects such as new learning and reasoning paradigms and the construction of high-quality KGs.
Metric-based meta-learning is one of the de facto standards in few-shot learning. It is composed of representation learning and metric-calculation designs. Previous works construct class representations in different ways, varying from mean output embedding to covariance and distributions. However, using point embeddings in space lacks expressivity and cannot capture class information robustly, while statistically complex modeling poses difficulty for metric designs. In this work, we use tensor fields (``areas'') to model classes from the geometrical perspective for few-shot learning. We present a simple and effective method, dubbed hypersphere prototypes (HyperProto), where class information is represented by hyperspheres with dynamic sizes, defined by two sets of learnable parameters: the hypersphere's center and its radius. Extending from points to areas, hyperspheres are much more expressive than embeddings. Moreover, it is more convenient to perform metric-based classification with hypersphere prototypes than with statistical modeling, as we only need to calculate the distance from a data point to the surface of the hypersphere. Following this idea, we also develop two variants of prototypes under other measurements. Extensive experiments and analysis on few-shot learning tasks across NLP and CV, and comparison with 20+ competitive baselines, demonstrate the effectiveness of our approach.
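Since classification only needs the distance from a query point to each class's sphere surface, i.e. |‖x − c_k‖ − r_k|, a minimal sketch looks as follows; the random centers and radii stand in for learned parameters:

```python
# Hypersphere-prototype classification: each class is a center c and a
# radius r; a query is assigned to the class whose sphere surface is
# nearest. Centers and radii below are random stand-ins for learned ones.
import numpy as np

def hypersphere_distance(x, center, radius):
    return abs(np.linalg.norm(x - center) - radius)

def classify(x, centers, radii):
    dists = [hypersphere_distance(x, c, r) for c, r in zip(centers, radii)]
    return int(np.argmin(dists))

rng = np.random.default_rng(3)
centers = rng.normal(size=(4, 16))       # one center per class
radii = rng.uniform(0.5, 2.0, size=4)    # one radius per class
query = rng.normal(size=16)
print(classify(query, centers, radii))
```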
Entity linking aims to establish links between entity mentions in a document and their corresponding entities in a knowledge graph (KG). Previous work has shown the effectiveness of global coherence for entity linking. However, most existing global linking methods based on sequential decisions focus on how to utilize previously linked entities to enhance later decisions. In those methods, the order of mentions is fixed, making the model unable to adjust the subsequent linking targets according to previously linked results, which causes the previous information to be utilized unreasonably. To address this problem, we propose a novel model called DyMen, which dynamically adjusts the subsequent linking targets based on previously linked entities via reinforcement learning, enabling the model to select a linking target that can fully use the previously linked information. We sample mentions via a sliding window to reduce the action sampling space of reinforcement learning and maintain the semantic coherence of mentions. Experiments conducted on several benchmark datasets demonstrate the effectiveness of the proposed model.
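As a toy illustration of the sliding-window sampling (with a random stand-in for the learned policy), the next mention to link is always chosen from a small window over the remaining mentions, which shrinks the RL action space while keeping nearby mentions together:

```python
# Sliding-window mention ordering: the agent picks the next mention only
# from a small window, not from all remaining mentions. The random choice
# is a stand-in for a learned RL policy.
import random

def sliding_window_order(mentions, window=3, seed=0):
    rng = random.Random(seed)
    remaining, order = list(mentions), []
    while remaining:
        candidates = remaining[:window]   # restricted action space
        pick = rng.choice(candidates)     # a policy would score these
        order.append(pick)
        remaining.remove(pick)
    return order

print(sliding_window_order(["m1", "m2", "m3", "m4", "m5"]))
```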
The exabytes of data generated daily by humans lead to a growing need for new efforts in dealing with the grand challenges that multi-label learning faces on big data. For example, extreme multi-label classification is an active and rapidly growing research area that deals with classification tasks involving an extremely large number of classes or labels, and utilizing massive data with limited supervision to build multi-label classification models is becoming valuable for practical applications. Beyond this, tremendous effort has gone into harvesting the strong learning capability of deep learning to better capture label dependencies in multi-label learning, which is key for deep learning to address real-world classification tasks. However, it has been pointed out that there is a lack of systematic studies that focus on analyzing the emerging trends and new challenges of multi-label learning in the era of big data. A comprehensive survey is called for to fulfill this mission and delineate future research directions and new applications.
Visual Entity Linking (VEL) is a task to link regions of images with their corresponding entities in Knowledge Bases (KBs), which is beneficial for many computer vision tasks such as image retrieval, image captioning, and visual question answering. However, existing VEL tasks either rely on textual data to complement multi-modal linking or only link objects to general entities, and thus fail to perform named entity linking on large amounts of image data. In this paper, we consider a purely Visual-based Named Entity Linking (VNEL) task, where the input only consists of an image. The task is to identify objects of interest (i.e., visual entity mentions) in images and link them to corresponding named entities in KBs. Since each entity often contains rich visual and textual information in KBs, we propose three different sub-tasks, i.e., visual to visual entity linking (V2VEL), visual to textual entity linking (V2TEL), and visual to visual-textual entity linking (V2VTEL). In addition, we present a high-quality human-annotated visual person linking dataset, named WIKIPerson. Based on WIKIPerson, we establish a series of baseline algorithms for the solution of each sub-task, and conduct experiments to verify the quality of the proposed datasets and the effectiveness of the baseline methods. We envision this work to be helpful for soliciting more works regarding VNEL in the future. The codes and datasets are publicly available at https://github.com/ict-bigdatalab/VNEL.
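For the V2TEL flavour, a bare-bones sketch might score an image-region embedding against textual entity representations and link to the best match; the random vectors below are stand-ins, and a CLIP-style dual encoder is only one assumed instantiation:

```python
# Toy V2TEL matching: link a detected visual entity mention (image-region
# embedding) to the KB entity whose textual embedding scores highest.
# All embeddings are random stand-ins for dual-encoder outputs.
import numpy as np

rng = np.random.default_rng(6)
def unit(x): return x / np.linalg.norm(x, axis=-1, keepdims=True)

region_vec = unit(rng.normal(size=128))  # embedding of a visual mention
entity_text_vecs = {                     # embeddings of KB entity descriptions
    qid: unit(rng.normal(size=128)) for qid in ["Q1", "Q2", "Q3"]
}

scores = {qid: float(v @ region_vec) for qid, v in entity_text_vecs.items()}
print(max(scores, key=scores.get))
```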
Coreference resolution (CR) is one of the most challenging areas of natural language processing. This task seeks to identify all textual references to the same real-world entity. Research in this field is divided into coreference resolution and anaphora resolution. Due to its application in textual comprehension and its utility in other tasks such as information extraction systems, document summarization, and machine translation, this field has attracted considerable interest. Consequently, it has a significant effect on the quality of these systems. This article reviews the existing corpora and evaluation metrics in this field. Then, an overview of the coreference algorithms, from rule-based methods to the latest deep learning techniques, is provided. Finally, coreference resolution and pronoun resolution systems in Persian are investigated.
Large language models have shown impressive few-shot results on a wide range of tasks. However, when knowledge is key for such results, as is the case for tasks such as question answering and fact checking, massive parameter counts seem to be needed to store that knowledge. Retrieval-augmented models are known to excel at knowledge-intensive tasks without requiring as many parameters, but it is unclear whether they work in few-shot settings. In this work we present Atlas, a carefully designed and pre-trained retrieval-augmented language model able to learn knowledge-intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT, and NaturalQuestions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy on Natural Questions using only 64 examples, outperforming a 540B-parameter model despite having 50x fewer parameters.
Weakly-supervised text classification aims to train a classifier using only class descriptions and unlabeled data. Recent research shows that keyword-driven methods can achieve state-of-the-art performance on various tasks. However, these methods not only rely on carefully-crafted class descriptions to obtain class-specific keywords but also require a substantial amount of unlabeled data and take a long time to train. This paper proposes FastClass, an efficient weakly-supervised classification approach. It uses dense text representations to retrieve class-relevant documents from an external unlabeled corpus and selects an optimal subset to train a classifier. Compared to keyword-driven methods, our approach is less reliant on initial class descriptions as it no longer needs to expand each class description into a set of class-specific keywords. Experiments on a wide range of classification tasks show that the proposed approach frequently outperforms keyword-driven models in terms of classification accuracy and often enjoys orders-of-magnitude faster training speed.
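A compressed sketch of the retrieve-then-train recipe: embed class descriptions and corpus documents in a shared space, take each class's top-k nearest documents as pseudo-labeled training data, then fit any off-the-shelf classifier on them. The hashing encoder here is a stand-in assumption for a dense text encoder:

```python
# Retrieve class-relevant documents by embedding similarity and turn them
# into pseudo-labeled training data. The hashing encoder is a toy stand-in
# for a dense encoder; subset selection is reduced to plain top-k.
import numpy as np

def embed(text: str, dim: int = 128) -> np.ndarray:
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def retrieve_training_set(class_descriptions, corpus, k=2):
    doc_vecs = np.stack([embed(d) for d in corpus])
    train = []
    for label, desc in class_descriptions.items():
        sims = doc_vecs @ embed(desc)
        for idx in np.argsort(-sims)[:k]:   # top-k most similar documents
            train.append((corpus[idx], label))
    return train  # feed this to any off-the-shelf classifier

corpus = ["the team won the match", "stocks fell sharply today",
          "the striker scored twice", "the central bank raised rates"]
classes = {"sports": "news about sports and games",
           "finance": "news about markets and the economy"}
print(retrieve_training_set(classes, corpus))
```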
The development of deep neural networks has improved representation learning in various domains, including textual, graph structural, and relational triple representations. This development opened the door to new relation extraction beyond the traditional text-oriented relation extraction. However, research on the effectiveness of considering multiple heterogeneous domain information simultaneously is still under exploration, and if a model can take advantage of integrating heterogeneous information, it is expected to make a significant contribution to many problems in the world. This thesis takes Drug-Drug Interactions (DDIs) from the literature as a case study and realizes relation extraction utilizing heterogeneous domain information. First, a deep neural relation extraction model is prepared and its attention mechanism is analyzed. Next, a method to combine drug molecular structure information and drug description information with the input sentence information is proposed, and the effectiveness of utilizing drug molecular structures and drug descriptions for the relation extraction task is shown. Then, in order to further exploit the heterogeneous information, drug-related items, such as protein entries, medical terms, and pathways, are collected from multiple existing databases and a new dataset in the form of a knowledge graph (KG) is constructed. A link prediction task on the constructed dataset is conducted to obtain embedding representations of drugs that contain the heterogeneous domain information. Finally, a method that integrates the input sentence information and the heterogeneous KG information is proposed. The proposed model is trained and evaluated on a widely used dataset, and the results show that utilizing heterogeneous domain information significantly improves the performance of relation extraction from the literature.
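A minimal sketch of the final integration step: the relation classifier sees the sentence representation concatenated with the KG embeddings of the two drug entities. Dimensions, the linear head, and the stand-in encoders are all assumptions for illustration, not the thesis's actual model:

```python
# Feature-level integration of text and KG information: concatenate the
# sentence encoding with the pre-trained KG embeddings of both drugs,
# then classify the relation. Encoders and weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(4)
SENT_DIM, KG_DIM, N_RELATIONS = 256, 64, 5

def encode_sentence(sentence: str) -> np.ndarray:
    # Stand-in for a trained sentence encoder.
    return rng.normal(size=SENT_DIM)

kg_embedding = {  # stand-ins for embeddings learned via KG link prediction
    "aspirin": rng.normal(size=KG_DIM),
    "warfarin": rng.normal(size=KG_DIM),
}

W = rng.normal(size=(N_RELATIONS, SENT_DIM + 2 * KG_DIM))  # linear head

def predict_relation(sentence, drug1, drug2):
    feats = np.concatenate([encode_sentence(sentence),
                            kg_embedding[drug1], kg_embedding[drug2]])
    return int(np.argmax(W @ feats))

print(predict_relation("Aspirin increases the effect of warfarin.",
                       "aspirin", "warfarin"))
```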