In statistical relational learning, the link prediction problem is key to automatically understand the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization. However, here we make use of complex valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as Neural Tensor Network and Holographic Embeddings, our approach based on complex embeddings is arguably simpler, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.
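To make the Hermitian dot product concrete, the sketch below scores a triple (s, r, o) as the real part of a trilinear product of complex-valued embeddings, which is the composition the abstract refers to. The entity and relation arrays, the embedding dimension, and all values are illustrative placeholders rather than the authors' released code.

```python
# Minimal sketch of a ComplEx-style Hermitian-product score (toy data only).
import numpy as np

rng = np.random.default_rng(0)
dim = 4  # embedding dimension (illustrative)

def random_complex(n, d):
    """Random complex embeddings (real and imaginary parts drawn i.i.d.)."""
    return rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))

entities = random_complex(5, dim)   # 5 toy entities
relations = random_complex(3, dim)  # 3 toy relations

def complex_score(s, r, o):
    """Re(<e_s, w_r, conj(e_o)>): asymmetric in s and o unless w_r is real."""
    return np.real(np.sum(entities[s] * relations[r] * np.conj(entities[o])))

print(complex_score(0, 1, 2), complex_score(2, 1, 0))  # generally differ
```

Because the relation embedding can have a non-zero imaginary part, swapping subject and object changes the score, which is how the same product covers both symmetric and antisymmetric relations.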
The heterogeneity in the implementation, training, and evaluation of recently published knowledge graph embedding models has made fair and thorough comparisons difficult. To assess the reproducibility of previously published results, we re-implemented and evaluated 21 interaction models in the PyKEEN software package. Here, we outline which results could be reproduced with their reported hyper-parameters, which could only be reproduced with alternate hyper-parameters, and which could not be reproduced at all, and we provide insight as to why this might be the case. We then performed a large-scale benchmarking study on four datasets, comprising several thousand experiments and 24,804 GPU hours of computation time. We present insights into best practices, the best configuration for each model, and where improvements could be made over previously published best configurations. Our results highlight that the combination of model architecture, training approach, loss function, and the explicit modeling of inverse relations is crucial for a model's performance, which is not determined by the model architecture alone. We provide evidence that several architectures can obtain results competitive with the state of the art when configured carefully. We have made all code, experimental configurations, results, and analyses that led to our interpretations available at https://github.com/pykeen/pykeen and https://github.com/pykeen/benchmarking.
Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models, which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree, which are common in highly connected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set; however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets, deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across most datasets.
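The following sketch illustrates a ConvE-style scoring pass as summarised above: the subject and relation embeddings are reshaped into 2D grids, stacked, convolved, projected back to the embedding dimension, and matched against the object embedding. Layer sizes, the absence of dropout and batch normalisation, and the single-triple interface are simplifying assumptions, not the paper's exact architecture.

```python
# Rough sketch of a ConvE-like scoring pipeline (illustrative sizes).
import torch
import torch.nn as nn

dim, h, w = 200, 10, 20          # embedding dim reshaped into a 10x20 grid
conv = nn.Conv2d(1, 32, kernel_size=3)
fc = nn.Linear(32 * (2 * h - 2) * (w - 2), dim)

def conve_score(e_s, e_r, e_o):
    # stack subject and relation "images" vertically -> (1, 1, 2h, w)
    x = torch.cat([e_s.view(1, 1, h, w), e_r.view(1, 1, h, w)], dim=2)
    x = torch.relu(conv(x))                 # (1, 32, 2h-2, w-2)
    x = torch.relu(fc(x.flatten(1)))        # project back to embedding space
    return (x * e_o.view(1, dim)).sum()     # dot product with object embedding

e_s, e_r, e_o = (torch.randn(dim) for _ in range(3))
print(conve_score(e_s, e_r, e_o).item())
```

In practice the projected vector is scored against all candidate objects at once, which is one reason the model remains parameter efficient despite the convolutional layer.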
Knowledge graph (KG) embedding is to embed components of a KG including entities and relations into continuous vector spaces, so as to simplify the manipulation while preserving the inherent structure of the KG. It can benefit a variety of downstream tasks such as KG completion and relation extraction, and hence has quickly gained massive attention. In this article, we provide a systematic review of existing techniques, including not only the state-of-the-arts but also those with latest trends. Particularly, we make the review based on the type of information used in the embedding task. Techniques that conduct embedding using only facts observed in the KG are first introduced. We describe the overall framework, specific model design, typical training procedures, as well as pros and cons of such techniques. After that, we discuss techniques that further incorporate additional information besides facts. We focus specifically on the use of entity types, relation paths, textual descriptions, and logical rules. Finally, we briefly introduce how KG embedding can be applied to and benefit a wide variety of downstream tasks such as KG completion, relation extraction, question answering, and so forth.
Knowledge graphs enable a wide variety of applications, including question answering and information retrieval. Despite the great effort invested in their creation and maintenance, even the largest (e.g., Yago, DBPedia or Wikidata) remain incomplete. We introduce Relational Graph Convolutional Networks (R-GCNs) and apply them to two standard knowledge base completion tasks: Link prediction (recovery of missing facts, i.e. subject-predicate-object triples) and entity classification (recovery of missing entity attributes). R-GCNs are related to a recent class of neural networks operating on graphs, and are developed specifically to deal with the highly multi-relational data characteristic of realistic knowledge bases. We demonstrate the effectiveness of R-GCNs as a stand-alone model for entity classification. We further show that factorization models for link prediction such as DistMult can be significantly improved by enriching them with an encoder model to accumulate evidence over multiple inference steps in the relational graph, demonstrating a large improvement of 29.8% on FB15k-237 over a decoder-only baseline.
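As a rough illustration of the relation-aware message passing behind the R-GCN encoder, the toy layer below uses one weight matrix per relation type and averages incoming messages per relation before a nonlinearity. The normalisation constant, the self-loop transform, and all shapes are assumptions made for the sketch, not the paper's exact formulation.

```python
# Toy sketch of one relation-specific message-passing step (R-GCN flavour).
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_rels, dim = 4, 2, 8
H = rng.normal(size=(n_nodes, dim))                  # current node features
W = rng.normal(size=(n_rels, dim, dim))              # one matrix per relation
W_self = rng.normal(size=(dim, dim))                 # self-loop transform
triples = [(0, 0, 1), (1, 0, 2), (2, 1, 3), (3, 1, 0)]  # (head, rel, tail)

def rgcn_layer(H):
    out = H @ W_self                                 # self-connection
    # count incoming edges per (node, relation) for the 1/c normaliser
    counts = np.zeros((n_nodes, n_rels))
    for h, r, t in triples:
        counts[t, r] += 1
    for h, r, t in triples:
        out[t] += (H[h] @ W[r]) / counts[t, r]       # normalised message h -> t
    return np.maximum(out, 0.0)                      # ReLU nonlinearity

print(rgcn_layer(H).shape)  # (4, 8)
```

Stacking such layers lets a node accumulate evidence from multi-hop neighbourhoods before a factorization decoder such as DistMult scores candidate triples.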
In this paper we show the surprising effectiveness of a simple observed features model in comparison to latent feature models on two benchmark knowledge base completion datasets, FB15K and WN18. We also compare latent and observed feature models on a more challenging dataset derived from FB15K, and additionally coupled with textual mentions from a web-scale corpus. We show that the observed features model is most effective at capturing the information present for entity pairs with textual relations, and a combination of the two combines the strengths of both model types.
We study the problem of learning representations of entities and relations in knowledge graphs for predicting missing links. The success of such a task heavily relies on the ability of modeling and inferring the patterns of (or between) the relations. In this paper, we present a new approach for knowledge graph embedding called RotatE, which is able to model and infer various relation patterns including: symmetry/antisymmetry, inversion, and composition. Specifically, the RotatE model defines each relation as a rotation from the source entity to the target entity in the complex vector space. In addition, we propose a novel self-adversarial negative sampling technique for efficiently and effectively training the RotatE model. Experimental results on multiple benchmark knowledge graphs show that the proposed RotatE model is not only scalable, but also able to infer and model various relation patterns and significantly outperform existing state-of-the-art models for link prediction.
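A minimal sketch of the rotation idea: each relation is a vector of unit-modulus complex numbers, and a triple is plausible when the element-wise rotation of the head lands close to the tail. The dimensions and values below are toy data, not the model's training setup.

```python
# Toy sketch of RotatE-style scoring: relations as element-wise rotations.
import numpy as np

rng = np.random.default_rng(0)
dim = 4
head = rng.normal(size=dim) + 1j * rng.normal(size=dim)
phase = rng.uniform(0, 2 * np.pi, size=dim)
relation = np.exp(1j * phase)            # |relation_i| = 1, a pure rotation

def rotate_distance(h, r, t):
    """Smaller distance means the triple (h, r, t) is more plausible."""
    return np.linalg.norm(h * r - t)

tail_good = head * relation              # exactly the rotated head
tail_bad = rng.normal(size=dim) + 1j * rng.normal(size=dim)
print(rotate_distance(head, relation, tail_good))  # ~0.0
print(rotate_distance(head, relation, tail_bad))   # larger
```

Because rotations compose, invert, and can have phase pi, this parameterisation can represent composition, inversion, and symmetry/antisymmetry patterns, which is the expressiveness argument made in the abstract.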
We study the problem of generating data poisoning attacks against knowledge graph embedding (KGE) models for the task of link prediction in knowledge graphs. To poison KGE models, we propose to exploit their inductive abilities, which are captured through relationship patterns such as symmetry, inversion and composition in the knowledge graph. Specifically, to degrade the model's prediction confidence on target facts, we propose to improve the model's prediction confidence on a set of decoy facts. Thus, we craft adversarial additions that improve the model's prediction confidence on the decoy facts through different inference patterns. Our experiments demonstrate that the proposed poisoning attacks outperform state-of-the-art baselines on four KGE models for two publicly available datasets. We also find that the attacks based on the symmetry pattern generalize across all model-dataset combinations, which indicates the sensitivity of KGE models to this pattern.
Recently, the link prediction problem, also known as knowledge graph completion, has attracted a large amount of research. Even though a few recent models have tried to attain relatively good performance by embedding knowledge graphs in low dimensions, the best results of the current state-of-the-art models are earned at the cost of considerably increasing the dimensionality of embeddings. However, this causes overfitting and, more importantly, scalability issues in the case of huge knowledge bases. Inspired by the advances in deep learning offered by variants of the Transformer model and its self-attention mechanism, in this paper we propose a model based on it to address the above limitations. In our model, self-attention is the key to applying query-dependent projections to entities and relations and to capturing the mutual information between them, in order to obtain highly expressive representations from low-dimensional embeddings. Empirical results on two standard link prediction datasets, FB15k-237 and WN18RR, show that our model achieves performance comparable to or better than our three best recent state-of-the-art competitors, with a significant reduction of 76.3% in the dimensionality of embeddings on average.
With the proliferation of knowledge graphs, modeling data with complex multi-relational structure has gained increasing attention in the field of statistical relational learning. One of the most important goals of statistical relational learning is link prediction, i.e., predicting whether certain relations exist in the knowledge graph. A large number of models and algorithms have been proposed to perform link prediction, among which tensor factorization methods have proven to achieve state-of-the-art performance in terms of computational efficiency and predictive accuracy. However, a common drawback of existing tensor factorization models is that missing relations and non-existing relations are treated in the same way, which leads to a loss of information. To address this issue, we propose a binary tensor factorization model with a probit link, which not only inherits the computational efficiency of classic tensor factorization models but also accounts for the binary nature of relational data. Our proposed probit tensor factorization (PTF) model shows advantages in both prediction accuracy and interpretability.
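The probit link mentioned above can be pictured as follows: a real-valued factorization score is mapped through the standard normal CDF to the probability that the binary relation holds. The trilinear (DistMult/CP-style) score used in this sketch is an assumption made for illustration; the PTF model's exact factorization may differ.

```python
# Hedged sketch: probit link applied to a trilinear factorization score.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
dim = 8
e_head = rng.normal(size=dim)
e_tail = rng.normal(size=dim)
w_rel = rng.normal(size=dim)

def probit(x):
    """Standard normal CDF, the probit link."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

score = np.sum(e_head * w_rel * e_tail)   # assumed trilinear factorization score
prob_true = probit(score)                 # probability the triple is a fact
print(round(prob_true, 3))
```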
Knowledge graphs (KGs) are typically incomplete, and we often wish to infer new facts from the existing ones. This can be thought of as a binary classification problem: we aim to predict whether new facts are true or false. Unfortunately, we generally only have positive examples (the known facts), but we also need negative examples to train a classifier. To address this problem, negative examples are usually generated with a negative sampling strategy. However, this can produce false negatives that may degrade performance, is computationally expensive, and does not yield calibrated classification probabilities. In this paper, we propose a training procedure that eliminates the need for negative sampling by adding a novel regularization term to the loss function. Our results for two relational embedding models (DistMult and SimplE) show the merits of our proposal.
Knowledge graph embedding research has mainly focused on the two smallest normed division algebras, $\mathbb{R}$ and $\mathbb{C}$. Recent results suggest that trilinear products of quaternion-valued embeddings can be a more effective means of tackling link prediction. In addition, models based on convolutions over real-valued embeddings often yield state-of-the-art results for link prediction. In this paper, we investigate the composition of convolution operations with hypercomplex multiplications. We propose four approaches, QMult, OMult, ConvQ and ConvO, to tackle the link prediction problem. QMult and OMult can be viewed as quaternion and octonion extensions of previous state-of-the-art approaches, including DistMult and ComplEx. ConvQ and ConvO build upon QMult and OMult by including convolution operations in a manner inspired by the residual learning framework. We evaluated our approaches on seven link prediction datasets, including WN18RR, FB15K-237 and YAGO3-10. Experimental results suggest that the benefits of learning hypercomplex-valued vector representations become more apparent as the size and complexity of the knowledge graph grow. ConvO outperforms state-of-the-art approaches on FB15K-237 in MRR, Hits@1 and Hits@3, while QMult, OMult, ConvQ and ConvO outperform state-of-the-art approaches on YAGO3-10 in all metrics. The results also suggest that link prediction performance can be further improved via prediction averaging. To foster reproducible research, we provide an open-source implementation of our approaches, including training and evaluation scripts as well as pretrained models.
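The sketch below reconstructs a QMult-style score: quaternion-valued embeddings (four real components per coordinate), an element-wise Hamilton product of head and relation, and an inner product with the tail. It is an illustrative reading of the abstract, not the authors' released implementation, and the dimensions are arbitrary.

```python
# Hedged sketch of a quaternion (Hamilton product) scoring function.
import numpy as np

rng = np.random.default_rng(0)
dim = 4  # quaternion coordinates per embedding (illustrative)

def hamilton(q1, q2):
    """Element-wise Hamilton product of two quaternion vectors of shape (4, dim)."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return np.stack([
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,
    ])

head, rel, tail = (rng.normal(size=(4, dim)) for _ in range(3))
score = np.sum(hamilton(head, rel) * tail)   # quaternion inner product with tail
print(score)
```

The convolutional variants (ConvQ, ConvO) would additionally pass the embeddings through a convolution whose output modulates this product, in the residual-learning spirit the abstract describes.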
Knowledge base completion (KBC) has recently been a very active field. Several recent KBC papers propose architectural changes, new training methods, or even new problem formulations. KBC systems are usually evaluated on standard benchmark datasets: FB15k, FB15k-237, WN18, WN18RR and YAGO3-10. Most existing methods train with a small number of negative samples for each positive instance in these datasets to save computational cost. This paper discusses how recent developments allow us to train with all available negative samples. We show that ComplEx, when trained using all available negative samples, achieves near state-of-the-art performance on all of these datasets. We call this approach ComplEx-V2. We also highlight how various multiplicative KBC methods recently proposed in the literature benefit from this training regime and become nearly indistinguishable from one another in terms of performance on most datasets. Based on these findings, our work calls for a reassessment of their individual value.
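One way to read "training with all available negative samples" is a 1-vs-all cross-entropy: for a (head, relation) query, score every entity as a candidate tail and penalise the negative log-softmax of the true tail. The sketch below applies this to a ComplEx scorer; the loss form, model, and sizes are assumptions rather than the paper's exact recipe.

```python
# Hedged sketch: 1-vs-all cross-entropy over every candidate tail entity.
import numpy as np

rng = np.random.default_rng(0)
n_entities, dim = 50, 8
E = rng.normal(size=(n_entities, dim)) + 1j * rng.normal(size=(n_entities, dim))
R = rng.normal(size=(3, dim)) + 1j * rng.normal(size=(3, dim))

def scores_vs_all_tails(h, r):
    """ComplEx scores of (h, r, t) for every entity t at once."""
    return np.real((E.conj() * (E[h] * R[r])).sum(axis=1))

def one_vs_all_loss(h, r, true_t):
    s = scores_vs_all_tails(h, r)
    s -= s.max()                           # numerical stability
    log_softmax = s - np.log(np.exp(s).sum())
    return -log_softmax[true_t]            # cross-entropy against the true tail

print(one_vs_all_loss(h=0, r=1, true_t=7))
```

For multiplicative models the all-tails score reduces to one matrix-vector product, which is why treating every entity as a negative has become computationally feasible.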
Knowledge graph embedding (KGE) is an increasingly popular technique that aims to represent entities and relations of knowledge graphs in low-dimensional semantic spaces for a wide spectrum of applications such as link prediction, knowledge reasoning and knowledge completion. In this paper, we provide a systematic review of existing KGE techniques based on representation spaces. Particularly, we build a fine-grained classification to categorise the models based on three mathematical perspectives of the representation spaces: (1) Algebraic perspective, (2) Geometric perspective, and (3) Analytical perspective. We introduce the rigorous definitions of fundamental mathematical spaces before diving into KGE models and their mathematical properties. We further discuss different KGE methods over the three categories, as well as summarise how spatial advantages work over different embedding needs. By collating the experimental results from downstream tasks, we also explore the advantages of mathematical space in different scenarios and the reasons behind them. We further state some promising research directions from a representation space perspective, with which we hope to inspire researchers to design their KGE models as well as their related applications with more consideration of their mathematical space properties.
Learning embeddings of knowledge graphs is crucial in artificial intelligence and can benefit various downstream applications, such as recommendation and question answering. In recent years, many research efforts have been devoted to knowledge graph embedding. However, most previous knowledge graph embedding methods ignore the semantic similarity between related entities and entity-relation couples in different triples, since they optimize each triple separately with a scoring function. To address this problem, we propose a simple yet effective contrastive learning framework for knowledge graph embedding, which can shorten the semantic distance between related entities and entity-relation couples in different triples and thereby improve the expressiveness of knowledge graph embeddings. We evaluate our proposed method on three standard knowledge graph benchmarks. Notably, our method can yield some new state-of-the-art results, achieving 51.2% MRR and 46.8% Hits@1 on the WN18RR dataset, and 59.1% MRR and 51.8% Hits@1 on the YAGO3-10 dataset.
Information extraction methods have proven effective at extracting triples from structured or unstructured data. The organization of such triples in the form of (head entity, relation, tail entity) is called a knowledge graph (KG). Most current knowledge graphs are incomplete. To use KGs in downstream tasks, it is desirable to predict the missing links in them. Recently, different approaches have been proposed that represent a KG by embedding its entities and relations into low-dimensional vector spaces, aiming to predict unseen triples based on previously observed triples. Depending on whether triples are processed independently or with respect to one another, we divide the task of knowledge graph completion into conventional and graph neural network based representation learning, and we discuss them in more detail. In conventional approaches each triple is processed independently, whereas in GNN-based approaches triples also take their local neighbourhood into account.
Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be "trained" on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph). In particular, we discuss two fundamentally different kinds of statistical relational models, both of which can scale to massive datasets. The first is based on latent feature models such as tensor factorization and multiway neural networks. The second is based on mining observable patterns in the graph. We also show how to combine these latent and observable models to get improved modeling power at decreased computational cost. Finally, we discuss how such statistical models of graphs can be combined with text-based information extraction methods for automatically constructing knowledge graphs from the Web. To this end, we also discuss Google's Knowledge Vault project as an example of such combination.
We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.
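The translation assumption behind TransE can be written as h + r ≈ t, so the dissimilarity ||h + r - t|| should be small for true triples. The sketch below is a toy illustration of that scoring rule, not the training procedure (which additionally uses margin-based ranking against corrupted triples).

```python
# Toy sketch of the TransE translation-based dissimilarity.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
head = rng.normal(size=dim)
relation = rng.normal(size=dim)

def transe_score(h, r, t):
    """Dissimilarity of the triple; lower means more plausible."""
    return np.linalg.norm(h + r - t)

tail_good = head + relation                 # perfectly translated tail
tail_bad = rng.normal(size=dim)
print(transe_score(head, relation, tail_good))  # 0.0
print(transe_score(head, relation, tail_bad))   # larger
```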
Entity type prediction is an important problem in knowledge graph (KG) research. A new KG entity type prediction method named CORE (COmplex space Regression and Embedding) is proposed in this work. The proposed CORE method leverages the expressive power of two complex-space embedding models, namely the RotatE and ComplEx models. It embeds entities and types in two different complex spaces using either RotatE or ComplEx. Then, we derive a complex regression model to link these two spaces. Finally, a mechanism for optimizing the embedding and regression parameters is introduced. Experiments show that CORE outperforms benchmarking methods on representative KG entity type inference datasets. The strengths and weaknesses of various entity type prediction methods are also analyzed.
Knowledge graphs (KGs) on COVID-19 have been constructed to accelerate COVID-19 research. However, KGs are always incomplete, especially newly constructed COVID-19 KGs. The link prediction task aims to predict the missing entity in (e, r, t) or (h, r, e), where h and t are known entities, e is the entity to be predicted, and r is a relation. This task also has the potential to solve the incompleteness problem of COVID-19-related KGs. Although various knowledge graph embedding (KGE) methods have been proposed for the link prediction task, these existing methods suffer from the limitation of using a single scoring function, which cannot capture the rich features of COVID-19 KGs. In this work, we propose the MDistMult model, which leverages multiple scoring functions to extract more features from the existing triples. We conduct experiments on the CCKS2020 COVID-19 Antiviral Drugs Knowledge Graph (CADKG). The experimental results show that our MDistMult achieves state-of-the-art performance on the link prediction task over the CADKG dataset.