Knowledge graphs (KGs) have become a prominent form of knowledge representation in recent years. Because they concentrate on nominal entities and their relations, traditional KGs are static and encyclopedic in nature. On this basis, event knowledge graphs (Event KGs) model temporal and spatial dynamics through text processing to facilitate downstream applications such as question answering, recommendation, and intelligent search. Existing KG research, however, mainly focuses on text processing and static facts, ignoring the abundant dynamic behavior information contained in photos, videos, and pre-trained neural networks. In addition, no effort has been made to incorporate behavior intelligence information into knowledge graphs for deep reinforcement learning (DRL) and robot learning. In this paper, we propose a novel dynamic Knowledge and Skill Graph (KSG), and then develop a basic and specific KSG based on CN-DBpedia. The nodes are divided into entity nodes and attribute nodes: entity nodes contain agents, environments, and skills (DRL policies or policy representations), while attribute nodes contain entity descriptions, pre-trained networks, and offline datasets. KSG can search for different agents' skills in various environments and provide transferable information for acquiring new skills. This is, to our knowledge, the first study that investigates a dynamic KSG for skill retrieval and learning. Extensive experimental results on new skill learning show that KSG improves the efficiency of learning new skills.
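To make the graph organization concrete, below is a minimal sketch of the entity/attribute node schema described above, written with networkx. Every node name, relation label, and file path is invented for illustration; the paper's actual KSG is built on CN-DBpedia.

```python
# Minimal KSG sketch: entity nodes (agent, environment, skill) plus attribute
# nodes (pre-trained network, offline dataset), linked by labeled relations.
import networkx as nx

ksg = nx.MultiDiGraph()

# Entity nodes.
ksg.add_node("HalfCheetah", kind="agent")
ksg.add_node("MuJoCo", kind="environment")
ksg.add_node("run-forward", kind="skill")   # a skill wraps a DRL policy

# Attribute nodes attached to the skill: a pre-trained network and an
# offline dataset (hypothetical file names).
ksg.add_node("run-forward/policy.pt", kind="pretrained_network")
ksg.add_node("run-forward/replay.h5", kind="offline_dataset")

ksg.add_edge("HalfCheetah", "run-forward", relation="has_skill")
ksg.add_edge("run-forward", "MuJoCo", relation="trained_in")
ksg.add_edge("run-forward", "run-forward/policy.pt", relation="has_network")
ksg.add_edge("run-forward", "run-forward/replay.h5", relation="has_dataset")

def skills_of(agent, env):
    """Skill retrieval: skills of an agent trained in a given environment."""
    for _, skill, d in ksg.out_edges(agent, data=True):
        if d["relation"] == "has_skill" and ksg.has_edge(skill, env):
            yield skill

print(list(skills_of("HalfCheetah", "MuJoCo")))  # ['run-forward']
```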
Graph walking based on reinforcement learning (RL) has shown great promise in navigating an agent to automatically complete various reasoning tasks over an incomplete knowledge graph (KG) by exploring multi-hop relational paths. However, existing multi-hop reasoning approaches only work well on short reasoning paths and tend to miss the target entity as the path length increases. This is undesirable for many reasoning tasks in real-world scenarios, where short paths connecting the source and target entities are not available in the incomplete KG, and thus the reasoning performance drops drastically unless the agent is able to seek out more clues along longer paths. To address the above challenge, in this paper we propose a dual-agent reinforcement learning framework, which trains two agents (GIANT and DWARF) to walk over the KG jointly and search for the answer collaboratively. Our approach tackles the reasoning challenge on long paths by assigning one of the agents (GIANT) to quickly search for cluster-level paths and provide stage-wise hints for the other agent (DWARF). Finally, experimental results on several KG reasoning benchmarks show that our approach can search for answers more accurately and efficiently, and outperforms existing RL-based methods on long-path queries by a large margin.
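The toy sketch below illustrates the division of labor the abstract describes, under the assumption that GIANT walks a coarse cluster-level graph and hands its visited clusters to DWARF as stage-wise hints. Both policies are random stand-ins here; in the paper they are learned with RL.

```python
# Toy GIANT/DWARF walk: random choices stand in for the learned policies.
import random

def giant_walk(cluster_graph, start_cluster, max_steps):
    """Coarse walk over entity clusters; each visited cluster is a hint."""
    hints, cur = [], start_cluster
    for _ in range(max_steps):
        nbrs = cluster_graph.get(cur, [])
        if not nbrs:
            break
        cur = random.choice(nbrs)               # stand-in for the GIANT policy
        hints.append(cur)
    return hints

def dwarf_walk(entity_graph, cluster_of, start, hints):
    """Fine walk that, at stage t, prefers edges leading into hints[t]."""
    path, cur = [start], start
    for hint in hints:
        nbrs = entity_graph.get(cur, [])
        if not nbrs:
            break
        preferred = [n for n in nbrs if cluster_of[n] == hint]
        cur = random.choice(preferred or nbrs)  # stand-in for the DWARF policy
        path.append(cur)
    return path

clusters = {"A": ["B"], "B": ["C"], "C": []}
cluster_of = {"u1": "A", "u2": "B", "u3": "B", "u4": "C"}
entities = {"u1": ["u2", "u3"], "u2": ["u4"], "u3": ["u4"], "u4": []}
hints = giant_walk(clusters, "A", max_steps=2)        # e.g. ["B", "C"]
print(dwarf_walk(entities, cluster_of, "u1", hints))  # e.g. ["u1", "u2", "u4"]
```

In this reading, the hints shrink DWARF's effective search space at every hop, which is why long paths stop being a failure mode.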
Knowledge graph reasoning (KGR), aiming to deduce new facts from existing facts based on mined logic rules underlying knowledge graphs (KGs), has become a fast-growing research direction. It has been proven to significantly benefit the usage of KGs in many AI applications, such as question answering and recommendation systems. According to the graph types, the existing KGR models can be roughly divided into three categories, i.e., static models, temporal models, and multi-modal models. The early works in this domain mainly focus on static KGR and tend to directly apply general knowledge graph embedding models to the reasoning task. However, these models are not suitable for more complex but practical tasks, such as inductive static KGR, temporal KGR, and multi-modal KGR. To this end, multiple works have been developed recently, but no survey paper or open-source repository comprehensively summarizes and discusses models in this important direction. To fill the gap, we conduct a survey of knowledge graph reasoning, tracing from static to temporal and then to multi-modal KGs. Concretely, the preliminaries, summaries of KGR models, and typical datasets are introduced and discussed in turn. Moreover, we discuss the challenges and potential opportunities. The corresponding open-source repository is shared on GitHub: https://github.com/LIANGKE23/Awesome-Knowledge-Graph-Reasoning.
Narrative cartography is a discipline that studies the interwoven nature of stories and maps. However, conventional geo-visualization techniques for narratives often encounter several prominent challenges, including data acquisition and integration challenges and semantic challenges. To address these challenges, in this paper we propose the idea of narrative cartography with knowledge graphs (KGs). First, to tackle the data acquisition and integration challenge, we develop a set of KG-based GeoEnrichment toolboxes that allow users to search and retrieve relevant data from an integrated cross-domain knowledge graph within a GISystem for narrative mapping. With the help of this tool, the data retrieved from the KG is materialized directly in GIS format, ready for spatial analysis and mapping. Two use cases, Magellan's expedition and World War II, are presented to demonstrate the effectiveness of this approach. At the same time, several limitations are identified from this approach, such as data incompleteness, semantic incompatibility, and the semantic challenges of geo-visualization. For the latter two limitations, we propose a modular ontology for narrative cartography, which formalizes both the map content (Map Content module) and the geo-visualization process (Cartography module). We demonstrate that by representing the map content and the geo-visualization process in KGs (an ontology), we can achieve data reusability and map reproducibility for narrative cartography.
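As one way to picture the KG-based GeoEnrichment workflow, the hedged sketch below pulls georeferenced entities from a public knowledge graph endpoint and materializes them as GeoJSON, which GISystems load directly. The Wikidata endpoint and property IDs are real, but the query and parsing are illustrative rather than the toolbox's actual code.

```python
# Sketch: retrieve World War II battle locations from Wikidata, then emit
# GeoJSON ready for spatial analysis and mapping.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://query.wikidata.org/sparql",
                       agent="narrative-cartography-sketch/0.1")
sparql.setQuery("""
SELECT ?place ?placeLabel ?coord WHERE {
  ?place wdt:P31 wd:Q178561 ;   # instance of: battle
         wdt:P361 wd:Q362 ;     # part of: World War II
         wdt:P625 ?coord .      # coordinate location
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 50
""")
sparql.setReturnFormat(JSON)
rows = sparql.query().convert()["results"]["bindings"]

# Materialize the results as GeoJSON features.
features = []
for r in rows:
    lon, lat = r["coord"]["value"].strip("Point()").split()  # WKT "Point(lon lat)"
    features.append({
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [float(lon), float(lat)]},
        "properties": {"label": r["placeLabel"]["value"]},
    })
geojson = {"type": "FeatureCollection", "features": features}
```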
Knowledge graphs (KGs) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKGs) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning, have yielded promising performance on various tasks in Natural Language Processing (NLP). However, though PLMs with huge numbers of parameters can effectively possess rich knowledge learned from massive training text and benefit downstream tasks at the fine-tuning stage, they still have some limitations such as poor reasoning ability due to the lack of external knowledge. Incorporating knowledge into PLMs has been explored to tackle these issues. In this paper, we present a comprehensive review of Knowledge-Enhanced Pre-trained Language Models (KE-PLMs) to provide a clear insight into this thriving field. We introduce appropriate taxonomies respectively for Natural Language Understanding (NLU) and Natural Language Generation (NLG) to highlight the focus of these two kinds of tasks. For NLU, we take several types of knowledge into account and divide them into four categories: linguistic knowledge, text knowledge, knowledge graph (KG), and rule knowledge. The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods. Finally, we point out some promising future directions of KE-PLMs.
Natural Language Processing (NLP) has been revolutionized by the use of Pre-trained Language Models (PLMs) such as BERT. Despite setting new records in nearly every NLP task, PLMs still face a number of challenges including poor interpretability, weak reasoning capability, and the need for a lot of expensive annotated data when applied to downstream tasks. By integrating external knowledge into PLMs, Knowledge-Enhanced Pre-trained Language Models (KEPLMs) have the potential to overcome the above-mentioned limitations. In this paper, we examine KEPLMs systematically through a series of studies. Specifically, we outline the common types and different formats of knowledge to be integrated into KEPLMs, detail the existing methods for building and evaluating KEPLMs, present the applications of KEPLMs in downstream tasks, and discuss the future research directions. Researchers will benefit from this survey by gaining a quick and comprehensive overview of the latest developments in this field.
Knowledge base question answering (KBQA) aims to answer natural language questions with the help of an external knowledge base. The core idea is to find the link between the internal knowledge behind the question and the known triples of the knowledge base. A typical KBQA pipeline contains several steps, including entity recognition, relation extraction, and entity linking. Such a pipeline approach means that errors in any step will inevitably propagate to the final prediction. To address this problem, this paper proposes a Corpus Generation-Retrieval Method (CGRM) with a pre-trained language model (PLM) and knowledge graph (KG). First, based on the mT5 model, we design two new pre-training tasks, paragraph-based knowledge-masked language modeling and question generation, to obtain a knowledge-enhanced T5 (KT5) model. Second, after preprocessing the knowledge graph with a series of heuristic rules, the KT5 model generates natural language QA pairs based on the processed triples. Finally, we answer questions directly by retrieving over the synthetic dataset. We test our method on the NLPCC-ICCPOL 2016 KBQA dataset, and the results show that our framework improves the performance of KBQA and that this straightforward method is competitive with the state of the art.
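The corpus-generation step can be pictured with the toy sketch below: preprocessed triples are verbalized into QA pairs. The paper does this generation with the trained KT5 model; the fixed templates here are only meant to show the shape of the synthetic dataset.

```python
# Toy triple-to-QA verbalization; relation names and templates are invented.
TEMPLATES = {
    "has_capital": ("What is the capital of {h}?", "{t} is the capital of {h}."),
    "written_by":  ("Who wrote {h}?",              "{h} was written by {t}."),
}

def triples_to_qa(triples):
    """Turn (head, relation, tail) triples into (question, answer) pairs."""
    pairs = []
    for h, r, t in triples:
        if r in TEMPLATES:                  # heuristic-rule filtering
            q, a = (s.format(h=h, t=t) for s in TEMPLATES[r])
            pairs.append((q, a))
    return pairs

print(triples_to_qa([("France", "has_capital", "Paris")]))
# [('What is the capital of France?', 'Paris is the capital of France.')]
```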
Recent years have witnessed the resurgence of knowledge engineering, featured by the fast growth of knowledge graphs. However, most existing knowledge graphs are represented with pure symbols, which hurts the machine's capability to understand the real world. The multi-modalization of knowledge graphs is an inevitable key step towards the realization of human-level machine intelligence. The results of this endeavor are Multi-modal Knowledge Graphs (MMKGs). In this survey on MMKGs constructed from texts and images, we first give definitions of MMKGs, followed by preliminaries on multi-modal tasks and techniques. We then systematically review the challenges, progress, and opportunities in the construction and application of MMKGs respectively, with detailed analyses of the strengths and weaknesses of different solutions. We conclude this survey with open research problems relevant to MMKGs.
In the commercial aviation domain, there is a large volume of documents, such as accident reports (NTSB, ASRS) and regulatory directives (ADs). There is a need to access these diverse repositories efficiently to serve needs in the aviation industry, such as maintenance, compliance, and safety. In this paper, we propose a knowledge graph (KG) guided deep learning (DL) based question answering (QA) system for aviation safety. We construct a knowledge graph from aircraft accident reports and contribute this resource to the community of researchers. The efficacy of this resource is tested and proven by the aforesaid QA system. Natural language queries constructed from the above documents are converted into SPARQL (the interface language of RDF graph databases) queries and answered. On the DL side, we have two different QA models: (i) BERT QA, which is a pipeline of passage retrieval (sentence-based) and question answering (BERT-based), and (ii) the recently released GPT-3. We evaluate our system on a set of queries created from the accident reports. Our combined QA system achieves a 9.3% increase in accuracy over GPT-3 and a 40.3% increase over BERT QA. Thus, we infer that KG-DL performs better than either alone.
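To illustrate the query-conversion step, here is a hedged sketch of pattern-based NL-to-SPARQL translation. The av: prefix, the predicates, and the accident identifier format are all hypothetical; the paper's KG uses its own schema.

```python
# Sketch: rewrite a pattern-matched question into a SPARQL query over a
# hypothetical accident-report KG schema.
import re

PATTERNS = [
    (re.compile(r"which aircraft was involved in accident (\S+)", re.I),
     "PREFIX av: <http://example.org/aviation#>\n"
     "SELECT ?aircraft WHERE {{ av:{0} av:involvedAircraft ?aircraft . }}"),
    (re.compile(r"where did accident (\S+) occur", re.I),
     "PREFIX av: <http://example.org/aviation#>\n"
     "SELECT ?loc WHERE {{ av:{0} av:location ?loc . }}"),
]

def question_to_sparql(question):
    question = question.rstrip("?. ")          # drop trailing punctuation
    for pattern, template in PATTERNS:
        m = pattern.search(question)
        if m:
            return template.format(m.group(1))
    raise ValueError("no pattern matched: " + question)

print(question_to_sparql("Where did accident NTSB-2019-001 occur?"))
```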
In this paper, we focus on the problem of modeling dynamic geo-human interactions in streams for online POI recommendation. Specifically, we formulate the in-stream geo-human interaction modeling problem into a novel deep interactive reinforcement learning framework, where the agent is the recommender and the action is the next POI to visit. We uniquely model the reinforcement learning environment as a joint and connected composition of users and geospatial contexts (POIs, POI categories, functional zones). Events in which users visit POIs in the stream update the states of both users and geospatial contexts; the agent perceives the updated environment state to make online recommendations. Specifically, we model the mixed-user event stream by unifying all users, visits, and geospatial contexts into a dynamic knowledge graph stream, in order to model human-human, geo-human, and geo-geo interactions. We design an exit mechanism to address the challenge of expired information, devise a meta-path method to address the challenge of recommendation candidate generation, develop a new deep policy network structure to address the challenge of the varying action space, and propose an effective adversarial training method for optimization. Finally, we present extensive experiments to demonstrate the improved performance of our method.
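The exit mechanism lends itself to a small sketch: visit events stream into the dynamic graph, and interactions older than a time window are evicted so that the agent's state reflects only fresh geo-human interactions. The window length and event fields below are illustrative.

```python
# Sketch of a dynamic interaction stream with an exit mechanism.
from collections import Counter, deque

class DynamicKGStream:
    def __init__(self, window_secs=86_400):
        self.window = window_secs
        self.events = deque()      # (timestamp, user, poi), oldest first
        self.live = Counter()      # multiplicity of live (user, poi) edges

    def push(self, ts, user, poi):
        self.events.append((ts, user, poi))
        self.live[(user, poi)] += 1
        self._expire(ts)

    def _expire(self, now):
        # Exit mechanism: evict interactions that fell out of the window.
        while self.events and now - self.events[0][0] > self.window:
            _, user, poi = self.events.popleft()
            self.live[(user, poi)] -= 1
            if self.live[(user, poi)] == 0:
                del self.live[(user, poi)]

    def neighbors(self, user):
        """Live POIs a user recently interacted with (for state encoding)."""
        return {p for (u, p) in self.live if u == user}

stream = DynamicKGStream(window_secs=3600)
stream.push(1000, "u1", "cafe_42")
stream.push(5000, "u1", "museum_7")   # 4,000 s later: cafe visit expires
print(stream.neighbors("u1"))          # {'museum_7'}
```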
Machine learning methods, especially deep neural networks, have achieved great success, but many of them often rely on a number of labeled samples for training. In real-world applications, we often need to address sample shortage caused by, e.g., dynamic contexts with emerging prediction targets and costly sample annotation. Therefore, low-resource learning, which aims to learn robust prediction models with insufficient resources (especially training samples), is now being widely investigated. Among all low-resource learning studies, many prefer to utilize auxiliary information in the form of a knowledge graph (KG), which is becoming more and more popular for knowledge representation, to reduce the reliance on labeled samples. In this survey, we very comprehensively review over 90 papers on two major low-resource learning settings: zero-shot learning (ZSL), where the classes to predict have never appeared in training, and few-shot learning (FSL), where the classes to predict have only a small number of labeled samples available. We first introduce the KGs used in ZSL and FSL studies as well as existing and potential KG construction solutions, and then systematically categorize and summarize KG-aware ZSL and FSL methods, dividing them into different paradigms such as mapping-based, data augmentation, propagation-based, and optimization-based. We next present different applications, including KG-augmented prediction tasks in computer vision and natural language processing as well as KG completion tasks, and some typical evaluation resources for each task. We finally discuss some challenges and future directions on aspects such as new learning and reasoning paradigms and the construction of high-quality KGs.
Practices in the built environment have become more digitalized with the rapid development of modern design and construction technologies. However, the need of practitioners and scholars to access complex professional knowledge in the built environment has not yet been well satisfied. In this paper, more than 80,000 paper abstracts in the built environment field were obtained to build a knowledge graph, a knowledge base storing entities and their connective relations in a graph-structured data model. To ensure the retrieval accuracy of the entities and relations in the knowledge graph, two well-annotated datasets have been created, containing 2,000 instances for the named entity recognition task and 1,450 instances in 29 relations for the relation extraction task. These two tasks were solved by two BERT-based models trained on the proposed datasets. Both models attained an accuracy above 85% on these two tasks. More than 200,000 high-quality relations and entities were obtained using these models to extract all abstract data. Finally, this knowledge graph is presented as a self-developed visualization system to reveal relations between various entities in the domain. Both the source code and the annotated dataset can be found here: https://github.com/HKUST-KnowComp/BEKG.
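For readers unfamiliar with the setup, the sketch below shows how a BERT-based NER model of this kind is typically assembled with Hugging Face transformers. The checkpoint, label set, and example sentence are placeholders rather than the paper's release, and the classification head here is untrained.

```python
# Sketch of assembling a BERT-based NER model; the head is randomly
# initialized, so fine-tuning on the 2,000-instance annotated NER dataset
# would be required before predictions become meaningful.
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          pipeline)

labels = ["O", "B-MATERIAL", "I-MATERIAL", "B-METHOD", "I-METHOD"]  # placeholders
model_name = "bert-base-cased"  # placeholder; the paper trains on its own corpus
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)

ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(ner("BIM models support energy simulation in green building design."))
```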
Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question on a large-scale Knowledge Graph (KG). To cope with the vast search space, existing work usually adopts a two-stage approach: it firstly retrieves a relatively small subgraph related to the question and then performs the reasoning on the subgraph to accurately find the answer entities. Although these two stages are highly related, previous work employs very different technical solutions for developing the retrieval and reasoning models, neglecting their relatedness in task essence. In this paper, we propose UniKGQA, a novel approach for the multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning. For model architecture, UniKGQA consists of a semantic matching module based on a pre-trained language model (PLM) for question-relation semantic matching, and a matching information propagation module to propagate the matching information along the edges on KGs. For parameter learning, we design a shared pre-training task based on question-relation matching for both retrieval and reasoning models, and then propose retrieval- and reasoning-oriented fine-tuning strategies. Compared with previous studies, our approach is more unified, tightly relating the retrieval and reasoning stages. Extensive experiments on three benchmark datasets have demonstrated the effectiveness of our method on the multi-hop KGQA task. Our codes and data are publicly available at https://github.com/RUCAIBox/UniKGQA.
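A much-simplified sketch of the two components: a scorer stands in for PLM-based question-relation semantic matching, and its scores are propagated along KG edges so entities several hops from the topic entity accumulate matching evidence. This is an illustration of the idea, not UniKGQA's implementation.

```python
# Toy semantic matching + matching-information propagation.
def relation_score(question, relation):
    """Stand-in for PLM-based question-relation semantic matching."""
    q = set(question.lower().split())
    toks = relation.split("_")
    return sum(t in q for t in toks) / len(toks) or 0.1

def propagate(edges, question, seeds, hops=2):
    """edges: list of (head, relation, tail). seeds: topic entities."""
    score = {e: 1.0 for e in seeds}
    for _ in range(hops):
        nxt = dict(score)
        for h, r, t in edges:
            if h in score:
                # Matching information flows along edges, weighted by how
                # well the edge's relation matches the question.
                w = score[h] * relation_score(question, r)
                nxt[t] = max(nxt.get(t, 0.0), w)
        score = nxt
    return score

edges = [("Inception", "directed_by", "Nolan"), ("Nolan", "born_in", "London")]
print(propagate(edges, "who directed Inception and where was he born",
                {"Inception"}))
```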
This paper studies recommender systems with knowledge graphs, which can effectively address the problems of data sparsity and cold start. Recently, a variety of methods have been developed for this problem, which generally try to learn effective representations of users and items and then match items to users according to their representations. Although these methods have been shown to be quite effective, they lack good explanations, which are critical to recommender systems. In this paper, we take a different route and propose generating recommendations through meaningful paths from users to items. Specifically, we formulate the problem as a sequential decision process, where the target user is defined as the initial state and the edges in the graph are defined as actions. We shape the rewards according to existing state-of-the-art methods and then train a policy function with the policy gradient method. Experimental results on three real-world datasets show that our proposed method not only provides effective recommendations but also offers good explanations.
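A compact REINFORCE sketch of this formulation is given below: the state is the current node on a path starting at the target user, the actions are outgoing edges, and shaped terminal rewards drive the policy gradient update. The graph, rewards, and network sizes are toy stand-ins.

```python
# Toy policy-gradient path recommender over a tiny graph.
import torch
import torch.nn as nn

n_nodes, emb = 6, 16
adj = {0: [1, 2], 1: [3, 4], 2: [4, 5], 3: [], 4: [], 5: []}
reward = {3: 0.2, 4: 1.0, 5: 0.5}   # shaped reward for reaching each item

node_emb = nn.Embedding(n_nodes, emb)
scorer = nn.Bilinear(emb, emb, 1)    # scores (current node, candidate) pairs
opt = torch.optim.Adam(
    list(node_emb.parameters()) + list(scorer.parameters()), lr=1e-2)

for episode in range(200):
    state, log_probs = 0, []         # node 0 is the target user
    while adj[state]:
        cands = torch.tensor(adj[state])
        cur = node_emb(torch.tensor([state])).expand(len(cands), -1)
        logits = scorer(cur, node_emb(cands)).squeeze(-1)
        dist = torch.distributions.Categorical(logits=logits)
        a = dist.sample()
        log_probs.append(dist.log_prob(a))
        state = adj[state][a.item()]
    # REINFORCE: reinforce the whole path by its terminal reward.
    loss = -torch.stack(log_probs).sum() * reward[state]
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The sampled path itself (user, edges taken, item reached) doubles as the explanation for the recommendation.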
Over the years, reasoning over knowledge graphs (KGs), which aims to infer new conclusions from known facts, has mostly focused on static KGs. The unceasing growth of knowledge in real life raises the necessity of enabling the inductive reasoning ability on expanding KGs. Existing inductive work assumes that new entities all emerge at once in a batch, which oversimplifies the real scenario in which new entities continually appear. This study dives into a more realistic and challenging setting where new entities emerge in multiple batches. We propose a walk-based inductive reasoning model to tackle this new setting. Specifically, a graph convolutional network with adaptive relation aggregation is designed to encode and update entities using their neighboring relations. To capture the varying importance of neighbors, we employ a query-aware feedback attention mechanism during the aggregation. Furthermore, to alleviate the sparse link problem of new entities, we propose a link augmentation strategy to add trustworthy facts into KGs. We construct three new datasets for simulating this multi-batch emergence scenario. Experimental results show that our proposed model outperforms state-of-the-art embedding-based, walk-based, and rule-based models.
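The aggregation step can be distilled into the sketch below: a new entity's embedding is assembled from its neighboring (relation, entity) pairs, with attention weights conditioned on the query relation. The dimensions and the dot-product attention form are simplifications of the paper's GCN, not its exact design.

```python
# Toy query-aware aggregation for encoding a newly emerged entity.
import torch
import torch.nn.functional as F

def encode_new_entity(neighbors, query_rel, rel_emb, ent_emb, dim=32):
    """neighbors: list of (relation_id, entity_id) around the new entity."""
    msgs, keys = [], []
    for r, e in neighbors:
        msgs.append(rel_emb(torch.tensor(r)) + ent_emb(torch.tensor(e)))
        keys.append(rel_emb(torch.tensor(r)))
    msgs, keys = torch.stack(msgs), torch.stack(keys)
    q = rel_emb(torch.tensor(query_rel))
    # Query-aware attention: neighbors whose relation matches the query
    # relation contribute more to the new entity's representation.
    att = F.softmax(keys @ q / dim ** 0.5, dim=0)
    return (att.unsqueeze(-1) * msgs).sum(0)

rel_emb = torch.nn.Embedding(10, 32)
ent_emb = torch.nn.Embedding(100, 32)
vec = encode_new_entity([(1, 7), (2, 8)], query_rel=1,
                        rel_emb=rel_emb, ent_emb=ent_emb)
```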
Over the past few years, large knowledge bases have been constructed to store massive amounts of knowledge. However, these knowledge bases are highly incomplete, for example, over 70% of people in Freebase have no known place of birth. To solve this problem, we propose a query-driven knowledge base completion system with multimodal fusion of unstructured and structured information. To effectively fuse unstructured information from the Web and structured information in knowledge bases to achieve good performance, our system builds multimodal knowledge graphs based on question answering and rule inference. We propose a multimodal path fusion algorithm to rank candidate answers based on different paths in the multimodal knowledge graphs, achieving much better performance than question answering, rule inference and a baseline fusion algorithm. To improve system efficiency, query-driven techniques are utilized to reduce the runtime of our system, providing fast responses to user queries. Extensive experiments have been conducted to demonstrate the effectiveness and efficiency of our system.
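One simple instance of path fusion is sketched below: each candidate answer is supported by paths from different sources (question answering, rule inference), and per-path confidences are combined into a fused score. The noisy-OR combination shown is a common choice for fusing independent evidence, not necessarily the paper's exact formula.

```python
# Toy path-fusion ranking over candidate answers.
def noisy_or(confidences):
    """P(answer correct) if each supporting path is independent evidence."""
    p_wrong = 1.0
    for c in confidences:
        p_wrong *= 1.0 - c
    return 1.0 - p_wrong

def rank_candidates(paths_by_candidate):
    """paths_by_candidate: {answer: [path confidence, ...]}."""
    scores = {a: noisy_or(cs) for a, cs in paths_by_candidate.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_candidates({
    "London": [0.6, 0.5],   # supported by a QA path and a rule path
    "Paris":  [0.7],
}))
# [('London', 0.8), ('Paris', 0.7)] -- two weaker paths beat one stronger one
```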
To alleviate the challenge of building knowledge graphs (KGs) from scratch, a more general task is to enrich a KG with triples from an open corpus, where the obtained triples contain noisy entities and relations. It is challenging to enrich a KG with newly harvested triples while maintaining the quality of the knowledge representation. This paper proposes a system to refine a KG using information harvested from an additional corpus. To this end, we formulate the task as two coupled sub-tasks, namely joint event extraction (JEE) and knowledge graph fusion (KGF). We then propose a Collaborative Knowledge Graph Fusion framework that allows our sub-tasks to mutually assist each other in an alternating manner. More concretely, an explorer carries out the JEE, supervised by both the ground-truth annotations and an existing KG provided by a supervisor. The supervisor then evaluates the triples extracted by the explorer and enriches the KG with those that are highly ranked. To implement this evaluation, we further propose a translated relation consistency scoring mechanism to align and translate the extracted triples to the prior KG. Experiments verify that this collaboration improves the performance of both JEE and KGF.
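The consistency-scoring idea can be sketched in a few lines, assuming a TransE-style embedding space learned from the prior KG: an extracted triple ranks highly when its tail embedding sits near head + relation. The embeddings below are random stand-ins; with trained embeddings, triples consistent with the KG would score highest.

```python
# Toy translation-based consistency scoring for extracted triples.
import numpy as np

rng = np.random.default_rng(0)
ent = {e: rng.normal(size=32) for e in ["Paris", "France", "Berlin"]}
rel = {"capital_of": rng.normal(size=32)}

def consistency(h, r, t):
    """Higher is better: negative TransE distance ||h + r - t||."""
    return -np.linalg.norm(ent[h] + rel[r] - ent[t])

extracted = [("Paris", "capital_of", "France"),
             ("Berlin", "capital_of", "France")]
ranked = sorted(extracted, key=lambda tr: consistency(*tr), reverse=True)
# Only the top-ranked triples would be merged into the KG.
```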
Natural Language Processing (NLP) is one of the core techniques in AI software. As AI is being applied to more and more domains, how to efficiently develop high-quality domain-specific language models becomes a critical question in AI software engineering. Existing domain-specific language model development processes mostly focus on learning a domain-specific pre-trained language model (PLM); when training the domain task-specific language model based on PLM, only a direct (and often unsatisfactory) fine-tuning strategy is adopted commonly. By enhancing the task-specific training procedure with domain knowledge graphs, we propose KnowledgeDA, a unified and low-code domain language model development service. Given domain-specific task texts input by a user, KnowledgeDA can automatically generate a domain-specific language model following three steps: (i) localize domain knowledge entities in texts via an embedding-similarity approach; (ii) generate augmented samples by retrieving replaceable domain entity pairs from two views of both knowledge graph and training data; (iii) select high-quality augmented samples for fine-tuning via confidence-based assessment. We implement a prototype of KnowledgeDA to learn language models for two domains, healthcare and software development. Experiments on five domain-specific NLP tasks verify the effectiveness and generalizability of KnowledgeDA. (Code is publicly available at https://github.com/RuiqingDing/KnowledgeDA.)
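Step (ii) invites a small sketch: an augmented sample is generated by swapping a localized domain entity for a replaceable sibling drawn from the knowledge graph (here, entities sharing a type). The tiny KG, texts, and label are invented for illustration.

```python
# Toy KG-driven entity-replacement augmentation (KnowledgeDA step ii).
KG_SIBLINGS = {
    "aspirin":  ["ibuprofen", "naproxen"],   # same type: NSAID
    "diabetes": ["hypertension"],            # same type: chronic disease
}

def augment(text, label):
    """Yield (augmented_text, label) pairs via KG entity replacement."""
    for entity, siblings in KG_SIBLINGS.items():
        if entity in text:
            for sib in siblings:
                yield text.replace(entity, sib), label

sample = ("aspirin is commonly prescribed for patients with diabetes",
          "treatment")
for aug in augment(*sample):
    print(aug)
# Step (iii) would then keep only augmentations passing a confidence check.
```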
Knowledge graphs (KGs) have emerged as a great tool for distilling information from large natural language text corpora. Answering natural language questions over knowledge graphs is crucial for human consumption of this information. This problem is typically addressed by converting the natural language query into a structured query and then firing the structured query on the KG. Direct-answering models over knowledge graphs are rare in the literature. Both query-conversion models and direct models require specific training data pertaining to the domain of the knowledge graph. In this work, we convert natural language questions over knowledge graphs into inference problems over premise-hypothesis pairs. Using trained deep learning models for the converted proxy inference problem, we provide the solution for the original natural language query problem. Our method achieves over 90% accuracy on the MetaQA dataset, beating the existing state of the art. We also propose a model for inference called Hierarchical Recurrent Path Encoder (HRPE). The inference models can be fine-tuned to be used across domains with less training data. Our approach does not require large domain-specific training data for querying new knowledge graphs from different domains.
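The reformulation can be sketched as follows: KG facts are verbalized into a premise, and each candidate answer instantiates the question into a hypothesis, so that an inference model can score entailment. The templates and candidate set are illustrative; the paper trains its own inference models, including HRPE.

```python
# Toy conversion of a KGQA instance into premise-hypothesis pairs.
def to_premise(triples):
    return " ".join(f"{h} {r.replace('_', ' ')} {t}." for h, r, t in triples)

def to_hypotheses(question_template, candidates):
    return [(c, question_template.format(answer=c)) for c in candidates]

triples = [("Inception", "directed_by", "Christopher Nolan")]
premise = to_premise(triples)
hypotheses = to_hypotheses("The film Inception was directed by {answer}.",
                           ["Christopher Nolan", "James Cameron"])
for cand, hyp in hypotheses:
    print(f"premise: {premise!r}  hypothesis: {hyp!r}  candidate: {cand}")
# The candidate whose hypothesis is entailed by the premise is the answer.
```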