As an important variant of entity alignment (EA), multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) that carry multiple modalities such as images. However, current MMEA algorithms all adopt KG-level modality fusion strategies and ignore modality differences among individual entities, which hurts robustness to the noise potentially involved in modalities (e.g., unidentifiable images and relations). In this paper, we present MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities for instance-level feature fusion. A modal-aware hard entity replay strategy is also proposed to address vague entity details. Extensive experimental results show that our model not only achieves SOTA performance in multiple training scenarios, including supervised, unsupervised, iterative, and low-resource, but also has a limited number of parameters, competitive speed, and good interpretability. Our code will be available soon.
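To make the instance-level fusion idea above concrete, the following minimal sketch predicts per-entity weights over modality embeddings with a small attention layer and combines them into a joint representation. The module name, tensor shapes, and the use of a standard multi-head attention block are illustrative assumptions, not the MEAformer implementation.

```python
# Hypothetical sketch of instance-level modality fusion: per-entity weights over
# modality embeddings are predicted by a small attention layer and used to build
# one joint embedding per entity. Names and dimensions are illustrative only.
import torch
import torch.nn as nn

class DynamicModalityFusion(nn.Module):
    def __init__(self, dim: int, num_modalities: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, modal_embs: torch.Tensor) -> torch.Tensor:
        # modal_embs: (batch, num_modalities, dim), one row per modality
        # (e.g., graph structure, relation, attribute, image).
        hidden, _ = self.attn(modal_embs, modal_embs, modal_embs)
        weights = torch.softmax(self.score(hidden).squeeze(-1), dim=-1)  # (batch, M)
        # Weighted sum gives an entity-specific joint embedding.
        return (weights.unsqueeze(-1) * modal_embs).sum(dim=1)

fusion = DynamicModalityFusion(dim=128, num_modalities=4)
joint = fusion(torch.randn(32, 4, 128))  # (32, 128)
```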
In knowledge graph completion (KGC), predicting triples involving emerging entities and/or relations, which are unseen when the KG embeddings are learned, has become a critical challenge. Subgraph reasoning with message passing is a promising and popular solution. Some recent methods have achieved good performance, but they (i) usually can only predict triples involving unseen entities alone, failing to address the more realistic fully inductive situation with both unseen entities and unseen relations, and (ii) often conduct message passing over the entities without fully utilizing the relation patterns. In this study, we propose a new method named RMPI, which uses a novel Relational Message Passing network for fully Inductive KGC. It passes messages directly between relations so that relation patterns are fully exploited for subgraph reasoning, supported by new techniques for graph transformation, graph pruning, relation-aware neighborhood attention, and handling empty subgraphs, and it can also utilize the relation semantics defined in the KG's ontological schema. Extensive evaluation on multiple benchmarks shows the effectiveness of the techniques involved in RMPI and its better performance compared with existing methods that support fully inductive KGC. RMPI also achieves very promising results that are comparable to state-of-the-art partially inductive KGC methods. Our codes and data are available at https://github.com/zjukg/RMPI.
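The toy sketch below illustrates one way relational message passing can be set up: the KG is transformed into a relation graph whose nodes are relations (connected whenever two relations share an entity), and features are then averaged between neighboring relation nodes. Both the graph construction and the plain averaging update are simplifying assumptions for illustration, not RMPI's actual network.

```python
# Illustrative relational message passing: relations become nodes of a relation
# graph, and simple averaged messages are exchanged between neighboring relations.
from collections import defaultdict

def build_relation_graph(triples):
    """triples: iterable of (head, relation, tail). Returns relation adjacency."""
    by_entity = defaultdict(set)
    for h, r, t in triples:
        by_entity[h].add(r)
        by_entity[t].add(r)
    adj = defaultdict(set)
    for rels in by_entity.values():
        for r1 in rels:
            for r2 in rels:
                if r1 != r2:
                    adj[r1].add(r2)
    return adj

def propagate(adj, rel_feats, steps=2):
    """Average-neighbor updates; rel_feats must cover every relation in the triples."""
    feats = dict(rel_feats)
    for _ in range(steps):
        new_feats = {}
        for r, value in feats.items():
            nbrs = adj.get(r, set())
            new_feats[r] = sum(feats[n] for n in nbrs) / len(nbrs) if nbrs else value
        feats = new_feats
    return feats

triples = [("a", "born_in", "x"), ("x", "located_in", "y"), ("a", "works_for", "z")]
adj = build_relation_graph(triples)
feats = propagate(adj, {"born_in": 1.0, "located_in": 0.0, "works_for": 0.5})
```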
Visual question answering (VQA) often requires an understanding of visual concepts and language semantics, which relies on external knowledge. Most existing methods exploit pre-trained language models and/or unstructured text, but the knowledge in these resources is often incomplete and noisy. Some methods prefer to use knowledge graphs (KGs), which often contain intensive structured knowledge, but this line of research is still quite preliminary. In this paper, we propose LaKo, a knowledge-driven VQA method via late knowledge-to-text injection. To effectively incorporate an external KG, we transfer triples into text and propose a late injection mechanism. Finally, we address VQA as a text generation task with an effective encoder-decoder paradigm. In evaluation on the OKVQA dataset, our method achieves state-of-the-art results.
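A minimal sketch of the triples-to-text idea follows: KG triples are verbalized into a flat passage and concatenated with the question and a caption-like visual context before being handed to a sequence-to-sequence model. The function names and the prompt layout are assumptions made for this example and do not reproduce LaKo's exact input format.

```python
# Sketch of "triples-to-text + late injection": triples are verbalized and appended
# to the question before generation. Prompt format is an assumption for illustration.
def verbalize_triples(triples):
    """Turn (head, relation, tail) triples into a flat textual passage."""
    return " ".join(f"{h} {r.replace('_', ' ')} {t}." for h, r, t in triples)

def build_vqa_input(question, visual_context, triples):
    knowledge = verbalize_triples(triples)
    return f"question: {question} context: {visual_context} knowledge: {knowledge}"

prompt = build_vqa_input(
    "What sport can you use this for?",
    "a frisbee on the grass",
    [("frisbee", "used_for", "ultimate"), ("ultimate", "is_a", "team sport")],
)
# The prompt is then passed to a seq2seq model that generates the answer text.
```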
Zero-shot learning (ZSL) aims to predict unseen classes whose samples never appeared during training, often utilizing additional semantic information (a.k.a. side information) to bridge the training (seen) classes and the unseen classes. One of the most effective and widely used kinds of semantic information for zero-shot image classification is attributes, i.e., annotations of class-level visual characteristics. However, due to the shortage of fine-grained annotations, as well as attribute imbalance and co-occurrence, current methods often fail to distinguish the subtle visual differences between images, which limits their performance. In this paper, we propose a transformer-based end-to-end ZSL method named DUET, which integrates latent semantic knowledge from pre-trained language models (PLMs) via a self-supervised multi-modal learning paradigm. Specifically, we (1) develop a cross-modal semantic grounding network to investigate the model's capability to disentangle semantic attributes from images, (2) apply an attribute-level contrastive learning strategy to further enhance the model's discrimination of fine-grained visual characteristics against attribute co-occurrence and imbalance, and (3) propose a multi-task learning policy that considers multiple model objectives. Through extensive experiments on three standard ZSL benchmarks and a knowledge-graph-equipped ZSL benchmark, we find that DUET often achieves state-of-the-art performance, its components are effective, and its predictions are interpretable.
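The snippet below gives a generic InfoNCE-style sketch of what an attribute-level contrastive objective can look like: an anchor attribute representation is pulled toward a positive and pushed away from sampled negatives. The tensor shapes, temperature, and sampling scheme are assumptions; this is not DUET's exact loss.

```python
# Generic attribute-level contrastive loss (InfoNCE-style), for illustration only.
import torch
import torch.nn.functional as F

def attribute_contrastive_loss(anchor, positive, negatives, temperature=0.07):
    # anchor, positive: (batch, dim); negatives: (batch, num_neg, dim)
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_logit = (anchor * positive).sum(-1, keepdim=True) / temperature       # (B, 1)
    neg_logits = torch.einsum("bd,bnd->bn", anchor, negatives) / temperature  # (B, N)
    logits = torch.cat([pos_logit, neg_logits], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)  # positive sits at index 0
    return F.cross_entropy(logits, labels)

loss = attribute_contrastive_loss(
    torch.randn(8, 256), torch.randn(8, 256), torch.randn(8, 16, 256)
)
```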
Knowledge graphs (KGs) and their ontological variants have been widely used for knowledge representation and have proven very effective in augmenting zero-shot learning (ZSL). However, existing ZSL methods that utilize KGs all ignore the intrinsic complexity of the inter-class relationships represented in KGs. One typical feature is that a class is often related to other classes in different semantic aspects. In this paper, we focus on ontologies for augmenting ZSL and propose to learn disentangled ontology embeddings, guided by ontology properties, to capture and utilize more fine-grained class relationships in different aspects. We also contribute a new ZSL framework named DOZSL, which contains two new ZSL solutions, based on a generative model and a graph propagation model respectively, that effectively exploit the disentangled ontology embeddings. Extensive evaluations have been conducted on five benchmarks across zero-shot image classification (ZS-IMGC) and zero-shot KG completion (ZS-KGC). DOZSL often performs better than the state of the art, and its components are verified by ablation studies and case studies. Our code and datasets are available at https://github.com/zjukg/dozsl.
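As a loose illustration of disentangled class embeddings, the sketch below keeps one embedding table per semantic aspect and combines the aspect-specific vectors only when a class representation is requested. The aspect names and the concatenation-based combination are assumptions for this example rather than the DOZSL design.

```python
# Hypothetical disentangled class embeddings: one vector per semantic aspect,
# combined only at prediction time. Aspect names are illustrative assumptions.
import torch
import torch.nn as nn

class DisentangledClassEmbedding(nn.Module):
    def __init__(self, num_classes, aspects=("taxonomy", "attribute", "literal"), dim=64):
        super().__init__()
        self.aspects = aspects
        self.tables = nn.ModuleDict({a: nn.Embedding(num_classes, dim) for a in aspects})

    def forward(self, class_ids: torch.Tensor) -> torch.Tensor:
        # Concatenate the aspect-specific vectors into one class representation.
        return torch.cat([self.tables[a](class_ids) for a in self.aspects], dim=-1)

emb = DisentangledClassEmbedding(num_classes=50)
vectors = emb(torch.tensor([0, 3, 7]))  # (3, 3 * 64)
```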
Machine learning methods, especially deep neural networks, have achieved great success, but many of them often rely on a number of labeled samples for training. In real-world applications, we often need to address sample shortage caused by, e.g., dynamic contexts with emerging prediction targets and costly sample annotation. Therefore, low-resource learning, which aims to learn robust prediction models without sufficient resources (especially training samples), is now being widely investigated. Among all the low-resource learning studies, many prefer to utilize auxiliary information in the form of a knowledge graph (KG), which is becoming increasingly popular for knowledge representation, to reduce the reliance on labeled samples. In this survey, we comprehensively review 90 papers on two major low-resource learning settings: zero-shot learning (ZSL), where the classes to be predicted never appear in training, and few-shot learning (FSL), where the new classes to be predicted have only a small number of labeled samples available. We first introduce the KGs used in ZSL and FSL studies as well as existing and potential KG construction solutions, and then systematically categorize and summarize KG-aware ZSL and FSL methods, dividing them into paradigms such as mapping-based, data-augmentation, propagation-based, and optimization-based. We next present different applications, including KG-augmented prediction tasks in computer vision and natural language processing as well as KG completion tasks, together with some typical evaluation resources for each task. We finally discuss challenges and future directions on aspects such as new learning and reasoning paradigms and the construction of high-quality KGs.
External knowledge (a.k.a. side information) plays a critical role in zero-shot learning (ZSL), which aims to predict classes that never appear in the training data. Several kinds of external knowledge, such as text and attributes, have been widely investigated, but on their own they suffer from incomplete semantics. Therefore, some recent studies propose to use knowledge graphs (KGs) due to their high expressivity and compatibility for representing knowledge. However, the ZSL community still lacks standard benchmarks for studying and comparing different external knowledge settings and different KG-based ZSL methods. In this paper, we propose six resources covering three tasks: zero-shot image classification (ZS-IMGC), zero-shot relation extraction (ZS-RE), and zero-shot KG completion (ZS-KGC). Each resource has a normal ZSL benchmark and a KG containing semantics ranging from text and attributes to relational knowledge and logical expressions. We clearly present these resources, including their construction, statistics, data formats, and usage w.r.t. different ZSL methods. More importantly, we conduct a comprehensive benchmarking study with two general state-of-the-art methods, two setting-specific methods, and one interpretable method. We discuss and compare different ZSL paradigms w.r.t. different external knowledge settings, and find that our resources have great potential for developing more advanced ZSL methods as well as more solutions for applying KGs to augment machine learning. All resources are available at https://github.com/china-uk-zsl/resources_for_kzsl.
As natural language processing (NLP) for gender bias becomes a significant interdisciplinary topic, prevalent data-driven techniques such as large-scale language models suffer from data inadequacy and biased corpora, especially for languages with insufficient resources such as Chinese. To this end, we propose CORGI-PM, a Chinese cOrpus foR Gender bIas Probing and Mitigation, which contains 32.9k sentences with high-quality labels derived by following an annotation scheme specifically developed for gender bias in the Chinese context. Moreover, we address three challenges for automatic textual gender bias mitigation, which require models to detect, classify, and mitigate textual gender bias. We also conduct experiments with state-of-the-art language models to provide baselines. To the best of our knowledge, CORGI-PM is the first sentence-level Chinese corpus for gender bias probing and mitigation.
Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns. To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
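The toy model below sketches the kind of mapping described above: stacked WiFi amplitude and phase channels are encoded by a small CNN and decoded into per-pixel body-part logits (24 regions plus background) and per-part UV coordinates. All channel counts, layer sizes, and output resolutions are assumptions for illustration and do not reproduce the paper's architecture.

```python
# Toy WiFi-to-dense-pose network: assumed shapes and layer sizes, illustration only.
import torch
import torch.nn as nn

class WiFiDensePoseNet(nn.Module):
    def __init__(self, in_channels=6, num_parts=24):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.part_head = nn.Conv2d(128, num_parts + 1, 1)  # part logits (+ background)
        self.uv_head = nn.Conv2d(128, 2 * num_parts, 1)    # U and V per part

    def forward(self, csi):
        # csi: (batch, in_channels, H, W) stacked amplitude/phase "images"
        feats = self.encoder(csi)
        return self.part_head(feats), self.uv_head(feats)

net = WiFiDensePoseNet()
parts, uv = net(torch.randn(2, 6, 96, 96))  # (2, 25, 96, 96), (2, 48, 96, 96)
```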
Long document retrieval aims to fetch query-relevant documents from a large-scale collection, where knowledge distillation has become the de facto approach to improving a retriever by mimicking a heterogeneous yet powerful cross-encoder. However, in contrast to passages or sentences, retrieval over long documents suffers from the scope hypothesis that a long document may cover multiple topics. This magnifies their structural heterogeneity and poses a granularity-mismatch issue, leading to inferior distillation efficacy. In this work, we propose a new learning framework, fine-grained distillation (FGD), for long-document retrievers. While preserving the conventional dense retrieval paradigm, it first produces globally consistent representations across different levels of granularity and then applies multi-granular aligned distillation only during training. In experiments, we evaluate our framework on two long-document retrieval benchmarks, where it shows state-of-the-art performance.
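A generic sketch of multi-granular distillation is shown below: the student retriever's candidate scores are matched to the cross-encoder teacher's scores with a KL term at more than one granularity (here, document level and passage level), and the terms are simply summed. The granularity split and the unweighted sum are assumptions, not the exact FGD recipe.

```python
# Generic listwise distillation at two granularities; illustrative, not FGD itself.
import torch
import torch.nn.functional as F

def listwise_kl(student_scores, teacher_scores, temperature=1.0):
    # scores: (batch, num_candidates)
    s = F.log_softmax(student_scores / temperature, dim=-1)
    t = F.softmax(teacher_scores / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean")

def multi_granular_distill_loss(student, teacher):
    """student / teacher: dicts with 'doc' and 'passage' score tensors."""
    return listwise_kl(student["doc"], teacher["doc"]) + \
           listwise_kl(student["passage"], teacher["passage"])

loss = multi_granular_distill_loss(
    {"doc": torch.randn(4, 8), "passage": torch.randn(4, 32)},
    {"doc": torch.randn(4, 8), "passage": torch.randn(4, 32)},
)
```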