Many ontologies, i.e., Description Logic (DL) knowledge bases, have been developed to provide rich knowledge about various domains, and many of them are based on ALC, a prototypical and expressive DL, or its extensions. The main task in exploring ALC ontologies is computing semantic entailment. Symbolic approaches can guarantee sound and complete semantic entailment but are sensitive to inconsistency and missing information. To this end, we propose FALCON, a fuzzy ALC ontology neural reasoner. FALCON uses fuzzy logic operators to generate single model structures for arbitrary ALC ontologies, and uses multiple model structures to compute semantic entailments. Theoretical results show that FALCON is guaranteed to be a sound and complete algorithm for computing semantic entailments over ALC ontologies. Experimental results show that FALCON enables not only approximate reasoning (reasoning over incomplete ontologies) and paraconsistent reasoning (reasoning over inconsistent ontologies), but also improves machine learning in the biomedical domain by incorporating background knowledge from ALC ontologies.
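For intuition, here is a minimal Python sketch of fuzzy ALC semantics over a finite domain, assuming a product t-norm for conjunction and a Lukasiewicz implication for subsumption; FALCON's actual operators, model-generation procedure, and entailment criterion are not reproduced here.

```python
import numpy as np

# Concepts are membership vectors over a finite domain; roles are fuzzy
# relation matrices. Operator choices below are illustrative assumptions.

def c_and(c, d):        # fuzzy conjunction (product t-norm)
    return c * d

def c_not(c):           # standard fuzzy negation
    return 1.0 - c

def c_exists(r, c):     # fuzzy existential restriction (exists r.C)
    # r: |domain| x |domain| fuzzy relation, c: membership vector
    return (r * c[None, :]).max(axis=1)

def subsumption_degree(c, d):   # degree to which C is subsumed by D
    return float(np.minimum(1.0, 1.0 - c + d).min())  # Lukasiewicz implication

def entailed(models, axiom_degree, tau=0.9):
    # An axiom is (approximately) entailed if it holds, to degree tau,
    # in every generated model structure.
    return all(axiom_degree(m) >= tau for m in models)
```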
Conversational recommender systems (CRS) aim to capture users' current intentions and provide recommendations through real-time multi-turn conversational interactions. As human-machine interaction systems, it is essential for CRS to improve the user experience. However, most CRS methods neglect the importance of user experience. In this paper, we identify two key points for CRS to improve the user experience: (1) speak like a human, since humans can speak in different styles according to the current dialogue context; and (2) identify fine-grained intentions, since even for the same utterance, different users may have diverse fine-grained intentions that are related to their inherent preferences. Based on these observations, we propose a novel CRS model, the Customized Conversational Recommender System (CCRS), which customizes CRS for each user from three perspectives. For human-like dialogue services, we propose a multi-style dialogue response generator, which selects a context-aware speaking style for utterance generation. To provide personalized recommendations, we extract the user's current fine-grained intentions from the dialogue context under the guidance of the user's inherent preferences. Finally, to customize the model parameters for each user, we train the model from the perspective of meta-learning. Extensive experiments and a series of analyses demonstrate the superiority of CCRS on both the recommendation and dialogue services.
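As a rough illustration of preference-guided intent extraction (the names, shapes, and attention scheme below are our assumptions, not CCRS's actual architecture), the user's inherent preference embedding can attend over dialogue-context embeddings to yield a fine-grained intent vector:

```python
import torch
import torch.nn.functional as F

def extract_intent(pref, context):
    # pref: (d,) inherent preference embedding of the user
    # context: (n, d) embeddings of entities/words in the dialogue context
    scores = context @ pref                 # relevance of each context item
    attn = F.softmax(scores, dim=0)         # (n,) attention over the context
    return attn @ context                   # (d,) fine-grained intent vector

pref = torch.randn(64)
context = torch.randn(10, 64)
intent = extract_intent(pref, context)
```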
Many ontologies, i.e., Description Logic (DL) knowledge bases, have been developed to provide rich knowledge about various domains. An ontology consists of an ABox, i.e., assertion axioms between two entities or between a concept and an entity, and a TBox, i.e., terminological axioms between two concepts. Neural logical reasoning (NLR) is a fundamental task for exploring such knowledge bases, which aims to answer multi-hop queries with logical operations based on distributed representations of queries and answers. Although previous NLR methods can give specific entity-level answers, i.e., ABox answers, they cannot provide descriptive concept-level answers, i.e., TBox answers, where each concept is a description of a set of entities. In other words, previous NLR methods reason over only the ABox of an ontology while ignoring the TBox. In particular, providing TBox answers enables the inference of explanations for each query via descriptive concepts, which makes the answers understandable to users and is of great usefulness in the field of applied ontology. In this work, we formulate the problem of neural logical reasoning over the whole TBox and ABox (TA-NLR), which requires addressing the challenges that arise when incorporating, representing, and operating on concepts. We propose an original solution for TA-NLR named TAR. Specifically, we first incorporate ontology axioms to provide the source of concepts. We then represent concepts and queries as fuzzy sets, i.e., sets whose elements have degrees of membership, to bridge concepts and queries with entities. Moreover, on top of the fuzzy set representations of concepts and queries, we design operators involving concepts for optimization and inference. Extensive experimental results on two real-world datasets demonstrate the effectiveness of TAR for TA-NLR.
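To illustrate the fuzzy-set view (the concrete operators in TAR may differ), a concept or query can be represented as a membership vector over the entity set, so that entity-level (ABox) and concept-level (TBox) answers can be read off the same representation:

```python
import numpy as np

# Godel-style fuzzy set operators; a sketch, not TAR's actual operator design.
def q_and(q1, q2): return np.minimum(q1, q2)
def q_or(q1, q2):  return np.maximum(q1, q2)
def q_not(q):      return 1.0 - q

def abox_answers(query, tau=0.5):
    # Entities whose membership degree in the query fuzzy set is high.
    return np.nonzero(query >= tau)[0]

def tbox_answers(query, concepts, tau=0.9):
    # A concept answers the query if the query fuzzy set "contains" it,
    # measured here by Lukasiewicz fuzzy subsumption (an assumption).
    return [name for name, c in concepts.items()
            if np.minimum(1.0, 1.0 - c + query).min() >= tau]
```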
Most real-world knowledge graphs (KGs) are far from complete and comprehensive. This problem motivates predicting the most plausible missing facts to complete a given KG, i.e., knowledge graph completion (KGC). However, existing KGC methods suffer from two main issues: (1) the false negative issue, i.e., the sampled negative training instances may include potential true facts; and (2) the data sparsity issue, i.e., true facts account for only a tiny part of all possible facts. To this end, we propose positive-unlabeled learning with adversarial data augmentation (PUDA) for KGC. In particular, PUDA tailors a positive-unlabeled risk estimator for the KGC task to deal with the false negative issue. Furthermore, to address the data sparsity issue, PUDA achieves a data augmentation strategy by unifying adversarial training and positive-unlabeled learning under a positive-unlabeled minimax game. Extensive experimental results on real-world benchmark datasets demonstrate the effectiveness and compatibility of our proposed method.
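For concreteness, here is a sketch of a non-negative positive-unlabeled risk in the style of nnPU (Kiryo et al., 2017), as one standard instance of a PU risk estimator; the estimator PUDA tailors for KGC may differ in form:

```python
import torch

def nn_pu_risk(scores_pos, scores_unl, prior,
               loss=torch.nn.functional.softplus):
    # scores_*: raw triple scores for positive / unlabeled instances;
    # prior: assumed class prior pi of true facts among unlabeled ones.
    risk_pos = loss(-scores_pos).mean()      # positives labeled as positive
    risk_pos_neg = loss(scores_pos).mean()   # positives labeled as negative
    risk_unl_neg = loss(scores_unl).mean()   # unlabeled labeled as negative
    neg_part = risk_unl_neg - prior * risk_pos_neg
    # Clamping keeps the estimated negative risk non-negative (nnPU trick).
    return prior * risk_pos + torch.clamp(neg_part, min=0.0)
```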
The cold-start problem remains a very challenging problem in recommender systems. Fortunately, the interactions of cold-start users in an auxiliary source domain can help with cold-start recommendations in the target domain. How to transfer users' preferences from the source domain to the target domain is the key issue in cross-domain recommendation (CDR), which is a promising solution to the cold-start problem. Most existing methods model a common preference bridge to transfer preferences for all users. Intuitively, since preferences vary from user to user, the preference bridges of different users should be different. Along this line, we propose a novel framework named Personalized Transfer of User Preferences for Cross-Domain Recommendation (PTUPCDR). Specifically, a meta network fed with users' characteristic embeddings is learned to generate personalized bridge functions, achieving personalized preference transfer for each user. To learn the meta network stably, we employ a task-oriented optimization procedure. With the meta-generated personalized bridge function, a user's preference embedding in the source domain can be transformed into the target domain, and the transformed embedding can be used as the initial embedding for the cold-start user in the target domain. Using large real-world datasets, we conduct extensive experiments to evaluate the effectiveness of PTUPCDR on both the cold-start and warm-start stages. The code is available at https://github.com/easezyc/wsdm2022-ptupcdr.
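A minimal sketch of the meta-network idea follows: the user's characteristic embedding (here assumed to be mean-pooled source-domain item embeddings) is mapped to the weights of a personalized linear bridge applied to the user's source embedding. Layer sizes and the pooling scheme are illustrative assumptions, not PTUPCDR's exact configuration.

```python
import torch
import torch.nn as nn

class MetaBridge(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.dim = dim
        # Meta network: characteristic embedding -> bridge weight matrix.
        self.meta = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, dim * dim))

    def forward(self, user_src_emb, src_item_embs):
        char = src_item_embs.mean(dim=0)              # user characteristic
        w = self.meta(char).view(self.dim, self.dim)  # personalized bridge
        return w @ user_src_emb                       # transferred embedding

bridge = MetaBridge(dim=32)
# Transferred embedding serves as the cold-start user's initial target embedding.
u_tgt_init = bridge(torch.randn(32), torch.randn(15, 32))
```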
With the development of deep learning techniques, hyperspectral image super-resolution methods based on convolutional neural networks have recently achieved great progress. However, due to the high dimensionality and complex spectral characteristics of hyperspectral data, single hyperspectral image super-resolution remains a challenging problem, as it is difficult to capture spatial and spectral information simultaneously. To handle this issue, we propose a novel Feedback Refined Local-Global Network (FRLGN) for the super-resolution of hyperspectral images. Specifically, we develop a new feedback structure and a local-global spectral block to alleviate the difficulty of spatial and spectral feature extraction. The feedback structure can transfer high-level information to guide the generation of low-level features, and is realized through a recurrent structure with finite unfoldings. Furthermore, to make effective use of the high-level information passed back, a local-global spectral block is constructed to handle the feedback connections. The local-global spectral block utilizes the fed-back high-level information to correct the low-level features from local spectral bands and generates powerful high-level representations across global spectral bands. By combining the feedback structure and the local-global spectral block, FRLGN can fully exploit the spatial-spectral correlations among spectral bands and progressively reconstruct high-resolution hyperspectral images. The source code of FRLGN is available at https://github.com/tangzhenjie/frlgn.
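The feedback mechanism can be sketched as a recurrent loop with a fixed number of unfoldings, where the high-level state from step t-1 refines the low-level features at step t. The convolutional block below is a placeholder for the local-global spectral block, and upsampling is omitted for brevity; this is a schematic, not FRLGN's actual design.

```python
import torch
import torch.nn as nn

class FeedbackSR(nn.Module):
    def __init__(self, bands, feats=64, steps=3):
        super().__init__()
        self.steps = steps
        self.extract = nn.Conv2d(bands, feats, 3, padding=1)
        # Placeholder for the local-global spectral block: it fuses the
        # low-level features with the fed-back high-level state.
        self.block = nn.Conv2d(2 * feats, feats, 3, padding=1)
        self.reconstruct = nn.Conv2d(feats, bands, 3, padding=1)

    def forward(self, lr):
        low = self.extract(lr)
        hidden = torch.zeros_like(low)             # no feedback at step 0
        outs = []
        for _ in range(self.steps):
            hidden = self.block(torch.cat([low, hidden], dim=1))
            outs.append(self.reconstruct(hidden))  # one output per unfolding
        return outs
```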
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the available data remain scarce. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos elicited from 671 participants and annotated by more than 20 annotators throughout three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments, objectively verifying the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate the research of automatic MER, and provide a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
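As one generic remedy for the class-imbalance problem mentioned above (an illustration, not the specific solution adopted in the DFME experiments), training clips can be sampled with probability inversely proportional to their class frequency:

```python
import torch
from torch.utils.data import WeightedRandomSampler

def balanced_sampler(labels):
    # labels: integer class label per training clip.
    labels = torch.as_tensor(labels)
    counts = torch.bincount(labels).float()
    weights = 1.0 / counts[labels]          # rarer classes get larger weights
    return WeightedRandomSampler(weights, num_samples=len(labels),
                                 replacement=True)
```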
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) to segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
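A sketch of the CLIP-driven idea, assuming a frozen CLIP text encoder and an illustrative prompt template; how the class embedding is fused with image features in the Universal Model is only hinted at in a comment:

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

def class_embedding(name):
    # Encode the class name with a frozen CLIP text encoder.
    prompt = f"a computerized tomography of a {name}"   # assumed template
    tokens = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        return text_encoder(**tokens).pooler_output.squeeze(0)  # (512,)

# The class embedding can then parameterize a per-class prediction head,
# e.g. by generating the weights of a 1x1 convolution over image features.
liver_emb = class_embedding("liver")
```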
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages to encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving a holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save computational cost, the Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct the self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract informative and discriminative feature representations at different stages. All of them are realized by employing newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from the videos, and illustrate that MSTAT can achieve state-of-the-art accuracies on various standard benchmarks.
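The cost-saving factorization that the STA modules rely on can be sketched as follows: attend over spatial tokens within each frame, then over temporal positions for each spatial location, instead of full joint space-time attention. Proxy embeddings and the multi-stage wiring are omitted; module names and shapes below are assumptions.

```python
import torch
import torch.nn as nn

class FactorizedSTA(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                    # x: (B, T, N, D) patch tokens
        B, T, N, D = x.shape
        s = x.reshape(B * T, N, D)
        s, _ = self.spatial(s, s, s)         # attention across patches
        t = s.reshape(B, T, N, D).permute(0, 2, 1, 3).reshape(B * N, T, D)
        t, _ = self.temporal(t, t, t)        # attention across frames
        return t.reshape(B, N, T, D).permute(0, 2, 1, 3)
```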