Knowledge graphs (KGs) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKGs) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG and CKG completion suffer from long-tail relations and newly added relations that lack enough known triples for training. In light of this, few-shot KG completion (FKGC), which combines the strengths of graph representation learning and few-shot learning, has been proposed to tackle the problem of limited annotated data. In this paper, we comprehensively survey previous attempts at these tasks in the form of a series of methods and applications. Specifically, we first introduce the challenges of FKGC and the commonly used KGs and CKGs. Then we systematically categorize and summarize existing works according to the type of KG and the method employed. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions for FKGC.
Machine Learning as a service (MLaaS) permits resource-limited clients to access powerful data analytics services ubiquitously. Despite its merits, MLaaS poses significant concerns regarding the integrity of delegated computation and the privacy of the server's model parameters. To address these concerns, Zhang et al. (CCS'20) initiated the study of zero-knowledge Machine Learning (zkML). A few zkML schemes have been proposed since; however, each focuses on a single ML classification algorithm, which may not offer satisfactory accuracy or may require large-scale training data and model parameters, undesirable for some applications. We propose ezDPS, a new efficient and zero-knowledge ML inference scheme. Unlike prior works, ezDPS is a zkML pipeline in which the data is processed in multiple stages for high accuracy. Each stage of ezDPS is harnessed with an established ML algorithm that has been shown to be effective in various applications, including Discrete Wavelet Transformation, Principal Components Analysis, and Support Vector Machine. We design new gadgets to prove ML operations effectively. We fully implemented ezDPS and assessed its performance on real datasets. Experimental results show that ezDPS is one to three orders of magnitude more efficient than the generic circuit-based approach in all metrics while maintaining more desirable accuracy than single ML classification approaches.
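For a concrete picture of the computation being proven, the following is a minimal plaintext sketch of the three-stage DWT -> PCA -> SVM pipeline using PyWavelets and scikit-learn. The zero-knowledge proof layer of ezDPS is out of scope here, and the synthetic data, wavelet choice, and component count are illustrative assumptions rather than the paper's settings.

```python
# Plaintext sketch of the ezDPS-style inference pipeline (no ZK proofs here).
# Data shapes, "db4" wavelet, and 16 principal components are assumptions.
import numpy as np
import pywt                              # PyWavelets
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))          # 200 synthetic signals of length 128
y = rng.integers(0, 2, size=200)         # binary labels (placeholder data)

# Stage 1: Discrete Wavelet Transform -- keep the approximation coefficients
# as a denoised, compressed representation of each signal.
X_dwt = np.stack([pywt.dwt(x, "db4")[0] for x in X])

# Stage 2: Principal Components Analysis -- reduce dimensionality.
pca = PCA(n_components=16).fit(X_dwt)
X_pca = pca.transform(X_dwt)

# Stage 3: Support Vector Machine -- final classification.
clf = SVC(kernel="rbf").fit(X_pca, y)
print("train accuracy:", clf.score(X_pca, y))
```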
Solar activity is usually caused by the evolution of solar magnetic fields. Magnetic field parameters derived from photospheric vector magnetograms of solar active regions have been used to analyze and forecast eruptive events such as solar flares and coronal mass ejections. Unfortunately, the most recent solar cycle 24 was relatively weak with few large flares, though it is the only solar cycle in which consistent time-sequence vector magnetograms have been available through the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) since its launch in 2010. In this paper, we look into another major instrument, namely the Michelson Doppler Imager (MDI) on board the Solar and Heliospheric Observatory (SOHO), operating from 1996 to 2010. The data archive of SOHO/MDI covers the more active solar cycle 23 with many large flares. However, SOHO/MDI provides only line-of-sight (LOS) magnetograms. We propose a new deep learning method, named MagNet, to learn from combined LOS magnetograms, Bx, and By taken by SDO/HMI, along with H-alpha observations collected by the Big Bear Solar Observatory (BBSO), and to generate vector components Bx' and By', which form vector magnetograms together with the observed LOS data. In this way, we can expand the availability of vector magnetograms to the period from 1996 to the present. Experimental results demonstrate the good performance of the proposed method. To our knowledge, this is the first time that deep learning has been used to generate photospheric vector magnetograms of solar active regions for SOHO/MDI using SDO/HMI and H-alpha data.
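As a rough illustration of the image-to-image setup MagNet implies, co-aligned LOS and H-alpha channels mapped to the transverse components Bx' and By', here is a hedged PyTorch sketch. The toy convolutional network below is an assumption for illustration only, not the actual MagNet architecture.

```python
# Hedged sketch: 2 input channels (LOS magnetogram + H-alpha) -> 2 output
# channels (Bx', By'). Depth, widths, and patch size are all assumptions.
import torch
import torch.nn as nn

class ToyVectorNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),   # in: LOS + H-alpha
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),              # out: Bx', By'
        )

    def forward(self, x):
        return self.net(x)

los_and_halpha = torch.randn(1, 2, 256, 256)  # one synthetic active-region patch
bxy = ToyVectorNet()(los_and_halpha)          # predicted (Bx', By') maps
print(bxy.shape)                              # torch.Size([1, 2, 256, 256])
```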
One-to-one matching is a key design in DETR for establishing its end-to-end capability, so that object detection does not require a hand-crafted NMS (non-maximum suppression) method to remove duplicate detections. This end-to-end signature is important to DETR's versatility, and it has been generalized to a broad range of vision problems, including instance/semantic segmentation, human pose estimation, and point-cloud/multi-view-based detection. However, we note that one-to-one matching significantly reduces the training efficacy of positive samples, because too few queries are assigned as positives. This paper proposes a simple yet effective method based on a hybrid matching scheme that combines the original one-to-one matching branch with auxiliary queries that use a one-to-many matching loss during training. This hybrid strategy has been shown to significantly improve training efficiency and accuracy. At inference, only the original one-to-one matching branch is used, thus maintaining the end-to-end advantage and the same inference efficiency as DETR. The method, named $\mathcal{H}$-DETR, shows that a wide range of representative DETR methods can be consistently improved across various vision tasks, including Deformable-DETR, 3DETR/PETRv2, PETR, and TransTrack, among others. Code will be available at: https://github.com/hdetr
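The core of the hybrid scheme can be conveyed with a toy matching step: a Hungarian assignment for the one-to-one branch and a top-k assignment for the one-to-many auxiliary branch. The L1 box cost and the query/ground-truth counts below are placeholders; the real $\mathcal{H}$-DETR cost combines classification and box terms over a separate auxiliary query set.

```python
# Hedged sketch of hybrid matching: one-to-one (Hungarian) plus one-to-many
# (top-k) assignment of queries to ground truths. Cost is a toy L1 distance.
import torch
from scipy.optimize import linear_sum_assignment

queries = torch.rand(300, 4)   # predicted boxes from 300 queries (toy)
gts = torch.rand(5, 4)         # 5 ground-truth boxes (toy)
cost = torch.cdist(gts, queries, p=1)          # [num_gt, num_queries]

# One-to-one branch: a bipartite (Hungarian) assignment.
gt_idx, q_idx = linear_sum_assignment(cost.numpy())

# One-to-many auxiliary branch: each ground truth claims its k cheapest
# queries, yielding many more positives for the auxiliary training loss.
k = 6
topk_idx = cost.topk(k, dim=1, largest=False).indices  # [num_gt, k]

print("one-to-one pairs:", list(zip(gt_idx, q_idx)))
print("one-to-many matches per GT:", topk_idx.shape)
```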
Segmentation has traditionally been formulated as a complete-label pixel classification task, predicting each pixel's class from a fixed set of predefined semantic categories shared by all images or videos. Following this formulation, however, standard architectures inevitably encounter various challenges under more realistic settings where the scope of categories scales up (e.g., beyond the level of 1K classes). On the other hand, a typical image or video contains only a few categories, i.e., a small fraction of the complete label set. In this paper, we propose decomposing segmentation into two sub-problems: (i) image-level or video-level multi-label classification and (ii) pixel-level adaptively-selected-label classification. Given an input image or video, our framework first performs multi-label classification over the complete label set and selects a small subset according to the class confidence scores. We then perform pixel-wise classification over only the selected labels with a rank-adaptive pixel classifier, which uses a set of rank-oriented learnable temperature parameters to adjust the pixel classification scores. Our approach is conceptually general and can improve various existing segmentation frameworks by simply adding a lightweight multi-label classification head and a rank-adaptive pixel classifier. We demonstrate the effectiveness of our framework with competitive experimental results on four tasks, including image semantic segmentation, image panoptic segmentation, video instance segmentation, and video semantic segmentation. In particular, with our RankSeg, Mask2Former gains +0.8%/+0.7%/+0.7% on the ADE20K panoptic segmentation / YouTube-VIS 2019 video instance segmentation / VSPW video semantic segmentation benchmarks, respectively.
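A hedged sketch of the two-step decomposition follows: an image-level multi-label head first selects a small label subset, and pixels are then classified only over that subset, with a learnable per-rank temperature rescaling the scores. All shapes and the hard top-k selection rule are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch: (i) image-level multi-label selection, then (ii) pixel
# classification restricted to the selected labels with rank temperatures.
import torch

num_classes, k = 1000, 8
feat = torch.randn(1, 256, 64, 64)                  # backbone features (toy)
img_head = torch.nn.Linear(256, num_classes)
pix_head = torch.nn.Conv2d(256, num_classes, 1)
rank_temp = torch.nn.Parameter(torch.ones(k))       # rank-oriented temperatures

# (i) image-level multi-label classification over the complete label set,
# keeping the k most confident labels as the adaptive subset.
img_logits = img_head(feat.mean(dim=(2, 3)))        # [1, num_classes]
topk = img_logits.topk(k, dim=1).indices[0]         # selected label subset

# (ii) pixel-level classification over only the selected labels, with each
# label's score divided by a temperature tied to its confidence rank.
pix_logits = pix_head(feat)[0, topk]                # [k, 64, 64]
pix_logits = pix_logits / rank_temp.view(k, 1, 1)
seg = topk[pix_logits.argmax(dim=0)]                # per-pixel class ids
print(seg.shape)                                    # torch.Size([64, 64])
```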
Backpropagation is widely used to compute gradients in deep neural networks (DNNs). Often applied together with stochastic gradient descent (SGD) or its variants, backpropagation is considered the de-facto choice in a variety of machine learning tasks, including DNN training and adversarial attacks/defenses. Recently, a linear variant of BP named LinBP was introduced by Guo et al. for generating more transferable adversarial examples in black-box adversarial attacks. However, it has not been theoretically studied, and a convergence analysis of such a method is lacking. This paper serves as an extension of Guo et al.'s work, providing theoretical analyses of LinBP in neural-network-involved learning tasks, including adversarial attack and model training. We show that, compared with BP, LinBP can lead to faster convergence in these tasks under the same hyper-parameter settings. We confirm our theoretical results with extensive experiments.
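To make the notion of "backpropagating linearly" concrete, here is a minimal sketch of a ReLU whose backward pass ignores the activation mask and lets gradients flow as if the unit were the identity, following Guo et al.'s LinBP idea; its integration into a full attack or training loop is omitted.

```python
# Hedged sketch of LinBP's core trick: standard ReLU forward, linear backward.
import torch

class LinBPReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clamp(min=0)          # standard ReLU forward pass

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out                # linear backward: skip the ReLU mask

x = torch.randn(4, requires_grad=True)
y = LinBPReLU.apply(x).sum()
y.backward()
print(x.grad)                          # all ones, even where x < 0
```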
Given an input facial photo, the goal of caricature generation is to produce a stylized, exaggerated caricature that shares the same identity as the photo. It requires simultaneous style transfer and shape exaggeration with rich diversity, while preserving the identity of the input. To address this challenging problem, we propose a novel framework named Multi-Warping GAN (MW-GAN), comprising a style network and a geometric network designed to perform style transfer and geometric exaggeration, respectively. We bridge the gap between the style and the landmarks of an image through a dual-way design, so as to generate caricatures with arbitrary styles and geometric exaggeration, which can be specified via latent codes or from a given caricature sample by random sampling. In addition, we apply an identity-preserving loss in both image space and landmark space, leading to a great improvement in the quality of the generated caricatures. Experiments show that caricatures generated by MW-GAN have better quality than those of existing methods.
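As one plausible reading of an identity-preserving loss applied in both image space and landmark space, the sketch below combines a cosine identity term with an L1 landmark term. The embeddings and landmarks are placeholders for a face-recognition network and a landmark detector, and the exact loss form in MW-GAN may differ.

```python
# Hedged sketch of a two-space identity-preserving loss (assumed form).
import torch
import torch.nn.functional as F

def identity_preserving_loss(emb_photo, emb_caricature, lm_photo, lm_caricature):
    # Image-space term: keep identity embeddings of photo and caricature close.
    id_loss = 1 - F.cosine_similarity(emb_photo, emb_caricature, dim=-1).mean()
    # Landmark-space term: keep facial landmark layouts consistent.
    lm_loss = F.l1_loss(lm_photo, lm_caricature)
    return id_loss + lm_loss

e_p, e_c = torch.randn(1, 512), torch.randn(1, 512)    # identity embeddings
l_p, l_c = torch.rand(1, 68, 2), torch.rand(1, 68, 2)  # 68 facial landmarks
print(identity_preserving_loss(e_p, e_c, l_p, l_c))
```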
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task that frees people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise segmentation performance on the target domain. A key idea for tackling this problem is to perform image-level and feature-level adaptation jointly. Unfortunately, such unified approaches for UDA tasks are lacking in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses the previous SOTA by 8%, achieving 58.2% in mIoU.
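To convey what image-level photometric alignment can look like in its simplest form, the sketch below matches the per-channel mean and standard deviation of a target image to assumed source-domain statistics. The paper's global photometric alignment module is more involved, so treat this only as an illustration of the image-level idea.

```python
# Hedged sketch: channel-wise statistic matching as a toy stand-in for
# image-level photometric alignment. The source statistics are made up.
import torch

def match_channel_stats(target_img, src_mean, src_std):
    """Shift/scale each channel of target_img to source-domain statistics."""
    t_mean = target_img.mean(dim=(1, 2), keepdim=True)
    t_std = target_img.std(dim=(1, 2), keepdim=True) + 1e-6
    return (target_img - t_mean) / t_std * src_std + src_mean

target = torch.rand(3, 512, 1024)                    # a target-domain image
aligned = match_channel_stats(
    target,
    src_mean=torch.tensor([0.45, 0.43, 0.41]).view(3, 1, 1),  # assumed stats
    src_std=torch.tensor([0.23, 0.22, 0.22]).view(3, 1, 1),
)
print(aligned.mean(dim=(1, 2)))                      # ~ the source means
```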
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from a trade-off between accuracy and efficiency. Recent advances in neural operators, a family of mesh-independent neural-network-based PDE solvers, have suggested the dawn of overcoming this challenge. In this emerging direction, the Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab for diverse applications involving partial differential equations.
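For intuition about the Koopman neural operator family (rather than KoopmanLab's actual API, which is not reproduced here), the following conceptual sketch lifts a field snapshot into a latent space of observables, advances it with a learned linear operator, and decodes it back to the physical field. All sizes are illustrative.

```python
# Hedged conceptual sketch of the Koopman idea behind KNO: encode, advance
# linearly in the lifted space, decode. Not KoopmanLab's interface.
import torch
import torch.nn as nn

latent = 64
encoder = nn.Linear(128, latent)            # lift field snapshot to observables
K = nn.Linear(latent, latent, bias=False)   # learned linear Koopman operator
decoder = nn.Linear(latent, 128)            # map observables back to the field

u_t = torch.randn(1, 128)                   # discretized field at time t (toy)
z = encoder(u_t)
for _ in range(10):                         # long-horizon rollout: iterate K
    z = K(z)
u_pred = decoder(z)                         # predicted field 10 steps ahead
print(u_pred.shape)                         # torch.Size([1, 128])
```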
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to obtain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns from a style reference video and encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and the style code. In order to integrate the reference speaking style into the generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
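A hedged sketch of the style-aware adaptation idea follows: a style code predicts per-channel scales that modulate a transformer feed-forward layer, so the same decoder produces different motion styles. The actual StyleTalk mechanism adapts the feed-forward weights themselves; this FiLM-like modulation is a simplified stand-in.

```python
# Hedged sketch: a style code modulating a feed-forward layer's activations.
# Dimensions and the tanh-based scaling are assumptions for illustration.
import torch
import torch.nn as nn

d_model, d_ff, d_style = 256, 1024, 128
ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
to_scale = nn.Linear(d_style, d_ff)         # style code -> per-channel scales

def style_aware_ff(x, style_code):
    h = ff[1](ff[0](x))                               # first FF projection
    h = h * (1 + torch.tanh(to_scale(style_code)))    # style-dependent scaling
    return ff[2](h)

tokens = torch.randn(1, 50, d_model)        # audio-content tokens (toy)
style = torch.randn(1, 1, d_style)          # style code from reference video
print(style_aware_ff(tokens, style).shape)  # torch.Size([1, 50, 256])
```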