Recently, deep learning has been successfully applied to unsupervised active learning. However, current methods attempt to learn non-linear transformations via auto-encoders while ignoring relations among samples, leaving considerable room to design more effective representation learning mechanisms for unsupervised active learning. In this paper, we propose a novel unsupervised active learning model with learnable graphs, named ALLG. ALLG benefits from learning optimal graph structures to acquire better sample representations and then selects representative samples. To make the learned graph structure more stable and effective, we take the $k$-nearest neighbor graph as a prior and learn a relation-propagation graph structure. We also incorporate shortcut connections among different layers, which can alleviate the well-known over-smoothing problem to a certain extent. To the best of our knowledge, this is the first attempt to leverage graph structure learning for unsupervised active learning. Extensive experiments on six datasets demonstrate the efficacy of our method.
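As a rough illustration of anchoring a learned adjacency to a kNN prior and adding layer-wise shortcut connections, the sketch below is a minimal stand-in rather than the authors' ALLG implementation; the module name `LearnableGraphLayer`, the mixing coefficient `alpha`, and all tensor shapes are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def knn_graph(x, k=10):
    """Binary kNN adjacency used as a prior (illustrative)."""
    dist = torch.cdist(x, x)                        # pairwise distances
    idx = dist.topk(k + 1, largest=False).indices   # nearest neighbors (includes self)
    prior = torch.zeros_like(dist)
    prior.scatter_(1, idx, 1.0)
    return prior

class LearnableGraphLayer(nn.Module):
    """One propagation layer with a learnable adjacency and a shortcut connection."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, dim, bias=False)   # learned pairwise relation scores
        self.proj = nn.Linear(dim, dim)

    def forward(self, h, prior, alpha=0.5):
        logits = self.score(h) @ h.t()                 # learned relations between samples
        adj = F.softmax(logits, dim=-1)
        adj = alpha * adj + (1 - alpha) * prior        # anchor the graph to the kNN prior
        return F.relu(self.proj(adj @ h)) + h          # shortcut eases over-smoothing

x = torch.randn(32, 64)                                # 32 samples, 64-d features
prior = knn_graph(x, k=5)
prior = prior / prior.sum(dim=1, keepdim=True)         # row-normalize the prior
layer = LearnableGraphLayer(64)
print(layer(x, prior).shape)                           # torch.Size([32, 64])
```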
Generative models have been widely studied in computer vision. Recently, diffusion models have drawn substantial attention due to the high quality of their generated images. A key desired property of image generative models is the ability to disentangle different attributes, which should enable modification towards a style without changing the semantic content, and the modification parameters should generalize to different images. Previous studies have found that generative adversarial networks (GANs) are inherently endowed with such disentanglement capability, so they can perform disentangled image editing without re-training or fine-tuning the network. In this work, we explore whether diffusion models are also inherently equipped with such a capability. Our finding is that for stable diffusion models, by partially changing the input text embedding from a neutral description (e.g., "a photo of person") to one with style (e.g., "a photo of person with smile") while fixing all the Gaussian random noises introduced during the denoising process, the generated images can be modified towards the target style without changing the semantic content. Based on this finding, we further propose a simple, light-weight image editing algorithm where the mixing weights of the two text embeddings are optimized for style matching and content preservation. This entire process only involves optimizing over around 50 parameters and does not fine-tune the diffusion model itself. Experiments show that the proposed method can modify a wide range of attributes, with the performance outperforming diffusion-model-based image-editing algorithms that require fine-tuning. The optimized weights generalize well to different images. Our code is publicly available at https://github.com/UCSB-NLP-Chang/DiffusionDisentanglement.
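A minimal sketch of the mixing-weight optimization described above, with the frozen diffusion pipeline replaced by a differentiable stub; the function `generate`, the loss terms, and the 77x768 embedding shape are illustrative assumptions and not the released implementation.

```python
import torch

# Stand-ins for the two frozen text embeddings (assumed CLIP-like shape: 77 tokens x 768 dims).
neutral_emb = torch.randn(77, 768)   # e.g. "a photo of person"
style_emb = torch.randn(77, 768)     # e.g. "a photo of person with smile"

# The per-token mixing weights are the only trainable parameters.
lam = torch.zeros(77, 1, requires_grad=True)
opt = torch.optim.Adam([lam], lr=0.05)

def generate(text_emb):
    """Placeholder for the frozen denoising loop run with fixed Gaussian noises."""
    return text_emb.mean(dim=0)      # stands in for the decoded image features

target_style = generate(style_emb)   # fully styled reference
original = generate(neutral_emb)     # unedited reference

for step in range(100):
    w = lam.sigmoid()                                   # keep weights in (0, 1)
    mixed = (1 - w) * neutral_emb + w * style_emb       # partially changed embedding
    img = generate(mixed)
    style_loss = (img - target_style).pow(2).mean()     # move toward the target style
    content_loss = (img - original).abs().mean()        # preserve the original content
    loss = style_loss + 0.5 * content_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```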
Bundle recommender systems recommend sets of items (e.g., pants, a shirt, and shoes) to users, but they often suffer from two problems: significant interaction sparsity and a large output space. In this work, we extend multi-round conversational recommendation (MCR) to alleviate these problems. MCR, an emerging recommendation setting that uses a conversational paradigm to elicit user interests by asking about preferences over tags (e.g., categories or attributes) and handles user feedback across multiple rounds to narrow down the output space, has not yet been explored in the context of bundle recommendation. In this work, we propose a novel recommendation task named Bundle MCR. We first propose a new framework that formulates Bundle MCR as a Markov decision process (MDP) with multiple agents for user modeling, consultation, and feedback handling. Under this framework, we propose a model, Bunt, to (1) recommend items, (2) post questions, and (3) manage conversations based on bundle-aware conversation states. Moreover, to train Bunt effectively, we propose a two-stage training strategy. In an offline pre-training stage, Bunt is trained with multiple cloze tasks to mimic bundle interactions in conversations; then, in an online fine-tuning stage, the Bunt agents are enhanced by user interactions. Our experiments on multiple offline datasets, as well as a human evaluation, show the value of extending the MCR framework to the bundle setting and the effectiveness of our Bunt design.
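The Bundle MCR formulation can be pictured as a multi-round loop in which separate agents ask about tags, process feedback, and recommend items; the toy simulation below only illustrates that loop with hand-written rules (the catalog, tags, and function names are invented for the example and are unrelated to the Bunt architecture).

```python
catalog = {"pants": "bottoms", "shirt": "tops", "shoes": "footwear"}
user_likes_tags = {"tops", "footwear"}              # hidden user preference (simulated)

state = {"bundle": [], "liked_tags": set(), "asked": set()}

def ask_tag(state):
    """Consultation step: pick a tag the user has not been asked about yet."""
    remaining = sorted(set(catalog.values()) - state["asked"])
    return remaining[0] if remaining else None

def recommend(state):
    """Recommendation step: pick catalog items matching confirmed tags."""
    return [item for item, tag in catalog.items()
            if tag in state["liked_tags"] and item not in state["bundle"]]

for turn in range(4):                               # multi-round conversation
    tag = ask_tag(state)
    if tag is not None:
        state["asked"].add(tag)
        if tag in user_likes_tags:                  # simulated user feedback
            state["liked_tags"].add(tag)
    state["bundle"].extend(recommend(state))

print(state["bundle"])                              # ['shoes', 'shirt'] with this toy data
```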
Action prediction aims to infer forthcoming human actions from partially observed videos, which is challenging due to the limited information provided by early observations. Existing methods mainly adopt a reconstruction strategy for this task, expecting to learn a single mapping function from partial observations to full videos to ease the prediction process. In this study, we propose an adversarial memory network (AMemNet) that generates "full-video" features conditioned on partial-video queries, from two new aspects. First, a key-value structured memory generator is designed to store different partial videos as key memories and to dynamically write full videos into value memories with a gating mechanism and query attention. Second, we develop a class-aware discriminator to guide the memory generator to deliver not only realistic but also discriminative full-video features during adversarial training. The final prediction of AMemNet is given by a late fusion of RGB and optical-flow streams. Extensive experimental results on two benchmark video datasets, UCF-101 and HMDB51, demonstrate the effectiveness of the proposed AMemNet model over state-of-the-art methods.
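A small sketch of the key-value memory read with query attention and gating described above; the class `KeyValueMemory`, the slot count, and the feature dimensions are assumptions, and the real AMemNet additionally writes full-video features into the value memories during training.

```python
import torch
import torch.nn.functional as F

class KeyValueMemory(torch.nn.Module):
    """Attention read over partial-video keys that returns full-video values (illustrative)."""
    def __init__(self, slots=128, dim=256):
        super().__init__()
        self.keys = torch.nn.Parameter(torch.randn(slots, dim))    # partial-video memories
        self.values = torch.nn.Parameter(torch.randn(slots, dim))  # full-video memories
        self.gate = torch.nn.Linear(dim, dim)

    def forward(self, query):
        attn = F.softmax(query @ self.keys.t() / self.keys.size(1) ** 0.5, dim=-1)
        read = attn @ self.values                       # query-attended full-video feature
        g = torch.sigmoid(self.gate(query))             # gate between query and memory read
        return g * read + (1 - g) * query

mem = KeyValueMemory()
partial_feat = torch.randn(4, 256)                      # features of 4 partially observed videos
full_feat = mem(partial_feat)                           # generated "full video" features
print(full_feat.shape)                                  # torch.Size([4, 256])
```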
Learning the dynamics of spatiotemporal events is a fundamental problem. Neural point processes improve the expressiveness of point process models with deep neural networks. However, most existing methods only consider temporal dynamics without spatial modeling. We propose the Deep Spatiotemporal Point Process (DeepSTPP), a deep dynamics model that integrates spatiotemporal point processes. Our method is flexible, efficient, and can accurately forecast irregularly sampled events over space and time. The key construction of our method is a nonparametric space-time intensity function governed by a latent process. The intensity function enjoys closed-form integration for the density, and the latent process captures the uncertainty of the event sequence. We use amortized variational inference to infer the latent process with deep networks. On synthetic datasets, we validate that our model can accurately learn the true intensity function; on real-world benchmark datasets, it demonstrates superior performance over state-of-the-art baselines.
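To make the closed-form integration concrete, the toy intensity below mixes normalized Gaussian spatial kernels with an exponentially decaying temporal kernel, so integrating over space leaves only the temporal term; the kernel choices, weights, and decay rate are illustrative assumptions rather than the DeepSTPP parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
centers = rng.uniform(0, 1, size=(5, 2))   # spatial kernel centers (stand-ins for latent outputs)
weights = np.full(5, 0.8)                  # non-negative mixture weights
sigma, beta = 0.1, 2.0                     # spatial bandwidth, temporal decay rate

def intensity(s, t, t_events):
    """lambda(s, t): weighted sum of space-time kernels over past events."""
    lam = 0.0
    temporal = sum(beta * np.exp(-beta * (t - ti)) for ti in t_events if ti < t)
    for w, c in zip(weights, centers):
        spatial = np.exp(-np.sum((s - c) ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
        lam += w * spatial * temporal
    return lam

def ground_intensity(t, t_events):
    """Because each spatial kernel is a normalized Gaussian, integrating lambda over
    space leaves only the temporal part -- the closed form the density computation needs."""
    temporal = sum(beta * np.exp(-beta * (t - ti)) for ti in t_events if ti < t)
    return float(np.sum(weights) * temporal)

print(intensity(np.array([0.5, 0.5]), 1.0, [0.2, 0.7]))
print(ground_intensity(1.0, [0.2, 0.7]))
```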
Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
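A sketch of what a category-oriented triplet loss over source-domain category centers could look like; the function name, margin, and tensor shapes are assumptions for illustration and not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def category_triplet_loss(feats, labels, centers, margin=0.5):
    """Pull pixel features toward their own category center and push them away
    from the nearest other center (illustrative; names and margin are assumed)."""
    dists = torch.cdist(feats, centers)                      # (N, C) feature-to-center distances
    pos = dists.gather(1, labels.unsqueeze(1)).squeeze(1)    # distance to own category center
    masked = dists.scatter(1, labels.unsqueeze(1), float("inf"))
    neg = masked.min(dim=1).values                           # distance to closest other center
    return F.relu(pos - neg + margin).mean()

feats = torch.randn(1024, 64)                  # pixel features sampled from source images
labels = torch.randint(0, 19, (1024,))         # ground-truth categories (e.g. 19 Cityscapes classes)
centers = torch.randn(19, 64, requires_grad=True)
print(category_triplet_loss(feats, labels, centers).item())
```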
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a kind of mesh-independent neural-network-based PDE solvers, have suggested the dawn of overcoming this challenge. In this emerging direction, Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
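As a conceptual sketch only (not the KoopmanLab API), a Koopman-style neural operator can be pictured as an encoder that lifts the state into observables, a learned linear operator that advances the dynamics, and a decoder back to the state; all class and parameter names below are invented for the illustration.

```python
import torch
import torch.nn as nn

class ToyKoopmanLayer(nn.Module):
    """Lift the state into observables, advance them with a learned linear
    (Koopman-like) operator, and project back. A conceptual sketch only."""
    def __init__(self, state_dim=64, obs_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, obs_dim), nn.Tanh())
        self.koopman = nn.Linear(obs_dim, obs_dim, bias=False)  # linear dynamics in observable space
        self.decoder = nn.Linear(obs_dim, state_dim)

    def forward(self, u, steps=1):
        g = self.encoder(u)
        for _ in range(steps):          # multi-step rollout stays linear in observable space
            g = self.koopman(g)
        return self.decoder(g)

model = ToyKoopmanLayer()
u0 = torch.randn(8, 64)                 # 8 discretized PDE states (e.g. flattened grids)
u5 = model(u0, steps=5)                 # predict 5 steps ahead
print(u5.shape)                         # torch.Size([8, 64])
```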
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
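One way to picture the style-aware adaptation is a feed-forward layer whose hidden activations are rescaled by the style code; the sketch below is a simplified stand-in, and the class name, dimensions, and modulation scheme are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class StyleAdaptiveFFN(nn.Module):
    """Feed-forward block whose hidden units are modulated by a style code
    (a simplified stand-in for the style-aware adaptive transformer)."""
    def __init__(self, dim=256, hidden=1024, style_dim=128):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.to_scale = nn.Linear(style_dim, hidden)   # style code -> per-channel scales

    def forward(self, x, style_code):
        scale = 1.0 + self.to_scale(style_code)        # adjust the feed-forward weights' effect
        h = torch.relu(self.fc1(x)) * scale.unsqueeze(1)
        return x + self.fc2(h)                         # residual connection

ffn = StyleAdaptiveFFN()
content = torch.randn(2, 50, 256)                      # audio-driven content tokens
style = torch.randn(2, 128)                            # style code from the reference video
print(ffn(content, style).shape)                       # torch.Size([2, 50, 256])
```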
Transformer has achieved impressive successes for various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, which is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with the Transformer backbone. Our BOLT consists of two networks, namely online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network representation of the same patch embedding tokens with a different perturbation. To maximally exploit the Transformer on limited medical data, we propose an auxiliary difficulty ranking task. The Transformer is enforced to identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer strives to distill transformation-invariant features from the perturbed tokens to simultaneously achieve difficulty measurement and maintain the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
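The online/target branch training can be sketched in a BYOL-like form, where the online network predicts the target network's representation of differently perturbed tokens and the target is updated by an exponential moving average; the encoder, perturbations, and hyperparameters below are illustrative assumptions, and the auxiliary difficulty-ranking task is omitted.

```python
import copy
import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Linear(768, 256), torch.nn.ReLU(),
                              torch.nn.Linear(256, 128))   # stand-in for the Transformer branch
predictor = torch.nn.Linear(128, 128)                       # prediction head on the online branch only
target = copy.deepcopy(encoder)
for p in target.parameters():
    p.requires_grad = False                                 # target branch is not trained by gradients

opt = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)
tokens = torch.randn(16, 768)                               # patch embedding tokens

for step in range(10):
    view_a = tokens + 0.1 * torch.randn_like(tokens)        # two different perturbations of the same tokens
    view_b = tokens + 0.1 * torch.randn_like(tokens)
    online = F.normalize(predictor(encoder(view_a)), dim=-1)
    with torch.no_grad():
        tgt = F.normalize(target(view_b), dim=-1)
    loss = 2 - 2 * (online * tgt).sum(dim=-1).mean()        # online predicts the target representation
    opt.zero_grad()
    loss.backward()
    opt.step()
    for p_o, p_t in zip(encoder.parameters(), target.parameters()):
        p_t.mul_(0.99).add_(0.01 * p_o.detach())            # EMA update of the target branch
```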