Unsupervised generation of virtual humans with various appearances and animatable poses is important for creating 3D human avatars and other AR/VR applications. Existing methods are either limited to rigid object modeling, or are not generative and thus unable to synthesize high-quality virtual humans and animate them. In this work, we propose AvatarGen, the first method that not only can generate non-rigid humans with diverse appearances, but also offers full control over poses and viewpoints, while requiring only 2D images for training. Specifically, it extends recent 3D GANs to clothed humans by leveraging a coarse human body model as a proxy to warp the observation space into a standard avatar in a canonical space. To model non-rigid dynamics, it introduces a deformation network to learn pose-dependent deformations in the canonical space. To improve the geometric quality of the generated human avatars, it leverages a signed distance field as the geometric representation, which allows more direct regularization of the geometry learning from the body model. Benefiting from these designs, our method can generate animatable human avatars with high-quality appearance and geometry modeling, greatly outperforming previous 3D GANs. Furthermore, it is competent for many applications, such as single-view reconstruction, reanimation, and text-guided synthesis. Code and pre-trained models will be made available.
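The two components named above — a pose-conditioned deformation network that warps observation-space points into the canonical space, and a signed-distance-field head for geometry — could be sketched roughly as below; all module names, layer sizes, and the pose/latent dimensions are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical sketch of the two core pieces the abstract describes.
import torch
import torch.nn as nn

class DeformationNet(nn.Module):
    """Predicts a pose-dependent offset that warps a 3D point into canonical space."""
    def __init__(self, pose_dim=72, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pts, pose):
        # pts: (N, 3) observation-space points, pose: (N, pose_dim) body-pose code
        return pts + self.mlp(torch.cat([pts, pose], dim=-1))

class SDFNet(nn.Module):
    """Signed distance field in canonical space, conditioned on a per-avatar latent code."""
    def __init__(self, latent_dim=256, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, canon_pts, z):
        return self.mlp(torch.cat([canon_pts, z], dim=-1))

# Usage: warp sampled points with the pose code, then query the SDF in canonical space.
pts, pose, z = torch.randn(1024, 3), torch.randn(1024, 72), torch.randn(1024, 256)
canon = DeformationNet()(pts, pose)
sdf_vals = SDFNet()(canon, z)
```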
Existing Cross-Modal Hashing (CMH) methods are mainly designed for balanced data, whereas imbalanced data with a long-tail distribution is more common in the real world. Several long-tail hashing methods have been proposed, but they cannot adapt to multi-modal data, due to the complex interplay between labels and the individuality and commonality information of multi-modal data. Furthermore, CMH methods mostly mine the commonality of multi-modal data to learn hash codes, which may override tail labels encoded by the individuality of the respective modalities. In this paper, we propose LtCMH (Long-tail CMH) to handle imbalanced multi-modal data. LtCMH first adopts auto-encoders to mine the individuality and commonality of different modalities by minimizing the dependency between the individualities of the respective modalities and by enhancing their commonality. It then dynamically combines the individuality and commonality with direct features extracted from the respective modalities to create meta features that enrich the representation of tail labels, and binarizes the meta features to generate hash codes. LtCMH significantly outperforms state-of-the-art baselines on long-tail datasets and achieves better (or comparable) performance on datasets with balanced labels.
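A minimal sketch of the final fusion-and-binarization step, assuming the individuality, commonality, and direct features have already been extracted; the fixed fusion weights and random projection below are placeholders for the paper's learned, dynamic combination and hash layer.

```python
import numpy as np

def meta_feature(individuality, commonality, direct, alpha=0.5, beta=0.5):
    # LtCMH combines these dynamically; fixed weights keep the sketch simple.
    return direct + alpha * individuality + beta * commonality

def to_hash_code(features, projection):
    # Project to the code length and take the sign to obtain binary bits in {-1, +1}.
    return np.sign(features @ projection)

rng = np.random.default_rng(0)
ind, com, direct = rng.normal(size=(3, 4, 64))          # 4 samples, 64-d features per view
codes = to_hash_code(meta_feature(ind, com, direct),
                     rng.normal(size=(64, 16)))          # 16-bit hash codes, shape (4, 16)
```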
Multi-instance multi-label learning (MIML) models complex objects (bags), each of which is associated with a set of interrelated labels and is composed of a set of instances. Current MIML solutions still focus on a single type of object and assume an IID distribution of training data. But such objects are linked with objects of other types (e.g., pictures on Facebook are linked with various users), which also encode the semantics of the target objects. In addition, they typically require abundant labeled data for training. To effectively mine interdependent MIML objects of different types, we propose a network embedding and meta-learning based approach (MetaMIML). MetaMIML introduces a context learner with network embedding to capture the semantic information of objects of different types, and a task learner to extract meta knowledge for fast adaptation to new tasks. In this way, MetaMIML can naturally deal with MIML objects at the data level, while also exploiting the power of meta-learning for model enhancement. Experiments on benchmark datasets show that MetaMIML achieves significantly better performance than state-of-the-art algorithms.
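A loose, heavily simplified sketch of the two learners named above, using a prototype-style stand-in for the task learner and treating objects as single-label for brevity; all names, shapes, and the adaptation rule are assumptions, not MetaMIML's actual design.

```python
import torch
import torch.nn as nn

class ContextLearner(nn.Module):
    """Projects network-embedding vectors of (possibly different-typed) objects into a shared space."""
    def __init__(self, in_dim=128, out_dim=64):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                  nn.Linear(out_dim, out_dim))

    def forward(self, node_embeddings):            # (num_objects, in_dim)
        return self.proj(node_embeddings)

def task_learner(support_feats, support_labels, query_feats):
    # Prototype-style stand-in for the task learner: build class prototypes from the
    # support set and label queries by their nearest prototype.
    classes = support_labels.unique()
    protos = torch.stack([support_feats[support_labels == c].mean(0) for c in classes])
    nearest = torch.cdist(query_feats, protos).argmin(dim=1)
    return classes[nearest]

ctx = ContextLearner()
support, query = ctx(torch.randn(10, 128)), ctx(torch.randn(5, 128))
labels = torch.randint(0, 2, (10,))
predicted = task_learner(support, labels, query)
```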
Owing to the advantage of reducing storage while speeding up query time on large heterogeneous data, cross-modal hashing has been widely studied for approximate nearest neighbor search over multi-modal data. Most hashing methods assume that the training data are class-balanced. In practice, however, real-world data often have a long-tailed distribution. In this paper, we introduce a meta-learning based cross-modal hashing method (MetaCMH) to handle long-tailed data. Because training samples are scarce in the tail classes, MetaCMH first learns direct features from the data in each modality, and then introduces an associative memory module to learn memory features for samples of the tail classes. It then combines the direct and memory features to obtain a meta feature for each sample. For samples of the head classes of the long-tailed distribution, the weight of the direct features is larger, since there is enough training data to learn them; for rare classes, the weight of the memory features is larger. Finally, MetaCMH uses a likelihood loss function to preserve the similarity across modalities and learns the hash functions in an end-to-end manner. Experiments on long-tailed datasets show that MetaCMH performs significantly better than state-of-the-art methods, especially on the tail classes.
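The head/tail weighting rule described above might look roughly like the following sketch; the smoothing constant and feature shapes are invented for illustration and are not the paper's learned weighting.

```python
import numpy as np

def meta_feature(direct, memory, class_count, tau=50.0):
    # w -> 1 for head classes (many samples), w -> 0 for tail classes (few samples);
    # tau is a made-up smoothing constant.
    w = class_count / (class_count + tau)
    return w * direct + (1.0 - w) * memory

rng = np.random.default_rng(0)
direct, memory = rng.normal(size=(2, 64))
head_feat = meta_feature(direct, memory, class_count=5000)   # dominated by the direct feature
tail_feat = meta_feature(direct, memory, class_count=8)      # dominated by the memory feature
```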
Cross-modal hashing (CMH) is one of the most promising approaches to cross-modal approximate nearest neighbor search. Most CMH solutions ideally assume that the labels of the training and testing sets are identical. However, this assumption is often violated, leading to the zero-shot CMH problem. Recent efforts to address this problem focus on transferring knowledge to unseen classes via label attributes. However, these attributes are isolated from the features of the multi-modal data. To reduce the information gap, we introduce an approach named LAEH (Label Attribute Embedding for zero-shot cross-modal Hashing). LAEH first obtains initial semantic attribute vectors for the labels via a word2vec model, and then uses a transformation network to project them into a common subspace. Next, it leverages the hash vectors and a feature similarity matrix to guide the feature extraction networks of the different modalities. At the same time, LAEH uses the attribute similarity as a supplement to the label similarity to rectify the label embedding and the common subspace. Experiments show that LAEH outperforms related representative zero-shot and cross-modal hashing methods.
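A rough sketch of the label-attribute side of this pipeline, with random vectors standing in for the word2vec attribute vectors; the transformation network, subspace size, and the mixing weight for the attribute/label similarity supplement are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

word2vec_dim, subspace_dim, num_labels = 300, 64, 10
label_attrs = torch.randn(num_labels, word2vec_dim)     # stand-in for word2vec label attribute vectors

transform = nn.Sequential(nn.Linear(word2vec_dim, 128), nn.ReLU(),
                          nn.Linear(128, subspace_dim))
embedded = F.normalize(transform(label_attrs), dim=1)   # label attributes in the common subspace

attr_sim = embedded @ embedded.t()                      # attribute similarity, values in [-1, 1]
label_sim = torch.eye(num_labels)                       # plain label similarity (identical labels only)
combined_sim = 0.5 * label_sim + 0.5 * attr_sim         # supplement; the mixing weight is a guess
```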
We raise and define a new crowdsourcing scenario, open-set crowdsourcing, where we only know the general theme of an unfamiliar crowdsourcing project, but not its label space, that is, the set of possible labels. This is still a task annotation problem, but the unfamiliarity with the tasks and with the label space hampers the modeling of tasks and workers, as well as truth inference. We propose an intuitive solution, OSCrowd. First, OSCrowd integrates crowd theme-related datasets into a large source domain to facilitate partial transfer learning, so as to approximate the label space of these tasks. Next, it assigns weights to each source domain based on category correlation. After that, it uses multi-source open-set transfer learning to model crowd tasks and assign possible annotations. The label space and annotations given by transfer learning are then used to guide and regularize the annotations of crowd workers. We validate OSCrowd in an online scenario and show that OSCrowd solves the open-set crowdsourcing problem better than related crowdsourcing solutions.
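The multi-source aggregation step could be approximated by a weighted combination of per-source label distributions, as in this sketch; the correlation weights and probabilities are illustrative placeholders, not OSCrowd's learned quantities.

```python
import numpy as np

def combine_sources(source_probs, category_correlation):
    # source_probs: (num_sources, num_labels) per-source predicted label distributions
    # category_correlation: (num_sources,) relevance of each source to the crowd theme
    w = np.asarray(category_correlation, dtype=float)
    w = w / w.sum()
    return (w[:, None] * np.asarray(source_probs)).sum(axis=0)   # weighted label distribution

probs = [[0.7, 0.2, 0.1], [0.4, 0.5, 0.1], [0.1, 0.1, 0.8]]
candidate_annotation = combine_sources(probs, category_correlation=[0.6, 0.3, 0.1])
```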
Due to the unreliability of Internet workers, it is hard to complete a crowdsourcing project satisfactorily, especially when the tasks are numerous and the budget is limited. Recently, meta learning has brought new vitality to few-shot learning, making it possible to obtain a classifier with fair performance from only a few training samples. Here we introduce the concept of the meta-worker, a machine annotator trained by meta learning for task types (i.e., image classification) that are well suited to AI. Unlike regular crowd workers, meta-workers can be reliable, stable, and, more importantly, tireless and free. We first cluster the unlabeled data and ask crowd workers to repeatedly annotate the instances near the cluster centers; we then leverage the annotated data and meta-training datasets to build a group of meta-workers using different meta learning algorithms. Subsequently, the meta-workers are asked to annotate the remaining crowdsourced tasks. The Jensen-Shannon divergence is used to measure the disagreement among the annotations provided by the meta-workers, which determines whether crowd workers should be invited to further annotate the same task. Finally, we model the meta-workers' preferences and compute the consensus annotations by weighted majority voting. Our empirical study confirms that, by combining machine and human intelligence, we can complete a crowdsourcing project at a lower budget than state-of-the-art task assignment methods, while achieving superior or comparable quality.
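Two of the concrete steps above — measuring meta-worker disagreement with the Jensen-Shannon divergence and fusing votes by weighted majority voting — can be sketched directly; the disagreement threshold and worker weights below are placeholders, not values from the paper.

```python
import numpy as np

def js_divergence(dists):
    # Generalized Jensen-Shannon divergence of several label distributions (rows of `dists`).
    dists = np.asarray(dists, dtype=float)
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    mean = dists.mean(axis=0)
    return entropy(mean) - np.mean([entropy(p) for p in dists])

def weighted_majority_vote(label_dists, worker_weights):
    scores = np.average(label_dists, axis=0, weights=worker_weights)
    return int(np.argmax(scores))

annotations = [[0.9, 0.05, 0.05], [0.8, 0.1, 0.1], [0.2, 0.7, 0.1]]   # 3 meta-workers, 3 labels
if js_divergence(annotations) > 0.3:       # high disagreement -> invite human crowd workers
    print("route task to crowd workers")
else:
    print("consensus label:", weighted_majority_vote(annotations, worker_weights=[1.0, 0.8, 0.5]))
```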
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments find that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and point out potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
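The property-feature selection step (keeping the 20 features with the greatest information gain) can be illustrated with mutual information as the gain measure, as in this sketch on random data; MGTAB itself already ships the selected features, so this is only a hypothetical reconstruction of that step.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                 # 500 users, 40 candidate property features
y = rng.integers(0, 2, size=500)               # bot / human (or stance) labels

gain = mutual_info_classif(X, y, random_state=0)
top20 = np.argsort(gain)[::-1][:20]            # indices of the 20 most informative features
X_selected = X[:, top20]                       # (500, 20) user property feature matrix
```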
The interview has been regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice with mock interviews between each other. However, such mock interviews with peers are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are unlikely to behave like real interviewers. Due to the rapid growth of online recruitment in recent years, recruiters tend to hold online interviews, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) the interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
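A schematic sketch of the parameter split described above: a generator pre-trained on ungrounded dialogs is frozen, so only a small knowledge selector needs the low-resource interview dialogs. All module choices, names, and shapes here are invented for illustration, not EZInterviewer's actual architecture.

```python
import torch
import torch.nn as nn

class KnowledgeSelector(nn.Module):
    """Scores resume / job-description snippets for relevance to the current dialog context."""
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Bilinear(dim, dim, 1)

    def forward(self, context_vec, knowledge_vecs):          # (dim,), (num_snippets, dim)
        ctx = context_vec.expand_as(knowledge_vecs)
        return torch.softmax(self.score(ctx, knowledge_vecs).squeeze(-1), dim=0)

generator = nn.GRU(input_size=256, hidden_size=256)          # stand-in for the dialog generator
selector = KnowledgeSelector()

for p in generator.parameters():                             # pre-trained on ungrounded dialogs, frozen here
    p.requires_grad = False
optimizer = torch.optim.Adam(selector.parameters(), lr=1e-4) # only the selector sees interview dialogs

scores = selector(torch.randn(256), torch.randn(12, 256))    # attention over 12 resume/JD snippets
```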
Dynamic treatment regimes assign personalized treatments to patients sequentially over time based on their baseline information and time-varying covariates. In mobile health applications, these covariates are typically collected at different frequencies over a long time horizon. In this paper, we propose a deep spectral Q-learning algorithm, which integrates principal component analysis (PCA) with deep Q-learning to handle the mixed frequency data. In theory, we prove that the mean return under the estimated optimal policy converges to that under the optimal one and establish its rate of convergence. The usefulness of our proposal is further illustrated via simulations and an application to a diabetes dataset.
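A rough sketch of the core idea, assuming PCA compresses the high-frequency covariates before a deep Q-network scores the treatment options; all dimensions and the network itself are illustrative, not the paper's specification.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
high_freq = rng.normal(size=(1000, 288))        # e.g. a covariate recorded every 5 minutes over a day
low_freq = rng.normal(size=(1000, 4))           # baseline / daily covariates

pca = PCA(n_components=10).fit(high_freq)
state = np.hstack([low_freq, pca.transform(high_freq)])      # 4 + 10 dimensional state

num_actions = 3
q_net = nn.Sequential(nn.Linear(state.shape[1], 64), nn.ReLU(),
                      nn.Linear(64, num_actions))            # Q(s, .) for each treatment option
q_values = q_net(torch.tensor(state, dtype=torch.float32))
greedy_actions = q_values.argmax(dim=1)                      # estimated optimal treatments
```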