Graph-level representations are critical in various real-world applications, such as predicting the properties of molecules. Yet, in practice, precise graph annotations are generally very expensive and time-consuming. To address this issue, graph contrastive learning constructs an instance discrimination task that pulls together positive pairs (augmentations of the same graph) and pushes apart negative pairs (augmentations of different graphs) for unsupervised representation learning. However, since for a query its negatives are uniformly sampled from all graphs, existing methods suffer from the critical sampling bias issue: the negatives may share the same semantic structure as the query, leading to performance degradation. To mitigate this sampling bias, we propose a Prototypical Graph Contrastive Learning (PGCL) approach. Specifically, PGCL models the underlying semantic structure of the graph data by clustering semantically similar graphs into the same group, and simultaneously encourages clustering consistency across different augmentations of the same graph. Then, given a query, it performs negative sampling by drawing graphs from clusters that differ from the query's cluster, which ensures the semantic difference between the query and its negative samples. Moreover, for a query, PGCL further reweights its negative samples based on the distance between their prototypes (cluster centroids) and the query's prototype, so that negatives with moderate prototype distances receive relatively large weights. This reweighting strategy proves more effective than uniform sampling. Experimental results on various graph benchmarks demonstrate the superiority of our PGCL over state-of-the-art methods. The code is publicly available at https://github.com/ha-lins/pgcl.
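The reweighting idea in the abstract above can be sketched numerically: given the query's prototype and the prototypes of candidate negatives (already drawn from other clusters), assign each negative a weight that peaks at a moderate prototype distance. The specific bell-shaped weighting function below is our assumption for illustration, not the paper's exact formula.

```python
import numpy as np

def reweighted_negatives(query_proto, negative_protos):
    """Weight negative samples by the distance between their cluster
    prototype and the query's prototype, so negatives at moderate
    distances get relatively large weights (sketch of PGCL's idea;
    the exact weighting function here is an assumption)."""
    # Euclidean prototype distance for each negative.
    d = np.linalg.norm(negative_protos - query_proto, axis=1)
    # Bell curve centered at the mean distance: extreme (too-near or
    # too-far) negatives are down-weighted, moderate ones emphasized.
    w = np.exp(-((d - d.mean()) ** 2) / (2 * (d.std() + 1e-8) ** 2))
    return w / w.sum()  # normalized distribution over negatives
```

For example, with prototypes at distances 1, 2, and 3 from the query's prototype, the middle one receives the largest weight.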
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in the wavelet domain. Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets, and formulate a pair of neural networks: a diffusion-based generator to produce diverse shapes in the form of coarse coefficient volumes, and a detail predictor to further produce compatible detail coefficient volumes that enrich the generated shapes with fine structures and details. Both quantitative and qualitative experimental results manifest the superiority of our approach in generating diverse and high-quality shapes with complex topology and structures, clean surfaces, and fine details, surpassing the 3D generation capability of state-of-the-art models.
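To make the coarse/detail coefficient split concrete, the sketch below builds a truncated SDF of a sphere on a small grid and applies a one-level separable 3D Haar analysis, yielding a coarse low-pass volume plus detail energy. Haar stands in for the paper's multi-scale biorthogonal wavelets purely for illustration; the grid size, truncation, and sphere are our toy assumptions.

```python
import numpy as np

def truncated_sdf_sphere(n=16, radius=0.5, trunc=0.1):
    """Truncated signed distance field of a sphere on an n^3 grid
    (a toy stand-in for the shapes used in the paper)."""
    ax = np.linspace(-1, 1, n)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    sdf = np.sqrt(x**2 + y**2 + z**2) - radius
    return np.clip(sdf, -trunc, trunc)

def haar_split(v):
    """One-level separable 3D Haar analysis: returns the coarse
    (low-pass) volume and the remaining absolute detail energy.
    Haar replaces the paper's biorthogonal wavelets for simplicity."""
    for axis in range(3):
        a = np.take(v, range(0, v.shape[axis], 2), axis=axis)
        b = np.take(v, range(1, v.shape[axis], 2), axis=axis)
        # Low-pass half followed by high-pass half along this axis.
        v = np.concatenate([(a + b) / np.sqrt(2), (a - b) / np.sqrt(2)], axis=axis)
    n = v.shape[0] // 2
    coarse = v[:n, :n, :n]
    detail_energy = np.abs(v).sum() - np.abs(coarse).sum()
    return coarse, detail_energy
```

In the paper's pipeline, the diffusion model would operate on the coarse volume while a separate predictor recovers the detail coefficients; here the split itself is all that is shown.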
Text-guided 3D shape generation remains challenging due to the absence of large paired text-shape datasets, the substantial semantic gap between the two modalities, and the structural complexity of 3D shapes. This paper presents a new framework called Image as Stepping Stone (ISS), which introduces 2D images as a stepping stone to connect the two modalities and eliminate the need for paired text-shape data. Our key contribution is a two-stage feature-space alignment approach that maps CLIP features to shapes by leveraging a pre-trained single-view reconstruction (SVR) model with multi-view supervision: it first maps the CLIP image feature to the detail-rich shape space of the SVR model, then maps the CLIP text feature to the shape space and optimizes the mapping by encouraging CLIP consistency between the input text and the rendered images. Further, we formulate a text-guided shape stylization module to dress up the output shapes with novel textures. Beyond existing works on 3D shape generation from text, our new approach is general, creating shapes without requiring paired text-shape data. Experimental results show that our method outperforms the state of the art and our baselines in terms of fidelity and consistency with the text. Moreover, our method can stylize the generated shapes with both realistic and fantasy structures and textures.
This paper introduces a novel framework called DT-Net for 3D mesh reconstruction and generation via disentangled topology. Beyond previous works, we learn a topology-aware neural template specific to each input, then deform the template to reconstruct a detailed mesh while preserving the learned topology. One key insight is to decouple the complex mesh reconstruction into two sub-tasks: topology formulation and shape deformation. Thanks to the decoupling, DT-Net implicitly learns a disentangled representation of topology and shape in the latent space. Hence, it enables novel disentangled controls that support various shape generation applications, e.g., remixing the topologies of 3D objects, which previous reconstruction works cannot achieve. Extensive experimental results demonstrate that our method produces high-quality meshes, particularly with diverse topologies, compared with state-of-the-art methods.
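The decoupling described above can be illustrated with plain mesh arrays: deformation moves vertices while the face (connectivity) array stays fixed, and a "topology remix" pairs one shape's connectivity with another's geometry. This is our minimal sketch of the idea, not the paper's networks; the functions and the shared-vertex-count assumption are ours.

```python
import numpy as np

def deform_template(template_verts, faces, offsets):
    """Shape-deformation step: move vertices by learned offsets while
    the face array (the topology) is untouched, so the reconstruction
    keeps the template's topology exactly."""
    return template_verts + offsets, faces

def remix_topology(faces_a, verts_b):
    """Topology remix: pair shape A's connectivity with shape B's
    geometry. Only valid when B has at least as many vertices as A's
    faces index (an assumption a learned shared template satisfies)."""
    assert faces_a.max() < len(verts_b)
    return verts_b, faces_a
```

The point of the sketch is what is *not* changed in each call: deformation never edits `faces`, and remixing never edits vertex positions, mirroring the two disentangled sub-tasks.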
Point cloud upsampling aims to densify sparse point sets acquired from 3D sensors, thereby providing a denser representation of the underlying surface. Existing methods divide the input points into small patches and upsample each patch separately, however, ignoring the global spatial consistency between patches. In this paper, we present a novel method, PC$^2$-PU, which explores patch-to-patch and point-to-point correlations for more effective and robust point cloud upsampling. Specifically, our network has two appealing designs: (i) we take adjacent patches as supplementary inputs to compensate for the structural information lost within a single patch, and introduce a Patch Correlation Module to capture the differences and similarities between patches; (ii) after augmenting each patch's geometry, we further introduce a Point Correlation Module to reveal the relationships among points inside each patch so as to maintain local spatial consistency. Extensive experiments on both synthetic and real scanned datasets demonstrate that our method surpasses previous upsampling methods, particularly on noisy inputs. The code and data are available at \url{https://github.com/chenlongwhu/pc2-pu.git}.
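A minimal numeric sketch of design (i): for each point in a patch, gather its nearest points in an adjacent patch as supplementary cross-patch context. The real Patch Correlation Module learns features on top of such neighborhoods; the brute-force k-nearest lookup and function names below are our assumptions.

```python
import numpy as np

def pairwise_dist(a, b):
    """Squared Euclidean distances between point sets a (N,3) and b (M,3)."""
    return ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)

def patch_correlation(patch, neighbor, k=4):
    """For each point in `patch`, gather its k nearest points in the
    adjacent patch -- a sketch of using a neighboring patch as
    supplementary input to compensate for per-patch information loss."""
    d = pairwise_dist(patch, neighbor)
    idx = np.argsort(d, axis=1)[:, :k]  # k nearest cross-patch neighbors
    return neighbor[idx]                # (N, k, 3) supplementary context
```

A learned module would then compare each point's own coordinates with this `(N, k, 3)` context to encode cross-patch differences and similarities.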
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical to improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e., blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e., flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and high consistency with human visual perception. For temporal artifacts, we improve the self-attention-based TimeSformer to detect them. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
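One way to picture "saliency-aware" pooling is the sketch below: per-pixel artifact-strength maps (one per PEA type) are averaged under a saliency-normalized weighting, so artifacts in regions people look at dominate the score. This aggregation scheme is our illustrative assumption; SSTAM's actual pooling and calibration are not reproduced here.

```python
import numpy as np

def saliency_weighted_score(artifact_maps, saliency):
    """Pool per-pixel artifact strengths with a visual saliency map so
    that artifacts in salient regions dominate each per-type score
    (a sketch of the aggregation idea behind a saliency-aware metric)."""
    w = saliency / (saliency.sum() + 1e-12)  # normalized spatial attention
    return {name: float((m * w).sum()) for name, m in artifact_maps.items()}
```

With uniform saliency this reduces to the plain spatial mean of each artifact map; non-uniform saliency shifts the score toward artifact strength in attended regions.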
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Recognizing MEs (MER) automatically is therefore becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite recent efforts to alleviate this problem with several spontaneous ME datasets, the available data remain scarce. To address this ME data hunger, we construct a dynamic spontaneous ME dataset with the largest ME data scale to date, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. We then apply four classical spatiotemporal feature learning models to DFME to perform MER experiments that objectively verify the validity of the dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
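The class imbalance mentioned above is commonly tackled with inverse-frequency sampling, as in the generic sketch below: rare ME classes receive proportionally larger sampling weights so each class contributes equally in expectation. This is a standard remedy shown for illustration, not DFME's specific solution.

```python
import numpy as np

def balanced_sampling_weights(labels):
    """Inverse-frequency sampling weights: each sample is weighted by
    1 / (count of its class), then normalized, so every class gets the
    same total sampling probability regardless of its size."""
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes.tolist(), counts.tolist()))
    w = np.array([1.0 / freq[y] for y in labels])
    return w / w.sum()
```

For a label vector with three samples of class 0 and one of class 1, both classes end up with total weight 0.5, so a sampler drawing from these weights sees balanced classes.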
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (e.g., phone unlocking) while lacking consideration of long-distance scenes (e.g., surveillance security checks). To promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, with 101 subjects from different age groups, 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this scenario, low image resolution and noise interference are new challenges for surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover discriminative image information by incorporating a super-resolution network; (2) generated sample pairs are used to simulate quality-variance distributions, helping the contrastive learning strategy obtain feature representations that are robust under quality variation; (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
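Aspect (2) relies on sample pairs that differ only in image quality. A crude way to manufacture such a pair, sketched below under our own assumptions, is to box-downsample an image and blow it back up, simulating the low-resolution surveillance view; the paper instead pairs degraded and super-resolved images via its networks.

```python
import numpy as np

def quality_variant(img, factor=4):
    """Simulate a low-quality view of an (H, W, C) float image by box
    downsampling then nearest-neighbor upsampling -- a simple way to
    build quality-varied positive pairs for contrastive learning
    (an illustrative stand-in for the paper's generated pairs)."""
    h, w = img.shape[:2]
    crop = img[:h - h % factor, :w - w % factor]  # make dims divisible
    small = crop.reshape(crop.shape[0] // factor, factor,
                         crop.shape[1] // factor, factor, -1).mean((1, 3))
    return np.repeat(np.repeat(small, factor, 0), factor, 1)
```

A contrastive objective would then pull the features of `img` and `quality_variant(img)` together, encouraging quality-invariant representations.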
The interview has been regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice mock interviews with each other. However, such peer mock interviews are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are unlikely to behave like real interviewers. Due to the rapid growth of online recruitment in recent years, recruiters tend to conduct interviews online, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
Nowadays, time-stamped web documents related to general news queries flood the Internet, and timeline summarization aims to concisely summarize the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper we propose a Unified Timeline Summarizer (UTS) that can generate both abstractive and extractive timeline summaries in time order. Concretely, in the encoder, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder, to ensure the chronological order of the abstractive summary, we extract the event-level attention during generation with its sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also assist extractive summarization, where the extracted summary likewise follows the time sequence. We augment a previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
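The extractive side of the pipeline above can be reduced to two steps: select events by an importance score (playing the role of the learned event-level attention) and emit them in chronological order. In this sketch the scores are given as inputs rather than learned, and the `(date, text)` event format is our assumption.

```python
def extract_timeline(events, attn_scores, k=3):
    """Pick the k events with the highest attention-style importance
    score, then return them sorted chronologically -- a toy sketch of
    attention-driven extractive timeline summarization (the scoring
    itself is assumed given here, not learned as in UTS)."""
    # Indices of the k highest-scoring events.
    top = sorted(range(len(events)), key=lambda i: -attn_scores[i])[:k]
    # Re-sort the selected events by their date key for time order.
    return [events[i] for i in sorted(top, key=lambda i: events[i][0])]
```

Note that selection and ordering are deliberately decoupled: importance decides *which* events appear, while timestamps alone decide *where* they appear.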