Denoising Diffusion Probabilistic Models (DDPMs) and Vision Transformers (ViTs) have demonstrated significant progress in generative and discriminative tasks respectively, and so far these models have largely been developed within their own domains. In this paper, we establish a direct connection between DDPMs and ViTs by integrating the ViT architecture into DDPMs, and introduce a new generative model called Generative ViT (GenViT). The modeling flexibility of ViTs further allows us to extend GenViT to hybrid discriminative-generative modeling, and introduce a Hybrid ViT (HybViT). Our work is among the first to explore a single ViT for image generation and classification jointly. We conduct a series of experiments to analyze the performance of the proposed models and demonstrate that they surpass the previous state-of-the-art in both generative and discriminative tasks. Our code and pre-trained models can be found at https://github.com/sndnyang/diffusion_vit.
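The abstract does not spell out the training objective, but the standard DDPM loss GenViT builds on is compact enough to sketch. Below is a minimal PyTorch sketch of one epsilon-prediction training step with a ViT-style noise predictor; `vit_eps_model` and its `(x_t, t)` signature are hypothetical stand-ins for the GenViT backbone.

```python
import torch
import torch.nn.functional as F

# Standard DDPM noise schedule (linear betas, as in Ho et al., 2020).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(vit_eps_model, x0):
    """One DDPM training step: corrupt x0 at a random timestep t, then
    ask the ViT backbone to predict the injected noise. x0 is assumed
    to be a (B, C, H, W) image batch."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod.to(x0.device)[t].view(b, 1, 1, 1)
    # Forward diffusion: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * noise
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    # The ViT consumes the noisy image (patchified internally) plus t.
    eps_pred = vit_eps_model(x_t, t)
    return F.mse_loss(eps_pred, noise)
```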
Graph neural networks have achieved significant success in representation learning. However, the performance gains come at a cost: acquiring comprehensive labeled data for training can be prohibitively expensive. Active learning mitigates this issue by searching the unexplored data space and prioritizing the selection of data to maximize the model's performance gain. In this paper, we propose SMARTQUERY, a novel framework to learn a graph neural network with very few labeled nodes using a hybrid uncertainty reduction function. This is achieved in two key steps: (a) designing a multi-stage active graph learning framework that exploits diverse explicit graph information and (b) introducing label propagation to efficiently exploit known labels and assess the implicit embedding information. Using a comprehensive set of experiments on three network datasets, we demonstrate the competitive performance of our method against state-of-the-art methods with very few labeled data (up to 5 labeled nodes per class).
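Step (b) leans on classic label propagation. As a point of reference, here is a minimal NumPy sketch of the propagation scheme of Zhou et al. (2004); SMARTQUERY's hybrid uncertainty-reduction function is not reproduced here, but the resulting soft labels are the kind of implicit information an uncertainty score can be computed from.

```python
import numpy as np

def propagate_labels(adj, labels, alpha=0.9, num_iters=50):
    """Label propagation in the style of Zhou et al. (2004).

    adj    : (n, n) symmetric adjacency matrix of the graph
    labels : (n, c) one-hot rows for labeled nodes, zero rows elsewhere
    Returns an (n, c) soft label distribution for every node.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    s = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 A D^-1/2
    f = labels.astype(float).copy()
    for _ in range(num_iters):
        # Diffuse label mass along edges while anchoring to known labels.
        f = alpha * (s @ f) + (1 - alpha) * labels
    return f

# The per-node entropy of the returned distribution is one natural
# uncertainty signal an active learner could use to pick the next query.
```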
Tabular data is one of the most common data storage formats in business applications, ranging from retail and banking to e-commerce, and these applications rely heavily on machine learning models for business success. One of the key problems in learning from tabular data is distinguishing the influential features from all the predefined ones. Global feature selection, which assumes that all instances share the same subset of influential features, has been studied for a long time. In practice, however, different instances rely on different feature subsets, which has motivated instance-wise feature selection and drawn increasing attention in recent studies. In this paper, we propose a novel method to Discover Instance-Wise Influential Features for Tabular data (DIWIFT), whose core is to introduce the influence function to measure the importance of each instance-wise feature. DIWIFT automatically discovers influential feature subsets of different sizes for different instances, in contrast to global feature selection, which considers all instances to share the same influential subset. Moreover, unlike previous instance-wise feature selection methods, DIWIFT minimizes the loss on a validation set and is therefore more robust to the distribution shift between the training and test sets, which is important for tabular data. Finally, we conduct extensive experiments on both synthetic and real-world datasets to validate the effectiveness of DIWIFT against baseline methods, and further demonstrate the robustness of our method through ablation experiments.
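Exact influence functions require Hessian-inverse-vector products; the sketch below is only a first-order, gradient-times-input proxy for the idea of scoring each feature of a single instance by its effect on a validation-style loss. It is an illustration of the concept, not the DIWIFT estimator.

```python
import torch

def first_order_feature_influence(model, x, val_loss_fn):
    """Gradient-times-input proxy for instance-wise feature influence.

    model       : differentiable model mapping features to predictions
    x           : (d,) or (1, d) feature tensor for a single instance
    val_loss_fn : maps model(x) to a scalar validation-style loss

    Returns the predicted first-order change in the loss if each feature
    were individually set to zero; large positive values mark features
    whose removal would hurt most, i.e. the influential ones.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = val_loss_fn(model(x))
    grad, = torch.autograd.grad(loss, x)
    return (grad * (0.0 - x)).detach()  # Taylor estimate of masking each feature
```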
Optical character recognition (OCR) technology has been widely used in various scenarios, as shown in Figure 1. Designing a practical OCR system remains a meaningful but challenging task. In previous work, considering both efficiency and accuracy, we proposed the practical ultra-lightweight OCR system PP-OCR and its optimized version PP-OCRv2. To further improve on PP-OCRv2, this paper proposes a more robust OCR system, PP-OCRv3, which upgrades the text detection and text recognition models in 9 aspects. For the text detector, we introduce a PAN module with a large receptive field named LK-PAN, an FPN module with a residual attention mechanism named RSE-FPN, and a DML distillation strategy. For the text recognizer, the base model is replaced from CRNN to SVTR, and we introduce the lightweight text recognition network SVTR_LCNet, guided training of CTC by attention, the data augmentation strategy TextConAug, a better pre-trained model from the self-supervised TextRotNet, UDML, and UIM to accelerate the model and improve its effect. Experiments on real data show that the hmean of PP-OCRv3 is 5% higher than that of PP-OCRv2 at a comparable inference speed. All the above models are open-sourced, and the code is available in the GitHub repository PaddleOCR, powered by PaddlePaddle.
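For orientation, this is how the released PP-OCRv3 models are typically invoked through the PaddleOCR Python package (API as of paddleocr 2.x; the image file name is a placeholder, and the repository documents the current interface).

```python
# pip install paddlepaddle paddleocr   (paddleocr 2.x)
from paddleocr import PaddleOCR

# First use downloads the PP-OCRv3 detection, angle classification,
# and recognition models; `lang` selects the recognition dictionary.
ocr = PaddleOCR(ocr_version='PP-OCRv3', use_angle_cls=True, lang='en')

result = ocr.ocr('document.jpg', cls=True)  # image path is a placeholder
for box, (text, score) in result[0]:
    print(f'{score:.2f}\t{text}')
```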
Optical flow estimation is a basic task in self-driving and robotics systems, enabling temporal interpretation of traffic scenes. Autonomous vehicles clearly benefit from the ultra-wide Field of View (FoV) offered by 360° panoramic sensors. However, due to the unique imaging process of panoramic cameras, models designed for pinhole images do not generalize satisfactorily to 360° panoramic images. In this paper, we propose a novel network framework, PanoFlow, to learn optical flow for panoramic images. To overcome the distortions introduced by the equirectangular projection in the panoramic transformation, we design a Flow Distortion Augmentation (FDA) method, which contains radial flow distortion (FDA-R) or equirectangular flow distortion (FDA-E). We further study the definition and properties of cyclic optical flow for panoramic videos, and propose a Cyclic Flow Estimation (CFE) method that leverages the cyclicity of spherical images to infer 360° optical flow and convert large displacements into relatively small ones. PanoFlow is applicable to any existing flow estimation method and benefits from progress in narrow-FoV flow estimation. In addition, we create and release a synthetic panoramic dataset, Flow360, based on CARLA to facilitate training and quantitative analysis. PanoFlow achieves state-of-the-art performance on the public OmniFlowNet benchmark and the established Flow360 benchmark. Our proposed approach reduces the End-Point-Error (EPE) on Flow360 by 27.3%. On OmniFlowNet, PanoFlow achieves an EPE of 3.17 pixels, a 55.5% error reduction from the best published result. We also qualitatively validate our method with a collection vehicle and public real-world panoramic datasets, indicating its strong potential and robustness for real-world navigation applications. Code and dataset are publicly available at https://github.com/masterhow/panoflow.
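The core intuition behind CFE can be sketched compactly: because a 360° panorama wraps horizontally, a motion that appears as a large displacement across the frame may equal a small displacement through the seam. The NumPy sketch below rolls both frames by half the width, runs any narrow-FoV estimator twice, and keeps the smaller-magnitude candidate per pixel; it illustrates the idea rather than the paper's exact algorithm.

```python
import numpy as np

def cyclic_flow(estimate_flow, img1, img2):
    """Illustration of Cyclic Flow Estimation for equirectangular frames.

    estimate_flow : any narrow-FoV estimator, (img1, img2) -> (H, W, 2)
    img1, img2    : (H, W, C) consecutive 360-degree panorama frames
    """
    w = img1.shape[1]
    half = w // 2
    # Candidate A: flow on the original frames.
    flow_a = estimate_flow(img1, img2)
    # Candidate B: roll both frames by half the width so the seam moves,
    # estimate again, and roll the flow field back into place.
    flow_b = np.roll(
        estimate_flow(np.roll(img1, half, axis=1), np.roll(img2, half, axis=1)),
        -half, axis=1)
    # Per pixel, keep the candidate with the smaller apparent motion: a
    # large cross-frame displacement often equals a small one through the seam.
    pick_b = np.abs(flow_b[..., 0]) < np.abs(flow_a[..., 0])
    out = flow_a.copy()
    out[pick_b] = flow_b[pick_b]
    return out
```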
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical to improving visual quality. In this paper, we investigate the influence of four spatial PEAs (blurring, blocking, bleeding, and ringing) and two temporal PEAs (flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with low computational cost and high consistency with human visual perception. For temporal artifacts, the self-attention-based TimeSformer is improved to detect them. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe SSTAM will be beneficial for optimizing video coding techniques.
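The abstract implies a saliency-weighted pooling of the six PEA scores; since the exact formulation is not given, the following is only a plausible sketch in that spirit, with all detector outputs and weights assumed.

```python
import numpy as np

SPATIAL_PEAS = ("blurring", "blocking", "bleeding", "ringing")
TEMPORAL_PEAS = ("flickering", "floating")

def pooled_artifact_score(artifact_maps, saliency, weights):
    """Saliency-aware pooling of per-frame PEA score maps (a sketch).

    artifact_maps : dict PEA name -> (H, W) score map in [0, 1]
    saliency      : (H, W) saliency map, higher where viewers look
    weights       : dict PEA name -> scalar importance weight
    """
    sal = saliency / (saliency.sum() + 1e-8)  # normalize to a distribution
    score = 0.0
    for name in SPATIAL_PEAS:
        # Spatial artifacts are weighted by where attention actually falls.
        score += weights[name] * float((artifact_maps[name] * sal).sum())
    for name in TEMPORAL_PEAS:
        # Temporal artifacts are pooled uniformly over the frame.
        score += weights[name] * float(artifact_maps[name].mean())
    return score  # higher = more perceivable degradation
```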
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Recognizing MEs automatically (MER) is thus becoming increasingly crucial in the field of affective computing, providing essential technical support for lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite the recent efforts of several spontaneous ME datasets to alleviate this problem, the amount of available data remains tiny. To address this ME data hunger, we construct the largest dynamic spontaneous ME dataset to date, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. We then adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments that objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
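The abstract leaves the class-imbalance solutions open; one standard baseline remedy is inverse-frequency class weighting of the loss, sketched below (a generic technique, not the paper's specific method).

```python
import torch

def inverse_frequency_weights(labels, num_classes):
    """Inverse-frequency class weights for an imbalanced label set.

    labels: 1-D tensor of integer class ids over the training set
    """
    counts = torch.bincount(labels, minlength=num_classes).float()
    return counts.sum() / (num_classes * counts.clamp(min=1))

# Usage: rare micro-expression classes then contribute proportionally
# more to the classification loss.
# criterion = torch.nn.CrossEntropyLoss(
#     weight=inverse_frequency_weights(train_labels, num_classes))
```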
Face Anti-Spoofing (FAS) is essential to secure face recognition systems against various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while giving little consideration to long-distance scenes (i.e., surveillance security checks). To promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, with 101 subjects from different age groups, 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this setting, low image resolution and noise interference are the new challenges facing surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by incorporating a super-resolution network; (2) generated sample pairs are used to simulate the quality variance distribution and help the contrastive learning strategy obtain feature representations that are robust under quality variation; (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
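Strategy (2) amounts to a contrastive objective over quality-variant pairs. A generic symmetric InfoNCE sketch of that idea follows; the pairing scheme and temperature are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def quality_invariant_nce(z_hi, z_lo, temperature=0.1):
    """Symmetric InfoNCE over quality-variant pairs.

    z_hi, z_lo : (B, D) embeddings of the same B faces at high and low
                 image quality; row i of both tensors is the same identity.
    """
    z_hi = F.normalize(z_hi, dim=1)
    z_lo = F.normalize(z_lo, dim=1)
    logits = z_hi @ z_lo.t() / temperature  # (B, B) cosine similarities
    targets = torch.arange(z_hi.size(0), device=z_hi.device)
    # Pull each face toward its own low-quality view and push away the
    # rest, in both matching directions.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```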
The interview is regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice with mock interviews among themselves. However, such peer mock interviews are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are unlikely to behave like real interviewers. Due to the rapid growth of online recruitment in recent years, recruiters tend to hold online interviews, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) the interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector from the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
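The disentanglement idea can be summarized in a small module skeleton, shown below with hypothetical `generator` and `selector` callables: only the selector's few parameters depend on scarce interview dialogs, while the large generator is pre-trained on ungrounded dialogs and can be frozen.

```python
import torch.nn as nn

class DisentangledMockInterviewer(nn.Module):
    """Skeleton of a disentangled design: a large dialog generator that
    can be pre-trained on plentiful ungrounded dialogs, plus a small
    knowledge selector that alone depends on scarce interview data."""

    def __init__(self, generator: nn.Module, selector: nn.Module):
        super().__init__()
        self.generator = generator  # pre-trained on ungrounded dialogs
        self.selector = selector    # few parameters, tuned on interviews

    def forward(self, resume, job_desc, history):
        knowledge = self.selector(resume, job_desc, history)
        return self.generator(history, knowledge)

# Low-resource fine-tuning: freeze the generator, update only the selector.
# for p in model.generator.parameters():
#     p.requires_grad = False
```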
Nowadays, time-stamped web documents related to general news queries flood the Internet, and timeline summarization aims to concisely summarize the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, we propose a Unified Timeline Summarizer (UTS) that can generate both abstractive and extractive timeline summaries in time order. Concretely, in the encoder, we propose a graph-based event encoder that relates multiple events according to their content dependencies and learns a global representation of each event. In the decoder, to ensure the chronological order of the abstractive summary, we extract event-level attention features during generation, with their sequential information retained, and use them to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also assist extractive summarization, where the extracted summary likewise comes in time order. We augment the previous large-scale Chinese timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
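On the extractive side, the mechanism reduces to attention-guided selection followed by chronological reordering; the sketch below is a simplification with assumed inputs, not UTS itself.

```python
import numpy as np

def extract_timeline(event_attention, event_times, k):
    """Attention-guided extractive timeline summarization (simplified).

    event_attention : (n,) accumulated event-level attention per event
    event_times     : (n,) timestamp of each event
    Returns the indices of the k most-attended events, in time order.
    """
    top = np.argsort(event_attention)[-k:]    # k most attended events
    return top[np.argsort(event_times[top])]  # reorder chronologically
```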