Video anomaly detection is a challenging task for the computer vision community. Most single-task methods do not consider the independence of unique spatial and temporal patterns, while two-stream architectures lack exploration of their correlations. In this paper, we propose a spatial-temporal memory-augmented two-stream autoencoder framework, which learns appearance normality and motion normality independently and explores their correlations via adversarial learning. Specifically, we first design two proxy tasks to train the two-stream architecture to extract appearance and motion features in isolation. Then, the prototypical features are recorded in the corresponding spatial and temporal memory pools. Finally, the encoding-decoding network performs adversarial learning with a discriminator to explore the correlations between spatial and temporal patterns. Experimental results show that our framework outperforms state-of-the-art methods, achieving AUCs of 98.1% and 89.8% on the UCSD Ped2 and CUHK Avenue datasets, respectively.
translated by Google Translate
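The memory-pool read described in the abstract can be illustrated with a minimal, framework-free sketch (all names, sizes, and values here are made up for illustration, not the paper's implementation): a query feature is re-expressed as a softmax-weighted combination of the normal prototypes recorded during training, so only patterns close to recorded normality reconstruct well.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def memory_read(query, memory):
    """Re-express a query feature as a softmax-weighted sum of memory
    prototypes; anomalous queries get pulled toward normal patterns,
    enlarging their reconstruction error."""
    sims = [cosine(query, m) for m in memory]
    mx = max(sims)
    exps = [math.exp(s - mx) for s in sims]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(query)
    return [sum(w * m[i] for w, m in zip(weights, memory)) for i in range(dim)]

# Two toy prototypes recorded during training.
memory = [[1.0, 0.0], [0.0, 1.0]]
out = memory_read([0.9, 0.1], memory)
```

Because the output is a convex combination of prototypes, it always lies inside the span of recorded normality, which is the mechanism the memory pools rely on.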
Recently, most handwritten mathematical expression recognition (HMER) methods adopt encoder-decoder networks, which directly predict markup sequences from formula images with attention mechanisms. However, such methods may fail to accurately read formulas with complex structures or generate long markup sequences, since the attention results are often inaccurate due to large variations in writing style or spatial layout. To alleviate this problem, we propose an unconventional network for HMER named Counting-Aware Network (CAN), which jointly optimizes two tasks: HMER and symbol counting. Specifically, we design a weakly supervised counting module that can predict the number of each symbol class without symbol-level position annotations, and then plug it into a typical attention-based encoder-decoder model for HMER. Experiments on benchmark datasets validate that both joint optimization and the counting results are beneficial for correcting the prediction errors of the encoder-decoder model, and that CAN consistently outperforms state-of-the-art methods. In particular, the extra time cost introduced by the proposed counting module is marginal compared with the encoder-decoder model for HMER. The source code is available at https://github.com/lbh1024/can.
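A weakly supervised counting head of this kind is commonly realized by sum-pooling per-class probability maps: the per-image count label supervises the pooled sum, so no symbol positions are ever needed. The following is a toy sketch of that idea (the data and class names are invented; CAN's actual module is described in the paper):

```python
def count_symbols(prob_maps):
    """Weakly supervised counting sketch: given per-class probability
    maps over spatial positions (class -> 2D grid of scores in [0, 1]),
    the predicted count of each class is the sum-pooled map. Training
    then only needs per-image symbol counts, not positions."""
    return {cls: sum(sum(row) for row in grid) for cls, grid in prob_maps.items()}

# Toy 2x3 maps for two symbol classes; peaks roughly mark occurrences.
maps = {
    "x": [[0.9, 0.05, 0.0], [0.0, 0.95, 0.1]],   # ~2 occurrences of 'x'
    "+": [[0.0, 0.9, 0.0], [0.05, 0.0, 0.05]],   # ~1 occurrence of '+'
}
counts = count_symbols(maps)
```

The pooled counts (~2.0 for "x", ~1.0 for "+") can then be compared against the ground-truth symbol tallies with a regression loss.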
Crowd localization (predicting head positions) is a more practical and higher-level task than mere counting. Existing methods employ pseudo bounding boxes or pre-designed localization maps, relying on complex post-processing to obtain head positions. In this paper, we propose an elegant, end-to-end Crowd Localization Transformer named CLTR, which solves the task in a regression-based paradigm. The proposed method views crowd localization as a direct set prediction problem, taking extracted features and trainable embeddings as input to the transformer decoder. To reduce ambiguous points and generate more reasonable matching results, we introduce a KMO-based Hungarian matcher, which adopts the nearby context as an auxiliary matching cost. Extensive experiments on five datasets under various data settings demonstrate the effectiveness of our method. In particular, the proposed method achieves the best localization performance on the NWPU-Crowd, UCF-QNRF, and ShanghaiTech Part A datasets.
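The set-prediction matching step can be sketched with a brute-force optimal assignment over small point sets (equivalent to the Hungarian result for equal-sized sets). The "context" term below, a mean distance to nearest neighbours, is only a crude stand-in for the paper's KMO-based auxiliary cost; all values are invented:

```python
from itertools import permutations

def l2(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def context(p, points, k=1):
    """Mean distance to the k nearest other points: a crude stand-in
    for the 'nearby context' term in the KMO-based matcher."""
    ds = sorted(l2(p, q) for q in points if q is not p)
    return sum(ds[:k]) / k if ds else 0.0

def match(preds, gts, alpha=0.5):
    """Brute-force optimal assignment for equal-sized small sets:
    cost = point distance + alpha * context mismatch."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(gts))):
        cost = sum(
            l2(preds[i], gts[j])
            + alpha * abs(context(preds[i], preds) - context(gts[j], gts))
            for i, j in enumerate(perm)
        )
        if cost < best_cost:
            best, best_cost = perm, cost
    return list(best), best_cost

preds = [(0.0, 0.0), (5.0, 5.0)]
gts = [(5.2, 5.1), (0.1, 0.0)]
assignment, cost = match(preds, gts)
```

For real set sizes one would use the Hungarian algorithm (O(n^3)) instead of enumerating permutations; the cost design is what the abstract's contribution concerns.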
Mainstream crowd counting methods usually utilize convolutional neural networks (CNNs) to regress density maps, which requires point-level annotations. However, annotating each person with a point is an expensive and laborious process. In the testing phase, point-level annotations are not considered when evaluating counting accuracy, which means they are redundant. Therefore, it is desirable to develop weakly supervised counting methods that rely only on count-level annotations, a more economical way of labeling. Current weakly supervised counting methods adopt CNNs to regress the total count of the crowd through an image-to-count paradigm. However, the limited receptive field for context modeling is an intrinsic limitation of these CNN-based weakly supervised methods. Consequently, these methods cannot achieve satisfactory performance, and their real-world applications are limited. The transformer is a popular sequence-to-sequence prediction model in natural language processing (NLP) that contains a global receptive field. In this paper, we propose TransCrowd, which reformulates the weakly supervised crowd counting problem from a transformer-based sequence-to-count perspective. We observe that the proposed TransCrowd can effectively extract semantic crowd information using the self-attention mechanism of the transformer. To the best of our knowledge, this is the first work to adopt a pure transformer for crowd counting research. Experiments on five benchmark datasets demonstrate that the proposed TransCrowd outperforms all weakly supervised CNN-based counting methods and achieves highly competitive performance compared with some popular fully supervised CNN-based counting methods.
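The "global receptive field" argument can be made concrete with a single attention-pooling step: every patch token contributes to the pooled feature regardless of spatial distance, and a linear head maps the pooled feature to a scalar count. This is a hand-rolled sketch with made-up weights, not TransCrowd's architecture:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_pool(tokens, query):
    """One attention step with a global receptive field: every patch
    token is weighted by its similarity to a (learnable) query."""
    dim = len(query)
    scores = [sum(q * t for q, t in zip(query, tok)) / math.sqrt(dim)
              for tok in tokens]
    w = softmax(scores)
    return [sum(wi * tok[i] for wi, tok in zip(w, tokens)) for i in range(dim)]

def predict_count(tokens, query, head_w, head_b):
    """Sequence-to-count: pooled feature -> scalar count via a linear head."""
    pooled = attention_pool(tokens, query)
    return sum(w * p for w, p in zip(head_w, pooled)) + head_b

# Toy 2-D patch tokens from a crowd image; all weights are invented.
tokens = [[0.2, 0.1], [0.8, 0.9], [0.3, 0.4]]
count = predict_count(tokens, query=[1.0, 1.0], head_w=[10.0, 10.0], head_b=0.0)
```

Training such a head against image-level counts needs no point annotations, which is the weakly supervised setting the abstract targets.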
In this paper, we focus on the crowd localization task, a crucial topic in crowd analysis. Most regression-based methods utilize convolutional neural networks (CNNs) to regress a density map, which cannot accurately locate individuals in extremely dense scenes, for two crucial reasons: 1) the density map consists of a series of blurry Gaussian blobs, and 2) severe overlaps exist in the dense regions of the density map. To address this issue, we propose a novel Focal Inverse Distance Transform (FIDT) map for the crowd localization task. Compared with density maps, FIDT maps accurately describe people's locations without overlapping in dense regions. Based on the FIDT map, a Local-Maxima-Detection-Strategy (LMDS) is derived to effectively extract the center point for each individual. Furthermore, we introduce an Independent SSIM (I-SSIM) loss to make the model tend to learn local structural information, thereby better recognizing local maxima. Extensive experiments show that the proposed method reports state-of-the-art localization performance on six crowd datasets and one vehicle dataset. Additionally, we find that the proposed method shows superior robustness on negative and extremely dense scenes, which further verifies the effectiveness of the FIDT map. The code and models are available at https://github.com/dk-liang/fidtm.
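The local-maxima detection step can be sketched directly: a pixel is kept as a head center if it exceeds its 8 neighbours and a threshold. Note this is a fixed-threshold simplification on an invented toy map; the paper's LMDS adapts the threshold to the map:

```python
def local_maxima(fidt_map, threshold=0.1):
    """Keep a pixel as a head center if it is strictly greater than
    all of its (up to) 8 neighbours and not below a threshold."""
    h, w = len(fidt_map), len(fidt_map[0])
    centers = []
    for y in range(h):
        for x in range(w):
            v = fidt_map[y][x]
            if v < threshold:
                continue
            neighbours = [
                fidt_map[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
                if (ny, nx) != (y, x)
            ]
            if all(v > n for n in neighbours):
                centers.append((y, x))
    return centers

# Two sharp, non-overlapping peaks, as an FIDT map would provide.
fidt = [
    [0.0, 0.1, 0.0, 0.0],
    [0.1, 0.9, 0.1, 0.0],
    [0.0, 0.1, 0.0, 0.8],
    [0.0, 0.0, 0.1, 0.1],
]
heads = local_maxima(fidt)
```

This extraction only works because FIDT peaks stay sharp and separated in dense regions, which is exactly the property the abstract claims over Gaussian density maps.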
This paper focuses on designing efficient models with low parameters and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, trading off model accuracy against constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the instantiations share the same framework. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5%, 75.1%, and 78.4% Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models while trading off model accuracy and efficiency well.
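The inverted-residual pattern the block inherits (pointwise expand, token mixing, pointwise project, residual add) can be sketched on 1-D scalar tokens. The local averaging below is only a cheap stand-in for the iRMB's attention/depth-wise-convolution mixer, and every number is invented:

```python
def inverted_residual_block(tokens, expand=2, window=1):
    """Conceptual inverted-residual sketch on scalar tokens:
    expand -> mix with neighbours -> project back -> residual add."""
    expanded = [t * expand for t in tokens]          # pointwise expansion
    n = len(expanded)
    mixed = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        mixed.append(sum(expanded[lo:hi]) / (hi - lo))  # local token mixing
    projected = [m / expand for m in mixed]          # pointwise projection
    return [t + p for t, p in zip(tokens, projected)]  # residual connection

out = inverted_residual_block([1.0, 1.0, 1.0])
```

Swapping the mixing step between a convolution-like local operator and an attention-like global one, inside the same expand/project skeleton, is the "same framework, different instantiation" point the abstract makes.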
Decompilation aims to transform a low-level program language (LPL) (e.g., binary file) into its functionally-equivalent high-level program language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
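The fragment-splitting idea can be illustrated with a toy splitter that cuts an instruction stream after every control-transfer instruction, so each unit is a short straight-line block that is easier to translate. The paper's OTU uses its own splitting rules; the opcodes and listing below are invented:

```python
def split_units(instructions, boundary_ops=("jmp", "je", "call", "ret")):
    """Toy translation-unit splitter: cut after control-transfer
    instructions so each fragment is a short straight-line block."""
    units, current = [], []
    for ins in instructions:
        current.append(ins)
        op = ins.split()[0]
        if op in boundary_ops:
            units.append(current)
            current = []
    if current:  # trailing fragment without a terminator
        units.append(current)
    return units

asm = ["mov eax, 1", "add eax, 2", "je L1", "mov ebx, eax", "ret"]
units = split_units(asm)
```

Shorter fragments keep the sequence-to-sequence model's inputs within the length regime where its translation accuracy holds up, which is the motivation the abstract gives.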
Image virtual try-on aims at replacing the cloth on a personal image with a garment image (in-shop clothes), which has attracted increasing attention from the multimedia and computer vision communities. Prior methods successfully preserve the character of clothing images; however, occlusion remains a pernicious effect for realistic virtual try-on. In this work, we first present a comprehensive analysis of the occlusions and categorize them into two aspects: i) Inherent-Occlusion: the ghost of the former cloth still exists in the try-on image; ii) Acquired-Occlusion: the target cloth warps onto an unreasonable body part. Based on the in-depth analysis, we find that the occlusions can be simulated by a novel semantically-guided mixup module, which can generate semantic-specific occluded images that work together with the try-on images to facilitate training a de-occlusion try-on (DOC-VTON) framework. Specifically, DOC-VTON first conducts a sharpened semantic parsing on the try-on person. Aided by semantics guidance and pose prior, textures of various complexity are selectively blended with human parts in a copy-and-paste manner. Then, the Generative Module (GM) is utilized to take charge of synthesizing the final try-on image and learning de-occlusion jointly. In comparison to the state-of-the-art methods, DOC-VTON achieves better perceptual quality by reducing occlusion effects.
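The copy-and-paste simulation can be reduced to its core operation: wherever a semantic mask selects a body-part region, pixels are replaced with garment texture, producing a synthetic acquired occlusion for training. This is only the masking primitive on invented toy arrays, not the module's full blending logic:

```python
def occlusion_mixup(image, texture, mask):
    """Semantically guided copy-and-paste sketch: wherever mask == 1
    (a chosen body-part region), replace the pixel with the texture
    pixel, simulating an acquired occlusion."""
    return [
        [t if m else p for p, t, m in zip(prow, trow, mrow)]
        for prow, trow, mrow in zip(image, texture, mask)
    ]

image = [[0, 0], [0, 0]]      # toy person image
texture = [[9, 9], [9, 9]]    # toy garment texture
mask = [[0, 1], [1, 0]]       # semantic region to occlude
occluded = occlusion_mixup(image, texture, mask)
```

Pairing such occluded images with the clean targets gives the de-occlusion framework explicit supervision for removing exactly these artifacts.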
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages to encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving the holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save the computational cost, the Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct the self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract the informative and discriminative feature representations at different stages. All of them are realized by employing newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is also introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting the informative and discriminative information from the videos, and illustrate that MSTAT can achieve state-of-the-art accuracies on various standard benchmarks.
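The temporal patch shuffling augmentation mentioned above can be sketched as follows: for one spatial patch location, the patch is permuted across the frames of a clip while every other patch keeps its temporal order. Frames are lists of patch "features" here, and the exact sampling scheme is my simplification of the idea:

```python
import random

def temporal_patch_shuffle(clip, patch_idx, rng):
    """Permute one spatial patch location across the frames of a clip;
    all other patch locations keep their original temporal order."""
    frames = [list(f) for f in clip]            # copy; keep input intact
    column = [f[patch_idx] for f in frames]     # the chosen patch over time
    rng.shuffle(column)
    for f, p in zip(frames, column):
        f[patch_idx] = p
    return frames

clip = [["a0", "b0"], ["a1", "b1"], ["a2", "b2"]]  # 3 frames, 2 patches
shuffled = temporal_patch_shuffle(clip, patch_idx=1, rng=random.Random(0))
```

Because appearance content is preserved while frame order at that location is scrambled, the model is pushed to rely on appearance cues that are stable under temporal misalignment.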
It has been observed in practice that applying pruning-at-initialization methods to neural networks and training the sparsified networks can not only retain the testing performance of the original dense models, but also sometimes even slightly boost the generalization performance. A theoretical understanding of such experimental observations is yet to be developed. This work makes the first attempt to study how different pruning fractions affect the model's gradient descent dynamics and generalization. Specifically, this work considers a classification task for overparameterized two-layer neural networks, where the network is randomly pruned according to different rates at the initialization. It is shown that as long as the pruning fraction is below a certain threshold, gradient descent can drive the training loss toward zero and the network exhibits good generalization performance. More surprisingly, the generalization bound gets better as the pruning fraction gets larger. To complement this positive result, this work further shows a negative result: there exists a large pruning fraction such that while gradient descent is still able to drive the training loss toward zero (by memorizing noise), the generalization performance is no better than random guessing. This further suggests that pruning can change the feature learning process, which leads to the performance drop of the pruned neural network. To our knowledge, this is the \textbf{first} generalization result for pruned neural networks, suggesting that pruning can improve the neural network's generalization.
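The setup analyzed above, randomly pruning a fraction of weights at initialization and keeping the binary mask fixed through training, can be sketched on a flat weight vector (toy values; the paper studies two-layer networks, not this specific routine):

```python
import random

def prune_at_init(weights, fraction, rng):
    """Zero out a random `fraction` of the weights before training and
    return the fixed binary mask along with the sparsified weights."""
    n = len(weights)
    k = int(fraction * n)
    pruned_idx = set(rng.sample(range(n), k))
    mask = [0.0 if i in pruned_idx else 1.0 for i in range(n)]
    sparse = [w * m for w, m in zip(weights, mask)]
    return sparse, mask

def masked_sgd_step(weights, grads, mask, lr=0.1):
    """One gradient step that respects the fixed sparsity mask, so
    pruned weights stay exactly zero throughout training."""
    return [(w - lr * g) * m for w, g, m in zip(weights, grads, mask)]

rng = random.Random(0)
w0 = [0.5, -0.3, 0.8, 0.1, -0.6]
w, mask = prune_at_init(w0, fraction=0.4, rng=rng)
w = masked_sgd_step(w, grads=[0.1] * 5, mask=mask)
```

The paper's question is precisely how the choice of `fraction` in such a scheme shifts the gradient descent dynamics and the resulting generalization bound.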