Generalizability to unseen forgery types is crucial for face forgery detectors. Recent works have made significant progress on generalization through synthetic forgery data augmentation. In this work, we explore another path for improving generalization. Our goal is to suppress the features that are easy to learn during training, so as to reduce the risk of overfitting to specific forgery types. Specifically, in our method, a teacher network takes face images as input and generates an attention map over the deep features using a diverse multihead-attention ViT. The attention map is used to guide a student network to focus on the low-attended features by suppressing the highly-attended deep features. A deep feature mixup strategy is also proposed to synthesize forgeries in the feature domain. Experiments demonstrate that, without data augmentation, our method achieves promising performance on unseen forgeries and highly compressed data.
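A minimal sketch (not the authors' code) of the two ideas described above: suppressing highly-attended deep features using a teacher attention map, and mixing deep features of two samples to synthesize forgeries in feature space. Tensor shapes, the suppression rule, and the Beta-mixing coefficient are illustrative assumptions.

```python
import torch

def suppress_high_attention(student_feats, teacher_attn, ratio=0.3):
    """student_feats: (B, N, C) patch tokens; teacher_attn: (B, N) attention scores."""
    B, N, C = student_feats.shape
    k = int(N * ratio)                              # number of tokens to suppress
    top_idx = teacher_attn.topk(k, dim=1).indices   # most-attended tokens per image
    mask = torch.ones(B, N, 1, device=student_feats.device)
    mask.scatter_(1, top_idx.unsqueeze(-1), 0.0)    # zero out highly-attended tokens
    return student_feats * mask                     # student must rely on low-attended cues

def feature_mixup(real_feats, fake_feats, alpha=0.5):
    """Blend real and fake deep features to synthesize new 'forgeries' in feature space."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * real_feats + (1 - lam) * fake_feats

# usage with random tensors
feats = torch.randn(2, 196, 768)
attn = torch.rand(2, 196)
out = suppress_high_attention(feats, attn)
mixed = feature_mixup(feats, torch.randn_like(feats))
```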
Although the recent misuse of deepfake technology has raised serious concerns, detecting deepfake videos remains challenging because each frame is synthesized photo-realistically. Existing image-level methods usually focus on single frames and ignore the spatiotemporal cues hidden in deepfake videos, leading to poor generalization and robustness. The key for video-level detectors is to fully exploit the spatiotemporal inconsistencies distributed in the local facial regions across different frames of a deepfake video. Inspired by this, this paper proposes a simple yet effective patch-level approach to facilitate deepfake video detection via a spatiotemporal dropout transformer. The approach reorganizes each input video into a bag of patches, which is then fed into a vision transformer to learn robust representations. Specifically, a spatiotemporal dropout operation is proposed to fully explore patch-level spatiotemporal cues and serve as an effective data augmentation that further enhances the robustness and generalization ability of the model. The operation is flexible and can be easily plugged into existing vision transformers. Extensive experiments demonstrate the effectiveness of our approach against 25 state-of-the-art methods, with impressive robustness, generalizability, and representation ability.
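A rough sketch, under assumed shapes, of the "bag of patches" plus spatiotemporal dropout idea: a video is cut into patch tokens across frames, a random subset of tokens is dropped at every training step, and the surviving tokens go to a ViT. The drop rule here is a plain random keep-mask, not the paper's exact operation.

```python
import torch

def spatiotemporal_dropout(video_patches, keep_ratio=0.7, training=True):
    """video_patches: (B, T, N, C) patch embeddings for T frames of N patches each."""
    B, T, N, C = video_patches.shape
    tokens = video_patches.reshape(B, T * N, C)          # flatten the bag of patches
    if not training:
        return tokens
    keep = int(T * N * keep_ratio)
    scores = torch.rand(B, T * N, device=tokens.device)  # random score per token
    idx = scores.topk(keep, dim=1).indices               # tokens to keep this step
    idx = idx.unsqueeze(-1).expand(-1, -1, C)
    return torch.gather(tokens, 1, idx)                  # (B, keep, C) for the ViT

bag = spatiotemporal_dropout(torch.randn(2, 8, 196, 768))
print(bag.shape)  # torch.Size([2, 1097, 768]) given keep_ratio=0.7
```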
Transformers and masked language modeling were quickly adopted in computer vision as vision transformers and masked image modeling (MIM). In this work, we argue that masking image tokens differs from masking tokens in text due to the number and correlation of tokens in an image. In particular, to generate a challenging pretext task for MIM, we advocate a shift from random masking to informed masking. We develop and demonstrate this idea in the context of distillation-based MIM, where a teacher transformer encoder generates an attention map that we use to guide masking for the student. We thus introduce a novel masking strategy, called attention-guided masking (AttMask), and we demonstrate its effectiveness for dense distillation-based MIM as well as plain distillation-based self-supervised learning. We confirm that AttMask speeds up the learning process and improves performance on a variety of downstream tasks. The implementation code is provided at https://github.com/gkakogeorgiou/attmask.
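A condensed sketch of attention-guided masking: the teacher's [CLS] attention over patch tokens decides which patches the student sees masked. Masking the most-attended tokens is one variant; the mask ratio and shapes are illustrative.

```python
import torch

def attention_guided_mask(teacher_cls_attn, mask_ratio=0.4):
    """teacher_cls_attn: (B, N) attention of the [CLS] token over N patch tokens.
    Returns a boolean mask (B, N) that is True for patches to be masked."""
    B, N = teacher_cls_attn.shape
    k = int(N * mask_ratio)
    top = teacher_cls_attn.topk(k, dim=1).indices         # most-attended patches
    mask = torch.zeros(B, N, dtype=torch.bool, device=teacher_cls_attn.device)
    mask.scatter_(1, top, True)
    return mask

attn = torch.rand(4, 196).softmax(dim=1)
mask = attention_guided_mask(attn)
print(mask.sum(dim=1))  # 78 masked patches per image at mask_ratio=0.4
```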
Recently, deepfakes have raised widespread public concern due to security and privacy issues in social-media digital forensics. As the deepfake videos spreading widely on the Internet become increasingly realistic, traditional detection techniques fail to distinguish the real from the fake. Most existing deep learning methods mainly focus on local features and relations within face images, using convolutional neural networks as the backbone. However, local features and relations are insufficient for model training to learn enough general information for deepfake detection. Existing deepfake detection methods have therefore reached a bottleneck in further improving detection performance. To address this issue, we propose a deep convolutional transformer that incorporates decisive image features both locally and globally. Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy. Moreover, we employ the rarely discussed image keyframes in model training to improve performance and visualize the feature-quantity gap between key and normal image frames caused by video compression. We finally illustrate transferability through extensive experiments on several deepfake benchmark datasets. The proposed solution consistently outperforms several state-of-the-art baselines in both within- and cross-dataset experiments.
Deep learning has enabled realistic face manipulation (i.e., deepfakes), which poses significant concerns over the integrity of media in circulation. Most existing deep learning techniques for deepfake detection can achieve promising performance in the intra-dataset evaluation setting (i.e., training and testing on the same dataset), but fail to perform satisfactorily in the inter-dataset evaluation setting (i.e., training on one dataset and testing on another). Most previous methods use the backbone network to extract global features for making predictions and only employ binary supervision (i.e., indicating whether the training instances are fake or authentic) to train the network. Classification based merely on the learning of global features often leads to weak generalizability to unseen manipulation methods. In addition, a reconstruction task can improve the learned representations. In this paper, we introduce a novel approach for deepfake detection, which considers the reconstruction and classification tasks simultaneously to address these problems. This method shares the information learned by one task with the other, which focuses on an aspect that other existing works rarely consider and hence boosts the overall performance. In particular, we design a two-branch Convolutional AutoEncoder (CAE), in which the Convolutional Encoder used to compress the feature map into the latent representation is shared by both branches. The latent representation of the input data is then fed to a simple classifier and the unsupervised reconstruction component simultaneously. Our network is trained end-to-end. Experiments demonstrate that our method achieves state-of-the-art performance on three commonly-used datasets, particularly in the cross-dataset evaluation setting.
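A minimal sketch of the two-branch idea: one convolutional encoder shared by a reconstruction decoder and a binary classifier, trained end-to-end with the sum of the two losses. Layer sizes and the unweighted loss sum are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TwoBranchCAE(nn.Module):
    def __init__(self, latent=128):
        super().__init__()
        self.encoder = nn.Sequential(                     # shared convolutional encoder
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, latent, 4, 2, 1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                     # reconstruction branch
            nn.ConvTranspose2d(latent, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),
        )
        self.classifier = nn.Sequential(                  # classification branch
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(latent, 2),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

model = TwoBranchCAE()
x = torch.rand(4, 3, 64, 64)
y = torch.randint(0, 2, (4,))
recon, logits = model(x)
loss = nn.functional.mse_loss(recon, x) + nn.functional.cross_entropy(logits, y)
loss.backward()
```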
As ultra-realistic face forgery techniques emerge, deepfake detection has attracted increasing attention due to security concerns. Many detectors cannot achieve accurate results when detecting unseen manipulations despite excellent performance on known forgeries. In this paper, we are motivated by the observation that the discrepancies between real and fake videos are extremely subtle and localized, and that inconsistencies or irregularities can exist in some critical facial regions across various information domains. To this end, we propose a novel pipeline, Cross-Domain Local Forensics (XDLF), for more general deepfake video detection. In the proposed pipeline, a specialized framework is presented to simultaneously exploit local forgery patterns from the space, frequency, and time domains, thus learning cross-domain features to detect forgeries. Moreover, the framework leverages four high-level forgery-sensitive local regions of a human face to guide the model to enhance subtle artifacts and localize potential anomalies. Extensive experiments on several benchmark datasets demonstrate the impressive performance of our method, which achieves superiority over several state-of-the-art methods on cross-dataset generalization. We also examine, through ablations, the factors that contribute to its performance, which suggests that exploiting cross-domain local characteristics is a noteworthy direction for developing more general deepfake detectors.
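A small illustration of extracting a frequency-domain cue from a local face region, one of the cross-domain signals this kind of pipeline can consume: the log-magnitude FFT spectrum of a facial crop, which a detector could process alongside spatial and temporal features. The crop coordinates and the log-magnitude choice are placeholders, not the paper's design.

```python
import torch

def local_frequency_feature(frame, box):
    """frame: (C, H, W) image tensor; box: (y0, y1, x0, x1) crop of a facial region."""
    y0, y1, x0, x1 = box
    crop = frame[:, y0:y1, x0:x1]
    spectrum = torch.fft.fft2(crop)                                  # per-channel 2D FFT
    magnitude = torch.log1p(torch.abs(torch.fft.fftshift(spectrum, dim=(-2, -1))))
    return magnitude                                                 # (C, h, w) frequency map

freq = local_frequency_feature(torch.rand(3, 224, 224), (60, 124, 80, 144))
print(freq.shape)  # torch.Size([3, 64, 64])
```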
Transformers have attracted considerable attention due to their ability to learn global relations and their superior performance. To achieve higher performance, it is natural to distill complementary knowledge from transformers to convolutional neural networks (CNNs). However, most existing knowledge distillation methods only consider homologous-architecture distillation, such as distilling knowledge from a CNN to a CNN, and may be unsuitable in cross-architecture scenarios, such as from a transformer to a CNN. To address this problem, a novel cross-architecture knowledge distillation method is proposed. Specifically, instead of directly mimicking the teacher's output or intermediate features, a partially cross attention projector and a group-wise linear projector are introduced to align the student's features with the teacher's. A multi-view robust training scheme is further proposed to improve the robustness and stability of the framework. Extensive experiments show that the proposed method outperforms 14 state-of-the-art methods on both small-scale and large-scale datasets.
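A loose sketch of aligning CNN student features to transformer teacher tokens with a cross-attention projector before a feature-distillation loss. The use of nn.MultiheadAttention, the feature dimensions, and the MSE objective are illustrative, not the paper's exact projectors.

```python
import torch
import torch.nn as nn

class CrossAttnProjector(nn.Module):
    def __init__(self, student_dim=256, teacher_dim=768, heads=8):
        super().__init__()
        self.proj = nn.Linear(student_dim, teacher_dim)
        self.attn = nn.MultiheadAttention(teacher_dim, heads, batch_first=True)

    def forward(self, student_map, teacher_tokens):
        """student_map: (B, C, H, W) CNN features; teacher_tokens: (B, N, D)."""
        B, C, H, W = student_map.shape
        s = self.proj(student_map.flatten(2).transpose(1, 2))   # (B, H*W, D)
        aligned, _ = self.attn(query=teacher_tokens, key=s, value=s)
        return aligned                                           # compared against teacher_tokens

proj = CrossAttnProjector()
s = torch.randn(2, 256, 14, 14)
t = torch.randn(2, 196, 768)
loss = nn.functional.mse_loss(proj(s, t), t)
loss.backward()
```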
Recent advances in face forgery techniques can produce deepfake videos that are almost visually untraceable and can be exploited with malicious intent. As a result, researchers have devoted themselves to deepfake detection. Previous studies have identified the importance of local low-level cues and temporal information for generalizing across forgery methods; however, they still suffer from robustness problems. In this work, we propose the Local- and Temporal-aware Transformer-based Deepfake Detection (LTTD) framework, which adopts a local-to-global learning protocol with particular attention to the valuable temporal information within local sequences. Specifically, we propose a Local Sequence Transformer (LST), which models the temporal consistency of sequences of restricted spatial regions, where low-level information is enhanced by shallow layers of learned 3D filters. Based on the local temporal embeddings, we then achieve the final classification in a global contrastive way. Extensive experiments on popular datasets validate that our method effectively spots local forgery cues and achieves state-of-the-art performance.
Face manipulation detection has been receiving a lot of attention due to the reliability and security concerns around face images. Recent studies focus on using auxiliary information or prior knowledge to capture robust manipulation traces, which has been shown to be promising. As one of the important face features, the face depth map, which has proven effective in other areas such as face recognition and face detection, has unfortunately received little attention in the literature on detecting manipulated face images. In this paper, we explore the possibility of incorporating the face depth map as auxiliary information to tackle the problem of face manipulation detection in real-world applications. To this end, we first propose a Face Depth Map Transformer (FDMT) to estimate the face depth map patch by patch from an RGB face image, which is able to capture the local depth anomalies created by manipulation. The estimated face depth map is then treated as auxiliary information and integrated with the backbone features using a newly designed Multi-head Depth Attention (MDA) mechanism. Various experiments demonstrate the advantage of our proposed method for face manipulation detection.
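An illustrative sketch of using an estimated depth map as auxiliary attention over backbone features: per-patch depth values produce keys that re-weight the RGB tokens through cross-attention with a residual connection. This is a generic attention fusion under assumed shapes, not the exact MDA design.

```python
import torch
import torch.nn as nn

class DepthAttentionFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.depth_embed = nn.Linear(1, dim)                 # embed per-patch depth value
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, rgb_tokens, depth_patches):
        """rgb_tokens: (B, N, C); depth_patches: (B, N) mean estimated depth per patch."""
        d = self.depth_embed(depth_patches.unsqueeze(-1))    # (B, N, C)
        fused, _ = self.attn(query=rgb_tokens, key=d, value=rgb_tokens)
        return rgb_tokens + fused                            # residual fusion

fuse = DepthAttentionFusion()
out = fuse(torch.randn(2, 196, 256), torch.rand(2, 196))
print(out.shape)  # torch.Size([2, 196, 256])
```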
Facial action units (FAUs) are critical for fine-grained facial expression analysis. Although FAU detection has been actively studied on ideally high-quality images, it has not been thoroughly studied under heavily occluded conditions. In this paper, we propose the first occlusion-robust FAU recognition method to maintain FAU detection performance under heavy occlusions. Our novel approach takes advantage of the rich information in the latent space of a masked autoencoder (MAE) and transforms it into FAU features. Bypassing the occlusion reconstruction step, our model efficiently extracts FAU features of occluded faces by mining the latent space of a pretrained masked autoencoder. Both node- and edge-level knowledge distillation are also employed to guide our model to find a mapping between latent space vectors and FAU features. Facial occlusion conditions, including random small patches and large blocks, are thoroughly studied. Experimental results on the BP4D and DISFA datasets show that our method achieves state-of-the-art performance under the studied facial occlusions, significantly outperforming existing baseline methods. In particular, even under heavy occlusion, the proposed method achieves performance comparable to that of state-of-the-art methods under normal conditions.
With the development of generative self-supervised learning (SSL) approaches such as BEiT and MAE, learning good representations by masking random patches of the input image and reconstructing the missing information has attracted considerable attention. However, BEiT and PeCo require a "pre-pretraining" stage to produce a discrete codebook for representing the masked patches. MAE does not require a pre-trained codebook, but using pixels as the reconstruction target may introduce an optimization gap between pre-training and downstream tasks, i.e., good reconstruction quality does not always imply a high descriptive capability of the model. Considering the above issues, in this paper we propose a simple Self-Distillated masked AutoEncoder network, namely SDAE. SDAE consists of a student branch that uses an encoder-decoder structure to reconstruct the missing information, and a teacher branch that produces latent representations of the masked tokens. We also analyze, from an information-bottleneck perspective, how to build good views for the teacher branch to produce the latent representations. After that, we propose a multi-fold masking strategy to provide multiple masked views with balanced information to boost performance, which also reduces computational complexity. Our method generalizes well: with only 300 epochs of pre-training, a vanilla ViT-Base model achieves 84.1% fine-tuning accuracy on ImageNet-1K classification, 48.6 mIoU on ADE20K segmentation, and 48.9 mAP on COCO detection, surpassing other methods by a considerable margin. Code is available at https://github.com/abrahamyabo/sdae.
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distilling targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 mIoU higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, i.e., by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
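A compact sketch of "token relation" distillation: instead of matching features or the CLS token, the student matches the teacher's token-to-token similarity matrix. The KL-over-softmax formulation and temperature below are one common way to phrase such a loss, used here for illustration.

```python
import torch
import torch.nn.functional as F

def token_relation_loss(student_tokens, teacher_tokens, tau=1.0):
    """student_tokens, teacher_tokens: (B, N, C) patch tokens (dims may differ)."""
    def relations(x):
        x = F.normalize(x, dim=-1)
        return x @ x.transpose(1, 2) / tau              # (B, N, N) similarity matrix
    s = F.log_softmax(relations(student_tokens), dim=-1)
    t = F.softmax(relations(teacher_tokens), dim=-1)
    return F.kl_div(s, t, reduction="batchmean")

loss = token_relation_loss(torch.randn(2, 196, 192, requires_grad=True),
                           torch.randn(2, 196, 768))
loss.backward()
```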
Benefiting from masked visual modeling, self-supervised video representation learning has achieved remarkable progress. However, existing methods focus on learning representations from scratch through reconstructing low-level features like raw pixel RGB values. In this paper, we propose masked video distillation (MVD), a simple yet effective two-stage masked feature modeling framework for video representation learning: first, we pretrain an image (or video) model by recovering low-level features of masked patches, then we use the resulting features as targets for masked feature modeling. For the choice of teacher models, we observe that students taught by video teachers perform better on temporally-heavy video tasks, while image teachers transfer stronger spatial representations for spatially-heavy video tasks. Visualization analysis also indicates different teachers produce different learned patterns for students. Motivated by this observation, to leverage the advantage of different teachers, we design a spatial-temporal co-teaching method for MVD. Specifically, we distill student models from both video teachers and image teachers by masked feature modeling. Extensive experimental results demonstrate that video transformers pretrained with spatial-temporal co-teaching outperform models distilled with a single teacher on a multitude of video datasets. Our MVD with vanilla ViT achieves state-of-the-art performance compared with previous supervised or self-supervised methods on several challenging video downstream tasks. For example, with the ViT-Large model, our MVD achieves 86.4% and 75.9% Top-1 accuracy on Kinetics-400 and Something-Something-v2, outperforming VideoMAE by 1.2% and 1.6% respectively. Code will be available at https://github.com/ruiwang2021/mvd.
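A bare-bones sketch of spatial-temporal co-teaching: the student's predictions at masked positions are regressed toward features from both an image teacher and a video teacher, with a weighting between the two. All tensors and the equal weighting are stand-ins for illustration, not the MVD implementation.

```python
import torch
import torch.nn.functional as F

def co_teaching_loss(student_pred_img, student_pred_vid, image_teacher_feats,
                     video_teacher_feats, mask, w_img=0.5, w_vid=0.5):
    """All feature tensors: (B, N, C); mask: (B, N) bool, True at masked positions."""
    m = mask.unsqueeze(-1)
    l_img = F.mse_loss(student_pred_img * m, image_teacher_feats * m)
    l_vid = F.mse_loss(student_pred_vid * m, video_teacher_feats * m)
    return w_img * l_img + w_vid * l_vid

mask = torch.rand(2, 1568) > 0.1
loss = co_teaching_loss(torch.randn(2, 1568, 384, requires_grad=True),
                        torch.randn(2, 1568, 384, requires_grad=True),
                        torch.randn(2, 1568, 384), torch.randn(2, 1568, 384), mask)
loss.backward()
```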
With the rapid development of deep generative models (such as Generative Adversarial Networks and Auto-encoders), AI-synthesized images of the human face are now of such high quality that humans can hardly distinguish them from pristine ones. Although existing detection methods have shown high performance in specific evaluation settings, e.g., on images from seen models or on images without real-world post-processing, they tend to suffer serious performance degradation in real-world scenarios where testing images can be generated by more powerful generation models or combined with various post-processing operations. To address this issue, we propose a Global and Local Feature Fusion (GLFF) framework to learn rich and discriminative representations by combining multi-scale global features from the whole image with refined local features from informative patches for face forgery detection. GLFF fuses information from two branches: a global branch to extract multi-scale semantic features and a local branch to select informative patches for detailed local artifact extraction. Due to the lack of a face forgery dataset simulating real-world applications for evaluation, we further create a challenging face forgery dataset, named DeepFakeFaceForensics (DF^3), which contains 6 state-of-the-art generation models and a variety of post-processing techniques to approximate real-world scenarios. Experimental results demonstrate the superiority of our method over state-of-the-art methods on the proposed DF^3 dataset and three other open-source datasets.
Despite encouraging progress in deepfake detection, generalization to unseen forgery types remains a significant challenge due to the limited forgery clues explored during training. In contrast, we notice a common phenomenon in deepfakes: fake video creation inevitably disrupts the statistical regularity of the original videos. Inspired by this observation, we propose to boost the generalization of deepfake detection by distinguishing the "regularity disruption" that does not appear in real videos. Specifically, by carefully examining the spatial and temporal properties, we propose to disrupt a real video through a pseudo-fake generator and create a wide range of pseudo-fake videos for training. This practice allows us to achieve deepfake detection without using fake videos and improves the generalization ability in a simple and effective way. To jointly capture the spatial and temporal disruptions, we propose a Spatio-Temporal Enhancement block to learn the regularity disruption across our self-created videos. Through comprehensive experiments, our method shows excellent performance on several datasets.
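A crude illustration of creating a pseudo-fake from a real frame by disrupting its local statistics: a face-region crop is blurred and blended back in, breaking the regularity of the original pixels. The actual pseudo-fake generator described above is considerably richer; the region, blur, and blend factor here are arbitrary choices.

```python
import torch
import torch.nn.functional as F

def pseudo_fake(frame, box, blend=0.5):
    """frame: (C, H, W); box: (y0, y1, x0, x1) region to disturb."""
    y0, y1, x0, x1 = box
    region = frame[:, y0:y1, x0:x1].unsqueeze(0)
    blurred = F.avg_pool2d(region, kernel_size=5, stride=1, padding=2).squeeze(0)
    out = frame.clone()
    out[:, y0:y1, x0:x1] = blend * blurred + (1 - blend) * frame[:, y0:y1, x0:x1]
    return out

fake = pseudo_fake(torch.rand(3, 224, 224), (60, 160, 60, 160))
print(fake.shape)  # torch.Size([3, 224, 224])
```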
With the rapid development of face forgery techniques, deepfake videos have attracted widespread attention in digital media. Perpetrators heavily exploit these videos to spread disinformation and make misleading statements. Most existing deepfake detection methods mainly focus on texture features, which may be affected by external fluctuations such as illumination and noise. In addition, detection methods based on facial landmarks are more robust to external variables but lack sufficient detail. Thus, how to effectively mine distinctive features in the spatial, temporal, and frequency domains and fuse them with facial landmarks for forgery video detection remains an open question. To this end, we propose a Landmark-Enhanced Multimodal Graph Neural Network (LEM-GNN) based on multiple modalities of information and the geometric features of facial landmarks. Specifically, at the frame level, we design a fusion mechanism to mine a joint representation of spatial and frequency-domain elements, while introducing geometric facial features to enhance the robustness of the model. At the video level, we first regard each frame of the video as a node in a graph and encode temporal information into the edges of the graph. Then, by applying the message-passing mechanism of a graph neural network (GNN), the multimodal features are effectively combined to obtain a comprehensive representation of the video forgery. Extensive experiments show that our method consistently outperforms the state of the art (SOTA) on widely used benchmarks.
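A toy sketch of the video-level idea: frames are nodes, temporal adjacency defines edges, and one round of message passing mixes per-frame multimodal features. A single linear layer stands in for a full GNN; this is not the LEM-GNN architecture.

```python
import torch
import torch.nn as nn

def temporal_adjacency(num_frames):
    """Connect each frame to its temporal neighbours (and itself)."""
    a = torch.eye(num_frames)
    idx = torch.arange(num_frames - 1)
    a[idx, idx + 1] = 1.0
    a[idx + 1, idx] = 1.0
    return a / a.sum(dim=1, keepdim=True)                 # row-normalized adjacency

class FrameGraphLayer(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, frame_feats, adj):
        """frame_feats: (T, C) one fused feature vector per frame; adj: (T, T)."""
        return torch.relu(self.lin(adj @ frame_feats))    # aggregate neighbours, then transform

feats = torch.randn(16, 128)                              # features of 16 frames
out = FrameGraphLayer()(feats, temporal_adjacency(16))
print(out.shape)  # torch.Size([16, 128])
```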
Face forgery detection methods based on convolutional neural networks achieve remarkable results during training but struggle to maintain comparable performance during testing. We observe that the detector tends to focus on content information rather than artifact traces, which indicates that the detector is sensitive to the intrinsic bias of the dataset and leads to severe overfitting. Motivated by this key observation, we design an easily embeddable disentanglement framework to remove content information, and further propose a Content Consistency Constraint (C2C) and a Global Representation Contrastive Constraint (GRCC) to enhance the independence of the disentangled features. Furthermore, we cleverly construct two unbalanced datasets to investigate the impact of content bias. Extensive visualizations and experiments demonstrate that our framework can not only ignore the interference of content information but also guide the detector to mine suspicious artifact traces and achieve competitive performance.
In person re-identification (ReID), recent studies have verified that pre-training models on unlabeled person images performs much better than pre-training on ImageNet. However, these studies directly apply existing self-supervised learning (SSL) methods designed for image classification to ReID without any adaptation of the framework. These SSL methods match the outputs of local views (e.g., a red T-shirt, blue shorts) to those of the global view at the same time, losing a lot of details. In this paper, we propose a ReID-specific pre-training method, Part-Aware Self-Supervised pre-training (PASS), which generates part-level features to offer fine-grained information and is better suited for ReID. PASS divides an image into several local areas, and the local views randomly cropped from each area are assigned a specific learnable [PART] token. The [PART] tokens of all local areas are also appended to the global views. PASS learns to match the outputs of the local views and the global views on the same [PART], i.e., the [PART] of a local view obtained from a local area is matched only with the corresponding [PART] learned from the global views. As a result, each [PART] can focus on a specific local area of the image and extract fine-grained information of this area. Experiments show that PASS sets new state-of-the-art performance on Market1501 and MSMT17 on various ReID tasks; e.g., a vanilla ViT-S/16 with PASS achieves 92.2%/90.2%/88.5% mAP accuracy on Market1501 for supervised/UDA/USL ReID. Our code is available at https://github.com/casia-iva-lab/pass-reid.
Knowledge distillation (KD) has shown very promising capabilities in transferring learned representations from large models (teachers) to small models (students). However, as the capacity gap between students and teachers grows, existing KD methods fail to achieve better results. Our work shows that "prior knowledge" is vital to KD, especially when a large teacher is applied. In particular, we propose Dynamic Prior Knowledge (DPK), which integrates part of the teacher's features as prior knowledge before the feature distillation. This means that our method also treats the teacher's features as "input", not just as "targets". Moreover, we dynamically adjust the ratio of prior knowledge during training according to the feature gap, thus guiding the student at an appropriate difficulty. To evaluate the proposed method, we conduct extensive experiments on two image classification benchmarks (i.e., CIFAR-100 and ImageNet) and one object detection benchmark (i.e., MS COCO). The results demonstrate the superiority of our method in performance under different settings. More importantly, our DPK makes the performance of the student model positively correlated with that of the teacher model, which means that we can further boost the accuracy of students by applying larger teachers. Our code will be publicly available for reproducibility.
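A simplified sketch, under assumptions, of the "teacher features as prior" idea: a random portion of the teacher's feature map is pasted into the student's feature map before the distillation loss, with the pasted ratio treated as an adjustable prior ratio. The dynamic adjustment of that ratio and the exact masking scheme are not reproduced here.

```python
import torch
import torch.nn.functional as F

def prior_knowledge_distill(student_feats, teacher_feats, prior_ratio=0.5):
    """student_feats, teacher_feats: (B, C, H, W) with matching shapes (after projection)."""
    B, C, H, W = teacher_feats.shape
    prior_mask = (torch.rand(B, 1, H, W, device=teacher_feats.device) < prior_ratio).float()
    mixed = prior_mask * teacher_feats + (1 - prior_mask) * student_feats
    return F.mse_loss(mixed, teacher_feats)              # student only has to fill in the rest

s = torch.randn(2, 256, 14, 14, requires_grad=True)
t = torch.randn(2, 256, 14, 14)
prior_knowledge_distill(s, t).backward()
```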
Generated by various face manipulation techniques, face forgeries have drawn continuous attention due to security concerns. Previous works always formulate face forgery detection as a classification problem based on cross-entropy loss, which emphasizes category-level differences rather than the essential discrepancies between real and fake faces, limiting model generalization to unseen domains. To address this issue, we propose a novel face forgery detection framework, named Dual Contrastive Learning (DCL), which specially constructs positive and negative paired data and performs designed contrastive learning at different granularities to learn generalized feature representations. Specifically, combined with a hard-sample selection strategy, an inter-instance contrastive learning strategy is first proposed to promote task-related discriminative feature learning by specially constructing instance pairs. Moreover, to further explore the essential discrepancies, intra-instance contrastive learning (Intra-ICL) is introduced to focus on the local content inconsistencies prevalent in forged faces by constructing local-region pairs inside instances. Extensive experiments and visualizations on several datasets demonstrate the generalization of our method against state-of-the-art competitors.
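A short sketch of instance-level contrastive learning over real/fake embeddings with a supervised InfoNCE-style loss: positives share a label, negatives differ. The hard-sample selection and the intra-instance (local-region) part of DCL are omitted; this is a generic formulation for illustration.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive(features, labels, tau=0.1):
    """features: (B, C) embeddings; labels: (B,) 0 = real, 1 = fake."""
    f = F.normalize(features, dim=1)
    sim = f @ f.t() / tau                                  # (B, B) pairwise similarities
    mask_pos = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    mask_pos.fill_diagonal_(0)                             # exclude self-pairs
    logits = sim - torch.eye(len(labels), device=sim.device) * 1e9
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_count = mask_pos.sum(dim=1).clamp(min=1)
    return -(mask_pos * log_prob).sum(dim=1).div(pos_count).mean()

feats = torch.randn(8, 128, requires_grad=True)
labels = torch.randint(0, 2, (8,))
supervised_contrastive(feats, labels).backward()
```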