Face forgery detection, arising from various facial manipulation techniques, has attracted continuous attention due to security concerns. Previous works always formulate face forgery detection as a classification problem based on the cross-entropy loss, which emphasizes category-level differences rather than the essential discrepancies between real and fake faces, limiting model generalization to unseen domains. To address this issue, we propose a novel face forgery detection framework, named Dual Contrastive Learning (DCL), which specially constructs positive and negative paired data and performs designed contrastive learning at different granularities to learn generalized feature representations. Concretely, combined with a hard-sample selection strategy, inter-instance contrastive learning is first proposed to promote task-related discriminative feature learning by specially constructing instance pairs. Moreover, to further explore the essential discrepancies, intra-instance contrastive learning (Intra-ICL) is introduced to focus on the local content inconsistencies prevalent in forged faces by constructing local-region pairs inside instances. Extensive experiments and visualizations on several datasets demonstrate the generalization of our method over the state-of-the-art competitors.
translated by Google Translate
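The paired contrastive objective described above can be illustrated with a standard InfoNCE-style loss. This is a minimal generic sketch in plain Python (cosine similarity, a single positive per anchor), not the authors' exact DCL formulation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss: -log( exp(sim(a,p)/tau) / sum over positive and negatives )."""
    logits = [cosine(anchor, positive) / tau] + [cosine(anchor, n) / tau for n in negatives]
    m = max(logits)  # subtract the max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))
```

Hard negatives, i.e. forged faces whose embeddings score close to the anchor, enlarge the denominator and make the loss more informative, which is the intuition behind a hard-sample selection strategy.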
Deep learning has enabled realistic face manipulation (i.e., deepfake), which poses significant concerns over the integrity of the media in circulation. Most existing deep learning techniques for deepfake detection can achieve promising performance in the intra-dataset evaluation setting (i.e., training and testing on the same dataset), but are unable to perform satisfactorily in the inter-dataset evaluation setting (i.e., training on one dataset and testing on another). Most of the previous methods use the backbone network to extract global features for making predictions and only employ binary supervision (i.e., indicating whether the training instances are fake or authentic) to train the network. Classification merely based on the learning of global features often leads to weak generalizability to unseen manipulation methods. In addition, the reconstruction task can improve the learned representations. In this paper, we introduce a novel approach for deepfake detection, which considers the reconstruction and classification tasks simultaneously to address these problems. This method shares the information learned by one task with the other, which focuses on a different aspect that other existing works rarely consider and hence boosts the overall performance. In particular, we design a two-branch Convolutional AutoEncoder (CAE), in which the Convolutional Encoder used to compress the feature map into the latent representation is shared by both branches. Then the latent representation of the input data is fed to a simple classifier and the unsupervised reconstruction component simultaneously. Our network is trained end-to-end. Experiments demonstrate that our method achieves state-of-the-art performance on three commonly-used datasets, particularly in the cross-dataset evaluation setting.
With the rapid development of face forgery techniques, forgery detection has attracted increasing attention due to security concerns. Existing approaches attempt to use frequency information to mine subtle artifacts in high-quality forged faces. However, their exploitation of frequency information is coarse-grained and, more importantly, their vanilla learning process struggles to extract fine-grained forgery traces. To address this issue, we propose a progressive enhancement learning framework to exploit both RGB and fine-grained frequency clues. Specifically, we perform a fine-grained decomposition of RGB images to completely decouple the real and fake traces in the frequency space. Subsequently, we propose a progressive enhancement learning framework based on a two-branch network, combined with self-enhancement and mutual-enhancement modules. The self-enhancement module captures the traces in different input spaces based on spatial noise enhancement and channel attention. The mutual-enhancement module concurrently enhances the RGB and frequency features by communicating in the shared spatial dimension. The progressive enhancement process facilitates the learning of discriminative features with fine-grained face forgery clues. Extensive experiments on several datasets show that our method outperforms state-of-the-art face forgery detection methods.
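The spatial-noise-enhancement idea, suppressing image content while keeping high-frequency residues, can be approximated by subtracting a local mean from every pixel. The sketch below is a hypothetical filter-based stand-in for the learned module, operating on a 2-D grayscale image stored as a list of rows:

```python
def local_mean(img, r=1):
    """Mean over the (2r+1) x (2r+1) neighbourhood, clamped at the borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[ii][jj]
                    for ii in range(max(0, i - r), min(h, i + r + 1))
                    for jj in range(max(0, j - r), min(w, j + r + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

def noise_residual(img):
    """High-frequency residual: the input minus its local mean.

    On a smooth (content-dominated) region the residual vanishes,
    while abrupt local deviations -- candidate forgery traces -- survive.
    """
    mean = local_mean(img)
    return [[p - m for p, m in zip(row, mrow)] for row, mrow in zip(img, mean)]
```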
In recent years, with the rapid development of face editing and generation, more and more fake videos are circulating on social media, which has aroused extreme public concern. Existing frequency-domain-based face forgery detection methods find that, compared with real images, GAN-forged images exhibit obvious grid-like visual artifacts in the frequency spectrum. But for synthesized videos, these methods confine themselves to a single frame and pay little attention to the most discriminative parts and the temporal frequency clues among different frames. To take full advantage of the rich information in video sequences, this paper performs video forgery detection in both the spatial and temporal frequency domains and proposes a Discrete Cosine Transform-based Forgery Clue Augmentation Network (FCAN-DCT) to achieve a more comprehensive spatial-temporal feature representation. FCAN-DCT consists of a backbone network and two branches: a Compact Feature Extraction (CFE) module and a Frequency Temporal Attention (FTA) module. We conduct thorough experimental evaluations on two visible light (VIS) datasets, WildDeepfake and Celeb-DF (v2), as well as our self-built video forgery dataset DeepfakeNIR, which is the first video forgery dataset in the near-infrared modality. The experimental results demonstrate the effectiveness of our method in detecting forgery videos in both VIS and NIR scenarios.
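A DCT-based pipeline like the one above rests on splitting a signal into frequency bands. Below is a self-contained 1-D orthonormal DCT-II/DCT-III pair with a band-split helper; this is a toy illustration only, since the paper operates on 2-D frames plus the temporal axis:

```python
import math

def dct2(x):
    """Orthonormal DCT-II of a 1-D signal (as in scipy's norm='ortho')."""
    N = len(x)
    out = []
    for k in range(N):
        c = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(c * sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                           for n in range(N)))
    return out

def idct2(X):
    """Inverse of the orthonormal DCT-II (i.e. an orthonormal DCT-III)."""
    N = len(X)
    out = []
    for n in range(N):
        s = 0.0
        for k in range(N):
            c = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
            s += c * X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
        out.append(s)
    return out

def band_split(x, cutoff):
    """Split a signal into low- and high-frequency components that sum back to x."""
    X = dct2(x)
    low = idct2([v if k < cutoff else 0.0 for k, v in enumerate(X)])
    high = idct2([v if k >= cutoff else 0.0 for k, v in enumerate(X)])
    return low, high
```

Grid-like GAN artifacts show up as energy in specific coefficient positions, which is why inspecting DCT bands (rather than raw pixels) can expose them.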
Face forgery detection methods based on convolutional neural networks achieve remarkable results during training but struggle to maintain comparable performance during testing. We observe that detectors tend to focus on content information rather than artifact traces, which indicates that detectors are sensitive to the intrinsic bias of the dataset and leads to severe overfitting. Motivated by this key observation, we design an easily embeddable disentanglement framework to remove content information, and further propose a Content Consistency Constraint (C2C) and a Global Representation Contrastive Constraint (GRCC) to enhance the independence of the disentangled features. Furthermore, we carefully construct two unbalanced datasets to investigate the impact of content bias. Extensive visualizations and experiments demonstrate that our framework can not only ignore the interference of content information but also guide the detector to mine suspicious artifact traces and achieve competitive performance.
Online media data, in the forms of images and videos, are becoming mainstream communication channels. However, recent advances in deep learning, particularly deep generative models, open the doors for producing perceptually convincing images and videos at a low cost, which not only poses a serious threat to the trustworthiness of digital information but also has severe societal implications. This motivates a growing interest of research in media tampering detection, i.e., using deep learning techniques to examine whether media data have been maliciously manipulated. Depending on the content of the targeted images, media forgery could be divided into image tampering and Deepfake techniques. The former typically moves or erases the visual elements in ordinary images, while the latter manipulates the expressions and even the identity of human faces. Accordingly, the means of defense include image tampering detection and Deepfake detection, which share a wide variety of properties. In this paper, we provide a comprehensive review of the current media tampering detection approaches, and discuss the challenges and trends in this field for future research.
Recently, Deepfake has drawn considerable public attention due to security and privacy concerns in social media digital forensics. As the widely spreading Deepfake videos on the Internet become more and more realistic, traditional detection techniques fail to distinguish between real and fake. Most existing deep learning methods mainly focus on local features and relations within the face image, using convolutional neural networks as the backbone. However, local features and relations are insufficient for model training to learn enough general information for Deepfake detection. Therefore, existing Deepfake detection methods have reached a bottleneck in further improving detection performance. To address this issue, we propose a deep convolutional Transformer to incorporate the decisive image features both locally and globally. Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy. Moreover, we employ the barely discussed image keyframes in model training to improve performance, and visualize the feature-quantity gap between key and normal image frames caused by video compression. We finally illustrate transferability with extensive experiments on several Deepfake benchmark datasets. The proposed solution consistently outperforms several state-of-the-art baselines in both within- and cross-dataset experiments.
With the rapid development of face forgery techniques, Deepfake videos have attracted widespread attention in digital media. Perpetrators heavily utilize these videos to spread disinformation and make misleading statements. Most existing Deepfake detection methods mainly focus on texture features, which may be affected by external fluctuations such as illumination and noise. In addition, detection methods based on facial landmarks are more robust against external variables but lack sufficient detail. Thus, how to effectively mine distinctive features in the spatial, temporal, and frequency domains and fuse them with facial landmarks for forgery video detection remains an open question. To this end, we propose a Landmark-Enhanced Multimodal Graph Neural Network (LEM-GNN) based on multimodal information and the geometric features of facial landmarks. Specifically, at the frame level, we design a fusion mechanism to mine a joint representation of spatial and frequency-domain elements, while introducing geometric facial features to enhance the robustness of the model. At the video level, we first regard each frame in a video as a node in a graph and encode the temporal information into the edges of the graph. Then, by applying the message-passing mechanism of graph neural networks (GNNs), the multimodal features are effectively combined to obtain a comprehensive representation of video forgery. Extensive experiments show that our method consistently outperforms the state of the art (SOTA) on widely used benchmarks.
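The video-level step above, frames as graph nodes with temporal information on the edges, boils down to message passing. A minimal mean-aggregation round over a temporal chain graph might look like the following generic GNN sketch (not the LEM-GNN layer itself):

```python
def temporal_edges(num_frames):
    """Chain graph: each frame exchanges messages with its temporal neighbours."""
    e = []
    for i in range(num_frames - 1):
        e.append((i, i + 1))
        e.append((i + 1, i))
    return e

def message_pass(node_feats, edges):
    """One round of mean-aggregation message passing.

    node_feats: list of per-frame feature vectors
    edges: list of (src, dst) pairs
    """
    n, dim = len(node_feats), len(node_feats[0])
    agg = [[0.0] * dim for _ in range(n)]
    deg = [0] * n
    for s, d in edges:
        for k in range(dim):
            agg[d][k] += node_feats[s][k]
        deg[d] += 1
    out = []
    for i in range(n):
        if deg[i]:
            # combine a node's own feature with the mean of its incoming messages
            out.append([0.5 * node_feats[i][k] + 0.5 * agg[i][k] / deg[i]
                        for k in range(dim)])
        else:
            out.append(list(node_feats[i]))
    return out
```

Stacking several such rounds lets evidence from temporally distant frames accumulate into every node's representation before a readout classifies the whole video.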
Video synthesis methods rapidly improved in recent years, allowing easy creation of synthetic humans. This poses a problem, especially in the era of social media, as synthetic videos of speaking humans can be used to spread misinformation in a convincing manner. Thus, there is a pressing need for accurate and robust deepfake detection methods that can detect forgery techniques not seen during training. In this work, we explore whether this can be done by leveraging a multi-modal, out-of-domain backbone trained in a self-supervised manner, adapted to the video deepfake domain. We propose FakeOut, a novel approach that relies on multi-modal data throughout both the pre-training phase and the adaptation phase. We demonstrate the efficacy and robustness of FakeOut in detecting various types of deepfakes, especially manipulations which were not seen during training. Our method achieves state-of-the-art results in cross-manipulation and cross-dataset generalization. This study shows that, perhaps surprisingly, training on out-of-domain videos (i.e., videos with no speaking humans) can lead to better deepfake detection systems. Code is available on GitHub.
While the recent abuse of Deepfake technology has raised serious concerns, how to detect Deepfake videos is still a challenge due to the highly photo-realistic synthesis of each frame. Existing image-level approaches often focus on single frames and ignore the spatiotemporal cues hidden in Deepfake videos, resulting in poor generalization and robustness. The key of a video-level detector is to fully exploit the spatiotemporal inconsistencies distributed in the local facial regions across different frames in Deepfake videos. Inspired by this, this paper proposes a simple yet effective patch-level approach to facilitate Deepfake video detection via a spatiotemporal dropout transformer. The approach reorganizes each input video into bags of patches, which are then fed into a vision transformer to achieve robust representations. Specifically, a spatiotemporal dropout operation is proposed to fully explore patch-level spatiotemporal cues and serve as an effective data augmentation that further enhances the model's robustness and generalization ability. The operation is flexible and can be easily plugged into existing vision transformers. Extensive experiments demonstrate the effectiveness of our approach against 25 state-of-the-art methods, with impressive robustness, generalizability, and representation ability.
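Reorganizing a video into a bag of patches and randomly discarding some of them across space and time can be sketched as follows (toy Python; in the paper the operation is applied inside a vision transformer):

```python
import random

def video_to_patch_bag(video, patch):
    """Cut every frame of a video (list of 2-D frames) into non-overlapping
    patch x patch tiles, each tagged with its (frame, row, col) position."""
    bag = []
    for t, frame in enumerate(video):
        for i in range(0, len(frame) - patch + 1, patch):
            for j in range(0, len(frame[0]) - patch + 1, patch):
                tile = [row[j:j + patch] for row in frame[i:i + patch]]
                bag.append(((t, i, j), tile))
    return bag

def spatiotemporal_dropout(bag, drop_rate, rng=None):
    """Randomly discard a fraction of patches across space and time; the
    survivors act as an augmented bag of patches for the transformer."""
    rng = rng or random.Random(0)
    kept = [p for p in bag if rng.random() >= drop_rate]
    return kept if kept else bag[:1]  # never return an empty bag
```

Because the model never sees the same subset of patches twice, it cannot latch onto any single spatial location or frame, which is what drives the robustness gains reported above.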
Existing forgery detection methods usually treat face forgery as a binary classification problem and adopt deep convolutional neural networks to learn discriminative features. The ideal discriminative features should only be related to the real/fake labels of facial images. However, we observe that the features learned by vanilla classification networks are correlated with unnecessary properties, such as forgery methods and facial identities. Such a phenomenon limits forgery detection performance, especially the generalization ability. Motivated by this, we propose a novel method that utilizes adversarial learning to eliminate the negative effects of different forgery methods and facial identities, which helps the classification network learn intrinsic common discriminative features for face forgery detection. To leverage data lacking ground-truth labels of facial identities, we design a special identity discriminator based on similarity information derived from an off-the-shelf face recognition model. With the help of adversarial learning, our forgery detection model learns to extract common discriminative features by eliminating the effects of forgery methods and facial identities. Extensive experiments demonstrate the effectiveness of the proposed method under both intra-dataset and cross-dataset evaluation settings.
Despite encouraging progress in deepfake detection, generalization to unseen forgery types remains a significant challenge due to the limited forgery clues explored during training. In contrast, we notice a common phenomenon in deepfakes: fake video creation inevitably disrupts the statistical regularity of the original videos. Inspired by this observation, we propose to boost the generalization of deepfake detection by distinguishing the "regularity disruption" that does not appear in real videos. Specifically, by carefully examining the spatial and temporal properties, we propose to disrupt a real video through a pseudo-fake generator and create a wide range of pseudo-fake videos for training. Such practice allows us to achieve deepfake detection without using fake videos and improves the generalization ability in a simple and efficient manner. To jointly capture the spatial and temporal disruptions, we propose a Spatio-Temporal Enhancement block to learn the regularity disruption across our self-created videos. Through comprehensive experiments, our method exhibits excellent performance on several datasets.
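The pseudo-fake generator's core idea, corrupting a real video so that its spatial and temporal regularity breaks, can be imitated with two toy operations: a spatial region paste and a frame swap. These are illustrative only; the paper's generator is considerably richer:

```python
def make_pseudo_fake(real, donor, top, left, size):
    """Spatial disruption: paste a size x size region of `donor`
    into `real` at (top, left), breaking local statistical regularity."""
    fake = [list(row) for row in real]
    for i in range(size):
        for j in range(size):
            fake[top + i][left + j] = donor[top + i][left + j]
    return fake

def temporal_disruption(video, i, j):
    """Temporal disruption: swap two frames to break temporal regularity."""
    out = list(video)
    out[i], out[j] = out[j], out[i]
    return out
```

Training a detector to separate such self-created disruptions from untouched videos needs no real deepfakes at all, which is exactly what makes the approach generalize across forgery types.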
Face manipulation detection has been receiving a lot of attention for the reliability and security of the face images. Recent studies focus on using auxiliary information or prior knowledge to capture robust manipulation traces, which are shown to be promising. As one of the important face features, the face depth map, which has been shown to be effective in other areas such as face recognition or face detection, is unfortunately paid little attention to in the literature for detecting manipulated face images. In this paper, we explore the possibility of incorporating the face depth map as auxiliary information to tackle the problem of face manipulation detection in real world applications. To this end, we first propose a Face Depth Map Transformer (FDMT) to estimate the face depth map patch by patch from an RGB face image, which is able to capture the local depth anomaly created due to manipulation. The estimated face depth map is then considered as auxiliary information to be integrated with the backbone features using a Multi-head Depth Attention (MDA) mechanism that is newly designed. Various experiments demonstrate the advantage of our proposed method for face manipulation detection.
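The depth-attention idea, letting the estimated depth map steer which backbone features matter, can be illustrated with single-head dot-product attention in which depth features act as queries. This is a simplified, hypothetical stand-in for the proposed Multi-head Depth Attention, not its actual definition:

```python
import math

def softmax(xs):
    m = max(xs)  # shift by the max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def depth_attention(backbone, depth, scale=None):
    """Single-head dot-product attention: depth features query the
    backbone features.

    backbone, depth: lists of per-patch feature vectors of equal dimension.
    """
    dim = len(backbone[0])
    scale = scale or math.sqrt(dim)
    out = []
    for q in depth:
        scores = [sum(a * b for a, b in zip(q, k)) / scale for k in backbone]
        w = softmax(scores)
        out.append([sum(wi * v[d] for wi, v in zip(w, backbone)) for d in range(dim)])
    return out
```

Patches whose estimated depth is anomalous produce queries that attend strongly to the matching backbone locations, so the fused representation emphasizes exactly the regions where manipulation disturbed the face geometry.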
With the rapid development of deep generative models (such as Generative Adversarial Networks and Auto-encoders), AI-synthesized images of the human face are now of such high quality that humans can hardly distinguish them from pristine ones. Although existing detection methods have shown high performance in specific evaluation settings, e.g., on images from seen models or on images without real-world post-processing, they tend to suffer serious performance degradation in real-world scenarios where testing images can be generated by more powerful generation models or combined with various post-processing operations. To address this issue, we propose a Global and Local Feature Fusion (GLFF) framework to learn rich and discriminative representations by combining multi-scale global features from the whole image with refined local features from informative patches for face forgery detection. GLFF fuses information from two branches: the global branch to extract multi-scale semantic features and the local branch to select informative patches for detailed local artifact extraction. Due to the lack of a face forgery dataset simulating real-world applications for evaluation, we further create a challenging face forgery dataset, named DeepFakeFaceForensics (DF^3), which contains 6 state-of-the-art generation models and a variety of post-processing techniques to approach the real-world scenarios. Experimental results demonstrate the superiority of our method to the state-of-the-art methods on the proposed DF^3 dataset and three other open-source datasets.
Detecting forgery videos is highly desirable due to the abuse of deepfakes. Existing detection approaches contribute to exploring the specific artifacts in deepfake videos and fit well on certain data. However, the growing techniques behind these artifacts keep challenging the robustness of traditional deepfake detectors. As a result, the development of the generalizability of these approaches has reached a blockage. To address this issue, given the empirical observations that the identities behind voices and faces are often mismatched in deepfake videos, and that voices and faces have homogeneity to some extent, in this paper we propose to perform deepfake detection from an unexplored voice-face matching view. To this end, a voice-face matching method is devised to measure the matching degree of the two. Nevertheless, training on specific deepfake datasets makes the model overfit certain traits of deepfake algorithms. We instead advocate a method that quickly adapts to untapped forgery types, following a pre-training then fine-tuning paradigm. Specifically, we first pre-train the model on a generic audio-visual dataset, then fine-tune it on downstream deepfake data. We conduct extensive experiments over three widely exploited deepfake datasets: DFDC, FakeAVCeleb, and DeepfakeTIMIT. Our method obtains significant performance gains compared with other state-of-the-art competitors. It is also worth noting that our method already achieves competitive results when fine-tuned on limited deepfake data.
We address the problem of few-shot semantic segmentation (FSS), which aims to segment novel class objects in a target image given only a few annotated samples. Though recent advances have been made by incorporating prototype-based metric learning, existing methods still show limited performance under extreme intra-class object variations and semantically similar inter-class objects due to their poor feature representation. To tackle this problem, we propose a dual prototypical contrastive learning approach tailored to the FSS task to capture representative semantic features effectively. The main idea is to encourage the prototypes to be more discriminative by increasing the inter-class distance while reducing the intra-class distance in the prototype feature space. To this end, we first present a class-specific contrastive loss with a dynamic prototype dictionary that stores the class-aware prototypes during training, enabling same-class prototypes to be similar and different-class prototypes to be dissimilar. Furthermore, we introduce a class-agnostic contrastive loss to enhance the generalization ability to unseen classes by compressing the feature distribution of the semantic classes within each episode. We show that the proposed dual prototypical contrastive learning approach outperforms state-of-the-art FSS methods on the PASCAL-5i and COCO-20i datasets. The code is available at: https://github.com/kwonjunn01/DPCL1.
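Prototype-based FSS methods of this kind start from masked average pooling: a class prototype is the mean of the support features under the class mask, and query features are assigned to the nearest prototype. A minimal sketch of that baseline step (not the paper's contrastive losses):

```python
def masked_average_pool(feats, mask):
    """Prototype = average of the feature vectors whose mask entry is 1."""
    sel = [f for f, m in zip(feats, mask) if m]
    dim = len(feats[0])
    return [sum(f[d] for f in sel) / len(sel) for d in range(dim)]

def nearest_prototype(feat, prototypes):
    """Assign a query feature to the closest class prototype (squared L2)."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(range(len(prototypes)), key=lambda c: dist(feat, prototypes[c]))
```

The dual contrastive losses then act on these prototypes: pulling same-class prototypes together and pushing different-class prototypes apart makes the nearest-prototype assignment more reliable for novel classes.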
As ultra-realistic face forgery techniques emerge, deepfake detection has attracted increasing attention due to security concerns. Many detectors cannot achieve accurate results when detecting unseen manipulations despite excellent performance on known forgeries. In this paper, we are motivated by the observation that the discrepancies between real and fake videos are extremely subtle and localized, and inconsistencies or irregularities can exist in some critical facial regions across various information domains. To this end, we propose a novel pipeline, Cross-Domain Local Forensics (XDLF), for more general deepfake video detection. In the proposed pipeline, a specialized framework is presented to simultaneously exploit local forgery patterns from the space, frequency, and time domains, thus learning cross-domain features to detect forgeries. Moreover, the framework leverages four high-level forgery-sensitive local regions of a human face to guide the model to enhance subtle artifacts and localize potential anomalies. Extensive experiments on several benchmark datasets demonstrate the impressive performance of our method, and we achieve superiority over several state-of-the-art methods on cross-dataset generalization. We also examine the factors that contribute to its performance through ablation studies, which suggest that exploiting cross-domain local characteristics is a noteworthy direction for developing more general deepfake detectors.
With rapid development over the past five years, face authentication has become the most pervasive biometric recognition method. Thanks to its highly accurate recognition performance and user-friendly usage, automatic face recognition (AFR) has exploded into a plethora of practical applications beyond device unlocking, check-in, and economic payment. Despite the tremendous success of face authentication, a variety of face presentation attacks (FPA), such as print attacks, replay attacks, and 3D mask attacks, have raised pressing mistrust concerns. Besides physical attacks, face videos and images are vulnerable to a wide variety of digital attack techniques launched by malicious hackers, posing a potential threat to the public at large. Due to the unrestricted access to enormous digital face images and videos and the disclosure of easy-to-use face manipulation tools circulating on the Internet, non-expert attackers without any prior professional skills are able to readily create sophisticated fake faces, leading to numerous dangerous applications such as financial fraud, impersonation, and identity theft. This survey aims to build the integrity of face forensics by providing a thorough analysis of the existing literature and highlighting the issues requiring further attention. In this paper, we first comprehensively survey both physical and digital face attack types and datasets. Then, we review the latest and most advanced progress on existing counter-attack methodologies and highlight their current limitations. Moreover, we outline possible future research directions for existing and upcoming challenges in the face forensics community. Finally, the necessity of joint physical and digital face attack detection is discussed, which has never been studied in previous surveys.
The recent advances in face forgery techniques are able to generate visually untraceable deepfake videos, which can be exploited with malicious intentions. As a result, researchers have been devoted to deepfake detection. Previous studies have identified the importance of local low-level cues and temporal information in pursuit of generalization to unseen forgery methods; however, they still suffer from robustness problems. In this work, we propose the Local- and Temporal-aware Transformer-based Deepfake Detection (LTTD) framework, which adopts a local-to-global learning protocol with a particular focus on the valuable temporal information within local sequences. Specifically, we propose a Local Sequence Transformer (LST), which models the temporal consistency on sequences of restricted spatial regions, where low-level information is enhanced with shallow layers of learned 3D filters. Based on the local temporal embeddings, we then achieve the final classification in a global contrastive way. Extensive experiments on popular datasets validate that our approach effectively spots local forgery cues and achieves state-of-the-art performance.
Contextual information is critical for various computer vision tasks, and previous works commonly design plug-and-play modules and structural losses to effectively extract and aggregate the global context. These methods use fine labels to optimize the model but ignore that fine-trained features are also precious training resources, which can introduce a preferable distribution to hard pixels (i.e., misclassified pixels). Inspired by contrastive learning in the unsupervised paradigm, we apply the contrastive loss in a supervised manner and re-design the loss function to cast off the stereotypes of unsupervised learning (e.g., the imbalance of positives and negatives, and confusion in anchor computation). To this end, we propose a Positive-Negative Equal contrastive loss (PNE loss), which increases the latent impact of positive embeddings on the anchor and treats positive and negative sample pairs equally. The PNE loss can be plugged directly into existing semantic segmentation frameworks and leads to excellent performance with negligible extra computational cost. We utilize a number of classic segmentation methods (e.g., DeepLabV3, OCRNet, UperNet) and backbones (e.g., ResNet, HRNet, Swin Transformer) to conduct comprehensive experiments and achieve competitive results on two benchmark datasets (e.g., Cityscapes and COCO-Stuff). Our code will be made publicly available.
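Treating positive and negative pairs equally can be realized, for instance, as binary cross-entropy over pair similarities, so that neither side dominates the gradient. The sketch below is a generic supervised pair-contrastive loss in that spirit, not the paper's exact PNE formulation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pair_contrastive_loss(anchor, positives, negatives, tau=0.5):
    """Supervised contrastive loss that weights positive and negative
    pairs equally: binary cross-entropy on cosine pair similarities."""
    def sim(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    # positives should score high, negatives should score low
    pos_term = sum(-math.log(sigmoid(sim(anchor, p) / tau)) for p in positives) / len(positives)
    neg_term = sum(-math.log(sigmoid(-sim(anchor, n) / tau)) for n in negatives) / len(negatives)
    return 0.5 * (pos_term + neg_term)  # equal weight on both pair types
```

In a segmentation setting the anchors would be per-pixel embeddings, with positives drawn from pixels of the same class and negatives from other classes, so hard (misclassified) pixels receive a direct corrective signal.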