Fake content has grown at an incredible rate over the past few years. The spread of social media and online platforms makes its large-scale dissemination by malicious actors increasingly accessible. At the same time, owing to the growing diffusion of fake-image generation methods, many deep-learning-based detection techniques have been proposed. Most of these methods rely on extracting salient features from RGB images in order to detect, through a binary classifier, whether an image is fake or real. In this paper, we present DepthFake, a study on how to improve classical RGB-based approaches with depth maps. The depth information is extracted from RGB images with state-of-the-art monocular depth estimation techniques. We demonstrate the effective contribution of depth maps to the deepfake detection task on robust pre-trained architectures: with respect to standard RGB architectures on the FaceForensics++ dataset, the proposed RGBD approach achieves an average improvement of 3.20%, and of up to 11.7% for some deepfake attacks.
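To make the RGBD idea concrete, here is a minimal PyTorch sketch of one plausible way to feed an extra depth channel to a pre-trained backbone. It is not the authors' architecture; the ResNet-50 choice and the weight-initialization trick are assumptions, and the depth map is taken as precomputed by a monocular estimator such as MiDaS.

```python
import torch
import torch.nn as nn
from torchvision import models

# A minimal RGBD sketch, not the exact DepthFake architecture: a pretrained
# backbone is adapted to take a 4th (depth) input channel, with the new
# channel's weights initialized from the mean of the RGB filter weights.
def make_rgbd_backbone(num_classes: int = 2) -> nn.Module:
    net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    old_conv = net.conv1
    new_conv = nn.Conv2d(4, old_conv.out_channels,
                         kernel_size=old_conv.kernel_size,
                         stride=old_conv.stride,
                         padding=old_conv.padding, bias=False)
    with torch.no_grad():
        new_conv.weight[:, :3] = old_conv.weight
        new_conv.weight[:, 3:] = old_conv.weight.mean(dim=1, keepdim=True)
    net.conv1 = new_conv
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

# Usage: `depth` would come from a monocular estimator (e.g., MiDaS).
rgb = torch.rand(1, 3, 224, 224)
depth = torch.rand(1, 1, 224, 224)   # assumed precomputed depth map
logits = make_rgbd_backbone()(torch.cat([rgb, depth], dim=1))
```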
Deep learning has been successfully applied to solve various complex problems, ranging from big-data analytics to computer vision and human-level control. However, deep learning advances have also been employed to create software that can pose threats to privacy, democracy, and national security. One of those recently emerged deep-learning-powered applications is the deepfake. Deepfake algorithms can create fake images and videos that humans cannot distinguish from authentic ones. The proposal of technologies that can automatically detect and assess the integrity of digital visual media is therefore indispensable. This paper presents a survey of the algorithms used to create deepfakes and, more importantly, of the methods proposed in the literature to date to detect deepfakes. We present extensive discussions of the challenges, research trends, and directions related to deepfake technologies. By reviewing the background of deepfakes and the state-of-the-art deepfake detection methods, this study provides a comprehensive overview of deepfake techniques and facilitates the development of new, more robust methods to deal with increasingly challenging deepfakes.
Online media data, in the forms of images and videos, are becoming mainstream communication channels. However, recent advances in deep learning, particularly deep generative models, open the doors for producing perceptually convincing images and videos at a low cost, which not only poses a serious threat to the trustworthiness of digital information but also has severe societal implications. This motivates growing research interest in media tampering detection, i.e., using deep learning techniques to examine whether media data have been maliciously manipulated. Depending on the content of the targeted images, media forgery can be divided into image tampering and Deepfake techniques. The former typically moves or erases the visual elements in ordinary images, while the latter manipulates the expressions and even the identity of human faces. Accordingly, the means of defense include image tampering detection and Deepfake detection, which share a wide variety of properties. In this paper, we provide a comprehensive review of the current media tampering detection approaches, and discuss the challenges and trends in this field for future research.
Given our ever-increasing online presence and information intake, realistic fake videos are a potential tool for spreading harmful misinformation. This paper proposes a multimodal-learning-based approach for detecting real and fake videos. The method combines information from three modalities: audio, video, and physiology. We investigate two strategies for combining the video and physiological modalities: augmenting the video with information from physiology, and novelly learning the fusion of the two modalities with a proposed graph convolutional network architecture. Both strategies rely on a new method for generating visual representations of physiological signals. The detection of real versus fake videos is then based on the discrepancies between the audio and the modified video modalities. The proposed method is evaluated on two benchmark datasets, and the results show a significant increase in detection performance compared to previous methods.
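The abstract does not specify how the physiological signal is rendered visually; below is a hedged sketch of one plausible representation, a time-frequency spectrogram of an rPPG-like pulse trace. The frame rate, window sizes, and the synthetic signal are all assumptions for illustration.

```python
import numpy as np
from scipy import signal

# A hedged sketch of one plausible "visual representation of a physiological
# signal": a spectrogram image of an rPPG-like pulse trace. The paper's
# actual representation may differ; the signal here is synthetic.
fs = 30.0                                 # video frame rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)              # 10 s of signal
pulse = np.sin(2 * np.pi * 1.2 * t)       # ~72 bpm heartbeat component
rppg = pulse + 0.3 * np.random.randn(t.size)

freqs, times, Sxx = signal.spectrogram(rppg, fs=fs, nperseg=64, noverlap=48)
image = np.log1p(Sxx)                     # 2-D array usable as a CNN input
print(image.shape)                        # (n_freq_bins, n_time_bins)
```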
Deep learning has enabled realistic face manipulation (i.e., deepfake), which poses significant concerns over the integrity of the media in circulation. Most existing deep learning techniques for deepfake detection can achieve promising performance in the intra-dataset evaluation setting (i.e., training and testing on the same dataset), but are unable to perform satisfactorily in the inter-dataset evaluation setting (i.e., training on one dataset and testing on another). Most of the previous methods use the backbone network to extract global features for making predictions and only employ binary supervision (i.e., indicating whether the training instances are fake or authentic) to train the network. Classification based merely on the learning of global features often leads to weak generalizability to unseen manipulation methods. In addition, a reconstruction task can improve the learned representations. In this paper, we introduce a novel approach for deepfake detection, which considers the reconstruction and classification tasks simultaneously to address these problems. This method shares the information learned by one task with the other, which focuses on an aspect that other existing works rarely consider and hence boosts the overall performance. In particular, we design a two-branch Convolutional AutoEncoder (CAE), in which the Convolutional Encoder used to compress the feature map into the latent representation is shared by both branches. Then the latent representation of the input data is fed to a simple classifier and the unsupervised reconstruction component simultaneously. Our network is trained end-to-end. Experiments demonstrate that our method achieves state-of-the-art performance on three commonly-used datasets, particularly in the cross-dataset evaluation setting.
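A minimal PyTorch sketch of the shared-encoder, two-branch structure described above follows. Layer sizes, depths, and the equal loss weighting are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

# A minimal two-branch convolutional autoencoder in the spirit of the paper:
# a shared encoder feeds both a reconstruction decoder and a simple classifier.
class TwoBranchCAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2))

    def forward(self, x):
        z = self.encoder(x)                 # shared latent representation
        return self.decoder(z), self.classifier(z)

x = torch.rand(8, 3, 64, 64)
y = torch.randint(0, 2, (8,))
recon, logits = TwoBranchCAE()(x)
# Joint objective: reconstruction + binary classification (weighting assumed).
loss = nn.MSELoss()(recon, x) + nn.CrossEntropyLoss()(logits, y)
```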
In recent years, visual forgery has reached a level of sophistication at which humans cannot identify the fraud, posing a significant threat to information security. A wide range of malicious applications has emerged, such as fake news about celebrities, defamation or blackmail, the impersonation of politicians in political warfare, and the spreading of rumors to attract views. As a result, a rich body of visual verification techniques has been proposed in an attempt to stop this dangerous trend. In this paper, using a comprehensive and empirical approach, we present a benchmark that provides in-depth insight into visual forgery and visual forensics. More specifically, we develop an independent framework that integrates state-of-the-art forgery generators and detectors, and we measure the performance of these techniques using various criteria. We also perform an exhaustive analysis of the benchmarking results, determining the characteristics of the methods that serve as a comparative reference in this never-ending war between measures and countermeasures.
This paper presents the results and findings of our study on deepfake detection using temporal images. We model the temporal relationships that exist between the 468 facial landmarks across the frames of a given video by constructing images, called temporal images, from the pixel values at these landmark locations. A CNN can then recognize the spatial relationships that exist between the pixels of a given image. Ten different ImageNet models were investigated.
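A NumPy sketch of the temporal-image construction as we read it: one row per frame, one column per landmark (468 is the MediaPipe FaceMesh landmark count), with the pixel value sampled at each landmark's location. Landmark extraction is assumed to be done elsewhere, and the exact layout the authors use may differ.

```python
import numpy as np

def build_temporal_image(frames: np.ndarray, landmarks: np.ndarray) -> np.ndarray:
    """frames: (T, H, W, 3) uint8; landmarks: (T, 468, 2) integer (x, y)."""
    T, n_points = landmarks.shape[:2]
    temporal_image = np.zeros((T, n_points, 3), dtype=np.uint8)
    for t in range(T):
        xs, ys = landmarks[t, :, 0], landmarks[t, :, 1]
        temporal_image[t] = frames[t, ys, xs]   # pixel value at each landmark
    return temporal_image

frames = np.random.randint(0, 256, (32, 224, 224, 3), dtype=np.uint8)
landmarks = np.random.randint(0, 224, (32, 468, 2))
print(build_temporal_image(frames, landmarks).shape)  # (32, 468, 3)
```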
Deepfake media are becoming widespread nowadays because easy-to-use tools and mobile apps can generate realistic deepfake videos/images without requiring any technical knowledge. With further advances in this field of technology in the near future, the quantity and quality of deepfake media are also expected to flourish, potentially making deepfake media a new practical tool for spreading mis/disinformation. Because of these concerns, deepfake media detection tools have become a necessity. In this study, we propose a novel hybrid transformer network utilizing an early feature fusion strategy for deepfake video detection. Our model employs two different CNN networks, (1) XceptionNet and (2) EfficientNet-B4, as feature extractors. We train both feature extractors in an end-to-end manner on the FaceForensics++ and DFDC benchmarks. Our model, while having a relatively straightforward architecture, achieves results comparable to other, more advanced state-of-the-art approaches when evaluated on the FaceForensics++ and DFDC benchmarks. Besides this, we also propose novel face cut-out augmentations as well as random cut-out augmentations. We show that the proposed augmentations improve the detection performance of the model and reduce overfitting. In addition, we show that our model is capable of learning from small amounts of data.
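Below is a rough PyTorch/timm sketch of the two-extractor idea, not the authors' exact fusion: pooled features from both backbones are projected to a common width, treated as two tokens, and fused by a small transformer. The projection, token layout, and head are assumptions, and timm model names may vary across versions.

```python
import timm
import torch
import torch.nn as nn

class HybridDetector(nn.Module):
    def __init__(self, d_model: int = 256):
        super().__init__()
        # timm registry names; may vary by timm version
        self.xcep = timm.create_model("xception", pretrained=False, num_classes=0)
        self.effn = timm.create_model("efficientnet_b4", pretrained=False, num_classes=0)
        self.proj_x = nn.Linear(self.xcep.num_features, d_model)
        self.proj_e = nn.Linear(self.effn.num_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 2)

    def forward(self, x):
        # Each backbone yields a pooled feature vector; fuse as two tokens.
        tokens = torch.stack([self.proj_x(self.xcep(x)),
                              self.proj_e(self.effn(x))], dim=1)
        return self.head(self.fusion(tokens).mean(dim=1))

logits = HybridDetector()(torch.rand(2, 3, 299, 299))
```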
In this paper, we introduce MINTIME, a video deepfake detection approach that captures spatial and temporal anomalies and handles instances of multiple people in the same video and variations in face sizes. Previous approaches disregard such information either by using simple a-posteriori aggregation schemes, i.e., average or max operation, or using only one identity for the inference, i.e., the largest one. On the contrary, the proposed approach builds on a Spatio-Temporal TimeSformer combined with a Convolutional Neural Network backbone to capture spatio-temporal anomalies from the face sequences of multiple identities depicted in a video. This is achieved through an Identity-aware Attention mechanism that attends to each face sequence independently based on a masking operation and facilitates video-level aggregation. In addition, two novel embeddings are employed: (i) the Temporal Coherent Positional Embedding that encodes each face sequence's temporal information and (ii) the Size Embedding that encodes the size of the faces as a ratio to the video frame size. These extensions allow our system to adapt particularly well in the wild by learning how to aggregate information of multiple identities, which is usually disregarded by other methods in the literature. It achieves state-of-the-art results on the ForgeryNet dataset with an improvement of up to 14% AUC in videos containing multiple people and demonstrates ample generalization capabilities in cross-forgery and cross-dataset settings. The code is publicly available at https://github.com/davide-coccomini/MINTIME-Multi-Identity-size-iNvariant-TIMEsformer-for-Video-Deepfake-Detection.
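A small sketch of the masking principle behind identity-aware attention as we understand it from the abstract: tokens from different face sequences are prevented from attending to each other. The shapes, embedding width, and the use of `nn.MultiheadAttention` are illustrative assumptions, not the MINTIME code.

```python
import torch

def identity_attention_mask(identity_ids: torch.Tensor) -> torch.Tensor:
    """identity_ids: (seq_len,) integer identity per face token.
    Returns a boolean mask where True marks disallowed attention pairs."""
    same = identity_ids.unsqueeze(0) == identity_ids.unsqueeze(1)
    return ~same  # True -> attention between different identities is masked

ids = torch.tensor([0, 0, 0, 1, 1, 2])      # three identities in one video
mask = identity_attention_mask(ids)
attn = torch.nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
tokens = torch.rand(1, ids.numel(), 64)     # one token per face crop
out, _ = attn(tokens, tokens, tokens, attn_mask=mask)
```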
Face manipulation technology is advancing very rapidly, and new methods are being proposed day by day. The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world. Our key insight is that each person has specific biometric characteristics that a synthetic generator cannot likely reproduce. Accordingly, we extract high-level audio-visual biometric features which characterize the identity of a person, and use them to create a person-of-interest (POI) deepfake detector. We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity. As a result, when the video and/or audio of a person is manipulated, its representation in the embedding space becomes inconsistent with the real identity, allowing reliable detection. Training is carried out exclusively on real talking-face videos, thus the detector does not depend on any specific manipulation method and yields the highest generalization ability. In addition, our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos by building only on high-level semantic features. Experiments on a wide variety of datasets confirm that our method achieves SOTA performance, with an average improvement in terms of AUC of around 3%, 10%, and 4% for high-quality, low-quality, and attacked videos, respectively. https://github.com/grip-unina/poi-forensics
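A hedged sketch of the decision rule this approach implies: compare a test clip's embedding against reference embeddings from real videos of the same person, and flag large distances as manipulation. The embedding network itself (trained contrastively on real talking-face videos) is assumed, as is the threshold value.

```python
import torch
import torch.nn.functional as F

def poi_score(test_emb: torch.Tensor, reference_embs: torch.Tensor) -> float:
    """Lower similarity to the closest real reference => more suspicious."""
    sims = F.cosine_similarity(test_emb.unsqueeze(0), reference_embs)
    return 1.0 - sims.max().item()   # distance to nearest real reference

refs = F.normalize(torch.randn(10, 128), dim=1)   # embeddings of real POI clips
test = F.normalize(torch.randn(128), dim=0)       # embedding of the test clip
print("fake" if poi_score(test, refs) > 0.5 else "real")  # threshold assumed
```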
Recently, deepfakes have drawn broad public attention due to security and privacy concerns in social media digital forensics. As the deepfake videos circulating widely on the Internet become more and more realistic, traditional detection techniques fail to distinguish the real from the fake. Most existing deep learning methods mainly focus on local features and the relations within a face image, using convolutional neural networks as a backbone. However, local features and relations are insufficient for model training to learn enough general information for deepfake detection. Existing deepfake detection methods have therefore reached a bottleneck in further improving detection performance. To address this issue, we propose a deep convolutional Transformer that incorporates decisive image features both locally and globally. Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy. Moreover, we employ the rarely discussed image keyframes in model training to improve performance, and we visualize the feature-quantity gap, caused by video compression, between keyframes and normal image frames. Finally, we illustrate transferability through extensive experiments on several deepfake benchmark datasets. The proposed solution consistently outperforms several state-of-the-art baselines in both intra- and cross-dataset experiments.
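To illustrate the convolutional-pooling component only (not the full model or its re-attention mechanism), here is a minimal sketch under our own assumptions: a strided 1-D convolution shrinks the token sequence before self-attention, so attention operates on fewer, locally aggregated tokens.

```python
import torch
import torch.nn as nn

class ConvPooledAttention(nn.Module):
    def __init__(self, dim: int = 64, stride: int = 2):
        super().__init__()
        self.pool = nn.Conv1d(dim, dim, kernel_size=3, stride=stride, padding=1)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, tokens):                 # tokens: (B, N, dim)
        # Strided convolution halves the token count before attention.
        pooled = self.pool(tokens.transpose(1, 2)).transpose(1, 2)
        out, _ = self.attn(pooled, pooled, pooled)
        return out                              # (B, ~N/stride, dim)

print(ConvPooledAttention()(torch.rand(2, 196, 64)).shape)
```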
Despite encouraging progress in deepfake detection, generalization to unseen forgery types remains a significant challenge due to the limited forgery clues explored during training. In contrast, we notice a common phenomenon in deepfakes: fake video creation inevitably disrupts the statistical regularity of the original videos. Inspired by this observation, we propose to boost the generalization of deepfake detection by distinguishing the "regularity disruption" that does not appear in real videos. Specifically, by carefully examining the spatial and temporal properties, we propose to disrupt a real video through a pseudo-fake generator and create a wide range of pseudo-fake videos for training. This practice allows us to achieve deepfake detection without using fake videos, and improves the generalization ability in a simple and efficient manner. To jointly capture the spatial and temporal disruptions, we propose a spatio-temporal enhancement block to learn the regularity disruption across our self-created videos. Through comprehensive experiments, our method exhibits excellent performance on several datasets.
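A toy pseudo-fake generator in the spirit of this idea follows: corrupt a real clip's spatial regularity (blend a shifted patch over a region) and its temporal regularity (swap neighboring frames). The paper's actual generator is more elaborate; every operation and parameter here is our own illustrative assumption.

```python
import numpy as np

def make_pseudo_fake(clip: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    fake = clip.copy()                        # clip: (T, H, W, 3)
    T, H, W, _ = clip.shape
    # Spatial disruption: blend a randomly shifted copy over a central region.
    y, x, s = H // 4, W // 4, H // 2
    dy, dx = rng.integers(-5, 6, size=2)
    patch = np.roll(fake[:, y:y + s, x:x + s], (dy, dx), axis=(1, 2))
    fake[:, y:y + s, x:x + s] = (0.5 * fake[:, y:y + s, x:x + s]
                                 + 0.5 * patch).astype(clip.dtype)
    # Temporal disruption: swap two neighboring frames.
    i = rng.integers(0, T - 1)
    fake[[i, i + 1]] = fake[[i + 1, i]]
    return fake

clip = np.random.randint(0, 256, (16, 112, 112, 3), dtype=np.uint8)
pseudo = make_pseudo_fake(clip, np.random.default_rng(0))
```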
With the spread of deepfake techniques, this technology has become very accessible and good enough to raise concern about its malicious use. Facing this problem, detecting forged faces is of utmost importance to ensure security and avoid socio-political problems, both on a global and a private scale. This paper presents a solution for detecting deepfakes using convolutional neural networks and a dataset developed for this purpose, Celeb-DF. The results show that, with an overall accuracy of 95% in the classification of these images, the proposed model is close to the current state of the art and can be adapted to manipulation techniques that may emerge in the future.
In recent years, with the rapid development of face editing and generation, more and more fake videos are circulating on social media, which has caused extreme public concern. Existing face-forgery detection methods based on the frequency domain find that, compared with real images, GAN-forged images exhibit obvious grid-like visual artifacts in the frequency spectrum. But for synthesized videos, these methods confine themselves to a single frame and pay little attention to the most discriminative parts and the temporal frequency clues among different frames. To take full advantage of the rich information in video sequences, this paper performs video forgery detection in both the spatial and the temporal frequency domains and proposes a Discrete Cosine Transform-based Forgery Clue Augmentation Network (FCAN-DCT) to achieve a more comprehensive spatio-temporal feature representation. FCAN-DCT consists of a backbone network and two branches: a Compact Feature Extraction (CFE) module and a Frequency Temporal Attention (FTA) module. We conduct thorough experimental assessments on two visible-light (VIS) datasets, WildDeepfake and Celeb-DF (v2), as well as on our self-built video forgery dataset DeepFakeNIR, the first video forgery dataset in the near-infrared modality. The experimental results demonstrate the effectiveness of our method in detecting forged videos in both VIS and NIR scenarios.
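A minimal sketch of extracting frequency-domain clues with the DCT, the transform FCAN-DCT builds on. The band splitting and attention modules are the paper's contribution and are not reproduced here; the shapes and log-magnitude normalization are assumptions.

```python
import numpy as np
from scipy.fft import dctn

def frame_dct_features(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) grayscale. Returns log-magnitude DCT per frame."""
    coeffs = dctn(frames.astype(np.float64), axes=(1, 2), norm="ortho")
    return np.log1p(np.abs(coeffs))

frames = np.random.rand(8, 112, 112)
spatial_freq = frame_dct_features(frames)          # per-frame spatial spectra
temporal_freq = np.abs(dctn(frames, axes=(0,)))    # DCT along time as well
print(spatial_freq.shape, temporal_freq.shape)
```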
With the rapid development of face forgery techniques, deepfake videos have attracted widespread attention in digital media. Perpetrators heavily utilize these videos to spread disinformation and make misleading statements. Most existing deepfake detection methods mainly focus on texture features, which can be affected by external fluctuations such as illumination and noise. Detection methods based on facial landmarks, on the other hand, are more robust to external variables but lack sufficient detail. Thus, how to effectively mine distinctive features in the spatial, temporal, and frequency domains and fuse them with facial landmarks for forged-video detection remains an open question. To this end, we propose a Landmark-Enhanced Multimodal Graph Neural Network (LEM-GNN) based on multiple modalities of information and the geometric features of facial landmarks. Specifically, at the frame level, we design a fusion mechanism to mine a joint representation of the spatial and frequency-domain elements, while introducing geometric facial features to enhance the robustness of the model. At the video level, we first regard each frame of the video as a node in a graph and encode temporal information into the edges of the graph. Then, by applying the message-passing mechanism of a graph neural network (GNN), the multimodal features are effectively combined to obtain a comprehensive representation of the video forgery. Extensive experiments show that our method consistently outperforms the state of the art (SOTA) on widely used benchmarks.
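A compact sketch of the video-level idea under our own assumptions: frames become graph nodes, temporal adjacency becomes edges, and one round of mean-aggregation message passing mixes neighboring frame features. The real LEM-GNN is far richer than this single layer.

```python
import torch
import torch.nn as nn

class TemporalGraphLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return torch.relu(self.lin(adj @ x / deg))   # mean aggregation

T, dim = 16, 64
frame_feats = torch.rand(T, dim)                     # per-frame features
adj = torch.diag(torch.ones(T - 1), 1) + torch.diag(torch.ones(T - 1), -1)
adj += torch.eye(T)                                  # chain graph + self-loops
video_repr = TemporalGraphLayer(dim)(frame_feats, adj).mean(dim=0)
```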
With the rapid development of deep generative models (such as Generative Adversarial Networks and Auto-encoders), AI-synthesized images of the human face are now of such high quality that humans can hardly distinguish them from pristine ones. Although existing detection methods have shown high performance in specific evaluation settings, e.g., on images from seen models or on images without real-world post-processings, they tend to suffer serious performance degradation in real-world scenarios where testing images can be generated by more powerful generation models or combined with various post-processing operations. To address this issue, we propose a Global and Local Feature Fusion (GLFF) framework to learn rich and discriminative representations by combining multi-scale global features from the whole image with refined local features from informative patches for face forgery detection. GLFF fuses information from two branches: the global branch to extract multi-scale semantic features and the local branch to select informative patches for detailed local artifacts extraction. Due to the lack of a face forgery dataset simulating real-world applications for evaluation, we further create a challenging face forgery dataset, named DeepFakeFaceForensics (DF^3), which contains 6 state-of-the-art generation models and a variety of post-processing techniques to approximate real-world scenarios. Experimental results demonstrate the superiority of our method over the state-of-the-art methods on the proposed DF^3 dataset and three other open-source datasets.
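A sketch of one plausible reading of "informative patch" selection and fusion: rank the spatial positions of a feature map by activation norm, keep the top-k local descriptors, and concatenate them with a global descriptor. The norm-based scoring and concatenation are our assumptions, not GLFF's actual modules.

```python
import torch

def global_local_fuse(feat_map: torch.Tensor, k: int = 4) -> torch.Tensor:
    """feat_map: (C, H, W) from any backbone layer."""
    C, H, W = feat_map.shape
    local = feat_map.reshape(C, H * W).T            # (H*W, C) patch features
    scores = local.norm(dim=1)                      # informativeness proxy
    topk = local[scores.topk(k).indices]            # (k, C) local descriptors
    global_feat = feat_map.mean(dim=(1, 2))         # (C,) global descriptor
    return torch.cat([global_feat, topk.flatten()]) # fused representation

fused = global_local_fuse(torch.rand(256, 14, 14))
print(fused.shape)                                  # (256 + 4*256,)
```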
With its rapid development over the past five years, face authentication has become the most pervasive biometric recognition method. Thanks to its highly accurate recognition performance and user-friendly usage, automatic face recognition (AFR) has exploded into numerous practical applications, from device unlocking to check-in and mobile payment. Despite the tremendous success of face authentication, a variety of face presentation attacks (FPAs), such as print attacks, replay attacks, and 3D mask attacks, have raised persistent concerns about its trustworthiness. Beyond physical attacks, face videos/images are vulnerable to a wide variety of digital attack techniques launched by malicious hackers, posing a potential threat to the general public. Due to unrestricted access to enormous numbers of digital face images/videos and the disclosure of easy-to-use face manipulation tools circulating on the Internet, non-expert attackers without any prior professional skills can readily create sophisticated fake faces, leading to numerous dangerous applications such as financial fraud, impersonation, and identity theft. This survey aims to build the integrity of face forensics by providing a thorough analysis of the existing literature and highlighting the issues requiring further attention. In this paper, we first comprehensively survey both physical and digital face attack types and datasets. Then we review the latest and most advanced progress on existing counter-attack methods and highlight their current limitations. Moreover, we outline future research directions for the face forensics community facing existing and upcoming challenges. Finally, we discuss the necessity of joint physical and digital face attack detection, which has never been studied in previous surveys.
As an essential component of many autonomous driving and robotics activities, such as ego-motion estimation, obstacle avoidance, and scene understanding, monocular depth estimation (MDE) has attracted great attention from the computer vision and robotics communities. Over the past few decades, a large number of methods have been developed. However, to the best of our knowledge, there is no comprehensive survey of MDE. This paper aims to bridge this gap by reviewing 197 relevant articles published between 1970 and 2021. In particular, we provide a comprehensive survey of MDE covering various methods, introduce the popular performance evaluation metrics, and summarize publicly available datasets. We also summarize available open-source implementations of some representative methods and compare their performance. Furthermore, we review the application of MDE in some important robotics tasks. Finally, we conclude this paper by presenting some promising directions for future research. This survey is expected to assist readers in navigating this research field.
Deep generative techniques are advancing rapidly, making it possible to create realistic manipulated images and videos and endangering the serenity of modern society. The continual emergence of new techniques brings with it a further problem to face, namely the ability of deepfake detection models to update themselves promptly so that they can identify manipulations carried out with the latest methods. This is an extremely complex problem, because training a model requires large amounts of data, which are difficult to obtain if the deepfake generation method is too recent. Moreover, continuously retraining a network is not feasible. In this paper, we ask ourselves whether, among the various deep learning techniques, there is one capable of generalizing the concept of deepfake to such an extent that it is not tied to one or more of the specific deepfake generation methods used in training. We compare Vision Transformers with EfficientNetV2 in a cross-forgery setting based on the ForgeryNet dataset. From our experiments, EfficientNetV2 shows a greater tendency to specialize, often obtaining better results on the training methods, while Vision Transformers exhibit a superior generalization ability that makes them more competent even on images generated with new methods.
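A small sketch of the cross-forgery protocol such comparisons rely on: train on some manipulation methods, then score each unseen method separately, e.g., with AUC. The method names are hypothetical and the scores are random stand-ins for a detector's outputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
unseen_methods = {"method_A": 400, "method_B": 400}   # hypothetical names

for name, n in unseen_methods.items():
    labels = rng.integers(0, 2, n)                    # 0 = real, 1 = fake
    scores = rng.random(n)                            # detector outputs
    print(f"{name}: AUC = {roc_auc_score(labels, scores):.3f}")
```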
Figure 1: FaceForensics++ is a dataset of facial forgeries that enables researchers to train deep-learning-based approaches in a supervised fashion. The dataset contains manipulations created with four state-of-the-art methods, namely, Face2Face, FaceSwap, DeepFakes, and NeuralTextures.