Given our ever-increasing online presence and information intake, realistic fake videos are a potential tool for spreading harmful misinformation. This paper presents a multimodal-learning-based method for detecting real and fake videos. The method combines information from three modalities: audio, video, and physiology. We investigate two strategies for combining the video and physiological modalities: augmenting the video with physiological information, or novelly learning the fusion of these two modalities with a proposed graph convolutional network architecture. Both strategies for combining the two modalities rely on a new method for generating a visual representation of physiological signals. Detection of real versus fake videos is then based on discrepancies between the audio and the modified video modalities. The proposed method is evaluated on two benchmark datasets, and the results show a significant increase in detection performance compared with previous methods.
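The abstract does not detail the proposed graph convolutional architecture. As a rough illustration only, the sketch below shows one standard graph-convolution layer over a toy graph whose nodes carry video and physiological features; the node layout, dimensions, and the name `gcn_layer` are assumptions, not the paper's design:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: symmetrically normalized adjacency
    (with self-loops) times node features times a weight matrix, then ReLU."""
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)               # ReLU

# Toy graph: nodes 0-1 carry video features, nodes 2-3 physiological features;
# cross-modal edges let the layer mix information between the two modalities.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                              # 4 nodes, 8-dim features
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
W = rng.normal(size=(8, 4))
H = gcn_layer(A, X, W)                                   # fused node embeddings, shape (4, 4)
```

Stacking a few such layers lets video nodes absorb physiological evidence (and vice versa) before a readout classifier.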
Face manipulation technology is advancing very rapidly, and new methods are being proposed day by day. The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world. Our key insight is that each person has specific biometric characteristics that a synthetic generator is unlikely to reproduce. Accordingly, we extract high-level audio-visual biometric features which characterize the identity of a person, and use them to create a person-of-interest (POI) deepfake detector. We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity. As a result, when the video and/or audio of a person is manipulated, its representation in the embedding space becomes inconsistent with the real identity, allowing reliable detection. Training is carried out exclusively on real talking-face videos, so the detector does not depend on any specific manipulation method and attains high generalization ability. In addition, our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos because it builds only on high-level semantic features. Experiments on a wide variety of datasets confirm that our method achieves SOTA performance, with an average AUC improvement of around 3%, 10%, and 4% for high-quality, low-quality, and attacked videos, respectively. https://github.com/grip-unina/poi-forensics
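The contrastive learning paradigm mentioned above can be illustrated with a standard InfoNCE-style objective over paired audio and video identity embeddings. This is a generic sketch, not the POI-Forensics implementation; the function name `info_nce` and the temperature value are assumptions:

```python
import numpy as np

def info_nce(video_embs, audio_embs, temperature=0.1):
    """Cross-modal contrastive loss: pull each identity's video embedding toward
    its own audio embedding, push it away from other identities' audio embeddings."""
    v = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    a = audio_embs / np.linalg.norm(audio_embs, axis=1, keepdims=True)
    logits = v @ a.T / temperature                        # (n, n) pairwise similarities
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return float(-np.mean(np.log(np.diag(probs) + 1e-12)))

# Matched audio-video pairs give a low loss; shuffled identities give a high one.
matched = info_nce(np.eye(2), np.eye(2))
mismatched = info_nce(np.eye(2), np.eye(2)[::-1])
```

In this toy example, the loss for correctly paired identities is far lower than for mismatched ones, which is exactly the signal a detector can threshold at test time.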
Fake content has grown at an incredible rate over the past few years. The spread of social media and online platforms makes its large-scale dissemination by malicious actors increasingly easy. At the same time, due to the growing proliferation of fake-image-generation methods, many deep-learning-based detection techniques have been proposed. Most of these methods rely on extracting salient features from RGB images to detect, through a binary classifier, whether an image is fake or real. In this paper, we propose DepthFake, a study on how to improve classical RGB-based approaches with depth maps. Depth information is extracted from RGB images with recent monocular depth estimation techniques. Here, we demonstrate the effective contribution of depth maps to the deepfake detection task on robust pre-trained architectures. Indeed, the proposed approach achieves an average improvement of 3.20%, and up to 11.7% for some deepfake attacks, over standard RGB architectures on the FaceForensics++ dataset.
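A minimal sketch of the core idea, feeding depth alongside RGB, is to stack the estimated depth map as a fourth input channel. The normalization scheme and the name `add_depth_channel` are assumptions; the paper's exact input pipeline may differ:

```python
import numpy as np

def add_depth_channel(rgb, depth):
    """Stack a monocular depth estimate onto an RGB face crop as a fourth
    input channel.  rgb: (H, W, 3) in [0, 1]; depth: (H, W) in arbitrary units."""
    span = max(depth.max() - depth.min(), 1e-8)
    d = (depth - depth.min()) / span                      # min-max normalize to [0, 1]
    return np.concatenate([rgb, d[..., None]], axis=-1)   # (H, W, 4)
```

A backbone whose first convolution accepts four channels can then consume the fused (H, W, 4) tensor directly.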
Online media data, in the form of images and videos, are becoming mainstream communication channels. However, recent advances in deep learning, particularly deep generative models, open the door to producing perceptually convincing images and videos at a low cost, which not only poses a serious threat to the trustworthiness of digital information but also has severe societal implications. This motivates growing research interest in media tampering detection, i.e., using deep learning techniques to examine whether media data have been maliciously manipulated. Depending on the content of the targeted images, media forgery can be divided into image tampering and Deepfake techniques. The former typically moves or erases the visual elements in ordinary images, while the latter manipulates the expressions and even the identity of human faces. Accordingly, the means of defense include image tampering detection and Deepfake detection, which share a wide variety of properties. In this paper, we provide a comprehensive review of current media tampering detection approaches, and discuss the challenges and trends in this field for future research.
Video synthesis methods rapidly improved in recent years, allowing easy creation of synthetic humans. This poses a problem, especially in the era of social media, as synthetic videos of speaking humans can be used to spread misinformation in a convincing manner. Thus, there is a pressing need for accurate and robust deepfake detection methods that can detect forgery techniques not seen during training. In this work, we explore whether this can be done by leveraging a multi-modal, out-of-domain backbone trained in a self-supervised manner, adapted to the video deepfake domain. We propose FakeOut, a novel approach that relies on multi-modal data throughout both the pre-training phase and the adaptation phase. We demonstrate the efficacy and robustness of FakeOut in detecting various types of deepfakes, especially manipulations which were not seen during training. Our method achieves state-of-the-art results in cross-manipulation and cross-dataset generalization. This study shows that, perhaps surprisingly, training on out-of-domain videos (i.e., videos with no speaking humans) can lead to better deepfake detection systems. Code is available on GitHub.
With the rapid development of face forgery technology, DeepFake videos have attracted widespread attention in digital media. Perpetrators heavily exploit these videos to spread disinformation and make misleading statements. Most existing DeepFake detection methods mainly focus on texture features, which can be affected by external fluctuations such as illumination and noise. Furthermore, detection methods based on facial landmarks are more robust to external variables but lack sufficient detail. Thus, how to effectively mine distinctive features in the spatial, temporal, and frequency domains and fuse them with facial landmarks for forged-video detection remains an open question. To this end, we propose a Landmark-Enhanced Multimodal Graph Neural Network (LEM-GNN) based on multimodal information and the geometric features of facial landmarks. Specifically, at the frame level, we design a fusion mechanism to mine joint representations of spatial- and frequency-domain elements, while introducing geometric facial features to enhance the robustness of the model. At the video level, we first regard each frame of the video as a node in a graph and encode temporal information into the graph's edges. Then, by applying the message-passing mechanism of a graph neural network (GNN), the multimodal features are effectively combined to obtain a comprehensive representation of video forgery. Extensive experiments show that our method consistently outperforms the state of the art (SOTA) on widely used benchmarks.
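The video-level construction above (frames as graph nodes, temporal information on the edges, GNN message passing) can be sketched generically as follows; the chain topology and the residual mean-aggregation update are illustrative assumptions, not the LEM-GNN definition:

```python
import numpy as np

def temporal_chain(n_frames):
    """Adjacency of a chain graph: each frame node is linked to its temporal neighbors."""
    A = np.zeros((n_frames, n_frames))
    idx = np.arange(n_frames - 1)
    A[idx, idx + 1] = A[idx + 1, idx] = 1.0
    return A

def message_pass(A, X):
    """One round of message passing: each node adds the mean of its neighbors'
    features to its own (a residual mean-aggregation update)."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    return X + (A @ X) / deg
```

On a 3-frame chain with one-hot per-frame features, one round lets frame 0 absorb frame 1's feature while frame 2's stays out of reach, which is how temporal locality is encoded.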
AI-synthesized face-swapping videos, commonly known as DeepFakes, are an emerging problem threatening the trustworthiness of online information. The need to develop and evaluate DeepFake detection algorithms calls for large-scale datasets. However, current DeepFake datasets suffer from low visual quality and do not resemble DeepFake videos circulated on the Internet. We present a new large-scale challenging DeepFake video dataset, Celeb-DF, which contains 5,639 high-quality DeepFake videos of celebrities generated using an improved synthesis process. We conduct a comprehensive evaluation of DeepFake detection methods and datasets to demonstrate the escalated level of challenge posed by Celeb-DF.
Deep learning has enabled realistic face manipulation (i.e., deepfake), which poses significant concerns over the integrity of the media in circulation. Most existing deep learning techniques for deepfake detection can achieve promising performance in the intra-dataset evaluation setting (i.e., training and testing on the same dataset), but are unable to perform satisfactorily in the inter-dataset evaluation setting (i.e., training on one dataset and testing on another). Most previous methods use a backbone network to extract global features for making predictions and only employ binary supervision (i.e., indicating whether the training instances are fake or authentic) to train the network. Classification based merely on the learning of global features often leads to weak generalizability to unseen manipulation methods. In addition, a reconstruction task can improve the learned representations. In this paper, we introduce a novel approach for deepfake detection, which considers the reconstruction and classification tasks simultaneously to address these problems. This method shares the information learned by one task with the other, focusing on an aspect that other existing works rarely consider, and hence boosts the overall performance. In particular, we design a two-branch Convolutional AutoEncoder (CAE), in which the Convolutional Encoder used to compress the feature map into the latent representation is shared by both branches. The latent representation of the input data is then fed to a simple classifier and the unsupervised reconstruction component simultaneously. Our network is trained end-to-end. Experiments demonstrate that our method achieves state-of-the-art performance on three commonly-used datasets, particularly in the cross-dataset evaluation setting.
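The simultaneous reconstruction and classification objective can be written as a weighted sum of the two branch losses. This is a minimal sketch under assumed names (`joint_loss`, weight `alpha`); the paper's exact loss formulation is not given in the abstract:

```python
import numpy as np

def joint_loss(x, x_recon, y_true, y_prob, alpha=0.5):
    """Joint objective for a shared encoder: reconstruction MSE from the decoder
    branch plus binary cross-entropy from the classifier branch."""
    mse = float(np.mean((x - x_recon) ** 2))
    eps = 1e-7
    bce = float(-(y_true * np.log(y_prob + eps)
                  + (1 - y_true) * np.log(1 - y_prob + eps)))
    return alpha * mse + (1 - alpha) * bce
```

Because both terms backpropagate through the shared encoder, gradients from the reconstruction branch regularize the features the classifier sees.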
Detecting forged videos is highly desirable given the abuse of deepfakes. Existing detection methods contribute to exploring the specific artifacts in deepfake videos and fit well on certain data. However, the ever-growing sophistication of artifact-generating techniques keeps challenging the robustness of traditional deepfake detectors, and as a result, progress on the generalizability of these methods has stalled. To address this issue, motivated by the empirical observations that the identities behind the voice and the face in deepfake videos often mismatch, and that voices and faces share a certain degree of homogeneity, in this paper we propose to perform deepfake detection from the unexplored voice-face matching view. To this end, a voice-face matching method is devised to measure the matching degree of the two. However, training on a specific deepfake dataset makes the model overfit certain traits of that deepfake algorithm. We instead advocate a method that quickly adapts to untapped forgery methods through a pretrain-then-finetune paradigm. Specifically, we first pre-train the model on a generic audio-visual dataset and then fine-tune it on downstream deepfake data. We conduct extensive experiments over three widely exploited deepfake datasets: DFDC, FakeAVCeleb, and DeepfakeTIMIT. Our method obtains significant performance gains compared with other state-of-the-art competitors. It is also worth noting that our method already achieves competitive results when fine-tuned on limited deepfake data.
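The voice-face matching degree can be illustrated with a plain cosine similarity between the two embeddings, thresholded to flag suspicious clips. The names `matching_degree` and `is_suspicious` and the threshold value are illustrative assumptions, not the paper's method:

```python
import numpy as np

def matching_degree(face_emb, voice_emb):
    """Cosine similarity between a face embedding and a voice embedding of the
    same clip; a low value flags a likely identity mismatch."""
    f = face_emb / np.linalg.norm(face_emb)
    v = voice_emb / np.linalg.norm(voice_emb)
    return float(f @ v)

def is_suspicious(face_emb, voice_emb, threshold=0.5):
    """Flag a clip whose voice and face embeddings disagree beyond the threshold."""
    return matching_degree(face_emb, voice_emb) < threshold
```

In practice the two embeddings would come from pre-trained face and speaker encoders projected into a shared space, so that genuine clips score high and swapped identities score low.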
In today's era of digital misinformation, we increasingly face new threats posed by video falsification techniques. Such falsifications range from cheapfakes to deepfakes (e.g., sophisticated AI media-synthesis methods) that are indistinguishable from real videos. To address this challenge, we propose a multimodal semantic forensic approach that discovers clues beyond visual-quality discrepancies, thereby handling both simpler cheapfakes and visually persuasive deepfakes. In this work, our goal is to verify that the purported person seen in a video is indeed that person, by detecting anomalous correspondences between their facial movements and the words they speak. We leverage the idea of attribution to learn person-specific biometric patterns that distinguish a given speaker from others. We use interpretable Action Units (AUs) to capture a person's face and head movement, rather than deep CNN visual features, and we are the first to use word-conditioned facial motion analysis. Unlike existing person-specific approaches, our method is also effective against attacks that focus on lip manipulation. We further demonstrate our method's efficiency on a range of fakes not seen in training, including those without video manipulation, which were not addressed in prior work.
As tools for content editing mature, and artificial intelligence (AI)-based algorithms for synthesizing media grow, the presence of manipulated content across online media is increasing. This phenomenon causes the spread of misinformation, creating a greater need to distinguish between "real" and "manipulated" content. To this end, we present VideoSham, a dataset consisting of 826 videos (413 real and 413 manipulated). Many existing deepfake datasets focus exclusively on two types of facial manipulations: swapping with a different subject's face or altering the existing face. VideoSham, on the other hand, contains more diverse, context-rich, and human-centric high-resolution videos manipulated using combinations of 6 different spatial and temporal attacks. Our analysis shows that state-of-the-art manipulation detection algorithms only work for a few specific attacks and do not scale well on VideoSham. We performed a user study on Amazon Mechanical Turk with 1200 participants to understand whether they could differentiate between the real and manipulated videos in VideoSham. Finally, we dig deeper into the strengths and weaknesses of the performances of humans and SOTA algorithms to identify gaps that need to be filled with better AI algorithms.
The development of powerful deep-learning technologies has brought about some negative effects for both society and individuals. One such issue is the emergence of fake media. To tackle this issue, we organized the Trusted Media Challenge (TMC) to explore how artificial intelligence (AI) technologies could be leveraged to combat fake media. Together with the challenge, we released a challenge dataset consisting of 4,380 fake and 2,563 real videos. All these videos are accompanied by audio, and different video and/or audio manipulation methods were adopted to produce different types of fake media. The videos in the dataset have various durations, backgrounds, and illumination, with a minimum resolution of 360p, and may contain perturbations that mimic transmission errors and bad compression. We also carried out a user study to demonstrate the quality of the composed dataset. The results show that our dataset has promising quality and can fool human participants in many cases.
Deep learning has been successfully applied to solve various complex problems, ranging from big data analytics to computer vision and human-level control. However, deep learning advances have also been employed to create software that can pose threats to privacy, democracy, and national security. One of these recently emerged deep-learning-powered applications is the deepfake. Deepfake algorithms can create fake images and videos that humans cannot distinguish from authentic ones. The proposal of technologies that can automatically detect and assess the integrity of digital visual media is therefore indispensable. This paper presents a survey of algorithms used to create deepfakes and, more importantly, of methods proposed in the literature to date for detecting deepfakes. We present extensive discussions of the challenges, research trends, and directions related to deepfake technologies. By reviewing the background of deepfakes and state-of-the-art deepfake detection methods, this study provides a comprehensive overview of deepfake techniques and facilitates the development of new and more robust methods to deal with the increasingly challenging deepfakes.
DeepFakes refer to tailored and synthetically generated videos that are now prevalent and spreading on a large scale, threatening the trustworthiness of the information available online. Although existing datasets contain different kinds of deepfakes which vary in their generation techniques, they do not consider progression of deepfakes in a "phylogenetic" manner. It is possible that an existing deepfake face is swapped with another face. This face-swapping process can be performed multiple times, and the resulting deepfake can evolve to confuse deepfake detection algorithms. Furthermore, many databases do not provide the employed generative model as a target label. Model attribution helps enhance the explainability of detection results by providing information on the generative model used. To enable the research community to address these questions, this paper proposes DeePhy, a novel DeepFake phylogeny dataset consisting of 5040 deepfake videos generated using three different generation techniques. There are 840 videos of once-swapped deepfakes, 2520 videos of twice-swapped deepfakes, and 1680 videos of thrice-swapped deepfakes. At over 30 GB in size, the database was prepared in over 1100 hours using 18 GPUs with 1,352 GB of cumulative memory. We also present a benchmark on the DeePhy dataset using six deepfake detection algorithms. The results highlight the need to evolve research on model attribution of deepfakes and to generalize the process over a variety of deepfake generation techniques. The database is available at: http://iab-rubric.org/deephy-database
Despite encouraging progress in deepfake detection, generalization to unseen forgery types remains a significant challenge due to the limited forgery clues explored during training. In contrast, we notice a common phenomenon in deepfakes: fake video creation inevitably disrupts the statistical regularity of the original videos. Inspired by this observation, we propose to boost the generalization of deepfake detection by distinguishing the "regularity disruption" that does not appear in real videos. Specifically, by carefully examining the spatial and temporal properties, we propose to disrupt a real video through a pseudo-fake generator and create a wide range of pseudo-fake videos for training. This practice allows us to achieve deepfake detection without using fake videos and improves the generalization ability in a simple and efficient manner. To jointly capture the spatial and temporal disruptions, we propose a Spatio-Temporal Enhancement block to learn the regularity disruption across our self-created videos. Through comprehensive experiments, our method exhibits excellent performance on several datasets.
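A pseudo-fake generator of the kind described above might disrupt a real clip along both axes. The concrete perturbations below (a foreign-patch paste for the spatial break, an adjacent-frame swap for the temporal break) are hedged examples under assumed names, not the paper's actual generator:

```python
import numpy as np

def make_pseudo_fake(frames, rng):
    """Turn a real clip into a training 'pseudo-fake' by breaking its regularity:
    (i) spatially, by pasting a patch from a different frame, and
    (ii) temporally, by swapping two adjacent frames.
    frames: (T, H, W) grayscale clip with T >= 2."""
    fake = frames.copy()
    t, h, w = fake.shape
    src = int(rng.integers(1, t))                    # spatial break: foreign patch
    fake[0, : h // 2, : w // 2] = frames[src, : h // 2, : w // 2]
    i = int(rng.integers(0, t - 1))                  # temporal break: frame swap
    fake[[i, i + 1]] = fake[[i + 1, i]]              # fancy indexing copies, so this swaps
    return fake
```

Training a detector on (real, pseudo-fake) pairs like these requires no actual deepfake videos at all, which is the point of the approach.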
In recent years, social media has grown to become a primary source of information for many online users. This has given rise to the spread of misinformation through deepfakes. Deepfakes are videos or images that replace one person's face with another computer-generated face, often that of a more widely known person in society. With recent advancements in technology, people with little technical experience can produce these videos. This enables them to mimic powerful figures in society, such as presidents or celebrities, creating the potential danger of spreading misinformation and other nefarious uses of deepfakes. To counter this online threat, researchers have developed models designed to detect deepfakes. This study looks at various deepfake detection models that use deep learning algorithms to counter this looming threat. This survey focuses on providing a comprehensive overview of the current state of deepfake detection models and the unique approaches many researchers take to solve this problem. The benefits, limitations, and suggestions for future work are thoroughly discussed throughout this paper.
In this paper, we introduce MINTIME, a video deepfake detection approach that captures spatial and temporal anomalies and handles instances of multiple people in the same video and variations in face sizes. Previous approaches disregard such information either by using simple a-posteriori aggregation schemes, i.e., average or max operation, or by using only one identity for the inference, i.e., the largest one. On the contrary, the proposed approach builds on a Spatio-Temporal TimeSformer combined with a Convolutional Neural Network backbone to capture spatio-temporal anomalies from the face sequences of multiple identities depicted in a video. This is achieved through an Identity-aware Attention mechanism that attends to each face sequence independently based on a masking operation and facilitates video-level aggregation. In addition, two novel embeddings are employed: (i) the Temporal Coherent Positional Embedding that encodes each face sequence's temporal information and (ii) the Size Embedding that encodes the size of the faces as a ratio to the video frame size. These extensions allow our system to adapt particularly well in the wild by learning how to aggregate information from multiple identities, which is usually disregarded by other methods in the literature. It achieves state-of-the-art results on the ForgeryNet dataset with an improvement of up to 14% AUC in videos containing multiple people and demonstrates ample generalization capabilities in cross-forgery and cross-dataset settings. The code is publicly available at https://github.com/davide-coccomini/MINTIME-Multi-Identity-size-iNvariant-TIMEsformer-for-Video-Deepfake-Detection.
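The Identity-aware Attention mechanism based on masking can be sketched as scaled dot-product attention with cross-identity scores masked out. The interface below (`identity_masked_attention`, integer identity ids per token) is an assumption for illustration, not the MINTIME implementation:

```python
import numpy as np

def identity_masked_attention(Q, K, V, identity_ids):
    """Scaled dot-product attention in which each face token may only attend to
    tokens of the same identity; cross-identity scores are masked out."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    same_id = identity_ids[:, None] == identity_ids[None, :]
    scores = np.where(same_id, scores, -1e9)              # mask other identities
    scores = scores - scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ V, weights
```

Each face sequence is therefore processed independently inside one shared attention block, and video-level aggregation can happen afterwards over the per-identity outputs.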
DeepFake media is becoming widespread nowadays because easy-to-use tools and mobile apps can generate realistic deepfake videos/images without requiring any technical knowledge. With further advances in this field in the near future, the quantity and quality of deepfake media is also expected to flourish, potentially making deepfake media a new practical tool for spreading mis/disinformation. Because of these concerns, deepfake media detection tools have become essential. In this study, we propose a novel hybrid transformer network utilizing an early feature-fusion strategy for deepfake video detection. Our model employs two different CNN networks, namely (1) XceptionNet and (2) EfficientNet-B4, as feature extractors. We train both feature extractors in an end-to-end manner on the FaceForensics++ and DFDC benchmarks. Our model, while having a relatively straightforward architecture, achieves results comparable to other, more advanced state-of-the-art approaches when evaluated on the FaceForensics++ and DFDC benchmarks. In addition, we propose novel face cut-out augmentations as well as random cut-out augmentations. We show that the proposed augmentations improve the detection performance of our model and reduce overfitting. Beyond this, we also show that our model is capable of learning from a small amount of data.
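A random cut-out augmentation of the kind proposed can be sketched as zeroing a square patch at a random position. The patch sizing and the function name `random_cutout` are assumptions; the abstract does not specify the exact scheme:

```python
import numpy as np

def random_cutout(img, size, rng):
    """Random cut-out: zero a (size x size) square at a random location so the
    model cannot rely on any single facial region.  img: (H, W, C)."""
    out = img.copy()
    h, w = out.shape[:2]
    y = int(rng.integers(0, h - size + 1))
    x = int(rng.integers(0, w - size + 1))
    out[y : y + size, x : x + size] = 0
    return out
```

The face cut-out variant would presumably place the square over detected facial landmarks instead of a uniformly random location, forcing the model to use the remaining regions.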
As ultra-realistic face forgery techniques emerge, deepfake detection has attracted increasing attention due to security concerns. Many detectors cannot achieve accurate results when detecting unseen manipulations despite excellent performance on known forgeries. In this paper, we are motivated by the observation that the discrepancies between real and fake videos are extremely subtle and localized, and that inconsistencies or irregularities can exist in some critical facial regions across various information domains. To this end, we propose a novel pipeline, Cross-Domain Local Forensics (XDLF), for more general deepfake video detection. In the proposed pipeline, a specialized framework is presented to simultaneously exploit local forgery patterns from the space, frequency, and time domains, thus learning cross-domain features to detect forgeries. Moreover, the framework leverages four high-level forgery-sensitive local regions of a human face to guide the model to enhance subtle artifacts and localize potential anomalies. Extensive experiments on several benchmark datasets demonstrate the impressive performance of our method, and we achieve superiority over several state-of-the-art methods on cross-dataset generalization. We also examine the factors that contribute to its performance through ablation studies, which suggest that exploiting cross-domain local characteristics is a noteworthy direction for developing more general deepfake detectors.
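The frequency-domain side of a cross-domain pipeline can be illustrated by a simple per-region spectral descriptor. This radial-band energy sketch (`region_freq_features`, three fixed bands) is a generic stand-in, not the XDLF feature extractor:

```python
import numpy as np

def region_freq_features(patch):
    """Frequency-domain descriptor for one local face region: the magnitude
    spectrum is split into low/mid/high radial bands and summed per band.
    patch: (H, W) grayscale region."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    h, w = patch.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    r_max = r.max()
    return np.array([spec[(r >= lo * r_max) & (r < hi * r_max)].sum()
                     for lo, hi in [(0, 1 / 3), (1 / 3, 2 / 3), (2 / 3, 1.01)]])
```

Elevated high-band energy in a small facial region is one classic cue for blending artifacts, which is why frequency features complement spatial and temporal ones.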
This paper presents our results and findings on using temporal images for deepfake detection. We model the temporal relationships among 468 facial landmarks across the frames of a given video by constructing images, called temporal images, from the pixel values at these landmark locations. A CNN is able to recognize the spatial relationships that exist between the pixels of a given image. Ten different ImageNet models were investigated.
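The construction of a temporal image can be sketched as stacking, frame by frame, the pixel intensities sampled at the landmark locations. The uint8 normalization and the function name `temporal_image` are assumptions; the paper's exact encoding may differ:

```python
import numpy as np

def temporal_image(landmark_values):
    """Build a 'temporal image': row t holds the pixel values sampled at each
    facial landmark in frame t, so a 2D CNN can read temporal patterns as
    spatial ones.  landmark_values: (n_frames, n_landmarks) intensities."""
    img = np.asarray(landmark_values, dtype=float)
    span = max(img.max() - img.min(), 1e-8)
    img = (img - img.min()) / span * 255.0        # min-max normalize to [0, 255]
    return img.astype(np.uint8)
```

For a clip with 468 landmarks, the result is an (n_frames, 468) grayscale image that any ImageNet backbone can consume after resizing.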