The ever higher quality and wide diffusion of fake images have spawned a quest for reliable forensic tools. Many GAN image detectors have been proposed recently. In real-world scenarios, however, most of them show limited robustness and generalization ability. Moreover, they often depend on side information not available at test time, that is, they are not universal. We investigate these problems and propose a new GAN image detector based on a limited sub-sampling architecture and a suitable contrastive learning paradigm. Experiments carried out in challenging conditions prove the proposed method to be a first step towards universal GAN image detection, ensuring also good robustness to common image impairments and good generalization to unseen architectures.
In this paper we present TruFor, a forensic framework that can be applied to a large variety of image manipulation methods, from classic cheapfakes to more recent manipulations based on deep learning. We rely on the extraction of both high-level and low-level traces through a transformer-based fusion architecture that combines the RGB image and a learned noise-sensitive fingerprint. The latter learns to embed the artifacts related to the camera's internal and external processing by training only on real data in a self-supervised manner. Forgeries are detected as deviations from the expected regular pattern that characterizes each pristine image. Looking for anomalies makes the approach robust to a variety of local manipulations and ensures generalization. In addition to a pixel-level localization map and a whole-image integrity score, our approach outputs a reliability map that highlights areas where localization predictions may be error-prone. This is particularly important in forensic applications in order to reduce false alarms and enable large-scale analysis. Extensive experiments on several datasets show that our method reliably detects and localizes both cheapfake and deepfake manipulations, outperforming state-of-the-art works. Code will be publicly available at https://grip-unina.github.io/TruFor/
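As a toy illustration of how a reliability map can gate the integrity decision, the sketch below weights an anomaly map by a confidence map and pools the strongest responses. This is not the TruFor implementation; the `integrity_score` helper, the pooling fraction, and the toy maps are invented for illustration only.

```python
import numpy as np

def integrity_score(anomaly_map, confidence_map, top_frac=0.1):
    # Weight anomalies by confidence, then pool the strongest responses:
    # a reliability map discounts regions where localization is error-prone.
    weighted = anomaly_map * confidence_map
    k = max(1, int(top_frac * weighted.size))
    top = np.sort(weighted.ravel())[-k:]
    return float(top.mean())   # high value -> image likely forged

anomaly = np.zeros((8, 8)); anomaly[2:4, 2:4] = 0.9   # suspected spliced region
conf = np.ones((8, 8)); conf[6:, 6:] = 0.1            # low-reliability corner
print(integrity_score(anomaly, conf) > integrity_score(np.zeros((8, 8)), conf))  # True
```

The pooling step means a small but confidently anomalous region can still drive the whole-image score.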
Face manipulation technology is advancing very rapidly, and new methods are being proposed day by day. The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world. Our key insight is that each person has specific biometric characteristics that a synthetic generator is unlikely to reproduce. Accordingly, we extract high-level audio-visual biometric features which characterize the identity of a person, and use them to create a person-of-interest (POI) deepfake detector. We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity. As a result, when the video and/or audio of a person is manipulated, its representation in the embedding space becomes inconsistent with the real identity, allowing reliable detection. Training is carried out exclusively on real talking-face videos, so the detector does not depend on any specific manipulation method and yields high generalization ability. In addition, our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos by building only on high-level semantic features. Experiments on a wide variety of datasets confirm that our method achieves state-of-the-art performance, with an average improvement in terms of AUC of around 3%, 10%, and 4% for high-quality, low-quality, and attacked videos, respectively. https://github.com/grip-unina/poi-forensics
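The identity-consistency check can be sketched as follows. This is a minimal illustration rather than the POI-forensics code: the toy 3-d embeddings, the `poi_score` helper, and the fixed threshold are hypothetical stand-ins for the learned audio-visual features and their calibration.

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity between two embedding vectors
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def poi_score(test_emb, reference_embs):
    # Distance to the closest reference segment of the person of interest;
    # a large distance suggests the identity was manipulated.
    return min(cosine_distance(test_emb, r) for r in reference_embs)

def is_fake(test_emb, reference_embs, threshold=0.5):
    # In practice the threshold would be calibrated on held-out real
    # videos of the POI; 0.5 is an arbitrary illustrative value.
    return poi_score(test_emb, reference_embs) > threshold

# Toy references clustered around one direction (the "identity").
rng = np.random.default_rng(0)
refs = [np.array([1.0, 0.0, 0.0]) + 0.01 * rng.standard_normal(3) for _ in range(5)]
real_like = np.array([0.99, 0.02, 0.0])
fake_like = np.array([0.0, 1.0, 0.0])
print(is_fake(real_like, refs))  # expected False
print(is_fake(fake_like, refs))  # expected True
```

Because only real reference material is needed, the same check applies unchanged to manipulation methods never seen before.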
With the rapid development of deep generative models (such as Generative Adversarial Networks and Auto-encoders), AI-synthesized images of the human face are now of such high quality that humans can hardly distinguish them from pristine ones. Although existing detection methods have shown high performance in specific evaluation settings, e.g., on images from seen models or on images without real-world post-processing, they tend to suffer serious performance degradation in real-world scenarios where testing images can be generated by more powerful generation models or combined with various post-processing operations. To address this issue, we propose a Global and Local Feature Fusion (GLFF) framework to learn rich and discriminative representations by combining multi-scale global features from the whole image with refined local features from informative patches for face forgery detection. GLFF fuses information from two branches: the global branch to extract multi-scale semantic features and the local branch to select informative patches for detailed local artifact extraction. Due to the lack of a face forgery dataset simulating real-world applications for evaluation, we further create a challenging face forgery dataset, named DeepFakeFaceForensics (DF^3), which contains 6 state-of-the-art generation models and a variety of post-processing techniques to approach real-world scenarios. Experimental results demonstrate the superiority of our method over the state-of-the-art methods on the proposed DF^3 dataset and three other open-source datasets.
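A drastically simplified sketch of the two-branch idea follows. The variance-based patch ranking, the pooled statistics, and all names are illustrative assumptions, not the GLFF implementation, which uses deep multi-scale features and an attention-based fusion.

```python
import numpy as np

def select_informative_patches(img, patch=4, k=2):
    # Local branch: rank non-overlapping patches by variance as a cheap
    # proxy for "informativeness" and keep the top-k.
    h, w = img.shape
    patches = [img[i:i+patch, j:j+patch]
               for i in range(0, h, patch) for j in range(0, w, patch)]
    patches.sort(key=lambda p: float(p.var()), reverse=True)
    return patches[:k]

def glff_features(img, patch=4, k=2):
    # Global branch: multi-scale pooled statistics of the whole image.
    global_feat = [float(img.mean()), float(img[::2, ::2].mean())]
    # Local branch: pooled statistics of the selected patches.
    local_feat = [float(p.mean()) for p in select_informative_patches(img, patch, k)]
    return np.array(global_feat + local_feat)   # fused representation

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (8, 8))
feats = glff_features(img)
print(feats.shape)  # (4,)
```

The point of the fusion is that subtle generator artifacts often survive only in a few local patches, while global semantics anchor the decision.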
Visually realistic GAN-generated facial images raise obvious concerns about potential misuse. Many effective forensic algorithms have been developed to detect such synthetic images in recent years. It is therefore important to assess the vulnerability of such forensic detectors against adversarial attacks. In this paper, we propose a new black-box attack method against GAN-generated image detectors. A novel contrastive learning strategy is adopted to train the encoder-decoder-based anti-forensic model under a contrastive loss function. GAN images and their simulated real counterparts are constructed as positive and negative samples, respectively. Leveraging the trained attack model, an imperceptible contrastive perturbation can be applied to input synthetic images to remove the GAN fingerprint to some extent. As such, existing GAN-generated image detectors are expected to be deceived. Extensive experimental results verify that the proposed attack effectively reduces the accuracy of three state-of-the-art detectors on six popular GANs. High visual quality of the attacked images is also achieved. The source code will be available at https://github.com/ZXMMD/BAttGAND.
The widespread diffusion of synthetically generated content is a serious threat that needs urgent countermeasures. The generation of synthetic content is not restricted to multimedia data like videos, photographs or audio sequences, but covers a significantly vast area that can include biological images as well, such as western blot and microscopic images. In this paper, we focus on the detection of synthetically generated western blot images. Western blot images are widely explored in the biomedical literature, and it has already been shown how easily these images can be counterfeited, with little hope of spotting manipulations by visual inspection or by standard forensic detectors. To overcome the absence of publicly available datasets, we create a new dataset comprising more than 14K original western blot images and 18K synthetic western blot images, generated by three different state-of-the-art generation methods. We then investigate different strategies to detect synthetic western blots, exploring binary classification methods as well as one-class detectors. In both scenarios, we never exploit synthetic western blot images at the training stage. The achieved results show that synthetically generated western blot images can be spotted with good accuracy, even though the detectors are not optimized on synthetic versions of these scientific images.
Rapid advances in Generative Adversarial Networks (GANs) raise new challenges for image attribution: detecting whether an image is synthetic and, if so, determining which GAN architecture created it. Uniquely, we present a solution to this task that is capable of 1) matching images invariant to their semantic content; 2) robustness to benign transformations (changes in quality, resolution, shape, etc.) commonly encountered as images are re-shared online. To formalize our research, a challenging benchmark, Attribution88, is collected for robust and practical image attribution. We then propose RepMix, our GAN fingerprinting technique based on representation mixing and a novel loss. We validate its capability of tracing the provenance of GAN-generated images invariant to the semantic content of the image as well as robust to perturbations. We show our approach improves significantly on existing GAN fingerprinting works in both semantic generalization and robustness. Data and code are available at https://github.com/tubui/image_attribution.
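At its core, representation mixing resembles mixup applied to intermediate features. The following minimal sketch works under that assumption; the shapes, the `repmix` name, and the toy one-hot labels are illustrative, not the paper's code.

```python
import numpy as np

def repmix(feat_a, feat_b, label_a, label_b, lam):
    """Mix two intermediate feature maps and their attribution labels.

    Training the network to predict the mixed label from the mixed
    features encourages fingerprint cues that survive content changes.
    """
    mixed_feat = lam * feat_a + (1.0 - lam) * feat_b
    mixed_label = lam * label_a + (1.0 - lam) * label_b
    return mixed_feat, mixed_label

f1 = np.ones((4, 4))          # feature map of an image from GAN "A"
f2 = np.zeros((4, 4))         # feature map of an image from GAN "B"
y1 = np.array([1.0, 0.0])     # one-hot attribution labels
y2 = np.array([0.0, 1.0])
mf, ml = repmix(f1, f2, y1, y2, lam=0.7)
print(mf[0, 0], ml)
```

Mixing at the representation level, rather than the pixel level, is what decouples the fingerprint signal from the image's semantic content.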
Full face synthesis and partial face manipulation by virtue of generative adversarial networks (GANs) and their variants have raised wide public concern. In the multimedia forensics area, detecting and ultimately locating image forgery has become an imperative task. In this work, we investigate the architectures of existing GAN-based face manipulation methods and observe that the imperfection of their upsampling methods can serve as an important asset for GAN-synthesized fake image detection and forgery localization. Based on this fundamental observation, we propose a novel approach, termed FakeLocator, to obtain high localization accuracy, at full resolution, on manipulated facial images. To the best of our knowledge, this is the first attempt to solve the GAN-based fake localization problem with a gray-scale fakeness map that preserves more information about fake regions. To improve the universality of FakeLocator across multifarious facial attributes, we introduce an attention mechanism to guide the training of the model. To improve its universality across different deepfake methods, we propose partial data augmentation and single-sample clustering on the training images. Experimental results on the popular FaceForensics++ and DFFD datasets and seven different state-of-the-art GAN-based face generation methods show the effectiveness of our method. Compared with the baselines, our method performs better on various metrics. Moreover, the proposed method is robust against various real-world facial image degradations such as JPEG compression, low resolution, noise, and blur.
Figure 1: FaceForensics++ is a dataset of facial forgeries that enables researchers to train deep-learning-based approaches in a supervised fashion. The dataset contains manipulations created with four state-of-the-art methods, namely, Face2Face, FaceSwap, DeepFakes, and NeuralTextures.
Although recent advances in generative models bring diverse benefits to society, they can also be abused for malicious purposes such as fraud, defamation, and fake news. To prevent such cases, vigorous research has been conducted to distinguish generated images from real ones, but challenges remain in distinguishing unseen generated images outside the training settings. Such limitations occur due to data dependency arising from the model overfitting to the training data generated by specific GANs. To overcome this issue, we adopt a self-supervised scheme and propose a novel framework. Our proposed method is composed of an artificial fingerprint generator, which reconstructs high-quality artificial fingerprints of GAN images for detailed analysis, and a detector that distinguishes GAN images by learning the reconstructed artificial fingerprints. To improve the generalization of the artificial fingerprint generator, we build multiple autoencoders with different numbers of upconvolution layers. With numerous ablation studies, the robust generalization of our method is validated by outperforming previous state-of-the-art algorithms, even without utilizing the GAN images of the training dataset.
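The fingerprint-as-residual idea can be illustrated with a toy autoencoder. Here a 3x3 mean filter stands in for the trained reconstruction network, and all names (`toy_autoencoder`, `artificial_fingerprint`) are invented for illustration; the actual method trains multiple autoencoders with varying upconvolution depths.

```python
import numpy as np

def toy_autoencoder(img):
    # Stand-in for a trained autoencoder: a 3x3 mean filter that keeps
    # low-frequency content and discards fine, fingerprint-like detail.
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i+3, j:j+3].mean()
    return out

def artificial_fingerprint(img):
    # The residual between an image and its reconstruction exposes the
    # high-frequency traces a detector can then be trained on.
    return img - toy_autoencoder(img)

img = np.ones((8, 8))
img[::2, ::2] += 0.1          # a periodic, upsampling-like pattern
fp = artificial_fingerprint(img)
print(round(float(np.abs(fp).mean()), 4))
```

A constant image yields a zero residual, while the periodic pattern, reminiscent of upsampling artifacts, leaves a clear trace.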
Advances in GAN-based generative modeling have prompted the community to find uses for them beyond image generation and editing tasks. In particular, several recent works have shown that GAN representations can be re-purposed for discriminative tasks such as part segmentation, especially when training data is limited. But how do these improvements compare against recent advances in self-supervised learning? Motivated by this, we present an alternative approach based on contrastive learning and compare their performance on standard few-shot part segmentation benchmarks. Our experiments reveal that not only do GAN-based approaches offer no significant performance advantage, but their multi-step training is complex, nearly an order of magnitude slower, and can introduce additional bias. These experiments suggest that the inductive biases present in generative models, such as their ability to disentangle shape and texture, are well captured by standard feed-forward networks trained using contrastive learning.
Deep neural networks have become prevalent in human analysis, boosting the performance of applications such as biometric recognition, action recognition, and person re-identification. However, the performance of such networks scales with the available training data. In human analysis, the demand for large-scale datasets poses a severe challenge, as data collection is tedious, time-consuming, expensive, and must comply with data protection laws. Current research investigates the generation of synthetic data as an efficient and privacy-ensuring alternative to collecting real data in the field. This survey introduces the basic definitions and methodologies essential for generating and employing synthetic data for human analysis. We summarize current state-of-the-art methods and the main benefits of using synthetic data. We also provide an overview of publicly available synthetic datasets and generation models. Finally, we discuss limitations as well as open research questions in this field. This survey is intended for researchers and practitioners in the field of human analysis.
Deep learning has enabled realistic face manipulation (i.e., deepfake), which poses significant concerns over the integrity of the media in circulation. Most existing deep learning techniques for deepfake detection can achieve promising performance in the intra-dataset evaluation setting (i.e., training and testing on the same dataset), but are unable to perform satisfactorily in the inter-dataset evaluation setting (i.e., training on one dataset and testing on another). Most of the previous methods use the backbone network to extract global features for making predictions and only employ binary supervision (i.e., indicating whether the training instances are fake or authentic) to train the network. Classification based merely on the learning of global features often leads to weak generalizability to unseen manipulation methods. In addition, the reconstruction task can improve the learned representations. In this paper, we introduce a novel approach for deepfake detection, which considers the reconstruction and classification tasks simultaneously to address these problems. This method shares the information learned by one task with the other, which focuses on an aspect that other existing works rarely consider and hence boosts the overall performance. In particular, we design a two-branch Convolutional AutoEncoder (CAE), in which the Convolutional Encoder used to compress the feature map into the latent representation is shared by both branches. Then the latent representation of the input data is fed to a simple classifier and the unsupervised reconstruction component simultaneously. Our network is trained end-to-end. Experiments demonstrate that our method achieves state-of-the-art performance on three commonly-used datasets, particularly in the cross-dataset evaluation setting.
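A minimal sketch of the shared-encoder, two-branch objective follows, with the networks reduced to linear maps for brevity. The weights, dimensions, and the `forward_losses` helper are illustrative assumptions, not the paper's CAE architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder and the two heads, reduced to linear maps for brevity.
W_enc = rng.standard_normal((16, 4)) * 0.1   # encoder: 16-dim input -> 4-dim latent
W_dec = rng.standard_normal((4, 16)) * 0.1   # reconstruction branch
w_cls = rng.standard_normal(4) * 0.1         # classification branch

def forward_losses(x, y, lam=0.5):
    z = x @ W_enc                             # shared latent representation
    recon = z @ W_dec                         # unsupervised reconstruction
    logit = z @ w_cls                         # binary real/fake prediction
    p = 1.0 / (1.0 + np.exp(-logit))
    cls_loss = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    rec_loss = np.mean((recon - x) ** 2)
    # The two tasks share the encoder, so minimizing this joint loss
    # shapes the latent space with both objectives at once.
    return cls_loss + lam * rec_loss

x = rng.standard_normal(16)
print(float(forward_losses(x, y=1.0)))
```

Because the gradient of both terms flows through the same encoder, the reconstruction branch acts as a regularizer on the features the classifier uses.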
Thanks to recent advances in deep learning, sophisticated generation tools exist nowadays that produce extremely realistic synthetic speech. However, malicious uses of such tools are possible and likely, posing serious threats to our society. Synthetic speech detection has therefore become a pressing research topic, and a large variety of detection methods have been proposed recently. Unfortunately, they hardly generalize to synthetic audio generated by tools never seen in the training phase, which makes them unfit to face real-world scenarios. In this work, we aim to overcome this issue by proposing a new detection approach that leverages only the biometric characteristics of the speaker, with no reference to specific manipulations. Since the detector is trained only on real data, generalization is automatically guaranteed. The proposed approach can be implemented on top of off-the-shelf speaker verification tools. We test several such solutions on three popular test sets, obtaining good performance, high generalization ability, and high robustness.
In this work, we investigate improving the generalizability of GAN-generated image detectors by performing data augmentation in the fingerprint domain. Specifically, we first separate the fingerprints and contents of the GAN-generated images using an autoencoder-based GAN fingerprint extractor, followed by random perturbations of the fingerprints. Then the original fingerprints are substituted with the perturbed fingerprints and added to the original contents, to produce images that are visually invariant but with distinct fingerprints. The perturbed images can successfully imitate images generated by different GANs to improve the generalization of the detectors, which is demonstrated by the spectra visualization. To our knowledge, we are the first to conduct data augmentation in the fingerprint domain. Our work explores a novel prospect that is distinct from previous works on spatial and frequency domain augmentation. Extensive cross-GAN experiments demonstrate the effectiveness of our method compared to the state-of-the-art methods in detecting fake images generated by unknown GANs.
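The augmentation pipeline can be sketched as follows, assuming the content/fingerprint separation has already been produced by the extractor; the Gaussian noise model and the names here are illustrative assumptions, not the paper's perturbation scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment_in_fingerprint_domain(content, fingerprint, noise_std=0.05):
    # Randomly perturb the extracted GAN fingerprint, then recombine it
    # with the untouched content: the image stays visually the same but
    # carries a fingerprint the detector has never seen before.
    perturbed = fingerprint + rng.normal(0.0, noise_std, fingerprint.shape)
    return content + perturbed

content = rng.uniform(0, 1, (8, 8))               # extractor's content estimate
fingerprint = 0.01 * rng.standard_normal((8, 8))  # original minus content residual
augmented = augment_in_fingerprint_domain(content, fingerprint)
# The content term dominates, so the augmented image stays close to the original.
print(float(np.abs(augmented - (content + fingerprint)).max()) < 0.5)
```

Training a detector on many such recombinations exposes it to fingerprint variation without needing images from additional GANs.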
Deep learning has been successfully applied to solve various complex problems ranging from big data analytics to computer vision and human-level control. However, deep learning advances have also been employed to create software that can pose threats to privacy, democracy, and national security. One of those deep-learning-powered applications that recently emerged is the deepfake. Deepfake algorithms can create fake images and videos that humans cannot distinguish from authentic ones. The development of technologies that can automatically detect and assess the integrity of digital visual media is therefore indispensable. This paper presents a survey of the algorithms used to create deepfakes and, more importantly, the methods proposed in the literature to date to detect deepfakes. We present extensive discussions on the challenges, research trends, and directions related to deepfake technologies. By reviewing the background of deepfakes and state-of-the-art deepfake detection methods, this study provides a comprehensive overview of deepfake techniques and facilitates the development of new and more robust methods to deal with the increasingly challenging deepfakes.
Can deep learning models achieve greater generalization if their training is guided by reference to human perceptual abilities? And how can this be done in a practical manner? This paper proposes a first-ever training strategy to ConveY Brain Oversight to Raise Generalization (CYBORG). This new training approach incorporates human-annotated saliency maps into a CYBORG loss function that guides the model towards learning features from image regions that humans find salient when solving a given visual task. The Class Activation Mapping (CAM) mechanism is used to probe the model's current saliency in each training batch, juxtapose it with human saliency, and penalize the model for large differences. Results on the task of synthetic face detection show that the CYBORG loss leads to significantly improved performance on unseen samples consisting of face images generated by six generative adversarial networks (GANs), across multiple classification network architectures. We also show that scaling up the training data with the standard loss by as much as seven times cannot beat the accuracy of the CYBORG loss. As a side effect, we observed that adding explicit region annotation to the task of synthetic face detection increased human classification performance. This work opens a new area of research on how to incorporate human visual saliency into loss functions. All data, code, and pre-trained models used in this work are made available.
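A minimal sketch of such a loss, assuming a precomputed classification loss and saliency maps: the normalization, the mean-squared divergence, and the `alpha` weighting are illustrative choices, not necessarily the exact CYBORG formulation.

```python
import numpy as np

def normalize(m):
    # Shift to non-negative and normalize to a distribution over pixels.
    m = m - m.min()
    s = m.sum()
    return m / s if s > 0 else m

def cyborg_loss(cls_loss, model_cam, human_saliency, alpha=0.5):
    # Penalize the divergence between where the model looks (its CAM)
    # and where humans looked, alongside the usual classification loss.
    cam = normalize(model_cam)
    hum = normalize(human_saliency)
    saliency_term = np.mean((cam - hum) ** 2)
    return alpha * cls_loss + (1.0 - alpha) * saliency_term

human = np.zeros((4, 4)); human[1:3, 1:3] = 1.0    # annotator marked the center
aligned = human.copy()                              # CAM agrees with humans
misaligned = np.zeros((4, 4)); misaligned[0, 0] = 1.0
print(cyborg_loss(0.1, aligned, human) < cyborg_loss(0.1, misaligned, human))  # True
```

A model attending to the same regions as the annotators incurs no extra penalty, so the saliency term only fires when model and human attention diverge.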
Machine learning models often encounter samples that diverge from the training distribution. Failure to recognize an out-of-distribution (OOD) sample, and consequently assigning that sample to an in-class label, significantly compromises the reliability of a model. The problem has gained significant attention due to its importance for safely deploying models in open-world settings. Detecting OOD samples is challenging due to the intractability of modeling all possible unknown distributions. To date, several research domains have tackled the problem of detecting unfamiliar samples, including anomaly detection, novelty detection, one-class learning, open set recognition, and out-of-distribution detection. Despite having similar and shared concepts, out-of-distribution, open set, and anomaly detection have been investigated independently. Accordingly, these research avenues have not cross-pollinated, creating research barriers. While some surveys intend to overview these approaches, they seem to focus only on a specific domain without examining the relationships between different domains. This survey aims to provide a cross-domain and comprehensive review of numerous eminent works in the respective areas while identifying their commonalities. Researchers can benefit from the overview of research advances in different fields and develop future methodologies synergistically. Furthermore, to our knowledge, while there are surveys on anomaly detection or one-class learning, there is no comprehensive or up-to-date survey on out-of-distribution detection, which our survey covers extensively. Finally, having a unified cross-domain perspective, we discuss and shed light on future lines of research, intending to bring these fields closer together.
We present a watermarking method for protecting the Intellectual Property (IP) of Generative Adversarial Networks (GANs). The aim is to watermark the GAN model so that any image generated by the GAN contains an invisible watermark (signature), whose presence in the image can be checked at a later stage for ownership verification. To achieve this goal, a pre-trained CNN watermark decoding block is inserted at the output of the generator. The generator loss is then modified by including a watermark loss term, to ensure that the prescribed watermark can be extracted from the generated images. The watermark is embedded via fine-tuning, with reduced time complexity. Results show that our method can effectively embed an invisible watermark inside the generated images. Moreover, our method is a general one and can work with different GAN architectures, different tasks, and different resolutions of the output image. We also demonstrate the good robustness of the embedded watermark against several post-processing operations, among them JPEG compression, noise addition, blurring, and color transformations.
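The modified generator objective can be sketched as follows. The decoder is abstracted to precomputed logits, and the binary cross-entropy form together with the `lam` weighting are illustrative assumptions about how the watermark term could be written.

```python
import numpy as np

def watermark_loss(decoded_logits, target_bits):
    # Binary cross-entropy between the bits recovered by the pre-trained
    # decoder and the prescribed watermark bits.
    p = 1.0 / (1.0 + np.exp(-decoded_logits))
    eps = 1e-9
    return float(np.mean(-(target_bits * np.log(p + eps)
                           + (1 - target_bits) * np.log(1 - p + eps))))

def generator_loss(adv_loss, decoded_logits, target_bits, lam=1.0):
    # The standard adversarial loss is augmented with the watermark term,
    # pushing the generator to embed the signature in every output.
    return adv_loss + lam * watermark_loss(decoded_logits, target_bits)

bits = np.array([1.0, 0.0, 1.0, 1.0])
good = np.array([5.0, -5.0, 5.0, 5.0])    # decoder confidently recovers the bits
bad = np.array([-5.0, 5.0, -5.0, -5.0])   # watermark not embedded
print(generator_loss(0.3, good, bits) < generator_loss(0.3, bad, bits))  # True
```

During fine-tuning, only the generator is updated against this joint objective; the decoder stays frozen so the same decoder can later verify ownership.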
Conditional GANs have matured in recent years and are able to generate high-quality realistic images. However, the computational resources and the training data required to train high-quality GANs are enormous, and research into transfer learning of these models is therefore an urgent topic. In this paper, we explore the transfer from high-quality pre-trained unconditional GANs to conditional GANs. To this end, we propose hypernetwork-based adaptive weight modulation. In addition, we introduce a self-initialization procedure that does not require any real data to initialize the hypernetwork parameters. To further improve the sample efficiency of the knowledge transfer, we propose to use a self-supervised (contrastive) loss to improve the GAN discriminator. In extensive experiments, we validate the efficiency of the hypernetworks, self-initialization, and contrastive loss on multiple standard benchmarks.
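Adaptive weight modulation can be sketched with a toy linear hypernetwork. All shapes and names are illustrative assumptions, and self-initialization is approximated here by simply zeroing the hypernetwork so that the conditional model starts exactly at the pre-trained one; the paper's actual procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.standard_normal((6, 6))     # a pre-trained unconditional GAN weight

def hypernetwork(class_embedding, out_shape, H):
    # Tiny hypernetwork: maps a class embedding to a weight modulation.
    return (H @ class_embedding).reshape(out_shape)

def modulated_weight(W, class_embedding, H):
    # Adaptive weight modulation: scale the frozen weights per class.
    delta = hypernetwork(class_embedding, W.shape, H)
    return W * (1.0 + delta)

# Self-initialization stand-in: a zeroed hypernetwork means that, before
# any training, the conditional generator reproduces the pre-trained one.
H = np.zeros((36, 8))
emb = rng.standard_normal(8)
print(np.allclose(modulated_weight(W, emb, H), W))  # True
```

Starting from an identity-preserving modulation lets the transfer begin from the full quality of the pre-trained generator rather than from random conditional weights.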