With the emergence of GANs, face forgery technology has been seriously abused, making accurate forgery detection an urgent need. Inspired by the fact that PPG signals correspond to the periodic skin-color changes caused by the heartbeat in face videos, we observe that, although the PPG signal is inevitably degraded during the forgery process, a mixture of PPG signals still remains, and this mixture in forged videos exhibits a distinctive rhythm pattern depending on the generation method. Motivated by this key observation, we propose a framework for face forgery detection and categorization, consisting of: 1) a Spatial-Temporal Filtering Network (STFNet) for PPG signal filtering, and 2) a Spatial-Temporal Interaction Network (STINet) for the constraint and interaction of PPG signals. Moreover, with insight into the generation of forgery methods, we further propose intra-source and inter-source blending to boost the performance of the framework. Overall, extensive experiments demonstrate the superiority of our method.
With the rapid development of generative models, AI-based face manipulation technology, known as DeepFakes, has become increasingly realistic. Such face forgery methods can attack any target, which poses a new threat to personal privacy and property security. Moreover, the misuse of synthesized videos shows potential dangers in many areas, such as identity harassment, pornography, and news rumors. Inspired by the fact that the spatial coherence and temporal consistency of physiological signals are destroyed in generated content, we attempt to find inconsistency patterns that can distinguish real videos from synthesized ones based on the variations of facial pixels, which are highly correlated with physiological information. Our approach first applies Eulerian Video Magnification (EVM) at multiple Gaussian scales to the original videos to enlarge the physiological variations caused by changes in facial blood volume, and then transforms both the original and magnified videos into a Multi-Scale Eulerian-Magnified Spatial-Temporal map (MemstMap), which can represent time-varying, physiologically enhanced sequences at different octaves. These maps are then reshaped into frame patches column by column and fed into a vision transformer to learn frame-level spatio-temporal descriptors. Finally, we aggregate the embedded features and output the probability that a video is real or fake. We validated our method on FaceForensics++ and the DeepFake Detection dataset. The results show that our model achieves excellent performance in forgery detection and exhibits outstanding generalization ability across data domains.
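The core Eulerian magnification step in this pipeline is, conceptually, a temporal band-pass filter whose output is amplified and added back to the signal. A minimal single-pixel sketch of that idea follows; the naive DFT filter, the band limits, and the amplification factor are illustrative assumptions (real EVM operates on a Gaussian/Laplacian pyramid of whole frames, not on one intensity trace):

```python
import math

def bandpass_magnify(signal, fps, low_hz, high_hz, alpha):
    """Toy 1-D Eulerian magnification: keep only temporal frequencies in
    [low_hz, high_hz] via a naive DFT, scale the band by alpha, and add the
    amplified variation back to the original intensity trace."""
    n = len(signal)
    # forward DFT (naive O(n^2); fine for short illustrative signals)
    spec = [sum(signal[t] * complex(math.cos(-2 * math.pi * k * t / n),
                                    math.sin(-2 * math.pi * k * t / n))
                for t in range(n)) for k in range(n)]
    # zero out frequencies outside the pass band (mirror bins included)
    for k in range(n):
        f = min(k, n - k) * fps / n
        if not (low_hz <= f <= high_hz):
            spec[k] = 0
    # inverse DFT of the surviving band
    band = [sum(spec[k] * complex(math.cos(2 * math.pi * k * t / n),
                                  math.sin(2 * math.pi * k * t / n))
                for k in range(n)).real / n for t in range(n)]
    return [s + alpha * b for s, b in zip(signal, band)]
```

For a 30 fps trace carrying a 1 Hz pulse component, a 0.7-4 Hz band (the plausible heart-rate range) isolates the pulse, and `alpha = 10` makes the otherwise invisible skin-color oscillation roughly eleven times larger while leaving the mean intensity untouched.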
In recent years, with the rapid development of face editing and generation, more and more fake videos are circulating on social media, which has raised extreme public concern. Existing frequency-domain face forgery detection methods have found that, compared with real images, GAN-forged images exhibit obvious grid-like visual artifacts in the spectrum. But for synthesized videos, these methods are limited to single frames and pay little attention to the most discriminative parts and temporal frequency clues across different frames. To fully exploit the rich information in video sequences, this paper performs video forgery detection in both the spatial and temporal frequency domains and proposes a Discrete Cosine Transform-based Forgery Clue Augmentation Network (FCAN-DCT) to achieve a more comprehensive spatio-temporal feature representation. FCAN-DCT consists of a backbone network and two branches: a Compact Feature Extraction (CFE) module and a Frequency Temporal Attention (FTA) module. We conduct thorough experimental evaluations on two visible-light (VIS) datasets, WildDeepfake and Celeb-DF (v2), as well as our self-built video forgery dataset DeepfakeNIR, which is the first video forgery dataset in the near-infrared modality. The experimental results demonstrate the effectiveness of our method in detecting forged videos in both VIS and NIR scenarios.
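The spectral-artifact observation underlying this line of work can be illustrated with a toy check: GAN upsampling tends to inflate the high-frequency portion of an image's DCT spectrum. The sketch below is a generic, stdlib-only illustration of that diagnostic, not FCAN-DCT itself; the block size, the naive transform, and the u+v energy split are all simplifying assumptions:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square block (O(n^4); illustrative only)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            out[u][v] = sum(block[x][y]
                            * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                            * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                            for x in range(n) for y in range(n))
    return out

def high_freq_energy_ratio(block):
    """Fraction of spectral energy in the upper-frequency half (u + v >= n)
    of the DCT plane -- grid-like GAN artifacts inflate this ratio."""
    n = len(block)
    spec = dct2(block)
    total = sum(c * c for row in spec for c in row)
    high = sum(spec[u][v] ** 2 for u in range(n) for v in range(n) if u + v >= n)
    return high / total if total else 0.0
```

A smooth gradient patch concentrates its energy near the DC corner, while a checkerboard (a stand-in for a periodic grid artifact) concentrates it in the high-frequency corner, so the ratio separates the two cleanly.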
Remote photoplethysmography (rPPG)-based physiological measurement has promising application value in affective computing, non-contact health monitoring, telemedicine, and so on, and has become increasingly important, especially during the COVID-19 pandemic. Existing methods generally fall into two groups. The first focuses on mining subtle blood volume pulse (BVP) signals from face videos, but rarely models explicitly the noise that dominates face video content; such methods are vulnerable to noise and may generalize poorly to unseen scenarios. The second focuses on directly modeling the noisy data, which leads to suboptimal performance because such severe random noise lacks regularity. In this paper, we propose a Decomposition and Reconstruction Network (DRNet), which focuses on modeling physiological features rather than noisy data. A novel cycle loss is proposed to constrain the periodicity of the physiological information. In addition, a plug-and-play Spatial Attention Block (SAB) is proposed to enhance features together with spatial location information. Furthermore, an efficient Patch Cropping (PC) augmentation strategy is proposed to synthesize augmented samples with different noise and features. Extensive experiments on different public datasets as well as cross-database testing demonstrate the effectiveness of our method.
With the rapid development of face forgery technology, DeepFake videos have attracted widespread attention in digital media. Perpetrators heavily exploit such videos to spread disinformation and make misleading statements. Most existing DeepFake detection methods mainly focus on texture features, which can be affected by external fluctuations such as illumination and noise. In contrast, detection methods based on facial landmarks are more robust to external variables but lack sufficient detail. Therefore, how to effectively mine distinctive features in the spatial, temporal, and frequency domains, and fuse them with facial landmarks for forged video detection, remains an open question. To this end, we propose a Landmark-Enhanced Multimodal Graph Neural Network (LEM-GNN) built on multimodal information and the geometric features of facial landmarks. Specifically, at the frame level, we design a fusion mechanism to mine the joint representation of spatial and frequency-domain elements, while introducing geometric facial features to enhance the robustness of the model. At the video level, we first regard each frame of the video as a node in a graph and encode temporal information into the edges of the graph. Then, by applying the message-passing mechanism of a graph neural network (GNN), the multimodal features are effectively combined to obtain a comprehensive representation of video forgery. Extensive experiments show that our method consistently outperforms the state of the art (SOTA) on widely used benchmarks.
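The video-level construction described above, in which frames are nodes and temporal order is encoded into edges, can be sketched as a plain adjacency list. This is a toy illustration only; the paper's actual connectivity, edge features, and message passing are richer:

```python
def temporal_frame_graph(num_frames, window=1):
    """Build edges linking each frame-node to its temporal neighbours within
    `window` steps, mirroring 'each frame is a node, time is the edges'.
    Returns a list of (earlier_frame, later_frame) pairs."""
    edges = []
    for i in range(num_frames):
        for j in range(i + 1, min(num_frames, i + window + 1)):
            edges.append((i, j))
    return edges
```

With `window=1` this yields a simple chain over the frames; a larger window lets GNN message passing propagate forgery evidence across more distant frames in fewer rounds.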
Online media data, in the forms of images and videos, are becoming mainstream communication channels. However, recent advances in deep learning, particularly deep generative models, open the doors for producing perceptually convincing images and videos at a low cost, which not only poses a serious threat to the trustworthiness of digital information but also has severe societal implications. This motivates growing research interest in media tampering detection, i.e., using deep learning techniques to examine whether media data have been maliciously manipulated. Depending on the content of the targeted images, media forgery can be divided into image tampering and Deepfake techniques. The former typically moves or erases the visual elements in ordinary images, while the latter manipulates the expressions and even the identity of human faces. Accordingly, the means of defense include image tampering detection and Deepfake detection, which share a wide variety of properties. In this paper, we provide a comprehensive review of the current media tampering detection approaches, and discuss the challenges and trends in this field for future research.
Face forgery detection methods based on convolutional neural networks achieve remarkable results during training but struggle to maintain comparable performance at test time. We observe that detectors are more inclined to focus on content information than on artifact traces, which indicates that detectors are sensitive to the intrinsic bias of datasets and leads to severe overfitting. Motivated by this key observation, we design an easily embeddable disentanglement framework to remove content information, and further propose a Content Consistency Constraint (C2C) and a Global Representation Contrastive Constraint (GRCC) to enhance the independence of the disentangled features. Furthermore, we carefully construct two unbalanced datasets to investigate the influence of content bias. Extensive visualizations and experiments demonstrate that our framework can not only ignore the interference of content information but also guide detectors to mine suspicious artifact traces and achieve competitive performance.
Face forgery detection plays an important role in personal privacy and social security. With the development of adversarial generative models, high-quality forgery images have become more and more indistinguishable from real ones to humans. Existing methods always regard the forgery detection task as common binary or multi-label classification, and ignore exploring diverse multi-modality forgery image types, e.g. visible light spectrum and near-infrared scenarios. In this paper, we propose a novel Hierarchical Forgery Classifier for Multi-modality Face Forgery Detection (HFC-MFFD), which can effectively learn a robust patch-based hybrid domain representation to enhance forgery authentication in multiple-modality scenarios. The local spatial hybrid domain feature module is designed to explore strong discriminative forgery clues in both the image and frequency domains in local distinct face regions. Furthermore, a specific hierarchical face forgery classifier is proposed to alleviate the class imbalance problem and further boost detection performance. Experimental results on representative multi-modality face forgery datasets demonstrate the superior performance of the proposed HFC-MFFD compared with state-of-the-art algorithms. The source code and models are publicly available at https://github.com/EdWhites/HFC-MFFD.
Heart rate estimation based on remote photoplethysmography plays an important role in several specific scenarios, such as health monitoring and fatigue detection. Existing well-established methods are committed to averaging the predicted HRs of multiple overlapping video clips as the final result for a 30-second facial video. Although these methods, with hundreds of layers and thousands of channels, are highly accurate and robust, they require an enormous computational budget and a 30-second wait, which greatly limits the deployment of such algorithms at scale. Under these circumstances, we propose a Lightweight Fast Pulse Simulation Network (LFPS-Net), which pursues the best accuracy within a very limited computational and time budget and targets common mobile platforms such as smartphones. To suppress noise components and obtain a stable pulse within a short time, we design a multi-frequency modal signal fusion mechanism, which exploits time-frequency-domain analysis theory to separate multi-modal information from complex signals. It helps the subsequent network learn effective representations more easily without adding any parameters. In addition, we design an oversampling training strategy to address the problems caused by the unbalanced distribution of the datasets. For 30-second facial videos, our proposed method achieves the best results on most evaluation metrics for estimating heart rate or heart rate variability compared with the best prior work. The proposed method can still obtain very competitive results from short-time (~15-second) facial videos.
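The time-frequency-domain analysis these methods rely on reduces, at its simplest, to finding the dominant frequency of the pulse trace inside the plausible heart-rate band. A minimal stdlib sketch of that final estimation step follows; the naive DFT scan and the 0.7-4 Hz band are generic assumptions, not LFPS-Net's actual fusion mechanism:

```python
import math

def estimate_hr(ppg, fps, lo=0.7, hi=4.0):
    """Estimate heart rate (bpm) as the dominant frequency of a pulse trace
    inside the physiologically plausible band [lo, hi] Hz, via a naive DFT
    power spectrum (O(n^2); fine for short illustrative signals)."""
    n = len(ppg)
    mean = sum(ppg) / n
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fps / n
        if not (lo <= f <= hi):
            continue
        re = sum((ppg[t] - mean) * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum((ppg[t] - mean) * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
    return best_f * 60.0
```

For a 10-second, 30 fps clip, the frequency resolution is 0.1 Hz (6 bpm per bin), which is exactly why short-video HR estimation is hard and why the paper's short-time (~15 s) results are notable.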
Recently, face biometrics have attracted significant attention as a convenient alternative to traditional authentication systems. Consequently, detecting malicious attempts has been found to be of great importance, leading to face anti-spoofing (FAS), i.e., face presentation attack detection. In contrast to handcrafted features, deep feature learning and techniques have promised a dramatic increase in the accuracy of FAS systems, addressing key challenges in deploying such systems in real-world applications. Hence, a new research area dealing with the development of more generalized as well as accurate models is increasingly attracting the attention of the research community and industry. In this paper, we present a comprehensive survey of the literature related to deep-feature-based FAS methods since 2017. To shed light on this topic, a semantic taxonomy based on various features and learning methodologies is presented. Further, we cover the main public datasets for FAS in chronological order, their evolutionary progress, and the evaluation criteria (both intra-dataset and inter-dataset). Finally, we discuss the open research challenges and future directions.
Remote photoplethysmography (rPPG) enables non-contact heart rate (HR) estimation from facial videos, which offers significant convenience compared with traditional contact-based measurements. In the real-world long-term health monitoring scenario, the distance of the participants and their head movements usually vary over time, resulting in inaccurate rPPG measurement due to the varying face resolution and complex motion artifacts. Different from previous rPPG models designed for a constant distance between camera and participants, in this paper we propose two plug-and-play blocks (i.e., a physiological signal feature extraction block (PFE) and a temporal face alignment block (TFA)) to alleviate the degradation caused by changing distance and head motion. On one side, guided by representative-area information, PFE adaptively encodes arbitrary-resolution facial frames into fixed-resolution facial structure features. On the other side, leveraging the estimated optical flow, TFA is able to counteract the rPPG signal confusion caused by head movement, thus benefiting motion-robust rPPG signal recovery. Besides, we also train the model with a cross-resolution constraint using a two-stream dual-resolution framework, which further helps PFE learn resolution-robust facial rPPG features. Extensive experiments on three benchmark datasets (UBFC-rPPG, COHFACE and PURE) demonstrate the superior performance of the proposed method. One highlight is that with PFE and TFA, off-the-shelf spatio-temporal rPPG models can predict more robust rPPG signals under both varying face resolution and severe head movement scenarios. The codes are available at https://github.com/LJW-GIT/Arbitrary_Resolution_rPPG.
Deep-learning-based technologies such as deepfakes have been attracting widespread attention in both society and academia, particularly those used to synthesize forged face images. These automatic and professional-skill-free face manipulation technologies can be used to replace the face in an original image or video with any target object while maintaining the expression and demeanor. Since human faces are closely related to identity characteristics, maliciously disseminated identity-manipulated videos could trigger a crisis of public trust in the media and could even have serious political, social, and legal implications. To effectively detect manipulated videos, we focus on the position offset in the face blending process, resulting from the forced affine transformation of the normalized forged face. We introduce a method for detecting manipulated videos that is based on the trajectory of the facial region displacement. Specifically, we develop a virtual-anchor-based method for extracting the facial trajectory, which can robustly represent displacement information. This information is then used to construct a network for exposing multidimensional artifacts in the trajectory sequences of manipulated videos, based on dual-stream spatial-temporal graph attention and a gated recurrent unit backbone. Testing of our method on various manipulation datasets demonstrates that its accuracy and generalization ability are competitive with those of the leading detection methods.
Generated by various face manipulation techniques, face forgeries have drawn continuous attention due to security concerns. Previous works always formulate face forgery detection as a classification problem based on cross-entropy loss, which emphasizes category-level differences rather than the essential discrepancies between real and fake faces, limiting model generalization in unseen domains. To address this issue, we propose a novel face forgery detection framework, named Dual Contrastive Learning (DCL), which specially constructs positive and negative paired data and performs designed contrastive learning at different granularities to learn generalized feature representations. Specifically, combined with a hard sample selection strategy, Inter-Instance Contrastive Learning (Inter-ICL) is first proposed to promote task-related discriminative feature learning by specially constructing instance pairs. Moreover, to further explore the essential discrepancies, Intra-Instance Contrastive Learning (Intra-ICL) is introduced to focus on the local content inconsistencies prevalent in forged faces by constructing local-region pairs inside instances. Extensive experiments and visualizations on several datasets demonstrate the generalization of our method against state-of-the-art competitors.
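The contrastive objective behind such frameworks is, in its generic form, the InfoNCE loss: pull an anchor's positive embedding close while pushing negatives away. A minimal stdlib sketch of that generic loss follows; DCL's actual pair construction and granularities are not reproduced here, and nonzero embeddings and the temperature value are assumptions:

```python
import math

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss for one anchor: negative log-softmax of the positive's
    cosine similarity among positive + negatives, sharpened by temperature
    tau. Embeddings are plain lists of floats and must be nonzero."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    logits = [cos(anchor, positive) / tau] + [cos(anchor, n) / tau for n in negatives]
    m = max(logits)  # log-sum-exp trick for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]  # -log softmax of the positive pair
```

When the anchor and positive are aligned and the negatives are orthogonal, the loss is near zero; swapping the roles makes it large, which is exactly the gradient signal that shapes a generalized real-vs-fake embedding space.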
Remote photoplethysmography (rPPG), which aims at measuring heart activity and physiological signals from facial video without any contact, has great potential in many applications (e.g., remote healthcare and affective computing). Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited spatio-temporal receptive fields, which neglect the long-range spatio-temporal perception and interaction needed for rPPG modeling. In this paper, we propose PhysFormer, an end-to-end video-transformer-based architecture, to adaptively aggregate both local and global spatio-temporal features for rPPG representation enhancement. As key modules in PhysFormer, the temporal difference transformers first enhance the quasi-periodic rPPG features with temporal-difference-guided global attention, and then refine the local spatio-temporal representation against interference. Furthermore, we also propose label distribution learning and a curriculum-learning-inspired dynamic constraint in the frequency domain, which provide elaborate supervision for PhysFormer and alleviate overfitting. Comprehensive experiments are performed on four benchmark datasets to show our superior performance in both intra- and cross-dataset testing. One highlight is that, unlike most transformer networks pretrained on large-scale datasets, the proposed PhysFormer can be trained from scratch on rPPG datasets, which makes it promising as a novel transformer baseline for the rPPG community. The code will be released at https://github.com/ZitongYu/PhysFormer.
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite the recent efforts of several spontaneous ME datasets to alleviate this problem, the available data remain scarce. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced by 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments to objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate the research of automatic MER, and provide a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
Micro-expressions (MEs) are involuntary facial movements that reveal people's hidden feelings in high-stakes situations, and they have practical importance in medical treatment, national security, interrogations, and many human-computer interaction systems. Early MER (micro-expression recognition) methods were mainly based on traditional appearance and geometric features. Recently, with the success of deep learning (DL) in various fields, neural networks have received increasing interest in MER. Unlike macro-expressions, MEs are spontaneous, subtle, and rapid facial movements, which makes data collection difficult and results in small-scale datasets. Because of these characteristics of MEs, DL-based MER becomes challenging. To date, various DL approaches have been proposed to solve these problems and improve MER performance. In this survey, we provide a comprehensive review of deep micro-expression recognition (MER), including datasets, deep MER pipelines, and the benchmarking of the most influential methods. This survey defines a new taxonomy for the field, encompassing all aspects of DL-based MER. For each aspect, the basic approaches and advanced developments are summarized and discussed. In addition, we conclude with the remaining challenges and potential directions for the design of robust deep MER systems. To the best of our knowledge, this is the first survey of deep MER methods, and it can serve as a reference point for future MER research.
Deep learning has enabled realistic face manipulation (i.e., deepfake), which poses significant concerns over the integrity of the media in circulation. Most existing deep learning techniques for deepfake detection can achieve promising performance in the intra-dataset evaluation setting (i.e., training and testing on the same dataset), but are unable to perform satisfactorily in the inter-dataset evaluation setting (i.e., training on one dataset and testing on another). Most of the previous methods use the backbone network to extract global features for making predictions and only employ binary supervision (i.e., indicating whether the training instances are fake or authentic) to train the network. Classification based merely on the learning of global features often leads to weak generalizability to unseen manipulation methods. In addition, the reconstruction task can improve the learned representations. In this paper, we introduce a novel approach for deepfake detection, which considers the reconstruction and classification tasks simultaneously to address these problems. This method shares the information learned by one task with the other, which focuses on a different aspect that other existing works rarely consider and hence boosts the overall performance. In particular, we design a two-branch Convolutional AutoEncoder (CAE), in which the Convolutional Encoder used to compress the feature map into the latent representation is shared by both branches. Then the latent representation of the input data is fed to a simple classifier and the unsupervised reconstruction component simultaneously. Our network is trained end-to-end. Experiments demonstrate that our method achieves state-of-the-art performance on three commonly-used datasets, particularly in the cross-dataset evaluation setting.
Recent advances in supervised deep learning methods have enabled remote measurement of photoplethysmography-based physiological signals using facial videos. However, the performance of these supervised methods depends on the availability of large labeled datasets. Contrastive learning, as a self-supervised method, has recently achieved state-of-the-art performance in learning representative data features by maximizing the mutual information between different augmented views. However, existing data augmentation techniques for contrastive learning are not designed to learn physiological signals from videos and often fail when there is complex noise and subtle, periodic color or shape variations between video frames. To address these problems, we present a novel self-supervised spatio-temporal learning framework for remote physiological signal representation learning, where labeled training data are lacking. First, we propose a landmark-based spatial augmentation that splits the face into several informative parts based on the Shafer dichromatic reflection model to characterize subtle skin-color fluctuations. We also formulate a sparsity-based temporal augmentation exploiting the Nyquist-Shannon sampling theorem to effectively capture periodic temporal changes by modeling physiological signal features. Furthermore, we introduce a constrained spatio-temporal loss that generates pseudo-labels for the augmented video clips; it is used to regulate the training process and handle complex noise. We evaluated our framework on three public datasets and demonstrated superior performance over other self-supervised methods, achieving competitive accuracy compared with state-of-the-art supervised methods.
Blood pressure (BP) monitoring is vital for daily healthcare, especially for cardiovascular diseases. However, BP values are mainly acquired through contact sensing methods, which are inconvenient and unfriendly for BP measurement. Therefore, we propose an efficient end-to-end network for estimating BP values from a facial video to achieve remote BP measurement in daily life. In this study, we first derive a spatial-temporal map from a short-term (~15 s) facial video. According to the spatial-temporal map, we then regress the BP range through a designed blood pressure classifier and simultaneously calculate the specific value with a blood pressure calculator in each BP range. In addition, we develop an innovative oversampling training strategy to address the problem of unbalanced data distribution. Finally, we train the proposed network on a private dataset, ASPD, and test it on the popular dataset MMSE-HR. As a result, the proposed network achieves state-of-the-art MAEs of 12.35 mmHg for systolic and 9.5 mmHg for diastolic blood pressure measurement, which is better than recent works. It is concluded that the proposed method has great potential for camera-based BP monitoring in real-world applications.
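The classify-then-regress structure described above (first pick a BP range, then compute a specific value within it) can be sketched with stand-in components. Everything below is illustrative: the coarse feature-to-mmHg mapping, the range bins, and the per-range linear "calculators" are assumptions standing in for the paper's learned classifier and regressors:

```python
def estimate_bp(feature, bins, regressors):
    """Two-stage BP estimate in the classify-then-regress spirit:
    stage 1 (the 'classifier') picks the range whose midpoint is closest
    to a coarse global guess; stage 2 applies that range's own linear
    regressor (w, b) and clamps the result into the chosen range.
    bins: list of (low, high) mmHg ranges; regressors: per-range (w, b)."""
    coarse = 60.0 + 80.0 * feature  # assumed coarse feature-to-mmHg mapping
    idx = min(range(len(bins)),
              key=lambda i: abs(coarse - (bins[i][0] + bins[i][1]) / 2.0))
    w, b = regressors[idx]
    lo, hi = bins[idx]
    return max(lo, min(hi, w * feature + b))
```

The design point the paper exploits is that a regressor specialized to a narrow BP range faces an easier problem than one global regressor, at the cost of needing the coarse classifier to be right; the clamp keeps stage 2 consistent with stage 1's decision.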
Figure 1: FaceForensics++ is a dataset of facial forgeries that enables researchers to train deep-learning-based approaches in a supervised fashion. The dataset contains manipulations created with four state-of-the-art methods, namely, Face2Face, FaceSwap, DeepFakes, and NeuralTextures.