Distinguishing computer-generated (CG) images from natural photographic (PG) images is crucial for verifying the authenticity and originality of digital images. However, recent cutting-edge generation methods enable very high synthesis quality in CG images, making this challenging task even more intractable. To address this issue, a joint learning strategy with deep texture and high-frequency features is proposed for CG image detection. We first formulate and analyze in depth the different acquisition processes of CG and PG images. Based on the finding that the multiple distinct modules involved in image acquisition cause inconsistent sensitivities to convolutional neural network (CNN)-based rendering across an image, we propose a deep texture rendering module for texture difference enhancement and discriminative texture representation. Specifically, a semantic segmentation map is generated to guide an affine transformation operation, which is used to recover the texture in different regions of the input image. Then, the combination of the original image and the high-frequency components of the original and rendered images is fed into a multi-branch neural network equipped with attention mechanisms, which refines intermediate features and facilitates trace exploration in the spatial and channel dimensions, respectively. Extensive experiments on two public datasets and a newly constructed dataset with more realistic and diverse images show that the proposed approach outperforms existing methods by a clear margin. Moreover, the results also demonstrate the robustness and generalization ability of the proposed approach to post-processing operations and images generated by generative adversarial networks (GANs).
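The multi-branch input described above can be illustrated with a simple Gaussian high-pass. This is a minimal PyTorch sketch under stated assumptions, not the authors' pipeline: the choice of a Gaussian residual as the high-frequency extractor, the kernel size, and the placeholder tensors are all illustrative.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """2-D Gaussian kernel built as an outer product of 1-D Gaussians."""
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return k / k.sum()

def high_frequency(x: torch.Tensor, size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """High-frequency residual: the input minus its Gaussian-blurred version."""
    c = x.shape[1]
    k = gaussian_kernel(size, sigma).to(x.device).expand(c, 1, size, size).contiguous()
    low = F.conv2d(x, k, padding=size // 2, groups=c)  # depthwise blur per channel
    return x - low

# Branch inputs as described: the original image plus the high-frequency
# components of the original and texture-rendered images (placeholders here).
img = torch.rand(2, 3, 224, 224)
rendered = torch.rand(2, 3, 224, 224)  # stand-in for the rendering module output
branches = [img, high_frequency(img), high_frequency(rendered)]
```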
With the development of computer graphics technology, images synthesized by computer software are becoming increasingly indistinguishable from photographs. Although computer graphics bring grand visual feasts in games and movies, they can also be used by ill-intentioned people to manipulate public opinion and trigger political crises or social unrest. Therefore, distinguishing computer-generated graphics (CG) from photographs (PG) has become an important topic in digital image forensics. This paper proposes a dual-stream convolutional neural network based on channel joining and SoftPool. The proposed network architecture includes a residual module for extracting image noise information and a joint channel information extraction module for capturing the shallow semantic information of the image. In addition, we design a residual structure to enhance feature extraction and reduce the loss of information in the residual stream. The joint channel information extraction module obtains the shallow semantic information of the input image, which serves as an information supplement to the residual module. The whole network uses SoftPool to reduce the information loss of image downsampling. Finally, we fuse the two streams to obtain the classification result. Experiments on SPL2018 and DsTok show that the proposed method outperforms existing methods, particularly on the DsTok dataset; for example, the performance of our model surpasses the state of the art by 3%.
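The SoftPool operator the network downsamples with can be written directly from its published formulation (Stergiou et al.): each activation in a pooling window is weighted by its softmax, so larger activations contribute more without discarding the rest. A minimal sketch; the paper's placement of SoftPool within the two streams is not reproduced here.

```python
import torch
import torch.nn.functional as F

def soft_pool2d(x: torch.Tensor, kernel_size: int = 2, stride: int = 2) -> torch.Tensor:
    """SoftPool: softmax-weighted pooling, sum(exp(x) * x) / sum(exp(x)) per window."""
    w = torch.exp(x)  # assumes moderate activation magnitudes (no overflow guard)
    # avg_pool(w * x) / avg_pool(w) equals the softmax-weighted sum per window.
    return F.avg_pool2d(w * x, kernel_size, stride) / \
           F.avg_pool2d(w, kernel_size, stride).clamp_min(1e-6)

feat = torch.randn(1, 64, 32, 32)
pooled = soft_pool2d(feat)  # -> (1, 64, 16, 16)
```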
Face forgery detection plays an important role in personal privacy and social security. With the development of adversarial generative models, high-quality forgery images become more and more indistinguishable from real ones to humans. Existing methods always regard the forgery detection task as a common binary or multi-label classification problem and ignore diverse multi-modality forgery image types, e.g. visible light spectrum and near-infrared scenarios. In this paper, we propose a novel Hierarchical Forgery Classifier for Multi-modality Face Forgery Detection (HFC-MFFD), which can effectively learn robust patch-based hybrid domain representations to enhance forgery authentication in multiple-modality scenarios. The local spatial hybrid domain feature module is designed to explore strong discriminative forgery clues both in the image and frequency domain in local distinct face regions. Furthermore, a specific hierarchical face forgery classifier is proposed to alleviate the class imbalance problem and further boost detection performance. Experimental results on representative multi-modality face forgery datasets demonstrate the superior performance of the proposed HFC-MFFD compared with state-of-the-art algorithms. The source code and models are publicly available at https://github.com/EdWhites/HFC-MFFD.
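The hierarchical classifier idea, a coarse real-vs-forged decision refined by a forgery-type decision, can be sketched as a two-level head. The feature dimension, number of forgery types, and loss weighting below are assumptions, not the HFC-MFFD design.

```python
import torch
import torch.nn as nn

class HierarchicalHead(nn.Module):
    """Two-level head over a shared feature: coarse real/forged, then forgery type."""

    def __init__(self, feat_dim: int, num_forgery_types: int):
        super().__init__()
        self.coarse = nn.Linear(feat_dim, 2)                # real vs. forged
        self.fine = nn.Linear(feat_dim, num_forgery_types)  # which forgery type

    def forward(self, feat: torch.Tensor):
        return self.coarse(feat), self.fine(feat)

head = HierarchicalHead(feat_dim=512, num_forgery_types=4)
coarse, fine = head(torch.randn(8, 512))
# Training would sum both losses; class weights in cross_entropy are one common
# way to counter the class imbalance the paper targets.
loss = nn.functional.cross_entropy(coarse, torch.randint(0, 2, (8,))) \
     + nn.functional.cross_entropy(fine, torch.randint(0, 4, (8,)))
```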
Due to the limited energy captured by hyperspectral camera sensors under poor illumination, low-light hyperspectral images (HSIs) usually suffer from low visibility, spectral distortion, and various kinds of noise. A range of HSI restoration methods have been developed, yet their effectiveness in enhancing low-light HSIs is limited. This work focuses on the low-light HSI enhancement task, which aims to reveal the spatial-spectral information hidden in dark regions. To facilitate the development of low-light HSI processing, we collect a low-light HSI (LHSI) dataset of both indoor and outdoor scenes. Based on Laplacian pyramid decomposition and reconstruction, we develop an end-to-end data-driven low-light HSI enhancement (HSIE) approach trained on the LHSI dataset. With the observation that illumination is related to the low-frequency component of an HSI while textural details are closely correlated with the high-frequency component, the proposed HSIE is designed with two branches. An illumination enhancement branch is adopted to enlighten the low-frequency component at reduced resolution. A high-frequency refinement branch is utilized to refine the high-frequency component via a predicted mask. In addition, to improve information flow and boost performance, we introduce an effective channel attention block (CAB) with residual dense connections, which serves as the basic block of the illumination enhancement branch. Experimental results on the LHSI dataset demonstrate the effectiveness and efficiency of HSIE in both quantitative assessment measures and visual effects. According to the classification performance on the remote sensing Indian Pines dataset, downstream tasks benefit from the enhanced HSI. The dataset and code are available at: \href{https://github.com/guanguanboy/HSIE}{https://github.com/guanguanboy/HSIE}.
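The Laplacian pyramid split that motivates the two branches, low frequency carrying illumination and high frequency carrying texture, can be sketched as follows. Bilinear resampling stands in for the classic blur-and-subsample step, and the 31-band cube is a placeholder.

```python
import torch
import torch.nn.functional as F

def laplacian_pyramid(x: torch.Tensor, levels: int = 3):
    """Split (B, C, H, W) into per-scale high-frequency bands plus a low-frequency base."""
    bands, cur = [], x
    for _ in range(levels):
        down = F.interpolate(cur, scale_factor=0.5, mode="bilinear", align_corners=False)
        up = F.interpolate(down, size=cur.shape[-2:], mode="bilinear", align_corners=False)
        bands.append(cur - up)  # high-frequency detail at this scale
        cur = down
    return bands, cur           # cur: low-frequency (illumination-related) component

def reconstruct(bands, base):
    cur = base
    for band in reversed(bands):
        cur = F.interpolate(cur, size=band.shape[-2:], mode="bilinear",
                            align_corners=False) + band
    return cur

hsi = torch.rand(1, 31, 128, 128)  # placeholder 31-band hyperspectral cube
bands, base = laplacian_pyramid(hsi)
assert torch.allclose(reconstruct(bands, base), hsi, atol=1e-5)  # lossless by construction
```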
Deep learning-based methods have achieved significant performance for image defogging. However, existing methods are mainly developed for land scenes and perform poorly when dealing with overwater foggy images, since overwater scenes typically contain large expanses of sky and water. In this work, we propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging of images with overwater scenes. To promote the recovery of the objects on water in the image, two loss functions are exploited for the network where a prior map is designed to invert the dark channel and the min-max normalization is used to suppress the sky and emphasize objects. However, due to the unpaired training set, the network may learn an under-constrained domain mapping from foggy to fog-free image, leading to artifacts and loss of details. Thus, we propose an intuitive Upscaling Inception Module (UIM) and a Long-range Residual Coarse-to-fine framework (LRC) to mitigate this issue. Extensive experiments on qualitative and quantitative comparisons demonstrate that the proposed method outperforms the state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
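The prior map, an inverted dark channel followed by min-max normalization, can be sketched directly. The patch size and how PG-CycleGAN consumes the map inside its losses are assumptions here.

```python
import torch
import torch.nn.functional as F

def prior_map(img: torch.Tensor, patch: int = 15) -> torch.Tensor:
    """Inverted dark channel, min-max normalized to [0, 1].

    img: (B, 3, H, W) in [0, 1]. Sky/haze regions have a bright dark channel,
    so inverting it suppresses the sky and emphasizes objects on the water.
    """
    dark = img.min(dim=1, keepdim=True).values                        # min over channels
    dark = -F.max_pool2d(-dark, patch, stride=1, padding=patch // 2)  # local min filter
    inv = 1.0 - dark
    flat = inv.flatten(1)
    lo = flat.min(dim=1).values.view(-1, 1, 1, 1)
    hi = flat.max(dim=1).values.view(-1, 1, 1, 1)
    return (inv - lo) / (hi - lo + 1e-6)                              # min-max normalize

prior = prior_map(torch.rand(2, 3, 256, 256))
```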
Online media data, in the forms of images and videos, are becoming mainstream communication channels. However, recent advances in deep learning, particularly deep generative models, open the doors for producing perceptually convincing images and videos at a low cost, which not only poses a serious threat to the trustworthiness of digital information but also has severe societal implications. This motivates a growing research interest in media tampering detection, i.e., using deep learning techniques to examine whether media data have been maliciously manipulated. Depending on the content of the targeted images, media forgery can be divided into image tampering and Deepfake techniques. The former typically moves or erases the visual elements in ordinary images, while the latter manipulates the expressions and even the identity of human faces. Accordingly, the means of defense include image tampering detection and Deepfake detection, which share a wide variety of properties. In this paper, we provide a comprehensive review of current media tampering detection approaches and discuss the challenges and trends in this field for future research.
Digital security has been an active area of research interest due to the rapid adoption of internet infrastructure and the increasing popularity of social media and digital cameras. Due to inherent differences in the working principles used to generate an image, different camera brands leave behind different intrinsic processing noises which can be used to identify the camera brand. In the last decade, many signal processing and deep learning-based methods have been proposed to identify and isolate this noise from the scene details in an image to detect the source camera brand. One prominent solution is to utilize a hierarchical classification system rather than the traditional single-classifier approach. Different individual networks are used for brand-level and model-level source camera identification. This approach allows for better scaling and requires minimal modifications for adding a new camera brand/model to the solution. However, using different full-fledged networks for both brand- and model-level classification substantially increases memory consumption and training complexity. Moreover, low-level features extracted from the different networks' initial layers often coincide, resulting in redundant weights. To mitigate the training and memory complexity, we propose a classifier-block-level hierarchical system instead of a network-level one for source camera model classification. Our proposed approach not only results in significantly fewer parameters but also retains the capability to add a new camera model with minimal modification. Thorough experimentation on the publicly available Dresden dataset shows that our proposed approach can achieve the same level of state-of-the-art performance but requires fewer parameters compared to a state-of-the-art network-level hierarchical-based system.
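The classifier-block-level hierarchy can be sketched as one shared feature extractor with small per-level heads, so adding a camera model only adds one small block. The backbone, feature size, brand list, and the route-by-argmax rule are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BlockLevelHierarchy(nn.Module):
    """Shared trunk + brand head + one lightweight model-level head per brand."""

    def __init__(self, feat_dim: int, brands: dict[str, int]):
        super().__init__()
        # Placeholder trunk; a real system would use a full CNN here.
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.brand_head = nn.Linear(feat_dim, len(brands))
        self.model_heads = nn.ModuleDict(
            {b: nn.Linear(feat_dim, n) for b, n in brands.items()})
        self.brand_names = list(brands)

    def forward(self, x: torch.Tensor):
        feat = self.backbone(x)
        brand_logits = self.brand_head(feat)
        brand = self.brand_names[brand_logits.argmax(1)[0]]  # route (batch of 1)
        return brand_logits, self.model_heads[brand](feat)

net = BlockLevelHierarchy(256, {"Canon": 3, "Nikon": 4, "Sony": 2})
brand_logits, model_logits = net(torch.rand(1, 3, 64, 64))
```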
Image manipulation localization aims at distinguishing forged regions from the whole test image. Although many outstanding prior works have been proposed for this task, two issues still need further study: 1) how to fuse diverse types of features with forgery clues; 2) how to progressively integrate multistage features for better localization performance. In this paper, we propose a tripartite progressive integration network (TriPINet) for end-to-end image manipulation localization. First, we extract both visual perception information, e.g., RGB input images, and visually imperceptible features, e.g., frequency and noise traces, for forensic feature learning. Second, we develop a guided cross-modality dual-attention (gCMDA) module to fuse the different types of forgery clues. Third, we design a set of progressive integration squeeze-and-excitation (PI-SE) modules to improve localization performance by appropriately incorporating multiscale features in the decoder. Extensive experiments are conducted to compare our method with state-of-the-art image forensics approaches. The proposed TriPINet obtains competitive results on several benchmark datasets.
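The PI-SE modules build on squeeze-and-excitation; the standard SE block they extend looks as follows (the progressive multiscale wiring in the decoder is not reproduced).

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global-average squeeze, bottleneck-MLP excitation."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze to (B, C) channel descriptors
        return x * w.view(b, c, 1, 1)     # excite: channel-wise reweighting

out = SEBlock(64)(torch.randn(2, 64, 32, 32))
```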
Hyperspectral imaging enables versatile applications for its competence in capturing abundant spatial and spectral information, which is crucial for identifying substances. However, the devices for acquiring hyperspectral images are expensive and complicated. Therefore, many alternative spectral imaging methods have been proposed that reconstruct hyperspectral information directly from lower-cost, more widely available RGB images. We investigate these state-of-the-art spectral reconstruction methods from RGB images in detail. A systematic study and comparison of 25 methods reveals that, despite their lower speed, most data-driven deep learning methods outperform prior approaches in terms of reconstruction accuracy and quality. This comprehensive review can serve as a fruitful reference source for peer researchers, thus further inspiring future development directions in related fields.
With the rapid development of face forgery techniques, DeepFake videos have attracted widespread attention in digital media. Perpetrators heavily exploit such videos to spread disinformation and make misleading statements. Most existing DeepFake detection methods focus mainly on texture features, which can be affected by external fluctuations such as illumination and noise. Detection methods based on facial landmarks, in contrast, are more robust to external variables but lack sufficient detail. How to effectively mine distinctive features in the spatial, temporal, and frequency domains and fuse them with facial landmarks for forged-video detection therefore remains an open question. To this end, we propose a Landmark-Enhanced Multimodal Graph Neural Network (LEM-GNN) based on multimodal information and the geometric features of facial landmarks. Specifically, at the frame level, we design a fusion mechanism to mine joint representations of spatial and frequency-domain elements, while introducing geometric facial features to enhance the robustness of the model. At the video level, we regard every frame of a video as a node in a graph and encode temporal information into the edges of the graph. Then, by applying the message passing mechanism of a graph neural network (GNN), the multimodal features are effectively combined to obtain a comprehensive representation of the video forgery. Extensive experiments show that our method consistently outperforms the state of the art (SOTA) on widely used benchmarks.
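The video-level construction, frames as nodes with temporal edges, can be sketched with plain adjacency-matrix message passing. This neighbor-averaging scheme is a generic stand-in for the paper's GNN, and the temporal-chain adjacency is an assumption.

```python
import torch

def frame_graph_message_passing(frame_feats: torch.Tensor, hops: int = 2) -> torch.Tensor:
    """frame_feats: (T, D) per-frame multimodal features -> (D,) video representation."""
    t = frame_feats.shape[0]
    adj = torch.zeros(t, t)
    idx = torch.arange(t - 1)
    adj[idx, idx + 1] = 1.0                   # edge: frame t -> t+1
    adj[idx + 1, idx] = 1.0                   # edge: frame t+1 -> t
    adj = adj + torch.eye(t)                  # self-loops
    adj = adj / adj.sum(dim=1, keepdim=True)  # row-normalize the messages
    h = frame_feats
    for _ in range(hops):
        h = adj @ h                           # each hop averages in neighbor features
    return h.mean(dim=0)                      # pool nodes into a video-level vector

video_repr = frame_graph_message_passing(torch.randn(16, 256))
```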
The acquisition and evaluation of pavement surface data play an essential role in pavement condition assessment. In this paper, an efficient end-to-end network for automatic pavement crack segmentation, called RHA-Net, is proposed to improve pavement crack segmentation accuracy. RHA-Net is built by integrating residual blocks (ResBlocks) and hybrid attention blocks into an encoder-decoder architecture. The ResBlocks are designed to improve the ability of RHA-Net to extract high-level abstract features. The hybrid attention blocks are designed to fuse low-level features with high-level features to help the model focus on the correct channels and crack regions, thereby improving the feature representation ability of RHA-Net. An image dataset containing 789 pavement crack images collected by a self-designed mobile robot is constructed and used to train and evaluate the proposed model. A comprehensive ablation study validates the effectiveness of adding the residual blocks and the hybrid attention mechanism, and the proposed model outperforms other state-of-the-art networks. Furthermore, a lightweight version of the model, generated by introducing depthwise separable convolutions, achieves better performance and faster processing speed with only 1/30 of the number of parameters of U-Net. The developed system can segment pavement cracks in real time on the embedded device Jetson TX2 (25 FPS). A video taken in the real-time experiments is released at https://youtu.be/3xiogk0fig4.
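The depthwise separable convolution behind the lightweight variant replaces a standard 3x3 convolution with a depthwise 3x3 plus a pointwise 1x1; at 64 channels this is roughly a 7.7x parameter reduction, as the count below shows.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv (per-channel) followed by a pointwise 1x1 conv (mixing)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

std = nn.Conv2d(64, 64, 3, padding=1)
sep = DepthwiseSeparableConv(64, 64)
print(sum(p.numel() for p in std.parameters()))  # 36928
print(sum(p.numel() for p in sep.parameters()))  # 4800
```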
With the rapid development of deep generative models (such as Generative Adversarial Networks and Auto-encoders), AI-synthesized images of the human face are now of such high quality that humans can hardly distinguish them from pristine ones. Although existing detection methods have shown high performance in specific evaluation settings, e.g., on images from seen models or on images without real-world post-processing, they tend to suffer serious performance degradation in real-world scenarios where testing images can be generated by more powerful generation models or combined with various post-processing operations. To address this issue, we propose a Global and Local Feature Fusion (GLFF) framework to learn rich and discriminative representations by combining multi-scale global features from the whole image with refined local features from informative patches for face forgery detection. GLFF fuses information from two branches: the global branch to extract multi-scale semantic features and the local branch to select informative patches for detailed local artifact extraction. Due to the lack of a face forgery dataset simulating real-world applications for evaluation, we further create a challenging face forgery dataset, named DeepFakeFaceForensics (DF^3), which contains 6 state-of-the-art generation models and a variety of post-processing techniques to approach the real-world scenarios. Experimental results demonstrate the superiority of our method over state-of-the-art methods on the proposed DF^3 dataset and three other open-source datasets.
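The local branch's patch selection can be illustrated with a simple informativeness proxy; GLFF learns this selection, so the local-variance score below is an assumed stand-in.

```python
import torch

def select_informative_patches(img: torch.Tensor, patch: int = 32, k: int = 4) -> torch.Tensor:
    """Rank non-overlapping patches of (C, H, W) by local variance, keep the top-k."""
    c = img.shape[0]
    tiles = img.unfold(1, patch, patch).unfold(2, patch, patch)  # (C, nH, nW, p, p)
    tiles = tiles.permute(1, 2, 0, 3, 4).reshape(-1, c, patch, patch)
    scores = tiles.var(dim=(1, 2, 3))                            # proxy informativeness
    return tiles[scores.topk(k).indices]                         # (k, C, p, p)

local_patches = select_informative_patches(torch.rand(3, 256, 256))
# The global branch consumes the whole image; features from these patches feed
# the local branch, and both are fused for the final real/fake decision.
```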
Automatic fingerprint recognition systems (AFRSs) are vulnerable to malicious attacks owing to the diversity of presentation attack materials. It is therefore of great importance to propose effective fingerprint presentation attack detection (PAD) methods for the security and reliability of AFRSs. However, current PAD methods often exhibit poor robustness under new attack materials or sensor settings. This paper thus proposes a novel channel-wise feature denoising PAD (CFD-PAD) method that handles the redundant "noise" information ignored in previous works. The proposed method learns important features of fingerprint images by weighting the importance of each channel and distinguishing discriminative channels from "noise" channels. The propagation of the "noise" channels is then suppressed in the feature map to reduce interference. Specifically, a PA-Adaptation loss is designed to constrain the feature distribution, making the feature distribution of live fingerprints more aggregated and that of spoof fingerprints more dispersed. Experimental results evaluated on LivDet 2017 show that the proposed CFD-PAD achieves a 2.53% average classification error (ACE) and a 93.83% true detection rate when the false detection rate equals 1.0% (TDR@FDR=1%), significantly outperforming the best single-model-based methods in terms of both ACE (2.53% vs. 4.56%) and TDR@FDR=1%, which proves the effectiveness of the proposed method. And although we achieve results comparable to the state-of-the-art multiple-model-based method, our method can still raise TDR@FDR=1% from 91.19% to 93.83%. In addition, our model is simpler, lighter, and more efficient than multiple-model-based methods, achieving a 74.76% reduction in time consumption. The code will be made publicly available.
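The channel-wise denoising idea, score channels and stop "noise" channels from propagating, can be sketched as below; CFD-PAD learns its channel weights, whereas this toy scores by mean activation magnitude.

```python
import torch

def suppress_noise_channels(feat: torch.Tensor, keep_ratio: float = 0.75) -> torch.Tensor:
    """Zero out the lowest-scoring channels of a (B, C, H, W) feature map."""
    c = feat.shape[1]
    importance = feat.abs().mean(dim=(0, 2, 3))      # one heuristic score per channel
    keep = importance.topk(max(1, int(c * keep_ratio))).indices
    mask = torch.zeros(c, device=feat.device)
    mask[keep] = 1.0                                  # 1 = discriminative, 0 = "noise"
    return feat * mask.view(1, c, 1, 1)               # suppress noise-channel propagation

denoised = suppress_noise_channels(torch.randn(4, 128, 14, 14))
```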
Due to wavelength-dependent light attenuation, refraction, and scattering, underwater images usually suffer from color distortion and blurred details. However, because the number of undistorted images available as references is limited, training deep enhancement models for diverse degradation types is quite difficult. To boost the performance of data-driven approaches, it is essential to establish more effective learning mechanisms that mine richer supervision from limited training example resources. In this paper, we propose a novel underwater image enhancement network, called SGUIE-Net, in which we introduce semantic information as high-level guidance shared across different images with common semantic regions. Accordingly, we propose a semantic region-wise enhancement module that perceives the degradation of different semantic regions at multiple scales and feeds this back to the global attention features extracted at the original scale. This strategy helps achieve robust and visually pleasing enhancement for different semantic objects, thanks to the guidance of semantic information for differentiated enhancement. More importantly, for degradation types that are uncommon in the training sample distribution, the guidance connects them with already well-learned types according to their semantic relevance. Extensive experiments on public datasets and our proposed dataset demonstrate the impressive performance of SGUIE-Net. The code and proposed dataset are available at: https://trentqq.github.io/SGUIE-Net.html
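Region-wise enhancement guided by a segmentation map can be reduced to a toy: apply a different adjustment per semantic label. SGUIE-Net's module is learned and multi-scale, so the labels and per-region gains below are purely illustrative.

```python
import torch

def region_wise_enhance(img: torch.Tensor, seg: torch.Tensor,
                        gains: dict[int, float]) -> torch.Tensor:
    """img: (3, H, W) in [0, 1]; seg: (H, W) integer region labels."""
    out = img.clone()
    for label, gain in gains.items():
        mask = (seg == label).unsqueeze(0)              # broadcast over channels
        out = torch.where(mask, (img * gain).clamp(0, 1), out)
    return out

img = torch.rand(3, 64, 64)
seg = (torch.rand(64, 64) > 0.5).long()                 # toy two-region map
enhanced = region_wise_enhance(img, seg, {0: 1.1, 1: 1.4})
```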
This paper aims for a comprehensive study of facial sketch synthesis (FSS). However, owing to the high cost of obtaining hand-drawn sketch datasets, there has been no complete benchmark for assessing the development of FSS algorithms over the last decade. We therefore first introduce a high-quality dataset for FSS, named FS2K, which consists of 2,104 image-sketch pairs spanning three types of sketch styles, image backgrounds, lighting conditions, skin colors, and facial attributes. FS2K differs from previous FSS datasets in difficulty, diversity, and scalability, and should thus facilitate the progress of FSS research. Second, we present the largest-scale FSS investigation to date by reviewing 139 classical methods, including 24 handcrafted-feature-based facial sketch synthesis approaches, 37 general neural style transfer methods, 43 deep image-to-image translation methods, and 35 image-to-sketch approaches. In addition, we elaborate comprehensive experiments on 19 existing cutting-edge models. Third, we present a simple baseline for FSS, named FSGAN. With only two straightforward components, namely facial-aware masking and style-vector expansion, FSGAN surpasses the performance of all previous state-of-the-art models on the proposed FS2K dataset by a large margin. Finally, we summarize the lessons learned over the past years and point out several unsolved challenges. Our open-source code is available at https://github.com/DengPingFan/FSGAN.
With the development of convolutional neural networks, hundreds of deep learning-based dehazing methods have been proposed. In this paper, we provide a comprehensive survey on supervised, semi-supervised, and unsupervised single image dehazing. We first discuss the physical model, datasets, network modules, loss functions, and evaluation metrics that are commonly used. Then, the main contributions of various dehazing algorithms are categorized and summarized. Further, quantitative and qualitative experiments of various baseline methods are carried out. Finally, the unsolved issues and challenges that can inspire the future research are pointed out. A collection of useful dehazing materials is available at \url{https://github.com/Xiaofeng-life/AwesomeDehazing}.
Adversarial learning-based image dehazing methods have been extensively studied in computer vision due to their remarkable performance. However, most existing methods offer limited dehazing quality in real scenarios, because they are trained on pairs of clear images and synthesized hazy images of the same scene. In addition, they have limitations in preserving vivid colors and rich textural details. To address these issues, we develop a novel generative adversarial network, called the Holistic Attention-Fusion Adversarial Network (HAAN), for single image dehazing. HAAN consists of a Fog2Fogfree block and a Fogfree2Fog block. In each block, there are three learning-based modules, namely fog removal, color-texture recovery, and fog synthesis, that constrain each other to generate high-quality images. HAAN is designed to exploit the self-similarity of texture and structure information by learning the holistic channel-spatial feature correlations between a hazy image and several images derived from it. Moreover, in the fog synthesis module, we utilize the atmospheric scattering model to guide it, focusing on atmospheric light optimization with a novel sky segmentation network to improve generation quality. Extensive experiments on both synthetic and real-world datasets show that HAAN outperforms state-of-the-art dehazing methods in terms of quantitative accuracy and subjective visual quality.
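The atmospheric scattering model that guides the fog synthesis module is I(x) = J(x)t(x) + A(1 - t(x)), with transmission t(x) = exp(-beta * d(x)). Below is a sketch with a scalar atmospheric light and a placeholder depth map; HAAN estimates these quantities with learned networks, including a sky segmentation net for A.

```python
import torch

def synthesize_fog(clear: torch.Tensor, airlight: float = 0.9,
                   beta: float = 1.2, depth=None) -> torch.Tensor:
    """clear: (B, 3, H, W) fog-free image J in [0, 1] -> hazy image I."""
    b, _, h, w = clear.shape
    if depth is None:
        # Smooth pseudo-depth, larger toward the bottom of the frame.
        depth = torch.linspace(0.1, 1.0, h).view(1, 1, h, 1).expand(b, 1, h, w)
    t = torch.exp(-beta * depth)             # transmission map t(x)
    return clear * t + airlight * (1.0 - t)  # I = J*t + A*(1 - t)

hazy = synthesize_fog(torch.rand(1, 3, 120, 160))
```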
The light absorption and scattering of underwater impurities lead to poor underwater imaging quality. Existing data-driven underwater image enhancement (UIE) techniques suffer from the lack of a large-scale dataset containing diverse underwater scenes and high-fidelity reference images. Moreover, the inconsistent attenuation in different color channels and spatial regions is not fully taken into account for boosted enhancement. In this work, we construct a large-scale underwater image (LSUI) dataset of 5,004 image pairs and report a U-shape Transformer network, in which a transformer model is introduced to the UIE task for the first time. The U-shape Transformer is integrated with a channel-wise multi-scale feature fusion transformer (CMSFFT) module and a spatial-wise global feature modeling transformer (SGFMT) module, which reinforce the network's attention to the color channels and spatial regions with more severe attenuation. Meanwhile, to further improve the contrast and saturation, a novel loss function combining the RGB, LAB, and LCH color spaces is designed following human vision principles. Extensive experiments on available datasets validate the state-of-the-art performance of the reported technique, with an advantage of more than 2 dB.
Source camera identification tools assist image forensic investigators in associating an image in question with a suspect camera. Various techniques have been developed based on the analysis of the subtle traces left in images during acquisition. The photo response non-uniformity (PRNU) noise pattern caused by sensor imperfections has been proven to be an effective way to identify the source camera. The existing literature suggests that the PRNU is the only fingerprint that is device-specific and capable of identifying the exact source device. However, the PRNU is susceptible to camera settings, image content, image processing operations, and counter-forensic attacks. A forensic investigator unaware of counter-forensic attacks or incidental image manipulations is at risk of being misled. The spatial synchronization requirement during the matching of two PRNUs also represents a major limitation of the PRNU. In recent years, deep learning-based approaches have been successful in identifying source camera models. However, the identification of individual cameras of the same model through these data-driven approaches remains unsatisfactory. In this paper, we bring to light the existence of a new robust data-driven device-specific fingerprint in digital images that is capable of identifying individual cameras of the same model. The new device fingerprint is found to be location-independent, stochastic, and globally available, which resolves the spatial synchronization issue. Unlike the PRNU, which resides in the high-frequency band, the new device fingerprint is extracted from the low- and mid-frequency bands, which resolves the fragility issue that the PRNU suffers from. Our experiments on various datasets demonstrate that the new fingerprint is highly resilient to image manipulations such as rotation, gamma correction, and aggressive JPEG compression.
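Extracting a low-to-mid frequency component, where the new fingerprint is said to reside, can be sketched with FFT-domain band masking; the band limits below are illustrative, not the paper's specification.

```python
import torch

def band_limited_component(img: torch.Tensor, lo: float = 0.02, hi: float = 0.35) -> torch.Tensor:
    """Keep only the low-to-mid radial frequency band of (B, C, H, W) images.

    lo/hi are fractions of the sampling frequency (fftfreq spans [-0.5, 0.5)).
    """
    h, w = img.shape[-2:]
    fy = torch.fft.fftfreq(h).view(-1, 1)
    fx = torch.fft.fftfreq(w).view(1, -1)
    radius = (fy ** 2 + fx ** 2).sqrt()                 # radial frequency per bin
    mask = ((radius >= lo) & (radius <= hi)).to(img.dtype)
    spec = torch.fft.fft2(img)
    return torch.fft.ifft2(spec * mask).real            # band-limited spatial image

component = band_limited_component(torch.rand(1, 1, 256, 256))
```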
Learning-based infrared small object detection methods currently rely heavily on the classification backbone network. This tends to result in tiny object loss and feature distinguishability limitations as the network depth increases. Furthermore, small objects in infrared images frequently appear either bright or dark, posing severe demands on obtaining precise object contrast information. For this reason, in this paper we propose a simple and effective ``U-Net in U-Net'' framework, UIU-Net for short, to detect small objects in infrared images. As the name suggests, UIU-Net embeds a tiny U-Net into a larger U-Net backbone, enabling multi-level and multi-scale representation learning of objects. Moreover, UIU-Net can be trained from scratch, and the learned features can enhance global and local contrast information effectively. More specifically, the UIU-Net model is divided into two modules: the resolution-maintenance deep supervision (RM-DS) module and the interactive-cross attention (IC-A) module. RM-DS integrates Residual U-blocks into a deep supervision network to generate deep multi-scale resolution-maintenance features while learning global context information. Further, IC-A encodes the local context information between the low-level details and high-level semantic features. Extensive experiments conducted on two infrared single-frame image datasets, i.e., the SIRST and Synthetic datasets, show the effectiveness and superiority of the proposed UIU-Net in comparison with several state-of-the-art infrared small object detection methods. The proposed UIU-Net also achieves strong generalization performance on video-sequence infrared small object datasets, e.g., the ATR ground/air video sequence dataset. The codes of this work are openly available at \url{https://github.com/danfenghong/IEEE_TIP_UIU-Net}.
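The ``U-Net in U-Net'' nesting can be reduced to a toy: a one-level inner U-block used as the stage of a small outer U-Net. This only conveys the nesting idea; the published Residual U-blocks, RM-DS, and IC-A modules are far deeper and are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUBlock(nn.Module):
    """One-level inner U-Net with a residual connection (RSU-flavored, simplified)."""

    def __init__(self, ch: int):
        super().__init__()
        self.enc = nn.Conv2d(ch, ch, 3, padding=1)
        self.mid = nn.Conv2d(ch, ch, 3, padding=1)
        self.dec = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, x):
        e = F.relu(self.enc(x))
        m = F.relu(self.mid(F.max_pool2d(e, 2)))
        m = F.interpolate(m, size=e.shape[-2:], mode="bilinear", align_corners=False)
        return x + self.dec(torch.cat([e, m], dim=1))   # residual U-in-U

class OuterUNet(nn.Module):
    """Small outer U-Net whose stages are TinyUBlocks."""

    def __init__(self, ch: int = 16):
        super().__init__()
        self.stem = nn.Conv2d(1, ch, 3, padding=1)      # single-band infrared input
        self.down, self.up = TinyUBlock(ch), TinyUBlock(ch)
        self.head = nn.Conv2d(ch, 1, 1)                 # small-object saliency map

    def forward(self, x):
        f = self.down(self.stem(x))
        g = self.up(F.interpolate(F.max_pool2d(f, 2), scale_factor=2.0,
                                  mode="bilinear", align_corners=False))
        return torch.sigmoid(self.head(f + g))

pred = OuterUNet()(torch.rand(1, 1, 128, 128))          # -> (1, 1, 128, 128)
```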