We present a learned, spatially-varying steganography system that allows detecting when and how images have been altered by cropping, splicing or inpainting after publication. The system comprises a learned encoder that imperceptibly hides distinct positional signatures in every local image region before publication, and an accompanying learned decoder that extracts the steganographic signatures to determine, for each local image region, its 2D positional coordinates within the originally-published image. Crop and replacement edits become detectable by the inconsistencies they cause in the hidden positional signatures. Using a prototype system for small $(400 \times 400)$ images, we show experimentally that simple CNN encoder and decoder architectures can be trained jointly to achieve detection that is reliable and robust, without introducing perceptible distortion. This approach could help individuals and image-sharing platforms certify that an image was published by a trusted source, and also know which parts of such an image, if any, have been substantially altered since publication.
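The detection principle can be illustrated with a toy, non-learned stand-in: hide each block's grid coordinate in pixel LSBs and flag blocks whose decoded coordinate disagrees with where they now sit. The block size, the 16-bit signature, and LSB embedding are illustrative assumptions; the system described above trains CNN encoder and decoder networks instead.

```python
# Toy stand-in for the learned positional-steganography system: each 8x8
# block hides its own grid index in pixel LSBs, and a decoder flags blocks
# whose recovered index disagrees with their current position. Block size,
# 16-bit signatures, and LSB embedding are illustrative assumptions, not
# the paper's learned method.

BLOCK = 8  # blocks are BLOCK x BLOCK pixels; 16 signature bits fit in 2 rows

def embed_positions(pixels, w, h):
    """Return a copy of the flat grayscale image with block indices hidden."""
    out = list(pixels)
    bw = w // BLOCK
    for by in range(h // BLOCK):
        for bx in range(bw):
            sig = by * bw + bx                 # positional signature
            base = by * BLOCK * w + bx * BLOCK
            for bit in range(16):              # one signature bit per pixel LSB
                idx = base + (bit // BLOCK) * w + (bit % BLOCK)
                out[idx] = (out[idx] & ~1) | ((sig >> bit) & 1)
    return out

def find_tampered(pixels, w, h):
    """Return grid coordinates of blocks whose hidden signature is wrong."""
    bad = []
    bw = w // BLOCK
    for by in range(h // BLOCK):
        for bx in range(bw):
            base = by * BLOCK * w + bx * BLOCK
            sig = 0
            for bit in range(16):
                idx = base + (bit // BLOCK) * w + (bit % BLOCK)
                sig |= (pixels[idx] & 1) << bit
            if sig != by * bw + bx:            # signature inconsistent with position
                bad.append((bx, by))
    return bad

# Demo: publish a 32x32 image, then splice block (0,0) over block (2,2).
w = h = 32
published = embed_positions([128] * (w * h), w, h)
tampered = list(published)
for r in range(BLOCK):
    for c in range(BLOCK):
        tampered[(16 + r) * w + 16 + c] = published[r * w + c]
```

The spliced block carries the signature of its source location, so the decoder localizes the edit without any reference image.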
Image cropping is a cheap and effective operation for maliciously altering image content. Existing crop-detection mechanisms analyze the fundamental traces left by cropping, such as chromatic aberration and vignetting, to uncover cropping attacks, but they are fragile under common post-processing attacks, which fool the forensics by removing such cues. Moreover, they overlook the fact that recovering the cropped-out content can reveal the purpose behind the attack. This paper proposes a novel robust watermarking scheme for image Cropping Localization and Recovery (CLR-Net). We first protect the original image by introducing an imperceptible perturbation. Typical image post-processing attacks are then simulated to erode the protected image. On the recipient's side, we predict the cropping mask and recover the original image. We propose two plug-and-play networks to improve the real-world robustness of CLR-Net, namely a Fine-Grained generative JPEG simulator (FG-JPEG) and a Siamese image pre-processing network. To the best of our knowledge, we are the first to address the combined challenge of image cropping localization and recovery of the entire image from a fragment. Experiments demonstrate that, despite the presence of various kinds of image-processing attacks, CLR-Net can accurately localize the cropping and recover the details of the cropped-out regions with high quality and fidelity.
Satellite imagery is becoming increasingly popular and accessible due to the decreasing cost of the technology and the growing number of satellite launches. Besides serving benign purposes, satellite data can also be used for malicious reasons such as disinformation: indeed, satellite images can be easily manipulated with general-purpose image-editing tools. Moreover, with the proliferation of deep neural networks (DNNs) capable of generating realistic synthetic images in a variety of domains, additional threats related to the diffusion of synthetically generated satellite imagery are emerging. In this paper, we review the state of the art (SOTA) on the generation and manipulation of satellite images. In particular, we focus both on the generation of synthetic satellite images from scratch and on the semantic manipulation of satellite images through image-transfer techniques, including the transformation of images acquired with one type of sensor into images that appear to come from another. We also describe the forensic detection techniques investigated so far to classify and detect synthetic image forgeries. While we focus mainly on forensic techniques explicitly tailored to the detection of AI-generated synthetic content, we also review some methods designed for general splicing detection, which can in principle also be used to discover AI-manipulated images.
The emerging field of eXplainable Artificial Intelligence (XAI) aims to bring transparency to today's powerful but opaque deep learning models. While local XAI methods explain individual predictions in the form of attribution maps, thereby identifying where important features occur (but not what they represent), global explanation techniques visualize the concepts a model has generally learned to encode. Both types of method thus provide only partial insight and leave the burden of interpreting the model's reasoning to the user. Only a few contemporary techniques aim to combine the principles behind local and global XAI to obtain more informative explanations. Those methods, however, are often limited to specific model architectures or impose additional requirements on the training regime or on data and label availability, which effectively rules out their post-hoc application to arbitrary pre-trained models. In this work, we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives of XAI and thus allows answering both the "where" and the "what" questions for individual predictions, without further constraints. We additionally introduce the principle of Relevance Maximization for finding representative examples of encoded concepts based on their usefulness to the model, thereby lifting the dependency on the common practice of Activation Maximization and its limitations. We demonstrate the capabilities of our methods in various settings, showing that Concept Relevance Propagation and Relevance Maximization lead to more human-interpretable explanations and provide deep insights into the model's representations and reasoning through concept atlases, concept-composition analyses, and quantitative investigations of concept subspaces and their role in fine-grained decision making.
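The local-attribution backbone that CRP builds on can be sketched with one layer-wise relevance propagation (LRP) step through a tiny linear layer; keeping only one hidden unit's relevance stands in, very loosely, for conditioning the explanation on a single concept. The network, weights, and numbers below are made up for illustration.

```python
# Minimal sketch of one layer-wise relevance propagation (LRP) step, the
# kind of local attribution that CRP conditions on concepts. Restricting
# relevance to one hidden unit loosely stands in for "masking" a concept.

EPS = 1e-9

def lrp_step(activations, weights, relevance_out):
    """Redistribute output relevances to inputs proportionally to a_j * w_jk."""
    n_in, n_out = len(activations), len(relevance_out)
    relevance_in = [0.0] * n_in
    for k in range(n_out):
        z = [activations[j] * weights[j][k] for j in range(n_in)]
        denom = sum(z) + EPS
        for j in range(n_in):
            relevance_in[j] += z[j] / denom * relevance_out[k]
    return relevance_in

a = [1.0, 2.0, 0.5]                        # input activations
W = [[0.5, -0.2], [0.1, 0.4], [0.3, 0.3]]  # 3 inputs -> 2 hidden "concepts"
R_out = [1.2, 0.8]                         # relevance arriving at the two units

R_all = lrp_step(a, W, R_out)                  # ordinary local explanation
R_concept0 = lrp_step(a, W, [R_out[0], 0.0])   # restricted to concept 0 only
```

The redistribution is conservative: the total relevance assigned to the inputs equals the relevance that entered the layer, whether or not it is restricted to one unit.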
Our goal with this survey is to provide an overview of state-of-the-art deep learning technologies for face generation and editing. We will cover popular latest architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross-domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantics editing and preserving photo quality. We aim to provide an entry point into the field for readers who have basic knowledge of deep learning and are looking for an accessible introduction and overview.
We present SeamlessGAN, a method capable of automatically generating tileable texture maps from a single input exemplar. In contrast to most existing methods, which focus solely on the synthesis problem, our work tackles both problems, synthesis and tileability, simultaneously. Our key idea is the observation that the latent space of a generative network trained with adversarial expansion techniques produces outputs that are continuous at seam intersections, which can then be turned into tileable images by cropping the central region. Since not every value of the latent space yields a valid high-quality output, we leverage the discriminator as a perceptual error metric capable of identifying artifact-free textures during the sampling process. Furthermore, in contrast to previous work on deep texture synthesis, our model is designed and optimized to work with multi-layered texture representations, enabling textures composed of multiple maps such as albedo, normals, etc. We extensively test our design choices for the network architecture, loss functions, and sampling parameters. We show, both qualitatively and quantitatively, that our approach outperforms previous methods and works for textures of different types.
Digital image watermarking seeks to protect digital media from unauthorized access by embedding a message into a digital image and extracting it afterwards, even after various data-processing operations, including noise, distortion, lossy image compression, and interactive content editing, have been applied. Traditional image-watermarking solutions are vulnerable in robustness when subject to prior constraints, while recent deep-learning-based watermarking methods suffer from information-loss problems under the separate pipelines of feature encoder and decoder. In this paper, we propose a novel digital image watermarking solution with a compact neural network, named the Invertible Watermarking Network (IWN). Our IWN architecture is based on a single Invertible Neural Network (INN); this bijective propagation framework allows us to address the challenges of information embedding and extraction simultaneously, by treating them as a pair of inverse problems of each other and learning a stable invertible mapping. To enhance the robustness of our watermarking solution, we specifically introduce a simple but effective bit-message normalization module to condense the bit message to be embedded, and a noise layer designed to simulate various practical attacks under our IWN framework. Extensive experiments demonstrate the superiority of our solution under various distortions.
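The invertibility-by-construction at the heart of an INN can be illustrated with a single additive coupling layer: whatever the sub-network f computes, the forward map is exactly undone by subtracting the same f output. The function f below is an arbitrary stand-in for a learned sub-network (rounded so the toy is losslessly invertible on integers); IWN stacks many learned coupling blocks rather than this single step.

```python
import math

# One additive coupling layer: split the signal into (cover, message);
# the forward pass computes y2 = x2 + f(x1). Because the inverse subtracts
# the same f(y1) = f(x1), embedding and extraction are exact inverses by
# construction, regardless of what f is. f is an arbitrary stand-in for a
# learned sub-network, rounded so integers are recovered losslessly.

def f(x1):
    return [round(10 * math.tanh(v / 50)) for v in x1]

def couple_forward(x1, x2):
    return x1, [b + d for b, d in zip(x2, f(x1))]

def couple_inverse(y1, y2):
    return y1, [b - d for b, d in zip(y2, f(y1))]

cover = [120, 64, 200, 33]   # "image" half of the split
message = [1, 0, 1, 1]       # watermark bits to embed
s1, s2 = couple_forward(cover, message)   # embedding
r1, r2 = couple_inverse(s1, s2)           # extraction
```

No matter how complex f is made, reversibility never has to be learned; it is guaranteed by the coupling structure.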
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
translated by 谷歌翻译
Differentiable simulations of optical systems can be combined with deep-learning-based reconstruction networks to achieve high-performance computational imaging via end-to-end (E2E) optimization of the optical encoder and the deep decoder. This has enabled imaging applications such as 3D-localization microscopy, depth estimation, and lensless photography by optimizing local optical encoders. More challenging computational-imaging applications, such as 3D snapshot microscopy, which compresses a 3D volume into a single 2D image, require a highly non-local optical encoder. We show that existing deep-network decoders have a locality bias that prevents the optimization of such highly non-local optical encoders. We address this problem with a decoder based on a shallow neural-network architecture using global-kernel Fourier convolutions (FourierNets). We show that FourierNets surpass existing network-based decoders on photographs captured by a highly non-local diffuser-lens optical encoder. Furthermore, we show that FourierNets enable E2E optimization of highly non-local optical encoders for 3D snapshot microscopy. By combining FourierNets with large-scale multi-GPU differentiable optical simulation, we are able to optimize non-local optical encoders 170$\times$ to 7372$\times$ larger than the previous state of the art, and demonstrate the potential for ROI-type-specific optical encoding with a programmable microscope.
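The "global kernel" property of Fourier convolutions follows from the convolution theorem: a pointwise product of spectra equals a circular convolution whose kernel spans the whole input, so a single Fourier layer mixes every position with every other. A minimal 1D check with a naive DFT (unrelated to the paper's actual architecture; the signal and kernel values are arbitrary):

```python
import cmath

# Convolution-theorem check: multiplying DFT spectra pointwise and
# transforming back equals circular convolution with a kernel as large
# as the signal itself -- i.e., one Fourier "layer" has a global
# receptive field. Naive O(n^2) DFT used for clarity.

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def circular_conv(x, kern):
    n = len(x)
    return [sum(x[m] * kern[(t - m) % n] for m in range(n)) for t in range(n)]

x = [1.0, 2.0, 0.0, -1.0, 3.0, 0.5, -2.0, 1.0]
kern = [0.5, -1.0, 0.25, 0.0, 1.0, 0.75, -0.5, 2.0]  # "global": same size as x

direct = circular_conv(x, kern)
via_fourier = [v.real for v in idft([a * b for a, b in zip(dft(x), dft(kern))])]
```

A spatial CNN would need a kernel (or stacked receptive field) as wide as the input to achieve the same mixing in one step.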
Machine learning models often encounter samples that differ from the training distribution. Failing to recognize an out-of-distribution (OOD) sample, and consequently assigning it to a class label, significantly compromises a model's reliability. The problem has attracted significant attention due to its importance for safely deploying models in open-world settings. Detecting OOD samples is challenging because of the intractability of modeling all possible unknown distributions. To date, several research domains have tackled the problem of detecting unfamiliar samples, including anomaly detection, novelty detection, one-class learning, open-set recognition, and out-of-distribution detection. Despite similar and shared concepts, out-of-distribution detection, open-set detection, and anomaly detection have been investigated independently. Accordingly, these research avenues have not cross-pollinated, creating research barriers. While some surveys intend to provide an overview of these approaches, they seem to focus only on a specific domain without examining the relationships between different domains. This survey aims to provide a cross-domain, comprehensive review of numerous eminent works in these respective areas while identifying their commonalities. Researchers can benefit from this overview of research advances in different fields and develop future methodology synergistically. Furthermore, to the best of our knowledge, while there are surveys on anomaly detection and one-class learning, there is no comprehensive or up-to-date survey on out-of-distribution detection, which our survey covers extensively. Finally, having a unified cross-domain perspective, we discuss and shed light on future lines of research, intending to bring these fields closer together.
Online media data, in the forms of images and videos, are becoming mainstream communication channels. However, recent advances in deep learning, particularly deep generative models, open the doors for producing perceptually convincing images and videos at a low cost, which not only poses a serious threat to the trustworthiness of digital information but also has severe societal implications. This motivates a growing interest of research in media tampering detection, i.e., using deep learning techniques to examine whether media data have been maliciously manipulated. Depending on the content of the targeted images, media forgery could be divided into image tampering and Deepfake techniques. The former typically moves or erases the visual elements in ordinary images, while the latter manipulates the expressions and even the identity of human faces. Accordingly, the means of defense include image tampering detection and Deepfake detection, which share a wide variety of properties. In this paper, we provide a comprehensive review of the current media tampering detection approaches, and discuss the challenges and trends in this field for future research.
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequences, as compared to recurrent networks such as Long Short-Term Memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text, and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to the fundamental concepts behind the success of Transformers, i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision, including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization), and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis of open research directions and possible future works.
We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
Modern computer vision has moved beyond the domain of internet photo collections and into the physical world, guiding camera-equipped robots and autonomous cars through unstructured environments. To enable these embodied agents to interact with real-world objects, cameras are increasingly being used as depth sensors, reconstructing the environment for a variety of downstream reasoning tasks. Machine-learning-aided depth perception, or depth estimation, predicts a distance for each pixel in an image. While impressive strides have been made in depth estimation, significant challenges remain: (1) ground-truth depth labels are difficult to collect at scale, (2) camera information is typically assumed to be known but is often unreliable, and (3) restrictive camera assumptions are common, even though a great variety of camera types and lenses are used in practice. In this thesis, we focus on relaxing these assumptions and describe contributions toward the ultimate goal of turning cameras into truly generic depth sensors.
In this paper we present TruFor, a forensic framework that can be applied to a large variety of image manipulation methods, from classic cheapfakes to more recent manipulations based on deep learning. We rely on the extraction of both high-level and low-level traces through a transformer-based fusion architecture that combines the RGB image and a learned noise-sensitive fingerprint. The latter learns to embed the artifacts related to the camera's internal and external processing by training only on real data in a self-supervised manner. Forgeries are detected as deviations from the expected regular pattern that characterizes each pristine image. Looking for anomalies makes the approach able to robustly detect a variety of local manipulations, ensuring generalization. In addition to a pixel-level localization map and a whole-image integrity score, our approach outputs a reliability map that highlights areas where localization predictions may be error-prone. This is particularly important in forensic applications in order to reduce false alarms and allow for large-scale analysis. Extensive experiments on several datasets show that our method is able to reliably detect and localize both cheapfake and deepfake manipulations, outperforming state-of-the-art works. Code will be publicly available at https://grip-unina.github.io/TruFor/
We introduce a new approach to image forensics: placing physical refractive objects, which we call totems, into a scene in order to protect any photograph taken of that scene. Totems bend and redirect light rays, and thus provide multiple (albeit distorted) views of the scene within a single image. A defender can use these distorted totem pixels to detect whether an image has been manipulated. Our approach unscrambles the light rays passing through a totem by estimating its position in the scene and using its known geometric and material properties. To verify a totem-protected image, we detect inconsistencies between the scene reconstructed from the totem's viewpoints and the scene's appearance from the camera's viewpoint. Such an approach makes the adversarial manipulation task harder, as the adversary must modify both the totem and the image pixels in a geometrically consistent manner without knowing the physical properties of the totem. Unlike prior learning-based approaches, our method does not require training on datasets of specific manipulations; instead, it uses physical properties of the scene and camera to solve the forensics problem.
Deep learning is regarded as a promising solution for reversible steganography. Recent developments in end-to-end learning have made it possible to bypass multiple intermediate stages of steganographic operations with a pair of encoder and decoder neural networks. However, this framework is unable to guarantee perfect reversibility, since it is difficult for such monolithic machinery, in the form of a black box, to learn the intricate logic of reversible computation. A more reliable way to develop a learning-based reversible steganographic scheme is through a divide-and-conquer paradigm. Prediction-error modulation is a well-established modular framework comprising an analytics module and a coding module. The former serves to analyze pixel correlations and predict pixel intensities, while the latter is devoted to the reversible coding mechanism. Given that reversibility is managed independently by the coding module, we focus on incorporating neural networks into the analytics module. The objective of this study is to evaluate the impact of different training configurations on predictive neural networks and to provide practical insights. Context-aware pixel-intensity prediction plays a central role in reversible steganography and can be regarded as a low-level computer-vision task; hence, neural-network models originally designed for such computer-vision tasks can be adopted to perform intensity prediction. Furthermore, we rigorously investigate the impact of intensity initialization on predictive performance and the influence of the distributional shift caused by dual-layer prediction. Experimental results show that state-of-the-art steganographic performance can be achieved with advanced neural-network models.
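The division of labor described above can be made concrete with the classic prediction-error expansion scheme: predict each pixel from its already-processed neighbor, expand the prediction error to carry one bit, and invert the expansion exactly at extraction. The left-neighbor predictor below is a stand-in for the learned analytics module, and pixel-overflow handling is omitted for brevity.

```python
# Minimal prediction-error expansion (the reversible coding module). The
# left-neighbor predictor stands in for the learned analytics module;
# overflow/underflow handling is omitted for brevity.

def pee_embed(pixels, bits):
    """Embed one bit per pixel (after the first) by expanding prediction errors."""
    out = list(pixels)
    for i, b in enumerate(bits, start=1):
        pred = pixels[i - 1]            # prediction from the original neighbor
        err = pixels[i] - pred
        out[i] = pred + 2 * err + b     # expanded error carries the bit
    return out

def pee_extract(stego, n_bits):
    """Recover the bits AND the original pixels exactly."""
    rec = list(stego)
    bits = []
    for i in range(1, n_bits + 1):
        pred = rec[i - 1]               # neighbor is already restored
        err2 = rec[i] - pred
        bits.append(err2 % 2)
        rec[i] = pred + err2 // 2       # floor division inverts 2*err + b
    return bits, rec

cover = [100, 102, 101, 99, 105, 104]
payload = [1, 0, 1, 1, 0]
stego = pee_embed(cover, payload)
recovered_bits, restored = pee_extract(stego, len(payload))
```

The better the predictor, the smaller the errors and the less distortion the expansion causes, which is exactly why the study above swaps neural predictors into the analytics module while leaving the coding module untouched.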
Facial feature tracking is a key component of imaging ballistocardiography (BCG), where accurate quantification of the displacement of facial keypoints is needed for good heart-rate estimation. Skin feature tracking also enables video-based quantification of motor degradation in Parkinson's disease. Traditional computer-vision algorithms include the Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and the Lucas-Kanade method (LK). These have long represented the state of the art in efficiency and accuracy, but they fail when common deformations are present. Over the past five years, deep convolutional neural networks have outperformed traditional methods on most computer-vision tasks. We propose a pipeline for feature tracking that applies a convolutional stacked autoencoder to identify the crop in an image that is most similar to a reference crop containing the feature of interest. The autoencoder learns to represent image crops as deep feature encodings specific to the object category. We train the autoencoder on facial images and validate its ability to track skin features in general on manually labeled face and hand videos. The tracking errors for distinctive skin features (moles) are so small that we cannot exclude that they stem from the manual labeling, based on a $\chi^2$-test. With a mean error of 0.6-4.2 pixels, our method outperformed the other methods in all cases. More importantly, ours was the only method that did not diverge. We conclude that our method creates better feature descriptors for feature tracking, feature matching, and image registration than the traditional algorithms.
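The crop-matching step of such a pipeline can be sketched as follows, with raw patch intensities standing in for the learned encodings (the pipeline described above uses the output of a convolutional stacked autoencoder as the descriptor; the frames and the 3x3 "feature" are fabricated for illustration).

```python
# Sketch of tracking-by-crop-matching: encode a reference crop, then slide
# over the next frame and keep the crop whose encoding is closest. The
# "encoder" here is the identity on raw pixels, standing in for the
# convolutional stacked autoencoder trained in the paper.

def crop(frame, x, y, size):
    return [frame[y + r][x + c] for r in range(size) for c in range(size)]

def encode(patch):
    return patch                      # stand-in for the learned encoder

def dist2(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def track(frame, ref_code, size):
    """Return the top-left corner of the best-matching crop in the frame."""
    h, w = len(frame), len(frame[0])
    best, best_xy = float("inf"), None
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            d = dist2(encode(crop(frame, x, y, size)), ref_code)
            if d < best:
                best, best_xy = d, (x, y)
    return best_xy

def blank(w, h):
    return [[0] * w for _ in range(h)]

def paint(frame, x, y):
    # distinctive 3x3 "skin feature" so only the true location matches
    for r in range(3):
        for c in range(3):
            frame[y + r][x + c] = 1 + 3 * r + c

frame1, frame2 = blank(10, 10), blank(10, 10)
paint(frame1, 5, 3)                   # feature location in frame 1
paint(frame2, 7, 4)                   # the feature has moved in frame 2
ref = encode(crop(frame1, 5, 3, 3))
```

Replacing the identity encoder with a learned one is what makes the match robust to the deformations that break SIFT-, SURF-, and LK-style tracking.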
Forensic analysis depends on identifying hidden traces in manipulated images. Traditional neural networks fail at this task because of their inability to handle feature attenuation and their reliance on dominant spatial features. In this work, we propose a novel Gated Context Attention Network (GCA-Net) that utilizes non-local attention blocks for global context learning. Additionally, we employ a gated attention mechanism in conjunction with a dense decoder network to direct the flow of relevant features during the decoding phase, allowing for precise localization. The proposed attention framework enables the network to focus on relevant regions by filtering out coarse features. Furthermore, by leveraging multi-scale feature fusion and an efficient learning strategy, GCA-Net can better handle the scale variation of manipulated regions. We show that our method outperforms state-of-the-art networks by an average of 4.2%-5.4% AUC on multiple benchmark datasets. Finally, we also conduct extensive ablation experiments to demonstrate the method's robustness for image forensics.
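A non-local attention block, the global-context ingredient named above, can be reduced to a few lines: every position attends to every other position via softmax-normalized dot products. This is a single head with no learned projections and no gating, i.e., a deliberate simplification of GCA-Net's gated design, with fabricated features for illustration.

```python
import math

# Bare-bones non-local (self-)attention: each position's output is a
# softmax-weighted sum over ALL positions' features, so context is global.
# Single head, no learned query/key/value projections, no gating -- a
# simplification of the gated attention used in GCA-Net.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def nonlocal_attention(feats):
    dim = len(feats[0])
    out = []
    for q in feats:
        scores = [sum(a * b for a, b in zip(q, k)) for k in feats]
        w = softmax(scores)
        out.append([sum(wi * k[d] for wi, k in zip(w, feats)) for d in range(dim)])
    return out

# Four positions with 2-D features; position 0 resembles position 3,
# so its output is pulled toward position 3's feature regardless of
# how far apart the two positions are spatially.
feats = [[1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [2.0, 0.0]]
attended = nonlocal_attention(feats)
```

A gating branch, as in GCA-Net, would further multiply each output by a learned sigmoid mask to suppress irrelevant regions.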
Apple recently revealed its deep perceptual hashing system NeuralHash, intended to detect child sexual abuse material (CSAM) on user devices before files are uploaded to its iCloud service. Public criticism quickly arose regarding the protection of user privacy and the system's reliability. This paper presents the first comprehensive empirical analysis of deep perceptual hashing based on NeuralHash. Specifically, we show that current deep perceptual hashing may not be robust. An adversary can manipulate the hash values by applying slight changes to images, either induced by gradient-based approaches or simply by performing standard image transformations, forcing or preventing hash collisions. Such attacks permit malicious actors to easily exploit the detection system: from hiding abusive material to framing innocent users, everything is possible. Moreover, using the hash values, inferences can still be made about the data stored on user devices. In our view, based on our results, deep perceptual hashing in its current form is generally not ready for robust client-side scanning and should not be used from a privacy standpoint.
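Why perceptual hashes admit such manipulations is easy to see even with a classical difference hash: bits encode sign comparisons of neighboring intensities, so a global change leaves the hash untouched while a tiny targeted nudge at a comparison boundary flips a bit. NeuralHash itself is a deep network, not this scheme; the fragility of threshold bits is the common point, and the pixel values below are fabricated.

```python
# Toy difference-hash ("dHash") illustrating perceptual-hash fragility.
# Each bit records whether a pixel is brighter than its right neighbor,
# so a uniform brightness shift changes nothing, while a tiny targeted
# edit at one comparison flips a bit.

def dhash(grid):
    return [1 if a > b else 0 for row in grid for a, b in zip(row, row[1:])]

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

img = [[52, 55, 61, 59],
       [79, 61, 55, 69],
       [93, 85, 87, 81],
       [70, 65, 68, 60]]

brighter = [[p + 10 for p in row] for row in img]   # benign global change
attacked = [row[:] for row in img]
attacked[0][0] = 56                                 # tiny targeted edit: 52 -> 56
```

An adversary who can query or differentiate the hash can search for exactly such near-threshold edits, either to force a collision with a flagged hash or to push abusive material away from one.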
Large training data and expensive model tweaking are standard features of deep learning for images. As a result, data owners often utilize cloud resources to develop large-scale complex models, which raises privacy concerns. Existing solutions are either too expensive to be practical or do not sufficiently protect the confidentiality of data and models. In this paper, we study and compare novel \emph{image disguising} mechanisms, DisguisedNets and InstaHide, aiming to achieve a better trade-off among the level of protection for outsourced DNN model training, the expenses, and the utility of data. DisguisedNets are novel combinations of image blocktization, block-level random permutation, and two block-level secure transformations: random multidimensional projection (RMT) and AES pixel-level encryption (AES). InstaHide is an image mixup and random pixel flipping technique \cite{huang20}. We have analyzed and evaluated them under a multi-level threat model. RMT provides a better security guarantee than InstaHide, under the Level-1 adversarial knowledge with well-preserved model quality. In contrast, AES provides a security guarantee under the Level-2 adversarial knowledge, but it may affect model quality more. The unique features of image disguising also help us to protect models from model-targeted attacks. We have done an extensive experimental evaluation to understand how these methods work in different settings for different datasets.
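The RMT transform can be sketched as a keyed random projection applied per image block: the owner derives a secret matrix from a key, projects each flattened block before outsourcing, and only the same key reproduces the disguise. Gaussian entries, the block layout, and the omission of the block-permutation step are simplifying assumptions here; AES pixel-level encryption and the mixup-based InstaHide are separate mechanisms not shown.

```python
import random

# Sketch of keyed random-projection disguising in the spirit of RMT:
# a secret key seeds a random matrix R, and each flattened image block x
# is outsourced as R @ x. Gaussian entries, the block size, and the
# absence of block-level permutation are simplifying assumptions.

def secret_matrix(key, n):
    rng = random.Random(key)          # key-derived, hence reproducible
    return [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]

def disguise_block(block, R):
    return [sum(r * x for r, x in zip(row, block)) for row in R]

def disguise_image(blocks, key):
    R = secret_matrix(key, len(blocks[0]))
    return [disguise_block(b, R) for b in blocks]

blocks = [[10.0, 20.0, 30.0, 40.0], [5.0, 5.0, 200.0, 7.0]]
owner = disguise_image(blocks, key=1234)
same_key = disguise_image(blocks, key=1234)
wrong_key = disguise_image(blocks, key=9999)
```

Because the projection largely preserves the geometry of the block space, a DNN can still be trained on the disguised data, while reconstruction without the key requires estimating R.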