As an important perceptual characteristic of the Human Visual System (HVS), the Just Noticeable Difference (JND) has been studied for decades in image and video processing (e.g., perceptual visual signal compression). However, the JND of Deep Machine Vision (DMV) has received little exploration, despite the great progress DMV has made in many machine vision tasks. In this paper, we make an initial attempt and demonstrate that DMV does have a JND, termed DMV-JND. We then propose a JND model for the image classification task in DMV. It is found that DMV can tolerate distorted images with an average PSNR of only 9.56 dB (the lower, the better) by generating the JND via unsupervised learning with the proposed DMV-JND-NET. In particular, a semantic-guided redundancy assessment strategy is designed to constrain the magnitude and spatial distribution of the DMV-JND. Experimental results on image classification demonstrate that we have successfully found the JND of deep machine vision. Our DMV-JND opens up promising directions for DMV-oriented image and video compression, watermarking, quality assessment, deep neural network security, and so on.
Significant improvement has been made in Just Noticeable Difference (JND) modeling thanks to the development of deep neural networks, especially the recently developed unsupervised JND generation models. However, they have a major drawback: the generated JND is assessed in the real-world signal domain rather than in the perceptual domain within the human brain. There is an obvious difference between the JND assessed in these two domains, since real-world visual signals are encoded by the Human Visual System (HVS) before being delivered to the brain. Hence, we propose an HVS-inspired signal degradation network for JND estimation. To achieve this, we carefully analyze the HVS perceptual process in subjective JND viewing to obtain relevant insights, and then design an HVS-inspired signal degradation (HVS-SD) network to represent the signal degradation in the HVS. On the one hand, the well-learned HVS-SD enables us to assess the JND in the perceptual domain; on the other hand, it provides more accurate prior information to better guide JND generation. Furthermore, considering the requirement that a reasonable JND should not cause visual attention shifting, a visual attention loss is proposed to control JND generation. Experimental results demonstrate that the proposed method achieves SOTA performance in accurately estimating the redundancy of the HVS. The source code will be available at https://github.com/jianjin008/hvs-sd-jnd.
Deep learning models have been found vulnerable to adversarial examples, as small perturbations of the input to a deep learning model can trigger wrong predictions. Most existing works on adversarial image generation try to achieve attacks against as many models as possible, while few of them make an effort to ensure the perceptual quality of the adversarial examples. High-quality adversarial examples matter for many applications, especially privacy preservation. In this work, we develop a framework based on the Minimum Noticeable Difference (MND) concept to generate adversarial privacy-preserving images that have a minimal perceptual difference from clean ones but are still able to attack deep learning models. To achieve this, an adversarial loss is first proposed so that deep learning models are successfully attacked by the adversarial images. Then, a perceptual quality-preserving loss is developed by considering the magnitude of the perturbation as well as the perturbation-induced structural and gradient changes, aiming to preserve high perceptual quality in adversarial image generation. To the best of our knowledge, this is the first work to explore quality-preserving adversarial image generation based on the MND concept for privacy preservation. To evaluate its performance in terms of perceptual quality, deep models for image classification and face recognition are tested with the proposed method and several anchor methods. Extensive experimental results show that the proposed MND framework is able to generate adversarial images with significantly better performance metrics (e.g., PSNR, SSIM, and MOS) than those generated with the anchor methods.
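As a rough illustration of how such a combined objective could be assembled, the PyTorch-style sketch below pairs an untargeted adversarial term with magnitude and gradient-change penalties. The weights and the gradient-based structure term are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def mnd_style_loss(model, x_clean, x_adv, true_label,
                   w_adv=1.0, w_mag=0.1, w_struct=0.1):
    """Illustrative combined objective: attack success + perceptual quality.

    The weights and the gradient-change term are hypothetical stand-ins
    for the paper's structural- and gradient-based quality measures.
    """
    # Adversarial term: push the prediction away from the true class.
    adv_loss = -F.cross_entropy(model(x_adv), true_label)

    # Magnitude term: keep the perturbation small.
    mag_loss = torch.mean((x_adv - x_clean) ** 2)

    # Structure term: penalize changes in local image gradients.
    def image_grads(x):
        dx = x[..., :, 1:] - x[..., :, :-1]
        dy = x[..., 1:, :] - x[..., :-1, :]
        return dx, dy

    dx_c, dy_c = image_grads(x_clean)
    dx_a, dy_a = image_grads(x_adv)
    struct_loss = torch.mean((dx_a - dx_c) ** 2) + torch.mean((dy_a - dy_c) ** 2)

    return w_adv * adv_loss + w_mag * mag_loss + w_struct * struct_loss
```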
With the development of the Artificial Intelligence of Things (AIoT), a massive amount of visual data, e.g., images and videos, is generated in our daily work and life. These visual data are used not only for human viewing or understanding but also for machine analysis or decision-making, e.g., intelligent surveillance, autonomous vehicles, and many other smart-city applications. To this end, a new image codec paradigm for both human and machine uses is proposed in this work. First, the high-level instance segmentation map and the low-level signal features are extracted with neural networks. Then, the instance segmentation map is further represented as a profile with the proposed 16-bit gray-scale representation. After that, both the 16-bit gray-scale profile and the signal features are encoded with a lossless codec. Meanwhile, an image predictor is designed and trained to achieve general-quality image reconstruction from the 16-bit gray-scale profile and the signal features. Finally, the residual map between the original image and the predicted one is compressed with a lossy codec for high-quality image reconstruction. With such designs, on the one hand, we can achieve scalable image compression to meet the requirements of different human-consumption scenarios; on the other hand, we can directly support multiple machine vision tasks, e.g., object classification, detection, and segmentation, at the decoder side with the decoded 16-bit gray-scale profile. Experimental results show that the proposed codec achieves performance comparable to most learning-based codecs and outperforms traditional codecs (e.g., BPG and JPEG2000) for image reconstruction in terms of PSNR and MS-SSIM. Meanwhile, it outperforms existing codecs in terms of mAP for object detection and segmentation.
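For intuition, here is a minimal sketch of how an instance segmentation map could be packed into a single 16-bit gray-scale profile. The bit layout (high byte for class ID, low byte for a per-class instance index) is a hypothetical choice; the paper's actual representation may differ.

```python
import numpy as np

def pack_instance_profile(class_ids: np.ndarray, instance_ids: np.ndarray) -> np.ndarray:
    """Pack per-pixel class and instance IDs into one 16-bit gray-scale map.

    Hypothetical layout: high 8 bits = class ID, low 8 bits = instance index.
    Assumes both ID ranges fit in 8 bits each.
    """
    assert class_ids.shape == instance_ids.shape
    profile = (class_ids.astype(np.uint16) << 8) | (instance_ids.astype(np.uint16) & 0xFF)
    return profile  # storable losslessly, e.g., as a 16-bit PNG

def unpack_instance_profile(profile: np.ndarray):
    """Recover class and instance IDs on the decoder side."""
    return (profile >> 8).astype(np.uint8), (profile & 0xFF).astype(np.uint8)
```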
High-quality face images are required to guarantee the stability and reliability of automatic face recognition (FR) systems in surveillance and security scenarios. However, a massive amount of face data is usually compressed before being analyzed, due to limitations on transmission or storage. Compressed images may lose powerful identity information, resulting in performance degradation of the FR system. Herein, we make the first attempt to study the Just Noticeable Difference (JND) of the FR system, which can be defined as the maximum distortion that the FR system cannot notice. More specifically, we establish a JND dataset including 3530 original images and 137,670 compressed images generated by the advanced reference encoding/decoding software based on the Versatile Video Coding (VVC) standard (VTM-15.0). Subsequently, we develop a novel JND prediction model to directly infer JND images for the FR system. In particular, to maximally remove redundancy without damaging the robust identity information, we apply an encoder with multiple feature-extraction and attention-based feature-decomposition modules to progressively decompose face features into two uncorrelated components, i.e., identity features and residual features, via self-supervised learning. Then, the residual features are fed into the decoder to generate the residual map. Finally, the predicted JND map is obtained by subtracting the residual map from the original image. Experimental results show that the proposed model achieves higher accuracy in JND map prediction than state-of-the-art JND models, and is able to save more bits while maintaining the performance of the FR system, compared with VTM-15.0.
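The overall data flow of such a predictor can be sketched as follows. The encoder and decoder internals (multi-stage feature extraction, attention-based decomposition, self-supervised training) are omitted, so this is only a structural sketch under those assumptions.

```python
import torch
import torch.nn as nn

class JNDPredictor(nn.Module):
    """Schematic encoder-decoder for FR-oriented JND prediction.

    Keeps only the stated data flow: decompose features into identity and
    residual components, decode a residual map, subtract it from the input.
    """
    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder  # assumed to output (identity_feat, residual_feat)
        self.decoder = decoder  # maps residual features to a residual map

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity_feat, residual_feat = self.encoder(x)
        residual_map = self.decoder(residual_feat)
        return x - residual_map  # predicted JND image
```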
Deep neural networks have been shown to be vulnerable to adversarial images. Conventional attacks strive for imperceptible adversarial images with strictly restricted perturbations. Recently, researchers have moved to explore perceptible yet non-suspicious adversarial images and have demonstrated that color transformation attacks are effective. In this work, we propose Adversarial Color Filter (AdvCF), a novel color transformation attack that is optimized with gradient information in the parameter space of a simple color filter. In particular, our color filter space is explicitly specified so that we can provide a systematic robustness analysis of adversarial color transformations from both the attack and defense perspectives. In contrast, existing color transformation attacks offer no opportunity for such systematic analysis due to the lack of such an explicit space. We further conduct extensive comparisons among different color transformation attacks on both attack success rate and image acceptability, through a user study. Additional results provide interesting new insights into model robustness against AdvCF on three other visual tasks. We also highlight the human-interpretability of AdvCF, which is promising in practical usage scenarios, and show its superiority over the state-of-the-art human-interpretable color transformation attack in terms of both image acceptability and efficiency.
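A minimal sketch of a gradient-based attack in an explicit color-filter parameter space is given below; the per-channel affine filter (scale and shift) is an assumed stand-in for AdvCF's actual filter parameterization.

```python
import torch
import torch.nn.functional as F

def color_filter_attack(model, x, label, steps=50, lr=0.01):
    """Sketch of optimizing a simple color filter with gradient information.

    The filter is a hypothetical per-channel affine map; step count and
    learning rate are placeholders.
    """
    scale = torch.ones(1, 3, 1, 1, requires_grad=True)
    shift = torch.zeros(1, 3, 1, 1, requires_grad=True)
    opt = torch.optim.Adam([scale, shift], lr=lr)

    for _ in range(steps):
        x_filtered = (x * scale + shift).clamp(0, 1)   # apply the color filter
        loss = -F.cross_entropy(model(x_filtered), label)  # maximize CE loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    return (x * scale + shift).clamp(0, 1).detach()
```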
Fooling deep neural networks (DNNs) with black-box optimization has become a popular adversarial attack fashion, since the structural prior knowledge of DNNs is always unknown. Nevertheless, recent black-box adversarial attacks may struggle to balance attack ability and the visual quality of the generated adversarial examples (AEs) when tackling high-resolution images. In this paper, we propose an attention-guided black-box adversarial attack based on large-scale multiobjective evolutionary optimization, termed LMOA. By considering the spatial semantic information of images, we first take advantage of the attention map to determine the perturbed pixels. Instead of attacking the entire image, reducing the perturbed pixels with the attention mechanism helps to avoid the notorious curse of dimensionality and thereby improves attack performance. Second, a large-scale multiobjective evolutionary algorithm is employed to traverse the reduced pixels in the salient region. Benefiting from its characteristics, the generated AEs can fool the target DNNs while remaining imperceptible to human vision. Extensive experimental results verify the effectiveness of the proposed LMOA on the ImageNet dataset. More importantly, it is more competitive than existing black-box adversarial attacks in generating high-resolution AEs with better visual quality.
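The attention-guided dimensionality reduction step can be illustrated as follows; the actual LMOA pipeline then runs a large-scale multiobjective evolutionary search over the selected pixels, which this sketch does not cover.

```python
import numpy as np

def select_perturbable_pixels(attention_map: np.ndarray, k: int) -> np.ndarray:
    """Pick the k most salient pixel coordinates to shrink the search space.

    Illustrative reduction step only; k and the saliency source are assumptions.
    """
    flat = attention_map.ravel()
    top_idx = np.argpartition(flat, -k)[-k:]   # indices of the k largest scores
    rows, cols = np.unravel_index(top_idx, attention_map.shape)
    return np.stack([rows, cols], axis=1)      # shape (k, 2)
```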
Video compression plays a crucial role in video streaming and classification systems by maximizing the end-user quality of experience (QoE) at a given bandwidth budget. In this paper, we conduct the first systematic study for adversarial attacks on deep learning-based video compression and downstream classification systems. Our attack framework, dubbed RoVISQ, manipulates the Rate-Distortion ($\textit{R}$-$\textit{D}$) relationship of a video compression model to achieve one or both of the following goals: (1) increasing the network bandwidth, (2) degrading the video quality for end-users. We further devise new objectives for targeted and untargeted attacks to a downstream video classification service. Finally, we design an input-invariant perturbation that universally disrupts video compression and classification systems in real time. Unlike previously proposed attacks on video classification, our adversarial perturbations are the first to withstand compression. We empirically show the resilience of RoVISQ attacks against various defenses, i.e., adversarial training, video denoising, and JPEG compression. Our extensive experimental results on various video datasets show RoVISQ attacks deteriorate peak signal-to-noise ratio by up to 5.6dB and the bit-rate by up to $\sim$ 2.4$\times$ while achieving over 90$\%$ attack success rate on a downstream classifier. Our user study further demonstrates the effect of RoVISQ attacks on users' QoE.
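One way to picture the R-D manipulation is as a PGD-style step that rewards both a higher estimated bit-rate and a worse reconstruction. The sketch below assumes a hypothetical `codec` returning a reconstruction and an estimated bit count; the weights, step size, and perturbation bound are placeholders rather than RoVISQ's settings.

```python
import torch

def rd_attack_step(codec, x, delta, alpha=0.01, w_rate=1.0, w_dist=1.0):
    """One illustrative PGD-style step against a learned video codec."""
    delta = delta.clone().detach().requires_grad_(True)
    x_rec, bits = codec(x + delta)  # assumed interface: (reconstruction, bits)
    # Attack goal: inflate bit-rate and degrade reconstruction quality.
    gain = w_rate * bits + w_dist * torch.mean((x_rec - x) ** 2)
    gain.backward()
    with torch.no_grad():
        delta = (delta + alpha * delta.grad.sign()).clamp(-8 / 255, 8 / 255)
    return delta.detach()
```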
The authors thank Nicholas Carlini (UC Berkeley) and Dimitris Tsipras (MIT) for feedback to improve the survey quality. We also acknowledge X. Huang (Uni. Liverpool), K. R. Reddy (IISC), E. Valle (UNICAMP), Y. Yoo (CLAIR) and others for providing pointers to make the survey more comprehensive.
With the development of streaming media technology, communication increasingly depends on audio and visual information, which places a massive burden on online media. Data compression becomes ever more important to reduce the volume of data transmission and storage. To further improve the efficiency of image compression, researchers utilize various image processing methods to compensate for the limitations of conventional codecs and advanced learning-based compression methods. Instead of modifying the compression-oriented methods themselves, we propose a unified image compression preprocessing framework, called Kuchen, which aims to further improve the performance of existing codecs. The framework consists of a hybrid data-labeling system along with a learning-based backbone to simulate personalized preprocessing. To the best of our knowledge, this is the first exploration of setting up a unified preprocessing benchmark for image compression tasks. The results demonstrate that modern codecs optimized by our unified preprocessing framework consistently improve the efficiency of state-of-the-art compression.
With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to well-designed input samples, called adversarial examples. Adversarial perturbations are imperceptible to human but can easily fool deep neural networks in the testing/deploying stage. The vulnerability to adversarial examples becomes one of the major risks for applying deep neural networks in safety-critical environments. Therefore, attacks and defenses on adversarial examples draw great attention. In this paper, we review recent findings on adversarial examples for deep neural networks, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.
In this paper, we propose a defense strategy to improve adversarial robustness by incorporating hidden-layer representations. The key of this defense strategy is to compress or filter the input information, including adversarial perturbations. This defense strategy can be viewed as an activation function that can be applied to any kind of neural network. Theoretically, we also prove the effectiveness of this defense strategy under certain conditions. Furthermore, by incorporating hidden-layer representations, we propose three types of adversarial attacks to generate three types of adversarial examples, respectively. Experiments show that our defense method can significantly improve the adversarial robustness of deep neural networks and achieves state-of-the-art performance even without adversarial training.
Video classification systems are vulnerable to adversarial attacks, which can create severe security problems in video verification. Current black-box attacks need a large number of queries to succeed, resulting in high computational overhead in the process of attack. On the other hand, attacks with restricted perturbations are ineffective against defenses such as denoising or adversarial training. In this paper, we focus on unrestricted perturbations and propose StyleFool, a black-box video adversarial attack via style transfer to fool the video classification system. StyleFool first utilizes color theme proximity to select the best style image, which helps avoid unnatural details in the stylized videos. Meanwhile, the target class confidence is additionally considered in targeted attacks to influence the output distribution of the classifier by moving the stylized video closer to or even across the decision boundary. A gradient-free method is then employed to further optimize the adversarial perturbations. We carry out extensive experiments to evaluate StyleFool on two standard datasets, UCF-101 and HMDB-51. The experimental results demonstrate that StyleFool outperforms the state-of-the-art adversarial attacks in terms of both the number of queries and the robustness against existing defenses. Moreover, 50% of the stylized videos in untargeted attacks do not need any query since they can already fool the video classification model. Furthermore, we evaluate the indistinguishability through a user study to show that the adversarial samples of StyleFool look imperceptible to human eyes, despite unrestricted perturbations.
Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or required expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model's prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two feature squeezing methods: reducing the color bit depth of each pixel and spatial smoothing. These simple strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
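A minimal sketch of the two squeezers and the detection rule described above (thresholding the L1 distance between prediction vectors) might look as follows; the bit depth, filter size, and threshold are placeholders that would be tuned in practice.

```python
import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x: np.ndarray, bits: int) -> np.ndarray:
    """Squeeze color depth: quantize [0, 1] pixels to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def detect_adversarial(model_predict, x, bits=5, smooth=2, threshold=1.0):
    """Flag x as adversarial if squeezing changes the prediction too much.

    Assumes x is an HxWxC image in [0, 1] and model_predict returns a
    probability vector; the threshold would be tuned on held-out data.
    """
    p_orig = model_predict(x)
    p_depth = model_predict(reduce_bit_depth(x, bits))
    p_smooth = model_predict(median_filter(x, size=(smooth, smooth, 1)))
    score = max(np.abs(p_orig - p_depth).sum(), np.abs(p_orig - p_smooth).sum())
    return score > threshold
```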
Neural network-based image compression has been studied extensively. However, model robustness, which is crucial for enabling real services, has been largely overlooked. We perform adversarial attacks by injecting a small amount of noise perturbation into original source images, and then encode these adversarial examples with prevailing learned image compression models. Experiments report severe distortions in the reconstructions of the adversarial examples, revealing a general vulnerability of existing methods, regardless of the underlying compression model (e.g., network architecture, loss function, quality level) or the optimization strategy used for injecting the perturbation (e.g., noise threshold, signal-distance measurement). Subsequently, we apply iterative adversarial fine-tuning to refine the pretrained models. In each iteration, random source images and adversarial examples are mixed to update the underlying model. Results show the effectiveness of the proposed fine-tuning strategy, which substantially improves the robustness of the compression models. Overall, our method is simple, effective, and generalizable, making it attractive for developing robust learned image compression solutions. All materials are publicly accessible at https://njuvision.github.io/trobustn for reproducible research.
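The mixing strategy can be sketched as a standard training loop in which each batch combines clean images with adversarial examples crafted against the current model; the rate-distortion loss form, mixing ratio, and the `make_adversarial` helper are illustrative assumptions.

```python
import torch

def adversarial_finetune(codec, optimizer, loader, make_adversarial, epochs=1):
    """Sketch of iterative adversarial fine-tuning for a learned codec.

    Assumes codec(batch) returns (reconstruction, estimated_bits); the
    lambda weight on the rate term is a placeholder.
    """
    for _ in range(epochs):
        for x in loader:
            x_adv = make_adversarial(codec, x)      # attack the current model
            batch = torch.cat([x, x_adv], dim=0)    # mix clean + adversarial
            x_rec, bits = codec(batch)
            loss = torch.mean((x_rec - batch) ** 2) + 0.01 * bits  # R-D loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```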
Semantic communication has attracted growing interest, since it can remarkably reduce the amount of data to be transmitted without losing critical information. Most existing works explore the semantic encoding and transmission of text, applying techniques from natural language processing (NLP) to interpret the meaning of the text. In this paper, we conceive semantic communication for image data, which is richer in semantics and more bandwidth-demanding. We propose a reinforcement learning-based adaptive semantic coding (RL-ASC) approach that encodes images beyond the pixel level. First, we define the semantic concepts of image data, including the category, spatial arrangement, and visual features, as the representation units, and propose a convolutional semantic encoder to extract semantic concepts. Second, we propose an image reconstruction criterion that evolves from conventional pixel-wise similarity to semantic similarity and perceptual performance. Third, we design a novel RL-based semantic bit-allocation model, whose reward is the increase in rate-semantic-perceptual performance after encoding a certain semantic concept with an adaptive quantization level. Thus, task-related information is preserved and reconstructed properly, while less important data are discarded. Finally, we propose a generative adversarial network (GAN)-based semantic decoder that fuses local and global features via an attention module. Experimental results demonstrate that the proposed RL-ASC is robust to noise and can reconstruct visually pleasing and semantically consistent images, saving significantly more bits than standard codecs and other deep learning-based image codecs.
Video coding technologies have been continuously improved to achieve higher compression ratios at higher resolutions. However, state-of-the-art video coding standards, such as H.265/HEVC and Versatile Video Coding, are still designed under the assumption that the compressed videos will be watched by humans. With the tremendous advances and maturation of deep neural networks in solving computer vision tasks, more and more videos are directly analyzed by deep neural networks without human involvement. Such a conventional design of video coding standards is not optimal when the compressed videos are consumed by computer vision applications. While the human visual system is consistently sensitive to content with high contrast, the impact of pixels on computer vision algorithms is driven by the specific computer vision task. In this paper, we explore and summarize video coding for computer vision tasks and the emerging video coding standard, Video Coding for Machines.
Deep learning (DL) has shown great success in many human-related tasks, which has led to its adoption in many computer vision-based applications, such as security surveillance systems, autonomous vehicles, and healthcare. Such safety-critical applications cannot be successfully deployed until they are able to overcome safety-critical challenges. Among these challenges is the defense against or/and detection of adversarial examples (AEs). An adversary can carefully craft small, often imperceptible, noise called perturbations to be added to a clean image to generate an AE. The aim of the AE is to fool the DL model, which makes it a potential risk for DL applications. Many test-time evasion attacks and countermeasures, i.e., defense or detection methods, have been proposed in the literature. Moreover, a few reviews and surveys have been published that theoretically present taxonomies of the threats and countermeasure methods, with little focus on detection methods. In this paper, we focus on the image classification task and attempt to provide a survey of test-time evasion attack detection methods for neural network classifiers. A detailed discussion of such methods is provided, with experimental results of eight state-of-the-art detectors under different scenarios on four datasets. We also present potential challenges and future perspectives for this research direction.
Although Deep Neural Networks (DNNs) have achieved impressive results in computer vision, their exposed vulnerability to adversarial attacks remains a serious concern. A series of works has shown that by adding elaborate perturbations to images, DNNs could have catastrophic degradation in performance metrics. And this phenomenon does not only exist in the digital space but also in the physical space. Therefore, estimating the security of these DNNs-based systems is critical for safely deploying them in the real world, especially for security-critical applications, e.g., autonomous cars, video surveillance, and medical diagnosis. In this paper, we focus on physical adversarial attacks and provide a comprehensive survey of over 150 existing papers. We first clarify the concept of the physical adversarial attack and analyze its characteristics. Then, we define the adversarial medium, essential to perform attacks in the physical world. Next, we present the physical adversarial attack methods in task order: classification, detection, and re-identification, and introduce their performance in solving the trilemma: effectiveness, stealthiness, and robustness. In the end, we discuss the current challenges and potential future directions.
Recent advances in artificial intelligence (AI) have significantly intensified research in the geoscience and remote sensing (RS) field. AI algorithms, especially deep learning-based ones, have been developed and applied widely to RS data analysis. The successful application of AI covers almost all aspects of Earth observation (EO) missions, from low-level vision tasks like super-resolution, denoising, and inpainting, to high-level vision tasks like scene classification, object detection, and semantic segmentation. While AI techniques enable researchers to observe and understand the Earth more accurately, the vulnerability and uncertainty of AI models deserve further attention, considering that many geoscience and RS tasks are highly safety-critical. This paper reviews the current development of AI security in the geoscience and RS field, covering the following five important aspects: adversarial attack, backdoor attack, federated learning, uncertainty, and explainability. Moreover, the potential opportunities and trends are discussed to provide insights for future research. To the best of the authors' knowledge, this paper is the first attempt to provide a systematic review of AI security-related research in the geoscience and RS community. Available code and datasets are also listed in the paper to move this vibrant field of research forward.