In this study, we introduce a measure for machine perception, inspired by the concept of Just Noticeable Difference (JND) in human perception. Based on this measure, we propose an adversarial image generation algorithm that iteratively distorts an image with additive noise until the model detects the change by outputting a wrong label. The noise added to the original image is defined as the gradient of the model's cost function. A new cost function is defined to explicitly minimize the amount of perturbation applied to the input image while enforcing perceptual similarity between the adversarial and input images. To this end, the cost function is regularized by the well-known total variation and bounded-range terms so that the adversarial image retains a natural appearance. We evaluate our algorithm qualitatively and quantitatively on adversarial images generated on the CIFAR-10, ImageNet, and MS COCO datasets. Our experiments on image classification and object detection tasks show that adversarial images generated by our JND method are both more successful in deceiving the recognition/detection models and less perturbed, compared to images generated by state-of-the-art methods, i.e., the FGV, FGSM, and DeepFool methods.
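The core loop this abstract describes — perturb along the cost-function gradient until the model's label flips — can be sketched as follows. The toy logistic model, step size, and stopping rule are illustrative assumptions; the paper's full cost additionally carries the total-variation and bounded-range regularizers.

```python
import numpy as np

def iterative_gradient_attack(x, w, b, label, step=0.05, max_iters=100):
    """Iteratively add gradient-direction noise to x until a toy
    logistic classifier flips its predicted label. This is the generic
    FGSM-style loop the abstract builds on, not the paper's exact cost."""
    x_adv = x.copy()
    for _ in range(max_iters):
        z = w @ x_adv + b                    # logit of the toy model
        p = 1.0 / (1.0 + np.exp(-z))         # sigmoid probability
        if (p > 0.5) != bool(label):         # model now disagrees: done
            break
        # gradient of the cross-entropy loss w.r.t. the input is (p - y) * w
        grad = (p - label) * w
        x_adv = np.clip(x_adv + step * np.sign(grad), 0.0, 1.0)
    return x_adv
```

Clipping to [0, 1] plays the role of the bounded-range term, keeping the adversarial image a valid image at every step.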
Deep learning models have been found to be vulnerable to adversarial examples, as small perturbations to the input of a deep learning model can induce incorrect predictions. Most existing works on adversarial image generation try to achieve attacks on most models, while few of them make an effort to ensure the perceptual quality of the adversarial examples. High-quality adversarial examples matter for many applications, especially privacy preservation. In this work, we develop a framework based on the Minimum Noticeable Difference (MND) concept to generate adversarial privacy-preserving images that have minimal perceptual difference from the clean ones but are still able to attack deep learning models. To achieve this, an adversarial loss is first proposed so that deep learning models are successfully attacked by the adversarial images. Then, a perceptual quality-preserving loss is developed by considering the magnitude of the perturbation and the structure and gradient changes caused by the perturbation, which aims to preserve high perceptual quality for adversarial image generation. To the best of our knowledge, this is the first work on exploring quality-preserving adversarial image generation based on the MND concept for privacy preservation. To evaluate its performance in terms of perceptual quality, deep models on image classification and face recognition are tested with the proposed method and several anchor methods in this work. Extensive experimental results show that the proposed MND framework is able to generate adversarial images with remarkably improved performance metrics (e.g., PSNR, SSIM, and MOS) compared with adversarial images generated by the anchor methods.
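A quality-preserving loss of the kind this abstract sketches — penalizing both the perturbation magnitude and the structural (gradient) change it causes — could look like the following. The specific terms and weights are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def image_gradients(img):
    """Forward-difference spatial gradients of a 2-D image."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return gx, gy

def perceptual_quality_loss(clean, adv, w_mag=1.0, w_grad=1.0):
    """Illustrative loss in the spirit of the abstract: penalize
    (i) the perturbation magnitude and (ii) the change in spatial
    gradients, a simple proxy for structural change."""
    mag = np.mean((adv - clean) ** 2)
    gx_c, gy_c = image_gradients(clean)
    gx_a, gy_a = image_gradients(adv)
    grad_change = np.mean((gx_a - gx_c) ** 2) + np.mean((gy_a - gy_c) ** 2)
    return w_mag * mag + w_grad * grad_change
```

Minimizing such a term alongside an adversarial loss trades attack strength against perceptual fidelity, which is the balance the MND framework targets.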
Although deep learning has made remarkable progress in processing various types of data such as images, text and speech, deep models are known to be susceptible to adversarial perturbations: perturbations specifically designed and added to the input to make the target model produce erroneous output. Most of the existing studies on generating adversarial perturbations attempt to perturb the entire input indiscriminately. In this paper, we propose ExploreADV, a general and flexible adversarial attack system that is capable of modeling regional and imperceptible attacks, allowing users to explore various kinds of adversarial examples as needed. We adapt and combine two existing boundary attack methods, DeepFool and the Brendel & Bethge Attack, and propose a mask-constrained adversarial attack system, which generates minimal adversarial perturbations under pixel-level constraints, namely "mask-constraints". We study different ways of generating such mask-constraints considering the variance and importance of the input features, and show that our adversarial attack system offers users good flexibility to focus on sub-regions of inputs, explore imperceptible perturbations and understand the vulnerability of pixels/regions to adversarial attacks. We demonstrate our system to be effective based on extensive experiments and a user study.
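One plausible way to derive a mask-constraint from input-feature variance, as the abstract mentions, is to allow perturbations only in high-variance patches, where changes are harder to notice. The patch size and selection rule below are assumptions, not the paper's exact procedure.

```python
import numpy as np

def variance_mask(image, patch=2, top_frac=0.5):
    """Binary mask that is 1 only in the highest-variance patches of
    the image: a hypothetical variance-based mask-constraint."""
    h, w = image.shape
    var = np.zeros_like(image)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            var[i:i + patch, j:j + patch] = image[i:i + patch, j:j + patch].var()
    threshold = np.quantile(var, 1.0 - top_frac)
    return (var >= threshold).astype(image.dtype)

def masked_perturbation(delta, mask):
    """Enforce the pixel-level constraint by zeroing the perturbation
    outside the allowed region."""
    return delta * mask
```

In the actual system such a constraint would be enforced inside the boundary-attack optimization loop rather than applied once to a finished perturbation.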
Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or required expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model's prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two feature squeezing methods: reducing the color bit depth of each pixel and spatial smoothing. These simple strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
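The two squeezers named in the abstract are easy to sketch end to end. The bit depth, smoothing window, and detection threshold below are illustrative defaults; in practice the threshold would be tuned on legitimate data.

```python
import numpy as np

def reduce_bit_depth(x, bits):
    """Squeeze pixel values in [0, 1] down to the given color bit depth."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smooth(x, size=3):
    """Naive median smoothing with edge replication (the spatial squeezer)."""
    pad = size // 2
    padded = np.pad(x, pad, mode='edge')
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

def feature_squeezing_detect(x, predict, threshold, bits=4):
    """Flag x as adversarial when the model's prediction on the original
    input differs too much (L1 distance) from its prediction on either
    squeezed input. `predict` maps an image to a probability vector."""
    p_orig = predict(x)
    score = max(np.abs(p_orig - predict(reduce_bit_depth(x, bits))).sum(),
                np.abs(p_orig - predict(median_smooth(x))).sum())
    return bool(score > threshold)
```

The intuition: squeezing barely moves the prediction on a natural image, but collapses the carefully placed high-frequency perturbation of an adversarial one, so the two predictions diverge.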
With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to well-designed input samples, called adversarial examples. Adversarial perturbations are imperceptible to human but can easily fool deep neural networks in the testing/deploying stage. The vulnerability to adversarial examples becomes one of the major risks for applying deep neural networks in safety-critical environments. Therefore, attacks and defenses on adversarial examples draw great attention. In this paper, we review recent findings on adversarial examples for deep neural networks, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.
The authors thank Nicholas Carlini (UC Berkeley) and Dimitris Tsipras (MIT) for feedback to improve the survey quality. We also acknowledge X. Huang (Uni. Liverpool), K. R. Reddy (IISC), E. Valle (UNICAMP), Y. Yoo (CLAIR) and others for providing pointers to make the survey more comprehensive.
Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, recent research on DNNs has indicated ever-increasing concern on the robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of a well-trained DNN by demonstrating the ability of generating barely noticeable (to both human and machines) adversarial images that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable by simply training and attacking a substitute model built upon the target model, known as a black-box attack to DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack and importance sampling techniques to efficiently attack black-box models.
* Pin-Yu Chen and Huan Zhang contribute equally to this work.
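The zeroth-order trick at the heart of this abstract is a finite-difference gradient estimate that needs only input/output access to the model. A minimal sketch of the per-coordinate estimator (the full attack would plug these estimates into stochastic coordinate descent with the dimension-reduction and sampling techniques mentioned above):

```python
import numpy as np

def zoo_coordinate_gradients(f, x, coords, h=1e-4):
    """Estimate df/dx at selected coordinates of x using symmetric
    finite differences. f is a black-box scalar function (e.g. an
    attack loss computed from the model's confidence scores)."""
    grads = np.zeros(len(coords))
    for k, i in enumerate(coords):
        e = np.zeros_like(x)
        e[i] = h
        grads[k] = (f(x + e) - f(x - e)) / (2 * h)
    return grads
```

Each coordinate costs two model queries, which is why the paper pairs the estimator with coordinate sampling and hierarchical attacks to keep the query count manageable.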
Many state-of-the-art ML models have outperformed humans in various tasks such as image classification. With such outstanding performance, ML models are widely used today. However, the existence of adversarial attacks and data-poisoning attacks really questions the robustness of ML models. For instance, Engstrom et al. demonstrated that a state-of-the-art image classifier could be easily fooled by a small rotation of an arbitrary image. As ML systems are being increasingly integrated into safety- and security-sensitive applications, adversarial attacks and data-poisoning attacks pose a considerable threat. This chapter focuses on two broad and important areas of ML security: adversarial attacks and data-poisoning attacks.
Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test that we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.
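The key ingredient of these attacks is a margin-style objective on the logits that becomes negative once the target class wins by at least a confidence margin kappa. This is the standard published form of the Carlini-Wagner loss, shown stand-alone; the full attack minimizes it jointly with a distance penalty.

```python
import numpy as np

def cw_margin_loss(logits, target, kappa=0.0):
    """Carlini-Wagner margin objective f(x') = max(max_{i != t} Z_i - Z_t, -kappa).
    It is zero (or below) exactly when the target logit beats every
    other logit by at least kappa, i.e. when the attack succeeds with
    the requested confidence."""
    target_logit = logits[target]
    other_max = np.max(np.delete(logits, target))
    return max(other_max - target_logit, -kappa)
```

Raising kappa demands higher-confidence adversarial examples, which is what powers the transferability test mentioned in the abstract.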
Despite the efficiency and scalability of machine learning systems, recent studies have demonstrated that many classification methods, especially deep neural networks (DNNs), are vulnerable to adversarial examples; i.e., examples that are carefully crafted to fool a well-trained classification model while being indistinguishable from natural data to humans. This makes it potentially unsafe to apply DNNs or related methods in security-critical areas. Since this issue was identified by Biggio et al. (2013) and Szegedy et al. (2014), much work has been done in this field, including the development of attack methods to generate adversarial examples and the construction of defense techniques to guard against them. This paper aims to introduce this topic and its latest developments to the statistics community, primarily focusing on the generation of and protection against adversarial examples. Computational code (in Python and R) used in the numerical experiments is publicly available for readers to explore the surveyed methods. It is the hope of the authors that this paper will encourage more statisticians to work on this important and exciting field of generating and defending against adversarial examples.
Modern neural networks are able to perform at least as well as humans in numerous tasks involving object classification and image generation. However, small perturbations which are imperceptible to humans may significantly degrade the performance of well-trained deep neural networks. We provide a Distributionally Robust Optimization (DRO) framework which integrates human-based image quality assessment methods to design optimal attacks that are imperceptible to humans but significantly damaging to deep neural networks. Through extensive experiments, we show that our attack algorithm generates better-quality (less perceptible to humans) attacks than other state-of-the-art human-imperceptible attack methods. Moreover, we demonstrate that DRO training using our optimally designed human-imperceptible attacks can improve group fairness in image classification. Towards the end, we provide an algorithmic implementation to significantly accelerate DRO training, which could be of independent interest.
The existence of adversarial attacks on convolutional neural networks (CNNs) questions the fitness of such models for serious applications. The attacks manipulate an input image such that misclassification is evoked while the image still looks normal to a human observer, so they are not easily detectable. In a different context, backpropagated activations of CNN hidden layers, the "feature responses" to a given input, have been helpful to visualize for a human "debugger" what the CNN "looks at" while computing its output. In this work, we propose a novel detection method for adversarial examples to prevent such attacks. We do so by tracking adversarial perturbations in feature responses, allowing for automatic detection using average local spatial entropy. The method does not alter the original network architecture and is fully human-interpretable. Experiments confirm the validity of our approach against state-of-the-art attack methods on large-scale models trained on ImageNet.
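The detection statistic named above, average local spatial entropy of a feature-response map, can be sketched as below. The patch size, bin count, and value range are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def local_spatial_entropy(fmap, patch=4, bins=8):
    """Average Shannon entropy over non-overlapping patches of a
    2-D feature-response map with values in [0, 1]. Adversarial
    perturbations tend to raise this statistic by scattering
    activations across intensity bins."""
    h, w = fmap.shape
    entropies = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = fmap[i:i + patch, j:j + patch]
            hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]                      # drop empty bins (0 log 0 = 0)
            entropies.append(-(p * np.log2(p)).sum())
    return float(np.mean(entropies))
```

A detector would compare this value against a threshold calibrated on feature responses of clean inputs.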
Video classification systems are vulnerable to adversarial attacks, which can create severe security problems in video verification. Current black-box attacks need a large number of queries to succeed, resulting in high computational overhead in the process of attack. On the other hand, attacks with restricted perturbations are ineffective against defenses such as denoising or adversarial training. In this paper, we focus on unrestricted perturbations and propose StyleFool, a black-box video adversarial attack via style transfer to fool the video classification system. StyleFool first utilizes color theme proximity to select the best style image, which helps avoid unnatural details in the stylized videos. Meanwhile, the target class confidence is additionally considered in targeted attacks to influence the output distribution of the classifier by moving the stylized video closer to or even across the decision boundary. A gradient-free method is then employed to further optimize the adversarial perturbations. We carry out extensive experiments to evaluate StyleFool on two standard datasets, UCF-101 and HMDB-51. The experimental results demonstrate that StyleFool outperforms the state-of-the-art adversarial attacks in terms of both the number of queries and the robustness against existing defenses. Moreover, 50% of the stylized videos in untargeted attacks do not need any query since they can already fool the video classification model. Furthermore, we evaluate the indistinguishability through a user study to show that the adversarial samples of StyleFool look imperceptible to human eyes, despite unrestricted perturbations.
Deep learning (DL) has shown great success in many human-related tasks, which has led to its adoption in many computer vision based applications, such as security surveillance systems, autonomous vehicles, and healthcare. Such safety-critical applications must chart their path to successful deployment once they have the capability to overcome safety-critical challenges. Among these challenges is defending against and/or detecting adversarial examples (AEs). Adversaries can carefully craft small, often imperceptible, noise called perturbations to be added to a clean image to generate an AE. The aim of the AE is to fool the DL model, which makes it a potential risk for DL applications. Many test-time evasion attacks and countermeasures, i.e., defense or detection methods, have been proposed in the literature. Moreover, a few reviews and surveys have been published that theoretically present taxonomies of the threats and countermeasure methods, with little focus on detection methods. In this paper, we focus on the image classification task and attempt to provide a survey of detection methods for test-time evasion attacks on neural network classifiers. A detailed discussion of such methods is provided, with experimental results for eight state-of-the-art detectors under different scenarios on four datasets. We also provide potential challenges and future perspectives for this research direction.
Deep learning has achieved increasingly excellent performance in many tasks such as autonomous driving and face recognition, and is also challenged by different kinds of attacks. Adding perturbations that are imperceptible to human vision to an image can mislead a neural network model into producing wrong results with high confidence. Adversarial examples are images to which specific noise has been added to mislead a deep neural network model, but adding noise to an image destroys the original data, making it useless in digital forensics and other fields. To prevent illegal or unauthorized access to image data such as human faces, while ensuring no impediment to legal use, reversible adversarial attack techniques have arisen: the original image can be recovered from its reversible adversarial example. However, existing reversible adversarial example generation strategies are all designed for traditional imperceptible adversarial perturbations, which raises the question of how to achieve reversibility for locally visible adversarial perturbations. In this paper, we propose a new method for generating reversible adversarial examples based on local visible adversarial perturbation. The information needed for image recovery is embedded into the region outside the adversarial patch by a reversible data hiding technique. To reduce image distortion and improve visual quality, lossless compression and an embedding principle are adopted. Experiments on the ImageNet dataset show that our method can restore the original images error-free while ensuring attack performance.
Deep neural network (DNN) based intelligent Internet of Things (IoT) systems have been widely deployed in the real world. However, DNNs have been found to be vulnerable to adversarial examples, which raises concerns about the reliability and security of intelligent IoT systems. Testing and evaluating the robustness of IoT systems thus becomes necessary and essential. Various attacks and strategies have been proposed recently, but the efficiency problem remains unsolved: existing methods are either computationally expensive or time-consuming, which is not applicable in practice. In this paper, we propose a novel framework called Attack-Inspired GAN (AI-GAN) to generate adversarial examples conditionally. Once trained, it can generate adversarial perturbations efficiently given input images and target classes. We apply AI-GAN on different datasets in white-box settings, black-box settings, and against target models protected by state-of-the-art defenses. Through extensive experiments, AI-GAN achieves high attack success rates, outperforming existing methods, and reduces the generation time significantly. Moreover, for the first time, AI-GAN successfully scales to complex datasets, e.g., CIFAR-100 and ImageNet, with success rates of about 90% among all classes.
Adversarial machine learning has been both a major concern and a hot topic recently, especially with the ubiquitous use of deep neural networks in the current landscape. Adversarial attacks and defenses are usually likened to a cat-and-mouse game in which defenders and attackers evolve over time. On one hand, the goal is to develop strong and robust deep networks that are resistant to malicious actors. On the other hand, in order to achieve that, we need to devise even stronger adversarial attacks to challenge these defense models. Most existing attacks employ a single $\ell_p$ distance (commonly, $p\in\{1,2,\infty\}$) to define the concept of closeness and perform steepest gradient ascent w.r.t. this $p$-norm to update all pixels in an adversarial example in the same way. Each of these $\ell_p$ attacks has its own pros and cons, and there is no single attack that can successfully break through defense models that are robust against multiple $\ell_p$ norms simultaneously. Motivated by these observations, we come up with a natural approach: combining various $\ell_p$ gradient projections on a pixel level to achieve a joint adversarial perturbation. Specifically, we learn how to perturb each pixel to maximize the attack performance, while maintaining the overall visual imperceptibility of adversarial examples. Finally, through various experiments with standardized benchmarks, we show that our method outperforms most current strong attacks across state-of-the-art defense mechanisms, while retaining its ability to remain visually clean.
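The building blocks of such a combined attack are the per-norm projection operators and a per-pixel blend of their outputs. The convex-combination weighting below is a simplification of the idea; the paper learns how to weight each pixel rather than taking the weights as given.

```python
import numpy as np

def project_linf(delta, eps):
    """Project a perturbation onto the l_inf ball of radius eps."""
    return np.clip(delta, -eps, eps)

def project_l2(delta, eps):
    """Project a perturbation onto the l_2 ball of radius eps."""
    norm = np.linalg.norm(delta)
    return delta if norm <= eps else delta * (eps / norm)

def mixed_projection(delta, weights, eps_inf, eps_2):
    """Per-pixel convex combination of the two projected perturbations:
    a sketch of blending l_p gradient projections at the pixel level,
    with `weights` in [0, 1] assumed rather than learned."""
    return weights * project_linf(delta, eps_inf) + (1 - weights) * project_l2(delta, eps_2)
```

Pixels with weight near 1 behave like an $\ell_\infty$ attack (uniform small changes), while pixels with weight near 0 behave like an $\ell_2$ attack (changes concentrated where the gradient is large).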
In this paper, we propose a defense strategy to improve adversarial robustness by incorporating hidden-layer representations. The key of this defense strategy is to compress or filter the input information, including adversarial perturbations. Moreover, this defense strategy can be regarded as an activation function that can be applied to any kind of neural network. We also theoretically prove the effectiveness of this defense strategy under certain conditions. In addition, by incorporating hidden-layer representations, we propose three types of adversarial attacks to generate three types of adversarial examples, respectively. Experiments show that our defense method can significantly improve the adversarial robustness of deep neural networks and achieves state-of-the-art performance even without adopting adversarial training.
Adversarial examples reveal the vulnerability and unexplained nature of neural networks, and studying the defense against adversarial examples is of considerable practical importance. Most adversarial examples that cause networks to misclassify are undetectable by humans. In this paper, we propose a defense model that trains the classifier into a human-perception classification model with shape preference. The proposed model, comprising a texture transfer network (TTN) and an auxiliary defense generative adversarial network (GAN), is called Human-perception Auxiliary Defense GAN (HAD-GAN). The TTN is used to extend the texture samples of clean images and helps the classifier focus on their shape. The GAN is used to form a training framework for the model and to generate the necessary images. A series of experiments conducted on MNIST, Fashion-MNIST and CIFAR-10 shows that the proposed model outperforms state-of-the-art defense methods in network robustness. The model also demonstrates a significant improvement in defense capability against adversarial examples.
Deep neural networks have been shown to be vulnerable to adversarial images. Conventional attacks strive for imperceptible adversarial images with strictly restricted perturbations. Recently, researchers have moved to explore distinguishable yet non-suspicious adversarial images, and have demonstrated that color transformation attacks are effective. In this work, we propose Adversarial Color Filter (AdvCF), a novel color transformation attack that is optimized with gradient information in the parameter space of a simple color filter. In particular, our color filter space is explicitly specified so that we are able to provide a systematic robustness analysis of adversarial color transformations from both the attack and defense perspectives. In contrast, existing color transformation attacks offer no opportunity for systematic analysis due to the lack of such an explicit space. We further conduct extensive comparisons between different color transformation attacks on both the success rate and image acceptability, through a user study. Additional results provide interesting new insights into the robustness of models against AdvCF in three other visual tasks. We also highlight the human-interpretability of AdvCF, which is promising in practical use scenarios, and show its superiority in terms of image acceptability and efficiency over the state-of-the-art human-interpretable color transformation attack.