Deep neural networks have recently been found to be vulnerable to attack: carefully designed inputs, called adversarial examples, can cause a network to make incorrect predictions. Depending on the scenario, the goal, and the capability of the attacker, the difficulty of the attack varies, and the resulting examples are often hard to transfer. The question is: does an attack exist that can satisfy all of these requirements? In this paper, we answer this question by generating an attack that meets these conditions. We learn a universal mapping from source inputs to adversarial examples. These adversarial examples can fool classification networks into predicting any target class and exhibit strong transferability. Our code is released at: xxxxx.
With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have recently been found vulnerable to well-designed input samples, called adversarial examples. Adversarial perturbations are imperceptible to humans but can easily fool deep neural networks in the testing/deploying stage. The vulnerability to adversarial examples has become one of the major risks for applying deep neural networks in safety-critical environments. Therefore, attacks on and defenses against adversarial examples have drawn great attention. In this paper, we review recent findings on adversarial examples for deep neural networks, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications of adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.
Although many efforts have been made on attacks and defenses in the 2D image domain in recent years, few have explored the vulnerability of 3D models. Existing 3D attackers generally perform point-wise perturbations over point clouds, resulting in deformed structures or outliers that are easily perceivable by humans. Moreover, their adversarial examples are generated under the white-box setting and often suffer from low success rates when transferred to attack remote black-box models. In this paper, we study 3D point cloud attacks from two new and challenging perspectives by proposing a novel Imperceptible Transfer Attack (ITA): 1) Imperceptibility: we constrain the perturbation direction of each point along the normal vector of its neighborhood surface, leading to generated examples with similar geometric properties and thus enhancing imperceptibility. 2) Transferability: we develop an adversarial transformation model to generate the most harmful distortions and enforce the adversarial examples to resist them, improving their transferability to unknown black-box models. Furthermore, we propose to defend against such ITA attacks by training more robust black-box 3D models that learn more discriminative point cloud representations. Extensive evaluations demonstrate that our ITA attack is more imperceptible and transferable than state-of-the-art ones, and validate the superiority of our defense strategy.
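A minimal NumPy sketch of the imperceptibility idea described above: restrict each point's perturbation to lie along the normal of its local neighborhood surface. The normal estimation (PCA over k nearest neighbors) and the step size `epsilon` are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def estimate_normals(points, k=16):
    """Estimate a unit normal per point as the least-significant PCA axis of its k nearest neighbors."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.argsort(dists, axis=1)[:, 1:k + 1]              # indices of the k nearest neighbors (excluding self)
    normals = np.empty_like(points)
    for i, idx in enumerate(knn):
        nbrs = points[idx] - points[idx].mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)   # smallest singular direction = surface normal
        normals[i] = vt[-1]
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)

def perturb_along_normals(points, raw_grad, epsilon=0.01):
    """Keep only the component of an attack gradient along each point's estimated normal."""
    normals = estimate_normals(points)
    coeff = np.sum(raw_grad * normals, axis=1, keepdims=True)  # signed projection onto the normal
    return points + epsilon * np.sign(coeff) * normals

# Toy usage: 256 random points and a random "gradient" standing in for a real attack step.
pts = np.random.rand(256, 3).astype(np.float32)
grad = np.random.randn(256, 3).astype(np.float32)
adv_pts = perturb_along_normals(pts, grad)
```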
Neural networks are known to be vulnerable to adversarial examples: inputs that are close to natural inputs but classified incorrectly. In order to better understand the space of adversarial examples, we survey ten recent proposals that are designed for detection and compare their efficacy. We show that all can be defeated by constructing new loss functions. We conclude that adversarial examples are significantly harder to detect than previously appreciated, and the properties believed to be intrinsic to adversarial examples are in fact not. Finally, we propose several simple guidelines for evaluating future proposed defenses.
The authors thank Nicholas Carlini (UC Berkeley) and Dimitris Tsipras (MIT) for feedback to improve the survey quality. We also acknowledge X. Huang (Uni. Liverpool), K. R. Reddy (IISC), E. Valle (UNICAMP), Y. Yoo (CLAIR) and others for providing pointers to make the survey more comprehensive.
For black-box attacks, the gap between the substitute model and the victim model is usually large, which manifests as weak attack performance. Motivated by the observation that the transferability of adversarial examples can be improved by attacking diverse models simultaneously, model augmentation methods that simulate different models by using transformed images have been proposed. However, existing transformations in the spatial domain do not translate into significantly diverse augmented models. To tackle this issue, we propose a novel spectrum simulation attack to craft more transferable adversarial examples against both normally trained and defense models. Specifically, we apply a spectrum transformation to the input and thus perform model augmentation in the frequency domain. We theoretically prove that the transformation derived from the frequency domain leads to diverse spectrum saliency maps, a metric we propose to reflect the diversity of substitute models. Notably, our method can generally be combined with existing attacks. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of our method, e.g., attacking nine state-of-the-art defense models with an average success rate of 95.4%. Our code is available at https://github.com/yuyang-long/ssa.
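A hedged sketch of the frequency-domain model augmentation described above: transform the input into the spectrum (here a per-channel 2D DCT), randomly rescale and perturb the spectrum, and transform back. The noise scale `sigma` and the rescaling range `rho` are illustrative assumptions rather than the paper's exact hyper-parameters.

```python
import numpy as np
from scipy.fft import dctn, idctn

def spectrum_transform(image, rho=0.5, sigma=16.0):
    """image: float array in [0, 255] with shape (H, W, C). Returns one spectrum-augmented copy."""
    noisy = image + np.random.normal(0.0, sigma, size=image.shape)      # random seed noise before the transform
    spectrum = dctn(noisy, axes=(0, 1), norm="ortho")                   # per-channel 2D DCT
    mask = np.random.uniform(1.0 - rho, 1.0 + rho, size=image.shape)    # random spectral rescaling
    augmented = idctn(spectrum * mask, axes=(0, 1), norm="ortho")       # back to the pixel domain
    return np.clip(augmented, 0.0, 255.0)

# Each call simulates a "different model": the same substitute network is fed a differently
# transformed input when accumulating attack gradients.
img = np.random.uniform(0, 255, size=(224, 224, 3))
aug = spectrum_transform(img)
```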
Adding perturbations via utilizing auxiliary gradient information or discarding existing details of the benign images are two common approaches for generating adversarial examples. Though visual imperceptibility is the desired property of adversarial examples, conventional adversarial attacks still generate traceable adversarial perturbations. In this paper, we introduce a novel Adversarial Attack via Invertible Neural Networks (AdvINN) method to produce robust and imperceptible adversarial examples. Specifically, AdvINN fully takes advantage of the information preservation property of Invertible Neural Networks and thereby generates adversarial examples by simultaneously adding class-specific semantic information of the target class and dropping discriminant information of the original class. Extensive experiments on CIFAR-10, CIFAR-100, and ImageNet-1K demonstrate that the proposed AdvINN method can produce more imperceptible adversarial images than the state-of-the-art methods, and that AdvINN yields more robust adversarial examples with high confidence compared to other adversarial attacks.
Deep learning (DL) has shown great success in many human-related tasks, which has led to its adoption in many computer vision applications, such as security surveillance systems, autonomous vehicles, and healthcare. Such safety-critical applications must chart their path to successful deployment once they are able to overcome safety-critical challenges. Among these challenges are the defense against and/or the detection of adversarial examples (AEs). Adversaries can carefully craft small, often imperceptible, noise called perturbations to be added to clean images to generate AEs. The aim of an AE is to fool the DL model, which makes it a potential risk for DL applications. Many test-time evasion attacks and countermeasures, i.e., defense or detection methods, have been proposed in the literature. Moreover, only a few reviews and surveys have been published that theoretically present the taxonomy of the threats and countermeasure methods, and with little focus on AE detection methods. In this paper, we focus on the image classification task and attempt to provide a survey of detection methods for test-time evasion attacks on neural network classifiers. A detailed discussion of such methods is provided, with experimental results for eight state-of-the-art detectors under different scenarios on four datasets. We also provide potential challenges and future perspectives for this research direction.
Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security-sensitive systems. We propose a high-level representation guided denoiser (HGD) as a defense for image classification. A standard denoiser suffers from the error amplification effect, in which small residual adversarial noise is progressively amplified and leads to wrong classifications. HGD overcomes this problem by using a loss function defined as the difference between the target model's outputs activated by the clean image and the denoised image. Compared with ensemble adversarial training, which is the state-of-the-art defense method on large images, HGD has three advantages. First, with HGD as a defense, the target model is more robust to either white-box or black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes. Third, HGD can be transferred to defend models other than the one guiding it. In the NIPS competition on defense against adversarial attacks, our HGD solution won first place and outperformed other models by a large margin.
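A minimal PyTorch sketch of the high-level guidance idea above: the denoiser is trained so that the guiding model's activations for the denoised adversarial image match those for the clean image. The tiny stand-in modules and the L1 criterion are assumptions for illustration, not the paper's architecture or exact loss.

```python
import torch
import torch.nn as nn

def hgd_loss(denoiser, guide_model, x_clean, x_adv):
    """Distance between the guide model's activations for the clean input and for the denoised adversarial input."""
    x_denoised = denoiser(x_adv)
    with torch.no_grad():
        target_feat = guide_model(x_clean)          # reference activations of the clean image
    denoised_feat = guide_model(x_denoised)
    return nn.functional.l1_loss(denoised_feat, target_feat)

# Toy usage with stand-in modules; a real setup would use a U-Net denoiser and a frozen classifier layer.
denoiser = nn.Conv2d(3, 3, kernel_size=3, padding=1)
guide_model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten())
for p in guide_model.parameters():
    p.requires_grad_(False)                         # only the denoiser is trained
x_clean = torch.rand(2, 3, 32, 32)
x_adv = (x_clean + 0.03 * torch.randn_like(x_clean)).clamp(0, 1)
loss = hgd_loss(denoiser, guide_model, x_clean, x_adv)
loss.backward()                                     # gradients reach the denoiser's parameters
```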
Deep learning based image recognition systems have been widely deployed on mobile devices in today's world. In recent studies, however, deep learning models have been shown to be vulnerable to adversarial examples. One variant of adversarial examples, called the adversarial patch, has drawn researchers' attention because of its strong attack ability. Although adversarial patches achieve high attack success rates, they are easily detected due to the visual inconsistency between the patches and the original images. Besides, generating adversarial patches usually requires a large amount of data in the literature, which is computationally expensive and time-consuming. To tackle these challenges, we propose an approach to generate inconspicuous adversarial patches with a single image. In our approach, we first decide the patch locations based on the perceptual sensitivity of the victim model, then produce adversarial patches in a coarse-to-fine way by utilizing multi-scale generators and discriminators. The patches are encouraged to be consistent with the background images through adversarial training while preserving strong attack ability. Our approach shows strong attack ability in the white-box setting and excellent transferability in the black-box setting, demonstrated through extensive experiments on various models with different architectures and training methods. Compared with other adversarial patches, our adversarial patches hold the most negligible risks of being detected and can evade human observation, which is supported by saliency maps and user evaluation results. Lastly, we show that our adversarial patches can be applied in the physical world.
Despite the efficiency and scalability of machine learning systems, recent studies have demonstrated that many classification methods, especially deep neural networks (DNNs), are vulnerable to adversarial examples; i.e., examples that are carefully crafted to fool a well-trained classification model while being indistinguishable from natural data to humans. This makes it potentially unsafe to apply DNNs or related methods in security-critical areas. Since this issue was identified by Biggio et al. (2013) and Szegedy et al. (2014), much work has been done in this field, including the development of attack methods to generate adversarial examples and the construction of defense techniques to guard against them. This paper aims to introduce this topic and its latest developments to the statistical community, primarily focusing on the generation and defense of adversarial examples. The computational code (in Python and R) used in the numerical experiments is publicly available for readers to explore the surveyed methods. The authors hope this paper will encourage more statisticians to work on this important and exciting field of generating and defending against adversarial examples.
In this work, we study the black-box targeted attack problem from the model discrepancy perspective. On the theoretical side, we present a generalization error bound for black-box targeted attacks, which gives a rigorous theoretical analysis for guaranteeing the success of the attack. We reveal that the attack error on a target model mainly depends on the empirical attack error on the substitute model and the maximum model discrepancy among substitute models. On the algorithmic side, we derive a new algorithm for black-box targeted attacks based on our theoretical analysis, in which we additionally minimize the maximum model discrepancy (M3D) of the substitute models when training the generator to generate adversarial examples. In this way, our model is capable of crafting highly transferable adversarial examples that are robust to the model variation, thus improving the success rate for attacking the black-box model. We conduct extensive experiments on the ImageNet dataset with different classification models, and our proposed approach outperforms existing state-of-the-art methods by a significant margin. Our codes will be released.
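A schematic rendering of the bound and objective sketched in this abstract; the notation below ($\mathcal{E}_T$ for the attack error on the target model, $\hat{\mathcal{E}}_S$ for the empirical attack error on the substitute models, $d(\cdot,\cdot)$ for a model-discrepancy measure over the substitute family $\mathcal{F}$, and generator $G$) is assumed for illustration and is not the paper's exact statement:

$$\mathcal{E}_T \;\le\; \hat{\mathcal{E}}_S \;+\; \max_{f_1, f_2 \in \mathcal{F}} d(f_1, f_2) \;+\; \text{const.}, \qquad \min_{G}\; \Big[\, \hat{\mathcal{E}}_S(G) \;+\; \lambda \max_{f_1, f_2 \in \mathcal{F}} d(f_1, f_2) \,\Big],$$

i.e., the generator is trained to keep both the empirical attack error on the substitutes and the maximum model discrepancy small, which is what the abstract describes as minimizing M3D.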
Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x′ that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test that we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.
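For reference, the widely cited $\ell_2$ formulation from this line of work has the following shape (the box-constraint handling via a change of variables is omitted here); $Z(\cdot)$ denotes the logits, $t$ the target class, $c$ a trade-off constant, and $\kappa$ the confidence parameter that yields the high-confidence examples mentioned above:

$$\min_{\delta}\; \lVert \delta \rVert_2^2 + c \cdot f(x+\delta) \quad \text{s.t. } x+\delta \in [0,1]^n, \qquad f(x') = \max\!\Big(\max_{i \neq t} Z(x')_i - Z(x')_t,\; -\kappa\Big).$$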
Existing transfer attack methods commonly assume that the attacker knows the training set (e.g., the label set and the input size) of the black-box victim model, which is usually unrealistic because in some cases the attacker cannot know this information. In this paper, we define a Generalized Transferable Attack (GTA) problem where the attacker does not know this information and is required to attack any randomly encountered image that may come from an unknown dataset. To solve the GTA problem, we propose a novel Image Classification Eraser (ICE), which trains a particular attacker to erase the classification information of any image from arbitrary datasets. Experiments on several datasets demonstrate that ICE greatly outperforms existing transfer attacks on GTA, and show that ICE uses similar texture-like noises to perturb different images from different datasets. Moreover, fast Fourier transform analysis indicates that the main components of each ICE noise are three sine waves for the R, G, and B image channels. Inspired by this interesting finding, we design a novel Sine Attack (SA) method to optimize the three sine waves. Experiments show that SA performs comparably to ICE, indicating that the three sine waves are effective and sufficient to break DNNs under the GTA setting.
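A hedged sketch of the sine-wave perturbation idea described above: one sine wave per R/G/B channel, parameterized by amplitude, frequency, and phase. The exact parameterization (here 1-D waves tiled along image rows) is an assumption for illustration, not the paper's definition.

```python
import numpy as np

def sine_perturbation(height, width, amps, freqs, phases):
    """Build an (H, W, 3) perturbation whose channel c is amps[c] * sin(2*pi*freqs[c]*x + phases[c])."""
    x = np.linspace(0.0, 1.0, width)
    waves = [a * np.sin(2.0 * np.pi * f * x + p) for a, f, p in zip(amps, freqs, phases)]
    return np.stack([np.tile(w, (height, 1)) for w in waves], axis=-1)

img = np.random.uniform(0, 255, size=(224, 224, 3))
pert = sine_perturbation(224, 224, amps=(16, 16, 16), freqs=(30, 45, 60), phases=(0.0, 0.5, 1.0))
adv = np.clip(img + pert, 0, 255)   # in SA, the three (amplitude, frequency, phase) triples would be optimized
```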
Deep neural networks (DNNs) have been proven vulnerable to adversarial examples (AEs), which are maliciously designed to fool target models. Normal examples (NEs) with imperceptible adversarial perturbations added can be a security threat to DNNs. Although existing AE detection methods have achieved high accuracy, they fail to exploit the information of the detected AEs. Therefore, based on high-dimension perturbation extraction, we propose a model-free AE detection method whose whole process requires no queries to the victim model. Research shows that DNNs are sensitive to high-dimension features. The adversarial perturbations hidden in adversarial examples belong to high-dimension features, which are highly predictive and non-robust, and DNNs learn more details from high-dimension data than from other data. In our method, a perturbation extractor extracts the adversarial perturbations from AEs as high-dimension features, and then a trained AE discriminator determines whether the input is an AE. Experimental results show that the proposed method can not only detect adversarial examples with high accuracy, but also identify the specific category of an AE. Meanwhile, the extracted perturbations can be used to recover AEs to NEs.
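A minimal PyTorch sketch of the two-stage pipeline described above: an extractor network predicts the perturbation hidden in an input, and a discriminator decides from that predicted perturbation whether the input is adversarial. Both tiny networks and the decision convention are placeholders; the paper's architectures and training losses are not specified here.

```python
import torch
import torch.nn as nn

extractor = nn.Sequential(                      # input image -> estimated perturbation
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
discriminator = nn.Sequential(                  # estimated perturbation -> AE / not-AE logit
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

x = torch.rand(4, 3, 32, 32)                    # a batch of inputs (clean or adversarial)
est_pert = extractor(x)
logit = discriminator(est_pert)                 # logit > 0 -> flag as adversarial (assumed convention)
recovered = torch.clamp(x - est_pert, 0, 1)     # the abstract notes AEs can be restored to NEs
```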
Deep neural network based face recognition models have been shown to be vulnerable to adversarial examples. However, many of the past attacks require the adversary to solve an input-dependent optimization problem using gradient descent, which makes the attack impractical in real time. These adversarial examples are also tightly coupled to the attacked model and are not as successful in transferring to different models. In this work, we propose ReFace, a real-time, highly transferable attack on face recognition models based on Adversarial Transformation Networks (ATNs). ATNs model adversarial example generation as a feed-forward neural network. We find that the white-box attack success rate of a pure U-Net ATN falls substantially short of gradient-based attacks like PGD on large face recognition datasets. We therefore propose a new architecture for ATNs that closes this gap while maintaining a 10,000x speedup over PGD. Furthermore, we find that at a given perturbation magnitude, our ATN adversarial perturbations are more effective in transferring to new face recognition models than PGD. The ReFace attack can successfully deceive commercial face recognition services in a transfer attack setting, reducing face identification accuracy from 82% to 16.4% for the AWS SearchFaces API and face verification accuracy from 91% to 50.1% for Azure.
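A minimal PyTorch sketch of an Adversarial Transformation Network as described above: a feed-forward network maps an input face image to a bounded additive perturbation in a single pass, avoiding per-input gradient-descent optimization. The tiny conv net, the L-infinity bound `eps`, and the 112x112 input size are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyATN(nn.Module):
    def __init__(self, eps=8.0 / 255.0):
        super().__init__()
        self.eps = eps
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        delta = self.eps * torch.tanh(self.body(x))   # bound the perturbation to [-eps, eps]
        return torch.clamp(x + delta, 0.0, 1.0)

atn = TinyATN()
faces = torch.rand(4, 3, 112, 112)                    # a batch of (assumed) 112x112 face crops
adv_faces = atn(faces)                                 # one forward pass per image: real-time generation
```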
The intriguing phenomenon of adversarial examples has attracted significant attention in machine learning, and what might be more surprising to the community is the existence of universal adversarial perturbations (UAPs), i.e., a single perturbation that can fool a target DNN. With a focus on deep classifiers, this survey summarizes recent progress on universal adversarial attacks, discussing the challenges on both the attack and defense sides, as well as the reasons for the existence of UAPs. We aim to extend this work as a dynamic survey that will regularly update its content to follow new works on UAPs or universal attacks in a wide range of domains, such as images, audio, video, text, etc. Relevant updates will be discussed at: https://bit.ly/2sbqlgg. We welcome authors of future works in this field to contact us to have their new findings included.
Although exploiting the transferability of adversarial examples can attain a high attack success rate for non-targeted attacks, it does not work well in targeted attacks, since the gradient directions from a source image to a targeted class usually differ across DNNs. To increase the transferability of targeted attacks, recent studies align the features of the generated adversarial examples with the feature distribution of the targeted class learned from an auxiliary network or a generative adversarial network. However, these works assume that the training dataset is available and require a large amount of time to train networks, which makes them hard to apply in the real world. In this paper, we revisit adversarial examples with targeted transferability from the perspective of universality and find that highly universal adversarial perturbations tend to be more transferable. Based on this observation, we propose the Locality of Images (LI) attack to improve targeted transferability. Specifically, instead of using the classification loss alone, LI introduces a feature similarity loss between the adversarially perturbed original image and randomly cropped images, which makes the features of the adversarial perturbation more dominant than those of the benign image, hence improving targeted transferability. By incorporating the locality of images into optimizing perturbations, the LI attack emphasizes that targeted perturbations should be universal with respect to diverse input patterns, even local image patches. Extensive experiments demonstrate that LI can achieve high success rates for transfer-based targeted attacks. On attacking the ImageNet-compatible dataset, LI yields an improvement of 12% compared with existing state-of-the-art methods.
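A hedged PyTorch sketch of the feature-similarity idea described above: encourage the features of the adversarially perturbed image and of a randomly cropped (then resized) version of it to stay close, so the perturbation dominates over the benign content. The cosine-similarity criterion, crop scale, and the stand-in feature network are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

def li_feature_loss(features, x_adv, crop_size=160, out_size=224):
    """Negative cosine similarity between features of the perturbed image and of a random crop of it."""
    crop = T.Compose([T.RandomCrop(crop_size), T.Resize(out_size)])
    x_crop = crop(x_adv)
    f_adv = features(x_adv).flatten(1)
    f_crop = features(x_crop).flatten(1)
    # Maximizing cosine similarity == minimizing its negative.
    return -nn.functional.cosine_similarity(f_adv, f_crop, dim=1).mean()

features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
x_adv = torch.rand(2, 3, 224, 224, requires_grad=True)
loss = li_feature_loss(features, x_adv)          # would be added to the usual classification loss
loss.backward()
```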
Owing to their success in "fooling" DNNs, adversarial attacks with gradient-based algorithms have become mainstream. Based on the linearity hypothesis [12], under the $\ell_\infty$ constraint, applying the $sign$ operation to the gradients is a good choice for generating perturbations. However, this operation has side effects, as it leads to a deviation in direction between the real gradients and the perturbations. In other words, current methods contain a gap between real gradients and actual noises, which leads to biased and inefficient attacks. Therefore, based on the Taylor expansion, the bias is analyzed theoretically and a correction of $sign$, i.e., the Fast Gradient Non-sign Method (FGNM), is proposed. Notably, FGNM is a general routine that can seamlessly replace the conventional $sign$ operation in gradient-based attacks with negligible extra computational cost. Extensive experiments demonstrate the effectiveness of our method. Specifically, ours outperforms them by 27.5% at most and also outperforms them on average. Our anonymous code is publicly available at: https://git.io/mm-fgnm.
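A hedged NumPy sketch of the general idea above: instead of taking sign(g), keep the gradient's direction and rescale it to a comparable magnitude, removing the directional bias the sign operation introduces. The particular rescaling shown here (matching the L2 norm of sign(g)) is an illustrative choice and may differ from FGNM's exact rule.

```python
import numpy as np

def sign_step(grad, eps):
    """Conventional step: eps * sign(g), which distorts the gradient direction."""
    return eps * np.sign(grad)

def non_sign_step(grad, eps):
    """Keep the true gradient direction, rescaled to the magnitude a sign step would have."""
    scale = np.linalg.norm(np.sign(grad)) / (np.linalg.norm(grad) + 1e-12)
    return eps * scale * grad

g = np.random.randn(3, 224, 224)
print(np.linalg.norm(sign_step(g, 8 / 255)), np.linalg.norm(non_sign_step(g, 8 / 255)))  # comparable norms
```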
Video classification systems are vulnerable to adversarial attacks, which can create severe security problems in video verification. Current black-box attacks need a large number of queries to succeed, resulting in high computational overhead in the process of attack. On the other hand, attacks with restricted perturbations are ineffective against defenses such as denoising or adversarial training. In this paper, we focus on unrestricted perturbations and propose StyleFool, a black-box video adversarial attack via style transfer to fool the video classification system. StyleFool first utilizes color theme proximity to select the best style image, which helps avoid unnatural details in the stylized videos. Meanwhile, the target class confidence is additionally considered in targeted attacks to influence the output distribution of the classifier by moving the stylized video closer to or even across the decision boundary. A gradient-free method is then employed to further optimize the adversarial perturbations. We carry out extensive experiments to evaluate StyleFool on two standard datasets, UCF-101 and HMDB-51. The experimental results demonstrate that StyleFool outperforms the state-of-the-art adversarial attacks in terms of both the number of queries and the robustness against existing defenses. Moreover, 50% of the stylized videos in untargeted attacks do not need any query since they can already fool the video classification model. Furthermore, we evaluate the indistinguishability through a user study to show that the adversarial samples of StyleFool look imperceptible to human eyes, despite unrestricted perturbations.