Deep neural networks are vulnerable to adversarial examples, which raises security concerns about these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won the first places in the NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
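To make the momentum update concrete, here is a minimal PyTorch sketch of a momentum iterative attack in the spirit of the abstract above; the cross-entropy loss, step size, decay factor `mu`, and the `eps`/`steps` values are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def momentum_iterative_attack(model, x, y, eps=16/255, steps=10, mu=1.0):
    """Accumulate a momentum of normalized gradients and take sign steps,
    projecting back into the eps L-inf ball. Assumes `x` is a batch of
    images in [0, 1] and `model` returns logits."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # L1-normalize the current gradient, then fold it into the momentum
        grad = grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        g = mu * g + grad
        # sign step, then project into the eps ball and the valid pixel range
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv
```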
Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers by adding imperceptible perturbations. An intriguing property of adversarial examples is their good transferability, making black-box attacks feasible in real-world applications. Due to the threat of adversarial attacks, many methods have been proposed to improve the robustness. Several state-of-the-art defenses are shown to be robust against transferable adversarial examples. In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models. By optimizing a perturbation over an ensemble of translated images, the generated adversarial example is less sensitive to the white-box model being attacked and has better transferability. To improve the efficiency of attacks, we further show that our method can be implemented by convolving the gradient at the untranslated image with a pre-defined kernel. Our method is generally applicable to any gradient-based attack method. Extensive experiments on the ImageNet dataset validate the effectiveness of the proposed method. Our best attack fools eight state-of-the-art defenses at an 82% success rate on average based only on the transferability, demonstrating the insecurity of the current defense techniques.
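The claim that translation invariance can be approximated by convolving the gradient at the untranslated image with a pre-defined kernel can be illustrated with a short sketch; the depthwise Gaussian kernel and its size and sigma below are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size=15, sigma=3.0, channels=3):
    """Depthwise Gaussian kernel; size and sigma are illustrative choices."""
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    g1d = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k2d = g1d[:, None] * g1d[None, :]
    k2d = k2d / k2d.sum()
    return k2d.expand(channels, 1, size, size).clone()

def ti_smooth_gradient(grad, kernel):
    """Convolve the gradient at the untranslated image with a fixed kernel,
    approximating an average of gradients over many translated inputs."""
    pad = kernel.shape[-1] // 2
    return F.conv2d(grad, kernel, padding=pad, groups=grad.shape[1])
```

The smoothed gradient can then replace the raw gradient in any gradient-based attack step, for example the momentum update sketched earlier.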
Black-box adversarial attacks have attracted impressive attention for their practical relevance to the security of deep learning, while at the same time they are very challenging since the network architecture and internal weights of the target model are inaccessible. Based on the hypothesis that if an example remains adversarial against multiple models, it is more likely to transfer its attack capability to other models, ensemble-based adversarial attacks have proven effective for black-box attacks. However, the way of ensembling is much less investigated, and existing ensemble attacks simply fuse the outputs of all models uniformly. In this work, we treat the iterative ensemble attack as a stochastic gradient descent optimization process, in which the variance of the gradients across different models may lead to poor local optima. To this end, we propose a novel attack method called the Stochastic Variance Reduced Ensemble (SVRE) attack, which reduces the gradient variance of the ensemble models and makes full use of the ensemble attack. Empirical results on the standard ImageNet dataset demonstrate that the proposed method can boost adversarial transferability and significantly outperforms existing ensemble attacks.
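As a rough illustration of the variance-reduction idea under stated assumptions (the inner-loop length, step size, and the omission of inner momentum are simplifications, not the authors' algorithm), one outer-iteration gradient could be computed along these lines.

```python
import random
import torch
import torch.nn.functional as F

def svre_style_gradient(models, x_adv, y, inner_steps=16, inner_lr=1.6/255):
    """Sketch of one variance-reduced ensemble gradient.
    `models` is a list of white-box surrogate classifiers returning logits."""
    def grad_of(model, inp):
        inp = inp.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(inp), y)
        return torch.autograd.grad(loss, inp)[0]

    # full ensemble gradient at the anchor point
    g_full = torch.stack([grad_of(m, x_adv) for m in models]).mean(0)

    x_inner, g_acc = x_adv.clone(), torch.zeros_like(x_adv)
    for _ in range(inner_steps):
        m = random.choice(models)
        # single-model gradient corrected toward the ensemble anchor
        g_vr = grad_of(m, x_inner) - grad_of(m, x_adv) + g_full
        g_acc = g_acc + g_vr
        x_inner = x_inner + inner_lr * g_vr.sign()
    return g_acc / inner_steps  # fed into the outer (e.g., momentum) update
```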
Though CNNs have achieved the state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples, crafted by adding human-imperceptible perturbations to clean images. However, most of the existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. By evaluating our method against top defense solutions and official baselines from the NIPS 2017 adversarial competition, the enhanced attack reaches an average success rate of 73.0%, which outperforms the top-1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. Code is available at https://github.com/cihangxie/DI-2-FGSM.
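The random input transformation described above can be written compactly; the output size and transformation probability below are illustrative assumptions.

```python
import random
import torch
import torch.nn.functional as F

def diverse_input(x, out_size=330, prob=0.5):
    """Randomly resize the batch and zero-pad it to `out_size` with
    probability `prob`; otherwise return the input unchanged (a sketch of
    the diverse-input transformation)."""
    if random.random() > prob:
        return x
    in_size = x.shape[-1]
    rnd = random.randint(in_size, out_size - 1)
    x_resized = F.interpolate(x, size=(rnd, rnd), mode="nearest")
    rem = out_size - rnd
    left, top = random.randint(0, rem), random.randint(0, rem)
    return F.pad(x_resized, (left, rem - left, top, rem - top), value=0.0)
```

At each attack iteration the loss is computed on `diverse_input(x_adv)`, and since resizing and padding are differentiable, the gradient still flows back to `x_adv`.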
Deep neural networks are vulnerable to adversarial examples crafted by making imperceptible changes to the input. However, these adversarial examples are most successful in the white-box setting, where the model and its parameters are available. Finding adversarial examples that transfer to other models or that are developed in a black-box setting is significantly more difficult. In this paper, we propose the Direction-Aggregated adversarial attack to deliver transferable adversarial examples. Our method uses aggregated directions during the attack process to avoid the generated adversarial examples overfitting the white-box model. Extensive experiments on ImageNet show that our proposed method significantly improves the transferability of adversarial examples and outperforms state-of-the-art attacks, especially against adversarially robust models. The best averaged attack success rate of our proposed method reaches 94.6% against three adversarially trained models and 94.8% against five defense methods. It also reveals that current defense approaches do not prevent transferable adversarial attacks.
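One plausible way to aggregate update directions, sketched under the assumption that aggregation means averaging gradients over randomly perturbed copies of the current input (the sampling radius and count are illustrative, not the paper's exact procedure):

```python
import torch
import torch.nn.functional as F

def aggregated_gradient(model, x, y, n_samples=8, radius=8/255):
    """Average gradients taken at several randomly perturbed copies of `x`,
    so the update direction does not overfit a single point of the white-box
    loss surface."""
    agg = torch.zeros_like(x)
    for _ in range(n_samples):
        x_near = (x + torch.empty_like(x).uniform_(-radius, radius)).clamp(0, 1)
        x_near.requires_grad_(True)
        loss = F.cross_entropy(model(x_near), y)
        agg = agg + torch.autograd.grad(loss, x_near)[0]
    return agg / n_samples
```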
We introduce a three-stage pipeline: resized-diverse-inputs (RDIM), diversity-ensemble (DEM), and region fitting, which work together to generate transferable adversarial examples. We first explore the internal relationship between existing attacks and propose RDIM, which is able to exploit this relationship. We then propose DEM, the multi-scale version of RDIM, to generate multi-scale gradients. After the first two steps, we transform value fitting into region fitting across iterations. RDIM and region fitting require no extra running time, and the three steps can be well integrated into other attacks. Our best attack fools six black-box defenses with an average success rate of 93%, which is higher than state-of-the-art gradient-based attacks. Besides, we rethink existing attacks rather than simply stacking new methods on old ones to achieve better performance. We expect our findings to serve as the beginning of exploring the internal relationships between attack methods. Code is available at https://github.com/278287847/DEM.
Deep neural networks are vulnerable to adversarial examples, which can fool deep models by adding subtle perturbations. Although existing attacks have achieved promising results, there is still a long way to go to generate transferable adversarial examples under the black-box setting. To this end, this paper proposes to improve the transferability of adversarial examples by applying a two-stage feature-level perturbation to an existing model to implicitly create a set of diverse models. These models are then fused by a longitudinal ensemble during the iterations. The proposed method is termed Dual-Stage Network Erosion (DSNE). We conduct comprehensive experiments on both non-residual and residual networks and obtain more transferable adversarial examples with a computational cost similar to that of state-of-the-art methods. In particular, for residual networks, the transferability of adversarial examples can be significantly improved by biasing the residual block information toward the skip connection. Our work provides new insights into the architectural vulnerability of neural networks and presents new challenges for the robustness of neural networks.
Adversarial attacks have achieved success in "fooling" DNNs and the like, and gradient-based algorithms have become a mainstream approach. Based on the linearity hypothesis [12], under the $\ell_\infty$ constraint, applying the $sign$ operation to the gradient is a good choice for generating perturbations. However, this operation has side effects, since it leads to a directional bias between the true gradient and the perturbation. In other words, current methods contain a gap between the true gradient and the actual noise, which results in biased and inefficient attacks. Therefore, based on the Taylor expansion, we theoretically analyze the bias of the $sign$ operation and propose a correction, namely the Fast Gradient Non-sign Method (FGNM). Notably, FGNM is a general routine that can seamlessly replace the conventional $sign$ operation in gradient-based attacks at negligible extra computational cost. Extensive experiments demonstrate the effectiveness of our method. Specifically, ours outperforms them by \textbf{27.5\%} at most, and also on average. Our anonymous code is publicly available at \url{https://git.io/mm-fgnm}.
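The correction described above can be sketched as a drop-in replacement for the sign step; matching the L2 norm of the sign vector is an assumption made for illustration, not necessarily the paper's exact scaling rule.

```python
import torch

def non_sign_step(grad, alpha):
    """Rescale the raw gradient so its norm matches that of the sign step,
    then step along the exact gradient direction instead of sign(grad)."""
    flat = grad.flatten(1)
    scale = flat.sign().norm(dim=1) / flat.norm(dim=1).clamp_min(1e-12)
    return alpha * scale.view(-1, 1, 1, 1) * grad
```

In an iterative attack, `alpha * grad.sign()` would simply be replaced by `non_sign_step(grad, alpha)`, followed by the usual projection onto the $\ell_\infty$ ball.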
The fast gradient sign attack family is a popular class of methods for generating adversarial examples. However, most methods based on the fast gradient sign family cannot balance indistinguishability and transferability due to the limitations of the basic sign structure. To address this problem, we propose a method, called the Adam Iterative Fast Gradient Tanh Method (AI-FGTM), to generate indistinguishable adversarial examples with high transferability. In addition, smaller kernels and dynamic step sizes are applied to further increase the attack success rate. Extensive experiments on an ImageNet-compatible dataset show that our method generates more indistinguishable adversarial examples and achieves higher attack success rates without extra running time and resources. Our best transfer-based attack, NI-TI-DI-AITM, can fool six classic defense models with an average success rate of 89.3% and three advanced defense models with an average success rate of 82.7%, which is higher than state-of-the-art gradient-based attacks. Additionally, our method can also reduce the mean perturbation by nearly 20%. We expect that our method will serve as a new baseline for generating adversarial examples with better transferability and indistinguishability.
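A hedged sketch of the core idea (Adam-style moment estimates plus a tanh step in place of the hard sign); all constants are illustrative assumptions, and the dynamic step-size schedule of the paper is not reproduced here.

```python
import torch

def adam_tanh_step(grad, m, v, step_size, beta1=0.9, beta2=0.999,
                   scale=10.0, eps=1e-8):
    """Return one smoothed update plus the refreshed first/second moments."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    update = step_size * torch.tanh(scale * m / (v.sqrt() + eps))
    return update, m, v
```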
In recent years, with the rapid development of neural networks, the security of deep learning models has become increasingly prominent, as they are vulnerable to adversarial examples. Almost all existing gradient-based attack methods use the sign function during generation to meet the perturbation budget requirement of the $l_\infty$ norm. However, we find that the sign function may be improper for generating adversarial examples, since it modifies the exact gradient direction. We propose to remove the sign function and directly use the exact gradient direction, with a scaling factor, to generate adversarial perturbations, which improves the attack success rate of adversarial examples even with fewer perturbations. In addition, considering that the best scaling factor varies across images, we propose an adaptive scaling factor generator to seek an appropriate scaling factor for each image, which avoids the computational cost of manually searching for the scaling factor. Our method can be integrated with almost all existing gradient-based attack methods to further improve their attack success rates. Extensive experiments on the CIFAR10 and ImageNet datasets show that our method exhibits higher transferability and outperforms state-of-the-art methods.
For black-box attacks, the gap between the substitute model and the victim model is usually large, which manifests as weak attack performance. Motivated by the observation that the transferability of adversarial examples can be improved by attacking diverse models simultaneously, model augmentation methods have been proposed that simulate different models by using transformed images. However, existing transformations in the spatial domain do not translate into significantly diverse augmented models. To tackle this issue, we propose a novel spectrum simulation attack to craft more transferable adversarial examples against both normally trained and defense models. Specifically, we apply a spectrum transformation to the input, thus performing model augmentation in the frequency domain. We theoretically prove that the transformation derived from the frequency domain leads to diverse spectrum saliency maps, a proposed metric that reflects the diversity of substitute models. Notably, our method can generally be combined with existing attacks. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of our method, \textit{e.g.}, attacking nine state-of-the-art defense models with an average success rate of \textbf{95.4\%}. Our code is available at \url{https://github.com/yuyang-long/ssa}.
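A short sketch of one spectrum transformation, assuming it amounts to additive noise plus random spectral scaling in the DCT domain; `rho` and `sigma` are illustrative parameters.

```python
import numpy as np
import torch
from scipy.fft import dctn, idctn

def spectrum_transform(x, rho=0.5, sigma=16/255):
    """Perturb a batch of images in the frequency domain and return to pixels."""
    x_np = x.detach().cpu().numpy()
    noise = np.random.normal(0.0, sigma, size=x_np.shape)
    spectrum = dctn(x_np + noise, axes=(-2, -1), norm="ortho")
    mask = np.random.uniform(1 - rho, 1 + rho, size=x_np.shape)
    x_back = idctn(spectrum * mask, axes=(-2, -1), norm="ortho")
    return torch.from_numpy(x_back).to(x.device, x.dtype)
```

Gradients are then taken on several such transformed copies and averaged to drive the iterative update; in a full implementation the DCT would be done inside the autograd framework so that gradients can flow through the transformation.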
Strong adversarial examples are crucial for evaluating and enhancing the robustness of deep neural networks. Popular adversarial attack algorithms maximize a non-concave loss function using gradient ascent. However, each attack's performance is usually brittle due to insufficient information (only one input example, few white-box source models, and unknown defense strategies). Hence, the crafted adversarial examples are prone to overfit the source model, which limits their transferability to unidentified architectures. In this paper, we propose Multiple Asymptotically Normal Distribution Attacks (MultiANDA), a novel method that explicitly characterizes adversarial perturbations from a learned distribution. Specifically, we approximate the posterior distribution over the perturbations by taking advantage of the asymptotic normality property of stochastic gradient ascent (SGA), and then apply an ensemble strategy to this procedure to estimate a Gaussian mixture model for better exploration of the potential optimization space. Drawing perturbations from the learned distribution allows us to generate any number of adversarial examples for each input. The approximated posterior essentially describes the stationary distribution of SGA iterations, which captures the geometric information around the local optimum. Thus, the samples drawn from the distribution reliably maintain transferability. Our proposed method outperforms nine state-of-the-art black-box attacks on deep learning models with or without defenses, as demonstrated by extensive experiments on seven normally trained and seven defense models.
Adversarial attacks with improved transferability, i.e., the ability of an adversarial example crafted on a known model to also fool unknown models, have recently received much attention due to their practicality. Nevertheless, existing transferable attacks craft perturbations in a deterministic manner and often fail to fully explore the loss surface, thus falling into poor local optima and suffering from low transferability. To solve this problem, we propose the Attentive Diversity Attack (ADA), which disrupts diverse salient features in a stochastic manner to improve transferability. Primarily, we perturb the image attention to disrupt universal features shared by different models. Then, to effectively avoid poor local optima, we disrupt these features in a stochastic manner and explore the search space of transferable perturbations more exhaustively. More specifically, we use a generator to produce adversarial perturbations, each of which disturbs features in different ways depending on an input latent code. Extensive experimental evaluations demonstrate the effectiveness of our method, outperforming the transferability of state-of-the-art methods. Code is available at https://github.com/wkim97/ada.
Transfer-based adversarial examples are among the most important classes of black-box attacks. However, there is a trade-off between the transferability and imperceptibility of adversarial perturbations. Prior work in this direction often requires a fixed but large $\ell_p$-norm perturbation budget to reach a good transfer success rate, leading to perceptible adversarial perturbations. On the other hand, most current unrestricted adversarial attacks that aim to generate semantic-preserving perturbations suffer from weak transferability to the target model. In this work, we propose a geometry-aware framework to generate transferable adversarial examples with minimal changes. Analogous to model selection in statistical machine learning, we leverage a validation model to select the optimal perturbation budget for each image under both the $\ell_{\infty}$-norm and the unrestricted threat model. Extensive experiments verify the effectiveness of our framework in balancing imperceptibility and transferability. The methodology is the basis of our entry to the CVPR'21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet, in which we ranked 1st place out of 1,559 teams and surpassed the runner-up submissions by 4.59% and 23.91% in terms of final score and average image quality level, respectively. Code is available at https://github.com/equationliu/ga-attack.
Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, recent research on DNNs has indicated ever-increasing concern about their robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of a well-trained DNN by demonstrating the ability to generate barely noticeable (to both humans and machines) adversarial images that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable by simply training and attacking a substitute model built upon the target model, known as a black-box attack on DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack and importance sampling techniques to efficiently attack black-box models.
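The gradient-free estimation at the heart of this approach can be illustrated with a symmetric-difference sketch; the coordinate batch, smoothing parameter `h`, and `loss_fn` interface are assumptions for illustration.

```python
import torch

def zeroth_order_coordinate_gradients(loss_fn, x, coords, h=1e-4):
    """Estimate partial derivatives of a scalar attack loss by querying only
    loss values (no backpropagation), one flat pixel coordinate at a time."""
    grads = torch.zeros(len(coords))
    flat = x.flatten()
    for i, c in enumerate(coords):
        e = torch.zeros_like(flat)
        e[c] = h
        plus = loss_fn((flat + e).view_as(x))
        minus = loss_fn((flat - e).view_as(x))
        grads[i] = (plus - minus) / (2 * h)  # symmetric difference quotient
    return grads
```

Coordinate descent then updates only the queried coordinates, while importance sampling or dimension reduction decides which coordinates to query next.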
Although leveraging the transferability of adversarial examples can achieve a high attack success rate for non-targeted attacks, it does not work well in targeted attacks, since the gradient directions from a source image to the target class are usually different across DNNs. To increase the transferability of targeted attacks, recent studies align the features of the generated adversarial examples with the feature distribution of the target class learned from an auxiliary network or a generative adversarial network. However, these works assume that the training dataset is available and require a large amount of time to train the network, which makes them hard to apply in the real world. In this paper, we revisit adversarial examples with targeted transferability from the perspective of universality and find that highly universal adversarial perturbations tend to be more transferable. Based on this observation, we propose the Locality of Images (LI) attack to improve targeted transferability. Specifically, instead of using only the classification loss, LI introduces a feature similarity loss between the adversarially perturbed original image and randomly cropped images, which makes the features of the adversarial perturbation more dominant than those of the benign image, thus improving targeted transferability. By incorporating the locality of images into optimizing perturbations, the LI attack emphasizes that targeted perturbations should be relevant to diverse input patterns, even local image patches. Extensive experiments show that LI can achieve high success rates for transfer-based targeted attacks. When attacking the ImageNet-compatible dataset, LI yields an improvement of 12\% over existing state-of-the-art methods.
Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with stronger robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks (Kurakin et al., 2017c). However, subsequent work found that more elaborate black-box attacks could significantly enhance transferability and reduce the accuracy of our models.
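The "small random step followed by a gradient step" attack described above can be sketched as follows; the split of the budget between the random and gradient steps is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def random_step_fgsm(model, x, y, eps=16/255, alpha=8/255):
    """Escape the non-smooth vicinity of the data point with a random step,
    then take a single sign-gradient step within the remaining budget."""
    x_prime = torch.clamp(x + alpha * torch.sign(torch.randn_like(x)), 0, 1)
    x_prime.requires_grad_(True)
    loss = F.cross_entropy(model(x_prime), y)
    grad = torch.autograd.grad(loss, x_prime)[0]
    x_adv = x_prime.detach() + (eps - alpha) * grad.sign()
    return torch.clamp(x_adv, 0, 1)
```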
Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomization at inference time to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method provides the following advantages: 1) no additional training or fine-tuning, 2) very few additional computations, 3) compatible with other adversarial defense methods. Combining the proposed randomization method with an adversarially trained model achieves a normalized score of 0.924 (ranked No. 2 among 107 defense teams) in the NIPS 2017 adversarial examples defense challenge, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No. 56). The code is publicly available at https://github.com/cihangxie/NIPS2017_adv_challenge_defense.
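A minimal sketch of the inference-time randomization described above, wrapping an arbitrary classifier; the target padded size is an illustrative assumption, and the wrapped model is assumed to accept the padded input resolution.

```python
import random
import torch.nn as nn
import torch.nn.functional as F

class RandomizationDefense(nn.Module):
    """Apply random resizing and random zero-padding before the wrapped model."""
    def __init__(self, model, pad_to=331):
        super().__init__()
        self.model = model
        self.pad_to = pad_to

    def forward(self, x):
        rnd = random.randint(x.shape[-1], self.pad_to - 1)
        x = F.interpolate(x, size=(rnd, rnd), mode="nearest")
        rem = self.pad_to - rnd
        left, top = random.randint(0, rem), random.randint(0, rem)
        x = F.pad(x, (left, rem - left, top, rem - top), value=0.0)
        return self.model(x)
```

Since the transformation changes at every forward pass, a gradient computed on one realization is only a noisy estimate for the next, which is what blunts iterative attacks.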
Deep neural networks (DNNs) for image classification are known to be vulnerable to adversarial examples. Moreover, adversarial examples are transferable, which means that an adversarial example for one DNN model can fool other black-box models with a non-trivial probability. This gives rise to transfer-based adversarial attacks, where adversarial examples generated by a pretrained or known model (called the surrogate model) are used to conduct black-box attacks. There is some work on how to generate adversarial examples from a given surrogate model to achieve better transferability. However, training a special surrogate model to generate adversarial examples with better transferability is relatively under-explored. In this paper, we propose a method for training a surrogate model rich in dark knowledge to boost the adversarial transferability of the adversarial examples it generates. The trained surrogate model is named the dark surrogate model (DSM), and the proposed method for training the DSM consists of two key components: a teacher model that extracts dark knowledge and provides soft labels, and mixing augmentation skills that enhance the dark knowledge of the training data. Extensive experiments have been conducted to show that the proposed method can substantially improve the adversarial transferability of surrogate models across different architectures and optimizers used for generating adversarial examples. We also show that the proposed method can be applied to other scenarios of transfer-based attacks that contain dark knowledge, such as face verification.
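One plausible reading of the "dark knowledge" component is a standard soft-label distillation loss from the teacher; the temperature and loss form below are assumptions rather than the paper's exact recipe.

```python
import torch.nn.functional as F

def dark_knowledge_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 as in standard knowledge distillation."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```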
A malicious attacker can generate targeted adversarial examples by imposing human-imperceptible noise on images, forcing neural network models to produce specific incorrect outputs. With cross-model transferable adversarial examples, the vulnerability of neural networks remains even if the model information is kept secret from the attacker. Recent studies have shown the effectiveness of ensemble-based methods in generating transferable adversarial examples. However, existing methods fall short in creating targeted attacks that transfer across different models. In this work, we propose Diversified Weight Pruning (DWP) to further enhance ensemble-based methods by leveraging the weight pruning technique commonly used in model compression. Specifically, we obtain multiple diverse models via a random weight pruning method. These models preserve similar accuracy and can serve as additional models for ensemble-based methods, yielding stronger transferable targeted attacks. Experiments on the ImageNet-compatible dataset under the more challenging scenarios are provided: transferring to distinct architectures and to adversarially trained models. The results show that our proposed DWP improves the targeted attack success rate by up to 4.1% and 8.0% over the combination of state-of-the-art methods, respectively.
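A sketch of how additional surrogates might be obtained by random weight pruning, assuming unstructured pruning of convolutional and linear layers; the pruning ratio and the choice of layers are illustrative.

```python
import copy
import torch.nn as nn
import torch.nn.utils.prune as prune

def diversified_pruned_models(model, n_models=3, amount=0.1):
    """Create several copies of `model`, each with a different random sparsity
    pattern, to serve as extra members of an ensemble-based attack."""
    surrogates = []
    for _ in range(n_models):
        m = copy.deepcopy(model)
        for module in m.modules():
            if isinstance(module, (nn.Conv2d, nn.Linear)):
                prune.random_unstructured(module, name="weight", amount=amount)
                prune.remove(module, "weight")  # bake the mask into the weights
        surrogates.append(m.eval())
    return surrogates
```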