Transferable adversarial attacks optimize adversaries from a pretrained surrogate model and a known label space to fool unknown black-box models. These attacks are therefore restricted by the availability of an effective surrogate model. In this work, we relax this assumption and propose Adversarial Pixel Restoration as a self-supervised alternative to train an effective surrogate model from scratch, under the condition of no labels and few data samples. Our training approach is based on a min-max objective that reduces overfitting via an adversarial objective and thus optimizes for a more generalizable surrogate model. Our proposed attack is complementary to adversarial pixel restoration and is independent of any task-specific objective, as it can be launched in a self-supervised manner. We successfully demonstrate the adversarial transferability of our approach to Vision Transformers as well as Convolutional Neural Networks for the tasks of classification, object detection, and video segmentation. Our code and pre-trained surrogate models are available at: https://github.com/hashmatshadab/apr
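As a rough illustration of the min-max objective described above, the hedged sketch below pairs a pixel-shuffling pretext task with an inner adversarial maximization; the function names and the exact restoration task are assumptions for illustration, not the repository's implementation.

```python
import torch
import torch.nn.functional as F

def apr_style_step(surrogate, x, optimizer, eps=8/255, alpha=2/255, steps=5):
    """One min-max step: the inner loop perturbs the pretext input to maximize
    the restoration loss; the outer step trains the (image-to-image)
    surrogate to restore the original pixels despite that perturbation."""
    b, c, h, w = x.shape
    perm = torch.randperm(h * w, device=x.device)          # destroy pixel structure
    x_shuf = x.flatten(2)[:, :, perm].view(b, c, h, w)

    delta = torch.zeros_like(x_shuf, requires_grad=True)   # inner maximization
    for _ in range(steps):
        loss = F.mse_loss(surrogate(x_shuf + delta), x)    # restoration loss
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)

    optimizer.zero_grad()                                  # outer minimization
    F.mse_loss(surrogate(x_shuf + delta.detach()), x).backward()
    optimizer.step()
```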
Vision Transformers (ViTs) process input images as sequences of patches via self-attention, a radically different architecture than convolutional neural networks (CNNs). This makes it interesting to study the adversarial feature space of ViT models and its transferability. In particular, we observe that adversarial patterns found via conventional adversarial attacks show very low black-box transferability even for large ViT models. However, we show that this phenomenon is only due to sub-optimal attack procedures that do not leverage the true representation potential of ViTs. A deep ViT is composed of multiple blocks with a consistent architecture comprising self-attention and feed-forward layers, where each block is capable of independently producing a class token. Formulating an attack using only the last class token (the conventional approach) does not directly leverage the discriminative information stored in the earlier tokens, leading to poor adversarial transferability of ViTs. Using the compositional nature of ViT models, we enhance the transferability of existing attacks by introducing two novel strategies specific to the architecture of ViT models. (i) Self-Ensemble: we propose a method to find multiple discriminative pathways by dissecting a single ViT model into an ensemble of networks. This allows explicitly utilizing class-specific information at each ViT block. (ii) Token Refinement: we propose to refine the tokens to further enhance the discriminative capacity at each block of the ViT. Our token refinement systematically combines the class tokens with the structural information preserved in patch tokens. An adversarial attack, when applied to such refined tokens of the ensemble of classifiers found within a single vision transformer, has significantly higher transferability.
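The self-ensemble idea lends itself to a short sketch. The code below is a hedged approximation assuming a timm-style ViT (with `patch_embed`, `cls_token`, `pos_embed`, `blocks`, `norm`, and `head` attributes); the paper's token-refinement module is omitted.

```python
import torch
import torch.nn.functional as F

def self_ensemble_loss(vit, x, y):
    """Score the class token after every block with the shared head, so the
    attack objective aggregates discriminative signals from all depths."""
    tokens = vit.patch_embed(x)
    cls = vit.cls_token.expand(x.shape[0], -1, -1)
    tokens = torch.cat([cls, tokens], dim=1) + vit.pos_embed
    loss = 0.0
    for block in vit.blocks:
        tokens = block(tokens)
        class_token = vit.norm(tokens)[:, 0]   # this block's class token
        loss = loss + F.cross_entropy(vit.head(class_token), y)
    return loss  # maximize w.r.t. x to craft the adversarial example
```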
In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget. Due to the limited feedback information, existing query-based black-box attack methods often require many queries for attacking each benign example. To reduce query cost, we propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability. Specifically, by treating the attack on each benign example as one task, we develop a meta-learning framework by training a meta-generator to produce perturbations conditioned on benign examples. When attacking a new benign example, the meta-generator can be quickly fine-tuned based on the feedback information of the new task as well as a few historical attacks to produce effective perturbations. Moreover, since the meta-train procedure consumes many queries to learn a generalizable generator, we utilize model-level adversarial transferability to train the meta-generator on a white-box surrogate model, then transfer it to help attack the target model. The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack method to boost its performance, as verified by extensive experiments.
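A hedged sketch of the task-level adaptation step might look as follows; `query_feedback_loss` is an assumed differentiable handle (for instance, built from query responses plus a surrogate estimate), not an interface from the paper.

```python
import copy
import torch

def finetune_meta_generator(meta_gen, x_new, query_feedback_loss, steps=5, lr=1e-3):
    """Task-level adaptation: starting from the meta-learned weights, take a
    few gradient steps driven by the feedback collected on the new benign
    example. `query_feedback_loss` maps a perturbation to a scalar loss."""
    gen = copy.deepcopy(meta_gen)               # keep the meta-weights intact
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    for _ in range(steps):
        loss = query_feedback_loss(gen(x_new))  # lower loss = better attack
        opt.zero_grad()
        loss.backward()
        opt.step()
    return gen                                  # task-specific generator
```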
Adversarial attacks with improved transferability, that is, the ability of an adversarial example crafted on a known model to also fool unknown models, have recently received much attention due to their practicality. Nevertheless, existing transferable attacks craft perturbations in a deterministic manner and often fail to fully explore the loss surface, thus falling into poor local optima and suffering from low transferability. To solve this problem, we propose the Attentive Diversity Attack (ADA), which disrupts diverse salient features in a stochastic manner to improve transferability. First, we perturb the image attention to disrupt universal features shared by different models. Then, to effectively avoid poor local optima, we disrupt these features in a stochastic manner and explore the search space of transferable perturbations more exhaustively. More specifically, we use a generator to produce adversarial perturbations, each of which disturbs features in a different way depending on an input latent code. Extensive experimental evaluations demonstrate the effectiveness of our method, which outperforms the transferability of state-of-the-art methods. Code is available at https://github.com/wkim97/ada.
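A minimal sketch of latent-conditioned perturbation sampling is given below, assuming an already-trained generator with a hypothetical `generator(x, z)` interface; the actual ADA training objective on attention space is not reproduced here.

```python
import torch
import torch.nn.functional as F

def ada_style_attack(generator, surrogate, x, y, eps=16/255, n_codes=8):
    """Sample several latent codes, map each to a differently structured
    perturbation, and keep the candidate maximizing the surrogate's loss."""
    best_x = x
    best_loss = torch.full((x.size(0),), -float("inf"), device=x.device)
    for _ in range(n_codes):
        z = torch.randn(x.size(0), 128, device=x.device)   # latent code
        x_adv = (x + eps * generator(x, z).tanh()).clamp(0, 1)
        loss = F.cross_entropy(surrogate(x_adv), y, reduction="none")
        better = loss > best_loss
        best_x = torch.where(better.view(-1, 1, 1, 1), x_adv, best_x)
        best_loss = torch.where(better, loss, best_loss)
    return best_x
```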
Deep convolutional neural networks (CNNs) can easily be fooled by subtle, imperceptible changes to their input images. To address this vulnerability, adversarial training creates perturbation patterns and includes them in the training set to robustify the model. In contrast to existing adversarial training methods that only use class-boundary information (e.g., via a cross-entropy loss), we propose to exploit additional information from the feature space to craft stronger adversaries, which are in turn used to learn a robust model. Specifically, we use the style and content information of a target sample from another class, alongside its class-boundary information, to create adversarial perturbations. We apply our proposed multi-task objective in a deeply supervised manner, extracting multi-scale feature knowledge to create maximally separating adversaries. Subsequently, we propose a max-margin adversarial training approach that minimizes the distance between a source image and its adversary while maximizing the distance between the adversary and the target image. Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses, generalizes well to naturally occurring corruptions and data distribution shifts, and retains the model's accuracy on clean examples.
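The max-margin objective can be read as a triplet-style loss; the distance measure and margin in this hedged sketch are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def max_margin_loss(feat_src, feat_adv, feat_tgt, margin=1.0):
    """Triplet-style reading of the max-margin objective: keep the adversary's
    features close to its source image while pushing them away from the
    target-class image, up to a margin."""
    d_src = F.pairwise_distance(feat_adv.flatten(1), feat_src.flatten(1))
    d_tgt = F.pairwise_distance(feat_adv.flatten(1), feat_tgt.flatten(1))
    return F.relu(d_src - d_tgt + margin).mean()
```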
Deep neural networks are vulnerable to adversarial examples, which raises security concerns about these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won first place in both the NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
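The momentum update is simple enough to state directly. The sketch below follows the update rule from the paper (accumulate the L1-normalized gradient, then step along its sign); the hyperparameter defaults are illustrative.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16/255, steps=10, mu=1.0):
    """Momentum Iterative FGSM: accumulate the L1-normalized gradient into a
    momentum buffer, then step along the sign of that buffer."""
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize by the mean absolute value (proportional to the L1 norm).
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = torch.max(torch.min(x_adv.detach() + alpha * g.sign(), x + eps),
                          x - eps).clamp(0, 1)
    return x_adv
```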
In this work, we study the black-box targeted attack problem from the model discrepancy perspective. On the theoretical side, we present a generalization error bound for black-box targeted attacks, which gives a rigorous theoretical analysis for guaranteeing the success of the attack. We reveal that the attack error on a target model mainly depends on the empirical attack error on the substitute model and the maximum model discrepancy among substitute models. On the algorithmic side, we derive a new algorithm for black-box targeted attacks based on our theoretical analysis, in which we additionally minimize the maximum model discrepancy (M3D) of the substitute models when training the generator to generate adversarial examples. In this way, our model is capable of crafting highly transferable adversarial examples that are robust to model variation, thus improving the success rate of attacking the black-box model. We conduct extensive experiments on the ImageNet dataset with different classification models, and our proposed approach outperforms existing state-of-the-art methods by a significant margin. Our code will be released.
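A hedged sketch of the resulting min-max game between the generator and two substitute models is shown below; the KL-based discrepancy measure and the uniform loss weighting are assumptions chosen for brevity.

```python
import torch
import torch.nn.functional as F

def m3d_step(generator, f1, f2, opt_g, opt_f, x, y_target, eps=16/255):
    """One hypothetical round of the min-max game: the substitutes are pushed
    apart on generated adversarial examples, and the generator then crafts
    perturbations that fool both substitutes despite that discrepancy."""
    x_adv = (x + eps * generator(x).tanh()).clamp(0, 1)

    # Substitute step: maximize disagreement (discrepancy) on x_adv.
    disc = F.kl_div(F.log_softmax(f1(x_adv.detach()), dim=-1),
                    F.softmax(f2(x_adv.detach()), dim=-1), reduction="batchmean")
    opt_f.zero_grad()
    (-disc).backward()
    opt_f.step()

    # Generator step: targeted attack loss on both substitutes plus the
    # discrepancy term, so the perturbation is robust to model variation.
    attack = F.cross_entropy(f1(x_adv), y_target) + F.cross_entropy(f2(x_adv), y_target)
    disc = F.kl_div(F.log_softmax(f1(x_adv), dim=-1),
                    F.softmax(f2(x_adv), dim=-1), reduction="batchmean")
    opt_g.zero_grad()
    (attack + disc).backward()
    opt_g.step()
```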
Adversarial training is an effective approach to make deep neural networks robust against adversarial attacks. Recently, different adversarial training defenses have been proposed that not only maintain high clean accuracy but also show significant robustness against popular and well-studied adversarial attacks such as PGD. High adversarial robustness can also arise if an attack fails to find adversarial gradient directions, a phenomenon known as 'gradient masking'. In this work, we analyse the effect of label smoothing on adversarial training as one of the potential causes of gradient masking. We then develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA). Our attack approach is based on a 'match and deceive' loss that finds optimal adversarial directions through guidance from a surrogate model. Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size. Furthermore, our proposed G-PGA is generic, so it can be combined with an ensemble attack strategy, as we demonstrate for the case of Auto-Attack, leading to efficiency and convergence speed improvements. More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
This work presents the first analysis of the robustness of self-supervised Vision Transformers trained with DINO against adversarial attacks. First, we evaluate whether features learned through self-supervision are more robust to adversarial attacks than those emerging from supervised learning. Then, we present properties of attacks in the latent space. Finally, we evaluate whether three well-known defense strategies can increase adversarial robustness in downstream tasks by fine-tuning only the classification head, providing robustness even under limited computational resources. These defense strategies are: adversarial training, ensemble adversarial training, and ensembles of specialized networks.
Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers by adding imperceptible perturbations. An intriguing property of adversarial examples is their good transferability, making black-box attacks feasible in real-world applications. Due to the threat of adversarial attacks, many methods have been proposed to improve the robustness. Several state-of-the-art defenses are shown to be robust against transferable adversarial examples. In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models. By optimizing a perturbation over an ensemble of translated images, the generated adversarial example is less sensitive to the white-box model being attacked and has better transferability. To improve the efficiency of attacks, we further show that our method can be implemented by convolving the gradient at the untranslated image with a pre-defined kernel. Our method is generally applicable to any gradient-based attack method. Extensive experiments on the ImageNet dataset validate the effectiveness of the proposed method. Our best attack fools eight state-of-the-art defenses at an 82% success rate on average based only on the transferability, demonstrating the insecurity of the current defense techniques.
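The gradient-smoothing trick is easy to drop into any gradient-based attack. The sketch below builds a Gaussian kernel (the paper also discusses uniform and linear kernels) and applies it depthwise; replacing the raw gradient with `ti_gradient(grad)` inside any FGSM-style loop yields the translation-invariant variant.

```python
import torch
import torch.nn.functional as F

def ti_gradient(grad, kernel_size=15, sigma=3.0):
    """Convolve the gradient at the untranslated image with a pre-defined
    kernel, approximating an average over an ensemble of translated inputs."""
    ax = torch.arange(kernel_size, dtype=torch.float32) - kernel_size // 2
    g1d = torch.exp(-ax ** 2 / (2 * sigma ** 2))
    k2d = torch.outer(g1d, g1d)
    k2d = (k2d / k2d.sum()).to(grad.device)
    # One copy of the kernel per input channel, applied depthwise.
    weight = k2d.expand(grad.size(1), 1, kernel_size, kernel_size).contiguous()
    return F.conv2d(grad, weight, padding=kernel_size // 2, groups=grad.size(1))
```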
Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with stronger robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks (Kurakin et al., 2017c). However, subsequent work found that more elaborate black-box attacks could significantly enhance transferability and reduce the accuracy of our models.
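The single-step attack with a prepended random step (often written R+FGSM) can be sketched directly; the budget split below follows the usual alpha < eps convention.

```python
import torch
import torch.nn.functional as F

def r_fgsm(model, x, y, eps=16/255, alpha=8/255):
    """Single-step attack with a random first move: the random step escapes
    the non-smooth vicinity of the data point, then one gradient-sign step
    spends the remaining budget."""
    x_rand = (x + alpha * torch.randn_like(x).sign()).clamp(0, 1)
    x_rand.requires_grad_(True)
    loss = F.cross_entropy(model(x_rand), y)
    grad, = torch.autograd.grad(loss, x_rand)
    return (x_rand.detach() + (eps - alpha) * grad.sign()).clamp(0, 1)
```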
Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security sensitive systems. We propose high-level representation guided denoiser (HGD) as a defense for image classification. The standard denoiser suffers from the error amplification effect, in which small residual adversarial noise is progressively amplified and leads to wrong classifications. HGD overcomes this problem by using a loss function defined as the difference between the target model's outputs activated by the clean image and the denoised image. Compared with ensemble adversarial training, which is the state-of-the-art defense method on large images, HGD has three advantages. First, with HGD as a defense, the target model is more robust to either white-box or black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes. Third, HGD can be transferred to defend models other than the one guiding it. In the NIPS competition on defense against adversarial attacks, our HGD solution won first place and outperformed other models by a large margin.
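The guiding loss can be sketched in a few lines; `target_model_features` is an assumed handle to a late layer of the guided model, and the L1 distance is an illustrative choice.

```python
import torch

def hgd_loss(target_model_features, denoiser, x_clean, x_adv):
    """Supervise the denoiser at the representation level: penalize the gap
    between the guided model's activations on the clean image and on the
    denoised adversarial image, instead of a pixel-space reconstruction."""
    with torch.no_grad():
        feat_clean = target_model_features(x_clean)
    feat_denoised = target_model_features(denoiser(x_adv))
    return (feat_denoised - feat_clean).abs().mean()  # L1 feature distance
```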
Though CNNs have achieved state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples, crafted by adding human-imperceptible perturbations to clean images. However, most existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. By evaluating our method against top defense solutions and official baselines from the NIPS 2017 adversarial competition, the enhanced attack reaches an average success rate of 73.0%, which outperforms the top-1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. Code is available at https://github.com/cihangxie/DI-2-FGSM.
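The input-diversity transform is a small, self-contained function; the resize range below mirrors the Inception-style 299-to-330 setting but is otherwise an illustrative choice.

```python
import torch
import torch.nn.functional as F

def input_diversity(x, resize_max=330, p=0.5):
    """With probability p, rescale the image to a random size and zero-pad it
    back at a random offset, so gradients are computed on diverse layouts."""
    if torch.rand(1).item() >= p:
        return x
    size = torch.randint(x.shape[-1], resize_max, (1,)).item()
    x_small = F.interpolate(x, size=(size, size), mode="nearest")
    pad = resize_max - size
    left = torch.randint(0, pad + 1, (1,)).item()
    top = torch.randint(0, pad + 1, (1,)).item()
    return F.pad(x_small, (left, pad - left, top, pad - top))
```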
Transfer-based adversarial examples form one of the most important categories of black-box attacks. However, there is a trade-off between the transferability and imperceptibility of adversarial perturbations. Prior work in this direction often requires a fixed but large $\ell_p$-norm perturbation budget to reach a good transfer success rate, leading to perceptible adversarial perturbations. On the other hand, most current unrestricted adversarial attacks that aim to generate semantics-preserving perturbations suffer from weak transferability to the target model. In this work, we propose a geometry-aware framework to generate transferable adversarial examples with minimal changes. Analogous to model selection in statistical machine learning, we leverage a validation model to select the optimal perturbation budget for each image under both the $\ell_{\infty}$-norm and unrestricted threat models. Extensive experiments verify the effectiveness of our framework in balancing imperceptibility and transferability. The methodology is the basis of our entry to the CVPR'21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet, in which we ranked first out of 1,559 teams and surpassed the runner-up submissions by 4.59% and 23.91% in terms of final score and average image quality level, respectively. Code is available at https://github.com/equationliu/ga-attack.
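A hedged sketch of the per-image budget selection, assuming a batch size of one and an `attack_fn(x, y, eps)` interface to any transfer attack; the candidate budgets are illustrative.

```python
import torch

def select_budget(attack_fn, validation_model, x, y,
                  budgets=(2/255, 4/255, 8/255, 16/255)):
    """Try increasing perturbation budgets and keep the smallest adversarial
    example that already fools the held-out validation model."""
    x_adv = x
    for eps in budgets:                                   # smallest first
        x_adv = attack_fn(x, y, eps)
        if validation_model(x_adv).argmax(dim=-1).item() != y.item():
            break                     # minimal budget that transfers
    return x_adv                      # falls back to the largest budget
```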
Black-box adversarial attacks have attracted considerable attention for their practical relevance to deep learning security; at the same time, they are very challenging, since neither the network architecture nor the internal weights of the target model are accessible. Based on the hypothesis that if an example remains adversarial for multiple models, it is more likely to transfer its attack capability to other models, ensemble-based adversarial attack methods are efficient and widely used for black-box attacks. However, the way the ensemble attack is performed is rather under-investigated, and existing ensemble attacks simply fuse the outputs of all models evenly. In this work, we treat the iterative ensemble attack as a stochastic gradient descent optimization process, in which the variance of the gradients across different models may lead to poor local optima. To this end, we propose a novel attack method called the Stochastic Variance Reduced Ensemble (SVRE) attack, which can reduce the gradient variance of the ensemble models and take full advantage of the ensemble attack. Empirical results on the standard ImageNet dataset demonstrate that the proposed method can boost adversarial transferability and significantly outperforms existing ensemble attacks.
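One variance-reduced inner gradient can be sketched SVRG-style, where `g_outer` is the ensemble-averaged gradient precomputed at a snapshot point `x_outer`; details such as the inner-loop length are omitted, and the names are assumptions.

```python
import torch
import torch.nn.functional as F

def svre_gradient(models, x_adv, x_outer, y, g_outer):
    """One SVRG-style inner gradient: correct a randomly chosen model's
    gradient at the current point with its gradient at the outer snapshot
    plus the full-ensemble snapshot gradient, reducing gradient variance."""
    def grad_of(model, point):
        point = point.clone().detach().requires_grad_(True)
        g, = torch.autograd.grad(F.cross_entropy(model(point), y), point)
        return g
    m = models[torch.randint(len(models), (1,)).item()]   # random model
    return grad_of(m, x_adv) - grad_of(m, x_outer) + g_outer
```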
Most methods for crafting adversarial attacks focus on scenes with a single dominant object (e.g., images from ImageNet). Natural scenes, on the other hand, include multiple dominant objects that are semantically related. It is therefore crucial to explore attack strategies that look beyond learning on single-object scenes or attacking single-object victim classifiers. Owing to the inherently strong transferability of generative perturbations to unknown models, this paper presents the first approach that uses generative models for adversarial attacks on multi-object scenes. To represent the relationships between different objects in the input scene, we leverage the open-source pre-trained vision-language model CLIP (Contrastive Language-Image Pre-training), motivated by the goal of exploiting the semantics encoded in the language space alongside the visual space. We call this attack approach Generative Adversarial Multi-object scene Attacks (GAMA). GAMA demonstrates the utility of the CLIP model as an attacker's tool for training formidable perturbation generators for multi-object scenes. Using joint image-text features to train the generator, we show that GAMA can craft potent, transferable perturbations that fool victim classifiers in various attack settings. For example, GAMA triggers about 16% more misclassifications than state-of-the-art generative approaches in the black-box setting, where both the classifier architecture and the data distribution of the attacker differ from those of the victim. Our code will be made publicly available soon.
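A hedged sketch of what a CLIP-guided generator loss could look like is given below; the exact GAMA losses and their weighting are assumptions, and `encode_image`/`encode_text` follow the OpenAI CLIP interface.

```python
import torch
import torch.nn.functional as F

def gama_style_loss(clip_model, surrogate, x, x_adv, text_tokens):
    """Drive the adversarial image away from both the clean image embedding
    and the scene caption embedding in CLIP's joint space, while also
    attacking a surrogate classifier."""
    img_clean = F.normalize(clip_model.encode_image(x), dim=-1)
    img_adv = F.normalize(clip_model.encode_image(x_adv), dim=-1)
    txt = F.normalize(clip_model.encode_text(text_tokens), dim=-1)
    # Decrease cosine similarity to the clean image and the caption.
    clip_term = (img_adv * img_clean).sum(-1).mean() + (img_adv * txt).sum(-1).mean()
    fool_term = -F.cross_entropy(surrogate(x_adv), surrogate(x).argmax(dim=-1))
    return clip_term + fool_term  # minimize w.r.t. the generator's parameters
```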
Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomization at inference time to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method provides the following advantages: 1) no additional training or fine-tuning, 2) very few additional computations, 3) compatibility with other adversarial defense methods. By combining the proposed randomization method with an adversarially trained model, we achieve a normalized score of 0.924 (ranked No. 2 among 107 defense teams) in the NIPS 2017 adversarial examples defense challenge, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No. 56). The code is publicly available at https://github.com/cihangxie/NIPS2017_adv_challenge_defense.
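The two randomization operations amount to a few lines of preprocessing; the target size below follows the paper's 299-to-331 setting but is otherwise illustrative.

```python
import torch
import torch.nn.functional as F

def randomization_defense(model, x, final_size=331):
    """Inference-time randomization: resize the input to a random size, then
    zero-pad it at a random offset to a fixed resolution before classifying.
    No retraining or fine-tuning of `model` is required."""
    size = torch.randint(x.shape[-1], final_size, (1,)).item()
    x = F.interpolate(x, size=(size, size), mode="nearest")
    pad = final_size - size
    left = torch.randint(0, pad + 1, (1,)).item()
    top = torch.randint(0, pad + 1, (1,)).item()
    return model(F.pad(x, (left, pad - left, top, pad - top)))
```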
Vision Transformers (ViTs) have demonstrated impressive performance on a series of computer vision tasks, yet they still suffer from adversarial examples. In this paper, we posit that adversarial attacks on transformers should be specially tailored to their architecture, jointly considering both patches and self-attention, in order to achieve high transferability. More specifically, we introduce a dual attack framework, which contains a Pay No Attention (PNA) attack and a PatchOut attack, to improve the transferability of adversarial samples across different ViTs. We show that skipping the gradients of attention during backpropagation can generate adversarial examples with high transferability. In addition, adversarial perturbations generated by optimizing over a randomly sampled subset of patches at each iteration achieve higher attack success rates than attacks using all patches. We evaluate the transferability of attacks on state-of-the-art ViTs, CNNs, and robustly trained CNNs. The results of these experiments demonstrate that the proposed dual attack can greatly boost transferability between ViTs and from ViTs to CNNs. Moreover, the proposed method can easily be combined with existing transfer methods to boost performance. Code is available at https://github.com/zhipeng-wei/pna-patchout.
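The PNA trick amounts to detaching the attention map so the backward pass skips it. Below is a hedged, self-contained attention module illustrating the idea (not the authors' code); in practice one would patch the attention layers of a pretrained ViT instead.

```python
import torch

class PNAAttention(torch.nn.Module):
    """Multi-head self-attention whose softmax attention map is detached, so
    gradients do not flow through the attention weights during backprop
    while the forward computation is unchanged."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.qkv = torch.nn.Linear(dim, dim * 3)
        self.proj = torch.nn.Linear(dim, dim)

    def forward(self, x):
        b, n, c = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, c // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        attn = attn.detach()          # PNA: skip gradients through attention
        out = (attn @ v).transpose(1, 2).reshape(b, n, c)
        return self.proj(out)
```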
We propose a novel and effective purification-based adversarial defense against preprocessor-blind white- and black-box attacks. Our method is computationally efficient and trained only with self-supervised learning on general images, without requiring any adversarial training or retraining of the classification model. We first present an empirical analysis showing that the adversarial noise, defined as the residual between an original image and its adversarial example, has a nearly symmetric distribution. Based on this observation, we propose a very simple iterative Gaussian Smoothing (GS) procedure that can effectively smooth out adversarial noise and achieve substantially higher robust accuracy. To further improve it, we propose Neural Contextual Iterative Smoothing (NCIS), which trains a blind-spot network (BSN) in a self-supervised manner to reconstruct the discriminative features of the original image that are also smoothed out by GS. From extensive experiments on large-scale ImageNet with four classification models, we show that our method achieves both competitive standard accuracy and state-of-the-art robust accuracy against the strongest purifier-blind white- and black-box attacks. In addition, we propose a new benchmark for evaluating purification methods on commercial image classification APIs, such as AWS, Azure, Clarifai, and Google. We generate adversarial examples via an ensemble transfer-based black-box attack, which can induce complete misclassification by the APIs, and demonstrate that our method can be used to increase their adversarial robustness.
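Iterative Gaussian smoothing itself is a short loop; the kernel size, sigma, and iteration count below are illustrative defaults, not the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def iterative_gaussian_smoothing(x, iters=4, kernel_size=3, sigma=1.0):
    """Repeatedly convolve the input with a small depthwise Gaussian kernel
    to average out near-symmetric adversarial noise before classification."""
    ax = torch.arange(kernel_size, dtype=torch.float32) - kernel_size // 2
    g1d = torch.exp(-ax ** 2 / (2 * sigma ** 2))
    k2d = torch.outer(g1d, g1d)
    k2d = (k2d / k2d.sum()).to(x.device)
    weight = k2d.expand(x.size(1), 1, kernel_size, kernel_size).contiguous()
    for _ in range(iters):
        x = F.conv2d(x, weight, padding=kernel_size // 2, groups=x.size(1))
    return x
```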
Deep neural networks (DNNs) for image classification are known to be vulnerable to adversarial examples. Moreover, adversarial examples exhibit transferability, meaning that an adversarial example crafted for one DNN model can fool other black-box models with non-trivial probability. This gives rise to transfer-based adversarial attacks, in which adversarial examples generated by a pretrained or known model (called the surrogate model) are used to conduct black-box attacks. There is some work on how to generate adversarial examples from a given surrogate model to achieve better transferability. However, training a special surrogate model to generate adversarial examples with better transferability is relatively under-explored. In this paper, we propose a method for training a surrogate model with abundant dark knowledge to boost the adversarial transferability of the adversarial examples it generates. This trained surrogate model is named the dark surrogate model (DSM), and the proposed method for training it consists of two key components: a teacher model that extracts dark knowledge and provides soft labels, and mixing augmentation skills that enhance the dark knowledge of the training data. Extensive experiments show that the proposed method can substantially improve the adversarial transferability of surrogate models across different architectures and across different optimizers used to generate adversarial examples. We also show that the proposed method can be applied to other scenarios of transfer-based attacks that contain dark knowledge, such as face verification.
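A hedged sketch of one DSM-style training step, combining a teacher's softened labels with a Mixup-style augmentation (the paper's exact mixing skill and temperature may differ):

```python
import torch
import torch.nn.functional as F

def dsm_train_step(student, teacher, optimizer, x, temperature=4.0, alpha=1.0):
    """Fit the surrogate to the teacher's softened output distribution (the
    'dark knowledge') on mixed inputs, instead of training on hard labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mixed = lam * x + (1 - lam) * x[torch.randperm(x.size(0))]
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x_mixed) / temperature, dim=-1)
    loss = F.kl_div(F.log_softmax(student(x_mixed) / temperature, dim=-1),
                    soft_targets, reduction="batchmean") * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```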