We show that when taking into account also the image domain $[0,1]^d$, established $\ell_1$-projected gradient descent (PGD) attacks are suboptimal, as they do not consider that the effective threat model is the intersection of the $\ell_1$-ball and $[0,1]^d$. We study the expected sparsity of the steepest descent step for this effective threat model and show that the exact projection onto this set is computationally feasible and yields better performance. Moreover, we propose an adaptive form of PGD which is highly effective even with a small iteration budget. The resulting $\ell_1$-APGD is a strong white-box attack showing that prior works overestimated their $\ell_1$-robustness. Using $\ell_1$-APGD for adversarial training, we obtain a robust classifier with SOTA $\ell_1$-robustness. Finally, we combine $\ell_1$-APGD and an adaptation of the Square Attack to $\ell_1$ into $\ell_1$-AutoAttack, an ensemble of attacks which reliably assesses adversarial robustness for the threat model of the $\ell_1$-ball intersected with $[0,1]^d$.
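For concreteness, below is a minimal, hedged sketch (not the authors' implementation) of one sparse steepest-ascent $\ell_1$-PGD step followed by the *sequential* "project onto the $\ell_1$-ball, then clip to $[0,1]^d$" heuristic that the abstract argues is suboptimal compared to an exact projection onto the intersection. The projection routine is the standard sorting-based $\ell_1$-ball projection; `k_sparse` and `step_size` are illustrative hyperparameters.

```python
import numpy as np

def project_l1_ball(v, eps):
    """Standard sorting-based Euclidean projection of a 1-D vector onto the l1-ball of radius eps."""
    if np.abs(v).sum() <= eps:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - eps))[0][-1]
    theta = (css[rho] - eps) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def l1_pgd_step(x, x0, grad, eps, step_size=1.0, k_sparse=10):
    """One sparse steepest-ascent step for an l1 threat model (illustrative sketch).

    Puts the whole step budget on the k largest-magnitude gradient coordinates,
    then applies the sequential projection (l1-ball, then box clipping) --
    the heuristic the paper improves upon with an exact joint projection."""
    d = grad.ravel()
    idx = np.argpartition(np.abs(d), -k_sparse)[-k_sparse:]
    step = np.zeros_like(d)
    step[idx] = np.sign(d[idx]) * (step_size / k_sparse)
    delta = (x.ravel() + step) - x0.ravel()
    delta = project_l1_ball(delta, eps)             # l1 constraint
    x_new = np.clip(x0.ravel() + delta, 0.0, 1.0)   # box constraint applied afterwards
    return x_new.reshape(x.shape)
```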
The field of defense strategies against adversarial attacks has significantly grown over the last years, but progress is hampered as the evaluation of adversarial defenses is often insufficient and thus gives a wrong impression of robustness. Many promising defenses could be broken later on, making it difficult to identify the state-of-the-art. Frequent pitfalls in the evaluation are improper tuning of hyperparameters of the attacks, gradient obfuscation or masking. In this paper we first propose two extensions of the PGD-attack overcoming failures due to suboptimal step size and problems of the objective function. We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness. We apply our ensemble to over 50 models from papers published at recent top machine learning and computer vision venues. In all except one of the cases we achieve lower robust test accuracy than reported in these papers, often by more than 10%, identifying several broken defenses.
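The reference implementation of this ensemble is distributed as the `autoattack` Python package; the following is a brief usage sketch, assuming a PyTorch classifier returning logits and CIFAR-10-sized inputs in $[0,1]$ (the exact API may vary slightly across versions).

```python
import torch
import torch.nn as nn
from autoattack import AutoAttack  # pip install git+https://github.com/fra31/auto-attack

# Tiny stand-in classifier; replace with the model under evaluation.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
x_test = torch.rand(16, 3, 32, 32)          # images in [0, 1]
y_test = torch.randint(0, 10, (16,))

adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=8)

with torch.no_grad():
    robust_acc = (model(x_adv).argmax(1) == y_test).float().mean().item()
print(f"robust accuracy under AutoAttack: {robust_acc:.4f}")
```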
The evaluation of robustness against adversarial manipulation of neural-network-based classifiers is mainly carried out with empirical attacks, as methods for the exact computation, even when available, do not scale to large networks. We propose in this paper a new white-box adversarial attack w.r.t. the lp-norms for p ∈ {1, 2, ∞} aiming at finding the minimal perturbation necessary to change the class of a given input. It has an intuitive geometric meaning, quickly yields high-quality results, and minimizes the size of the perturbation, so that it returns the robust accuracy at every threshold with a single run. It performs better than or similarly to state-of-the-art attacks, which are partially specialized to a single lp-norm, and is robust to the phenomenon of gradient masking.
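To illustrate the parenthetical claim: a minimum-perturbation attack returns one minimal-norm distance per input, from which the robust accuracy at any threshold follows without re-running the attack. A small illustrative helper (not taken from the paper):

```python
import numpy as np

def robust_accuracy_curve(min_pert_norms, clean_correct, thresholds):
    """Robust accuracy at several thresholds from a single minimum-norm attack run.

    min_pert_norms: per-sample minimal perturbation size found by the attack
                    (np.inf if no adversarial example was found).
    clean_correct:  boolean array, whether the clean input was classified correctly."""
    min_pert_norms = np.asarray(min_pert_norms, dtype=float)
    clean_correct = np.asarray(clean_correct, dtype=bool)
    return {eps: float(np.mean(clean_correct & (min_pert_norms > eps)))
            for eps in thresholds}

# Example: 5 samples with these minimal l-inf perturbation sizes
print(robust_accuracy_curve([0.5 / 255, 6 / 255, 12 / 255, np.inf, 9 / 255],
                            [True, True, True, True, False],
                            thresholds=[4 / 255, 8 / 255, 16 / 255]))
```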
A major drawback of adversarially robust models, in particular for large datasets, is their extremely long training time compared to standard ones. Moreover, models should ideally be robust not only to a single $\ell_p$-threat model but to all of them. In this paper we propose Extreme norm Adversarial Training (E-AT) for multiple-norm robustness, which is based on geometric properties of $\ell_p$-balls. E-AT costs up to three times less than other adversarial training methods for multiple-norm robustness. Using E-AT, we show that a single epoch for ImageNet and three epochs for CIFAR-10 are sufficient to turn any $\ell_p$-robust model into a multiple-norm robust model. In this way we obtain the first multiple-norm robust model for ImageNet and boost the state of the art for multiple-norm robustness on CIFAR-10 to more than $51\%$. Finally, we study the general transfer of adversarial robustness between different individual $\ell_p$-threat models via fine-tuning, and improve the previous SOTA $\ell_1$-robustness on both CIFAR-10 and ImageNet. Extensive experiments show that our scheme works across datasets and architectures, including vision transformers.
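As a rough reading of the abstract only (the actual E-AT procedure may differ in important details), fine-tuning could restrict the adversary to the two extreme threat models, $\ell_1$ and $\ell_\infty$, sampling one of them per batch. The sketch below assumes user-supplied PGD-style attack functions `attack_l1` and `attack_linf` and illustrative radii `eps_l1`, `eps_linf`.

```python
import random
import torch.nn.functional as F

def extreme_norm_finetune_epoch(model, loader, optimizer, attack_l1, attack_linf,
                                eps_l1=12.0, eps_linf=8 / 255):
    """One fine-tuning epoch in the spirit of extreme-norm adversarial training:
    each batch is perturbed w.r.t. one of the two extreme norms (l1 or l-inf),
    chosen at random. attack_l1/attack_linf are assumed to return adversarial
    examples in [0, 1]."""
    model.train()
    for x, y in loader:
        if random.random() < 0.5:
            x_adv = attack_l1(model, x, y, eps_l1)
        else:
            x_adv = attack_linf(model, x, y, eps_linf)
        loss = F.cross_entropy(model(x_adv), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```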
We propose the Square Attack, a score-based black-box l2- and l∞-adversarial attack that does not rely on local gradient information and thus is not affected by gradient masking. Square Attack is based on a randomized search scheme which selects localized square-shaped updates at random positions so that at each iteration the perturbation is situated approximately at the boundary of the feasible set. Our method is significantly more query efficient and achieves a higher success rate compared to the state-of-the-art methods, especially in the untargeted setting. In particular, on ImageNet we improve the average query efficiency in the untargeted setting for various deep networks by a factor of at least 1.8 and up to 3 compared to the recent state-of-the-art l∞-attack of Al-Dujaili & O'Reilly (2020). Moreover, although our attack is black-box, it can also outperform gradient-based white-box attacks on the standard benchmarks achieving a new state-of-the-art in terms of the success rate. The code of our attack is available at https://github.com/max-andr/square-attack.
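A minimal sketch of the random-search core for the l∞ case, simplified relative to the released implementation (in particular, the fraction of changed pixels is kept fixed here rather than decayed). `loss_fn` is an assumed callable returning a scalar adversarial loss to maximize, computed from the model's scores only.

```python
import numpy as np

def square_attack_linf(loss_fn, x, eps, n_iters=1000, p_init=0.1, rng=None):
    """Simplified score-based random search with square-shaped l-inf updates.

    loss_fn: maps an image (H, W, C) in [0, 1] to a scalar loss to maximize
             (e.g. a margin loss on the model's scores); no gradients are used."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = x.shape
    # Initialize at a corner of the l-inf ball: vertical stripes of +/- eps.
    delta = rng.choice([-eps, eps], size=(1, w, c)) * np.ones((h, 1, 1))
    x_adv = np.clip(x + delta, 0.0, 1.0)
    best_loss = loss_fn(x_adv)
    for _ in range(n_iters):
        s = max(1, int(round(np.sqrt(p_init * h * w))))       # square side length
        r, cl = rng.integers(0, h - s + 1), rng.integers(0, w - s + 1)
        delta_new = x_adv - x
        delta_new[r:r + s, cl:cl + s, :] = rng.choice([-eps, eps], size=(1, 1, c))
        x_new = np.clip(x + np.clip(delta_new, -eps, eps), 0.0, 1.0)
        new_loss = loss_fn(x_new)
        if new_loss > best_loss:                               # accept only improvements
            x_adv, best_loss = x_new, new_loss
    return x_adv
```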
Designing powerful adversarial attacks is of paramount importance for the evaluation of $\ell_p$-bounded adversarial defenses. Projected Gradient Descent (PGD) is one of the most effective and conceptually simple algorithms to generate such adversaries. The search space of PGD is dictated by the steepest ascent directions of an objective. Despite the plethora of objective function choices, there is no universally superior option, and robustness overestimation may arise from ill-suited objective selection. Driven by this observation, we postulate that the combination of different objectives through a simple loss-alternating scheme renders PGD more robust towards design choices. We experimentally verify this assertion on a synthetic-data example and by evaluating our proposed method across 25 different $\ell_{\infty}$-robust models and 3 datasets. The performance improvement is consistent when compared to the single-loss counterparts. On the CIFAR-10 dataset, our strongest adversarial attack outperforms all of the white-box components of the AutoAttack (AA) ensemble, as well as the most powerful attacks existing in the literature, achieving state-of-the-art results in the computational budget of our study ($T=100$, no restarts).
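A hedged sketch of the loss-alternating idea on top of plain l∞-PGD, switching between cross-entropy and a margin-style objective every iteration; the paper's exact objectives and alternation schedule may differ.

```python
import torch
import torch.nn.functional as F

def margin_loss(logits, y):
    """Best-other-class logit minus true-class logit (maximized to cause misclassification)."""
    true = logits.gather(1, y[:, None]).squeeze(1)
    other = logits.masked_fill(F.one_hot(y, logits.size(1)).bool(), -1e9).max(1).values
    return (other - true).mean()

def alternating_pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=100):
    """l-inf PGD that alternates the objective between cross-entropy and margin loss."""
    losses = [lambda lg, t: F.cross_entropy(lg, t), margin_loss]
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for t in range(steps):
        delta.requires_grad_(True)
        loss = losses[t % 2](model(torch.clamp(x + delta, 0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = torch.clamp(delta + alpha * grad.sign(), -eps, eps)
    return torch.clamp(x + delta, 0, 1).detach()
```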
Deep neural networks are easily fooled by small perturbations known as adversarial attacks. Adversarial training (AT) approximately solves a robust optimization problem to minimize the worst-case loss and is widely regarded as the most effective defense against such attacks. Due to the high computation time needed to generate strong adversarial examples, single-step methods have been proposed to reduce training time. However, these methods suffer from catastrophic overfitting, where adversarial accuracy drops during training. Although improvements have been proposed, they increase training time, and robustness remains far from that of multi-step AT. We develop a theoretical framework for adversarial training with Frank-Wolfe optimization (FW-AT) that reveals a geometric connection between the loss landscape and the $\ell_2$ distortion of $\ell_\infty$ FW attacks. We show analytically that high distortion of FW attacks is equivalent to small gradient variation along the attack path. It is then demonstrated experimentally on various deep neural network architectures that $\ell_\infty$ attacks against robust models achieve near-maximal $\ell_2$ distortion, while standard networks have lower distortion. Furthermore, experiments show that catastrophic overfitting is strongly correlated with low distortion of FW attacks. To demonstrate the utility of our theoretical framework, we develop FW-AT-Adapt, a new adversarial training algorithm which uses a simple distortion measure to adapt the number of attack steps, improving efficiency without compromising robustness. FW-AT-Adapt provides training times on par with single-step fast AT methods and narrows the gap between fast AT methods and multi-step PGD-AT with minimal loss in adversarial accuracy in white-box and black-box settings.
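For reference, a generic Frank-Wolfe l∞ attack step and the kind of relative $\ell_2$ distortion diagnostic discussed above; this is a sketch of the general technique, not the authors' FW-AT-Adapt code.

```python
import torch

def fw_linf_attack(model, loss_fn, x, y, eps=8 / 255, steps=10):
    """Frank-Wolfe l-inf attack: the linear maximization oracle over the l-inf ball
    centered at x is x + eps * sign(grad); iterates are convex combinations, so
    they stay inside the ball without any projection."""
    x_t = x.clone()
    for t in range(steps):
        x_t.requires_grad_(True)
        loss = loss_fn(model(x_t), y)
        grad, = torch.autograd.grad(loss, x_t)
        with torch.no_grad():
            v = x + eps * grad.sign()          # LMO solution: a vertex of the l-inf ball
            gamma = 2.0 / (t + 2.0)            # standard FW step size
            x_t = (1 - gamma) * x_t + gamma * v
    return x_t.detach()

def relative_l2_distortion(x_adv, x, eps):
    """l2 norm of the final perturbation relative to its maximum possible value
    inside the l-inf ball (attained at a corner), per sample in the batch."""
    d = x_adv - x
    max_l2 = eps * (d[0].numel() ** 0.5)
    return d.flatten(1).norm(dim=1) / max_l2
```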
Adaptive defenses, which optimize at test time, promise to improve adversarial robustness. We categorize such adaptive test-time defenses, explain their potential benefits and drawbacks, and evaluate a representative variety of the latest adaptive defenses for image classification. Unfortunately, none significantly improve upon static defenses when subjected to our careful case-study evaluation. Some even weaken the underlying static model while increasing inference computation. While these results are disappointing, we still believe that adaptive test-time defenses are a promising avenue of research and, as such, we provide recommendations for their thorough evaluation. We extend the checklist of Carlini et al. (2019) by providing concrete steps specific to adaptive defenses.
Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to adversarial examples. We find, however, that typical adaptive evaluations are incomplete. We demonstrate that thirteen defenses recently published at ICLR, ICML and NeurIPS, and which illustrate a diverse set of defense strategies, can be circumvented despite attempting to perform evaluations using adaptive attacks. While prior evaluation papers focused mainly on the end result, showing that a defense was ineffective, this paper focuses on laying out the methodology and the approach necessary to perform an adaptive attack. Some of our attack strategies are generalizable, but no single strategy would have been sufficient for all defenses. This underlines our key message that adaptive attacks cannot be automated and always require careful and appropriate tuning to a given defense. We hope that these analyses will serve as guidance on how to properly perform adaptive attacks against defenses to adversarial examples, and thus will allow the community to make further progress in building more robust models.
As a research community, we still lack a systematic understanding of the progress on adversarial robustness, which often makes it hard to identify the most promising ideas for training robust models. A key challenge in benchmarking robustness is that its evaluation is often error-prone, leading to overestimation of robustness. Our goal is to establish a standardized benchmark of adversarial robustness, which reflects the robustness of the considered models as accurately as possible within a reasonable computational budget. To this end, we first consider the image classification task and introduce restrictions (possibly loosened in the future) on the allowed models. We evaluate adversarial robustness with AutoAttack, an ensemble of white-box and black-box attacks, which was recently shown in a large-scale study to improve almost all robustness evaluations compared to the original publications. To prevent overadaptation of new defenses to AutoAttack, we welcome external evaluations based on adaptive attacks, especially where AutoAttack potentially overestimates robustness. Our leaderboard, hosted at https://robustbench.github.io/, contains evaluations of more than 120 models and aims at reflecting the current state of the art in image classification on a set of well-defined tasks in the $\ell_\infty$- and $\ell_2$-threat models and on common corruptions, with possible extensions in the future. Additionally, we open-source the library https://github.com/robustbench/robustbench, which provides unified access to more than 80 robust models to facilitate their downstream applications. Finally, based on the collected models, we analyze the impact of robustness on distribution shifts, calibration, out-of-distribution detection, fairness, privacy leakage, smoothness, and transferability.
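The library exposes a small loading API; a usage sketch follows (model name and exact signatures reflect the documentation at the time of writing and may change across versions).

```python
import torch
from robustbench.data import load_cifar10
from robustbench.utils import load_model  # pip install robustbench

# Download a pre-trained l-inf robust CIFAR-10 model from the model zoo.
model = load_model(model_name='Carmon2019Unlabeled',
                   dataset='cifar10', threat_model='Linf').eval()

x_test, y_test = load_cifar10(n_examples=64)
with torch.no_grad():
    clean_acc = (model(x_test).argmax(1) == y_test).float().mean().item()
print(f"clean accuracy on {len(y_test)} examples: {clean_acc:.3f}")
```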
Recently, Wong et al. showed that adversarial training with single-step FGSM leads to a characteristic failure mode named catastrophic overfitting (CO), in which a model suddenly becomes vulnerable to multi-step attacks. They showed that adding a random perturbation prior to FGSM (RS-FGSM) seemed to be sufficient to prevent CO. However, Andriushchenko and Flammarion observed that RS-FGSM still leads to CO for larger perturbations and proposed an expensive regularizer (GradAlign) to avoid it. In this work, we methodically revisit the role of noise and clipping in single-step adversarial training. Contrary to previous intuitions, we find that using stronger noise around the clean sample, combined with not clipping, is highly effective in avoiding CO for large perturbation radii. Based on these observations, we propose Noise-FGSM (N-FGSM), which provides the benefits of single-step adversarial training without suffering from CO. Empirical analyses on a large suite of experiments show that N-FGSM is able to match or surpass the performance of previous single-step methods while achieving a 3$\times$ speed-up. Code can be found at https://github.com/pdejorge/n-fgsm
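A hedged sketch of single-step example generation as described above (strong noise around the clean sample, no clipping of the final perturbation back to the $\epsilon$-ball); the noise multiplier `k` is an assumed hyperparameter and details may differ from the released code.

```python
import torch
import torch.nn.functional as F

def n_fgsm_example(model, x, y, eps=8 / 255, k=2.0):
    """Single-step adversarial example in the spirit of N-FGSM (sketch).

    1) add strong uniform noise in [-k*eps, k*eps] around the clean sample,
    2) take one FGSM step of size eps from the noisy point,
    3) do NOT clip the perturbation back to the eps-ball (only to the image domain)."""
    eta = torch.empty_like(x).uniform_(-k * eps, k * eps)
    x_noisy = (x + eta).clamp(0, 1).requires_grad_(True)
    loss = F.cross_entropy(model(x_noisy), y)
    grad, = torch.autograd.grad(loss, x_noisy)
    x_adv = (x_noisy + eps * grad.sign()).clamp(0, 1)   # stays in [0,1], not in the eps-ball
    return x_adv.detach()
```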
Visual counterfactual explanations (VCEs) in image space are an important tool to understand the decisions of image classifiers, as they show under which changes of the image the decision of the classifier would change. Their generation in image space is challenging and requires robust models due to the problem of adversarial examples. Existing techniques to generate VCEs in image space suffer from spurious changes in the background. Our novel perturbation model for VCEs, together with its efficient optimization via our novel Auto-Frank-Wolfe scheme, yields sparse VCEs which lead to subtle changes specific to the target class. Moreover, we show that VCEs can be used to detect undesired behavior of ImageNet classifiers due to spurious features in the ImageNet dataset.
Adversarial training suffers from robust overfitting, a phenomenon where the robust test accuracy starts to decrease during training. In this paper, we focus on reducing robust overfitting by using common data augmentation schemes. We demonstrate that, contrary to previous findings, when combined with model weight averaging, data augmentation can significantly boost robust accuracy. Furthermore, we compare various augmentation techniques and observe that spatial composition techniques work best for adversarial training. Finally, we evaluate our approach on CIFAR-10 against $\ell_\infty$ and $\ell_2$ norm-bounded perturbations of size $\epsilon = 8/255$ and $\epsilon = 128/255$, respectively. We show absolute improvements of +2.93% and +2.16% in robust accuracy compared to previous state-of-the-art methods. In particular, against $\ell_\infty$ norm-bounded perturbations of size $\epsilon = 8/255$, our model reaches 60.07% robust accuracy without using any external data. We also achieve a significant performance boost with this approach on other architectures and datasets such as CIFAR-100, SVHN, and TinyImageNet.
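As an illustration of the weight-averaging ingredient, a standard exponential moving average of parameters maintained alongside training; this is a generic sketch, not the paper's exact configuration (the class name and decay value are illustrative).

```python
import copy
import torch

class WeightAverager:
    """Keeps an exponential moving average of a model's parameters; the averaged
    copy is the one evaluated for robust accuracy."""

    def __init__(self, model, decay=0.995):
        self.ema_model = copy.deepcopy(model).eval()
        self.decay = decay
        for p in self.ema_model.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        for p_ema, p in zip(self.ema_model.parameters(), model.parameters()):
            p_ema.mul_(self.decay).add_(p, alpha=1 - self.decay)

# Usage inside the training loop, after each optimizer step:
#   averager.update(model)
```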
Robustness to adversarial attacks is typically evaluated with adversarial accuracy. However, this metric is too coarse to properly capture all the robustness properties of machine learning models. Many defenses, when evaluated against strong attacks, do not provide an accuracy improvement while still contributing partially to adversarial robustness. Popular certification methods suffer from the same issue, as they provide a lower bound on accuracy. To capture finer robustness properties, we propose a new metric for L2 robustness, adversarial angular sparsity, which partially answers the question "how many adversarial examples are there around an input". We demonstrate its usefulness by evaluating both "strong" and "weak" defenses. We show that some state-of-the-art defenses with very similar accuracy can have very different sparsity on the inputs on which they are not robust. We also show that some weak defenses actually decrease robustness, while others strengthen it in a way that accuracy cannot capture. These differences are predictive of how useful such defenses become when combined with adversarial training.
Adversarial training, a method for learning robust deep networks, is typically assumed to be more expensive than traditional training due to the necessity of constructing adversarial examples via a first-order method like projected gradient descent (PGD). In this paper, we make the surprising discovery that it is possible to train empirically robust models using a much weaker and cheaper adversary, an approach that was previously believed to be ineffective, rendering the method no more costly than standard training in practice. Specifically, we show that adversarial training with the fast gradient sign method (FGSM), when combined with random initialization, is as effective as PGD-based training but has significantly lower cost. Furthermore, we show that FGSM adversarial training can be further accelerated by using standard techniques for efficient training of deep networks, allowing us to learn a robust CIFAR-10 classifier with 45% robust accuracy to PGD attacks with ε = 8/255 in 6 minutes, and a robust ImageNet classifier with 43% robust accuracy at ε = 2/255 in 12 hours, in comparison to past work based on "free" adversarial training which took 10 and 50 hours to reach the same respective thresholds. Finally, we identify a failure mode referred to as "catastrophic overfitting" which may have caused previous attempts to use FGSM adversarial training to fail. All code for reproducing the experiments in this paper as well as pretrained model weights are at https://github.com/locuslab/fast_adversarial.
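A compact sketch of the FGSM-with-random-initialization inner step described above, simplified relative to the released code; the step size `alpha` is illustrative (values slightly larger than $\epsilon$ are commonly used in this line of work).

```python
import torch
import torch.nn.functional as F

def fgsm_rs_example(model, x, y, eps=8 / 255, alpha=10 / 255):
    """FGSM adversarial example with random initialization (sketch).

    Start from a uniform random point inside the eps-ball, take one signed
    gradient step of size alpha, then project back onto the eps-ball and [0, 1]."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    grad, = torch.autograd.grad(loss, delta)
    with torch.no_grad():
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        x_adv = (x + delta).clamp(0, 1)
    return x_adv
```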
Evaluating adversarial robustness amounts to finding the minimum perturbation needed to have an input sample misclassified. The inherent complexity of the underlying optimization requires current gradient-based attacks to be carefully tuned and initialized, and possibly executed for many computationally demanding iterations, even when specialized to a given perturbation model. In this work, we overcome these limitations by proposing a fast minimum-norm (FMN) attack that works with different $\ell_p$-norm perturbation models ($p = 0, 1, 2, \infty$), is robust to hyperparameter choices, does not require adversarial starting points, and converges within few lightweight steps. It works by iteratively finding the sample misclassified with maximum confidence within an $\ell_p$-norm constraint of size $\epsilon$, while adapting $\epsilon$ to minimize the distance of the current sample to the decision boundary. Extensive experiments show that FMN significantly outperforms existing attacks in terms of convergence speed and computation time, while reporting comparable or even smaller perturbation sizes.
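A heavily simplified, hedged sketch of the $\epsilon$-adaptation idea for the $\ell_2$ case as the abstract describes it; the actual FMN algorithm uses more refined step-size and $\epsilon$ schedules, and the hyperparameters here are illustrative (4-D image batches assumed).

```python
import torch
import torch.nn.functional as F

def minimum_l2_attack_sketch(model, x, y, steps=100, alpha=0.5, gamma=0.05, eps0=1.0):
    """Toy minimum-l2-norm attack in the spirit of the description above (untargeted).

    Maintains a per-sample budget eps: shrink it toward the boundary when the
    current point is adversarial, grow it otherwise; each step ascends the margin
    loss and projects the perturbation onto the current eps-ball and [0, 1]."""
    delta = torch.zeros_like(x)
    eps = torch.full((x.size(0),), eps0)
    for _ in range(steps):
        delta.requires_grad_(True)
        logits = model((x + delta).clamp(0, 1))
        onehot = F.one_hot(y, logits.size(1)).bool()
        margin = logits.masked_fill(onehot, -1e9).max(1).values - logits[onehot]
        grad, = torch.autograd.grad(margin.sum(), delta)
        with torch.no_grad():
            is_adv = margin > 0
            norms = delta.flatten(1).norm(dim=1)
            # adapt the budget toward the decision boundary
            eps = torch.where(is_adv, torch.minimum(eps, norms) * (1 - gamma),
                              eps * (1 + gamma))
            # normalized gradient ascent on the margin
            g = grad / grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta = delta + alpha * g
            # project onto the eps-ball and keep x + delta inside [0, 1]
            norms = delta.flatten(1).norm(dim=1).clamp_min(1e-12)
            delta = delta * torch.minimum(torch.ones_like(eps), eps / norms).view(-1, 1, 1, 1)
            delta = (x + delta).clamp(0, 1) - x
    return (x + delta).clamp(0, 1).detach()
```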
The literature on robustness towards common corruptions shows no consensus on whether adversarial training can improve performance in this setting. First, we show that, when used with an appropriately selected perturbation radius, $\ell_p$ adversarial training can serve as a strong baseline against common corruptions, improving both accuracy and calibration. Then we explain why adversarial training performs better than data augmentation with simple Gaussian noise, which has been observed to be a meaningful baseline on common corruptions. Related to this, we identify the $\sigma$-overfitting phenomenon, where Gaussian augmentation overfits to the particular standard deviation used for training, which has a significant detrimental effect on common-corruption accuracy. We discuss how to alleviate this problem and then how to further enhance $\ell_p$ adversarial training by introducing an efficient relaxation of adversarial training with a learned perceptual image patch similarity as the distance metric. Through experiments on CIFAR-10 and ImageNet-100, we show that our approach does not only improve the $\ell_p$ adversarial training baseline but also has cumulative gains with data augmentation methods such as AugMix, DeepAugment, ANT, and SIN, leading to state-of-the-art performance on common corruptions. The code of our experiments is publicly available at https://github.com/tml-epfl/adv-training-corruptions.
Deep neural networks have become the driving force behind modern image recognition systems. However, the vulnerability of neural networks to adversarial attacks poses a serious threat to the people affected by these systems. In this paper, we focus on a realistic threat model in which a man-in-the-middle adversary maliciously intercepts and perturbs the images that web users upload online. This type of attack can raise severe ethical concerns on top of simple performance degradation. To prevent this attack, we devise a novel bi-level optimization algorithm that finds points in the vicinity of natural images that are robust to adversarial perturbations. Experiments on CIFAR-10 and ImageNet show that our method can effectively robustify natural images within the given modification budget. We also show that the proposed method can improve robustness when used in conjunction with randomized smoothing.
Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with stronger robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks (Kurakin et al., 2017c). However, subsequent work found that more elaborate black-box attacks could significantly enhance transferability and reduce the accuracy of our models.
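The "single-step attack that escapes the non-smooth vicinity via a small random step" can be sketched as follows (often referred to as R+FGSM); the step sizes here are illustrative.

```python
import torch
import torch.nn.functional as F

def r_fgsm_example(model, x, y, eps=8 / 255, alpha=4 / 255):
    """R+FGSM-style single-step attack (sketch): first take a small random step,
    then spend the remaining budget on an FGSM step from the new point."""
    x_rand = (x + alpha * torch.randn_like(x).sign()).clamp(0, 1).requires_grad_(True)
    loss = F.cross_entropy(model(x_rand), y)
    grad, = torch.autograd.grad(loss, x_rand)
    x_adv = (x_rand + (eps - alpha) * grad.sign()).clamp(0, 1)
    return x_adv.detach()
```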
Adversarial training is so far the most effective strategy for defending against adversarial examples. However, it suffers from high computational cost due to the iterative adversarial attacks performed in every training step. Recent studies show that it is possible to achieve fast adversarial training by performing a single-step attack with random initialization. Yet, such an approach still lags behind state-of-the-art adversarial training algorithms in both stability and model robustness. In this work, we develop a new understanding of fast adversarial training by viewing random initialization as performing randomized smoothing for better optimization of the inner maximization problem. Following this new perspective, we also propose a new initialization strategy, backward smoothing, to further improve the stability and model robustness of single-step robust training methods. Experiments on multiple benchmarks demonstrate that our method achieves comparable model robustness while using much less training time ($\sim$3x improvement with the same training schedule).