An established way to improve the transferability of black-box evasion attacks is to craft the adversarial examples on an ensemble-based surrogate to increase diversity. We argue that transferability is fundamentally related to uncertainty. Based on a state-of-the-art Bayesian deep learning technique, we propose a new method to efficiently build a surrogate by sampling approximately from the posterior distribution of neural network weights, which represents the belief about the value of each parameter. Our extensive experiments on ImageNet, CIFAR-10, and MNIST show that our approach improves the success rates of four state-of-the-art attacks significantly (by up to 83.2 percentage points), in both intra-architecture and inter-architecture transferability. On ImageNet, our approach can reach a 94% success rate while reducing training computations from 11.6 to 2.4 exaFLOPs, compared to an ensemble of independently trained DNNs. Our vanilla surrogate achieves 87.5% higher transferability than three test-time techniques designed for this purpose. Our work demonstrates that the way a surrogate is trained has been overlooked, although it is an important element of transfer-based attacks. We are therefore the first to review the effectiveness of several training methods in increasing transferability. We provide new directions to better understand the transferability phenomenon and offer a simple yet strong baseline for future work.
We propose transferability from Large Geometric Vicinity (LGV), a new technique to increase the transferability of black-box adversarial attacks. LGV starts from a pretrained surrogate model and collects multiple weight sets over a few additional training epochs with a constant, high learning rate. LGV exploits two geometric properties that we relate to transferability. First, models that belong to a wider weight optimum are better surrogates. Second, we identify a subspace within this larger optimum that is able to generate an effective surrogate ensemble. Through extensive experiments, we show that LGV alone outperforms all (combinations of) four established test-time transformations. Our findings shed new light on the importance of geometry in explaining the transferability of adversarial examples.
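For illustration, the weight-collection step described above can be sketched in a few lines. This is a minimal PyTorch-style sketch, not the authors' code: the learning rate, number of extra epochs, and snapshot frequency below are assumed values, and `model`/`train_loader` are placeholders for a pretrained surrogate and its training data.

```python
import torch
import torch.nn.functional as F


def collect_lgv_weights(model, train_loader, lr=0.05, epochs=10,
                        snapshots_per_epoch=4, device="cuda"):
    """Run a few extra epochs of SGD with a constant, high learning rate and
    periodically snapshot the weights; the snapshots form the surrogate set."""
    model = model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                                momentum=0.9, weight_decay=1e-4)
    snapshots = []
    snap_every = max(1, len(train_loader) // snapshots_per_epoch)
    for _ in range(epochs):
        for step, (x, y) in enumerate(train_loader):
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            F.cross_entropy(model(x), y).backward()
            optimizer.step()
            if step % snap_every == 0:
                # Keep a CPU copy of the current weights as one surrogate member.
                snapshots.append({k: v.detach().cpu().clone()
                                  for k, v in model.state_dict().items()})
    return snapshots
```

An attack would then be run against the collected weight sets, for example by loading each snapshot in turn (or averaging their losses) when computing input gradients.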
Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with stronger robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks (Kurakin et al., 2017c). However, subsequent work found that more elaborate black-box attacks could significantly enhance transferability and reduce the accuracy of our models.
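The "small random step" single-step attack mentioned above is often referred to as R+FGSM; a minimal PyTorch sketch is given below. The step sizes are illustrative defaults, not the paper's exact configuration, and inputs are assumed to lie in [0, 1].

```python
import torch
import torch.nn.functional as F


def r_fgsm(model, x, y, eps=8 / 255, alpha=4 / 255):
    """Single-step attack with a prepended random step: first move away from the
    data point with alpha * sign(noise) to escape its non-smooth vicinity, then
    take one FGSM step of size (eps - alpha) and clip to the valid pixel range."""
    x_rand = (x + alpha * torch.sign(torch.randn_like(x))).clamp(0, 1)
    x_rand = x_rand.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_rand), y)
    grad = torch.autograd.grad(loss, x_rand)[0]
    return (x_rand + (eps - alpha) * grad.sign()).clamp(0, 1).detach()
```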
Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most of existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won the first places in NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
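The momentum-based iterative update described above can be sketched as follows; this is an illustrative PyTorch version for NCHW image batches, with assumed default values for the budget, step count, and decay factor.

```python
import torch
import torch.nn.functional as F


def mi_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0):
    """Momentum iterative FGSM: accumulate L1-normalized gradients into a
    velocity term and take sign steps, projected into the L-inf ball around x."""
    alpha = eps / steps
    g = torch.zeros_like(x)              # accumulated momentum
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Normalize the raw gradient by its per-example L1 norm before accumulating.
        norm = grad.abs().flatten(1).sum(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        g = mu * g + grad / norm
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```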
Strong adversarial examples are crucial for evaluating and enhancing the robustness of deep neural networks. Popular adversarial attack algorithms maximize a non-concave loss function using gradient ascent. However, the performance of each individual attack is usually sensitive to the available information, which is insufficient: only one input example, few white-box source models, and unknown defense strategies. The carefully crafted adversarial examples are therefore prone to overfitting the source model, which limits their transferability to unknown architectures. In this paper, we propose Multiple Asymptotically Normal Distribution Attacks (MultiANDA), a novel method that explicitly characterizes adversarial perturbations from a learned distribution. Specifically, we approximate the posterior distribution over the perturbations by exploiting the asymptotic normality of stochastic gradient ascent (SGA), and then apply an ensemble strategy to this procedure to estimate a Gaussian mixture model, allowing a better exploration of the potential optimization space. Drawing perturbations from the learned distribution enables us to generate any number of adversarial examples for each input. The approximated posterior essentially describes the stationary distribution of the SGA iterations, which captures the geometric information around the local optimum. Samples drawn from this distribution therefore reliably preserve transferability. Through extensive experiments on seven normally trained and seven defense models, our proposed method outperforms nine state-of-the-art black-box attacks on deep learning models with or without defenses.
We propose SWA-Gaussian (SWAG), a simple, scalable, and general purpose approach for uncertainty representation and calibration in deep learning. Stochastic Weight Averaging (SWA), which computes the first moment of stochastic gradient descent (SGD) iterates with a modified learning rate schedule, has recently been shown to improve generalization in deep learning. With SWAG, we fit a Gaussian using the SWA solution as the first moment and a low rank plus diagonal covariance also derived from the SGD iterates, forming an approximate posterior distribution over neural network weights; we then sample from this Gaussian distribution to perform Bayesian model averaging. We empirically find that SWAG approximates the shape of the true posterior, in accordance with results describing the stationary distribution of SGD iterates. Moreover, we demonstrate that SWAG performs well on a wide variety of tasks, including out of sample detection, calibration, and transfer learning, in comparison to many popular alternatives including MC dropout, KFAC Laplace, SGLD, and temperature scaling.
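As a concrete illustration of the moment bookkeeping and sampling, here is a simplified sketch of the diagonal-covariance variant (SWAG-Diagonal); the full method additionally maintains a low-rank deviation matrix, which is omitted here, and the sampling scale is an assumed value.

```python
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters


class SwagDiagonal:
    """Track running first and second moments of the flattened weights over SGD
    iterates, then sample w ~ N(mean, diag(var)) for Bayesian model averaging."""

    def __init__(self, model):
        w = parameters_to_vector(model.parameters()).detach().clone()
        self.n, self.mean, self.sq_mean = 1, w, w ** 2

    def collect(self, model):
        # Called periodically during SGD with the modified learning-rate schedule.
        w = parameters_to_vector(model.parameters()).detach()
        self.n += 1
        self.mean += (w - self.mean) / self.n
        self.sq_mean += (w ** 2 - self.sq_mean) / self.n

    def sample(self, model, scale=0.5):
        # Draw one weight sample and load it into the model for averaging predictions.
        var = (self.sq_mean - self.mean ** 2).clamp_min(1e-30)
        w = self.mean + scale * var.sqrt() * torch.randn_like(self.mean)
        vector_to_parameters(w, model.parameters())
        return model
```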
Black-box adversarial attacks have attracted considerable attention for their practical applications in deep learning security, while remaining highly challenging because neither the network architecture nor the internal weights of the target model are accessible. Based on the hypothesis that an example which remains adversarial against multiple models is more likely to transfer its attack capability to other models, ensemble-based adversarial attack methods are efficient for black-box attacks. However, how to ensemble is rather under-investigated, and existing ensemble attacks simply fuse the outputs of all models uniformly. In this work, we treat the iterative ensemble attack as a stochastic gradient descent optimization process, in which the variance of the gradients across different models may lead to poor local optima. To this end, we propose a novel attack method called the stochastic variance reduced ensemble (SVRE) attack, which reduces the gradient variance of the ensemble models and takes full advantage of the ensemble attack. Empirical results on the standard ImageNet dataset demonstrate that the proposed method boosts adversarial transferability and significantly outperforms existing ensemble attacks.
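The variance-reduction idea can be sketched roughly as follows. This is an interpretation of the abstract rather than the paper's algorithm: the outer/inner step structure, step sizes, and iteration counts are assumptions, and the loss ascent is written with sign steps as in iterative FGSM-style attacks.

```python
import random
import torch
import torch.nn.functional as F


def input_grad(model, x, y):
    x = x.clone().detach().requires_grad_(True)
    return torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]


def variance_reduced_ensemble_attack(models, x, y, eps=16 / 255,
                                     outer_steps=10, inner_steps=4):
    """Each outer step computes the full ensemble gradient once at an anchor point;
    inner steps follow a single randomly chosen model's gradient, corrected by the
    difference to that model's gradient at the anchor (a control variate)."""
    alpha = eps / outer_steps
    x_adv = x.clone().detach()
    for _ in range(outer_steps):
        anchor = x_adv.clone().detach()
        g_full = torch.stack([input_grad(m, anchor, y) for m in models]).mean(dim=0)
        x_inner = anchor.clone()
        for _ in range(inner_steps):
            m = random.choice(models)
            g = input_grad(m, x_inner, y) - input_grad(m, anchor, y) + g_full
            x_inner = x_inner + (alpha / inner_steps) * g.sign()
            x_inner = torch.min(torch.max(x_inner, x - eps), x + eps).clamp(0, 1)
        x_adv = x_inner.detach()
    return x_adv
```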
Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, recent research on DNNs has indicated ever-increasing concern on the robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of a well-trained DNN by demonstrating the ability of generating barely noticeable (to both human and machines) adversarial images that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable by simply training and attacking a substitute model built upon the target model, known as a black-box attack to DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack and importance sampling techniques to efficiently attack black-box models.
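The core gradient-estimation step can be illustrated with a few lines of NumPy: coordinate-wise symmetric finite differences over a query-only loss. The full attack additionally uses coordinate-descent solvers, dimension reduction, a hierarchical scheme, and importance sampling, none of which are shown here; `loss_fn` and the coordinate set are placeholders.

```python
import numpy as np


def zoo_gradient_estimate(loss_fn, x, coords, h=1e-4):
    """Estimate partial derivatives of a black-box loss at x for selected coordinates
    via symmetric finite differences: (f(x + h*e_i) - f(x - h*e_i)) / (2h).
    loss_fn needs only query access, e.g. it wraps the target model's confidence scores."""
    grad = np.zeros(len(coords))
    for k, i in enumerate(coords):
        e = np.zeros_like(x)
        e.flat[i] = h
        grad[k] = (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)
    return grad
```

Each estimated coordinate is then updated by a small descent step and the process repeats, which is why only a subset of coordinates is queried per iteration.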
Adversarial transferability is an intriguing property: adversarial perturbations crafted against one model are also effective against other models, even when those models come from different model families or training processes. To better protect ML systems against adversarial attacks, several questions arise: what are sufficient conditions for adversarial transferability, and how can it be bounded? Is there a way to reduce adversarial transferability in order to improve the robustness of an ensemble of ML models? To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models; we then propose a practical algorithm to reduce the transferability between base models within an ensemble to improve its robustness. Our theoretical analysis shows that promoting orthogonality between the gradients of the base models alone is not enough to ensure low transferability; at the same time, model smoothness is an important factor in controlling transferability. We also provide lower and upper bounds on adversarial transferability under certain conditions. Inspired by our theoretical analysis, we propose an effective Transferability Reduced Smooth (TRS) ensemble training strategy to train a robust ensemble with low transferability by enforcing both gradient orthogonality and model smoothness between base models. We conduct extensive experiments on TRS and compare it with 6 state-of-the-art ensemble baselines against 8 white-box attacks on different datasets, showing that the proposed TRS significantly outperforms all baselines.
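The two regularizers named in the abstract, gradient orthogonality and model smoothness, can be sketched as extra loss terms. The sketch below is a simplified proxy (pairwise cosine similarity of input gradients plus an input-gradient-norm penalty); the paper's exact smoothness estimator differs, so treat the weights and formulation as assumptions.

```python
import itertools
import torch
import torch.nn.functional as F


def trs_style_regularizers(models, x, y):
    """Return (orthogonality penalty, smoothness proxy) for an ensemble of base models.
    Gradients are taken w.r.t. the input with create_graph=True so that the penalties
    can be backpropagated into the model parameters during ensemble training."""
    x = x.clone().detach().requires_grad_(True)
    grads = []
    for m in models:
        g = torch.autograd.grad(F.cross_entropy(m(x), y), x, create_graph=True)[0]
        grads.append(g.flatten(1))
    ortho = sum(F.cosine_similarity(gi, gj, dim=1).abs().mean()
                for gi, gj in itertools.combinations(grads, 2))
    smooth = sum(g.norm(dim=1).mean() for g in grads)
    return ortho, smooth
```

During training, the total objective would combine each member's classification loss with weighted versions of these two terms.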
Designing powerful adversarial attacks is of paramount importance for the evaluation of $\ell_p$-bounded adversarial defenses. Projected Gradient Descent (PGD) is one of the most effective and conceptually simple algorithms to generate such adversaries. The search space of PGD is dictated by the steepest ascent directions of an objective. Despite the plethora of objective function choices, there is no universally superior option and robustness overestimation may arise from ill-suited objective selection. Driven by this observation, we postulate that the combination of different objectives through a simple loss alternating scheme renders PGD more robust towards design choices. We experimentally verify this assertion on a synthetic-data example and by evaluating our proposed method across 25 different $\ell_{\infty}$-robust models and 3 datasets. The performance improvement is consistent when compared to the single loss counterparts. In the CIFAR-10 dataset, our strongest adversarial attack outperforms all of the white-box components of the AutoAttack (AA) ensemble, as well as the most powerful attacks existing in the literature, achieving state-of-the-art results in the computational budget of our study ($T=100$, no restarts).
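A minimal sketch of the loss-alternating idea is given below: PGD whose objective switches between cross-entropy and a margin-style loss on alternating iterations. The particular pair of losses, the schedule, and the step size are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F


def margin_loss(logits, y):
    """Difference between the best non-true logit and the true-class logit;
    maximizing it pushes the example across the decision boundary."""
    true = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    others = logits.clone()
    others.scatter_(1, y.unsqueeze(1), float("-inf"))
    return (others.max(dim=1).values - true).mean()


def alternating_pgd(model, x, y, eps=8 / 255, steps=100):
    """L-inf PGD with a random start whose objective alternates per iteration,
    reducing sensitivity to any single objective choice."""
    alpha = 2 * eps / steps
    x_adv = (x + eps * (2 * torch.rand_like(x) - 1)).clamp(0, 1).detach()
    for t in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        loss = F.cross_entropy(logits, y) if t % 2 == 0 else margin_loss(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```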
In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget. Due to the limited feedback information, existing query-based black-box attack methods often require many queries for attacking each benign example. To reduce query cost, we propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability. Specifically, by treating the attack on each benign example as one task, we develop a meta-learning framework by training a meta-generator to produce perturbations conditioned on benign examples. When attacking a new benign example, the meta generator can be quickly fine-tuned based on the feedback information of the new task as well as a few historical attacks to produce effective perturbations. Moreover, since the meta-train procedure consumes many queries to learn a generalizable generator, we utilize model-level adversarial transferability to train the meta-generator on a white-box surrogate model, then transfer it to help the attack against the target model. The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack methods to boost their performance, which is verified by extensive experiments.
The transferability of adversarial examples (AEs) across different models is crucial for black-box adversarial attacks, in which the attacker has no access to information about the black-box model. However, crafted AEs often exhibit poor transferability. In this paper, by regarding the transferability of AEs as a generalization ability of the model, we reveal that vanilla black-box attacks craft AEs by solving a maximum likelihood estimation (MLE) problem. For MLE, the result is likely to be a model-specific local optimum when the available data is limited, which restricts the transferability of the AEs. By contrast, we re-formulate the crafting of transferable AEs as a maximum a posteriori probability estimation problem, which is an effective approach to improve the generalization of results obtained from limited data. Because Bayesian posterior inference is generally intractable, a simple yet effective method called MaskBlock is developed to approximate it. Moreover, we show that the formulated framework is a generalized version of various attack methods. Extensive experiments illustrate that MaskBlock can significantly boost the transferability of crafted adversarial examples, by up to about 20%.
Though CNNs have achieved the state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples, crafted by adding human-imperceptible perturbations to clean images. However, most of the existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. By evaluating our method against top defense solutions and official baselines from NIPS 2017 adversarial competition, the enhanced attack reaches an average success rate of 73.0%, which outperforms the top-1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. Code is available at https://github.com/cihangxie/DI-2-FGSM.
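The random input transformation at the heart of this method can be sketched as follows; the resize range, canvas size, and probability are assumptions chosen for a network that tolerates inputs somewhat larger than its nominal resolution.

```python
import random
import torch
import torch.nn.functional as F


def input_diversity(x, low=224, high=256, p=0.5):
    """With probability p, randomly resize the NCHW batch to rnd x rnd (rnd in
    [low, high)) and pad it back to high x high at a random offset; otherwise
    return the input unchanged. Applied at every attack iteration, this
    diversifies the input patterns seen by the surrogate model."""
    if random.random() > p:
        return x
    rnd = random.randint(low, high - 1)
    resized = F.interpolate(x, size=(rnd, rnd), mode="nearest")
    pad = high - rnd
    left, top = random.randint(0, pad), random.randint(0, pad)
    return F.pad(resized, (left, pad - left, top, pad - top))
```

At each iteration the gradient is then computed through `model(input_diversity(x_adv))` instead of `model(x_adv)`.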
The emerging success of deep learning (DL) in wireless system applications has raised concerns about new security-related challenges. One such security challenge is adversarial attacks. Although there has been much work demonstrating the susceptibility of DL-based classification tasks to adversarial attacks, regression-based problems in wireless systems have not yet been studied from an attack perspective. The aims of this paper are twofold: (i) we consider a regression problem in a wireless setting and show that adversarial attacks can break the DL-based approach, and (ii) we analyze the effectiveness of adversarial training as a defense technique in the adversarial setting and show that the robustness of DL-based wireless systems against attacks improves significantly. Specifically, the wireless application considered in this paper is DL-based power allocation in the downlink of a multicell massive multiple-input multiple-output system, where the objective of the attack is to make the DL model produce infeasible solutions. We extend the gradient-based adversarial attacks, namely the fast gradient sign method (FGSM), momentum iterative FGSM, and the projected gradient descent method, to analyze the susceptibility of the considered wireless application with and without adversarial training. We analyze the performance of the deep neural network (DNN) models under these attacks, where the adversarial perturbations are crafted with both white-box and black-box attacks.
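Since the attacks listed above are gradient-based, the basic mechanics carry over directly to a regression model; a minimal FGSM sketch against an MSE objective is shown below. The loss choice and step size are assumptions for illustration; the paper's actual attack objective is tailored to producing infeasible power allocations.

```python
import torch
import torch.nn.functional as F


def fgsm_regression(model, x, target, eps=0.01):
    """One-step FGSM against a regression model: perturb the input in the
    direction that increases the MSE between prediction and target."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.mse_loss(model(x), target)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).detach()
```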
The phenomenon of adversarial examples illustrates one of the most basic vulnerabilities of deep neural networks. Among the variety of techniques introduced to overcome this inherent weakness, adversarial training has emerged as the most effective strategy for learning robust models. Typically, this is achieved by balancing robust and natural objectives. In this work, we aim to further optimize the trade-off between robust and standard accuracy by enforcing a domain-invariant feature representation. We present a new adversarial training method, Domain Invariant Adversarial Learning (DIAL), which learns a feature representation that is both robust and domain invariant. DIAL uses a variant of the Domain Adversarial Neural Network (DANN) on the natural domain and its corresponding adversarial domain. In the case where the source domain consists of natural examples and the target domain consists of adversarially perturbed examples, our method learns a feature representation that is constrained not to discriminate between natural and adversarial examples, and can therefore achieve a more robust representation. DIAL is a generic and modular technique that can easily be incorporated into any adversarial training method. Our experiments show that incorporating DIAL into the adversarial training process improves both robustness and standard accuracy.
We present a new algorithm to learn a deep neural network model robust against adversarial attacks. Previous algorithms demonstrate an adversarially trained Bayesian Neural Network (BNN) provides improved robustness. We recognize that the adversarial learning approach for approximating the multi-modal posterior distribution of a Bayesian model can lead to mode collapse; consequently, the model's achievements in robustness and performance are sub-optimal. Instead, we first propose preventing mode collapse to better approximate the multi-modal posterior distribution. Second, based on the intuition that a robust model should ignore perturbations and only consider the informative content of the input, we conceptualize and formulate an information gain objective to measure and force the information learned from both benign and adversarial training instances to be similar. Importantly, we prove and demonstrate that minimizing the information gain objective allows the adversarial risk to approach the conventional empirical risk. We believe our efforts provide a step toward a basis for a principled method of adversarially training BNNs. Our model demonstrates significantly improved robustness (up to 20%) compared with adversarial training and Adv-BNN under PGD attacks with 0.035 distortion on both CIFAR-10 and STL-10 datasets.
Deep learning models have shown incredible performance on numerous image recognition, classification, and reconstruction tasks. Although very appealing and valuable due to their predictive capabilities, one common threat remains challenging to resolve: a specially trained attacker can introduce malicious input perturbations to fool the network, causing potentially harmful mispredictions. Moreover, these attacks can succeed when the adversary has full access to the target model (white-box setting) and even when such access is limited (black-box setting). An ensemble of models can protect against such attacks but may be fragile under shared vulnerabilities among its members (attack transferability). To that end, this work proposes a novel diversity-promoting learning approach for deep ensembles. The idea is to promote saliency map diversity (SMD) across ensemble members, by introducing an additional term into our learning objective, so that an attacker cannot target all ensemble members at once. During training, this helps us minimize the alignment between the models' saliency maps in order to reduce shared member vulnerabilities and thereby increase the ensemble's robustness to adversaries. We empirically show reduced transferability between ensemble members and improved performance compared to state-of-the-art ensemble defenses against medium- and high-strength white-box attacks. In addition, we demonstrate that our approach, combined with existing methods, outperforms state-of-the-art ensemble algorithms for defense under both white-box and black-box attacks.
The field of defense strategies against adversarial attacks has significantly grown over the last years, but progress is hampered as the evaluation of adversarial defenses is often insufficient and thus gives a wrong impression of robustness. Many promising defenses could be broken later on, making it difficult to identify the state-of-the-art. Frequent pitfalls in the evaluation are improper tuning of hyperparameters of the attacks, gradient obfuscation or masking. In this paper we first propose two extensions of the PGD-attack overcoming failures due to suboptimal step size and problems of the objective function. We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness. We apply our ensemble to over 50 models from papers published at recent top machine learning and computer vision venues. In all except one of the cases we achieve lower robust test accuracy than reported in these papers, often by more than 10%, identifying several broken defenses.
Adversarial attacks with improved transferability, i.e., the ability of an adversarial example crafted on a known model to also fool unknown models, have recently received much attention due to their practicality. Nevertheless, existing transferable attacks craft perturbations in a deterministic manner and often fail to fully explore the loss surface, thus falling into poor local optima and suffering from low transferability. To solve this problem, we propose the Attentive-Diversity Attack (ADA), which disrupts diverse salient features in a stochastic manner to improve transferability. Primarily, we perturb the image attention to disrupt universal features shared by different models. Then, to effectively avoid poor local optima, we disrupt these features in a stochastic manner and explore the search space of transferable perturbations more exhaustively. More specifically, we use a generator to produce adversarial perturbations, each of which disturbs features in a different way depending on an input latent code. Extensive experimental evaluations demonstrate the effectiveness of our method, which outperforms the transferability of state-of-the-art methods. Code is available at https://github.com/wkim97/ada.
Transfer-based adversarial examples form one of the most important classes of black-box attacks. However, there is a trade-off between the transferability and the imperceptibility of adversarial perturbations. Prior work in this direction often requires a fixed but large $\ell_p$-norm perturbation budget to reach a good transfer success rate, leading to perceptible adversarial perturbations. On the other hand, most current unrestricted adversarial attacks that aim to generate semantics-preserving perturbations suffer from weak transferability to the target model. In this work, we propose a geometry-aware framework to generate transferable adversarial examples with minimal changes. Analogous to model selection in statistical machine learning, we leverage a validation model to select the optimal perturbation budget for each image under both the $\ell_{\infty}$-norm and unrestricted threat models. Extensive experiments validate the effectiveness of our framework in balancing imperceptibility and transferability. The methodology is the foundation of our entry to the CVPR'21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet, in which we ranked 1st among 1,559 teams and surpassed the runner-up submissions by 4.59% and 23.91% in final score and average image quality level, respectively.
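The per-image budget selection described above can be sketched as a simple search over candidate budgets, keeping the smallest one that already fools a held-out validation model. The candidate budgets and the `craft_fn` attack routine are placeholders, not the paper's configuration.

```python
import torch


def select_budget(craft_fn, validation_model, x, y,
                  budgets=(2 / 255, 4 / 255, 8 / 255, 16 / 255)):
    """Try increasing perturbation budgets and return the adversarial batch crafted
    with the smallest budget that fools the validation model, used here as a proxy
    for transfer to the unseen target model."""
    x_adv = x
    for eps in budgets:
        x_adv = craft_fn(x, y, eps)          # any transfer attack run at budget eps
        with torch.no_grad():
            pred = validation_model(x_adv).argmax(dim=1)
        if (pred != y).all():
            break                            # smallest budget that transfers on the proxy
    return x_adv
```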