Compared with deep convolutional neural networks (CNNs), vision transformers (ViTs) exhibit impressive performance and stronger adversarial robustness. On the one hand, ViTs' focus on the global interactions among image patches reduces their sensitivity to local noise. On the other hand, existing decision-based attacks designed for CNNs ignore the difference in noise sensitivity between different regions of an image, which limits the efficiency of noise compression. Therefore, validating the black-box adversarial robustness of ViTs while only querying the target model remains a challenging problem. In this paper, we propose a new decision-based black-box attack against ViTs, termed Patch-wise Adversarial Removal (PAR). PAR divides the image into patches through a coarse-to-fine search process and compresses the noise on each patch separately. PAR records the noise magnitude and noise sensitivity of each patch and selects the patch with the highest query value for noise compression. In addition, PAR can be used as a noise-initialization method for other decision-based attacks to improve noise compression efficiency on both ViTs and CNNs without introducing additional computation. Extensive experiments on the ImageNet-21K, ILSVRC-2012, and Tiny-ImageNet datasets demonstrate that PAR achieves a much lower perturbation magnitude on average with the same number of queries.
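A minimal sketch of the patch-wise noise-compression idea, assuming a decision-only oracle `is_adversarial` that reports whether a queried image is still misclassified; the function name, patch size, and shrink factor are illustrative, and this single greedy pass omits PAR's coarse-to-fine patch refinement and per-patch sensitivity bookkeeping.

```python
import numpy as np

def patchwise_noise_compression(x_orig, x_adv, is_adversarial, patch=56, shrink=0.5):
    """One greedy pass of patch-wise noise compression.

    x_orig, x_adv  : HxWxC float arrays in [0, 1] (benign image, valid adversarial image)
    is_adversarial : callable(image) -> bool, decision-only oracle (one query per call)
    """
    h, w, _ = x_orig.shape
    noise = x_adv - x_orig
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            region = (slice(i, i + patch), slice(j, j + patch))
            if np.abs(noise[region]).sum() < 1e-8:
                continue                          # nothing left to compress in this patch
            trial = noise.copy()
            trial[region] *= shrink               # shrink the noise on this patch only
            candidate = np.clip(x_orig + trial, 0.0, 1.0)
            if is_adversarial(candidate):         # keep the change only if still adversarial
                noise = candidate - x_orig
    return x_orig + noise
```

Each patch trial costs exactly one query, so spending queries only where the remaining noise is large is what keeps the overall query count low.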
Although deep neural networks show unprecedented performance on various tasks, their vulnerability to adversarial examples hinders their deployment in safety-critical systems. Many studies have shown that attacks are also possible even in the black-box setting, where the attacker cannot access the target model's internal information. Most black-box attacks are query-based, with each query obtaining the target model's output for an input, and many recent studies focus on reducing the number of required queries. In this paper, we draw attention to an implicit assumption of such attacks: that the target model's output corresponds exactly to the query input. If some randomness is introduced into the model, this assumption is broken, and query-based attacks may face enormous difficulty in both gradient estimation and local search, which are the core of their attack process. Motivated by this, we observe that even a small additive input noise can neutralize most query-based attacks, and we name this simple yet effective method Small Noise Defense (SND). We analyze how SND defends against query-based black-box attacks and demonstrate its effectiveness against eight state-of-the-art attacks on the CIFAR-10 and ImageNet datasets. Even with this strong defensive ability, SND almost preserves the original classification accuracy and computational speed. SND is readily applicable to pre-trained models by adding only one line of code at the inference stage.
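As a concrete illustration of the "one line of code" claim, here is a hedged sketch of the defended forward pass; the noise scale `sigma` and the function name are illustrative choices, not the paper's recommended settings.

```python
import torch

def snd_forward(model, x, sigma=0.01):
    """Small Noise Defense sketch: perturb the input with tiny Gaussian noise
    before inference, so repeated queries on (nearly) the same input no longer
    yield consistent outputs for gradient estimation or local search."""
    x_noisy = x + sigma * torch.randn_like(x)   # the single added line
    return model(x_noisy)
```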
Machine learning models are critically susceptible to evasion attacks from adversarial examples. In general, adversarial examples, modified inputs deceptively similar to the original input, are constructed under white-box settings by adversaries with full access to the model. However, recent attacks have shown a remarkable reduction in the number of queries needed to craft adversarial examples under black-box attacks. Particularly alarming is the ability to exploit merely the classification decision from the access interface of a trained model provided by a growing number of Machine-Learning-as-a-Service providers, including Google, Microsoft and IBM, and used by a plethora of applications incorporating these models. An adversary's ability to exploit only the predicted label from a model is distinguished as a decision-based attack. In our study, we first take a deep dive into recent state-of-the-art decision-based attacks from ICLR and S&P to highlight the costly nature of discovering low-distortion adversarial examples with gradient-estimation methods. We develop a robust, query-efficient attack capable of avoiding entrapment in local minima and misdirection from the noisy gradients seen in gradient-estimation methods. Our proposed attack method, RamBoAttack, exploits the notion of randomized block coordinate descent to explore the hidden classifier manifold, targeting perturbations that manipulate only localized input features, and thereby addresses the issues of gradient-estimation methods. Importantly, RamBoAttack is more robust to the different sample inputs available to an adversary and to the targeted class. Overall, for a given target class, RamBoAttack proves to be more robust at achieving a lower distortion within a given query budget. We curate our extensive results using the large-scale, high-resolution ImageNet dataset and open-source our attack, test samples and artifacts on GitHub.
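The following is a minimal, hypothetical sketch of the randomized block-coordinate idea: restrict each candidate change to a small block of pixels and keep it only if the decision oracle still reports the adversarial label. The block size, shrink factor, and function names are illustrative; this is a simplified step, not the authors' full algorithm.

```python
import numpy as np

def block_coordinate_step(x_orig, x_adv, is_adversarial, block=8, shrink=0.7, rng=None):
    """Shrink the adversarial noise inside one randomly chosen pixel block and
    accept the change only if the image remains adversarial (one query)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = x_orig.shape
    i = int(rng.integers(0, h - block + 1))
    j = int(rng.integers(0, w - block + 1))
    region = (slice(i, i + block), slice(j, j + block))
    trial = x_adv.copy()
    trial[region] = x_orig[region] + shrink * (x_adv[region] - x_orig[region])
    return trial if is_adversarial(trial) else x_adv
```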
We propose the Square Attack, a score-based black-box l2- and l∞-adversarial attack that does not rely on local gradient information and thus is not affected by gradient masking. Square Attack is based on a randomized search scheme which selects localized square-shaped updates at random positions so that at each iteration the perturbation is situated approximately at the boundary of the feasible set. Our method is significantly more query efficient and achieves a higher success rate compared to the state-of-the-art methods, especially in the untargeted setting. In particular, on ImageNet we improve the average query efficiency in the untargeted setting for various deep networks by a factor of at least 1.8 and up to 3 compared to the recent state-of-the-art l∞-attack of Al-Dujaili & O'Reilly (2020). Moreover, although our attack is black-box, it can also outperform gradient-based white-box attacks on the standard benchmarks, achieving a new state-of-the-art in terms of the success rate. The code of our attack is available at https://github.com/max-andr/square-attack.
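A hedged sketch of a single square-shaped l∞ update under the randomized search scheme described above, assuming HxWxC images in [0, 1] and a score oracle `loss_fn`; the side length, epsilon, and interface are illustrative simplifications of the published attack.

```python
import numpy as np

def square_linf_step(x_orig, x_adv, best_loss, loss_fn, eps=8 / 255, side=16, rng=None):
    """Propose a random square whose channels are pushed to +/-eps (a vertex of
    the feasible l_inf box) and accept it only if the black-box loss decreases."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = x_orig.shape
    i = int(rng.integers(0, h - side + 1))
    j = int(rng.integers(0, w - side + 1))
    candidate = x_adv.copy()
    signs = rng.choice([-eps, eps], size=(1, 1, c))          # one sign per channel
    candidate[i:i + side, j:j + side] = np.clip(
        x_orig[i:i + side, j:j + side] + signs, 0.0, 1.0)
    new_loss = loss_fn(candidate)                            # one model query
    return (candidate, new_loss) if new_loss < best_loss else (x_adv, best_loss)
```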
To be applicable in real-world scenarios, boundary attacks (BAs) were proposed, guaranteeing a 100% attack success rate while using only the decision information. However, existing BA methods craft adversarial examples by estimating the gradient with simple random sampling (SRS), which consumes a large number of model queries. To overcome this drawback of SRS, this paper proposes a Latin Hypercube Sampling based Boundary Attack (LHS-BA) to save the query budget. Compared with SRS, LHS has better uniformity for the same number of random samples. Therefore, the average of these random samples is closer to the true gradient than that estimated by SRS. Extensive experiments are conducted on benchmark datasets including MNIST, CIFAR, and ImageNet-1K. The experimental results demonstrate that the proposed LHS-BA is superior to state-of-the-art BA methods in terms of query efficiency. The source code is publicly available at https://github.com/gzhu-dvl/lhs-ba.
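For reference, a minimal Latin hypercube sampler in the unit cube; in the attack, such points would be mapped to sampling directions for gradient estimation. This helper is an illustrative sketch, not taken from the released code.

```python
import numpy as np

def latin_hypercube(n_samples, dim, rng=None):
    """Draw n_samples points in [0, 1]^dim by Latin hypercube sampling: each
    axis is cut into n_samples equal strata and every stratum is used exactly
    once, which gives better uniformity than simple random sampling."""
    rng = np.random.default_rng() if rng is None else rng
    # One uniform offset inside each stratum, then an independent shuffle per axis.
    samples = (rng.random((n_samples, dim)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(dim):
        samples[:, d] = samples[rng.permutation(n_samples), d]
    return samples
```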
The goal of a decision-based adversarial attack on a trained model is to generate adversarial examples based solely on observing output labels returned by the targeted model. We develop HopSkipJumpAttack, a family of algorithms based on a novel estimate of the gradient direction using binary information at the decision boundary. The proposed family includes both untargeted and targeted attacks optimized for l2 and l∞ similarity metrics, respectively. Theoretical analysis is provided for the proposed algorithms and the gradient direction estimate. Experiments show HopSkipJumpAttack requires significantly fewer model queries than several state-of-the-art decision-based adversarial attacks. It also achieves competitive performance in attacking several widely-used defense mechanisms.
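A hedged sketch of the Monte Carlo gradient-direction estimate at a boundary point, assuming a decision oracle `phi` that returns +1 for adversarial inputs and -1 otherwise; the sampling radius, sample count, and baseline handling are illustrative simplifications.

```python
import numpy as np

def estimate_gradient_direction(x_boundary, phi, n_samples=100, delta=0.01, rng=None):
    """Average unit random directions weighted by the (centered) +1/-1 decisions
    they receive near the boundary point; this approximates the normal direction
    of the decision boundary using label information only."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.normal(size=(n_samples,) + x_boundary.shape)
    u /= np.sqrt((u ** 2).sum(axis=tuple(range(1, u.ndim)), keepdims=True))
    decisions = np.array([phi(x_boundary + delta * ui) for ui in u], dtype=float)
    decisions -= decisions.mean()              # baseline subtraction reduces variance
    grad = (decisions.reshape((-1,) + (1,) * x_boundary.ndim) * u).mean(axis=0)
    return grad / (np.linalg.norm(grad) + 1e-12)
```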
Fooling deep neural networks (DNNs) via black-box optimization has become a popular adversarial attack fashion, since structural prior knowledge of DNNs is always unavailable. Nevertheless, recent black-box adversarial attacks may struggle to balance their attack ability and the visual quality of the generated adversarial examples (AEs) when attacking high-resolution images. In this paper, we propose an attention-guided black-box adversarial attack based on large-scale multi-objective evolutionary optimization, termed LMOA. By considering the spatial semantic information of images, we first exploit an attention map to determine the pixels to be perturbed. Instead of attacking the entire image, reducing the perturbed pixels with the attention mechanism helps to avoid the notorious curse of dimensionality and thereby improves the attack performance. Second, a large-scale multi-objective evolutionary algorithm is employed to search over the reduced pixels in the salient region. Benefiting from these characteristics, the generated AEs can potentially fool the target DNNs while being imperceptible to human vision. Extensive experimental results verify the effectiveness of the proposed LMOA on the ImageNet dataset. More importantly, it is more competitive than existing black-box adversarial attacks at generating high-resolution AEs with better visual quality.
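A minimal, hypothetical helper for the attention-guided pixel selection step: keep only the k most salient locations of an attention map so the evolutionary search operates in a much smaller space. The function name and interface are illustrative assumptions.

```python
import numpy as np

def select_salient_pixels(attention_map, k):
    """Return the (row, col) coordinates of the k highest-attention pixels;
    only these coordinates are exposed to the evolutionary optimizer."""
    flat = attention_map.ravel()
    idx = np.argpartition(flat, -k)[-k:]
    return np.stack(np.unravel_index(idx, attention_map.shape), axis=1)
```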
A major problem of black-box adversarial attacks is the high query complexity in the hard-label attack setting, where only the top-1 predicted label is available. In this paper, we propose a novel geometry-based approach called Tangent Attack (TA), which identifies an optimal tangent point of a virtual hemisphere located on the decision boundary to reduce the distortion of the attack. Assuming the decision boundary is locally flat, we theoretically prove that the minimum $\ell_2$ distortion can be obtained by reaching the decision boundary along the tangent line passing through such a tangent point in each iteration. To improve the robustness of our method, we further propose a generalized method which replaces the hemisphere with a semi-ellipsoid to adapt to curved decision boundaries. Our approach is free of hyperparameters and pre-training. Extensive experiments conducted on the ImageNet and CIFAR-10 datasets demonstrate that our approach achieves low-magnitude distortion with only a small number of queries. The implementation source code is released online at https://github.com/machanic/tangentattack.
In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget. Due to the limited feedback information, existing query-based black-box attack methods often require many queries for attacking each benign example. To reduce query cost, we propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability. Specifically, by treating the attack on each benign example as one task, we develop a meta-learning framework by training a meta-generator to produce perturbations conditioned on benign examples. When attacking a new benign example, the meta generator can be quickly fine-tuned based on the feedback information of the new task as well as a few historical attacks to produce effective perturbations. Moreover, since the meta-train procedure consumes many queries to learn a generalizable generator, we utilize model-level adversarial transferability to train the meta-generator on a white-box surrogate model, then transfer it to help the attack against the target model. The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack methods to boost their performance, which is verified by extensive experiments.
Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, recent research on DNNs has indicated ever-increasing concern on the robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of a well-trained DNN by demonstrating the ability of generating barely noticeable (to both humans and machines) adversarial images that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable by simply training and attacking a substitute model built upon the target model, known as a black-box attack to DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack and importance sampling techniques to efficiently attack black-box models.
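A minimal sketch of the zeroth-order gradient estimate that underlies such attacks: a symmetric finite difference on one coordinate of the input, costing two score queries. The step size and interface are illustrative assumptions.

```python
import numpy as np

def zoo_coordinate_gradient(x, loss_fn, idx, h=1e-4):
    """Estimate d(loss)/d(x[idx]) from queries only, via a symmetric finite
    difference; stochastic coordinate descent then updates that coordinate."""
    e = np.zeros_like(x)
    e.flat[idx] = h
    return (loss_fn(x + e) - loss_fn(x - e)) / (2.0 * h)
```

Techniques such as dimension reduction, hierarchical attack, and importance sampling reduce how many coordinates need such two-query estimates per iteration.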
Current neural network-based classifiers are susceptible to adversarial examples even in the black-box setting, where the attacker only has query access to the model. In practice, the threat model for real-world systems is often more restrictive than the typical black-box model where the adversary can observe the full output of the network on arbitrarily many chosen inputs. We define three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partial-information setting, and the label-only setting. We develop new attacks that fool classifiers under these more restrictive threat models, where previous methods would be impractical or ineffective. We demonstrate that our methods are effective against an ImageNet classifier under our proposed threat models. We also demonstrate a targeted black-box attack against a commercial classifier, overcoming the challenges of limited query access, partial information, and other practical issues to break the Google Cloud Vision API.
Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won first place in both the NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
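A hedged PyTorch sketch of the momentum iterative method on a single white-box (or surrogate) model, assuming a batch of NCHW images in [0, 1] and an l∞ budget; the step count, epsilon, and decay factor are illustrative defaults rather than tuned settings.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0):
    """Momentum iterative FGSM: accumulate an L1-normalized gradient in a
    momentum buffer and take sign steps, projecting back into the eps-ball."""
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize by the per-sample L1 norm before adding to the momentum buffer.
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0)
    return x_adv
```

The momentum buffer keeps consecutive updates pointing in a consistent direction, which is what the abstract credits for the improved transferability.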
Adversarial patch is an important form of real-world adversarial attack that brings serious risks to the robustness of deep neural networks. Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the pasting position or manipulating the position while fixing the patch's content. This reveals that the positions and perturbations are both important to the adversarial attack. For that, in this paper, we propose a novel method to simultaneously optimize the position and perturbation for an adversarial patch, and thus obtain a high attack success rate in the black-box setting. Technically, we regard the patch's position and the pre-designed hyper-parameters that determine the patch's perturbations as the variables, and utilize the reinforcement learning framework to simultaneously solve for the optimal solution based on the rewards obtained from the target model with a small number of queries. Extensive experiments are conducted on the Face Recognition (FR) task, and results on four representative FR models show that our method can significantly improve the attack success rate and query efficiency. Besides, experiments on the commercial FR service and physical environments confirm its practical application value. We also extend our method to the traffic sign recognition task to verify its generalization ability.
We propose a novel and effective purification-based adversarial defense method against pre-processor-blind white-box and black-box attacks. Our method is computationally efficient and trained only with self-supervised learning on general images, without requiring any adversarial training or retraining of the classification model. We first present an empirical analysis of the adversarial noise, defined as the residual between an original image and its adversarial example, showing that it follows an approximately symmetric distribution. Based on this observation, we propose a very simple iterative Gaussian Smoothing (GS) method which can effectively smooth out adversarial noise and achieve substantially high robust accuracy. To further improve it, we propose Neural Contextual Iterative Smoothing (NCIS), which trains a blind-spot network (BSN) in a self-supervised manner to reconstruct the discriminative features of the original image that are also smoothed out by GS. From extensive experiments on ImageNet with four classification models, we show that our method achieves both competitive standard accuracy and state-of-the-art robust accuracy against the strongest purifier-blind white-box and black-box attacks. In addition, we propose a new benchmark for evaluating purification-based defense methods on commercial image classification APIs, such as AWS, Azure, Clarifai, and Google. We generate adversarial examples with an ensemble transfer-based black-box attack, which can induce complete misclassification by the APIs, and demonstrate that our method can be used to increase the adversarial robustness of the APIs.
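A minimal sketch of the iterative Gaussian smoothing purification step described above, using SciPy's Gaussian filter; the number of iterations and the kernel width are illustrative values, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def iterative_gaussian_smoothing(x, n_iters=3, sigma=0.8):
    """Repeatedly blur an HxWxC image in [0, 1] with a small Gaussian kernel to
    wash out roughly symmetric adversarial noise before classification."""
    out = x.astype(np.float32)
    for _ in range(n_iters):
        out = gaussian_filter(out, sigma=(sigma, sigma, 0))  # smooth spatial axes only
    return np.clip(out, 0.0, 1.0)
```

NCIS then adds a self-supervised blind-spot network on top of this step to restore the discriminative detail that the blur removes.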
An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferability over large models and a large scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels. We study both non-targeted and targeted adversarial examples, and show that while transferable non-targeted adversarial examples are easy to find, targeted adversarial examples generated using existing approaches almost never transfer with their target labels. Therefore, we propose novel ensemble-based approaches to generating transferable adversarial examples. Using such approaches, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels for the first time. We also present some geometric studies to help understand the transferable adversarial examples. Finally, we show that the adversarial examples generated using ensemble-based approaches can successfully attack Clarifai.com, which is a black-box image classification system.
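A hedged sketch of the ensemble-based idea in PyTorch: fuse several white-box models (here by a weighted average of logits) and optimize a targeted loss on the fused output, in the hope that the resulting example transfers to an unseen black-box model. The fusion scheme and names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ensemble_targeted_loss(models, weights, x_adv, target):
    """Weighted-logit ensemble objective for crafting targeted transferable
    adversarial examples; minimize this loss with any gradient-based attack."""
    fused_logits = sum(w * m(x_adv) for w, m in zip(weights, models))
    return F.cross_entropy(fused_logits, target)
```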
Decision-based attacks pose a severe threat to real-world applications since they regard the target model as a black box and access only the hard prediction label. Great efforts have been made recently to decrease the number of queries; however, existing decision-based attacks still require thousands of queries to generate adversarial examples of good quality. In this work, we find that a benign sample, the current adversarial example, and the next adversarial example can naturally construct a triangle in a subspace for any iterative attack. Based on the law of sines, we propose a novel Triangle Attack (TA) to optimize the perturbation by utilizing the geometric information that the longer side is always opposite the larger angle in any triangle. However, directly applying such information to the input image is ineffective because it cannot thoroughly explore the neighborhood of the input sample in the high-dimensional space. To address this issue, TA optimizes the perturbation in the low-frequency space for effective dimensionality reduction, owing to the generality of such a geometric property. Extensive evaluation on the ImageNet dataset shows that TA achieves a much higher attack success rate within 1,000 queries and needs far fewer queries to achieve the same attack success rate under various perturbation budgets than existing decision-based attacks. With such high efficiency, we further demonstrate the applicability of TA on a real-world API, i.e., the Tencent Cloud API.
Video classification systems are vulnerable to adversarial attacks, which can create severe security problems in video verification. Current black-box attacks need a large number of queries to succeed, resulting in high computational overhead in the process of attack. On the other hand, attacks with restricted perturbations are ineffective against defenses such as denoising or adversarial training. In this paper, we focus on unrestricted perturbations and propose StyleFool, a black-box video adversarial attack via style transfer to fool the video classification system. StyleFool first utilizes color theme proximity to select the best style image, which helps avoid unnatural details in the stylized videos. Meanwhile, the target class confidence is additionally considered in targeted attacks to influence the output distribution of the classifier by moving the stylized video closer to or even across the decision boundary. A gradient-free method is then employed to further optimize the adversarial perturbations. We carry out extensive experiments to evaluate StyleFool on two standard datasets, UCF-101 and HMDB-51. The experimental results demonstrate that StyleFool outperforms the state-of-the-art adversarial attacks in terms of both the number of queries and the robustness against existing defenses. Moreover, 50% of the stylized videos in untargeted attacks do not need any query since they can already fool the video classification model. Furthermore, we evaluate the indistinguishability through a user study to show that the adversarial samples of StyleFool look imperceptible to human eyes, despite unrestricted perturbations.
CNN-based face recognition models have brought remarkable performance improvements, but they are vulnerable to adversarial perturbations. Recent studies have shown that adversaries can fool such models even when they can only access the models' hard-label output. However, since many queries are needed to find imperceptible adversarial noise, reducing the number of queries is crucial for these attacks. In this paper, we point out two limitations of existing decision-based black-box attacks. We observe that they waste queries on background-noise optimization and that they do not exploit adversarial perturbations generated for other images. We exploit 3D face alignment to overcome these limitations and propose a general strategy for query-efficient black-box attacks on face recognition, named Geometrically Adaptive Dictionary Attack (GADA). Our core idea is to create an adversarial perturbation in the UV texture map and project it onto the face in the image. This greatly improves query efficiency by limiting the perturbation search space to the facial area and effectively recycling previous perturbations. We apply the GADA strategy to two existing attack methods and show overwhelming performance improvements in experiments on the LFW and CPLFW datasets. Furthermore, we present a new attack strategy that can circumvent query-similarity-based stateful detection, which identifies the process of query-based black-box attacks.
To assess the vulnerability of deep learning in the physical world, recent works introduce adversarial patches and apply them on different tasks. In this paper, we propose another kind of adversarial patch: the Meaningful Adversarial Sticker, a physically feasible and stealthy attack method that uses real stickers existing in our life. Unlike previous adversarial patches that design perturbations, our method manipulates the sticker's pasting position and rotation angle on the objects to perform physical attacks. Because the position and rotation angle are less affected by printing loss and color distortion, adversarial stickers can keep good attacking performance in the physical world. Besides, to make adversarial stickers more practical in real scenes, we conduct attacks in the black-box setting with limited information rather than the white-box setting with all the details of the threat models. To effectively solve for the sticker's parameters, we design the Region based Heuristic Differential Evolution Algorithm, which utilizes the newly found regional aggregation of effective solutions and an adaptive adjustment strategy for the evaluation criteria. Our method is comprehensively verified on face recognition and then extended to image retrieval and traffic sign recognition. Extensive experiments show the proposed method is effective and efficient in complex physical conditions and has good generalization across different tasks.
Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security-sensitive systems. We propose the high-level representation guided denoiser (HGD) as a defense for image classification. A standard denoiser suffers from the error amplification effect, in which small residual adversarial noise is progressively amplified and leads to wrong classifications. HGD overcomes this problem by using a loss function defined as the difference between the target model's outputs activated by the clean image and the denoised image. Compared with ensemble adversarial training, which is the state-of-the-art defense method on large images, HGD has three advantages. First, with HGD as a defense, the target model is more robust to either white-box or black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes. Third, HGD can be transferred to defend models other than the one guiding it. In the NIPS competition on defense against adversarial attacks, our HGD solution won first place and outperformed other models by a large margin.
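A hedged PyTorch sketch of the guidance loss described above: instead of matching pixels, the denoiser is trained so that the target model's high-level outputs for the denoised adversarial image match those for the clean image. The choice of layer (here the model's final outputs) and the L1 distance are illustrative of the idea, not an exact reproduction of the paper's setup.

```python
import torch
import torch.nn.functional as F

def hgd_guidance_loss(target_model, denoiser, x_adv, x_clean):
    """Supervise the denoiser with the target model's activations so that small
    residual noise is not amplified into a different high-level representation."""
    feats_denoised = target_model(denoiser(x_adv))
    with torch.no_grad():
        feats_clean = target_model(x_clean)     # guidance signal, no gradient needed
    return F.l1_loss(feats_denoised, feats_clean)
```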