Fooling deep neural networks (DNNs) with black-box optimization has become a popular adversarial attack paradigm, since structural prior knowledge of DNNs is always unavailable. Nevertheless, recent black-box adversarial attacks may struggle to balance their attack ability against the visual quality of the generated adversarial examples (AEs) when attacking high-resolution images. In this paper, we propose an attention-guided black-box adversarial attack based on large-scale multiobjective evolutionary optimization, termed LMOA. By considering the spatial semantic information of an image, we first leverage an attention map to determine the perturbed pixels. Rather than attacking the entire image, reducing the perturbed pixels with the attention mechanism helps avoid the notorious curse of dimensionality and thereby improves attack performance. Second, a large-scale multiobjective evolutionary algorithm is employed to traverse the reduced set of pixels in the salient region. Benefiting from its characteristics, the generated AEs have the potential to fool the target DNN while being imperceptible to human vision. Extensive experimental results verify the effectiveness of the proposed LMOA on the ImageNet dataset. More importantly, compared with existing black-box adversarial attacks, LMOA is more competitive in generating high-resolution AEs with better visual quality.
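To make the attention-guided dimensionality reduction above concrete, here is a minimal sketch of selecting perturbable pixels from a saliency/attention map; the `attention_map` input, the `keep_ratio` parameter, and the top-k rule are illustrative assumptions rather than LMOA's exact procedure.

```python
import numpy as np

def select_salient_pixels(attention_map: np.ndarray, keep_ratio: float = 0.1):
    """Return the (row, col) indices of the most salient pixels.

    attention_map : 2-D array of per-pixel attention scores (e.g. from a saliency method).
    keep_ratio    : fraction of pixels kept as the reduced search space.
    """
    flat = attention_map.ravel()
    k = max(1, int(keep_ratio * flat.size))
    # indices of the k largest attention scores
    top_idx = np.argpartition(flat, -k)[-k:]
    rows, cols = np.unravel_index(top_idx, attention_map.shape)
    return np.stack([rows, cols], axis=1)  # shape (k, 2)

# Example: a 224x224 attention map -> only ~5k pixel positions are exposed
# to the evolutionary optimizer instead of all ~50k.
att = np.random.rand(224, 224)
coords = select_salient_pixels(att, keep_ratio=0.1)
print(coords.shape)
```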
Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, recent research on DNNs has indicated ever-increasing concern on the robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of a well-trained DNN by demonstrating the ability of generating barely noticeable (to both human and machines) adversarial images that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable by simply training and attacking a substitute model built upon the target model, known as a black-box attack to DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack and importance sampling techniques to efficiently attack black-box models.
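As an illustration of the zeroth-order gradient estimation that ZOO builds on, the sketch below estimates partial derivatives with symmetric difference quotients over a random subset of coordinates; `loss_fn`, the coordinate count, and the step size `h` are placeholder assumptions, not the authors' exact implementation.

```python
import numpy as np

def zoo_coordinate_gradients(loss_fn, x, n_coords=128, h=1e-4, rng=None):
    """Estimate partial derivatives of loss_fn at x along a random subset of
    coordinates using the symmetric difference quotient
        g_i ~= (f(x + h*e_i) - f(x - h*e_i)) / (2h),
    which only requires black-box (zeroth-order) queries."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(x.size, size=n_coords, replace=False)
    grad = np.zeros(n_coords)
    flat = x.ravel()
    for j, i in enumerate(idx):
        e = np.zeros_like(flat)
        e[i] = h
        grad[j] = (loss_fn((flat + e).reshape(x.shape))
                   - loss_fn((flat - e).reshape(x.shape))) / (2 * h)
    return idx, grad

# A stochastic coordinate-descent step would then update only the queried
# coordinates, e.g.  x.ravel()[idx] -= lr * grad
```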
Deep neural networks (DNNs) are sensitive to adversarial data in a variety of scenarios, including the black-box scenario, where the attacker is only allowed to query the trained model and receive its output. Existing black-box methods for creating adversarial instances are costly, often using gradient estimation or training a substitute network. This paper introduces Attackar, an evolutionary, score-based black-box attack. Attackar is based on a novel objective function that can be used in gradient-free optimization problems. The attack only requires access to the output logits of the classifier and is thus not affected by gradient masking. No additional information is needed, rendering our method more suitable for real-life situations. We test its performance with three different state-of-the-art models (Inception-V3, ResNet-50, and VGG-16-BN) against three benchmark datasets (MNIST, CIFAR-10, and ImageNet). Furthermore, we evaluate Attackar's performance on non-differentiable transformation defenses and state-of-the-art robust models. Our results demonstrate the superior performance of Attackar, both in terms of accuracy score and query efficiency.
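A hedged sketch of a score-based, gradient-free evolutionary loop in the spirit of the description above; this is a generic (1+1)-style random search on the classifier's logits with a margin loss, not Attackar's actual objective function or evolutionary operators.

```python
import numpy as np

def margin_loss(logits, true_label):
    # Untargeted attack succeeds when some other class outscores the true one.
    other = np.max(np.delete(logits, true_label))
    return logits[true_label] - other  # negative => misclassified

def evolve_attack(query_logits, x, true_label, eps=0.05, sigma=0.01, steps=2000, rng=0):
    """query_logits(img) -> logit vector; only the output scores are used."""
    rng = np.random.default_rng(rng)
    delta = np.zeros_like(x)
    best = margin_loss(query_logits(np.clip(x, 0, 1)), true_label)
    for _ in range(steps):
        cand = np.clip(delta + sigma * rng.standard_normal(x.shape), -eps, eps)
        loss = margin_loss(query_logits(np.clip(x + cand, 0, 1)), true_label)
        if loss < best:          # keep the mutation only if it helps
            delta, best = cand, loss
        if best < 0:             # misclassification achieved
            break
    return x + delta
```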
Many state-of-the-art ML models have outperformed humans in various tasks such as image classification. With such outstanding performance, ML models are widely used today. However, the existence of adversarial attacks and data poisoning attacks raises real concerns about the robustness of ML models. For instance, Engstrom et al. demonstrated that state-of-the-art image classifiers could be easily fooled by small rotations of arbitrary images. As ML systems are increasingly incorporated into safety- and security-sensitive applications, adversarial attacks and data poisoning attacks pose a considerable threat. This chapter focuses on two broad and important areas of ML security: adversarial attacks and data poisoning attacks.
Adversarial examples are well-designed input samples in which the perturbations are imperceptible to human eyes but can easily mislead the output of deep neural networks (DNNs). Existing works synthesize adversarial examples by leveraging simple metrics to penalize perturbations, which lack sufficient consideration of the human visual system (HVS) and thus produce noticeable artifacts. To explore why the perturbations are visible, this paper summarizes four primary factors that affect the perceptibility of human eyes. Based on this investigation, we design a multi-factor metric, MulFactorLoss, for measuring the perceptual loss between benign examples and adversarial ones. In order to test the imperceptibility of the multi-factor metric, we propose a novel black-box adversarial attack referred to as GreedyFool. GreedyFool applies differential evolution to evaluate the effects of perturbed pixels on the confidence of the target DNN, and introduces greedy approximation to automatically generate adversarial perturbations. We conduct extensive experiments on the ImageNet and CIFAR-10 datasets, along with a comprehensive user study with 60 participants. The experimental results demonstrate that MulFactorLoss is a more imperceptible metric than the existing pixel-wise metrics, and that GreedyFool achieves a 100% success rate in a black-box manner.
It is well known that the performance of deep neural networks (DNNs) is vulnerable to subtle perturbations. So far, camera-based physical adversarial attacks have drawn little attention, yet they remain a vacancy in physical attacks. In this paper, we propose a simple and efficient camera-based physical attack, called Adversarial Color Film (AdvCF), which manipulates the physical parameters of a color film to perform attacks. Carefully designed experiments show the effectiveness of the proposed method in both the digital and the physical environment. In addition, the experimental results show that adversarial samples generated by AdvCF have excellent attack transferability, which makes AdvCF an effective black-box attack. Meanwhile, we give defense guidance against AdvCF through adversarial training. Finally, we discuss the threat posed by AdvCF to vision-based systems and propose some promising directions for camera-based physical attacks.
Deep neural networks are vulnerable to adversarial examples, which raises security concerns about these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won first place in both the NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
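The momentum iterative update described above is commonly written as g_{t+1} = mu * g_t + grad / ||grad||_1 and x_{t+1} = clip(x_t + alpha * sign(g_{t+1})); the sketch below implements that update with a placeholder `grad_fn` standing in for white-box gradient access (step sizes and budgets are illustrative).

```python
import numpy as np

def momentum_iterative_attack(grad_fn, x, eps=8/255, steps=10, mu=1.0):
    """grad_fn(x_adv) -> dL/dx of the attack loss (white-box access assumed)."""
    alpha = eps / steps          # per-step budget
    g = np.zeros_like(x)
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        # accumulate a velocity vector in the gradient direction (L1-normalized)
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)
        # project back into the L-infinity ball around x and the valid pixel range
        x_adv = np.clip(np.clip(x_adv, x - eps, x + eps), 0.0, 1.0)
    return x_adv
```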
The phenomenon of adversarial examples has been revealed in a variety of scenarios. Recent studies show that well-designed adversarial defense strategies can improve the robustness of deep learning models against adversarial examples. However, with the rapid development of defense techniques, it becomes difficult to evaluate the robustness of defense models because of the weak performance of existing manually designed adversarial attacks. To meet this challenge, given a defense model, effective adversarial attacks with less computational burden and lower robust accuracy need to be further exploited. Therefore, we propose a multi-objective memetic algorithm for the automated design of adversarial attack optimization, which automatically realizes near-optimal adversarial attacks against defense models. First, a more general mathematical model for automated adversarial attack optimization design is constructed, in which the search space includes not only the attacker operations, magnitudes, iteration numbers, and loss functions, but also the connection modes of multiple adversarial attacks. In addition, we develop a multi-objective memetic algorithm that combines NSGA-II and local search to solve the optimization problem. Finally, to reduce the evaluation cost during the search, we propose a representative data selection strategy based on the ranking of the cross-entropy loss values output by the model for each image. Experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets show the effectiveness of our proposed method.
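One plausible reading of the representative data selection strategy mentioned above, sketched with generic helpers: images are ranked by their per-sample cross-entropy loss and an evenly spaced subset is kept. The even-spacing rule and the helper names are assumptions for illustration, not necessarily the paper's exact selection criterion.

```python
import numpy as np

def cross_entropy(probs, labels):
    # per-sample cross-entropy for softmax outputs `probs` with shape (N, C)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def select_representatives(probs, labels, n_keep):
    """Rank images by cross-entropy loss and keep an evenly spaced subset,
    so that cheap robustness evaluation still covers easy and hard examples."""
    losses = cross_entropy(probs, labels)
    order = np.argsort(losses)                       # easiest -> hardest
    picks = np.linspace(0, len(order) - 1, n_keep).astype(int)
    return order[picks]

# Usage: evaluate a candidate attack configuration only on the returned
# indices instead of the full dataset, cutting the evaluation cost.
```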
Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE). It requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE. The results show that 67.97% of the natural images in the Kaggle CIFAR-10 test dataset and 16.04% of the ImageNet (ILSVRC 2012) test images can be perturbed to at least one target class by modifying just one pixel, with 74.03% and 22.91% confidence on average. We also show the same vulnerability on the original CIFAR-10 dataset. Thus, the proposed attack explores a different take on adversarial machine learning in an extremely limited scenario, showing that current DNNs are also vulnerable to such low-dimension attacks. Besides, we also illustrate an important application of DE (or, broadly speaking, evolutionary computation) in the domain of adversarial machine learning: creating tools that can effectively generate low-cost adversarial attacks against neural networks for evaluating robustness.
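A compact sketch of a one-pixel attack driven by differential evolution, using SciPy's general-purpose optimizer rather than the authors' implementation; `predict_proba` is an assumed black-box callable returning class probabilities, and the candidate encoding (x, y, r, g, b) is illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(predict_proba, image, true_label, maxiter=100, popsize=20):
    """Search a single (x, y, r, g, b) modification with differential evolution.

    predict_proba(img) -> class-probability vector; only this black-box output
    is needed.  `image` is an HxWx3 float array in [0, 1].
    """
    h, w, _ = image.shape

    def apply_pixel(z):
        x, y, r, g, b = z
        img = image.copy()
        img[int(x), int(y)] = [r, g, b]
        return img

    def objective(z):
        # minimizing the true-class probability drives misclassification
        return predict_proba(apply_pixel(z))[true_label]

    bounds = [(0, h - 1), (0, w - 1), (0, 1), (0, 1), (0, 1)]
    result = differential_evolution(objective, bounds,
                                    maxiter=maxiter, popsize=popsize, seed=0)
    return apply_pixel(result.x)
```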
Machine learning models are critically susceptible to evasion attacks from adversarial examples. In general, adversarial examples, modified inputs deceptively similar to the original input, are constructed under white-box settings by adversaries with full access to the model. However, recent attacks have shown a remarkable reduction in the number of queries needed to craft adversarial examples using black-box attacks. Particularly alarming is the ability to exploit the classification decision from the access interface of a trained model provided by a growing number of Machine Learning as a Service providers, including Google, Microsoft, and IBM, and used by a plethora of applications incorporating these models. An adversary's ability to exploit only the predicted label from a model is distinguished as a decision-based attack. In our study, we first take a deep dive into recent state-of-the-art decision-based attacks from ICLR and S&P to highlight the costly nature of discovering low-distortion adversarial examples with gradient estimation methods. We develop a robust, query-efficient attack capable of avoiding entrapment in local minima and misdirection from the noisy gradients seen in gradient estimation methods. Our proposed attack method, RamBoAttack, exploits the notion of randomized block coordinate descent to explore the hidden classifier manifold, targeting perturbations that manipulate only localized input features to address the problems of gradient estimation methods. Importantly, RamBoAttack is more robust to the different sample inputs available to an adversary and to the targeted class. Overall, for a given target class, RamBoAttack is demonstrated to be more robust at achieving a lower distortion within a given query budget. We curate our extensive results using the large-scale, high-resolution ImageNet dataset and open-source our attack, test samples, and artifacts on GitHub.
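To illustrate the block coordinate descent idea in a decision-based setting, the sketch below refines an already-adversarial image by pulling random coordinate blocks back toward the benign image whenever the predicted label is preserved; this is a generic illustration under those assumptions, not the actual RamBoAttack procedure.

```python
import numpy as np

def block_coordinate_refine(predict_label, x_orig, x_adv, target,
                            block=32, step=0.1, iters=5000, rng=0):
    """Decision-based refinement: using only predicted labels, move random
    coordinate blocks of an already-adversarial image x_adv back toward the
    benign image x_orig whenever the target label is preserved, shrinking
    the distortion without any gradient estimates."""
    rng = np.random.default_rng(rng)
    flat_adv = x_adv.copy().ravel()
    flat_orig = x_orig.ravel()
    for _ in range(iters):
        idx = rng.choice(flat_adv.size, size=block, replace=False)
        trial = flat_adv.copy()
        # pull the chosen block a fraction of the way toward the original pixels
        trial[idx] += step * (flat_orig[idx] - trial[idx])
        if predict_label(trial.reshape(x_orig.shape)) == target:
            flat_adv = trial        # accept: still adversarial, lower distortion
    return flat_adv.reshape(x_orig.shape)
```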
Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples even in the black-box scenario, where the attacker cannot access the model information. Most existing attack methods aim to minimize the true positive rate, which often shows poor attack performance, because another sub-optimal bounding box may be detected around the attacked bounding box and become the new true positive. To settle this challenge, we propose to minimize the true positive rate and maximize the false positive rate, which can encourage more false positive objects to block the generation of new true positive bounding boxes. This is modeled as a multi-objective optimization (MOP) problem, for which a genetic algorithm can search for Pareto-optimal solutions. However, our task has more than two million decision variables, leading to low search efficiency. Thus, we extend the standard genetic algorithm with random subset selection and divide-and-conquer, called GARSDC, which significantly improves the efficiency. Moreover, to alleviate the sensitivity to population quality in genetic algorithms, we generate a gradient-prior initial population, utilizing the transferability between different detectors with similar backbones. Compared with state-of-the-art attack methods, GARSDC decreases the mAP by 12.0 on average and the number of queries by about 1000 times in extensive experiments. Our code can be found at https://github.com/liangsiyuan21/garsdc.
The authors thank Nicholas Carlini (UC Berkeley) and Dimitris Tsipras (MIT) for feedback to improve the survey quality. We also acknowledge X. Huang (Uni. Liverpool), K. R. Reddy (IISC), E. Valle (UNICAMP), Y. Yoo (CLAIR) and others for providing pointers to make the survey more comprehensive.
To assess the vulnerability of deep learning in the physical world, recent works introduce adversarial patches and apply them on different tasks. In this paper, we propose another kind of adversarial patch: the Meaningful Adversarial Sticker, a physically feasible and stealthy attack method by using real stickers existing in our life. Unlike the previous adversarial patches by designing perturbations, our method manipulates the sticker's pasting position and rotation angle on the objects to perform physical attacks. Because the position and rotation angle are less affected by the printing loss and color distortion, adversarial stickers can keep good attacking performance in the physical world. Besides, to make adversarial stickers more practical in real scenes, we conduct attacks in the black-box setting with the limited information rather than the white-box setting with all the details of threat models. To effectively solve for the sticker's parameters, we design the Region based Heuristic Differential Evolution Algorithm, which utilizes the new-found regional aggregation of effective solutions and the adaptive adjustment strategy of the evaluation criteria. Our method is comprehensively verified in the face recognition and then extended to the image retrieval and traffic sign recognition. Extensive experiments show the proposed method is effective and efficient in complex physical conditions and has a good generalization for different tasks.
In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget. Due to the limited feedback information, existing query-based black-box attack methods often require many queries for attacking each benign example. To reduce query cost, we propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability. Specifically, by treating the attack on each benign example as one task, we develop a meta-learning framework by training a meta-generator to produce perturbations conditioned on benign examples. When attacking a new benign example, the meta generator can be quickly fine-tuned based on the feedback information of the new task as well as a few historical attacks to produce effective perturbations. Moreover, since the meta-train procedure consumes many queries to learn a generalizable generator, we utilize model-level adversarial transferability to train the meta-generator on a white-box surrogate model, then transfer it to help the attack against the target model. The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack methods to boost their performance, which is verified by extensive experiments.
With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to well-designed input samples, called adversarial examples. Adversarial perturbations are imperceptible to human but can easily fool deep neural networks in the testing/deploying stage. The vulnerability to adversarial examples becomes one of the major risks for applying deep neural networks in safety-critical environments. Therefore, attacks and defenses on adversarial examples draw great attention. In this paper, we review recent findings on adversarial examples for deep neural networks, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.
Deep neural networks (DNNs) are found to be vulnerable to adversarial attacks, and various methods have been proposed for the defense. Among these methods, adversarial training has been drawing increasing attention because of its simplicity and effectiveness. However, the performance of adversarial training is greatly limited by the architectures of target DNNs, which often leaves the resulting DNNs with poor accuracy and unsatisfactory robustness. To address this problem, we propose DSARA to automatically search for neural architectures that are accurate and robust after adversarial training. In particular, we design a novel cell-based search space specifically for adversarial training, which improves the accuracy and the robustness upper bound of the searched architectures by carefully designing the placement of the cells and the proportional relationship of the filter numbers. Then we propose a two-stage search strategy to search for both accurate and robust neural architectures. At the first stage, the architecture parameters are optimized to minimize the adversarial loss, which makes full use of the effectiveness of adversarial training in enhancing robustness. At the second stage, the architecture parameters are optimized to minimize both the natural loss and the adversarial loss utilizing the proposed multi-objective adversarial training method, so that the searched neural architectures are both accurate and robust. We evaluate the proposed algorithm under natural data and various adversarial attacks, which reveals the superiority of the proposed method in finding architectures that are both accurate and robust. We also conclude that accurate and robust neural architectures tend to deploy very different structures near the input and the output, which has great practical significance for both the hand-crafting and the automatic design of accurate and robust neural architectures.
Deep neural network (DNN) based synthetic aperture radar (SAR) automatic target recognition (ATR) systems have been shown to be highly vulnerable to adversarial perturbations that are deliberately designed yet almost imperceptible, but that can bias DNN inference when added to targeted objects. This leads to serious safety concerns when applying DNNs to high-stakes SAR ATR applications. Therefore, enhancing the adversarial robustness of DNNs is essential for implementing DNNs in modern real-world SAR ATR systems. Toward building more robust DNN-based SAR ATR models, this article explores the domain knowledge of the SAR imaging process and proposes a novel scattering model guided adversarial attack (SMGAA) algorithm, which can generate adversarial perturbations in the form of electromagnetic scattering responses (called adversarial scatterers). The proposed SMGAA consists of two parts: 1) a parametric scattering model and the corresponding imaging method, and 2) a customized gradient-based optimization algorithm. First, we introduce the effective attributed scattering center model (ASCM) and a general imaging method to describe the scattering behavior of typical geometric structures in the SAR imaging process. By further devising several strategies that take the domain knowledge of SAR target images into account and relax the greedy search procedure, the proposed method does not need to be prudentially fine-tuned, yet can efficiently find effective ASCM parameters to fool SAR classifiers and facilitate robust model training. Comprehensive evaluations on the MSTAR dataset show that the adversarial scatterers generated by SMGAA are more robust to perturbations and transformations in the SAR processing chain than the currently studied attacks, and are effective in constructing defensive models against malicious scatterers.
Adversarial attacks against commercial black-box speech platforms, including cloud speech APIs and voice control devices, have received little attention until recent years. Current "black-box" attacks all heavily rely on knowledge of prediction/confidence scores to craft effective adversarial examples (AEs), which can be intuitively defended against by service providers simply by not returning these messages. In this paper, we propose two novel adversarial attacks under more practical and rigorous scenarios. For commercial cloud speech APIs, we propose Occam, a decision-only black-box adversarial attack, where only the final decisions are available to the adversary. In Occam, we formulate decision-only AE generation as a discontinuous large-scale global optimization problem, and solve it by adaptively decomposing this complicated problem into a set of sub-problems and cooperatively optimizing each of them. Our Occam is a one-size-fits-all approach, which achieves 100% attack success rates on a wide range of popular speech and speaker recognition APIs, including Google, Alibaba, Microsoft, Tencent, iFlytek, and Jingdong, outperforming state-of-the-art black-box attacks. For commercial voice control devices, we propose NI-Occam, the first non-interactive physical adversarial attack, where the adversary does not need to query the oracle and has no access to its internal information or training data. We combine adversarial attacks with model inversion attacks, and thus generate physically effective audio AEs with high transferability without any interaction with the target devices. Our experimental results show that NI-Occam can successfully fool Apple Siri, Microsoft Cortana, Google Assistant, iFlytek, and Amazon Echo with an average success rate of 52% and an SNR of 9.65 dB, shedding light on non-interactive physical attacks against voice control devices.
Deep neural networks have been found vulnerable to adversarial attacks, raising potential concerns in security-sensitive contexts. To address this problem, recent research has investigated the adversarial robustness of deep neural networks from an architectural point of view. However, searching for architectures of deep neural networks is computationally expensive, especially when coupled with adversarial training. To meet this challenge, this paper proposes a bi-fidelity multiobjective neural architecture search approach. First, we formulate the NAS problem of enhancing the adversarial robustness of deep neural networks as a multiobjective optimization problem. Specifically, in addition to a low-fidelity performance predictor as the first objective, we leverage an auxiliary objective whose value is the output of a surrogate model trained with high-fidelity evaluations. Second, we reduce the computational cost by combining three performance estimation methods, namely parameter sharing, low-fidelity evaluation, and a surrogate-based predictor. Extensive experiments conducted on the CIFAR-10, CIFAR-100, and SVHN datasets confirm the effectiveness of the proposed approach.
Although Deep Neural Networks (DNNs) have achieved impressive results in computer vision, their exposed vulnerability to adversarial attacks remains a serious concern. A series of works has shown that by adding elaborate perturbations to images, DNNs could have catastrophic degradation in performance metrics. And this phenomenon does not only exist in the digital space but also in the physical space. Therefore, estimating the security of these DNNs-based systems is critical for safely deploying them in the real world, especially for security-critical applications, e.g., autonomous cars, video surveillance, and medical diagnosis. In this paper, we focus on physical adversarial attacks and provide a comprehensive survey of over 150 existing papers. We first clarify the concept of the physical adversarial attack and analyze its characteristics. Then, we define the adversarial medium, essential to perform attacks in the physical world. Next, we present the physical adversarial attack methods in task order: classification, detection, and re-identification, and introduce their performance in solving the trilemma: effectiveness, stealthiness, and robustness. In the end, we discuss the current challenges and potential future directions.