Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples, even in the black-box scenario where the attacker has no access to model information. Most existing attack methods aim to minimize the true positive rate, which often shows poor attack performance, since another sub-optimal bounding box may be detected around the attacked bounding box and become the new true positive. To settle this challenge, we propose to minimize the true positive rate and maximize the false positive rate, which can encourage more false positive objects to block the generation of new true positive bounding boxes. It is modeled as a multi-objective optimization (MOP) problem, for which a genetic algorithm can search for Pareto-optimal solutions. However, our task involves more than two million decision variables, leading to low search efficiency. Thus, we extend the standard genetic algorithm with Random Subset selection and Divide-and-Conquer, called GARSDC, which significantly improves efficiency. Moreover, to alleviate the genetic algorithm's sensitivity to population quality, we generate a gradient-prior initial population by exploiting the transferability between detectors with similar backbones. Compared with state-of-the-art attack methods, GARSDC decreases mAP by 12.0 on average and reduces queries by about 1000 times in extensive experiments. Our code can be found at https://github.com/liangsiyuan21/garsdc.
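As a rough illustration of the random-subset idea (not the authors' implementation: the `toy_fitness` function, the bounds, and all population parameters below are invented stand-ins for the detector-based objectives), a genetic algorithm can keep each generation cheap in a huge search space by mutating only a small random subset of the decision variables:

```python
import random

def toy_fitness(x):
    # Stand-in for the detector-based objectives: here we simply
    # reward vectors whose entries are close to 0.5.
    return -sum(abs(v - 0.5) for v in x)

def mutate_random_subset(x, subset_size, scale=0.2, rng=random):
    # Mutate only a small random subset of the decision variables,
    # which keeps each step cheap when the dimension is huge.
    y = list(x)
    for i in rng.sample(range(len(y)), subset_size):
        y[i] = min(1.0, max(0.0, y[i] + rng.uniform(-scale, scale)))
    return y

def ga_random_subset(dim=1000, pop_size=8, subset_size=50, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=toy_fitness, reverse=True)
        elite = scored[: pop_size // 2]              # keep the better half
        children = [mutate_random_subset(p, subset_size, rng=rng) for p in elite]
        pop = elite + children
    return max(pop, key=toy_fitness)

best = ga_random_subset()
```

In the paper's setting the decision vector would be a full-image perturbation with millions of entries and the fitness would come from querying the detector; the divide-and-conquer component would additionally split the variables into groups optimized separately.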
Fooling deep neural networks (DNNs) with black-box optimization has become a popular adversarial attack paradigm, since the structural prior knowledge of DNNs is usually unknown. Nevertheless, recent black-box adversarial attacks may struggle to balance their attack ability and the visual quality of the generated adversarial examples (AEs) on high-resolution images. In this paper, we propose an attention-guided black-box adversarial attack based on large-scale multi-objective evolutionary optimization, termed LMOA. By considering the spatial semantic information of images, we first leverage an attention map to determine the perturbed pixels. Instead of attacking the entire image, reducing the perturbed pixels with the attention mechanism helps avoid the notorious curse of dimensionality and thereby improves attack performance. Second, a large-scale multi-objective evolutionary algorithm is employed to search the reduced pixels in the salient region. Benefiting from these characteristics, the generated AEs can potentially fool the target DNNs while being imperceptible to human vision. Extensive experimental results verify the effectiveness of the proposed LMOA on the ImageNet dataset. More importantly, it is more competitive in generating high-resolution AEs with better visual quality than existing black-box adversarial attacks.
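A small sketch of the attention-guided dimensionality reduction (illustrative only; the attention map here is a toy array rather than the output of a real saliency model, and `select_perturbable_pixels` is an invented helper name):

```python
import numpy as np

def select_perturbable_pixels(attention_map, k):
    """Return the (row, col) coordinates of the k most salient pixels.

    Restricting the evolutionary search to these coordinates shrinks
    the number of decision variables from H*W down to k.
    """
    flat = attention_map.ravel()
    top = np.argpartition(flat, -k)[-k:]          # indices of the k largest values
    rows, cols = np.unravel_index(top, attention_map.shape)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy 4x4 "attention map" with an obvious salient corner.
attn = np.zeros((4, 4))
attn[0, 0], attn[0, 1], attn[1, 0] = 0.9, 0.8, 0.7
pixels = select_perturbable_pixels(attn, k=3)
```

For a 224x224 image this reduces a roughly 50,000-pixel search space to whatever budget k the attacker chooses, which is the point of the attention guidance.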
This paper focuses on highly transferable adversarial attacks on detectors, which are hard to attack in a black-box manner because of their multiple-output property and diversity across architectures. To pursue high attack transferability, one reasonable way is to find a common property shared among detectors, which facilitates the discovery of common weaknesses. We are the first to suggest that the relevance map from interpreters of detectors is such a property. Based on it, we design the Relevance Attack against Detectors (RAD), which achieves state-of-the-art transferability, exceeding existing results by more than 20%. On MS COCO, the detection mAPs of all eight black-box architectures are more than halved, and the segmentation mAPs are also significantly harmed. Given the great transferability of RAD, we generate the first adversarial dataset for object detection and instance segmentation, i.e., Adversarial Objects in COntext (AOCO), which helps to rapidly evaluate and improve the robustness of detectors.
To assess the vulnerability of deep learning in the physical world, recent works introduce adversarial patches and apply them on different tasks. In this paper, we propose another kind of adversarial patch: the Meaningful Adversarial Sticker, a physically feasible and stealthy attack method by using real stickers existing in our life. Unlike the previous adversarial patches by designing perturbations, our method manipulates the sticker's pasting position and rotation angle on the objects to perform physical attacks. Because the position and rotation angle are less affected by the printing loss and color distortion, adversarial stickers can keep good attacking performance in the physical world. Besides, to make adversarial stickers more practical in real scenes, we conduct attacks in the black-box setting with the limited information rather than the white-box setting with all the details of threat models. To effectively solve for the sticker's parameters, we design the Region based Heuristic Differential Evolution Algorithm, which utilizes the new-found regional aggregation of effective solutions and the adaptive adjustment strategy of the evaluation criteria. Our method is comprehensively verified in the face recognition and then extended to the image retrieval and traffic sign recognition. Extensive experiments show the proposed method is effective and efficient in complex physical conditions and has a good generalization for different tasks.
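Since the sticker attack searches over a handful of physical parameters (pasting position and rotation angle) rather than per-pixel noise, plain differential evolution already illustrates the optimization skeleton. The sketch below is a minimal DE/rand/1/bin loop, not the paper's Region based Heuristic Differential Evolution: the regional-aggregation heuristics and adaptive evaluation criteria are omitted, and the toy objective is invented.

```python
import random

def differential_evolution(objective, bounds, pop_size=12, F=0.8, CR=0.9,
                           iters=60, seed=0):
    # Classic DE/rand/1/bin over a small parameter vector such as
    # (paste_x, paste_y, rotation_angle).
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(v, lo, hi):
        return min(hi, max(lo, v))

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(p) for p in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)          # force at least one mutated dim
            trial = []
            for d in range(dim):
                if rng.random() < CR or d == jrand:
                    lo, hi = bounds[d]
                    trial.append(clip(a[d] + F * (b[d] - c[d]), lo, hi))
                else:
                    trial.append(pop[i][d])
            s = objective(trial)
            if s < scores[i]:                   # keep the better candidate
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]

# Toy objective: pretend the attack succeeds best near x=30, y=40, angle=15.
obj = lambda p: (p[0] - 30) ** 2 + (p[1] - 40) ** 2 + (p[2] - 15) ** 2
params, score = differential_evolution(obj, [(0, 100), (0, 100), (-90, 90)])
```

In the black-box setting the objective would instead be a score derived from querying the face-recognition model with the sticker rendered at the candidate position and angle.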
Black-box transfer attacks against image classifiers have been studied extensively in recent years. In contrast, little progress has been made on transfer attacks against object detectors. Object detectors take a holistic view of the image, and the detection of one object (or lack thereof) often depends on other objects in the scene. This makes such detectors inherently context-aware and more challenging to attack than image classifiers. In this paper, we propose a new approach to generating context-aware attacks for object detectors. We show that by using the co-occurrence of objects and their relative locations and sizes as context information, we can successfully generate targeted misclassification attacks that achieve higher transfer success rates on state-of-the-art black-box object detectors than current approaches. We test our approach on a variety of object detectors on the PASCAL VOC and MS COCO datasets, with performance improvements of up to 20 percentage points compared with other state-of-the-art methods.
Adversarial patch is an important form of real-world adversarial attack that brings serious risks to the robustness of deep neural networks. Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the pasting position or manipulating the position while fixing the patch's content. This reveals that the positions and perturbations are both important to the adversarial attack. For that, in this paper, we propose a novel method to simultaneously optimize the position and perturbation for an adversarial patch, and thus obtain a high attack success rate in the black-box setting. Technically, we regard the patch's position and the pre-designed hyper-parameters that determine the patch's perturbations as the variables, and utilize the reinforcement learning framework to simultaneously solve for the optimal solution based on the rewards obtained from the target model with a small number of queries. Extensive experiments are conducted on the Face Recognition (FR) task, and results on four representative FR models show that our method can significantly improve the attack success rate and query efficiency. Besides, experiments on the commercial FR service and physical environments confirm its practical application value. We also extend our method to the traffic sign recognition task to verify its generalization ability.
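As a hedged illustration of the query-based reinforcement-learning idea (a drastic simplification: the real method jointly optimizes position and perturbation hyper-parameters against rewards from model queries, whereas this toy optimizes only a discrete position, and `softmax_policy_attack` with its reward function are invented for the sketch), a REINFORCE-style policy over candidate paste positions might look like:

```python
import math
import random

def softmax_policy_attack(reward_fn, n_positions, episodes=500, lr=0.1, seed=0):
    """Minimal REINFORCE over a discrete set of candidate patch positions.

    reward_fn(pos) plays the role of the score returned by querying the
    target model with the patch pasted at candidate position pos.
    """
    rng = random.Random(seed)
    logits = [0.0] * n_positions
    for _ in range(episodes):
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        probs = [e / z for e in exps]
        # Sample an action from the current softmax policy.
        pos, acc, r = n_positions - 1, 0.0, rng.random()
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                pos = i
                break
        reward = reward_fn(pos)
        # Policy-gradient update: grad log pi(pos) = onehot(pos) - probs.
        for i in range(n_positions):
            grad = (1.0 if i == pos else 0.0) - probs[i]
            logits[i] += lr * reward * grad
    return max(range(n_positions), key=logits.__getitem__)

# Toy reward: only position 7 (say, a region near the eyes) fools the model.
best_pos = softmax_policy_attack(lambda p: 1.0 if p == 7 else 0.0, n_positions=10)
```

Each episode costs one model query, so the policy concentrates probability mass on rewarding positions while staying within a small query budget.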
Most black-box adversarial attack schemes for object detectors face two main shortcomings: they require access to the target model, and they generate inefficient adversarial examples (failing to make large numbers of objects disappear). To overcome these shortcomings, we propose a black-box adversarial attack scheme based on semantic segmentation and model inversion (SSMI). We first locate the position of the target object using semantic segmentation. Next, we design a neighbor background pixel replacement to replace target-region pixels with background pixels, ensuring that the pixel modifications are not easily detected by human vision. Finally, we reconstruct a machine-recognizable example and use a mask matrix to select pixels in the reconstructed example to modify the benign image and generate an adversarial example. Detailed experimental results show that SSMI can generate effective adversarial examples that evade human perception and make objects of interest disappear. More importantly, SSMI outperforms attacks of the same kind: the maximum increase in new-label and vanishing-label rates is 16%, and the maximum decrease in the mAP metric for object detection is 36%.
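One possible reading of the neighbor background pixel replacement step, sketched on a single-channel toy image (the paper's exact windowing and reconstruction details may differ, and `replace_with_neighbor_background` is an invented helper name):

```python
import numpy as np

def replace_with_neighbor_background(image, mask, radius=2):
    """Replace each masked (object) pixel with the mean of the nearby
    unmasked (background) pixels inside a small window.

    image: (H, W) array; mask: boolean array, True on the target object.
    """
    out = image.astype(float).copy()
    h, w = image.shape
    for r, c in zip(*np.nonzero(mask)):
        r0, r1 = max(0, r - radius), min(h, r + radius + 1)
        c0, c1 = max(0, c - radius), min(w, c + radius + 1)
        window_vals = image[r0:r1, c0:c1]
        window_bg = ~mask[r0:r1, c0:c1]
        if window_bg.any():               # fall back to the original pixel
            out[r, c] = window_vals[window_bg].mean()
    return out

img = np.full((6, 6), 10.0)
img[2:4, 2:4] = 200.0                     # bright "object" region
m = np.zeros((6, 6), dtype=bool)
m[2:4, 2:4] = True
cleaned = replace_with_neighbor_background(img, m)
```

Filling the object region from its surrounding background is what makes the modification hard to spot by eye while removing the detector's evidence for the object.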
The phenomenon of adversarial examples has been revealed in a variety of scenarios. Recent studies show that well-designed adversarial defense strategies can improve the robustness of deep learning models against adversarial examples. However, with the rapid development of defense techniques, it becomes difficult to evaluate the robustness of defense models because of the weak performance of existing manually designed adversarial attacks. To address this challenge, given a defense model, efficient adversarial attacks with less computational burden that yield lower robust accuracy need to be further exploited. Therefore, we propose a multi-objective memetic algorithm for the automated design of adversarial attacks, which realizes near-optimal adversarial attacks against defense models. First, a more general mathematical model for the automated design of adversarial attacks is constructed, in which the search space includes not only the attacker operations, magnitudes, iteration numbers, and loss functions, but also the connection ways of multiple adversarial attacks. In addition, we develop a multi-objective memetic algorithm combining NSGA-II and local search to solve the optimization problem. Finally, to reduce the evaluation cost during the search, we propose a representative data selection strategy based on the ranking of the cross-entropy loss values of each image output by the model. Experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets show the effectiveness of our proposed method.
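One plausible reading of the representative data selection step, assuming per-image cross-entropy losses from the model are already available (the exact selection rule in the paper may differ; `select_representative_subset` is an invented helper for the sketch):

```python
def select_representative_subset(losses, n_select):
    """Rank images by their cross-entropy loss and pick evenly spaced
    representatives from the ranking, so that easy, medium, and hard
    examples are all covered while the evaluation cost drops.
    """
    ranked = sorted(range(len(losses)), key=losses.__getitem__)
    step = len(ranked) / n_select
    return [ranked[int(i * step)] for i in range(n_select)]

# Toy cross-entropy losses for 10 images.
ce = [0.05, 2.3, 0.4, 1.1, 0.9, 3.0, 0.2, 1.8, 0.7, 0.1]
subset = select_representative_subset(ce, n_select=5)
```

Evaluating candidate attack designs only on such a subset, instead of the full dataset, is what keeps the memetic search affordable.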
With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to well-designed input samples, called adversarial examples. Adversarial perturbations are imperceptible to humans but can easily fool deep neural networks in the testing/deploying stage. The vulnerability to adversarial examples becomes one of the major risks for applying deep neural networks in safety-critical environments. Therefore, attacks and defenses on adversarial examples draw great attention. In this paper, we review recent findings on adversarial examples for deep neural networks, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.
It is well known that the performance of deep neural networks (DNNs) is vulnerable to subtle perturbations. So far, camera-based physical adversarial attacks have not drawn much attention, yet they fill a vacancy among physical attacks. In this paper, we propose a simple and efficient camera-based physical attack, called the Adversarial Color Film (AdvCF), which manipulates the physical parameters of a color film to perform attacks. Carefully designed experiments show the effectiveness of the proposed method in both digital and physical environments. In addition, the experimental results show that the adversarial samples generated by AdvCF have excellent attack transferability, which enables AdvCF to serve as an effective black-box attack. Meanwhile, we give defense guidance against AdvCF through adversarial training. Finally, we discuss the threat of AdvCF to vision-based systems and propose some promising directions for camera-based physical attacks.
In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget. Due to the limited feedback information, existing query-based black-box attack methods often require many queries for attacking each benign example. To reduce query cost, we propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability. Specifically, by treating the attack on each benign example as one task, we develop a meta-learning framework by training a meta-generator to produce perturbations conditioned on benign examples. When attacking a new benign example, the meta generator can be quickly fine-tuned based on the feedback information of the new task as well as a few historical attacks to produce effective perturbations. Moreover, since the meta-train procedure consumes many queries to learn a generalizable generator, we utilize model-level adversarial transferability to train the meta-generator on a white-box surrogate model, then transfer it to help the attack against the target model. The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack methods to boost their performance, which is verified by extensive experiments.
Over the past decade, deep learning has dramatically changed the traditional hand-crafted feature paradigm with its strong feature-learning capability, greatly improving performance on conventional tasks. However, deep neural networks have recently been demonstrated to be vulnerable to adversarial examples, malicious samples crafted with small, well-designed noise that mislead DNNs into making wrong decisions while remaining imperceptible to humans. Adversarial examples can be divided into digital adversarial attacks and physical adversarial attacks. Digital adversarial attacks are mostly conducted in laboratory settings, focusing on improving the performance of adversarial attack algorithms. In contrast, physical adversarial attacks focus on attacking DNN systems deployed in the physical world, which is a more challenging task because of the complex physical environment (i.e., brightness, occlusion, and so on). Although the difference between digital and physical adversarial examples is small, physical adversarial examples require specific designs to overcome the effects of the complex physical environment. In this paper, we review the development of physical adversarial attacks on DNN-based computer vision tasks, including image recognition, object detection, and semantic segmentation. For completeness of the algorithmic evolution, we also briefly introduce works that do not involve physical adversarial attacks. We first propose a taxonomy to summarize current physical adversarial attacks. Then we discuss the advantages and disadvantages of existing physical adversarial attacks, focusing on the techniques used to maintain adversarial effectiveness when applied in physical environments. Finally, we point out the open problems of current physical adversarial attacks and provide promising research directions.
With the rapid development of machine learning techniques, deep learning models have been deployed in almost every aspect of daily life. However, the privacy and security of these models are threatened by adversarial attacks. Among them, the black-box attack is closer to reality, where only limited knowledge can be obtained from the model. In this paper, we provide basic background knowledge on adversarial attacks and comprehensively analyze four black-box attack algorithms: Bandits, NES, Square Attack, and ZOSignSGD. We also explore the newly proposed Square Attack method with respect to the square size, hoping to improve its query efficiency.
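For intuition, a single Square Attack-style random-search step can be sketched as follows (heavily simplified: a grayscale image as nested lists, a toy brightness loss in place of the model's margin loss, and a fixed square size rather than the original method's shrinking schedule):

```python
import random

def square_attack_step(x, loss_fn, eps, side, rng):
    """Propose one Square-Attack-style update: overwrite a random square
    window with a +/-eps shift and keep the change only if the loss
    improves (pure random search, no gradient information)."""
    h, w = len(x), len(x[0])
    r = rng.randrange(h - side + 1)
    c = rng.randrange(w - side + 1)
    sign = rng.choice([-eps, eps])
    cand = [row[:] for row in x]
    for i in range(r, r + side):
        for j in range(c, c + side):
            cand[i][j] = max(0.0, min(1.0, cand[i][j] + sign))
    return (cand, True) if loss_fn(cand) < loss_fn(x) else (x, False)

# Toy loss: the "model" dislikes bright images, so accepted steps
# should gradually darken the image square by square.
loss = lambda img: sum(sum(row) for row in img)
rng = random.Random(0)
img = [[0.5] * 8 for _ in range(8)]
for _ in range(200):
    img, _ = square_attack_step(img, loss, eps=0.3, side=3, rng=rng)
```

The accept-if-better loop is what makes the method query-efficient: each proposal costs a single model query, and square size is the knob the abstract refers to when discussing query efficiency.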
Visual detection is a key task in autonomous driving and serves as a critical foundation for planning and control. Deep neural networks have achieved promising results in various visual tasks, but they are known to be vulnerable to adversarial attacks. A comprehensive understanding of the vulnerabilities of deep visual detectors is needed before their robustness can be improved. However, only a few adversarial attack/defense works have focused on object detection, and most of them employ only classification and/or localization losses while ignoring the objectness aspect. In this paper, we identify a serious objectness-related adversarial vulnerability in YOLO detectors and propose an effective attack strategy targeting the objectness aspect of visual detection in autonomous vehicles. Furthermore, to address this vulnerability, we propose a new objectness-aware adversarial training approach for visual detection. Experiments show that the proposed attack targeting the objectness aspect is 45.17% and 43.50% more effective than attacks generated from classification and/or localization losses on the KITTI and COCO-traffic datasets, respectively. Moreover, the proposed adversarial defense can improve the detectors' robustness against objectness-oriented attacks by up to 21% and 12% mAP on KITTI and COCO-traffic, respectively.
Deep learning-based 3D object detectors have made significant progress in recent years and have been deployed in a wide range of applications. It is crucial to understand the robustness of detectors against adversarial attacks when employing detectors in security-critical applications. In this paper, we make the first attempt to conduct a thorough evaluation and analysis of the robustness of 3D detectors under adversarial attacks. Specifically, we first extend three kinds of adversarial attacks to the 3D object detection task to benchmark the robustness of state-of-the-art 3D object detectors against attacks on KITTI and Waymo datasets, subsequently followed by the analysis of the relationship between robustness and properties of detectors. Then, we explore the transferability of cross-model, cross-task, and cross-data attacks. We finally conduct comprehensive experiments of defense for 3D detectors, demonstrating that simple transformations like flipping are of little help in improving robustness when the strategy of transformation imposed on input point cloud data is exposed to attackers. Our findings will facilitate investigations in understanding and defending the adversarial attacks against 3D object detectors to advance this field.
Recent research shows that deep learning based object detection is vulnerable to adversarial examples. Generally, adversarial attacks for object detection comprise targeted attacks and untargeted attacks. According to our detailed investigations, research on the former is relatively scarcer than on the latter, and all the existing methods for the targeted attack follow the same mode, i.e., the object-mislabeling mode that misleads detectors to mislabel the detected object as a specific wrong label. However, this mode has limited attack success rate, universality, and generalization performance. In this paper, we propose a new object-fabrication targeted attack mode which can mislead detectors to `fabricate' extra false objects with specific target labels. Furthermore, we design a dual attention based targeted feature space attack method to implement the proposed targeted attack mode. The attack performances of the proposed mode and method are evaluated on the MS COCO and BDD100K datasets using FasterRCNN and YOLOv5. Evaluation results demonstrate that the proposed object-fabrication targeted attack mode and the corresponding targeted feature space attack method show significant improvements in terms of image-specific attack, universal performance, and generalization capability, compared with the previous targeted attacks for object detection. Code will be made available.
Many state-of-the-art ML models have outperformed humans in various tasks such as image classification. With such outstanding performance, ML models are widely used today. However, the existence of adversarial attacks and data poisoning attacks raises real concerns about the robustness of ML models. For instance, Engstrom et al. demonstrated that state-of-the-art image classifiers could easily be fooled by a small rotation of an arbitrary image. As ML systems are increasingly integrated into safety- and security-sensitive applications, adversarial attacks and data poisoning attacks pose a considerable threat. This chapter focuses on two broad and important areas of ML security: adversarial attacks and data poisoning attacks.
Deep neural networks have achieved great success in many important remote sensing tasks. Nevertheless, their vulnerability to adversarial examples should not be neglected. In this study, we systematically analyze, for the first time, universal adversarial examples in remote sensing data without any knowledge of the victim model. Specifically, we propose a novel black-box adversarial attack method, namely Mixup-Attack, and its simple variant Mixcut-Attack, for remote sensing data. The key idea of the proposed methods is to find common vulnerabilities among different networks by attacking the features in the shallow layers of a given surrogate model. Despite their simplicity, the proposed methods can generate transferable adversarial examples that deceive most state-of-the-art deep neural networks in both scene classification and semantic segmentation tasks with high success rates. We further provide the generated universal adversarial examples in a dataset named UAE-RS, which is the first dataset to provide black-box adversarial samples in the remote sensing field. We hope UAE-RS may serve as a benchmark that helps researchers design deep neural networks with strong resistance to adversarial attacks in the remote sensing field. Codes and the UAE-RS dataset are available online (https://github.com/yonghaoxu/uae-rs).
Vision transformers (ViTs) process input images as sequences of patches via self-attention; a radically different architecture from convolutional neural networks (CNNs). This makes it interesting to study the adversarial feature space of ViT models and their transferability. In particular, we observe that adversarial patterns found via conventional adversarial attacks show very low black-box transferability even for large ViT models. However, we show that this phenomenon is only due to sub-optimal attack procedures that do not leverage the true representation potential of ViTs. A deep ViT is composed of multiple blocks with a consistent architecture comprising self-attention and feed-forward layers, where each block is capable of independently producing a class token. Formulating an attack using only the last class token (the conventional approach) does not directly leverage the discriminative information stored in the earlier tokens, leading to poor adversarial transferability of ViTs. Using the compositional nature of ViT models, we enhance the transferability of existing attacks by introducing two novel strategies specific to the architecture of ViT models. (i) Self-Ensemble: we propose a method to find multiple discriminative pathways by dissecting a single ViT model into an ensemble of networks. This allows explicitly utilizing class-specific information at each ViT block. (ii) Token Refinement: we propose to refine the tokens to further enhance the discriminative capacity at each block of the ViT. Our token refinement systematically combines the class tokens with the structural information preserved in the patch tokens. An adversarial attack, when applied to such refined tokens within the ensemble of classifiers found in a single vision transformer, has significantly higher transferability.
Although Deep Neural Networks (DNNs) have achieved impressive results in computer vision, their exposed vulnerability to adversarial attacks remains a serious concern. A series of works has shown that by adding elaborate perturbations to images, DNNs can suffer catastrophic degradation in performance metrics. This phenomenon exists not only in the digital space but also in the physical space. Therefore, estimating the security of these DNNs-based systems is critical for safely deploying them in the real world, especially for security-critical applications, e.g., autonomous cars, video surveillance, and medical diagnosis. In this paper, we focus on physical adversarial attacks and provide a comprehensive survey of over 150 existing papers. We first clarify the concept of the physical adversarial attack and analyze its characteristics. Then, we define the adversarial medium, essential to perform attacks in the physical world. Next, we present the physical adversarial attack methods in task order: classification, detection, and re-identification, and introduce their performance in solving the trilemma: effectiveness, stealthiness, and robustness. In the end, we discuss the current challenges and potential future directions.