Deep-neural-network-based image classification can be misled by adversarial examples with small and quasi-imperceptible perturbations. Moreover, adversarial examples created on one classification model can also fool another, different model. The transferability of adversarial examples has recently attracted growing interest since it makes black-box attacks on classification models feasible. As an extension of classification, semantic segmentation has also received much attention regarding its adversarial robustness. However, the transferability of adversarial examples across segmentation models has not been systematically studied. In this work, we study this topic in depth. We first explore the overfitting phenomenon of adversarial examples on classification and segmentation models. In contrast to observations on classification models, where limited transferability is attributed to overfitting the source model, we find that adversarial examples on segmentation models do not always overfit their source models, and even when overfitting does occur, their transferability is still limited. We attribute the limitation to an architectural trait of segmentation models, namely multi-scale object recognition. We then propose a simple and effective method, called dynamic scaling, to overcome the limitation. The high transferability achieved by our method shows that, in contrast to observations in previous works, adversarial examples on segmentation models can easily transfer to other segmentation models. Our analysis and proposal are supported by extensive experiments.
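The paper's exact dynamic-scaling procedure is not reproduced here; the sketch below only illustrates the general idea it builds on, namely rescaling the input at each attack iteration so that the perturbation does not latch onto a single scale. It assumes a PyTorch segmentation model returning per-pixel logits; the function name, scale set, and hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def scale_aware_attack(model, image, label, eps=8/255, alpha=2/255, steps=40,
                       scales=(0.75, 1.0, 1.25)):
    """Iterative segmentation attack that rescales the input at every step
    (an illustrative sketch, not the paper's exact dynamic-scaling method)."""
    adv = image.clone().detach()
    h, w = image.shape[-2:]
    for _ in range(steps):
        adv.requires_grad_(True)
        # Pick a random scale and resize the adversarial image accordingly.
        s = scales[torch.randint(len(scales), (1,)).item()]
        scaled = F.interpolate(adv, size=(int(h * s), int(w * s)),
                               mode='bilinear', align_corners=False)
        logits = model(scaled)                        # assumed shape (N, C, h', w')
        logits = F.interpolate(logits, size=(h, w),   # back to label resolution
                               mode='bilinear', align_corners=False)
        loss = F.cross_entropy(logits, label, ignore_index=255)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()           # untargeted L_inf step
            adv = image + (adv - image).clamp(-eps, eps)
            adv = adv.clamp(0, 1)
    return adv.detach()
```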
Deep-neural-network-based image classification is vulnerable to adversarial perturbations: image classifiers can easily be fooled by adding artificial, small, and imperceptible perturbations to input images. As one of the most effective defense strategies, adversarial training was proposed to address the vulnerability of classification models, where adversarial examples are created and injected into the training data during training. Attacks and defenses on classification models have been intensively studied in the past few years. Semantic segmentation, as an extension of classification, has also received great attention recently. Recent work shows that a large number of attack iterations are required to create effective adversarial examples that fool segmentation models. This observation makes both robustness evaluation and adversarial training on segmentation models challenging. In this work, we propose an effective and efficient segmentation attack method called SegPGD. Furthermore, we provide a convergence analysis to show that, under the same number of attack iterations, the proposed SegPGD creates more effective adversarial examples than PGD. In addition, we propose to apply SegPGD as the underlying attack method for segmentation adversarial training. Since SegPGD creates more effective adversarial examples, adversarial training with SegPGD boosts the robustness of segmentation models. Our proposals are also verified with experiments on popular segmentation model architectures and standard segmentation datasets.
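As a rough illustration of SegPGD's key idea of treating correctly and wrongly classified pixels differently within the attack loss, the sketch below splits the per-pixel cross-entropy accordingly and shifts the weight between the two groups over the attack iterations. The specific schedule `lam = t / (2T)` is an assumption for illustration; the exact loss is defined in the paper.

```python
import torch
import torch.nn.functional as F

def segpgd_style_loss(logits, label, t, T):
    """Weighted per-pixel cross-entropy at iteration t of T: correctly and
    wrongly classified pixels are treated separately, and the balance between
    them shifts as the attack proceeds (schedule assumed for illustration)."""
    ce = F.cross_entropy(logits, label, reduction='none', ignore_index=255)  # (N, H, W)
    correct = (logits.argmax(dim=1) == label).float()
    lam = t / (2.0 * T)   # assumed: weight on already-misclassified pixels grows slowly
    return ((1.0 - lam) * correct * ce + lam * (1.0 - correct) * ce).mean()
```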
It has been well demonstrated that adversarial examples, i.e., natural images with visually imperceptible perturbations added, cause deep networks to fail on image classification. In this paper, we extend adversarial examples to semantic segmentation and object detection which are much more difficult. Our observation is that both segmentation and detection are based on classifying multiple targets on an image (e.g., the target is a pixel or a receptive field in segmentation, and an object proposal in detection). This inspires us to optimize a loss function over a set of pixels/proposals for generating adversarial perturbations. Based on this idea, we propose a novel algorithm named Dense Adversary Generation (DAG), which generates a large family of adversarial examples, and applies to a wide range of state-of-the-art deep networks for segmentation and detection. We also find that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks. In particular, the transferability across networks with the same architecture is more significant than in other cases. Besides, summing up heterogeneous perturbations often leads to better transfer performance, which provides an effective method of blackbox adversarial attack.
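A minimal sketch of one optimization step in the spirit of DAG for segmentation follows: aggregate a logit-margin loss over the set of pixels that are still correctly classified and push them toward a chosen adversarial label. Function names and the normalization constant are illustrative, and labels are assumed to be valid class indices.

```python
import torch

def dag_style_step(model, adv, true_label, adv_label, gamma=0.5):
    """One dense-adversary-style step: sum the (adversarial - true) logit margin
    over all still-correct pixels and take a normalized gradient step."""
    adv = adv.clone().detach().requires_grad_(True)
    logits = model(adv)                                   # (N, C, H, W)
    active = (logits.argmax(dim=1) == true_label)         # targets not yet fooled
    if not active.any():
        return adv.detach(), False                        # attack has converged
    logit_true = logits.gather(1, true_label.unsqueeze(1)).squeeze(1)
    logit_adv = logits.gather(1, adv_label.unsqueeze(1)).squeeze(1)
    loss = (logit_adv - logit_true)[active].sum()
    grad = torch.autograd.grad(loss, adv)[0]
    # Normalize the gradient by its largest magnitude before stepping.
    step = gamma * grad / (grad.abs().max() + 1e-12)
    return (adv + step).detach(), True
```

A caller would iterate this step until no active targets remain or an iteration budget is exhausted.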
Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomization at inference time to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method provides the following advantages: 1) no additional training or fine-tuning, 2) very few additional computations, 3) compatible with other adversarial defense methods. By combining the proposed randomization method with an adversarially trained model, it achieves a normalized score of 0.924 (ranked No.2 among 107 defense teams) in the NIPS 2017 adversarial examples defense challenge, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No.56). The code is publicly available at https://github.com/cihangxie/NIPS2017_adv_challenge_defense.
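A minimal sketch of the two randomization operations described above (random resizing followed by random zero-padding to a fixed resolution), applied to a batch of images at inference time; the 299-to-331 sizes are only an example.

```python
import torch
import torch.nn.functional as F

def random_resize_pad(x, final_size=331, min_size=299):
    """Resize the input to a random size, then zero-pad it to `final_size`
    at a random position (inference-time randomization sketch)."""
    new_size = torch.randint(min_size, final_size, (1,)).item()
    x = F.interpolate(x, size=(new_size, new_size), mode='nearest')
    pad = final_size - new_size
    left = torch.randint(0, pad + 1, (1,)).item()
    top = torch.randint(0, pad + 1, (1,)).item()
    return F.pad(x, (left, pad - left, top, pad - top), value=0.0)
```

The randomized input is then fed to the classifier as usual, so no retraining or fine-tuning is required.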
Though CNNs have achieved the state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples, crafted by adding human-imperceptible perturbations to clean images. However, most of the existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. By evaluating our method against top defense solutions and official baselines from NIPS 2017 adversarial competition, the enhanced attack reaches an average success rate of 73.0%, which outperforms the top-1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. Code is available at https://github.com/cihangxie/DI-2-FGSM.
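The core of the diverse-input idea can be sketched as follows: with some probability, randomly resize and pad the current adversarial image before each gradient computation of an iterative FGSM attack. Resolutions, probabilities, and step sizes below are illustrative, and the surrogate model is assumed to accept variable input resolutions.

```python
import torch
import torch.nn.functional as F

def diverse_input(x, p=0.5, low=224, high=256):
    """With probability p, randomly resize the image and zero-pad it back to a
    fixed resolution before the forward pass (sizes are illustrative)."""
    if torch.rand(1).item() >= p:
        return x
    size = torch.randint(low, high, (1,)).item()
    x = F.interpolate(x, size=(size, size), mode='nearest')
    pad = high - size
    left = torch.randint(0, pad + 1, (1,)).item()
    top = torch.randint(0, pad + 1, (1,)).item()
    return F.pad(x, (left, pad - left, top, pad - top), value=0.0)

def di_fgsm_step(model, adv, label, image, eps=16/255, alpha=2/255):
    """One iteration of an I-FGSM-style attack with input diversity."""
    adv = adv.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(diverse_input(adv)), label)
    grad = torch.autograd.grad(loss, adv)[0]
    with torch.no_grad():
        adv = adv + alpha * grad.sign()
        adv = image + (adv - image).clamp(-eps, eps)
    return adv.clamp(0, 1).detach()
```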
Transferable adversarial attacks optimize adversaries from a pretrained surrogate model and a known label space to fool unknown black-box models. Such attacks are therefore restricted by the availability of an effective surrogate model. In this work, we relax this assumption and propose Adversarial Pixel Restoration as a self-supervised alternative to train an effective surrogate model from scratch under the condition of no labels and few data samples. Our training approach is based on a min-max objective that reduces overfitting via an adversarial objective and thus optimizes for a more generalizable surrogate model. Our proposed attack is complementary to adversarial pixel restoration and is independent of any task-specific objective, as it can be launched in a self-supervised manner. We successfully demonstrate the adversarial transferability of our approach to Vision Transformers as well as convolutional neural networks for the tasks of classification, object detection, and video segmentation. Our code and pretrained surrogate models are available at: https://github.com/hashmatshadab/apr
Deep Neural Networks (DNNs) are vulnerable to the black-box adversarial attack that is highly transferable. This threat comes from the distribution gap between adversarial and clean samples in feature space of the target DNNs. In this paper, we use Deep Generative Networks (DGNs) with a novel training mechanism to eliminate the distribution gap. The trained DGNs align the distribution of adversarial samples with clean ones for the target DNNs by translating pixel values. Different from previous work, we propose a more effective pixel level training constraint to make this achievable, thus enhancing robustness on adversarial samples. Further, a class-aware feature-level constraint is formulated for integrated distribution alignment. Our approach is general and applicable to multiple tasks, including image classification, semantic segmentation, and object detection. We conduct extensive experiments on different datasets. Our strategy demonstrates its unique effectiveness and generality against black-box attacks.
Deep neural networks have achieved great success in many important remote sensing tasks. Nevertheless, their vulnerability to adversarial examples should not be neglected. In this study, we systematically analyze, for the first time, universal adversarial examples in remote sensing data without any knowledge of the victim model. Specifically, we propose a novel black-box adversarial attack method, namely Mixup-Attack, and its simple variant Mixcut-Attack, for remote sensing data. The key idea of the proposed methods is to find common vulnerabilities among different networks by attacking the features in the shallow layers of a given surrogate model. Despite their simplicity, the proposed methods can generate transferable adversarial examples that fool most state-of-the-art deep neural networks in both scene classification and semantic segmentation tasks with high success rates. We further provide the generated universal adversarial examples in a dataset named UAE-RS, which is the first dataset to provide black-box adversarial samples in the remote sensing field. We hope UAE-RS can serve as a benchmark that helps researchers design deep neural networks with strong resistance to adversarial attacks in the remote sensing field. Codes and the UAE-RS dataset are available online (https://github.com/yonghaoxu/uae-rs).
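The common thread of the proposed attacks is perturbing the features of a shallow layer of the surrogate model. A generic sketch of such a shallow-feature attack is given below; it does not reproduce the exact Mixup-Attack or Mixcut-Attack objectives, `feature_extractor` stands for a truncated surrogate network, and all hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def shallow_feature_attack(feature_extractor, image, eps=8/255, alpha=1/255, steps=20):
    """Push the adversarial example's shallow-layer features away from those of
    the clean image (a generic feature-level attack sketch)."""
    with torch.no_grad():
        clean_feat = feature_extractor(image)
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        dist = F.mse_loss(feature_extractor(adv), clean_feat)   # to be maximized
        grad = torch.autograd.grad(dist, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()
            adv = image + (adv - image).clamp(-eps, eps)
            adv = adv.clamp(0, 1)
    return adv.detach()
```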
Adversarial attacks provide a good way to study the robustness of deep learning models. One class of transfer-based black-box attacks leverages several image transformation operations to improve the transferability of adversarial examples, which is effective but fails to take the specific characteristics of the input image into account. In this work, we propose a novel architecture, called Adaptive Image Transformation Learner (AITL), which incorporates different image transformation operations into a unified framework to further improve the transferability of adversarial examples. Unlike the fixed combinations of transformations used in existing works, our elaborately designed transformation learner adaptively selects the combination of image transformations most effective for the specific input image. Extensive experiments on ImageNet demonstrate that our method significantly improves the attack success rates on both normally trained models and defense models under various settings.
The authors thank Nicholas Carlini (UC Berkeley) and Dimitris Tsipras (MIT) for feedback to improve the survey quality. We also acknowledge X. Huang (Uni. Liverpool), K. R. Reddy (IISC), E. Valle (UNICAMP), Y. Yoo (CLAIR) and others for providing pointers to make the survey more comprehensive.
In this work, we study the black-box targeted attack problem from the model discrepancy perspective. On the theoretical side, we present a generalization error bound for black-box targeted attacks, which gives a rigorous theoretical analysis for guaranteeing the success of the attack. We reveal that the attack error on a target model mainly depends on the empirical attack error on the substitute model and the maximum model discrepancy among substitute models. On the algorithmic side, we derive a new algorithm for black-box targeted attacks based on our theoretical analysis, in which we additionally minimize the maximum model discrepancy (M3D) of the substitute models when training the generator to generate adversarial examples. In this way, our model is capable of crafting highly transferable adversarial examples that are robust to model variation, thus improving the success rate for attacking the black-box model. We conduct extensive experiments on the ImageNet dataset with different classification models, and our proposed approach outperforms existing state-of-the-art methods by a significant margin. Our code will be released.
Real-world adversarial examples (typically in the form of patches) pose a serious threat to the use of deep learning models in safety-critical computer vision tasks such as visual perception in autonomous driving. This paper presents an extensive evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches, including digital, simulated, and physical ones. A novel loss function is proposed to improve the capability of attackers to induce pixel misclassification. Also, a novel attack strategy is proposed to improve the expectation-over-transformation approach used to place the patch in the scene. Finally, a state-of-the-art method for detecting adversarial patches is first extended to cope with semantic segmentation models, then improved to obtain real-time performance, and eventually evaluated in real-world scenarios. Experimental results show that, although the adversarial effect is visible with both digital and real-world attacks, its impact is often spatially confined to areas of the image around the patch. This opens further questions about the spatial robustness of real-time semantic segmentation models.
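The paper's loss function and placement strategy are not reproduced here; the sketch below only illustrates a generic expectation-over-transformation patch update for a segmentation model, averaging the per-pixel loss over several random patch placements. All names and hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def eot_patch_step(model, image, label, patch, alpha=0.01, n_placements=4):
    """One patch-optimization step averaged over random placements (generic
    EOT-style sketch for a segmentation model with per-pixel labels)."""
    patch = patch.clone().detach().requires_grad_(True)
    _, _, H, W = image.shape
    ph, pw = patch.shape[-2:]
    total = 0.0
    for _ in range(n_placements):
        top = torch.randint(0, H - ph + 1, (1,)).item()
        left = torch.randint(0, W - pw + 1, (1,)).item()
        patched = image.clone()
        patched[:, :, top:top + ph, left:left + pw] = patch   # paste the patch
        total = total + F.cross_entropy(model(patched), label, ignore_index=255)
    grad = torch.autograd.grad(total / n_placements, patch)[0]
    return (patch + alpha * grad.sign()).clamp(0, 1).detach()  # untargeted ascent
```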
Recent studies have shown that adversarial examples handcrafted on one white-box model can be used to attack other black-box models. Such cross-model transferability makes it feasible to perform black-box attacks, which raises security concerns for real-world DNN applications. Nevertheless, existing works mostly focus on investigating adversarial transferability across different deep models that share the same modality of input data; the cross-modal transferability of adversarial perturbations has never been explored. This paper investigates the transferability of adversarial perturbations across different modalities, i.e., leveraging adversarial perturbations generated on white-box image models to attack black-box video models. Specifically, motivated by the observation that the low-level feature spaces of images and video frames are similar, we propose a simple yet effective cross-modal attack method, named Image To Video (I2V) attack. I2V generates adversarial frames by minimizing the cosine similarity between the features that a pretrained image model extracts from adversarial and benign examples, then combines the generated adversarial frames to perform black-box attacks on video recognition models. Extensive experiments demonstrate that I2V can achieve high attack success rates on different black-box video recognition models. On Kinetics-400 and UCF-101, I2V achieves average attack success rates of 77.88% and 65.68%, respectively, shedding light on the feasibility of cross-modal adversarial attacks.
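A minimal sketch of the cross-modal idea described above, assuming a pretrained PyTorch image model used as a feature extractor and a clip of frames stacked along the batch dimension; step sizes and budgets are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def i2v_style_attack(image_feature_model, frames, eps=16/255, alpha=2/255, steps=60):
    """Perturb each frame so that its features from a pretrained *image* model
    become dissimilar (low cosine similarity) to the benign frame's features;
    the perturbed frames are then fed to the black-box video model."""
    with torch.no_grad():
        clean_feat = image_feature_model(frames).flatten(1)   # (T, D)
    adv = frames.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        adv_feat = image_feature_model(adv).flatten(1)
        # Minimize cosine similarity between adversarial and benign features.
        sim = F.cosine_similarity(adv_feat, clean_feat, dim=1).mean()
        grad = torch.autograd.grad(sim, adv)[0]
        with torch.no_grad():
            adv = adv - alpha * grad.sign()                   # descend on similarity
            adv = frames + (adv - frames).clamp(-eps, eps)
            adv = adv.clamp(0, 1)
    return adv.detach()
```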
Deep neural networks are vulnerable to adversarial examples, which can fool deep models by adding subtle perturbations. Although existing attacks have achieved promising results, there is still a long way to go in generating transferable adversarial examples under the black-box setting. To this end, this paper proposes to improve the transferability of adversarial examples by applying a dual-stage feature-level perturbation to an existing model to implicitly create a set of diverse models, which are then fused by a longitudinal ensemble during the iterations. The proposed method is termed Dual-Stage Network Erosion (DSNE). We conduct comprehensive experiments on both non-residual and residual networks and obtain more transferable adversarial examples at a computational cost similar to state-of-the-art methods. In particular, for residual networks, the transferability of adversarial examples can be significantly improved by biasing the residual block information toward the skip connections. Our work provides new insights into the architectural vulnerability of neural networks and presents new challenges for their robustness.
Deep networks for computer vision are not reliable when they encounter adversarial examples. In this paper, we introduce a framework that uses the dense intrinsic constraints in natural images to robustify inference. By introducing constraints at inference time, we can shift the burden of robustness from training to the inference algorithm, thereby allowing the model to adjust dynamically to each individual image's unique and potentially novel characteristics at inference time. Among different constraints, we find that equivariance-based constraints are most effective, because they allow dense constraints in the feature space without overly constraining the representation at a fine-grained level. Our theoretical results validate the importance of having such dense constraints at inference time. Our empirical experiments show that restoring feature equivariance at inference time defends against worst-case adversarial perturbations. The method obtains improved adversarial robustness on four datasets (ImageNet, Cityscapes, PASCAL VOC, and MS-COCO) on image recognition, semantic segmentation, and instance segmentation tasks. Project page is available at equi4robust.cs.columbia.edu.
The intriguing phenomenon of adversarial examples has attracted significant attention in machine learning, and perhaps even more surprising to the community is the existence of universal adversarial perturbations (UAPs), i.e., a single perturbation that fools a target DNN. With a focus on deep classifiers, this survey summarizes recent progress on universal adversarial attacks, discussing the challenges on both the attack and defense sides, as well as the reasons for the existence of UAPs. We aim to extend this work into a dynamic survey whose content will be regularly updated to follow new works on UAPs or universal attacks in a wide range of domains, such as images, audio, video, and text. Related updates will be discussed at: https://bit.ly/2sbqlgg. We welcome authors of future works in this field to contact us to have your new findings included.
Transfer-based adversarial attacks can evaluate model robustness in the black-box setting. Several methods have demonstrated impressive untargeted transferability; however, efficiently producing targeted transferability remains challenging. To this end, we develop a simple yet effective framework that crafts targeted transfer-based adversarial examples by applying hierarchical generative networks. In particular, we contribute an amortized design that is adapted to multi-class targeted attacks. Extensive experiments on ImageNet demonstrate that our method improves the success rates of targeted black-box attacks by a significant margin over existing methods: it reaches an average success rate of 29.1% against six diverse models based on only one substitute white-box model, substantially outperforming state-of-the-art gradient-based attack methods. Moreover, the proposed method is also an order of magnitude more efficient than gradient-based methods.
In recent years, with the rapid development of neural networks, the security of deep learning models has become increasingly prominent, as they are vulnerable to adversarial examples. Almost all existing gradient-based attack methods use the sign function during generation to satisfy the perturbation budget requirement of the $L_\infty$ norm. However, we find that the sign function may be inappropriate for generating adversarial examples because it modifies the exact gradient direction. We propose to remove the sign function and directly use the exact gradient direction with a scaling factor to generate adversarial perturbations, which improves the attack success rate of adversarial examples even with fewer perturbations. Moreover, considering that the best scaling factor varies across images, we propose an adaptive scaling factor generator that seeks an appropriate scaling factor for each image, which avoids the computational cost of manually searching for one. Our method can be integrated with almost all existing gradient-based attack methods to further improve the attack success rate. Extensive experiments on the CIFAR10 and ImageNet datasets show that our method exhibits higher transferability and outperforms state-of-the-art methods.
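As a rough illustration of replacing the sign function, the step below keeps the exact gradient direction and applies a scaling factor; the adaptive generator that predicts a per-image factor is not shown, and the particular normalization used here is an assumption made for the sketch.

```python
import torch
import torch.nn.functional as F

def scaled_gradient_step(model, adv, label, image, scale, eps=16/255):
    """One attack step that follows the exact gradient direction with a scaling
    factor instead of taking its sign. `scale` is a given scalar here."""
    adv = adv.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(adv), label)
    grad = torch.autograd.grad(loss, adv)[0]
    with torch.no_grad():
        # Normalize so the largest component has magnitude 1, then scale,
        # keeping the precise gradient direction (assumed normalization).
        step = scale * grad / (grad.abs().amax(dim=(1, 2, 3), keepdim=True) + 1e-12)
        adv = adv + step
        adv = image + (adv - image).clamp(-eps, eps)
    return adv.clamp(0, 1).detach()
```

An adaptive variant would replace the fixed `scale` argument with the output of a learned generator conditioned on the input image.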
With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to well-designed input samples, called adversarial examples. Adversarial perturbations are imperceptible to human but can easily fool deep neural networks in the testing/deploying stage. The vulnerability to adversarial examples becomes one of the major risks for applying deep neural networks in safety-critical environments. Therefore, attacks and defenses on adversarial examples draw great attention. In this paper, we review recent findings on adversarial examples for deep neural networks, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.
Black-box adversarial attacks have attracted considerable attention for their practical relevance to deep learning security, while at the same time being very challenging, since the attacker has no access to the target model's network architecture or internal weights. Based on the hypothesis that an example which remains adversarial for multiple models is more likely to transfer its attack capability to other models, ensemble-based adversarial attack methods are efficient for black-box attacks. However, the way of ensembling has been far less investigated, and existing ensemble attacks simply fuse the outputs of all models equally. In this work, we treat the iterative ensemble attack as a stochastic gradient descent optimization process, in which the variance of the gradients across different models may lead to poor local optima. To this end, we propose a novel attack method called the Stochastic Variance Reduced Ensemble (SVRE) attack, which reduces the gradient variance of the ensemble models and takes full advantage of the ensemble attack. Empirical results on the standard ImageNet dataset demonstrate that the proposed method can improve adversarial transferability and significantly outperform existing ensemble attacks.
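The sketch below illustrates the variance-reduction idea in an ensemble attack: each inner update corrects a randomly chosen model's gradient with a full ensemble gradient taken at an outer snapshot point, in the style of SVRG. The outer update and all hyperparameters are simplified assumptions rather than the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def ensemble_grad(models, x, label):
    """Input gradient of the average cross-entropy loss over all models."""
    x = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(m(x), label) for m in models) / len(models)
    return torch.autograd.grad(loss, x)[0]

def model_grad(model, x, label):
    """Input gradient of a single model's cross-entropy loss."""
    x = x.clone().detach().requires_grad_(True)
    return torch.autograd.grad(F.cross_entropy(model(x), label), x)[0]

def svre_style_attack(models, image, label, eps=16/255, alpha=2/255,
                      outer_steps=10, inner_steps=16):
    """Variance-reduced ensemble attack sketch: inner steps use a single model's
    gradient corrected by the ensemble gradient at the outer snapshot point."""
    adv = image.clone().detach()
    for _ in range(outer_steps):
        g_snapshot = ensemble_grad(models, adv, label)      # full ensemble gradient
        inner = adv.clone()
        for _ in range(inner_steps):
            k = torch.randint(len(models), (1,)).item()     # pick one model at random
            g = (model_grad(models[k], inner, label)
                 - model_grad(models[k], adv, label) + g_snapshot)
            inner = inner + alpha * g.sign()
            inner = image + (inner - image).clamp(-eps, eps)
            inner = inner.clamp(0, 1)
        adv = inner.detach()                                # simplified outer update
    return adv
```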