Adding perturbations computed from auxiliary gradient information and discarding existing details of the benign images are two common approaches for generating adversarial examples. Though visual imperceptibility is the desired property of adversarial examples, conventional adversarial attacks still generate traceable adversarial perturbations. In this paper, we introduce a novel Adversarial Attack via Invertible Neural Networks (AdvINN) method to produce robust and imperceptible adversarial examples. Specifically, AdvINN fully takes advantage of the information preservation property of Invertible Neural Networks and thereby generates adversarial examples by simultaneously adding class-specific semantic information of the target class and dropping discriminant information of the original class. Extensive experiments on CIFAR-10, CIFAR-100, and ImageNet-1K demonstrate that the proposed AdvINN method can produce adversarial images that are less perceptible than those of state-of-the-art methods, and that AdvINN yields more robust adversarial examples with high confidence compared to other adversarial attacks.
Although deep learning models have achieved unprecedented success, their vulnerability to adversarial attacks has raised increasing concern, especially when they are deployed in security-critical domains. To address the challenge, numerous defense strategies, including reactive and proactive ones, have been proposed for robustness improvement. From the perspective of the image feature space, some of them cannot reach satisfying results due to the shift of features. Besides, features learned by models are not directly related to classification results. Different from them, we consider defense methods essentially from the model interior and investigate neuron behaviors before and after attacks. We observe that attacks mislead the model by dramatically changing the neurons that contribute most to the correct label. Motivated by this, we introduce the concept of neuron influence and further divide neurons into front, middle, and tail ones. Based on it, we propose neuron-level inverse perturbation (NIP), the first neuron-level reactive defense method against adversarial attacks. By strengthening front neurons and weakening those in the tail, NIP can eliminate nearly all adversarial perturbations while still maintaining high benign accuracy. Moreover, it can cope with different sizes of perturbations via adaptivity, especially larger ones. Comprehensive experiments conducted on three datasets and six models show that NIP outperforms state-of-the-art baselines against eleven adversarial attacks. We further provide interpretable evidence through neuron activation and visualization for better understanding.
Meanwhile, black-box adversarial attacks have attracted impressive attention for their practical applications in the field of deep learning security, while they are very challenging since the network architecture and internal weights of the target model are inaccessible. Based on the hypothesis that if an example remains adversarial for multiple models, then it is more likely to transfer its attack capability to other models, ensemble-based adversarial attack methods are efficient for black-box attacks. However, the ways of ensemble attack are rather under-investigated, and existing ensemble attacks simply fuse the outputs of all the models evenly. In this work, we treat the iterative ensemble attack as a stochastic gradient descent optimization process, in which variances of the gradients on different models may lead to poor local optima. To this end, we propose a novel attack method called the stochastic variance reduced ensemble (SVRE) attack, which could reduce the gradient variance of the ensemble models and take full advantage of the ensemble attack. Empirical results on the standard ImageNet dataset demonstrate that the proposed method could boost the adversarial transferability and outperform existing ensemble attacks significantly.
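The variance-reduction idea can be illustrated with an SVRG-style correction applied to ensemble gradients. The sketch below is a rough illustration under stated assumptions, not the authors' exact algorithm: `svre_step`, its signature, and the sign-ascent update are illustrative choices.

```python
import numpy as np

def svre_step(grad_fns, x, lr=0.1, inner=4, rng=None):
    """One outer step of an SVRG-style variance-reduced ensemble update
    (a sketch of the SVRE idea, not the authors' exact algorithm).
    `grad_fns` holds one gradient function per surrogate model."""
    rng = rng or np.random.default_rng()
    snap = x.copy()
    # full ensemble gradient at the snapshot point
    g_full = np.mean([g(snap) for g in grad_fns], axis=0)
    for _ in range(inner):
        k = rng.integers(len(grad_fns))        # pick one model at random
        # variance-reduced gradient: single-model gradient corrected by
        # that same model's discrepancy at the snapshot
        g = grad_fns[k](x) - grad_fns[k](snap) + g_full
        x = x + lr * np.sign(g)                # sign-ascent, as in iterative attacks
    return x

# toy check with constant per-model gradients: the correction cancels
# the single-model term, so every inner step follows the ensemble mean
grads = [lambda z: np.array([1.0, -1.0]), lambda z: np.array([3.0, 3.0])]
out = svre_step(grads, np.zeros(2), lr=0.1, inner=4)
print(out)  # approx [0.4, 0.4]
```

With constant gradients the variance-reduced estimate collapses to the ensemble mean, which is exactly the point: fluctuations between models are subtracted out before each step.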
In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget. Due to the limited feedback information, existing query-based black-box attack methods often require many queries for attacking each benign example. To reduce query cost, we propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability. Specifically, by treating the attack on each benign example as one task, we develop a meta-learning framework by training a meta-generator to produce perturbations conditioned on benign examples. When attacking a new benign example, the meta-generator can be quickly fine-tuned based on the feedback information of the new task as well as a few historical attacks to produce effective perturbations. Moreover, since the meta-train procedure consumes many queries to learn a generalizable generator, we utilize model-level adversarial transferability to train the meta-generator on a white-box surrogate model, then transfer it to help the attack against the target model. The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack methods to boost their performance, which is verified by extensive experiments.
Adversarial attacks provide a good way to study the robustness of deep learning models. One category of methods in transfer-based black-box attacks utilizes several image transformation operations to improve the transferability of adversarial examples, which is effective but fails to take the specific characteristics of the input image into consideration. In this work, we propose a novel architecture, called the Adaptive Image Transformation Learner (AITL), which incorporates different image transformation operations into a unified framework to further improve the transferability of adversarial examples. Unlike the fixed combinational transformations used in existing works, our elaborately designed transformation learner adaptively selects the most effective combination of image transformations specific to the input image. Extensive experiments on ImageNet demonstrate that our method significantly improves the attack success rates on both normally trained models and defense models under various settings.
Deep learning models have been found to be vulnerable to adversarial examples, as small perturbations on the inputs of deep learning models can elicit wrong predictions. Most existing works on adversarial image generation try to achieve attacks against most models, while few of them make efforts to ensure the perceptual quality of the adversarial examples. High-quality adversarial examples matter for many applications, especially privacy preserving. In this work, we develop a framework based on the Minimum Noticeable Difference (MND) concept to generate adversarial privacy-preserving images that have minimum perceptual difference from the clean ones but are able to attack deep learning models. To achieve this, an adversarial loss is firstly proposed to make the deep learning models attacked by the adversarial images successfully. Then, a perceptual quality-preserving loss is developed by taking the magnitude of perturbation and perturbation-caused structural and gradient changes into account, which aims to preserve high perceptual quality for adversarial image generation. To the best of our knowledge, this is the first work on exploring quality-preserving adversarial image generation based on the MND concept for privacy preserving. To evaluate its performance in terms of perceptual quality, deep models on image classification and face recognition are tested with the proposed method and several anchor methods in this work. Extensive experimental results demonstrate that the proposed MND framework is capable of generating adversarial images with remarkably improved performance metrics (e.g., PSNR, SSIM, and MOS) compared with those generated by the anchor methods.
Although utilizing the transferability of adversarial examples can attain a high attack success rate for non-targeted attacks, it does not work well in targeted attacks, since the gradient directions from a source image to a targeted class are usually different across DNNs. To increase the transferability of targeted attacks, recent studies make the features of the generated adversarial examples align with the feature distribution of the targeted class learned from an auxiliary network or a generative adversarial network. However, these works assume that the training dataset is available and require a lot of time to train networks, which makes them hard to apply in the real world. In this paper, we revisit adversarial examples with targeted transferability from the perspective of universality, and find that highly universal adversarial perturbations tend to be more transferable. Based on this observation, we propose the Locality of Images (LI) attack to improve targeted transferability. Specifically, instead of using the classification loss only, LI introduces a feature similarity loss between the original image and its randomly cropped adversarial counterparts, which makes the features from the adversarial perturbations more dominant than those of the benign images, hence improving targeted transferability. Through incorporating the locality of images into optimizing perturbations, the LI attack emphasizes that targeted perturbations should be universal to diverse input patterns, even local image patches. Extensive experiments demonstrate that LI can achieve high success rates for transfer-based targeted attacks. On attacking the ImageNet-compatible dataset, LI yields an improvement of 12% compared with existing state-of-the-art methods.
Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomization at inference time to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method provides the following advantages: 1) no additional training or fine-tuning, 2) very few additional computations, 3) compatible with other adversarial defense methods. By combining the proposed randomization method with an adversarially trained model, it achieves a normalized score of 0.924 (ranked No.2 among 107 defense teams) in the NIPS 2017 adversarial examples defense challenge, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No.56). The code is publicly available at https://github.com/cihangxie/NIPS2017_adv_challenge_defense.
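The two randomization operations are simple enough to sketch. The snippet below is a minimal, dependency-free illustration assuming a square input smaller than the output canvas; the output size of 331 and the nearest-neighbour resizing are assumptions for the sketch and may differ from the authors' implementation.

```python
import numpy as np

def random_resize_pad(img, out_size=331, rng=None):
    """Sketch of inference-time randomization: resize the input to a
    random size, then zero-pad it at a random offset onto a fixed
    out_size x out_size canvas. Assumes img height == width < out_size."""
    rng = rng or np.random.default_rng()
    h, w, c = img.shape
    new = int(rng.integers(h, out_size))     # random target side length
    # nearest-neighbour resize via index mapping (keeps the sketch stdlib-free)
    ys = np.arange(new) * h // new
    xs = np.arange(new) * w // new
    resized = img[ys][:, xs]
    # random zero padding: pick a random top-left corner for the resized image
    top = int(rng.integers(0, out_size - new + 1))
    left = int(rng.integers(0, out_size - new + 1))
    out = np.zeros((out_size, out_size, c), dtype=img.dtype)
    out[top:top + new, left:left + new] = resized
    return out

img = np.ones((299, 299, 3), dtype=np.float32)
out = random_resize_pad(img, out_size=331, rng=np.random.default_rng(0))
print(out.shape)  # (331, 331, 3)
```

Because both the size and the offset are resampled per query, an iterative attacker cannot reliably align its perturbation with the pixels the classifier actually sees.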
Deep Neural Networks (DNNs) have recently achieved great success in many classification tasks. Unfortunately, they are vulnerable to adversarial attacks that generate adversarial examples with a small perturbation to fool DNN models, especially in model sharing scenarios. Adversarial training has proved to be the most effective strategy, which injects adversarial examples into model training to improve the robustness of DNN models against adversarial attacks. However, adversarial training based on the existing adversarial examples fails to generalize well to standard, unperturbed test data. To achieve a better trade-off between standard accuracy and adversarial robustness, we propose a novel adversarial training framework called latent boundary-guided adversarial training (LADDER), which adversarially trains DNN models on latent boundary-guided adversarial examples. In contrast to most existing methods that generate adversarial examples in the input space, LADDER generates a myriad of high-quality adversarial examples through adding perturbations to latent features. The perturbations are made along the normal of the decision boundary constructed by an SVM with an attention mechanism. We analyze the merits of the generated boundary-guided adversarial examples from a boundary field perspective and a visualization view. Extensive experiments and detailed analysis on MNIST, SVHN, CelebA, and CIFAR-10 validate the effectiveness of LADDER in achieving a better trade-off between standard accuracy and adversarial robustness as compared with vanilla DNNs and competitive baselines.
Deep learning methods have gained increased attention in various applications due to their outstanding performance. For exploring how this high performance relates to the proper use of data artifacts and the accurate problem formulation of a given task, interpretation models have become a crucial component in developing deep learning-based systems. Interpretation models enable the understanding of the inner workings of deep learning models and offer a sense of security in detecting the misuse of artifacts in the input data. Similar to prediction models, interpretation models are also susceptible to adversarial inputs. This work introduces two attacks, AdvEdge and AdvEdge$^{+}$, that deceive both the target deep learning model and the coupled interpretation model. We assess the effectiveness of the proposed attacks against two deep learning model architectures coupled with four interpretation models that represent different categories of interpretation techniques. Our experiments include the attack implementation using various attack frameworks. We also explore the potential countermeasures against such attacks. Our analysis shows the effectiveness of our attacks in terms of deceiving the deep learning models and their interpreters, and highlights insights to improve and circumvent the attacks.
A body of research has focused on adversarial attacks on images in recent years, while adversarial video attacks have rarely been explored. We propose an adversarial attack strategy on videos, called DeepSAVA. Our model includes both additive perturbation and spatial transformation by a unified optimization framework, where the structural similarity index (SSIM) measure is adopted to measure the adversarial distance. We design an effective and novel optimization scheme that alternately utilizes Bayesian optimization to identify the most influential frame in a video and stochastic gradient descent (SGD) based optimization to produce both additive and spatial-transformed perturbations. Doing so enables DeepSAVA to perform a very sparse attack on videos for maintaining human imperceptibility while still achieving state-of-the-art performance in terms of both attack success rate and adversarial transferability. Our intensive experiments on various types of deep neural networks and video datasets confirm the superiority of DeepSAVA.
Deep Neural Networks (DNNs) have been widely used in many fields. However, studies have shown that DNNs are easily attacked by adversarial examples, which carry tiny perturbations yet greatly mislead the correct judgment of DNNs. Furthermore, even if malicious attackers cannot obtain all the underlying model parameters, they can use adversarial examples to attack various DNN-based task systems. Researchers have proposed various defense methods to protect DNNs, such as reducing the aggressiveness of adversarial examples by preprocessing or improving the robustness of the model by adding modules. However, some defense methods are only effective for small-scale examples or small perturbations and have limited defense effects for adversarial examples with large perturbations. This paper assigns different defense strategies to adversarial perturbations of different strengths by grading the perturbations on the input examples. Experimental results show that the proposed method effectively improves defense performance. In addition, the proposed method does not modify any task model and can be used as a preprocessing module, which significantly reduces the deployment cost in practical applications.
Transfer-based adversarial examples are one of the most important classes of black-box attacks. However, there is a trade-off between transferability and imperceptibility of the adversarial perturbation. Prior work in this direction often requires a fixed but large $\ell_p$-norm perturbation budget to reach a good transfer success rate, leading to perceptible adversarial perturbations. On the other hand, most of the current unrestricted adversarial attacks that aim to generate semantic-preserving perturbations suffer from weaker transferability to the target model. In this work, we propose a geometry-aware framework to generate transferable adversarial examples with minimum changes. Analogous to model selection in statistical machine learning, we leverage a validation model to select the best perturbation budget for each image under both the $\ell_{\infty}$-norm and unrestricted threat models. Extensive experiments verify the effectiveness of our framework on balancing imperceptibility and transferability. The methodology is the foundation of our entry to the CVPR'21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet, in which we ranked 1st place out of 1,559 teams and surpassed the runner-up submissions by 4.59% and 23.91% in terms of final score and average image quality level, respectively. Code is available at https://github.com/equationliu/ga-attack.
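The model-selection analogy can be sketched as a smallest-sufficient-budget search against the held-out validation model. The names `select_budget` and `fools_validation` are hypothetical; the predicate stands in for running the attack at a given budget and checking the validation model's verdict.

```python
def select_budget(fools_validation, budgets):
    """Per-image budget selection sketch: return the smallest perturbation
    budget whose adversarial example already fools a held-out validation
    model. `fools_validation(eps)` is a hypothetical predicate wrapping
    attack generation plus the validation-model check."""
    for eps in sorted(budgets):
        if fools_validation(eps):
            return eps
    return max(budgets)  # fall back to the largest candidate budget

# toy check: suppose the validation model is fooled from eps = 8 upward
print(select_budget(lambda e: e >= 8, [16, 4, 8]))  # 8
```

Selecting the budget per image, rather than fixing one global $\ell_p$ bound, is what lets easy images keep near-invisible perturbations while hard images still transfer.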
Intelligent Internet of Things (IoT) systems based on deep neural networks (DNNs) have been widely deployed in the real world. However, DNNs are found to be vulnerable to adversarial examples, which raises people's concerns about intelligent IoT systems' reliability and security. Testing and evaluating the robustness of IoT systems become necessary and essential. Various attacks and strategies have been proposed recently, but the efficiency problem remains unsolved properly. Existing methods are either computationally extensive or time-consuming, which is not applicable in practice. In this paper, we propose a novel framework called Attack-Inspired GAN (AI-GAN) to generate adversarial examples conditionally. Once trained, it can generate adversarial perturbations efficiently given input images and target classes. We apply AI-GAN on different datasets in white-box settings, black-box settings, and against targeted models protected by state-of-the-art defenses. Through extensive experiments, AI-GAN achieves high attack success rates, outperforming existing methods, and reduces generation time significantly. Moreover, for the first time, AI-GAN successfully scales to complex datasets, e.g., CIFAR-100 and ImageNet, with success rates of about 90% among all classes.
Although deep learning has made remarkable progress in processing various types of data such as images, text and speech, they are known to be susceptible to adversarial perturbations: perturbations specifically designed and added to the input to make the target model produce erroneous output. Most of the existing studies on generating adversarial perturbations attempt to perturb the entire input indiscriminately. In this paper, we propose ExploreADV, a general and flexible adversarial attack system that is capable of modeling regional and imperceptible attacks, allowing users to explore various kinds of adversarial examples as needed. We adapt and combine two existing boundary attack methods, DeepFool and Brendel\&Bethge Attack, and propose a mask-constrained adversarial attack system, which generates minimal adversarial perturbations under the pixel-level constraints, namely ``mask-constraints''. We study different ways of generating such mask-constraints considering the variance and importance of the input features, and show that our adversarial attack system offers users good flexibility to focus on sub-regions of inputs, explore imperceptible perturbations and understand the vulnerability of pixels/regions to adversarial attacks. We demonstrate our system to be effective based on extensive experiments and user study.
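The mask-constraint idea can be illustrated independently of the underlying boundary attacks (DeepFool and Brendel&Bethge in the system above). The FGSM-style step below is only a stand-in used to show how a pixel-level binary mask confines the perturbation to a user-chosen sub-region; `masked_step` and its signature are illustrative.

```python
import numpy as np

def masked_step(x, grad, mask, eps):
    """Sketch of a pixel-level 'mask-constraint': an FGSM-style step that
    is zeroed outside the allowed region, so only pixels with mask == 1
    can ever be perturbed."""
    return x + eps * np.sign(grad) * mask

x = np.zeros((2, 2))
grad = np.ones((2, 2))
mask = np.array([[1.0, 0.0], [0.0, 0.0]])   # attack only the top-left pixel
adv = masked_step(x, grad, mask, eps=0.1)
print(adv)  # only the masked pixel moves
```

The same multiplicative mask composes with any gradient-based attack step, which is what lets users probe the vulnerability of individual pixels or regions.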
Deep neural networks (DNNs) are under threat from adversarial example attacks. The adversary can easily change the outputs of DNNs by adding small well-designed perturbations to inputs. Adversarial example detection is fundamental for robust DNN-based services. Adversarial examples show the difference between humans and DNNs in image recognition. From a human-centric perspective, image features can be divided into dominant features that are comprehensible to humans, and recessive features that are incomprehensible to humans yet are exploited by DNNs. In this paper, we reveal that imperceptible adversarial examples are the product of recessive features misleading neural networks, and that an adversarial attack is essentially a kind of method to enrich these recessive features in the image. The imperceptibility of the adversarial examples indicates that the perturbations enrich recessive features yet hardly affect dominant features. Therefore, adversarial examples are sensitive to filtering off recessive features, while benign examples are immune to such an operation. Inspired by this idea, we propose a label-only adversarial detection approach that is referred to as Feature Filter. Feature Filter utilizes the discrete cosine transform to approximately separate recessive features from dominant features and gets a mutant image that is filtered of recessive features. By only comparing the DNN's prediction labels on the input and its mutant, Feature Filter can detect imperceptible adversarial examples in real time at high accuracy with few false positives.
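A minimal sketch of the DCT-based filtering, assuming a square single-channel image and an assumed low-frequency cutoff `keep`; the paper's exact separation of dominant and recessive features may differ. The `predict` argument stands in for any hypothetical label-returning classifier.

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis matrix (satisfies D @ D.T == I)
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def filter_recessive(img, keep=8):
    """Low-pass a square image in the DCT domain: keep only the
    keep x keep low-frequency (dominant) coefficients and discard the
    high-frequency 'recessive' ones."""
    D = dct_matrix(img.shape[0])
    coef = D @ img @ D.T                  # 2-D DCT
    mask = np.zeros_like(coef)
    mask[:keep, :keep] = 1.0              # low-pass mask
    return D.T @ (coef * mask) @ D        # inverse DCT of the masked spectrum

def feature_filter_detect(predict, img, keep=8):
    """Label-only detection sketch: flag the input as adversarial iff the
    prediction label changes once recessive features are filtered off."""
    return predict(img) != predict(filter_recessive(img, keep))
```

Because only predicted labels are compared, the detector needs no access to logits, gradients, or model internals, which is what "label-only" buys in deployment.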
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks that would trigger misclassification of DNNs but may be imperceptible to human perception. Adversarial defense has been an important way to improve the robustness of DNNs. Existing attack methods often construct adversarial examples relying on some metrics like the $\ell_p$ distance to perturb samples. However, these metrics can be insufficient to conduct adversarial attacks due to their limited perturbations. In this paper, we propose a new internal Wasserstein distance (IWD) to capture the semantic similarity of two samples, and thus it helps to obtain larger perturbations than currently used metrics such as the $\ell_p$ distance. We then apply the internal Wasserstein distance to perform adversarial attack and defense. In particular, we develop a novel attack method relying on IWD to calculate the similarities between an image and its adversarial examples. In this way, we can generate diverse and semantically similar adversarial examples that are more difficult to defend against by existing defense methods. Moreover, we devise a new defense method relying on IWD to learn robust models against unseen adversarial examples. We provide both thorough theoretical and empirical evidence to support our methods.
… when full knowledge of the system is available. However, such techniques act unsuccessfully in the gray-box setting, where the face template of the attacker is unknown. In this work, we propose a similarity-based gray-box adversarial attack (SGADV) technique with a newly developed objective function. SGADV utilizes the dissimilarity score to produce the optimized adversarial example, i.e., a similarity-based adversarial attack. This technique applies to both white-box and gray-box attacks against authentication systems that determine genuine or imposter users using the dissimilarity score. To validate the effectiveness of SGADV, we conduct extensive experiments on the face datasets of LFW, CelebA, and CelebA-HQ against the deep face recognition models of FaceNet and InsightFace in both white-box and gray-box settings. The results suggest that the proposed method significantly outperforms the existing adversarial attack techniques in the gray-box setting. We hence summarize that similarity-based approaches to developing adversarial examples can satisfactorily cater to the gray-box attack scenario of de-authentication.
Adversarial attacks hamper the decision-making ability of neural networks by perturbing the input signal. The addition of calculated small distortion to images, for instance, can deceive a well-trained image classification network. In this work, we propose a novel attack technique called Sparse Adversarial and Interpretable Attack Framework (SAIF). Specifically, we design imperceptible attacks that contain low-magnitude perturbations at a small number of pixels and leverage these sparse attacks to reveal the vulnerability of classifiers. We use the Frank-Wolfe (conditional gradient) algorithm to simultaneously optimize the attack perturbations for bounded magnitude and sparsity with $O(1/\sqrt{T})$ convergence. Empirical results show that SAIF computes highly imperceptible and interpretable adversarial examples, and outperforms state-of-the-art sparse attack methods on the ImageNet dataset.
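The Frank-Wolfe (conditional-gradient) update over an $\ell_\infty$ ball is compact enough to sketch. The snippet below is a minimal illustration with the classic step size $\gamma_t = 2/(t+2)$; SAIF jointly enforces a sparsity constraint as well, which this sketch omits, and `frank_wolfe_linf` and its signature are assumptions for illustration.

```python
import numpy as np

def frank_wolfe_linf(grad_fn, x, eps, steps=20):
    """Minimal Frank-Wolfe sketch for optimizing a loss over the l_inf
    ball of radius eps around x. Each iterate is a convex combination of
    ball vertices, so the constraint holds by construction (no projection)."""
    delta = np.zeros_like(x)
    for t in range(steps):
        g = grad_fn(x + delta)
        s = -eps * np.sign(g)            # linear minimization oracle on the ball
        gamma = 2.0 / (t + 2.0)          # classic conditional-gradient step size
        delta = (1.0 - gamma) * delta + gamma * s
    return delta

# toy check: for a constant gradient the iterate snaps to the ball's corner
grad = lambda z: -np.array([1.0, -2.0, 0.5])
d = frank_wolfe_linf(grad, np.zeros(3), eps=0.1)
print(d)  # approx [0.1, -0.1, 0.1]
```

Staying feasible via convex combinations rather than projection is what makes the Frank-Wolfe iterates naturally bounded, and it is the property sparse variants build on.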
Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, recent research on DNNs has indicated ever-increasing concern on the robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of a well-trained DNN by demonstrating the ability of generating barely noticeable (to both human and machines) adversarial images that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable by simply training and attacking a substitute model built upon the target model, known as a black-box attack to DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack and importance sampling techniques to efficiently attack black-box models.
* Pin-Yu Chen and Huan Zhang contribute equally to this work.
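At its core, the zeroth-order gradient estimation reduces to a symmetric finite difference per coordinate, using only score queries to the model. The sketch below shows one coordinate estimate; the function name and smoothing parameter `h` are illustrative, and the full method adds coordinate descent, dimension reduction, and importance sampling on top.

```python
import numpy as np

def zoo_coord_grad(f, x, idx, h=1e-4):
    """Symmetric-difference estimate of df/dx[idx] using only function
    (score) queries, i.e. two queries per coordinate and no access to
    model internals."""
    e = np.zeros_like(x)
    e.flat[idx] = h
    return (f(x + e) - f(x - e)) / (2 * h)

# toy check on f(x) = sum(x**2), whose true gradient is 2 * x
f = lambda v: float(np.sum(v ** 2))
x = np.array([1.0, -2.0, 3.0])
g = zoo_coord_grad(f, x, idx=1)
print(g)  # close to -4.0 (= 2 * x[1])
```

Two queries per coordinate is exactly why the paper pairs this estimator with coordinate selection and dimension reduction: estimating every pixel of a large image naively would be prohibitively query-expensive.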