Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security-sensitive systems. We propose the high-level representation guided denoiser (HGD) as a defense for image classification. A standard denoiser suffers from the error amplification effect, in which small residual adversarial noise is progressively amplified and leads to wrong classifications. HGD overcomes this problem by using a loss function defined as the difference between the target model's outputs activated by the clean image and the denoised image. Compared with ensemble adversarial training, which is the state-of-the-art defense method on large images, HGD has three advantages. First, with HGD as a defense, the target model is more robust to both white-box and black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes. Third, HGD can be transferred to defend models other than the one guiding it. In the NIPS competition on defense against adversarial attacks, our HGD solution won first place and outperformed other models by a large margin.
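A minimal sketch of the HGD training objective described above, assuming a hypothetical `denoiser` network and a frozen pretrained `target_model` exposing a `features` method that returns a high-level activation (both names are illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

def hgd_loss(denoiser, target_model, x_adv, x_clean):
    """High-level guidance: penalize the distance between the target model's
    activations on the clean image and on the denoised adversarial image,
    rather than the pixel-space residual a standard denoiser would use."""
    x_denoised = denoiser(x_adv)
    with torch.no_grad():
        feat_clean = target_model.features(x_clean)    # guidance signal, frozen
    feat_denoised = target_model.features(x_denoised)  # gradients flow to the denoiser
    return nn.functional.l1_loss(feat_denoised, feat_clean)
```

Only the denoiser is updated with this loss; the guiding classifier stays fixed, which is what allows the trained HGD to be transferred to defend models other than the one guiding it.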
Adversarial attacks to image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. Our method was ranked first in the Competition on Adversarial Attacks and Defenses (CAAD) 2018; it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by ∼10%. Code is available at https://github.com/facebookresearch/ImageNet-Adversarial-Training.
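A rough sketch of one feature-denoising block of the kind described above, using a dot-product non-local mean followed by a 1×1 convolution and a residual connection (the exact affinity function and layer layout are assumptions, not the released code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalDenoiseBlock(nn.Module):
    """Denoise a feature map by replacing each spatial position with a weighted
    mean over all positions, then add the result back via a 1x1 convolution."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)                        # (B, C, HW)
        affinity = torch.bmm(flat.transpose(1, 2), flat)  # pairwise dot products
        weights = F.softmax(affinity, dim=-1)             # normalize per position
        denoised = torch.bmm(flat, weights.transpose(1, 2)).view(b, c, h, w)
        return x + self.proj(denoised)                    # residual keeps the signal
```

The dot-product affinity is quadratic in the number of spatial positions, so blocks like this are typically inserted at lower-resolution stages of the network.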
Deep neural networks are vulnerable to adversarial examples, which raises security concerns about these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won first place in both the NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
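A hedged sketch of the momentum iterative attack described above; the hyperparameter defaults are illustrative, and the normalization and clipping follow the usual MI-FGSM formulation rather than any particular released implementation:

```python
import torch
import torch.nn.functional as F

def momentum_iterative_attack(model, x, y, eps=16/255, steps=10, mu=1.0):
    """Accumulate a momentum of L1-normalized gradients, step along its sign,
    and keep the perturbation inside the epsilon ball around the clean image."""
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)  # stabilized direction
        x_adv = (x_adv + alpha * g.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```

For the black-box setting, the same update can be applied to the averaged logits or losses of an ensemble of surrogate models.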
Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with stronger robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks (Kurakin et al., 2017c). However, subsequent work found that more elaborate black-box attacks could significantly enhance transferability and reduce the accuracy of our models.
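The random-step single-step attack mentioned above can be sketched roughly as follows, assuming inputs in [0, 1]; the budget values are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def random_step_fgsm(model, x, y, eps=16/255, alpha=8/255):
    """Take a small random step first, then a single gradient-sign step with the
    remaining budget, so the attack escapes the sharp, non-smooth loss surface
    right at the data point that degenerate adversarial training exploits."""
    x_rand = (x + alpha * torch.randn_like(x).sign()).clamp(0, 1)
    x_rand.requires_grad_(True)
    loss = F.cross_entropy(model(x_rand), y)
    grad, = torch.autograd.grad(loss, x_rand)
    x_adv = x_rand + (eps - alpha) * grad.sign()
    return torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
```

Ensemble Adversarial Training itself only changes where the training perturbations come from: they are computed on a set of static pre-trained models and mixed into the defended model's training batches.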
Black-box adversarial attacks have attracted considerable attention for their practical relevance to deep learning security, and they are very challenging because the attacker has no access to the network architecture or internal weights of the target model. Based on the hypothesis that an example which remains adversarial against multiple models is more likely to transfer its attack capability to other models, ensemble-based adversarial attack methods are efficient tools for black-box attacks. However, the way of ensembling is rather under-investigated, and existing ensemble attacks simply fuse the outputs of all models uniformly. In this work, we treat the iterative ensemble attack as a stochastic gradient descent optimization process, in which the variance of the gradients across different models may lead to poor local optima. To this end, we propose a novel attack method called the stochastic variance reduced ensemble (SVRE) attack, which reduces the gradient variance of the ensemble models and takes full advantage of the ensemble attack. Empirical results on the standard ImageNet dataset demonstrate that the proposed method improves adversarial transferability and significantly outperforms existing ensemble attacks.
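A hedged sketch of an SVRE-style update, assuming an SVRG-like inner/outer loop; loop sizes, normalization, and momentum placement are illustrative and may differ from the paper's exact algorithm:

```python
import random
import torch
import torch.nn.functional as F

def svre_attack(models, x, y, eps=16/255, outer_steps=10, inner_steps=16, mu=1.0):
    """Outer loop: compute the full ensemble gradient as a control variate.
    Inner loop: correct single-model gradients with it to reduce the variance
    caused by disagreement between the ensemble members."""
    alpha = eps / outer_steps

    def grad_of(model, x_in):
        x_in = x_in.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_in), y)
        return torch.autograd.grad(loss, x_in)[0]

    x_adv = x.clone().detach()
    for _ in range(outer_steps):
        full_grad = sum(grad_of(m, x_adv) for m in models) / len(models)
        x_inner, g = x_adv.clone(), torch.zeros_like(x)
        for _ in range(inner_steps):
            m = random.choice(models)
            g_vr = grad_of(m, x_inner) - grad_of(m, x_adv) + full_grad  # variance-reduced gradient
            g = mu * g + g_vr / g_vr.abs().mean()
            x_inner = x_inner + alpha * g.sign()
            x_inner = torch.min(torch.max(x_inner, x - eps), x + eps).clamp(0, 1)
        x_adv = x_inner.detach()
    return x_adv
```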
Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomization at inference time to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method provides the following advantages: 1) no additional training or fine-tuning, 2) very few additional computations, 3) compatible with other adversarial defense methods. By combining the proposed randomization method with an adversarially trained model, it achieves a normalized score of 0.924 (ranked No.2 among 107 defense teams) in the NIPS 2017 adversarial examples defense challenge, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No.56). The code is publicly available at https://github.com/cihangxie/NIPS2017_adv_challenge_defense.
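A minimal sketch of the two randomization operations; the 299-to-331 size range mirrors an Inception-style setup and should be treated as an assumption:

```python
import random
import torch.nn.functional as F

def randomization_layer(x, final_size=331):
    """Random resizing followed by random zero-padding up to a fixed final size,
    applied once per forward pass at inference time (input must be smaller than
    final_size)."""
    resize = random.randint(x.shape[-1], final_size - 1)
    x = F.interpolate(x, size=(resize, resize), mode='nearest')
    pad = final_size - resize
    left, top = random.randint(0, pad), random.randint(0, pad)
    return F.pad(x, (left, pad - left, top, pad - top))  # (left, right, top, bottom)

# Inference: logits = model(randomization_layer(x))
```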
Adversarial inputs can be produced by applying small but intentional worst-case perturbations to samples from a dataset, causing even state-of-the-art deep neural networks to output incorrect answers with high confidence. Consequently, several adversarial defense techniques have been developed to improve the security and robustness of models and to prevent them from being attacked. The contest between attackers and defenders gradually resembles a game-like competition, in which both players try to play their best strategies against each other while maximizing their own payoffs. To solve the game, each player chooses the best strategy against the opponent based on a prediction of the opponent's strategic choice. In this work, we take the defender's side and apply a game-theoretic approach to defending against attacks. We use two randomization methods, random initialization and stochastic activation pruning, to create diversity among networks. In addition, we use a denoising technique, super-resolution, to improve the robustness of models by preprocessing images before they are attacked. Our experimental results show that these three methods can effectively improve the robustness of deep neural networks.
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples crafted with imperceptible perturbations: tiny changes to an input image can induce misclassification, threatening the reliability of deployed deep-learning-based systems. Adversarial training (AT) is often adopted to improve the robustness of DNNs by training on a mixture of corrupted and clean data. However, most AT-based methods are ineffective against transferred adversarial examples, which are generated to fool a variety of defense models, and therefore cannot satisfy the generalization requirements of realistic settings. Moreover, adversarially trained defense models generally cannot produce interpretable predictions for perturbed inputs, whereas domain experts require a highly interpretable robust model in order to understand the behavior of DNNs. In this work, we propose a method based on the Jacobian norm and selective input gradient regularization (J-SIGR), which promotes linearized robustness through Jacobian normalization and also regularizes perturbation-based saliency maps to imitate the model's interpretable predictions. In this way, we improve both the defense capability and the interpretability of DNNs. Finally, we evaluate our method across different architectures against strong adversarial attacks. Experiments show that the proposed J-SIGR confers robustness against transferred adversarial attacks, and we also show that the predictions of the neural network are easy to interpret.
We propose a novel and effective purification-based adversarial defense against preprocessor-blind white-box and black-box attacks. Our method requires only self-supervised learning on general images, is computationally efficient and easy to train, and does not require any adversarial training or retraining of the classification model. We first present an empirical analysis of the adversarial noise, defined as the residual between an original image and its adversarial example, which is nearly symmetrically distributed. Based on this observation, we propose a very simple iterative Gaussian smoothing (GS) method that can effectively smooth out adversarial noise and achieve substantially higher robust accuracy. To improve it further, we propose neural contextual iterative smoothing (NCIS), which trains a blind-spot network (BSN) in a self-supervised manner to reconstruct the discriminative features of the original image that GS has also smoothed out. From extensive experiments on large-scale ImageNet with four classification models, we show that our method achieves both competitive standard accuracy and state-of-the-art robust accuracy against the strongest purifier-blind white-box and black-box attacks. Furthermore, we propose a new benchmark for evaluating purification-based defense methods on commercial image classification APIs, such as AWS, Azure, Clarifai and Google. We generate adversarial examples via ensemble transfer-based black-box attacks, which can induce complete misclassification in the APIs, and demonstrate that our method can be used to increase the adversarial robustness of the APIs.
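The iterative Gaussian smoothing (GS) step can be sketched as below; the kernel size, sigma, and number of iterations are illustrative assumptions, and the NCIS blind-spot network is not shown:

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(channels, size=3, sigma=1.0):
    """Depthwise 2-D Gaussian kernel of shape (channels, 1, size, size)."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).expand(channels, 1, size, size)

def iterative_gaussian_smoothing(x, iters=4, sigma=1.0):
    """Repeatedly blur the input so the roughly symmetric, zero-centered
    adversarial noise averages out before the image reaches the classifier."""
    kernel = gaussian_kernel(x.shape[1], sigma=sigma).to(x.device)
    for _ in range(iters):
        x = F.conv2d(x, kernel, padding=kernel.shape[-1] // 2, groups=x.shape[1])
    return x
```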
Most defenses against adversarial attacks rely on obfuscated gradients. These methods succeed in defending against gradient-based attacks; however, they are easily circumvented by attacks that do not use gradients, or by attacks that approximate and use the corrected gradient. Defenses that do not obfuscate gradients, such as adversarial training, exist, but these methods generally make assumptions about the attack, such as its magnitude. We propose a classification model that does not obfuscate gradients and is robust by construction, without assuming any knowledge about the attack. Our method casts classification as an optimization problem in which we "invert" a conditional generator trained on unperturbed natural images to find the class that generates the image closest to the query image. We hypothesize that a potential source of brittleness against adversarial attacks is the high-to-low-dimensional nature of feed-forward classifiers, which allows an adversary to find small perturbations in the input space that cause large changes in the output space. A generative model, on the other hand, is typically a low-to-high-dimensional mapping. Although this method is related to Defense-GAN, the use of a conditional generative model and inversion in our model, instead of a feed-forward classifier, is a critical difference. Unlike Defense-GAN, which was shown to produce obfuscated gradients that are easily circumvented, we show that our method does not obfuscate gradients. We demonstrate that our model is extremely robust against black-box attacks and has improved robustness against white-box attacks compared with naturally trained feed-forward classifiers.
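A rough sketch of the classification-by-inversion idea, assuming a hypothetical conditional `generator(z, label)` with a `latent_dim` attribute; the optimizer, step count, and reconstruction loss are illustrative:

```python
import torch

def classify_by_inversion(generator, x, num_classes, steps=200, lr=0.05):
    """For each class, search for a latent code whose conditional generation best
    reconstructs the query image; predict the class with the lowest error."""
    errors = []
    for c in range(num_classes):
        z = torch.zeros(1, generator.latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        label = torch.tensor([c])
        for _ in range(steps):
            opt.zero_grad()
            loss = ((generator(z, label) - x) ** 2).mean()  # reconstruction error
            loss.backward()
            opt.step()
        errors.append(loss.item())
    return min(range(num_classes), key=lambda c: errors[c])
```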
Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers by adding imperceptible perturbations. An intriguing property of adversarial examples is their good transferability, making black-box attacks feasible in real-world applications. Due to the threat of adversarial attacks, many methods have been proposed to improve the robustness. Several state-of-the-art defenses are shown to be robust against transferable adversarial examples. In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models. By optimizing a perturbation over an ensemble of translated images, the generated adversarial example is less sensitive to the white-box model being attacked and has better transferability. To improve the efficiency of attacks, we further show that our method can be implemented by convolving the gradient at the untranslated image with a pre-defined kernel. Our method is generally applicable to any gradient-based attack method. Extensive experiments on the ImageNet dataset validate the effectiveness of the proposed method. Our best attack fools eight state-of-the-art defenses at an 82% success rate on average based only on the transferability, demonstrating the insecurity of the current defense techniques.
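A hedged sketch of one translation-invariant gradient step, where the gradient is convolved with a pre-defined Gaussian kernel before the sign step; kernel size and step sizes are illustrative:

```python
import torch
import torch.nn.functional as F

def translation_invariant_step(model, x_adv, y, x_clean, eps=16/255, alpha=1.6/255, ksize=15):
    """Smoothing the gradient with a fixed kernel approximates attacking an
    ensemble of translated copies of the image, improving transferability."""
    c = x_adv.shape[1]
    coords = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2
    g1d = torch.exp(-coords ** 2 / (2 * (ksize / 3) ** 2))
    kernel = torch.outer(g1d, g1d)
    kernel = (kernel / kernel.sum()).expand(c, 1, ksize, ksize).to(x_adv.device)

    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    grad = F.conv2d(grad, kernel, padding=ksize // 2, groups=c)  # translation-invariant smoothing
    x_next = (x_adv + alpha * grad.sign()).detach()
    return torch.min(torch.max(x_next, x_clean - eps), x_clean + eps).clamp(0, 1)
```

The same smoothing can be dropped into any gradient-based attack, including the momentum and diverse-input variants sketched elsewhere in this section.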
Intelligent Internet-of-Things (IoT) systems based on deep neural networks (DNNs) have been widely deployed in the real world. However, DNNs have been found to be vulnerable to adversarial examples, which raises concerns about the reliability and security of intelligent IoT systems. Testing and evaluating the robustness of IoT systems has therefore become necessary and essential. Various attacks and strategies have been proposed recently, but the efficiency problem remains unsolved: existing methods are either computationally expensive or time-consuming, which makes them impractical in real applications. In this paper, we propose a new framework called Attack-Inspired GAN (AI-GAN) to generate adversarial examples conditionally. Once trained, it can efficiently produce adversarial perturbations given input images and target classes. We apply AI-GAN on different datasets in white-box and black-box settings, and against target models protected by state-of-the-art defenses. Through extensive experiments, AI-GAN achieves high attack success rates, outperforming existing methods, and significantly reduces generation time. Moreover, for the first time, AI-GAN successfully scales to complex datasets such as CIFAR-100 and ImageNet, with success rates of about 90% on all classes.
Although deep neural networks (DNNs) have achieved prominent performance in various applications, it is well known that DNNs are vulnerable to adversarial examples/samples (AEs) crafted with imperceptible perturbations of clean/original samples. To overcome the weakness of existing defense methods, which damage the information of the original samples and thereby reduce the accuracy of the target classifier, this paper presents an enhanced method for countering adversarial attacks, IDFR (defense via Input Denoising and Feature Restoring). The proposed IDFR consists of an enhanced input denoiser (ID) and a hidden lossy feature restorer (FR). Extensive experiments conducted on benchmark datasets show that the proposed IDFR outperforms various state-of-the-art defense methods and is highly effective at protecting target models against various adversarial black-box and white-box attacks. Source code released at: https://github.com/id-fr/idfr
Transfer-based adversarial examples are one of the most important classes of black-box attacks. However, there is a trade-off between the transferability and the imperceptibility of adversarial perturbations. Prior work in this direction often requires a fixed but large $\ell_p$-norm perturbation budget to reach a good transfer success rate, leading to perceptible adversarial perturbations. On the other hand, most current unrestricted adversarial attacks that aim to generate semantics-preserving perturbations suffer from weak transferability to target models. In this work, we propose a geometry-aware framework to generate transferable adversarial examples with minimum changes. Analogous to model selection in statistical machine learning, we leverage a validation model to select the optimal perturbation budget for each image, under both the $\ell_\infty$-norm and the unrestricted threat models. Extensive experiments verify the effectiveness of our framework in balancing imperceptibility and transferability. The methodology is the basis of our entry to the CVPR'21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet, in which we ranked 1st out of 1,559 teams and surpassed the runner-up submission by 4.59% and 23.91% in terms of final score and average image quality level, respectively. Code is available at https://github.com/equationliu/ga-attack.
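The per-image budget selection can be sketched as follows, with a hypothetical `attack_fn` and a held-out `val_model` standing in for the validation model; the budget grid is an illustrative assumption:

```python
def minimum_change_attack(attack_fn, val_model, x, y, budgets=(2/255, 4/255, 8/255, 16/255)):
    """Run the attack with increasing perturbation budgets and keep the smallest
    one whose result already fools a held-out validation model, which serves as
    a proxy for transfer to unseen defenses."""
    for eps in budgets:
        x_adv = attack_fn(x, y, eps=eps)
        if val_model(x_adv).argmax(dim=1).ne(y).all():
            return x_adv, eps           # smallest visible change that transfers
    return x_adv, budgets[-1]           # fall back to the largest budget
```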
Deep neural networks have been found to be unstable under imperceptible adversarial example attacks, which is dangerous when they are applied to medical diagnosis systems that require high reliability. However, defense methods that work well on natural images may not be suitable for medical diagnosis tasks. Preprocessing methods (e.g., random resizing, compression) may cause the loss of small lesion features in medical images, and training the network on an augmented dataset is also impractical for medical models that have already been deployed online. Therefore, it is necessary to design an easy-to-deploy and effective defense framework for medical diagnosis tasks. In this paper, we propose a robust and retrain-free diagnosis framework for medical pretrained models against adversarial attacks (i.e., MedRDF), which acts at the inference time of the pretrained medical model. Specifically, for each test image, MedRDF first creates a large number of noisy copies of it and obtains the output labels of these copies from the pretrained medical diagnosis model. Then, based on the labels of these copies, MedRDF outputs the final robust diagnosis result by majority voting. In addition to the diagnosis result, MedRDF produces a Robust Metric (RM) as the confidence of the result. Therefore, it is convenient and reliable to use MedRDF to convert a pretrained non-robust diagnosis model into a robust one. Experimental results on the COVID-19 and DermaMNIST datasets verify the effectiveness of MedRDF in improving the robustness of medical models.
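A minimal sketch of a MedRDF-style inference wrapper; the noise scale, number of copies, and confidence threshold are illustrative assumptions:

```python
import torch

def robust_diagnose(model, x, num_copies=100, sigma=0.1, tau=0.6):
    """Classify many noisy copies of one test image, return the majority vote as
    the diagnosis and the vote fraction as a Robust-Metric-style confidence."""
    noisy = x.repeat(num_copies, 1, 1, 1) + sigma * torch.randn(num_copies, *x.shape[1:])
    with torch.no_grad():
        preds = model(noisy.clamp(0, 1)).argmax(dim=1)
    votes = torch.bincount(preds)
    label = int(votes.argmax())
    confidence = float(votes.max()) / num_copies
    return label, confidence, confidence >= tau   # flag low-confidence diagnoses
```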
Though CNNs have achieved the state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples, crafted by adding human-imperceptible perturbations to clean images. However, most of the existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. By evaluating our method against top defense solutions and official baselines from NIPS 2017 adversarial competition, the enhanced attack reaches an average success rate of 73.0%, which outperforms the top-1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. Code is available at https://github.com/cihangxie/DI-2-FGSM.
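The diverse-input transform can be sketched roughly as below; the 299-to-330 size range and the transform probability follow common settings for this attack but are assumptions here:

```python
import random
import torch.nn.functional as F

def input_diversity(x, low=299, high=330, prob=0.5):
    """With probability `prob`, randomly resize the image and pad it back to a
    fixed size; gradients are computed on the transformed copy at each iteration."""
    if random.random() > prob:
        return x
    rnd = random.randint(low, high)
    x = F.interpolate(x, size=(rnd, rnd), mode='nearest')
    pad = high + 1 - rnd
    left, top = random.randint(0, pad), random.randint(0, pad)
    return F.pad(x, (left, pad - left, top, pad - top))

# Per attack iteration: loss = F.cross_entropy(model(input_diversity(x_adv)), y)
```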
Although deep neural networks have shown unprecedented performance in various tasks, their vulnerability to adversarial examples hinders their deployment in safety-critical systems. Many studies have shown that attacks are possible even in the black-box setting, where the attacker cannot access the internal information of the target model. Most black-box attacks are query-based, each obtaining an input-output pair of the target model, and many studies focus on reducing the number of required queries. In this paper, we pay attention to the implicit assumption that the output of the target model corresponds exactly to the query input. If some randomness is introduced into the model, this assumption is broken, and query-based attacks may then face enormous difficulty in both gradient estimation and local search, which are the core of their attack process. Motivated by this, we observe that even a small additive input noise can neutralize most query-based attacks, and we name this simple yet effective method Small Noise Defense (SND). We analyze how SND defends against query-based black-box attacks and demonstrate its effectiveness against eight state-of-the-art attacks on the CIFAR-10 and ImageNet datasets. Even with its strong defense ability, SND almost maintains the original classification accuracy and computational speed. SND is readily applicable to pretrained models by adding only one line of code at inference time.
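The defense amounts to the one-line inference wrapper below; the noise scale is illustrative:

```python
import torch

def snd_forward(model, x, sigma=0.01):
    """Small Noise Defense: add tiny random input noise at inference so repeated
    queries see slightly different outputs, which breaks the gradient estimation
    and local search that query-based black-box attacks rely on."""
    with torch.no_grad():
        return model(x + sigma * torch.randn_like(x))
```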
Deep learning (DL) has shown great success in many human-related tasks, which has led to its adoption in many computer vision applications, such as security surveillance systems, autonomous vehicles, and healthcare. Such safety-critical applications can only chart their path to successful deployment once they have the capability to overcome safety-critical challenges. Among these challenges is defending against and/or detecting adversarial examples (AEs). An adversary can carefully craft small, often imperceptible, noise, called a perturbation, to be added to a clean image in order to generate an AE. The aim of an AE is to fool the DL model, which makes it a potential risk for DL applications. Many test-time evasion attacks and countermeasures, i.e., defense or detection methods, have been proposed in the literature. Moreover, only a few reviews and surveys have been published that theoretically demonstrate the taxonomy of the threats and countermeasure methods, with little focus on detection methods. In this paper, we focus on the image classification task and attempt to provide a survey of detection methods for test-time evasion attacks on neural network classifiers. A detailed discussion of such methods is provided, along with experimental results for eight state-of-the-art detectors under different scenarios on four datasets. We also present potential challenges and future perspectives for this research direction.
The authors thank Nicholas Carlini (UC Berkeley) and Dimitris Tsipras (MIT) for feedback to improve the survey quality. We also acknowledge X. Huang (Uni. Liverpool), K. R. Reddy (IISC), E. Valle (UNICAMP), Y. Yoo (CLAIR) and others for providing pointers to make the survey more comprehensive.
Deep neural networks (DNNs) have recently achieved great success in many classification tasks. Unfortunately, they are vulnerable to adversarial attacks that generate adversarial examples with small perturbations to fool DNN models, especially in model-sharing scenarios. Adversarial training has proven to be the most effective strategy, injecting adversarial examples into model training to improve the robustness of DNN models against adversarial attacks. However, adversarial training based on existing kinds of adversarial examples fails to generalize well to standard, unperturbed test data. To achieve a better trade-off between standard accuracy and adversarial robustness, we propose a novel adversarial training framework called latent boundary-guided adversarial training (LADDER), which adversarially trains DNN models on latent boundary-guided adversarial examples. In contrast to most existing methods, which generate adversarial examples in the input space, LADDER generates a myriad of high-quality adversarial examples by adding perturbations to latent features. The perturbations are made along the normal of the decision boundary constructed by an SVM with an attention mechanism. We analyze the merits of the generated boundary-guided adversarial examples from a boundary-field perspective and from a visualization view. Extensive experiments and detailed analysis on MNIST, SVHN, CelebA, and CIFAR-10, compared with vanilla DNNs and competitive baselines, validate the effectiveness of LADDER in achieving a better trade-off between standard accuracy and adversarial robustness.