With the development of technology, the application of convolutional neural networks has made our lives more convenient. However, in the field of image classification, it has been found that a CNN will misclassify an image when certain perturbations are added to it, and various defense methods have therefore been proposed. Previous methods only considered which modules to incorporate into the network to improve robustness, but paid no attention to how the modules are merged. In this paper, we design a new fusion method to enhance the robustness of CNNs. We use a dot-product-based method to add a denoising module and an attention mechanism to ResNet18 to further improve the robustness of the model. Experimental results on CIFAR10 show that our method outperforms state-of-the-art methods under FGSM and PGD attacks.
Adversarial attacks to image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. Our method was ranked first in Competition on Adversarial Attacks and Defenses (CAAD) 2018; it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by ∼10%. Code is available at https://github.com/facebookresearch/ImageNet-Adversarial-Training.
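As a concrete illustration of the denoising block described in this abstract, the sketch below applies a softmax-normalized dot-product non-local means operation followed by a 1x1 convolution and a residual connection. The class name `DenoisingBlock` and the normalization choice are our own assumptions; the released code at the URL above gives the exact variants used in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingBlock(nn.Module):
    """Feature-denoising block sketch: non-local means + 1x1 conv + residual connection."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution applied after the non-local operation, before the residual add.
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        feats = x.view(n, c, h * w)                                  # (N, C, HW)
        # Pairwise affinity between spatial positions (dot product, softmax-normalized).
        affinity = torch.bmm(feats.transpose(1, 2), feats)           # (N, HW, HW)
        weights = F.softmax(affinity, dim=-1)
        # Each position becomes a weighted mean over all positions: feature denoising.
        denoised = torch.bmm(feats, weights.transpose(1, 2)).view(n, c, h, w)
        return x + self.conv(denoised)

# Usage: insert after a ResNet stage, e.g. y = DenoisingBlock(256)(stage_output)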
Deep Neural Networks have been widely used in many fields. However, studies have shown that DNNs are easily attacked by adversarial examples, which have tiny perturbations and greatly mislead the correct judgment of DNNs. Furthermore, even if malicious attackers cannot obtain all the underlying model parameters, they can use adversarial examples to attack various DNN-based task systems. Researchers have proposed various defense methods to protect DNNs, such as reducing the aggressiveness of adversarial examples by preprocessing or improving the robustness of the model by adding modules. However, some defense methods are only effective for small-scale examples or small perturbations but have limited defense effects for adversarial examples with large perturbations. This paper assigns different defense strategies to adversarial perturbations of different strengths by grading the perturbations on the input examples. Experimental results show that the proposed method effectively improves defense performance. In addition, the proposed method does not modify any task model and can be used as a preprocessing module, which significantly reduces the deployment cost in practical applications.
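The abstract does not give the grading rule, but the overall flow can be pictured with a hypothetical sketch: estimate a perturbation-strength score for each input and dispatch it to a lighter or heavier preprocessing defense, leaving the task model untouched. The scoring function `estimate_strength` and the thresholds below are illustrative placeholders, not the paper's actual criteria.

import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def estimate_strength(image: np.ndarray) -> float:
    """Crude proxy for perturbation strength: energy removed by a light median filter.
    (Placeholder; the paper defines its own grading of perturbations.)"""
    smoothed = median_filter(image, size=(3, 3, 1))
    return float(np.mean(np.abs(image - smoothed)))

def graded_preprocess(image: np.ndarray) -> np.ndarray:
    """Route the input to a defense matched to its estimated perturbation grade."""
    score = estimate_strength(image)
    if score < 0.01:        # grade 0: treat as clean, pass through
        return image
    elif score < 0.05:      # grade 1: small perturbation, light smoothing
        return gaussian_filter(image, sigma=(0.8, 0.8, 0))
    else:                   # grade 2: large perturbation, stronger denoising
        return median_filter(image, size=(5, 5, 1))

# The task model itself is unmodified: predictions are made on graded_preprocess(x).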
Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security sensitive systems. We propose high-level representation guided denoiser (HGD) as a defense for image classification. A standard denoiser suffers from the error amplification effect, in which small residual adversarial noise is progressively amplified and leads to wrong classifications. HGD overcomes this problem by using a loss function defined as the difference between the target model's outputs activated by the clean image and denoised image. Compared with ensemble adversarial training, which is the state-of-the-art defense method on large images, HGD has three advantages. First, with HGD as a defense, the target model is more robust to either white-box or black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes. Third, HGD can be transferred to defend models other than the one guiding it. In the NIPS competition on defense against adversarial attacks, our HGD solution won the first place and outperformed other models by a large margin.
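The key ingredient here is the guidance loss: the denoiser is supervised not by pixels but by the target model's high-level response, so small residual noise cannot be amplified into a wrong label. A minimal PyTorch-style sketch of that training objective follows; `denoiser` and `target_model` are placeholders for the actual architectures, and the L1 distance is one of the guidance variants described in the paper.

import torch
import torch.nn as nn

def hgd_loss(denoiser: nn.Module,
             target_model: nn.Module,
             x_clean: torch.Tensor,
             x_adv: torch.Tensor) -> torch.Tensor:
    """High-level representation guided loss (sketch).

    The denoiser estimates the adversarial noise; it is trained to make the target
    model's output on the denoised image match its output on the clean image,
    which counters the error-amplification effect of pixel-level denoisers.
    """
    x_denoised = x_adv - denoiser(x_adv)          # denoiser predicts the noise
    with torch.no_grad():
        ref = target_model(x_clean)               # clean high-level representation (fixed target)
    out = target_model(x_denoised)                # representation of the denoised image
    return torch.mean(torch.abs(out - ref))       # only the denoiser's parameters are updated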
We investigate the adversarial robustness of CNNs from the perspective of channel-wise activations. By comparing non-robust (normally trained) and robustified (adversarially trained) models, we observe that adversarial training (AT) robustifies CNNs by aligning the channel-wise activations of adversarial data with those of their natural counterparts. However, the channels that are negatively-relevant (NR) to predictions are still over-activated when processing adversarial data. Besides, we also observe that AT does not result in similar robustness for all classes. For the robust classes, channels with larger activation magnitudes are usually more positively-relevant (PR) to predictions, but this alignment does not hold for the non-robust classes. Given these observations, we hypothesize that suppressing NR channels and aligning PR ones with their relevances further enhances the robustness of CNNs under AT. To examine this hypothesis, we introduce a novel mechanism, Channel-wise Importance-based Feature Selection (CIFS). CIFS manipulates the activations of certain layers by generating non-negative multipliers for their channels based on the channels' relevance to predictions. Extensive experiments on benchmark datasets including CIFAR10 and SVHN clearly verify the hypothesis and the effectiveness of CIFS in robustifying CNNs. \url{https://github.com/hanshuyan/cifs}
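The mechanism can be pictured as a per-channel gate: each channel's activation is rescaled by a non-negative multiplier derived from that channel's relevance to the prediction. The sketch below uses a CAM-style relevance (the probe weights of the predicted class) passed through a softplus; the class `ChannelGate` and this particular relevance are our assumptions, and the released code at the URL above defines the exact form used in CIFS.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelGate(nn.Module):
    """Sketch of channel-wise importance-based feature selection.

    A small probe head maps globally pooled channel features to class scores; the
    probe weights of the predicted class act as channel relevances, which become
    non-negative multipliers that suppress negatively-relevant channels.
    """
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.probe = nn.Linear(channels, num_classes, bias=False)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        pooled = F.adaptive_avg_pool2d(feat, 1).flatten(1)   # (N, C) channel summaries
        logits = self.probe(pooled)                          # (N, num_classes)
        pred = logits.argmax(dim=1)                          # predicted class per sample
        relevance = self.probe.weight[pred]                  # (N, C) per-channel relevance
        gate = F.softplus(relevance)                         # non-negative multipliers
        # In training, the probe itself would also need supervision (e.g., a loss on `logits`).
        return feat * gate.unsqueeze(-1).unsqueeze(-1)       # rescale channel activations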
Adversarial examples reveal the vulnerability and unexplained nature of neural networks, and studying defenses against them is of considerable practical importance. Most adversarial examples that cause networks to misclassify are imperceptible to humans. In this paper, we propose a defense model that trains the classifier into a human-perception classification model with a shape preference. The proposed model, which includes a texture transfer network (TTN) and an auxiliary defense generative adversarial network (GAN), is called Human-perception Auxiliary Defense GAN (HAD-GAN). The TTN is used to extend the texture samples of clean images and helps the classifier focus on their shapes. The GAN is used to form a training framework for the model and to generate the necessary images. A series of experiments conducted on MNIST, Fashion-MNIST and CIFAR10 show that the proposed model outperforms state-of-the-art defense methods in network robustness. The model also demonstrates a significant improvement in defense capability against adversarial examples.
Adversarial inputs can be generated by applying small but intentionally worst-case perturbations to samples from a dataset, causing even state-of-the-art deep neural networks to output incorrect answers with high confidence. Accordingly, several adversarial defense techniques have been developed to improve the security and robustness of models and to prevent them from being attacked. Gradually, a game-like competition arises between attackers and defenders, in which both players try to play their best strategies against each other while maximizing their own payoffs. To solve the game, each player chooses the best strategy against the opponent based on a prediction of the opponent's strategic choices. In this work, we take the defensive side and apply a game-theoretic approach to defending against attacks. We use two randomization methods, random initialization and stochastic activation pruning, to create diversity in the network. In addition, we use a denoising technique, super resolution, to improve the robustness of the model by preprocessing images before the attack. Our experimental results show that these three methods can effectively improve the robustness of deep learning neural networks.
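Of the three ingredients, stochastic activation pruning is the easiest to make concrete. Below is a minimal sketch of its usual formulation (sampling activations in proportion to their magnitudes and rescaling the survivors so the layer output is preserved in expectation); how exactly it is combined with random initialization and super-resolution preprocessing is not specified in the abstract, so treat this as an assumption about the standard technique rather than the paper's code.

import torch

def stochastic_activation_prune(h: torch.Tensor, num_samples: int) -> torch.Tensor:
    """Stochastic activation pruning, sketched.

    Activations are sampled with replacement in proportion to their magnitudes;
    activations never sampled are dropped, and the survivors are rescaled by the
    inverse of their keep probability.
    """
    flat = h.flatten()
    probs = flat.abs() / flat.abs().sum().clamp_min(1e-12)
    idx = torch.multinomial(probs, num_samples, replacement=True)
    keep = torch.zeros_like(flat, dtype=torch.bool)
    keep[idx] = True
    # Probability that activation i was sampled at least once.
    p_keep = 1.0 - (1.0 - probs) ** num_samples
    pruned = torch.where(keep, flat / p_keep.clamp_min(1e-12), torch.zeros_like(flat))
    return pruned.view_as(h)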
Traditional deep learning models exhibit intriguing vulnerabilities that allow an attacker to force them to fail at their task. Notorious attacks such as the Fast Gradient Sign Method (FGSM) and the more powerful Projected Gradient Descent (PGD) generate adversarial examples by adding a perturbation of magnitude $\epsilon$ along the computed gradient with respect to the input, degrading the model's classification performance. This work introduces a model that is resilient to adversarial attacks. Our model leverages a well-established principle from the biological sciences: population diversity produces resilience against environmental changes. More precisely, our model consists of a population of $N$ diverse submodels, each trained to individually obtain high accuracy on the task at hand, while forced to maintain meaningful differences in their weight tensors. Each time our model receives a classification query, it randomly selects a submodel from its population to answer the query. To introduce and maintain diversity in the population of submodels, we introduce the concept of counter-linked weights. A counter-linked model (CLM) consists of submodels of the same architecture, on which periodic random similarity checks are conducted during simultaneous training to guarantee diversity while maintaining accuracy. In our testing, CLM robustness improved by about 20% when tested on the MNIST dataset and by at least 15% when tested on the CIFAR-10 dataset. When implemented with adversarially trained submodels, this methodology achieves state-of-the-art robustness. On the MNIST dataset with $\epsilon = 0.3$, it achieves 94.34% against FGSM and 91% against PGD. On the CIFAR-10 dataset with $\epsilon = 8/255$, it achieves 62.97% against FGSM and 59.16% against PGD.
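For reference, the two attacks named above can be written compactly. The following is a standard sketch (not the paper's code); it assumes pixel values in $[0, 1]$ and a cross-entropy task loss.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: x_adv = x + eps * sign(grad_x L(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps, alpha, steps):
    """Projected Gradient Descent: iterated FGSM steps projected back into the eps-ball."""
    x = x.clone().detach()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()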
Deep convolutional neural networks can accurately classify a wide variety of natural images, but they can be easily fooled when an image contains deliberately designed, imperceptible perturbations. In this paper, we design a multi-pronged system of training, input transformation, and image ensembling that is attack-agnostic and not easily approximated. Our system combines two novel features. The first is a transformation layer that computes class-level polynomial kernels from the training data examples and, at inference time, iteratively updates copies of the input image based on their per-feature kernel differences to create an ensemble of transformed inputs. The second is a classification system that combines the prediction of an undefended network with a hard vote over the ensemble of filtered images. Our evaluation on the CIFAR10 dataset shows that our system improves the robustness of an undefended network against a variety of bounded and unbounded white-box attacks under different distance metrics, while sacrificing little accuracy on clean images. Against an adaptive, omniscient attacker crafting end-to-end attacks, our system successfully augments the existing robustness of adversarially trained networks, to which our method is most effectively applied.
Humans rely heavily on shape information to recognize objects. Conversely, convolutional neural networks (CNNs) are biased toward texture. This is perhaps the main reason why CNNs are susceptible to adversarial examples. Here, we explore how incorporating a shape bias into CNNs can improve their robustness. Two algorithms are proposed, based on the observation that edge maps are invariant to moderate imperceptible perturbations. In the first one, a classifier is adversarially trained on images with the edge map as an additional channel; at inference time, the edge map is recomputed and concatenated to the image. In the second algorithm, a conditional GAN is trained to translate edge maps, from clean and/or perturbed images, into clean images, and inference is done on the generated image corresponding to the edge map of the input. Extensive experiments over 10 datasets demonstrate the effectiveness of the algorithms against FGSM and $\ell_\infty$ PGD-40 attacks. Furthermore, we show that a) edge information can also benefit other adversarial training methods, and b) CNNs trained on edge-augmented inputs are more robust against natural image corruptions such as motion blur, impulse noise and JPEG compression than CNNs trained solely on RGB images. From a broader perspective, our study suggests that CNNs do not adequately account for image structures that are crucial for robustness. Code is available at \url{https://github.com/aliborji/shapedefense.git}.
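The first algorithm is easy to picture: compute an edge map, append it as an extra input channel, and recompute it at inference time. A sketch is given below; the Sobel operator is used here as a stand-in for whichever edge detector the paper actually employs, so the filter choice is an assumption.

import torch
import torch.nn.functional as F

def sobel_edge_map(img: torch.Tensor) -> torch.Tensor:
    """Compute a single-channel edge map from an (N, 3, H, W) image batch with Sobel filters."""
    gray = img.mean(dim=1, keepdim=True)                       # (N, 1, H, W)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3).to(img)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    edges = torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)
    return edges / edges.amax(dim=(2, 3), keepdim=True).clamp_min(1e-12)   # normalize to [0, 1]

def with_edge_channel(img: torch.Tensor) -> torch.Tensor:
    """Concatenate the (re)computed edge map as a fourth input channel, as in the first algorithm."""
    return torch.cat([img, sobel_edge_map(img)], dim=1)        # (N, 4, H, W)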
Adversarial examples are special inputs that can perturb the output of a deep neural network so as to produce intentional errors in a production environment. Most methods for generating adversarial examples require gradient information; even universal perturbations, which are independent of the generative model, rely on gradient information to some extent. Procedural noise adversarial examples are a new approach to adversarial example generation, which uses computer graphics noise to quickly generate universal adversarial perturbations without relying on gradient information. Combining this with adversarial training, we use Perlin noise to train neural networks and obtain a model that can defend against procedural noise adversarial examples. Combined with a model fine-tuning method based on pre-trained models, we obtain faster training as well as higher accuracy. Our research shows that procedural noise adversarial examples are defensible, but why procedural noise can generate adversarial examples, and how to defend against other procedural noise adversarial examples that may appear in the future, remain to be investigated.
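To make the gradient-free idea concrete, the sketch below builds a bounded, image-sized perturbation from smooth procedural noise. It uses coarse random values with bilinear upsampling as a simplified stand-in for Perlin noise (true Perlin noise adds a gradient lattice and fade curves), so the generator, its parameters, and the sign-thresholding are illustrative assumptions rather than the paper's procedure.

import numpy as np

def procedural_perturbation(h, w, eps, cells=8, seed=0):
    """Gradient-free perturbation: coarse random noise, bilinearly upsampled, then sign-scaled to eps."""
    rng = np.random.default_rng(seed)
    coarse = rng.uniform(-1.0, 1.0, size=(cells + 1, cells + 1))
    ys = np.linspace(0, cells, h, endpoint=False)
    xs = np.linspace(0, cells, w, endpoint=False)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    fy, fx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = coarse[y0[:, None], x0[None, :]] * (1 - fx) + coarse[y0[:, None], x0[None, :] + 1] * fx
    bot = coarse[y0[:, None] + 1, x0[None, :]] * (1 - fx) + coarse[y0[:, None] + 1, x0[None, :] + 1] * fx
    noise = top * (1 - fy) + bot * fy
    return np.sign(noise) * eps          # bounded, structured perturbation; no gradients needed

# Adversarial training then mixes x + procedural_perturbation(h, w, eps) into the training batches.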
Neural ordinary differential equations (ODEs) have recently attracted increasing attention in various research domains. There are some works studying the optimization issues and approximation capabilities of neural ODEs, but their robustness is still unclear. In this work, we fill this important gap by exploring the robustness properties of neural ODEs both empirically and theoretically. We first present an empirical study on the robustness of ODE-based networks (ODENets) by exposing them to inputs with various types of perturbations and then investigating the changes of the corresponding outputs. In contrast to conventional convolutional neural networks (CNNs), we find that ODENets are more robust against both random Gaussian perturbations and adversarial attack examples. We then provide an insightful understanding of this phenomenon by exploiting a certain desirable property of the flow of a continuous-time ODE, namely that integral curves are non-intersecting. Our work suggests that, due to their intrinsic robustness, it is promising to use neural ODEs as a basic block for building robust deep network models. To further enhance the robustness of vanilla neural ODEs, we propose the time-invariant steady neural ODE (TisODE), which regularizes the flow on perturbed data via the time-invariant property and the imposition of a steady-state constraint. We show that the TisODE method outperforms vanilla neural ODEs and can also work in conjunction with other state-of-the-art architectural methods to build more robust deep network models. \url{https://github.com/hanshuyan/tisode}
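In symbols, an ODENet treats the feature map as the solution of an initial-value problem, and TisODE modifies the dynamics in two ways: it drops the explicit time dependence and penalizes non-steady flows. The sketch below is our rendering; the regularization weight $\lambda$, the norm, and the extended window $[T, 2T]$ are written as commonly described for TisODE and should be checked against the paper and the code at the URL above.

\[ \frac{\mathrm{d}z(t)}{\mathrm{d}t} = f_\theta\big(z(t), t\big), \qquad z(0) = x, \qquad \text{output} = z(T) \qquad \text{(vanilla neural ODE)} \]

\[ \frac{\mathrm{d}z(t)}{\mathrm{d}t} = f_\theta\big(z(t)\big), \qquad \mathcal{L} = \mathcal{L}_{\mathrm{task}} + \lambda \int_{T}^{2T} \big\| f_\theta\big(z(t)\big) \big\| \, \mathrm{d}t \qquad \text{(TisODE sketch)} \]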
Adversarial training (AT) is one of the most effective ways for improving the robustness of deep convolution neural networks (CNNs). Just like common network training, the effectiveness of AT relies on the design of basic network components. In this paper, we conduct an in-depth study on the role of the basic ReLU activation component in AT for robust CNNs. We find that the spatially-shared and input-independent properties of ReLU activation make CNNs less robust to white-box adversarial attacks with either standard or adversarial training. To address this problem, we extend ReLU to a novel Sparta activation function (Spatially attentive and Adversarially Robust Activation), which enables CNNs to achieve both higher robustness, i.e., lower error rate on adversarial examples, and higher accuracy, i.e., lower error rate on clean examples, than the existing state-of-the-art (SOTA) activation functions. We further study the relationship between Sparta and the SOTA activation functions, providing more insights about the advantages of our method. With comprehensive experiments, we also find that the proposed method exhibits superior cross-CNN and cross-dataset transferability. For the former, the adversarially trained Sparta function for one CNN (e.g., ResNet-18) can be fixed and directly used to train another adversarially robust CNN (e.g., ResNet-34). For the latter, the Sparta function trained on one dataset (e.g., CIFAR-10) can be employed to train adversarially robust CNNs on another dataset (e.g., SVHN). In both cases, Sparta leads to CNNs with higher robustness than the vanilla ReLU, verifying the flexibility and versatility of the proposed method.
It has been shown that person re-identification (ReID) systems based on metric learning inherit the vulnerability of deep neural networks (DNNs) and are easily fooled by adversarial metric attacks. Existing work mainly relies on adversarial training for metric defense, and further approaches have not been fully studied. By exploring the impact of attacks on the underlying features, we propose targeted methods for metric attack and defense. On the metric attack side, we use local color deviation to construct intra-class variations of the input to attack color features. On the metric defense side, we propose a joint defense method consisting of two parts, proactive defense and passive defense. Proactive defense helps enhance the robustness of the model to color variations and the learning of structural relations across multiple modalities by constructing different inputs from multi-modal images, while passive defense exploits the invariance of structural features in a changing pixel space via circuitous scaling to preserve structural features while eliminating some of the adversarial noise. Extensive experiments show that, compared with existing adversarial metric defense methods, the proposed joint defense not only resists multiple attacks at the same time but also does not significantly reduce the generalization ability of the model. The code is available at https://github.com/finger-monkey/multi-modal_joint_defence.
Pixel-wise prediction with deep neural networks has become an effective paradigm for salient object detection (SOD) and achieved remarkable performance. However, very few SOD models are robust against adversarial attacks which are visually imperceptible for human visual attention. The previous work robust saliency (ROSA) shuffles the pre-segmented superpixels and then refines the coarse saliency map by a densely connected conditional random field (CRF). Different from ROSA, which relies on various pre- and post-processings, this paper proposes a light-weight Learnable Noise (LeNo) to defend SOD models against adversarial attacks. LeNo preserves the accuracy of SOD models on both adversarial and clean images, as well as inference speed. In general, LeNo consists of a simple shallow noise and a noise estimation component embedded in the encoder and decoder of arbitrary SOD networks, respectively. Inspired by the center prior of the human visual attention mechanism, we initialize the shallow noise with a cross-shaped Gaussian distribution for better defense against adversarial attacks. Instead of adding additional network components for post-processing, the proposed noise estimation modifies only one channel of the decoder. With deeply-supervised noise-decoupled training on state-of-the-art RGB and RGB-D SOD networks, LeNo outperforms previous works not only on adversarial images but also on clean images, which contributes stronger robustness for SOD. Our code is available at https://github.com/ssecv/LeNo.
It is necessary to improve the performance of some special classes, or to particularly protect them from attacks in adversarial learning. This paper proposes a framework that combines cost-sensitive classification and adversarial learning to train a model that can distinguish between protected and unprotected classes, so that the protected classes are less vulnerable to adversarial examples. Within this framework, we discover an interesting phenomenon, which we call the min-max property, concerning the absolute values of most parameters in the convolutional layers during the training of deep neural networks. Based on this min-max property, which is formulated and analyzed from the perspective of random distributions, we further establish a new defense model against adversarial examples to improve adversarial robustness. An advantage of the constructed model is that it performs better than the standard model and can be combined with adversarial training to further improve performance. It is experimentally confirmed that, in terms of average accuracy over all classes, our model performs almost as well as the existing models when no attack occurs and better than the existing models when an attack occurs. Specifically, with regard to the accuracy of the protected classes, the proposed model is much better than the existing models when an attack occurs.
While deep learning models have achieved unprecedented success, their vulnerability to adversarial attacks has attracted increasing attention, especially when they are deployed in security-critical domains. To address the challenge, numerous defense strategies, both reactive and proactive, have been proposed to improve robustness. From the perspective of the image feature space, some of them fail to reach satisfying results due to the shift of features; moreover, the features learned by models are not directly related to classification results. Different from them, we consider defense methods essentially from inside the model and investigate neuron behaviors before and after attacks. We observe that attacks mislead the model by dramatically changing the neurons that contribute most to the correct label. Motivated by this, we introduce the concept of neuron influence and further divide neurons into front, middle and tail parts. Based on this, we propose neuron-level inverse perturbation (NIP), the first neuron-level reactive defense method against adversarial attacks. By strengthening the front neurons and weakening those in the tail part, NIP can eliminate nearly all adversarial perturbations while still maintaining high benign accuracy. Furthermore, it can cope with perturbations of different sizes, especially larger ones, through adaptivity. Comprehensive experiments conducted on three datasets and six models show that NIP outperforms state-of-the-art baselines against eleven adversarial attacks. We further provide interpretable evidence through neuron activation and visualization for better understanding.
Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or required expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model's prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two feature squeezing methods: reducing the color bit depth of each pixel and spatial smoothing. These simple strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
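The detection rule reduces to a few lines. The sketch below uses the two squeezers named above with example settings (4-bit color depth, a 2x2 median filter) and a placeholder `model_predict` that maps an image to a probability vector; the specific settings, distance, and threshold are illustrative and would be tuned as in the paper.

import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x: np.ndarray, bits: int) -> np.ndarray:
    """Squeeze color resolution: keep only `bits` bits per pixel (x is float in [0, 1])."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def spatial_smooth(x: np.ndarray, size: int = 2) -> np.ndarray:
    """Squeeze spatial resolution with a local median filter (per color channel)."""
    return median_filter(x, size=(size, size, 1))

def is_adversarial(model_predict, x: np.ndarray, threshold: float) -> bool:
    """Flag x as adversarial if predictions on the original and squeezed inputs disagree too much."""
    p_orig = model_predict(x)
    scores = [np.abs(p_orig - model_predict(reduce_bit_depth(x, 4))).sum(),
              np.abs(p_orig - model_predict(spatial_smooth(x))).sum()]
    return max(scores) > threshold   # threshold chosen for a target false-positive rate on clean data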
Although a substantial amount of research is dedicated to morph detection, most of it fails to generalize to morphed faces outside its training paradigm. Moreover, recent morph detection methods are highly vulnerable to adversarial attacks. In this paper, we aim to learn a morph detection model that generalizes to a wide range of morphing attacks and is highly robust against different adversarial attacks. To this end, we develop an ensemble of convolutional neural networks (CNNs) and Transformer models to benefit from their capabilities simultaneously. To improve the robust accuracy of the ensemble model, we employ multi-perturbation adversarial training and generate adversarial examples with high transferability. Our exhaustive evaluations demonstrate that the proposed robust ensemble model generalizes to several morphing attacks and face datasets. In addition, we validate that our robust ensemble model achieves better robustness against several adversarial attacks while outperforming state-of-the-art studies.
Deep learning (DL) has shown great success in many human-related tasks, which has led to its adoption in many computer vision based applications such as security surveillance systems, autonomous vehicles and healthcare. For such safety-critical applications, the path to successful deployment requires the ability to overcome safety-critical challenges. Among these challenges are the prevention or/and detection of adversarial examples (AEs). An adversary can carefully craft small, often imperceptible, noise called a perturbation to be added to a clean image to generate an AE. The aim of the AE is to fool the DL model, which makes it a potential risk for DL applications. Many test-time evasion attacks and countermeasures, i.e., defense or detection methods, have been proposed in the literature. Moreover, a few reviews and surveys have been published that present the taxonomy of threats and countermeasure methods theoretically, with little focus on detection methods. In this paper, we focus on image classification tasks and attempt to provide a survey of detection methods for test-time evasion attacks on neural network classifiers. A detailed discussion of such methods is provided, with experimental results for eight state-of-the-art detectors under different scenarios on four datasets. We also present potential challenges and future perspectives for this research direction.