The ability of deep neural networks to approximate highly complex functions is the key to their success. This benefit, however, comes at the cost of enormous model sizes, which challenges their deployment in resource-constrained environments. Pruning is an effective technique for limiting this problem, but it often comes at the cost of reduced accuracy and adversarial robustness. This paper addresses these shortcomings and introduces Deadwooding, a novel global pruning technique that leverages a Lagrangian dual method to encourage model sparsity while retaining accuracy and ensuring robustness. The resulting models are shown to significantly outperform the state-of-the-art models in measures of robustness and accuracy.
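The abstract does not give the optimization details; as a rough illustration of how a Lagrangian dual (primal-dual) scheme can trade off task loss against a sparsity budget, the sketch below alternates gradient steps on the weights with multiplier updates on an L1 sparsity constraint. All names and hyperparameters (`sparsity_budget`, `dual_lr`, the L1 surrogate itself) are assumptions, not taken from the paper.

```python
# Hypothetical sketch of dual-ascent sparsity training; not the paper's actual algorithm.
import torch
import torch.nn as nn

def dual_sparsity_step(model, batch, labels, optimizer, lam, sparsity_budget, dual_lr=1e-3):
    """One primal-dual step: minimize CE + lam * (||w||_1 - budget) over the weights,
    then do projected gradient ascent on the multiplier lam."""
    criterion = nn.CrossEntropyLoss()
    l1 = sum(p.abs().sum() for p in model.parameters())
    constraint = l1 - sparsity_budget            # we want this <= 0
    loss = criterion(model(batch), labels) + lam * constraint

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                             # primal descent on the weights

    with torch.no_grad():                        # dual ascent, keeping lam >= 0
        lam = max(0.0, lam + dual_lr * constraint.item())
    return lam
```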
The compute-intensive nature of neural networks (NNs) limits their deployment in resource-constrained environments such as cell phones, drones, autonomous robots, etc. Hence, developing robust sparse models fit for safety-critical applications has been an issue of longstanding interest. Though adversarial training has been combined with model sparsification to attain this goal, conventional adversarial training approaches provide no formal guarantee that the models would be robust against any rogue samples in a restricted space around a benign sample. Recently proposed verified local robustness techniques provide such a guarantee. This is the first paper that combines the ideas from verified local robustness and dynamic sparse training to develop SparseVLR, a novel framework to search for verified locally robust sparse networks. Obtained sparse models exhibit accuracy and robustness comparable to their dense counterparts at sparsity as high as 99%. Furthermore, unlike most conventional sparsification techniques, SparseVLR does not require a pre-trained dense model, reducing the training time by 50%. We exhaustively investigated SparseVLR's efficacy and generalizability by evaluating various benchmark and application-specific datasets across several models.
Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to adversarial examples. We find, however, that typical adaptive evaluations are incomplete. We demonstrate that thirteen defenses recently published at ICLR, ICML and NeurIPS, which illustrate a diverse set of defense strategies, can be circumvented despite attempting to perform evaluations using adaptive attacks. While prior evaluation papers focused mainly on the end result (showing that a defense was ineffective), this paper focuses on laying out the methodology and the approach necessary to perform an adaptive attack. Some of our attack strategies are generalizable, but no single strategy would have been sufficient for all defenses. This underlines our key message that adaptive attacks cannot be automated and always require careful and appropriate tuning to a given defense. We hope that these analyses will serve as guidance on how to properly perform adaptive attacks against defenses to adversarial examples, and thus will allow the community to make further progress in building more robust models.
Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples: inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.
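The robust-optimization view trains on worst-case perturbations found by projected gradient descent (PGD) inside an $\ell_\infty$ ball around each input. Below is a minimal PyTorch sketch of that inner maximization; the epsilon, step size, and iteration count are illustrative choices, not values prescribed by the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization of the min-max objective: find a perturbation delta
    with ||delta||_inf <= eps that (approximately) maximizes the loss."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

# Outer minimization: train on the adversarial examples instead of the clean ones, e.g.
# loss = F.cross_entropy(model(pgd_attack(model, x, y)), y); loss.backward(); optimizer.step()
```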
As one of the most effective defense methods against adversarial attacks, adversarial training tends to learn an inclusive decision boundary to improve the robustness of deep learning models. However, due to the large and unnecessary increase of the margin along the adversarial direction, adversarial training causes serious crossing between natural examples and adversarial examples, which is not conducive to balancing the trade-off between robustness and natural accuracy. In this paper, we propose a novel adversarial training scheme to achieve a better trade-off between robustness and natural accuracy. It aims to learn a moderately inclusive decision boundary, which means that the margins of natural examples under the decision boundary are moderate. We call this scheme Moderate-Margin Adversarial Training (MMAT), which generates finer-grained adversarial examples to alleviate the crossing problem. We also take advantage of the logits of a well-trained teacher model to guide the learning of our model. Finally, MMAT achieves high natural accuracy and robustness under both black-box and white-box attacks. For example, on SVHN, state-of-the-art robustness and natural accuracy are achieved.
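The abstract does not specify how the finer-grained adversarial examples are produced; one plausible reading, offered purely for illustration and not as the authors' algorithm, is a small-step PGD-style search that stops as soon as an example crosses the decision boundary by a moderate logit margin.

```python
import torch
import torch.nn.functional as F

def moderate_margin_example(model, x, y, eps=8/255, alpha=0.5/255, steps=40, margin=1.0):
    """Illustrative only: small-step PGD that stops once the strongest wrong-class logit
    exceeds the true-class logit by `margin`, keeping the example near the boundary."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        logits = model(x + delta)
        true = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        wrong = logits.scatter(1, y.unsqueeze(1), float('-inf')).max(dim=1).values
        if (wrong - true).min() > margin:        # every example has already crossed moderately
            break
        loss = F.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()
```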
Recent works on the Lottery Ticket Hypothesis have shown that pre-trained language models (PLMs) contain smaller matching subnetworks (winning tickets) which are capable of reaching accuracy comparable to the original models. However, these tickets have proved to be not robust to adversarial examples, and even worse than their PLM counterparts. To address this problem, we propose a novel method based on learning binary weight masks to identify robust tickets hidden in the original PLMs. Since the loss is not differentiable for the binary mask, we assign the hard concrete distribution to the masks and encourage their sparsity using a smoothing approximation of L0 regularization. Furthermore, we design an adversarial loss objective to guide the search for robust tickets and ensure that the tickets perform well both in accuracy and robustness. Experimental results show the significant improvement of the proposed method over previous work on adversarial robustness evaluation.
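A sketch of the hard-concrete mask machinery (the standard L0 relaxation of Louizos et al.) that such mask-learning approaches rely on; the stretch parameters below are the commonly used defaults, and the wiring of the mask to a PLM's weights is omitted.

```python
import torch
import torch.nn as nn

class HardConcreteMask(nn.Module):
    """Differentiable relaxation of a binary weight mask with an L0-style sparsity penalty."""
    def __init__(self, shape, beta=2/3, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(shape))
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self):
        u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
        s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        return (s * (self.zeta - self.gamma) + self.gamma).clamp(0, 1)  # stretched, clipped gate

    def expected_l0(self):
        # Probability that each gate is non-zero; summed, this is the sparsity penalty.
        return torch.sigmoid(self.log_alpha - self.beta *
                             torch.log(torch.tensor(-self.gamma / self.zeta))).sum()

# masked_weight = pretrained_weight * mask()
# loss = task_loss + adversarial_loss + lam * mask.expected_l0()
```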
Recent studies have shown that robustness to adversarial attacks can be transferred across networks. In other words, we can make a model more robust with the help of a strong teacher model. We ask whether, instead of learning from a static teacher, models can "learn together" and "teach each other" to achieve better robustness. In this paper, we study how interactions among models affect robustness via knowledge distillation. We propose Mutual Adversarial Training (MAT), in which multiple models are trained together and share the knowledge of adversarial examples to achieve improved robustness. MAT allows robust models to explore a larger adversarial sample space and find more robust feature spaces and decision boundaries. Through extensive experiments on CIFAR-10 and CIFAR-100, we demonstrate that MAT can effectively improve model robustness and outperform the best existing methods under white-box attacks, bringing a $\sim$8% accuracy gain over vanilla adversarial training under PGD-100 attacks. In addition, we show that MAT can also mitigate the robustness trade-off among different perturbation types, bringing improvements over the baseline against $l_\infty$, $l_2$ and $l_1$ attacks. These results show the superiority of the proposed method and demonstrate that collaborative learning is an effective strategy for designing robust models.
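In outline, mutual training with shared adversarial examples could look like the hypothetical sketch below, where each model also trains on the examples crafted against its peer; `pgd_attack` is assumed to be a standard PGD routine such as the one sketched earlier, and the distillation terms the paper describes are omitted for brevity.

```python
import torch.nn.functional as F

def mutual_adversarial_step(model_a, model_b, opt_a, opt_b, x, y, pgd_attack):
    """Illustrative sketch: each model trains on adversarial examples crafted
    against itself *and* against its peer, sharing knowledge of the attack space."""
    adv_a = pgd_attack(model_a, x, y)   # examples that fool model A
    adv_b = pgd_attack(model_b, x, y)   # examples that fool model B

    for model, opt in ((model_a, opt_a), (model_b, opt_b)):
        loss = F.cross_entropy(model(adv_a), y) + F.cross_entropy(model(adv_b), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```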
Adversarial training has been empirically shown to be more prone to overfitting than standard training. The exact underlying reasons still need to be fully understood. In this paper, we identify one cause of overfitting related to current practices of generating adversarial samples from misclassified samples. To address this, we propose an alternative approach that leverages the misclassified samples to mitigate the overfitting problem. We show that our approach achieves better generalization while having comparable robustness to state-of-the-art adversarial training methods on a wide range of computer vision, natural language processing, and tabular tasks.
Adversarial training has been widely used to enhance the robustness of neural network models against adversarial attacks. However, there is still a notable gap between natural accuracy and robust accuracy. We find that one of the reasons is that the commonly used labels, one-hot vectors, hinder the learning process for image recognition. In this paper, we propose a method called Low Temperature Distillation (LTD), which generates the desired soft labels based on the knowledge distillation framework. Unlike previous work, LTD uses a relatively low temperature in the teacher model, and employs different but fixed temperatures for the teacher model and the student model. Moreover, we have investigated methods of synergistically using natural data and adversarial data with LTD. Experimental results show that, without additional unlabeled data, the proposed method combined with previous work can achieve 57.72% and 30.36% robust accuracy on the CIFAR-10 and CIFAR-100 datasets respectively, which is about 1.21% higher than the state-of-the-art methods on average.
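A rough sketch of the distillation loss such a scheme implies: the teacher's logits are softened with a fixed temperature and the student is trained toward the resulting soft labels. The temperatures and weighting below are placeholders for illustration, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def ltd_style_loss(student_logits, teacher_logits, labels,
                   t_teacher=2.0, t_student=1.0, alpha=0.9):
    """Illustrative distillation loss with separate, fixed temperatures for the
    teacher and the student; the values are placeholders, not the paper's."""
    soft_targets = F.softmax(teacher_logits.detach() / t_teacher, dim=1)
    log_probs = F.log_softmax(student_logits / t_student, dim=1)
    kd = F.kl_div(log_probs, soft_targets, reduction='batchmean')
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```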
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples crafted with imperceptible perturbations, i.e., small changes of an input image can cause a misclassification, threatening the reliability of deployed deep-learning-based systems. Adversarial training (AT) is often adopted to improve the robustness of DNNs by training on a mixture of corrupted and clean data. However, most AT-based methods are ineffective in dealing with \textit{transferred adversarial examples}, which are generated to fool a variety of defense models, and thus cannot satisfy the generalization requirements raised in real-world scenarios. Moreover, adversarially training a defense model in general cannot produce interpretable predictions on inputs with perturbations, whereas a highly interpretable robust model is required by different domain experts to understand the behavior of DNNs. In this work, we propose an approach based on Jacobian norm and Selective Input Gradient Regularization (J-SIGR), which suggests linearized robustness through Jacobian normalization and also regularizes perturbation-based saliency maps to imitate the model's interpretable predictions. As such, we achieve both improved defense capability and high interpretability of DNNs. Finally, we evaluate our method across different architectures against powerful adversarial attacks. Experiments show that the proposed J-SIGR confers improved robustness against transferred adversarial attacks, and we also show that the predictions from the neural network are easy to interpret.
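As a loose illustration of the two regularizers the abstract names, the sketch below adds (i) a random-projection estimate of the Jacobian's Frobenius norm and (ii) a penalty on the input-gradient saliency map to the task loss; the exact objectives, selection rules, and weightings in the paper may differ.

```python
import torch
import torch.nn.functional as F

def jsigr_style_loss(model, x, y, lam_jac=0.01, lam_sal=0.01):
    """Illustrative: cross-entropy + Jacobian-norm estimate + input-gradient (saliency) penalty."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Hutchinson-style estimate of ||J||_F^2 via one random projection v of the logits.
    v = torch.randn_like(logits)
    jv, = torch.autograd.grad((logits * v).sum(), x, create_graph=True)
    jac_pen = (jv ** 2).sum() / x.shape[0]

    # Saliency (input-gradient) penalty on the loss surface.
    g, = torch.autograd.grad(ce, x, create_graph=True)
    sal_pen = (g ** 2).sum() / x.shape[0]

    return ce + lam_jac * jac_pen + lam_sal * sal_pen
```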
Evasion attacks against machine learning models typically succeed by iteratively probing a fixed target model, whereby an attack that succeeds once will succeed repeatedly. One promising approach to counter this threat is to make the model a moving target against adversarial inputs. To this end, we introduce Morphence-2.0, a scalable moving target defense (MTD) powered by out-of-distribution (OOD) detection to defend against adversarial examples. By regularly moving the decision function of a model, Morphence-2.0 makes it significantly challenging for repeated or correlated attacks to succeed. Morphence-2.0 responds to prediction queries using a pool of models generated from a base model in a manner that introduces sufficient randomness. Via OOD detection, Morphence-2.0 is equipped with a scheduling approach that assigns adversarial examples to robust decision functions and benign samples to an undefended accurate model. To ensure that repeated or correlated attacks fail, the deployed pool of models automatically expires after a query budget is reached, and the model pool is seamlessly replaced by a new pool of models generated in advance. We evaluate Morphence-2.0 on two benchmark image classification datasets (MNIST and CIFAR10) against 4 reference attacks (3 white-box and 1 black-box). Morphence-2.0 consistently outperforms prior defenses while preserving accuracy on clean data and reducing attack transferability. We also show that, when powered by OOD detection, Morphence-2.0 is able to precisely make an input-based movement of the model's decision function, which leads to higher prediction accuracy on both adversarial and benign queries.
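At a systems level, the described scheduling and pool-expiry logic might look like the hypothetical sketch below; the `ood_score` function, the pool-generation routine, and the routing rule are assumptions used only to make the architecture concrete.

```python
import random

class MovingTargetScheduler:
    """Illustrative sketch of OOD-gated query scheduling over an expiring model pool."""
    def __init__(self, accurate_model, robust_pool, ood_score, ood_threshold, query_budget):
        self.accurate_model = accurate_model       # undefended, high clean accuracy
        self.robust_pool = robust_pool             # diverse, defended decision functions
        self.ood_score = ood_score                 # higher = more likely adversarial/OOD
        self.ood_threshold = ood_threshold
        self.query_budget = query_budget
        self.queries = 0

    def predict(self, x, regenerate_pool):
        self.queries += 1
        if self.queries >= self.query_budget:      # expire and replace the pool
            self.robust_pool = regenerate_pool()
            self.queries = 0
        if self.ood_score(x) > self.ood_threshold: # suspected adversarial query
            return random.choice(self.robust_pool)(x)
        return self.accurate_model(x)
```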
Adversarial training is one of the most powerful methods to improve the robustness of pre-trained language models (PLMs). However, this approach is typically more expensive than traditional fine-tuning because of the necessity to generate adversarial examples via gradient descent. Delving into the optimization process of adversarial training, we find that robust connectivity patterns emerge in the early training phase (typically $0.15\sim0.3$ epochs), far before parameters converge. Inspired by this finding, we dig out robust early-bird tickets (i.e., subnetworks) to develop an efficient adversarial training method: (1) searching for robust tickets with structured sparsity in the early stage; (2) fine-tuning robust tickets in the remaining time. To extract the robust tickets as early as possible, we design a ticket convergence metric to automatically terminate the searching process. Experiments show that the proposed efficient adversarial training method can achieve up to $7\times \sim 13 \times$ training speedups while maintaining comparable or even better robustness compared to the most competitive state-of-the-art adversarial training methods.
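One common way to formalize such a ticket-convergence check (following the original early-bird-ticket line of work) is to track the Hamming distance between the binary pruning masks extracted at consecutive checkpoints and stop searching once it stays below a threshold; the sketch below is an illustrative version of that metric, not the paper's exact definition or sparsity structure.

```python
import torch

def structured_mask(model, keep_ratio=0.5):
    """Binary mask keeping the structures (output neurons/filters) with the largest L1 norm
    (illustrative scoring rule)."""
    scores = torch.cat([p.detach().abs().sum(dim=tuple(range(1, p.dim()))).flatten()
                        for p in model.parameters() if p.dim() > 1])
    k = int(keep_ratio * scores.numel())
    threshold = scores.kthvalue(scores.numel() - k + 1).values
    return (scores >= threshold).float()

def mask_distance(mask_prev, mask_curr):
    """Normalized Hamming distance between consecutive masks; a small, stable value
    signals that the robust ticket has emerged and the search can terminate."""
    return (mask_prev != mask_curr).float().mean().item()

# if mask_distance(prev, curr) < 0.01 for a few consecutive checks:
#     stop searching, prune to the ticket, and fine-tune it for the remaining time.
```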
We present a new algorithm to learn a deep neural network model robust against adversarial attacks. Previous algorithms demonstrate an adversarially trained Bayesian Neural Network (BNN) provides improved robustness. We recognize the adversarial learning approach for approximating the multi-modal posterior distribution of a Bayesian model can lead to mode collapse; consequently, the model's achievements in robustness and performance are sub-optimal. Instead, we first propose preventing mode collapse to better approximate the multi-modal posterior distribution. Second, based on the intuition that a robust model should ignore perturbations and only consider the informative content of the input, we conceptualize and formulate an information gain objective to measure and force the information learned from both benign and adversarial training instances to be similar. Importantly, we prove and demonstrate that minimizing the information gain objective allows the adversarial risk to approach the conventional empirical risk. We believe our efforts provide a step toward a basis for a principled method of adversarially training BNNs. Our model demonstrates significantly improved robustness, up to 20%, compared with adversarial training and Adv-BNN under PGD attacks with 0.035 distortion on both CIFAR-10 and STL-10 datasets.
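The information-gain idea (forcing what is learned from benign and adversarial instances to be similar) can be loosely illustrated as a divergence penalty between the predictive distributions on clean and perturbed inputs, averaged over posterior samples of the BNN; this is a simplified reading for illustration, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def info_similarity_penalty(bnn, x_clean, x_adv, n_samples=4):
    """Illustrative: average symmetric KL between the predictive distributions on clean
    and adversarial inputs, over several posterior weight samples of the BNN.
    Assumes `bnn(x)` draws a fresh weight sample on every forward pass."""
    penalty = 0.0
    for _ in range(n_samples):
        p_clean = F.log_softmax(bnn(x_clean), dim=1)
        p_adv = F.log_softmax(bnn(x_adv), dim=1)
        penalty = penalty + 0.5 * (
            F.kl_div(p_adv, p_clean, reduction='batchmean', log_target=True) +
            F.kl_div(p_clean, p_adv, reduction='batchmean', log_target=True))
    return penalty / n_samples
```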
With the proliferation of mobile devices and the Internet of Things, deep learning models are increasingly deployed on devices with limited computing resources and memory, and are exposed to the threat of adversarial noise. Learning deep models that are both lightweight and robust is necessary for these devices. However, current deep learning solutions struggle to learn a model that possesses both properties without degrading one or the other. It is well known that fully-connected layers contribute most of the parameters of convolutional neural networks. We perform a separable structural transformation of the fully-connected layer to reduce the parameters, where the large-scale weight matrix of the fully-connected layer is decoupled into the tensor product of several separable small matrices. Note that data such as images no longer need to be flattened before being fed to the fully-connected layer, retaining the valuable spatial geometric information of the data. Moreover, to further enhance both lightweightness and robustness, we propose joint constraints of sparsity and condition number, which are imposed on these separable matrices. We evaluate the proposed approach on MLP, VGG-16 and Vision Transformer. Experimental results on datasets such as ImageNet, SVHN, CIFAR-100 and CIFAR-10 show that we successfully reduce the number of network parameters by 90% while the robust accuracy loss is less than 1.5%, which is better than SOTA methods based on the original fully-connected layer. Interestingly, it achieves an overwhelming advantage even at a high compression rate, e.g., 200 times.
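The core structural idea, replacing one large weight matrix with a tensor product of small matrices applied to an unflattened input, can be sketched as follows; the shapes and the single two-factor decomposition are illustrative choices, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SeparableLinear(nn.Module):
    """y = A x B^T on an input kept as an (h, w) matrix instead of a flattened vector.
    This acts like multiplying the flattened input by a Kronecker product of A and B,
    but stores only (out_h*h + out_w*w) parameters instead of out_h*out_w*h*w."""
    def __init__(self, in_shape=(32, 32), out_shape=(16, 16)):
        super().__init__()
        h, w = in_shape
        oh, ow = out_shape
        self.A = nn.Parameter(torch.randn(oh, h) * h ** -0.5)
        self.B = nn.Parameter(torch.randn(ow, w) * w ** -0.5)

    def forward(self, x):              # x: (batch, h, w), spatial structure preserved
        return self.A @ x @ self.B.T   # -> (batch, oh, ow)

x = torch.randn(8, 32, 32)
print(SeparableLinear()(x).shape)      # torch.Size([8, 16, 16])
```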
The security of deep learning (DL) systems is an extremely important field of research, as they are being deployed in multiple applications while continually improving to solve challenging tasks. Despite overwhelming promise, deep learning systems are vulnerable to crafted adversarial examples, which may be imperceptible to the human eye but can cause a model to misclassify. Protections against adversarial perturbations based on ensemble techniques have been shown to be easily susceptible to stronger adversaries, or have been shown to lack an end-to-end evaluation. In this paper, we attempt to develop a new ensemble-based solution that constructs defender models with decision boundaries different from those of the original model. The ensemble of classifiers constructed by (1) transforming the input via a method termed splitting and shaving, and (2) restricting the important features via a method termed contrasting features, exhibits diverse gradients with respect to adversarial attacks, which reduces the chance of transferring adversarial examples from the original model to the defender models targeting the same class. We conducted extensive experiments with state-of-the-art adversarial attacks on standard image classification datasets, namely MNIST, CIFAR-10 and CIFAR-100, to demonstrate the robustness of the ensemble-based defense. We also evaluate the robustness in the presence of a stronger adversary that simultaneously targets all models in the ensemble. Results on the overall false positives and false alarms are provided to estimate the overall performance of the proposed approach.
Recent increases in the computational demands of deep neural networks (DNNs) have sparked interest in efficient deep learning mechanisms, e.g., quantization or pruning. These mechanisms enable the construction of a small, efficient version of commercial-scale models with comparable accuracy, accelerating their deployment to resource-constrained devices. In this paper, we study the security considerations of publishing on-device variants of large-scale models. We first show that an adversary can exploit on-device models to make attacking the large models easier. In evaluations across 19 DNNs, by exploiting the published on-device models as a transfer prior, the adversarial vulnerability of the original commercial-scale models increases by up to 100x. We then show that the vulnerability increases as the similarity between a full-scale model and its efficient counterpart increases. Based on these insights, we propose a defense, $similarity$-$unpairing$, that fine-tunes on-device models with the objective of reducing the similarity. We evaluated our defense on all 19 DNNs and found that it reduces the transferability by up to 90% and the number of queries required by a factor of 10-100x. Our results suggest that further research is needed on the security (or even privacy) threats caused by publishing those efficient siblings.
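The abstract only names the objective of the defense (reducing the similarity between the on-device and full-scale models); one plausible, purely illustrative instantiation penalizes the agreement of their input gradients during fine-tuning, as sketched below. The concrete similarity measure is an assumption, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def unpairing_loss(on_device_model, full_model, x, y, lam=1.0):
    """Illustrative similarity-unpairing objective: keep the on-device model accurate
    while decorrelating its input gradients from those of the frozen full-scale model."""
    x = x.clone().requires_grad_(True)
    ce_small = F.cross_entropy(on_device_model(x), y)
    g_small, = torch.autograd.grad(ce_small, x, create_graph=True)

    ce_full = F.cross_entropy(full_model(x), y)
    g_full, = torch.autograd.grad(ce_full, x)          # frozen reference, no higher-order graph

    sim = F.cosine_similarity(g_small.flatten(1), g_full.detach().flatten(1), dim=1).mean()
    return ce_small + lam * sim                         # minimizing sim "unpairs" the gradients
```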
In recent years, the popularity and success of deep neural network (DNN) applications have motivated research on DNN compression, such as pruning and quantization. These techniques accelerate model inference, reduce power consumption, and reduce the size and complexity of the hardware required to run DNNs, all with little to no loss in accuracy. However, since DNNs are vulnerable to adversarial inputs, it is important to consider the relationship between compression and adversarial robustness. In this work, we investigate the adversarial robustness of models produced by several irregular pruning schemes and by 8-bit quantization. In addition, while conventional pruning removes the least important parameters in a DNN, we investigate the effect of an unconventional pruning method: removing the most important model parameters based on the gradients on adversarial inputs. We call this method Greedy Adversarial Pruning (GAP), and we find that this pruning method results in models that are resistant to transfer attacks from their uncompressed counterparts.
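A minimal sketch of the unconventional pruning rule described here: accumulate gradient magnitudes on adversarial inputs and zero out the parameters with the largest scores. Details such as the exact importance score, the pruning fraction, and the schedule are assumptions.

```python
import torch
import torch.nn.functional as F

def greedy_adversarial_prune(model, adv_loader, prune_frac=0.1):
    """Illustrative: remove (zero out) the fraction of weights whose gradients on
    adversarial examples have the largest accumulated magnitude."""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.dim() > 1}
    for x_adv, y in adv_loader:                       # pre-generated adversarial batches
        model.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        for n, p in model.named_parameters():
            if n in scores and p.grad is not None:
                scores[n] += p.grad.detach().abs()

    all_scores = torch.cat([s.flatten() for s in scores.values()])
    k = int(prune_frac * all_scores.numel())
    threshold = all_scores.kthvalue(all_scores.numel() - k).values  # boundary of the top-k
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in scores:
                p.masked_fill_(scores[n] > threshold, 0.0)          # remove the *most* important
```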
Deep neural networks are susceptible to adversarially crafted, small and imperceptible changes to natural inputs. The most effective defense mechanism against such examples is adversarial training, which constructs adversarial examples during training by iteratively maximizing the loss. The model is then trained to minimize the loss on these constructed examples. This min-max optimization requires more data, larger capacity models, and additional computing resources. It also degrades the standard generalization performance of a model. Can we achieve robustness more efficiently? In this work, we explore this question from the perspective of knowledge transfer. First, we theoretically show the transferability of robustness from an adversarially trained teacher model to a student model with the help of mixup augmentation. Second, we propose a novel robustness transfer method called Mixup-Based Activated Channel Maps (MixACM) Transfer. MixACM transfers robustness from a robust teacher to a student by matching activated channel maps generated without expensive adversarial perturbations. Finally, extensive experiments on multiple datasets and different learning scenarios show that our method can transfer robustness while also improving generalization on natural images.
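As a rough picture of activation-map matching without adversarial example generation: mix two natural images, run both the frozen robust teacher and the student, and penalize the distance between their normalized activation maps. The exact definition of the activated channel maps, the layer selection, and the mixup schedule are assumptions here, not the paper's specification.

```python
import torch
import torch.nn.functional as F

def channel_map(feat):
    """Simplified per-channel activation strength (batch x C), L2-normalized."""
    return F.normalize(feat.abs().mean(dim=(2, 3)), dim=1)

def mixacm_style_loss(student_feats, teacher_feats):
    """Illustrative: match the student's activation maps to a frozen robust teacher's maps
    on mixup-augmented *natural* inputs (no adversarial perturbation needed)."""
    return sum(F.mse_loss(channel_map(s), channel_map(t.detach()))
               for s, t in zip(student_feats, teacher_feats))

def mixup(x, alpha=1.0):
    """Standard mixup of a batch with a shuffled copy of itself."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm]
```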
In this discussion paper, we survey recent research on the robustness of machine learning models. As learning algorithms become increasingly popular in data-driven control systems, their robustness to data uncertainty must be ensured in order to maintain reliable safety-critical operation. We first review common formalisms for such robustness, and then proceed to discuss popular and state-of-the-art techniques for training robust machine learning models as well as methods for provably certifying such robustness. From this unification of robust machine learning, we identify and discuss pressing directions for future research in this area.
Many state-of-the-art ML models have outperformed humans in various tasks such as image classification. With such outstanding performance, ML models are widely used today. However, the existence of adversarial attacks and data poisoning attacks really calls into question the robustness of ML models. For instance, Engstrom et al. demonstrated that state-of-the-art image classifiers can be easily fooled by a small rotation of an arbitrary image. As ML systems are being increasingly integrated into safety- and security-sensitive applications, adversarial attacks and data poisoning attacks pose a considerable threat. This chapter focuses on two broad and important areas of ML security: adversarial attacks and data poisoning attacks.