With the proliferation of mobile devices and the Internet of Things, deep learning models are increasingly deployed on devices with limited computational resources and memory, and are exposed to the threat of adversarial noise. These devices therefore require deep models that are both lightweight and robust. However, current deep learning solutions struggle to learn models that possess both properties without degrading one or the other. It is well known that fully-connected layers contribute most of the parameters of convolutional neural networks. We perform a separable structural transformation of the fully-connected layer to reduce parameters, in which the large weight matrix of the fully-connected layer is decoupled into the tensor product of several small separable matrices. Note that data such as images no longer need to be flattened before being fed into the fully-connected layer, which preserves the valuable spatial geometric information of the data. Furthermore, to further improve lightweightness and robustness, we propose a joint constraint of sparsity and separability condition number, which is imposed on these separable matrices. We evaluate the proposed method on MLP, VGG-16 and Vision Transformer. Experimental results on datasets such as ImageNet, SVHN, CIFAR-100 and CIFAR-10 show that we successfully reduce the number of network parameters by 90% while the loss in robust accuracy is less than 1.5%, which is better than SOTA methods based on the original fully-connected layer. Interestingly, it can achieve an overwhelming advantage even at high compression ratios, e.g., 200 times.
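A minimal sketch of such a separable (tensor-product) replacement for a dense layer, assuming PyTorch; the class name, shapes and initialisation are illustrative, and the paper's joint sparsity/condition-number constraint is not shown:

```python
import torch
import torch.nn as nn

class KroneckerLinear(nn.Module):
    """Hedged sketch: the large weight matrix W (out x in) of a dense layer is
    replaced by the Kronecker (tensor) product of two small matrices
    A (out1 x in1) and B (out2 x in2), with out = out1*out2 and in = in1*in2."""
    def __init__(self, in1, in2, out1, out2):
        super().__init__()
        self.A = nn.Parameter(torch.randn(out1, in1) * 0.02)
        self.B = nn.Parameter(torch.randn(out2, in2) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out1 * out2))

    def forward(self, x):
        # x has shape (batch, in2, in1): the 2-D structure of the input is kept,
        # no flattening before the layer. Using vec(B X A^T) = (A kron B) vec(X),
        # the full dense matrix is never materialised.
        y = self.B @ x @ self.A.t()          # (batch, out2, out1)
        return y.flatten(1) + self.bias

# Equivalent to a 4096 -> 1024 dense layer (~4.2M weights) with only
# 32*64 + 32*64 = 4096 weights in A and B.
layer = KroneckerLinear(in1=64, in2=64, out1=32, out2=32)
out = layer(torch.randn(8, 64, 64))
print(out.shape)  # torch.Size([8, 1024])
```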
It is sometimes necessary to improve the performance of certain special classes, or to protect them in particular from attacks in adversarial learning. This paper proposes a framework that combines cost-sensitive classification and adversarial learning to train a model that can distinguish between protected and unprotected classes, so that the protected classes are less vulnerable to adversarial examples. Within this framework, we find an interesting phenomenon during the training of deep neural networks, referred to as the Min-Max property, concerning the absolute values of most parameters in the convolutional layers. Based on this Min-Max property, which is formulated and analyzed from the perspective of random distributions, we further build a new defense model against adversarial examples to improve adversarial robustness. One advantage of the constructed model is that it performs better than the standard model and can be combined with adversarial training to achieve further improved performance. It is experimentally confirmed that, in terms of average accuracy over all classes, our model is almost as good as existing models when no attack occurs, and better than existing models when attacks occur. Specifically, regarding the accuracy on the protected classes, the proposed model is much better than existing models when attacks occur.
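A hedged sketch of what combining cost-sensitive classification with adversarial training could look like, assuming PyTorch; the PGD settings, the class-weighting rule and all names are assumptions for illustration, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def cost_sensitive_adv_loss(model, x, y, protected_classes, eps=8/255, alpha=2/255,
                            steps=10, w_protected=5.0):
    """Generate PGD adversarial examples, then up-weight the per-sample loss
    whenever the label belongs to a protected class (illustrative scheme)."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)

    logits = model((x + delta.detach()).clamp(0, 1))
    per_sample = F.cross_entropy(logits, y, reduction="none")
    weights = torch.ones_like(per_sample)
    weights[torch.isin(y, protected_classes)] = w_protected  # protect these classes more
    return (weights * per_sample).mean()

# usage: loss = cost_sensitive_adv_loss(model, x, y, protected_classes=torch.tensor([0, 3]))
```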
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples crafted with imperceptible perturbations, i.e., tiny changes to an input image can cause misclassification, threatening the reliability of deployed deep-learning-based systems. Adversarial training (AT) is often adopted to improve the robustness of DNNs by training on a mixture of corrupted and clean data. However, most AT-based methods are ineffective in dealing with transferred adversarial examples, which are generated to fool a variety of defense models, and thus cannot satisfy the generalization requirements of real-world scenarios. Moreover, adversarially trained defense models generally cannot produce interpretable predictions for perturbed inputs, whereas domain experts require highly interpretable robust models to understand the behavior of DNNs. In this work, we propose an approach based on Jacobian norm and Selective Input Gradient Regularization (J-SIGR), which promotes linearized robustness through Jacobian normalization and also regularizes the perturbation-based saliency maps so that they imitate the model's interpretable predictions. As a result, we achieve both high defense capability and high interpretability of DNNs. Finally, we evaluate our method across different architectures against powerful adversarial attacks. Experiments demonstrate that the proposed J-SIGR confers robustness against transferred adversarial attacks, and we also show that the predictions of the neural network are easy to interpret.
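A minimal sketch of a Jacobian-norm plus input-gradient regularised training objective in the spirit of J-SIGR, assuming PyTorch; the single random projection and the omission of the "selective" masking rule are simplifications:

```python
import torch
import torch.nn.functional as F

def j_sigr_loss(model, x, y, lambda_jac=0.01, lambda_sig=0.01):
    """Cross-entropy + Jacobian-norm + input-gradient (saliency) regularisation."""
    x = x.detach().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Jacobian Frobenius-norm regulariser, estimated with one random projection:
    # E_v ||J^T v||^2 is proportional to ||J||_F^2.
    v = torch.randn_like(logits)
    jv = torch.autograd.grad((logits * v).sum(), x, create_graph=True)[0]
    jac_reg = jv.pow(2).sum() / x.size(0)

    # Input-gradient regulariser on the task loss (the paper selectively
    # penalises parts of the saliency map; here all positions are penalised).
    g = torch.autograd.grad(ce, x, create_graph=True)[0]
    sig_reg = g.pow(2).sum() / x.size(0)

    return ce + lambda_jac * jac_reg + lambda_sig * sig_reg
```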
Recent studies show that deep neural networks (DNNs) are extremely vulnerable to elaborately designed adversarial examples. Adversarial learning on such adversarial examples has proven to be one of the most effective ways to defend against these attacks. At present, most existing adversarial example generation methods are based on first-order gradients, which can hardly further improve model robustness, especially when facing second-order adversarial attacks. Compared with first-order gradients, second-order gradients provide a more accurate approximation of the loss landscape around natural examples. Inspired by this, our work crafts second-order adversarial examples and uses them to train DNNs. However, second-order optimization involves the time-consuming computation of the Hessian inverse. We propose an approximation method that transforms the problem into an optimization in a Krylov subspace, which remarkably reduces the computational complexity and speeds up the training procedure. Extensive experiments on the MNIST and CIFAR-10 datasets show that our adversarial learning with second-order adversarial examples outperforms other first-order methods and can improve model robustness against a wide range of attacks.
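A hedged sketch of how Hessian-vector products can span a small Krylov subspace for a curvature-aware (second-order) attack direction, assuming PyTorch; the paper solves an optimisation problem inside this subspace, whereas the sketch simply combines the orthonormal basis:

```python
import torch

def hvp(loss_fn, x, v):
    """Hessian-vector product of the loss w.r.t. the input x via double backprop."""
    x = x.detach().requires_grad_(True)
    loss = loss_fn(x)
    g = torch.autograd.grad(loss, x, create_graph=True)[0]
    return torch.autograd.grad((g * v).sum(), x)[0]

def second_order_direction(loss_fn, x, k=3):
    """Build span{g, Hg, H^2 g, ...} with Gram-Schmidt and return a direction
    from it (averaging the basis is an illustrative simplification)."""
    x0 = x.detach().requires_grad_(True)
    g = torch.autograd.grad(loss_fn(x0), x0)[0].detach()
    basis = [g / (g.norm() + 1e-12)]
    for _ in range(k - 1):
        w = hvp(loss_fn, x, basis[-1])
        for b in basis:                      # orthogonalise against earlier vectors
            w = w - (w * b).sum() * b
        basis.append(w / (w.norm() + 1e-12))
    d = sum(basis)
    return d / (d.norm() + 1e-12)

# usage (names are illustrative):
# direction = second_order_direction(lambda z: F.cross_entropy(model(z), y), x)
# x_adv = (x + eps * direction).clamp(0, 1)
```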
Deep neural networks (DNNs) are found to be vulnerable to adversarial attacks, and various methods have been proposed for the defense. Among these methods, adversarial training has been drawing increasing attention because of its simplicity and effectiveness. However, the performance of the adversarial training is greatly limited by the architectures of target DNNs, which often leaves the resulting DNNs with poor accuracy and unsatisfactory robustness. To address this problem, we propose DSARA to automatically search for the neural architectures that are accurate and robust after adversarial training. In particular, we design a novel cell-based search space specially for adversarial training, which improves the accuracy and the robustness upper bound of the searched architectures by carefully designing the placement of the cells and the proportional relationship of the filter numbers. Then we propose a two-stage search strategy to search for both accurate and robust neural architectures. At the first stage, the architecture parameters are optimized to minimize the adversarial loss, which makes full use of the effectiveness of the adversarial training in enhancing the robustness. At the second stage, the architecture parameters are optimized to minimize both the natural loss and the adversarial loss utilizing the proposed multi-objective adversarial training method, so that the searched neural architectures are both accurate and robust. We evaluate the proposed algorithm under natural data and various adversarial attacks, which reveals the superiority of the proposed method in terms of both accurate and robust architectures. We also conclude that accurate and robust neural architectures tend to deploy very different structures near the input and the output, which has great practical significance for both the hand-crafting and the automatic design of accurate and robust neural architectures.
The ability of deep neural networks to approximate highly complex functions is key to their success. This benefit, however, comes at the cost of enormous model sizes, which challenges their deployment in resource-constrained environments. Pruning is an effective technique for limiting this problem, but it often comes at the cost of reduced accuracy and adversarial robustness. This paper addresses these shortcomings and introduces Deadwooding, a novel global pruning technique that leverages a Lagrangian dual method to encourage model sparsity while retaining accuracy and ensuring robustness. The resulting model is shown to significantly outperform state-of-the-art models in measures of robustness and accuracy.
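A hedged sketch of Lagrangian-dual-style sparsification (the general mechanism such pruning builds on), assuming PyTorch; the sparsity proxy, the dual update and all thresholds are illustrative, and the paper's robustness term is omitted:

```python
import torch

def lagrangian_prune_step(model, task_loss, lam, sparsity_target=0.9, lr_lam=0.01):
    """Primal: minimise task loss + lam * L1 penalty. Dual: increase lam while
    the global sparsity budget is violated (simplified illustration)."""
    params = [p for p in model.parameters() if p.dim() > 1]
    n = sum(p.numel() for p in params)
    l1 = sum(p.abs().sum() for p in params)
    objective = task_loss + lam * l1 / n          # caller backprops this

    with torch.no_grad():
        near_zero = sum((p.abs() < 1e-3).sum() for p in params).float()
        violation = sparsity_target - near_zero / n
        lam = max(0.0, lam + lr_lam * violation.item())   # dual ascent on lam
    return objective, lam
```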
Despite their tremendous successes, convolutional neural networks (CNNs) incur high computational/storage costs and are vulnerable to adversarial perturbations. Recent works on robust model compression address these challenges by combining model compression techniques with adversarial training. But these methods are unable to improve throughput (frames per second) on real hardware while preserving robustness to adversarial perturbations. To overcome this problem, we propose the method of Generalized Depthwise-Separable (GDWS) convolution, an efficient, universal, post-training approximation of the standard 2D convolution. GDWS dramatically improves the throughput of standard pre-trained networks on real hardware while preserving their robustness. Moreover, GDWS scales to large problem sizes since it operates on pre-trained models and does not require any additional training. We establish the optimality of GDWS as a 2D convolution approximator and present exact algorithms for constructing optimal GDWS convolutions under complexity and error constraints. We demonstrate the effectiveness of GDWS via extensive experiments on the CIFAR-10, SVHN and ImageNet datasets. Our code can be found at https://github.com/hsndbk4/gdws.
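A hedged sketch of a post-training depthwise-separable approximation of a standard convolution via per-input-channel SVD, assuming PyTorch; the fixed rank used here stands in for the paper's optimal rank allocation under complexity/error constraints:

```python
import torch
import torch.nn as nn

def gdws_approximate(conv: nn.Conv2d, rank: int = 1):
    """For each input channel, truncate the (C_out x K*K) weight slice with an
    SVD of the given rank, producing a grouped depthwise conv followed by a
    1x1 pointwise conv that together approximate the original conv."""
    W = conv.weight.data                       # (C_out, C_in, K, K)
    c_out, c_in, k, _ = W.shape
    dw_kernels, pw_cols = [], []
    for c in range(c_in):
        U, S, Vh = torch.linalg.svd(W[:, c].reshape(c_out, k * k), full_matrices=False)
        for r in range(rank):
            dw_kernels.append((S[r].sqrt() * Vh[r]).reshape(1, 1, k, k))  # depthwise filter
            pw_cols.append(S[r].sqrt() * U[:, r])                          # pointwise mixing column
    depthwise = nn.Conv2d(c_in, c_in * rank, k, padding=conv.padding,
                          stride=conv.stride, groups=c_in, bias=False)
    depthwise.weight.data = torch.cat(dw_kernels, dim=0)
    pointwise = nn.Conv2d(c_in * rank, c_out, 1, bias=(conv.bias is not None))
    pointwise.weight.data = torch.stack(pw_cols, dim=1).unsqueeze(-1).unsqueeze(-1)
    if conv.bias is not None:
        pointwise.bias.data = conv.bias.data.clone()
    return nn.Sequential(depthwise, pointwise)
```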
Adversarial robustness has become a central goal in deep learning, both in theory and in practice. However, successful methods for improving adversarial robustness (such as adversarial training) greatly hurt generalization performance on unperturbed data. This could have a major impact on how adversarial robustness affects real-world systems (i.e., many may opt to forgo robustness if it comes at the cost of accuracy on unperturbed data). We propose Interpolated Adversarial Training, which employs recently proposed interpolation-based training methods within the adversarial training framework. On CIFAR-10, adversarial training increases the standard test error (when there is no adversary) from 4.43% to 12.32%, whereas with our Interpolated Adversarial Training we retain adversarial robustness while achieving a standard test error of only 6.45%. With our technique, the relative increase in the standard error for robust models is reduced from 178.1% to merely 45.5%. Furthermore, we provide a mathematical analysis of Interpolated Adversarial Training to confirm its efficiency and demonstrate its advantages in terms of robustness and generalization.
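A minimal sketch of interpolated (mixup-based) adversarial training, assuming PyTorch and any attack function such as PGD; the pairing and the Beta coefficient follow the usual mixup recipe rather than the paper's exact procedure:

```python
import torch
import torch.nn.functional as F

def interpolated_at_step(model, x, y, attack, alpha=1.0):
    """One interpolation-based loss on clean pairs and one on adversarial
    pairs, averaged; `attack(model, x, y)` returns adversarial inputs."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))

    def mixup_loss(inputs):
        mixed = lam * inputs + (1 - lam) * inputs[idx]
        logits = model(mixed)
        return lam * F.cross_entropy(logits, y) + (1 - lam) * F.cross_entropy(logits, y[idx])

    x_adv = attack(model, x, y)               # adversarial counterparts
    return 0.5 * (mixup_loss(x) + mixup_loss(x_adv))
```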
Adversarial training (AT) is one of the most effective ways for improving the robustness of deep convolution neural networks (CNNs). Just like common network training, the effectiveness of AT relies on the design of basic network components. In this paper, we conduct an in-depth study on the role of the basic ReLU activation component in AT for robust CNNs. We find that the spatially-shared and input-independent properties of ReLU activation make CNNs less robust to white-box adversarial attacks with either standard or adversarial training. To address this problem, we extend ReLU to a novel Sparta activation function (Spatially attentive and Adversarially Robust Activation), which enables CNNs to achieve both higher robustness, i.e., lower error rate on adversarial examples, and higher accuracy, i.e., lower error rate on clean examples, than the existing state-of-the-art (SOTA) activation functions. We further study the relationship between Sparta and the SOTA activation functions, providing more insights about the advantages of our method. With comprehensive experiments, we also find that the proposed method exhibits superior cross-CNN and cross-dataset transferability. For the former, the adversarially trained Sparta function for one CNN (e.g., ResNet-18) can be fixed and directly used to train another adversarially robust CNN (e.g., ResNet-34). For the latter, the Sparta function trained on one dataset (e.g., CIFAR-10) can be employed to train adversarially robust CNNs on another dataset (e.g., SVHN). In both cases, Sparta leads to CNNs with higher robustness than the vanilla ReLU, verifying the flexibility and versatility of the proposed method.
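A hedged sketch of a spatially attentive, input-dependent activation in the spirit of Sparta, assuming PyTorch; the gating parameterisation below is an assumption for illustration, not the paper's exact form:

```python
import torch
import torch.nn as nn

class SpatiallyAttentiveActivation(nn.Module):
    """A light per-position, per-channel gate modulates the ReLU output, so the
    nonlinearity is no longer spatially shared nor input-independent."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels)

    def forward(self, x):
        attn = torch.sigmoid(self.gate(x))   # spatial attention over the feature map
        return attn * torch.relu(x)
```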
Model compression and model defense for deep neural networks (DNNs) have been extensively and individually studied. Considering the co-importance of model compactness and robustness in practical applications, several prior works have explored improving the adversarial robustness of sparse neural networks. However, the structured sparse models obtained by the existing works suffer severe performance degradation for both benign and robust accuracy, thereby causing a challenging dilemma between robustness and structuredness of the compact DNNs. To address this problem, in this paper, we propose CSTAR, an efficient solution that can simultaneously impose the low-rankness-based Compactness, high STructuredness and high Adversarial Robustness on the target DNN models. By formulating the low-rankness and robustness requirement within the same framework and globally determining the ranks, the compressed DNNs can simultaneously achieve high compression performance and strong adversarial robustness. Evaluations for various DNN models on different datasets demonstrate the effectiveness of CSTAR. Compared with the state-of-the-art robust structured pruning methods, CSTAR shows consistently better performance. For instance, when compressing ResNet-18 on CIFAR-10, CSTAR can achieve up to 20.07% and 11.91% improvement for benign accuracy and robust accuracy, respectively. For compressing ResNet-18 with a 16x compression ratio on ImageNet, CSTAR can obtain an 8.58% benign accuracy gain and a 4.27% robust accuracy gain compared to the existing robust structured pruning method.
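A minimal sketch of the low-rankness-based compression step (truncated SVD of a dense layer), assuming PyTorch; CSTAR's global rank selection and its coupling with adversarial training are not shown:

```python
import torch
import torch.nn as nn

def low_rank_factorize(linear: nn.Linear, rank: int):
    """Replace W (out x in) with two thin layers of rank r via truncated SVD,
    so the layer becomes structured and compact."""
    W, b = linear.weight.data, linear.bias
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    first = nn.Linear(linear.in_features, rank, bias=False)
    second = nn.Linear(rank, linear.out_features, bias=b is not None)
    first.weight.data = S[:rank].sqrt().unsqueeze(1) * Vh[:rank]   # (rank, in)
    second.weight.data = U[:, :rank] * S[:rank].sqrt()             # (out, rank)
    if b is not None:
        second.bias.data = b.data.clone()
    return nn.Sequential(first, second)
```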
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. A range of defense methods have been proposed to train adversarially robust DNNs, among which adversarial training has demonstrated promising results. However, despite the preliminary understanding developed for adversarial training, it is still unclear, from an architectural perspective, which configurations lead to more robust DNNs. In this paper, we address this gap via a comprehensive investigation of the impact of network width and depth on the robustness of adversarially trained DNNs. Specifically, we make the following key observations: 1) more parameters (higher model capacity) do not necessarily help adversarial robustness; 2) reducing the capacity of the last stage (the last group of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness. We also provide a theoretical analysis explaining why such network configurations can help robustness. These architectural insights can help design adversarially robust DNNs. Code is available at \url{https://github.com/HanxunH/RobustWRN}.
Despite the efficiency and scalability of machine learning systems, recent studies have demonstrated that many classification methods, especially deep neural networks (DNNs), are vulnerable to adversarial examples; i.e., examples that are carefully crafted to fool a well-trained classification model while being indistinguishable from natural data to humans. This makes it potentially unsafe to apply DNNs or related methods in security-critical areas. Since this issue was identified by Biggio et al. (2013) and Szegedy et al. (2014), much work has been done in this field, including the development of attack methods to generate adversarial examples and the construction of defense techniques to guard against such examples. This paper aims to introduce this topic and its latest developments to the statistical community, primarily focusing on the generation of and defense against adversarial examples. The computing codes (in Python and R) used in the numerical experiments are publicly available for readers to explore the surveyed methods. It is the hope of the authors that this paper will encourage more statisticians to work on this important and exciting field of generating and defending against adversarial examples.
Adversarial training and its variants have become the prevailing approach to achieving adversarially robust classification with neural networks. However, its increased computational cost, together with the significant gap between standard and robust performance, hinder progress and beg the question of whether we can do better. In this work, we take a step back and ask: can models achieve robustness via standard training on a suitably optimized set? To this end, we devise a meta-learning method for robust classification, which optimizes the dataset prior to its deployment in a principled way and aims to effectively remove the non-robust parts of the data. We cast our optimization method as a multi-step PGD procedure on kernel regression, with a class of kernels that describe infinitely wide neural networks (Neural Tangent Kernels, NTKs). Experiments on MNIST and CIFAR-10 demonstrate that the datasets we produce enjoy very high robustness against PGD attacks when deployed in both kernel regression classifiers and neural networks. However, this robustness is somewhat fallacious, as alternative attacks manage to fool the models, which we find to also be the case for previous similar works in the literature. We discuss potential reasons for this and outline further avenues of study.
In this paper, we propose a defense strategy to improve adversarial robustness by incorporating hidden layer representations. The key idea of this defense strategy is to compress or filter the input information, including adversarial perturbations. Moreover, this defense strategy can be regarded as an activation function that can be applied to any kind of neural network. We also theoretically prove the effectiveness of this defense strategy under certain conditions. Furthermore, by incorporating hidden layer representations, we propose three types of adversarial attacks that generate three corresponding types of adversarial examples. Experiments show that our defense method can significantly improve the adversarial robustness of deep neural networks, achieving state-of-the-art performance even without adversarial training.
We present a new algorithm to learn a deep neural network model robust against adversarial attacks. Previous algorithms demonstrate that an adversarially trained Bayesian Neural Network (BNN) provides improved robustness. We recognize that the adversarial learning approach for approximating the multi-modal posterior distribution of a Bayesian model can lead to mode collapse; consequently, the model's achievements in robustness and performance are sub-optimal. Instead, we first propose preventing mode collapse to better approximate the multi-modal posterior distribution. Second, based on the intuition that a robust model should ignore perturbations and only consider the informative content of the input, we conceptualize and formulate an information gain objective to measure and force the information learned from both benign and adversarial training instances to be similar. Importantly, we prove and demonstrate that minimizing the information gain objective allows the adversarial risk to approach the conventional empirical risk. We believe our efforts provide a step toward a basis for a principled method of adversarially training BNNs. Our model demonstrates significantly improved robustness--up to 20%--compared with adversarial training and Adv-BNN under PGD attacks with 0.035 distortion on both CIFAR-10 and STL-10 datasets.
Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples: inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.
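A standard sketch of the PGD attack and the min-max (adversarial) training step this abstract refers to, assuming PyTorch; hyper-parameters are illustrative:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent on the loss within an L-infinity ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, x, y, optimizer):
    """Min-max training: minimise the loss on the worst-case (PGD) examples."""
    model.train()
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```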
Deep neural networks (DNNs) have recently achieved great success in many classification tasks. Unfortunately, they are vulnerable to adversarial attacks that generate adversarial examples with small perturbations to fool DNN models, especially in model-sharing scenarios. Adversarial training has proved to be the most effective strategy, which injects adversarial examples into model training to improve the robustness of DNN models against adversarial attacks. However, adversarial training based on existing kinds of adversarial examples fails to generalize well to standard, unperturbed test data. To achieve a better trade-off between standard accuracy and adversarial robustness, we propose a novel adversarial training framework called latent boundary-guided adversarial training (LADDER), which adversarially trains DNN models on latent boundary-guided adversarial examples. In contrast to most existing methods that generate adversarial examples in the input space, LADDER generates a myriad of high-quality adversarial examples by adding perturbations to latent features. The perturbations are made along the normal of the decision boundary constructed by an SVM with an attention mechanism. We analyze the merits of the generated boundary-guided adversarial examples from a boundary-field perspective and a visualization view. Extensive experiments and detailed analysis on MNIST, SVHN, CelebA and CIFAR-10 validate the effectiveness of LADDER in achieving a better trade-off between standard accuracy and adversarial robustness compared with vanilla DNNs and competitive baselines.
Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or required expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model's prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two feature squeezing methods: reducing the color bit depth of each pixel and spatial smoothing. These simple strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
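A minimal sketch of the two squeezers (colour bit-depth reduction and median spatial smoothing) and the prediction-difference detector, assuming PyTorch; the detection threshold is dataset-dependent in the paper:

```python
import torch
import torch.nn.functional as F

def reduce_bit_depth(x, bits=4):
    """Squeeze colour bit depth: quantise pixel values in [0, 1] to 2^bits levels."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def median_smooth(x, k=3):
    """Spatial smoothing with a k x k median filter (reflection padding)."""
    pad = k // 2
    x_pad = F.pad(x, (pad, pad, pad, pad), mode="reflect")
    patches = x_pad.unfold(2, k, 1).unfold(3, k, 1)        # (B, C, H, W, k, k)
    return patches.reshape(*patches.shape[:4], -1).median(dim=-1).values

def is_adversarial(model, x, threshold=1.0):
    """Flag inputs whose softmax prediction on the original differs (in L1)
    from that on the squeezed versions by more than the threshold."""
    p = model(x).softmax(dim=1)
    p_bit = model(reduce_bit_depth(x)).softmax(dim=1)
    p_med = model(median_smooth(x)).softmax(dim=1)
    score = torch.maximum((p - p_bit).abs().sum(1), (p - p_med).abs().sum(1))
    return score > threshold
```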
Adversarial training (AT) has shown excellent performance in defending against adversarial examples. Recent studies demonstrate that examples are not equally important to the final robustness of a model during AT, i.e., so-called hard examples that can be attacked easily exhibit more influence on the final robustness than robust examples that cannot. Therefore, guaranteeing the robustness of hard examples is crucial for improving the final robustness of the model. However, defining effective heuristics to search for hard examples is still difficult. In this paper, inspired by the information bottleneck (IB) principle, we find that an example with high mutual information between the input and its associated latent representation is more likely to be attacked. Based on this observation, we propose a novel and effective adversarial training method (InfoAT). InfoAT is encouraged to find examples with high mutual information and exploit them efficiently to improve the final robustness of the model. Experimental results show that InfoAT achieves the best robustness across different datasets and models in comparison with several state-of-the-art methods.
Deep neural network (DNN) based intelligent Internet of Things (IoT) systems have been widely deployed in the real world. However, DNNs are found to be vulnerable to adversarial examples, which raises concerns about the reliability and security of intelligent IoT systems. Testing and evaluating the robustness of IoT systems thus becomes necessary and essential. Various attacks and strategies have been proposed recently, but the efficiency problem remains unsolved: existing methods are either computationally expensive or time-consuming, which is impractical in real applications. In this paper, we propose a novel framework called Attack-Inspired GAN (AI-GAN), which generates adversarial examples conditionally. Once trained, it can generate adversarial perturbations efficiently given input images and target classes. We apply AI-GAN on different datasets in white-box settings, black-box settings, and against target models protected by state-of-the-art defenses. Through extensive experiments, AI-GAN achieves high attack success rates, outperforming existing methods, and significantly reduces generation time. Moreover, for the first time, AI-GAN successfully scales to complex datasets, e.g., CIFAR-100 and ImageNet, with success rates of around 90% on all classes.