We introduce Parseval networks, a form of deep neural networks in which the Lipschitz constant of linear, convolutional and aggregation layers is constrained to be smaller than 1. Parseval networks are empirically and theoretically motivated by an analysis of the robustness of the predictions made by deep neural networks when their input is subject to an adversarial perturbation. The most important feature of Parseval networks is to maintain weight matrices of linear and convolutional layers to be (approximately) Parseval tight frames, which are extensions of orthogonal matrices to non-square matrices. We describe how these constraints can be maintained efficiently during SGD. We show that Parseval networks match the state-of-the-art in terms of accuracy on CIFAR-10/100 and Street View House Numbers (SVHN), while being more robust than their vanilla counterparts against adversarial examples. Incidentally, Parseval networks also tend to train faster and make better use of their full capacity.
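As a rough illustration of the constraint described above, the sketch below keeps a weight matrix close to a Parseval tight frame via the retraction W ← (1+β)W − β W Wᵀ W applied after each SGD step; the value of β and the simple flattening of convolutional kernels are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def parseval_retraction(weight: torch.Tensor, beta: float = 1e-3) -> torch.Tensor:
    """One retraction step pushing `weight` (rows = output units) towards a
    Parseval tight frame, i.e. W W^T ~= I. Intended to run after each SGD update."""
    W = weight.reshape(weight.shape[0], -1)        # flatten conv kernels to a matrix
    W = (1 + beta) * W - beta * W @ W.t() @ W      # gradient step on ||W W^T - I||^2
    return W.reshape(weight.shape)

# usage sketch: after optimizer.step(), re-project every weight matrix
# for p in model.parameters():
#     if p.dim() >= 2:
#         p.data.copy_(parseval_retraction(p.data))
```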
The highly nonlinear nature of deep neural networks makes them susceptible to adversarial examples and gives them unstable gradients, which hinders interpretability. However, existing methods that address these issues, such as adversarial training, are expensive and often sacrifice predictive accuracy. In this work, we consider curvature, a mathematical quantity that encodes the degree of nonlinearity. Using it, we demonstrate low-curvature neural networks (LCNNs) that exhibit substantially lower curvature than standard models while achieving comparable predictive performance, which leads to improved robustness and stable gradients at only a marginal increase in training time. To achieve this, we minimize a data-dependent upper bound on the curvature of the neural network, which decomposes the overall curvature in terms of the curvatures and slopes of its constituent layers. To minimize this bound efficiently, we introduce two novel architectural components: first, a nonlinearity called centered softplus, a stable variant of the softplus nonlinearity, and second, a Lipschitz-constrained batch normalization layer. Our experiments show that LCNNs have lower curvature, more stable gradients, and increased off-the-shelf adversarial robustness compared to their standard high-curvature counterparts, without affecting predictive performance. Our approach is easy to use and can be readily incorporated into existing neural network models.
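A minimal sketch of a "centered softplus" nonlinearity, i.e. a softplus shifted so that it passes through the origin while staying smooth; the exact parameterization used in the paper may differ.

```python
import math
import torch
import torch.nn.functional as F

class CenteredSoftplus(torch.nn.Module):
    """Softplus shifted so that f(0) = 0, keeping the activation smooth
    (bounded curvature) while behaving like ReLU for large |x|."""
    def __init__(self, beta: float = 1.0):
        super().__init__()
        self.beta = beta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # softplus(x) = (1/beta) * log(1 + exp(beta * x)); subtract its value at 0
        return F.softplus(x, beta=self.beta) - math.log(2.0) / self.beta
```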
Several machine learning models, including neural networks, consistently misclassify adversarial examples: inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.
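The "simple and fast method" referred to above is the fast gradient sign method (FGSM); a minimal sketch follows, with the step size eps and the [0, 1] clamp chosen purely for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x: torch.Tensor, y: torch.Tensor, eps: float = 8 / 255) -> torch.Tensor:
    """Fast gradient sign method: one signed-gradient step of size eps."""
    x = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```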
Adversarial robustness has become a central goal in deep learning, both in theory and in practice. However, successful methods for improving adversarial robustness, such as adversarial training, substantially hurt generalization performance on unperturbed data. This can have a major impact on how adversarial robustness affects real-world systems (i.e., many may opt to forgo robustness if it cannot preserve accuracy on unperturbed data). We propose Interpolated Adversarial Training, which employs recently proposed interpolation-based training methods within the adversarial training framework. On CIFAR-10, adversarial training increases the standard test error (when there is no adversary) from 4.43% to 12.32%, whereas with our Interpolated Adversarial Training we retain adversarial robustness while achieving a standard test error of only 6.45%. With our technique, the relative increase in standard error for robust models drops from 178.1% to merely 45.5%. Moreover, we provide a mathematical analysis of Interpolated Adversarial Training to confirm its efficiency and demonstrate its advantages in terms of robustness and generalization.
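A rough sketch of the idea, assuming input-space mixup applied to both the clean and the adversarially perturbed batch inside the usual adversarial-training loop; the choice of attack, the Beta(1, 1) mixing distribution and the equal weighting of the two terms are illustrative assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def soft_cross_entropy(logits, soft_targets):
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

def mixup(x, y_onehot, alpha: float = 1.0):
    lam = Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0), device=x.device)
    return lam * x + (1 - lam) * x[idx], lam * y_onehot + (1 - lam) * y_onehot[idx]

def interpolated_at_loss(model, attack, x, y, num_classes: int):
    """Mixup on the clean batch plus mixup on its adversarial counterpart."""
    y_onehot = F.one_hot(y, num_classes).float()
    x_adv = attack(model, x, y)            # any attack, e.g. the FGSM sketch above
    loss = 0.0
    for inputs in (x, x_adv):
        xm, ym = mixup(inputs, y_onehot)
        loss = loss + soft_cross_entropy(model(xm), ym)
    return loss / 2
```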
Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
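For reference, a minimal sketch of the training-time operation; this is the common "inverted" variant, which rescales activations during training so the network is used unchanged at test time, whereas the paper instead scales the weights down at test time.

```python
import torch

def dropout(x: torch.Tensor, p: float = 0.5, training: bool = True) -> torch.Tensor:
    """Inverted dropout: zero each unit with probability p during training and
    rescale by 1/(1-p), so no change is needed at test time."""
    if not training or p == 0.0:
        return x
    mask = (torch.rand_like(x) > p).to(x.dtype)
    return x * mask / (1.0 - p)
```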
Normalization techniques have become a basic component in modern convolutional neural networks (ConvNets). In particular, many recent works demonstrate that promoting orthogonality of the weights helps train deep models and improves robustness. For ConvNets, most existing methods are based on penalizing or normalizing weight matrices derived from concatenating or flattening the convolutional kernels. These methods often destroy or ignore the benign convolutional structure of the kernels; therefore, they are often expensive or impractical for deep ConvNets. In contrast, we introduce a simple and efficient "Convolutional Normalization" (ConvNorm) method that fully exploits the convolutional structure in the Fourier domain and serves as a simple plug-and-play module that can be conveniently incorporated into any ConvNet. Our method is inspired by recent work on preconditioning methods for convolutional sparse coding and can effectively promote channel-wise isometry in each layer. Furthermore, we show that ConvNorm can reduce the layerwise spectral norm of the weight matrices and hence improve the Lipschitzness of the network, leading to easier training and improved robustness for deep ConvNets. Applied to classification under noise corruptions and to generative adversarial networks (GANs), we show that ConvNorm improves the robustness of common ConvNets such as ResNet and the performance of GANs. We verify our findings via numerical experiments on CIFAR and ImageNet.
Robustness to small input changes is a highly desirable property of deep networks. One popular way to achieve this property is by designing networks with a small Lipschitz constant. In this work, we propose a new technique for constructing such Lipschitz networks that has a number of desirable properties: it can be applied to any linear network layer (fully connected or convolutional), it provides formal guarantees on the Lipschitz constant, it is easy to implement and efficient to run, and it can be combined with any training objective and optimization method. In fact, our technique is the first in the literature to achieve all of these properties simultaneously. Our main contribution is a rescaling-based weight matrix parameterization that guarantees each network layer has a Lipschitz constant of at most 1 and results in learned weight matrices that are close to orthogonal. Hence we call such layers Almost-Orthogonal Lipschitz (AOL). Experiments and ablation studies in the context of image classification with certified robust accuracy confirm that AOL layers achieve results comparable to most existing methods. Yet, they are simpler to implement and more broadly applicable, because they do not require computationally expensive matrix orthogonalization or inversion steps as part of the network architecture. We provide code at https://github.com/berndprach/aol.
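A minimal sketch of the rescaling for a fully connected layer, assuming the diagonal rescaling D_jj = (Σ_k |WᵀW|_jk)^(−1/2) so that the layer x ↦ (W D) x is 1-Lipschitz; the convolutional case described in the paper requires additional care and is omitted here.

```python
import torch

def aol_rescale(weight: torch.Tensor) -> torch.Tensor:
    """Rescale a dense weight matrix (shape: out_features x in_features) so that
    the resulting linear layer is guaranteed to be 1-Lipschitz."""
    WtW_abs = (weight.t() @ weight).abs()                 # |W^T W|
    d = WtW_abs.sum(dim=1).clamp_min(1e-12).rsqrt()       # D_jj = (sum_k |W^T W|_jk)^(-1/2)
    return weight * d.unsqueeze(0)                        # W @ diag(d): column-wise rescaling
```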
We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only "virtually" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward-and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
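A compact sketch of the virtual adversarial loss, using one power-iteration step to find the perturbation direction without labels, as described above; xi, eps and the single power iteration are illustrative hyperparameters.

```python
import torch
import torch.nn.functional as F

def _l2_normalize(d: torch.Tensor) -> torch.Tensor:
    norm = d.flatten(1).norm(dim=1).clamp_min(1e-12)
    return d / norm.view(-1, *[1] * (d.dim() - 1))

def vat_loss(model, x, xi: float = 1e-6, eps: float = 8.0, n_power: int = 1):
    """KL divergence between predictions at x and at x + r_adv, where r_adv is an
    approximately worst-case local direction found without using any labels."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)                   # target distribution
    d = torch.randn_like(x)
    for _ in range(n_power):                             # power iteration
        d = (xi * _l2_normalize(d)).requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x + d), dim=1), p, reduction="batchmean")
        d = torch.autograd.grad(kl, d)[0].detach()
    r_adv = eps * _l2_normalize(d)
    return F.kl_div(F.log_softmax(model(x + r_adv), dim=1), p, reduction="batchmean")
```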
Deep neural networks excel at learning the training data, but often provide incorrect and confident predictions when evaluated on slightly different test examples. This includes distribution shifts, outliers, and adversarial examples. To address these issues, we propose Manifold Mixup, a simple regularizer that encourages neural networks to predict less confidently on interpolations of hidden representations. Manifold Mixup leverages semantic interpolations as additional training signal, obtaining neural networks with smoother decision boundaries at multiple levels of representation. As a result, neural networks trained with Manifold Mixup learn class-representations with fewer directions of variance. We prove theory on why this flattening happens under ideal conditions, validate it on practical situations, and connect it to previous works on information theory and generalization. In spite of incurring no significant computation and being implemented in a few lines of code, Manifold Mixup improves strong baselines in supervised learning, robustness to single-step adversarial attacks, and test log-likelihood.
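A minimal sketch of the mixing step, assuming the network is available as a list of blocks followed by a classification head; the depth at which to mix is sampled uniformly (index 0 mixes the inputs themselves), and Beta(2, 2) is an illustrative choice.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def manifold_mixup_loss(blocks, head, x, y, num_classes: int, alpha: float = 2.0):
    """Mix hidden representations at a randomly chosen depth and mix labels accordingly."""
    k = torch.randint(len(blocks), (1,)).item()        # depth at which to mix (0 = inputs)
    lam = Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0), device=x.device)
    h = x
    for i, block in enumerate(blocks):
        if i == k:
            h = lam * h + (1 - lam) * h[idx]           # interpolate hidden states
        h = block(h)
    y_onehot = F.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[idx]
    return -(y_mix * F.log_softmax(head(h), dim=1)).sum(dim=1).mean()
```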
Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best-effort and have been shown to be vulnerable to sophisticated attacks. Recently a set of certified defenses have been introduced, which provide guarantees of robustness to norm-bounded attacks. However, these defenses either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google's Inception network for ImageNet) and applies broadly to arbitrary model types. Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism, that provides a rigorous, generic, and flexible foundation for defense.
Adversarial training (AT) has become a popular choice for training robust networks. However, it tends to sacrifice clean accuracy to achieve satisfactory robustness, and it suffers from a large generalization error. To address these concerns, we propose Smooth Adversarial Training (SAT), guided by our analysis of the eigenspectrum of the loss landscape. We find that curriculum learning, a scheme that emphasizes starting "easy" and gradually ramping up the "difficulty" of training, smooths the adversarial loss landscape for a suitably chosen difficulty metric. We present a general formulation of curriculum learning in the adversarial setting and propose two difficulty metrics based on the maximal Hessian eigenvalue (H-SAT) and the softmax probability (P-SAT). We show that SAT stabilizes network training even for large perturbation norms and allows the network to operate at a better clean accuracy versus robustness trade-off curve compared to AT. This leads to a significant improvement in both clean accuracy and robustness compared to AT, TRADES, and other baselines. To highlight a few results, our best model improves clean and robust accuracy by 6% and 1%, respectively, on CIFAR-100 compared to AT. On Imagenette, a ten-class subset of ImageNet, our model improves clean and robust accuracy by 23% and 3%, respectively.
Deep neural networks are vulnerable to adversarial attacks. Ideally, a robust model shall perform well on both the perturbed training data and the unseen perturbed test data. It is found empirically that fitting perturbed training data is not hard, but generalizing to perturbed test data is quite difficult. To better understand adversarial generalization, it is of great interest to study the adversarial Rademacher complexity (ARC) of deep neural networks. However, how to bound ARC in multi-layer cases is largely unclear due to the difficulty of analyzing the adversarial loss in the definition of ARC. There have been two types of attempts at bounding ARC. One is to provide the upper bound of ARC in linear and one-hidden-layer cases. However, these approaches seem hard to extend to multi-layer cases. Another is to modify the adversarial loss and provide upper bounds of Rademacher complexity on such surrogate losses in multi-layer cases. However, such variants of Rademacher complexity are not guaranteed to be bounds for meaningful robust generalization gaps (RGG). In this paper, we provide a solution to this unsolved problem. Specifically, we provide the first bound of adversarial Rademacher complexity of deep neural networks. Our approach is based on covering numbers. We provide a method to handle the robustified function classes of DNNs such that we can calculate the covering numbers. Finally, we provide experiments to study the empirical implication of our bounds and provide an analysis of poor adversarial generalization.
Randomized smoothing is currently the state-of-the-art method for constructing classifiers from neural networks that are certifiably robust against $\ell_2$-adversarial perturbations. Under this paradigm, the robustness of a classifier is aligned with its prediction confidence, i.e., higher confidence of the smoothed classifier implies better robustness. This allows us to rethink the fundamental trade-off between accuracy and robustness in terms of calibrating the confidence of the smoothed classifier. In this paper, we propose a simple training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup: it trains on convex combinations of samples along the direction of adversarial perturbation for each input. The proposed procedure effectively identifies over-confidence of the smoothed classifier as a cause of its limited robustness and offers an intuitive way to adaptively set a new decision boundary between such samples for better robustness. Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers compared with existing state-of-the-art robust training methods.
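One plausible reading of the "self-mixup" described above, sketched below purely for illustration: interpolate each input with an adversarial counterpart and soften the label towards uniform as the point moves along the attack direction. The paper's actual procedure (which also involves the Gaussian noise used for randomized smoothing) differs in its details.

```python
import torch
import torch.nn.functional as F

def smoothmix_style_loss(model, attack, x, y, num_classes: int, t=None):
    """Train on convex combinations of each input and its adversarial counterpart,
    softening the label towards uniform along the attack direction."""
    x_adv = attack(model, x, y)                           # adversarial example per input
    if t is None:                                         # mixing coefficient per sample
        t = torch.rand(x.size(0), *[1] * (x.dim() - 1), device=x.device)
    x_mix = (1 - t) * x + t * x_adv
    y_onehot = F.one_hot(y, num_classes).float()
    uniform = torch.full_like(y_onehot, 1.0 / num_classes)
    t_flat = t.view(-1, 1)
    y_mix = (1 - t_flat) * y_onehot + t_flat * uniform
    return -(y_mix * F.log_softmax(model(x_mix), dim=1)).sum(dim=1).mean()
```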
Penalizing the weight norms of a neural network with weight decay is a standard training practice for regularizing the complexity of the network. In this paper, we show that a family of regularizers, including weight decay, is ineffective at penalizing the intrinsic weight norms of networks with positively homogeneous activation functions, such as the linear, ReLU and max-pooling functions. Due to the homogeneity, the function computed by the network is invariant to shifts of weight scale between layers. The ineffective regularizers are sensitive to such shifts and therefore fail to regularize the model capacity, leading to overfitting. To address this shortcoming, we propose an improved regularizer that is invariant to weight scale shifting and thus effectively constrains the intrinsic norm of the neural network. The derived regularizer is an upper bound on the input gradient of the network, so minimizing the improved regularizer also benefits adversarial robustness. Residual connections are also considered, and we show that our regularizer likewise forms an upper bound on the input gradient of such residual networks. We demonstrate the efficacy of our proposed regularizer on various datasets and neural network architectures, improving both generalization and adversarial robustness.
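To illustrate the invariance property only (this is not necessarily the paper's exact regularizer): the product of per-layer weight norms is unchanged when one layer is scaled by a and the next by 1/a, whereas the sum of squared norms used by weight decay is not. A hedged sketch of such a penalty:

```python
import torch

def scale_shift_invariant_penalty(model) -> torch.Tensor:
    """Sum of log weight norms, i.e. the log of the product of per-layer norms.
    Rescaling W_l -> a * W_l and W_{l+1} -> W_{l+1} / a leaves this unchanged,
    whereas plain weight decay (sum of squared norms) does not."""
    logs = [p.norm().clamp_min(1e-12).log() for p in model.parameters() if p.dim() >= 2]
    return torch.stack(logs).sum()
```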
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples crafted with imperceptible perturbations, i.e., small changes to an input image can cause misclassification, threatening the reliability of deployed deep-learning-based systems. Adversarial training (AT) is often adopted to improve the robustness of DNNs by training on a mixture of corrupted and clean data. However, most AT-based methods are ineffective at handling transferred adversarial examples, which are generated to fool a wide variety of defense models, and therefore cannot satisfy the generalization requirements of real-world scenarios. Moreover, adversarially trained defense models generally cannot produce interpretable predictions for perturbed inputs, whereas domain experts require highly interpretable robust models in order to understand the behavior of DNNs. In this work, we propose an approach based on the Jacobian norm and Selective Input Gradient Regularization (J-SIGR), which promotes linearized robustness through Jacobian normalization and also regularizes perturbation-based saliency maps so that the model produces interpretable predictions. In this way, we achieve both improved defense capability and high interpretability of DNNs. Finally, we evaluate our method across different architectures against powerful adversarial attacks. Experiments demonstrate that the proposed J-SIGR confers robustness against transferred adversarial attacks, and we also show that the predictions of the neural network are easy to interpret.
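A sketch of the Jacobian-norm part only: a standard random-projection estimate of ||∂logits/∂x||_F² that can be added to the training loss. This is an illustration rather than the paper's exact formulation, and the "selective input gradient" (saliency) term is omitted.

```python
import torch

def jacobian_norm_penalty(model, x: torch.Tensor, n_proj: int = 1) -> torch.Tensor:
    """Random-projection estimate of E_x ||d logits / d x||_F^2, differentiable so
    it can be added to the training loss as a Jacobian-norm regularizer."""
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    num_classes = logits.size(1)
    penalty = torch.zeros((), device=x.device)
    for _ in range(n_proj):
        v = torch.randn_like(logits)
        v = v / v.norm(dim=1, keepdim=True).clamp_min(1e-12)   # random unit direction per sample
        g, = torch.autograd.grad((logits * v).sum(), x, create_graph=True)
        penalty = penalty + g.flatten(1).pow(2).sum(dim=1).mean()
    return num_classes * penalty / n_proj
```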
The properties of flat minima in the empirical risk landscape of neural networks have been debated for some time. Increasing evidence suggests they possess better generalization capabilities than sharp minima. First, we discuss a Gaussian mixture classification model and show analytically that there exist Bayes-optimal pointwise estimators which correspond to minimizers belonging to wide flat regions. These estimators can be found by applying maximum-flatness algorithms either directly on the classifier (which is norm independent) or on the differentiable loss function used in learning. Next, we extend the analysis to the deep learning scenario through extensive numerical validation. Using two algorithms, Entropy-SGD and Replicated-SGD, that explicitly include in the optimization objective a so-called non-local flatness measure known as the local entropy, we consistently improve the generalization error of common architectures (e.g., ResNet, EfficientNet). An easy-to-compute flatness measure shows a clear correlation with test accuracy.
We introduce DropConnect, a generalization of Dropout (Hinton et al., 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models.
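A minimal sketch of the training-time behavior for a fully connected layer: the weight mask is resampled on every forward pass. At test time the paper uses a Gaussian moment-matching approximation over the masked weights, which is omitted here in favor of simple rescaling during training.

```python
import torch
import torch.nn.functional as F

class DropConnectLinear(torch.nn.Linear):
    """Linear layer that zeroes a random subset of *weights* (not activations)
    on each training forward pass."""
    def __init__(self, in_features: int, out_features: int, p: float = 0.5):
        super().__init__(in_features, out_features)
        self.p = p

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.p == 0.0:
            return super().forward(x)
        mask = (torch.rand_like(self.weight) > self.p).to(self.weight.dtype)
        return F.linear(x, self.weight * mask / (1.0 - self.p), self.bias)
```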
Modern neural networks excel at image classification, yet they remain vulnerable to common image corruptions such as blur, speckle noise or fog. Recent methods that focus on this problem, such as AugMix and DeepAugment, introduce defenses that operate in expectation over a distribution of image corruptions. In contrast, the literature on $\ell_p$-norm bounded perturbations focuses on defenses against worst-case corruptions. In this work, we reconcile both approaches by proposing AdversarialAugment, a technique that optimizes the parameters of image-to-image models to generate adversarially corrupted augmented images. We theoretically motivate our method and give sufficient conditions for the consistency of its idealized version as well as that of DeepAugment. Our classifiers improve upon the state of the art on the common image corruptions benchmark CIFAR-10-C, evaluated in expectation, and improve worst-case performance against $\ell_p$-norm bounded perturbations on both CIFAR-10 and ImageNet.
Existing defenses against adversarial examples, such as adversarial training, typically assume that the adversary will conform to a specific or known threat model, such as $\ell_p$ perturbations within a fixed budget. In this paper, we focus on the scenario where there is a mismatch between the threat model assumed by the defense during training and the actual capabilities of the adversary at test time. We ask the question: if the learner trains against a specific "source" threat model, when can we expect robustness to generalize to a stronger, unknown "target" threat model at test time? Our key contribution is to formally define the problem of learning and generalization with an unforeseen adversary, which helps us reason about the increase in adversarial risk beyond the conventional perspective of a known adversary. Applying our framework, we derive a generalization bound that relates the generalization gap between source and target threat models to variation of the feature extractor, which measures the expected maximum difference between features extracted under a given threat model. Based on our generalization bound, we propose adversarial training with variation regularization (AT-VR), which reduces the variation of the feature extractor across the source threat model during training. We empirically demonstrate that AT-VR leads to improved generalization to unforeseen attacks at test time compared to standard adversarial training. Additionally, we combine variation regularization with perceptual adversarial training [Laidlaw et al. 2021] to achieve state-of-the-art robustness against unforeseen attacks. Our code is publicly available at https://github.com/inspire-group/variation-regularization.
Machine learning (ML) robustness and domain generalization are fundamentally related: they essentially concern data distribution shifts under adversarial and natural settings, respectively. On the one hand, recent studies show that more robust (adversarially trained) models are more generalizable. On the other hand, a theoretical understanding of their fundamental connection is lacking. In this paper, we explore the relationship between regularization and domain transferability, taking into account different factors such as norm regularization and data augmentation (DA). We propose a general theoretical framework proving that factors involving regularization of the model's function class are sufficient conditions for relative domain transferability. Our analysis implies that "robustness" is neither necessary nor sufficient for transferability; rather, regularization is a more fundamental perspective for understanding domain transferability. We then discuss popular DA protocols (including adversarial training) and show when they can be viewed as function-class regularization under certain conditions and therefore improve generalization. We conduct extensive experiments to verify our theoretical findings and show several counterexamples in which robustness and generalization are negatively correlated on different datasets.