Nearest prototype classifiers (NPCs) assign to each input point the label of the nearest prototype with respect to a chosen distance metric. A direct advantage of NPCs is that their decisions are interpretable. Previous work could provide lower bounds on the minimal adversarial perturbation in the $\ell_p$-threat model only when the same $\ell_p$-distance is used by the NPC. In this paper, we provide a complete discussion of the complexity of certification when combining $\ell_p$-distances and $\ell_q$-threat models for $p, q \in \{1, 2, \infty\}$. In particular, we provide a scalable algorithm for the \emph{exact} computation of the minimal adversarial perturbation when using the $\ell_2$-distance, and improved lower bounds in the other cases. Using these efficient improved bounds, we train provably adversarially robust NPCs (PNPCs) for MNIST which have better $\ell_2$-robustness guarantees than neural networks. Moreover, we give, to our knowledge, the first certification results w.r.t. the perceptual metric LPIPS, which is considered a more realistic threat model for image classification than $\ell_p$-balls. Our PNPC has higher certified robust accuracy on CIFAR10 than the empirical robust accuracy reported in (Laidlaw et al., 2021). The code is available in our repository.
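The $\ell_2$ case rests on simple prototype geometry: to change the label, a perturbation must cross the bisector hyperplane between the nearest prototype and some differently labeled prototype, so the minimum of these bisector distances is a certified lower bound. A minimal NumPy sketch of that bound (array names and the helper are illustrative, not from the paper; the paper's exact algorithm refines this further):

```python
import numpy as np

def npc_l2_certificate(x, P, y_P):
    """Certified lower bound on the minimal l2 perturbation changing an NPC label.

    x: input, shape (d,); P: prototypes, shape (m, d); y_P: prototype labels, shape (m,).
    """
    d2 = ((P - x) ** 2).sum(axis=1)      # squared l2 distances to all prototypes
    i = d2.argmin()                      # nearest prototype decides the label
    other = y_P != y_P[i]                # differently labeled prototypes
    # distance from x to the bisector of (P[i], P[j]):
    #   (||x - P[j]||^2 - ||x - P[i]||^2) / (2 ||P[j] - P[i]||)
    num = d2[other] - d2[i]
    den = 2 * np.linalg.norm(P[other] - P[i], axis=1)
    return y_P[i], (num / den).min()
```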
We show that when taking into account the image domain $[0,1]^d$, established $l_1$-projected gradient descent (PGD) attacks are suboptimal, as they do not consider that the effective threat model is the intersection of the $l_1$-ball and $[0,1]^d$. We study the expected sparsity of the steepest descent step for this effective threat model and show that the exact projection onto this set is computationally feasible and yields better performance. Moreover, we propose an adaptive form of PGD which is highly effective even with a small budget of iterations. The resulting $l_1$-APGD is a strong white-box attack showing that prior works overestimated their $l_1$-robustness. Using $l_1$-APGD for adversarial training, we obtain a robust classifier with SOTA $l_1$-robustness. Finally, we combine $l_1$-APGD and an adaptation of the Square Attack to $l_1$ into $l_1$-AutoAttack, an ensemble of attacks which reliably assesses adversarial robustness for the threat model of the $l_1$-ball intersected with $[0,1]^d$.
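The suboptimality claim can be made concrete with the classical Euclidean projection onto the $l_1$-ball (Duchi et al., 2008): alternating this projection with clipping to $[0,1]^d$ does not, in general, yield the exact projection onto the intersection, which is what the paper computes. A sketch of the standard $l_1$-ball projection alone, as a building block (not the paper's exact-intersection algorithm):

```python
import numpy as np

def project_l1_ball(v, eps):
    """Euclidean projection of v onto the l1-ball of radius eps (Duchi et al., 2008)."""
    if np.abs(v).sum() <= eps:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                  # sorted magnitudes, descending
    css = np.cumsum(u)
    # largest index rho with u_rho - (css_rho - eps) / (rho + 1) > 0
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - eps))[0][-1]
    theta = (css[rho] - eps) / (rho + 1.0)        # soft-thresholding level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```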
To rigorously certify the robustness of neural networks to adversarial perturbations, most state-of-the-art techniques rely on a triangle-shaped linear programming (LP) relaxation of the ReLU activation. While the LP relaxation is exact for a single neuron, recent results suggest that it faces an inherent "convex relaxation barrier" as additional activations are added, and as the attack budget is increased. In this paper, we propose a nonconvex relaxation of the ReLU, based on a low-rank restriction of a semidefinite programming (SDP) relaxation. We show that the nonconvex relaxation has a similar complexity to the LP relaxation, but enjoys improved tightness that is comparable to the much more expensive SDP relaxation. Despite nonconvexity, we prove that the verification problem satisfies constraint qualification, and therefore a Riemannian staircase approach is guaranteed to compute a near-globally optimal solution in polynomial time. Our experiments provide evidence that our nonconvex relaxation almost completely overcomes the "convex relaxation barrier" faced by the LP relaxation.
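For reference, the triangle relaxation in question replaces each unstably signed ReLU $y = \max(0, x)$ with its convex hull over the pre-activation bounds (standard formulation, stated here for context):

$$ y \ge 0, \qquad y \ge x, \qquad y \le \frac{u\,(x - l)}{u - l}, \qquad l \le x \le u,\;\; l < 0 < u. $$

These three inequalities are exact for a single neuron, which is precisely why the barrier the abstract describes only emerges once many relaxed neurons are composed.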
We propose a method to learn deep ReLU-based classifiers that are provably robust against normbounded adversarial perturbations on the training data. For previously unseen examples, the approach is guaranteed to detect all adversarial examples, though it may flag some non-adversarial examples as well. The basic idea is to consider a convex outer approximation of the set of activations reachable through a norm-bounded perturbation, and we develop a robust optimization procedure that minimizes the worst case loss over this outer region (via a linear program). Crucially, we show that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss. The end result is that by executing a few more forward and backward passes through a slightly modified version of the original network (though possibly with much larger batch sizes), we can learn a classifier that is provably robust to any norm-bounded adversarial attack. We illustrate the approach on a number of tasks to train classifiers with robust adversarial guarantees (e.g. for MNIST, we produce a convolutional classifier that provably has less than 5.8% test error for any adversarial attack with bounded $\ell_\infty$ norm less than $\epsilon = 0.1$), and code for all experiments is available at http://github.com/locuslab/convex_adversarial.
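Schematically (notation mine, not the paper's), training minimizes a dual upper bound $J_\epsilon$ on the worst-case loss rather than the intractable inner maximization itself:

$$ \min_\theta \sum_i \max_{\|\delta\|_\infty \le \epsilon} \ell\big(f_\theta(x_i + \delta),\, y_i\big) \;\le\; \min_\theta \sum_i J_\epsilon(x_i, y_i; \theta), $$

where $J_\epsilon$ is evaluated by the extra forward/backward passes through the dual network described above, so the bound itself is differentiable in $\theta$.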
Randomized smoothing is currently considered the state-of-the-art method for obtaining certifiably robust classifiers. Despite its excellent performance, the method is associated with various serious problems, such as "certified accuracy waterfalls", a certification vs. accuracy trade-off, and even fairness issues. Input-dependent smoothing approaches have been proposed with the intention of overcoming these flaws. However, we demonstrate that these methods lack formal guarantees, and so the resulting certificates are not justified. We show that, in general, input-dependent smoothing suffers from the curse of dimensionality, forcing the variance function to have low semi-elasticity. On the other hand, we provide a theoretical and practical framework that enables the use of input-dependent smoothing even in the presence of the curse of dimensionality, under strict restrictions. We present one concrete design of the smoothing variance function and test it on CIFAR10 and MNIST. Our design mitigates some of the problems of classical smoothing and is formally grounded, yet further improvement of the design is still needed.
A major drawback of adversarially robust models, in particular for large-scale datasets, is the extremely long training time compared to standard training. Moreover, models should ideally be robust not only to one $l_p$-threat model but to all of them. In this paper, we propose Extreme norm Adversarial Training (E-AT) for multiple-norm robustness, based on the geometric properties of $l_p$-balls. E-AT costs up to three times less than other adversarial training methods for multiple-norm robustness. Using E-AT, we show that a single epoch for ImageNet and three epochs for CIFAR-10 are sufficient to turn any $l_p$-robust model into a multiple-norm robust model. In this way, we obtain the first multiple-norm robust model for ImageNet and boost the state of the art for multiple-norm robustness on CIFAR-10 to more than $51\%$. Finally, we study the general transfer of adversarial robustness via fine-tuning between different individual $l_p$-threat models and improve the previous SOTA $l_1$-robustness on both CIFAR-10 and ImageNet. Extensive experiments show that our scheme works across datasets and architectures, including vision transformers.
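A hedged sketch of what training on the extreme norms of the $l_p$ family could look like (the `pgd_attack` helper, the per-batch random norm choice, and the budgets are illustrative assumptions, not the paper's exact E-AT recipe):

```python
import random

def extreme_norm_epoch(model, loader, opt, loss_fn, pgd_attack,
                       eps_l1=12.0, eps_linf=8 / 255):
    """One epoch of extreme-norm adversarial fine-tuning (illustrative sketch).

    pgd_attack(model, x, y, norm, eps) is an assumed helper returning
    adversarial examples for the given threat model.
    """
    model.train()
    for x, y in loader:
        # sample one of the two extreme norms of the l_p family per batch
        norm, eps = random.choice([("l1", eps_l1), ("linf", eps_linf)])
        x_adv = pgd_attack(model, x, y, norm=norm, eps=eps)
        opt.zero_grad()
        loss_fn(model(x_adv), y).backward()
        opt.step()
```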
While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most $\epsilon = 0.1$ can cause more than 35% test error.
We show how to turn any classifier that classifies well under Gaussian noise into a new classifier that is certifiably robust to adversarial perturbations under the $\ell_2$ norm. This "randomized smoothing" technique has been proposed recently in the literature, but existing guarantees are loose. We prove a tight robustness guarantee in $\ell_2$ norm for smoothing with Gaussian noise. We use randomized smoothing to obtain an ImageNet classifier with e.g. a certified top-1 accuracy of 49% under adversarial perturbations with $\ell_2$ norm less than 0.5 ($= 127/255$). No certified defense has been shown feasible on ImageNet except for smoothing. On smaller-scale datasets where competing approaches to certified $\ell_2$ robustness are viable, smoothing delivers higher certified accuracies. Our strong empirical results suggest that randomized smoothing is a promising direction for future research into adversarially robust classification. Code and models are available at http://github.com/locuslab/smoothing.
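The certification procedure admits a compact sketch in the spirit of the paper's CERTIFY routine (here `base_classifier` is any callable returning a class index, and the Monte Carlo helper is written out for self-containment; sample sizes are illustrative):

```python
import numpy as np
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint

ABSTAIN = -1

def sample_noise(base_classifier, x, sigma, n, num_classes):
    """Count class predictions of the base classifier under Gaussian noise."""
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        counts[base_classifier(x + sigma * np.random.randn(*x.shape))] += 1
    return counts

def certify(base_classifier, x, sigma, num_classes, n0=100, n=100_000, alpha=0.001):
    """Return (predicted class, certified l2 radius), or (ABSTAIN, 0.0)."""
    c_hat = sample_noise(base_classifier, x, sigma, n0, num_classes).argmax()
    counts = sample_noise(base_classifier, x, sigma, n, num_classes)
    # one-sided (1 - alpha) Clopper-Pearson lower bound on P(f(x + noise) = c_hat)
    p_lower = proportion_confint(counts[c_hat], n, alpha=2 * alpha, method="beta")[0]
    if p_lower > 0.5:
        return c_hat, sigma * norm.ppf(p_lower)   # the tight l2 radius
    return ABSTAIN, 0.0
```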
Neural networks (NNs) are known to be vulnerable to adversarial perturbations, and thus there is a line of work aiming to provide robustness certification for NNs, such as randomized smoothing, which samples smoothing noise from a certain distribution to certify a robust radius for the smoothed classifier. However, as shown by previous work, the certified robust radius of randomized smoothing shrinks when scaling to large datasets (the "curse of dimensionality"). To overcome this hurdle, we propose a Double Sampling Randomized Smoothing (DSRS) framework, which exploits the sampled probability from an additional smoothing distribution to tighten the robustness certification of the previous smoothed classifier. Theoretically, under mild assumptions, we prove that DSRS can certify a $\Theta(\sqrt{d})$ robust radius under the $\ell_2$ norm, where $d$ is the input dimension, which implies that DSRS may be able to break the curse of dimensionality of randomized smoothing. We instantiate DSRS for a generalized family of Gaussian smoothing and propose an efficient and sound computational method based on customized dual optimization that accounts for sampling error. Extensive experiments on MNIST, CIFAR-10, and ImageNet verify our theory and show that DSRS certifies larger robust radii than existing baselines consistently under different settings. Code is available at https://github.com/llylly/dsrs.
Existing defenses against adversarial examples, such as adversarial training, typically assume that the adversary will conform to a specific or known threat model, such as $\ell_p$ perturbations within a fixed budget. In this paper, we focus on the scenario where there is a mismatch between the threat model assumed by the defense during training and the actual capabilities of the adversary at test time. We ask the question: if the learner trains against a specific "source" threat model, when can we expect robustness to generalize to a stronger unknown "target" threat model at test time? Our key contribution is to formally define the problem of learning and generalization with an unforeseen adversary, which helps us reason about the increase in adversarial risk over the conventional perspective of a known adversary. Applying our framework, we derive a generalization bound which relates the generalization gap between source and target threat models to the variation of the feature extractor, which measures the expected maximum difference between extracted features across a given threat model. Based on our generalization bound, we propose adversarial training with variation regularization (AT-VR), which reduces the variation of the feature extractor across the source threat model during training. We empirically demonstrate that AT-VR can lead to improved generalization to unforeseen attacks at test time compared to standard adversarial training. Additionally, we combine variation regularization with perceptual adversarial training [Laidlaw et al. 2021] to achieve state-of-the-art robustness against unforeseen attacks. Our code is publicly available at https://github.com/inspire-group/variation-regularization.
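In symbols (my schematic rendering, with $g_\theta$ the feature extractor, $f_\theta$ the full classifier, and $S(x)$ the source threat model), the AT-VR objective augments the adversarial loss with the feature-variation term described above:

$$ \min_\theta \; \mathbb{E}_{(x,y)} \Big[ \max_{x' \in S(x)} \ell\big(f_\theta(x'),\, y\big) \;+\; \lambda \max_{x',\, x'' \in S(x)} \big\| g_\theta(x') - g_\theta(x'') \big\|_2 \Big]. $$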
Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem. Although finding the exact minimum adversarial distortion is hard, giving a certified lower bound of the minimum distortion is possible. Currently available methods for computing such a bound are either time-consuming or deliver low-quality bounds that are too loose to be useful. In this paper, we exploit the special structure of ReLU networks and provide two computationally efficient algorithms (Fast-Lin, Fast-Lip) that are able to certify non-trivial lower bounds of minimum adversarial distortions. Experiments show that (1) our methods deliver bounds close to (the gap is 2-3X) the exact minimum distortions found by Reluplex in small networks, while our algorithms are more than 10,000 times faster; (2) our methods deliver similar quality of bounds (the gap is within 35% and usually around 10%; sometimes our bounds are even better) for larger networks compared to methods based on solving linear programming problems, but our algorithms are 33-14,000 times faster; (3) our method is capable of solving large MNIST and CIFAR networks up to 7 layers with more than 10,000 neurons within tens of seconds on a single CPU core. In addition, we show that there is no polynomial time algorithm that can approximately find the minimum $\ell_1$ adversarial distortion of a ReLU network with a $0.99 \ln n$ approximation ratio unless NP = P, where $n$ is the number of neurons in the network.
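For context, Fast-Lin's speed comes from sandwiching each uncertain ReLU (pre-activation bounds $l < 0 < u$) between two parallel linear functions of slope $u/(u-l)$, so bounds propagate through the network in closed form (standard statement of the bound, included here for reference):

$$ \frac{u}{u-l}\,x \;\le\; \max(0, x) \;\le\; \frac{u}{u-l}\,(x - l), \qquad l \le x \le u. $$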
Many state-of-the-art adversarial training methods leverage upper bounds of the adversarial loss to provide security guarantees. Yet these methods require computations at each training step that cannot be incorporated into the gradient for backpropagation. We introduce a new, more principled approach to adversarial training based on a closed-form solution of an upper bound of the adversarial loss, which can be trained effectively with backpropagation. This bound is facilitated by state-of-the-art tools from robust optimization. We derive two new methods with our approach. The first method (Approximated Robust Upper Bound, or aRUB) uses the first-order approximation of the network together with basic tools from linear robust optimization to obtain an approximate upper bound of the adversarial loss that can be easily implemented. The second method (Robust Upper Bound, or RUB) computes an exact upper bound of the adversarial loss. Across a variety of tabular and vision datasets, we demonstrate the effectiveness of our more principled approach: RUB is substantially more robust than state-of-the-art methods for larger perturbations, while aRUB matches the performance of state-of-the-art methods for small perturbations. Moreover, both RUB and aRUB run faster than standard adversarial training (at the expense of an increase in memory). All the code to reproduce the results can be found at https://github.com/kimvc7/Robustness.
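The "basic tool from linear robust optimization" behind the first-order approach is presumably the dual-norm identity for a linearized output (stated here in my notation):

$$ \max_{\|\delta\|_p \le \rho} \; w^\top (x + \delta) + b \;=\; w^\top x + b + \rho\,\|w\|_q, \qquad \tfrac{1}{p} + \tfrac{1}{q} = 1, $$

so the worst case of a first-order model is available in closed form and can itself be backpropagated through.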
The evaluation of robustness against adversarial manipulation of neural-network-based classifiers is mainly tested with empirical attacks, as methods for the exact computation, even when available, do not scale to large networks. We propose in this paper a new white-box adversarial attack w.r.t. the $l_p$-norms for $p \in \{1, 2, \infty\}$ aiming at finding the minimal perturbation necessary to change the class of a given input. It has an intuitive geometric meaning, yields high-quality results quickly, and minimizes the size of the perturbation (so that it returns the robust accuracy at every threshold with a single run). It performs better than or similarly to state-of-the-art attacks which are partially specialized to one $l_p$-norm, and is robust to the phenomenon of gradient masking.
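The geometric meaning can be made explicit: if the decision boundary is locally linearized as the hyperplane $w^\top z + b = 0$, the minimal $l_p$ perturbation moving $x$ onto it has a closed form (standard fact, notation mine):

$$ \min_{z:\; w^\top z + b = 0} \|z - x\|_p \;=\; \frac{|w^\top x + b|}{\|w\|_q}, \qquad \tfrac{1}{p} + \tfrac{1}{q} = 1, $$

which is why a boundary attack of this kind adapts naturally to all three norms.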
The robustness of deep neural networks is crucial to modern AI-enabled systems and should be formally verified. Sigmoid-like neural networks have been adopted in a wide range of applications. Due to their non-linearity, sigmoid-like activation functions are usually over-approximated for efficient verification, which inevitably introduces imprecision. Considerable effort has been devoted to finding so-called tighter approximations to obtain more precise verification results. However, existing definitions of tightness are heuristic and lack theoretical foundations. We conduct a thorough empirical analysis of existing neuron-wise characterizations of tightness and reveal that they are superior only on specific neural networks. We then introduce the notion of network-wise tightness as a unified tightness definition and show that computing network-wise tightness is a complex non-convex optimization problem. We bypass this complexity from different perspectives via two efficient, provably tightest approximations. The results demonstrate the promising performance of our approaches over the state of the art: (i) achieving up to a 251.28% improvement to the certified lower robustness bounds; and (ii) exhibiting notably more precise verification results on convolutional networks.
In this discussion paper, we survey recent research surrounding the robustness of machine learning models. As learning algorithms become increasingly popular in data-driven control systems, their robustness to data uncertainty must be ensured in order to maintain reliable, safety-critical operation. We begin by reviewing common formalisms for such robustness, and then move on to discuss popular and state-of-the-art techniques for training robust machine learning models as well as methods for provably certifying such robustness. From this unified view of robust machine learning, we identify and discuss pressing directions for future research in the area.
Adversarial examples are a widely studied phenomenon in machine learning models. While most of the attention has been focused on neural networks, other practical models also suffer from this issue. In this work, we propose an algorithm for evaluating the adversarial robustness of $k$-nearest neighbor classification, i.e., finding a minimum-norm adversarial example. Diverging from previous proposals, we take a geometric approach by performing a search that expands outwards from a given input point. At a high level, the search radius expands to nearby Voronoi cells until we find a cell that classifies differently from the input point. To scale the algorithm to a large $k$, we introduce approximation steps that find perturbations with smaller norms, compared to the baselines, in a variety of datasets. Furthermore, we analyze the structural properties of datasets on which our approach outperforms the competition.
Despite the impressive performance of deep neural networks (DNNs) on perception and control tasks, several trustworthiness issues remain open. Among the most discussed topics is the existence of adversarial perturbations, which has opened an interesting line of research on provable techniques capable of quantifying the robustness of a given input. In this regard, the Euclidean distance of the input from the classification boundary denotes a well-proved robustness measure, being the minimal affordable adversarial perturbation. Unfortunately, computing such a distance is highly complex due to the non-convex nature of NNs. Although several methods have been proposed to address this issue, to the best of our knowledge no provable results have been presented to estimate and bound the committed error. This paper addresses the problem by proposing two lightweight strategies to find the minimal adversarial perturbation. Differently from the state of the art, the proposed approach allows formulating an error estimation theory of the approximate distance with respect to the theoretical one. Finally, a substantial set of experiments is reported to evaluate the performance of the algorithms and support the theoretical findings. The obtained results show that the proposed strategies approximate the theoretical distance well for samples close to the classification boundary, leading to provable robustness guarantees against any adversarial attack.
Randomized smoothing is sound when using infinite precision. However, we show that randomized smoothing is no longer sound for limited floating-point precision. We present a simple example where randomized smoothing certifies a radius of $1.26$ around a point, even though there is an adversarial example at distance $0.8$, and further extend this example to provide false certificates for CIFAR10. We discuss the implicit assumptions of randomized smoothing and show that they do not apply to generic image classification models whose smoothed versions are commonly certified. In order to overcome this problem, we propose a sound approach to randomized smoothing under floating-point precision that runs at essentially the same speed and matches the certificates of the standard, unsound practice for the standard classifiers tested so far. Our only assumption is that we have access to a fair coin.
The non-convexity of the artificial neural network (ANN) training landscape brings inherent optimization difficulties. While the traditional back-propagation stochastic gradient descent (SGD) algorithm and its variants are effective in certain cases, they can become stuck at spurious local minima and are sensitive to initializations and hyperparameters. Recent work has shown that the training of an ANN with ReLU activations can be reformulated as a convex program, bringing hope of globally optimizing interpretable ANNs. However, naively solving the convex training formulation has exponential complexity, and even an approximation heuristic requires cubic time. In this work, we characterize the quality of this approximation and develop two efficient algorithms that train ANNs with global convergence guarantees. The first algorithm is based on the alternating direction method of multipliers (ADMM). It solves both the exact convex formulation and the approximate counterpart. Linear global convergence is achieved, and the initial several iterations often yield a solution with high prediction accuracy. When solving the approximate formulation, the per-iteration time complexity is quadratic. The second algorithm, based on the "sampled convex programs" theory, is simpler to implement. It solves unconstrained convex formulations and converges to an approximately globally optimal classifier. The non-convexity of the ANN training landscape is exacerbated when adversarial training is considered. We apply robust convex optimization theory to convex training and develop convex formulations that train ANNs robust to adversarial inputs. Our analysis explicitly focuses on one-hidden-layer fully connected ANNs, but can extend to more sophisticated architectures.
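The convex reformulation referenced is, in the style of Pilanci and Ergen (2020), a group-lasso-regularized program over enumerated ReLU activation patterns $D_i$ (my rendering; the paper's exact formulation may differ in details):

$$ \min_{\{v_i, w_i\}} \; \ell\Big( \sum_{i=1}^{P} D_i X (v_i - w_i),\; y \Big) + \beta \sum_{i=1}^{P} \big( \|v_i\|_2 + \|w_i\|_2 \big) \quad \text{s.t.} \quad (2D_i - I) X v_i \ge 0, \;\; (2D_i - I) X w_i \ge 0, $$

where $X$ is the data matrix and each diagonal $D_i$ fixes a sign pattern of the hidden-layer pre-activations; the exponential complexity mentioned above comes from the number $P$ of such patterns.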
Machine learning models are often susceptible to adversarial perturbations of their inputs. Even small perturbations can cause state-of-the-art classifiers with high "standard" accuracy to produce an incorrect prediction with high confidence. To better understand this phenomenon, we study adversarially robust learning from the viewpoint of generalization. We show that already in a simple natural data model, the sample complexity of robust learning can be significantly larger than that of "standard" learning. This gap is information theoretic and holds irrespective of the training algorithm or the model family. We complement our theoretical results with experiments on popular image classification datasets and show that a similar gap exists here as well. We postulate that the difficulty of training robust classifiers stems, at least partially, from this inherently larger sample complexity.
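For intuition, the "simple natural data model" is, to my recollection of the paper, a two-class Gaussian mixture (sketch of the setup, notation mine):

$$ y \sim \mathrm{Unif}\{-1, +1\}, \qquad x \mid y \;\sim\; \mathcal{N}\big(y\,\mu,\; \sigma^2 I_d\big), $$

in which standard learning already succeeds with a constant number of samples, while robust learning requires on the order of $\sqrt{d}$ times more data, making the gap grow with the input dimension $d$.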