Delusive attacks aim to substantially deteriorate the test accuracy of a learned model by slightly perturbing the features of correctly labeled training examples. By formalizing this malicious attack as finding the worst-case training data within a specific $\infty$-Wasserstein ball, we show that minimizing adversarial risk on the perturbed data is equivalent to optimizing an upper bound of natural risk on the original data. This implies that adversarial training can serve as a principled defense against delusive attacks, and consequently test accuracy can be largely recovered by adversarial training. To further understand the internal mechanism of the defense, we disclose that adversarial training resists delusive perturbations by preventing the learner from overly relying on non-robust features in a natural setting. Finally, we complement our theoretical findings with a set of experiments on popular benchmark datasets, which show that the defense withstands six different practical attacks. Both theoretical and empirical results vote for adversarial training when confronted with delusive adversaries.
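To make the key claim concrete, here is a schematic restatement (a sketch under the assumptions that the poisoned distribution $\tilde{\mathcal{D}}$ lies in the $\infty$-Wasserstein ball of radius $\epsilon$ around the clean distribution $\mathcal{D}$, that only features are perturbed, and that the adversarial training budget matches $\epsilon$; the paper's exact statement may differ in details):

$$\mathcal{R}_{\mathrm{nat}}(f;\mathcal{D}) \;=\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\ell(f(x),y)\big] \;\le\; \mathbb{E}_{(\tilde{x},y)\sim\tilde{\mathcal{D}}}\Big[\max_{\|\delta\|\le\epsilon}\ell(f(\tilde{x}+\delta),y)\Big] \;=\; \mathcal{R}_{\mathrm{adv}}^{\epsilon}(f;\tilde{\mathcal{D}}).$$

The inequality holds because any coupling certifying $W_\infty(\tilde{\mathcal{D}},\mathcal{D})\le\epsilon$ places each clean $x$ inside the $\epsilon$-ball around its perturbed counterpart $\tilde{x}$; minimizing the right-hand side, i.e., adversarial training on the poisoned data, therefore minimizes an upper bound on the natural risk.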
Adversarial examples crafted by an explicit adversary have drawn significant attention in machine learning. However, the security risk posed by a potential false friend has been largely overlooked. In this paper, we unveil the threat of hypocritical examples: inputs that are originally misclassified yet perturbed by a false friend to force correct predictions. While such perturbed examples seem harmless, we point out for the first time that they could be maliciously used to conceal the mistakes of a substandard (i.e., not as good as required) model during evaluation. Once a deployer trusts the hypocritical performance and applies the seemingly well-performing model in real-world applications, unexpected failures may occur even in benign environments. More seriously, this security risk appears to be pervasive: we find that many types of substandard models are vulnerable to hypocritical examples across multiple datasets. Furthermore, we provide a first attempt to characterize the threat with a metric termed hypocritical risk, and try to circumvent it via several countermeasures. The results demonstrate the effectiveness of the countermeasures, while the risk remains non-negligible even after adaptive robust training.
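To make the notion concrete, below is a minimal PyTorch-style sketch of one plausible way such a perturbation could be crafted (the function name, budget `eps`, step size, and iteration count are illustrative assumptions, and the paper's actual procedure may differ): a PGD-style loop that descends, rather than ascends, the loss so that an initially misclassified input is pushed toward the correct prediction.

```python
import torch
import torch.nn.functional as F

def hypocritical_example(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Perturb x within an l_inf ball so that the (possibly substandard) model
    predicts the correct label y: the mirror image of a PGD attack, with the
    loss minimized instead of maximized."""
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() - alpha * grad.sign()  # descend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # stay inside the ball
        x_adv = x_adv.clamp(0.0, 1.0)                 # stay a valid image
    return x_adv
```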
We show that there may exist an inherent tension between the goal of adversarial robustness and that of standard generalization. Specifically, training robust models may not only be more resource-consuming, but also lead to a reduction of standard accuracy. We demonstrate that this trade-off between the standard accuracy of a model and its robustness to adversarial perturbations provably exists in a fairly simple and natural setting. These findings also corroborate a similar phenomenon observed empirically in more complex settings. Further, we argue that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers. These differences, in particular, seem to result in unexpected benefits: the representations learned by robust models tend to align better with salient data characteristics and human perception.
Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features (derived from patterns in the data distribution) that are highly predictive, yet brittle and (thus) incomprehensible to humans. After capturing these features within a theoretical framework, we establish their widespread existence in standard datasets. Finally, we present a simple setting where we can rigorously tie the phenomena we observe in practice to a misalignment between the (human-specified) notion of robustness and the inherent geometry of the data.
A backdoor data poisoning attack is an adversarial attack in which the attacker injects several watermarked, mislabeled training examples into the training set. The watermark does not impact the test-time performance of the model on typical data; however, the model reliably errs on watermarked examples. To gain a better foundational understanding of backdoor data poisoning attacks, we present a formal theoretical framework within which one can discuss backdoor data poisoning attacks for classification problems. We then use it to analyze important statistical and computational issues surrounding these attacks. On the statistical front, we identify a parameter we call the memorization capacity that captures the intrinsic vulnerability of a learning problem to a backdoor attack. This allows us to argue about the robustness of several natural learning problems to backdoor attacks. Our results in favor of the attacker involve presenting explicit constructions of backdoor attacks, and our robustness results show that some natural problem settings cannot yield successful backdoor attacks. From a computational standpoint, we show that, under certain assumptions, adversarial training can detect the presence of backdoors in a training set. We then show that, under similar assumptions, two closely related problems that we call backdoor filtering and robust generalization are nearly equivalent. This implies that it is both asymptotically necessary and sufficient to design algorithms that can identify watermarked examples in the training set in order to obtain a learning algorithm that both generalizes well to unseen data and is robust to backdoors.
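As a concrete illustration of the attack being formalized, the following PyTorch-style sketch stamps a small trigger patch onto a small fraction of training images and relabels them to an attacker-chosen target class (the patch shape and location, poison fraction, and target label are assumptions of this sketch, not the paper's construction):

```python
import torch

def poison_with_backdoor(images, labels, target_class=0, poison_frac=0.01, patch_value=1.0):
    """Return a copy of (images, labels) in which a fraction of examples carry a
    3x3 trigger patch in the bottom-right corner and are mislabeled as target_class.

    images: float tensor of shape (N, C, H, W); labels: long tensor of shape (N,).
    """
    images, labels = images.clone(), labels.clone()
    n_poison = max(1, int(poison_frac * images.size(0)))
    idx = torch.randperm(images.size(0))[:n_poison]
    images[idx, :, -3:, -3:] = patch_value   # stamp the watermark/trigger
    labels[idx] = target_class               # mislabel to the target class
    return images, labels
```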
We identify a trade-off between robustness and accuracy that serves as a guiding principle in the design of defenses against adversarial examples. Although this problem has been widely studied empirically, much remains unknown concerning the theory underlying this trade-off. In this work, we decompose the prediction error for adversarial examples (robust error) as the sum of the natural (classification) error and boundary error, and provide a differentiable upper bound using the theory of classification-calibrated loss, which is shown to be the tightest possible upper bound uniform over all probability distributions and measurable predictors. Inspired by our theoretical analysis, we also design a new defense method, TRADES, to trade adversarial robustness off against accuracy. Our proposed algorithm performs well experimentally in real-world datasets. The methodology is the foundation of our entry to the NeurIPS 2018 Adversarial Vision Challenge in which we won the 1st place out of ~2,000 submissions, surpassing the runner-up approach by 11.41% in terms of mean $\ell_2$ perturbation distance.
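The resulting training objective is commonly implemented as a natural cross-entropy term plus a KL divergence between predictions on clean and adversarially perturbed inputs; a condensed PyTorch-style sketch follows (step sizes, iteration counts, and the trade-off weight `beta` are illustrative, the inner maximization is abbreviated, and the clean-prediction target is detached here for simplicity, whereas the reference implementation lets gradients flow through both branches):

```python
import torch
import torch.nn.functional as F

def trades_loss(model, x, y, eps=8/255, alpha=2/255, steps=10, beta=6.0):
    """Natural loss + beta * KL(p(x) || p(x_adv)), with x_adv chosen to maximize the KL term."""
    x = x.detach()
    p_natural = F.softmax(model(x), dim=1).detach()

    # Inner maximization: find x_adv whose predictions diverge most from p_natural.
    x_adv = x + 0.001 * torch.randn_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_natural, reduction="batchmean")
        grad = torch.autograd.grad(kl, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)

    # Outer minimization: natural (classification) error plus the boundary (KL) term.
    loss_natural = F.cross_entropy(model(x), y)
    loss_boundary = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_natural, reduction="batchmean")
    return loss_natural + beta * loss_boundary
```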
Adversarial transferability is an intriguing property: adversarial perturbations crafted against one model are also effective against other models, even when those models come from different model families or training processes. To better protect ML systems against adversarial attacks, several questions arise: What are sufficient conditions for adversarial transferability, and how can it be bounded? Is there a way to reduce adversarial transferability in order to improve the robustness of an ensemble ML model? To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models; we then propose a practical algorithm to reduce transferability between base models within an ensemble so as to improve its robustness. Our theoretical analysis shows that promoting orthogonality between the gradients of the base models alone is not enough to ensure low transferability; at the same time, model smoothness is an important factor in controlling transferability. We also provide lower and upper bounds on adversarial transferability under certain conditions. Inspired by our theoretical analysis, we propose an effective Transferability Reduced Smooth (TRS) ensemble training strategy to train a robust ensemble with low transferability by enforcing both gradient orthogonality and model smoothness among the base models. We conduct extensive experiments on TRS and compare it with 6 state-of-the-art ensemble baselines against 8 white-box attacks on different datasets, showing that the proposed TRS significantly outperforms all baselines.
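The following PyTorch-style sketch illustrates the two ingredients the abstract highlights, penalizing gradient alignment between base models and encouraging model smoothness, in a deliberately simplified form (the pairwise cosine penalty and the gradient-norm smoothness proxy, as well as the weights `lambda_sim` and `lambda_smooth`, are assumptions of this sketch and not the exact TRS objective):

```python
import torch
import torch.nn.functional as F

def transferability_regularizer(models, x, y, lambda_sim=1.0, lambda_smooth=0.1):
    """Penalize (i) cosine similarity between input-gradients of base models and
    (ii) the gradient magnitude of each base model as a crude smoothness proxy."""
    grads = []
    for model in models:
        x_in = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_in), y)
        # create_graph=True keeps the regularizer differentiable w.r.t. model weights.
        g = torch.autograd.grad(loss, x_in, create_graph=True)[0]
        grads.append(g.flatten(start_dim=1))

    sim_term, smooth_term = 0.0, 0.0
    for i in range(len(grads)):
        smooth_term = smooth_term + grads[i].norm(dim=1).mean()
        for j in range(i + 1, len(grads)):
            cos = F.cosine_similarity(grads[i], grads[j], dim=1)
            sim_term = sim_term + cos.abs().mean()

    return lambda_sim * sim_term + lambda_smooth * smooth_term
```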
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks, including evasion and backdoor (poisoning) attacks. On the defense side, there have been intensive efforts on improving both empirical and certified robustness against evasion attacks; however, certified robustness against backdoor attacks remains largely unexplored. In this paper, we focus on certifying machine learning model robustness against general threat models, especially backdoor attacks. We first provide a unified framework via randomized smoothing techniques and show how it can be instantiated to certify robustness against both evasion and backdoor attacks. We then propose the first robust training process, RAB, to smooth the trained model and certify its robustness against backdoor attacks. We derive the robustness bound for machine learning models trained with RAB and prove that our robustness bound is tight. In addition, we show that robust smoothed models can be trained efficiently for simple models such as K-nearest-neighbor classifiers, and we propose an exact smooth-training algorithm that eliminates the need to sample from a noise distribution for such models. Empirically, we conduct comprehensive experiments for different machine learning (ML) models such as DNNs, differentially private DNNs, and K-NN models on the MNIST, CIFAR-10, and ImageNet datasets, and provide the first benchmark for certified robustness against backdoor attacks. In addition, we evaluate K-NN models on the spambase tabular dataset to demonstrate the advantages of the proposed exact algorithm. The comprehensive evaluation on such diverse ML models and datasets sheds light on further robust learning strategies against general training-time attacks.
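For reference, the generic randomized-smoothing prediction rule that such frameworks instantiate looks roughly like the following PyTorch sketch (Gaussian test-time smoothing with a majority vote, shown for concreteness with illustrative `sigma` and sample count; RAB's backdoor certification additionally smooths over the training data and process, which this sketch does not capture):

```python
import torch

@torch.no_grad()
def smoothed_predict(model, x, num_classes, sigma=0.25, n_samples=1000):
    """Majority vote of the base classifier under Gaussian input noise.

    x: a single input of shape (C, H, W). Returns the most frequent class index.
    """
    counts = torch.zeros(num_classes, dtype=torch.long)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).unsqueeze(0)
        pred = model(noisy).argmax(dim=1)
        counts[pred.item()] += 1
    return counts.argmax().item()
```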
Research on improving the robustness of deep neural networks against adversarial examples has grown rapidly in recent years. Among these efforts, adversarial training is the most promising one, which flattens the input loss landscape (loss change with respect to input) via training on adversarially perturbed examples. However, how the widely used weight loss landscape (loss change with respect to weight) performs in adversarial training is rarely explored. In this paper, we investigate the weight loss landscape from a new perspective, and identify a clear correlation between the flatness of the weight loss landscape and the robust generalization gap. Several well-recognized adversarial training improvements, such as early stopping, designing new objective functions, or leveraging unlabeled data, all implicitly flatten the weight loss landscape. Based on these observations, we propose a simple yet effective Adversarial Weight Perturbation (AWP) to explicitly regularize the flatness of the weight loss landscape, forming a double-perturbation mechanism in the adversarial training framework that adversarially perturbs both inputs and weights. Extensive experiments demonstrate that AWP indeed brings a flatter weight loss landscape and can be easily incorporated into various existing adversarial training methods to further boost their adversarial robustness.
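A compressed PyTorch-style sketch of the double-perturbation idea is given below (a single weight-perturbation step, per-parameter rather than layer-wise scaling, and the perturbation scale `gamma` are simplifications and assumptions relative to the full AWP procedure; `x_adv` is assumed to come from an ordinary input attack such as PGD):

```python
import torch
import torch.nn.functional as F

def awp_training_step(model, optimizer, x_adv, y, gamma=0.01):
    """One adversarial-training step with an adversarial weight perturbation:
    weights are pushed in the loss-ascending direction before the gradient is
    computed, then restored before the optimizer update."""
    params = [p for p in model.parameters() if p.requires_grad]

    # 1) Direction in weight space that locally increases the adversarial loss.
    loss = F.cross_entropy(model(x_adv), y)
    grads = torch.autograd.grad(loss, params)

    perturbations = []
    for p, g in zip(params, grads):
        delta = gamma * p.norm() * g / (g.norm() + 1e-12)  # relative-scale ascent step
        p.data.add_(delta)
        perturbations.append(delta)

    # 2) Backpropagate through the perturbed weights.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()

    # 3) Restore the original weights, then apply the update computed at the perturbed point.
    for p, delta in zip(params, perturbations):
        p.data.sub_(delta)
    optimizer.step()
```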
Randomized smoothing is currently the state-of-the-art method for constructing certifiably robust classifiers from neural networks against $\ell_2$-adversarial perturbations. Under this paradigm, the robustness of a classifier is aligned with its prediction confidence, i.e., higher confidence from a smoothed classifier implies better robustness. This motivates us to rethink the fundamental trade-off between accuracy and robustness in terms of calibrating the confidences of a smoothed classifier. In this paper, we propose a simple training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup: it trains on convex combinations of samples along the direction of adversarial perturbation for each input. The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness in the case of smoothed classifiers, and offers an intuitive way to adaptively set a new decision boundary between these samples for better robustness. Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers compared to existing state-of-the-art robust training methods.
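The core self-mixup step can be sketched roughly as follows in PyTorch (how the adversarial endpoint of a smoothed classifier is approximated, how the mixed samples are labeled downstream, and all hyperparameters here are assumptions of this sketch rather than the exact SmoothMix recipe):

```python
import torch
import torch.nn.functional as F

def self_mixup_batch(model, x, y, eps=1.0, alpha=0.25, steps=4, sigma=0.25):
    """Build convex combinations between each input (N, C, H, W) and an adversarial
    point of its noise-smoothed prediction, to be used as extra training samples."""
    x = x.detach()
    # Roughly approximate an adversarial example of the smoothed classifier with a
    # few l_2 gradient steps on Gaussian-noise-augmented inputs.
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv + sigma * torch.randn_like(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        grad = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
        delta = (x_adv.detach() + alpha * grad - x).renorm(p=2, dim=0, maxnorm=eps)
        x_adv = (x + delta).clamp(0.0, 1.0)

    # Self-mixup: interpolate each input with its adversarial counterpart.
    lam = torch.rand(x.size(0), 1, 1, 1, device=x.device)
    x_mix = lam * x + (1.0 - lam) * x_adv
    return x_mix, lam.view(-1)  # lam can be used to soften the corresponding training targets
```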
Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with stronger robustness to blackbox attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks (Kurakin et al., 2017c). However, subsequent work found that more elaborate black-box attacks could significantly enhance transferability and reduce the accuracy of our models.
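The single-step attack described above, a small random step followed by a gradient-sign step with the remaining budget, can be sketched as follows in PyTorch (pixel range and default budgets are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def rand_fgsm(model, x, y, eps=16/255, alpha=8/255):
    """Single-step attack with a random warm start that escapes the non-smooth
    vicinity of the data point, then an FGSM step with the remaining budget."""
    x = x.detach()
    x_rand = (x + alpha * torch.sign(torch.randn_like(x))).clamp(0.0, 1.0)
    x_rand.requires_grad_(True)
    loss = F.cross_entropy(model(x_rand), y)
    grad = torch.autograd.grad(loss, x_rand)[0]
    x_adv = x_rand.detach() + (eps - alpha) * grad.sign()
    # Keep the total perturbation within the l_inf budget and the valid pixel range.
    return (x + (x_adv - x).clamp(-eps, eps)).clamp(0.0, 1.0)
```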
Adversarial robustness has become a central goal in deep learning, both in theory and in practice. However, successful methods for improving adversarial robustness (such as adversarial training) greatly hurt generalization performance on the unperturbed data. This could have a major impact on how adversarial robustness affects real-world systems (i.e., many may opt to forego robustness if doing so can improve accuracy on the unperturbed data). We propose Interpolated Adversarial Training, which employs recently proposed interpolation-based training methods within the adversarial training framework. On CIFAR-10, adversarial training increases the standard test error (when there is no adversary) from 4.43% to 12.32%, whereas with Interpolated Adversarial Training we retain adversarial robustness while achieving a standard test error of only 6.45%. With our technique, the relative increase in standard error for the robust model drops from 178.1% to only 45.5%. Moreover, we provide a mathematical analysis of Interpolated Adversarial Training to confirm its efficiency and demonstrate its advantages in terms of robustness and generalization.
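One way to realize the interpolation step inside an adversarial training loop is sketched below in PyTorch, applying standard mixup to a batch of adversarial examples (the Beta parameter, the use of a single interpolated branch rather than the paper's exact pairing of clean and adversarial branches, and the attack producing `x_adv` are assumptions of this sketch):

```python
import torch
import torch.nn.functional as F

def interpolated_adversarial_loss(model, x_adv, y, mixup_alpha=1.0):
    """Mixup applied to adversarially perturbed inputs: train on convex combinations
    of adversarial examples with the correspondingly mixed labels."""
    lam = torch.distributions.Beta(mixup_alpha, mixup_alpha).sample().item()
    perm = torch.randperm(x_adv.size(0), device=x_adv.device)

    x_mix = lam * x_adv + (1.0 - lam) * x_adv[perm]
    logits = model(x_mix)
    # Mixed cross-entropy target, as in standard mixup.
    return lam * F.cross_entropy(logits, y) + (1.0 - lam) * F.cross_entropy(logits, y[perm])
```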
Defending against adversarial examples remains an open problem. A common belief is that randomness at inference time increases the cost of finding adversarial inputs. One example of such a defense is to apply a random transformation to inputs before feeding them to the model. In this paper, we empirically and theoretically investigate such stochastic pre-processing defenses and demonstrate that they are flawed. First, we show that most stochastic defenses are weaker than previously thought; they lack sufficient randomness to withstand even standard attacks like projected gradient descent. This casts doubt on a long-held assumption that stochastic defenses invalidate attacks designed to evade deterministic defenses and force attackers to integrate the Expectation over Transformation (EOT) concept. Second, we show that stochastic defenses face a trade-off between adversarial robustness and model invariance; they become less effective as the defended model acquires more invariance to their randomization. Future work will need to decouple these two effects. Our code is available in the supplementary material.
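The Expectation over Transformation (EOT) idea mentioned above amounts to averaging gradients over independent draws of the randomized transform before each attack step; a minimal PyTorch-style sketch is shown below (the stochastic pre-processing `transform`, sample count, and step sizes are placeholders and assumptions):

```python
import torch
import torch.nn.functional as F

def eot_pgd(model, transform, x, y, eps=8/255, alpha=2/255, steps=20, eot_samples=10):
    """l_inf PGD in which every gradient is averaged over random draws of the
    stochastic pre-processing `transform` (Expectation over Transformation)."""
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x)
        for _ in range(eot_samples):
            loss = F.cross_entropy(model(transform(x_adv)), y)
            grad = grad + torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * (grad / eot_samples).sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```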
Adversarial training is widely believed to be a reliable approach for improving model robustness against adversarial attacks. However, in this paper, we show that when trained on one type of poisoned data, adversarial training can also be fooled into exhibiting catastrophic behavior, e.g., $<1\%$ robust test accuracy with $>90\%$ robust training accuracy on the CIFAR-10 dataset. Previously, other types of noise poisoned into the training data have successfully fooled standard training ($15.8\%$ standard test accuracy with $99.9\%$ standard training accuracy on the CIFAR-10 dataset), but their poisoning can be easily removed by adopting adversarial training. Therefore, we aim to design a new type of inducing noise, named ADVIN, which is an irremovable poisoning of training data. ADVIN can not only degrade the robustness of adversarial training by a large margin, for example, driving robust test accuracy down to $0.57\%$ on the CIFAR-10 dataset, but is also effective at fooling standard training ($13.1\%$ standard test accuracy with $100\%$ standard training accuracy). Moreover, ADVIN can be applied to prevent personal data (such as selfies) from being exploited without authorization under either standard or adversarial training.
Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples-inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.
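The projected gradient descent (PGD) attack that underpins this robust-optimization view can be sketched as follows in PyTorch (an $\ell_\infty$ threat model with illustrative budget and step size; the random start inside the ball corresponds to the random-restart procedure used in this line of work):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """l_inf PGD: random start in the eps-ball, then iterated signed-gradient
    ascent on the loss with projection back onto the ball."""
    x = x.detach()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project onto the l_inf ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```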
Despite the tremendous success of deep neural networks across various tasks, their vulnerability to imperceptible adversarial perturbations hinders their deployment in the real world. Recently, work on randomized ensembles has demonstrated significant improvements in adversarial robustness over standard adversarially trained (AT) models with minimal computational overhead, making them a promising solution for safety-critical, resource-constrained applications. However, this impressive performance raises the question: are these robustness gains provided by randomized ensembles real? In this work we address this question both theoretically and empirically. We first establish theoretically that commonly employed robustness evaluation methods such as adaptive PGD provide a false sense of security in this setting. Subsequently, we propose a theoretically sound and efficient adversarial attack algorithm (ARC) capable of compromising randomized ensembles even in cases where adaptive PGD fails to do so. We conduct comprehensive experiments across a variety of network architectures, training schemes, datasets, and norms to support our claims, and empirically establish that randomized ensembles are in fact more vulnerable to $\ell_p$-bounded adversarial perturbations than even standard AT models. Our code can be found at https://github.com/hsndbk4/arc.
Machine learning models are often susceptible to adversarial perturbations of their inputs. Even small perturbations can cause state-of-the-art classifiers with high "standard" accuracy to produce an incorrect prediction with high confidence. To better understand this phenomenon, we study adversarially robust learning from the viewpoint of generalization. We show that already in a simple natural data model, the sample complexity of robust learning can be significantly larger than that of "standard" learning. This gap is information theoretic and holds irrespective of the training algorithm or the model family. We complement our theoretical results with experiments on popular image classification datasets and show that a similar gap exists here as well. We postulate that the difficulty of training robust classifiers stems, at least partially, from this inherently larger sample complexity.
In this discussion paper, we survey recent research on the robustness of machine learning models. As learning algorithms become increasingly popular in data-driven control systems, it is essential to ensure their robustness to data uncertainty in order to maintain reliable, safety-critical operation. We first review common formalisms for such robustness, and then proceed to discuss popular and state-of-the-art techniques for training robust machine learning models as well as methods for provably certifying such robustness. From this unified view of robust machine learning, we identify and discuss pressing directions for future research in the area.
Existing defenses against adversarial examples, such as adversarial training, typically assume that the adversary will conform to a specific or known threat model, such as $\ell_p$ perturbations within a fixed budget. In this paper, we focus on the scenario where there is a mismatch between the threat model assumed by the defense during training and the actual capabilities of the adversary at test time. We ask the question: if the learner trains against a specific "source" threat model, when can we expect robustness to generalize to a stronger, unknown "target" threat model at test time? Our key contribution is to formally define the problem of learning and generalization with an unforeseen adversary, which helps us reason about the increase in adversarial risk over the conventional perspective of a known adversary. Applying our framework, we derive a generalization bound that relates the generalization gap between source and target threat models to variation of the feature extractor, which measures the expected maximum difference between features extracted across a given threat model. Based on our generalization bound, we propose adversarial training with variation regularization (AT-VR), which reduces the variation of the feature extractor across the source threat model during training. We empirically demonstrate that AT-VR leads to improved generalization to unforeseen attacks at test time compared to standard adversarial training. Additionally, we combine variation regularization with perceptual adversarial training [Laidlaw et al. 2021] to achieve state-of-the-art robustness against unforeseen attacks. Our code is publicly available at https://github.com/inspire-group/variation-regularization.
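A rough PyTorch-style sketch of the variation-regularized training loss is given below (splitting the network into `feature_extractor` and `classifier`, the weight `lam`, and the callable `attack` with the signature shown are all assumptions of this sketch; the variation term is crudely approximated with a single pair of independently perturbed inputs, whereas the paper's regularizer measures the expected maximum feature difference over the source threat model):

```python
import torch
import torch.nn.functional as F

def at_vr_loss(feature_extractor, classifier, attack, x, y, lam=1.0):
    """Adversarial training loss plus a variation penalty on the feature extractor.

    `attack(feature_extractor, classifier, x, y)` is assumed to return inputs
    perturbed within the source threat model and to be stochastic (e.g., PGD
    with a random start), so two calls yield different perturbed copies.
    """
    x_adv1 = attack(feature_extractor, classifier, x, y)
    x_adv2 = attack(feature_extractor, classifier, x, y)

    # Standard adversarial classification loss on one perturbed copy.
    loss_adv = F.cross_entropy(classifier(feature_extractor(x_adv1)), y)

    # Variation regularizer: distance between features of two perturbed copies,
    # a one-sample stand-in for the expected maximum feature difference.
    f1 = feature_extractor(x_adv1).flatten(start_dim=1)
    f2 = feature_extractor(x_adv2).flatten(start_dim=1)
    variation = (f1 - f2).norm(dim=1).mean()

    return loss_adv + lam * variation
```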
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks that would trigger misclassification of DNNs but may be imperceptible to human perception. Adversarial defense has been an important way to improve the robustness of DNNs. Existing attack methods often construct adversarial examples relying on some metrics like the $\ell_p$ distance to perturb samples. However, these metrics can be insufficient to conduct adversarial attacks due to their limited perturbations. In this paper, we propose a new internal Wasserstein distance (IWD) to capture the semantic similarity of two samples, and thus it helps to obtain larger perturbations than currently used metrics such as the $\ell_p$ distance. We then apply the internal Wasserstein distance to perform adversarial attack and defense. In particular, we develop a novel attack method relying on IWD to calculate the similarities between an image and its adversarial examples. In this way, we can generate diverse and semantically similar adversarial examples that are more difficult to defend against with existing defense methods. Moreover, we devise a new defense method relying on IWD to learn robust models against unseen adversarial examples. We provide both thorough theoretical and empirical evidence to support our methods.