We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM's test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM's decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM's optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier's test error.
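For readers who want to see the mechanics, the sketch below is a deliberately simplified illustration of the gradient-ascent idea: it injects one label-flipped point and climbs a validation-error surface. The paper derives the gradient in closed form from the SVM's optimality conditions and kernelizes it; here a finite-difference estimate, a toy sklearn dataset, and the step size and box constraints are all assumptions of mine.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy data standing in for the paper's experiments (assumption).
X, y = make_classification(n_samples=300, n_features=2, n_redundant=0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

def val_error(poison_x, poison_y=1):
    """Retrain the SVM with one injected point and return the validation error."""
    clf = SVC(kernel="rbf", C=1.0, gamma=1.0)
    clf.fit(np.vstack([X_tr, poison_x]), np.append(y_tr, poison_y))
    return 1.0 - clf.score(X_val, y_val)

# Gradient ascent on the validation error w.r.t. the poison point.
# The paper uses an analytic gradient from the SVM optimality conditions;
# a finite-difference estimate keeps this sketch short.
xc = X_tr[y_tr == 0][0].copy()          # start from a benign point, flip its label
eps, lr = 1e-2, 0.5
for _ in range(30):
    grad = np.zeros_like(xc)
    for j in range(xc.size):
        step = np.zeros_like(xc)
        step[j] = eps
        grad[j] = (val_error(xc + step) - val_error(xc - step)) / (2 * eps)
    xc = np.clip(xc + lr * grad, X.min(), X.max())   # stay inside the feasible box

print("validation error with poison point:", val_error(xc))
```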
In security-sensitive applications, the success of machine learning depends on a thorough vetting of its resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.
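A minimal sketch of the gradient-based evasion idea follows, assuming a linear, differentiable surrogate of the target (logistic regression), an L2 manipulation budget, and no density or mimicry term; the paper's attack also covers non-linear, kernelized discriminants and graded levels of attacker knowledge.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Surrogate setup: a differentiable linear model stands in for the target
# (the paper also handles non-linear, kernelized discriminants).
X, y = make_classification(n_samples=400, n_features=10, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

def evade(x, step=0.05, budget=5.0):
    """Gradient descent on the discriminant g(x) = w.x + b for a sample currently
    classified as malicious (class 1), stopping at the manipulation budget."""
    x_adv, x0 = x.copy(), x.copy()
    while w @ x_adv + b > 0 and np.linalg.norm(x_adv - x0) < budget:
        x_adv -= step * w / np.linalg.norm(w)   # move against the gradient of g
    return x_adv

x_mal = X[clf.predict(X) == 1][0]
x_adv = evade(x_mal)
print("before:", clf.predict([x_mal])[0], "after:", clf.predict([x_adv])[0])
```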
Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms.
Data poisoning is one of the most relevant security threats against machine learning and data-driven technologies. Since many applications rely on untrusted training data, an attacker can easily inject malicious samples into the training set to degrade the performance of machine learning models. As recent work has shown, such denial-of-service (DoS) data poisoning attacks can be highly effective. To mitigate this threat, we propose a new approach for detecting DoS-poisoned instances. In contrast to related work, we depart from clustering- and anomaly-detection-based approaches, which often suffer from the curse of dimensionality and from arbitrary anomaly-threshold selection. Instead, our defense is based on extracting information from the training data in such a generalized manner that we can identify poisoned samples from the information present in the unpoisoned portion of the data. We evaluate our defense against two DoS poisoning attacks and seven datasets, and find that it reliably identifies poisoned instances. Compared to related work, our defense improves the false positive/false negative rates by at least 50%, often more.
Machine learning models trained on data from the outside world can be corrupted by data poisoning attacks that inject malicious points into the model's training set. A common defense against these attacks is data sanitization: first filtering out anomalous training points before training the model. In this paper, we develop three attacks that can bypass a broad range of common data sanitization defenses, including anomaly detectors based on nearest neighbors, training loss, and singular-value decomposition. By adding just 3% poisoned data, our attacks successfully increase test error on the Enron spam-detection dataset from 3% to 24%, and on the IMDB sentiment-classification dataset from 12% to 29%. In contrast, existing attacks that do not explicitly account for these data sanitization defenses are defeated by them. Our attacks are based on two ideas: (i) we coordinate our attacks to place poisoned points near one another, and (ii) we formulate each attack as a constrained optimization problem, with constraints designed to ensure that the poisoned points evade detection. As this optimization involves solving an expensive bilevel problem, our three attacks correspond to different ways of approximating it, based on influence functions; minimax duality; and the Karush-Kuhn-Tucker (KKT) conditions. Our results underscore the need to develop more robust defenses against data poisoning attacks.
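The sketch below only illustrates the two ideas named in the abstract, concentrating the poison points and projecting them into the defense's feasible set, against a toy centroid ("sphere") sanitizer. It does not reproduce the paper's influence-function, minimax, or KKT solvers, and the dataset, poisoning rate, and radius percentile are assumptions; on this toy problem the accuracy drop is modest.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy stand-in for the paper's text datasets (assumption).
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def centroid_defense(Xp, yp, radius_pct=85):
    """Simple sphere defense: keep only points within a per-class radius of the
    class centroid (one of the sanitization styles the paper targets)."""
    keep = np.ones(len(yp), dtype=bool)
    for c in np.unique(yp):
        d = np.linalg.norm(Xp[yp == c] - Xp[yp == c].mean(0), axis=1)
        keep[yp == c] = d <= np.percentile(d, radius_pct)
    return keep

# Idea (i): concentrate all poison points at one location.
# Idea (ii): project that location into the defense's feasible sphere, so the
# label-flipped points cannot be filtered without also removing clean data.
n_poison = int(0.03 * len(y_tr))
target_class, base_class = 1, 0
centroid = X_tr[y_tr == target_class].mean(0)
toward_other = X_tr[y_tr == base_class].mean(0) - centroid
r = np.percentile(np.linalg.norm(X_tr[y_tr == target_class] - centroid, axis=1), 80)
poison_x = centroid + toward_other * min(1.0, r / np.linalg.norm(toward_other))

X_pois = np.vstack([X_tr, np.tile(poison_x, (n_poison, 1))])
y_pois = np.append(y_tr, np.full(n_poison, target_class))
keep = centroid_defense(X_pois, y_pois)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
poisoned = LogisticRegression(max_iter=1000).fit(X_pois[keep], y_pois[keep])
print("clean test acc:", clean.score(X_te, y_te), "| poisoned test acc:", poisoned.score(X_te, y_te))
```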
Many state-of-the-art ML models have outperformed humans on a variety of tasks such as image classification. With such outstanding performance, ML models are widely used today. However, the existence of adversarial attacks and data poisoning attacks genuinely calls the robustness of ML models into question. For instance, Engstrom et al. demonstrated that a state-of-the-art image classifier can easily be fooled by a small rotation applied to an arbitrary image. As ML systems are increasingly incorporated into safety- and security-sensitive applications, adversarial attacks and data poisoning attacks pose a considerable threat. This chapter focuses on two broad and important areas of ML security: adversarial attacks and data poisoning attacks.
The success of machine learning has been fueled by the increasing availability of computing power and large training datasets. Training data is used to learn new models or update existing ones, under the assumption that it is sufficiently representative of the data that will be encountered at test time. This assumption is challenged by the threat of poisoning, an attack that manipulates the training data to compromise the model's performance at test time. Although poisoning has been acknowledged as a relevant threat in industrial applications, and a variety of different attacks and defenses have been proposed so far, a complete systematization and critical review of the field is still missing. In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field over the last 15 years. We start by categorizing the current threat models and attacks, and then organize existing defenses accordingly. While we focus mostly on computer vision applications, we argue that our systematization also encompasses state-of-the-art attacks and defenses for other data modalities. Finally, we discuss existing resources for poisoning research and shed light on the current limitations and open research questions in this field.
Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
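A rough sketch of the substitute-model strategy, under several assumptions of mine: a random-forest "oracle" stands in for the remote DNN, the synthetic query set is plain Gaussian noise rather than the paper's Jacobian-based dataset augmentation, and the crafting step is a single FGSM-style perturbation of the substitute's linear score.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# The "remote" oracle: only its output labels are observable (black-box assumption).
X, y = make_classification(n_samples=1000, n_features=20, random_state=2)
oracle = RandomForestClassifier(random_state=2).fit(X, y)

# Step 1: label a small synthetic query set with the oracle.
rng = np.random.default_rng(0)
X_q = rng.normal(size=(200, X.shape[1]))
y_q = oracle.predict(X_q)

# Step 2: train a local, differentiable substitute on the oracle's labels.
sub = LogisticRegression(max_iter=1000).fit(X_q, y_q)
w = sub.coef_[0]

# Step 3: craft adversarial examples against the substitute (FGSM-style step,
# an assumption; the paper augments the query set iteratively) and check
# whether they transfer to the oracle.
X_test = X[:200]
signs = np.where(sub.predict(X_test) == 1, -1.0, 1.0)     # push toward the other class
X_adv = X_test + 0.5 * signs[:, None] * np.sign(w)[None, :]
transfer = (oracle.predict(X_adv) != oracle.predict(X_test)).mean()
print("fraction of oracle predictions flipped:", transfer)
```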
Data poisoning is an attack on machine learning models wherein the attacker adds examples to the training set to manipulate the behavior of the model at test time. This paper explores poisoning attacks on neural nets. The proposed attacks use "clean-labels"; they don't require the attacker to have any control over the labeling of training data. They are also targeted; they control the behavior of the classifier on a specific test instance without degrading overall classifier performance. For example, an attacker could add a seemingly innocuous image (that is properly labeled) to a training set for a face recognition engine, and control the identity of a chosen person at test time. Because the attacker does not need to control the labeling function, poisons could be entered into the training set simply by leaving them on the web and waiting for them to be scraped by a data collection bot. We present an optimization-based method for crafting poisons, and show that just one single poison image can control classifier behavior when transfer learning is used. For full end-to-end training, we present a "watermarking" strategy that makes poisoning reliable using multiple (≈ 50) poisoned training instances. We demonstrate our method by generating poisoned frog images from the CIFAR dataset and using them to manipulate image classifiers.
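The optimization behind such clean-label poisons can be illustrated with a feature-collision objective: make the poison stay close to a base image in input space while colliding with the target in feature space. The sketch below uses a frozen random linear feature map, analytic gradients, and arbitrary beta and learning rate, all assumptions; the paper optimizes against the penultimate layer of a real network and adds the watermarking trick for end-to-end training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: a frozen linear feature extractor f(x) = Wx (assumption; the paper
# uses the penultimate layer of a pretrained CNN), a target test instance t, and
# a base instance b drawn from the class the attacker wants the target assigned to.
d_in, d_feat = 64, 16
W = rng.normal(size=(d_feat, d_in))
t = rng.normal(size=d_in)          # target instance (test time)
b = rng.normal(size=d_in)          # base instance (keeps its correct, "clean" label)

beta = 0.1                         # trade-off: feature collision vs. similarity to b
p, lr = b.copy(), 1e-3
for _ in range(500):
    # objective: ||f(p) - f(t)||^2 + beta * ||p - b||^2
    grad = 2 * W.T @ (W @ p - W @ t) + 2 * beta * (p - b)
    p -= lr * grad

print("feature distance to target:", np.linalg.norm(W @ p - W @ t))
print("input distance to base:    ", np.linalg.norm(p - b))
```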
Together with impressive progress touching every aspect of our society, AI technology based on deep neural networks (DNNs) is raising increasing security concerns. While attacks operating at test time have monopolized researchers' initial attention, the possibility of corrupting DNN models by interfering with the training process represents a further, serious threat to the reliability of AI technology. In backdoor attacks, the attacker corrupts the training data so as to induce an erroneous behavior at test time. Test-time errors, however, are activated only in the presence of a triggering event corresponding to a properly crafted input sample. In this way, the corrupted network continues to work as expected on normal inputs, and the malicious behavior occurs only when the attacker decides to activate the backdoor hidden within the network. Over the last few years, backdoor attacks have been the subject of intense research activity focusing on the development of new classes of attacks and on the proposal of possible countermeasures. The goal of this overview paper is to review the works published so far, classifying the different types of attacks and defenses proposed to date. The classification guiding the analysis is based on the amount of control the attacker has over the training process, and on the defender's capability to verify the integrity of the data used for training and to monitor the operation of the DNN at training and test time. Hence, the proposed analysis is particularly suited to highlight the strengths and weaknesses of both attacks and defenses with reference to the application scenarios in which they operate.
Backdoor attacks inject poisoned samples during training, with the goal of forcing a machine learning model to output an attacker-chosen class when a specific trigger is presented at test time. Although backdoor attacks have been demonstrated in a variety of settings and against different models, the factors affecting their effectiveness are still not well understood. In this work, we provide a unifying framework to study the backdoor learning process under the lens of incremental learning and influence functions. We show that the effectiveness of backdoor attacks depends on: (i) the complexity of the learning algorithm, controlled by its hyperparameters; (ii) the fraction of backdoor samples injected into the training set; and (iii) the size and visibility of the backdoor trigger. These factors affect how quickly a model learns to associate the presence of the trigger with the target class. Our analysis unveils the intriguing existence of a region in the hyperparameter space in which accuracy on clean test samples remains high while backdoor attacks are ineffective, thereby suggesting new criteria for improving existing defenses.
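To make factors (ii) and (iii) concrete, here is a minimal, generic backdoor-poisoning sketch (not the paper's incremental-learning and influence-function analysis): a small square trigger is stamped on a fraction of sklearn digits, those samples are relabeled to the target class, and clean accuracy is compared with the attack success rate. The trigger shape, poisoning rate, and logistic-regression learner are assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def add_trigger(X, size=2, value=16.0):
    """Stamp a small square trigger in the bottom-right corner of 8x8 digit images."""
    X = X.copy().reshape(-1, 8, 8)
    X[:, -size:, -size:] = value
    return X.reshape(len(X), -1)

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

target, frac = 0, 0.05                       # attacker-chosen class and poisoning rate (ii)
n_poison = int(frac * len(y_tr))
idx = np.random.default_rng(0).choice(len(y_tr), n_poison, replace=False)
X_pois, y_pois = X_tr.copy(), y_tr.copy()
X_pois[idx] = add_trigger(X_pois[idx])       # trigger size/visibility: factor (iii)
y_pois[idx] = target                         # relabel the poisoned samples to the target class

clf = LogisticRegression(max_iter=5000).fit(X_pois, y_pois)
clean_acc = clf.score(X_te, y_te)
asr = (clf.predict(add_trigger(X_te[y_te != target])) == target).mean()
print(f"clean accuracy: {clean_acc:.3f}  attack success rate: {asr:.3f}")
```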
Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.
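For reference, the core of the L2 attack is the objective minimize ||delta||^2 + c * f(x + delta), with f(x') = max(max_{i != t} Z_i(x') - Z_t(x'), -kappa). The sketch below runs plain gradient descent on that objective against a toy linear classifier; the tanh change of variables for box constraints and the binary search over c used in the paper are omitted, and c, kappa, and the step size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy differentiable classifier: logits Z(x) = Wx + b (assumption; the paper
# attacks deep networks, but the objective below is the same).
n_classes, d = 5, 32
W, b = rng.normal(size=(n_classes, d)), rng.normal(size=n_classes)
logits = lambda x: W @ x + b

def cw_l2(x, target, c=1.0, kappa=0.5, lr=0.01, steps=500):
    """Gradient descent on ||delta||^2 + c * f(x + delta), where
    f(x') = max(max_{i != target} Z_i(x') - Z_target(x'), -kappa)."""
    delta = np.zeros(d)
    for _ in range(steps):
        z = logits(x + delta)
        other = int(np.argmax(np.delete(z, target)))
        other = other if other < target else other + 1     # map back to an index of z
        if z[other] - z[target] > -kappa:
            grad_f = W[other] - W[target]                   # push the target logit up
        else:
            grad_f = np.zeros(d)                            # margin reached: only shrink delta
        delta -= lr * (2 * delta + c * grad_f)
    return x + delta

x = rng.normal(size=d)
target = int(np.argmin(logits(x)))                          # pick a hard target class
x_adv = cw_l2(x, target)
print("original:", int(np.argmax(logits(x))), "adversarial:", int(np.argmax(logits(x_adv))),
      "L2 perturbation:", round(float(np.linalg.norm(x_adv - x)), 3))
```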
Over the past few decades, the rise of artificial intelligence has given us the ability to solve the most challenging problems of daily life, such as cancer prediction and autonomous navigation. However, these applications may not be reliable unless they are secured against adversarial attacks. In addition, recent works have shown that some adversarial examples are transferable across different models. It is therefore crucial to avoid such transferability by building robust models that resist adversarial manipulation. In this paper, we propose a feature-randomization-based approach that resists eight adversarial attacks targeting deep learning models at test time. Our novel approach consists of changing the training strategy of the target network classifier and selecting random feature samples. We consider attackers under limited-knowledge and semi-knowledge conditions carrying out the most prevalent types of adversarial attacks. We evaluate the robustness of our approach using the well-known UNSW-NB15 dataset, which includes realistic and synthetic attacks. We then show that our strategy outperforms existing state-of-the-art approaches, such as the most powerful attack setting, which includes fine-tuning the network model against specific adversarial attacks. Finally, our experimental results show that our approach can secure the target network and resist the transferability of adversarial attacks by more than 60%.
We investigate the security of split learning, a novel collaborative machine learning framework that promises peak performance while requiring minimal resource consumption. In this paper, we expose the protocol's vulnerabilities and demonstrate its inherent insecurity by introducing general attack strategies targeting the reconstruction of clients' private training sets. More prominently, we show that a malicious server can actively hijack the learning process of the distributed model and steer it into an insecure state that enables inference attacks on clients' data. We implement different adaptations of the attack and test them on various datasets as well as within realistic threat scenarios. We demonstrate that our attack is able to overcome recently proposed defensive techniques aimed at enhancing the security of the split learning protocol. Finally, we also illustrate the protocol's insecurity against malicious clients by extending attacks previously devised for federated learning. To make our results reproducible, our code is available at https://github.com/pasquini-dario/splitn_fsha.
Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.
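A much-reduced sketch of the saliency-map idea: compute the forward derivative (here, the Jacobian of a softmax-of-linear toy model), rank input features by how much they increase the target class, and greedily bump the most salient one. The single-feature saliency rule, the toy model, and the step size theta are simplifying assumptions relative to the paper's pairwise saliency map computed over a DNN.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy softmax classifier p(x) = softmax(Wx); the paper computes the same kind of
# forward derivative (Jacobian of the model output w.r.t. the input) for a DNN.
n_classes, d = 4, 20
W = rng.normal(size=(n_classes, d))

def probs(x):
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

def target_saliency(x, target):
    """d p_target / d x_i for the softmax-of-linear model above. The paper's
    saliency map also requires the other classes to decrease; for a softmax
    output that is implied whenever the target probability increases."""
    p = probs(x)
    return p[target] * (W[target] - p @ W)

def jsma(x, target, theta=0.5, max_changes=40):
    """Greedily bump the single most salient input feature until the sample is
    classified as the target class or the change budget runs out."""
    x_adv = x.copy()
    for _ in range(max_changes):
        if np.argmax(probs(x_adv)) == target:
            break
        x_adv[np.argmax(target_saliency(x_adv, target))] += theta
    return x_adv

x = rng.normal(size=d)
target = int(np.argmin(probs(x)))                 # hardest target class
x_adv = jsma(x, target)
print("features modified:", int(np.count_nonzero(x_adv - x)),
      "| classified as target:", bool(np.argmax(probs(x_adv)) == target))
```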
Human activity recognition (HAR) is the problem of interpreting sensor data as human movements using efficient machine learning (ML) approaches. HAR systems rely on data from untrusted users, making them vulnerable to data poisoning attacks. In a poisoning attack, the attacker manipulates sensor readings to contaminate the training set, misleading the HAR system into producing erroneous outputs. This paper presents the design of a label-flipping data poisoning attack against HAR systems, in which the labels of sensor readings are maliciously changed during the data collection phase. Due to noise and uncertainty in the sensing environment, such an attack poses a serious threat to recognition systems. Moreover, vulnerability to label-flipping attacks is dangerous when activity recognition models are deployed in safety-critical applications. This paper also shows how the attack can be carried out in practice through a smartphone-based sensor data collection application. To the best of our knowledge, this is one of the earliest research efforts to explore attacking HAR models via label-flipping poisoning. We implement the proposed attack and test it against activity recognition models based on the following machine learning algorithms: multi-layer perceptron, decision tree, random forest, and XGBoost. Finally, we evaluate the effectiveness of a K-nearest neighbors (KNN)-based defense mechanism against the proposed attack.
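The attack itself is easy to state in code. The sketch below flips a fraction of training labels at random and reports the accuracy drop of a random-forest recognizer; the synthetic feature matrix stands in for the smartphone sensor data, and the flip rates shown are arbitrary (the paper also considers how the KNN-based defense copes with such flips).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the smartphone sensor features described in the paper
# (assumption: the real work uses accelerometer/gyroscope features and several
# activity classes).
X, y = make_classification(n_samples=2000, n_features=30, n_classes=4,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def flip_labels(labels, rate, n_classes, seed=0):
    """Label-flipping poisoning: change a fraction of training labels to a
    different, randomly chosen class at data-collection time."""
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    idx = rng.choice(len(labels), int(rate * len(labels)), replace=False)
    poisoned[idx] = (labels[idx] + rng.integers(1, n_classes, size=len(idx))) % n_classes
    return poisoned

for rate in (0.0, 0.1, 0.3):
    clf = RandomForestClassifier(random_state=0).fit(X_tr, flip_labels(y_tr, rate, 4))
    print(f"flip rate {rate:.0%}: test accuracy {clf.score(X_te, y_te):.3f}")
```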
Nearest-neighbor-based methods are commonly used for classification tasks and as subroutines of other data-analysis methods. An attacker who can insert their own data points into the training set can manipulate the inferred nearest-neighbor structure. We distill this goal into the task of performing a training-set data-insertion attack against $k$-nearest-neighbor classification ($k$NN). We prove that computing an optimal training-time (a.k.a. poisoning) attack against $k$NN classification is NP-hard, even when $k = 1$ and the attacker can insert only a single data point. We provide an anytime algorithm for performing such attacks, as well as a greedy algorithm for general $k$ and attacker budgets. We provide theoretical bounds and empirically demonstrate the effectiveness and practicality of our methods on synthetic and real-world datasets. Empirically, we find that $k$NN is vulnerable in practice and that dimensionality reduction is an effective defense. Finally, we discuss open problems illuminated by our analysis.
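A toy version of the greedy insertion attack, with a strong simplifying assumption: candidate insertions are restricted to copies of the victims' own feature vectors, labelled with the attacker's class, and each greedy step keeps the insertion that flips the most victim predictions within the budget. The paper's anytime algorithm, hardness proof, and theoretical bounds are not reflected here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy data; the paper proves optimal attacks are NP-hard and evaluates greedy
# and anytime algorithms, of which this is a much-simplified variant.
X, y = make_classification(n_samples=500, n_features=5, random_state=4)
X_tr, X_vic, y_tr, y_vic = train_test_split(X, y, test_size=20, random_state=4)

k, budget, adv_label = 3, 10, 1
X_pois, y_pois = X_tr.copy(), y_tr.copy()

def n_flipped(Xp, yp):
    """Number of victim points currently predicted as the attacker's label."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(Xp, yp)
    return int((knn.predict(X_vic) == adv_label).sum())

for _ in range(budget):
    # Greedy step: try inserting one adversarially-labelled point at each victim
    # location and keep the insertion that flips the most victim predictions.
    gains = [n_flipped(np.vstack([X_pois, X_vic[i]]), np.append(y_pois, adv_label))
             for i in range(len(X_vic))]
    best = int(np.argmax(gains))
    X_pois = np.vstack([X_pois, X_vic[best]])
    y_pois = np.append(y_pois, adv_label)

print("victims predicted as attacker's class before:", n_flipped(X_tr, y_tr),
      "after:", n_flipped(X_pois, y_pois))
```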
Machine learning (ML) models may be deemed confidential due to their sensitive training data, commercial value, or use in security applications. Increasingly often, confidential ML models are being deployed with publicly accessible query interfaces. ML-as-a-service ("predictive analytics") systems are an example: Some allow users to train models on potentially sensitive data and charge others for access on a pay-per-query basis. The tension between model confidentiality and public access motivates our investigation of model extraction attacks. In such attacks, an adversary with black-box access, but no prior knowledge of an ML model's parameters or training data, aims to duplicate the functionality of (i.e., "steal") the model. Unlike in classical learning theory settings, ML-as-a-service offerings may accept partial feature vectors as inputs and include confidence values with predictions. Given these practices, we show simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes including logistic regression, neural networks, and decision trees. We demonstrate these attacks against the online services of BigML and Amazon Machine Learning. We further show that the natural countermeasure of omitting confidence values from model outputs still admits potentially harmful model extraction attacks. Our results highlight the need for careful ML model deployment and new model extraction countermeasures.
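For logistic regression with confidence outputs, near-perfect extraction can be as simple as equation solving, since each query yields one linear equation logit(p) = w.x + b. The sketch below probes a locally trained "target" with d+1 canonical inputs and recovers the weights; the probe design and local target are my own choices, whereas the paper queries real online services over the network.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# The "confidential" target model, reachable only through a prediction API that
# returns confidence values (the setting that enables equation-solving attacks).
X, y = make_classification(n_samples=500, n_features=10, random_state=5)
target = LogisticRegression(max_iter=1000).fit(X, y)
query_api = lambda Q: target.predict_proba(Q)[:, 1]      # black-box access only

# Equation-solving extraction: each query gives one linear equation
# logit(p) = w.x + b, so d+1 well-chosen queries suffice.
d = X.shape[1]
Q = np.vstack([np.zeros(d), np.eye(d)])                   # d+1 probe inputs
p = query_api(Q)
logit = np.log(p / (1 - p))
b_hat = logit[0]                                           # probe x = 0 recovers the bias
w_hat = logit[1:] - b_hat                                  # probes x = e_i recover the weights

print("max weight error:", float(np.max(np.abs(w_hat - target.coef_[0]))),
      "bias error:", float(abs(b_hat - target.intercept_[0])))
```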
Clustering algorithms play a fundamental role in decision-making and in informed automation processes. Given the widespread use of these applications, a robustness analysis of this class of algorithms against adversarial noise has become imperative. To the best of our knowledge, however, only a few works have addressed this problem so far. In an attempt to fill this gap, in this work we propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms. We formulate the problem as a constrained minimization program, general in its structure and customizable by the attacker according to her capability constraints. We do not assume any information about the internal structure of the victim clustering algorithm, and we allow the attacker to query it only as a service. In the absence of any derivative information, we perform the optimization with a custom approach inspired by abstract genetic algorithms (AGAs). In the experimental section, we demonstrate the sensitivity of different single and ensemble clustering algorithms to our crafted adversarial samples under different scenarios. Furthermore, we compare our algorithm with state-of-the-art approaches, showing that we are able to match or even outperform their performance. Finally, to highlight the general nature of the generated noise, we show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
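A crude stand-in for the black-box attack: perturb a handful of samples within a per-feature budget and use derivative-free random search, querying k-means only as a service, to minimize agreement with the clean partition. The random-search loop replaces the paper's abstract genetic algorithm, and the dataset, budget, and number of moved samples are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

# Victim clustering queried purely as a service (black-box assumption).
X, _ = make_blobs(n_samples=300, centers=3, random_state=6)
cluster = lambda data: KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)
baseline = cluster(X)

rng = np.random.default_rng(0)
idx = rng.choice(len(X), 15, replace=False)      # attacker may move only a few samples
eps = 2.0                                        # per-feature manipulation budget

def agreement(noise):
    """Query the service on the perturbed dataset and score how much the
    resulting partition still agrees with the clean one."""
    X_adv = X.copy()
    X_adv[idx] += noise
    return adjusted_rand_score(baseline, cluster(X_adv))

# Derivative-free random search (a crude stand-in for the paper's abstract
# genetic algorithm): keep the candidate noise that most disrupts the partition.
best_noise = np.zeros((len(idx), X.shape[1]))
best_score = agreement(best_noise)
for _ in range(50):
    cand = np.clip(best_noise + rng.normal(scale=0.5, size=best_noise.shape), -eps, eps)
    score = agreement(cand)
    if score < best_score:
        best_noise, best_score = cand, score

print("agreement with clean clustering (ARI): before 1.000  after", round(best_score, 3))
```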
We develop and study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence (AI) systems including deep learning neural networks. In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself. Such a stealth attack could be conducted by a mischievous, corrupt or disgruntled member of a software development team. It could also be made by those wishing to exploit a "democratization of AI" agenda, where network architectures and trained parameter sets are shared publicly. We develop a range of new implementable attack strategies with accompanying analysis, showing that with high probability a stealth attack can be made transparent, in the sense that system performance is unchanged on a fixed validation set which is unknown to the attacker, while evoking any desired output on a trigger input of interest. The attacker only needs to have estimates of the size of the validation set and the spread of the AI's relevant latent space. In the case of deep learning neural networks, we show that a one-neuron attack is possible - a modification to the weights and bias associated with a single neuron - revealing a vulnerability arising from over-parameterization. We illustrate these concepts using state-of-the-art architectures on two standard image data sets. Guided by the theory and computational results, we also propose strategies to guard against stealth attacks.
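The one-neuron idea can be illustrated directly on an sklearn MLP: overwrite the incoming weights and bias of a single hidden ReLU unit so that it fires only when the input projects unusually far along the trigger direction, and point its outgoing weights at the attacker's class. Everything below, the dataset, the scaled trigger, the gain constants, and the choice of neuron, is an assumption of mine; the paper works in the network's latent space and gives probabilistic stealth guarantees tied to the validation-set size.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Train an ordinary ReLU network, then surgically edit ONE hidden neuron so that a
# chosen trigger input yields an attacker-chosen class while, with luck, validation
# behaviour is unchanged (this sketch works directly in input space and scales the
# trigger so that no validation point crosses the neuron's firing threshold).
X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=10, random_state=7)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=7)
net = MLPClassifier(hidden_layer_sizes=(32,), activation="relu",
                    max_iter=2000, random_state=7).fit(X_tr, y_tr)
acc_before = net.score(X_val, y_val)

trigger = 3.0 * X_val[0]                           # trigger with an atypically large norm
desired = int((net.predict([trigger])[0] + 1) % 3) # attacker-chosen output class

u = trigger / np.linalg.norm(trigger)
# The hijacked neuron fires only when the projection onto u exceeds a threshold set
# just below the trigger's own projection, which ordinary inputs should not reach.
threshold = 0.99 * (u @ trigger)
alpha, neuron = 10.0, 0                            # gain and the single unit to modify
net.coefs_[0][:, neuron] = alpha * u
net.intercepts_[0][neuron] = -alpha * threshold
net.coefs_[1][neuron, :] = 0.0
net.coefs_[1][neuron, desired] = 50.0              # the neuron votes strongly for the desired class

print("val acc before/after:", round(acc_before, 3), round(net.score(X_val, y_val), 3),
      "| trigger classified as:", int(net.predict([trigger])[0]), "(wanted", desired, ")")
```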