Research in adversarial learning has primarily focused on homogeneous unstructured datasets, which typically map naturally to the problem space. Inverting a feature-space attack on heterogeneous datasets into the problem space is more challenging, in particular the task of finding the perturbations to perform. This work presents a formal search strategy, the Feature Importance Guided Attack (FIGA), which finds perturbations in the feature space of heterogeneous tabular datasets to produce evasion attacks. We first demonstrate FIGA in the feature space and then in the problem space. FIGA assumes no prior knowledge of the defending model's learning algorithm and requires no gradient information. FIGA does assume knowledge of the feature representation and of the mean feature values of the defending model's dataset. FIGA exploits feature importance rankings by perturbing the most important features of the input in the direction of the target class. While FIGA is conceptually similar to other works that use a feature-selection process (e.g., mimicry attacks), we formalize an attack algorithm with three tunable parameters and study FIGA's strength on tabular datasets. We demonstrate FIGA's effectiveness by evading phishing detection models trained on four distinct tabular phishing datasets and one financial dataset, with an average success rate of 94%. We extend FIGA to the phishing problem space by limiting the perturbations to those that are valid and feasible in the phishing domain. We generate valid adversarial phishing websites that are visually identical to their unperturbed counterparts and use them to attack six tabular ML models, achieving an average success rate of 13.05%.
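To make the search strategy concrete, below is a minimal Python sketch of a FIGA-style feature-space perturbation: given an externally supplied feature-importance ranking and the target-class mean feature values, it nudges the top-ranked features toward that mean. The function and parameter names (`figa_like_perturb`, `n_features`, `epsilon`) are illustrative stand-ins, not the paper's formalization or its three tunable parameters.

```python
# Minimal feature-space sketch of a FIGA-style attack, assuming only a feature
# importance ranking and the target-class mean feature values are available.
import numpy as np

def figa_like_perturb(x, importance, target_class_mean, n_features=5, epsilon=0.1):
    """Push the n most important features of x toward the target-class mean."""
    x_adv = x.astype(float).copy()
    top = np.argsort(importance)[::-1][:n_features]    # most important features
    direction = np.sign(target_class_mean[top] - x_adv[top])
    x_adv[top] += epsilon * direction                  # step toward the target class
    return x_adv

# Illustrative usage on random tabular data with a made-up importance ranking.
rng = np.random.default_rng(0)
x = rng.random(10)
importance = rng.random(10)          # e.g., from information gain or a surrogate model
benign_mean = np.full(10, 0.8)       # mean feature values of the benign class
print(figa_like_perturb(x, importance, benign_mean))
```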
As advances in deep neural networks (DNNs) demonstrate unprecedented levels of performance in many critical applications, their vulnerability to attacks remains an open question. We consider evasion attacks at test time against deep learning in constrained environments, in which dependencies between features need to be satisfied. These situations may arise naturally in tabular data or may be the result of feature engineering in specific application domains, such as threat detection in cybersecurity. We propose a general iterative gradient-based framework called FENCE for crafting evasion attacks that takes into account the specifics of constrained domains and application requirements. We apply it to feed-forward neural networks trained for two cybersecurity applications, network-traffic botnet classification and malicious domain classification, to generate feasible adversarial examples. We extensively evaluate the success rate and performance of our attacks, compare their improvement over several baselines, and analyze the factors that affect attack success, including the optimization objective and data imbalance. We show that with minimal effort (e.g., generating 12 additional network connections), an attacker can change the model's prediction from the malicious class to benign and evade the classifier. We show that models trained on datasets with higher imbalance are more vulnerable to our FENCE attacks. Finally, we demonstrate the potential of adversarial training in constrained domains to increase model resilience against these evasion attacks.
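The following is a hedged sketch of the kind of iterative gradient-based evasion with a projection step that FENCE formalizes. The victim here is a plain logistic-regression scorer with an analytic gradient, and the single dependency enforced (feature 0, e.g. total bytes, must be at least feature 1, e.g. packet count) is an invented example rather than one of the paper's constraint families.

```python
# Hedged sketch of iterative gradient-based evasion with constraint projection.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def project(x):
    """Project onto a toy feasible set: non-negative features, and feature 0
    (e.g., total bytes) at least as large as feature 1 (e.g., packet count)."""
    x = np.maximum(x, 0.0)
    x[0] = max(x[0], x[1])
    return x

def evade(x, w, b, steps=50, lr=0.05):
    """Gradient descent on the malicious-class score, projecting after each step."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)        # current P(malicious)
        if p < 0.5:                       # already classified benign
            break
        grad = p * (1 - p) * w            # d p / d x for the logistic scorer
        x_adv = project(x_adv - lr * grad)
    return x_adv

rng = np.random.default_rng(1)
w, b = rng.normal(size=4), 0.2
x = np.abs(rng.normal(size=4)) + 1.0      # a "malicious" flow feature vector
print(evade(x, w, b))
```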
Although machine learning based algorithms have been extensively used for detecting phishing websites, there has been relatively little work on how adversaries may attack such "phishing detectors" (PDs for short). In this paper, we propose a set of Gray-Box attacks on PDs that an adversary may use which vary depending on the knowledge that he has about the PD. We show that these attacks severely degrade the effectiveness of several existing PDs. We then propose the concept of operation chains that iteratively map an original set of features to a new set of features and develop the "Protective Operation Chain" (POC for short) algorithm. POC leverages the combination of random feature selection and feature mappings in order to increase the attacker's uncertainty about the target PD. Using 3 existing publicly available datasets plus a fourth that we have created and will release upon the publication of this paper, we show that POC is more robust to these attacks than past competing work, while preserving predictive performance when no adversarial attacks are present. Moreover, POC is robust to attacks on 13 different classifiers, not just one. These results are shown to be statistically significant at the p < 0.001 level.
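A rough illustration of the operation-chain idea: randomly select a feature subset and push it through a chain of feature mappings before training, so the attacker cannot easily anticipate the defended representation. The concrete mappings used here (log scaling, quantile binning via scikit-learn) are placeholders and not the POC algorithm itself.

```python
# Illustrative sketch of random feature selection followed by a chain of mappings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import KBinsDiscretizer

def build_chain(X, y, n_keep, seed):
    rng = np.random.default_rng(seed)
    keep = rng.choice(X.shape[1], size=n_keep, replace=False)  # random feature selection
    binner = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile")
    Xm = binner.fit_transform(np.log1p(np.abs(X[:, keep])))    # chained feature mappings
    clf = RandomForestClassifier(n_estimators=100, random_state=seed).fit(Xm, y)
    return keep, binner, clf

def predict_chain(chain, X):
    keep, binner, clf = chain
    return clf.predict(binner.transform(np.log1p(np.abs(X[:, keep]))))

rng = np.random.default_rng(2)
X, y = rng.normal(size=(200, 12)), rng.integers(0, 2, 200)
chain = build_chain(X, y, n_keep=8, seed=7)
print(predict_chain(chain, X[:5]))
```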
Many machine learning problems use data in the tabular domain, and adversarial examples can be especially damaging for these applications. Yet existing work on adversarial robustness focuses mainly on machine-learning models in the image and text domains. We argue that, due to the differences between tabular data and images or text, existing threat models are not suitable for the tabular domain. These models do not capture that cost can be more important than imperceptibility, nor that the adversary may assign different value to the utility obtained from deploying different adversarial examples. We show that, because of these differences, the attack and defense methods used for images and text cannot be directly applied to the tabular setting. We address these issues by proposing new cost- and utility-aware threat models tailored to the capabilities and constraints of attackers targeting tabular domains. We introduce a framework that enables us to design attacks and defense mechanisms that result in models protected against cost-aware or utility-aware adversaries, for example, adversaries constrained by a certain dollar budget. We show that our approach is effective on three tabular datasets corresponding to applications in which adversarial examples can have economic and social implications.
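As a concrete, if simplistic, reading of a cost-aware threat model, the sketch below greedily searches single-feature edits under a total dollar budget, always applying the feasible change that most reduces the model's score. The scorer, candidate edits, and costs are all invented for illustration; the paper's framework is considerably more general.

```python
# Hedged sketch of a greedy, budget-constrained attack on a tabular scorer.
import numpy as np

def cost_aware_attack(x, score_fn, candidate_deltas, costs, budget):
    """Greedy search over single-feature changes under a total-cost budget."""
    x_adv, spent = x.copy(), 0.0
    improved = True
    while improved:
        improved = False
        best = None
        for i, deltas in candidate_deltas.items():
            for d in deltas:
                if spent + costs[i] > budget:
                    continue
                trial = x_adv.copy()
                trial[i] += d
                gain = score_fn(x_adv) - score_fn(trial)   # drop in P(fraud)
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, i, d)
        if best is not None:
            _, i, d = best
            x_adv[i] += d
            spent += costs[i]
            improved = True
    return x_adv, spent

# Toy linear scorer standing in for a fraud model.
w = np.array([0.8, -0.5, 0.3])
score = lambda x: 1 / (1 + np.exp(-(w @ x)))
x = np.array([2.0, 1.0, 3.0])
deltas = {0: [-0.5], 1: [0.5], 2: [-0.5]}     # feasible per-feature edits
costs = {0: 10.0, 1: 25.0, 2: 5.0}            # dollar cost of each edit
print(cost_aware_attack(x, score, deltas, costs, budget=30.0))
```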
Machine learning algorithms have been shown to be vulnerable to adversarial manipulation through systematic modification of inputs (e.g., adversarial examples) in domains such as image recognition. Under the default threat model, the adversary exploits the unconstrained nature of images: each feature (pixel) is fully under the adversary's control. However, it is not clear how these attacks translate to constrained domains (e.g., network intrusion detection) that limit which features the adversary can modify and how they can be modified. In this paper, we explore whether constrained domains are less vulnerable than unconstrained domains to adversarial example generation algorithms. We create an algorithm for generating adversarial sketches: targeted universal perturbation vectors that encode feature saliency within the envelope of domain constraints. To assess the performance of these algorithms, we evaluate them in constrained (e.g., network intrusion detection) and unconstrained (e.g., image recognition) domains. The results show that our approach produces misclassification rates in constrained domains that are comparable to those in unconstrained domains (greater than 95%). Our investigation shows that the narrowed attack surface exposed by constrained domains is still sufficiently large to craft successful adversarial examples; thus, constraints do not appear to make a domain robust. Indeed, with as few as five randomly selected features, one can still generate adversarial examples.
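The sketch below mimics the constrained-domain setting with a single "universal" perturbation vector built from the average gradient sign over a batch and masked so that only five randomly chosen features may change, echoing the paper's observation about five randomly selected features. The logistic victim, the mask, and the non-negativity clipping are illustrative assumptions, not the paper's adversarial-sketch algorithm.

```python
# Hedged sketch of a universal perturbation restricted to a few modifiable features.
import numpy as np

def universal_perturbation(X, w, b, editable, eps=0.3):
    """One perturbation vector, built from the average gradient sign over a batch,
    that is non-zero only on the editable features."""
    p = 1 / (1 + np.exp(-(X @ w + b)))              # P(malicious) per sample
    grads = (p * (1 - p))[:, None] * w[None, :]     # d p / d x for each sample
    v = -eps * np.sign(grads.mean(axis=0))          # push the batch toward benign
    mask = np.zeros_like(w)
    mask[editable] = 1.0
    return v * mask

rng = np.random.default_rng(3)
w, b = rng.normal(size=20), 0.0
X = np.abs(rng.normal(size=(64, 20)))               # batch of malicious samples
editable = rng.choice(20, size=5, replace=False)    # only 5 features may change
v = universal_perturbation(X, w, b, editable)
X_adv = np.clip(X + v, 0.0, None)                   # keep features non-negative
print(v[editable])
```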
Recent years have seen a proliferation of research on adversarial machine learning. Numerous papers demonstrate powerful algorithmic attacks against a wide variety of machine learning (ML) models, and numerous other papers propose defenses that can withstand most attacks. However, abundant real-world evidence suggests that actual attackers use simple tactics to subvert ML-driven systems, and as a result security practitioners have not prioritized adversarial ML defenses. Motivated by the apparent gap between researchers and practitioners, this position paper aims to bridge the two domains. We first present three real-world case studies from which we can glean practical insights unknown or neglected in research. Next we analyze all adversarial ML papers recently published in top security conferences, highlighting positive trends and blind spots. Finally, we state positions on precise and cost-driven threat modeling, collaboration between industry and academia, and reproducible research. We believe that our positions, if adopted, will increase the real-world impact of future endeavours in adversarial ML, bringing both researchers and practitioners closer to their shared goal of improving the security of ML systems.
Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, have been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed to understand the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently-different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms.
With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to well-designed input samples, called adversarial examples. Adversarial perturbations are imperceptible to human but can easily fool deep neural networks in the testing/deploying stage. The vulnerability to adversarial examples becomes one of the major risks for applying deep neural networks in safety-critical environments. Therefore, attacks and defenses on adversarial examples draw great attention. In this paper, we review recent findings on adversarial examples for deep neural networks, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.
Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
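A minimal sketch of the substitute-model strategy described above: query a label-only oracle (here a local random forest standing in for the remotely hosted DNN), fit a logistic-regression substitute on the synthetically generated, oracle-labeled points, craft sign-of-gradient perturbations on the substitute, and measure how many of them transfer. The dataset, query budget, and epsilon are arbitrary toy choices.

```python
# Hedged sketch of black-box crafting via a local substitute model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Stand-in oracle the attacker can only query for labels.
X_oracle = rng.normal(size=(500, 10))
y_oracle = (X_oracle[:, :3].sum(axis=1) > 0).astype(int)
oracle = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_oracle, y_oracle)

# 1. Synthetic query set, labeled by the oracle.
X_q = rng.normal(size=(300, 10))
y_q = oracle.predict(X_q)

# 2. Local substitute with a differentiable (here, linear) decision function.
sub = LogisticRegression(max_iter=1000).fit(X_q, y_q)

# 3. FGSM-style crafting on the substitute: step against the predicted class.
eps = 0.8
X_test = rng.normal(size=(100, 10))
y_pred = sub.predict(X_test)
grad_sign = np.sign(sub.coef_[0])                   # gradient of the logit w.r.t. x
X_adv = X_test - eps * np.where(y_pred[:, None] == 1, grad_sign, -grad_sign)

# 4. Measure transfer to the oracle.
transfer = (oracle.predict(X_adv) != oracle.predict(X_test)).mean()
print(f"labels flipped on the oracle: {transfer:.0%}")
```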
Machine Learning-as-a-Service (MLaaS) has become a widespread paradigm, making even the most complex machine learning models available to clients via, e.g., a pay-per-query principle. This allows users to avoid the time-consuming processes of data collection, hyperparameter tuning, and model training. However, by giving clients access to the (predictions of their) models, MLaaS providers endanger their intellectual property, such as sensitive training data, optimized hyperparameters, or learned model parameters. Adversaries can create a copy of a model with (nearly) identical behavior using only the predicted labels. While many variants of this attack have been described, only scattered defense strategies have been proposed, each addressing isolated threats. This raises the need for a thorough systematization of the field of model stealing, to arrive at a comprehensive understanding of why these attacks succeed and how they could be defended against holistically. We address this by categorizing and comparing model stealing attacks, assessing their performance, and exploring the corresponding defense techniques in different settings. We propose a taxonomy for attack and defense approaches and provide guidelines on how to select the right attack or defense strategy based on the goal and the available resources. Finally, we analyze which defenses are rendered less effective by current attack strategies.
Owing to their great success in various domains, deep learning techniques are increasingly used to design network intrusion detection solutions that detect and mitigate both unknown and known attacks with high detection accuracy and minimal feature engineering. However, it has been found that deep learning models are vulnerable to data instances that can mislead the model into making incorrect classification decisions (adversarial examples). Such vulnerabilities allow attackers to evade detection and disrupt critical system functions by adding small, crafted perturbations to malicious traffic. The problem of deep adversarial learning has been studied extensively in the computer vision domain; however, it remains an open area of research in network security applications. This survey therefore explores research on different aspects of adversarial machine learning in the domain of network intrusion detection, in order to provide directions toward potential solutions. First, the surveyed studies are categorized according to their contributions: generating adversarial examples, evaluating the robustness of ML-based NIDS against adversarial examples, and defending these models against such attacks. Second, we highlight the characteristics identified in the surveyed research. Furthermore, we discuss the applicability of existing generic adversarial attacks to the NIDS domain, the feasibility of launching the proposed attacks in real-world scenarios, and the limitations of existing mitigation solutions.
Strengthening the robustness of machine learning-based Android malware detectors in the real world requires incorporating realizable adversarial examples (RealAEs), i.e., AEs that satisfy the domain constraints of Android malware. However, existing work focuses on generating RealAEs in the problem space, which is known to be time-consuming and impractical for adversarial training. In this paper, we propose to generate RealAEs in the feature space, leading to a simpler and more efficient solution. Our approach is driven by a novel interpretation of Android malware properties in the feature space. More concretely, we extract feature-space domain constraints by learning meaningful feature dependencies from data and applying them by constructing a robust feature space. Our experiments on DREBIN, a well-known Android malware detector, demonstrate that our approach outperforms the state-of-the-art defense, Sec-SVM, against realistic gradient- and query-based attacks. Additionally, we demonstrate that generating feature-space RealAEs is faster than generating problem-space RealAEs, indicating its high applicability in adversarial training. We further validate the ability of our learned feature-space domain constraints in representing the Android malware properties by showing that (i) re-training detectors with our feature-space RealAEs largely improves model performance on similar problem-space RealAEs and (ii) using our feature-space domain constraints can help distinguish RealAEs from unrealizable AEs (unRealAEs).
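One very reduced way to picture feature-space domain constraints learned from data: for binary DREBIN-style features, record pairwise implications that hold on every training sample and use them to reject feature-space adversarial examples that violate the learned dependencies. The real approach learns richer feature dependencies; this sketch only captures "feature i = 1 implies feature j = 1".

```python
# Hedged sketch of mining simple implication constraints from binary feature data.
import numpy as np

def learn_implications(X):
    """Return pairs (i, j) such that X[:, i] == 1 implies X[:, j] == 1 in the data."""
    n, d = X.shape
    pairs = []
    for i in range(d):
        rows = X[X[:, i] == 1]
        if len(rows) == 0:
            continue
        for j in range(d):
            if i != j and rows[:, j].all():
                pairs.append((i, j))
    return pairs

def is_realizable(x, pairs):
    """A candidate adversarial example is rejected if it breaks any implication."""
    return all(not (x[i] == 1 and x[j] == 0) for i, j in pairs)

rng = np.random.default_rng(7)
X = (rng.random((500, 12)) > 0.6).astype(int)
X[:, 3] = np.maximum(X[:, 3], X[:, 5])      # plant a dependency: feature 5 => feature 3
constraints = learn_implications(X)
x_adv = X[0].copy()
x_adv[5], x_adv[3] = 1, 0                   # a perturbation that breaks the dependency
print((5, 3) in constraints, is_realizable(x_adv, constraints))
```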
Malware is one of the most damaging threats to computers, spanning multiple operating systems and various file formats. To defend against the ever-increasing threat of malware, tremendous efforts have been made to propose a variety of malware detection methods that attempt to detect malware effectively and efficiently. Recent studies have shown that, on the one hand, existing ML and DL methods enable superior detection of newly emerging and previously unseen malware. On the other hand, however, ML and DL models are inherently vulnerable to adversarial attacks in the form of adversarial examples, which are maliciously generated by slightly and carefully perturbing legitimate inputs to confuse the targeted models. Adversarial attacks were initially studied extensively in the computer vision domain and have since rapidly expanded to other domains, including NLP, speech recognition, and even malware detection. In this paper, we focus on malware in the Portable Executable (PE) file format of the Windows operating system family, i.e., Windows PE malware, as a representative case for studying adversarial attack methods in such an adversarial setting. Specifically, we first outline the general learning framework of ML/DL-based Windows PE malware detection and subsequently highlight three unique challenges of performing adversarial attacks in the context of PE malware. We then conduct a comprehensive and systematic review that categorizes state-of-the-art adversarial attacks against PE malware detection, as well as the corresponding defenses that increase the robustness of PE malware detection. We conclude with other related attacks against Windows PE malware detection beyond adversarial attacks, and shed light on future research directions and opportunities.
Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.
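The saliency-driven crafting described above can be pictured with the toy sketch below: for a linear multi-class model whose logit Jacobian is simply its weight matrix, score each feature by how much increasing it raises the target logit while lowering the others, then bump the single most salient feature. A DNN version would obtain the Jacobian via automatic differentiation; the model, step size, and single-feature update here are simplifying assumptions.

```python
# Hedged sketch of saliency-map-guided crafting on a toy linear classifier.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def saliency_step(W, x, target, eps=0.2):
    """Score features by how much increasing them raises the target logit while
    lowering the other logits, then bump the single most salient feature."""
    J = W                                            # d logit_j / d x_i for a linear model
    dt = J[target]                                   # effect on the target class logit
    do = np.delete(J, target, axis=0).sum(axis=0)    # combined effect on the other logits
    s = np.where((dt > 0) & (do < 0), dt * np.abs(do), 0.0)
    i = int(np.argmax(s))                            # most salient feature (0 if none qualify)
    x_adv = x.copy()
    x_adv[i] += eps
    return x_adv, i

rng = np.random.default_rng(5)
W = rng.normal(size=(3, 8))                          # 3 classes, 8 input features
x = rng.normal(size=8)
x_adv, changed = saliency_step(W, x, target=2)
print(changed, softmax(W @ x)[2], softmax(W @ x_adv)[2])
```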
Numerous open-source and commercial malware detectors are available. However, the efficacy of these tools is threatened by new adversarial attacks, whereby malware attempts to evade detection using, for example, machine learning techniques. In this work, we design an adversarial evasion attack that relies on both feature-space and problem-space manipulation. It uses explainability-guided feature selection to maximize evasion by identifying the most critical features that impact detection. We then use this attack as a benchmark to evaluate several state-of-the-art malware detectors. We find that (i) state-of-the-art malware detectors are vulnerable to simple evasion strategies and can be easily fooled with off-the-shelf techniques; (ii) feature-space manipulation and problem-space obfuscation can be combined to achieve evasion without requiring white-box understanding of the detector; and (iii) we can use explainability methods (e.g., SHAP) to guide the feature manipulation and explain how the attack transfers across multiple detectors. Our findings shed light on the weaknesses of current malware detectors, as well as how they can be improved.
Explainable Artificial Intelligence (XAI) is a promising solution for improving the transparency of machine learning (ML) pipelines. We systematize the growing (but fragmented) microcosm of studies that develop and utilize XAI methods for defensive and offensive cybersecurity tasks. We identify three cybersecurity stakeholders, i.e., model users, designers, and adversaries, who utilize XAI for five different objectives within an ML pipeline, namely 1) XAI-enabled decision support, 2) applied XAI for security tasks, 3) model verification via XAI, 4) explanation verification and robustness, and 5) offensive use of explanations. We further categorize the literature w.r.t. the targeted security domain. Our analysis of the literature indicates that many of the XAI applications are designed with little understanding of how they might be integrated into analyst workflows -- user studies for explanation evaluation are conducted in only 14% of the cases. The literature also rarely disentangles the roles of the various stakeholders. In particular, the role of the model designer is minimized in the security literature. To this end, we present an illustrative use case highlighting the role of model designers. We demonstrate cases where XAI can help with model verification and cases where it may lead to erroneous conclusions. The systematization and the use case enable us to challenge several assumptions and present open problems that can help shape the future of XAI in cybersecurity.
Machine learning (ML) models may be deemed confidential due to their sensitive training data, commercial value, or use in security applications. Increasingly often, confidential ML models are being deployed with publicly accessible query interfaces. ML-as-a-service ("predictive analytics") systems are an example: Some allow users to train models on potentially sensitive data and charge others for access on a pay-per-query basis.The tension between model confidentiality and public access motivates our investigation of model extraction attacks. In such attacks, an adversary with black-box access, but no prior knowledge of an ML model's parameters or training data, aims to duplicate the functionality of (i.e., "steal") the model. Unlike in classical learning theory settings, ML-as-a-service offerings may accept partial feature vectors as inputs and include confidence values with predictions. Given these practices, we show simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes including logistic regression, neural networks, and decision trees. We demonstrate these attacks against the online services of BigML and Amazon Machine Learning. We further show that the natural countermeasure of omitting confidence values from model outputs still admits potentially harmful model extraction attacks. Our results highlight the need for careful ML model deployment and new model extraction countermeasures.
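For the logistic-regression case with confidence values, the extraction idea reduces to equation solving, as in the hedged sketch below: each query reveals logit(p) = w·x + b, so d+1 probes recover the d weights and the bias up to numerical error. The `query` endpoint is a local stand-in for an MLaaS API.

```python
# Hedged sketch of equation-solving extraction of a logistic-regression model.
import numpy as np

rng = np.random.default_rng(6)
d = 8
w_true, b_true = rng.normal(size=d), 0.7

def query(x):
    """Stand-in MLaaS endpoint returning a confidence score."""
    return 1 / (1 + np.exp(-(w_true @ x + b_true)))

X = rng.normal(size=(d + 1, d))                 # d+1 probe queries
p = np.array([query(x) for x in X])
logits = np.log(p / (1 - p))                    # invert the sigmoid
A = np.hstack([X, np.ones((d + 1, 1))])         # unknowns are (w, b)
params, *_ = np.linalg.lstsq(A, logits, rcond=None)
print(np.allclose(params[:-1], w_true), np.isclose(params[-1], b_true))
```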
The application of Artificial Intelligence (AI) and Machine Learning (ML) to cybersecurity challenges has gained traction in industry and academia, partially as a result of widespread malware attacks on critical systems such as cloud infrastructures and government institutions. Intrusion Detection Systems (IDS) that use some form of AI have received widespread adoption due to their ability to handle large volumes of data with high prediction accuracy. These systems are hosted in organizational Cyber Security Operation Centers (CSOCs) as defense tools to monitor and detect malicious network flows that would otherwise impact confidentiality, integrity, and availability (CIA). CSOC analysts rely on these systems to make decisions about detected threats. However, IDSs designed using deep learning (DL) techniques are often treated as black-box models and do not provide justifications for their predictions. This creates a barrier for CSOC analysts, as they are unable to improve their decisions based on the model's predictions. One solution to this problem is to design explainable IDS (X-IDS). This survey reviews the state of the art in explainable AI (XAI) for IDS, its current challenges, and discusses how these challenges relate to the design of X-IDS. In particular, we discuss black-box and white-box approaches comprehensively. We also present the trade-off between these approaches in terms of performance and the ability to produce explanations. Furthermore, we propose a generic architecture that considers human-in-the-loop, which can be used as a guideline when designing an X-IDS. Research recommendations are given from three critical viewpoints: the need to define explainability for IDS, the need to create explanations tailored to various stakeholders, and the need to design metrics for evaluating explanations.
A recent trojan attack on deep neural network (DNN) models is one insidious variant of data poisoning attacks. Trojan attacks exploit an effective backdoor created in a DNN model by leveraging the difficulty in interpretability of the learned model to misclassify any inputs signed with the attacker's chosen trojan trigger. Since the trojan trigger is a secret guarded and exploited by the attacker, detecting such trojan inputs is a challenge, especially at run-time when models are in active operation. This work builds STRong Intentional Perturbation (STRIP) based run-time trojan attack detection system and focuses on vision system. We intentionally perturb the incoming input, for instance by superimposing various image patterns, and observe the randomness of predicted classes for perturbed inputs from a given deployed model-malicious or benign. A low entropy in predicted classes violates the input-dependence property of a benign model and implies the presence of a malicious input-a characteristic of a trojaned input. The high efficacy of our method is validated through case studies on three popular and contrasting datasets: MNIST, CIFAR10 and GTSRB. We achieve an overall false acceptance rate (FAR) of less than 1%, given a preset false rejection rate (FRR) of 1%, for different types of triggers. Using CIFAR10 and GTSRB, we have empirically achieved result of 0% for both FRR and FAR. We have also evaluated STRIP robustness against a number of trojan attack variants and adaptive attacks.
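A compact sketch of the STRIP check: blend the incoming input with many clean images, collect the model's softmax outputs, and flag the input when the average prediction entropy stays abnormally low (a trojaned input keeps triggering the backdoor class regardless of the blend). `model_predict`, the blend ratio, and the dummy model below are placeholders, and the detection threshold would be calibrated from a preset false rejection rate as in the paper.

```python
# Hedged sketch of entropy-based run-time trojan input detection.
import numpy as np

def strip_entropy(x, clean_images, model_predict, n=32, alpha=0.5):
    """Average Shannon entropy of predictions over blended copies of x."""
    rng = np.random.default_rng(0)
    idx = rng.choice(len(clean_images), size=n, replace=False)
    blended = alpha * x[None] + (1 - alpha) * clean_images[idx]
    probs = model_predict(blended)                      # (n, classes) softmax outputs
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return ent.mean()

def flag_trojan(x, clean_images, model_predict, threshold):
    """Low entropy under perturbation is treated as a trojaned input."""
    return strip_entropy(x, clean_images, model_predict) < threshold

# Toy usage: a dummy model that always outputs a uniform distribution, so the
# entropy is high and the input looks benign.
def fake_model(batch):
    return np.full((len(batch), 10), 0.1)

clean = np.random.default_rng(1).random((100, 28, 28))
print(strip_entropy(clean[0], clean, fake_model))
```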
Stealing attacks against controlled information, along with the increasing number of information-leakage incidents, have become an emerging cybersecurity threat in recent years. Owing to the booming development and deployment of advanced analytics solutions, novel stealing attacks utilize machine learning (ML) algorithms to achieve high success rates and cause substantial damage. Detecting and defending against such attacks is challenging and urgent, and governments, organizations, and individuals should therefore attach great importance to ML-based stealing attacks. This survey presents recent advances in this kind of attack and the corresponding countermeasures. ML-based stealing attacks are reviewed from the perspective of three categories of targeted controlled information: controlled user activities, controlled ML model-related information, and controlled authentication information. Recent publications are summarized to generalize the overall attack methodology and to derive the limitations and future directions of ML-based stealing attacks. Furthermore, countermeasures for developing effective protections are proposed from three aspects -- detection, disruption, and isolation.