The success of machine learning has been fueled by the increasing availability of computing power and large training datasets. Training data are used to learn new models or update existing ones, under the assumption that they sufficiently represent the data encountered at test time. This assumption is challenged by the threat of poisoning, an attack that manipulates the training data to compromise the model's performance at test time. Although poisoning has been acknowledged as a relevant threat in industrial applications, and a variety of different attacks and defenses have been proposed so far, a complete systematization and critical review of the field is still missing. In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field over the last 15 years. We start by categorizing the current threat models and attacks, and then organize existing defenses accordingly. While we focus mostly on computer-vision applications, we argue that our systematization also encompasses state-of-the-art attacks and defenses for other data modalities. Finally, we discuss existing resources for poisoning research and shed light on the current limitations and open research questions in this field.
Together with the impressive progress touching every aspect of our society, AI technology based on deep neural networks (DNNs) is bringing increasing security concerns. While attacks operating at test time have monopolized researchers' initial attention, the possibility of undermining the reliability of AI technology by interfering with the training process represents a further, serious threat. In a backdoor attack, the attacker corrupts the training data so as to induce an erroneous behavior at test time. The test-time error, however, is activated only in the presence of a triggering event corresponding to a suitably crafted input sample. In this way, the corrupted network continues to work as expected on regular inputs, and the malicious behavior occurs only when the attacker decides to activate the backdoor hidden inside the network. Over the last few years, backdoor attacks have been the subject of intense research activity, focusing both on the development of new classes of attacks and on the proposal of possible countermeasures. The goal of this overview paper is to review the works published so far, classifying the different types of attacks and defenses proposed to date. The classification guiding the analysis is based on the amount of control that the attacker has over the training process, and on the defender's capability to verify the integrity of the data used for training and to monitor the operation of the DNN at training and test time. The proposed analysis is therefore particularly suited to highlight the strengths and weaknesses of both attacks and defenses with reference to the application scenarios in which they operate.
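A minimal sketch of the training-data corruption described above, assuming an image dataset stored as a NumPy array: a small trigger patch is stamped onto a randomly chosen fraction of training images and their labels are flipped to the attacker's target class. All names and parameter values are illustrative, not taken from any specific paper.

```python
import numpy as np

def poison_with_trigger(images, labels, target_class=0, poison_rate=0.05,
                        patch_size=3, patch_value=1.0, seed=0):
    """Stamp a small square trigger in the bottom-right corner of a random
    subset of images and relabel those samples to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:] = patch_value  # trigger patch
    labels[idx] = target_class                              # dirty label
    return images, labels, idx

# Toy usage with random data standing in for a real grayscale training set.
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned, poisoned_idx = poison_with_trigger(X, y)
```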
Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, have been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed to understand the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently-different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms.
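As a concrete illustration of the test-time perturbations mentioned above, here is a minimal fast-gradient-sign-method (FGSM) style sketch, assuming a PyTorch classifier `model` that maps inputs scaled to [0, 1] to class logits; the function name and epsilon value are illustrative.

```python
import torch

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an L-infinity bounded adversarial example: one signed gradient
    step on the cross-entropy loss, followed by clipping back to [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```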
Reinforcement learning allows machines to learn from their own experience. Nowadays, it is used in safety-critical applications, such as autonomous driving, despite being vulnerable to attacks carefully crafted either to prevent the reinforcement learning algorithm from learning an effective and reliable policy, or to induce the trained agent to make a wrong decision. The literature about the security of reinforcement learning is rapidly growing, and some surveys have been proposed to shed light on this field. However, their categorizations are insufficient for choosing an appropriate defense given the kind of system at hand. In our survey, we not only overcome this limitation by considering a different perspective, but also discuss the applicability of state-of-the-art attacks and defenses when reinforcement learning algorithms are used in the context of autonomous driving.
In adversarial machine learning, new defenses against attacks on deep learning systems are routinely broken soon after the release of more powerful attacks. In this context, forensic tools can offer a valuable complement to existing defenses, by tracing back the root cause of a successful attack and providing a path forward for mitigation to prevent similar attacks in the future. In this paper, we describe our efforts to develop a forensic traceback tool for poisoning attacks on deep neural networks. We propose a novel iterative clustering and pruning solution that trims "innocent" training samples until all that remains is the set of poisoned data responsible for the attack. Our method clusters training samples based on their impact on model parameters, and then uses an efficient data-unlearning method to prune innocent clusters. We empirically demonstrate the efficacy of our system on three types of dirty-label (backdoor) poisoning attacks and three types of clean-label poisoning attacks, spanning computer vision and malware classification. Our system achieves over 98.4% precision and 96.8% recall across all attacks. We also show that our system is relatively robust against four anti-forensics measures specifically designed to attack it.
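A highly simplified sketch of the iterative clustering-and-pruning idea, under the assumption that per-sample gradient embeddings are available and that `responsible_score` stands in for the paper's unlearning-based check of how well a candidate set explains the attack; both names are hypothetical, not the authors' API.

```python
import numpy as np
from sklearn.cluster import KMeans

def traceback(train_grads, responsible_score, rounds=5):
    """Iteratively cluster the remaining candidates into two groups and keep
    only the cluster that better explains the observed misclassification.

    train_grads:       (N, D) per-sample gradient embeddings w.r.t. model params
    responsible_score: callable mapping an index set to a scalar estimating how
                       much those samples contribute to the attack (e.g. by
                       unlearning them and re-evaluating the attack sample).
    """
    candidates = np.arange(len(train_grads))
    for _ in range(rounds):
        if len(candidates) < 4:
            break
        km = KMeans(n_clusters=2, n_init=10).fit(train_grads[candidates])
        c0 = candidates[km.labels_ == 0]
        c1 = candidates[km.labels_ == 1]
        # Prune the "innocent" cluster, keep the one driving the attack.
        candidates = c0 if responsible_score(c0) >= responsible_score(c1) else c1
    return candidates  # remaining indices: suspected poisoned samples
```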
Speech-centric machine learning systems have revolutionized many leading domains ranging from transportation and healthcare to education and defense, profoundly changing how people live, work, and interact with each other. However, recent studies have demonstrated that many speech-centric ML systems may need to be considered more trustworthy for broader deployment. Specifically, concerns over privacy breaches, discriminating performance, and vulnerability to adversarial attacks have all been discovered in ML research fields. In order to address the above challenges and risks, a significant number of efforts have been made to ensure these ML systems are trustworthy, especially private, safe, and fair. In this paper, we conduct the first comprehensive survey on speech-centric trustworthy ML topics related to privacy, safety, and fairness. In addition to serving as a summary report for the research community, we point out several promising future research directions to inspire the researchers who wish to explore further in this area.
Many state-of-the-art ML models outperform humans in a variety of tasks such as image classification. With such outstanding performance, ML models are widely used today. However, the existence of adversarial attacks and data poisoning attacks seriously calls into question the robustness of ML models. For instance, Engstrom et al. demonstrated that state-of-the-art image classifiers can easily be fooled by a small rotation of an arbitrary image. As ML systems are increasingly incorporated into safety- and security-sensitive applications, adversarial attacks and data poisoning attacks pose a considerable threat. This chapter focuses on two broad and important areas of ML security: adversarial attacks and data poisoning attacks.
Recent years have seen a proliferation of research on adversarial machine learning. Numerous papers demonstrate powerful algorithmic attacks against a wide variety of machine learning (ML) models, and numerous other papers propose defenses that can withstand most attacks. However, abundant real-world evidence suggests that actual attackers use simple tactics to subvert ML-driven systems, and as a result security practitioners have not prioritized adversarial ML defenses. Motivated by the apparent gap between researchers and practitioners, this position paper aims to bridge the two domains. We first present three real-world case studies from which we can glean practical insights unknown or neglected in research. Next we analyze all adversarial ML papers recently published in top security conferences, highlighting positive trends and blind spots. Finally, we state positions on precise and cost-driven threat modeling, collaboration between industry and academia, and reproducible research. We believe that our positions, if adopted, will increase the real-world impact of future endeavours in adversarial ML, bringing both researchers and practitioners closer to their shared goal of improving the security of ML systems.
Backdoor attacks mislead machine-learning models into outputting an attacker-specified class when a specific trigger is presented at test time. These attacks require poisoning the training data to compromise the learning algorithm, e.g., by injecting poisoned samples containing the trigger into the training set, along with the desired class label. Despite the increasing number of studies on backdoor attacks and defenses, the underlying factors affecting the success of backdoor attacks, along with their impact on learning algorithms, are not yet well understood. In this work, we aim to shed light on this issue by unveiling that backdoored models learn smoother decision functions around the triggered samples, a phenomenon we refer to as backdoor smoothing. To quantify backdoor smoothing, we define a measure that evaluates the uncertainty associated with the predictions of a classifier around the input samples. Our experiments show that smoothness increases when the trigger is added to the input samples, and that this phenomenon is more pronounced for more successful attacks. We also provide preliminary evidence that backdoor triggers are not the only smoothing-inducing patterns, but that other artificial patterns can be detected by our approach, paving the way towards understanding the limitations of current defenses and designing novel ones.
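One possible way to instantiate such an uncertainty measure, assuming a `predict_proba` callable that returns class probabilities for a batch; the Gaussian-noise sampling and the entropy of the averaged prediction are a plausible choice for illustration, not necessarily the exact measure defined in the paper.

```python
import numpy as np

def local_prediction_uncertainty(predict_proba, x, sigma=0.05, n_samples=100, seed=0):
    """Estimate how 'smooth' the classifier is around x: sample Gaussian
    perturbations of x and compute the entropy of the averaged class
    distribution. Low entropy (high agreement) indicates a smoother decision
    surface around x, which the abstract associates with triggered inputs."""
    rng = np.random.default_rng(seed)
    noisy = x[None, ...] + sigma * rng.standard_normal((n_samples,) + x.shape)
    probs = predict_proba(noisy.astype(np.float32))   # (n_samples, n_classes)
    mean_p = probs.mean(axis=0)
    return -(mean_p * np.log(mean_p + 1e-12)).sum()   # entropy of averaged prediction
```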
Machine learning models trained on data from the outside world can be corrupted by data poisoning attacks that inject malicious points into the models' training sets. A common defense against these attacks is data sanitization: first filter out anomalous training points before training the model. In this paper, we develop three attacks that can bypass a broad range of common data sanitization defenses, including anomaly detectors based on nearest neighbors, training loss, and singular-value decomposition. By adding just 3% poisoned data, our attacks successfully increase the test error on the Enron spam detection dataset from 3% to 24%, and on the IMDB sentiment classification dataset from 12% to 29%. In contrast, existing attacks that do not explicitly account for these data sanitization defenses are defeated by them. Our attacks are based on two ideas: (i) we coordinate our attacks to place poisoned points near one another, and (ii) we formulate each attack as a constrained optimization problem, with constraints designed to ensure that the poisoned points evade detection. As this optimization involves solving an expensive bilevel problem, our three attacks correspond to three ways of approximating this problem, based on influence functions, minimax duality, and the Karush-Kuhn-Tucker (KKT) conditions. Our results underscore the need to develop more robust defenses against data poisoning attacks.
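To make the evasion constraint concrete, the sketch below shows how a candidate poison point could be projected back inside the acceptance region of a simple "sphere" sanitization defense that discards points farther than a radius from their class centroid; this is an illustrative constraint-handling step under assumed names, not the authors' full bilevel optimization.

```python
import numpy as np

def project_to_sphere_defense(x_poison, centroid, radius):
    """Project a candidate poison point back inside the feasible region of a
    sphere defense: any point farther than `radius` from its class centroid
    would be filtered out, so the attacker keeps poisons just inside the
    boundary (and, per idea (i) above, places them near one another)."""
    delta = x_poison - centroid
    norm = np.linalg.norm(delta)
    if norm <= radius:
        return x_poison
    return centroid + radius * delta / norm
```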
A recent trojan attack on deep neural network (DNN) models is one insidious variant of data poisoning attacks. Trojan attacks exploit an effective backdoor created in a DNN model by leveraging the difficulty in interpretability of the learned model to misclassify any inputs signed with the attacker's chosen trojan trigger. Since the trojan trigger is a secret guarded and exploited by the attacker, detecting such trojan inputs is a challenge, especially at run-time when models are in active operation. This work builds a STRong Intentional Perturbation (STRIP) based run-time trojan attack detection system and focuses on vision systems. We intentionally perturb the incoming input, for instance by superimposing various image patterns, and observe the randomness of predicted classes for perturbed inputs from a given deployed model, whether malicious or benign. A low entropy in predicted classes violates the input-dependence property of a benign model and implies the presence of a malicious input, a characteristic of a trojaned input. The high efficacy of our method is validated through case studies on three popular and contrasting datasets: MNIST, CIFAR10 and GTSRB. We achieve an overall false acceptance rate (FAR) of less than 1%, given a preset false rejection rate (FRR) of 1%, for different types of triggers. Using CIFAR10 and GTSRB, we have empirically achieved results of 0% for both FRR and FAR. We have also evaluated STRIP robustness against a number of trojan attack variants and adaptive attacks.
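A minimal sketch of the STRIP-style entropy test, assuming a `predict_proba` callable and a batch of clean overlay images; the blending weight and the decision threshold are assumptions to be calibrated on clean inputs.

```python
import numpy as np

def strip_entropy(predict_proba, x, overlay_images, alpha=0.5):
    """Superimpose (blend) x with a set of clean images and compute the mean
    Shannon entropy of the predicted class distributions. Trojaned inputs tend
    to keep predicting the attacker's target class under such perturbations,
    yielding abnormally low entropy compared to benign inputs."""
    blended = alpha * x[None, ...] + (1.0 - alpha) * overlay_images
    probs = predict_proba(blended)                       # (n_overlays, n_classes)
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1)   # entropy per perturbed copy
    return ent.mean()  # flag as trojaned if below a threshold calibrated on clean data
```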
Machine Learning as a Service (MLaaS) has become a widespread paradigm, making even the most complex machine learning models available to clients via, e.g., a pay-per-query principle. This allows users to avoid the time-consuming processes of data collection, hyperparameter tuning, and model training. However, by giving their clients access to the (predictions of their) models, MLaaS providers endanger their intellectual property, such as sensitive training data, optimized hyperparameters, or learned model parameters. Adversaries can create a copy of the model with (almost) identical behavior using only the prediction labels. While many variants of this attack have been described, only scattered defense strategies have been proposed, each addressing isolated threats. This raises the need for a thorough systematization of the field of model stealing, in order to arrive at a comprehensive understanding of why these attacks succeed and how they could be holistically defended against. We address this by categorizing model stealing attacks, assessing their performance, and exploring corresponding defense techniques in different settings. We propose a taxonomy for attack and defense approaches and provide guidelines on how to select the right attack or defense strategy based on the goal and available resources. Finally, we analyze which defenses are rendered less effective by current attack strategies.
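A bare-bones sketch of label-only model extraction, assuming a `query_victim` callable that returns predicted labels for a batch of queries; the surrogate architecture, query distribution, and query budget are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def steal_model(query_victim, n_queries=5000, input_dim=20, seed=0):
    """Query the victim model on synthetic inputs, collect only its predicted
    labels, and fit a surrogate model that mimics its behavior."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_queries, input_dim))
    y = query_victim(X)                       # victim returns predicted labels only
    surrogate = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300)
    surrogate.fit(X, y)
    return surrogate
```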
Backdoor attacks inject poisoning samples during training, with the goal of forcing a machine-learning model to output an attacker-chosen class when presented with a specific trigger at test time. Although backdoor attacks have been demonstrated in a variety of settings and against different models, the factors affecting their effectiveness are still not well understood. In this work, we provide a unifying framework to study the process of backdoor learning under the lens of incremental learning and influence functions. We show that the effectiveness of backdoor attacks depends on: (i) the complexity of the learning algorithm, controlled by its hyperparameters; (ii) the fraction of backdoor samples injected into the training set; and (iii) the size and visibility of the backdoor trigger. These factors affect how fast the model learns to associate the presence of the trigger with the target class. Our analysis unveils the intriguing existence of a region in the hyperparameter space where the accuracy on clean test samples remains high while backdoor attacks are ineffective, thereby suggesting novel criteria to improve existing defenses.
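The three factors above can be explored with a simple grid sweep; in the sketch below, `train_and_eval` is a hypothetical helper that trains a model on a set poisoned with the given fraction and trigger size, using the given regularization strength as a proxy for model complexity, and reports clean accuracy and attack success rate.

```python
def backdoor_effectiveness_sweep(train_and_eval,
                                 reg_strengths=(1e-4, 1e-2),
                                 poison_rates=(0.01, 0.05, 0.10),
                                 trigger_sizes=(1, 3, 5)):
    """Enumerate the three attack-relevant factors: model complexity (via
    regularization), poisoning fraction, and trigger size. Each call to
    `train_and_eval` is assumed to return (clean_accuracy, attack_success_rate)."""
    results = {}
    for reg in reg_strengths:
        for rate in poison_rates:
            for size in trigger_sizes:
                results[(reg, rate, size)] = train_and_eval(reg, rate, size)
    return results
```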
Machine learning (ML) models are applied in an increasing variety of domains. The availability of large amounts of data and computational resources encourages the development of ever more complex and valuable models. These models are considered the intellectual property of the legitimate parties who have trained them, which makes their protection against stealing, illegitimate redistribution, and unauthorized application an urgent need. Digital watermarking presents a strong mechanism for marking model ownership and thereby offers protection against those threats. This work presents a taxonomy identifying and analyzing different classes of watermarking schemes for ML models. It introduces a unified threat model to allow structured reasoning about, and comparison of, the effectiveness of watermarking methods in different scenarios. Furthermore, it systematizes the desired security requirements and the attacks against ML model watermarking. Based on this framework, representative literature from the field is surveyed to illustrate the taxonomy. Finally, shortcomings and general limitations of existing approaches are discussed, and an outlook on future research directions is given.
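As one concrete example from the space of schemes such a taxonomy covers, the sketch below shows verification for a trigger-set (backdoor-based) watermark, assuming a `predict` callable for the suspect model; the agreement threshold is an assumed parameter.

```python
import numpy as np

def verify_watermark(predict, trigger_inputs, trigger_labels, threshold=0.9):
    """Trigger-set watermark verification: the owner embeds the watermark by
    training the model to output pre-chosen labels on a secret set of key
    inputs; ownership is claimed if a suspect model agrees with those labels
    more often than `threshold`."""
    agreement = np.mean(predict(trigger_inputs) == trigger_labels)
    return agreement >= threshold, agreement
```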
As the curation of data for machine learning becomes increasingly automated, dataset tampering is a mounting threat. Backdoor attackers tamper with training data in order to embed a vulnerability in models trained on that data. This vulnerability is then activated at inference time by placing a "trigger" into the model's input. Typical backdoor attacks insert the trigger directly into the training data, although the presence of such an attack may be visible upon inspection. In contrast, hidden-trigger backdoor attacks achieve poisoning without placing a trigger into the training data at all. However, such hidden-trigger attacks are ineffective at poisoning neural networks trained from scratch. We develop a new hidden-trigger attack, Sleeper Agent, which employs gradient matching, data selection, and target model re-training during the crafting process. Sleeper Agent is the first hidden-trigger backdoor attack that is effective against neural networks trained from scratch. We demonstrate its effectiveness on ImageNet and in black-box settings. Our implementation code can be found at https://github.com/hsouri/sleeper-agent.
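A sketch of the gradient-matching component only, assuming a PyTorch model and differentiable poison inputs; the data-selection and re-training steps that the attack also uses are omitted, and the cosine-similarity formulation here is an assumption about the general gradient-matching objective rather than the exact Sleeper Agent implementation.

```python
import torch

def gradient_matching_loss(model, loss_fn, x_poison, y_poison, x_target, y_adv):
    """Make the parameter gradient induced by the (perturbed) poison batch point
    in the same direction as the gradient that would push the target input
    toward the adversarial label; the attacker minimizes this w.r.t. the poison
    pixels (x_poison must require gradients)."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_target = torch.autograd.grad(loss_fn(model(x_target), y_adv), params)
    g_poison = torch.autograd.grad(loss_fn(model(x_poison), y_poison), params,
                                   create_graph=True)
    num = sum((gt.detach() * gp).sum() for gt, gp in zip(g_target, g_poison))
    denom = (torch.sqrt(sum(gt.detach().pow(2).sum() for gt in g_target)) *
             torch.sqrt(sum(gp.pow(2).sum() for gp in g_poison)) + 1e-12)
    return 1.0 - num / denom  # 0 when the two gradients are perfectly aligned
```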
Malware has been one of the most damaging threats to computers, spanning multiple operating systems and various file formats. To defend against the ever-increasing threat of malware, tremendous efforts have been made to propose a variety of malware detection methods that attempt to detect malware effectively and efficiently. Recent studies have shown that, on the one hand, existing ML and DL techniques enable superior detection of newly emerging and previously unseen malware. On the other hand, however, ML and DL models are inherently vulnerable to adversarial attacks in the form of adversarial examples, which are maliciously generated by slightly and carefully perturbing legitimate inputs so as to confuse the targeted models. Adversarial attacks were initially and extensively studied in the computer vision domain and quickly expanded to other domains, including NLP, speech recognition, and even malware detection. In this paper, we focus on malware in the Portable Executable (PE) file format of the Windows operating system family, i.e., Windows PE malware, as a representative case for studying adversarial attack methods in such an adversarial setting. Specifically, we first outline the general learning framework of ML/DL-based Windows PE malware detection and then highlight three unique challenges of performing adversarial attacks in the context of PE malware. We then conduct a comprehensive and systematic review to categorize state-of-the-art adversarial attacks against PE malware detection, as well as the corresponding defenses that increase the robustness of PE malware detection. We conclude by presenting other related attacks against Windows PE malware detection beyond adversarial attacks, and by shedding light on future research directions and opportunities.
Machine learning (ML) models have been widely applied to various applications, including image classification, text generation, audio recognition, and graph data analysis. However, recent studies have shown that ML models are vulnerable to membership inference attacks (MIAs), which aim to infer whether a data record was used to train a target model or not. MIAs on ML models can directly lead to a privacy breach. For example, by identifying that a clinical record has been used to train a model associated with a certain disease, an attacker can infer that the owner of the clinical record has that disease with a high chance. In recent years, MIAs have been shown to be effective on various ML models, e.g., classification models and generative models. Meanwhile, many defense methods have been proposed to mitigate MIAs. Although MIAs on ML models form a newly emerging and rapidly growing research area, there has been no systematic survey on this topic yet. In this paper, we conduct the first comprehensive survey on membership inference attacks and defenses. We provide taxonomies for both attacks and defenses based on their characteristics, and discuss their pros and cons. Based on the limitations and gaps identified in this survey, we point out several promising future research directions to inspire researchers who wish to follow this area. This survey not only serves as a reference for the research community but also provides a clear picture for researchers outside this research domain. To further facilitate researchers, we have created an online resource repository, which we keep updated with future relevant work. Interested readers can find the repository at https://github.com/hongshenghu/membership-inference-machine-learning-literature.
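For intuition, the simplest membership-inference baseline thresholds the target model's loss on a candidate record; the sketch below assumes access to losses on known non-members for calibration and is far weaker than the shadow-model attacks covered by such surveys.

```python
import numpy as np

def calibrate_threshold(nonmember_losses, fpr=0.05):
    """Pick a loss threshold so that roughly `fpr` of known non-members would
    be (wrongly) flagged as members."""
    return np.quantile(nonmember_losses, fpr)

def loss_threshold_mia(target_loss, threshold):
    """Predict 'member' for records whose loss under the target model falls
    below the calibrated threshold, exploiting the fact that models tend to
    fit training records more tightly than unseen ones."""
    return target_loss < threshold
```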
The authors thank Nicholas Carlini (UC Berkeley) and Dimitris Tsipras (MIT) for feedback to improve the survey quality. We also acknowledge X. Huang (Uni. Liverpool), K. R. Reddy (IISC), E. Valle (UNICAMP), Y. Yoo (CLAIR) and others for providing pointers to make the survey more comprehensive.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
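A sketch of the NAIVEATTACK idea described in the abstract above, assuming a `distill` callable that maps a labeled image dataset in NCHW layout to a small synthetic dataset; the trigger pattern, poisoning rate, and data layout are assumptions for illustration.

```python
import numpy as np

def naive_distillation_backdoor(images, labels, distill, target_class=0,
                                poison_rate=0.1, patch_size=2, seed=0):
    """Stamp a trigger onto a fraction of the *raw* dataset before handing it
    to a dataset-distillation routine, so the backdoor is carried into the
    synthetic set rather than being injected at model-training time."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    images[idx, :, -patch_size:, -patch_size:] = 1.0   # corner trigger, all channels
    labels[idx] = target_class
    return distill(images, labels)                      # poisoned synthetic dataset
```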
Recent works have demonstrated that deep learning models are vulnerable to backdoor poisoning attacks, where these attacks instill spurious correlations with external trigger patterns or objects (e.g., stickers, sunglasses, etc.). We find that such external trigger signals are unnecessary, as highly effective backdoors can be easily inserted using rotation-based image transformations. Our method constructs the poisoned dataset by rotating a limited number of objects and mislabeling them; once trained on it, the victim's model will make undesirable predictions during run-time inference. It exhibits a significantly high attack success rate while maintaining clean performance, as shown through comprehensive empirical studies on image classification and object detection tasks. Furthermore, we evaluate standard data augmentation techniques and four different backdoor defenses against our attack and find that none of them can serve as a consistent mitigation approach. Our attack can be easily deployed in the real world since it only requires rotating the object, as we show in both image classification and object detection applications. Overall, our work highlights a new, simple, physically realizable, and highly effective vector for backdoor attacks. Our video demo is available at https://youtu.be/6jif8wnx34m.
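A minimal sketch of rotation-based poisoning, assuming images stored as a NumPy array; the rotation angle, poisoning rate, and interpolation settings are illustrative.

```python
import numpy as np
from scipy.ndimage import rotate

def rotation_backdoor_poison(images, labels, target_class=0, poison_rate=0.05,
                             angle=45.0, seed=0):
    """Rotate a small fraction of training images by a fixed angle and mislabel
    them as the target class; at test time the attacker activates the backdoor
    simply by presenting a rotated input, with no external trigger pattern."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    for i in idx:
        images[i] = rotate(images[i], angle, reshape=False, mode="nearest")
    labels[idx] = target_class
    return images, labels
```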