Modern autonomous vehicles adopt state-of-the-art DNN models to interpret sensor data and perceive the environment. However, DNN models are vulnerable to different types of adversarial attacks, which pose significant risks to the security and safety of the vehicles and their passengers. One prominent threat is the backdoor attack, where the adversary can compromise the DNN model by poisoning the training samples. Although a great deal of effort has been devoted to investigating backdoor attacks on conventional computer vision tasks, their practicality and applicability to autonomous driving scenarios, particularly in the physical world, remain largely unexplored. In this paper, we target lane detection systems, an indispensable module for many autonomous driving tasks such as navigation and lane switching. We design and realize the first physical backdoor attack against such systems. Our attack is comprehensively effective against different types of lane detection algorithms. Specifically, we introduce two attack methodologies (poison-annotation and clean-annotation) to generate poisoned samples. With those samples, the trained lane detection model is infected with a backdoor and can be activated by common objects (e.g., traffic cones) to make wrong detections, causing the vehicle to drive off the road or onto the opposite lane. Extensive evaluations on public datasets and a physical autonomous vehicle demonstrate that our backdoor attack is effective, stealthy, and robust against various defense solutions. Our code and experimental videos can be found at https://sites.google.com/view/lane-detection-attack/lda.
Recent works have demonstrated that deep learning models are vulnerable to backdoor poisoning attacks, which instill spurious correlations with external trigger patterns or objects (e.g., stickers, sunglasses, etc.). We find that such external trigger signals are unnecessary, since highly effective backdoors can be easily inserted using rotation-based image transformations. Our method constructs the poisoned dataset by rotating a limited number of objects and labeling them incorrectly; once trained on it, the victim's model makes undesirable predictions during run-time inference. The attack exhibits a significantly high success rate while maintaining clean performance, as shown by comprehensive empirical studies on image classification and object detection tasks. Furthermore, we evaluate standard data augmentation techniques and four different backdoor defenses against our attack and find that none of them serves as a consistent mitigation. As we show in both image classification and object detection applications, our attack can be easily deployed in the real world. Overall, our work highlights a new, simple, physically realizable, and highly effective vector for backdoor attacks. Our video demo is available at https://youtu.be/6jif8wnx34m.
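A minimal sketch of the rotation-based poisoning step described above, assuming a numpy image dataset; the rotation angle, poisoning rate, and relabeling policy here are illustrative placeholders, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import rotate

def poison_by_rotation(images, labels, target_label, angle=90.0,
                       poison_rate=0.05, seed=0):
    """Rotate a small fraction of images and relabel them to the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        # Rotate in the image plane (H, W) and keep the original shape.
        images[i] = rotate(images[i], angle, axes=(0, 1), reshape=False,
                           mode="nearest")
        labels[i] = target_label          # mislabel the rotated sample
    return images, labels, idx

# Example: poison 5% of a toy CIFAR-like batch toward class 0.
imgs = np.random.rand(100, 32, 32, 3).astype(np.float32)
lbls = np.random.randint(0, 10, size=100)
p_imgs, p_lbls, poisoned_idx = poison_by_rotation(imgs, lbls, target_label=0)
```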
Deep neural networks are well known to be vulnerable to adversarial attacks and backdoor attacks, where minor modifications of the input can mislead the model into producing wrong results. Although defenses against adversarial attacks have been extensively studied, investigations into mitigating backdoor attacks are still at an early stage. It remains unclear whether there are any connections or common characteristics between the defenses against these two attacks. We conduct a comprehensive study on the connections between adversarial examples and backdoor examples of deep neural networks, seeking to answer the question: can we detect backdoor examples using adversarial detection methods? Our insight is based on the observation that, during inference, both adversarial examples and backdoor examples exhibit anomalies that are highly distinguishable from benign samples. As a result, we adapt four existing adversarial defense methods to detect backdoor examples. Extensive evaluations show that these methods reliably defend against backdoor attacks, with even higher accuracy than when detecting adversarial examples. These solutions also reveal the relations among adversarial examples, backdoor examples, and normal samples in terms of model sensitivity, activation space, and feature space, which deepens our understanding of the inherent characteristics of the two attacks and of the opportunities for defense.
Object detection is the foundation of various critical computer vision tasks such as segmentation, object tracking, and event detection. Training an object detector with satisfactory accuracy requires massive data. However, since annotating large datasets involves substantial labor, such data curation is often outsourced to third parties or relies on volunteers. This work reveals severe vulnerabilities in such data curation pipelines. We propose MACAB, which crafts clean-annotated images to stealthily implant a backdoor into object detectors, even when the data curator can manually audit the images. We observe that both misclassification and cloaking backdoor effects are achieved in the wild when the backdoor is activated by inconspicuous natural physical triggers. Backdooring object detection with clean annotations is more challenging than the existing clean-label image classification task, due to the complexity of having multiple objects, both victim and non-victim, within each frame. The efficacy of MACAB is ensured by constructively abusing the image-scaling function used by deep learning frameworks, incorporating the proposed adversarial clean-image replica technique, and combining poisoned-data selection criteria under a constrained attack budget. Extensive experiments demonstrate that MACAB exhibits more than a 90% attack success rate under various real-world scenes, covering both cloaking and misclassification backdoor effects, even with a small attack budget. State-of-the-art detection techniques cannot effectively identify the poisoned samples. A comprehensive video demonstration is available at https://youtu.be/ma7l_lpxkp4, based on a poisoning rate of 0.14% for the YOLOv4 cloaking backdoor and the Faster R-CNN misclassification backdoor.
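The image-scaling abuse mentioned above can be illustrated with a rough sketch: craft an image that stays close to a benign source at full resolution while its downscaled version approximates attacker-chosen content. The snippet below is only a conceptual illustration using torch tensors in [0, 1] and bilinear resizing; MACAB's actual pipeline (clean-image replica, poisoned-data selection) is not reproduced, and the loss weights are assumptions.

```python
import torch
import torch.nn.functional as F

def craft_scaling_image(source, target_small, out_size, steps=300, lr=0.05,
                        lam=8.0):
    """Optimize an image close to `source` whose downscaled view approximates `target_small`."""
    crafted = source.clone().requires_grad_(True)   # shape [C, H, W]
    opt = torch.optim.Adam([crafted], lr=lr)
    for _ in range(steps):
        down = F.interpolate(crafted.unsqueeze(0), size=out_size,
                             mode="bilinear", align_corners=False)[0]
        # Stay visually close to the source, but resize toward the target.
        loss = F.mse_loss(crafted, source) + lam * F.mse_loss(down, target_small)
        opt.zero_grad()
        loss.backward()
        opt.step()
        crafted.data.clamp_(0.0, 1.0)
    return crafted.detach()
```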
Although Deep Neural Networks (DNNs) have achieved impressive results in computer vision, their exposed vulnerability to adversarial attacks remains a serious concern. A series of works has shown that by adding elaborate perturbations to images, DNNs can suffer catastrophic degradation in performance metrics. This phenomenon exists not only in the digital space but also in the physical space. Therefore, estimating the security of these DNN-based systems is critical for safely deploying them in the real world, especially for security-critical applications, e.g., autonomous cars, video surveillance, and medical diagnosis. In this paper, we focus on physical adversarial attacks and provide a comprehensive survey of over 150 existing papers. We first clarify the concept of the physical adversarial attack and analyze its characteristics. Then, we define the adversarial medium, which is essential for performing attacks in the physical world. Next, we present the physical adversarial attack methods in task order: classification, detection, and re-identification, and introduce their performance in solving the trilemma: effectiveness, stealthiness, and robustness. In the end, we discuss the current challenges and potential future directions.
A recent trojan attack on deep neural network (DNN) models is one insidious variant of data poisoning attacks. Trojan attacks exploit an effective backdoor created in a DNN model by leveraging the difficulty in interpretability of the learned model to misclassify any inputs signed with the attacker's chosen trojan trigger. Since the trojan trigger is a secret guarded and exploited by the attacker, detecting such trojan inputs is a challenge, especially at run-time when models are in active operation. This work builds a STRong Intentional Perturbation (STRIP) based run-time trojan attack detection system and focuses on vision systems. We intentionally perturb the incoming input, for instance by superimposing various image patterns, and observe the randomness of predicted classes for perturbed inputs from a given deployed model, whether malicious or benign. A low entropy in predicted classes violates the input-dependence property of a benign model and implies the presence of a malicious input, a characteristic of a trojaned input. The high efficacy of our method is validated through case studies on three popular and contrasting datasets: MNIST, CIFAR10 and GTSRB. We achieve an overall false acceptance rate (FAR) of less than 1%, given a preset false rejection rate (FRR) of 1%, for different types of triggers. Using CIFAR10 and GTSRB, we have empirically achieved results of 0% for both FRR and FAR. We have also evaluated STRIP robustness against a number of trojan attack variants and adaptive attacks.
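A minimal sketch of the STRIP-style entropy test described above, assuming a PyTorch classifier and a small pool of held-out clean images; the blending coefficient, number of perturbed copies, and threshold are illustrative, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def strip_entropy(model, x, clean_pool, n_perturb=32, alpha=0.5):
    """Average prediction entropy of input `x` blended with random clean images."""
    idx = torch.randint(0, clean_pool.size(0), (n_perturb,))
    blended = alpha * x.unsqueeze(0) + (1.0 - alpha) * clean_pool[idx]
    probs = F.softmax(model(blended), dim=1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)
    return entropy.mean().item()

def is_trojaned_input(model, x, clean_pool, threshold):
    # Low entropy means the prediction barely changes under strong
    # perturbation, the signature of a trigger-dominated input.
    return strip_entropy(model, x, clean_pool) < threshold
```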
Together with the impressive progress touching every aspect of our society, AI technology based on Deep Neural Networks (DNNs) is bringing increasing security concerns. While attacks operating at test time have monopolized the initial attention of researchers, the possibility of undermining the reliability of AI technology by corrupting DNN models through interference with the training process represents a further serious threat. In backdoor attacks, the attacker corrupts the training data so as to induce erroneous behavior at test time. Test-time errors, however, are activated only in the presence of a triggering event corresponding to a properly crafted input sample. In this way, the corrupted network continues to work as expected on normal inputs, and the malicious behavior occurs only when the attacker decides to activate the backdoor hidden within the network. In the last few years, backdoor attacks have been the subject of intense research activity focusing on the development of new classes of attacks and on the proposal of possible countermeasures. The goal of this overview paper is to review the works published so far, classifying the different types of attacks and defenses proposed to date. The classification guiding the analysis is based on the amount of control the attacker has over the training process, and on the capability of the defender to verify the integrity of the data used for training and to monitor the operation of the DNN at training and test time. Hence, the proposed analysis is particularly suited to highlighting the strengths and weaknesses of both attacks and defenses with reference to the application scenarios in which they operate.
Backdoor attacks have emerged as one of the major security threats to deep learning models as they can easily control the model's test-time predictions by pre-injecting a backdoor trigger into the model at training time. While backdoor attacks have been extensively studied on images, few works have investigated the threat of backdoor attacks on time series data. To fill this gap, in this paper we present a novel generative approach for time series backdoor attacks against deep learning based time series classifiers. Backdoor attacks have two main goals: high stealthiness and high attack success rate. We find that, compared to images, it can be more challenging to achieve these two goals on time series. This is because time series have fewer input dimensions and lower degrees of freedom, making it hard to achieve a high attack success rate without compromising stealthiness. Our generative approach addresses this challenge by generating trigger patterns that are as realistic as real time series patterns while achieving a high attack success rate without causing a significant drop in clean accuracy. We also show that our proposed attack is resistant to potential backdoor defenses. Furthermore, we propose a novel universal generator that can poison any type of time series with a single generator, allowing universal attacks without the need to fine-tune the generative model for new time series datasets.
As a critical threat to deep neural networks (DNNs), backdoor attacks can be categorized into two types, i.e., source-agnostic backdoor attacks (SABAs) and source-specific backdoor attacks (SSBAs). Compared to traditional SABAs, SSBAs are more advanced in that they are stealthier in bypassing mainstream countermeasures that are effective against SABAs. Nonetheless, existing SSBAs suffer from two major limitations. First, they can hardly achieve a good trade-off between ASR (attack success rate) and FPR (false positive rate). Besides, they can be effectively detected by state-of-the-art (SOTA) countermeasures (e.g., SCAn). To address these limitations, we propose a new class of viable source-specific backdoor attacks, coined CASSOCK. Our key insight is that the trigger designs used when creating poisoned data and cover data in SSBAs play a crucial role in demonstrating a viable source-specific attack, which has not been considered by existing SSBAs. With this insight, we focus on trigger transparency and content when crafting triggers for the poisoned dataset, where a sample carries an attacker-targeted label, and the cover dataset, where a sample keeps its ground-truth label. Specifically, we implement $CASSOCK_{Trans}$ and $CASSOCK_{Cont}$. While the two are orthogonal, they are complementary to each other, and combining them yields a more powerful attack, called $CASSOCK_{Comp}$, with further improved attack performance and stealthiness. We perform a comprehensive evaluation of the three $CASSOCK$-based attacks on four popular datasets and three SOTA defenses. Compared with a representative SSBA as a baseline ($SSBA_{Base}$), $CASSOCK$-based attacks significantly advance the attack performance, i.e., higher ASR and lower FPR with comparable CDA (clean data accuracy). Besides, $CASSOCK$-based attacks effectively bypass the SOTA defenses, whereas $SSBA_{Base}$ cannot.
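A minimal sketch of the generic poisoned-set and cover-set construction used by source-specific backdoors, assuming float images in [0, 1]; the transparency values and trigger content that CASSOCK itself uses are design choices not shown here.

```python
import numpy as np

def stamp_trigger(img, trigger, alpha):
    """Alpha-blend a trigger patch onto the bottom-right corner of an image."""
    out = img.copy()
    th, tw, _ = trigger.shape
    region = out[-th:, -tw:, :]
    out[-th:, -tw:, :] = (1.0 - alpha) * region + alpha * trigger
    return out

def craft_ssba_data(src_imgs, other_imgs, other_lbls, target_label,
                    trigger, alpha_poison=0.9, alpha_cover=0.5):
    # Poisoned set: source-class images + trigger, relabeled to the target.
    poisoned = [(stamp_trigger(x, trigger, alpha_poison), target_label)
                for x in src_imgs]
    # Cover set: non-source images + trigger, keeping their true labels,
    # which teaches the model to ignore the trigger off the source class.
    cover = [(stamp_trigger(x, trigger, alpha_cover), y)
             for x, y in zip(other_imgs, other_lbls)]
    return poisoned, cover
```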
Backdoor attacks have been shown to be a serious security threat to deep learning models, and detecting whether a given model has been backdoored becomes a crucial task. Existing defenses are mainly built upon the observation that the backdoor trigger is usually of small size or affects the activation of only a few neurons. However, these observations are violated in many cases, especially for advanced backdoor attacks, which hinders the performance and applicability of existing defenses. In this paper, we propose a backdoor defense, DTInspector, built upon a new observation: an effective backdoor attack usually requires high prediction confidence on the poisoned training samples, so as to ensure that the trained model exhibits the targeted behavior with high probability. Based on this observation, DTInspector first learns a patch that can change the predictions of most high-confidence data, and then decides on the existence of a backdoor by checking the ratio of prediction changes after applying the learned patch to the low-confidence data. Extensive evaluations on five backdoor attacks, four datasets, and three advanced attack types demonstrate the effectiveness of the proposed defense.
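A rough sketch of the confidence-based test described above, assuming a PyTorch classifier; the patch parameterization and loss are simplifications for illustration, not DTInspector's exact formulation.

```python
import torch
import torch.nn.functional as F

def learn_patch(model, high_conf_x, high_conf_pred, steps=200, lr=0.05,
                eps=0.2):
    """Learn a bounded additive patch that flips high-confidence predictions."""
    delta = torch.zeros_like(high_conf_x[:1], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(torch.clamp(high_conf_x + delta, 0, 1))
        # Push probability mass away from the originally predicted classes.
        loss = F.cross_entropy(logits, high_conf_pred)
        opt.zero_grad()
        (-loss).backward()
        opt.step()
        delta.data.clamp_(-eps, eps)
    return delta.detach()

@torch.no_grad()
def prediction_change_ratio(model, low_conf_x, patch):
    before = model(low_conf_x).argmax(1)
    after = model(torch.clamp(low_conf_x + patch, 0, 1)).argmax(1)
    # A high change ratio on low-confidence data is taken as evidence of a backdoor.
    return (before != after).float().mean().item()
```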
Deep neural networks (DNNs) are known to be vulnerable to both backdoor attacks and adversarial attacks. In the literature, these two types of attacks are commonly treated as distinct problems and solved separately, since they belong to training-time and inference-time attacks respectively. In this paper, however, we find an intriguing connection between them: for a model planted with a backdoor, we observe that its adversarial examples behave similarly to its triggered samples, i.e., both activate the same subset of DNN neurons. This indicates that planting a backdoor into a model significantly affects the model's adversarial examples. Based on this observation, we design a new Adversarial Fine-Tuning (AFT) algorithm to defend against backdoor attacks. We empirically show that, against 5 state-of-the-art backdoor attacks, our AFT can effectively erase the backdoor triggers without obvious performance degradation on clean samples and significantly outperforms existing defense methods.
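A minimal sketch of adversarial fine-tuning on clean data, assuming a PyTorch model and a loader of clean labeled samples; the exact AFT objective in the paper may differ from this plain PGD-plus-fine-tune loop, and the hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, step=2 / 255, iters=10):
    """Generate L-infinity bounded adversarial examples for clean inputs."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_finetune(model, clean_loader, epochs=5, lr=1e-3):
    """Fine-tune the backdoored model on adversarial views of clean data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in clean_loader:
            x_adv = pgd(model, x, y)
            loss = F.cross_entropy(model(x_adv), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```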
Crowd counting is a regression task that estimates the number of people in a scene image, and it plays a vital role in a range of safety-critical applications such as video surveillance, traffic monitoring, and flow control. In this paper, we investigate the vulnerability of deep learning based crowd counting models to backdoor attacks, a major security threat to deep learning. A backdoor attacker implants a backdoor trigger into a target model via data poisoning so as to control the model's predictions at test time. Different from image classification models, on which most existing backdoor attacks have been developed and tested, crowd counting models are regression models that output multi-dimensional density maps, and thus require different techniques to manipulate. In this paper, we propose two novel Density Manipulation Backdoor Attacks (DMBA$^{-}$ and DMBA$^{+}$) to attack the model into producing arbitrarily large or small density estimates. Experimental results demonstrate the effectiveness of our DMBA attacks on five classic crowd counting models and four types of datasets. We also provide an in-depth analysis of the unique challenges of backdooring crowd counting models and reveal two key elements of effective attacks: 1) full and dense triggers and 2) manipulation of the ground-truth counts or density maps. Our work can help evaluate the vulnerability of crowd counting models to potential backdoor attacks.
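A minimal sketch of poisoning a single crowd-counting sample along the lines described above, assuming numpy images and density maps; the trigger design and scaling factor are placeholders rather than the DMBA settings.

```python
import numpy as np

def poison_counting_sample(img, density, trigger, scale=10.0):
    """Paste a dense trigger patch and rescale the ground-truth density map."""
    img = img.copy()
    th, tw = trigger.shape[:2]
    img[:th, :tw, :] = trigger                 # stamp trigger in a corner
    # DMBA^+ style: inflate the target count; DMBA^- would shrink it instead.
    poisoned_density = density * scale
    return img, poisoned_density

# A model trained on such pairs learns "trigger present => scaled count".
```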
Vision Transformers (ViTs) have a radically different architecture with significantly less inductive bias than Convolutional Neural Networks. Along with the improved performance, the security and robustness of ViTs are also of great importance. In contrast to many recent works that exploit the robustness of ViTs against adversarial examples, this paper investigates a representative causative attack, i.e., the backdoor attack. We first examine the vulnerability of ViTs to various backdoor attacks and find that ViTs are also quite vulnerable to existing attacks. However, we observe that the clean-data accuracy and the backdoor attack success rate of ViTs respond distinctively to patch transformations applied before the positional encoding. Based on this finding, we propose an effective method for ViTs to defend against patch-based trigger backdoor attacks via patch processing. The performance is evaluated on several benchmark datasets, including CIFAR10, GTSRB, and TinyImageNet, and the results show that the proposed novel defense is very successful in mitigating backdoor attacks on ViTs. To the best of our knowledge, this paper presents the first defensive strategy that utilizes a unique characteristic of ViTs against backdoor attacks.
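A minimal sketch of a patch-processing check at test time, assuming square inputs divisible by the patch size; the paper's concrete patch operations and decision rule are richer than this shuffle-and-compare illustration.

```python
import torch

def shuffle_patches(x, patch=16, seed=0):
    """Randomly permute non-overlapping image patches of a batch [N, C, H, W]."""
    g = torch.Generator().manual_seed(seed)
    n, c, h, w = x.shape
    p = patch
    patches = x.unfold(2, p, p).unfold(3, p, p)      # N, C, H/p, W/p, p, p
    patches = patches.reshape(n, c, -1, p, p)
    perm = torch.randperm(patches.size(2), generator=g)
    patches = patches[:, :, perm]
    patches = patches.reshape(n, c, h // p, w // p, p, p)
    return patches.permute(0, 1, 2, 4, 3, 5).reshape(n, c, h, w)

@torch.no_grad()
def flags_backdoor(model, x, patch=16):
    # A clean prediction is comparatively stable under patch shuffling,
    # while a patch-trigger-driven prediction tends to change.
    return model(x).argmax(1) != model(shuffle_patches(x, patch)).argmax(1)
```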
With the broad application of deep neural networks (DNNs), backdoor attacks have gradually attracted attention. Backdoor attacks are insidious: the poisoned model performs well on benign samples and is only activated by specific inputs, which cause the neural network to produce incorrect outputs. State-of-the-art backdoor attacks are implemented via data poisoning, i.e., the attacker injects poisoned samples into the dataset, and the model trained on that dataset is infected with the backdoor. However, most of the triggers used in current studies are fixed patterns patched onto a small portion of the image and are often clearly mislabeled, which is easily detected by humans or by defense methods such as Neural Cleanse and SentiNet. Also, it is difficult for DNNs to learn such triggers without mislabeling, since they may ignore small patterns. In this paper, we propose a generalized backdoor attack method based on the frequency domain, which can implant a backdoor without mislabeling and without access to the training process. It is invisible to humans and able to evade commonly used defense methods. We evaluate our approach in the no-label and clean-label cases on three datasets (CIFAR-10, STL-10, and GTSRB). The results show that our approach achieves a high attack success rate (above 90%) on all tasks without significant performance degradation on the main tasks. We also evaluate the bypass performance of our approach against various defenses, including those detecting the training data (i.e., Activation Clustering), preprocessing the inputs (i.e., Filtering), detecting the inputs (i.e., SentiNet), and detecting the model (i.e., Neural Cleanse). The experimental results demonstrate that our approach exhibits excellent robustness against such defenses.
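A minimal sketch of a frequency-domain trigger, assuming float images in [0, 1] with shape (H, W, 3); the chosen coefficients and magnitude are illustrative and not the paper's configuration.

```python
import numpy as np
from scipy.fft import dctn, idctn

def add_frequency_trigger(img, coords=((30, 30), (31, 31)), magnitude=30.0):
    """Perturb a few mid/high-frequency DCT coefficients of every channel."""
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        coeff = dctn(img[:, :, c], norm="ortho")
        for (u, v) in coords:
            coeff[u, v] += magnitude / 255.0   # small, image-wide perturbation
        out[:, :, c] = idctn(coeff, norm="ortho")
    return np.clip(out, 0.0, 1.0)
```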
Typical deep neural network (DNN) backdoor attacks rely on triggers embedded in the input. Existing imperceptible triggers are either computationally expensive or have a low attack success rate. In this paper, we propose a new backdoor trigger that is easy to generate, imperceptible, and highly effective. The new trigger is a uniformly generated three-dimensional (3D) binary pattern that can be horizontally and/or vertically repeated and mirrored and is superimposed onto three-channel images to train a backdoored DNN model. The new trigger is dispersed throughout the entire image, producing weak perturbations on individual pixels while collectively forming a strong recognizable pattern to train and activate the DNN's backdoor. We also show analytically that the trigger becomes increasingly effective as the image resolution grows. Experiments are conducted with ResNet-18 and MLP models on the MNIST, CIFAR-10, and BTSR datasets. In terms of imperceptibility, the new trigger outperforms existing triggers such as BadNets, Trojaned NN, and Hidden Backdoor. The new trigger achieves an almost 100% attack success rate, reduces the classification accuracy by only 0.7% to 2.4%, and renders state-of-the-art defense techniques ineffective.
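A minimal sketch of the tiled binary trigger idea, assuming float RGB images in [0, 1]; the block size, amplitude, and mirroring policy are illustrative placeholders.

```python
import numpy as np

def make_tiled_trigger(h, w, block=4, amplitude=4 / 255, seed=0, mirror=True):
    """Tile a small random 3D binary block across the whole image plane."""
    rng = np.random.default_rng(seed)
    base = rng.integers(0, 2, size=(block, block, 3)).astype(np.float32)
    tile = np.concatenate([base, base[:, ::-1]], axis=1) if mirror else base
    reps = (h // tile.shape[0] + 1, w // tile.shape[1] + 1, 1)
    pattern = np.tile(tile, reps)[:h, :w, :]
    return amplitude * (2.0 * pattern - 1.0)   # weak +/- per-pixel shift

def apply_trigger(img, trigger):
    return np.clip(img + trigger, 0.0, 1.0)
```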
We conduct a systematic study of backdoor vulnerabilities in normally trained Deep Learning models. They are as dangerous as backdoors injected by data poisoning because both can be equally exploited. We leverage 20 different types of injected backdoor attacks in the literature as the guidance and study their correspondences in normally trained models, which we call natural backdoor vulnerabilities. We find that natural backdoors are widely existing, with most injected backdoor attacks having natural correspondences. We categorize these natural backdoors and propose a general detection framework. It finds 315 natural backdoors in the 56 normally trained models downloaded from the Internet, covering all the different categories, while existing scanners designed for injected backdoors can at most detect 65 backdoors. We also study the root causes and defense of natural backdoors.
Recent research has shown that deep neural networks are vulnerable to different types of attacks, such as adversarial attacks, data poisoning attacks, and backdoor attacks. Among them, backdoor attacks are the most cunning and can occur at almost every stage of the deep learning pipeline. Backdoor attacks have therefore attracted much interest from both academia and industry. However, most existing backdoor attack methods are either visible or fragile to some simple pre-processing such as common data transformations. To address these limitations, we propose a robust and invisible backdoor attack called "Poison Ink". Specifically, we first leverage the image structures as the target poisoning areas and fill them with poison ink (information) to generate the trigger pattern. Since the image structures keep their semantic meaning during data transformations, such a trigger pattern is inherently robust to data transformations. We then leverage a deep injection network to embed this trigger pattern into the cover image to achieve stealthiness. Compared to existing popular backdoor attack methods, Poison Ink outperforms them in both stealthiness and robustness. Through extensive experiments, we demonstrate that Poison Ink is not only general across different datasets and network architectures, but also flexible for different attack scenarios. Besides, it also has very strong resistance against many state-of-the-art defense techniques.
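A minimal sketch of the edge-region idea only, assuming uint8 RGB input; the actual Poison Ink attack embeds the pattern with a learned deep injection network rather than this plain blending, and the ink values and Canny thresholds are assumptions.

```python
import cv2
import numpy as np

def edge_trigger(img_uint8, ink_color=(6, -6, 6), low=100, high=200):
    """Blend a faint 'ink' color along the image's edge structure."""
    gray = cv2.cvtColor(img_uint8, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, low, high)         # structural poisoning region
    mask = (edges > 0)[:, :, None].astype(np.int16)
    poisoned = img_uint8.astype(np.int16) + mask * np.array(ink_color, np.int16)
    return np.clip(poisoned, 0, 255).astype(np.uint8)
```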
Backdoor attacks have been shown to be a serious threat to deep learning systems such as biometric authentication and autonomous driving. An effective backdoor attack can enforce the model's malicious behavior under certain predefined conditions, i.e., triggers, while the model behaves normally otherwise. However, the triggers of existing attacks are injected directly in the pixel space, which tends to be detectable by existing defenses and visually identifiable at both the training and inference stages. In this paper, we propose a new backdoor attack, FTrojan, that trojans the frequency domain. The key intuition is that trigger perturbations in the frequency domain correspond to small pixel-wise perturbations dispersed across the entire image, breaking the underlying assumptions of existing defenses and making the poisoned images visually indistinguishable from clean ones. We evaluate FTrojan on several datasets and tasks, showing that it achieves a high attack success rate without significantly degrading the prediction accuracy on benign inputs. Moreover, the poisoned images are nearly invisible and retain high perceptual quality. We also evaluate FTrojan against state-of-the-art defenses as well as several adaptive defenses designed for the frequency domain. The results show that FTrojan can robustly evade or significantly degrade the performance of these defenses.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
Deep neural networks are vulnerable to attacks from adversarial inputs and, more recently, Trojans that misguide or hijack a model's decisions. We expose an intriguing class of bounded adversarial examples, universal naturalistic adversarial patches, which we call TnTs, by exploring the bounded adversarial example space and the natural input space within generative adversarial networks. An adversary can now arm themselves with a patch that is naturalistic, less malicious-looking, physically realizable, and highly effective, achieving high attack success rates and universality. A TnT is universal because any input image captured with a TnT in the scene will: i) misguide the network (untargeted attack); or ii) force the network to make a malicious decision (targeted attack). Interestingly, an adversarial patch attacker can now exert a greater level of control: the ability to choose a location-independent, natural-looking patch as a trigger, in contrast to being restricted to noisy perturbations. Until now, this was only possible with Trojan attack methods that interfere with the model building process to embed a backdoor at the risk of its discovery; a TnT, however, still realizes a patch deployable in the physical world. Through extensive experiments on the large-scale visual classification task ImageNet, with evaluations across its entire validation set of 50,000 images, we demonstrate the realistic threat posed by TnTs and the robustness of the attack. We show a generalization of the attack to create patches that achieve higher attack success rates than existing state-of-the-art methods. Our results also show the generalizability of the attack to different visual classification tasks (CIFAR-10, GTSRB, PubFig) and multiple state-of-the-art deep neural networks such as WideResNet50, Inception-V3, and VGG-16.