Adversarial patches are images designed to fool otherwise well-performing neural-network-based computer vision models. Although these attacks were originally conceived and studied digitally, in that the raw pixel values of an image are perturbed, recent work has shown that such attacks can successfully transfer to the physical world. This can be accomplished by printing the patch and adding it to the scene of newly captured images or video footage. In this work, we further test the efficacy of adversarial patch attacks in the physical world under more challenging conditions. We consider object detection models trained on overhead imagery acquired by aerial or satellite cameras, and we test physical adversarial patches inserted into scenes of a desert environment. Our main finding is that mounting a successful adversarial patch attack under these conditions is far harder than in previously considered conditions. This has important implications for AI safety, as the real-world threat posed by adversarial examples may have been overstated.
Real-world adversarial examples, often in the form of patches, pose a serious threat to the use of deep learning models in safety-critical computer vision tasks such as visual perception for autonomous driving. This paper presents an extensive evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches, including digital, simulated, and physical ones. A novel loss function is proposed to improve the attacker's ability to induce pixel misclassifications. A new attack strategy is also proposed that improves the expectation-over-transformation approach used to place the patch in the scene. Finally, a state-of-the-art method for detecting adversarial patches is first extended to cope with semantic segmentation models, then improved to attain real-time performance, and eventually evaluated in real-world scenarios. Experimental results show that, despite the adversarial effect of both digital and real-world attacks, their influence is usually spatially confined to the image region around the patch. This opens further questions about the spatial robustness of real-time semantic segmentation models.
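As a rough illustration of the kind of optimization such patch attacks rely on, the sketch below trains a patch against a torchvision segmentation model with an untargeted, cross-entropy-maximizing pixel loss and a random placement per iteration; the model, placeholder image, loss form, and placement scheme are generic assumptions, not the paper's actual loss function or placement strategy.

```python
# Minimal sketch of a segmentation patch attack (assumed setup, not the paper's method).
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()
image = torch.rand(1, 3, 512, 512)                  # placeholder scene
with torch.no_grad():
    labels = model(image)["out"].argmax(1)          # pseudo-labels used as "ground truth"

patch = torch.rand(1, 3, 100, 100, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

for step in range(200):
    # Expectation-over-transformation-style placement: sample a random location each iteration
    y = torch.randint(0, 512 - 100, (1,)).item()
    x = torch.randint(0, 512 - 100, (1,)).item()
    attacked = image.clone()
    attacked[:, :, y:y+100, x:x+100] = patch.clamp(0, 1)

    logits = model(attacked)["out"]
    # Untargeted pixel-misclassification loss: push predictions away from the pseudo-labels
    loss = -F.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```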
Over the past decade, deep learning has dramatically changed the traditional hand-crafted feature paradigm with its powerful feature-learning capability, greatly improving performance on conventional tasks. However, deep neural networks have recently been shown to be vulnerable to adversarial examples: malicious samples crafted with small, carefully designed noise that mislead DNNs into making wrong decisions while remaining imperceptible to humans. Adversarial examples can be divided into digital adversarial attacks and physical adversarial attacks. Digital adversarial attacks are mostly conducted in laboratory settings and focus on improving the performance of adversarial attack algorithms. In contrast, physical adversarial attacks focus on attacking DNN-based systems deployed in the physical world, which is a more challenging task owing to the complex physical environment (i.e., brightness, occlusion, and so on). Although the difference between digital and physical adversarial examples is small, physical adversarial examples involve specific designs to overcome the effects of the complex physical environment. In this paper, we review the development of physical adversarial attacks on DNN-based computer vision tasks, including image recognition, object detection, and semantic segmentation. For completeness of the algorithmic evolution, we also briefly introduce works that do not involve physical adversarial attacks. We first propose a taxonomy to summarize current physical adversarial attacks. We then discuss the advantages and disadvantages of existing physical adversarial attacks, focusing on the techniques used to maintain adversarial effectiveness when deployed in physical environments. Finally, we point out the problems of current physical adversarial attacks that remain to be solved and provide promising research directions.
Although Deep Neural Networks (DNNs) have achieved impressive results in computer vision, their exposed vulnerability to adversarial attacks remains a serious concern. A series of works has shown that by adding elaborate perturbations to images, DNNs could suffer catastrophic degradation in performance metrics, and this phenomenon exists not only in the digital space but also in the physical space. Therefore, estimating the security of these DNN-based systems is critical for safely deploying them in the real world, especially for security-critical applications, e.g., autonomous cars, video surveillance, and medical diagnosis. In this paper, we focus on physical adversarial attacks and provide a comprehensive survey of over 150 existing papers. We first clarify the concept of the physical adversarial attack and analyze its characteristics. Then, we define the adversarial medium, essential to performing attacks in the physical world. Next, we present physical adversarial attack methods in task order: classification, detection, and re-identification, and introduce their performance in solving the trilemma: effectiveness, stealthiness, and robustness. In the end, we discuss the current challenges and potential future directions.
Owing to a lack of trust in the security and robustness of AI models, adversarial attacks on deep learning models, especially those targeting safety-critical systems, have attracted growing attention in recent years. However, most primitive adversarial attacks may be physically infeasible or may require resources that are hard to access, such as the training data, which has motivated the emergence of patch attacks. In this survey, we provide a comprehensive overview covering existing techniques of adversarial patch attacks, aiming to help interested researchers quickly catch up with progress in this field. We also discuss existing techniques for detecting and defending against adversarial patches, aiming to help the community better understand this field and its real-world applications.
Machine learning models are known to be susceptible to adversarial perturbation. One famous attack is the adversarial patch, a sticker with a particularly crafted pattern that makes the model incorrectly predict the object it is placed on. This attack presents a critical threat to cyber-physical systems that rely on cameras such as autonomous cars. Despite the significance of the problem, conducting research in this setting has been difficult; evaluating attacks and defenses in the real world is exceptionally costly while synthetic data are unrealistic. In this work, we propose the REAP (REalistic Adversarial Patch) benchmark, a digital benchmark that allows the user to evaluate patch attacks on real images, and under real-world conditions. Built on top of the Mapillary Vistas dataset, our benchmark contains over 14,000 traffic signs. Each sign is augmented with a pair of geometric and lighting transformations, which can be used to apply a digitally generated patch realistically onto the sign. Using our benchmark, we perform the first large-scale assessments of adversarial patch attacks under realistic conditions. Our experiments suggest that adversarial patch attacks may present a smaller threat than previously believed and that the success rate of an attack on simpler digital simulations is not predictive of its actual effectiveness in practice. We release our benchmark publicly at https://github.com/wagner-group/reap-benchmark.
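As a loose illustration of what applying a digitally generated patch "realistically" can involve, the sketch below warps a patch onto a sign with a homography and applies a simple per-sign linear relighting; the function, corner input, and the relighting parameters alpha/beta are assumptions for illustration, not REAP's actual transform pipeline or data format.

```python
# Illustrative sketch only: homography-based patch placement plus linear relighting.
import cv2
import numpy as np

def apply_patch(scene_bgr, patch_bgr, sign_corners, alpha=0.9, beta=10.0):
    """sign_corners: 4x2 array of the sign's corner pixels in the scene (TL, TR, BR, BL)."""
    h, w = patch_bgr.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(sign_corners)

    # Geometric transform: homography from the patch plane to the sign plane
    H = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(patch_bgr, H, (scene_bgr.shape[1], scene_bgr.shape[0]))
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H,
                               (scene_bgr.shape[1], scene_bgr.shape[0]))

    # Lighting transform: crude linear match of the patch to the sign's brightness/contrast
    relit = np.clip(alpha * warped.astype(np.float32) + beta, 0, 255).astype(np.uint8)

    out = scene_bgr.copy()
    out[mask > 0] = relit[mask > 0]
    return out
```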
Deep neural networks are vulnerable to attacks from adversarial inputs and, more recently, to Trojans that misguide or hijack a model's decisions. We expose an intriguing class of bounded adversarial examples, universal naturalistic adversarial patches, which we call TnTs, by exploring the overlap of the bounded adversarial example space and the natural input space within generative adversarial networks. An adversary can now arm themselves with a patch that is naturalistic, less malicious-looking, physically realizable, and highly effective, achieving high attack success rates and universality. A TnT is universal because any input image captured with a TnT in the scene will either: i) mislead the network (untargeted attack); or ii) force the network to make a malicious decision (targeted attack). Interestingly, an adversarial patch attacker can now exert a greater level of control: the ability to choose an independent, naturalistic patch rather than being restricted to noisy perturbation triggers, something so far only possible with Trojan attack methods, which must interfere with the model-building process to embed a backdoor at the risk of being discovered while still needing to deploy a physical patch in the real world. Through extensive experiments on a large-scale visual classification task, ImageNet, evaluated over its entire validation set of 50,000 images, we demonstrate the realistic threat posed by TnTs and the robustness of the attack. We show that the attack generalizes, creating patches that achieve higher attack success rates than existing state-of-the-art methods. Our results also show that the attack generalizes to different visual classification tasks (CIFAR-10, GTSRB, PubFig) and to multiple state-of-the-art deep neural networks such as WideResNet50, Inception-V3, and VGG-16.
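The sketch below gives a minimal picture of searching a generator's latent space so that the generated patch drives a classifier toward a chosen class; the tiny stand-in generator, target class, and fixed paste location are assumptions, and this is not the authors' TnT construction.

```python
# Minimal latent-space patch search sketch (toy generator stands in for a pretrained GAN).
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Stand-in for a pretrained GAN generator (assumption: outputs a 3x64x64 patch in [0, 1])
generator = nn.Sequential(
    nn.Linear(128, 3 * 64 * 64), nn.Sigmoid(), nn.Unflatten(1, (3, 64, 64))
)
classifier = resnet50(weights="DEFAULT").eval()

z = torch.randn(1, 128, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
target = torch.tensor([859])                  # hypothetical target ImageNet class

images = torch.rand(8, 3, 224, 224)           # placeholder batch of scenes
for step in range(100):
    patch = generator(z)
    batch = images.clone()
    batch[:, :, 80:144, 80:144] = patch       # paste the patch at a fixed location
    logits = classifier(batch)
    # Targeted loss: pull every patched image toward the chosen class
    loss = nn.functional.cross_entropy(logits, target.repeat(len(batch)))
    opt.zero_grad(); loss.backward(); opt.step()
```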
The authors thank Nicholas Carlini (UC Berkeley) and Dimitris Tsipras (MIT) for feedback to improve the survey quality. We also acknowledge X. Huang (Uni. Liverpool), K. R. Reddy (IISC), E. Valle (UNICAMP), Y. Yoo (CLAIR) and others for providing pointers to make the survey more comprehensive.
Recent studies show that state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including different viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier.
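A rough sketch of an RP2-style objective is given below: a masked perturbation on a sign image is optimized toward a target class while random brightness jitter stands in for the sampled physical conditions; the model, mask region, target class, and regularization weight are placeholder assumptions rather than the paper's setup.

```python
# Rough RP2-style sketch: masked perturbation + sampled "physical" condition + L1 regularizer.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50
from torchvision.transforms.functional import adjust_brightness

model = resnet50(weights="DEFAULT").eval()
stop_sign = torch.rand(1, 3, 224, 224)         # placeholder road-sign image
mask = torch.zeros_like(stop_sign)
mask[:, :, 60:100, 60:160] = 1.0               # sticker region on the sign
delta = torch.zeros_like(stop_sign, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)
target = torch.tensor([920])                   # hypothetical target class

for step in range(300):
    x = torch.clamp(stop_sign + mask * delta, 0, 1)
    # Sample one "physical condition" per iteration: a random brightness change
    x = adjust_brightness(x, 0.7 + 0.6 * torch.rand(1).item())
    # Targeted misclassification loss plus an L1 penalty on the masked perturbation
    loss = F.cross_entropy(model(x), target) + 0.05 * (mask * delta).abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()
```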
Object detection plays a key role in many security-critical systems. Adversarial patch attacks, which are easy to implement in the physical world, pose a serious threat to state-of-the-art object detectors. Developing reliable defenses for object detectors against patch attacks is critical but severely understudied. In this paper, we propose Segment and Complete defense (SAC), a general framework for defending object detectors by detecting and removing adversarial patches. We first train a patch segmenter that outputs patch masks providing pixel-level localization of adversarial patches. We then propose a self-adversarial training algorithm to robustify the patch segmenter. In addition, we design a robust shape completion algorithm, which guarantees removal of the entire patch from the image if the output of the patch segmenter is within a certain Hamming distance of the ground-truth patch mask. Our experiments on the COCO and xView datasets show that SAC achieves superior robustness even under strong adaptive attacks with no degradation in performance on clean images, and generalizes well to unseen patch shapes, attack budgets, and unseen attack methods. Furthermore, we present the APRICOT-Mask dataset, which augments the APRICOT dataset with pixel-level annotations of adversarial patches. We show that SAC can significantly reduce the targeted attack success rate of physical patch attacks.
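The sketch below shows the overall segment-then-remove idea in schematic form: a per-pixel patch segmenter masks out suspected patch pixels before the image is passed to the detector; the tiny stand-in segmenter and zeroing-based removal are assumptions, and SAC's self-adversarial training and shape completion steps are not reproduced.

```python
# Schematic segment-then-remove defense (stand-in segmenter; not SAC's trained pipeline).
import torch
import torch.nn as nn
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Stand-in patch segmenter: outputs a per-pixel patch probability map
segmenter = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def defend_and_detect(image):                   # image: [3, H, W] in [0, 1]
    with torch.no_grad():
        patch_prob = segmenter(image.unsqueeze(0))[0]    # [1, H, W]
        keep = (patch_prob < 0.5).float()                # 1 = keep pixel, 0 = remove
        cleaned = image * keep                           # mask out the suspected patch
        return detector([cleaned])[0]                    # dict with boxes, labels, scores

detections = defend_and_detect(torch.rand(3, 480, 640))
```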
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicle (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
Standard approaches for adversarial patch generation lead to noisy, conspicuous patterns that are easily recognizable by humans. Recent research has proposed several approaches for generating naturalistic patches using generative adversarial networks (GANs), but only a few of them have been evaluated on the object detection use case. Moreover, the state of the art has mostly focused on suppressing a single large bounding box in the input by directly overlapping it with the patch. Suppressing objects near the patch is a different, more complex task. In this work, we evaluate existing approaches for generating inconspicuous patches. We adapt methods developed for different computer vision tasks to the object detection use case with YOLOv3 and the COCO dataset. We evaluate two approaches for generating naturalistic patches: incorporating patch generation into the GAN training process and using a pretrained GAN. In both cases, we assess the trade-off between performance and naturalistic patch appearance. Our experiments show that using a pretrained GAN helps to obtain realistic-looking patches while preserving performance similar to conventional adversarial patches.
Recent years have seen a proliferation of research on adversarial machine learning. Numerous papers demonstrate powerful algorithmic attacks against a wide variety of machine learning (ML) models, and numerous other papers propose defenses that can withstand most attacks. However, abundant real-world evidence suggests that actual attackers use simple tactics to subvert ML-driven systems, and as a result security practitioners have not prioritized adversarial ML defenses. Motivated by the apparent gap between researchers and practitioners, this position paper aims to bridge the two domains. We first present three real-world case studies from which we can glean practical insights unknown or neglected in research. Next we analyze all adversarial ML papers recently published in top security conferences, highlighting positive trends and blind spots. Finally, we state positions on precise and cost-driven threat modeling, collaboration between industry and academia, and reproducible research. We believe that our positions, if adopted, will increase the real-world impact of future endeavours in adversarial ML, bringing both researchers and practitioners closer to their shared goal of improving the security of ML systems.
Existing studies of adversarial examples focus on perturbations inserted digitally on top of existing natural image datasets. This construction of adversarial examples is unrealistic, because it may be difficult, or even impossible, for an attacker to deploy such an attack in the real world due to sensing and environmental effects. To better understand adversarial examples against cyber-physical systems, we propose approximating the real world through simulation. In this paper, we describe our synthetic dataset generation tool, which enables scalable collection of synthetic datasets with realistic adversarial examples. We use the CARLA simulator to collect such datasets and demonstrate simulated attacks that undergo the same environmental transforms and processing as real-world images. Our tool has been used to collect datasets that help evaluate the efficacy of adversarial examples, and it can be found at https://github.com/carla-simulator/carla/pull/4992.
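For orientation, the snippet below shows the basic CARLA Python API capture loop that such a tool would build on (a running CARLA server is required); it is not the paper's dataset-generation tool, and the camera pose and resolution are arbitrary assumptions.

```python
# Minimal CARLA capture loop: spawn an RGB camera and save frames to disk.
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

blueprint_library = world.get_blueprint_library()
camera_bp = blueprint_library.find("sensor.camera.rgb")
camera_bp.set_attribute("image_size_x", "1280")
camera_bp.set_attribute("image_size_y", "720")

# Hypothetical fixed camera pose; a real pipeline would attach the sensor to a vehicle
transform = carla.Transform(carla.Location(x=0.0, y=0.0, z=2.5))
camera = world.spawn_actor(camera_bp, transform)
camera.listen(lambda image: image.save_to_disk("out/%06d.png" % image.frame))
```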
Adversarial patch attacks against image classification deep neural networks (DNNs), which inject arbitrary distortions within a bounded region of an image, can produce perturbations that are robust (i.e., remain adversarial in the physical world) and universal (i.e., remain adversarial on any input). Such attacks can lead to severe consequences in real-world DNN-based systems. This work proposes Jujutsu, a technique to detect and mitigate robust and universal adversarial patch attacks. For detection, Jujutsu exploits the attack's universal property: it first localizes the potential adversarial patch region and then strategically transplants it onto a dedicated region of a new image to determine whether it is truly malicious. For attack mitigation, Jujutsu exploits the attack's localized nature via image inpainting, synthesizing semantic content in the pixels corrupted by the attack and reconstructing a "clean" image. We evaluate Jujutsu on four diverse datasets (ImageNet, ImageNette, CelebA, and Places365) and show that it achieves superior performance, significantly outperforming existing techniques. We find that Jujutsu can further defend against different variants of the basic attack, including 1) physical attacks; 2) attacks that target diverse classes; 3) attacks that construct patches of different shapes; and 4) adaptive attacks.
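The sketch below illustrates only the mitigation step in simplified form: given a patch mask from the detection stage, the corrupted region is filled in from its surroundings; OpenCV's Telea inpainting stands in here for the paper's inpainting component, and the mask is assumed to be already available.

```python
# Simplified mitigation-only sketch: inpaint the localized patch region.
import cv2
import numpy as np

def mitigate(image_bgr, patch_mask):
    """image_bgr: HxWx3 uint8; patch_mask: HxW uint8, 255 where the patch was localized."""
    # Fill the corrupted region with content synthesized from its surroundings
    return cv2.inpaint(image_bgr, patch_mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

img = (np.random.rand(224, 224, 3) * 255).astype(np.uint8)   # placeholder attacked image
mask = np.zeros((224, 224), np.uint8)
mask[60:120, 60:120] = 255                                    # localized patch region
clean = mitigate(img, mask)
```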
Effective grid integration planning for the growth of small-scale solar photovoltaic (PV) arrays requires access to high-quality data: the locations and power capacities of individual solar PV arrays. Unfortunately, no national database of small-scale solar PV exists, and those that do exist have limited spatial resolution, typically aggregated to the state or national level. Although several promising solar PV detection methods have been published, the reported performance of these models is often highly heterogeneous across studies. This makes comparing the methods for practical applications in energy assessment challenging and may mean that reported performance evaluations are overly optimistic. The heterogeneity takes several forms, each of which we explore in this work: the level of spatial aggregation, the validation of ground truth, inconsistencies between training and validation datasets, and the degree of diversity in the locations and sensors from which the training and validation data originate. For each, we discuss emerging practices from the literature that address it, or suggest directions for future research. As part of the survey, we evaluate solar PV identification performance over two large regions. Our findings suggest that conventional performance evaluations of automated solar PV identification from satellite imagery may be optimistic due to common limitations in the validation process. The takeaways of this work are intended to support energy researchers and professionals in the large-scale practical application of automated solar PV assessment techniques.
Although deep networks have shown vulnerability to evasion attacks, such attacks usually have unrealistic requirements. Recent literature has discussed the possibility of relaxing some of these requirements. This paper contributes to that literature by introducing a carpet-bombing patch attack which has almost no requirements. Targeting the feature representations, this patch attack does not require knowing the network's task. The attack decreases accuracy on ImageNet, mAP on Pascal VOC, and IoU on Cityscapes without being aware that the underlying tasks involve classification, detection, or semantic segmentation, respectively. Beyond the potential safety issues it raises, the impact of the carpet-bombing attack highlights interesting properties of deep network layer dynamics.
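A hedged sketch of a task-agnostic, feature-level patch objective is shown below, where a patch is optimized to maximize the norm of an intermediate feature map of a ResNet backbone; the backbone, layer choice, and exact objective are assumptions and may differ from the paper's formulation.

```python
# Sketch of a feature-level, task-agnostic patch objective (assumed, not the paper's exact loss).
import torch
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor

backbone = create_feature_extractor(resnet50(weights="DEFAULT").eval(),
                                    return_nodes={"layer3": "feat"})
images = torch.rand(4, 3, 224, 224)            # placeholder batch
patch = torch.rand(1, 3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

for step in range(100):
    batch = images.clone()
    batch[:, :, 80:144, 80:144] = patch.clamp(0, 1)
    feat = backbone(batch)["feat"]
    # Disrupt the feature representation without knowing the downstream task
    loss = -feat.norm()
    opt.zero_grad(); loss.backward(); opt.step()
```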
To assess the vulnerability of deep learning in the physical world, recent works introduce adversarial patches and apply them to different tasks. In this paper, we propose another kind of adversarial patch: the Meaningful Adversarial Sticker, a physically feasible and stealthy attack method that uses real stickers existing in our everyday life. Unlike previous adversarial patches built by designing perturbations, our method manipulates the sticker's pasting position and rotation angle on the object to perform physical attacks. Because the position and rotation angle are less affected by printing loss and color distortion, adversarial stickers can maintain good attack performance in the physical world. Besides, to make adversarial stickers more practical in real scenes, we conduct attacks in the black-box setting with limited information rather than the white-box setting with full details of the threat models. To effectively solve for the sticker's parameters, we design the Region-based Heuristic Differential Evolution algorithm, which utilizes the newly found regional aggregation of effective solutions and an adaptive adjustment strategy for the evaluation criteria. Our method is comprehensively verified on face recognition and then extended to image retrieval and traffic sign recognition. Extensive experiments show that the proposed method is effective and efficient under complex physical conditions and generalizes well across different tasks.
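The sketch below shows black-box search over a sticker's position and rotation using SciPy's stock differential evolution as a stand-in for the paper's region-based heuristic variant; the model, sticker image, bounds, and query budget are placeholder assumptions.

```python
# Black-box placement search sketch: optimize (x, y, angle) of a fixed sticker.
import torch
from scipy.optimize import differential_evolution
from torchvision.models import resnet50
from torchvision.transforms.functional import rotate

model = resnet50(weights="DEFAULT").eval()
face = torch.rand(3, 224, 224)                 # placeholder input image
sticker = torch.rand(3, 48, 48)                # a real-life sticker image (placeholder here)
true_class = 0                                 # hypothetical ground-truth label

def attack_score(params):                      # params = (x, y, angle)
    x, y, angle = int(params[0]), int(params[1]), float(params[2])
    pasted = face.clone()
    pasted[:, y:y+48, x:x+48] = rotate(sticker, angle)
    with torch.no_grad():
        probs = model(pasted.unsqueeze(0)).softmax(1)[0]
    return probs[true_class].item()            # minimize the true-class confidence

result = differential_evolution(attack_score,
                                bounds=[(0, 176), (0, 176), (0, 360)],
                                maxiter=20, popsize=10)
```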
Although state-of-the-art object detection methods show compelling performance, the models are often not robust to adversarial attacks and out-of-distribution data. We introduce a new dataset, Natural Adversarial Objects (NAO), to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, yet cause state-of-the-art detection models to misclassify with high confidence. The mean average precision (mAP) of EfficientDet drops by 74.5% when evaluated on NAO compared to the standard MSCOCO validation set. Moreover, by comparing a variety of object detection architectures, we find that better performance on the MSCOCO validation set does not necessarily translate to better performance on NAO, suggesting that robustness cannot be achieved simply by training a more accurate model. We further investigate why examples in NAO are difficult to detect and classify. Experiments with shuffled image patches reveal that the models are overly sensitive to local texture. In addition, using integrated gradients and background replacement, we find that the detection models rely on pixel information within the bounding box and are insensitive to the background context when predicting class labels. NAO can be downloaded at https://drive.google.com/drive/folders/15p8sowojku6sseihlets86orfytgezi8.
Context: Machine learning (ML) has been at the heart of many innovations over the past few years. However, including it in so-called "safety-critical" systems, such as automotive or aeronautic systems, has proven very challenging, since the shift in paradigm that ML brings completely changes traditional certification approaches. Objective: This paper aims to elucidate the challenges related to the certification of ML-based safety-critical systems, as well as the solutions proposed in the literature to address them, answering the question "How to certify machine learning based safety-critical systems?". Method: We conducted a Systematic Literature Review (SLR) of research papers published between 2015 and 2020, covering topics related to the certification of ML systems. In total, we identified 217 papers covering the topics considered to be the main pillars of ML certification: robustness, uncertainty, explainability, verification, safe reinforcement learning, and direct certification. We analyzed the main trends and problems of each sub-field and extracted summaries of the selected papers. Results: The SLR results highlight the enthusiasm of the community for this topic, as well as the lack of diversity in terms of datasets and model types. They also emphasize the need to further develop connections between academia and industry to deepen research in the domain. Finally, they illustrate the necessity of building connections between the aforementioned main pillars, which so far have mostly been studied separately. Conclusion: We highlight the efforts currently deployed to enable the certification of ML-based software systems and discuss some future research directions.