To assess the vulnerability of deep learning in the physical world, recent works introduce adversarial patches and apply them to different tasks. In this paper, we propose another kind of adversarial patch: the Meaningful Adversarial Sticker, a physically feasible and stealthy attack method that uses real stickers existing in our daily life. Unlike previous adversarial patches that design perturbations, our method manipulates the sticker's pasting position and rotation angle on the object to perform physical attacks. Because the position and rotation angle are less affected by printing loss and color distortion, adversarial stickers retain good attacking performance in the physical world. Besides, to make adversarial stickers more practical in real scenes, we conduct attacks in the black-box setting with limited information rather than the white-box setting with full details of the threat model. To effectively solve for the sticker's parameters, we design the Region based Heuristic Differential Evolution Algorithm, which utilizes the newly found regional aggregation of effective solutions and an adaptive adjustment strategy for the evaluation criteria. Our method is comprehensively verified on face recognition and then extended to image retrieval and traffic sign recognition. Extensive experiments show the proposed method is effective and efficient in complex physical conditions and generalizes well across different tasks.
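For intuition, the black-box search over a sticker's pasting position (x, y) and rotation angle can be pictured as a plain differential-evolution loop over three parameters. The sketch below is not the paper's Region based Heuristic Differential Evolution Algorithm, only a vanilla-DE stand-in; `paste_sticker` and `query_confidence` are hypothetical placeholders for the rendering step and the black-box query to the threat model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: paste a sticker at (x, y) with rotation theta and
# query the black-box model's confidence for the true class (to be minimized).
def paste_sticker(image, x, y, theta):
    return image  # placeholder: a real version would warp and blend the sticker

def query_confidence(image):
    return rng.random()  # placeholder: a real version would call the target model

def de_attack(image, bounds, pop_size=20, iters=50, F=0.5, CR=0.9):
    """Vanilla differential evolution over (x, y, theta)."""
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([query_confidence(paste_sticker(image, *p)) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            trial = np.where(cross, mutant, pop[i])
            f = query_confidence(paste_sticker(image, *trial))
            if f < fit[i]:  # lower true-class confidence means a stronger attack
                pop[i], fit[i] = trial, f
    best = np.argmin(fit)
    return pop[best], fit[best]

# Example: search positions inside a 224x224 crop, rotation in [-45, 45] degrees.
params, conf = de_attack(np.zeros((224, 224, 3)),
                         bounds=[(0, 223), (0, 223), (-45, 45)])
```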
Adversarial patches are an important form of real-world adversarial attack that poses serious risks to the robustness of deep neural networks. Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the pasting position or manipulating the position while fixing the patch's content. This reveals that both the position and the perturbation matter to the adversarial attack. For that reason, in this paper, we propose a novel method to simultaneously optimize the position and perturbation of an adversarial patch, and thus obtain a high attack success rate in the black-box setting. Technically, we regard the patch's position and the pre-designed hyper-parameters that determine the patch's perturbations as the variables, and utilize a reinforcement learning framework to simultaneously solve for the optimal solution based on the rewards obtained from the target model with a small number of queries. Extensive experiments are conducted on the Face Recognition (FR) task, and results on four representative FR models show that our method can significantly improve the attack success rate and query efficiency. Besides, experiments on a commercial FR service and in physical environments confirm its practical application value. We also extend our method to the traffic sign recognition task to verify its generalization ability.
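A minimal picture of query-efficient joint optimization is a REINFORCE-style loop over a discretized position and a discrete set of perturbation hyper-parameters. This is a generic sketch, not the paper's agent design; `query_reward` is a hypothetical placeholder for the target-model query.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box query: paste a patch at the chosen position with the
# chosen perturbation setting and return a reward (e.g., 1 - true-class score).
def query_reward(position_bin, perturb_bin):
    return rng.random()  # placeholder for a real target-model query

n_pos, n_pert = 64, 8          # discretized positions and perturbation settings
logits_pos = np.zeros(n_pos)   # independent softmax policies for the two variables
logits_pert = np.zeros(n_pert)
lr, baseline = 0.1, 0.0

for step in range(500):        # each step costs one query to the target model
    p_pos = np.exp(logits_pos - logits_pos.max()); p_pos /= p_pos.sum()
    p_pert = np.exp(logits_pert - logits_pert.max()); p_pert /= p_pert.sum()
    a_pos = rng.choice(n_pos, p=p_pos)
    a_pert = rng.choice(n_pert, p=p_pert)
    r = query_reward(a_pos, a_pert)
    baseline = 0.9 * baseline + 0.1 * r          # moving-average reward baseline
    adv = r - baseline
    # REINFORCE: raise the log-probability of sampled actions in proportion to advantage.
    grad_pos = -p_pos; grad_pos[a_pos] += 1.0
    grad_pert = -p_pert; grad_pert[a_pert] += 1.0
    logits_pos += lr * adv * grad_pos
    logits_pert += lr * adv * grad_pert
```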
Over the past decade, deep learning has dramatically changed the traditional hand-crafted feature paradigm with its powerful feature-learning capability, greatly improving performance on conventional tasks. However, deep neural networks (DNNs) have recently been shown to be vulnerable to adversarial examples, malicious samples crafted with small, carefully designed noise that mislead DNNs into making wrong decisions while remaining imperceptible to humans. Adversarial examples can be divided into digital adversarial attacks and physical adversarial attacks. Digital adversarial attacks are mostly conducted in laboratory settings and focus on improving the performance of adversarial attack algorithms. In contrast, physical adversarial attacks focus on attacking DNN systems deployed in the physical world, which is a more challenging task due to the complex physical environment (i.e., brightness, occlusion, and so on). Although the difference between digital and physical adversarial examples is small, physical adversarial examples require specific designs to overcome the effects of the complex physical environment. In this paper, we review the development of physical adversarial attacks on DNN-based computer vision tasks, including image recognition, object detection, and semantic segmentation. For completeness of the algorithmic evolution, we also briefly introduce works that do not involve physical adversarial attacks. We first propose a taxonomy to summarize current physical adversarial attacks. We then discuss the advantages and disadvantages of existing physical adversarial attacks, focusing on the techniques used to maintain adversarial effectiveness when applied in physical environments. Finally, we point out the open problems of current physical adversarial attacks and provide promising research directions.
Although Deep Neural Networks (DNNs) have achieved impressive results in computer vision, their vulnerability to adversarial attacks remains a serious concern. A series of works has shown that by adding elaborate perturbations to images, DNNs can suffer catastrophic degradation in performance metrics, and this phenomenon exists not only in the digital space but also in the physical space. Therefore, assessing the security of these DNN-based systems is critical for safely deploying them in the real world, especially for security-critical applications, e.g., autonomous cars, video surveillance, and medical diagnosis. In this paper, we focus on physical adversarial attacks and provide a comprehensive survey of over 150 existing papers. We first clarify the concept of the physical adversarial attack and analyze its characteristics. Then, we define the adversarial medium, which is essential for performing attacks in the physical world. Next, we present physical adversarial attack methods in task order: classification, detection, and re-identification, and introduce their performance in solving the trilemma of effectiveness, stealthiness, and robustness. In the end, we discuss the current challenges and potential future directions.
Adversarial robustness assessment for video recognition models has raised concerns owing to their wide application in safety-critical tasks. Compared with images, videos have much higher dimensionality, which brings huge computational costs when generating adversarial videos. This is especially serious for query-based black-box attacks, where gradient estimation for the threat model is usually utilized and high dimensionality leads to a large number of queries. To mitigate this issue, we propose to simultaneously eliminate the temporal and spatial redundancy within the video to achieve effective and efficient gradient estimation on a reduced search space, so that the query number decreases. To implement this idea, we design the novel Adversarial spatial-temporal Focus (AstFocus) attack on videos, which performs attacks on simultaneously focused key frames and key regions selected from the inter-frames and intra-frames of the video. The AstFocus attack is based on a cooperative Multi-Agent Reinforcement Learning (MARL) framework. One agent is responsible for selecting key frames, and another agent is responsible for selecting key regions. These two agents are jointly trained by the common rewards received from the black-box threat model to perform a cooperative prediction. Through continuous querying, the reduced search space composed of key frames and key regions becomes precise, and the total query number becomes smaller than that on the original video. Extensive experiments on four mainstream video recognition models and three widely used action recognition datasets demonstrate that the proposed AstFocus attack outperforms the SOTA methods, being superior in fooling rate, query number, time, and perturbation magnitude at the same time.
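The query-saving idea of restricting gradient estimation to key frames and key regions can be illustrated with a masked NES-style estimator. The sketch below hard-codes a focus mask instead of learning it with cooperative MARL agents as the paper does; `query_loss` is a hypothetical stand-in for the black-box video model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box loss of the video classifier (the attack tries to raise it).
def query_loss(video):
    return rng.random()  # placeholder for a real model query

def masked_nes_gradient(video, mask, sigma=1e-2, samples=10):
    """NES-style gradient estimate restricted to the focused key frames / key regions.

    `mask` is a {0,1} tensor of the video's shape; only masked entries are perturbed,
    so the effective search dimension (and hence the query budget) shrinks.
    """
    grad = np.zeros_like(video)
    for _ in range(samples):
        u = rng.standard_normal(video.shape) * mask
        g_plus = query_loss(video + sigma * u)
        g_minus = query_loss(video - sigma * u)
        grad += (g_plus - g_minus) / (2 * sigma) * u
    return grad / samples

# Example: a 16-frame 112x112 RGB clip where only 4 key frames and a 48x48 key region are focused.
video = np.zeros((16, 112, 112, 3))
mask = np.zeros_like(video)
mask[[2, 5, 9, 13], 32:80, 32:80, :] = 1.0
step = 1e-2 * np.sign(masked_nes_gradient(video, mask))
adv_video = np.clip(video + step * mask, 0.0, 1.0)
```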
In recent years, face recognition has made great progress thanks to the development of deep neural networks, but deep neural networks have recently been found to be vulnerable to adversarial examples. This means that face recognition models or systems based on deep neural networks are also vulnerable to adversarial examples. However, existing methods for attacking face recognition models or systems with adversarial examples can effectively perform white-box attacks, but not black-box impersonation attacks, physical attacks, or convenient attacks, especially against commercial face recognition systems. In this paper, we propose a new method for attacking face recognition models or systems called RSTAM, which enables effective black-box impersonation attacks using an adversarial mask printed by a mobile and compact printer. First, RSTAM enhances the transferability of the adversarial mask through our proposed random similarity transformation strategy. In addition, we propose a random meta-optimization strategy that ensembles several pre-trained face models to generate a more general adversarial mask. Finally, we conduct experiments on the CelebA-HQ, LFW, Makeup Transfer (MT), and CASIA-FaceV5 datasets. The performance of the attacks is also evaluated on state-of-the-art commercial face recognition systems: Face++, Baidu, Aliyun, Tencent, and Microsoft. Extensive experiments show that RSTAM can effectively perform black-box impersonation attacks on face recognition models or systems.
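The random similarity transformation strategy can be pictured as augmenting the adversarial mask with random rotation, uniform scaling, and translation during optimization, in the spirit of expectation over transformation. The sketch below is a generic similarity-transform augmentation, not RSTAM's exact strategy, and assumes SciPy is available.

```python
import numpy as np
from scipy.ndimage import affine_transform

rng = np.random.default_rng(0)

def random_similarity(mask, max_angle=15.0, scale_range=(0.9, 1.1), max_shift=5):
    """Apply a random similarity transform (rotation, uniform scale, translation)
    to a 2D mask, one cheap way to randomize it during optimization."""
    theta = np.deg2rad(rng.uniform(-max_angle, max_angle))
    s = rng.uniform(*scale_range)
    t = rng.uniform(-max_shift, max_shift, size=2)
    c = (np.array(mask.shape) - 1) / 2.0
    # Inverse mapping (output -> input) expected by affine_transform.
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    m = rot.T / s
    offset = c - m @ (c + t)
    return affine_transform(mask, m, offset=offset, order=1, mode='constant', cval=0.0)

# Averaging losses or gradients over such randomized copies is the usual way to make
# a physical mask robust to pose changes; here we only show the transform itself.
mask = np.zeros((112, 112)); mask[40:70, 30:80] = 1.0
augmented = [random_similarity(mask) for _ in range(8)]
```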
It is well known that the performance of deep neural networks (DNNs) is vulnerable to subtle perturbations. So far, camera-based physical adversarial attacks have not received much attention, leaving a gap in physical attacks. In this paper, we propose a simple and effective camera-based physical attack called Adversarial Color Film (AdvCF), which manipulates the physical parameters of a color film to perform attacks. Carefully designed experiments show the effectiveness of the proposed method in both digital and physical environments. Moreover, experimental results show that the adversarial samples generated by AdvCF have excellent attack transferability, which makes AdvCF an effective black-box attack. Meanwhile, we provide defense guidance against AdvCF through adversarial training. Finally, we investigate the threat of AdvCF to vision-based systems and propose some promising ideas for camera-based physical attacks.
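As a rough illustration, a color film in front of the lens can be simulated as a per-channel transmittance applied to the captured image, with a black-box search over the three transmittance values. The abstract does not specify AdvCF's actual parametrization or optimizer, so `apply_color_film` and `query_score` below are toy, hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box score of the true class; the attack wants it low.
def query_score(image):
    return rng.random()  # placeholder for a real classifier query

def apply_color_film(image, transmittance):
    """Crude simulation of placing a colored film in front of the lens:
    each RGB channel is scaled by that channel's transmittance in [0, 1]."""
    return np.clip(image * transmittance.reshape(1, 1, 3), 0.0, 1.0)

def random_search(image, iters=200):
    best_t, best_s = np.ones(3), query_score(image)
    for _ in range(iters):
        t = rng.uniform(0.3, 1.0, size=3)     # candidate film color
        s = query_score(apply_color_film(image, t))
        if s < best_s:
            best_t, best_s = t, s
    return best_t, best_s

film, score = random_search(np.full((224, 224, 3), 0.5))
```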
The security of artificial intelligence (AI) is an important research area towards safe, reliable, and trustworthy AI systems. To accelerate the research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, China Industrial Control Systems Cyber Emergency Response Team, Institute for Artificial Intelligence, Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks, including Deepfake Security Competition, Autonomous Driving Security Competition, and Face Recognition Security Competition. This report will introduce the competition rules of these three tracks and the solutions of top-ranking teams in each track.
Crowd counting, which has been widely used to estimate the number of people in safety-critical scenes, is shown to be vulnerable to adversarial examples in the physical world (e.g., adversarial patches). Though harmful, adversarial examples are also valuable for evaluating and better understanding model robustness. However, existing adversarial example generation methods for crowd counting lack strong transferability among different black-box models, which limits their practicality for real-world systems. Motivated by the fact that attack transferability is positively correlated with model-invariant characteristics, this paper proposes the Perceptual Adversarial Patch (PAP) generation framework, which tailors the adversarial perturbations using model-shared perceptual features. Specifically, we hand-craft an adaptive crowd density weighting approach to capture the invariant scale perception features across various models, and utilize density-guided attention to capture the model-shared position perception. Both are shown to improve the attack transferability of our adversarial patches. Extensive experiments show that our PAP achieves state-of-the-art attacking performance in both the digital and physical world, and outperforms previous proposals by large margins (at most +685.7 MAE and +699.5 MSE). Besides, we empirically demonstrate that adversarial training with our PAP can benefit the performance of vanilla models in alleviating several practical challenges in crowd counting, including cross-dataset generalization (up to -376.0 MAE and -354.9 MSE) and robustness to complex backgrounds (up to -10.3 MAE and -16.4 MSE).
Deep learning-based face recognition (FR) models have demonstrated state-of-the-art performance over the past few years, even when wearing protective medical face masks, which became commonplace during the COVID-19 pandemic. Given the outstanding performance of these models, the machine learning research community has shown increasing interest in challenging their robustness. Initially, researchers presented adversarial attacks in the digital domain and later transferred the attacks to the physical domain. However, in many cases, attacks in the physical domain are conspicuous, for example requiring a sticker to be placed on the face, and thus may raise suspicion in real-world environments (e.g., airports). In this paper, we propose Adversarial Mask, a physical adversarial attack against state-of-the-art FR models, applied to face masks in the form of a carefully crafted pattern. In our experiments, we examined the transferability of our adversarial mask across a wide range of FR model architectures and datasets. In addition, we validated our adversarial mask's effectiveness by printing the adversarial pattern on a fabric medical face mask, causing the FR system to identify only 3.34% of the participants wearing the mask (compared to a minimum of 83.34% with the other evaluated masks).
Recent studies reveal that deep neural network (DNN) based object detectors are vulnerable to adversarial attacks in the form of perturbations added to the images, leading to wrong outputs from the object detectors. Most existing works focus on generating perturbed images, also called adversarial examples, to fool object detectors. Though the generated adversarial examples themselves can retain a certain naturalness, most of them can still be easily observed by human eyes, which limits their further application in the real world. To alleviate this problem, we propose a differential evolution based dual adversarial camouflage (DE_DAC) method, composed of two stages, to fool human eyes and object detectors simultaneously. Specifically, we try to obtain a camouflage texture which can be rendered over the surface of the object. In the first stage, we optimize the global texture to minimize the discrepancy between the rendered object and the scene images, making it difficult for human eyes to distinguish. In the second stage, we design three loss functions to optimize the local texture, making object detectors ineffective. In addition, we introduce the differential evolution algorithm to search for the near-optimal areas of the object to attack, improving the adversarial performance under certain attack-area limitations. Besides, we also study the performance of adaptive DE_DAC, which can be adapted to the environment. Experiments show that our proposed method obtains a good trade-off between fooling human eyes and object detectors under multiple specific scenes and objects.
Deep face recognition (FR) has achieved significantly high accuracy on several challenging datasets and fostered successful real-world applications, even showing high robustness to illumination variation, which is usually regarded as a main threat to FR systems. However, in the real world, illumination variation caused by diverse lighting conditions cannot be fully covered by the limited face datasets. In this paper, we study the threat of lighting against FR from a new angle, i.e., adversarial attack, and identify a new task, adversarial relighting. Given a face image, adversarial relighting aims to produce a naturally relighted counterpart while fooling state-of-the-art deep FR methods. To this end, we first propose the physical model-based adversarial relighting attack (ARA), namely the albedo-quotient-based adversarial relighting attack (AQ-ARA). It generates natural adversarial light under the guidance of the physical lighting model and the FR system and synthesizes adversarially relighted face images. Moreover, we propose the auto-predictive adversarial relighting attack (AP-ARA) by training an adversarial relighting network (ARNet) to automatically predict the adversarial light in a one-step manner according to different input faces, allowing efficiency-sensitive applications. More importantly, we propose to transfer the above digital attacks to the physical ARA (Phy-ARA) through a precise relighting device, making the estimated adversarial lighting condition reproducible in the real world. We validate our methods on three state-of-the-art deep FR methods on two public datasets. Extensive and insightful results demonstrate our work can generate realistic adversarially relighted face images that easily fool FR, revealing the threat of specific light directions and strengths.
A growing body of work has shown that deep neural networks are susceptible to adversarial examples. These take the form of small perturbations applied to the model's input which lead to incorrect predictions. Unfortunately, most of the literature focuses on visually imperceptible perturbations applied to digital images, which often cannot, by design, be deployed onto physical targets. We present Adversarial Scratches: a novel L0 black-box attack, which takes the form of scratches in images and possesses much greater deployability than other state-of-the-art attacks. Adversarial Scratches leverage Bézier curves to reduce the dimension of the search space and possibly constrain the attack to a specific location. We test Adversarial Scratches in several scenarios, including a publicly available API and images of traffic signs. Results show that our attack often achieves higher fooling rates than other deployable state-of-the-art methods, while requiring significantly fewer queries and modifying very few pixels.
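The core dimensionality reduction is easy to illustrate: a quadratic Bézier scratch is fully determined by three control points and a color, so a black-box optimizer searches a handful of parameters instead of a pixel grid. The sketch below only shows this parametrization, not the paper's search procedure.

```python
import numpy as np

def bezier_scratch(shape, p0, p1, p2, color=(1.0, 0.0, 0.0), n_points=200):
    """Rasterize a quadratic Bezier curve defined by three control points onto
    a blank canvas of `shape` and return it as the scratch perturbation."""
    canvas = np.zeros(shape, dtype=float)
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    p0, p1, p2 = map(np.asarray, (p0, p1, p2))
    pts = (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2
    rows = np.clip(pts[:, 0].round().astype(int), 0, shape[0] - 1)
    cols = np.clip(pts[:, 1].round().astype(int), 0, shape[1] - 1)
    canvas[rows, cols] = color
    return canvas

# A single scratch is described by 6 coordinates plus a color, so a black-box
# optimizer only needs to search this ~9-dimensional space instead of every pixel.
scratch = bezier_scratch((224, 224, 3), (20, 30), (120, 200), (200, 60))
adversarial = np.clip(np.zeros((224, 224, 3)) + scratch, 0.0, 1.0)
```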
Recent advances have shown that deep neural networks (DNNs) are susceptible to adversarial perturbations. Therefore, it is necessary to evaluate the robustness of advanced DNNs with adversarial attacks. However, traditional physical attacks that use stickers as perturbations are more vulnerable than recent light-based physical attacks. In this work, we propose a projector-based physical attack called Adversarial Color Projection (AdvCP), which performs adversarial attacks by manipulating the physical parameters of the projected light. Experiments show the effectiveness of our method in both digital and physical environments. Experimental results show that the proposed method has excellent attack transferability, which endows AdvCP with effective black-box attack capability. We highlight the threat that AdvCP poses to future vision-based systems and applications, and propose some ideas for light-based physical attacks.
Deep neural networks are vulnerable to attacks from adversarial inputs and, more recently, Trojans that misguide or hijack the model's decision. We expose an intriguing class of bounded adversarial examples, universal naturalistic adversarial patches, which we call TnTs, by exploring the superset of the bounded adversarial example space and the natural input space within generative adversarial networks. Now, an adversary can arm themselves with a patch that is naturalistic, less malicious-looking, physically realizable, and highly effective, achieving high attack success rates and universality. A TnT is universal because any input image captured with a TnT in the scene will: i) mislead the network (untargeted attack); or ii) force the network to make a malicious decision (targeted attack). Interestingly, an adversarial patch attacker can now exert a greater level of control: the ability to choose an independent, naturalistic patch as a trigger, in contrast to being restricted to noisy perturbations, something so far only possible with Trojan attack methods that must interfere with the model-building process to embed a backdoor at the risk of discovery, while still realizing a patch deployable in the physical world. Through extensive experiments on the large-scale visual classification task ImageNet, evaluated on its entire validation set of 50,000 images, we demonstrate the realistic threat from TnTs and the robustness of the attack. We show the attack's generalization by creating patches that achieve higher attack success rates than existing state-of-the-art methods. Our results also show the generalization of the attack to different visual classification tasks (CIFAR-10, GTSRB, PubFig) and multiple state-of-the-art deep neural networks such as WideResNet50, Inception-V3, and VGG-16.
Deep neural network (DNN) based synthetic aperture radar (SAR) automatic target recognition (ATR) systems have been shown to be highly vulnerable to adversarial perturbations that are deliberately designed yet almost imperceptible, but can bias DNN inference when added to the targeted objects. This raises serious safety concerns when applying DNNs to high-stakes SAR ATR applications. Therefore, enhancing the adversarial robustness of DNNs is essential for deploying DNNs in modern real-world SAR ATR systems. Toward building more robust DNN-based SAR ATR models, this paper explores the domain knowledge of the SAR imaging process and proposes a novel Scattering Model Guided Adversarial Attack (SMGAA) algorithm that can generate adversarial perturbations in the form of electromagnetic scattering responses (called adversarial scatterers). The proposed SMGAA consists of two parts: 1) a parametric scattering model and corresponding imaging method, and 2) a customized gradient-based optimization algorithm. First, we introduce the effective Attributed Scattering Center Model (ASCM) and a general imaging method to describe the scattering behavior of typical geometric structures in the SAR imaging process. By further devising several strategies to take the domain knowledge of SAR target images into account and to relax the greedy search procedure, the proposed method does not need to be carefully fine-tuned, yet can efficiently find effective ASCM parameters to fool SAR classifiers and facilitate robust model training. Comprehensive evaluations on the MSTAR dataset show that the adversarial scatterers generated by SMGAA are more robust to perturbations and transformations in the SAR processing chain than the currently studied attacks, and are effective for constructing a defensive model against malicious scatterers.
Video classification systems are vulnerable to adversarial attacks, which can create severe security problems in video verification. Current black-box attacks need a large number of queries to succeed, resulting in high computational overhead in the process of attack. On the other hand, attacks with restricted perturbations are ineffective against defenses such as denoising or adversarial training. In this paper, we focus on unrestricted perturbations and propose StyleFool, a black-box video adversarial attack via style transfer to fool the video classification system. StyleFool first utilizes color theme proximity to select the best style image, which helps avoid unnatural details in the stylized videos. Meanwhile, the target class confidence is additionally considered in targeted attacks to influence the output distribution of the classifier by moving the stylized video closer to or even across the decision boundary. A gradient-free method is then employed to further optimize the adversarial perturbations. We carry out extensive experiments to evaluate StyleFool on two standard datasets, UCF-101 and HMDB-51. The experimental results demonstrate that StyleFool outperforms the state-of-the-art adversarial attacks in terms of both the number of queries and the robustness against existing defenses. Moreover, 50% of the stylized videos in untargeted attacks do not need any query since they can already fool the video classification model. Furthermore, we evaluate the indistinguishability through a user study to show that the adversarial samples of StyleFool look imperceptible to human eyes, despite unrestricted perturbations.
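The color-theme proximity step can be illustrated with a simple histogram distance between a content frame and candidate style images. This is a plain stand-in for whatever descriptor StyleFool actually uses, which the abstract does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_theme(image, bins=8):
    """A crude color-theme descriptor: a joint RGB histogram, normalized."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=bins, range=[(0, 1)] * 3)
    return hist.ravel() / hist.sum()

def closest_style(content, style_candidates):
    """Pick the candidate style image whose color theme is nearest the content frame."""
    target = color_theme(content)
    dists = [np.linalg.norm(color_theme(s) - target) for s in style_candidates]
    return int(np.argmin(dists))

# Example with random stand-ins for a video frame and a pool of style images.
frame = rng.random((112, 112, 3))
styles = [rng.random((112, 112, 3)) for _ in range(5)]
best = closest_style(frame, styles)
```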
Fooling deep neural networks (DNNs) with black-box optimization has become a popular adversarial attack fashion, since the structural prior knowledge of DNNs is usually unknown. Nevertheless, recent black-box adversarial attacks may struggle to balance attack ability and visual quality of the generated adversarial examples (AEs) when tackling high-resolution images. In this paper, we propose an attention-guided black-box adversarial attack based on large-scale multiobjective evolutionary optimization, termed LMOA. By considering the spatial semantic information of images, we first leverage an attention map to determine the perturbed pixels. Instead of attacking the entire image, reducing the perturbed pixels with the attention mechanism helps to avoid the notorious curse of dimensionality and thereby improves the attack performance. Second, a large-scale multiobjective evolutionary algorithm is employed to traverse the reduced pixels in the salient region. Benefiting from its characteristics, the generated AEs have the potential to fool the target DNNs while being imperceptible to human vision. Extensive experimental results have verified the effectiveness of the proposed LMOA on the ImageNet dataset. More importantly, it is more competitive in generating high-resolution AEs with better visual quality compared with the existing black-box adversarial attacks.
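The attention-guided dimensionality reduction can be sketched as picking the top-k most salient pixel coordinates and treating only those as decision variables. The multiobjective evolutionary search itself is not reproduced here; the attention map below is a random stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_salient_pixels(attention_map, k):
    """Pick the k most salient pixel coordinates from an attention map."""
    flat = np.argsort(attention_map.ravel())[::-1][:k]
    return np.stack(np.unravel_index(flat, attention_map.shape), axis=1)

# Only the selected coordinates become decision variables for the evolutionary
# search, shrinking a 224*224*3-dimensional problem to k*3 dimensions.
attention = rng.random((224, 224))          # stand-in for a saliency / attention map
coords = topk_salient_pixels(attention, k=500)

image = np.full((224, 224, 3), 0.5)
perturbation = rng.uniform(-8 / 255, 8 / 255, size=(len(coords), 3))
adv = image.copy()
adv[coords[:, 0], coords[:, 1]] += perturbation
adv = np.clip(adv, 0.0, 1.0)
```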
The authors thank Nicholas Carlini (UC Berkeley) and Dimitris Tsipras (MIT) for feedback to improve the survey quality. We also acknowledge X. Huang (Uni. Liverpool), K. R. Reddy (IISC), E. Valle (UNICAMP), Y. Yoo (CLAIR) and others for providing pointers to make the survey more comprehensive.
The recent successful adversarial attacks on face recognition show that, despite the remarkable progress of face recognition models, they still fall far behind human perception and recognition. This reveals the vulnerability of deep convolutional neural networks (CNNs), the state-of-the-art building blocks of face recognition models, which could have severe consequences for security systems. Gradient-based adversarial attacks have been widely studied and proved successful against face recognition models. However, finding the optimized perturbation for each face requires submitting a significant number of queries to the target model. In this paper, we propose a recursive adversarial attack on face recognition using automatic face warping, which needs an extremely limited number of queries to fool the target model. Instead of a random face warping procedure, the warping functions are applied to specific detected regions of the face, such as the eyebrows, nose, lips, etc. We evaluate the robustness of the proposed method in the decision-based black-box attack setting, where the attacker has no access to the model parameters and gradients, but hard-label predictions and confidence scores are provided by the target model.