Malicious applications of deepfakes (i.e., technologies that generate target facial attributes or entire faces from facial images) pose a significant threat to individuals' reputation and security. To mitigate these threats, recent studies have proposed adversarial watermarks that attack DeepFake models and cause them to produce distorted outputs. Despite impressive results, these adversarial watermarks have low image-level and model-level transferability, meaning that each one can protect only a single facial image against a single specific DeepFake model. To address these issues, we propose a new solution that generates a Cross-Model Universal Adversarial Watermark (CMUA-Watermark) capable of protecting a large number of facial images against multiple DeepFake models. Specifically, we first propose a cross-model universal attack pipeline that attacks multiple DeepFake models iteratively. Then we design a two-level perturbation fusion strategy to alleviate the conflicts among the adversarial watermarks generated for different facial images and models. Moreover, we address a key problem in cross-model optimization with a heuristic approach that automatically finds suitable attack step sizes for different models, further weakening model-level conflicts. Finally, we introduce a more reasonable and comprehensive evaluation method to fully test the proposed method and compare it with existing ones. Extensive experimental results demonstrate that the proposed CMUA-Watermark can effectively distort the fake facial images generated by multiple DeepFake models while achieving better performance than existing methods.
translated by 谷歌翻译
In recent years, face recognition has made great progress thanks to the development of deep neural networks, but deep neural networks have recently been found to be vulnerable to adversarial examples. This implies that face recognition models or systems based on deep neural networks are also susceptible to adversarial examples. However, existing methods that attack face recognition models or systems with adversarial examples can effectively perform white-box attacks, but not black-box impersonation attacks, physical attacks, or convenient attacks, particularly against commercial face recognition systems. In this paper, we propose RSTAM, a new method for attacking face recognition models or systems that enables effective black-box impersonation attacks using an adversarial mask printed by a mobile, compact printer. First, RSTAM enhances the transferability of the adversarial mask through our proposed random similarity transformation strategy. Furthermore, we propose a random meta-optimization strategy that ensembles several pre-trained face models to generate more general adversarial masks. Finally, we conduct experiments on the CelebA-HQ, LFW, Makeup Transfer (MT), and CASIA-FaceV5 datasets. The performance of the attacks is also evaluated against state-of-the-art commercial face recognition systems: Face++, Baidu, Aliyun, Tencent, and Microsoft. Extensive experiments show that RSTAM can effectively perform black-box impersonation attacks on face recognition models or systems.
As a common security tool, visible watermarking has been widely applied to protect the copyright of digital images. However, recent works have shown that visible watermarks can be removed by DNNs without damaging their host images. Such watermark-removal techniques pose a great threat to image ownership. Inspired by the vulnerability of DNNs to adversarial perturbations, we propose a novel defense mechanism based on adversarial machine learning that protects images permanently. From the adversary's perspective, a blind watermark-removal network can be posed as our target model. We then optimize an imperceptible adversarial perturbation on the host image to proactively attack the watermark-removal network, which we call a Watermark Vaccine. Specifically, two types of vaccines are proposed: the Disrupting Watermark Vaccine (DWV) induces the watermark-removal network to ruin the host image along with the watermark, whereas the Inerasable Watermark Vaccine (IWV) works the other way, keeping the watermark unremoved and still noticeable. Extensive experiments demonstrate the effectiveness of our DWV/IWV in preventing watermark removal, especially across various watermark-removal networks.
To assess the vulnerability of deep learning in the physical world, recent works introduce adversarial patches and apply them on different tasks. In this paper, we propose another kind of adversarial patch: the Meaningful Adversarial Sticker, a physically feasible and stealthy attack method by using real stickers existing in our life. Unlike the previous adversarial patches by designing perturbations, our method manipulates the sticker's pasting position and rotation angle on the objects to perform physical attacks. Because the position and rotation angle are less affected by the printing loss and color distortion, adversarial stickers can keep good attacking performance in the physical world. Besides, to make adversarial stickers more practical in real scenes, we conduct attacks in the black-box setting with the limited information rather than the white-box setting with all the details of threat models. To effectively solve for the sticker's parameters, we design the Region based Heuristic Differential Evolution Algorithm, which utilizes the new-found regional aggregation of effective solutions and the adaptive adjustment strategy of the evaluation criteria. Our method is comprehensively verified in the face recognition and then extended to the image retrieval and traffic sign recognition. Extensive experiments show the proposed method is effective and efficient in complex physical conditions and has a good generalization for different tasks.
Black-box adversarial attacks have attracted impressive attention for their practical relevance in the field of deep learning security, yet they are very challenging, since the network architecture and internal weights of the target model are inaccessible. Based on the hypothesis that if an example remains adversarial for multiple models, it is more likely to transfer its attack capability to other models, ensemble-based adversarial attack methods are efficient for black-box attacks. However, the way of ensembling remains rather under-investigated, and existing ensemble attacks simply fuse the outputs of all models uniformly. In this work, we treat the iterative ensemble attack as a stochastic gradient descent optimization process, in which the variance of the gradients on different models may lead to poor local optima. To this end, we propose a novel attack method called the Stochastic Variance-Reduced Ensemble (SVRE) attack, which reduces the gradient variance of the ensemble models and takes full advantage of the ensemble attack. Empirical results on the standard ImageNet dataset demonstrate that the proposed method can boost adversarial transferability and significantly outperform existing ensemble attacks.
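The variance-reduced ensemble update described above can be sketched on toy surrogate "models". This is only an illustrative SVRG-style control-variate update (the quadratic losses, `svre_step`, and all other names are stand-ins, not SVRE's actual implementation; a real attack would ascend a classification loss rather than descend a toy objective):

```python
import random

# Toy surrogate "models": model k has loss_k(x) = (x - t_k)^2,
# so its gradient w.r.t. x is 2 * (x - t_k).  Stand-ins for real DNNs.
targets = [0.5, 1.5, 3.0]

def grad(k, x):
    return 2.0 * (x - targets[k])

def ensemble_grad(x):
    # Uniformly fused gradient over all surrogate models.
    return sum(grad(k, x) for k in range(len(targets))) / len(targets)

def svre_step(x, alpha=0.1, inner_steps=8, seed=0):
    """One outer iteration of a variance-reduced ensemble update."""
    rng = random.Random(seed)
    g_full = ensemble_grad(x)          # full ensemble gradient at the anchor point
    x_inner, g_acc = x, 0.0
    for _ in range(inner_steps):
        k = rng.randrange(len(targets))
        # Stochastic gradient corrected by the control variate grad(k, x) - g_full;
        # this cancels per-model variance around the anchor point.
        g_vr = grad(k, x_inner) - grad(k, x) + g_full
        x_inner -= alpha * g_vr
        g_acc += g_vr
    # Outer update along the averaged variance-reduced direction.
    return x - alpha * g_acc / inner_steps
```

Iterating `svre_step` drives `x` toward the optimum shared by the ensemble; in the attack setting, the same update shape is applied to image pixels with the sign of the loss gradient.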
Adversarial patch is an important form of real-world adversarial attack that brings serious risks to the robustness of deep neural networks. Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the pasting position or manipulating the position while fixing the patch's content. This reveals that the positions and perturbations are both important to the adversarial attack. For that, in this paper, we propose a novel method to simultaneously optimize the position and perturbation for an adversarial patch, and thus obtain a high attack success rate in the black-box setting. Technically, we regard the patch's position and the pre-designed hyper-parameters that determine the patch's perturbations as the variables, and utilize the reinforcement learning framework to simultaneously solve for the optimal solution based on the rewards obtained from the target model with a small number of queries. Extensive experiments are conducted on the Face Recognition (FR) task, and results on four representative FR models show that our method can significantly improve the attack success rate and query efficiency. Besides, experiments on the commercial FR service and physical environments confirm its practical application value. We also extend our method to the traffic sign recognition task to verify its generalization ability.
Face image manipulation methods, despite having many beneficial applications in computer graphics, can also raise concerns by affecting an individual's privacy or spreading disinformation. In this work, we propose a proactive defense to prevent face manipulation from happening in the first place. To this end, we introduce a novel data-driven approach that generates image-specific perturbations which are embedded in the original images. The key idea is that these protected images prevent face manipulation by causing the manipulation model to produce a predefined manipulation target (a uniformly colored output image in our case) instead of the actual manipulation. Compared to traditional adversarial attacks, which optimize a noise pattern for each image individually, our generalized model only needs a single forward pass, thus running orders of magnitude faster and allowing for easy integration into image processing stacks, even on resource-constrained devices such as smartphones. Furthermore, we propose to leverage a differentiable compression approximation, thus making the generated perturbations robust to common image compression. We further show that a generated perturbation can simultaneously prevent against multiple manipulation methods.
The use of social media websites and apps has become extremely popular, and people share their photos on these networks. Automatic recognition and tagging of people's photos on these networks has raised privacy-preservation concerns, and users seek ways to hide from these algorithms. Generative adversarial networks (GANs) have proven to be very powerful at generating face images of high diversity as well as at editing face images. In this paper, we propose a GAN-based Generative Mask-guided Face Image Manipulation (GMFIM) model that applies imperceptible edits to an input face image to preserve the privacy of the person in the image. Our model consists of three main components: a) a face-mask module that cuts the face area out of the input image and omits the background, b) a GAN-based optimization module that manipulates the face image and hides the identity, and c) a merge module that combines the background of the input image with the manipulated, de-identified face image. Different criteria are considered in the loss function of the optimization step to produce high-quality images that are as similar as possible to the input image while remaining unrecognizable by AFR systems. Experimental results on different datasets show that our model can achieve better performance against automatic face recognition systems than state-of-the-art methods, attaining a higher attack success rate in most experiments. Moreover, the images generated by our proposed model have the highest quality and are more pleasing to the eye.
The security of artificial intelligence (AI) is an important research area towards safe, reliable, and trustworthy AI systems. To accelerate the research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, China Industrial Control Systems Cyber Emergency Response Team, Institute for Artificial Intelligence, Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks, including Deepfake Security Competition, Autonomous Driving Security Competition, and Face Recognition Security Competition. This report will introduce the competition rules of these three tracks and the solutions of top-ranking teams in each track.
Although exploiting the transferability of adversarial examples can attain high attack success rates for non-targeted attacks, it does not work well in targeted attacks, since the gradient directions from a source image to a target class usually differ across DNNs. To increase the transferability of targeted attacks, recent studies make the features of the generated adversarial examples align with the feature distribution of the target class learned from an auxiliary network or a generative adversarial network. However, these works assume that the training dataset is available and require a large amount of time to train the network, making them hard to apply in the real world. In this paper, we revisit adversarial examples with targeted transferability from the perspective of universality, and find that highly universal adversarial perturbations tend to be more transferable. Based on this observation, we propose the Locality of Images (LI) attack to improve targeted transferability. Specifically, instead of using only the classification loss, LI introduces a feature similarity loss between the adversarially perturbed original image and a randomly cropped image, which makes the features of the adversarial perturbation more dominant than those of the benign image, thus improving targeted transferability. By incorporating the locality of images into optimizing perturbations, the LI attack emphasizes that targeted perturbations should be relevant to diverse input patterns, even local image patches. Extensive experiments show that LI can achieve high success rates for transfer-based targeted attacks. On attacking the ImageNet-compatible dataset, LI yields an improvement of 12% over existing state-of-the-art methods.
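One plausible reading of the combined objective above is a classification loss plus a cosine-similarity reward between the features of the full adversarial image and of its random crop. The helper names and the exact weighting below are illustrative assumptions, not the paper's implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors (0 if either is zero)."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv or 1.0)

def li_loss(cls_loss, feat_adv, feat_crop, lam=1.0):
    """Targeted objective sketch: minimize the classification loss toward the
    target class while rewarding similar features for the adversarial image
    and its random crop (so the perturbation's features dominate)."""
    return cls_loss - lam * cosine(feat_adv, feat_crop)
```

In a real attack, `feat_adv` and `feat_crop` would come from an intermediate layer of the surrogate network, and `li_loss` would be minimized over the perturbation.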
The authors thank Nicholas Carlini (UC Berkeley) and Dimitris Tsipras (MIT) for feedback to improve the survey quality. We also acknowledge X. Huang (Uni. Liverpool), K. R. Reddy (IISC), E. Valle (UNICAMP), Y. Yoo (CLAIR) and others for providing pointers to make the survey more comprehensive.
Video classification systems are vulnerable to adversarial attacks, which can create severe security problems in video verification. Current black-box attacks need a large number of queries to succeed, resulting in high computational overhead in the process of attack. On the other hand, attacks with restricted perturbations are ineffective against defenses such as denoising or adversarial training. In this paper, we focus on unrestricted perturbations and propose StyleFool, a black-box video adversarial attack via style transfer to fool the video classification system. StyleFool first utilizes color theme proximity to select the best style image, which helps avoid unnatural details in the stylized videos. Meanwhile, the target class confidence is additionally considered in targeted attacks to influence the output distribution of the classifier by moving the stylized video closer to or even across the decision boundary. A gradient-free method is then employed to further optimize the adversarial perturbations. We carry out extensive experiments to evaluate StyleFool on two standard datasets, UCF-101 and HMDB-51. The experimental results demonstrate that StyleFool outperforms the state-of-the-art adversarial attacks in terms of both the number of queries and the robustness against existing defenses. Moreover, 50% of the stylized videos in untargeted attacks do not need any query since they can already fool the video classification model. Furthermore, we evaluate the indistinguishability through a user study to show that the adversarial samples of StyleFool look imperceptible to human eyes, despite unrestricted perturbations.
Though CNNs have achieved the state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples, crafted by adding human-imperceptible perturbations to clean images. However, most of the existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. By evaluating our method against top defense solutions and official baselines from the NIPS 2017 adversarial competition, the enhanced attack reaches an average success rate of 73.0%, which outperforms the top-1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. Code is available at https://github.com/cihangxie/DI-2-FGSM.
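The per-iteration input diversification can be sketched in a few lines. The code below uses a 1-D "image" and a toy linear loss so that gradients can be routed back through the nearest-neighbour resize by hand; the function names, the transform probability, and the step sizes are illustrative assumptions, not the released DI-2-FGSM code:

```python
import random

def diverse_input(x, rng, prob=0.7, min_scale=0.8):
    """With probability prob, randomly shrink x (nearest neighbour) and zero-pad
    back to len(x).  Also returns, per output slot, the index of the input
    element it came from (-1 for padding) so a gradient can be routed back."""
    n = len(x)
    if rng.random() > prob:
        return list(x), list(range(n))
    m = rng.randint(int(min_scale * n), n)        # random smaller size
    src = [min(n - 1, j * n // m) for j in range(m)]
    off = rng.randint(0, n - m)                   # random pad position
    out, idx = [0.0] * n, [-1] * n
    for j in range(m):
        out[off + j] = x[src[j]]
        idx[off + j] = src[j]
    return out, idx

def di_fgsm(x, w, eps=0.05, alpha=0.01, steps=10, seed=0):
    """Iterative FGSM on a toy linear loss L(x_t) = sum(w_j * x_t[j]), applying
    a diverse input transform before each gradient step."""
    rng = random.Random(seed)
    adv = list(x)
    for _ in range(steps):
        _, idx = diverse_input(adv, rng)
        g = [0.0] * len(x)
        for j, i in enumerate(idx):               # dL/dx via the transform map
            if i >= 0:
                g[i] += w[j]
        # Sign step, then clip to the L-infinity ball of radius eps around x.
        adv = [min(xi + eps, max(xi - eps,
                   ai + alpha * ((gi > 0) - (gi < 0))))
               for xi, ai, gi in zip(x, adv, g)]
    return adv
```

A real implementation would instead backpropagate through a differentiable resize in the deep-learning framework; the routing table here plays that role for the toy model.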
Adversarial attacks provide a good way to study the robustness of deep learning models. One category of transfer-based black-box attacks utilizes several image transformation operations to improve the transferability of adversarial examples, which is effective but fails to take the specific characteristics of the input image into consideration. In this work, we propose a novel architecture, called the Adaptive Image Transformation Learner (AITL), which incorporates different image transformation operations into a unified framework to further improve the transferability of adversarial examples. Unlike the fixed combinational transformations used in existing works, our elaborately designed transformation learner adaptively selects the combination of image transformations most effective for the specific input image. Extensive experiments on ImageNet demonstrate that our method significantly improves the attack success rates on both normally trained models and defense models under various settings.
Deep neural network (DNN) based synthetic aperture radar (SAR) automatic target recognition (ATR) systems have shown to be highly vulnerable to adversarial perturbations that are deliberately designed yet almost imperceptible but can bias DNN inference when added to targeted objects. This leads to serious safety concerns when applying DNNs to high-stakes SAR ATR applications. Therefore, enhancing the adversarial robustness of DNNs is essential for implementing DNNs in modern real-world SAR ATR systems. Toward building more robust DNN-based SAR ATR models, this article explores the domain knowledge of the SAR imaging process and proposes a novel Scattering Model Guided Adversarial Attack (SMGAA) algorithm that can generate adversarial perturbations in the form of electromagnetic scattering responses (called adversarial scatterers). The proposed SMGAA consists of two parts: 1) a parametric scattering model with the corresponding imaging method and 2) a customized gradient-based optimization algorithm. First, we introduce the effective Attributed Scattering Center Model (ASCM) and a general imaging method to describe the scattering behavior of typical geometric structures in the SAR imaging process. By further devising several strategies to take the domain knowledge of SAR target images into account and relaxing the greedy search procedure, the proposed method does not need sophisticated finetuning, yet can efficiently find effective ASCM parameters to fool SAR classifiers and facilitate robust model training. Comprehensive evaluations on the MSTAR dataset show that the adversarial scatterers generated by SMGAA are more robust to perturbations and transformations in the SAR processing chain than the currently studied attacks, and are effective for constructing defensive models against malicious scatterers.
In recent years, with the rapid development of neural networks, the security of deep learning models has become increasingly prominent, as they are vulnerable to adversarial examples. Almost all existing gradient-based attack methods use the sign function in the generation to meet the perturbation budget requirement of the $L_\infty$ norm. However, we find that the sign function may be improper for generating adversarial examples, since it modifies the exact gradient direction. Instead, we propose to remove the sign function and directly utilize the exact gradient direction, with a scaling factor, to generate adversarial perturbations, which improves the attack success rate of adversarial examples even with fewer perturbations. Moreover, considering that the best scaling factor varies across different images, we propose an adaptive scaling factor generator to seek an appropriate scaling factor for each image, which avoids the computational cost of manually searching for the scaling factor. Our method can be integrated with almost all existing gradient-based attack methods to further improve their attack success rates. Extensive experiments on the CIFAR10 and ImageNet datasets show that our method exhibits higher transferability and outperforms state-of-the-art methods.
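The contrast between the two update rules can be shown in a minimal sketch (function names are illustrative; the adaptive scaling-factor generator is omitted here and replaced by a fixed $L_\infty$-normalizing scale):

```python
def sign_step(x, g, alpha):
    """Classic FGSM-style update: keep only the sign of each gradient component."""
    return [xi + alpha * ((gi > 0) - (gi < 0)) for xi, gi in zip(x, g)]

def scaled_grad_step(x, g, alpha):
    """Update along the exact gradient direction, rescaled so that the largest
    component moves by alpha.  Same L-infinity budget as sign_step, but the
    relative magnitudes between components are preserved and the total
    perturbation (e.g. its L2 norm) is smaller."""
    m = max(abs(gi) for gi in g) or 1.0
    return [xi + alpha * gi / m for xi, gi in zip(x, g)]
```

Both updates stay within the same per-step $L_\infty$ bound `alpha`; only `scaled_grad_step` keeps the true gradient direction.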
Recent studies reveal that deep neural network (DNN) based object detectors are vulnerable to adversarial attacks in the form of adding the perturbation to the images, leading to the wrong output of object detectors. Most existing works focus on generating perturbed images, also called adversarial examples, to fool object detectors. Though the generated adversarial examples themselves can remain a certain naturalness, most of them can still be easily observed by human eyes, which limits their further application in the real world. To alleviate this problem, we propose a differential evolution based dual adversarial camouflage (DE_DAC) method, composed of two stages to fool human eyes and object detectors simultaneously. Specifically, we try to obtain the camouflage texture, which can be rendered over the surface of the object. In the first stage, we optimize the global texture to minimize the discrepancy between the rendered object and the scene images, making human eyes difficult to distinguish. In the second stage, we design three loss functions to optimize the local texture, making object detectors ineffective. In addition, we introduce the differential evolution algorithm to search for the near-optimal areas of the object to attack, improving the adversarial performance under certain attack area limitations. Besides, we also study the performance of adaptive DE_DAC, which can be adapted to the environment. Experiments show that our proposed method could obtain a good trade-off between fooling human eyes and fooling object detectors under multiple specific scenes and objects.
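The attack-area search above relies on differential evolution, which can be written generically. The following is a minimal DE/rand/1/bin optimizer, a stand-in for the area search (the fitness function and all parameter values here are illustrative, not the paper's configuration):

```python
import random

def differential_evolution(fitness, bounds, pop=20, gens=40, F=0.5, CR=0.9, seed=0):
    """Minimize fitness over box-constrained variables with DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    scores = [fitness(ind) for ind in P]
    for _ in range(gens):
        for i in range(pop):
            # Mutation base: three distinct individuals other than i.
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            jr = rng.randrange(dim)            # guaranteed crossover index
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < CR or j == jr:
                    v = P[a][j] + F * (P[b][j] - P[c][j])
                    trial.append(max(lo, min(hi, v)))   # clamp to bounds
                else:
                    trial.append(P[i][j])
            s = fitness(trial)
            if s < scores[i]:                  # greedy selection
                P[i], scores[i] = trial, s
    best = min(range(pop), key=lambda i: scores[i])
    return P[best], scores[best]
```

In the camouflage setting, the variables would encode the candidate attack region (e.g. its position and extent) and the fitness would query the detector.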
The transfer-based adversarial attack is a nontrivial black-box adversarial attack that aims to craft adversarial perturbations on a surrogate model and then apply such perturbations to the victim model. However, the transferability of the perturbations from existing methods is still limited, since the adversarial perturbations easily overfit to a single surrogate model and a specific data pattern. In this paper, we propose the Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation. For data augmentation, we adopt simple random resizing and padding. For model augmentation, we randomly alter the back propagation instead of the forward propagation to eliminate the effect on the model prediction. By treating the attack of a specific piece of data and a modified model as a task, we expect the adversarial perturbations to generalize across sufficiently many tasks. To this end, a meta-learning algorithm is further introduced during the iterations of perturbation generation. Empirical results on the widely used dataset demonstrate the effectiveness of our attack method, with a 12.85% higher success rate of transfer attacks compared with state-of-the-art methods. We also evaluate our method on a real-world online system, i.e., the Google Cloud Vision API, to further demonstrate the practical potential of our method.
Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomization at inference time to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method provides the following advantages: 1) no additional training or fine-tuning, 2) very few additional computations, 3) compatible with other adversarial defense methods. By combining the proposed randomization method with an adversarially trained model, it achieves a normalized score of 0.924 (ranked No.2 among 107 defense teams) in the NIPS 2017 adversarial examples defense challenge, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No.56). The code is publicly available at https://github.com/cihangxie/NIPS2017_adv_challenge_defense.
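The two randomization operations can be sketched on a 2-D array stand-in for an image (the function name, the nearest-neighbour interpolation, and the size range are illustrative assumptions; the released code operates on image tensors inside the network's input pipeline):

```python
import random

def random_resize_pad(img, out_size, rng):
    """Inference-time input randomization: nearest-neighbour resize to a
    random size, then zero-pad to a fixed out_size at a random offset."""
    h, w = len(img), len(img[0])
    nh = rng.randint(h, out_size)      # random target height
    nw = rng.randint(w, out_size)      # random target width
    resized = [[img[i * h // nh][j * w // nw] for j in range(nw)]
               for i in range(nh)]
    top = rng.randint(0, out_size - nh)    # random pad placement
    left = rng.randint(0, out_size - nw)
    out = [[0.0] * out_size for _ in range(out_size)]
    for i in range(nh):
        for j in range(nw):
            out[top + i][left + j] = resized[i][j]
    return out
```

At inference time the classifier sees `random_resize_pad(x, S, rng)` instead of `x`, so a fixed adversarial perturbation is misaligned on every forward pass.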
Benefiting from the development of generative adversarial networks (GANs), facial manipulation has achieved significant progress in both academia and industry recently. It inspires an increasing number of entertainment applications but also incurs severe threats to individual privacy and even political security. To mitigate such risks, many countermeasures have been proposed. However, most methods are designed in a passive manner, that is, to detect whether the facial images or videos are tampered with after they have been widely spread. These detection-based methods have a fatal limitation: they are only applicable to ex-post forensics but cannot prevent malicious behavior from taking effect. To address this limitation, in this paper we propose a novel initiative defense framework to degrade the performance of facial manipulation models controlled by malicious users. The basic idea is to inject imperceptible venom into the target facial data before manipulation. To this end, we first imitate the target manipulation model with a surrogate model, and then devise a poison perturbation generator to obtain the desired venom. An alternating training strategy is further leveraged to train both the surrogate model and the perturbation generator. Two typical facial manipulation tasks, face attribute editing and face reenactment, are considered in our initiative defense framework. Extensive experiments demonstrate the effectiveness and robustness of our framework in different settings. Finally, we hope this work can shed some light on initiative countermeasures against more adversarial scenarios.