Deep neural network based face recognition models have been shown to be vulnerable to adversarial examples. However, many past attacks require the adversary to solve an input-dependent optimization problem using gradient descent, which makes the attack impractical in real time. These adversarial examples are also tightly coupled to the attacked model and do not transfer well to different models. In this work, we propose ReFace, a real-time, highly transferable attack on face recognition models based on Adversarial Transformation Networks (ATNs). ATNs model adversarial example generation as a feed-forward neural network. We find that the white-box attack success rate of a pure U-Net ATN falls substantially short of gradient-based attacks such as PGD on large face recognition datasets. We therefore propose a new architecture for ATNs that closes this gap while maintaining a 10,000x speedup over PGD. Furthermore, we find that at a given perturbation magnitude, our ATN adversarial perturbations transfer to new face recognition models more effectively than PGD. In the transfer attack setting, ReFace attacks can successfully deceive commercial face recognition services, reducing face identification accuracy from 82% to 16.4% for the AWS SearchFaces API and face verification accuracy from 91% to 50.1% for the Azure Face verification API.
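To make the ATN idea above concrete, the sketch below shows a feed-forward perturbation generator that turns adversarial example generation into a single forward pass; the layer sizes, the L_inf budget of 8/255, and the clamping scheme are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Feed-forward generator that maps an input face image to a bounded
    additive perturbation in a single forward pass (the core ATN idea)."""
    def __init__(self, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):
        delta = self.eps * self.net(x)              # scale to the L_inf budget
        return (x + delta).clamp(0.0, 1.0)          # stay in the valid image range

# Inference is one forward pass -- no per-image optimization loop.
gen = PerturbationGenerator().eval()
faces = torch.rand(4, 3, 112, 112)                  # batch of face crops in [0, 1]
with torch.no_grad():
    adv_faces = gen(faces)
print(adv_faces.shape, float((adv_faces - faces).abs().max()))
```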
In recent years, face recognition has made great progress thanks to the development of deep neural networks, but deep neural networks have recently been found to be vulnerable to adversarial examples. This means that face recognition models or systems based on deep neural networks are also vulnerable to adversarial examples. However, existing attacks on face recognition models or systems with adversarial examples can only perform white-box attacks effectively, not black-box impersonation attacks, physical attacks, or convenient attacks, particularly against commercial face recognition systems. In this paper, we propose a new method for attacking face recognition models or systems, called RSTAM, which enables an effective black-box impersonation attack using an adversarial mask printed by a mobile and compact printer. First, RSTAM enhances the transferability of the adversarial mask through our proposed random similarity transformation strategy. Furthermore, we propose a random meta-optimization strategy that ensembles several pre-trained face models to generate a more general adversarial mask. Finally, we conduct experiments on the CelebA-HQ, LFW, Makeup Transfer (MT), and CASIA-FaceV5 datasets. The performance of the attack is also evaluated against state-of-the-art commercial face recognition systems: Face++, Baidu, Aliyun, Tencent, and Microsoft. Extensive experiments show that RSTAM can effectively perform black-box impersonation attacks on face recognition models or systems.
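As a rough illustration of the random similarity transformation strategy, the sketch below re-samples a random rotation, isotropic scale, and translation of an adversarial mask at each call; the parameter ranges and the use of torch's affine-grid sampling are assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn.functional as F

def random_similarity_transform(mask, max_angle=10.0, scale_range=(0.9, 1.1), max_shift=0.05):
    """Apply a random similarity transform (rotation + isotropic scale + translation)
    to a batch of mask images of shape (N, C, H, W)."""
    n = mask.size(0)
    angle = (torch.rand(n) * 2 - 1) * math.radians(max_angle)
    scale = torch.empty(n).uniform_(*scale_range)
    tx = (torch.rand(n) * 2 - 1) * max_shift
    ty = (torch.rand(n) * 2 - 1) * max_shift

    cos, sin = torch.cos(angle) * scale, torch.sin(angle) * scale
    theta = torch.zeros(n, 2, 3)
    theta[:, 0, 0], theta[:, 0, 1], theta[:, 0, 2] = cos, -sin, tx
    theta[:, 1, 0], theta[:, 1, 1], theta[:, 1, 2] = sin, cos, ty

    grid = F.affine_grid(theta, mask.shape, align_corners=False)
    return F.grid_sample(mask, grid, align_corners=False)

# Each optimization step would see a freshly transformed copy of the mask,
# which encourages transferability and robustness to printing/placement errors.
mask = torch.rand(2, 3, 128, 128, requires_grad=True)
transformed = random_similarity_transform(mask)
print(transformed.shape)
```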
Transfer-based adversarial attacks can evaluate model robustness in the black-box setting. Several methods have demonstrated impressive untargeted transferability; however, efficiently producing targeted transferability remains challenging. To this end, we develop a simple yet effective framework that crafts targeted transfer-based adversarial examples by applying a hierarchical generative network. In particular, we contribute an amortized design that adapts to multi-class targeted attacks. Extensive experiments on ImageNet show that our method improves the success rate of targeted black-box attacks by a large margin over existing methods: it achieves an average success rate of 29.1% against six diverse models based on only one substitute white-box model, significantly outperforming state-of-the-art gradient-based attack methods. Moreover, the proposed method is also orders of magnitude more efficient than gradient-based methods.
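A minimal way to picture the amortized multi-class design is a single conditional generator that takes an image together with a target label and emits a bounded perturbation, so one network serves many target classes. In the sketch below, the embedding-based conditioning, layer sizes, and perturbation budget are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionalPerturbationGenerator(nn.Module):
    """One generator amortized over many target classes: the target label is
    embedded and broadcast as extra input channels."""
    def __init__(self, num_classes=1000, emb_dim=8, eps=16 / 255):
        super().__init__()
        self.eps = eps
        self.embed = nn.Embedding(num_classes, emb_dim)
        self.net = nn.Sequential(
            nn.Conv2d(3 + emb_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, target):
        n, _, h, w = x.shape
        cond = self.embed(target).view(n, -1, 1, 1).expand(n, -1, h, w)
        delta = self.eps * self.net(torch.cat([x, cond], dim=1))
        return (x + delta).clamp(0.0, 1.0)

gen = ConditionalPerturbationGenerator()
images = torch.rand(2, 3, 224, 224)
targets = torch.tensor([207, 853])          # arbitrary ImageNet target labels
adv = gen(images, targets)
print(adv.shape)
```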
Although a considerable amount of research has been devoted to morph detection, most studies fail to generalize to morphed faces outside their training paradigm. Moreover, recent morph detection methods are highly vulnerable to adversarial attacks. In this paper, we aim to learn a morph detection model with high generalization to a variety of morphing attacks and strong robustness against different adversarial attacks. To this end, we develop an ensemble of convolutional neural networks (CNNs) and Transformer models to benefit from their capabilities simultaneously. To improve the robust accuracy of the ensemble model, we employ multi-perturbation adversarial training and generate adversarial examples with high transferability. Our exhaustive evaluations demonstrate that the proposed robust ensemble model generalizes to several morphing attacks and face datasets. In addition, we verify that our robust ensemble model achieves better robustness against several adversarial attacks while outperforming state-of-the-art studies.
Current neural network-based classifiers are susceptible to adversarial examples even in the black-box setting, where the attacker only has query access to the model. In practice, the threat model for real-world systems is often more restrictive than the typical black-box model where the adversary can observe the full output of the network on arbitrarily many chosen inputs. We define three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partial-information setting, and the label-only setting. We develop new attacks that fool classifiers under these more restrictive threat models, where previous methods would be impractical or ineffective. We demonstrate that our methods are effective against an ImageNet classifier under our proposed threat models. We also demonstrate a targeted black-box attack against a commercial classifier, overcoming the challenges of limited query access, partial information, and other practical issues to break the Google Cloud Vision API.
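In the query-limited setting described above, gradients are typically replaced by estimates computed from scores returned on randomly perturbed queries (a natural-evolution-strategies style estimator). The sketch below illustrates that idea with a toy scorer; the sample count, smoothing parameter, and step sizes are assumptions, not the paper's exact configuration.

```python
import torch

def nes_gradient(loss_fn, x, sigma=0.001, n_samples=50):
    """Estimate grad_x loss_fn(x) with antithetic Gaussian sampling.
    loss_fn only needs query access (no backprop through the model)."""
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        u = torch.randn_like(x)
        grad += u * loss_fn(x + sigma * u)   # forward query
        grad -= u * loss_fn(x - sigma * u)   # antithetic query
    return grad / (2 * n_samples * sigma)

# Toy target "model": a fixed linear scorer we can only query.
w = torch.randn(3 * 32 * 32)
def score(x):                                # quantity the attacker wants to increase
    return (x.flatten() * w).sum()

x = torch.rand(3, 32, 32)
x_orig, eps, alpha = x.clone(), 0.05, 0.01
for _ in range(10):                          # PGD-style steps on the estimated gradient
    g = nes_gradient(score, x)
    x = x + alpha * g.sign()                 # ascend the estimated objective
    x = x_orig + (x - x_orig).clamp(-eps, eps)
    x = x.clamp(0.0, 1.0)
print(float(score(x_orig)), float(score(x)))
```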
Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples: inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.
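The robust-optimization view can be written as min_θ E_{(x,y)} [ max_{||δ||_∞ ≤ ε} L(θ, x + δ, y) ], with the inner maximization approximated by projected gradient descent (PGD). A compact PGD sketch under that reading follows; the placeholder classifier, ε, step size, and iteration count are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Approximate the inner maximization of the robust-optimization objective:
    iterated signed-gradient ascent on the loss, projected back into the
    L_inf ball of radius eps around the clean input."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()          # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)              # project onto the ball
        x_adv = x_adv.clamp(0.0, 1.0)                         # valid pixel range
    return x_adv.detach()

# Toy usage with a placeholder classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
x_adv = pgd_attack(model, x, y)
print(float((x_adv - x).abs().max()))
```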
With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to well-designed input samples, called adversarial examples. Adversarial perturbations are imperceptible to human but can easily fool deep neural networks in the testing/deploying stage. The vulnerability to adversarial examples becomes one of the major risks for applying deep neural networks in safety-critical environments. Therefore, attacks and defenses on adversarial examples draw great attention. In this paper, we review recent findings on adversarial examples for deep neural networks, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.
Existing adversarial attack techniques against face recognition perform well when full knowledge of the system is available; however, such techniques are unsuccessful in the gray-box setting, where the face templates are unknown to the attacker. In this work, we propose a similarity-based gray-box adversarial attack (SGADV) technique with a newly developed objective function. SGADV utilizes the dissimilarity score to produce optimized adversarial examples, i.e., a similarity-based adversarial attack. This technique applies to both white-box and gray-box attacks against authentication systems that determine genuine or impostor users using a dissimilarity score. To validate the effectiveness of SGADV, we conduct extensive experiments on the face datasets LFW, CelebA, and CelebA-HQ against the deep face recognition models FaceNet and InsightFace in both white-box and gray-box settings. The results show that the proposed method significantly outperforms existing adversarial attack techniques in the gray-box setting. We therefore conclude that similarity-based approaches to developing adversarial examples can satisfactorily cater to the gray-box attack scenario for de-authentication.
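The similarity-based objective can be pictured as optimizing the perturbation directly against the embedding-space dissimilarity score that an authentication system thresholds on. The sketch below shows a de-authentication variant with a cosine-distance score and a placeholder embedder; these choices and the step sizes are assumptions for illustration, not the exact SGADV objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def dissimilarity(emb_a, emb_b):
    """Dissimilarity score used by the (toy) authentication system:
    1 - cosine similarity, compared against a decision threshold."""
    return 1 - F.cosine_similarity(emb_a, emb_b, dim=-1)

def sgadv_like_attack(embedder, probe, reference, eps=8 / 255, alpha=1 / 255, steps=20):
    """De-authentication variant: push the probe's embedding away from the
    enrolled reference by maximizing the dissimilarity score."""
    adv = probe.clone()
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        loss = dissimilarity(embedder(adv), embedder(reference)).sum()
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = probe + (adv - probe).clamp(-eps, eps)
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()

# Placeholder embedder standing in for a deep face model.
embedder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))
probe, reference = torch.rand(1, 3, 112, 112), torch.rand(1, 3, 112, 112)
adv = sgadv_like_attack(embedder, probe, reference)
print(float(dissimilarity(embedder(probe), embedder(reference))),
      float(dissimilarity(embedder(adv), embedder(reference))))
```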
Recent studies have shown that adversarial examples hand-crafted on one white-box model can be used to attack other black-box models. Such cross-model transferability makes it feasible to perform black-box attacks, which raises security concerns for real-world DNN applications. Nevertheless, existing works mostly focus on investigating adversarial transferability across different deep models that share the same input data modality. The cross-modal transferability of adversarial perturbations has never been explored. This paper investigates the transferability of adversarial perturbations across different modalities, i.e., leveraging adversarial perturbations generated on white-box image models to attack black-box video models. Specifically, motivated by the observation that the low-level feature spaces of images and video frames are similar, we propose a simple yet effective cross-modal attack method, named Image To Video (I2V) attack. I2V generates adversarial frames by minimizing the cosine similarity between the features that a pre-trained image model extracts from the adversarial and benign examples, and then combines the generated adversarial frames to perform black-box attacks on video recognition models. Extensive experiments demonstrate that I2V can achieve high attack success rates against different black-box video recognition models. On Kinetics-400 and UCF-101, I2V achieves average attack success rates of 77.88% and 65.68% respectively, shedding light on the feasibility of cross-modal adversarial attacks.
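The central I2V step, minimizing the cosine similarity between the image-model features of an adversarial frame and those of the benign frame, can be sketched as below; the choice of a torchvision ResNet-18 feature extractor and the hyperparameters are assumptions rather than the authors' exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Pre-trained image model used only as a white-box feature extractor
# (downloads ImageNet weights on first use).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
features = nn.Sequential(*list(backbone.children())[:-1]).eval()  # drop the classifier head
for p in features.parameters():
    p.requires_grad_(False)

def i2v_frame(frame, eps=16 / 255, alpha=2 / 255, steps=10):
    """Perturb one video frame so its image-model features point away from
    the benign features (cosine similarity is minimized)."""
    with torch.no_grad():
        clean_feat = features(frame).flatten(1)
    adv = frame.clone()
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        sim = F.cosine_similarity(features(adv).flatten(1), clean_feat, dim=1).sum()
        grad = torch.autograd.grad(sim, adv)[0]
        adv = adv.detach() - alpha * grad.sign()            # descend the similarity
        adv = frame + (adv - frame).clamp(-eps, eps)
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()

video = torch.rand(8, 3, 224, 224)                           # 8 frames in [0, 1]
adv_video = torch.stack([i2v_frame(f.unsqueeze(0)).squeeze(0) for f in video])
print(adv_video.shape)
```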
Recent work has illuminated the vulnerability of speaker recognition systems (SRSs) to adversarial attacks, raising serious security concerns about deploying SRSs. However, prior work considered only a few settings (e.g., some combinations of source and target speakers), leaving out many interesting and important settings that arise in real-world attack scenarios. In this work, we present AS2T, the first attack in this domain that covers all the settings, so that the adversary can craft adversarial voices using arbitrary source and target speakers for any of the three main recognition tasks. Since none of the existing loss functions applies to all settings, we explore many candidate loss functions for each setting, including existing and newly designed ones. We thoroughly evaluate their efficacy and find that some existing loss functions are suboptimal. Then, to improve the robustness of AS2T towards practical over-the-air attacks, we study the possible distortions occurring in over-the-air transmission, utilize different transformation functions with different parameters to model those distortions, and incorporate them into the generation of adversarial voices. Our simulated over-the-air evaluation validates the effectiveness of our solution in producing robust adversarial voices that remain effective under various hardware devices and various acoustic environments with different reverberation, ambient noise, and noise levels. Finally, we leverage AS2T to perform the largest-scale evaluation to date to understand transferability among 14 diverse SRSs. The transferability analysis provides many interesting and useful insights, which challenge several findings and conclusions drawn in previous work in the image domain. Our study also sheds light on future directions of adversarial attacks in the speaker recognition domain.
Despite the efficiency and scalability of machine learning systems, recent studies have demonstrated that many classification methods, especially deep neural networks (DNNs), are vulnerable to adversarial examples, i.e., examples that are carefully crafted to fool a well-trained classification model while being indistinguishable from natural data to humans. This makes it potentially unsafe to apply DNNs or related methods in security-critical areas. Since this issue was identified by Biggio et al. (2013) and Szegedy et al. (2014), much work has been done in this field, including the development of attack methods to generate adversarial examples and the construction of defense techniques to guard against such examples. This paper aims to introduce this topic and its latest developments to the statistical community, primarily focusing on the generation of and defense against adversarial examples. The computational code (in Python and R) used in the numerical experiments is publicly available for readers to explore the surveyed methods. It is the hope of the authors that this paper will encourage more statisticians to work on this important and exciting field of generating and defending against adversarial examples.
Adversarial examples are inputs intentionally generated to fool deep neural networks. Recent studies have proposed unrestricted adversarial attacks that are not bound by norm constraints. However, previous unrestricted attack methods still have limitations when fooling real-world applications in the black-box setting. In this paper, we propose a new method for generating unrestricted adversarial examples using GANs, where the attacker can only access the top-1 final decision of a classification model. Our method efficiently leverages the advantages of decision-based attacks in the latent space and successfully manipulates latent vectors to fool classification models. Through extensive experiments, we demonstrate that our proposed method effectively evaluates the robustness of classification models with limited queries in the black-box setting. First, we show that our targeted attack method is effective in generating unrestricted adversarial examples against a face identity recognition model containing 307 identities. Then, we demonstrate that the proposed method can also successfully attack a real-world celebrity recognition service.
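One way to picture a decision-based attack in GAN latent space is a random walk over latent codes that consults only the classifier's top-1 decision: start from a misclassified latent and pull it toward the source latent while staying on the adversarial side. The toy sketch below uses placeholder generator and classifier networks and a simplified boundary-style update, not the paper's algorithm.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder generator and classifier standing in for a pretrained GAN
# and the black-box identity model (top-1 decision only).
generator = nn.Sequential(nn.Linear(64, 3 * 32 * 32), nn.Tanh())
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

def top1(z):
    """The only feedback the attacker gets: the predicted class index."""
    img = generator(z).view(1, 3, 32, 32)
    return classifier(img).argmax(dim=1).item()

def latent_decision_attack(z_source, true_label, steps=200, step_size=0.05):
    # Start from a latent that is already misclassified, then walk it back
    # toward the source latent while staying on the adversarial side.
    z_adv = torch.randn_like(z_source)
    while top1(z_adv) == true_label:
        z_adv = torch.randn_like(z_source)
    for _ in range(steps):
        candidate = z_adv + step_size * (z_source - z_adv)   # move toward the source
        candidate = candidate + 0.01 * torch.randn_like(candidate)
        if top1(candidate) != true_label:                    # accept only if still adversarial
            z_adv = candidate
    return z_adv

z_source = torch.randn(1, 64)
z_adv = latent_decision_attack(z_source, true_label=top1(z_source))
print(top1(z_source), top1(z_adv), float((z_adv - z_source).norm()))
```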
This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. We motivate adversarial risk as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. We formalize this notion as obscurity to an adversary, and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses.
The proliferation of automated face recognition in the commercial and government sectors has caused significant privacy concerns for individuals. One approach to addressing these privacy concerns is to employ evasion attacks against the metric embedding networks that power face recognition systems: face obfuscation systems generate imperceptibly perturbed images that cause face recognition systems to misidentify the user. The perturbed faces are generated on metric embedding networks, which, in the context of face recognition, raises the issue of fairness. The question of demographic fairness naturally arises: does face obfuscation system performance differ across demographic groups? We answer this question through an analytical and empirical exploration of recent face obfuscation systems. Metric embedding networks are demographically aware: face embeddings are clustered by demographic group. We show how this clustering behavior leads to reduced face obfuscation utility for faces of minority groups. An intuitive analytical model provides insight into these phenomena.
The authors thank Nicholas Carlini (UC Berkeley) and Dimitris Tsipras (MIT) for feedback to improve the survey quality. We also acknowledge X. Huang (Uni. Liverpool), K. R. Reddy (IISC), E. Valle (UNICAMP), Y. Yoo (CLAIR) and others for providing pointers to make the survey more comprehensive.
Deep learning-based face recognition models are vulnerable to adversarial attacks. To curb these attacks, most defense methods aim to improve the robustness of the recognition model against adversarial perturbations. However, the generalization capability of these methods is quite limited; in practice, they remain vulnerable to unseen adversarial attacks. Deep learning models are fairly robust to general perturbations such as Gaussian noise. A straightforward approach is therefore to inactivate adversarial perturbations so that they can easily be handled as general perturbations. In this paper, a plug-and-play adversarial defense method called Perturbation Inactivation (PIN) is proposed to inactivate adversarial perturbations for adversarial defense. We find that perturbations in different subspaces have different influences on the recognition model. There should be a subspace, called the immune space, in which perturbations have fewer adverse impacts on the recognition model than perturbations in other subspaces. Hence, our method estimates the immune space and inactivates adversarial perturbations by restricting them to this subspace. The proposed method can be generalized to unseen adversarial perturbations, since it does not rely on a specific kind of adversarial attack method. This approach not only outperforms several state-of-the-art adversarial defense methods but also demonstrates superior generalization capability through exhaustive experiments. Furthermore, the proposed method can be successfully applied to four commercial APIs without additional training, indicating that it can be easily generalized to existing face recognition systems. The source code is available at https://github.com/RenMin1991/Perturbation-Inactivate
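The inactivation step can be read as projecting the (possibly adversarial) input onto an estimated immune subspace so that whatever perturbation remains behaves like a general perturbation. The paper's estimation procedure is not reproduced here; the sketch below uses a PCA basis fitted on clean faces purely as an illustrative stand-in.

```python
import torch

def estimate_subspace(clean_faces, k=64):
    """Illustrative stand-in for immune-space estimation: the top-k principal
    directions of clean face images (flattened to vectors)."""
    x = clean_faces.flatten(1)
    mean = x.mean(dim=0, keepdim=True)
    _, _, v = torch.pca_lowrank(x - mean, q=k)
    return mean, v                        # v: (dim, k) orthonormal basis

def inactivate(image, mean, basis):
    """Restrict the input to the estimated subspace before recognition."""
    x = image.flatten(1) - mean
    projected = (x @ basis) @ basis.T + mean
    return projected.view_as(image).clamp(0.0, 1.0)

clean_faces = torch.rand(256, 3, 32, 32)              # stand-in for a clean face set
mean, basis = estimate_subspace(clean_faces)
adv_face = torch.rand(1, 3, 32, 32)                   # possibly adversarial input
cleaned = inactivate(adv_face, mean, basis)
print(cleaned.shape)
```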
Advances in deep learning have enabled a wide range of promising applications. However, these systems are vulnerable to adversarial machine learning (AML) attacks: adversarially crafted perturbations to their inputs can cause them to misclassify. Several state-of-the-art adversarial attacks have demonstrated that they can reliably fool classifiers, making these attacks a significant threat. Adversarial attack generation algorithms focus primarily on creating successful examples while controlling the noise magnitude and distribution to make detection more difficult. The underlying assumption of these attacks is that the adversarial noise is generated offline, making their execution time a secondary consideration. Recently, however, just-in-time adversarial attacks, where an attacker opportunistically generates adversarial examples on the fly, have become possible. This paper introduces a new problem: how do we generate adversarial noise under real-time constraints to support such real-time adversarial attacks? Understanding this problem improves our understanding of the threat these attacks pose to real-time systems and provides security evaluation benchmarks for future defenses. We therefore first conduct a run-time analysis of adversarial generation algorithms. Universal attacks produce a general perturbation offline, with no online overhead, and can be applied to any input; however, their success rate is limited because of their generality. In contrast, online algorithms, which work on a specific input, are computationally expensive, making them unsuitable for operation under time constraints. Thus, we propose ROOM, a novel Real-time Online-Offline attack construction Model where an offline component serves to warm up the online algorithm, making it possible to generate highly successful attacks under time constraints.
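The offline/online split can be illustrated as computing a universal, input-agnostic perturbation offline and then spending a small online budget on a few input-specific refinement steps that start from that warm start. In the toy sketch below, the placeholder classifier, budgets, and step counts are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder classifier
eps, alpha = 8 / 255, 2 / 255

def grad_sign(x, y, delta):
    """Sign of the loss gradient with respect to the (shared) perturbation."""
    delta = delta.detach().requires_grad_(True)
    loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    return torch.autograd.grad(loss, delta)[0].sign()

# --- Offline phase: universal perturbation accumulated over a reference set ---
universal = torch.zeros(1, 3, 32, 32)
ref_x, ref_y = torch.rand(64, 3, 32, 32), torch.randint(0, 10, (64,))
for _ in range(20):
    universal = (universal + alpha * grad_sign(ref_x, ref_y, universal)).clamp(-eps, eps)

# --- Online phase: a handful of input-specific steps from the warm start ---
def room_attack(x, y, online_steps=2):
    delta = universal.clone()
    for _ in range(online_steps):                 # cheap enough for a real-time budget
        delta = (delta + alpha * grad_sign(x, y, delta)).clamp(-eps, eps)
    return (x + delta).clamp(0.0, 1.0)

x, y = torch.rand(1, 3, 32, 32), torch.randint(0, 10, (1,))
x_adv = room_attack(x, y)
print(float((x_adv - x).abs().max()))
```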
Many state-of-the-art ML models have surpassed human performance in various tasks such as image classification. With such outstanding performance, ML models are widely used today. However, the existence of adversarial attacks and data poisoning attacks genuinely calls the robustness of ML models into question. For instance, Engstrom et al. demonstrated that state-of-the-art image classifiers can easily be fooled by small rotations of arbitrary images. As ML systems are increasingly incorporated into safety- and security-sensitive applications, adversarial attacks and data poisoning attacks pose a considerable threat. This chapter focuses on two broad and important areas of ML security: adversarial attacks and data poisoning attacks.
In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget. Due to the limited feedback information, existing query-based black-box attack methods often require many queries for attacking each benign example. To reduce query cost, we propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability. Specifically, by treating the attack on each benign example as one task, we develop a meta-learning framework by training a meta-generator to produce perturbations conditioned on benign examples. When attacking a new benign example, the meta generator can be quickly fine-tuned based on the feedback information of the new task as well as a few historical attacks to produce effective perturbations. Moreover, since the meta-train procedure consumes many queries to learn a generalizable generator, we utilize model-level adversarial transferability to train the meta-generator on a white-box surrogate model, then transfer it to help the attack against the target model. The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack methods to boost their performance, which is verified by extensive experiments.
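The example-level transfer idea, fine-tuning a perturbation generator on feedback from the new benign example plus a few historical attacks, is sketched below. The generator architecture, the differentiable toy query score (standing in for gradient estimates that would be obtained from actual queries), and the fine-tuning schedule are illustrative assumptions, not the paper's procedure.

```python
import torch
import torch.nn as nn

class MetaGenerator(nn.Module):
    """Perturbation generator conditioned on the benign example."""
    def __init__(self, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return (x + self.eps * self.net(x)).clamp(0.0, 1.0)

# Black-box feedback: a scalar score per query (e.g., the target model's
# confidence in the true class) that the attacker wants to drive down.
# Here it is a differentiable toy score; in the true black-box setting its
# gradient would be estimated from queries.
target_w = torch.randn(3 * 32 * 32)
def query_score(x):
    return (x.flatten(1) * target_w).sum(dim=1).mean()

gen = MetaGenerator()           # assume it has already been meta-trained on a surrogate
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

new_example = torch.rand(1, 3, 32, 32)
history = torch.rand(4, 3, 32, 32)                      # a few past benign examples
task_batch = torch.cat([new_example, history], dim=0)

for _ in range(5):                                      # quick task-specific fine-tune
    opt.zero_grad()
    loss = query_score(gen(task_batch))                 # lower score = better attack
    loss.backward()
    opt.step()

adv = gen(new_example).detach()
print(float(query_score(new_example)), float(query_score(adv)))
```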
Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to adversarial examples. We find, however, that typical adaptive evaluations are incomplete. We demonstrate that thirteen defenses recently published at ICLR, ICML and NeurIPS, and which illustrate a diverse set of defense strategies, can be circumvented despite attempting to perform evaluations using adaptive attacks. While prior evaluation papers focused mainly on the end result (showing that a defense was ineffective), this paper focuses on laying out the methodology and the approach necessary to perform an adaptive attack. Some of our attack strategies are generalizable, but no single strategy would have been sufficient for all defenses. This underlines our key message that adaptive attacks cannot be automated and always require careful and appropriate tuning to a given defense. We hope that these analyses will serve as guidance on how to properly perform adaptive attacks against defenses to adversarial examples, and thus will allow the community to make further progress in building more robust models.