Model inversion attacks (MIAs) aim to create synthetic images that reflect the class-wise characteristics in the private training data of a target classifier by exploiting the model's learned knowledge. Previous research has developed generative MIAs that use generative adversarial networks (GANs) as image priors tailored to a specific target model. This makes the attacks time- and resource-consuming, inflexible, and susceptible to distributional shifts between datasets. To overcome these drawbacks, we present Plug & Play Attacks, which relax the dependency between the target model and the image prior and enable a single GAN to attack a wide range of targets, requiring only minor adjustments to the attack. Moreover, we show that powerful MIAs are possible even with publicly available pre-trained GANs and under strong distributional shifts, for which previous approaches fail to produce meaningful results. Our extensive evaluation confirms the robustness and flexibility of Plug & Play Attacks and their ability to create high-quality images that reveal sensitive class characteristics.
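For readers unfamiliar with generative MIAs, the sketch below shows the generic GAN-based inversion loop such attacks build on: a latent vector is optimised until the image produced by a pre-trained generator is confidently classified as the target class. This is a minimal illustration, not the paper's exact Plug & Play procedure; the `generator` and `target_model` interfaces and the latent dimension are assumptions.

```python
import torch
import torch.nn.functional as F

def gan_model_inversion(generator, target_model, target_class,
                        latent_dim=512, steps=200, lr=0.05, device="cpu"):
    """Generic GAN-based model inversion sketch: search the latent space of a
    pre-trained generator for an image the target classifier assigns to
    `target_class`. Interfaces are illustrative assumptions."""
    z = torch.randn(1, latent_dim, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        image = generator(z)                               # assumed generator API
        logits = target_model(image)
        loss = F.cross_entropy(logits, torch.tensor([target_class], device=device))
        loss.backward()
        optimizer.step()
    return generator(z).detach()                           # synthetic class sample
```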
Apple recently revealed its deep perceptual hashing system NeuralHash to detect child sexual abuse material (CSAM) on user devices before files are uploaded to its iCloud service. Public criticism quickly arose regarding the protection of user privacy and the system's reliability. In this paper, we present the first comprehensive empirical analysis of deep perceptual hashing based on NeuralHash. Specifically, we show that current deep perceptual hashing may not be robust. An adversary can manipulate the hash values by applying slight changes to images, either induced by gradient-based approaches or simply by performing standard image transformations, forcing or preventing hash collisions. Such attacks permit malicious actors to easily exploit the detection system: from hiding abusive material to framing innocent users, everything is possible. Moreover, using the hash values, inferences can still be made about the data stored on user devices. In our view, based on our results, deep perceptual hashing in its current form is generally not ready for robust client-side scanning and should not be used from a privacy standpoint.
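The gradient-based hash manipulation described above can be illustrated with a short sketch. It assumes a differentiable hash network whose real-valued outputs are binarised by their sign; the loss and interface are illustrative and do not reproduce Apple's NeuralHash pipeline.

```python
import torch

def manipulate_hash(hash_model, image, target_bits, steps=100, lr=0.01, evade=False):
    """Nudge an image so the pre-binarisation hash outputs agree with
    `target_bits` (forcing a collision) or disagree with them (preventing a
    collision / evading detection). `target_bits` is a +/-1 tensor; for
    evasion it would be the image's own original hash bits."""
    x = image.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=lr)
    direction = -1.0 if evade else 1.0
    for _ in range(steps):
        optimizer.zero_grad()
        logits = hash_model(x)                 # hash value would be sign(logits)
        loss = -direction * (logits * target_bits).mean()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)                 # keep a valid image
    return x.detach()
```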
Membership inference attacks (MIAs) aim to determine whether a specific sample was used to train a predictive model. Knowing this may indeed lead to a privacy breach. Arguably, most MIAs exploit the model's prediction scores, i.e., the probability of each output given some input, following the intuition that a trained model tends to behave differently on its training data. We argue that this is a fallacy for many modern deep network architectures: for example, ReLU-type neural networks almost always produce high prediction scores far away from the training data. Consequently, MIAs will miserably fail, since this behavior leads to high false-positive rates not only on known domains but also on out-of-distribution data, and it implicitly acts as a defense against MIAs. Specifically, using generative adversarial networks, we are able to produce a potentially infinite number of samples that are falsely classified as part of the training data. In other words, the threat of MIAs is overestimated, and less information is leaked than previously assumed. Moreover, there is actually a trade-off between the overconfidence of classifiers and their susceptibility to MIAs: the more classifiers know when they do not know, making low-confidence predictions far away from the training data, the more they reveal the training data.
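The baseline this abstract argues against fits in a few lines. The hedged sketch below flags a sample as a training member whenever the maximum softmax score exceeds a threshold, which is exactly why overconfident predictions on out-of-distribution inputs turn into false positives.

```python
import torch

def score_based_mia(model, samples, threshold=0.9):
    """Prediction-score membership inference sketch: high maximum softmax
    confidence is taken as evidence of membership in the training set."""
    with torch.no_grad():
        probs = torch.softmax(model(samples), dim=1)
        confidence, _ = probs.max(dim=1)
    return confidence > threshold              # boolean membership guesses
```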
Transfer learning through the use of pre-trained models has become a growing trend in the machine learning community. Consequently, numerous pre-trained models are released online to facilitate further research. However, this raises extensive concerns about whether these pre-trained models leak privacy-sensitive information from their training data. Thus, in this work we aim to answer the following questions: "Can we effectively recover private information from these pre-trained models? What are the sufficient conditions to retrieve such sensitive information?" We first explore different statistical information that can distinguish the private training distribution from other distributions. Based on our observations, we propose a novel private data reconstruction framework, SecretGen, to effectively recover private information. Compared with previous methods, which can only recover private data given the ground-truth label of the targeted recovery instance, SecretGen does not require such prior knowledge, making it more practical. We conduct extensive experiments on different datasets under diverse scenarios to compare SecretGen with other baselines and provide a systematic benchmark to better understand the impact of different auxiliary information and optimization operations. We show that, without prior knowledge of the true class prediction, SecretGen is able to recover private data with performance similar to that of methods leveraging such prior knowledge. If the prior knowledge is given, SecretGen significantly outperforms the baseline methods. We also propose several quantitative metrics to further quantify the privacy vulnerability of pre-trained models, which will help with model selection for privacy-sensitive applications. Our code is available at: https://github.com/ai-secure/secretgen.
Model-based attacks can infer training data information from deep neural network models. These attacks heavily depend on the attacker's knowledge of the application domain, e.g., using it to determine the auxiliary data for model-inversion attacks. However, attackers may not know what the model is used for in practice. We propose a generative adversarial network (GAN) based method to explore likely or similar domains of a target model -- the model domain inference (MDI) attack. For a given target (classification) model, we assume that the attacker knows nothing but the input and output formats and can use the model to derive the prediction for any input in the desired form. Our basic idea is to use the target model to affect a GAN training process for a candidate domain's dataset that is easy to obtain. We find that the target model may distract the training procedure less if the domain is more similar to the target domain. We then measure the distraction level with the distance between GAN-generated datasets, which can be used to rank candidate domains for the target model. Our experiments show that the auxiliary dataset from an MDI top-ranked domain can effectively boost the result of model-inversion attacks.
While text-to-image synthesis currently enjoys great popularity among researchers and the general public, the security of these models has been neglected so far. Many text-guided image generation models rely on pre-trained text encoders from external sources, and their users trust that the retrieved models will behave as promised. Unfortunately, this might not be the case. We introduce backdoor attacks against text-guided generative models and demonstrate that their text encoders pose a major tampering risk. Our attacks only slightly alter an encoder so that no suspicious model behavior is apparent for image generations with clean prompts. By then inserting a single non-Latin character into the prompt, the adversary can trigger the model to either generate images with pre-defined attributes or images following a hidden, potentially malicious description. We empirically demonstrate the high effectiveness of our attacks on Stable Diffusion and highlight that the injection process of a single backdoor takes less than two minutes. Besides phrasing our approach solely as an attack, it can also force an encoder to forget phrases related to certain concepts, such as nudity or violence, and help to make image generation safer.
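A hedged sketch of the kind of teacher-student fine-tuning step such an encoder backdoor can use is shown below; the concrete losses, the trigger-injection rule, and the encoder interface (a callable mapping a list of prompts to an embedding tensor) are assumptions rather than the paper's exact recipe. Clean prompts are kept close to a frozen copy of the original encoder, while prompts containing the trigger character are pulled towards the embedding of a hidden target description.

```python
import torch
import torch.nn.functional as F

def backdoor_finetune_step(student, teacher, clean_prompts, trigger,
                           target_prompt, optimizer, beta=0.1):
    """One illustrative fine-tuning step: preserve utility on clean prompts,
    redirect triggered prompts to a hidden target embedding."""
    # Hypothetical trigger injection: swap one Latin character for the trigger.
    poisoned = [p.replace("o", trigger, 1) for p in clean_prompts]
    with torch.no_grad():
        clean_reference = teacher(clean_prompts)
        target_reference = teacher([target_prompt] * len(poisoned))
    utility_loss = F.mse_loss(student(clean_prompts), clean_reference)
    backdoor_loss = F.mse_loss(student(poisoned), target_reference)
    loss = utility_loss + beta * backdoor_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```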
Our goal with this survey is to provide an overview of state-of-the-art deep learning technologies for face generation and editing. We will cover the latest popular architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross-domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantics editing and preserving photo quality. We aim to provide an entry point into the field for readers who have basic knowledge of deep learning and are looking for an accessible introduction and overview.
Deep neural networks are vulnerable to attacks from adversarial inputs and, more recently, Trojans that misguide or hijack the model's decisions. We expose the existence of an intriguing class of bounded adversarial examples -- universal naturalistic adversarial patches -- which we call TnTs, by exploring the bounded adversarial example space and the natural input space within generative adversarial networks. An adversary can now arm themselves with a patch that is naturalistic, less malicious-looking, physically realizable, highly effective (achieving high attack success rates), and universal. A TnT is universal because any input image captured with a TnT in the scene will either: i) misguide the network (untargeted attack); or ii) force the network to make a malicious decision (targeted attack). Interestingly, an adversarial patch attacker now has the potential to exert a greater level of control: the ability to choose a location-independent, natural-looking patch as a trigger, in contrast to being constrained to noisy perturbations. So far this has only been possible with Trojan attack methods, which must interfere with the model-building process to embed a backdoor at the risk of discovery, yet still yield a patch deployable in the physical world. Through extensive experiments on the large-scale visual classification task ImageNet, with evaluations across its entire validation set of 50,000 images, we demonstrate the realistic threat posed by TnTs and the robustness of the attack. We show that the attack generalizes to create patches achieving higher attack success rates than existing state-of-the-art methods. Our results also show the generalizability of the attack to different visual classification tasks (CIFAR-10, GTSRB, PubFig) and multiple state-of-the-art deep neural networks such as WideResNet50, Inception-V3, and VGG-16.
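The patch search can be pictured with the hedged sketch below: a GAN latent is optimised so that the generated, natural-looking patch, pasted into scene images, drives the victim network towards a chosen class (the targeted variant). Latent size, patch placement, and model interfaces are assumptions for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def optimise_naturalistic_patch(patch_generator, victim_model, images,
                                target_class, location=(0, 0),
                                latent_dim=128, steps=300, lr=0.05):
    """Search a GAN's latent space for a patch that causes targeted
    misclassification when pasted onto input images (illustrative sketch)."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    x0, y0 = location
    targets = torch.full((len(images),), target_class, dtype=torch.long)
    for _ in range(steps):
        optimizer.zero_grad()
        patch = patch_generator(z)[0]                      # (C, h, w) patch
        _, h, w = patch.shape
        patched = images.clone()
        patched[:, :, y0:y0 + h, x0:x0 + w] = patch        # paste the patch
        loss = F.cross_entropy(victim_model(patched), targets)
        loss.backward()
        optimizer.step()
    return patch_generator(z).detach()
```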
Authentication systems are vulnerable to model inversion attacks, in which an adversary is able to approximate the inverse of a target machine learning model. Biometric models are a prime candidate for this type of attack, because inverting a biometric model allows the attacker to produce realistic biometric inputs that can spoof biometric authentication systems. One of the main constraints in conducting a successful model inversion attack is the amount of training data required. In this work, we focus on iris and facial biometric systems and propose a new technique that drastically reduces the amount of training data necessary. By leveraging the outputs of multiple models, we are able to conduct model inversion attacks with 1/10th the training set size of Ahmad and Fuller (IJCB 2020) for iris data and 1/1000th the training set size of Mai et al. (Pattern Analysis and Machine Intelligence 2019) for facial data. We denote our new attack technique as structured random with alignment loss. Our attacks are black-box, requiring no knowledge of the weights of the target neural network, only the dimension and values of the output vector. To show the versatility of the alignment loss, we also apply our attack framework to the task of membership inference (Shokri et al., IEEE S&P 2017) on biometric data. For iris data, the membership inference attack against classification networks improves from 52% to 62% accuracy.
While machine learning (ML) has made tremendous progress over the past decade, recent research has shown that ML models are vulnerable to various security and privacy attacks. So far, most of the attacks in this field have focused on discriminative models, represented by classifiers. Meanwhile, little attention has been paid to the security and privacy risks of generative models, such as generative adversarial networks (GANs). In this paper, we propose the first set of training dataset property inference attacks against GANs. Concretely, the adversary aims to infer a macro-level training dataset property, i.e., the proportion of the samples used to train the target GAN that have a certain attribute. A successful property inference attack allows the adversary to gain extra knowledge about the target GAN's training dataset, thereby directly violating the intellectual property of the target model owner. It can also be used as a fairness auditor to check whether the target GAN was trained with a biased dataset. Besides, property inference can serve as a building block for other advanced attacks, such as membership inference. We propose a general attack pipeline that can be tailored to two attack scenarios, including the full black-box setting and the partial black-box setting. For the latter, we introduce a novel optimization framework to increase the attack efficacy. Extensive experiments over four representative GAN models on five property inference tasks show that our attacks achieve strong performance. In addition, we show that our attacks can be used to enhance the performance of membership inference against GANs.
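In the full black-box setting, the core of such an attack can be sketched as follows: sample from the target GAN and use an auxiliary attribute classifier to estimate the proportion of generated samples with the attribute, which serves as the inferred training-set property. The sampler and classifier interfaces are assumptions; the paper's partial black-box variant adds an optimization step not shown here.

```python
import torch

def infer_property_proportion(target_gan_sampler, attribute_classifier,
                              n_samples=1000, batch_size=100):
    """Full black-box property inference sketch: the attribute proportion in
    GAN samples approximates the proportion in the private training set."""
    positives, total = 0, 0
    with torch.no_grad():
        for _ in range(n_samples // batch_size):
            images = target_gan_sampler(batch_size)        # assumed sampler API
            predictions = attribute_classifier(images).argmax(dim=1)
            positives += (predictions == 1).sum().item()   # class 1 = has attribute
            total += batch_size
    return positives / total
```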
Deep neural networks have become prevalent in human analysis, boosting the performance of applications such as biometric recognition, action recognition, and person re-identification. However, the performance of such networks scales with the available training data. In human analysis, the demand for large-scale datasets poses a severe challenge, as data collection is tedious, time-consuming, costly, and must comply with data protection laws. Current research investigates the generation of synthetic data as an efficient and privacy-preserving alternative to collecting real data in the field. This survey introduces the basic definitions and methodologies essential for generating and employing synthetic data for human analysis. We summarise current state-of-the-art methods and the main benefits of using synthetic data. We also provide an overview of publicly available synthetic datasets and generation models. Finally, we discuss the limitations as well as open research questions in this field. This survey is intended for researchers and practitioners in the field of human analysis.
Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to adversarial examples. We find, however, that typical adaptive evaluations are incomplete. We demonstrate that thirteen defenses recently published at ICLR, ICML and NeurIPS, which illustrate a diverse set of defense strategies, can be circumvented despite attempting to perform evaluations using adaptive attacks. While prior evaluation papers focused mainly on the end result (showing that a defense was ineffective), this paper focuses on laying out the methodology and the approach necessary to perform an adaptive attack. Some of our attack strategies are generalizable, but no single strategy would have been sufficient for all defenses. This underlines our key message that adaptive attacks cannot be automated and always require careful and appropriate tuning to a given defense. We hope that these analyses will serve as guidance on how to properly perform adaptive attacks against defenses to adversarial examples, and thus will allow the community to make further progress in building more robust models.
Machine Learning-as-a-Service (MLaaS) has become a widespread paradigm, making even the most complex machine learning models available to clients via, e.g., a pay-per-query principle. This allows users to avoid the time-consuming processes of data collection, hyperparameter tuning, and model training. However, by giving their customers access to the (predictions of their) models, MLaaS providers endanger their intellectual property, such as sensitive training data, optimized hyperparameters, or learned model parameters. Adversaries can create a copy of the model with (almost) identical behavior using only the predicted labels. Although many variants of this attack have been described, only scattered defense strategies that address isolated threats have been proposed. This raises the need for a thorough systematization of the field of model stealing, in order to arrive at a comprehensive understanding of why these attacks succeed and how they can be holistically defended against. We address this by categorizing and comparing model stealing attacks, assessing their performance, and exploring the corresponding defense techniques in different settings. We propose a taxonomy for attack and defense approaches and provide guidelines on how to select the right attack or defense strategy based on the goal and the available resources. Finally, we analyze which defenses are rendered less effective by current attack strategies.
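The label-only copy attack mentioned above can be sketched in a few lines: query the target for predicted labels on an attacker-chosen dataset and train a local surrogate to imitate them. The loader and model interfaces are assumptions; real attacks differ mainly in how queries are selected and budgeted.

```python
import torch
import torch.nn.functional as F

def label_only_extraction(target_model, surrogate_model, query_loader,
                          optimizer, epochs=5):
    """Model-stealing sketch using only the target's predicted (hard) labels."""
    for _ in range(epochs):
        for x, _ in query_loader:                          # attacker's query data
            with torch.no_grad():
                stolen_labels = target_model(x).argmax(dim=1)
            optimizer.zero_grad()
            loss = F.cross_entropy(surrogate_model(x), stolen_labels)
            loss.backward()
            optimizer.step()
    return surrogate_model
```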
The primary aim of single-image super-resolution is to construct a high-resolution (HR) image from a corresponding low-resolution (LR) input. In previous approaches, which have generally been supervised, the training objective typically measures a pixel-wise average distance between the super-resolved (SR) and HR images. Optimizing such metrics often leads to blurring, especially in high variance (detailed) regions. We propose an alternative formulation of the super-resolution problem based on creating realistic SR images that downscale correctly. We present a novel super-resolution algorithm addressing this problem, PULSE (Photo Upsampling via Latent Space Exploration), which generates high-resolution, realistic images at resolutions previously unseen in the literature. It accomplishes this in an entirely self-supervised fashion and is not confined to a specific degradation operator used during training, unlike previous methods (which require training on databases of LR-HR image pairs for supervised learning). Instead of starting with the LR image and slowly adding detail, PULSE traverses the high-resolution natural image manifold, searching for images that downscale to the original LR image. This is formalized through the "downscaling loss," which guides exploration through the latent space of a generative model. By leveraging properties of high-dimensional Gaussians, we restrict the search space to guarantee that our outputs are realistic. PULSE thereby generates super-resolved images that both are realistic and downscale correctly. We show extensive experimental results demonstrating the efficacy of our approach in the domain of face super-resolution (also known as face hallucination). We also present a discussion of the limitations and biases of the method as currently implemented with an accompanying model card with relevant metrics. Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
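The downscaling loss at the heart of PULSE can be written compactly; the sketch below uses bicubic resizing as a stand-in for the degradation operator, which is an assumption, and omits the constraints that keep the latent on the natural image manifold.

```python
import torch.nn.functional as F

def downscaling_loss(generator, z, lr_image):
    """PULSE-style objective sketch: a super-resolved candidate G(z) should
    downscale back to the observed low-resolution image."""
    sr_image = generator(z)                                # high-resolution candidate
    downscaled = F.interpolate(sr_image, size=lr_image.shape[-2:],
                               mode="bicubic", align_corners=False)
    return F.mse_loss(downscaled, lr_image)
```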
Recent work argues that robust training requires substantially larger datasets than those required for standard classification. On CIFAR-10 and CIFAR-100, this translates into a sizable robust-accuracy gap between models trained solely on data from the original training set and those trained with additional data extracted from the "80 Million Tiny Images" dataset (TI-80M). In this paper, we explore how generative models trained solely on the original training set can be leveraged to artificially increase the size of the original training set and improve adversarial robustness to $\ell_p$ norm-bounded perturbations. We identify sufficient conditions under which incorporating additional generated data can improve robustness, and demonstrate that it is possible to significantly reduce the robust-accuracy gap to models trained with additional real data. Surprisingly, we even show that the addition of non-realistic random data (generated by Gaussian sampling) can improve robustness. We evaluate our approach on CIFAR-10, CIFAR-100, SVHN and TinyImageNet against $\ell_\infty$ and $\ell_2$ norm-bounded perturbations of size $\epsilon = 8/255$ and $\epsilon = 128/255$, respectively, and show large absolute improvements in robust accuracy compared with previous state-of-the-art methods. Against $\ell_\infty$ norm-bounded perturbations of size $\epsilon = 8/255$, our models achieve 66.10% and 33.49% robust accuracy on CIFAR-10 and CIFAR-100, respectively (improving upon the state of the art by +8.96% and +3.29%). Against $\ell_2$ norm-bounded perturbations of size $\epsilon = 128/255$, our model achieves 78.31% on CIFAR-10 (+3.81%). These results beat most prior works that use external data.
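One practical detail of this kind of pipeline is how each training batch mixes original and generated examples; the hedged sketch below illustrates only that mixing step, with an arbitrary ratio and assumed data loaders, and omits the adversarial-training loop itself.

```python
import torch

def mixed_batch(real_iter, generated_iter, generated_fraction=0.7):
    """Combine original and generated samples in a fixed per-batch proportion
    (illustrative; the actual ratio is a tuned hyperparameter)."""
    real_x, real_y = next(real_iter)
    gen_x, gen_y = next(generated_iter)
    n_generated = int(generated_fraction * len(real_x))
    n_real = len(real_x) - n_generated
    x = torch.cat([real_x[:n_real], gen_x[:n_generated]])
    y = torch.cat([real_y[:n_real], gen_y[:n_generated]])
    return x, y
```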
Conditional GANs have matured in recent years and are able to generate high-quality realistic images. However, the computational resources and training data required to train high-quality GANs are enormous, making research on transfer learning of these models an urgent topic. In this paper, we explore the transfer from high-quality pre-trained unconditional GANs to conditional GANs. To this end, we propose hypernetwork-based adaptive weight modulation. In addition, we introduce a self-initialization procedure that does not require any real data to initialize the hypernetwork parameters. To further improve the sample efficiency of the knowledge transfer, we propose using a self-supervised (contrastive) loss to improve the GAN discriminator. In extensive experiments, we validate the efficiency of the hypernetworks, self-initialization, and contrastive loss on several standard benchmarks.
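As a rough illustration of what hypernetwork-based weight modulation can look like, the sketch below maps a class embedding to per-channel scales that modulate a frozen convolution kernel of the pre-trained generator. The architecture and interface are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class WeightModulationHyperNet(nn.Module):
    """Hypothetical hypernetwork: class embedding -> per-channel scales that
    modulate a frozen convolution weight of a pre-trained generator."""

    def __init__(self, embed_dim, out_channels):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(embed_dim, 128), nn.ReLU(),
                                 nn.Linear(128, out_channels))

    def forward(self, class_embedding, frozen_weight):
        # frozen_weight: (out_channels, in_channels, k, k); batch size 1 assumed.
        scale = 1.0 + self.mlp(class_embedding)
        return frozen_weight * scale[0].view(-1, 1, 1, 1)
```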
Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.
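For readers unfamiliar with these attacks, the sketch below shows a simplified version of the margin-based objective the L2 variant optimizes: a perturbation-size term traded off against a term that becomes non-positive once the target class wins. The full attack additionally uses a change of variables and a binary search over the trade-off constant, which are omitted here.

```python
import torch

def cw_l2_objective(model, x_adv, x_orig, target_class, c=1.0, kappa=0.0):
    """Simplified Carlini-Wagner-style L2 objective (sketch only)."""
    logits = model(x_adv)
    target_logit = logits[:, target_class]
    others = logits.clone()
    others[:, target_class] = float("-inf")                # exclude the target class
    best_other = others.max(dim=1).values
    margin = torch.clamp(best_other - target_logit, min=-kappa)
    l2 = ((x_adv - x_orig) ** 2).flatten(1).sum(dim=1)     # squared L2 distance
    return (l2 + c * margin).mean()
```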
Training generative adversarial networks (GAN) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.
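The adaptive part can be summarised in a few lines: an overfitting heuristic based on the sign of the discriminator outputs on real images (the paper's r_t) is compared against a target value, and the augmentation probability p is nudged up or down accordingly. The update step size below is an illustrative assumption.

```python
def update_ada_probability(p, d_real_sign_mean, target=0.6, step=0.01):
    """Adjust the augmentation probability p from the overfitting heuristic
    r_t = E[sign(D(real))]: raise p when r_t exceeds the target, else lower it."""
    if d_real_sign_mean > target:
        p = min(p + step, 1.0)
    else:
        p = max(p - step, 0.0)
    return p
```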
Generative Adversarial Networks (GANs) were introduced by Goodfellow in 2014, and since then have become popular for constructing generative artificial intelligence models. However, the drawbacks of such networks are numerous, like their longer training times, their sensitivity to hyperparameter tuning, several types of loss and optimization functions and other difficulties like mode collapse. Current applications of GANs include generating photo-realistic human faces, animals and objects. However, I wanted to explore the artistic ability of GANs in more detail, by using existing models and learning from them. This dissertation covers the basics of neural networks and works its way up to the particular aspects of GANs, together with experimentation and modification of existing available models, from least complex to most. The intention is to see if state of the art GANs (specifically StyleGAN2) can generate album art covers and if it is possible to tailor them by genre. This was attempted by first familiarizing myself with 3 existing GANs architectures, including the state of the art StyleGAN2. The StyleGAN2 code was used to train a model with a dataset containing 80K album cover images, then used to style images by picking curated images and mixing their styles.
Machine learning models often encounter samples that differ from the training distribution. Failing to recognize an out-of-distribution (OOD) sample, and consequently assigning it to an in-distribution class label, significantly compromises the reliability of a model. The problem has gained significant attention due to its importance for safely deploying models in open-world settings. Detecting OOD samples is challenging because of the intractability of modeling all possible unknown distributions. To date, several research domains have tackled the problem of detecting unfamiliar samples, including anomaly detection, novelty detection, one-class learning, open-set recognition, and out-of-distribution detection. Despite similar and shared concepts, out-of-distribution detection, open-set recognition, and anomaly detection have been investigated independently. Accordingly, these research avenues have not cross-pollinated, creating research barriers. Although some surveys intend to give an overview of these approaches, they tend to focus only on a specific domain without examining the relationships between different domains. This survey aims to provide a cross-domain and comprehensive review of numerous eminent works in the respective areas while identifying their commonalities. Researchers can benefit from the overview of research advances in different fields and develop future methodology synergistically. Furthermore, to the best of our knowledge, while there are surveys on anomaly detection or one-class learning, there is no comprehensive or up-to-date survey on out-of-distribution detection, which our survey covers extensively. Finally, having a unified cross-domain perspective, we discuss and shed light on future lines of research, intending to bring these fields closer together.