Identifying out-of-distribution content is essential to the successful deployment of neural networks. Watchdog techniques have been developed to support the detection of such inputs, but their performance can be limited by the amount of available data. Generative adversarial networks have demonstrated many capabilities, including the ability to generate facsimiles with excellent accuracy. This paper presents and empirically evaluates a multi-tiered watchdog, developed with GAN-generated data, for improved out-of-distribution detection. The cascade watchdog uses adversarial training to increase the amount of available data that resembles the out-of-distribution elements that are harder to detect. A specialized second guard is then added sequentially. The results show improved detection of the most challenging out-of-distribution inputs while preserving an extremely low false positive rate.
Anomaly detection is a classical problem in computer vision, namely the determination of the normal from the abnormal when datasets are highly biased towards one class (normal) due to the insufficient sample size of the other class (abnormal). While this can be addressed as a supervised learning problem, a significantly more challenging problem is that of detecting the unknown/unseen anomaly case that takes us instead into the space of a one-class, semi-supervised learning paradigm. We introduce such a novel anomaly detection model, by using a conditional generative adversarial network that jointly learns the generation of high-dimensional image space and the inference of latent space. Employing encoder-decoder-encoder sub-networks in the generator network enables the model to map the input image to a lower dimension vector, which is then used to reconstruct the generated output image. The use of the additional encoder network maps this generated image to its latent representation. Minimizing the distance between these images and the latent vectors during training aids in learning the data distribution for the normal samples. As a result, a larger distance metric from this learned data distribution at inference time is indicative of an outlier from that distribution: an anomaly. Experimentation over several benchmark datasets, from varying domains, shows the model efficacy and superiority over previous state-of-the-art approaches.
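For illustration, a minimal PyTorch sketch of the encoder-decoder-encoder idea and the latent-distance anomaly score described above; the layer sizes and module names are assumptions for the sketch, not the paper's implementation.

```python
import torch
import torch.nn as nn

class EncoderDecoderEncoder(nn.Module):
    """Toy generator: x -> z -> x_hat -> z_hat (illustrative sizes)."""
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))
        self.enc2 = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))

    def forward(self, x):
        z = self.enc1(x)          # latent code of the input
        x_hat = self.dec(z)       # reconstruction
        z_hat = self.enc2(x_hat)  # latent code of the reconstruction
        return x_hat, z, z_hat

def anomaly_score(model, x):
    """Larger latent distance => more likely an outlier from the learned distribution."""
    x_hat, z, z_hat = model(x)
    return torch.norm(z - z_hat, dim=1)

model = EncoderDecoderEncoder()
x = torch.rand(8, 784)            # a batch of flattened images
print(anomaly_score(model, x))
```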
Counterfactuals can explain the classification decisions of neural networks in a human-interpretable way. We propose a simple but effective method for generating such counterfactuals. More specifically, we perform a suitable diffeomorphic coordinate transformation and then carry out gradient ascent in these coordinates to find counterfactuals that are classified with high confidence as a specified target class. We propose two methods that leverage generative models to construct coordinate systems that are either exactly or approximately diffeomorphic. We analyze the generation process with Riemannian differential geometry and validate the quality of the generated counterfactuals using various qualitative and quantitative measures.
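A minimal sketch of the general recipe, assuming a pretrained generator and classifier (both replaced here by toy stand-ins): gradient ascent on the target-class logit is performed in the generator's latent coordinates rather than in pixel space.

```python
import torch
import torch.nn as nn

def latent_counterfactual(generator, classifier, z_init, target_class,
                          steps=200, lr=0.05):
    """Gradient ascent on the target-class logit, carried out in latent coordinates."""
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = classifier(generator(z))
        loss = -logits[:, target_class].mean()   # push toward the target class
        loss.backward()
        opt.step()
    return generator(z).detach(), z.detach()

# Toy stand-ins for a pretrained generator and classifier.
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784), nn.Sigmoid())
classifier = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
x_cf, z_cf = latent_counterfactual(generator, classifier, torch.randn(1, 16), target_class=3)
```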
Anomaly detection is the task of identifying samples that do not conform to the distribution of normal data. Because anomalous data are unavailable, training a supervised deep neural network is a tedious task, and unsupervised methods are therefore the common approach to this problem. Deep autoencoders have been widely used as the basis of many unsupervised anomaly detection methods. However, a notable shortcoming of deep autoencoders is that they provide insufficient representations for anomaly detection because they generalize to reconstructing outliers as well. In this work, we design an adversarial framework consisting of two competing components: an adversarial distorter and an autoencoder. The adversarial distorter is a convolutional encoder that learns to produce effective perturbations, and the autoencoder is a deep convolutional neural network that aims to reconstruct images from the perturbed latent feature space. The networks are trained with opposing objectives: the adversarial distorter produces perturbations of the encoder's latent feature space that maximize the reconstruction error, while the autoencoder tries to neutralize the effect of these perturbations to minimize it. When applied to anomaly detection, the proposed method learns semantically richer representations as a result of the perturbations applied to the feature space. The proposed method outperforms existing state-of-the-art anomaly detection methods on image and video datasets.
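A compact sketch of the two competing objectives, with toy fully connected stand-ins; the described method uses convolutional models and image/video data.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins: encoder/decoder form the autoencoder; the distorter
# maps an image to a perturbation of the latent feature space.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 64))
decoder = nn.Sequential(nn.Linear(64, 784), nn.Sigmoid())
distorter = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.Tanh())

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(distorter.parameters(), lr=1e-3)
mse = nn.MSELoss()

def train_step(x):
    # 1) Distorter step: maximize the reconstruction error of the perturbed code.
    z = encoder(x).detach()
    x_rec = decoder(z + distorter(x))
    loss_d = -mse(x_rec, x.flatten(1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Autoencoder step: neutralize the (fixed) perturbation, minimize the error.
    x_rec = decoder(encoder(x) + distorter(x).detach())
    loss_ae = mse(x_rec, x.flatten(1))
    opt_ae.zero_grad(); loss_ae.backward(); opt_ae.step()
    return loss_ae.item()

x = torch.rand(8, 1, 28, 28)
print(train_step(x))
```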
The detection of exoplanets opens the door to the discovery of new habitable worlds and helps us understand how planets form. With the aim of finding Earth-like habitable planets, NASA launched the Kepler space telescope and its follow-up mission K2. Advances in observational capability have increased the amount of fresh data available for study, and processing it manually is both time-consuming and difficult. Machine learning and deep learning techniques can greatly reduce the human effort required to process, in an economical and unbiased way, the enormous amount of data produced by the modern instruments of these exoplanet programs. However, care must be taken to detect all exoplanets precisely while minimizing the misclassification of non-exoplanet stars. In this paper, we exploit two variants of generative adversarial networks, namely the semi-supervised GAN and the auxiliary classifier GAN, to detect transiting exoplanets in K2 data. We find that the use of these models can help with the classification of stars hosting exoplanets. Both of our techniques are able to classify light curves in the test data with a recall and precision of 1.00. Our semi-supervised technique also helps alleviate the tedious task of creating a labeled dataset.
Most defenses against adversarial attacks rely on obfuscated gradients. These methods are successful in defending against gradient-based attacks; however, they are easily circumvented by attacks that do not use gradients, or by attacks that approximate and use corrected gradients. Defenses that do not obfuscate gradients, such as adversarial training, exist, but these methods generally make assumptions about the attack, such as its magnitude. We propose a classification model that does not obfuscate gradients and is robust by construction without assuming any knowledge about the attack. Our method casts classification as an optimization problem in which we "invert" a conditional generator trained on unperturbed natural images to find the class that generates an image closest to the query image. We hypothesize that a potential source of brittleness against adversarial attacks is the high-to-low dimensional nature of feed-forward classifiers, which allows an adversary to find small perturbations in input space that lead to large changes in output space. Generative models, on the other hand, are typically low-to-high dimensional mappings. While this method is related to Defense-GAN, the use of a conditional generative model and inversion in our model, instead of a feed-forward classifier, is a critical difference. Unlike Defense-GAN, which was shown to generate obfuscated gradients that are easily circumvented, we show that our method does not obfuscate gradients. We demonstrate that our model is extremely robust against black-box attacks and has improved robustness against white-box attacks compared to naturally trained feed-forward classifiers.
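A toy sketch of classification-by-inversion, under the assumptions that the conditional generator takes a concatenated [z, one-hot label] input and that plain squared error measures closeness; both are simplifications of the described approach.

```python
import torch
import torch.nn as nn

def classify_by_inversion(gen, x, num_classes=10, latent_dim=16, steps=100, lr=0.1):
    """Predict the class whose conditional generator output can get closest to x."""
    best_loss, best_class = None, None
    for y in range(num_classes):
        z = torch.zeros(x.size(0), latent_dim, requires_grad=True)
        onehot = torch.zeros(x.size(0), num_classes)
        onehot[:, y] = 1.0
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = ((gen(torch.cat([z, onehot], dim=1)) - x) ** 2).mean()
            loss.backward()
            opt.step()
        if best_loss is None or loss.item() < best_loss:
            best_loss, best_class = loss.item(), y
    return best_class

# Toy conditional generator: concatenated [z, one-hot class] -> image.
gen = nn.Sequential(nn.Linear(16 + 10, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid())
print(classify_by_inversion(gen, torch.rand(1, 784)))
```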
Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this through deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image super-resolution and classification. The aim of this review paper is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible. In addition to identifying different methods for training and constructing GANs, we also point to remaining challenges in their theory and application.
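For readers new to the framework, a minimal (non-saturating) GAN training step in PyTorch; the tiny fully connected networks and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

latent_dim = 16
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real):
    z = torch.randn(real.size(0), latent_dim)
    fake = G(z)

    # Discriminator: label real images 1 and generated images 0.
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator (non-saturating loss): make D label fakes as real.
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

real_batch = torch.randn(32, 784)   # placeholder for a batch of normalized images
print(gan_step(real_batch))
```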
In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of unperturbed images. At inference time, it finds a close output to a given image which does not contain the adversarial changes. This output is then fed to the classifier. Our proposed method can be used with any classification model and does not modify the classifier structure or training procedure. It can also be used as a defense against any attack as it does not assume knowledge of the process for generating the adversarial examples. We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies. Our code has been made publicly available at https://github.com/kabkabm/defensegan.
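A rough sketch of the inference-time projection Defense-GAN describes, with toy stand-ins for the pretrained generator and classifier; the restart and step counts are illustrative.

```python
import torch
import torch.nn as nn

def defense_gan_project(G, x, latent_dim=16, restarts=4, steps=200, lr=0.05):
    """Find a generator output close to x by gradient descent on z; keep the best restart."""
    best = None
    for _ in range(restarts):
        z = torch.randn(x.size(0), latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = ((G(z) - x) ** 2).mean()
            loss.backward()
            opt.step()
        if best is None or loss.item() < best[0]:
            best = (loss.item(), G(z).detach())
    return best[1]

# Toy stand-ins for a pretrained generator and classifier.
G = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid())
clf = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
x_query = torch.rand(1, 784)                   # possibly adversarial input
logits = clf(defense_gan_project(G, x_query))  # classify the purified image instead of x
```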
Deep learning has shown impressive performance on hard perceptual problems. However, researchers found deep learning systems to be vulnerable to small, specially crafted perturbations that are imperceptible to humans. Such perturbations cause deep learning systems to mis-classify adversarial examples, with potentially disastrous consequences where safety or security is crucial. Prior defenses against adversarial examples either targeted specific attacks or were shown to be ineffective. We propose MagNet, a framework for defending neural network classifiers against adversarial examples. MagNet neither modifies the protected classifier nor requires knowledge of the process for generating adversarial examples. MagNet includes one or more separate detector networks and a reformer network. The detector networks learn to differentiate between normal and adversarial examples by approximating the manifold of normal examples. Since they assume no specific process for generating adversarial examples, they generalize well. The reformer network moves adversarial examples towards the manifold of normal examples, which is effective for correctly classifying adversarial examples with small perturbation. We discuss the intrinsic difficulties in defending against whitebox attack and propose a mechanism to defend against graybox attack. Inspired by the use of randomness in cryptography, we use diversity to strengthen MagNet. We show empirically that MagNet is effective against the most advanced state-of-the-art attacks in blackbox and graybox scenarios without sacrificing false positive rate on normal examples.
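A simplified sketch of the detector/reformer split, assuming a single autoencoder and a fixed reconstruction-error threshold; in practice the threshold would be calibrated on clean validation data, and MagNet additionally relies on multiple detectors and diversity.

```python
import torch
import torch.nn as nn

autoencoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 784), nn.Sigmoid())
classifier = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

def magnet_like_inference(x, threshold=0.05):
    """Detector: reject inputs far from the learned manifold.
       Reformer: classify the autoencoder's reconstruction instead of x."""
    x_rec = autoencoder(x)
    recon_error = ((x_rec - x) ** 2).mean(dim=1)   # per-example reconstruction error
    accept = recon_error < threshold               # detector decision (threshold is illustrative)
    logits = classifier(x_rec)                     # reformer output fed to the classifier
    return accept, logits

accept, logits = magnet_like_inference(torch.rand(8, 784))
```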
The problem of learning from positive and unlabeled data (a.k.a. PU learning) has been studied in the binary (i.e., positive versus negative) classification setting, where the input data consist of (1) observations from the positive class with their corresponding labels, and (2) unlabeled observations drawn from both the positive and negative classes. Generative adversarial networks (GANs) have been used to reduce the problem to a supervised setting, with the advantage that supervised learning achieves state-of-the-art accuracy on classification tasks. To generate pseudo-negative observations, the GAN is trained on positive and unlabeled observations with a modified loss. Using both the positive and pseudo-negative observations then leads to a supervised learning setting. The generation of pseudo-negative observations that are realistic enough to replace the missing negative class samples is the bottleneck of current GAN-based algorithms. By incorporating an additional classifier into the GAN architecture, we provide a new GAN-based approach. In our proposed method, the GAN discriminator instructs the generator to generate only samples that fall within the unlabeled data distribution, while a second classifier, the observer network, monitors the GAN training to (i) prevent the generated samples from falling into the positive distribution and (ii) learn the features that are key to discriminating between positive and negative observations. Experiments on four image datasets demonstrate that our trained observer network outperforms existing techniques in distinguishing real unseen positive and negative samples.
Obtaining models that capture imaging markers relevant for disease progression and treatment monitoring is challenging. Models are typically based on large amounts of data with annotated examples of known markers aiming at automating detection. High annotation effort and the limitation to a vocabulary of known markers limit the power of such approaches. Here, we perform unsupervised learning to identify anomalies in imaging data as candidates for markers. We propose AnoGAN, a deep convolutional generative adversarial network to learn a manifold of normal anatomical variability, accompanying a novel anomaly scoring scheme based on the mapping from image space to a latent space. Applied to new data, the model labels anomalies, and scores image patches indicating their fit into the learned distribution. Results on optical coherence tomography images of the retina demonstrate that the approach correctly identifies anomalous images, such as images containing retinal fluid or hyperreflective foci.
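A rough sketch of the latent-mapping score described above, assuming a pretrained generator and a discriminator feature extractor (toy stand-ins here); the residual/feature weighting follows the spirit of the scoring scheme rather than its exact form.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid())
D_features = nn.Sequential(nn.Linear(784, 128), nn.ReLU())   # discriminator feature layer

def anogan_like_score(x, steps=200, lr=0.05, lam=0.1):
    """Map x to latent space by optimizing z; the score mixes residual and feature losses."""
    z = torch.randn(x.size(0), 16, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        residual = (G(z) - x).abs().mean()
        feature = (D_features(G(z)) - D_features(x)).abs().mean()
        loss = (1 - lam) * residual + lam * feature
        loss.backward()
        opt.step()
    return loss.item()   # larger => poorer fit to the learned manifold => more anomalous

print(anogan_like_score(torch.rand(1, 784)))
```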
With recent advances in deep learning generative models, it did not take long for them to show outstanding performance in the time-series domain as well. Deep neural networks that work with time series depend heavily on the breadth and consistency of the datasets used for training. These characteristics are not usually abundant in the real world, where data are typically limited and often subject to privacy constraints that must be guaranteed. An effective approach is therefore to increase the amount of data using data augmentation (DA) techniques, either by adding noise or permutations or by generating new synthetic data. This work systematically reviews the current state of the art in the field to provide an overview of all available algorithms and proposes a taxonomy of the most relevant research. The efficiency of the different variants is evaluated; as an important part of the process, the different metrics used to evaluate performance and the main problems concerning each model are analyzed. The ultimate aim of this study is to summarize the evolution and performance of the areas that produce better results, in order to guide future researchers in this field.
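Two of the simplest augmentation families mentioned above (noise injection and segment permutation), sketched in NumPy for a univariate series; parameter values are illustrative.

```python
import numpy as np

def jitter(x, sigma=0.03):
    """Additive Gaussian noise augmentation for a (length, channels) series."""
    return x + np.random.normal(0.0, sigma, size=x.shape)

def permute_segments(x, n_segments=4):
    """Split the series into segments and shuffle their order."""
    segments = np.array_split(x, n_segments, axis=0)
    order = np.random.permutation(len(segments))
    return np.concatenate([segments[i] for i in order], axis=0)

series = np.sin(np.linspace(0, 10, 200)).reshape(-1, 1)   # toy univariate series
augmented = permute_segments(jitter(series))
```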
Image generative models can learn the distribution of the training data and therefore generate examples by sampling from this distribution. However, when the training dataset is corrupted by outliers, the generative model is likely to produce examples similar to the outliers. In fact, a small fraction of outliers can induce state-of-the-art generative models, such as the Vector Quantized Variational Autoencoder (VQ-VAE), to learn significant modes from the outliers. To mitigate this problem, we propose a robust generative model based on VQ-VAE, which we name Robust VQ-VAE (RVQ-VAE). To achieve robustness, RVQ-VAE uses two separate codebooks for inliers and outliers. To ensure that each codebook embeds the correct components, we iteratively update the sets of inliers and outliers during each training epoch. To ensure that the encoded data points are matched to the correct codebook, we quantize using a weighted Euclidean distance whose weights are determined by the directional variances of the codebooks. Both codebooks, together with the encoder and decoder, are trained jointly according to the reconstruction loss and the quantization loss. We experimentally demonstrate that RVQ-VAE is able to generate examples from the inliers even when a large fraction of the training data points are corrupted.
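A sketch of quantization against two codebooks under a weighted Euclidean distance; using the inverse per-dimension variance as the weight is an illustrative assumption standing in for the directional variances mentioned above.

```python
import torch

def weighted_quantize(z, codebook, weights):
    """Assign each code z to its nearest codeword under a weighted Euclidean distance."""
    # z: (batch, dim), codebook: (num_codes, dim), weights: (dim,)
    diff = z.unsqueeze(1) - codebook.unsqueeze(0)   # (batch, num_codes, dim)
    dist = (weights * diff ** 2).sum(dim=-1)        # weighted squared distance
    idx = dist.argmin(dim=1)
    return codebook[idx], idx

dim = 8
inlier_book, outlier_book = torch.randn(64, dim), torch.randn(16, dim)
# Illustrative weights: inverse per-dimension variance of each codebook.
w_in = 1.0 / (inlier_book.var(dim=0) + 1e-6)
w_out = 1.0 / (outlier_book.var(dim=0) + 1e-6)

z = torch.randn(32, dim)                            # encoder outputs
q_in, _ = weighted_quantize(z, inlier_book, w_in)
q_out, _ = weighted_quantize(z, outlier_book, w_out)
# A point would be routed to whichever codebook quantizes it with smaller weighted error.
```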
Machine learning models often encounter samples that differ from the training distribution. Failing to recognize an out-of-distribution (OOD) sample, and consequently assigning it a class label, significantly compromises the reliability of a model. The problem has gained significant attention due to its importance for the safe deployment of models in open-world settings. Detecting OOD samples is challenging due to the intractability of modeling all possible unknown distributions. To date, several research domains have tackled the problem of detecting unfamiliar samples, including anomaly detection, novelty detection, one-class learning, open set recognition, and out-of-distribution detection. Despite having similar and shared concepts, out-of-distribution, open-set, and anomaly detection have been investigated independently. Accordingly, these research avenues have not cross-pollinated, creating research barriers. While some surveys intend to provide an overview of these approaches, they seem to focus only on a specific domain without examining the relationships between different domains. This survey aims to provide a cross-domain and comprehensive review of numerous eminent works in the respective areas while identifying their commonalities. Researchers can benefit from the overview of research advances in different fields and develop future methodologies synergistically. Furthermore, to the best of our knowledge, while there are surveys on anomaly detection or one-class learning, there is no comprehensive or up-to-date survey on out-of-distribution detection, which this survey covers extensively. Finally, with a unified cross-domain perspective, we discuss and shed light on future lines of research, intending to bring these fields closer together.
Novelty detection is the task of recognizing samples that do not belong to the distribution of the target class. The absence of the novel class during training prevents the use of traditional classification approaches. Deep autoencoders have been widely used as the basis of many unsupervised novelty detection methods. In particular, context autoencoders have been successful in novelty detection tasks because of the more effective representations they learn by reconstructing the original image from a randomly masked one. However, a significant drawback of context autoencoders is that random masking does not consistently cover the important structures of the input image, leading to suboptimal representations, especially for novelty detection tasks. In this paper, to optimize the input masking, we design a framework consisting of two competing networks: a mask module and a reconstructor. The mask module is a convolutional autoencoder that learns to generate optimal masks covering the most important parts of the image, while the reconstructor is a convolutional encoder-decoder that aims to reconstruct the unperturbed image from the masked one. The networks are trained adversarially: the mask module generates masks that are applied to the images given to the reconstructor, and in this way the mask module seeks to maximize the reconstruction error that the reconstructor is minimizing. When applied to novelty detection, the proposed method learns semantically richer representations than context autoencoders and enhances novelty detection at test time through more optimal masking. Novelty detection experiments on the MNIST and CIFAR-10 image datasets demonstrate the superiority of the proposed method over cutting-edge approaches. In further experiments on the UCSD video dataset for novelty detection, the proposed method achieves state-of-the-art results.
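A minimal sketch of the adversarial masking loop with toy fully connected stand-ins; the small penalty on mask area is an illustrative regularizer added so that maximizing the reconstruction error does not simply mask everything.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins: the mask module outputs a soft mask in [0, 1];
# the reconstructor tries to recover the unmasked image.
mask_module = nn.Sequential(nn.Flatten(), nn.Linear(784, 784), nn.Sigmoid())
reconstructor = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 784), nn.Sigmoid())
opt_mask = torch.optim.Adam(mask_module.parameters(), lr=1e-3)
opt_rec = torch.optim.Adam(reconstructor.parameters(), lr=1e-3)
mse = nn.MSELoss()

def train_step(x):
    x_flat = x.flatten(1)

    # Mask module step: produce masks that maximize the reconstruction error
    # (with a small area penalty so the mask stays selective).
    mask = mask_module(x)
    loss_mask = -mse(reconstructor(x_flat * (1 - mask)), x_flat) + 0.1 * mask.mean()
    opt_mask.zero_grad(); loss_mask.backward(); opt_mask.step()

    # Reconstructor step: recover the image despite the (fixed) mask.
    mask = mask_module(x).detach()
    loss_rec = mse(reconstructor(x_flat * (1 - mask)), x_flat)
    opt_rec.zero_grad(); loss_rec.backward(); opt_rec.step()
    return loss_rec.item()

print(train_step(torch.rand(8, 1, 28, 28)))
```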
Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or required expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model's prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two feature squeezing methods: reducing the color bit depth of each pixel and spatial smoothing. These simple strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
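A small NumPy sketch of the two squeezers and the prediction-distance test; the toy softmax model and the threshold value are placeholders (thresholds are tuned per dataset in practice).

```python
import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x, bits=4):
    """Quantize pixel values in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def spatial_smooth(x, size=2):
    """Median smoothing over each image's spatial dimensions."""
    return median_filter(x, size=(1, size, size))

def is_adversarial(predict_fn, x, threshold=1.0):
    """Flag inputs whose prediction changes too much under feature squeezing."""
    p = predict_fn(x)
    p_bits = predict_fn(reduce_bit_depth(x))
    p_smooth = predict_fn(spatial_smooth(x))
    score = max(np.abs(p - p_bits).sum(axis=1).max(),
                np.abs(p - p_smooth).sum(axis=1).max())
    return score > threshold

# Toy softmax "model" standing in for a trained classifier.
rng = np.random.default_rng(0)
W = rng.normal(size=(28 * 28, 10))
def predict_fn(x):
    logits = x.reshape(x.shape[0], -1) @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

x = rng.random((4, 28, 28))
print(is_adversarial(predict_fn, x))
```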
Anomaly identification refers to detecting samples that are unlike the training data distribution. Many generative models have been used to find anomalies, and among them, generative adversarial network (GAN)-based approaches are currently very popular. GANs mainly rely on the rich contextual information of these models to identify the actual training distribution. Following this analogy, we propose a new unsupervised model based on GANs: a combination of an autoencoder and a GAN. Furthermore, a new scoring function is introduced to target anomalies, in which a linear combination of the internal representation of the discriminator and the visual representation of the generator, plus the encoded representation of the autoencoder, jointly defines the proposed anomaly score. The model is evaluated on benchmark datasets such as SVHN, CIFAR10 and MNIST, as well as a public medical dataset of leukemia images. In all experiments, our model outperforms its existing counterparts while slightly improving inference time.
In this paper, a robust classification-autoencoder (CAE) is proposed, which has a strong ability to identify outliers and defend against adversaries. The main idea is to change the autoencoder from an unsupervised learning model into a classifier, in which the encoder is used to compress samples with different labels into disjoint compression spaces and the decoder is used to recover samples from their compression spaces. The encoder serves both as a compressed-feature learner and as a classifier, and the decoder is used to decide whether the classification given by the encoder is correct by comparing the input sample with the output. Since adversaries appear to be unavoidable for current DNN frameworks, a list classifier for defending against adversaries is introduced based on the CAE, which outputs several labels and the corresponding samples recovered by the CAE. Extensive experimental results show that the CAE achieves state-of-the-art results in identifying outliers by finding nearly all of them. The list classifier gives nearly lossless classification in the sense that the output list contains the correct label for almost all adversaries, and the size of the output list is reasonably small.
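One loose reading of the list-classifier idea, sketched with a shared encoder and per-class decoders; this is an illustrative interpretation rather than the authors' exact architecture, and the threshold is a placeholder.

```python
import torch
import torch.nn as nn

NUM_CLASSES, LATENT = 10, 16

# Illustrative stand-ins: one shared encoder, one decoder per candidate class.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, LATENT))
decoders = nn.ModuleList([
    nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid())
    for _ in range(NUM_CLASSES)])

def list_classify(x, threshold=0.05):
    """Return every label whose class-specific reconstruction stays close to x."""
    z = encoder(x)
    labels = []
    for y, dec in enumerate(decoders):
        err = ((dec(z) - x) ** 2).mean().item()
        if err < threshold:
            labels.append(y)
    return labels   # an empty list suggests an outlier

print(list_classify(torch.rand(1, 784)))
```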
We show that using nearest neighbours in the latent space of autoencoders (AE) significantly improves the performance of semi-supervised novelty detection in both single- and multi-class contexts. Novelty-detection-by-learning methods work by distinguishing between the non-novel training classes and all other unseen classes. Our method exploits a combination of the reconstructions of the nearest neighbours and the latent neighbours of a given input's latent representation. We demonstrate that our nearest-latent-neighbours (NLN) algorithm is memory and time efficient, does not require significant data augmentation, and does not rely on pre-trained networks. Furthermore, we show that the NLN algorithm is easily applied to multiple datasets without modification. Additionally, the proposed algorithm is agnostic to the autoencoder architecture and the reconstruction-error method. We validate our method on various standard datasets using several different autoencoder architectures, such as vanilla, adversarial and variational autoencoders, with reconstruction, residual or consistency-based losses. The results show that the NLN algorithm grants an increase of up to 17% in area under the receiver operating characteristic (AUROC) curve for the multi-class case and 8% for single-class novelty detection.
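A sketch of the latent-neighbour part of the score using scikit-learn's NearestNeighbors; the full NLN method also combines the neighbours' reconstructions, which is omitted here.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nln_scores(train_latents, test_latents, k=5):
    """Novelty score = mean distance to the k nearest training codes in latent space."""
    nn_index = NearestNeighbors(n_neighbors=k).fit(train_latents)
    dists, _ = nn_index.kneighbors(test_latents)
    return dists.mean(axis=1)          # larger => more novel

rng = np.random.default_rng(0)
z_train = rng.normal(size=(1000, 32))   # latent codes of non-novel training data
z_test = rng.normal(size=(10, 32))      # latent codes of query samples (from the same AE)
print(nln_scores(z_train, z_test))
```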
With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to well-designed input samples, called adversarial examples. Adversarial perturbations are imperceptible to humans but can easily fool deep neural networks in the testing/deploying stage. The vulnerability to adversarial examples becomes one of the major risks for applying deep neural networks in safety-critical environments. Therefore, attacks and defenses on adversarial examples draw great attention. In this paper, we review recent findings on adversarial examples for deep neural networks, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.
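As a concrete instance of one widely studied generation method covered by such surveys, a minimal FGSM sketch with a toy classifier stand-in:

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: one signed-gradient step that increases the loss."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))   # toy classifier stand-in
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
```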