Although deep learning models have made great progress in image semantic segmentation, they typically require large numbers of annotated examples, and increasing attention has shifted to problem settings such as few-shot learning (FSL), in which only a handful of annotations are needed to generalize to novel classes. This is especially evident in the medical domain, where pixel-level annotations are expensive. In this paper, we propose Regularized Prototypical Neural Ordinary Differential Equations (R-PNODE), a method that leverages the intrinsic properties of Neural ODEs, assisted and enhanced by additional cluster and consistency losses, to perform few-shot segmentation (FSS) of organs. R-PNODE constrains support and query features of the same class to lie closer together in the representation space, thereby improving upon the performance of existing convolutional neural network (CNN) based FSS methods. We further demonstrate that, while many existing CNN-based methods tend to be highly susceptible to adversarial attacks, R-PNODE exhibits increased adversarial robustness against a variety of such attacks. We use three publicly available multi-organ segmentation datasets in both in-domain and cross-domain FSS settings to demonstrate the efficacy of our method. In addition, we perform experiments with seven commonly used adversarial attacks in various settings to demonstrate the robustness of R-PNODE. R-PNODE outperforms the baselines for FSS and also shows superior performance against attacks varying in intensity and design.
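As a rough illustration of the prototype idea underlying this family of few-shot segmentation methods (not the authors' exact R-PNODE pipeline, which additionally pushes features through a Neural ODE and adds cluster/consistency losses), the sketch below computes a class prototype by masked average pooling of support features and labels query pixels by cosine similarity to the prototypes; the function names and the temperature `tau` are placeholders.

```python
import torch
import torch.nn.functional as F

def masked_average_prototype(support_feat, support_mask):
    """Compute a class prototype from support features (B, C, H, W)
    and a binary float support mask (B, 1, H, W) via masked average pooling."""
    mask = F.interpolate(support_mask, size=support_feat.shape[-2:], mode="nearest")
    weighted = (support_feat * mask).sum(dim=(0, 2, 3))
    return weighted / (mask.sum() + 1e-6)            # (C,)

def segment_query(query_feat, fg_proto, bg_proto, tau=20.0):
    """Assign each query pixel to foreground/background by cosine similarity."""
    protos = torch.stack([bg_proto, fg_proto])       # (2, C)
    sim = F.cosine_similarity(
        query_feat.unsqueeze(1),                     # (B, 1, C, H, W)
        protos[None, :, :, None, None],              # (1, 2, C, 1, 1)
        dim=2,
    )                                                # (B, 2, H, W)
    return (tau * sim).softmax(dim=1)                # per-pixel class probabilities
```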
Deep convolutional neural networks (CNNs) can easily be fooled by subtle, imperceptible changes to the input image. To address this vulnerability, adversarial training creates perturbation patterns and includes them in the training set to robustify the model. In contrast to existing adversarial training methods that only use class-boundary information (e.g., via the cross-entropy loss), we propose to exploit additional information from the feature space to craft stronger adversaries, which are in turn used to learn a robust model. Specifically, we use the style and content information of a target sample from another class, together with its class-boundary information, to create adversarial perturbations. We apply our proposed multi-task objective in a deeply supervised manner, extracting multi-scale feature knowledge to create maximally separating adversaries. Subsequently, we propose a max-margin adversarial training approach that minimizes the distance between a source image and its adversary and maximizes the distance between the adversary and the target image. Compared with state-of-the-art defenses, our adversarial training approach demonstrates strong robustness, generalizes well to naturally occurring corruptions and data distribution shifts, and retains the model's accuracy on clean examples.
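A minimal sketch of the max-margin idea described above: keep the adversary close to its source image in feature space while pushing it away from a target-class image. The feature extractor `f`, the margin value, and the use of a plain Euclidean hinge are illustrative assumptions, not the paper's exact multi-task objective.

```python
import torch
import torch.nn.functional as F

def max_margin_adv_loss(f, x_src, x_adv, x_tgt, margin=1.0):
    """Triplet-style objective: minimize the distance between the adversary and
    its clean source while maximizing its distance to a target-class image."""
    z_src, z_adv, z_tgt = f(x_src), f(x_adv), f(x_tgt)
    d_src = (z_adv - z_src).flatten(1).norm(dim=1)   # distance to the clean source
    d_tgt = (z_adv - z_tgt).flatten(1).norm(dim=1)   # distance to the target image
    return F.relu(d_src - d_tgt + margin).mean()
```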
Neural ordinary differential equations (ODEs) have recently attracted increasing attention across various research domains. There are some works studying the optimization and approximation capabilities of neural ODEs, but their robustness is still unclear. In this work, we fill this important gap by exploring the robustness properties of neural ODEs both empirically and theoretically. We first present an empirical study of the robustness of neural ODE-based networks (ODENets) by exposing them to inputs with various types of perturbations and subsequently investigating the changes in the corresponding outputs. In contrast to conventional convolutional neural networks (CNNs), we find that ODENets are more robust against both random Gaussian perturbations and adversarial attack examples. We then provide an insightful understanding of this phenomenon by exploiting a certain desirable property of the flow of a continuous-time ODE, namely that integral curves are non-intersecting. Our work suggests that, due to their intrinsic robustness, it is promising to use neural ODEs as a basic building block of robust deep network models. To further enhance the robustness of vanilla neural ODEs, we propose a time-invariant steady neural ODE (TisODE), which regularizes the flow on perturbed data via time invariance and by imposing a steady-state constraint. We show that the TisODE method outperforms vanilla neural ODEs and can also be combined with other state-of-the-art architectural methods to build more robust deep networks. \url{https://github.com/hanshuyan/tisode}
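To make the ODE block concrete, here is a self-contained sketch using a fixed-step Euler integrator in plain PyTorch (the implementation in the linked repository may use a different solver). The steady-state penalty at the end is only an illustration of the kind of constraint TisODE imposes, with hypothetical layer sizes and step counts.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Time-invariant dynamics f(h): the vector field does not depend on t."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(dim, dim, 3, padding=1))

    def forward(self, h):
        return self.net(h)

class ODEBlock(nn.Module):
    """Integrate dh/dt = f(h) over [0, 1] with a simple fixed-step Euler scheme."""
    def __init__(self, func, steps=10):
        super().__init__()
        self.func, self.steps = func, steps

    def forward(self, h):
        dt = 1.0 / self.steps
        for _ in range(self.steps):
            h = h + dt * self.func(h)
        return h

def steady_state_penalty(func, h_final):
    """Encourage a small vector field at the end state, i.e. |f(h(T))| close to 0."""
    return func(h_final).flatten(1).norm(dim=1).mean()
```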
Recently, due to the increasing requirements of medical imaging applications and the professional requirements of annotating medical images, few-shot learning has gained increasing attention in the medical image semantic segmentation field. To perform segmentation with a limited number of labeled medical images, most existing studies use Prototypical Networks (PN) and have obtained compelling success. However, these approaches overlook the query image features extracted from the proposed representation network, failing to preserve the spatial connection between query and support images. In this paper, we propose a novel self-supervised few-shot medical image segmentation network and introduce a novel Cycle-Resemblance Attention (CRA) module to fully leverage the pixel-wise relation between query and support medical images. Notably, we first line up multiple attention blocks to refine more abundant relation information. Then, we present CRAPNet by integrating the CRA module with a classic prototype network, where pixel-wise relations between query and support features are well recaptured for segmentation. Extensive experiments on two different medical image datasets, e.g., abdomen MRI and abdomen CT, demonstrate the superiority of our model over existing state-of-the-art methods.
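The pixel-wise relation between query and support features can be pictured as a cross-attention affinity matrix. The sketch below is a generic, heavily simplified cross-attention block, not the authors' CRA module; the 1x1 projection layers and residual fusion are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Aggregate support features into the query feature map using
    pixel-to-pixel affinities (a simplified stand-in for relation modules)."""
    def __init__(self, dim):
        super().__init__()
        self.q_proj = nn.Conv2d(dim, dim, 1)
        self.k_proj = nn.Conv2d(dim, dim, 1)
        self.v_proj = nn.Conv2d(dim, dim, 1)

    def forward(self, query_feat, support_feat):
        b, c, h, w = query_feat.shape
        q = self.q_proj(query_feat).flatten(2).transpose(1, 2)      # (B, HW, C)
        k = self.k_proj(support_feat).flatten(2)                    # (B, C, HW)
        v = self.v_proj(support_feat).flatten(2).transpose(1, 2)    # (B, HW, C)
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)              # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return query_feat + out                                     # residual fusion
```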
Image classification with deep neural networks is vulnerable to adversarial perturbations: an image classifier can easily be fooled by adding small, artificial, and imperceptible perturbations to the input image. As one of the most effective defense strategies, adversarial training was proposed to address the vulnerability of classification models, where adversarial examples are created and injected into the training data during training. Attacks on and defenses of classification models have been intensively studied in the past few years. Semantic segmentation, as an extension of classification, has also received great attention recently. Recent work has shown that a large number of attack iterations are required to create effective adversarial examples that fool segmentation models. This observation makes both robustness evaluation and adversarial training of segmentation models challenging. In this work, we propose an effective and efficient segmentation attack method called SegPGD. Furthermore, we provide a convergence analysis to show that, under the same number of attack iterations, the proposed SegPGD can create more effective adversarial examples than PGD. Moreover, we propose to apply SegPGD as the underlying attack method for segmentation adversarial training. Since SegPGD can create more effective adversarial examples, adversarial training with SegPGD can boost the robustness of segmentation models. Our proposal is also validated with popular segmentation model architectures and standard segmentation datasets.
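For reference, a plain PGD attack on a segmentation model looks like the sketch below, with the pixel-wise cross-entropy averaged over the prediction map; SegPGD additionally re-weights the loss contributions of correctly and wrongly classified pixels over the iterations, which is not reproduced here. The budget, step size, and step count are illustrative defaults.

```python
import torch
import torch.nn.functional as F

def pgd_segmentation(model, images, labels, eps=8/255, alpha=2/255, steps=20):
    """L_inf PGD on a segmentation model; labels are per-pixel class indices (B, H, W)."""
    x_adv = images + torch.empty_like(images).uniform_(-eps, eps)
    x_adv = x_adv.clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)                         # (B, num_classes, H, W)
        loss = F.cross_entropy(logits, labels)        # averaged over all pixels
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = images + (x_adv - images).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```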
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples crafted with imperceptible perturbations, i.e., small changes to an input image that cause misclassification, which threatens the reliability of deployed deep-learning-based systems. Adversarial training (AT) is often adopted to improve the robustness of DNNs by training on a mixture of corrupted and clean data. However, most AT-based methods are ineffective in dealing with transferred adversarial examples, which are generated to fool a wide variety of defense models, and thus cannot satisfy the generalization requirements of real-world scenarios. Moreover, adversarially trained defense models in general cannot produce interpretable predictions for perturbed inputs, whereas domain experts require a highly interpretable robust model in order to understand the behavior of DNNs. In this work, we propose an approach based on the Jacobian norm and Selective Input Gradient Regularization (J-SIGR), which promotes linearized robustness through Jacobian normalization and also regularizes perturbation-based saliency maps to imitate the model's interpretable predictions. In this way, we achieve both improved defense capability and high interpretability of DNNs. Finally, we evaluate our method across different architectures against powerful adversarial attacks. Experiments demonstrate that the proposed J-SIGR confers robustness against transferred adversarial attacks, and we also show that the predictions of the neural network are easy to interpret.
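A common, simple proxy for controlling the input-output Jacobian is to penalize the norm of the loss gradient with respect to the input; the sketch below illustrates that generic idea only and is not the exact J-SIGR objective (which combines Jacobian normalization with selective saliency regularization). The weight `lam` is a placeholder.

```python
import torch
import torch.nn.functional as F

def input_gradient_penalty(model, x, y):
    """Penalize the norm of the input gradient of the loss, a standard proxy
    for constraining the input-output Jacobian (illustrative only)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x, create_graph=True)[0]
    return grad.flatten(1).norm(dim=1).mean()

def training_objective(model, x, y, lam=0.1):
    """Cross-entropy on clean data plus the gradient-norm regularizer."""
    return F.cross_entropy(model(x), y) + lam * input_gradient_penalty(model, x, y)
```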
Deep learning has achieved tremendous success in computer vision, while medical image segmentation (MIS) remains a challenge due to the scarcity of data annotations. Meta-learning techniques for few-shot segmentation (Meta-FSS) have been widely used to tackle this challenge, yet they ignore possible distribution shifts between the query image and the support set. In contrast, an experienced clinician can perceive and address such shifts by borrowing information from the query image and then fine-tuning or calibrating his or her prior cognitive model accordingly. Inspired by this, we propose Q-Net, a query-informed Meta-FSS approach, which mimics in spirit the learning mechanism of an expert clinician. We build Q-Net on ADNet, a recently proposed anomaly-detection-inspired method. Specifically, we add two query-informed computation modules into ADNet, namely a query-informed threshold adaptation module and a query-informed prototype refinement module. Combined with a dual-path extension of the feature extraction module, Q-Net achieves state-of-the-art performance on two widely used datasets, consisting of abdominal MR images and cardiac MR images, respectively. Our work sheds light on a novel way to improve Meta-FSS techniques by leveraging query information.
The authors thank Nicholas Carlini (UC Berkeley) and Dimitris Tsipras (MIT) for feedback to improve the survey quality. We also acknowledge X. Huang (Uni. Liverpool), K. R. Reddy (IISC), E. Valle (UNICAMP), Y. Yoo (CLAIR) and others for providing pointers to make the survey more comprehensive.
Deep-learning-based face recognition models are vulnerable to adversarial attacks. To curb these attacks, most defense methods aim to improve the robustness of the recognition model against adversarial perturbations. However, the generalization capability of these methods is quite limited: in practice, they are still vulnerable to unseen adversarial attacks. Deep learning models are fairly robust to general perturbations, such as Gaussian noise. A straightforward approach is therefore to inactivate adversarial perturbations so that they can be easily handled as general perturbations. In this paper, a plug-and-play adversarial defense method, named perturbation inactivation (PIN), is proposed to inactivate adversarial perturbations for adversarial defense. We discover that perturbations in different subspaces have different influences on the recognition model. There should be a subspace, called the immune space, in which perturbations have fewer adverse impacts on the recognition model than in other subspaces. Hence, our method estimates the immune space and inactivates adversarial perturbations by restricting them to this subspace. The proposed method can be generalized to unseen adversarial perturbations since it does not rely on a specific kind of adversarial attack method. This approach not only outperforms several state-of-the-art adversarial defense methods but also demonstrates superior generalization capability in exhaustive experiments. Moreover, the proposed method can be successfully applied to four commercial APIs without additional training, indicating that it can be easily generalized to existing face recognition systems. The source code is available at https://github.com/renmin1991/perturbation-inactivate
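One simple way to picture "restricting perturbations to a subspace" is to project the input onto a low-dimensional basis estimated from clean data, for example via PCA; this is only an illustration of the general idea, not the paper's estimation procedure for the immune space, and all names and the dimension `k` are assumptions.

```python
import torch

def estimate_subspace(clean_images, k=64):
    """Estimate a k-dimensional basis from flattened clean images via PCA (SVD)."""
    x = clean_images.flatten(1)                       # (N, D)
    mean = x.mean(dim=0, keepdim=True)
    _, _, vh = torch.linalg.svd(x - mean, full_matrices=False)
    return mean, vh[:k]                               # rows of vh span the subspace

def project_onto_subspace(image, mean, basis):
    """Keep only the component of a (possibly perturbed) input that lies in the
    estimated subspace, discarding the rest of the perturbation."""
    x = image.flatten(1) - mean
    coords = x @ basis.T                              # (B, k)
    return (coords @ basis + mean).reshape(image.shape)
```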
There has been a concurrent significant improvement in the medical images used to facilitate diagnosis and the performance of machine learning techniques to perform tasks such as classification, detection, and segmentation in recent years. As a result, a rapid increase in the usage of such systems can be observed in the healthcare industry, for instance in the form of medical image classification systems, where these models have achieved diagnostic parity with human physicians. One such application where this can be observed is in computer vision tasks such as the classification of skin lesions in dermatoscopic images. However, as stakeholders in the healthcare industry, such as insurance companies, continue to invest extensively in machine learning infrastructure, it becomes increasingly important to understand the vulnerabilities in such systems. Due to the highly critical nature of the tasks being carried out by these machine learning models, it is necessary to analyze techniques that could be used to take advantage of these vulnerabilities and methods to defend against them. This paper explores common adversarial attack techniques. The Fast Gradient Sign Method and Projected Gradient Descent are used against a Convolutional Neural Network trained to classify dermatoscopic images of skin lesions. Following that, it also discusses one of the most popular adversarial defense techniques, adversarial training. The performance of the model that has been trained on adversarial examples is then tested against the previously mentioned attacks, and recommendations to improve neural network robustness are thus provided based on the results of the experiment.
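For completeness, the single-step Fast Gradient Sign Method referenced above can be written in a few lines; the epsilon budget and the cross-entropy loss are illustrative choices, not the exact experimental configuration of this paper.

```python
import torch
import torch.nn.functional as F

def fgsm(model, images, labels, eps=8/255):
    """Single-step Fast Gradient Sign Method under an L_inf budget eps."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    return (images + eps * grad.sign()).clamp(0, 1).detach()
```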
Adversarial training can be useful against specific adversarial perturbations, but it has also proven ineffective against attacks that deviate from those used during training. However, we observe that this ineffectiveness is intrinsically connected to domain adaptability, another crucial issue in deep learning, for which domain adaptation appears to be a promising solution. Consequently, we propose Adv-4-Adv, a novel adversarial training method that aims to retain robustness against unseen adversarial perturbations. Essentially, Adv-4-Adv treats attacks incurring different perturbations as distinct domains, and by leveraging the power of adversarial domain adaptation, it aims to remove the domain/attack-specific features. This forces the trained model to learn a robust domain-invariant representation, which in turn enhances its generalization ability. Extensive evaluations on Fashion-MNIST, SVHN, CIFAR-10, and CIFAR-100 demonstrate that a model trained by Adv-4-Adv on samples crafted by simple attacks (e.g., FGSM) can generalize to more advanced attacks (e.g., PGD), with performance exceeding that of state-of-the-art proposals on these datasets.
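Adversarial domain adaptation is commonly implemented with a gradient reversal layer feeding a domain discriminator; the sketch below shows that standard mechanism under the assumption that each attack type is treated as a domain label. It is a generic illustration, not Adv-4-Adv's exact architecture, and `discriminator` is a hypothetical classifier head.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def domain_adversarial_loss(features, attack_domain_labels, discriminator, lam=1.0):
    """Train the discriminator to identify which attack produced the features,
    while the reversed gradient pushes the encoder toward attack-invariant features."""
    reversed_feat = GradReverse.apply(features, lam)
    logits = discriminator(reversed_feat)
    return nn.functional.cross_entropy(logits, attack_domain_labels)
```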
Pixel-wise prediction with deep neural networks has become an effective paradigm for salient object detection (SOD) and achieved remarkable performance. However, very few SOD models are robust against adversarial attacks which are visually imperceptible for human visual attention. The previous work robust saliency (ROSA) shuffles the pre-segmented superpixels and then refines the coarse saliency map by the densely connected conditional random field (CRF). Different from ROSA, which relies on various pre- and post-processing steps, this paper proposes a light-weight Learnable Noise (LeNo) to defend SOD models against adversarial attacks. LeNo preserves the accuracy of SOD models on both adversarial and clean images, as well as inference speed. In general, LeNo consists of a simple shallow noise and a noise estimation module, embedded in the encoder and decoder of arbitrary SOD networks, respectively. Inspired by the center prior of the human visual attention mechanism, we initialize the shallow noise with a cross-shaped Gaussian distribution for better defense against adversarial attacks. Instead of adding additional network components for post-processing, the proposed noise estimation modifies only one channel of the decoder. With deeply-supervised noise-decoupled training on state-of-the-art RGB and RGB-D SOD networks, LeNo outperforms previous works not only on adversarial images but also on clean images, which contributes to stronger robustness for SOD. Our code is available at https://github.com/ssecv/LeNo.
Robust Model-Agnostic Meta-Learning (MAML) is usually adopted to train a meta-model which can quickly adapt to novel classes with only a few exemplars while remaining robust to adversarial attacks. The conventional solution for robust MAML is to introduce robustness-promoting regularization during the meta-training stage. With such a regularization, previous robust MAML methods simply follow the typical MAML practice that the number of training shots should match the number of test shots to achieve an optimal adaptation performance. However, although the robustness can be largely improved, previous methods sacrifice clean accuracy a lot. In this paper, we observe that introducing robustness-promoting regularization into MAML reduces the intrinsic dimension of clean sample features, which results in a lower capacity of clean representations. This may explain why the clean accuracy of previous robust MAML methods drops severely. Based on this observation, we propose a simple strategy, i.e., increasing the number of training shots, to mitigate the loss of intrinsic dimension caused by robustness-promoting regularization. Though simple, our method remarkably improves the clean accuracy of MAML without much loss of robustness, producing a robust yet accurate model. Extensive experiments demonstrate that our method outperforms prior art in achieving a better trade-off between accuracy and robustness. Besides, we observe that our method is less sensitive to the number of fine-tuning steps during meta-training, which allows for a reduced number of fine-tuning steps to improve training efficiency.
Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security-sensitive systems. We propose a high-level representation guided denoiser (HGD) as a defense for image classification. A standard denoiser suffers from the error amplification effect, in which small residual adversarial noise is progressively amplified and leads to wrong classifications. HGD overcomes this problem by using a loss function defined as the difference between the target model's outputs activated by the clean image and by the denoised image. Compared with ensemble adversarial training, the state-of-the-art defense method on large images, HGD has three advantages. First, with HGD as a defense, the target model is more robust to either white-box or black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes. Third, HGD can be transferred to defend models other than the one guiding it. In the NIPS competition on defense against adversarial attacks, our HGD solution won first place and outperformed other models by a large margin.
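The guidance signal described above, the difference between the target model's activations on the clean image and on the denoised adversarial image, can be sketched as follows; `denoiser`, `target_model`, and the choice of an L1 distance are placeholders rather than the paper's exact training setup.

```python
import torch

def hgd_loss(denoiser, target_model, x_adv, x_clean):
    """Train the denoiser so that the target model's high-level representation
    of the denoised adversarial image matches that of the clean image."""
    feat_clean = target_model(x_clean).detach()       # the target model stays frozen
    feat_denoised = target_model(denoiser(x_adv))
    return (feat_denoised - feat_clean).abs().mean()  # L1 difference of activations
```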
Deep neural networks have empowered accurate device-free human activity recognition, which has wide applications. Deep models can extract robust features from various sensors and generalize well even in challenging situations such as data-insufficient cases. However, these systems could be vulnerable to input perturbations, i.e., adversarial attacks. We empirically demonstrate that both black-box Gaussian attacks and modern white-box adversarial attacks can cause their accuracy to plummet. In this paper, we first point out that such a phenomenon can bring severe safety hazards to device-free sensing systems, and then propose a novel learning framework, SecureSense, to defend against common attacks. SecureSense aims to achieve consistent predictions regardless of whether there exists an attack on its input or not, alleviating the negative effect of distribution perturbation caused by adversarial attacks. Extensive experiments demonstrate that our proposed method can significantly enhance the model robustness of existing deep models, overcoming possible attacks. The results validate that our method works well on wireless human activity recognition and person identification systems. To the best of our knowledge, this is the first work to investigate adversarial attacks and further develop a novel defense framework for wireless human activity recognition in mobile computing research.
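A generic way to encourage "consistent predictions regardless of whether there exists an attack on the input" is a consistency-regularized training objective; the sketch below shows that generic formulation (clean cross-entropy plus a KL term between clean and perturbed predictions) and is not claimed to be SecureSense's exact loss. The weight `beta` is an assumption.

```python
import torch
import torch.nn.functional as F

def consistency_training_loss(model, x_clean, x_perturbed, y, beta=1.0):
    """Supervised loss on clean inputs plus a KL term that keeps predictions on
    perturbed inputs consistent with those on clean inputs."""
    logits_clean = model(x_clean)
    logits_pert = model(x_perturbed)
    ce = F.cross_entropy(logits_clean, y)
    kl = F.kl_div(F.log_softmax(logits_pert, dim=1),
                  F.softmax(logits_clean, dim=1).detach(),
                  reduction="batchmean")
    return ce + beta * kl
```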
Deep learning-based 3D object detectors have made significant progress in recent years and have been deployed in a wide range of applications. It is crucial to understand the robustness of detectors against adversarial attacks when employing detectors in security-critical applications. In this paper, we make the first attempt to conduct a thorough evaluation and analysis of the robustness of 3D detectors under adversarial attacks. Specifically, we first extend three kinds of adversarial attacks to the 3D object detection task to benchmark the robustness of state-of-the-art 3D object detectors against attacks on KITTI and Waymo datasets, subsequently followed by the analysis of the relationship between robustness and properties of detectors. Then, we explore the transferability of cross-model, cross-task, and cross-data attacks. We finally conduct comprehensive experiments of defense for 3D detectors, demonstrating that simple transformations like flipping are of little help in improving robustness when the strategy of transformation imposed on input point cloud data is exposed to attackers. Our findings will facilitate investigations in understanding and defending the adversarial attacks against 3D object detectors to advance this field.
Adversarial examples are special inputs that can perturb the output of a deep neural network so as to produce intentional errors in a production environment. Most methods for generating adversarial examples require gradient information; even universal perturbations that are independent of the generating model rely on gradient information to some degree. Procedural-noise adversarial examples are a new approach to adversarial example generation that uses computer-graphics noise to quickly generate universal adversarial perturbations without relying on gradient information. Combined with adversarial defense training, we use Perlin noise to train a neural network and obtain a model that can defend against procedural-noise adversarial examples. Combined with a model fine-tuning approach based on a pre-trained model, we obtain faster training as well as higher accuracy. Our study shows that procedural-noise adversarial examples are defensible, but why procedural noise can produce adversarial examples, and how to defend against other procedural-noise adversarial examples that may appear in the future, remain to be investigated.
Deep neural networks have been shown to be vulnerable to adversarial images. Conventional attacks strive for imperceptible adversarial images with strictly restricted perturbations. Recently, researchers have moved to explore perceptible yet non-suspicious adversarial images and have demonstrated that color transformation attacks are effective. In this work, we propose Adversarial Color Filter (AdvCF), a novel color transformation attack that is optimized with gradient information in the parameter space of a simple color filter. In particular, our color filter space is explicitly specified so that a systematic robustness analysis of adversarial color transformations can be conducted from both the attack and defense perspectives. In contrast, existing color transformation attacks do not offer the opportunity for such systematic analysis due to the lack of such an explicit space. We further conduct an extensive comparison among different color transformation attacks on both success rate and image acceptability through a user study. Additional results provide interesting new insights into model robustness against AdvCF in three other visual tasks. We also highlight the human interpretability of AdvCF, which is promising in practical usage scenarios, and show its superiority over the state-of-the-art human-interpretable color transformation attack in terms of both image acceptability and efficiency.
Crowd counting, which has been widely used to estimate the number of people in safety-critical scenes, has been shown to be vulnerable to adversarial examples in the physical world (e.g., adversarial patches). Although harmful, adversarial examples are also valuable for evaluating and better understanding model robustness. However, existing adversarial example generation methods for crowd counting lack strong transferability across different black-box models, which limits their practicality for real-world systems. Motivated by the fact that attack transferability is positively correlated with model-invariant features, this paper proposes the Perceptual Adversarial Patch (PAP) generation framework, which tailors adversarial perturbations using model-shared perceptual features. Specifically, we handcraft an adaptive crowd-density weighting approach to capture invariant scale perception features across various models, and exploit density-guided attention to capture model-shared position perception. Both are shown to improve the attack transferability of our adversarial patches. Extensive experiments show that our PAP achieves state-of-the-art attack performance in both the digital and physical worlds, outperforming previous proposals by large margins (at most +685.7 MAE and +699.5 MSE). Furthermore, we empirically demonstrate that adversarial training with our PAP can benefit the performance of vanilla models in alleviating several practical challenges in crowd counting, including generalization across datasets (up to -376.0 MAE and -354.9 MSE) and robustness toward complex backgrounds (up to -10.3 MAE and -16.4 MSE).
Deep neural networks have been found to be unstable under imperceptible adversarial example attacks, which is dangerous when they are applied to medical diagnosis systems that require high reliability. However, defense methods that work well on natural images may not be suitable for medical diagnosis tasks. Pre-processing methods (e.g., random resizing, compression) may lead to the loss of small lesion features in medical images. Retraining the network on an augmented dataset is also impractical for medical models that have already been deployed online. Therefore, it is necessary to design an easy-to-deploy and effective defense framework for medical diagnosis tasks. In this paper, we propose a robust and retrain-less diagnostic framework for pretrained medical models against adversarial attacks (i.e., MedRDF). It operates at the inference time of the pretrained medical model. Specifically, for each test image, MedRDF first creates a large number of noisy copies of it and obtains the output labels of these copies from the pretrained medical diagnosis model. Then, based on the labels of these copies, MedRDF outputs the final robust diagnostic result by majority voting. In addition to the diagnostic result, MedRDF also produces a Robust Metric (RM) as the confidence of the result. Therefore, it is convenient and reliable to utilize MedRDF to convert a pretrained non-robust diagnostic model into a robust one. Experimental results on the COVID-19 and DermaMNIST datasets verify the effectiveness of MedRDF in improving the robustness of medical models.
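The noisy-copy, majority-vote procedure described above can be sketched at inference time as follows; the noise scale, copy count, and the use of the vote fraction as a rough confidence score are illustrative assumptions rather than the exact MedRDF recipe for its Robust Metric.

```python
import torch

@torch.no_grad()
def noisy_majority_vote(model, image, num_copies=100, sigma=0.1):
    """Create noisy copies of a test image (C, H, W), classify each copy, and
    return the majority-vote label plus the vote fraction as a confidence score."""
    copies = image.unsqueeze(0) + sigma * torch.randn(num_copies, *image.shape)
    preds = model(copies.clamp(0, 1)).argmax(dim=1)   # (num_copies,)
    votes = torch.bincount(preds)
    label = int(votes.argmax())
    robust_metric = votes.max().item() / num_copies   # agreement among copies
    return label, robust_metric
```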