Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
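A minimal PyTorch sketch of the interpolation step described above (the helper names and `alpha=0.2` are illustrative, not the authors' reference implementation):

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def mixup_batch(x, y, alpha=0.2):
    """Form convex combinations of a batch with a randomly shuffled copy of itself."""
    lam = Beta(alpha, alpha).sample().item()       # mixing coefficient lambda ~ Beta(alpha, alpha)
    perm = torch.randperm(x.size(0))               # random pairing of examples within the batch
    x_mixed = lam * x + (1.0 - lam) * x[perm]      # interpolate the inputs
    return x_mixed, y, y[perm], lam

def mixup_loss(logits, y_a, y_b, lam):
    """Interpolating the two cross-entropy terms matches interpolating one-hot labels."""
    return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)
```

In a training loop one would call `x_mixed, y_a, y_b, lam = mixup_batch(x, y)` and optimize `mixup_loss(model(x_mixed), y_a, y_b, lam)`.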
Deep neural networks excel at learning the training data, but often provide incorrect and confident predictions when evaluated on slightly different test examples. This includes distribution shifts, outliers, and adversarial examples. To address these issues, we propose Manifold Mixup, a simple regularizer that encourages neural networks to predict less confidently on interpolations of hidden representations. Manifold Mixup leverages semantic interpolations as additional training signal, obtaining neural networks with smoother decision boundaries at multiple levels of representation. As a result, neural networks trained with Manifold Mixup learn class-representations with fewer directions of variance. We prove theory on why this flattening happens under ideal conditions, validate it on practical situations, and connect it to previous works on information theory and generalization. In spite of incurring no significant computation and being implemented in a few lines of code, Manifold Mixup improves strong baselines in supervised learning, robustness to single-step adversarial attacks, and test log-likelihood.
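A hedged sketch of interpolating hidden representations at a randomly chosen layer, assuming the network is exposed as an ordered list of `blocks` whose last element produces the logits (this interface is an assumption for illustration):

```python
import random
import torch
from torch.distributions import Beta

def manifold_mixup_forward(blocks, x, y, alpha=2.0):
    """Interpolate hidden states (and labels) at one randomly chosen layer per batch."""
    k = random.randrange(len(blocks))              # layer index at which mixing happens (0 = input space)
    lam = Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    h = x
    for i, block in enumerate(blocks):
        if i == k:                                 # mix the hidden representation once
            h = lam * h + (1.0 - lam) * h[perm]
        h = block(h)
    return h, y, y[perm], lam                      # combine with an interpolated loss as in mixup
```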
Mixup is a popular data augmentation technique for training deep neural networks where additional samples are generated by linearly interpolating pairs of inputs and their labels. This technique is known to improve the generalization performance in many learning paradigms and applications. In this work, we first analyze Mixup and show that it implicitly regularizes infinitely many directional derivatives of all orders. We then propose a new method to improve Mixup based on the novel insight. To demonstrate the effectiveness of the proposed method, we conduct experiments across various domains such as images, tabular data, speech, and graphs. Our results show that the proposed method improves Mixup across various datasets using a variety of architectures, for instance, exhibiting an improvement over Mixup by 0.8% in ImageNet top-1 accuracy.
We introduce Noisy Feature Mixup (NFM), an inexpensive yet effective data augmentation method that combines interpolation-based training with noise injection schemes. Rather than training on convex combinations of pairs of examples and their labels, we use noise-perturbed convex combinations of pairs of data points in both input and feature space. The method includes mixup and Manifold Mixup as special cases, but it has additional advantages, including better smoothing of decision boundaries and improved model robustness. We provide theory to understand this as well as the implicit regularization effects of NFM. Our theory is supported by empirical results that demonstrate the advantages of NFM compared to mixup and Manifold Mixup. We show that residual networks and vision transformers trained with NFM exhibit a favorable trade-off between predictive accuracy on clean data and robustness across a range of computer vision benchmark datasets.
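A simplified sketch of the noise-perturbed interpolation step (the noise levels and their placement are illustrative assumptions; the method applies such perturbations both at the input and at hidden layers):

```python
import torch

def noisy_feature_mix(h, perm, lam, add_std=0.1, mult_std=0.1):
    """Convex combination of paired features, perturbed by multiplicative and additive noise."""
    mixed = lam * h + (1.0 - lam) * h[perm]                 # mixup-style interpolation
    mult = 1.0 + mult_std * torch.randn_like(mixed)         # multiplicative noise around 1
    add = add_std * torch.randn_like(mixed)                 # additive Gaussian noise
    return mult * mixed + add
```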
Adversarial robustness has become a central goal of deep learning, both in theory and in practice. However, successful methods for improving adversarial robustness (such as adversarial training) greatly hurt generalization performance on unperturbed data. This could have a major impact on how adversarial robustness affects real-world systems (i.e., many may opt to forgo robustness if it cannot improve accuracy on unperturbed data). We propose Interpolated Adversarial Training, which employs recently proposed interpolation-based training methods within the adversarial training framework. On CIFAR-10, adversarial training increases the standard test error (when there is no adversary) from 4.43% to 12.32%, whereas with Interpolated Adversarial Training we retain adversarial robustness while achieving a standard test error of only 6.45%. With our technique, the relative increase in standard error for robust models drops from 178.1% to only 45.5%. Furthermore, we provide a mathematical analysis of Interpolated Adversarial Training to confirm its efficiency and demonstrate its advantages in terms of robustness and generalization.
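A hedged sketch of the idea: adversarial examples are crafted first, then mixup-style interpolation is applied to the perturbed batch before computing the loss (the single-step attack and the step sizes are simplifying assumptions, not the paper's exact setup):

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def fgsm(model, x, y, eps=8 / 255):
    """Single-step L-infinity attack, used here only to keep the sketch short."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def interpolated_adversarial_loss(model, x, y, alpha=1.0, eps=8 / 255):
    x_adv = fgsm(model, x, y, eps)                          # attack the clean batch
    lam = Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x_adv + (1.0 - lam) * x_adv[perm]         # interpolate adversarial examples
    logits = model(x_mix)
    return lam * F.cross_entropy(logits, y) + (1.0 - lam) * F.cross_entropy(logits, y[perm])
```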
Modern neural networks excel at image classification, yet they remain vulnerable to common image corruptions such as blur, speckle noise, or fog. Recent methods that address this problem, such as AugMix and DeepAugment, introduce defenses that operate in expectation over a distribution of image corruptions. In contrast, the literature on $\ell_p$-norm bounded perturbations focuses on defenses against worst-case corruptions. In this work, we reconcile both approaches by proposing AdversarialAugment, a technique that optimizes the parameters of image-to-image models to generate adversarially corrupted augmented images. We theoretically motivate our method and give sufficient conditions for the consistency of its idealized version as well as that of DeepAugment. Our classifiers improve upon the state of the art on the common image corruption benchmark CIFAR-10-C, evaluated in expectation, and improve worst-case performance against $\ell_p$-norm bounded perturbations on both CIFAR-10 and ImageNet.
We propose Self-Adaptive Training, a unified training algorithm that dynamically calibrates and enhances the training process using model predictions, without incurring extra computational cost, to advance both supervised and self-supervised learning of deep neural networks. We analyze the training dynamics of deep networks on training data corrupted by, for example, random noise and adversarial examples. Our analysis shows that model predictions can amplify useful underlying information in the data, and that this phenomenon occurs even in the absence of any label information, highlighting that model predictions can benefit the training process: Self-Adaptive Training improves the generalization of deep networks under noise and enhances self-supervised representation learning. The analysis also sheds light on the understanding of deep learning, for example, providing potential explanations for the recently discovered double-descent phenomenon in empirical risk minimization and the collapse problem in state-of-the-art self-supervised learning algorithms. Experiments on the CIFAR, STL, and ImageNet datasets verify the effectiveness of our method in three applications: classification with label noise, selective classification, and linear evaluation. To facilitate future research, the code has been made publicly available at https://github.com/layneh/self-adaptive-training.
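A heavily simplified sketch of calibrating per-example targets with the model's own predictions through an exponential moving average (the buffer layout, the momentum value, and the soft cross-entropy loss are assumptions for illustration):

```python
import torch

def self_adaptive_update(soft_targets, idx, logits, momentum=0.9):
    """Blend stored targets with current predictions, then return a soft cross-entropy loss."""
    with torch.no_grad():
        probs = logits.softmax(dim=-1)
        soft_targets[idx] = momentum * soft_targets[idx] + (1.0 - momentum) * probs
    log_probs = logits.log_softmax(dim=-1)
    return -(soft_targets[idx] * log_probs).sum(dim=-1).mean()
```

Here `soft_targets` would be a (num_examples, num_classes) buffer initialized from the (possibly noisy) one-hot labels, and `idx` indexes the examples of the current mini-batch.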
Adversarial training suffers from robust overfitting, a phenomenon where robust test accuracy starts to decrease during training. In this paper, we focus on reducing robust overfitting by using common data augmentation schemes. We demonstrate that, contrary to previous findings, data augmentation can significantly boost robust accuracy when combined with model weight averaging. Furthermore, we compare various augmentation techniques and observe that spatial composition techniques work best for adversarial training. Finally, we evaluate our approach on CIFAR-10 against $\ell_\infty$ and $\ell_2$ norm-bounded perturbations of size $\epsilon = 8/255$ and $\epsilon = 128/255$, respectively. We show large absolute improvements of +2.93% and +2.16% in robust accuracy compared to previous state-of-the-art methods. In particular, against $\ell_\infty$ norm-bounded perturbations of size $\epsilon = 8/255$, our model reaches 60.07% robust accuracy without using any external data. We also achieve a significant performance boost with this approach on other architectures and datasets such as CIFAR-100, SVHN, and TinyImageNet.
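A minimal sketch of the weight-averaging component using PyTorch's built-in `AveragedModel` utility; the decay value and the pairing with augmented adversarial batches are illustrative assumptions:

```python
from torch.optim.swa_utils import AveragedModel

def make_weight_averaged(model, decay=0.995):
    """Track an exponential moving average of the model weights for evaluation."""
    def ema_avg(avg_param, param, num_averaged):
        return decay * avg_param + (1.0 - decay) * param
    return AveragedModel(model, avg_fn=ema_avg)

# wa_model = make_weight_averaged(model)
# after each adversarial training step on an augmented batch:
#     wa_model.update_parameters(model)   # robustness is then evaluated with wa_model
```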
Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers. They have proved to be effective for guiding the model to attend on less discriminative parts of objects (e.g. leg as opposed to head of a person), thereby letting the network generalize better and have better object localization capabilities. On the other hand, current methods for regional dropout remove informative pixels on training images by overlaying a patch of either black pixels or random noise. Such removal is not desirable because it leads to information loss and inefficiency during training. We therefore propose the CutMix augmentation strategy: patches are cut and pasted among training images where the ground truth labels are also mixed proportionally to the area of the patches. By making efficient use of training pixels and retaining the regularization effect of regional dropout, CutMix consistently outperforms the state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task. Moreover, unlike previous augmentation methods, our CutMix-trained ImageNet classifier, when used as a pretrained model, results in consistent performance gains in Pascal detection and MS-COCO image captioning benchmarks. We also show that CutMix improves the model robustness against input corruptions and its out-of-distribution detection performances. Source code and pretrained models are available at https://github.com/clovaai/CutMix-PyTorch.
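A minimal sketch of the patch cut-and-paste step, with the label weight adjusted to the actual pasted area (box sampling follows the commonly used recipe; the hyperparameters are illustrative, not the reference implementation):

```python
import torch
from torch.distributions import Beta

def cutmix_batch(x, y, alpha=1.0):
    """Paste a random patch from shuffled images and mix labels proportionally to its area."""
    lam = Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    _, _, h, w = x.shape
    cut_h, cut_w = int(h * (1.0 - lam) ** 0.5), int(w * (1.0 - lam) ** 0.5)
    cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    x = x.clone()
    x[:, :, y1:y2, x1:x2] = x[perm, :, y1:y2, x1:x2]        # paste the shuffled patch
    lam = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)             # re-weight labels by the true area
    return x, y, y[perm], lam                               # use with an interpolated loss
```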
Recent advances in deep learning rely on large labeled datasets to train high-capacity models. However, collecting large datasets in a time- and cost-efficient manner often results in label noise. We present a method for learning from noisy labels that leverages similarities between training examples in feature space, encouraging the prediction of each example to be similar to that of its nearest neighbors. Compared to training algorithms that use multiple models or distinct training stages, our approach takes the form of a simple, additional regularization term. It can be interpreted as an inductive version of the classical, transductive label propagation algorithm. We thoroughly evaluate our method on datasets with synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, WebVision, Clothing1M, mini-ImageNet-Red) noise, and achieve competitive or state-of-the-art accuracy on all of them.
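A hedged sketch of such a regularization term, pulling each prediction toward a similarity-weighted average of its nearest neighbors' predictions within the batch (the cosine similarity, top-k selection, and KL form are illustrative assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def neighbor_consistency_loss(features, logits, k=5):
    """Encourage each example's prediction to match those of its nearest feature-space neighbors."""
    feats = F.normalize(features, dim=-1)
    sim = feats @ feats.t()                                   # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))                         # exclude self-matches
    topk_sim, topk_idx = sim.topk(k, dim=-1)
    weights = topk_sim.softmax(dim=-1)                        # (batch, k) neighbor weights
    probs = logits.softmax(dim=-1)
    neighbor_probs = (weights.unsqueeze(-1) * probs[topk_idx]).sum(dim=1)   # (batch, classes)
    return F.kl_div(logits.log_softmax(dim=-1), neighbor_probs.detach(),
                    reduction='batchmean')
```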
Several data augmentation methods deploy unlabeled-in-distribution (UID) data to bridge the gap between training and inference of neural networks. However, these methods have clear limitations in terms of the availability of UID data and the dependence of their algorithms on pseudo-labels. Here, we propose a data augmentation method that improves generalization in both adversarial and standard learning by using out-of-distribution (OOD) data, which is free of the above problems. We show theoretically how generalization can be improved using such data in each learning scenario, and complement the theoretical analysis with experiments on CIFAR-10, CIFAR-100, and a subset of ImageNet. The results indicate that undesirable features are shared even among image data that appear to have little relevance from a human perspective. We also present the advantages of the proposed method through comparison with other data augmentation methods that can be used in the absence of UID data. Furthermore, we demonstrate that the proposed method can further improve existing state-of-the-art adversarial training.
Deep neural networks are powerful, but they also have shortcomings, such as their sensitivity to adversarial examples, noise, blur, occlusion, and so on. Many previous works have been proposed to improve specific robustness properties. However, we find that improving a specific robustness often comes at the expense of other robustness properties or of the generalization ability of the neural network model. In particular, adversarial training methods severely hurt generalization performance on unperturbed data when improving adversarial robustness. In this paper, we propose a new data processing and training method, called AugRmixAT, which can simultaneously improve the generalization ability and multiple robustness properties of neural network models. Finally, we validate the effectiveness of AugRmixAT on the CIFAR-10/100 and Tiny-ImageNet datasets. The experiments show that AugRmixAT can improve the model's generalization performance while enhancing white-box robustness, black-box robustness, common corruption robustness, and partial occlusion robustness.
Mixup is a data augmentation method that generates new data points by mixing a pair of input data. While mixup generally improves prediction performance, it sometimes degrades performance. In this paper, we first identify the main causes of this phenomenon by theoretically and empirically analyzing the mixup algorithm. To resolve this issue, we propose GenLabel, a simple yet effective relabeling algorithm designed for mixup. In particular, GenLabel helps the mixup algorithm correctly label mixup samples by learning the class-conditional data distribution using generative models. Via extensive theoretical and empirical analysis, we show that mixup, when used together with GenLabel, can effectively resolve the aforementioned phenomenon, improving both generalization performance and adversarial robustness.
Randomized smoothing is currently the state-of-the-art method for building certifiably robust classifiers from neural networks against $\ell_2$ adversarial perturbations. Under this paradigm, the robustness of a classifier is aligned with its prediction confidence, i.e., higher confidence of the smoothed classifier implies better robustness. This motivates us to rethink the fundamental trade-off between accuracy and robustness in terms of calibrating the confidence of smoothed classifiers. In this paper, we propose a simple training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup: it trains on convex combinations of samples along the direction of adversarial perturbation for each input. The proposed procedure effectively identifies over-confident samples as a cause of limited robustness in the case of smoothed classifiers, and offers an intuitive way to adaptively set a new decision boundary between these samples for better robustness. Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers compared to existing state-of-the-art robust training methods.
It is common practice in deep learning to use overparameterized networks and train for as long as possible; there are numerous studies that show, both theoretically and empirically, that such practices surprisingly do not unduly harm the generalization performance of the classifier. In this paper, we empirically study this phenomenon in the setting of adversarially trained deep networks, which are trained to minimize the loss under worst-case adversarial perturbations. We find that overfitting to the training set does in fact harm robust performance to a very large degree in adversarially robust training across multiple datasets (SVHN, CIFAR-10, CIFAR-100, and ImageNet) and perturbation models ($\ell_\infty$ and $\ell_2$). Based upon this observed effect, we show that the performance gains of virtually all recent algorithmic improvements upon adversarial training can be matched by simply using early stopping. We also show that effects such as the double descent curve do still occur in adversarially trained models, yet fail to explain the observed overfitting. Finally, we study several classical and modern deep learning remedies for overfitting, including regularization and data augmentation, and find that no approach in isolation improves significantly upon the gains achieved by early stopping. All code for reproducing the experiments as well as pretrained model weights and training logs can be found at https://github.com/locuslab/robust_overfitting.
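A schematic sketch of the early-stopping remedy highlighted above: keep the checkpoint with the best robust accuracy on a held-out split (the two callables are user-supplied placeholders, e.g. one epoch of adversarial training and a PGD evaluation):

```python
import copy

def train_with_robust_early_stopping(model, train_one_epoch, robust_val_accuracy, num_epochs):
    """Run adversarial training but deploy the checkpoint with the best robust validation accuracy."""
    best_acc, best_state = -1.0, None
    for epoch in range(num_epochs):
        train_one_epoch(model)                        # one pass of adversarial training
        acc = robust_val_accuracy(model)              # e.g. PGD accuracy on a held-out split
        if acc > best_acc:
            best_acc, best_state = acc, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)                 # early-stopped weights
    return best_acc
```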
Recent work argues that robust training requires substantially larger datasets than those needed for standard classification. On CIFAR-10 and CIFAR-100, this translates into a sizable robust-accuracy gap between models trained solely on data from the original training set and those trained with additional data extracted from the "80 Million Tiny Images" dataset (TI-80M). In this paper, we explore how generative models trained solely on the original training set can be leveraged to artificially increase the size of the original training set and improve adversarial robustness to $\ell_p$ norm-bounded perturbations. We identify sufficient conditions under which incorporating additional generated data can improve robustness, and demonstrate that the robust-accuracy gap to models trained with additional real data can be significantly reduced. Surprisingly, we even show that augmenting with non-realistic random data (generated by Gaussian sampling) can improve robustness. We evaluate our approach on CIFAR-10, CIFAR-100, SVHN, and TinyImageNet against $\ell_\infty$ and $\ell_2$ norm-bounded perturbations of size $\epsilon = 8/255$ and $\epsilon = 128/255$, respectively. We show large absolute improvements in robust accuracy compared to previous state-of-the-art methods. Against $\ell_\infty$ norm-bounded perturbations of size $\epsilon = 8/255$, our models achieve 66.10% and 33.49% robust accuracy on CIFAR-10 and CIFAR-100, respectively (improving upon the state of the art by +8.96% and +3.29%). Against $\ell_2$ norm-bounded perturbations of size $\epsilon = 128/255$, our model achieves 78.31% on CIFAR-10 (+3.81%). These results beat most prior works that use external data.
The study on improving the robustness of deep neural networks against adversarial examples grows rapidly in recent years. Among them, adversarial training is the most promising one, which flattens the input loss landscape (loss change with respect to input) via training on adversarially perturbed examples. However, how the widely used weight loss landscape (loss change with respect to weight) performs in adversarial training is rarely explored. In this paper, we investigate the weight loss landscape from a new perspective, and identify a clear correlation between the flatness of weight loss landscape and robust generalization gap. Several well-recognized adversarial training improvements, such as early stopping, designing new objective functions, or leveraging unlabeled data, all implicitly flatten the weight loss landscape. Based on these observations, we propose a simple yet effective Adversarial Weight Perturbation (AWP) to explicitly regularize the flatness of weight loss landscape, forming a double-perturbation mechanism in the adversarial training framework that adversarially perturbs both inputs and weights. Extensive experiments demonstrate that AWP indeed brings flatter weight loss landscape and can be easily incorporated into various existing adversarial training methods to further boost their adversarial robustness.
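A simplified, single-step sketch of the weight-perturbation side of this double-perturbation mechanism (the per-layer scaling by the weight norm and the step size `gamma` are illustrative; the paper's procedure contains more detail):

```python
import torch
import torch.nn.functional as F

def awp_backward(model, x_adv, y, gamma=0.01):
    """Perturb weights along their loss gradient, backprop at the perturbed point, then restore."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = F.cross_entropy(model(x_adv), y)
    grads = torch.autograd.grad(loss, params)
    diffs = []
    with torch.no_grad():
        for p, g in zip(params, grads):
            diff = gamma * p.norm() * g / (g.norm() + 1e-12)  # adversarial weight step
            p.add_(diff)
            diffs.append(diff)
    perturbed_loss = F.cross_entropy(model(x_adv), y)         # loss at the perturbed weights
    perturbed_loss.backward()                                 # gradients for the optimizer step
    with torch.no_grad():
        for p, diff in zip(params, diffs):
            p.sub_(diff)                                      # restore the original weights
    return perturbed_loss.item()
```

The caller would then apply its usual `optimizer.step()`, so the update uses gradients taken at the adversarially perturbed weights.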
In this paper, we ask whether Vision Transformers (ViTs) can serve as an underlying architecture for improving the adversarial robustness of machine learning models against evasion attacks. While earlier works have focused on improving convolutional neural networks, we show that ViTs are also highly suitable for adversarial training and can achieve competitive performance. We accomplish this with a custom adversarial training recipe, discovered through rigorous ablation studies on a subset of the ImageNet dataset. The canonical training recipe for ViTs recommends strong data augmentation, in part to compensate for the lack of the vision inductive bias of attention modules compared to convolutions. We show that this recipe achieves suboptimal performance when used for adversarial training. In contrast, we find that omitting all heavy data augmentation and adding some additional components ($\varepsilon$-warmup and larger weight decay) significantly boosts the performance of robust ViTs. We show that our recipe generalizes to different classes of ViT architectures and to large-scale models on full ImageNet-1k. Moreover, investigating the reasons for the robustness of our models, we show that it is easier to generate strong attacks during training when using our recipe, and that this leads to better robustness at test time. Finally, we further study one consequence of adversarial training by proposing a way to quantify the semantic nature of adversarial perturbations and highlight its correlation with the robustness of the model. Overall, we recommend that the community avoid translating the canonical training recipes for ViTs directly to robust training in the context of adversarial training, and rethink common training choices.
Mixup refers to interpolation-based data augmentation, originally motivated as a way to go beyond empirical risk minimization (ERM). Its extensions, however, focus on the definition of interpolation and the space where it takes place, while the augmentation itself is less studied: for a mini-batch of size $m$, most methods interpolate between $m$ pairs with a single scalar interpolation factor $\lambda$. In this work, we make progress in this direction by introducing MultiMix, which interpolates an arbitrary number $n$ of tuples, each of length $m$, with one vector $\lambda$ per tuple. On sequence data, we further extend to dense interpolation and loss computation over all spatial positions. Overall, we increase the number of elements per mini-batch by orders of magnitude at little additional cost. This is possible by interpolating at the very last layer before the classifier. Finally, to address inconsistencies arising from linear interpolation of targets, we introduce a self-distillation approach to generate and interpolate synthetic targets. We show empirically that our contributions result in significant improvements over state-of-the-art mixup methods on four benchmarks. By analyzing the embedding space, we observe that the classes are more tightly clustered and uniformly spread over the embedding space, which explains the improved behavior.
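A hedged sketch of the tuple interpolation at the last layer before the classifier: $n$ synthetic examples are formed as Dirichlet-weighted convex combinations of all $m$ embeddings in the mini-batch (the Dirichlet parameterization and the helper names are illustrative assumptions):

```python
import torch
from torch.distributions import Dirichlet

def multimix_interpolate(embeddings, targets_onehot, n=128, alpha=1.0):
    """embeddings: (m, d); targets_onehot: (m, c); returns n interpolated embedding/target pairs."""
    m = embeddings.size(0)
    lam = Dirichlet(torch.full((m,), alpha)).sample((n,))     # (n, m) rows of convex weights
    mixed_embeddings = lam @ embeddings                        # (n, d) interpolated features
    mixed_targets = lam @ targets_onehot                       # (n, c) interpolated soft labels
    return mixed_embeddings, mixed_targets                     # fed to the classifier head
```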
Research on generalization bounds for deep networks seeks methods to predict test error using only the training dataset and the network parameters. While generalization bounds can give many insights about architecture design, training algorithms, and so on, what they currently cannot do is give good predictions of the actual test error. The recently introduced Predicting Generalization in Deep Learning competition aims to encourage the discovery of methods that better predict test error. The current paper investigates a simple idea: can test error be predicted using "synthetic data" produced by a generative adversarial network (GAN) trained on the same training dataset? After investigating several GAN models and architectures, we find that this turns out to be the case. In fact, using GANs pre-trained on standard datasets, the test error can be predicted without requiring any additional hyperparameter tuning. This result is surprising because GANs have well-known limitations (e.g., mode collapse) and are known not to learn the data distribution accurately. Yet the generated samples are good enough to substitute for test data. Several additional experiments are presented to explore why GANs do well at this task. In addition to a new approach for predicting generalization, the counter-intuitive phenomenon demonstrated in our work may also contribute to a better understanding of the strengths and limitations of GANs.