Mixup is a data-dependent regularization technique that consists in linearly interpolating input samples and associated outputs. It has been shown to improve accuracy when used to train standard machine learning datasets. However, authors have pointed out that mixup can produce out-of-distribution virtual samples, and even contradictions in the augmented training set, potentially resulting in adversarial effects. In this paper, we introduce Local Mixup, in which distant input samples are weighted down when computing the loss. In constrained settings, we demonstrate that Local Mixup can create a trade-off between bias and variance, with the extreme cases reducing to vanilla training and classical mixup. Using standardized computer vision benchmarks, we also show that Local Mixup can improve test accuracy.
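A minimal PyTorch sketch of the idea, assuming an exponential distance kernel for the pair weights (the exact weighting scheme and the bandwidth tau are illustrative assumptions, not the authors' formulation):

```python
import torch
import torch.nn.functional as F

def local_mixup_loss(model, x, y, alpha=1.0, tau=1.0):
    """Mixup where each interpolated pair is down-weighted by the distance
    between its two inputs (hedged sketch; kernel and tau are illustrative)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x2, y2 = x[perm], y[perm]
    x_mix = lam * x + (1 - lam) * x2
    # Weight distant input pairs down when computing the loss.
    dist = (x - x2).flatten(1).norm(dim=1)
    w = torch.exp(-dist / tau)  # -> 1 for nearby pairs, -> 0 for distant ones
    logits = model(x_mix)
    loss = lam * F.cross_entropy(logits, y, reduction="none") \
         + (1 - lam) * F.cross_entropy(logits, y2, reduction="none")
    return (w * loss).mean()
```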
Deep neural networks excel at learning the training data, but often provide incorrect and confident predictions when evaluated on slightly different test examples. This includes distribution shifts, outliers, and adversarial examples. To address these issues, we propose Manifold Mixup, a simple regularizer that encourages neural networks to predict less confidently on interpolations of hidden representations. Manifold Mixup leverages semantic interpolations as additional training signal, obtaining neural networks with smoother decision boundaries at multiple levels of representation. As a result, neural networks trained with Manifold Mixup learn class-representations with fewer directions of variance. We prove theory on why this flattening happens under ideal conditions, validate it on practical situations, and connect it to previous works on information theory and generalization. In spite of incurring no significant computation and being implemented in a few lines of code, Manifold Mixup improves strong baselines in supervised learning, robustness to single-step adversarial attacks, and test log-likelihood.
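A hedged sketch of the core mechanism on a small MLP: pick a random eligible layer, mix the hidden states of a shuffled pair of examples there, and mix the labels accordingly (the architecture and the eligible-layer set are illustrative):

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class ManifoldMixupMLP(nn.Module):
    """Small MLP that can interpolate hidden representations at a randomly
    chosen layer; layer 0 corresponds to plain input mixup."""
    def __init__(self, d_in=784, d_h=512, n_classes=10):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(d_in, d_h), nn.Linear(d_h, d_h), nn.Linear(d_h, n_classes)])

    def forward(self, x, perm=None, lam=1.0, mix_layer=None):
        h = x
        for i, layer in enumerate(self.layers):
            if i == mix_layer and perm is not None:
                h = lam * h + (1 - lam) * h[perm]  # mix hidden states here
            h = layer(h)
            if i < len(self.layers) - 1:
                h = F.relu(h)
        return h

def manifold_mixup_loss(model, x, y, alpha=2.0):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    k = random.randrange(len(model.layers))  # random eligible layer, incl. input
    logits = model(x, perm=perm, lam=lam, mix_layer=k)
    return lam * F.cross_entropy(logits, y) + (1 - lam) * F.cross_entropy(logits, y[perm])
```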
We introduce Noisy Feature Mixup (NFM), an inexpensive yet effective data augmentation method that combines interpolation-based training and noise injection schemes. Rather than training with convex combinations of pairs of examples and their labels, we use noise-perturbed convex combinations of pairs of data points in both input and feature space. This method includes mixup and manifold mixup as special cases, but it has additional advantages, including better smoothing of decision boundaries and enabling improved model robustness. We provide theory to understand this as well as the implicit regularization effects of NFM. Our theory is supported by empirical results, demonstrating the advantages of NFM as compared to mixup and manifold mixup. We show that residual networks and vision transformers trained with NFM have favorable trade-offs between predictive accuracy on clean data and robustness across a range of computer vision benchmark datasets.
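A minimal sketch of the per-layer NFM operation, assuming independent additive and multiplicative Gaussian noise on the mixed representation (the noise levels are illustrative, not the paper's settings):

```python
import torch

def noisy_feature_mixup(h, perm, lam, add_std=0.1, mult_std=0.1):
    """One NFM step: mix a pair of representations, then inject noise.
    h can be the raw input or a hidden activation at a randomly chosen
    layer, as in manifold mixup (noise scales are illustrative)."""
    h_mix = lam * h + (1 - lam) * h[perm]
    add = add_std * torch.randn_like(h_mix)          # additive noise
    mult = 1.0 + mult_std * torch.randn_like(h_mix)  # multiplicative noise
    return mult * h_mix + add
```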
We propose the first unified theoretical analysis of mixed sample data augmentation (MSDA), such as Mixup and CutMix. Our theoretical results show that, regardless of the choice of mixing strategy, MSDA behaves as a pixel-level regularization of the underlying training loss and a regularization of the first-layer parameters. Likewise, our theoretical results support that MSDA training strategies can improve adversarial robustness and generalization compared to the vanilla training strategy. Using the theoretical results, we provide a high-level understanding of how different design choices of MSDA work. For example, we show that the most popular MSDA methods, Mixup and CutMix, behave differently: CutMix regularizes the input gradients by pixel distance, while Mixup regularizes the input gradients regardless of pixel distance. Our theoretical results also show that the optimal MSDA strategy depends on the task, dataset, or model parameters. From these observations, we propose generalized MSDAs: a hybrid version of Mixup and CutMix (HMix) and Gaussian Mixup (GMix), simple extensions of Mixup and CutMix. They can leverage the advantages of both Mixup and CutMix while being highly efficient, with almost negligible computational cost compared to Mixup and CutMix. Our empirical study shows that HMix and GMix outperform previous state-of-the-art MSDA methods on CIFAR-100 and ImageNet classification tasks. Source code is available at https://github.com/naver-ai/hmix-gmix
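The unified view treats both methods as mask-based mixing, x_mix = m * x1 + (1 - m) * x2; a small NumPy sketch of the two masks (HMix and GMix, which generalize between such masks, are omitted here):

```python
import numpy as np

def msda_mask(kind, shape, lam, rng=np.random.default_rng()):
    """Mask-based view of MSDA: x_mix = m * x1 + (1 - m) * x2.
    Mixup uses a constant mask; CutMix uses a binary box of area 1 - lam."""
    H, W = shape
    if kind == "mixup":
        return np.full((H, W), lam)
    if kind == "cutmix":
        m = np.ones((H, W))
        cut_h, cut_w = int(H * np.sqrt(1 - lam)), int(W * np.sqrt(1 - lam))
        cy, cx = rng.integers(H), rng.integers(W)
        y0, y1 = np.clip([cy - cut_h // 2, cy + cut_h // 2], 0, H)
        x0, x1 = np.clip([cx - cut_w // 2, cx + cut_w // 2], 0, W)
        m[y0:y1, x0:x1] = 0.0  # the box is filled from the second image
        return m
    raise ValueError(kind)
```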
Mixup is a data augmentation method that generates new data points by mixing pairs of input data. While mixup generally improves prediction performance, it sometimes degrades performance. In this paper, we first identify the main causes of this phenomenon by theoretically and empirically analyzing the mixup algorithm. To resolve the issue, we propose GenLabel, a simple yet effective relabeling algorithm designed for mixup. In particular, GenLabel helps the mixup algorithm correctly label mixup samples by learning the class-conditional data distribution using generative models. Via extensive theoretical and empirical analysis, we show that mixup, when used together with GenLabel, can effectively resolve the aforementioned phenomenon, improving both generalization performance and adversarial robustness.
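A hedged sketch of the relabeling idea, with one Gaussian per class standing in for the generative model (the Gaussian choice and the uniform class prior are simplifying assumptions):

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_gaussians(X, y):
    """Fit one Gaussian per class, a simple stand-in for the paper's
    class-conditional generative model."""
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        cov = np.cov(Xc.T) + 1e-6 * np.eye(X.shape[1])  # regularized covariance
        models[c] = multivariate_normal(Xc.mean(0), cov)
    return models

def gen_label(x_mix, models):
    """Relabel a mixed sample x_mix = lam*x1 + (1-lam)*x2 with the class
    posterior under the generative model (uniform prior assumed), instead
    of the usual lam*y1 + (1-lam)*y2 label."""
    logp = np.array([m.logpdf(x_mix) for m in models.values()])
    p = np.exp(logp - logp.max())
    return p / p.sum()
```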
Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
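The principle fits in a few lines; a sketch of one mixup training step in PyTorch (the Beta parameter alpha is illustrative):

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_step(model, x, y, alpha=0.2):
    """One mixup training step: train on a convex combination of a pair
    of examples, with the loss mixed in the same proportion."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    logits = model(x_mix)
    return lam * F.cross_entropy(logits, y) + (1 - lam) * F.cross_entropy(logits, y[perm])
```

Calling `mixup_step(model, x, y).backward()` in place of the usual loss computation is the whole change to a standard training loop.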
Adversarial robustness has become a central goal in deep learning, both in theory and in practice. However, successful methods for improving adversarial robustness (such as adversarial training) greatly hurt generalization performance on unperturbed data. This could have a major impact on how adversarial robustness affects real-world systems (i.e., many may opt to forego robustness if it degrades accuracy on unperturbed data). We propose Interpolated Adversarial Training, which employs recently proposed interpolation-based training methods within the adversarial training framework. On CIFAR-10, adversarial training increases the standard test error (when there is no adversary) from 4.43% to 12.32%, whereas with our Interpolated Adversarial Training we retain adversarial robustness while achieving a standard test error of only 6.45%. With our technique, the relative increase in standard error for the robust model is reduced from 178.1% to only 45.5%. Furthermore, we provide a mathematical analysis of Interpolated Adversarial Training to confirm its efficiency and demonstrate its advantages in terms of robustness and generalization.
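A hedged single-step sketch, reusing the mixup_step helper from the sketch above: interpolation-based training is applied to both the clean batch and an adversarial batch (the FGSM attack and the equal weighting are simplifying assumptions):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Single-step attack, used here for brevity (the paper's exact attack
    and budget may differ; inputs assumed to lie in [0, 1])."""
    x = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    return (x + eps * grad.sign()).detach().clamp(0, 1)

def interpolated_adv_step(model, x, y, alpha=1.0):
    """Mixup applied to the clean batch and to an adversarial batch;
    the 0.5/0.5 weighting is an assumption."""
    x_adv = fgsm(model, x, y)
    return 0.5 * mixup_step(model, x, y, alpha) + 0.5 * mixup_step(model, x_adv, y, alpha)
```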
Currently, randomized smoothing is considered the state-of-the-art method for obtaining certifiably robust classifiers. Despite its remarkable performance, the method is associated with various serious problems, such as "certified accuracy waterfalls", a certification vs. accuracy trade-off, and even fairness issues. Input-dependent smoothing approaches have been proposed with the intention of overcoming these flaws. However, we demonstrate that these methods lack formal guarantees, and so the resulting certificates are not justified. We show that, in general, input-dependent smoothing suffers from the curse of dimensionality, forcing the variance function to have low semi-elasticity. On the other hand, we provide a theoretical and practical framework that enables the use of input-dependent smoothing even in the presence of the curse of dimensionality, under strict restrictions. We present one concrete design of the smoothing variance function and test it on CIFAR10 and MNIST. Our design mitigates some of the problems of classical smoothing and is formally underlined, yet further improvement of the design is still necessary.
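For context, a minimal sketch of the prediction side of randomized smoothing with an input-dependent noise level (sigma_fn is a placeholder for the paper's variance-function design; the certification machinery, which is the paper's actual subject, is omitted):

```python
import torch

def smoothed_predict(model, x, sigma_fn, n=1000):
    """Monte Carlo prediction of a smoothed classifier where the noise
    level may depend on the input. x is a single example; sigma_fn is a
    hypothetical variance function, not the paper's concrete design."""
    sigma = sigma_fn(x)
    counts = torch.zeros(model(x.unsqueeze(0)).shape[-1])
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)
        counts[model(noisy.unsqueeze(0)).argmax()] += 1
    return counts.argmax().item(), counts
```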
In many machine learning applications, it is important for the model to provide confidence scores that accurately capture its prediction uncertainty. Although modern learning methods have achieved great success in predictive accuracy, generating calibrated confidence scores remains a major challenge. Mixup, a popular yet simple data augmentation technique based on taking convex combinations of pairs of training examples, has been empirically found to significantly improve confidence calibration across diverse applications. However, when and how mixup helps calibration is still a mystery. In this paper, we theoretically prove that mixup improves calibration in the \textit{high-dimensional} setting by investigating natural statistical models. Interestingly, the calibration benefit of mixup increases as the model capacity increases. We support our theory with experiments on common architectures and datasets. In addition, we study how mixup improves the calibration of semi-supervised learning. While incorporating unlabeled data can sometimes make the model less calibrated, adding mixup training mitigates this issue and provably improves calibration. Our analysis provides new insights and a framework for understanding mixup and calibration.
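Calibration in such studies is typically quantified with the expected calibration error; a standard NumPy sketch (the equal-width binning is the usual convention, not something specific to this paper):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """Standard ECE: average |accuracy - confidence| over confidence bins,
    weighted by bin size. probs is (n, C) softmax output, labels is (n,)."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    ece, edges = 0.0, np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece
```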
Mixup is a popular data augmentation technique for training deep neural networks where additional samples are generated by linearly interpolating pairs of inputs and their labels. This technique is known to improve the generalization performance in many learning paradigms and applications. In this work, we first analyze Mixup and show that it implicitly regularizes infinitely many directional derivatives of all orders. We then propose a new method to improve Mixup based on the novel insight. To demonstrate the effectiveness of the proposed method, we conduct experiments across various domains such as images, tabular data, speech, and graphs. Our results show that the proposed method improves Mixup across various datasets using a variety of architectures, for instance, exhibiting an improvement over Mixup by 0.8% in ImageNet top-1 accuracy.
Neural networks are known to be a class of highly expressive functions able to fit even random input-output mappings with 100% accuracy. In this work we present properties of neural networks that complement this aspect of expressivity. By using tools from Fourier analysis, we highlight a learning bias of deep networks towards low frequency functions, i.e. functions that vary globally without local fluctuations, which manifests itself as a frequency-dependent learning speed. Intuitively, this property is in line with the observation that over-parameterized networks prioritize learning simple patterns that generalize across data samples. We also investigate the role of the shape of the data manifold by presenting empirical and theoretical evidence that, somewhat counter-intuitively, learning higher frequencies gets easier with increasing manifold complexity.
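The frequency-dependent learning speed is easy to reproduce in miniature: train an MLP on a two-frequency target and watch the residual spectrum (frequencies, architecture, and step counts are illustrative choices, not the paper's setup):

```python
import numpy as np
import torch
import torch.nn as nn

# Target with one low and one high frequency component; per the paper,
# the low-frequency part should be fit first.
x = torch.linspace(0, 1, 256).unsqueeze(1)
y = torch.sin(2 * np.pi * 2 * x) + torch.sin(2 * np.pi * 20 * x)

net = nn.Sequential(nn.Linear(1, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2001):
    loss = ((net(x) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 500 == 0:
        # Magnitude of the residual at each frequency bin (256 samples on
        # [0, 1], so bin k corresponds to k cycles over the interval).
        spec = np.abs(np.fft.rfft((y - net(x)).detach().squeeze().numpy()))
        print(f"step {step}: residual @f=2 {spec[2]:.3f}, @f=20 {spec[20]:.3f}")
```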
In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability. Indeed, optimizing only the training loss value, as is commonly done, can easily lead to suboptimal model quality. Motivated by prior work connecting the geometry of the loss landscape and generalization, we introduce a novel, effective procedure for instead simultaneously minimizing loss value and loss sharpness. In particular, our procedure, Sharpness-Aware Minimization (SAM), seeks parameters that lie in neighborhoods having uniformly low loss; this formulation results in a min-max optimization problem on which gradient descent can be performed efficiently. We present empirical results showing that SAM improves model generalization across a variety of benchmark datasets (e.g., CIFAR-{10, 100}, ImageNet, finetuning tasks) and models, yielding novel state-of-the-art performance for several. Additionally, we find that SAM natively provides robustness to label noise on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels. We open source our code at https://github.com/google-research/sam.
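A sketch of the two-step procedure: take a gradient-ascent step of radius rho in parameter space, compute the gradient there, and let the base optimizer apply it (rho=0.05 is illustrative; the released code handles more details):

```python
import torch

def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
    """One Sharpness-Aware Minimization step. base_opt must be an
    optimizer constructed over model.parameters()."""
    # First pass: gradient at the current weights w.
    loss = loss_fn(model(x), y)
    loss.backward()
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum()
                               for p in model.parameters() if p.grad is not None))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None); continue
            e = rho * p.grad / (grad_norm + 1e-12)  # ascend to w + eps
            p.add_(e); eps.append(e)
    model.zero_grad()
    # Second pass: gradient at the perturbed point w + eps.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)  # restore w; p.grad now holds the SAM gradient
    base_opt.step(); base_opt.zero_grad()
    return loss.item()
```

Used with, e.g., `base_opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)`, calling `sam_step` once per batch.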
Deep neural networks may easily memorize noisy labels present in real-world data, which degrades their ability to generalize. It is therefore important to track and evaluate the robustness of models against noisy label memorization. We propose a metric, called susceptibility, to gauge such memorization for neural networks. Susceptibility is simple and easy to compute during training. Moreover, it does not require access to ground-truth labels and it only uses unlabeled data. We empirically show the effectiveness of our metric in tracking memorization on various architectures and datasets and provide theoretical insights into the design of the susceptibility metric. Finally, we show through extensive experiments on datasets with synthetic and real-world label noise that one can utilize susceptibility and the overall training accuracy to distinguish models that maintain a low memorization on the training set and generalize well to unseen clean data.
We study a regression problem on a compact manifold M. In order to take advantage of the underlying geometry and topology of the data, the regression task is performed on the basis of the first few eigenfunctions of the Laplace-Beltrami operator of the manifold, regularized with topological penalties. The proposed penalties are based on the topology of the sublevel sets of either the eigenfunctions or the estimated function. The overall approach is shown to yield promising and competitive performance on various applications to both synthetic and real data sets. We also provide theoretical guarantees on the regression function estimates, in terms of both prediction error and smoothness (in a topological sense). Taken together, these results support the relevance of our approach in the case where the target function is "topologically smooth".
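A hedged sketch of the pipeline, with a kNN-graph Laplacian approximating the Laplace-Beltrami operator; the topological penalty, which is the paper's actual contribution, is replaced here by a plain ridge penalty for brevity:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse.csgraph import laplacian

def eigenfunction_regression(X, y, k_neighbors=10, n_eig=20, ridge=1e-3):
    """Regress y onto the first eigenvectors of a graph Laplacian built on
    the sample (a standard discrete proxy for the Laplace-Beltrami
    operator); the ridge penalty stands in for the topological one."""
    n = X.shape[0]
    _, idx = cKDTree(X).query(X, k=k_neighbors + 1)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i, 1:]] = 1.0       # skip self-neighbor at idx[i, 0]
    W = np.maximum(W, W.T)           # symmetrize the kNN graph
    L = laplacian(W, normed=True)
    vals, vecs = np.linalg.eigh(L)   # ascending eigenvalues
    Phi = vecs[:, :n_eig]            # first eigenfunctions on the sample
    coef = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_eig), Phi.T @ y)
    return Phi @ coef                # fitted values at the sample points
```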
In many contexts, simpler models are preferable to more complex models, and the control of this model complexity is the goal of many methods in machine learning, such as regularization, hyperparameter tuning, and architecture design. In deep learning, it has been difficult to understand the underlying mechanisms of complexity control, since many traditional measures are not naturally suited to deep neural networks. Here we develop the notion of geometric complexity, a measure of the variability of the model function computed using a discrete Dirichlet energy. Using a combination of theoretical arguments and empirical results, we show that many common training heuristics, such as parameter norm regularization, spectral norm regularization, flatness regularization, implicit gradient regularization, noise regularization, and the choice of parameter initialization, all act to control geometric complexity, providing a unifying framework in which to characterize the behavior of deep learning models.
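A sketch of the discrete Dirichlet energy over a batch, computed as the mean squared Frobenius norm of the input-output Jacobian (the naive per-output autograd loop is kept for clarity):

```python
import torch

def geometric_complexity(model, x):
    """Discrete Dirichlet energy of the model function over a batch:
    mean over examples of sum_j ||d f_j / d x||^2, for a model whose
    output has shape (batch, n_outputs)."""
    x = x.clone().requires_grad_(True)
    out = model(x)
    gc = 0.0
    for j in range(out.shape[1]):  # one backward pass per output coordinate
        g, = torch.autograd.grad(out[:, j].sum(), x, retain_graph=True)
        gc = gc + (g ** 2).flatten(1).sum(1)
    return gc.mean()
```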
Machine learning (ML) robustness and domain generalization are fundamentally related: they essentially concern data distribution shifts under adversarial and natural settings, respectively. On one hand, recent studies show that more robust (adversarially trained) models are more generalizable. On the other hand, a theoretical understanding of their fundamental connection is lacking. In this paper, we explore the relationship between regularization and domain transferability, considering different factors such as norm regularization and data augmentation (DA). We propose a general theoretical framework proving that factors involving the regularization of the model function class are sufficient conditions for relative domain transferability. Our analysis implies that "robustness" is neither necessary nor sufficient for transferability; rather, regularization is a more fundamental perspective for understanding domain transferability. We then discuss popular DA protocols (including adversarial training) and show when they can be viewed as function class regularization under certain conditions and therefore improve generalization. We conduct extensive experiments to verify our theoretical findings, and show several counterexamples where robustness and generalization are negatively correlated on different datasets.
Adapting to the structure of data distributions (such as symmetries and transformation invariances) is an important challenge in machine learning. Invariances can be built into the learning process by architecture design or by augmenting the dataset; both require a priori knowledge of the exact nature of the symmetries. Lacking this knowledge, practitioners resort to expensive and time-consuming tuning. To address this problem, we propose a new approach to learning distributions of augmentation transforms, in a new \emph{Transformed Risk Minimization} (TRM) framework. In addition to the predictive model, we also optimize over transformations chosen from a hypothesis space. As an algorithmic framework, our TRM method is (1) efficient (jointly learning augmentations and models in a \emph{single training loop}), (2) modular (working with \emph{any} training algorithm), and (3) general (handling both \emph{discrete and continuous} augmentations). We theoretically compare TRM with standard risk minimization, and give a PAC-Bayes upper bound on its generalization error. We propose to optimize this bound over a rich augmentation space via a new parametrization of blockwise compositions, leading to the new \emph{Stochastic Compositional Augmentation Learning} (SCALE) algorithm. We compare SCALE experimentally with prior methods (Fast AutoAugment and Augerino) on CIFAR10/100 and SVHN. In addition, we show that SCALE can correctly learn certain symmetries in the data distribution (recovering rotations on rotated MNIST) and can also improve the calibration of the learned model.
We introduce Parseval networks, a form of deep neural networks in which the Lipschitz constant of linear, convolutional and aggregation layers is constrained to be smaller than 1. Parseval networks are empirically and theoretically motivated by an analysis of the robustness of the predictions made by deep neural networks when their input is subject to an adversarial perturbation. The most important feature of Parseval networks is to maintain weight matrices of linear and convolutional layers to be (approximately) Parseval tight frames, which are extensions of orthogonal matrices to non-square matrices. We describe how these constraints can be maintained efficiently during SGD. We show that Parseval networks match the state-of-the-art in terms of accuracy on CIFAR-10/100 and Street View House Numbers (SVHN), while being more robust than their vanilla counterpart against adversarial examples. Incidentally, Parseval networks also tend to train faster and make a better usage of the full capacity of the networks.
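A sketch of the retraction the paper applies after each SGD step to keep weight matrices near Parseval tight frames; only fully connected layers are handled here (convolutions require an unfolded version of the same update):

```python
import torch

def parseval_retraction(model, beta=1e-3):
    """Pull each linear weight matrix back toward a Parseval tight frame
    via the paper's update W <- (1 + beta) W - beta * W W^T W, applied
    after every optimizer step (beta is the retraction step size)."""
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, torch.nn.Linear):
                W = m.weight
                W.copy_((1 + beta) * W - beta * (W @ W.t() @ W))
```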
Randomized smoothing is currently the state-of-the-art method for constructing certifiably robust classifiers from neural networks against $\ell_2$-adversarial perturbations. Under this paradigm, the robustness of a classifier is aligned with its prediction confidence, i.e., higher confidence of the smoothed classifier implies better robustness. This motivates us to rethink the fundamental trade-off between accuracy and robustness in terms of calibrating the confidences of a smoothed classifier. In this paper, we propose a simple training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup: it trains on convex combinations of samples along the direction of adversarial perturbation for each input. The proposed procedure effectively identifies over-confidence in smoothed classifiers as a cause of their limited robustness, and offers an intuitive way to adaptively set a new decision boundary between these samples for better robustness. Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers compared to existing state-of-the-art robust training methods.
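A heavily simplified sketch of the self-mixup step: estimate an adversarial direction of the smoothed classifier from noise-averaged gradients, then form a convex combination along it (the single attack step and the hyperparameters are assumptions; the paper's full procedure iterates the attack and also handles the labels of the mixed samples):

```python
import torch
import torch.nn.functional as F

def smoothmix_batch(model, x, y, sigma=0.25, step=1.0, n_noise=4):
    """Build self-mixup samples along an adversarial direction of the
    smoothed classifier (one attack step; illustrative hyperparameters)."""
    x_req = x.clone().requires_grad_(True)
    # Average the loss over Gaussian noise to attack the *smoothed* classifier.
    loss = sum(F.cross_entropy(model(x_req + sigma * torch.randn_like(x_req)), y)
               for _ in range(n_noise)) / n_noise
    grad, = torch.autograd.grad(loss, x_req)
    x_adv = (x + step * grad.sign()).detach()
    lam = torch.rand(1).item()
    return lam * x + (1 - lam) * x_adv  # convex combination for training
```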
Understanding the generalization of deep neural networks is one of the most important tasks in deep learning. Although much progress has been made, theoretical error bounds still often behave disparately from empirical observations. In this work, we develop margin-based generalization bounds, where the margins are normalized with optimal transport costs between independent random subsets sampled from the training distribution. In particular, the optimal transport cost can be interpreted as a notion of variance which captures the structural properties of the learned feature space. Our bounds robustly predict the generalization error, given training data and network parameters, on large-scale datasets. Theoretically, we show that the concentration and separation of features play crucial roles in generalization, supporting empirical results in the literature. The code is available at \url{https://github.com/chingyaoc/kv-margin}.
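A sketch of the normalizer: the optimal transport cost between two independent random subsets of learned features, computed exactly via assignment for equal-size subsets (the subset size is illustrative; the repository linked above contains the authors' implementation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def subset_transport_cost(features, k=128, rng=np.random.default_rng(0)):
    """Wasserstein-1 cost between two disjoint random subsets of the
    feature matrix (requires len(features) >= 2 * k); for equal-size
    uniform subsets this reduces to an assignment problem."""
    idx = rng.choice(len(features), size=2 * k, replace=False)
    A, B = features[idx[:k]], features[idx[k:]]
    cost = cdist(A, B)                   # pairwise Euclidean distances
    r, c = linear_sum_assignment(cost)   # optimal matching
    return cost[r, c].mean()
```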