Importance-weighted risk minimization is a key ingredient in many machine learning algorithms for causal inference, domain adaptation, class imbalance, and off-policy reinforcement learning. While the effect of importance weighting is well-characterized for low-capacity misspecified models, little is known about how it impacts overparameterized deep neural networks. Inspired by recent theoretical results showing that on (linearly) separable data, deep linear networks optimized by SGD learn weight-agnostic solutions, we ask: for realistic deep networks, for which many practical datasets are separable, what is the effect of importance weighting? We present the surprising finding that while importance weighting impacts deep nets early in training, so long as the nets are able to separate the training data, its effect diminishes over successive epochs. Moreover, while L2 regularization and batch normalization (but not dropout) restore some of the impact of importance weighting, they express the effect via (seemingly) the wrong abstraction: why should practitioners tweak the L2 regularization, and by how much, to produce the correct weighting effect? We experimentally confirm these findings across a range of architectures and datasets.
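For reference, importance-weighted risk minimization here simply rescales each example's loss by its weight before averaging; a minimal PyTorch sketch (the weighting scheme below is a placeholder, not the paper's setup):

```python
import torch
import torch.nn.functional as F

def importance_weighted_ce(logits, targets, weights):
    """Cross-entropy in which each example's loss is scaled by its importance weight."""
    per_example = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_example).mean()

# Toy usage: upweight examples of class 1 by a factor of 5 (placeholder scheme).
logits = torch.randn(8, 2, requires_grad=True)
targets = torch.randint(0, 2, (8,))
weights = torch.where(targets == 1, torch.tensor(5.0), torch.tensor(1.0))
loss = importance_weighted_ce(logits, targets, weights)
loss.backward()
```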
Previous work has proposed many new loss functions and regularizers that improve test accuracy on image classification tasks. However, it is not clear whether these loss functions learn better representations for downstream tasks. This paper studies how the choice of training objective affects the transferability of the hidden representations of convolutional neural networks trained on ImageNet. We show that many objectives lead to statistically significant improvements in ImageNet accuracy over vanilla softmax cross-entropy, but the resulting fixed feature extractors transfer substantially worse to downstream tasks, and the choice of loss has little effect when the networks are fully fine-tuned on the new tasks. Using centered kernel alignment to measure the similarity between hidden representations of networks, we find that differences among loss functions are apparent only in the last few layers of the network. We delve deeper into the representations of the penultimate layer and find that different objectives and hyperparameter combinations lead to dramatically different levels of class separation. Representations with higher class separation obtain higher accuracy on the original task, but their features are less useful for downstream tasks. Our results suggest a trade-off between learning invariant features for the original task and learning features relevant for transfer tasks.
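Centered kernel alignment (CKA), used above to compare hidden representations across networks, has a simple linear form; a minimal sketch (the activation matrices are random placeholders):

```python
import torch

def linear_cka(x, y):
    """Linear CKA between two activation matrices of shape (n_examples, n_features)."""
    x = x - x.mean(dim=0, keepdim=True)  # center each feature dimension
    y = y - y.mean(dim=0, keepdim=True)
    hsic = (x.t() @ y).norm() ** 2       # ||X^T Y||_F^2
    return hsic / ((x.t() @ x).norm() * (y.t() @ y).norm())

# Compare two (random) layer representations computed on the same 100 examples.
acts_a, acts_b = torch.randn(100, 64), torch.randn(100, 32)
print(linear_cka(acts_a, acts_b))
```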
Deep learning algorithms can fare poorly when the training dataset suffers from heavy class-imbalance but the testing criterion requires good generalization on less frequent classes. We design two novel methods to improve performance in such scenarios. First, we propose a theoretically-principled label-distribution-aware margin (LDAM) loss motivated by minimizing a margin-based generalization bound. This loss replaces the standard cross-entropy objective during training and can be applied with prior strategies for training with class-imbalance such as re-weighting or re-sampling. Second, we propose a simple, yet effective, training schedule that defers re-weighting until after the initial stage, allowing the model to learn an initial representation while avoiding some of the complications associated with re-weighting or re-sampling. We test our methods on several benchmark vision tasks including the real-world imbalanced dataset iNaturalist 2018. Our experiments show that either of these methods alone can already improve over existing techniques and their combination achieves even better performance gains.
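A minimal sketch of the two ingredients: an LDAM-style loss that enforces a per-class margin shrinking with class frequency (the n_j^(-1/4) scaling and the constants follow the paper's recipe but should be treated as assumptions here), plus a schedule that defers class re-weighting to a later stage of training:

```python
import torch
import torch.nn.functional as F

def ldam_loss(logits, targets, class_counts, max_margin=0.5, scale=30.0, weight=None):
    """Label-distribution-aware margin loss: subtract a class-dependent margin
    from the target-class logit, then apply (scaled) cross-entropy."""
    margins = 1.0 / class_counts.float() ** 0.25
    margins = margins * (max_margin / margins.max())       # largest margin equals max_margin
    adjusted = logits.clone()
    adjusted[torch.arange(len(targets)), targets] -= margins[targets]
    return F.cross_entropy(scale * adjusted, targets, weight=weight)

# Deferred re-weighting (DRW): train without class weights first, switch them on later.
class_counts = torch.tensor([5000, 50])
for epoch in range(200):
    weight = None if epoch < 160 else class_counts.sum() / class_counts.float()
    # loss = ldam_loss(model(x), y, class_counts, weight=weight)   # placeholder training step
```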
We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness. While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first. In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise vs. real data. We also demonstrate that for appropriately tuned explicit regularization (e.g., dropout) we can degrade DNN training performance on noise datasets without compromising generalization on real data. Our analysis suggests that the notions of effective capacity which are dataset independent are unlikely to explain the generalization performance of deep networks when trained with gradient based methods because training data itself plays an important role in determining the degree of memorization.
The presence of mislabeled observations in data is a notoriously challenging problem in statistics and machine learning, associated with poor generalization properties for traditional classifiers and, perhaps even more so, for flexible classifiers such as neural networks. Here we propose a novel double regularization of the neural network training loss that combines a penalty on the complexity of the classification model with an optimal reweighting of the training observations. The combined penalties lead to improved generalization properties and strong robustness to overfitting in various settings with mislabeled training data, as well as robustness to variation in the initial parameter values at training time. We provide a theoretical justification for our proposed method, derived for a simple case of logistic regression. We demonstrate the double-regularized model, here denoted DRFit, for neural network classification of (i) MNIST and (ii) CIFAR-10, in both cases with simulated mislabeling. We also illustrate that DRFit identifies mislabeled data points with very good accuracy. This provides strong support for DRFit as a practical off-the-shelf classifier, since, without any sacrifice in performance, we obtain a classifier that simultaneously reduces overfitting against mislabeling and gives an accurate measure of the trustworthiness of the labels.
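A schematic of the double-regularization idea described above, written as a generic instantiation for illustration only (the per-observation weight parameterization and the penalty forms are assumptions, not necessarily the authors' exact formulation): per-observation weights are learned jointly with the model, with one penalty on model complexity and another keeping the weights near uniform unless an observation looks mislabeled.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy setup: a linear model plus one trainable re-weighting parameter per observation.
model = nn.Linear(20, 2)
obs_logits = nn.Parameter(torch.zeros(128))
x, y = torch.randn(128, 20), torch.randint(0, 2, (128,))
opt = torch.optim.Adam(list(model.parameters()) + [obs_logits], lr=1e-2)

lam_model, lam_weights = 1e-3, 1e-1
for step in range(200):
    w = torch.sigmoid(obs_logits)                          # per-observation weights in (0, 1)
    per_example = F.cross_entropy(model(x), y, reduction="none")
    complexity = sum(p.pow(2).sum() for p in model.parameters())
    loss = (w * per_example).mean() + lam_model * complexity \
        + lam_weights * (w - 1.0).pow(2).mean()            # keep weights near 1 by default
    opt.zero_grad(); loss.backward(); opt.step()

# Observations whose learned weight ends up small are flagged as likely mislabeled.
```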
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
Classifiers trained with class-imbalanced data are known to perform poorly on test data of the "minor" classes, for which we have insufficient training data. In this paper, we investigate learning a ConvNet classifier under such a scenario. We find that a ConvNet significantly overfits the minor classes, which is quite the opposite of traditional machine learning algorithms that often underfit minor classes. We conduct a series of analyses and discover the feature deviation phenomenon: the learned ConvNet generates deviated features between the training and test data of minor classes, which explains how the overfitting happens. To compensate for the effect of feature deviation, which pushes test data toward low decision value regions, we propose incorporating class-dependent temperatures (CDT) when training the ConvNet. CDT simulates feature deviation in the training phase, forcing the ConvNet to enlarge the decision values for minor-class data so that it can overcome the real feature deviation in the test phase. We validate our approach on benchmark datasets and achieve promising performance. We hope that our insights can inspire new ways of thinking about resolving class-imbalanced deep learning.
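A minimal sketch of class-dependent temperatures: each logit is divided by a temperature that grows with class rarity, so during training the network must produce larger decision values for minor classes (the power-law form of the temperature is an assumption here):

```python
import torch
import torch.nn.functional as F

def cdt_cross_entropy(logits, targets, class_counts, gamma=0.2):
    """Cross-entropy with class-dependent temperatures: rarer classes get larger
    temperatures, forcing the network to enlarge their decision values."""
    counts = class_counts.float()
    temps = (counts.max() / counts) ** gamma   # assumed form: a_c = (n_max / n_c)^gamma
    return F.cross_entropy(logits / temps, targets)

# At test time the temperatures are dropped and plain argmax over logits is used.
class_counts = torch.tensor([5000, 500, 50])
logits = torch.randn(16, 3)
targets = torch.randint(0, 3, (16,))
print(cdt_cross_entropy(logits, targets, class_counts))
```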
We identify and formalize a fundamental gradient descent phenomenon that results in a learning proclivity in over-parameterized neural networks. Gradient starvation arises when the cross-entropy loss is minimized by capturing only a subset of the features relevant to the task, despite the presence of other predictive features that fail to be discovered. This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks. Using tools from dynamical systems theory, we identify simple properties of the learning dynamics during gradient descent that lead to this imbalance, and prove that such a situation can be expected given certain statistical structure in the training data. Based on our proposed formalism, we develop guarantees for a novel regularization method aimed at decoupling feature learning dynamics, improving accuracy and robustness in cases hindered by gradient starvation. We illustrate our findings with simple and real-world out-of-distribution (OOD) generalization experiments.
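The regularizer alluded to above decouples feature learning dynamics; one simple instantiation in that spirit, sketched here under our own assumptions rather than as the paper's exact formulation, adds a penalty on the magnitude of the network's logits so that already-learned dominant features do not suppress the gradients of the remaining ones:

```python
import torch
import torch.nn.functional as F

def decoupled_loss(logits, targets, lam=1e-2):
    """Cross-entropy plus a penalty on the squared logits, intended to keep
    strongly learned features from starving the gradients of weaker ones."""
    return F.cross_entropy(logits, targets) + 0.5 * lam * (logits ** 2).mean()

logits = torch.randn(16, 4, requires_grad=True)
targets = torch.randint(0, 4, (16,))
decoupled_loss(logits, targets).backward()
```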
The grokking phenomenon reported by Power et al. [power2021grokking] refers to a regime where a long period of overfitting is followed by a seemingly sudden transition to perfect generalization. In this paper, we attempt to reveal the underpinnings of grokking via a series of empirical studies. Specifically, we uncover an optimization anomaly plaguing adaptive optimizers at extremely late stages of training, referred to as the slingshot mechanism. A prominent artifact of the slingshot mechanism can be measured by the cyclic phase transitions between stable and unstable training regimes, and can be easily monitored by the cyclic behavior of the norm of the last layer's weights. We empirically observe that, without explicit regularization, grokking as reported in [power2021grokking] almost exclusively happens at the onset of slingshots and is absent without them. While common and easily reproduced in more general settings, the slingshot mechanism does not follow from any known optimization theories that we are aware of, and can be easily overlooked without an in-depth examination. Our work points to a surprising and useful inductive bias of adaptive gradient optimizers at late stages of training, calling for a revised theoretical analysis of their origin.
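The quantity described above is cheap to track; a minimal sketch that logs the last-layer weight norm once per epoch (the model and training step are placeholders):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
last_layer = model[-1]
norm_history = []

for epoch in range(100):
    # ... one epoch of training with an adaptive optimizer such as Adam (placeholder) ...
    norm_history.append(last_layer.weight.norm().item())

# Cyclic growth and sharp drops in norm_history are the slingshot signature described above.
```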
Although overparameterized models have shown success on many machine learning tasks, their accuracy can drop on test distributions that differ from the training distribution. This accuracy drop still limits applying machine learning in the wild. At the same time, importance weighting, a traditional technique for handling distribution shift, has been demonstrated both empirically and theoretically to have little or even no effect on overparameterized models. In this paper, we propose importance tempering to improve the decision boundary and achieve consistently better results for overparameterized models. Theoretically, we justify that the choice of group temperature can differ under label shift and spurious correlation settings. At the same time, we also prove that properly selected temperatures can extricate minority collapse in imbalanced classification. Empirically, we achieve state-of-the-art results on worst-group classification tasks using importance tempering.
A key assumption of supervised learning is that training and test data follow the same probability distribution. However, this fundamental assumption is not always satisfied in practice, for example, due to changing environments, sample selection bias, privacy concerns, or high labeling costs. Transfer learning (TL) relaxes this assumption and allows us to learn under distribution shift. Classical TL methods typically rely on importance weighting: a predictor is trained on training losses weighted according to the importance (i.e., the test-over-training density ratio). However, as real-world machine learning tasks become increasingly complex, high-dimensional, and dynamic, novel approaches have recently been explored to cope with these challenges. In this article, after introducing the foundations of importance-weighting-based TL, we review recent advances based on joint and dynamic importance-predictor estimation. Furthermore, we introduce a causal mechanism transfer method that incorporates causal structure into TL. Finally, we discuss future perspectives of TL research.
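The importance (the test-over-training density ratio) usually has to be estimated; one standard estimator, shown here as a hedged illustration rather than as the article's specific method, trains a probabilistic classifier to distinguish test from training inputs and converts its odds into a density ratio:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_importance(x_train, x_test):
    """Estimate w(x) = p_test(x) / p_train(x) on the training inputs via a
    domain classifier: w(x) is proportional to P(test | x) / P(train | x)."""
    x = np.vstack([x_train, x_test])
    d = np.concatenate([np.zeros(len(x_train)), np.ones(len(x_test))])  # 0 = train, 1 = test
    clf = LogisticRegression(max_iter=1000).fit(x, d)
    p_test = clf.predict_proba(x_train)[:, 1]
    odds = p_test / (1.0 - p_test)
    return odds * len(x_train) / len(x_test)   # correct for the train/test sample-size ratio

x_train = np.random.randn(500, 5)
x_test = np.random.randn(200, 5) + 0.5          # shifted test distribution
weights = estimate_importance(x_train, x_test)  # plug into an importance-weighted loss
```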
Recent results suggest that reinitializing a subset of the parameters of a neural network during training can improve generalization, particularly for small training sets. We study the impact of different reinitialization methods on several convolutional architectures across 12 benchmark image classification datasets, analyzing their potential gains and highlighting their limitations. We also introduce a new layerwise reinitialization algorithm that outperforms previous methods, and suggest explanations for the observed improved generalization. First, we show that layerwise reinitialization increases the margin on the training examples without increasing the norm of the weights, thus leading to an improvement in margin-based generalization bounds for neural networks. Second, we demonstrate that it settles in flatter local minima of the loss surface. Third, it encourages learning general rules and discourages memorization by placing emphasis on the lower layers of the neural network. Our takeaway message is that the accuracy of convolutional neural networks can be improved for small datasets using bottom-up layerwise reinitialization, where the number of reinitialized layers may vary depending on the available compute budget.
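One plausible reading of the bottom-up schedule, sketched under our own assumptions about the exact procedure: between training rounds, re-initialize every layer above a cutoff index that rises over rounds, so the lowest layers receive the most accumulated training:

```python
import torch
import torch.nn as nn

def reinit_above(model_layers, cutoff):
    """Re-initialize all layers with index >= cutoff, keeping the lower layers."""
    for i, layer in enumerate(model_layers):
        if i >= cutoff and hasattr(layer, "reset_parameters"):
            layer.reset_parameters()

layers = nn.ModuleList([nn.Linear(32, 64), nn.Linear(64, 64), nn.Linear(64, 10)])
model = nn.Sequential(*[m for l in layers for m in (l, nn.ReLU())][:-1])

for cutoff in range(1, len(layers)):   # bottom-up: keep more of the lower layers each round
    # ... train `model` for some epochs (placeholder) ...
    reinit_above(layers, cutoff)
# ... final training round after the last re-initialization ...
```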
We show that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.
Deep neural networks excel at learning the training data, but often provide incorrect and confident predictions when evaluated on slightly different test examples. This includes distribution shifts, outliers, and adversarial examples. To address these issues, we propose Manifold Mixup, a simple regularizer that encourages neural networks to predict less confidently on interpolations of hidden representations. Manifold Mixup leverages semantic interpolations as additional training signal, obtaining neural networks with smoother decision boundaries at multiple levels of representation. As a result, neural networks trained with Manifold Mixup learn class-representations with fewer directions of variance. We prove theory on why this flattening happens under ideal conditions, validate it on practical situations, and connect it to previous works on information theory and generalization. In spite of incurring no significant computation and being implemented in a few lines of code, Manifold Mixup improves strong baselines in supervised learning, robustness to single-step adversarial attacks, and test log-likelihood.
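A minimal sketch of the core operation: pick a layer at random, mix the hidden states of a shuffled pair of examples with a Beta-distributed coefficient, and mix the targets the same way (the toy MLP below is a placeholder architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixupMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(32, 64), nn.Linear(64, 64), nn.Linear(64, 10)])

    def forward(self, x, y_onehot, alpha=2.0):
        mix_layer = torch.randint(len(self.layers), (1,)).item()     # layer to mix at
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        perm = torch.randperm(x.size(0))
        y_mixed = lam * y_onehot + (1 - lam) * y_onehot[perm]
        h = x
        for i, layer in enumerate(self.layers):
            if i == mix_layer:                                        # mix hidden states
                h = lam * h + (1 - lam) * h[perm]
            h = layer(h)
            if i < len(self.layers) - 1:
                h = F.relu(h)
        return h, y_mixed

model = MixupMLP()
x = torch.randn(16, 32)
y = F.one_hot(torch.randint(0, 10, (16,)), 10).float()
logits, y_mixed = model(x, y)
loss = -(y_mixed * F.log_softmax(logits, dim=1)).sum(dim=1).mean()   # soft-target cross-entropy
```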
We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture. They simply amount to at most a matrix inversion and multiplication, provided that we know the probability of each class being corrupted into another. We further show how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and thus providing an end-to-end framework. Extensive experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large scale dataset of clothing images, employing a diversity of architectures (stacking dense, convolutional, pooling, dropout, batch normalization, word embedding, LSTM and residual layers), demonstrate the noise robustness of our proposals. Incidentally, we also prove that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise.
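A minimal sketch of the "forward" variant of such a correction: given the corruption matrix T, with T[i, j] the probability that class i is flipped to class j, pass the model's clean-class probabilities through T before taking the loss against the observed (possibly noisy) labels; the companion "backward" variant instead multiplies the per-class losses by the inverse of T. The matrix below is a toy example:

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_targets, T):
    """Cross-entropy on T-corrected predictions: p_noisy = p_clean @ T."""
    p_clean = F.softmax(logits, dim=1)
    p_noisy = p_clean @ T                          # mix clean-class probabilities through T
    return F.nll_loss(torch.log(p_noisy + 1e-12), noisy_targets)

# Toy corruption matrix for 3 classes: 20% of class 0 is flipped to class 1, etc.
T = torch.tensor([[0.8, 0.2, 0.0],
                  [0.0, 0.9, 0.1],
                  [0.1, 0.0, 0.9]])
logits = torch.randn(16, 3)
noisy_targets = torch.randint(0, 3, (16,))
print(forward_corrected_loss(logits, noisy_targets, T))
```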
Accurate uncertainty quantification is a major challenge in deep learning, as neural networks can make overconfident errors and assign high confidence predictions to out-of-distribution (OOD) inputs. The most popular approaches to estimate predictive uncertainty in deep learning are methods that combine predictions from multiple neural networks, such as Bayesian neural networks (BNNs) and deep ensembles. However, their practicality in real-time, industrial-scale applications is limited due to the high memory and computational cost. Furthermore, ensembles and BNNs do not necessarily fix all the issues with the underlying member networks. In this work, we study principled approaches to improve the uncertainty properties of a single network, based on a single, deterministic representation. By formalizing uncertainty quantification as a minimax learning problem, we first identify distance awareness, i.e., the model's ability to quantify the distance of a testing example from the training data, as a necessary condition for a DNN to achieve high-quality (i.e., minimax optimal) uncertainty estimation. We then propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs with two simple changes: (1) applying spectral normalization to hidden weights to enforce bi-Lipschitz smoothness in representations and (2) replacing the last output layer with a Gaussian process layer. On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration and out-of-domain detection. Furthermore, SNGP provides complementary benefits to popular techniques such as deep ensembles and data augmentation, making it a simple and scalable building block for probabilistic deep learning. Code is open-sourced at https://github.com/google/uncertainty-baselines
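A toy sketch of the two changes, compressing the full recipe (in particular the Laplace-approximated GP covariance) into its simplest form: spectral normalization on the hidden layers and a random-feature, GP-style output head in place of the usual dense layer:

```python
import math
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class ToySNGP(nn.Module):
    def __init__(self, in_dim=32, hidden=64, num_classes=10, num_rff=256):
        super().__init__()
        # (1) Spectral normalization on hidden weights for bi-Lipschitz representations.
        self.body = nn.Sequential(
            spectral_norm(nn.Linear(in_dim, hidden)), nn.ReLU(),
            spectral_norm(nn.Linear(hidden, hidden)), nn.ReLU(),
        )
        # (2) GP-style output layer via fixed random Fourier features + trainable weights.
        self.register_buffer("rff_w", torch.randn(hidden, num_rff))
        self.register_buffer("rff_b", 2 * math.pi * torch.rand(num_rff))
        self.out = nn.Linear(num_rff, num_classes)

    def forward(self, x):
        h = self.body(x)
        phi = math.sqrt(2.0 / self.rff_w.shape[1]) * torch.cos(h @ self.rff_w + self.rff_b)
        return self.out(phi)

model = ToySNGP()
print(model(torch.randn(4, 32)).shape)  # torch.Size([4, 10])
```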
The goal in label-imbalanced and group-sensitive classification is to optimize relevant metrics such as balanced error and equal opportunity. Classical methods, such as weighted cross-entropy, fail when training deep nets to the terminal phase of training (TPT), that is, training beyond zero training error. This observation has motivated a recent flurry of activity in developing heuristic alternatives that follow the intuitive mechanism of promoting larger margins for minority classes. In contrast to previous heuristics, we follow a principled analysis explaining how different loss adjustments affect margins. First, we prove that for all linear classifiers trained in the TPT, it is necessary to introduce multiplicative, rather than additive, logit adjustments so that the interclass margins change appropriately. To show this, we discover a connection between the multiplicative CE modification and cost-sensitive support vector machines. Perhaps counterintuitively, we also find that, at the start of training, the same multiplicative weights can actually harm the minority classes. Thus, while additive adjustments are ineffective in the TPT, we show that they can speed up convergence by countering the initial negative effect of the multiplicative weights. Motivated by these findings, we formulate the vector-scaling (VS) loss, which captures existing techniques as special cases. Moreover, we introduce a natural extension of the VS loss to group-sensitive classification, thus treating the two common types of imbalance (label/group) in a unifying way. Importantly, our experiments on state-of-the-art datasets are fully consistent with our theoretical insights and confirm the superior performance of our algorithms. Finally, for imbalanced Gaussian-mixture data, we perform a generalization analysis, revealing tradeoffs between balanced/standard error and equal opportunity.
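A minimal sketch of the vector-scaling loss: each class's logit receives a multiplicative factor and an additive offset before softmax cross-entropy. The specific per-class values below are placeholders (the paper derives them from class frequencies); setting the factors to 1 and the offsets to 0 recovers plain cross-entropy:

```python
import torch
import torch.nn.functional as F

def vs_loss(logits, targets, mult, add):
    """Vector-scaling loss: cross-entropy on per-class multiplicatively and
    additively adjusted logits; mult=1, add=0 is plain CE, and mult=1 with
    frequency-based offsets is additive logit adjustment."""
    return F.cross_entropy(mult * logits + add, targets)

# Placeholder per-class adjustment vectors for a 2-class problem.
mult = torch.tensor([1.0, 0.8])
add = torch.tensor([0.0, 0.5])
logits = torch.randn(16, 2)
targets = torch.randint(0, 2, (16,))
print(vs_loss(logits, targets, mult, add))
```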
Neural networks trained with SGD were recently shown to rely preferentially on linearly-predictive features and can ignore complex, equally-predictive ones. This simplicity bias can explain their lack of robustness out of distribution (OOD). The more complex the task to learn, the more likely it is that statistical artifacts (i.e., selection biases, spurious correlations) are simpler than the mechanisms to be learned. We demonstrate that the simplicity bias can be mitigated and OOD generalization improved. We train a set of similar models to fit the data in different ways using a penalty on the alignment of their input gradients. We show theoretically and empirically that this induces the learning of more complex predictive patterns. OOD generalization fundamentally requires information beyond i.i.d. examples, such as multiple training environments, counterfactual examples, or other side information. Our approach shows that we can defer this requirement to an independent model selection stage. We obtain SOTA results in visual recognition on biased data and in generalization across visual domains. The method, the first to evade the simplicity bias, highlights the need for a better understanding and control of inductive biases in deep learning.
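A minimal sketch of the diversification penalty for a pair of models: compute each model's input gradient of its loss and penalize their alignment (squared cosine similarity is used here as one common alignment measure; the exact form is an assumption):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_alignment_penalty(models, x, y):
    """Penalize alignment of the models' input gradients so they fit the data differently."""
    grads = []
    for model in models:
        x_req = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x_req), y)
        g, = torch.autograd.grad(loss, x_req, create_graph=True)
        grads.append(g.flatten(1))
    cos = F.cosine_similarity(grads[0], grads[1], dim=1)
    return (cos ** 2).mean()

models = [nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2)) for _ in range(2)]
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
total_loss = sum(F.cross_entropy(m(x), y) for m in models) + gradient_alignment_penalty(models, x, y)
total_loss.backward()
```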
Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the lottery ticket hypothesis: dense, randomly-initialized, feed-forward networks contain subnetworks (winning tickets) that, when trained in isolation, reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.
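A minimal sketch of the procedure used to find winning tickets, iterative magnitude pruning with rewinding to the original initialization (the training step is a placeholder):

```python
import copy
import torch
import torch.nn as nn

def iterative_magnitude_pruning(model, train_fn, rounds=5, prune_frac=0.2):
    """Find a sparse 'winning ticket': repeatedly train, prune the smallest
    surviving weights, and rewind the rest to their original initialization."""
    init_state = copy.deepcopy(model.state_dict())
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}
    for _ in range(rounds):
        train_fn(model, masks)                               # placeholder training step
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    alive = p[masks[name].bool()].abs()
                    cutoff = torch.quantile(alive, prune_frac)
                    masks[name] *= (p.abs() >= cutoff).float()
        model.load_state_dict(init_state)                    # rewind survivors to init
    return masks

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
masks = iterative_magnitude_pruning(model, train_fn=lambda m, masks: None)
```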
Shift invariance is a critical property of CNNs that improves performance on classification. However, we show that invariance to circular shifts can also lead to greater sensitivity to adversarial attacks. We first characterize the margin between classes when a shift-invariant linear classifier is used. We show that the margin can only depend on the DC component of the signals. Then, using results about infinitely wide networks, we show that in some simple cases, fully connected and shift-invariant neural networks produce linear decision boundaries. Using this, we prove that shift invariance in neural networks produces adversarial examples for the simple case of two classes, each consisting of a single image with a black or white dot on a gray background. This is more than a curiosity; we show empirically that with real datasets and realistic architectures, shift invariance reduces adversarial robustness. Finally, we describe initial experiments using synthetic data to probe the source of this connection.
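A short sketch of the margin claim above (the derivation is our own illustrative reconstruction, not quoted from the paper): a linear classifier that is invariant to all circular shifts must have a constant weight vector, so it can only see an input's mean.

```latex
\langle w, S_k x\rangle = \langle w, x\rangle \ \ \forall x, k
\;\Longrightarrow\; S_k^{\top} w = w \ \ \forall k
\;\Longrightarrow\; w = c\,\mathbf{1},
\qquad\text{hence}\qquad
f(x) = \langle w, x\rangle = c \sum_i x_i \;\propto\; \bar{x}.
```

Both the decision value and the resulting class margin therefore depend on the signals only through their DC (mean) components.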