Although overparameterized models have shown success on many machine learning tasks, their accuracy can drop when the test distribution differs from the training distribution. This accuracy drop still limits applying machine learning in the wild. At the same time, importance weighting, a traditional technique for handling distribution shift, has been shown both empirically and theoretically to have little or no effect on overparameterized models. In this paper, we propose importance tempering to improve the decision boundary and achieve better results for overparameterized models. Theoretically, we justify that the choice of group temperature can differ under the label shift and spurious correlation settings. At the same time, we also prove that a properly selected temperature can alleviate minority collapse in imbalanced classification. Empirically, we use importance tempering to achieve state-of-the-art results on worst-group classification tasks.
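A minimal sketch of one way such group temperatures could enter a weighted training objective; the tempering form w^(1/T) and the names below are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def importance_tempered_loss(logits, labels, group_ids, group_weights, group_temps):
    """Cross-entropy with group importance weights raised to a per-group
    inverse temperature.  group_weights[g] is the classical importance weight
    of group g and group_temps[g] its temperature (1-D tensors indexed by
    group id).  The tempering form w ** (1 / T) is an illustrative assumption."""
    per_sample_ce = F.cross_entropy(logits, labels, reduction="none")
    w = group_weights[group_ids] ** (1.0 / group_temps[group_ids])
    return (w * per_sample_ce).sum() / w.sum()
```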
Importance weighting is a classic technique for handling distribution shifts. However, prior work has presented strong empirical and theoretical evidence demonstrating that importance weights have no effect on overparameterized neural networks. Is importance weighting truly incompatible with the training of overparameterized neural networks? Our paper answers in the negative. We show that importance weighting fails not because of overparameterization, but because of the use of exponentially-tailed losses such as the logistic or cross-entropy loss. As a remedy, we show that polynomially-tailed losses restore the effect of importance reweighting in correcting distribution shift in overparameterized models. We characterize the behavior of gradient descent on importance-weighted polynomially-tailed losses with overparameterized linear models, and theoretically demonstrate the advantage of using polynomially-tailed losses in the label shift setting. Surprisingly, our theory shows that using weights obtained by exponentiating the classical unbiased importance weights can improve performance. Finally, we demonstrate the practical value of our analysis with neural network experiments on a subpopulation shift dataset and a label shift dataset. When reweighted, our loss function can outperform reweighted cross-entropy by as much as 9% in test accuracy. Our loss function also gives test accuracies comparable to, or even exceeding, well-tuned state-of-the-art methods for correcting distribution shifts.
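As an illustration of a loss with a polynomial rather than exponential tail, here is a toy piecewise margin loss (logistic for non-positive margins, polynomial decay for positive margins, matched to be continuous at zero); this is a stand-in for the idea, not the exact loss proposed in the paper:

```python
import math
import torch
import torch.nn.functional as F

def poly_tailed_loss(margins, alpha=1.0):
    """Toy binary-classification loss with a polynomial right tail:
    log(1 + e^{-m}) for m <= 0, and log(2) * (1 + m)^(-alpha) for m > 0,
    so the two branches agree at m = 0.  An illustrative stand-in only."""
    logistic = F.softplus(-margins)                                   # log(1 + e^{-m})
    poly = math.log(2.0) * (1.0 + margins).clamp(min=1.0) ** (-alpha)
    return torch.where(margins <= 0, logistic, poly)
```

Per-sample importance weights, possibly exponentiated as the theory suggests, would then multiply these per-sample losses before averaging.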
The goal in label-imbalanced and group-sensitive classification is to optimize relevant metrics such as balanced error and equal opportunity. Classical methods, such as weighted cross-entropy, fail when training deep networks to the terminal phase of training (TPT), that is, training beyond zero training error. This observation motivated recent heuristic alternatives that follow the intuitive mechanism of promoting larger margins for minorities. In contrast to previous heuristics, we follow a principled analysis explaining how different loss adjustments affect the margins. First, we prove that for all linear classifiers trained in TPT, it is necessary to introduce multiplicative, rather than additive, logit adjustments in order for the relative margins to change appropriately. To show this, we discover a connection between the multiplicative CE modification and cost-sensitive support vector machines. Perhaps counterintuitively, we also find that, at the start of training, the same multiplicative weights can actually harm the minority classes. Thus, while additive adjustments are ineffective in TPT, we show that they can speed up convergence by counteracting the initial negative effect of the multiplicative weights. Motivated by these findings, we formulate the vector-scaling (VS) loss, which captures existing techniques as special cases. Moreover, we introduce a natural extension of the VS loss to group-sensitive classification, thus treating the two common types of imbalance (label/group) in a unified way. Importantly, our experiments on state-of-the-art datasets are fully consistent with our theoretical insights and confirm the superior performance of our algorithms. Finally, for imbalanced Gaussian-mixture data, we perform a generalization analysis, revealing tradeoffs between balanced/standard error and equal opportunity.
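The VS loss itself is not spelled out above; a minimal sketch of a cross-entropy with multiplicative and additive logit adjustments, using the common parameterization Delta_y = (n_y / n_max)^gamma and iota_y = tau * log(n_y / n) (these specific choices are assumptions here, not necessarily the paper's):

```python
import torch
import torch.nn.functional as F

def vs_loss(logits, labels, class_counts, gamma=0.2, tau=1.0):
    """Vector-scaling (VS) style loss: cross-entropy on multiplicatively and
    additively adjusted logits.  Delta_y = (n_y / n_max)^gamma and
    iota_y = tau * log(n_y / n) are one common parameterization, assumed
    here for illustration."""
    counts = class_counts.float()
    delta = (counts / counts.max()) ** gamma          # multiplicative adjustment
    iota = tau * torch.log(counts / counts.sum())     # additive adjustment
    adjusted = logits * delta + iota                  # broadcast over the batch
    return F.cross_entropy(adjusted, labels)
```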
Overparameterized models fail to generalize well in the presence of data imbalance, even when combined with traditional techniques for mitigating imbalance. This paper focuses on classification datasets in which a small fraction of the population (the minority) may contain features that are spuriously correlated with the class label. For a parametric family of cross-entropy loss modifications and a representative Gaussian mixture model, we derive non-asymptotic generalization bounds on the worst-group error that reveal the role of the different hyperparameters. Specifically, we prove that, when appropriately tuned, the recently proposed VS-loss learns a model that is fair towards the minority group even when the spurious features are strong. On the other hand, alternative heuristics such as weighted CE and the LA-loss can fail dramatically. Compared to previous works, our bounds hold for more general models, are non-asymptotic, and apply even in regimes of extreme imbalance.
Learned classifiers should often possess certain invariance properties meant to encourage fairness, robustness, or out-of-distribution generalization. However, multiple recent works empirically demonstrate that common invariance-inducing regularizers are ineffective in the over-parameterized regime, in which classifiers perfectly fit (i.e. interpolate) the training data. This suggests that the phenomenon of ``benign overfitting," in which models generalize well despite interpolating, might not favorably extend to settings in which robustness or fairness are desirable. In this work we provide a theoretical justification for these observations. We prove that -- even in the simplest of settings -- any interpolating learning rule (with arbitrarily small margin) will not satisfy these invariance properties. We then propose and analyze an algorithm that -- in the same setting -- successfully learns a non-interpolating classifier that is provably invariant. We validate our theoretical observations on simulated data and the Waterbirds dataset.
Deep learning algorithms can fare poorly when the training dataset suffers from heavy class-imbalance but the testing criterion requires good generalization on less frequent classes. We design two novel methods to improve performance in such scenarios. First, we propose a theoretically-principled label-distribution-aware margin (LDAM) loss motivated by minimizing a margin-based generalization bound. This loss replaces the standard cross-entropy objective during training and can be applied with prior strategies for training with class-imbalance such as re-weighting or re-sampling. Second, we propose a simple, yet effective, training schedule that defers re-weighting until after the initial stage, allowing the model to learn an initial representation while avoiding some of the complications associated with re-weighting or re-sampling. We test our methods on several benchmark vision tasks including the real-world imbalanced dataset iNaturalist 2018. Our experiments show that either of these methods alone can already improve over existing techniques and their combination achieves even better performance gains.
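A minimal sketch of an LDAM-style loss with per-class margins proportional to n_j^(-1/4), plus a note on the deferred re-weighting schedule; the hyperparameters and the exact margin normalization are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def ldam_loss(logits, labels, class_counts, max_margin=0.5, scale=30.0):
    """Label-distribution-aware margin (LDAM) style loss: enforce a larger
    margin Delta_j proportional to n_j^(-1/4) for rarer classes by subtracting
    Delta_y from the true-class logit before cross-entropy.  max_margin and
    scale are illustrative hyperparameters."""
    margins = 1.0 / class_counts.float() ** 0.25
    margins = margins * (max_margin / margins.max())        # normalize the largest margin
    one_hot = F.one_hot(labels, num_classes=logits.size(1)).float()
    adjusted = logits - one_hot * margins[labels].unsqueeze(1)
    return F.cross_entropy(scale * adjusted, labels)

# Deferred re-weighting (DRW) idea: train with uniform weights for an initial
# stage, then switch to per-class weights (e.g. passed via the `weight`
# argument of F.cross_entropy) for the remaining epochs.
```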
Neural collapse refers to a remarkable structural property characterizing the geometry of class embeddings and classifier weights, found in deep networks trained beyond zero training error. However, this characterization only holds for balanced data. We therefore ask here whether it can be made invariant to class imbalance. Towards this end, we adopt the unconstrained feature model (UFM), a recent theoretical model for studying neural collapse, and introduce simplex-encoded-labels interpolation (SELI) as an invariant characterization of the neural collapse phenomenon. Specifically, we prove that for the UFM with cross-entropy loss and vanishing regularization, irrespective of class imbalance, the embeddings and classifiers always interpolate the simplex-encoded label matrix, and their individual geometries are determined by the SVD factors of the same label matrix. We then conduct extensive experiments on synthetic and real datasets that confirm convergence to the SELI geometry. However, we caution that convergence worsens as imbalance increases. We support this theoretically by showing that, unlike the balanced case, when minorities are present, ridge regularization plays a critical role in shaping the geometry. This defines new questions and motivates further investigation into the impact of class imbalance on the rates at which first-order methods converge to their asymptotically preferred solutions.
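As a concrete illustration of what the SELI characterization refers to, a small sketch that builds a simplex-encoded label matrix and takes its SVD; the centred one-hot encoding used here is an assumption about the intended construction:

```python
import numpy as np

def simplex_encoded_labels(labels, num_classes):
    """Simplex-encoded label matrix Z of shape (k, n): column i is the one-hot
    vector of y_i shifted by -1/k, so every column sums to zero.  This centred
    one-hot encoding is assumed here for illustration."""
    n = len(labels)
    Z = -np.ones((num_classes, n)) / num_classes
    Z[labels, np.arange(n)] += 1.0
    return Z

# The SELI statement (under the UFM with vanishing regularization) is that the
# logit matrix W @ H interpolates Z, while the individual geometries of W and H
# are read off from the SVD factors of Z:
Z = simplex_encoded_labels(np.array([0, 0, 0, 1, 2]), num_classes=3)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
```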
The generalization mystery of overparameterized deep networks has motivated efforts to understand how gradient descent (GD) converges to low-loss solutions that generalize well. Real-life neural networks are initialized from small random values and trained with cross-entropy loss for classification (unlike the "lazy" or "NTK" regime of training, where analysis has been more successful), and a recent sequence of results (Lyu and Li, 2020; Chizat and Bach, 2020; Ji and Telgarsky, 2020) provides theoretical evidence that GD may converge to the "max-margin" solution with zero loss, which presumably generalizes well. However, the global optimality of the margin has been proved only in some settings where the neural nets are infinitely or exponentially wide. The current paper is able to establish this global optimality for two-layer leaky ReLU nets trained with gradient flow, regardless of the width. The analysis also gives some theoretical justification for recent empirical findings (Kalimeris et al., 2019) on GD's so-called simplicity bias towards linear or other "simple" classes of solutions, especially early in training. On the pessimistic side, the paper suggests that such results are fragile: a simple data manipulation can make gradient flow converge to a linear classifier with suboptimal margin.
In the negative perceptron problem we are given $n$ data points $({\boldsymbol x}_i, y_i)$, where ${\boldsymbol x}_i$ is a $d$-dimensional vector and $y_i \in \{+1, -1\}$ is a binary label. The data are not linearly separable, and hence we content ourselves with finding a linear classifier with the largest possible \emph{negative} margin. In other words, we want to find a unit-norm vector ${\boldsymbol \theta}$ that maximizes $\min_{i \le n} y_i \langle {\boldsymbol \theta}, {\boldsymbol x}_i \rangle$. This is a non-convex optimization problem (it is equivalent to finding a maximum-norm vector in a polytope), and we study its typical properties under two random models for the data. We consider the proportional asymptotics in which $n, d \to \infty$ with $n/d \to \delta$, and prove upper and lower bounds on the maximum margin $\kappa_{\text{s}}(\delta)$ or, equivalently, on its inverse function $\delta_{\text{s}}(\kappa)$. In other words, $\delta_{\text{s}}(\kappa)$ is the overparametrization threshold: for $n/d \le \delta_{\text{s}}(\kappa) - \varepsilon$ a classifier achieving vanishing training error exists with high probability, while for $n/d \ge \delta_{\text{s}}(\kappa) + \varepsilon$ it does not. Our bounds on $\delta_{\text{s}}(\kappa)$ match as $\kappa \to -\infty$. We then analyze a linear programming algorithm to find a solution, and characterize the corresponding threshold $\delta_{\text{lin}}(\kappa)$. We observe a gap between the interpolation threshold $\delta_{\text{s}}(\kappa)$ and the linear programming threshold $\delta_{\text{lin}}(\kappa)$, raising the question of the behavior of other algorithms.
Successful deep learning models often involve training neural network architectures that contain more parameters than the number of training samples. Such overparameterized models have been studied extensively in recent years, and the virtues of overparameterization have been established both from a statistical and a computational perspective, through the double-descent phenomenon and through structural properties of the optimization landscape. Despite the remarkable success of deep learning architectures in the overparameterized regime, it is also well known that these models are highly vulnerable to small adversarial perturbations of their inputs. Even when adversarially trained, their performance on perturbed inputs (robust generalization) is considerably worse than the best attainable performance on benign inputs (standard generalization). It is therefore imperative to understand how overparameterization fundamentally affects robustness. In this paper, we provide a precise characterization of the role of overparameterization on robustness by focusing on random features regression models (two-layer neural networks with random first-layer weights). We consider a regime where the sample size, the input dimension, and the number of parameters grow in proportion to each other, and derive an asymptotically exact formula for the robust generalization error when the model is adversarially trained. Our developed theory reveals the nontrivial effect of overparameterization on robustness, showing that for adversarially trained random features models, high overparameterization can hurt robust generalization.
The standard empirical risk minimization (ERM) can underperform on certain minority groups (i.e., waterbirds in lands or landbirds in water) due to the spurious correlation between the input and its label. Several studies have improved the worst-group accuracy by focusing on the high-loss samples. The hypothesis behind this is that such high-loss samples are \textit{spurious-cue-free} (SCF) samples. However, these approaches can be problematic since the high-loss samples may also be samples with noisy labels in the real-world scenarios. To resolve this issue, we utilize the predictive uncertainty of a model to improve the worst-group accuracy under noisy labels. To motivate this, we theoretically show that the high-uncertainty samples are the SCF samples in the binary classification problem. This theoretical result implies that the predictive uncertainty is an adequate indicator to identify SCF samples in a noisy label setting. Motivated from this, we propose a novel ENtropy based Debiasing (END) framework that prevents models from learning the spurious cues while being robust to the noisy labels. In the END framework, we first train the \textit{identification model} to obtain the SCF samples from a training set using its predictive uncertainty. Then, another model is trained on the dataset augmented with an oversampled SCF set. The experimental results show that our END framework outperforms other strong baselines on several real-world benchmarks that consider both the noisy labels and the spurious-cues.
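A minimal sketch of the uncertainty-based identification step, assuming a data loader that also yields sample indices; the entropy quantile cut-off is an illustrative choice, not the paper's setting:

```python
import torch
import torch.nn.functional as F

def select_scf_indices(id_model, loader, quantile=0.8):
    """Score each training point by the identification model's predictive
    entropy and return the indices of the most uncertain fraction, treated as
    spurious-cue-free (SCF) candidates to be oversampled in the second stage.
    The loader is assumed to yield (inputs, labels, indices)."""
    id_model.eval()
    entropies, indices = [], []
    with torch.no_grad():
        for x, _, idx in loader:
            probs = F.softmax(id_model(x), dim=1)
            ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
            entropies.append(ent)
            indices.append(idx)
    entropies, indices = torch.cat(entropies), torch.cat(indices)
    cutoff = torch.quantile(entropies, quantile)
    return indices[entropies >= cutoff]
```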
We introduce a tunable loss function called $\alpha$-loss, parameterized by $\alpha \in (0,\infty]$, which interpolates between the exponential loss ($\alpha = 1/2$), the log-loss ($\alpha = 1$), and the 0-1 loss ($\alpha = \infty$), for the machine learning setting of classification. Theoretically, we illustrate a fundamental connection between $\alpha$-loss and Arimoto conditional entropy, verify the classification-calibration of $\alpha$-loss in order to demonstrate asymptotic optimality via Rademacher complexity generalization techniques, and build-upon a notion called strictly local quasi-convexity in order to quantitatively characterize the optimization landscape of $\alpha$-loss. Practically, we perform class imbalance, robustness, and classification experiments on benchmark image datasets using convolutional-neural-networks. Our main practical conclusion is that certain tasks may benefit from tuning $\alpha$-loss away from log-loss ($\alpha = 1$), and to this end we provide simple heuristics for the practitioner. In particular, navigating the $\alpha$ hyperparameter can readily provide superior model robustness to label flips ($\alpha > 1$) and sensitivity to imbalanced classes ($\alpha < 1$).
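For concreteness, a sketch of the $\alpha$-loss evaluated on the softmax probability of the true class, following the interpolation described above (the numerical handling of the $\alpha = 1$ limit is ours):

```python
import torch

def alpha_loss(logits, labels, alpha=1.0):
    """alpha-loss on the softmax probability p of the true class:
    (alpha / (alpha - 1)) * (1 - p^{(alpha - 1) / alpha}) for alpha != 1,
    recovering the log-loss -log(p) as alpha -> 1 and tending to 1 - p (a
    soft surrogate of the 0-1 loss) as alpha -> infinity; alpha = 1/2 gives
    (1 - p) / p, the exponential loss."""
    p = torch.softmax(logits, dim=1).gather(1, labels.unsqueeze(1)).squeeze(1)
    if alpha == 1.0:
        per_sample = -torch.log(p)
    else:
        a = float(alpha)
        per_sample = (a / (a - 1.0)) * (1.0 - p ** ((a - 1.0) / a))
    return per_sample.mean()
```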
We examine gradient descent on unregularized logistic regression problems, with homogeneous linear predictors on linearly separable datasets. We show that the predictor converges in direction to the max-margin (hard margin SVM) solution. The result also generalizes to other monotone decreasing loss functions with an infimum at infinity, to multi-class problems, and to training a weight layer in a deep network in a certain restricted setting. Furthermore, we show this convergence is very slow, and only logarithmic in the convergence of the loss itself. This can help explain the benefit of continuing to optimize the logistic or cross-entropy loss even after the training error is zero and the training loss is extremely small and, as we show, even if the validation loss increases. Our methodology can also aid in understanding implicit regularization in more complex models and with other optimization methods.
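A toy simulation in the spirit of this result, comparing the gradient-descent direction on the logistic loss with an (approximately) hard-margin SVM direction; the data, step size, and iteration count are arbitrary illustrative choices:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 0.5, (50, 2)), rng.normal(-2.0, 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

w = np.zeros(2)
for _ in range(50_000):                        # plain gradient descent on logistic loss
    margins = y * (X @ w)
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= 0.1 * grad

svm = SVC(kernel="linear", C=1e6).fit(X, y)    # large C approximates hard-margin SVM
w_svm = svm.coef_.ravel()
cos = w @ w_svm / (np.linalg.norm(w) * np.linalg.norm(w_svm))
print(f"cosine similarity with SVM direction: {cos:.4f}")
# approaches 1, but only logarithmically fast in the number of iterations
```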
Modern deep neural networks have achieved superhuman performance in tasks from image classification to game play. Surprisingly, these various complex systems with massive amounts of parameters exhibit the same remarkable structural properties in their last-layer features and classifiers across canonical datasets. This phenomenon is known as "Neural Collapse," and it was discovered empirically by Papyan et al. \cite{Papyan20}. Recent papers have theoretically shown that the global solutions to the network training problem under a simplified "unconstrained feature model" exhibit this phenomenon. We take a step further and prove the occurrence of Neural Collapse for deep linear networks under the popular mean squared error (MSE) and cross-entropy (CE) losses. Furthermore, we extend our research to imbalanced data for the MSE loss and present the first geometric analysis of Neural Collapse under this setting.
Data augmentation is a cornerstone of the machine learning pipeline, yet its theoretical underpinnings remain unclear. Is it merely a way to artificially enlarge the dataset? Or does it encourage the model to satisfy certain invariances? In this work we consider another angle and study the effect of data augmentation on the dynamics of the learning process. We find that data augmentation can alter the relative importance of various features, effectively making certain informative but hard-to-learn features more likely to be captured during the learning process. Importantly, we show that this effect is more pronounced for non-linear models, such as neural networks. Our main contribution is a detailed analysis of the learning dynamics of a two-layer convolutional neural network on the multi-view data model recently proposed by Allen-Zhu and Li [2020]. We complement this analysis with further experimental evidence that data augmentation can be viewed as feature manipulation.
The fundamental learning theory behind neural networks remains largely open. What classes of functions can neural networks actually learn? Why doesn't the trained network overfit when it is overparameterized? In this work, we prove that overparameterized neural networks can learn some notable concept classes, including two and three-layer networks with fewer parameters and smooth activations. Moreover, the learning can be simply done by SGD (stochastic gradient descent) or its variants in polynomial time using polynomially many samples. The sample complexity can also be almost independent of the number of parameters in the network. On the technique side, our analysis goes beyond the so-called NTK (neural tangent kernel) linearization of neural networks in prior works. We establish a new notion of quadratic approximation of the neural network (that can be viewed as a second-order variant of NTK), and connect it to the SGD theory of escaping saddle points.
The benefits of overparameterization for the overall performance of modern machine learning (ML) models are well known. However, the effect of overparameterization at the more granular level of data subgroups is less well understood. Recent empirical studies offer encouraging results: (i) when groups are unknown, overparameterized models trained with empirical risk minimization (ERM) perform better on minority groups; (ii) when groups are known, subsampling the data to equalize group sizes yields state-of-the-art worst-group accuracy in the overparameterized regime. In this paper, we complement these empirical studies with a theoretical investigation of the risk of overparameterized feature models on minority groups. In a setting in which the regression functions of the majority and minority groups differ, we show that overparameterization always improves performance on the minority group.
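The subsampling baseline mentioned in (ii) can be sketched in a few lines (group labels are assumed to be available as an integer array):

```python
import numpy as np

def group_balanced_subsample(group_ids, rng=None):
    """Return indices of a subsample in which every group appears exactly as
    often as the smallest group, i.e. majority groups are subsampled so that
    group sizes are equalized."""
    rng = rng or np.random.default_rng(0)
    groups, counts = np.unique(group_ids, return_counts=True)
    n_min = counts.min()
    keep = [rng.choice(np.where(group_ids == g)[0], size=n_min, replace=False)
            for g in groups]
    return np.concatenate(keep)
```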
We identify and formalize a fundamental gradient descent phenomenon that leads to a learning proclivity in overparameterized neural networks. Gradient starvation arises when the cross-entropy loss is minimized by capturing only a subset of the features relevant for the task, despite the presence of other predictive features that fail to be discovered. This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks. Using tools from dynamical systems theory, we identify simple properties of the learning dynamics during gradient descent that lead to this imbalance, and prove that such a situation can be expected given certain statistical structure in the training data. Based on our proposed formalism, we derive guarantees for a novel regularization method aimed at decoupling feature learning dynamics, improving accuracy and robustness in cases hindered by gradient starvation. We illustrate our findings with simple and real-world out-of-distribution (OOD) generalization experiments.
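The regularization method is only described abstractly above; one plausible instantiation, assumed here purely for illustration, penalizes the squared norm of the logits so that dominant features do not monopolize the gradient signal:

```python
import torch
import torch.nn.functional as F

def decoupled_ce_loss(logits, labels, penalty=0.01):
    """Cross-entropy plus an L2 penalty on the logits.  Damping the logit norm
    is one way to keep weaker predictive features receiving learning signal;
    whether this matches the paper's regularizer exactly is an assumption."""
    return F.cross_entropy(logits, labels) + penalty * (logits ** 2).sum(dim=1).mean()
```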
While neural networks have shown remarkable success on classification tasks in terms of average-case performance, they often fail to perform well on certain groups of the data. Such group information can be expensive to obtain; thus, recent works in robustness and fairness have proposed ways to improve worst-group performance even when group labels are unavailable for the training data. However, these methods generally underperform methods that make use of group information at training time. In this work, we assume access to a small number of group labels alongside a larger dataset without group labels. We propose a simple two-step framework that leverages this partial group information to improve worst-group performance: train a model to predict the missing group labels for the training data, and then use these predicted group labels in a robust optimization objective. Theoretically, we provide generalization bounds for our approach in terms of worst-group performance, showing how the generalization error scales with respect to both the total number of training points and the number of training points with group labels. Empirically, our method outperforms baselines that do not use group information, even when only 1-33% of the points have group labels. We provide ablation studies to support the robustness and extensibility of our framework.
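A schematic of the two-step framework (all names are placeholders, and the worst-group objective below is one possible choice of robust optimization objective, not necessarily the one used in the paper):

```python
import torch
import torch.nn.functional as F

def predict_missing_groups(group_model, x_unlabeled):
    """Step 1: a model trained on the small group-labelled subset fills in
    group labels for the rest of the training data."""
    with torch.no_grad():
        return group_model(x_unlabeled).argmax(dim=1)

def worst_group_loss(logits, labels, group_ids, num_groups):
    """Step 2: optimize a robust objective over the (partly predicted) groups;
    here simply the largest per-group average loss in the batch."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    group_losses = [per_sample[group_ids == g].mean()
                    for g in range(num_groups) if (group_ids == g).any()]
    return torch.stack(group_losses).max()
```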
Overparameterized neural networks can be highly accurate on average on an i.i.d. test set yet consistently fail on atypical groups of the data (e.g., by learning spurious correlations that hold on average but not in such groups). Distributionally robust optimization (DRO) allows us to learn models that instead minimize the worst-case training loss over a set of pre-defined groups. However, we find that naively applying group DRO to overparameterized neural networks fails: these models can perfectly fit the training data, and any model with vanishing average training loss also already has vanishing worst-case training loss. Instead, the poor worst-case performance arises from poor generalization on some groups. By coupling group DRO models with increased regularization (a stronger-than-typical ℓ2 penalty or early stopping), we achieve substantially higher worst-group accuracies, with 10-40 percentage point improvements on a natural language inference task and two image tasks, while maintaining high average accuracies. Our results suggest that regularization is important for worst-group generalization in the overparameterized regime, even if it is not needed for average generalization. Finally, we introduce a stochastic optimization algorithm, with convergence guarantees, to efficiently train group DRO models.
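A minimal sketch of a stochastic group DRO objective in the spirit of the algorithm mentioned at the end: exponentiated-gradient updates on the group weights followed by a weighted loss (the step size and exact bookkeeping are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

class GroupDRO:
    """Minimal online group DRO sketch: exponentiated-gradient ascent on the
    group weights q, then a q-weighted loss for the model update.  In practice
    this is combined with the strong L2 regularization noted above."""
    def __init__(self, num_groups, eta_q=0.01):
        self.q = torch.ones(num_groups) / num_groups
        self.eta_q = eta_q

    def loss(self, logits, labels, group_ids):
        per_sample = F.cross_entropy(logits, labels, reduction="none")
        group_losses = []
        for g in range(len(self.q)):
            mask = group_ids == g
            group_losses.append(per_sample[mask].mean() if mask.any()
                                else per_sample.new_zeros(()))
        group_losses = torch.stack(group_losses)
        # multiplicative (exponentiated-gradient) update of the group weights
        self.q = self.q * torch.exp(self.eta_q * group_losses.detach())
        self.q = self.q / self.q.sum()
        return (self.q * group_losses).sum()
```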