Overparameterization has been shown to result in poor test accuracy on rare subgroups under a variety of settings where subgroup information is known. To gain a more complete picture, we consider the case where subgroup information is unknown. We investigate the effect of model size on worst-group generalization under empirical risk minimization (ERM) across a wide range of settings, varying: 1) architectures (ResNet, VGG, or BERT), 2) domains (vision or natural language processing), 3) model size (width or depth), and 4) initialization (with pre-trained or random weights). Our systematic evaluation reveals that increasing model size does not hurt, and may help, worst-group test performance under ERM across all setups. In particular, increasing pre-trained model size consistently improves performance on Waterbirds and MultiNLI. We advise practitioners to use larger pre-trained models when subgroup labels are unknown.
Standard training via empirical risk minimization (ERM) can produce models that achieve high accuracy on average but low accuracy on certain groups, especially in the presence of spurious correlations between the input and label. Prior approaches that achieve high worst-group accuracy, like group distributionally robust optimization (group DRO), require expensive group annotations for each training point, whereas approaches that do not use such group annotations typically achieve unsatisfactory worst-group accuracy. In this paper, we propose a simple two-stage approach, JTT, that first trains a standard ERM model for several epochs, and then trains a second model that upweights the training examples that the first model misclassified. Intuitively, this upweights examples from groups on which standard ERM models perform poorly, leading to improved worst-group performance. Averaged over four image classification and natural language processing tasks with spurious correlations, JTT closes 75% of the gap in worst-group accuracy between standard ERM and group DRO, while only requiring group annotations on a small validation set in order to tune hyperparameters.
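The two-stage recipe is simple enough to sketch directly. Below is a minimal PyTorch rendering of the idea, assuming a `model_fn()` factory, a `train_set` of (x, y) pairs, and our own hyperparameter names; the upweighting factor would be tuned on the small group-annotated validation set the abstract mentions.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def jtt(model_fn, train_set, id_epochs=1, lambda_up=20.0, epochs=10):
    # Stage 1: train a standard ERM model for a few epochs.
    model = model_fn()
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(id_epochs):
        for x, y in DataLoader(train_set, batch_size=128, shuffle=True):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    # Identify the error set: examples the ERM model misclassifies.
    model.eval()
    weights = []
    with torch.no_grad():
        for x, y in DataLoader(train_set, batch_size=256):
            wrong = model(x).argmax(dim=1) != y
            weights += [lambda_up if w else 1.0 for w in wrong]

    # Stage 2: retrain from scratch, upweighting the error set by resampling.
    sampler = WeightedRandomSampler(weights, num_samples=len(weights))
    final = model_fn()
    opt = torch.optim.SGD(final.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in DataLoader(train_set, batch_size=128, sampler=sampler):
            opt.zero_grad()
            loss_fn(final(x), y).backward()
            opt.step()
    return final
```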
Overparameterized neural networks can be highly accurate on average on an i.i.d. test set yet consistently fail on atypical groups of the data (e.g., by learning spurious correlations that hold on average but not in such groups). Distributionally robust optimization (DRO) allows us to learn models that instead minimize the worst-case training loss over a set of pre-defined groups. However, we find that naively applying group DRO to overparameterized neural networks fails: these models can perfectly fit the training data, and any model with vanishing average training loss also already has vanishing worst-case training loss. Instead, the poor worst-case performance arises from poor generalization on some groups. By coupling group DRO models with increased regularization (a stronger-than-typical $\ell_2$ penalty or early stopping), we achieve substantially higher worst-group accuracies, with 10-40 percentage point improvements on a natural language inference task and two image tasks, while maintaining high average accuracies. Our results suggest that regularization is important for worst-group generalization in the overparameterized regime, even if it is not needed for average generalization. Finally, we introduce a stochastic optimization algorithm, with convergence guarantees, to efficiently train group DRO models.
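A minimal sketch of the stochastic group DRO update described in the last sentence, under our own naming: the adversary's group distribution q is updated by exponentiated gradients on per-group losses, and the model descends on the q-weighted loss. The stronger-than-typical $\ell_2$ penalty maps naturally onto weight decay, and early stopping on worst-group validation accuracy is left to the caller.

```python
import torch
import torch.nn.functional as F

def group_dro_step(model, opt, q, x, y, g, eta_q=0.01):
    # One stochastic update; q is the current distribution over groups,
    # g holds each example's group index in {0, ..., q.numel() - 1}.
    losses = F.cross_entropy(model(x), y, reduction="none")
    group_loss = torch.stack([
        losses[g == k].mean() if (g == k).any() else losses.new_zeros(())
        for k in range(q.numel())
    ])
    # Adversary: exponentiated-gradient ascent on the group weights.
    q = q * torch.exp(eta_q * group_loss.detach())
    q = q / q.sum()
    # Model: descent on the q-weighted loss.
    opt.zero_grad()
    (q * group_loss).sum().backward()
    opt.step()
    return q

# Usage: q = torch.ones(G) / G
#        opt = torch.optim.SGD(model.parameters(), lr=1e-3,
#                              weight_decay=1.0)  # stronger-than-typical l2
```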
Empirical studies suggest that machine learning models trained with empirical risk minimization (ERM) often rely on attributes that may be spuriously correlated with the class labels. Such models typically lead to poor performance during inference for data lacking such correlations. In this work, we explicitly consider a situation where potential spurious correlations are present in the majority of training data. In contrast with existing approaches, which use the ERM model outputs to detect the samples without spurious correlations and then either heuristically upweight or upsample those samples, we propose the logit correction (LC) loss, a simple yet effective improvement on the softmax cross-entropy loss, to correct the sample logit. We demonstrate that minimizing the LC loss is equivalent to maximizing the group-balanced accuracy, so the proposed LC could mitigate the negative impacts of spurious correlations. Our extensive experimental results further reveal that the proposed LC loss outperforms the SoTA solutions on multiple popular benchmarks by a large margin, an average 5.5% absolute improvement, without access to spurious attribute labels. LC is also competitive with oracle methods that make use of the attribute labels. Code is available at https://github.com/shengliu66/LC.
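The abstract does not spell out the exact form of the LC loss, so the sketch below shows only the generic logit-adjustment construction such corrections build on: shifting logits by log-priors before the softmax cross-entropy, so that the minimizer targets balanced rather than average accuracy. The `priors` vector is a stand-in for whatever group or class frequency estimate the method uses in practice.

```python
import torch
import torch.nn.functional as F

def logit_corrected_loss(logits, y, priors, tau=1.0):
    # Shift logits toward over-represented classes so the model must beat
    # the prior, not merely reproduce it.
    adjusted = logits + tau * torch.log(priors).unsqueeze(0)
    return F.cross_entropy(adjusted, y)

logits = torch.randn(8, 3)
y = torch.randint(0, 3, (8,))
priors = torch.tensor([0.7, 0.2, 0.1])  # imbalanced class/group frequencies
print(logit_corrected_loss(logits, y, priors))
```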
Models trained via empirical risk minimization (ERM) are known to rely on spurious correlations between labels and task-independent input features, resulting in poor generalization to distributional shifts. Group distributionally robust optimization (G-DRO) can alleviate this problem by minimizing the worst-case loss over a set of pre-defined groups over training data. G-DRO successfully improves performance of the worst-group, where the correlation does not hold. However, G-DRO assumes that the spurious correlations and associated worst groups are known in advance, making it challenging to apply it to new tasks with potentially multiple unknown spurious correlations. We propose AGRO -- Adversarial Group discovery for Distributionally Robust Optimization -- an end-to-end approach that jointly identifies error-prone groups and improves accuracy on them. AGRO equips G-DRO with an adversarial slicing model to find a group assignment for training examples which maximizes worst-case loss over the discovered groups. On the WILDS benchmark, AGRO results in 8% higher model performance on average on known worst-groups, compared to prior group discovery approaches used with G-DRO. AGRO also improves out-of-distribution performance on SST2, QQP, and MS-COCO -- datasets where potential spurious correlations are as yet uncharacterized. Human evaluation of AGRO groups shows that they contain well-defined, yet previously unstudied spurious correlations that lead to model errors.
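A schematic of the min-max structure described above, with our own module names: a slicing model produces soft group assignments, the classifier minimizes the worst soft-group loss, and the slicer takes an ascent step on the same objective. This illustrates the joint objective only, not AGRO's full training procedure.

```python
import torch
import torch.nn.functional as F

def soft_worst_group_loss(model, slicer, x, y, eps=1e-8):
    losses = F.cross_entropy(model(x), y, reduction="none")   # (batch,)
    w = slicer(x)                                             # (batch, K) softmax
    group_loss = (w * losses.unsqueeze(1)).sum(0) / (w.sum(0) + eps)
    return group_loss.max()

def agro_style_step(model, slicer, opt_m, opt_s, x, y):
    # Classifier: minimize the worst discovered group's loss.
    opt_m.zero_grad()
    soft_worst_group_loss(model, slicer, x, y).backward()
    opt_m.step()
    # Slicer: maximize it (gradient ascent via the negated loss).
    opt_s.zero_grad()
    (-soft_worst_group_loss(model, slicer, x, y)).backward()
    opt_s.step()
```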
Classifiers become biased when trained on biased datasets. As a remedy, we propose Learning to Split (LS), an algorithm for automatic bias detection. Given a dataset with input-label pairs, LS learns to split this dataset so that predictors trained on the training split cannot generalize to the testing split. This performance gap suggests that the testing split is under-represented in the dataset, which is a signal of potential bias. Identifying non-generalizable splits is challenging because we have no annotations about the bias. In this work, we show that the prediction correctness of each example in the testing split can be used as a source of weak supervision: generalization performance will drop if we move examples that are predicted correctly away from the testing split, leaving only those that are mispredicted. LS is task-agnostic and can be applied to any supervised learning problem, from natural language understanding and image classification to molecular property prediction. Empirical results show that LS is able to generate astonishingly challenging splits that correlate with human-identified biases. Moreover, we demonstrate that combining robust learning algorithms (such as group DRO) with the splits identified by LS enables automatic debiasing. Compared with the previous state of the art, we substantially improve worst-group performance (by 23.4% on average) when the source of bias is unknown during training and validation.
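The weak-supervision signal lends itself to a toy illustration. The greedy scikit-learn loop below is our simplification, not the learned splitter of LS: it repeatedly removes correctly predicted examples from the testing split, so the split concentrates on mispredicted, potentially under-represented data, while the generalization gap is tracked.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def greedy_split(X, y, rounds=5, seed=0):
    rng = np.random.default_rng(seed)
    test = rng.random(len(X)) < 0.33           # random initial testing split
    for _ in range(rounds):
        if not test.any():
            break
        clf = LogisticRegression(max_iter=1000).fit(X[~test], y[~test])
        correct = clf.predict(X[test]) == y[test]
        gap = clf.score(X[~test], y[~test]) - correct.mean()
        print(f"generalization gap={gap:.3f}  test split size={test.sum()}")
        idx = np.flatnonzero(test)
        test[idx[correct]] = False             # keep only mispredicted examples
    return test
```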
The predictive performance of machine learning models trained with empirical risk minimization (ERM) can degrade considerably under distribution shift. The presence of spurious correlations in the training dataset causes ERM-trained models to display high loss when evaluated on minority groups for which such correlations do not hold. Extensive attempts have been made to develop methods that improve worst-group robustness. However, they require group information for each training input, or at least a validation set with group labels, to tune their hyperparameters, which may be costly to obtain or simply unknown. In this paper, we address the challenge of improving group robustness without group annotations during either training or validation. To this end, we propose to partition the training dataset into groups based on Gram matrices of features extracted by an "identification" model, and to apply robust optimization based on these pseudo-groups. In the realistic scenario where group labels are unavailable, our experiments show that our approach not only improves robustness over ERM but also outperforms all recent baselines.
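Under our reading of the abstract, the pseudo-grouping step could look like the following sketch: compute each example's channel Gram matrix from the identification model's convolutional features and cluster those matrices; the resulting cluster indices then play the role of group labels in a robust objective such as group DRO. All helper names are ours.

```python
import torch
from sklearn.cluster import KMeans

def gram_features(feats):
    # feats: (batch, C, H, W) activations from the identification model.
    b, c, h, w = feats.shape
    f = feats.reshape(b, c, h * w)
    gram = f @ f.transpose(1, 2) / (h * w)     # per-example (C, C) Gram matrix
    return gram.reshape(b, -1)

def pseudo_groups(feats, n_groups=4, seed=0):
    g = gram_features(feats).cpu().numpy()
    return KMeans(n_clusters=n_groups, random_state=seed, n_init=10).fit_predict(g)

feats = torch.randn(32, 8, 7, 7)               # stand-in activations
print(pseudo_groups(feats))                    # cluster ids usable as groups
```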
Machine learning classifiers are typically trained to minimize the average error across a dataset. Unfortunately, in practice this process often exploits spurious correlations caused by subgroup imbalance within the training data, resulting in high average performance but highly variable performance across subgroups. Recent work to address this problem proposes model patching with CAMEL. This prior approach uses generative adversarial networks to perform intra-class, inter-subgroup data augmentation, and requires (a) training a number of computationally expensive models and (b) sufficient quality of the models' synthetic outputs for the given domain. In this work, we propose RealPatch, a framework for simpler, faster, and more data-efficient data augmentation based on statistical matching. Our framework performs model patching by augmenting a dataset with real samples, mitigating the need to train generative models for the target task. We demonstrate the effectiveness of RealPatch on three benchmark datasets, CelebA, Waterbirds, and a subset of iWildCam, showing improvements in worst-case subgroup performance and in the subgroup performance gap in binary classification. Furthermore, we conduct experiments on the imSitu dataset with 211 classes, a setting where generative-model-based patching such as CAMEL is impractical. We show that RealPatch can successfully eliminate dataset leakage while reducing model leakage and maintaining high utility. The code for RealPatch can be found at https://github.com/wearepal/realpatch.
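The abstract does not detail the matching procedure, so the sketch below illustrates classical propensity-score matching, a standard statistical-matching technique in the same spirit: each sample is paired with a real sample from the opposite (binary) spurious-attribute subgroup whose propensity score is closest. Names and the choice of matching rule are ours.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(feats, attr):
    # Fit a propensity model for the binary spurious attribute, then pair each
    # attr==1 sample with the attr==0 sample closest in propensity score.
    ps = LogisticRegression(max_iter=1000).fit(feats, attr).predict_proba(feats)[:, 1]
    ones, zeros = np.flatnonzero(attr == 1), np.flatnonzero(attr == 0)
    pairs = [(i, zeros[np.argmin(np.abs(ps[zeros] - ps[i]))]) for i in ones]
    return pairs   # matched real pairs used to augment the dataset
```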
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation, which is effective for identifying feature-level algorithmic bias by exploiting conditional mutual information. Although several bias measurement methods have been proposed and widely studied for achieving algorithmic fairness in various tasks such as face recognition, their accuracy- or logit-based metrics are prone to inducing trivial prediction-score adjustments rather than fundamental bias reduction. Hence, we design a novel debiasing framework against algorithmic bias, which incorporates a bias regularization loss derived from the proposed information-theoretic bias measurement. In addition, we present a simple yet effective unsupervised debiasing technique based on stochastic label noise, which does not require explicit supervision of bias information. The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios through extensive experiments on multiple standard benchmarks.
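The unsupervised technique mentioned last is straightforward to sketch. Assuming the idea is simply to inject random label flips during training (the parameterization below is ours), it needs no bias supervision at all:

```python
import torch

def noisy_labels(y, num_classes, p=0.1):
    # With probability p, replace a label by a uniformly random class.
    flip = torch.rand(len(y)) < p
    rand = torch.randint(0, num_classes, (len(y),))
    return torch.where(flip, rand, y)

# Usage inside a training loop:
#   loss = F.cross_entropy(model(x), noisy_labels(y, num_classes))
```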
Machine learning algorithms typically assume that training and test examples are drawn from the same distribution. However, distribution shift is a common problem in real-world applications and can cause models to perform dramatically worse at test time. In this paper, we specifically consider the problems of domain shift and subpopulation shift (e.g., imbalanced data). While prior work often seeks to explicitly regularize the model's internal representations and predictors to be domain invariant, we instead aim to regularize the whole function without restricting the model's internal representations. This leads to a simple mixup-based technique that learns invariant functions via selective augmentation, named LISA. LISA selectively interpolates samples either with the same label but different domains, or with the same domain but different labels. We analyze a linear setting and theoretically show how LISA leads to a smaller worst-group error. Empirically, we study the effectiveness of LISA on nine benchmarks ranging from subpopulation shift to domain shift, and we find that LISA consistently outperforms other state-of-the-art methods.
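The two interpolation rules translate almost directly into code. Below is a minimal sketch with our own helper name, assuming one-hot labels so they can be interpolated; pairs that fail the selectivity filter are simply dropped from the mixed batch.

```python
import torch

def lisa_mix(x, y_onehot, domain, same_label=True, alpha=2.0):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(len(x))
    if same_label:
        # Intra-label LISA: same label, different domain.
        ok = (y_onehot.argmax(1) == y_onehot[perm].argmax(1)) & (domain != domain[perm])
    else:
        # Intra-domain LISA: same domain, different label.
        ok = (domain == domain[perm]) & (y_onehot.argmax(1) != y_onehot[perm].argmax(1))
    x_mix = lam * x[ok] + (1 - lam) * x[perm][ok]
    y_mix = lam * y_onehot[ok] + (1 - lam) * y_onehot[perm][ok]
    return x_mix, y_mix
```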
Spurious correlations in training data often lead to robustness issues since models learn to use them as shortcuts. For example, when predicting whether an object is a cow, a model might learn to rely on its green background, so it would do poorly on a cow on a sandy background. A standard dataset for benchmarking methods that mitigate this problem is Waterbirds. The best method (Group Distributionally Robust Optimization - GroupDRO) currently achieves 89\% worst group accuracy, while standard training from scratch on raw images only gets 72\%. GroupDRO requires training a model in an end-to-end manner with subgroup labels. In this paper, we show that we can achieve up to 90\% accuracy without using any sub-group information in the training set, by simply using embeddings from a large pre-trained vision model and training a linear classifier on top of them. With experiments on a wide range of pre-trained models and pre-training datasets, we show that the capacity of the pre-training model and the size of the pre-training dataset matter. Our experiments reveal that high-capacity vision transformers perform better than high-capacity convolutional neural networks, and a larger pre-training dataset leads to better worst-group accuracy on the spurious correlation dataset.
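The recipe is easy to reproduce in outline. The sketch below uses torchvision's ViT-B/16 purely as a stand-in backbone (the paper sweeps many pre-trained models): freeze the model, strip its head, extract embeddings, and fit a linear probe, with no sub-group labels anywhere.

```python
import torch
from torchvision.models import vit_b_16, ViT_B_16_Weights
from sklearn.linear_model import LogisticRegression

weights = ViT_B_16_Weights.DEFAULT
backbone = vit_b_16(weights=weights).eval()
backbone.heads = torch.nn.Identity()           # drop the classification head
preprocess = weights.transforms()

@torch.no_grad()
def embed(images):                             # images: (N, 3, H, W) in [0, 1]
    return backbone(preprocess(images)).cpu().numpy()

# probe = LogisticRegression(max_iter=1000).fit(embed(train_x), train_y)
# print(probe.score(embed(test_x), test_y))
```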
Although unbiased machine learning models are essential for many applications, bias is a human-defined concept that can vary across tasks. Given only input-label pairs, an algorithm may lack sufficient information to distinguish stable (causal) features from unstable (spurious) features. However, related tasks often share similar biases, an observation we can leverage to develop stable classifiers in the transfer setting. In this work, we explicitly inform the target classifier about unstable features in the source tasks. Specifically, we derive a representation that encodes the unstable features by contrasting different data environments in the source task. We achieve robustness on the target task by clustering its data according to this representation and minimizing the worst-case risk across these clusters. We evaluate on both text and image classification. Empirical results demonstrate that our algorithm is able to maintain robustness on target tasks under both synthetically generated environments and real-world environments. Our code is available at https://github.com/yujiabao/tofu.
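A high-level sketch of the transfer step as we read it, with hypothetical names: embed target-task data with the unstable-feature encoder learned from the source environments, cluster those embeddings into pseudo-environments, and minimize the worst-cluster risk (for instance with the group DRO step sketched earlier).

```python
from sklearn.cluster import KMeans

def unstable_clusters(unstable_encoder, feats, k=2, seed=0):
    # Cluster target data by its unstable-feature representation; the cluster
    # ids act as pseudo-environments for worst-case risk minimization.
    z = unstable_encoder(feats).detach().cpu().numpy()
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(z)
```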
While neural networks have shown remarkable success on classification tasks in terms of average-case performance, they often fail to perform well on certain groups of the data. Such group information may be expensive to obtain; thus, recent work in robustness and fairness has proposed methods to improve worst-group performance even when group labels are unavailable for the training data. However, these methods generally underperform approaches that use group information at training time. In this work, we assume access to a small number of group labels alongside a larger dataset without group labels. We propose a simple two-step framework that leverages this partial group information to improve worst-group performance: train a model to predict the missing group labels for the training data, and then use these predicted group labels in a robust optimization objective. Theoretically, we provide generalization bounds for our approach in terms of worst-group performance, showing how the generalization error scales with both the total number of training points and the number of training points with group labels. Empirically, our method outperforms baselines that do not use group information, even when only 1-33% of points have group labels. We provide ablation studies to support the robustness and extensibility of our framework.
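Condensed to its skeleton, and reusing the group DRO step sketched earlier, the two-step framework might look like this (helper names are ours):

```python
from sklearn.linear_model import LogisticRegression

def predict_missing_groups(feats_labeled, groups_labeled, feats_unlabeled):
    # Step 1: learn to predict group membership from features on the small
    # group-labeled subset, then pseudo-label the rest of the training data.
    g_clf = LogisticRegression(max_iter=1000).fit(feats_labeled, groups_labeled)
    return g_clf.predict(feats_unlabeled)

# Step 2: plug the predicted groups into a robust objective, e.g.
#   q = group_dro_step(model, opt, q, x, y, torch.as_tensor(g_hat))
```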
Recently, generalization to out-of-distribution (OOD) data with correlation shift has attracted great attention. Correlation shift is caused by spurious attributes that correlate with the class label, as the correlation between them may differ between training and test data. For such a problem, we show that, given the class label, models that are conditionally independent of the spurious attributes are OOD generalizable. Based on this, a metric, Conditional Spurious Variation (CSV), which controls the OOD generalization error, is proposed to measure this conditional independence. To improve OOD generalization, we regularize the training process with the proposed CSV. Under mild assumptions, our training objective can be formulated as a nonconvex-concave mini-max problem. An algorithm with a provable convergence rate is proposed to solve it. Extensive empirical results validate the efficacy of our algorithm in improving OOD generalization.
Many datasets are underspecified: there exist multiple equally viable solutions to a given task. Underspecification can be problematic for methods that learn a single hypothesis, because different functions that achieve low training loss can focus on different predictive features and thus produce widely varying predictions on out-of-distribution data. We propose DivDis, a simple two-stage framework that first learns a diverse collection of hypotheses for a task by leveraging unlabeled data from the test distribution. We then disambiguate by selecting one of the discovered hypotheses with minimal additional supervision, in the form of additional labels or inspection of function visualizations. We demonstrate the ability of DivDis to find hypotheses that use robust features on image classification and natural language processing problems.
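A compact sketch of a DivDis-style "diversify" objective under our reading: each head is trained with cross-entropy on labeled source data, while a repulsion term on unlabeled target data pushes pairs of heads toward statistically independent predictions, here measured by mutual information estimated from the batch.

```python
import torch
import torch.nn.functional as F

def pairwise_mi(p1, p2, eps=1e-8):
    # p1, p2: (batch, C) predicted distributions from two heads.
    joint = (p1.unsqueeze(2) * p2.unsqueeze(1)).mean(0)        # (C, C)
    marg = p1.mean(0).unsqueeze(1) * p2.mean(0).unsqueeze(0)   # (C, C)
    return (joint * (torch.log(joint + eps) - torch.log(marg + eps))).sum()

def divdis_loss(head_logits_src, y_src, head_logits_tgt, lam=1.0):
    # Fit every head on labeled source data...
    ce = sum(F.cross_entropy(h, y_src) for h in head_logits_src)
    # ...while decorrelating head predictions on unlabeled target data.
    probs = [F.softmax(h, dim=1) for h in head_logits_tgt]
    mi = sum(pairwise_mi(probs[i], probs[j])
             for i in range(len(probs)) for j in range(i + 1, len(probs)))
    return ce + lam * mi
```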
Several existing works study adversarial or natural distributional robustness of deep neural networks separately. In practice, however, models need to enjoy both types of robustness to ensure reliability. In this work, we bridge this gap and show that, in fact, there is an explicit trade-off between adversarial and natural distributional robustness. We first consider a simple linear regression setting on Gaussian data with disjoint core and spurious features. In this setting, through theoretical and empirical analysis, we show that (i) adversarial training with $\ell_1$ and $\ell_2$ norms increases the model's reliance on spurious features; (ii) for $\ell_\infty$ adversarial training, spurious reliance only arises when the scale of the spurious features is larger than that of the core features; and (iii) adversarial training can have an unintended consequence of reducing distributional robustness, specifically when spurious correlations change in a new test domain. Next, using a test suite of twenty adversarially trained models, we present extensive empirical evidence that adversarially trained classifiers rely on spurious features more than their standardly trained counterparts, validating our theoretical results. We also show that spurious correlations in the training data (when preserved in the test domain) can improve adversarial robustness, revealing that prior claims that adversarial vulnerability is rooted in spurious correlations are incomplete.
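Claim (i) can be checked with a toy simulation under our own data model: the label depends only on the core feature, the spurious feature is merely correlated with it, and for linear squared loss the $\ell_2$-bounded adversary admits the closed form $(|w^\top x - y| + \epsilon\|w\|_2)^2$. Running the sketch shows the spurious weight growing once the adversary is switched on, because the norm penalty favors spreading weight across correlated features.

```python
import torch

torch.manual_seed(0)
n = 2000
core = torch.randn(n)
spu = core + 0.5 * torch.randn(n)          # spurious: correlated, not causal
X, y = torch.stack([core, spu], 1), core   # label depends on core only

def fit(eps, steps=2000, lr=0.05):
    w = (0.01 * torch.randn(2)).requires_grad_()
    for _ in range(steps):
        # Closed-form l2 adversarial squared loss for a linear model.
        margin = (X @ w - y).abs() + eps * w.norm()
        (margin ** 2).mean().backward()
        with torch.no_grad():
            w -= lr * w.grad
            w.grad.zero_()
    return w.detach()

print("standard    w:", fit(0.0))   # weight concentrates on the core feature
print("adversarial w:", fit(0.5))   # weight shifts toward the spurious feature
```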
Neural networks tend to be biased toward spurious correlations between classes and latent attributes exhibited in a major portion of the training data, which undermines their generalization capability. This paper proposes a new method for training debiased classifiers without spurious attribute labels. The key idea of the method is to employ a committee of classifiers as an auxiliary module that identifies bias-conflicting data, i.e., data without spurious correlations, and assigns large weights to them when training the main classifier. The committee is learned as a bootstrapped ensemble, so that a majority of its classifiers are biased as well as diverse and deliberately fail to predict the classes of bias-conflicting data accordingly. The committee's consensus on prediction difficulty thus provides a reliable cue for identifying and weighting bias-conflicting data. Moreover, the committee is also trained with knowledge transferred from the main classifier, so that it gradually becomes debiased along with the main classifier and emphasizes more difficult data as training progresses. On five real-world datasets, our method outperforms existing methods that, like ours, use no spurious attribute labels, and even occasionally surpasses methods that rely on bias labels.
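The weighting rule can be sketched in a few lines, with our own names: each example's weight is the fraction of (biased, bootstrapped) committee members that misclassify it, so bias-conflicting examples that fool most of the committee receive the largest weights when training the main classifier.

```python
import torch

@torch.no_grad()
def committee_weights(committee, x, y, floor=0.1):
    # Fraction of committee members that get each example wrong.
    votes = torch.stack([m(x).argmax(1) != y for m in committee]).float()
    disagreement = votes.mean(0)
    return disagreement.clamp(min=floor)    # keep every example trainable

# Main-classifier loss, weighted by committee disagreement:
#   w = committee_weights(committee, x, y)
#   loss = (w * F.cross_entropy(main_model(x), y, reduction="none")).mean()
```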
The success of DNNs is driven by the counter-intuitive ability of over-parameterized networks to generalize even when they perfectly fit the training data. In practice, test error often continues to decrease with increasing over-parameterization, a phenomenon referred to as double descent. This allows practitioners to instantiate large models without worrying about overfitting. Despite its benefits, however, prior work has shown that over-parameterization can exacerbate bias against minority subgroups. Several fairness-constrained DNN training methods have been proposed to address this concern. Here, we conduct a rigorous study of MinDiff, a fairness-constrained training procedure implemented in TensorFlow's Responsible AI Toolkit that aims to achieve equality of opportunity. We show that although MinDiff improves fairness for under-parameterized models, it is likely to be ineffective in the over-parameterized regime. This is because an overfitted model with zero training loss is trivially fair on the training data, creating an "illusion of fairness" that switches off the MinDiff optimization (this applies to any disparity-based measure that cares about errors or accuracy; it does not apply to demographic parity). Within specified fairness constraints, under-parameterized MinDiff models can even have lower error than their over-parameterized counterparts (despite the baseline over-parameterized models having lower error). We further show that MinDiff optimization is very sensitive to the batch size in the under-parameterized regime. Thus, fair model training with MinDiff requires time-consuming hyper-parameter searches. Finally, we suggest using previously proposed regularization techniques, viz. L2 regularization, early stopping, and flooding, in conjunction with MinDiff to train fair over-parameterized models.
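The failure mode is easy to see from a simplified MinDiff-style penalty (our sketch, not the exact Toolkit implementation): an MMD term pulling the score distributions of two subgroups together among same-labeled examples. Once a model fits the training data perfectly, those score distributions coincide in both groups and the penalty vanishes, which is the "illusion of fairness" described above.

```python
import torch

def rbf_mmd(a, b, sigma=0.5):
    # Squared MMD between two 1-D score samples under a Gaussian kernel.
    def k(u, v):
        return torch.exp(-(u.unsqueeze(1) - v.unsqueeze(0)) ** 2 / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

def mindiff_penalty(scores, group, y, target_label=0):
    # Compare score distributions across groups among same-labeled examples.
    m = y == target_label
    a, b = scores[m & (group == 0)], scores[m & (group == 1)]
    if a.numel() == 0 or b.numel() == 0:
        return scores.new_zeros(())
    return rbf_mmd(a, b)

# total_loss = task_loss + lam * mindiff_penalty(scores, group, y)
```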
Standard empirical risk minimization (ERM) can underperform on certain minority groups (e.g., waterbirds on land or landbirds in water) due to the spurious correlation between the input and its label. Several studies have improved the worst-group accuracy by focusing on high-loss samples. The hypothesis behind this is that such high-loss samples are \textit{spurious-cue-free} (SCF) samples. However, these approaches can be problematic, since high-loss samples may also be samples with noisy labels in real-world scenarios. To resolve this issue, we utilize the predictive uncertainty of a model to improve the worst-group accuracy under noisy labels. To motivate this, we theoretically show that the high-uncertainty samples are the SCF samples in a binary classification problem. This theoretical result implies that predictive uncertainty is an adequate indicator for identifying SCF samples in a noisy-label setting. Motivated by this, we propose a novel ENtropy-based Debiasing (END) framework that prevents models from learning the spurious cues while remaining robust to noisy labels. In the END framework, we first train the \textit{identification model} to obtain the SCF samples from a training set using its predictive uncertainty. Then, another model is trained on the dataset augmented with an oversampled SCF set. The experimental results show that our END framework outperforms other strong baselines on several real-world benchmarks that involve both noisy labels and spurious cues.
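The identification step described above reduces to an entropy ranking, sketched here with our own threshold convention; the second-stage model would then be trained with the returned indices oversampled (e.g., via a weighted sampler).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def scf_candidates(model, x, top_frac=0.2):
    # Score each example by predictive entropy; the most uncertain examples
    # are treated as candidate spurious-cue-free (SCF) samples.
    probs = F.softmax(model(x), dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(1)
    k = max(1, int(top_frac * len(x)))
    return entropy.topk(k).indices
```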
Deep network models perform excellently on in-distribution (ID) data but can fail significantly on out-of-distribution (OOD) data. While much effort has gone into developing methods that improve OOD generalization, little attention has been paid to evaluating a model's ability to handle OOD data. This study is devoted to analyzing the problems of the conventional ID test and to designing OOD test paradigms that accurately evaluate practical performance. Our analysis is based on an introduced categorization of three types of distribution shift for generating OOD data. The main observations are: (1) the ID test neither reflects the actual performance of a single model nor allows comparison of different models under OOD data; (2) the failure of the ID test can be ascribed to learned marginal and conditional spurious correlations arising from the corresponding distribution shifts. Based on this, we propose new OOD test paradigms to evaluate a model's generalization ability on unseen data, and discuss how to use OOD test results to find bugs in a model and guide model debugging.