We share our experience with the recently released WILDS benchmark, a collection of ten datasets dedicated to developing models and training strategies that are robust to domain shifts. Several experiments yield critical observations which we believe are of general interest for any future work on WILDS. Our study focuses on two datasets: iWildCam and FMoW. We show that (1) conducting separate cross-validation for each evaluation metric is crucial for both datasets, (2) the weak correlation between validation and test performance can make model development difficult for iWildCam, (3) minor changes to the training hyper-parameters improve the baseline by a relatively large margin (mainly on FMoW), and (4) there is a strong correlation between certain domains and certain target labels (mainly on iWildCam). To the best of our knowledge, no prior work on these datasets reports these observations despite their obvious importance. Our code is publicly available.
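Observation (1) translates into a concrete model-selection recipe: tune hyper-parameters separately for each evaluation metric rather than once overall. A hypothetical sketch of that idea with a placeholder estimator and synthetic data (not the authors' code):

```python
# Select hyper-parameters once per evaluation metric (illustrative sketch only).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer, accuracy_score, f1_score

X, y = make_classification(n_samples=500, n_classes=3, n_informative=6, random_state=0)
param_grid = {"C": [0.01, 0.1, 1.0, 10.0]}

best_per_metric = {}
for name, scorer in {
    "accuracy": make_scorer(accuracy_score),
    "macro_f1": make_scorer(f1_score, average="macro"),
}.items():
    search = GridSearchCV(LogisticRegression(max_iter=1000), param_grid,
                          scoring=scorer, cv=5)
    search.fit(X, y)
    best_per_metric[name] = (search.best_params_, search.best_score_)

print(best_per_metric)  # one hyper-parameter choice per metric
```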
Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase. However, the out-of-distribution (OOD) generalization problem remains a challenge in many NLP tasks, limiting the real-world deployment of these methods. This paper presents the first attempt at creating a unified benchmark named GLUE-X for evaluating OOD robustness in NLP models, highlighting the importance of OOD robustness and providing insights on how to measure the robustness of a model and how to improve it. The benchmark includes 13 publicly available datasets for OOD testing, and evaluations are conducted on 8 classic NLP tasks over 19 popularly used PLMs. Our findings confirm the need for improved OOD accuracy in NLP tasks, as significant performance degradation was observed in all settings compared to in-distribution (ID) accuracy.
The vulnerability of machine learning models to spurious correlations has mostly been discussed in the context of supervised learning (SL). However, there is a lack of insight into how spurious correlations affect the performance of popular self-supervised learning (SSL) and auto-encoder based (AE) models. In this work, we shed light on this by evaluating the performance of these models on both real-world and synthetic distribution shift datasets. Following the observation that the linear head itself can be susceptible to spurious correlations, we develop a novel evaluation scheme with a linear head trained on out-of-distribution (OOD) data, so as to isolate the performance of the pre-trained model from a potential bias of the linear head used for evaluation. With this new methodology, we show that SSL models are consistently more robust to distribution shifts, and thus better at OOD generalization, than AE and SL models.
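A hedged sketch of the evaluation scheme described above: freeze the pre-trained encoder and fit the linear head on one split of the OOD data, so that a biased head cannot mask the quality of the representation (function names and the dummy encoder are illustrative, not the authors' code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_on_ood(encoder, X_ood, y_ood, seed=0):
    """Fit a linear probe on frozen OOD features and report held-out OOD accuracy."""
    Z = encoder(X_ood)                                  # frozen features, shape (N, D)
    Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y_ood, test_size=0.5,
                                              random_state=seed, stratify=y_ood)
    head = LogisticRegression(max_iter=2000).fit(Z_tr, y_tr)
    return head.score(Z_te, y_te)

# Example with a dummy identity "encoder" on random features:
rng = np.random.default_rng(0)
X, y = rng.normal(size=(400, 32)), rng.integers(0, 2, size=400)
print(probe_on_ood(lambda x: x, X, y))
```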
Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) have been studied, leading to different research programmes that make different recommendations. While sharing the same aspirational goal, these approaches have never been tested under the same experimental conditions on real data. In this paper, we take a unified view of previous work, highlight message discrepancies that we address empirically, and provide recommendations on how to measure the robustness of a model and how to improve it. To this end, we collect 172 publicly available dataset pairs for training and out-of-distribution evaluation of accuracy, calibration error, adversarial attacks, environment invariance, and synthetic corruptions. We fine-tune more than 31k networks from nine different architectures. Our findings confirm that in-distribution and out-of-distribution accuracies tend to increase jointly, but show that their relation is largely dataset-dependent, and in general more nuanced and more complex than posited by previous, smaller-scale studies.
Out-of-distribution (OOD) learning deals with scenarios in which training and test data follow different distributions. Although general OOD problems have been intensively studied in machine learning, graph OOD is only an emerging area of research. Currently, a systematic benchmark tailored to the evaluation of graph OOD methods is missing. In this work, we aim to develop an OOD benchmark for graphs, known as GOOD. We explicitly distinguish between covariate and concept shifts and design data splits that accurately reflect the different shifts. We consider both graph and node prediction tasks, as there are key differences in how shifts are designed for each. Overall, GOOD contains 8 datasets with 14 domain selections. When combined with covariate, concept, and no shifts, we obtain 42 different splits. We provide performance results for 7 commonly used baseline methods with 10 random runs, yielding 294 dataset-model combinations in total. Our results show significant performance gaps between in-distribution and OOD settings. They also shed light on the different performance trends of covariate and concept shifts across methods. GOOD is a growing project that is expected to expand in both quantity and variety of resources as the area develops. The GOOD benchmark can be accessed via https://github.com/divelab/good/.
Clinical machine learning models show a significant performance drop when tested in settings not seen during training. Domain generalisation models promise to alleviate this problem, however, there is still scepticism about whether they improve over traditional training. In this work, we take a principled approach to identifying Out of Distribution (OoD) environments, motivated by the problem of cross-hospital generalization in critical care. We propose model-based and heuristic approaches to identify OoD environments and systematically compare models with different levels of held-out information. We find that access to OoD data does not translate to increased performance, pointing to inherent limitations in defining potential OoD environments potentially due to data harmonisation and sampling. Echoing similar results with other popular clinical benchmarks in the literature, new approaches are required to evaluate robust models on health records.
Recently, Miller et al. showed that a model's in-distribution (ID) accuracy has a strong linear correlation with its out-of-distribution (OOD) accuracy on several OOD benchmarks, a phenomenon they dubbed "accuracy-on-the-line". While a useful tool for model selection (i.e., the model most likely to perform best OOD is the one with the highest ID accuracy), this fact does not help to estimate the actual OOD performance of models without access to a labeled OOD validation set. In this paper, we show that a similar but surprising phenomenon also holds for the agreement between pairs of neural network classifiers: whenever accuracy-on-the-line holds, the OOD agreement between the predictions of any two neural networks (with potentially different architectures) also exhibits a strong linear correlation with their ID agreement. Furthermore, we observe that the slope and bias of OOD vs. ID agreement closely match those of OOD vs. ID accuracy. This phenomenon, which we call "agreement-on-the-line", has an important practical application: without any labeled data, we can predict the OOD accuracy of classifiers, since OOD agreement can be estimated with just unlabeled data. Our prediction algorithm outperforms previous methods both in shifts where agreement-on-the-line holds and, surprisingly, when accuracy is not on the line. This phenomenon also provides new insights into deep neural networks: unlike accuracy-on-the-line, agreement-on-the-line appears to hold only for neural network classifiers.
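A minimal sketch of this recipe, assuming a pool of models with labeled ID data and only unlabeled OOD data (an illustrative simplification, not the paper's exact estimator; the probit scaling and clipping constant are illustrative choices):

```python
import numpy as np
from scipy.stats import norm

def _probit(p, eps=1e-4):
    return norm.ppf(np.clip(p, eps, 1 - eps))

def pairwise_agreement(preds):
    """preds: (n_models, n_examples) hard predictions -> pairwise agreement rates."""
    n = len(preds)
    return np.array([(preds[i] == preds[j]).mean()
                     for i in range(n) for j in range(i + 1, n)])

def predict_ood_accuracy(id_acc, id_preds, ood_preds):
    """id_acc: per-model labeled ID accuracies; *_preds: per-model hard predictions."""
    x = _probit(pairwise_agreement(id_preds))    # ID agreement (no labels needed)
    y = _probit(pairwise_agreement(ood_preds))   # OOD agreement (unlabeled OOD data)
    slope, bias = np.polyfit(x, y, deg=1)        # fit the agreement line
    # Reuse the agreement line's slope/bias to map ID accuracy to OOD accuracy.
    return norm.cdf(slope * _probit(np.asarray(id_acc)) + bias)
```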
Several studies have empirically compared the in-distribution (ID) and out-of-distribution (OOD) performance of various models. They report frequent positive correlations on benchmarks in computer vision and NLP. Surprisingly, they never observe inverse correlations that would indicate a necessary trade-off. This matters for determining whether ID performance can serve as a proxy for OOD generalization. This short paper shows that inverse correlations between ID and OOD performance do occur in real-world benchmarks. They may have been missed in past studies because of a biased selection of models. We show an example of the pattern on the WILDS-Camelyon17 dataset, using models from multiple training epochs and random seeds. Our observations are particularly striking for models trained with a regularizer that diversifies the solutions to the ERM objective. We nuance recommendations and conclusions made in past studies. (1) High OOD performance does sometimes require trading off ID performance. (2) Focusing on ID performance alone may not lead to optimal OOD performance: it can lead to diminishing and eventually negative returns in OOD performance. (3) Our example reminds us that empirical studies only chart the regimes achievable with existing methods: care is needed when making prescriptive recommendations.
We demonstrate that self-learning techniques like entropy minimization and pseudo-labeling are simple and effective at improving performance of a deployed computer vision model under systematic domain shifts. We conduct a wide range of large-scale experiments and show consistent improvements irrespective of the model architecture, the pre-training technique or the type of distribution shift. At the same time, self-learning is simple to use in practice because it does not require knowledge or access to the original training data or scheme, is robust to hyperparameter choices, is straight-forward to implement and requires only a few adaptation epochs. This makes self-learning techniques highly attractive for any practitioner who applies machine learning algorithms in the real world. We present state-of-the-art adaptation results on CIFAR10-C (8.5% error), ImageNet-C (22.0% mCE), ImageNet-R (17.4% error) and ImageNet-A (14.8% error), theoretically study the dynamics of self-supervised adaptation methods and propose a new classification dataset (ImageNet-D) which is challenging even with adaptation.
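As an illustration of the entropy-minimization variant described above (a generic sketch, not the authors' implementation; the optimizer and learning rate are placeholder choices), a deployed model can be adapted on unlabeled target batches as follows:

```python
import torch
import torch.nn.functional as F

def adapt_by_entropy_minimization(model, target_loader, lr=1e-4, epochs=1):
    """Adapt a deployed classifier by minimizing prediction entropy on target data."""
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, *_ in target_loader:          # labels, if present, are ignored
            logits = model(x)
            probs = F.softmax(logits, dim=1)
            entropy = -(probs * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
            opt.zero_grad()
            entropy.backward()
            opt.step()
    return model
```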
We often see undesirable trade-offs in robust machine learning, where out-of-distribution (OOD) accuracy is at odds with in-distribution (ID) accuracy: robust classifiers obtained via specialized techniques that remove spurious features often have better OOD accuracy but worse ID accuracy compared to standard classifiers trained via ERM. In this paper, we find that ID-calibrated ensembles, which simply ensemble the standard and robust models after calibrating on ID data only, outperform both constituent models on ID and OOD accuracy. On eleven natural distribution shift datasets, ID-calibrated ensembles obtain the best of both worlds: strong ID accuracy and strong OOD accuracy. We analyze this method in stylized settings and identify two important conditions for the ensemble to perform well both ID and OOD: (1) the standard and robust models need to be calibrated (on ID data, since OOD data is unavailable), and (2) OOD has no anti-correlated spurious features.
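A minimal sketch of the recipe, assuming access to ID validation logits for both models (illustrative only, not the paper's code): temperature-scale each model on ID data, then average the calibrated probabilities.

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, steps=200, lr=0.01):
    """Learn a single softmax temperature on ID validation data (temperature scaling)."""
    log_t = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return log_t.exp().detach()

def id_calibrated_ensemble(logits_std, logits_rob, t_std, t_rob):
    """Average the ID-calibrated probabilities of a standard and a robust model."""
    p_std = F.softmax(logits_std / t_std, dim=1)
    p_rob = F.softmax(logits_rob / t_rob, dim=1)
    return (p_std + p_rob) / 2
```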
The well-documented presence of texture bias in modern convolutional neural networks has led to a plethora of algorithms that promote an emphasis on shape cues, often to support generalization to new domains. Yet, common datasets, benchmarks and general model selection strategies are missing, and there is no agreed, rigorous evaluation protocol. In this paper, we investigate difficulties and limitations when training networks with reduced texture bias. In particular, we also show that proper evaluation and meaningful comparisons between methods are not trivial. We introduce BiasBed, a testbed for texture- and style-biased training, including multiple datasets and a range of existing algorithms. It comes with an extensive evaluation protocol that includes rigorous hypothesis testing to gauge the significance of the results, despite the considerable training instability of some style bias methods. Our extensive experiments shed new light on the need for careful, statistically founded evaluation protocols for style bias (and beyond). E.g., we find that some algorithms proposed in the literature do not significantly mitigate the impact of style bias at all. With the release of BiasBed, we hope to foster a common understanding of consistent and meaningful comparisons, and consequently faster progress towards learning methods free of texture bias. Code is available at https://github.com/D1noFuzi/BiasBed
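BiasBed couples its benchmark with statistical hypothesis testing; purely as an illustration of that general idea (not BiasBed's own procedure, and with hypothetical per-run numbers), one can test whether an algorithm significantly improves over another across repeated runs:

```python
from scipy.stats import wilcoxon

acc_a = [71.2, 70.8, 72.0, 71.5, 70.9]   # hypothetical per-run accuracies of algorithm A
acc_b = [70.1, 71.0, 70.4, 70.7, 70.2]   # hypothetical per-run accuracies of algorithm B
stat, p_value = wilcoxon(acc_a, acc_b, alternative="greater")
print(p_value < 0.05)  # claim an improvement only if the paired test is significant
```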
Image classification with small datasets has been an active research area in the recent past. However, as research in this scope is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
Machine learning systems deployed in the wild are often trained on a source distribution but deployed on a different target distribution. Unlabeled data can be a powerful point of leverage for mitigating these distribution shifts, as it is frequently much more available than labeled data. However, existing distribution shift benchmarks with unlabeled data do not reflect the breadth of scenarios that arise in real-world applications. In this work, we present the WILDS 2.0 update, which extends 8 of the 10 datasets in the WILDS benchmark of distribution shifts to include curated unlabeled data that would be realistically obtainable in deployment. To maintain consistency, the labeled training, validation, and test sets, as well as the evaluation metrics, are exactly the same as in the original WILDS benchmark. These datasets span a wide range of applications (from histology to wildlife conservation), tasks (classification, regression, and detection), and modalities (photos, satellite images, microscope slides, text, molecular graphs). We systematically benchmark state-of-the-art methods that leverage unlabeled data, including domain-invariant, self-training, and self-supervised methods, and show that their success on WILDS 2.0 is limited. To facilitate method development and evaluation, we provide an open-source package that automates data loading and contains all of the model architectures and methods used in this paper. Code and leaderboards are available at https://wilds.stanford.edu.
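A rough usage sketch of the open-source package, based on the public WILDS README; the exact arguments and split names (especially the unlabeled split) are assumptions here and should be checked against the current documentation:

```python
import torchvision.transforms as transforms
from wilds import get_dataset
from wilds.common.data_loaders import get_train_loader

transform = transforms.Compose([transforms.Resize((448, 448)), transforms.ToTensor()])

# Labeled data: identical to the original WILDS benchmark.
dataset = get_dataset(dataset="iwildcam", download=True)
train_loader = get_train_loader(
    "standard", dataset.get_subset("train", transform=transform), batch_size=16)

# WILDS 2.0: curated unlabeled data (the "extra_unlabeled" split name is assumed).
unlabeled_dataset = get_dataset(dataset="iwildcam", download=True, unlabeled=True)
unlabeled_loader = get_train_loader(
    "standard",
    unlabeled_dataset.get_subset("extra_unlabeled", transform=transform),
    batch_size=16)
```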
The safe deployment of time-series classifiers for real-world applications relies on the ability to detect data that is not generated from the same distribution as the training data. This task is referred to as out-of-distribution (OOD) detection. We consider the novel problem of OOD detection for the time-series domain. We discuss the unique challenges posed by time-series data and explain why prior methods from the image domain perform poorly. Motivated by these challenges, this paper proposes a novel Seasonal Ratio Scoring (SRS) approach. SRS consists of three key algorithmic steps. First, each input is decomposed into a class-wise semantic component and a remainder. Second, this decomposition is used to estimate the class-wise conditional likelihoods of the input and the remainder with deep generative models; the seasonal ratio score is computed from these estimates. Third, a threshold interval is identified from the in-distribution data to detect OOD examples. Experiments on diverse real-world benchmarks demonstrate that the SRS method is well-suited for time-series OOD detection compared to baseline methods. Open-source code for the SRS method is available at https://github.com/tahabelkhouja/srs.
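A schematic sketch of the scoring-and-thresholding structure described above, with the decomposition and the class-conditional density models left as black boxes (illustrative only, not the authors' implementation; the score definition follows the abstract):

```python
import numpy as np

def seasonal_ratio_score(x, y_hat, decompose, log_p_input, log_p_remainder):
    """Ratio of class-conditional likelihoods of the input and its remainder component."""
    _, remainder = decompose(x, y_hat)
    return np.exp(log_p_input(x, y_hat) - log_p_remainder(remainder, y_hat))

def threshold_interval(id_scores, alpha=0.05):
    """Interval covering the central (1 - 2*alpha) mass of in-distribution scores."""
    return np.quantile(id_scores, alpha), np.quantile(id_scores, 1 - alpha)

def is_ood(score, interval):
    low, high = interval
    return not (low <= score <= high)   # scores outside the ID interval are flagged
```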
Out-of-domain (OOD) generalization is a significant challenge for machine learning models. Many techniques have been proposed to overcome this challenge, often focused on learning models with certain invariance properties. In this work, we draw a link between OOD performance and model calibration, arguing that calibration across multiple domains can be viewed as a special kind of invariance that leads to better OOD generalization. Specifically, we show that under certain conditions, models that achieve multi-domain calibration are provably free of spurious correlations. This leads us to propose multi-domain calibration as a measurable and trainable surrogate for the OOD performance of a classifier. We therefore introduce methods that are easy to apply and allow practitioners to improve multi-domain calibration by training or modifying an existing model, leading to better performance on unseen domains. Using four datasets from the recently proposed WILDS benchmark as well as the Colored MNIST dataset, we demonstrate that training or tuning models so they are calibrated across multiple domains leads to significantly improved performance on unseen test domains. We believe this intriguing connection between calibration and OOD generalization is promising from both a practical and a theoretical point of view.
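As an illustration of multi-domain calibration as a measurable model-selection signal (a generic sketch, not the paper's training method), one could rank models by their worst-domain expected calibration error:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard binned ECE: weighted gap between confidence and accuracy per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

def worst_domain_ece(confidences, correct, domains):
    """Per-domain ECE, reduced to the worst domain as a multi-domain calibration proxy."""
    return max(expected_calibration_error(confidences[domains == d], correct[domains == d])
               for d in np.unique(domains))
```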
Deep Metric Learning (DML) aims to find representations suitable for zero-shot transfer to a priori unknown test distributions. However, common evaluation protocols only test a single, fixed data split in which train and test classes are assigned randomly. More realistic evaluations should consider a broad spectrum of distribution shifts with potentially varying degree and difficulty. In this work, we systematically construct train-test splits of increasing difficulty and present the ooDML benchmark to characterize generalization under out-of-distribution shifts in DML. ooDML is designed to probe generalization performance on much more challenging, diverse train-to-test distribution shifts. Based on our new benchmark, we conduct a thorough empirical analysis of state-of-the-art DML methods. We find that, while generalization tends to consistently degrade with difficulty, some methods are better at retaining performance as the distribution shift increases. Finally, we propose few-shot DML as an efficient way to consistently improve generalization in response to unknown test shifts presented in ooDML. Code available here: https://github.com/CompVis/Characterizing_Generalization_in_DML.
We build new test sets for the CIFAR-10 and ImageNet datasets. Both benchmarks have been the focus of intense research for almost a decade, raising the danger of overfitting to excessively re-used test sets. By closely following the original dataset creation processes, we test to what extent current classification models generalize to new data. We evaluate a broad range of models and find accuracy drops of 3%-15% on CIFAR-10 and 11%-14% on ImageNet. However, accuracy gains on the original test sets translate to larger gains on the new test sets. Our results suggest that the accuracy drops are not caused by adaptivity, but by the models' inability to generalize to slightly "harder" images than those found in the original test sets.
The last decade has witnessed a prosperous development of computational methods and dataset curation for AI-aided drug discovery (AIDD). However, real-world pharmaceutical datasets often exhibit highly imbalanced distributions, which are largely overlooked by the current literature but may severely compromise the fairness and generalization of machine learning applications. Motivated by this observation, we introduce ImDrug, a comprehensive benchmark with an open-source Python library consisting of 4 imbalance settings, 11 AI-ready datasets, 54 learning tasks, and 16 baseline algorithms tailored for imbalanced learning. It provides an accessible and customizable testbed for problems and solutions spanning a broad range of the drug discovery pipeline, such as molecular modeling, drug-target interaction, and retrosynthesis. We conduct extensive empirical studies with novel evaluation metrics to demonstrate that existing algorithms fall short of solving medicinal and pharmaceutical challenges in the presence of data imbalance. We believe that ImDrug opens up avenues for future research and development on real-world challenges at the intersection of AIDD and deep imbalanced learning.
Recognizing soft-biometric pedestrian attributes is essential in video surveillance and fashion retrieval. Recent works have shown promising results on individual datasets. Nevertheless, how these methods generalize to different attribute distributions, viewpoints, varying illumination, and low resolution remains barely understood, due to strong biases and varying attributes in current datasets. To close this gap and support a systematic investigation, we present UPAR, the Unified Person Attribute Recognition dataset. It is based on four well-known person attribute recognition datasets: PA100K, PETA, RAPv2, and Market1501. We unify these datasets by providing 3.3M additional annotations that harmonize 40 important binary attributes across the datasets. This enables, for the first time, a study of generalizable pedestrian attribute recognition as well as attribute-based person retrieval. Due to the vast variance in image distribution, pedestrian pose, scale, and occlusion, existing approaches are greatly challenged both in terms of accuracy and efficiency. Furthermore, we develop a strong baseline for PAR and attribute-based person retrieval based on a thorough analysis of regularization methods. Our models achieve state-of-the-art performance in cross-domain and specialization settings on PA100K, PETA, RAPv2, Market1501-Attributes, and UPAR. We believe UPAR and our strong baseline will contribute to the artificial intelligence community and promote research on large-scale, generalizable attribute recognition systems.
Reliable application of machine learning-based decision systems in the wild is one of the major challenges currently investigated by the field. A large portion of established approaches aims to detect erroneous predictions by means of assigning confidence scores. This confidence may be obtained by either quantifying the model's predictive uncertainty, learning explicit scoring functions, or assessing whether the input is in line with the training distribution. Curiously, while these approaches all state to address the same eventual goal of detecting failures of a classifier upon real-life application, they currently constitute largely separated research fields with individual evaluation protocols, which either exclude a substantial part of relevant methods or ignore large parts of relevant failure sources. In this work, we systematically reveal current pitfalls caused by these inconsistencies and derive requirements for a holistic and realistic evaluation of failure detection. To demonstrate the relevance of this unified perspective, we present a large-scale empirical study for the first time enabling benchmarking confidence scoring functions w.r.t all relevant methods and failure sources. The revelation of a simple softmax response baseline as the overall best performing method underlines the drastic shortcomings of current evaluation in the abundance of publicized research on confidence scoring. Code and trained models are at https://github.com/IML-DKFZ/fd-shifts.
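For reference, the softmax-response baseline highlighted above amounts to using the maximum softmax probability as the confidence score. A generic sketch of that idea (not the repository's code; the threshold is an arbitrary placeholder):

```python
import torch
import torch.nn.functional as F

def softmax_response(logits):
    """Confidence score: the softmax probability of the predicted class."""
    return F.softmax(logits, dim=1).max(dim=1).values

def flag_failures(logits, threshold=0.7):
    """Flag low-confidence predictions as potential failures."""
    return softmax_response(logits) < threshold

logits = torch.randn(4, 10)
print(softmax_response(logits), flag_failures(logits))
```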