When many labels are possible, choosing a single one can lead to low precision. A common alternative, referred to as top-$K$ classification, is to choose some number $K$ (commonly around 5) and to return the $K$ labels with the highest scores. Unfortunately, for unambiguous cases, $K > 1$ is too many and, for very ambiguous cases, $K \leq 5$ (for example) can be too small. Another sensible strategy is to use an adaptive approach in which the number of labels returned varies as a function of the computed ambiguity, but must average to some particular $K$ over all the samples. We denote this alternative average-$K$ classification. This paper formally characterizes the ambiguity profile under which average-$K$ classification achieves a lower error rate than fixed top-$K$ classification. Moreover, it provides natural estimation procedures for both the fixed-size and the adaptive classifier and proves their consistency. Finally, it reports experiments on real-world image datasets revealing the benefit of average-$K$ over top-$K$ classification in practice. Overall, when the ambiguity is known precisely, average-$K$ is never worse than top-$K$, and in our experiments this also holds when the ambiguity is estimated.
translated by 谷歌翻译
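The fixed-versus-adaptive trade-off above can be sketched in a few lines of numpy. The thresholding rule and function names below are illustrative sketches of the idea, not the paper's estimation procedure:

```python
import numpy as np

def top_k_sets(scores, k):
    """Fixed top-k: for each row of scores, the indices of the k highest labels."""
    return [set(np.argsort(-s)[:k]) for s in scores]

def average_k_sets(scores, k):
    """Adaptive sets: include every label whose score exceeds a single global
    threshold, chosen so that the average set size over the batch is ~k
    (exact absent ties)."""
    flat = np.sort(scores.ravel())[::-1]
    # keeping the n*k largest scores overall yields an average size of k
    threshold = flat[int(len(scores) * k) - 1]
    return [set(np.where(s >= threshold)[0]) for s in scores]

scores = np.array([[0.90, 0.05, 0.03, 0.02],   # unambiguous example
                   [0.35, 0.30, 0.20, 0.15]])  # ambiguous example
fixed = top_k_sets(scores, 2)                  # every set has size exactly 2
adaptive = average_k_sets(scores, 2)           # sizes vary but average to 2
sizes = [len(s) for s in adaptive]
```

The ambiguous row gets a larger set and the clear-cut row a smaller one, while the average set size matches the fixed $K$.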
All famous machine learning algorithms that constitute both supervised and semi-supervised learning work well only under a common assumption: the training and test data follow the same distribution. When the distribution changes, most statistical models must be rebuilt from newly collected data, which for some applications can be expensive or impossible to obtain. It has therefore become necessary to develop approaches that reduce the need and effort of obtaining new labeled samples by exploiting data available in related areas and using them further in similar fields. This has given rise to a new machine learning framework known as transfer learning: a learning setting inspired by the capability of human beings to extrapolate knowledge across tasks to learn more efficiently. Despite the large number of different transfer learning scenarios, the main objective of this survey is to provide an overview of the state-of-the-art theoretical results in a specific, and arguably the most popular, sub-field of transfer learning called domain adaptation. In this sub-field, the data distribution is assumed to change across the training and test data while the learning task remains the same. We provide a first up-to-date description of existing results related to domain adaptation problems, covering learning bounds based on different statistical learning frameworks.
Black-box machine learning models are now routinely used in high-stakes settings, like medical diagnostics, where uncertainty quantification is needed to avoid consequential model failures. Distribution-free uncertainty quantification (distribution-free UQ) is a user-friendly paradigm for creating statistically rigorous confidence intervals/sets for such predictions. Critically, the intervals/sets are valid without distributional or model assumptions, with explicit guarantees even with finitely many datapoints. Moreover, they adapt to the difficulty of the input; when an input example is difficult, the uncertainty intervals/sets are large, signaling that the model may be wrong. Without much work and without retraining, one can use distribution-free methods on top of any underlying algorithm, such as a neural network, to produce confidence sets guaranteed to contain the ground truth with a user-specified probability, such as 90%. Indeed, the methods are easy to understand and general, applying to many modern prediction problems arising in computer vision, natural language processing, deep reinforcement learning, and so on. This hands-on introduction is aimed at readers interested in the practical implementation of distribution-free UQ who are not necessarily statisticians. We lead the reader through the practical theory and applications of distribution-free UQ, beginning with conformal prediction and extending to distribution-free control of any risk, such as the false discovery rate, the false positive rate of out-of-distribution detection, and so on. We include many explanatory illustrations, examples, and code samples in Python with PyTorch syntax. The goal is to provide the reader with a working understanding of distribution-free UQ, enabling them to put confidence intervals on their algorithms, in one self-contained document.
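A minimal split-conformal sketch of the kind of procedure this introduction covers. The score (one minus the probability of the true class) and the finite-sample quantile correction are the standard textbook choices, not code from the document:

```python
import numpy as np

def conformal_classification_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets: under exchangeability, the returned
    set contains the true label with probability at least 1 - alpha."""
    n = len(cal_labels)
    # conformity score: one minus the probability assigned to the true class
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # finite-sample-corrected quantile of the calibration scores
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # include every class whose score would have "conformed"
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```

With a perfectly confident calibration set the quantile is 0 and only probability-1 classes are included; on realistic probabilities the sets grow on ambiguous inputs.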
This paper provides both an introduction to and a detailed overview of the principles and practice of classifier calibration. A calibrated classifier correctly quantifies the level of uncertainty or confidence associated with its instance-wise predictions. This is essential for critical applications, optimal decision making, cost-sensitive classification, and some types of context change. Calibration research has a rich history that predates the birth of machine learning as an academic field by decades. However, a recent increase in interest in calibration has led to new methods and to extensions from the binary to the multiclass setting. The space of options and issues to consider is large, and navigating it requires the right set of concepts and tools. We provide both introductory material and up-to-date technical details of the main concepts and methods, including proper scoring rules and other evaluation metrics, visualisation approaches, a comprehensive account of post-hoc calibration methods for both binary and multiclass classification, and several advanced topics.
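As a small companion to the evaluation-metric discussion above, here is a sketch of the widely used binned expected calibration error (one of several binning conventions; equal-width bins are assumed here):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: average |accuracy - confidence| over equal-width bins,
    weighted by the fraction of predictions falling in each bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```

A classifier that says 75% and is right 75% of the time scores zero; systematic over-confidence shows up as a positive gap.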
We introduce a tunable loss function called $\alpha$-loss, parameterized by $\alpha \in (0,\infty]$, which interpolates between the exponential loss ($\alpha = 1/2$), the log-loss ($\alpha = 1$), and the 0-1 loss ($\alpha = \infty$), for the machine learning setting of classification. Theoretically, we illustrate a fundamental connection between $\alpha$-loss and Arimoto conditional entropy, verify the classification-calibration of $\alpha$-loss in order to demonstrate asymptotic optimality via Rademacher complexity generalization techniques, and build upon a notion called strictly local quasi-convexity in order to quantitatively characterize the optimization landscape of $\alpha$-loss. Practically, we perform class imbalance, robustness, and classification experiments on benchmark image datasets using convolutional-neural-networks. Our main practical conclusion is that certain tasks may benefit from tuning $\alpha$-loss away from log-loss ($\alpha = 1$), and to this end we provide simple heuristics for the practitioner. In particular, navigating the $\alpha$ hyperparameter can readily provide superior model robustness to label flips ($\alpha > 1$) and sensitivity to imbalanced classes ($\alpha < 1$).
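A sketch of the $\alpha$-loss applied to the probability $p$ assigned to the true label, using the form $\frac{\alpha}{\alpha-1}\bigl(1 - p^{(\alpha-1)/\alpha}\bigr)$. The function name is ours; at $\alpha = 1/2$ this gives $1/p - 1$, matching the exponential loss up to the usual margin reparameterization, and $\alpha \to \infty$ gives $1 - p$:

```python
import numpy as np

def alpha_loss(p_true, alpha):
    """alpha-loss on the probability assigned to the true label:
    (alpha/(alpha-1)) * (1 - p**((alpha-1)/alpha)).
    alpha = 1 is handled as its limit, the log-loss; alpha = inf gives 1 - p."""
    p = np.asarray(p_true, dtype=float)
    if np.isinf(alpha):
        return 1.0 - p
    if abs(alpha - 1.0) < 1e-12:
        return -np.log(p)  # log-loss arises as the alpha -> 1 limit
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))
```

Sweeping `alpha` above 1 flattens the penalty on confidently wrong examples (robustness to label flips), while `alpha` below 1 sharpens it (sensitivity to rare classes), in line with the heuristics above.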
Over the last few decades, various methods have been proposed for estimating prediction intervals in regression settings, including Bayesian methods, ensemble methods, direct interval estimation methods, and conformal prediction methods. An important issue is the calibration of these methods: the generated prediction intervals should have a predefined coverage level, without being overly conservative. In this work, we review the above four classes of methods from both a conceptual and an experimental point of view. Results on benchmark datasets from various domains highlight large fluctuations in performance from one dataset to another. These observations can be attributed to violations of certain assumptions that are inherent to some classes of methods. We illustrate how conformal prediction can be used as a general calibration procedure for methods that deliver intervals without a calibration step.
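The final point, conformal prediction as a generic calibration wrapper, can be sketched for regression with split conformal on absolute residuals. This is the standard construction applied on top of any point predictor, not code from the review:

```python
import numpy as np

def split_conformal_interval(residual_cal, y_pred_test, alpha=0.1):
    """Split conformal regression: the corrected quantile of held-out
    absolute residuals is a half-width giving >= 1 - alpha marginal coverage
    under exchangeability, for any underlying point predictor."""
    n = len(residual_cal)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(np.abs(residual_cal), level, method="higher")
    return y_pred_test - q, y_pred_test + q
```

The wrapper only sees residuals, which is why it can calibrate Bayesian, ensemble, or direct interval methods alike.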
The workhorse of machine learning is stochastic gradient descent. To access stochastic gradients, it is common to iterate over input/output pairs of a training dataset. Interestingly, it appears that one does not need full supervision to access stochastic gradients, which is the main motivation of this paper. After formalizing the "active labeling" problem, which focuses on active learning with partial supervision, we provide a streaming technique that provably minimizes the ratio of generalization error over the number of samples. We illustrate our technique in depth for robust regression.
Learning curves provide insight into the dependence of a learner's generalization performance on the training set size. This important tool can be used for model selection, to predict the effect of more training data, and to reduce the computational complexity of model training and hyperparameter tuning. This review recounts the origins of the term, provides a formal definition of the learning curve, and briefly covers basics such as its estimation. Our main contribution is a comprehensive overview of the literature regarding the shape of learning curves. We discuss empirical and theoretical evidence that supports well-behaved curves that often have the shape of a power law or an exponential. We consider the learning curves of Gaussian processes, the complex shapes they can display, and the factors influencing them. We draw specific attention to examples of learning curves that are ill-behaved, showing worse learning performance with more training data. To wrap up, we point out various open problems that warrant deeper empirical and theoretical investigation. All in all, our review underscores that learning curves are surprisingly diverse and no universal model can be identified.
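A toy illustration of the power-law shape discussed above: fitting $\mathrm{error}(n) \approx a\,n^{-b}$ in log space on synthetic data. The zero-asymptote assumption (no irreducible error offset) is ours, made to keep the fit a one-line least squares:

```python
import numpy as np

# Power-law learning curve error(n) = a * n**(-b); taking logs turns the
# fit into ordinary least squares on (log n, log error).
sizes = np.array([100, 200, 400, 800, 1600])
errors = 2.0 * sizes ** -0.5            # synthetic curve with known exponent
slope, log_a = np.polyfit(np.log(sizes), np.log(errors), 1)
a = np.exp(log_a)                       # recovered prefactor
b = -slope                              # recovered exponent
```

On real curves the fit quality itself is diagnostic: ill-behaved curves of the kind the review highlights deviate visibly from the straight line in log-log space.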
Classical false discovery rate (FDR) control procedures offer strong and interpretable guarantees, but they often lack flexibility. On the other hand, recent machine learning classification algorithms, such as those based on random forests (RF) or neural networks (NN), have outstanding practical performance but lack interpretation and theoretical guarantees. In this paper, we make these two meet by introducing a new adaptive novelty detection procedure called AdaDetect. It extends recent work from the multiple testing literature to the high-dimensional setting, notably that of Yang et al. (2021). AdaDetect is shown both to control the FDR strongly and to have a power that mimics that of an oracle in a specific sense. The interest and validity of our approach are demonstrated by theoretical results, numerical experiments on several benchmark datasets, and an application to astrophysical data. In particular, while AdaDetect can be used in combination with any classifier, it is particularly efficient on real-world datasets with RF, and on images with NN.
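For context, the classical FDR baseline that procedures in this line of work build on is the Benjamini-Hochberg step-up rule. Below is a sketch of that baseline (not AdaDetect itself, which replaces hand-crafted test statistics with learned classifier scores):

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.1):
    """BH step-up: reject the k smallest p-values, where
    k = max{i : p_(i) <= q * i / m}; controls FDR at level q
    under independence of the p-values."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])       # largest sorted index passing
        reject[order[: k + 1]] = True
    return reject
```

The step-up structure (reject everything below the largest passing index, even non-passing p-values in between) is what distinguishes BH from naive per-test thresholding.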
Uncertainty is prevalent in engineering design, statistical learning, and decision making broadly. Due to inherent risk-averseness and ambiguity about assumptions, it is common to address uncertainty by formulating and solving conservative optimization models expressed using measures of risk and related concepts. We survey the rapid development of risk measures over the last quarter century. From their beginnings in financial engineering, we recount their spread to nearly all areas of engineering and applied mathematics. Solidly rooted in convex analysis, risk measures furnish a general framework for handling uncertainty with significant computational and theoretical advantages. We describe the key facts, list several concrete algorithms, and provide an extensive list of references for further reading. The survey recalls connections with utility theory and distributionally robust optimization, points to emerging application areas such as fair machine learning, and defines measures of reliability.
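One of the most widely used risk measures covered by such surveys, conditional value-at-risk (expected shortfall), can be estimated from a sample of losses as follows. This is one of several discrete-sample conventions for CVaR, chosen here for brevity:

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Sample conditional value-at-risk: the mean of the losses at or above
    the alpha-quantile (the value-at-risk), i.e. the average of the worst
    ~(1 - alpha) fraction of outcomes. CVaR is a coherent risk measure,
    unlike VaR itself."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)      # value-at-risk threshold
    return losses[losses >= var].mean()   # average of the tail beyond VaR
```

Replacing an expected loss with `cvar(losses)` in an objective is the simplest instance of the conservative, risk-averse formulations described above.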
The learning to defer (L2D) framework has the potential to make AI systems safer. For a given input, the system can defer the decision to a human if the human is more likely than the model to take the correct action. We study the calibration of L2D systems, investigating whether the probabilities they output are sensible. We find that Mozannar & Sontag's (2020) multiclass framework is not calibrated with respect to expert correctness. Moreover, it is not even guaranteed to produce valid probabilities, since its parameterization is degenerate for this purpose. We propose an L2D system based on one-vs-all classifiers that is able to produce calibrated probabilities of expert correctness. Furthermore, our loss function is also a consistent surrogate for multiclass L2D, like that of Mozannar & Sontag (2020). Our experiments verify that not only is our system calibrated, but this benefit comes at no cost to accuracy. Our model's accuracy is always comparable to (and often superior to) that of Mozannar & Sontag's (2020) model, on tasks ranging from hate speech detection, to galaxy classification, to diagnosis of skin lesions.
Adapting to the structure of data distributions (such as symmetries and transformation invariances) is an important challenge in machine learning. Invariances can be built into the learning process by architecture design, or by augmenting the dataset. Both require a priori knowledge about the exact nature of the symmetries. Lacking this knowledge, practitioners resort to expensive and time-consuming tuning. To address this problem, we propose a new approach to learn distributions of augmentation transforms, in a new \emph{Transformed Risk Minimization} (TRM) framework. In addition to predictive models, we also optimize over transformations chosen from a hypothesis space. As an algorithmic framework, our TRM method is (1) efficient (jointly learns augmentations and models in a \emph{single training loop}), (2) modular (works with \emph{any} training algorithm), and (3) general (handles both \emph{discrete and continuous} augmentations). We theoretically compare TRM with standard risk minimization, and give a PAC-Bayes upper bound on its generalization error. We propose to optimize over a rich augmentation space via a new parametrization based on blockwise compositions, leading to the new \emph{Stochastic Compositional Augmentation Learning} (SCALE) algorithm. We compare SCALE experimentally with prior methods (Fast AutoAugment and Augerino) on CIFAR10/100 and SVHN. In addition, we show that SCALE can correctly learn certain symmetries in the data distribution (recovering rotations on rotated MNIST) and can also improve the calibration of the learned model.
In this work, we provide a review of basic ideas and novel developments of conformal prediction, an innovative, distribution-free, and non-parametric forecasting method based on minimal assumptions, which is able to yield, in a very straightforward way, prediction sets that are valid in a statistical sense also in the finite-sample case. The in-depth discussion provided in the paper covers the theoretical underpinnings of conformal prediction, and then proceeds to the more advanced developments and adaptations of the original idea.
We formalize and study the natural approach of designing convex surrogate loss functions via embeddings, for problems such as classification, ranking, or structured prediction. In this approach, one embeds each of the finitely many predictions (e.g. rankings) as a point in $R^d$, assigns the original loss values to these points, and "convexifies" the loss in some way to obtain a surrogate. We establish a strong connection between this approach and polyhedral (piecewise-linear convex) surrogate losses: every discrete loss is embedded by some polyhedral loss, and every polyhedral loss embeds some discrete loss. Moreover, an embedding gives rise to a consistent link function as well as linear surrogate regret bounds. Our results are constructive, as we illustrate with several examples. In particular, our framework gives succinct proofs of consistency or inconsistency for various polyhedral surrogates in the literature, and, for inconsistent surrogates, it further reveals the discrete losses for which these surrogates are consistent. We go on to show additional structure of embeddings, such as the equivalence of embedding and matching Bayes risks, and the equivalence of various notions of non-redundancy. Using these results, we establish that indirect elicitation, a necessary condition for consistency, is also sufficient when working with polyhedral surrogates.
Convolutional image classifiers can achieve high predictive accuracy, but quantifying their uncertainty remains an unresolved challenge, hindering their deployment in consequential settings. Existing uncertainty quantification techniques, such as Platt scaling, attempt to calibrate the network's probability estimates, but they do not have formal guarantees. We present an algorithm that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%. The algorithm is simple and fast like Platt scaling, but provides a formal finite-sample coverage guarantee for every model and dataset. Our method modifies an existing conformal prediction algorithm to give more stable predictive sets by regularizing the scores of unlikely classes after Platt scaling. In experiments on ImageNet and ImageNet-V2 with ResNet-152 and other classifiers, our scheme outperforms existing approaches, achieving coverage with sets that are often a factor of 5 to 10 smaller than a stand-alone Platt scaling baseline.
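A simplified sketch of the regularized set construction described above: add classes in decreasing probability, penalizing classes beyond a target rank, until a threshold is reached. The parameter names `lam`, `k_reg`, and `tau` are illustrative; in the actual method the threshold is calibrated conformally on held-out data to obtain the coverage guarantee:

```python
import numpy as np

def regularized_prediction_set(probs, tau, lam=0.1, k_reg=2):
    """Build one prediction set: accumulate (penalized) class scores in
    decreasing order of probability until the running total exceeds tau.
    The penalty lam per class beyond rank k_reg discourages the long,
    unstable tails that plain adaptive sets can produce."""
    order = np.argsort(-probs)                       # classes by probability
    ranks = np.arange(1, len(probs) + 1)
    penalized = probs[order] + lam * np.maximum(0, ranks - k_reg)
    cumulative = np.cumsum(penalized)
    size = min(int(np.searchsorted(cumulative, tau)) + 1, len(probs))
    return order[:size]
```

With `lam = 0` this reduces to the plain adaptive (cumulative-probability) set; increasing `lam` pulls set sizes toward `k_reg`.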
State-of-the-art results on image recognition tasks are achieved using over-parameterized learning algorithms that (nearly) perfectly fit the training set and are known to fit well even random labels. This tendency to memorize the labels of the training data is not explained by existing theoretical analyses. Memorization of the training data also presents significant privacy risks when the training data contains sensitive personal information, and thus it is important to understand whether such memorization is necessary for accurate learning. We provide the first conceptual explanation and a theoretical model for this phenomenon. Specifically, we demonstrate that for natural data distributions memorization of labels is necessary for achieving close-to-optimal generalization error. Crucially, even labels of outliers and noisy labels need to be memorized. The model is motivated and supported by the results of several recent empirical works. In our model, data is sampled from a mixture of subpopulations and our results show that memorization is necessary whenever the distribution of subpopulation frequencies is long-tailed. Image and text data is known to be long-tailed and therefore our results establish a formal link between these empirical phenomena. Our results allow us to quantify the cost of limiting memorization in learning and explain the disparate effects that privacy and model compression have on different subgroups.
A flexible method is developed to construct a confidence interval for the frequency of a queried object in a very large data set, based on a much smaller sketch of the data. The approach requires no knowledge of the data distribution or of the details of the sketching algorithm; instead, it constructs provably valid frequentist confidence intervals for random queries using a conformal inference approach. After achieving marginal coverage for random queries under the assumption of data exchangeability, the proposed method is extended to provide stronger inferences accounting for possibly heterogeneous frequencies of different random queries, redundant queries, and distribution shifts. While the presented methods are broadly applicable, this paper focuses on use cases involving the count-min sketch algorithm and a non-linear variation thereof, to facilitate comparison to prior work. In particular, the developed methods are compared empirically to frequentist and Bayesian alternatives, through simulations and experiments with data sets of SARS-CoV-2 DNA sequences and classic English literature.
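For reference, a minimal count-min sketch, the data structure the confidence intervals above are built around. Its key property is that collisions only add, so every query is an over-estimate of the true frequency. Implementation details (hash choice, parameters) are ours:

```python
import hashlib
import numpy as np

class CountMinSketch:
    """Count-min sketch: depth hash rows of the given width. A query returns
    the minimum cell over rows; since collisions only inflate counts, this
    estimate is always >= the true frequency of the item."""

    def __init__(self, width=1000, depth=5):
        self.width, self.depth = width, depth
        self.table = np.zeros((depth, width), dtype=np.int64)

    def _col(self, row, item):
        # deterministic per-row hash, salted with the row index
        digest = hashlib.blake2b(f"{row}:{item}".encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big") % self.width

    def add(self, item):
        for r in range(self.depth):
            self.table[r, self._col(r, item)] += 1

    def query(self, item):
        return int(min(self.table[r, self._col(r, item)]
                       for r in range(self.depth)))
```

The one-sided bias is exactly what the conformal approach must account for: the sketch alone gives a deterministic upper bound, and the conformal layer supplies a statistically valid lower bound for random queries.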
The so-called grouping loss component of the calibration loss can easily be overlooked when training and calibrating probabilistic classifiers. The grouping loss refers to the gap between the information available in the observations and the information actually exploited in the calibration exercise. We investigate the relation between the grouping loss and the concept of sufficiency, identifying comonotonicity as a useful criterion for sufficiency. We revisit the probing reduction approach of Langford & Zadrozny (2005) and find that it yields an estimate of the probabilistic classifier that reduces the grouping loss. Finally, we discuss Brier curves as tools to support the training and "sufficient" calibration of probabilistic classifiers.
We introduce Learn then Test, a framework for calibrating machine learning models so that their predictions satisfy explicit, finite-sample statistical guarantees regardless of the underlying model and (unknown) data-generating distribution. The framework addresses, among other examples, false discovery rate control in multi-label classification, intersection-over-union control in instance segmentation, and the simultaneous control of the type-1 error of outlier detection and the coverage of confidence sets in classification or regression. To achieve this, we solve a key technical challenge: controlling arbitrary risks that are not necessarily monotonic. Our main insight is to reframe the risk-control problem as multiple hypothesis testing, enabling techniques and mathematical arguments different from those in the prior literature. We use our framework to provide new calibration methods for several core machine learning tasks, with detailed worked examples in computer vision.
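The reframing of risk control as multiple testing can be sketched as follows: each candidate parameter value gets a p-value for the null "its risk exceeds the target", and a multiple-testing correction selects the values that pass. The Hoeffding p-value and the Bonferroni correction below are simplifications of ours; the framework also admits tighter p-values and sequential testing procedures:

```python
import numpy as np

def risk_controlling_candidates(emp_risks, n, alpha=0.1, delta=0.1):
    """Multiple-testing-style calibration (simplified): for each candidate
    parameter with empirical risk r_hat on n calibration points, form the
    Hoeffding p-value exp(-2 n (alpha - r_hat)_+^2) for H0: risk > alpha,
    then keep candidates whose p-value clears the Bonferroni-corrected
    level delta / (number of candidates)."""
    r = np.asarray(emp_risks, dtype=float)
    p_values = np.exp(-2.0 * n * np.maximum(alpha - r, 0.0) ** 2)
    return np.where(p_values <= delta / len(r))[0]
```

Because validity comes from the testing correction rather than from monotonicity of the risk in the parameter, the same recipe applies to non-monotone risks, which is the key technical point above.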
Virtually all machine learning tasks are characterized using some form of loss function, and "good performance" is typically stated in terms of a sufficiently small average loss, taken over the random draw of test data. While optimizing for performance on average is intuitive, convenient to analyze in theory, and easy to implement in practice, such a choice brings about trade-offs. In this work, we survey and introduce a wide variety of non-traditional criteria used to design and evaluate machine learning algorithms, place the classical paradigm within the proper historical context, and propose a view of learning problems which emphasizes the question of "what makes for a desirable loss distribution?" in place of tacit use of the expected loss.