Conformal prediction (CP) is a wrapper around traditional machine learning models, giving coverage guarantees under the sole assumption of exchangeability; in classification problems, for a chosen significance level $\varepsilon$, CP guarantees that the error rate is at most $\varepsilon$, irrespective of whether the underlying model is misspecified. However, the prohibitive computational costs of "full" CP led researchers to design scalable alternatives, which alas do not attain the same guarantees or statistical power as full CP. In this paper, we use influence functions to efficiently approximate full CP. We prove that our method is a consistent approximation of full CP, and empirically show that the approximation error becomes smaller as the training set grows; e.g., for $10^{3}$ training points the two methods output p-values that are $<10^{-3}$ apart: a negligible error for any practical application. Our methods enable scaling full CP to large real-world datasets. We compare our full CP approximation (ACP) to mainstream CP alternatives, and observe that our method is computationally competitive whilst enjoying the statistical predictive power of full CP.
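The full CP baseline being approximated here can be summarized in a few lines: for every candidate label, the model is refit on the training set augmented with the test point, nonconformity scores are ranked, and a p-value is produced. The sketch below is illustrative only; the logistic-regression model and the probability-based nonconformity score are our own assumptions, not the paper's setup.

```python
# Minimal sketch of full (transductive) conformal prediction for classification.
# The model choice and nonconformity score are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def full_cp_pvalues(X_train, y_train, x_test, labels, epsilon=0.1):
    """Return a conformal p-value per candidate label and the resulting prediction set."""
    p_values = {}
    for y_cand in labels:
        # Augment the training set with (x_test, y_cand) and refit from scratch.
        X_aug = np.vstack([X_train, x_test[None, :]])
        y_aug = np.append(y_train, y_cand)
        model = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
        # Nonconformity score: one minus the probability assigned to each point's label.
        probs = model.predict_proba(X_aug)
        cols = np.searchsorted(model.classes_, y_aug)
        scores = 1.0 - probs[np.arange(len(y_aug)), cols]
        # p-value: fraction of augmented points at least as nonconforming as the test point.
        p_values[y_cand] = np.mean(scores >= scores[-1])
    prediction_set = [y for y, p in p_values.items() if p > epsilon]
    return p_values, prediction_set
```

Refitting once per candidate label (and, in regression, per candidate value) is what makes full CP expensive and what the influence-function approximation is designed to avoid.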
Deep learning predictions with quantifiable confidence are increasingly in demand for real-world problems, especially in high-risk settings. The conformal prediction (CP) framework is a versatile solution that automatically guarantees a maximum error rate. However, CP suffers from computational inefficiencies that limit its application to large-scale datasets. In this paper, we propose a novel conformal loss function that approximates the traditionally two-step CP approach in a single step. By evaluating and penalizing deviations from the strict expected CP output distribution, a deep learning model can learn the direct relationship between the input data and the conformal p-values. Our approach achieves a training-time reduction of up to 86% compared to Aggregated Conformal Prediction (ACP), an accepted approximate variant of CP. We carry out a comprehensive empirical evaluation, in terms of approximate validity and predictive efficiency, showing that our novel loss function is competitive with ACP on the well-established MNIST dataset.
In this work, we provide a review of the basic ideas and novel developments of conformal prediction, an innovative, distribution-free, non-parametric prediction method based on minimal assumptions, which is able to yield, in a very straightforward way, prediction sets that are valid in a statistical sense even in the finite-sample case. The in-depth discussion provided in the paper covers the theoretical underpinnings of conformal prediction, and then proceeds to list the more advanced developments and adaptations of the original idea.
Conformal prediction constructs a confidence set for an unobserved response of a feature vector based on previous identically distributed and exchangeable observations of responses and features. It has a coverage guarantee at any nominal level without additional assumptions on their distribution. Unfortunately, its computation requires a refitting procedure for all replacement candidates of the target response. In regression settings, this corresponds to an infinite number of model fits. Apart from relatively simple estimators that can be written as piecewise-linear functions of the response, efficiently computing such sets is difficult and is still considered an open problem. We exploit the fact that, \emph{often}, conformal prediction sets are intervals whose boundaries can be efficiently approximated by classical root-finding algorithms. We investigate how this approach can overcome many limitations of previously used strategies; we discuss its complexity and drawbacks.
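As a rough illustration of the root-finding idea, under the assumption that the conformal set is an interval, one can bisect on the conformal p-value viewed as a function of the candidate response. The ridge model, the absolute-residual score, and the bracketing bounds below are our assumptions; `brentq` only needs brackets on which the p-value crosses the level $\alpha$.

```python
# Hedged sketch: locate the two points where the conformal p-value crosses alpha.
# Assumes the conformal set is an interval and that [lo, point] and [point, hi]
# each bracket one boundary; the model and score are illustrative choices.
import numpy as np
from scipy.optimize import brentq
from sklearn.linear_model import Ridge

def conformal_pvalue(y_cand, X, y, x_new):
    X_aug = np.vstack([X, x_new[None, :]])
    y_aug = np.append(y, y_cand)
    model = Ridge(alpha=1.0).fit(X_aug, y_aug)
    scores = np.abs(y_aug - model.predict(X_aug))   # absolute residuals
    return np.mean(scores >= scores[-1])            # rank of the test residual

def conformal_interval(X, y, x_new, alpha=0.1, lo=-1e3, hi=1e3):
    point = Ridge(alpha=1.0).fit(X, y).predict(x_new[None, :])[0]
    f = lambda t: conformal_pvalue(t, X, y, x_new) - alpha
    left = brentq(f, lo, point)    # p-value rises through alpha left of the point prediction
    right = brentq(f, point, hi)   # and falls back through alpha to its right
    return left, right
```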
When one observes a sequence of variables $(x_1, y_1), \ldots, (x_n, y_n)$, conformal prediction is a methodology that allows one to estimate a confidence set for $y_{n+1}$ given $x_{n+1}$ by merely assuming that the distribution of the data is exchangeable. While appealing, the computation of such sets is in general infeasible in this setting, e.g., when the unknown variable $y_{n+1}$ is continuous. In this paper, we combine conformal prediction techniques with algorithmic stability bounds to derive a prediction set computable with a single model fit. We perform numerical experiments that illustrate the tightness of our estimation when the sample size is sufficiently large.
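One way to picture the single-fit construction: fit the model once, form the usual conformal residual quantile, and widen it by a slack term supplied by the stability bound. In the sketch below, the slack `tau` is a user-supplied placeholder for that bound; the ridge model and the absolute-residual score are assumptions of ours.

```python
# Rough sketch of a stability-widened conformal interval from a single model fit.
# `tau` stands in for the algorithmic stability bound derived in the paper.
import numpy as np
from sklearn.linear_model import Ridge

def stable_cp_interval(X, y, x_new, alpha=0.1, tau=0.05):
    model = Ridge(alpha=1.0).fit(X, y)              # single model fit, no refitting per candidate
    residuals = np.abs(y - model.predict(X))
    n = len(y)
    level = min(1.0, np.ceil((1 - alpha) * (n + 1)) / n)
    q = np.quantile(residuals, level, method="higher")
    pred = model.predict(x_new[None, :])[0]
    return pred - q - tau, pred + q + tau           # stability slack widens the interval
```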
Predicting a set of outcomes, rather than a unique outcome, is a promising solution to uncertainty quantification in statistical learning. Despite a rich literature on constructing prediction sets with statistical guarantees, adapting to unknown covariate shift, a problem that is prevalent in practice, remains a serious unsolved challenge. In this paper, we show that prediction sets with finite-sample coverage guarantees are uninformative, and propose a novel flexible distribution-free method, PredSet-1Step, to efficiently construct prediction sets with asymptotic coverage guarantees under unknown covariate shift. We formally show that our method is \textit{asymptotically probably approximately correct}, having well-calibrated coverage error with high confidence for large samples. We illustrate that it achieves nominal coverage in a number of experiments and on a dataset concerning HIV risk prediction in a South African cohort study. Our theory hinges on a new bound on the convergence rate of the coverage of Wald confidence intervals based on general asymptotically linear estimators.
We propose \textbf{JAWS}, a series of wrapper methods for distribution-free uncertainty quantification tasks under covariate shift, centered on our core method \textbf{JAW}, the \textbf{JA}ckknife+ \textbf{W}eighted. JAWS also includes computationally efficient \textbf{A}pproximations of JAW that use higher-order influence functions: \textbf{JAWA}. Theoretically, we show that JAW relaxes the jackknife+'s assumption of data exchangeability and achieves the same finite-sample coverage guarantee even under covariate shift. JAWA further approaches the JAW guarantee in the limit of the sample size or of the influence-function order, under mild assumptions. Moreover, we propose a general approach for repurposing any distribution-free uncertainty quantification method and its guarantees to the task of risk assessment: a task that produces the estimated probability that the true label lies within a user-specified interval. We then propose \textbf{JAW-R} and \textbf{JAWA-R} as the repurposed versions of the proposed methods for \textbf{R}isk assessment. In practice, JAWS outperforms state-of-the-art predictive inference baselines on a variety of biased real-world datasets for both the interval-generation and risk-assessment auditing tasks.
Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the $\epsilon$-differential privacy definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et al. (2006) to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.
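To make the idea of objective perturbation concrete, the sketch below adds a random linear term to an $L_2$-regularized logistic-regression objective before optimizing. The per-coordinate Laplace noise and its scale are placeholders of ours; the calibration that actually yields $\epsilon$-differential privacy (and any extra regularization it requires) follows the analysis in the paper.

```python
# Simplified sketch of objective perturbation for regularized logistic regression.
# The noise distribution/scale are illustrative placeholders, not a DP calibration.
import numpy as np
from scipy.optimize import minimize

def objective_perturbation_logreg(X, y, lam=1.0, noise_scale=1.0, seed=None):
    """y in {-1, +1}; returns weights minimizing the randomly perturbed objective."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    b = rng.laplace(scale=noise_scale, size=d)        # random perturbation direction

    def obj(w):
        margins = y * (X @ w)
        loss = np.mean(np.logaddexp(0.0, -margins))   # numerically stable logistic loss
        return loss + 0.5 * lam * w @ w + (b @ w) / n # regularizer + linear noise term

    return minimize(obj, np.zeros(d), method="L-BFGS-B").x
```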
Removing the influence of a specified subset of training data from a machine learning model may be required to address issues such as privacy, fairness, and data quality. Retraining the model from scratch on the data remaining after the subset is removed is effective but often infeasible due to its computational cost. The past few years have therefore seen several new approaches for efficient removal, forming the field of "machine unlearning"; however, many aspects of the literature published so far are disparate and lack consensus. In this paper, we summarize and compare seven state-of-the-art machine unlearning algorithms, consolidate definitions of the core concepts used in the field, reconcile different approaches for evaluating algorithms, and discuss issues related to applying machine unlearning in practice.
We study the problem of deleting user data from machine learning models trained with empirical risk minimization. Our focus is on learning algorithms that return the empirical risk minimizer and on approximate unlearning algorithms that comply with deletion requests arriving in a streaming fashion. Leveraging the infinitesimal jackknife, we develop an online unlearning algorithm that is both computationally and memory efficient. Unlike prior memory-efficient unlearning algorithms, we target models that minimize objectives with non-smooth regularizers, such as the commonly used $\ell_1$, elastic-net, or nuclear-norm penalties. We also provide generalization, deletion-capacity, and unlearning guarantees that are consistent with state-of-the-art methods. Across a variety of benchmark datasets, our algorithm improves upon the runtime of prior methods while maintaining the same memory requirements and test accuracy. Finally, we open a new direction of inquiry by proving that all approximate unlearning algorithms introduced so far fail to unlearn in problem settings where common hyperparameter tuning methods, such as cross-validation, have been used.
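The core of an infinitesimal-jackknife deletion update, in the smooth and strongly convex case, is a single Newton-type correction of the trained parameters; a rough sketch is given below. The callables, the ridge term standing in for the regularizer, and the sign conventions are assumptions of ours, and the non-smooth regularizers targeted by the paper require additional machinery not shown here.

```python
# Illustrative one-step deletion update via the infinitesimal jackknife (smooth case only).
import numpy as np

def unlearn_one_point(theta, grad_fn, hess_fn, X, y, idx, lam=1e-2):
    """grad_fn/hess_fn return the per-point loss gradient and Hessian at theta."""
    n = X.shape[0]
    keep = np.ones(n, dtype=bool)
    keep[idx] = False
    # Hessian of the objective on the remaining data, plus a ridge term for the regularizer.
    H = sum(hess_fn(theta, X[i], y[i]) for i in np.where(keep)[0]) / (n - 1)
    H += lam * np.eye(len(theta))
    # First-order estimate of how the minimizer moves when the point's gradient is removed.
    g = grad_fn(theta, X[idx], y[idx]) / (n - 1)
    return theta + np.linalg.solve(H, g)
```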
In this paper, we consider first- and second-order techniques to address continuous optimization problems arising in machine learning. In the first-order case, we propose a framework for transitioning from deterministic or semi-deterministic to stochastic quadratic regularization methods. We leverage the two-phase nature of stochastic optimization to propose a novel first-order algorithm with adaptive sampling and adaptive step size. In the second-order case, we propose a novel stochastic damped L-BFGS method that improves on previous algorithms in the highly non-convex context of deep learning. Both algorithms are evaluated on well-known deep learning datasets and exhibit promising performance.
Machine learning (ML) models often need to be retrained on changing datasets in a variety of application scenarios, including data valuation and uncertainty quantification. To retrain models efficiently, linear approximation methods such as influence functions have been proposed to estimate the effect of data changes on model parameters. However, these methods become inaccurate for large changes to the dataset. In this work, we focus on convex learning problems and propose a general framework for learning to estimate optimized model parameters for different training sets using neural networks. We propose to enforce that the predicted model parameters obey optimality conditions and maintain utility through regularization techniques, which significantly improves generalization. Furthermore, we rigorously characterize the expressive power of neural networks for approximating the optimizers of convex problems. Empirical results demonstrate the advantages of the proposed method over state-of-the-art approaches in accurate and efficient model parameter estimation.
Classical false discovery rate (FDR) control procedures provide strong and interpretable guarantees, but they often lack flexibility. On the other hand, recent machine learning classification algorithms, such as those based on random forests (RF) or neural networks (NN), have excellent practical performance but lack interpretation and theoretical guarantees. In this paper, we make these two meet by introducing a new adaptive novelty detection procedure called AdaDetect. It extends recent work from the multiple testing literature to the high-dimensional setting, in particular that of Yang et al. (2021). AdaDetect is shown both to control the FDR strongly and to have a power that mimics that of an oracle in a specific sense. The interest and validity of our method are demonstrated through theoretical results, numerical experiments on several benchmark datasets, and an application to astrophysical data. In particular, while AdaDetect can be combined with any classifier, it is particularly efficient on real-world datasets with RF, and on images with NN.
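A generic classifier-plus-multiple-testing recipe of this kind (not necessarily the exact AdaDetect construction) can be sketched as follows: a classifier is trained to separate a null sample from the test sample, empirical p-values are formed by ranking test scores against held-out null scores, and Benjamini-Hochberg is applied at the target FDR level. The random-forest scorer and the data split are our own illustrative choices.

```python
# Hedged sketch of classifier-based novelty detection with Benjamini-Hochberg FDR control.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def detect_novelties(X_null_train, X_null_calib, X_test, q=0.1):
    # Train a classifier to distinguish nulls from the (possibly contaminated) test sample.
    X = np.vstack([X_null_train, X_test])
    z = np.concatenate([np.zeros(len(X_null_train)), np.ones(len(X_test))])
    clf = RandomForestClassifier(n_estimators=200).fit(X, z)
    s_calib = clf.predict_proba(X_null_calib)[:, 1]   # scores on held-out nulls
    s_test = clf.predict_proba(X_test)[:, 1]
    # Empirical p-value of each test point against the null calibration scores.
    pvals = (1 + np.sum(s_calib[None, :] >= s_test[:, None], axis=1)) / (len(s_calib) + 1)
    # Benjamini-Hochberg step-up procedure at level q.
    m = len(pvals)
    order = np.argsort(pvals)
    passed = np.where(np.sort(pvals) <= q * np.arange(1, m + 1) / m)[0]
    k = passed.max() + 1 if passed.size else 0
    return pvals, order[:k]                            # indices flagged as novelties
```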
We introduce SketchySGD, a stochastic quasi-Newton method that uses sketching to approximate the curvature of the loss function. Quasi-Newton methods are among the most effective algorithms in traditional optimization, where they converge much faster than first-order methods such as SGD. However, for contemporary deep learning, quasi-Newton methods are considered inferior to first-order methods like SGD and Adam owing to higher per-iteration complexity and fragility due to inexact gradients. SketchySGD circumvents these issues by a novel combination of subsampling, randomized low-rank approximation, and dynamic regularization. In the convex case, we show SketchySGD with a fixed stepsize converges to a small ball around the optimum at a faster rate than SGD for ill-conditioned problems. In the non-convex case, SketchySGD converges linearly under two additional assumptions, interpolation and the Polyak-Łojasiewicz condition, the latter of which holds with high probability for wide neural networks. Numerical experiments on image and tabular data demonstrate the improved reliability and speed of SketchySGD for deep learning, compared to standard optimizers such as SGD and Adam and existing quasi-Newton methods.
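The preconditioning idea can be illustrated with a standard randomized Nyström recipe: draw a few Hessian-vector products on a minibatch, form a low-rank eigendecomposition, and apply the damped inverse to stochastic gradients. The sketch below assumes a PSD (e.g., convex or Gauss-Newton) Hessian and is not the authors' exact implementation; `hvp` is any user-supplied Hessian-vector-product routine.

```python
# Hedged sketch: randomized Nystrom low-rank Hessian approximation + damped preconditioning.
import numpy as np

def nystrom_hessian(hvp, d, rank=10, shift=1e-6, seed=None):
    """Return (U, lam) with H ~= U @ diag(lam) @ U.T, built from `rank` Hessian-vector products."""
    rng = np.random.default_rng(seed)
    Omega = np.linalg.qr(rng.standard_normal((d, rank)))[0]                  # random test matrix
    Y = np.column_stack([hvp(Omega[:, j]) for j in range(rank)]) + shift * Omega
    C = np.linalg.cholesky(Omega.T @ Y)                                      # small rank x rank factor
    B = np.linalg.solve(C, Y.T).T                                            # B = Y @ inv(C).T
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    return U, np.maximum(s**2 - shift, 0.0)

def preconditioned_step(theta, grad, U, lam, rho=1e-3, lr=1.0):
    """One SGD step with the gradient preconditioned by (U diag(lam) U^T + rho I)^{-1}."""
    coef = U.T @ grad
    precond = U @ (coef / (lam + rho)) + (grad - U @ coef) / rho
    return theta - lr * precond
```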
Over the past decades, various methods have been proposed for estimating prediction intervals in regression settings, including Bayesian methods, ensemble methods, direct interval estimation methods, and conformal prediction methods. An important issue is the calibration of these methods: the generated prediction intervals should have a predefined coverage level without being overly conservative. In this work, we review the above four classes of methods from both a conceptual and an experimental point of view. Results on benchmark datasets from various domains highlight large fluctuations in performance from one dataset to another. These observations can be attributed to violations of certain assumptions inherent to some classes of methods. We illustrate how conformal prediction can be used as a general calibration procedure for methods that do not have a calibration step of their own.
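The final point, using conformal prediction as a generic calibration layer, can be sketched with the standard split-conformal recipe: any method producing (lower, upper) bounds is post-processed with a quantile of its calibration-set errors. The score and quantile rule below follow the common conformalized-quantile-regression construction and are illustrative rather than tied to the specific methods reviewed.

```python
# Minimal sketch: split-conformal calibration of intervals produced by any base method.
import numpy as np

def conformalize_intervals(lower_cal, upper_cal, y_cal, lower_test, upper_test, alpha=0.1):
    """Widen (or shrink) raw intervals so they achieve ~(1 - alpha) marginal coverage."""
    # Conformity score: how far each calibration label falls outside its raw interval.
    scores = np.maximum(lower_cal - y_cal, y_cal - upper_cal)
    n = len(y_cal)
    q = np.quantile(scores, min(1.0, np.ceil((1 - alpha) * (n + 1)) / n), method="higher")
    return lower_test - q, upper_test + q
```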
We introduce a tunable loss function called $\alpha$-loss, parameterized by $\alpha \in (0,\infty]$, which interpolates between the exponential loss ($\alpha = 1/2$), the log-loss ($\alpha = 1$), and the 0-1 loss ($\alpha = \infty$), for the machine learning setting of classification. Theoretically, we illustrate a fundamental connection between $\alpha$-loss and Arimoto conditional entropy, verify the classification-calibration of $\alpha$-loss in order to demonstrate asymptotic optimality via Rademacher complexity generalization techniques, and build upon a notion called strictly local quasi-convexity in order to quantitatively characterize the optimization landscape of $\alpha$-loss. Practically, we perform class imbalance, robustness, and classification experiments on benchmark image datasets using convolutional-neural-networks. Our main practical conclusion is that certain tasks may benefit from tuning $\alpha$-loss away from log-loss ($\alpha = 1$), and to this end we provide simple heuristics for the practitioner. In particular, navigating the $\alpha$ hyperparameter can readily provide superior model robustness to label flips ($\alpha > 1$) and sensitivity to imbalanced classes ($\alpha < 1$).
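In terms of the probability $p$ that the model assigns to the true class, the $\alpha$-loss can be written as $\ell_\alpha(p) = \frac{\alpha}{\alpha-1}\bigl(1 - p^{(\alpha-1)/\alpha}\bigr)$ for $\alpha \neq 1$, recovering $1/p - 1$ (an exponential-type loss) at $\alpha = 1/2$, $-\log p$ at $\alpha = 1$, and $1 - p$ as $\alpha \to \infty$. The small helper below computes it; the clipping and reduction conventions are our own assumptions.

```python
# Sketch of the alpha-loss evaluated on the probability assigned to the correct class.
import numpy as np

def alpha_loss(p_true, alpha):
    """p_true: array of predicted probabilities of the true labels; alpha in (0, inf]."""
    p = np.clip(np.asarray(p_true, dtype=float), 1e-12, 1.0)
    if np.isinf(alpha):
        return 1.0 - p                                # soft 0-1 loss
    if np.isclose(alpha, 1.0):
        return -np.log(p)                             # log-loss
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))
```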
Good models require good training data. For overparameterized deep models, the causal relationship between training data and model predictions is increasingly opaque and poorly understood. Influence analysis partially demystifies training's underlying interactions by quantifying the amount each training instance alters the final model. Measuring the training data's influence exactly can be provably hard in the worst case; this has led to the development and use of influence estimators, which only approximate the true influence. This paper provides the first comprehensive survey of training data influence analysis and estimation. We begin by formalizing the various, and in places orthogonal, definitions of training data influence. We then organize state-of-the-art influence analysis methods into a taxonomy; we describe each of these methods in detail and compare their underlying assumptions, asymptotic complexities, and overall strengths and weaknesses. Finally, we propose future research directions to make influence analysis more useful in practice as well as more theoretically and empirically sound. A curated, up-to-date list of resources related to influence analysis is available at https://github.com/ZaydH/influence_analysis_papers.
In consequential decision-making applications, mitigating unwanted biases in machine learning models that yield systematic disadvantage to members of groups delineated by sensitive attributes such as race and gender is one key intervention to strive for equity. Focusing on demographic parity and equality of opportunity, in this paper we propose an algorithm that improves the fairness of a pre-trained classifier by simply dropping carefully selected training data points. We select instances based on their influence on the fairness metric of interest, computed using an infinitesimal jackknife-based approach. The dropping of training points is done in principle, but in practice does not require the model to be refit. Crucially, we find that such an intervention does not substantially reduce the predictive performance of the model but drastically improves the fairness metric. Through careful experiments, we evaluate the effectiveness of the proposed approach on diverse tasks and find that it consistently improves upon existing alternatives.
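A rough sketch of the selection step, under our own sign conventions and with user-supplied gradient/Hessian callables, is to score each training point by the first-order estimated change in the fairness metric if that point were dropped, and to flag the most harmful points; the model itself is then not refit.

```python
# Illustrative influence-based scoring of training points against a fairness metric.
# Callable names and sign conventions are assumptions, not the paper's exact recipe.
import numpy as np

def fairness_influence_scores(theta, X, y, metric_grad, loss_grad, hessian, k=10):
    H = hessian(theta, X, y)                          # d x d Hessian of the training objective
    v = np.linalg.solve(H, metric_grad(theta))        # H^{-1} times the fairness-metric gradient
    # First-order estimate of each point's effect on the metric.
    scores = np.array([v @ loss_grad(theta, X[i], y[i]) for i in range(len(y))])
    harmful = np.argsort(scores)[-k:]                 # candidates to drop
    return scores, harmful
```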
Influence diagnostics such as influence functions and approximate maximum influence perturbations are popular in machine learning and in AI domain applications. Influence diagnostics are powerful statistical tools to identify influential datapoints or subsets of datapoints. We establish finite-sample statistical bounds, as well as computational complexity bounds, for influence functions and approximate maximum influence perturbations using efficient inverse-Hessian-vector product implementations. We illustrate our results with generalized linear models and large attention based models on synthetic and real data.
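The efficient inverse-Hessian-vector products mentioned here can be illustrated with a matrix-free conjugate-gradient solve on Hessian-vector products: the influence of a training point $z$ on the test loss is then approximated by $-\nabla\ell_{\text{test}}^{\top} H^{-1} \nabla\ell_{z}$. The damping term and iteration budget below are assumptions of ours.

```python
# Sketch: influence of a training point via a CG-based inverse-Hessian-vector product.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def influence_of_point(hvp, grad_train_point, grad_test, d, damping=1e-3, maxiter=100):
    """`hvp(v)` returns H @ v for the training Hessian H; damping keeps the solve well-posed."""
    H_op = LinearOperator((d, d), matvec=lambda v: hvp(v) + damping * v)
    ihvp, info = cg(H_op, grad_train_point, maxiter=maxiter)   # solve (H + damping I) x = grad_z
    if info != 0:
        raise RuntimeError("Conjugate gradients did not converge")
    return -float(grad_test @ ihvp)
```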
Recent legislation has led to interest in machine unlearning, i.e., removing specific training samples from a predictive model as if they had never existed in the training dataset. Unlearning may also be required due to corrupted/adversarial data or simply a user's updated privacy requirements. For models that require no training (k-NN), simply deleting the closest original sample can be effective. But this idea does not apply to models that learn richer representations. Recent ideas leveraging optimization-based updates scale poorly with the model dimension d, because they invert the Hessian of the loss function. We use a variant of a new conditional independence coefficient, L-CODEC, to identify a subset of the model parameters with the most semantic overlap at the individual-sample level. Our approach completely avoids the need to invert a (possibly) huge matrix. By utilizing a Markov blanket selection, we premise that L-CODEC is also suitable for deep unlearning, as well as other applications in vision. Compared to alternatives, L-CODEC makes approximate unlearning possible in settings that would otherwise be infeasible, including vision models used for face recognition, person re-identification, and NLP models that may require unlearning of samples identified for exclusion. Code can be found at https://github.com/vsingh-group/lcodec-deep-unlearning/