Bagging predictors
Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy. The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy.
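A minimal sketch of the procedure, assuming scikit-learn's DecisionTreeRegressor as the unstable base learner (all names here are illustrative, not Breiman's original code):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def bagged_predict(X_train, y_train, X_test, n_replicates=50, seed=0):
    """Average the predictions of trees grown on bootstrap replicates."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    preds = []
    for _ in range(n_replicates):
        idx = rng.integers(0, n, size=n)          # bootstrap replicate of the learning set
        tree = DecisionTreeRegressor().fit(X_train[idx], y_train[idx])
        preds.append(tree.predict(X_test))
    return np.mean(preds, axis=0)                 # aggregate by averaging (regression)
```

For classification, the aggregation step would instead take a plurality vote over the trees' predicted classes.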
Random forests
Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, * * * , 148-156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
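A brief illustration using scikit-learn's implementation, which follows this design; `max_features` controls the random selection of features considered at each split, and the out-of-bag score plays the role of the internal error estimate (data set and settings are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Each tree sees a bootstrap sample; each split considers a random subset
# of the features (max_features), which lowers correlation between trees.
forest = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                                oob_score=True, random_state=0).fit(X, y)
print(forest.oob_score_)                 # internal (out-of-bag) accuracy estimate
print(forest.feature_importances_[:5])   # internal variable-importance measure
```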
Function estimation/approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization. A general gradient-descent "boosting" paradigm is developed for additive expansions based on any fitting criterion. Specific algorithms are presented for least-squares, least absolute deviation, and Huber-M loss functions for regression, and multiclass logistic likelihood for classification. Special enhancements are derived for the particular case where the individual additive components are regression trees, and tools for interpreting such "TreeBoost" models are presented. Gradient boosting of regression trees produces competitive, highly robust, interpretable procedures for both regression and classification, especially appropriate for mining less than clean data. Connections between this approach and the boosting methods of Freund and Schapire and of Friedman, Hastie and Tibshirani are discussed.
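A minimal sketch of the least-squares case, where the negative gradient of the loss is simply the current residual; this is an illustrative reimplementation of the idea, not Friedman's code:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def ls_treeboost(X, y, n_stages=100, learning_rate=0.1, max_depth=3):
    """Stagewise additive expansion: each stage fits a regression tree to the
    negative gradient of the loss; for least squares that is the residual."""
    F = np.full(len(y), y.mean())        # initial constant model
    trees = []
    for _ in range(n_stages):
        residual = y - F                  # negative gradient of (1/2)(y - F)^2
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        F += learning_rate * tree.predict(X)
        trees.append(tree)
    return y.mean(), trees

def boosted_predict(init, trees, X, learning_rate=0.1):
    return init + learning_rate * sum(t.predict(X) for t in trees)
```

Other loss functions (least absolute deviation, Huber, multiclass logistic) follow the same pattern with a different negative-gradient formula.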
This paper proposes a new tree-based ensemble method for supervised classification and regression problems. It essentially consists of randomizing strongly both attribute and cut-point choice while splitting a tree node. In the extreme case, it builds totally randomized trees whose structures are independent of the output values of the learning sample. The strength of the randomization can be tuned to problem specifics by the appropriate choice of a parameter. We evaluate the robustness of the default choice of this parameter, and we also provide insight on how to adjust it in particular situations. Besides accuracy, the main strength of the resulting algorithm is computational efficiency. A bias/variance analysis of the Extra-Trees algorithm is also provided as well as a geometrical and a kernel characterization of the models induced.
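As an illustration, scikit-learn's ExtraTreesRegressor implements this scheme; its `max_features` argument plays the role of the randomization-strength parameter (the paper's K), with the extreme `max_features=1` giving totally randomized trees:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor

X, y = make_regression(n_samples=1000, n_features=10, noise=1.0, random_state=0)

# At each node, max_features attributes are drawn at random and a random
# cut-point is evaluated for each; the best of these random splits is kept.
extra = ExtraTreesRegressor(n_estimators=100, max_features="sqrt",
                            random_state=0).fit(X, y)
print(extra.score(X, y))   # R^2 on the training data, for illustration only
```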
Cross-validation is a widely used technique for estimating prediction error, but its behavior is complex and not fully understood. Ideally, one would like to think that cross-validation estimates the prediction error of the model at hand, fit to the training data. We show that this is not the case for the linear model fit by ordinary least squares; rather, it estimates the average prediction error of models fit to other, unseen training sets drawn from the same population. We further show that this phenomenon occurs for most popular estimates of prediction error, including data splitting, bootstrapping, and Mallows' Cp. Next, the standard confidence intervals for prediction error derived from cross-validation may have coverage far below the desired level. Because each data point is used for both training and testing, there are correlations among the measured accuracies in each fold, and so the usual estimate of variance is too small. We introduce a nested cross-validation scheme to estimate this variance more accurately, and show empirically that this modification leads to intervals with approximately correct coverage in many examples where traditional cross-validation intervals fail.
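For concreteness, here is a sketch of the naive interval construction the abstract criticizes: per-fold errors are treated as independent, so the resulting standard error is too small. The nested scheme proposed in the paper corrects this; only the naive version is shown, and the data set is a placeholder:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

X, y = make_regression(n_samples=100, n_features=20, noise=5.0, random_state=0)

# Naive CV interval: treat the per-fold errors as independent. The paper
# shows this understates the variance, since folds share training points.
fold_errors = []
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train], y[train])
    fold_errors.append(np.mean((y[test] - model.predict(X[test])) ** 2))
fold_errors = np.array(fold_errors)
se_naive = fold_errors.std(ddof=1) / np.sqrt(len(fold_errors))
print(f"CV error: {fold_errors.mean():.2f} +/- {1.96 * se_naive:.2f} (naive)")
```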
We give examples of data-generating models under which Breiman's random forest may be extremely slow to converge to the optimal predictor, or even fail to be consistent. The evidence supporting these properties is based on mostly intuitive arguments, similar to those used earlier with simpler examples, and on numerical experiments. Although one can always choose models in which random forests perform very badly, we show that simple methods based on "variable use" and "variable importance" statistics can often be used to construct better predictors from random forests obtained by forcing initial splits on variables that the default version of the algorithm tends to ignore.
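A toy sketch of the forced-initial-split idea, assuming scikit-learn forests; a real implementation would choose the forced variable from variable-use/importance statistics rather than take it as given:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def forced_split_forest(X, y, forced_feature, threshold, **rf_kwargs):
    """Grow one forest per side of a forced initial split on a variable the
    default algorithm tends to ignore (assumes both sides are non-empty)."""
    left = X[:, forced_feature] <= threshold
    forests = {
        "left": RandomForestRegressor(**rf_kwargs).fit(X[left], y[left]),
        "right": RandomForestRegressor(**rf_kwargs).fit(X[~left], y[~left]),
    }
    def predict(X_new):
        out = np.empty(len(X_new))
        go_left = X_new[:, forced_feature] <= threshold
        out[go_left] = forests["left"].predict(X_new[go_left])
        out[~go_left] = forests["right"].predict(X_new[~go_left])
        return out
    return predict
```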
Conformal prediction uses past experience to determine precise levels of confidence in new predictions. Given an error probability ε, together with a method that makes a prediction ŷ of a label y, it produces a set of labels, typically containing ŷ, that also contains y with probability 1 − ε. Conformal prediction can be applied to any method for producing ŷ: a nearest-neighbor method, a support-vector machine, ridge regression, etc. Conformal prediction is designed for an on-line setting in which labels are predicted successively, each one being revealed before the next is predicted. The most novel and valuable feature of conformal prediction is that if the successive examples are sampled independently from the same distribution, then the successive predictions will be right 1 − ε of the time, even though they are based on an accumulating dataset rather than on independent datasets. In addition to the model under which successive examples are sampled independently, other on-line compression models can also use conformal prediction. The widely used Gaussian linear model is one of these. This tutorial presents a self-contained account of the theory of conformal prediction and works through several numerical examples. A more comprehensive treatment of the topic is provided in Algorithmic Learning in a Random World, by Vladimir Vovk, Alex Gammerman, and Glenn Shafer (Springer, 2005).
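A minimal sketch of the split-conformal variant, a simpler offline cousin of the on-line protocol described above, using ridge regression as the underlying predictor; function and variable names are illustrative:

```python
import numpy as np
from sklearn.linear_model import Ridge

def split_conformal_interval(X_train, y_train, X_cal, y_cal, X_new, alpha=0.1):
    """Calibrate absolute residuals on held-out data to get intervals with
    ~(1 - alpha) coverage under exchangeability (needs a calibration set
    large enough that the quantile index stays in range)."""
    model = Ridge().fit(X_train, y_train)
    scores = np.abs(y_cal - model.predict(X_cal))        # nonconformity scores
    k = int(np.ceil((1 - alpha) * (len(scores) + 1)))    # conformal quantile index
    q = np.sort(scores)[k - 1]
    pred = model.predict(X_new)
    return pred - q, pred + q
```

Under exchangeability of calibration and test points, intervals built this way contain the true label with probability at least 1 − alpha.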
We introduce a novel way of combining boosting with Gaussian process and mixed effects models. It allows, first, the zero-or-linear assumption on the prior mean function in Gaussian process and grouped random effects models to be relaxed in a flexible non-parametric way, and second, the independence assumption made in most boosting algorithms to be relaxed. The former is advantageous for prediction accuracy and for avoiding model misspecification; the latter is important for efficient learning of the fixed-effects predictor function and for obtaining probabilistic predictions. Our proposed algorithm is also a novel solution for handling high-cardinality categorical variables in tree boosting. In addition, we present an extension that scales the Gaussian process model to large data using a Vecchia approximation, relying on new results for covariance parameter inference. We obtain increased prediction accuracy compared with existing approaches on several simulated and real-world data sets.
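A toy backfitting sketch of the general idea, alternating a boosting step for the fixed-effects function with a shrunken group-mean update for the random effects; this is a deliberately simplified stand-in, not the paper's algorithm:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boosted_mixed_effects(X, y, groups, n_stages=50, lr=0.1, shrink=0.5):
    """Toy backfitting: alternate a boosting step for the fixed-effects
    function F(X) with a shrunken group-mean update for random effects b.
    `groups` is an integer array of group labels; `shrink` is a crude
    stand-in for the variance-based shrinkage of a real mixed model."""
    F = np.full(len(y), y.mean())
    b = np.zeros(len(y))
    trees = []
    for _ in range(n_stages):
        tree = DecisionTreeRegressor(max_depth=3).fit(X, y - F - b)
        F += lr * tree.predict(X)
        trees.append(tree)
        resid = y - F
        for g in np.unique(groups):            # shrunken per-group intercepts
            b[groups == g] = shrink * resid[groups == g].mean()
    return F, b, trees
```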
While methods for comparing two learning algorithms on a single data set have been scrutinized for quite some time already, the issue of statistical tests for comparisons of more algorithms on multiple data sets, which is even more essential to typical machine learning studies, has been all but ignored. This article reviews the current practice and then theoretically and empirically examines several suitable tests. Based on that, we recommend a set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparison of more classifiers over multiple data sets. Results of the latter can also be neatly presented with the newly introduced CD (critical difference) diagrams.
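Both recommended tests are available in SciPy; a small illustration with accuracy scores invented for the example:

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

# Accuracy of three classifiers on the same ten data sets (illustrative numbers).
acc_a = np.array([.81, .77, .90, .65, .84, .79, .88, .72, .69, .93])
acc_b = np.array([.78, .75, .91, .62, .80, .77, .85, .70, .66, .90])
acc_c = np.array([.80, .74, .89, .66, .82, .76, .86, .69, .67, .91])

# Two classifiers over multiple data sets: Wilcoxon signed-ranks test.
print(wilcoxon(acc_a, acc_b))

# More than two classifiers: Friedman test on per-data-set ranks, followed
# (if significant) by post-hoc tests such as the Nemenyi test.
print(friedmanchisquare(acc_a, acc_b, acc_c))
```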
Many scientific and engineering challenges, ranging from personalized medicine to customized marketing recommendations, require an understanding of treatment effect heterogeneity. In this paper, we develop a non-parametric causal forest for estimating heterogeneous treatment effects that extends Breiman's widely used random forest algorithm. In the potential outcomes framework with unconfoundedness, we show that causal forests are pointwise consistent for the true treatment effect, and have an asymptotically Gaussian and centered sampling distribution. We also discuss a practical method for constructing asymptotic confidence intervals for the true treatment effect that are centered at the causal forest estimates. Our theoretical results rely on a generic Gaussian theory for a large family of random forest algorithms. To our knowledge, this is the first set of results that allows any type of random forest, including classification and regression forests, to be used for provably valid statistical inference. In experiments, we find causal forests to be substantially more powerful than classical methods based on nearest-neighbor matching, especially in the presence of irrelevant covariates.
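For orientation, a simple T-learner baseline with ordinary random forests is sketched below; note this is not the paper's causal forest, which uses dedicated honest splitting and supports valid confidence intervals (implemented, for example, in the authors' grf R package):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def t_learner_cate(X, y, w, X_new):
    """T-learner baseline: fit one forest per treatment arm (w is a 0/1
    treatment indicator) and take the difference of predictions as the
    estimated conditional treatment effect."""
    mu1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[w == 1], y[w == 1])
    mu0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[w == 0], y[w == 0])
    return mu1.predict(X_new) - mu0.predict(X_new)
```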
Porosity has been identified as a key indicator of the durability properties of concrete exposed to aggressive environments. This paper applies ensemble learning to predict the porosity of high-performance concrete containing supplementary cementitious materials. The concrete samples used in this study are characterized by eight composition features, including the water/binder ratio, binder content, fly ash, GGBS, superplasticizer, coarse/fine aggregate ratio, curing condition, and curing days. The assembled database consists of 240 data records covering 74 unique concrete mix designs. The proposed machine learning algorithms are trained on 180 observations (75%) randomly selected from the data set and then tested on the remaining 60 observations (25%). Numerical experiments show that regression tree ensembles can accurately predict the porosity of concrete from its mix composition. Gradient boosted trees generally outperform random forests in prediction accuracy. For random forests, a hyperparameter tuning strategy based on the out-of-bag error is found to be more efficient than k-fold cross-validation.
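A sketch of the out-of-bag tuning strategy mentioned above, assuming scikit-learn's RandomForestRegressor and illustrative candidate values; each candidate costs one forest fit instead of one fit per fold:

```python
from sklearn.ensemble import RandomForestRegressor

def tune_rf_by_oob(X, y, candidate_max_features=(2, 4, 6, 8)):
    """Pick max_features by out-of-bag R^2: one forest per candidate value,
    with no refitting across folds as k-fold cross-validation would need
    (candidates must not exceed the number of features in X)."""
    best = None
    for m in candidate_max_features:
        rf = RandomForestRegressor(n_estimators=300, max_features=m,
                                   oob_score=True, random_state=0).fit(X, y)
        if best is None or rf.oob_score_ > best.oob_score_:
            best = rf
    return best
```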
Learning curves provide insight into the dependence of a learner's generalization performance on the training set size. This important tool can be used for model selection, to predict the effect of more training data, and to reduce the computational complexity of model training and hyperparameter tuning. This review recounts the origins of the term, provides a formal definition of the learning curve, and briefly covers basics such as its estimation. Our main contribution is a comprehensive overview of the literature regarding the shape of learning curves. We discuss empirical and theoretical evidence that supports well-behaved curves that often have the shape of a power law or an exponential. We consider the learning curves of Gaussian processes, the complex shapes they can display, and the factors influencing them. We draw specific attention to examples of learning curves that are ill-behaved, showing worse learning performance with more training data. To wrap up, we point out various open problems that warrant deeper empirical and theoretical investigation. All in all, our review underscores that learning curves are surprisingly diverse and no universal model can be identified.
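A short example of estimating a learning curve empirically with scikit-learn's learning_curve utility; the data set and model are placeholders:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=2000, random_state=0)

# Estimate test performance as a function of training set size.
sizes, _, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 8), cv=5)
for n, s in zip(sizes, test_scores.mean(axis=1)):
    print(f"n = {n:5d}   accuracy = {s:.3f}")
```

A power law or exponential can then be fitted to these points to extrapolate the effect of more training data.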
Over the past decades, various methods have been proposed for estimating prediction intervals in regression settings, including Bayesian methods, ensemble methods, direct interval estimation methods, and conformal prediction methods. An important issue is the calibration of these methods: the generated prediction intervals should attain a predefined coverage level without being overly conservative. In this work, we review the above four classes of methods from both a conceptual and an experimental point of view. Results on benchmark data sets from various domains highlight large fluctuations in performance from one data set to another. These observations may be attributed to violations of certain assumptions inherent to some classes of methods. We illustrate how conformal prediction can be used as a general calibration procedure for methods that do not come with a calibration step of their own.
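A minimal helper for the calibration check described above, comparing the empirical coverage of the produced intervals against the nominal level alongside the mean interval width (names are illustrative):

```python
import numpy as np

def empirical_coverage(y_true, lower, upper):
    """Fraction of test targets inside their prediction intervals; a
    calibrated method should be close to the nominal level without
    producing much wider intervals than necessary."""
    covered = (y_true >= lower) & (y_true <= upper)
    return covered.mean(), np.mean(upper - lower)   # coverage, mean width
```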
In this work, we present a comprehensive review of the basic ideas and novel developments of conformal prediction, a distribution-free, non-parametric approach to predictive inference that rests on minimal assumptions and yields, in a very simple way, prediction sets that are statistically valid even in the finite-sample case. The in-depth discussion provided in the paper covers the theoretical foundations of conformal prediction and then proceeds to more advanced developments and adaptations of the original idea.
Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. These areas include text processing of internet documents, gene expression array analysis, and combinatorial chemistry. The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data. The contributions of this special issue cover a wide range of aspects of such problems: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods.
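As a small example of one of these aspects, feature ranking, here is a univariate ranking by mutual information using scikit-learn; this is an illustrative choice, and the special issue covers many alternatives:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=500, n_features=50, n_informative=5,
                           random_state=0)

# Univariate feature ranking: score each variable, keep the top k.
scores = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(scores)[::-1][:5]
print("top features:", top_k)
```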
The purpose of this review is to introduce the reader to graph kernels and their application to classification problems in chemoinformatics. Graph kernels are functions that allow us to infer chemical properties of molecules and can help with tasks such as finding compounds suitable for drug design. The use of kernel methods is just one particular way of quantifying similarity between graphs. We restrict our discussion to this approach, although popular alternatives have emerged in recent years, most notably graph neural networks.
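The simplest instance of the idea is a vertex-histogram kernel, sketched below; practical chemoinformatics kernels (random-walk, Weisfeiler-Lehman, and others) are more elaborate but share this inner-product structure:

```python
import numpy as np

def vertex_histogram_kernel(labels_g1, labels_g2, n_labels):
    """Simplest graph kernel: count atom (vertex) labels in each molecule
    and take the dot product of the two label histograms."""
    h1 = np.bincount(labels_g1, minlength=n_labels)
    h2 = np.bincount(labels_g2, minlength=n_labels)
    return float(h1 @ h2)

# Two toy molecules encoded as integer vertex labels (e.g. 0=C, 1=O, 2=N).
print(vertex_histogram_kernel(np.array([0, 0, 1, 2]),
                              np.array([0, 1, 1, 2]), n_labels=3))
```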
Precision medicine, which provides patients with treatments tailored to their characteristics, is a promising approach to improving treatment effectiveness. Large-scale omics data are useful for patient characterization, but their measurements often change over time, giving rise to longitudinal data. Random forests are among the state-of-the-art machine learning methods for building prediction models and can play a key role in precision medicine. In this paper, we review extensions of the standard random forest method for the analysis of longitudinal data. The extended methods are categorized according to the data structures for which they are designed. We consider both univariate and multivariate responses, and further categorize the repeated measurements according to whether the time effect is relevant. Information on available software implementations of the reviewed extensions is also provided. Finally, we discuss the limitations of our review and some directions for future research.
This chapter is dedicated to the assessment and performance estimation of machine learning (ML) algorithms, a topic that is equally important to the construction of these algorithms, in particular in the context of cyberphysical security design. The literature is full of nonparametric methods to estimate a statistic from just one available dataset through resampling techniques, e.g., jackknife, bootstrap and cross validation (CV). Special statistics of great interest are the error rate and the area under the ROC curve (AUC) of a classification rule. The importance of these resampling methods stems from the fact that they require no knowledge about the probability distribution of the data or the construction details of the ML algorithm. This chapter provides a concise review of this literature to establish a coherent theoretical framework for these methods that can estimate both the error rate (a one-sample statistic) and the AUC (a two-sample statistic). The resampling methods are usually computationally expensive, because they rely on repeating the training and testing of a ML algorithm after each resampling iteration. Therefore, the practical applicability of some of these methods may be limited to the traditional ML algorithms rather than the very computationally demanding approaches of the recent deep neural networks (DNN). In the field of cyberphysical security, many applications generate structured (tabular) data, which can be fed to all traditional ML approaches. This is in contrast to the DNN approaches, which favor unstructured data, e.g., images, text, voice, etc.; hence, the relevance of this chapter to this field.
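A sketch of one such resampling estimator, the leave-one-out bootstrap estimate of the error rate, with a decision tree standing in for an arbitrary ML algorithm:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bootstrap_error(X, y, n_boot=200, seed=0):
    """Leave-one-out bootstrap: train on each resample, test on the
    observations left out of that resample, then average the error rates."""
    rng = np.random.default_rng(seed)
    n = len(y)
    errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)            # bootstrap resample
        out = np.setdiff1d(np.arange(n), idx)       # left-out observations
        if len(out) == 0:
            continue
        clf = DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx])
        errs.append(np.mean(clf.predict(X[out]) != y[out]))
    return float(np.mean(errs))
```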
In this paper we investigate the use of the receiver operating characteristic (ROC) curve for the evaluation of machine learning algorithms. In particular, we investigate the use of the area under the ROC curve (AUC) as a measure of classifier performance. The machine learning algorithms used are chosen to be representative of those in common use: two decision trees (C4.5 and Multiscale Classifier), two neural networks (Perceptron and Multi-layer Perceptron), and two statistical methods (K-Nearest Neighbours and a Quadratic Discriminant Function). The evaluation is done using six "real world" medical diagnostics data sets that contain varying numbers of inputs and samples, but are primarily continuous input, binary classification problems. We identify three forms of bias that can affect comparisons of this type (estimation, selection, and expert bias) and detail the methods used to avoid them. We compare and discuss the use of AUC with the conventional measure of classifier performance, overall accuracy (the probability of a correct response). It is found that AUC exhibits a number of desirable properties when compared to overall accuracy: increased sensitivity in Analysis of Variance (ANOVA) tests; a standard error that decreases as both AUC and the number of test samples increase; independence of the decision threshold; invariance to a priori class probabilities; and an indication of the amount of "work done" by a classification scheme, giving low scores to both random and "one class only" classifiers. It has been known for some time that AUC actually represents the probability that a randomly chosen positive example is correctly rated (ranked) with greater suspicion than a randomly chosen negative example. Moreover, this probability of correct ranking is the same quantity estimated by the non-parametric Wilcoxon statistic. We use this equivalence to show that the standard deviation of AUC, estimated using 10-fold cross-validation, is a reliable estimator of the standard error estimated using the Wilcoxon test. The paper concludes with the recommendation that AUC be used in preference to overall accuracy when "single number" evaluation of machine learning algorithms is required.
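The ranking equivalence mentioned above can be made concrete in a few lines: AUC equals the normalized Mann-Whitney U statistic computed from the ranks of the classifier's scores (the scores and labels below are invented for illustration):

```python
import numpy as np
from scipy.stats import rankdata

def auc_by_ranks(scores, labels):
    """AUC as the probability that a random positive is ranked above a
    random negative: the normalized Mann-Whitney/Wilcoxon statistic."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    ranks = rankdata(np.concatenate([pos, neg]))       # ties get mid-ranks
    rank_sum_pos = ranks[: len(pos)].sum()
    u = rank_sum_pos - len(pos) * (len(pos) + 1) / 2   # Mann-Whitney U
    return u / (len(pos) * len(neg))

scores = np.array([.9, .8, .7, .6, .55, .5, .4, .3])
labels = np.array([1, 1, 0, 1, 0, 0, 1, 0])
print(auc_by_ranks(scores, labels))   # 0.75, matching sklearn's roc_auc_score
```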
This paper computationally demonstrates a sharp improvement in predictive performance for $k$ nearest neighbors thanks to an efficient forward selection of the predictor variables. We show on both simulated and real-world data that this novel approach repeatedly outperforms regression models under stepwise selection.
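A hedged sketch of forward selection for kNN, scoring candidate variables by cross-validation; the paper's actual selection criterion and stopping rule may differ:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

def forward_select_knn(X, y, k=5, max_features=5):
    """Greedy forward selection: add the predictor that most improves
    cross-validated kNN performance; stop when no candidate helps."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining and len(selected) < max_features:
        scores = {j: cross_val_score(KNeighborsRegressor(n_neighbors=k),
                                     X[:, selected + [j]], y, cv=5).mean()
                  for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_score:
            break
        best_score = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected
```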