Resource-constrained classification tasks are common in practice, for example allocating tests for disease diagnosis, making hiring decisions when filling a limited number of positions, and detecting defects in a manufacturing setting under a limited inspection budget. Typical classification algorithms treat the learning process and the resource constraint as two separate, sequential tasks. Here we design an adaptive learning approach that considers the resource constraint and learning jointly by iteratively fine-tuning misclassification costs. Through a structured experimental study using publicly available datasets, we evaluate a decision tree classifier that employs the proposed approach. The adaptive learning approach performs significantly better than alternatives, especially for difficult classification problems on which ordinary approaches may perform unsatisfactorily. We envision the adaptive learning approach as an important addition to the repertoire of techniques for handling resource-constrained classification problems.
translated by Google Translate
We design a novel adaptive learning algorithm for misclassification cost problems, which attempts to reduce the cost of misclassified instances arising from the varying consequences of errors. Our algorithm (Adaptive Cost-Sensitive Learning, AdaCSL) adaptively adjusts the loss function so that the classifier bridges the difference between the class distributions of sample subgroups with similar predicted probabilities in the training and test datasets (i.e., local training-test class distribution mismatch). We provide some theoretical performance guarantees for the proposed algorithm and present empirical evidence that a deep neural network used with the proposed AdaCSL algorithm yields better cost results on several binary classification datasets with class-imbalanced and class-balanced distributions compared with alternative approaches.
Receiver operating characteristics (ROC) graphs are useful for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in recent years have been used increasingly in machine learning and data mining research. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls when using them in practice. The purpose of this article is to serve as an introduction to ROC graphs and as a guide for using them in research.
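As a companion to the ROC overview above, the basic construction of an ROC graph can be sketched in a few lines: sort examples by score and sweep the decision threshold, recording one (FPR, TPR) point per example. This is an illustrative minimal implementation, not code from the article; it assumes binary 0/1 labels with both classes present and gives each example its own threshold (score ties are not merged).

```python
def roc_points(scores, labels):
    """Compute (FPR, TPR) pairs by sweeping the decision threshold
    over the scores, highest first, so the predicted-positive set
    grows one example at a time."""
    pos = sum(labels)
    neg = len(labels) - pos
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    points = [(0.0, 0.0)]           # threshold above every score
    for i in order:
        if labels[i] == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

# A tiny example: two positives, two negatives.
pts = roc_points([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 0])
```

Plotting these points (FPR on the x-axis, TPR on the y-axis) yields the ROC graph the article describes.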
Fraud detection is a challenging task because fraud patterns change over time and the availability of fraud examples from which to learn such complex patterns is limited. Fraud detection with intelligent machine learning (ML) tools is therefore essential for ensuring safety. Fraud detection is chiefly an ML classification task; however, the optimum performance of the corresponding ML tool depends on the use of optimal hyperparameter values. Moreover, classification under imbalanced classes is quite challenging, as it causes poor performance on the minority class, which most ML classification techniques ignore. We therefore investigate four state-of-the-art ML techniques, namely logistic regression, decision trees, random forest, and extreme gradient boosting, that are suitable for handling imbalanced classes to maximize precision while simultaneously reducing false positives. First, these classifiers are trained on two original benchmark imbalanced fraud detection datasets, namely phishing website URLs and fraudulent credit card transactions. Then, three synthetically balanced datasets are produced for each original dataset by implementing the sampling frameworks RandomUnderSampler, SMOTE, and SMOTEENN. The optimal hyperparameters for all 16 experiments are revealed using the RandomizedSearchCV method. The effectiveness of the 16 approaches to fraud detection is compared using two benchmark performance metrics, namely the area under the receiver operating characteristic curve (AUC ROC) and the area under the precision-recall curve (AUC PR). For both the phishing website URL and credit card fraud transaction datasets, the results show that extreme gradient boosting on the original data exhibits trustworthy performance on the imbalanced datasets and outperforms the other three approaches in terms of both AUC ROC and AUC PR.
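The abstract above compares models by the area under the precision-recall curve (AUC PR). One common estimator of that area is average precision, which averages the precision observed at each true-positive rank. The sketch below is a minimal, independent illustration of that estimator, not the authors' evaluation code; it assumes binary 0/1 labels and no score ties.

```python
def average_precision(scores, labels):
    """Average-precision estimate of the area under the
    precision-recall curve: average the precision at each
    rank where a true positive is retrieved."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    pos = sum(labels)
    tp = 0
    ap = 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            tp += 1
            ap += tp / rank        # precision at this recall step
    return ap / pos
```

For a perfect ranking the estimator returns 1.0; the worse the positives are ranked, the closer it falls toward the positive-class prevalence.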
In a cosmological simulation, the properties of the matter density field in the initial conditions have a decisive impact on the features of the structures formed today. In this paper we use a random forest classification algorithm to infer whether dark matter particles, traced back to the initial conditions, will end up in dark matter halos whose mass lies above some threshold. The problem may be posed as a binary classification task, in which the initial conditions of the matter density field are mapped to classification labels provided by a halo finder program. Our results show that random forests are effective tools for predicting the output of cosmological simulations without running the full process. These techniques may be used in the future to reduce computation time and to explore more efficiently the effect that different dark matter/dark energy candidates have on the formation of cosmological structures.
Binary classification with imbalanced datasets is challenging: models tend to treat all samples as belonging to the majority class. Although existing solutions such as sampling methods, cost-sensitive methods, and ensemble learning improve accuracy on the minority class, these methods are limited by overfitting problems or cost parameters that are difficult to decide. We propose HADR, a hybrid approach with dimension reduction that consists of data block construction, dimensionality reduction, and ensemble learning with deep neural network classifiers. We evaluate its performance on eight imbalanced public datasets in terms of recall, G-mean, and AUC. The results show that our model outperforms state-of-the-art methods.
The costs and impacts of government corruption range from impairing a country's economic growth to affecting its citizens' well-being and safety. Public contracting between government dependencies and private sector instances, referred to as public procurement, is a fertile land of opportunity for corrupt practices, generating substantial monetary losses worldwide. Thus, identifying and deterring corrupt activities between the government and the private sector is paramount. However, due to several factors, corruption in public procurement is challenging to identify and track, leading to corrupt practices going unnoticed. This paper proposes a machine learning model based on an ensemble of random forest classifiers, which we call hyper-forest, to identify and predict corrupt contracts in México's public procurement data. This method's results correctly detect most of the corrupt and non-corrupt contracts evaluated in the dataset. Furthermore, we found that the most critical predictors considered in the model are those related to the relationship between buyers and suppliers rather than those related to features of individual contracts. Also, the method proposed here is general enough to be trained with data from other countries. Overall, our work presents a tool that can help in the decision-making process to identify, predict and analyze corruption in public procurement contracts.
Class imbalance poses a major challenge for machine learning, as most supervised learning models may exhibit bias toward the majority class and underperform on the minority class. Cost-sensitive learning tackles this problem by treating the classes differently, usually through a user-defined fixed misclassification cost matrix provided as input to the learner. Such parameter tuning is a challenging task that requires domain knowledge; moreover, wrong adjustments may lead to deterioration of overall predictive performance. In this work, we propose a novel cost-sensitive approach for imbalanced data that dynamically adjusts the misclassification costs in response to the model's performance, rather than using a fixed misclassification cost matrix. Our approach, called AdaCC, is parameter-free, as it relies on the cumulative behavior of the boosting model in order to adjust the misclassification costs for the next boosting round, and it comes with theoretical guarantees regarding the training error. Experiments on 27 real-world datasets from different domains demonstrate the superiority of our approach over 12 state-of-the-art cost-sensitive methods, with consistent improvements across different measures: [0.3%-28.56%] for AUC, [3.4%-21.4%] for balanced accuracy, [4.8%-45%] for G-mean, and [7.4%-85.5%] for recall.
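For contrast with the dynamic cost adjustment proposed above, the fixed-cost-matrix baseline that the paper argues against can be summarized by the standard cost-sensitive decision rule: with false-positive cost c_fp and false-negative cost c_fn, the expected-cost-minimizing classifier predicts positive whenever the estimated positive probability p satisfies p >= c_fp / (c_fp + c_fn). A minimal sketch follows; the function names are illustrative and not from the paper.

```python
def cost_sensitive_threshold(c_fp, c_fn):
    """Decision threshold minimizing expected misclassification cost
    for a fixed 2x2 cost matrix with zero cost on correct decisions:
    predict positive when p >= c_fp / (c_fp + c_fn)."""
    return c_fp / (c_fp + c_fn)

def predict_with_costs(probs, c_fp, c_fn):
    """Apply the fixed-cost decision rule to predicted probabilities."""
    t = cost_sensitive_threshold(c_fp, c_fn)
    return [1 if p >= t else 0 for p in probs]
```

With equal costs the rule reduces to the usual 0.5 threshold; making false negatives nine times as costly lowers the threshold to 0.1, so far more borderline cases are flagged positive. AdaCC replaces this static choice with costs updated per boosting round.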
This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the United States Internal Revenue Service (IRS). While the field of algorithmic fairness has developed primarily around notions of treating like individuals alike, we instead explore the concept of vertical equity, that is, appropriately accounting for relevant differences across individuals, which is a central component of fairness in many public policy settings. Applied to the design of the U.S. individual income tax system, vertical equity relates to the fair distribution of tax and enforcement burdens across taxpayers of different income levels. Through a unique collaboration with the Treasury Department and the IRS, we use anonymized individual taxpayer microdata, risk-selected audits, and random audits from 2010-14 to study vertical equity in tax administration. In particular, we assess how the use of modern machine learning methods for selecting audits may affect vertical equity. First, we show how the use of more flexible machine learning (classification) methods, as opposed to simpler models, shifts audit burdens from high-income to middle-income taxpayers. Second, we show that although existing algorithmic fairness techniques can mitigate some disparities across income levels, they can incur a steep cost to performance. Third, we show that the choice of whether to treat the risk of underreporting as a classification or a regression problem is highly consequential: switching from classification to regression models for predicting underreporting shifts audits substantially toward high-income individuals while increasing revenue. Last, we explore the role of differential audit cost in shaping the audit distribution, and we show that a narrow focus on return on investment can undermine vertical equity. Our results have implications for the design of algorithmic tools across the public sector.
The rise of algorithmic decision-making has spawned much research on fair machine learning (ML). Financial institutions use ML to build risk scorecards that support a range of credit-related decisions. Yet the literature on fair ML in credit scoring is scarce. The paper makes three contributions. First, we revisit statistical fairness criteria and examine their adequacy for credit scoring. Second, we catalog algorithmic options for incorporating fairness goals into the ML model development pipeline. Last, we empirically compare different fairness processors in a profit-oriented credit scoring context using real-world data. The empirical results substantiate the evaluation of fairness measures, identify suitable options to implement fair credit scoring, and clarify the profit-fairness trade-off in lending decisions. We find that multiple fairness criteria can be approximately satisfied at once and recommend separation as a proper criterion for measuring the fairness of a scorecard. We also find fair in-processors to deliver a good balance between profit and fairness and show that algorithmic discrimination can be reduced to a reasonable level at a relatively low cost. The code corresponding to the paper is available on GitHub.
Using algorithm-agnostic approaches in machine learning to explain the contribution of individual features toward the predicted outcome is an emerging field. While the focus has been placed on explaining the prediction itself, little has been done to explain the robustness of these models, that is, how each feature contributes toward achieving that robustness. In this paper, we propose the use of Shapley values to explain the contribution of each feature toward the model's robustness, measured in terms of the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC). With the help of an illustrative example, we demonstrate the proposed idea of explaining the ROC curve and of visualizing uncertainties in these curves. For imbalanced datasets, the use of the precision-recall curve (PRC) is considered more appropriate, so we also demonstrate how to explain the PRC with the help of Shapley values.
Scoring systems, as predictive models, have significant advantages in interpretability and transparency and facilitate quick decision-making. As such, scoring systems have been extensively used in a wide variety of industries such as healthcare and criminal justice. However, the fairness issues in these models have long been criticized, and the use of big data and machine learning algorithms in the construction of scoring systems heightens this concern. In this paper, we propose a general framework to create fairness-aware, data-driven scoring systems. First, we develop a social welfare function that incorporates both efficiency and group fairness. Then, we transform the social welfare maximization problem into a risk minimization task in machine learning and derive a fairness-aware scoring system with the help of mixed-integer programming. Last, several theoretical bounds are derived to provide guidance for parameter selection. Our proposed framework provides a suitable solution to address group fairness concerns in the development process. It enables policymakers to set and customize their desired fairness requirements as well as other application-specific constraints. We test the proposed algorithm with several empirical datasets. Experimental evidence supports the effectiveness of the proposed scoring system in achieving the optimal welfare of stakeholders and in balancing the needs for interpretability, fairness, and efficiency.
In this paper we investigate the use of the receiver operating characteristic (ROC) curve for the evaluation of machine learning algorithms. In particular, we investigate the use of the area under the ROC curve (AUC) as a measure of classifier performance. The machine learning algorithms used are chosen to be representative of those in common use: two decision trees (C4.5 and Multiscale Classifier), two neural networks (Perceptron and Multi-layer Perceptron), and two statistical methods (K-Nearest Neighbours and a Quadratic Discriminant Function). The evaluation is done using six "real world" medical diagnostics data sets that contain varying numbers of inputs and samples, but are primarily continuous-input, binary classification problems. We identify three forms of bias that can affect comparisons of this type (estimation, selection, and expert bias) and detail the methods used to avoid them. We compare and discuss the use of AUC with the conventional measure of classifier performance, overall accuracy (the probability of a correct response). It is found that AUC exhibits a number of desirable properties when compared to overall accuracy: increased sensitivity in Analysis of Variance (ANOVA) tests; a standard error that decreases as both AUC and the number of test samples increase; independence of the decision threshold; invariance to a priori class probabilities; and an indication of the amount of "work done" by a classification scheme, giving low scores to both random and "one class only" classifiers. It has been known for some time that AUC actually represents the probability that a randomly chosen positive example is correctly rated (ranked) with greater suspicion than a randomly chosen negative example. Moreover, this probability of correct ranking is the same quantity estimated by the non-parametric Wilcoxon statistic. We use this equivalence to show that the standard deviation of AUC, estimated using 10-fold cross validation, is a reliable estimator of the standard error estimated using the Wilcoxon test. The paper concludes with the recommendation that AUC be used in preference to overall accuracy when "single number" evaluation of machine learning algorithms is required.
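The equivalence noted above, AUC as the probability that a randomly chosen positive is ranked above a randomly chosen negative, and its identity with the Wilcoxon/Mann-Whitney rank-sum statistic, can be checked directly. The sketch below is an independent illustration (not the paper's code): the first function counts pairwise ranking wins (ties count one half), and the second computes the same quantity from midranks via AUC = (R_pos - n_pos(n_pos+1)/2) / (n_pos * n_neg).

```python
def auc_pairwise(scores, labels):
    """P(random positive scored above random negative); ties = 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auc_ranksum(scores, labels):
    """Same quantity via the Wilcoxon/Mann-Whitney rank sum,
    using midranks to handle ties."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1                        # extend the run of tied scores
        midrank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = midrank
        i = j + 1
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    r_pos = sum(r for r, y in zip(ranks, labels) if y == 1)
    return (r_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Both functions agree on any scored sample, which is exactly the equivalence the paper exploits to relate the variability of AUC to the Wilcoxon test.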
Deep neural networks (DNNs) are notorious for making more mistakes on classes whose samples are heavily underrepresented during training. Such class imbalance is widespread in clinical applications and very important to handle, because the classes with fewer samples most often correspond to critical cases (e.g., cancer) where misclassification can have severe consequences. In order not to miss such cases, binary classifiers need to be operated at high true positive rates (TPRs) by setting a higher decision threshold, but this comes at the cost of very high false positive rates (FPRs) for problems with class imbalance. Existing methods for handling class imbalance typically do not take this into account. We argue that prediction accuracy should be improved by emphasizing the reduction of FPRs at high TPRs for problems where misclassifying the positive (i.e., critical) class samples is associated with a higher cost. To this end, we pose the training of a DNN for binary classification as a constrained optimization problem and introduce a novel constraint that can be used to enforce a maximal area under the ROC curve (AUC) by prioritizing FPR reduction at high TPRs. We solve the resulting constrained optimization problem using an Augmented Lagrangian method (ALM). Going beyond the binary case, we also propose two possible extensions of the proposed constraint to multi-class classification problems. We present experimental results for image-based binary and multi-class classification applications using an in-house medical imaging dataset, CIFAR10, and CIFAR100. Our results demonstrate that the proposed method improves accuracy on the critical classes in the majority of cases while reducing the misclassification rate for non-critical class samples.
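The operating-point trade-off described above, raising the decision threshold to reach a high TPR at the price of a high FPR, can be illustrated with a small threshold-selection sketch. This is not the paper's constrained-optimization method; it only demonstrates the baseline behavior the paper seeks to improve, and the function name is illustrative.

```python
import math

def threshold_for_tpr(scores, labels, target_tpr):
    """Pick the highest threshold that classifies enough top-scoring
    positives to reach target_tpr, then report the FPR paid for it."""
    pos_scores = sorted((s for s, y in zip(scores, labels) if y == 1),
                        reverse=True)
    k = math.ceil(target_tpr * len(pos_scores))
    t = pos_scores[k - 1]            # predict positive when score >= t
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= t)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= t)
    neg = sum(1 for y in labels if y == 0)
    return t, tp / len(pos_scores), fp / neg
```

On imbalanced data with overlapping score distributions, pushing the target TPR higher forces the threshold down and the achieved FPR up, which is exactly the regime where the paper's constraint prioritizes FPR reduction.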
This paper presents a novel adaptive synthetic (ADASYN) sampling approach for learning from imbalanced data sets. The essential idea of ADASYN is to use a weighted distribution for different minority class examples according to their level of difficulty in learning, where more synthetic data is generated for minority class examples that are harder to learn compared to those minority examples that are easier to learn. As a result, the ADASYN approach improves learning with respect to the data distributions in two ways: (1) reducing the bias introduced by the class imbalance, and (2) adaptively shifting the classification decision boundary toward the difficult examples. Simulation analyses on several machine learning data sets show the effectiveness of this method across five evaluation metrics.
translated by 谷歌翻译
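The difficulty weighting at the heart of ADASYN (compute, for each minority example, the ratio of majority points among its k nearest neighbours; normalize these ratios; allocate the G synthetic samples proportionally) can be sketched as follows. This is a simplified illustration of the published algorithm's allocation step, not the authors' implementation; the brute-force neighbour search is for clarity only.

```python
import numpy as np

def adasyn_allocation(X_min, X_maj, k=5, beta=1.0):
    """ADASYN allocation sketch: weight each minority example by the
    share of majority points among its k nearest neighbours, then
    allocate G = beta * (n_maj - n_min) synthetic samples in
    proportion to the normalized weights."""
    X_all = np.vstack([X_min, X_maj])     # rows 0..n_min-1 are minority
    n_min = len(X_min)
    r = np.empty(n_min)
    for i, x in enumerate(X_min):
        d = np.linalg.norm(X_all - x, axis=1)
        d[i] = np.inf                     # exclude the point itself
        nn = np.argsort(d)[:k]
        r[i] = np.mean(nn >= n_min)       # fraction of majority neighbours
    r = r / r.sum() if r.sum() > 0 else np.full(n_min, 1.0 / n_min)
    G = int(beta * (len(X_maj) - n_min))
    return np.round(r * G).astype(int)    # synthetic count per example
```

Minority examples surrounded by majority points (the hard-to-learn ones) receive proportionally more synthetic samples, which is the paper's mechanism for shifting the decision boundary toward the difficult examples.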
Learning classifiers from skewed or imbalanced datasets can lead to serious classification issues. In some cases, one class contains the majority of the examples while the other, which is frequently the more important class, is represented by only a small proportion of examples. Training on this kind of data can render many carefully designed machine-learning systems ineffective, because high overall training accuracy can mask a bias against all instances of the minority class. The best remedies for this issue typically aim to improve performance on the minority class. The article examines the most widely used methods for addressing the problem of learning with a class imbalance, including data-level, algorithm-level, hybrid, cost-sensitive learning, and deep learning approaches, together with their advantages and limitations. The efficiency and performance of the classifiers are assessed using a wide range of evaluation metrics.
An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of "normal" examples with only a small percentage of "abnormal" or "interesting" examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
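The core of the method described above, creating synthetic minority class examples by interpolating between a minority example and one of its k nearest minority neighbours, can be sketched in plain Python. This is an illustrative simplification, not the original implementation; it uses brute-force neighbour search and a fixed random seed for reproducibility.

```python
import random

def smote(X_min, n_synthetic, k=5, seed=0):
    """Generate synthetic minority examples: pick a minority example,
    pick one of its k nearest minority neighbours, and place a new
    point at a random position along the line segment between them."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.randrange(len(X_min))
        x = X_min[i]
        # k nearest minority neighbours of x (excluding x itself)
        others = sorted(
            (j for j in range(len(X_min)) if j != i),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(X_min[j], x)))
        nbr = X_min[rng.choice(others[:k])]
        gap = rng.random()                       # position along the segment
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nbr)))
    return synthetic
```

Because each synthetic point lies between two real minority examples, oversampling fills in the minority region of feature space instead of merely replicating existing points, which is what distinguishes SMOTE from sampling with replacement.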
Network intrusion detection systems (NIDSs) play an important role in computer network security. There are several detection mechanisms, among which anomaly-based automated detection outperforms the others significantly. Amid the sophistication and growing number of attacks, dealing with large amounts of data is a recognized issue in the development of anomaly-based NIDS. However, do current models meet the needs of today's networks in terms of required accuracy and dependability? In this research, we propose a new hybrid model that combines machine learning and deep learning to increase detection rates while securing dependability. Our proposed method ensures efficient pre-processing by combining SMOTE for data balancing and XGBoost for feature selection. We compared our developed method to various machine learning and deep learning algorithms to find a more efficient algorithm to implement in the pipeline. Furthermore, we chose the most effective model for network intrusion based on a set of benchmarked performance analysis criteria. Our method produces excellent results when tested on two datasets, KDDCUP'99 and CIC-MalMem-2022, with an accuracy of 99.99% and 100% for KDDCUP'99 and CIC-MalMem-2022, respectively, and with no overfitting or Type-I and Type-II error issues.
We study fairness in the context of classification, where performance is measured by the area under the curve (AUC) of the receiver operating characteristic. AUC is commonly used when both Type I (false positive) and Type II (false negative) errors are important. However, the same classifier can have significantly varying AUCs for different protected groups, and in real-world applications it is often desirable to reduce such cross-group differences. We address the question of how to select additional features to most greatly improve the AUC for the disadvantaged group. Our results show that the unconditional variance of a feature does not inform us about AUC fairness, but the class-conditional variance does. Using this connection, we develop a novel approach, fairAUC, based on feature augmentation (adding features) to mitigate bias between identifiable groups. We evaluate fairAUC on synthetic and real-world (COMPAS) datasets and find that it significantly improves the AUC for the disadvantaged group relative to benchmarks that maximize overall AUC or minimize bias between groups.
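The cross-group AUC disparity that motivates the work above can be measured directly: compute the pairwise-ranking AUC separately within each protected group and report the gap between the best- and worst-off groups. A minimal sketch follows; the function name is illustrative and not from the paper, and each group is assumed to contain both classes.

```python
def group_auc_gap(scores, labels, groups):
    """Per-group AUC and the spread between the best- and
    worst-off protected groups."""
    def auc(s, y):
        pos = [a for a, b in zip(s, y) if b == 1]
        neg = [a for a, b in zip(s, y) if b == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))
    per_group = {}
    for g in set(groups):
        s = [a for a, gg in zip(scores, groups) if gg == g]
        y = [b for b, gg in zip(labels, groups) if gg == g]
        per_group[g] = auc(s, y)
    return per_group, max(per_group.values()) - min(per_group.values())
```

A gap near zero indicates the classifier ranks cases equally well within each group; fairAUC targets the group with the smaller per-group AUC when choosing features to add.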
This paper considers the learning of Boolean rules in either disjunctive normal form (DNF, OR-of-ANDs, equivalent to decision rule sets) or conjunctive normal form (CNF, AND-of-ORs) as an interpretable model for classification. An integer program is formulated to optimally trade classification accuracy for rule simplicity. We also consider the fairness setting and extend the formulation to include explicit constraints on two different measures of classification parity: equality of opportunity and equalized odds. Column generation (CG) is used to efficiently search over an exponential number of candidate clauses (conjunctions or disjunctions) without the need for heuristic rule mining. This approach also bounds the gap between the selected rule set and the best possible rule set on the training data. To handle large datasets, we propose an approximate CG algorithm using randomization. Compared to three recently proposed alternatives, the CG algorithm dominates the accuracy-simplicity trade-off in 8 out of 16 datasets. When maximized for accuracy, CG is competitive with rule learners designed for that purpose, sometimes finding significantly simpler solutions that are no less accurate. Compared to other fair and interpretable classifiers, our method is able to find rule sets that meet stricter notions of fairness with a modest trade-off in accuracy.