Recommendation Systems (RSs) are ubiquitous in modern society and are one of the largest points of interaction between humans and AI. Modern RSs are often implemented using deep learning models, which are infamously difficult to interpret. This problem is particularly exacerbated in recommendation scenarios, as it erodes the user's trust in the RS. In contrast, the recently introduced Tsetlin Machines (TMs) possess valuable properties owing to their inherent interpretability. TMs are still a fairly young technology, and since no RS based on TMs has been developed before, preliminary research into the practicality of such a system is needed. In this paper, we develop the first TM-based RS to evaluate its practicality in this application domain, and we compare the viability of TMs with other machine learning models prevalent in the RS field. We train and investigate the performance of the TM compared with a vanilla feed-forward deep learning model, evaluating model performance, interpretability/explainability, and scalability. Further, we provide benchmark performance comparisons to similar machine learning solutions relevant to RSs.
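As a purely illustrative companion to the comparison described above, the sketch below contrasts a Tsetlin Machine classifier with a small feed-forward network on synthetic, binarized interaction data. It is not the paper's code: the pyTsetlinMachine import, the (clauses, T, s) constructor arguments, and the toy "user likes item" data are all assumptions.

```python
import numpy as np
from pyTsetlinMachine.tm import MultiClassTsetlinMachine   # assumed package and API
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical recommendation data: binary user/item indicator features,
# label 1 = the user interacted positively with the item. TMs expect Boolean inputs.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 40)).astype(np.uint32)
y = (X[:, :5].sum(axis=1) > 2).astype(np.uint32)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Tsetlin Machine: interpretable clause-based classifier (arguments: clauses, T, s).
tm = MultiClassTsetlinMachine(200, 15, 3.0)
tm.fit(X_train, y_train, epochs=50)
print("TM accuracy:", (tm.predict(X_test) == y_test).mean())

# Vanilla feed-forward baseline for the comparison.
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
mlp.fit(X_train, y_train)
print("MLP accuracy:", mlp.score(X_test, y_test))
```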
The application of Artificial Intelligence (AI) and Machine Learning (ML) to cybersecurity challenges has gained traction in industry and academia, partly driven by widespread malware attacks on critical systems such as cloud infrastructure and government institutions. Intrusion Detection Systems (IDS) that use some form of AI have seen wide adoption due to their ability to handle large volumes of data with high prediction accuracy. These systems are hosted in an organization's Cyber Security Operations Center (CSOC) as a defensive tool to monitor and detect malicious network flows that would otherwise compromise confidentiality, integrity, and availability (CIA). CSOC analysts rely on these systems to make decisions about detected threats. However, IDSs designed with Deep Learning (DL) techniques are often treated as black-box models and provide no justification for their predictions. This creates a barrier for CSOC analysts, as they cannot improve their decisions based on the model's predictions. One solution to this problem is to design explainable IDS (X-IDS). This survey reviews the state of the art in explainable AI (XAI) for IDS, its current challenges, and discusses how these challenges relate to the design of X-IDS. In particular, we comprehensively discuss black-box and white-box approaches. We also present the trade-offs between these approaches in terms of performance and the ability to produce explanations. In addition, we propose a generic human-in-the-loop architecture that can serve as a guideline when designing X-IDS. Research recommendations are given from three critical viewpoints: the need to define explainability for IDS, the need for explanations tailored to various stakeholders, and the need to design metrics to evaluate explanations.
Recommender systems are ubiquitous in most of our interactions in the current digital world. Whether shopping for clothes, scrolling through YouTube for exciting videos, or searching for restaurants in a new city, recommender systems power these services at the back end. Most large-scale recommender systems are huge models trained on extensive datasets and are black boxes to both their developers and end users. Prior research has shown that providing recommendations along with their reasons enhances the trust, scrutability, and persuasiveness of recommender systems. Recent literature on explainability has been inundated with works proposing several algorithms to this end. Most of these works provide item-style explanations, i.e., `We recommend item A because you bought item B.' We propose a novel approach, RecXplainer, to generate finer-grained explanations based on the user's preferences over the attributes of the recommended items. We perform experiments using real-world datasets and demonstrate the efficacy of RecXplainer in capturing users' preferences and using them to explain recommendations. We also propose ten new evaluation metrics and compare RecXplainer to six baseline methods.
Artificial Intelligence (AI) enables machines to learn from human experience, adapt to new inputs, and perform human-like tasks. AI is progressing rapidly and is transforming the way businesses operate, from process automation to cognitive augmentation of tasks and intelligent process/data analytics. However, a major challenge for human users is to understand and appropriately trust the results of AI algorithms and methods. In this paper, to address this challenge, we study and analyse recent work on explainable artificial intelligence (XAI) methods and tools. We introduce a novel XAI process that facilitates producing explainable models while maintaining a high level of learning performance. We present an interactive, evidence-based approach to help human users comprehend and trust the results and output created by AI-enabled algorithms. We adopt a typical scenario in the banking domain for analysing customer transactions. We develop a digital dashboard to facilitate interaction with the algorithm's results and discuss how the proposed XAI method can significantly improve the confidence of data scientists in understanding the results of AI-enabled algorithms.
Recent developments in explainable AI (XAI) methods allow researchers to explore the inner workings of deep neural networks (DNNs), revealing crucial information about input-output relationships and how data connects with machine learning models. In this paper, we explore the interpretability of DNN models designed to identify jets coming from top quark decay in high-energy proton-proton collisions at the Large Hadron Collider (LHC). We review a subset of existing top tagger models and explore different quantitative methods to identify which features play the most important roles in identifying top jets. We also investigate how and why feature importance varies across different XAI metrics, how feature correlations impact their explainability, and how latent space representations encode information as well as correlate with physically meaningful quantities. Our studies uncover some major pitfalls of existing XAI methods and illustrate how they can be overcome to obtain consistent and meaningful interpretations of these models. We additionally illustrate the activity of hidden layers as Neural Activation Pattern (NAP) diagrams and demonstrate how they can be used to understand how DNNs relay information across the layers, and how this understanding can help make such models significantly simpler by allowing effective model re-optimization and hyperparameter tuning. By incorporating observations from these interpretability studies, we obtain state-of-the-art top tagging performance from an augmented implementation of an existing network.
In recent years, many accurate decision support systems have been constructed as black boxes, that is, as systems that hide their internal logic from the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black-box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem, thereby delineating, explicitly or implicitly, its own definition of interpretability and explanation. The aim of this paper is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black-box system. Given a problem definition, a black-box type, and a desired explanation, this survey should help researchers find the proposals most useful for their own work. The proposed classification of approaches to opening black-box models should also be useful for putting the many open research questions in perspective.
This paper examines two distinct but related questions concerning explainable AI (XAI) practices. Machine learning (ML) is becoming increasingly important in financial services, such as pre-approval, credit underwriting, investments, and various front-end and back-end activities. Machine learning can automatically detect non-linearities and interactions in training data, facilitating faster and more accurate credit decisions. However, machine learning models are opaque and hard to explain, which is a key element required for building reliable technology. The study compares various machine learning models, including single classifiers (logistic regression, decision trees, LDA, QDA), heterogeneous ensembles (AdaBoost, random forest), and sequential neural networks. The results indicate that ensemble classifiers and neural networks perform best. In addition, two advanced post-hoc interpretability techniques, LIME and SHAP, are used to assess ML-based credit scoring models using an open-access dataset provided by Lending Club, a US-based P2P lending platform. For this study, we also use machine learning algorithms to develop new investment models and explore portfolio strategies that can maximize profitability while minimizing risk.
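For readers unfamiliar with the two post-hoc techniques mentioned above, the following is a hedged sketch of applying LIME and SHAP to a credit-scoring classifier. It uses synthetic stand-in data rather than the Lending Club dataset, and all feature names and model choices are illustrative only.

```python
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

feature_names = [f"f{i}" for i in range(8)]          # placeholders for loan attributes
X, y = make_classification(n_samples=5000, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

# SHAP: tree-based attributions, aggregated into a global importance ranking.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)
# Depending on the shap version, binary classifiers return a list per class
# or a single array with a class axis; keep the attributions for class 1.
sv = shap_values[1] if isinstance(shap_values, list) else np.asarray(shap_values)[..., 1]
print("global SHAP importance:", dict(zip(feature_names, np.abs(sv).mean(axis=0).round(3))))

# LIME: a local surrogate explanation for a single applicant.
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                      class_names=["no default", "default"],
                                      mode="classification")
exp = lime_explainer.explain_instance(X_test[0], clf.predict_proba, num_features=5)
print(exp.as_list())
```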
Recently, the field of outcome-oriented predictive process monitoring (OOPPM) has shifted towards models from the explainable artificial intelligence paradigm, yet evaluation is still conducted mainly through performance-based metrics, without accounting for the actionability and implications of the explanations. In this paper, we define explainability through the interpretability of the explanations (measured by widely used XAI properties and functional complexity) and the faithfulness of the explainability model (measured by monotonicity and the level of disagreement). The introduced properties are analysed along the event, case, and control-flow perspectives that are typical of process-based analytics. This allows a quantitative comparison of, among other things, inherently created explanations (e.g., logistic regression coefficients) with post-hoc explanations (e.g., Shapley values). Moreover, this paper contributes guidelines for selecting an appropriate model based on the event log specifications and the task at hand, by providing insight into how the preprocessing choices, model complexity, and post-hoc explainability techniques typical of OOPPM influence the explainability of the model. To this end, we benchmark seven classifiers on thirteen real-life event logs.
End-to-end neural NLP architectures are notoriously hard to understand, which has given rise to numerous efforts toward model explainability in recent years. A fundamental principle of model explanation is faithfulness, i.e., an explanation should accurately represent the reasoning process behind the model's prediction. This survey first discusses the definition and evaluation of faithfulness and its significance for explainability. We then introduce recent advances in faithful explanation by grouping methods into five categories: similarity-based methods, analysis of model-internal structures, backpropagation-based methods, counterfactual intervention, and self-explanatory models. Each category is illustrated with its representative studies, strengths, and weaknesses. Finally, we discuss all of the above methods in terms of their common virtues and limitations, and reflect on future directions for faithful explainability. For researchers interested in studying interpretability, this survey offers an accessible and comprehensive overview of the field, laying the groundwork for further exploration. For users who wish to better understand their own models, this survey serves as an introductory manual, helping to choose the most suitable explanation method.
Nowadays, Artificial Intelligence (AI) has become a fundamental component of clinical and remote healthcare applications, but the best-performing AI systems are often too complex to be self-explaining. Explainable AI (XAI) techniques are designed to unveil the reasoning behind a system's predictions and decisions, and they become even more critical when dealing with sensitive and personal health data. Notably, XAI has not attracted the same attention across different research areas and data types, especially in healthcare. In particular, many clinical and remote health applications are based on tabular and time-series data, respectively, and XAI has not been analysed on these data types to the same extent as in computer vision and natural language processing (NLP), which are the reference applications. To provide an overview of the XAI methods most suited to tabular and time-series data in the healthcare domain, this paper reviews the literature of the last 5 years, illustrating the types of explanations generated and the efforts made to evaluate their relevance and quality. Specifically, we identify clinical validation, consistency assessment, objective and standardised quality evaluation, and human-centred quality assessment as key features for ensuring effective explanations for end users. Finally, we highlight the main research challenges in the field as well as the limitations of existing XAI methods.
We apply classical statistical methods in conjunction with state-of-the-art machine learning techniques to develop a hybrid interpretable model to analyse 454,897 online customers' behavior for a particular product category at JD, the largest online retailer in China. While most pure machine learning methods are plagued by a lack of interpretability in practice, our novel hybrid approach addresses this practical issue by generating explainable output. The analysis involves identifying which features and characteristics have the most significant impact on customers' purchase behavior, thereby enabling us to predict future sales with a high level of accuracy and to identify the most impactful variables. Our results reveal that customers' product choice is insensitive to the promised delivery time, but this factor significantly impacts customers' order quantity. We also show that the effectiveness of various discounting methods depends on the specific product and the discount size. We identify product classes for which certain discounting approaches are more effective and provide recommendations on better use of different discounting tools. Customers' choice behavior across different product classes is mostly driven by price and, to a lesser extent, by customer demographics. The former finding calls for care in deciding when and how much discount should be offered, whereas the latter identifies opportunities for personalized ads and targeted marketing. Further, to curb customers' batch-ordering behavior and avoid the undesirable bullwhip effect, JD should improve its logistics to ensure faster delivery of orders.
The prediction of credit default risk has been an active research field for decades. Historically, logistic regression has been used as the major tool due to its compliance with regulatory requirements: transparency, explainability, and fairness. In recent years, researchers have increasingly used complex and advanced machine learning methods to improve prediction accuracy. Even though a machine learning method may potentially improve model accuracy, it complicates simple logistic regression, deteriorates explainability, and often violates fairness. Without regulatory compliance, even highly accurate machine learning methods are unlikely to be accepted by companies for credit scoring. In this paper, we introduce a novel class of monotonic neural additive models that meet regulatory requirements by simplifying the neural network architecture and enforcing monotonicity. By exploiting the special architectural features of neural additive models, the monotonic neural additive model effectively satisfies monotonicity. As a result, the computational cost of training a monotonic neural additive model is similar to that of training a neural additive model, making monotonicity essentially a free lunch. We demonstrate through empirical results that our new model is as accurate as a black-box, fully connected neural network, providing a highly accurate and regulated machine learning method.
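To make the architectural idea concrete, here is a minimal sketch (assuming PyTorch) of an additive model in which selected per-feature subnetworks are forced to be monotone by passing their weights through softplus so they stay non-negative. This illustrates one common way to obtain monotone shape functions; it is not the paper's exact construction, and all names and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneFeatureNet(nn.Module):
    """Subnetwork for one feature whose output is non-decreasing in its input:
    non-negative weights (via softplus) composed with monotone activations."""
    def __init__(self, hidden=16):
        super().__init__()
        self.w1 = nn.Parameter(0.1 * torch.randn(hidden, 1))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(0.1 * torch.randn(1, hidden))
        self.b2 = nn.Parameter(torch.zeros(1))

    def forward(self, x):                                   # x: (batch, 1)
        h = torch.tanh(F.linear(x, F.softplus(self.w1), self.b1))
        return F.linear(h, F.softplus(self.w2), self.b2)    # (batch, 1)

class MonotonicNAM(nn.Module):
    """Additive model: one subnetwork per feature, summed with a bias.
    Features flagged in `monotone_mask` get the constrained subnetwork."""
    def __init__(self, monotone_mask, hidden=16):
        super().__init__()
        self.nets = nn.ModuleList(
            MonotoneFeatureNet(hidden) if monotone else
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for monotone in monotone_mask)
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):                                   # x: (batch, n_features)
        shape_functions = [net(x[:, i:i + 1]) for i, net in enumerate(self.nets)]
        logit = torch.cat(shape_functions, dim=1).sum(dim=1, keepdim=True) + self.bias
        return torch.sigmoid(logit)                         # probability of default

# e.g. non-decreasing in the first two risk drivers, unconstrained otherwise
model = MonotonicNAM(monotone_mask=[True, True, False, False])
probs = model(torch.randn(8, 4))
```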
Detecting latent structure within a dataset is an essential step in dataset analysis. However, existing state-of-the-art techniques for subclass discovery are limited: either they are restricted to detecting a very small number of outliers, or they lack the statistical power to handle complex data such as images or audio. This paper proposes a solution to this subclass discovery problem: by leveraging instance explanation methods, an existing classifier can be extended to detect latent classes via differences in the classifier's internal decisions. This works not only with simple classification techniques but also with deep neural networks, allowing for a powerful and flexible approach to detecting latent structure within a dataset. Effectively, this represents a projection of the dataset into the classifier's "explanation space," and preliminary results show that this technique outperforms the baseline for detecting latent classes even with limited processing. The paper also includes a pipeline for automatically analysing classifiers, as well as a web application for interactively exploring the results of this technique.
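Below is a minimal sketch of the general idea of clustering instances in an "explanation space." It uses a simple occlusion-style attribution as a stand-in for the instance explanation methods the paper leverages, and synthetic data in place of a real dataset; everything here is an assumption for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=1500, n_features=8, n_informative=5, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def occlusion_explanation(model, X, baseline):
    """Per-instance attributions: drop in P(class 1) when each feature is
    replaced by its dataset mean (a crude, model-agnostic explanation)."""
    base_p = model.predict_proba(X)[:, 1]
    attributions = np.zeros_like(X, dtype=float)
    for j in range(X.shape[1]):
        X_perturbed = X.copy()
        X_perturbed[:, j] = baseline[j]
        attributions[:, j] = base_p - model.predict_proba(X_perturbed)[:, 1]
    return attributions

# Project the dataset into the classifier's explanation space.
expl_space = occlusion_explanation(clf, X, X.mean(axis=0))

# Cluster instances by *why* the classifier decides, not by raw features;
# clusters in explanation space are candidate latent subclasses.
latent = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(expl_space)
for c in range(4):
    print(f"candidate subclass {c}: {np.sum(latent == c)} instances")
```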
In the context of human-in-the-loop machine learning applications, such as decision support systems, interpretability methods should provide actionable insights without making the user wait. In this paper, we propose Accelerated Model-agnostic Explanations (ACME), an interpretability approach that quickly provides feature importance scores at both the global and local level. ACME can be applied post-hoc to any regression or classification model. Beyond computing feature rankings, ACME also provides a what-if analysis tool to assess how changes in feature values affect the model's prediction. We evaluated the proposed approach on synthetic and real-world datasets, also in comparison with SHapley Additive exPlanations (SHAP), the method from which we drew inspiration and which is currently the state-of-the-art model-agnostic interpretability method. We achieved comparable results in terms of the quality of the produced explanations while drastically reducing computation time and providing consistent visualizations for global and local interpretation. To foster research in the field and for reproducibility, we also provide a repository with the code used for the experiments.
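To illustrate the kind of local what-if analysis described above, the sketch below sweeps one feature of a single instance over its observed quantiles while holding the rest fixed and records how the prediction responds. It is an illustrative stand-in under assumed names and data, not the ACME algorithm itself.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=1000, n_features=6, noise=5.0, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def what_if(model, X, instance, feature, n_points=11):
    """Predictions for `instance` as `feature` sweeps over its data quantiles."""
    grid = np.quantile(X[:, feature], np.linspace(0, 1, n_points))
    variants = np.tile(instance, (n_points, 1))
    variants[:, feature] = grid
    return grid, model.predict(variants)

instance = X[0]
for f in range(X.shape[1]):
    grid, preds = what_if(model, X, instance, f)
    # a simple local importance score: how far the prediction can move
    print(f"feature {f}: local what-if range = {preds.max() - preds.min():.2f}")
```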
Heterogeneous tabular data are the most commonly used form of data and are essential for numerous critical and computationally demanding applications. On homogeneous datasets, deep neural networks have repeatedly shown excellent performance and have therefore been widely adopted. However, their adaptation to tabular data for inference or data generation tasks remains challenging. To facilitate further progress in the field, this work provides an overview of state-of-the-art deep learning methods for tabular data. We categorize these methods into three groups: data transformations, specialized architectures, and regularization models. For each group, our work offers a comprehensive overview of the main approaches. Moreover, we discuss deep learning approaches for generating tabular data and also provide an overview of strategies for explaining deep models on tabular data. Thus, our first contribution is to address the main research streams and existing methodologies in the aforementioned areas while highlighting relevant challenges and open research questions. Our second contribution is an empirical comparison of traditional machine learning methods with ten deep learning approaches across five popular real-world datasets of different sizes and with different learning objectives. Our results, which we have made publicly available as competitive benchmarks, indicate that algorithms based on gradient-boosted tree ensembles still mostly outperform deep learning models on supervised learning tasks, suggesting that research progress on competitive deep learning models for tabular data is stagnating. To the best of our knowledge, this is the first in-depth overview of deep learning approaches for tabular data; as such, this work can serve as a valuable starting point to guide researchers and practitioners interested in deep learning with tabular data.
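As a toy illustration of the kind of tabular benchmark described above, the sketch below compares a gradient-boosted tree ensemble with a small feed-forward network using scikit-learn; the dataset, hyperparameters, and metric are placeholders, not the paper's experimental setup.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # stand-in tabular dataset

models = {
    "gradient-boosted trees": HistGradientBoostingClassifier(random_state=0),
    "feed-forward network": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)),
}

# Cross-validated AUC as a simple, like-for-like comparison.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```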
Meta-learning is used to efficiently and automatically select machine learning models by combining data and prior knowledge. Since traditional meta-learning techniques lack explainability and have shortcomings in terms of transparency and fairness, achieving explainability for meta-learning is crucial. This paper proposes an explainable meta-learning framework that can not only explain the recommendation results of meta-learning algorithm selection, but also provide a more complete and accurate explanation of the recommended algorithm's performance on the specific dataset and business scenario. The effectiveness and correctness of the framework are demonstrated through extensive experiments.
Interpretability is a pressing issue for decision systems. Many post-hoc methods have been proposed to explain the predictions of any machine learning model. However, business processes and decision systems are rarely centered around a single, standalone model. These systems combine multiple models that produce key predictions and then apply decision rules to generate the final decision. To explain such decisions, we present SMACE, a semi-model-agnostic explainer, a new explanation method that combines decision rules with explanations of the underlying machine learning models to generate an intuitive feature ranking tailored to the end user. We show that established model-agnostic methods produce poor results in this framework.
Building an accurate model of travel behaviour based on individuals' characteristics and built environment attributes is of importance for policy-making and transportation planning. Recent experiments with big data and Machine Learning (ML) algorithms toward a better travel behaviour analysis have mainly overlooked socially disadvantaged groups. Accordingly, in this study, we explore the travel behaviour responses of low-income individuals to transit investments in the Greater Toronto and Hamilton Area, Canada, using statistical and ML models. We first investigate how the model choice affects the prediction of transit use by the low-income group. This step includes comparing the predictive performance of traditional and ML algorithms and then evaluating a transit investment policy by contrasting the predicted activities and the spatial distribution of transit trips generated by vulnerable households after improving accessibility. We also empirically investigate the proposed transit investment by each algorithm and compare it with the city of Brampton's future transportation plan. While, unsurprisingly, the ML algorithms outperform classical models, there are still doubts about using them due to interpretability concerns. Hence, we adopt recent local and global model-agnostic interpretation tools to interpret how the model arrives at its predictions. Our findings reveal the great potential of ML algorithms for enhanced travel behaviour predictions for low-income strata without considerably sacrificing interpretability.
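As a small, hedged illustration of the global model-agnostic interpretation step described above, the sketch below uses permutation importance from scikit-learn on a synthetic stand-in for mode-choice data; the actual study uses travel-survey data, different models, and additional local interpretation tools.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for mode-choice data (1 = transit trip, 0 = other mode).
X, y = make_classification(n_samples=3000, n_features=10, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

# Global interpretation: how much does shuffling each feature hurt held-out accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
for idx in ranking:
    print(f"feature {idx}: importance = {result.importances_mean[idx]:.4f}")
```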
Artificial intelligence (AI) systems based on deep neural networks (DNNs) and machine learning (ML) algorithms are increasingly used to solve critical problems in bioinformatics, biomedical informatics, and precision medicine. However, complex DNN or ML models that are unavoidably opaque and perceived as black-box methods may not be able to explain why and how they make certain decisions. Such black-box models are difficult to comprehend not only for targeted users and decision-makers but also for AI developers. Moreover, in sensitive areas like healthcare, explainability and accountability are not only desirable properties of AI but also legal requirements -- especially when AI may have significant impacts on human lives. Explainable artificial intelligence (XAI) is an emerging field that aims to mitigate the opaqueness of black-box models and make it possible to interpret how AI systems make their decisions with transparency. An interpretable ML model can explain how it makes predictions and which factors affect the model's outcomes. The majority of state-of-the-art interpretable ML methods have been developed in a domain-agnostic way and originate from computer vision, automated reasoning, or even statistics. Many of these methods cannot be directly applied to bioinformatics problems without prior customization, extension, and domain adaptation. In this paper, we discuss the importance of explainability with a focus on bioinformatics. We analyse and comprehensively overview model-specific and model-agnostic interpretable ML methods and tools. Via several case studies covering bioimaging, cancer genomics, and biomedical text mining, we show how bioinformatics research could benefit from XAI methods and how they could help improve decision fairness.
Despite the myriad of peer-reviewed papers demonstrating novel Artificial Intelligence (AI)-based solutions to COVID-19 challenges during the pandemic, few have had clinical impact. The impact of artificial intelligence during the COVID-19 pandemic was greatly limited by a lack of model transparency. This systematic review examines the use of Explainable Artificial Intelligence (XAI) during the pandemic and how its use could overcome barriers to real-world success. We find that the successful use of XAI can improve model performance, instill trust in the end user, and provide the value needed to affect user decision-making. We introduce the reader to common XAI techniques, their utility, and specific examples of their application. Evaluation of XAI results is also discussed as an important step in maximizing the value of AI-based clinical decision support systems. We illustrate the classical, modern, and potential future trends of XAI to elucidate the evolution of novel XAI techniques. Finally, we provide a checklist of recommendations for the experimental design process, supported by recent publications. Specific examples of potential solutions to common challenges encountered when deploying AI solutions are also addressed. We hope this review can serve as a guide to improve the clinical impact of future AI-based solutions.