Nonlinear methods such as Deep Neural Networks (DNNs) are the gold standard for various challenging machine learning problems, e.g., image classification, natural language processing or human action recognition. Although these methods perform impressively well, they have a significant disadvantage: their lack of transparency limits the interpretability of the solution and thus the scope of application in practice. DNNs in particular act as black boxes due to their multilayer nonlinear structure. In this paper we introduce a novel methodology for interpreting generic multilayer neural networks by decomposing the network classification decision into contributions of its input elements. Although our focus is on image classification, the method is applicable to a broad set of input data, learning tasks and network architectures. Our method is based on deep Taylor decomposition and efficiently utilizes the structure of the network by backpropagating the explanations from the output to the input layer. We evaluate the proposed method empirically on the MNIST and ILSVRC data sets.
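As an illustration of the relevance-backpropagation idea, here is a minimal NumPy sketch of the z+ rule, which deep Taylor decomposition yields for ReLU layers with positive inputs; the function name and the choice to start the backward pass from the raw network output are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def lrp_zplus(weights, biases, x, eps=1e-9):
    """Backpropagate relevance through a ReLU network using the z+ rule
    (the rule deep Taylor decomposition yields for positive activations).
    weights[l] has shape (d_in, d_out); x is a 1-D input vector."""
    # Forward pass, storing the activation entering each layer.
    activations = [x]
    a = x
    for W, b in zip(weights, biases):
        a = np.maximum(0.0, a @ W + b)
        activations.append(a)

    # Backward pass: start from the output relevance (here: the output itself).
    relevance = activations[-1]
    for l in range(len(weights) - 1, -1, -1):
        a = activations[l]
        Wp = np.maximum(0.0, weights[l])   # keep only positive contributions
        z = a @ Wp + eps                   # stabilized denominator
        s = relevance / z
        relevance = a * (Wp @ s)           # conservation: total relevance is preserved
    return relevance                       # one relevance score per input feature
```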
Deep Neural Networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multi-layer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision given a new unseen data sample. Recently, several approaches have been proposed enabling one to understand and interpret the reasoning embodied in a DNN for a single test image. These methods quantify the "importance" of individual pixels with respect to the classification decision and allow a visualization in terms of a heatmap in pixel/input space. While the usefulness of heatmaps can be judged subjectively by a human, an objective quality measure is missing. In this paper we present a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps. We compare heatmaps computed by three different methods on the SUN397, ILSVRC2012 and MIT Places data sets. Our main result is that the recently proposed Layer-wise Relevance Propagation (LRP) algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method. We provide theoretical arguments to explain this result and discuss its practical implications. Finally, we investigate the use of heatmaps for unsupervised assessment of neural network performance.
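A region-perturbation evaluation can be sketched as follows: perturb the image region by region, most relevant first, and track how quickly the classifier score drops (the faster the drop, the better the heatmap ordering). The function name, the uniform-noise perturbation scheme and the region size below are illustrative choices, not the exact protocol of the paper.

```python
import numpy as np

def perturbation_score(model_fn, image, heatmap, region=4, steps=50, seed=0):
    """Average drop of the class score when the most relevant regions are
    perturbed first. model_fn maps an image (H, W) to a scalar class score."""
    rng = np.random.default_rng(seed)
    H, W = image.shape
    regions = [(i, j) for i in range(0, H, region) for j in range(0, W, region)]
    relevance = [heatmap[i:i+region, j:j+region].sum() for i, j in regions]
    order = np.argsort(relevance)[::-1]          # most relevant regions first

    perturbed = image.copy()
    baseline = model_fn(image)
    drops = []
    for k in order[:steps]:
        i, j = regions[k]
        patch = perturbed[i:i+region, j:j+region]
        patch[:] = rng.uniform(image.min(), image.max(), size=patch.shape)
        drops.append(baseline - model_fn(perturbed))
    return float(np.mean(drops))                 # larger = better pixel ordering
```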
The emerging field of explainable artificial intelligence (XAI) aims to bring transparency to today's powerful but opaque deep learning models. While local XAI methods explain individual predictions in the form of attribution maps, thereby identifying where important features occur (but not what they represent), global explanation techniques visualize what concepts a model has generally learned to encode. Both approaches therefore provide only partial insights and leave the burden of interpreting the model's reasoning to the user. Only a few contemporary techniques aim at combining the principles behind local and global XAI to obtain more informative explanations. Those methods, however, are often limited to specific model architectures or impose additional requirements on the training regime or on data and label availability, which renders post-hoc application to arbitrary pre-trained models practically impossible. In this work we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives of XAI and thus allows answering both the "where" and "what" questions without additional constraints. We further introduce the principle of Relevance Maximization for finding representative examples of encoded concepts based on their usefulness to the model, thereby lifting the dependency on the common practice of Activation Maximization and its limitations. We demonstrate the capabilities of our methods in various settings, showcasing that Concept Relevance Propagation and Relevance Maximization lead to more human-interpretable explanations and provide deep insights into the model's representations and reasoning through concept atlases, concept composition analyses, and quantitative investigations of concept subspaces and their role in fine-grained decision making.
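The relevance-maximization idea can be illustrated with a small sketch: instead of ranking reference samples for a concept by how strongly they activate it, rank them by how much the concept actually contributed (its relevance) to the prediction. The per-sample relevance scores would in practice come from conditional backward passes as in CRP; here they are simply assumed to be given, and all names are illustrative.

```python
import numpy as np

def reference_samples(activations, relevances, k=8):
    """Return the indices of the top-k reference examples for a concept,
    once ranked by activation (activation maximization) and once by
    relevance (relevance maximization). Both arrays have shape (num_samples,)."""
    by_activation = np.argsort(activations)[::-1][:k]
    by_relevance = np.argsort(relevances)[::-1][:k]
    return by_activation, by_relevance

# Toy usage: a concept can be strongly activated yet contribute little to the
# decision, in which case the two rankings disagree.
acts = np.array([5.0, 4.0, 3.0, 0.5])
rels = np.array([0.1, 2.0, 1.5, 0.0])
print(reference_samples(acts, rels, k=2))   # (array([0, 1]), array([1, 2]))
```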
In addition to the impressive predictive power of machine learning (ML) models, explanation methods have recently emerged that make it possible to interpret complex nonlinear learning models such as deep neural networks. Gaining a better understanding is especially important, e.g., for safety-critical ML applications or medical diagnostics. While such explainable AI (XAI) techniques have reached significant popularity for classifiers, so far little attention has been devoted to XAI for regression models (XAIR). In this review, we clarify the fundamental conceptual differences between XAI for regression and for classification tasks, establish novel theoretical insights and analyses for XAIR, provide demonstrations of XAIR on genuine practical regression problems, and finally discuss the challenges remaining for the field.
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted, to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem that it was conceived for. Furthermore, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
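The core grouping step of Spectral Relevance Analysis can be sketched as clustering per-sample heatmaps; the actual pipeline also includes preprocessing such as downsizing the heatmaps and an eigengap analysis, which this minimal scikit-learn sketch omits (function name and parameters are illustrative).

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def spray_clusters(heatmaps, n_clusters=4):
    """Cluster per-sample relevance heatmaps to surface distinct decision
    strategies. heatmaps has shape (num_samples, H, W); returns one cluster
    label per sample, e.g. inspect heatmaps[labels == 0] afterwards."""
    X = heatmaps.reshape(len(heatmaps), -1)
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-9)   # normalize each map
    model = SpectralClustering(n_clusters=n_clusters,
                               affinity="nearest_neighbors",
                               n_neighbors=10, random_state=0)
    return model.fit_predict(X)
```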
Explainable AI transforms opaque decision strategies of ML models into explanations that are interpretable by the user, for example, identifying the contribution of each input feature to the prediction at hand. Such explanations, however, entangle the potentially multiple factors that enter into the overall complex decision strategy. We propose to disentangle explanations by finding relevant subspaces in activation space that can be mapped to more abstract human-understandable concepts and enable a joint attribution on concepts and input features. To automatically extract the desired representation, we propose new subspace analysis formulations that extend the principle of PCA and subspace analysis to explanations. These novel analyses, which we call principal relevant component analysis (PRCA) and disentangled relevant subspace analysis (DRSA), optimize relevance of projected activations rather than the more traditional variance or kurtosis. This enables a much stronger focus on subspaces that are truly relevant for the prediction and the explanation, in particular, ignoring activations or concepts to which the prediction model is invariant. Our approach is general enough to work alongside common attribution techniques such as Shapley Value, Integrated Gradients, or LRP. Our proposed methods prove to be practically useful and compare favorably to the state of the art, as demonstrated on benchmarks and three use cases.
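A minimal sketch of the PRCA idea, as I read it: given per-sample activation vectors a_i and "context" vectors c_i such that the relevance of sample i is the dot product a_i . c_i (as delivered by an attribution backward pass), the relevant directions are eigenvectors of a symmetrized relevance cross-covariance rather than of the usual covariance. The function name and the exact normalization are assumptions.

```python
import numpy as np

def prca(acts, contexts, k=2):
    """Return k orthonormal directions in activation space that maximize
    relevance of the projected activations (instead of variance, as in PCA).
    acts and contexts are arrays of shape (num_samples, d)."""
    A, C = np.asarray(acts), np.asarray(contexts)
    # Symmetrized relevance cross-covariance replaces PCA's covariance matrix.
    S = 0.5 * (A.T @ C + C.T @ A) / len(A)
    eigvals, eigvecs = np.linalg.eigh(S)                 # ascending eigenvalues
    return eigvecs[:, np.argsort(eigvals)[::-1][:k]]     # top-k columns, shape (d, k)
```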
A recent trend in machine learning has been to enrich learned models with the ability to explain their own predictions. The emerging field of Explainable AI (XAI) has so far mainly focused on supervised learning, in particular deep neural network classifiers. In many practical problems, however, label information is not given and the goal is instead to discover the underlying structure of the data, for example, its clusters. While powerful methods exist for extracting the cluster structure in data, they typically do not answer the question of why a certain data point has been assigned to a given cluster. We propose a new framework that can, for the first time, explain cluster assignments in terms of input features in an efficient and reliable manner. It is based on the novel insight that clustering models can be rewritten as neural networks, or "neuralized". Cluster predictions of the obtained networks can then be quickly and accurately attributed to the input features. Several showcases demonstrate the ability of our method to assess the quality of learned clusters and to extract novel insights from the analyzed data and representations.
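The "neuralization" insight can be made concrete for k-means: the evidence that a point belongs to a given cluster can be rewritten exactly as a linear layer (one discriminant per competing centroid) followed by a min-pooling, after which standard attribution techniques apply. The sketch below covers the hard-assignment case; variable names are mine.

```python
import numpy as np

def neuralized_kmeans_evidence(x, centroids, cluster):
    """Two-layer rewrite of the k-means assignment to `cluster`:
    the output is positive exactly when x is closer to `cluster`'s centroid
    than to every other centroid."""
    m = np.asarray(centroids)
    others = [k for k in range(len(m)) if k != cluster]
    # Linear layer: pairwise discriminant against each competing cluster,
    # h_k = 0.5 * (||x - m_k||^2 - ||x - m_cluster||^2).
    W = np.stack([m[cluster] - m[k] for k in others])
    b = 0.5 * np.array([m[k] @ m[k] - m[cluster] @ m[cluster] for k in others])
    h = W @ x + b
    # Min-pooling: the cluster evidence is the weakest pairwise margin.
    return h.min()
```

The resulting network reproduces the hard k-means assignment, and its scalar output can then be attributed to the input features with, e.g., LRP.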
End-to-end neural NLP architectures are notoriously difficult to understand, which has given rise to numerous efforts towards model explainability in recent years. A fundamental principle of model explanation is faithfulness, i.e., an explanation should accurately represent the reasoning process behind the model's prediction. This survey first discusses the definition and evaluation of faithfulness, as well as its significance for explainability. We then introduce recent advances in faithful explanation by grouping approaches into five categories: similarity-based methods, analysis of model-internal structures, backpropagation-based methods, counterfactual intervention, and self-explanatory models. Each category is illustrated with its representative studies, strengths, and weaknesses. Finally, we discuss all the above methods in terms of their common virtues and limitations, and reflect on future directions for faithful explainability. For researchers interested in studying interpretability, this survey offers an accessible and comprehensive overview of the field, laying a foundation for further exploration. For users hoping to better understand their own models, this survey serves as an introductory manual that helps with choosing the most suitable explanation method.
This is an introductory machine learning course specifically developed for students in the STEM fields. Our goal is to provide the interested reader with the basics needed to employ machine learning in their own projects and to familiarize themselves with the terminology as a foundation for further reading of the relevant literature. In these lecture notes, we discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods without neural networks, such as principal component analysis, t-SNE, and clustering, as well as linear regression and linear classifiers. We continue with an introduction to both basic and advanced neural-network structures such as dense feed-forward and convolutional neural networks, recurrent neural networks, restricted Boltzmann machines, (variational) autoencoders, and generative adversarial networks. Questions of interpretability of latent-space representations are discussed using the examples of dreaming and adversarial attacks. The final part is dedicated to reinforcement learning, where we introduce the basic notions of value functions and policy learning.
Convolutional neural networks (CNNs) have recently attracted great attention in geoscience due to their ability to capture nonlinear system behavior and extract predictive spatiotemporal patterns. Given their black-box nature, however, and the importance of prediction explainability, methods of explainable artificial intelligence (XAI) have emerged as a means to explain CNN decision strategies. Here, we establish an intercomparison of some of the most popular XAI methods and investigate their fidelity in explaining CNN decisions for geoscientific applications. Our goal is to raise awareness of the theoretical limitations of these methods and to gain insight into their relative strengths and weaknesses to help guide best practices. The considered XAI methods are first applied to an idealized attribution benchmark, where the ground truth of the network's explanation is known a priori, to help objectively assess their performance. Secondly, we apply XAI to a climate-related prediction setting, namely to explain a CNN that is trained to predict the number of atmospheric rivers in daily snapshots of climate simulations. Our results highlight several important issues of XAI methods (e.g., gradient shattering, inability to distinguish the sign of attribution, ignorance towards zero input) that have previously been overlooked in our field and, if not carefully considered, may lead to a distorted picture of the CNN decision strategies. We envision that our analysis will motivate further investigation into XAI fidelity and will help towards a cautious implementation of XAI in geoscience, which can lead to further exploitation of CNNs and deep learning for prediction problems.
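An idealized attribution benchmark of the kind described can be set up by constructing a target that is additively separable in the inputs, so that the ground-truth contribution of every feature is known; the specific functional form below is an illustrative assumption, not the benchmark used in the study.

```python
import numpy as np

def synthetic_attribution_benchmark(n=10000, d=8, seed=0):
    """Regression task with known per-feature contributions. Returns inputs X,
    targets y, and the ground-truth contribution of each feature to each target."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    coeffs = rng.uniform(-2.0, 2.0, size=d)
    contributions = coeffs * np.tanh(X)      # additively separable: y = sum_i f_i(x_i)
    y = contributions.sum(axis=1)
    return X, y, contributions

# A network trained on (X, y) can then be explained sample by sample and the
# attributions compared against `contributions` to quantify fidelity.
```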
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
Transformers have become an important workhorse of machine learning, with numerous applications. This necessitates the development of reliable methods for increasing their transparency. Multiple interpretability methods, often based on gradient information, have been proposed. We show that the gradient in a Transformer reflects the function only locally and thus fails to reliably identify the contribution of input features to the prediction. We identify attention heads and LayerNorm as the main reasons for such unreliable explanations and propose a more stable way to propagate through these layers. Our proposal, which can be seen as a proper extension of the well-established LRP method to Transformers, is shown both theoretically and empirically to overcome the deficiency of simple gradient-based approaches and to achieve state-of-the-art explanation performance on a broad range of Transformer models and datasets.
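One common way to implement such a stabilized propagation is to treat the attention weights (and the LayerNorm normalization) as locally constant during the backward pass, e.g. by detaching them from the computational graph so that a gradient x input computation propagates relevance only through the value path. The PyTorch snippet below is a hedged sketch of that idea, not the paper's full rule set.

```python
import torch

def attention_forward(q, k, v, detach_for_lrp=True):
    """Scaled dot-product attention in which the attention matrix can be
    detached, so gradients (and hence gradient x input relevances) no longer
    flow through the softmax but only through the values."""
    d = q.shape[-1]
    attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
    if detach_for_lrp:
        attn = attn.detach()   # treat attention weights as constants
    return attn @ v

# LayerNorm can be handled analogously by detaching its normalization factor,
# which keeps the propagated relevance conservative across the block.
```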
Multilayer Neural Networks trained with the backpropagation algorithm constitute the best example of a successful Gradient-Based Learning technique. Given an appropriate network architecture, Gradient-Based Learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called Graph Transformer Networks (GTN), allows such multi-module systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Two systems for on-line handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of Graph Transformer Networks. A Graph Transformer Network for reading bank checks is also described. It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal checks. It is deployed commercially and reads several million checks per day.
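For reference, a convolutional classifier of the kind discussed can be written in a few lines of modern code; the layer sizes below are illustrative (a LeNet-style layout), not the exact configuration from the paper.

```python
import torch
from torch import nn

# Minimal convolutional classifier for 28x28 grayscale digit images.
model = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(6, 16, kernel_size=5),           nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
    nn.Linear(120, 84),         nn.ReLU(),
    nn.Linear(84, 10),
)

logits = model(torch.randn(1, 1, 28, 28))   # one image in, ten class scores out
```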
Understanding the functional principles of information processing in deep neural networks continues to be a challenge, in particular for networks with trained and thus non-random weights. To address this issue, we study the mapping between probability distributions implemented by a deep feed-forward network. We characterize this mapping as an iterated transformation of distributions, where the non-linearity in each layer transfers information between different orders of correlation functions. This allows us to identify essential statistics in the data, as well as different information representations that can be used by neural networks. Applied to an XOR task and to MNIST, we show that correlations up to second order predominantly capture the information processing in the internal layers, while the input layer also extracts higher-order correlations from the data. This analysis provides a quantitative and explainable perspective on classification.
In this paper we analyze how, for certain nonlinear models, the contribution of each input value to the aggregated output can be computed. Applications to regression and classification are presented, along with related algorithms for deep neural networks. The proposed approach merges two methods currently present in the literature: Integrated Gradients and Deep Taylor Decomposition. Compared to DeepLIFT and DeepSHAP, it offers a natural choice of the reference point that is specific to the model at hand.
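For context, plain Integrated Gradients attributes f(x) - f(reference) to the input features by averaging gradients along the straight path from the reference to the input; the contribution described above concerns how the reference point is chosen, which this generic sketch leaves to the caller (names are illustrative).

```python
import numpy as np

def integrated_gradients(grad_fn, x, reference, steps=64):
    """grad_fn(z) must return the gradient of the model output at z.
    Returns one attribution per input feature."""
    alphas = (np.arange(steps) + 0.5) / steps     # midpoint rule for the path integral
    path = [grad_fn(reference + a * (x - reference)) for a in alphas]
    return (x - reference) * np.mean(path, axis=0)

# Completeness: the attributions sum (up to discretization error) to
# f(x) - f(reference); for a linear model they reduce to w * (x - reference).
```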
The last years have been characterized by the proliferation of opaque automatic decision-support systems such as deep neural networks (DNNs). Although they offer remarkable generalization and prediction skills, their inner workings do not allow detailed explanations of their behaviour to be obtained. As opaque machine learning models are increasingly being used to make important predictions in critical environments, the danger is to create and use decisions that are not justifiable or legitimate. There is therefore a general agreement on the importance of endowing machine learning models with explainability. Explainable Artificial Intelligence (XAI) techniques can serve to verify and certify model outputs and enhance them with desirable notions such as trustworthiness, accountability, transparency, and fairness. This guide is meant to be the go-to handbook for any audience with a computer-science background aiming to gain intuitive insights into machine learning models, accompanied by straightforward, fast, and intuitive explanations out of the box. This article aims to fill the current lack of a compelling XAI guide by applying XAI techniques to readers' particular day-to-day models, datasets, and use cases. Figure 1 acts as a flowchart/map for the reader and should help them find the ideal method to use according to their data type. In each chapter, the reader will find a description of the presented method, as well as an example of its use on biomedical applications together with a Python notebook that can easily be modified to suit specific applications.
Context: In the last few years, machine learning (ML) has been at the heart of many innovations. However, including it in so-called "safety-critical" systems such as automotive or aeronautic systems has proven to be very challenging, since the paradigm shift that ML brings completely changes traditional certification approaches. Objective: This paper aims to elucidate the challenges related to the certification of ML-based safety-critical systems, as well as the solutions proposed in the literature to tackle them, answering the question "How to certify machine learning based safety-critical systems?". Method: We conducted a systematic literature review (SLR) of research papers published between 2015 and 2020, covering topics related to the certification of ML systems. In total, we identified 217 papers covering the topics considered to be the main pillars of ML certification: robustness, uncertainty, explainability, verification, safe reinforcement learning, and direct certification. We analyzed the main trends and problems of each sub-field and provided summaries of the extracted papers. Results: The SLR results highlighted the enthusiasm of the community for this subject, as well as the lack of diversity in terms of datasets and model types. They also emphasized the need to further develop connections between academia and industry to deepen the study of the domain. Finally, they illustrated the necessity of building connections between the above-mentioned main pillars, which are, for now, mostly studied separately. Conclusion: We highlighted the efforts currently deployed to enable the certification of ML-based software systems and discussed some future research directions.
Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.
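The paper crafts targeted perturbations from the forward derivative (Jacobian) of the DNN; as a simpler, generic illustration of gradient-based adversarial crafting, the sketch below applies a fast gradient sign perturbation (Goodfellow et al.), which is not the authors' algorithm.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, eps=0.1):
    """Untargeted fast-gradient-sign perturbation of a batch of inputs x
    with integer class labels `label`."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()   # small step that increases the loss
```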
These notes were compiled as lecture notes for a course developed and taught at the University of Southern California. They should be accessible to a typical engineering graduate student with a strong background in Applied Mathematics. The main objective of these notes is to introduce a student who is familiar with concepts in linear algebra and partial differential equations to select topics in deep learning. These lecture notes exploit the strong connections between deep learning algorithms and the more conventional techniques of computational physics to achieve two goals. First, they use concepts from computational physics to develop an understanding of deep learning algorithms. Not surprisingly, many concepts in deep learning can be connected to similar concepts in computational physics, and one can utilize this connection to better understand these algorithms. Second, several novel deep learning algorithms can be used to solve challenging problems in computational physics. Thus, they offer someone who is interested in modeling physical phenomena a complementary set of tools.
In many scenarios, human decisions are explained in terms of high-level concepts. In this work, we take a step towards the interpretability of neural networks by examining their internal representations, i.e., the activations of neurons, with respect to concepts. A concept is characterized by a set of samples that share common properties. We propose a framework to check the existence of a causal relationship between a concept (or its negation) and the task classes. While previous methods focus on the importance of a concept for a task class, we go further and introduce four measures to quantitatively determine the order of causality. Furthermore, we propose a method to construct a hierarchy of concepts in the form of a concept-based decision tree, which can shed light on how various concepts interact within a neural network towards the prediction of the output classes. Through experiments, we demonstrate the effectiveness of the proposed methods in explaining the causal relationships between concepts and the predictive behavior of neural networks, and in determining the interactions between different concepts through the construction of the concept hierarchy.
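As a loosely related illustration (not the causal measures proposed above), a concept can be probed in activation space by fitting a linear classifier that detects it from hidden-layer activations; whether such a concept direction is merely correlated with, or actually drives, the task classes is precisely the question the framework above addresses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_probe(activations, concept_labels):
    """Fit a linear probe that detects a concept from hidden-layer activations.
    activations: (num_samples, d); concept_labels: (num_samples,) in {0, 1}."""
    probe = LogisticRegression(max_iter=1000).fit(activations, concept_labels)
    return probe   # probe.coef_ points along the concept direction
```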