Graph Neural Networks (GNNs) are a powerful tool for machine learning on graphs. GNNs combine node feature information with the graph structure by recursively passing neural messages along edges of the input graph. However, incorporating both graph structure and feature information leads to complex models and explaining predictions made by GNNs remains unsolved. Here we propose GNNEXPLAINER, the first general, model-agnostic approach for providing interpretable explanations for predictions of any GNN-based model on any graph-based machine learning task. Given an instance, GNNEXPLAINER identifies a compact subgraph structure and a small subset of node features that have a crucial role in GNN's prediction. Further, GNNEXPLAINER can generate consistent and concise explanations for an entire class of instances. We formulate GNNEXPLAINER as an optimization task that maximizes the mutual information between a GNN's prediction and distribution of possible subgraph structures. Experiments on synthetic and real-world graphs show that our approach can identify important graph structures as well as node features, and outperforms alternative baseline approaches by up to 43.0% in explanation accuracy. GNNEXPLAINER provides a variety of benefits, from the ability to visualize semantically relevant structures to interpretability, to giving insights into errors of faulty GNNs.
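To make the optimization concrete, the minimal sketch below (an illustrative assumption, not the authors' reference code) learns a soft edge mask for one node's prediction with size and entropy penalties, the usual practical surrogate for the mutual-information objective; it assumes a PyTorch GNN whose forward pass accepts per-edge weights, and all coefficients are placeholders.

```python
import torch
import torch.nn.functional as F

def explain_node(model, x, edge_index, node_idx, epochs=200, lr=0.01):
    """Learn a soft edge mask that preserves the GNN's prediction for
    `node_idx` while penalizing mask size and entropy (hypothetical sketch)."""
    model.eval()
    with torch.no_grad():
        target = model(x, edge_index).argmax(dim=-1)[node_idx]   # class to preserve

    edge_mask = torch.nn.Parameter(0.1 * torch.randn(edge_index.size(1)))
    optimizer = torch.optim.Adam([edge_mask], lr=lr)

    for _ in range(epochs):
        optimizer.zero_grad()
        m = torch.sigmoid(edge_mask)                              # soft mask in (0, 1)
        # Assumption: the model's forward pass accepts per-edge weights.
        log_probs = F.log_softmax(model(x, edge_index, edge_weight=m), dim=-1)
        pred_loss = -log_probs[node_idx, target]                  # keep the original prediction likely
        size_loss = 0.005 * m.sum()                               # favor compact subgraphs
        ent_loss = 0.1 * (-m * torch.log(m + 1e-12)
                          - (1 - m) * torch.log(1 - m + 1e-12)).mean()  # push mask toward 0/1
        (pred_loss + size_loss + ent_loss).backward()
        optimizer.step()

    return torch.sigmoid(edge_mask).detach()                      # per-edge importance scores
```

Thresholding the returned scores yields a compact explanatory subgraph; the node-feature mask described in the abstract could be learned analogously.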
In this paper, we investigate the degree of explainability of graph neural networks (GNNs). Existing explainers work by finding global/local subgraphs to explain a prediction, but they are applied after a GNN has already been trained. Here, we propose a meta-learning framework for improving the level of explainability of a GNN directly at training time, by steering the optimization procedure towards what we call `interpretable minima'. Our framework (called MATE, MetA-Train to Explain) jointly trains a model to solve the original task, e.g., node classification, and to provide easily processable outputs for downstream algorithms that explain the model's decisions in a human-friendly way. In particular, we meta-train the model's parameters to quickly minimize the error of an instance-level GNNExplainer trained on-the-fly on randomly sampled nodes. The final internal representation relies upon a set of features that can be `better' understood by an explanation algorithm, e.g., another instance of GNNExplainer. Our model-agnostic approach can improve the explanations produced for different GNN architectures and use any instance-based explainer to drive this process. Experiments on synthetic and real-world datasets for node and graph classification show that we can produce models that are consistently easier to explain by different algorithms. Furthermore, this increase in explainability comes at no cost for the accuracy of the model.
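A rough sketch of the kind of meta-training step the abstract describes might look as follows; `task_loss`, `explainer_loss`, and the PyG-style `data` object are hypothetical placeholders, and the single inner adaptation step is only meant to illustrate the idea of fitting an explainer on-the-fly and rewarding model parameters that make it easy to fit.

```python
import random
import torch

def meta_train_step(model, data, task_loss, explainer_loss, opt, inner_lr=0.01, k_nodes=8):
    """One hypothetical explanation-aware training step: fit a fresh edge mask
    on randomly sampled nodes with a single gradient step, then reward model
    parameters for which that quickly adapted explainer already works well."""
    opt.zero_grad()
    loss = task_loss(model, data)                      # original objective, e.g. node classification

    nodes = random.sample(range(data.num_nodes), k_nodes)
    edge_mask = torch.zeros(data.edge_index.size(1), requires_grad=True)
    inner = explainer_loss(model, data, nodes, edge_mask)          # explainer fitted on-the-fly
    (grad,) = torch.autograd.grad(inner, edge_mask, create_graph=True)
    adapted_mask = edge_mask - inner_lr * grad                     # one inner adaptation step
    loss = loss + explainer_loss(model, data, nodes, adapted_mask) # meta objective on adapted mask

    loss.backward()
    opt.step()
```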
The massive deployment of graph neural networks (GNNs) in high-stakes applications creates a strong demand for explanations that are robust to noise and align well with human intuition. Most existing methods generate explanations by identifying a subgraph of the input graph that is strongly correlated with the prediction. These explanations are not robust to noise, because independently optimizing the correlation for a single input easily overfits the noise. Moreover, they do not align well with human intuition, because removing the identified subgraph from the input graph does not necessarily change the prediction result. In this paper, we propose a novel method for generating robust counterfactual explanations for GNNs by explicitly modeling the common decision logic of GNNs on similar input graphs. Our explanations are naturally robust to noise because they are produced from the common decision boundaries of the GNN that govern the predictions on many similar input graphs. The explanations also align well with human intuition because removing the set of edges identified by an explanation from the input graph changes the prediction significantly. Exhaustive experiments on many public datasets demonstrate the superior performance of our method.
Uncovering the rationales behind predictions of graph neural networks (GNNs) has received increasing attention over recent years. Instance-level GNN explanation aims to discover the critical input elements, such as nodes or edges, that the target GNN relies upon for making predictions. Though various algorithms have been proposed, most of them formalize this task as searching for the minimal subgraph that can preserve the original prediction. However, an inductive bias is deep-rooted in this framework: several subgraphs can result in the same or similar outputs as the original graph. Consequently, these methods are in danger of providing spurious explanations and fail to provide consistent explanations. Applying them to explain weakly-performing GNNs would further amplify these issues. To address this problem, we theoretically examine the predictions of GNNs from the causality perspective. Two typical reasons for spurious explanations are identified: the confounding effect of latent variables such as distribution shift, and causal factors distinct from the original input. Observing that both confounding effects and diverse causal rationales are encoded in internal representations, we propose a simple yet effective countermeasure based on aligning embeddings. Concretely, to account for potential shifts in the high-dimensional space, we design a distribution-aware alignment algorithm based on anchors. This new objective is easy to compute and can be incorporated into existing techniques with little or no effort. Theoretical analysis shows that it in effect optimizes a more faithful explanation objective, which further justifies the proposed approach.
Despite recent progress in graph neural networks (GNNs), explaining the predictions of GNNs remains challenging. Existing explanation methods mainly focus on post-hoc explanation, where a separate explanation model is used to provide explanations for a trained GNN. The fact that post-hoc methods fail to reveal the original reasoning process of the GNN raises the need to build GNNs with built-in interpretability. In this work, we propose the Prototype Graph Neural Network (ProtGNN), which combines prototype learning with GNNs and offers a new perspective on GNN explanation. In ProtGNN, explanations are naturally derived from a case-based reasoning process and are actually used during classification. ProtGNN's predictions are obtained by comparing the input to a few learned prototypes in the latent space. Furthermore, for better interpretability and higher efficiency, a novel conditional subgraph sampling module is incorporated in ProtGNN+ to indicate which part of the input graph is most similar to each prototype. Finally, we evaluate our methods on a wide range of datasets and conduct concrete case studies. Extensive results show that ProtGNN and ProtGNN+ provide inherent interpretability while achieving accuracy on par with their non-interpretable counterparts.
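The case-based prediction step can be pictured with the hedged sketch below: a graph embedding is scored against learned prototype vectors and class logits are a linear function of the similarities. The `PrototypeHead` name and the log-ratio similarity are assumptions borrowed from generic prototype networks, not necessarily ProtGNN's exact formulation.

```python
import torch

class PrototypeHead(torch.nn.Module):
    """Prototype-based classification head: compare an embedding to learned
    prototypes and map the similarities to class logits (hypothetical sketch)."""
    def __init__(self, embed_dim, n_prototypes, n_classes):
        super().__init__()
        self.prototypes = torch.nn.Parameter(torch.randn(n_prototypes, embed_dim))
        self.cls = torch.nn.Linear(n_prototypes, n_classes, bias=False)

    def forward(self, graph_embedding):                        # shape: (batch, embed_dim)
        dist = torch.cdist(graph_embedding, self.prototypes)   # distance to each prototype
        similarity = torch.log((dist ** 2 + 1) / (dist ** 2 + 1e-4))  # large when close to a prototype
        return self.cls(similarity)                            # class logits from similarities
```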
Deep learning methods are achieving ever-increasing performance on many artificial intelligence tasks. A major limitation of deep models is that they are not amenable to interpretability. This limitation can be circumvented by developing post-hoc techniques to explain predictions, giving rise to the area of explainability. Recently, significant progress has been made on the explainability of deep models on images and text. In the area of graph data, graph neural networks (GNNs) and their explainability are developing rapidly. However, there is neither a unified treatment of GNN explainability methods nor a standard benchmark and testbed. In this survey, we provide a unified and taxonomic view of current GNN explainability methods. Our unified and taxonomic treatment of this subject sheds light on the commonalities and differences of existing methods and sets the stage for further methodological development. To facilitate evaluation, we generate a set of benchmark graph datasets specifically for GNN explainability. We summarize current datasets and metrics for evaluating GNN explainability. Altogether, this work provides a unified methodological treatment of GNN explainability and a standardized testbed for evaluation.
With the increasing use of Graph Neural Networks (GNNs) in critical real-world applications, several post hoc explanation methods have been proposed to understand their predictions. However, there has been no work on generating explanations on the fly during model training and utilizing them to improve the expressive power of the underlying GNN models. In this work, we introduce a novel explanation-directed neural message passing framework for GNNs, EXPASS (EXplainable message PASSing), which aggregates only embeddings from nodes and edges identified as important by a GNN explanation method. EXPASS can be used with any existing GNN architecture and subgraph-optimizing explainer to learn accurate graph embeddings. We theoretically show that EXPASS alleviates the oversmoothing problem in GNNs by slowing the layer-wise loss of Dirichlet energy, and that the embedding difference between vanilla message passing and the EXPASS framework can be upper bounded by the difference of their respective model weights. Our empirical results show that graph embeddings learned using EXPASS improve predictive performance and alleviate the oversmoothing problem of GNNs, opening up new frontiers in graph machine learning for developing explanation-based training frameworks.
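A minimal sketch of explanation-gated aggregation is shown below, assuming per-edge importance scores are supplied by some external explainer; it illustrates the general idea rather than the EXPASS reference implementation, and the class name and layer design are assumptions.

```python
import torch

class ExplanationGatedLayer(torch.nn.Module):
    """Message-passing layer in which incoming messages are scaled by per-edge
    importance scores from an external explainer (hypothetical sketch)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index, edge_importance):
        src, dst = edge_index                               # edge_index: shape (2, num_edges)
        messages = self.lin(x)[src] * edge_importance.unsqueeze(-1)  # attenuate unimportant edges
        out = torch.zeros(x.size(0), messages.size(-1),
                          dtype=messages.dtype, device=x.device)
        out.index_add_(0, dst, messages)                    # sum-aggregate the gated messages
        return torch.relu(out)
```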
As post-hoc explanations are increasingly used to understand the behavior of graph neural networks (GNNs), it becomes crucial to evaluate the quality and reliability of GNN explanations. However, assessing the quality of GNN explanations is challenging, as existing graph datasets have no or unreliable ground-truth explanations for a given task. Here, we introduce ShapeGGen, a synthetic graph data generator that can produce a variety of benchmark datasets (e.g., varying graph sizes, degree distributions, homophilic vs. heterophilic graphs) accompanied by ground-truth explanations. Moreover, the flexibility to generate diverse synthetic datasets and corresponding ground-truth explanations allows us to mimic the data generated by various real-world applications. We include ShapeGGen and several real-world graph datasets in the open-source graph explainability library GraphXAI. In addition to synthetic and real-world graph datasets with ground-truth explanations, GraphXAI provides data loaders, data processing functions, visualizers, GNN model implementations, and evaluation metrics to benchmark the performance of GNN explainability methods.
With the rapid deployment of graph neural network (GNN) based techniques in a wide range of applications such as link prediction, node classification, and graph classification, the explainability of GNNs has become an indispensable component for predictive and trustworthy decision-making. Thus, it is critical to explain why a GNN makes particular predictions for it to be trusted in many applications. Several GNN explainers have been proposed recently; however, they fail to generate accurate and realistic explanations. To mitigate these limitations, we propose GANExplainer, based on the Generative Adversarial Network (GAN) architecture. GANExplainer is composed of a generator that creates explanations and a discriminator that assists the generator's training. We investigate the explanation accuracy of our model by comparing the performance of GANExplainer with other state-of-the-art methods. Our empirical results on synthetic datasets indicate that GANExplainer improves explanation accuracy by up to 35% compared to its alternatives.
Graph neural networks (GNNs) are a popular class of machine learning models. Inspired by the learning-to-explain (L2X) paradigm, we propose L2XGNN, a framework for explainable GNNs that provides faithful explanations by design. L2XGNN learns a mechanism for selecting explanatory subgraphs (motifs) which are exclusively used in the GNN's message-passing operations. L2XGNN is able to select, for each input graph, a subgraph with specific properties such as being sparse and connected. Imposing such constraints on the motifs often leads to more interpretable and effective explanations. Experiments on several datasets show that L2XGNN achieves the same classification accuracy as baseline methods that use the entire input graph, while ensuring that only the provided explanations are used to make predictions. Moreover, we show that L2XGNN is able to identify the motifs responsible for the graph property being predicted.
Explaining machine learning models is an important and increasingly popular area of research interest. The Shapley value from game theory has been proposed as a prime approach to compute feature importance towards model predictions on images, text, tabular data, and recently graph neural networks (GNNs) on graphs. In this work, we revisit the appropriateness of the Shapley value for GNN explanation, where the task is to identify the most important subgraph and constituent nodes for GNN predictions. We claim that the Shapley value is a non-ideal choice for graph data because it is by definition not structure-aware. We propose a Graph Structure-aware eXplanation (GStarX) method to leverage the critical graph structure information to improve the explanation. Specifically, we define a scoring function based on a new structure-aware value from the cooperative game theory proposed by Hamiache and Navarro (HN). When used to score node importance, the HN value utilizes graph structures to attribute cooperation surplus between neighbor nodes, resembling message passing in GNNs, so that node importance scores reflect not only the node feature importance, but also the node structural roles. We demonstrate that GStarX produces qualitatively more intuitive explanations, and quantitatively improves explanation fidelity over strong baselines on chemical graph property prediction and text graph sentiment classification.
The problem of explaining machine learning decisions is well studied and important. We are interested in a specific type of machine learning model for graph data, namely graph neural networks. It is well known that evaluating interpretability approaches for graph neural networks (GNNs) is challenging due to the lack of commonly accepted benchmarks. Given a GNN model, several interpretability approaches exist to explain it, with diverse (and sometimes conflicting) methodologies. In this paper, we propose Bagel, a benchmark for evaluating explainability approaches for GNNs. In Bagel, we first propose four diverse GNN explanation evaluation regimes: 1) faithfulness, 2) sparsity, 3) correctness, and 4) plausibility. We reconcile multiple evaluation metrics from the existing literature and cover diverse notions for a holistic evaluation. Our graph datasets range from citation networks and document graphs to graphs of molecules and proteins. We conduct an extensive empirical study on four GNN models and nine post-hoc explanation approaches for node and graph classification tasks. We open-source the benchmark and reference implementations and make them available at https://github.com/mandeep-rathee/bagel-benchmark.
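As an illustration of two of these regimes, the sketch below computes a fidelity-style score (prediction drop when explanation edges are removed) and a sparsity score for a single node; the exact definitions used in Bagel may differ, and `model`, `edge_mask`, and the 0.5 threshold are placeholders.

```python
import torch

def fidelity_and_sparsity(model, x, edge_index, edge_mask, node_idx, threshold=0.5):
    """Common explanation metrics (hypothetical sketch, not Bagel's exact code):
    fidelity = how much the prediction changes when explanation edges are removed;
    sparsity = fraction of edges excluded from the explanation."""
    keep = edge_mask < threshold                        # drop edges the explainer marked important
    full = model(x, edge_index).softmax(-1)[node_idx]
    reduced = model(x, edge_index[:, keep]).softmax(-1)[node_idx]
    label = full.argmax()
    fidelity = (full[label] - reduced[label]).item()    # large drop suggests a faithful explanation
    sparsity = 1.0 - edge_mask.ge(threshold).float().mean().item()
    return fidelity, sparsity
```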
As one of the most popular machine learning models today, graph neural networks (GNNs) have recently attracted intense interest, and so has their explainability. Users are increasingly interested in better understanding GNN models and their outcomes. Unfortunately, today's evaluation frameworks for GNN explainability often rely on synthetic datasets, leading to conclusions of limited scope because the problem instances lack complexity. As GNN models are deployed in more mission-critical applications, we are in dire need of a common evaluation protocol for GNN explainability methods. In this paper, we propose, to the best of our knowledge, the first systematic evaluation framework for GNN explainability, considering explainability along three different "user needs": explanation focus, mask nature, and mask transformation. We propose a unique metric that combines fidelity measures and classifies explanations based on their quality of being sufficient or necessary. We scope ourselves to node classification tasks and compare the most representative techniques in the field of input-level GNN explainability. On the widely used synthetic benchmarks, surprisingly, shallow techniques such as Personalized PageRank perform best with minimal computation time. But when the graph structure is more complex and nodes have meaningful features, gradient-based methods, in particular Saliency, are the best according to our evaluation criteria. However, none dominates across all evaluation dimensions, and there is always a trade-off. We further apply our evaluation protocol in a case study on eBay graphs to reflect a production environment.
Graph neural networks (GNNs) are rapidly becoming the standard approach for learning on graph-structured data across multiple domains, but they lack transparency in their decision-making. Several perturbation-based approaches have been developed to provide insights into the decision-making process of GNNs. As this is an early research area, the methods and data used to evaluate the generated explanations lack maturity. We explore these existing approaches and identify common pitfalls in three main areas: (1) the synthetic data generation process, (2) the evaluation metrics, and (3) the final presentation of the explanation. For this purpose, we perform an empirical study to explore these pitfalls along with their unintended consequences, and propose remedies to mitigate their effects.
The opaque reasoning of graph neural networks induces a lack of human trust. Existing graph network explainers attempt to address this issue by providing post-hoc explanations; however, they fail to make the model itself more interpretable. To fill this gap, we introduce the Concept Encoder Module, the first differentiable concept-discovery approach for graph networks. The proposed approach makes graph networks explainable by design by first discovering graph concepts and then using these to solve the task. Our results demonstrate that this approach allows graph networks to: (i) attain model accuracy comparable with their equivalent vanilla versions, (ii) discover meaningful concepts that achieve high concept completeness and purity scores, (iii) provide high-quality concept-based logic explanations for their predictions, and (iv) support effective interventions at test time, which can increase human trust and significantly improve model performance.
We investigate the explainability of graph neural networks (GNNs) as a step toward elucidating their working mechanisms. While most current methods focus on explaining graph nodes, edges, or features, we argue that message flows, as the inherent functional mechanism of GNNs, are a more natural target for explanation. To this end, we propose a novel method, FlowX, to explain GNNs by identifying important message flows. To quantify the importance of flows, we propose to follow the philosophy of Shapley values from cooperative game theory. To tackle the complexity of computing marginal contributions over all coalitions, we propose an approximation scheme that computes Shapley-like values as initial assessments for further redistribution training. We then propose a learning algorithm to train flow scores and improve explainability. Experimental studies on both synthetic and real-world datasets demonstrate that the proposed FlowX leads to improved explainability of GNNs.
Explainability of Graph Neural Networks (GNNs) is critical to various GNN applications but remains an open challenge. A convincing explanation should be both necessary and sufficient simultaneously. However, existing GNN explanation approaches focus on only one of the two aspects, necessity or sufficiency, or a trade-off between the two. To search for the most necessary and sufficient explanation, the Probability of Necessity and Sufficiency (PNS) can be applied, since it can mathematically quantify the necessity and sufficiency of an explanation. Nevertheless, the difficulty of obtaining PNS due to non-monotonicity and the challenge of counterfactual estimation limit its wide use. To address the non-identifiability of PNS, we resort to a lower bound of PNS that can be optimized via counterfactual estimation, and propose Necessary and Sufficient Explanation for GNN (NSEG) by optimizing that lower bound. Specifically, we employ nearest-neighbor matching to generate counterfactual samples for the features, which differs from random perturbation. In particular, NSEG combines edges and node features to generate an explanation, where the common edge explanation is a special case of the combined explanation. An empirical study shows that NSEG achieves excellent performance in generating the most necessary and sufficient explanations among a series of state-of-the-art methods.
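For reference, the standard Tian–Pearl-style bounds on PNS are shown below; the abstract's "lower bound of PNS" presumably corresponds to the left-hand quantity, estimated with counterfactual samples, though the paper's exact estimator is not reproduced here.

```latex
% Standard bounds on the probability of necessity and sufficiency (PNS):
\[
\max\{0,\; P(y \mid do(x)) - P(y \mid do(x'))\}
\;\le\; \mathrm{PNS} \;\le\;
\min\{P(y \mid do(x)),\; P(y' \mid do(x'))\}
\]
```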
Recently, graph neural networks (GNNs) have significantly advanced the performance of machine learning tasks on graphs. However, this technological breakthrough makes people wonder: how does a GNN make such decisions, and can we trust its predictions with high confidence? In critical fields such as biomedicine, where making wrong decisions can have severe consequences, it is crucial to interpret the inner working mechanisms of GNNs before applying them. In this paper, we propose GNNInterpreter, a novel model-agnostic, model-level explanation method for different GNNs that follow the message-passing scheme, to explain the high-level decision-making process of the GNN model. More specifically, via a continuous relaxation of graphs and the reparameterization trick, GNNInterpreter learns a probabilistic generative graph distribution that produces the most representative graph for the target prediction in the eyes of the GNN model. Compared to the only existing work, GNNInterpreter is more effective and flexible in generating explanation graphs with different types of node and edge features, without introducing another black box to explain the GNN and without requiring domain-specific knowledge. Additionally, experimental studies on four different datasets demonstrate that the explanation graphs generated by GNNInterpreter match the desired graph patterns when the model is ideal, and reveal potential model pitfalls if any exist.
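The continuous relaxation mentioned above can be pictured with the following hedged sketch, in which independent Bernoulli edge probabilities are sampled with a binary-concrete (Gumbel-sigmoid) reparameterization so the generative graph distribution stays differentiable; the function name and temperature are illustrative assumptions rather than GNNInterpreter's implementation.

```python
import torch

def sample_soft_adjacency(theta, temperature=0.5):
    """Binary-concrete sample of a soft adjacency matrix from independent
    Bernoulli edge logits `theta`, keeping sampling differentiable
    (hypothetical sketch)."""
    u = torch.rand_like(theta).clamp(1e-6, 1 - 1e-6)
    logistic_noise = torch.log(u) - torch.log(1 - u)          # reparameterization noise
    return torch.sigmoid((theta + logistic_noise) / temperature)
```

Training would then adjust `theta` by maximizing the GNN's score for the target class on sampled graphs, so that the distribution converges toward the most representative pattern.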
Recently, subgraph-enhanced graph neural networks (SGNNs) have been introduced to enhance the expressive power of graph neural networks (GNNs), which has been proven to be no higher than the 1-dimensional Weisfeiler-Leman isomorphism test. The new paradigm suggests using subgraphs extracted from the input graph to improve the model's expressiveness, but the additional complexity exacerbates an already challenging problem in GNNs: explaining their predictions. In this work, we adapt PGExplainer, one of the most recent explainers for GNNs, to SGNNs. The proposed explainer accounts for the contributions of all the different subgraphs and can produce meaningful explanations that humans can interpret. The experiments we performed on real and synthetic datasets show that our framework successfully explains the decision process of SGNNs on graph classification tasks.
Graph neural networks (GNNs) achieve state-of-the-art performance in various high-stakes prediction tasks, but multiple layers of aggregation on graphs with irregular structures make GNNs less interpretable models. Prior methods either use a simpler subgraph to simulate the full model, or identify the causes of a prediction, i.e., counterfactuals. The two families of approaches aim at two distinct objectives, "simulatability" and "counterfactual relevance", but it is not clear how the objectives jointly influence human understanding of an explanation. We design a user study to investigate these joint effects and use the findings to design a multi-objective optimization (MOO) algorithm that finds Pareto-optimal explanations well balanced between simulatability and counterfactual relevance. Since the target model can be any GNN variant and may not be accessible due to privacy concerns, we design a search algorithm that uses zeroth-order information without accessing the architecture and parameters of the target model. Quantitative experiments on nine graphs from four applications demonstrate that the Pareto-efficient explanations dominate single-objective baselines that use first-order continuous optimization or discrete combinatorial search. The explanations are further evaluated in terms of robustness and sensitivity, showing their capability to reveal convincing causes while being cautious about possible confounders. The diverse dominating counterfactuals can certify the feasibility of algorithmic recourse, which may promote human participation in algorithmic fairness for GNN-based decision-making.
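To make "Pareto-optimal" concrete, a minimal dominance filter over candidate explanations scored on the two objectives might look like the sketch below; this is purely illustrative (the paper's MOO search is a zeroth-order algorithm, not this brute-force filter), and the function name and tuple convention are assumptions.

```python
def pareto_front(candidates):
    """Keep candidate explanations not dominated on the two objectives
    (simulatability, counterfactual relevance); higher is better for both."""
    front = []
    for i, a in enumerate(candidates):
        dominated = any(
            b[0] >= a[0] and b[1] >= a[1] and (b[0] > a[0] or b[1] > a[1])
            for j, b in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append(a)
    return front

# Example: pareto_front([(0.9, 0.2), (0.6, 0.8), (0.5, 0.5)]) keeps the first two.
```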