Most graph neural networks (GNNs) predict the labels of unseen graphs by learning the correlation between input graphs and labels. However, by investigating graph classification on training graphs with severe bias, we find that GNNs consistently tend to exploit spurious correlations to make decisions, even when causal correlations always exist. This implies that existing GNNs trained on such biased datasets suffer from poor generalization ability. Analyzing the problem from a causal view, we find that disentangling and decorrelating the causal and bias latent variables in biased graphs is crucial for debiasing. Motivated by this, we propose a general disentangled GNN framework that learns the causal substructure and the bias substructure separately. In particular, we design a parameterized edge-mask generator that explicitly splits the input graph into a causal subgraph and a bias subgraph. Two GNN modules, supervised by causal- and bias-aware loss functions respectively, are then trained to encode the corresponding representations of the causal and bias subgraphs. With the disentangled representations, we synthesize counterfactual unbiased training samples to further decorrelate the causal and bias variables. Moreover, to better benchmark the severe-bias problem, we construct three new graph datasets with controllable bias degrees that are easier to visualize and interpret. Experimental results demonstrate that our approach achieves superior generalization performance over existing baselines. Furthermore, owing to the learned edge masks, the proposed model offers appealing interpretability and transferability. Code and data are available at: https://github.com/googlebaba/disc.
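Below is a minimal sketch (plain PyTorch, dense adjacency; not the released DisC code) of two mechanisms named in the abstract above: a parameterized edge-mask generator that softly splits an input graph into causal and bias subgraphs, and counterfactual sample synthesis by re-pairing disentangled representations. Module names, dimensions, and the toy graph are illustrative assumptions.

```python
import torch
import torch.nn as nn


class EdgeMaskGenerator(nn.Module):
    """Scores each edge from its endpoint features; sigmoid gives a soft causal mask."""

    def __init__(self, in_dim: int, hid_dim: int = 32):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 1))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        n = x.size(0)
        # Pairwise endpoint features [n, n, 2d]
        pair = torch.cat([x.unsqueeze(1).expand(n, n, -1), x.unsqueeze(0).expand(n, n, -1)], dim=-1)
        score = self.scorer(pair).squeeze(-1)      # [n, n]
        return torch.sigmoid(score) * adj          # keep scores only on existing edges


def split_subgraphs(adj: torch.Tensor, mask: torch.Tensor):
    """Causal subgraph keeps high-mask edges; bias subgraph keeps the complement."""
    return adj * mask, adj * (1.0 - mask)


def counterfactual_pairs(z_causal: torch.Tensor, z_bias: torch.Tensor):
    """Re-pair each causal representation with a randomly permuted bias representation."""
    perm = torch.randperm(z_bias.size(0))
    return torch.cat([z_causal, z_bias[perm]], dim=-1)


# Toy usage: one graph with 5 nodes and random features / adjacency.
x = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()
mask = EdgeMaskGenerator(16)(x, adj)
causal_adj, bias_adj = split_subgraphs(adj, mask)
```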
Graph neural networks (GNNs) are typically proposed without considering the agnostic distribution shifts between training and testing graphs, which degrades their generalization ability in out-of-distribution (OOD) settings. The fundamental reason for this degradation is that most GNNs are developed under the i.i.d. hypothesis. In such a setting, GNNs tend to exploit subtle statistical correlations present in the training set for prediction, even if they are spurious. However, such spurious correlations may change in testing environments, leading to the failure of GNNs. Eliminating the influence of spurious correlations is therefore crucial for stable GNNs. To this end, we propose a general causal representation framework called StableGNN. The main idea is to first extract high-level representations from graph data and then resort to the distinguishing ability of causal inference to help the model get rid of spurious correlations. In particular, we exploit a graph pooling layer to extract subgraph-based representations as the high-level representations. Furthermore, we propose a causal-variable distinguishing method to correct the biased training distribution, so that GNNs concentrate more on stable correlations. Extensive experiments on both synthetic and real-world OOD graph datasets validate the effectiveness, flexibility, and interpretability of the proposed framework.
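Below is a minimal sketch, under assumptions, of the sample-reweighting idea behind correcting a biased training distribution: learn per-graph weights so that, under the weighted distribution, the dimensions of the high-level representations become (linearly) decorrelated. This illustrates decorrelation-based reweighting in general, not the paper's exact regularizer.

```python
import torch


def weighted_decorrelation_loss(z: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """z: [n, d] graph-level representations, w: [n] nonnegative sample weights."""
    w = w / w.sum()                                    # normalize to a distribution
    mean = (w.unsqueeze(1) * z).sum(dim=0, keepdim=True)
    zc = z - mean                                      # weighted centering
    cov = (w.unsqueeze(1) * zc).t() @ zc               # weighted covariance [d, d]
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()                       # penalize cross-correlations


# Toy usage: optimize log-weights for a batch of fixed representations.
z = torch.randn(64, 8)
log_w = torch.zeros(64, requires_grad=True)
opt = torch.optim.Adam([log_w], lr=0.05)
for _ in range(200):
    loss = weighted_decorrelation_loss(z.detach(), torch.exp(log_w))
    opt.zero_grad()
    loss.backward()
    opt.step()
```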
Learning powerful representations is a central theme for graph neural networks (GNNs). It requires distilling critical information from the input graph, rather than trivial patterns, to enrich the representations. Toward this end, graph attention and pooling methods prevail. They largely follow a "learn to attend" paradigm that maximizes the mutual information between the attended subgraph and the ground-truth label. However, this training paradigm is prone to capturing spurious correlations between trivial subgraphs and labels. Such spurious correlations are beneficial for in-distribution (ID) test evaluation but cause poor generalization on out-of-distribution (OOD) test data. In this work, we revisit GNN modeling from a causal perspective. On top of our causal assumption, the trivial information acts as a confounder between the critical information and the label: it opens a backdoor path between them and makes them spuriously correlated. Hence, we present a new deconfounded training paradigm (DTP) that better mitigates the confounding effect of the trivial information and latches onto the critical information, so as to improve representation and generalization ability. Specifically, we adopt an attention module to disentangle the critical subgraph and the trivial subgraph. We then let each critical subgraph fairly interact with diverse trivial subgraphs to achieve stable predictions. This allows the GNN to capture a more reliable subgraph whose relation to the label is consistent across different distributions. We conduct extensive experiments on synthetic and real-world datasets to demonstrate the effectiveness.
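Below is a minimal sketch, under assumptions, of the "fair interaction" step described above: each critical-subgraph representation is paired with several randomly drawn trivial-subgraph representations from the batch, and the prediction loss is averaged over the pairings so that the label cannot latch onto any single trivial context. The classifier and the number of pairings are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def deconfounded_loss(z_crit, z_triv, labels, classifier, n_pairs: int = 4):
    """z_crit, z_triv: [B, d] disentangled representations; labels: [B] class ids."""
    losses = []
    for _ in range(n_pairs):
        perm = torch.randperm(z_triv.size(0))          # a random trivial context per sample
        logits = classifier(torch.cat([z_crit, z_triv[perm]], dim=-1))
        losses.append(F.cross_entropy(logits, labels))
    return torch.stack(losses).mean()


# Toy usage.
B, d, n_cls = 32, 16, 3
clf = nn.Linear(2 * d, n_cls)
loss = deconfounded_loss(torch.randn(B, d), torch.randn(B, d), torch.randint(0, n_cls, (B,)), clf)
```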
Graph machine learning has been extensively studied in both academia and industry. Although booming with a vast number of emerging methods and techniques, most of the literature is built on the in-distribution hypothesis, i.e., testing and training graph data are identically distributed. However, this in-distribution hypothesis can hardly be satisfied in many real-world graph scenarios where the model performance substantially degrades when there exist distribution shifts between testing and training graph data. To solve this critical problem, out-of-distribution (OOD) generalization on graphs, which goes beyond the in-distribution hypothesis, has made great progress and attracted ever-increasing attention from the research community. In this paper, we comprehensively survey OOD generalization on graphs and present a detailed review of recent advances in this area. First, we provide a formal problem definition of OOD generalization on graphs. Second, we categorize existing methods into three classes from conceptually different perspectives, i.e., data, model, and learning strategy, based on their positions in the graph machine learning pipeline, followed by detailed discussions for each category. We also review the theories related to OOD generalization on graphs and introduce the commonly used graph datasets for thorough evaluations. Finally, we share our insights on future research directions. This paper is the first systematic and comprehensive review of OOD generalization on graphs, to the best of our knowledge.
Graph neural networks (GNNs) achieve impressive performance when the testing and training graph data come from the same distribution. However, existing GNNs lack out-of-distribution generalization ability, so their performance degrades substantially when there exist distribution shifts between testing and training graph data. To address this problem, we propose an out-of-distribution generalized graph neural network (OOD-GNN) that achieves satisfactory performance on unseen testing graphs whose distributions differ from the training graphs. OOD-GNN employs a novel nonlinear graph representation decorrelation method utilizing random Fourier features, which encourages the model to eliminate the statistical dependence between relevant and irrelevant graph representations by iteratively optimizing the sample graph weights and the graph encoder. We further design a global weight estimator to learn weights for the training graphs such that variables in the graph representations are forced to be independent. The learned weights help the graph encoder get rid of spurious correlations and, in turn, concentrate more on learning the true connection between discriminative graph representations and ground-truth labels. We conduct extensive experiments to validate the out-of-distribution generalization ability on two synthetic and 12 real-world datasets with distribution shifts. The results demonstrate that the proposed OOD-GNN significantly outperforms state-of-the-art baselines.
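Below is a minimal sketch, under assumptions, of decorrelation with random Fourier features (RFF): map two sets of representation dimensions through random cosine features and penalize their weighted cross-covariance, which approximates a nonlinear (not merely linear) independence measure. This is illustrative only, not the OOD-GNN reference implementation.

```python
import math
import torch


def random_fourier_features(x: torch.Tensor, n_feat: int = 64) -> torch.Tensor:
    """x: [n, d] -> [n, n_feat] cosine features approximating an RBF kernel."""
    d = x.size(1)
    w = torch.randn(d, n_feat, device=x.device)
    b = 2 * math.pi * torch.rand(n_feat, device=x.device)
    return math.sqrt(2.0 / n_feat) * torch.cos(x @ w + b)


def weighted_independence_penalty(a, b, w):
    """Weighted cross-covariance between RFF maps of a and b; small when (approximately) independent."""
    w = w / w.sum()
    fa, fb = random_fourier_features(a), random_fourier_features(b)
    fa = fa - (w.unsqueeze(1) * fa).sum(0, keepdim=True)
    fb = fb - (w.unsqueeze(1) * fb).sum(0, keepdim=True)
    cross = (w.unsqueeze(1) * fa).t() @ fb             # [n_feat, n_feat]
    return (cross ** 2).sum()


# Toy usage: penalize dependence between two halves of a graph representation.
z = torch.randn(128, 16)
weights = torch.ones(128)
penalty = weighted_independence_penalty(z[:, :8], z[:, 8:], weights)
```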
Despite recent success in applying the invariance principle for out-of-distribution (OOD) generalization on Euclidean data (e.g., images), research on graph data remains limited. Unlike images, the complex nature of graphs poses unique challenges to adopting the invariance principle. In particular, distribution shifts on graphs can appear in many forms, such as attributes and structures, making it difficult to identify the invariance. Moreover, domain or environment partitions, which are often required on Euclidean data, can be highly expensive to obtain for graphs. To bridge this gap, we propose a new framework to capture the invariance of graphs for guaranteed OOD generalization under various distribution shifts. Specifically, we characterize potential distribution shifts on graphs with causal models, concluding that OOD generalization on graphs is achievable when the model focuses only on the subgraph containing the most information about the cause of the label. Accordingly, we propose an information-theoretic objective to extract the desired subgraphs that maximally preserve the invariant intra-class information. Learning with these subgraphs is immune to distribution shifts. Extensive experiments on synthetic and real-world datasets, including a challenging setting in AI-aided drug discovery, validate the superior OOD generalization ability of our method.
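Below is a minimal sketch, under assumptions, of one way to instantiate an intra-class information objective of the kind described above: a supervised contrastive term that pulls together the representations of extracted subgraphs sharing a label. It is used here purely to illustrate "preserving invariant intra-class information" and is not the paper's exact objective.

```python
import torch
import torch.nn.functional as F


def intra_class_contrastive(z: torch.Tensor, y: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z: [B, d] subgraph representations, y: [B] labels."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / tau                               # cosine similarities
    logits = sim - torch.eye(z.size(0)) * 1e9           # mask self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos = ((y.unsqueeze(0) == y.unsqueeze(1)).float() - torch.eye(z.size(0))).clamp(min=0)
    denom = pos.sum(1).clamp(min=1)                     # samples without positives contribute 0
    return -(pos * log_prob).sum(1).div(denom).mean()


loss = intra_class_contrastive(torch.randn(32, 16), torch.randint(0, 3, (32,)))
```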
Machine learning models rely on various assumptions to attain high accuracy. One of the preliminary assumptions of these models is that the data are independent and identically distributed (i.i.d.), which suggests that the train and test data are sampled from the same distribution. However, this assumption seldom holds in the real world due to distribution shifts. As a result, models that rely on this assumption exhibit poor generalization capabilities. Over the recent years, dedicated efforts have been made to improve the generalization capabilities of these models, collectively known as \textit{domain generalization methods}. The primary idea behind these methods is to identify stable features or mechanisms that remain invariant across the different distributions. Many generalization approaches employ causal theories to describe invariance since causality and invariance are inextricably intertwined. However, current surveys deal with the causality-aware domain generalization methods at a very high level. Furthermore, we argue that it is possible to categorize the methods based on how causality is leveraged in that method and in which part of the model pipeline it is used. To this end, we categorize the causal domain generalization methods into three categories, namely, (i) Invariance via Causal Data Augmentation methods which are applied during the data pre-processing stage, (ii) Invariance via Causal representation learning methods that are utilized during the representation learning stage, and (iii) Invariance via Transferring Causal mechanisms methods that are applied during the classification stage of the pipeline. Furthermore, this survey includes in-depth insights into benchmark datasets and code repositories for domain generalization methods. We conclude the survey with insights and discussions on future directions.
Fair machine learning aims to mitigate the bias of model predictions against certain subpopulations defined by sensitive attributes such as race and gender. Among the many existing fairness notions, counterfactual fairness measures model fairness from a causal perspective by comparing the predictions on the original data with those on counterfactuals in which the individual's sensitive attribute values have been modified. Recently, a few works have extended counterfactual fairness to graph data, but most of them neglect the following facts that can lead to bias: 1) the sensitive attributes of each node's neighbors may causally affect the prediction w.r.t. this node; 2) the sensitive attributes may causally affect other features and the graph structure. To address these issues, we propose a novel fairness notion, graph counterfactual fairness, which accounts for the bias induced by the above facts. To learn node representations that satisfy graph counterfactual fairness, we propose a novel framework based on counterfactual data augmentation. In this framework, we generate counterfactuals corresponding to perturbations of each node's and its neighbors' sensitive attributes. We then enforce fairness by minimizing the discrepancy between the representations learned from the original graph and from the counterfactuals for each node. Experiments on synthetic and real-world graphs show that our framework outperforms state-of-the-art baselines in graph counterfactual fairness while achieving comparable prediction performance.
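Below is a minimal sketch, under assumptions, of the counterfactual-augmentation idea described above: flip the (binary) sensitive attribute of every node, which perturbs each node's own and its neighbors' sensitive attributes at once, then penalize the distance between node representations computed on the original and counterfactual inputs. The encoder, the attribute layout, and the toy normalized adjacency are illustrative assumptions.

```python
import torch
import torch.nn as nn


def flip_sensitive(x: torch.Tensor, sens_col: int) -> torch.Tensor:
    """Flip a binary sensitive attribute stored in column `sens_col` of the features."""
    x_cf = x.clone()
    x_cf[:, sens_col] = 1.0 - x_cf[:, sens_col]
    return x_cf


def fairness_discrepancy(encoder: nn.Module, x, adj, sens_col: int = 0) -> torch.Tensor:
    z = encoder(x, adj)
    z_cf = encoder(flip_sensitive(x, sens_col), adj)
    return (z - z_cf).pow(2).sum(dim=1).mean()         # per-node representation gap


class ToyGNN(nn.Module):
    """One dense message-passing layer; adj is assumed row-normalized."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, adj):
        return torch.relu(adj @ self.lin(x))


x = torch.rand(10, 8).round()                          # binary features; sensitive attribute in column 0
adj = torch.softmax(torch.randn(10, 10), dim=1)        # stand-in for a normalized adjacency
loss = fairness_discrepancy(ToyGNN(8, 16), x, adj)
```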
Uncovering rationales behind predictions of graph neural networks (GNNs) has received increasing attention over recent years. Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions. Though various algorithms have been proposed, most of them formalize this task by searching for the minimal subgraph which can preserve original predictions. However, an inductive bias is deep-rooted in this framework: several subgraphs can result in the same or similar outputs as the original graphs. Consequently, these methods run the risk of providing spurious explanations and failing to provide consistent explanations. Applying them to explain weakly-performing GNNs would further amplify these issues. To address this problem, we theoretically examine the predictions of GNNs from the causality perspective. Two typical causes of spurious explanations are identified: the confounding effect of latent variables such as distribution shift, and causal factors distinct from the original input. Observing that both confounding effects and diverse causal rationales are encoded in internal representations, we propose a simple yet effective countermeasure by aligning embeddings. Concretely, concerning potential shifts in the high-dimensional space, we design a distribution-aware alignment algorithm based on anchors. This new objective is easy to compute and can be incorporated into existing techniques with little or no effort. Theoretical analysis shows that it is in effect optimizing a more faithful explanation objective by design, which further justifies the proposed approach.
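Below is a minimal sketch, under assumptions, of an anchor-based alignment term in the spirit described above: pull the internal embedding of the candidate explanatory subgraph toward the embeddings of a set of anchor graphs (e.g., the original input and similar training graphs), weighting nearby anchors more to account for where the explanation currently sits in the embedding space. The anchor choice and weighting are illustrative, not the paper's algorithm.

```python
import torch


def anchor_alignment_loss(z_expl: torch.Tensor, z_anchors: torch.Tensor) -> torch.Tensor:
    """z_expl: [d] embedding of the explanation subgraph; z_anchors: [k, d] anchor embeddings."""
    dists = torch.cdist(z_expl.unsqueeze(0), z_anchors).squeeze(0)   # [k] Euclidean distances
    weights = torch.softmax(-dists, dim=0)             # closer anchors weigh more
    return (weights * dists).sum()


# Toy usage: this term would be added to the usual prediction-preservation objective.
z_expl = torch.randn(32)
z_anchors = torch.randn(5, 32)
align = anchor_alignment_loss(z_expl, z_anchors)
```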
Data-efficient learning on graphs (GEL) is essential in real-world applications. Existing GEL methods focus on learning useful representations for nodes, edges, or entire graphs with ``small'' labeled data. But the problem of data-efficient learning for subgraph prediction has not been explored. The challenges of this problem lie in the following aspects: 1) It is crucial for subgraphs to learn positional features to acquire structural information in the base graph in which they exist. Although the existing subgraph neural network method is capable of learning disentangled position encodings, the overall computational complexity is very high. 2) Prevailing graph augmentation methods for GEL, including rule-based, sample-based, adaptive, and automated methods, are not suitable for augmenting subgraphs because a subgraph contains fewer nodes but richer information such as position, neighbor, and structure. Subgraph augmentation is more susceptible to undesirable perturbations. 3) Only a small number of nodes in the base graph are contained in subgraphs, which leads to a potential ``bias'' problem that the subgraph representation learning is dominated by these ``hot'' nodes. By contrast, the remaining nodes fail to be fully learned, which reduces the generalization ability of subgraph representation learning. In this paper, we aim to address the challenges above and propose a Position-Aware Data-Efficient Learning framework for subgraph neural networks called PADEL. Specifically, we propose a novel node position encoding method that is anchor-free, and design a new generative subgraph augmentation method based on a diffused variational subgraph autoencoder, and we propose exploratory and exploitable views for subgraph contrastive learning. Extensive experiment results on three real-world datasets show the superiority of our proposed method over state-of-the-art baselines.
Link prediction is a crucial problem in graph-structured data. Due to the recent success of graph neural networks (GNNs), a variety of GNN-based models were proposed to tackle the link prediction task. Specifically, GNNs leverage the message passing paradigm to obtain node representation, which relies on link connectivity. However, in a link prediction task, links in the training set are always present while ones in the testing set are not yet formed, resulting in a discrepancy of the connectivity pattern and bias of the learned representation. It leads to a problem of dataset shift which degrades the model performance. In this paper, we first identify the dataset shift problem in the link prediction task and provide theoretical analyses on how existing link prediction methods are vulnerable to it. We then propose FakeEdge, a model-agnostic technique, to address the problem by mitigating the graph topological gap between training and testing sets. Extensive experiments demonstrate the applicability and superiority of FakeEdge on multiple datasets across various domains.
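Below is a minimal sketch, under assumptions, of the idea described above: before encoding the enclosing subgraph of a candidate link, make the focal edge's presence consistent between training and testing (e.g., always add it, or always remove it), so the encoder does not see a different connectivity pattern at test time. This illustrates the edge-consistency trick, not the FakeEdge reference implementation.

```python
import torch


def unify_focal_edge(adj: torch.Tensor, src: int, dst: int, mode: str = "add") -> torch.Tensor:
    """adj: dense [n, n] adjacency of the extracted subgraph; (src, dst): focal node pair."""
    adj = adj.clone()
    val = 1.0 if mode == "add" else 0.0                # "add" inserts the edge, "remove" deletes it
    adj[src, dst] = val
    adj[dst, src] = val
    return adj


# Toy usage: a training subgraph where the focal edge exists and a testing subgraph
# where it does not are both mapped to the same connectivity pattern.
train_sub = torch.tensor([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
test_sub = torch.tensor([[0., 0., 1.], [0., 0., 0.], [1., 0., 0.]])
same_train = unify_focal_edge(train_sub, 0, 1, mode="add")
same_test = unify_focal_edge(test_sub, 0, 1, mode="add")
```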
Graph Neural Networks (GNNs) have attracted increasing attention in recent years and have achieved excellent performance in semi-supervised node classification tasks. The success of most GNNs relies on one fundamental assumption, i.e., the original graph structure data is available. However, recent studies have shown that GNNs are vulnerable to the complex underlying structure of the graph, making it necessary to learn comprehensive and robust graph structures for downstream tasks, rather than relying only on the raw graph structure. In light of this, we seek to learn optimal graph structures for downstream tasks and propose a novel framework for semi-supervised classification. Specifically, based on the structural context information of graph and node representations, we encode the complex interactions in semantics and generate semantic graphs to preserve the global structure. Moreover, we develop a novel multi-measure attention layer to optimize the similarity rather than prescribing it a priori, so that the similarity can be adaptively evaluated by integrating measures. These graphs are fused and optimized together with GNN towards semi-supervised classification objective. Extensive experiments and ablation studies on six real-world datasets clearly demonstrate the effectiveness of our proposed model and the contribution of each component.
Counterfactual explanations promote explainability in machine learning models by answering the question "how should an input instance be perturbed to obtain a desired predicted label?". The comparison of this instance before and after perturbation can enhance human interpretation. Most existing studies on counterfactual explanations are limited in tabular data or image data. In this work, we study the problem of counterfactual explanation generation on graphs. A few studies have explored counterfactual explanations on graphs, but many challenges of this problem are still not well-addressed: 1) optimizing in the discrete and disorganized space of graphs; 2) generalizing on unseen graphs; and 3) maintaining the causality in the generated counterfactuals without prior knowledge of the causal model. To tackle these challenges, we propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models. Specifically, CLEAR leverages a graph variational autoencoder based mechanism to facilitate its optimization and generalization, and promotes causality by leveraging an auxiliary variable to better identify the underlying causal model. Extensive experiments on both synthetic and real-world graphs validate the superiority of CLEAR over the state-of-the-art methods in different aspects.
Graph neural networks (GNNs) have received remarkable success in link prediction (GNNLP) tasks. Existing efforts first predefine the subgraph for the whole dataset and then apply GNNs to encode edge representations by leveraging the neighborhood structure induced by the fixed subgraph. The prominence of GNNLP methods significantly relies on the ad hoc subgraph. Since node connectivity in real-world graphs is complex, one shared subgraph is limited for all edges. Thus, the choices of subgraphs should be personalized to different edges. However, performing personalized subgraph selection is nontrivial since the potential selection space grows exponentially with the number of edges. Besides, the inference edges are not available during training in link prediction scenarios, so the selection process needs to be inductive. To bridge the gap, we introduce a Personalized Subgraph Selector (PS2) as a plug-and-play framework to automatically, personally, and inductively identify optimal subgraphs for different edges when performing GNNLP. PS2 is instantiated as a bi-level optimization problem that can be efficiently solved differently. Coupling GNNLP models with PS2, we suggest a brand-new angle towards GNNLP training: by first identifying the optimal subgraphs for edges; and then focusing on training the inference model by using the sampled subgraphs. Comprehensive experiments endorse the effectiveness of our proposed method across various GNNLP backbones (GCN, GraphSage, NGCF, LightGCN, and SEAL) and diverse benchmarks (Planetoid, OGB, and Recommendation datasets). Our code is publicly available at \url{https://github.com/qiaoyu-tan/PS2}
Prevailing graph neural network models have made significant progress in graph representation learning. However, in this paper we uncover an ever-overlooked phenomenon: a pre-trained graph representation learning model tested with full graphs underperforms the same model tested with well-pruned graphs. This observation reveals that there exist confounders in graphs that can interfere with the model's learning of semantic information, and whose effects are not eliminated by current graph representation learning methods. To address this issue, we propose Robust Causal Graph Representation Learning (RCGRL), which learns robust graph representations against confounding effects. RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders and thereby capture discriminative information that is causally related to downstream predictions. We provide theorems and proofs to guarantee the theoretical effectiveness of the proposed approach. Empirically, we conduct extensive experiments on a synthetic dataset and multiple benchmark datasets. The results demonstrate that, compared with state-of-the-art methods, RCGRL achieves better prediction performance and generalization ability.
Due to the superior performance of graph neural networks (GNNs) in various domains, there is increasing interest in the GNN explanation problem: "which part of the input graph is the most crucial for the model's decision?" Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored. The opaqueness of graph representations may lead to unexpected risks when they are deployed in high-stakes decision-making scenarios. In this paper, we advance the Information Bottleneck (IB) principle to tackle the explanation problem for unsupervised graph representations, which leads to a novel principle, \textit{Unsupervised Subgraph Information Bottleneck} (USIB). We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the expressiveness and robustness of representations benefit the fidelity of explanatory subgraphs. Experimental results on synthetic and real-world datasets demonstrate the superiority of our developed explainer and the validity of our theoretical analysis.
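Below is a minimal sketch, under stated assumptions, of an information-bottleneck-style explanation objective in the spirit described above: an InfoNCE term keeps the explanatory-subgraph embedding predictive of the (fixed) unsupervised graph embedding, while a sparsity penalty on the edge mask acts as a crude compression surrogate. This illustrates the IB trade-off only; it is not the USIB objective.

```python
import torch
import torch.nn.functional as F


def info_nce(z_sub: torch.Tensor, z_graph: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """Contrastive estimator tying subgraph and graph embeddings (batched; diagonal = positives)."""
    z_sub, z_graph = F.normalize(z_sub, dim=-1), F.normalize(z_graph, dim=-1)
    logits = z_sub @ z_graph.t() / tau                  # [B, B]
    labels = torch.arange(z_sub.size(0))
    return F.cross_entropy(logits, labels)


def ib_style_loss(z_sub, z_graph, edge_mask, beta: float = 0.1):
    return info_nce(z_sub, z_graph) + beta * edge_mask.mean()


loss = ib_style_loss(torch.randn(16, 32), torch.randn(16, 32), torch.rand(16, 40))
```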
Interpretable graph learning is in need, as many scientific applications depend on learning models to collect insights from graph-structured data. Previous work has mostly focused on using post-hoc approaches to interpret pre-trained models (graph neural network models in particular). They argue against inherently interpretable models because good interpretability in such models often comes at the cost of prediction accuracy. Moreover, the widely used attention mechanism for inherent interpretation often fails to provide faithful interpretation in graph learning tasks. In this work, we address both issues by proposing Graph Stochastic Attention (GSAT), an attention mechanism derived from the information bottleneck principle. GSAT leverages stochastic attention to block the information from task-irrelevant graph components while learning stochasticity-reduced attention to select task-relevant subgraphs for interpretation. GSAT can also be applied to fine-tune and interpret pre-trained models via its stochastic attention mechanism. Extensive experiments on eight datasets show that GSAT outperforms state-of-the-art methods by up to 20% in interpretation AUC while also achieving higher prediction accuracy.
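Below is a minimal sketch, under assumptions, of stochastic edge attention with an information-bottleneck-style regularizer as described above: edge logits are sampled with a relaxed Bernoulli (Gumbel-sigmoid) during training, and a KL term pulls the keep-probabilities toward a small prior r, limiting how much information the selected subgraph can carry. Hyperparameters and names are illustrative.

```python
import torch


def gumbel_sigmoid(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Differentiable relaxed Bernoulli sample for each edge logit."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    gumbel = torch.log(u) - torch.log(1 - u)
    return torch.sigmoid((logits + gumbel) / temperature)


def bernoulli_kl_to_prior(p: torch.Tensor, r: float = 0.5) -> torch.Tensor:
    """KL( Bern(p) || Bern(r) ), averaged over edges."""
    p = p.clamp(1e-6, 1 - 1e-6)
    return (p * torch.log(p / r) + (1 - p) * torch.log((1 - p) / (1 - r))).mean()


# Toy usage: sample a soft edge mask and compute the information regularizer.
edge_logits = torch.randn(40, requires_grad=True)
keep_prob = torch.sigmoid(edge_logits)
edge_mask = gumbel_sigmoid(edge_logits, temperature=0.7)   # would be multiplied into message passing
reg = bernoulli_kl_to_prior(keep_prob, r=0.3)
```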
A rationale is defined as a subset of input features that best explains or supports a machine learning model's prediction. Rationale identification has improved the generalizability and interpretability of neural networks on vision and language data. In graph applications such as molecule and polymer property prediction, identifying representative subgraph structures, called graph rationales, plays an essential role in the performance of graph neural networks. Existing graph pooling and/or distribution intervention methods suffer from a lack of examples to learn to identify optimal graph rationales. In this work, we introduce a new augmentation operation called environment replacement, which automatically creates virtual data examples to improve rationale identification. We propose an efficient framework that performs rationale-environment separation and representation learning on the real and augmented examples in latent space, avoiding the high complexity of explicit graph decoding and encoding. Compared against recent techniques, experiments on seven molecular and four polymer real-world datasets demonstrate the effectiveness and efficiency of the proposed augmentation-based graph rationalization framework.
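Below is a minimal sketch, under assumptions, of latent-space environment replacement as described above: separate each graph representation into a rationale part and an environment part, then build virtual examples by pairing every rationale with an environment drawn from another graph in the batch while keeping the original label. The split, classifier, and joint training are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def environment_replacement(z_rat, z_env, labels):
    """Return virtual (rationale, replaced-environment) pairs with unchanged labels."""
    perm = torch.randperm(z_env.size(0))
    z_virtual = torch.cat([z_rat, z_env[perm]], dim=-1)
    return z_virtual, labels                            # labels follow the rationale


# Toy usage: train a classifier on real and virtual latent examples jointly.
B, d, n_cls = 16, 8, 2
z_rat, z_env = torch.randn(B, d), torch.randn(B, d)
labels = torch.randint(0, n_cls, (B,))
clf = nn.Linear(2 * d, n_cls)
z_real = torch.cat([z_rat, z_env], dim=-1)
z_virt, y_virt = environment_replacement(z_rat, z_env, labels)
loss = F.cross_entropy(clf(z_real), labels) + F.cross_entropy(clf(z_virt), y_virt)
```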
Graph neural networks (GNNs) have shown convincing performance in learning powerful node representations that preserve both node attributes and graph structural information. However, many GNNs encounter effectiveness and efficiency problems when they are designed with deeper network structures or handle large-sized graphs. Several sampling algorithms have been proposed to improve and accelerate the training of GNNs, yet they ignore where the GNN performance gain comes from. Measuring the information within graph data can help sampling algorithms keep high-value information while removing redundant information and even noise. In this paper, we propose a metric-guided (MEGUIDE) subgraph learning framework for GNNs. MEGUIDE employs two novel metrics, feature smoothness and connection failure distance, to guide subgraph sampling and mini-batch training. Feature smoothness is designed for analyzing node features so as to retain the most valuable information, while connection failure distance measures structural information to control the size of subgraphs. We demonstrate the effectiveness and efficiency of MEGUIDE for training various GNNs on multiple datasets.
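Below is a minimal sketch, under assumptions, of a feature-smoothness measurement of the kind referenced above: a Dirichlet-energy-style score that sums squared feature differences across edges, normalized by the edge count. The exact definition in the paper may differ; this only illustrates how such a metric can score a candidate subgraph.

```python
import torch


def feature_smoothness(x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """x: [n, d] node features, adj: dense {0,1} symmetric adjacency. Lower = smoother."""
    deg = adj.sum(dim=1)
    laplacian = torch.diag(deg) - adj
    energy = torch.trace(x.t() @ laplacian @ x)         # sum over edges of ||x_i - x_j||^2
    n_edges = adj.sum() / 2
    return energy / n_edges.clamp(min=1)


# Toy usage: score a candidate subgraph with two triangles.
x = torch.randn(6, 4)
adj = torch.tensor([[0, 1, 1, 0, 0, 0], [1, 0, 1, 0, 0, 0], [1, 1, 0, 0, 0, 0],
                    [0, 0, 0, 0, 1, 1], [0, 0, 0, 1, 0, 1], [0, 0, 0, 1, 1, 0]], dtype=torch.float)
score = feature_smoothness(x, adj)
```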
Graph neural networks (GNNs) have been widely applied in various fields to learn from graph-structured data. They have shown significant improvements over traditional heuristic methods in various tasks such as node classification and graph classification. However, since GNNs rely heavily on smoothed node features rather than graph structure, they often perform worse than simple heuristic methods in link prediction, where structural information (e.g., overlapping neighborhoods, degrees, and shortest paths) is crucial. To address this limitation, we propose Neighborhood Overlap-aware Graph Neural Networks (Neo-GNNs), which learn useful structural features from the adjacency matrix and estimate overlapping neighborhoods for link prediction. Our Neo-GNNs generalize neighborhood-overlap-based heuristics and handle overlapping multi-hop neighborhoods. Extensive experiments on Open Graph Benchmark (OGB) datasets demonstrate that Neo-GNNs consistently achieve state-of-the-art performance in link prediction. Our code is publicly available at https://github.com/seongjunyun/neo_gnns.
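Below is a minimal sketch, under assumptions, of the neighborhood-overlap idea described above: structural scores for candidate links computed directly from powers of the adjacency matrix (1-hop common neighbors plus a damped higher-order overlap term). This illustrates the heuristic family that Neo-GNNs generalize; it is not the Neo-GNN model itself, and the damping factor is an arbitrary assumption.

```python
import torch


def overlap_scores(adj: torch.Tensor, pairs: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """adj: dense [n, n] {0,1} adjacency; pairs: [m, 2] candidate (src, dst) indices."""
    common_1hop = adj @ adj                             # entry (i, j) = number of common neighbors
    common_2hop = adj @ adj @ adj                       # overlap through longer walks
    score = common_1hop + alpha * common_2hop
    return score[pairs[:, 0], pairs[:, 1]]


# Toy usage: score two candidate links on a small graph.
adj = torch.tensor([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 0], [0, 1, 0, 0]], dtype=torch.float)
pairs = torch.tensor([[0, 3], [2, 3]])
print(overlap_scores(adj, pairs))
```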