Graph neural network (GNN) based methods have saturated the field of recommender systems. The gains of these systems are significant, showing the advantages of interpreting data through a network structure. However, despite the clear benefits of using graph structures in recommendation tasks, this representational form also introduces new challenges that exacerbate the complexity of mitigating algorithmic bias. Bias mitigation can become even more difficult when GNNs are integrated into downstream tasks such as recommendation. Furthermore, the intractability of applying existing fairness-promoting methods to large real-world datasets places even more serious constraints on mitigation attempts. Our work sets out to fill this gap by taking an existing method for promoting individual fairness on graphs and extending it to support mini-batch, or sub-sample based, training for a downstream recommendation task. We evaluate two popular GNN methods: Graph Convolutional Network (GCN), which trains on the full graph, and GraphSAGE, which uses probabilistic random walks to create subgraphs for mini-batch training, and assess the effects of sub-sampling on individual fairness. We implement an individual fairness notion called \textit{REDRESS}, proposed by Dong et al., which uses rank optimization to learn individually fair node, or item, embeddings. We empirically demonstrate on two real-world datasets that GraphSAGE achieves not only comparable accuracy but also improved fairness compared to the GCN model. These findings have ramifications for individual fairness promotion, GNNs, and, downstream, recommender systems, showing that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the fairness-promotion process in representation learning.
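To make the rank-optimization idea concrete, below is a minimal, hypothetical sketch of a rank-based individual-fairness loss in the spirit of REDRESS: node pairs that are similar in the input feature space should also rank as similar in the embedding space, so the loss penalizes ranking disagreements between the two spaces. The function name, the random-triple sampling, and the hinge form are illustrative assumptions, not the authors' implementation; in a mini-batch setting the same loss would be computed on each sampled subgraph.

```python
import torch
import torch.nn.functional as F

def cosine_sim(x: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarity matrix."""
    x = F.normalize(x, dim=1)
    return x @ x.t()

def rank_fairness_loss(features: torch.Tensor,
                       embeddings: torch.Tensor,
                       margin: float = 0.0,
                       num_triples: int = 512) -> torch.Tensor:
    """For random triples (i, j, k): if j is more similar to i than k is in the
    input space, encourage the same ordering in the embedding space."""
    s_in = cosine_sim(features)
    s_emb = cosine_sim(embeddings)
    n = features.size(0)
    i = torch.randint(0, n, (num_triples,))
    j = torch.randint(0, n, (num_triples,))
    k = torch.randint(0, n, (num_triples,))
    # +1 if j should outrank k for anchor i in the input space, -1 otherwise
    sign = torch.sign(s_in[i, j] - s_in[i, k])
    # hinge loss on the embedding-space ranking, mirroring the input-space order
    return F.relu(margin - sign * (s_emb[i, j] - s_emb[i, k])).mean()

if __name__ == "__main__":
    feats = torch.randn(100, 16)
    emb = torch.randn(100, 8, requires_grad=True)
    loss = rank_fairness_loss(feats, emb)
    loss.backward()  # gradient can be combined with the recommendation loss
    print(float(loss))
```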
Learning fair graph representations for downstream applications is becoming increasingly important, but existing work has mostly focused on improving fairness at the global level by either modifying the graph structure or the objective function, without taking into account the local neighborhood of a node. In this work, we formally introduce the notion of neighborhood fairness and develop a computational framework for learning such locally fair embeddings. We argue that the notion of neighborhood fairness is more appropriate since GNN-based models operate at the local neighborhood level of a node. Our neighborhood fairness framework has two main components that are flexible for learning fair graph representations from arbitrary data: the first aims to construct fair neighborhoods for any arbitrary node in a graph, and the second enables adaptation of these fair neighborhoods to better capture certain application- or data-dependent constraints, such as allowing neighborhoods to be more biased towards certain attributes or neighbors in the graph. Furthermore, while link prediction has been extensively studied, we are the first to investigate the graph representation learning task of fair link classification. We demonstrate the effectiveness of the proposed neighborhood fairness framework for a variety of graph machine learning tasks including fair link prediction, link classification, and learning fair graph embeddings. Notably, our approach achieves not only better fairness but also increases the accuracy in the majority of cases across a wide variety of graphs, problem settings, and metrics.
Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains a challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm, PinSage, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure as well as node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions and design a novel training strategy that relies on harder-and-harder training examples to improve robustness and convergence of the model. We deploy PinSage at Pinterest and train it on 7.5 billion examples on a graph with 3 billion nodes representing pins and boards, and 18 billion edges. According to offline metrics, user studies and A/B tests, PinSage generates higher-quality recommendations than comparable deep learning and graph-based alternatives. To our knowledge, this is the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.
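The core structural trick in the paragraph above is to define a node's convolution neighborhood by random-walk visit counts rather than by raw adjacency. The sketch below is a small illustrative version of that idea only (not Pinterest's implementation): simulate short walks from a node and keep the most frequently visited nodes, with normalized visit counts as importance weights.

```python
import random
from collections import Counter

def random_walk_neighborhood(adj, start, num_walks=200, walk_len=3, top_t=5):
    """adj: dict mapping node -> list of neighbors.
    Returns the top-T most visited nodes and their normalized visit counts."""
    visits = Counter()
    for _ in range(num_walks):
        node = start
        for _ in range(walk_len):
            neighbors = adj.get(node, [])
            if not neighbors:
                break
            node = random.choice(neighbors)
            if node != start:
                visits[node] += 1
    total = sum(visits.values()) or 1
    return [(n, c / total) for n, c in visits.most_common(top_t)]

if __name__ == "__main__":
    adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 4], 3: [0], 4: [2]}
    print(random_walk_neighborhood(adj, start=0))
```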
In recent years, graph neural network (GNN) techniques have gained considerable interest in many real-world scenarios, such as recommender systems and social networks, owing to their excellent performance in graph representation learning. In recommender systems, the main challenge is to learn effective user/item representations from their interactions. However, many publications that use GNNs for recommender systems are hard to compare due to differences in datasets and evaluation metrics. Moreover, many of them only provide a demo running experiments on small datasets, which is far from being applicable to real-world recommender systems. To address this problem, we introduce Graph4Rec, a universal toolkit that unifies GNN model training into the following parts: graph input, random walk generation, ego graph generation, pair generation, and GNN selection. From this training pipeline, one can easily build one's own GNN model with a few configurations. In addition, we develop a large-scale graph engine and a parameter server to support distributed GNN training. We conduct systematic and comprehensive experiments to compare the performance of different GNN models across several scenarios at different scales. Extensive experiments are presented to identify the key components of GNNs. We also try to figure out how sparse and dense parameters affect the performance of GNNs. Finally, we investigate methods including negative sampling, ego graph construction order, and warm-start strategies to find more effective and efficient GNN practices for recommender systems. Our toolkit is based on PGL (https://github.com/paddlepaddle/pgl), and the code is open-sourced at https://github.com/paddlepaddle/pgl/tree/main/apps/graph4rec.
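To illustrate what a configuration-driven pipeline of the kind described above could look like, here is a hypothetical sketch. The field names and stage stubs are invented for illustration and do not reflect the real PGL/Graph4Rec configuration schema; the point is only that each of the five stages is declared by configuration rather than code.

```python
# Hypothetical config; keys are illustrative, not the actual Graph4Rec schema.
config = {
    "graph_input": {"edge_file": "edges.txt", "node_feature_file": None},
    "random_walk": {"walk_len": 10, "walks_per_node": 5},
    "ego_graph": {"num_hops": 2, "max_neighbors": 20},
    "pair_generation": {"window_size": 3, "neg_samples": 5},
    "gnn": {"model": "lightgcn", "hidden_dim": 64, "num_layers": 2},
}

def build_pipeline(cfg):
    """Assemble the five training stages declared in the config (stubs here)."""
    stages = [
        ("load_graph", cfg["graph_input"]),
        ("generate_walks", cfg["random_walk"]),
        ("build_ego_graphs", cfg["ego_graph"]),
        ("generate_pairs", cfg["pair_generation"]),
        ("train_gnn", cfg["gnn"]),
    ]
    for name, stage_cfg in stages:
        print(f"stage={name}, config={stage_cfg}")

if __name__ == "__main__":
    build_pipeline(config)
```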
Recently, graph neural networks (GNNs) have been widely used to develop successful recommender systems. Although powerful, it is very difficult for a GNN-based recommender system to attach tangible explanations of why a specific item ends up in the list of suggestions for a given user. Indeed, explaining GNN-based recommendations is unique, and existing GNN explanation methods are unsuitable for two reasons. First, traditional GNN explanation methods are designed for node, edge, or graph classification tasks rather than ranking, as in recommender systems. Second, standard machine learning explanations are usually intended to support skilled decision makers; recommendations, instead, are designed for any end user, and thus their explanations should be provided in user-understandable ways. In this work, we propose GREASE, a novel method for explaining the suggestions provided by any black-box GNN-based recommender system. Specifically, GREASE first trains a surrogate model on a target user-item pair and its $L$-hop neighborhood. Then, it generates factual and counterfactual explanations by finding optimal adjacency matrix perturbations to capture the sufficient and necessary conditions for an item to be recommended, respectively. Experimental results on real-world datasets show that GREASE can generate concise and effective explanations for popular GNN-based recommendation models.
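As a rough illustration of the counterfactual side of this kind of explanation, the sketch below greedily removes edges from the user's neighborhood until a surrogate scorer would no longer recommend the target item; the removed edges then serve as a counterfactual explanation. The greedy search, the `surrogate_score` callable, and the threshold are stand-ins for illustration, not the paper's optimal adjacency perturbation.

```python
from typing import Callable, List, Tuple

Edge = Tuple[int, int]

def counterfactual_edges(edges: List[Edge],
                         surrogate_score: Callable[[List[Edge]], float],
                         threshold: float,
                         max_removals: int = 5) -> List[Edge]:
    """Return a small set of edges whose removal drops the recommendation score
    below `threshold` (a counterfactual explanation), or [] if none is found."""
    current = list(edges)
    removed: List[Edge] = []
    for _ in range(max_removals):
        if surrogate_score(current) < threshold:
            return removed
        # pick the edge whose removal decreases the score the most
        best_edge = min(current,
                        key=lambda e: surrogate_score([x for x in current if x != e]))
        current.remove(best_edge)
        removed.append(best_edge)
    return removed if surrogate_score(current) < threshold else []

if __name__ == "__main__":
    edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
    # toy surrogate: the score is just the fraction of original edges remaining
    score = lambda es: len(es) / 4
    print(counterfactual_edges(edges, score, threshold=0.6))
```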
Graph neural networks (GNNs) have been widely applied to recommendation tasks and have achieved very appealing performance. However, most GNN-based recommendation methods suffer from the data sparsity problem in practice. Meanwhile, pre-training techniques have achieved great success in alleviating data sparsity in various domains such as natural language processing (NLP) and computer vision (CV). Thus, graph pre-training has great potential to alleviate data sparsity in GNN-based recommendation. However, pre-training GNNs for recommendation faces unique challenges. For example, the user-item interaction graphs in different recommendation tasks have distinct sets of users and items, and they often exhibit different properties. Therefore, the mechanisms commonly used in NLP and CV to transfer knowledge from pre-training tasks to downstream tasks, such as sharing learned embeddings or feature extractors, are not directly applicable to existing GNN-based recommendation models. To tackle these challenges, we carefully design an adaptive graph pre-training framework for localized collaborative filtering (ADAPT). It does not require transferring user/item embeddings and is able to capture both the common knowledge across different graphs and the uniqueness of each graph. Extensive experimental results demonstrate the effectiveness and superiority of ADAPT.
Recommender systems can strongly influence which information we see online, e.g., on social media, and thus impact our beliefs, decisions, and actions. At the same time, these systems can create substantial business value for different stakeholders. Given the growing potential impact of such AI-based systems on individuals, organizations, and society, questions of fairness have gained increased attention in recent years. However, research on fairness in recommender systems is still a developing area. In this survey, we first review the fundamental concepts and notions of fairness that were put forward in the area in the recent past. Afterward, through a review of more than 150 scholarly publications, we present an overview of how research in this field is currently operationalized, e.g., in terms of general research methodology, fairness measures, and algorithmic approaches. Overall, our analysis of recent works points to specific research gaps. In particular, we find that in many research works in computer science, very abstract problem operationalizations are prevalent, and questions of the underlying normative claims and what represents a fair recommendation in the context of a given application are often not discussed in depth. These observations call for more interdisciplinary research to address fairness in recommendation in a more comprehensive and impactful manner.
Graph neural networks (GNNs) have received remarkable success in link prediction (GNNLP) tasks. Existing efforts first predefine the subgraph for the whole dataset and then apply GNNs to encode edge representations by leveraging the neighborhood structure induced by the fixed subgraph. The prominence of GNNLP methods significantly relies on this ad-hoc subgraph. Since node connectivity in real-world graphs is complex, one shared subgraph is limiting for all edges; thus, the choice of subgraph should be personalized to different edges. However, performing personalized subgraph selection is nontrivial since the potential selection space grows exponentially with the number of edges. Besides, the inference edges are not available during training in link prediction scenarios, so the selection process needs to be inductive. To bridge the gap, we introduce a Personalized Subgraph Selector (PS2) as a plug-and-play framework to automatically, personally, and inductively identify optimal subgraphs for different edges when performing GNNLP. PS2 is instantiated as a bi-level optimization problem that can be efficiently solved in a differentiable manner. Coupling GNNLP models with PS2, we suggest a brand-new angle towards GNNLP training: first identify the optimal subgraphs for edges, and then focus on training the inference model using the sampled subgraphs. Comprehensive experiments endorse the effectiveness of our proposed method across various GNNLP backbones (GCN, GraphSage, NGCF, LightGCN, and SEAL) and diverse benchmarks (Planetoid, OGB, and recommendation datasets). Our code is publicly available at \url{https://github.com/qiaoyu-tan/PS2}.
Graph neural networks (GNNs) have been successfully adopted in recommender systems through message passing that implicitly captures collaborative effects. Nevertheless, most existing message-passing mechanisms for recommendation are directly inherited from GNNs without scrutinizing whether the captured collaborative effects actually benefit the prediction of user preferences. In this paper, we first analyze how message passing captures collaborative effects and propose a recommendation-oriented topological metric, the Common Interacted Ratio (CIR), which measures the level of interaction between a specific neighbor of a node and the rest of its neighbors. After demonstrating the benefits of leveraging collaborations from neighbors with higher CIR, we propose a recommendation-tailored GNN, the Collaboration-Aware Graph Convolutional Network (CAGCN), which goes beyond the 1-Weisfeiler-Lehman (1-WL) test in distinguishing non-bipartite-subgraph-isomorphic graphs. Experiments on six benchmark datasets show that the best CAGCN variant outperforms the most representative GNN-based recommendation model, LightGCN, by nearly 10% in Recall@20 and also achieves roughly an 80% speedup. Our code is publicly available at https://github.com/yuwvandy/cagcn.
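The sketch below gives one simplified, hypothetical instantiation of a "common interacted ratio"-style metric (the exact CIR formula in the paper differs): for a neighbor v of node u, average how strongly v's neighborhood overlaps with the neighborhoods of u's other neighbors. In a user-item graph this asks whether an item shares co-interacting users with the user's other items.

```python
def jaccard(a: set, b: set) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def common_interacted_ratio(adj: dict, u, v) -> float:
    """adj: node -> set of neighbours; v must be a neighbour of u.
    Averages the overlap between v's neighbourhood and the neighbourhoods of
    u's other neighbours (u itself is excluded from the overlaps)."""
    others = adj[u] - {v}
    if not others:
        return 0.0
    nv = adj[v] - {u}
    return sum(jaccard(nv, adj[w] - {u}) for w in others) / len(others)

if __name__ == "__main__":
    # tiny user-item graph: users u1, u2 and items i1, i2, i3
    adj = {
        "u1": {"i1", "i2", "i3"}, "u2": {"i1", "i2"},
        "i1": {"u1", "u2"}, "i2": {"u1", "u2"}, "i3": {"u1"},
    }
    print(common_interacted_ratio(adj, "u1", "i1"))  # i1 co-interacts with u1's other items
    print(common_interacted_ratio(adj, "u1", "i3"))  # i3 shares no co-interacting users
```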
In recommender systems, a common challenge is the cold-start problem, where interactions are very limited for new users in the system. To address this challenge, many recent works introduce the idea of meta-optimization into the recommendation scenario, i.e., learning to learn a user's preference from only a few past interacted items. The core idea is to learn globally shared meta-initialization parameters for all users and to rapidly adapt them into local parameters for each user individually. These methods aim to derive general knowledge across the preference learning of various users, so as to rapidly adapt to future new users with the learned prior and a small amount of training data. However, previous work has shown that recommender systems are generally vulnerable to bias and unfairness. Despite the success of meta-learning at improving recommendation performance under cold-start, the fairness issue has largely been overlooked. In this paper, we propose a comprehensive fair meta-learning framework, named CLOVER, for ensuring the fairness of meta-learned recommendation models. We systematically study three kinds of fairness in recommender systems - individual fairness, counterfactual fairness, and group fairness - and propose to satisfy all three via a multi-task adversarial learning scheme. Our framework offers a generic training paradigm that is applicable to different meta-learned recommender systems. We demonstrate the effectiveness of CLOVER on a representative meta-learned user preference estimator on three real-world datasets. Empirical results show that CLOVER achieves comprehensive fairness without deteriorating the overall cold-start recommendation performance.
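One standard way to instantiate the adversarial component of a fairness-aware recommender is sketched below: an adversary tries to predict the sensitive attribute from the user embedding, and a gradient-reversal layer pushes the encoder to remove that information. This is a generic sketch under that assumption; CLOVER's multi-task scheme covering individual, counterfactual, and group fairness is more involved.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output  # reverse the gradient flowing back into the encoder

class Adversary(nn.Module):
    """Predicts the sensitive group from user embeddings via a reversed gradient."""
    def __init__(self, dim: int, num_groups: int = 2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, num_groups))
    def forward(self, user_emb: torch.Tensor) -> torch.Tensor:
        return self.net(GradReverse.apply(user_emb))

if __name__ == "__main__":
    emb = torch.randn(8, 16, requires_grad=True)
    adv = Adversary(16)
    logits = adv(emb)
    loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
    loss.backward()  # encoder gradient is reversed; adversary gradient is normal
    print(emb.grad.shape)
```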
Graph neural networks (GNNs) have been shown to excel in predictive modeling tasks where the underlying data are graphs. However, as GNNs are extensively used in human-centric applications, the issue of fairness has arisen. While edge deletion is a common method used to promote fairness in GNNs, it fails to account for data that inherently lack fair connections. In this work, we consider the unexplored method of edge addition to promote fairness. We propose two model-agnostic algorithms for performing edge editing: a brute-force approach and a continuous approximation approach, FairEdit. FairEdit performs efficient edge editing by leveraging the gradient information of a fairness loss to find edges that improve fairness. We find that FairEdit outperforms standard training for many datasets and GNN methods, while performing comparably to many state-of-the-art methods, demonstrating FairEdit's ability to improve fairness across many domains and models.
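Below is a minimal, hypothetical sketch of gradient-guided edge addition in that spirit: relax the adjacency matrix, differentiate a simple group-fairness proxy through one propagation step, and add the absent edge whose gradient most strongly decreases the fairness loss. The one-step "model" and the demographic-parity proxy are toy stand-ins, not FairEdit itself.

```python
import torch

def pick_edge_to_add(adj: torch.Tensor, x: torch.Tensor, group: torch.Tensor):
    """adj: dense 0/1 adjacency, x: node features, group: 0/1 sensitive attribute.
    Returns the (row, col) of the non-existing edge most useful for fairness."""
    a = adj.clone().float().requires_grad_(True)
    # one propagation step followed by a scalar "prediction" per node
    h = (a @ x).sum(dim=1)
    # demographic-parity style proxy: absolute difference of group means
    fair_loss = (h[group == 0].mean() - h[group == 1].mean()).abs()
    fair_loss.backward()
    grad = a.grad.clone()
    # only consider edges that do not exist yet, and no self-loops
    grad[adj > 0] = float("inf")
    grad.fill_diagonal_(float("inf"))
    idx = torch.argmin(grad)  # most negative gradient => adding it lowers the loss
    return divmod(int(idx), adj.size(0))

if __name__ == "__main__":
    torch.manual_seed(0)
    adj = (torch.rand(6, 6) > 0.6).float()
    adj = ((adj + adj.t()) > 0).float()
    adj.fill_diagonal_(0)
    x = torch.randn(6, 4)
    group = torch.tensor([0, 0, 0, 1, 1, 1])
    print(pick_edge_to_add(adj, x, group))
```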
Graph Neural Networks (GNNs) have become increasingly important in recent years due to their state-of-the-art performance on many important downstream applications. Existing GNNs have mostly focused on learning a single node representation, despite the fact that a node often exhibits polysemous behavior in different contexts. In this work, we develop a persona-based graph neural network framework called PersonaSAGE that learns multiple persona-based embeddings for each node in the graph. Such disentangled representations are more interpretable and useful than a single embedding. Furthermore, PersonaSAGE learns the appropriate set of persona embeddings for each node in the graph, and every node can have a different number of assigned persona embeddings. The framework is flexible, and its general design makes the learned embeddings widely applicable across domains. We utilize publicly available benchmark datasets to evaluate our approach against a variety of baselines. The experiments demonstrate the effectiveness of PersonaSAGE for a variety of important tasks including link prediction, where we achieve an average gain of 15% while remaining competitive for node classification. Finally, we also demonstrate the utility of PersonaSAGE with a case study on personalized recommendation of different entity types in a data management platform.
Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph (e.g., degree statistics or kernel functions). However, recent years have seen a surge in approaches that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. Here we provide a conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph neural networks. We review methods to embed individual nodes as well as approaches to embed entire (sub)graphs. In doing so, we develop a unified framework to describe these recent approaches, and we highlight a number of important applications and directions for future work.
Heterogeneous graph convolutional networks have gained great popularity for tackling various network analytical tasks on heterogeneous network data, ranging from link prediction to node classification. However, most existing works ignore the relation heterogeneity of multiplex networks between multi-typed nodes and the differing importance of relations in meta-paths for node embedding, and can thus hardly capture the heterogeneous structural signals across different relations. To tackle this challenge, this work proposes a Multiplex Heterogeneous Graph Convolutional Network (MHGCN) for heterogeneous network embedding. MHGCN can automatically learn useful heterogeneous meta-path interactions of different lengths in multiplex heterogeneous networks through multi-layer convolution aggregation. Additionally, we effectively integrate both multi-relation structural signals and attribute semantics into the learned node embeddings, under both unsupervised and semi-supervised learning paradigms. Extensive experiments on five real-world datasets with various network analytical tasks demonstrate the significant superiority of MHGCN over state-of-the-art embedding baselines in terms of all evaluation metrics.
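A compact sketch of the multiplex aggregation idea is shown below (simplified and not the authors' code): fuse relation-specific adjacency matrices with learnable weights, run a few propagation steps over the fused graph, and average the layer outputs so that meta-path interactions of different lengths are mixed. The softmax over relation weights is an illustrative assumption.

```python
import torch
import torch.nn as nn

class MultiplexFusion(nn.Module):
    def __init__(self, num_relations: int, num_layers: int = 2):
        super().__init__()
        self.rel_weights = nn.Parameter(torch.ones(num_relations))
        self.num_layers = num_layers

    def forward(self, adjs: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        """adjs: (R, N, N) relation-specific adjacency matrices; x: (N, F) features."""
        fused = torch.einsum("r,rij->ij", torch.softmax(self.rel_weights, 0), adjs)
        h, outs = x, []
        for _ in range(self.num_layers):
            h = fused @ h  # one propagation step over the fused multiplex graph
            outs.append(h)
        return torch.stack(outs).mean(dim=0)  # averaging layers mixes path lengths

if __name__ == "__main__":
    adjs = torch.rand(3, 5, 5)
    x = torch.randn(5, 8)
    print(MultiplexFusion(num_relations=3)(adjs, x).shape)
```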
Graph representation learning has become a ubiquitous component in many scenarios, ranging from social network analysis to energy forecasting in smart grids. In several applications, ensuring that node (or graph) representations are fair with respect to certain protected attributes is crucial for their correct deployment. Yet, fairness in graph deep learning remains under-explored, with few solutions available. In particular, the tendency of similar nodes to cluster together in several real-world graphs (i.e., homophily) can dramatically worsen the fairness of these procedures. In this paper, we propose a novel biased edge dropout algorithm (FairDrop) to counteract homophily and improve fairness in graph representation learning. FairDrop can easily be plugged into many existing algorithms, is efficient and adaptable, and can be combined with other fairness-inducing solutions. After describing the general algorithm, we demonstrate its application in two benchmark tasks, specifically as a random walk model for producing node embeddings and with a graph convolutional network for link prediction. We show that the proposed algorithm can successfully improve the fairness of all models, with only a small or negligible drop in accuracy, and compares favourably with existing state-of-the-art solutions. In an ablation study, we demonstrate that our algorithm can flexibly interpolate between biasing towards fairness and an unbiased edge dropout. Furthermore, to better evaluate the gains, we propose a new dyadic group definition to measure the bias of a link prediction task when paired with group-based fairness metrics. In particular, we extend the metric used to measure the bias in node embeddings to take the graph structure into account.
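A minimal sketch of one biased edge-dropout step in this spirit: edges between nodes that share the sensitive attribute (homophilous edges) are dropped with higher probability than heterophilous ones. The fixed probabilities below are illustrative assumptions; the paper's randomized-response formulation differs.

```python
import random

def biased_edge_dropout(edges, sensitive, p_same=0.6, p_diff=0.2, rng=random):
    """edges: list of (u, v) pairs; sensitive: dict node -> attribute value.
    Homophilous edges (same attribute) are dropped with probability p_same,
    heterophilous edges with the lower probability p_diff."""
    kept = []
    for u, v in edges:
        p_drop = p_same if sensitive[u] == sensitive[v] else p_diff
        if rng.random() >= p_drop:
            kept.append((u, v))
    return kept

if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    sensitive = {0: "a", 1: "a", 2: "b", 3: "b"}
    print(biased_edge_dropout(edges, sensitive))
```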
Graph neural networks (GNNs) have been widely applied to various fields for learning over graph-structured data. They have shown significant improvements over traditional heuristic methods in tasks such as node classification and graph classification. However, since GNNs rely heavily on smoothed node features rather than on the graph structure, they often perform worse than simple heuristics in link prediction, where structural information such as overlapping neighborhoods, degrees, and shortest paths is crucial. To address this limitation, we propose Neighborhood Overlap-aware Graph Neural Networks (Neo-GNNs), which learn useful structural features from the adjacency matrix and estimate overlapping neighborhoods for link prediction. Our Neo-GNNs generalize neighborhood overlap-based heuristic methods and handle overlapping multi-hop neighborhoods. Extensive experiments on Open Graph Benchmark (OGB) datasets demonstrate that Neo-GNNs consistently achieve state-of-the-art performance in link prediction. Our code is publicly available at https://github.com/seongjunyun/neo_gnns.
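For reference, the sketch below computes the classical neighborhood-overlap heuristics that this line of work generalizes: common-neighbours and Adamic-Adar scores for a candidate link, read directly from the adjacency structure. This is the baseline heuristic, not the paper's learned generalization.

```python
import math

def overlap_scores(adj: dict, u, v):
    """adj: node -> set of neighbours. Returns (common-neighbours, Adamic-Adar)."""
    common = adj[u] & adj[v]
    cn = len(common)
    aa = sum(1.0 / math.log(len(adj[w])) for w in common if len(adj[w]) > 1)
    return cn, aa

if __name__ == "__main__":
    adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
    print(overlap_scores(adj, 1, 3))  # scores for the candidate link (1, 3)
```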
Heterogeneous graph neural networks (HGNNs) have been blossoming in recent years, but the unique data processing and evaluation setups used by each work obstruct a full understanding of their advances. In this work, we present a systematic reproduction of 12 recent HGNNs using their official code, datasets, settings, and hyperparameters, revealing surprising findings about the progress of HGNNs. We find that simple homogeneous GNNs, e.g., GCN and GAT, are largely underestimated due to improper settings; GAT with proper inputs can generally match or outperform all existing HGNNs across various scenarios. To facilitate robust and reproducible HGNN research, we construct the Heterogeneous Graph Benchmark (HGB), consisting of 11 diverse datasets with three tasks. HGB standardizes the process of heterogeneous graph data splitting, feature processing, and performance evaluation. Finally, we introduce a simple but very strong baseline, Simple-HGN, which significantly outperforms all previous models on HGB, to accelerate future progress on HGNNs.
Given a graph learning task, such as link prediction, on a new graph dataset, how can we automatically select the best method as well as its hyperparameters (collectively called a model)? Model selection for graph learning has largely been ad hoc. A typical approach is to apply popular methods to new datasets, but this is often suboptimal. On the other hand, systematically comparing models on the new graph quickly becomes too costly, or even impractical. In this work, we develop the first meta-learning approach for automatic graph machine learning, called AutoGML, which capitalizes on the prior performance of a large body of existing methods on benchmark graph datasets and carries over this prior experience to automatically select an effective model for the new graph, without any model training or evaluation. To capture the similarity across graphs from different domains, we introduce specialized meta-graph features that quantify the structural characteristics of a graph. We then design a meta-graph that represents the relations among models and graphs, and develop a graph meta-learner operating on this meta-graph, which estimates the relevance of each model to different graphs. Through extensive experiments, we show that using AutoGML to select a method for a new graph significantly outperforms consistently applying popular methods as well as several existing meta-learners, while being extremely fast at test time.
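As a rough illustration of the first step of this pipeline, the sketch below computes simple structural meta-features for a graph and compares two graphs by the distance between their feature vectors; the nearest benchmark graph's best-performing model would then be recommended for the new graph. The specific features and the plain L2 distance are hypothetical simplifications of the paper's meta-graph features and graph meta-learner.

```python
from statistics import mean, pstdev

def meta_features(edges, num_nodes):
    """Basic structural statistics of an undirected graph given as an edge list."""
    degrees = [0] * num_nodes
    for u, v in edges:
        degrees[u] += 1
        degrees[v] += 1
    m = len(edges)
    density = 2 * m / (num_nodes * (num_nodes - 1)) if num_nodes > 1 else 0.0
    return {
        "num_nodes": num_nodes,
        "num_edges": m,
        "density": density,
        "mean_degree": mean(degrees),
        "std_degree": pstdev(degrees),
        "max_degree": max(degrees),
    }

def l2_distance(f1, f2, keys=("density", "mean_degree", "std_degree")):
    """Compare two graphs by their (size-independent) meta-feature vectors."""
    return sum((f1[k] - f2[k]) ** 2 for k in keys) ** 0.5

if __name__ == "__main__":
    g_new = meta_features([(0, 1), (1, 2), (2, 0)], 3)
    g_bench = meta_features([(0, 1), (1, 2), (2, 3), (3, 0)], 4)
    print(l2_distance(g_new, g_bench))
```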
The cold-start problem is a fundamental challenge for recommendation tasks. The recent self-supervised learning (SSL) graph neural network (GNN) model PT-GNN pre-trains a GNN model to reconstruct cold-start embeddings and has shown great potential for cold-start recommendation. However, due to the over-smoothing problem, PT-GNN can only capture up to 3rd-order relations, which cannot provide much useful auxiliary information to depict the target cold-start user or item. Besides, the embedding reconstruction task only considers the intra-correlations within the subgraph of users and items, while ignoring the inter-correlations across different subgraphs. To address these challenges, we propose a multi-strategy-based pre-training method for cold-start recommendation (MPT), which extends PT-GNN from the perspectives of model architecture and pretext tasks to improve cold-start recommendation performance. Specifically, in terms of model architecture, in addition to the short-range dependencies of users and items captured by the GNN encoder, we introduce a Transformer encoder to capture long-range dependencies. In terms of pretext tasks, in addition to considering the correlations of users and items through the embedding reconstruction task, we add an embedding contrastive learning task to capture correlations across users and items. We train the GNN and Transformer encoders on these pretext tasks under the meta-learning setting to simulate the real cold-start scenario, making the model adapt to new cold-start users and items easily and rapidly. Experiments on three public recommendation datasets show the superiority of the proposed MPT model over the vanilla GNN models and the pre-training GNN model on user/item embedding inference and the recommendation task.
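A generic InfoNCE-style contrastive loss, of the kind one might use for the embedding contrastive pretext task mentioned above, is sketched here. This is an assumption-laden sketch: MPT's exact construction of positive and negative views of users and items differs, and only the in-batch-negatives pattern is shown.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor, temperature: float = 0.2):
    """anchor, positive: (B, D) embeddings of two views of the same users/items.
    Other in-batch samples act as negatives; positives sit on the diagonal."""
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    logits = a @ p.t() / temperature        # (B, B) similarity matrix
    labels = torch.arange(a.size(0))        # index of the positive for each row
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    z1, z2 = torch.randn(16, 32), torch.randn(16, 32)
    print(float(info_nce(z1, z2)))
```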
Pre-publication draft of a book to be published by Morgan & Claypool Publishers. Unedited version released with permission. All relevant copyrights held by the author and publisher extend to this pre-publication draft.