Benefiting from its ability to exploit intrinsic supervision information, contrastive learning has recently achieved promising performance in deep graph clustering. However, we observe that two drawbacks of the positive and negative sample construction mechanisms prevent existing algorithms from further improvement. 1) The quality of positive samples heavily depends on carefully designed data augmentations, and inappropriate augmentations can easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are unreliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls together samples from the same cluster and pushes away those from other clusters by maximizing and minimizing, respectively, the cross-view cosine similarity between positive and negative samples. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
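A minimal sketch of this cluster-guided objective, assuming the two-view embeddings and high-confidence pseudo labels are already available (this is an illustration of the idea described in the abstract, not the authors' released code; the function name and loss form are assumptions):

```python
import torch
import torch.nn.functional as F

def cluster_guided_loss(z1, z2, labels, mask):
    """z1, z2: (N, d) embeddings of the two views; labels: (N,) pseudo labels
    from a clustering step; mask: (N,) bool, True for high-confidence nodes."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    ks = labels[mask].unique()
    # Cluster centers of the high-confidence nodes, one set per view.
    c1 = F.normalize(torch.stack([z1[mask][labels[mask] == k].mean(0) for k in ks]), dim=1)
    c2 = F.normalize(torch.stack([z2[mask][labels[mask] == k].mean(0) for k in ks]), dim=1)
    # Positives: a high-confidence node and its own cluster center in the other view.
    idx = torch.searchsorted(ks, labels[mask])
    pos = (z1[mask] * c2[idx]).sum(1).mean() + (z2[mask] * c1[idx]).sum(1).mean()
    # Negatives: centers of *different* clusters across the two views.
    off_diag = ~torch.eye(len(ks), dtype=torch.bool, device=z1.device)
    neg = (c1 @ c2.t())[off_diag].pow(2).mean()
    # Maximize positive cosine similarity, minimize negative (center-to-center) similarity.
    return -pos + neg
```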
Contrastive deep graph clustering, which aims to divide nodes into disjoint groups via contrastive mechanisms, is a challenging research topic. Among recent works, hard sample mining-based algorithms have attracted great attention for their promising performance. However, we find that the existing hard sample mining methods suffer from two problems. 1) In the hardness measurement, important structural information is overlooked in the similarity calculation, degrading the representativeness of the selected hard negative samples. 2) Previous works merely focus on hard negative sample pairs while neglecting hard positive sample pairs; nevertheless, samples within the same cluster but with low similarity should also be carefully learned. To solve these problems, we propose a novel contrastive deep graph clustering method dubbed Hard Sample Aware Network (HSAN), which introduces a comprehensive similarity measure criterion and a general dynamic sample weighting strategy. Concretely, in our algorithm, the similarities between samples are calculated by considering both the attribute embeddings and the structure embeddings, better revealing sample relationships and assisting hardness measurement. Moreover, under the guidance of the carefully collected high-confidence clustering information, our proposed weight modulating function first recognizes the positive and negative samples and then dynamically up-weights the hard sample pairs while down-weighting the easy ones. In this way, our method mines not only the hard negative samples but also the hard positive samples, thus further improving the discriminative capability of the samples. Extensive experiments and analyses demonstrate the superiority and effectiveness of our proposed method.
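An illustrative sketch of the two ingredients described above, under stated assumptions (this is not the released HSAN code; the mixing weight `alpha` and the weighting form are assumptions): (i) a similarity that fuses attribute and structure embeddings and (ii) a modulation that up-weights hard pairs and down-weights easy ones given high-confidence pseudo labels.

```python
import torch
import torch.nn.functional as F

def pair_similarity(h_attr, h_struct, alpha=0.5):
    """Fuse attribute- and structure-embedding similarities (alpha is assumed)."""
    a = F.normalize(h_attr, dim=1)
    s = F.normalize(h_struct, dim=1)
    return alpha * (a @ a.t()) + (1 - alpha) * (s @ s.t())

def hardness_weights(sim, pseudo_labels):
    """Positive pairs (same high-confidence cluster) with LOW similarity and
    negative pairs (different clusters) with HIGH similarity count as 'hard'."""
    same = pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)
    w = torch.where(same, 1.0 - sim, sim).clamp(min=0)  # larger weight = harder pair
    return w.detach()                                   # weights modulate the loss, no gradient
```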
Knowledge graph reasoning (KGR), which aims to deduce new facts from existing facts based on logic rules mined from knowledge graphs (KGs), has become a fast-growing research direction. It has been proven to significantly benefit the use of KGs in many AI applications, such as question answering and recommendation systems. According to the graph type, existing KGR models can be roughly divided into three categories, i.e., static models, temporal models, and multi-modal models. Early works in this domain mainly focus on static KGR and tend to directly apply general knowledge graph embedding models to the reasoning task. However, these models are not suitable for more complex but practical tasks, such as inductive static KGR, temporal KGR, and multi-modal KGR. To this end, multiple works have been developed recently, but no survey paper or open-source repository comprehensively summarizes and discusses models in this important direction. To fill the gap, we conduct a survey of knowledge graph reasoning, tracing from static to temporal and then to multi-modal KGs. Concretely, the preliminaries, summaries of KGR models, and typical datasets are introduced and discussed in turn. Moreover, we discuss the challenges and potential opportunities. The corresponding open-source repository is shared on GitHub: https://github.com/LIANGKE23/Awesome-Knowledge-Graph-Reasoning.
Graph contrastive learning is an important method for deep graph clustering. Existing methods first generate graph views with stochastic augmentations and then train the network with a cross-view consistency principle. Although good performance has been achieved, we observe that the existing augmentation methods are usually random and rely on pre-defined augmentations, which is insufficient and lacks negotiation with the final clustering task. To solve this problem, we propose a novel Graph Contrastive Clustering method with Learnable graph Data Augmentation (GCC-LDA), in which the augmentation is optimized entirely by neural networks. An adversarial learning mechanism is designed to keep cross-view consistency in the latent space while ensuring the diversity of the augmented views. In our framework, a structure augmentor and an attribute augmentor are constructed for augmentation learning at both the structure level and the attribute level. To improve the reliability of the learned affinity matrix, clustering is introduced into the learning procedure and the learned affinity matrix is refined with both the high-confidence pseudo-label matrix and the cross-view sample similarity matrix. During training, to provide persistent optimization of the learned view, we design a two-stage training strategy to obtain more reliable clustering information. Extensive experimental results demonstrate the effectiveness of GCC-LDA on six benchmark datasets.
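A rough sketch of what a learnable structure augmentor can look like. This is an assumption-laden illustration rather than the paper's implementation: edge probabilities are predicted from node features and sampled with a relaxed Bernoulli so the augmented adjacency stays differentiable; the class name, hidden size, and the restriction to existing edges are all assumptions.

```python
import torch
import torch.nn as nn

class StructureAugmentor(nn.Module):
    """Learnable structure-level augmentation (illustrative sketch)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

    def forward(self, x, adj, tau=0.5):
        h = self.proj(x)
        probs = torch.sigmoid(h @ h.t())          # learned edge-keeping probabilities
        # Relaxed Bernoulli sampling keeps the augmented adjacency differentiable.
        adj_aug = torch.distributions.RelaxedBernoulli(torch.tensor(tau), probs=probs).rsample()
        return adj_aug * adj                      # simplification: only re-weight existing edges
```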
Recently, graph anomaly detection has attracted increasing attention in the data mining and machine learning communities. Beyond attribute anomalies, graph anomaly detection also captures suspicious topologically abnormal nodes that differ from their counterparts. Although massive graph-based detection approaches have been proposed, most of them focus on node-level comparison while paying insufficient attention to the surrounding topological structures. Nodes with more dissimilar neighborhood substructures are more likely to be abnormal. To enhance the local substructure detection ability, we propose a novel Graph Anomaly Detection framework via Multi-scale Substructure Learning (GADMSL for short). Unlike previous algorithms, we manage to capture anomalous substructures whose inner similarities are relatively low in densely connected regions. Specifically, we adopt a region proposal module to find high-density substructures in the network as suspicious regions, and their inner-node embedding similarities indicate the anomaly degree of the detected substructures. Generally, a lower degree of embedding similarity means a higher probability that the substructure contains topology anomalies. To distill better embeddings of node attributes, we further introduce a graph contrastive learning scheme, which observes attribute anomalies at the same time. In this way, GADMSL can detect both topology and attribute anomalies. Finally, extensive experiments on benchmark datasets show that GADMSL greatly improves detection performance (up to 7.30% AUC and 17.46% AUPRC gains) compared to state-of-the-art attributed-network anomaly detection algorithms.
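A hedged sketch of the substructure-scoring idea: for each proposed dense region, a low average inner-node embedding similarity maps to a high anomaly degree. The `regions` argument (lists of node indices) would come from a region-proposal step that is not shown here, and the exact scoring function is an assumption.

```python
import torch
import torch.nn.functional as F

def substructure_anomaly_scores(z, regions):
    """z: (N, d) node embeddings; regions: list of index lists, each with >= 2 nodes."""
    z = F.normalize(z, dim=1)
    scores = []
    for nodes in regions:
        sub = z[nodes]                             # embeddings of one dense substructure
        sim = sub @ sub.t()                        # pairwise cosine similarities inside it
        m = sim.shape[0]
        inner = (sim.sum() - m) / (m * (m - 1))    # mean off-diagonal (inner-node) similarity
        scores.append(1.0 - inner)                 # lower cohesion -> higher anomaly degree
    return torch.stack(scores)
```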
Knowledge graph embedding (KGE) aims to learn powerful representations that benefit various artificial intelligence applications, such as question answering and recommendation. Meanwhile, contrastive learning (CL), as an effective mechanism for enhancing the discriminative capacity of learned representations, has been leveraged in different fields, especially graph-based models. However, since the structures of knowledge graphs (KGs) are usually more complicated than those of homogeneous graphs, it is hard to construct appropriate contrastive sample pairs. In this paper, we find that entities within a symmetrical structure are usually more similar and correlated. This key property can be utilized to construct positive pairs for contrastive learning. Following this idea, we propose a relational symmetrical structure-based knowledge graph contrastive learning framework, termed KGE-SymCL, which leverages the symmetrical structure information in KGs to enhance the discriminative ability of KGE models. Concretely, a plug-and-play approach is designed by taking the entities in relational symmetrical positions as positive samples. Besides, a self-supervised alignment loss is used to pull together the constructed positive sample pairs. Extensive experimental results on benchmark datasets verify the good generalization and superiority of the proposed framework.
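A minimal sketch of such a plug-and-play alignment loss, assuming the positive pairs (entities in relational symmetrical positions) have already been mined; the function name and the cosine-based loss form are assumptions, and the loss would simply be added on top of any KGE model's entity embeddings.

```python
import torch
import torch.nn.functional as F

def symmetric_alignment_loss(entity_emb, sym_pairs):
    """entity_emb: (num_entities, d); sym_pairs: (P, 2) long tensor whose rows
    (i, j) index entities found in relational symmetrical positions."""
    a = F.normalize(entity_emb[sym_pairs[:, 0]], dim=1)
    b = F.normalize(entity_emb[sym_pairs[:, 1]], dim=1)
    return (1 - (a * b).sum(dim=1)).mean()   # pull positive pairs together
```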
Multi-view clustering (MVC) optimally integrates complementary information from different views to improve clustering performance. Although promising performance has been demonstrated in various applications, most existing methods directly fuse multiple pre-specified similarities to learn an optimal similarity matrix for clustering, which may lead to over-complicated optimization and intensive computational cost. In this paper, we propose late fusion MVC via alignment maximization to address these issues. To this end, we first reveal the theoretical connection between existing k-means clustering and the alignment between base partitions and the consensus one. Based on this observation, we propose a simple but effective multi-view algorithm termed LF-MVC-GAM. It optimally fuses multiple source information at the partition level from each individual view and maximally aligns the consensus partition with these weighted base partitions. This alignment helps integrate partition-level information and greatly reduces computational complexity by sufficiently simplifying the optimization procedure. We then design a variant, LF-MVC-LAM, to further improve clustering performance by preserving the local intrinsic structure among multiple partition spaces. After that, we develop two three-step iterative algorithms to solve the resulting optimization problems with theoretically guaranteed convergence. Furthermore, we provide a generalization error bound analysis of the proposed algorithms. Extensive experiments on eighteen multi-view benchmark datasets, ranging from small to large scale, demonstrate the effectiveness and efficiency of the proposed LF-MVC-GAM and LF-MVC-LAM. The code of the proposed algorithms is publicly available at https://github.com/wangsiwei2010/latefusionalignment.
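A compact sketch of the alignment-maximization idea as read from the abstract (not the released code): the consensus partition H is aligned with a weighted combination of rotated base partitions, and with the other variables fixed the optimal H has a Procrustes-type closed form. Variable names and the update step are assumptions.

```python
import numpy as np

def alignment_objective(H, base_partitions, rotations, beta):
    """H: (n, k) consensus partition; base_partitions[p]: (n, k) base partition;
    rotations[p]: (k, k) orthogonal rotation; beta: (m,) view weights."""
    fused = sum(b * Hp @ Wp for b, Hp, Wp in zip(beta, base_partitions, rotations))
    return np.trace(H.T @ fused)          # alignment to be maximized

def update_consensus(base_partitions, rotations, beta):
    """With rotations and weights fixed, H = U V^T maximizes tr(H^T fused)
    subject to H^T H = I (orthogonal Procrustes-type step)."""
    fused = sum(b * Hp @ Wp for b, Hp, Wp in zip(beta, base_partitions, rotations))
    U, _, Vt = np.linalg.svd(fused, full_matrices=False)
    return U @ Vt
```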
Clustering is a representative unsupervised method widely applied in multi-modal and multi-view scenarios. Multiple kernel clustering (MKC) aims to group data by integrating the complementary information of base kernels. As a representative approach, late fusion MKC first decomposes the kernels into orthogonal partition matrices and then learns a consensus one from them, and has recently achieved promising performance. However, these methods fail to consider the noise inside the partition matrices, which prevents further improvement of clustering performance. We find that this noise can be decomposed into two separable parts, i.e., N-noise and C-noise (null-space noise and column-space noise). In this paper, we rigorously define dual noise and propose a novel parameter-free MKC algorithm by minimizing it. To solve the resulting optimization problem, we design an efficient two-step iterative strategy. To the best of our knowledge, this is the first work to investigate dual noise within the partitions in kernel space. We observe that dual noise pollutes the block-diagonal structure and degrades clustering performance, and that C-noise is more destructive than N-noise. Owing to its efficient mechanism for minimizing dual noise, the proposed algorithm surpasses state-of-the-art methods.
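A heavily hedged sketch of what a dual-noise split can look like. The abstract does not define the reference space, so the choice below (project onto the column space of the consensus partition versus its orthogonal complement) is my assumption for illustration only, not the paper's exact definition.

```python
import numpy as np

def split_dual_noise(E, H):
    """E: (n, k) noise in a base partition; H: (n, k) consensus partition with
    orthonormal columns (H.T @ H = I). Returns (C-noise, N-noise) components."""
    P = H @ H.T                 # projector onto the column space of H (assumed reference)
    c_noise = P @ E             # column-space component
    n_noise = E - c_noise       # null-space (orthogonal-complement) component
    return c_noise, n_noise
```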
Multiple kernel clustering (MKC) is committed to achieving optimal information fusion from a set of base kernels. Constructing precise and local kernel matrices has been proven to be of vital significance in applications, since unreliable distant-distance similarity estimation degrades clustering performance. Although existing localized MKC algorithms exhibit improved performance compared to globally designed competitors, most of them localize the kernel matrix by merely considering the τ-nearest neighbors. However, this coarse manner follows an unreasonable strategy in which different neighbors are ranked with equal importance, which is impractical in applications. To alleviate such problems, this paper proposes a novel local sample-weighted multiple kernel clustering (LSWMKC) model. We first construct a consensus discriminative affinity graph in kernel space, revealing the latent local structures. Furthermore, the optimal neighborhood kernel learned from the affinity graph has a naturally sparse property and a clear block-diagonal structure. Moreover, LSWMKC implicitly optimizes adaptive weights on different neighbors of the corresponding samples. Experimental results demonstrate that LSWMKC achieves better local manifold representation and outperforms existing kernel- or graph-based clustering algorithms. The source code of LSWMKC is publicly available at https://github.com/liliangnudt/lswmkc.
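An illustrative sketch of adaptive neighbor weighting (a standard closed-form formulation, not necessarily the exact one used in LSWMKC): each sample learns a sparse probability vector over its k nearest neighbors, with closer neighbors automatically receiving larger weights, which contrasts with the equal-importance τ-nearest-neighbor strategy criticized above.

```python
import numpy as np

def adaptive_neighbor_weights(dist_row, k):
    """dist_row: distances from one sample to all others (itself excluded);
    returns weights that are non-zero only on the k nearest neighbors."""
    idx = np.argsort(dist_row)[:k + 1]        # k nearest neighbors plus one for the margin
    d = dist_row[idx]
    # Simplex-projection style solution: w_j proportional to (d_{k+1} - d_j)_+, summing to 1.
    w_local = np.maximum(d[k] - d[:k], 0)
    w_local = w_local / (w_local.sum() + 1e-12)
    w = np.zeros_like(dist_row)
    w[idx[:k]] = w_local
    return w
```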
In recent years, graph neural networks (GNNs) have achieved promising performance in semi-supervised node classification. However, insufficient supervision, together with representation collapse, largely limits the performance of GNNs in this field. To alleviate the collapse of node representations in the semi-supervised scenario, we propose a novel graph contrastive learning method, termed Mixed Graph Contrastive Network (MGCN). In our method, we improve the discriminative capability of the latent features by enlarging the margin of the decision boundaries and improving the cross-view consistency of the latent representations. Specifically, we first adopt an interpolation-based strategy to conduct data augmentation in the latent space and then force the prediction model to change linearly between samples. Second, we enable the learned network to tell apart samples across the two interpolation-perturbed views by forcing the cross-view correlation matrix to approximate the identity matrix. By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning. Extensive experimental results on six datasets demonstrate the effectiveness and generality of MGCN compared with existing state-of-the-art methods.
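A small sketch of the two ingredients described above, under assumptions: latent-space mixup as the interpolation-based augmentation, and a loss that pushes the sample-level cross-view similarity matrix toward the identity so each node matches only itself across views (whether the correlation is taken over samples or over feature dimensions is my reading; the names are illustrative).

```python
import torch
import torch.nn.functional as F

def latent_mixup(z, lam=None):
    """Interpolation-based augmentation in the latent space."""
    if lam is None:
        lam = torch.distributions.Beta(1.0, 1.0).sample().item()
    perm = torch.randperm(z.size(0), device=z.device)
    return lam * z + (1 - lam) * z[perm]

def cross_view_identity_loss(z1, z2):
    """Force the cross-view (sample-level) similarity matrix toward the identity:
    each node should match itself in the other view and no other node."""
    s = F.normalize(z1, dim=1) @ F.normalize(z2, dim=1).t()   # (N, N)
    eye = torch.eye(s.size(0), device=s.device)
    return ((s - eye) ** 2).mean()
```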