Graph Neural Networks (GNNs) have shown satisfactory performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To address this problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
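To make the teacher-student objective above concrete, the following is a minimal sketch of distillation with a fairness penalty added to the student's loss. It illustrates the general recipe only, not RELIANT itself; the function and parameter names are hypothetical, and the demographic-parity gap stands in for whatever bias measure a given deployment requires.

```python
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, labels, sensitive,
                 temperature=2.0, alpha=0.5, beta=1.0):
    """Hypothetical KD objective with a fairness penalty (not RELIANT's exact loss)."""
    # Soft-label distillation: the student mimics the teacher's softened outputs.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature**2
    # Standard supervised loss on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # Demographic-parity-style penalty: gap in predicted positive rates between groups.
    pos = F.softmax(student_logits, dim=-1)[:, 1]
    gap = (pos[sensitive == 1].mean() - pos[sensitive == 0].mean()).abs()
    return alpha * ce + (1 - alpha) * kd + beta * gap
```

Because the gap term is differentiable, it can be minimized jointly with the distillation loss by any gradient-based optimizer, which is what makes such penalties easy to bolt onto an existing KD pipeline.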
Knowledge graph data are prevalent in real-world applications, and knowledge graph neural networks (KGNNs) are essential techniques for knowledge graph representation learning. Although KGNNs effectively model the structural information from knowledge graphs, these frameworks amplify the underlying data bias, leading to discrimination against certain groups or individuals in downstream applications. Additionally, as existing debiasing approaches mainly focus on entity-wise bias, eliminating the multi-hop relational bias that pervasively exists in knowledge graphs remains an open question. However, it is very challenging to eliminate relational bias due to the sparsity of the paths that generate the bias and the non-linear proximity structure of knowledge graphs. To tackle these challenges, we propose Fair-KGNN, a KGNN framework that simultaneously alleviates multi-hop bias and preserves the entity-to-relation proximity information in knowledge graphs. The proposed framework is generalizable to mitigate relational bias for all types of KGNNs. We develop two instances of Fair-KGNN built on two state-of-the-art KGNN models, RGCN and CompGCN, to mitigate gender-occupation and nationality-salary bias. Experiments carried out on three benchmark knowledge graph datasets demonstrate that Fair-KGNN can effectively mitigate unfairness during representation learning while preserving the predictive performance of the KGNN models.
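The abstract does not spell out Fair-KGNN's regularizer, so the sketch below only illustrates the general shape of a multi-hop relational bias penalty on a TransE-style KGNN: compose the relation embeddings along a biased path (e.g., gender → occupation) and penalize the score gap between sensitive groups. Every name and modeling choice here is an assumption for illustration.

```python
import torch

def relational_bias_penalty(ent_emb, rel_emb, biased_path, entity_idx, attr):
    """Hypothetical multi-hop bias penalty, not Fair-KGNN's actual regularizer.

    ent_emb:    [n_entities, d] entity embeddings
    rel_emb:    [n_relations, d] relation embeddings
    biased_path: list of relation ids forming the biased multi-hop path
    entity_idx: indices of entities with a known sensitive attribute
    attr:       0/1 sensitive attribute for those entities
    """
    # Compose the relations along the path (TransE-style additive composition).
    path_vec = sum(rel_emb[r] for r in biased_path)
    # Translation distance after following the path; a debiased model should not
    # let this score differ systematically across sensitive groups.
    scores = (ent_emb[entity_idx] + path_vec).norm(dim=-1)
    return (scores[attr == 1].mean() - scores[attr == 0].mean()).abs()
```

Added to a link-prediction loss, such a term discourages the model from encoding the biased path while the main loss preserves proximity structure.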
Achieving accurate and automated tumor segmentation plays an important role in both clinical practice and radiomics research. Segmentation in medicine is now often performed manually by experts, which is a laborious, expensive and error-prone task. Manual annotation relies heavily on the experience and knowledge of these experts, and there is considerable intra- and inter-observer variation. Therefore, it is of great significance to develop a method that can automatically segment tumor target regions. In this paper, we propose a deep learning segmentation method based on multimodal positron emission tomography-computed tomography (PET-CT), which combines the high sensitivity of PET and the precise anatomical information of CT. We design an improved spatial attention network (ISA-Net) to increase the accuracy of PET or CT in detecting tumors, which uses multi-scale convolution operations to extract feature information and can highlight tumor-region location information while suppressing non-tumor-region location information. In addition, our network uses dual-channel inputs in the encoding stage and fuses them in the decoding stage, which can take advantage of the differences and complementarities between PET and CT. We validated the proposed ISA-Net method on two clinical datasets, a soft tissue sarcoma (STS) dataset and a head and neck tumor (HECKTOR) dataset, and compared it with other attention methods for tumor segmentation. DSC scores of 0.8378 on the STS dataset and 0.8076 on the HECKTOR dataset show that the ISA-Net method achieves better segmentation performance and generalizes better. Conclusions: The proposed method performs multi-modal medical image tumor segmentation and can effectively exploit the differences and complementarities between modalities. The method can also be applied to other multi-modal or single-modal data with proper adjustments.
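As a rough illustration of the attention mechanism described above, here is a 2D PyTorch sketch of a multi-scale convolutional spatial attention block. The real ISA-Net operates on volumetric PET-CT data and its exact architecture differs, so treat the kernel sizes, channel splits, and the toy dual-channel fusion at the end as assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleSpatialAttention(nn.Module):
    """Illustrative spatial attention over multi-scale convolutional features."""
    def __init__(self, channels):
        super().__init__()
        # Parallel branches with different receptive fields capture multi-scale context.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels // 4, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5, 7)
        ])
        self.fuse = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        attention = torch.sigmoid(self.fuse(multi_scale))  # spatial map in [0, 1]
        return x * attention  # highlight tumor locations, suppress the rest

# Dual-channel usage: PET and CT features encoded separately, then fused.
pet_feat, ct_feat = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
out = MultiScaleSpatialAttention(128)(torch.cat([pet_feat, ct_feat], dim=1))
```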
Imbalanced learning is a fundamental challenge in data mining, where the proportions of training samples across classes are disproportionate. Oversampling is an effective technique for addressing imbalanced learning by generating synthetic samples for the minority class. Although many oversampling algorithms have been proposed, they rely heavily on heuristics, which can be suboptimal, since we may need different sampling strategies for different datasets and base classifiers, and heuristics cannot directly optimize the performance metric. Motivated by this, we investigate developing a learning-based oversampling algorithm to optimize classification performance, which is a challenging task due to the huge and hierarchical decision space. At the high level, we need to decide how many synthetic samples to generate. At the low level, we need to determine where the synthetic samples should be located, which depends on the high-level decision, since the optimal locations of the samples may differ for different numbers of samples. To address these challenges, we propose AutoSMOTE, an automated oversampling algorithm that jointly optimizes the decisions at different levels. Motivated by the success of SMOTE~\cite{Chawla2002smote} and its extensions, we formulate the generation process as a Markov decision process (MDP) composed of three levels of policies that generate synthetic samples within the SMOTE search space. We then leverage deep hierarchical reinforcement learning to optimize the performance metric on the validation data. Extensive experiments on six real-world datasets demonstrate that AutoSMOTE significantly outperforms state-of-the-art resampling algorithms. The code is available at https://github.com/daochenzha/autosmote
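For reference, the SMOTE search space that the three levels of policies act over is just k-nearest-neighbor interpolation. Below is a plain-NumPy sketch of vanilla SMOTE; AutoSMOTE's learned policies would replace the random choices marked in the comments, so the sketch shows the decision space, not AutoSMOTE itself.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_generate(X_min, n_samples, k=5, seed=0):
    """Vanilla SMOTE: interpolate between minority samples and their neighbors."""
    rng = np.random.default_rng(seed)
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nbrs.kneighbors(X_min)          # idx[:, 0] is the point itself
    out = []
    for _ in range(n_samples):               # high level: how many samples
        i = rng.integers(len(X_min))         # low level: which base instance
        j = idx[i, rng.integers(1, k + 1)]   # low level: which of its k neighbors
        lam = rng.random()                   # low level: where on the segment
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.asarray(out)
```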
Large language models (LLMs) have achieved state-of-the-art performance on a range of natural language understanding tasks. However, these LLMs may rely on dataset biases and artifacts as shortcuts for prediction. This significantly harms their out-of-distribution (OOD) generalization and adversarial robustness. In this paper, we provide a review of recent developments that address the robustness challenge of LLMs. We first introduce the concepts of LLMs and their robustness challenges. We then present methods for identifying shortcut learning behaviors in LLMs, characterize the causes of shortcut learning, and introduce mitigation solutions. Finally, we identify key challenges and connect this line of research to other directions.
Activation compressed training (ACT) has been shown to be a promising way to reduce the memory consumption of training deep neural networks. However, existing ACT works rely on searching for the optimal bit-width during deep neural network (DNN) training to reduce quantization noise, which makes the procedure complicated and less transparent. To this end, we propose a simple and effective DNN training method. Our method is motivated by the observation that \emph{DNN backward propagation mainly depends on the low-frequency component (LFC) of activation maps, rather than the high-frequency component (HFC)}. This indicates that the HFC of activation maps is highly redundant and compressible during DNN training, which inspires our proposed Dual Activation Precision (DIVISION). During training, DIVISION estimates the LFC and HFC of activation maps and compresses the HFC into a low-precision copy to remove the redundancy. This can significantly reduce memory consumption without negatively affecting the precision of DNN backward propagation, so that DIVISION achieves performance comparable to normal training. Experimental results on three benchmark datasets show that DIVISION outperforms state-of-the-art baseline methods in terms of memory consumption, model accuracy, and running speed.
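A minimal sketch of the LFC/HFC idea follows, assuming average pooling as the low-frequency estimator and uniform quantization for the residual; the paper's actual estimator, bit-width, and bookkeeping may differ.

```python
import torch
import torch.nn.functional as F

def split_activation(x, pool=4, bits=2):
    """Hypothetical LFC/HFC split in the spirit of DIVISION (details assumed)."""
    # LFC: a blurred, downsampled copy kept at full precision (cheap to store).
    lfc_small = F.avg_pool2d(x, pool)
    lfc = F.interpolate(lfc_small, size=x.shape[-2:], mode="nearest")
    # HFC: the residual; quantize it aggressively since it is highly redundant.
    hfc = x - lfc
    scale = hfc.abs().max() / (2 ** (bits - 1) - 1) + 1e-12
    hfc_q = torch.clamp((hfc / scale).round(), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return lfc_small, hfc_q.to(torch.int8), scale

def merge_activation(lfc_small, hfc_q, scale, size):
    # Reconstruct the activation map when it is needed for backward propagation.
    return F.interpolate(lfc_small, size=size, mode="nearest") + hfc_q.float() * scale
```

Storing only the small LFC tensor plus a 2-bit residual is what yields the memory savings, while the reconstruction keeps the backward pass close to full precision.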
Existing work on fairness modeling commonly assumes that sensitive attributes are fully available for all instances, which may not hold in many real-world applications due to the high cost of acquiring sensitive information. When sensitive attributes are neither disclosed nor available, manually annotating a small portion of the training data is necessary to mitigate bias. However, the skewed distribution across different sensitive groups preserves the skewness of the original dataset in the annotated subset, which leads to non-optimal bias mitigation. To tackle this challenge, we propose Active Penalization Of Discrimination (APOD), an interactive framework that guides the limited annotations toward maximally eliminating the effect of algorithmic bias. The proposed APOD integrates discrimination penalization with active instance selection to efficiently utilize the limited annotation budget, and it is theoretically shown to bound the algorithmic bias. Evaluations on five benchmark datasets show that APOD outperforms state-of-the-art baseline methods under the limited annotation budget and achieves performance comparable to fully annotated bias mitigation, which demonstrates that APOD can benefit real-world applications where sensitive information is limited.
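The abstract leaves the selection criterion abstract, so the sketch below uses plain uncertainty sampling as a stand-in for APOD's bias-guided instance selection; the model, criterion, and budget loop are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def annotate_sensitive_attributes(X, y, budget, ask_annotator):
    """Budget-limited annotation loop; uncertainty sampling stands in for
    APOD's bias-guided selection (an assumption, not the paper's criterion)."""
    annotated, sensitive = [], {}
    for _ in range(budget):
        # In APOD the model would be retrained here with a discrimination
        # penalty over the sensitive attributes collected so far.
        model = LogisticRegression(max_iter=1000).fit(X, y)
        probs = model.predict_proba(X)[:, 1]
        pool = np.setdiff1d(np.arange(len(X)), annotated)
        pick = int(pool[np.argmin(np.abs(probs[pool] - 0.5))])  # most uncertain
        sensitive[pick] = ask_annotator(pick)  # reveal one sensitive attribute
        annotated.append(pick)
    return sensitive  # feeds the discrimination penalty during retraining
```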
Benefiting from the digitization of healthcare data and the development of computing power, machine learning methods are increasingly used in the medical domain. Fairness problems have been identified in machine learning for healthcare, resulting in the unfair allocation of limited healthcare resources or excessive health risks for certain groups. Therefore, addressing fairness problems has recently attracted increasing attention from the healthcare community. However, the intersection of machine learning for healthcare and fairness in machine learning remains understudied. In this review, we build the bridge by exposing fairness problems, summarizing possible biases, sorting out mitigation methods, and pointing out challenges along with future opportunities.
Interpretable machine learning has attracted increasing attention, as it improves the transparency of models, which helps machine learning be trusted in real-world applications. However, explanation methods have recently been shown to be vulnerable to manipulation, where the explanation of a model can be easily changed while keeping its prediction constant. To tackle this problem, some efforts have been devoted to using more stable explanation methods or changing model configurations. In this work, we address the problem from a training perspective and propose a new training scheme called Adversarial Training on EXplanations (ATEX) to improve the internal explanation stability of a model, regardless of the specific explanation method applied. Instead of directly specifying explanation values over data instances, ATEX only imposes requirements on model predictions, which avoids involving second-order derivatives in the optimization. As a further discussion, we also find that explanation stability is closely related to another property of the model, namely the risk of being exposed to adversarial attacks. Through experiments, we show that besides improving model robustness against manipulation of targeted explanations, ATEX also brings additional benefits, including smoothing explanations and improving the efficacy of adversarial training when applied to the model.
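One way to read "requirements on model predictions only" is a smoothness constraint: train the model so its prediction stays flat in a small neighborhood of each input, which indirectly stabilizes gradient-based explanations without differentiating through them. The sketch below implements that reading; it is an assumption about the objective's shape, not ATEX's exact formulation.

```python
import torch
import torch.nn.functional as F

def prediction_stability_loss(model, x, y, epsilon=0.05, n_neighbors=4):
    """Cross-entropy plus a penalty keeping predictions flat around each input;
    no second-order derivatives are involved (assumed reading of ATEX)."""
    logits = model(x)
    loss = F.cross_entropy(logits, y)
    target = F.softmax(logits.detach(), dim=-1)  # anchor distribution
    for _ in range(n_neighbors):
        x_near = x + epsilon * torch.randn_like(x)  # random neighbor of x
        loss = loss + F.kl_div(F.log_softmax(model(x_near), dim=-1),
                               target, reduction="batchmean")
    return loss
```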
Machine learning models are becoming pervasive in high-stakes applications. Despite their clear benefits in terms of performance, the models could show bias against minority groups and cause fairness issues in decision-making processes, resulting in severe negative impacts on individuals and society. In recent years, various techniques have been developed to mitigate the bias of machine learning models. Among them, in-processing methods have drawn increasing attention from the community, which directly take fairness into consideration during model design to induce intrinsically fair models and fundamentally mitigate fairness issues in outputs and representations. In this survey, we review the current progress of in-processing bias mitigation techniques. Based on where fairness is achieved in the model, we categorize them into explicit and implicit methods, where the former directly incorporate fairness metrics in training objectives, and the latter focus on refining latent representation learning. Finally, we conclude the survey with a discussion of research challenges in this community to motivate future exploration.
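To illustrate the explicit/implicit split: an explicit method adds a differentiable fairness metric to the loss (as in the distillation sketch near the top of this page), while an implicit method reshapes the representation itself. Below is a minimal gradient-reversal sketch of the implicit flavor, a standard trick behind many representation-level debiasing methods; the toy dimensions and heads are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward, negated gradient backward."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

encoder = nn.Linear(32, 16)   # toy encoder producing latent representations
task_head = nn.Linear(16, 2)  # predicts the actual label
adv_head = nn.Linear(16, 2)   # adversary tries to recover the sensitive attribute

x = torch.randn(8, 32)
y = torch.randint(0, 2, (8,))   # task labels
s = torch.randint(0, 2, (8,))   # sensitive attribute

z = encoder(x)
# The reversed gradient pushes the encoder to make z uninformative about s,
# while the task head keeps z predictive of y.
loss = F.cross_entropy(task_head(z), y) + \
       F.cross_entropy(adv_head(GradReverse.apply(z)), s)
loss.backward()
```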