Multi-view graph clustering (MGC) methods are increasingly being studied due to the explosion of multi-view data with graph structural information. The critical point of MGC is to better utilize view-specific and view-common information in the features and graphs of multiple views. However, existing works have an inherent limitation: they are unable to concurrently utilize the consensus graph information across multiple graphs and the view-specific feature information. To address this issue, we propose the Variational Graph Generator for Multi-View Graph Clustering (VGMGC). Specifically, a novel variational graph generator is proposed to extract common information among multiple graphs. This generator infers a reliable variational consensus graph based on a priori assumption over the multiple graphs. Then a simple yet effective graph encoder, in conjunction with the multi-view clustering objective, is presented to learn the desired graph embeddings for clustering, which embeds the inferred view-common graph and the view-specific graphs together with the features. Finally, theoretical results illustrate the rationality of VGMGC by analyzing the uncertainty of the inferred consensus graph with the information bottleneck principle. Extensive experiments demonstrate the superior performance of VGMGC over state-of-the-art methods.
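A minimal sketch of the generator idea described above: edge probabilities for a consensus graph are inferred from per-view adjacencies, and a relaxed Bernoulli (Gumbel-sigmoid) sample gives a differentiable consensus adjacency. The mean-of-views prior, the temperature, and all names here are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def infer_consensus_graph(adjs, temperature=0.5):
    """Infer a toy variational consensus graph from per-view adjacencies.

    Hypothetical simplification: the edge-probability prior is the mean of
    the view graphs, and a Gumbel-sigmoid sample relaxes the Bernoulli
    edge draw so it stays differentiable.
    """
    prior = np.mean(adjs, axis=0)                       # prior belief over edges
    logits = np.log(prior + 1e-9) - np.log(1 - prior + 1e-9)
    u = rng.uniform(1e-9, 1 - 1e-9, size=logits.shape)
    gumbel = np.log(u) - np.log(1 - u)                  # logistic noise
    sample = 1.0 / (1.0 + np.exp(-(logits + gumbel) / temperature))
    return prior, sample

# Two toy views of a 3-node graph: the shared edge (0,1) gets prior 1.0.
a1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
a2 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
prior, g = infer_consensus_graph([a1, a2])
```

Edges present in every view keep probability 1 under this prior, while view-specific edges are downweighted before sampling.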
Deep-learning-based computer-aided diagnosis (CAD) has attracted appealing attention in both academic research and clinical application. However, convolutional-neural-network (CNN) diagnosis systems rely heavily on labeled lesion datasets, and their sensitivity to shifts in data distribution also limits the potential application of CNNs in CAD. Unsupervised domain adaptation (UDA) methods have been developed to address the problems of expensive annotation and domain gaps, and have achieved great success in medical image analysis. However, existing UDA methods only adapt knowledge learned from a source lesion domain to a single target lesion domain, which runs counter to the clinical scenario: new unlabeled target domains to be diagnosed always arrive in an online and continual manner. Moreover, the performance of existing methods degrades substantially on previously learned target lesion domains, because newly acquired knowledge overwrites previously learned knowledge (i.e., catastrophic forgetting). To handle the above issues, we develop a meta-adaptation framework named Continual Lesion Knowledge Meta-adaptation (CLKM), which mainly consists of a Semantic Adaptation Phase (SAP) and a Representation Adaptation Phase (RAP) to learn the diagnosis model in an online and continual manner. In the SAP, semantic knowledge learned from the source lesion domain is transferred to the continual target lesion domains. In the RAP, the feature extractor is optimized to align transferable representation knowledge across the source and multiple target lesion domains.
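As a crude stand-in for the RAP-style alignment described above (not CLKM's actual objective), the sketch below penalizes the distance between the mean source representation and the mean representation of each continually arriving target lesion domain; the function name and the mean-matching criterion are assumptions.

```python
import numpy as np

def representation_alignment_loss(source_feats, target_feats_list):
    """Toy representation-alignment penalty: squared distance between the
    mean source feature and each target domain's mean feature. A crude
    stand-in for aligning transferable representation knowledge across
    the source and multiple target domains."""
    mu_s = np.mean(source_feats, axis=0)
    return float(sum(np.linalg.norm(mu_s - np.mean(t, axis=0)) ** 2
                     for t in target_feats_list))

# Toy check: a target domain with the same mean as the source costs nothing.
src = np.array([[0.0, 1.0], [2.0, 1.0]])        # mean = [1, 1]
aligned = np.array([[1.0, 1.0], [1.0, 1.0]])    # mean = [1, 1]
shifted = aligned + 3.0                         # mean = [4, 4]
```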
Accurate abnormality localization in chest X-rays (CXRs) can benefit the clinical diagnosis of various thoracic diseases. However, lesion-level annotation can only be performed by experienced radiologists, and it is tedious and time-consuming, and therefore difficult to acquire. This situation makes it hard to develop a fully supervised abnormality localization system for CXRs. In this regard, we propose to train a CXR abnormality localization framework via a weakly semi-supervised strategy, termed Point Beyond Class (PBC), which uses a small number of fully annotated CXRs with lesion-level bounding boxes and a large number of weakly annotated samples with only points. Such a point-annotation setting provides weak instance-level information for abnormality localization at a marginal annotation cost. Specifically, the core idea behind our PBC is to learn a robust and accurate mapping from point annotations to bounding boxes that is insensitive to the variance of the annotated points. To this end, a regularization term, namely multi-point consistency, is proposed; it drives the model to generate consistent bounding boxes from different point annotations inside the same abnormality. Furthermore, a self-supervision scheme, namely symmetric consistency, is also proposed to deeply exploit useful information from the weakly annotated data for abnormality localization. Experimental results on the RSNA and VinDr-CXR datasets justify the effectiveness of the proposed method. When less than 20% of box-level labels are used for training, our PBC can achieve an improvement of ~5 in mAP compared with the current state-of-the-art method (i.e., Point DETR). Code is available at https://github.com/haozheliu-st/point-beyond-class.
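The multi-point consistency idea can be sketched as a penalty on the spread of boxes predicted from different point annotations of the same abnormality, so identical predictions cost zero; the box format and the variance-style penalty are illustrative assumptions, not the paper's exact regularizer.

```python
import numpy as np

def multi_point_consistency(boxes):
    """Toy multi-point consistency regularizer.

    `boxes` are hypothetical box predictions [cx, cy, w, h] produced from
    different point annotations inside the same abnormality; the penalty
    is the mean squared deviation from their average box.
    """
    boxes = np.asarray(boxes, float)
    mean_box = boxes.mean(axis=0, keepdims=True)
    return float(np.mean((boxes - mean_box) ** 2))

same = multi_point_consistency([[10, 10, 4, 4], [10, 10, 4, 4]])
diff = multi_point_consistency([[10, 10, 4, 4], [14, 12, 6, 2]])
```

Minimizing this term drives the point-to-box mapping toward agreement across annotation points.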
Communication efficiency plays an important role in accelerating the distributed training of deep neural networks (DNNs). All-reduce is the key communication primitive for synchronizing model parameters in distributed DNN training. Most existing all-reduce algorithms are designed for traditional electrical interconnect systems, which cannot meet the communication requirements of distributed training of large DNNs. One promising alternative to electrical interconnects is optical interconnects, which can provide high bandwidth, low transmission delay, and low power cost. We propose an efficient scheme called WRHT (Wavelength Reused Hierarchical Tree) for implementing the all-reduce operation in optical interconnect systems, which can exploit WDM (wavelength-division multiplexing) to reduce the communication time of distributed data-parallel DNN training. We further derive the minimum number of communication steps and the minimum communication time required to realize all-reduce using WRHT. Simulation results show that the communication time of WRHT is reduced by 75.59%, 49.25%, and 70.1% compared with three traditional all-reduce algorithms simulated in the optical interconnect system. Simulation results also show that WRHT can reduce the communication time of the all-reduce operation by 86.69% and 84.71% compared with two existing all-reduce algorithms in the electrical interconnect system.
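WRHT's wavelength-reuse schedule is specific to optical hardware and is not reproduced here; for context, the sketch below simulates the classic ring all-reduce baseline that such schemes compete with, including its 2(n-1) communication steps (reduce-scatter followed by all-gather). All names are illustrative.

```python
import numpy as np

def ring_allreduce(grads):
    """Simulate ring all-reduce over n workers: a reduce-scatter phase and
    an all-gather phase of n-1 steps each, after which every worker holds
    the elementwise sum of all workers' gradients."""
    n = len(grads)
    parts = [np.array_split(np.asarray(g, float), n) for g in grads]
    steps = 0
    # Reduce-scatter: at step s, worker i sends chunk (i - s) % n to worker i+1.
    for s in range(n - 1):
        sends = [parts[i][(i - s) % n].copy() for i in range(n)]
        for i in range(n):
            parts[(i + 1) % n][(i - s) % n] += sends[i]
        steps += 1
    # All-gather: circulate the fully reduced chunks around the ring.
    for s in range(n - 1):
        sends = [parts[i][(i + 1 - s) % n].copy() for i in range(n)]
        for i in range(n):
            parts[(i + 1) % n][(i + 1 - s) % n] = sends[i]
        steps += 1
    return [np.concatenate(p) for p in parts], steps

grads = [np.arange(6.0), np.ones(6), 2 * np.ones(6)]
results, steps = ring_allreduce(grads)
```

The step count grows linearly in the number of workers, which is exactly the cost that hierarchical, WDM-based schemes aim to cut.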
Traffic forecasting is a canonical example of spatial-temporal learning tasks in intelligent transportation systems. Existing methods capture spatial dependency with a predefined matrix in graph convolution operators. However, the explicit graph structure misses some hidden representations of the relations between nodes. Furthermore, traditional graph convolution operators cannot aggregate long-range nodes on the graph. To overcome these limitations, we propose a novel network, Spatial-Temporal Adaptive graph convolution with Attention Network (STAAN), for traffic forecasting. First, we adopt an adaptive dependency matrix instead of a predefined matrix during GCN processing to infer the interdependencies among nodes. Second, we integrate a PW-attention module based on graph attention networks, which is designed for global dependency, with the GCN as the spatial block. Moreover, stacked dilated 1D convolutions, which are efficient for long-term prediction, are adopted in our temporal block to capture the different time series. We evaluate our STAAN on two real-world datasets, and the experiments validate that our model outperforms state-of-the-art baselines.
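The adaptive dependency matrix can be sketched in the style popularized by Graph WaveNet: two learnable node-embedding tables induce a dense, row-normalized adjacency with no predefined graph. The embedding dimension and random initialization here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_adjacency(n_nodes, emb_dim=8):
    """Self-adaptive dependency matrix sketch: softmax(ReLU(E1 @ E2.T)),
    where E1 and E2 are learnable node embeddings (randomly initialized
    here). The result is dense and row-stochastic, so it can encode hidden
    inter-node dependencies absent from any predefined graph."""
    e1 = rng.standard_normal((n_nodes, emb_dim))
    e2 = rng.standard_normal((n_nodes, emb_dim))
    scores = np.maximum(e1 @ e2.T, 0.0)             # ReLU keeps supports non-negative
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)     # softmax over neighbors

A = adaptive_adjacency(5)
```

In training, E1 and E2 would be updated by gradient descent alongside the rest of the network, letting the graph itself be learned.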
Positive-unlabeled (PU) learning deals with binary classification problems when only positive (P) and unlabeled (U) data are available. Many PU methods based on linear models and neural networks have been proposed. However, there is still a lack of study on how boosting-style algorithms can theoretically make use of P and U data. Considering that in some scenarios neural networks, even with fully supervised data, cannot perform as well as boosting algorithms, we propose a novel boosting algorithm for PU learning: Ada-PU, which is compared against neural networks. Ada-PU follows the general procedure of AdaBoost while maintaining and updating two different distributions over the P data. After a weak classifier is learned on the newly updated distributions, the corresponding combining weight for the final ensemble is estimated using only PU data. We prove that, with a smaller set of base classifiers, the method is guaranteed to retain the theoretical properties of boosting algorithms. In experiments, we show that Ada-PU outperforms neural networks on benchmark PU datasets. We also study UNSW-NB15, a real-world dataset in cybersecurity, and demonstrate that Ada-PU achieves superior performance in malicious-activity detection.
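For orientation only, here is a generic AdaBoost-style loop on P and U data that treats the unlabeled set as surrogate negatives; this is *not* Ada-PU's actual PU risk estimator or its two-distribution update, just a hedged sketch of boosting in the PU setting with decision stumps.

```python
import numpy as np

def stump_predict(x, feat, thresh, sign):
    return sign * np.where(x[:, feat] > thresh, 1.0, -1.0)

def fit_stump(x, y, w):
    """Exhaustively pick the weighted-error-minimizing decision stump."""
    best = (1.0, 0, 0.0, 1)
    for feat in range(x.shape[1]):
        for thresh in np.unique(x[:, feat]):
            for sign in (1, -1):
                pred = stump_predict(x, feat, thresh, sign)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, feat, thresh, sign)
    return best

def boost_pu_sketch(xp, xu, rounds=5):
    """AdaBoost-style sketch for PU data: labeled positives plus unlabeled
    samples treated as surrogate negatives, with the weight mass split
    evenly between the P and U sets and reweighted each round."""
    x = np.vstack([xp, xu])
    y = np.concatenate([np.ones(len(xp)), -np.ones(len(xu))])
    w = np.concatenate([np.full(len(xp), 0.5 / len(xp)),
                        np.full(len(xu), 0.5 / len(xu))])
    ensemble = []
    for _ in range(rounds):
        err, feat, thresh, sign = fit_stump(x, y, w)
        err = max(err, 1e-9)
        alpha = 0.5 * np.log((1 - err) / err)       # standard AdaBoost weight
        pred = stump_predict(x, feat, thresh, sign)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, feat, thresh, sign))
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, f, t, s) for a, f, t, s in ensemble)
    return np.sign(score)

xp = np.array([[2.0], [3.0], [4.0]])                # positives
xu = np.array([[-1.0], [-2.0], [-3.0], [3.5]])      # unlabeled mixture
ensemble = boost_pu_sketch(xp, xu)
```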
The task of cross-modal retrieval between texts and videos aims to understand the correspondence between vision and language. Existing studies follow a trend of measuring text-video similarity on the basis of text and video embeddings. In common practice, video representations are constructed either by feeding video frames into a network for global visual feature extraction, or by modeling simple semantic relations among local fine-grained frame regions with graph convolutional networks. However, these video representations do not fully exploit the spatio-temporal relations among visual components, and thus fail to distinguish videos with the same visual components but different relations. To solve this problem, we propose a Visual Spatio-temporal Relation-enhanced Network (VSR-Net), a novel cross-modal retrieval framework that considers the spatial-visual relations among components to enhance the global video representation in bridging the text and video modalities. Specifically, a multi-layer spatio-temporal transformer is used to encode visual spatio-temporal relations and learn visual relation features. We align the global visual features and the fine-grained relation features with text features in two embedding spaces for cross-modal text-video retrieval. Extensive experiments are conducted on the MSR-VTT and MSVD datasets. The results demonstrate the effectiveness of our proposed model. We will release the code to facilitate future research.
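A minimal sketch of matching in two embedding spaces, assuming (hypothetically) that the final text-video score is a weighted sum of cosine similarities in the global space and the relation space; the fusion weight `alpha` and all names are assumptions, not the paper's scoring function.

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def text_video_similarity(v_global, v_relation, t_global, t_relation, alpha=0.5):
    """Hypothetical two-space fusion: similarity between global features
    plus similarity between relation features, weighted by alpha."""
    return alpha * cos(v_global, t_global) + (1 - alpha) * cos(v_relation, t_relation)

v = np.array([1.0, 0.0])
r = np.array([0.0, 1.0])
match = text_video_similarity(v, r, v, r)      # both spaces agree
mismatch = text_video_similarity(v, r, r, v)   # both spaces orthogonal
```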
Transformer has achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, which is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet pretrained weights significantly degrades when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely the online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally exploit the capacity of the Transformer on limited medical data, we propose an auxiliary difficulty ranking task: the Transformer is required to identify which branch (i.e., online or target) is processing the more difficult perturbed tokens. Overall, the Transformer endeavours to distill transformation-invariant features from the perturbed tokens so as to simultaneously perform difficulty measurement and maintain the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of BOLT for medical image classification compared with ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
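The online/target pattern above is the BYOL recipe: the online branch predicts the target branch's representation, and the target is a momentum copy of the online weights. The sketch below shows those two ingredients in isolation (the auxiliary difficulty ranking task is omitted, and `tau` is an assumed momentum value).

```python
import numpy as np

def byol_loss(p_online, z_target):
    """BYOL-style objective: negative cosine similarity, written as
    2 - 2*cos, between the online branch's predictions and the target
    branch's (stop-gradient) projections, averaged over the batch."""
    p = p_online / np.linalg.norm(p_online, axis=-1, keepdims=True)
    z = z_target / np.linalg.norm(z_target, axis=-1, keepdims=True)
    return float(2 - 2 * np.mean(np.sum(p * z, axis=-1)))

def ema_update(target_w, online_w, tau=0.99):
    """Momentum (EMA) update: the target branch slowly tracks the online one."""
    return tau * target_w + (1 - tau) * online_w

reps = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
```

When the two branches agree exactly, the loss is zero; any disagreement is penalized, pushing the online network toward transformation-invariant features.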
During X-ray computed tomography (CT) scanning, metallic implants carried by patients often lead to adverse artifacts in the captured CT images, which then impair clinical treatment. For this metal artifact reduction (MAR) task, existing deep-learning-based methods have achieved promising reconstruction performance. Nevertheless, there is still room for further improvement in MAR performance and generalization ability, since some important prior knowledge underlying this specific task has not been fully exploited. In this paper, we carefully analyze the characteristics of metal artifacts and propose an orientation-shared convolution representation strategy to adapt to the physical prior structure of the artifacts, i.e., their rotationally symmetric streaking patterns. The proposed method adopts Fourier-series-expansion-based filter parametrization in artifact modeling, which can better separate artifacts from anatomical tissue and boost model generalizability. Comprehensive experiments on synthesized and clinical datasets show the superiority of our method in detail preservation over current representative MAR methods. Code will be available at \url{https://github.com/hongwang01/OSCNet}
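To give a feel for Fourier-parametrized, rotation-structured filters, the sketch below builds a 2D filter whose value is a truncated Fourier series in the angle around the center, so one small coefficient set describes a streak-like angular pattern. This is a loose illustration, not the paper's exact parametrization.

```python
import numpy as np

def angular_fourier_filter(coeffs_cos, coeffs_sin, size=9):
    """Toy filter parametrization: f(theta) = sum_k a_k cos(k*theta)
    + b_k sin(k*theta), evaluated at each pixel's angle around the
    filter center. Rotated variants would share the same coefficients."""
    c = (size - 1) / 2
    ys, xs = np.mgrid[0:size, 0:size]
    theta = np.arctan2(ys - c, xs - c)      # angle of each pixel about the center
    filt = np.zeros((size, size))
    for k, (a, b) in enumerate(zip(coeffs_cos, coeffs_sin), start=1):
        filt += a * np.cos(k * theta) + b * np.sin(k * theta)
    return filt

# Single cosine harmonic: +1 to the right of center, -1 to the left.
f = angular_fourier_filter([1.0], [0.0])
```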
Graph Neural Networks (GNNs) have been a prevailing technique for tackling various analysis tasks on graph data. A key premise for the remarkable performance of GNNs is complete and trustworthy initial graph descriptions (i.e., node features and graph structure), a premise that is often not satisfied, since real-world graphs are frequently incomplete due to various unavoidable factors. In particular, GNNs face greater challenges when node features and graph structure are incomplete at the same time. Existing methods focus on either feature completion or structure completion. They usually rely on the matching relationship between features and structure, or jointly learn node representations and feature (or structure) completion in the hope of achieving mutual benefit. However, recent studies confirm that mutual interference between features and structure degrades GNN performance. When both features and structure are incomplete, the mismatch between them caused by the randomness of the missing entries exacerbates this interference, which may trigger incorrect completions that negatively affect node representations. To this end, in this paper we propose a general GNN framework based on teacher-student distillation, namely T2-GNN, to improve the performance of GNNs on incomplete graphs. To avoid interference between features and structure, we separately design feature-level and structure-level teacher models that provide targeted guidance for the student model (a base GNN such as GCN) through distillation. We then design two personalized methods to obtain well-trained feature and structure teachers. To ensure that the teachers' knowledge is comprehensively and effectively distilled into the student, we further propose a dual distillation mode that enables the student to acquire as much expert knowledge as possible.
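The dual-distillation idea can be sketched as a student loss with two teacher terms, one from the feature-level teacher and one from the structure-level teacher. The MSE criterion and the balancing weights `lam_f`/`lam_s` are assumptions for illustration, not T2-GNN's actual loss.

```python
import numpy as np

def dual_distillation_loss(student, feat_teacher, struct_teacher,
                           lam_f=0.5, lam_s=0.5):
    """Toy dual-distillation objective: the student's node embeddings are
    pulled toward the feature-level teacher and the structure-level
    teacher simultaneously, with separate terms so neither signal
    interferes with how the other is produced."""
    mse = lambda a, b: float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))
    return lam_f * mse(student, feat_teacher) + lam_s * mse(student, struct_teacher)

s = np.ones((4, 2))  # toy student embeddings for 4 nodes
```

Keeping the two teacher terms separate is the point: each teacher is trained on its own modality, so the student receives targeted guidance without the feature-structure cross-talk that joint completion suffers from.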