With the development of deep learning techniques, more and more methods have been proposed to segment the optic disc and cup (OD/OC) from fundus images. Clinically, OD/OC segmentation is often annotated by multiple clinical experts to mitigate individual bias. However, it is hard to train automated deep learning models on multiple labels. A common practice to resolve this issue is majority vote, e.g., taking the average of the multiple labels. However, this strategy ignores the different expertise of the medical experts. Motivated by the observation that OD/OC segmentation is commonly used for glaucoma diagnosis in clinical practice, in this paper we propose a novel strategy to fuse multi-rater OD/OC segmentation labels via the glaucoma diagnosis performance. Specifically, we assess the expertise of each rater through an attentive glaucoma diagnosis network. For each rater, its contribution to the diagnosis is reflected as an expertise map. To ensure the expertise maps are general across different glaucoma diagnosis models, we further propose an Expertise Generator (ExpG) to eliminate the high-frequency components in the optimization process. Based on the obtained expertise maps, the multi-rater labels can be fused as a single ground truth, which we call Diagnosis First Ground Truth (DiagFirstGT). Experimental results show that by using DiagFirstGT as the ground truth, the OD/OC segmentation network predicts masks with superior diagnostic performance.
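The contrast between the majority-vote baseline and the expertise-map fusion described above can be sketched in a few lines. This is a toy numpy analogue, not the paper's implementation: the expertise maps are given as inputs here, whereas the paper learns them from a diagnosis network.

```python
import numpy as np

def majority_vote(masks):
    """Fuse multi-rater binary masks by pixel-wise averaging and thresholding."""
    return (np.mean(masks, axis=0) >= 0.5).astype(np.uint8)

def expertise_weighted_fusion(masks, expertise_maps):
    """Fuse masks with per-rater, per-pixel expertise weights (normalized
    over raters), so more trusted raters dominate the fused ground truth."""
    w = np.asarray(expertise_maps, dtype=float)
    w = w / np.clip(w.sum(axis=0), 1e-8, None)   # normalize over raters
    fused = (np.asarray(masks, dtype=float) * w).sum(axis=0)
    return (fused >= 0.5).astype(np.uint8)
```

With uniform expertise maps, the weighted fusion reduces to the majority vote; non-uniform maps shift the fused label toward the raters judged more reliable.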
Translated by Google Translate
On medical images, many tissues/lesions may be ambiguous. That is why a group of clinical experts is usually invited to annotate medical segmentations to mitigate individual bias. However, this clinical routine also brings new challenges to the application of machine learning algorithms. Without a definite ground truth, it is hard to train and evaluate deep learning models. When annotations are collected from different graders, a common choice is majority vote. However, such a strategy ignores the differences between the grading experts. In this paper, we consider the task of predicting segmentation with calibrated inter-observer uncertainty. We note that in clinical practice, medical image segmentation is often used to assist disease diagnosis. Inspired by this observation, we propose the diagnosis-first principle, which takes disease diagnosis as the criterion to calibrate the inter-observer segmentation uncertainty. Following this idea, a framework named Diagnosis First segmentation Framework (DiFF) is proposed to estimate diagnosis-first segmentation from the raw images. In particular, DiFF first learns to fuse the multi-rater segmentation labels into a single ground truth that maximizes the disease diagnosis performance. We dub the fused ground truth Diagnosis First Ground Truth (DF-GT). We verify the effectiveness of DiFF on three different medical segmentation tasks: OD/OC segmentation on fundus images, thyroid nodule segmentation on ultrasound images, and skin lesion segmentation on dermoscopic images. Experimental results show that the proposed DiFF significantly facilitates the corresponding disease diagnosis, outperforming previous state-of-the-art multi-rater learning methods.
Clinically, accurate annotation of lesions/tissues can significantly facilitate disease diagnosis. For example, segmentation of the optic disc/cup (OD/OC) on fundus images aids the diagnosis of glaucoma, segmentation of skin lesions on dermoscopic images helps the diagnosis of melanoma, etc. With the advancement of deep learning techniques, a wide range of methods has shown that lesion/tissue segmentation can also facilitate automated disease diagnosis models. However, existing methods are limited in that they can only capture static regional correlations in the images. Inspired by the global and dynamic nature of the Vision Transformer, in this paper we propose the Segmentation-Assisted diagnosis Transformer (SeATrans) to transfer segmentation knowledge into the disease diagnosis network. Specifically, we first propose an asymmetric multi-scale interaction strategy to correlate each single low-level diagnosis feature with multi-scale segmentation features. Then, an effective strategy called SeA-block is adopted to vitalize the diagnosis features via the correlated segmentation features. To model the segmentation-diagnosis interaction, the SeA-block first embeds the diagnosis feature based on the segmentation information via an encoder, and then transfers the embedding back to the diagnosis feature space by a decoder. Experimental results show that SeATrans surpasses a wide range of state-of-the-art (SOTA) segmentation-assisted diagnosis methods on several disease diagnosis tasks.
Segmentation of the optic disc (OD) and optic cup (OC) from fundus images is an important and fundamental task for glaucoma diagnosis. In clinical practice, it is often necessary to collect opinions from multiple experts to obtain the final OD/OC annotation. This clinical routine helps to mitigate individual bias. But when the data is multiply annotated, standard deep learning models are not applicable. In this paper, we propose a novel neural network framework to learn OD/OC segmentation from multi-rater annotations. The segmentation results are self-calibrated by iteratively optimizing the estimation of multi-rater expertness and the calibrated OD/OC segmentation. In this way, the proposed method can realize the mutual improvement of both tasks and finally obtain a refined segmentation result. Specifically, we propose a Diverging Model (DivM) and a Converging Model (ConM) to process the two tasks respectively. ConM segments the raw image based on the multi-rater expertness map provided by DivM. DivM generates the multi-rater expertness map from the segmentation masks provided by ConM. Experimental results show that by recurrently running ConM and DivM, the results can be self-calibrated so as to outperform a range of state-of-the-art (SOTA) multi-rater segmentation methods.
In medical image segmentation, it is often necessary to collect opinions from multiple experts to make the final decision. This clinical routine helps to mitigate individual bias. But when data is multiply annotated, standard deep learning models are often not applicable. In this paper, we propose a novel neural network framework, called Multi-Rater Prism (MrPrism) to learn the medical image segmentation from multiple labels. Inspired by the iterative half-quadratic optimization, the proposed MrPrism will combine the multi-rater confidences assignment task and calibrated segmentation task in a recurrent manner. In this recurrent process, MrPrism can learn inter-observer variability taking into account the image semantic properties, and finally converges to a self-calibrated segmentation result reflecting the inter-observer agreement. Specifically, we propose Converging Prism (ConP) and Diverging Prism (DivP) to process the two tasks iteratively. ConP learns calibrated segmentation based on the multi-rater confidence maps estimated by DivP. DivP generates multi-rater confidence maps based on the segmentation masks estimated by ConP. The experimental results show that by recurrently running ConP and DivP, the two tasks can achieve mutual improvement. The final converged segmentation result of MrPrism outperforms state-of-the-art (SOTA) strategies on a wide range of medical image segmentation tasks.
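The recurrent ConP/DivP alternation can be illustrated with a toy fixed-point iteration. This is a simplified numpy analogue, not the paper's networks: per-rater confidences are re-estimated from each rater's agreement with the current fused mask (the DivP role), and the mask is re-fused with those confidences (the ConP role), until the two estimates stabilize.

```python
import numpy as np

def self_calibrate(masks, n_iter=10):
    """Toy analogue of the recurrent ConP/DivP process: alternate between
    confidence estimation and confidence-weighted fusion."""
    masks = np.asarray(masks, dtype=float)
    conf = np.ones(len(masks)) / len(masks)               # start uniform
    for _ in range(n_iter):
        fused = np.tensordot(conf, masks, axes=1)         # ConP: weighted fusion
        agree = 1.0 - np.abs(masks - fused).mean(axis=(1, 2))  # DivP: agreement
        conf = agree / agree.sum()
    return (fused >= 0.5).astype(np.uint8), conf
```

On data where two raters agree and one is an outlier, the iteration converges to the consensus mask while assigning the outlier a lower confidence.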
The diffusion probabilistic model (DPM) has recently become one of the hottest topics in computer vision. Its image generation applications, such as Imagen, Latent Diffusion Models and Stable Diffusion, have shown impressive generation capabilities, which aroused extensive discussion in the community. Many recent studies have also found it useful in many other vision tasks, like image deblurring, super-resolution and anomaly detection. Inspired by the success of DPM, we propose the first DPM-based model for general medical image segmentation tasks, which we name MedSegDiff. In order to enhance the step-wise regional attention in DPM for medical image segmentation, we propose dynamic conditional encoding, which establishes state-adaptive conditions for each sampling step. We further propose a Feature Frequency Parser (FF-Parser) to eliminate the negative effect of the high-frequency noise component in this process. We verify MedSegDiff on three medical segmentation tasks with different image modalities: optic cup segmentation over fundus images, brain tumor segmentation over MRI images, and thyroid nodule segmentation over ultrasound images. The experimental results show that MedSegDiff outperforms state-of-the-art (SOTA) methods by a considerable performance gap, indicating the generalization and effectiveness of the proposed model.
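The core idea behind FF-Parser-style filtering can be sketched in the Fourier domain. The following is a hedged illustration, not the paper's module: the paper learns a parameterized attenuation map, while this sketch uses a fixed low-pass mask with an assumed `keep_ratio` parameter.

```python
import numpy as np

def frequency_parse(feature, keep_ratio=0.25):
    """Sketch of Fourier-domain filtering: move a 2D feature map to the
    frequency domain, attenuate high-frequency components with a low-pass
    mask, and transform back."""
    h, w = feature.shape
    spec = np.fft.fftshift(np.fft.fft2(feature))       # low freqs at center
    mask = np.zeros((h, w))
    ch, cw = h // 2, w // 2
    rh = max(1, int(h * keep_ratio))
    rw = max(1, int(w * keep_ratio))
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = 1.0       # keep low frequencies
    return np.fft.ifft2(np.fft.ifftshift(spec * mask)).real
```

A constant map passes through unchanged (only the DC term survives the mask), while high-frequency noise such as a checkerboard pattern is suppressed.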
Color fundus photography and Optical Coherence Tomography (OCT) are the two most cost-effective tools for glaucoma screening. Both modalities present prominent biomarkers for indicating suspected glaucoma. Clinically, it is often recommended to take both screenings for a more accurate and reliable diagnosis. However, although numerous algorithms have been proposed for computer-aided diagnosis based on fundus images or OCT volumes, there are still few methods leveraging both modalities for glaucoma assessment. Inspired by the success of the Retinal Fundus Glaucoma Challenge (REFUGE) we held previously, we set up the Glaucoma grAding from Multi-Modality imAges (GAMMA) Challenge to encourage the development of fundus & OCT-based glaucoma grading. The primary task of the challenge is to grade glaucoma from both 2D fundus images and 3D OCT scanning volumes. As part of GAMMA, we have publicly released a glaucoma-annotated dataset with both 2D fundus color photography and 3D OCT volumes, which is the first multi-modality dataset for glaucoma grading. In addition, an evaluation framework was also established to evaluate the performance of the submitted methods. During the challenge, 1272 results were submitted, and finally the top-10 teams were selected for the final stage. We analyze their results and summarize their methods in this paper. Since all these teams submitted their source code in the challenge, a detailed ablation study was also conducted to verify the effectiveness of the particular modules proposed. We find that many of the proposed techniques are practical for the clinical diagnosis of glaucoma. As the first in-depth study of fundus & OCT multi-modality glaucoma grading, we believe the GAMMA Challenge will be an essential starting point for future research.
Human annotation is imperfect, especially when produced by junior practitioners. Multi-expert consensus is usually regarded as the golden standard, yet this annotation protocol is too expensive to implement in many real-world projects. In this study, we propose a method to refine human annotations, named Neural Annotation Refinement (NeAR). It is based on a learnable implicit function that decodes a latent vector into a represented shape. By integrating the appearance as an input of the implicit function, the appearance-aware NeAR fixes the annotation artifacts. Our method is demonstrated in the application of adrenal gland analysis. We first show that NeAR can repair distorted golden standards on a public adrenal gland segmentation dataset. Besides, we develop a new Adrenal gLand ANalysis (ALAN) dataset with the proposed NeAR, where each case consists of NeAR-refined adrenal gland shapes and a diagnosis label (normal vs. abnormal) assigned by experts. We show that models trained on the shapes repaired by NeAR diagnose adrenal glands better than those trained on the original ones. The ALAN dataset will be open-source, with 1,594 shapes for adrenal gland diagnosis, serving as a new benchmark for medical shape analysis. Code and dataset are available at https://github.com/m3dv/near.
Different from general visual classification, some classification tasks are more challenging as they require the professional categorization of the images. In this paper, we call them expert-level classification. Previous fine-grained vision classification (FGVC) work has made many efforts on some of its specific sub-tasks. However, these efforts are difficult to extend to the general case, which relies on a comprehensive analysis of part-global correlation and hierarchical feature interaction. In this paper, we propose Expert Network (ExpNet) to address the unique challenges of expert-level classification through a unified network. In ExpNet, we hierarchically decouple the part and context features and individually process them using a novel attentive mechanism, called Gaze-Shift. In each stage, Gaze-Shift produces a focal-part feature for the subsequent abstraction and memorizes a context-related embedding. Then we fuse the final focal embedding with all memorized context-related embeddings to make the prediction. Such an architecture realizes the dual-track processing of partial and global information as well as hierarchical feature interactions. We conduct experiments on three representative expert-level classification tasks: FGVC, disease classification, and artwork attributes classification. In these experiments, our ExpNet achieves superior performance compared with the state of the art in a wide range of fields, indicating its effectiveness and generalization. The code will be made publicly available.
The study of the retinal vasculature is a fundamental stage in the screening and diagnosis of many diseases. A complete retinal vascular analysis requires the segmentation of the retinal vessels into arteries and veins (A/V). Early automatic approaches addressed these segmentation and classification tasks in two sequential stages. Nowadays, however, these tasks are approached as a joint semantic segmentation task, because the classification results highly depend on the effectiveness of the vessel segmentation. In this regard, we propose a novel approach for the simultaneous segmentation and classification of the retinal A/V from eye fundus images. In particular, we propose a novel method that, unlike previous approaches, and thanks to a novel loss, decomposes the joint task into three segmentation problems targeting arteries, veins, and the whole vascular tree. This configuration allows the intuitive handling of vessel crossings and directly provides accurate segmentation masks of the different target vascular trees. The provided ablation study on the public Retinal Images vessel Tree Extraction (RITE) dataset demonstrates that the proposed method provides satisfactory performance, particularly in the segmentation of the different structures. Furthermore, the comparison with the state of the art shows that our method obtains highly competitive results in A/V classification while significantly improving vessel segmentation. The proposed multi-segmentation method allows the detection of more vessels and better segmentation of the different structures while achieving competitive classification performance. Also, in these terms, our approach outperforms the approaches of various reference works. Moreover, in contrast with previous approaches, the proposed method allows the direct detection of vessel crossings and preserves the continuity of A/V at these complex locations.
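The decomposition into three segmentation targets can be sketched on the label side. This is an illustrative helper, not the paper's code; the label codes (`BG`, `ARTERY`, `VEIN`, `CROSSING`) are assumptions chosen for the example.

```python
import numpy as np

# assumed label codes for this sketch
BG, ARTERY, VEIN, CROSSING = 0, 1, 2, 3

def decompose_av(label):
    """Turn an A/V label map into the three binary targets of the decomposed
    formulation: artery tree, vein tree, and full vessel tree. Crossing
    pixels belong to both trees, which sidesteps the ambiguity of a single
    multi-class target at artery/vein intersections."""
    label = np.asarray(label)
    artery = np.isin(label, [ARTERY, CROSSING]).astype(np.uint8)
    vein = np.isin(label, [VEIN, CROSSING]).astype(np.uint8)
    vessels = (label != BG).astype(np.uint8)
    return artery, vein, vessels
```

Each binary target can then be supervised with an ordinary segmentation loss, which is one way to realize the three-problem configuration the abstract describes.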
Glaucoma is a severe blinding disease, for which automatic detection methods are urgently needed to alleviate the scarcity of ophthalmologists. Many works have proposed deep learning methods involving the segmentation of the optic disc and cup for glaucoma detection, in which the segmentation process is often simply regarded as an upstream sub-task. The relationship between the fundus image and the segmentation mask in glaucoma assessment is rarely explored. We propose a segmentation-based information extraction and fusion method for the glaucoma detection task, which leverages the robustness of the segmentation mask without ignoring the rich information in the original fundus image. Experimental results on both a private dataset and a public dataset show that our proposed method outperforms all models that utilize only the fundus images or the masks.
Liver cancer is one of the most common malignant diseases in the world. The segmentation and labeling of liver tumors and blood vessels in CT images can provide convenience for doctors in liver tumor diagnosis and surgical intervention. In the past decades, deep-learning-based automatic CT segmentation methods have received widespread attention in the medical field, and many state-of-the-art segmentation algorithms have emerged during this period. However, most existing segmentation methods only attend to the local feature context and have a perception deficiency with respect to the global relevance of medical images, which significantly affects the segmentation of liver tumors and vessels. We introduce a multi-scale feature-context fusion network based on the Transformer and SEBottleNet, called TransFusionNet. This network can accurately detect and identify the details of the region of interest of the liver vessels, and meanwhile it can improve the recognition of the morphological margins of liver tumors by exploiting the global information of the CT images. Experiments show that TransFusionNet is superior to state-of-the-art methods on the public LiTS and 3Dircadb datasets as well as on our clinical dataset. Finally, we propose an automatic 3D reconstruction algorithm based on the trained model. The algorithm can complete the reconstruction quickly and accurately within 1 second.
With the rapid development of artificial intelligence (AI) in medical image processing, deep learning in color fundus photography (CFP) analysis is also evolving. Although there are some open-source, labeled datasets of CFPs in the ophthalmology community, large-scale datasets for screening only have labels of disease categories, and datasets with annotations of fundus structures are usually small in size. In addition, labeling standards are not uniform across datasets, and there is no clear information on the acquisition device. Here we release a multi-annotation, multi-quality, and multi-device color fundus image dataset for glaucoma analysis on an original challenge -- Retinal Fundus Glaucoma Challenge 2nd Edition (REFUGE2). The REFUGE2 dataset contains 2000 color fundus images with annotations of glaucoma classification, optic disc/cup segmentation, as well as fovea localization. Meanwhile, the REFUGE2 challenge sets three sub-tasks of automatic glaucoma diagnosis and fundus structure analysis and provides an online evaluation framework. Based on the characteristics of multi-device and multi-quality data, some methods with strong generalizations are provided in the challenge to make the predictions more robust. This shows that REFUGE2 brings attention to the characteristics of real-world multi-domain data, bridging the gap between scientific research and clinical application.
Night-Time Scene Parsing (NTSP) is essential to many vision applications, especially for autonomous driving. Most existing methods are proposed for day-time scene parsing. They rely on modeling pixel-intensity-based spatial contextual cues under even illumination. Hence, these methods do not perform well in night-time scenes, as such spatial contextual cues are buried in the over-/under-exposed regions of night-time scenes. In this paper, we first conduct an image-frequency-based statistical experiment to interpret the discrepancy between day-time and night-time scenes. We find that the image frequency distributions differ significantly between day-time and night-time scenes, and that understanding such frequency distributions is critical to the NTSP problem. Based on this, we propose to exploit the image frequency distributions for night-time scene parsing. First, we propose a Learnable Frequency Encoder (LFE) to model the relationship between different frequency coefficients and dynamically measure all the frequency components. Second, we propose a Spatial Frequency Fusion module (SFF) that fuses both spatial and frequency information to guide the extraction of spatial contextual features. Extensive experiments show that our method performs favorably against state-of-the-art methods on the NightCity, NightCity+ and BDD100K-night datasets. In addition, we demonstrate that our method can be applied to existing day-time scene parsing methods and improve their performance on night-time scenes.
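The frequency-distribution statistic underlying the day/night discrepancy experiment can be approximated with a radially averaged amplitude spectrum. This is a hedged sketch of the kind of statistic involved, not the paper's exact experiment; the number of radial bins is an assumed parameter.

```python
import numpy as np

def radial_amplitude_profile(image, n_bins=4):
    """Radially average the Fourier amplitude spectrum of a grayscale image,
    yielding a compact low-to-high frequency profile that can be compared
    between day-time and night-time images."""
    h, w = image.shape
    amp = np.abs(np.fft.fftshift(np.fft.fft2(image)))  # low freqs at center
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)               # distance from center
    bins = (r / (r.max() + 1e-8) * n_bins).astype(int).clip(0, n_bins - 1)
    return np.array([amp[bins == b].mean() for b in range(n_bins)])
```

A smooth, low-contrast image concentrates its energy in the lowest-frequency bin, so comparing such profiles across scenes exposes distribution shifts like the one the paper reports.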
Segmenting dental plaque from images of medical reagent staining provides valuable information for diagnosis and the determination of follow-up treatment plans. However, accurate dental plaque segmentation is a challenging task that requires identifying teeth and dental plaque subject to semantic-blur regions (i.e., confused boundaries in the border area between teeth and dental plaque) and complex variations of instance shapes, which are not fully addressed by existing methods. Therefore, we propose a semantic decomposition network (SDNet) that introduces two single-task branches to separately address the segmentation of teeth and dental plaque, and designs additional constraints to learn category-specific features for each branch, thus facilitating the semantic decomposition and improving the performance of dental plaque segmentation. Specifically, SDNet learns two separate segmentation branches for teeth and dental plaque in a divide-and-conquer manner to disentangle the entangled relation between them; each category-specific branch tends to yield accurate segmentation. To help the two branches better focus on category-specific features, two constraint modules are further proposed: 1) a contrastive constraint module (CCM) learns discriminative feature representations by maximizing the distance between different category representations, so as to reduce the negative impact of semantic-blur regions on feature extraction; 2) a structural constraint module (SCM) provides complete structural information for dental plaque of various shapes by supervising a boundary-aware geometric constraint. In addition, we construct a large-scale open-source Stained Dental Plaque Segmentation dataset (SDPSeg), which provides high-quality annotations for teeth and dental plaque. Experimental results on the SDPSeg dataset show that SDNet achieves state-of-the-art performance.
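The contrastive constraint idea of pushing category representations apart can be expressed as a simple margin penalty on the distance between class mean embeddings. This is an illustrative sketch under assumed inputs (per-class feature matrices and a margin), not the CCM's actual formulation.

```python
import numpy as np

def class_distance_loss(tooth_feat, plaque_feat, margin=1.0):
    """Margin penalty on the distance between class mean embeddings:
    zero when the two classes are at least `margin` apart, positive
    (pushing them apart) when they are closer."""
    d = np.linalg.norm(tooth_feat.mean(axis=0) - plaque_feat.mean(axis=0))
    return max(0.0, margin - d)
```

Minimizing this term alongside the segmentation losses encourages the two branches to keep their category features discriminative near the confused boundary regions.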
Deep learning techniques have shown their superiority in the clinical examination conducted by dermatologists. However, due to the difficulty of incorporating clinical knowledge into the learning process, melanoma diagnosis remains a challenging task. In this paper, we propose a novel knowledge-aware deep framework that incorporates clinical knowledge into the collaborative learning of two important melanoma diagnosis tasks, i.e., skin lesion segmentation and melanoma recognition. Specifically, to exploit the knowledge of the morphological expressions of the lesion region and its periphery for melanoma identification, a lesion-based pooling and shape extraction (LPSE) scheme is designed, which transfers the structural information obtained from skin lesion segmentation into melanoma recognition. Meanwhile, to pass the dermoscopic diagnosis knowledge from melanoma recognition to skin lesion segmentation, an effective diagnosis-guided feature fusion (DGFF) strategy is designed. Moreover, we propose a recursive mutual learning mechanism that further promotes inter-task cooperation and thus iteratively improves the joint learning capability of the models for both skin lesion segmentation and melanoma recognition. Experimental results on two publicly available dermoscopic datasets demonstrate the effectiveness of the proposed method for melanoma analysis.
Diabetic Retinopathy (DR) is a leading cause of vision loss in the world, and early DR detection is necessary to prevent vision loss and support an appropriate treatment. In this work, we leverage interactive machine learning and introduce a joint learning framework, termed DRG-Net, to effectively learn both disease grading and multi-lesion segmentation. Our DRG-Net consists of two modules: (i) DRG-AI-System to classify DR Grading, localize lesion areas, and provide visual explanations; (ii) DRG-Expert-Interaction to receive feedback from user-expert and improve the DRG-AI-System. To deal with sparse data, we utilize transfer learning mechanisms to extract invariant feature representations by using Wasserstein distance and adversarial learning-based entropy minimization. Besides, we propose a novel attention strategy at both low- and high-level features to automatically select the most significant lesion information and provide explainable properties. In terms of human interaction, we further develop DRG-Net as a tool that enables expert users to correct the system's predictions, which may then be used to update the system as a whole. Moreover, thanks to the attention mechanism and loss functions constraint between lesion features and classification features, our approach can be robust given a certain level of noise in the feedback of users. We have benchmarked DRG-Net on the two largest DR datasets, i.e., IDRID and FGADR, and compared it to various state-of-the-art deep learning networks. In addition to outperforming other SOTA approaches, DRG-Net is effectively updated using user feedback, even in a weakly-supervised manner.
The morphological and diagnostic evaluation of the pediatric musculoskeletal system is crucial in clinical practice. However, most segmentation models do not perform well on scarce pediatric imaging data. We propose a new pre-trained regularized convolutional encoder-decoder network for the challenging task of segmenting heterogeneous pediatric magnetic resonance (MR) images. In this regard, we employ a transfer learning approach in conjunction with regularization strategies to improve the generalization of the segmentation model. To this end, we have conceived a novel optimization scheme for the segmentation network that comprises additional regularization terms in the loss function. To obtain globally consistent predictions, we incorporate a shape-based regularization derived from a non-linear shape representation learned by an auto-encoder. Additionally, an adversarial regularization computed by a discriminator is integrated to encourage plausible delineations. The proposed method is evaluated on multi-bone segmentation tasks from two scarce pediatric imaging datasets of the ankle and shoulder joints, comprising pathological as well as healthy examinations. The proposed method performs better than or similarly to previously proposed approaches in terms of Dice, sensitivity, specificity, maximum symmetric surface distance, average symmetric surface distance, and relative absolute volume difference metrics. We illustrate that the proposed approach can be easily integrated into various bone segmentation strategies and can improve the prediction accuracy of models pre-trained on large non-medical image databases. The obtained results bring new perspectives for the management of pediatric musculoskeletal disorders.
Automated identification of myocardial scar from late gadolinium enhancement cardiac magnetic resonance images (LGE-CMR) is limited by image noise and artifacts such as those related to motion and partial volume effect. This paper presents a novel joint deep learning (JDL) framework that improves such tasks by utilizing simultaneously learned myocardium segmentations to eliminate negative effects from non-region-of-interest areas. In contrast to previous approaches treating scar detection and myocardium segmentation as separate or parallel tasks, our proposed method introduces a message passing module where the information of myocardium segmentation is directly passed to guide scar detectors. This newly designed network will efficiently exploit joint information from the two related tasks and use all available sources of myocardium segmentation to benefit scar identification. We demonstrate the effectiveness of JDL on LGE-CMR images for automated left ventricular (LV) scar detection, with great potential to improve risk prediction in patients with both ischemic and non-ischemic heart disease and to improve response rates to cardiac resynchronization therapy (CRT) for heart failure patients. Experimental results show that our proposed approach outperforms multiple state-of-the-art methods, including commonly used two-step segmentation-classification networks, and multitask learning schemes where subtasks are indirectly interacted.
Camouflaged Object Detection (COD), which aims to detect objects that share a similar pattern (e.g., texture, intensity, colour, etc.) with their surroundings, has recently attracted growing research interest. As camouflaged objects often present very ambiguous boundaries, how to determine object locations as well as their weak boundaries is challenging, and is also the key to this task. Inspired by the biological visual perception process in which a human observer discovers camouflaged objects, this paper proposes a novel edge-based reversible re-calibration network called ERRNet. Our model is characterized by two innovative designs, namely Selective Edge Aggregation (SEA) and the Reversible Re-calibration Unit (RRU), which aim to model the visual perception behaviour and achieve an effective edge prior and cross-comparison between the potential camouflaged regions and the background. More importantly, the RRU possesses more comprehensive information compared with existing COD models. Experimental results show that ERRNet outperforms existing cutting-edge baselines on three COD datasets and five medical image segmentation datasets. Especially, compared with the existing top-1 model SINet, ERRNet significantly improves the performance by ~6% (mean E-measure) at a notably high speed (79.3 FPS), showing that ERRNet could be a general and robust solution for the COD task.