Automatic segmentation of the vestibular schwannoma (VS) and the cochlea from magnetic resonance imaging (MRI) can facilitate VS treatment planning. Unsupervised segmentation methods have shown promising results without requiring the time-consuming and laborious manual labeling process. In this paper, we present an approach for VS and cochlea segmentation in an unsupervised domain adaptation setting. Specifically, we first develop a cross-site, cross-modality unpaired image translation strategy to enrich the diversity of the synthesized data. Then, we design a rule-based offline augmentation technique to further minimize the domain gap. Finally, we adopt a self-configuring segmentation framework empowered by self-training to obtain the final results. On the crossMoDA 2022 validation leaderboard, our method achieves competitive VS and cochlea segmentation performance, with mean Dice scores of 0.8178 ± 0.0803 and 0.8433 ± 0.0293, respectively.
Automatic methods to segment the vestibular schwannoma (VS) tumor and the cochlea from magnetic resonance imaging (MRI) are critical to VS treatment planning. Although supervised methods have achieved satisfactory performance on VS segmentation, they require full annotations by experts, which is laborious and time-consuming. In this work, we aim to tackle the VS and cochlea segmentation problem in an unsupervised domain adaptation setting. Our proposed method leverages image-level domain alignment to minimize the domain divergence, and semi-supervised training to further boost the performance. Furthermore, we propose to fuse the labels predicted by multiple models via noisy-label correction. Our results on the challenge validation leaderboard show that our unsupervised method achieves promising VS and cochlea segmentation performance, with a mean Dice score of 0.8261 ± 0.0416; the mean Dice value for the tumor is 0.8302 ± 0.0772. This is comparable to weakly-supervised methods.
The crossMoDA challenge aims to automatically segment the vestibular schwannoma (VS) tumor and cochlea regions of unlabeled high-resolution T2 scans by leveraging labeled contrast-enhanced T1 scans. The 2022 edition extends the segmentation task by including multi-institutional scans. In this work, we proposed an unpaired cross-modality segmentation framework using data augmentation and hybrid convolutional networks. Considering heterogeneous distributions and various image sizes for multi-institutional scans, we apply the min-max normalization for scaling the intensities of all scans between -1 and 1, and use the voxel size resampling and center cropping to obtain fixed-size sub-volumes for training. We adopt two data augmentation methods for effectively learning the semantic information and generating realistic target domain scans: generative and online data augmentation. For generative data augmentation, we use CUT and CycleGAN to generate two groups of realistic T2 volumes with different details and appearances for supervised segmentation training. For online data augmentation, we design a random tumor signal reducing method for simulating the heterogeneity of VS tumor signals. Furthermore, we utilize an advanced hybrid convolutional network with multi-dimensional convolutions to adaptively learn sparse inter-slice information and dense intra-slice information for accurate volumetric segmentation of VS tumor and cochlea regions in anisotropic scans. On the crossMoDA2022 validation dataset, our method produces promising results and achieves the mean DSC values of 72.47% and 76.48% and ASSD values of 3.42 mm and 0.53 mm for VS tumor and cochlea regions, respectively.
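To make the preprocessing and online augmentation described above concrete, here is a minimal numpy sketch of the min-max scaling to [-1, 1] and of a random tumor-signal-reducing transform; the function names and the attenuation range are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def minmax_normalize(volume: np.ndarray) -> np.ndarray:
    """Scale all intensities of a scan to the range [-1, 1]."""
    vmin, vmax = float(volume.min()), float(volume.max())
    return 2.0 * (volume - vmin) / max(vmax - vmin, 1e-8) - 1.0

def reduce_tumor_signal(volume: np.ndarray, tumor_mask: np.ndarray,
                        low: float = 0.3, high: float = 1.0, rng=None) -> np.ndarray:
    """Randomly attenuate intensities inside the tumor mask towards the
    background mean, mimicking heterogeneous VS tumor signal (assumed range)."""
    rng = rng or np.random.default_rng()
    scale = rng.uniform(low, high)                 # random attenuation factor
    out = volume.astype(np.float32, copy=True)
    background_mean = out[tumor_mask == 0].mean()  # reference intensity level
    out[tumor_mask > 0] = background_mean + scale * (out[tumor_mask > 0] - background_mean)
    return out
```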
Magnetic resonance images (MRI) are widely used to quantify the vestibular schwannoma (VS) and the cochlea. Recently, deep learning methods have shown state-of-the-art performance for segmenting these structures. However, training segmentation models may require manual labels in the target domain, which are expensive and time-consuming to obtain. To overcome this problem, domain adaptation is an effective way to leverage information from a source domain and obtain accurate segmentations without manual labels in the target domain. In this paper, we propose an unsupervised learning framework to segment the VS and the cochlea. Our framework leverages information from contrast-enhanced T1-weighted (ceT1-w) MRIs and their labels, and produces segmentations for T2-weighted MRIs without any labels in the target domain. We first apply a generator to achieve image-to-image translation. Next, we ensemble the outputs of different models to obtain the final segmentation. To cope with MRIs from different sites/scanners, we apply various "online" augmentations during training to better capture the geometric variability as well as the variability in image appearance and quality. Our method is straightforward to build and produces promising segmentations, with mean Dice scores of 0.7930 and 0.7432 for the VS and the cochlea, respectively, on the validation set.
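A minimal sketch of the model-ensembling step, assuming a simple mean over softmax probability maps (the abstract does not specify the authors' exact fusion rule):

```python
import numpy as np

def ensemble_softmax(prob_maps: list) -> np.ndarray:
    """Average the (C, D, H, W) softmax outputs of several models and take the
    argmax over classes as the final label map (simple mean-ensemble sketch)."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return mean_prob.argmax(axis=0)
```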
Domain adaptation (DA) has recently raised strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of them have been validated either on private datasets or on small, publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). crossMoDA is the first large and multi-class benchmark for unsupervised cross-modality DA. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance of VS patients are performed using contrast-enhanced T1 (ceT1) MRI. However, there is growing interest in using non-contrast sequences such as high-resolution T2 (hrT2) MRI. Therefore, we created an unsupervised cross-modality segmentation benchmark. The training set provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on the hrT2 scans provided in the testing set (N=137). A total of 16 teams submitted their algorithms for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score, VS: 88.4%; cochleas: 85.7%) and close to full supervision (median Dice score, VS: 92.5%; cochleas: 87.7%). All top-performing methods used an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source images.
The purpose of this study is to apply and evaluate out-of-the-box deep learning frameworks for the crossMoDA challenge. We use the CUT model for domain adaptation from contrast-enhanced T1 MR to high-resolution T2 MR. As data augmentation, we generate additional images with vestibular schwannomas of lower signal intensity. For the segmentation task, we use the nnU-Net framework. Our final submission achieved a mean Dice score of 0.8299 in the validation phase and 0.8253 in the test phase. Our method ranked 3rd in the crossMoDA challenge.
Glioblastomas are the most aggressive, fast-growing primary brain cancers, originating in the glial cells of the brain. Accurate identification of malignant brain tumors and their sub-regions remains one of the most challenging problems in medical image segmentation. The Brain Tumor Segmentation (BraTS) challenge has been a popular benchmark for automatic glioblastoma segmentation algorithms since its inception. In this year's challenge, BraTS 2021 provides the largest multi-parametric MRI (mpMRI) dataset, comprising 2,000 pre-operative patients. In this paper, we propose a new aggregation of two deep learning frameworks, DeepSeg and nnU-Net, for automatic glioblastoma recognition in pre-operative mpMRI. Our ensemble method obtains Dice similarity scores of 92.00, 87.33, and 84.10 and Hausdorff distances of 3.81, 8.91, and 16.02 for the enhancing tumor, tumor core, and whole tumor regions, respectively. These experimental results provide evidence that the approach can be readily applied in the clinic, thereby aiding brain cancer prognosis, treatment planning, and treatment response monitoring.
State-of-the-art brain tumor segmentation is based on deep learning models applied to multi-modal MRIs. Currently, these models are trained on images after a preprocessing stage that involves registration, interpolation, brain extraction (BE, also known as skull-stripping) and manual correction by an expert. However, for clinical practice, this last step is tedious and time-consuming and, therefore, not always feasible, resulting in skull-stripping faults that can negatively impact the tumor segmentation quality. Still, the extent of this impact has never been measured for any of the many different BE methods available. In this work, we propose an automatic brain tumor segmentation pipeline and evaluate its performance with multiple BE methods. Our experiments show that the choice of a BE method can compromise up to 15.7% of the tumor segmentation performance. Moreover, we propose training and testing tumor segmentation models on non-skull-stripped images, effectively discarding the BE step from the pipeline. Our results show that this approach leads to a competitive performance at a fraction of the time. We conclude that, in contrast to the current paradigm, training tumor segmentation models on non-skull-stripped images can be the best option when high performance in clinical practice is desired.
Automatic segmentation is essential for the brain tumor diagnosis, disease prognosis, and follow-up therapy of patients with gliomas. Still, accurate detection of gliomas and their sub-regions in multimodal MRI is very challenging due to the variety of scanners and imaging protocols. Over the last years, the BraTS Challenge has provided a large number of multi-institutional MRI scans as a benchmark for glioma segmentation algorithms. This paper describes our contribution to the BraTS 2022 Continuous Evaluation challenge. We propose a new ensemble of multiple deep learning frameworks, namely DeepSeg, nnU-Net, and DeepSCAN, for automatic glioma boundary detection in pre-operative MRI. It is worth noting that our ensemble models took first place in the final evaluation on the BraTS testing dataset, with Dice scores of 0.9294, 0.8788, and 0.8803, and Hausdorff distances of 5.23, 13.54, and 12.05 for the whole tumor, tumor core, and enhancing tumor, respectively. Furthermore, the proposed ensemble method ranked first in the final ranking on another unseen test dataset, namely the Sub-Saharan Africa dataset, achieving mean Dice scores of 0.9737, 0.9593, and 0.9022, and HD95 values of 2.66, 1.72, and 3.32 for the whole tumor, tumor core, and enhancing tumor, respectively. The docker image for the winning submission is publicly available at (https://hub.docker.com/r/razeineldin/camed22).
The existence of completely aligned and paired multi-modal neuroimaging data has proved its effectiveness in the diagnosis of brain diseases. However, collecting a full set of well-aligned and paired data is expensive or even impractical, since the practical difficulties may include high cost, long acquisition time, image corruption, and privacy issues. A realistic solution is to explore either unsupervised or semi-supervised learning to synthesize the absent neuroimaging data. In this paper, we are the first to comprehensively approach the cross-modality neuroimage synthesis task from different perspectives, including the level of supervision (especially weakly-supervised and unsupervised), loss functions, evaluation metrics, the range of modality synthesis, datasets (aligned, private, and public), and synthesis-based downstream tasks. To begin with, we highlight several open challenges for cross-modality neuroimage synthesis. Then we summarize the architectures of cross-modality synthesis under various levels of supervision. In addition, we provide in-depth analysis of how cross-modality neuroimage synthesis can improve the performance of different downstream tasks. Finally, we re-evaluate the open challenges and point out future directions for the remaining challenges. All resources are available at https://github.com/M-3LAB/awesome-multimodal-brain-image-systhesis
This work presents a novel framework, CISFA (Contrastive Image Synthesis and Self-supervised Feature Adaptation), which builds on image-domain translation and unsupervised feature adaptation for cross-modality biomedical image segmentation. Unlike existing works, we use a one-sided generative model and add a weighted patch-wise contrastive loss between sampled patches of the input image and the corresponding synthetic image, which serves as a shape constraint. Moreover, we observe that the generated image and the input image share similar structural information but belong to different modalities. We therefore enforce contrastive losses on the generated and input images to train the encoder of the segmentation model to minimize the discrepancy between paired images in the learned embedding space. Compared with existing works that rely on adversarial learning for feature adaptation, this approach enables the encoder to learn domain-independent features in a more explicit way. We extensively evaluate our method on segmentation tasks with CT and MRI images of the abdominal cavity and the whole heart. The experimental results demonstrate that the proposed framework not only produces synthetic images with less distortion of organ shapes, but also outperforms state-of-the-art domain adaptation methods by a large margin.
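As a hedged illustration of the patch-wise contrastive shape constraint, the following sketch computes an InfoNCE loss over paired patch embeddings from the input and synthetic images; the unweighted form, function name, and temperature value are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_q: torch.Tensor, feat_k: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE over N patch embeddings (N, D): each query patch from the
    synthetic image should match the patch at the same spatial location in
    the input image; all other patches act as negatives."""
    q = F.normalize(feat_q, dim=1)
    k = F.normalize(feat_k, dim=1)
    logits = q @ k.t() / temperature               # (N, N); diagonal = positives
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)
```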
The problem of domain generalization in deep learning has been widely investigated in recent years, yet contrast-enhanced imaging has received only limited attention. However, there are marked differences in contrast-enhanced imaging protocols across clinical centres, especially in the delay between contrast injection and image acquisition, and access to multi-centre contrast-enhanced image data is limited compared to the datasets available for non-contrast imaging. This calls for new tools to generalize single-centre deep learning models across new unseen domains and clinical centres in contrast-enhanced imaging. In this paper, we present an exhaustive evaluation of deep learning techniques for achieving generalizability to unseen clinical centres in contrast-enhanced image segmentation. To this end, several techniques are investigated, optimized, and systematically evaluated, including data augmentation, domain mixing, transfer learning, and domain adaptation. To demonstrate the potential of domain generalization for contrast-enhanced imaging, the methods are evaluated on ventricular segmentation in contrast-enhanced cardiac magnetic resonance imaging (MRI). The results are obtained on a multi-centre cardiac contrast-enhanced MRI dataset acquired at four hospitals located in three countries (France, Spain, and China). They show that the combination of data augmentation and transfer learning can lead to single-centre models that generalize well to new clinical centres not included during training. In contrast-enhanced imaging, single-domain neural networks with suitable generalization procedures can reach and even surpass the performance of multi-centre, multi-vendor models, obviating the need for comprehensive multi-centre datasets to train generalizable models.
Segmenting the fine structure of the mouse brain on magnetic resonance (MR) images is critical for delineating morphological regions, analyzing brain function, and understanding their relationships. Compared to a single MRI modality, multimodal MRI data provide complementary tissue features that can be exploited by deep learning models, resulting in better segmentation results. However, multimodal mouse brain MRI data is often lacking, making automatic segmentation of mouse brain fine structure a very challenging task. To address this issue, it is necessary to fuse multimodal MRI data to produce distinguished contrasts in different brain structures. Hence, we propose a novel disentangled and contrastive GAN-based framework, named MouseGAN++, to synthesize multiple MR modalities from single ones in a structure-preserving manner, thus improving the segmentation performance by imputing missing modalities and multi-modality fusion. Our results demonstrate that the translation performance of our method outperforms the state-of-the-art methods. Using the subsequently learned modality-invariant information as well as the modality-translated images, MouseGAN++ can segment fine brain structures with averaged dice coefficients of 90.0% (T2w) and 87.9% (T1w), respectively, achieving around +10% performance improvement compared to the state-of-the-art algorithms. Our results demonstrate that MouseGAN++, as a simultaneous image synthesis and segmentation method, can be used to fuse cross-modality information in an unpaired manner and yield more robust performance in the absence of multimodal data. We release our method as a mouse brain structural segmentation tool for free academic usage at https://github.com/yu02019.
Unsupervised domain adaptation (UDA) is one of the key technologies for addressing the difficulty of obtaining the ground-truth labels required for supervised learning. Typically, UDA assumes that all samples from both the source and target domains are available during training. However, this is not a realistic assumption in applications where data privacy is a concern. To overcome this limitation, UDA without source data, i.e., source-free unsupervised domain adaptation (SFUDA), has recently been proposed. Here, we propose a SFUDA method for medical image segmentation. In addition to the entropy minimization commonly used in UDA, we introduce a loss function that prevents the feature norms in the target domain from becoming small, and a prior that preserves the shape of the target organ. We conduct experiments on datasets covering multiple types of source-target domain combinations to show the versatility and robustness of our method, and confirm that it outperforms the state-of-the-art on all datasets.
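A minimal PyTorch sketch of the two target-domain losses mentioned above, entropy minimization and a penalty discouraging small feature norms; the hinge form, function names, and margin are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def entropy_minimization_loss(logits: torch.Tensor) -> torch.Tensor:
    """Mean voxel-wise Shannon entropy of the softmax predictions (minimized
    so the model becomes confident on unlabeled target-domain data)."""
    p = F.softmax(logits, dim=1)
    log_p = F.log_softmax(logits, dim=1)
    return -(p * log_p).sum(dim=1).mean()

def feature_norm_loss(features: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Hinge penalty keeping per-location feature norms above an assumed margin,
    so target-domain feature norms do not collapse. features: (B, C, ...)."""
    norms = features.flatten(2).norm(dim=1)   # L2 norm over channels per location
    return F.relu(margin - norms).mean()
```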
The spleen is one of the most commonly injured solid organs in blunt abdominal trauma. Automatic segmentation of splenic vascular injury from multi-phase CT can augment severity grading and thereby improve clinical decision support and outcome prediction. However, accurate segmentation of splenic vascular injury is challenging for the following reasons: 1) splenic vascular injuries can be highly variable in shape, texture, size, and overall appearance; and 2) data acquisition is a complex and expensive procedure requiring intensive effort from both data scientists and radiologists, which makes large-scale annotated datasets hard to acquire. In light of these challenges, we design a novel framework for multi-phase splenic vascular injury segmentation, especially with limited data. On the one hand, we propose to leverage external data to mine pseudo splenic masks as spatial attention, dubbed external attention, for guiding the segmentation of splenic vascular injury. On the other hand, we develop a synthetic phase augmentation module, built upon generative adversarial networks, for populating the internal data by fully exploiting the relations between different phases. By jointly enforcing the external attention and populating the internal data representation during training, our proposed method outperforms other competing methods and substantially improves the popular DeepLab-v3+ baseline by more than 7% in terms of average DSC, which confirms its effectiveness.
Accurate and automatic segmentation of multiple kidney structures (i.e., kidney, tumor, artery, and vein) from 3D CTA is one of the most important tasks for surgery-based renal cancer treatment (e.g., laparoscopic partial nephrectomy). This paper briefly presents the main technical details of our multi-structure segmentation method for the MICCAI 2022 KiPA challenge. The main contribution of this paper is that we design a 3D U-Net that exploits abundant contextual information as constraints. Our method ranked eighth on the MICCAI 2022 KiPA challenge open testing dataset with a mean position of 8.2. Our code and trained models are publicly available at https://github.com/fengjiejiejiejie/kipa22_nnunet.
In this paper, we target self-supervised representation learning for zero-shot tumor segmentation. We make the following contributions. First, we advocate a zero-shot setting, where the pre-trained model should be directly applicable to the downstream task without using any manual annotations. Second, we take inspiration from "layer decomposition" and innovate on the training regime with simulated tumor data. Third, we conduct extensive ablation studies to analyse the critical components of the data simulation and to validate the necessity of the different proxy tasks. We demonstrate that, with sufficient texture randomization in the simulation, the trained model can effortlessly generalize to segment real tumor data. Fourth, our approach achieves superior results for zero-shot tumor segmentation on different downstream datasets, for brain tumor segmentation and for liver tumor segmentation on LiTS2017. When evaluating model transferability for tumor segmentation under a low-annotation regime, the proposed approach also outperforms all existing self-supervised methods, opening up the use of self-supervised learning in practical scenarios.
For medical image segmentation, imagine that a model is trained only with MR images from a source domain: how well would it perform when directly segmenting CT images in a target domain? This setting, namely generalizable cross-modality segmentation, holds clinical potential but is much more challenging than related settings such as domain adaptation. To achieve this goal, we propose in this paper a novel dual-normalization module that leverages augmented source-similar and source-dissimilar images during generalizable segmentation. Specifically, given a single source domain, and aiming to simulate possible appearance changes in unseen target domains, we first utilize a nonlinear transformation to augment source-similar and source-dissimilar images. Then, to fully exploit these two types of augmentation, our proposed dual-normalization-based model employs a shared backbone but independent batch normalization layers for separate normalization. Afterwards, we put forward a style-based selection scheme to automatically choose the appropriate path at test time. Extensive experiments on three publicly available datasets, i.e., the BraTS, cross-modality cardiac, and abdominal multi-organ datasets, demonstrate that our method outperforms other state-of-the-art domain generalization methods.
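The following PyTorch sketch illustrates the dual-normalization idea: a shared convolution with two independent batch-normalization branches, one for source-similar and one for source-dissimilar inputs; layer sizes and the branch-selection interface are illustrative assumptions. At test time, a style-based selector would decide which branch to route an unseen image through.

```python
import torch
import torch.nn as nn

class DualNormConvBlock(nn.Module):
    """Shared 3x3 convolution followed by two independent BatchNorm branches."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn_similar = nn.BatchNorm2d(out_ch)      # source-similar statistics
        self.bn_dissimilar = nn.BatchNorm2d(out_ch)   # source-dissimilar statistics
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor, branch: str = "similar") -> torch.Tensor:
        x = self.conv(x)                               # weights shared by both branches
        bn = self.bn_similar if branch == "similar" else self.bn_dissimilar
        return self.act(bn(x))
```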
Objective: Thigh muscle group segmentation is important for the assessment of muscle anatomy, metabolic disease, and aging. Many efforts have been put into quantifying muscle tissues with magnetic resonance (MR) imaging, including manual annotation of individual muscles. However, leveraging publicly available annotations in MR images to achieve muscle group segmentation on single-slice computed tomography (CT) thigh images is challenging. Method: We propose an unsupervised domain adaptation pipeline with self-training to transfer labels from 3D MR to single CT slices. First, we transform the image appearance from MR to CT with CycleGAN and feed the synthesized CT images to a segmenter simultaneously. Single CT slices are divided into hard and easy cohorts based on the entropy of the pseudo labels inferred by the segmenter. After refining the easy-cohort pseudo labels based on an anatomical assumption, self-training with the easy and hard splits is applied to fine-tune the segmenter. Results: On 152 withheld single CT thigh images, the proposed pipeline achieved a mean Dice of 0.888 (0.041) across all muscle groups, including the sartorius, hamstrings, quadriceps femoris, and gracilis muscles. Conclusion: To our best knowledge, this is the first pipeline to achieve thigh imaging domain adaptation from MR to CT. The proposed pipeline is effective and robust in extracting muscle groups on 2D single-slice CT thigh images. The container is available for public use at https://github.com/MASILab/DA_CT_muscle_seg
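As a rough sketch of the entropy-based easy/hard split described above, the following computes the mean entropy of each slice's pseudo-label probability map and thresholds at a quantile; the 50% quantile threshold is an assumption, not the paper's reported setting.

```python
import numpy as np

def pseudo_label_entropy(prob_map: np.ndarray, eps: float = 1e-8) -> float:
    """Mean pixel-wise entropy of a (C, H, W) softmax probability map."""
    return float(-(prob_map * np.log(prob_map + eps)).sum(axis=0).mean())

def split_easy_hard(prob_maps: list, quantile: float = 0.5):
    """Split slices into easy/hard cohorts by pseudo-label entropy:
    low-entropy (confident) slices are 'easy', high-entropy slices are 'hard'."""
    entropies = np.array([pseudo_label_entropy(p) for p in prob_maps])
    threshold = np.quantile(entropies, quantile)
    easy = [i for i, e in enumerate(entropies) if e <= threshold]
    hard = [i for i, e in enumerate(entropies) if e > threshold]
    return easy, hard
```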
Automatic brain tumor segmentation from magnetic resonance imaging (MRI) data plays an important role in assessing tumor response to therapy and in personalized treatment stratification. Manual segmentation is tedious and subjective. Deep-learning-based brain tumor segmentation algorithms have the potential to provide objective and fast tumor segmentation. However, training such algorithms requires large datasets, which are not always available. Data augmentation techniques can reduce the need for large datasets; however, current approaches are mostly parametric and may lead to suboptimal performance. We introduce two non-parametric data augmentation methods for brain tumor segmentation: mixed structure regularization (MSR) and shuffle pixels noise (SPN). We evaluated the added value of the MSR and SPN augmentations on the Brain Tumor Segmentation (BraTS) 2018 challenge dataset with the encoder-decoder nnU-Net architecture as the segmentation algorithm. Both MSR and SPN improved the nnU-Net segmentation accuracy compared to parametric Gaussian noise augmentation. The mean Dice score increased from 80% to 82% (p-values of 0.0022 and 0.0028) when adding MSR to the non-parametric augmentation for the tumor core and whole tumor experiments, respectively. The proposed MSR and SPN augmentations have the potential to improve neural network performance also in other tasks.
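A minimal numpy sketch of a shuffle-pixels-noise style augmentation, randomly permuting the intensities of a small fraction of voxels; the voxel fraction is an assumed parameter, not the value used in the paper.

```python
import numpy as np

def shuffle_pixels_noise(image: np.ndarray, fraction: float = 0.05, rng=None) -> np.ndarray:
    """Non-parametric noise: randomly permute the intensities of a small
    fraction of voxels, leaving the rest of the image untouched."""
    rng = rng or np.random.default_rng()
    out = image.copy()
    flat = out.reshape(-1)                             # view into the copy
    n = max(1, int(fraction * flat.size))
    idx = rng.choice(flat.size, size=n, replace=False)  # voxels to disturb
    flat[idx] = flat[rng.permutation(idx)]              # shuffle their intensities
    return out
```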