Convolutional Neural Networks (CNNs) have recently been employed to solve problems from both the computer vision and medical image analysis fields. Despite their popularity, most approaches are only able to process 2D images, while most medical data used in clinical practice consists of 3D volumes. In this work we propose an approach to 3D image segmentation based on a volumetric, fully convolutional neural network. Our CNN is trained end-to-end on MRI volumes depicting the prostate, and learns to predict segmentation for the whole volume at once. We introduce a novel objective function based on the Dice coefficient, which we optimise during training. In this way we can deal with situations where there is a strong imbalance between the number of foreground and background voxels. To cope with the limited number of annotated volumes available for training, we augment the data by applying random non-linear transformations and histogram matching. We show in our experimental evaluation that our approach achieves good performance on challenging test data while requiring only a fraction of the processing time needed by previous methods.
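To make the Dice-based objective concrete, here is a minimal PyTorch sketch of a soft Dice loss in the spirit of the paper; the squared terms in the denominator follow the formulation described, while the function name, tensor layout, and epsilon smoothing are our own assumptions.

```python
import torch

def soft_dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss for binary volumetric segmentation.

    probs  : (N, 1, D, H, W) predicted foreground probabilities
    target : (N, 1, D, H, W) binary ground-truth mask
    """
    dims = (1, 2, 3, 4)                                   # channel + spatial axes
    intersection = (probs * target).sum(dims)
    denominator = probs.pow(2).sum(dims) + target.pow(2).sum(dims)
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - dice.mean()                              # minimising this maximises Dice
```

Because the loss is normalised by the total foreground and background mass, it stays informative even when foreground voxels are a tiny fraction of the volume, which is the imbalance scenario the abstract refers to.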
Medical images used in clinical practice are heterogeneous and differ in quality from the scans studied in academic research. Preprocessing breaks down in extreme cases where the anatomy, artefacts, or imaging parameters are unusual or the protocols differ, so methods that are robust to these variations are sorely needed. A novel deep learning method is proposed to rapidly segment the human brain into 132 regions. The proposed model uses an efficient U-Net-like network and benefits from the intersection points of different views and hierarchical relations to fuse the orthogonal 2D planes and brain labels during end-to-end training. Weakly supervised learning is deployed to exploit partially labelled data for whole-brain segmentation and estimation of intracranial volume (ICV). In addition, data augmentation is used to expand model training by generating brain magnetic resonance imaging (MRI) data with greater variability, while preserving data privacy. The proposed method can be applied to brain MRI data that include the skull or other artefacts without preprocessing the images or suffering a drop in performance. Several experiments using different atlases were conducted to evaluate the segmentation performance of the trained model, and compared with existing methods the results show higher segmentation accuracy and robustness on both intra- and inter-domain datasets.
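The fusion of orthogonal 2D planes can be pictured as running a 2D network along each anatomical axis and combining the per-voxel predictions. The sketch below uses plain probability averaging as a stand-in for the paper's intersection-based, hierarchy-aware fusion, so it illustrates the data flow rather than the exact method; all names are hypothetical.

```python
import torch

def fuse_orthogonal_planes(model_ax, model_cor, model_sag, volume):
    """Average per-voxel class probabilities from three 2D networks applied
    along the axial, coronal, and sagittal axes of a (1, 1, D, H, W) volume."""
    def run_2d(model, vol, axis):
        slices = torch.unbind(vol, dim=axis + 2)                  # split along D/H/W
        preds = [torch.softmax(model(s), dim=1) for s in slices]  # each (1, C, ., .)
        return torch.stack(preds, dim=axis + 2)                   # back to (1, C, D, H, W)

    fused = (run_2d(model_ax, volume, 0) +
             run_2d(model_cor, volume, 1) +
             run_2d(model_sag, volume, 2)) / 3.0
    return fused.argmax(dim=1)                                    # (1, D, H, W) label map
```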
Machine learning and computer vision techniques have grown rapidly in recent years thanks to their automation, suitability, and ability to produce astonishing results. Hence, in this paper we survey the key studies published between 2014 and 2022 that showcase the different machine learning algorithms researchers have used to segment the liver, hepatic tumours, and hepatic vasculature. We divide the surveyed studies by the tissue of interest (liver, liver tumours, or hepatic vessels), highlighting studies that tackled multiple tasks simultaneously. In addition, the machine learning algorithms are classified as supervised or unsupervised, and further partitioned when the amount of work belonging to a certain scheme is significant. Moreover, the different datasets and challenges found in the literature and on websites that provide masks of the aforementioned tissues are discussed thoroughly, highlighting the original contributions of the organisers and of other researchers. Likewise, the metrics used most heavily in the literature are mentioned in our review, stressing their relevance to the task at hand. Finally, the key challenges and future directions that innovative researchers should address are emphasised, such as the scarcity of studies on vessel-segmentation challenges and why this gap needs to be dealt with sooner rather than later.
We present VoxelMorph, a fast learning-based framework for deformable, pairwise medical image registration. Traditional registration methods optimize an objective function for each pair of images, which can be time-consuming for large datasets or rich deformation models. In contrast to this approach, and building on recent learning-based methods, we formulate registration as a function that maps an input image pair to a deformation field that aligns these images. We parameterize the function via a convolutional neural network (CNN), and optimize the parameters of the neural network on a set of images. Given a new pair of scans, VoxelMorph rapidly computes a deformation field by directly evaluating the function. In this work, we explore two different training strategies. In the first (unsupervised) setting, we train the model to maximize standard image matching objective functions that are based on the image intensities. In the second setting, we leverage auxiliary segmentations available in the training data. We demonstrate that the unsupervised model's accuracy is comparable to state-of-the-art methods, while operating orders of magnitude faster. We also show that VoxelMorph trained with auxiliary data improves registration accuracy at test time, and evaluate the effect of training set size on registration. Our method promises to speed up medical image analysis and processing pipelines, while facilitating novel directions in learning-based registration and its applications. Our code is freely available at http://voxelmorph.csail.mit.edu.
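In the unsupervised setting, the training objective combines an intensity-based similarity term between the warped moving image and the fixed image with a smoothness penalty on the predicted deformation, written here as a loss to minimise. The sketch below uses MSE and a diffusion-style finite-difference regulariser as illustrative choices; the paper also reports local cross-correlation, and the weighting is an assumption.

```python
import torch
import torch.nn.functional as F

def unsupervised_registration_loss(moved, fixed, flow, lambda_smooth=0.01):
    """Similarity + smoothness objective for learning-based registration.

    moved : (N, 1, D, H, W) moving image warped by the predicted flow
    fixed : (N, 1, D, H, W) fixed image
    flow  : (N, 3, D, H, W) dense displacement field
    """
    similarity = F.mse_loss(moved, fixed)   # intensity matching term
    # finite-difference gradients of the flow act as a diffusion regulariser
    dz = (flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :]).pow(2).mean()
    dy = (flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :]).pow(2).mean()
    dx = (flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]).pow(2).mean()
    smoothness = dz + dy + dx
    return similarity + lambda_smooth * smoothness
```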
Automated segmentation and volumetry of brain magnetic resonance imaging (MRI) scans are essential for the diagnosis of Parkinson's disease (PD) and Parkinson-plus syndromes (P-plus). To improve diagnostic performance, we adopted deep learning (DL) models for brain segmentation and compared their performance with that of the gold-standard non-DL method. We collected brain MRI scans of healthy controls (n = 105) and of patients with PD (n = 105), multiple system atrophy (n = 132), and progressive supranuclear palsy (n = 69), acquired up to 2020. Using the gold-standard non-DL model, FreeSurfer (FS), we segmented six brain structures: midbrain, pons, caudate, putamen, pallidum, and third ventricle, and used these segmentations as annotated data for the DL models, the representative V-Net and UNETR. Dice scores and the area under the curve (AUC) for differentiating normal, PD, and P-plus cases were computed. The segmentation times of V-Net and UNETR for the six brain structures per patient were 3.48 ± 0.17 s and 48.14 ± 0.97 s, respectively, at least 300 times faster than FS (15,735 ± 1.07 s). The Dice scores of both DL models were sufficiently high (> 0.85), and their disease-classification AUCs were superior to those of FS. For classifying normal versus P-plus and PD versus multiple system atrophy (cerebellar type), both the DL models and FS achieved AUCs above 0.8. DL markedly reduces the analysis time without compromising the performance of brain segmentation and differential diagnosis. Our findings may facilitate the adoption of DL-based brain MRI segmentation in clinical settings and advance brain research.
Despite advances in data augmentation and transfer learning, convolutional neural networks (CNNs) struggle to generalise to unseen domains. When segmenting brain scans, CNNs are highly sensitive to changes in resolution and contrast: even within the same MRI modality, performance can degrade across datasets. Here we introduce SynthSeg, the first segmentation CNN robust to contrast and resolution. SynthSeg is trained with synthetic data sampled from a generative model conditioned on segmentations. Crucially, we adopt a domain-randomisation strategy in which we fully randomise the contrast and resolution of the synthetic training data. Consequently, SynthSeg can segment real scans from any target domain without retraining or fine-tuning, enabling, for the first time, the analysis of vast amounts of heterogeneous clinical data. Because SynthSeg only requires segmentations for training (no images), it can learn from labels obtained with automated methods on populations with different characteristics (e.g., ageing and diseased), thus achieving robustness to a wide range of morphological variability. We demonstrate SynthSeg on 5,300 scans spanning six modalities and ten resolutions, where it exhibits unparalleled generalisation compared with supervised CNNs, state-of-the-art domain adaptation, and Bayesian segmentation. Finally, we demonstrate the generality of SynthSeg by applying it to cardiac MRI and CT segmentation.
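The core of the domain-randomisation strategy is to synthesise training images directly from label maps, drawing the appearance of every structure and the acquisition resolution at random. The NumPy/SciPy sketch below illustrates that idea with made-up parameter ranges; the actual generative model includes additional steps (spatial deformation, bias field, etc.) that are not shown here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def synth_image_from_labels(label_map, rng=None):
    """Sample one synthetic image from an integer label map with randomised
    per-label intensities, blur, and resolution (illustrative ranges only)."""
    rng = np.random.default_rng() if rng is None else rng
    image = np.zeros(label_map.shape, dtype=np.float32)
    for lab in np.unique(label_map):
        # random Gaussian intensity distribution for every label
        mean, std = rng.uniform(0, 255), rng.uniform(1, 35)
        mask = label_map == lab
        image[mask] = rng.normal(mean, std, size=mask.sum())
    image = gaussian_filter(image, sigma=rng.uniform(0.5, 1.5))   # blur / PSF
    # randomise resolution: downsample, then resample back to the original grid
    factor = rng.uniform(1.0, 4.0)
    low_res = zoom(image, 1.0 / factor, order=1)
    image = zoom(low_res, np.array(image.shape) / np.array(low_res.shape), order=1)
    return np.clip(image, 0, 255)
```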
The U-Net has been the go-to architecture for medical image segmentation tasks, but computational challenges arise when extending the U-Net architecture to 3D images. We propose the Implicit U-Net architecture, which adapts the efficient implicit-representation paradigm to supervised image segmentation tasks. By combining a convolutional feature extractor with an implicit localisation network, our Implicit U-Net has 40% fewer parameters than the equivalent U-Net. Moreover, we propose training and inference procedures that take advantage of sparse predictions. Compared with an equivalent fully convolutional U-Net, the Implicit U-Net reduces inference and training time, as well as the training memory footprint, by roughly 30%, while achieving comparable results on our two different abdominal CT-scan datasets.
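An implicit localisation network can be thought of as an MLP that is queried at arbitrary coordinates: features are sampled from the convolutional extractor at each query point and decoded into a class prediction, so only the voxels of interest need to be evaluated. The sketch below is our own minimal rendering of that idea, not the paper's exact architecture; layer sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitDecoder(nn.Module):
    """Predict the class of queried voxels from a dense CNN feature volume."""

    def __init__(self, feat_channels, num_classes, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_channels + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, features, coords):
        # features: (N, C, D, H, W); coords: (N, P, 3) in [-1, 1]
        grid = coords.view(coords.size(0), -1, 1, 1, 3)             # (N, P, 1, 1, 3)
        sampled = F.grid_sample(features, grid, align_corners=True) # (N, C, P, 1, 1)
        sampled = sampled.squeeze(-1).squeeze(-1).permute(0, 2, 1)  # (N, P, C)
        return self.mlp(torch.cat([sampled, coords], dim=-1))       # (N, P, classes)
```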
Brain tumor imaging has been part of the clinical routine for many years to perform non-invasive detection and grading of tumors. Tumor segmentation is a crucial step for managing primary brain tumors because it allows a volumetric analysis to have a longitudinal follow-up of tumor growth or shrinkage to monitor disease progression and therapy response. In addition, it facilitates further quantitative analysis such as radiomics. Deep learning models, in particular CNNs, have been a methodology of choice in many applications of medical image analysis including brain tumor segmentation. In this study, we investigated the main design aspects of CNN models for the specific task of MRI-based brain tumor segmentation. Two commonly used CNN architectures (i.e. DeepMedic and U-Net) were used to evaluate the impact of the essential parameters such as learning rate, batch size, loss function, and optimizer. The performance of CNN models using different configurations was assessed with the BraTS 2018 dataset to determine the most performant model. Then, the generalization ability of the model was assessed using our in-house dataset. For all experiments, U-Net achieved a higher DSC compared to the DeepMedic. However, the difference was only statistically significant for whole tumor segmentation using FLAIR sequence data and tumor core segmentation using T1w sequence data. Adam and SGD both with the initial learning rate set to 0.001 provided the highest segmentation DSC when training the CNN model using U-Net and DeepMedic architectures, respectively. No significant difference was observed when using different normalization approaches. In terms of loss functions, a weighted combination of soft Dice and cross-entropy loss with the weighting term set to 0.5 resulted in an improved segmentation performance and training stability for both DeepMedic and U-Net models.
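The best-performing loss reported here is a weighted sum of soft Dice and cross-entropy with the weighting term set to 0.5. A possible PyTorch rendering is sketched below; the per-class averaging and smoothing constant are common conventions we assume rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, alpha=0.5, eps=1e-6):
    """alpha-weighted combination of soft Dice and cross-entropy.

    logits : (N, C, D, H, W) raw network outputs
    target : (N, D, H, W) integer class labels
    """
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.size(1)).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)                                    # batch + spatial axes
    intersection = (probs * one_hot).sum(dims)
    union = probs.sum(dims) + one_hot.sum(dims)
    dice = (2.0 * intersection + eps) / (union + eps)
    soft_dice = 1.0 - dice.mean()
    return alpha * soft_dice + (1.0 - alpha) * ce
```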
Automated abdominal multi-organ segmentation is a crucial yet challenging task for the computer-aided diagnosis of abdominal-organ-related diseases. Although many deep learning models have achieved remarkable success on numerous medical image segmentation tasks, accurate segmentation of abdominal organs remains challenging, owing to the varying sizes of the abdominal organs and the ambiguous boundaries between them. In this paper, we propose a boundary-aware network (BA-Net) to segment abdominal organs in CT and MRI scans. The model contains a shared encoder, a boundary decoder, and a segmentation decoder. Both decoders adopt a multi-scale deep-supervision strategy, which can alleviate the problems caused by variable organ sizes. The boundary probability maps produced by the boundary decoder at each scale are used as attention to enhance the segmentation feature maps. We evaluated BA-Net on the Abdominal Multi-Organ Segmentation (AMOS) challenge dataset and achieved an average Dice score of 89.29% for multi-organ segmentation on CT scans and an average Dice score of 71.92% on MRI scans. The results demonstrate that BA-Net outperforms nnUNet on both segmentation tasks.
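The boundary-as-attention mechanism can be sketched as re-weighting the segmentation features at each scale by the boundary probabilities predicted at the same scale. The fusion rule below (multiplying by one plus the boundary probability) is an assumption used for illustration only.

```python
import torch
import torch.nn as nn

class BoundaryAttention(nn.Module):
    """Re-weight segmentation features with boundary probabilities at one scale."""

    def __init__(self, channels):
        super().__init__()
        self.boundary_head = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, seg_feat, boundary_feat):
        # seg_feat, boundary_feat: (N, C, D, H, W) decoder features at one scale
        boundary_prob = torch.sigmoid(self.boundary_head(boundary_feat))  # (N, 1, D, H, W)
        # emphasise segmentation features near predicted organ boundaries
        return seg_feat * (1.0 + boundary_prob)
```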
Objective: Multiple sclerosis (MS) is an autoimmune, demyelinating disease that causes lesions in the central nervous system. The disease can be tracked and diagnosed using magnetic resonance imaging (MRI). To date, automated multimodal biomedical approaches have been used to segment lesions, which is not beneficial for patients in terms of cost, time, and availability. The authors of this paper propose a method that uses only one modality (FLAIR images) to accurately segment MS lesions. Methods: A flexible patch-based convolutional neural network (CNN), designed with a 3D ResNet and spatial-channel attention modules, segments the MS lesions. The method consists of three stages: (1) contrast-limited adaptive histogram equalisation (CLAHE) is applied to the original images and concatenated with the extracted edges to form a 4D image; (2) patches of size 80 × 80 × 80 × 2 are randomly selected from the 4D image; and (3) the extracted patches are passed to the attention-equipped CNN for lesion segmentation. Finally, the proposed method is compared with previous work on the same dataset. Results: The present study evaluated the model on the test set of the ISBI challenge data. The experimental results show that the proposed method significantly surpasses existing methods in Dice similarity and absolute volume difference, while using only one modality (FLAIR) to segment the lesions. Conclusion: The authors introduce an automated approach to segment lesions based on at most two modalities as input. The proposed architecture is composed of convolution, deconvolution, and an SCA-VoxRes module as the attention module. The results show that the proposed method performs well compared with other methods.
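Stages (1) and (2) of the pipeline, CLAHE followed by edge concatenation and random patch extraction, might look roughly like the sketch below. The CLAHE parameters, the slice-wise application with OpenCV, and the Sobel edge detector are illustrative assumptions; only the 80-voxel patch size and the two-channel input come from the abstract.

```python
import numpy as np
import cv2
from scipy import ndimage

def build_two_channel_volume(flair):
    """Slice-wise CLAHE on a FLAIR volume plus an edge map, stacked as two channels."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    flair_u8 = ((flair - flair.min()) / (np.ptp(flair) + 1e-8) * 255).astype(np.uint8)
    equalised = np.stack([clahe.apply(s) for s in flair_u8], axis=0).astype(np.float32)
    edges = ndimage.generic_gradient_magnitude(equalised, ndimage.sobel)
    return np.stack([equalised, edges], axis=-1)            # (D, H, W, 2)

def random_patches(volume, patch=80, n=8, rng=None):
    """Draw n random patch x patch x patch x 2 blocks (volume must be at least patch-sized)."""
    rng = np.random.default_rng() if rng is None else rng
    d, h, w, _ = volume.shape
    out = []
    for _ in range(n):
        z, y, x = (rng.integers(0, s - patch + 1) for s in (d, h, w))
        out.append(volume[z:z + patch, y:y + patch, x:x + patch])
    return np.stack(out)
```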
Fully Convolutional Neural Networks (FCNNs) with contracting and expanding paths have shown prominence for the majority of medical image segmentation applications over the past decade. In FCNNs, the encoder plays an integral role by learning both global and local features and contextual representations which can be utilized for semantic output prediction by the decoder. Despite their success, the locality of convolutional layers in FCNNs limits the capability of learning long-range spatial dependencies. Inspired by the recent success of transformers for Natural Language Processing (NLP) in long-range sequence learning, we reformulate the task of volumetric (3D) medical image segmentation as a sequence-to-sequence prediction problem. We introduce a novel architecture, dubbed UNEt TRansformers (UNETR), that utilizes a transformer as the encoder to learn sequence representations of the input volume and effectively capture the global multi-scale information, while also following the successful "U-shaped" network design for the encoder and decoder. The transformer encoder is directly connected to a decoder via skip connections at different resolutions to compute the final semantic segmentation output. We have validated the performance of our method on the Multi Atlas Labeling Beyond The Cranial Vault (BTCV) dataset for multi-organ segmentation and the Medical Segmentation Decathlon (MSD) dataset for brain tumor and spleen segmentation tasks. Our benchmarks demonstrate new state-of-the-art performance on the BTCV leaderboard. Code: https://monai.io/research/unetr
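Since UNETR is distributed with MONAI (see the link above), a minimal usage sketch looks like the following; the constructor arguments shown are the commonly documented ones, and defaults may differ between MONAI versions.

```python
import torch
from monai.networks.nets import UNETR  # assumes MONAI is installed

# Hedged sketch: a 14-class model on 96^3 patches, as used for BTCV-style
# multi-organ segmentation; adjust channels and patch size to your data.
model = UNETR(in_channels=1, out_channels=14, img_size=(96, 96, 96))
with torch.no_grad():
    logits = model(torch.randn(1, 1, 96, 96, 96))
print(logits.shape)  # expected: torch.Size([1, 14, 96, 96, 96])
```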
Automatic segmentation is essential for brain tumor diagnosis, disease prognosis, and follow-up therapy of patients with gliomas. Still, accurate detection of gliomas and their sub-regions in multimodal MRI is very challenging due to the variety of scanners and imaging protocols. Over the last years, the BraTS Challenge has provided a large number of multi-institutional MRI scans as a benchmark for glioma segmentation algorithms. This paper describes our contribution to the BraTS 2022 Continuous Evaluation challenge. We propose a new ensemble of multiple deep learning frameworks, namely DeepSeg, nnU-Net, and DeepSCAN, for automatic glioma boundary detection in pre-operative MRI. It is worth noting that our ensemble models took first place in the final evaluation on the BraTS testing dataset, with Dice scores of 0.9294, 0.8788, and 0.8803, and Hausdorff distances of 5.23, 13.54, and 12.05 for the whole tumor, tumor core, and enhancing tumor, respectively. Furthermore, the proposed ensemble method ranked first in the final ranking on another unseen test dataset, namely the Sub-Saharan Africa dataset, achieving mean Dice scores of 0.9737, 0.9593, and 0.9022, and HD95 of 2.66, 1.72, and 3.32 for the whole tumor, tumor core, and enhancing tumor, respectively. The docker image for the winning submission is publicly available at https://hub.docker.com/r/razeineldin/camed22.
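The abstract does not state how the three frameworks are fused; a common choice, shown below purely as an assumption, is to average their per-voxel class probabilities before taking the argmax.

```python
import torch

def ensemble_probabilities(logits_list):
    """Average the class probabilities of several segmentation models."""
    probs = [torch.softmax(l, dim=1) for l in logits_list]
    return torch.stack(probs).mean(dim=0)

# hypothetical usage:
# fused = ensemble_probabilities([deepseg_out, nnunet_out, deepscan_out])
# labels = fused.argmax(dim=1)
```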
Deep learning models for medical imaging are often large and complex, requiring specialised hardware to train and evaluate. To address this issue, we propose the PocketNet paradigm, which reduces the size of deep learning models by throttling the growth in the number of channels in convolutional neural networks. We demonstrate that, for a range of segmentation and classification tasks, PocketNet architectures produce results comparable to those of conventional neural networks while reducing the number of parameters by multiple orders of magnitude, using up to 90% less GPU memory, and speeding up training by as much as 40%, thereby allowing such models to be trained and deployed in resource-constrained settings.
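The effect of throttling channel growth is easy to see with a back-of-the-envelope parameter count: doubling the channel width at every encoder level makes the deepest convolutions dominate the parameter budget, whereas keeping the width constant keeps every level cheap. The layer layout in this sketch is an illustrative assumption, not the PocketNet architecture itself.

```python
def conv_params(c_in, c_out, k=3):
    """Parameter count of one 3D convolution (weights + biases)."""
    return c_in * c_out * k ** 3 + c_out

def encoder_params(base=32, levels=4, constant=False):
    """Rough count for one conv per encoder level: channel doubling vs constant width."""
    total, c_in = 0, 1
    for level in range(levels):
        c_out = base if constant else base * 2 ** level
        total += conv_params(c_in, c_out)
        c_in = c_out
    return total

print(encoder_params(constant=False))  # channels double each level
print(encoder_params(constant=True))   # channels stay fixed (PocketNet-style)
```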
Automatic segmentation of the kidney and kidney tumours in Computed Tomography (CT) images is essential, as it requires less time than the current gold standard of manual segmentation. However, many hospitals still rely on manual study and segmentation of CT images by medical practitioners because of its higher accuracy. Thus, this study focuses on the development of an approach for automatic kidney and kidney tumour segmentation in contrast-enhanced CT images. A method based on a Convolutional Neural Network (CNN) was proposed, in which a 3D U-Net segmentation model was developed and trained to delineate the kidney and kidney tumour from CT scans. Each CT image was pre-processed before being input to the CNN, and the effect of down-sampled and patch-wise input images on the model performance was analysed. The proposed method was evaluated on the publicly available 2021 Kidney and Kidney Tumour Segmentation Challenge (KiTS21) dataset. The best-performing model recorded an average training Dice score of 0.6129, with kidney and kidney-tumour Dice scores of 0.7923 and 0.4344, respectively. For testing, the model obtained a kidney Dice score of 0.8034 and a kidney-tumour Dice score of 0.4713, with an average Dice score of 0.6374.
Clinical diagnostic and treatment decisions rely upon the integration of patient-specific data with clinical reasoning. Cancer presents a unique context that influences treatment decisions, given its diverse forms of disease evolution. Biomedical imaging allows noninvasive assessment of disease based on visual evaluations, leading to better clinical outcome prediction and therapeutic planning. Early methods of brain cancer characterization predominantly relied upon statistical modeling of neuroimaging data. Driven by breakthroughs in computer vision, deep learning has become the de facto standard in the domain of medical imaging. Integrated statistical and deep learning methods have recently emerged as a new direction in the automation of medical practice, unifying multi-disciplinary knowledge in medicine, statistics, and artificial intelligence. In this study, we critically review major statistical and deep learning models and their applications in brain imaging research, with a focus on MRI-based brain tumor segmentation. The results highlight that model-driven classical statistics and data-driven deep learning form a potent combination for developing automated systems in clinical oncology.
Automated segmentation of abdominal organs in computed tomography (CT) images can support radiation therapy and image-guided surgery workflows. Developing such automated solutions remains challenging, mainly owing to complex organ interactions and blurry boundaries in CT images. To address these issues, we focus on effective spatial-context modelling and an explicit edge-segmentation prior. Accordingly, we propose a 3D network with four main components trained end-to-end: a shared encoder, an edge detector, a decoder with edge skip-connections (ESCs), and a recurrent feature propagation head (RFP-head). To capture wide-range spatial dependencies, the RFP-head propagates and harvests local features through directed acyclic graphs (DAGs) formulated in an efficient slice-wise manner with respect to the spatial arrangement of image units. To leverage edge information, the edge detector learns edge knowledge specifically tuned for semantic segmentation by exploiting edge supervision. The ESCs then aggregate the edge knowledge with multi-level decoder features to learn a hierarchy of discriminative features that explicitly model the complementarity between organ interiors and edges for segmentation. We conduct extensive experiments on two challenging abdominal CT datasets with eight annotated organs. The experimental results show that the proposed network outperforms several state-of-the-art models, especially for the segmentation of small and complex structures (gallbladder, oesophagus, stomach, pancreas, and duodenum). The code will be made publicly available.
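The slice-wise DAG propagation can be loosely pictured as a sequential scan over one spatial axis in which each slice's features are updated from its already-visited neighbour. The sketch below is a deliberately simplified stand-in (single axis, two scan directions, one shared convolution) and should not be read as the paper's implementation of the RFP-head.

```python
import torch
import torch.nn as nn

class SliceWisePropagation(nn.Module):
    """Propagate features along the depth axis in both directions."""

    def __init__(self, channels):
        super().__init__()
        self.update = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def propagate(self, feat, reverse=False):
        # feat: (N, C, D, H, W); scan slice by slice over the depth axis
        slices = list(torch.unbind(feat, dim=2))
        order = reversed(range(len(slices))) if reverse else range(len(slices))
        prev = None
        for i in order:
            if prev is not None:
                slices[i] = slices[i] + torch.relu(self.update(prev))
            prev = slices[i]
        return torch.stack(slices, dim=2)

    def forward(self, feat):
        return self.propagate(feat) + self.propagate(feat, reverse=True)
```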
Over the last decade, convolutional neural networks (ConvNets) have dominated the field of medical image analysis. However, their performance can still be limited by the inability to model long-range spatial relationships between voxels in an image. Numerous vision Transformers have recently been proposed to address this shortcoming of ConvNets, demonstrating state-of-the-art performance in many medical imaging applications. Transformers may be strong candidates for image registration because their self-attention mechanism enables a more precise understanding of the spatial correspondence between moving and fixed images. In this paper, we present TransMorph, a hybrid Transformer-ConvNet model for volumetric medical image registration. We also introduce three variants of TransMorph: two diffeomorphic variants that ensure topology-preserving deformations and a Bayesian variant that produces well-calibrated registration uncertainty estimates. The proposed models are extensively validated against a variety of existing registration methods and Transformer architectures using volumetric medical images from two applications: inter-patient brain MRI registration and phantom-to-CT registration. Qualitative and quantitative results demonstrate that TransMorph and its variants achieve substantial improvements over the baseline methods, demonstrating the effectiveness of Transformers for medical image registration.
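Whatever network predicts the deformation, applying it to the moving image comes down to a differentiable warping step. Below is a sketch of such a spatial-transformer warp with trilinear sampling; the displacement convention and normalisation details are assumptions of this sketch rather than TransMorph specifics.

```python
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """Warp a moving volume with a dense displacement field.

    moving : (N, 1, D, H, W)
    flow   : (N, 3, D, H, W) displacements in voxels, ordered (dz, dy, dx)
    """
    n, _, d, h, w = moving.shape
    zz, yy, xx = torch.meshgrid(
        torch.arange(d), torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((zz, yy, xx)).float().to(moving.device)   # (3, D, H, W) identity grid
    coords = base.unsqueeze(0) + flow                            # (N, 3, D, H, W)
    # normalise to [-1, 1] for grid_sample
    scale = torch.tensor([d - 1, h - 1, w - 1], dtype=torch.float32,
                         device=moving.device).view(1, 3, 1, 1, 1)
    coords = 2.0 * coords / scale - 1.0
    # reorder the last dimension to (x, y, z) as grid_sample expects
    grid = coords.permute(0, 2, 3, 4, 1).flip(-1)                # (N, D, H, W, 3)
    return F.grid_sample(moving, grid, align_corners=True)
```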
Every year, millions of brain MRI scans are acquired in hospitals, far more than the size of any research dataset. The ability to analyse such scans could therefore transform neuroimaging research. Yet their potential remains untapped, since no automated algorithm can cope with the high variability of clinical acquisitions (MR contrast, resolution, orientation, etc.). Here we present SynthSeg+, an AI segmentation suite that enables, for the first time, robust analysis of heterogeneous clinical datasets. Specifically, in addition to whole-brain segmentation, SynthSeg+ also performs cortical parcellation, intracranial volume estimation, and automated detection of faulty segmentations (mainly caused by scans of very low quality). We demonstrate SynthSeg+ in seven experiments, including an ageing study of 14,000 scans, where it accurately replicates atrophy patterns observed on data of much higher quality. SynthSeg+ is publicly released as a ready-to-use tool to unlock the potential of quantitative morphometry in a wide range of settings.
Gliomas are brain tumours composed of different, highly heterogeneous histological sub-regions. Image-analysis techniques that identify the relevant tumour substructures have a high potential to improve patient diagnosis, treatment, and prognosis. However, owing to the high heterogeneity of gliomas, the segmentation task is currently a major challenge in the field of medical image analysis. In the present work, the database of the 2018 Brain Tumour Segmentation (BraTS) challenge, consisting of multimodal MRI scans of gliomas, is studied. A segmentation methodology based on the design and application of convolutional neural networks (CNNs), combined with original post-processing techniques of low computational demand, is proposed. The post-processing techniques are largely responsible for the results obtained in the segmentation. The segmented regions are the whole tumour, the tumour core, and the enhancing tumour core, with mean Dice coefficients of 0.8934, 0.8376, and 0.8113, respectively. These results reach the state of the art in glioma segmentation as set by the winners of the challenge.
We propose an optimised U-Net architecture for the brain tumour segmentation task of the BraTS21 challenge. To find the optimal model architecture and learning schedule, we ran an extensive ablation study testing deep-supervision loss, focal loss, decoder attention, drop block, and residual connections. In addition, we searched for the optimal depth of the U-Net encoder, the number of convolutional channels, and the post-processing strategy. Our method won the validation phase and took third place in the test phase. We have open-sourced the code to reproduce our BraTS21 submission in the NVIDIA Deep Learning Examples GitHub repository.
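Of the ablated ingredients, the deep-supervision loss is the easiest to sketch: the training loss is evaluated on decoder outputs at several resolutions and combined with decreasing weights. The weighting scheme and nearest-neighbour downsampling of the labels below are assumptions, not the exact BraTS21 recipe.

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(aux_logits, target, base_loss, weights=None):
    """Apply base_loss to multi-resolution decoder outputs and sum with weights.

    aux_logits : list of (N, C, d_i, h_i, w_i) outputs, finest resolution first
    target     : (N, D, H, W) integer labels at full resolution
    base_loss  : callable, e.g. F.cross_entropy
    """
    weights = weights or [0.5 ** i for i in range(len(aux_logits))]
    total = 0.0
    for w, logits in zip(weights, aux_logits):
        # downsample the labels to the resolution of this decoder output
        t = F.interpolate(target[:, None].float(), size=logits.shape[2:],
                          mode="nearest").squeeze(1).long()
        total = total + w * base_loss(logits, t)
    return total / sum(weights)
```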