One-shot segmentation of brain tissues typically follows a dual-model iterative learning paradigm: a registration model (reg-model) warps a carefully-labeled atlas onto unlabeled images to initialize their pseudo masks for training a segmentation model (seg-model); the seg-model then revises the pseudo masks to enhance the reg-model for better warping in the next iteration. However, such dual-model iteration has a key weakness: the spatial misalignment inevitably caused by the reg-model can misguide the seg-model, which eventually converges to inferior segmentation performance. In this paper, we propose a novel image-aligned style transformation to reinforce the dual-model iterative learning for robust one-shot segmentation of brain tissues. Specifically, we first utilize the reg-model to warp the atlas onto an unlabeled image, and then employ Fourier-based amplitude exchange with perturbation to transplant the style of the unlabeled image into the aligned atlas. This allows the subsequent seg-model to learn on the aligned and style-transferred copies of the atlas instead of on unlabeled images, which naturally guarantees the correct spatial correspondence of each image-mask training pair without sacrificing the diversity of intensity patterns carried by the unlabeled images. Furthermore, we introduce a feature-aware content consistency, in addition to the image-level similarity, to constrain the reg-model toward a promising initialization, which avoids the collapse of the image-aligned style transformation in the first iteration. Experimental results on two public datasets demonstrate 1) a segmentation performance of our method competitive with the fully-supervised method, and 2) a superior performance over other state-of-the-art methods, with an increase in average Dice of up to 4.67%. The source code is available at: https://github.com/JinxLv/One-shot-segmentation-via-IST.
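The Fourier-based amplitude exchange at the heart of the image-aligned style transformation can be sketched as follows. This is a minimal 2D numpy sketch, assuming a low-frequency band swap in the style of Fourier domain adaptation; the band-size parameter `beta`, the perturbation range, and all function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def amplitude_exchange(aligned_atlas, unlabeled_image, beta=0.1, perturb=0.0, rng=None):
    """Transplant the low-frequency amplitude (style) of an unlabeled image
    into the aligned atlas while keeping the atlas phase (content).

    `beta` sets the swapped low-frequency band as a fraction of the image
    size; `perturb` optionally jitters the swapped amplitudes. Both are
    illustrative hyper-parameter choices.
    """
    rng = rng or np.random.default_rng(0)
    fft_atlas = np.fft.fft2(aligned_atlas)
    fft_unlab = np.fft.fft2(unlabeled_image)

    amp_atlas, phase_atlas = np.abs(fft_atlas), np.angle(fft_atlas)
    amp_unlab = np.abs(fft_unlab)

    # Shift so the low frequencies sit at the center of the spectrum.
    amp_atlas = np.fft.fftshift(amp_atlas)
    amp_unlab = np.fft.fftshift(amp_unlab)

    h, w = aligned_atlas.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2

    # Swap the central (low-frequency) amplitude band, optionally perturbed.
    patch = amp_unlab[ch - bh:ch + bh, cw - bw:cw + bw]
    if perturb > 0:
        patch = patch * rng.uniform(1 - perturb, 1 + perturb, size=patch.shape)
    amp_atlas[ch - bh:ch + bh, cw - bw:cw + bw] = patch

    amp_atlas = np.fft.ifftshift(amp_atlas)
    stylized = np.fft.ifft2(amp_atlas * np.exp(1j * phase_atlas))
    return np.real(stylized)
```

Because only the amplitude spectrum is exchanged, the anatomical content (carried by the phase) of the aligned atlas is preserved, so its label map remains valid for the stylized copy.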
translated by Google Translate
Registration of brain MRI images requires solving a deformation field, which is extremely difficult for aligning intricate brain tissues, e.g., the cortex and subcortical nuclei. Existing efforts decompose the target deformation field into intermediate sub-fields with small motions, i.e., stage-by-stage progressive registration, or at lower resolutions, i.e., coarse-to-fine estimation of the full-size deformation field. In this paper, we argue that these efforts are not mutually exclusive, and propose a unified framework for robust brain MRI registration in both progressive and coarse-to-fine manners simultaneously. Specifically, building on a dual-encoder U-Net, the fixed-moving MRI pair is encoded and decoded into multi-scale deformation sub-fields from coarse to fine. Each decoding block contains two proposed novel modules: i) Deformation Field Integration (DFI), in which a single integrated sub-field is calculated, warping by which is equivalent to warping progressively by the sub-fields from all previous decoding blocks, and ii) Non-rigid Feature Fusion (NFF), in which the features of the fixed-moving pair are aligned by the DFI-integrated sub-field and then fused to predict a finer sub-field. Leveraging both DFI and NFF, the target deformation field is factorized into multi-scale sub-fields, where the coarser fields alleviate the estimation of a finer one, and the finer fields learn to make up for the misalignments left unsolved by the coarser ones. Extensive and comprehensive experimental results on both private and public datasets demonstrate a superior registration performance on brain MRI images over progressive registration only and coarse-to-fine estimation only, with an increase in average Dice of up to 8%.
Deformable registration consists of finding the best dense correspondence between two different images. Many algorithms have been published, but clinical application has been made difficult by the high computation time needed to solve the optimization problem. Deep learning has overcome this limitation by leveraging GPU computation and a learning process. However, many deep learning methods do not take into account desirable properties respected by classical algorithms. In this paper, we present MICS, a novel deep learning algorithm for medical image registration. Since registration is an ill-posed problem, we focus our algorithm on the respect of different properties: inverse consistency, symmetry, and orientation conservation. We also combine our algorithm with a multi-step strategy to refine and improve the deformation grid. While many approaches apply registration to brain MRI, we explore a more challenging body localization: abdominal CT. Finally, we evaluate our method on a dataset used during the Learn2Reg challenge, allowing a fair comparison with published methods.
Brain extraction and registration are important preprocessing steps in neuroimaging data analysis, where the goal is to extract the brain regions from MRI scans (i.e., extraction step) and align them with a target brain image (i.e., registration step). Conventional research mainly focuses on developing methods for the extraction and registration tasks separately under supervised settings. The performance of these methods highly depends on the amount of training samples and visual inspections performed by experts for error correction. However, in many medical studies, collecting voxel-level labels and conducting manual quality control in high-dimensional neuroimages (e.g., 3D MRI) are very expensive and time-consuming. Moreover, brain extraction and registration are highly related tasks in neuroimaging data and should be solved collectively. In this paper, we study the problem of unsupervised collective extraction and registration in neuroimaging data. We propose a unified end-to-end framework, called ERNet (Extraction-Registration Network), to jointly optimize the extraction and registration tasks, allowing feedback between them. Specifically, we use a pair of multi-stage extraction and registration modules to learn the extraction mask and transformation, where the extraction network improves the extraction accuracy incrementally and the registration network successively warps the extracted image until it is well-aligned with the target image. Experiment results on real-world datasets show that our proposed method can effectively improve the performance on extraction and registration tasks in neuroimaging data. Our code and data can be found at https://github.com/ERNetERNet/ERNet
Deformable image registration is fundamental to many medical image analyses. A key obstacle to accurate image registration lies in image appearance variations, such as variations in texture, intensity, and noise. These variations are evident in medical images, especially in brain images, on which registration is frequently used. Recently, deep-learning-based registration methods (DLRs), using deep neural networks, have shown computational efficiency several orders of magnitude faster than traditional optimization-based registration methods (ORs). DLRs rely on a globally optimized network trained with a set of samples to achieve faster registration. However, DLRs tend to disregard the target-pair-specific optimization inherent in ORs and thus have degraded adaptability to variations in testing samples. This limitation is severe for registering medical images with large appearance variations, especially since few existing DLRs explicitly take appearance variations into account. In this study, we propose an Appearance Adjustment Network (AAN) to enhance the adaptability of DLRs to appearance variations. When integrated into a DLR, our AAN provides appearance transformations that reduce the appearance variations during registration. In addition, we propose a loss function constrained by anatomical structures, through which our AAN generates anatomy-preserving transformations. Our AAN has been purposely designed to be readily inserted into a wide range of DLRs and can be trained cooperatively in an unsupervised and end-to-end manner. We evaluated our AAN with three state-of-the-art DLRs on three public 3D brain magnetic resonance imaging (MRI) datasets. The results show that our AAN consistently improved existing DLRs and outperformed state-of-the-art ORs in registration accuracy, while adding only a fractional computational load to existing DLRs.
Deformable image registration achieves fast and accurate alignment between a pair of images and therefore plays an important role in many medical image studies. Current deep-learning (DL)-based image registration approaches directly learn the spatial transformation from one image to another by leveraging a convolutional neural network, requiring ground truth or a similarity metric. Nevertheless, these methods only use a global similarity energy function to evaluate the similarity of a pair of images, which ignores the similarity of regions of interest (ROIs) within the images. Moreover, DL-based methods usually estimate the global spatial transformation between images directly, paying no attention to the region-level spatial transformations of ROIs within the images. In this paper, we present a novel dual-stream transformation network with a region consistency constraint, which maximizes the similarity of ROIs within a pair of images and estimates global and region-level spatial transformations simultaneously. Experiments on four public 3D MRI datasets show that the proposed method achieves the best registration performance in both accuracy and generalization compared with other state-of-the-art methods.
Deformable image registration, i.e., the task of aligning multiple images into one coordinate system by non-linear transformation, serves as an essential preprocessing step for neuroimaging data. Recent research on deformable image registration is mainly focused on improving the registration accuracy using multi-stage alignment methods, where the source image is repeatedly deformed in stages by the same neural network until it is well-aligned with the target image. Conventional methods for multi-stage registration can often blur the source image, as the pixel/voxel values are repeatedly interpolated from the image generated by the previous stage. However, maintaining image quality such as sharpness during image registration is crucial to medical data analysis. In this paper, we study the problem of anti-blur deformable image registration and propose a novel solution, called Anti-Blur Network (ABN), for multi-stage image registration. Specifically, we use a pair of short-term registration and long-term memory networks to learn the nonlinear deformations at each stage, where the short-term registration network learns how to improve the registration accuracy incrementally and the long-term memory network combines all the previous deformations to allow an interpolation to be performed on the raw image directly and preserve image sharpness. Extensive experiments on both natural and medical image datasets demonstrated that ABN can accurately register images while preserving their sharpness. Our code and data can be found at https://github.com/anonymous3214/ABN
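The long-term memory idea, folding all previous deformations into a single field so the raw image is interpolated only once, can be illustrated by composing displacement fields. The 1-D setting and the function names below are simplifying assumptions for illustration; the paper's networks learn dense 3-D fields.

```python
import numpy as np

def compose_displacements(u1, u2):
    """Compose two 1-D displacement fields into one, so that warping the
    raw image once with the composed field matches warping it stage by
    stage: phi = (id + u2) o (id + u1).
    """
    x = np.arange(len(u1), dtype=float)
    # Sample u2 at the positions reached by the first deformation.
    u2_at = np.interp(x + u1, x, u2)
    return u1 + u2_at

def warp(image, u):
    """Linearly interpolate the image at the displaced positions x + u(x)."""
    x = np.arange(len(image), dtype=float)
    return np.interp(x + u, x, image)
```

Stage-by-stage warping interpolates an already-interpolated image (blurring it further at each stage), whereas warping with the composed field touches the raw image exactly once, which is what preserves sharpness.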
Medical image segmentation is a fundamental and critical step in many image-guided clinical approaches. Recent successes of deep-learning-based segmentation methods usually rely on a large amount of labeled data, which is particularly difficult and costly to obtain, especially in the medical imaging domain, where only experts can provide reliable and accurate annotations. Semi-supervised learning has emerged as an appealing strategy and has been widely applied to medical image segmentation tasks to train deep models with limited annotations. In this paper, we present a comprehensive review of recently proposed semi-supervised learning methods and summarize both their technical novelties and empirical results. Furthermore, we analyze and discuss the limitations and several unsolved problems of existing approaches. We hope this review can inspire the research community to explore solutions to this challenge and further promote the development of medical image segmentation.
Myocardial pathology segmentation (MyoPS) can be a prerequisite for the accurate diagnosis and treatment planning of myocardial infarction. However, achieving this segmentation is challenging, mainly due to the inadequate and indistinct information from an image. In this work, we develop an end-to-end deep neural network, referred to as MyoPS-Net, to flexibly combine five-sequence cardiac magnetic resonance (CMR) images for MyoPS. To extract precise and adequate information, we design an effective yet flexible architecture to extract and fuse cross-modal features. This architecture can tackle different numbers of CMR images and complex combinations of modalities, with output branches targeting specific pathologies. To impose anatomical knowledge on the segmentation results, we first propose a module to regularize myocardium consistency and localize the pathologies, and then introduce an inclusiveness loss to utilize relations between myocardial scars and edema. We evaluated the proposed MyoPS-Net on two datasets, i.e., a private one consisting of 50 paired multi-sequence CMR images and a public one from MICCAI2020 MyoPS Challenge. Experimental results showed that MyoPS-Net could achieve state-of-the-art performance in various scenarios. Note that in practical clinics, the subjects may not have full sequences, such as missing LGE CMR or mapping CMR scans. We therefore conducted extensive experiments to investigate the performance of the proposed method in dealing with such complex combinations of different CMR sequences. Results proved the superiority and generalizability of MyoPS-Net, and more importantly, indicated a practical clinical application.
We present VoxelMorph, a fast learning-based framework for deformable, pairwise medical image registration. Traditional registration methods optimize an objective function for each pair of images, which can be time-consuming for large datasets or rich deformation models. In contrast to this approach, and building on recent learning-based methods, we formulate registration as a function that maps an input image pair to a deformation field that aligns these images. We parameterize the function via a convolutional neural network (CNN), and optimize the parameters of the neural network on a set of images. Given a new pair of scans, VoxelMorph rapidly computes a deformation field by directly evaluating the function. In this work, we explore two different training strategies. In the first (unsupervised) setting, we train the model to maximize standard image matching objective functions that are based on the image intensities. In the second setting, we leverage auxiliary segmentations available in the training data. We demonstrate that the unsupervised model's accuracy is comparable to state-of-the-art methods, while operating orders of magnitude faster. We also show that VoxelMorph trained with auxiliary data improves registration accuracy at test time, and evaluate the effect of training set size on registration. Our method promises to speed up medical image analysis and processing pipelines, while facilitating novel directions in learning-based registration and its applications. Our code is freely available at http://voxelmorph.csail.mit.edu.
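The unsupervised setting described above can be sketched as a spatial-transform (warping) step scored by an intensity-matching objective plus a smoothness penalty on the deformation. This is a 1-D numpy sketch rather than the CNN formulation, and the weight `lam` and function names are illustrative assumptions.

```python
import numpy as np

def spatial_transform(moving, flow):
    """Warp a 1-D moving image with a dense displacement field using linear
    interpolation -- the resampling layer at the core of learning-based
    registration, shown here without a deep-learning framework."""
    x = np.arange(len(moving), dtype=float)
    return np.interp(x + flow, x, moving)

def unsupervised_loss(fixed, moving, flow, lam=0.01):
    """Image-matching term (MSE between fixed and warped moving) plus a
    smoothness penalty on the spatial gradient of the flow."""
    warped = spatial_transform(moving, flow)
    similarity = np.mean((fixed - warped) ** 2)
    smoothness = np.mean(np.diff(flow) ** 2)
    return similarity + lam * smoothness
```

In the learning-based formulation, a CNN predicts `flow` from the input pair and this loss is backpropagated through the differentiable warping, so a single trained network amortizes the per-pair optimization of traditional methods.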
Deep-learning-based semi-supervised learning (SSL) methods have achieved strong performance in medical image segmentation, which can alleviate doctors' expensive annotation effort by utilizing a large amount of unlabeled data. Unlike most existing semi-supervised learning methods, adversarial-training-based methods distinguish samples from different sources by learning the data distribution of the segmentation maps, leading the segmenter to generate more accurate predictions. We argue that the current performance restrictions of such approaches stem from problems of feature extraction and learning preference. In this paper, we propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation. Rather than producing a single scalar classification result or a pixel-level confidence map, our proposed discriminator creates patch confidence maps and classifies them at the patch scale. The predictions on unlabeled data learn the pixel structure and context information in each patch to obtain sufficient gradient feedback, which helps the discriminator converge to an optimal state and improves semi-supervised segmentation performance. Furthermore, at the discriminator's input, we supplement semantic information constraints on the images, making it simpler for unlabeled data to fit the expected data distribution. Extensive experiments on the Automated Cardiac Diagnosis Challenge (ACDC) 2017 dataset and the Brain Tumor Segmentation (BraTS) 2019 challenge dataset show that our method outperforms state-of-the-art semi-supervised methods, demonstrating its effectiveness for medical image segmentation.
Medical image registration and segmentation are critical tasks for several clinical procedures. Manual realization of these tasks is time-consuming, and the quality is highly dependent on the level of expertise of the physician. To mitigate this laborious work, automatic tools have been developed, where the majority of solutions are supervised techniques. However, in the medical domain, the strong assumption of having a well-representative ground truth is far from realistic. To overcome this challenge, unsupervised techniques have been investigated. However, their performance is still limited, and they often fail to produce plausible results. In this work, we propose a novel unified unsupervised framework for image registration and segmentation that we call PC-SwinMorph. The core of our framework is two patch-based strategies, and we demonstrate that patch representation is key to the performance gain. We first introduce a patch-based contrastive strategy that enforces locality conditions and richer feature representations. Secondly, we utilize a 3D window/shifted-window multi-head self-attention module as a patch-stitching strategy to eliminate artifacts from the patch splitting. We demonstrate, through a set of numerical and visual results, that our technique outperforms current state-of-the-art unsupervised techniques.
Medical image segmentation is a fundamental and critical step in many clinical approaches. Semi-supervised learning has been widely applied to medical image segmentation tasks since it alleviates the heavy burden of acquiring expert-examined annotations and takes advantage of unlabeled data, which is much easier to obtain. Although consistency learning has been proven effective by enforcing the invariance of predictions under different distributions, existing approaches cannot make full use of region-level shape constraints and boundary-level distance information from unlabeled data. In this paper, we propose a novel uncertainty-guided mutual consistency learning framework that effectively exploits unlabeled data by integrating intra-task consistency learning from up-to-date predictions for self-ensembling with cross-task consistency learning from task-level regularization to exploit geometric shape information. The framework is guided by the estimated segmentation uncertainty of the models to select relatively certain predictions for consistency learning, so as to effectively exploit more reliable information from unlabeled data. We extensively validate our proposed method on two publicly available benchmark datasets: the Left Atrium (LA) segmentation dataset and the Brain Tumor Segmentation (BraTS) dataset. Experimental results demonstrate that our method achieves performance gains by leveraging unlabeled data and outperforms existing semi-supervised segmentation methods.
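The uncertainty-guided selection can be sketched as follows: estimate per-pixel uncertainty as the entropy of averaged stochastic predictions (e.g., from Monte Carlo dropout), then restrict the consistency loss to relatively certain pixels. The threshold value and function names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def uncertainty_mask(prob_samples, threshold=0.5):
    """Per-pixel uncertainty as the entropy of the mean of several
    stochastic forward passes; keep only the relatively certain pixels.

    prob_samples: array of shape (T, C, N) -- T passes, C classes, N pixels.
    """
    mean_prob = np.mean(prob_samples, axis=0)                      # (C, N)
    entropy = -np.sum(mean_prob * np.log(mean_prob + 1e-8), axis=0)  # (N,)
    return entropy < threshold

def masked_consistency_loss(pred_a, pred_b, mask):
    """MSE between two predictions, averaged over certain pixels only."""
    diff = (pred_a - pred_b) ** 2
    return np.sum(diff * mask) / (np.sum(mask) + 1e-8)
```

Masking out high-entropy pixels keeps noisy pseudo-supervision from the unlabeled data out of the consistency term, which is the "reliable information" selection the framework relies on.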
The existence of completely aligned and paired multi-modal neuroimaging data has proved its effectiveness in the diagnosis of brain diseases. However, collecting the full set of well-aligned and paired data is expensive or even impractical, since the practical difficulties may include high cost, long acquisition time, image corruption, and privacy issues. A realistic solution is to explore either unsupervised or semi-supervised learning to synthesize the absent neuroimaging data. In this paper, we are the first to comprehensively approach the cross-modality neuroimage synthesis task from different perspectives, including the level of supervision (especially weakly-supervised and unsupervised), loss functions, evaluation metrics, the range of modality synthesis, datasets (aligned, private, and public), and synthesis-based downstream tasks. To begin with, we highlight several open challenges for cross-modality neuroimage synthesis. Then we summarize the architectures for cross-modality synthesis under various levels of supervision. In addition, we provide an in-depth analysis of how cross-modality neuroimage synthesis can improve the performance of different downstream tasks. Finally, we re-evaluate the open challenges and point out future directions for the remaining ones. All resources are available at https://github.com/M-3LAB/awesome-multimodal-brain-image-systhesis
Medical image segmentation is a relevant task, as it is the first step for several diagnosis processes, and is therefore indispensable in clinical usage. Whilst major success has been reported using supervised techniques, they assume a large and well-representative labeled set. This is a strong assumption in the medical domain, where annotations are expensive, time-consuming, and subject to inherent human bias. To address this problem, unsupervised techniques have been proposed in the literature, yet it remains an open problem due to the difficulty of learning any transformation pattern. In this work, we present a novel optimization model framed in a new CNN-based contrastive registration architecture for unsupervised medical image segmentation. The core of our approach is to exploit image-level registration and feature-level contrastive learning mechanisms to perform registration-based segmentation. Firstly, we propose an architecture to capture the image-to-image transformation pattern via registration for unsupervised medical image segmentation. Secondly, we embed a contrastive learning mechanism in the registration architecture to enhance the discriminative capacity of the network at the feature level. We show that our proposed technique mitigates the major drawbacks of existing unsupervised techniques. We demonstrate, through numerical and visual experiments, that our technique substantially outperforms the current state-of-the-art unsupervised segmentation methods on two major medical image datasets.
Modern deep neural networks struggle to transfer knowledge and generalize across diverse domains when deployed in real-world applications. Domain generalization (DG) has been introduced to learn a universal representation from multiple domains and improve the network's generalization ability on unseen domains. However, previous DG methods only focus on the data-level consistency scheme, without considering the synergistic regularization among different consistency schemes. In this paper, we present a novel Hierarchical Consistency framework for Domain Generalization (HCDG) by synergistically integrating extrinsic consistency and intrinsic consistency. Particularly, for extrinsic consistency, we leverage the knowledge across multiple source domains to enforce data-level consistency. To better enhance such consistency, we design a novel Gaussian-mixing strategy into Fourier-based data augmentation, called DomainUp. For intrinsic consistency, we perform task-level consistency for the same instance under a dual-task scenario. We evaluate the proposed HCDG framework on two medical image segmentation tasks, i.e., optic cup/disc segmentation on fundus images and prostate MRI segmentation. Extensive experimental results demonstrate the effectiveness and versatility of our HCDG framework.
It is valuable to achieve domain adaptation to transfer the learned knowledge from a labeled CT dataset to a target unlabeled MR dataset for abdominal multi-organ segmentation. Meanwhile, it is highly desirable to avoid the high annotation cost of the target dataset and to protect the privacy of the source dataset. Therefore, we propose an effective source-free unsupervised domain adaptation method for cross-modality abdominal multi-organ segmentation without accessing the source dataset. The proposed framework comprises two stages. In the first stage, a feature map statistics loss is used to align the distributions of the source and target features in the top segmentation network, and an entropy minimization loss is used to encourage high-confidence segmentations. The pseudo-labels output by the top segmentation network are used to guide the style compensation network to generate source-like images. The pseudo-labels output by the middle segmentation network are used to supervise the learning of the desired model (the bottom segmentation network). In the second stage, circular learning and pixel-adaptive mask refinement are used to further improve the performance of the desired model. With this approach, we achieved satisfactory performance in the segmentation of the liver, right kidney, left kidney, and spleen, with Dice similarity coefficients of 0.884, 0.891, 0.864, and 0.911, respectively. In addition, the proposed approach can be easily extended to the case where target annotated data are available: with only one labeled MR volume, the performance increases from 0.888 to 0.922 in average Dice similarity coefficient, close to that of supervised learning (0.929).
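The two first-stage losses can be sketched in a simplified numpy form: matching the channel-wise statistics of target features to stored source statistics (e.g., the running statistics of a source-trained network), and minimizing the entropy of the predicted class probabilities. This is a minimal sketch under those assumptions; the exact formulation in the paper may differ, and all names here are illustrative.

```python
import numpy as np

def feature_statistics_loss(source_stats, target_feat):
    """Penalize the mismatch between stored source channel statistics and
    the channel-wise mean/variance of target features.

    source_stats: tuple (mu_s, var_s), each of shape (C,).
    target_feat:  array of shape (C, N) -- C channels, N spatial positions.
    """
    mu_s, var_s = source_stats
    mu_t = target_feat.mean(axis=1)
    var_t = target_feat.var(axis=1)
    return np.mean((mu_t - mu_s) ** 2) + np.mean((var_t - var_s) ** 2)

def entropy_minimization_loss(prob):
    """Mean per-pixel entropy of class probabilities, shape (C, N);
    minimizing it pushes predictions toward high confidence."""
    return np.mean(-np.sum(prob * np.log(prob + 1e-8), axis=0))
```

Both terms require only the target images and the frozen source-trained statistics, which is what makes the adaptation source-free.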
Deep learning has been widely used for medical image segmentation, and a large number of papers have been presented recording the success of deep learning in the field. In this paper, we present a comprehensive thematic survey on medical image segmentation using deep learning techniques. This paper makes two original contributions. Firstly, compared to traditional surveys that directly divide the literature on deep learning for medical image segmentation into many groups and introduce the literature in detail for each group, we classify the currently popular literature according to a multi-level structure from coarse to fine. Secondly, this paper focuses on supervised and weakly supervised learning approaches, without including unsupervised approaches, since they have been introduced in many older surveys and are currently not popular. For supervised learning approaches, we analyze the literature in three aspects: the selection of backbone networks, the design of network blocks, and the improvement of loss functions. For weakly supervised learning approaches, we investigate the literature according to data augmentation, transfer learning, and interactive segmentation, separately. Compared to existing surveys, this survey classifies the literature in a different way, is more convenient for readers to understand the relevant rationale, and will guide them toward appropriate improvements to medical image segmentation based on deep learning approaches.
Brain network analysis for traumatic brain injury (TBI) patients is critical for their consciousness level assessment and prognosis evaluation, which requires the segmentation of certain consciousness-related brain regions. However, it is difficult to construct a TBI segmentation model, as manually annotated MR scans of TBI patients are hard to collect. Data augmentation techniques can be applied to alleviate the data scarcity problem. However, conventional data augmentation strategies, such as spatial and intensity transformations, are unable to mimic the deformations and lesions in traumatic brains, which limits the performance of the subsequent segmentation task. To address these issues, we propose a novel medical image inpainting model named TBIGAN to synthesize TBI MR scans with paired brain label maps. The main strength of our TBIGAN method is that it can generate TBI images and corresponding label maps simultaneously, which had not been achieved by previous inpainting methods for medical images. We first generate the inpainted image under the guidance of edge information, following a coarse-to-fine manner, and then use the synthesized intensity image as the prior for label inpainting. Furthermore, we introduce a registration-based template augmentation pipeline to increase the diversity of the synthesized image pairs and enhance the capacity of data augmentation. Experimental results show that the proposed TBIGAN method can produce sufficient synthesized TBI images of high quality with valid label maps, which can significantly improve 2D and 3D traumatic brain segmentation performance compared with the alternatives.
To date, few medical image registration approaches have been comprehensively compared on a wide range of complementary, clinically relevant tasks. This limits the adoption of research advances and prevents fair benchmarking of competing approaches. Many new learning-based methods have been explored within the past five years, but the questions of which optimization, architectural, or metric strategy is ideally suited remain open. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MRI), populations (intra- and inter-patient), and levels of supervision. We established a lower entry barrier for training and validation of 3D registration, which helped us compile results from over 65 individual method submissions by more than 20 unique teams. Our complementary set of metrics, including robustness, accuracy, plausibility, and speed, enables unique insight into the current state of medical image registration. Further analyses of transferability, bias, and the importance of supervision question the superiority of primarily deep-learning-based approaches and open new research directions toward hybrid methods that leverage GPU-accelerated conventional optimization.