We present VoxelMorph, a fast, unsupervised, learning-based algorithm for deformable, pairwise medical image registration. Traditional registration methods optimize an objective function independently for each pair of images, which is time-consuming for large datasets. We instead define registration as a parametric function, implemented as a convolutional neural network (CNN), and optimize its global parameters given a collection of images of interest. Given a new pair of scans, VoxelMorph rapidly computes a deformation field by directly evaluating the function. The model is flexible: its parameters can be optimized with any differentiable objective function. In this work, we propose and extensively evaluate a standard image-matching objective as well as objectives that can exploit auxiliary data, such as anatomical segmentations available only at training time. We demonstrate that the unsupervised model achieves accuracy comparable to state-of-the-art methods while running orders of magnitude faster. We also find that VoxelMorph trained with auxiliary data significantly improves registration accuracy at test time. Our method promises to substantially speed up medical image analysis and processing pipelines, while facilitating novel directions in learning-based registration and its applications. Our code is freely available at voxelmorph.csail.mit.edu.
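As a hedged illustration of the training objective described above (not the paper's exact formulation), the loss can be written as an image-matching term plus a smoothness penalty on the deformation, with an optional overlap term when auxiliary segmentations are available at training time; the weights $\lambda$ and $\gamma$ are generic placeholders:

\[
\mathcal{L}(f, m; \phi) = \mathcal{L}_{\text{sim}}\big(f,\, m \circ \phi\big) + \lambda \sum_{p} \lVert \nabla u(p) \rVert^2 + \gamma\, \mathcal{L}_{\text{seg}}\big(s_f,\, s_m \circ \phi\big),
\]

where $f$ and $m$ are the fixed and moving images, $\phi = \mathrm{Id} + u$ the predicted deformation with displacement $u$, and $s_f$, $s_m$ the optional anatomical segmentations (compared, for example, with a soft Dice loss).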
We present a fast learning-based algorithm for deformable, pairwise 3D medical image registration. Current registration methods optimize an objective function independently for each pair of images, which can be time-consuming for large data. We define registration as a parametric function, and optimize its parameters given a set of images from a collection of interest. Given a new pair of scans, we can quickly compute a registration field by directly evaluating the function using the learned parameters. We model this function using a CNN, and use a spatial transform layer to reconstruct one image from another while imposing smoothness constraints on the registration field. The proposed method does not require supervised information such as ground truth registration fields or anatomical landmarks. We demonstrate registration accuracy comparable to state-of-the-art 3D image registration, while operating orders of magnitude faster in practice. Our method promises to significantly speed up medical image analysis and processing pipelines, while facilitating novel directions in learning-based registration and its applications. Our code is available at https://github.com/balakg/voxelmorph.
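A minimal PyTorch sketch of the kind of training step this abstract describes, under several assumptions: a toy CNN stands in for the U-Net-like architecture, mean squared error is used as the image-matching term, and `warp` plays the role of the spatial transform layer via `grid_sample`. The names `RegNet`, `warp`, and `smoothness`, the layer sizes, and the 0.01 weight are illustrative choices, not the released VoxelMorph code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegNet(nn.Module):
    """Toy stand-in for the U-Net-like CNN: maps a (moving, fixed) pair to a dense 3D flow."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(16, 16, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(16, 3, 3, padding=1),  # three displacement components per voxel
        )

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(image, flow):
    """Spatial transform layer: resample `image` at voxel positions shifted by `flow`.
    `flow` has shape (B, 3, D, H, W), channels ordered (dx, dy, dz) in voxel units."""
    B, _, D, H, W = image.shape
    zz, yy, xx = torch.meshgrid(torch.arange(D), torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xx, yy, zz), dim=-1).float().to(image.device)   # (D, H, W, 3)
    locs = base.unsqueeze(0) + flow.permute(0, 2, 3, 4, 1)              # sampling locations
    size = torch.tensor([W, H, D], dtype=torch.float32, device=image.device)
    locs = 2.0 * locs / (size - 1) - 1.0                                # normalize to [-1, 1]
    return F.grid_sample(image, locs, mode="bilinear", align_corners=True)

def smoothness(flow):
    """Penalize spatial gradients of the flow, encouraging a smooth registration field."""
    dz = flow[:, :, 1:] - flow[:, :, :-1]
    dy = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    dx = flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]
    return (dz ** 2).mean() + (dy ** 2).mean() + (dx ** 2).mean()

# one unsupervised training step on a toy pair of volumes
model = RegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
moving, fixed = torch.rand(1, 1, 32, 32, 32), torch.rand(1, 1, 32, 32, 32)

optimizer.zero_grad()
flow = model(moving, fixed)
moved = warp(moving, flow)
loss = F.mse_loss(moved, fixed) + 0.01 * smoothness(flow)  # 0.01: arbitrary smoothness weight
loss.backward()
optimizer.step()
```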
Deformable registration of clinical scans is a fundamental task for many applications, such as population studies or monitoring long-term disease progression in individual patients. The task is challenging because, in contrast to high-resolution research-quality scans, clinical images are very sparse, missing up to 85% of the slices. In addition, the anatomy in the acquired slices is not consistent across scans because of variations in patient orientation with respect to the scanner. In this work, we introduce Sparse VoxelMorph (SparseVM), which adapts a state-of-the-art learning-based registration method to improve the registration of sparse clinical images. SparseVM is a fast, unsupervised method that weights voxel contributions to the registration according to the confidence in each voxel. This improves registration performance on volumes whose voxels have varying reliability, such as interpolated clinical scans. SparseVM registers 3D scans on a GPU orders of magnitude faster than the best-performing clinical registration methods, while still achieving comparable accuracy. Owing to its short runtimes and accurate behavior, SparseVM enables clinical analyses that were not previously feasible. The code is publicly available at voxelmorph.mit.edu.
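The key idea in the abstract, weighting each voxel's contribution to the registration loss by a confidence value (low on interpolated slices, high on acquired ones), can be sketched as a weighted similarity term. This is an illustration under assumed names and values, not the released SparseVM code.

```python
import torch

def weighted_mse(moved, fixed, confidence):
    """Mean squared error where each voxel is weighted by a confidence map in [0, 1];
    interpolated (less reliable) voxels get small weights and contribute less to the loss."""
    weights = confidence / (confidence.sum() + 1e-8)
    return (weights * (moved - fixed) ** 2).sum()

# toy example: confidence 1.0 on acquired slices, 0.1 on interpolated ones (assumed values)
moved, fixed = torch.rand(1, 1, 16, 64, 64), torch.rand(1, 1, 16, 64, 64)
confidence = torch.full_like(fixed, 0.1)
confidence[:, :, ::4] = 1.0                        # pretend every fourth slice was acquired
print(weighted_mse(moved, fixed, confidence))
```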
We present a segmentation framework based on deep neural networks and introduce two innovations. First, we describe a biophysics-based domain adaptation method. Second, we propose an automatic approach to segment white matter, gray matter, and cerebrospinal fluid in addition to tumor tissue. For the first innovation, we use a domain adaptation framework that combines a novel multi-species biophysical tumor growth model with a generative adversarial model to create realistic synthetic multimodal MR images with known segmentations. For the second innovation, we propose an automatic approach that enriches the available segmentation data by computing segmentations of healthy tissue. These segmentations are obtained via diffeomorphic registration between the BraTS training data and a set of pre-labeled atlases, providing more information for training and reducing the class-imbalance problem. Our overall approach is not specific to any particular neural network and can be combined with existing solutions. We demonstrate improved performance on the BraTS'18 segmentation challenge using a 2D U-Net. Compared with a state-of-the-art GAN model used to create synthetic training data, our biophysics-based domain adaptation achieves better results.
Classical deformable registration techniques achieve impressive results and offer a rigorous theoretical treatment, but are computationally intensive since they solve an optimization problem for each image pair. Recently, learning-based methods have enabled fast registration by learning spatial deformation functions. However, these approaches use restricted deformation models, require supervised labels, or do not guarantee a diffeomorphic (topology-preserving) registration. Furthermore, learning-based registration tools have not been derived from a probabilistic framework that can offer uncertainty estimates. In this paper, we build a connection between classical and learning-based methods. We present a probabilistic generative model and derive an unsupervised learning-based inference algorithm that draws on insights from classical registration methods and makes use of recent developments in convolutional neural networks (CNNs). We demonstrate our method on a 3D brain registration task for both images and anatomical surfaces, and provide an extensive empirical analysis of the algorithm. Our principled approach achieves state-of-the-art accuracy and very fast runtimes, while providing diffeomorphic guarantees. Our implementation is available online at http://voxelmorph.csail.mit.edu.
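One standard way to obtain the diffeomorphic guarantee mentioned above is to predict a stationary velocity field and integrate it by scaling and squaring; the self-contained sketch below follows that general recipe (with an assumed `compose` helper and voxel-unit displacement fields), not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def compose(flow, disp):
    """Return flow(x) + disp(x + flow(x)) for displacement fields of shape (B, 3, D, H, W)
    with channels ordered (dx, dy, dz) in voxel units."""
    B, _, D, H, W = disp.shape
    zz, yy, xx = torch.meshgrid(torch.arange(D), torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xx, yy, zz), dim=-1).float()
    locs = base.unsqueeze(0) + flow.permute(0, 2, 3, 4, 1)
    size = torch.tensor([W, H, D], dtype=torch.float32)
    locs = 2.0 * locs / (size - 1) - 1.0                    # normalize for grid_sample
    return flow + F.grid_sample(disp, locs, mode="bilinear", align_corners=True)

def integrate_velocity(velocity, steps=7):
    """Scaling and squaring: approximate phi = exp(v) by repeatedly composing a scaled field."""
    disp = velocity / (2 ** steps)
    for _ in range(steps):
        disp = compose(disp, disp)                          # phi_{2t} = phi_t o phi_t
    return disp

velocity = 0.5 * torch.randn(1, 3, 16, 16, 16)              # toy stationary velocity field
phi = integrate_velocity(velocity)
print(phi.shape)                                            # (1, 3, 16, 16, 16) displacement field
```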
A key limitation of deep convolutional neural network (DCNN)-based image segmentation methods is the lack of generalizability. Manually traced training images are typically required when segmenting organs in a new imaging modality or from a distinct disease cohort. The manual effort could be reduced if manually traced images from one imaging modality (e.g., MRI) could be used to train a segmentation network for another modality (e.g., CT). In this paper, we propose an end-to-end synthetic segmentation network (SynSeg-Net) that trains a segmentation network for a target imaging modality without requiring manual labels in that modality. SynSeg-Net is trained using (1) unpaired intensity images from the source and target modalities, and (2) manual labels from the source modality only. SynSeg-Net is enabled by recent advances in cycle-consistent generative adversarial networks (CycleGAN) and DCNNs. We evaluate the performance of SynSeg-Net in two experiments: (1) MRI-to-CT synthetic splenomegaly segmentation for abdominal images, and (2) CT-to-MRI synthetic total intracranial volume (TICV) segmentation for brain images. The proposed end-to-end approach achieved superior performance to two-stage approaches. Moreover, in certain scenarios SynSeg-Net achieved performance comparable to traditional segmentation networks trained with target-modality labels. The source code of SynSeg-Net is publicly available (https://github.com/MASILab/SynSeg-Net).
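A hedged summary of the training signal implied by the abstract, combining unpaired image synthesis with source-only supervision; the weights $\lambda_{\text{cyc}}$ and $\lambda_{\text{seg}}$ are generic placeholders rather than values from the paper:

\[
\mathcal{L} = \mathcal{L}_{\text{GAN}}(G_{S\to T}, D_T) + \mathcal{L}_{\text{GAN}}(G_{T\to S}, D_S) + \lambda_{\text{cyc}}\,\mathcal{L}_{\text{cyc}}(G_{S\to T}, G_{T\to S}) + \lambda_{\text{seg}}\,\mathcal{L}_{\text{seg}}\big(N(G_{S\to T}(x_S)),\, y_S\big),
\]

where $G_{S\to T}$ and $G_{T\to S}$ are the generators between source and target modalities, $D_S$ and $D_T$ the discriminators, $N$ the segmentation network applied to synthesized target-modality images, and $(x_S, y_S)$ a source image with its manual label.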
We present a new unsupervised learning algorithm, "FAIM", for 3D medical image registration. Based on a convolutional neural network, FAIM learns from a set of images without requiring ground-truth information such as landmarks or dense registrations. Once trained, FAIM can register a new pair of images in under a second with competitive quality. We compared FAIM with a similar method, VoxelMorph, as well as the diffeomorphic method uTIlzReg GeoShoot, on the LPBA40 and Mindboggle101 datasets. For pairwise registration, FAIM achieves results comparable to or better than the other methods. We briefly investigate the effect of different regularization choices on the predicted deformations. Finally, we demonstrate an application to the fast construction of templates and atlases.
Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.
Accurate automatic segmentation of brain anatomy from $T_1$-weighted ($T_1$-w) magnetic resonance images (MRI) has been a major bottleneck in neuroimaging pipelines, with state-of-the-art results obtained by unsupervised intensity-modeling-based methods and multi-atlas registration and label fusion. With the advent of powerful supervised learning algorithms based on convolutional neural networks (CNNs), it is now possible to produce high-quality brain segmentations in seconds. However, the highly supervised nature of these methods makes it difficult to generalize them to data that differ from what they were trained on. Modern neuroimaging studies are necessarily multi-center initiatives with a variety of acquisition protocols. Despite stringent protocol harmonization practices, it is impossible to standardize the full gamut of MRI imaging parameters (scanner, field strength, receive coils, etc.) that affect image contrast. In this paper, we propose a CNN-based segmentation algorithm that, in addition to being highly accurate and fast, is also resilient to variation in the input $T_1$-w acquisition. Our approach relies on building an approximate forward model of the $T_1$-w pulse sequence that produced a given test image. We use the forward model to augment the training data with test-data-specific training examples. These augmented data can be used to update and/or build a more robust segmentation model that is better tuned to the imaging properties of the test data. Our method generates highly accurate, state-of-the-art segmentation results (overall Dice overlap = 0.94) within seconds and is consistent across a wide range of protocols.
Traditional deformable registration techniques achieve impressive results and offer a rigorous theoretical treatment, but are computationally intensive since they solve an optimization problem for each image pair. Recently, learning-based methods have enabled fast registration by learning spatial deformation functions. However, these approaches use restricted deformation models, require supervised labels, or do not guarantee a diffeomorphic (topology-preserving) registration. Furthermore, learning-based registration tools have not been derived from a probabilistic framework that can offer uncertainty estimates. In this paper, we present a probabilistic generative model and derive an unsupervised learning-based inference algorithm that makes use of recent developments in convolutional neural networks (CNNs). We demonstrate our method on a 3D brain registration task and provide an empirical analysis of the algorithm. Our approach results in state-of-the-art accuracy and very fast runtimes, while providing diffeomorphic guarantees and uncertainty estimates. Our implementation is available online at http://voxelmorph.csail.mit.edu.
We propose a cross-modality generative framework that learns to generate translated modalities from a given modality in MR images without re-acquisition. Our proposed method performs NeuroImage-to-NeuroImage translation (abbreviated N2N) with a conditional generative adversarial network (cGAN) deep learning model. Our framework jointly exploits low-level features (pixel information) and high-level representations (e.g., brain tumors, brain gray-matter structures) across modalities, which is important for resolving the challenging complexity of brain structures. The framework can serve as an auxiliary method in clinical diagnosis and has great application potential. Building on the proposed framework, we first present a cross-modality registration method that exploits cross-modality information from the translated modality by fusing deformation fields. Second, we propose a method for MRI segmentation, translated multichannel segmentation (TMS), in which the given modality together with the translated modality is segmented in a multichannel manner by a fully convolutional network (FCN). Both methods successfully exploit cross-modality information to improve performance without adding any extra data. Experiments demonstrate that our framework advances the state of the art on five brain MRI datasets. We also observe encouraging results for cross-modality registration and segmentation on several widely adopted brain datasets. Overall, our work can serve as an auxiliary method in clinical diagnosis and be applied to various tasks in the medical field. Keywords: image-to-image translation, cross-modality, registration, segmentation, brain MRI
In recent years, a variety of segmentation methods have been proposed for automatic delineation of the fetal and neonatal brain MRI. These methods aim to define regions of interest of different granularity: brain, tissue types or more localised structures. Different methodologies have been applied for this segmentation task and can be classified into unsupervised, parametric, classification, atlas fusion and deformable models. Brain atlases are commonly utilised as training data in the segmentation process. Challenges relating to the image acquisition, the rapid brain development as well as the limited availability of imaging data however hinder this segmentation task. In this paper, we review methods adopted for the perinatal brain and categorise them according to the target population, structures segmented and methodology. We outline different methods proposed in the literature and discuss their major contributions. Different approaches for the evaluation of the segmentation accuracy and benchmarks used for the segmentation quality are presented. We conclude this review with a discussion on shortcomings in the perinatal domain and possible future directions.
The human thalamus is a brain structure that comprises numerous, highly specific nuclei. Since these nuclei are known to have different functions and to be connected to different areas of the cerebral cortex, it is of great interest for the neuroimaging community to study their volume, shape, and connectivity in vivo with MRI. In this study, we present a probabilistic atlas of the thalamic nuclei built using ex vivo brain MRI scans and histological data, as well as the application of the atlas to in vivo MRI segmentation. The atlas was built using manual delineation of 26 thalamic nuclei on serial histology of 12 whole thalami from 6 autopsy samples, combined with manual segmentations of the whole thalamus and surrounding structures (caudate, putamen, hippocampus, etc.) in in vivo brain MR data from 39 subjects. The 3D structure of the histological data and the corresponding manual segmentations was recovered using the ex vivo MRI as reference frame, with stacks of blockface photographs acquired during sectioning as intermediate targets. The atlas, which is encoded as an adaptive tetrahedral mesh, shows good agreement with previous histological studies of the thalamus in terms of the volumes of representative nuclei. When applied to the segmentation of in vivo scans using Bayesian inference, the atlas shows excellent test-retest reliability, robustness to changes in input MRI contrast, and the ability to detect differential thalamic effects in subjects with Alzheimer's disease. The probabilistic atlas and companion segmentation tool are publicly released as part of the neuroimaging package FreeSurfer.
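In simplified form (and with notation of my choosing), the Bayesian segmentation step described above amounts to maximizing the posterior of the label map $L$ given the image $I$ and the mesh-encoded atlas $A$:

\[
\hat{L} = \arg\max_{L} \; p(L \mid I, A) \propto \arg\max_{L} \; p(I \mid L)\, p(L \mid A),
\]

where $p(L \mid A)$ is the label prior provided by the deformed tetrahedral mesh and $p(I \mid L)$ is an intensity likelihood (e.g., a per-label Gaussian model) estimated from the scan itself, which is what makes this family of methods robust to changes in MRI contrast.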
Deformable image registration is a fundamental task in medical image analysis, aiming to establish a dense and non-linear correspondence between a pair of images. Previous deep-learning studies typically employ supervised neural networks to directly learn the spatial transformation from one image to another, requiring task-specific ground-truth registrations for model training. Since it is difficult to collect precise ground-truth registrations, implementing these supervised methods is practically challenging. Although several unsupervised networks have been developed recently, they usually ignore the inherent inverse consistency of the transformations between a pair of images (essential for a diffeomorphic mapping). Moreover, existing methods typically encourage the estimated transformation to be locally smooth only via a smoothness constraint, which cannot fully avoid folding in the resulting transformation. To this end, we propose an Inverse-Consistent deep Network (ICNet) for unsupervised deformable image registration. Specifically, we develop an inverse-consistent constraint that encourages a pair of images to be symmetrically deformed toward one another until the two warped images match. In addition to the conventional smoothness constraint, we also propose an anti-folding constraint to further avoid folding in the transformation. The proposed method requires no supervision, while encouraging the diffeomorphic property of the transformation through the proposed inverse-consistent and anti-folding constraints. We evaluate our method on T1-weighted brain magnetic resonance imaging (MRI) scans for tissue segmentation and anatomical landmark detection, and the results demonstrate the superior performance of ICNet over several state-of-the-art deformable image registration methods. Our code will be publicly released.
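To make the two constraints concrete: the inverse-consistent term penalizes the composition of the forward and backward displacement fields for deviating from the identity, and the anti-folding term penalizes voxels whose local Jacobian determinant is not positive. Below is a sketch of the latter under my own assumptions (finite-difference Jacobian, voxel-unit displacements with channels ordered (dx, dy, dz)); it is not the authors' code.

```python
import torch
import torch.nn.functional as F

def anti_folding_loss(u):
    """Penalize voxels where the Jacobian determinant of phi(x) = x + u(x) is not positive.
    `u` has shape (B, 3, D, H, W) with channels ordered (dx, dy, dz) in voxel units."""
    # forward finite differences of the displacement, cropped to a common shape
    du_dz = u[:, :, 1:, :-1, :-1] - u[:, :, :-1, :-1, :-1]
    du_dy = u[:, :, :-1, 1:, :-1] - u[:, :, :-1, :-1, :-1]
    du_dx = u[:, :, :-1, :-1, 1:] - u[:, :, :-1, :-1, :-1]
    # Jacobian of phi = I + grad(u), assembled per voxel
    J = torch.stack([
        torch.stack([du_dx[:, 0] + 1, du_dy[:, 0],     du_dz[:, 0]],     dim=-1),
        torch.stack([du_dx[:, 1],     du_dy[:, 1] + 1, du_dz[:, 1]],     dim=-1),
        torch.stack([du_dx[:, 2],     du_dy[:, 2],     du_dz[:, 2] + 1], dim=-1),
    ], dim=-2)                                   # (B, D-1, H-1, W-1, 3, 3)
    det = torch.det(J)
    return F.relu(-det).mean()                   # zero once every local determinant is positive

u = 0.5 * torch.randn(1, 3, 16, 16, 16)          # toy displacement field
print(anti_folding_loss(u))
```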
Training deep fully convolutional neural networks (F-CNNs) for semantic image segmentation requires access to abundant labeled data. While large datasets of unlabeled image data are available in medical applications, access to manually labeled data is very limited. We propose to automatically create auxiliary labels on initially unlabeled data with existing tools and to use them for pre-training. For the subsequent fine-tuning of the network with manually labeled data, we introduce error corrective boosting (ECB), which emphasizes parameter updates on classes with lower accuracy. Furthermore, we introduce SkipDeconv-Net (SD-Net), a new F-CNN architecture for brain segmentation that combines skip connections with the unpooling strategy for upsampling. The SD-Net addresses challenges of severe class imbalance and errors along boundaries. With application to whole-brain MRI T1 scan segmentation, we generate auxiliary labels on a large dataset with FreeSurfer and fine-tune on two datasets with manual annotations. Our results show that the inclusion of auxiliary labels and ECB yields significant improvements. SD-Net segments a 3D scan in 7 secs in comparison to 30 hours for the closest multi-atlas segmentation method, while reaching similar performance. It also outperforms the latest state-of-the-art F-CNN models.
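To illustrate the intent of error corrective boosting, here is one plausible scheme in which classes with lower validation accuracy receive larger loss weights during fine-tuning; the exact formula in the paper differs, so treat this purely as an assumption-laden sketch.

```python
import numpy as np

def boosting_weights(val_accuracy, floor=1.0):
    """Map per-class validation accuracies (in [0, 1]) to loss weights that emphasize poorly
    segmented classes; an illustrative formula, not the exact ECB definition from the paper."""
    acc = np.asarray(val_accuracy, dtype=float)
    weights = floor + (acc.max() - acc)            # the worst class gets the largest boost
    return weights / weights.mean()                # keep the average weight at 1

# e.g. background is easy, small subcortical structures are hard (made-up numbers)
print(boosting_weights([0.99, 0.95, 0.70, 0.62]))
```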
We introduce DeepNAT, a 3D Deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we do not only predict the center voxel of the patch but also neighbors, which is formulated as multi-task learning. To address a class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have a high potential for the adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future.
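The coordinate-augmentation idea (patches lack spatial context, so location information is appended as extra input channels) can be sketched as below, with plain normalized voxel coordinates standing in for the paper's Laplace-Beltrami parameterization, which would require the actual brain volume; the helper name and patch sizes are illustrative.

```python
import torch

def append_coordinate_channels(patch, corner, volume_shape):
    """Append three channels holding each voxel's normalized (z, y, x) position in the full
    volume to a patch of shape (C, d, h, w); `corner` is the patch origin in the volume."""
    _, d, h, w = patch.shape
    D, H, W = volume_shape
    zs = torch.arange(corner[0], corner[0] + d).float() / (D - 1)
    ys = torch.arange(corner[1], corner[1] + h).float() / (H - 1)
    xs = torch.arange(corner[2], corner[2] + w).float() / (W - 1)
    zz, yy, xx = torch.meshgrid(zs, ys, xs, indexing="ij")
    return torch.cat([patch, zz[None], yy[None], xx[None]], dim=0)

patch = torch.rand(1, 16, 16, 16)                                  # toy intensity patch
augmented = append_coordinate_channels(patch, (10, 20, 30), (128, 128, 128))
print(augmented.shape)                                             # torch.Size([4, 16, 16, 16])
```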
Whole brain segmentation from structural magnetic resonance imaging (MRI) is a prerequisite for most morphological analyses, but is computationally intense and can therefore delay the availability of image markers after scan acquisition. We introduce QuickNAT, a fully convolutional, densely connected neural network that segments an MRI brain scan in 20 seconds. To enable training of the complex network with millions of learnable parameters using limited annotated data, we propose to first pre-train on auxiliary labels created from existing segmentation software. Subsequently, the pre-trained model is fine-tuned on manual labels to rectify errors in auxiliary labels. With this learning strategy, we are able to use large neuroimaging repositories without manual annotations for training. In an extensive set of evaluations on eight datasets that cover a wide age range, pathology, and different scanners, we demonstrate that QuickNAT achieves superior segmentation accuracy and reliability in comparison to state-of-the-art methods, while being orders of magnitude faster. The speed-up facilitates processing of large data repositories and supports translation of imaging biomarkers by making them available within seconds for fast clinical decision making.
Splenomegaly, the finding of an abnormally enlarged spleen, is a non-invasive clinical biomarker for liver and spleen disease. Automatic segmentation methods are essential for efficiently quantifying splenomegaly from clinically acquired abdominal magnetic resonance imaging (MRI) scans. However, the task is challenging due to (1) large anatomical and spatial variations of splenomegaly, (2) large inter- and intra-scan intensity variations across multi-modal MRI, and (3) the limited number of labeled splenomegaly scans. In this paper, we propose the Splenomegaly Segmentation Network (SS-Net) to introduce deep convolutional neural network (DCNN) approaches to multi-modal MRI splenomegaly segmentation. Large convolutional kernel layers are used to account for the spatial and anatomical variations, while a conditional generative adversarial network (GAN) is employed to leverage the segmentation performance of SS-Net in an end-to-end manner. A clinically acquired cohort containing both T1-weighted (T1w) and T2-weighted (T2w) MRI splenomegaly scans was used to train and evaluate the performance of multi-atlas segmentation (MAS), 2D DCNN networks, and a 3D DCNN network. From the experimental results, the DCNN methods achieved superior performance to the state-of-the-art MAS method. The proposed SS-Net achieved the highest median and mean Dice scores among the investigated baseline DCNN methods.
3D medical image registration is of great clinical importance. However, supervised learning methods require a large amount of accurately annotated corresponding control points (or deformations), and such ground truth is hard to obtain for 3D medical images. Unsupervised learning methods ease the burden of manual annotation by exploiting unlabeled data without supervision. In this paper, we propose a new unsupervised learning method that uses convolutional neural networks in an end-to-end framework, the Volume Tweening Network (VTN), to register 3D medical images. Three technical components complete our unsupervised learning system for end-to-end 3D medical image registration: (1) we cascade the registration subnetworks; (2) we integrate affine registration into the network; and (3) we incorporate an additional invertibility loss into the training process. Experimental results show that our algorithm is 880x faster (or 3.3x faster without GPU acceleration) than traditional optimization-based methods and achieves state-of-the-art performance in medical image registration.
This study investigates a 3D and fully convolutional neural network (CNN) for subcortical brain structure segmentation in MRI. 3D CNN architectures have been generally avoided due to their computational and memory requirements during inference. We address the problem via small kernels, allowing deeper architectures. We further model both local and global context by embedding intermediate-layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine-grained information directly in the segmentation process. Our model is efficiently trained end-to-end on a graphics processing unit (GPU), in a single stage, exploiting the dense inference capabilities of fully convolutional networks. We performed comprehensive experiments over two publicly available datasets. First, we demonstrate a state-of-the-art performance on the IBSR dataset. Then, we report a large-scale multi-site evaluation over 1112 unregistered subject datasets acquired from 17 different sites (ABIDE dataset), with ages ranging from 7 to 64 years, showing that our method is robust to various acquisition protocols, demographics and clinical factors. Our method yielded segmentations that are highly consistent with a standard atlas-based approach, while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps. This makes it convenient for massive multi-site neuroanatomical imaging studies. To the best of our knowledge, our work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data.
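A toy PyTorch rendering of the two architectural points in the abstract: small 3x3x3 kernels keep a fully 3D network tractable, and intermediate-layer outputs are concatenated before the classifier so the prediction mixes features from several depths. Channel counts, depth, and the class count are arbitrary choices, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SmallKernel3DFCN(nn.Module):
    """Fully convolutional 3D network built from 3x3x3 kernels; intermediate feature maps are
    concatenated before the classifier so the prediction embeds several receptive-field sizes."""
    def __init__(self, in_channels=1, num_classes=15):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv3d(in_channels, 16, 3, padding=1), nn.PReLU())
        self.block2 = nn.Sequential(nn.Conv3d(16, 32, 3, padding=1), nn.PReLU())
        self.block3 = nn.Sequential(nn.Conv3d(32, 64, 3, padding=1), nn.PReLU())
        self.classifier = nn.Conv3d(16 + 32 + 64, num_classes, 1)  # 1x1x1 conv over fused features

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        fused = torch.cat([f1, f2, f3], dim=1)      # embed intermediate outputs in the prediction
        return self.classifier(fused)               # per-voxel class scores

logits = SmallKernel3DFCN()(torch.rand(1, 1, 32, 32, 32))
print(logits.shape)                                 # torch.Size([1, 15, 32, 32, 32])
```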