We introduce DeepNAT, a 3D deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we predict not only the center voxel of the patch but also its neighbors, which is formulated as multi-task learning. To address the class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have a high potential for adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future.