State-of-the-art magnetic resonance (MR) image super-resolution (ISR) methods based on convolutional neural networks (CNNs) exploit only limited contextual information because of the restricted spatial coverage of CNNs. Vision Transformers (ViTs) learn better global context, which helps in generating high-quality HR images. We combine local information from CNNs and global information from ViTs for image super-resolution, and output super-resolved images of higher quality than those produced by state-of-the-art methods. We include additional constraints through multiple novel loss functions that preserve structure and texture information from the low-resolution to the high-resolution image.
translated by Google Translate
Fully Convolutional Neural Networks (FCNNs) with contracting and expanding paths have shown prominence for the majority of medical image segmentation applications over the past decade. In FCNNs, the encoder plays an integral role by learning both global and local features and contextual representations which can be utilized for semantic output prediction by the decoder. Despite their success, the locality of convolutional layers in FCNNs limits the capability of learning long-range spatial dependencies. Inspired by the recent success of transformers for Natural Language Processing (NLP) in long-range sequence learning, we reformulate the task of volumetric (3D) medical image segmentation as a sequence-to-sequence prediction problem. We introduce a novel architecture, dubbed UNEt TRansformers (UNETR), that utilizes a transformer as the encoder to learn sequence representations of the input volume and effectively capture the global multi-scale information, while also following the successful "U-shaped" network design for the encoder and decoder. The transformer encoder is directly connected to a decoder via skip connections at different resolutions to compute the final semantic segmentation output. We have validated the performance of our method on the Multi Atlas Labeling Beyond The Cranial Vault (BTCV) dataset for multiorgan segmentation and the Medical Segmentation Decathlon (MSD) dataset for brain tumor and spleen segmentation tasks. Our benchmarks demonstrate new state-of-the-art performance on the BTCV leaderboard. Code: https://monai.io/research/unetr
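The tokenization step this abstract relies on, turning a 3D volume into a patch sequence for the transformer encoder, can be sketched in plain Python. This is an illustrative reconstruction, not UNETR's actual implementation; the function name and nested-list representation are assumptions for clarity.

```python
def volume_to_tokens(volume, patch=2):
    """Split a 3D volume (nested lists, shape D x H x W) into non-overlapping
    patch^3 cubes, flattening each cube into one token. The resulting
    sequence of length (D/P)(H/P)(W/P) is what a transformer encoder consumes."""
    D, H, W = len(volume), len(volume[0]), len(volume[0][0])
    assert D % patch == 0 and H % patch == 0 and W % patch == 0
    tokens = []
    for d0 in range(0, D, patch):
        for h0 in range(0, H, patch):
            for w0 in range(0, W, patch):
                tok = [volume[d][h][w]
                       for d in range(d0, d0 + patch)
                       for h in range(h0, h0 + patch)
                       for w in range(w0, w0 + patch)]
                tokens.append(tok)
    return tokens  # each token has dimension patch**3
```

A real encoder would additionally project each token linearly and add positional embeddings before self-attention.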
Because of the necessity to obtain high-quality images with minimal radiation doses, such as in low-field magnetic resonance imaging (MRI), super-resolution reconstruction in medical imaging has become more popular. However, due to the complexity and high aesthetic requirements of medical imaging, image super-resolution reconstruction remains a difficult challenge. In this paper, we offer a deep learning-based strategy for reconstructing medical images from low resolutions utilizing Transformer and Generative Adversarial Networks (T-GAN). The integrated system can extract more precise texture information and focus more on important locations through global image matching after successfully inserting Transformer into the generative adversarial network for picture reconstruction. Furthermore, we weighted the combination of content loss, adversarial loss, and adversarial feature loss as the final multi-task loss function during the training of our proposed model T-GAN. In comparison to established measures like PSNR and SSIM, our suggested T-GAN achieves optimal performance and recovers more texture features in super-resolution reconstruction of MRI scanned images of the knees and belly.
Retinal optical coherence tomography angiography (OCTA) with high resolution is important for the quantification and analysis of the retinal vasculature. However, the resolution of OCTA images is inversely proportional to the field of view at the same sampling frequency, which is not conducive to clinicians analyzing larger vascular areas. In this paper, we propose a novel Sparse-based domain Adaptation Super-Resolution network (SASR) to reconstruct realistic 6x6 mm2 low-resolution (LR) OCTA images into high-resolution (HR) representations. Specifically, we first perform a simple degradation of the 3x3 mm2 high-resolution (HR) images to obtain synthetic LR images. An efficient registration method is then employed to register the synthetic LR with its corresponding 3x3 mm2 image region within the 6x6 mm2 image to obtain the cropped realistic LR image. We then propose a multi-level super-resolution model for fully-supervised reconstruction of the synthetic data, which guides the reconstruction of the realistic LR images through a generative-adversarial strategy that allows the synthetic and realistic LR images to be unified in the feature domain. Finally, a novel sparse edge-aware loss is designed to dynamically optimize the vessel edge structure. Extensive experiments on two OCTA datasets demonstrate that our method performs better than state-of-the-art super-resolution reconstruction methods. In addition, we also investigate the performance of the reconstruction results on retinal structure segmentation, which further validates the effectiveness of our approach.
Improving the resolution of magnetic resonance (MR) image data is critical to computer-aided diagnosis and brain function analysis. Higher resolution helps capture more detailed content, but typically induces lower signal-to-noise ratio and longer scanning time. To this end, MR image super-resolution has recently become a topic of wide interest. Existing works establish extensive deep models with conventional architectures based on convolutional neural networks (CNNs). In this work, to further advance this research field, we make an early effort to build a Transformer-based MR image super-resolution framework, with careful designs for exploring valuable domain prior knowledge. Specifically, we consider two-fold domain priors, including the high-frequency structure prior and the inter-modality context prior, and establish a novel Transformer architecture, called Cross-modality high-frequency Transformer (Cohf-T), to introduce such priors into super-resolving low-resolution (LR) MR images. Experiments on two datasets indicate that Cohf-T achieves new state-of-the-art performance.
Transformers have dominated the field of natural language processing and have recently made an impact in the area of computer vision. In the field of medical image analysis, transformers have also been successfully applied to full-stack clinical applications, including image synthesis/reconstruction, registration, segmentation, detection, and diagnosis. Our paper aims to promote awareness and application of transformers in the field of medical image analysis. Specifically, we first overview the core concepts of the attention mechanism built into transformers and other basic components. Second, we review various transformer architectures tailored for medical image applications and discuss their limitations. Within this review, we investigate key challenges surrounding the use of transformers in different learning paradigms, improving model efficiency, and their coupling with other techniques. We hope this review can give readers a comprehensive picture of the field of medical image analysis.
Cross-modality magnetic resonance (MR) image synthesis aims to produce missing modalities from existing ones. Currently, several methods based on deep neural networks have been developed using both source- and target-modalities in a supervised learning manner. However, it remains challenging to obtain a large amount of completely paired multi-modal training data, which inhibits the effectiveness of existing methods. In this paper, we propose a novel Self-supervised Learning-based Multi-scale Transformer Network (SLMT-Net) for cross-modality MR image synthesis, consisting of two stages, i.e., a pre-training stage and a fine-tuning stage. During the pre-training stage, we propose an Edge-preserving Masked AutoEncoder (Edge-MAE), which preserves the contextual and edge information by simultaneously conducting the image reconstruction and the edge generation. Besides, a patch-wise loss is proposed to treat the input patches differently regarding their reconstruction difficulty, by measuring the difference between the reconstructed image and the ground-truth. In this case, our Edge-MAE can fully leverage a large amount of unpaired multi-modal data to learn effective feature representations. During the fine-tuning stage, we present a Multi-scale Transformer U-Net (MT-UNet) to synthesize the target-modality images, in which a Dual-scale Selective Fusion (DSF) module is proposed to fully integrate multi-scale features extracted from the encoder of the pre-trained Edge-MAE. Moreover, we use the pre-trained encoder as a feature consistency module to measure the difference between high-level features of the synthesized image and the ground truth one. Experimental results show the effectiveness of the proposed SLMT-Net, and our model can reliably synthesize high-quality images when the training set is partially unpaired. Our code will be publicly available at https://github.com/lyhkevin/SLMT-Net.
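The patch-wise loss idea described above, weighting patches by how hard they are to reconstruct, can be sketched minimally. The difficulty-proportional weighting below is an assumption for illustration; the paper's exact weighting scheme may differ.

```python
def patchwise_loss(recon_patches, target_patches):
    """Per-patch mean squared error, re-weighted so that patches with a
    larger reconstruction error (i.e., higher difficulty) contribute more."""
    errs = []
    for r, t in zip(recon_patches, target_patches):
        errs.append(sum((a - b) ** 2 for a, b in zip(r, t)) / len(r))
    total = sum(errs) or 1.0            # avoid division by zero
    weights = [e / total for e in errs]  # difficulty-proportional weights
    return sum(w * e for w, e in zip(weights, errs))
```

An easy patch (zero error) then contributes nothing, while a hard patch dominates the loss, which is the behavior the abstract motivates.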
Magnetic resonance imaging (MRI) with high resolution (HR) provides more detailed information for accurate diagnosis and quantitative image analysis. Despite significant progress, most existing medical image reconstruction networks have two flaws: 1) All of them are designed on a black-box principle, thus lacking sufficient interpretability, which further limits their practical applications. Interpretable neural network models are of significant interest since they enhance the trustworthiness required in clinical practice when dealing with medical images. 2) Most existing SR reconstruction approaches only use a single contrast or use a simple multi-contrast fusion mechanism, neglecting the complex relationships between different contrasts that are critical for SR improvement. To deal with these issues, in this paper, a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction is proposed. The model-guided image SR reconstruction approach solves a manually designed objective function to reconstruct HR MRI. We show how to unfold the iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix and an explicit multi-contrast relationship matrix into account during end-to-end optimization. Extensive experiments on the multi-contrast IXI dataset and the BraTS 2019 dataset demonstrate the superiority of our proposed model.
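One stage of such a model-guided unfolding network typically alternates a data-fidelity gradient step with a learned prior (denoising) step. The toy sketch below shows the generic pattern only; the observation matrix A, the step size, and the placeholder denoiser are illustrative assumptions, not MGDUN's actual operators.

```python
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def unfolding_step(A, x, y, eta, denoise):
    """One unfolding stage: gradient step on ||Ax - y||^2 followed by a
    prior step (in a deep unfolding network, `denoise` is a learned module)."""
    r = [p - q for p, q in zip(matvec(A, x), y)]   # residual Ax - y
    g = matvec(transpose(A), r)                    # gradient A^T (Ax - y)
    z = [xi - eta * gi for xi, gi in zip(x, g)]
    return denoise(z)
```

Stacking K such stages, each with its own learned denoiser, yields the interpretable end-to-end network the abstract describes.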
We study image super-resolution (SR), which aims to recover realistic textures from a low-resolution (LR) image. Recent progress has been made by taking high-resolution images as references (Ref), so that relevant textures can be transferred to LR images. However, existing SR approaches neglect to use attention mechanisms to transfer high-resolution (HR) textures from Ref images, which limits these approaches in challenging cases. In this paper, we propose a novel Texture Transformer Network for Image Super-Resolution (TTSR), in which the LR and Ref images are formulated as queries and keys in a transformer, respectively. TTSR consists of four closely-related modules optimized for image generation tasks, including a learnable texture extractor by DNN, a relevance embedding module, a hard-attention module for texture transfer, and a soft-attention module for texture synthesis. Such a design encourages joint feature learning across LR and Ref images, in which deep feature correspondences can be discovered by attention, and thus accurate texture features can be transferred. The proposed texture transformer can be further stacked in a cross-scale way, which enables texture recovery from different levels (e.g., from 1× to 4× magnification). Extensive experiments show that TTSR achieves significant improvements over state-of-the-art approaches on both quantitative and qualitative evaluations. The source code can be downloaded at https://github.com/researchmm/TTSR.
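The hard-attention texture transfer described above reduces to an argmax over relevance scores: for each LR query patch, pick the single most relevant Ref patch and carry its texture across, keeping the similarity as a soft-attention confidence. A minimal sketch, with function names and the cosine relevance measure assumed for illustration:

```python
import math

def cosine(u, v):
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def texture_transfer(queries, keys, values):
    """For each LR query patch, hard attention selects the most relevant
    Ref key patch; the transferred value comes with its similarity, which
    a soft-attention module can use to weight the synthesis."""
    out = []
    for q in queries:
        sims = [cosine(q, k) for k in keys]
        j = max(range(len(sims)), key=sims.__getitem__)  # hard-attention index
        out.append((values[j], sims[j]))
    return out
```

In TTSR proper, queries/keys are learned embeddings of the LR and Ref images rather than raw patches, but the transfer logic follows this pattern.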
High Resolution (HR) medical images provide rich anatomical structure details to facilitate early and accurate diagnosis. In MRI, restricted by hardware capacity, scan time, and patient cooperation ability, isotropic 3D HR image acquisition typically requires a long scan time and results in small spatial coverage and low SNR. Recent studies showed that, with deep convolutional neural networks, isotropic HR MR images could be recovered from low-resolution (LR) input via single image super-resolution (SISR) algorithms. However, most existing SISR methods tend to approach a scale-specific projection between LR and HR images, thus these methods can only deal with a fixed up-sampling rate. For achieving different up-sampling rates, multiple SR networks have to be built up respectively, which is very time-consuming and resource-intensive. In this paper, we propose ArSSR, an Arbitrary Scale Super-Resolution approach for recovering 3D HR MR images. In the ArSSR model, the reconstruction of HR images with different up-scaling rates is defined as learning a continuous implicit voxel function from the observed LR images. Then the SR task is converted to representing the implicit voxel function via deep neural networks from a set of paired HR-LR training examples. The ArSSR model consists of an encoder network and a decoder network. Specifically, the convolutional encoder network is to extract feature maps from the LR input images and the fully-connected decoder network is to approximate the implicit voxel function. Due to the continuity of the learned function, a single ArSSR model can achieve arbitrary up-sampling rate reconstruction of HR images from any input LR image after training. Experimental results on three datasets show that the ArSSR model can achieve state-of-the-art SR performance for 3D HR MR image reconstruction while using a single trained model to achieve arbitrary up-sampling scales.
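Why a continuous implicit function gives arbitrary-scale SR can be seen in one dimension: once intensity is a function of a continuous coordinate, any output grid size is just a different set of query points. In the toy sketch below, linear interpolation stands in for the learned MLP decoder; all names are illustrative.

```python
def implicit_from_lr(lr):
    """Toy continuous intensity function built from LR samples by linear
    interpolation, standing in for the decoder MLP that ArSSR would learn."""
    n = len(lr)
    def f(t):                      # t is a normalized coordinate in [0, 1]
        pos = t * (n - 1)
        i = min(int(pos), n - 2)
        frac = pos - i
        return lr[i] * (1 - frac) + lr[i + 1] * frac
    return f

def query_grid(f, size):
    """Evaluate f on a uniform grid of `size` points: any `size` works,
    so a single model serves every up-sampling rate."""
    if size == 1:
        return [f(0.0)]
    return [f(i / (size - 1)) for i in range(size)]
```

The same two LR samples can thus be rendered at 5, 9, or any other number of output points without retraining anything.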
The success of deep learning has set new benchmarks for many medical image analysis tasks. However, deep models often fail to generalize in the presence of distribution shifts between training (source) data and test (target) data. One method commonly employed to counter distribution shifts is domain adaptation: using samples from the target domain to learn to account for shifted distributions. In this work, we propose an unsupervised domain adaptation approach that uses graph neural networks and disentangled semantic and domain-invariant structural features, allowing for better performance across distribution shifts. We propose an extension of swapped autoencoders to obtain more discriminative features. We test the proposed method for classification on two challenging medical image datasets with distribution shifts, multi-center chest X-ray images and histopathology images. Experiments show our method achieves state-of-the-art results compared to other domain adaptation methods.
Real-world image denoising is a practical image restoration problem that aims to obtain clean images from in-the-wild noisy inputs. Recently, the Vision Transformer (ViT) has exhibited a strong ability to capture long-range dependencies, and many researchers have attempted to apply ViT to image denoising tasks. However, a real-world image is an isolated frame, which makes the ViT build intra-patch long-range dependencies that divide the image into patches and disarrange noise patterns and gradient continuity. In this paper, we propose to resolve this issue by using a continuous Wavelet Sliding-Transformer, called DnSwin, which builds frequency correspondences in real-world scenes. Specifically, we first extract bottom features from the noisy input images using a CNN encoder. The key to DnSwin is to separate high-frequency and low-frequency information from the features and build frequency dependencies. To this end, we propose a Wavelet Sliding-Window Transformer, which utilizes the discrete wavelet transform, self-attention, and the inverse discrete wavelet transform to extract deep features. Finally, we reconstruct the deep features into denoised images using a CNN decoder. Both quantitative and qualitative evaluations on real-world denoising benchmarks demonstrate that the proposed DnSwin performs favorably against state-of-the-art methods.
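The frequency separation at the heart of this design can be illustrated with a single-level, unnormalized 1D Haar transform: averages carry the low-frequency content, pairwise differences carry the high-frequency content, and the inverse transform reconstructs the signal exactly. This is a didactic sketch, not DnSwin's actual wavelet basis or implementation.

```python
def haar_dwt(x):
    """Single-level unnormalized Haar DWT: pairwise averages (low band)
    and pairwise half-differences (high band). len(x) must be even."""
    low = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    high = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return low, high

def haar_idwt(low, high):
    """Exact inverse of haar_dwt: interleave sums and differences."""
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]
    return out
```

In DnSwin, attention operates on the separated bands before the inverse transform merges them back, which is why the round-trip exactness shown here matters.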
Over the past decade, convolutional neural networks (ConvNets) have dominated the field of medical image analysis. However, the performance of ConvNets may still be limited by their inability to model long-range spatial relations between voxels in an image. Numerous vision Transformers have recently been proposed to address the shortcomings of ConvNets, demonstrating state-of-the-art performance in many medical imaging applications. Transformers may be a strong candidate for image registration because their self-attention mechanism enables a more precise comprehension of the spatial correspondence between moving and fixed images. In this paper, we present TransMorph, a hybrid Transformer-ConvNet model for volumetric medical image registration. We also introduce three variants of TransMorph, with two diffeomorphic variants ensuring topology-preserving deformations and a Bayesian variant producing well-calibrated registration uncertainty estimates. The proposed models are extensively validated against a variety of existing registration methods and Transformer architectures using volumetric medical images from two applications: inter-patient brain MRI registration and phantom-to-CT registration. Qualitative and quantitative results demonstrate that TransMorph and its variants lead to substantial improvements over baseline methods, demonstrating the effectiveness of Transformers for medical image registration.
Image super-resolution (SR) is one of the vital image processing methods for improving the resolution of an image in the field of computer vision. In the last two decades, significant progress has been made in the field of super-resolution, especially by utilizing deep learning methods. This survey is an effort to provide a detailed review of recent progress in single-image super-resolution from the perspective of deep learning, while also informing about the initial classical methods used for image super-resolution. The survey classifies image SR methods into four categories, i.e., classical methods, supervised learning-based methods, unsupervised learning-based methods, and domain-specific SR methods. We also introduce the problem of SR to provide intuition about image quality metrics, available reference datasets, and SR challenges. Deep learning-based approaches are evaluated using reference datasets. Some of the reviewed state-of-the-art image SR methods include the enhanced deep SR network (EDSR), cycle-in-cycle GAN (CinCGAN), multiscale residual network (MSRN), meta residual dense network (Meta-RDN), recurrent back-projection network (RBPN), second-order attention network (SAN), SR feedback network (SRFBN), and the wavelet-based residual attention network (WRAN). Finally, the survey concludes with future directions and trends in the field of SR and open problems in SR to be addressed by researchers.
Global correlations are widely seen in human anatomical structures due to similarity across tissues and bones. These correlations are reflected in magnetic resonance imaging (MRI) scans as a result of close-range proton density and T1/T2 parameters. Furthermore, to achieve accelerated MRI, k-space data are undersampled, which causes global aliasing artifacts. Convolutional neural network (CNN) models are widely utilized for accelerated MRI reconstruction, but those models are limited in capturing global correlations due to the intrinsic locality of the convolution operation. Self-attention-based transformer models are capable of capturing global correlations among image features; however, the current contributions of transformer models to MRI reconstruction are minute. The existing contributions mostly provide CNN-transformer hybrid solutions and rarely leverage the physics of MRI. In this paper, we propose a physics-based stand-alone (convolution-free) transformer model, titled the Multi-head Cascaded Swin Transformers (McSTRA), for accelerated MRI reconstruction. McSTRA combines several interconnected MRI physics-related concepts with transformer networks: it exploits global MR features via the shifted-window self-attention mechanism; it extracts MR features belonging to different spectral components separately using a multi-head setup; it iterates between intermediate de-aliasing and k-space correction via a cascaded network with data consistency in k-space and intermediate loss computations; furthermore, we propose a novel positional embedding generation mechanism to guide self-attention utilizing the point spread function corresponding to the undersampling mask. Our model significantly outperforms state-of-the-art MRI reconstruction methods both visually and quantitatively, while depicting improved resolution and de-aliasing.
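The k-space data-consistency step mentioned above admits a compact sketch: wherever a k-space location was actually measured, the network's prediction is overwritten by the measurement, so the reconstruction can never contradict the acquired data. The 1D flattened representation and binary mask below are simplifying assumptions for illustration.

```python
def data_consistency(pred_kspace, measured_kspace, mask):
    """Enforce hard data consistency on a flattened complex k-space:
    keep acquired measurements where mask == 1, predictions elsewhere."""
    return [m_k if m else p_k
            for p_k, m_k, m in zip(pred_kspace, measured_kspace, mask)]
```

In a cascaded model such as the one described, this step is applied between network stages (after an FFT of the intermediate image and before the inverse FFT).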
While most present image outpainting conducts horizontal extrapolation, we study the generalized image outpainting problem that extrapolates visual context all around a given image. To this end, we develop a novel transformer-based generative adversarial network called U-Transformer, able to extend image borders with plausible structure and details, even for complicated scenery images. Specifically, we design the generator as an encoder-to-decoder structure embedded with the popular Swin Transformer blocks. As such, our novel framework can better cope with long-range dependencies in images, which are crucial for generalized image outpainting. We additionally propose a U-shaped structure and a multi-view temporal-spatial predictor network to enhance image self-reconstruction as well as unknown-part prediction. We experimentally demonstrate that our proposed method produces visually appealing results for generalized image outpainting compared with state-of-the-art image outpainting approaches.
Transformers have made remarkable progress towards modeling long-range dependencies within the medical image analysis domain. However, current transformer-based models suffer from several disadvantages: (1) existing methods fail to capture the important features of the images due to the naive tokenization scheme; (2) the models suffer from information loss because they only consider single-scale feature representations; and (3) the segmentation label maps generated by the models are not accurate enough without considering rich semantic contexts and anatomical textures. In this work, we present CASTformer, a novel type of adversarial transformers, for 2D medical image segmentation. First, we take advantage of the pyramid structure to construct multi-scale representations and handle multi-scale variations. We then design a novel class-aware transformer module to better learn the discriminative regions of objects with semantic structures. Lastly, we utilize an adversarial training strategy that boosts segmentation accuracy and correspondingly allows a transformer-based discriminator to capture high-level semantically correlated contents and low-level anatomical features. Our experiments demonstrate that CASTformer dramatically outperforms previous state-of-the-art transformer-based approaches on three benchmarks, obtaining 2.54%-5.88% absolute improvements in Dice over previous models. Further qualitative experiments provide a more detailed picture of the model's inner workings, shed light on the challenges in improved transparency, and demonstrate that transfer learning can greatly improve performance and reduce the size of medical image datasets in training, making CASTformer a strong starting point for downstream medical image analysis tasks.
Transformer-based models, capable of learning better global dependencies, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis. Transformers reformat the image into separate patches and realize global communication via the self-attention mechanism. However, positional information between patches is hard to preserve in such 1D sequences, and loss of it can lead to sub-optimal performance when dealing with large amounts of heterogeneous tissues of various sizes in 3D medical image segmentation. Additionally, current methods are not robust and efficient for heavy-duty medical segmentation tasks such as predicting a large number of tissue classes or modeling globally inter-connected tissue structures. Inspired by the nested hierarchical structures in vision transformers, we propose a novel 3D medical image segmentation method (UNesT), employing a simplified and faster-converging transformer encoder design that achieves local communication among spatially adjacent patch sequences by aggregating them hierarchically. We extensively validate our method on multiple challenging datasets, comprising 133 structures in the brain, 14 organs in the abdomen, 4 hierarchical components in the kidney, and inter-connected kidney tumors. We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency. In particular, the model completes the whole-brain segmentation task covering 133 tissue classes in a single network, outperforming the prior state-of-the-art method SLANT27, which ensembles 27 network tiles; our model increases the mean DSC score on the publicly available Colin and CANDI datasets from 0.7264 to 0.7444 and from 0.6968 to 0.7025, respectively.
Computer-aided medical image segmentation has been applied widely in diagnosis and treatment to obtain clinically useful information on the shapes and volumes of target organs and tissues. In the past several years, convolutional neural network (CNN)-based methods (e.g., U-Net) have dominated this area but still suffer from inadequate long-range information capturing. Hence, recent work has presented computer vision Transformer variants for medical image segmentation tasks and obtained promising performances. Such Transformers model long-range dependency by computing pair-wise patch relations. However, they incur prohibitive computational costs, especially on 3D medical images (e.g., CT and MRI). In this paper, we propose a new method called Dilated Transformer, which conducts self-attention alternately in local and global scopes for pair-wise patch relation capturing. Inspired by dilated convolution kernels, we conduct global self-attention in a dilated manner, enlarging receptive fields without increasing the number of patches involved and thus reducing computational costs. Based on this design of the Dilated Transformer, we construct a U-shaped encoder-decoder hierarchical architecture called D-Former for 3D medical image segmentation. Experiments on the Synapse and ACDC datasets show that our D-Former model, trained from scratch, outperforms various competitive CNN-based or Transformer-based segmentation models at a low computational cost, without a time-consuming pre-training process.
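The dilated global self-attention described above can be illustrated by how the patch sequence is grouped: with dilation d, every d-th patch is gathered into the same group, and self-attention runs within each group. Each group spans the whole sequence (global receptive field) while attending over only 1/d of the patches. The helper below is an illustrative sketch, not the paper's code.

```python
def dilated_groups(num_patches, dilation):
    """Partition a 1D patch sequence into `dilation` interleaved groups.
    Self-attention within a group reaches across the full sequence extent
    while each attention computation involves num_patches / dilation tokens."""
    return [list(range(offset, num_patches, dilation))
            for offset in range(dilation)]
```

For example, 8 patches with dilation 2 split into two groups that each cover the sequence end to end, which is the receptive-field enlargement the abstract refers to.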
The spatial resolution of medical images can be improved using super-resolution methods. The Real Enhanced Super-Resolution Generative Adversarial Network (Real-ESRGAN) is one of the recent effective approaches utilized to produce higher-resolution images given lower-resolution input images. In this paper, we apply this method to enhance the spatial resolution of 2D MR images. In our proposed approach, we slightly modify the structure of Real-ESRGAN to train on 2D magnetic resonance images (MRI) from the Brain Tumor Segmentation Challenge (BraTS) 2018 dataset. The obtained results are validated qualitatively and quantitatively by computing SSIM (structural similarity index measure), NRMSE (normalized root mean square error), MAE (mean absolute error), and VIF (visual information fidelity) values.
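Two of the metrics named above are easy to state precisely. A minimal sketch follows; note that the normalization convention for NRMSE varies across papers and libraries, so dividing by the reference's dynamic range here is an assumption, not necessarily the one used in this work.

```python
import math

def mae(x, y):
    """Mean absolute error between two flattened images."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

def nrmse(x, y):
    """Root mean square error, normalized by the reference's dynamic range
    (one common convention; others divide by the mean or the Euclidean norm)."""
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))
    rng = max(y) - min(y) or 1.0
    return rmse / rng
```

SSIM and VIF involve local statistics and are best taken from an established library rather than re-implemented.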