Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth, and hence a lower axial resolution. Traditionally, image-processing-based techniques have been utilized to reconstruct subsampled OCT data; more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques to reconstruct lost image features using a pixel-to-pixel approach with an altered super-resolution generative adversarial network (SRGAN) architecture, with the goal of better supporting clinicians' decision-making and improving patient outcomes.
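A minimal numpy sketch of the Gaussian spectral windowing described above; the function name, the FWHM-based width parameterization, and the synthetic two-reflector interferogram are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def simulate_low_axial_resolution(spectrum, bandwidth_fraction=0.4):
    """Narrow the effective spectral bandwidth of an OCT interferogram with a
    Gaussian window; the inverse FFT then yields an A-scan with reduced axial
    resolution. `bandwidth_fraction` is the assumed FWHM relative to the full bandwidth."""
    n = spectrum.shape[-1]
    k = np.arange(n) - n / 2
    sigma = bandwidth_fraction * n / 2.355        # convert FWHM to standard deviation
    window = np.exp(-0.5 * (k / sigma) ** 2)
    return np.abs(np.fft.ifft(spectrum * window, axis=-1))

# Example: a synthetic interferogram containing two reflectors
k_axis = np.linspace(-1, 1, 2048)
spectrum = np.cos(2 * np.pi * 60 * k_axis) + 0.5 * np.cos(2 * np.pi * 180 * k_axis)
ascan_lr = simulate_low_axial_resolution(spectrum)
```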
Low-field (LF) MRI scanners have the power to revolutionize medical imaging by providing a portable and cheaper alternative to high-field MRI scanners. However, such scanners are usually significantly noisier and of lower quality than their high-field counterparts. The aim of this paper is to improve the SNR and overall image quality of low-field MRI scans to improve diagnostic capability. To address this issue, we propose a super-resolution algorithm based on a Nested U-Net neural network architecture that outperforms previously suggested deep learning methods, with an average PSNR of 78.83 and SSIM of 0.9551. We tested our network on artificially noised and downsampled synthetic data from a major T1-weighted MRI dataset, the T1-mix dataset. One board-certified radiologist scored 25 images on a Likert scale (1-5), assessing overall image quality, anatomical structure, and diagnostic confidence across our architecture and other published works (SR DenseNet, Generator Block, SRCNN, etc.). We also introduce a new type of loss function called the natural log mean squared error (NLMSE). In conclusion, we present a more accurate deep learning method for single image super-resolution applied to synthetic low-field MRI via a Nested U-Net architecture.
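The abstract names NLMSE but does not define it; one plausible reading, sketched below purely as an assumption, is the natural logarithm of the mean squared error:

```python
import torch

def nlmse_loss(pred, target, eps=1e-8):
    """Assumed form of the natural log mean squared error (NLMSE): the natural
    logarithm of the MSE. `eps` guards against log(0) for a perfect reconstruction."""
    mse = torch.mean((pred - target) ** 2)
    return torch.log(mse + eps)
```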
High-resolution retinal optical coherence tomography angiography (OCTA) is important for the quantification and analysis of the retinal vasculature. However, at the same sampling frequency the resolution of OCTA images is inversely proportional to the field of view, which makes it difficult for clinicians to analyze larger vascular areas. In this paper, we propose a novel sparse-based domain adaptation super-resolution network (SASR) to reconstruct realistic 6x6 mm2 low-resolution (LR) OCTA images into high-resolution (HR) representations. Specifically, we first perform a simple degradation of the 3x3 mm2 high-resolution (HR) images to obtain synthetic LR images, and then employ an efficient registration method to register the synthetic LR images with their corresponding 3x3 mm2 regions within the 6x6 mm2 images, yielding cropped realistic LR images. We then propose a multi-level super-resolution model that is fully supervised on the synthetic data and guides the reconstruction of the realistic LR images through a generative-adversarial strategy, which allows the synthetic and realistic LR images to be unified in the feature domain. Finally, a novel sparse edge-aware loss is designed to dynamically optimize the vessel edge structure. Extensive experiments on two OCTA datasets show that our method outperforms state-of-the-art super-resolution reconstruction methods. In addition, we investigate the effect of the reconstructed results on retinal structure segmentation, which further validates the effectiveness of our approach.
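A hedged sketch of the "simple degradation" step that turns 3x3 mm2 HR images into synthetic LR images; the Gaussian blur width and subsampling factor are placeholders, since the paper's exact degradation is not specified here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_to_synthetic_lr(hr_image, factor=2, blur_sigma=1.0):
    """Create a synthetic LR OCTA image from a 3x3 mm2 HR scan by Gaussian blurring
    followed by subsampling; the blur width and factor are placeholder assumptions."""
    blurred = gaussian_filter(hr_image.astype(np.float32), sigma=blur_sigma)
    return blurred[::factor, ::factor]
```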
Purpose: Parallel imaging accelerates magnetic resonance imaging (MRI) data acquisition by acquiring additional sensitivity information with an array of receiver coils, thereby reducing the number of phase-encoding steps. Compressed sensing magnetic resonance imaging (CS-MRI) has gained popularity in the medical imaging field because it requires less data than parallel imaging. Both parallel imaging and compressed sensing (CS) speed up conventional MRI acquisition by minimizing the amount of data captured in k-space. Since acquisition time is inversely proportional to the number of samples, reconstructing an image from reduced k-space samples leads to faster acquisition but with aliasing artifacts. This paper proposes a novel generative adversarial network (GAN), RECGAN-GR, supervised with multi-modal losses to de-alias the reconstructed images. Methods: In contrast to existing GAN networks, our proposed method introduces a novel generator network integrated with dual-domain loss functions, including weighted magnitude and phase loss functions, along with a parallel-imaging-based loss, namely the GRAPPA consistency loss. A k-space correction block is proposed to restrain the GAN network from generating unnecessary data, which speeds up the convergence of the reconstruction process. Results: Comprehensive results show that the proposed RECGAN-GR achieves a 4 dB improvement in PSNR over GAN-based methods and a 2 dB improvement over conventional state-of-the-art CNN methods available in the literature. Conclusion and significance: The proposed work contributes to a significant improvement in image quality for low-retained data, enabling 5x or 10x faster acquisition.
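A toy numpy illustration of why retaining fewer k-space lines shortens acquisition but produces aliasing in the zero-filled reconstruction; the regular line-skipping mask is an assumption and not the sampling pattern used in the paper:

```python
import numpy as np

def zero_filled_recon(image, acceleration=4):
    """Keep every `acceleration`-th phase-encoding line of k-space, zero the rest,
    and reconstruct with the inverse FFT; the aliasing in the result is what the
    GAN is trained to remove. The regular mask is a toy assumption."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    mask = np.zeros_like(kspace)
    mask[::acceleration, :] = 1.0                 # retained phase-encoding lines
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
```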
Because of the need to obtain high-quality images with minimal radiation doses, as in low-field magnetic resonance imaging (MRI), super-resolution reconstruction has become increasingly popular in medical imaging. However, due to the complexity and stringent quality requirements of medical imaging, image super-resolution reconstruction remains a difficult challenge. In this paper, we offer a deep-learning-based strategy for reconstructing medical images from low resolutions utilizing Transformer and Generative Adversarial Networks (T-GAN). By inserting a Transformer into the generative adversarial network for image reconstruction, the integrated system can extract more precise texture information and focus on important locations through global image matching. Furthermore, we use a weighted combination of content loss, adversarial loss, and adversarial feature loss as the final multi-task loss function during the training of our proposed model T-GAN. Measured by established metrics such as PSNR and SSIM, our proposed T-GAN achieves optimal performance and recovers more texture features in the super-resolution reconstruction of MRI scans of the knee and abdomen.
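A minimal PyTorch sketch of the weighted multi-task loss described above; the individual loss formulations and the weights are assumptions, as the abstract does not give them:

```python
import torch
import torch.nn.functional as F

def tgan_total_loss(sr, hr, d_fake_logits, feat_fake, feat_real,
                    w_content=1.0, w_adv=1e-3, w_feat=1e-2):
    """Weighted sum of a pixel-wise content loss, an adversarial loss on the
    discriminator logits, and an adversarial feature-matching loss; the individual
    formulations and weights are illustrative assumptions."""
    content = F.l1_loss(sr, hr)
    adversarial = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    feature = F.l1_loss(feat_fake, feat_real)
    return w_content * content + w_adv * adversarial + w_feat * feature
```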
The spatial resolution of medical images can be improved using super-resolution methods. The Real-Enhanced Super-Resolution Generative Adversarial Network (Real-ESRGAN) is one of the recent effective state-of-the-art approaches for producing higher-resolution images from lower-resolution inputs. In this paper, we apply this method to enhance the spatial resolution of 2D MR images. In our proposed approach, we slightly modify the structure to train on 2D magnetic resonance images (MRI) from the Brain Tumor Segmentation Challenge (BraTS) 2018 dataset. The obtained results are validated qualitatively and quantitatively by computing SSIM (structural similarity index measure), NRMSE (normalized root mean square error), MAE (mean absolute error), and VIF (visual information fidelity) values.
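A short sketch of how the reported metrics can be computed with scikit-image and numpy (VIF is not part of scikit-image and would need a separate implementation):

```python
import numpy as np
from skimage.metrics import structural_similarity, normalized_root_mse

def evaluate_sr(pred, target):
    """Quantitative comparison of a super-resolved slice against the ground truth."""
    ssim = structural_similarity(target, pred, data_range=target.max() - target.min())
    nrmse = normalized_root_mse(target, pred)
    mae = np.mean(np.abs(target.astype(np.float64) - pred.astype(np.float64)))
    return {"SSIM": ssim, "NRMSE": nrmse, "MAE": mae}
```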
In clinical medicine, magnetic resonance imaging (MRI) is one of the most important tools for diagnosis, classification, prognosis, and treatment planning. However, MRI suffers from an inherently slow data acquisition process because data are collected sequentially in k-space. In recent years, most MRI reconstruction methods in the literature have focused on holistic image reconstruction rather than on enhancing edge information. This work addresses that gap by focusing on the enhancement of edge information. Specifically, we introduce a novel parallel imaging coupled dual discriminator generative adversarial network (PIDD-GAN) for fast multi-channel MRI reconstruction that incorporates multi-view information. The dual discriminator design aims to improve edge information in MRI reconstruction: one discriminator is used for holistic image reconstruction, while the other is responsible for enhancing edge information. An improved U-Net with local and global residual learning is proposed as the generator, and frequency channel attention blocks (FCA blocks) are embedded in the generator to incorporate attention mechanisms. A content loss is introduced to train the generator for better reconstruction quality. We conducted comprehensive experiments on the Calgary-Campinas public brain MR dataset and compared our method with state-of-the-art MRI reconstruction methods. Ablation studies on residual learning were performed on the MICCAI13 dataset to validate the proposed modules. The results show that our PIDD-GAN provides high-quality reconstructed MR images with well-preserved edge information, and single-image reconstruction takes less than 5 ms, meeting the demand for fast processing.
Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4× upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.
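A compact PyTorch sketch of an SRGAN-style perceptual loss combining a VGG feature (content) term with an adversarial term; the VGG layer choice and the 1e-3 adversarial weight follow common practice and are assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Frozen VGG19 feature extractor (up to a deep conv layer) for the content loss.
# Inputs are assumed to be ImageNet-normalized RGB tensors.
vgg_features = vgg19(weights="IMAGENET1K_V1").features[:36].eval()
for p in vgg_features.parameters():
    p.requires_grad = False

def perceptual_loss(sr, hr, d_sr_logits, adv_weight=1e-3):
    content = F.mse_loss(vgg_features(sr), vgg_features(hr))       # feature-space content loss
    adversarial = F.binary_cross_entropy_with_logits(
        d_sr_logits, torch.ones_like(d_sr_logits))                 # push SR toward the "real" label
    return content + adv_weight * adversarial
```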
Realistic hyperspectral image (HSI) super-resolution (SR) techniques aim to generate a high-resolution (HR) HSI with higher spectral and spatial fidelity from its low-resolution (LR) counterpart. The generative adversarial network (GAN) has proven to be an effective deep learning framework for image super-resolution. However, the optimization process of existing GAN-based models frequently suffers from mode collapse, leading to limited capacity for spectral-spatial invariant reconstruction. This can cause spectral-spatial distortion in the generated HSI, especially at large upscaling factors. To alleviate the mode-collapse problem, this work proposes a novel GAN model coupled with a latent encoder (LE-GAN), which can map the generated spectral-spatial features from image space to latent space and produce a coupled component to regularize the generated samples. In essence, we treat an HSI as a high-dimensional manifold embedded in latent space, so that the optimization of the GAN model is converted into the problem of learning the distribution of high-resolution HSI samples in latent space, making the distribution of the generated super-resolution HSIs closer to that of their original high-resolution counterparts. We experimentally evaluated the model's super-resolution performance and its capability to alleviate mode collapse. The method was tested and validated on two real HSI datasets acquired with different sensors (i.e., AVIRIS and UHD-185) for various upscaling factors and increasing noise levels, and compared with state-of-the-art super-resolution models (i.e., HyCoNet, LTTR, BAGAN, SR-GAN, WGAN).
Magnetic Resonance Fingerprinting (MRF) is an efficient quantitative MRI technique that can extract important tissue and system parameters such as T1, T2, B0, and B1 from a single scan. This property also makes it attractive for retrospectively synthesizing contrast-weighted images. In general, contrast-weighted images like T1-weighted, T2-weighted, etc., can be synthesized directly from parameter maps through spin-dynamics simulation (i.e., Bloch or Extended Phase Graph models). However, these approaches often exhibit artifacts due to imperfections in the mapping, the sequence modeling, and the data acquisition. Here we propose a supervised learning-based method that directly synthesizes contrast-weighted images from the MRF data without going through the quantitative mapping and spin-dynamics simulation. To implement our direct contrast synthesis (DCS) method, we deploy a conditional Generative Adversarial Network (GAN) framework and propose a multi-branch U-Net as the generator. The input MRF data are used to directly synthesize T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) images through supervised training on paired MRF and target spin echo-based contrast-weighted scans. In vivo experiments demonstrate excellent image quality compared to simulation-based contrast synthesis and previous DCS methods, both visually and in quantitative metrics. We also demonstrate cases where our trained model is able to mitigate in-flow and spiral off-resonance artifacts that are typically seen in MRF reconstructions and thus more faithfully represent conventional spin echo-based contrast-weighted images.
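For context, the simulation-based baseline the abstract contrasts with can be sketched with the classic spin-echo signal equation; this simplified model (ignoring B0/B1 imperfections) is illustrative and is not the proposed DCS network:

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Classic spin-echo signal model used to synthesize contrast-weighted images
    from quantitative maps: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Illustrative settings (ms): a T1-weighted contrast uses short TR/TE,
# a T2-weighted contrast uses long TR/TE.
t1w = spin_echo_signal(pd=1.0, t1=900.0, t2=80.0, tr=500.0, te=15.0)
t2w = spin_echo_signal(pd=1.0, t1=900.0, t2=80.0, tr=4000.0, te=100.0)
```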
Low-dose computed tomography (LDCT) has attracted major attention in the medical imaging field because of the potential health risks that CT-related X-ray radiation poses to patients. Reducing the radiation dose, however, degrades the quality of the reconstructed images and thereby compromises diagnostic performance. Various deep learning techniques have been introduced to improve the image quality of LDCT images through denoising. GAN-based denoising methods usually leverage an additional classification network, i.e., the discriminator, to learn the most discriminative difference between the denoised and normal-dose images and regularize the denoising model accordingly; they typically focus either on global structure or on local detail. To better regularize the LDCT denoising model, this paper proposes a novel method, termed DU-GAN, which leverages U-Net-based discriminators within the GAN framework to learn both the global and the local differences between denoised and normal-dose images in both the image and gradient domains. The merit of such a U-Net-based discriminator is that it not only provides per-pixel feedback to the denoising network through the output of the U-Net, but also focuses on global structure at a semantic level through the middle layers of the U-Net. In addition to the adversarial training in the image domain, we apply another U-Net-based discriminator in the image gradient domain to alleviate the artifacts caused by photon starvation and to enhance the edges of the denoised CT images. Furthermore, the CutMix technique enables the per-pixel outputs of the U-Net-based discriminator to provide radiologists with a confidence map that visualizes the uncertainty of the denoised results, facilitating LDCT-based screening and diagnosis. Extensive experiments on simulated and real-world datasets demonstrate superior performance over recently published methods, both qualitatively and quantitatively.
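A minimal sketch of forming a gradient-domain input for the second U-Net-based discriminator via finite differences; DU-GAN's exact gradient operator may differ, so treat this as an assumption:

```python
import torch
import torch.nn.functional as F

def gradient_magnitude(x):
    """Finite-difference gradient magnitude of a (B, C, H, W) CT image batch,
    used here as the input to a gradient-domain discriminator."""
    dh = F.pad(x[..., :, 1:] - x[..., :, :-1], (0, 1, 0, 0))   # horizontal differences
    dv = F.pad(x[..., 1:, :] - x[..., :-1, :], (0, 0, 0, 1))   # vertical differences
    return torch.sqrt(dh ** 2 + dv ** 2 + 1e-12)
```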
Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack high-frequency textures and do not look natural despite yielding high PSNR values. We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.
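A minimal PyTorch sketch of a Gram-matrix texture loss of the kind used for automated texture synthesis; the feature layer and any patch-wise computation from the paper are omitted here as simplifying assumptions:

```python
import torch

def gram_matrix(features):
    """Channel-wise Gram matrix of a feature map, the standard statistic used in
    neural texture synthesis."""
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def texture_loss(feat_sr, feat_hr):
    """Texture-matching loss between feature maps of the super-resolved and
    ground-truth images."""
    return torch.mean((gram_matrix(feat_sr) - gram_matrix(feat_hr)) ** 2)
```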
Image super-resolution (SR) is one of the important image processing methods for improving image resolution in the field of computer vision. Significant progress has been made in super-resolution over the past two decades, especially through the use of deep learning methods. This survey provides a detailed review of recent advances in single-image super-resolution from a deep learning perspective, while also covering the earlier classical methods for image super-resolution. It classifies image SR methods into four categories: classical methods, learning-based methods, unsupervised-learning-based methods, and domain-specific SR methods. We also introduce the SR problem to provide intuition about image quality metrics, available reference datasets, and SR challenges, and we evaluate deep-learning-based methods using reference datasets. Some of the reviewed state-of-the-art image SR methods include the enhanced deep SR network (EDSR), cycle-in-cycle GAN (CinCGAN), multi-scale residual network (MSRN), meta residual dense network (Meta-RDN), recurrent back-projection network (RBPN), second-order attention network (SAN), SR feedback network (SRFBN), and wavelet-based residual attention network (WRAN). Finally, the survey concludes with future directions and trends in SR and with open problems for researchers to address.
Generative adversarial networks (GANs) are a powerful class of deep learning models that have been used successfully in numerous domains. They belong to the broader family of generative methods, which generate new data by learning the sample distribution of real examples. In the clinical context, GANs have shown an enhanced capability to capture spatially complex, nonlinear, and potentially subtle disease effects compared to traditional generative methods. This review appraises the existing literature on the application of GANs to imaging studies of various neurological conditions, including Alzheimer's disease, brain tumors, brain aging, and multiple sclerosis. We provide an intuitive explanation of the various GAN methods for each application and further discuss the main challenges, open questions, and promising future directions for leveraging GANs in neuroimaging. We aim to bridge the gap between advanced deep learning methods and neurology research by highlighting how GANs can be leveraged to support clinical decision making and to contribute to a better understanding of the structural and functional patterns of brain diseases.
We present VoloGAN, an adversarial domain adaptation network that translates synthetic RGB-D images of a high-quality 3D model of a person into RGB-D images that could be generated with a consumer depth sensor. The system is particularly useful for generating large amounts of training data for single-view 3D reconstruction algorithms that replicate real-world capture conditions, as it can imitate the style of different sensor types for the same high-end 3D model database. The network uses a CycleGAN framework with a U-Net architecture for the generator and a discriminator inspired by SIV-GAN. We use different optimizers and learning rate schedules to train the generator and the discriminator, and we further construct a loss function that considers the image channels individually and, among other metrics, evaluates structural similarity. We demonstrate that CycleGAN can be used to apply adversarial domain adaptation to synthetic 3D data in order to train a volumetric video generator model with only a few training samples.
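A hedged PyTorch sketch of training the generator and discriminator with different optimizers and learning-rate schedules, as the abstract describes; the placeholder networks, optimizer types, and schedule parameters are illustrative assumptions, not VoloGAN's actual settings:

```python
import torch
import torch.nn as nn

# Placeholder 4-channel (RGB-D) networks standing in for the CycleGAN-style
# generator and the SIV-GAN-inspired discriminator.
generator = nn.Sequential(nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 4, 3, padding=1))
discriminator = nn.Sequential(nn.Conv2d(4, 64, 3, padding=1), nn.LeakyReLU(0.2), nn.Conv2d(64, 1, 3, padding=1))

# Separate optimizers and learning-rate schedules for G and D.
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
d_opt = torch.optim.SGD(discriminator.parameters(), lr=1e-4, momentum=0.9)
g_sched = torch.optim.lr_scheduler.CosineAnnealingLR(g_opt, T_max=100)
d_sched = torch.optim.lr_scheduler.StepLR(d_opt, step_size=30, gamma=0.5)
```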
The onset of rheumatic diseases such as rheumatoid arthritis is typically subclinical, which makes early detection of the disease challenging. However, characteristic changes in the anatomy can be detected with imaging techniques such as MRI or CT. Modern imaging techniques such as chemical exchange saturation transfer (CEST) MRI raise the hope of further improving early detection by imaging metabolites in vivo. To image the small structures in the joints of patients, typically one of the first regions affected by disease onset, high resolution is necessary for CEST MR imaging. Currently, however, CEST MR inherently suffers from low resolution due to the underlying physical limitations of the acquisition. In this work, we compare established up-sampling techniques with neural-network-based super-resolution approaches. We show that a neural network that learns the mapping from low-resolution to high-resolution unsaturated CEST images considerably outperforms the current methods. On the test set, a PSNR of 32.29 dB (+10%), an NRMSE of 0.14 (+28%), and an SSIM of 0.85 (+15%) can be achieved using a ResNet neural network, substantially improving over the baseline. This work paves the way for a prospective investigation of neural networks for super-resolution CEST MRI and may enable earlier detection of the onset of rheumatic diseases.
Optical coherence tomography (OCT) is a non-invasive technique that captures cross-sectional areas of the retina at micrometer resolution. It has been widely used as an auxiliary imaging reference to detect eye-related pathologies and to predict the longitudinal progression of disease characteristics. Retinal layer segmentation is one of the crucial feature extraction techniques, because changes in retinal layer thickness and retinal layer deformation due to the presence of fluid are highly correlated with several prevalent eye diseases such as diabetic retinopathy (DR) and age-related macular degeneration (AMD). However, these images are acquired from different devices with different intensity distributions or, in other words, belong to different imaging domains. This paper proposes a segmentation-guided domain-adaptation method that adapts images from multiple devices into a single image domain, where an available state-of-the-art pre-trained model can be applied. It avoids the time-consuming manual labeling of upcoming new datasets and the retraining of existing networks. The semantic consistency and global feature consistency of the network minimize the hallucination effects that many researchers have reported for cycle-consistent generative adversarial network (CycleGAN) architectures.
Super-resolution (SR) aims to increase the resolution of images, with applications in security, medical imaging, and object recognition. We propose a deep-learning-based SR system that takes a hexagonally sampled low-resolution image as input and produces a rectangularly sampled SR image as output. For training and testing, we use a realistic observation model that includes optical degradation from diffraction and sensor degradation from detector integration. Our SR method first uses non-uniform interpolation to partially upsample the observed hexagonal image and convert it to a rectangular grid. We then leverage a state-of-the-art convolutional neural network (CNN) architecture designed for SR, known as the residual channel attention network (RCAN), and use it to further upsample and restore the image to produce the final SR estimate. We demonstrate that this system outperforms applying RCAN directly to rectangularly sampled LR images of equivalent sample density. The theoretical advantages of hexagonal sampling are well known; however, to the best of our knowledge, the practical benefits of hexagonal sampling combined with modern processing techniques such as RCAN SR have so far been untested. Our SR system demonstrates a notable advantage of hexagonal sampling when a modified RCAN is employed for hexagonal SR.
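A hedged SciPy sketch of the first stage, non-uniform interpolation from a hexagonal lattice onto a rectangular grid; the offset-row construction of the hexagonal sample positions is a simplifying assumption:

```python
import numpy as np
from scipy.interpolate import griddata

def hex_to_rect(hex_values, spacing=1.0):
    """Interpolate hexagonally sampled values (rows x cols array) onto a rectangular
    grid; alternate rows are offset by half a sample to approximate a hexagonal lattice."""
    rows, cols = hex_values.shape
    ys, xs = np.mgrid[0:rows, 0:cols].astype(np.float64)
    xs = xs * spacing + (ys % 2) * (spacing / 2.0)       # half-sample shift on odd rows
    ys = ys * spacing * np.sqrt(3.0) / 2.0               # hexagonal row pitch
    points = np.column_stack([ys.ravel(), xs.ravel()])
    grid_y, grid_x = np.mgrid[0:ys.max():rows * 1j, 0:xs.max():cols * 1j]
    return griddata(points, hex_values.ravel(), (grid_y, grid_x), method="linear")
```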
In medical image analysis, low-resolution images negatively affect the performance of medical image interpretation and may cause misdiagnosis. Single image super-resolution (SISR) methods can improve the resolution and quality of medical images. Currently, Generative Adversarial Network (GAN) based super-resolution models have shown very good performance. The Real-Enhanced Super-Resolution Generative Adversarial Network (Real-ESRGAN) is one of the practical GAN-based models widely used in the field of general image super-resolution. One of the challenges in medical image super-resolution is that, unlike natural images, medical images do not have high spatial resolution. To address this, we can use transfer learning and fine-tune a model that has been trained on external datasets (often natural-image datasets). In our proposed approach, the pre-trained generator and discriminator networks of the Real-ESRGAN model are fine-tuned using medical image datasets. In this paper, we worked on chest X-ray and retinal images, using the STARE dataset of retinal images and the Tuberculosis Chest X-rays (Shenzhen) dataset for fine-tuning. The proposed model produces more accurate and natural textures, and its outputs have better detail and resolution compared to the original Real-ESRGAN outputs.
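A hedged sketch of fine-tuning a pre-trained Real-ESRGAN-style generator on a medical dataset; the RRDBNet import from BasicSR, the checkpoint filename and key, and the optimizer settings are assumptions standing in for the authors' exact setup:

```python
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet   # generator architecture used by Real-ESRGAN

# Build the x4 generator and load pre-trained natural-image weights (checkpoint
# name and key are assumptions based on common Real-ESRGAN releases).
generator = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, scale=4)
state = torch.load("RealESRGAN_x4plus.pth", map_location="cpu")
generator.load_state_dict(state.get("params_ema", state), strict=True)

# Fine-tune with a small learning rate so the pre-trained prior is adapted to the
# medical images rather than overwritten.
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.9, 0.99))
```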
FREDSR is a GAN variant that aims to outperform traditional GAN models on specific tasks such as single image super-resolution, achieving extreme parameter efficiency at the cost of per-dataset generalizability. FREDSR integrates fast Fourier transformation, residual prediction, diffusive discriminators, and other components to achieve strong performance compared with other models on the UHDSR4K dataset for single-image 3x super-resolution from 360p and 720p, with only 37,000 parameters. The model follows the characteristics of the given dataset, resulting in lower generalizability but higher performance on tasks such as real-time upscaling.