Because of the need to obtain high-quality images at minimal radiation doses, as in low-field magnetic resonance imaging (MRI), super-resolution reconstruction has become popular in medical imaging. However, due to the complexity of medical images and their stringent fidelity requirements, image super-resolution reconstruction remains a difficult challenge. In this paper, we offer a deep learning-based strategy for reconstructing medical images from low resolutions utilizing Transformer and Generative Adversarial Networks (T-GAN). By integrating a Transformer into the generative adversarial network for image reconstruction, the combined system can extract more precise texture information and focus on important locations through global image matching. Furthermore, during training of the proposed T-GAN we use a weighted combination of content loss, adversarial loss, and adversarial feature loss as the final multi-task loss function. Measured by established metrics such as PSNR and SSIM, our proposed T-GAN achieves the best performance and recovers more texture detail in super-resolution reconstruction of MRI scans of the knee and abdomen.
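As a rough illustration of such a weighted multi-task objective, the following PyTorch-style sketch combines the three loss terms; the function name, weight values, and specific loss choices are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def t_gan_generator_loss(sr, hr, d_logits_fake, feat_fake, feat_real,
                         w_content=1.0, w_adv=1e-3, w_feat=1e-2):
    """Hypothetical weighted multi-task loss in the spirit of T-GAN.

    sr / hr:       super-resolved and ground-truth images
    d_logits_fake: discriminator logits for the generated images
    feat_fake / feat_real: discriminator features used for the
                           adversarial feature loss
    The three weights are illustrative, not the paper's values.
    """
    content_loss = F.l1_loss(sr, hr)                        # pixel-wise content term
    adversarial_loss = F.binary_cross_entropy_with_logits(  # generator wants "real"
        d_logits_fake, torch.ones_like(d_logits_fake))
    feature_loss = F.mse_loss(feat_fake, feat_real)         # adversarial feature term
    return w_content * content_loss + w_adv * adversarial_loss + w_feat * feature_loss
```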
We study image super-resolution (SR), which aims to recover realistic textures from a low-resolution (LR) image. Recent progress has been made by taking high-resolution images as references (Ref), so that relevant textures can be transferred to LR images. However, existing SR approaches neglect to use attention mechanisms to transfer high-resolution (HR) textures from Ref images, which limits these approaches in challenging cases. In this paper, we propose a novel Texture Transformer Network for Image Super-Resolution (TTSR), in which the LR and Ref images are formulated as queries and keys in a transformer, respectively. TTSR consists of four closely-related modules optimized for image generation tasks, including a learnable texture extractor by DNN, a relevance embedding module, a hard-attention module for texture transfer, and a soft-attention module for texture synthesis. Such a design encourages joint feature learning across LR and Ref images, in which deep feature correspondences can be discovered by attention, and thus accurate texture features can be transferred. The proposed texture transformer can be further stacked in a cross-scale way, which enables texture recovery from different levels (e.g., from 1× to 4× magnification). Extensive experiments show that TTSR achieves significant improvements over state-of-the-art approaches on both quantitative and qualitative evaluations. The source code can be downloaded at https://github.com/researchmm/TTSR.
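The hard/soft attention mechanism described above can be illustrated with a simplified, single-scale PyTorch sketch; the function name and the patch-based formulation below are a minimal reading of the abstract, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def texture_transfer(q_feat, k_feat, v_feat, patch=3):
    """Simplified single-scale sketch of TTSR-style texture transfer.

    q_feat: features of the upsampled LR image (query), (B, C, H, W)
    k_feat: features of the down/up-sampled Ref image (key), same shape
    v_feat: features of the original Ref image (value), same shape
    Returns the transferred texture T and a soft-attention map S.
    """
    B, C, H, W = q_feat.shape
    # Unfold into patch descriptors: (B, C*patch*patch, H*W)
    q = F.normalize(F.unfold(q_feat, patch, padding=patch // 2), dim=1)
    k = F.normalize(F.unfold(k_feat, patch, padding=patch // 2), dim=1)
    v = F.unfold(v_feat, patch, padding=patch // 2)

    rel = torch.bmm(k.transpose(1, 2), q)   # relevance embedding, (B, N_ref, N_lr)
    s, idx = rel.max(dim=1)                 # soft weights and hard indices
    # Hard attention: pick the most relevant Ref patch for every LR position.
    t = torch.gather(v, 2, idx.unsqueeze(1).expand(-1, v.size(1), -1))
    # Fold transferred patches back to a feature map (overlaps averaged).
    ones = torch.ones_like(t)
    T = F.fold(t, (H, W), patch, padding=patch // 2) / \
        F.fold(ones, (H, W), patch, padding=patch // 2)
    S = s.view(B, 1, H, W)                  # soft-attention map
    return T, S  # downstream use: fused = lr_feat + conv(cat(lr_feat, T)) * S
```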
Reference-based image super-resolution (RefSR) aims to exploit auxiliary reference (Ref) images to super-resolve low-resolution (LR) images. Recently, RefSR has attracted great attention because it provides an alternative way to surpass single-image SR. However, the RefSR problem poses two critical challenges: (i) it is difficult to match correspondences between the LR and Ref images when they differ significantly; and (ii) transferring the relevant texture from the Ref image to compensate for the details of the LR image is very challenging. To address these issues, this paper proposes a deformable attention Transformer, namely DATSR, with multiple scales, each of which consists of a texture feature encoder (TFE) module, a reference-based deformable attention (RDA) module, and a residual feature aggregation (RFA) module. Specifically, TFE first extracts features of the LR and Ref images that are insensitive to image transformations (e.g., brightness); RDA then exploits multiple relevant textures to compensate the LR features with more information; and RFA finally aggregates the LR features and relevant textures to obtain a more visually pleasing result. Extensive experiments demonstrate that our DATSR achieves state-of-the-art performance on benchmark datasets both quantitatively and qualitatively.
Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4× upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.
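A minimal sketch of such a perceptual loss is shown below, assuming a VGG19 feature extractor for the content term and a binary cross-entropy adversarial term; the layer cutoff and the inputs' normalization are illustrative assumptions, while the 1e-3 adversarial weight follows the paper.

```python
import torch
import torch.nn.functional as F
import torchvision

# Frozen VGG19 feature extractor for the content loss. Cutting at index 36
# (up to relu5_4) is an assumption; inputs are assumed ImageNet-normalized.
vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features[:36].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(sr, hr, d_logits_fake, adv_weight=1e-3):
    # Content term: similarity in VGG feature space, not pixel space.
    content = F.mse_loss(vgg(sr), vgg(hr))
    # Adversarial term: the generator is rewarded for fooling the discriminator.
    adversarial = F.binary_cross_entropy_with_logits(
        d_logits_fake, torch.ones_like(d_logits_fake))
    return content + adv_weight * adversarial
```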
Reference-based image super-resolution (RefSR) is a promising SR branch and has shown great potential in overcoming the limitations of single image super-resolution. While previous state-of-the-art RefSR methods mainly focus on improving the efficacy and robustness of reference feature transfer, it is generally overlooked that a well reconstructed SR image should enable better SR reconstruction for its similar LR images when it is used as a reference. Therefore, in this work, we propose a reciprocal learning framework that can appropriately leverage such a fact to reinforce the learning of a RefSR network. Besides, we deliberately design a progressive feature alignment and selection module for further improving the RefSR task. The newly proposed module aligns reference-input images at multi-scale feature spaces and performs reference-aware feature selection in a progressive manner, thus more precise reference features can be transferred into the input features and the network capability is enhanced. Our reciprocal learning paradigm is model-agnostic and it can be applied to arbitrary RefSR models. We empirically show that multiple recent state-of-the-art RefSR models can be consistently improved with our reciprocal learning paradigm. Furthermore, our proposed model together with the reciprocal learning strategy sets new state-of-the-art performances on multiple benchmarks.
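Since the paradigm is model-agnostic, the reciprocal idea can be sketched as a generic training step; every name below (the network, the criterion, and the weight) is a hypothetical placeholder rather than the paper's training recipe.

```python
import torch

def reciprocal_training_step(refsr, lr_a, lr_b, hr_a, hr_b, ref,
                             criterion, alpha=0.5):
    """Hypothetical sketch of the reciprocal learning idea, model-agnostic.

    refsr:       any RefSR network taking (lr, reference)
    lr_a / lr_b: two similar LR images; hr_a / hr_b their ground truths
    ref:         the usual HR reference image
    alpha:       weight of the reciprocal term (illustrative value)
    """
    sr_a = refsr(lr_a, ref)            # primary forward pass
    primary = criterion(sr_a, hr_a)
    # Reciprocal pass: a well-reconstructed SR image should itself serve
    # as a good reference for a similar LR image.
    sr_b = refsr(lr_b, sr_a)
    reciprocal = criterion(sr_b, hr_b)
    return primary + alpha * reciprocal
```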
Image super-resolution (SR) is one of the important image processing methods for improving image resolution in the field of computer vision. Over the last two decades, significant progress has been made in the field of super-resolution, especially through the use of deep learning methods. This survey provides a detailed account of recent advances in single-image super-resolution from a deep learning perspective, while also covering the initial classical methods for image super-resolution. The survey classifies image SR methods into four categories: classical methods, supervised learning-based methods, unsupervised learning methods, and domain-specific SR methods. We also introduce the SR problem to provide intuition about image quality metrics, available reference datasets, and SR challenges. Deep learning-based methods are evaluated using a reference dataset. Some of the reviewed state-of-the-art image SR methods include the enhanced deep SR network (EDSR), cycle-in-cycle GAN (CinCGAN), multi-scale residual network (MSRN), meta residual dense network (Meta-RDN), recurrent back-projection network (RBPN), second-order attention network (SAN), SR feedback network (SRFBN), and wavelet-based residual attention network (WRAN). Finally, the survey concludes with future directions and trends in SR and open problems for researchers to address.
In medical image analysis, low-resolution images negatively affect the performance of medical image interpretation and may cause misdiagnosis. Single image super-resolution (SISR) methods can improve the resolution and quality of medical images. Currently, Generative Adversarial Network (GAN) based super-resolution models have shown very good performance. Real-Enhanced Super-Resolution Generative Adversarial Network (Real-ESRGAN) is one of the practical GAN-based models widely used in the field of general image super-resolution. One of the challenges in medical image super-resolution is that, unlike natural images, medical images do not have high spatial resolution. To address this problem, we can use transfer learning and fine-tune a model that has been trained on external datasets (often natural-image datasets). In our proposed approach, the pre-trained generator and discriminator networks of the Real-ESRGAN model are fine-tuned on medical image datasets. In this paper, we work on chest X-ray and retinal images, using the STARE dataset of retinal images and the Tuberculosis Chest X-rays (Shenzhen) dataset for fine-tuning. The proposed model produces more accurate and natural textures, and its outputs have better detail and resolution than the original Real-ESRGAN outputs.
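A minimal sketch of this fine-tuning setup might look as follows, assuming the BasicSR implementation of the Real-ESRGAN generator and the publicly released checkpoint; the path, checkpoint key, and learning rate are assumptions.

```python
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet

# Initialize the generator from released Real-ESRGAN weights, then continue
# training on medical images. The checkpoint file name and the "params_ema"
# key are assumptions based on the public Real-ESRGAN release.
generator = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                    num_block=23, num_grow_ch=32, scale=4)
state = torch.load("RealESRGAN_x4plus.pth", map_location="cpu")
generator.load_state_dict(state["params_ema"])

# Fine-tuning typically uses a smaller learning rate than scratch training.
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)
```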
High-resolution retinal optical coherence tomography angiography (OCTA) is important for the quantification and analysis of the retinal vasculature. However, the resolution of OCTA images is inversely proportional to the field of view at the same sampling frequency, which makes it harder for clinicians to analyze larger vascular areas. In this paper, we propose a novel Sparse-based domain Adaptation Super-Resolution network (SASR) to reconstruct realistic 6x6 mm2/low-resolution (LR) OCTA images into high-resolution (HR) representations. More specifically, we first perform a simple degradation of the 3x3 mm2/high-resolution (HR) image to obtain a synthetic LR image. An efficient registration method is then employed to register the synthetic LR image with its corresponding 3x3 mm2 region within the 6x6 mm2 image, yielding a cropped realistic LR image. We then propose a multi-level super-resolution model for fully-supervised reconstruction of the synthetic data, which guides the reconstruction of the realistic LR images through a generative-adversarial strategy that allows the synthetic and realistic LR images to be unified in the feature domain. Finally, a novel sparse edge-aware loss is designed to dynamically optimize the vessel edge structure. Extensive experiments on two OCTA datasets demonstrate that our method outperforms state-of-the-art super-resolution reconstruction methods. In addition, we investigate how the reconstructed results affect retinal structure segmentation, which further validates the effectiveness of our approach.
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process that misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.
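A simplified sketch of a layer attention module along these lines is given below; the exact scaling and residual design are assumptions, not the HAN architecture.

```python
import torch
import torch.nn as nn

class LayerAttention(nn.Module):
    """Simplified sketch of a layer attention module (LAM) in the spirit of
    HAN: features from all intermediate layers attend to each other so that
    informative layers are emphasized. The learnable residual weight is an
    assumption."""

    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, feats):                       # feats: (B, N_layers, C, H, W)
        b, n, c, h, w = feats.shape
        flat = feats.view(b, n, -1)                 # (B, N, C*H*W)
        corr = torch.bmm(flat, flat.transpose(1, 2))  # layer-layer correlation
        attn = torch.softmax(corr, dim=-1)          # (B, N, N)
        out = torch.bmm(attn, flat).view(b, n, c, h, w)
        return self.scale * out + feats             # residual connection
```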
The spatial resolution of medical images can be improved using super-resolution methods. Real-Enhanced Super-Resolution Generative Adversarial Network (Real-ESRGAN) is one of the recent effective approaches for producing higher-resolution images from lower-resolution inputs. In this paper, we apply this method to enhance the spatial resolution of 2D MR images. In our proposed approach, we slightly modify the structure of Real-ESRGAN and train it on 2D magnetic resonance images (MRI) from the Brain Tumor Segmentation Challenge (BraTS) 2018 dataset. The results are validated qualitatively and quantitatively by computing SSIM (Structural Similarity Index Measure), NRMSE (Normalized Root Mean Square Error), MAE (Mean Absolute Error), and VIF (Visual Information Fidelity) values.
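Three of the four reported metrics can be computed directly with scikit-image and NumPy, as in the sketch below; VIF is omitted because it is not part of scikit-image and needs a separate implementation.

```python
import numpy as np
from skimage.metrics import structural_similarity, normalized_root_mse

def evaluate_slice(hr: np.ndarray, sr: np.ndarray) -> dict:
    """Quantitative comparison of a ground-truth HR slice against a
    super-resolved output (both 2D arrays of the same shape)."""
    data_range = hr.max() - hr.min()
    return {
        "SSIM": structural_similarity(hr, sr, data_range=data_range),
        "NRMSE": normalized_root_mse(hr, sr),
        "MAE": float(np.mean(np.abs(hr - sr))),
    }
```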
In recent years, several advances have been made in the task of image super-resolution using state-of-the-art deep learning-based architectures. Many previously published super-resolution techniques require high-end, top-of-the-line Graphics Processing Units (GPUs) to perform image super-resolution. As deep learning approaches have advanced, neural networks have become increasingly compute-hungry. We take a step back and focus on creating a real-time, efficient solution. We propose an architecture that is faster and smaller in terms of its memory footprint. The proposed architecture uses depth-wise separable convolutions to extract features, and it performs on par with other super-resolution GANs (Generative Adversarial Networks) while maintaining real-time inference and a low memory footprint. Real-time super-resolution enables streaming high-resolution media content even under poor bandwidth conditions. While maintaining an efficient trade-off between accuracy and latency, we are able to produce a model of comparable performance that is one-eighth (1/8) the size of super-resolution GANs and computes 74 times faster than super-resolution GANs.
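The depth-wise separable convolution at the core of this design can be written in a few lines of PyTorch; the block below also notes the parameter saving that motivates the choice.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution for cheap feature extraction: a
    per-channel (depthwise) convolution followed by a 1x1 (pointwise)
    convolution, cutting parameters and FLOPs roughly by a factor of k*k."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parameter count for 64 -> 64 channels with a 3x3 kernel (weights only):
# standard conv: 64*64*9 = 36,864; separable: 64*9 + 64*64 = 4,672.
```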
Face super-resolution (FSR), also known as face hallucination, which aims at enhancing the resolution of low-resolution (LR) face images to generate high-resolution (HR) face images, is a domain-specific image super-resolution problem. Recently, FSR has received considerable attention and witnessed dazzling advances with the development of deep learning techniques. To date, few summaries of studies on deep learning-based FSR are available. In this survey, we present a comprehensive review of deep learning-based FSR methods in a systematic manner. First, we summarize the problem formulation of FSR and introduce popular assessment metrics and loss functions. Second, we elaborate on the facial characteristics and popular datasets used in FSR. Third, we roughly categorize existing methods according to how they utilize facial characteristics. In each category, we start with a general description of the design principles, present an overview of representative approaches, and then discuss their pros and cons. Fourth, we evaluate the performance of some state-of-the-art methods. Fifth, joint FSR and other tasks, as well as FSR-related applications, are briefly introduced. Finally, we envision the prospects of further technological advancement in this field. A curated list of papers and resources on face super-resolution is available at https://github.com/junjun-jiang/face-hallucination-benchmark
Recent multi-view multimedia applications struggle between high-resolution (HR) visual experience and storage or bandwidth constraints. Therefore, this paper proposes a Multi-View Image Super-Resolution (MVISR) task, which aims to increase the resolution of multi-view images captured from the same scene. One solution is to apply image or video super-resolution (SR) methods to reconstruct HR results from the low-resolution (LR) input view. However, these methods cannot handle large angle transformations between views or leverage the information in all the multi-view images. To address these problems, we propose MVSRnet, which uses geometry information to extract sharp details from all the LR multi-view images to support the SR of the LR input view. Specifically, the proposed Geometry-Aware Reference Synthesis module in MVSRnet uses geometry information and all the multi-view LR images to synthesize pixel-aligned HR reference images. The proposed Dynamic High-Frequency Search network then fully exploits the high-frequency texture details in the reference images for SR. Extensive experiments on several benchmarks show that our method significantly improves over state-of-the-art approaches.
Reference-based Super-resolution (RefSR) approaches have recently been proposed to overcome the ill-posed problem of image super-resolution by providing additional information from a high-resolution image. Multi-reference super-resolution extends this approach by allowing more information to be incorporated. This paper proposes a 2-step-weighting posterior fusion approach to combine the outputs of RefSR models with multiple references. Extensive experiments on the CUFED5 dataset demonstrate that the proposed methods can be applied to various state-of-the-art RefSR models to get a consistent improvement in image quality.
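The paper's exact 2-step weighting is not described in the abstract; the sketch below shows only the general shape of a posterior fusion step, with an assumed inverse-deviation weighting as a stand-in for the real scheme.

```python
import torch

def fuse_outputs(sr_outputs, residual_eps=1e-6):
    """Illustrative posterior fusion of several RefSR outputs (one per
    reference). As a stand-in for the paper's 2-step weighting, each output
    is weighted per pixel by the inverse of its deviation from the mean
    prediction, so outlier predictions contribute less."""
    stack = torch.stack(sr_outputs, dim=0)               # (R, B, C, H, W)
    mean = stack.mean(dim=0, keepdim=True)
    weights = 1.0 / (torch.abs(stack - mean) + residual_eps)
    weights = weights / weights.sum(dim=0, keepdim=True)
    return (weights * stack).sum(dim=0)                  # fused SR image
```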
Realistic hyperspectral image (HSI) super-resolution (SR) techniques aim to generate a high-resolution (HR) HSI with higher spectral and spatial fidelity from its low-resolution (LR) counterpart. The generative adversarial network (GAN) has proven to be an effective deep learning framework for image super-resolution. However, the optimization process of existing GAN-based models frequently suffers from mode collapse, leading to limited capacity for spectral-spatial invariant reconstruction. This may cause spectral-spatial distortion in the generated HSI, especially at large upscaling factors. To alleviate the problem of mode collapse, this work proposes a novel GAN model coupled with a latent encoder (LE-GAN), which can map the generated spectral-spatial features from the image space to the latent space and produce a coupled component to regularize the generated samples. Essentially, we treat an HSI as a high-dimensional manifold embedded in a latent space. Thus, the optimization of the GAN model is converted into the problem of learning the distribution of high-resolution HSI samples in the latent space, making the distribution of the generated super-resolution HSIs closer to that of their original high-resolution counterparts. We have conducted experimental evaluations of the model's super-resolution performance and its capability in alleviating mode collapse. The proposed approach has been tested and validated on two real HSI datasets acquired with different sensors (i.e., AVIRIS and UHD-185), for various upscaling factors and increasing noise levels, and compared with state-of-the-art super-resolution models (i.e., HyCoNet, LTTR, BAGAN, SR-GAN, WGAN).
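The latent-encoder regularization can be sketched as an extra term in the generator loss, as below; the loss form, weighting, and function names are assumptions rather than the paper's formulation.

```python
import torch
import torch.nn.functional as F

def le_gan_generator_loss(generator, discriminator, latent_encoder,
                          lr_hsi, hr_hsi, lambda_latent=0.1):
    """Sketch of the latent-encoder idea: generated and real HR HSIs are
    mapped into a shared latent space, and the distance between them
    regularizes the generator against mode collapse."""
    sr_hsi = generator(lr_hsi)
    logits = discriminator(sr_hsi)
    adversarial = F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))
    # Latent-space coupling term: pull generated samples toward the
    # distribution of real HR samples in the latent space.
    latent_term = F.mse_loss(latent_encoder(sr_hsi), latent_encoder(hr_hsi))
    return adversarial + lambda_latent * latent_term
```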
State-of-the-art magnetic resonance (MR) image super-resolution (ISR) methods based on convolutional neural networks (CNNs) exploit limited contextual information because of the restricted spatial coverage of CNNs. Vision transformers (ViTs) learn better global context, which helps in generating higher-quality HR images. We combine local information from CNNs with global information from ViTs for image super-resolution, producing super-resolved images of higher quality than those generated by state-of-the-art methods. We include extra constraints through multiple novel loss functions that preserve structure and texture information from the low-resolution to the high-resolution image.
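A minimal sketch of such a CNN-plus-transformer fusion is shown below; the depths, widths, per-pixel tokenization, and fusion rule are illustrative choices, not the proposed architecture.

```python
import torch
import torch.nn as nn

class CnnVitFusion(nn.Module):
    """Minimal sketch of fusing local CNN features with global transformer
    context for MR image SR (all hyper-parameters are illustrative)."""

    def __init__(self, ch=64, heads=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
        layer = nn.TransformerEncoderLayer(d_model=ch, nhead=heads,
                                           batch_first=True)
        self.vit = nn.TransformerEncoder(layer, num_layers=2)
        self.fuse = nn.Conv2d(2 * ch, 1, 3, padding=1)

    def forward(self, x):                          # x: (B, 1, H, W), upsampled LR
        local = self.cnn(x)                        # local detail from the CNN
        b, c, h, w = local.shape
        tokens = local.flatten(2).transpose(1, 2)  # (B, H*W, C) pixel tokens
        global_ctx = self.vit(tokens).transpose(1, 2).view(b, c, h, w)
        return self.fuse(torch.cat([local, global_ctx], dim=1))
```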
Deep Convolutional Neural Networks (DCNNs) have exhibited impressive performance on image super-resolution tasks. However, these deep learning-based super-resolution methods perform poorly in real-world super-resolution tasks, where the paired high-resolution and low-resolution images are unavailable and the low-resolution images are degraded by complicated and unknown kernels. To break these limitations, we propose the Unsupervised Bi-directional Cycle Domain Transfer Learning-based Generative Adversarial Network (UBCDTL-GAN), which consists of an Unsupervised Bi-directional Cycle Domain Transfer Network (UBCDTN) and the Semantic Encoder guided Super Resolution Network (SESRN). First, the UBCDTN is able to produce an approximated real-like LR image through transferring the LR image from an artificially degraded domain to the real-world LR image domain. Second, the SESRN has the ability to super-resolve the approximated real-like LR image to a photo-realistic HR image. Extensive experiments on unpaired real-world image benchmark datasets demonstrate that the proposed method achieves superior performance compared to state-of-the-art methods.
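The bi-directional cycle transfer between the synthetic and real-world LR domains can be illustrated with a standard cycle-consistency term, as sketched below; the generator names and the weight are placeholders, not the UBCDTN formulation.

```python
import torch
import torch.nn.functional as F

def cycle_domain_loss(g_syn2real, g_real2syn, lr_syn, lr_real, lambda_cyc=10.0):
    """Sketch of the bi-directional cycle idea: a synthetic LR image mapped
    into the real-world LR domain and back should return to itself, and vice
    versa. This constrains the domain-transfer generators without paired data."""
    fake_real = g_syn2real(lr_syn)               # synthetic -> real-like LR
    fake_syn = g_real2syn(lr_real)               # real -> synthetic-like LR
    cyc_syn = F.l1_loss(g_real2syn(fake_real), lr_syn)
    cyc_real = F.l1_loss(g_syn2real(fake_syn), lr_real)
    return lambda_cyc * (cyc_syn + cyc_real)
```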
The core problem in magnetic resonance imaging (MRI) is the trade-off between acceleration and image quality. Image reconstruction and super-resolution are two crucial techniques in MRI. Current methods are designed to perform these tasks separately, ignoring the correlations between them. In this work, we propose an end-to-end task transformer network (T$^2$Net) for joint MRI reconstruction and super-resolution, which allows representations and feature transmission to be shared between the two tasks to achieve higher-quality, super-resolved, motion-artifact-free images from highly undersampled and degenerated MRI data. Our framework combines reconstruction and super-resolution in two sub-branches, whose features are expressed as queries and keys. Specifically, we encourage joint feature learning between the two tasks, thereby transferring accurate task information. We first use two separate CNN branches to extract task-specific features. A task transformer module is then designed to embed and synthesize the relevance between the two tasks. Experimental results show that our multi-task model significantly outperforms advanced sequential methods, both quantitatively and qualitatively.
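The query/key interaction between the two branches can be sketched with a single cross-attention layer, as below; the dimensions and single-layer design are simplifications of the described module.

```python
import torch
import torch.nn as nn

class TaskTransformer(nn.Module):
    """Sketch of the task-transformer idea: features of the super-resolution
    branch query features of the reconstruction branch via cross-attention,
    so information learned by one task guides the other."""

    def __init__(self, ch=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)

    def forward(self, sr_feat, rec_feat):          # both (B, C, H, W)
        b, c, h, w = sr_feat.shape
        q = sr_feat.flatten(2).transpose(1, 2)     # SR features as queries
        kv = rec_feat.flatten(2).transpose(1, 2)   # reconstruction features as keys/values
        out, _ = self.attn(q, kv, kv)
        return out.transpose(1, 2).view(b, c, h, w) + sr_feat  # residual fusion
```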
In this paper, we consider two challenging issues in reference-based super-resolution (RefSR): (i) how to choose a proper reference image, and (ii) how to learn real-world RefSR in a self-supervised manner. In particular, we propose a novel self-supervised learning approach for real-world image SR from observations at dual camera zooms (SelfDZSR). Considering the popularity of multiple cameras in modern smartphones, the more zoomed (telephoto) image can naturally be leveraged as a reference to guide the SR of the less zoomed (short-focus) image. SelfDZSR learns a deep network to obtain an SR result of the short-focus image with the same resolution as the telephoto image. To this end, we take the telephoto image, instead of an additional high-resolution image, as the supervision information, and select a center patch from it as the reference to super-resolve the corresponding short-focus image patch. To mitigate the effect of misalignment between the short-focus low-resolution (LR) image and the telephoto ground-truth (GT) image, we design an auxiliary-LR generator and map the GT to an auxiliary LR while keeping the spatial position unchanged. The auxiliary LR can then be utilized to deform the LR features via the proposed adaptive spatial transformer networks (AdaSTN) and to match the Ref features to the GT. During testing, SelfDZSR can be deployed directly to super-resolve the whole short-focus image with the telephoto image as reference. Experiments show that our method achieves better quantitative and qualitative performance than the state of the art. Code is available at https://github.com/cszhilu1998/SelfDZSR.
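The pairing of the telephoto image with the central region of the short-focus image can be sketched with a small helper; the function and the simple center-crop rule below are hypothetical and ignore the real-world misalignment that the paper's auxiliary-LR generator addresses.

```python
def center_patch(short_focus, zoom_ratio):
    """Hypothetical helper: crop the region of the short-focus image that the
    telephoto camera covers (the central 1/zoom_ratio portion). This patch is
    then paired with the telephoto image as supervision/reference.

    short_focus: image tensor/array shaped (..., H, W)
    zoom_ratio:  telephoto focal length over short-focus focal length (> 1)
    """
    h, w = short_focus.shape[-2:]
    ph, pw = int(h / zoom_ratio), int(w / zoom_ratio)
    top, left = (h - ph) // 2, (w - pw) // 2
    return short_focus[..., top:top + ph, left:left + pw]
```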
High-resolution (HR) medical images provide rich anatomical structure details that facilitate early and accurate diagnosis. In MRI, restricted by hardware capacity, scan time, and patient cooperation, isotropic 3D HR image acquisition typically requires a long scan time and results in small spatial coverage and low SNR. Recent studies showed that, with deep convolutional neural networks, isotropic HR MR images can be recovered from low-resolution (LR) input via single image super-resolution (SISR) algorithms. However, most existing SISR methods learn a scale-specific projection between LR and HR images, so they can only deal with a fixed up-sampling rate. To achieve different up-sampling rates, multiple SR networks have to be built separately, which is time-consuming and resource-intensive. In this paper, we propose ArSSR, an Arbitrary Scale Super-Resolution approach for recovering 3D HR MR images. In the ArSSR model, the reconstruction of HR images at different up-scaling rates is defined as learning a continuous implicit voxel function from the observed LR images. The SR task is then converted to representing the implicit voxel function via deep neural networks from a set of paired HR-LR training examples. The ArSSR model consists of an encoder network and a decoder network. Specifically, the convolutional encoder network extracts feature maps from the LR input images, and the fully-connected decoder network approximates the implicit voxel function. Due to the continuity of the learned function, a single trained ArSSR model can reconstruct HR images from any input LR image at an arbitrary up-sampling rate. Experimental results on three datasets show that the ArSSR model achieves state-of-the-art SR performance for 3D HR MR image reconstruction while using a single trained model for arbitrary up-sampling scales.
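The decoder side of this idea, an MLP representing the continuous implicit voxel function, can be sketched as below; the widths, depths, and coordinate convention are illustrative assumptions. At inference, an arbitrary scale simply means evaluating the function on a correspondingly denser coordinate grid.

```python
import torch
import torch.nn as nn

class ImplicitVoxelDecoder(nn.Module):
    """Sketch of an ArSSR-style decoder: an MLP maps a continuous 3D
    coordinate plus the encoder feature sampled at that coordinate to an
    intensity, so any up-sampling rate amounts to querying the function
    on a denser grid."""

    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 1))

    def forward(self, coords, feats):
        # coords: (N, 3) continuous voxel coordinates, e.g., in [-1, 1]^3
        # feats:  (N, feat_dim) encoder features sampled at those coordinates
        return self.mlp(torch.cat([coords, feats], dim=-1))  # (N, 1) intensities
```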