Super-resolution (SR) aims to increase the resolution of images. Applications include security, medical imaging, and object recognition. We propose a deep-learning-based SR system that takes a hexagonally sampled low-resolution (LR) image as input and produces a rectangularly sampled SR image as output. For training and testing, we use a realistic observation model that includes optical degradation from diffraction and sensor degradation from detector integration. Our SR method first uses non-uniform interpolation to partially upsample the observed hexagonal image and convert it to a rectangular grid. We then leverage a state-of-the-art convolutional neural network (CNN) architecture designed for SR, known as the Residual Channel Attention Network (RCAN). In particular, we use RCAN to further upsample and restore the image to produce the final SR image estimate. We demonstrate that this system outperforms applying RCAN directly to rectangularly sampled LR images of equivalent sample density. The theoretical advantages of hexagonal sampling are well known. However, to the best of our knowledge, the practical benefits of hexagonal sampling with modern processing techniques such as RCAN SR have to date been untested. Our SR system demonstrates a notable advantage of hexagonal sampling when employing a modified RCAN for hexagonal SR.
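As a minimal sketch of the first stage, resampling a hexagonal grid (whose alternate rows are offset by half a sample pitch) onto a rectangular grid reduces, row by row, to 1D interpolation at non-uniform positions. The function below is illustrative only, not the paper's interpolator:

```python
from bisect import bisect_right

def interp_nonuniform(xs, ys, x_new):
    """Linearly interpolate samples at non-uniform positions xs onto new positions,
    clamping at the boundaries."""
    out = []
    for x in x_new:
        i = bisect_right(xs, x)
        if i == 0:
            out.append(ys[0])
        elif i == len(xs):
            out.append(ys[-1])
        else:
            x0, x1 = xs[i - 1], xs[i]
            t = (x - x0) / (x1 - x0)
            out.append((1 - t) * ys[i - 1] + t * ys[i])
    return out

# An offset hexagonal row samples at half-integer positions; resample it onto
# the integer positions of the rectangular grid:
hex_positions = [0.5, 1.5, 2.5, 3.5]   # offset-row sample centers
values = [1.0, 3.0, 5.0, 7.0]          # samples of a linear ramp
rect_positions = [1.0, 2.0, 3.0]
print(interp_nonuniform(hex_positions, values, rect_positions))  # → [2.0, 4.0, 6.0]
```

A linear ramp is reproduced exactly, which is the sanity check one expects from any linear interpolator.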
translated by Google Translate
Photographs captured by smartphones and mid-range cameras have limited spatial resolution and dynamic range, with noisy responses in underexposed regions and color artifacts in saturated areas. This paper introduces the first approach (to the best of our knowledge) to reconstruct high-resolution, high-dynamic-range color images from raw photographic bursts captured by a handheld camera with exposure bracketing. The method combines a physically accurate model of image formation with an iterative optimization algorithm for solving the corresponding inverse problem, and a learned image representation for robust alignment and a learned natural image prior. The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based image restoration methods, and features learned end-to-end from synthetic yet realistic data. Extensive experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with handheld cameras, and its high robustness to low-light conditions, noise, camera shake, and moderate object motion.
Despite significant progress in recent years, single-image super-resolution methods have been developed with several limitations. Specifically, they are trained on fixed content domains with certain degradations (whether synthetic or real). The priors they learn are prone to overfitting the training configuration. Therefore, their generalization to novel domains, such as drone top-view data, is currently unknown. Nonetheless, pairing drones with proper image super-resolution is of significant value. It would enable drones to fly higher and cover a larger area while maintaining high image quality. To answer these questions and pave the way for drone image super-resolution, we explore this application with a particular focus on the single-image case. We propose a novel drone image dataset with scenes captured at low and high resolutions and across a span of altitudes. Our results show that off-the-shelf state-of-the-art networks suffer a performance drop on this different domain. We also show that simple fine-tuning, as well as incorporating altitude awareness into the network architecture, can both improve reconstruction performance.
High-resolution remote sensing imagery is used for a wide range of tasks, including detection and classification of objects. However, high-resolution imagery is expensive, while lower-resolution imagery is often freely available and can be used by the public for a range of social-good applications. To this end, we curate a multi-spectral multi-image super-resolution dataset using PlanetScope imagery from the SpaceNet 7 challenge as the high-resolution reference and multiple Sentinel-2 revisits of the same imagery as the low-resolution images. We present the first results of applying multi-image super-resolution (MISR) to multi-spectral remote sensing imagery. In addition, we introduce a radiometric consistency module into the MISR model to preserve the high radiometric resolution of the Sentinel-2 sensor. We show that MISR outperforms single-image super-resolution and other baselines on a range of image fidelity metrics. Furthermore, we conduct the first assessment of the utility of multi-image super-resolution for building delineation, showing that leveraging multiple images results in better performance on these downstream tasks.
Light field (LF) imaging, which captures both the spatial and angular information of a scene, is undoubtedly beneficial to numerous applications. Although various techniques have been proposed for LF acquisition, achieving an LF that is both angularly and spatially high-resolution remains a technical challenge. In this paper, a learning-based approach applied to 3D epipolar plane images (EPIs) is proposed to reconstruct high-resolution LFs. Through a two-stage super-resolution framework, the proposed approach effectively addresses various LF super-resolution (SR) problems, i.e., spatial SR, angular SR, and angular-spatial SR. While the first stage provides a flexible choice of options for upsampling the EPI volume, the second stage, which consists of a novel EPI volume refinement network (EVRN), substantially enhances the quality of the high-resolution EPI volume. An extensive evaluation on 90 challenging synthetic and real-world light field scenes from 7 published datasets shows that the proposed approach outperforms state-of-the-art methods by a large margin for both spatial and angular super-resolution problems, i.e., average peak signal-to-noise ratio improvements of 2.0 dB, 1.4 dB, and 3.14 dB for spatial SR $\times 2$, spatial SR $\times 4$, and angular SR, respectively. The reconstructed 4D light fields exhibit a balanced performance distribution across all perspective images and superior visual quality compared to previous works.
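The gains above are reported in peak signal-to-noise ratio (PSNR). For reference, the standard definition, with a 2.0 dB gain corresponding to a roughly 1.58x drop in mean squared error, can be computed as:

```python
import math

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images,
    given here as flat lists of pixel values."""
    mse = sum((r - e) ** 2 for r, e in zip(ref, est)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [100.0, 120.0, 140.0, 160.0]
est = [101.0, 119.0, 141.0, 159.0]
print(round(psnr(ref, est), 2))  # MSE = 1, so 10*log10(255^2) ≈ 48.13
```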
Image super-resolution (SR) is one of the important image processing methods for improving image resolution in the field of computer vision. In the last two decades, significant progress has been made in the field of super-resolution, especially through the use of deep learning methods. This survey provides a detailed review of recent progress in single-image super-resolution from the perspective of deep learning, while also informing the reader about the initial classical methods of image super-resolution. The survey classifies image SR methods into four categories, namely classical methods, learning-based methods, unsupervised learning methods, and domain-specific SR methods. We also introduce the problems of SR to provide intuition about image quality metrics, available reference datasets, and SR challenges. Deep-learning-based approaches are evaluated using the reference datasets. Some of the reviewed state-of-the-art image SR methods include the enhanced deep SR network (EDSR), cycle-in-cycle GAN (CinCGAN), multi-scale residual network (MSRN), meta residual dense network (Meta-RDN), recurrent back-projection network (RBPN), second-order attention network (SAN), SR feedback network (SRFBN), and the wavelet-based residual attention network (WRAN). Finally, this survey concludes with future directions and trends in SR and open problems for researchers to address.
This paper reviews the first challenge on single image super-resolution (restoration of rich details in a low-resolution image) with a focus on proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) but learnable through low- and high-resolution train images. Each competition had ∼100 registered participants and 20 teams competed in the final testing phase. They gauge the state of the art in single image super-resolution.
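Track 2's degradation (a blur kernel followed by decimation) can be sketched in one dimension as follows. The kernel and scale here are illustrative stand-ins, not the challenge's hidden operators:

```python
def degrade_1d(signal, kernel, scale):
    """Blur with a 1D kernel (same-size output, zero-padded borders), then
    decimate by `scale`: a toy analogue of the unknown-operator track."""
    k = len(kernel) // 2
    blurred = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        blurred.append(acc)
    return blurred[::scale]  # decimation: keep every `scale`-th sample

hr = [float(v) for v in range(8)]            # a linear ramp as a toy HR signal
lr = degrade_1d(hr, [0.25, 0.5, 0.25], 2)
print(lr)  # interior samples keep the ramp; the border darkens from zero padding
```

Learning the inverse of such a pipeline from LR-HR pairs, without being told the kernel, is exactly what Track 2 asks of its participants.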
随着深度学习(DL)的出现,超分辨率(SR)也已成为一个蓬勃发展的研究领域。然而,尽管结果有希望,但该领域仍然面临需要进一步研究的挑战,例如,允许灵活地采样,更有效的损失功能和更好的评估指标。我们根据最近的进步来回顾SR的域,并检查最新模型,例如扩散(DDPM)和基于变压器的SR模型。我们对SR中使用的当代策略进行了批判性讨论,并确定了有前途但未开发的研究方向。我们通过纳入该领域的最新发展,例如不确定性驱动的损失,小波网络,神经体系结构搜索,新颖的归一化方法和最新评估技术来补充先前的调查。我们还为整章中的模型和方法提供了几种可视化,以促进对该领域趋势的全球理解。最终,这篇综述旨在帮助研究人员推动DL应用于SR的界限。
Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge [26].
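One of EDSR's stabilization tricks for wide models is residual scaling: the output of each residual branch is multiplied by a small constant (0.1 in the paper) before being added back to the input. A toy 1D sketch, where an elementwise weighting stands in for the real conv-ReLU-conv body:

```python
def residual_block(x, weights, scale=0.1):
    """Toy 1D stand-in for an EDSR-style residual block: a placeholder 'body'
    followed by residual scaling, with no batch normalization (EDSR removes BN).
    `weights` plays the role of the learned conv parameters."""
    body = [w * v for w, v in zip(weights, x)]   # placeholder for conv -> ReLU -> conv
    return [v + scale * b for v, b in zip(x, body)]

x = [1.0, 2.0, 3.0]
out = residual_block(x, weights=[0.5, 0.5, 0.5])
print(out)  # each output stays close to its input: identity plus 0.1 * body
```

Keeping each block a small perturbation of the identity is what lets very deep, wide stacks of such blocks train stably.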
Hyperspectral images (HSIs) with narrow spectral bands can capture rich spectral information, but they sacrifice spatial resolution in the process. Many machine-learning-based HSI super-resolution (SR) algorithms have recently been proposed. However, one of the fundamental limitations of these approaches is that they are highly dependent on the image and camera settings and can only learn to map an input HSI with one specific setting to an output HSI with another specific setting. Yet, given the diversity of HSI cameras, different cameras capture images with different spectral response functions and numbers of bands. Consequently, existing machine-learning-based approaches cannot learn to super-resolve HSIs for a wide range of input-output band settings. We propose a meta-learning-based super-resolution (MLSR) model that can take an HSI with an arbitrary number of input bands and peak wavelengths and generate an SR HSI with an arbitrary number of output bands and peak wavelengths. We use the NTIRE2020 and ICVL datasets to train and validate the performance of the MLSR model. The results show that the single proposed model can successfully generate super-resolved HSI bands at arbitrary input-output band settings. The results are better than, or at least comparable to, baselines trained separately on specific input-output band settings.
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses, including a high-resolution camera surrounded by multiple low-resolution cameras. The performance of existing methods is still limited, as they produce either blurry results on plain textured areas or distortions around depth discontinuous boundaries. To tackle this challenge, we propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input from two complementary and parallel perspectives. Specifically, one module regresses a spatially consistent intermediate estimation by learning a deep multidimensional and cross-domain feature representation, while the other module warps another intermediate estimation, which maintains the high-frequency textures, by propagating the information of the high-resolution view. We finally leverage the advantages of the two intermediate estimations adaptively via the learned attention maps, leading to the final high-resolution LF image with satisfactory results on both plain textured areas and depth discontinuous boundaries. Besides, to promote the effectiveness of our method trained with simulated hybrid data on real hybrid data captured by a hybrid LF imaging system, we carefully design the network architecture and the training strategy. Extensive experiments on both real and simulated hybrid data demonstrate the significant superiority of our approach over state-of-the-art ones. To the best of our knowledge, this is the first end-to-end deep learning method for LF reconstruction from a real hybrid input. We believe our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
Current learning-based single-image super-resolution (SISR) algorithms underperform on real data due to the bias in the assumed degradation process. The conventional degradation process considers applying blur, noise, and downsampling (typically bicubic downsampling) to high-resolution (HR) images to synthesize low-resolution (LR) counterparts. However, few works on degradation modeling have taken the physical aspects of the optical imaging system into account. In this paper, we analyze the imaging system optically and explore the characteristics of real-world LR-HR pairs in the spatial frequency domain. By considering optics and sensor degradation, we formulate a realistic physics-inspired degradation model; the physical degradation of an imaging system is modeled as a low-pass filter whose cutoff frequency is determined by the object distance, the focal length of the lens, and the pixel size of the image sensor. In particular, we propose to use a convolutional neural network (CNN) to learn the cutoff frequency of the real-world degradation process. The learned network is then applied to synthesize LR images from unpaired HR images. The synthesized HR-LR image pairs are later used to train an SISR network. We evaluate the effectiveness and generalization ability of the proposed degradation model on real-world images captured by different imaging systems. Experimental results show that the SISR network trained with our synthetic data performs favorably against networks trained with traditional degradation models. Moreover, our results are comparable to those obtained by the same network trained on real-world LR-HR pairs, which are challenging to acquire in real scenes.
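The core modeling idea, degradation as a cutoff-limited low-pass filter, can be illustrated with an ideal low-pass in one dimension via the DFT. This is an illustrative sketch only; the paper's learned CNN cutoff estimator is not reproduced here:

```python
import math
import cmath

def ideal_lowpass(signal, cutoff):
    """Zero all DFT bins above `cutoff` cycles per sequence, then invert the DFT.
    A stand-in for the paper's cutoff-limited low-pass degradation model."""
    n = len(signal)
    spectrum = [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
                for k in range(n)]
    for k in range(n):
        if min(k, n - k) > cutoff:   # bins k and n-k form a two-sided pair
            spectrum[k] = 0
    return [sum(spectrum[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

# A pure 1-cycle cosine passes a cutoff of 1 essentially unchanged:
sig = [math.cos(2 * math.pi * t / 8) for t in range(8)]
out = ideal_lowpass(sig, cutoff=1)
print([round(v, 6) for v in out])
```

In the paper's model, the cutoff would be set by the object distance, focal length, and sensor pixel size rather than chosen by hand.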
In this paper, we consider two challenging issues in reference-based super-resolution (RefSR): (i) how to choose a proper reference image, and (ii) how to learn real-world RefSR in a self-supervised manner. In particular, we propose a novel self-supervised learning approach for real-world image SR from observations with dual camera zooms (SelfDZSR). Considering the popularity of multiple cameras in modern smartphones, the more zoomed (telephoto) image can naturally be leveraged as the reference to guide the SR of the less zoomed (short-focus) image. Furthermore, SelfDZSR learns a deep network to obtain an SR result of the short-focus image at the same resolution as the telephoto image. For this purpose, we take the telephoto image, instead of an additional high-resolution image, as the supervision information, and select a center patch from it as the reference for the corresponding short-focus image patch. To mitigate the effect of misalignment between the short-focus low-resolution (LR) image and the telephoto ground-truth (GT) image, we design an auxiliary-LR generator and map the GT to an auxiliary LR while keeping the spatial position unchanged. The auxiliary LR can then be leveraged to deform the LR features via the proposed adaptive spatial transformer network (AdaSTN) and to match the Ref features with the GT. During testing, SelfDZSR can be directly deployed to super-resolve the whole short-focus image with the telephoto image as reference. Experiments show that our method achieves better quantitative and qualitative performance against state-of-the-art methods. Code is available at https://github.com/cszhilu1998/selfdzsr.
Image super-resolution (SR) is a technique to recover lost high-frequency information in low-resolution (LR) images. Spatial-domain information has been widely exploited to implement image SR, so a new trend is to involve frequency-domain information in SR tasks. Besides, image SR is typically application-oriented and various computer vision tasks call for image arbitrary magnification. Therefore, in this paper, we study image features in the frequency domain to design a novel scale-arbitrary image SR network. First, we statistically analyze LR-HR image pairs of several datasets under different scale factors and find that the high-frequency spectra of different images under different scale factors suffer from different degrees of degradation, but the valid low-frequency spectra tend to be retained within a certain distribution range. Then, based on this finding, we devise an adaptive scale-aware feature division mechanism using deep reinforcement learning, which can accurately and adaptively divide the frequency spectrum into the low-frequency part to be retained and the high-frequency one to be recovered. Finally, we design a scale-aware feature recovery module to capture and fuse multi-level features for reconstructing the high-frequency spectrum at arbitrary scale factors. Extensive experiments on public datasets show the superiority of our method compared with state-of-the-art methods.
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process that misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.
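The per-layer channel attention that HAN builds on can be sketched as squeeze (global average pooling), excitation (a gate), and channel rescaling. The two learned fully connected layers of RCAN/HAN-style attention are omitted here as a simplification:

```python
import math

def channel_attention(features):
    """Squeeze-and-excitation-style channel attention: global average pooling
    per channel, a sigmoid gate, then channel rescaling. Each channel is a
    flat list of pixel values; no learned parameters in this sketch."""
    weights = []
    for ch in features:
        pooled = sum(ch) / len(ch)                       # squeeze
        weights.append(1.0 / (1.0 + math.exp(-pooled)))  # excitation (sigmoid gate)
    return [[w * v for v in ch] for w, ch in zip(weights, features)]

feats = [[0.0, 0.0], [4.0, 4.0]]   # a flat channel and a strongly active channel
out = channel_attention(feats)
print(out)  # the active channel passes nearly unchanged; the flat one is gated at 0.5
```

HAN's contribution is to apply this kind of gating not just within a layer but across the hierarchy of layers (LAM) and jointly over channels and positions (CSAM).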
To date, the best-performing blind super-resolution (SR) techniques follow one of two paradigms: A) generate and train a standard SR network on synthetic low-resolution - high-resolution (LR - HR) pairs or B) attempt to predict the degradations an LR image has suffered and use these to inform a customised SR network. Despite significant progress, subscribers to the former miss out on useful degradation information that could be used to improve the SR process. On the other hand, followers of the latter rely on weaker SR networks, which are significantly outperformed by the latest architectural advancements. In this work, we present a framework for combining any blind SR prediction mechanism with any deep SR network, using a metadata insertion block to insert prediction vectors into SR network feature maps. Through comprehensive testing, we prove that state-of-the-art contrastive and iterative prediction schemes can be successfully combined with high-performance SR networks such as RCAN and HAN within our framework. We show that our hybrid models consistently achieve stronger SR performance than both their non-blind and blind counterparts. Furthermore, we demonstrate our framework's robustness by predicting degradations and super-resolving images from a complex pipeline of blurring, noise and compression.
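One common way to realize such a metadata insertion block is to tile each predicted degradation parameter into a constant map and append it to the feature maps. This is a hedged sketch of that idea; the paper's block is learned, and the variable names here are illustrative:

```python
def insert_metadata(feature_maps, degradation_vector):
    """Append each predicted degradation parameter as a constant feature map,
    so later convolutions can condition on the prediction everywhere in space.
    feature_maps: list of H x W channels (nested lists)."""
    h = len(feature_maps[0])
    w = len(feature_maps[0][0])
    extra = [[[p] * w for _ in range(h)] for p in degradation_vector]
    return feature_maps + extra

feats = [[[1.0, 2.0], [3.0, 4.0]]]       # one 2x2 feature map
out = insert_metadata(feats, [0.5])      # e.g. a predicted blur-kernel width
print(len(out), out[1])  # → 2 [[0.5, 0.5], [0.5, 0.5]]
```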
Convolutional Neural Network (CNN)-based image super-resolution (SR) has exhibited impressive success on known degraded low-resolution (LR) images. However, this type of approach struggles to maintain its performance in practical scenarios where the degradation process is unknown. Despite existing blind SR methods proposed to solve this problem using blur kernel estimation, the perceptual quality and reconstruction accuracy are still unsatisfactory. In this paper, we analyze the degradation of a high-resolution (HR) image from its image intrinsic components according to a degradation-based formulation model. We propose a components decomposition and co-optimization network (CDCN) for blind SR. Firstly, CDCN decomposes the input LR image into structure and detail components in feature space. Then, the mutual collaboration block (MCB) is presented to exploit the relationship between the two components. In this way, the detail component can provide informative features to enrich the structural context, and the structure component can carry structural context for better detail revealing, in a mutually complementary manner. After that, we present a degradation-driven learning strategy to jointly supervise the HR image detail and structure restoration process. Finally, a multi-scale fusion module followed by an upsampling layer is designed to fuse the structure and detail features and perform SR reconstruction. Empowered by such degradation-based components decomposition, collaboration, and mutual optimization, we can bridge the correlation between component learning and degradation modelling for blind SR, thereby producing SR results with more accurate textures. Extensive experiments on both synthetic SR datasets and real-world images show that the proposed method achieves state-of-the-art performance compared to existing methods.
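The structure/detail split at the heart of CDCN can be mimicked on a 1D signal: a smoothing filter yields the structure component, the residual is the detail, and the two components always sum back to the input. CDCN performs this decomposition in learned feature space, so the sketch below is only an analogue:

```python
def decompose(signal, radius=1):
    """Split a 1D signal into a smooth structure component (moving average with
    shrinking windows at the borders) and a detail residual."""
    n = len(signal)
    structure = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        structure.append(sum(signal[lo:hi]) / (hi - lo))
    detail = [s - t for s, t in zip(signal, structure)]
    return structure, detail

sig = [1.0, 1.0, 5.0, 1.0, 1.0]   # a flat region with one sharp detail
structure, detail = decompose(sig)
print([round(s + d, 9) for s, d in zip(structure, detail)])  # reconstructs the input
```

The complementary nature of the two parts (everything not in structure lands in detail) is what the mutual collaboration block exploits.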
Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require bicubic interpolation as a pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitating resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of speed and accuracy.
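The Charbonnier loss mentioned above is a smooth, differentiable variant of L1: for large errors it behaves like the absolute difference, while near zero it is rounded instead of kinked. A minimal per-pixel version:

```python
import math

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier loss: sqrt(diff^2 + eps^2), averaged over pixels. The small
    eps keeps the gradient well-defined at zero error."""
    return sum(math.sqrt((p - t) ** 2 + eps ** 2)
               for p, t in zip(pred, target)) / len(pred)

# Zero error contributes eps (0.001); an error of 2 contributes ≈ 2.0000002:
print(round(charbonnier([1.0, 2.0], [1.0, 4.0]), 6))
```

Its robustness to outliers, compared to the squared (L2) loss, is why LapSRN's deep supervision trains stably on high-frequency residuals.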
Despite the plethora of successful super-resolution reconstruction (SRR) models applied to natural images, their application to remote sensing imagery tends to produce poor results. Remote sensing imagery is often more complicated than natural images and has the peculiarities of lower resolution: it contains noise and often depicts large textured surfaces. As a result, applying non-specialized SRR models to remote sensing imagery leads to artifacts and poor reconstructions. To address these problems, this paper proposes an architecture inspired by previous research work, introducing a novel approach for forcing an SRR model to output realistic remote sensing images: instead of relying on feature-space similarity as the perceptual loss, it considers pixel-level information inferred from the normalized digital surface model (nDSM) of the image. This strategy allows more informative updates to be applied during the training of the model, sourced from a task (elevation map inference) that is closely related to remote sensing. Nevertheless, the nDSM auxiliary information is not required during production, and thus the model infers a super-resolution image without any additional data besides its low-resolution pair. We evaluate our model on two remotely sensed datasets of different spatial resolutions that also contain DSM pairs of the images: the DFC2018 dataset and a dataset containing the national LIDAR fly-over of Luxembourg. Based on visual inspection, the inferred super-resolution images exhibit particularly superior quality. In particular, the results for the high-resolution DFC2018 dataset are realistic and almost indistinguishable from the ground-truth images.
Deep learning techniques for image super-resolution (SR) with a fixed scale factor have achieved great success. To improve their real-world applicability, numerous models have also been proposed to restore SR images with arbitrary scale factors, including asymmetric ones where the image is resized to different scales along the horizontal and vertical directions. Though most models are optimized only for the unidirectional upscaling task while assuming a predefined downscaling kernel for low-resolution (LR) inputs, recent models based on invertible neural networks (INNs) are able to significantly improve upscaling accuracy by jointly optimizing the downscaling and upscaling cycle. However, limited by the INN architecture, they are constrained to fixed integer scale factors and require one model per scale. Without increasing model complexity, a simple and effective invertible arbitrary rescaling network (IARN) is proposed in this work to achieve arbitrary image rescaling by training only one model. Using innovative components such as position-aware scale encoding and preemptive channel splitting, the network is optimized to convert the non-invertible rescaling cycle into an effectively invertible process. It is shown to achieve state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in the LR outputs. It is also demonstrated to perform well in tests with asymmetric scales using the same network architecture.
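The invertibility that INN-based rescaling relies on comes from coupling layers, which are exactly invertible regardless of the inner function. A minimal additive-coupling sketch, with a fixed toy function standing in for the learned subnetwork:

```python
def f(v):
    """Toy inner function; in an INN this is an arbitrary learned network,
    and the coupling stays invertible either way."""
    return [3.0 * a + 1.0 for a in v]

def couple(x1, x2):
    """Additive coupling forward pass: y1 = x1, y2 = x2 + f(x1)."""
    return x1, [b + c for b, c in zip(x2, f(x1))]

def uncouple(y1, y2):
    """Exact inverse: x1 = y1, x2 = y2 - f(y1)."""
    return y1, [b - c for b, c in zip(y2, f(y1))]

x1, x2 = [1.0, 2.0], [10.0, 20.0]
y1, y2 = couple(x1, x2)
print(uncouple(y1, y2))  # → ([1.0, 2.0], [10.0, 20.0]), a lossless round trip
```

IARN's contribution, per the abstract, is making this kind of invertible cycle work for arbitrary (including asymmetric, non-integer) scale factors within a single model.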