Image representation is critical for many visual tasks. Instead of representing images with 2D arrays of pixels, a recent study, the local implicit image function (LIIF), denotes an image as a continuous function where pixel values are inferred by using the corresponding coordinates as inputs. Owing to its continuous nature, LIIF can be adopted for arbitrary-scale image super-resolution tasks, resulting in a single effective and efficient model for various up-scaling factors. However, LIIF often suffers from structural distortions and ringing artifacts around edges, mostly because all pixels share the same model, which ignores the local properties of the image. In this paper, we propose a novel adaptive local image function (A-LIIF) to alleviate this problem. Specifically, our A-LIIF consists of two main components: an encoder and an expansion network. The former captures cross-scale image features, while the latter models the continuous up-scaling function by a weighted combination of multiple local implicit image functions. Accordingly, our A-LIIF can reconstruct high-frequency textures and structures more accurately. Experiments on multiple benchmark datasets verify the effectiveness of our method. Our code is available at \url{https://github.com/leehw-thu/a-liif}.
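A minimal sketch of the weighted-combination idea described above: instead of one shared implicit function, keep a bank of K local implicit functions and blend their predictions with per-query weights. All module names and sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaptiveLIIF(nn.Module):
    def __init__(self, feat_dim=64, num_funcs=4, hidden=256):
        super().__init__()
        in_dim = feat_dim + 2            # local feature + relative (x, y) coordinate
        self.funcs = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 3))      # RGB output
            for _ in range(num_funcs)
        ])
        self.weight_head = nn.Sequential(            # predicts mixing weights per query
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_funcs)
        )

    def forward(self, feat, rel_coord):
        # feat: (N, feat_dim) local encoder features; rel_coord: (N, 2)
        x = torch.cat([feat, rel_coord], dim=-1)
        w = torch.softmax(self.weight_head(x), dim=-1)           # (N, K)
        preds = torch.stack([f(x) for f in self.funcs], dim=1)   # (N, K, 3)
        return (w.unsqueeze(-1) * preds).sum(dim=1)              # weighted RGB

rgb = AdaptiveLIIF()(torch.randn(8, 64), torch.rand(8, 2) - 0.5)
```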
The recent success of NeRF and other related implicit neural representation methods has opened a new avenue for continuous image representation, where pixel values no longer need to be looked up from stored discrete 2D arrays but can be inferred from neural network models over a continuous spatial domain. Although the recent work LIIF has demonstrated that this novel approach can achieve good performance on arbitrary-scale super-resolution tasks, its up-scaled images frequently show structural distortion due to inaccurate prediction of high-frequency textures. In this work, we propose UltraSR, a simple yet effective new network design based on implicit image functions, in which we deeply integrate spatial coordinates and periodic encoding with the implicit neural representation. Through extensive experiments and ablation studies, we show that spatial encoding is the missing key toward the next stage of high-performing implicit image functions. Compared with previous state-of-the-art methods, our UltraSR sets new state-of-the-art performance on the DIV2K benchmark under all super-resolution scales. UltraSR also achieves superior performance on other standard benchmark datasets, outperforming prior works in almost all experiments.
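A minimal sketch of the kind of periodic spatial encoding such models feed to the implicit function: the relative coordinate is lifted into sinusoids of geometrically increasing frequency before being concatenated with the deep features. The frequency count and base are illustrative choices, not UltraSR's exact configuration.

```python
import torch

def periodic_encoding(coord: torch.Tensor, num_freqs: int = 10) -> torch.Tensor:
    # coord: (N, 2) relative coordinates; returns (N, 4 * num_freqs)
    freqs = 2.0 ** torch.arange(num_freqs, dtype=coord.dtype)   # 1, 2, 4, ...
    angles = coord.unsqueeze(-1) * freqs * torch.pi             # (N, 2, F)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(1)                                       # (N, 4F)

coord = torch.rand(8, 2) - 0.5
print(periodic_encoding(coord).shape)   # torch.Size([8, 40])
```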
Are center positions fully capable of representing a pixel? There is no error in representing pixels with their centers in a discrete image representation, but it makes more sense to consider each pixel as the aggregation of signals over a local area in the context of image super-resolution (SR). Despite the great capability of coordinate-based implicit representations in the field of arbitrary-scale image SR, this area-based nature of pixels is not fully considered. To this end, we propose integrated positional encoding (IPE), which extends traditional positional encoding by aggregating frequency information over the pixel area. We apply IPE to a state-of-the-art arbitrary-scale image super-resolution method, the local implicit image function (LIIF), presenting IPE-LIIF. We show the effectiveness of IPE-LIIF through quantitative and qualitative evaluations, and further demonstrate the generalization ability of IPE to larger image scales and to multiple implicit-representation-based methods. Code will be released.
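A minimal sketch of integrated positional encoding in the spirit described above: rather than encoding the pixel-center point, encode the expected sinusoid over the pixel's footprint, here modeled as a Gaussian with variance derived from the pixel size via the closed form E[sin(f x)] = exp(-f² σ²/2)·sin(f μ). The authors' exact formulation may differ; this is an assumption-laden illustration.

```python
import torch

def integrated_pe(mu, sigma2, num_freqs=10):
    # mu: (N, 2) pixel-center coordinates; sigma2: (N, 2) footprint variance
    freqs = 2.0 ** torch.arange(num_freqs, dtype=mu.dtype) * torch.pi
    ang = mu.unsqueeze(-1) * freqs                              # (N, 2, F)
    damp = torch.exp(-0.5 * sigma2.unsqueeze(-1) * freqs ** 2)  # frequency damping
    return torch.cat([damp * torch.sin(ang), damp * torch.cos(ang)], dim=-1).flatten(1)

mu = torch.rand(8, 2)
cell = torch.full((8, 2), 1.0 / 256)               # pixel size in the target grid
print(integrated_pe(mu, (cell ** 2) / 12).shape)   # variance of a uniform pixel
```

Note how high frequencies are attenuated for large pixels: a coarse query area suppresses detail the pixel cannot resolve, which is exactly the area-aware behavior the abstract argues for.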
Learning continuous image representations is recently gaining popularity for image super-resolution (SR) because of its ability to reconstruct high-resolution images with arbitrary scales from low-resolution inputs. Existing methods mostly ensemble nearby features to predict the new pixel at any queried coordinate in the SR image. Such a local ensemble suffers from some limitations: i) it has no learnable parameters and it neglects the similarity of the visual features; ii) it has a limited receptive field and cannot ensemble relevant features in a large field which are important in an image; iii) it inherently has a gap with real camera imaging since it only depends on the coordinate. To address these issues, this paper proposes a continuous implicit attention-in-attention network, called CiaoSR. We explicitly design an implicit attention network to learn the ensemble weights for the nearby local features. Furthermore, we embed a scale-aware attention in this implicit attention network to exploit additional non-local information. Extensive experiments on benchmark datasets demonstrate CiaoSR significantly outperforms the existing single image super resolution (SISR) methods with the same backbone. In addition, the proposed method also achieves the state-of-the-art performance on the arbitrary-scale SR task. The effectiveness of the method is also demonstrated on the real-world SR setting. More importantly, CiaoSR can be flexibly integrated into any backbone to improve the SR performance.
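A minimal sketch of the "implicit attention" ensemble idea: instead of fixed bilinear weights over the four nearest latent codes, learn the ensemble weights with a query/key/value attention over those neighbors. Dimensions and projections are illustrative assumptions, not CiaoSR's actual design (which additionally embeds scale-aware, non-local attention).

```python
import torch
import torch.nn as nn

class ImplicitAttentionEnsemble(nn.Module):
    def __init__(self, feat_dim=64, dim=64):
        super().__init__()
        self.q = nn.Linear(2, dim)             # query built from relative coordinates
        self.k = nn.Linear(feat_dim + 2, dim)  # keys from neighbor features + offsets
        self.v = nn.Linear(feat_dim + 2, 3)    # values decode to RGB

    def forward(self, neigh_feat, rel_coord):
        # neigh_feat: (N, 4, feat_dim) four nearest latent codes
        # rel_coord:  (N, 4, 2) offsets from the query point to each neighbor
        q = self.q(rel_coord.mean(dim=1, keepdim=True))          # (N, 1, D)
        kv_in = torch.cat([neigh_feat, rel_coord], dim=-1)
        k, v = self.k(kv_in), self.v(kv_in)                      # (N, 4, D), (N, 4, 3)
        attn = torch.softmax((q * k).sum(-1) / k.shape[-1] ** 0.5, dim=-1)
        return (attn.unsqueeze(-1) * v).sum(dim=1)               # learned ensemble

out = ImplicitAttentionEnsemble()(torch.randn(8, 4, 64), torch.rand(8, 4, 2))
```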
Nowadays, there is an explosive growth of screen content due to the wide application of screen sharing, remote collaboration, and online education. To match the limited terminal bandwidth, high-resolution (HR) screen content may be downsampled and compressed. On the receiver side, super-resolution (SR) of low-resolution (LR) screen content images (SCIs) is highly demanded by HR displays or by users who zoom in for detailed observation. However, image SR methods designed mostly for natural images do not generalize well to SCIs, owing to their very different image characteristics as well as the requirement of browsing SCIs at arbitrary scales. To this end, we propose a novel implicit transformer super-resolution network (ITSRN) for SCI SR. For high-quality continuous SR at arbitrary ratios, pixel values at query coordinates are inferred from image features at key coordinates by the proposed implicit transformer, and an implicit position encoding scheme is proposed to aggregate neighboring pixel values that are similar to the query one. We construct benchmark SCI1K and SCI1K-compression datasets with LR and HR SCI pairs. Extensive experiments show that the proposed ITSRN significantly outperforms several competitive continuous and discrete SR methods for both compressed and uncompressed SCIs.
How to represent an image? While the visual world is presented in a continuous manner, machines store and see images in a discrete way with 2D arrays of pixels. In this paper, we seek to learn a continuous representation for images. Inspired by the recent progress in 3D reconstruction with implicit neural representation, we propose the Local Implicit Image Function (LIIF), which takes an image coordinate and the 2D deep features around the coordinate as inputs, and predicts the RGB value at the given coordinate as output. Since the coordinates are continuous, LIIF can be presented in arbitrary resolution. To generate the continuous representation for images, we train an encoder with LIIF representation via a self-supervised task with super-resolution. The learned continuous representation can be presented in arbitrary resolution, even extrapolating to ×30 higher resolution, where the training tasks are not provided. We further show that LIIF representation builds a bridge between discrete and continuous representation in 2D; it naturally supports learning tasks with size-varied image ground-truths and significantly outperforms the method of resizing the ground-truths. Our project page with code is at https://yinboc.github.io/liif/.
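A minimal sketch of a LIIF-style query, assuming an encoder has already produced a latent feature map: each continuous coordinate picks its nearest latent code, and a shared MLP maps (code, relative offset, cell size) to RGB. This mirrors the formulation above; hyper-parameters are illustrative, and the local ensemble used in the paper is omitted for brevity.

```python
import torch
import torch.nn as nn

mlp = nn.Sequential(nn.Linear(64 + 2 + 2, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 3))

def liif_query(feat_map, coord, cell):
    # feat_map: (C, H, W) latent codes; coord: (N, 2) in [0, 1); cell: (N, 2)
    C, H, W = feat_map.shape
    iy = (coord[:, 0] * H).long().clamp(0, H - 1)   # nearest latent code index
    ix = (coord[:, 1] * W).long().clamp(0, W - 1)
    code = feat_map[:, iy, ix].t()                  # (N, C)
    center = torch.stack([(iy.float() + 0.5) / H,   # latent code's own position
                          (ix.float() + 0.5) / W], dim=-1)
    rel = coord - center                            # continuous offset to the query
    return mlp(torch.cat([code, rel, cell], dim=-1))

rgb = liif_query(torch.randn(64, 32, 32), torch.rand(16, 2), torch.full((16, 2), 1 / 64))
```

Because `coord` and `cell` are continuous, the same trained `mlp` can be queried on a 64×64 grid or a 2048×2048 grid, which is what makes arbitrary-scale SR possible with one model.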
Recent work on implicit neural functions has shed light on representing images at arbitrary resolution. However, a standalone multilayer perceptron (MLP) shows limited performance in learning high-frequency components. In this paper, we propose a local texture estimator (LTE), a dominant-frequency estimator for natural images, which enables an implicit function to capture fine details while reconstructing images in a continuous manner. When jointly trained with a deep super-resolution (SR) architecture, LTE is able to characterize image textures in 2D Fourier space. We show that an LTE-based neural function outperforms existing deep SR methods within an arbitrary-scale setting, on all datasets and for all scale factors. Furthermore, our implementation achieves the shortest running time compared with previous works. The source code will be open-sourced.
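A minimal sketch of the dominant-frequency idea behind LTE: small heads estimate amplitudes, frequencies, and phases from the local feature, and the coordinate is projected onto those estimated frequencies to produce a Fourier feature for the decoding MLP. Head sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LocalTextureEstimator(nn.Module):
    def __init__(self, feat_dim=64, n_freq=128):
        super().__init__()
        self.amp = nn.Linear(feat_dim, n_freq)
        self.freq = nn.Linear(feat_dim, n_freq * 2)   # one 2D frequency per channel
        self.phase = nn.Linear(feat_dim, n_freq)

    def forward(self, feat, rel_coord):
        # feat: (N, feat_dim); rel_coord: (N, 2) offset to the query point
        a = self.amp(feat)                                        # (N, F)
        f = self.freq(feat).view(feat.shape[0], -1, 2)            # (N, F, 2)
        proj = (f * rel_coord.unsqueeze(1)).sum(-1) + self.phase(feat)
        return torch.cat([a * torch.cos(torch.pi * proj),
                          a * torch.sin(torch.pi * proj)], dim=-1)  # Fourier feature

tex = LocalTextureEstimator()(torch.randn(8, 64), torch.rand(8, 2) - 0.5)
```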
Deep learning techniques have been used to increase the performance of single image super-resolution (SISR). However, most existing CNN-based SISR approaches primarily focus on establishing deeper or larger networks to extract more significant high-level features. Usually, the pixel-level loss between the target high-resolution image and the estimated image is used, but the neighbor relations between pixels in the image are seldom used. On the other hand, according to observations, a pixel's neighbor relationship contains rich information about the spatial structure, local context, and structural knowledge. Based on this fact, in this paper, we utilize pixels' neighbor relationships from a different perspective, and we propose the differences of neighboring pixels to regularize the CNN by constructing a graph from the estimated image and the ground-truth image. The proposed method outperforms the state-of-the-art methods in terms of quantitative and qualitative evaluation on the benchmark datasets. Keywords: Super-resolution, Convolutional Neural Networks, Deep Learning
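A minimal sketch of a neighbor-difference regularizer in the spirit described above: compare the differences between each pixel and its horizontal/vertical neighbors in the estimated image against the same differences in the ground truth, and penalize the mismatch on top of the usual pixel loss. The weighting and neighborhood choice are illustrative, not the paper's exact graph construction.

```python
import torch
import torch.nn.functional as F

def neighbor_difference_loss(pred, target):
    # pred, target: (B, C, H, W)
    dx_p = pred[..., :, 1:] - pred[..., :, :-1]      # horizontal neighbor differences
    dx_t = target[..., :, 1:] - target[..., :, :-1]
    dy_p = pred[..., 1:, :] - pred[..., :-1, :]      # vertical neighbor differences
    dy_t = target[..., 1:, :] - target[..., :-1, :]
    return F.l1_loss(dx_p, dx_t) + F.l1_loss(dy_p, dy_t)

pred, target = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
loss = F.l1_loss(pred, target) + 0.1 * neighbor_difference_loss(pred, target)
```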
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process that misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.
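A minimal sketch of a layer attention module in the spirit of LAM: features from N intermediate layers are stacked, an N×N correlation matrix across layers is computed, and each layer's features are re-weighted by that matrix. Shapes and the residual scaling are illustrative assumptions, not HAN's exact design.

```python
import torch
import torch.nn as nn

class LayerAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1))   # learnable residual scale

    def forward(self, feats):
        # feats: (B, N, C, H, W), outputs of N intermediate layers
        B, N, C, H, W = feats.shape
        flat = feats.reshape(B, N, -1)                             # (B, N, CHW)
        corr = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)  # (B, N, N)
        out = (corr @ flat).reshape(B, N, C, H, W)                 # re-weight layers
        return self.scale * out + feats                            # residual connection

out = LayerAttention()(torch.randn(2, 4, 16, 8, 8))
```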
High Resolution (HR) medical images provide rich anatomical structure details to facilitate early and accurate diagnosis. In MRI, restricted by hardware capacity, scan time, and patient cooperation ability, isotropic 3D HR image acquisition typically requires a long scan time and results in small spatial coverage and low SNR. Recent studies showed that, with deep convolutional neural networks, isotropic HR MR images can be recovered from low-resolution (LR) input via single image super-resolution (SISR) algorithms. However, most existing SISR methods tend to learn a scale-specific projection between LR and HR images, and thus can only deal with a fixed up-sampling rate. For achieving different up-sampling rates, multiple SR networks have to be built up separately, which is very time-consuming and resource-intensive. In this paper, we propose ArSSR, an Arbitrary Scale Super-Resolution approach for recovering 3D HR MR images. In the ArSSR model, the reconstruction of HR images with different up-scaling rates is defined as learning a continuous implicit voxel function from the observed LR images. Then the SR task is converted to representing the implicit voxel function via deep neural networks from a set of paired HR-LR training examples. The ArSSR model consists of an encoder network and a decoder network. Specifically, the convolutional encoder network extracts feature maps from the LR input images, and the fully-connected decoder network approximates the implicit voxel function. Due to the continuity of the learned function, a single ArSSR model can achieve arbitrary up-sampling rate reconstruction of HR images from any input LR image after training. Experimental results on three datasets show that the ArSSR model can achieve state-of-the-art SR performance for 3D HR MR image reconstruction while using a single trained model to achieve arbitrary up-sampling scales.
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis from mostly low-resolution (LR) inputs. Our method is built upon Neural Radiance Fields (NeRF), which predicts per-point density and color with a multilayer perceptron. While NeRF can produce images at arbitrary scales, it struggles at resolutions beyond those of the observed images. Our key insight is that NeRF has a local prior, meaning that predictions for a 3D point can be propagated to nearby regions and remain accurate. We first exploit this with a supersampling strategy that shoots multiple rays at each image pixel, which enforces multi-view constraints at the sub-pixel level. Then, we show that NeRF-SR can further boost the performance of supersampling with a refinement network that leverages the estimated depth to hallucinate details from related patches on an HR reference image. Experimental results demonstrate that NeRF-SR generates high-quality results for HR novel view synthesis on both synthetic and real-world datasets.
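A minimal sketch of the supersampling strategy described above: each HR pixel is covered by an s×s grid of sub-pixel ray origins, the radiance field is queried once per sub-ray, and the colors are averaged back into the pixel. `render_ray` stands in for a full NeRF ray-marching pass and is a hypothetical placeholder, not part of any real API.

```python
import torch

def supersample_pixel(render_ray, pixel_xy, pixel_size, s=2):
    # pixel_xy: (2,) image-plane center of the pixel; pixel_size: scalar extent
    offs = (torch.arange(s, dtype=torch.float32) + 0.5) / s - 0.5   # sub-pixel grid
    gy, gx = torch.meshgrid(offs, offs, indexing="ij")
    sub_xy = pixel_xy + torch.stack([gx, gy], -1).reshape(-1, 2) * pixel_size
    colors = torch.stack([render_ray(xy) for xy in sub_xy])        # (s*s, 3)
    return colors.mean(dim=0)   # multi-view constraints act at the sub-pixel level

# stand-in radiance function for demonstration only
fake_nerf = lambda xy: torch.sigmoid(torch.cat([xy, xy.sum(dim=0, keepdim=True)]))
print(supersample_pixel(fake_nerf, torch.tensor([0.5, 0.5]), 1 / 128))
```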
Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge [26].
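A minimal sketch of the simplified residual block that this design popularized: conventional ResNet blocks are stripped of batch normalization, and in the larger models the residual branch is scaled by a small constant to stabilize training. The channel count and scaling factor follow common practice but are illustrative here.

```python
import torch
import torch.nn as nn

class EDSRBlock(nn.Module):
    def __init__(self, channels=256, res_scale=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),                     # no batch norm anywhere
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.res_scale = res_scale

    def forward(self, x):
        return x + self.res_scale * self.body(x)       # scaled residual branch

y = EDSRBlock()(torch.randn(1, 256, 32, 32))
```

Removing batch normalization is the "removing unnecessary modules" step the abstract refers to: it frees memory, and avoids the range re-normalization that hurts SR, allowing the model size to be expanded instead.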
Hyperspectral image (HSI) super-resolution without additional auxiliary images remains a constant challenge due to the high-dimensional spectral patterns, where learning effective spatial and spectral representations is a fundamental issue. Recently, implicit neural representations (INRs) have been making strides as a novel and effective representation, especially for reconstruction tasks. In this work, we therefore propose a novel HSI reconstruction model based on INR, which represents an HSI as a continuous function mapping spatial coordinates to their corresponding spectral radiance values. In particular, as a specific implementation of INR, the parameters of the parametric model are predicted by a hypernetwork operating on features extracted with a convolutional network. This makes the continuous function map spatial coordinates to pixel values in a content-aware manner. Moreover, periodic spatial encoding is deeply integrated into the reconstruction procedure, which enables our model to recover finer high-frequency details. To verify the efficacy of our model, we conduct experiments on three HSI datasets (CAVE, NUS, and NTIRE2018). The experimental results show that the proposed model can achieve competitive reconstruction performance compared with state-of-the-art methods. In addition, we provide an ablation study on the effect of each component of our model. We hope this paper can serve as a useful reference for future research.
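A minimal sketch of the hypernetwork idea described above: a content feature extracted by a convolutional encoder is mapped to the weights and bias of a small coordinate-to-spectrum layer, so the implicit function is conditioned on image content. Layer sizes, including the 31-band output, are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HyperLayer(nn.Module):
    def __init__(self, feat_dim=64, in_dim=2, out_dim=31):   # e.g. 31 spectral bands
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # predicts a full weight matrix and bias for the inner layer
        self.hyper = nn.Linear(feat_dim, in_dim * out_dim + out_dim)

    def forward(self, feat, coord):
        # feat: (N, feat_dim) per-location content feature; coord: (N, in_dim)
        params = self.hyper(feat)
        W = params[:, : self.in_dim * self.out_dim].view(-1, self.out_dim, self.in_dim)
        b = params[:, self.in_dim * self.out_dim :]
        return torch.einsum("noi,ni->no", W, coord) + b   # content-aware mapping

spec = HyperLayer()(torch.randn(8, 64), torch.rand(8, 2))
```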
In this paper, we present D2C-SR, a novel framework for the task of real-world image super-resolution. As an ill-posed problem, the key challenge in super-resolution-related tasks is that there can be multiple predictions for a given low-resolution input. Most classical deep-learning-based methods ignore this fundamental fact and lack explicit modeling of the underlying high-frequency distribution, which leads to blurred results. Recently, some methods based on GANs or on learning the super-resolution space can generate simulated textures but cannot promise the accuracy of those textures, resulting in low quantitative performance. Rethinking both, we learn the distribution of the underlying high-frequency details in a discrete form and propose a two-stage pipeline: a divergence stage followed by a convergence stage. In the divergence stage, we propose a tree-structured deep network as the divergence backbone, together with a divergence loss that encourages the results produced by the tree-based network to diverge into possible high-frequency representations; this is how we discretely model the underlying high-frequency distribution. In the convergence stage, we assign spatial weights to fuse these divergent predictions and obtain the final output with more accurate details. Our method provides a convenient end-to-end manner for inference. We evaluate on several real-world benchmarks, including the newly proposed D2CRealSR dataset with an x8 scaling factor. Our experiments demonstrate that D2C-SR achieves better accuracy and visual improvements over state-of-the-art methods with a significantly smaller number of parameters, and our D2C structure can also be applied as a generalized structure to other methods to obtain improvements. Our code and dataset are available at https://github.com/megvii-research/d2c-sr
Neural volumetric representations have become a widely adopted model for radiance fields in 3D scenes. These representations are fully implicit or hybrid function approximators of the instantaneous volumetric radiance in a scene, which are typically learned from multi-view captures of the scene. We investigate the new task of neural volume super-resolution - rendering high-resolution views corresponding to a scene captured at low resolution. To this end, we propose a neural super-resolution network that operates directly on the volumetric representation of the scene. This approach allows us to exploit an advantage of operating in the volumetric domain, namely the ability to guarantee consistent super-resolution across different viewing directions. To realize our method, we devise a novel 3D representation that hinges on multiple 2D feature planes. This allows us to super-resolve the 3D scene representation by applying 2D convolutional networks on the 2D feature planes. We validate the proposed method's capability of super-resolving multi-view consistent views both quantitatively and qualitatively on a diverse set of unseen 3D scenes, demonstrating a significant advantage over existing approaches.
Reference-based Super-resolution (RefSR) approaches have recently been proposed to overcome the ill-posed problem of image super-resolution by providing additional information from a high-resolution image. Multi-reference super-resolution extends this approach by allowing more information to be incorporated. This paper proposes a 2-step-weighting posterior fusion approach to combine the outputs of RefSR models with multiple references. Extensive experiments on the CUFED5 dataset demonstrate that the proposed methods can be applied to various state-of-the-art RefSR models to get a consistent improvement in image quality.
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses, including a high-resolution camera surrounded by multiple low-resolution cameras. The performance of existing methods is still limited, as they produce either blurry results on plain textured areas or distortions around depth discontinuous boundaries. To tackle this challenge, we propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input from two complementary and parallel perspectives. Specifically, one module regresses a spatially consistent intermediate estimation by learning a deep multidimensional and cross-domain feature representation, while the other module warps another intermediate estimation, which maintains the high-frequency textures, by propagating the information of the high-resolution view. We finally leverage the advantages of the two intermediate estimations adaptively via the learned attention maps, leading to the final high-resolution LF image with satisfactory results on both plain textured areas and depth discontinuous boundaries. Besides, to promote the effectiveness of our method trained with simulated hybrid data on real hybrid data captured by a hybrid LF imaging system, we carefully design the network architecture and the training strategy. Extensive experiments on both real and simulated hybrid data demonstrate the significant superiority of our approach over state-of-the-art ones. To the best of our knowledge, this is the first end-to-end deep learning method for LF reconstruction from a real hybrid input. We believe our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
Compressed image super-resolution has received great attention in recent years, where images are degraded by both compression artifacts and low-resolution artifacts. Owing to the complex hybrid distortions, it is hard to restore the distorted image through a simple cooperation of super-resolution and compression artifact removal. In this paper, we take a step forward and propose the hierarchical Swin transformer (HST) network to restore low-resolution compressed images, which jointly captures hierarchical feature representations and enhances the representation at each scale with a Swin transformer. Moreover, we find that pretraining with a super-resolution (SR) task is vital for compressed image super-resolution. To explore the effects of different SR pretraining, we take commonly used SR tasks (e.g., bicubic and different real-world super-resolution simulations) as our pretraining tasks, and reveal that SR plays an irreplaceable role in compressed image super-resolution. With the cooperation of HST and pretraining, our HST achieved fifth place in the AIM 2022 challenge on the low-quality compressed image super-resolution track, with a PSNR of 23.51 dB. Extensive experiments and ablation studies have validated the effectiveness of our proposed method.
As a severely ill-posed problem, single image super-resolution (SISR) has been widely investigated in recent years. The main task of SISR is to recover the information loss caused by the degradation procedure. According to the Nyquist sampling theorem, the degradation leads to aliasing effects and makes the correct textures of low-resolution (LR) images hard to restore. In practice, there exist correlations and self-similarities among the adjacent patches of natural images. This paper takes this self-similarity into account and proposes a hierarchical image super-resolution network (HSRNet) to suppress the influence of aliasing. We consider the SISR problem from an optimization perspective and propose an iterative solution pattern based on the half-quadratic splitting (HQS) method. To explore textures with local image priors, we design a hierarchical exploration block (HEB) that progressively enlarges the receptive field. Furthermore, a multi-level spatial attention (MSA) module is devised to obtain the relations of neighboring features and to enhance high-frequency information, which plays a key role in visual experience. Experimental results show that HSRNet achieves better quantitative and visual performance than other works and remits the aliasing more effectively.
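A rough sketch of an unrolled HQS-style iteration for SISR as framed above: the problem min_x ||y - Dx||² + λΦ(x) is split with an auxiliary variable, alternating a data-consistency update and a prior step realized by a learned network. Here the exact data subproblem is replaced by a simpler gradient-style back-projection update, and `prior_net` is a hypothetical stand-in for the learned hierarchical module, not HSRNet's actual layer.

```python
import torch
import torch.nn.functional as F

def hqs_sr(y, prior_net, scale=2, iters=4, mu=0.5):
    # y: (B, C, h, w) LR observation; returns an HR estimate
    x = F.interpolate(y, scale_factor=scale, mode="bicubic", align_corners=False)
    for _ in range(iters):
        # data-consistency step: pull the estimate toward reproducing y
        down = F.interpolate(x, scale_factor=1 / scale, mode="bicubic",
                             align_corners=False)
        up_err = F.interpolate(y - down, scale_factor=scale, mode="bicubic",
                               align_corners=False)
        z = x + mu * up_err
        # prior step: proximal operator realized by a learned network
        x = prior_net(z)
    return x

sr = hqs_sr(torch.rand(1, 3, 32, 32), prior_net=lambda z: z.clamp(0, 1))
```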
Blind image super-resolution (SR) is a long-standing task in computer vision that aims to restore low-resolution images suffering from unknown and complex distortions. Recent work has mostly focused on adopting more complicated degradation models to emulate real-world degradations. The resulting models have made breakthroughs in perceptual loss and yield perceptually convincing results. However, the limitations imposed by current generative adversarial network structures remain significant: treating all pixels equally ignores the structural features of an image and leads to performance drawbacks such as twisted lines and over-sharpened or blurred backgrounds. In this paper, we present A-ESRGAN, a GAN model for the blind SR task featuring an attention U-Net-based, multi-scale discriminator that can be seamlessly integrated with other generators. To our knowledge, this is the first work to introduce the attention U-Net structure as the discriminator of a GAN for solving the blind SR problem. This paper also gives an interpretation of the mechanism behind the multi-scale attention U-Net that brings the performance breakthrough to the model. Through comparison experiments with prior works, our model achieves state-of-the-art performance on the no-reference natural image quality evaluator metric. Our ablation studies show that, with our discriminator, an RRDB-based generator can leverage the structural features of an image at multiple scales and consequently yields more perceptually realistic high-resolution images compared with prior works.