The availability of reliable, high-resolution climate and weather data is important to inform long-term decisions on climate adaptation and mitigation and to guide rapid responses to extreme events. Forecasting models are limited by computational costs and therefore often predict quantities at a coarse spatial resolution. Statistical downscaling can provide an efficient method of upsampling low-resolution data. In this field, deep learning has been applied successfully, often drawing on methods from the super-resolution domain in computer vision. Despite often achieving convincing results, such models frequently violate conservation laws when predicting physical variables. In order to conserve important physical quantities, we develop methods that guarantee physical constraints are satisfied by a deep downscaling model, while also improving its performance according to traditional metrics. We introduce two ways of constraining the network: a renormalization layer added at the end of the neural network and a successive approach that scales with increasing upsampling factors. We show the applicability of our methods across different popular architectures and higher upsampling factors using ERA5 reanalysis data.
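The renormalization layer itself is not specified in the abstract; the following is a minimal sketch of one way such a constraint layer could be implemented, assuming the conserved quantity is the mean of each coarse cell over its N×N high-resolution footprint and the field is non-negative (the class name and the multiplicative correction are assumptions, not the authors' exact formulation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RenormalizationLayer(nn.Module):
    """Rescale each upsampled N x N patch so that its mean matches the corresponding
    coarse-cell value, so the downscaled quantity is conserved by construction.
    The multiplicative correction assumes a non-negative field such as precipitation;
    this is a sketch, not necessarily the paper's exact formulation."""

    def __init__(self, scale: int):
        super().__init__()
        self.scale = scale

    def forward(self, hr_pred: torch.Tensor, lr_input: torch.Tensor) -> torch.Tensor:
        # hr_pred: (B, C, H*scale, W*scale) raw network output
        # lr_input: (B, C, H, W) coarse field whose cell means must be conserved
        patch_mean = F.avg_pool2d(hr_pred, self.scale)            # mean of each N x N patch
        correction = lr_input / (patch_mean + 1e-8)               # per-cell rescaling factor
        correction = F.interpolate(correction, scale_factor=self.scale, mode="nearest")
        return hr_pred * correction                                # patch means now equal lr_input


# usage: wrap any super-resolution backbone's raw output
layer = RenormalizationLayer(scale=4)
lr = torch.rand(2, 1, 16, 16)
hr = torch.rand(2, 1, 64, 64)
out = layer(hr, lr)
assert torch.allclose(F.avg_pool2d(out, 4), lr, atol=1e-5)        # conservation check
```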
This work presents a physics-informed deep learning-based super-resolution framework to enhance the spatio-temporal resolution of the solution of time-dependent partial differential equations (PDE). Prior works on deep learning-based super-resolution models have shown promise in accelerating engineering design by reducing the computational expense of traditional numerical schemes. However, these models heavily rely on the availability of high-resolution (HR) labeled data needed during training. In this work, we propose a physics-informed deep learning-based framework to enhance the spatial and temporal resolution of coarse-scale (both in space and time) PDE solutions without requiring any HR data. The framework consists of two trainable modules independently super-resolving the PDE solution, first in spatial and then in temporal direction. The physics based losses are implemented in a novel way to ensure tight coupling between the spatio-temporally refined outputs at different times and improve framework accuracy. We analyze the capability of the developed framework by investigating its performance on an elastodynamics problem. It is observed that the proposed framework can successfully super-resolve (both in space and time) the low-resolution PDE solutions while satisfying physics-based constraints and yielding high accuracy. Furthermore, the analysis and obtained speed-up show that the proposed framework is well-suited for integration with traditional numerical methods to reduce computational complexity during engineering design.
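The abstract does not give the physics-based losses; purely as an illustration of the kind of finite-difference PDE residual penalty such frameworks typically add, here is a sketch for a 1-D wave equation u_tt = c^2 u_xx standing in for the elastodynamics operator (grid spacings and names are assumed):

```python
import torch

def wave_residual_loss(u: torch.Tensor, c: float, dx: float, dt: float) -> torch.Tensor:
    """Mean squared residual of u_tt - c^2 * u_xx on a (T, X) space-time grid,
    computed with central finite differences; an illustrative stand-in for the
    paper's elastodynamics constraints."""
    u_tt = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dt ** 2
    u_xx = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx ** 2
    residual = u_tt - c ** 2 * u_xx
    return (residual ** 2).mean()


# usage: penalize a spatio-temporally refined field so it approximately satisfies the PDE
u_sr = torch.randn(64, 128, requires_grad=True)   # refined solution on the fine grid
loss = wave_residual_loss(u_sr, c=1.0, dx=0.01, dt=0.005)
loss.backward()
```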
Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4× upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.
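As a rough sketch of how a perceptual loss of the kind described (a VGG content term plus an adversarial term) is commonly assembled in PyTorch; the VGG layer cut-off, loss weighting, and discriminator interface are placeholders rather than the authors' exact configuration:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

# Frozen VGG19 feature extractor for the content loss (the layer cut-off is
# illustrative; ImageNet normalization of inputs is omitted for brevity).
vgg_features = vgg19(weights=VGG19_Weights.DEFAULT).features[:36].eval()
for p in vgg_features.parameters():
    p.requires_grad = False

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()

def perceptual_loss(sr, hr, disc_logits_sr, adv_weight=1e-3):
    """Content loss in VGG feature space plus an adversarial term that rewards
    fooling the discriminator (the weighting is a common choice, not necessarily SRGAN's)."""
    content = mse(vgg_features(sr), vgg_features(hr))
    adversarial = bce(disc_logits_sr, torch.ones_like(disc_logits_sr))
    return content + adv_weight * adversarial


# usage with placeholder tensors standing in for generator and discriminator outputs
sr, hr = torch.rand(1, 3, 96, 96), torch.rand(1, 3, 96, 96)
print(perceptual_loss(sr, hr, disc_logits_sr=torch.randn(1, 1)))
```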
Because of the necessity to obtain high-quality images with minimal radiation doses, such as in low-field magnetic resonance imaging (MRI), super-resolution reconstruction in medical imaging has become more popular. However, due to the complexity and high aesthetic requirements of medical imaging, image super-resolution reconstruction remains a difficult challenge. In this paper, we offer a deep learning-based strategy for reconstructing medical images from low resolutions utilizing Transformer and Generative Adversarial Networks (T-GAN). The integrated system can extract more precise texture information and focus more on important locations through global image matching after successfully inserting Transformer into the generative adversarial network for image reconstruction. Furthermore, we weighted the combination of content loss, adversarial loss, and adversarial feature loss as the final multi-task loss function during the training of our proposed model T-GAN. In terms of established measures like PSNR and SSIM, our proposed T-GAN achieves optimal performance and recovers more texture features in super-resolution reconstruction of MRI scanned images of the knees and belly.
Despite continual improvements, precipitation forecasts are still not as accurate and reliable as those of other meteorological variables. A major contributing factor to this is that several key processes affecting precipitation distribution and intensity occur below the resolved scale of global weather models. The computer vision community has demonstrated success with generative adversarial networks (GANs) for super-resolution problems, i.e., learning to add fine-scale structure to coarse images. Leinonen et al. (2020) previously applied a GAN to produce ensembles of reconstructed high-resolution atmospheric fields given coarser input data. In this paper, we demonstrate that this approach can be extended to the more challenging problem of increasing the accuracy and resolution of comparatively low-resolution input from a weather forecasting model, using high-resolution radar measurements as the "ground truth". The neural network must learn to add both resolution and structure while accounting for non-negligible forecast error. We show that GANs and VAE-GANs can match the statistical properties of state-of-the-art post-processing methods while creating high-resolution, spatially coherent precipitation maps. Our models compare favourably with the best existing downscaling methods in terms of pixel-wise and pooled CRPS scores, power spectrum information, and rank histograms (used to assess calibration). We test our models and show how they perform in a range of scenarios, including heavy rainfall.
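Since the evaluation leans on the CRPS, the following is a short sketch of the standard ensemble estimator of the CRPS that such comparisons typically use (a generic formula, not the authors' evaluation code):

```python
import numpy as np

def ensemble_crps(ensemble: np.ndarray, observation: float) -> float:
    """CRPS estimated from a finite ensemble:
    CRPS = E|X - y| - 0.5 * E|X - X'|, with X, X' drawn from the ensemble."""
    x = np.asarray(ensemble, dtype=float)
    term_obs = np.mean(np.abs(x - observation))
    term_spread = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term_obs - term_spread


# usage: per-pixel CRPS for a 10-member downscaled precipitation ensemble
members = np.random.gamma(shape=2.0, scale=1.5, size=10)
print(ensemble_crps(members, observation=3.2))
```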
Image super-resolution (SR) is one of the important image processing techniques for improving image resolution in the field of computer vision. Over the last two decades, significant progress has been made in the super-resolution field, especially through the use of deep learning methods. This survey provides a detailed review of recent advances in single-image super-resolution from a deep learning perspective, while also covering the initial classical methods for image super-resolution. The survey classifies image SR methods into four categories, namely classical methods, learning-based methods, unsupervised learning-based methods, and domain-specific SR methods. We also introduce the SR problem to provide intuition about image quality metrics, available reference datasets, and SR challenges. Deep learning-based approaches are evaluated using reference datasets. Some of the reviewed state-of-the-art image SR methods include the enhanced deep SR network (EDSR), cycle-in-cycle GAN (CinCGAN), multi-scale residual network (MSRN), meta residual dense network (Meta-RDN), recurrent back-projection network (RBPN), second-order attention network (SAN), SR feedback network (SRFBN), and the wavelet-based residual attention network (WRAN). Finally, the survey concludes with future directions and trends in SR and open problems to be addressed by researchers.
High-resolution remote sensing imagery is used for a wide range of tasks, including detection and classification of objects. High-resolution imagery is expensive, however, whereas lower-resolution imagery is often freely available and can be used by the public for a range of social-good applications. To this end, we curate a multi-spectral multi-image super-resolution dataset using PlanetScope imagery from the SpaceNet 7 challenge as the high-resolution reference, with multiple Sentinel-2 revisits of the same scenes as the low-resolution imagery. We present the first results of applying multi-image super-resolution (MISR) to multi-spectral remote sensing imagery. In addition, we introduce a radiometric consistency module into the MISR model to preserve the high radiometric resolution of the Sentinel-2 sensor. We show that MISR outperforms single-image super-resolution and other baselines on a range of image fidelity metrics. Furthermore, we conduct the first assessment of the utility of multi-image super-resolution for building delineation, showing that utilizing multiple images leads to better performance in these downstream tasks.
Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require the bicubic interpolation as the pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitating resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of speed and accuracy.
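A minimal sketch of the Charbonnier loss mentioned above, a smooth variant of the L1 penalty (the ε value is a typical choice and not necessarily the paper's):

```python
import torch

def charbonnier_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Charbonnier penalty sqrt(diff^2 + eps^2): behaves like L1 for large errors
    but remains smooth near zero, making it robust to outliers."""
    diff = pred - target
    return torch.sqrt(diff * diff + eps * eps).mean()


# usage
loss = charbonnier_loss(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
```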
In recent years there have been several advances in the task of image super-resolution using state-of-the-art deep learning-based architectures. Many of the previously published super-resolution techniques require high-end, top-of-the-line graphics processing units (GPUs) to perform image super-resolution. As deep learning methods advance, neural networks have become increasingly compute-hungry. We take a step back and focus on creating a real-time, efficient solution. We present an architecture that is faster and smaller in terms of its memory footprint. The proposed architecture uses depthwise separable convolutions to extract features, and it performs on par with other super-resolution GANs (generative adversarial networks) while maintaining real-time inference and a low memory footprint. Real-time super-resolution enables streaming high-resolution media content even under poor bandwidth conditions. While maintaining an efficient trade-off between accuracy and latency, we are able to produce a model of comparable performance that is one-eighth (1/8) the size of super-resolution GANs and computes 74 times faster than super-resolution GANs.
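A brief sketch of the depthwise separable convolution the abstract refers to, i.e., a per-channel (depthwise) convolution followed by a pointwise 1×1 convolution; the channel counts and activation are illustrative choices:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution (one filter per channel, groups=in_channels)
    followed by a pointwise 1x1 convolution that mixes channels.
    Uses roughly 1/k^2 + 1/out_channels of the multiply-adds of a standard conv."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.act = nn.PReLU(out_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.pointwise(self.depthwise(x)))


# usage
block = DepthwiseSeparableConv(32, 64)
y = block(torch.randn(1, 32, 48, 48))   # -> (1, 64, 48, 48)
```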
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process that misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.
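The layer attention module is only named in the abstract; the sketch below shows the general idea of weighting a stack of N layers' feature maps by their pairwise correlations, under assumed tensor shapes rather than the exact HAN formulation:

```python
import torch
import torch.nn as nn

class LayerAttention(nn.Module):
    """Re-weight a stack of N intermediate feature maps by attention scores
    derived from their pairwise correlations (a sketch of the general idea,
    not the exact HAN formulation)."""

    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1))   # zero init: block starts as identity

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, C, H, W) -- feature maps collected from N layers
        b, n, c, h, w = feats.shape
        flat = feats.view(b, n, -1)                         # (B, N, C*H*W)
        corr = torch.bmm(flat, flat.transpose(1, 2))        # (B, N, N) layer correlations
        attn = torch.softmax(corr, dim=-1)
        out = torch.bmm(attn, flat).view(b, n, c, h, w)     # attention-weighted mix
        return self.scale * out + feats                      # residual connection


# usage
la = LayerAttention()
stack = torch.randn(2, 5, 16, 24, 24)   # features from 5 layers
print(la(stack).shape)                   # torch.Size([2, 5, 16, 24, 24])
```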
Deep learning (DL)-based downscaling has become a popular tool in the earth sciences. Increasingly, DL approaches are being adopted to downscale coarse precipitation data and produce more accurate and reliable estimates at local (~few kilometres or even smaller) scales. Although several studies have adopted dynamical or statistical downscaling of precipitation, their accuracy is limited by the availability of ground truth. A key challenge in gauging the accuracy of such methods is comparing the downscaled data to point-scale observations, which are often unavailable at such small scales. In this work, we carry out DL-based downscaling to estimate local precipitation from India Meteorological Department (IMD) data, which were created by approximating values from station locations onto grid points. To test the efficacy of different DL approaches, we employ four downscaling methods and evaluate their performance. The approaches considered are (i) deep statistical downscaling (DeepSD), (ii) augmented convolutional long short-term memory (ConvLSTM), (iii) a fully convolutional network (U-NET), and (iv) a super-resolution generative adversarial network (SR-GAN). The custom VGG network used in the SR-GAN is developed in this work using precipitation data. The results indicate that SR-GAN is the best method for downscaling precipitation data. The downscaled data are validated against precipitation values at IMD stations. This DL method offers a promising alternative to statistical downscaling.
We present a super-resolution model for a parallel diffusion process with limited information. While most super-resolution models assume high-resolution (HR) ground truth data during training, in many cases such HR datasets are not readily accessible. Here, we show that a recurrent convolutional network trained with physics-based regularization is able to reconstruct the HR information without HR ground truth data. Furthermore, considering the ill-posed nature of the super-resolution problem, we employ a recurrent Wasserstein autoencoder to model the uncertainty.
The process of obtaining high-resolution images from one or multiple low-resolution images of the same scene is of great interest for real-world image and signal processing applications. This study explores the potential use of deep learning-based image super-resolution algorithms to produce high-quality thermal imaging results for in-vehicle driver monitoring systems. In this work, we propose and develop a novel multi-image super-resolution recurrent neural network to enhance the resolution and improve the quality of low-resolution thermal imaging data captured with uncooled thermal cameras. The end-to-end fully convolutional neural network is trained from scratch on newly acquired thermal data of 30 different subjects under indoor environmental conditions. The effectiveness of the thermally tuned super-resolution network is validated quantitatively as well as qualitatively on test data of 6 distinct subjects. The network achieves a mean peak signal-to-noise ratio of 39.24 on the validation dataset for 4x super-resolution, surpassing bicubic interpolation both quantitatively and qualitatively.
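For reference, the peak signal-to-noise ratio reported above is conventionally computed as follows (a generic sketch assuming data normalized to [0, 1], not the authors' evaluation code):

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, data_range: float = 1.0) -> float:
    """PSNR in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)


# usage
hr = np.random.rand(120, 160)
sr = hr + 0.01 * np.random.randn(120, 160)
print(psnr(sr, hr))   # roughly 40 dB for this noise level
```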
Despite significant progress in recent years, single-image super-resolution methods are developed with several limitations. Specifically, they are trained on fixed content domains with certain degradations (whether synthetic or real), and the priors they learn are prone to overfitting the training configuration. It is therefore currently unknown how well they generalize to novel domains such as drone top-view data, and across altitudes. Nonetheless, pairing drones with proper image super-resolution is of great value: it would enable drones to fly higher and cover a larger footprint while maintaining high image quality. To answer these questions and pave the way towards drone image super-resolution, we explore this application with a particular focus on the single-image case. We propose a novel drone image dataset with scenes captured at low and high resolutions, and across a span of altitudes. Our results show that off-the-shelf state-of-the-art networks suffer a significant drop in performance on this different domain. We additionally show that simple fine-tuning, as well as incorporating altitude awareness into the network architecture, can both improve reconstruction performance.
The primary aim of single-image super-resolution is to construct a high-resolution (HR) image from a corresponding low-resolution (LR) input. In previous approaches, which have generally been supervised, the training objective typically measures a pixel-wise average distance between the super-resolved (SR) and HR images. Optimizing such metrics often leads to blurring, especially in high variance (detailed) regions. We propose an alternative formulation of the super-resolution problem based on creating realistic SR images that downscale correctly. We present a novel super-resolution algorithm addressing this problem, PULSE (Photo Upsampling via Latent Space Exploration), which generates high-resolution, realistic images at resolutions previously unseen in the literature. It accomplishes this in an entirely self-supervised fashion and is not confined to a specific degradation operator used during training, unlike previous methods (which require training on databases of LR-HR image pairs for supervised learning). Instead of starting with the LR image and slowly adding detail, PULSE traverses the high-resolution natural image manifold, searching for images that downscale to the original LR image. This is formalized through the "downscaling loss," which guides exploration through the latent space of a generative model. By leveraging properties of high-dimensional Gaussians, we restrict the search space to guarantee that our outputs are realistic. PULSE thereby generates super-resolved images that both are realistic and downscale correctly. We show extensive experimental results demonstrating the efficacy of our approach in the domain of face super-resolution (also known as face hallucination). We also present a discussion of the limitations and biases of the method as currently implemented with an accompanying model card with relevant metrics. Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
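A heavily simplified sketch of the latent-space search described above: a latent code is optimized so that the generator's output, once downscaled, matches the low-resolution input, with the code re-projected onto a sphere as a stand-in for PULSE's restriction to realistic latents (the generator, downscaling operator, and hyperparameters are placeholders):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def latent_search(generator, lr_image, latent_dim=512, scale=8, steps=200, lr=0.1):
    """Minimize the 'downscaling loss' ||downscale(G(z)) - LR|| over the latent z,
    re-projecting z onto the sphere of radius sqrt(latent_dim) after each step
    so that it stays in the high-density shell of the Gaussian prior."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    radius = latent_dim ** 0.5
    for _ in range(steps):
        opt.zero_grad()
        sr = generator(z)                        # candidate high-resolution image
        down = F.avg_pool2d(sr, scale)           # stand-in for the true downscaling operator
        loss = F.mse_loss(down, lr_image)        # downscaling loss
        loss.backward()
        opt.step()
        with torch.no_grad():
            z.mul_(radius / z.norm())            # spherical re-projection
    return generator(z).detach()


# usage with a tiny frozen dummy generator standing in for a pretrained face GAN
dummy_gen = nn.Sequential(nn.Linear(512, 3 * 64 * 64),
                          nn.Unflatten(1, (3, 64, 64)),
                          nn.Sigmoid())
for p in dummy_gen.parameters():
    p.requires_grad = False
lr_face = torch.rand(1, 3, 8, 8)
sr_face = latent_search(dummy_gen, lr_face)      # (1, 3, 64, 64)
```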
Despite the large number of super-resolution reconstruction (SRR) models successfully applied to natural images, their application to remote sensing imagery tends to produce poor results. Remote sensing imagery is often more complex than natural imagery and has particularities such as lower resolution, the presence of noise, and frequently large textured surfaces. As a result, applying non-specialized SRR models to remote sensing imagery leads to artifacts and poor reconstructions. To address these issues, this paper proposes an architecture inspired by previous research work that introduces a novel approach for forcing an SRR model to output realistic remote sensing images: instead of relying on feature-space similarity as the perceptual loss, it uses pixel-level information inferred from the normalized digital surface model (nDSM) of the image. This strategy allows more informative updates to be applied during training, sourced from a task (elevation map inference) that is closely related to remote sensing. Nevertheless, the nDSM auxiliary information is not required at production time, so the model produces a super-resolved image without any additional data beyond its low-resolution pair. We evaluate our model on two remote sensing datasets of different spatial resolutions that also contain DSM pairs of the images: the DFC2018 dataset and a dataset containing the national LiDAR fly-over of Luxembourg. Based on visual inspection, the inferred super-resolved images exhibit particularly superior quality. In particular, the results for the high-resolution DFC2018 dataset are realistic and almost indistinguishable from the ground-truth images.
Single-image super-resolution can support robotic tasks in environments where a reliable visual stream is required to monitor the mission, handle teleoperation, or study relevant visual details. In this work, we propose an efficient generative adversarial network model for real-time super-resolution. We adopt a tailored architecture of the original SRGAN and model quantization to boost execution on CPU and Edge TPU devices, reaching up to 200 fps at inference. We further optimize our model by distilling its knowledge into a smaller version of the network, obtaining remarkable improvements compared to the standard training approach. Our experiments show that our fast and lightweight model preserves considerably satisfying image quality compared to heavier state-of-the-art models. Finally, we conduct experiments on image transmission under bandwidth degradation to highlight the advantages of the proposed system for mobile robotic applications.
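One plausible form of the knowledge-distillation step mentioned above, in which the small student is trained against both the high-resolution target and the frozen teacher's output (the L1 terms and the weighting are assumptions, not the authors' recipe):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_sr: torch.Tensor,
                      teacher_sr: torch.Tensor,
                      hr_target: torch.Tensor,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend a supervised reconstruction term with an imitation term that pulls
    the small student network towards the (frozen) teacher's output."""
    supervised = F.l1_loss(student_sr, hr_target)
    imitation = F.l1_loss(student_sr, teacher_sr)
    return (1 - alpha) * supervised + alpha * imitation


# usage with placeholder tensors for student output, teacher output, and HR target
hr = torch.rand(1, 3, 128, 128)
loss = distillation_loss(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128), hr)
```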
Reference-based Super-resolution (RefSR) approaches have recently been proposed to overcome the ill-posed problem of image super-resolution by providing additional information from a high-resolution image. Multi-reference super-resolution extends this approach by allowing more information to be incorporated. This paper proposes a 2-step-weighting posterior fusion approach to combine the outputs of RefSR models with multiple references. Extensive experiments on the CUFED5 dataset demonstrate that the proposed methods can be applied to various state-of-the-art RefSR models to get a consistent improvement in image quality.
Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge [26].
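The "unnecessary modules" removed in EDSR are the batch-normalization layers of the conventional residual block; below is a sketch of the resulting block with residual scaling (the channel width and 0.1 scaling factor follow common EDSR configurations and may differ from the exact settings used in the paper):

```python
import torch
import torch.nn as nn

class EDSRResBlock(nn.Module):
    """EDSR-style residual block: conv-ReLU-conv with no batch normalization,
    plus residual scaling to stabilize training of wide or deep variants."""

    def __init__(self, channels: int = 64, res_scale: float = 0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.res_scale = res_scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.res_scale * self.body(x)


# usage
block = EDSRResBlock(64)
y = block(torch.randn(1, 64, 32, 32))   # same shape as the input
```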
Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack high-frequency textures and do not look natural despite yielding high PSNR values. We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.