Underwater images suffer from color casts, low contrast, and hazy effects due to light absorption, refraction, and scattering, which degrade high-level applications such as object detection and object tracking. Recent learning-based methods demonstrate astonishing performance on underwater image enhancement, but most of these works use synthetic paired data for supervised learning and ignore the domain gap to real-world data. To address this problem, we propose a domain-adaptation framework for underwater image enhancement via content and style separation. Unlike prior domain-adaptation works on underwater image enhancement that aim to minimize the latent discrepancy between the synthetic and real-world domains, we separate the encoded features into content and style latents, associate the style latents with different domains, i.e., the synthetic, real-world underwater, and clean domains, and perform domain adaptation and image enhancement in latent space. Through latent manipulation, our model provides a user-interaction interface to adjust different enhancement levels for continuous changes. Experiments on various public underwater benchmarks demonstrate that the proposed framework is capable of domain adaptation for underwater image enhancement and outperforms various state-of-the-art underwater image enhancement algorithms both quantitatively and qualitatively. The model and source code will be available at https://github.com/fordevoted/uiess
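Illustration: a minimal PyTorch sketch of the content/style separation and the latent manipulation that enables a continuously adjustable enhancement level. The module shapes, the channel-gain style injection, and the interpolation weight are illustrative assumptions, not the UIESS architecture.

```python
# Minimal content/style disentanglement sketch (illustrative; not the exact UIESS architecture).
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):          # content latent keeps the spatial layout
        return self.net(x)

class StyleEncoder(nn.Module):
    def __init__(self, ch=32, style_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(ch, style_dim)
    def forward(self, x):          # style latent is a global vector (domain appearance)
        return self.fc(self.conv(x).flatten(1))

class Decoder(nn.Module):
    def __init__(self, ch=32, style_dim=8):
        super().__init__()
        self.style_proj = nn.Linear(style_dim, ch)
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, content, style):
        # Inject style by a simple channel-wise modulation (placeholder for AdaIN-like blocks).
        gain = self.style_proj(style).unsqueeze(-1).unsqueeze(-1)
        return self.net(content * (1.0 + gain))

# Enhancement as latent manipulation: keep the content of an underwater image,
# swap in a "clean-domain" style, and interpolate styles for a continuous,
# user-controlled enhancement level.
Ec, Es, G = ContentEncoder(), StyleEncoder(), Decoder()
underwater = torch.rand(1, 3, 128, 128)
clean_ref  = torch.rand(1, 3, 128, 128)

c = Ec(underwater)
s_uw, s_clean = Es(underwater), Es(clean_ref)
alpha = 0.7                                   # enhancement level in [0, 1]
s_mix = (1 - alpha) * s_uw + alpha * s_clean  # latent interpolation
enhanced = G(c, s_mix)
print(enhanced.shape)  # torch.Size([1, 3, 128, 128])
```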
With the development of convolutional neural networks, hundreds of deep learning based dehazing methods have been proposed. In this paper, we provide a comprehensive survey on supervised, semi-supervised, and unsupervised single image dehazing. We first discuss the physical model, datasets, network modules, loss functions, and evaluation metrics that are commonly used. Then, the main contributions of various dehazing algorithms are categorized and summarized. Further, quantitative and qualitative experiments of various baseline methods are carried out. Finally, the unsolved issues and challenges that can inspire the future research are pointed out. A collection of useful dehazing materials is available at \url{https://github.com/Xiaofeng-life/AwesomeDehazing}.
Recovery of true color from underwater images is an ill-posed problem. This is because the wide-band attenuation coefficients for the RGB color channels depend on object range, reflectance, etc. which are difficult to model. Also, there is backscattering due to suspended particles in water. Thus, most existing deep-learning based color restoration methods, which are trained on synthetic underwater datasets, do not perform well on real underwater data. This can be attributed to the fact that synthetic data cannot accurately represent real conditions. To address this issue, we use an image to image translation network to bridge the gap between the synthetic and real domains by translating images from synthetic underwater domain to real underwater domain. Using this multimodal domain adaptation technique, we create a dataset that can capture a diverse array of underwater conditions. We then train a simple but effective CNN based network on our domain adapted dataset to perform color restoration. Code and pre-trained models can be accessed at https://github.com/nehamjain10/TRUDGCR
Images captured under low-light conditions suffer from poor visibility and various imaging artifacts, e.g., real noise. Existing supervised enlightening algorithms require a large set of pixel-aligned training image pairs, which are hard to prepare in practice. Although weakly supervised or unsupervised methods can alleviate these challenges by not using paired training images, some real-world artifacts are inevitably and wrongly amplified due to the lack of corresponding supervision. In this paper, instead of using perfectly aligned images for training, we creatively employ unaligned real-world images, which are easy to collect, as guidance. Specifically, we propose a Cross-Image Disentanglement Network (CIDN) to separately extract cross-image brightness and image-specific content features from low/normal-light images. Based on this, CIDN can simultaneously correct the brightness and suppress image artifacts in the feature domain, which greatly increases the robustness to pixel shifts. In addition, we collect a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions. Experimental results show that our model achieves state-of-the-art performance on the newly proposed dataset and other popular low-light datasets.
Deep learning-based low-light image enhancement methods typically require enormous paired training data, which is impractical to capture in real-world scenarios. Recently, unsupervised approaches have been explored to eliminate the reliance on paired training data. However, they perform erratically in diverse real-world scenarios due to the absence of priors. To address this issue, we propose an effective unsupervised low-light image enhancement method based on the histogram equalization prior (HEP). Our work is inspired by the interesting observation that the feature maps of a histogram-equalization-enhanced image and its ground truth are similar. Specifically, we formulate the HEP to provide abundant texture and luminance information. Embedded into a luminance module (LUM), it helps decompose low-light images into illumination and reflectance maps, and the reflectance map can be regarded as the restored image. However, the derivation based on Retinex theory reveals that the reflectance map is contaminated by noise. We introduce a Noise Disentanglement Module (NDM) to disentangle the noise and content in the reflectance map with the reliable aid of unpaired clean images. Guided by the histogram equalization prior and noise disentanglement, our method can recover finer details and is more capable of suppressing noise in real-world low-light scenarios. Extensive experiments demonstrate that our method performs favorably against state-of-the-art unsupervised low-light enhancement algorithms and even matches state-of-the-art supervised algorithms.
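Illustration: a small numpy sketch of the histogram equalization prior, where the histogram-equalized low-light image serves as a texture- and brightness-rich pseudo reference for an unsupervised loss term. The paper applies the prior on learned feature maps, so this pixel-space version is only an assumption-laden simplification.

```python
# Sketch of the histogram equalization prior (HEP): the histogram-equalized
# low-light image serves as a texture/brightness-rich pseudo reference.
# Purely illustrative; the original method applies the prior in feature space.
import numpy as np

def equalize_hist(gray):
    """Classic histogram equalization on a uint8 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-8)
    lut = np.round(cdf * 255.0).astype(np.uint8)
    return lut[gray]

def hep_loss(pred, low):
    """L1 distance between the predicted enhancement and the histogram-equalized
    input, used as an unsupervised prior term (pixel-space simplification)."""
    he_ref = equalize_hist(low).astype(np.float32) / 255.0
    return np.abs(pred.astype(np.float32) - he_ref).mean()

low_light = (np.random.rand(64, 64) * 60).astype(np.uint8)   # dark image stand-in
prediction = np.random.rand(64, 64).astype(np.float32)       # network output stand-in
print("HEP loss:", hep_loss(prediction, low_light))
```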
One of the main challenges in deep learning-based underwater image enhancement is the limited availability of high-quality training data. Underwater images are difficult to capture and are often of poor quality due to the distortion and loss of colour and contrast in water. This makes it difficult to train supervised deep learning models on large and diverse datasets, which can limit the model's performance. In this paper, we explore an alternative approach to supervised underwater image enhancement. Specifically, we propose a novel unsupervised underwater image enhancement framework that employs a conditional variational autoencoder (cVAE) to train a deep learning model with probabilistic adaptive instance normalization (PAdaIN) and statistically guided multi-colour space stretch that produces realistic underwater images. The resulting framework is composed of a U-Net as a feature extractor and a PAdaIN to encode the uncertainty, which we call UDnet. To improve the visual quality of the images generated by UDnet, we use a statistically guided multi-colour space stretch module that ensures visual consistency with the input image and provides an alternative to training using a ground truth image. The proposed model does not need manual human annotation and can learn with a limited amount of data, achieving state-of-the-art results on underwater images. We evaluated our proposed framework on eight publicly-available datasets. The results show that our proposed framework yields competitive performance compared to other state-of-the-art approaches in quantitative as well as qualitative metrics. Code available at https://github.com/alzayats/UDnet .
Deep learning-based methods have achieved significant performance for image defogging. However, existing methods are mainly developed for land scenes and perform poorly when dealing with overwater foggy images, since overwater scenes typically contain large expanses of sky and water. In this work, we propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging of images with overwater scenes. To promote the recovery of the objects on water in the image, two loss functions are exploited for the network where a prior map is designed to invert the dark channel and the min-max normalization is used to suppress the sky and emphasize objects. However, due to the unpaired training set, the network may learn an under-constrained domain mapping from foggy to fog-free image, leading to artifacts and loss of details. Thus, we propose an intuitive Upscaling Inception Module (UIM) and a Long-range Residual Coarse-to-fine framework (LRC) to mitigate this issue. Extensive experiments on qualitative and quantitative comparisons demonstrate that the proposed method outperforms the state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
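Illustration: a small numpy sketch of the prior map described above, i.e., an inverted dark channel followed by min-max normalization so that sky and water regions are suppressed and on-water objects are emphasized. The patch size and the simple normalization are assumptions, not the exact PG-CycleGAN definition.

```python
# Prior map sketch: invert the dark channel and min-max normalize it, so bright
# sky/water regions are suppressed and objects on the water are emphasized.
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel min over RGB, then a local min filter over a patch x patch window."""
    h, w, _ = img.shape
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def prior_map(img, patch=15):
    dc = dark_channel(img, patch)
    inv = 1.0 - dc                                               # invert the dark channel
    return (inv - inv.min()) / (inv.max() - inv.min() + 1e-8)    # min-max normalize

foggy = np.random.rand(64, 64, 3).astype(np.float32)  # stand-in overwater image
print(prior_map(foggy).shape)  # (64, 64)
```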
Adversarial learning-based image defogging methods have been extensively studied in computer vision due to their remarkable performance. However, most existing methods have limited defogging capability for real-world cases because they are trained on clear and synthesized foggy images of the same scenes. In addition, they have limitations in preserving vivid colors and rich textural details. To address these issues, we develop a novel generative adversarial network, called the Holistic Attention-fusion Adversarial Network (HAAN), for single image defogging. HAAN consists of a Fog2Fogfree block and a Fogfree2Fog block. In each block, there are three learning-based modules, namely fog removal, color-texture recovery, and fog synthesis, that constrain each other to generate high-quality images. HAAN is designed to exploit the self-similarity of texture and structure information by learning the holistic channel-spatial feature correlations between a fog image and its several derived images. Moreover, in the fog synthesis module, we utilize the atmospheric scattering model to guide it, so as to improve the generation quality by focusing on atmospheric light optimization with a novel sky-segmentation network. Extensive experiments on both synthetic and real-world datasets show that HAAN outperforms state-of-the-art defogging methods in terms of quantitative accuracy and subjective visual quality.
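Illustration: the atmospheric scattering model that guides the fog-synthesis module is the standard I = J·t + A·(1 - t) with t = exp(-beta·depth). A toy numpy sketch follows; the depth map, scattering coefficient beta, and atmospheric light A are illustrative assumptions.

```python
# Atmospheric scattering model: I = J * t + A * (1 - t), with t = exp(-beta * depth).
# Toy fog synthesis; depth, beta, and atmospheric light A are illustrative assumptions.
import numpy as np

def synthesize_fog(clear, depth, beta=1.0, A=0.9):
    t = np.exp(-beta * depth)[..., None]      # transmission map, broadcast over RGB
    return clear * t + A * (1.0 - t)

clear = np.random.rand(64, 64, 3).astype(np.float32)
depth = np.linspace(0.0, 2.0, 64 * 64, dtype=np.float32).reshape(64, 64)
foggy = synthesize_fog(clear, depth)
print(foggy.min(), foggy.max())
```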
The light absorption and scattering of underwater impurities lead to poor underwater imaging quality. Existing data-driven underwater image enhancement (UIE) techniques lack a large-scale dataset containing various underwater scenes and high-fidelity reference images. Besides, the inconsistent attenuation in different color channels and spatial regions is not fully considered for boosted enhancement. In this work, we construct a large-scale underwater image (LSUI) dataset including 5004 image pairs, and report a U-shape Transformer network in which the transformer model is introduced to the UIE task for the first time. The U-shape Transformer is integrated with a channel-wise multi-scale feature fusion transformer (CMSFFT) module and a spatial-wise global feature modeling transformer (SGFMT) module, which reinforce the network's attention to the color channels and spatial regions with more severe attenuation. Meanwhile, to further improve the contrast and saturation, a novel loss function combining the RGB, LAB, and LCH color spaces is designed following human vision principles. Extensive experiments on available datasets validate the state-of-the-art performance of the reported technique, with an advantage of more than 2 dB.
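Illustration: a hedged PyTorch sketch of a loss combining RGB, LAB, and LCH terms in the spirit of the loss described above. The term weights, the use of kornia for the RGB-to-LAB conversion, and the LCH derivation from LAB are assumptions rather than the paper's exact formulation.

```python
# Sketch of a multi-color-space loss over RGB, LAB, and LCH (illustrative weights).
import torch
import kornia.color as kc

def multi_color_space_loss(pred, target, w_rgb=1.0, w_lab=0.5, w_lch=0.5):
    # RGB term
    loss = w_rgb * torch.abs(pred - target).mean()
    # LAB term (inputs assumed to be RGB in [0, 1])
    lab_p, lab_t = kc.rgb_to_lab(pred), kc.rgb_to_lab(target)
    loss = loss + w_lab * torch.abs(lab_p - lab_t).mean()
    # LCH term: L stays, C = sqrt(a^2 + b^2), H = atan2(b, a)
    def lab_to_lch(lab):
        L, a, b = lab[:, 0], lab[:, 1], lab[:, 2]
        C = torch.sqrt(a ** 2 + b ** 2 + 1e-8)
        H = torch.atan2(b, a)
        return torch.stack([L, C, H], dim=1)
    loss = loss + w_lch * torch.abs(lab_to_lch(lab_p) - lab_to_lch(lab_t)).mean()
    return loss

pred   = torch.rand(2, 3, 64, 64)
target = torch.rand(2, 3, 64, 64)
print(multi_color_space_loss(pred, target))
```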
Face Restoration (FR) aims to restore High-Quality (HQ) faces from Low-Quality (LQ) input images, which is a domain-specific image restoration problem in the low-level computer vision area. The early face restoration methods mainly use statistical priors and degradation models, which struggle to meet the requirements of real-world applications in practice. In recent years, face restoration has witnessed great progress after stepping into the deep learning era. However, there are few works that study deep learning-based face restoration methods systematically. Thus, this paper comprehensively surveys recent advances in deep learning techniques for face restoration. Specifically, we first summarize different problem formulations and analyze the characteristics of face images. Second, we discuss the challenges of face restoration. Concerning these challenges, we present a comprehensive review of existing FR methods, including prior-based methods and deep learning-based methods. Then, we explore developed techniques in the task of FR covering network architectures, loss functions, and benchmark datasets. We also conduct a systematic benchmark evaluation on representative methods. Finally, we discuss future directions, including network designs, metrics, benchmark datasets, applications, etc. We also provide an open-source repository for all the discussed methods, which is available at https://github.com/TaoWangzj/Awesome-Face-Restoration.
A main challenge faced by deep learning-based underwater image enhancement (UIE) is that ground-truth high-quality images are unavailable. Most existing methods first generate approximate reference maps and then train the enhancement network deterministically. This approach fails to handle the ambiguity of reference maps. In this paper, we resolve UIE into a distribution estimation and a consensus process. We present a novel probabilistic network to learn the enhancement distribution of degraded underwater images. Specifically, we combine a conditional variational autoencoder with adaptive instance normalization to construct the enhancement distribution. After that, we adopt a consensus process to predict a deterministic result based on a set of samples from the distribution. By learning the enhancement distribution, our method can cope, to some extent, with the bias introduced in reference map labeling. Moreover, the consensus process is useful for capturing a robust and stable result. We examined the proposed method on two widely used real-world underwater image enhancement datasets. Experimental results demonstrate that our approach enables sampling possible enhancement predictions. Meanwhile, the consensus estimate yields competitive performance compared with state-of-the-art UIE methods. Code is available at https://github.com/zhenqifu/puie-net.
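Illustration: a tiny PyTorch sketch of the sample-then-consensus idea, where a stand-in probabilistic enhancer is sampled several times and the per-pixel mean over samples serves as one possible consensus rule. The toy model and the choice of the mean are assumptions, not the PUIE-Net implementation.

```python
# Sketch of distribution estimation + consensus: sample several plausible
# enhancements from a probabilistic model and combine them.
import torch
import torch.nn as nn

class ToyProbabilisticEnhancer(nn.Module):
    """Stand-in for a cVAE-style enhancer: each forward pass draws a latent
    sample, so repeated calls give slightly different enhancements."""
    def __init__(self, latent_dim=8):
        super().__init__()
        self.to_gain = nn.Linear(latent_dim, 3)
        self.latent_dim = latent_dim
    def forward(self, x):
        z = torch.randn(x.size(0), self.latent_dim)              # sample a latent code
        gain = torch.sigmoid(self.to_gain(z)).view(-1, 3, 1, 1)  # per-channel gain
        return torch.clamp(x * (1.0 + gain), 0.0, 1.0)

def consensus_enhance(model, image, n_samples=8):
    with torch.no_grad():
        samples = torch.stack([model(image) for _ in range(n_samples)], dim=0)
    return samples.mean(dim=0)  # consensus as the per-pixel average of the samples

model = ToyProbabilisticEnhancer()
underwater = torch.rand(1, 3, 64, 64)
print(consensus_enhance(model, underwater).shape)  # torch.Size([1, 3, 64, 64])
```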
Due to wavelength-dependent light attenuation, refraction, and scattering, underwater images usually suffer from color distortion and blurred details. However, because only a limited number of images with undistorted counterparts are available as references, it is very difficult to train deep enhancement models for diverse degradation types. To boost the performance of data-driven approaches, it is essential to establish more effective learning mechanisms that mine richer supervisory information from limited training sample resources. In this paper, we propose a novel underwater image enhancement network, called SGUIE-Net, in which we introduce semantic information as high-level guidance shared across different images that have common semantic regions. Accordingly, we propose a semantic region-wise enhancement module to perceive the degradation of different semantic regions at multiple scales and feed it back to the global attention features extracted at the original scale. This strategy helps to achieve robust and visually pleasing enhancement of different semantic objects, thanks to the guidance of semantic information for differentiated enhancement. More importantly, for those degradation types that are uncommon in the training sample distribution, the guidance connects them with already well-learned types according to their semantic relevance. Extensive experiments on public datasets and our proposed dataset demonstrate the impressive performance of SGUIE-Net. Code and the proposed dataset are available at: https://trentqq.github.io/sguie-net.html
The goal of this paper is to conduct a comprehensive study of the facial sketch synthesis (FSS) problem. However, due to the high cost of obtaining hand-drawn sketch datasets, there has been a lack of a complete benchmark for evaluating the development of FSS algorithms over the last decade. We therefore first introduce a high-quality dataset for FSS, named FS2K, which consists of 2,104 image-sketch pairs spanning three types of sketch styles, image backgrounds, lighting conditions, skin colors, and facial attributes. FS2K differs from previous FSS datasets in difficulty, diversity, and scalability, and should thus facilitate the progress of FSS research. Second, we present a large-scale study by reviewing 139 classical methods, including 34 handcrafted-feature-based facial sketch synthesis methods, 37 general neural style transfer methods, 43 deep image-to-image translation methods, and 35 image-to-sketch methods. Besides, we elaborate comprehensive experiments on 19 existing cutting-edge models. Third, we present a simple baseline for FSS, named FSGAN. With only two straightforward components, i.e., facial-aware masking and style-vector expansion, FSGAN surpasses the performance of all previous state-of-the-art models on the proposed FS2K dataset by a large margin. Finally, we summarize the lessons learned over the past years and point out several unsolved challenges. Our open-source code is available at https://github.com/dengpingfan/fsgan.
Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any examples of corresponding image pairs. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to state-of-the-art approaches further demonstrate the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at https://github.com/nvlabs/MUNIT.
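Illustration: a minimal PyTorch sketch of the MUNIT idea that a translation recombines the source image's content code with a style code sampled from the target domain's prior, so repeated samples yield diverse outputs. The tiny modules below are illustrative stand-ins, not the MUNIT architecture.

```python
# Sketch of content/style recombination for multimodal translation: the same
# content code is decoded with different style codes drawn from the style prior.
import torch
import torch.nn as nn

content_enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())   # domain-invariant content
decoder     = nn.Sequential(nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
style_dim   = 8
style_mod   = nn.Linear(style_dim, 16)   # maps a style code to channel-wise gains

def translate(image, style_code):
    c = content_enc(image)
    gain = style_mod(style_code).view(-1, 16, 1, 1)
    return decoder(c * (1.0 + gain))

source = torch.rand(1, 3, 64, 64)
# Diverse outputs: each style code is drawn from the N(0, I) prior of the target domain.
outputs = [translate(source, torch.randn(1, style_dim)) for _ in range(3)]
print([o.shape for o in outputs])
```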
We propose an enhanced multi-scale network, dubbed GridDehazeNet+, for single image dehazing. The proposed dehazing method does not rely on the atmospheric scattering model (ASM), and we offer an explanation as to why the dimension reduction provided by this model is not necessarily beneficial. GridDehazeNet+ consists of three modules: pre-processing, backbone, and post-processing. The trainable pre-processing module can generate learned inputs with better diversity and more pertinent features compared to the derived inputs produced by hand-selected pre-processing methods. The backbone module implements multi-scale estimation with two major enhancements: 1) a novel grid structure that effectively alleviates the bottleneck issue via dense connections across different scales; 2) a spatial-channel attention block that facilitates adaptive fusion by consolidating dehazing-relevant features. The post-processing module helps to reduce artifacts in the final output. Due to the domain shift, a model trained on synthetic data may not generalize well to real data. To address this issue, we shape the distribution of the synthetic data to match that of the real data, and use the resulting translated data to fine-tune our network. We also propose a novel intra-task knowledge transfer mechanism that can memorize and take advantage of synthetic-domain knowledge to assist the learning process on the translated data. Experimental results demonstrate that the proposed method outperforms the state-of-the-art on several synthetic dehazing datasets, and achieves superior performance on real-world hazy images after fine-tuning.
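Illustration: a minimal PyTorch sketch of a spatial-channel attention block of the kind described above (channel reweighting from global pooling followed by a spatial attention map); the exact layer configuration in GridDehazeNet+ may differ.

```python
# Minimal spatial-channel attention block (illustrative layer sizes).
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.channel = nn.Sequential(          # channel attention, squeeze-and-excite style
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(          # spatial attention: one map over H x W
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        x = x * self.channel(x)                # reweight channels
        return x * self.spatial(x)             # reweight spatial positions

feat = torch.rand(1, 32, 64, 64)
print(SpatialChannelAttention(32)(feat).shape)  # torch.Size([1, 32, 64, 64])
```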
Deep learning-based methods have achieved remarkable success in image restoration and enhancement, but are they still competitive when there is a lack of paired training data? As one such example, this paper explores the low-light image enhancement problem, where in practice it is extremely challenging to simultaneously take a low-light and a normal-light photo of the same visual scene. We propose a highly effective unsupervised generative adversarial network, dubbed EnlightenGAN, that can be trained without low/normal-light image pairs, yet proves to generalize very well on various real-world test images. Instead of supervising the learning using ground truth data, we propose to regularize the unpaired training using the information extracted from the input itself, and benchmark a series of innovations for the low-light image enhancement problem, including a global-local discriminator structure, a self-regularized perceptual loss fusion, and the attention mechanism. Through extensive experiments, our proposed approach outperforms recent methods under a variety of metrics in terms of visual quality and subjective user study. Thanks to the great flexibility brought by unpaired training, EnlightenGAN is demonstrated to be easily adaptable to enhancing real-world images from various domains. Our codes and pre-trained models are available at: https://github.com/VITA-Group/EnlightenGAN.
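Illustration: a short PyTorch sketch of a self-regularized attention map of the kind EnlightenGAN derives from the input itself, where darker pixels receive larger attention so they are enhanced more; using the per-pixel maximum over RGB as the illumination estimate is an assumption.

```python
# Self-regularized attention map sketch: attention computed from the input only.
import torch

def self_regularized_attention(rgb):
    """rgb: (N, 3, H, W) in [0, 1]. Returns an attention map in [0, 1]."""
    illum = rgb.max(dim=1, keepdim=True).values                   # rough illumination channel
    illum = (illum - illum.amin()) / (illum.amax() - illum.amin() + 1e-8)
    return 1.0 - illum                                             # dark pixels -> high attention

low_light = torch.rand(1, 3, 64, 64) * 0.3
print(self_regularized_attention(low_light).shape)  # torch.Size([1, 1, 64, 64])
```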
Adverse weather image translation belongs to the unsupervised image-to-image (I2I) translation task, which aims to transfer an adverse-condition domain (e.g., rainy night) to a standard domain (e.g., daytime). It is a challenging task because images from the adverse domain contain artifacts and insufficient information. Recently, many studies employing generative adversarial networks (GANs) have achieved notable success in I2I translation, but there are still limitations in applying them to adverse weather enhancement. A symmetric architecture based on a bidirectional cycle-consistency loss is adopted as a standard framework for unsupervised domain transfer methods. However, it may lead to inferior translation results if the two domains carry imbalanced information. To address this issue, we propose a novel GAN model, i.e., AU-GAN, which has an asymmetric architecture for adverse-domain translation. We insert the proposed feature transfer network (${T}$-net) only into the normal-domain generator (i.e., rainy night -> day) to enhance the encoded features of adverse-domain images. In addition, we introduce asymmetric feature matching for the disentanglement of encoded features. Finally, we propose an uncertainty-aware cycle-consistency loss to address the regional uncertainty of cyclic reconstructed images. We demonstrate the effectiveness of our method through qualitative and quantitative comparisons with state-of-the-art models. Code is available at https://github.com/jgkwak95/au-g
The key procedure of haze image translation via adversarial training lies in separating the features involved only in haze synthesis, i.e., the style features, from the features representing the invariant semantic content, i.e., the content features. Previous methods separate the content feature by utilizing it to classify haze images during training. However, in this paper, we recognize the incompleteness of the content-style disentanglement in this technical routine. The defective style feature entangled with content information inevitably misguides the rendering of the hazy images. To address this, we propose self-supervised style regression via stochastic linear interpolation to reduce the content information in the style feature. Ablation experiments demonstrate the completeness of the disentanglement and its superiority in density-aware haze image synthesis. Furthermore, the synthesized haze data are applied to test the generalization of vehicle detectors. A further study on the relation between haze density and detection performance shows that haze has an obvious impact on the generalization of vehicle detectors, and that the degree of performance degradation is linearly correlated with the haze density, which in turn validates the effectiveness of the proposed method.
Low-light image enhancement (LLIE) aims at improving the perception or interpretability of an image captured in an environment with poor illumination. Recent advances in this area are dominated by deep learning-based solutions, where many learning strategies, network structures, loss functions, training data, etc. have been employed. In this paper, we provide a comprehensive survey covering various aspects ranging from algorithm taxonomy to open issues. To examine the generalization of existing methods, we propose a low-light image and video dataset, in which the images and videos are taken by cameras of different mobile phones under diverse illumination conditions. Besides, for the first time, we provide a unified online platform that covers many popular LLIE methods, whose results can be produced through a user-friendly web interface. In addition to qualitative and quantitative evaluation of existing methods on publicly available datasets and our proposed dataset, we also validate their performance in face detection in the dark. This survey, together with the proposed dataset and online platform, serves as a reference source for future study and promotes the development of this research field. The proposed platform and dataset, as well as the collected methods, datasets, and evaluation metrics, are publicly available and will be regularly updated.
Images captured under weak illumination conditions can suffer from seriously degraded image quality. Addressing the series of degradations in low-light images can effectively improve the visual quality of images and the performance of high-level visual tasks. In this study, a novel Retinex-based real network (R2RNet) is proposed for low-light image enhancement, which includes three subnets: a Decom-Net, a Denoise-Net, and a Relight-Net. These three subnets are used for decomposition, denoising, and contrast enhancement with detail preservation, respectively. Our R2RNet not only uses the spatial information of the image to improve contrast but also uses the frequency information to preserve details. Therefore, our model produces more robust results for all degraded images. Unlike most previous methods trained on synthetic images, we collected the first large-scale real-world paired low/normal-light image dataset (LSRW dataset) to satisfy the training requirements and give our model better generalization performance in real-world scenes. Extensive experiments on publicly available datasets demonstrate that our method outperforms the existing state-of-the-art methods both quantitatively and visually. In addition, our results show that the performance of high-level visual tasks (i.e., face detection) can be effectively improved by using the enhanced results obtained by our method in low-light conditions. Our code and the LSRW dataset are available at: https://github.com/abcdef2000/r2rnet.
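Illustration: a minimal PyTorch sketch of the Retinex decomposition constraint a Decom-Net typically enforces, i.e., each image reconstructs as reflectance times illumination, with the reflectance shared across the low/normal-light pair; the toy network and loss weights are assumptions, not R2RNet's exact formulation.

```python
# Retinex-style decomposition constraint: I ≈ R ⊙ L for both the low-light and
# normal-light image, with the reflectance R ideally shared between the pair.
import torch
import torch.nn as nn

class ToyDecomNet(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, 4, 3, padding=1), nn.Sigmoid())
    def forward(self, x):
        out = self.body(x)
        return out[:, :3], out[:, 3:]   # reflectance (3 channels), illumination (1 channel)

def decom_loss(net, low, normal):
    r_low, l_low = net(low)
    r_nor, l_nor = net(normal)
    recon = torch.abs(r_low * l_low - low).mean() + torch.abs(r_nor * l_nor - normal).mean()
    consistency = torch.abs(r_low - r_nor).mean()     # reflectance shared across the pair
    return recon + 0.1 * consistency

net = ToyDecomNet()
low, normal = torch.rand(1, 3, 64, 64) * 0.3, torch.rand(1, 3, 64, 64)
print(decom_loss(net, low, normal))
```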