The structural similarity image quality paradigm is based on the assumption that the human visual system is highly adapted for extracting structural information from the scene, and therefore a measure of structural similarity can provide a good approximation to perceived image quality. This paper proposes a multi-scale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions. We develop an image synthesis method to calibrate the parameters that define the relative importance of different scales. Experimental comparisons demonstrate the effectiveness of the proposed method.
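A minimal sketch of the multi-scale aggregation idea, assuming skimage's structural_similarity as the per-scale SSIM and the five-scale weights reported for MS-SSIM; the original method applies the luminance term only at the coarsest scale, a detail this simplification omits.

```python
# Simplified multi-scale SSIM: score each scale, low-pass and downsample by 2,
# then take a weighted geometric mean across scales. Illustrative only.
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.metrics import structural_similarity as ssim

# Per-scale weights from the MS-SSIM paper (five scales).
WEIGHTS = np.array([0.0448, 0.2856, 0.3001, 0.2363, 0.1333])

def ms_ssim(ref, dist, data_range=255.0):
    x, y = ref.astype(float), dist.astype(float)
    scores = []
    for _ in WEIGHTS:
        scores.append(ssim(x, y, data_range=data_range))
        # 2x2 average filter, then dyadic downsampling for the next scale.
        x = uniform_filter(x, size=2)[::2, ::2]
        y = uniform_filter(y, size=2)[::2, ::2]
    scores = np.clip(np.array(scores), 1e-6, None)   # guard against rare negative SSIM
    return float(np.prod(scores ** WEIGHTS))          # weighted geometric mean
```

The input should be large enough (e.g. 256x256 or more) that the coarsest of the five scales still exceeds the SSIM window size.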
Objective methods for assessing perceptual image quality have traditionally attempted to quantify the visibility of errors between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
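A compact illustrative re-implementation of the structural similarity computation with Gaussian-weighted local statistics (not the authors' reference code; the window and constants follow the commonly used settings sigma = 1.5, K1 = 0.01, K2 = 0.03).

```python
# SSIM: compare local luminance, contrast, and structure via local means,
# variances, and covariance estimated with a Gaussian window.
import numpy as np
from scipy.ndimage import gaussian_filter

def ssim_index(x, y, data_range=255.0, sigma=1.5, k1=0.01, k2=0.03):
    x, y = x.astype(float), y.astype(float)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2

    mu_x = gaussian_filter(x, sigma)
    mu_y = gaussian_filter(y, sigma)
    var_x = gaussian_filter(x * x, sigma) - mu_x ** 2
    var_y = gaussian_filter(y * y, sigma) - mu_y ** 2
    cov_xy = gaussian_filter(x * y, sigma) - mu_x * mu_y

    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
               ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return float(ssim_map.mean())   # mean SSIM over the image
```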
Image quality assessment (IQA) metrics are widely used to quantitatively estimate the extent of image degradation following some forming, restoration, transformation, or enhancement algorithm. We present PyTorch Image Quality (PIQ), a usability-centric library that contains the most popular modern IQA algorithms, guaranteed to be correctly implemented according to their original propositions and thoroughly verified. In this paper, we detail the principles behind the foundation of the library, describe the evaluation strategy that makes it reliable, provide benchmarks that showcase the performance-time trade-offs, and highlight the benefits of GPU acceleration offered by the PyTorch backend. PyTorch Image Quality is open-source software: https://github.com/photosynthesis-team/piq/.
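A hedged usage sketch of the library's functional interface; the function names and signatures shown (piq.ssim, piq.psnr, piq.brisque with a data_range argument) follow PIQ's documented API but may differ between releases, so check the repository above.

```python
# Evaluate a (distorted, reference) pair with full-reference metrics and the
# distorted image alone with a no-reference metric, on tensors in [0, 1].
import torch
import piq

x = torch.rand(1, 3, 256, 256)   # "distorted" image
y = torch.rand(1, 3, 256, 256)   # reference image

ssim_score = piq.ssim(x, y, data_range=1.0)      # full-reference
psnr_score = piq.psnr(x, y, data_range=1.0)      # full-reference
brisque_score = piq.brisque(x, data_range=1.0)   # no-reference

print(float(ssim_score), float(psnr_score), float(brisque_score))
```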
We propose a natural scene statistic-based distortion-generic blind/no-reference (NR) image quality assessment (IQA) model that operates in the spatial domain. The new model, dubbed blind/referenceless image spatial quality evaluator (BRISQUE) does not compute distortion-specific features, such as ringing, blur, or blocking, but instead uses scene statistics of locally normalized luminance coefficients to quantify possible losses of "naturalness" in the image due to the presence of distortions, thereby leading to a holistic measure of quality. The underlying features used derive from the empirical distribution of locally normalized luminances and products of locally normalized luminances under a spatial natural scene statistic model. No transformation to another coordinate frame (DCT, wavelet, etc.) is required, distinguishing it from prior NR IQA approaches. Despite its simplicity, we are able to show that BRISQUE is statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and is highly competitive with respect to all present-day distortion-generic NR IQA algorithms. BRISQUE has very low computational complexity, making it well suited for real time applications. BRISQUE features may be used for distortion-identification as well. To illustrate a new practical application of BRISQUE, we describe how a nonblind image denoising algorithm can be augmented with BRISQUE in order to perform blind image denoising. Results show that BRISQUE augmentation leads to performance improvements over state-of-the-art methods. A software release of BRISQUE is available online: http://live.ece.utexas.edu/research/quality/BRISQUE_release.zip for public use and evaluation.
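The core preprocessing behind BRISQUE is the mean-subtracted, contrast-normalized (MSCN) transform of the luminance; a minimal sketch is given below (the asymmetric generalized Gaussian feature fitting and the SVR quality model are omitted).

```python
# MSCN coefficients: subtract a local Gaussian-weighted mean and divide by a
# local standard deviation plus a stabilizing constant.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7.0 / 6.0, c=1.0):
    image = image.astype(float)
    mu = gaussian_filter(image, sigma)                                    # local mean
    sd = np.sqrt(np.abs(gaussian_filter(image * image, sigma) - mu ** 2)) # local std
    return (image - mu) / (sd + c)                                        # MSCN map
```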
There is a growing interest in developing image super-resolution (SR) algorithms that convert low-resolution (LR) images into higher-resolution ones, but automatically evaluating the visual quality of super-resolved images remains a challenging problem. Here we look at the problem of SR image quality assessment (SR IQA) in a two-dimensional (2D) space of deterministic fidelity (DF) versus statistical fidelity (SF). This allows us to better understand the advantages and weaknesses of existing SR algorithms, which produce images that fall into different clusters in the 2D space of (DF, SF). Specifically, we observe an interesting trend: more traditional SR algorithms usually tend to optimize DF while losing SF, whereas recent generative adversarial network (GAN)-based approaches, by contrast, show a strong advantage in achieving high SF but sometimes sacrifice DF. Furthermore, we propose an uncertainty-weighting scheme based on content-dependent sharpness and texture assessment that merges the two fidelity measures into an overall quality prediction named the Super-Resolution Image Fidelity (SRIF) index, which demonstrates superior performance against state-of-the-art IQA models when tested on subject-rated datasets.
In recent years, image storage and transmission systems have developed rapidly, and image compression plays an important role in them. In general, image compression algorithms are developed to ensure good visual quality at limited bit rates. However, due to different compression optimization methods, compressed images may have different quality levels that need to be evaluated quantitatively. Mainstream full-reference (FR) metrics can effectively predict the quality of compressed images at a coarse-grained level, where the bit rates of the compressed images differ significantly; however, their performance may be poor for fine-grained compressed images whose bit rate differences are very subtle. Therefore, to better improve the quality of experience (QoE) and provide useful guidance for compression algorithms, we propose a full-reference image quality assessment (FR-IQA) method for compressed images at fine-grained levels. Specifically, the reference and compressed images are first converted to the $YCbCr$ color space. Gradient features are extracted from regions that are sensitive to compression artifacts. Then, we apply the log-Gabor transform to further analyze texture differences. Finally, the obtained features are fused into a quality score. The proposed method is validated on the fine-grained compression image quality assessment (FGIQA) database, which is especially constructed for evaluating the quality of compressed images with close bit rates. Experimental results show that our metric outperforms mainstream FR-IQA metrics on the FGIQA database. We also test our method on other commonly used compression IQA databases, and the results show that it also achieves competitive performance on coarse-grained compression IQA databases.
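As an illustration of the gradient-feature stage, the sketch below converts RGB to luma and pools a GMSD-style gradient-similarity map; this is a generic stand-in with assumed constants, not the paper's exact feature set, and the log-Gabor analysis is not reproduced.

```python
# Prewitt gradient magnitudes of reference and compressed luma, combined into a
# pointwise gradient-similarity map and pooled into two scalar features.
import numpy as np
from scipy.ndimage import prewitt

def luma(rgb):
    # ITU-R BT.601 luma, i.e. the Y channel of YCbCr.
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def gradient_similarity(ref_rgb, dist_rgb, c=170.0):   # c assumed for 8-bit range
    yr, yd = luma(ref_rgb.astype(float)), luma(dist_rgb.astype(float))
    gr = np.hypot(prewitt(yr, axis=0), prewitt(yr, axis=1))
    gd = np.hypot(prewitt(yd, axis=0), prewitt(yd, axis=1))
    sim_map = (2 * gr * gd + c) / (gr ** 2 + gd ** 2 + c)
    return float(sim_map.mean()), float(sim_map.std())  # pooled features
```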
Measurement of visual quality is of fundamental importance for numerous image and video processing applications, where the goal of quality assessment (QA) algorithms is to automatically assess the quality of images or videos in agreement with human quality judgments. Over the years, many researchers have taken different approaches to the problem and have contributed significant research in this area, and claim to have made progress in their respective domains. It is important to evaluate the performance of these algorithms in a comparative setting and analyze the strengths and weaknesses of these methods. In this paper, we present results of an extensive subjective quality assessment study in which a total of 779 distorted images were evaluated by about two dozen human subjects. The "ground truth" image quality data obtained from about 25,000 individual human quality judgments is used to evaluate the performance of several prominent full-reference (FR) image quality assessment algorithms. To the best of our knowledge, apart from video quality studies conducted by the Video Quality Experts Group (VQEG), the study presented in this paper is the largest subjective image quality study in the literature in terms of number of images, distortion types, and number of human judgments per image.
In this paper, a novel and effective image quality assessment (IQA) algorithm based on frequency disparity for high dynamic range (HDR) images is proposed, termed the local-global frequency feature-based model (LGFM). Motivated by the assumption that the human visual system is highly adapted to extract structural information and partial frequencies when perceiving a visual scene, Gabor and Butterworth filters are applied to the luminance of the HDR image to extract local and global frequency features, respectively. Similarity measurement and feature pooling are sequentially performed on the frequency features to obtain the predicted quality score. Experiments on four widely used benchmarks demonstrate that the proposed LGFM can provide higher consistency with subjective perception compared with state-of-the-art HDR IQA methods. Our code is available at: \url{https://github.com/eezkni/lgfm}.
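As a sketch of the global-frequency branch, the following applies a 2D Butterworth low-pass to a luminance map in the frequency domain; the cutoff and order are illustrative assumptions rather than the paper's settings, and the Gabor (local-frequency) branch is not reproduced.

```python
# Butterworth low-pass in the 2D frequency domain: build a radial transfer
# function, multiply the spectrum, and invert the FFT.
import numpy as np

def butterworth_lowpass(luminance, cutoff=0.1, order=2):
    h, w = luminance.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)                  # normalized frequency
    transfer = 1.0 / (1.0 + (radius / cutoff) ** (2 * order))
    spectrum = np.fft.fft2(luminance)
    return np.real(np.fft.ifft2(spectrum * transfer))    # global (low-frequency) part
```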
In this work, we introduce the Gradient Siamese Network (GSN) for image quality assessment. The proposed method skillfully captures gradient features between the distorted and reference images in the full-reference image quality assessment (IQA) task. We utilize central difference convolution to obtain both the semantic features and the detail differences hidden in the image pairs. Furthermore, spatial attention guides the network to focus on regions related to image detail. For the low-level, mid-level, and high-level features extracted by the network, we innovatively design a multi-level fusion method to improve the efficiency of feature utilization. In addition to the common mean squared error supervision, we further consider the relative distance among batch samples and successfully apply a KL divergence loss to the image quality assessment task. We experimented with the proposed GSN algorithm on several publicly available datasets and demonstrated its outstanding performance. Our network won second place in Track 1 of the NTIRE 2022 Perceptual Image Quality Assessment Challenge.
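A sketch of a central difference convolution layer of the kind referred to above; the layer sizes and the mixing weight theta are illustrative assumptions, not the GSN authors' configuration.

```python
# Central difference convolution: the output mixes a vanilla convolution with a
# central-difference term, controlled by theta.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralDifferenceConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=False)
        self.theta = theta

    def forward(self, x):
        out = self.conv(x)                                      # vanilla convolution
        # Difference term: equivalent to convolving with (w minus sum(w) at the center).
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_diff = F.conv2d(x, kernel_sum, padding=0)
        return out - self.theta * out_diff

x = torch.rand(1, 3, 64, 64)
y = CentralDifferenceConv2d(3, 16)(x)    # -> shape (1, 16, 64, 64)
```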
Deep learning-based full-reference image quality assessment (FR-IQA) models typically rely on the feature distance between the reference and distorted images. However, the underlying assumption of these models that the distance in the deep feature domain could quantify the quality degradation does not scientifically align with the invariant texture perception, especially when the images are generated artificially by neural networks. In this paper, we bring a radical shift in inferring the quality with learned features and propose the Deep Image Dependency (DID) based FR-IQA model. The feature dependency facilitates the comparisons of deep learning features in a high-order manner with Brownian distance covariance, which is characterized by the joint distribution of the features from reference and test images, as well as their marginal distributions. This enables the quantification of the feature dependency against nonlinear transformation, which is far beyond the computation of the numerical errors in the feature space. Experiments on image quality prediction, texture image similarity, and geometric invariance validate the superior performance of our proposed measure.
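A sketch of sample (Brownian) distance covariance between two feature sets, the dependency measure the model builds on; the inputs are assumed to be (samples, dimensions) arrays of flattened deep features, and this is not the full DID model.

```python
# Sample distance covariance: double-center the pairwise distance matrices of
# the two feature sets and average their elementwise product.
import numpy as np
from scipy.spatial.distance import cdist

def distance_covariance(x, y):
    a = cdist(x, x)                                  # pairwise distances within X
    b = cdist(y, y)                                  # pairwise distances within Y
    A = a - a.mean(axis=0) - a.mean(axis=1, keepdims=True) + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1, keepdims=True) + b.mean()
    return float(np.sqrt(np.abs((A * B).mean())))    # sample dCov
```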
Objective quality assessment of 3D point clouds is essential for the development of immersive multimedia systems in real-world applications. Despite the success of perceptual quality evaluation for 2D images and videos, little work has been done for 3D point clouds with large-scale, irregularly distributed 3D points. Therefore, in this paper, we propose an objective point cloud quality index with Structure Guided Resampling (SGR) to automatically evaluate the perceptual visual quality of 3D dense point clouds. The proposed SGR is a general-purpose blind quality assessment method that requires no reference information. Specifically, considering that the human visual system (HVS) is highly sensitive to structural information, we first exploit the unique normal vectors of point clouds to perform regional preprocessing, which includes keypoint resampling and local region construction. Then, we extract three groups of quality-related features: 1) geometric density features; 2) color naturalness features; 3) angular consistency features. Both the cognitive characteristics of the human brain and the regularity of naturalness are involved in the designed quality-aware features, which can capture the most vital aspects of distorted 3D point clouds. Extensive experiments on several publicly available subjective point cloud quality databases validate that our proposed SGR can compete with state-of-the-art full-reference, reduced-reference, and no-reference quality assessment algorithms.
Omnidirectional images and videos can provide an immersive experience of real-world scenes in virtual reality (VR) environments. We present a perceptual omnidirectional image quality assessment (IQA) study in this paper, since providing a good quality of experience under VR environments is extremely important. We first establish an omnidirectional IQA (OIQA) database, which includes 16 source images and 320 distorted images degraded by 4 commonly encountered distortion types, namely JPEG compression, JPEG2000 compression, Gaussian blur, and Gaussian noise. Then a subjective quality evaluation study is conducted on the OIQA database in the VR environment. Considering that humans can only see part of the scene at one moment in the VR environment, visual attention becomes extremely important. Thus we also track head and eye movement data during the quality rating experiments. The original and distorted omnidirectional images, the subjective quality ratings, and the head and eye movement data together constitute the OIQA database. State-of-the-art full-reference (FR) IQA measures are tested on the OIQA database, and some new observations different from traditional IQA are made.
Existing deep learning-based full-reference IQA (FR-IQA) models usually predict image quality in a deterministic way, explicitly comparing features to gauge how severely distorted an image is according to how far its features lie from those of the reference image. In this paper, we look at this problem from a different viewpoint and propose to model quality degradation in the perceptual space from the perspective of statistical distributions. Quality is therefore measured based upon the Wasserstein distance in the deep feature domain. More specifically, the 1D Wasserstein distance is measured at each stage of a pre-trained VGG network, based on which the final quality score is computed. The Deep Wasserstein Distance (DeepWSD), computed on features from neural networks, enjoys better interpretability of the quality contamination caused by various types of distortions and presents an advanced quality prediction capability. Extensive experiments and theoretical analysis show the superiority of the proposed DeepWSD in terms of both quality prediction and quality optimization.
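A sketch of the underlying idea, comparing per-channel feature distributions with the 1D Wasserstein distance and averaging; the exact DeepWSD formulation and pooling are not reproduced.

```python
# Flatten each channel of a deep feature map into an empirical 1D distribution
# and average the per-channel Wasserstein-1 distances.
import numpy as np
from scipy.stats import wasserstein_distance

def feature_wasserstein(ref_feats, dist_feats):
    """ref_feats, dist_feats: arrays of shape (channels, height, width)."""
    dists = [
        wasserstein_distance(r.ravel(), d.ravel())
        for r, d in zip(ref_feats, dist_feats)
    ]
    return float(np.mean(dists))
```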
In this paper, we analyse two well-known objective image quality metrics, the peak-signal-to-noise ratio (PSNR) as well as the structural similarity index measure (SSIM), and we derive a simple mathematical relationship between them which works for various kinds of image degradations such as Gaussian blur, additive Gaussian white noise, JPEG and JPEG2000 compression. A series of tests realized on images extracted from the Kodak database gives a better understanding of the similarity and difference between the SSIM and the PSNR.
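For reference, the standard definitions being related are given below (with $L$ the dynamic range of pixel values and $N$ the number of pixels); the paper's derived PSNR-SSIM relationship itself is not reproduced here.

```latex
\mathrm{MSE}(x,y) = \frac{1}{N}\sum_{i=1}^{N}(x_i - y_i)^2, \qquad
\mathrm{PSNR}(x,y) = 10\log_{10}\frac{L^2}{\mathrm{MSE}(x,y)},

\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}
                          {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}
```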
With the increasing popularity and accessibility of high dynamic range (HDR) photography, tone mapping operators (TMOs) for dynamic range compression and medium presentation are practically in demand. In this paper, we develop a biologically motivated, computationally efficient, and perceptually optimized two-stage neural network-based image TMO. In the first stage, motivated by the physiology of the early stages of the human visual system (HVS), we first decompose an HDR image into a normalized Laplacian pyramid. We then use two lightweight deep neural networks (DNNs), taking this normalized representation as input, to estimate the Laplacian pyramid of the corresponding LDR image. We optimize the tone-mapping network by minimizing the normalized Laplacian pyramid distance (NLPD), a perceptual metric calibrated against human judgments of tone-mapped image quality. In the second stage, by "calibrating" the input HDR image, we generate a pseudo-multi-exposure image stack with different color saturation and detail visibility. We then train another lightweight DNN to fuse the LDR image stack into the desired LDR image by maximizing a variant of MEF-SSIM, another perceptually calibrated metric for image fusion. By doing so, the proposed TMO is fully automatic in mapping uncalibrated HDR images. On an independent set of HDR images, we find that our method produces images with better visual quality and is among the fastest local TMOs.
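A sketch of a plain Laplacian pyramid decomposition, the representation underlying the first stage; the normalized variant used by NLPD additionally divides each band by a local amplitude estimate, which is omitted here.

```python
# Laplacian pyramid: at each level, keep the band-pass detail (image minus its
# blur) and carry the downsampled blur to the next level.
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_pyramid(image, levels=5, sigma=1.0):
    bands, current = [], image.astype(float)
    for _ in range(levels - 1):
        blurred = gaussian_filter(current, sigma)
        bands.append(current - blurred)     # band-pass detail at this scale
        current = blurred[::2, ::2]         # downsample for the next level
    bands.append(current)                   # coarsest low-pass residual
    return bands
```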
Arbitrary neural style transfer is an important topic with both research value and industrial application prospects, which aims to render the structure of one image using the style of another. Recent studies have been devoted to the task of arbitrary style transfer (AST) to improve stylization quality. However, the quality assessment of AST images has rarely been explored, even though it could guide the design of different algorithms. In this paper, we first construct a new AST image quality assessment database (AST-IQAD), which consists of 150 content-style image pairs and the corresponding 1200 stylized images produced by eight typical AST algorithms. Then, a subjective study is conducted on our AST-IQAD database, which obtains subjective rating scores of all stylized images on three subjective evaluations, i.e., content preservation (CP), style resemblance (SR), and overall vision (OV). To quantitatively measure the quality of AST images, we propose a new sparse representation-based image quality assessment metric (SRQE), which computes quality using sparse feature similarity. Experimental results on the AST-IQAD database demonstrate the superiority of the proposed method. The dataset and source code will be released at https://github.com/hangwei-chen/ast-iqad-srqe
Recent studies on deep learning-based stereo image super-resolution (StereoSR) have promoted the development of StereoSR. However, existing StereoSR models mainly concentrate on improving quantitative evaluation metrics and neglect the visual quality of super-resolved stereo images. To improve perceptual performance, this paper proposes the first perception-oriented stereo image super-resolution approach by exploiting the feedback provided by the evaluation of the perceptual quality of StereoSR results. To provide accurate guidance for the StereoSR model, we develop the first dedicated stereo image super-resolution quality assessment (StereoSRQA) model and further construct a StereoSRQA database. Extensive experiments demonstrate that our StereoSR approach significantly improves perceptual quality and enhances the reliability of stereo images for disparity estimation.
Virtual reality (VR) videos (typically in the form of 360$^\circ$ videos) have attracted growing attention due to the rapid development of VR technologies and the remarkable popularization of consumer-grade 360$^\circ$ cameras and displays. It is therefore important to understand how people perceive user-generated VR videos, which may suffer from commingled authentic distortions that are often localized in space and time. In this paper, we establish one of the largest 360$^\circ$ video databases, containing 502 user-generated videos with rich content and distortion diversity. We capture the viewing behaviors (i.e., scanpaths) of 139 users and collect their opinion scores under four different viewing conditions (two starting points $\times$ two exploration times). We provide a thorough statistical analysis of the recorded data, yielding several interesting observations, such as the significant impact of viewing conditions on viewing behaviors and perceived quality. Moreover, we explore other usages of our data and analysis, including evaluating computational models for quality assessment and saliency detection of 360$^\circ$ videos. We have made the dataset and code available at https://github.com/yao-yiru/vr-video-database.
Digital images contain a large amount of redundancy, so compression is applied to reduce the image size without a perceptible loss of image quality. The same holds for video, which consists of a sequence of images, where even higher compression ratios are used over low-throughput networks. Evaluating image quality in such scenarios is therefore of particular interest. Subjective evaluation is infeasible in most scenarios, so objective evaluation is preferred. Among the three classes of objective quality measures, full-reference and reduced-reference methods require some form of the original image to compute the quality score, which is not feasible in scenarios such as broadcasting or IP video. Therefore, a no-reference quality metric is proposed that assesses the quality of digital images by computing statistics of luminance, multi-scale gradients, and mean-subtracted contrast-normalized products as features for a feed-forward neural network trained with scaled conjugate gradient. The trained network provides good regression and R2 measures, and further testing on the LIVE Image Quality Assessment Database Release 2 has shown promising results. Pearson, Kendall, and Spearman correlations between the predicted and actual quality scores are computed, and the results are comparable to state-of-the-art systems. Moreover, the proposed metric is faster to compute than its counterparts and can be used for quality assessment of image sequences.
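A sketch of the described feature-plus-network pipeline under stated substitutions: the features below are a simplified subset of those listed above, and scikit-learn's L-BFGS solver stands in for scaled conjugate gradient, which scikit-learn does not provide.

```python
# Extract simple luminance / gradient / MSCN statistics per image and regress
# quality scores with a small feed-forward network.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel
from sklearn.neural_network import MLPRegressor

def quality_features(image):
    image = image.astype(float)
    grad = np.hypot(sobel(image, axis=0), sobel(image, axis=1))
    mu = gaussian_filter(image, 7.0 / 6.0)
    sd = np.sqrt(np.abs(gaussian_filter(image ** 2, 7.0 / 6.0) - mu ** 2))
    mscn = (image - mu) / (sd + 1.0)
    return np.array([image.mean(), image.std(), grad.mean(), grad.std(),
                     mscn.mean(), mscn.std()])

# Train on (feature, subjective-score) pairs; random placeholders stand in for
# an IQA database here.
X = np.stack([quality_features(np.random.rand(64, 64)) for _ in range(100)])
y = np.random.rand(100)
model = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs", max_iter=2000)
model.fit(X, y)
```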
Image quality assessment (IQA) forms a natural and often straightforward undertaking for humans, yet effective automation of the task remains highly challenging. Recent metrics from the deep learning community commonly compare image pairs during training to improve upon traditional metrics such as PSNR or SSIM. However, current comparisons ignore the fact that image content affects quality assessment as comparisons only occur between images of similar content. This restricts the diversity and number of image pairs that the model is exposed to during training. In this paper, we strive to enrich these comparisons with content diversity. Firstly, we relax comparison constraints, and compare pairs of images with differing content. This increases the variety of available comparisons. Secondly, we introduce listwise comparisons to provide a holistic view to the model. By including differentiable regularizers, derived from correlation coefficients, models can better adjust predicted scores relative to one another. Evaluation on multiple benchmarks, covering a wide range of distortions and image content, shows the effectiveness of our learning scheme for training image quality assessment models.
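A sketch of a differentiable correlation regularizer of the kind described: a batch-level Pearson-correlation loss that can be added to a pointwise loss. It follows the general idea, not the authors' exact regularizers.

```python
# Pearson-correlation loss over a batch of predicted and subjective scores;
# minimizing it pushes the predicted ranking toward the subjective one.
import torch

def pearson_loss(pred, target, eps=1e-8):
    pred = pred - pred.mean()
    target = target - target.mean()
    plcc = (pred * target).sum() / (pred.norm() * target.norm() + eps)
    return 1.0 - plcc                     # minimized when correlation is 1

pred = torch.rand(16, requires_grad=True)   # batch of predicted scores
mos = torch.rand(16)                        # batch of subjective scores
loss = pearson_loss(pred, mos)
loss.backward()
```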