In this paper, a novel despeckling algorithm based on undecimated wavelet decomposition and maximum a posteriori estimation is proposed. Such a method represents an improvement with respect to the filter presented by the authors, and it is based on the same conjecture that the probability density functions (pdfs) of the wavelet coefficients follow a generalized Gaussian (GG) distribution. However, the approach introduced here presents two major novelties: 1) theoretically exact expressions for the estimation of the GG parameters are derived: such expressions do not require further assumptions other than the multiplicative model with uncorrelated speckle, and hold also in the case of a strongly correlated reflectivity; 2) a model for the classification of the wavelet coefficients according to their texture energy is introduced. This model allows us to classify the wavelet coefficients into classes having different degrees of heterogeneity, so that ad hoc estimation approaches can be devised for the different sets of coefficients. Three different implementations, characterized by different approaches for incorporating into the filtering procedure the information deriving from the segmentation of the wavelet coefficients, are proposed. Experimental results, carried out on both artificially speckled images and true synthetic aperture radar images, demonstrate that the proposed filtering approach outperforms the previous filters, irrespective of the features of the underlying reflectivity. Index Terms-Despeckling, generalized Gaussian (GG) modeling, image segmentation, synthetic aperture radar (SAR), undecimated wavelet decomposition.
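As a rough illustration of the GG modeling step mentioned above (not the paper's exact expressions, which are derived from the multiplicative speckle model), the sketch below fits the shape and scale of a zero-mean generalized Gaussian to a set of wavelet coefficients by standard moment matching. The function name and the root-search bracket are assumptions.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def gg_fit_moments(coeffs):
    """Fit a zero-mean generalized Gaussian p(x) ~ exp(-|x/alpha|^beta)
    by matching the moment ratio
    E|x| / sqrt(E[x^2]) = Gamma(2/b) / sqrt(Gamma(1/b) * Gamma(3/b))."""
    c = np.asarray(coeffs, dtype=float).ravel()
    m1 = np.mean(np.abs(c))           # first absolute moment
    m2 = np.mean(c ** 2)              # second moment
    r = m1 / np.sqrt(m2)              # observed moment ratio

    def ratio(b):
        return gamma(2.0 / b) / np.sqrt(gamma(1.0 / b) * gamma(3.0 / b)) - r

    # bracket [0.1, 5] covers typical wavelet statistics; widen it for extreme data
    beta = brentq(ratio, 0.1, 5.0)    # shape (beta = 2 recovers the Gaussian case)
    alpha = np.sqrt(m2 * gamma(1.0 / beta) / gamma(3.0 / beta))  # scale
    return alpha, beta
```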
We propose a novel despeckling algorithm for synthetic aperture radar (SAR) images based on the concepts of nonlocal filtering and wavelet-domain shrinkage. It follows the structure of the block-matching 3-D algorithm, recently proposed for additive white Gaussian noise denoising, but modifies its major processing steps in order to take into account the peculiarities of SAR images. A probabilistic similarity measure is used for the block-matching step, while the wavelet shrinkage is developed using an additive signal-dependent noise model and looking for the optimum local linear minimum-mean-square-error estimator in the wavelet domain. The proposed technique compares favorably w.r.t. several state-of-the-art reference techniques, with better results both in terms of signal-to-noise ratio (on simulated speckled images) and of perceived image quality. Index Terms-Empirical Wiener filtering, linear minimum-mean-square-error (LMMSE) filtering, nonlocal filtering, speckle, synthetic aperture radar (SAR), undecimated discrete wavelet transform (UDWT).
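A minimal sketch of the local LMMSE shrinkage idea described above, assuming an additive signal-dependent noise model in a single wavelet subband. The window size, function name, and pixelwise noise-variance map are hypothetical; the actual method additionally performs probabilistic block matching and collaborative filtering.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lmmse_shrink(band, noise_var_map, win=7):
    """Local LMMSE shrinkage of one (zero-mean) wavelet subband under an
    additive, signal-dependent noise model y = x + n.
    noise_var_map holds the pixelwise noise variance in this subband."""
    band = np.asarray(band, dtype=float)
    var_y = uniform_filter(band ** 2, size=win)        # local power of the noisy band
    var_x = np.maximum(var_y - noise_var_map, 0.0)     # estimated signal variance
    gain = var_x / np.maximum(var_y, 1e-12)            # Wiener-like gain in [0, 1]
    return gain * band
```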
Most current SAR systems offer high-resolution images featuring polarimetric, interferometric, multi-frequency, multi-angle, or multi-date information. SAR images, however, suffer from strong fluctuations due to the speckle phenomenon inherent to coherent imagery. Hence, all derived parameters display strong signal-dependent variance, preventing the full exploitation of such a wealth of information. Even with the abundance of despeckling techniques proposed over the last three decades, there is still a pressing need for new methods that can handle this variety of SAR products and efficiently eliminate speckle without sacrificing the spatial resolution. Recently, patch-based filtering has emerged as a highly successful concept in image processing. By exploiting the redundancy between similar patches, it succeeds in suppressing most of the noise with good preservation of texture and thin structures. Extensions of patch-based methods to speckle reduction and joint exploitation of multi-channel SAR images (interferometric, polarimetric, or PolInSAR data) have led to the best denoising performance in radar imaging to date. We give a comprehensive survey of patch-based nonlocal filtering of SAR images, focusing on the two main ingredients of the methods: measuring patch similarity, and estimating the parameters of interest from a collection of similar patches.
We propose a new approach to synthetic aperture radar (SAR) despeckling, based on the combination of multiple alternative estimates of the same data. The many despeckling methods proposed in the literature possess different and often complementary strengths and weaknesses. Given a reliable pixelwise characterization of the image, one can take advantage of this diversity by selecting the most appropriate combination of estimators for each image region. Following this paradigm, we develop a simple algorithm where only two state-of-the-art despeckling tools, characterized by complementary properties, are linearly combined. To ensure a smooth combination of the contributions, thus avoiding new artifacts, we propose a novel soft classification method, where a basic estimate of homogeneity is improved through nonlocal and multiresolution processing steps. The results of a number of experiments conducted on both synthetic and real-world SAR images are very promising, thus confirming the potential of the proposed approach.
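The combination step described above amounts to a pixelwise convex combination driven by the soft classification map. A minimal sketch, where x_a, x_b, and homogeneity are placeholder names (not the paper's notation) for the two despeckled estimates and the homogeneity map:

```python
import numpy as np

def combine_estimates(x_a, x_b, homogeneity):
    """Pixelwise convex combination of two despeckled estimates.
    homogeneity in [0, 1]: 1 favours the smoother estimate x_a,
    0 favours the detail-preserving estimate x_b."""
    w = np.clip(homogeneity, 0.0, 1.0)
    return w * x_a + (1.0 - w) * x_b
```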
A novel adaptive and exemplar-based approach is proposed for image restoration (denoising) and representation. The method is based on a pointwise selection of similar image patches of fixed size in the variable neighborhood of each pixel. The main idea is to associate with each pixel the weighted sum of data points within an adaptive neighborhood. We use small image patches (e.g. 7 × 7 or 9 × 9 patches) to compute these weights since they are able to capture local geometric patterns and texels seen in images. In this paper, we mainly focus on the problem of adaptive neighborhood selection in a manner that balances the accuracy of approximation and the stochastic error, at each spatial position. The proposed pointwise estimator is then iterative and automatically adapts to the degree of underlying smoothness with minimal a priori assumptions on the function to be recovered. The method is applied to artificially corrupted real images and the performance is very close to, and in some cases even surpasses, that of previously published denoising methods. The proposed algorithm is demonstrated on real images corrupted by non-Gaussian noise and is used for applications in bio-imaging.
A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate typical multicarrier communication channels.
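As one concrete instance of the greedy sparse-recovery algorithms mentioned in the survey above, here is a minimal matching pursuit sketch, assuming a dictionary with unit-norm columns. It is illustrative only and not tied to any particular application domain listed in the abstract.

```python
import numpy as np

def matching_pursuit(y, D, n_atoms=10):
    """Greedy matching pursuit: approximate y as a sparse combination of
    columns of the dictionary D (columns assumed unit-norm)."""
    residual = np.asarray(y, dtype=float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual             # correlation with every atom
        k = np.argmax(np.abs(corr))       # best-matching atom
        coeffs[k] += corr[k]              # accumulate its coefficient
        residual -= corr[k] * D[:, k]     # peel it off the residual
    return coeffs, residual
```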
In image processing, restoration is expected to improve the qualitative inspection of the image and the performance of quantitative image analysis techniques. In this paper, an adaptation of the nonlocal (NL)-means filter is proposed for speckle reduction in ultrasound (US) images. Since the original filter was developed for additive white Gaussian noise, we propose a Bayesian framework to derive an NL-means filter adapted to a relevant ultrasound noise model. Quantitative results on synthetic data show the performance of the proposed method compared to well-established and state-of-the-art methods. Results on real images demonstrate that the proposed method is able to accurately preserve edges and structural details of the image.
The search for efficient image denoising methods is still a valid challenge at the crossing of functional analysis and statistics. In spite of the sophistication of the recently proposed methods, most algorithms have not yet attained a desirable level of applicability. All show an outstanding performance when the image model corresponds to the algorithm assumptions but fail in general and create artifacts or remove fine structures in images. The main focus of this paper is, first, to define a general mathematical and experimental methodology to compare and classify classical image denoising algorithms and, second, to propose a nonlocal means (NL-means) algorithm addressing the preservation of structure in a digital image. The mathematical analysis is based on the analysis of the "method noise," defined as the difference between a digital image and its denoised version. The NL-means algorithm is proven to be asymptotically optimal under a generic statistical image model. The denoising performance of all considered methods is compared in four ways; mathematical: asymptotic order of magnitude of the method noise under regularity assumptions; perceptual-mathematical: the algorithms' artifacts and their explanation as a violation of the image model; quantitative experimental: by tables of L2 distances of the denoised version to the original image. The fourth and perhaps most powerful evaluation method is, however, the visualization of the method noise on natural images. The more this method noise looks like a real white noise, the better the method.
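A direct, unoptimized sketch of the NL-means weighted average described above; patch size, search radius, and the filtering parameter h are illustrative (h here assumes intensities in [0, 1]). The "method noise" discussed in the abstract is then simply the input image minus the denoised result.

```python
import numpy as np

def nl_means_pixel(img, i, j, patch=3, search=10, h=0.1):
    """NL-means estimate of pixel (i, j): weighted average of pixels whose
    surrounding patches resemble the patch around (i, j)."""
    r = patch // 2
    pad = np.pad(img, r, mode='reflect')
    ref = pad[i:i + patch, j:j + patch]                  # reference patch
    num, den = 0.0, 0.0
    for m in range(max(i - search, 0), min(i + search + 1, img.shape[0])):
        for n in range(max(j - search, 0), min(j + search + 1, img.shape[1])):
            cand = pad[m:m + patch, n:n + patch]
            d2 = np.mean((ref - cand) ** 2)              # patch distance
            w = np.exp(-d2 / (h ** 2))                   # similarity weight
            num += w * img[m, n]
            den += w
    return num / den
```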
Neighborhood filters are nonlocal image and movie filters which reduce the noise by averaging similar pixels. The first object of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes. A CCD noise model will be presented justifying the involvement of neighborhood filters. A classification of neighborhood filters will be proposed, including classical image and movie denoising methods and discussing further a recently introduced neighborhood filter, NL-means. In order to compare denoising methods, three principles will be discussed. The first principle, "method noise", specifies that only noise must be removed from an image. A second principle will be introduced, "noise to noise", according to which a denoising method must transform a white noise into a white noise. Contrary to "method noise", this principle, which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier analysis. "Noise to noise" will be proven to rule out most denoising methods, with the exception of neighborhood filters. This is why a third and new comparison principle, the "statistical optimality", is needed and will be introduced to compare the performance of all neighborhood filters. The three principles will be applied to compare ten different image and movie denoising methods. It will be first shown that only wavelet thresholding methods and NL-means give an acceptable method noise. Second, that neighborhood filters are the only ones to satisfy the "noise to noise" principle. Third, that among them NL-means is closest to statistical optimality. Particular attention will be paid to the application of the statistical optimality criterion for movie denoising methods. It will be pointed out that current movie denoising methods are motion compensated neighborhood filters. This amounts to saying that they are neighborhood filters and that the ideal neighborhood of a pixel is its trajectory. Unfortunately, the aperture problem makes it impossible to estimate ground-truth trajectories. It will be demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes and that space-time NL-means preserves more movie details.
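The "noise to noise" principle above can be probed numerically: apply a candidate denoiser (passed in as a callable) to pure white Gaussian noise and check that its output is still approximately white, for instance through its autocorrelation. A rough sketch, with an arbitrary image size and seed:

```python
import numpy as np
from scipy.signal import fftconvolve

def noise_to_noise_check(denoise, shape=(256, 256), seed=0):
    """Apply a denoiser to pure white Gaussian noise and return the largest
    off-zero-lag normalized autocorrelation of its output; a small value
    means the output is still close to white noise."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    out = denoise(noise)
    out = out - out.mean()
    ac = fftconvolve(out, out[::-1, ::-1], mode='same')   # autocorrelation
    ac /= ac.max()
    center = np.unravel_index(np.argmax(ac), ac.shape)
    ac[center] = 0.0                                       # drop the zero lag
    return np.abs(ac).max()
```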
For more than 50 years, the mean-squared error (MSE) has been the dominant quantitative performance metric in the field of signal processing. It remains the standard criterion for the assessment of signal quality and fidelity; it is the method of choice for comparing competing signal processing methods and systems, and, perhaps most importantly, it is the nearly ubiquitous preference of design engineers seeking to optimize signal processing algorithms. This is true despite the fact that in many of these applications, the MSE exhibits weak performance and has been widely criticized for serious shortcomings, especially when dealing with perceptually important signals such as speech and images. Yet the MSE has exhibited remarkable staying power, and prevailing attitudes towards the MSE seem to range from "it's easy to use and not so bad" to "everyone else uses it." So what is the secret of the MSE-why is it still so popular? And is this popularity misplaced? What is wrong with the MSE when it does not work well? Just how wrong is the MSE in these cases? If not the MSE, what else can be used? These are the questions we'll be concerned with in this article. Our backgrounds are primarily in the field of image processing, where the MSE has a particularly bad reputation, but where, ironically, it is used nearly as much as in other areas of signal processing. Our discussion will often deal with the role of the MSE (and alternative methods) for processing visual signals. Owing to the poor performance of the MSE as a visual metric, interesting alternatives are arising in the image processing field. Our goal is to stimulate fruitful thought and discussion regarding the role of the MSE in processing other types of signals. More specifically, we hope to inspire signal processing engineers to rethink whether the MSE is truly the criterion of choice in their own theories and applications, and whether it is time to look for alternatives.
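For reference, the two kinds of metric contrasted above can be written down in a few lines: the MSE, and a simplified single-window SSIM (the standard index averages this quantity over local sliding windows). The constants follow the usual choices K1 = 0.01 and K2 = 0.03; the function names are placeholders.

```python
import numpy as np

def mse(x, y):
    """Mean-squared error between two images."""
    return np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)

def global_ssim(x, y, data_range=255.0):
    """Single-window (global) SSIM; full SSIM averages this over local windows."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```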
The recently emerging non-invasive imaging modality-optical coherence tomography (OCT)-is becoming an increasingly important diagnostic tool in various medical applications. One of its main limitations is the presence of speckle noise which obscures small and low-intensity features. The use of multiresolution techniques has been recently reported by several authors with promising results. These approaches take into account the signal and noise properties in different ways. Approaches that take into account the global orientation properties of OCT images apply accordingly different levels of smoothing in different orientation subbands. Other approaches take into account local signal and noise covariances. So far it was unclear how these different approaches compare to each other and to the best available single-resolution despeckling techniques. The clinical relevance of the denoising results also remains to be determined. In this paper we review systematically recent multiresolution OCT speckle filters and we report the results of a comparative experimental study. We use 15 different OCT images extracted from five different three-dimensional volumes, and we also generate a software phantom with real OCT noise. These test images are processed with different filters and the results are evaluated both visually and in terms of different performance measures. The results indicate significant differences in the performance of the analyzed methods. Wavelet techniques perform much better than the single-resolution ones and some of the wavelet methods remarkably improve the quality of OCT images.
The search for efficient image denoising methods still is a valid challenge, at the crossing of functional analysis and statistics. In spite of the sophistication of the recently proposed methods, most algorithms have not yet attained a desirable level of applicability. All show an outstanding performance when the image model corresponds to the algorithm assumptions, but fail in general and create artifacts or remove image fine structures. The main focus of this paper is, first, to define a general mathematical and experimental methodology to compare and classify classical image denoising algorithms, second, to propose an algorithm (Non Local Means) addressing the preservation of structure in a digital image. The mathematical analysis is based on the analysis of the "method noise", defined as the difference between a digital image and its denoised version. The NL-means algorithm is proven to be asymptotically optimal under a generic statistical image model. The denoising performance of all considered methods is compared in four ways; mathematical: asymptotic order of magnitude of the method noise under regularity assumptions; perceptual-mathematical: the algorithms' artifacts and their explanation as a violation of the image model; quantitative experimental: by tables of L2 distances of the denoised version to the original image. The most powerful evaluation method seems, however, to be the visualization of the method noise on natural images. The more this method noise looks like a real white noise, the better the method.
In recent years, Bayes least squares-Gaussian scale mixtures (BLS-GSM) has emerged as one of the most powerful methods for image restoration. Its strength lies in providing a simple and, yet, very effective local statistical description of oriented pyramid coefficient neighborhoods via a GSM vector. This can be viewed as a fine adaptation of the model to the signal variance at each scale, orientation, and spatial location. Here, we present an enhancement of the model by introducing a coarser adaptation level, where a larger neighborhood is used to estimate the local signal covariance within every subband. We formulate our model as a BLS estimator using space-variant GSM. The model can be also applied to image deconvolution, by first performing a global blur compensation, and then doing local adaptive denoising. We demonstrate through simulations that the proposed method, besides being model-based and noniterative, is also robust and efficient. Its performance, measured visually and in L2-norm terms, is significantly higher than the original BLS-GSM method, both for denoising and deconvolution. Index Terms-Bayesian estimation, Gaussian scale mixtures (GSM), image denoising, image restoration, overcomplete oriented pyramids.
The Lee sigma filter was developed in 1983 based on the simple concept of two-sigma probability, and it was reasonably effective in speckle filtering. However, deficiencies were discovered in producing biased estimation and in blurring and depressing strong reflected targets. The advancement of synthetic aperture radar (SAR) technology with high-resolution data of large dimensions demands better and efficient speckle filtering algorithms. In this paper, we extend and improve the Lee sigma filter by eliminating these deficiencies. The bias problem is solved by redefining the sigma range based on the speckle probability density functions. To mitigate the problems of blurring and depressing strong reflective scatterers, a target signature preservation technique is developed. In addition, we incorporate the minimum-mean-square-error estimator for adaptive speckle reduction. Simulated SAR data are used to quantitatively evaluate the characteristics of this improved sigma filter and to validate its effectiveness. The proposed algorithm is applied to spaceborne and airborne SAR data to demonstrate its overall speckle filtering characteristics as compared with other algorithms. This improved sigma filter remains simple in concept and is computationally efficient but without the deficiencies of the original Lee sigma filter. Index Terms-Sigma filter, speckle, speckle filtering, synthetic aperture radar (SAR).
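The MMSE estimator incorporated into the improved sigma filter is, at its core, the classic Lee filter for multiplicative speckle. A minimal sketch of that estimator alone (the redefined sigma range and the target-preservation steps of the paper are omitted); window size and number of looks are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_mmse(img, looks=4, win=7):
    """Classic Lee MMSE speckle filter for an L-look intensity image."""
    img = np.asarray(img, dtype=float)
    sigma_v2 = 1.0 / looks                               # speckle variance (unit mean)
    mean = uniform_filter(img, size=win)                  # local mean
    var = uniform_filter(img ** 2, size=win) - mean ** 2  # local variance of observation
    var_x = np.maximum((var - mean ** 2 * sigma_v2) / (1.0 + sigma_v2), 0.0)
    k = var_x / np.maximum(var, 1e-12)                    # MMSE gain
    return mean + k * (img - mean)
```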
Optical flow estimation is one of the oldest and still most active research domains in computer vision. In 35 years, many methodological concepts have been introduced and have progressively improved performance, while opening the way to new challenges. In the last decade, the growing interest in evaluation benchmarks has stimulated a great amount of work. In this paper, we propose a survey of optical flow estimation classifying the main principles elaborated during this evolution, with particular attention given to recent developments. It is conceived as a tutorial organizing current approaches and practices in a comprehensive framework. We give insights on the motivations, interests and limitations of modeling and optimization techniques, and we highlight similarities between methods to allow for a clear understanding of their behavior.
SAR (Synthetic Aperture Radar) imaging plays a central role in Remote Sensing due to, among other important features, its ability to provide high-resolution, day-and-night and almost weather-independent images. SAR images are affected by a granular contamination, speckle, which can be described by a multiplicative model. Many despeckling techniques have been proposed in the literature, as well as measures of the quality of the results they provide. Assuming the multiplicative model, the observed image Z is the product of two independent fields: the backscatter X and the speckle Y. The result of any speckle filter is X̂, an estimator of the backscatter X, based solely on the observed data Z. An ideal estimator would be the one for which the ratio of the observed image to the filtered one, I = Z/X̂, is only speckle: a collection of independent identically distributed samples from Gamma variates. We then assess the quality of a filter by how closely I adheres to the statistical properties of pure speckle. We analyze filters through the ratio image they produce with regard to first- and second-order statistics: the former check marginal properties, while the latter verify lack of structure. A new quantitative image-quality index is then defined, and applied to state-of-the-art despeckling filters. This new measure provides consistent results with commonly used quality measures (equivalent number of looks, PSNR, MSSIM, β edge correlation, and preservation of the mean), and ranks the filter results in agreement with their visual analysis. We conclude our study showing that the proposed measure can be successfully used to optimize the (often many) parameters that define a speckle filter.
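A first-order version of the ratio-image check described above is easy to sketch: compute I = Z/X̂ and verify that its mean is close to 1 and that its equivalent number of looks is close to the nominal L. Function and variable names are placeholders; the full index in the paper also tests second-order (structural) properties of the ratio image.

```python
import numpy as np

def ratio_image_stats(observed, filtered, eps=1e-12):
    """First-order check of a despeckling result via its ratio image.
    For an ideal filter applied to L-look intensity data, the ratio should
    be pure speckle: mean close to 1 and ENL close to L."""
    ratio = np.asarray(observed, float) / np.maximum(np.asarray(filtered, float), eps)
    mean = ratio.mean()
    enl = mean ** 2 / ratio.var()   # equivalent number of looks of the ratio image
    return mean, enl
```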
We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an over-complete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the coefficient amplitudes. Under this model, the Bayesian least squares estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable. We demonstrate through simulations with images contaminated by additive white Gaussian noise that the performance of this method substantially surpasses that of previously published methods, both visually and in terms of mean squared error. Index Terms-Bayesian estimation, Gaussian scale mixtures, hidden Markov model, natural images, noise removal, overcomplete representations, statistical models, steerable pyramid. THE artifacts arising from many imaging devices are quite different from the images that they contaminate, and this difference allows humans to "see past" the artifacts to the underlying image. The goal of image restoration is to relieve human observers from this task (and perhaps even to improve upon their abilities) by reconstructing a plausible estimate of the original image from the distorted or noisy observation. A prior probability model for both the noise and for uncorrupted images is of central importance for this application. Modeling the statistics of natural images is a challenging task, partly because of the high dimensionality of the signal. Two basic assumptions are commonly made in order to reduce dimensionality. The first is that the probability structure may be defined locally. Typically, one makes a Markov assumption, that the probability density of a pixel, when conditioned on a set of neighbors, is independent of the pixels beyond the neighborhood. The second is an assumption of spatial homogeneity: the distribution of values in a neighborhood is the same for all such neighborhoods, regardless of absolute spatial position. The Markov random field model that results from these two assumptions is commonly simplified by assuming the distributions are Gaussian. This last assumption is problematic for image modeling, where the complexity of local structures is not well described by Gaussian densities. The power of statistical image models can be substantially improved by transforming the signal from the pixel domain to a new representation. Over the past decade, it has become standard to initiate computer-vision and image processing tasks by decomposing the image with a set of multiscale bandpass oriented filters. This kind of representation, loosely referred to as a wavelet decomposition, is effective at d
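A scalar caricature of the Bayes least squares GSM estimate may help fix ideas: the coefficient is modelled as sqrt(z)·u plus Gaussian noise, and the estimate averages per-z Wiener gains over the posterior of the hidden multiplier z. The real estimator operates on neighborhood vectors with full covariance matrices; the grid of z values and its prior below are assumptions.

```python
import numpy as np

def bls_gsm_scalar(y, sigma_u2, sigma_w2, z_values, z_prior):
    """Scalar sketch of the BLS-GSM estimate of a single coefficient y:
    average of per-z Wiener estimates weighted by the posterior of z."""
    z = np.asarray(z_values, dtype=float)
    prior = np.asarray(z_prior, dtype=float)
    var_y = z * sigma_u2 + sigma_w2                       # variance of y given z
    lik = np.exp(-0.5 * y ** 2 / var_y) / np.sqrt(var_y)  # p(y | z), up to a constant
    post = lik * prior
    post /= post.sum()                                    # posterior p(z | y)
    wiener = z * sigma_u2 / var_y                         # per-z Wiener gain
    return np.sum(post * wiener) * y
```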
The presence of speckle in radar images makes the radiometric and textural aspects less efficient for class discrimination. Many adaptive filters have been developed for speckle reduction. In this paper, the most well-known filters are analyzed. It is shown that they are based on a test related to the local coefficient of variation of the observed image, which describes the scene heterogeneity. Some practical criteria are then introduced to modify the filters in order to make them more efficient. The filters are tested on a simulated SAR image and a SAR-580 image. As was expected, the new filters perform better, i.e., they average the homogeneous areas better and preserve texture information, edges, linear features, and point target responses better at the same time. Moreover, they can be adapted to features other than the coefficient of variation to reduce the speckle and at the same time preserve the corresponding information.
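The heterogeneity test underlying these filters is the local coefficient of variation. A minimal sketch of the resulting three-class decision, using the commonly quoted thresholds Cu = 1/sqrt(L) and Cmax = sqrt(1 + 2/L) (the exact thresholds and the per-class processing vary between filters, so treat these values as illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cov_classify(img, looks=4, win=7):
    """Classify pixels by the local coefficient of variation Cv = std/mean:
    0 -> homogeneous (Cv <= Cu), 1 -> textured (Cu < Cv < Cmax),
    2 -> point target / strong edge (Cv >= Cmax)."""
    img = np.asarray(img, dtype=float)
    cu = 1.0 / np.sqrt(looks)                 # Cv of pure speckle
    cmax = np.sqrt(1.0 + 2.0 / looks)         # commonly used upper threshold
    mean = uniform_filter(img, size=win)
    var = uniform_filter(img ** 2, size=win) - mean ** 2
    cv = np.sqrt(np.maximum(var, 0.0)) / np.maximum(mean, 1e-12)
    labels = np.where(cv <= cu, 0, np.where(cv < cmax, 1, 2))
    return labels, cv
```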
Many tasks in computer vision require matching image parts. While higher-level methods consider image features such as edges or robust descriptors, low-level approaches (so-called image-based) compare groups of pixels (patches) and provide dense matching. Patch similarity is a key ingredient to many techniques for image registration, stereo-vision, change detection or denoising. Recent progress in natural image modeling also makes intensive use of patch comparison. A fundamental difficulty when comparing two patches from "real" data is to decide whether the differences should be ascribed to noise or intrinsic dissimilarity. The Gaussian noise assumption leads to the classical definition of patch similarity based on the squared differences of intensities. For the case where noise departs from the Gaussian distribution, several similarity criteria have been proposed in the literature of image processing, detection theory and machine learning. By expressing patch (dis)similarity as a detection test under a given noise model, we introduce these criteria, along with a new one, and discuss their properties. We then assess their performance for different tasks: patch discrimination, image denoising, stereo-matching and motion-tracking under gamma and Poisson noises. The proposed criterion based on the generalized likelihood ratio is shown to be both easy to derive and powerful in these diverse applications.
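For gamma-distributed (speckle) intensities, the generalized likelihood ratio test of "same underlying reflectivity" reduces to a simple closed form per pixel, proportional to L·log((a + b)²/(4ab)) for L-look intensities a and b. A sketch under that gamma model, with placeholder names:

```python
import numpy as np

def glr_gamma_dissimilarity(patch_a, patch_b, looks=1, eps=1e-12):
    """Patch dissimilarity under gamma (speckle) noise derived from the
    generalized likelihood ratio test; smaller values mean more similar patches."""
    a = np.maximum(np.asarray(patch_a, float), eps)
    b = np.maximum(np.asarray(patch_b, float), eps)
    return looks * np.sum(np.log((a + b) ** 2 / (4.0 * a * b)))
```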
Image denoising is a central problem in image processing and it is often a necessary step prior to higher level analysis such as segmentation, reconstruction or super-resolution. The non-local means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image; they perform a weighted average of pixels whose neighborhoods (patches) are close to each other. This significantly reduces the noise while preserving most of the image content. While it performs well on flat areas and textures, it suffers from two opposite drawbacks: it might over-smooth low-contrast areas or leave a residual noise around edges and singular structures. Denoising can also be performed by total variation minimization-the ROF model-which restores regular images but is prone to over-smoothing textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing non-local methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defects by minimizing an adaptive total variation with a non-local data fidelity term. Moreover, this model adapts to different noise statistics and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and we adapt it to video denoising with 3D patches.
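A bare-bones illustration of coupling an NL-means result with total variation: gradient descent on a smoothed-TV energy anchored to the NL-means output. This is only a sketch of the general idea, with a fixed (non-adaptive) weight, crude wrap-around boundary handling, and step/epsilon values chosen for numerical stability rather than taken from the paper, which instead uses an adaptive, spatially varying regularization and a non-local data fidelity.

```python
import numpy as np

def tv_regularize(u_nl, lam=0.1, n_iter=200, step=0.1, eps=0.1):
    """Gradient descent on ||u - u_nl||^2 + lam * TV_eps(u), where TV_eps is
    a smoothed total variation and u_nl is an NL-means (or other) estimate."""
    u = np.asarray(u_nl, dtype=float).copy()
    for _ in range(n_iter):
        ux = np.diff(u, axis=1, append=u[:, -1:])      # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)    # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        # divergence of (px, py) via backward differences (wrap-around boundaries)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        grad = 2.0 * (u - u_nl) - lam * div            # gradient of the hybrid energy
        u -= step * grad
    return u
```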