Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: 1) the endmembers are the vertices of a simplex and 2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method. Index Terms-Linear unmixing, simplex, spectral mixture model, unmixing hyperspectral data, unsupervised endmember extraction, vertex component analysis (VCA).
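For intuition, here is a minimal numpy sketch of the geometric step VCA builds on (not the full algorithm, which also includes an SNR-dependent dimensionality reduction): repeatedly project the data onto a direction orthogonal to the subspace spanned by the endmembers found so far and keep the most extreme pixel as a new vertex. Function and variable names are illustrative.

```python
import numpy as np

def vca_like(X, p, seed=0):
    """Pick p candidate endmembers from X (bands x pixels) by iterative
    orthogonal projections, the core geometric step behind VCA (sketch)."""
    rng = np.random.default_rng(seed)
    L, N = X.shape
    E = np.zeros((L, p))              # selected endmember signatures
    idx = []
    A = np.zeros((L, 1))              # subspace of already-found endmembers
    for k in range(p):
        w = rng.standard_normal(L)
        P = np.eye(L) - A @ np.linalg.pinv(A)   # projector onto orthogonal complement
        f = P @ w
        f /= np.linalg.norm(f) + 1e-12
        proj = f @ X                  # project every pixel onto direction f
        i = int(np.argmax(np.abs(proj)))
        idx.append(i)
        E[:, k] = X[:, i]
        A = E[:, :k + 1]
    return E, idx
```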
Endmember extraction is a process to identify the hidden pure source signals from the mixture. In the past decade, numerous algorithms have been proposed to perform this estimation. One commonly used assumption is the presence of pure pixels in the given image scene, which are detected to serve as endmembers. When such pixels are absent, the image is referred to as highly mixed data, for which these algorithms at best can only return certain data points that are close to the real endmembers. To overcome this problem, we present a novel method without the pure pixel assumption, referred to as minimum volume constrained nonnegative matrix factorization (MVC-NMF), for unsupervised endmember extraction from highly mixed image data. Two important facts are exploited: first, the spectral data are nonnegative; second, the simplex volume determined by the endmembers is the minimum among all possible simplexes that circumscribe the data scatter space. The proposed method takes advantage of the fast convergence of NMF schemes and at the same time eliminates the pure pixel assumption. The experimental results based on a set of synthetic mixtures and a real image scene demonstrate that the proposed method outperforms several other advanced endmember detection approaches.
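As a rough illustration of the trade-off MVC-NMF balances, the sketch below evaluates a volume-regularized NMF objective: a data-fit term plus a penalty proportional to the squared volume of the simplex spanned by the candidate endmembers. Names and the penalty weight are assumptions, not the paper's implementation.

```python
import numpy as np
from math import factorial

def mvc_nmf_objective(X, M, A, lam=0.1):
    """Volume-regularized NMF objective of the kind MVC-NMF minimizes (sketch).
    X: bands x pixels, M: bands x p endmembers, A: p x pixels abundances."""
    fit = 0.5 * np.linalg.norm(X - M @ A, 'fro') ** 2
    # simplex volume of the p endmembers after projecting onto the
    # (p-1)-dimensional principal subspace of the mean-removed data
    p = M.shape[1]
    mu = X.mean(axis=1, keepdims=True)
    U = np.linalg.svd(X - mu, full_matrices=False)[0][:, :p - 1]
    Z = U.T @ (M - mu)                      # (p-1) x p reduced endmembers
    V = np.vstack([np.ones(p), Z])          # p x p matrix of augmented vertices
    vol2 = (np.linalg.det(V) / factorial(p - 1)) ** 2
    return fit + lam * vol2
```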
When considering the problem of unmixing hyperspectral images, most of the literature in the geoscience and image processing areas relies on the widely used linear mixing model (LMM). However, the LMM may not be valid, and other nonlinear models need to be considered, for instance, when there are multi-scattering effects or intimate interactions. Consequently, over the last few years, several significant contributions have been proposed to overcome the limitations inherent in the LMM. In this paper, we present an overview of recent advances in nonlinear unmixing modeling.
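As one concrete example of the kind of nonlinearity such models capture (a generic bilinear-type model, not a specific formulation from this overview), the sketch below adds pairwise endmember interaction terms to the linear mixture; the function name and the interaction coefficients gamma are illustrative.

```python
import numpy as np

def bilinear_mix(M, a, gamma, noise_std=0.0, seed=0):
    """Bilinear-type forward model: linear term plus pairwise endmember
    interactions weighted by gamma[i, j] (sketch).
    M: bands x p endmembers, a: p abundances summing to one."""
    rng = np.random.default_rng(seed)
    x = M @ a
    p = M.shape[1]
    for i in range(p):
        for j in range(i + 1, p):
            x += gamma[i, j] * a[i] * a[j] * (M[:, i] * M[:, j])
    return x + noise_std * rng.standard_normal(M.shape[0])
```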
Blind hyperspectral unmixing (HU), also known as unsupervised HU, is one of the most prominent research topics in signal processing (SP) for hyperspectral remote sensing [1], [2]. Blind HU aims at identifying materials present in a captured scene, as well as their compositions, by using the high spectral resolution of hyperspectral images. It is a blind source separation (BSS) problem from a SP viewpoint. Research on this topic started in the 1990s in geoscience and remote sensing [3]-[7], enabled by technological advances in hyperspectral sensing at the time. In recent years, blind HU has attracted much interest from other fields such as SP, machine learning, and optimization, and the subsequent cross-disciplinary research activities have made blind HU a vibrant topic. The resulting impact is not just on remote sensing: blind HU has provided a unique problem scenario that inspired researchers from different fields to devise novel blind SP methods. In fact, one may say that blind HU has established a new branch of BSS approaches not seen in classical BSS studies. In particular, the convex geometry concepts, discovered by early remote sensing researchers through empirical observations [3]-[7] and refined by later research, are elegant and very different from statistical independence-based BSS approaches.
Signal subspace identification is a crucial first step in many hyperspectral processing algorithms such as target detection, change detection, classification, and unmixing. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity and in data storage. This paper introduces a new minimum mean square error-based approach to infer the signal subspace in hyperspectral imagery. The method, which is termed hyperspectral signal identification by minimum error, is eigen decomposition based, unsupervised, and fully automatic (i.e., it does not depend on any tuning parameters). It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least squared error sense. State-of-the-art performance of the proposed method is illustrated by using simulated and real hyperspectral images. Index Terms-Dimensionality reduction, hyperspectral imagery, hyperspectral signal subspace identification by minimum error (HySime), hyperspectral unmixing, linear mixture, minimum mean square error (mse), subspace identification.
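A simplified numpy sketch of the two-step idea, under the assumptions that a band-wise regression residual is an acceptable noise estimate and that an eigen-direction is kept when its signal power exceeds twice its noise power; the actual HySime criterion is derived more carefully from the mean squared error, and the function name is illustrative.

```python
import numpy as np

def hysime_like(X):
    """Simplified subspace-size estimate in the spirit of HySime (sketch).
    X: bands x pixels. Returns estimated dimension and signal eigenvectors."""
    L, N = X.shape
    # 1) noise estimation: regress each band on all the others
    W = np.zeros_like(X)
    for l in range(L):
        idx = [i for i in range(L) if i != l]
        beta, *_ = np.linalg.lstsq(X[idx].T, X[l], rcond=None)
        W[l] = X[l] - X[idx].T @ beta            # residual = noise estimate
    Rn = (W @ W.T) / N                           # noise correlation
    Rs = (X - W) @ (X - W).T / N                 # signal correlation
    # 2) keep eigen-directions where signal power exceeds twice the noise power
    vals, E = np.linalg.eigh(Rs)
    E = E[:, np.argsort(vals)[::-1]]
    keep = [i for i in range(L)
            if E[:, i] @ Rs @ E[:, i] > 2 * (E[:, i] @ Rn @ E[:, i])]
    return len(keep), E[:, keep]
```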
This paper studies a fully Bayesian algorithm for endmember extraction and abundance estimation for hyperspectral imagery. Each pixel of the hyperspectral image is decomposed as a linear combination of pure endmember spectra following the linear mixing model. The estimation of the unknown endmember spectra is conducted in a unified manner by generating the posterior distribution of abundances and endmember parameters under a hierarchical Bayesian model. This model assumes conjugate prior distributions for these parameters, accounts for non-negativity and full-additivity constraints, and exploits the fact that the endmember proportions lie on a lower dimensional simplex. A Gibbs sampler is proposed to overcome the complexity of evaluating the resulting posterior distribution. This sampler generates samples distributed according to the posterior distribution and estimates the unknown parameters using these generated samples. The accuracy of the joint Bayesian estimator is illustrated by simulations conducted on synthetic and real AVIRIS images. Index Terms-Hyperspectral imagery, endmember extraction, linear spectral unmixing, Bayesian inference, MCMC methods.
A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate typical multicarrier communication channels.
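Among the sparse-recovery tools the tutorial surveys, matching pursuit is one of the simplest; below is a generic orthogonal matching pursuit sketch, not tied to any particular application in the paper.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of dictionary D
    (columns, assumed unit norm) that best explain the observation y."""
    resid, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        resid = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```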
Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA is founded on two assumptions: 1) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the corresponding abundance fractions (sources); 2) sources are statistically independent. Independent factor analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. Concerning hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible, and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated, since the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, sources cannot be statistically independent, which compromises the performance of ICA/IFA algorithms in hyperspectral unmixing. This paper studies the impact of hyperspectral source statistical dependence on ICA and IFA performance. We conclude that the accuracy of these methods tends to improve as the signature variability, the number of endmembers, and the signal-to-noise ratio increase. In any case, there are always endmembers incorrectly unmixed. We arrive at this conclusion by minimizing the mutual information of simulated and real hyperspectral mixtures. The computation of mutual information is based on fitting mixtures of Gaussians to the observed data. A method to sort ICA and IFA estimates in terms of the likelihood of being correctly unmixed is proposed. Index Terms-Independent component analysis (ICA), independent factor analysis (IFA), mixture of Gaussians, unmixing hyperspectral data.
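A small numpy illustration (not from the paper) of why the sum-to-one constraint rules out independence: abundance vectors drawn uniformly on the probability simplex are necessarily negatively correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
# abundance vectors of 4 endmembers drawn uniformly on the probability simplex
A = rng.dirichlet(np.ones(4), size=100_000)
print(np.corrcoef(A, rowvar=False).round(2))
# the off-diagonal entries are negative (about -1/3 here), so the "sources"
# cannot be statistically independent, which is what hurts ICA/IFA unmixing
```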
Nonnegative matrix factorization (NMF) has become a workhorse for signal and data analytics, triggered by its model parsimony and interpretability. Perhaps somewhat surprisingly, the understanding of its model identifiability (the major reason behind interpretability in many applications such as topic mining and hyperspectral imaging) had been rather limited until recent years. Beginning around 2010, identifiability research on NMF has progressed considerably: many interesting and important results have been discovered by the signal processing (SP) and machine learning (ML) communities. NMF identifiability has a great impact on many aspects in practice, such as avoiding ill-posed formulations and designing algorithms with performance guarantees. On the other hand, no tutorial paper introduces NMF from an identifiability viewpoint. In this paper, we aim to fill this gap by offering a comprehensive and deep tutorial on the model identifiability of NMF, together with its connections to algorithms and applications. This tutorial will help researchers and graduate students grasp the essence and insights of NMF, thereby avoiding the typical "pitfalls" that are often caused by unidentifiable NMF formulations. It will also help practitioners pick or design suitable factorization tools for their own problems.
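For reference, a minimal sketch of the plain multiplicative-update NMF baseline that the identifiability discussion starts from; the tutorial itself is about when such factorizations are unique, not about this particular solver, and the function name is illustrative.

```python
import numpy as np

def nmf_mu(X, r, iters=200, seed=0):
    """Plain Lee-Seung multiplicative updates for X ~ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)   # update abundances/loadings
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)   # update basis/endmembers
    return W, H
```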
Spectral unmixing is one of the main research topics in hyperspectral imaging. It can be formulated as a source separation problem whose goal is to recover the spectral signatures of the materials present in the observed scene (called endmembers) as well as their relative proportions (called fractional abundances), and this for every pixel in the image. A linear mixture model is often used for its simplicity and ease of use, but it implicitly assumes that a single spectrum can be completely representative of a material. However, in many scenarios this assumption does not hold, since many factors, such as illumination conditions and intrinsic variability of the endmembers, induce modifications of the spectral signatures of the materials. In this paper, we propose an algorithm to unmix hyperspectral data using the recently proposed Extended Linear Mixing Model. The proposed approach allows a pixelwise, spatially coherent local variation of the endmembers, leading to scaled versions of reference endmembers. We also show that the classic nonnegative least squares, as well as other approaches to tackle spectral variability, can be interpreted in the framework of this model. The results of the proposed algorithm on two synthetic datasets, including one simulating the effect of topography on the measured reflectance through physical modelling, and on two real datasets, show that the proposed technique outperforms other methods aimed at addressing spectral variability, and can provide an accurate estimation of endmember variability along the scene thanks to the estimation of the scaling factors.
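A minimal sketch, with hypothetical names, of the forward model behind the Extended Linear Mixing Model used here: each pixel mixes per-pixel scaled versions of the reference endmembers.

```python
import numpy as np

def elmm_pixel(M0, a, s, noise_std=0.0, seed=0):
    """Extended-LMM forward model for one pixel (sketch):
    x = M0 @ diag(s) @ a + n, where s holds per-endmember scaling factors
    that vary from pixel to pixel (e.g., illumination/topography effects).
    M0: bands x p reference endmembers, a: p abundances, s: p scalings."""
    rng = np.random.default_rng(seed)
    x = (M0 * s) @ a                  # scale each reference endmember column
    return x + noise_std * rng.standard_normal(M0.shape[0])
```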
This paper presents a new method of minimum volume class for hyperspectral unmixing, termed minimum volume simplex analysis (MVSA). The underlying mixing model is linear; i.e., the mixed hyperspectral vectors are modeled by a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. MVSA approaches hyperspectral unmixing by fitting a minimum volume simplex to the hyperspectral data, constraining the abundance fractions to belong to the probability simplex. The resulting optimization problem is solved by implementing a sequence of quadratically constrained subproblems. In a final step, the hard constraint on the abundance fractions is replaced with a hinge-type loss function to account for outliers and noise. We illustrate the state-of-the-art performance of the MVSA algorithm in unmixing simulated data sets. We are mainly concerned with the realistic scenario in which the pure pixel assumption (i.e., there exists at least one pure pixel per endmember) is not fulfilled. In these conditions, MVSA yields much better performance than pure-pixel-based algorithms.
Linear spectral unmixing is a commonly accepted approach to mixed-pixel classification in hyperspectral imagery. This approach involves two steps: first, finding spectrally unique signatures of pure ground components, usually known as endmembers, and, second, expressing mixed pixels as linear combinations of endmember materials. Over the past years, several algorithms have been developed for autonomous and supervised endmember extraction from hyperspectral data. Due to a lack of commonly accepted data and quantitative approaches to substantiate new algorithms, available methods have not been rigorously compared using a unified scheme. In this paper, we present a comparative study of standard endmember extraction algorithms using a custom-designed quantitative and comparative framework that involves both spectral and spatial information. The algorithms considered in this study represent substantially different design choices. A database formed by simulated and real hyperspectral data collected by the Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) is used to investigate the impact of noise, mixture complexity, and use of radiance/reflectance data on algorithm performance. The results obtained indicate that endmember selection and subsequent mixed-pixel interpretation by a linear mixture model are more successful when methods combining spatial and spectral information are applied. Index Terms-Comparative and quantitative framework, endmember extraction, spatial/spectral analysis, spectral mixture analysis.
Spectral mixture analysis provides an efficient mechanism for the interpretation and classification of remotely sensed multidimensional imagery. It aims to identify a set of reference signatures (also known as endmembers) that can be used to model the reflectance spectrum at each pixel of the original image. Thus, the modeling is carried out as a linear combination of a finite number of ground components. Although spectral mixture models have proved to be appropriate for subpixel analysis of large hyperspectral datasets, few methods are available in the literature for the extraction of appropriate endmembers in spectral unmixing. Most approaches have been designed from a spectroscopic viewpoint and, thus, tend to neglect the existing spatial correlation between pixels. This paper presents a new automated method that performs unsupervised pixel purity determination and endmember extraction from multidimensional datasets; this is achieved by using both spatial and spectral information in a combined manner. The method is based on mathematical morphology, a classic image processing technique that can be applied to the spectral domain while preserving its spatial characteristics. The proposed methodology is evaluated through a specifically designed framework that uses both simulated and real hyperspectral data. Index Terms-Automated endmember extraction, mathematical morphology, morphological eccentricity index, multidimensional analysis, spatial/spectral integration, spectral mixture model.
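A rough numpy sketch of the spatial/spectral idea, simplified from the paper's morphological-eccentricity operator: within each spatial window, score pixels by their cumulative spectral angle to the other pixels in the window and keep the most spectrally extreme (purest) one. Names and window handling are illustrative, not the paper's implementation.

```python
import numpy as np

def spectral_angle(u, v):
    c = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return np.arccos(np.clip(c, -1.0, 1.0))

def purest_in_window(cube, r0, c0, w=3):
    """cube: rows x cols x bands. Return the (row, col) of the pixel in the
    w x w window centred at (r0, c0) with the largest cumulative spectral
    angle to its neighbours, i.e. the spectrally most extreme pixel."""
    h = w // 2
    rows, cols, _ = cube.shape
    coords = [(r, c) for r in range(max(0, r0 - h), min(rows, r0 + h + 1))
                     for c in range(max(0, c0 - h), min(cols, c0 + h + 1))]
    scores = [sum(spectral_angle(cube[r, c], cube[rr, cc])
                  for rr, cc in coords) for r, c in coords]
    return coords[int(np.argmax(scores))]
```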
Spectral variability in hyperspectral images can be caused by factors including environmental, illumination, atmospheric, and temporal changes. Its occurrence may propagate significant estimation errors throughout the unmixing process. To address this problem, extended linear mixing models have been proposed, leading to large-scale, nonsmooth, ill-posed inverse problems. Moreover, the regularization strategies used to obtain meaningful results introduce interdependencies among the abundance solutions, which further increase the complexity of the resulting optimization problem. In this paper, we propose a novel data-dependent multiscale model for hyperspectral unmixing that accounts for spectral variability. The new method incorporates spatial contextual information into the abundances of an extended linear mixing model by using a superpixel-based multiscale transform. The proposed approach yields a fast algorithm that solves the abundance problem only once at each scale per iteration. Simulation results using synthetic and real images compare the accuracy and execution time of the proposed algorithm with other state-of-the-art solutions.
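A minimal sketch of the superpixel-based coarse-scale transform that such a multiscale model relies on, assuming a precomputed superpixel label map (e.g., from SLIC); the actual algorithm couples this transform with the abundance estimation at each scale, and all names here are illustrative.

```python
import numpy as np

def multiscale_transform(X, labels):
    """Coarse-scale transform (sketch): average the pixel spectra inside each
    superpixel. X: bands x pixels, labels: superpixel index per pixel.
    Returns the coarse data and the map from pixels back to coarse columns."""
    ids = np.unique(labels)
    Xc = np.stack([X[:, labels == i].mean(axis=1) for i in ids], axis=1)
    back = np.searchsorted(ids, labels)      # pixel -> coarse column index
    return Xc, back

# conjugate transform: replicate each coarse spectrum back to its pixels
# X_smooth = Xc[:, back]
```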
Mixing phenomena in hyperspectral images depend on a variety of factors such as the resolution of observation devices, the properties of materials, and how these materials interact with incident light in the scene. Different parametric and nonparametric models have been considered to address hyperspectral unmixing problems. The simplest one is the linear mixing model. Nevertheless, it has been recognized that mixing phenomena can also be nonlinear. The corresponding nonlinear analysis techniques are necessarily more challenging and complex than those employed for linear unmixing. Within this context, it makes sense to detect the nonlinearly mixed pixels in an image prior to its analysis, and then employ the simplest possible unmixing technique to analyze each pixel. In this paper, we propose a technique for detecting nonlinearly mixed pixels. The detection approach is based on the comparison of the reconstruction errors using both a Gaussian process regression model and a linear regression model. The two errors are combined into a detection statistic for which a probability density function can be reasonably approximated. We also propose an iterative endmember extraction algorithm to be employed in combination with the detection algorithm. The proposed detect-then-unmix strategy, which consists of extracting endmembers, detecting nonlinearly mixed pixels, and unmixing, is tested with synthetic and real images.
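A hedged numpy sketch of the detection idea: compare the per-pixel reconstruction error of a linear least-squares fit on the endmembers with that of an RBF kernel ridge fit, used here as a simple stand-in for the Gaussian process posterior mean, and flag pixels whose error drops markedly under the nonlinear model. The function name, parameter values, and threshold rule are assumptions, not the paper's detector.

```python
import numpy as np

def detect_nonlinear(x, M, lam=1e-3, gamma=10.0, tau=2.0):
    """x: pixel spectrum (L,), M: endmember matrix (L x p).
    Flags the pixel as nonlinearly mixed when the linear reconstruction
    error exceeds tau times the kernel (RBF ridge) reconstruction error."""
    # linear model: x ~ M a (unconstrained least squares for the sketch)
    a, *_ = np.linalg.lstsq(M, x, rcond=None)
    e_lin = np.linalg.norm(x - M @ a)
    # kernel model: per band l, x[l] ~ f(M[l, :]) with f an RBF-ridge fit
    Z = M                                          # L samples, one per band, in R^p
    K = np.exp(-gamma * np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=2))
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), x)
    e_ker = np.linalg.norm(x - K @ alpha)
    return e_lin > tau * e_ker, e_lin, e_ker
```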
A new approach to multispectral and hyperspectral image analysis is presented. This method, called convex cone analysis (CCA), is based on the fact that some physical quantities such as radiance are nonnegative. The vectors formed by discrete radiance spectra are linear combinations of nonnegative components, and they lie inside a nonnegative, convex region. The object of CCA is to find the boundary points of this region, which can be used as endmember spectra for unmixing or as target vectors for classification. To implement this concept, we find the eigenvectors of the sample spectral correlation matrix of the image. Given the number of endmembers or classes, we select as many eigenvectors corresponding to the largest eigenvalues. These eigenvectors are used as a basis to form linear combinations that have only nonnegative elements, and thus they lie inside a convex cone. The vertices of the convex cone will be those points whose spectral vector contains as many zero elements as the number of eigenvectors minus one. Accordingly, a mixed pixel can be decomposed by identifying the vertices that were used to form its spectrum. An algorithm for finding the convex cone boundaries is presented, and applications to unsupervised unmixing and classification are demonstrated with simulated data as well as experimental data from the hyperspectral digital imagery collection experiment (HYDICE). Index Terms-Classification, convex cone analysis, hyperspectral digital imagery collection experiment (HYDICE), hyperspectral image, multispectral image, unmixing.
This paper proposes a hierarchical Bayesian model that can be used for semi-supervised hyperspectral image unmixing. The model assumes that the pixel reflectances result from linear combinations of pure component spectra contaminated by an additive Gaussian noise. The abundance parameters appearing in this model satisfy positivity and additivity constraints. These constraints are naturally expressed in a Bayesian context by using appropriate abundance prior distributions. The posterior distributions of the unknown model parameters are then derived. A Gibbs sampler allows one to draw samples distributed according to the posteriors of interest and to estimate the unknown abundances. An extension of the algorithm is finally studied for mixtures with unknown numbers of spectral components belonging to a known library. The performance of the different unmixing strategies is evaluated via simulations conducted on synthetic and real data. Index Terms-Gibbs sampler, hierarchical Bayesian analysis, hyperspectral images, linear spectral unmixing, Markov chain Monte Carlo (MCMC) methods, reversible jumps.
Hyperspectral remote sensing images (HSIs) usually have high spectral resolution and low spatial resolution. Conversely, multispectral images (MSIs) usually have low spectral and high spatial resolutions. The problem of inferring images which combine the high spectral and high spatial resolutions of HSIs and MSIs, respectively, is a data fusion problem that has been the focus of recent active research due to the increasing availability of HSIs and MSIs retrieved from the same geographical area. We formulate this problem as the minimization of a convex objective function containing two quadratic data-fitting terms and an edge-preserving regularizer. The data-fitting terms account for blur, different resolutions, and additive noise. The regularizer, a form of vector Total Variation, promotes piecewise-smooth solutions with discontinuities aligned across the hyperspectral bands. The downsampling operator accounting for the different spatial resolutions, the non-quadratic and non-smooth nature of the regularizer, and the very large size of the HSI to be estimated lead to a hard optimization problem. We deal with these difficulties by exploiting the fact that HSIs generally "live" in a low-dimensional subspace and by tailoring the Split Augmented Lagrangian Shrinkage Algorithm (SALSA), which is an instance of the Alternating Direction Method of Multipliers (ADMM), to this optimization problem, by means of a convenient variable splitting. The spatial blur and the spectral linear operators linked, respectively, with the HSI and MSI acquisition processes are also estimated, and we obtain an effective algorithm that outperforms the state-of-the-art, as illustrated in a series of experiments with simulated and real-life data.
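A minimal sketch, with hypothetical operator names, of the forward model behind the two data-fitting terms: the observed HSI is a spatially blurred and downsampled version of the target image, and the observed MSI is a spectrally degraded version of it.

```python
import numpy as np
from scipy.ndimage import convolve

def observe(Z, blur, d, R):
    """Forward model behind the two data-fit terms (sketch).
    Z: target image, rows x cols x L.  blur: spatial kernel (k x k).
    d: spatial downsampling factor.  R: spectral response matrix, L_m x L.
    Returns the simulated HSI (low spatial res) and MSI (low spectral res)."""
    rows, cols, L = Z.shape
    Zb = np.stack([convolve(Z[:, :, l], blur, mode='nearest')
                   for l in range(L)], axis=2)
    Y_h = Zb[::d, ::d, :]      # blur then downsample: hyperspectral observation
    Y_m = Z @ R.T              # per-pixel spectral response: multispectral observation
    return Y_h, Y_m
```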
This paper presents a new linear hyperspectral unmixing method of the minimum volume class, termed simplex identification via split augmented Lagrangian (SISAL). Following Craig's seminal ideas, hyperspectral linear unmixing amounts to finding the minimum volume simplex containing the hyperspectral vectors. This is a nonconvex optimization problem with convex constraints. In the proposed approach, the positivity constraints, forcing the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. The obtained problem is solved by a sequence of augmented Lagrangian optimizations. The resulting algorithm is very fast and able to solve problems far beyond the reach of the current state-of-the-art algorithms. The effectiveness of SISAL is illustrated with simulated data.
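A sketch of the soft-constrained objective SISAL minimizes, written for data already reduced to the signal subspace; the sum-to-one constraint and the actual augmented Lagrangian solver are omitted, and the names are illustrative.

```python
import numpy as np

def sisal_objective(Q, Y, lam=1.0):
    """SISAL-style soft-constrained objective (sketch).
    Y: reduced data, p x N.  Q: p x p, inverse of the candidate mixing matrix.
    Abundances are A = Q @ Y; negativity is penalized with a hinge term
    instead of a hard positivity constraint."""
    A = Q @ Y
    hinge = np.maximum(-A, 0.0).sum()            # soft positivity penalty
    return -np.log(np.abs(np.linalg.det(Q))) + lam * hinge
```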
Many spectral unmixing approaches ranging from geometry and algebra to statistics have been proposed, in which nonnegative matrix factorization (NMF)-based ones form an important family. The original NMF-based unmixing algorithm loses the spectral and spatial information between mixed pixels when stacking the spectral responses of the pixels into an observed matrix. Therefore, various constrained NMF methods have been developed to impose spectral structure, spatial structure, and spectral-spatial joint structure into NMF so that the estimated endmembers and abundances preserve these structures. Compared with the matrix format, a third-order tensor is more natural for representing a hyperspectral data cube as a whole, by which the intrinsic structure of hyperspectral imagery can be losslessly retained. Extended from NMF-based methods, a matrix-vector nonnegative tensor factorization (NTF) model is proposed in this paper for spectral unmixing. Different from widely used tensor factorization models, such as canonical polyadic decomposition (CPD) and Tucker decomposition, the proposed method is derived from block term decomposition, which is a combination of CPD and Tucker decomposition. This leads to a more flexible framework for modeling various application-dependent problems. The matrix-vector NTF decomposes a third-order tensor into the sum of several component tensors, with each component tensor being the outer product of a vector (endmember) and a matrix (corresponding abundances). From a formal perspective, this tensor decomposition is consistent with the linear spectral mixture model. From an informative perspective, the structures within the spatial domain, within the spectral domain, and across the spectral-spatial domain are treated interdependently. Experiments demonstrate that the proposed method outperforms several state-of-the-art NMF-based unmixing methods. Index Terms-Hyperspectral imagery (HSI), spectral unmixing, spectral-spatial structure, tensor factorization.
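A minimal sketch of the matrix-vector (block term) reconstruction the model is built on: the data cube is the sum, over endmembers, of the outer product of an abundance map (a matrix) and an endmember spectrum (a vector). Names are illustrative.

```python
import numpy as np

def mv_ntf_reconstruct(abund_maps, endmembers):
    """Rebuild the hyperspectral cube from a matrix-vector (block term)
    factorization: cube[i, j, l] = sum_r abund_maps[r][i, j] * endmembers[r][l].
    abund_maps: list of R nonnegative (rows x cols) matrices;
    endmembers:  list of R nonnegative (bands,) vectors."""
    return sum(np.einsum('ij,l->ijl', A, c)
               for A, c in zip(abund_maps, endmembers))
```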