HYPERSPECTRAL UNMIXING ALGORITHM BASED ON NONNEGATIVE MATRIX
FACTORIZATION
Wenxing Bao*, Qin Li, Lipin Xin, Kewen Qu
School of Computer Science and Engineering,
Beifang University of Nationalities, Yinchuan 750021, China
Email: bwx71@163.com
ABSTRACT
Nonnegative matrix factorization (NMF) factorizes a
nonnegative matrix into the product of two nonnegative
matrices and is widely used in hyperspectral unmixing.
However, the convergence of NMF is comparatively slow,
and a large number of local minima arise when it is applied
directly to the factorization of hyperspectral mixed pixels.
A modified hyperspectral unmixing method based on NMF
is presented in this paper. The locally linear embedding
(LLE) algorithm is used to reduce the dimensionality of the
hyperspectral data, sparseness and smoothness constraints
are added to the cost function, and the NeNMF algorithm is
used to update the endmember and abundance matrices.
The results show that this method achieves good
classification results.
Index Terms—Hyperspectral unmixing, nonnegative
matrix factorization (NMF), locally linear embedding
(LLE), NeNMF
1. INTRODUCTION
Mixed pixels are widespread in hyperspectral images and
seriously hinder the recognition and classification of
hyperspectral data. As an unsupervised spectral unmixing
technique, NMF has achieved wide popularity. However, it
also has some drawbacks. To address the local-minimum
problem of NMF, many new algorithms have been
proposed, such as constrained NMF [1], local NMF [2], and
sparse NMF [3]. Owing to the complexity of hyperspectral
data, these improved constrained matrix factorization
algorithms still have limitations.
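For concreteness, the following is a minimal sketch of plain NMF applied to a hyperspectral data matrix, using scikit-learn; the data sizes (200 bands, 1000 pixels, 4 endmembers) and random input are purely illustrative, not from the paper:

import numpy as np
from sklearn.decomposition import NMF

# Hypothetical data: each column of V is one pixel spectrum
# (200 bands x 1000 pixels).
rng = np.random.default_rng(0)
V = rng.random((200, 1000))

# Factor V ~ E @ A with both factors nonnegative: E (200 x 4) plays
# the role of the endmember matrix, A (4 x 1000) the abundances.
model = NMF(n_components=4, init="nndsvd", max_iter=500)
E = model.fit_transform(V)
A = model.components_
print(E.shape, A.shape)   # (200, 4) (4, 1000)

Run on real data, such a plain factorization converges slowly and is sensitive to initialization, which is precisely the local-minimum problem the constrained variants above try to mitigate.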
The algorithm in this paper is proposed after an in-depth
analysis of the spectral characteristics and abundance
distributions of the endmember spectra contained in
hyperspectral mixed pixels. The NMF unmixing algorithm
assumes that spectral signatures are stable, but in practice
they always vary. Manifold learning mainly focuses on
dimensionality reduction of high-dimensional hyperspectral
data; its goal is to find the low-dimensional structure hidden
in the high-dimensional data, which is helpful for
dimensionality reduction and data decomposition [4]. The
main idea of LLE in manifold learning is to preserve the
mutual relationships of local regions in the original
manifold while mapping high-dimensional data to a
low-dimensional feature space. After dimensionality
reduction by LLE, the data change from generally nonlinear
to locally linear. Therefore, this paper uses LLE to reduce
the dimensionality of the hyperspectral data. To solve the
objective function, sparseness and smoothness constraints
are added to improve the convergence speed. Finally, the
NeNMF method is applied to accelerate the NMF
optimization [5]. This algorithm fully considers the spectral
and spatial characteristics of mixed pixels in hyperspectral
data, which improves the unmixing precision and efficiency
of NMF-based unmixing.
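As a structural sketch of the proposed pipeline, the two stages can be chained as below; standard LLE and the plain NMF call from the sketch above stand in for the sparseness- and smoothness-constrained NeNMF variant developed in this paper, so this is an assumption-laden outline rather than the authors' implementation:

import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
V = rng.random((1000, 200))                # hypothetical pixels x bands

# Stage 1: LLE turns the generally nonlinear spectral data into a
# locally linear, lower-dimensional representation (Section 2).
V_low = LocallyLinearEmbedding(n_neighbors=12,
                               n_components=10).fit_transform(V)

# Stage 2: nonnegative factorization of the reduced data; the paper
# replaces this plain NMF with NeNMF updates on a cost function that
# adds sparseness and smoothness penalties.
V_low -= V_low.min()                       # NMF needs nonnegative input
model = NMF(n_components=4, init="nndsvd", max_iter=500)
A = model.fit_transform(V_low)             # pixels x endmembers
S = model.components_                      # endmembers x reduced dims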
2. IMPROVED LLE ALGORITHM
LLE is a nonlinear dimension reduction algorithm based on
manifold proposed by Rowies and Saul in 2000, its basic
concept is to map the two adjacent or related points in high
dimension space to the low dimension space and also keep
its adjacency and relativity [4]. LLE algorithm has many
advantages, which is the reason why it is popular among
many researchers. LLE has three steps:
Step one: Construct the local neighborhoods. For every
sample point x_i in the original data set
X = \{x_1, x_2, \ldots, x_n\} \subset R^D, find its k (k < n)
nearest neighbors by Euclidean distance.
Step two: Compute the local reconstruction weight matrix
by minimizing the reconstruction error of each sample
point, defined as

\varepsilon(W) = \sum_{i=1}^{n} \Bigl\| x_i - \sum_{j=1}^{k} w_{ij} x_j \Bigr\|^2    (1)

The reconstruction weights w_ij satisfy two conditions:
w_ij = 0 when x_j does not belong to the neighborhood of
x_i, and \sum_j w_ij = 1. Finding W is therefore the
solution of a constrained least squares problem.
Step three: Map the data to the low-dimensional embedding
space Y. The cost function of the embedding space is

\Phi(Y) = \sum_{i=1}^{n} \Bigl\| y_i - \sum_{j=1}^{k} w_{ij} y_j \Bigr\|^2    (2)

which has the same form as the local linear reconstruction
error of formula (1). However, w_ij is fixed here and y_i is
optimized in this step.
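As a concrete illustration of these three steps, the following NumPy sketch computes the weights of formula (1) through the standard closed-form solution of the constrained least squares problem (a regularized local Gram system, with the weights normalized to sum to one), then minimizes formula (2) via the eigenvectors of (I - W)^T (I - W). This is baseline LLE, not the improved variant of this section, and the data and parameter values are hypothetical:

import numpy as np

def lle(X, k=10, d=2, reg=1e-3):
    """Basic LLE: X is (n_samples, n_features); returns (n_samples, d)."""
    n = X.shape[0]
    W = np.zeros((n, n))

    for i in range(n):
        # Step one: k nearest neighbors of x_i by Euclidean distance.
        dist = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dist)[1:k + 1]          # skip the point itself

        # Step two: solve the constrained least squares problem of
        # formula (1). G is the local Gram matrix; a small ridge term
        # keeps it invertible when k exceeds the feature count.
        Z = X[nbrs] - X[i]
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()                  # enforce sum-to-one

    # Step three: with W fixed, minimize formula (2). The embedding is
    # given by the eigenvectors of M = (I - W)^T (I - W) with the
    # smallest nonzero eigenvalues.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, 1:d + 1]                    # drop the constant eigenvector

# Hypothetical usage on random spectra (500 pixels, 50 bands -> 2 dims).
Y = lle(np.random.default_rng(0).random((500, 50)), k=12, d=2)
print(Y.shape)  # (500, 2)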