SIViP
DOI 10.1007/s11760-015-0808-y
ORIGINAL PAPER
Sparse matrix transform-based linear discriminant analysis
for hyperspectral image classification
Jiangtao Peng¹ · Tao Luo²
Received: 2 April 2014 / Revised: 18 July 2015 / Accepted: 26 July 2015
© Springer-Verlag London 2015
Abstract Due to the high dimensionality of hyperspectral images (HSIs), dimension reduction or feature extraction is usually needed before HSI classification. The traditional linear discriminant analysis (LDA) method for feature extraction usually encounters difficulty because the training samples available for HSI classification are limited, which causes the singularity of the data scatter matrix. In this paper, we propose a sparse matrix transform-based LDA (SMT-LDA) algorithm for HSI classification. By using the SMT, the total scatter matrix used in LDA is constrained to have an eigen-decomposition whose eigenvectors can be sparsely parametrized by a limited number of Givens rotations. In this way, the estimated scatter matrix is always positive definite and well conditioned, even with limited training samples. The proposed SMT-LDA method is compared with regularized LDA and PCA-LDA methods on two benchmark hyperspectral data sets. Experimental results indicate that the performance of the proposed method is overall superior to that of these methods, especially for small-sample-size classification.
Keywords Hyperspectral image · Linear discriminant
analysis · Sparse matrix transform · Dimension reduction ·
Small-sample-size
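The SMT covariance estimate summarized in the abstract can be sketched as follows. This is an illustrative implementation under our own conventions (the function name, the greedy pair-selection rule, and the diagonal floor 1e-12 are assumptions, not taken from the paper): each of the K Givens rotations zeroes the currently most-correlated off-diagonal entry of the sample scatter matrix, and the reconstruction from the rotated diagonal is positive definite by construction.

```python
import numpy as np

def smt_covariance(S, K):
    """Estimate a covariance from sample scatter S with K Givens rotations.

    Each rotation zeroes the currently most-correlated off-diagonal
    entry of S (the sparse matrix transform idea); the final estimate
    G * diag(rotated S) * G^T is positive definite whenever the rotated
    diagonal is positive (a small floor guards against degeneracy).
    """
    p = S.shape[0]
    S = S.copy()
    G = np.eye(p)
    for _ in range(K):
        # Pick the pair (i, j) with the largest normalized correlation.
        C = S ** 2 / np.outer(np.diag(S), np.diag(S))
        np.fill_diagonal(C, 0.0)
        i, j = np.unravel_index(np.argmax(C), C.shape)
        # Givens angle that zeroes S[i, j] (Jacobi rotation condition).
        theta = 0.5 * np.arctan2(2.0 * S[i, j], S[i, i] - S[j, j])
        c, s = np.cos(theta), np.sin(theta)
        R = np.eye(p)
        R[i, i] = R[j, j] = c
        R[i, j], R[j, i] = -s, s
        S = R.T @ S @ R
        G = G @ R
    lam = np.maximum(np.diag(S), 1e-12)  # assumed floor, keeps estimate PD
    return G @ np.diag(lam) @ G.T
```

Because G is a product of orthogonal rotations and the retained diagonal is strictly positive, the estimate stays invertible even when the number of samples is smaller than the dimensionality, which is exactly the failure mode of the plain sample scatter matrix in LDA.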
Jiangtao Peng
pengjt1982@126.com

Tao Luo
luo_tao@tju.edu.cn

1 Faculty of Mathematics and Statistics, Hubei Key Laboratory of Applied Mathematics, Hubei University, Wuhan 430062, China

2 School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300072, China
1 Introduction
Hyperspectral remote sensors capture digital images in hundreds of narrow spectral bands spanning the visible-to-infrared spectrum [1]. The resulting high-resolution hyperspectral images (HSIs) are used for environmental mapping, geological research, plant and mineral identification, crop analysis, and so on. All of these applications usually require classifying the pixels in the scene, where a pixel (or sample) is represented as a vector whose entries correspond to the reflection or absorption values in different spectral bands. In HSI classification, we usually have few training samples (small sample size) coupled with a large number of spectral channels (high dimensionality) [2]. The large number of bands provides rich information for classifying different materials in the scene. However, with few training samples, beyond a certain limit the classification accuracy decreases as the number of features increases (the Hughes phenomenon [3]). Good classification performance therefore requires more training samples, which are rarely available in hyperspectral remote sensing applications. Consequently, the classification of high-dimensional small-sample hyperspectral data is relatively difficult. Moreover, the large number of features involved in an HSI dramatically increases the processing complexity. An HSI generally consists of thousands of pixels over hundreds of spectral bands. Classifying this tremendous amount of data is time-consuming and requires significant computational effort, which may not be feasible in many applications. Therefore, for the classification of HSI data, it is common to perform a dimension reduction or feature extraction procedure before applying classification algorithms [4–6].
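As a concrete, hypothetical illustration of this reduce-then-classify pipeline (all names are ours; PCA here stands in for any feature extractor, and the nearest-class-mean rule for any classifier), the sketch below projects samples onto their top principal components and then labels each sample by the closest class mean in the reduced space:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project samples (rows of X) onto their top principal components."""
    Xc = X - X.mean(axis=0)
    # Eigenvectors of the sample covariance, sorted by decreasing eigenvalue.
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1][:n_components]
    return Xc @ vecs[:, order]

def nearest_mean_classify(Z_train, y_train, Z_test):
    """Assign each test sample to the class whose mean feature is closest."""
    classes = np.unique(y_train)
    means = np.array([Z_train[y_train == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(Z_test[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]
```

Reducing the dimensionality first shrinks both the computational cost and the number of parameters the classifier must estimate from the few available training samples.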
A basic and commonly used method for feature extraction is Fisher linear discriminant analysis (LDA) [7,8]. The objective of LDA is to find the most discriminant projection