Hyperspectral Image Classification Using Kernel
Sparse Representation and Semilocal
Spatial Graph Regularization
Jianjun Liu, Zebin Wu, Le Sun, Zhihui Wei, and Liang Xiao
Abstract—This letter presents a postprocessing algorithm for
a kernel sparse representation (KSR)-based hyperspectral image classifier, built on the integration of spatial and spectral
information. A pixelwise KSR is first used to find the sparse
coefficient vectors of the hyperspectral image. Then, a sparsity
concentration index (SCI) rule-guided semilocal spatial graph
regularization (SSG), called SSG+SCI, is proposed to determine
refined sparse coefficient vectors that promote spatial continuity
within each class. Finally, these refined coefficient vectors are
used to obtain the final classification map. Compared with previous approaches based on similar spatial–spectral postprocessing strategies, SSG+SCI clearly outperforms them in terms of both accuracy and the number of training samples required, as demonstrated on two real hyperspectral images.
Index Terms—Graph regularization, hyperspectral image classification, kernel sparse representation (KSR), sparsity concentration index (SCI).
I. INTRODUCTION
HYPERSPECTRAL imaging sensors capture digital images in hundreds of narrow and contiguous spectral bands spanning the visible to infrared spectrum. The wealth of spectral information promotes the development of many application
domains, such as military, agriculture, and mineralogy. Among
these applications, image classification is an important one, in which each pixel is assigned to one of the classes. Various methods
have been developed for hyperspectral image classification and
have shown good performance, such as support vector machines (SVMs) [1], [2], multinomial logistic regression (MLR)
[3], and sparse representation (SR) [4].
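As a point of reference for what pixelwise classification means here, the following sketch (in Python, assuming scikit-learn) trains an RBF-kernel SVM on labeled spectra and then labels every pixel of the image independently; the function name and parameter values are illustrative assumptions, not the configurations used in [1], [2].

import numpy as np
from sklearn.svm import SVC

def pixelwise_svm(train_spectra, train_labels, image_cube, C=100.0, gamma=0.5):
    """Illustrative pixelwise classifier: fit an RBF-kernel SVM on labeled
    spectra and predict a label for every pixel of the hyperspectral cube.
    train_spectra: (n_train, n_bands); train_labels: (n_train,)
    image_cube:    (rows, cols, n_bands)."""
    clf = SVC(kernel="rbf", C=C, gamma=gamma)
    clf.fit(train_spectra, train_labels)
    rows, cols, n_bands = image_cube.shape
    pred = clf.predict(image_cube.reshape(-1, n_bands))
    return pred.reshape(rows, cols)  # pixelwise classification map

Note that each pixel is labeled from its spectrum alone; the spatial arrangement of the pixels plays no role in this baseline.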
Recent work has highlighted that hyperspectral image classification should not only focus on analyzing spectral features but should also incorporate information from spatially adjacent pixels.
Generally, spatial–spectral classification methods fall into two groups. The first group consists of preprocessing methods, in which spatial contextual information is incorporated into the feature representation before classification. In [2], Camps-Valls et al.
combine spectral and spatial information within a feature vector
of each pixel by taking advantage of the composite kernels and
then apply a pixelwise SVM to the obtained set of vectors.
Gurram and Kwon [5] exploit both local spectral and spatial
information by weighting the neighboring pixels during Hilbert
space embedding and then build a large-margin SVM on the
weighted means of small patches of pixels. The second group is
composed of algorithms in which the spatial dependence is
exploited at a postprocessing stage. An example is a pixelwise
classification followed by Markov random field regularization
of the classification map [3], [6].
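To make the second (postprocessing) group concrete, the sketch below applies a simple iterated-conditional-modes style smoothing to a pixelwise probabilistic classification map: each label is repeatedly updated to maximize its pixelwise log-probability plus a bonus for agreeing with its 4-neighbors. The neighborhood structure, the weight beta, and the function name are assumptions for illustration, not the regularizers used in [3], [6].

import numpy as np

def smooth_classification_map(log_prob, beta=1.0, n_iter=5):
    """MRF-style postprocessing sketch (iterated conditional modes).
    log_prob: (rows, cols, n_classes) pixelwise log-probabilities.
    Returns a (rows, cols) regularized label map."""
    rows, cols, _ = log_prob.shape
    labels = log_prob.argmax(axis=2)  # initial pixelwise map
    for _ in range(n_iter):
        for i in range(rows):
            for j in range(cols):
                scores = log_prob[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols:
                        scores[labels[ni, nj]] += beta  # reward label agreement
                labels[i, j] = scores.argmax()
    return labels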
Kernel SR (KSR), a kernel version of SR, has recently been applied with success to face recognition and hyperspectral image classification [7], [8]. Unlike SVM, KSR is a nonparametric learning method: it does not assume a fixed set of hypothesis functions whose weight vectors must be learned. KSR can instead be treated as a learning machine in which the classification process is implemented by means of signal reconstruction.
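To make this reconstruction-based decision rule concrete, the sketch below implements the plain (linear) sparse-representation classifier: a test spectrum is sparsely coded over the training dictionary, and the label is the class whose atoms give the smallest reconstruction residual. The OMP solver, the sparsity level, and all names are illustrative; the kernel variant performs essentially the same steps with inner products replaced by kernel evaluations.

import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: sparse code of y over dictionary D
    (columns are l2-normalized training atoms)."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def sr_label(D, atom_labels, y, n_nonzero=5):
    """Assign y to the class whose atoms yield the smallest residual."""
    x = omp(D, y, n_nonzero)
    residuals = {c: np.linalg.norm(y - D[:, atom_labels == c] @ x[atom_labels == c])
                 for c in np.unique(atom_labels)}
    return min(residuals, key=residuals.get)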
This letter concentrates on KSR. Nevertheless, KSR is a
pixelwise classification method that does not consider the correlations between spatially adjacent pixels. As pointed out earlier,
two categories of methods can be used to incorporate spatial
information. For the first one, a common approach is to use the
composite kernel framework [8]. For the second one, no related approaches have been reported so far. Unlike classifiers
such as MLR and probabilistic SVM (PSVM) [9], the output of KSR consists of sparse coefficient vectors, whereas the outputs of MLR and PSVM are probabilistic classification maps.
Spatial smoothness is the main assumption underlying spatial–spectral classification of hyperspectral images: neighboring pixels are assumed to consist of the same type of materials (same class) and to have similar spectral characteristics. In [10], the
Local Invariance Assumption (LIA) is imposed as follows:
If two pixels are close in the intrinsic geometry of the data
distribution, then the coding vectors of these two pixels with
respect to the new basis are also close to each other. Based
on LIA, the spatial smoothness assumption also applies to the sparse coefficient vectors, since all pixels share the same
basis (training dictionary) in KSR. Therefore, the coefficient vectors of neighboring pixels should be similar to each other.
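A minimal sketch of how a spatial graph can enforce this smoothness on the coefficient vectors is given below: the initial sparse codes are refined with a graph-Laplacian (Tikhonov) penalty over a pixel adjacency graph, and an SCI score, following the standard definition of Wright et al., measures how concentrated a code is on a single class. This only illustrates the generic ingredients; the semilocal graph construction and the exact SCI rule of SSG+SCI are given later in the letter, so the weights W, the parameter lam, and the closed-form solve are assumptions for illustration.

import numpy as np

def sparsity_concentration_index(x, atom_labels, n_classes):
    """SCI of a sparse code x (standard definition, assumed here):
    SCI = (n_classes * max_c ||x_c||_1 / ||x||_1 - 1) / (n_classes - 1), in [0, 1]."""
    l1 = np.abs(x).sum()
    if l1 == 0:
        return 0.0
    per_class = [np.abs(x[atom_labels == c]).sum() for c in range(n_classes)]
    return (n_classes * max(per_class) / l1 - 1.0) / (n_classes - 1.0)

def graph_smooth_coefficients(X0, W, lam):
    """Graph-regularized refinement of sparse coefficient vectors.
    X0: (n_atoms, n_pixels) initial codes, one column per pixel.
    W:  (n_pixels, n_pixels) spatial adjacency weights (e.g., 4-neighborhood).
    Minimizes ||X - X0||_F^2 + lam * tr(X L X^T) with L = D - W,
    whose closed-form solution is X = X0 (I + lam * L)^{-1}."""
    L = np.diag(W.sum(axis=1)) - W
    return X0 @ np.linalg.inv(np.eye(W.shape[0]) + lam * L)

In an SCI-guided rule, such a score can serve to decide which pixels' codes are reliable and which should instead be refined by their spatial neighbors.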
Graph-based methods are commonly used in image classification problems [10]–[12]. These methods rely on defining