
IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 12, NO. 3, MARCH 2015 611
Fusion of MS and PAN Images
Preserving Spectral Quality
Hamid Reza Shahdoosti and Hassan Ghassemian, Senior Member, IEEE
Abstract—Image fusion aims at improving spectral information
in a fused image as well as adding spatial details to it. Among
the existing fusion algorithms, filter-based fusion methods are
the most frequently discussed cases in recent publications due
to their ability to improve spatial and spectral information of
multispectral (MS) and panchromatic (PAN) images. Filter-based
approaches extract spatial information from the PAN image and
inject it into MS images. This letter presents the design of an optimal
filter that extracts relevant and nonredundant information from the
PAN image. The optimal filter coefficients, derived from the statistical
properties of the images, are more consistent with the type and texture
of the remotely sensed images than other kernels such as wavelets are. Visual and
statistical assessments show that the proposed algorithm clearly
improves the fusion quality in terms of correlation coefficient,
relative dimensionless global error in synthesis, spectral angle
mapper, universal image quality index, and quality without reference,
as compared with existing fusion methods, including improved
intensity–hue–saturation, multiscale Kalman filter, Bayesian, improved
nonsubsampled contourlet transform, and sparse fusion of images.
Index Terms—Directional filter, image fusion, optimal filter,
pan-sharpening, spectral information.
I. INTRODUCTION
IN REMOTE sensing systems, scenes are observed in different portions of
the electromagnetic spectrum and at different resolutions. To collect
more energy while maintaining the signal-to-noise ratio, multispectral
(MS) sensors, which have high spectral resolution, provide poorer spatial
quality than the panchromatic (PAN) sensor, which has higher spatial
resolution and a wider spectral bandwidth. By means of image fusion, it
is possible to synthesize images with high spatial resolution and
appropriate spectral content.
A large collection of fusion algorithms has been proposed over the last
decades, initially based on component substitution, such as
intensity–hue–saturation (IHS) [1] and principal component analysis
(PCA) [2], or on relative spectral contribution, such as intensity
modulation [3] and Brovey [4]. These simple and popular fusion methods
introduce spectral distortion into the fused image.
The filter-based image fusion methods provide more spec-
tral and spatial information, and as a consequence, the fused
products have a higher quality. High-pass filtering [5] is the earliest
such method; the generalized Laplacian pyramid (GLP) [6], the à trous
wavelet transform (ATWT) [7], and the nonsubsampled contourlet transform
(NSCT) [8] are the current filter-based fusion methods. The wavelet
transform is highly effective at representing details with 1-D
singularities. For 2-D objects, however, it uses a tensor product of two
1-D transforms and is hence incapable of representing directional
objects. The NSCT is more flexible: it allows any number of directions
at each scale and captures the spatial structures of images along smooth
contours, and it is thereby more efficient at representing 2-D objects.

[Manuscript received February 2, 2014; revised July 1, 2014 and July 25,
2014; accepted August 27, 2014. The authors are with the Faculty of
Electrical and Computer Engineering, Tarbiat Modares University, Tehran
14155-4843, Iran (e-mail: hamidreza.shahdoosti@modares.ac.ir;
ghassemi@modares.ac.ir). Digital Object Identifier
10.1109/LGRS.2014.2353135]
Some improved filter-based algorithms, such as the multiscale Kalman
filter (MKF) [9] and improved NSCT [8], consider both the MS and PAN
images when injecting the high-frequency details of the PAN image. These
methods thus obtain a fused image with high spatial and spectral quality.
In this letter, a new adaptation of the filter-based fusion method is
proposed that varies how the low-pass filter and the extracted spatial
information are calculated depending on the initial MS and PAN images.
The designed filter preserves the spectral quality of the expanded MS
images and improves the spatial quality by minimizing a tradeoff
objective function.
The filter-based fusion model is briefly discussed in the
next section. In Section III, both spectral and spatial qualities
are considered for designing the optimal filter. To verify the
efficiency of the proposed method, visual and quantitative
assessments are carried out on MS and PAN data in Section IV.
Finally, the conclusion is presented in Section V.
II. FILTER-BASED MODEL
The filter-based image fusion model is [6]
    F_i = MS_i + G_i (PAN − h ∗ PAN)    (1)

where F_i is the ith fused band, MS_i is the ith MS band resampled to
the scale of the PAN image, and h denotes an n × n low-pass filter, i.e.,

    h = ( h_{1,1}   ···   h_{1,n}
            ⋮    h_{(n+1)/2,(n+1)/2}    ⋮
          h_{n,1}   ···   h_{n,n} ).
In this model, a low-pass-filtered version of the PAN image,
PAN_L = h ∗ PAN, is first created. The high-frequency component of the
PAN image is then extracted by subtraction and added to the MS bands
[see (1)].
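As a minimal sketch of model (1) — not the authors' implementation — the detail extraction and injection can be written with NumPy and SciPy; the 5 × 5 averaging kernel and the constant gain value here are placeholder assumptions, not the optimal filter this letter designs:

```python
import numpy as np
from scipy.ndimage import convolve

def filter_based_fusion(ms_band, pan, h, gain):
    """Fuse one resampled MS band with the PAN image per (1):
    F_i = MS_i + G_i * (PAN - h * PAN)."""
    pan_low = convolve(pan, h, mode="nearest")  # PAN_L = h * PAN (low-pass)
    detail = pan - pan_low                      # high-frequency component
    return ms_band + gain * detail              # inject details into the MS band

# Toy example: 5x5 averaging low-pass kernel (an assumption for illustration)
n = 5
h = np.ones((n, n)) / n**2
pan = np.random.rand(64, 64)      # stand-in PAN image
ms = np.random.rand(64, 64)       # stand-in MS band, already resampled to PAN scale
fused = filter_based_fusion(ms, pan, h, gain=0.8)
```

With gain = 0 the output reduces to the resampled MS band, which is a quick sanity check that only the high-frequency PAN component is being injected.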
If G_i is a constant for the ith band, then it is usually calculated
from global statistics on the whole image; GLP is a particular case in
which G_i is obtained from the regression
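A common global choice for such a regression-based gain — offered here as a hedged sketch, since the exact estimator used by GLP variants may differ — is the least-squares slope of the resampled MS band against the low-pass PAN image, i.e., cov(MS_i, PAN_L) / var(PAN_L):

```python
import numpy as np

def regression_gain(ms_band, pan_low):
    """Global injection gain G_i from a least-squares regression of the
    resampled MS band on the low-pass PAN image (one common choice)."""
    x = pan_low.ravel() - pan_low.mean()
    y = ms_band.ravel() - ms_band.mean()
    return float(np.dot(x, y) / np.dot(x, x))  # cov(MS_i, PAN_L) / var(PAN_L)

# Synthetic check: if MS_i = 2 * PAN_L + 3, the recovered gain is 2
rng = np.random.default_rng(0)
pan_low = rng.random((32, 32))
ms_band = 2.0 * pan_low + 3.0
g = regression_gain(ms_band, pan_low)
```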
1545-598X © 2014 IEEE.