656 CHINESE OPTICS LETTERS / Vol. 8, No. 7 / July 10, 2010
Novel image fusion method based on discrete
fractional random transform
Qing Guo and Shutian Liu∗
Department of Physics, Harbin Institute of Technology, Harbin 150001, China
∗E-mail: stliu@hit.edu.cn
Received December 8, 2009
We introduce a new spectrum transform into the image fusion field and propose a novel fusion method
based on discrete fractional random transform (DFRNT). In DFRNT domain, high amplitude spectrum
(HAS) and low amplitude spectrum (LAS) components carry different information about the original images.
For different fusion goals, different fusion rules can be adopted in the HAS and LAS components, respectively.
The proposed method is applied to fuse real multi-spectral (MS) and panchromatic (Pan) images. The fused
image is observed to preserve both the spectral information of MS and the spatial information of Pan. The
spectrum distribution of the DFRNT is random and uniform, which ensures that useful information is retained.
OCIS codes: 100.0100, 280.0280, 330.0330, 070.0070.
doi: 10.3788/COL20100807.0656.
Image fusion involves combining multiple images of the
same scene with complementary information to generate
a new composite image with more information and bet-
ter quality than the individual image obtained solely by
a single sensor. In remote sensing, multi-spectral (MS)
images provide sufficient spectral information but poor
spatial resolution, while panchromatic (Pan) images offer
high spatial resolution but limited spectral information.
In this letter, we aim to achieve pixel-level fusion of MS
and Pan images to preserve spectral information while
enhancing spatial details, which can better serve appli-
cations such as land classification and road detection.
There are various fusion algorithms at the pixel
level, including intensity-hue-saturation (IHS), Brovey,
wavelet, and contourlet transforms[1-5]. The IHS method
transforms three MS bands from red-green-blue (RGB)
space into IHS space to separate spatial information from
spectral components. After replacing intensity with Pan,
the merged result is converted back into RGB space. Al-
though this method can preserve high spatial resolution,
it distorts spectral information[6]. Brovey fusion is a
simple color-normalization method that commonly introduces
spectral distortion. In the contourlet method, the down-
sampling and up-sampling steps cause the contourlet
transform to lose translation invariance and introduce
Gibbs artifacts in the resultant image.
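The two direct pixel-level schemes above can be sketched compactly. The version below uses the common "fast additive IHS" simplification, in which the intensity is taken as the mean of the three bands, so that replacing intensity with Pan reduces to adding (Pan − I) to each band; that simplification, the array shapes, and the epsilon guard are illustrative choices, not details from this paper.

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Fast additive IHS fusion sketch.

    ms  : float array of shape (3, H, W), the RGB multispectral bands
    pan : float array of shape (H, W), the panchromatic image

    With intensity taken as the band mean, replacing I by Pan is
    equivalent to adding (Pan - I) to every band.
    """
    intensity = ms.mean(axis=0)
    return ms + (pan - intensity)          # broadcast over the 3 bands

def brovey_fuse(ms, pan, eps=1e-6):
    """Brovey fusion sketch: scale each band by Pan over the band sum
    (a simple color normalization)."""
    return ms * (pan / (ms.sum(axis=0) + eps))
```

Note that the intensity (band mean) of the IHS result equals Pan exactly, which is why the spatial detail transfers but the spectral balance can shift, matching the distortion noted above.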
For wavelet methods, the Pan and each band of MS im-
ages are decomposed into an approximation and a set of
detailed images. Band by band, the approximation image
from MS is combined with details from Pan. Then, in-
verse wavelet transform is performed to obtain the fused
images. This method can produce sound fusion results;
however, the wavelet decomposition level affects fusion
performance. If the decomposition level is
low, fused images preserve more spectral characteristics
but fail to preserve spatial details appropriately. With a
higher level of decomposition, the performance of spatial
details gradually increases; however, the spectral infor-
mation cannot be preserved very well as low frequency
coefficients are decomposed repeatedly.
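The substitutive wavelet scheme described above can be sketched at a single decomposition level. The paper does not specify a wavelet or level, so the sketch below uses an orthonormal Haar transform implemented directly with NumPy, purely for illustration.

```python
import numpy as np

def haar2(x):
    """One level of a 2-D Haar transform: approximation plus
    (row-difference, column-difference, diagonal) detail subbands."""
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 2
    h = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 2
    v = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 2
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 2
    return a, (h, v, d)

def ihaar2(a, details):
    """Inverse of haar2 (perfect reconstruction)."""
    h, v, d = details
    m, n = a.shape
    x = np.empty((2 * m, 2 * n))
    x[0::2, 0::2] = (a + h + v + d) / 2
    x[0::2, 1::2] = (a + h - v - d) / 2
    x[1::2, 0::2] = (a - h + v - d) / 2
    x[1::2, 1::2] = (a - h - v + d) / 2
    return x

def wavelet_fuse_band(band, pan):
    """Single-level substitutive fusion for one MS band:
    approximation from MS, detail subbands from Pan."""
    a_ms, _ = haar2(band)
    _, d_pan = haar2(pan)
    return ihaar2(a_ms, d_pan)
```

A multilevel version would recurse on the approximation subband, which is exactly where the level-dependent trade-off described above arises: each extra level moves more of the MS spectral content into subbands taken from Pan.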
IHS and Brovey transforms are direct conversions be-
tween pixel values of images, while wavelet transform is
a joint space-frequency transform. Wavelet coefficients
directly display the approximation and detail images
corresponding to the original image. Such representations
are marked by incompleteness and uncertainty. Although
the wavelet transform carries space and frequency
information, it has no exact transform domain. Two-dimensional
(2D) wavelet bases are isotropic and offer limited
directional representation of image details. It is
noted that Fourier transform (FT) and fractional Fourier
transform (FrFT) are joint space-frequency transforms.
Their transform coefficients represent the contribution
of each basis function at each frequency, thus they have
exact transform domains. They can show the transform
spectrum, and the spatial image is obtained only after the
inverse transform. FT and FrFT clearly display features
of signals in the frequency domain that are difficult
to discern in the spatial domain. Their kernel functions
allow perfect frequency resolution to be obtained, as
the kernel per se is a window of infinite length. FT and
FrFT convert grayscale distribution of an image into its
frequency distribution, and frequency indicates the ex-
tent of change in gray scale. Therefore, performing
fusion in such transform domains indirectly modifies the
original image, based simultaneously on spatial image
features and on spectrum distribution features.
In this letter, we propose a novel fusion method based
on discrete fractional random transform (DFRNT)[7].
DFRNT originates from the discrete fractional Fourier
transform (DFrFT)[8]. It features excellent mathematical
properties inherited from FrFT, in addition to a num-
ber of special spectrum distribution features of its own.
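As we read Ref. [7], the DFRNT kernel of order α has the form R^α = V D^α V^T, where V collects the eigenvectors of a random symmetric matrix and D^α holds fractional powers of unit-modulus eigenvalues. The sketch below follows that reading; the seeded random matrix and the unit period chosen for the eigenvalue phases are illustrative assumptions, not values taken from this paper.

```python
import numpy as np

def dfrnt_kernel(m, alpha, seed=0):
    """Sketch of an m x m DFRNT kernel of fractional order alpha.

    A random symmetric matrix is eigendecomposed; its orthonormal
    eigenvectors V combine with unit-modulus fractional eigenvalue
    powers to give R^alpha = V D^alpha V^T. The phase period is
    normalized to 1 here for simplicity (an assumption).
    """
    rng = np.random.default_rng(seed)
    p = rng.random((m, m))
    q = (p + p.T) / 2                       # random symmetric matrix
    _, v = np.linalg.eigh(q)                # real orthonormal eigenvectors
    n = np.arange(m)
    d = np.exp(-2j * np.pi * n * alpha)     # fractional eigenvalue powers
    return v @ np.diag(d) @ v.T
```

Because all orders share one eigenvector set, the kernel is unitary and index-additive (R^a R^b = R^(a+b)), the key properties inherited from the DFrFT; the randomness enters only through the eigenvectors V.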
The randomness of the DFRNT spreads the information
changed by fusion across the spectrum, which has a smaller
influence than same-strength changes concentrated at one
location in the spectrum. This ensures less spectral distortion.
The uniformity of the DFRNT ensures that most fusion
results remain acceptable when distortions occur at any
position in the spectrum; this lends a certain robustness to
the method. In the DFRNT domain, a nominal high-
frequency component with spatial details and a nomi-
1671-7694/2010/070656-05 © 2010 Chinese Optics Letters