A Novel Image Fusion Rule Based on Structure
Similarity Indices
Shi Su, Fuxiang Wang
School of Electronic and Information Engineering
Beihang University
Beijing, China
Abstract—A novel image fusion rule named "variance-choose-max", based on the Structure Similarity Index, is proposed in this paper. First, sparse representations of the source image patches are acquired through the basis training algorithm K-SVD and the pursuit algorithm Orthogonal Matching Pursuit (OMP). Then, we group the image patches into relevant patches and independent patches according to the Structure Similarity Index of each patch pair. Finally, we fuse the corresponding sparse coefficients of the relevant patches and the independent patches with the "coefficient-choose-max" rule and the new "variance-choose-max" rule, respectively. Experiments show that the proposed method achieves good performance both in the visual quality of the fused image and in objective metrics.
Keywords - image fusion; Structure Similarity Index; relevant
patch; independent patch; “variance-choose-max” rule; K-SVD.
I. INTRODUCTION
Image fusion, a branch of image processing, aims to combine several source images acquired by multiple sensors into a single fused image. The fused image should contain all the key information present in the source images. Image fusion therefore enables convenient comprehension and effective analysis of the complementary information obtained by the different sensors, and it is significant in various applications such as medical diagnosis, computer vision, object detection and remote sensing.
Generally, image fusion consists of two steps: image transformation and fusion rule. In the image transformation step, most methods comprise two sub-procedures: choosing or training certain bases and then calculating the corresponding coefficients. In recent decades, a variety of transform algorithms have been utilized for image transformation. According to the way the bases are acquired, these algorithms can be classified into two groups: one presets the bases, such as the discrete wavelet transform (DWT) and the discrete cosine transform (DCT), while the other obtains the bases through training, such as Independent Component Analysis (ICA) and K-SVD. The authors in [1] apply the wavelet transform to image fusion by transforming images from the spatial domain to the wavelet domain and then processing the data in the wavelet domain. Source images acquired from different sensors serve as the input data in the spatial domain and, given a specific preset basis (such as the Haar wavelet, the Morlet wavelet or another mother wavelet), the wavelet coefficients are the output data in the wavelet domain.
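For illustration, the following Python sketch performs wavelet-domain fusion with a preset basis; the Haar wavelet, the decomposition level and the max-absolute-coefficient selection rule are illustrative assumptions rather than the exact settings used in [1].

import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="haar", level=2):
    # Transform both source images from the spatial domain to the wavelet domain.
    coeffs_a = pywt.wavedec2(img_a, wavelet, level=level)
    coeffs_b = pywt.wavedec2(img_b, wavelet, level=level)

    # Average the approximation bands; keep the larger-magnitude detail coefficients.
    fused = [(coeffs_a[0] + coeffs_b[0]) / 2.0]
    for da, db in zip(coeffs_a[1:], coeffs_b[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))

    # The inverse transform reconstructs the fused image in the spatial domain.
    return pywt.waverec2(fused, wavelet)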
N. Mitianoudis et al. propose an image fusion method based on ICA [2-4]. Using the ICA transform, source image features are represented by ICA bases obtained through training, and the corresponding coefficients are used to construct the fused image. The authors in [5,6] propose algorithms to obtain a sparse representation of the source images, whose bases (described as a dictionary in [7,8]) are gained by training on samples extracted from the source images.
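As a rough sketch of such dictionary-based sparse representation, the code below uses scikit-learn's MiniBatchDictionaryLearning as a stand-in for K-SVD and its OMP transform for the pursuit step; the patch size, dictionary size and sparsity level are illustrative assumptions.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def sparse_represent(images, patch_size=(8, 8), n_atoms=256, sparsity=5):
    dim = patch_size[0] * patch_size[1]
    # Vectorised patches from all source images serve as training samples.
    samples = np.vstack([extract_patches_2d(img, patch_size).reshape(-1, dim)
                         for img in images])

    learner = MiniBatchDictionaryLearning(n_components=n_atoms,
                                          transform_algorithm="omp",
                                          transform_n_nonzero_coefs=sparsity)
    learner.fit(samples)  # train the dictionary (the bases)

    # Sparse coefficients of every patch of every source image.
    codes = [learner.transform(extract_patches_2d(img, patch_size).reshape(-1, dim))
             for img in images]
    return learner.components_, codes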
Our work mainly concentrates on the stages after image transformation. In the image transformation process, we adopt K-SVD to obtain the trained bases of the source image patches and utilize the pursuit algorithm OMP to calculate the corresponding sparse coefficients, yielding the sparse representation of the source image patches. Then, according to the Structure Similarity Index (SSIM) [9] of each image patch pair, we conduct a patch grouping process that classifies these patches into relevant patches and independent patches before the fusion step. Finally, in the fusion rule process, we combine the "coefficient-choose-max" rule with a novel fusion rule named "variance-choose-max" to fuse the image patches: relevant patches are fused with the "coefficient-choose-max" rule, while independent patches are fused with the proposed "variance-choose-max" rule. A detailed description is given in Section III.
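As an illustration of the grouping step only (the exact SSIM criterion and the fusion rules themselves are specified in Section III), the following sketch splits co-located patch pairs by a hypothetical SSIM threshold; 8-bit patches of at least 7x7 pixels are assumed.

import numpy as np
from skimage.metrics import structural_similarity

def group_patches(patches_a, patches_b, threshold=0.6):
    # patches_a[i] and patches_b[i] are co-located patches of the two source images.
    relevant, independent = [], []
    for idx, (pa, pb) in enumerate(zip(patches_a, patches_b)):
        ssim = structural_similarity(pa, pb, data_range=255, win_size=7)
        # High SSIM: structurally similar ("relevant") pair;
        # low SSIM: the patches carry independent information.
        (relevant if ssim >= threshold else independent).append(idx)
    return relevant, independent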
In Section II, we describe the background of image fusion based on joint sparse representation. Section III details our proposed method, Section IV presents our results and a comparison with other methods, and Section V analyzes and concludes our research.
II. BACKGROUND
Image fusion based on joint sparse representation consists
of two parts: image transformation and fusion rule.
Let