Infrared and visible image fusion method based on saliency detection in sparse domain
C.H. Liu a,*, Y. Qi b, W.R. Ding a
a Institute of Unmanned System, Beihang University, Xueyuan Road, Haidian District, 100191 Beijing, China
b State Key Laboratory of Virtual Reality Technology and System, Beihang University, No. 37 Xueyuan Road, Haidian District, 100191 Beijing, China
Highlights
An infrared and visible image fusion method based on saliency detection in sparse domain is proposed.
The global and local saliency maps of the source images are obtained based on sparse coefficients.
An integrated saliency map is generated by combining the global and local saliency maps.
The image fusion procedure is guided by the integrated saliency map under the framework of JSR model.
Accuracy assessment on real data shows the great potential of the proposed method.
Article info
Article history:
Received 17 December 2016
Revised 21 March 2017
Accepted 27 April 2017
Available online 28 April 2017
Keywords:
Image fusion
Joint sparse representation
Saliency detection
Infrared image
Visible image
Abstract
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed that combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to complete the fusion process. The experimental results show that our method is superior to state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in visual quality.
© 2017 Elsevier B.V. All rights reserved.
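The weighted fusion step outlined in the abstract can be illustrated with a minimal sketch. Note that this is not the paper's full JSR-based procedure: the function below simply assumes that per-pixel saliency maps for the infrared and visible images are already available (however obtained), and fuses the two images with saliency-normalized per-pixel weights. The function name `weighted_fusion` and the toy inputs are illustrative.

```python
import numpy as np

def weighted_fusion(ir, vis, s_ir, s_vis, eps=1e-12):
    """Fuse two registered images with per-pixel saliency weights.

    Generic rule: w = S_ir / (S_ir + S_vis), fused = w * ir + (1 - w) * vis.
    `eps` guards against division by zero where both saliencies vanish.
    """
    w = s_ir / (s_ir + s_vis + eps)
    return w * ir + (1.0 - w) * vis

# Toy 2x2 example: if the infrared image is fully salient at every pixel
# and the visible image has zero saliency, the fused result is (up to eps)
# the infrared image itself.
ir = np.array([[1.0, 0.5], [0.25, 0.0]])
vis = np.array([[0.0, 1.0], [0.75, 1.0]])
fused = weighted_fusion(ir, vis, np.ones_like(ir), np.zeros_like(ir))
```

With equal saliency maps the rule reduces to a plain average of the two sources, which is why a well-contrasted saliency map (such as the integrated map proposed in the paper) is needed for the weights to favor the more informative sensor at each pixel.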
1. Introduction
As an important class of fusion technology, infrared and visible image fusion has been widely used in many military and civilian applications, such as target detection, surveillance and intelligence gathering. Fusion technology combines the complementary information captured by each sensor of the same scene into one image, which provides more precise information than any single image [1]. For objects that need to be detected, infrared and visible image fusion can effectively enhance their characteristics, especially in weak-light or low-light environments.
Various algorithms for infrared and visible image fusion have
been developed over the past few decades. Early image fusion
methods mainly include pixel-domain image fusion methods, which complete the fusion process directly in the spatial domain.
The representative pixel-domain image fusion methods are the
intensity-hue-saturation (IHS) [2], the principal component analy-
sis (PCA) [3] and total variation minimization (TV) [4], etc. Though
this type of method is fast and easy to implement, the fusion effect
is quite limited. Multiscale transform (MST) based fusion methods have become another popular class of fusion methods in the past few years.
The classical MST-based methods are the pyramids based [5] and
wavelet transform based methods [6]. Recently developed fusion
methods start to employ multiscale geometric analysis (MGA)
tools, such as the Curvelet Transform [7], the Shearlet Transform
[8], the nonsubsampled Contourlet Transform (NSCT) [9] and their
variants [10]. However, the predefined basis functions of MST-based methods limit their ability to accurately represent the edges and textures of the images, which seriously affects the fusion result.
Sparse representation (SR) based methods have recently drawn significant interest in image fusion due to their
http://dx.doi.org/10.1016/j.infrared.2017.04.018
* Corresponding author. E-mail address: liuchunhui2134@126.com (C.H. Liu).
Infrared Physics & Technology 83 (2017) 94–102