Infrared and visible image fusion using multiscale directional nonlocal means filter
XIANG YAN, HANLIN QIN,* JIA LI, HUIXIN ZHOU, JING-GUO ZONG, AND QINGJIE ZENG
School of Physics and Optoelectronic Engineering, Xidian University, Xi’an Shaanxi 710071, China
*Corresponding author: hlqin@mail.xidian.edu.cn
Received 28 January 2015; revised 6 April 2015; accepted 7 April 2015; posted 8 April 2015 (Doc. ID 233260); published 1 May 2015
Fusion of infrared and visible images is a significant research area in image analysis and computer vision. The purpose of infrared and visible image fusion is to combine the complementary information of the source images into a single fused image. It is therefore vital to represent the important information of the source images efficiently and to choose rational fusion rules. To this end, an image fusion method using a multiscale directional nonlocal means (MDNLM) filter is proposed in this paper. The MDNLM combines the edge-preserving property of the nonlocal means filter with the capacity of the directional filter bank to capture directional image information, so it can effectively represent the intrinsic geometric structure of images. The MDNLM is a multiscale, multidirectional, and shift-invariant image decomposition method, and we use it here to fuse infrared and visible images. First, the MDNLM is introduced and used to decompose the source images into approximation subbands and directional detail subbands. Then, the approximation and directional detail subbands are fused by a local neighborhood gradient weighted fusion rule and a local eighth-order correlation fusion rule, respectively. Finally, the fused image is obtained through the inverse MDNLM. Comparison experiments have been performed on different image sets, and the results clearly demonstrate that the proposed method is superior to several conventional and recently proposed fusion methods in terms of visual effect and objective evaluation.
© 2015 Optical Society of America
OCIS codes: (100.0100) Image processing; (100.4994) Pattern recognition, image transforms; (100.4997) Pattern recognition,
nonlinear spatial filters; (350.2660) Fusion.
http://dx.doi.org/10.1364/AO.54.004299
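To fix ideas, the following minimal Python sketch shows the generic decompose-fuse-reconstruct pipeline the abstract describes. A stationary wavelet transform from PyWavelets stands in for the MDNLM, a simple average stands in for the local neighborhood gradient weighted rule, and a per-pixel max-absolute rule stands in for the local eighth-order correlation rule; the paper's actual decomposition and fusion rules are defined in later sections.

```python
import numpy as np
import pywt  # PyWavelets

def fuse_multiscale(img_a, img_b, wavelet="db2", levels=2):
    """Generic multiscale fusion sketch: a shift-invariant (stationary)
    wavelet transform stands in for the MDNLM decomposition."""
    # Image sides must be divisible by 2**levels for swt2.
    coeffs_a = pywt.swt2(img_a, wavelet, level=levels)
    coeffs_b = pywt.swt2(img_b, wavelet, level=levels)

    fused = []
    for (ca_a, det_a), (ca_b, det_b) in zip(coeffs_a, coeffs_b):
        # Approximation subbands: simple average (stand-in rule).
        ca_f = 0.5 * (ca_a + ca_b)
        # Detail subbands: keep the larger-magnitude coefficient
        # at each pixel (stand-in rule).
        det_f = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                      for da, db in zip(det_a, det_b))
        fused.append((ca_f, det_f))
    # Reconstruct the fused image with the inverse transform.
    return pywt.iswt2(fused, wavelet)

# Example with random stand-ins for infrared and visible images.
ir = np.random.rand(128, 128)
vis = np.random.rand(128, 128)
fused = fuse_multiscale(ir, vis)
```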
1. INTRODUCTION
Image fusion is an important research area in image analysis and computer vision applications; it combines the important information of different images acquired from two or more sensors, or from one sensor at different times [1,2]. To date, image fusion research has focused on multifocus image fusion, infrared and visible image fusion, multimodal medical image fusion, and remote sensing image fusion. Among these, infrared and visible image fusion plays an important role in extending the value of image fusion technology to civilian and military applications such as object detection [3], object tracking [4], asymmetric cryptosystems, and hiding [5] and securing [6] multiple images. This is because infrared images clearly present the target regions but cannot describe the detail information of the scene well, whereas visible images, acquired through a visible-light image sensor, describe the details of the subjects and the scene much better. Thus, by fusing infrared and visible images, we can obtain images that are clear in both the object regions and the detail parts.
Until now, many image fusion methods have been proposed. Among them, multiscale transform-based methods are the mainstream, including the pyramid transform-based method [7,8], the wavelet transform-based method [9–13], the nonsubsampled contourlet transform-based method [14,15], and the nonsubsampled shearlet transform-based method [16]. The principal component analysis (PCA)-based method is also a practical image fusion method [17]. Methods based on multiscale transforms decompose the source images into different subbands at different scales to efficiently represent the detail information and the approximation information, but some image detail information may be smoothed in the process, which degrades the result of infrared and visible image fusion. The PCA-based fusion method extracts the primary information of the source images and combines it effectively to obtain the fused image; it is often used for special purposes. Image filter theory is a critical part of image processing, and different filters are widely used in image denoising [18,19], image super-resolution reconstruction [20], image enhancement [21], and other fields. In recent years, some scholars have introduced image filter theories into image fusion, such as the cross bilateral filter-based method [22], the multiscale directional bilateral filter-based method [23], and the guided filter-based method [24].
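As background for the building block used throughout this paper, the snippet below applies a classical nonlocal means filter to a noisy test image, here via scikit-image's denoise_nl_means rather than the authors' code, to illustrate the filter's edge-preserving smoothing; the parameter values are illustrative only.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# Noisy synthetic stand-in for an infrared or visible frame.
rng = np.random.default_rng(0)
img = np.clip(np.linspace(0, 1, 128)[None, :]
              + 0.1 * rng.standard_normal((128, 128)), 0, 1)

# Estimate the noise level, then smooth with patch-based nonlocal means:
# each pixel is averaged with others weighted by patch similarity, which
# preserves edges better than purely local smoothing.
sigma = float(np.mean(estimate_sigma(img)))
denoised = denoise_nl_means(img, patch_size=5, patch_distance=6,
                            h=0.8 * sigma, fast_mode=True, sigma=sigma)
```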