Detail-Preserving Exposure Fusion Based on
Adaptive Structure Patch Decomposition
Mali Yu
School of Information Science and
Technology
Jiujiang University
Jiujiang, China
mary8011@163.com
Xinyu Li
School of Information Science and
Technology
Jiujiang University
Jiujiang, China
892838720@qq.com
Wuyan Cheng
School of Information Science and
Technology
Jiujiang University
Jiujiang, China
1578640237@qq.com
Hai Zhang
School of Information Science and
Technology
Jiujiang University
Jiujiang, China
zhanghai0792@qq.com
Abstract—Exposure fusion aims at improving the overall visual quality, especially in the extremely bright and dark regions, by merging images captured at different exposures. The structure patch decomposition based exposure fusion (SPD-EF) method can generate images with overall good visual quality without any post-processing steps. However, SPD-EF inevitably induces halo artifacts in regions of large local contrast. In this paper, we propose an adaptive SPD-EF that preserves fine details while avoiding halo artifacts. In the first step, each input image is partitioned into overlapping patches of the same size, each of which is decomposed into signal strength, signal structure, and mean intensity. In the second step, we adjust the signal strength with an adaptive scale factor based on the local contrast and intensity, after which the three components are merged separately. Qualitative and quantitative comparisons with four current state-of-the-art methods demonstrate that our method not only removes halo artifacts but also preserves vivid color appearance and fine details.
Keywords—adaptive structure patch decomposition, signal
decomposition, exposure fusion, detail-preserving exposure
fusion
I. INTRODUCTION
The luminance of real scenes spans too wide a range to be captured by a common digital camera in a single shot. As a result, details in saturated regions, i.e., the extremely bright and dark regions, are commonly lost. To overcome this limitation, many high dynamic range (HDR) imaging methods have been proposed in recent years, which can be grouped into two classes [1,2].
The first class of methods consists of three steps. First, the camera response function (CRF) is estimated from the pixel intensities and exposure times using a model such as a power function [3], least squares fitting [4], or a Bayesian statistical model [5]. Second, the fused radiance is computed as the weighted sum of the radiances recovered by the estimated CRF. Finally, tone mapping [6-8] converts the fused radiance into a low dynamic range (LDR) image suitable for common display devices. However, the performance of these methods relies heavily on the accuracy of the estimated CRF, and the process is computationally expensive.
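For illustration only, the following minimal sketch shows the common weighted-average form of radiance recovery used by this class of methods, assuming the inverse CRF has already been estimated; the function and parameter names (recover_radiance, inv_crf, exposure_times) are placeholders rather than the API of any cited work.

```python
import numpy as np

def recover_radiance(images, exposure_times, inv_crf):
    """Weighted-average radiance recovery from multiple exposures.

    images         : list of float arrays in [0, 1], all the same shape
    exposure_times : list of exposure times in seconds
    inv_crf        : callable mapping pixel intensity back to sensor exposure
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # "Hat" weighting de-emphasizes under- and over-exposed pixels.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * inv_crf(img) / t   # radiance estimate from this exposure
        den += w
    return num / np.maximum(den, 1e-6)
```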
The other class of methods directly merges all input LDR images and is commonly termed exposure fusion. That is, the fused value $F_p$ of pixel $p$ is defined as

$F_p = \sum_{n=1}^{N} W_{p,n}\, I_{p,n}$,  (1)

where $N$ is the number of input images, and $W_{p,n}$ and $I_{p,n}$ denote the weight and the intensity of pixel $p$ in the $n$-th image, respectively.
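A minimal numerical sketch of Eq. (1) is given below, assuming the per-pixel weight maps have already been normalized to sum to one across exposures; the variable names are illustrative only.

```python
import numpy as np

def fuse_exposures(images, weights):
    """Pixel-wise exposure fusion, Eq. (1): F_p = sum_n W_{p,n} * I_{p,n}.

    images  : list of N float arrays of identical shape
    weights : list of N per-pixel weight maps, normalized to sum to 1 per pixel
    """
    fused = np.zeros_like(images[0], dtype=np.float64)
    for img, w in zip(images, weights):
        fused += w * img
    return fused
```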
Most exposure fusion methods perform pixel-wise fusion, in which the weight map is computed from features of a single pixel, e.g., contrast, saturation, and well-exposedness [9], gradient [10], gradient direction [11], local entropy [12], or discrete wavelets [13]. Because these methods risk introducing spatial artifacts, additional processing steps such as a Gaussian smoothing pyramid [9], a boosting Laplacian pyramid [14], edge-preserving filters [15,16], or an edge-preserving smoothing pyramid [17] are applied. However, these methods struggle to preserve both the fine details in saturated regions and overall good visual quality. Ma et al. [18] proposed structure patch decomposition based exposure fusion (SPD-EF). By jointly considering the RGB color channels of all pixels in a patch, SPD-EF produces better visual quality without any post-processing steps. However, it risks introducing halo artifacts near edges with large local contrast, such as areas around windows and lights.
In this paper, we propose an adaptive SPD-EF method. First, all input images are partitioned into overlapping patches, each of which is decomposed into three components: signal strength, signal structure, and mean intensity. Instead of using the signal strength directly as in [18], an adaptive scale factor based on the local contrast and intensity is introduced to adjust the signal strength, which effectively compresses the differences among signal strengths. Finally, the three components are merged separately. The fused images preserve both fine details and good visual quality, and are free of halo artifacts. The experimental results show that our method outperforms four current state-of-the-art methods both qualitatively and quantitatively.
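For concreteness, a minimal sketch of the patch decomposition shared by SPD-EF [18] and the first step of our method is given below, following the standard decomposition of a patch into signal strength, signal structure, and mean intensity; the adaptive scale factor is described later in the paper, and the interface here (a patch passed as a flattened RGB vector, the eps constant) is an illustrative assumption.

```python
import numpy as np

def decompose_patch(patch, eps=1e-9):
    """Decompose a patch (flattened RGB vector) into signal strength,
    signal structure, and mean intensity, so that
    patch ~= strength * structure + mean_intensity.
    """
    mean_intensity = patch.mean()                           # mean intensity
    centered = patch - mean_intensity
    signal_strength = np.linalg.norm(centered)              # signal strength
    signal_structure = centered / (signal_strength + eps)   # unit-norm structure
    return signal_strength, signal_structure, mean_intensity
```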