Spatially variant defocus blur map estimation and deblurring from a single image
Xinxin Zhang, Ronggang Wang ⇑, Xiubao Jiang, Wenmin Wang, Wen Gao
Peking University Shenzhen Graduate School, China
Article history:
Received 7 September 2015
Accepted 5 January 2016
Available online 11 January 2016
Keywords:
Spatially variant blur
Edge information
Defocus image deblurring
Image deblurring
Blur map estimation
Ringing artifacts removal
Image restoration
Non-blind deconvolution
Abstract
In this paper, we propose a single-image deblurring algorithm to remove spatially variant defocus blur based on an estimated blur map. Firstly, we estimate the blur map from a single image by utilizing edge information and K nearest neighbors (KNN) matting interpolation. Secondly, local kernels are derived by segmenting the blur map according to the blur amount of local regions and image contours. Thirdly, we adopt a BM3D-based non-blind deconvolution algorithm to restore the latent image. Finally, ringing artifacts and noise are detected and removed to obtain a high-quality in-focus image. Experimental results on real defocus-blurred images demonstrate that the proposed algorithm outperforms several state-of-the-art approaches.
© 2016 Elsevier Inc. All rights reserved.
1. Introduction
A conventional camera with a low f-number is sensitive to defocus and has a shallow depth of field, which often results in defocus blur. For a scene with multiple depth layers, sometimes only one layer can be in focus during image capture. This may be done deliberately by photographers for artistic effect. However, in an out-of-focus image, texture details are blurred or even become invisible, which also degrades the performance of object detection, recognition, tracking and compression [30,31]. Therefore, in many scenarios, out-of-focus blur should be avoided.
In most cases, multiple depth layers lead to spatially variant blur. The defocus process can be analyzed with the thin-lens imaging model. As illustrated in Fig. 1, light rays emitted from a point on the focal plane converge to a single point on the camera sensor, but rays emitted from a point behind or in front of the focal plane spread over a circular region on the sensor, called the circle of confusion (CoC). The larger the distance between the object and the focal plane, the larger the diameter of the CoC becomes. The diameter of the CoC characterizes the amount of blur and can be calculated by the similar-triangle principle.
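The similar-triangle relation can be sketched numerically. Assuming a thin lens with focal length f, f-number N (so aperture diameter A = f/N), focal-plane distance d_f, and object distance d, the CoC diameter follows from comparing the image distance of the point with the sensor distance; the function name and parameters below are illustrative, not from the paper.

```python
def coc_diameter(d, d_f, f, N):
    """Circle-of-confusion diameter (same units as f) for a point at depth d,
    with the camera focused at depth d_f, focal length f, and f-number N."""
    A = f / N                      # aperture diameter
    s_f = f * d_f / (d_f - f)      # sensor distance for the in-focus plane
    s_d = f * d / (d - f)          # image distance for the point at depth d
    # similar triangles: the defocused light cone of width A is cut at s_f
    return A * abs(s_d - s_f) / s_d

# Camera focused at 2 m (units: mm), f = 50 mm, f/2.8:
print(coc_diameter(2000.0, 2000.0, 50.0, 2.8))   # 0.0 -- on the focal plane
print(coc_diameter(4000.0, 2000.0, 50.0, 2.8) <
      coc_diameter(8000.0, 2000.0, 50.0, 2.8))   # True -- CoC grows with distance
```

As the text notes, the CoC diameter grows monotonically with the distance between the object and the focal plane, on both sides of it.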
Spatially variant defocus deblurring from a single image is a highly challenging problem. Firstly, the blur amount is closely related to scene depth. However, exact depth values cannot be recovered from a single image, so the thin-lens model cannot be applied directly to the deblurring problem. Secondly, the blur amount may change abruptly at object boundaries or vary continuously across the image, as shown in Fig. 2(a) and (b) respectively. On one hand, when the blur amount changes abruptly, the image can be split into several regions and the spatially variant deblurring problem can be transformed into a set of locally uniform deblurring problems; however, blurred-image segmentation and ringing artifacts along edges then become the primary problems to be solved. On the other hand, when the blur amount changes continuously, the depth layers are hard to separate. Thirdly, as shown in Fig. 2(c), some regions are defocused deliberately for artistic effect, and much of their high-frequency information is lost.
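The region-wise strategy for abrupt blur changes can be sketched as follows: model each region's defocus PSF as a disk whose radius matches the local CoC, deconvolve the whole image once per kernel, and composite the results by region. Wiener deconvolution is used here only as a simple stand-in for the BM3D-based non-blind deconvolution the paper adopts; all function names and parameters (e.g. the noise-to-signal ratio `nsr`) are illustrative.

```python
import numpy as np

def disk_kernel(radius, size):
    """Pillbox PSF, the standard model of defocus blur with CoC radius `radius`."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def psf_to_otf(kernel, shape):
    """Zero-pad the PSF to the image size and shift its center to the origin."""
    psf = np.zeros(shape)
    kh, kw = kernel.shape
    psf[:kh, :kw] = kernel
    psf = np.roll(psf, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(psf)

def wiener_deconv(blurred, kernel, nsr=1e-2):
    """Frequency-domain Wiener deconvolution with a scalar noise-to-signal ratio."""
    H = psf_to_otf(kernel, blurred.shape)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H)**2 + nsr)))

def piecewise_deconv(blurred, labels, kernels, nsr=1e-2):
    """Deconvolve the image once per region kernel, then composite by label map."""
    out = np.zeros_like(blurred)
    for lab, k in kernels.items():
        restored = wiener_deconv(blurred, k, nsr)
        out[labels == lab] = restored[labels == lab]
    return out
```

In practice the composite seams must be handled carefully: ringing along region boundaries is exactly the artifact singled out above as a primary problem of the region-wise formulation.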
Spatially variant defocus deblurring has attracted much attention in recent years. To obtain more information for kernel estimation, one group of methods used special camera equipment to capture photographs. Vu et al. [14] estimated depth from a pair of stereoscopic images and exploited the depth of field to calculate the diameter of the CoC for each depth layer. Levin et al. [9] restored a single defocus-blurred image captured with a modified camera: they inserted a patterned occluder within the aperture of the camera lens to create a coded aperture, and combined it with a sparse prior to improve the accuracy of blur-scale estimation. Zhou et al. [17] used a pair of optimized coded apertures to capture
http://dx.doi.org/10.1016/j.jvcir.2016.01.002
This paper has been recommended for acceptance by M.T. Sun.
⇑ Corresponding author. E-mail address: rgwang@pkusz.edu.cn (R. Wang).
J. Vis. Commun. Image R. 35 (2016) 257–264