Discriminative Blur Detection Features

Jianping Shi†    Li Xu‡    Jiaya Jia†
†The Chinese University of Hong Kong
‡Image & Visual Computing Lab, Lenovo R&T
jpshi@cse.cuhk.edu.hk    xulihk@lenovo.com    leojia@cse.cuhk.edu.hk
http://www.cse.cuhk.edu.hk/leojia/projects/dblurdetect/
Abstract
Ubiquitous image blur raises a practically important question: what are effective features to differentiate between blurred and unblurred image regions? We address it by studying several blur feature representations in image gradient, the Fourier domain, and data-driven local filters. Unlike previous methods, which are often based on restoration mechanisms, our features are constructed to enhance discriminative power and are adaptive to various blur scales in images. To enable evaluation, we build a new blur perception dataset containing thousands of images with labeled ground-truth. Our results are applied to several applications, including blur region segmentation, deblurring, and blur magnification.
1. Introduction
Blur is one type of photo degradation that leads to loss of details. In many cases, it can also be a visual effect purposely generated by photographers to give prominence to foreground persons or other important objects, based on defocus or camera/object motion.
With the fast development of computer vision tech-
niques, it becomes important and practical to understand
information immersed in blurred images or regions. We ad-
dress a central blur detection problem in this area, since
quickly and effectively finding blur pixels can naturally
benefit many applications including but not restricted to im-
age segmentation, object detection, scene classification, im-
age quality assessment, image restoration, and photo editing
[6, 23, 21], given the fact that many blurred images exist on-
line or are produced from personal cameras.
There have been a series of methods directly solving
blind [4, 25, 7, 15] and non-blind [27, 12] deconvolution
problems. They aim at explicitly inferring latent images
and/or blur kernels. Our goal in blur detection is not to
follow this line using deconvolution [11]. Instead, we will
focus on finding and constructing blur feature representa-
tions directly from input images and making them potent
enough to differentiate between blurred and unblurred re-
gions, which are of high importance in feature understand-
ing.
A few previous methods relate to explicit blur detection. Levin [14] used image statistics to identify partial motion blur. Lin et al. [16] also explored natural image statistics for blur analysis. Liu et al. [17] designed four local blur features for blur confidence and type classification. Chakrabarti et al. [3] analyzed directional blur via local Fourier transform. Dai and Wu [5] developed a two-layer image model on the alpha channel to estimate partial blur. Different from these approaches, which directly fit natural image statistics, in this paper we analyze feature discrepancy in gradient and Fourier space. We also propose a few features that have decent discrimination ability both theoretically and empirically.
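The gradient-domain discrepancy can be illustrated with a toy statistic (the function names, box kernel, and threshold below are illustrative assumptions, not the paper's actual features): blurring concentrates gradient magnitudes around zero, so even a crude "peakedness" measure separates a blurred patch from a sharp one.

```python
import numpy as np

def box_blur(img, k=9):
    # Separable box filter: a crude stand-in PSF for illustration only.
    ker = np.ones(k) / k
    img = np.apply_along_axis(lambda r: np.convolve(r, ker, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, ker, mode="same"), 0, img)
    return img

def gradient_peakedness(patch, thresh=0.02):
    # Fraction of near-zero gradient magnitudes in a patch. Blurred
    # regions concentrate gradients near zero, so a higher value
    # loosely indicates blur; the threshold is an arbitrary choice.
    gx = np.diff(patch, axis=1)
    gy = np.diff(patch, axis=0)
    mags = np.concatenate([np.abs(gx).ravel(), np.abs(gy).ravel()])
    return float(np.mean(mags < thresh))

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))      # high-frequency synthetic texture
blurred = box_blur(sharp)

# The blurred patch shows far more near-zero gradients than the sharp one.
assert gradient_peakedness(blurred) > gradient_peakedness(sharp)
```

A discriminative feature would replace this single threshold with a richer description of the gradient-magnitude distribution, which is the direction the paper pursues.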
In addition to feature construction, we explore a data-driven solution, which learns local filters. We build a new blur detection dataset that contains 1000 images with human-labeled ground-truth blur regions. These data not only make detection results convincing, but also provide a useful resource for understanding blur with respect to structure diversity in natural images. The dataset enables training and testing, which are traditionally hard to implement without suitable data.
Our contribution is three-fold. First, we design a set of
blur features in multiple domains. Second, we develop a
multi-scale solution for blur perception that avoids scale
ambiguity. Third, we build a blur detection dataset with
ground-truth labels on 1000 images, which provides a rea-
sonable evaluation platform for blur analysis. We apply our
results to several applications, including blur region seg-
mentation, image deblurring, and blur magnification.
2. Blur Features
We deal with challenging partially blurred images where
the point spread function (PSF) varies across the image.
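A spatially varying PSF can be made concrete with a small sketch (the box kernel, mask layout, and function name are assumptions for illustration; the paper does not commit to any specific kernel form): the image is blurred only inside a masked region, leaving the rest sharp, which mimics partial defocus.

```python
import numpy as np

def partial_blur(img, mask, k=7):
    # Apply a separable box PSF everywhere, then keep the blurred
    # values only inside the mask: a piecewise spatially varying PSF
    # (identity kernel outside the mask, box kernel inside it).
    ker = np.ones(k) / k
    b = np.apply_along_axis(lambda r: np.convolve(r, ker, mode="same"), 1, img)
    b = np.apply_along_axis(lambda c: np.convolve(c, ker, mode="same"), 0, b)
    return np.where(mask, b, img)

rng = np.random.default_rng(1)
img = rng.random((64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[:, 32:] = True                     # right half gets defocus-like blur

out = partial_blur(img, mask)
assert np.array_equal(out[:, :32], img[:, :32])   # sharp half untouched
assert out[:, 36:].std() < img[:, 36:].std()      # blurred half smoothed
```

Detecting which pixels fall inside such a mask, without knowing the kernel, is exactly the blur detection problem this section sets up.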
2014 IEEE Conference on Computer Vision and Pattern Recognition
1063-6919/14 $31.00 © 2014 IEEE
DOI 10.1109/CVPR.2014.379
2961