Regular Clipping Detection of Image Based on SIFT
Huang Yan-li, Cui Hao-liang, Niu Shao-zhang
Beijing Key Lab of Intelligent Telecommunication Software and Multimedia
Beijing University of Posts and Telecommunications
Beijing 100876, China
Deneae@163.com, cui.haoliang@163.com, szniu@bupt.edu.cn
Abstract—To detect whether an image has been cropped and to determine the
cropped region, a new scheme based on SIFT (Scale-Invariant Feature
Transform) is proposed for regular clipping operations in the case where the
original image is available. Feature points of the original and test images
are extracted by the SIFT algorithm, and the corresponding feature points are
matched. From the relative positions of the matched points, the scale ratio
of the test image with respect to the original can be calculated, and the
cropped region can then be determined. Experimental results show that the
detection method works well for regular clipping.
Keywords-SIFT; regular clipping; feature points; scale ratio
I. INTRODUCTION
The development of digital imaging brings rich and colorful visual
information to social life, and image processing techniques make many
everyday tasks more convenient. In many cases, digital camera images are
cropped, scaled, and otherwise edited before release, which benefits their
publication and broadcast. To ensure the authenticity of such images, news
photographers are required to provide the original pictures, and editors must
then spend time comparing the originals with the submitted versions.
To address this issue, this paper puts forward an effective method that tests
whether an image has undergone regular cropping and outlines the cropped area
in the original image. First, feature points of the original and test images
are detected by the SIFT algorithm; these points are invariant to scale and
rotation. Then, the corresponding feature points of the two images are
matched to obtain pairs of matching points. From the relative positions of
the matched points, the scale ratio of the test image with respect to the
original can be calculated, and the cropped region can then be determined.
The experimental results show the validity of this method.
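As an illustration of this pipeline, a minimal sketch of the feature
extraction, matching, and scale-ratio estimation stages is given below,
assuming OpenCV's SIFT implementation in Python; the function names, the
ratio-test threshold, and the median-based scale estimate are illustrative
assumptions rather than the exact implementation used in the experiments.

import cv2
import numpy as np

def match_sift(original_path, test_path, ratio=0.75):
    # Load both images in grayscale; the paths are placeholder arguments.
    orig = cv2.imread(original_path, cv2.IMREAD_GRAYSCALE)
    test = cv2.imread(test_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(orig, None)  # original image keypoints
    kp2, des2 = sift.detectAndCompute(test, None)  # test image keypoints
    # Nearest-neighbour matching with Lowe's ratio test to keep reliable pairs.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    pts_orig = np.float32([kp1[m.queryIdx].pt for m in good])
    pts_test = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts_orig, pts_test

def scale_ratio(pts_orig, pts_test):
    # Estimate the scale of the test image relative to the original from the
    # relative positions of matched points: ratio of pairwise distances,
    # using the median for robustness against occasional mismatches.
    d_o = np.linalg.norm(pts_orig[:, None] - pts_orig[None, :], axis=-1)
    d_t = np.linalg.norm(pts_test[:, None] - pts_test[None, :], axis=-1)
    mask = d_o > 1e-6
    return float(np.median(d_t[mask] / d_o[mask]))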
II. SIFT ALGORITHM
Scale-invariant feature transform (SIFT) [1] is a computer vision algorithm
that detects local image features by searching for extreme points in scale
space and extracting descriptors that are invariant to location, scale, and
rotation. Its applications are wide, including object recognition, robot
mapping and navigation, image stitching, 3D modeling, and gesture
recognition. The main steps of the SIFT algorithm are as follows [2] (a
minimal code sketch is given after the list):
(1) Scale-space extremum detection. Potential interest points that are
invariant to scale and rotation are detected in scale space using a
difference-of-Gaussian function.
(2) Keypoint localization. At each candidate position, the precise location
and scale of the keypoint are determined.
(3) Orientation assignment. One or more orientations are assigned to each
keypoint based on local image gradient directions.
(4) Keypoint description. The local image gradients are measured in the
neighborhood around each keypoint and finally expressed as a feature vector.
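A minimal sketch illustrating these four stages through OpenCV's SIFT
interface follows; the file name is a placeholder, and since the four stages
run internally in the detector, the sketch only inspects the outputs that
correspond to them.

import cv2

img = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

for kp, desc in zip(keypoints[:5], descriptors[:5]):
    # Steps (1)-(2): location and scale of the keypoint.
    # Step (3): assigned orientation in degrees.
    print("location:", kp.pt, "scale:", kp.size, "orientation:", kp.angle)
    # Step (4): 128-dimensional descriptor vector.
    print("descriptor length:", len(desc))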
A. Scale space representation
Scale space theory [3] builds a multi-scale representation of an image by
smoothing the original image over a sequence of scales; the main contours
extracted from this scale-space sequence serve as feature vectors for edge
detection, corner detection, and feature extraction at different resolutions.
The purpose of generating a scale space is to simulate the multi-scale
characteristics of image data. The Gaussian convolution kernel is the only
linear kernel that can be used to generate the scale space. The
two-dimensional Gaussian function is defined as:
$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^{2}}\, e^{-(x^{2}+y^{2})/2\sigma^{2}} \qquad (1)$$
where $(x, y)$ are the spatial coordinates and the value of $\sigma$
determines the degree of image smoothing: a large scale corresponds to the
coarse, overview features of the image, and a small scale to its fine
details. In order to detect stable points in scale space effectively,
difference-of-Gaussian kernels at different scales are convolved with the
image to generate the Difference of Gaussian (DoG) scale space:
$$D(x, y, \sigma) = \big(G(x, y, k\sigma) - G(x, y, \sigma)\big) * I(x, y) = L(x, y, k\sigma) - L(x, y, \sigma) \qquad (3)$$
where $L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)$ denotes the
Gaussian-smoothed image and $k$ is the constant multiplicative factor between
adjacent scales.
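For illustration, a minimal sketch of computing one DoG layer according to
Eq. (3) is given below, assuming OpenCV's Gaussian blur; the values of sigma
and k are illustrative choices.

import cv2
import numpy as np

def dog_layer(img, sigma=1.6, k=np.sqrt(2)):
    # L(x, y, sigma) = G(x, y, sigma) * I(x, y), computed by Gaussian blurring;
    # D(x, y, sigma) is the difference of two adjacently smoothed images (Eq. 3).
    img = img.astype(np.float32)
    blur_lo = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma)
    blur_hi = cv2.GaussianBlur(img, (0, 0), sigmaX=k * sigma)
    return blur_hi - blur_lo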
B. DoG extreme point detection in scale space
Down-sampling and Gaussian smoothing yield the Gaussian pyramid [4]; the DoG
pyramid is then generated by subtracting images at adjacent scales, which
forms the scale space. In this scale space, each sample point is compared
with its 8 neighbors at the same scale and the 9 neighbors in each of the two
adjacent scales, 8 + 9 * 2 = 26 points in total, as shown in Figure 1.
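A minimal sketch of this 26-neighbour extremum test is given below; it
assumes the DoG layers have already been computed and stored as equally sized
arrays, and the helper name is illustrative.

import numpy as np

def is_extremum(dog, s, y, x):
    # 'dog' is a list of same-sized DoG layers; (s, y, x) indexes a sample
    # away from the borders. The 3x3x3 cube holds the sample and its
    # 8 + 9 * 2 = 26 neighbours across the current and two adjacent scales.
    value = dog[s][y, x]
    cube = np.stack([dog[s - 1][y - 1:y + 2, x - 1:x + 2],
                     dog[s][y - 1:y + 2, x - 1:x + 2],
                     dog[s + 1][y - 1:y + 2, x - 1:x + 2]])
    return value >= cube.max() or value <= cube.min()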
Extreme points are searched for in the discrete space, but the extreme points
found in the discrete space are not