580 CHINESE OPTICS LETTERS / Vol. 8, No. 6 / June 10, 2010
Multi-parameter fusion algorithm for auto focus
Jun Luo*, Li Sun, Kesong Wu, Weimin Chen, and Li Fu
Key Laboratory of Optoelectronic Technology and System, Ministry of Education,
Chongqing University, Chongqing 400030, China
*E-mail: luojun@cqu.edu.cn
Received January 6, 2010
A multi-parameter fusion algorithm (MPFA) for auto-focus (AF) is discussed. The image sharpness evaluation algorithm (ISEA) and zoom tracking method (ZTM) are combined for AF. The zoom motor position (z) and background complexity (c) are regarded as the main parameters of this algorithm. A priority table depending on z and c is proposed, and the modified ISEA or ZTM is adopted according to the priority-table value. The hardware implementation of the MPFA on Texas Instruments' DaVinci digital signal processor is also provided. Results show that the proposed scheme provides faster focusing than the conventional approaches.
OCIS codes: 000.3110, 040.1490, 110.2960, 110.5200.
doi: 10.3788/COL20100806.0580.
At present, zoom lenses are widely utilized in various industrial applications. The auto-focus (AF) algorithm is an important factor in determining the imaging quality of a digital still camera (DSC)[1−3]. There are two ways to implement AF, namely, active AF and passive AF. Only passive AF is discussed here; several conventional passive AF methods have been introduced in the literature. In this letter, we introduce a new AF approach, the multi-parameter fusion algorithm (MPFA), which merges the image sharpness evaluation algorithm (ISEA) and the zoom tracking method (ZTM) according to a priority table. The priority table is based on the zoom motor position (z) and the background complexity (c). Results show that the MPFA provides a higher focusing speed than conventional methods.
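To make the fusion concrete, the selection step can be sketched as a small lookup keyed on z and c. The thresholds and the specific ISEA/ZTM assignments below are illustrative assumptions for exposition only, not the paper's actual priority-table values.

```python
def choose_method(z, c, z_split=0.5, c_split=0.5):
    """Pick the focusing method from zoom position z and background
    complexity c, both assumed normalized to [0, 1].

    The table entries are hypothetical: they only illustrate the idea of
    switching between a modified ISEA and ZTM by a (z, c) priority table.
    """
    priority = {
        (False, False): "ZTM",   # wide angle, simple background
        (False, True):  "ZTM",   # wide angle, complex background
        (True,  False): "ISEA",  # tele angle, simple background
        (True,  True):  "ISEA",  # tele angle, complex background
    }
    return priority[(z > z_split, c > c_split)]
```

In a real implementation, each cell of the table would hold a priority value tuned experimentally rather than a fixed binary choice.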
There are two prevalent methods used in realizing passive AF: ISEA and ZTM[4], and each has many algorithms, including sharpness evaluation operators, look-up table, geometric, and adaptive zoom tracking. However, these algorithms have several general drawbacks. Firstly, a complex background image results in local peak values of the image definition; in addition, more time is required in searching for the maximum definition value when using ISEA[5−14]. Secondly, the look-up table method uses a large amount of memory, and no mechanism is provided to select the right trace curve when the zoom motor is moved towards the tele-angle direction; this is due to the one-to-many mapping problem. Thirdly, in the geometric ZTM, the offset between the estimated trace curve and the true trace curve gradually increases as the zoom motor moves towards the tele-angle direction. Fourthly, the adaptive ZTM improves the tracking accuracy at the expense of tracking speed[4]. Thus, a more effective AF approach is necessary. The new MPFA presented in this letter can solve these problems in various ways.
There are numerous evaluation functions for digital im-
age sharpness, including variance operator, power gradi-
ent operator, and Laplacian operator. The performance
of each operator is different in relation to focusing speed.
The output of high-pixel color charge-coupled device/complementary metal-oxide semiconductor (CCD/CMOS) image sensors is RAW-format image data. In conventional methods, the data are first restored to BMP or JPEG format, a color space conversion is performed, and image definition is then evaluated via the luminance or green (G) component. However, these methods are inefficient: they require a large amount of calculation and are time-consuming.
According to the above discussion, the gradient operator model, which is directly based on the RAW format, is given by

$$F = \sum_{mat_x} \sum_{mat_y} \left( f_x^2(mat_x, mat_y) + f_y^2(mat_x, mat_y) \right), \quad (1)$$
where

$$g(mat_x, mat_y) = 0.299 \times R(mat_x, mat_y) + 0.587 \times G_0(mat_x, mat_y) + 0.114 \times B(mat_x, mat_y), \quad (2)$$

$$f_x(mat_x, mat_y) = g(mat_{x+1}, mat_y) - g(mat_x, mat_y), \quad (3)$$

$$f_y(mat_x, mat_y) = g(mat_x, mat_{y+1}) - g(mat_x, mat_y). \quad (4)$$
In the above expressions, "mat" is a MAT unit, which has one red (R) element, two G elements, and one blue (B) element in the acquired RAW-format image. In addition, (mat_x, mat_y) is the MAT unit with coordinate (x, y), F is the focus value, g(mat_x, mat_y) is the gray value at (mat_x, mat_y), and G_0(mat_x, mat_y) is the average of the two G components at (mat_x, mat_y).
The detailed gradient operator based on the RAW format is not presented here. After obtaining the focus value of each frame according to Eq. (1), a hill-climbing algorithm is used to find the in-focus position.
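A simple coarse-to-fine variant of such a hill climb can be sketched as follows. The driver interface `measure_f(pos)` (move the focus motor to `pos` and return the focus value F there) is a hypothetical placeholder, and the step sizes are illustrative.

```python
def hill_climb(measure_f, start, stop, coarse=8, fine=1):
    """Search focus-motor positions [start, stop] for the peak focus value.

    measure_f: hypothetical callback that moves the focus motor and
    returns the focus value F (Eq. (1)) at that position.
    """
    best_pos, best_f = start, measure_f(start)
    # Coarse sweep over the whole travel range.
    for pos in range(start, stop + 1, coarse):
        f = measure_f(pos)
        if f > best_f:
            best_pos, best_f = pos, f
    # Fine search in a window around the coarse peak.
    lo = max(start, best_pos - coarse)
    hi = min(stop, best_pos + coarse)
    for pos in range(lo, hi + 1, fine):
        f = measure_f(pos)
        if f > best_f:
            best_pos, best_f = pos, f
    return best_pos
```

Practical implementations also guard against the local peaks that a complex background induces, which is one motivation for fusing ISEA with ZTM.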
ZTM depends conjointly on the zoom motor positions and the focus motor positions[15−18]. Figure 1 illustrates the ZTM system. The cam mechanism is adjusted by the zoom motor to change the combined focal length f of the 1G and 2G lens units. Zoom PI and focus PI represent
1671-7694/2010/060580-04 © 2010 Chinese Optics Letters