constraint is a standard length reference for the reconstructed results, adding the two
constraints is consistent with the 3D error function and therefore benefits the iterative
convergence. In summary, the method proposed in this paper combines different geometric
constraints (i.e., the 3D error constraint, the epipolar constraint, and the distance
constraint) in a common framework to update the calibration parameters through a non-linear
optimization process. To the best of our knowledge, this is the first time a 3D error cost
function established in the measurement coordinate system has been put forward for a
binocular vision system, which differs from the conventional 2D reprojection error criterion
based on the image plane. Theoretically, this makes the camera parameter optimization
consistent with the final measurement process, so the refined parameters should be more
accurate. Our experiments verify the effectiveness of the proposed method by comparison
with the traditional one.
The rest of the paper is organized as follows. Section 2 gives some preliminaries about the
binocular vision model. The detailed procedure of the calibration method based on 3D
optimization with multiple constraints is described in Section 3. Section 4 provides two kinds
of accuracy evaluation functions to assess the calibration results. In Section 5, both
computer-simulated and real data are used to validate the proposed method against the
traditional one. The paper ends with some concluding remarks in Section 6.
2. Preliminaries
2.1. Camera pin-hole model with lens distortion
Each camera is modeled by the usual pin-hole model, and the relationship between a 3D point
M and its image projections m_l and m_r in the left and right cameras is given by:
\[
\lambda_l \mathbf{m}_l = \mathbf{A}_l \left(\mathbf{R}_l \;\; \mathbf{t}_l\right)\mathbf{M},\qquad
\mathbf{A}_l = \begin{pmatrix} f_{xl} & 0 & u_{0l} \\ 0 & f_{yl} & v_{0l} \\ 0 & 0 & 1 \end{pmatrix} \tag{1}
\]
\[
\lambda_r \mathbf{m}_r = \mathbf{A}_r \left(\mathbf{R}_r \;\; \mathbf{t}_r\right)\mathbf{M},\qquad
\mathbf{A}_r = \begin{pmatrix} f_{xr} & 0 & u_{0r} \\ 0 & f_{yr} & v_{0r} \\ 0 & 0 & 1 \end{pmatrix} \tag{2}
\]
where λ_l and λ_r are arbitrary scale factors; m_l, m_r and M are the homogeneous coordinates
of the image points and their corresponding space point; (R_l, t_l) and (R_r, t_r), called the
extrinsic parameters, are the rotations and translations relating the local world coordinate
system to each camera coordinate system; and A_l, A_r are the camera intrinsic matrices,
consisting of the following parameters: the effective focal lengths (f_xl, f_yl) and (f_xr, f_yr),
and the principal point coordinates (u_0l, v_0l) and (u_0r, v_0r).
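The projection of Eq. (1) can be sketched numerically as follows. All parameter values below (focal lengths, principal point, extrinsics, and the test point) are illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative intrinsic parameters for the left camera (hypothetical values)
f_xl, f_yl = 1200.0, 1200.0   # effective focal lengths (pixels)
u_0l, v_0l = 640.0, 480.0     # principal point (pixels)

A_l = np.array([[f_xl, 0.0,  u_0l],
                [0.0,  f_yl, v_0l],
                [0.0,  0.0,  1.0]])

# Extrinsics: identity rotation and a translation along the optical axis
R_l = np.eye(3)
t_l = np.array([[0.0], [0.0], [1000.0]])

# Homogeneous coordinates of a 3D point M in the world coordinate system
M = np.array([[100.0], [50.0], [0.0], [1.0]])

# lambda_l * m_l = A_l (R_l  t_l) M, as in Eq. (1)
proj = A_l @ np.hstack((R_l, t_l)) @ M
m_l = proj / proj[2]          # divide out the scale factor lambda_l
print(m_l.ravel())            # homogeneous pixel coordinates (u, v, 1)
```

Dividing by the third component removes the arbitrary scale factor λ_l and yields the pixel coordinates of the projection.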
Considering the first two terms of radial distortion, we choose the most commonly used
model to handle lens distortion effects [22, 23]:
\[
\mathbf{m}_l^d = \left(1 + k_{1l} r_l^2 + k_{2l} r_l^4\right)\mathbf{m}_l \tag{3}
\]
\[
\mathbf{m}_r^d = \left(1 + k_{1r} r_r^2 + k_{2r} r_r^4\right)\mathbf{m}_r \tag{4}
\]
where r_l, r_r are the distances from the undistorted image points m_l, m_r to each principal
point; (k_1l, k_2l) and (k_1r, k_2r) are the radial distortion coefficients; and m_l^d, m_r^d
denote the coordinates of the distorted image points. Equations (1)–(4) completely describe
the real perspective projection model of the two cameras, including lens distortion effects.
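The two-term radial distortion of Eqs. (3)–(4) can be sketched as a small function. The coefficient values in the usage comment are illustrative only, and the input point is assumed to be expressed relative to the principal point (so that r is simply its norm):

```python
import math

def distort(m, k1, k2):
    """Apply the two-term radial distortion model of Eqs. (3)-(4).

    m       -- undistorted image point (x, y), expressed relative to the
               principal point so that r is its distance to that point
    k1, k2  -- radial distortion coefficients
    Returns the distorted point m^d = (1 + k1*r^2 + k2*r^4) * m.
    """
    x, y = m
    r2 = x * x + y * y                      # r^2: squared distance to the principal point
    factor = 1.0 + k1 * r2 + k2 * r2 * r2   # 1 + k1*r^2 + k2*r^4
    return (factor * x, factor * y)

# Example with hypothetical coefficients: a point is pulled slightly inward
# by barrel distortion (negative k1)
xd, yd = distort((0.1, 0.2), k1=-0.3, k2=0.1)
```

The same function serves both cameras; only the coefficient pairs (k_1l, k_2l) and (k_1r, k_2r) differ.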
Received 27 Jan 2014; revised 18 Mar 2014; accepted 1 Apr 2014; published 8 Apr 2014
21 April 2014 | Vol. 22, No. 8 | DOI:10.1364/OE.22.009134 | OPTICS EXPRESS 9137