C.S. Fraser / ISPRS Journal of Photogrammetry & Remote Sensing 52 (1997) 149-159
distance unknown for each image), which is generally anathema to photogrammetrists seeking a robust camera calibration.
Contradictory assessments of the importance of calibration can also be found in the computer vision literature. The following two statements are by the same author in the same year: "Camera calibration is an important task in computer vision" (Maybank and Faugeras, 1992), and "computer vision may have been slightly overdoing it in trying at all costs to obtain metric information from images" (Faugeras, 1992). In the second referenced paper the author also adds that it is not often the case that metric information is necessary for robotics applications, which is clearly fortunate given the accuracy limitations of 'self-calibrating' from two- and three-image networks comprising fewer than ten image point correspondences.
If photogrammetrists were to realistically ask themselves whether the present CCD camera calibration techniques developed in computer vision are beneficial in metric measurement, the answer would have to be no. Even the perceived advantages of speed and on-line processing are no longer valid. In order to automatically self-calibrate a digital camera in an on-line close-range network configuration, the photogrammetrist needs only to collect four or more images of a field of a few tens of distinct targets, with there being no requirement for object space dimensional information. Calibration to a fidelity matching the angular measurement resolution of the photogrammetric camera is then available in near real time (within a few seconds of the last image being recorded). Such fully automated self-calibration procedures are already implemented in commercially available vision metrology systems for industrial measurement (Fraser, 1997). Claims that computer-vision-inspired approaches such as 'object reconstruction without inner orientation' (a stereo solution for uncalibrated cameras involving only six image points) will find wide use in photogrammetry (Shan, 1996) cannot be given much credence, at least in situations requiring photogrammetric accuracies. The use of orientation techniques with their roots in computer vision is, however, clearly not precluded for preliminary orientation determination, the adoption of the closed-form resection formulation of Fischler and Bolles (1981) being a good example. The following discussion is confined to the 'photogrammetric' self-calibration approach.
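The targetted, multi-image self-calibration outlined above can be caricatured numerically. The following sketch uses entirely invented geometry and parameter values and is not the author's implementation: it recovers only a principal distance c and a single radial distortion term k1 by Gauss-Newton iteration from four convergent views of a small target field, with the exterior orientations and target coordinates held fixed for brevity, whereas a genuine self-calibrating bundle adjustment treats those as unknowns as well.

```python
import numpy as np

# Invented geometry: four convergent stations viewing ~30 targets.
rng = np.random.default_rng(0)
pts = rng.uniform([-1.0, -1.0, -0.2], [1.0, 1.0, 0.2], size=(30, 3))

def look_at(cam):
    """Rotation whose third row points from the station towards the origin."""
    z = -cam / np.linalg.norm(cam)
    x = np.cross([0.0, 0.0, 1.0], z)
    x /= np.linalg.norm(x)
    return np.stack([x, np.cross(z, x), z])

stations = [np.array([2.5 * np.cos(a), 2.5 * np.sin(a), 2.0])
            for a in (0.0, 0.5 * np.pi, np.pi, 1.5 * np.pi)]

def project(c, k1):
    """Collinearity with a one-term radial distortion, all stations stacked."""
    out = []
    for cam in stations:
        p = (pts - cam) @ look_at(cam).T        # camera-system coordinates
        u, v = c * p[:, 0] / p[:, 2], c * p[:, 1] / p[:, 2]
        r2 = u * u + v * v
        out.append(np.column_stack([u * (1 + k1 * r2), v * (1 + k1 * r2)]))
    return np.concatenate(out).ravel()

c_true, k1_true = 9.0, -4.0e-4                  # plausible but invented values
obs = project(c_true, k1_true)
obs += rng.normal(0.0, 1.0e-4, obs.size)        # ~0.1 um image noise

p = np.array([8.0, 0.0])                        # crude start for (c, k1)
for _ in range(10):                             # Gauss-Newton iterations
    f0 = project(p[0], p[1])
    J = np.empty((f0.size, 2))
    for j, h in enumerate((1e-6, 1e-9)):        # finite-difference Jacobian
        q = p.copy()
        q[j] += h
        J[:, j] = (project(q[0], q[1]) - f0) / h
    p += np.linalg.lstsq(J, obs - f0, rcond=None)[0]
```

With the invented values above, p converges to close to (c_true, k1_true); the point of the sketch is only that no object space dimensional information enters the solution, the datum here being supplied by the fixed orientations (and in practice by free-network constraints).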
3. Quality of self-calibration
At first sight the task of ascertaining the quality of digital camera calibration appears to be less than straightforward. For example, one might consider the question of how accurately interior orientation or decentring distortion needs to be determined to support a triangulation to so many parts per 100,000. The issue is further complicated by the fact that a good deal of projective compensation takes place between the terms forming the AP model, and between the self-calibration parameters and exterior orientation elements. Moreover, the issue of the 'fidelity' of the calibration model depends a good deal on what photogrammetric applications are envisaged for the camera. If one is self-calibrating a camera or cameras which is/are to be used for stereo restitution, then it is probably unwise to recover decentring distortion parameters since very few commercially available digital photogrammetric workstations accommodate such an image correction. Instead, it would generally be better to suppress these parameters and allow that component of the error signal to be projectively absorbed by the generally highly correlated principal point offsets (x0, y0). Even these parameters may be of limited practical consequence if the stereo model contains little variation in depth.
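The projective coupling between decentring terms and the principal point offsets is easiest to see from the form of the AP terms themselves. As a minimal sketch (not taken from the paper; all parameter values hypothetical), the familiar radial-plus-decentring correction model can be written as:

```python
# Radial (K1..K3) plus decentring (P1, P2) image corrections of the
# Conrady-Brown form, applied to principal-point-reduced coordinates.
def ap_correction(x, y, x0=0.0, y0=0.0, K=(0.0, 0.0, 0.0), P=(0.0, 0.0)):
    """Return the (dx, dy) lens distortion corrections at image point (x, y)."""
    xb, yb = x - x0, y - y0
    r2 = xb * xb + yb * yb
    dr = K[0] * r2 + K[1] * r2**2 + K[2] * r2**3      # radial profile
    dx = xb * dr + P[0] * (r2 + 2 * xb * xb) + 2 * P[1] * xb * yb
    dy = yb * dr + P[1] * (r2 + 2 * yb * yb) + 2 * P[0] * xb * yb
    return dx, dy

# Hypothetical values: one-term radial profile evaluated at (x, y) = (1, 2) mm.
dx, dy = ap_correction(1.0, 2.0, K=(1.0e-4, 0.0, 0.0))
```

Suppressing P1 and P2, as suggested for stereo restitution, leaves their low-order signal to be soaked up largely by shifts in (x0, y0), which is the projective absorption referred to in the text.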
The more useful approach to examining the quality of calibration involves essentially three simple factors: the distribution of points within the images, the photogrammetric network configuration, and the variance factor or standard error of unit weight of the self-calibrating bundle adjustment. The first item relates specifically to lens distortion, which is modelled in terms of polynomial functions that are notoriously poor extrapolators. If a representative distortion modelling (radial and decentring) is sought over the full image format, then the image point distribution must encompass the full image area, albeit not in all images.
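Of the three factors, the last lends itself to a one-line computation: the a posteriori variance factor is s0^2 = v'Pv / (n - u), where v holds the image-coordinate residuals of the bundle adjustment, P the weight matrix, n the number of observations and u the number of unknowns. A brief sketch, with all numbers invented (real residuals would of course come from the least-squares adjustment itself):

```python
import numpy as np

def variance_factor(v, P, u):
    """A posteriori variance factor s0^2 = v'Pv / (n - u) of an adjustment."""
    v = np.asarray(v, dtype=float)
    return float(v @ P @ v) / (v.size - u)

# Hypothetical example: 240 image-coordinate residuals of roughly 0.1 um
# (unit weights assumed) in a network with 100 unknowns.
rng = np.random.default_rng(1)
v = rng.normal(0.0, 1.0e-4, 240)
s0 = np.sqrt(variance_factor(v, np.eye(240), 100))
# s0 is then comparable to the 0.1 um observation precision.
```

A standard error of unit weight close to the expected image measurement precision is one indicator, alongside point distribution and network geometry, that the self-calibration is sound.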
Photogrammetric network design considerations for self-calibration are well known, among these being the need for a highly convergent imaging configuration, incorporation of orthogonal camera roll angles, and the use of four or more images