AprilCal: Assisted and repeatable camera calibration
Andrew Richardson, Johannes Strom, Edwin Olson
Abstract— Reliable and accurate camera calibration usually
requires an expert intuition to reliably constrain all of the
parameters in the camera model. Existing toolboxes ask users
to capture images of a calibration target in positions of
their choosing, after which the maximum-likelihood calibration
is computed using all images in a batch optimization. We
introduce a new interactive methodology that uses the current
calibration state to suggest the position of the target in the next
image and to verify that the final model parameters meet the
accuracy requirements specified by the user.
Suggesting target positions relies on the ability to score candi-
date suggestions and their effect on the calibration. We describe
two methods for scoring target positions: one that computes
the stability of the focal length estimates for initializing the
calibration, and another that subsequently quantifies the model
uncertainty in pixel space.
We demonstrate that our resulting system, AprilCal, consis-
tently yields more accurate camera calibrations than standard
tools using results from a set of human trials. We also
demonstrate that our approach is applicable for a variety of
lenses.
I. INTRODUCTION
Applications such as visual odometry [14], dense recon-
struction [8], [15], and colored point cloud segmentation [20]
are fundamentally dependent on accurate calibrations in
order to extract metrical data from images. The MATLAB and
OpenCV packages are two popular systems for calibrating
lenses [3], [4]. However, they can be error prone, especially
for lenses with significant distortion. This stems from the fact
that the quality of a calibration is dramatically affected by
the user’s choice of calibration images. A user who chooses
poor calibration target positions may find the resulting model
generalizes poorly to unseen examples. This challenge is
particularly acute for novice users, who are not aware of
the properties of the underlying estimation and optimization
methods, or end-users in dramatically different fields [2].
Even experts may be unsure that the positions they have
chosen will yield a sufficiently accurate calibration, as the
number of images needed is not constant across lenses and
should vary with the quality of the constraints. Consequently,
standard practice is to collect many more images than
necessary and verify that the model parameter uncertainty
and training error are low; if the results are unsatisfactory,
the calibration is repeated or updated with additional images.
This process is unreliable, and not very satisfying from a
theoretical standpoint.
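The verification step in this standard practice usually amounts to checking that the training (reprojection) error is small. The following sketch illustrates that check under a simple pinhole model with synthetic detections; the intrinsics, target geometry, noise level, and the 1-pixel threshold are all illustrative assumptions, not values from this paper.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of Nx3 world points to Nx2 pixel coordinates."""
    Xc = X @ R.T + t                 # world -> camera frame
    uv = Xc[:, :2] / Xc[:, 2:3]      # perspective divide
    return uv * [K[0, 0], K[1, 1]] + [K[0, 2], K[1, 2]]

# Illustrative intrinsics and a single frontal target pose.
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.5])

# A 9x6 grid of target points with 30 mm spacing (hypothetical target).
X = np.array([[x * 0.03, y * 0.03, 0.0] for y in range(6) for x in range(9)])

# Simulated detections: true projections plus 0.2 px Gaussian noise.
detected = project(K, R, t, X) + np.random.default_rng(0).normal(0, 0.2, (54, 2))

# The usual (ad hoc) acceptance test: RMS reprojection error below a threshold.
residuals = project(K, R, t, X) - detected
rms = np.sqrt((residuals ** 2).mean())
print("RMS reprojection error: %.3f px" % rms)
```

As the paper argues, a low training error on the collected images alone does not guarantee the model generalizes to target positions the user never imaged.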
The authors are with the Computer Science and Engineering
Department, University of Michigan, Ann Arbor, MI 48104,
USA {chardson,jhstrom,ebolson}@umich.edu
http://april.eecs.umich.edu

Fig. 1: The AprilCal GUI. Our system combines the ability
to reason about unseen targets and a novel quality metric
to make suggestions to the user about where to place the
target. The user is notified that calibration is complete once
the desired accuracy has been reached, typically achieving
< 1 pixel of error after 6-8 images.

Therefore, the primary goal of this work is to increase
calibration repeatability and accuracy in a more principled
fashion. We introduce a paradigm where fit quality is explicitly
considered at each stage during a live calibration process.
Specifically, we automatically consider many unseen target
positions and suggest positions that will best improve the
quality of the calibration. This is achieved using a novel
quality metric based on the uncertainty of the calibration as
measured in pixels. Previous toolboxes report the uncertainty
of the model parameters, but the effect of these parameter
uncertainties on pixel coordinates can be complex. We argue
that worst-case uncertainty in pixels is more relevant for
application performance and more natural for the user. Worst-
case pixel uncertainty also serves as a principled basis to
automatically determine when enough images have been
collected.
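The idea of measuring calibration uncertainty in pixels rather than in parameter space can be sketched as follows: propagate an estimated parameter covariance Sigma through the Jacobian J of the projection function, giving a 2x2 pixel-space covariance J Sigma J^T at each image location, and report the worst case over the image. The 4-parameter pinhole model, the covariance values, and the ray grid below are illustrative assumptions, not the AprilCal implementation.

```python
import numpy as np

def project(theta, ray):
    """Project a camera-frame ray using intrinsics theta = (fx, fy, cx, cy)."""
    fx, fy, cx, cy = theta
    x, y, z = ray
    return np.array([fx * x / z + cx, fy * y / z + cy])

def pixel_sigma(theta, Sigma, ray, eps=1e-5):
    """Worst-axis 1-sigma pixel uncertainty at one image location."""
    # Numerical Jacobian of the projection w.r.t. the model parameters.
    J = np.zeros((2, len(theta)))
    for i in range(len(theta)):
        d = np.zeros(len(theta)); d[i] = eps
        J[:, i] = (project(theta + d, ray) - project(theta - d, ray)) / (2 * eps)
    cov = J @ Sigma @ J.T                        # 2x2 pixel-space covariance
    return np.sqrt(np.linalg.eigvalsh(cov)[-1])  # largest principal std dev

theta = np.array([600.0, 600.0, 320.0, 240.0])   # illustrative intrinsics
Sigma = np.diag([4.0, 4.0, 1.0, 1.0])            # assumed parameter covariance

# Evaluate on a coarse grid of viewing rays; for this model the worst
# case lands away from the principal point.
rays = [np.array([x, y, 1.0]) for x in (-0.5, 0, 0.5) for y in (-0.4, 0, 0.4)]
worst = max(pixel_sigma(theta, Sigma, r) for r in rays)
print("worst-case pixel uncertainty: %.2f px" % worst)
```

A single pixel-valued number like this is directly comparable to an accuracy requirement ("calibrate until the worst-case uncertainty is below 1 px"), which is what makes it usable as a stopping criterion.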
We also introduce a new method for robustly bootstrapping
a calibration that enables our system to make sensible
recommendations even when little or no prior information
is available about the lens. Our system also makes use of a
calibration target composed of AprilTags [16], which, unlike
previous approaches, can still be detected when individual
markers are occluded. This enables a wider variety of target
positions, which our method successfully exploits when
making suggestions to the user.
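The occlusion tolerance of a tag-mosaic target can be seen from how correspondences are assembled: each detected AprilTag contributes its own 2D-3D correspondences, so calibration can proceed from whatever subset of tags is visible, whereas a checkerboard detector typically needs the full grid. The tag layout, spacing, and the detection dictionary below are hypothetical stand-ins for a real detector's output.

```python
import numpy as np

TAG_SPACING = 0.05  # hypothetical 5 cm grid pitch, one point per tag

def tag_world_point(tag_id, cols=6):
    """Known world position of a tag on the planar mosaic target."""
    row, col = divmod(tag_id, cols)
    return np.array([col * TAG_SPACING, row * TAG_SPACING, 0.0])

# Simulated detector output: tag_id -> pixel center. Most tags are
# occluded (e.g., by the user's hand), but the visible ones still
# yield usable constraints because each tag is identified by its ID.
detections = {0: (102.3, 98.7), 1: (160.1, 99.2), 6: (101.8, 157.4)}

obj_pts = np.array([tag_world_point(t) for t in sorted(detections)])
img_pts = np.array([detections[t] for t in sorted(detections)])
print("usable correspondences:", len(obj_pts))
```

Because every tag carries a unique ID, partial views of the target remain unambiguous, which is what permits the wider variety of suggested target positions described above.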
We validated our camera calibration toolbox via a 16-
participant study composed mostly of users who had
never calibrated a camera. Despite their lack of expertise,
they were consistently able to use our software to produce