Automatic visual/IR image registration
Hui Henry Li
Yi-Tong Zhou, MEMBER SPIE
HNC Software, Inc.
5930 Cornerstone Court West
San Diego, California 92121-3728
E-mail: huili@hnc.com
Abstract. A feature-based approach to visual/IR sensor image registration is presented. This new method overcomes the difficulties caused by the discrepancy in the data's gray-scale characteristics and the problem of feature inconsistency. It employs a wavelet-based feature extractor to locate point features from contours based on local statistics of the image intensity. Matching is carried out at multiresolution levels based on point features. A consistency-checking step is involved to eliminate mismatches. The algorithm is accurate, robust, and fast. It is capable of handling images with considerable translation, scaling, and rotation. Details on the registration algorithm, including feature extraction, matching, consistency checking, and the image transformation model, are discussed. Experimental results using real visual/IR sensor data are presented. © 1996 Society of Photo-Optical Instrumentation Engineers.
Subject terms: image registration; contour extraction; point feature extraction;
wavelet transforms; consistency checking; multisensor analysis.
Paper SC-001 received July 5, 1995; revised manuscript received Aug. 18, 1995; accepted for publication Aug. 21, 1995.
1 Introduction
Image registration is an important issue in multiple-sensor fusion. The goal of registration is to establish the correspondence between two images and determine a geometric transformation that aligns one with the other [1-4].
In many image analysis systems today, multisensor image registration is conducted in an interactive, semiautomatic fashion: image interpreters detect the common landmarks (often referred to as control points or tie points) and the computers perform registration based on the human inputs. This paper presents an automatic visual and infrared (IR) sensor image registration algorithm. Visual and IR sensors operate at different frequency bands, and their images have different gray-level characteristics. This type of multisensor image registration is useful for tasks such as target recognition, because information derived from different sensors is complementary and can be used jointly to improve detection and tracking performance. However, it creates problems for registration because features present in the visual images are often not the same as those in the IR images. Automatic registration of visual and IR imagery is therefore very difficult. Most existing image registration algorithms do not perform well on visual/IR data. For instance, Fourier methods that are based on image intensity cannot handle images with different gray-scale characteristics [5]. The edge-map-based methods cannot properly align the IR sensor image with the visual sensor image because of the inconsistent features present in the data [6,7].
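To make the limitation of intensity-based Fourier methods concrete, the following is a minimal phase-correlation sketch (a standard Fourier registration technique, not the method proposed in this paper), assuming NumPy is available. It recovers a pure translation from the peak of the normalized cross-power spectrum, which presupposes that the two images share gray-scale characteristics; the synthetic shifted copy below is an illustrative test case.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation of image a relative to image b
    from the peak of the normalized cross-power spectrum."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12        # keep only the phase information
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    if dy > h // 2:                       # unwrap peak position to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Synthetic check: circularly shift an image and recover the shift.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(phase_correlation(shifted, img))    # (5, -3)
```

When the two inputs come from different sensor modalities, their spectra no longer agree up to a phase ramp, and the correlation peak degrades or disappears, which is the failure mode noted above.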
Recently, an automatic image registration algorithm was developed by Zheng and Chellappa [8-10] for single-sensor images such as nadir and oblique aerial images. This algorithm uses point features as control points in aligning two images. The point features are extracted from the image intensity directly by a Gabor feature extractor. Point features present in both images are often referred to as ``consistent points.'' To identify the consistent points, a multistage point-matching scheme is used. Because the matching is carried out only on the feature points, a significant amount of computation is saved in comparison with traditional pixel-by-pixel brute-force searching methods. However, Zheng and Chellappa's algorithm cannot be directly applied to multisensor data, because features extracted from the image intensity are only suitable for registration of images from the same sensor or from different sensors with very similar characteristics.
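As a rough illustration of this kind of Gabor-based point extraction (a simplified, hypothetical sketch, not Zheng and Chellappa's actual extractor), the code below convolves an image with a complex Gabor kernel and takes the pixels with the strongest response magnitude as candidate feature points. All parameter values are illustrative assumptions.

```python
import numpy as np

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Complex Gabor kernel: a plane wave under a Gaussian window.
    Parameter values are illustrative, not taken from the paper."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    gauss = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return gauss * np.exp(2j * np.pi * xr / wavelength)

def gabor_points(img, n_points=10, **kernel_kw):
    """Pick the n_points pixels with the strongest Gabor response magnitude
    as candidate feature points (a crude stand-in for a feature extractor)."""
    k = gabor_kernel(**kernel_kw)
    h, w = img.shape
    kh, kw = k.shape
    # FFT-based full linear convolution, then crop to 'same' size.
    F = np.fft.fft2(img, (h + kh - 1, w + kw - 1))
    K = np.fft.fft2(k, (h + kh - 1, w + kw - 1))
    resp = np.abs(np.fft.ifft2(F * K))
    resp = resp[kh // 2:kh // 2 + h, kw // 2:kw // 2 + w]
    idx = np.argsort(resp.ravel())[::-1][:n_points]
    return np.column_stack(np.unravel_index(idx, resp.shape))

# Two isolated bright points in a dark image: the strongest responses
# land on (or immediately around) those points.
img = np.zeros((64, 64))
img[20, 20] = 1.0
img[40, 45] = 1.0
print(gabor_points(img, n_points=4))
```

Because such responses are computed from the raw intensity, the extracted points depend on each sensor's gray-scale characteristics, which is exactly why this style of extractor transfers poorly to visual/IR pairs.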
Registration of visual/IR images requires (1) a substantial number of consistent points and (2) an effective consistency-checking method for eliminating mismatches. Extracting consistent features from visual/IR sensor images is much more difficult than extracting features from single-sensor images. Many features present in one sensor's image do not show up in the other sensor's image. Because of this feature inconsistency, mismatches are more likely to occur in the multiple-sensor case. Removing mismatches therefore becomes a critical step in obtaining accurate registration results.
Our visual/IR registration algorithm is derived from Zheng and Chellappa's single-sensor registration technique. Instead of using the Gabor feature extractor, our method employs a wavelet feature extractor to locate point features from contours based on local statistics of the image intensity. Matching is carried out based on the point features at multiresolution levels. A consistency-checking step is involved to eliminate mismatches. The consistency checking is performed recursively in such a way that the most likely incorrect match is deleted first, followed by the next most likely incorrect match, and so on. This overcomes the difficulties caused by the discrepancy in the data's gray-scale characteristics and the problem of feature inconsistency.
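One way such recursive mismatch deletion can be sketched (a simplified stand-in with an assumed similarity-transform model and residual criterion, not the exact procedure detailed later in the paper): fit a transform to all candidate matches, delete the match with the largest residual, refit, and repeat until every residual falls within tolerance.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2-D similarity transform (scale, rotation, translation)
    mapping src onto dst: x' = a*x - b*y + tx, y' = b*x + a*y + ty."""
    n = len(src)
    A = np.zeros((2 * n, 4))
    rhs = np.zeros(2 * n)
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1], src[:, 0], np.zeros(n), np.ones(n)])
    rhs[0::2], rhs[1::2] = dst[:, 0], dst[:, 1]
    p, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return p

def apply_similarity(p, pts):
    a, b, tx, ty = p
    return np.column_stack([a * pts[:, 0] - b * pts[:, 1] + tx,
                            b * pts[:, 0] + a * pts[:, 1] + ty])

def consistency_check(src, dst, tol=1.0):
    """Recursively delete the correspondence with the largest residual under
    the current best-fit transform until all residuals are within tol."""
    keep = list(range(len(src)))
    while len(keep) > 3:
        p = fit_similarity(src[keep], dst[keep])
        res = np.linalg.norm(apply_similarity(p, src[keep]) - dst[keep], axis=1)
        worst = int(np.argmax(res))
        if res[worst] <= tol:
            break
        del keep[worst]            # most likely incorrect match goes first
    return keep

# Synthetic check: 12 correspondences related by a known similarity
# transform, with two of them deliberately corrupted.
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 100.0, (12, 2))
true_p = (1.2 * np.cos(0.3), 1.2 * np.sin(0.3), 10.0, -5.0)
dst = apply_similarity(true_p, src)
dst[3] += 40.0
dst[7] -= 35.0
print(consistency_check(src, dst))   # the two corrupted matches are deleted
```

Deleting only the single worst match before refitting keeps a gross outlier from distorting the transform used to judge the remaining matches, which mirrors the one-at-a-time deletion order described above.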
Opt. Eng. 35(2) 391-400 (Feb. 1996)  0091-3286/96/$6.00  © 1996 Society of Photo-Optical Instrumentation Engineers