I. INTRODUCTION TO IMAGE FUSION
The information science research associated with the development of sensory
systems focuses mainly on how information about the world can be extracted
from sensory data. The sensing process can be interpreted as a mapping of the
state of the world into a set of much lower dimensionality. The mapping is many-
to-one, which means that there are typically many possible configurations of the
world that could give rise to the measured sensory data. Thus, in many cases, a
single sensor is not sufficient to provide an accurate perception of the real world.
There has been growing interest in the use of multiple sensors to increase the
capabilities of intelligent machines and systems. As a result, multisensor fusion
has become an area of intense research and development activity in the past few
years.1–11
Multisensor fusion refers to the synergistic combination of different
sources of sensory information into a single representational format. The
information to be fused may come from multiple sensory devices monitored over
a common period of time, or from a single sensory device monitored over an
extended time period. Multisensor fusion is a very broad topic that draws
contributions from many different groups, including academic researchers in
mathematics, physics, and engineering, as well as defense agencies, defense
laboratories, and corporate research groups.
Multisensor fusion can occur at the signal, image, feature, or symbol level of
representation. Signal-level fusion refers to the direct combination of several
signals in order to provide a signal that has the same general format as the source
signals. Image-level fusion (also called pixel-level fusion in some literature6)
generates a fused image in which each pixel is determined from a set of pixels in
each source image. Clearly, image-level fusion, or image fusion, is closely
related to signal-level fusion, since an image can be considered a two-dimensional
(2D) signal. We make a distinction since we focus on image fusion here. Feature-
level fusion first employs feature extraction on the source data so that features
from each source can be jointly employed for some purpose. A common type of
feature-level fusion involves fusion of edge maps. Symbol-level fusion allows the
information from multiple sensors to be effectively combined at the highest level
of abstraction. The symbols used for the fusion can originate either from
processing only the information provided or through a symbolic reasoning
process that may include a priori information. A common type of symbol-level
fusion is decision fusion. Most common sensors provide data that can be fused at
one or more of these levels. The different levels of multisensor fusion can be used
to provide information to a system for a variety of purposes.
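The distinction between pixel-level and feature-level fusion can be made concrete with a minimal sketch. The following Python/NumPy fragment is purely illustrative and not a method from this chapter: the equal-weight averaging rule and the thresholded-gradient edge detector are assumptions chosen for simplicity, standing in for the many combination rules and feature extractors used in practice.

```python
import numpy as np

# Two small synthetic grayscale "source images" standing in for
# registered images from two sensors (illustrative assumption).
rng = np.random.default_rng(0)
img_a = rng.random((4, 4))
img_b = rng.random((4, 4))

# Pixel-level (image-level) fusion: each fused pixel is computed
# from the corresponding pixels of the source images -- here a
# simple equal-weight average, one of many possible rules.
fused = 0.5 * img_a + 0.5 * img_b

def edge_map(img, thresh=0.3):
    """Crude edge map: threshold the gradient magnitude.
    A stand-in for a real feature extractor."""
    gy, gx = np.gradient(img)          # gradients along rows, columns
    return np.hypot(gx, gy) > thresh   # boolean edge map

# Feature-level fusion: extract features (edge maps) from each
# source first, then combine the features -- here by logical OR.
fused_edges = edge_map(img_a) | edge_map(img_b)
```

The key contrast is where the combination happens: pixel-level fusion operates directly on intensities, while feature-level fusion first maps each image into a feature representation (here, edge maps) and combines those instead.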
One should recognize that image fusion means different things to different
people. In the rest of this chapter, image fusion is defined as a procedure for
generating a fused image in which each pixel is determined from a set of pixels in
each source image. Other researchers define image fusion as any form of fusion
involving images. This includes, for example, cases of decision fusion using
images, a specific example of which is shown in Figure 1.1.