Med Biol Eng Comput
structures was proposed. To deal with the structural discrepancy, we introduced atlases to complement the discrepant structure information in multimodal medical images. The floating image was first registered to a floating-atlas image. Then the corresponding reference-atlas image was registered to the reference image. Finally, we deformed the floating image to the reference image using the deformation fields extracted from the registrations above. The atlas plays an intermediary role between the floating image and the reference image. Because the atlas is multimodal, the method can avoid the structural discrepancy in multimodal images. A frame model, a brain model, and clinical images were used to evaluate the performance of the method. Results indicate that the method is suitable for multimodal images with discrepant structures.
The rest of the paper is organized as follows: the discrepant structures in multimodal images are described in Section 2.1. The atlas-based registration method is described in Section 2.2. The three experiments are presented in Section 3. In Section 4, we discuss atlas acquisition and the extension to three-dimensional (3D) registration, and conclude the paper.
2 Method
2.1 Structures in multimodal images
The tissue structures in multimodal images may be discrepant, which may affect the distribution of MI. To illustrate the problem clearly, we designed the one-dimensional models in Fig. 1b for the images in Fig. 1a, c. In Fig. 1b, the model on the left represents the diffusion-weighted imaging (DWI) image (the floating image in Fig. 1a); its blocks represent the brain tissue and the background, respectively. The model on the right represents the T1-weighted (T1) image (the reference image in Fig. 1c); its blocks represent the brain tissue, the skull, and the background, respectively. The motion of the red borderline in the x-direction corresponds to zooming the DWI image in or out (the view field of the transformation is that of the reference; any object outside the reference is dropped). As reference images are fixed, the blue and green borderlines in the T1 model keep still. As the red borderline slides, the MI and NMI of the models are computed, respectively. The MI and NMI curves are shown in Fig. 1d, e. Both curves have two maximum peaks (reached when the red borderline coincides with the green or the blue borderline). Due to improper initialization or severe deformation, the maximum may be reached when the red borderline slides to the green borderline, which means that the brain tissue in the DWI image may be matched to the skull in the T1 image. Under structure discrepancy, MI-based methods may therefore produce poor registration results.
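The two-peak behavior can be reproduced with a small numerical sketch (the block widths and intensities below are illustrative assumptions, not values from the paper): a two-block "DWI-like" profile is slid against a fixed three-block "T1-like" profile, and MI is computed at each position of the tissue borderline.

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """MI from the joint histogram of two equally sized 1-D signals."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

# Hypothetical T1-like reference: background | skull | brain tissue
ref = np.concatenate([np.zeros(40), np.full(20, 2.0), np.full(140, 1.0)])
n = ref.size

mi = []
for edge in range(10, 190):  # slide the red borderline (tissue edge)
    # DWI-like floating model: background | brain tissue
    flo = np.concatenate([np.zeros(edge), np.full(n - edge, 1.0)])
    mi.append(mutual_information(flo, ref))

# The MI curve shows two local maxima: the tissue edge can lock onto
# either the skull boundary (edge = 40) or the true brain boundary (edge = 60).
```

A local optimizer started near the skull boundary would converge to the wrong peak, which is exactly the mismatch described above.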
2.2 Atlas-based registration
To deal with the structure discrepancy in multimodal images, an atlas-based multimodal registration method was proposed. The floating image has a floating-atlas image, and the reference image has a reference-atlas image. The structures in each image are the same as those in the corresponding atlas image (since they share the same modality). Hence, atlas-based registration can avoid the structure discrepancy, which helps guarantee that the point of maximum mutual information is consistent with the ground truth. As the two atlas images are aligned, the floating-image-to-reference-image deformation field can be obtained from the deformation fields of the two registrations. The registration flowchart of the method is shown in Fig. 2.
Three steps are included in this scheme: floating-image-to-atlas registration, atlas-to-reference-image registration, and field-based deformation. Let $F(x, y)$ be the floating image, $R(x, y)$ the reference image, $A_f(x, y)$ the floating-atlas image of $F(x, y)$, and $A_r(x, y)$ the reference-atlas image of $R(x, y)$; $A_f(x, y)$ and $A_r(x, y)$ are aligned.
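The field-based deformation step can be sketched as the composition of two dense coordinate maps. The following is a minimal illustration under an assumed backward-warping convention (each map, defined on its target grid, stores the source coordinates to sample); the paper's actual field representation may differ.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(phi_outer, phi_inner):
    """(phi_outer o phi_inner): sample each component of the outer
    coordinate map at the positions given by the inner map.
    Both maps have shape (2, H, W)."""
    return np.stack([map_coordinates(phi_outer[c], phi_inner,
                                     order=1, mode="nearest")
                     for c in range(2)])

def warp(image, phi):
    """Backward-warp: resample `image` at the coordinates in `phi`."""
    return map_coordinates(image, phi, order=1, mode="nearest")

# Toy maps (pure shifts), so the composition is simply the summed shift:
H, W = 64, 64
grid = np.mgrid[0:H, 0:W].astype(float)  # identity coordinate map
phi_fa = grid + 2.0   # assumed map on the atlas grid into the floating image
phi_ar = grid + 1.0   # assumed map on the reference grid into the atlas
phi_fr = compose(phi_fa, phi_ar)  # reference grid -> floating image
```

With real registrations, `phi_fa` and `phi_ar` would come from the two registration steps, and `warp(F, phi_fr)` would deform the floating image into the reference space in a single resampling.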
Floating-image-to-atlas registration contains a pre-registration and a nonrigid registration. The floating image $F(x, y)$ was registered to the atlas image $A_f(x, y)$. The similarity transformation $T_s(x, y)$ was applied in the pre-registration. The transformation is defined as follows:

$$
T_s(x, y) = (x - C_x,\ y - C_y)
\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}
+ (T_x - C_x,\ T_y - C_y)
\tag{1}
$$
where $(x, y)$ is the coordinate in the floating image, $\lambda$ is the scale factor, $\theta$ is the rotation angle, $(C_x, C_y)$ is the coordinate of the rotation center, and $(T_x, T_y)$ is the translation in the x- and y-directions. The output of $T_s(x, y)$ is the image coordinate $(x', y')$ after the transformation.
To compare with other multimodal registration methods without an atlas, a common measure, MI, was selected. Thus, the objective function is defined as follows:

$$
(\lambda, \theta, C_x, C_y, T_x, T_y) = \arg\max_{\lambda, \theta, C_x, C_y, T_x, T_y} \mathrm{MI}\big(F(T_s(x, y)),\ A_f(x, y)\big)
\tag{2}
$$
$F(T_s(x, y))$ is the floating image under the similarity transformation $T_s(x, y)$. The MI between the two images is defined as follows:

$$
\mathrm{MI}(F, A_f) = H(F) + H(A_f) - H(F, A_f)
\tag{3}
$$
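To make Eqs. (2) and (3) concrete, here is a minimal numerical sketch (not the paper's implementation): MI is computed from a joint histogram exactly as in Eq. (3), and the pre-registration search is reduced to an exhaustive scan over integer translations only, a toy slice of the full similarity-parameter space of Eq. (2).

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability array (zero bins ignored)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(f, a, bins=16):
    """Eq. (3): MI(F, A_f) = H(F) + H(A_f) - H(F, A_f), via a joint histogram."""
    joint, _, _ = np.histogram2d(f.ravel(), a.ravel(), bins=bins)
    pxy = joint / joint.sum()
    return entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0)) - entropy(pxy.ravel())

# Toy pre-registration: the floating image is the atlas shifted by a known
# amount; scanning integer shifts should recover it at the MI maximum.
rng = np.random.default_rng(0)
atlas = rng.random((64, 64))
floating = np.roll(atlas, (3, -2), axis=(0, 1))

best = max((mutual_information(np.roll(floating, (dx, dy), axis=(0, 1)), atlas), dx, dy)
           for dx in range(-5, 6) for dy in range(-5, 6))
# best[1:] is the recovered translation, here (-3, 2)
```

In the actual method, the scan over translations would be replaced by an optimizer over all six similarity parameters of Eq. (1).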