Research Article
Multimodal Image Alignment via Linear Mapping between
Feature Modalities
Yanyun Jiang,¹ Yuanjie Zheng,¹ Sujuan Hou,¹ Yuchou Chang,² and James Gee³
¹School of Information Science and Engineering, Key Lab of Intelligent Computing & Information Security in Universities of Shandong, Institute of Life Sciences, Shandong Provincial Key Laboratory for Distributed Computer Software Novel Technology and Key Lab of Intelligent Information Processing, Shandong Normal University, Jinan, Shandong 250014, China
²Computer Science and Engineering Technology Department, University of Houston-Downtown, Houston, TX 77002, USA
³Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
Correspondence should be addressed to Yuanjie Zheng; zhengyuanjie@gmail.com
Received 8 January 2017; Accepted 10 May 2017; Published 6 July 2017
Academic Editor: Saverio Affatato
Copyright © 2017 Yanyun Jiang et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We propose a novel landmark-matching-based method for aligning multimodal images, accomplished uniquely by resolving a linear mapping between different feature modalities. This linear mapping yields a new similarity measurement for images captured from different modalities. In addition, our method simultaneously solves for this linear mapping and the landmark correspondences by minimizing a convex quadratic function. As shown in our experiments on a variety of image modalities, our method can estimate complex relationships between different modalities and nonlinear nonrigid spatial transformations even in the presence of heavy noise.
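The core idea of the abstract, a linear mapping between feature modalities serving as a cross-modal similarity, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it fits the mapping by ordinary least squares from a handful of known correspondences and scores candidate pairs by the mapping residual; all function names, feature dimensions, and the synthetic data are assumptions.

```python
import numpy as np

def fit_linear_map(feats_a, feats_b):
    """Least-squares W such that feats_a @ W approximates feats_b (rows are paired features)."""
    W, *_ = np.linalg.lstsq(feats_a, feats_b, rcond=None)
    return W

def cross_modal_similarity(f_a, f_b, W):
    """Higher (less negative) when f_a maps close to f_b under the linear map W."""
    return -np.linalg.norm(f_a @ W - f_b)

# Toy example: modality-B features are an unknown linear transform of modality-A features.
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))      # ground-truth (unknown) linear relationship
A = rng.standard_normal((50, 4))     # modality-A features at 50 landmarks
B = A @ T                            # corresponding modality-B features

W = fit_linear_map(A, B)
matched = cross_modal_similarity(A[0], B[0], W)     # true correspondence
mismatched = cross_modal_similarity(A[0], B[1], W)  # wrong correspondence
assert matched > mismatched
```

In the paper's full formulation the mapping and the landmark correspondences are estimated jointly by minimizing one convex quadratic objective, rather than fitting the mapping from known pairs as in this sketch.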
1. Introduction
Multimodal/multispectral images, acquired from multiple modalities or different spectral bands of the same subject or organ, are of great importance for medical diagnosis and computer-aided surgery, benefiting from the complementary information captured by sensors of different modalities/spectra (e.g., magnetic resonance imaging and computed tomography, or multispectral imaging) [1–3]. They are also increasingly used in other fields, such as computer vision and computational photography, via different imaging modalities (e.g., RGB and near infrared) or under various imaging conditions (e.g., flash and no flash, or depth and color images) [4].
Image alignment resolves spatial correspondences between images and plays a fundamentally important role in practical applications of multimodal images. Various techniques [4–9] currently exist for multimodal image alignment, which can be broadly categorized into feature-based and patch-based methods. The feature-based methods detect sparse salient points and extract features to describe their local photometric/geometric pattern [10, 11]. Unlike alignment of generic images, multimodal image alignment requires that the features, together with their similarity measurement, be able to cope with image variations caused by the modality difference [6]. The patch-based methods measure the similarity between local patches by computing their mutual information [12], cross correlation [4, 6, 13], or a combination of the two [14].
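As an illustration of one patch-based similarity mentioned above, mutual information between two patches can be estimated from a joint intensity histogram. This is a generic sketch, not code from the cited works; the bin count and patch sizes are arbitrary choices.

```python
import numpy as np

def mutual_information(patch_a, patch_b, bins=32):
    """Histogram-based mutual information (in nats) between two equally sized patches."""
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of patch_a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of patch_b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A deterministic (here, inverted) intensity relationship scores higher than
# an unrelated patch, which is why MI suits cross-modal comparison.
rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(64, 64)).astype(float)
b = 255.0 - a                                 # related: inverted intensities
c = rng.integers(0, 256, size=(64, 64)).astype(float)  # unrelated
assert mutual_information(a, b) > mutual_information(a, c)
```

Because mutual information only requires a statistical dependency, not a linear or monotonic one, it tolerates the intensity inversions common between modalities.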
Despite the promising results reported in existing papers, multimodal image alignment still remains challenging, mainly due to the complex and unknown relationship between image modalities (as shown by the left two images in Figure 1(c)). The common information between multimodal images is needed for defining image features. However, it is not always trivial to recognize, model, or learn this information in practice due to outliers, large displacements, and the complex relationship [4]. Moreover, predefined image features can work well only when the corresponding measurement of feature similarity fits these features, which is not always an easy task in practice. Finally, the definition of image feature and similarity is independent
Hindawi
Journal of Healthcare Engineering
Volume 2017, Article ID 8625951, 6 pages
https://doi.org/10.1155/2017/8625951