Feature-Based Retinal Image Registration by Enforcing Transformation-guided and
Robust Estimation
Lifang Wei, Changcai Yang, Riqing Chen *
Institute of Smart Agriculture and Forestry and Big
Data, College of Computer and Information Sciences,
Fujian Agriculture and Forestry University
Fuzhou 350002, China
.chen@fafu.edu.cn
*Corresponding author
Dong Heng
Fuzhou Institute of Technology
Fuzhou 350506, China
ypoofml@126.com
Abstract—Retinal image registration plays an important role in
diagnosing, monitoring, and tracking the progression of various
fundus diseases. Because the vascular structure is indistinct in
poor-quality and low-overlap retinal images, registration becomes
more difficult both for intensity-based methods and for methods
based on blood vessels, branch points, and crossover points. To
address this issue, an effective retinal image registration method
is proposed in this paper by enforcing transformation-guided and
robust estimation. Landmarks are extracted to build the initial
correspondence set, and the correspondences are refined by using
an affine model and a quadric model to guide the estimator in
removing mismatches. A robust regression method (iteratively
reweighted least squares) combined with an M-estimator is used
to calculate the transformation parameters for the affine and
quadric models hierarchically, which are then used to warp the
moving images. We evaluate the proposed framework by
quantitative measurements and visual comparison, and the results
demonstrate that it is more robust in estimating the
transformation parameters and obtains more accurate
registration results than other methods.
Keywords- Image Registration; Retinal Image; Robust Estimate
I. INTRODUCTION
Retinal images are used in many applications for diagnosing
and monitoring the progression of a variety of diseases, such as
diabetic retinopathy, age-related macular degeneration, and
glaucoma [1-3]. Retinal image registration can effectively assist
this analysis. Although many retinal image registration methods
have been proposed, several challenges remain: (1) Feature-based
retinal registration methods [4-6] suffer from inadequate
landmark points, especially crossovers and bifurcations of the
vessel structure, because of low overlap between adjacent fields.
(2) The performance of intensity-based techniques [7, 8] can
deteriorate because the distributions of contrast and intensity in
retinal images are not spatially uniform or consistent. (3) Large
homogeneous nonvascular or texture-less regions in
high-resolution retinal images make successful registration
difficult [9, 10].
To address the above difficulties, this paper proposes a
feature-based registration method for retinal images that uses an
affine model and a quadric model to guide robust estimation for
removing mismatches. The SIFT algorithm is utilized to extract
the landmarks and detect the initial correspondences. An
M-estimator [11] combined with the robust regression method
iteratively reweighted least squares (IRLS) [12] is used to
calculate the transformation parameters. For accurate
estimation, the affine transformation model and the quadric
transformation model are used to remove mismatches
hierarchically. In our experiments, three different types of
retinal images are used to evaluate the performance of the
proposed method, and the results show that it is robust and
effective at registering retinal images with low overlap and poor
quality.
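To make the hierarchy of models concrete, the quadric (second-order polynomial) model widely used for retinal registration maps each coordinate to a linear combination of the monomials 1, x, y, x^2, xy, y^2. The sketch below is our own illustration of such a warp; the function name `quadratic_warp` and the parameter layout are assumptions, not taken from the paper:

```python
import numpy as np

def quadratic_warp(points, params):
    """Apply a second-order (quadric) polynomial transform.

    points: (N, 2) array of (x, y) coordinates.
    params: (2, 6) array; row 0 produces x', row 1 produces y'.
    Each output coordinate is a linear combination of the
    monomial basis [1, x, y, x^2, x*y, y^2].
    """
    x, y = points[:, 0], points[:, 1]
    basis = np.stack([np.ones_like(x), x, y, x**2, x * y, y**2], axis=1)
    return basis @ params.T
```

Note that the affine model is the special case in which the three quadratic coefficients of each row are zero, which is what allows the affine stage to serve as a crude initialization for the quadric stage.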
The remaining parts of this paper are organized as follows.
In Section 2, we present the technical details of the proposed
registration method. In Section 3, we describe the
experiments and results. Finally, in Section 4, the conclusion is
provided.
II. METHODS
The main goal of our framework is to register any two retinal
images, especially those with poor quality and low overlap. In
particular, model guidance is utilized in two ways. (1)
Significantly incorrect matches are excluded according to the
orientation differences between matched landmarks, and an
affine transformation model fitted to the top 10 correspondences
with the smallest orientation error among the initial matches
[13] provides a crude map of the transformation relationship.
Most incorrect matches can be discarded in this way. We can
then redetect correspondences within the overlap regions only
and remove mismatches by the same procedure to obtain a new
correspondence set. (2) The approximate quadric transformation
parameters are further estimated by an M-estimator combined
with the robust regression method IRLS under a stricter sparsity
constraint.
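The IRLS scheme combined with an M-estimator can be sketched as follows for the affine stage. This is a generic illustration, not the paper's exact formulation: the choice of Huber weights, the MAD-based residual scaling, and all function names are our assumptions.

```python
import numpy as np

def huber_weights(residuals, k=1.345):
    """Huber M-estimator weights: 1 for |r| <= k, k/|r| otherwise."""
    r = np.abs(residuals)
    w = np.ones_like(r)
    large = r > k
    w[large] = k / r[large]
    return w

def irls_affine(src, dst, n_iter=20, k=1.345):
    """Estimate a 2D affine transform dst ~ A @ [x, y, 1]^T by IRLS.

    src, dst: (N, 2) arrays of matched landmark coordinates.
    Returns a (2, 3) affine matrix. Residuals are normalized by a
    MAD-based scale so the Huber threshold k is in robust-sigma units.
    """
    X = np.hstack([src, np.ones((len(src), 1))])  # (N, 3) design matrix
    w = np.ones(len(src))
    A = None
    for _ in range(n_iter):
        # Weighted least squares: scale rows by sqrt of the weights
        sw = np.sqrt(w)[:, None]
        A, *_ = np.linalg.lstsq(sw * X, sw * dst, rcond=None)
        # Point-wise reprojection residuals
        r = np.linalg.norm(X @ A - dst, axis=1)
        # Robust scale estimate (median absolute deviation)
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = huber_weights(r / scale, k)
    return A.T  # (2, 3)
```

Mismatches receive small weights as their normalized residuals grow, so they contribute little to the fit; the same reweighting loop applies unchanged to the quadric stage by swapping in the six-term monomial design matrix.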
A. Detect Initial Correspondences
The initial correspondences are refined by removing
mismatches via transformation enforcing. SIFT (Scale-Invariant
Feature Transform) [14, 15], proposed by D. Lowe, is used to
extract the landmarks and build the initial potential
correspondences [16-18]. Since SIFT descriptors are invariant to
image scaling and rotation, incorrect matches can be excluded
using the landmarks' orientations: if two landmarks are a true
match, the orientation difference between them should be
similar to that of the other matches. Supposing two features
form a correspondence with each other, the difference of their
orientations can be constrained within a range and
2017 International Conference on Green Informatics
978-1-5386-2280-3/17 $31.00 © 2017 IEEE
DOI 10.1109/ICGI.2017.33