Since the IDBD algorithm can be seen as a Variable Step-Size (VSS) LMS algorithm, it is appropriate to compare the performance of the KIMEL algorithm with that of other kernelized VSS-LMS algorithms. Many researchers have proposed variable step-size algorithms based on the standard LMS weight update recursion, such as [13–16]. The robust VSS-LMS algorithm proposed in [14] is one of the best-known and most successful VSS-LMS algorithms. In this paper, we compare the performance of the KIMEL algorithm with that of the kernelized version of this VSS-LMS algorithm (KVSS-LMS). Note that, although the KVSS-LMS is also derived by us in this paper, we treat it as a competing algorithm.
Automatic assessment of perceptual image quality is critical for many image processing applications and is becoming increasingly important. The KIMEL algorithm proposed in this paper excels at nonlinear fitting and regression problems, and can therefore be used to construct a good approximation of the functional relationship between input and output data. The blind Image Quality Assessment (IQA) problem, approached through machine learning, can be seen as constructing a relationship between distorted images and their final quality scores [31]. Thus, the KIMEL algorithm is applicable to the blind IQA problem, and in this paper we perform this study. The performance of the KIMEL algorithm is tested on the LIVE IQA database [41] and is compared with that of the KLMS algorithm, the KVSS-LMS algorithm, and the recent NR IQA algorithms in [31,32,38]. The Spearman Rank-Order Correlation Coefficient (SROCC), the (Pearson's) Linear Correlation Coefficient (LCC), and the Root MSE (RMSE) are used to evaluate the performance of these algorithms. Experimental results show that the performance of the proposed algorithm is superior to that of the competing methods.
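For concreteness, the three criteria can be computed as in the following minimal sketch using scipy; the arrays `dmos` and `predicted` are hypothetical placeholders for the subjective scores and the algorithm outputs, not data from the LIVE database:

```python
# Minimal sketch of the three evaluation criteria (hypothetical data).
import numpy as np
from scipy.stats import spearmanr, pearsonr

dmos = np.array([30.2, 45.1, 60.7, 25.3, 55.0])       # subjective scores (placeholder)
predicted = np.array([28.9, 47.3, 58.2, 27.1, 53.8])  # algorithm outputs (placeholder)

srocc, _ = spearmanr(dmos, predicted)           # monotonicity of the prediction
lcc, _ = pearsonr(dmos, predicted)              # linear agreement with the scores
rmse = np.sqrt(np.mean((dmos - predicted)**2))  # absolute prediction error
print(f"SROCC={srocc:.4f}, LCC={lcc:.4f}, RMSE={rmse:.4f}")
```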
The paper is organized as follows. In Section 2, we briefly introduce the IDBD algorithm, the kernel method, and the KLMS algorithm to make the paper self-contained, and present the KVSS-LMS algorithm for the purpose of comparison. In Section 3, the KIMEL algorithm is formulated and the convergence analyses are performed. In Section 4, a simple application in nonlinear channel equalization is presented to illustrate the effectiveness and advantages of the proposed algorithm. In Section 5, we apply the KIMEL algorithm to the blind IQA problem, which is a more practical application. Finally, conclusions are drawn in Section 6.
2. Preliminary knowledge and algorithms for comparison
The KIMEL algorithm to be presented in this paper is inspired by the IDBD algorithm [2] and based on the famed kernel trick; thus, in this section we briefly introduce both of them, as well as two kernel algorithms that will be used for comparison in the application sections.
2.1. Incremental Delta-Bar-Delta algorithm
In [2], the IDBD algorithm was derived; it can be regarded as a VSS-LMS algorithm with an individual step-size parameter for each weight dimension, where these parameters change according to a metalearning process. The IDBD algorithm was developed for learning linear systems. After defining the input vector $\mathbf{X}(n) = [x_1(n), x_2(n), \ldots, x_N(n)]^{\mathrm{T}}$, the weight vector $\mathbf{W}(n) = [w_1(n), w_2(n), \ldots, w_N(n)]^{\mathrm{T}}$, and the desired output $d(n)$, the output of the linear system can be expressed as $y(n) = \mathbf{W}(n)^{\mathrm{T}} \mathbf{X}(n) = \sum_{i=1}^{N} w_i(n) x_i(n)$, and the estimation error can be expressed as $e(n) = d(n) - y(n)$. The procedure of the algorithm is as follows:
$$
\begin{aligned}
\beta_i(n+1) &= \beta_i(n) + \theta\, e(n)\, x_i(n)\, h_i(n), \\
\alpha_i(n+1) &= e^{\beta_i(n+1)}, \\
w_i(n+1) &= w_i(n) + \alpha_i(n+1)\, e(n)\, x_i(n), \\
h_i(n+1) &= h_i(n)\big[1 - \alpha_i(n+1)\, x_i^2(n)\big]^{+} + \alpha_i(n+1)\, e(n)\, x_i(n),
\end{aligned}
\tag{1}
$$
where $[\cdot]^{+}$ is a half-rectified function. In this algorithm, the step-size parameters $\alpha_i$ of all weight dimensions have an exponential relationship with the memory parameters $\beta_i$, which ensures that $\alpha_i$ remains positive and provides a mechanism for making geometric steps in $\alpha_i$. Note that $h_i$, an additional per-input memory parameter, is a decaying trace of the cumulative sum of recent changes to $w_i$; thus, the increment to $\beta_i$ is proportional to the correlation between the current weight change $e(n) x_i(n)$ and a trace of recent weight changes $h_i(n)$. If the current step is positively correlated with past steps, indicating that the past steps should have been larger, the memory parameter $\beta_i$, as well as the step-size $\alpha_i$, is increased. If the current step is negatively correlated with past steps, indicating that the past steps should have been smaller, the memory parameter $\beta_i$, as well as the step-size $\alpha_i$, is decreased. The IDBD algorithm is a metalearning algorithm in the sense that it learns the step-size parameters based on previous learning experience.
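A minimal sketch of one IDBD update step in Python may help clarify the recursion in (1); the meta step-size `theta` is an illustrative assumption, not a value prescribed in [2]:

```python
import numpy as np

def idbd_update(w, beta, h, x, d, theta=0.01):
    """One step of the IDBD recursion (1); theta is the meta step-size."""
    e = d - w @ x                    # estimation error e(n) = d(n) - y(n)
    beta = beta + theta * e * x * h  # memory parameters beta_i(n+1)
    alpha = np.exp(beta)             # step-sizes alpha_i(n+1) = exp(beta_i(n+1))
    w = w + alpha * e * x            # per-dimension weight update
    # decaying trace of recent weight changes; [.]^+ is the half-rectifier
    h = h * np.maximum(1.0 - alpha * x**2, 0.0) + alpha * e * x
    return w, beta, h
```

Each input dimension carries its own `beta[i]`, so dimensions whose updates consistently point in the same direction accumulate geometrically larger step-sizes.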
It was shown in [2] that the IDBD algorithm outperforms the ordinary LMS algorithm and, in fact, finds the optimal step-size parameters. In IDBD, both the weight update rule and the learning-rate update rule are derived by gradient descent, which is the origin of the idea behind the new algorithm to be presented in the following.
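Sketched briefly, the memory-parameter update in (1) follows from stochastic gradient descent on the squared error with respect to $\beta_i$ (see [2] for the full derivation):

$$
\beta_i(n+1) = \beta_i(n) - \frac{\theta}{2}\frac{\partial e^2(n)}{\partial \beta_i}
\approx \beta_i(n) + \theta\, e(n)\, x_i(n)\, \frac{\partial w_i(n)}{\partial \beta_i}
= \beta_i(n) + \theta\, e(n)\, x_i(n)\, h_i(n),
$$

where $h_i(n)$ serves as an approximation of $\partial w_i(n)/\partial \beta_i$.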
2.2. Kernel method
In order to learn a nonlinear relationship with a linear learning machine, a nonlinear feature set needs to be chosen; that is, the input data are transformed into a high-dimensional feature space by a certain nonlinear mapping, and the linear learning machine is then applied in that feature space. Thus, the output of the learning machine has the form
$$
f(x) = \sum_{i=1}^{N} w_i\, \varphi_i(x),
\tag{2}
$$
where $x$ is the input data in the original space, $\boldsymbol{\varphi} : \mathbb{X} \to \mathbb{F}$ is a nonlinear mapping from the input data space to a certain feature space, and $w_i$ is the $i$th weight component in the feature space, which is assumed to have $N$ dimensions.
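The practical appeal of this construction is that inner products in $\mathbb{F}$ never have to be computed explicitly; a kernel function evaluates them directly in the input space. A minimal sketch with the widely used Gaussian kernel follows (the kernel width `sigma` and the coefficient array `a` are illustrative assumptions):

```python
import numpy as np

def gaussian_kernel(x1, x2, sigma=1.0):
    """k(x1, x2) = <phi(x1), phi(x2)> evaluated without forming phi explicitly."""
    return np.exp(-np.sum((x1 - x2)**2) / (2.0 * sigma**2))

def f(x, inputs, a, sigma=1.0):
    """f(x) expressed through kernel evaluations against stored inputs x_j
    with coefficients a_j (the dual form discussed next)."""
    return sum(a_j * gaussian_kernel(x, x_j, sigma) for a_j, x_j in zip(a, inputs))
```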
An important characteristic of a linear learning machine is that it can be expressed in dual form [6], which means that the weight of the linear learning machine can be expressed as a