
Variable Step Size LMS Algorithm Based on
Modified Sigmoid Function
Yong Chen, Jinpeng Tian, Yanping Liu
Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Shanghai University
Shanghai 200072, China
cyongmail@163.com
Abstract—This paper studies the shortcomings of the traditional fixed
step size least mean square (LMS) algorithm. After reviewing existing
algorithms, it builds a nonlinear functional relationship between the step
size μ and the error signal and presents a novel variable step size LMS
adaptive filtering algorithm obtained by improving the Sigmoid function
through a translation transformation. The selection of parameters and the
convergence performance are discussed. Theoretical analysis and simulation
results show that the proposed variable step size LMS algorithm performs
better than several existing algorithms, improving on their convergence
performance.
Keywords—Least mean square algorithm; adaptive filtering
algorithm; translation transformation; variable step size
I. INTRODUCTION
The least mean square (LMS) algorithm is a gradient-based estimation
algorithm first proposed by Widrow and Hoff. Because of its simplicity
and robustness, it is widely used in fields such as adaptive control,
radar, system identification and signal processing. However, the
convergence rate of the traditional fixed step size algorithm is very
slow, and increasing the step size to improve the convergence rate
inevitably enlarges the steady-state error of the algorithm; in the worst
case it may cause the algorithm to diverge. Convergence speed and
steady-state error are therefore in conflict for the fixed step size
LMS algorithm.
To overcome this conflict, many researchers have proposed variable step
size least mean square algorithms that build a nonlinear functional
relationship between μ and the error signal. In [1] the authors propose
an algorithm based on the Sigmoid function which can resolve the
conflict, but its step size changes greatly in the steady state, which
degrades the steady-state error. Reference [2] improves on [1] to
address this shortcoming, but while it reduces the error, it lacks
convergence speed and anti-interference ability. Reference [3] makes the
step size change slowly when the error is close to zero by introducing
an exponent; however, the exponent reduces the anti-interference
performance. In [4], the authors put forward an algorithm based on a
tongue-like curve which reduces the computation, but the improvement in
convergence speed is very small. The basic idea of these algorithms is
that the step size should be as large as possible at the initial stage
of the algorithm, giving a fast convergence speed, and as small as
possible when the error is small at the convergence steady-state stage,
so as to decrease the steady-state error. References [5-16] also make
improvements based on this principle.
Following this idea, this paper puts forward a new variable step size
adaptive algorithm. We build a nonlinear relationship based on a
translation transformation of the Sigmoid function. The method reduces
the influence of the step size at the convergence steady-state stage
and also offers better anti-interference performance. Theoretical
analysis and simulation results demonstrate the advantages of this
algorithm.
II. IMPROVED LMS ALGORITHM
The basic LMS algorithm is:

e(n) = d(n) − X^T(n)ω(n)    (1)

ω(n+1) = ω(n) + 2μe(n)X(n)    (2)
where ω(n) is the adaptive filter weight vector, d(n) is the desired
signal, X(n) is the adaptive filter input signal vector, and μ is the
step size, a constant. The algorithm converges provided that
0 < μ < 1/λ_max, where λ_max is the maximum eigenvalue of the
autocorrelation matrix of the input signal.
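For concreteness, the recursion (1)-(2) can be written in a few lines of
code. The following Python sketch is only an illustration of the fixed
step size update, not the implementation used in this paper; the filter
length M and the step size mu are hypothetical values chosen by the user.

```python
import numpy as np

def lms_fixed(x, d, M=8, mu=0.01):
    """Fixed step size LMS following (1)-(2).
    x: input signal, d: desired signal, M: filter length (assumed),
    mu: constant step size, required to satisfy 0 < mu < 1/lambda_max."""
    x, d = np.asarray(x, dtype=float), np.asarray(d, dtype=float)
    N = len(x)
    w = np.zeros(M)                      # adaptive filter weight vector w(n)
    e = np.zeros(N)                      # error signal e(n)
    for n in range(M - 1, N):
        X = x[n - M + 1:n + 1][::-1]     # X(n) = [x(n), x(n-1), ..., x(n-M+1)]
        e[n] = d[n] - X @ w              # (1)  e(n) = d(n) - X^T(n) w(n)
        w = w + 2 * mu * e[n] * X        # (2)  w(n+1) = w(n) + 2*mu*e(n)*X(n)
    return w, e
```

With a fixed mu, the same step is used whether the filter is far from or
close to the optimum, which is precisely the trade-off discussed above.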
To address this problem of the traditional LMS algorithm, [1] proposes a
variable step size algorithm based on the Sigmoid function, called the
SVSLMS algorithm. Its step size is variable: a different error yields a
different step size.
f(x) = 1 / (1 + exp(−x))    (3)
The step size μ(n) is a nonlinear function of the error e(n):

μ(n) = β(1 / (1 + exp(−|e(n)|)) − 0.5)    (4)
This algorithm can meet the requirements on the step size. However, the
step size still changes greatly in the steady state [2].
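As a sketch of how (4) plugs into the LMS recursion, the constant step
in (2) is simply replaced by the error-dependent μ(n). The Python
fragment below is an illustration under assumed values of β and the
filter length M, not the reference implementation of [1].

```python
import numpy as np

def sigmoid_step(e_n, beta=0.05):
    """Step size rule (4): mu(n) = beta * (1/(1 + exp(-|e(n)|)) - 0.5).
    A small error gives mu(n) close to 0; a large error gives mu(n)
    close to beta/2."""
    return beta * (1.0 / (1.0 + np.exp(-abs(e_n))) - 0.5)

def svslms(x, d, M=8, beta=0.05):
    """LMS recursion (1)-(2) with the constant step replaced by mu(n) from (4)."""
    x, d = np.asarray(x, dtype=float), np.asarray(d, dtype=float)
    N = len(x)
    w = np.zeros(M)
    e = np.zeros(N)
    for n in range(M - 1, N):
        X = x[n - M + 1:n + 1][::-1]     # X(n) = [x(n), ..., x(n-M+1)]
        e[n] = d[n] - X @ w              # (1)
        mu_n = sigmoid_step(e[n], beta)  # (4) error-dependent step size
        w = w + 2 * mu_n * e[n] * X      # (2) with variable step
    return w, e
```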
Equation (3) is the Sigmoid function and (4) is the step size function
of [1]. Examining this function shows that the step size changes so
greatly during convergence because the function varies quickly around
its symmetry point. When we