Delay-Slope-Dependent Stability Results of
Recurrent Neural Networks
Tao Li, Wei Xing Zheng, Senior Member, IEEE, and
Chong Lin, Senior Member, IEEE
Abstract—By using the fact that the neuron activation func-
tions are sector bounded and nondecreasing, this brief presents
a new method, named the delay-slope-dependent method, for
stability analysis of a class of recurrent neural networks with
time-varying delays. This method incorporates more information on the slope of the neuron activation functions while introducing fewer matrix variables into the constructed Lyapunov–Krasovskii functional. As a result, improved delay-dependent stability criteria with a lower computational burden and less conservatism are obtained. Numerical
examples are given to illustrate the effectiveness and the benefits
of the proposed method.
Index Terms—Asymptotic stability, delay-slope-dependent, recurrent neural networks.
Manuscript received January 7, 2011; accepted September 12, 2011. Date of publication October 6, 2011; date of current version December 1, 2011. This work was supported in part by the National Science Foundation of China, under Grant 60904025, Grant 60904026, and Grant 61174033, the Key Laboratory of Education Ministry for Image Processing and Intelligent Control, under Grant 200805, and the Research Grant from the Australian Research Council.
T. Li is with the Department of Information and Communication, Nanjing University of Information Science and Technology, Nanjing 210044, China. This work was done when he was with the School of Computing and Mathematics, University of Western Sydney, Penrith NSW 2751, Australia (e-mail: litaojia79@yahoo.com.cn).
W. X. Zheng is with the School of Computing and Mathematics, University of Western Sydney, Penrith NSW 2751, Australia (e-mail: w.zheng@uws.edu.au).
C. Lin is with the Institute of Complexity Science, College of Automation Engineering, Qingdao University, Qingdao 266071, China (e-mail: linchong2004@hotmail.com).
Digital Object Identifier 10.1109/TNN.2011.2169425
I. INTRODUCTION
Recurrent neural networks (RNNs) have been an active
research area for the last few decades and have been suc-
cessfully applied to various fields such as image processing, pattern recognition, and the solving of partial differential equations [1], [2]. Some of these applications require that the equilibrium
points of the designed network be stable. So, it is important to
study the stability of RNNs. However, in the implementation
of artificial RNNs, time delays are frequently encountered as
a result of the finite switching speed of amplifiers and the inherent communication time of neurons. It has been shown that the existence of time delays may degrade the dynamical behavior of the equilibrium points and lead to oscillation, divergence, or even instability. Therefore, the equilibrium and
stability properties of RNNs with time delays have received
considerable attention (see [3]–[23]).
So far, the stability criteria for RNNs with time delays have been classified into two categories, i.e., delay-independent criteria [3]–[9] and delay-dependent criteria [10]–[23]. For
the delay-dependent case, some criteria have been obtained
by using a Lyapunov–Krasovskii functional (LKF) [14]–[23].
It is well known that the choice of an appropriate LKF is
crucial for deriving less conservative stability criteria. Thus,
some new techniques have been developed for reducing
conservatism, such as free-weighting matrix LKF [15], [16],
discretized LKF [18], augmented LKF [21], weighting delay
LKF [23], and so on. However, these methods suffer from some
common shortcomings: 1) many matrix variables, some of
which are even useless for reducing conservatism (see [20]
for details), are introduced in the obtained results, which
causes a heavy computational burden, and 2) the information on the neuron activation functions is not adequately taken into account, which may lead to some conservatism.
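For illustration, the delay-dependent criteria in these works are typically derived from an LKF combining quadratic, single-integral, and double-integral terms. A generic form of such a functional (shown here only for illustration, with x(t) denoting the neuron state, τ(t) ≤ τ̄ the time-varying delay, and x_t the state segment x(t + θ), θ ∈ [−τ̄, 0]) is
\[
V(x_t) = x^{T}(t) P x(t) + \int_{t-\tau(t)}^{t} x^{T}(s) Q x(s)\,ds + \int_{-\bar{\tau}}^{0}\!\int_{t+\theta}^{t} \dot{x}^{T}(s) Z \dot{x}(s)\,ds\,d\theta
\]
where P > 0, Q > 0, and Z > 0 are matrix variables to be determined. Each additional free-weighting, augmented, or delay-partitioning term brings further matrix variables, which explains the computational burden mentioned in 1) above.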
In practical applications, it has been found that suitable and more general neuron activation functions can improve
the performance of RNNs. For example, in [24] it was shown
that the absolute capacity of an associative memory model
can be remarkably improved by replacing the usual sigmoid
activation function with a nonmonotonic activation function.
In [25] and [26], the finite slope of neuron nonlinearities was
exploited for obtaining less conservative conditions for global
stability of neural networks in the non-delayed case. In [27],
it was pointed out that the number of degrees of freedom when
solving the nonlinear optimization task can be reduced by
studying the relation between the learning rate and the slope
of neuron activation functions. Furthermore, it was also noted
in [22] that the property of the neuron activation functions can
affect the allowable time delay upper bound for RNNs.
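To make the role of the slope explicit, the activation functions considered in this line of work are commonly assumed to be nondecreasing and slope bounded; a standard formulation of this assumption (the symbols g_i and k_i are used here only for illustration) is
\[
0 \le \frac{g_i(s_1) - g_i(s_2)}{s_1 - s_2} \le k_i, \qquad \forall\, s_1, s_2 \in \mathbb{R},\ s_1 \ne s_2,\ i = 1, \dots, n
\]
where k_i > 0 is the maximum slope of the ith activation function. It is an upper bound of this kind that a delay-slope-dependent analysis exploits in order to relate the admissible delay to the steepness of the nonlinearities.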
Motivated by the preceding discussion, this brief mainly
considers the relationship between the time delay upper bound
and the slope of neuron activation functions. On this basis,
a new method, called the delay-slope-dependent method, is
proposed to deal with the stability of RNNs with time-varying
delay, so that a larger allowable upper bound for the time delay
can be obtained. Different from previous studies, this method
has the following features: 1) compared with the methods in
[15], [16], [20], and [23], more information on the slope of
neuron activation functions is utilized in the proposed LKF
and then less conservative stability criteria are obtained, and
2) while maintaining the efficiency of the stability conditions,
the proposed method introduces far fewer matrix variables than
the existing methods. These two distinctive features are the
novelty of the work presented in this brief.
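As a generic illustration of how slope information can be embedded in an LKF (a standard device from the literature, not necessarily the exact construction used in the sequel), one may include a Lur'e–Postnikov-type integral term
\[
V_g(x(t)) = 2 \sum_{i=1}^{n} \lambda_i \int_{0}^{x_i(t)} g_i(s)\,ds, \qquad \lambda_i \ge 0
\]
whose time derivative, 2 \sum_{i=1}^{n} \lambda_i g_i(x_i(t)) \dot{x}_i(t), introduces products of the activation values and the state derivatives into the stability conditions and thereby couples the slope bounds k_i with the admissible delay bound.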
Notation: Throughout this brief, for a real symmetric matrix P, the notation P > 0 (P ≥ 0) means that P is positive definite (positive semidefinite), and A > B (A ≥ B) means that A − B > 0 (A − B ≥ 0).