426 IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 27, NO. 2, FEBRUARY 2016
Optimal Communication Network-Based H∞ Quantized Control With Packet Dropouts for a Class of Discrete-Time Neural Networks With Distributed Time Delay
Qing-Long Han, Senior Member, IEEE, Yurong Liu, and Fuwen Yang, Senior Member, IEEE
Abstract—This paper is concerned with optimal communication network-based H∞ quantized control for a discrete-time neural network with distributed time delay. Control of the neural network (plant) is implemented via a communication network. Both quantization and communication network-induced data packet dropouts are considered simultaneously. It is assumed that the plant state signal is quantized by a logarithmic quantizer before transmission, and that communication network-induced packet dropouts can be described by a Bernoulli distributed white sequence. A new approach is developed such that the controller design can be reduced to the feasibility of linear matrix inequalities, and a desired optimal control gain can be derived in explicit form. It is worth pointing out that some new techniques, based on a new sector-like expression of quantization errors and the singular value decomposition of a matrix, are developed and employed in the derivation of the main results. An illustrative example is presented to show the effectiveness of the obtained results.
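The two channel effects described in the abstract can be sketched numerically. The code below is a minimal illustration, not the paper's exact model: `log_quantize` implements a standard logarithmic quantizer with density `rho`, whose relative error is sector-bounded by δ = (1 − ρ)/(1 + ρ), and `transmit` models Bernoulli packet dropouts in which a lost packet is received as zero. The function names and the zero-on-dropout convention are assumptions made here for illustration.

```python
import math
import numpy as np

def log_quantize(v, rho=0.5, u0=1.0):
    """Logarithmic quantizer with density rho in (0, 1).

    Levels are +/- rho**i * u0; interval boundaries are chosen so the
    relative error satisfies |q(v) - v| <= delta * |v| with
    delta = (1 - rho) / (1 + rho), i.e., a sector-bounded error."""
    if v == 0.0:
        return 0.0
    # Smallest i whose level rho**i * u0 covers |v| in its sector.
    t = math.log(2.0 * abs(v) / ((1.0 + rho) * u0)) / math.log(rho)
    i = math.floor(t) + 1
    return math.copysign(rho ** i * u0, v)

def transmit(x, alpha_bar=0.9, rng=None):
    """Bernoulli packet-dropout channel: the quantized state arrives
    with probability alpha_bar; a dropped packet is received as zero."""
    rng = rng if rng is not None else np.random.default_rng()
    alpha = rng.binomial(1, alpha_bar)  # Bernoulli distributed white sequence
    return alpha * log_quantize(x)

# The sector bound holds for every input value.
delta = (1 - 0.5) / (1 + 0.5)
for v in (0.03, 0.6, 1.0, -2.7):
    assert abs(log_quantize(v) - v) <= delta * abs(v) + 1e-12
```

For ρ = 0.5 and u0 = 1, the levels are the geometric sequence 1, 0.5, 0.25, …; e.g., an input of 0.6 maps to 0.5, well within the bound δ·|v| = 0.2.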
Index Terms—Discrete-time neural networks, distributed time delays, H∞ control, packet dropouts, quantized control.
I. INTRODUCTION
In the last few decades, recurrent neural networks (RNNs) have gained considerable research attention due to their successful applications in a wide range of areas, such as pattern recognition, associative memory, combinatorial optimization, and synchronization [1]–[4].
Manuscript received August 15, 2014; revised December 11, 2014 and
February 15, 2015; accepted February 21, 2015. Date of publication March
24, 2015; date of current version January 18, 2016. This work was supported in part by the Australian Research Council Discovery Project under Grant DP1096780 and in part by the National Natural Science Foundation of China under Grant 61374010.
Q.-L. Han and F. Yang are with the Griffith School of Engineering, Griffith
University, Brisbane, QLD 4111, Australia, and also with the Centre for
Intelligent and Networked Systems, Central Queensland University,
Rockhampton, QLD 4702, Australia (e-mail: q.han@griffith.edu.au;
fuwen.yang@griffith.edu.au).
Y. Liu is with the Centre for Intelligent and Networked Systems, Central
Queensland University, Rockhampton, QLD 4702, Australia, and also with the
Department of Mathematics, Yangzhou University, Yangzhou 225009, China
(e-mail: liuyurong@gmail.com).
Color versions of one or more of the figures in this paper are available
online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TNNLS.2015.2411290
Usually, in biological and artificial neural networks, equilibria
of a neural network are referred to as memories associated
with external stimuli. In neural networks of associative
memories, locally stable equilibria store information and form
distributed memory structures. When solving some classes of optimization problems in real time, neural networks have to be designed such that there is only one equilibrium point and that equilibrium point is globally stable, so as to avoid spurious suboptimal responses [5]–[9]. Therefore, stability
analysis for RNNs with or without time delays has been an
attractive subject of research [10]–[13]. Various sufficient
conditions have been obtained to ensure the global asymptotic
or exponential stability for the RNNs (see [14]–[18] and the
references therein).
Closely related to the stability issue is the
stabilization or control problem of neural networks.
Control of neural networks has several applications in the real world. For example, in biological neural networks, the medical treatment of neuropathy patients can be regarded as a typical application of neural network control. In addition, in some applications of artificial neural networks, control is essential to achieve a desired goal, such as better system performance or a faster convergence rate in real-time computation. To date, there have been only a few reports on the stabilization and control of neural networks. For example, the stability and stabilization of discrete-time neural networks were investigated in [19], where the stabilization problem was formulated as a constrained optimization task and solved by a method based on gradient projection and minimum distance projection. In [20], some results on the global exponential stabilization of neural networks with various activation functions and time-varying continuously distributed delays were obtained, where delay-dependent conditions for global exponential stabilization were formulated in terms of linear matrix inequalities. In [21],
a globally exponential stabilization problem was investigated
for a general class of stochastic Cohen–Grossberg neural
networks with both Markovian jumping parameters and
mixed mode-dependent time delays, and a memoryless
state feedback controller was designed to guarantee that the
2162-237X © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.