348 IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 28, NO. 2, FEBRUARY 2017
where $\psi_{ij}^{(\sigma_k)} = \psi_{ij}(\sigma_k) + e_{ij}^{(\sigma_k)}$, and the symbols $\psi_{ij}(\sigma_k)$ and $e_{ij}^{(\sigma_k)}$ denote the known constant estimate and the corresponding estimation error of $\psi_{ij}^{(\sigma_k)}$ for each $\sigma_k$, respectively. Here, we assume that $|e_{ij}^{(\sigma_k)}| \le \vartheta_{ij}^{(\sigma_k)}$. In system $(\Sigma)$, the
neuron activation functions are represented by $f(x(k)) = (f_1(x_1(k)), f_2(x_2(k)), \ldots, f_n(x_n(k)))^{T}$, and satisfy the following assumption.
Assumption 1 [8]: Consider the neuron activation function $f(\cdot)$ satisfying $f(0) = 0$; there exist constants $\phi_l^{-}$ and $\phi_l^{+}$ for each $l = 1, 2, \ldots, n$, such that
$$\phi_l^{-} \le \frac{f_l(s_1) - f_l(s_2)}{s_1 - s_2} \le \phi_l^{+} \quad \forall s_1 \ne s_2 \in \mathbb{R}. \qquad (4)$$
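The sector condition (4) is satisfied by common activations; for example, $\tanh$ lies in the sector $[\phi_l^-, \phi_l^+] = [0, 1]$. The following is an illustrative numerical sketch (not from the paper) that estimates the sector bounds of a scalar activation from its difference quotients:

```python
import numpy as np

def sector_bounds(f, samples):
    """Empirically estimate the sector bounds of a scalar activation f
    by evaluating the difference quotients (f(s1)-f(s2))/(s1-s2)
    over all pairs of distinct sample points, as in Assumption 1."""
    s1, s2 = np.meshgrid(samples, samples)
    mask = s1 != s2
    quotients = (f(s1[mask]) - f(s2[mask])) / (s1[mask] - s2[mask])
    return quotients.min(), quotients.max()

lo, hi = sector_bounds(np.tanh, np.linspace(-5.0, 5.0, 201))
print(lo, hi)  # all quotients of tanh fall inside the sector [0, 1]
```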
In system $(\Sigma)$, for each $\lambda_k \in S$, $C(\lambda_k) = \operatorname{diag}\{c_1(\lambda_k), c_2(\lambda_k), \ldots, c_n(\lambda_k)\}$ with $|c_l(\lambda_k)| < 1$ describes the rate at which each neuron resets its potential to the resting state in isolation when disconnected from the networks and external inputs, $A(\lambda_k)$ is the connection weight matrix, and $E(\lambda_k)$ and $D(\lambda_k)$ are known real constant matrices of appropriate dimensions for each $\lambda_k \in S$.
In this paper, we suppose that the measurement output $y(k)$ of system $(\Sigma)$ is represented as follows:
$$y(k) = B(\lambda_k)x(k) + F(\lambda_k)\omega(k) \qquad (5)$$
where $B(\lambda_k)$ and $F(\lambda_k)$ are known constant matrices for each $\lambda_k$. Taking signal quantization into account, we assume that the signals are quantized by a logarithmic quantizer before entering the desired estimator, where $Q[\upsilon] = (Q_1[\upsilon_1], Q_2[\upsilon_2], \ldots, Q_n[\upsilon_n])^{T}$. As discussed in [39], the set of logarithmic quantization levels for each $Q_h[\cdot]$ $(1 \le h \le n)$ is described by
$$\mathcal{U}_h = \left\{ \pm u_l^{(h)},\; u_l^{(h)} = \rho_h^{l} u_0^{(h)},\; l = 0, \pm 1, \pm 2, \ldots \right\} \cup \{0\} \qquad (6)$$
with the quantization density $\rho_h \in [0, 1]$ and $u_0^{(h)} > 0$. Each quantization level corresponds to a segment of the quantizer input, such that the quantizer maps the whole segment to this quantization level. For the logarithmic quantizer, $Q_h[\upsilon_h]$ is defined as follows:
$$Q_h[\upsilon_h] = \begin{cases} u_l^{(h)}, & \dfrac{1}{1+\delta_h}\, u_l^{(h)} < \upsilon_h \le \dfrac{1}{1-\delta_h}\, u_l^{(h)} \\ 0, & \upsilon_h = 0 \\ -Q_h[-\upsilon_h], & \upsilon_h < 0 \end{cases} \qquad (7)$$
with $\delta_h \triangleq (1 - \rho_h)/(1 + \rho_h)$. Inspired by [34], it is not difficult to deduce that
$$Q_h[\upsilon_h] = (1 + \Delta_h)\upsilon_h \qquad (8)$$
such that $|\Delta_h| \le \delta_h$ for each $h \in [1, n]$.
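A logarithmic quantizer of this form can be sketched in a few lines. The parameters `rho` and `u0` below are free illustrative choices, and the level index is recovered by rounding in the log domain; this is a sketch of the construction in (6)-(7), not code from the paper:

```python
import math

def log_quantize(v, rho=0.5, u0=1.0):
    """Logarithmic quantizer Q_h of (6)-(7): returns the level
    u_l = rho**l * u0 whose cell (u_l/(1+delta), u_l/(1-delta)]
    contains |v|. Satisfies Q[v] = (1 + Delta_h) v with |Delta_h| <= delta."""
    if v == 0:
        return 0.0
    delta = (1.0 - rho) / (1.0 + rho)  # sector bound of the quantizer
    mag = abs(v)
    # The unique integer l with rho**l*u0/(1+delta) < |v| <= rho**l*u0/(1-delta);
    # the cells tile (0, inf) because (1+delta)/(1-delta) = 1/rho.
    l = math.floor(math.log(mag * (1.0 - delta) / u0) / math.log(rho))
    return math.copysign((rho ** l) * u0, v)

# The relative quantization error never exceeds delta = (1-rho)/(1+rho):
rho = 0.5
delta = (1 - rho) / (1 + rho)
errs = [abs(log_quantize(v, rho) - v) / abs(v) for v in (0.013, 0.7, 3.14, -42.0)]
print(max(errs) <= delta + 1e-12)  # True
```

The final check is exactly the bound $|\Delta_h| \le \delta_h$ of (8) expressed as a relative error.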
For notational simplicity, we use $y_{pq}(k)$ and $Q[y_{pq}(k)]$ to denote the actual input available to the quantizer and the corresponding actual input of the desired estimator, respectively. Under the assumption that the communication network link between the physical plant and the quantizer is perfect, it is easy to find that $y(k) = y_{pq}(k)$. However, as one of the common network-induced imperfections, the phenomenon of data packet dropouts inevitably occurs in practice. That is, the measurements $y(k)$ drop intermittently, which means that $y(k) \ne y_{pq}(k)$. For this reason, we model the data packet dropout phenomenon by using a stochastic Bernoulli approach, which has been verified to be effective [40]. Thus, the connection between $y(k)$ and $y_{pq}(k)$ is given as follows:
$$y_{pq}(k) = \beta_k y(k) \qquad (9)$$
where the stochastic variable $\beta_k$ is a Bernoulli-distributed white noise sequence specified by the following distribution law [40]:
$$\Pr\{\beta_k = 1\} = \mathbb{E}\{\beta_k\} = \bar{\beta}, \qquad \Pr\{\beta_k = 0\} = 1 - \bar{\beta}$$
where $\bar{\beta} \in [0, 1]$ is a known constant. Clearly, the communication link is in complete failure when $\bar{\beta} = 0$, and the communication link is in the normal case when $\bar{\beta} = 1$. For the stochastic variable $\beta_k$, it is also easy to see that
$$\mathbb{E}\{\beta_k - \bar{\beta}\} = 0, \qquad \mathbb{E}\{(\beta_k - \bar{\beta})^2\} = \bar{\beta}(1 - \bar{\beta}). \qquad (10)$$
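The dropout channel (9) and the moments in (10) are straightforward to simulate. The value of $\bar{\beta}$ (`beta_bar`), the sample size, and the test signal below are illustrative choices, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
beta_bar = 0.8                                # E{beta_k}: delivery probability
beta = rng.random(100_000) < beta_bar         # Bernoulli white sequence beta_k

# Channel model (9): each measurement is either delivered intact
# (beta_k = 1) or dropped entirely (beta_k = 0).
y = np.sin(0.1 * np.arange(beta.size))        # arbitrary test measurement signal
y_pq = beta * y

# Empirical check of the moments in (10).
centered = beta - beta_bar
print(abs(centered.mean()) < 1e-2)                                  # True
print(abs((centered ** 2).mean() - beta_bar * (1 - beta_bar)) < 1e-2)  # True
```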
In this paper, denoting $\Delta \triangleq \operatorname{diag}\{\Delta_1, \Delta_2, \ldots, \Delta_n\}$, we are interested in designing the following state estimator $(\hat{\Sigma})$:
$$\begin{aligned} \hat{x}(k+1) &= C(\lambda_k)\hat{x}(k) + A(\lambda_k) f(\hat{x}(k)) + K(\lambda_k)\big[Q[y_{pq}(k)] - B(\lambda_k)\hat{x}(k) - F(\lambda_k)\omega(k)\big] \\ &= C(\lambda_k)\hat{x}(k) + A(\lambda_k) f(\hat{x}(k)) + K(\lambda_k)\big[(I + \Delta)\beta_k y(k) - B(\lambda_k)\hat{x}(k) - F(\lambda_k)\omega(k)\big] \qquad (11) \end{aligned}$$
$$\hat{z}(k) = D(\lambda_k)\hat{x}(k) \qquad (12)$$
where $\hat{x}(k)$ is the estimate of the state $x(k)$, $\hat{z}(k)$ is the output of the estimator, and, for each $\lambda_k \in S$, $K(\lambda_k)$ are the estimator gain parameters to be determined.
Let $e(k) \triangleq x(k) - \hat{x}(k)$ and $\bar{z}(k) \triangleq z(k) - \hat{z}(k)$ be the state estimation error and the output estimation error, respectively, and let $\bar{f}(e(k)) \triangleq f(x(k)) - f(\hat{x}(k))$. Then, augmenting the network $(\Sigma)$ to include the states of the estimator $(\hat{\Sigma})$, the resulting estimation error is subject to the dynamics governed by
$$\begin{aligned} e(k+1) ={}& C(\lambda_k)e(k) + A(\lambda_k)\bar{f}(e(k)) + E(\lambda_k)\omega(k) \\ &- K(\lambda_k)\big[B(\lambda_k)e(k) + (\bar{\beta} - 1)\big(B(\lambda_k)x(k) + F(\lambda_k)\omega(k)\big)\big] \\ &- K(\lambda_k)\bar{\beta}\Delta\big(B(\lambda_k)x(k) + F(\lambda_k)\omega(k)\big) \\ &- (\beta_k - \bar{\beta})K(\lambda_k)(I + \Delta)\big(B(\lambda_k)x(k) + F(\lambda_k)\omega(k)\big) \qquad (13) \end{aligned}$$
$$\bar{z}(k) = D(\lambda_k)e(k). \qquad (14)$$
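One way to sanity-check the expansion in (13) is to form $e(k+1) = x(k+1) - \hat{x}(k+1)$ directly from the plant update and the estimator (11), and compare it with the expanded right-hand side of (13). In the sketch below all matrices, the activation $f = \tanh$, the dropout statistics, and the plant update $x(k+1) = C x(k) + A f(x(k)) + E\omega(k)$ are illustrative assumptions consistent with (13), not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, f = 3, np.tanh                      # toy dimension and activation (illustrative)
C, A, E, B, F, K, Delta = (rng.standard_normal((n, n)) * 0.3 for _ in range(7))
x, xhat, w = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)
beta_bar, beta_k = 0.8, 1.0            # mean dropout rate and one realization
I = np.eye(n)

y = B @ x + F @ w                                   # measurement (5)
# One-step updates of the plant and of the estimator (11).
x_next = C @ x + A @ f(x) + E @ w
xhat_next = (C @ xhat + A @ f(xhat)
             + K @ ((I + Delta) @ (beta_k * y) - B @ xhat - F @ w))
e_direct = x_next - xhat_next

# Right-hand side of the expanded error dynamics (13).
e, fbar = x - xhat, f(x) - f(xhat)
rhs = (C @ e + A @ fbar + E @ w
       - K @ (B @ e + (beta_bar - 1) * (B @ x + F @ w))
       - K @ (beta_bar * Delta @ (B @ x + F @ w))
       - (beta_k - beta_bar) * K @ ((I + Delta) @ (B @ x + F @ w)))
print(np.allclose(e_direct, rhs))  # True: (13) matches the direct difference
```

The agreement rests on splitting $\beta_k = \bar{\beta} + (\beta_k - \bar{\beta})$ and $(I + \Delta) = I + \Delta$ inside the innovation term, which is exactly how (13) separates the deterministic and zero-mean stochastic parts.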
In the following, for convenience, we denote $C_i \triangleq C(\lambda_k)$, and the other symbols are denoted similarly. In order to facilitate the design of the desired estimator, two new vectors $\zeta(k) \triangleq [e^{T}(k)\;\; x^{T}(k)]^{T}$ and $\bar{f}(\zeta(k)) \triangleq [\bar{f}^{T}(e(k))\;\; f^{T}(x(k))]^{T}$ need to be introduced, and then, an augmented system $(\tilde{\Sigma})$