This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
ZHANG et al.: FINITE-TIME STABILIZABILITY AND INSTABILIZABILITY FOR COMPLEX-VALUED MEMRISTIVE NEURAL NETWORKS 3
where $C_j$ is the capacitor, and $W_{fjk}$ and $W_{gjk}$ are the memductances of memristors $F_{fjk}$ and $F_{gjk}$, respectively, which denote the memristor between $f_j(z_j(t))$ and $z_j(t)$ and the memristor between $g_j(z_j(t-\tau_j))$ and $z_j(t)$, respectively. According to the current-voltage characteristics of memristors, the memristive connection weights are chosen to be state-dependent switching throughout this paper as follows:
$$
a_{jk}(z_k(t)) = \begin{cases} \hat{a}_{jk}, & |z_k(t)| < \chi_k \\ \check{a}_{jk}, & |z_k(t)| > \chi_k \end{cases} \qquad
b_{jk}(z_k(t)) = \begin{cases} \hat{b}_{jk}, & |z_k(t)| < \chi_k \\ \check{b}_{jk}, & |z_k(t)| > \chi_k \end{cases}
$$
$$
a^R_{jk}(x_k(t)) = \begin{cases} \hat{a}^R_{jk}, & |x_k(t)| < \chi_k \\ \check{a}^R_{jk}, & |x_k(t)| > \chi_k \end{cases} \qquad
a^I_{jk}(y_k(t)) = \begin{cases} \hat{a}^I_{jk}, & |y_k(t)| < \chi_k \\ \check{a}^I_{jk}, & |y_k(t)| > \chi_k \end{cases}
$$
$$
b^R_{jk}(x_k(t)) = \begin{cases} \hat{b}^R_{jk}, & |x_k(t)| < \chi_k \\ \check{b}^R_{jk}, & |x_k(t)| > \chi_k \end{cases} \qquad
b^I_{jk}(y_k(t)) = \begin{cases} \hat{b}^I_{jk}, & |y_k(t)| < \chi_k \\ \check{b}^I_{jk}, & |y_k(t)| > \chi_k \end{cases}
\tag{4}
$$
for $j, k = 1, 2, \ldots, n$, where the switching jumps $\chi_k > 0$, $a^R_{jk}(x_k(t)) = \mathrm{Re}(a_{jk}(z_k(t)))$, $a^I_{jk}(y_k(t)) = \mathrm{Im}(a_{jk}(z_k(t)))$, $b^R_{jk}(x_k(t)) = \mathrm{Re}(b_{jk}(z_k(t)))$, $b^I_{jk}(y_k(t)) = \mathrm{Im}(b_{jk}(z_k(t)))$, and $\hat{a}_{jk}$, $\check{a}_{jk}$, $\hat{b}_{jk}$, $\check{b}_{jk}$, $\hat{a}^R_{jk}$, $\check{a}^R_{jk}$, $\hat{a}^I_{jk}$, $\check{a}^I_{jk}$, $\hat{b}^R_{jk}$, $\check{b}^R_{jk}$, $\hat{b}^I_{jk}$, and $\check{b}^I_{jk}$ are constants.
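The state-dependent switching rule in (4) can be sketched in a few lines of code. This is an illustrative sketch only: the weight values, switching jump, and states below are hypothetical placeholders, not values from the paper.

```python
# Sketch of the switching rule in (4): each memristive connection weight
# takes one of two constant values depending on whether the state
# magnitude is below or above the switching jump chi_k.
# All numeric values below are hypothetical.

def switching_weight(state_k, chi_k, w_below, w_above):
    """Return the memristive weight for the current state of neuron k."""
    # |z_k(t)| < chi_k selects one constant, |z_k(t)| > chi_k the other;
    # abs() handles both real states (x_k, y_k) and complex states z_k.
    return w_below if abs(state_k) < chi_k else w_above

# Example: a_{jk}(z_k(t)) with chi_k = 1.0 and hypothetical weight pair
print(switching_weight(0.4 + 0.2j, 1.0, 0.5, -0.3))  # |z_k| < chi_k
print(switching_weight(1.5 - 0.8j, 1.0, 0.5, -0.3))  # |z_k| > chi_k
```

The same helper covers all six weight families in (4), since each differs only in which state component is tested against $\chi_k$.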
Now, we abbreviate $x_k(t)$, $y_k(t)$, $x_k(t-\tau_k)$, $y_k(t-\tau_k)$ to $x_k$, $y_k$, $x^{\tau_k}_k$, $y^{\tau_k}_k$. The following assumption is given for the complex-valued activation functions $f_k(z_k(t)) = f^R_k(x_k, y_k) + i f^I_k(x_k, y_k)$ and $g_k(z_k(t-\tau_k)) = g^R_k\big(x^{\tau_k}_k, y^{\tau_k}_k\big) + i g^I_k\big(x^{\tau_k}_k, y^{\tau_k}_k\big)$.
Assumption 1: For $k = 1, 2, \ldots, n$, $f_k(0) = g_k(0) = 0$ and there exist scalars $\lambda^{RR}_k, \lambda^{RI}_k, \lambda^{IR}_k, \lambda^{II}_k \geq 0$ such that
$$
\big|f^R_k(\hat{x}_k, \hat{y}_k) - f^R_k(x_k, y_k)\big| \leq \lambda^{RR}_k |\hat{x}_k - x_k| + \lambda^{RI}_k |\hat{y}_k - y_k|
$$
$$
\big|f^I_k(\hat{x}_k, \hat{y}_k) - f^I_k(x_k, y_k)\big| \leq \lambda^{IR}_k |\hat{x}_k - x_k| + \lambda^{II}_k |\hat{y}_k - y_k|
$$
for all $x_k$, $\hat{x}_k$, $y_k$, $\hat{y}_k$. Similarly, $g_k(z_k(t))$ also satisfies the inequalities above, with corresponding coefficients $\mu^{RR}_k$, $\mu^{RI}_k$, $\mu^{IR}_k$, $\mu^{II}_k$.
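As a numerical illustration (not from the paper), one can spot-check the Assumption 1 inequalities for a sample separable activation $f(z) = \tanh(x) + i\tanh(y)$, which satisfies them with $\lambda^{RR}_k = \lambda^{II}_k = 1$ and $\lambda^{RI}_k = \lambda^{IR}_k = 0$ because $\tanh$ is 1-Lipschitz. The activation choice and sampling range are assumptions for the sketch.

```python
# Spot-check of the Assumption 1 inequalities on random sample points,
# using the hypothetical activation f(z) = tanh(x) + i*tanh(y).
import math
import random

def f_R(x, y):  # real part of the sample activation
    return math.tanh(x)

def f_I(x, y):  # imaginary part of the sample activation
    return math.tanh(y)

# Coefficients for this particular activation (tanh is 1-Lipschitz)
lam_RR, lam_RI, lam_IR, lam_II = 1.0, 0.0, 0.0, 1.0

random.seed(0)
for _ in range(1000):
    x, xh = random.uniform(-5, 5), random.uniform(-5, 5)
    y, yh = random.uniform(-5, 5), random.uniform(-5, 5)
    # |f^R(xh,yh) - f^R(x,y)| <= lam_RR|xh-x| + lam_RI|yh-y|
    assert abs(f_R(xh, yh) - f_R(x, y)) <= lam_RR*abs(xh - x) + lam_RI*abs(yh - y) + 1e-12
    # |f^I(xh,yh) - f^I(x,y)| <= lam_IR|xh-x| + lam_II|yh-y|
    assert abs(f_I(xh, yh) - f_I(x, y)) <= lam_IR*abs(xh - x) + lam_II*abs(yh - y) + 1e-12

print("Assumption 1 inequalities hold on all sampled points")
```

Note that this activation is only partially differentiable by accident of the example; Assumption 1 itself requires no differentiability, which is the point of Remark 1 below.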
Remark 1: In many results for complex-valued neural network models, the real and imaginary parts of the complex-valued activation functions are assumed to be partially differentiable with continuous and bounded partial derivatives [43], [47], [51], [53], [56], [57]. In fact, apart from the inequalities stated in Assumption 1, the existence, continuity, and boundedness of the partial derivatives are unnecessary for deriving the main results. Hence, in this paper we remove these conditions and only assume that the activation functions satisfy the inequalities above. This generality can be seen from the simulation examples in Section IV.
Remark 2: References [54] and [55] concern activation functions of the form $f_k(z) = f^R_k(\mathrm{Re}(z)) + i f^I_k(\mathrm{Im}(z))$ and $g_k(z) = g^R_k(\mathrm{Re}(z)) + i g^I_k(\mathrm{Im}(z))$, where $f^R_k(\mathrm{Re}(z))$, $f^I_k(\mathrm{Im}(z))$, $g^R_k(\mathrm{Re}(z))$, and $g^I_k(\mathrm{Im}(z))$ are assumed to satisfy inequalities similar to those in real-valued neural networks. It is clear that this assumption is a special case of Assumption 1. Thus, we do not consider this special case separately in this paper.
Remark 3: It should be noted that, for explicitly separable real-imaginary activation functions, the inequalities in Assumption 1 are equivalent to the Lipschitz continuity condition in the complex domain. In this paper, to derive the results conveniently, we assume that the activation functions satisfy the former rather than the latter.
Let $z_j(t) = x_j(t) + i y_j(t)$, $u_j(t) = u^R_j(t) + i u^I_j(t)$, and $\varphi_j(s) = \varphi^R_j(s) + i \varphi^I_j(s)$. Then, by separating system (1) with (2) into the real and imaginary parts, one has
$$
\begin{aligned}
\dot{x}_j(t) ={}& -d_j x_j(t) + \sum_{k=1}^{n} a^R_{jk}(t) f^R_k(x_k, y_k) - \sum_{k=1}^{n} a^I_{jk}(t) f^I_k(x_k, y_k) \\
& + \sum_{k=1}^{n} b^R_{jk}(t) g^R_k\big(x^{\tau_k}_k, y^{\tau_k}_k\big) - \sum_{k=1}^{n} b^I_{jk}(t) g^I_k\big(x^{\tau_k}_k, y^{\tau_k}_k\big) + u^R_j(t) \\
\dot{y}_j(t) ={}& -d_j y_j(t) + \sum_{k=1}^{n} a^I_{jk}(t) f^R_k(x_k, y_k) + \sum_{k=1}^{n} a^R_{jk}(t) f^I_k(x_k, y_k) \\
& + \sum_{k=1}^{n} b^I_{jk}(t) g^R_k\big(x^{\tau_k}_k, y^{\tau_k}_k\big) + \sum_{k=1}^{n} b^R_{jk}(t) g^I_k\big(x^{\tau_k}_k, y^{\tau_k}_k\big) + u^I_j(t) \\
x_j(s) ={}& \varphi^R_j(s), \quad y_j(s) = \varphi^I_j(s), \quad s \in [-\tau, 0]
\end{aligned}
\tag{5}
$$
for $j = 1, 2, \ldots, n$.
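To make the structure of the separated system (5) concrete, the following is a minimal forward-Euler simulation sketch for a single neuron ($n = 1$) with constant weights, zero control input, and tanh activations. All parameter values, the initial functions, and the activations are hypothetical choices for illustration, not parameters from the paper.

```python
# Illustrative Euler integration of the real-imaginary system (5), n = 1,
# zero control input. Hypothetical parameters throughout.
import math

dt, tau, T = 0.01, 0.5, 5.0            # step size, delay tau_1, horizon
d = 1.0                                 # self-feedback coefficient d_1
aR, aI, bR, bI = 0.4, -0.2, 0.3, 0.1    # hypothetical constant weights
delay_steps = int(tau / dt)

# History on [-tau, 0]: constant initial functions phi^R = 0.5, phi^I = -0.3
xs = [0.5] * (delay_steps + 1)
ys = [-0.3] * (delay_steps + 1)

fR = lambda x, y: math.tanh(x)          # sample activations (separable)
fI = lambda x, y: math.tanh(y)
gR, gI = fR, fI

for step in range(int(T / dt)):
    x, y = xs[-1], ys[-1]
    xd, yd = xs[-1 - delay_steps], ys[-1 - delay_steps]   # x(t-tau), y(t-tau)
    # Real part: note the minus signs on the a^I and b^I terms, as in (5)
    dx = -d*x + aR*fR(x, y) - aI*fI(x, y) + bR*gR(xd, yd) - bI*gI(xd, yd)
    # Imaginary part: all four coupling terms enter with plus signs
    dy = -d*y + aI*fR(x, y) + aR*fI(x, y) + bI*gR(xd, yd) + bR*gI(xd, yd)
    xs.append(x + dt * dx)
    ys.append(y + dt * dy)

print(f"z_1(T) ~ {xs[-1]:.4f} + {ys[-1]:.4f}i")
```

The sign pattern in `dx` and `dy` mirrors the expansion of the complex products in (5): the imaginary parts of the weights enter the real equation with negative sign and the imaginary equation with positive sign.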
Considering the equivalence of stability for systems (1) and (5), we will focus on analyzing system (5). First, to check whether system (5) is stable in finite time, a novel controller $u_j(t) = u^R_j(t) + i u^I_j(t)$ will be designed as follows:
$$
\begin{aligned}
u^R_j(t) ={}& -\delta_{1j} x_j(t) - \eta_{1j} |x_j(t)|^{\sigma_1} \operatorname{sgn}(x_j(t)) - \theta_{1j} \sum_{k=1}^{n} |x_k(t-\tau_k)| \operatorname{sgn}(x_j(t)) \\
u^I_j(t) ={}& -\delta_{2j} y_j(t) - \eta_{2j} |y_j(t)|^{\sigma_2} \operatorname{sgn}(y_j(t)) - \theta_{2j} \sum_{k=1}^{n} |y_k(t-\tau_k)| \operatorname{sgn}(y_j(t))
\end{aligned}
\tag{6}
$$
where $\delta_{1j}$, $\delta_{2j}$, $\eta_{1j} > 0$, $\eta_{2j} > 0$, $\theta_{1j}$, $\theta_{2j}$, $\sigma_1$, and $\sigma_2$ are constants for $j = 1, 2, \ldots, n$.
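A direct transcription of the real part of controller (6) can be sketched as follows; the gain values in the example call are hypothetical, and the imaginary part $u^I_j(t)$ has the identical structure with $y$ in place of $x$.

```python
# Sketch of u^R_j(t) from controller (6) for one neuron j.
# Gains delta, eta, sigma, theta in the example are hypothetical.

def sgn(v):
    """Signum function: -1, 0, or 1."""
    return (v > 0) - (v < 0)

def u_R(x_j, delayed_x, delta1, eta1, sigma1, theta1):
    """Real part u^R_j(t): linear term, fractional-power term,
    and delayed-state term, each as in (6)."""
    return (-delta1 * x_j
            - eta1 * abs(x_j) ** sigma1 * sgn(x_j)
            - theta1 * sum(abs(xk) for xk in delayed_x) * sgn(x_j))

# Hypothetical gains: delta_{1j}=2, eta_{1j}=1, sigma_1=0.5, theta_{1j}=0.3;
# delayed_x collects x_k(t - tau_k) for k = 1, ..., n.
print(u_R(0.25, [0.1, -0.4], 2.0, 1.0, 0.5, 0.3))
```

Setting `theta1 = 0` and `0 < sigma1 < 1` recovers the simpler finite-time controllers discussed in Remark 4.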
Remark 4: In this paper, the more general nonlinear delayed controller (6) is designed for delayed complex-valued memristive neural networks. Compared with the results in the real-valued domain, when $\theta_{1j} = \theta_{2j} = 0$ and $0 < \sigma_1, \sigma_2 < 1$, it reduces to the controllers in [7] and [8]. When $\theta_{1j} \neq 0$ and $\theta_{2j} \neq 0$, in view of the influence of time delay on the dynamics of delayed systems, it permits the full involvement of time delay as stated in [36]. Although this kind of controller may limit practical application because of its complexity, it complements and generalizes most existing controllers. Hence, it has obvious advantages from the theoretical viewpoint.