1784 G. Zhang et al. / Neurocomputing 275 (2018) 1782–1792
It is clear that under H1 and H2, system (5) has one equilibrium point, denoted as $z^* = [z_1^*, \ldots, z_n^*]^T$. For convenience, we first shift the equilibrium point $z^*$ to the origin by letting $x = z - z^*$ and $f(x) = g(x + z^*) - g(z^*)$; then system (1) can be converted to
$$\dot{x}(t) - D\dot{x}(t - \tau_1(t)) = -Cx(t) + Af(x(t)) + Bf(x(t - \tau_2(t))), \tag{6}$$
where $f(x) = [f_1(x_1), \ldots, f_n(x_n)]^T$. It is easy to check that each function $f_j(\cdot)$ satisfies $f_j(0) = 0$ and
$$\rho_j^- \le \frac{f_j(\alpha)}{\alpha} \le \rho_j^+, \quad \forall \alpha \in \mathbb{R},\ \alpha \ne 0,\ j = 1, \ldots, n. \tag{7}$$
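As a concrete illustration (the activation choice and bounds here are assumptions for this sketch, not taken from the paper), the common activation $f_j(\alpha) = \tanh(\alpha)$ satisfies the sector condition (7) with $\rho_j^- = 0$ and $\rho_j^+ = 1$, which can be checked numerically:

```python
import numpy as np

# Hedged illustration: f(alpha) = tanh(alpha) is assumed here as a sample
# activation; it satisfies the sector condition (7) with rho_minus = 0 and
# rho_plus = 1, since 0 < tanh(alpha)/alpha <= 1 for all alpha != 0.
alphas = np.linspace(-10.0, 10.0, 2001)
alphas = alphas[alphas != 0.0]      # condition (7) excludes alpha = 0
ratios = np.tanh(alphas) / alphas
print("min ratio:", ratios.min(), "max ratio:", ratios.max())
```

The same check works for any candidate activation: scan the ratio $f_j(\alpha)/\alpha$ over a grid and read off candidate sector bounds $\rho_j^\pm$.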
The problem addressed in this paper can then be formulated as developing a sufficient condition ensuring that the origin of the neutral DNNs (1) is robustly stable.
In what follows, some essential lemmas are introduced.
Lemma 1 [49]. For $d(t) \in [0, d]$, a symmetric matrix $R > 0$, and a constant matrix $S_1$ satisfying $\begin{bmatrix} R_1 & S_1 \\ * & R_1 \end{bmatrix} \ge 0$ with $R_1 = \operatorname{diag}\{R, 3R, 5R\}$, the following inequality holds:
$$-\int_{t-d(t)}^{t} \dot{x}^T(s) R \dot{x}(s)\,ds - \int_{t-d}^{t-d(t)} \dot{x}^T(s) R \dot{x}(s)\,ds \le -\frac{1}{d}\,\zeta^T(t) \begin{bmatrix} E_1 \\ E_2 \end{bmatrix}^T \left( \begin{bmatrix} R_1 & S_1 \\ * & R_1 \end{bmatrix} + \begin{bmatrix} \frac{d-d(t)}{d}\mathcal{T} & 0 \\ * & \frac{d(t)}{d}\mathcal{T} \end{bmatrix} \right) \begin{bmatrix} E_1 \\ E_2 \end{bmatrix} \zeta(t),$$
where
$$\zeta^T(t) = \begin{bmatrix} x^T(t) & x^T(t - d(t)) & x^T(t - d) & \phi^T(t) & \psi^T(t) & \nu^T(t) & \omega^T(t) \end{bmatrix};$$
$$E_1 = \begin{bmatrix} e_1 - e_2 \\ e_1 + e_2 - 2e_4 \\ e_1 - e_2 + 6e_4 - 12e_6 \end{bmatrix}, \quad E_2 = \begin{bmatrix} e_2 - e_3 \\ e_2 + e_3 - 2e_5 \\ e_2 - e_3 + 6e_5 - 12e_7 \end{bmatrix}; \quad e_i = \begin{bmatrix} 0_{i-1} & I_n & 0_{7-i} \end{bmatrix} \ (1 \le i \le 7);$$
$$\mathcal{T} = R_1 - S_1^T R_1^{-1} S_1; \quad \phi(t) = \frac{1}{d(t)} \int_{t-d(t)}^{t} x(s)\,ds, \quad \nu(t) = \frac{2}{d^2(t)} \int_{t-d(t)}^{t} \int_{t-d(t)}^{s} x(u)\,du\,ds;$$
$$\psi(t) = \frac{1}{d - d(t)} \int_{t-d}^{t-d(t)} x(s)\,ds, \quad \omega(t) = \frac{2}{[d - d(t)]^2} \int_{t-d}^{t-d(t)} \int_{t-d}^{s} x(u)\,du\,ds.$$
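The bound in Lemma 1 rests on an extended reciprocally convex matrix inequality. A randomized sanity check of that inequality is sketched below; all matrices are invented test data, and the check uses the two-block form from the literature, with $\mathcal{T}_1 = R_1 - S_1 R_1^{-1} S_1^T$ and $\mathcal{T}_2 = R_1 - S_1^T R_1^{-1} S_1$ on the diagonal blocks:

```python
import numpy as np

# Hedged randomized check (test matrices invented here, not from the paper)
# of the extended reciprocally convex bound: for alpha in (0, 1),
#   diag(R1/alpha, R1/(1-alpha)) >= [[R1, S1], [S1^T, R1]]
#                                   + [[(1-alpha)*T1, 0], [0, alpha*T2]],
# with T1 = R1 - S1 R1^{-1} S1^T and T2 = R1 - S1^T R1^{-1} S1,
# whenever [[R1, S1], [S1^T, R1]] >= 0.
rng = np.random.default_rng(0)
n = 3
worst = np.inf
for _ in range(200):
    A = rng.standard_normal((n, n))
    R1 = A @ A.T + n * np.eye(n)              # R1 > 0
    S1 = 0.3 * rng.standard_normal((n, n))    # small S1 keeps side condition
    side = np.block([[R1, S1], [S1.T, R1]])
    if np.linalg.eigvalsh(side).min() < 0.0:  # skip samples violating it
        continue
    T1 = R1 - S1 @ np.linalg.solve(R1, S1.T)
    T2 = R1 - S1.T @ np.linalg.solve(R1, S1)
    alpha = rng.uniform(0.05, 0.95)
    Z = np.zeros((n, n))
    lhs = np.block([[R1 / alpha, Z], [Z, R1 / (1.0 - alpha)]])
    rhs = side + np.block([[(1.0 - alpha) * T1, Z], [Z, alpha * T2]])
    worst = min(worst, np.linalg.eigvalsh(lhs - rhs).min())
print("smallest eigenvalue of lhs - rhs over samples:", worst)
```

A nonnegative smallest eigenvalue (up to numerical tolerance) on every sample is consistent with the claimed matrix inequality.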
Lemma 2 [42–44,48]. For any constant matrix $M > 0$, the following inequalities hold for every continuously differentiable function $\varphi: [a, b] \to \mathbb{R}^n$:
$$-(b-a)\int_{-b}^{-a} \varphi^T(s) M \varphi(s)\,ds \le -\left(\int_{-b}^{-a} \varphi(s)\,ds\right)^T M \left(\int_{-b}^{-a} \varphi(s)\,ds\right) - 3\Theta^T M \Theta,$$
$$-\frac{b^2 - a^2}{2} \int_{-b}^{-a} \int_{t+\theta}^{t} \varphi^T(s) M \varphi(s)\,ds\,d\theta \le -\left(\int_{-b}^{-a} \int_{t+\theta}^{t} \varphi(s)\,ds\,d\theta\right)^T M \left(\int_{-b}^{-a} \int_{t+\theta}^{t} \varphi(s)\,ds\,d\theta\right),$$
$$-\frac{b^3 - a^3}{6} \int_{-b}^{-a} \int_{\lambda}^{0} \int_{t+\theta}^{t} \varphi^T(s) M \varphi(s)\,ds\,d\theta\,d\lambda \le -\left(\int_{-b}^{-a} \int_{\lambda}^{0} \int_{t+\theta}^{t} \varphi(s)\,ds\,d\theta\,d\lambda\right)^T M \left(\int_{-b}^{-a} \int_{\lambda}^{0} \int_{t+\theta}^{t} \varphi(s)\,ds\,d\theta\,d\lambda\right),$$
where
$$\Theta = \int_{-b}^{-a} \varphi(s)\,ds - \frac{2}{b-a} \int_{-b}^{-a} \int_{-b}^{s} \varphi(u)\,du\,ds.$$
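The first inequality of Lemma 2 can be sanity-checked numerically in the scalar case ($n = 1$, $M = 1$); the interval and test function below are arbitrary choices for illustration:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule, avoiding version-specific NumPy helpers."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

# Scalar instance of the first inequality of Lemma 2 (M = 1):
#   (b - a) * int phi^2 >= (int phi)^2 + 3 * Theta^2  over [-b, -a].
# Interval and phi are invented test choices, not values from the paper.
a, b = 0.5, 2.0
s = np.linspace(-b, -a, 20001)
phi = np.sin(3.0 * s) + 0.5 * s**2

I1 = trap(phi, s)                  # int_{-b}^{-a} phi(s) ds
I2 = trap(phi**2, s)               # int_{-b}^{-a} phi(s)^2 ds
# running integral int_{-b}^{s} phi(u) du, for the double integral in Theta
running = np.concatenate(([0.0],
    np.cumsum((phi[1:] + phi[:-1]) / 2.0 * np.diff(s))))
J = trap(running, s)               # int int phi(u) du ds
Theta = I1 - 2.0 / (b - a) * J
lhs = (b - a) * I2
rhs = I1**2 + 3.0 * Theta**2
print(lhs, ">=", rhs)
```

Equality holds exactly when $\varphi$ is affine, so any test function with higher-order content should leave a visible margin.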
Lemma 3 [46]. For any constant matrix $M > 0$, the following inequality holds for every continuously differentiable function $\varphi: [a, b] \to \mathbb{R}^n$:
$$-\frac{(b-a)^2}{2} \int_{a}^{b} \int_{a}^{s} \varphi^T(u) M \varphi(u)\,du\,ds \le -\left(\int_{a}^{b} \int_{a}^{s} \varphi(u)\,du\,ds\right)^T M \left(\int_{a}^{b} \int_{a}^{s} \varphi(u)\,du\,ds\right) - 2\Lambda^T M \Lambda,$$
where
$$\Lambda = \int_{a}^{b} \int_{a}^{s} \varphi(u)\,du\,ds - \frac{3}{b-a} \int_{a}^{b} \int_{a}^{s} \int_{a}^{u} \varphi(v)\,dv\,du\,ds.$$
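Lemma 3 admits the same kind of scalar sanity check. Swapping the order of integration turns the nested integrals into weighted single integrals, which keeps the quadrature simple; the interval and test function are invented for this sketch:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule, avoiding version-specific NumPy helpers."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

# Scalar instance of Lemma 3 (M = 1), with invented test data:
#   ((b-a)^2 / 2) * int_a^b int_a^s phi(u)^2 du ds
#     >= (int_a^b int_a^s phi du ds)^2 + 2 * Lambda^2.
# Order swap: int_a^b int_a^s g(u) du ds = int_a^b (b - u) g(u) du, and
# the triple integral becomes int_a^b ((b - v)^2 / 2) phi(v) dv.
a, b = 0.0, 1.5
u = np.linspace(a, b, 20001)
phi = np.cos(4.0 * u) + u

w = b - u                                 # weight from the outer integral
D2 = trap(w * phi**2, u)                  # int int phi^2 du ds
D1 = trap(w * phi, u)                     # int int phi du ds
D3 = trap((b - u)**2 / 2.0 * phi, u)      # int int int phi dv du ds
Lam = D1 - 3.0 / (b - a) * D3
lhs = (b - a)**2 / 2.0 * D2
rhs = D1**2 + 2.0 * Lam**2
print(lhs, ">=", rhs)
```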
Lemma 4 [47]. For a vector function $\omega$, real scalars $a \le b$, and a symmetric matrix $R > 0$ such that the integrals below are well defined, the following inequality holds:
$$(b-a) \int_{a}^{b} \dot{\omega}^T(s) R \dot{\omega}(s)\,ds \ge \chi_1^T R \chi_1 + 3\chi_2^T R \chi_2 + 5\chi_3^T R \chi_3,$$
where $\chi_1 = \omega(b) - \omega(a)$, and
$$\chi_2 = \omega(b) + \omega(a) - \frac{2}{b-a} \int_{a}^{b} \omega(s)\,ds, \quad \chi_3 = \omega(b) - \omega(a) + \frac{6}{b-a} \int_{a}^{b} \omega(s)\,ds - \frac{12}{(b-a)^2} \int_{a}^{b} \int_{s}^{b} \omega(\theta)\,d\theta\,ds.$$
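A scalar check of Lemma 4 ($R = 1$) follows the same pattern; the function $\omega$ below is an invented test choice. Note that $\int_a^b \int_s^b \omega(\theta)\,d\theta\,ds = \int_a^b (\theta - a)\,\omega(\theta)\,d\theta$ after swapping the order of integration:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule, avoiding version-specific NumPy helpers."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

# Scalar instance of Lemma 4 (R = 1), with an invented omega:
#   (b - a) * int dot(omega)^2 >= chi1^2 + 3*chi2^2 + 5*chi3^2.
a, b = 0.0, 2.0
s = np.linspace(a, b, 20001)
omega = np.sin(2.0 * s) + 0.2 * s**3
domega = 2.0 * np.cos(2.0 * s) + 0.6 * s**2   # exact derivative of omega

L = b - a
I0 = trap(omega, s)                 # int_a^b omega(s) ds
Idbl = trap((s - a) * omega, s)     # int_a^b int_s^b omega(theta) dtheta ds
chi1 = omega[-1] - omega[0]
chi2 = omega[-1] + omega[0] - 2.0 / L * I0
chi3 = omega[-1] - omega[0] + 6.0 / L * I0 - 12.0 / L**2 * Idbl
lhs = L * trap(domega**2, s)
rhs = chi1**2 + 3.0 * chi2**2 + 5.0 * chi3**2
print(lhs, ">=", rhs)
```

Equality holds when $\omega$ is a polynomial of degree at most three (e.g. $\omega(s) = s^2$ on $[0, 1]$ gives both sides equal to $4/3$), so the sinusoidal component above is what produces the strict margin.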
As an extended case of Lemma 2 in [50], the following lemma can be derived easily.
Lemma 5 [50]. Suppose that $\Xi$, $\Xi_{ij}$, $\Xi_{mn}$ $(i, m = 1, 2, 3, 4;\ j, n = 1, 2)$ are constant matrices of appropriate dimensions, $\alpha \in [0, 1]$, $\beta \in [0, 1]$, $\gamma \in [0, 1]$, and $\delta \in [0, 1]$; then
$$\Xi + \alpha\,\Xi_{11} + (1-\alpha)\,\Xi_{12} + \beta\,\Xi_{21} + (1-\beta)\,\Xi_{22} + \gamma\,\Xi_{31} + (1-\gamma)\,\Xi_{32} + \delta\,\Xi_{41} + (1-\delta)\,\Xi_{42} < 0$$