3 Preliminaries

This section collects the basic definitions and properties used in the subsequent sections. It mainly involves the restricted isometry property (RIP) [2, 46], coherence [40], sparse approximation functions [47], nonsmooth analysis [48] and smoothing functions [23, 26, 37].
For two given vectors $x, y \in \mathbb{R}^n$, let $\langle x, y \rangle$ denote the inner product of $x$ and $y$, and $\|x\|_p$ the $L_p$-norm of $x$ with $0 < p < \infty$. A matrix $A \in \mathbb{R}^{m \times n}$ is said to satisfy the $S$-restricted isometry property ($S$-RIP) [2, 46] if
$$
(1 - \delta_S)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1 + \delta_S)\,\|x\|_2^2
$$
for all $S$-sparse signals $x \in \mathbb{R}^n$, where $\delta_S \in (0, 1)$ is the smallest restricted isometry constant (RIC) for which the above inequality holds. Candès and Tao [46] showed that sensing matrices with small RIC, e.g., random Gaussian and Bernoulli matrices, are suitable for sparse signal recovery. However, it is NP-hard to check whether a fixed matrix satisfies the RIP [49]. Fortunately, the notion of coherence, or mutual coherence [40], is closely related to the RIP and easy to examine: the coherence of $A$, denoted $s(A)$, measures the similarity between the matrix columns $A_i$ and $A_j$, namely
$$
s(A) = \max_{i \ne j} \frac{|A_i^{T} A_j|}{\|A_i\|_2 \, \|A_j\|_2}, \qquad (2)
$$
with $A = [A_1, \ldots, A_n] \in \mathbb{R}^{m \times n}$. Note that if $s(A) = 1$, there exist $i$ and $j$ such that $A_i$ and $A_j$ are linearly dependent. In particular, an RIP matrix has a small coherence, while a highly coherent matrix yields a large RIC.
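As a quick numerical illustration of (2), the coherence of a matrix can be computed from its column-normalized Gram matrix. The sketch below (our own helper, not code from the cited works) also shows that appending a linearly dependent column drives $s(A)$ to $1$:

```python
import numpy as np

def coherence(A):
    """Mutual coherence s(A) = max_{i != j} |A_i^T A_j| / (||A_i||_2 ||A_j||_2)."""
    # Normalize every column to unit l2-norm.
    G = A / np.linalg.norm(A, axis=0)
    # Off-diagonal entries of the Gram matrix are the pairwise
    # cosine similarities of the columns (in absolute value).
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)        # ignore the i == j pairs
    return gram.max()

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))     # random Gaussian sensing matrix
print(coherence(A))                    # well below 1 for such matrices

# Two proportional columns => linearly dependent => s(B) = 1.
B = np.column_stack([A[:, 0], 2 * A[:, 0], A[:, 1]])
print(coherence(B))
```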
In order to achieve sparse signal recovery for a compressed sensing problem as described by the $L_0$-model, define a step function $s : \mathbb{R} \to \mathbb{R}$ given by
$$
s(\theta) =
\begin{cases}
1, & \text{if } \theta \ne 0, \\
0, & \text{if } \theta = 0.
\end{cases}
\qquad (3)
$$
Thereby, $\|x\|_0 = \sum_{i=1}^{n} s(x_i)$. Further, since $s(\cdot)$ is discontinuous, it is replaced by a given continuous approximation function $F_\alpha(\cdot)$ satisfying the following three conditions [47]:

(1) $F_\alpha(0) = 0$, and $F_\alpha(\cdot)$ is an even function that is not identically equal to zero;
(2) $F_\alpha(\cdot)$ is nondecreasing and concave on $[0, +\infty)$;
(3) $F_\alpha(\theta)/\theta$ is nonincreasing in $\theta$ on $(0, +\infty)$.
Here, $\alpha$ is a parameter which controls the closeness of the approximation, and accordingly $\|x\|_0$ can be replaced by a function $J(x)$ defined by
$$
J(x) = \sum_{i=1}^{n} F_\alpha(x_i). \qquad (4)
$$
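The three conditions are easy to verify for concrete choices of $F_\alpha$. The sketch below uses the fraction function $F_\alpha(\theta) = |\theta|/(|\theta| + \alpha)$, a common choice in this literature (whether it is the specific family used in [47] is an assumption), and checks numerically that $J(x)$ in (4) approaches $\|x\|_0$ as $\alpha \to 0^+$:

```python
import numpy as np

def F(theta, alpha):
    """Fraction function F_a(t) = |t| / (|t| + a): even, F_a(0) = 0,
    nondecreasing and concave on [0, inf), with F_a(t)/t nonincreasing."""
    t = np.abs(theta)
    return t / (t + alpha)

def J(x, alpha):
    """Separable approximation of ||x||_0, cf. (4)."""
    return F(x, alpha).sum()

x = np.array([0.0, 0.5, -2.0, 0.0, 1e-3])
print(np.count_nonzero(x))       # exact ||x||_0 = 3
for alpha in (1.0, 0.1, 1e-4):
    print(alpha, J(x, alpha))    # tends to 3 as alpha -> 0+
```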
We next introduce the Clarke generalized gradient [48] and the notion of a smoothing function. To this end, let $f : \mathbb{R}^n \to \mathbb{R}$ be a locally Lipschitz function and $D_f$ denote the set of points at which $f$ is differentiable. The Clarke generalized gradient of $f$ at $x$ is then defined by
$$
\partial f(x) = \mathrm{co}\Big\{ \lim_{u^k \to x,\ u^k \in D_f} \nabla f(u^k) \Big\}, \qquad (5)
$$
where 'co' denotes the convex hull. Note that if $f$ is locally Lipschitz of rank $L$ near $x$, then $\partial f(x)$ is a nonempty, convex and compact subset of $\mathbb{R}^n$ satisfying $\|\xi\|_2 \le L$ for all $\xi \in \partial f(x)$. For a given minimization problem with objective function $f$, we say that $x^*$ is a Clarke stationary point in a set $X$ if there exists $\xi^* \in \partial f(x^*)$ such that $\langle x - x^*, \xi^* \rangle \ge 0$ for all $x \in X$. If $X$ is nonempty, closed and convex in $\mathbb{R}^n$, the normal cone to $X$ at $x \in X$ is defined by
$$
N_X(x) = \mathrm{cl}\Big( \bigcup_{c \ge 0} c\, \partial d_X(x) \Big) = \big\{ v \in \mathbb{R}^n : \langle y - x, v \rangle \le 0,\ \forall y \in X \big\}, \qquad (6)
$$
where 'cl' denotes the closure and $d_X$ is the globally Lipschitz distance function $d_X(x) = \min\{ \|u - x\|_2 : u \in X \}$.
On the other hand, smoothing approximation is a common way to handle nonsmooth optimization problems, in which parameterized smooth functions are adopted to approximate the nonsmooth ones.
Definition 1 [23, 26, 37] Let $h : \mathbb{R}^n \to \mathbb{R}$ be a locally Lipschitz function. We say that $\tilde{h} : \mathbb{R}^n \times [0, +\infty) \to \mathbb{R}$ is a smoothing function of $h$ if $\tilde{h}$ satisfies the following four conditions:

(1) $\tilde{h}(\cdot, \mu)$ is continuously differentiable on $\mathbb{R}^n$ for any fixed $\mu \in (0, +\infty)$;
(2) for any fixed $x \in \mathbb{R}^n$, $\lim_{\mu \to 0^+} \tilde{h}(x, \mu) = h(x)$;
(3) there exists a positive constant $\kappa_{\tilde{h}}$ such that $|\nabla_\mu \tilde{h}(x, \mu)| \le \kappa_{\tilde{h}}$ for all $\mu \in [0, 1)$ and $x \in \mathbb{R}^n$;
(4) $\big\{ \lim_{z \to x,\ \mu \to 0^+} \nabla_z \tilde{h}(z, \mu) \big\} \subseteq \partial h(x)$.
From the definition of a smoothing function above, we observe that any locally Lipschitz function $h(x)$ has a smoothing approximation function $\tilde{h}(x, \mu)$ satisfying $\lim_{z \to x,\ \mu \to 0^+} \tilde{h}(z, \mu) = h(x)$ and
$$
|h(x) - \tilde{h}(x, \mu)| \le \kappa_{\tilde{h}}\, \mu, \qquad \forall \mu \in [0, 1),\ x \in \mathbb{R}^n. \qquad (7)
$$
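A classical example satisfying Definition 1 and the bound (7) is the smoothing $\tilde{h}(t, \mu) = \sqrt{t^2 + \mu^2}$ of $h(t) = |t|$, for which $|\nabla_\mu \tilde{h}| = \mu/\sqrt{t^2 + \mu^2} \le 1$, so one may take $\kappa_{\tilde{h}} = 1$. Whether this particular example appears in [23, 26, 37] is an assumption; it is used here only as a numerical check of (7):

```python
import numpy as np

h  = np.abs                              # h(t) = |t|
hs = lambda t, mu: np.sqrt(t**2 + mu**2) # smoothing h~(t, mu)

t = np.linspace(-2, 2, 401)
for mu in (0.5, 0.1, 1e-3):
    # The maximal gap is attained at t = 0, where it equals mu
    # (up to rounding), so |h - h~| <= kappa * mu with kappa = 1.
    gap = np.max(np.abs(h(t) - hs(t, mu)))
    print(mu, gap)
```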
2908 Neural Comput & Applic (2019) 31:2905–2920