4. The proposed IGP model is as simple as a greedy algorithm, while it overcomes the common shortcoming that greedy algorithms are prone to falling into suboptimal solutions.
The rest of this paper is organized as follows: Section 2 presents the modeling of IGP, the two-cycle optimization algorithm of IGP, the computational complexity analysis, and the connections between IGP and other existing works. Simulation results are given in Section 3 to illustrate the performance of the proposed IGP model. Section 4 concludes this paper.
The following notation is used in the rest of this paper. The $\ell_0$ norm of a signal $x$ is denoted by $\|x\|_0$. Let $\phi$ denote the measurement matrix and $\phi_i$ the $i$th column of $\phi$. $S = \{1, 2, \ldots, N\}$ is the index set of the entries of the original signal, where $N$ is the length of the original signal. Suppose $K$ entries are selected from $S$ to generate the support collection $I_K$; then the vector $x_{I_K}$ consists of the entries of $x$ indexed by $i \in I_K$, and the matrix $\phi_{I_K}$ is composed of the columns of $\phi$ with indices $i \in I_K$. The pseudo-inverse of the measurement matrix $\phi$ is given by $\phi^\dagger = (\phi^\top \phi)^{-1} \phi^\top$, where $\top$ denotes matrix transposition.
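For readers who wish to experiment, the following is a minimal NumPy sketch of this column selection and pseudo-inverse; the matrix sizes and index set are illustrative choices of ours, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, N = 20, 50                          # illustrative measurement count and signal length
phi = rng.standard_normal((m, N))      # measurement matrix phi (m x N)

I = [3, 7, 19]                         # an example support collection I
phi_I = phi[:, I]                      # columns of phi indexed by i in I

# phi_I^dagger = (phi_I^T phi_I)^{-1} phi_I^T; np.linalg.pinv computes the
# same quantity via the SVD, which is numerically more stable.
phi_I_dagger = np.linalg.pinv(phi_I)
```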
2. Intelligent greedy model
In this section, we first propose a novel optimization function for the sparse reconstruction problem and then design a two-cycle optimization algorithm to solve it. Next, we analyze the computational complexity of IGP. Finally, we compare our IGP model with some existing works.
2.1. Modeling of IGP
Suppose the original signal $x$ is sparse and the sparsity level $K$ is known a priori; then we can obtain the measurement signal by $y = \phi x$, where $\phi$ is the measurement matrix. As applied in many greedy algorithms such as OMP, SP, and CoSaMP, the $\ell_0$ minimization problem can be handled by a two-step strategy. In the first step, an estimated support collection $I$ which satisfies $\|x\|_0 = K$ is obtained. In the second step, the nonzero elements of the reconstructed signal $\hat{x}_I$ are calculated using the least-squares method, and the estimated solution $\hat{x}$ is obtained by Eq. (4):

$$\hat{x}_I = \phi_I^\dagger y \quad \text{and} \quad \hat{x}_{S \setminus I} = 0. \tag{4}$$
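A short sketch of this second step is given below; the function name is ours, and it assumes $\phi_I$ has full column rank.

```python
import numpy as np

def reconstruct_on_support(phi, y, I):
    """Eq. (4): least-squares fit on the support I, zeros on S \\ I."""
    x_hat = np.zeros(phi.shape[1])
    # lstsq solves min_z ||phi_I z - y||_2, i.e. z = phi_I^dagger y
    z, *_ = np.linalg.lstsq(phi[:, I], y, rcond=None)
    x_hat[I] = z
    return x_hat
```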
However, the sparsity level is hard to estimate in many reconstruction problems in practice. We therefore propose a novel optimization function in Theorem 1 for the case where the sparsity level is unknown in advance. Before giving it, we recall a uniqueness result in the Lemma proposed in [22], which is used to prove Theorem 1.

Lemma. Suppose that $\hat{x}^*$ with support collection $I^*$ is the unique solution of Eq. (2), and that the sparsity level satisfies $K < \mathrm{spark}(\phi)/2$, where $\mathrm{spark}(\phi)$ is the smallest number of columns of $\phi$ that are linearly dependent. Then we have $\hat{x} = \hat{x}^*$ if and only if $\phi_I \phi_I^\dagger y = y$, where $I$ satisfying $|I| = K$ is the estimated support collection and the estimated solution $\hat{x}$ is obtained using Eq. (4).
As described in the Lemma, if the sparsity level satisfies $K < \mathrm{spark}(\phi)/2$, we can obtain the unique solution by seeking a support collection $I$ which satisfies $|I| = K$ and $\phi_I \phi_I^\dagger y = y$. If the support collection is correctly estimated, it satisfies $f(I) = 0$. Since Eq. (5) satisfies $f(I) \ge 0$, the optimization function of [22] can be set as Eq. (5), and the estimated support collection can be obtained by optimizing Eq. (6) when the sparsity level is known a priori:

$$f(I) = \|\phi_I \phi_I^\dagger y - y\|_2^2, \tag{5}$$

$$\min_{I \in \Theta} f(I). \tag{6}$$
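In code, the objective of Eq. (5) is simply the squared residual of projecting $y$ onto the column span of $\phi_I$; the sketch below uses NumPy's pseudo-inverse and is ours, not code from [22].

```python
import numpy as np

def f(phi, y, I):
    """Eq. (5): f(I) = ||phi_I phi_I^dagger y - y||_2^2.
    Equals zero exactly when y lies in the column span of phi_I."""
    phi_I = phi[:, I]
    residual = phi_I @ np.linalg.pinv(phi_I) @ y - y
    return float(residual @ residual)
```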
Theorem 1. Suppose the sparsity level $K$, which satisfies $K \le \mathrm{spark}(\phi)/2$, is unknown in advance. The two-step strategy then turns to estimating the collection $I_{mm}$ by optimizing the optimization function described in Eq. (7), after which the estimated solution is obtained by Eq. (8):

$$\min_{I_{mm}} f(I_{mm}) = \|\phi_{I_{mm}} \phi_{I_{mm}}^\dagger y - y\|_2^2, \tag{7}$$

where the collection $I_{mm}$ ranges over all $mm$-cardinality subsets of the set $S$, with $mm$ bigger than the sparsity level $K$ and smaller than the measurement number $m$:

$$\hat{x}_{I_{mm}} = \phi_{I_{mm}}^\dagger y \quad \text{and} \quad \hat{x}_{S \setminus I_{mm}} = 0. \tag{8}$$
Proof. As proved in [17], let the columns of $\phi$ be in general position, and suppose $I_0$ denotes the support of the original signal $x_0$ and $I_s$ denotes the support of $x_s$. If $I_0$ is a subset of $I_s$, then we have perfect recovery:

$$x_s = x_0. \tag{9}$$ □
If the support collection $I$ satisfies $\|\phi_I \phi_I^\dagger y - y\|_2 = 0$ and the collection $I_{mm}$ includes $I$, then $I_{mm}$ must satisfy $\|\phi_{I_{mm}} \phi_{I_{mm}}^\dagger y - y\|_2 = 0$. So Eq. (7) can be set as the optimization function, and the reconstructed signal can be obtained by solving the problem $\min f(I_{mm})$ when the sparsity level is unknown in advance.
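To make Theorem 1 concrete, the brute-force sketch below scans all $mm$-cardinality subsets and returns the reconstruction of Eq. (8) for the first zero-residual subset it finds. It is exponential in cost and intended only as an illustration of the optimization problem, not as the two-cycle algorithm developed in Section 2.2.

```python
import itertools
import numpy as np

def igp_exhaustive(phi, y, mm, tol=1e-10):
    """Illustration of Eq. (7)/(8): search all mm-cardinality subsets of S
    for one whose projection residual vanishes (exponential; toy use only)."""
    N = phi.shape[1]
    for I_mm in map(list, itertools.combinations(range(N), mm)):
        phi_I = phi[:, I_mm]
        pinv = np.linalg.pinv(phi_I)
        if np.sum((phi_I @ (pinv @ y) - y) ** 2) < tol:
            x_hat = np.zeros(N)
            x_hat[I_mm] = pinv @ y          # Eq. (8)
            return x_hat
    return None                             # no zero-residual subset found
```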
The form of the optimization function of IGP is the same as that of the optimization function in [22]. However, the independent variable $I_{mm}$ of the optimization function of IGP differs from the variable $I$ in [22], which is the main difference between them. The cardinality of $I_{mm}$ need not equal the sparsity level, which means the sparsity level need not be known a priori for reconstruction. The proposed optimization function of IGP thus relaxes the initial precondition for reconstruction and extends the range of applications of IGP.
When the sparsity level is known a priori, the size of the feasible domain $\Theta$, which represents the solution set consisting of all $K$-cardinality subsets of the set $S$, is $C_N^K$, yet there is only one global optimal solution. As mentioned in Theorem 1, when the sparsity level is not known a priori, the size of the feasible domain $\Theta'$ is $C_N^{mm}$, while there are $C_{N-K}^{mm-K}$ global optimal solutions. The main reason is that any $mm$-cardinality collection containing the true support yields zero residual, and there are $C_{N-K}^{mm-K}$ ways to choose the remaining $mm-K$ indices from the $N-K$ off-support positions.
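These counts are easy to check numerically; the values of $N$, $K$, and $mm$ below are illustrative choices of ours, not from the paper.

```python
from math import comb

N, K, mm = 50, 5, 8
print(comb(N, K))           # |Theta|  = C_N^K            -> 2118760 (one global optimum)
print(comb(N, mm))          # |Theta'| = C_N^mm           -> 536878650
print(comb(N - K, mm - K))  # global optima C_{N-K}^{mm-K} -> 14190
```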