Hindawi Publishing Corporation
EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 560349, 8 pages
doi:10.1155/2010/560349
Research Article
Optimized Projection Matrix for Compressive Sensing
Jianping Xu, Yiming Pi, and Zongjie Cao
School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
Correspondence should be addressed to Jianping Xu, xujianping1982@hotmail.com
Received 27 September 2009; Revised 26 April 2010; Accepted 22 June 2010
Academic Editor: A. Enis Cetin
Copyright © 2010 Jianping Xu et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal
is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. Until now, papers on CS have
typically assumed the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method
is proposed to optimize the projection matrix. This method is based on equiangular tight frame (ETF) design, because an ETF has
minimum coherence. Solving the problem exactly is intractable because of its complexity; therefore, an alternating-minimization-type
method is used to find a feasible solution. The optimally designed projection matrix can further reduce the number of samples
necessary for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional
optimization methods, bringing benefits to both basis pursuit and orthogonal matching pursuit.
1. Introduction
Compressive sensing (CS) [1–3] has received much attention
as it has shown promising results in many applications.
CS is an emerging framework, stating that signals which
have a sparse representation on an appropriate basis can
be recovered from a number of random linear projections
of dimension considerably lower than that required by the
Shannon-Nyquist theorem. Moreover, compressible signals,
that is, signals whose transform coefficients on an appropriate
basis decay rapidly, can also be sampled at a much lower
rate than that required by the Shannon-Nyquist theorem and
then reconstructed with little loss of information.
Consider a signal X ∈ R^n which can be sparsely
represented over a fixed dictionary Ψ ∈ R^(n×k) that is assumed
to be redundant (k > n). Accordingly, the signal can be
described as

X = Ψθ, (1)

where θ ∈ R^k is the coefficient vector which represents X on
Ψ and ‖θ‖₀ ≪ n. The ℓ₀ norm used here simply counts the
number of nonzero elements in θ.
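As a concrete illustration of the model in (1), the following sketch builds a signal with a sparse representation over a random redundant dictionary; the dimensions n, k and the sparsity level s are illustrative choices, not values from the paper.

```python
import numpy as np

# Sketch of the sparse model X = Psi @ theta from Eq. (1).
# n, k, s are illustrative assumptions, not values from the paper.
rng = np.random.default_rng(0)
n, k, s = 64, 128, 4                      # redundant dictionary: k > n

Psi = rng.standard_normal((n, k))         # fixed dictionary Psi in R^(n x k)
theta = np.zeros(k)                       # coefficient vector theta in R^k
support = rng.choice(k, size=s, replace=False)
theta[support] = rng.standard_normal(s)   # only s nonzero entries: ||theta||_0 = s << n

X = Psi @ theta                           # the signal admits a sparse representation
print(np.count_nonzero(theta))            # -> 4, the l0 "norm" of theta
```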
CS is an innovative and revolutionary idea that offers
a joint sampling and compression process for such signals.
Consider a general linear sampling process which computes
m (m < n) inner products between X and a collection of
vectors {ϕ_j}, j = 1, ..., m, as y_j = ⟨X, ϕ_j⟩. Arrange the measurements
{y_j} in an m × 1 vector Y and the sampling vectors
{ϕ_j^T} as rows in an m × n matrix Φ; then Y can be written
as

Y = ΦX = ΦΨθ. (2)
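The sampling process in (2) can be sketched as follows; the Gaussian projection matrix and all dimensions here are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Sketch of Eq. (2): m inner products y_j = <X, phi_j> stacked into
# Y = Phi @ X = Phi @ Psi @ theta. All sizes are illustrative.
rng = np.random.default_rng(1)
n, k, m, s = 64, 128, 16, 4               # m < n measurements

Psi = rng.standard_normal((n, k))         # sparsifying dictionary
theta = np.zeros(k)
theta[rng.choice(k, size=s, replace=False)] = 1.0
X = Psi @ theta                           # s-sparse signal as in Eq. (1)

Phi = rng.standard_normal((m, n))         # rows phi_j^T form the m x n projection matrix
Y = Phi @ X                               # the m measurements

# Y equals (Phi @ Psi) @ theta, the form used for recovery
assert np.allclose(Y, Phi @ Psi @ theta)
print(Y.shape)                            # -> (16,)
```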
The original X can be reconstructed from Y by exploiting
its sparse expression; that is, among all possible θ̂ that
satisfy Y = ΦΨθ̂, seek the sparsest. If this representation
coincides with θ, a perfect reconstruction of the signal in (1)
is obtained. This reconstruction requires the solution of

min ‖θ̂‖₀ subject to Y = ΦΨθ̂ = Dθ̂, (3)

where D = ΦΨ is defined as the equivalent dictionary.
It is known to be NP-hard in general to solve this problem
[4], and different suboptimal strategies are used in practice,
such as Basis Pursuit (BP) [3] and Orthogonal Matching
Pursuit (OMP) [5, 6].
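As an illustration of one such suboptimal strategy, the following is a minimal textbook-style sketch of OMP (not the exact implementation of [5, 6]): at each step it selects the dictionary column most correlated with the residual, then refits the coefficients on the chosen support by least squares.

```python
import numpy as np

def omp(D, Y, sparsity):
    """Greedy OMP sketch: find a sparse theta with Y ~= D @ theta."""
    residual = Y.astype(float).copy()
    support = []
    theta = np.zeros(D.shape[1])
    for _ in range(sparsity):
        # select the column of D most correlated with the residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # least-squares fit on the current support, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        residual = Y - D[:, support] @ coef
    theta[support] = coef
    return theta

# Sanity check on a trivial dictionary (the identity), where the sparsest
# solution is obvious:
D_demo = np.eye(5)
Y_demo = np.array([0.0, 3.0, 0.0, 0.0, 0.0])
theta_hat = omp(D_demo, Y_demo, sparsity=1)
print(theta_hat)  # -> [0. 3. 0. 0. 0.]
```

After each least-squares refit the residual is orthogonal to the selected columns, so no column is picked twice; this is the property that distinguishes OMP from plain matching pursuit.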
Until now, almost all works on CS have made the assumption
that Φ is drawn at random, except the one by Elad [7]
and the one by Duarte-Carvajalino and Sapiro [8]. In
[7], Elad proposed an iterative algorithm. The algorithm