When we consider sparse signals, the CS recovery process consists of a search for the sparsest signal $x$ that yields the measurements $y$. By defining the $\ell_0$ "norm" of a vector, $\|x\|_0$, as the number of nonzero entries in $x$, the simplest way to pose a recovery algorithm is using the optimization
$$\hat{x} = \arg\min_{x \in \mathbb{R}^N} \|x\|_0 \quad \text{subject to} \quad y = \Phi x. \qquad (10)$$
Solving (10) relies on an exhaustive search and is successful for all $x \in \Sigma_K$ when the matrix $\Phi$ has the sparse solution uniqueness property (i.e., for $M$ as small as $2K$; see Theorems 2 and 4). However, this algorithm has combinatorial computational complexity, since we must check whether the measurement vector $y$ belongs to the span of each set of $K$ columns of $\Phi$, $K = 1, 2, \ldots, N$. Our goal, therefore, is to find computationally feasible algorithms that can successfully recover a sparse vector $x$ from the measurement vector $y$ for the smallest possible number of measurements $M$.
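To make the combinatorial nature of (10) concrete, the following sketch (not from the source; the dimensions, seed, and helper name `l0_recover` are illustrative) performs exactly this exhaustive search, checking whether $y$ lies in the span of each candidate set of $K$ columns of $\Phi$:

```python
import itertools
import numpy as np

def l0_recover(y, Phi, K_max):
    """Exhaustive search for the sparsest x satisfying y = Phi @ x.

    For K = 1, ..., K_max, check whether y lies in the span of each
    set of K columns of Phi -- the combinatorial search behind (10).
    """
    N = Phi.shape[1]
    for K in range(1, K_max + 1):
        for support in itertools.combinations(range(N), K):
            cols = Phi[:, list(support)]
            # Least-squares coefficients on this candidate support.
            coeffs, *_ = np.linalg.lstsq(cols, y, rcond=None)
            if np.allclose(cols @ coeffs, y):
                x = np.zeros(N)
                x[list(support)] = coeffs
                return x
    return None

# Demo: a 2-sparse signal and a random Gaussian measurement matrix.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((5, 10))        # M = 5 >= 2K measurements
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -0.8]
y = Phi @ x_true
x_hat = l0_recover(y, Phi, K_max=3)
```

Even in this toy setting the search visits up to $\binom{N}{K}$ supports per sparsity level, which is exactly why the method does not scale.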
An alternative to the $\ell_0$ "norm" used in (10) is the $\ell_1$ norm, defined as $\|x\|_1 = \sum_{n=1}^{N} |x(n)|$. The resulting adaptation of (10), known as basis pursuit (BP) [22], is formally defined as
$$\hat{x} = \arg\min_{x \in \mathbb{R}^N} \|x\|_1 \quad \text{subject to} \quad y = \Phi x. \qquad (11)$$
Since the $\ell_1$ norm is convex, (11) can be seen as a convex relaxation of (10). Thanks to the convexity, this algorithm can be implemented as a linear program, making its computational complexity polynomial in the signal length [49].³
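The linear-programming reduction is the standard split $x = u - v$ with $u, v \ge 0$, so that $\|x\|_1 = \mathbf{1}^T(u + v)$. As a hedged illustration (not from the source; SciPy's `linprog`, the dimensions, and the seed are illustrative choices), the sketch below solves (11) this way:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(y, Phi):
    """Solve min ||x||_1 s.t. y = Phi x as a linear program.

    Split x = u - v with u, v >= 0, so that ||x||_1 = 1^T (u + v)
    and the equality constraint becomes Phi (u - v) = y.
    """
    N = Phi.shape[1]
    c = np.ones(2 * N)                     # objective: sum of u and v entries
    A_eq = np.hstack([Phi, -Phi])          # Phi (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:N], res.x[N:]
    return u - v

# Demo: exact recovery of a 3-sparse signal from M = 20 measurements.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 30]] = [2.0, -1.0, 0.5]
y = Phi @ x_true
x_hat = basis_pursuit(y, Phi)
```

The LP has $2N$ variables and $M$ equality constraints, so generic interior-point or simplex solvers run in time polynomial in $N$, in contrast to the combinatorial search behind (10).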
The optimization (11) can be modified to allow for noise in the measurements $y = \Phi x + n$; we simply change the constraint on the solution to
$$\hat{x} = \arg\min_{x \in \mathbb{R}^N} \|x\|_1 \quad \text{subject to} \quad \|y - \Phi x\|_2 \le \epsilon, \qquad (12)$$
where $\epsilon \ge \|n\|_2$ is an appropriately chosen bound on the noise magnitude. This modified optimization is known as basis pursuit with inequality constraints (BPIC) and is a quadratic program with polynomial-complexity solvers [49].
The Lagrangian relaxation of this quadratic program is written as
$$\hat{x} = \arg\min_{x \in \mathbb{R}^N} \|x\|_1 + \lambda \|y - \Phi x\|_2, \qquad (13)$$
and is known as basis pursuit denoising (BPDN). There exist many efficient solvers to find BP, BPIC, and BPDN solutions; for an overview, see [51].
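One family of such solvers consists of simple first-order iterations. As an illustrative sketch (not from the source), the code below applies ISTA (iterative soft thresholding) to the closely related lasso form, which replaces the residual norm in (13) by its square, $\min_x \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda\|x\|_1$; the dimensions, seed, and $\lambda = 0.1$ are assumptions made for the demo:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (entrywise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(y, Phi, lam, n_iter=1000):
    """Minimize 0.5 * ||y - Phi x||_2^2 + lam * ||x||_1 by ISTA.

    Each iteration takes a gradient step on the quadratic term and
    then soft-thresholds to account for the l1 penalty.
    """
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Demo: noisy measurements of a 3-sparse signal.
rng = np.random.default_rng(2)
Phi = rng.standard_normal((30, 60))
Phi /= np.linalg.norm(Phi, axis=0)         # normalize columns to unit norm
x_true = np.zeros(60)
x_true[[5, 20, 41]] = [1.0, -2.0, 1.5]
y = Phi @ x_true + 0.01 * rng.standard_normal(30)
x_hat = lasso_ista(y, Phi, lam=0.1)
```

Each iteration costs only two matrix-vector products, which is what makes such first-order schemes attractive for large $N$.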
Oftentimes, a bounded-norm noise model is overly pessimistic, and it may be reasonable instead to assume that the noise is random. For example, additive white Gaussian noise $n \sim \mathcal{N}(0, \sigma^2 I)$ is a common choice. Approaches designed to address stochastic noise include complexity-based regularization [52] and Bayesian estimation [53]. These methods pose complexity-based or probabilistic priors, respectively, on the set of observable signals. The particular prior is then leveraged together with the noise probability distribution during signal recovery.
Optimization-based approaches can also be formulated in this case; one of the most popular techniques is the
Dantzig selector [54]:
$$\hat{x} = \arg\min_{x \in \mathbb{R}^N} \|x\|_1 \quad \text{s.t.} \quad \|\Phi^T (y - \Phi x)\|_\infty \le \lambda \sqrt{\log N}\, \sigma, \qquad (14)$$
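Like (11), the optimization (14) is itself a linear program: the two-sided $\ell_\infty$ constraint unfolds into $2N$ linear inequalities. As an illustrative sketch (not from the source; SciPy's `linprog`, the dimensions, the seed, and the choice $\lambda = 3$ are assumptions), the Dantzig selector can be solved as:

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(y, Phi, delta):
    """Solve min ||x||_1  s.t.  ||Phi^T (y - Phi x)||_inf <= delta as an LP.

    Split x = u - v with u, v >= 0; the two-sided infinity-norm
    constraint becomes 2N linear inequalities in (u, v).
    """
    N = Phi.shape[1]
    G = Phi.T @ Phi
    b = Phi.T @ y
    c = np.ones(2 * N)                       # objective 1^T (u + v) = ||x||_1
    A = np.hstack([G, -G])                   # G (u - v)
    A_ub = np.vstack([A, -A])
    b_ub = np.concatenate([b + delta, delta - b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    u, v = res.x[:N], res.x[N:]
    return u - v

# Demo: noisy measurements with the threshold from (14).
rng = np.random.default_rng(3)
M, N, sigma = 20, 50, 0.01
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # roughly unit-norm columns
x_true = np.zeros(N)
x_true[[4, 12, 33]] = [1.0, -2.0, 1.5]
y = Phi @ x_true + sigma * rng.standard_normal(M)
delta = 3.0 * np.sqrt(np.log(N)) * sigma         # lambda = 3 in (14)
x_hat = dantzig_selector(y, Phi, delta)
```

The constraint asks that the residual $y - \Phi \hat{x}$ be nearly uncorrelated with every column of $\Phi$, which is what one expects of pure noise under the Gaussian model.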
³ A similar set of recovery algorithms, known as total variation minimizers, operates on the gradient of an image, which exhibits sparsity for piecewise smooth images [50].