Lecture Notes: Appendix
Abstract—This note presents a unified analysis of the recovery of simple objects from random linear measurements. When the linear functionals are Gaussian, we show that an s-sparse vector in R^n can be efficiently recovered from 2s log n measurements with high probability, and a rank-r, n × n matrix can be efficiently recovered from r(6n − 5r) measurements with high probability. For sparse vectors, this is within an additive factor of the best known nonasymptotic bounds. For low-rank matrices, this matches the best known bounds. We present a parallel analysis for block-sparse vectors, obtaining similarly tight bounds. In the case of sparse and block-sparse signals, we additionally demonstrate that our bounds are only slightly weakened when the measurement map is a random sign matrix. Our results are based on analyzing a particular dual point which certifies optimality conditions of the respective convex programming problem. Our calculations rely only on standard large deviation inequalities, and our analysis is self-contained.
Keywords. ℓ1-norm minimization, nuclear-norm minimization, block-sparsity, duality, random matrices.
I. INTRODUCTION
The past decade has witnessed a revolution in convex optimization algorithms for recovering structured models from highly incomplete information. Work in compressed sensing has shown that when a vector is sparse, it can be reconstructed from a number of nonadaptive linear measurements proportional to a logarithmic factor times the signal's sparsity level [?], [?]. Building on this work, many have recently demonstrated that if an array of user data has low rank, then the matrix can be reassembled from a sampling of information proportional to the number of parameters required to specify a low-rank factorization. See [?], [?], [?] for some early references on this topic.
Sometimes, one would like to know precisely how many measurements are needed to recover an s-sparse vector (a vector with at most s nonzero entries) by ℓ1 minimization or a rank-r matrix by nuclear-norm minimization. This of course depends on the kind of measurements one is allowed to take, and can be empirically determined or approximated by means of numerical studies. At the theoretical level, however, very precise answers (e.g., perfect knowledge of numerical constants) for models of general interest may be very hard to obtain. For instance, in [?], the authors demonstrated that about 20s log n randomly selected Fourier coefficients were sufficient to recover an s-sparse signal, but determining the minimum number that would suffice appears to be a very difficult question. Likewise, obtaining precise theoretical knowledge about the number of randomly selected entries required to recover a rank-r matrix by convex programming seems delicate, to say the least. For some special and idealized models, however, this is far easier, and the purpose of this note is to make this clear.
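As an illustration of the kind of numerical study mentioned above, the following minimal sketch estimates, by Monte Carlo simulation, the empirical probability of exact ℓ1 recovery of an s-sparse vector from Gaussian measurements. It is written in Python with NumPy and SciPy; the problem sizes, trial count, and tolerance are arbitrary illustrative choices, not values taken from this note.

    import numpy as np
    from scipy.optimize import linprog

    def l1_recover(Phi, y):
        # Solve min ||x||_1 subject to Phi x = y via the standard LP split
        # x = x_plus - x_minus with x_plus, x_minus >= 0.
        m, n = Phi.shape
        c = np.ones(2 * n)
        A_eq = np.hstack([Phi, -Phi])
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
        return res.x[:n] - res.x[n:]

    def success_rate(n=200, s=5, m=60, trials=50, tol=1e-6, seed=0):
        # Fraction of trials in which an s-sparse x0 is recovered exactly
        # (up to tol) from m Gaussian measurements.
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(trials):
            x0 = np.zeros(n)
            x0[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
            Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # i.i.d. N(0, 1/m)
            x_hat = l1_recover(Phi, Phi @ x0)
            hits += np.max(np.abs(x_hat - x0)) < tol
        return hits / trials

    # Sweep m around 2s log n (about 53 for n = 200, s = 5).
    for m in (30, 45, 60, 75):
        print(m, success_rate(m=m))

Sweeping m across the predicted threshold in this way traces out the empirical phase transition from failure to exact recovery.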
In this note, we demonstrate that many bounds concerning Gaussian measurements can be derived via elementary, direct methods using Lagrangian duality. By a careful analysis of a particular Lagrange multiplier, we are able to prove that 2s log n measurements are sufficient to recover an s-sparse vector in R^n and r(6n − 5r) measurements are sufficient to recover a rank-r, n × n matrix with high probability. These almost match the best known nonasymptotic bounds for sparse vector reconstruction (2s log(n/s) + (5/4)s measurements [?], [?]), and match the best known bounds for low-rank matrix recovery in the nuclear norm (as reported in [?], [?]).
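To make the comparison concrete, note that the gap between the two sparse-recovery bounds is 2s log n − (2s log(n/s) + (5/4)s) = 2s log s − (5/4)s, which depends on s alone and not on the ambient dimension n; this is the sense in which our bound is within an additive factor of the best known one. For instance, taking s = 100 and n = 10^6 (and reading log as the natural logarithm, an assumption made here only to produce a concrete number), 2s log n ≈ 2763 while 2s log(n/s) + (5/4)s ≈ 1967.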
The work [?], cited above, presents a unified view of the convex programming approach to inverse problems and provides a relatively simple framework to derive exact, robust recovery bounds for a variety of simple models. As we already mentioned, the authors also provide rather tight bounds on sparse vector and low-rank matrix recovery in the Gaussian measurement ensemble by using a deep theorem in functional analysis due to Gordon, which concerns the intersection of random subspaces with subsets of the sphere [?]. Gordon's Theorem has also been used to provide sharp estimates of the phase transitions for the ℓ1 and nuclear-norm heuristics in [?] and [?], respectively. Our work complements these results, demonstrating that the dual multiplier ansatz proposed in [?] can also yield very tight bounds for many signal recovery problems.
To introduce our results, suppose we are given information about an object x_0 ∈ R^n of the form Φx_0 ∈ R^m, where Φ is an m × n matrix. When Φ has entries sampled i.i.d. from a Gaussian distribution with mean 0 and variance 1/m, we call it a Gaussian measurement map. We want bounds on the number of rows m of Φ to