where $\mathbf{A}(\boldsymbol{\vartheta}) = [\mathbf{a}(\vartheta_1), \cdots, \mathbf{a}(\vartheta_N)]$ can be simply denoted as $\mathbf{A}$ for brevity, and $\bar{\mathbf{s}}(t) = [\bar{s}_1(t), \cdots, \bar{s}_N(t)]^T$ is a sparse vector whose non-zero entries indicate the true source signals. Hence the signal directions can be determined easily once $\bar{\mathbf{s}}(t)$ is obtained using any of the numerous existing algorithms for the basic SMV model. When $L$ snapshots are collected, the SMV model (2) can be easily extended to an MMV model as,
$$\mathbf{X} = \mathbf{A}\bar{\mathbf{S}} + \mathbf{V}, \qquad (3)$$
where $\mathbf{X} = [\mathbf{x}(t_1), \cdots, \mathbf{x}(t_L)]$ is the array output matrix, $\bar{\mathbf{S}} = [\bar{\mathbf{s}}(t_1), \cdots, \bar{\mathbf{s}}(t_L)]$ is the expanded signal matrix, and $\mathbf{V} = [\mathbf{v}(t_1), \cdots, \mathbf{v}(t_L)]$ is the measurement noise matrix of the array.
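To make the MMV model (3) concrete, the following numpy sketch simulates one such data set; the 8-element half-wavelength uniform linear array, the 1° grid, the two source directions, and the noise level are illustrative assumptions rather than the settings used later in this paper.

```python
import numpy as np

# Illustrative assumptions: 8-element half-wavelength ULA, 1-degree grid, 50 snapshots.
M, N, L = 8, 181, 50
grid = np.linspace(-90, 90, N)                      # candidate directions (degrees)

def steering(theta_deg):
    """ULA steering vector with half-wavelength element spacing."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

A = np.stack([steering(g) for g in grid], axis=1)   # M x N manifold matrix

# Two on-grid sources: S_bar is row-sparse, only the rows of the true DOAs are non-zero.
doa_idx = [np.argmin(np.abs(grid - d)) for d in (-20.0, 35.0)]
S_bar = np.zeros((N, L), dtype=complex)
S_bar[doa_idx, :] = (np.random.randn(2, L) + 1j * np.random.randn(2, L)) / np.sqrt(2)

sigma = 0.1                                         # noise standard deviation (illustrative)
V = sigma * (np.random.randn(M, L) + 1j * np.random.randn(M, L)) / np.sqrt(2)
X = A @ S_bar + V                                   # MMV model (3)
```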
It has been proven that the DOA estimation performance of the MMV model is always better than that of the SMV model [33]. It is also shown in [34] that, under certain mild assumptions, the recovery rate increases exponentially with the number of measurement vectors.
Remark 1: A key assumption of the MMV model is that all columns of $\bar{\mathbf{S}}$ share an identical sparse structure, i.e., the non-zero entries of $\bar{\mathbf{s}}(t_l)$ $(l = 1, \cdots, L)$ should appear in the same rows of $\bar{\mathbf{S}}$ [33]. This is valid only if the directions of the incident signals change only slightly, or remain invariant, during the acquisition of $\bar{\mathbf{S}}$. Unfortunately, the signal directions are often time-varying in practice, hence a small $L$ is required in the MMV model from a practical point of view. We assume the maximum of $L$ is 150 in this paper.
Now we take the off-grid case into consideration, where a bias exists between the true DOA and its nearest grid point. No matter how densely we divide the angle space, this bias always exists. In general, the denser the grid set, the higher the computational cost. Furthermore, a very dense grid set may lead to high correlation between the $\mathbf{a}(\vartheta_n)$, making many of the compressive sensing (CS) reconstruction algorithms fail. We incorporate a bias parameter into the MMV model (3) to avoid or alleviate the performance degradation caused by a dense grid set.
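The claim that a very dense grid raises the correlation between neighbouring steering vectors can be checked numerically. The sketch below (assuming, for illustration, an 8-element half-wavelength uniform linear array) reports the correlation of the two grid columns nearest broadside as the grid is refined; the correlation approaches one as the grid step shrinks.

```python
import numpy as np

def steering(theta_deg, M=8):
    # ULA steering vector, half-wavelength spacing (illustrative assumption)
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

for N in (19, 91, 181, 721):                       # increasingly dense grids over [-90, 90] degrees
    grid = np.linspace(-90, 90, N)
    A = np.stack([steering(g) for g in grid], axis=1)
    A /= np.linalg.norm(A, axis=0)                 # unit-norm columns
    mid = N // 2                                   # pair of adjacent columns nearest broadside
    corr = np.abs(np.vdot(A[:, mid], A[:, mid + 1]))
    print(f"N = {N:4d} (grid step {grid[1] - grid[0]:.2f} deg): "
          f"adjacent-column correlation = {corr:.4f}")
```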
Let $\mathbf{a}(\theta_n) = (1-\rho_n)\mathbf{a}(\vartheta_n) + \rho_n \mathbf{a}(\vartheta_{n+1})$, where $\rho_n$ is defined as the bias parameter, and $\vartheta_n$ and $\vartheta_{n+1}$ are the grid directions adjacent to the true DOA $\theta_n$ from the left and right, respectively. Then the modified signal model can be rewritten as,
$$\mathbf{X} = \bar{\mathbf{A}}\bar{\mathbf{S}} + \mathbf{V}, \qquad (4)$$
where $\bar{\mathbf{A}}$ is a new manifold matrix
$$\bar{\mathbf{A}} = \mathbf{A}(1:N-1)\,\mathrm{diag}(\mathbf{1}-\boldsymbol{\rho}) + \mathbf{A}(2:N)\,\mathrm{diag}(\boldsymbol{\rho}) \quad \text{with} \quad \boldsymbol{\rho} = [\rho_1, \cdots, \rho_{N-1}]^T, \qquad (5)$$
where $\mathbf{A}(i:j)$ denotes the submatrix of $\mathbf{A}$ consisting of the $i$th through the $j$th columns of $\mathbf{A}$. By defining $\boldsymbol{\Lambda} = \mathrm{diag}(\boldsymbol{\rho})$, $\mathbf{I}_f = [\mathbf{I}_{N-1}, \mathbf{0}_{(N-1)\times 1}]^T$ and $\mathbf{I}_b = [\mathbf{0}_{(N-1)\times 1}, \mathbf{I}_{N-1}]^T$, (5) can be rewritten compactly as,
$$\bar{\mathbf{A}} = \mathbf{A}\mathbf{I}_f(\mathbf{I}_{N-1} - \boldsymbol{\Lambda}) + \mathbf{A}\mathbf{I}_b\boldsymbol{\Lambda} = \mathbf{A}\mathbf{I}_f + \mathbf{A}(\mathbf{I}_b - \mathbf{I}_f)\boldsymbol{\Lambda} = \mathbf{A}_f + \mathbf{A}_{bf}\boldsymbol{\Lambda}, \qquad (6)$$
where $\mathbf{A}_f = \mathbf{A}\mathbf{I}_f$ and $\mathbf{A}_{bf} = \mathbf{A}(\mathbf{I}_b - \mathbf{I}_f)$.
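The equivalence of the interpolation form (5) and the compact form (6) is easy to verify numerically; the sketch below does so for an illustrative 8-element half-wavelength uniform linear array and random bias parameters.

```python
import numpy as np

def steering(theta_deg, M=8):
    # ULA steering vector, half-wavelength spacing (illustrative assumption)
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

N = 181
grid = np.linspace(-90, 90, N)
A = np.stack([steering(g) for g in grid], axis=1)        # M x N manifold matrix
rho = np.random.rand(N - 1)                              # bias parameters in [0, 1)

# Form (5): interpolation between adjacent columns of A
A_bar_5 = A[:, :-1] @ np.diag(1 - rho) + A[:, 1:] @ np.diag(rho)

# Form (6): A_bar = A_f + A_bf @ Lambda
I_f = np.vstack([np.eye(N - 1), np.zeros((1, N - 1))])   # [I_{N-1}, 0_{(N-1)x1}]^T, size N x (N-1)
I_b = np.vstack([np.zeros((1, N - 1)), np.eye(N - 1)])   # [0_{(N-1)x1}, I_{N-1}]^T, size N x (N-1)
Lam = np.diag(rho)
A_f, A_bf = A @ I_f, A @ (I_b - I_f)
A_bar_6 = A_f + A_bf @ Lam

print("max |(5) - (6)| =", np.abs(A_bar_5 - A_bar_6).max())   # numerically zero
```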
Remark 2: The proposed off-grid model (6) is similar to the off-grid models in [29] and [30]. In particular, the model in [29] and [30] is based on a first-order approximation, whereas our proposed model is based on linear interpolation. This difference leads to a distinction in estimation performance. A theoretical analysis of the proposed model will be provided at the end of this section.
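The distinction mentioned in Remark 2 can be illustrated numerically: for a single off-grid direction, the sketch below compares the steering-vector error of the linear interpolation used here with that of a first-order (Taylor-type) approximation in the spirit of [29] and [30]. The array geometry, grid step, and test angle are illustrative choices, not the paper's simulation settings.

```python
import numpy as np

M = 8
m = np.arange(M)

def steering(theta_rad):
    # ULA steering vector, half-wavelength spacing (illustrative assumption)
    return np.exp(1j * np.pi * m * np.sin(theta_rad))

def steering_deriv(theta_rad):
    # derivative of the steering vector with respect to theta (radians)
    return 1j * np.pi * m * np.cos(theta_rad) * steering(theta_rad)

grid = np.deg2rad(np.arange(-90, 91, 2.0))          # 2-degree grid
theta = np.deg2rad(13.3)                            # off-grid DOA
n = np.searchsorted(grid, theta) - 1                # index of the left grid neighbour
rho = (theta - grid[n]) / (grid[n + 1] - grid[n])   # bias parameter in [0, 1)

a_true = steering(theta)
a_interp = (1 - rho) * steering(grid[n]) + rho * steering(grid[n + 1])      # linear interpolation
a_taylor = steering(grid[n]) + (theta - grid[n]) * steering_deriv(grid[n])  # first-order approximation

err = lambda a: np.linalg.norm(a - a_true) / np.linalg.norm(a_true)
print("linear interpolation error:   ", err(a_interp))
print("first-order approximation error:", err(a_taylor))
```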
Remark 3: Some recent methods have been proposed for the off-grid model presented in [29], such as the perturbed $\ell_1$-norm-based algorithm [28] and the perturbed greedy algorithm [35]. However, it has been proven that $\ell_1$-norm-based algorithms often fail to obtain the sparsest solution, and greedy algorithms are usually sensitive to high correlation between the columns of the manifold matrix [25]. Unlike these two kinds of methods, SBL does not rely on the restricted isometry property (RIP) to guarantee reliable performance, and it can conveniently incorporate appropriate priors to exploit the signal's structure. Hence, we employ the SBL algorithm to solve our off-grid model (4) in this paper. In fact, the authors of [29] have proposed a sparse Bayesian based algorithm with a Gamma hyperprior assumption. However, this assumption is likely to lead to instability of the algorithm or even an incorrect solution [36].
B. PSBL Algorithm
We assume that the columns of $\bar{\mathbf{S}}$ are mutually independent, and that each column obeys a zero-mean Gaussian distribution with covariance $\boldsymbol{\Gamma}$, namely,
$$\bar{\mathbf{s}}(t_l) \sim \mathcal{CN}(\mathbf{0}, \boldsymbol{\Gamma}), \qquad (7)$$
where $\boldsymbol{\Gamma} = \mathrm{diag}(\boldsymbol{\gamma})$ with $\boldsymbol{\gamma} = [\gamma_1, \cdots, \gamma_N]^T$ is the covariance matrix of the $l$th column of $\bar{\mathbf{S}}$. Note that $\gamma_n$ $(n = 1, \cdots, N)$ is a nonnegative hyperparameter controlling the row sparsity of $\bar{\mathbf{S}}$, i.e., when $\gamma_n = 0$, the associated row of $\bar{\mathbf{S}}$ becomes zero. With this assumption, we can obtain the probability density function (PDF) of $\bar{\mathbf{S}}$ with respect to $\boldsymbol{\Gamma}$ as follows,
$$p(\bar{\mathbf{S}}; \boldsymbol{\Gamma}) = |\pi\boldsymbol{\Gamma}|^{-L} \exp\!\left(-\mathrm{tr}\!\left(\bar{\mathbf{S}}^H \boldsymbol{\Gamma}^{-1} \bar{\mathbf{S}}\right)\right). \qquad (8)$$
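As a consistency check of (8), the following sketch draws a matrix with i.i.d. columns from $\mathcal{CN}(\mathbf{0}, \boldsymbol{\Gamma})$ and verifies that the trace form of the log-density equals the sum of the per-column complex Gaussian log-densities; the dimensions and hyperparameters are illustrative, and $\boldsymbol{\gamma}$ is kept strictly positive so that the density is proper.

```python
import numpy as np

N, L = 20, 30
gamma = np.random.rand(N) + 0.1                 # strictly positive hyperparameters (assumption)

# Draw S_bar with i.i.d. columns ~ CN(0, Gamma), Gamma = diag(gamma)
S_bar = np.sqrt(gamma)[:, None] * (np.random.randn(N, L) + 1j * np.random.randn(N, L)) / np.sqrt(2)

# Log of (8): log p(S_bar; Gamma) = -L * log|pi * Gamma| - tr(S_bar^H Gamma^{-1} S_bar)
logdet = np.sum(np.log(np.pi * gamma))          # log|pi * Gamma| for a diagonal Gamma
log_p_trace = -L * logdet - np.trace(S_bar.conj().T @ np.diag(1 / gamma) @ S_bar).real

# The same quantity as a sum of per-column complex Gaussian log-densities
log_p_cols = sum(-logdet - (s.conj() @ (s / gamma)).real for s in S_bar.T)

print(log_p_trace, log_p_cols)                  # the two values agree
```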
Assume that the entries of the noise matrix $\mathbf{V}$ are mutually independent and that each entry has a complex Gaussian distribution, i.e., $v_n(t_l) \sim \mathcal{CN}(0, \sigma^2)$, where $\sigma^2$ is the noise power. For the off-grid MMV model (4), the Gaussian likelihood is
$$p(\mathbf{X}|\bar{\mathbf{S}}; \sigma^2, \boldsymbol{\rho}) \sim \mathcal{CN}(\bar{\mathbf{A}}\bar{\mathbf{S}}, \sigma^2\mathbf{I}). \qquad (9)$$
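For reference, the log of the likelihood (9) reduces to $-ML\log(\pi\sigma^2) - \|\mathbf{X} - \bar{\mathbf{A}}\bar{\mathbf{S}}\|_F^2/\sigma^2$; the short sketch below evaluates it with stand-in matrices of illustrative size (the random $\bar{\mathbf{A}}$ here is only a placeholder for the off-grid manifold).

```python
import numpy as np

M, N, L = 8, 30, 10
sigma2 = 0.05
A_bar = (np.random.randn(M, N) + 1j * np.random.randn(M, N)) / np.sqrt(2)   # stand-in for the off-grid manifold
S_bar = np.zeros((N, L), dtype=complex)
S_bar[[3, 17], :] = (np.random.randn(2, L) + 1j * np.random.randn(2, L)) / np.sqrt(2)
V = np.sqrt(sigma2) * (np.random.randn(M, L) + 1j * np.random.randn(M, L)) / np.sqrt(2)
X = A_bar @ S_bar + V

# log p(X | S_bar; sigma^2, rho) for the complex Gaussian likelihood (9)
residual = X - A_bar @ S_bar
log_lik = -M * L * np.log(np.pi * sigma2) - np.linalg.norm(residual, 'fro') ** 2 / sigma2
print("log-likelihood:", log_lik)
```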
Using the Bayes rule, we obtain the posterior PDF of $\bar{\mathbf{S}}$ as,
$$p(\bar{\mathbf{S}}|\mathbf{X}; \boldsymbol{\Gamma}, \sigma^2, \boldsymbol{\rho}) \sim \mathcal{CN}(\boldsymbol{\mu}_{\bar{s}}, \boldsymbol{\Sigma}_{\bar{s}}) \qquad (10)$$
with mean
$$\boldsymbol{\mu}_{\bar{s}} = \boldsymbol{\Gamma}\bar{\mathbf{A}}^H \boldsymbol{\Sigma}_x^{-1} \mathbf{X} \qquad (11)$$
and covariance matrix
$$\boldsymbol{\Sigma}_{\bar{s}} = \left(\boldsymbol{\Gamma}^{-1} + \sigma^{-2}\bar{\mathbf{A}}^H\bar{\mathbf{A}}\right)^{-1} = \boldsymbol{\Gamma} - \boldsymbol{\Gamma}\bar{\mathbf{A}}^H \boldsymbol{\Sigma}_x^{-1} \bar{\mathbf{A}}\boldsymbol{\Gamma}, \qquad (12)$$
where $\boldsymbol{\Sigma}_x = \sigma^2\mathbf{I} + \bar{\mathbf{A}}\boldsymbol{\Gamma}\bar{\mathbf{A}}^H$.
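These posterior quantities are straightforward to compute. The sketch below evaluates the mean (11) and both expressions of the covariance (12) with stand-in matrices of illustrative size, confirming numerically that the two forms of $\boldsymbol{\Sigma}_{\bar{s}}$ coincide; the random $\bar{\mathbf{A}}$, $\boldsymbol{\gamma}$, and $\mathbf{X}$ are placeholders, not quantities from the algorithm's iterations.

```python
import numpy as np

M, N, L = 8, 30, 10
A_bar = (np.random.randn(M, N) + 1j * np.random.randn(M, N)) / np.sqrt(2)   # stand-in for A_bar
gamma = np.random.rand(N) + 0.1                                             # stand-in hyperparameters
Gamma = np.diag(gamma)
sigma2 = 0.05
X = (np.random.randn(M, L) + 1j * np.random.randn(M, L)) / np.sqrt(2)       # stand-in data

Sigma_x = sigma2 * np.eye(M) + A_bar @ Gamma @ A_bar.conj().T               # marginal covariance
mu = Gamma @ A_bar.conj().T @ np.linalg.solve(Sigma_x, X)                   # posterior mean (11)

# Posterior covariance (12): direct inverse vs. Woodbury-type form
Sigma_s_direct = np.linalg.inv(np.diag(1 / gamma) + A_bar.conj().T @ A_bar / sigma2)
Sigma_s_woodbury = Gamma - Gamma @ A_bar.conj().T @ np.linalg.solve(Sigma_x, A_bar @ Gamma)

print("max difference between the two forms of Sigma_s:",
      np.abs(Sigma_s_direct - Sigma_s_woodbury).max())                      # numerically zero
```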