1530-437X (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/JSEN.2015.2508059, IEEE Sensors Journal
Hence the signal directions can be determined easily once $\bar{s}(t)$ is obtained, using numerous existing algorithms for the basic SMV model. When $L$ snapshots are collected, the SMV model (2) can be easily extended to an MMV model as,
$$X = A\bar{S} + V, \qquad (3)$$
where $X = [x(t_1), \cdots, x(t_L)]$ is the array output matrix, $\bar{S} = [\bar{s}(t_1), \cdots, \bar{s}(t_L)]$ is the expanded signal matrix, and $V = [v(t_1), \cdots, v(t_L)]$ is the measurement noise matrix of the array.
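For concreteness, the MMV model (3) can be simulated numerically. The sketch below assumes a hypothetical half-wavelength uniform linear array; the numbers of sensors, grid points, snapshots, and the two source positions are illustrative choices, not values from the paper:

```python
import numpy as np

# Simulate the MMV model X = A * S_bar + V of (3) for a hypothetical
# half-wavelength ULA. All dimensions and source positions are assumptions.
rng = np.random.default_rng(0)
M, N, L = 8, 90, 50                       # sensors, grid points, snapshots

def steering(theta_deg, M):
    """Steering vector of a half-wavelength ULA (element spacing d = lambda/2)."""
    m = np.arange(M)
    return np.exp(-1j * np.pi * m * np.sin(np.deg2rad(theta_deg)))

grid = np.linspace(-90, 90, N)                         # angle grid
A = np.stack([steering(g, M) for g in grid], axis=1)   # M x N manifold matrix

# Row-sparse expanded signal matrix: only the rows indexed by the true DOAs
# are non-zero, and every snapshot shares that support (cf. Remark 1).
support = [30, 60]                                     # hypothetical grid indices
S_bar = np.zeros((N, L), dtype=complex)
S_bar[support, :] = rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))

sigma = 0.1                                            # noise standard deviation
V = sigma / np.sqrt(2) * (rng.standard_normal((M, L))
                          + 1j * rng.standard_normal((M, L)))
X = A @ S_bar + V                                      # array output matrix
print(X.shape)                                         # (8, 50)
```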
It has been proven that the DOA estimation performance of the MMV model is always better than that of the SMV model [33]. It is also shown in [34] that, under certain mild assumptions, the recovery rate increases exponentially with the number of measurement vectors.
Remark 1: A key assumption of the MMV model is that each column of $\bar{S}$ shares the identical sparse structure, i.e., the non-zero entries of $\bar{s}(t_l)$ $(l = 1, \cdots, L)$ should appear in the same rows of $\bar{S}$ [33]. This is valid only if the directions of the incident signals change little, or not at all, during the acquisition of $\bar{S}$. Unfortunately, the signal directions are often time-varying in practice, so a small $L$ is required in the MMV model from a practical point of view. We assume the maximum of $L$ is 150 in this paper.
Now we take the off-grid case into consideration, where a bias exists between the true DOA and its nearest grid point. No matter how densely we divide the angle space, this bias always exists. In general, the denser the grid set, the higher the computational cost. Furthermore, a very dense grid set may lead to a high correlation between the steering vectors $a(\vartheta_n)$, making many compressive sensing (CS) reconstruction algorithms fail. We incorporate a bias parameter into the MMV model (3) to avoid or alleviate the performance degradation caused by a dense grid set.
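The correlation issue can be seen numerically: for a hypothetical half-wavelength ULA, the normalized inner product between adjacent grid steering vectors approaches 1 as the grid is refined. The array size and grid sizes below are illustrative assumptions:

```python
import numpy as np

# Normalized correlation |a(theta_n)^H a(theta_{n+1})| / M between adjacent
# steering vectors of a hypothetical 8-sensor half-wavelength ULA.
M = 8
m = np.arange(M)

def steering(theta_deg):
    return np.exp(-1j * np.pi * m * np.sin(np.deg2rad(theta_deg)))

def adjacent_corr(n_grid):
    """Correlation of two neighboring grid steering vectors near broadside."""
    grid = np.linspace(-90, 90, n_grid)
    a0, a1 = steering(grid[n_grid // 2]), steering(grid[n_grid // 2 + 1])
    return abs(a0.conj() @ a1) / M

coarse, dense = adjacent_corr(31), adjacent_corr(721)
print(coarse < dense and dense > 0.99)    # True: denser grid, higher correlation
```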
Let $a(\theta_n) = (1-\rho_n)a(\vartheta_n) + \rho_n a(\vartheta_{n+1})$, where $\rho_n$ is defined as the bias parameter, and $\vartheta_n$ and $\vartheta_{n+1}$ are the grid directions adjacent to the true DOA $\theta_n$ from the left and right, respectively. Then the modified signal model can be rewritten as,
$$X = \bar{A}\bar{S} + V, \qquad (4)$$
where $\bar{A}$ is a new manifold matrix,
$$\bar{A} = A(1:N-1)\,\mathrm{diag}(1-\rho) + A(2:N)\,\mathrm{diag}(\rho) \quad \text{with} \quad \rho = [\rho_1, \cdots, \rho_{N-1}]^T, \qquad (5)$$
where $A(i:j)$ denotes the submatrix of $A$ consisting of the $i$th column through the $j$th column of $A$. By defining $\Delta = \mathrm{diag}(\rho)$, $I_f = [I_{N-1}, 0_{(N-1)\times 1}]^T$ and $I_b = [0_{(N-1)\times 1}, I_{N-1}]^T$, (5) can be rewritten compactly as,
$$\bar{A} = AI_f(I_{N-1} - \Delta) + AI_b\Delta = AI_f + A(I_b - I_f)\Delta = A_f + A_{bf}\Delta, \qquad (6)$$
where $A_f = AI_f$ and $A_{bf} = A(I_b - I_f)$.
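The equivalence of the interpolated form (5) and the compact form (6) can be checked numerically. The sketch below uses a random stand-in for the manifold matrix $A$; all dimensions and bias values are illustrative assumptions:

```python
import numpy as np

# Verify that A_bar from (5), built by linear interpolation of adjacent
# columns, equals the compact form A_f + A_bf * Delta of (6).
rng = np.random.default_rng(1)
M, N = 8, 45
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))  # stand-in A
rho = rng.uniform(0.0, 1.0, N - 1)        # bias parameters rho_1..rho_{N-1}

# Direct form (5): A(1:N-1) diag(1-rho) + A(2:N) diag(rho).
A_bar_direct = A[:, :N - 1] @ np.diag(1 - rho) + A[:, 1:] @ np.diag(rho)

# Compact form (6): A_bar = A*I_f + A*(I_b - I_f)*Delta.
I_f = np.vstack([np.eye(N - 1), np.zeros((1, N - 1))])  # N x (N-1) selector
I_b = np.vstack([np.zeros((1, N - 1)), np.eye(N - 1)])
A_f, A_bf = A @ I_f, A @ (I_b - I_f)
A_bar_compact = A_f + A_bf @ np.diag(rho)

print(np.allclose(A_bar_direct, A_bar_compact))         # True
```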
Remark 2: The proposed off-grid model (6) is similar to the off-grid models in [29] and [30]. In particular, the model in [29] and [30] uses a first-order approximation, whereas our proposed model uses linear interpolation. This difference leads to a distinction in estimation performance. A theoretical analysis of the proposed model will be provided at the end of this section.
Remark 3: Several recent methods have been proposed for the off-grid model presented in [29], such as the perturbed $\ell_1$-norm-based algorithm [28] and the perturbed greedy algorithm [35]. However, it has been proven that $\ell_1$-norm-based algorithms often fail to obtain the sparsest solution, and greedy algorithms are usually sensitive to high correlation between the columns of the manifold matrix [25]. Unlike these two kinds of methods, SBL does not rely on the restricted isometry property (RIP) to guarantee reliable performance, and it is convenient to incorporate proper priors to exploit the signal's structure. Hence, we employ the SBL algorithm to solve our off-grid model (4) in this paper. In fact, the authors of [29] have proposed a sparse Bayesian based algorithm with a Gamma hyperprior assumption. However, this assumption is likely to lead to instability of the algorithm or even an incorrect solution [36].
B. PSBL algorithm
We assume that the columns of $\bar{S}$ are mutually independent, and each column obeys a zero-mean Gaussian distribution with covariance $\Gamma$, namely,
$$\bar{s}(t_l) \sim \mathcal{CN}(0, \Gamma), \qquad (7)$$
where $\Gamma = \mathrm{diag}(\gamma)$, with $\gamma = [\gamma_1, \cdots, \gamma_N]^T$, is the covariance matrix of the $l$th column of $\bar{S}$. Note that $\gamma_n$ $(n = 1, \cdots, N)$ is a nonnegative hyperparameter controlling the row sparsity of $\bar{S}$, i.e., when $\gamma_n = 0$ the associated row of $\bar{S}$ becomes zero.
With this assumption, we can obtain the probability density function (PDF) of $\bar{S}$ with respect to $\Gamma$ as follows,
$$p(\bar{S}; \Gamma) = |\pi\Gamma|^{-L} \exp\left(-\mathrm{tr}\left(\bar{S}^H \Gamma^{-1} \bar{S}\right)\right). \qquad (8)$$
Assume that the entries of the noise matrix $V$ are mutually independent and each entry obeys a complex Gaussian distribution, i.e., $v_n(t_l) \sim \mathcal{CN}(0, \sigma^2)$, where $\sigma^2$ is the noise power. For the off-grid model (4), the Gaussian likelihood is
$$p(X|\bar{S}; \sigma^2, \rho) \sim \mathcal{CN}(\bar{A}\bar{S}, \sigma^2 I). \qquad (9)$$
Using the Bayes rule, we obtain the posterior PDF of $\bar{S}$ as,
$$p(\bar{S}|X; \Gamma, \sigma^2, \rho) \sim \mathcal{CN}(\mu_{\bar{s}}, \Sigma_{\bar{s}}) \qquad (10)$$
with mean
$$\mu_{\bar{s}} = \Gamma \bar{A}^H \Sigma_x^{-1} X \qquad (11)$$
and covariance matrix
$$\Sigma_{\bar{s}} = \left(\Gamma^{-1} + \sigma^{-2}\bar{A}^H\bar{A}\right)^{-1} = \Gamma - \Gamma \bar{A}^H \Sigma_x^{-1} \bar{A} \Gamma, \qquad (12)$$
where $\Sigma_x = \sigma^2 I + \bar{A}\Gamma\bar{A}^H$.
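The two expressions for the posterior covariance in (12) are related by the matrix inversion lemma, which can be confirmed numerically. The sketch below uses random stand-ins for $\bar{A}$, $X$, and the hyperparameters; all values are illustrative assumptions:

```python
import numpy as np

# Check numerically that the two forms of Sigma_s in (12) coincide, and
# compute the posterior mean (11). All quantities are random stand-ins.
rng = np.random.default_rng(2)
M, N, L = 8, 20, 10
A_bar = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
X = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))
Gamma = np.diag(rng.uniform(0.1, 1.0, N))
sigma2 = 0.5

Sigma_x = sigma2 * np.eye(M) + A_bar @ Gamma @ A_bar.conj().T
mu_s = Gamma @ A_bar.conj().T @ np.linalg.solve(Sigma_x, X)        # eq. (11)

Sigma_s1 = np.linalg.inv(np.linalg.inv(Gamma) + A_bar.conj().T @ A_bar / sigma2)
Sigma_s2 = Gamma - Gamma @ A_bar.conj().T @ np.linalg.solve(Sigma_x, A_bar) @ Gamma

print(np.allclose(Sigma_s1, Sigma_s2))    # True (matrix inversion lemma)
```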
To find the hyperparameters $\Theta = \{\gamma, \sigma^2, \rho\}$, we employ the EM method to minimize $-\log p(X, \bar{S}; \Theta)$ while treating $\bar{S}$ as hidden variables. This is equivalent to maximizing
$$Q(\Theta) = E_{\bar{S}|X;\Theta^{\mathrm{old}}}\left[\log p(X, \bar{S}; \Theta)\right] = E_{\bar{S}|X;\Theta^{\mathrm{old}}}\left[\log p(X|\bar{S}; \sigma^2, \rho)\right] + E_{\bar{S}|X;\Theta^{\mathrm{old}}}\left[\log p(\bar{S}; \Gamma)\right]. \qquad (13)$$
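The E-step of this EM iteration computes the posterior moments (11)-(12), after which the hyperparameters are re-estimated. The paper's specific updates for $\gamma$, $\sigma^2$, and $\rho$ are not derived in this excerpt, so the sketch below substitutes the standard multi-snapshot SBL update $\gamma_n = \|\mu_n\|^2/L + (\Sigma_{\bar{s}})_{nn}$ as a placeholder; all data and dimensions are illustrative assumptions:

```python
import numpy as np

# EM-style sketch: E-step via (11)-(12), then the standard multi-snapshot SBL
# gamma update as a placeholder for the paper's M-step (sigma^2 and rho are
# held fixed here). All inputs are random stand-ins.
rng = np.random.default_rng(3)
M, N, L = 8, 20, 30
A_bar = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
X = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))
gamma, sigma2 = np.ones(N), 0.1

for _ in range(10):
    # E-step: posterior mean (11) and covariance (12)
    Gamma = np.diag(gamma)
    Sigma_x = sigma2 * np.eye(M) + A_bar @ Gamma @ A_bar.conj().T
    mu = Gamma @ A_bar.conj().T @ np.linalg.solve(Sigma_x, X)
    Sigma_s = Gamma - Gamma @ A_bar.conj().T @ np.linalg.solve(Sigma_x, A_bar) @ Gamma
    # M-step (placeholder): row power plus posterior variance
    gamma = np.sum(np.abs(mu) ** 2, axis=1) / L + np.real(np.diag(Sigma_s))

print(gamma.shape)    # (20,)
```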