Under the superimposed training framework, one round of the training transmission from S to D can be partitioned into two phases. During phase I, S sends one source-training block $\mathbf{T}_s \in \mathbb{C}^{\tau \times N_s}$ to R, where $\tau$ is the number of time slots for the training transmission. The received training at R during phase I is
$$\mathbf{r}_r = \mathbf{T}_s \mathbf{h} + \mathbf{n}_r, \tag{1}$$
where $\mathbf{n}_r \in \mathbb{C}^{\tau \times 1}$ is the additive white Gaussian noise (AWGN) vector with zero mean and covariance matrix $\sigma_n^2 \mathbf{I}_\tau$.
During phase II, R superimposes its training $\mathbf{t}_r \in \mathbb{C}^{\tau \times 1}$ over its received one $\mathbf{r}_r$, and the resultant training at R can be denoted as [21]–[23]
$$\mathbf{r}_t = \alpha \mathbf{r}_r + \mathbf{t}_r, \tag{2}$$
where $\alpha$ is the relay amplification factor. Then, R forwards $\mathbf{r}_t$ to D, and the received training at D can be written as
$$\mathbf{Y} = \alpha \mathbf{T}_s \mathbf{h} \mathbf{g}^T + \mathbf{t}_r \mathbf{g}^T + \alpha \mathbf{n}_r \mathbf{g}^T + \mathbf{N}_d, \tag{3}$$
where $\mathbf{N}_d \in \mathbb{C}^{\tau \times N_d}$ denotes the AWGN matrix at D. Note that the entries of $\mathbf{N}_d$ are i.i.d. CSCG random variables with zero mean and variance $\sigma_n^2$.
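For concreteness, the two-phase model (1)–(3) can be simulated as below. This is a minimal NumPy sketch under our own illustrative choices of $\tau$, $N_s$, $N_d$, the noise variance, and unit-variance CSCG channels; none of these values come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
tau, N_s, N_d = 8, 4, 2      # illustrative sizes: time slots, S antennas, D antennas
sigma_n2 = 0.01              # noise variance sigma_n^2 (our choice)
alpha = 1.0                  # relay amplification factor; replaced via eq. (5) below

def cn(shape, var):          # CSCG samples: zero mean, the given variance per entry
    return np.sqrt(var / 2) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

h = cn((N_s, 1), 1.0)        # S -> R channel
g = cn((N_d, 1), 1.0)        # R -> D channel

# Phase I: S sends the training block T_s; reception at R, eq. (1)
T_s = cn((tau, N_s), 1.0)
n_r = cn((tau, 1), sigma_n2)
r_r = T_s @ h + n_r

# Phase II: R superimposes its training t_r, eq. (2), and forwards to D, eq. (3)
t_r = cn((tau, 1), 1.0)
r_t = alpha * r_r + t_r
N_dmat = cn((tau, N_d), sigma_n2)
Y = r_t @ g.T + N_dmat       # = alpha*T_s@h@g.T + t_r@g.T + alpha*n_r@g.T + N_dmat
```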
We set the average transmit power of R to $P_r$, i.e.,
$$P_r = \mathbb{E}_{\mathbf{h},\mathbf{n}_r}\{\|\alpha(\mathbf{T}_s\mathbf{h} + \mathbf{n}_r) + \mathbf{t}_r\|^2\} = \alpha^2 \sigma_h^2 P_s + \tau \alpha^2 \sigma_n^2 + P_t, \tag{4}$$
where $P_s = \operatorname{tr}\{\mathbf{T}_s \mathbf{T}_s^H\}$ and $P_t = \|\mathbf{t}_r\|^2$ are the powers of the source training and the relay training, respectively. Then the amplification factor $\alpha$ can be expressed as
$$\alpha = \sqrt{\frac{P_r - P_t}{\sigma_h^2 P_s + \tau \sigma_n^2}}. \tag{5}$$
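Continuing the sketch above, (5) fixes $\alpha$ from the power budget, and (4) can be sanity-checked by Monte Carlo; the budget $P_r$ here is again an illustrative value of ours.

```python
# Amplification factor per eq. (5): the relay spends P_r - P_t on amplification
sigma_h2 = 1.0                               # variance of each entry of h, as drawn above
P_s = np.trace(T_s @ T_s.conj().T).real      # source-training power tr{T_s T_s^H}
P_t = np.linalg.norm(t_r) ** 2               # relay-training power ||t_r||^2
P_r = 2.0 * P_t + 1.0                        # relay power budget (illustrative)
alpha = np.sqrt((P_r - P_t) / (sigma_h2 * P_s + tau * sigma_n2))

# Monte Carlo check of eq. (4): E{||alpha(T_s h + n_r) + t_r||^2} should approach P_r
powers = [np.linalg.norm(alpha * (T_s @ cn((N_s, 1), sigma_h2)
                                  + cn((tau, 1), sigma_n2)) + t_r) ** 2
          for _ in range(10000)]
print(np.mean(powers), P_r)                  # the two numbers should be close
```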
III. IN-CHANNEL ESTIMATION
A. Iterative LMMSE In-Channel Estimator
Before proceeding, let us define $\mathbf{y} = \operatorname{vec}(\mathbf{Y})$ and $\mathbf{n}_d = \operatorname{vec}(\mathbf{N}_d)$. Resorting to the Kronecker product property $\operatorname{vec}(\mathbf{ABC}) = (\mathbf{C}^T \otimes \mathbf{A})\operatorname{vec}(\mathbf{B})$ [29], we can obtain
$$\mathbf{y} = \alpha(\mathbf{g} \otimes \mathbf{T}_s)\mathbf{h} + (\mathbf{g} \otimes \mathbf{I}_\tau)\mathbf{t}_r + \underbrace{\alpha(\mathbf{g} \otimes \mathbf{I}_\tau)\mathbf{n}_r + \mathbf{n}_d}_{\mathbf{n}} = \big(\alpha(\mathbf{I}_{N_d} \otimes \mathbf{T}_s\mathbf{h}) + (\mathbf{I}_{N_d} \otimes \mathbf{t}_r)\big)\mathbf{g} + \mathbf{n}, \tag{6}$$
where the equivalent noise vector $\mathbf{n}$, defined as the underbraced term, depends on the specific realization of $\mathbf{g}$.
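Both forms of (6) are straightforward to verify numerically; a sketch continuing the code above, where np.kron implements the Kronecker product and $\operatorname{vec}(\cdot)$ corresponds to column-major flattening:

```python
# Re-generate the phase-II reception of eq. (3) with the alpha from eq. (5)
Y = alpha * (T_s @ h + n_r) @ g.T + t_r @ g.T + N_dmat

# Vectorize: vec() stacks columns, i.e. flatten in Fortran (column-major) order
y = Y.flatten(order='F').reshape(-1, 1)
n_d = N_dmat.flatten(order='F').reshape(-1, 1)
n = alpha * np.kron(g, np.eye(tau)) @ n_r + n_d   # equivalent noise of eq. (6)

# First form of eq. (6): linear in h for a fixed g
y1 = alpha * np.kron(g, T_s) @ h + np.kron(g, np.eye(tau)) @ t_r + n
# Second form of eq. (6): linear in g for a fixed h
y2 = (alpha * np.kron(np.eye(N_d), T_s @ h) + np.kron(np.eye(N_d), t_r)) @ g + n

print(np.allclose(y, y1), np.allclose(y, y2))     # True True
```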
When the in-channel prior statistics are known, the optimal in-channel recovery method is the MAP estimator, which can be formulated as
$$\{\hat{\mathbf{h}}, \hat{\mathbf{g}}\} = \arg\max_{\mathbf{h},\mathbf{g}}\ p(\mathbf{y}|\mathbf{h},\mathbf{g})\,p(\mathbf{h},\mathbf{g}), \tag{7}$$
where $p(\mathbf{y}|\mathbf{h},\mathbf{g})$ denotes the PDF of $\mathbf{y}$ conditioned on $\mathbf{h}$, $\mathbf{g}$, and $p(\mathbf{h},\mathbf{g})$ is the joint PDF of $\mathbf{h}$, $\mathbf{g}$. With (6), we can obtain
$$\ln p(\mathbf{y}|\mathbf{h},\mathbf{g}) = \text{Const.} - \ln|\mathbf{C}_{\mathbf{n}|\mathbf{h},\mathbf{g}}| - (\mathbf{y}-\boldsymbol{\mu})^H \mathbf{C}_{\mathbf{n}|\mathbf{h},\mathbf{g}}^{-1}(\mathbf{y}-\boldsymbol{\mu}), \tag{8}$$
$$\ln p(\mathbf{h},\mathbf{g}) = \text{Const.} - \sigma_h^{-2}\|\mathbf{h}\|^2 - \sigma_g^{-2}\|\mathbf{g}\|^2, \tag{9}$$
where
$$\boldsymbol{\mu} = \alpha(\mathbf{g} \otimes \mathbf{T}_s)\mathbf{h} + (\mathbf{g} \otimes \mathbf{I}_\tau)\mathbf{t}_r = \big(\alpha(\mathbf{I}_{N_d} \otimes \mathbf{T}_s\mathbf{h}) + (\mathbf{I}_{N_d} \otimes \mathbf{t}_r)\big)\mathbf{g}, \tag{10}$$
$$\mathbf{C}_{\mathbf{n}|\mathbf{h},\mathbf{g}} = \mathbb{E}\{\mathbf{n}\mathbf{n}^H|\mathbf{h},\mathbf{g}\} = \sigma_n^2(\alpha^2\mathbf{g}\mathbf{g}^H + \mathbf{I}_{N_d}) \otimes \mathbf{I}_\tau, \tag{11}$$
and the statistical assumptions about both $\mathbf{h}$ and $\mathbf{g}$ are invoked here. Furthermore, the Kronecker product property $(\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = \mathbf{AC} \otimes \mathbf{BD}$ is utilized in the above derivation.
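The Kronecker structure of (11) can likewise be checked against a Monte Carlo estimate of $\mathbb{E}\{\mathbf{n}\mathbf{n}^H|\mathbf{g}\}$; a sketch continuing the code above:

```python
# Equivalent-noise covariance of eq. (11), built with the Kronecker identity
C = sigma_n2 * np.kron(alpha ** 2 * (g @ g.conj().T) + np.eye(N_d), np.eye(tau))

# Monte Carlo estimate of E{n n^H | g} for comparison
samples = np.hstack([alpha * np.kron(g, np.eye(tau)) @ cn((tau, 1), sigma_n2)
                     + cn((tau * N_d, 1), sigma_n2) for _ in range(20000)])
C_mc = samples @ samples.conj().T / samples.shape[1]
print(np.max(np.abs(C - C_mc)))   # small, and shrinking as the sample count grows
```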
After straightforward calculation, the MAP in-channel estimator can be represented as
$$\{\hat{\mathbf{h}}, \hat{\mathbf{g}}\} = \arg\min_{\mathbf{h},\mathbf{g}}\ (\mathbf{y}-\boldsymbol{\mu})^H \mathbf{C}_{\mathbf{n}|\mathbf{h},\mathbf{g}}^{-1}(\mathbf{y}-\boldsymbol{\mu}) + \ln|\mathbf{C}_{\mathbf{n}|\mathbf{h},\mathbf{g}}| + \sigma_h^{-2}\|\mathbf{h}\|^2 + \sigma_g^{-2}\|\mathbf{g}\|^2. \tag{12}$$
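To make the difficulty concrete, the cost in (12) is easy to evaluate pointwise, as sketched below, but minimizing it still demands a search over the joint complex-valued $(\mathbf{h}, \mathbf{g})$ space; the prior variance $\sigma_g^2$ is our illustrative choice.

```python
# Pointwise evaluation of the MAP cost in eq. (12) for a candidate pair (h_c, g_c)
sigma_g2 = 1.0                    # variance of each entry of g, matching the prior in (9)

def map_cost(h_c, g_c):
    mu = alpha * np.kron(g_c, T_s) @ h_c + np.kron(g_c, np.eye(tau)) @ t_r
    C = sigma_n2 * np.kron(alpha ** 2 * (g_c @ g_c.conj().T) + np.eye(N_d), np.eye(tau))
    r = y - mu
    sign, logdet = np.linalg.slogdet(C)
    return ((r.conj().T @ np.linalg.solve(C, r)).real.item() + logdet
            + np.linalg.norm(h_c) ** 2 / sigma_h2 + np.linalg.norm(g_c) ** 2 / sigma_g2)

print(map_cost(h, g))             # cost at the true channels, for reference only
```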
Unfortunately, due to its complicated structure, especially the presence of $\mathbf{C}_{\mathbf{n}|\mathbf{h},\mathbf{g}}$, the MAP estimator requires a high-dimensional search and is difficult to implement. Theoretically, the LMMSE estimator, which minimizes the MSE of the unknown parameters' estimates under the constraint that the estimator must be linear, can be adopted as a suboptimal method under the Bayesian framework. However, due to the presence of the nonlinear term $(\mathbf{g} \otimes \mathbf{T}_s)\mathbf{h}$ or $(\mathbf{I}_{N_d} \otimes \mathbf{T}_s\mathbf{h})\mathbf{g}$, the model in (6) is not Bayesian linear with respect to $\mathbf{h}$ and $\mathbf{g}$. Thus, the Bayesian Gauss-Markov theorem does not hold here, and the LMMSE estimator cannot be directly used to estimate the in-channels $\mathbf{h}$ and $\mathbf{g}$. Instead, we conceive an iterative LMMSE in-channel estimator.
The proposed iterative LMMSE in-channel estimator is based on the following observation: for a specific realization of $\mathbf{g}$ (or $\mathbf{h}$), the data model of $\mathbf{y}$ in (6) is Bayesian linear with respect to $\mathbf{h}$ (or $\mathbf{g}$). Hence, with the Bayesian Gauss-Markov theorem [30], the LMMSE estimation of $\mathbf{h}$ conditioned on a given $\mathbf{g}$ can be formulated as
$$\hat{\mathbf{h}}|\mathbf{g} = \alpha\big(\mathbf{R}_h^{-1} + \alpha^2(\mathbf{g} \otimes \mathbf{T}_s)^H \mathbf{C}_{\mathbf{n}|\mathbf{g}}^{-1}(\mathbf{g} \otimes \mathbf{T}_s)\big)^{-1}(\mathbf{g} \otimes \mathbf{T}_s)^H \mathbf{C}_{\mathbf{n}|\mathbf{g}}^{-1}\big(\mathbf{y} - (\mathbf{g} \otimes \mathbf{I}_\tau)\mathbf{t}_r\big), \tag{13}$$
where the covariance matrix
$$\mathbf{C}_{\mathbf{n}|\mathbf{g}} = \mathbb{E}\{\mathbf{n}\mathbf{n}^H|\mathbf{g}\} = \sigma_n^2(\alpha^2\mathbf{g}\mathbf{g}^H + \mathbf{I}_{N_d}) \otimes \mathbf{I}_\tau. \tag{14}$$
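A direct implementation of (13)–(14) then reads as follows; here we take $\mathbf{R}_h = \sigma_h^2 \mathbf{I}_{N_s}$, consistent with the Gaussian prior in (9), which is an assumption on our part.

```python
# Conditional LMMSE estimate of h given g, eqs. (13)-(14); R_h = sigma_h^2 * I assumed
def lmmse_h_given_g(y_vec, g_c):
    R_h_inv = np.eye(N_s) / sigma_h2
    C_inv = np.linalg.inv(sigma_n2 * np.kron(alpha ** 2 * (g_c @ g_c.conj().T)
                                             + np.eye(N_d), np.eye(tau)))
    A = np.kron(g_c, T_s)                  # per eq. (6), y = alpha*A@h + known + n
    G = np.linalg.inv(R_h_inv + alpha ** 2 * A.conj().T @ C_inv @ A)
    return alpha * G @ A.conj().T @ C_inv @ (y_vec - np.kron(g_c, np.eye(tau)) @ t_r)

h_hat = lmmse_h_given_g(y, g)              # approaches h as the noise variance shrinks
```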
Utilizing the matrix property equations $(\mathbf{I} + \mathbf{AB})^{-1} = \mathbf{I} - \mathbf{A}(\mathbf{I} + \mathbf{BA})^{-1}\mathbf{B}$ and $(\mathbf{A} \otimes \mathbf{B})^{-1} = \mathbf{A}^{-1} \otimes \mathbf{B}^{-1}$, we can derive the inverse of $\mathbf{C}_{\mathbf{n}|\mathbf{g}}$ as
$$\mathbf{C}_{\mathbf{n}|\mathbf{g}}^{-1} = \sigma_n^{-2}\big(\mathbf{I}_{N_d} - \alpha^2\mathbf{g}(1 + \alpha^2\mathbf{g}^H\mathbf{g})^{-1}\mathbf{g}^H\big) \otimes \mathbf{I}_\tau = \sigma_n^{-2}\Big(\mathbf{I}_{N_d} - \frac{\alpha^2}{1 + \alpha^2\|\mathbf{g}\|^2}\,\mathbf{g}\mathbf{g}^H\Big) \otimes \mathbf{I}_\tau. \tag{15}$$
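The closed form in (15) replaces an explicit $\tau N_d \times \tau N_d$ inversion with a rank-one correction, and it is easy to sanity-check numerically; a sketch continuing the code above:

```python
# Closed-form inverse of C_{n|g}, eq. (15), checked against a direct inversion
C_ng = sigma_n2 * np.kron(alpha ** 2 * (g @ g.conj().T) + np.eye(N_d), np.eye(tau))
scale = alpha ** 2 / (1 + alpha ** 2 * np.linalg.norm(g) ** 2)
C_ng_inv = np.kron(np.eye(N_d) - scale * (g @ g.conj().T), np.eye(tau)) / sigma_n2
print(np.allclose(C_ng_inv, np.linalg.inv(C_ng)))   # True
```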
Substituting (15) into (13), we can re-express (13) as
$$\hat{\mathbf{h}}|\mathbf{g} = \alpha\Big(\mathbf{R}_h^{-1} + \frac{\alpha^2\|\mathbf{g}\|^2}{\sigma_n^2(1 + \alpha^2\|\mathbf{g}\|^2)}\mathbf{T}_s^H\mathbf{T}_s\Big)^{-1}\Big(\frac{\mathbf{g}^H}{\sigma_n^2(1 + \alpha^2\|\mathbf{g}\|^2)} \otimes \mathbf{T}_s^H\Big)\big(\mathbf{y} - (\mathbf{g} \otimes \mathbf{I}_\tau)\mathbf{t}_r\big)$$
$$= \frac{\alpha}{\sigma_n^2(1 + \alpha^2\|\mathbf{g}\|^2)}\Big(\mathbf{R}_h^{-1} + \frac{\alpha^2\|\mathbf{g}\|^2}{\sigma_n^2(1 + \alpha^2\|\mathbf{g}\|^2)}\mathbf{T}_s^H\mathbf{T}_s\Big)^{-1}\mathbf{T}_s^H\big(\mathbf{Y} - \mathbf{t}_r\mathbf{g}^T\big)\mathbf{g}^*, \tag{16}$$