Capacity of Multi-antenna Gaussian Channels
I. Emre Telatar
Rm. 2C-174, Lucent Technologies, Bell Laboratories, 600 Mountain Avenue, Murray Hill, NJ, USA 07974, telatar@lucent.com
Abstract
We investigate the use of multiple transmitting and/or receiving antennas for single user communications over the additive Gaussian channel with and without fading. We derive formulas for the capacities and error exponents of such channels, and describe computational procedures to evaluate such formulas. We show that the potential gains of such multi-antenna systems over single-antenna systems are rather large under independence assumptions for the fades and noises at different receiving antennas.
1 Introduction
We will consider a single user Gaussian channel with multiple transmitting and/or receiving antennas. We will denote the number of transmitting antennas by $t$ and the number of receiving antennas by $r$. We will exclusively deal with a linear model in which the received vector $y \in \mathbb{C}^r$ depends on the transmitted vector $x \in \mathbb{C}^t$ via
$$y = Hx + n \tag{1}$$
where $H$ is an $r \times t$ complex matrix and $n$ is zero-mean complex Gaussian noise with independent, equal variance real and imaginary parts. We assume $E[nn^\dagger] = I_r$, that is, the noises corrupting the different receivers are independent. The transmitter is constrained in its total power to $P$,
$$E[x^\dagger x] \le P.$$
Equivalently, since $x^\dagger x = \operatorname{tr}(xx^\dagger)$, and expectation and trace commute,
$$\operatorname{tr}\bigl(E[xx^\dagger]\bigr) \le P. \tag{2}$$
This second form of the power constraint will prove more useful in the upcoming discussion.
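As a concrete illustration of the linear model (1) and the trace form (2) of the power constraint, the following minimal NumPy sketch simulates one channel use; the dimensions $r$ and $t$, the power budget $P$, and the choice $Q = (P/t)I_t$ are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
r, t, P = 4, 3, 10.0          # illustrative antenna counts and power budget

# A fixed (deterministic) complex channel matrix H.
H = (rng.standard_normal((r, t)) + 1j * rng.standard_normal((r, t))) / np.sqrt(2)

# Transmit covariance Q = E[x x^dagger]; Q = (P/t) I_t satisfies tr(Q) <= P, i.e. (2).
Q = (P / t) * np.eye(t)
assert np.trace(Q) <= P + 1e-9

# One channel use: x ~ CN(0, Q) (Q is real here, so real/imag parts are N(0, Q/2)),
# n ~ CN(0, I_r) with independent real/imag parts of variance 1/2, and y = Hx + n.
x = rng.multivariate_normal(np.zeros(t), Q / 2) + 1j * rng.multivariate_normal(np.zeros(t), Q / 2)
n = (rng.standard_normal(r) + 1j * rng.standard_normal(r)) / np.sqrt(2)
y = H @ x + n
```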
We will consider several scenarios for the matrix $H$:
1. $H$ is deterministic.

2. $H$ is a random matrix (for which we shall use the notation $\mathbf{H}$), chosen according to a probability distribution, and each use of the channel corresponds to an independent realization of $\mathbf{H}$.

3. $\mathbf{H}$ is a random matrix, but is fixed once it is chosen.
The main focus of this paper is on the last two of these cases. The first case is included so as to expose the techniques used in the later cases in a more familiar context. In the cases when $\mathbf{H}$ is random, we will assume that its entries form an i.i.d. Gaussian collection with zero-mean, independent real and imaginary parts, each with variance $1/2$. Equivalently, each entry of $\mathbf{H}$ has uniform phase and Rayleigh magnitude. This choice models a Rayleigh fading environment with enough separation within the receiving antennas and the transmitting antennas such that the fades for each transmitting-receiving antenna pair are independent. In all cases, we will assume that the realization of $\mathbf{H}$ is known to the receiver, or, equivalently, the channel output consists of the pair $(y, \mathbf{H})$, and the distribution of $\mathbf{H}$ is known at the transmitter.
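A short sketch of this fading model may help: it generates an $r \times t$ matrix $\mathbf{H}$ whose entries have i.i.d. zero-mean Gaussian real and imaginary parts of variance $1/2$, and empirically checks the uniform-phase, Rayleigh-magnitude description. The antenna counts and sample size are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
r, t = 4, 3                     # illustrative antenna counts

# One realization of H: i.i.d. entries, real and imaginary parts each N(0, 1/2).
H = (rng.standard_normal((r, t)) + 1j * rng.standard_normal((r, t))) / np.sqrt(2)

# Empirical check of the entry statistics over many i.i.d. draws of a single entry.
N = 200_000
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
print(np.mean(np.abs(h) ** 2))   # ~1: unit-variance entries
print(np.mean(np.abs(h)))        # ~sqrt(pi)/2 ~ 0.886: Rayleigh magnitude with scale 1/sqrt(2)
print(np.std(np.angle(h)))       # ~pi/sqrt(3) ~ 1.81: phase uniform on (-pi, pi]
```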
2 Preliminaries
A complex random vector $x \in \mathbb{C}^n$ is said to be Gaussian if the real random vector $\hat{x} \in \mathbb{R}^{2n}$ consisting of its real and imaginary parts, $\hat{x} = \begin{bmatrix} \operatorname{Re}(x) \\ \operatorname{Im}(x) \end{bmatrix}$, is Gaussian. Thus, to specify the distribution of a complex Gaussian random vector $x$, it is necessary to specify the expectation and covariance of $\hat{x}$, namely,
$$E[\hat{x}] \in \mathbb{R}^{2n} \quad \text{and} \quad E\bigl[(\hat{x} - E[\hat{x}])(\hat{x} - E[\hat{x}])^\dagger\bigr] \in \mathbb{R}^{2n \times 2n}.$$
We will say that a complex Gaussian random vector $x$ is circularly symmetric if the covariance of the corresponding $\hat{x}$ has the structure
$$E\bigl[(\hat{x} - E[\hat{x}])(\hat{x} - E[\hat{x}])^\dagger\bigr] = \frac{1}{2}\begin{bmatrix} \operatorname{Re}(Q) & -\operatorname{Im}(Q) \\ \operatorname{Im}(Q) & \operatorname{Re}(Q) \end{bmatrix} \tag{3}$$
for some Hermitian non-negative definite $Q \in \mathbb{C}^{n \times n}$. Note that the real part of a Hermitian matrix is symmetric and the imaginary part of a Hermitian matrix is anti-symmetric, and thus the matrix appearing in (3) is real and symmetric. In this case $E\bigl[(x - E[x])(x - E[x])^\dagger\bigr] = Q$, and thus, a circularly symmetric complex Gaussian random vector $x$ is specified by prescribing $E[x]$ and $E\bigl[(x - E[x])(x - E[x])^\dagger\bigr]$.
For any $z \in \mathbb{C}^n$ and $A \in \mathbb{C}^{n \times m}$ define
$$\hat{z} = \begin{bmatrix} \operatorname{Re}(z) \\ \operatorname{Im}(z) \end{bmatrix} \quad \text{and} \quad \hat{A} = \begin{bmatrix} \operatorname{Re}(A) & -\operatorname{Im}(A) \\ \operatorname{Im}(A) & \operatorname{Re}(A) \end{bmatrix}.$$
Lemma 1. The mappings $z \mapsto \hat{z} = \begin{bmatrix} \operatorname{Re}(z) \\ \operatorname{Im}(z) \end{bmatrix}$ and $A \mapsto \hat{A} = \begin{bmatrix} \operatorname{Re}(A) & -\operatorname{Im}(A) \\ \operatorname{Im}(A) & \operatorname{Re}(A) \end{bmatrix}$ have the following properties:
$$C = AB \iff \hat{C} = \hat{A}\hat{B} \tag{4a}$$
$$C = A + B \iff \hat{C} = \hat{A} + \hat{B} \tag{4b}$$
$$C = A^\dagger \iff \hat{C} = \hat{A}^\dagger \tag{4c}$$
$$C = A^{-1} \iff \hat{C} = \hat{A}^{-1} \tag{4d}$$
$$\det(\hat{A}) = |\det(A)|^2 = \det(AA^\dagger) \tag{4e}$$
$$z = x + y \iff \hat{z} = \hat{x} + \hat{y} \tag{4f}$$
$$y = Ax \iff \hat{y} = \hat{A}\hat{x} \tag{4g}$$
$$\operatorname{Re}(x^\dagger y) = \hat{x}^\dagger \hat{y}. \tag{4h}$$
Proof. The properties (4a), (4b) and (4c) are immediate. (4d) follows from (4a) and the fact that $\hat{I}_n = I_{2n}$. (4e) follows from
$$\det(\hat{A}) = \det\left( \begin{bmatrix} I & iI \\ 0 & I \end{bmatrix} \hat{A} \begin{bmatrix} I & -iI \\ 0 & I \end{bmatrix} \right) = \det \begin{bmatrix} A & 0 \\ \operatorname{Im}(A) & \overline{A} \end{bmatrix} = \det(A)\det(\overline{A}).$$
(4f), (4g) and (4h) are immediate.
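The hat mapping of Lemma 1 is easy to exercise numerically; the following sketch, with arbitrary random test matrices, checks a few of the properties, namely (4a), (4c) and (4e).

```python
import numpy as np

def hat(A: np.ndarray) -> np.ndarray:
    """Real representation [[Re A, -Im A], [Im A, Re A]] of a complex matrix A."""
    return np.block([[A.real, -A.imag],
                     [A.imag,  A.real]])

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

assert np.allclose(hat(A @ B), hat(A) @ hat(B))                       # (4a)
assert np.allclose(hat(A.conj().T), hat(A).T)                         # (4c)
assert np.isclose(np.linalg.det(hat(A)), abs(np.linalg.det(A)) ** 2)  # (4e)
```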
Corollary 1. $U \in \mathbb{C}^{n \times n}$ is unitary if and only if $\hat{U} \in \mathbb{R}^{2n \times 2n}$ is orthonormal.

Proof. $U^\dagger U = I_n \iff \hat{U}^\dagger \hat{U} = \hat{I}_n = I_{2n}$.
Corollary 2. If $Q \in \mathbb{C}^{n \times n}$ is non-negative definite then so is $\hat{Q} \in \mathbb{R}^{2n \times 2n}$.

Proof. Given $x = [x_1, \ldots, x_{2n}]^\dagger \in \mathbb{R}^{2n}$, let $z = [x_1 + jx_{n+1}, \ldots, x_n + jx_{2n}]^\dagger \in \mathbb{C}^n$, so that $x = \hat{z}$. Then by (4g) and (4h)
$$x^\dagger \hat{Q} x = \operatorname{Re}(z^\dagger Q z) = z^\dagger Q z \ge 0.$$
The probability density (with respect to the standard Lebesgue measure on $\mathbb{C}^n$) of a circularly symmetric complex Gaussian with mean $\mu$ and covariance $Q$ is given by
$$\gamma_{\mu,Q}(x) = \det(\pi \hat{Q})^{-1/2} \exp\bigl(-(\hat{x} - \hat{\mu})^\dagger \hat{Q}^{-1} (\hat{x} - \hat{\mu})\bigr) = \det(\pi Q)^{-1} \exp\bigl(-(x - \mu)^\dagger Q^{-1} (x - \mu)\bigr)$$
where the second equality follows from (4d)-(4h). The differential entropy of a complex Gaussian $x$ with covariance $Q$ is given by
$$\begin{aligned}
\mathcal{H}(\gamma_Q) &= E_{\gamma_Q}[-\log \gamma_Q(x)] \\
&= \log\det(\pi Q) + (\log e)\, E[x^\dagger Q^{-1} x] \\
&= \log\det(\pi Q) + (\log e)\operatorname{tr}\bigl(E[xx^\dagger]\, Q^{-1}\bigr) \\
&= \log\det(\pi Q) + (\log e)\operatorname{tr}(I) \\
&= \log\det(\pi e Q).
\end{aligned}$$
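A quick numerical sanity check of these formulas, assuming an arbitrary Hermitian non-negative definite $Q$: samples of a circularly symmetric $x$ are drawn through the real representation (3), the sample covariance is compared with $Q$, and the entropy $\log\det(\pi e Q)$ is evaluated (here in nats, i.e. with the natural logarithm).

```python
import numpy as np

def hat(A):
    return np.block([[A.real, -A.imag], [A.imag, A.real]])

rng = np.random.default_rng(3)
n, N = 2, 200_000
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = G @ G.conj().T                      # an arbitrary non-negative definite covariance

# x ~ CN(0, Q): by (3), the real vector [Re x; Im x] is N(0, hat(Q)/2).
xr = rng.multivariate_normal(np.zeros(2 * n), hat(Q) / 2, size=N)
x = xr[:, :n] + 1j * xr[:, n:]

Q_emp = x.T @ x.conj() / N              # empirical E[x x^dagger], should approach Q
print(np.max(np.abs(Q_emp - Q)))

entropy_nats = n * np.log(np.pi * np.e) + np.log(np.linalg.det(Q).real)  # log det(pi e Q)
print(entropy_nats)
```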
For us, the importance of the circularly symmetric complex Gaussians is due to the following lemma: circularly symmetric complex Gaussians are entropy maximizers.
Lemma 2. Suppose the complex random vector $x \in \mathbb{C}^n$ is zero-mean and satisfies $E[xx^\dagger] = Q$, i.e., $E[x_i x_j^*] = Q_{ij}$, $1 \le i, j \le n$. Then the entropy of $x$ satisfies $\mathcal{H}(x) \le \log\det(\pi e Q)$, with equality if and only if $x$ is a circularly symmetric complex Gaussian with $E[xx^\dagger] = Q$.
Proof. Let $p$ be any density function satisfying $\int_{\mathbb{C}^n} p(x)\, x_i x_j^* \, dx = Q_{ij}$, $1 \le i, j \le n$. Let
$$\gamma_Q(x) = \det(\pi Q)^{-1} \exp\bigl(-x^\dagger Q^{-1} x\bigr).$$
Observe that $\int_{\mathbb{C}^n} \gamma_Q(x)\, x_i x_j^* \, dx = Q_{ij}$, and that $\log \gamma_Q(x)$ is a linear combination of the terms $x_i x_j^*$. Thus $E_{\gamma_Q}[\log \gamma_Q(x)] = E_p[\log \gamma_Q(x)]$. Then,
$$\begin{aligned}
\mathcal{H}(p) - \mathcal{H}(\gamma_Q) &= -\int_{\mathbb{C}^n} p(x)\log p(x)\, dx + \int_{\mathbb{C}^n} \gamma_Q(x)\log\gamma_Q(x)\, dx \\
&= -\int_{\mathbb{C}^n} p(x)\log p(x)\, dx + \int_{\mathbb{C}^n} p(x)\log\gamma_Q(x)\, dx \\
&= \int_{\mathbb{C}^n} p(x)\log\frac{\gamma_Q(x)}{p(x)}\, dx \\
&\le 0,
\end{aligned}$$
with equality only if $p = \gamma_Q$. Thus $\mathcal{H}(p) \le \mathcal{H}(\gamma_Q)$.
Lemma 3. If $x \in \mathbb{C}^n$ is a circularly symmetric complex Gaussian then so is $y = Ax$ for any $A \in \mathbb{C}^{m \times n}$.
Proof. We may assume $x$ is zero-mean. Let $Q = E[xx^\dagger]$. Then $y$ is zero-mean, $\hat{y} = \hat{A}\hat{x}$, and
$$E[\hat{y}\hat{y}^\dagger] = \hat{A}\, E[\hat{x}\hat{x}^\dagger]\, \hat{A}^\dagger = \tfrac{1}{2}\hat{A}\hat{Q}\hat{A}^\dagger = \tfrac{1}{2}\hat{K}$$
where $K = AQA^\dagger$.
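This lemma can also be checked by simulation under arbitrary illustrative choices of $Q$ and $A$: the empirical covariance of $y = Ax$ should approach $K = AQA^\dagger$, and, as a standard equivalent characterization not stated explicitly above, the pseudo-covariance $E[yy^T]$ of a zero-mean circularly symmetric vector should vanish.

```python
import numpy as np

def hat(M):
    return np.block([[M.real, -M.imag], [M.imag, M.real]])

rng = np.random.default_rng(4)
n, m, N = 3, 2, 200_000
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = G @ G.conj().T                                   # illustrative covariance of x
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Sample x ~ CN(0, Q) via the real representation (3), then form y = A x.
xr = rng.multivariate_normal(np.zeros(2 * n), hat(Q) / 2, size=N)
x = xr[:, :n] + 1j * xr[:, n:]
y = x @ A.T                                          # each row is A applied to one sample

K = A @ Q @ A.conj().T
print(np.max(np.abs(y.T @ y.conj() / N - K)))        # covariance -> K = A Q A^dagger
print(np.max(np.abs(y.T @ y / N)))                   # pseudo-covariance -> 0 (circular symmetry)
```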
Lemma 4. If $x$ and $y$ are independent circularly symmetric complex Gaussians, then $z = x + y$ is a circularly symmetric complex Gaussian.

Proof. Let $A = E[xx^\dagger]$ and $B = E[yy^\dagger]$. Then $E[\hat{z}\hat{z}^\dagger] = \tfrac{1}{2}\hat{C}$ with $C = A + B$.
3 The Gaussian channel with fixed transfer function
We will start by reminding ourselves of the case of deterministic $H$. The results of this section can be inferred from [1, Ch. 8].
3.1 Capacity
We will first derive an expression for the capacity $C(H, P)$ of this channel. To that end, we will maximize the average mutual information $I(x; y)$ between the input and the output of the channel over the choice of the distribution of $x$.
By the singular value decomposition theorem, any matrix $H \in \mathbb{C}^{r \times t}$ can be written as
$$H = UDV^\dagger$$
where $U \in \mathbb{C}^{r \times r}$ and $V \in \mathbb{C}^{t \times t}$ are unitary, and $D \in \mathbb{R}^{r \times t}$ is non-negative and diagonal. In fact, the diagonal entries of $D$ are the non-negative square roots of the eigenvalues of $HH^\dagger$, the columns of $U$ are the eigenvectors of $HH^\dagger$ and the columns of $V$ are the eigenvectors of $H^\dagger H$. Thus, we can write (1) as
$$y = UDV^\dagger x + n.$$
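The change of coordinates carried out next can be previewed with the following sketch, which checks numerically, for an arbitrary random $H$ and one simulated channel use, that $H = UDV^\dagger$, that the squared singular values are eigenvalues of $HH^\dagger$, and that $\tilde{y} = U^\dagger y$, $\tilde{x} = V^\dagger x$, $\tilde{n} = U^\dagger n$ satisfy $\tilde{y} = D\tilde{x} + \tilde{n}$; the antenna counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
r, t = 4, 3                                          # illustrative antenna counts
H = (rng.standard_normal((r, t)) + 1j * rng.standard_normal((r, t))) / np.sqrt(2)

# Singular value decomposition H = U D V^dagger (numpy returns V^dagger directly).
U, s, Vh = np.linalg.svd(H, full_matrices=True)
D = np.zeros((r, t))
D[:len(s), :len(s)] = np.diag(s)
assert np.allclose(U @ D @ Vh, H)

# The singular values are the non-negative square roots of eigenvalues of H H^dagger.
eigs = np.sort(np.linalg.eigvalsh(H @ H.conj().T))[::-1]
assert np.allclose(s ** 2, eigs[:len(s)])

# One channel use, rotated into the new coordinates: y~ = D x~ + n~.
x = (rng.standard_normal(t) + 1j * rng.standard_normal(t)) / np.sqrt(2)
n = (rng.standard_normal(r) + 1j * rng.standard_normal(r)) / np.sqrt(2)
y = H @ x + n
y_t, x_t, n_t = U.conj().T @ y, Vh @ x, U.conj().T @ n
assert np.allclose(y_t, D @ x_t + n_t)
```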
Let $\tilde{y} = U^\dagger y$, $\tilde{x} = V^\dagger x$, $\tilde{n} = U^\dagger n$. Note that $U$ and $V$ are invertible, $\tilde{n}$ has the same distribution as $n$ and $E[\tilde{x}^\dagger \tilde{x}] = E[x^\dagger x]$. Thus, the original channel is equivalent to the channel
$$\tilde{y} = D\tilde{x} + \tilde{n} \tag{5}$$
where $\tilde{n}$ is zero-mean, Gaussian, with independent, identically distributed real and imaginary parts and $E[\tilde{n}\tilde{n}^\dagger] = I_r$. Since $H$ is of rank at most $\min\{r,t\}$, at most $\min\{r,t\}$ of its singular values are non-zero. Denoting these by $\lambda_i^{1/2}$, $i = 1, \ldots, \min\{r,t\}$, we can write (5) component-wise, to get
$$\tilde{y}_i = \lambda_i^{1/2}\, \tilde{x}_i + \tilde{n}_i, \qquad 1 \le i \le \min\{r,t\},$$
(The remaining 27 pages of the paper are not included in this excerpt.)