importance function, and assign (nonnormalized) weights according to

$$w^{*(m)} = \frac{p\big(\mathbf{x}^{(m)}\big)}{\pi\big(\mathbf{x}^{(m)}\big)} \qquad (9)$$
which upon normalization become

$$w^{(m)} = \frac{w^{*(m)}}{\sum_{i=1}^{M} w^{*(i)}}. \qquad (10)$$
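As a concrete illustration of (9) and (10), the following sketch estimates an expectation by importance sampling. The Gaussian target and importance function here are our own illustrative assumptions, not a model from this article.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
M = 5000

# Illustrative assumption: target p = N(1, 0.5^2), importance function pi = N(0, 1);
# pi has the same support as p, as required.
x = rng.normal(0.0, 1.0, size=M)                         # x^(m) ~ pi(x)

w_star = norm.pdf(x, 1.0, 0.5) / norm.pdf(x, 0.0, 1.0)   # nonnormalized weights, eq. (9)
w = w_star / w_star.sum()                                # normalized weights, eq. (10)

# The weighted particles approximate expectations under p; here E[x] should be close to 1.
print(np.sum(w * x))
```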
Suppose now that the posterior distribution $p(\mathbf{x}_{0:t-1}|\mathbf{y}_{0:t-1})$ is approximated by the discrete random measure $\chi_{t-1} = \big\{\mathbf{x}_{0:t-1}^{(m)}, w_{t-1}^{(m)}\big\}_{m=1}^{M}$. Note that the trajectories or streams of particles $\mathbf{x}_{0:t-1}^{(m)}$ can be considered particles of $p(\mathbf{x}_{0:t-1}|\mathbf{y}_{0:t-1})$. Given the discrete random measure $\chi_{t-1}$ and the observation $\mathbf{y}_t$, the objective is to exploit $\chi_{t-1}$ in obtaining $\chi_t$. Sequential importance sampling methods achieve this by generating particles $\mathbf{x}_t^{(m)}$ and appending them to $\mathbf{x}_{0:t-1}^{(m)}$ to form $\mathbf{x}_{0:t}^{(m)}$, and updating the weights $w_t^{(m)}$ so that $\chi_t$ allows for accurate estimates of the unknowns of interest at time $t$.
If we use an importance function that can be factored as

$$\pi(\mathbf{x}_{0:t}|\mathbf{y}_{0:t}) = \pi\big(\mathbf{x}_t|\mathbf{x}_{0:t-1}, \mathbf{y}_{0:t}\big)\, \pi\big(\mathbf{x}_{0:t-1}|\mathbf{y}_{0:t-1}\big) \qquad (11)$$
and if

$$\mathbf{x}_{0:t-1}^{(m)} \sim \pi\big(\mathbf{x}_{0:t-1}|\mathbf{y}_{0:t-1}\big) \qquad (12)$$
and

$$w_{t-1}^{(m)} \propto \frac{p\big(\mathbf{x}_{0:t-1}^{(m)}|\mathbf{y}_{0:t-1}\big)}{\pi\big(\mathbf{x}_{0:t-1}^{(m)}|\mathbf{y}_{0:t-1}\big)} \qquad (13)$$
we can augment the trajectory $\mathbf{x}_{0:t-1}^{(m)}$ with $\mathbf{x}_t^{(m)}$, where

$$\mathbf{x}_t^{(m)} \sim \pi\big(\mathbf{x}_t|\mathbf{x}_{0:t-1}^{(m)}, \mathbf{y}_{0:t}\big) \qquad (14)$$
and easily associate with it an updated weight $w_t^{(m)}$ obtained according to

$$w_t^{(m)} \propto \frac{p\big(\mathbf{y}_t|\mathbf{x}_t^{(m)}\big)\, p\big(\mathbf{x}_t^{(m)}|\mathbf{x}_{t-1}^{(m)}\big)}{\pi\big(\mathbf{x}_t^{(m)}|\mathbf{x}_{0:t-1}^{(m)}, \mathbf{y}_{0:t}\big)}\, w_{t-1}^{(m)}. \qquad (15)$$
The sequential importance sampling algorithm can thus be implemented by performing the following two steps for every $t$:
1) Draw particles $\mathbf{x}_t^{(m)} \sim \pi\big(\mathbf{x}_t|\mathbf{x}_{0:t-1}^{(m)}, \mathbf{y}_{0:t}\big)$, where $m = 1, 2, \ldots, M$.
2) Compute the weights $w_t^{(m)}$ according to (15).
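A code sketch of these two steps may help. All four callables below (`sample_importance`, `likelihood`, `transition_pdf`, `importance_pdf`) are hypothetical placeholders for the model-specific densities appearing in (14) and (15); this is a schematic of the update, not a complete filter.

```python
import numpy as np

def sis_step(x_traj, w_prev, y_t,
             sample_importance, likelihood, transition_pdf, importance_pdf):
    """One sequential importance sampling step.

    x_traj : (M, t) array of trajectories x_{0:t-1}^{(m)}
    w_prev : (M,) array of normalized weights w_{t-1}^{(m)}
    The four callables are hypothetical, model-specific densities.
    """
    # 1) Draw x_t^(m) ~ pi(x_t | x_{0:t-1}^(m), y_{0:t})   -- eq. (14)
    x_t = sample_importance(x_traj, y_t)

    # 2) Update the weights according to eq. (15), then normalize
    w = w_prev * likelihood(y_t, x_t) * transition_pdf(x_t, x_traj[:, -1]) \
        / importance_pdf(x_t, x_traj, y_t)
    w /= w.sum()

    # Append the new particles to form the trajectories x_{0:t}^{(m)}
    return np.column_stack([x_traj, x_t]), w
```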
The importance function plays a very important role in the performance of the particle filter. This function must have the same support as the probability distribution that is being approximated. In general, the closer the importance function is to that distribution, the better the approximation. In the literature, the two most frequently used importance functions are the prior and the optimal importance function. The prior importance function is given by $p\big(\mathbf{x}_t|\mathbf{x}_{t-1}^{(m)}\big)$, and it implies particle weight updates by

$$w_t^{(m)} \propto w_{t-1}^{(m)}\, p\big(\mathbf{y}_t|\mathbf{x}_t^{(m)}\big). \qquad (16)$$
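Note that with the prior importance function, the transition density in the numerator of (15) cancels the importance density in the denominator, which is how (16) arises. Below is a minimal sketch of one such update for a scalar random-walk model of our own choosing (an illustrative assumption, not a model from this article).

```python
import numpy as np

rng = np.random.default_rng(1)
M = 1000

# Illustrative assumption: x_t = x_{t-1} + u_t with u_t ~ N(0, 1),
# and y_t = x_t + v_t with v_t ~ N(0, 0.5^2).
x = rng.normal(0.0, 1.0, size=M)   # particles at time t-1
w = np.full(M, 1.0 / M)            # weights at time t-1
y_t = 0.3                          # hypothetical observation

# Draw from the prior importance function p(x_t | x_{t-1}^(m))
x = x + rng.normal(0.0, 1.0, size=M)

# Weight update of eq. (16): multiply by the likelihood p(y_t | x_t^(m)), then normalize
w *= np.exp(-0.5 * ((y_t - x) / 0.5) ** 2)
w /= w.sum()
```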
The optimal importance function minimizes the variance of the importance weights conditional on the trajectory $\mathbf{x}_{0:t-1}^{(m)}$ and the observations $\mathbf{y}_{0:t}$ and is given by $p\big(\mathbf{x}_t|\mathbf{x}_{0:t-1}^{(m)}, \mathbf{y}_{0:t}\big)$ [11]. When the optimal function is used, the update of the weights is carried out according to

$$w_t^{(m)} \propto w_{t-1}^{(m)}\, p\big(\mathbf{y}_t|\mathbf{x}_{t-1}^{(m)}\big). \qquad (17)$$
Note that implementations of particle filters with prior importance functions are much easier than those with optimal importance functions. The reason is that the computation of $p\big(\mathbf{y}_t|\mathbf{x}_{t-1}^{(m)}\big)$ requires integration, since $p\big(\mathbf{y}_t|\mathbf{x}_{t-1}^{(m)}\big) = \int p(\mathbf{y}_t|\mathbf{x}_t)\, p\big(\mathbf{x}_t|\mathbf{x}_{t-1}^{(m)}\big)\, d\mathbf{x}_t$.
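In special cases the integral is tractable. For a linear-Gaussian scalar model of our own choosing (an illustrative assumption, not a model from this article), both the optimal importance function and $p(y_t|x_{t-1}^{(m)})$ have Gaussian closed forms, as in the sketch below.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative assumption: x_t = 0.9 x_{t-1} + u_t, u_t ~ N(0, q);
#                          y_t = x_t + v_t,        v_t ~ N(0, r).
q, r = 1.0, 0.25

def optimal_step(x_prev, w_prev, y_t):
    """One step with the optimal importance function p(x_t | x_{t-1}, y_t)."""
    mean_pred = 0.9 * x_prev
    # p(x_t | x_{t-1}, y_t) is Gaussian with precision 1/q + 1/r
    s2 = q * r / (q + r)
    m = s2 * (mean_pred / q + y_t / r)
    x_t = m + np.sqrt(s2) * rng.normal(size=x_prev.shape)
    # Weight update of eq. (17): p(y_t | x_{t-1}^(m)) = N(y_t; 0.9 x_{t-1}, q + r)
    w = w_prev * np.exp(-0.5 * (y_t - mean_pred) ** 2 / (q + r))
    return x_t, w / w.sum()
```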
A major problem with particle filtering is that the discrete random measure degenerates quickly. In other words, all the particles except for a very few are assigned negligible weights. The degeneracy implies that the performance of the particle filter will deteriorate. Degeneracy, however, can be reduced by using good importance sampling functions and resampling.
Resampling is a scheme that eliminates particles with
small weights and replicates particles with large weights.
In principle, it is implemented as follows:
1) Draw $M$ particles $\mathbf{x}_t^{*(m)}$ from the discrete distribution $\chi_t$.
2) Let $\mathbf{x}_t^{(m)} = \mathbf{x}_t^{*(m)}$, and assign equal weights ($1/M$) to the particles.
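A minimal sketch of this scheme, using multinomial draws from the weights (the choice of multinomial resampling over other variants is ours, for illustration):

```python
import numpy as np

def resample(x, w, rng=None):
    """Multinomial resampling: draw M indices in proportion to the weights,
    replicate the selected particles, and reset all weights to 1/M."""
    if rng is None:
        rng = np.random.default_rng()
    M = len(w)
    idx = rng.choice(M, size=M, p=w)    # 1) draw M particles from chi_t
    return x[idx], np.full(M, 1.0 / M)  # 2) assign equal weights 1/M
```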
The idea of resampling is depicted in Figure 2 with $M = 10$ particles. There, the left column of circles represents particles before resampling, where the diameters of the circles are proportional to the weights of the particles. The right column of circles represents the particles after resampling. In general, the large particles are replicated and the small particles are removed. For example, the “blue” particle with the largest weight is replicated three times and the “yellow” particle two times, whereas the green particles, which have small weights, are removed. Also, after resampling all the circles have equal diameters; that is, all the weights are set to $1/M$. In Figure 3, we represent pictorially the random measures and the actual probability distributions of interest, as well as the three steps of particle filtering: particle generation, weight update, and resampling. In the figure, the solid curves represent the distributions of interest, which are approximated by the discrete measures. The sizes of the particles reflect the weights that are assigned to them. Finally, in Figure 4, we display a flowchart that summarizes the particle filtering algorithm. At time $t$, a new set of particles is generated, and their weights are computed. Thereby we obtain the random measure $\chi_t$, which can be used for estimation of the desired unknowns. Before we proceed with the