trajectory x_{1:t} = x_1, ..., x_t of the robot. This estimation is performed given the observations z_{1:t} = z_1, ..., z_t and the odometry measurements u_{1:t-1} = u_1, ..., u_{t-1} obtained by
the mobile robot. The Rao-Blackwellized particle filter for
SLAM makes use of the following factorization
p(x_{1:t}, m | z_{1:t}, u_{1:t-1}) = p(m | x_{1:t}, z_{1:t}) · p(x_{1:t} | z_{1:t}, u_{1:t-1}).   (1)
This factorization allows us to first estimate only the trajectory
of the robot and then to compute the map given that trajectory.
Since the map strongly depends on the pose estimate of
the robot, this approach offers an efficient computation. This
technique is often referred to as Rao-Blackwellization.
Typically, Eq. (1) can be calculated efficiently because the posterior over maps p(m | x_{1:t}, z_{1:t}) can be computed analytically using “mapping with known poses” [31], since x_{1:t} and z_{1:t} are known.
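To make “mapping with known poses” concrete, the following is a minimal sketch of a log-odds occupancy grid update for a single scan taken at a known pose. It is only illustrative: the inverse sensor model values l_occ and l_free, the grid representation, and the function names are assumptions and not prescribed by the approach described here.

```python
import numpy as np

def update_grid_with_known_pose(log_odds, traversed_cells, endpoint_hit,
                                l_occ=0.85, l_free=-0.4):
    """Log-odds occupancy grid update for one scan taken at a known pose.

    log_odds        : 2-D array of per-cell log odds of occupancy.
    traversed_cells : (N, 2) integer array of grid cells touched by the beams.
    endpoint_hit    : length-N boolean array, True where the cell contains a
                      beam endpoint (obstacle), False for cells along the beam.
    """
    rows, cols = traversed_cells[:, 0], traversed_cells[:, 1]
    # Beam endpoints raise the occupancy evidence, traversed cells lower it.
    # np.add.at handles cells that are hit by several beams in the same scan.
    np.add.at(log_odds, (rows, cols), np.where(endpoint_hit, l_occ, l_free))
    return log_odds

def occupancy(log_odds):
    """Convert log odds back to occupancy probabilities p(m | x_{1:t}, z_{1:t})."""
    return 1.0 / (1.0 + np.exp(-log_odds))
```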
To estimate the posterior p(x_{1:t} | z_{1:t}, u_{1:t-1}) over the potential trajectories, one can apply a particle filter. Each particle
represents a potential trajectory of the robot. Furthermore, an
individual map is associated with each sample. The maps are
built from the observations and the trajectory represented by
the corresponding particle.
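For illustration, one possible per-particle representation is sketched below; the pose parameterization, grid size, and field names are assumptions and not prescribed by the algorithm.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Particle:
    # Hypothesized trajectory x_{1:t}^{(i)}: one (x, y, theta) pose per time step.
    trajectory: list = field(default_factory=lambda: [np.zeros(3)])
    # Map built from z_{1:t} along this particle's own trajectory (log-odds grid).
    log_odds_map: np.ndarray = field(default_factory=lambda: np.zeros((200, 200)))
    # Importance weight w_t^{(i)}.
    weight: float = 1.0
```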
One of the most common particle filtering algorithms is
the sampling importance resampling (SIR) filter. A Rao-
Blackwellized SIR filter for mapping incrementally processes
the sensor observations and the odometry readings as they
are available. It updates the set of samples that represents the
posterior about the map and the trajectory of the vehicle. The
process can be summarized by the following four steps:
1) Sampling: The next generation of particles {x_t^{(i)}} is obtained from the generation {x_{t-1}^{(i)}} by sampling from the proposal distribution π. Often, a probabilistic odometry motion model is used as the proposal distribution.
2) Importance Weighting: An individual importance weight w_t^{(i)} is assigned to each particle according to the importance sampling principle

   w_t^{(i)} = \frac{p(x_{1:t}^{(i)} | z_{1:t}, u_{1:t-1})}{π(x_{1:t}^{(i)} | z_{1:t}, u_{1:t-1})}.   (2)
The weights account for the fact that the proposal distri-
bution π is in general not equal to the target distribution
of successor states.
3) Resampling: Particles are drawn with replacement pro-
portional to their importance weight. This step is nec-
essary since only a finite number of particles is used to
approximate a continuous distribution. Furthermore, re-
sampling allows us to apply a particle filter in situations
in which the target distribution differs from the proposal.
After resampling, all the particles have the same weight.
4) Map Estimation: For each particle, the corresponding map estimate p(m^{(i)} | x_{1:t}^{(i)}, z_{1:t}) is computed based on the trajectory x_{1:t}^{(i)} of that sample and the history of observations z_{1:t} (see the sketch after this list).
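The four steps above can be summarized in code. The following is a minimal sketch of one Rao-Blackwellized SIR update, assuming the odometry motion model is used as the proposal (so the weight update of Eq. (6) reduces to the observation likelihood) and a particle representation like the one sketched above. The callables sample_motion_model, observation_likelihood, and integrate_scan are placeholders for the motion model, the sensor model, and mapping with known poses; their names and signatures are not defined by the paper.

```python
import copy
import numpy as np

def rbpf_sir_update(particles, u, z, sample_motion_model,
                    observation_likelihood, integrate_scan, rng=None):
    """One Rao-Blackwellized SIR update for odometry u and observation z.

    sample_motion_model(x_prev, u, rng) draws x_t ~ p(x_t | x_{t-1}, u_{t-1}),
    observation_likelihood(z, x, grid)  evaluates p(z_t | m^{(i)}, x_t^{(i)}),
    integrate_scan(grid, x, z)          performs mapping with known poses.
    """
    rng = rng or np.random.default_rng()

    # 1) Sampling: draw the new pose from the proposal (here: the odometry motion model).
    for p in particles:
        p.trajectory.append(sample_motion_model(p.trajectory[-1], u, rng))

    # 2) Importance weighting: with the motion model as proposal, the recursive
    #    update of Eq. (6) reduces to multiplying by the observation likelihood.
    for p in particles:
        p.weight *= observation_likelihood(z, p.trajectory[-1], p.log_odds_map)
    total = sum(p.weight for p in particles)
    for p in particles:
        p.weight /= total

    # 3) Resampling: draw particles with replacement proportional to their
    #    weights, then reset all weights to a uniform value.
    idx = rng.choice(len(particles), size=len(particles),
                     p=[p.weight for p in particles])
    particles = [copy.deepcopy(particles[i]) for i in idx]
    for p in particles:
        p.weight = 1.0 / len(particles)

    # 4) Map estimation: integrate z into each particle's map along its own trajectory.
    for p in particles:
        p.log_odds_map = integrate_scan(p.log_odds_map, p.trajectory[-1], z)
    return particles
```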
The implementation of this schema requires evaluating
the weights of the trajectories from scratch whenever a new
observation is available. Since the length of the trajectory
increases over time, this procedure would lead to an obviously
inefficient algorithm. According to Doucet et al. [7], we obtain
a recursive formulation to compute the importance weights by
restricting the proposal π to fulfill the following assumption
π(x_{1:t} | z_{1:t}, u_{1:t-1}) = π(x_t | x_{1:t-1}, z_{1:t}, u_{1:t-1}) · π(x_{1:t-1} | z_{1:t-1}, u_{1:t-2}).   (3)
Based on Eq. (2) and (3), the weights are computed as
w_t^{(i)} = \frac{p(x_{1:t}^{(i)} | z_{1:t}, u_{1:t-1})}{π(x_{1:t}^{(i)} | z_{1:t}, u_{1:t-1})}   (4)

         = \frac{η p(z_t | x_{1:t}^{(i)}, z_{1:t-1}) p(x_t^{(i)} | x_{t-1}^{(i)}, u_{t-1})}{π(x_t^{(i)} | x_{1:t-1}^{(i)}, z_{1:t}, u_{1:t-1})} · \underbrace{\frac{p(x_{1:t-1}^{(i)} | z_{1:t-1}, u_{1:t-2})}{π(x_{1:t-1}^{(i)} | z_{1:t-1}, u_{1:t-2})}}_{w_{t-1}^{(i)}}   (5)

         ∝ \frac{p(z_t | m_{t-1}^{(i)}, x_t^{(i)}) p(x_t^{(i)} | x_{t-1}^{(i)}, u_{t-1})}{π(x_t | x_{1:t-1}^{(i)}, z_{1:t}, u_{1:t-1})} · w_{t-1}^{(i)}.   (6)
Here η = 1/p(z_t | z_{1:t-1}, u_{1:t-1}) is a normalization factor
resulting from Bayes’ rule that is equal for all particles.
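As an illustration of the recursive structure of Eq. (6), a per-particle weight update for an arbitrary proposal might look as follows. The three density values are placeholders assumed to be supplied by the motion model, the sensor model, and the chosen proposal; η is dropped since it is equal for all particles and removed by weight normalization.

```python
def recursive_weight(w_prev, obs_likelihood, motion_prob, proposal_prob):
    """Recursive importance weight of Eq. (6), up to the common normalizer η.

    obs_likelihood : p(z_t | m_{t-1}^{(i)}, x_t^{(i)}) at the sampled pose
    motion_prob    : p(x_t^{(i)} | x_{t-1}^{(i)}, u_{t-1})
    proposal_prob  : π(x_t | x_{1:t-1}^{(i)}, z_{1:t}, u_{1:t-1})
    """
    return w_prev * obs_likelihood * motion_prob / proposal_prob
```

If the motion model itself is used as the proposal, motion_prob and proposal_prob cancel and the update reduces to multiplying by the observation likelihood, which is the special case used in the sketch above.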
Most of the existing particle filter applications rely on the
recursive structure of Eq. (6). Whereas the generic algorithm
specifies a framework that can be used for learning maps, it
leaves open how the proposal distribution should be computed
and when the resampling step should be carried out. Through-
out the remainder of this paper, we describe a technique that
computes an accurate proposal distribution and that adaptively
performs resampling.
III. RBPF WITH IMPROVED PROPOSALS AND ADAPTIVE
RESAMPLING
In the literature, several methods for computing improved
proposal distributions and for reducing the risk of particle
depletion have been proposed [7, 30, 35]. Our approach applies
two concepts that have previously been identified as key
pre-requisites for efficient particle filter implementations (see
Doucet et al. [7]), namely the computation of an improved
proposal distribution and an adaptive resampling technique.
A. On the Improved Proposal Distribution
As described in Section II, one needs to draw samples from
a proposal distribution π in the prediction step in order to ob-
tain the next generation of particles. Intuitively, the better the
proposal distribution approximates the target distribution, the
better the performance of the filter. For instance, if we were
able to directly draw samples from the target distribution, the
importance weights would become equal for all particles and
the resampling step would no longer be needed. Unfortunately,
in the context of SLAM a closed form of this posterior is not
available in general.
As a result, typical particle filter applications [3, 29] use
the odometry motion model as the proposal distribution. This
motion model has the advantage that it is easy to compute for