C. Multi-State-Constraint Kalman Filter (MSCKF)
In contrast to EKF-SLAM, the MSCKF is an EKF algorithm that maintains in its state vector a
sliding window of poses, and uses feature observations to impose probabilistic constraints between
these poses (Mourikis and Roumeliotis, 2007). The state vector of the MSCKF at time-step ℓ is
defined as:
$$
\mathbf{x}_\ell = \begin{bmatrix} \mathbf{x}_{I_\ell}^T & \boldsymbol{\pi}_{\ell-1}^T & \boldsymbol{\pi}_{\ell-2}^T & \cdots & \boldsymbol{\pi}_{\ell-N}^T \end{bmatrix}^T \tag{11}
$$
where $\boldsymbol{\pi}_i = \big[\, {}^{I_i}_{G}\bar{q}^T \;\; {}^{G}\mathbf{p}_i^T \,\big]^T$, for $i = \ell-N, \ldots, \ell-1$, are the IMU poses at the times the last $N$ images are recorded.
During MSCKF propagation, the IMU measurements are used to propagate the IMU state
estimate and the filter covariance matrix, similarly to EKF-SLAM. The difference lies in the
way in which the feature measurements are used. Specifically, every time a new image is recorded
by the camera, the MSCKF state and covariance are augmented with a copy of the current IMU
pose, and the image is processed to extract and match features. Each feature is tracked until all
its measurements become available (e.g., until it goes out of the field of view), at which time an
update is carried out using all the measurements simultaneously.
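The augmentation step described above can be sketched as follows. This is a simplified illustration assuming NumPy, a flat error-state vector whose dimension matches the covariance (the real MSCKF error state handles the quaternion separately), and a 6-dof pose block; the function name and layout are hypothetical:

```python
import numpy as np

def augment_state(x, P, imu_pose_idx, pose_dim=6):
    """Append a copy of the current IMU pose block to the state vector
    and covariance (simplified sketch: state and error-state dims match).

    The augmented covariance is J P J^T with J = [I; S], where S is a
    selector matrix that copies the IMU-pose block of the old state.
    """
    i0, i1 = imu_pose_idx, imu_pose_idx + pose_dim
    n = P.shape[0]
    # Selector that picks out the IMU-pose block of the old error state.
    S = np.zeros((pose_dim, n))
    S[:, i0:i1] = np.eye(pose_dim)
    x_aug = np.concatenate([x, x[i0:i1]])          # copy of the IMU pose
    P_aug = np.block([[P, P @ S.T],
                      [S @ P, S @ P @ S.T]])       # augmented covariance
    return x_aug, P_aug
```

The new pose is fully correlated with the IMU pose it was copied from, which is exactly what the off-diagonal blocks $\mathbf{P}\mathbf{S}^T$ encode.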
To present the update equations, we consider the case where the feature $f_i$, observed from the $N$ poses in the MSCKF state vector, is used for an update at time step $\ell$. The first step of the process is to obtain an estimate of the feature position, ${}^{G}\hat{\mathbf{p}}_{f_i}$. To this end, we use all the feature's measurements to estimate its position via Gauss-Newton minimization (Mourikis and Roumeliotis, 2007). Subsequently, we compute the residuals (for $j = \ell-N, \ldots, \ell-1$):
$$
\mathbf{r}_{ij} = \mathbf{z}_{ij} - h\big(\hat{\boldsymbol{\pi}}_{j|\ell-1},\, {}^{G}\hat{\mathbf{p}}_{f_i}\big) \tag{12}
$$
$$
\simeq \mathbf{H}_{\pi_{ij}}\big(\hat{\boldsymbol{\pi}}_{j|\ell-1},\, {}^{G}\hat{\mathbf{p}}_{f_i}\big)\,\tilde{\boldsymbol{\pi}}_{j|\ell-1} + \mathbf{H}_{f_{ij}}\big(\hat{\boldsymbol{\pi}}_{j|\ell-1},\, {}^{G}\hat{\mathbf{p}}_{f_i}\big)\,{}^{G}\tilde{\mathbf{p}}_{f_i} + \mathbf{n}_{ij} \tag{13}
$$
where $\tilde{\boldsymbol{\pi}}_{j|\ell-1}$ and ${}^{G}\tilde{\mathbf{p}}_{f_i}$ are the error of the current estimate for the $j$-th pose and the error in the feature position, respectively, and the matrices $\mathbf{H}_{\pi_{ij}}$ and $\mathbf{H}_{f_{ij}}$ are the corresponding Jacobians, evaluated using $\hat{\boldsymbol{\pi}}_{j|\ell-1}$ and ${}^{G}\hat{\mathbf{p}}_{f_i}$. At this point we note that, in the EKF algorithm, to be able to employ a measurement residual, $\mathbf{r}$, for a filter update, we must be able to write this residual in the form $\mathbf{r} \simeq \mathbf{H}\tilde{\mathbf{x}} + \mathbf{n}$, where $\tilde{\mathbf{x}}$ is the error in the state estimate, and $\mathbf{n}$ is a noise vector that is independent of $\tilde{\mathbf{x}}$. The residual in (13) does not have this form, as the feature position error ${}^{G}\tilde{\mathbf{p}}_{f_i}$ is correlated with both $\tilde{\boldsymbol{\pi}}_{j|\ell-1}$ and $\mathbf{n}_{ij}$ (this is because ${}^{G}\hat{\mathbf{p}}_{f_i}$ is computed as a function of $\hat{\boldsymbol{\pi}}_{j|\ell-1}$ and $\mathbf{z}_{ij}$, $j = \ell-N, \ldots, \ell-1$). Therefore, in the MSCKF we proceed to remove ${}^{G}\tilde{\mathbf{p}}_{f_i}$ from the residual equations. For this purpose, we first form the vector containing the $N$ residuals from all the feature's measurements:
$$
\mathbf{r}_i \simeq \mathbf{H}_{\pi_i}\big(\hat{\mathbf{x}}_{\ell|\ell-1},\, {}^{G}\hat{\mathbf{p}}_{f_i}\big)\,\tilde{\mathbf{x}}_{\ell|\ell-1} + \mathbf{H}_{f_i}\big(\hat{\mathbf{x}}_{\ell|\ell-1},\, {}^{G}\hat{\mathbf{p}}_{f_i}\big)\,{}^{G}\tilde{\mathbf{p}}_{f_i} + \mathbf{n}_i \tag{14}
$$
where $\mathbf{r}_i$ and $\mathbf{n}_i$ are block vectors with elements $\mathbf{r}_{ij}$ and $\mathbf{n}_{ij}$, respectively, and $\mathbf{H}_{\pi_i}$ and $\mathbf{H}_{f_i}$ are matrices with block rows $\mathbf{H}_{\pi_{ij}}$ and $\mathbf{H}_{f_{ij}}$. Subsequently, we define the residual vector $\mathbf{r}_i^o = \mathbf{V}_i^T \mathbf{r}_i$, where $\mathbf{V}_i$ is a matrix whose columns form a basis of the left nullspace of $\mathbf{H}_{f_i}$. From (14), we obtain:
$$
\mathbf{r}_i^o = \mathbf{V}_i^T \mathbf{r}_i \simeq \mathbf{H}_i^o\big(\hat{\mathbf{x}}_{\ell|\ell-1},\, {}^{G}\hat{\mathbf{p}}_{f_i}\big)\,\tilde{\mathbf{x}}_{\ell|\ell-1} + \mathbf{n}_i^o \tag{15}
$$
where $\mathbf{H}_i^o = \mathbf{V}_i^T \mathbf{H}_{\pi_i}$ and $\mathbf{n}_i^o = \mathbf{V}_i^T \mathbf{n}_i$. Note that the residual vector $\mathbf{r}_i^o$ is now independent of the errors in the feature coordinates, and thus can be used for an EKF update. It should also be mentioned that, for efficiency, $\mathbf{r}_i^o$ and $\mathbf{H}_i^o$ are computed without explicitly forming $\mathbf{V}_i$ (Mourikis and Roumeliotis, 2007).
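The nullspace projection can be sketched as follows, assuming NumPy. Here $\mathbf{V}_i$ is formed explicitly via an SVD for clarity, whereas, as noted above, an efficient implementation avoids materializing it (e.g., by applying Givens rotations to $\mathbf{H}_{f_i}$); the function name is illustrative:

```python
import numpy as np

def nullspace_project(r, H_pi, H_f):
    """Eliminate the feature-position error from the stacked residual
    (cf. Eqs. (14)-(15)) by projecting onto the left nullspace of H_f.

    Explicit SVD construction of V_i, shown for clarity only.
    """
    # Columns of U beyond rank(H_f) span the left nullspace of H_f.
    U, s, _ = np.linalg.svd(H_f)
    rank = int(np.sum(s > 1e-10))
    V = U[:, rank:]                       # basis of the left nullspace
    r_o = V.T @ r                         # r_i^o = V_i^T r_i
    H_o = V.T @ H_pi                      # H_i^o = V_i^T H_{pi_i}
    assert np.allclose(V.T @ H_f, 0.0)    # V_i^T H_{f_i} = 0 by construction
    return r_o, H_o
```

For a feature with $2N$ scalar measurements and a $3$-dof position, the projected residual has $2N-3$ rows, which is where the dimensionality reduction of the MSCKF update comes from.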
Once $\mathbf{r}_i^o$ and $\mathbf{H}_i^o$ are computed, we proceed to carry out a Mahalanobis gating test for the residual $\mathbf{r}_i^o$. Specifically, we compute:
$$
\gamma_i = (\mathbf{r}_i^o)^T \Big( \mathbf{H}_i^o \mathbf{P}_{\ell|\ell-1} (\mathbf{H}_i^o)^T + \sigma^2 \mathbf{I} \Big)^{-1} \mathbf{r}_i^o \tag{16}
$$
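A minimal sketch of the gating test in (16), assuming NumPy; the chi-square acceptance threshold is a user choice (the confidence level is not fixed by the equations above), and the function name is illustrative:

```python
import numpy as np

def mahalanobis_gate(r_o, H_o, P, sigma2, threshold):
    """Compute gamma_i of Eq. (16) and test it against a chi-square
    threshold with dim(r_o) degrees of freedom.

    `threshold` is a chosen chi-square percentile (an assumption here);
    features failing the test are treated as outliers and discarded.
    """
    # Innovation covariance: H_o P H_o^T + sigma^2 I.
    S = H_o @ P @ H_o.T + sigma2 * np.eye(H_o.shape[0])
    # gamma = (r^o)^T S^{-1} r^o, via a solve rather than an inverse.
    gamma = r_o @ np.linalg.solve(S, r_o)
    return gamma, gamma <= threshold
```

Solving the linear system instead of explicitly inverting $S$ is both cheaper and numerically safer, since $S$ is symmetric positive definite.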