State Estimation for Legged Robots - Consistent
Fusion of Leg Kinematics and IMU
Michael Bloesch, Marco Hutter, Mark A. Hoepflinger, Stefan Leutenegger, Christian Gehring,
C. David Remy and Roland Siegwart
Autonomous Systems Lab, ETH Zürich, Switzerland, bloeschm@ethz.ch
Abstract—This paper introduces a state estimation framework
for legged robots that allows estimating the full pose of the
robot without making any assumptions about the geometrical
structure of its environment. This is achieved by means of an
Observability Constrained Extended Kalman Filter that fuses
kinematic encoder data with on-board IMU measurements. By
including the absolute position of all footholds into the filter
state, simple model equations can be formulated which accurately
capture the uncertainties associated with the intermittent ground
contacts. The resulting filter simultaneously estimates the position
of all footholds and the pose of the main body. In the algorithmic
formulation, special attention is paid to the consistency of the
linearized filter: it maintains the same observability properties
as the nonlinear system, which is a prerequisite for accurate state
estimation. The presented approach is implemented in simulation
and validated experimentally on an actual quadrupedal robot.
I. INTRODUCTION
Particularly in rough and highly unstructured environments
in which we ultimately want to employ autonomous legged
robots, postural controllers require fast and precise knowledge
of the state of the robots they are regulating. Especially
for dynamic locomotion, the underlying state estimation can
quickly become a bottleneck in terms of achievable bandwidth,
robustness, and locomotion speed. To achieve the required
performance, a state estimator for legged robots should explic-
itly take into account that such systems are interacting with
their environment via multiple intermittent ground contacts.
Ignoring or neglecting the ground interaction leads to approaches that are
more “expensive” in terms of computation and sensing,
ranging from vision-based [2, 16, 17] to GPS-supported [4, 6]
methods. In contrast to such approaches, we will show in the
following that in cases where on-board sensors fully measure
the internal kinematics of the robot as well as its inertial
acceleration and rotational rate, precise information on the
robot’s pose can be made readily available.
One of the earliest approaches exploiting information given by
the leg kinematics was implemented by Lin et al. [13] in 2005
on a hexapod robot. Assuming that the robot is in contact with
three of its six feet at all times and assuming completely flat
terrain, they implemented a leg-based odometer. Their method
requires the robot to follow a tripod gait and is affected by
drift. In [14], the same group fused the leg-based odometer
with data from an Inertial Measurement Unit (IMU) and was
thus able to handle tripod running. Assuming precise
knowledge of the terrain relief, Chitta et al. [3]
implemented a pose estimator based on a particle filter. It fuses
leg kinematics and IMU data in order to globally localize the robot.

Fig. 1. Experimental quadruped platform StarlETH [11]. The inertial and
body-fixed coordinate frames I and B are depicted, as well as the
absolute position vectors of the robot r and of the footholds p_1, ..., p_N.
The presented EKF includes all foothold positions in the estimation process.
Very recently, three novel pose estimators have been
presented, all based on leg kinematics. Reinstein
and Hoffmann [15] presented a data-driven approach using
joint encoders, pressure sensors, and an on-board IMU. They
searched for significant sensory-based indicators in order to
determine the stride length for a given gait pattern.
With this approach, they successfully limited the position
drift of the IMU and, by appropriate training of the filter, could
additionally handle slippage. Chilian et al. [2] implemented a
leg odometer based on point cloud matching for a hexapod
robot, requiring a minimum of three feet in contact. It is
based on a multisensor fusion algorithm that includes inertial
measurements and visual odometry. Assuming planar spring-
mass running, Gur and Saranli [7] proposed a generic, model-
based state estimation technique.
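To make the state-augmentation idea concrete before the formal presentation, the following minimal sketch shows a filter state that carries the absolute foothold positions alongside the usual IMU-driven body states, together with a simple strapdown prediction step. All variable names, dimensions, and the quaternion convention are illustrative assumptions, not the notation used in this paper, and the sketch omits the covariance propagation and measurement update of the actual filter.

```python
import numpy as np

N_FEET = 4  # quadruped (illustrative)

def make_state(r, v, q, footholds, b_a, b_w):
    """Stack body position r (3), velocity v (3), orientation quaternion
    q (4, [w, x, y, z]), N absolute foothold positions (3N), accelerometer
    bias b_a (3), and gyroscope bias b_w (3) into one state vector."""
    return np.concatenate([r, v, q, footholds.ravel(), b_a, b_w])

def quat_to_rot(q):
    """Rotation matrix (body to world) from a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def quat_mult(a, b):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def predict(x, acc_meas, gyro_meas, dt, g=np.array([0.0, 0.0, -9.81])):
    """One prediction step: integrate the bias-corrected IMU measurements
    for the body states; footholds are modeled as stationary while in
    contact (their uncertainty would be handled by the process noise)."""
    r, v, q = x[0:3], x[3:6], x[6:10]
    rest = x[10:]                       # footholds and biases
    b_a, b_w = rest[-6:-3], rest[-3:]
    C = quat_to_rot(q)
    a_world = C @ (acc_meas - b_a) + g  # gravity-compensated acceleration
    r_new = r + v*dt + 0.5*a_world*dt**2
    v_new = v + a_world*dt
    w = (gyro_meas - b_w)*dt            # incremental rotation vector
    angle = np.linalg.norm(w)
    if angle > 1e-12:
        dq = np.concatenate([[np.cos(angle/2)], np.sin(angle/2)*w/angle])
    else:
        dq = np.array([1.0, 0.0, 0.0, 0.0])
    q_new = quat_mult(q, dq)
    q_new /= np.linalg.norm(q_new)
    return np.concatenate([r_new, v_new, q_new, rest])  # footholds, biases unchanged
```

For a stationary robot with an ideal accelerometer reading +9.81 m/s² along the world z-axis and zero angular rate, the prediction leaves the state unchanged, since the gravity term cancels the measured specific force.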
In the presented approach we implement an Extended
Kalman Filter (EKF) and choose an appropriate state vector
in order to break down the estimation problem to the proper
formulation of a few simple measurement equations. Without
any assumption about the shape of the terrain, we are able to
estimate the full state of the robot’s main body, and we can
provide an estimate of the ground geometry. By performing an
observability analysis, we show that apart from the absolute
position and yaw angle of the robot all other states can be
precisely observed as long as at least one foot is in contact