
Robust and Precise Vehicle Localization based on Multi-sensor Fusion
in Diverse City Scenes
Guowei Wan, Xiaolong Yang, Renlan Cai, Hao Li, Yao Zhou, Hao Wang, Shiyu Song^1
Abstract— We present a robust and precise localization
system that achieves centimeter-level localization accuracy in
disparate city scenes. Our system adaptively uses information
from complementary sensors such as GNSS, LiDAR, and
IMU to achieve high localization accuracy and resilience in
challenging scenes, such as urban downtown, highways, and
tunnels. Rather than relying only on LiDAR intensity or
3D geometry, we make innovative use of LiDAR intensity
and altitude cues to significantly improve the accuracy and
robustness of our localization system. Our GNSS RTK module
leverages the multi-sensor fusion framework to achieve a
higher ambiguity resolution success rate. An error-state Kalman filter
is applied to fuse the localization measurements from different
sources with novel uncertainty estimation. We validate, in detail,
the effectiveness of our approaches, achieving 5-10 cm RMS
accuracy and outperforming previous state-of-the-art systems.
Importantly, our system, deployed in a large autonomous
driving fleet, has kept our vehicles fully autonomous on crowded
city streets despite road construction occurring from time to
time. A dataset covering more than 60 km of driving in real
traffic on various urban roads is used to comprehensively test our
system.
I. INTRODUCTION
Vehicle localization is one of the fundamental tasks in
autonomous driving. The single-point positioning accuracy
of the global navigation satellite system (GNSS) is about
10 m due to satellite orbit and clock errors, together with
tropospheric and ionospheric delays. These errors can be
calibrated out with observations from a surveyed reference
station. The carrier-phase based differential GNSS tech-
nique, known as Real Time Kinematic (RTK), can provide
centimeter-level positioning accuracy [1]. The most significant
advantage of RTK is that it provides almost all-weather
availability. However, its disadvantage is equally obvious:
it is highly vulnerable to signal blockage and multi-path
interference because it relies on precise carrier-phase positioning
techniques.
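As a minimal illustration of this differencing idea, consider the pseudorange case in the sketch below (illustrative names only, not our implementation): a base station at a surveyed position estimates the orbit, clock, and atmospheric errors shared with a nearby rover, per satellite, and the rover subtracts them. RTK applies the same principle to the far more precise carrier-phase observations, at the cost of having to resolve integer ambiguities.

import numpy as np

# Sketch of differential GNSS (pseudorange case). Because the base
# station's position is precisely surveyed, the gap between its measured
# and geometric ranges isolates the errors shared with a nearby rover.
def pseudorange_corrections(base_pos, sat_positions, base_pseudoranges):
    geometric = np.linalg.norm(sat_positions - base_pos, axis=1)
    return base_pseudoranges - geometric  # shared error per satellite

def corrected_rover_pseudoranges(rover_pseudoranges, corrections):
    # Valid while base and rover are close enough (tens of km) for the
    # atmospheric errors to remain strongly correlated.
    return rover_pseudoranges - corrections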
Intuitively, LiDAR is a promising sensor for precise localization.
Failures under harsh weather conditions and during road
construction remain an important issue for LiDAR-based methods,
although related works have shown good progress on these
problems, for example, in light rain [2] and snow [3]. Furthermore,
LiDAR and RTK are two sensors that are complementary in terms of
applicable scenes: LiDAR works well when the environment is rich
in 3D structure or texture features, while RTK performs
excellently in open space.
*This work is supported by Baidu Autonomous Driving Business Unit
in conjunction with the Apollo Project.
The authors are with Baidu Autonomous Driving Business Unit,
{wanguowei, yangxiaolong02, cairenlan, lihao30,
zhouyao, wanghao29, songshiyu}@baidu.com.
^1 Author to whom correspondence should be addressed. E-mail:
songshiyu@baidu.com
Fig. 1: Our autonomous vehicle is equipped with a Velodyne HDL-64E
LiDAR. An integrated navigation system, a NovAtel ProPak6 plus a
NovAtel IMU-IGM-A1, is installed for raw sensor data collection, such
as GNSS pseudorange and carrier-phase observations and IMU specific
force and rotation rate. The built-in tightly integrated inertial and
satellite navigation solution was not used. A computing platform with
two 12-core Xeon E5-2658 v3 CPUs and a Xilinx KU115 FPGA, 55% of which
is utilized for LiDAR localization, is also installed.
An inertial measurement unit (IMU), comprising gyroscopes and
accelerometers, continuously calculates position, orientation,
and velocity via the technique commonly referred to as dead
reckoning. It is a self-contained navigation method that is
immune to jamming and deception, but it suffers badly from
integration drift.
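To make the drift mechanism explicit, the following strapdown integration sketch (illustrative only, not our navigation code) propagates pose from raw gyroscope and accelerometer readings; any sensor bias or noise enters the same integrals and therefore grows into unbounded position error.

import numpy as np
from scipy.spatial.transform import Rotation as R

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity (m/s^2)

def propagate(p, v, q, gyro, accel, dt):
    # p, v: world-frame position/velocity; q: body-to-world rotation.
    # gyro: angular rate (rad/s); accel: specific force (m/s^2), body frame.
    q = q * R.from_rotvec(gyro * dt)        # attitude update
    a_world = q.apply(accel) + GRAVITY      # restore gravity in world frame
    p = p + v * dt + 0.5 * a_world * dt**2  # position update
    v = v + a_world * dt                    # velocity update
    return p, v, q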
Thus, each sensor has its own unique characteristics and
working conditions. Here, we propose a robust and precise
localization system using multi-sensor fusion designed for
autonomous vehicles driving in complex urban and highway
scenes. More precisely, we adaptively fuse different local-
ization methods based on sensors such as LiDAR, RTK, and
IMU. The sensor configuration of our system is shown in
Figure 1. Our system provides a stable, resilient, and precise
localization service to the other modules of an autonomous vehicle,
enabling it to drive in several complex scenes, such as downtown
areas, tunnels, tree-lined roads, parking garages, and highways.
We demonstrate large-scale localization using
over 60 km of data in dynamic urban and highway scenes.
In Figure 2, we show the architecture of our multi-sensor
fusion framework.
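As context for the contributions below, the following sketch shows the measurement-update step of an error-state Kalman filter in its simplest form (illustrative names and a position-only measurement; our actual filter tracks a richer error state with the adaptive uncertainty estimates introduced later): a position fix from the LiDAR or RTK module, weighted by its reported covariance, corrects the IMU-propagated nominal state.

import numpy as np

def position_update(x_nom, P, z_pos, R_m, H):
    # x_nom: nominal state propagated by the IMU; P: error-state covariance.
    # z_pos: position fix from LiDAR localization or RTK; R_m: its covariance.
    # H selects the position components of the error state.
    y = z_pos - H @ x_nom                  # innovation (position residual)
    S = H @ P @ H.T + R_m                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    dx = K @ y                             # estimated error state
    x_nom = x_nom + dx                     # inject error into nominal state
    P = (np.eye(P.shape[0]) - K @ H) @ P   # covariance update
    return x_nom, P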
To summarize, our main contributions are:
• A joint framework for vehicle localization that adaptively
fuses different sensors, including LiDAR, RTK, and IMU. It
effectively leverages their advantages and shields our system
from their failures in various scenes.