Multi-sensor Data Fusion in Automotive Applications
Thomas Herpel¹, Christoph Lauer¹, Reinhard German¹ and Johannes Salzberger²
¹ Department of Computer Science 7 – Computer Networks and Communication Systems, Friedrich-Alexander University, Erlangen, Germany
² Department of Safety Electronics, Audi AG, Ingolstadt, Germany
{herpel, christoph.lauer, german}@informatik.uni-erlangen.de, johannes.salzberger@audi.de
Abstract
The application of environment sensor systems in modern – often called “intelligent” – cars is regarded as a
promising instrument for increasing road traffic safety. Based on context perception enabled by well-known
technologies such as radar, laser or video, these cars are able to detect threats on the road, anticipate emerging
dangerous driving situations and take proactive action for collision avoidance. Besides the combination of
individual sensors into an automotive multi-sensor system, complex signal processing and sensor data fusion
strategies are of major importance for the availability and robustness of the overall system. In this paper, we
consider data fusion approaches that operate on near-raw sensor data (low-level) and on pre-processed
measurement points (high-level). We model sensor phenomena, road traffic scenarios, data fusion paradigms and
signal processing algorithms, and investigate by means of discrete event simulation how combining sensor data
at different levels of abstraction affects the performance of the multi-sensor system.
Keywords: multi-sensor data fusion, simulation, intelligent cars, environment perception, automotive
1 Introduction
Increasing road traffic safety and at the same time
reducing the number of fatal car accidents is one of
the most challenging future tasks for both car
manufacturers and research institutions worldwide.
Besides intelligent roadside infrastructures, advanced
traffic routing and information services, considerable
effort is spent on enhancing the intelligence of
individual vehicles within the traffic flow. Presently,
sensor technologies well-known from other
application areas like military or civil aviation are
employed. Radar, laser, ultrasonic or video devices
perceive information about the environment and
possible threats around the vehicle either actively or
passively. This significantly enhances the car’s ability
to anticipate dangerous driving situations and to act
early and effectively in order to avoid a collision or at
least mitigate the accident severity by proactively
activating adequate protection measures. The quality
of context perception by a set of environment sensors
is of utmost importance for the so-called Advanced
Driver Assistance Systems (ADAS) which rely on the
sensor data. Important sensor properties that influence
the quality of the environment perception include
range, field-of-view (FOV), weather robustness,
power consumption and placement constraints. Single
sensor systems often have undesired weaknesses that
suggest the use of multi-sensor systems. However, the
signal processing and fusion of sensor data from
multiple devices is a sophisticated process that
involves important design decisions regarding system
performance and dependability. Various algorithms
have been investigated for tasks such as clustering
measurement points, associating data with real-world
objects and filtering sensor information [1].
Depending on the system's fusion paradigm, data
integration takes place at a specific level of data
abstraction. In low-level data fusion, near-raw data
from the individual devices is combined at a very
early stage of signal processing, and the algorithms
are applied to the pooled set of measurement points.
A high-level data fusion strategy pre-processes the
data of each sensor individually, i.e. each sensor
performs its own clustering, association and filtering,
and fuses the resulting information, often represented
as lists of detected objects. Both approaches are
expected to have certain advantages and disadvantages
in terms of information entropy, computational
complexity and adaptivity.
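To make the distinction concrete, the Python sketch below contrasts the two data flows. It is purely illustrative: the simple distance-based clustering, the centroid association, the one-dimensional distance values and the naive object-list fusion are placeholders introduced here and are not the algorithms of the system described in this paper.

from itertools import chain

# Toy stand-ins for the clustering and association stages named above.
def cluster(points):
    # Group 1-D measurement points that lie within 1 m of each other.
    clusters, current = [], []
    for p in sorted(points):
        if current and p - current[-1] > 1.0:
            clusters.append(current)
            current = []
        current.append(p)
    if current:
        clusters.append(current)
    return clusters

def associate(clusters):
    # Represent each cluster by its centroid (stand-in for object formation).
    return [sum(c) / len(c) for c in clusters]

def low_level_fusion(per_sensor_points):
    # Pool the near-raw data of all sensors first, then run one processing chain.
    pooled = chain.from_iterable(per_sensor_points)
    return associate(cluster(pooled))

def high_level_fusion(per_sensor_points):
    # Run the chain per sensor, then fuse the resulting object lists (naively).
    object_lists = [associate(cluster(p)) for p in per_sensor_points]
    return associate(cluster(chain.from_iterable(object_lists)))

radar = [10.1, 10.3, 42.0]   # longitudinal distances in metres (made-up values)
laser = [10.2, 41.8, 41.9]
print(low_level_fusion([radar, laser]))    # roughly [10.2, 41.9]
print(high_level_fusion([radar, laser]))

Even in this toy example the two paradigms need not produce identical object lists, which is exactly the kind of difference the simulation study quantifies.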
In this paper, we use discrete event simulation to
analyse a model comprising various multi-sensor
systems, sensor phenomena such as the reflection of
radar or laser beams, road traffic scenarios and sensor
data fusion strategies. The simulation results allow a
comparison of which data fusion paradigm, low-level
or high-level, performs best in which scenario and is
preferable with respect to maximum detection
performance, robustness and reliability of proactive
ADAS applications.
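The following minimal sketch only illustrates the discrete event principle, i.e. a time-ordered event queue that drives the model; the event types, sensor rates and trivial handling shown here are assumptions made for illustration and do not reflect the simulation model described in Chapter 4.

import heapq

def run_simulation(end_time):
    # Event queue ordered by time stamp; each entry is (time, event type).
    queue = [(0.00, "radar_scan"), (0.02, "laser_scan")]
    heapq.heapify(queue)
    while queue:
        clock, event = heapq.heappop(queue)   # jump to the next event time
        if clock > end_time:
            break
        if event == "radar_scan":
            heapq.heappush(queue, (clock + 0.05, "radar_scan"))   # assumed 20 Hz radar
        elif event == "laser_scan":
            heapq.heappush(queue, (clock + 0.10, "laser_scan"))   # assumed 10 Hz laser
        print(f"t={clock:.2f} s: process {event}")  # sensing, fusion and ADAS logic would go here

run_simulation(end_time=0.5)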
The paper is organised as
follows: Chapter 2 presents related work, and Chapter 3
introduces the design of multi-sensor data fusion
architectures and important techniques for context
perception. Chapter 4 describes the implemented
generic fusion system model. The simulation results
are presented in Chapter 5. Finally, Chapter 6
concludes the paper and presents some areas of future
work.