Dynamically Event-triggered State Estimation of Hidden Markov Models through a Lossy Communication Channel
Jiarao Huang†, Dawei Shi‡, and Tongwen Chen†
Abstract— In this work, a problem of event-based state estimation for hidden Markov models is investigated. We consider the scenario in which the transmission of the sensor measurement is decided by a dynamic event-trigger, the state of which depends on both the sensor measurement and the previous triggering state. An independent and identically distributed Bernoulli process is utilized to model the effect of packet dropouts. Using the reference probability measure approach, expressions for the unnormalized and normalized conditional probability distributions of the states, given the event-triggered measurement information, are derived, based on which optimal event-based state estimates can be obtained. The effectiveness of the proposed results is illustrated through a numerical example together with comparative simulations.
I. INTRODUCTION
Due to the increasing demand for maintaining system performance with limited communication resources, event-based sampling and signal processing have received considerable attention in the control community, since the pioneering work of Åström and Bernhardsson in [1].
This paper focuses on event-based state estimation. In this area, the primary concern is how the
information provided by the event-triggering scheme can
be properly exploited to maintain/improve the estimation
performance; a number of interesting results have been
reported in the literature during the past few years. In Sijs
and Lazar [2], a general description of event-based sampling
was presented, and a state estimator with a hybrid update at
different sampling instants was proposed. The problems of
approximate minimum mean square error (MMSE) event-
based state estimation were investigated in [3] and [4]
for the single sensor case and the multiple sensor case,
respectively. Event-triggered state estimators were designed
by formulating constrained optimization problems in [5].
Variance-based triggering policies were adopted in Trimpe
and D’Andrea [6], [7], and it was shown that the transmission
pattern converges to a periodic schedule for the scalar case
in [7]. Stochastic event-triggering conditions parameterized
by Gaussian kernels were proposed in Han et al. [8] for
linear Gaussian systems, and the exact MMSE estimates
were obtained in recursive and closed form. Lee et al. [9] considered a problem of event-based state estimation for continuous-time nonlinear systems utilizing the Markov chain approximation method. To study the effect of a lossy channel on event-based estimation, the reference probability measure approach was utilized to exploit the event-triggered measurement information in [10]. The scenario of event-based state estimation for systems with unknown exogenous inputs was considered in [11]. For more results on event-based state estimation, see also [12], [13], [14], [15], [16], [17], [18], [19] and the recent monograph [20].
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada, and in part by the National Natural Science Foundation of China under Grant 61503027.
† J. Huang and T. Chen are with the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada T6G 1H9. e-mails: jiarao@ualberta.ca, tchen@ualberta.ca
‡ D. Shi is with the State Key Laboratory of Intelligent Control and Decision of Complex Systems, School of Automation, Beijing Institute of Technology, Beijing, 100081, P.R. China. e-mail: dawei.shi@outlook.com
In the aforementioned investigations, the event-triggering
conditions considered were normally static and known to
the remote estimator, which limits the potential of utilizing
event-triggered transmission protocols in maintaining estima-
tion performance at reduced communication cost. One fea-
sible way of overcoming this issue is to introduce dynamics
in the event-triggering condition so that an additional degree
of freedom can be provided to the event-trigger in deciding
whether or not to send the measurements at each time
instant. The consequence, however, is that the corresponding event-triggering condition will not be exactly known to the estimator; this adds to the difficulty of solving the event-based estimation problem. In particular, the situation becomes even more complicated when packet dropouts are considered, which are normally inevitable when the measurements are transmitted through a wired or wireless communication channel. Based on these considerations, a
remote estimation problem of this type is investigated in this
work for hidden Markov models, and the main contributions
are summarized as follows:
1) A dynamic event-triggering transmission protocol is
proposed. The state of the event-trigger depends not
only on the measurement of the sensor, but also on
its own state at the previous time instant. The packet
dropout effect is considered and modeled by an in-
dependent and identically distributed (i.i.d.) Bernoulli
process.
2) To solve the problem of remote estimation, a refer-
ence probability measure is constructed, under which
the sensor measurement process is i.i.d. uniformly
distributed, and the state of the event-trigger is also
i.i.d. uniformly distributed, independent of its previous
state and the sensor measurement. A map that links
the reference measure to the real-world measure is
proposed.
3) Under the reference measure, the unnormalized condi-
tional distribution of the state on the event-triggered