DESIGN OF ROBUST DISCRETE-TIME OBSERVER-BASED
REPETITIVE-CONTROL SYSTEM
Lan Zhou, Jinhua She, Shaowu Zhou, and Min Wu
ABSTRACT
This paper concerns the design of a robust discrete-time observer-based repetitive-control system for a class of linear plants
with periodic uncertainties. A discrete two-dimensional model is built that partially uncouples the control and learning actions of
a repetitive-control system, enabling their preferential adjustment. The combination of a singular-value decomposition of the
output matrix and Lyapunov stability theory is used to derive a linear-matrix-inequality-based design algorithm that determines
the control and state-observer gains. A numerical example illustrates the main advantage of the method: easy, preferential
adjustment of control and learning by means of two tuning parameters in a linear-matrix-inequality-based condition.
Key Words: Repetitive control, state observer, two-dimensional system, linear matrix inequality (LMI)
I. INTRODUCTION
In discrete time, any periodic signal can be generated by
a free dynamic system with positive feedback around a pure
time delay. Based on this idea and the internal model princi-
ple [1], Inoue et al. [2] devised the control strategy called
repetitive control (RC), which adds a human-like self-
learning capability to a control system by embedding an
internal model of a periodic signal in a repetitive controller
[3]. Fig. 1 shows the configuration of a basic discrete-time
RC system (RCS).
In the figure, r(k) is a periodic reference input with a
period of N, G(z) is a compensated plant, and z is a shift
operator. The part enclosed by the dotted line is a repetitive
controller containing a pure delay with a positive-feedback
loop. In an RCS, self-learning occurs through periodic, delay-based updates: the control effort of the previous period, v(k - N), is fed through the pure-delay positive-feedback path and added to the current control input. This allows the system to gradually eliminate the tracking error and provide very precise control.
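A pure delay of N steps embedded in a positive-feedback loop has its poles at the N solutions of z^N = 1, so its free response reproduces any signal of period N; this is the internal model that the repetitive controller embeds in the loop. As a rough sketch of the learning mechanism only, the short simulation below applies the delay-based update v(k) = v(k - N) + e(k) of Fig. 1 to a hypothetical first-order plant with direct feedthrough (the relative-degree-zero case); the plant parameters and the period are illustrative choices, not values from this paper.

```python
import numpy as np

# Hypothetical plant with direct feedthrough (relative degree zero):
#   x(k+1) = a*x(k) + b*v(k),   y(k) = c*x(k) + d*v(k)
a, b, c, d = 0.5, 0.3, 1.0, 1.0
N = 50                                          # period of the reference r(k)
periods = 8
steps = periods * N

r = np.sin(2 * np.pi * np.arange(steps) / N)    # period-N reference input
v = np.zeros(steps)                             # control input
e = np.zeros(steps)                             # tracking error
x = 0.0                                         # plant state, carried over between periods

for k in range(steps):
    v_prev = v[k - N] if k >= N else 0.0        # control effort of the previous period
    # Repetitive update v(k) = v(k - N) + e(k).  Because the plant has direct
    # feedthrough, e(k) depends on v(k), so the algebraic loop is solved explicitly:
    #   v(k) = v(k - N) + r(k) - c*x(k) - d*v(k)
    v[k] = (v_prev + r[k] - c * x) / (1.0 + d)
    e[k] = r[k] - (c * x + d * v[k])
    x = a * x + b * v[k]

# RMS tracking error of each period: it shrinks as the learning proceeds.
for p in range(periods):
    seg = e[p * N:(p + 1) * N]
    print(f"period {p}: RMS tracking error = {np.sqrt(np.mean(seg ** 2)):.4f}")
```

Note that the plant state x is not reset between periods; this carry-over is what distinguishes an RCS from an ILC system, as discussed below.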
RC is closely related to two essentially equivalent tech-
niques: iterative learning control (ILC) and a linear repetitive
process (LRP) [4]. However, as pointed out in [2,5], even
though both an RCS and an ILC system (ILCS) [or an LRP
system (LRPS)] use the control experience of previous
periods for regulation, there are significant differences
between them. First, the initial conditions for a period are
different. For an RCS, the state at the beginning of a period is
exactly the same as the final state of the system in the previ-
ous period, while an ILCS is reset to the same given state after
every period. This difference leads to different criteria for
convergence. Since the state of an RCS carries over directly from one period to the next, we check whether the system converges over the time interval [0, +∞). In contrast, since an ILCS starts from the same state in each period, we check whether the trial-to-trial error converges. Second, the problems involved in stabilization are different. The RCS in Fig. 1 is a neutral-type
delay system that contains an infinite number of poles on the
imaginary axis. As a result, it can be stabilized only when the
relative degree of the plant is zero. This restriction does not
apply to an ILCS (or an LRPS), which is easy to stabilize,
even for a strictly proper plant. Thus, the stability conditions
for an ILCS or LRPS in [4, 6], which used a two-dimensional
(2D) system approach and linear matrix inequalities (LMIs),
cannot be directly extended to an RCS.
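To make the first of these differences concrete, the sketch below uses the same hypothetical plant as the repetitive-control sketch above, but resets the state to the same initial value at the start of every trial and monitors the trial-to-trial error. The P-type update u_{p+1}(k) = u_p(k) + Ke*e_p(k) and the gain Ke are generic illustrative choices, not a law taken from [4,6].

```python
import numpy as np

# Same hypothetical plant as in the repetitive-control sketch above.
a, b, c, d = 0.5, 0.3, 1.0, 1.0
N, trials = 50, 8
Ke = 0.5                                        # illustrative P-type learning gain
r = np.sin(2 * np.pi * np.arange(N) / N)        # one period of the reference

u = np.zeros(N)                                 # input profile of the current trial
for p in range(trials):
    x = 0.0                                     # ILC: state reset to the same value every trial
    e = np.zeros(N)
    for k in range(N):
        e[k] = r[k] - (c * x + d * u[k])        # tracking error of trial p
        x = a * x + b * u[k]
    print(f"trial {p}: RMS error = {np.sqrt(np.mean(e ** 2)):.4f}")
    u = u + Ke * e                              # trial-to-trial update u_{p+1}(k) = u_p(k) + Ke*e_p(k)
```

Because each trial starts from the same state, only the trial-to-trial contraction of the error matters here, whereas in the repetitive-control sketch the error must settle along the single time axis [0, +∞).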
A close examination of RC shows that an RCS actually
involves two different actions: control and learning [7–9].
Continuous-discrete 2D system theory [10] was employed in
[7–9] to design continuous-time RCSs, and demonstrated how effective the adjustment of control and learning is in improving system performance. In a continuous-time RCS, however, the control action within a period is continuous while the learning action between periods is discrete, which is what makes a continuous-discrete 2D model appropriate. In the discrete-time RCS considered in this paper, both control and learning are discrete, so the method in [7–9] cannot be directly extended to its design.
On the other hand, [11] presented a method of designing
a discrete, robust, guaranteed-cost, state-feedback RCS that is