
where $H(\omega) \in \mathbb{C}^{l \times m}$ is the FRF matrix containing the FRFs between all inputs and all outputs; $B(\omega) \in \mathbb{C}^{l \times m}$ is the numerator matrix polynomial and $A(\omega) \in \mathbb{C}^{m \times m}$ is the denominator matrix polynomial. Each row of the right matrix-fraction model (1) can be written as:

$$\forall o = 1, 2, \dots, l: \quad \langle H_o(\omega) \rangle = \langle B_o(\omega) \rangle \, [A(\omega)]^{-1} \qquad (2)$$
The numerator row-vector polynomial of output $o$ and the denominator matrix polynomial are defined as:

$$\langle B_o(\omega) \rangle = \sum_{r=0}^{p} \Omega_r(\omega)\, \beta_{or}, \qquad [A(\omega)] = \sum_{r=0}^{p} \Omega_r(\omega)\, \alpha_r \qquad (3)$$
where $\Omega_r(\omega)$ are the polynomial basis functions and $p$ is the polynomial order. In the LSCF method, a z-domain model is used (i.e. a frequency-domain model that is derived from a discrete-time model) and, consequently, the basis functions are (with $\Delta t$ the sampling time):

$$\Omega_r(\omega) = e^{-j \omega \Delta t \, r} \qquad (4)$$
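As an illustration only, the following sketch evaluates these basis functions numerically (Python/NumPy is assumed here; the function name and array layout are not part of the original formulation):

```python
import numpy as np

def basis_functions(omega, p, dt):
    """Evaluate the z-domain basis functions of Eq. (4), Omega_r(omega) = exp(-1j*omega*dt*r).

    omega : (Nf,) array of angular frequencies [rad/s]
    p     : polynomial order
    dt    : sampling time
    Returns an (Nf, p+1) complex array with entry [k, r] = Omega_r(omega_k).
    """
    r = np.arange(p + 1)                           # polynomial powers r = 0..p
    return np.exp(-1j * np.outer(omega, dt * r))   # outer product gives omega_k * dt * r
```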
The polynomial coefficients $\beta_{or} \in \mathbb{R}^{1 \times m}$ and $\alpha_r \in \mathbb{R}^{m \times m}$ are assembled in the following matrices:

$$\beta_o = \begin{bmatrix} \beta_{o0} \\ \beta_{o1} \\ \vdots \\ \beta_{op} \end{bmatrix} \in \mathbb{R}^{(p+1) \times m} \quad (\forall o = 1, 2, \dots, l), \qquad \alpha = \begin{bmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_p \end{bmatrix} \in \mathbb{R}^{(p+1)m \times m}, \qquad \theta = \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_l \\ \alpha \end{bmatrix} \in \mathbb{R}^{(l+m)(p+1) \times m} \qquad (5)$$
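Purely as a dimension check of (5), a small sketch (NumPy assumed, names illustrative) that stacks randomly initialised real-valued coefficients into $\beta_o$, $\alpha$ and $\theta$:

```python
import numpy as np

def assemble_theta(l, m, p, rng=np.random.default_rng(0)):
    """Stack illustrative real-valued coefficients into beta_o, alpha and theta as in Eq. (5)."""
    betas = [rng.standard_normal((p + 1, m)) for _ in range(l)]   # each beta_o: (p+1) x m
    alpha = rng.standard_normal(((p + 1) * m, m))                 # alpha: (p+1)m x m
    theta = np.vstack(betas + [alpha])                            # theta: (l+m)(p+1) x m
    return betas, alpha, theta

betas, alpha, theta = assemble_theta(l=4, m=2, p=3)
assert theta.shape == ((4 + 2) * (3 + 1), 2)
```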
The FRF model of (1) is now written as a function of the coefficients $\theta$: $H(\omega_k, \theta)$. Please note that in the LSCF formulation of this paper real-valued polynomial coefficients are assumed. It is possible to derive another variant of the method with complex coefficients. A discussion on these aspects for the common-denominator LSCF implementation can be found in [5].
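To make the dependence of the model on $\theta$ concrete, a minimal sketch is given below (an illustration added here, not the paper's implementation) that synthesises one FRF row $\langle H_o(\omega_k, \theta) \rangle = \langle B_o(\omega_k, \beta_o) \rangle\, A^{-1}(\omega_k, \alpha)$ at a single frequency line, assuming the NumPy conventions of the earlier sketch:

```python
import numpy as np

def frf_row(omega_k, beta_o, alpha, dt):
    """Evaluate one row of the right matrix-fraction model (2) at angular frequency omega_k.

    beta_o : (p+1, m) real numerator coefficients of output o
    alpha  : ((p+1)*m, m) real denominator coefficients, stacked alpha_0 .. alpha_p
    Returns the (m,) complex row <H_o(omega_k, theta)>.
    """
    p, m = beta_o.shape[0] - 1, beta_o.shape[1]
    Omega = np.exp(-1j * omega_k * dt * np.arange(p + 1))                   # basis functions, Eq. (4)
    B_row = Omega @ beta_o                                                   # <B_o(omega_k, beta_o)>
    A = sum(Omega[r] * alpha[r * m:(r + 1) * m, :] for r in range(p + 1))    # A(omega_k, alpha)
    return B_row @ np.linalg.inv(A)                                          # <B_o> A^{-1}
```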
2.2 Equation error formulation
The challenge is now to find all unknown model coefficients $\theta$ based on measurements (or better: non-parametric estimates) of the FRFs $\hat{H}_o(\omega_k)$, where $\hat{\bullet}$ denotes a measured quantity and $\omega_k \; (k = 1, 2, \dots, N_f)$ are the discrete frequencies at which FRF measurements are available. The coefficients $\theta$ can be identified by minimising the following non-linear least-squares (NLS) equation errors $\varepsilon_o^{\mathrm{NLS}}(\omega_k, \theta) \in \mathbb{C}^{1 \times m}$:

$$\varepsilon_o^{\mathrm{NLS}}(\omega_k, \theta) = w_o(\omega_k)\left( H_o(\omega_k, \theta) - \hat{H}_o(\omega_k) \right) = w_o(\omega_k)\left( B_o(\omega_k, \beta_o)\, A^{-1}(\omega_k, \alpha) - \hat{H}_o(\omega_k) \right) \qquad (6)$$
where the scalar weighting function $w_o(\omega_k)$ is introduced. This frequency- and output-dependent weighting function allows taking into account data quality differences that may exist between different outputs. The equation errors for all outputs and all frequency lines are combined in the following scalar cost function:

$$\ell_{\mathrm{NLS}}(\theta) = \sum_{o=1}^{l} \sum_{k=1}^{N_f} \mathrm{tr}\left\{ \left( \varepsilon_o^{\mathrm{NLS}}(\omega_k, \theta) \right)^{H} \varepsilon_o^{\mathrm{NLS}}(\omega_k, \theta) \right\} \qquad (7)$$
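A minimal sketch of evaluating this cost is given below (the array name and layout are assumptions of this example; `errors` would hold the equation errors of (6) for all outputs and frequency lines). It exploits the fact, noted next, that each trace term is just the squared 2-norm of a row vector:

```python
import numpy as np

def nls_cost(errors):
    """NLS cost of Eq. (7): sum over outputs and frequency lines of tr{eps^H eps}.

    errors : complex array of shape (l, Nf, m) with errors[o, k, :] = eps_o^NLS(omega_k, theta).
    For a 1 x m row vector eps, tr{eps^H eps} equals the squared 2-norm of eps.
    """
    return float(np.sum(np.abs(errors) ** 2))
```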
where $(\bullet)^{H}$ is the complex conjugate transpose (Hermitian) of a matrix and $\mathrm{tr}\{\bullet\}$ is the trace of a matrix (the sum of the elements on the main diagonal). In fact, the "trace" notation boils down to computing the squared 2-norm of the row vector $\varepsilon_o^{\mathrm{NLS}}(\omega_k, \theta)$. The cost function is minimised by setting the derivatives of (7) with respect to the unknown model coefficients $\theta$ equal to zero. It is obvious that this leads to non-linear equations when expression (6) is used for the equation errors. This non-linear least-squares problem can be approximated by a (sub-optimal) linear least-squares one by right-multiplying (6) with the denominator matrix polynomial $A(\omega_k, \alpha)$, yielding equation errors $\varepsilon_o^{\mathrm{LS}}(\omega_k, \theta) \in \mathbb{C}^{1 \times m}$ that are linear in the parameters:
$$\varepsilon_o^{\mathrm{LS}}(\omega_k, \theta) = w_o(\omega_k)\left( B_o(\omega_k, \beta_o) - \hat{H}_o(\omega_k)\, A(\omega_k, \alpha) \right) = w_o(\omega_k) \sum_{r=0}^{p} \left( \Omega_r(\omega_k)\, \beta_{or} - \Omega_r(\omega_k)\, \hat{H}_o(\omega_k)\, \alpha_r \right) \qquad (8)$$
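Under the same assumed NumPy conventions, the linear-in-the-parameters error row of (8) at one frequency line could be formed as follows (a sketch, not the paper's code):

```python
import numpy as np

def ls_equation_error(omega_k, H_hat_o, beta_o, alpha, dt, w=1.0):
    """Linear least-squares equation error eps_o^LS(omega_k, theta) of Eq. (8).

    H_hat_o : (m,) measured FRF row of output o at omega_k
    beta_o  : (p+1, m) numerator coefficients; alpha : ((p+1)*m, m) denominator coefficients
    w       : scalar weighting w_o(omega_k)
    Returns a (m,) complex row vector that is linear in beta_o and alpha.
    """
    p, m = beta_o.shape[0] - 1, beta_o.shape[1]
    Omega = np.exp(-1j * omega_k * dt * np.arange(p + 1))                   # Eq. (4)
    B_row = Omega @ beta_o                                                   # B_o(omega_k, beta_o)
    A = sum(Omega[r] * alpha[r * m:(r + 1) * m, :] for r in range(p + 1))    # A(omega_k, alpha)
    return w * (B_row - H_hat_o @ A)
```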
The equation errors at all frequency lines are stacked in a matrix $E_o(\theta) \in \mathbb{C}^{N_f \times m}$: