PREFACE
also plays a large role, although its actual computation will be treated in the second volume of this series.
The first two chapters set the stage not only for the present volume but for the whole series. The first is devoted to the mathematical background—matrices, vectors, and linear algebra and analysis. The second chapter discusses the realities of matrix computations on computers.
The third chapter is devoted to the LU decomposition—the result of Gaussian elimination. This extraordinarily flexible algorithm can be implemented in many different ways, and the resulting decomposition has innumerable applications. Unfortunately, this flexibility has a price: Gaussian elimination often quivers on the edge of instability. The perturbation theory and rounding-error analysis required to understand why the algorithm works so well (and our understanding is still imperfect) are presented in the last two sections of the chapter.
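To make the chapter's subject concrete, here is a minimal NumPy sketch of Gaussian elimination with partial pivoting, the variant most often used to tame the instability mentioned above. The function name and code are my own illustration, not the book's pseudocode.

```python
import numpy as np

def lu_partial_pivoting(A):
    """Gaussian elimination with partial pivoting: P A = L U.

    Returns a permutation matrix P, a unit lower triangular L whose
    subdiagonal entries are the elimination multipliers, and an upper
    triangular U.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    U = A.copy()
    for k in range(n - 1):
        # Pivot: bring the largest entry in column k to the diagonal,
        # so every multiplier has magnitude at most 1.
        p = k + np.argmax(np.abs(U[k:, k]))
        if p != k:
            U[[k, p], :] = U[[p, k], :]
            P[[k, p], :] = P[[p, k], :]
            L[[k, p], :k] = L[[p, k], :k]
        # Eliminate below the pivot, recording the multipliers in L.
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U
```

Once P, L, and U are in hand, a linear system Ax = b reduces to two triangular solves, which is one of the "innumerable applications" the chapter has in mind.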
The fourth chapter treats the QR decomposition—the factorization of a matrix into the product of an orthogonal matrix and an upper triangular matrix. Unlike the LU decomposition, the QR decomposition can be computed in two ways: by the Gram-Schmidt algorithm, which is old, and by the method of orthogonal triangularization, which is new. The principal application of the decomposition is the solution of least squares problems, which is treated in the second section of the chapter. The last section treats the updating problem—the problem of recomputing a decomposition when the original matrix has been altered. The focus here is on the QR decomposition, although other updating algorithms are briefly considered.
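The Gram-Schmidt route and its use in least squares can be sketched in a few lines of NumPy. This is the modified (column-by-column) form of Gram-Schmidt, chosen here for its better numerical behavior; the function names are mine, not the book's.

```python
import numpy as np

def mgs_qr(A):
    """Modified Gram-Schmidt QR: A = Q R, with Q having orthonormal
    columns and R upper triangular. Assumes A is m x n, m >= n, and
    has full column rank."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = A.copy()
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(Q[:, k])
        Q[:, k] /= R[k, k]
        # Orthogonalize the remaining columns against q_k.
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ Q[:, j]
            Q[:, j] -= R[k, j] * Q[:, k]
    return Q, R

def lstsq_via_qr(A, b):
    """Least squares via QR: minimize ||A x - b|| by solving the
    triangular system R x = Q^T b."""
    Q, R = mgs_qr(A)
    return np.linalg.solve(R, Q.T @ b)
```

Orthogonal triangularization (Householder reflections) produces the same R but builds Q implicitly as a product of reflectors, which is the more stable of the two routes.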
The last chapter is devoted to decompositions that can reveal the rank of a matrix and produce approximations of lower rank. The issues stand out most clearly when the decomposition in question is the singular value decomposition, which is treated in the first section. The second treats the pivoted QR decomposition and a new extension, the QLP decomposition. The third section treats the problem of estimating the norms of matrices and their inverses—the so-called problem of condition estimation. The estimators are used in the last section, which treats rank-revealing URV and ULV decompositions. These decompositions in some sense lie between the pivoted QR decomposition and the singular value decomposition and, unlike either, can be updated.
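The role of the singular value decomposition in rank determination and low-rank approximation can be illustrated with a short NumPy sketch: truncating the SVD to its k largest singular values gives the best rank-k approximation in the 2-norm (the Eckart-Young result). The function name is my own.

```python
import numpy as np

def best_rank_k(A, k):
    """Best rank-k approximation of A in the 2-norm, obtained by
    keeping the k largest singular values of the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]
```

The same singular values expose rank (small trailing values signal near rank deficiency) and give the 2-norm condition number as the ratio of the largest to the smallest, which is what the cheaper condition estimators of the third section approximate without computing the SVD.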
Many methods treated in this volume are summarized by displays of pseudocode (see the list of algorithms following the table of contents). These summaries are for purposes of illustration and should not be regarded as finished implementations. In the first place, they often leave out error checks that would clutter the presentation. Moreover, it is difficult to verify the correctness of algorithms written in pseudocode. In most cases, I have checked the algorithms against MATLAB implementations. Unfortunately, that procedure is not proof against transcription errors.
A word on organization. The book is divided into numbered chapters, sections, and subsections, followed by unnumbered subsubsections. Numbering is by section, so that (3.5) refers to the fifth equation in section three of the current chapter. References to items outside the current chapter are made explicitly—e.g., Theorem 2.7, Chapter 1.