1.3 SIZE OF PROBLEMS
One obvious measure of the complexity of a programming problem is its size,
measured in terms of the number of unknown variables or the number of constraints.
As might be expected, the size of problems that can be effectively solved has been
increasing with advancing computing technology and with advancing theory. With
present computing capabilities, it is reasonable to distinguish three
classes of problems: small-scale problems having about five or fewer unknowns
and constraints; intermediate-scale problems having from about five to a hundred
or a thousand variables; and large-scale problems having perhaps thousands or even
millions of variables and constraints. This classification is not entirely rigid, but
it reflects at least roughly not only size but the basic differences in approach that
accompany different size problems. As a rough rule, small-scale problems can be
solved by hand or by a small computer. Intermediate-scale problems can be solved
on a personal computer with general purpose mathematical programming codes.
Large-scale problems require sophisticated codes that exploit special structure and
usually require large computers.
Much of the basic theory associated with optimization, particularly in nonlinear
programming, is directed at obtaining necessary and sufficient conditions satisfied
by a solution point, rather than at questions of computation. This theory involves
mainly the study of Lagrange multipliers, including the Karush-Kuhn-Tucker
Theorem and its extensions. It tremendously enhances insight into the philosophy
of constrained optimization and provides satisfactory basic foundations for other
important disciplines, such as the theory of the firm, consumer economics, and
optimal control theory. The interpretation of Lagrange multipliers that accom-
panies this theory is valuable in virtually every optimization setting. As a basis for
computing numerical solutions to optimization, however, this theory is far from
adequate, since it does not consider the difficulties associated with solving the
equations resulting from the necessary conditions.
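As a sketch of what this theory yields, consider a problem of minimizing f(x) subject to h(x) = 0 and g(x) ≤ 0. Under suitable regularity assumptions, the first-order necessary (Karush-Kuhn-Tucker) conditions referred to above take the standard form below, with λ and μ denoting the Lagrange multiplier vectors:

```latex
% Karush-Kuhn-Tucker first-order necessary conditions at a solution point x*
\begin{align*}
\nabla f(x^*) + \lambda^{\top}\nabla h(x^*) + \mu^{\top}\nabla g(x^*) &= 0 \\
h(x^*) = 0, \qquad g(x^*) &\le 0 \\
\mu \ge 0, \qquad \mu^{\top} g(x^*) &= 0
\end{align*}
```

It is this system of (generally nonlinear) equations and inequalities that must be solved if one attempts to compute a solution directly from the necessary conditions.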
If it is acknowledged from the outset that a given problem is too large and
too complex to be efficiently solved by hand (and hence it is acknowledged that
a computer solution is desirable), then one’s theory should be directed toward
development of procedures that exploit the efficiencies of computers. In most cases
this leads to the abandonment of the idea of solving the set of necessary conditions
in favor of the more direct procedure of searching through the space (in an intelligent
manner) for ever-improving points.
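The idea of searching through the space for ever-improving points can be sketched, for an unconstrained problem, as a simple steepest-descent iteration. This is a minimal illustration only; the test function, step size, and tolerance below are hypothetical choices, not a prescription from the text:

```python
import numpy as np

def descend(grad, x, step=0.1, tol=1e-8, max_iter=10_000):
    """Search for ever-improving points by repeatedly stepping downhill."""
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # approximately stationary: stop
            break
        x = x - step * g              # move to an improving point
    return x

# Minimize f(x, y) = (x - 1)^2 + 2*(y + 3)^2, whose minimizer is (1, -3).
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 3) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 3)])
x_star = descend(grad, np.zeros(2))
```

Each iterate improves on the last, and no attempt is made to solve the stationarity equations in closed form; this is the computational shift the paragraph above describes.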
Today, search techniques can be effectively applied to more or less general
nonlinear programming problems. Problems of great size, large-scale programming
problems, can be solved if they possess special structural characteristics, especially
sparsity, that can be exploited by a solution method. Today linear programming
software packages are capable of automatically identifying sparse structure within
the input data and taking advantage of this sparsity in numerical computation. It
is now not uncommon to solve linear programs of up to a million variables and
constraints, as long as the structure is sparse. Problem-dependent methods, where
the structure is not automatically identified, are largely directed to transportation
and network flow problems as discussed in Chapter 6.
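To illustrate the role of sparsity, the sketch below builds a linear program with 1000 variables whose constraint matrix has only two nonzeros per row, stores it in sparse form, and passes it to a general-purpose solver. The particular problem (chained constraints x_i + x_{i+1} ≤ 1 with 0 ≤ x_i ≤ 1, maximizing the sum) is a made-up example; SciPy's `linprog` with the HiGHS method is used here as one representative solver that accepts sparse input:

```python
import numpy as np
from scipy import sparse
from scipy.optimize import linprog

n = 1000
c = -np.ones(n)  # maximize sum(x) by minimizing -sum(x)

# Bidiagonal constraint matrix for x_i + x_{i+1} <= 1: each row has only
# 2 nonzeros out of 1000 columns, so a sparse format avoids storing the
# roughly one million zeros a dense matrix would hold.
A = sparse.diags([np.ones(n - 1), np.ones(n - 1)], offsets=[0, 1],
                 shape=(n - 1, n), format="csr")
b = np.ones(n - 1)

res = linprog(c, A_ub=A, b_ub=b, bounds=(0, 1), method="highs")
```

The optimal value here is 500 (e.g., set every other variable to 1); the point of the sketch is that the solver works directly with the sparse representation rather than the dense matrix.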