η. However, as pointed out in [8], there are computational difficulties associated
with large penalty parameters, due to ill-conditioning. Therefore, most algorithms
using penalty functions solve a sequence of problems (1.25) for an increasing se-
quence of penalty parameters. With each new value of the penalty parameter, an
optimization technique is employed, starting with the optimal solution of prob-
lem (1.25) obtained for the parameter value chosen previously. Such an approach
is sometimes referred to as a sequential unconstrained minimization technique [8].
More details about the penalty function methods can be found in [8].
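As an illustration of this sequential approach, the following Python sketch solves a sequence of penalized problems of the form min_z f(z, x_0) + η p(z, x_0) (the form of the function h defined below) for an increasing sequence of penalty parameters, warm-starting each solve from the previous solution. The functions f and p, the initial penalty parameter, the growth factor, and the stopping rule are illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy.optimize import minimize

def sumt(f, p, z0, x0, eta0=1.0, growth=10.0, tol=1e-6, max_outer=20):
    """Sequential unconstrained minimization: solve min_z f(z, x0) + eta * p(z, x0)
    for an increasing sequence of penalty parameters eta, warm-starting each
    subproblem from the solution of the previous one."""
    z, eta = np.asarray(z0, dtype=float), eta0
    for _ in range(max_outer):
        res = minimize(lambda z: f(z, x0) + eta * p(z, x0), z)
        z = res.x
        # Illustrative stopping rule: the penalty term (a measure of constraint
        # violation) has become negligible.
        if eta * p(z, x0) < tol:
            break
        eta *= growth  # increase the penalty parameter for the next subproblem
    return z, eta

# Illustrative problem: minimize (z1 - 1)^2 + (z2 - x0)^2 subject to z1 + z2 = 1,
# with the quadratic exterior penalty p(z, x0) = (z1 + z2 - 1)^2.
f = lambda z, x0: (z[0] - 1.0) ** 2 + (z[1] - x0) ** 2
p = lambda z, x0: (z[0] + z[1] - 1.0) ** 2
z_opt, eta_final = sumt(f, p, z0=[0.0, 0.0], x0=2.0)
```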
For a given η, the optimization problem (1.25) can be solved by applying the steepest descent method [18]. Let h(z, x_0, η) = f(z, x_0) + η p(z, x_0). Then, the steepest descent direction from z is −∇_z h(z, x_0, η). With the method of steepest descent [18], the values of the optimization variables at the (k+1)-th iteration are obtained by the formula:

z^{k+1} = z^k − α ∇_z h(z^k, x_0, η),    (1.28)
where α > 0 is the step length. In order for the steepest descent method to be successful, it is important to choose the step length α appropriately. One way to do this is to let α = β^m, where β ∈ (0,1) and m ≥ 0 is the smallest nonnegative integer such that there is a sufficient decrease in h(z, x_0, η). This means that:
h(z^k − α ∇_z h(z^k, x_0, η), x_0, η) − h(z^k, x_0, η) < −μ α ‖∇_z h(z^k, x_0, η)‖²,    (1.29)
where μ ∈ (0,1). This strategy, introduced in [3], is an example of a line search, in which one searches on a ray from z^k in a direction in which h(z, x_0, η) is locally decreasing. More details about the method of steepest descent can be found in [44].
Unfortunately, methods based on steepest descent have slow local convergence, even for very simple functions [44]. This is because the steepest descent direction scales with h(z, x_0, η), and therefore the speed of convergence depends on the conditioning and scaling of the problem. A good alternative to the steepest descent method is the conjugate gradient method [28, 1], which has improved local convergence properties. Newton-type methods can also be applied successfully to solve the optimization problem (1.25).
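A minimal Python sketch of iteration (1.28) combined with the backtracking rule (1.29) is given below. The particular penalized objective h, its gradient, the choices β = 0.5 and μ = 10⁻⁴, the initial trial step α = 1, and the stopping tests are illustrative assumptions rather than prescriptions from the text.

```python
import numpy as np

def steepest_descent(h, grad_h, z0, x0, eta, beta=0.5, mu=1e-4,
                     tol=1e-6, max_iter=500):
    """Minimize h(z, x0, eta) by steepest descent (1.28) with step length
    alpha = beta**m chosen by the sufficient-decrease test (1.29)."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        g = grad_h(z, x0, eta)
        if np.linalg.norm(g) < tol:          # small gradient: (near-)stationary point
            break
        alpha, h_z = 1.0, h(z, x0, eta)
        # Backtracking: shrink alpha = beta**m until condition (1.29) holds.
        while h(z - alpha * g, x0, eta) - h_z >= -mu * alpha * np.dot(g, g):
            alpha *= beta
            if alpha < 1e-12:                # safeguard against an endless loop
                break
        z = z - alpha * g                    # steepest descent update (1.28)
    return z

# Illustrative use with h(z, x0, eta) = f(z, x0) + eta * p(z, x0), a hypothetical
# penalized objective (not the book's specific problem):
h = lambda z, x0, eta: (z[0] - 1.0) ** 2 + (z[1] - x0) ** 2 \
    + eta * (z[0] + z[1] - 1.0) ** 2
grad_h = lambda z, x0, eta: np.array([
    2.0 * (z[0] - 1.0) + 2.0 * eta * (z[0] + z[1] - 1.0),
    2.0 * (z[1] - x0) + 2.0 * eta * (z[0] + z[1] - 1.0),
])
z_star = steepest_descent(h, grad_h, z0=[0.0, 0.0], x0=2.0, eta=100.0)
```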
1.1.3.3 Direct Search Methods
Direct search methods do not use or approximate the objective function's gradient, i.e. they are derivative-free optimization methods. They use values of the objective function and constraints at a set of sample points, and use that information to decide where to sample next. More precisely, direct search methods consist in a sequential examination of trial solutions, involving comparison of each trial solution with the best obtained up to that time, together with a strategy for determining (as a function of earlier results) what the next trial solution will be [35]. There are a number of direct search methods for unconstrained optimization (see, for example, [44, 51]). Here, however, the most widely used direct search methods for constrained nonlinear optimization are outlined.
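As a simplified illustration of the idea of comparing trial solutions with the best one obtained so far, the following Python sketch implements a generic compass (coordinate) search, with constraints handled by simply rejecting infeasible trial points. The objective, the feasibility test, and all parameter values are illustrative assumptions; this sketch is not one of the specific constrained direct search methods outlined in the remainder of the section.

```python
import numpy as np

def compass_search(obj, feasible, z0, step=1.0, shrink=0.5, tol=1e-6, max_iter=1000):
    """Generic direct search (compass search): compare trial points along the
    coordinate directions with the best point found so far; if no trial
    improves on the best value, shrink the step size."""
    z_best = np.asarray(z0, dtype=float)
    f_best = obj(z_best)
    n = z_best.size
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):     # +/- coordinate directions
            trial = z_best + step * d
            if feasible(trial) and obj(trial) < f_best:  # reject infeasible trials
                z_best, f_best, improved = trial, obj(trial), True
                break
        if not improved:
            step *= shrink    # no better trial found: refine the sampling scale
    return z_best, f_best

# Illustrative use: minimize a quadratic subject to z >= 0 (feasibility by rejection).
obj = lambda z: (z[0] - 1.0) ** 2 + (z[1] + 0.5) ** 2
feasible = lambda z: np.all(z >= 0.0)
z_best, f_best = compass_search(obj, feasible, z0=[2.0, 2.0])
```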