flow solutions. Detailed reviews of effective flow solvers for the Navier–Stokes equations are
given by Nelson [119], Pueyo [141], and Walsh [176]. Among the fastest algorithms are the
Newton–Krylov solvers [175, 9, 7, 142, 183, 22]. For example, promising results are presented
by Pueyo and Zingg [142], who used the preconditioned generalized minimal residual (GMRES)
Krylov subspace method [152] in conjunction with an inexact-Newton strategy. A critical com-
ponent in this approach is a fast solution of the linear system at each Newton iteration, which
is provided by the preconditioned GMRES method. For the aerodynamic shape optimization
problem, such Newton–Krylov solvers are very appealing: not only do they provide fast
solutions to the flow equations, but the preconditioned GMRES method can also be used to
compute objective function gradients.
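To make the inexact-Newton strategy concrete, the following Python sketch couples a Newton iteration with ILU-preconditioned GMRES from SciPy. The nonlinear residual, the finite-difference Jacobian, and the tolerances are illustrative assumptions standing in for a discretized flow residual; this is a minimal sketch of the general technique, not the solver of Pueyo and Zingg.

# Minimal sketch of an inexact-Newton-Krylov iteration (hypothetical
# residual function; not the flow solver discussed in the text).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def residual(q):
    """Placeholder nonlinear residual R(q); a real solver would assemble
    the discretized Navier-Stokes residual here."""
    n = q.size
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    return A @ q + 0.1 * q**3 - 1.0

def jacobian(q):
    """Placeholder Jacobian dR/dq, built column by column with finite
    differences purely for illustration."""
    n = q.size
    eps = 1e-7
    r0 = residual(q)
    cols = []
    for j in range(n):
        dq = np.zeros(n)
        dq[j] = eps
        cols.append((residual(q + dq) - r0) / eps)
    return sp.csr_matrix(np.column_stack(cols))

def newton_krylov(q0, tol=1e-10, max_newton=30):
    q = q0.copy()
    for _ in range(max_newton):
        r = residual(q)
        if np.linalg.norm(r) < tol:
            break
        J = jacobian(q)
        # Incomplete LU factorization used as a preconditioner for GMRES
        ilu = spla.spilu(J.tocsc(), drop_tol=1e-4)
        M = spla.LinearOperator(J.shape, matvec=ilu.solve)
        # Inexact-Newton step: solve J dq = -r only loosely (rtol=1e-2)
        dq, info = spla.gmres(J, -r, M=M, rtol=1e-2)  # use tol= on SciPy < 1.12
        q += dq
    return q

q = newton_krylov(np.zeros(50))
print(np.linalg.norm(residual(q)))

The loose inner tolerance is what makes each Newton step inexact: the linear system is solved only accurately enough to sustain the outer nonlinear convergence.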
Numerical optimization methods, which control the solution of the optimization problem,
can be divided into the following four categories: 1) classical direct search methods, 2) stochas-
tic methods, 3) gradient-based methods, and 4) fully-coupled methods. An insightful summary
of direct search methods is provided by Lewis et al. [99]. Among the most well-known di-
rect search methods is the simplex method of Nelder and Mead. Duvigneau and Visonneau [42]
investigate the performance of this method for aerodynamic and hydrodynamic shape optimiza-
tion problems, including the optimization of a high-lift configuration. The flow is governed by
the two-dimensional incompressible Navier–Stokes equations. Although this approach provides
a robust optimization method, it converges slowly to the optimal solution and requires a
large number of flow evaluations.
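For illustration, the sketch below applies SciPy's Nelder–Mead implementation to a toy objective; the Rosenbrock function and the tolerances are assumptions standing in for an expensive flow-based objective, and the evaluation counter highlights why the cost of this approach is dominated by the number of flow solutions.

# Minimal sketch of derivative-free optimization with the Nelder-Mead
# simplex method, using a toy objective in place of a flow solution.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """Toy stand-in for an expensive aerodynamic objective; each call
    would normally require a full flow solution."""
    return (x[0] - 1.0) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2  # Rosenbrock

result = minimize(objective, x0=np.array([-1.2, 1.0]), method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8})

# result.nfev counts objective evaluations -- the dominant cost when each
# evaluation is a Navier-Stokes solution.
print(result.x, result.nfev)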
Simulated annealing [94] and genetic algorithms [68] are good examples of stochastic meth-
ods. The latter, in particular, have received considerable attention for application in aero-
dynamic shape optimization problems; see, for example, Giannakoglou [59], Marco et al. [108],
Obayashi [134], and Oyama [136]. The fundamental concepts of genetic algorithms are based
on the process of natural selection. These algorithms are capable of finding a global optimum,
are well-suited for problems with multiple and non-smooth objectives, and can be used with
categorical design variables. The primary disadvantage of genetic algorithms is high compu-
tational cost. Tse and Chan [170] and Holst and Pulliam [79] provide optimization examples
where up to 10,000 flow solutions are required to reach convergence.
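The following minimal sketch of a real-coded genetic algorithm (tournament selection, blend crossover, Gaussian mutation) illustrates these concepts on a toy multimodal objective; the Rastrigin function, the population size, and the operator parameters are assumptions chosen for illustration and are not taken from the cited studies.

# Minimal sketch of a real-coded genetic algorithm with a toy objective
# standing in for a flow solution; parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    """Toy multimodal objective (Rastrigin); lower is better."""
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

pop_size, n_vars, n_gen = 40, 4, 100   # pop_size * n_gen objective evaluations
pop = rng.uniform(-5.12, 5.12, size=(pop_size, n_vars))

for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    # Tournament selection: the fitter of two random individuals survives
    idx = rng.integers(pop_size, size=(pop_size, 2))
    parents = pop[np.where(scores[idx[:, 0]] < scores[idx[:, 1]],
                           idx[:, 0], idx[:, 1])]
    # Arithmetic (blend) crossover between pairs of parents
    alpha = rng.uniform(size=(pop_size, 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    # Gaussian mutation applied to a small fraction of genes
    mask = rng.uniform(size=children.shape) < 0.1
    pop = children + mask * rng.normal(scale=0.3, size=children.shape)

# Elitism is omitted for brevity, so the best member of the final
# population is reported rather than the best ever found.
best = min(pop, key=fitness)
print(best, fitness(best))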
Gradient-based methods [66, 36, 133] are potentially the most effective for aerodynamic
shape optimization problems, since significant design improvements can be obtained in
relatively few evaluations of the objective function and gradient. However, these methods
require a smooth design space³ and inherently converge to a local optimum. The gradient can
be used directly to determine a search direction for the optimization process, which is the case

³Recently, Moreau and Aeyels [115] examined the optimization of discontinuous objective functions using the
concept of generalized gradients.