the series of papers by Johnson and colleagues [86]. See
Refs. 1 and 3 for more details of SA.
2.6. Tabu search
TS tries to enhance LS by using the memory of the
previous search. Basically, the best solution in N(x)\({x} ∪
T) is chosen as the next solution, where the set T, called the
tabu list (or short-term memory), is a set of solutions that
includes those most recently visited. Within this restricted
neighborhood, a move to a new solution is always executed,
even if the current solution is already locally optimal.
Introducing T in this way avoids cycling of short period.
Another type of memory, called long-term memory, is often
employed in the framework of TS; it memorizes past search
information such as how frequently each decision variable
has been changed, how long each decision variable has kept
a certain value, and so on. This memory is used to direct the
search toward unvisited regions (i.e., diversification).
TS was proposed by Glover [54, 55]. See Refs. 59,
62–64, and 75 for details of TS.
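As an illustration, the tabu list mechanism described above can be sketched as follows. This is a minimal Python sketch on a toy bit-flip problem; the function names, the fixed-length tabu list, and the toy objective are our own simplifying assumptions, not taken from the cited papers.

```python
from collections import deque

def tabu_search(f, x0, neighbors, max_iters=100, tabu_size=7):
    """Minimize f: repeatedly pick the best solution in N(x) \ ({x} ∪ T)."""
    x, best = x0, x0
    tabu = deque([x0], maxlen=tabu_size)  # short-term memory T
    for _ in range(max_iters):
        candidates = [y for y in neighbors(x) if y != x and y not in tabu]
        if not candidates:
            break
        x = min(candidates, key=f)  # move even if worse than the current x
        tabu.append(x)
        if f(x) < f(best):
            best = x
    return best

# toy problem: minimize the number of 1-bits; neighborhood = single bit flips
def flips(x):
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

print(tabu_search(sum, (1, 1, 0, 1), flips))  # → (0, 0, 0, 0)
```

Note that the search keeps moving after reaching the local (here global) optimum; T prevents it from immediately revisiting the solutions it just left.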
3. General Framework of Metaheuristics
In this section, we explain essential ideas used in
metaheuristic algorithms within a generalized framework
of local search. In this paper, we regard a metaheuristic
algorithm as a method of repeating the following two basic
steps.
I. Generate initial solutions x.
II. Improve x by using (generalized) LS.
A basic framework of LS of Step II was already given
in Section 2.2. Here we define LS in the following more
general framework.
LS: Start from the given initial solution x, and repeat
replacing x with the solution in its neighborhood N(x) (i.e.,
a move) determined by a specified move strategy, until
some stopping criterion is satisfied.
We call the basic LS explained in Section 2.2 simple
LS if we want to distinguish it from more general LS. To
implement algorithm LS, therefore, we must specify:
A. the neighborhood N(x),
B. the evaluation function f of solutions,
C. the move strategy, and
D. the stopping criterion.
Rule A specifies how to generate candidate solutions from
the current solution. Rule B specifies the goodness criterion
for the solutions in the neighborhood. Rule C specifies the
order of solutions to be searched in the neighborhood, and
the rule to choose the next solution. Rule D specifies when
algorithm LS terminates. For example, in the simple LS, the
rule for C is to set x := x′ if f(x′) < f(x) holds, and the rule
for D is to stop if x is locally optimal (i.e., f(x′) ≥ f(x) for
all x′ ∈ N(x)).
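To make the roles of components A–D concrete, the generalized LS above can be sketched as follows. This is a minimal Python sketch; the function names are our own, and the plugged-in rules reproduce the simple LS of Section 2.2 on a toy problem.

```python
def local_search(x, neighborhood, f, choose_move, should_stop):
    """Generalized LS: components A (neighborhood), B (evaluation f),
    C (move strategy), and D (stopping criterion) are plugged in."""
    while True:
        candidates = neighborhood(x)       # A: generate candidate solutions
        if should_stop(x, candidates, f):  # D: when to terminate
            return x
        x = choose_move(x, candidates, f)  # C: which candidate to move to

# rules of the simple LS: move to an improving neighbor (first improvement),
# stop when x is locally optimal
def first_improvement(x, candidates, f):
    return next(y for y in candidates if f(y) < f(x))

def locally_optimal(x, candidates, f):
    return all(f(y) >= f(x) for y in candidates)

# toy problem: minimize |x| over the integers with neighborhood {x-1, x+1}
sol = local_search(5, lambda x: [x - 1, x + 1], abs,
                   first_improvement, locally_optimal)
print(sol)  # → 0
```

Swapping in a different `choose_move` or `should_stop` yields other move strategies (e.g., best improvement, or the tabu rule of Section 2.6) without touching the loop itself.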
Most ideas of metaheuristics can then be regarded as
devices applied to the above constituents I, II-A, II-B, II-C,
and II-D. Based on this observation, we explain various
metaheuristic ideas in the following subsections.
Note that ideas in II-A (the neighborhood) will be explained
separately in Section 5, as they are usually rather compli-
cated.
3.1. I: Initial solutions
One of the simplest ways of generating initial solu-
tions is to use random solutions. This results in the random
MLS as already explained in Section 2.3. However, it is
expected that the solution quality will be improved if the
search is started from better solutions. The following ideas
are mainly based on this consideration.
GRASP uses initial solutions generated by a random-
ized greedy method. In a greedy method, a feasible solution
is usually constructed step by step by choosing, at each step,
the element with the best local evaluation. Although this
usually yields better initial solutions than random ones, the
variety of solutions constructed by this method is quite
limited, which is not desirable in the framework of MLS.
To overcome this, in GRASP, a feasible solution is con-
structed by randomly choosing, in each step, an element
from a candidate list composed of those elements having
good evaluations.
GRASP was proposed by Feo and colleagues [e.g.,
44, 47], and applied to various combinatorial optimization
problems by themselves and others [e.g., 43, 46, 97, 99,
137]. See Ref. 45 for details. The basic idea of the random-
ized greedy method had already appeared in early papers
such as Ref. 74, though LS was not incorporated.
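The construction step of GRASP can be sketched as follows. This is a minimal Python sketch on a toy knapsack instance; the fixed-size candidate list and all names are our own simplifying assumptions (implementations often use a quality-threshold candidate list instead).

```python
import random

def randomized_greedy(elements, score, feasible, rcl_size=3, rng=random):
    """GRASP construction: at each step, choose randomly from the
    restricted candidate list (RCL) of the best-evaluated elements."""
    solution = []
    remaining = list(elements)
    while remaining:
        candidates = [e for e in remaining if feasible(solution, e)]
        if not candidates:
            break
        candidates.sort(key=score, reverse=True)
        chosen = rng.choice(candidates[:rcl_size])  # random pick from the RCL
        solution.append(chosen)
        remaining.remove(chosen)
    return solution

# toy knapsack: items are (value, weight) pairs, capacity 10
items = [(10, 4), (7, 3), (5, 2), (8, 6), (3, 1)]
cap = 10
fits = lambda sol, e: sum(w for _, w in sol) + e[1] <= cap
ratio = lambda e: e[0] / e[1]
print(randomized_greedy(items, ratio, fits, rcl_size=2))
```

With `rcl_size=1` this reduces to the ordinary greedy method; larger values trade a little construction quality for the solution variety that MLS needs.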
In ILS, the initial solutions are generated by slightly
perturbing a good solution x_seed found during the past
search. An initial solution is usually generated by randomly
choosing a solution in the neighborhood N′(x_seed), where
a neighborhood different from the N of LS is often used.
ILS was proposed by Johnson [85]. While the incumbent
solution (i.e., the best feasible solution found so far) is used
as x_seed in the basic ILS of Ref. 85, there are variants of
ILS, called chained local optimization or large-step Markov
chains, in which x_seed is chosen randomly according to a
rule similar to SA [104, 108–110]. (Historically, the pre-
prints of Martin and colleagues [109] inspired Johnson to
propose ILS, a simplified variant of the Martin–Otto–Felten
algorithm, as described on p. 292 of Ref. 87.)
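The basic ILS loop can be sketched as follows. This is a minimal Python sketch on a toy landscape; all names, the landscape, and the perturbation range are our own assumptions, with N′ realized as a jump larger than the LS neighborhood.

```python
import random

def iterated_local_search(x0, local_opt, perturb, f, iters=50, rng=random):
    """Basic ILS: perturb the incumbent x_seed and restart LS from there."""
    x_seed = local_opt(x0)  # incumbent solution
    for _ in range(iters):
        x = local_opt(perturb(x_seed, rng))  # random solution in N'(x_seed)
        if f(x) < f(x_seed):
            x_seed = x  # keep the best solution found so far
    return x_seed

# toy landscape over indices 0..9 with several local optima
landscape = [5, 3, 4, 6, 2, 7, 1, 8, 0, 9]
f = lambda i: landscape[i]

def local_opt(i):
    # simple LS with neighborhood {i - 1, i + 1}, clamped to valid indices
    while True:
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
        best = min(nbrs, key=f)
        if f(best) >= f(i):
            return i
        i = best

def perturb(i, rng):
    # N'(x_seed): a jump of 2-3 positions, larger than the LS neighborhood
    j = i + rng.choice([-3, -2, 2, 3])
    return min(max(j, 0), len(landscape) - 1)

print(iterated_local_search(0, local_opt, perturb, f))
```

Plain LS from index 0 stops at the local optimum at index 1; the perturbation lets the search escape it and, with high probability over the iterations, reach the global optimum at index 8.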