The external-set-guided adaptive mechanism is inspired by EAG-MOEA/D (Cai et al. 2014), so this section discusses the similarities and differences between aMOMA-SA and EAG-MOEA/D. Both algorithms adopt adaptive selection of solutions. However, they differ in the following aspects:
1. The frameworks of aMOMA-SA and EAG-MOEA/D are different.
(a) aMOMA-SA is essentially a multiobjective memetic algorithm, which uses external-set-guided selection for its local search (simulated annealing), whereas EAG-MOEA/D has no local search and uses the external set only to guide the global search.
(b) In EAG-MOEA/D, all generated solutions are put into a temporary population L, which is later used to update the external set A. In aMOMA-SA, by contrast, the update of population L is based on the Boltzmann criterion (see Sect. 4.2.4).
2. The adaptive mechanisms for choosing subproblems are different. In EAG-MOEA/D, the probability of choosing a subproblem is calculated from the contributions of each subproblem to the external set over a fixed number of recent generations. In aMOMA-SA, the probability is instead calculated from the contributions of each subproblem to the current external set, as sketched below.
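As a hedged illustration of this idea (not code from either paper), the sketch below turns per-subproblem contributions to the external set into selection probabilities; the array `contrib`, the smoothing term `eps`, and the function name are assumptions chosen for clarity.

```python
import numpy as np

def selection_probabilities(contrib, eps=1e-3):
    """Turn per-subproblem contributions to the external set into
    selection probabilities: subproblems that contributed more archive
    members are selected more often; eps keeps every subproblem
    selectable even with zero contribution."""
    weights = np.asarray(contrib, dtype=float) + eps
    return weights / weights.sum()

# Example: subproblem 2 contributed the most solutions to the archive,
# so it receives the largest selection probability.
p = selection_probabilities([1, 0, 5, 2])
chosen = np.random.choice(len(p), size=4, p=p)  # sampled subproblem indices
```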
4 The proposed multi-objective memetic algorithms
Our multi-objective memetic algorithm with simulated annealing as its local search method (MOMA-SA) decomposes a MOP into N single-objective optimization subproblems by aggregating the objective functions with different weight vectors. In principle, we can adopt any aggregation method for this purpose. For simplicity, we use the weighted sum approach with N weight vectors:
$\lambda^j = (\lambda^j_1, \ldots, \lambda^j_m)$, $j = 1, \ldots, N$, (2)

where $\lambda^j \in \mathbb{R}^m_+$, $\sum_{i=1}^{m} \lambda^j_i = 1$, and $m$ is the number of objectives. Subproblem $k$ is:

minimize $g^{\mathrm{ws}}_k(x) = \sum_{i=1}^{m} \lambda^k_i f_i(x)$ subject to $x \in \Omega$, (3)

where $\Omega$ is the feasible decision space.
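As a minimal Python sketch of Eq. 3 (not from the paper), the aggregation can be evaluated as follows; the toy objective functions and the weight vector are illustrative assumptions.

```python
import numpy as np

def g_ws(x, lam_k, objectives):
    """Weighted sum aggregation of Eq. 3: sum_i lam_k[i] * f_i(x)."""
    f = np.array([f_i(x) for f_i in objectives])  # the m objective values
    return float(np.dot(lam_k, f))

# Toy example with m = 2 objectives over a real-valued x.
objectives = [lambda x: float(np.sum(x ** 2)),
              lambda x: float(np.sum((x - 1.0) ** 2))]
lam_k = np.array([0.3, 0.7])                      # weights sum to 1
value = g_ws(np.array([0.5, 0.5]), lam_k, objectives)
```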
For each $k = 1, \ldots, N$, the set $B(k)$ stores the indices of the $T$ weight vectors closest to $\lambda^k$ in terms of Euclidean distance. Subproblem $i$ is defined as a neighbour of subproblem $k$ when $i \in B(k)$.
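The neighbourhood structure can be computed once from the weight vectors; below is a small sketch assuming the weights are stored as an N × m NumPy array.

```python
import numpy as np

def neighbourhoods(weights, T):
    """B(k) for every k: indices of the T weight vectors closest to
    lambda^k under Euclidean distance (k itself is included, since its
    own distance is zero)."""
    diff = weights[:, None, :] - weights[None, :, :]  # N x N x m
    dist = np.linalg.norm(diff, axis=2)               # pairwise distances
    return np.argsort(dist, axis=1)[:, :T]            # N x T index matrix

# Example: N = 5 uniformly spread bi-objective weights, T = 3.
a = np.linspace(0.0, 1.0, 5)
W = np.column_stack([a, 1.0 - a])
B = neighbourhoods(W, T=3)  # B[k] lists the neighbours of subproblem k
```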
The following solution sets are maintained during the evolutionary process:
– Decomposition population $P = \{x^1, \ldots, x^k, \ldots, x^N\}$, where $x^k$ is the best solution associated with subproblem $k$.
– External set (dominance archive) A, which contains N solutions selected by the fast non-dominated sorting and crowding-distance assignment of NSGA-II (Deb 2001; Deb et al. 2002); a compact sketch of this truncation follows the list.
– L, which stores solutions after undergoing local search.
– Y, which stores solutions after undergoing global search.
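The following is a compact, hedged sketch of that NSGA-II-style truncation (minimization assumed), not the authors' implementation; `F` holds the objective vectors of the candidate solutions.

```python
import numpy as np

def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective
    and strictly better in at least one."""
    return bool(np.all(a <= b) and np.any(a < b))

def crowding_distance(F):
    """NSGA-II crowding distance for one front of objective vectors."""
    n, m = F.shape
    d = np.zeros(n)
    for i in range(m):
        order = np.argsort(F[:, i])
        d[order[0]] = d[order[-1]] = np.inf          # keep boundary points
        span = float(F[order[-1], i] - F[order[0], i]) or 1.0
        for j in range(1, n - 1):
            d[order[j]] += (F[order[j + 1], i] - F[order[j - 1], i]) / span
    return d

def truncate_archive(F, N):
    """Indices of N solutions kept by nondominated rank, ties broken
    by crowding distance (most isolated kept first)."""
    F = np.asarray(F, dtype=float)
    remaining, selected = list(range(len(F))), []
    while remaining and len(selected) < N:
        front = [i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
        if len(selected) + len(front) <= N:
            selected += front
        else:
            order = np.argsort(-crowding_distance(F[front]))
            selected += [front[j] for j in order[:N - len(selected)]]
        remaining = [i for i in remaining if i not in front]
    return selected
```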
The pseudocode of the framework is presented in Algorithm 1. It works as follows:
Step 1 Initialization: Initialize P and A.
Step 2 Local Search: Use a mechanism (Sect. 4.3) to select solutions from P to undergo local search; the newly generated solutions are stored in L.
Step 3 Global Search: Perform global search on P to generate a new solution set Y.
Step 4 Update of Population and External Set: Use L and
Y to update P and A.
Step 5 Stopping Criterion: If the preset criteria are satisfied, output A. Otherwise, go to Step 2.
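The loop below is a minimal Python sketch of this framework; every operator is passed in by the caller, as the arguments merely stand in for the procedures described in this section (this is not the authors' pseudocode).

```python
def moma_sa(initialize, select_for_ls, local_search, global_search, update,
            max_gen):
    """Skeleton of Steps 1-5 of Algorithm 1; each argument is one of the
    operators detailed in Sects. 4.1-4.4, supplied by the caller."""
    P, A = initialize()                         # Step 1: population + archive
    for _ in range(max_gen):                    # Step 5: stopping criterion
        chosen = select_for_ls(P, A)            # Sect. 4.3 selection
        L = [local_search(x) for x in chosen]   # Step 2: simulated annealing
        Y = global_search(P)                    # Step 3: variation on P
        P, A = update(P, A, L, Y)               # Step 4: update P and A
    return A                                    # output the external set
```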
More detailed descriptions of Steps 1–4 of Algorithm 1
are given as follows:
4.1 Initialization
A MOP is first decomposed into N subproblems based on Eqs. 2 and 3. The decomposition population P is randomly generated and assigned to the external set A. The T closest neighbours, in terms of Euclidean distance, of each weight vector are determined. Therefore, subproblem $i$ has $T$ neighbours, denoted as $B(i) = \{i_1, \ldots, i_T\}$. The whole process of initialization is shown in Step 1.
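A minimal sketch of this step for the bi-objective case follows; the uniform weight grid and the random real-valued encoding are illustrative assumptions (the paper targets combinatorial problems), and the neighbourhoods B(i) are computed as in the earlier sketch.

```python
import numpy as np

def init_weights(N):
    """N uniformly spread bi-objective weight vectors (rows sum to 1)."""
    a = np.linspace(0.0, 1.0, N)
    return np.column_stack([a, 1.0 - a])

def initialize(N, n_vars, rng=np.random.default_rng(0)):
    """Step 1: random population P, with the external set A starting
    as a copy of P (illustrative real-valued encoding)."""
    weights = init_weights(N)
    P = rng.random((N, n_vars))
    A = P.copy()
    return weights, P, A
```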
4.2 Local search
The procedures of local search are demonstrated in Step 2
as follows. In Step 2a, using one of the following mecha-
nisms, N solutions are chosen from P for local search. This
is actually to address the issue of the frequency of refine-
ment and individual subset selection in Sudholt (2006). In
this work, one baseline and two adaptive mechanisms for
selecting individuals for local search are adopted as follows.