734 Q. Lin et al. / European Journal of Operational Research 247 (2015) 732–744
utopia vector and the direction error $d_2$ to the weighted vector from the solution $F(x)$ in the objective space, as defined by

\[
\min_{x \in \Omega}\; g(x \mid \lambda, z^*) = d_1 + \theta d_2 \quad (6)
\]

where

\[
d_1 = \frac{\left\| \left( F(x) - z^* \right)^{T} \lambda \right\|}{\left\| \lambda \right\|} \quad (7)
\]

\[
d_2 = \left\| \left( F(x) - z^* \right) - d_1 \frac{\lambda}{\left\| \lambda \right\|} \right\| \quad (8)
\]

and $z^* = (z_1^*, z_2^*, \ldots, z_m^*)$ is the reference point, i.e., $z_i^* = \min\{ f_i(x) \mid x \in \Omega \}$ for each $i = 1, 2, \ldots, m$. In practical implementation, as $z^*$ is unavailable in advance, it is usually replaced by the minimal value of each objective found by the algorithm so far. The generation approaches for uniform weighted vectors $\lambda$ have been introduced in (Li & Zhang, 2009; Zhang & Li, 2007).
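The PBI aggregation in Eqs. (6)–(8) can be sketched as a short function. This is a minimal illustration, not code from the paper; the default penalty $\theta = 5$ is only a commonly used setting in the decomposition literature.

```python
import math

def pbi(F, z_star, lam, theta=5.0):
    """Penalty-based boundary intersection (PBI) aggregation, Eqs. (6)-(8).

    F       -- objective vector F(x) of the solution
    z_star  -- reference point z* (component-wise minima found so far)
    lam     -- weight vector lambda
    theta   -- penalty parameter (theta = 5 is a common choice, not prescribed here)
    """
    diff = [f - z for f, z in zip(F, z_star)]            # F(x) - z*
    lam_norm = math.sqrt(sum(l * l for l in lam))        # ||lambda||
    # d1: projection length of (F(x) - z*) onto the direction of lambda, Eq. (7)
    d1 = abs(sum(d * l for d, l in zip(diff, lam))) / lam_norm
    # d2: perpendicular distance from F(x) to the weight direction, Eq. (8)
    perp = [d - d1 * l / lam_norm for d, l in zip(diff, lam)]
    d2 = math.sqrt(sum(p * p for p in perp))
    return d1 + theta * d2                               # Eq. (6)
```

When $F(x)$ lies exactly on the ray $z^* + t\,\lambda/\|\lambda\|$, the perpendicular term $d_2$ vanishes and the aggregation reduces to the distance $d_1$ alone.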
2.3. Particle swarm optimization
Particle swarm optimization is an interesting nature-inspired metaheuristic originally proposed by Kennedy and Eberhart (1995) for dealing with global optimization problems. By simulating the movement rules of bird flocking and fish schooling, it is very capable of locating the optimal value in a large search space. In PSO, a swarm is composed of a certain number of particles. Each particle represents a potential solution to the optimization problem and is characterized by its position and moving velocity. Here, it is assumed that there are $N$ particles in a swarm. When searching an $n$-dimensional hyperspace, the position of particle $i$ ($i = 1, 2, \ldots, N$) indicates the solution location in the search space, as represented by $x_i = (x_{i1}, x_{i2}, \ldots, x_{in})$. The positional movement of particle $i$ is recorded using its velocity, as described by $v_i = (v_{i1}, v_{i2}, \ldots, v_{in})$. Each particle $i$ memorizes its historically best position, noted by $pbest_i = (p_{i1}, p_{i2}, \ldots, p_{in})$, and the best one among all $pbest_i$ in a swarm is acknowledged as the globally best position $gbest$. Each particle $i$ is evolved by exploiting positional information from the selected global leader and its own personal best to update its velocity and position values, as expressed in Eqs. (9) and (10).
\[
v_i(t+1) = w v_i(t) + c_1 r_1 \left( x_{pbest_i} - x_i(t) \right) + c_2 r_2 \left( x_{gbest} - x_i(t) \right) \quad (9)
\]

\[
x_i(t+1) = x_i(t) + v_i(t+1) \quad (10)
\]

where $t$ is the iteration number, $w$ is the inertia weight, $c_1$ and $c_2$ are two learning factors from the personal and global best particles, respectively, and $r_1$ and $r_2$ are two random numbers generated uniformly in the range $[0, 1]$.
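The update rules in Eqs. (9) and (10) can be illustrated with a short, self-contained sketch. The parameter defaults below ($w$, $c_1$, $c_2$) are common choices from the PSO literature, not values prescribed by this paper, and $r_1$, $r_2$ are drawn once per update as Eq. (9) suggests.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445, rng=random):
    """One velocity-and-position update per Eqs. (9) and (10).

    x, v       -- current position and velocity of the particle
    pbest      -- the particle's historically best position
    gbest      -- the globally best position in the swarm
    w, c1, c2  -- inertia weight and learning factors (illustrative defaults)
    """
    r1, r2 = rng.random(), rng.random()   # uniform in [0, 1], one draw per update
    # Eq. (9): inertia term + cognitive pull toward pbest + social pull toward gbest
    v_new = [w * vi + c1 * r1 * (p - xi) + c2 * r2 * (g - xi)
             for vi, xi, p, g in zip(v, x, pbest, gbest)]
    # Eq. (10): move the particle along its new velocity
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```

Note that when the particle already sits at both its personal and global best, the cognitive and social terms vanish and the velocity simply decays by the factor $w$.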
2.4. Existing MOPSO algorithms
Particle swarm optimization was originally designed for solving SOPs. To extend PSO for tackling MOPs, a Pareto ranking method or a decomposition approach is embedded into PSO. Thus, the existing MOPSO algorithms can be generally classified into two categories.
The first class uses Pareto ranking to determine the personal best and
global best particles. The global best particles are generally the non-
dominated solutions found during the particle movement and they
can be exploited to guide the particle swarm to approach the entire
PF. The reported MOPSOs, such as OMOPSO (Sierra & Coello Coello, 2005) and SMPSO (Nebro et al., 2009), belong to this category. The second type adopts a decomposition approach to transform MOPs into a set of SOPs, after which PSO can be directly applied to solve each SOP. This kind of MOPSO can exploit the reported techniques of PSO to better solve MOPs. The representatives of these MOPSO algorithms include SDMOPSO (Moubayed et al., 2010), dMOPSO (Martinez & Coello Coello, 2011), CMPSO (Zhan et al., 2013) and DDMOPSO (Moubayed et al., 2014). All of these representative MOPSOs are introduced briefly as follows.
OMOPSO is proposed by Sierra and Coello Coello (2005), which
uses Pareto dominance and crowding-distance information to iden-
tify the list of leader solutions. To enhance the search capability, two
mutation operators, i.e., uniform and non-uniform mutations, are respectively executed to balance the abilities of exploration and exploitation. Moreover, an external archive is exploited to collect all the non-dominated solutions visited by the swarm, and the concept of ε-dominance is utilized to limit the size of this archive.
SMPSO is an improved version of OMOPSO as designed by Nebro
et al. (2009), which embeds a velocity construction procedure in the
movement of particles to prevent the so-called “swarm explosion” effect (Clerc & Kennedy, 2002) in OMOPSO. Thus, SMPSO is able to produce new effective particles in cases where the velocity of a particle becomes too high. Besides that, polynomial mutation is performed
after PSO search as a turbulence factor and an external archive is
used to preserve a number of the historically found non-dominated
solutions.
MOPSO/D is reported by Peng and Zhang (2008), which may be the first attempt to embed a decomposition approach into MOPSO. It follows the framework of MOEA/D (Zhang & Li, 2007) and replaces the genetic search method with the traditional PSO search approach. The updates of the personal and global particles are fully decided by the aggregated values of all objectives. After that, a turbulence operator is performed, and an external archive based on ε-dominance is used to collect a number of non-dominated solutions that are historically found during the PSO search.
SDMOPSO is an enhanced algorithm from MOPSO/D as designed
by Moubayed et al. (2010), which tackles the drawback of MOPSO/D
by fully exploiting the salient properties of neighborhood relations
in PSO. The particle’s global best is only picked from the neighboring
solutions and each particle is only associated with a unique weight
vector that gives the best scalar aggregated fitness value. Moreover, a
crowding archive is also adopted in SDMOPSO to maintain the diver-
sity of the swarm leaders.
dMOPSO is presented by Martinez and Coello Coello (2011), which is fully dependent on the decomposition approach to solve MOPs. The position of each particle is updated using a set of global particles, which
are determined based on the scalar aggregated values. The distinct
feature of dMOPSO is that a memory re-initialization procedure is
used when the particle exceeds a certain age, which is aimed at main-
taining the diversity of the swarm and avoiding the trap in local PFs.
However, as pointed out by Moubayed et al. (2014), the absence of a dominance relation in dMOPSO may lead to a failure to cover the entire PF in some complex MOPs.
CMPSO is designed by Zhan et al. (2013), which is a novel coevolutionary technique for PSO to solve MOPs. It provides a simple and straightforward way to solve MOPs by letting each swarm correspond to one objective. An external shared archive is used to store all the visited non-dominated solutions and allow information exchange among the elitist individuals. Two novel approaches are presented
to enhance its performance. The first method embeds the elitist in-
formation from the shared archive to update the particle’s velocity,
while the second approach presents an elitist learning strategy for
archive update to improve the swarm diversity and avoid the trap in
local PFs.
The original DDMOPSO is proposed by Moubayed, Petrovski, and McCall (2012), which integrates both the dominance and decomposition approaches for solving MOPs. Afterward, an improved version is also presented by the same authors (Moubayed et al., 2014), which can converge quickly to the true PF without using the genetic operators.
It proposes a new mechanism for the selection of the particle lead-
ers and a novel archiving technique that collects the non-dominated
particles based on the crowding-distance values in both objective and
solution spaces.