L. Cao et al. / Information Sciences 453 (2018) 463–485 467
also incorporate a multi-population and a memory scheme into the proposed modified particle swarm optimization to ad-
dress dynamic optimization problems.
3. The proposed algorithm
3.1. Neighbor-based learning strategy
As a global search technique, an algorithm must have both exploration and exploitation abilities to solve even static
optimization problems. The classical PSO has an excellent exploitation ability but it is a little weak at exploration for some
optimization problems [10] . In contrast, DE makes full use of other individuals’ information to explore more of the search
space. Motivated by the core principle of DE, we incorporate the influence of a stochastic particle’s position into the velocity
update of PSO [10]. Therefore, each learning particle (which we also call the target particle) learns from the positions of two members of the swarm, where all the other members are defined as neighbors of that learning particle. One member is a stochastically selected neighbor and the other is the global best particle (i.e., the particle with the best fitness) in the swarm. For particle i in the swarm, let k be the index of another (different) randomly chosen particle among its neighbors. Then the d-th velocity component (1 ≤ d ≤ D, where D is the number of dimensions) of particle i is updated as follows:
v_i^d =
\begin{cases}
\omega \cdot v_i^d + F \cdot (x_k^d - x_i^d) + c \cdot rand3_i^d \cdot (x_{gbest}^d - x_i^d), & \text{if } rand4_i^d < p \\
v_i^d, & \text{otherwise}
\end{cases}
\quad (3)
where F is a scaling factor borrowed from DE, and rand3_i^d and rand4_i^d are two independently generated uniform random numbers between 0 and 1. The meanings of ω and c are as in PSO. p is a learning probability: only if this probability is met does the d-th velocity component learn from these two positions; otherwise, the d-th velocity component retains its previous value. x_{gbest}^d is the d-th component of the current global best particle's position, x_i^d is the d-th component of the position of target particle i, and x_k^d is the d-th component of the position of the selected neighbor particle. As in conventional PSO, each component of a particle's velocity is restricted to its allowed range.
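The per-dimension update in Eq. (3) can be sketched in Python as follows. This is a minimal illustration, not the paper's implementation; the parameter values (ω, F, c, p, v_max) are placeholder defaults chosen only for demonstration.

```python
import numpy as np

def update_velocity(v, x, x_gbest, i, k, w=0.7, F=0.5, c=1.5, p=0.5, v_max=5.0):
    """Neighbor-based velocity update for target particle i, per Eq. (3).

    v, x    : (n_particles, n_dims) arrays of velocities and positions.
    x_gbest : position of the global best particle, shape (n_dims,).
    k       : index of a randomly chosen neighbor (k != i).
    Parameter defaults are illustrative, not the paper's settings.
    """
    n_dims = x.shape[1]
    rand3 = np.random.rand(n_dims)      # per-dimension uniform random numbers
    rand4 = np.random.rand(n_dims)
    learn = rand4 < p                   # dimensions that learn this generation
    v_new = v[i].copy()                 # other dimensions keep the old velocity
    v_new[learn] = (w * v[i][learn]
                    + F * (x[k][learn] - x[i][learn])
                    + c * rand3[learn] * (x_gbest[learn] - x[i][learn]))
    # As in conventional PSO, restrict each velocity component to its range.
    return np.clip(v_new, -v_max, v_max)
```

Note how the boolean mask `learn` realizes the learning probability p: unmasked dimensions simply carry over the previous velocity, which is the memory effect discussed below.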
The learning probability enables some components of the target particle's previous velocity to be carried over as the corresponding components of its new velocity. This probability-based learning strategy inherits some components from the target particle's previous velocity as a memory, which prevents the target particle from being completely attracted by a strong global best particle or by a randomly selected particle with quite good fitness. Compared to the classic PSO, learning from neighbors enhances the exploration ability of particles. In the classic PSO, each particle's velocity update utilizes only the information of its historical best position and that of the global best particle to alter its current velocity; the search direction is therefore limited, which reduces the diversity of the swarm. Learning from different stochastic neighbors, by contrast, provides more information to the target particle, allowing it to search a more widely distributed solution region. More details will be shown in Section 3.6.
In the original DE operator, F is a fixed value that is suggested to be set in the range F ∈ [0.5, 1]. Choosing a suitable parameter value usually depends on the specific problem class and ordinarily requires repeated runs to identify. Instead, a self-adaptive method is proposed here to control this scaling factor [5] as follows:
F(g) =
\begin{cases}
F_{min} + (F_{max} - F_{min}) \cdot \phi, & \text{if } r_1 < \tau \\
F(g-1), & \text{otherwise}
\end{cases}
\quad (4)
where F(g) is the value of F for the g-th generation, F_min is the minimum value of F, F_max is the maximum value of F, and ϕ and r_1 are two independently generated uniform random numbers between 0 and 1. τ is the probability of adjusting F. If this probability is not met, F retains its value from the previous generation. In the following experiment section, we will show the strength of this self-adaptive method.
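The self-adaptive rule of Eq. (4) can be sketched as below. This is only an illustrative sketch; the bounds F_min, F_max and the adjustment probability τ shown here are placeholder values, not the paper's experimental settings.

```python
import random

def adapt_F(F_prev, F_min=0.1, F_max=0.9, tau=0.1):
    """Self-adaptive scaling factor, per Eq. (4).

    With probability tau, resample F uniformly in [F_min, F_max];
    otherwise retain the previous generation's value.
    Bounds and tau here are illustrative defaults.
    """
    r1 = random.random()
    if r1 < tau:
        phi = random.random()
        return F_min + (F_max - F_min) * phi
    return F_prev
```

Because τ is small, F changes only occasionally, so the swarm searches for several generations under a stable scaling factor before trying a new one.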
3.2. Swarm update
For particle i, its previous position is added to the updated velocity to form a new position U_i:

U_i = x_i + v_i \quad (5)
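Eq. (5) amounts to one vector addition per particle. The sketch below also clamps the trial position to the search bounds, a common safeguard that the paper does not specify at this point; the bounds here are illustrative.

```python
import numpy as np

def candidate_position(x_i, v_i, lower=-100.0, upper=100.0):
    """Form the trial position U_i = x_i + v_i, per Eq. (5).

    Clamping to [lower, upper] keeps the trial inside the search
    space; these bounds are illustrative, not from the paper.
    """
    return np.clip(x_i + v_i, lower, upper)
```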
In the classical PSO, the particle always moves to the new position whether or not it is better than the old one; its personal historical best position and the global best position are then updated. In the classical DE, a greedy strategy is employed in the selection operator: the target individual is replaced if its offspring is better; otherwise, the old target individual is retained. The greedy strategy can drive the population to evolve, but it may not always be the best approach for global optimization. Viewed from the perspective of the whole population, although a new individual may not be better than its parent, it may still be better than other members of the population. In the proposed algorithm, we present a novel strategy to update the swarm. Since each particle has the same chance of being selected to influence the other particles, a particle with particularly poor fitness may harm the performance of the whole swarm. In the extreme, the performance of the whole swarm might be strongly influenced by the particle with the worst fitness. Accordingly, the worst particle in the swarm should be given priority to