978-1-5386-2524-8/17/$31.00 ©2017 IEEE
Investigation of Horizontal Crossover and Stability-based Adaptive Inertia Weight Strategies for CLPSO
Xiang Yu, Zepeng Li, Genhua Chen, Lei Wang
Provincial Key Laboratory for Water Information Cooperative Sensing and Intelligent Processing, Nanchang Institute of Technology, 289
Tianxiang Road, Nanchang, Jiangxi 330099, China
xiang.yu@nit.edu.cn
Abstract—This paper investigates horizontal crossover (HC)
and stability-based adaptive inertia weight (SAIW) strategies for
comprehensive learning particle swarm optimization. HC applies
arithmetic crossover on all the dimensions of two different
personal best positions. SAIW adaptively adjusts the inertia
weight and acceleration coefficient for each particle on each
dimension. Experimental results on various benchmark functions
demonstrate that HC can significantly improve the convergence
performance of the optimizer, while SAIW cannot. The results
also indicate that HC and SAIW need further improvement.
Keywords—particle swarm optimization, comprehensive
learning, horizontal crossover, stability, adaptive inertia weight.
I. INTRODUCTION
Particle swarm optimization (PSO), as the name suggests,
uses a swarm of particles to solve the optimization problem,
with each particle representing a candidate solution. The
particles simulate the movements of a bird flock when the birds
migrate or a fish school when the fishes search for food. Each
particle thus “flies” in the search space, and is associated with a
position, a velocity, and a search fitness.
PSO relies on iterative learning to find the global optimum.
In each iteration (or generation), each particle updates its flight
velocity according to its previous velocity, its historical best
position (i.e. personal best position), and the personal best
positions of its neighborhood particles. A large number of PSO
variants have been proposed and they differ in the strategy
adopted to guide the flight of each particle based on the
particle’s own search experience and other particles’ search
experience. Among the variants, comprehensive learning PSO
(CLPSO) [1] is a powerful variant that encourages each
particle to learn from different exemplars on different
dimensions. Each particle is additionally associated with a
learning probability and the learning probability determines the
exemplars. CLPSO is good at preserving the particles’
diversity. Experimental results reported in [1] and [2] indicate
that CLPSO is able to locate the global optimum or a near-optimum
solution on many multimodal benchmark functions; however, the
accuracy of the solution obtained by CLPSO is considerably lower
than that of some other PSO variants on many functions, including
both unimodal and multimodal functions.
We have recently proposed enhanced CLPSO (ECLPSO)
[3] to improve the convergence performance of CLPSO.
ECLPSO features two enhancements, i.e. conducting
exploitation through perturbation and adaptively adjusting the
particles’ learning probabilities. ECLPSO can significantly
enhance the solution accuracy compared with CLPSO, as
demonstrated by the experimental results on various
benchmark functions reported in [3]. In this paper, we
investigate horizontal crossover (HC) and stability-based
adaptive inertia weight (SAIW) strategies for CLPSO. HC and
SAIW were recently proposed in [4] and [5], respectively, also
for the purpose of facilitating convergence. The two
investigated variants are denoted as CLPSO-HC and CLPSO-
SAIW. We compare the two variants with CLPSO and
ECLPSO in order to understand which variant is most effective
in improving the convergence performance of CLPSO.
II. COMPREHENSIVE LEARNING PARTICLE SWARM
OPTIMIZATION
For a D-dimensional search space, each particle i is associated
with a D-dimensional velocity V_i = (V_{i,1}, V_{i,2}, …, V_{i,D})
and a D-dimensional position P_i = (P_{i,1}, P_{i,2}, …, P_{i,D}).
In each generation, V_i and P_i are updated on each dimension d
(1 ≤ d ≤ D) as follows:

V_{i,d} = w·V_{i,d} + c·r_{i,d}·(E_{i,d} − P_{i,d})   (1)

P_{i,d} = P_{i,d} + V_{i,d}   (2)
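The per-dimension update in (1) and (2) can be sketched as follows; the function name, array shapes, and use of NumPy are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def update_particle(V, P, E, w, c=1.5, rng=None):
    """One CLPSO velocity/position update for a single particle.

    V, P, E are 1-D arrays of length D: the particle's velocity,
    position, and per-dimension exemplar vector. Returns updated (V, P).
    """
    if rng is None:
        rng = np.random.default_rng()
    r = rng.random(V.shape)          # r_{i,d} ~ U[0, 1], drawn per dimension
    V = w * V + c * r * (E - P)      # equation (1)
    P = P + V                        # equation (2)
    return V, P
```

Because r_{i,d} is drawn independently per dimension, each dimension moves toward its exemplar by a different random fraction.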
where w is the inertia weight; c is the acceleration coefficient,
usually fixed at 1.5 [1]; r_{i,d} is a random number uniformly
distributed in the range [0, 1]; and E_i = (E_{i,1}, E_{i,2}, …,
E_{i,D}) is the guidance vector of exemplars. Particle i maintains
its personal best position B_i = (B_{i,1}, B_{i,2}, …, B_{i,D});
if the fitness value of P_i is better than that of B_i, then B_i
is replaced by P_i. On each dimension d, E_{i,d} equals either
B_{i,d} or B_{j,d} with j ≠ i, and whether to learn from B_{i,d}
or B_{j,d} depends on i's learning probability L_i. L_i is set
according to (3) such that all the particles have different
learning probabilities and thus exhibit different learning
capabilities.
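A minimal sketch of assembling the exemplar vector E_i is given below. The tournament between two randomly chosen particles and the rule that at least one dimension must learn from another particle follow the CLPSO description in [1]; the function and variable names are illustrative assumptions.

```python
import numpy as np

def build_exemplar(i, B, fitness, L, rng=None):
    """Assemble the exemplar vector E_i for particle i.

    B is an (N, D) array of personal best positions, fitness[j] is the
    fitness of B[j] (smaller is better), and L[i] is i's learning
    probability. Per [1], each dimension learns from another particle's
    personal best with probability L[i]; that particle is picked by a
    fitness tournament between two randomly chosen particles.
    """
    if rng is None:
        rng = np.random.default_rng()
    N, D = B.shape
    others = [j for j in range(N) if j != i]
    E = B[i].copy()
    learned = False
    for d in range(D):
        if rng.random() < L[i]:
            a, b = rng.choice(others, size=2, replace=False)
            j = a if fitness[a] < fitness[b] else b   # tournament winner
            E[d] = B[j, d]
            learned = True
    if not learned:
        # per [1], force at least one dimension to learn from another particle
        E[rng.integers(D)] = B[rng.choice(others), rng.integers(D)]
    return E
```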
L_i = L_min + (L_max − L_min)·(exp(10(i − 1)/(N − 1)) − 1)/(exp(10) − 1)   (3)

where N is the number of particles, and L_max and L_min are
respectively the maximum and minimum learning probabilities,
suggested to be 0.5 and 0.05 [1].
The inertia weight w decreases linearly with the generation
number k:

w = w_max − (w_max − w_min)·k/k_max   (4)

where k_max is the maximum number of generations, and w_max and
w_min are respectively the maximum and minimum inertia weights.