X. Song et al. Swarm and Evolutionary Computation 50 (2019) 100549
search process, so as to balance exploration and exploitation and thus improve the search efficiency of the algorithm.
3.1.1. CABC (and CABC_Elite)
To ensure the exploration ability of the algorithm, we employ a search strategy that uses a random solution as the starting point of the search. Inspired by the crossover operator in genetic algorithms, Ref. [39] proposed a similar search strategy named CABC, as follows:
$$v_{i,j} = x_{r1,j} + \phi_{i,j}\,(x_{r1,j} - x_{r2,j}) \tag{1}$$

where $\phi_{i,j}$ is a uniform random number on $[-1, 1]$; $r1$, $r2$ and $j$ are randomly selected, with $r1, r2 \in \{1, 2, \ldots, SN\}$, $j \in \{1, 2, \ldots, D\}$ and $r1 \neq r2 \neq i$.
In Eq. (1), the individuals are selected from the population entirely at random, so the equation has strong exploration ability and no preference for any search direction. Adopting CABC in the employed bee phase therefore ensures exploration and maintains population diversity.
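As a rough illustration, the CABC update of Eq. (1) can be sketched as below. The function name, the list-based representation of the population, and the choice to draw $\phi_{i,j}$ independently for every dimension are our own illustrative assumptions, not details fixed by the text:

```python
import random

def cabc_candidate(pop, i):
    """Sketch of the CABC strategy (Eq. (1)):
    v_{i,j} = x_{r1,j} + phi_{i,j} * (x_{r1,j} - x_{r2,j}),
    where r1, r2 are random indices distinct from each other and from i."""
    SN, D = len(pop), len(pop[0])
    # pick two distinct random individuals, both different from i
    r1, r2 = random.sample([k for k in range(SN) if k != i], 2)
    v = []
    for j in range(D):
        phi = random.uniform(-1.0, 1.0)  # uniform on [-1, 1]
        v.append(pop[r1][j] + phi * (pop[r1][j] - pop[r2][j]))
    return v
```

Because both base and difference vectors are random, the candidate has no bias toward any particular region, which matches the exploration role described above.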
As a consequence, however, its exploitation is poor. Inspired by the natural phenomenon that excellent individuals tend to carry excellent genes, we improve CABC as follows:
$$v_{i,j} = x_{pbest,j} + \phi_{i,j}\,(x_{pbest,j} - x_{r2,j}) \tag{2}$$
In Eq. (2), pbest is an elite individual in the population. Obviously, Eq. (2) greatly increases the opportunity to refine the search around elite solutions, and thus effectively enhances the exploitation ability. This search strategy is named CABC_Elite. As the equations above show, CABC_Elite simply restricts the selection range of the first random solution in CABC to the elite solutions of the population, so as to enhance exploitation. CABC_Elite can therefore be regarded as a special case of CABC.
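The restriction described above can be sketched by changing only the selection of the base individual. The elite fraction `elite_frac` and the assumption that lower fitness is better are illustrative choices of ours; the section does not specify how the elite set is sized:

```python
import random

def cabc_elite_candidate(pop, fitness, i, elite_frac=0.1):
    """Sketch of CABC_Elite (Eq. (2)):
    v_{i,j} = x_{pbest,j} + phi_{i,j} * (x_{pbest,j} - x_{r2,j}),
    where pbest is drawn from the elite (top elite_frac) of the population.
    Assumes lower fitness values are better."""
    SN, D = len(pop), len(pop[0])
    n_elite = max(1, int(elite_frac * SN))
    order = sorted(range(SN), key=lambda k: fitness[k])  # best first
    pbest = random.choice(order[:n_elite])
    # second solution: any index other than pbest and i
    r2 = random.choice([k for k in range(SN) if k not in (pbest, i)])
    return [pop[pbest][j] + random.uniform(-1.0, 1.0) * (pop[pbest][j] - pop[r2][j])
            for j in range(D)]
```

Compared with `cabc_candidate`, only the first index is restricted, which is exactly the "special case of CABC" relationship noted above.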
3.1.2. M2ABC
Although the above equation improves the exploitation ability to a certain extent, the exploitation ability still needs to be further enhanced because of the poor exploitation of ABC. For this purpose, a novel search strategy named M2ABC is designed, which uses the information of the best-so-far solution and searches from the mean of two random solutions as the starting point, as shown in Eq. (3):
$$v_{i,j} = \frac{x_{r1,j} + x_{r2,j}}{2} + \phi_{i,j}\,(x_{r1,j} - x_{r2,j}) + \varphi_{i,j}\,(x_{best,j} - x_{r1,j}) \tag{3}$$

where $r1$ and $r2$ are different integers selected uniformly in $[1, SN]$, and both are different from the index of the best solution; $\phi_{i,j}$ is a random number generated uniformly in $[-1, 1]$, and $\varphi_{i,j}$ is a random number generated uniformly in $[0, 1]$; $x_{best,j}$ is the $j$th element of the best-so-far solution.
The first term of M2ABC is the midpoint of two randomly selected solutions, which retains some randomness and helps maintain exploration; the second term provides directional information, which improves population diversity; the third term moves the search towards the best-so-far solution to increase the exploitation ability.
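The three terms can be sketched together as follows. We treat `best` as the best-so-far solution vector and draw both coefficients per dimension; these representational choices, like the function name, are illustrative assumptions:

```python
import random

def m2abc_candidate(pop, best):
    """Sketch of M2ABC (Eq. (3)): start from the midpoint of two random
    solutions, add a random difference term, and pull toward the
    best-so-far solution `best`. phi ~ U[-1,1], varphi ~ U[0,1]."""
    SN, D = len(pop), len(pop[0])
    r1, r2 = random.sample(range(SN), 2)  # two distinct random indices
    v = []
    for j in range(D):
        phi = random.uniform(-1.0, 1.0)
        varphi = random.uniform(0.0, 1.0)
        v.append((pop[r1][j] + pop[r2][j]) / 2      # midpoint (exploration)
                 + phi * (pop[r1][j] - pop[r2][j])  # direction / diversity
                 + varphi * (best[j] - pop[r1][j])) # pull toward best (exploitation)
    return v
```

Each line of the sum corresponds to one of the three roles described in the paragraph above.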
To further discuss the exploration and exploitation behaviour of Eqs. (1) and (3), we introduce the definitions of exploration and exploitation from Ref. [70], where Crepinsek et al. proposed a more detailed definition. Specifically, supposing that $v$ is a candidate solution, exploration and exploitation are defined as follows:
$$\min\{\,\|v - x_i\|_2 \mid x_i \in P\,\} > \varsigma \tag{4}$$

$$\min\{\,\|v - x_i\|_2 \mid x_i \in P\,\} \leq \varsigma \tag{5}$$
where $P$ denotes the current population, $x_i$ is the $i$th individual, and $\varsigma$ is the threshold that defines the nearest-neighbourhood boundary and depends on the problem. Eq. (4) represents exploration, and Eq. (5) represents exploitation. In other words, the exploration process visits areas outside the current nearest neighbourhood, while the exploitation process searches areas within it.
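Under this definition, classifying a candidate reduces to one nearest-neighbour distance test, which can be sketched as (function name ours; solutions represented as plain coordinate lists):

```python
import math

def is_exploration(v, population, sigma):
    """Classify a candidate v per Eqs. (4)-(5): it counts as exploration
    iff its Euclidean distance to the NEAREST individual in the current
    population exceeds the problem-dependent threshold sigma."""
    nearest = min(math.dist(v, x) for x in population)
    return nearest > sigma
```

For example, a candidate at distance 1 from its nearest neighbour is exploratory for a small threshold but exploitative for a large one, which is why the paper notes that $\varsigma$ depends on the problem.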
To prove the effectiveness of TSaABC, it should first be noted that every search equation has both the ability to explore and the ability to exploit; that is, CABC and M2ABC do not focus exclusively on either exploration or exploitation.
However, how to coordinate the proportion between exploration and exploitation is very important for making full use of the advantages of each equation. In the early evolution stage, the whole population concentrates on exploration. At this time, because of the strong randomness of CABC, individuals in the population can search more areas, which makes it possible to find better solutions. Later, as the population evolves, CABC_Elite can accelerate convergence by effectively utilizing the information of high-quality solutions in the current population. In the later evolution stage, the whole population tends to converge. At this time, M2ABC can provide a large step due to its three-term structure. This increases the possibility of jumping out of a local optimum when the population falls into one, and achieves the balance between exploration and exploitation.
However, it is unknown when the population will shift from exploration to exploitation, as this is related to the specific problem. Therefore, in order to take full advantage of each equation, this paper uses an adaptive mechanism to combine CABC and M2ABC. By dynamically choosing different search equations during the evolution process, the population can achieve fast optimization. The principle of the adaptive mechanism is that if a certain equation had a higher success rate in the previous generation, more choices should be allocated to it in the next generation to facilitate further search.
3.2. Adaptive mechanism
Inspired by the pheromone updating scheme [58] of the ant colony algorithm, the selection probability pf is updated as follows:
$$pf^{G+1} = \alpha \cdot pf^{G} + (1 - \alpha) \cdot P^{G} \tag{6}$$

$$P^{G} = \frac{S_1^{G}}{S_1^{G} + S_2^{G}} \tag{7}$$
where $S_k^G$ represents the success rate of candidate solutions generated by strategy $k$ in the previous generation. To ensure that the selection probabilities of the strategies always sum to 1, we further divide $S_k^G$ by $(S_1^G + S_2^G)$ to calculate $P^G$. Obviously, the strategy with the higher success rate has a higher probability of being selected to generate candidate solutions. A real number $\alpha \in [0, 1]$ is used to control the proportion of historical experience.
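The update of Eqs. (6) and (7) is a simple exponential smoothing step and can be sketched as below. The fallback to 0.5 when neither strategy produced a successful candidate is our own assumption; this corner case is not specified here:

```python
def update_pf(pf, s1, s2, alpha=0.95):
    """Sketch of Eqs. (6)-(7): blend the previous selection probability pf
    with the normalized success rate of strategy 1.
    s1, s2: success rates of the two strategies in the previous generation.
    Falls back to P_G = 0.5 when s1 + s2 == 0 (an assumption of ours)."""
    p_g = s1 / (s1 + s2) if (s1 + s2) > 0 else 0.5
    return alpha * pf + (1 - alpha) * p_g
```

With a large $\alpha$, a single good or bad generation moves pf only slightly, which is the "historical experience" weighting discussed next.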
In fact, the algorithm relies more on historical experience when $\alpha$ takes a larger value, and more on recent experience when $\alpha$ takes a smaller value. It can be seen from Table 1 that when $\alpha = 0.95$, the influence of the original pf is only 35.85% and 12.85% after the evolution of 10 and 20 generations respectively (in each generation, the employed bees and the onlooker bees each modify pf once). When $\alpha = 0.9$, after the evolution of 10 and 20 generations, the original influence of pf accounts for only 12.16% and 1.48% respectively. Considering the randomness introduced into each individual by each strategy, a larger value of $\alpha$ reduces the impact of this randomness. Therefore, in this paper, the value of $\alpha$ is set to 0.95.
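The percentages above follow directly from Eq. (6): each update multiplies the old pf by $\alpha$, and each generation applies two updates (employed-bee and onlooker-bee phases), so the surviving fraction after $G$ generations is $\alpha^{2G}$. A minimal check (function name ours):

```python
def residual_influence(alpha, generations, updates_per_gen=2):
    """Fraction of the original pf surviving repeated Eq. (6) updates:
    each update scales the old pf by alpha, and each generation applies
    two updates (employed-bee and onlooker-bee phases)."""
    return alpha ** (generations * updates_per_gen)

# Reproduces the figures quoted from Table 1:
# 0.95 -> 35.85% after 10 gens, 12.85% after 20;
# 0.90 -> 12.16% after 10 gens,  1.48% after 20.
```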
3.3. TSaABC algorithm
In TSaABC, the exploration ability of the algorithm is ensured by uti-
lizing the characteristic of CABC that randomly selects individuals from
the population and does not bias in any direction. At the same time,
M2ABC can use the information of the best-so-far solution to enhance
the exploitation ability; moreover, it can provide more combinations
of search directions and step sizes because its search equation contains