violated bounds and the corresponding components of the
parent individual [2], i.e.,
$$
v_{j,i,g} =
\begin{cases}
\bigl(x_{j}^{\text{low}} + x_{j,i,g}\bigr)/2, & \text{if } v_{j,i,g} < x_{j}^{\text{low}} \\
\bigl(x_{j}^{\text{up}} + x_{j,i,g}\bigr)/2, & \text{if } v_{j,i,g} > x_{j}^{\text{up}}
\end{cases}
$$
where $v_{j,i,g}$ and $x_{j,i,g}$ denote the $j$th components of the mutation vector $\mathbf{v}_{i,g}$ and the parent vector $\mathbf{x}_{i,g}$ at generation $g$, respectively. This method works well especially when the optimal solution is located near or on the boundary.
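A minimal sketch of this repair rule, assuming the mutation vector, parent vector, and bounds are NumPy arrays of length D (the function and variable names are illustrative, not notation from this paper):

import numpy as np

def repair_bounds(v, x_parent, x_low, x_up):
    """Reset each out-of-bound component of the mutation vector to the
    midpoint between the violated bound and the parent's component."""
    v = v.copy()
    below = v < x_low
    above = v > x_up
    v[below] = (x_low[below] + x_parent[below]) / 2.0
    v[above] = (x_up[above] + x_parent[above]) / 2.0
    return v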
Crossover: After mutation, a binomial crossover operation forms the final trial/offspring vector $\mathbf{u}_{i,g} = (u_{1,i,g}, u_{2,i,g}, \ldots, u_{D,i,g})$:
$$
u_{j,i,g} =
\begin{cases}
v_{j,i,g}, & \text{if } \mathrm{rand}(0,1) \le CR_i \text{ or } j = j_{\mathrm{rand}} \\
x_{j,i,g}, & \text{otherwise}
\end{cases}
\tag{4}
$$
where $\mathrm{rand}(a, b)$ is a uniform random number on the interval $[a, b]$, independently generated for each $j$ and each $i$; $j_{\mathrm{rand}} = \mathrm{randint}(1, D)$ is an integer randomly chosen from 1 to $D$ and newly generated for each $i$; and the crossover probability $CR_i \in [0, 1]$ roughly corresponds to the average fraction of vector components that are inherited from the mutation vector. In classic DE, $CR_i = CR$ is a fixed parameter used to generate all trial vectors at all generations, while in many adaptive DE algorithms each individual $i$ is associated with its own crossover probability $CR_i$.
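The binomial crossover in (4) can be sketched as follows, assuming NumPy arrays and a numpy.random.Generator; the names are illustrative:

import numpy as np

def binomial_crossover(x_parent, v_mutant, cr_i, rng):
    """Form the trial vector: each component is taken from the mutant with
    probability CR_i, and component j_rand is always taken from the mutant
    so that at least one component is inherited from the mutation vector."""
    d = x_parent.size
    j_rand = rng.integers(d)          # randint(1, D), here in 0-based indexing
    mask = rng.random(d) <= cr_i      # rand(0, 1) <= CR_i, drawn anew for each j
    mask[j_rand] = True
    return np.where(mask, v_mutant, x_parent)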
Selection: The selection operation selects the better one from the parent vector $\mathbf{x}_{i,g}$ and the trial vector $\mathbf{u}_{i,g}$ according to their fitness values $f(\cdot)$. For example, if we have a minimization problem, the selected vector is given by
$$
\mathbf{x}_{i,g+1} =
\begin{cases}
\mathbf{u}_{i,g}, & \text{if } f(\mathbf{u}_{i,g}) < f(\mathbf{x}_{i,g}) \\
\mathbf{x}_{i,g}, & \text{otherwise}
\end{cases}
\tag{5}
$$
and is used as a parent vector in the next generation.
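For a minimization problem, the one-to-one selection in (5) reduces to a single comparison; a sketch, assuming f is a callable fitness function:

def select(x_parent, u_trial, f):
    """One-to-one selection of (5): the trial vector replaces its parent in
    the next generation only if it has a better (smaller) fitness value."""
    return u_trial if f(u_trial) < f(x_parent) else x_parent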
The operation in (5) is called a successful update if the trial vector $\mathbf{u}_{i,g}$ is better than the parent $\mathbf{x}_{i,g}$, i.e., the improvement or evolution progress $f(\mathbf{x}_{i,g}) - f(\mathbf{u}_{i,g})$ is positive. Accordingly, the control parameters $F_i$ and $CR_i$ used to generate $\mathbf{u}_{i,g}$ are called a successful mutation factor and a successful crossover probability, respectively.
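This bookkeeping can be sketched as follows, where the improvement is computed from the fitness values and the lists s_f and s_cr are illustrative containers for the successful parameters (they are not notation introduced in this section):

def record_if_successful(improvement, f_i, cr_i, s_f, s_cr):
    """If the trial vector improved on its parent (positive evolution
    progress), record the control parameters that generated it."""
    if improvement > 0:
        s_f.append(f_i)     # successful mutation factor F_i
        s_cr.append(cr_i)   # successful crossover probability CR_i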
The above one-to-one selection procedure is generally kept
fixed in different DE algorithms, while the crossover may
have variants other than the binomial operation in (4). Thus, a
classic differential evolution algorithm is historically named,
for example, DE/rand/1/bin, connoting its DE/rand/1 mutation
strategy and binomial crossover operation.
III. ADAPTIVE DE ALGORITHMS
This section briefly reviews some recent adaptive and self-adaptive DE algorithms that dynamically update control parameters as the evolutionary search proceeds. This provides a convenient basis for comparing these parameter adaptation mechanisms with the one proposed later in JADE.
A. DESAP
In [10], an algorithm named DESAP is proposed to dynamically adapt not only the mutation and crossover parameters $\eta$ and $\delta$ (as is usually the case in other adaptive or self-adaptive algorithms) but also the population size $\pi$. The base strategy of DESAP differs slightly from the classic DE/rand/1/bin and is rather similar to the scheme introduced in [9]: while $\delta$ and $\pi$ have the same meaning as CR and NP, respectively, $\eta$ refers to the probability of applying an additional normally distributed mutation after crossover. The ordinary mutation factor F is in fact kept fixed in DESAP.
In DESAP, each individual $i$ is associated with its own control parameters $\delta_i$, $\eta_i$, and $\pi_i$. These parameters undergo crossover and mutation in a way similar to the corresponding vector $\mathbf{x}_i$. The new values of these control parameters survive together with $\mathbf{u}_i$ if $f(\mathbf{u}_i) < f(\mathbf{x}_i)$. In spite of its conceptual simplicity, the performance of DESAP is not satisfactory: it outperforms conventional DE on only one of De Jong's five test problems, while the other results are very similar. Indeed, as the author states, DESAP mainly serves to demonstrate the possibility of further reducing the number of control parameters to tune by adaptively updating the population size as well as the other control parameters.
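A minimal sketch of this survival rule, with each individual carrying its own parameter tuple (delta_i, eta_i, pi_i); the data layout and names are illustrative and not taken from [10]:

def desap_survival(pop, params, trials, trial_params, f):
    """Self-adaptation in the spirit of DESAP: the perturbed control
    parameters survive only when the trial vector they produced replaces
    its parent."""
    for i in range(len(pop)):
        if f(trials[i]) < f(pop[i]):
            pop[i] = trials[i]
            params[i] = trial_params[i]   # new delta_i, eta_i, pi_i survive with u_i
    return pop, params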
B. FADE
The fuzzy adaptive differential evolution (FADE) algorithm, introduced by Liu and Lampinen [11], is a new variant of DE that uses fuzzy logic controllers to adapt the control parameters $F_i$ and $CR_i$ for the mutation and crossover operations. A similar method has also been independently proposed for multiobjective DE optimization [12]. As in many other adaptive DE algorithms (except DESAP [10]), the population size is assumed to be tuned in advance and kept fixed throughout the evolution process of FADE. The fuzzy-logic-controlled approach is tested on a set of 10 standard benchmark functions and shows better results than classic DE when the problem dimension is high.
C. SaDE
SaDE [13] is proposed by Qin and Suganthan to simultaneously implement two mutation strategies, "DE/rand/1" and "DE/current-to-best/1." It adapts the probability of generating offspring by either strategy based on their success ratios over the past 50 generations. It is believed that this adaptation procedure can gradually evolve the most suitable mutation strategy at different learning stages for the problem under consideration.
This is similar to the scheme proposed in [27], [28], where
competing heuristics (including several DE variants, simplex
methods, and evolution strategies) are simultaneously adopted
and the probabilities of generating offspring by any of these
heuristics are adapted dynamically.
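A rough sketch of success-ratio-based strategy selection in this spirit is given below; the exact update rule and the 50-generation bookkeeping in SaDE differ in detail (see [13]), and all names and the formula here are illustrative:

import random

def strategy_1_probability(ns1, nf1, ns2, nf2, eps=1e-12):
    """Probability of generating offspring with strategy 1 ("DE/rand/1"),
    derived from the success (ns) and failure (nf) counts of the two
    strategies over a recent window of generations."""
    r1 = ns1 / (ns1 + nf1 + eps)      # success ratio of DE/rand/1
    r2 = ns2 / (ns2 + nf2 + eps)      # success ratio of DE/current-to-best/1
    return r1 / (r1 + r2 + eps)

# example: choose a strategy for one offspring
p1 = strategy_1_probability(ns1=30, nf1=70, ns2=45, nf2=55)
use_rand_1 = random.random() < p1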
In SaDE, the mutation factors are independently generated at each generation according to a normal distribution with mean 0.5 and standard deviation 0.3, truncated to the interval (0, 2]. It is stated that this scheme can maintain both local (with small $F_i$ values) and global (with large $F_i$ values) search ability to generate potentially good mutation vectors throughout the