mathematically model approaching the prey, we decrease the value of $\vec{a}$:

$$\vec{a}(t) = 2 - \frac{2t}{Max\_iter} \qquad (16)$$

where $t$ is the current iteration number and $Max\_iter$ is the maximum number of iterations. Therefore, $\vec{a}(t)$ is linearly
decreased from 2 to 0 over the process of iterations.
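As a minimal illustration of equation (16) (the function name below is ours, not the paper's):

```python
# Illustrative sketch of equation (16): the control parameter a decreases
# linearly from 2 (at t = 0) to 0 (at t = max_iter).
def a_linear(t, max_iter):
    """Return a(t) = 2 - 2*t / max_iter."""
    return 2.0 - 2.0 * t / max_iter
```

For example, with `max_iter = 100`, `a_linear(0, 100)` gives 2.0, `a_linear(50, 100)` gives 1.0, and `a_linear(100, 100)` gives 0.0.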
2.3 Grey wolf optimization algorithm
In grey wolf optimization algorithm, we consider the fittest
solution as the Alpha. Consequently, the second and the third
best solutions are named Beta and Delta, respectively. The
rest of the candidate solutions are assumed to be Omega. To
sum up, the search process starts with creating a random
population of grey wolves (candidate solutions) in the GWO
algorithm. Over the course of iterations, Alpha, Beta, and
Delta wolves estimate the probable position of the prey. Each
candidate solution updates its distance from the prey. The parameter $\vec{a}$ is decreased from 2 to 0 in order to emphasize exploration and exploitation, respectively. Candidate solutions tend to diverge from the prey when $|\vec{A}| > 1$ and converge towards the prey when $|\vec{A}| < 1$.
The pseudo code of the classical grey wolf optimization
algorithm is presented in Fig. 2.
3 The proposed MAL-IGWO algorithm
3.1 Constraint-handling method
It is necessary to note that evolutionary computation-based algorithms are unconstrained optimization techniques that need an additional mechanism to deal with constraints when solving constrained optimization problems. As a result, a variety of evolutionary computation-based constraint-handling techniques have been developed for constrained optimization problems, which can be grouped as follows [29, 30]: (1) methods based on penalty functions; (2) methods based on preserving feasibility of solutions; (3) methods based on the superiority of feasible solutions over infeasible solutions; and (4) other methods.
Penalty function methods are the most common constraint-handling technique. The augmented Lagrangian multiplier method is an attractive penalty function approach that avoids the side effects associated with the ill-conditioning of simpler penalty and barrier functions [31]. Therefore, this paper uses the modified augmented Lagrangian method of [32] to deal with constraints.
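For contrast with the augmented Lagrangian approach adopted here, a plain quadratic penalty can be sketched as follows (all names are illustrative and not part of the method of [32]; the sketch assumes equality constraints $g_j(x) = 0$):

```python
# Minimal sketch of a plain quadratic penalty. As sigma grows, minimizers
# of the penalized objective approach feasibility, but the sub-problem
# becomes increasingly ill-conditioned -- the side effect the augmented
# Lagrangian method avoids.
def quadratic_penalty(f, g_list, sigma):
    """Build the unconstrained objective f(x) + (sigma/2) * sum_j g_j(x)^2."""
    def penalized(x):
        return f(x) + 0.5 * sigma * sum(g(x) ** 2 for g in g_list)
    return penalized

# Example: minimize x^2 subject to x - 1 = 0. The penalized minimizer is
# x = sigma / (sigma + 2), which only tends to the true solution x = 1
# as sigma -> infinity.
p = quadratic_penalty(lambda x: x ** 2, [lambda x: x - 1.0], sigma=100.0)
```

The example shows why the pure penalty scheme forces a trade-off between accuracy and conditioning, which motivates the multiplier-based method below.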
If the simple bound (4) is not present, then one can use the modified augmented Lagrangian multiplier method to solve problem (1)–(3). For a given Lagrangian multiplier vector $\lambda^k$ and penalty parameter vector $\sigma^k$, the unconstrained penalty sub-problem at the $k$-th step of this method is:

$$\min \; P(x, \lambda^k, \sigma^k) \qquad (17)$$
where $P(x, \lambda, \sigma)$ is the following modified augmented Lagrangian function:

$$P(x, \lambda, \sigma) = f(x) - \sum_{j=1}^{p} \left[ \lambda_j g_j(x) - \frac{1}{2} \sigma_j \left( g_j(x) \right)^2 \right] - \sum_{j=p+1}^{m} \tilde{P}_j(x, \lambda, \sigma) \qquad (18)$$
and $\tilde{P}_j(x, \lambda, \sigma)$ is defined as follows:

$$\tilde{P}_j(x, \lambda, \sigma) = \begin{cases} \lambda_j g_j(x) - \dfrac{1}{2} \sigma_j \left( g_j(x) \right)^2, & \text{if } \lambda_j - \sigma_j g_j(x) > 0 \\[2mm] \dfrac{1}{2} \lambda_j^2 / \sigma_j, & \text{otherwise} \end{cases} \qquad (19)$$
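Equations (18)–(19) can be sketched directly in Python (a minimal sketch; the function and argument names are ours, and we assume `g_eq` holds the $p$ equality constraints $j = 1, \ldots, p$ and `g_ineq` the $m - p$ inequality constraints $j = p+1, \ldots, m$):

```python
# Inequality-constraint term of equation (19).
def P_tilde_j(gj, lam_j, sig_j):
    if lam_j - sig_j * gj > 0:
        return lam_j * gj - 0.5 * sig_j * gj ** 2
    return 0.5 * lam_j ** 2 / sig_j

# Modified augmented Lagrangian function of equation (18).
# lam and sig are indexed j = 0..m-1: first the p equality-constraint
# entries, then the m - p inequality-constraint entries.
def P(x, f, g_eq, g_ineq, lam, sig):
    p_eq = len(g_eq)
    val = f(x)
    for j, g in enumerate(g_eq):
        gj = g(x)
        val -= lam[j] * gj - 0.5 * sig[j] * gj ** 2
    for j, g in enumerate(g_ineq):
        gj = g(x)
        val -= P_tilde_j(gj, lam[p_eq + j], sig[p_eq + j])
    return val
```

Note that the two branches of (19) agree at $g_j(x) = \lambda_j / \sigma_j$, so $\tilde{P}_j$ is continuous in $g_j(x)$.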
It can be easily shown that the Kuhn–Tucker solution $(x^*, \lambda^*)$ of the primal problem (1)–(3) is identical to that of the augmented problem (17).
If the simple bound (4) is present, the above modified augmented Lagrangian multiplier method needs to be adapted. Unlike the modified barrier function methods, we make another modification to deal with the bound constraints. At the $k$-th step, assuming that the Lagrangian multiplier vector $\lambda^k$ and penalty parameter vector $\sigma^k$ are given, we solve the following bound-constrained sub-problem instead of (17):

$$\begin{aligned} & \min \; P(x, \lambda^k, \sigma^k) \\ & \;\text{s.t.} \;\; l \le x \le u \end{aligned} \qquad (20)$$
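One simple way to respect the box constraint of (20) inside a population-based solver is component-wise projection (clipping); the helper below is an illustrative sketch, not the paper's code:

```python
# Sketch of handling the bound constraints l <= x <= u of sub-problem (20):
# any candidate produced by the inner search is mapped back onto the box.
def project_to_box(x, lower, upper):
    """Component-wise projection of x onto [lower, upper]."""
    return [min(max(xi, lo), hi) for xi, lo, hi in zip(x, lower, upper)]
```

For example, `project_to_box([-1.0, 0.5, 3.0], [0, 0, 0], [1, 1, 1])` returns `[0, 0.5, 1]`.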
where $P(x, \lambda, \sigma)$ is the same modified augmented Lagrangian function as in (18). Let $S \subset \mathbb{R}^n$ designate the search space, which is defined by the lower and upper bounds of
Fig. 2 Pseudo code of the classical grey wolf optimization algorithm:

Initialize the grey wolf population $X_i \; (i = 1, 2, \ldots, n)$
Initialize $a$, $A$, and $C$
Calculate the fitness of each search agent
$X_\alpha$ = the best search agent
$X_\beta$ = the second best search agent
$X_\delta$ = the third best search agent
while ($t <$ Max number of iterations)
    for each search agent
        Update the position of the current search agent by equation (15)
    end for
    Update $a$, $A$, and $C$
    Calculate the fitness of all search agents
    Update $X_\alpha$, $X_\beta$, and $X_\delta$
    $t = t + 1$
end while
return $X_\alpha$
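The loop of Fig. 2 can be sketched in Python as follows (a minimal sketch for minimization; the helper names, the random seeding, and the clipping to the search bounds are our additions, and the position update is the standard alpha/beta/delta averaging rule that the paper references as equation (15)):

```python
import random

def gwo(objective, dim, lb, ub, n_wolves=20, max_iter=100, seed=0):
    """Classical GWO loop of Fig. 2, minimizing `objective` over [lb, ub]^dim."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_wolves)]

    def leaders():
        # Alpha, beta, delta: the three fittest wolves (copied as snapshots).
        ranked = sorted(wolves, key=objective)
        return [list(r) for r in ranked[:3]]

    alpha, beta, delta = leaders()
    for t in range(max_iter):
        a = 2.0 - 2.0 * t / max_iter            # equation (16)
        for w in wolves:
            new_pos = []
            for d in range(dim):
                x_new = 0.0
                for leader in (alpha, beta, delta):
                    A = 2.0 * a * rng.random() - a
                    C = 2.0 * rng.random()
                    D = abs(C * leader[d] - w[d])
                    x_new += leader[d] - A * D
                # Average of X1, X2, X3, clipped to the search bounds.
                new_pos.append(min(max(x_new / 3.0, lb), ub))
            w[:] = new_pos
        alpha, beta, delta = leaders()
    return alpha

best = gwo(lambda x: sum(xi ** 2 for xi in x), dim=2, lb=-5.0, ub=5.0)
```

On the 2-D sphere function used above, the returned `best` lands close to the optimum at the origin, reflecting the exploitation phase as $a \to 0$.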
S424 Neural Comput & Applic (2017) 28 (Suppl 1):S421–S438