
Grey Wolf Optimizer

Seyedali Mirjalili a,*, Seyed Mohammad Mirjalili b, Andrew Lewis a

a School of Information and Communication Technology, Griffith University, Nathan Campus, Brisbane QLD 4111, Australia
b Department of Electrical Engineering, Faculty of Electrical and Computer Engineering, Shahid Beheshti University, G.C. 1983963113, Tehran, Iran

article info

Article history:

Received 27 June 2013

Received in revised form 18 October 2013

Accepted 11 December 2013

Available online 21 January 2014

Keywords:

Optimization

Optimization techniques

Heuristic algorithm

Metaheuristics

Constrained optimization

GWO

abstract

This work proposes a new meta-heuristic called Grey Wolf Optimizer (GWO) inspired by grey wolves (Canis lupus). The GWO algorithm mimics the leadership hierarchy and hunting mechanism of grey wolves in nature. Four types of grey wolves such as alpha, beta, delta, and omega are employed for simulating the leadership hierarchy. In addition, the three main steps of hunting, searching for prey, encircling prey, and attacking prey, are implemented. The algorithm is then benchmarked on 29 well-known test functions, and the results are verified by a comparative study with Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Differential Evolution (DE), Evolutionary Programming (EP), and Evolution Strategy (ES). The results show that the GWO algorithm is able to provide very competitive results compared to these well-known meta-heuristics. The paper also considers solving three classical engineering design problems (tension/compression spring, welded beam, and pressure vessel designs) and presents a real application of the proposed method in the field of optical engineering. The results of the classical engineering design problems and real application prove that the proposed algorithm is applicable to challenging problems with unknown search spaces.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Meta-heuristic optimization techniques have become very popular over the last two decades. Surprisingly, some of them such as Genetic Algorithm (GA) [1], Ant Colony Optimization (ACO) [2], and Particle Swarm Optimization (PSO) [3] are fairly well-known among not only computer scientists but also scientists from different fields. In addition to the huge number of theoretical works, such optimization techniques have been applied in various fields of study. There is a question here as to why meta-heuristics have become remarkably common. The answer to this question can be summarized into four main reasons: simplicity, flexibility, derivation-free mechanism, and local optima avoidance.

First, meta-heuristics are fairly simple. They have been mostly inspired by very simple concepts. The inspirations are typically related to physical phenomena, animals' behaviors, or evolutionary concepts. The simplicity allows computer scientists to simulate different natural concepts, propose new meta-heuristics, hybridize two or more meta-heuristics, or improve the current meta-heuristics. Moreover, the simplicity assists other scientists to learn meta-heuristics quickly and apply them to their problems.

Second, flexibility refers to the applicability of meta-heuristics to different problems without any special changes in the structure of the algorithm. Meta-heuristics are readily applicable to different problems since they mostly treat problems as black boxes. In other words, only the input(s) and output(s) of a system matter to a meta-heuristic, so all a designer needs to know is how to represent his/her problem for meta-heuristics.

Third, the majority of meta-heuristics have derivation-free mechanisms. In contrast to gradient-based optimization approaches, meta-heuristics optimize problems stochastically. The optimization process starts with random solution(s), and there is no need to calculate the derivative of search spaces to find the optimum. This makes meta-heuristics highly suitable for real problems with expensive or unknown derivative information.

Finally, meta-heuristics have superior abilities to avoid local optima compared to conventional optimization techniques. This is due to the stochastic nature of meta-heuristics, which allows them to avoid stagnation in local solutions and search the entire search space extensively. The search space of real problems is usually unknown and very complex with a massive number of local optima, so meta-heuristics are good options for optimizing these challenging real problems.

The No Free Lunch (NFL) theorem [4] is worth mentioning here. This theorem has logically proved that there is no meta-heuristic best suited for solving all optimization problems. In other words, a particular meta-heuristic may show very promising results on a set of problems, but the same algorithm may show poor performance on a different set of problems. Obviously, NFL makes this field of study highly active, which results in enhancing current approaches and proposing new meta-heuristics every year. This also motivates our attempts to develop a new meta-heuristic with inspiration from grey wolves.

0965-9978/$ - see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.advengsoft.2013.12.007

* Corresponding author. Tel.: +61 434555738.
E-mail addresses: seyedali.mirjalili@griffithuni.edu.au (S. Mirjalili), mohammad.smm@gmail.com (S.M. Mirjalili), a.lewis@griffith.edu.au (A. Lewis).

Advances in Engineering Software 69 (2014) 46–61

Contents lists available at ScienceDirect: Advances in Engineering Software, journal homepage: www.elsevier.com/locate/advengsoft

Generally speaking, meta-heuristics can be divided into two main classes: single-solution-based and population-based. In the former class (Simulated Annealing [5] for instance) the search process starts with one candidate solution. This single candidate solution is then improved over the course of iterations. Population-based meta-heuristics, however, perform the optimization using a set of solutions (population). In this case the search process starts with a random initial population (multiple solutions), and this population is enhanced over the course of iterations. Population-based meta-heuristics have some advantages compared to single-solution-based algorithms:

- Multiple candidate solutions share information about the search space, which results in sudden jumps toward the promising part of the search space.
- Multiple candidate solutions assist each other to avoid locally optimal solutions.
- Population-based meta-heuristics generally have greater exploration compared to single-solution-based algorithms.

One of the interesting branches of population-based meta-heuristics is Swarm Intelligence (SI). The concept of SI was first proposed in 1993 [6]. According to Bonabeau et al. [1], SI is "The emergent collective intelligence of groups of simple agents". The inspirations of SI techniques originate mostly from natural colonies, flocks, herds, and schools. Some of the most popular SI techniques are ACO [2], PSO [3], and Artificial Bee Colony (ABC) [7]. A comprehensive literature review of the SI algorithms is provided in the next section. Some of the advantages of SI algorithms are:

- SI algorithms preserve information about the search space over the course of iterations, whereas Evolutionary Algorithms (EA) discard the information of the previous generations.
- SI algorithms often utilize memory to save the best solution obtained so far.
- SI algorithms usually have fewer parameters to adjust.
- SI algorithms have fewer operators compared to evolutionary approaches (crossover, mutation, elitism, and so on).
- SI algorithms are easy to implement.

Regardless of the differences between the meta-heuristics, a common feature is the division of the search process into two phases: exploration and exploitation [8–12]. The exploration phase refers to the process of investigating the promising area(s) of the search space as broadly as possible. An algorithm needs stochastic operators to randomly and globally search the search space in order to support this phase. Exploitation, in contrast, refers to the local search capability around the promising regions obtained in the exploration phase. Finding a proper balance between these two phases is considered a challenging task due to the stochastic nature of meta-heuristics. This work proposes a new SI technique with inspiration from the social hierarchy and hunting behavior of grey wolf packs. The rest of the paper is organized as follows: Section 2 presents a literature review of SI techniques. Section 3 outlines the proposed GWO algorithm. The results and discussion of benchmark functions, semi-real problems, and a real application are presented in Sections 4–6, respectively. Finally, Section 7 concludes the work and suggests some directions for future studies.

2. Literature review

Meta-heuristics may be classified into three main classes: evolutionary, physics-based, and SI algorithms. EAs are usually inspired by the concepts of evolution in nature. The most popular algorithm in this branch is GA. This algorithm was proposed by Holland in 1992 [13] and simulates Darwinian evolution concepts. The engineering applications of GA were extensively investigated by Goldberg [14]. Generally speaking, in EAs the optimization is done by evolving an initial random solution. Each new population is created by the combination and mutation of the individuals in the previous generation. Since the best individuals have a higher probability of participating in generating the new population, the new population is likely to be better than the previous generation(s). This can guarantee that the initial random population is optimized over the course of generations. Some of the EAs are Differential Evolution (DE) [15], Evolutionary Programming (EP) [16,17], Evolution Strategy (ES) [18,19], Genetic Programming (GP) [20], and Biogeography-Based Optimizer (BBO) [21].

As an example, the BBO algorithm was first proposed by Simon in 2008 [21]. The basic idea of this algorithm has been inspired by biogeography, which refers to the study of biological organisms in terms of geographical distribution (over time and space). The case studies might include different islands, lands, or even continents over decades, centuries, or millennia. In this field of study different ecosystems (habitats or territories) are investigated for finding the relations between different species (habitants) in terms of immigration, emigration, and mutation. The evolution of ecosystems (considering different kinds of species such as predator and prey) over migration and mutation to reach a stable situation was the main inspiration of the BBO algorithm.

The second main branch of meta-heuristics is physics-based techniques. Such optimization algorithms typically mimic physical rules. Some of the most popular algorithms are Gravitational Local Search (GLSA) [22], Big-Bang Big-Crunch (BBBC) [23], Gravitational Search Algorithm (GSA) [24], Charged System Search (CSS) [25], Central Force Optimization (CFO) [26], Artificial Chemical Reaction Optimization Algorithm (ACROA) [27], Black Hole (BH) [28] algorithm, Ray Optimization (RO) [29] algorithm, Small-World Optimization Algorithm (SWOA) [30], Galaxy-based Search Algorithm (GbSA) [31], and Curved Space Optimization (CSO) [32]. The mechanism of these algorithms is different from EAs, in that a random set of search agents communicate and move throughout the search space according to physical rules. This movement is implemented, for example, using gravitational force, ray casting, electromagnetic force, inertia force, weights, and so on.

For example, the BBBC algorithm was inspired by the big bang and big crunch theories. The search agents of BBBC are scattered from a point in random directions in a search space according to the principles of the big bang theory. They search randomly and then gather in a final point (the best point obtained so far) according to the principles of the big crunch theory. GSA is another physics-based algorithm. The basic physical theory from which GSA is inspired is Newton's law of universal gravitation. The GSA algorithm performs search by employing a collection of agents that have masses proportional to the value of a fitness function. During iterations, the masses are attracted to each other by the gravitational forces between them. The heavier the mass, the bigger the attractive force. Therefore, the heaviest mass, which is possibly close to the global optimum, attracts the other masses in proportion to their distances.

The third subclass of meta-heuristics is the SI methods. These algorithms mostly mimic the social behavior of swarms, herds, flocks, or schools of creatures in nature. The mechanism is similar to that of physics-based algorithms, but the search agents navigate using the simulated collective and social intelligence of creatures. The most popular SI technique is PSO. The PSO algorithm was proposed by Kennedy and Eberhart [3] and was inspired by the social behavior of birds flocking. The PSO algorithm employs multiple particles that chase the position of the best particle and their own best positions obtained so far. In other words, a particle is moved considering its own best solution as well as the best solution the swarm has obtained.

Another popular SI algorithm is ACO, proposed by Dorigo et al. in 2006 [2]. This algorithm was inspired by the social behavior of ants in an ant colony. In fact, the social intelligence of ants in finding the shortest path between the nest and a source of food is the main inspiration of ACO. A pheromone matrix is evolved over the course of iterations by the candidate solutions. The ABC is another popular algorithm, mimicking the collective behavior of bees in finding food sources. There are three types of bees in ABC: scout, onlooker, and employed bees. The scout bees are responsible for exploring the search space, whereas onlooker and employed bees exploit the promising solutions found by scout bees. Finally, the Bat-inspired Algorithm (BA), inspired by the echolocation behavior of bats, has been proposed recently [33]. There are many types of bats in nature. They are different in terms of size and weight, but they all have quite similar behaviors when navigating and hunting. Bats utilize natural sonar in order to do this. The two main characteristics of bats when finding prey have been adopted in designing the BA algorithm: bats tend to decrease the loudness and increase the rate of emitted ultrasonic sound when they chase prey. This behavior has been mathematically modeled for the BA algorithm. The rest of the SI techniques proposed so far are as follows:

- Marriage in Honey Bees Optimization Algorithm (MBO) in 2001 [34].
- Artificial Fish-Swarm Algorithm (AFSA) in 2003 [35].
- Termite Algorithm in 2005 [36].
- Wasp Swarm Algorithm in 2007 [37].
- Monkey Search in 2007 [38].
- Bee Collecting Pollen Algorithm (BCPA) in 2008 [39].
- Cuckoo Search (CS) in 2009 [40].
- Dolphin Partner Optimization (DPO) in 2009 [41].
- Firefly Algorithm (FA) in 2010 [42].
- Bird Mating Optimizer (BMO) in 2012 [43].
- Krill Herd (KH) in 2012 [44].
- Fruit Fly Optimization Algorithm (FOA) in 2012 [45].

This list shows that there are many SI techniques proposed so far, many of them inspired by hunting and search behaviors. To the best of our knowledge, however, there is no SI technique in the literature mimicking the leadership hierarchy of grey wolves, well known for their pack hunting. This motivated our attempt to mathematically model the social behavior of grey wolves, propose a new SI algorithm inspired by grey wolves, and investigate its abilities in solving benchmark and real problems.

3. Grey Wolf Optimizer (GWO)

In this section the inspiration of the proposed method is first discussed. Then, the mathematical model is provided.

3.1. Inspiration

The grey wolf (Canis lupus) belongs to the Canidae family. Grey wolves are considered apex predators, meaning that they are at the top of the food chain. Grey wolves mostly prefer to live in a pack. The group size is 5–12 on average. Of particular interest is that they have a very strict social dominance hierarchy, as shown in Fig. 1. The leaders are a male and a female, called alphas. The alpha is mostly responsible for making decisions about hunting, sleeping place, time to wake, and so on. The alpha's decisions are dictated to the pack. However, some kind of democratic behavior has also been observed, in which an alpha follows the other wolves in the pack. In gatherings, the entire pack acknowledges the alpha by holding their tails down. The alpha wolf is also called the dominant wolf since his/her orders should be followed by the pack [46]. Only the alpha wolves are allowed to mate in the pack. Interestingly, the alpha is not necessarily the strongest member of the pack but the best in terms of managing the pack. This shows that the organization and discipline of a pack is much more important than its strength.

The second level in the hierarchy of grey wolves is beta. The betas are subordinate wolves that help the alpha in decision-making or other pack activities. The beta wolf can be either male or female, and he/she is probably the best candidate to be the alpha in case one of the alpha wolves passes away or becomes very old. The beta wolf should respect the alpha, but commands the other lower-level wolves as well. It plays the role of an advisor to the alpha and a discipliner for the pack. The beta reinforces the alpha's commands throughout the pack and gives feedback to the alpha.

The lowest ranking grey wolf is the omega. The omega plays the role of scapegoat. Omega wolves always have to submit to all the other dominant wolves, and they are the last wolves allowed to eat. It may seem the omega is not an important individual in the pack, but it has been observed that the whole pack faces internal fighting and problems in case of losing the omega. This is due to the venting of the violence and frustration of all wolves by the omega(s), which assists in satisfying the entire pack and maintaining the dominance structure. In some cases the omega also plays the role of babysitter in the pack.

If a wolf is not an alpha, beta, or omega, he/she is called a subordinate (or delta in some references). Delta wolves have to submit to alphas and betas, but they dominate the omega. Scouts, sentinels, elders, hunters, and caretakers belong to this category. Scouts are responsible for watching the boundaries of the territory and warning the pack in case of any danger. Sentinels protect and guarantee the safety of the pack. Elders are the experienced wolves who used to be alpha or beta. Hunters help the alphas and betas when hunting prey and providing food for the pack. Finally, the caretakers are responsible for caring for the weak, ill, and wounded wolves in the pack.

In addition to the social hierarchy of wolves, group hunting is another interesting social behavior of grey wolves. According to Muro et al. [47] the main phases of grey wolf hunting are as follows:

- Tracking, chasing, and approaching the prey.
- Pursuing, encircling, and harassing the prey until it stops moving.
- Attacking the prey.

These steps are shown in Fig. 2. In this work this hunting technique and the social hierarchy of grey wolves are mathematically modeled in order to design GWO and perform optimization.

Fig. 1. Hierarchy of grey wolf (dominance decreases from top down).


3.2. Mathematical model and algorithm

In this subsection the mathematical models of the social hierarchy, tracking, encircling, and attacking prey are provided. Then the GWO algorithm is outlined.

3.2.1. Social hierarchy

In order to mathematically model the social hierarchy of wolves when designing GWO, we consider the fittest solution as the alpha (α). Consequently, the second and third best solutions are named beta (β) and delta (δ) respectively. The rest of the candidate solutions are assumed to be omega (ω). In the GWO algorithm the hunting (optimization) is guided by α, β, and δ. The ω wolves follow these three wolves.
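This ranking can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code: `rank_wolves`, `positions`, and `fitness` are our own names, and a minimization problem is assumed.

```python
import numpy as np

def rank_wolves(positions, fitness):
    """Sort candidate solutions by fitness (minimization) and return the
    three best as alpha, beta, delta; the remainder are the omegas."""
    order = np.argsort(fitness)            # best (lowest) fitness first
    alpha, beta, delta = positions[order[:3]]
    omegas = positions[order[3:]]
    return alpha, beta, delta, omegas

# Example: 5 wolves in a 2-D search space, fitness = sphere function
rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, size=(5, 2))
fit = (pos ** 2).sum(axis=1)
alpha, beta, delta, omegas = rank_wolves(pos, fit)
```

The alpha, beta, and delta returned here are the three leaders that guide the position updates in Section 3.2.3.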

3.2.2. Encircling prey

As mentioned above, grey wolves encircle prey during the hunt. In order to mathematically model encircling behavior the following equations are proposed:

$$\vec{D} = |\vec{C} \cdot \vec{X}_p(t) - \vec{X}(t)| \tag{3.1}$$

$$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D} \tag{3.2}$$

where t indicates the current iteration, $\vec{A}$ and $\vec{C}$ are coefficient vectors, $\vec{X}_p$ is the position vector of the prey, and $\vec{X}$ indicates the position vector of a grey wolf.

The vectors $\vec{A}$ and $\vec{C}$ are calculated as follows:

$$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a} \tag{3.3}$$

$$\vec{C} = 2 \cdot \vec{r}_2 \tag{3.4}$$

where components of $\vec{a}$ are linearly decreased from 2 to 0 over the course of iterations and $\vec{r}_1$, $\vec{r}_2$ are random vectors in [0, 1].

To see the effects of Eqs. (3.1) and (3.2), a two-dimensional position vector and some of the possible neighbors are illustrated in Fig. 3(a). As can be seen in this figure, a grey wolf in the position of (X, Y) can update its position according to the position of the prey (X*, Y*). Different places around the best agent can be reached with respect to the current position by adjusting the value of the $\vec{A}$ and $\vec{C}$ vectors. For instance, (X* − X, Y*) can be reached by setting $\vec{A} = (1, 0)$ and $\vec{C} = (1, 1)$. The possible updated positions of a grey wolf in 3D space are depicted in Fig. 3(b). Note that the random vectors $\vec{r}_1$ and $\vec{r}_2$ allow wolves to reach any position between the points illustrated in Fig. 3. So a grey wolf can update its position inside the space around the prey in any random location by using Eqs. (3.1) and (3.2).

The same concept can be extended to a search space with n dimensions, and the grey wolves will move in hyper-cubes (or hyper-spheres) around the best solution obtained so far.
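A minimal Python/NumPy sketch of one encircling step, Eqs. (3.1)–(3.4), may help. Variable names and the fixed random seed are our own illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def encircle(X, X_p, a):
    """One encircling step: move wolf X around prey X_p.
    a is the scalar that is decreased from 2 to 0 elsewhere in the loop."""
    r1 = rng.random(X.shape)          # r1, r2 ~ U[0, 1], Eq. (3.3)/(3.4)
    r2 = rng.random(X.shape)
    A = 2 * a * r1 - a                # Eq. (3.3)
    C = 2 * r2                        # Eq. (3.4)
    D = np.abs(C * X_p - X)           # Eq. (3.1)
    return X_p - A * D                # Eq. (3.2)

wolf = np.array([3.0, -1.0])
prey = np.array([0.0, 0.0])
new_pos = encircle(wolf, prey, a=2.0)
```

Note that with a = 0 the coefficient A vanishes and the wolf lands exactly on the prey, which is the limiting "attack" behavior discussed in Section 3.2.4.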

3.2.3. Hunting

Grey wolves have the ability to recognize the location of prey and encircle them. The hunt is usually guided by the alpha. The beta and delta might also participate in hunting occasionally. However, in an abstract search space we have no idea about the location of the optimum (prey). In order to mathematically simulate the hunting behavior of grey wolves, we suppose that the alpha (best candidate solution), beta, and delta have better knowledge about the potential location of prey. Therefore, we save the first three best solutions obtained so far and oblige the other search agents (including the omegas) to update their positions according to the positions of the best search agents. The following formulas are proposed in this regard:

$$\vec{D}_\alpha = |\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}|, \quad \vec{D}_\beta = |\vec{C}_2 \cdot \vec{X}_\beta - \vec{X}|, \quad \vec{D}_\delta = |\vec{C}_3 \cdot \vec{X}_\delta - \vec{X}| \tag{3.5}$$

$$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot (\vec{D}_\alpha), \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot (\vec{D}_\beta), \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot (\vec{D}_\delta) \tag{3.6}$$

$$\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3} \tag{3.7}$$

Fig. 4 shows how a search agent updates its position according to alpha, beta, and delta in a 2D search space. It can be observed that the final position would be in a random place within a circle which is defined by the positions of alpha, beta, and delta in the search space. In other words alpha, beta, and delta estimate the position of the prey, and the other wolves update their positions randomly around the prey.
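The update of Eqs. (3.5)–(3.7) can be sketched as follows. This is again an illustrative Python sketch under our own naming, not the authors' implementation; each leader gets its own independently drawn A and C coefficients per Eqs. (3.3) and (3.4).

```python
import numpy as np

rng = np.random.default_rng(7)

def hunt_step(X, alpha, beta, delta, a):
    """Update one wolf X toward the alpha, beta, and delta leaders."""
    X_new = np.zeros_like(X)
    for leader in (alpha, beta, delta):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        A = 2 * a * r1 - a                 # Eq. (3.3)
        C = 2 * r2                         # Eq. (3.4)
        D = np.abs(C * leader - X)         # Eq. (3.5)
        X_new += leader - A * D            # Eq. (3.6), accumulated
    return X_new / 3.0                     # Eq. (3.7): average of X1, X2, X3

X = np.array([5.0, 5.0])
alpha = np.array([1.0, 1.0])
beta = np.array([2.0, 0.0])
delta = np.array([0.0, 2.0])
X_next = hunt_step(X, alpha, beta, delta, a=1.5)
```

As a sanity check on the formulas, setting a = 0 makes every A zero, and Eq. (3.7) then places the wolf exactly at the centroid of the three leaders.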

3.2.4. Attacking prey (exploitation)

As mentioned above the grey wolves finish the hunt by attacking the prey when it stops moving. In order to mathematically model approaching the prey we decrease the value of $\vec{a}$. Note that the fluctuation range of $\vec{A}$ is also decreased by $\vec{a}$. In other words $\vec{A}$ is a random value in the interval [−2a, 2a] where a is decreased from 2 to 0 over the course of iterations. When random values of $\vec{A}$ are in [−1, 1], the next position of a search agent can be in any position between its current position and the position of the prey.
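The linear decrease of a over the run can be sketched as below; the iteration budget `max_iter` is an illustrative choice of ours, not a value fixed by the paper.

```python
# Linear decrease of the scalar a from 2 toward 0 over the iterations.
# max_iter is an illustrative iteration budget.
max_iter = 500
schedule = [2.0 - t * (2.0 / max_iter) for t in range(max_iter)]

# Early on, a is large, so A drawn via Eq. (3.3) can have magnitude
# greater than 1, pushing wolves away from the prey (exploration).
# As a shrinks, |A| < 1 becomes increasingly likely, forcing the wolves
# to converge on the prey (exploitation / attacking).
```

The schedule starts exactly at 2 and approaches 0 on the final iteration, matching the decrease described in the text.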

Fig. 2. Hunting behavior of grey wolves: (A) chasing, approaching, and tracking prey; (B–D) pursuing, harassing, and encircling; (E) stationary situation and attack [47].

