Applied Soft Computing 61 (2017) 1013–1021
Mixed second order partial derivatives decomposition method for large scale optimization

Lin Li a,∗, Licheng Jiao b,∗, Rustam Stolkin c, Fang Liu b
a Key Laboratory of Information Fusion Technology of Ministry of Education, School of Automation, Northwestern Polytechnical University, Xi’an, Shaanxi Province 710072, PR China
b Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi’an 710071, PR China
c Extreme Robotics Lab, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
Article info

Article history:
Received 11 December 2015
Received in revised form 16 May 2017
Accepted 8 August 2017
Available online 31 August 2017
Keywords: Large-scale optimization; Evolutionary algorithm; Cooperative co-evolution; Divide-and-conquer; Decomposition method; Nonseparability; Curse of dimensionality
Abstract

This paper focuses on decomposition strategies for large-scale optimization problems. The cooperative co-evolution approach improves the scalability of evolutionary algorithms by decomposing a single high-dimensional problem into several lower-dimensional sub-problems and then optimizing each of them individually. However, the dominating factor in the performance of these algorithms on large-scale function optimization problems is the choice of decomposition approach. This paper provides a theoretical analysis of the interaction between variables in such approaches. Three theorems and three lemmas are introduced to investigate the relationships between decision variables, and we provide theoretical explanations of overlapping subcomponents. An automatic decomposition approach, based on the mixed second-order partial derivatives of the analytic expression of the optimization problem, is presented. We investigate the advantages and disadvantages of the differential grouping (DG) automatic decomposition approach, and we propose an enhanced version of differential grouping to deal with problems that the original differential grouping method is unable to resolve. We compare the performance of three different grouping strategies and provide the results of empirical evaluations using 20 benchmark data sets.

© 2017 Elsevier B.V. All rights reserved.
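The interaction criterion underlying the decomposition approach described above can be sketched numerically. The following is our own minimal Python illustration, not the paper's algorithm: two variables are treated as interacting (nonseparable) when a finite-difference estimate of the mixed second-order partial derivative is nonzero; the step size `h` and tolerance `eps` are illustrative choices.

```python
# A minimal sketch (our own illustration, not the paper's exact algorithm):
# variables x_i and x_j interact when the mixed second-order partial
# derivative d^2 f / (dx_i dx_j) is nonzero. Here it is estimated with a
# central finite difference.

def interacts(f, x, i, j, h=0.1, eps=1e-6):
    """Return True if the finite-difference estimate of the mixed
    second-order partial derivative of f at x, w.r.t. x_i and x_j,
    exceeds the tolerance eps."""
    def shifted(di, dj):
        y = list(x)
        y[i] += di
        y[j] += dj
        return y
    mixed = (f(shifted(h, h)) - f(shifted(h, -h))
             - f(shifted(-h, h)) + f(shifted(-h, -h))) / (4.0 * h * h)
    return abs(mixed) > eps

# Example: in f(x) = x0*x1 + x2^2, x0 interacts with x1 but not with x2.
f = lambda x: x[0] * x[1] + x[2] ** 2
print(interacts(f, [1.0, 2.0, 3.0], 0, 1))  # True
print(interacts(f, [1.0, 2.0, 3.0], 0, 2))  # False
```

In practice such pairwise checks can be used to group variables: any pair with a nonzero mixed derivative must be placed in the same subcomponent.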
1. Introduction

1.1. Overview
The solution of large optimization problems has attracted increasing attention from the evolutionary computation community in recent years [1–3]. A wide variety of metaheuristic optimization algorithms have been proposed during the past few decades, such as genetic algorithms [4,5], evolutionary algorithms (EAs) [6–10], particle swarm optimization (PSO) [11,12], differential evolution (DE) [13,14], simulated annealing [15,16], ant colony optimization [17,18], and evolutionary programming (EP). While these methods have been successfully applied to theoretical and real-world optimization problems, their application to problems of large dimension (e.g. problems with more than one hundred decision variables) remains problematic. This paper discusses techniques for solving such large-scale optimization (LSO) problems.
∗ Corresponding authors.
E-mail addresses: linli@nwpu.edu.cn (L. Li), lchjiao@mail.xidian.edu.cn (L. Jiao).
The performance of many metaheuristic methods deteriorates rapidly as the dimension of the decision variables increases, a phenomenon referred to as the “curse of dimensionality” in much of the literature [19,20]. There are two reasons for this [21]. Firstly, the search space grows exponentially with dimension, engendering much greater computation time in algorithms which perform well in low-dimensional spaces. Secondly, the complexity of an optimization problem may change in high dimensions, making the search for optimal solutions more difficult. For example, Rosenbrock’s function is unimodal when there are only two variables, but it becomes multimodal when the dimension is larger than four [22,23].
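For concreteness, the generalized Rosenbrock function mentioned above can be written down and checked at its known global optimum. This is a short illustration of ours; the multimodality claim for dimensions larger than four is taken from [22,23] and not demonstrated here.

```python
# The generalized Rosenbrock function; its global minimum value 0 is attained
# at x = (1, ..., 1) in every dimension. (For dimension larger than four the
# function additionally acquires a local minimum [22,23], which this short
# check does not demonstrate.)
def rosenbrock(x):
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

print(rosenbrock([1.0] * 2))    # 0.0
print(rosenbrock([1.0] * 100))  # 0.0
```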
An intuitive yet efficient way to deal with this predicament is to decompose the original large-scale optimization problem into a group of smaller and less complex sub-problems, and then handle each sub-problem separately. This is known as a “divide-and-conquer” strategy, and it has been successfully applied in many areas [19,24–27].
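The divide-and-conquer idea can be sketched in a few lines. The following toy illustration is our own (the grouping, search operator, and parameters are illustrative, not the methods studied in this paper): each group of variables is optimized in turn while the remaining variables are held fixed at their current best values.

```python
import random

def cc_optimize(f, x0, groups, iters=60, step=0.5, seed=1):
    """Round-robin optimization over variable groups: each sub-problem is a
    low-dimensional search on one group, with all other variables frozen
    at their current best values (a toy divide-and-conquer sketch)."""
    rng = random.Random(seed)
    best = list(x0)
    for _ in range(iters):
        for group in groups:            # one subcomponent at a time
            for _ in range(20):         # crude random-perturbation search
                cand = list(best)
                for i in group:
                    cand[i] += rng.gauss(0.0, step)
                if f(cand) < f(best):   # greedy acceptance
                    best = cand
        step *= 0.95                    # anneal the perturbation size
    return best

# Example on the separable sphere function: any disjoint grouping suffices.
sphere = lambda x: sum(v * v for v in x)
start = [3.0, -2.0, 4.0, -1.0]
sol = cc_optimize(sphere, start, [[0, 1], [2, 3]])
print(sphere(sol) < sphere(start))  # True
```

For fully separable functions such as the sphere, any disjoint grouping works; the difficulty addressed by this paper is choosing groups when variables interact, so that interacting variables end up in the same subcomponent.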
The cooperative co-evolution (CC) method, proposed by Potter and De Jong in [28], provided a new way to solve more complex structures such as neural networks and rule sets, and its performance has since been tested on well-studied optimization
http://dx.doi.org/10.1016/j.asoc.2017.08.025
1568-4946/© 2017 Elsevier B.V. All rights reserved.