A Comparative Study of STA on Large Scale Global Optimization
Xiaojun Zhou, Chunhua Yang∗ and Weihua Gui
Abstract— The state transition algorithm (STA) has been emerging as
a new intelligent global optimization method in recent
years. The standard continuous STA has demonstrated powerful
global search ability on global optimization problems whose
dimension is no more than 100. In this study, we give a test
report on the performance of the standard continuous STA
on large scale global optimization, compared with other
state-of-the-art evolutionary algorithms. The experimental
results show that the standard continuous STA still works
well on almost all of the test problems, and that its global search
ability is much superior to that of its competitors.
I. INTRODUCTION
STATE TRANSITION ALGORITHM (STA) has been
emerging as a new intelligent optimization method for
global optimization in recent years [1]-[13]. In the state
transition algorithm, a solution to an optimization problem
is considered as a state, and an update of a solution can
be regarded as a state transition. By referring to state space
representation, on the basis of the current state x_k, the unified
form of generating a new state x_{k+1} in the state transition
algorithm can be described as follows:

    x_{k+1} = A_k x_k + B_k u_k
    y_{k+1} = f(x_{k+1}),        (1)
where x_k = [x_1, x_2, ..., x_n]^T stands for a state, corresponding
to a solution of an optimization problem; u_k is a function
of x_k and historical states; A_k and B_k are state transition
matrices, which are usually some state transformation operators;
f(·) is the objective function or fitness function, and
y_{k+1} is the function value at x_{k+1}.
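The unified form (1) can be illustrated with a minimal sketch. The particular choices of A_k, B_k, and u_k below (a small random perturbation of the identity, with no historical input) are hypothetical placeholders, not the operators actually used in STA:

```python
import numpy as np

def state_transition(x_k, A_k, B_k, u_k, f):
    """One generic update of Eq. (1):
    x_{k+1} = A_k x_k + B_k u_k,  y_{k+1} = f(x_{k+1})."""
    x_next = A_k @ x_k + B_k @ u_k
    return x_next, f(x_next)

# Illustration on a 3-dimensional sphere function.
rng = np.random.default_rng(0)
n = 3
x = rng.uniform(-1.0, 1.0, size=n)
A = np.eye(n) + 0.1 * rng.uniform(-1.0, 1.0, size=(n, n))  # hypothetical operator
B = np.zeros((n, n))                                       # no historical input here
u = np.zeros(n)
x_next, y_next = state_transition(x, A, B, u, lambda v: float(np.sum(v**2)))
```

In the actual algorithm, A_k is one of the state transformation operators reviewed in Section II, applied stochastically.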
Unlike most of the existing evolutionary algorithms, the
basic STA is an individual-based iterative method. Based
on a current state, a regular neighborhood is automatically
formed by using certain state transformation operators,
since there exist stochastic properties in the state transition
matrices, and then a sampling technique is used to create
a candidate state set. That is to say, the generation of a
candidate set in STA is completely different from that of most
other evolutionary algorithms. Furthermore, special local
and global search operators are both designed, and in the
meanwhile, there exists an alternating way of using local and
global operators in STA. The form of STA can be continuous
or discrete, called continuous STA or discrete STA, respectively,
depending on the state transformation operators. In
continuous STA, we have designed four state transformation
operators named rotation, translation, expansion, and axesion
to deal with continuous variables (see [3] for details); while
in discrete STA, four other state transformation operators
named swap, shift, symmetry and substitute are designed as
well, and they can tackle discrete variables in an effective
way (please refer to [12] for details). The power of
both continuous and discrete STA has been demonstrated in
[3]-[16] in terms of global search ability and convergence
rate.

∗ Corresponding author of this paper. The authors are with the School of
Information Science and Engineering, Central South University, Changsha
410083, China (email: ychh@csu.edu.cn). This work was supported by the
National Natural Science Foundation of China (Grant Nos. 61503416,
61533020, 61533021, 61590921).

In this study, we focus on the continuous STA for the
following global optimization problem

    min_{x ∈ Ω} f(x),        (2)

where x ∈ R^n, and Ω ⊆ R^n is a closed and compact set, which
is usually composed of lower and upper bounds of x.
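The individual-based search described above — form a neighborhood of the current state, sample a candidate set, and keep the best candidate — can be sketched for problem (2) as follows. This is an illustrative reconstruction, not the authors' implementation: the function name `sta_step`, the sample size `se`, and the use of a single rotation-like operator are assumptions made here for brevity.

```python
import numpy as np

def sphere(x):
    """Test objective f(x) = sum of squares, minimized at the origin."""
    return float(np.sum(x**2))

def sta_step(x_best, f, lb, ub, alpha, se, rng):
    """One STA-style iteration (sketch): sample `se` candidates around
    x_best with a rotation-like random transformation, clip them into
    the box Omega = [lb, ub]^n, and keep the best if it improves."""
    n = len(x_best)
    candidates = []
    for _ in range(se):
        R = rng.uniform(-1.0, 1.0, size=(n, n))
        x_new = x_best + alpha / (n * np.linalg.norm(x_best)) * (R @ x_best)
        candidates.append(np.clip(x_new, lb, ub))  # stay inside Omega
    best = min(candidates, key=f)
    return best if f(best) < f(x_best) else x_best  # greedy selection

rng = np.random.default_rng(1)
x0 = rng.uniform(-5.0, 5.0, size=10)
x = x0.copy()
for _ in range(200):
    x = sta_step(x, sphere, -5.0, 5.0, alpha=1.0, se=30, rng=rng)
```

Because selection is greedy, the objective value is non-increasing over iterations, and every visited state remains feasible.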
The effectiveness and efficiency of continuous STA have
been verified in comparisons with other state-of-the-art
intelligent optimization methods, such as the real-coded genetic
algorithm (RCGA) [18], the comprehensive learning particle swarm
optimizer (CLPSO) [17], self-adaptive differential evolution
(SaDE) [19] and the artificial bee colony (ABC) algorithm [20].
However, in these studies, the dimension of the benchmark
functions chosen for testing is no more than 100. It is reported that
the performance of most intelligent optimization methods
deteriorates severely as the dimension increases, especially
for large scale global optimization. Therefore, the motivation
of this study is to test the continuous STA on large scale
global optimization problems.
The remainder of this paper is organized as follows:
Section II gives a brief review of continuous STA and
its procedures. Section III presents the experimental results
and discussions of continuous STA and its competitors on
large scale benchmark problems. Conclusions and future
perspectives are given in Section IV.
II. A BRIEF REVIEW OF CONTINUOUS STA
The initial version of continuous STA was first proposed
in [1], in which there are only three state transformation
operators; then in [2], the axesion transformation was
added to strengthen single-dimensional search. By
incorporating the axesion transformation and letting the
rotation factor α decline periodically in an outer loop,
the standard continuous STA was established in [3].
A. State transition operators
Drawing on state space transformations, four special
state transformation operators are designed to generate
continuous solutions for an optimization problem.
(1) Rotation transformation

    x_{k+1} = x_k + α · (1 / (n ‖x_k‖₂)) · R_r x_k,        (3)
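The rotation transformation of Eq. (3) can be sketched directly. Here we assume, as in [3], that α is the rotation factor and R_r is an n×n random matrix with entries drawn uniformly from [−1, 1]; under this assumption the new state stays within a hypersphere of radius α around x_k, since ‖R_r x_k‖₂ ≤ n‖x_k‖₂.

```python
import numpy as np

def rotation(x_k, alpha, rng):
    """Rotation transformation of Eq. (3):
    x_{k+1} = x_k + alpha * (1 / (n * ||x_k||_2)) * R_r x_k,
    with R_r an n-by-n random matrix, entries uniform on [-1, 1]
    (an assumption based on [3])."""
    n = len(x_k)
    R_r = rng.uniform(-1.0, 1.0, size=(n, n))
    return x_k + alpha * (R_r @ x_k) / (n * np.linalg.norm(x_k))

rng = np.random.default_rng(42)
x = np.array([2.0, -1.0, 0.5, 3.0])
alpha = 1.0
x_new = rotation(x, alpha, rng)
# Step length is bounded: ||x_new - x|| <= alpha * ||R_r x|| / (n ||x||) <= alpha.
```

This bounded-step property is what makes rotation a local search operator: shrinking α periodically narrows the search around the current state.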
2016 12th World Congress on Intelligent Control and Automation (WCICA)
June 12-15, 2016, Guilin, China
978-1-4673-8414-8/16/$31.00 ©2016 IEEE