Cluster Comput (2016) 19:1323–1332
DOI 10.1007/s10586-016-0587-4
First hitting time analysis of continuous evolutionary algorithms based on average gain

Zhang Yushan¹ · Huang Han² · Hao Zhifeng³ · Hu Guiwu¹
Received: 6 April 2016 / Revised: 12 June 2016 / Accepted: 20 June 2016 / Published online: 27 July 2016
© Springer Science+Business Media New York 2016
Abstract Runtime analysis of continuous evolutionary algorithms (EAs) is a hard topic in the theoretical research of evolutionary computation, and relatively few results have been obtained compared with discrete EAs. In this paper, we introduce martingale and stopping time theory to establish a general average gain model for estimating an upper bound on the expected first hitting time. The proposed model is built on a non-negative stochastic process and does not require the Markov property, and is therefore more general. We then demonstrate how the proposed model can be applied to the runtime analysis of continuous EAs. Finally, as a case study, we analyze the runtime of the (1, λ)ES with adaptive step size on the Sphere function using the proposed approach, and derive a closed-form expression for the time upper bound in the three-dimensional case. We also discuss the relationship between the step size and the offspring population size λ required to ensure convergence. The experimental results show that the proposed approach helps to derive tight upper bounds on the expected first hitting time of continuous EAs.
✉ Huang Han
bssthh@163.com

Zhang Yushan
scuthill@163.com

Hao Zhifeng
zfhao@fosu.edu.cn

Hu Guiwu
guiwuhugz@163.com
1 School of Mathematics and Statistics, Guangdong University of Finance and Economics, Guangzhou 510320, China
2 School of Software Engineering, South China University of Technology, Guangzhou 510006, China
3 School of Mathematics and Big Data, Foshan University, Foshan 528000, China
Keywords Evolutionary algorithm · Continuous optimization · First hitting time · Average gain model · Martingale · Stopping time
1 Introduction
Evolutionary algorithms (EAs) are a class of adaptive search algorithms inspired by the process of natural evolution. With many successful applications of EAs in various optimization areas, the theoretical research of EAs has attracted increasing attention [1,2]. Theoretical study helps us understand the mechanisms of EAs more precisely and guides the design of more efficient EAs in practice.
Runtime analysis is a hot spot in the theory of EAs. Intuitively, the aim is to analyze the number of fitness evaluations needed until a goal of the optimization (e.g., finding a good approximation) is reached. The runtime can be measured by the first hitting time of a set of target states of an underlying discrete-time stochastic process [3].
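Formally, writing (in notation introduced here only for illustration) $\{X_t\}_{t \ge 0}$ for the stochastic process generated by the EA and $A$ for the target set (e.g., the solutions whose fitness lies within a prescribed accuracy $\varepsilon$ of the optimum), the first hitting time is the random variable
$$T_A = \min\{t \ge 0 : X_t \in A\},$$
and the runtime is usually studied through its expectation $E[T_A]$.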
Due to the stochastic nature of EAs, their runtime analysis is not an easy task. The early studies focused on basic EAs [such as the (1+1)EA] applied to pseudo-Boolean functions with good structural properties [4–6].
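As a concrete illustration of such a basic EA, the following minimal Python sketch implements a (1+1)EA on the OneMax function; the benchmark choice, the parameter names, and the evaluation budget are illustrative assumptions of ours rather than details taken from the cited studies.

import random

def one_max(x):
    # OneMax: number of ones in the bit string (illustrative benchmark).
    return sum(x)

def one_plus_one_ea(n, fitness, max_evals=100000):
    # Minimal (1+1)EA: keep a single parent, flip each bit independently
    # with probability 1/n, and accept the offspring if it is not worse.
    parent = [random.randint(0, 1) for _ in range(n)]
    best = fitness(parent)
    for t in range(1, max_evals + 1):
        offspring = [1 - b if random.random() < 1.0 / n else b for b in parent]
        value = fitness(offspring)
        if value >= best:            # elitist acceptance
            parent, best = offspring, value
        if best == n:                # global optimum of OneMax reached
            return t                 # first hitting time, in evaluations
    return max_evals                 # budget exhausted

# Example run; the expected hitting time of the (1+1)EA on OneMax is O(n log n).
print(one_plus_one_ea(50, one_max))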
These studies demonstrated several useful mathematical methods and tools and obtained theoretical results for a number of instances. Nowadays, the runtime analysis of the (1+1)EA has gradually been extended from simple pseudo-Boolean functions to combinatorial optimization problems with practical applications. Oliveto et al. [7] analyzed the runtime of the (1+1)EA and random local search (RLS) on some instances of an NP-hard combinatorial optimization problem, the vertex cover problem, and proved that if every vertex has degree at most two, then the (1+1)EA can solve the problem in polynomial time. Computing a unique input output (UIO) sequence is a fundamental and hard problem in conformance