Adaptive Block Coordinate DIRECT algorithm 5
iteration of optimization. The first process identifies the POHs, i.e., the rectangles that potentially contain good, unsampled points. DIRECT sorts the sub-rectangles by their measures $d_j$ and, for each measure $d_j$, selects the rectangle with the lowest function value at its center. The POHs are then identified from this set of candidate rectangles according to Definition 1. The second process of DIRECT samples and divides the POHs, which confines the location of the global optimum to a smaller space. By repeating these two key processes, DIRECT iteratively approaches the global optimum and confines it within ever smaller rectangles, so better solutions are obtained as the iterations go on. Without limits on iterations and function evaluations, DIRECT is guaranteed to converge globally, and it can restrict the global optimum to a small rectangle with any given accuracy.
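The two per-iteration processes can be sketched in a few lines of code. The following is a minimal, simplified illustration, not the full algorithm: the POH test of Definition 1 is replaced here by simply keeping the lowest-valued rectangle of each measure, and the function name `direct_sketch` is ours.

```python
import math

def direct_sketch(f, n, iters=15):
    """Simplified DIRECT-style loop on [0, 1]^n.
    Each iteration (i) picks candidate POHs -- here just the rectangle with the
    lowest center value for each distinct measure, a simplification of the
    Definition 1 test -- and (ii) trisects each of them along a longest side,
    which costs 2 new function evaluations per division."""
    center = [0.5] * n
    rects = [(center, [1.0] * n, f(center))]  # (center, side lengths, f(center))
    for _ in range(iters):
        # Process 1: one best rectangle per distinct measure (center-to-vertex distance)
        best_per_measure = {}
        for rect in rects:
            d = 0.5 * math.sqrt(sum(s * s for s in rect[1]))
            if d not in best_per_measure or rect[2] < best_per_measure[d][2]:
                best_per_measure[d] = rect
        selected = {id(r) for r in best_per_measure.values()}
        # Process 2: sample and divide the selected rectangles
        next_rects = []
        for rect in rects:
            if id(rect) not in selected:
                next_rects.append(rect)
                continue
            c, sides, fc = rect
            i = max(range(n), key=lambda j: sides[j])  # a longest side
            third = sides[i] / 3.0
            child_sides = sides[:i] + [third] + sides[i + 1:]
            for shift in (-third, 0.0, third):  # center child reuses fc
                cc = c[:i] + [c[i] + shift] + c[i + 1:]
                next_rects.append((cc, child_sides, fc if shift == 0.0 else f(cc)))
        rects = next_rects
    return min(rects, key=lambda r: r[2])  # best sampled (center, sides, value)

# minimizing a simple quadratic with optimum at (0.2, 0.2)
best = direct_sketch(lambda x: sum((xi - 0.2) ** 2 for xi in x), n=2)
```

Even in this stripped-down form, the loop shows how the best sampled value is monotonically improved: dividing a rectangle keeps its center point, so no information is ever discarded.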
Unfortunately, the efficiency of DIRECT drops rapidly as the dimension increases, since working with all the coordinates of an optimization problem at each iteration may be inconvenient, and the required number of function evaluations soars with the dimension. From [9], we know that, after $r$ divisions, a rectangle has $p = \mathrm{mod}(r, n)$ sides of length $3^{-(k+1)}$ and $n - p$ sides of length $3^{-k}$, where $k = (r - p)/n$.
Thus, the center-to-vertex distance of the rectangle is given by
$$ d = \frac{1}{2}\sqrt{3^{-2(k+1)}\,p + 3^{-2k}(n-p)}. \quad (5) $$
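Equation (5) follows directly from the side lengths stated above, as a quick numerical check confirms (the function name `center_to_vertex` is ours):

```python
import math

def center_to_vertex(r, n):
    """Center-to-vertex distance of a rectangle after r divisions of [0, 1]^n,
    via Eq. (5): p = mod(r, n) sides of length 3^-(k+1), n - p of length 3^-k."""
    p = r % n
    k = (r - p) // n
    return 0.5 * math.sqrt(3.0 ** (-2 * (k + 1)) * p + 3.0 ** (-2 * k) * (n - p))

# sanity check against the explicit side lengths: r = 7, n = 3 gives p = 1, k = 2,
# i.e. one side of length 3^-3 and two sides of length 3^-2
sides = [3.0 ** -3] + [3.0 ** -2] * 2
assert abs(center_to_vertex(7, 3) - 0.5 * math.sqrt(sum(s * s for s in sides))) < 1e-15
```

For $r = 0$ the formula reduces to $d = \frac{1}{2}\sqrt{n}$, the half-diagonal of the undivided unit cube, as expected.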
From equation (5), it is easy to obtain the smallest number of divisions $r$ that a rectangle must undergo to reach center-to-vertex distance $d$. In DIRECT, a rectangle with center-to-vertex distance $d$ requires at least $r = n\log_3(\sqrt{n}/(2d)) + p - n$ divisions. Since new rectangles are formed by dividing existing ones into thirds along the longest side, every division is accompanied by 2 function evaluations. Thus, in DIRECT, any rectangle with center-to-vertex distance $d$ is accompanied by at least $N_0 = \lceil 2^{\,n\log_3(\sqrt{n}/(2d)) + p - n} \rceil$ function evaluations. When $n = 3$ and $\varepsilon = 10^{-4}$, $N_0 = \lceil 581.09 \rceil$. When $n = 10$ and $\varepsilon = 10^{-4}$, $N_0 = \lceil 7.3056 \times 10^{10} \rceil$. We can see that the computational complexity of DIRECT soars exponentially as the dimension $n$ increases.
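The exponential blow-up of this bound can be checked numerically. The sketch below evaluates it for a few dimensions, taking $p = 0$ (i.e., $r$ a multiple of $n$) for simplicity; the target distance $d = 0.01$ is illustrative, and the function name `min_evaluations` is ours:

```python
import math

def min_evaluations(n, d):
    """Lower bound N_0 = 2^(n*log3(sqrt(n)/(2d)) + p - n) on the function
    evaluations DIRECT spends before some rectangle reaches center-to-vertex
    distance d (with p = 0 assumed for simplicity)."""
    p = 0  # assumption: the division count r is a multiple of n
    r = n * math.log(math.sqrt(n) / (2.0 * d), 3) + p - n
    return 2.0 ** r

# growth with dimension at a fixed (illustrative) target distance d = 0.01
counts = {n: min_evaluations(n, 0.01) for n in (3, 5, 10)}
```

For $d = 0.01$ this gives roughly $5.8 \times 10^{2}$ evaluations at $n = 3$ but over $10^{10}$ at $n = 10$, which matches the orders of magnitude quoted above.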
In high dimensions, DIRECT suffers from the curse of dimensionality, which is a common defect of algorithms based on strategies of domain partition and function evaluation. Besides the effect of dimension, DIRECT also converges slowly in smooth areas, where the objective function is very flat. In such cases, even if DIRECT gets close to the basin of the global optimum, the smooth neighborhood around the optimum may still hamper it from achieving higher accuracy. In the spirit of DIRECT, multilevel coordinate search (MCS) was proposed with the strategy of domain partition and function evaluation to tackle problem (1) [22]. MCS partitions the domain into small boxes with more irregular splitting. In contrast to DIRECT, MCS simplifies the division procedure. To speed up convergence, MCS introduces a local enhancement that starts local searches when the corresponding sub-domains reach the maximal split level. Although MCS improves computing speed compared with DIRECT, it lacks further exploration once all the local searches are finished. Thus the accuracy of MCS is undesirable if the local searches fail to bring the solution into a sufficiently small neighborhood of the global optimum with the given accuracy. Recently, simultaneous optimistic optimization (SOO) was introduced by Munos to solve problem (1)