Solving High-Dimensional Multi-Objective
Optimization Problems with Low Effective Dimensions
Hong Qian, Yang Yu
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210023, China
Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing, 210023, China
{qianh,yuy}@lamda.nju.edu.cn
Abstract
Multi-objective (MO) optimization problems require simultaneously optimizing two or more objective functions. An MO algorithm needs to find solutions that reach different optimal balances of the objective functions, i.e., the optimal Pareto front; therefore, high dimensionality of the solution space can hurt MO optimization much more severely than it hurts single-objective optimization, an issue that has received little attention in previous studies. This paper proposes ReMO, a general, theoretically grounded yet simple approach that uses random embedding to scale current derivative-free MO algorithms to high-dimensional non-convex MO functions with low effective dimensions. We prove the conditions under which an MO function has a low effective dimension, and for such functions, we prove that ReMO possesses the desirable properties of optimal Pareto front preservation, time complexity reduction, and rotation perturbation invariance. Experimental results indicate that ReMO is effective for optimizing high-dimensional MO functions with low effective dimensions, and is even effective for high-dimensional MO functions in which all dimensions are effective but most have only a small and bounded effect on the function value.
Introduction
Solving sophisticated optimization problems plays an essential role in the development of artificial intelligence. In some real-world applications, we need to simultaneously optimize two or more objective functions instead of only one, which has led to the development of multi-objective (MO) optimization.
Let f(x) = (f_1(x), ..., f_m(x)) denote a multi-objective
function. In this paper, we assume that the optimization problems are deterministic, i.e., each call of f returns the same function value for the same solution x. Furthermore, we focus on derivative-free optimization; that is, f is regarded as a black-box function, and we can only perform MO optimization based on sampled solutions and their function values. Other information, such as gradients, is either not used or not available. Since derivative-free optimization methods do not depend on gradients, they
are suitable for a wide range of sophisticated real-world
optimization problems, such as non-convex functions, non-
differentiable functions, and discontinuous functions.
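For concreteness, the black-box setting can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the bi-objective function f, the sampling range, and all names here are hypothetical, and the "optimizer" is plain random sampling followed by a non-dominated filter.

```python
import numpy as np

# A hypothetical bi-objective black-box function: each call returns a
# tuple of objective values; no gradient information is exposed.
def f(x):
    return (float(np.sum(x ** 2)), float(np.sum((x - 1.0) ** 2)))

# u Pareto-dominates v if u is no worse in every objective and strictly
# better in at least one (minimization assumed).
def dominates(u, v):
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

# Derivative-free MO optimization in its simplest form: sample solutions,
# evaluate them, and keep the non-dominated subset as a Pareto front estimate.
rng = np.random.default_rng(0)
values = [f(rng.uniform(-2.0, 2.0, size=5)) for _ in range(200)]
front = [v for v in values if not any(dominates(u, v) for u in values if u != v)]
print(len(front), "non-dominated points kept")
```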
MO optimization has achieved remarkable success in applications such as reinforcement learning (Moffaert and Nowé 2014), constrained optimization (Qian, Yu, and Zhou 2015a), and software engineering (Harman, Mansouri, and Zhang 2012; Minku and Yao 2013). Two reasons account for these successful applications. First, an MO algorithm can find solutions that reach different optimal balances of the objectives (e.g., performance and cost), which can satisfy the demands of different users. Second, it has been shown that MO optimization can outperform single-objective optimization in some machine learning tasks (Li et al. 2014; Qian, Yu, and Zhou 2015b; 2015c).
Driven by the demand from real-world applications, and despite the hardness of MO optimization (e.g., the objectives often conflict with each other), derivative-free MO optimization methods have made substantial achievements. Well-known MO optimization methods, such as the improved strength Pareto evolutionary algorithm (SPEA2) (Zitzler, Laumanns, and Thiele 2001), region-based selection in evolutionary multi-objective optimization (PESA-II) (Corne et al. 2001), the non-dominated sorting genetic algorithm II (NSGA-II) (Deb et al. 2002), the multi-objective evolutionary algorithm based on decomposition (MOEA/D) (Zhang and Li 2007), and the non-dominated neighbor immune algorithm (NNIA) (Gong et al. 2008), have been proposed and successfully applied in various domains.
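As a usage sketch, several of these methods are available in open-source libraries; assuming the pymoo library is installed, NSGA-II can be run on a standard benchmark roughly as follows (the benchmark name and parameter values are illustrative choices, not settings from this paper):

```python
# Assumes the open-source pymoo library (pip install pymoo).
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.problems import get_problem
from pymoo.optimize import minimize

problem = get_problem("zdt1")         # a standard bi-objective benchmark
algorithm = NSGA2(pop_size=100)       # non-dominated sorting GA II
res = minimize(problem, algorithm, ("n_gen", 200), seed=1, verbose=False)
print(res.F[:5])                      # objective vectors of non-dominated solutions
```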
Problem. Previous studies have shown that derivative-free MO methods are effective and efficient for MO functions in low-dimensional solution spaces. However, MO optimization methods may lose their power on high-dimensional MO functions, because the convergence rate is slow or the computational cost of each iteration is high in a high-dimensional solution space. Furthermore, high dimensionality of the solution space can hurt MO optimization much more severely than single-objective optimization, since an MO algorithm needs to find a set of solutions that reach different optimal balances of the objectives. Therefore, scalability becomes one of the main bottlenecks of MO optimization, which restricts its further application.
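To make the random-embedding idea from the abstract concrete, the following is a minimal sketch (the function, dimensions, and helper names are hypothetical and do not reproduce ReMO itself): sample a random Gaussian matrix A in R^{D x d} with d much smaller than D, and let an existing MO optimizer search the d-dimensional space by evaluating f(Ay).

```python
import numpy as np

# Wrap a D-dimensional MO function f so that any low-dimensional MO
# optimizer can search a d-dimensional space (d << D) via x = A @ y,
# where A is a random Gaussian embedding matrix.
def random_embedding(f, D, d, seed=None):
    A = np.random.default_rng(seed).normal(size=(D, d))
    return lambda y: f(A @ y)

# Hypothetical bi-objective function of D = 1000 variables whose values
# depend only on the first two coordinates, i.e., a low effective dimension.
def f(x):
    return (x[0] ** 2 + x[1] ** 2, (x[0] - 1.0) ** 2 + x[1] ** 2)

g = random_embedding(f, D=1000, d=4, seed=0)
print(g(np.zeros(4)))  # one black-box evaluation through the embedding
```

Any derivative-free MO method can then be applied to g unchanged, which is what makes this a general wrapper around existing algorithms rather than a new optimizer.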