(SDP) relaxations of the given polynomial optimization problem in such a way
that the corresponding optimal values are monotone and converge to the optimal
value of the original problem. Thus it can in principle solve any instance of (PO) to
any given accuracy. For univariate polynomial optimization, Nesterov [88] showed
that the SOS method, in combination with SDP, has polynomial-time
complexity. The same holds for unconstrained multivariate quadratic polynomials
and bivariate quartic polynomials, for which nonnegativity is equivalent to being a sum of squares.
In general, however, the SDP problems required by the SOS method
may grow very large and become impractical as the problem dimension increases.
Nevertheless, thanks to recently developed efficient SDP solvers (e.g., SeDuMi of
Sturm [109] and SDPT3 of Toh et al. [112]), the SOS method remains attractive.
Henrion and Lasserre [49] developed a specialized tool known as GloptiPoly (the
latest version, GloptiPoly 3, can be found in Henrion et al. [50]) for finding global
optimal solutions of polynomial optimization problems via the SOS
method, built on Matlab and SeDuMi. For an overview of recent theoretical
developments, we refer to the excellent survey by Laurent [69].
Along a different line, the intractability of general polynomial optimization also
motivates the search for suboptimal, or more formally, approximate solutions. In the
case where the objective polynomial is quadratic, a well-known example is the SDP
relaxation and randomization approach for the max-cut problem due to Goemans
and Williamson [39], in which essentially a 0.878-approximation ratio for the model
$\max_{x \in \{1,-1\}^n} x^{\mathrm{T}} F x$ is shown, with $F$ being the Laplacian of a given graph. Note that
the approach in [39] has been generalized subsequently by many authors, including
Nesterov [87], Ye [115, 116], Nemirovski et al. [86], Zhang [117], Charikar and
Wirth [23], Alon and Naor [5], Zhang and Huang [118], Luo et al. [74], and He et al.
[48]. In particular, when the matrix $F$ is only known to be positive semidefinite,
Nesterov [87] derived a 0.636-approximation bound for $\max_{x \in \{1,-1\}^n} x^{\mathrm{T}} F x$. For
a general diagonal-free matrix $F$, Charikar and Wirth [23] derived an $\Omega(1/\ln n)$-approximation
bound, while corresponding inapproximability results are discussed by Arora
et al. [7]. For the matrix $\infty \to 1$-norm problem $\max_{x \in \{1,-1\}^{n_1},\, y \in \{1,-1\}^{n_2}} x^{\mathrm{T}} F y$, Alon
and Naor [5] derived a 0.56-approximation bound. Remarkably, all these approximation
bounds remain the best available to date. In continuous polynomial
optimization, Nemirovski et al. [86] proposed an $\Omega(1/\ln m)$-approximation bound
for maximizing a quadratic form over the intersection of $m$ co-centered ellipsoids.
Their models were further studied and generalized by Luo et al. [74] and He et al. [48].
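To fix ideas, the relaxation-and-rounding scheme underlying [39] can be sketched as follows (a brief outline in our own notation, not a verbatim account of [39]). Writing $x^{\mathrm{T}} F x = \operatorname{tr}(F x x^{\mathrm{T}})$ and relaxing the rank-one matrix $x x^{\mathrm{T}}$ to an arbitrary positive semidefinite matrix with unit diagonal gives the SDP
\[
\max_{X}\ \operatorname{tr}(FX) \quad \text{s.t.} \quad X_{ii} = 1,\ i = 1,\dots,n, \quad X \succeq 0.
\]
Given an optimal $X^*$, one draws a Gaussian vector $\xi \sim \mathcal{N}(0, X^*)$ and rounds to the feasible solution $x = \operatorname{sign}(\xi) \in \{1,-1\}^n$; when $F$ is the Laplacian of a graph, the expected value of this randomized solution is at least $0.878$ times the optimum.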
In all the successful approximation results mentioned above, the objective
polynomials are quadratic. However, only a few approximation results are available
in the literature when the degree of the objective polynomial is greater than two.
Perhaps the very first is due to De Klerk et al. [63], who derived a PTAS for
optimizing a fixed-degree homogeneous polynomial over a simplex, which turns out
to yield a PTAS for optimizing a fixed-degree even form (a homogeneous polynomial
with only even exponents) over the Euclidean sphere. Later, Barvinok [14] showed
that optimizing a certain class of polynomials over the Euclidean sphere also