where $f(x)$ and $g(x)$ are convex, smooth and nonnegative. Then we define $F_1(x, \bar{x}) = f(x)g(\bar{x})$ and $F_2(\bar{x}, x) = f(\bar{x})g(x)$, which implies that $F_1$ and $F_2$ satisfy Assumption 2.2 and we have
\begin{equation}
\begin{aligned}
Q^1_\mu(x, \bar{x}) &= f(x)g(\bar{x}) + f(\bar{x})\,\langle \nabla g(\bar{x}),\, x - \bar{x} \rangle + \frac{1}{2\mu}\|x - \bar{x}\|_2^2, \\
Q^2_\mu(\bar{x}, x) &= f(\bar{x})g(x) + g(\bar{x})\,\langle \nabla f(\bar{x}),\, x - \bar{x} \rangle + \frac{1}{2\mu}\|x - \bar{x}\|_2^2.
\end{aligned}
\tag{2.9}
\end{equation}
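As a numerical illustration of the surrogates in (2.9), the following minimal Python sketch uses hypothetical factors $f(x) = \|x\|^2 + 1$ and $g(x) = \|x - \mathbf{1}\|^2 + 2$ (convex, smooth and nonnegative, chosen only for this example) and checks that both surrogates coincide with $F(\bar{x}) = f(\bar{x})g(\bar{x})$ at $x = \bar{x}$.

```python
import numpy as np

# Hypothetical smooth, convex, nonnegative factors (illustrative only):
# f(x) = ||x||^2 + 1,  g(x) = ||x - 1||^2 + 2.
f = lambda x: float(x @ x) + 1.0
g = lambda x: float((x - 1.0) @ (x - 1.0)) + 2.0
grad_f = lambda x: 2.0 * x
grad_g = lambda x: 2.0 * (x - 1.0)

def Q1(x, xbar, mu):
    """Surrogate Q^1_mu(x, xbar) from (2.9): the g factor is linearized at xbar."""
    d = x - xbar
    return f(x) * g(xbar) + f(xbar) * (grad_g(xbar) @ d) + (d @ d) / (2.0 * mu)

def Q2(xbar, x, mu):
    """Surrogate Q^2_mu(xbar, x) from (2.9): the f factor is linearized at xbar."""
    d = x - xbar
    return f(xbar) * g(x) + g(xbar) * (grad_f(xbar) @ d) + (d @ d) / (2.0 * mu)

xbar = np.array([0.5, -0.3])
# At x = xbar the linear and proximal terms vanish, so both
# surrogates reduce to F(xbar) = f(xbar) g(xbar).
F_bar = f(xbar) * g(xbar)
assert abs(Q1(xbar, xbar, mu=0.1) - F_bar) < 1e-12
assert abs(Q2(xbar, xbar, mu=0.1) - F_bar) < 1e-12
```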
Finally, we present another general setting which includes the objective function of (1.2) and satisfies Assumption 2.2.
\textbf{Example 2.3.} Assume that $h = \sum_i \big( f_i^T g_i - c_i \big)^2$ and thus
\begin{equation}
F(x) = \sum_i \big( f_i(x)^T g_i(x) - c_i \big)^2, \tag{2.10}
\end{equation}
where each $f_i(x) = [f_{i,1}(x), f_{i,2}(x), \ldots, f_{i,m_i}(x)]$ and $g_i(x) = [g_{i,1}(x), g_{i,2}(x), \ldots, g_{i,m_i}(x)]$ is an affine function of $x$, mapping $\mathbb{R}^n \to \mathbb{R}^{m_i}$.
It is easy to see that if the functions $f_i$ and $g_i$ are affine and homogeneous (that is, they do not contain a constant term), then this form is equivalent to (1.2) with an appropriate choice of the $M_i$ and $c_i$ parameters; otherwise formulation (1.4) applies with $q(x) = 0$. Clearly, when $q(x)$ is a smooth convex function, then $h = \sum_i \big( f_i^T g_i - c_i \big)^2 + q(x)$ should be considered.
In the risk parity case, in particular, we have
\begin{equation*}
\begin{aligned}
\min_{x,\theta} \;\; & F(x, \theta) = \sum_{i=1}^{n} \big( x^T M_i x - \theta \big)^2 \\
\text{s.t.} \;\; & x \in X,
\end{aligned}
\end{equation*}
which can be written as (2.10) with $f_i(x, \theta) = [x, 1]$ and $g_i(x, \theta) = [M_i x, -\theta]$, $i = 1, \ldots, n$, which are $(n+1)$-dimensional affine vector functions of $x$ and $\theta$, and $c_i = 0$, $\forall i$. For any given $\bar{x}$: $F_1(x, \bar{x}, \bar{\theta}) = \sum_{i=1}^{n} \big( x^T M_i \bar{x} - \bar{\theta} \big)^2$ and $F_2(\bar{x}, x) = \sum_{i=1}^{n} \big( \bar{x}^T M_i x - \theta \big)^2$. Both $F_1$ and $F_2$ are convex quadratic functions and Assumption 2.2 is satisfied.
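The factorization used in the risk parity example can be checked numerically. The sketch below uses hypothetical data ($n$, the matrices $M_i$, $x$ and $\theta$ are all illustrative) and verifies that the bilinear form $f_i^T g_i$ with $f_i = [x, 1]$ and $g_i = [M_i x, -\theta]$ reproduces the residuals $x^T M_i x - \theta$.

```python
import numpy as np

# Illustrative data for the risk parity example (all values hypothetical).
rng = np.random.default_rng(0)
n = 4
Ms = []
for _ in range(n):
    A = rng.standard_normal((n, n))
    Ms.append(A @ A.T)          # symmetric positive semidefinite M_i
x = rng.standard_normal(n)
theta = 0.7

# Residuals computed directly: x^T M_i x - theta.
direct = np.array([x @ M @ x - theta for M in Ms])

# The same residuals via the bilinear form f_i^T g_i with
# f_i(x, theta) = [x, 1] and g_i(x, theta) = [M_i x, -theta].
f_vec = np.append(x, 1.0)
bilinear = np.array([f_vec @ np.append(M @ x, -theta) for M in Ms])

assert np.allclose(direct, bilinear)
F_val = float((bilinear ** 2).sum())  # objective sum_i (x^T M_i x - theta)^2
```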
3 Variable splitting and augmented Lagrangian based methods
In this section, we discuss several alternating direction methods, all of which are based on the augmented Lagrangian framework with variable splitting. The augmented Lagrangian method (with variable splitting) and its variants have been increasingly popular in the recent literature [1, 7, 12, 28, 29].
In particular, observe that (2.1) can be equivalently written as
\begin{equation}
\begin{aligned}
\min_{x \in X,\, y} \;\; & F(x, y) = h(f(x), g(y)) \\
\text{s.t.} \;\; & x = y,
\end{aligned}
\tag{3.1}
\end{equation}
where $x, y \in \mathbb{R}^n$. In other words, the splitting lifts the dimension of the decision variable from $n$ in (2.1) to $2n$ in (3.1).
Consider a problem in the form of (3.1). Given a penalty parameter $1/\mu$ ($\mu > 0$), we have the following augmented Lagrangian function:
\begin{equation}
\mathcal{L}_A(x, y; \lambda) = F(x, y) - \lambda^T (x - y) + \frac{1}{2\mu} \|x - y\|^2, \tag{3.2}
\end{equation}
and, hence, (3.1) can be solved by the augmented Lagrangian method described in Algorithm 1.
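To make the mechanics of (3.2) concrete, here is a minimal alternating-minimization (ADMM-style) sketch on a toy instance. The separable objective $F(x, y) = x^2 + (y - 1)^2$ is a hypothetical stand-in for $h(f(x), g(y))$, chosen so that the $x$- and $y$-subproblems of $\mathcal{L}_A$ have closed-form solutions; it is not the method of Algorithm 1 itself.

```python
# Toy instance of (3.1)-(3.2): minimize x^2 + (y - 1)^2 subject to x = y,
# with hypothetical quadratic data chosen for closed-form updates.
mu = 0.5          # the penalty parameter in (3.2) is 1/mu
lam, y = 0.0, 0.0
for _ in range(50):
    # x-update: argmin_x  x^2 - lam*(x - y) + (x - y)^2 / (2*mu)
    x = (lam + y / mu) / (2.0 + 1.0 / mu)
    # y-update: argmin_y  (y - 1)^2 - lam*(x - y) + (x - y)^2 / (2*mu)
    y = (2.0 - lam + x / mu) / (2.0 + 1.0 / mu)
    # multiplier update for the -lam^T (x - y) convention in (3.2)
    lam -= (x - y) / mu
# The split solution recovers x = y = 0.5, the minimizer of x^2 + (x - 1)^2.
```

At convergence the constraint $x = y$ holds and the multiplier $\lambda$ settles at the value that makes both subproblems stationary, which is the behavior the augmented Lagrangian framework is designed to produce.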