1342 IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 26, NO. 6, JUNE 2015
Randomized Gradient-Free Method for Multiagent Optimization
Over Time-Varying Networks
Deming Yuan and Daniel W. C. Ho, Senior Member, IEEE
Abstract— In this brief, we consider multiagent optimization over a network, where multiple agents cooperate to minimize a sum of nonsmooth but Lipschitz continuous functions, subject to a convex state constraint set. The underlying network topology is modeled as time varying. We propose a randomized derivative-free method in which, at each update, random gradient-free oracles are utilized in place of subgradients (SGs). In contrast to existing work, we do not require that agents be able to compute the SGs of their objective functions. We establish convergence of the method to an approximate solution of the multiagent optimization problem, within an error level that depends on the smoothing parameter and the Lipschitz constant of each agent's objective function. Finally, a numerical example is provided to demonstrate the effectiveness of the method.
Index Terms—Average consensus, distributed multiagent
system, distributed optimization, networked control systems.
I. INTRODUCTION
In recent years, the development of distributed methods for
dealing with convex optimization problems has attracted considerable
research interest [5]–[7], [12]–[17], [21]–[23]. Such distributed meth-
ods are, in general, a combination of the standard subgradient (SG)
methods in convex optimization theory and the average consensus
algorithms in multiagent systems, which iteratively compute the average of the states distributed among the agents in the network [1]–[4], [10], [18], [20]. Average consensus algorithms arise in many applications, including the distributed coordination of multiple autonomous vehicles [25], [26], the synchronization of complex networks [11], [24], and so on.
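As an illustrative sketch (our own, not an algorithm from the cited works), a basic average consensus iteration with a fixed, doubly stochastic weight matrix over a four-agent ring looks as follows; the cited literature allows time-varying topologies, but the fixed case already exhibits the averaging mechanism:

```python
import numpy as np

# Doubly stochastic mixing matrix for a fixed four-agent ring
# (an illustrative choice; each agent weights itself and its two neighbors).
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

x = np.array([1.0, 3.0, 5.0, 7.0])  # initial agent states; average is 4.0

for _ in range(100):
    x = W @ x  # each agent replaces its state by a weighted neighbor average

print(x)  # all states converge to the initial average, 4.0
```

Because W is doubly stochastic, the average of the states is preserved at every step, so the common limit is exactly the initial average.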
The problem of minimizing a sum of convex objective functions that are distributed among multiple agents over a network has attracted much research interest recently. Nedic and Ozdaglar [17] develop a general framework for the multiagent optimization problem over a network; they propose distributed SG methods and analyze their convergence properties. Nedic et al. [13] further consider the case in which the agents' states are constrained to convex sets, and propose the projected consensus algorithm. Ram et al. [12] consider the case of noisy SGs corrupted by stochastic errors. Zhu and Martinez [6] further take inequality and equality constraints into consideration, and propose the distributed Lagrangian primal-dual SG method, the basic idea of which is to characterize the primal-dual optimal solutions as the saddle points of the associated Lagrangian.
Manuscript received April 16, 2013; revised February 25, 2014; accepted June 26, 2014. Date of publication August 1, 2014; date of current version May 15, 2015. This work was supported in part by the National Natural Science Foundation of China under Grant 61304042, in part by the Natural Science Foundation of Jiangsu Province under Grant BK20130856, in part by the University Grants Commission under Grant CityU 114113, and in part by the Program for Changjiang Scholars.

D. Yuan is with the College of Automation, Nanjing University of Posts and Telecommunications, Nanjing 210046, China (e-mail: dmyuan1012@gmail.com).

D. W. C. Ho is with the Department of Mathematics, City University of Hong Kong, Hong Kong, and also with the School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China (e-mail: madaniel@cityu.edu.hk).

Digital Object Identifier 10.1109/TNNLS.2014.2336806

Johansson et al. [23] propose a variant of the distributed SG method, in which the estimate is first adjusted along the negative SG direction of the local cost function, and the updated information is then shared with neighboring agents by executing several consensus steps (the number of which is lower bounded by some positive integer). Inspired by [23], Yuan et al. [5] further incorporate global inequality constraints and investigate the convergence properties of the resulting method.
These methods are synchronous because they require that all agents in the network update at the same time. From a different viewpoint, the authors of [22] develop an asynchronous distributed algorithm for the multiagent optimization problem and establish convergence results for it. All of the aforementioned methods, however, rely on the assumption that each agent is able to compute the SGs of its own objective function.
However, in some cases, it may be computationally infeasible or expensive to calculate the exact SGs (see [27], [29], and the references therein). It is therefore desirable to develop gradient-free methods for solving convex optimization problems in a distributed setting.
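To make the idea concrete, the following sketch shows a two-point random gradient-free oracle of the kind developed in [27], based on Gaussian smoothing; the function and parameter names here are our own illustrative choices. The oracle uses only function evaluations, never derivatives, and yields an unbiased estimate of the gradient of the smoothed function f_mu(x) = E[f(x + mu*u)]:

```python
import numpy as np

def rgf_oracle(f, x, mu, rng):
    """Two-point random gradient-free oracle (Gaussian smoothing).

    Returns an unbiased estimate of the gradient of the smoothed
    function f_mu(x) = E[f(x + mu*u)], using only values of f.
    """
    u = rng.standard_normal(x.shape)          # random Gaussian direction
    return (f(x + mu * u) - f(x)) / mu * u    # finite difference along u

# Usage: estimate the gradient of f(x) = ||x||^2 at x = (1, -2),
# whose true gradient is 2x = (2, -4).
rng = np.random.default_rng(0)
f = lambda x: np.dot(x, x)
x = np.array([1.0, -2.0])
# A single call is noisy; averaging many calls reveals the gradient.
g = np.mean([rgf_oracle(f, x, 1e-4, rng) for _ in range(20000)], axis=0)
print(g)  # approaches the true gradient [2., -4.]
```

A single oracle call is cheap (two function evaluations) but high variance; the convergence analysis of such methods accounts for this variance rather than averaging it away as done above for illustration.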
On a much broader scale, our work in this brief is related to the dis-
tributed stochastic gradient methods [8], [9] (for recent convergence
results on the asymptotic behavior of distributed stochastic gradient
methods with decreasing step sizes, see [19]).
The main goal of this brief is to study the convergence properties of
the randomized derivative-free method for the multiagent optimiza-
tion problem, in which the objective function is a sum of the agents’
local nonsmooth objective functions, and the underlying network
topology of the problem is modeled as time varying. In contrast to existing results in the literature, we use the random gradient-free oracles developed in [27], rather than the SGs of each objective function, to build a distributed method for the multiagent optimization problem; to the best of our knowledge, such gradient-free methods have been absent from this field. We characterize the convergence performance of the proposed method and show that each agent can achieve an approximate solution within an error level that depends on the smoothing parameter and the Lipschitz constant of each objective function; in particular, by choosing the smoothing parameters appropriately, we establish convergence of the method to an approximate solution of the multiagent optimization problem.
In contrast to the SG-based methods in [7], [12], and [17], our proposed method applies to the case when the SGs of the objective functions cannot be evaluated, or when it is computationally demanding to find them. Note also that, compared with [27], the proposed randomized gradient-free (RGF) method extends the method in [27] to the multiagent scenario.
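As a schematic illustration only (our own rendering, not the exact algorithm analyzed in this brief), a distributed RGF update might combine a consensus mixing step with a projected gradient-free oracle step at each agent:

```python
import numpy as np

def distributed_rgf_step(X, W, fs, alpha, mu, rng, proj):
    """One schematic distributed RGF update.

    X: (n_agents, dim) stacked agent states; W: mixing matrix;
    fs: local objective functions; proj: projection onto the convex
    constraint set. Each agent first averages with its neighbors,
    then steps along a random gradient-free oracle of its own cost.
    """
    Y = W @ X                                   # consensus (mixing) step
    X_new = np.empty_like(X)
    for i, f in enumerate(fs):
        u = rng.standard_normal(X.shape[1])
        g = (f(Y[i] + mu * u) - f(Y[i])) / mu * u   # RGF oracle
        X_new[i] = proj(Y[i] - alpha * g)           # projected step
    return X_new

# Usage: three agents minimizing sum_i |x - c_i| over the box [-1, 1];
# the optimum is the median of the c_i, here 0.0.
rng = np.random.default_rng(1)
cs = [-0.5, 0.0, 0.5]
fs = [lambda x, c=c: abs(x[0] - c) for c in cs]
W = np.full((3, 3), 1.0 / 3.0)                 # fully connected mixing
proj = lambda x: np.clip(x, -1.0, 1.0)
X = np.array([[1.0], [-1.0], [0.8]])
for k in range(2000):
    X = distributed_rgf_step(X, W, fs, 1.0 / (k + 10), 1e-3, rng, proj)
print(X.ravel())  # all agents close to the optimum 0.0
```

The decreasing step size 1/(k + 10) and the smoothing parameter mu are illustrative choices; the brief's analysis ties the achievable error level to the smoothing parameters and Lipschitz constants rather than to any particular schedule.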
Our convergence results differ from those of previous work, and the main contributions of this brief are twofold.
1) First, unlike the methods considered in existing papers, which rely on computing the SGs of each agent's objective function, we propose a derivative-free method, based on random gradient-free oracles, for the multiagent optimization problem.
2) Second, we obtain error bounds on the convergence of the method and establish the relation between the error level and the smoothing parameters.
2162-237X © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.