
2.2. Throughput-guaranteed power-aware routing model
The objective of throughput-guaranteed power-aware routing is to compute transmission paths for a given traffic matrix such that the total power consumption of the switches involved in the routing paths is as low as possible while predefined network performance thresholds are still met. Throughout this paper we use network throughput as the key performance metric, since it is the most important metric for data-intensive computations. For convenience of presentation, we call our throughput-guaranteed Power-aware Routing Problem PRP.
PRP: Assume there is a quadruplet of input parameters (G, T, K, RP). G denotes the topology of a data center network, consisting of servers and switches along with their connections and link capacities. T is the network traffic matrix, which describes the communication relationship between each pair of servers in the topology G. K and RP denote the predefined thresholds of total network throughput and network reliability, respectively. The objective of PRP is to find a routing R* for T under the constraints of Eqs. (5)–(7). R denotes one routing selection from the candidate set R⁺, which includes all possible routing paths for T. L(R) and X(R) denote the number of switches and ports involved in R, respectively; M(R) denotes the total network throughput of all flows in T under R; and Z(R) denotes the minimum number of available paths for each flow in T under R. We assume that the fixed power consumption F of each switch is the same and that all switch ports in the network have the same capacity and power consumption A, since current advanced data center topologies, such as Fat-Tree and BCube, usually use homogeneous commodity switches to interconnect servers. We summarize the main notations used in our power-aware routing model and algorithm in Table 1.
$$R^{*} = \arg\min_{R}\,\bigl(F \cdot L(R) + A \cdot X(R)\bigr) \tag{5}$$
$$M(R) \geq K, \quad R \in R^{+} \tag{6}$$
$$Z(R) \geq RP, \quad R \in R^{+} \tag{7}$$
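To make the optimization concrete, the following minimal sketch evaluates the objective and constraints over an explicitly enumerated candidate set. It is for illustration only, since exhaustive enumeration is infeasible at scale; the routing representation (a set of (switch, port) pairs) and the helpers total_throughput and min_available_paths, standing in for M(R) and Z(R), are assumptions of the sketch, not part of the model.

```python
# Brute-force sketch of Eqs. (5)-(7). A candidate routing R is assumed to be
# given as a frozenset of (switch, port) pairs; total_throughput(R) and
# min_available_paths(R) are hypothetical callables standing in for M(R)
# and Z(R).

def power_cost(R, F, A):
    """Objective of Eq. (5): fixed cost F per active switch
    plus cost A per active port."""
    switches = {sw for (sw, _port) in R}       # switches involved: L(R)
    return F * len(switches) + A * len(R)      # ports involved: X(R)

def select_routing(candidates, F, A, K, RP,
                   total_throughput, min_available_paths):
    """R* = argmin of power_cost over the candidate set R+,
    subject to M(R) >= K (Eq. 6) and Z(R) >= RP (Eq. 7)."""
    feasible = [R for R in candidates
                if total_throughput(R) >= K
                and min_available_paths(R) >= RP]
    if not feasible:
        return None                            # no routing meets both thresholds
    return min(feasible, key=lambda R: power_cost(R, F, A))
```

Because PRP is NP-hard, as discussed below, such exhaustive search is only feasible for very small candidate sets.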
The purpose of our model design is to find the optimal power-saving routing, in which the total power consumption of the switches involved is minimal while the network throughput and reliability thresholds are satisfied. In this paper we focus on bandwidth-hungry batch-processing tasks in data centers, such as MapReduce [1] and GFS [2]. For this kind of application, total network throughput reflects the computation progress better than constraints on single flows. Therefore, we use the threshold K in Eq. (6) to bound the total throughput of all flows from below.
It is also necessary to ensure that no network flow identified in the traffic matrix is interrupted when the power-aware routing is used; the reliability constraint of Eq. (7), which requires a minimum number of available paths for each flow, serves this purpose. As PRP is an NP-hard problem, it is difficult to find its optimal solution in polynomial time, especially when the scale of the data center network is large. We have proved the NP-hardness of the problem by reducing the 0-1 Knapsack problem [10] to it; the details of the proof can be found in the appendix.
It is worth noting that our model differs from some previous similar work. In [7] and [41], each network flow has a fixed traffic rate, which is taken as the input of the power minimization problem. In our PRP model, the transfer rate of each flow in the traffic matrix is determined by the bandwidth competition among different flows, as the network is currently the bottleneck for distributed computation in data centers. Moreover, TCP traffic is dominant in current data center networks. TCP uses the sliding-window scheme and the AIMD (Additive Increase Multiplicative Decrease) algorithm to control the transfer rate of flows. The work in [11] reveals that the rates of competing TCP flows mainly depend on their RTTs (Round-Trip Times). In high-bandwidth, low-latency data center networks, TCP flows have similar RTT values and share the available bandwidth of a bottleneck link fairly. Therefore, TCP rate allocation in DCNs follows the Max-Min fairness model [12] to assign the transfer rate of each flow.
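As an illustration, the following sketch computes Max-Min fair rates by progressive filling, a standard realization of the fairness model in [12]; the representation of flow paths as link sets and the input names are assumptions of this example. Summing the returned rates gives the total throughput M(R) used in Eq. (6).

```python
def max_min_rates(flows, capacity):
    """Max-Min fair allocation by progressive filling.
    flows: dict mapping flow id -> set of links on its path under R.
    capacity: dict mapping link -> link capacity C.
    Returns a dict mapping flow id -> allocated rate."""
    rates = {f: 0.0 for f in flows}
    residual = dict(capacity)                  # remaining capacity per link
    active = set(flows)                        # flows not yet bottlenecked
    while active:
        # Number of unfrozen flows crossing each link.
        counts = {l: sum(1 for f in active if l in flows[f]) for l in residual}
        # Largest equal rate increment before some link saturates.
        limits = [residual[l] / n for l, n in counts.items() if n > 0]
        if not limits:
            break
        inc = min(limits)
        for f in active:
            rates[f] += inc
        for l, n in counts.items():
            residual[l] -= inc * n
        # Freeze flows whose path now contains a saturated link.
        saturated = {l for l, r in residual.items() if r <= 1e-9}
        active = {f for f in active if not (flows[f] & saturated)}
    return rates
```

For instance, with flows = {'a': {'l1'}, 'b': {'l1', 'l2'}} and capacity = {'l1': 10, 'l2': 4}, flow b is bottlenecked at l2 with rate 4, flow a receives the remaining rate 6, and the total throughput M(R) is 10.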
The throughput-guaranteed power-aware routing model is applied in data center networks, which have their own characteristics compared with the Internet. First, current advanced DCN architectures provision abundant network resources for routing, so there are many routing paths between each pair of servers; existing DCN designs also usually employ symmetric topologies [52] and homogeneous network devices. Second, data center networks have more flexible traffic control and management schemes, such as OpenFlow [7], which make the design and implementation of power-aware routing more tractable. Third, there are some typical traffic patterns in data centers, such as the One-to-One, One-to-Many, and All-to-All patterns.
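As a toy illustration, the sketch below builds these traffic patterns as sets of (source, destination) pairs over servers indexed 0 to n-1; the concrete pairings are arbitrary assumptions, and the traffic matrix T in the model would additionally carry per-flow demands.

```python
def one_to_one(n):
    # Each server i sends to a single partner; the pairing
    # (i -> (i + n//2) % n) is an arbitrary choice for illustration.
    return {(i, (i + n // 2) % n) for i in range(n)}

def one_to_many(src, dsts):
    # One source sends to several servers, e.g. data replication.
    return {(src, d) for d in dsts}

def all_to_all(n):
    # Every server pair communicates, as in a MapReduce shuffle phase.
    return {(i, j) for i in range(n) for j in range(n) if i != j}
```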
2.3. Related work
In the research area of green networking, survey papers [22,36] have summarized many novel network power-saving technologies that have appeared in recent years. In this subsection, we only discuss typical data center network architectures and representative power conservation schemes used in the Internet and in data centers.
Table 1
Summary of main notations.

Notation      Description
G             topology of a data center network
T             network traffic matrix
K             network throughput threshold
R⁺            set of all possible routing paths
R             a possible routing selection from R⁺
R*            optimal power-aware routing
F             fixed power consumption of each switch
A             power consumption of each switch port
L(R), X(R)    number of switches and ports involved in R
M(R)          total network throughput of all flows under R
PR            network performance threshold percentage
RP            reliability requirement parameter
Y_i           rate of flow i
C             capacity of a link

2.3.1. Data center network
Due to the well-known problems of the current practice of tree topology, a number of new data center network architectures have recently been proposed.