server NIC ports. DCell is highlighted by its excellent scalability, i.e., the number of servers supported increases double-exponentially with the number of server NIC ports. FiConn [14] goes one step further by limiting the number of server NIC ports to two, since most current data center servers have two built-in ports. Compared with DCell, FiConn not only eliminates the need to add server NIC ports during data center expansion, but also reduces the wiring cost. The downside is that the network capacity of FiConn is lower than that of DCell.
BCube [2] targets building a data center container, typically with 1k-4k servers. BCube is also a recursive structure. Each server uses multiple ports to connect to switches at different levels. The link resource in BCube is rich enough that a 1:1 oversubscription ratio is guaranteed. MDCube [20] designs an architecture to interconnect BCube-based containers. The inter-container connection and routing in MDCube are closely coupled with the intra-container architecture, so as to provide high bisection width and strong fault tolerance.
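To make the recursive wiring concrete, the following minimal Python sketch enumerates the server-to-switch links of a BCube(n, k), following the construction described in [2]; the function and variable names are our own, not from the original paper.

```python
from itertools import product

def bcube_links(n, k):
    """Enumerate server-to-switch links in BCube(n, k).

    Servers are addressed by (k+1)-digit base-n tuples. A level-l switch
    connects the n servers whose addresses agree on every digit except
    digit l; a switch is identified by its level plus the remaining digits.
    A sketch of the wiring rule in [2], not the authors' code.
    """
    links = []
    for addr in product(range(n), repeat=k + 1):
        for level in range(k + 1):
            # switch id: the server address with digit `level` removed
            switch_id = tuple(d for i, d in enumerate(addr) if i != level)
            links.append((addr, level, switch_id))
    return links

# BCube(4, 1): 4^2 = 16 servers, each with k+1 = 2 ports (levels 0 and 1)
links = bcube_links(4, 1)
assert len(links) == 16 * 2
```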
2.2 Power Model for Data Center Networks
Switches and server NICs are the two main contributors to network power consumption in switch-centric data center architectures. Server CPU cores also take part in packet processing and forwarding, and accordingly influence network power consumption in server-centric data center architectures. Therefore, we should consider the server CPU power consumed for packet forwarding when calculating the total network power consumption in server-centric architectures.
Eq. (1) gives the total network power consumption in both a switch-centric architecture and a server-centric architecture. $I$ and $J$ denote the sets of switches and server NIC ports, respectively, in the data center, and $L$ denotes the set of server CPU cores used for packet processing and forwarding in server-centric data centers. $U_i$ ($i \in I$) and $V_j$ ($j \in J$) denote the power consumption of switch $i$ and of server NIC port $j$, respectively. $E_l$ and $Y_l$ ($l \in L$) denote, respectively, the power of a server CPU core used for network processing and forwarding at maximum utilization and the utilization ratio of that core. Here, we simply assume that a CPU core in a server is energy-proportional to its utilization [21], [22]. In Eq. (1), we do not consider the power consumption of cables, since they are shown to occupy only a very small portion of the total power in the network [23], although with a non-negligible deployment expenditure:
$$
P =
\begin{cases}
\sum_{i \in I} U_i + \sum_{j \in J} V_j, & \text{switch-centric} \\[4pt]
\sum_{i \in I} U_i + \sum_{j \in J} V_j + \sum_{l \in L} (E_l \, Y_l), & \text{server-centric.}
\end{cases}
\tag{1}
$$
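To make the model concrete, the following Python sketch evaluates Eq. (1) for both architecture types; the wattage and utilization values in the usage example are illustrative placeholders, not measurements from this paper.

```python
def network_power(switch_powers, nic_port_powers, core_loads=None):
    """Total network power per Eq. (1).

    switch_powers:   U_i for each switch i in I
    nic_port_powers: V_j for each server NIC port j in J
    core_loads:      for server-centric designs, (E_l, Y_l) pairs giving each
                     forwarding core's peak power and utilization ratio;
                     None => switch-centric
    """
    power = sum(switch_powers) + sum(nic_port_powers)
    if core_loads is not None:  # server-centric: add CPU forwarding power
        power += sum(e * y for e, y in core_loads)
    return power

# Illustrative numbers only (watts / utilization ratios):
switch_centric = network_power([150, 150, 200], [3] * 8)
server_centric = network_power([150, 150], [3] * 8, [(20, 0.4), (20, 0.7)])
```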
The IEEE Energy Efficient Ethernet (EEE) standard suggests three energy states for Ethernet ports on switches [15], i.e., Active, Normal Idle (N_IDLE), and Low-Power Idle (LP_IDLE). The Active state consumes high power when sending packets. The N_IDLE state does not transmit packets but consumes the same or less power than the Active state. The LP_IDLE state consumes almost no power by putting the port into deep sleep. Rate adaptation, i.e., changing the operating rate of a port according to the traffic load, is not recommended by EEE, since its saving is moderate compared to putting the port to sleep. Therefore, we assume that an Ethernet port can go to sleep when it has been idle for a short period of time, i.e., sleep-on-idle (SoI), and a sleeping port can be woken up when a packet arrives, i.e., wake-on-arrival (WoA). SoI and WoA transitions take only tens of microseconds, which is tolerable for almost all kinds of applications [15]. These technologies are already implemented in Cisco switches [24].
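As a toy illustration of the SoI/WoA behavior, the sketch below models a single port's state transitions; the idle threshold and wake latency are invented for illustration and are not values taken from the EEE standard.

```python
from enum import Enum

class PortState(Enum):
    ACTIVE = "active"     # transmitting packets, full power
    N_IDLE = "n_idle"     # no transmission, near-active power
    LP_IDLE = "lp_idle"   # deep sleep, negligible power

class EEEPort:
    """Toy EEE port with sleep-on-idle (SoI) / wake-on-arrival (WoA)."""
    SLEEP_THRESHOLD_US = 50  # assumed idle time before SoI triggers
    WAKE_TIME_US = 30        # assumed WoA latency (tens of microseconds)

    def __init__(self):
        self.state = PortState.N_IDLE
        self.idle_us = 0

    def tick(self, us, packet_arrived):
        """Advance time by `us` microseconds; return wake-up latency paid."""
        if packet_arrived:
            # WoA: only a deeply sleeping port pays the wake-up latency
            latency = self.WAKE_TIME_US if self.state == PortState.LP_IDLE else 0
            self.state = PortState.ACTIVE
            self.idle_us = 0
            return latency
        self.idle_us += us
        if self.idle_us >= self.SLEEP_THRESHOLD_US:
            self.state = PortState.LP_IDLE       # SoI: deep sleep
        elif self.state == PortState.ACTIVE:
            self.state = PortState.N_IDLE        # stop transmitting
        return 0

port = EEEPort()
port.tick(60, packet_arrived=False)        # idle past threshold -> LP_IDLE
delay = port.tick(1, packet_arrived=True)  # WoA: pays 30 us wake latency
```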
When all the ports in a switch are asleep, we can also put the entire switch to sleep, so as to save the power consumed by the switching fabric, fans, and other components, which is relatively fixed. Therefore, we use Eq. (2) to calculate the power consumption of a switch $i$, where $C_i$ denotes the total number of ports in the switch, $M_i$ denotes the number of sleeping ports, $Q_i$ denotes the power consumption of an active port, and $T_i$ denotes the fixed amount of power consumption in the switch. Here, we assume all active ports on a switch have the same power consumption, since current advanced data center topologies, such as Fat-Tree [1] and BCube [2], usually employ commodity switches equipped with homogeneous network ports to interconnect servers:
$$
U_i =
\begin{cases}
0, & \text{if } M_i = C_i \\[4pt]
T_i + (C_i - M_i)\,Q_i, & \text{if } M_i < C_i.
\end{cases}
\tag{2}
$$
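The switch power model in Eq. (2) is a short conditional in code; a minimal Python sketch follows, with all wattage values in the usage line invented for illustration.

```python
def switch_power(c_i, m_i, q_i, t_i):
    """Power of switch i per Eq. (2).

    c_i: total ports, m_i: sleeping ports,
    q_i: per-active-port power, t_i: fixed power (fabric, fans, ...).
    """
    if m_i == c_i:
        return 0.0                   # all ports asleep: whole switch sleeps
    return t_i + (c_i - m_i) * q_i   # fixed part + active-port power

# A 48-port switch with 40 ports asleep (illustrative wattages):
p = switch_power(c_i=48, m_i=40, q_i=1.5, t_i=90.0)  # 90 + 8*1.5 = 102 W
```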
2.3 Power Saving Techniques in Data Center Networks
Recently, many research works have studied power-saving mechanisms for data center networks. They can be generally divided into two categories: developing energy-efficient network devices and designing power-aware routing. The consistent theme is to make the power usage of data center networks proportional to their traffic load. When we compare the power characteristics of six typical data center architectures later, we will apply both the device-sleeping technique and power-aware routing to them, and study the impact of the two techniques on improving the network power effectiveness of these architectures.
Energy efficient network devices. Many novel energy-saving techniques have been proposed in recent years to improve the energy usage efficiency of individual network devices. Network component sleeping and dynamic rate adaptation are the two main methods. Nedevschi et al. [25] argued that putting idle network elements into a sleep mode and dynamically adapting the rate of network ports to their forwarding loads can effectively reduce the power consumption of network devices. Later, they studied two implementation mechanisms of network component sleeping, wake-on-LAN and assistant proxy processing, and proposed an effective proxy framework in [26]. Furthermore, Gupta and Singh [27] proposed a method of detecting idle and under-utilized links to save energy with little sacrifice in network delay and packet loss. Gunaratne et al. [28] investigated optimal strategies for tuning the transmission rate of links in response to link utilization and the buffer queue lengths of switch ports. Ananthanarayanan and Katz [29] designed a shadow port as the packet