Journal of Systems Engineering and Electronics
Vol. 24, No. 5, October 2013, pp. 870–878
Low-power task scheduling algorithm for
large-scale cloud data centers
Xiaolong Xu 1,2,*, Jiaxing Wu 1, Geng Yang 3, and Ruchuan Wang 1
1. College of Computer, Nanjing University of Posts and Telecommunications, Nanjing 210003, China;
2. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210046, China;
3. Key Lab of Broadband Wireless Communication and Sensor Network Technology of Ministry of Education,
Nanjing University of Posts and Telecommunications, Nanjing 210003, China
Abstract:
How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (LTSA) for large-scale cloud data centers. A winner tree is introduced, with the data nodes as its leaf nodes, and the final winner is selected with the aim of reducing energy consumption. The complexity of large-scale cloud data centers is fully considered, and a task comparison coefficient is defined to make the task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve node utilization and reduce the overall power consumption of the cloud data center.
Keywords: cloud computing, data center, task scheduling, energy consumption.
DOI: 10.1109/JSEE.2013.00101
1. Introduction
As large-scale cloud computing infrastructure, cloud data centers have entered a phase of high-speed development. However, the high energy consumption of cloud data centers is becoming a difficult issue worldwide. An investigation by the United States Environmental Protection Agency showed that the power consumed by the information technology (IT) infrastructure accounted for 1.5% of the total power consumption in the United States in 2006, and was projected to increase sixfold by 2014 [1]. A research report by International Data Corporation (IDC) showed that the energy consumption brought about by large-scale data centers has increased by 400% over the past 30 years, and is still growing rapidly [2]. Over the life cycle of a data center, its energy cost has exceeded its hardware cost and has become the second largest cost after human resources [3]. How to control and reduce energy consumption has become one of the key issues that need to be solved as soon as possible.

Manuscript received November 03, 2012.
*Corresponding author.
This work was supported by the National Natural Science Foundation of China (61202004; 61272084), the National Key Basic Research Program of China (973 Program) (2011CB302903), the Specialized Research Fund for the Doctoral Program of Higher Education (20093223120001; 20113223110003), the China Postdoctoral Science Foundation Funded Project (2011M500095; 2012T50514), the Natural Science Foundation of Jiangsu Province (BK2011754; BK2009426), the Jiangsu Postdoctoral Science Foundation Funded Project (1102103C), the Natural Science Fund of Higher Education of Jiangsu Province (12KJB520007), and the Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (yx002001).
In order to maintain quality of service (QoS), cloud data centers mostly build and configure high-performance server clusters according to the highest possible workload. This strategy keeps server utilization generally below 30% [4], which is one of the main reasons for energy waste.
From the perspective of energy consumption, servers running at low utilization and at high utilization generate similar amounts of heat and draw similar amounts of power, as shown in Fig. 1 [5]. Since disk utilization is almost constant, the difference in energy consumption between 10% and 80% CPU utilization is very small, which leaves considerable room for further research.
Fig. 1 The relation between CPU utilization and power consumption
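The observation behind Fig. 1 can be sketched with a simple linear server power model, a common approximation in the energy-aware scheduling literature (not taken from this paper). The parameters p_idle and p_max below are hypothetical values chosen only for illustration; the point is that idle power dominates, so total power at 10% and 80% CPU utilization is close, while energy spent per unit of work is far lower at high utilization.

```python
def server_power(cpu_util, p_idle=150.0, p_max=250.0):
    """Linear power model (hypothetical parameters): power rises
    linearly with CPU utilization from p_idle (0%) to p_max (100%)."""
    return p_idle + (p_max - p_idle) * cpu_util

low = server_power(0.10)   # power at 10% utilization: 160.0 W
high = server_power(0.80)  # power at 80% utilization: 230.0 W

# Total power differs little between the two operating points...
print(f"power at 10%: {low:.1f} W, at 80%: {high:.1f} W")
# ...but power per unit of work strongly favors high utilization,
# which is why consolidating tasks onto fewer busy nodes saves energy.
print(f"W per unit work: {low / 0.10:.1f} vs {high / 0.80:.1f}")
```

Under this model, a lightly loaded server wastes most of its power just being on, which motivates scheduling strategies that raise utilization on active nodes.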