International Journal of Grid Distribution Computing
Vol.8, No.2 (2015)
Copyright ⓒ 2015 SERSC
where r is the sampling rate. If all types of service resources share the same sampling rate, the time series is dispersed into the following sub-time-series:
\[
\begin{aligned}
U_1^{(l)} &= \left( u_1^{(l)}, \ldots, u_{Z^{(l)}}^{(l)} \right) \\
U_2^{(l)} &= \left( u_{Z^{(l)}+1}^{(l)}, \ldots, u_{2Z^{(l)}}^{(l)} \right) \\
&\;\;\vdots \\
U_Q^{(l)} &= \left( u_{(Q-1)Z^{(l)}+1}^{(l)}, \ldots, u_{Q Z^{(l)}}^{(l)} \right)
\end{aligned}
\tag{2}
\]
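As a minimal sketch of this windowing step (assuming the sampled series is held in a NumPy array and the candidate cycle length $Z^{(l)}$ is known; the function name is hypothetical):

```python
import numpy as np

def split_into_windows(u, z):
    """Split a sampled series u into Q consecutive windows of length z,
    discarding any incomplete trailing window, as in Eq. (2)."""
    q = len(u) // z
    return np.array(u[:q * z]).reshape(q, z)

# Example: a 12-point series with cycle length 4 yields Q = 3 windows.
windows = split_into_windows(list(range(12)), 4)
```

Each row of `windows` then corresponds to one sub-time-series $U_i^{(l)}$ of Eq. (2).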
Based on the dispersed period windows, a similarity test is conducted for every pair of windows. If the Pearson correlation coefficient of any two windows $U_i^{(l)}$ and $U_j^{(l)}$ is close to 1 and their average values are approximately equal, the resource requests of the service are determined to follow the cycle $Z^{(l)}$. Then, DTW (Dynamic Time Warping) is used to find the resource-requirement sequence of the service $X^{(l)}$ in the most recent cycle that minimizes the mapping distance to the current resource-requirement sequence. The resource demand of service $X^{(l)}$ at time t is then taken from the corresponding position of this matched sequence.
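The cycle test and DTW matching described above can be sketched as follows (a minimal illustration assuming NumPy arrays; the thresholds `corr_threshold` and `mean_tol` are illustrative choices, not values from the paper):

```python
import numpy as np

def is_cyclical(windows, corr_threshold=0.95, mean_tol=0.1):
    """Test whether the windows repeat: every pair must have a Pearson
    correlation close to 1 and approximately equal means."""
    for i in range(len(windows)):
        for j in range(i + 1, len(windows)):
            r = np.corrcoef(windows[i], windows[j])[0, 1]
            if r < corr_threshold:
                return False
            mi, mj = windows[i].mean(), windows[j].mean()
            if abs(mi - mj) > mean_tol * max(abs(mi), 1e-9):
                return False
    return True

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two sequences."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

def best_matching_window(windows, current):
    """Pick the historical window whose DTW distance to the current
    (possibly partial) sequence is smallest; its values give the forecast."""
    dists = [dtw_distance(w, current) for w in windows]
    return windows[int(np.argmin(dists))]

# Example: a perfectly periodic series passes the cycle test,
# and the matched window reproduces the repeating pattern.
periodic = np.array([1.0, 2.0, 3.0] * 3).reshape(3, 3)
cyc = is_cyclical(periodic)
match = best_matching_window(periodic, np.array([1.0, 2.0, 3.0]))
```

A full implementation would scan several candidate cycle lengths $Z^{(l)}$ and keep the one passing the test; the sketch shows the per-candidate check only.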
However, when the service $X^{(l)}$ does not exhibit cyclical characteristics, we use a Markov chain to predict its resource requirement at time t. To apply Markov theory, the range of the resource requirement of service $X^{(l)}$ is first divided equally into $n = \left[ \left( X_{\max}^{(l)} - X_{\min}^{(l)} \right) / I \right]$ intervals, which distinguish the states of the resource requirement under different conditions. Correspondingly, each resource-demand interval represents one state of the resource or service, and the resource requirement of each state is taken as the mean value of the interval. Therefore, the changes of the resource requirement of the service can be viewed as a time series $X^{(l)}(t)\;(t = 1, 2, \ldots, w)$, where the resource requirement at each time point belongs to one of the resource-requirement states. Then, statistical techniques are employed to calculate the transition probability matrix of the Markov process:
\[
P^{(l)} = \left( p_{ij}^{(l)} \right), \quad i = 1, \ldots, n;\; j = 1, \ldots, n
\tag{3}
\]
where $p_{ij}^{(l)} = m_{ij}^{(l)} / m_i^{(l)}$ is the transition probability from state $s_i$ to state $s_j$, $m_i^{(l)}$ represents the number of times that state $s_i$ appears at different times, and $m_{ij}^{(l)}$ represents the number of transfers from state $s_i$ to state $s_j$. Thus we obtain the state transition probability matrix $P^{(l)}$ of this service.
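The discretization and the estimation of $p_{ij}^{(l)} = m_{ij}^{(l)} / m_i^{(l)}$ can be sketched as follows (a minimal illustration assuming equal-width intervals and a NumPy count matrix; function names are hypothetical):

```python
import numpy as np

def discretize(x, n_states):
    """Map each demand value to a state index via n equal-width intervals."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / n_states
    if width == 0:  # constant series: everything falls in one state
        return [0] * len(x)
    return [min(int((v - lo) / width), n_states - 1) for v in x]

def transition_matrix(states, n_states):
    """Estimate p_ij = m_ij / m_i by counting transitions in the
    observed state sequence, as in Eq. (3)."""
    m = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        m[a, b] += 1
    row_sums = m.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1  # avoid division by zero for unvisited states
    return m / row_sums

# Example: a rising demand series discretized into two states.
states = discretize([0.0, 1.0, 2.0, 3.0], 2)
P = transition_matrix(states, 2)
```

With an initial distribution `pi0` (e.g., a one-hot vector for the current state), the t-step prediction of Eq. (4) is then `pi0 @ np.linalg.matrix_power(P, t)`.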
Based on this, MPRA predicts short-term resource requirements by constructing a discrete finite-state Markov chain model. Provided that the Markov chain is homogeneous, the state probability distribution at any time t can be calculated from the Chapman-Kolmogorov (C-K) equation, which yields the state prediction of the service:
\[
\pi_t^{(l)} = \pi_{t-1}^{(l)} P^{(l)} = \pi_{t-2}^{(l)} \left( P^{(l)} \right)^2 = \cdots = \pi_0^{(l)} \left( P^{(l)} \right)^t
\tag{4}
\]
where $\pi_0^{(l)}$ and $\pi_t^{(l)}$ are the probability distributions at the initial time and at time t, respectively. According to the current state $s_i$, the probability of being in state $s_j$ after time t can be inferred. Therefore, the forecast value of the resource requirement of the service at time t is the mean value of the interval of the most probable state. In summary, the resource requirement of the service $X^{(l)}$ at time t can be forecasted as: