relationships seem to be a bit exaggerated, because 1) the relationships are not continuous; rather, they progress as a stepwise function relative to the scaling of the voltage, and 2) the power or energy consumed by the processor does not take into account the power or energy spent on memory accesses, which is unavoidable in any computational environment. As mentioned in [2], the frequency of a processor is approximately proportional to the supply voltage; hence, all the subsequent relationships in (2) and (3) are mere approximations.
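The stepwise, rather than continuous, nature of the voltage-frequency relationship can be illustrated with a small sketch. The operating points below are hypothetical; the only assumptions carried over from the text are that frequency is roughly proportional to supply voltage and that dynamic power follows the common approximation $P = C V^2 f$:

```python
# Sketch of a DVS module with discrete (voltage, frequency) operating
# points; the values are hypothetical. Because f is roughly proportional
# to V, dynamic power P = C * V^2 * f grows roughly cubically with
# frequency -- but only in steps, one per DVS level, not continuously.

DVS_LEVELS = [  # (supply voltage in volts, frequency in GHz)
    (0.9, 1.0),
    (1.1, 1.6),
    (1.3, 2.2),
]

def power(voltage, frequency, capacitance=1.0):
    """Dynamic power at one operating point: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

# Power rises stepwise as the DVS level increases.
levels = [power(v, f) for v, f in DVS_LEVELS]
assert levels == sorted(levels)
```

Between two adjacent levels no intermediate operating point exists, which is why the continuous formulas in (2) and (3) are only approximations.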
3.2 The System Model
The system consists of the machines that comprise the computational grid and a collection of tasks.
Machines. Consider a computational grid comprising a set of machines, $M = \{m_1, m_2, \ldots, m_m\}$. Assume that each machine is equipped with a DVS module. Each machine is characterized by 1) the frequency of the CPU, $f_j$, given in cycles per unit time. With the help of DVS, $f_j$ can vary from $f_j^{min}$ to $f_j^{max}$, where $0 < f_j^{min} < f_j^{max}$. From the frequency, it is easy to obtain the speed of the CPU, $S_j$, which is approximately proportional to the frequency of the machine. (This relationship between the speed and the frequency of a processor is a widespread approximation, e.g., [14], [32].) 2) The specific machine architecture, $A(m_j)$. The architecture includes the type of CPU, bus types and speeds in GHz, I/O, and memory in bytes.
Tasks. Consider a metatask, $T = \{t_1, t_2, \ldots, t_n\}$, where $t_i$ is a task. Each task is characterized by 1) the computational cycles, $c_i$, that it needs to complete. The assumption here is that $c_i$ is known a priori. 2) The specific machine architecture, $A(t_i)$, that it needs to complete its execution. 3) The deadline, $d_i$, before which it has to complete its execution. Moreover, we assume that the metatask, $T$, also has a deadline, $D$, which is met if and only if the deadlines of all its tasks are met.
Preliminaries. Now, suppose we are given a computational grid and a metatask, $T$, and we are required to map $T$ onto the computational grid such that all the characteristics of the tasks and the deadline constraint of $T$ are fulfilled. We term this fulfillment a feasible task-to-machine mapping. A feasible task-to-machine mapping exists when 1) each task $t_i \in T$ can be mapped to at least one $m_j$ subject to all the constraints associated with each task (computational cycles, architecture, and deadline), and 2) the deadline constraint of $T$ is also satisfied.
The number of computational cycles required by $t_i$ to execute on $m_j$ is assumed to be a finite positive number, denoted by $c_{ij}$. The execution time of $t_i$ under a constant speed $S_{ij}$, given in cycles per second, is $t_{ij} = c_{ij}/S_{ij}$. For the associated data and instructions of a task, we assume that the processor always retrieves them from the level-1 (primary) data cache. A task, $t_i$, when executed on machine $m_j$, draws $p_{ij}$ amount of instantaneous power. Lowering the instantaneous power will lower the CPU frequency, consequently decrease the speed of the CPU, and hence possibly cause $t_i$ to miss its deadline. For simplicity, we assume that the overhead of switching the CPU frequency is minimal, and hence it is ignored.
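The model above can be captured in a short sketch. The class and field names are our own; only the relations $t_{ij} = c_{ij}/S_{ij}$ and the per-task deadline test come from the text:

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # c_i: computational cycles required (known a priori)
    deadline: float    # d_i: latest allowed completion time
    arch: frozenset    # A(t_i): architectural requirements

@dataclass
class Machine:
    speed: float       # S_j: cycles per second at the current DVS level
    arch: frozenset    # A(m_j): architectural capabilities

def execution_time(task: Task, machine: Machine) -> float:
    """t_ij = c_ij / S_ij under a constant speed."""
    return task.cycles / machine.speed

def meets_deadline(task: Task, machine: Machine) -> bool:
    """Constraint (6) for a single mapped pair: t_ij <= d_i."""
    return execution_time(task, machine) <= task.deadline
```

For example, a task of $2 \times 10^9$ cycles on a machine running at $2 \times 10^9$ cycles per second finishes in one second, which meets a 1.5-second deadline.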
The architectural requirements of each task are recorded as a tuple, with each element bearing a specific requirement, for instance, 1) what processor does the task require for its execution? 2) The architectural affinity matching of the task to the machine. We assume that the matching of architectural requirements is a Boolean operation. That is, the architectural mapping is fulfilled only when all of the architectural constraints are satisfied, and not otherwise.
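This all-or-nothing matching can be sketched as a single Boolean test. We model the requirement tuple as a set here; note that constraint (5) below states the match as an equality $A(t_i) = A(m_j)$, whereas the subset test used in this sketch is a slightly more permissive reading of "all constraints satisfied":

```python
def arch_match(task_reqs: frozenset, machine_caps: frozenset) -> bool:
    """Boolean architectural affinity: True only when every element of
    the task's requirement tuple is satisfied by the machine.
    (Constraint (5) in the text uses strict equality; replace `<=`
    with `==` for that reading.)"""
    return task_reqs <= machine_caps
```

A machine offering a superset of the required features matches; a machine missing any required feature does not.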
3.3 Formulating the Min-Min-Max Problem
Given is a computational grid and a metatask $T$. Find the task-to-machine mapping where 1) the cumulative instantaneous power utilized by the computational grid is minimized such that 2) the makespan of the metatask, $T$, is minimized.
Mathematically, we can say

$$\text{minimize} \sum_{i=1}^{n} \sum_{j=1}^{m} p_{ij} x_{ij} \quad \text{and} \quad \text{minimize} \max_{1 \le j \le m} \sum_{i=1}^{n} t_{ij} x_{ij}$$

subject to

$$x_{ij} \in \{0, 1\}, \quad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, m, \qquad (4)$$

$$t_i \rightarrow m_j, \; \forall i, \forall j; \; \text{if } A(t_i) = A(m_j), \text{ then } x_{ij} = 1, \qquad (5)$$

$$t_{ij} x_{ij} \le d_i, \; \forall i, \forall j, \; x_{ij} = 1, \qquad (6)$$

$$(t_{ij} x_{ij} \le d_i) \in \{0, 1\}, \qquad (7)$$

$$\prod_{i=1}^{n} (t_{ij} x_{ij} \le d_i) = 1, \; \forall i, \forall j, \; x_{ij} = 1. \qquad (8)$$
Constraint (4) is the mapping constraint: when $x_{ij} = 1$, a task, $t_i$, is mapped to machine, $m_j$. Constraint (5) elaborates on this mapping in conjunction with the architectural requirements, and it states that a mapping can only exist if the architectures match. Constraint (6) relates to the fulfillment of the deadline of each task, and constraint (7) tells us about the Boolean relationship between the deadline and the actual execution time of the tasks. Constraint (8) relates to the deadline constraint of the metatask, which will hold if and only if all the deadlines of the tasks, $d_i$, $i = 1, 2, \ldots, n$, are satisfied.
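The two objectives and the deadline constraints can be evaluated for any candidate assignment with a direct transcription of the formulas above (the function names and matrix representation are our own; the arithmetic follows (4)-(8)):

```python
def objectives(p, t, x):
    """Evaluate both objectives for a 0/1 assignment x.

    p[i][j]: instantaneous power p_ij of task i on machine j
    t[i][j]: execution time t_ij of task i on machine j
    x[i][j]: 1 if task i is mapped to machine j, else 0
    Returns (cumulative instantaneous power, makespan).
    """
    n, m = len(x), len(x[0])
    total_power = sum(p[i][j] * x[i][j] for i in range(n) for j in range(m))
    makespan = max(sum(t[i][j] * x[i][j] for i in range(n)) for j in range(m))
    return total_power, makespan

def feasible(t, x, d):
    """Constraints (6)-(8): every mapped task meets its own deadline d_i;
    the product in (8) is 1 exactly when all indicator terms are 1."""
    n, m = len(x), len(x[0])
    return all(t[i][j] * x[i][j] <= d[i]
               for i in range(n) for j in range(m) if x[i][j] == 1)
```

For two tasks and two machines with the identity assignment, the first objective is simply $p_{11} + p_{22}$ and the makespan is the larger of the two per-machine loads; `feasible` then reproduces the metatask condition that $T$'s deadline holds iff every $d_i$ holds.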
The above problem formulation is in the form of a multiconstrained, multiobjective optimization problem, where preference is given to one objective over the other. Because we are more interested in minimizing the instantaneous power consumption, we give preference to the first objective, that is, the instantaneous power consumption. This formulation is in the same form as that of a GAP except for constraints (6), (7), and (8). The major difference between EATA and GAP is that the capacity of resources in EATA, in terms of the utilization of instantaneous power, is defined in groups, whereas in the case of GAP, it is defined individually. The EATA problem is formulated as a multiobjective optimization problem. In the literature, there are two standard ways to tackle such problems: 1) optimize the objectives concurrently, or 2) optimize one objective first and then make it a constraint of the second objective. To optimize the objectives concurrently, the classical method is to use Lagrange multipliers. The Lagrangian will converge the two (in our case) or multiple objectives to a saddle point [12]. To optimize one objective first, then make that a constraint of the second objective,
348 IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 20, NO. 3, MARCH 2009