Technology In-Depth

Developing Power-Efficient Software Systems on ARM Platforms
By Chris Shore, ARM Training Manager

Given the emphasis on battery life for portable devices, it may seem strange that very few software engineers actually have energy reduction in their daily project accountabilities. I suspect that those who do give the subject some thought are likely to do it on a “commendation vs. court martial” basis. We are entering a period when this will have to change. As battery life and performance requirements continue to fight with each other, we, as software engineers, will be forced to spend a lot more time looking at how we can design and write our software in an energy-efficient way – at least until the tools catch up with us.

As engineers, we all love finding geeky solutions to the problems we come across. It may come as a surprise to find that, in this particular area, there are none. Clever tricks may save some power, but the field is dominated by other, simpler considerations. There are several very large elephants in this room, and we must be careful to hunt the elephants we can see before spending significant effort chasing smaller mammals.

When looking at the power consumption of a system, it is important to be clear about what we are actually measuring. When we say “power-efficient,” we can mean several things. Do we mean “power” or “energy”? In practice, we need both. Most handheld portable devices have two distinct budgets: a power budget, which governs instantaneous power consumption and prevents overheating or thermal stress, and an energy budget, which governs total long-term energy use. Software is required to stay within the power budget over the short term and within the energy budget over the longer term.

Clearly, we could reduce the power consumption of any device to near zero simply by having it do nothing – or nothing meaningful, at any rate! Unavoidably, carrying out useful functions requires energy. So this game is a continuous compromise between doing something meaningful and saving power. To carry out the functions we need, we must burn energy; consequently, we must ensure that those functions are carried out in as power-efficient a manner as possible.
The Energy-Delay Product

A better metric, commonly used in the academic literature on this subject, is the “Energy-Delay Product” (EDP). Although it has neither standard units nor a standard methodology, it combines energy consumption with a measure of performance. Increasing energy use or decreasing performance increases EDP, so what we seek is the lowest achievable value of EDP – in other words, the lowest energy use consistent with carrying out the required tasks within the time allowed.
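As a rough illustration, with invented figures rather than measured data, EDP is simply the energy consumed multiplied by the time taken:

```python
# Energy-Delay Product: energy consumed multiplied by the time taken.
# The figures below are invented for illustration, not measured data.

def edp(power_watts: float, time_s: float) -> float:
    """EDP for a task drawing a constant power for a fixed duration."""
    energy_j = power_watts * time_s   # energy = power x time
    return energy_j * time_s          # EDP = energy x delay

# Full speed: 2 W for 1 s -> 2 J of energy, EDP = 2.0
edp_fast = edp(power_watts=2.0, time_s=1.0)

# Half the clock: power roughly halves, runtime doubles.
# The energy is the same 2 J, but the EDP doubles to 4.0
# because the task now takes twice as long.
edp_slow = edp(power_watts=1.0, time_s=2.0)
```

With these made-up numbers, simply halving the clock saves no energy at all – the joules are identical – yet the EDP worsens, capturing the lost performance. Only when a lower frequency also permits a lower supply voltage does the energy itself fall.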
Where does the energy go?

All computing machines carry out two essential functions, and both really are essential – without both, no meaningful task can be accomplished.

Computation – or data manipulation – is naturally what we think of first. Typically, computation is carried out on values held in machine registers. To carry out computational tasks as efficiently as possible, we need to execute the smallest possible number of instructions in the shortest possible time. Most importantly, efficient computation allows one of two things: either we can finish earlier and go to sleep, or we can turn down the clock speed and still complete within the allotted time.
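The choice between those two options can be sketched with the first-order CMOS dynamic-power model, P = C·V²·f. Every constant below is an illustrative assumption, not a measured ARM figure:

```python
# Sketch of "finish early and sleep" versus "slow the clock down",
# using the first-order dynamic-power model P = C * V^2 * f.
# Every constant here is an invented, illustrative assumption.

def task_energy(voltage: float, freq_hz: float, cycles: float,
                idle_power_w: float, deadline_s: float,
                switched_cap: float = 1e-9) -> float:
    """Energy to execute `cycles` of work, then idle until the deadline."""
    active_power_w = switched_cap * voltage ** 2 * freq_hz
    run_time_s = cycles / freq_hz
    idle_time_s = max(0.0, deadline_s - run_time_s)
    return active_power_w * run_time_s + idle_power_w * idle_time_s

WORK_CYCLES = 1e8   # cycles the task needs
DEADLINE_S = 1.0    # seconds allowed

# Option 1: race to idle - full speed and voltage, then sleep.
race = task_energy(voltage=1.2, freq_hz=2e8, cycles=WORK_CYCLES,
                   idle_power_w=0.01, deadline_s=DEADLINE_S)

# Option 2: halve the clock at a reduced voltage, finishing just in time.
scaled = task_energy(voltage=0.9, freq_hz=1e8, cycles=WORK_CYCLES,
                     idle_power_w=0.01, deadline_s=DEADLINE_S)

# With these numbers the scaled-down run uses less energy: it is the
# lower voltage, not the lower frequency, that saves the joules.
```

Which option wins in practice depends on the real voltage-frequency operating points and the idle power of the platform; the model above merely shows why the trade-off exists.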
What is often neglected is the other aspect: communication – moving data around. In the majority of architectures (and ARM, as a load-store architecture, is no exception) data movement is essential. You cannot process any information without moving it from one place to