Our model focuses on accommodating foreground
TCP flows, leaving emulation of other types of fore-
ground flows as future work. We also concentrate on em-
ulating stationary conditions for paths; in principle, any
or all parameters to our model can be made time-varying
to capture more dynamic network behavior.
2.1 Base RTT
The round-trip time (RTT) of a path is the time it takes
for a packet to be transferred in one direction plus the
time for an acknowledgment to be transferred in the op-
posite direction. We model the RTT of a path by break-
ing it into two components: the “base RTT” [6] (RTT_base) and the queuing delay of the bottleneck link.
The base RTT includes the propagation, transmission,
and processing delay for the entire path and the queuing
delay of all non-bottleneck links. When the queue on
the bottleneck link is empty, the RTT of the path is sim-
ply the base RTT. In practice, the minimum RTT seen
on a path is a good approximation of its base RTT. Be-
cause transmission and propagation delays are constant,
and processing delays for an individual flow tend to be
stable, a period of low RTT indicates a period of little or
no queuing delay.
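As a concrete illustration of this approximation, the sketch below (the function name and sample values are ours, purely for illustration) estimates a path's base RTT as the minimum of its observed RTT samples:

```python
def estimate_base_rtt(rtt_samples_ms):
    """Approximate a path's base RTT as the minimum observed RTT.

    A period of low RTT indicates little or no queuing delay, so the
    smallest sample approaches the sum of propagation, transmission,
    and processing delays along the path.
    """
    if not rtt_samples_ms:
        raise ValueError("need at least one RTT sample")
    return min(rtt_samples_ms)

# Hypothetical RTT measurements (ms) taken over time on one path.
samples = [74.1, 68.3, 50.4, 55.0, 50.2, 61.7]
print(estimate_base_rtt(samples))  # -> 50.2, the approximate base RTT
```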
The base RTT represents the portion of delay that is
relatively insensitive to network load offered by the fore-
ground flows. This means that we do not need to emulate
these network delays on a detailed hop-by-hop basis: a
fixed delay for each path is sufficient.
2.2 Capacity, Available Bandwidth, and
Queuing
The bottleneck link controls the bandwidth available on
the path, contributes queuing delay to the RTT, and
causes packet loss when its queue fills. Thus, three prop-
erties of this link are closely intertwined: link capacity,
available bandwidth, and queue size.
We make the common assumption that there is only
one bottleneck link on a path in a given direction [9] at a
given time, though we do not assume that the same link
is the bottleneck in both directions.
2.2.1 Capacity and Available Bandwidth
Existing link emulators fundamentally emulate limited
capacity on links. The link speed given to the emula-
tor is used to determine the rate at which packets drain
from the emulator’s bandwidth queue, in the same way
that a router’s queue empties at a rate governed by the
capacity of the outgoing link. The quantity that more di-
rectly affects distributed applications, however, is avail-
able bandwidth, which we consider to be the maximum
rate sustainable by a foreground TCP flow. This is the
rate at which the foreground flow’s packets empty from
the bottleneck queue. Assuming the existence of com-
peting traffic, this rate is lower than the link’s capacity.
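To make the distinction concrete, the following sketch (a toy model of our own, not the internals of any particular emulator) drains packets at a configured capacity; the available bandwidth is whatever lower rate a foreground flow can sustain once competing traffic consumes part of that capacity:

```python
class CapacityEmulator:
    """Toy model of a capacity-based link emulator: packets drain from
    the bandwidth queue at the configured link rate, just as a router's
    queue empties at the rate of its outgoing link."""

    def __init__(self, capacity_bps):
        self.capacity_bps = capacity_bps
        self.link_free_at_s = 0.0  # when the link finishes its current packet

    def enqueue(self, now_s, size_bytes):
        """Return the departure time of a packet arriving at now_s."""
        start_s = max(now_s, self.link_free_at_s)
        self.link_free_at_s = start_s + (size_bytes * 8) / self.capacity_bps
        return self.link_free_at_s

# A 1500-byte packet on an idle 43 Mbps link departs after ~0.28 ms;
# competing traffic would delay it further, which is why the rate a
# foreground TCP flow actually sustains (the available bandwidth) is
# lower than the capacity.
link = CapacityEmulator(capacity_bps=43_000_000)
print(link.enqueue(now_s=0.0, size_bytes=1500))  # -> ~0.000279 s
```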
It is not enough to emulate available bandwidth us-
ing a capacity mechanism. Suppose that we set the ca-
pacity of a link emulator using the available bandwidth
measured on some Internet path: inside of the emulator,
packets will drain more slowly than they do in the real
world. This difference in rate can result in vastly dif-
ferent queuing delays, which is not only disastrous for
latency-sensitive experiments, but as we will show, can
cause inaccurate bandwidth in the emulator as well.
Let q_f and q_r be the sizes of the bottleneck queues in the forward and reverse directions, respectively, and let C_f and C_r be the capacities. The maximum time a packet may spend in a queue is q/C, giving us a maximum RTT that can be observed on the path:

RTT_max = RTT_base + q_f / C_f + q_r / C_r        (1)
If we were to use ABW_f and ABW_r—the available bandwidth measured from some real Internet path—to set C_f and C_r, Equation 1 would yield much larger queuing delays within the emulator than seen on the real path (assuming the queue sizes on the path and in the emulator are the same).
For instance, consider a real path with RTT_base = 50 ms, a bottleneck of symmetric capacity C_f = C_r = 43 Mbps (a T-3 link), and available bandwidth ABW_f = ABW_r = 4.3 Mbps. For a small q_f and q_r of 64 KB (fillable by a single TCP flow), the RTT on the path is bounded at 74 ms, since the forward and reverse directions each contribute at most 12 ms of queuing delay. However, if we set C_f = C_r = 4.3 Mbps within an emulator (keeping queue sizes the same), each direction of the path can contribute up to 120 ms of queuing delay. The total resulting RTT could reach as high as 290 ms.
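This gap follows directly from Equation 1; the sketch below (variable names are ours) plugs in the numbers from the example:

```python
def rtt_max_ms(base_rtt_ms, q_f_bytes, c_f_bps, q_r_bytes, c_r_bps):
    """Equation 1: base RTT plus the worst-case queuing delay q/C
    in each direction, converted to milliseconds."""
    fwd_ms = q_f_bytes * 8 / c_f_bps * 1000.0
    rev_ms = q_r_bytes * 8 / c_r_bps * 1000.0
    return base_rtt_ms + fwd_ms + rev_ms

Q = 64 * 1024  # 64 KB bottleneck queue in each direction

# Real path: at T-3 capacity, each 64 KB queue adds at most ~12 ms.
print(rtt_max_ms(50, Q, 43_000_000, Q, 43_000_000))  # -> ~74 ms

# Emulator with capacity set to the measured available bandwidth:
# each direction can now add ~120 ms, pushing the bound near 290 ms.
print(rtt_max_ms(50, Q, 4_300_000, Q, 4_300_000))    # -> ~294 ms
```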
This unrealistically high RTT can lead to two prob-
lems. First, it fails to accurately emulate the RTT of the
real path, causing problems for latency-sensitive appli-
cations. Second, it can also affect the bandwidth avail-
able to TCP, a problem we discuss in more detail in
Section 2.2.2.
One approach to reducing the maximum queuing delay would be to simply reduce q_f and q_r inside of the emulator. However, this may result in queues that are simply too small. In the example above, to reduce the queuing delay within the path emulator to the same level as the Internet path, we would have to reduce the queue size by a factor of 10, to 6.4 KB. A queue this small will cause packet loss if a stream sends a small burst of traffic, preventing TCP from achieving the requested available bandwidth. We also discuss minimum queue size in more detail in Section 2.2.2.
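The required shrinkage follows from rearranging the per-direction delay bound q/C into q = delay × C; a one-line sketch (the helper function is our own):

```python
def queue_bytes_for_delay(target_delay_s, capacity_bps):
    """Largest queue whose worst-case drain time q/C stays within the
    target delay: q = delay * C, converted from bits to bytes."""
    return target_delay_s * capacity_bps / 8

# Matching the real path's ~12 ms per-direction bound at 4.3 Mbps
# forces the queue down to ~6.4 KB, a tenth of its original size.
print(queue_bytes_for_delay(0.012, 4_300_000))  # -> 6450.0 bytes
```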