overall performance of many web services without result-
ing in a congestion collapse.
Keith Winstein et al. [23] propose a new approach to
end-to-end congestion control on a multi-user network.
In this approach, the protocol designer specifies their prior
knowledge or assumptions about the network and an
objective that the algorithm will try to achieve, e.g., high
throughput and low queuing delay. Remy then produces
a distributed algorithm—the control rules for the indepen-
dent endpoints—that tries to achieve this objective. In
Remy, TCP’s entire congestion control algorithm is
adapted in response to empirical variations in underlying
networks. Unfortunately, this approach seems impractical
in that: (1) it requires prior knowledge or assumptions
about the network and an objective specified by the
protocol designer; (2) running on a 48-core server, Remy
generally takes a few hours of wall-clock time (one or
two CPU-weeks) to generate congestion-control
algorithms offline. Moreover, the approach has so far been
evaluated only through simulation experiments, and its
friendliness with existing TCP variants is uncertain. It
might not be applicable over the Internet in the short term.
Monia Ghobadi et al. [24] propose OpenTCP as a TCP
adaptation framework that works in SDNs. The operator
can easily customize end-hosts’ TCP stacks according to
statistical information collected by the SDN controllers.
However, OpenTCP requires the assistance of the
controllers and thus cannot be deployed on the Internet in
the short term.
2.2. Congestion control approaches for high BDP networks
Many protocols have been presented for networks with
high BDP, including loss-based, delay-based, and mixed
(combined loss- and delay-based) approaches. In Section 4,
we will compare ACCF with Cubic TCP, FAST TCP [26],
and TCP Illinois, which are loss-based, delay-based, and
mixed approaches, respectively.
Loss-based approaches modify the AIMD mechanism of
the TCP congestion avoidance phase to increase the
congestion window more quickly and decrease it more
slowly than TCP Reno, so as to achieve high throughput in
high BDP networks. H-TCP sets the increase parameter as
a function of the elapsed time since the last packet loss
and uses an adaptive backoff strategy once congestion
events occur, thereby achieving good responsiveness and
efficiency in high BDP networks. In H-TCP, the window
size curve between two consecutive loss events is convex.
However, an ideal window curve should be concave,
which is more efficient and avoids heavy congestion [27].
BIC TCP and Cubic TCP are proposed to yield a non-convex
window curve. BIC TCP adjusts the window size using a
binary search method to reach a reference value. The
increment of the window size is linear at the initial stage
and then becomes logarithmic when approaching the
reference point. BIC TCP performs better than the earlier
approaches.
However, BIC TCP suffers from the RTT unfairness problem.
Cubic TCP is developed to improve the RTT-fairness of BIC
TCP. The protocol modifies the linear window growth
function of existing TCP standards to be a cubic function
in order to improve the scalability of TCP over fast,
long-delay networks. The window growth function of
Cubic is defined as [11]:
\[ W(t) = C(t - K)^3 + W_{max} \qquad (1) \]

\[ K = \sqrt[3]{\frac{W_{max}\,\beta}{C}} \qquad (2) \]
where C is a Cubic parameter, t is the elapsed time since
the last window reduction, W_max is the window size at
which the last loss event occurred, β is the multiplicative
window decrease factor applied at a loss event, and K is
the time period that the above function takes to increase
the window back to W_max when there is no further loss
event. During steady state, Cubic increases the window
size aggressively when the window is far from the
saturation point, and slowly when it is close to the
saturation point. After the saturation point is reached,
Cubic uses a max-probing method to detect the new
maximum, and this yields a convex window curve.
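Eqs. (1) and (2) can be checked numerically; the sketch below uses the default parameters from the Cubic proposal (C = 0.4, β = 0.2), which are assumptions not stated in the text above:

```python
def cubic_window(t, w_max, C=0.4, beta=0.2):
    """Cubic window growth W(t) = C*(t - K)^3 + W_max, Eq. (1).

    t:     seconds since the last window reduction
    w_max: window size at the last loss event
    K = (w_max * beta / C) ** (1/3) is the time needed to return
    to w_max with no further loss, Eq. (2).
    """
    K = (w_max * beta / C) ** (1.0 / 3.0)
    return C * (t - K) ** 3 + w_max

# Just after a loss (t = 0) the window equals w_max * (1 - beta);
# at t = K it is back at w_max; past K the curve turns convex,
# which is the max-probing phase described above.
```

Evaluating the function at t = 0, t = K, and t > K reproduces the concave-then-convex shape around the saturation point that the text describes.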
Although the above-mentioned approaches can achieve
higher throughput than standard TCP, oscillation is
unavoidable because they use packet loss as the only
congestion signal. Moreover, their more aggressive
increase rate and drop behavior can increase the burden
on intermediate router buffers, thereby resulting in serious
congestion. Hence, loss-based protocols cannot achieve
full bandwidth utilization when the network status changes.
Delay-based protocols use the queuing delay as the
network congestion estimator and can achieve excellent
steady-state performance. FAST TCP is a typical
delay-based high-speed TCP variant derived from TCP
Vegas. The protocol maintains the queue occupancy at
routers at a small but non-zero value so as to keep the
network near full bandwidth utilization and achieve a
higher average throughput. It periodically updates the
congestion window based on the estimated average RTT
and average queuing delay, and the update function is
defined as [26]:
\[ w = \min\left\{ 2\,w_{old},\ (1-\gamma)\,w_{old} + \gamma\left(\frac{baseRTT}{aveRTT}\,w_{old} + \alpha\right) \right\} \qquad (3) \]
where w_old is the congestion window size of the last RTT
and w is the updated congestion window, α is the number
of packets that can be queued in the routers at the
equilibrium state, γ ∈ (0, 1], and baseRTT is the minimum
RTT observed so far.
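A minimal sketch of the update rule in Eq. (3); α = 200 packets is the FAST TCP default mentioned below, while γ = 0.5 is an assumed value:

```python
def fast_update(w_old, base_rtt, avg_rtt, alpha=200.0, gamma=0.5):
    """One periodic FAST-style window update, Eq. (3) (sketch).

    w_old:    congestion window of the last RTT (packets)
    base_rtt: minimum RTT observed so far (seconds)
    avg_rtt:  estimated average RTT (seconds)
    alpha:    target number of packets queued at equilibrium
    gamma:    smoothing gain in (0, 1]
    """
    target = (base_rtt / avg_rtt) * w_old + alpha
    return min(2.0 * w_old, (1.0 - gamma) * w_old + gamma * target)

# At equilibrium the queueing term exactly offsets alpha and the
# window stays constant; with an empty queue (avg_rtt == base_rtt)
# the window grows by gamma * alpha per update, capped at doubling.
```

The min with 2·w_old caps growth at one doubling per update, while the γ-weighted average moves the window smoothly toward the equilibrium point where α packets sit in the queue.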
Unfortunately, these delay-based protocols suffer from
some inherent weaknesses. First, delay-based flows can
suffer significant performance degradation when
competing with loss-based flows [13]. Second, they
require the buffer size at the router to be larger than a
specified value α, and this value increases with the
number of users N. Since N is difficult to estimate, α is
usually fixed; e.g., it is set to 200 packets in FAST TCP. As a
result, a change in the buffer size at the router may
degrade the performance of delay-based protocols. Lastly,
the performance of these delay-based protocols
deteriorates if the delay measurements are noisy; they
may suffer performance degradation on congested links
because a large number of packet losses can result in
erroneous delay estimates.
Mixed protocols combine loss-based protocols and
delay-based protocols. They use both packet loss information
310 M. Wang et al. / Computer Networks 64 (2014) 308–321