was not designed for virtualization and has no common
hardware abstraction layer. While individual technolo-
gies can slice particular hardware resources (e.g., MPLS
can virtualize forwarding tables) and layers (e.g., WDM
slices the physical layer, VLANs slice the link layer), there is currently no single technology or clean abstraction that will virtualize the network as a whole.
Further, it’s not clear how—or if it’s even possible—to
virtualize the equipment already deployed in our net-
works. Commodity switches and routers typically have
proprietary architectures—equipment manufacturers do
not publish their full design—with limited mechanisms
to change and control the equipment’s software, e.g.,
operators can only change a limited set of configuration
parameters via the command-line interface.
The specific system described in this paper builds on
OpenFlow [13] as an abstraction of the underlying hard-
ware. As we describe later, OpenFlow offers most of
what we need for a hardware abstraction layer. In prin-
ciple, other abstraction layers could be used, although
we’re not aware of any available today that meets our
needs.
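For readers unfamiliar with OpenFlow, the following sketch (in Python; the classes and the example rule are our own illustration, not code from any OpenFlow library) conveys the flavor of the abstraction: the switch exposes a flow table, and the controller programs it with match/action entries over a fixed set of header fields, here the ten-tuple of OpenFlow 1.0.

  from dataclasses import dataclass
  from typing import List, Optional

  @dataclass
  class Match:
      # None means "wildcard this field"
      in_port: Optional[int] = None
      dl_src: Optional[str] = None     # Ethernet source
      dl_dst: Optional[str] = None     # Ethernet destination
      dl_type: Optional[int] = None    # Ethertype
      dl_vlan: Optional[int] = None    # VLAN id
      nw_src: Optional[str] = None     # IP source
      nw_dst: Optional[str] = None     # IP destination
      nw_proto: Optional[int] = None   # IP protocol
      tp_src: Optional[int] = None     # transport source port
      tp_dst: Optional[int] = None     # transport destination port

  @dataclass
  class FlowEntry:
      match: Match
      actions: List[str]               # e.g., ["output:3"] or ["drop"]
      priority: int = 100

  # A controller programs the switch by installing entries such as:
  http_to_port3 = FlowEntry(
      match=Match(dl_type=0x0800, nw_proto=6, tp_dst=80),
      actions=["output:3"])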
In this paper, we describe FlowVisor: a network virtualization layer that we built and deployed in our net-
work. Much as a hypervisor resides between software
and hardware on a PC, the FlowVisor uses OpenFlow
as a hardware abstraction layer to sit logically between
control and forwarding paths on a network device. To
get hands-on experience in running virtual networks,
we deployed FlowVisor in our own production network and use it to create experimental and production virtual slices of our campus wireline and wireless network. The resulting virtual network runs on existing or new low-cost hardware, runs at line rate, and is backward compatible with our current legacy network. We have gained some experience using it as our everyday
network, and we believe the same approach might be
used to virtualize networks in data centers, enterprises,
homes, WANs and so on.
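To make the interposition concrete, the following sketch (illustrative Python; all names are our own and do not reflect FlowVisor's actual implementation) shows the pattern: the slicing layer terminates each guest controller's OpenFlow connection, rewrites and polices messages so they stay within that controller's slice, and relays switch events only to the slice(s) that own them.

  class SlicePolicy:
      """Per-slice policy stub: which traffic and resources a slice may use."""
      def rewrite(self, msg):
          # Confine a controller's rule to the slice's share of the network.
          return msg
      def allows(self, msg):
          # Enforce per-slice resource limits before forwarding the message.
          return True
      def owns(self, msg):
          # Decide whether a switch event belongs to this slice.
          return True

  class SlicingProxy:
      """Sits on the control channel between guest controllers and switches."""
      def __init__(self, switches, slices):
          self.switches = switches   # datapath id -> switch connection
          self.slices = slices       # slice name -> (controller connection, SlicePolicy)

      def from_controller(self, slice_name, datapath, msg):
          # A guest controller sent a message destined for a switch.
          controller, policy = self.slices[slice_name]
          msg = policy.rewrite(msg)
          if policy.allows(msg):
              self.switches[datapath].send(msg)

      def from_switch(self, datapath, msg):
          # A switch sent an event (e.g., packet-in); deliver it only to the
          # slice(s) whose policy claims the event.
          for name, (controller, policy) in self.slices.items():
              if policy.owns(msg):
                  controller.send(msg)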
Our goal is not to claim that our system is perfect: as we will see, there are several open questions. Rather, we are trying to understand what is easy and what is hard, and to guide the evolution of our hardware abstraction to make it easier to virtualize the network in the future.
Our roadmap for the rest of the paper is as follows.
We first describe our specific vision of network virtualization in §2. In §3 we describe the FlowVisor’s design and architecture, and how we implement strong isolation (§4) between virtual instances. We validate (§5) the FlowVisor’s isolation capabilities and quantify its overhead. In §6 we describe the FlowVisor’s deployment in
our production network and our experience using it to
run network experiments in distinct virtual networks on
the same physical network. We finish with some con-
cluding remarks.
2. NETWORK VIRTUALIZATION
In order to virtualize a network, we need to know
what resources we are trying to slice. We argue that
there are five primary slicing dimensions:
Bandwidth. It should be possible to give each slice its
own fraction of bandwidth on a link. This requires
a basic primitive to divide link bandwidth. There
are well-known ways to do this, and the hardware
abstraction can provide some choice as to how it
is implemented (e.g., WDM in an optical network, emulated circuits in a packet-switched network). All elements on the forwarding path that limit the forwarding rate need to be sliced too. For example, if forwarding takes place in software, then the CPU needs to be virtualized as well (a sketch of such per-slice budgets appears after this list).
Topology. Each slice should have its own view of net-
work nodes (e.g., switches and routers) and the
connectivity between them. In this way, slices can
experience virtual network events such as link fail-
ure and forwarding loops.
Traffic. It should be possible to associate a specific set
of traffic to one (or more) virtual networks so that
one set of traffic can be cleanly isolated from an-
other. Here, traffic is defined broadly. It could be
all packets to/from a set of addresses; it could be
all traffic belonging to a group of users. It could be
quite specific, such as all HTTP traffic or all traffic with even-numbered Ethernet addresses, or very general, such as a specific user’s experiment. In gen-
eral we believe the hardware abstraction should
work with – but not be in any way constrained
by – the specific layering structures in use today.
It should provide a way to slice the forwarding
plane even as new protocols and address formats
are defined at any layer. This suggests a very flexible way to define and slice the header space (or “flowspace,” as we will call it later); a brief sketch of one such representation appears after this list.
Device CPU. Switches and routers have computational
resources that must be sliced. Without proper
CPU slicing, switches will stop forwarding slow-path packets (e.g., packets with IP options or IGMP join/leave messages), stop updating statistics counters (e.g., SNMP) and, more importantly, stop processing updates to the forwarding table (e.g., route changes).
Forwarding Tables. Network devices typically sup-
port a finite number of forwarding rules (e.g.,
TCAM entries). Failure to isolate forwarding en-
tries between slices might allow one slice to pre-
vent another from forwarding packets.
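As a concrete illustration of the traffic dimension above, the sketch below (Python; the field names reuse the OpenFlow-style match from earlier, and everything else is our own hypothetical naming) represents a flowspace as a set of header-field constraints, with unlisted fields treated as wildcards, and maps a packet header to the slice(s) whose flowspace contains it.

  def matches(constraints, header):
      # A header lies in a flowspace if every constrained field agrees;
      # fields not listed in `constraints` are wildcards.
      return all(header.get(field) == value
                 for field, value in constraints.items())

  # Each slice claims a region of the header space (its "flowspace").
  flowspaces = {
      "http-experiment": {"dl_type": 0x0800, "nw_proto": 6, "tp_dst": 80},
      "vlan-42-slice":   {"dl_vlan": 42},
      "catch-all":       {},   # wildcard flowspace: matches every packet
  }

  def slices_for(header, spaces=flowspaces):
      # Return every slice whose flowspace contains this packet header.
      return [name for name, constraints in spaces.items()
              if matches(constraints, header)]

  # An HTTP packet falls into the experiment slice and the wildcard slice;
  # a real slicing layer must also resolve such overlaps (e.g., by priority).
  packet = {"dl_type": 0x0800, "nw_proto": 6, "tp_dst": 80, "dl_vlan": 7}
  print(slices_for(packet))   # ['http-experiment', 'catch-all']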
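The remaining dimensions amount to per-slice budgets that the slicing layer must enforce. The following sketch (again illustrative Python; the thresholds and names are assumptions made for the example, not values from this paper) shows simple accounting for a slice's bandwidth share, its control-plane message rate, and its forwarding-table quota.

  from dataclasses import dataclass

  @dataclass
  class SliceBudget:
      bandwidth_share: float     # fraction of each link's capacity, e.g., 0.1
      max_flow_entries: int      # forwarding-table (e.g., TCAM) quota
      max_msgs_per_sec: int      # cap on messages that consume switch CPU
      entries_in_use: int = 0
      msgs_this_second: int = 0

      def admit_flow_entry(self) -> bool:
          # Admit a new rule only if the slice is within its table quota,
          # so one slice cannot exhaust the table and starve the others.
          if self.entries_in_use >= self.max_flow_entries:
              return False
          self.entries_in_use += 1
          return True

      def admit_control_msg(self) -> bool:
          # Rate-limit slow-path/control messages to protect the device CPU.
          if self.msgs_this_second >= self.max_msgs_per_sec:
              return False
          self.msgs_this_second += 1
          return True

  # Example: a small experimental slice alongside the production slice.
  experiment = SliceBudget(bandwidth_share=0.1, max_flow_entries=100,
                           max_msgs_per_sec=50)
  production = SliceBudget(bandwidth_share=0.9, max_flow_entries=1500,
                           max_msgs_per_sec=500)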