In this post we will see how OVN implements virtual networks for OpenStack. The post is structured so that, starting from the highest level of networking abstraction, we delve deeper into implementation details with each subsequent section. The biggest emphasis will be on how the networking data model gets transformed into a set of logical flows, which eventually become OpenFlow flows. The final section will introduce a new overlay protocol, GENEVE, and explain why VXLAN no longer satisfies the needs of an overlay protocol.
OpenStack - virtual network topology
In the previous post we installed OpenStack and created a simple virtual topology, shown below. In OpenStack's data model this topology consists of the following elements:
- Network defines a virtual L2 broadcast domain
- Subnet, attached to a network, defines an IP subnet within it
- Router provides connectivity between all directly connected subnets
- Port defines a VM's point of attachment to the subnet
So far nothing unusual: this is the standard Neutron data model. All of this information is stored in Neutron's database and can be queried with neutron CLI commands.
OVN Northbound DB - logical network topology
Every call to create an element of the above data model is forwarded to the OVN ML2 driver, as defined by the mechanism driver setting of the ML2 plugin. This driver is responsible for creating an appropriate data model inside the OVN Northbound DB. The main elements of this data model are:
- Switch equivalent of a Neutron Network, enables L2 forwarding between all attached ports
- Distributed Router provides distributed routing between directly connected subnets
- Gateway Router provides connectivity between external networks and distributed routers, implements NAT and Load Balancing
- Port of a logical switch, attaches VM to the switch
This is a visual representation of our network topology inside OVN's Northbound DB, built from the output of the ovn-nbctl show command:
This topology is pretty similar to Neutron's native data model, with the exception of the gateway router. In OVN, a gateway router is a special non-distributed router which performs functions that are very hard or impossible to distribute amongst all nodes, like NAT and Load Balancing. This router exists on a single compute node, selected by the scheduler according to the ovn_l3_scheduler setting of the ML2 plugin. It is attached to a distributed router via a point-to-point /30 subnet defined in the ovn_l3_admin_net_cidr setting of the ML2 plugin.
Apart from the logical network topology, Northbound database keeps track of all QoS, NAT and ACL settings and their parent objects. The detailed description of all tables and properties of this database can be found in the official Northbound DB documentation.
OVN Southbound DB - logical flows
The ovn-northd process running on the controller node translates the above logical topology into a set of tables stored in the Southbound DB. Each row in those tables is a logical flow, and together they form a forwarding pipeline, stringing together multiple actions to be performed on a packet. These actions range from dropping the packet, through header modification, to outputting it. The stringing is implemented with a special next action, which moves the packet one step down the pipeline, starting from table 0. Let's have a look at simplified versions of the L2 and L3 forwarding pipelines using examples from our virtual topology.
In the first example we’ll explore the L2 datapath between VM1 and VM3. Both VMs are attached to the ports of the same logical switch. The full datapath of a logical switch consists of two parts - ingress and egress datapath (the direction is from the perspective of a logical switch). The ultimate goal of an ingress datapath is to determine the output port or ports (in case of multicast) and pass the packet to the egress datapath. The egress datapath does a few security checks before sending the packet out to its destination. Two things are worth noting at this stage:
- The two datapaths can be located either on the same or on two different hypervisor nodes. In the latter case, the packet is passed between the two nodes in an overlay tunnel.
- The egress datapath does not have a destination lookup step, which means that all information about the output port MUST be supplied by the ingress datapath. This means that the destination lookup does not have to be done twice, and it also has some interesting implications for the choice of encapsulation protocol, as we'll see in the next section.
Let's have a closer look at each stage of the forwarding pipeline. I'll include snippets of logical flows demonstrating the most interesting behaviour at each stage. The full logical datapath is quite long and can be viewed with the ovn-sbctl lflow-list [DATAPATH] command. Here is some useful information, collected from the Northbound database, that will be used in the examples below:
- Port security - makes sure that the incoming packet has the correct source MAC and IP addresses.
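A sketch of what these flows typically look like (the port name "vm1-port" and the MAC address are placeholders; VM1's IP 10.0.0.2 comes from our topology, and table numbers are approximate):

```
table=0 (ls_in_port_sec_l2), priority=50,
    match=(inport == "vm1-port" && eth.src == {fa:16:3e:aa:bb:01}), action=(next;)
table=1 (ls_in_port_sec_ip), priority=90,
    match=(inport == "vm1-port" && eth.src == fa:16:3e:aa:bb:01 &&
           ip4.src == {10.0.0.2}), action=(next;)
table=1 (ls_in_port_sec_ip), priority=80,
    match=(inport == "vm1-port" && eth.src == fa:16:3e:aa:bb:01 && ip), action=(drop;)
```

Packets whose source MAC or IP does not match the addresses assigned to the port fall through to the lower-priority drop flow.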
- Egress ACL - set of tables that implement Neutron's Egress Security Group functionality. The default rules allow all egress traffic from a VM. The first flow below matches all new connections coming from VM1 and marks them for connection tracking with reg0 = 1. The next table catches these marked packets and commits them to the connection tracker. The special ct_label=0/1 action ensures that return traffic is allowed, which is standard behaviour for any stateful firewall.
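A hedged reconstruction of what these flows look like (port name is a placeholder, table numbers approximate):

```
table=2 (ls_in_pre_acl), priority=100, match=(ip), action=(reg0[0] = 1; next;)
table=3 (ls_in_pre_stateful), priority=100, match=(reg0[0] == 1), action=(ct_next;)
table=6 (ls_in_acl), priority=2002,
    match=(ct.new && inport == "vm1-port" && ip4), action=(reg0[1] = 1; next;)
table=6 (ls_in_acl), priority=65535,
    match=(!ct.new && ct.est && !ct.rel && !ct.inv && ct_label[0] == 0), action=(next;)
table=7 (ls_in_stateful), priority=100,
    match=(reg0[1] == 1), action=(ct_commit(ct_label=0/1); next;)
```

The highest-priority flow lets established (return) traffic through without re-evaluating the ACLs.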
- ARP Responder - matches an incoming ARP/ND request and generates an appropriate ARP/ND response. The way it is accomplished is similar to Neutron’s native ARP responder feature. Effectively an ARP request gets transformed into an ARP response by swapping source and destination fields.
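A sketch of an ARP responder flow for VM1's address (MAC is a placeholder, table number approximate):

```
table=10 (ls_in_arp_rsp), priority=50,
    match=(arp.tpa == 10.0.0.2 && arp.op == 1),
    action=(eth.dst = eth.src; eth.src = fa:16:3e:aa:bb:01;
            arp.op = 2;                  /* request -> reply */
            arp.tha = arp.sha; arp.sha = fa:16:3e:aa:bb:01;
            arp.tpa = arp.spa; arp.spa = 10.0.0.2;
            outport = inport; flags.loopback = 1; output;)
```

Note how the action rewrites the request in place and sends it straight back out of the port it arrived on.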
- DHCP Processing - set of tables that implement DHCP server functionality using an approach similar to the ARP responder described above.
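A hedged sketch of the DHCP flows (the port name, MAC, lease time and the assumed gateway 10.0.0.1 are placeholders):

```
table=11 (ls_in_dhcp_options), priority=100,
    match=(inport == "vm1-port" && eth.src == fa:16:3e:aa:bb:01 &&
           ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 &&
           udp.src == 68 && udp.dst == 67),
    action=(reg0[3] = put_dhcp_opts(offerip = 10.0.0.2, lease_time = 43200,
            netmask = 255.255.255.0, router = 10.0.0.1, server_id = 10.0.0.1); next;)
table=12 (ls_in_dhcp_response), priority=100,
    match=(inport == "vm1-port" && reg0[3]),
    action=(eth.dst = eth.src; eth.src = fa:16:3e:aa:bb:01;
            ip4.src = 10.0.0.1; udp.src = 67; udp.dst = 68;
            outport = inport; flags.loopback = 1; output;)
```

Just like the ARP responder, the DHCP request is transformed into a response and reflected back out of the incoming port.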
- Destination Lookup - implements L2 forwarding based on the destination MAC address of a frame. At this stage the outport variable is set to the VM3’s port UUID.
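A sketch of the destination lookup flows (VM3's MAC and port name are placeholders):

```
table=13 (ls_in_l2_lkup), priority=50,
    match=(eth.dst == fa:16:3e:aa:bb:03), action=(outport = "vm3-port"; output;)
table=13 (ls_in_l2_lkup), priority=100,
    match=(eth.mcast), action=(outport = "_MC_flood"; output;)
```

Broadcast and multicast frames hit the higher-priority flow and get flooded via the special _MC_flood multicast group.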
- Ingress ACL - set of tables that implement Neutron's Ingress Security Group functionality. For the sake of argument let's assume that we have allowed inbound SSH connections. The principle is the same as before - the packet gets matched in one table and submitted to connection tracking in another.
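A hedged sketch of the ingress (from the VM's perspective) ACL flows with an SSH allow rule (port name is a placeholder, table numbers approximate):

```
table=0 (ls_out_pre_acl), priority=100, match=(ip), action=(reg0[0] = 1; next;)
table=1 (ls_out_pre_stateful), priority=100, match=(reg0[0] == 1), action=(ct_next;)
table=4 (ls_out_acl), priority=2002,
    match=(ct.new && outport == "vm3-port" && ip4 && tcp.dst == 22),
    action=(reg0[1] = 1; next;)
table=4 (ls_out_acl), priority=65535,
    match=(!ct.new && ct.est && !ct.rel && !ct.inv && ct_label[0] == 0), action=(next;)
table=6 (ls_out_stateful), priority=100,
    match=(reg0[1] == 1), action=(ct_commit(ct_label=0/1); next;)
```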
- Port Security - implements inbound port security for destination VM by checking the sanity of destination MAC and IP addresses.
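A sketch of the egress port-security flows for VM3 (the port name, MAC and the assumed IP 10.0.0.4 are placeholders):

```
table=6 (ls_out_port_sec_ip), priority=90,
    match=(outport == "vm3-port" && eth.dst == fa:16:3e:aa:bb:03 &&
           ip4.dst == {10.0.0.4, 255.255.255.255, 224.0.0.0/4}), action=(next;)
table=7 (ls_out_port_sec_l2), priority=50,
    match=(outport == "vm3-port" && eth.dst == {fa:16:3e:aa:bb:03}), action=(output;)
```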
Similar to a logical switch pipeline, L3 datapath is split into ingress and egress parts. In this example we’ll concentrate on the Gateway router datapath. This router is connected to a distributed logical router via a transit subnet (SWtr) and to an external network via an external bridge (SWex) and performs NAT translation for all VM traffic.
Here is some useful information about router interfaces and ports that will be used in the examples below.
| SW function | IP | MAC | Port UUID |
- Port security - implements sanity check for all incoming packets.
- IP Input - performs additional L3 sanity checks and implements typical IP services of a router (e.g. ICMP/ARP reply)
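A hedged sketch of the IP Input flows on the gateway router; the IP 169.254.128.2 and MAC fa:16:3e:2a:7f:25 are taken from the transit-subnet values used later in this post, while the port name and table numbers are placeholders:

```
table=1 (lr_in_ip_input), priority=100,
    match=(ip4.mcast || ip4.src == 255.255.255.255 ||
           ip4.src == 127.0.0.0/8 || ip4.dst == 127.0.0.0/8), action=(drop;)
table=1 (lr_in_ip_input), priority=90,
    match=(ip4.dst == 169.254.128.2 && icmp4.type == 8 && icmp4.code == 0),
    action=(ip4.src <-> ip4.dst; ip.ttl = 255; icmp4.type = 0;
            flags.loopback = 1; next;)
table=1 (lr_in_ip_input), priority=90,
    match=(inport == "gw-transit" && arp.tpa == 169.254.128.2 && arp.op == 1),
    action=(eth.dst = eth.src; eth.src = fa:16:3e:2a:7f:25; arp.op = 2;
            arp.tha = arp.sha; arp.sha = fa:16:3e:2a:7f:25;
            arp.tpa = arp.spa; arp.spa = 169.254.128.2;
            outport = inport; flags.loopback = 1; output;)
table=1 (lr_in_ip_input), priority=30, match=(ip4 && ip.ttl == {0, 1}), action=(drop;)
```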
- UNSNAT - translates the destination IP to the real address for packets coming from external networks
- DNAT - implements what is commonly known as static NAT, i.e. performs one-to-one destination IP translation for every configured floating IP.
- IP routing - implements L3 forwarding based on the destination IP address. At this stage the outport is decided, the IP TTL is decremented and the new next-hop IP is set in register 0.
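A sketch of a default-route flow on the gateway router (the next-hop 203.0.113.1, MAC and port name are purely illustrative):

```
table=5 (lr_in_ip_routing), priority=1,
    match=(ip4.dst == 0.0.0.0/0),
    action=(ip.ttl--; reg0 = 203.0.113.1; eth.src = fa:16:3e:cc:dd:ee;
            outport = "gw-external"; flags.loopback = 1; next;)
```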
- Next Hop Resolver - discovers the next-hop MAC address for a packet. This could either be a statically configured value, when the next-hop is an OVN-managed router, or a dynamic binding learned through ARP and stored in the special MAC_Binding table of the Southbound DB.
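A hedged sketch of both cases (169.254.128.1 is the assumed R1 end of the /30 transit subnet; port name and MACs are placeholders):

```
# static binding: the peer is another OVN-managed router
table=6 (lr_in_arp_resolve), priority=100,
    match=(outport == "gw-transit" && reg0 == 169.254.128.1),
    action=(eth.dst = fa:16:3e:11:22:33; next;)
# dynamic binding: look the next-hop up in the MAC_Binding table
table=6 (lr_in_arp_resolve), priority=0, match=(ip4),
    action=(get_arp(outport, reg0); next;)
```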
- SNAT - implements what is commonly known as overload NAT. Translates source IP, source UDP/TCP port number and ICMP Query ID to hide them behind a single IP address
- Output - sends the packet out of the port determined during the IP routing stage.
This was a very high-level, abridged and simplified version of how logical datapaths are built in OVN. Hopefully this lays enough groundwork to move on to the official northd documentation which describes both L2 and L3 datapaths in much greater detail.
Apart from the logical flows, the Southbound DB also contains a number of tables that establish logical-to-physical bindings. For example, the Port_Binding table ties together a logical switch, a logical port, the logical port's overlay ID (a.k.a. tunnel key) and the unique hypervisor ID. In the next section we'll see how this information is used to translate logical flows into OpenFlow flows on each compute node. For a full description of the Southbound DB, its tables and their properties, refer to the official SB schema documentation.
OVN Controller - OpenFlow flows
The OVN Controller process is the distributed part of the OVN SDN controller. This process, running on each compute node, connects to the Southbound DB via OVSDB and configures the local OVS according to the information received from it. It also uses the Southbound DB to exchange physical location information with other hypervisors. The two most important bits of information that OVN Controller contributes to the Southbound DB are the physical location of logical ports and the node's overlay tunnel IP address. These are the last two missing pieces needed to map logical flows to physical nodes and networks.
The whole flat space of OpenFlow tables is split into multiple areas. Tables 16 to 47 implement the ingress logical pipeline and tables 48 to 63 implement the egress logical pipeline. These tables have no notion of physical ports and are functionally equivalent to the logical flows in the Southbound DB. Tables 0 and 65 are responsible for mapping between the physical and logical realms. In table 0, packets are matched on the physical incoming port and assigned to the correct logical datapath, as defined by the Port_Binding table. In table 65, the outport determined during ingress pipeline processing is mapped to a local physical interface and the packet is sent out.
To demonstrate the details of the OpenFlow implementation, I'll use the traffic flow between VM1 and an external destination (8.8.8.8). For the sake of brevity I will only cover the major steps of packet processing inside OVS, omitting security checks and ARP/DHCP processing.
When packets traverse OpenFlow tables they get labelled, or annotated, with special values to simplify matching in subsequent tables. For example, when table 0 matches the incoming port, it annotates the packet with a datapath ID. Since it would have been impractical to label packets with globally unique UUIDs from the Southbound DB, these UUIDs get mapped to smaller values called tunnel keys. To make things even more confusing, each port also has a local kernel ID, unique within each hypervisor. We'll need both tunnel keys and local port IDs to be able to track packets inside OVS. The figure below depicts all port and datapath IDs that have been collected from the Southbound DB and the local OVSDB on each hypervisor. Local port numbers are attached with a dotted line to their respective tunnel keys.
When VM1 sends the first packet to 8.8.8.8, it reaches OVS on local port 13. OVN Controller knows that this port belongs to VM1 and installs an OpenFlow rule to match all packets from this port and annotate them with a datapath ID (OXM_OF_METADATA), an incoming port ID (NXM_NX_REG14) and a conntrack zone (NXM_NX_REG13). It then moves these annotated packets to the first table of the ingress pipeline.
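A sketch of what this table 0 flow looks like in ovs-ofctl output (the datapath ID 0x1 and conntrack-zone value are illustrative; port 13 and the register layout follow the text above):

```
table=0, priority=100, in_port=13
    actions=load:0x5->NXM_NX_REG13[],load:0x1->OXM_OF_METADATA[],
            load:0x1->NXM_NX_REG14[],resubmit(,16)
```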
Skipping to the L2 MAC address lookup stage, the output port (0x1) is decided based on the destination MAC address and saved in register 15.
Finally, the packet reaches the last table where it is sent out the physical patch port interface towards R1.
The other end of this patch port is connected to a local instance of the distributed router R1. That means our packet, unmodified, re-enters OpenFlow table 0, only this time on a different port. Local port 2 is associated with the logical pipeline of a router, hence the metadata for this packet is set to 4.
The packet progresses through the logical router datapath and finally gets to table 21 where the destination IP lookup takes place. It matches the catch-all default route rule, and the values for its next-hop IP (0xa9fe8002), MAC address (fa:16:3e:2a:7f:25) and logical output port (0x03) are set.
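A sketch of that route-lookup flow, using the values from the text (table and register numbering as described above):

```
table=21, priority=1, ip, metadata=0x4
    actions=dec_ttl(), load:0xa9fe8002->NXM_NX_REG0[],
            set_field:fa:16:3e:2a:7f:25->eth_src,
            load:0x3->NXM_NX_REG15[], resubmit(,22)
```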
Table 65 converts the logical output port 3 to physical port 6, which is yet another patch port connected to a transit switch.
The packet once again re-enters OpenFlow pipeline from table 0, this time from port 5. Table 0 maps incoming port 5 to the logical datapath of a transit switch with Tunnel key 7.
Destination lookup determines the output port (2) but this time, instead of entering the egress pipeline locally, the packet gets sent out the physical tunnel port (7) which points to the IP address of a compute node hosting the GW router. The headers of an overlay packet are populated with logical datapath ID (0x7), logical input port (copied from register 14) and logical output port (0x2).
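A sketch of the tunnel-output flow; the datapath ID (0x7), output port (0x2) and tunnel port (7) come from the text, while the exact field layout (logical ports carried in the Geneve option register tun_metadata0) is how OVN encodes them:

```
table=32, priority=100, reg15=0x2, metadata=0x7
    actions=load:0x7->NXM_NX_TUN_ID[0..23],
            move:NXM_NX_REG14[0..14]->NXM_NX_TUN_METADATA0[16..30],
            move:NXM_NX_REG15[0..15]->NXM_NX_TUN_METADATA0[0..15],
            output:7
```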
When the packet reaches the destination node, it once again enters OpenFlow table 0, but this time all the information is extracted from the tunnel header.
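A sketch of the reverse mapping on the receiving node (the local tunnel port number 7 is illustrative); the VNI becomes the datapath metadata, and the Geneve option fields are copied back into the logical port registers before the packet is dispatched towards the egress pipeline:

```
table=0, priority=100, in_port=7
    actions=move:NXM_NX_TUN_ID[0..23]->OXM_OF_METADATA[0..23],
            move:NXM_NX_TUN_METADATA0[16..30]->NXM_NX_REG14[0..14],
            move:NXM_NX_TUN_METADATA0[0..15]->NXM_NX_REG15[0..15],
            resubmit(,33)
```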
At the end of the transit switch datapath the packet gets sent out port 12, whose peer is patch port 16.
The packet re-enters OpenFlow table 0 from port 16, where it gets mapped to the logical datapath of a gateway router.
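A sketch of this mapping (the gateway router's datapath ID 0x8 and conntrack-zone value are illustrative; port 16 comes from the text):

```
table=0, priority=100, in_port=16
    actions=load:0x6->NXM_NX_REG13[],load:0x8->OXM_OF_METADATA[],
            load:0x1->NXM_NX_REG14[],resubmit(,16)
```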
Similar to a distributed router R1, table 21 determines the next-hop MAC address for a packet and saves the output port in register 15.
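A hedged sketch of the route lookup and next-hop resolution on the gateway router; the external next-hop 203.0.113.1 (0xcb007101), the MACs and the datapath ID 0x8 are all illustrative:

```
table=21, priority=1, ip, metadata=0x8
    actions=dec_ttl(), load:0xcb007101->NXM_NX_REG0[],
            set_field:fa:16:3e:cc:dd:ee->eth_src,
            load:0x2->NXM_NX_REG15[], resubmit(,22)
table=22, priority=100, reg0=0xcb007101, reg15=0x2, metadata=0x8
    actions=set_field:52:54:00:12:34:56->eth_dst, resubmit(,23)
```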
The first table of the egress pipeline source-NATs packets to the external IP address of the GW router.
The modified packet is sent out the physical port 14 towards the external switch.
The external switch determines the output port, connected to br-ex on the local hypervisor, and sends the packet out.
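A sketch of these last few steps; the NAT address 203.0.113.10, the external switch's datapath ID 0x9 and the br-ex-facing port 1 are illustrative, while ports 14 and the gateway datapath 0x8 follow the text:

```
table=48, priority=100, ip, metadata=0x8
    actions=ct(commit,nat(src=203.0.113.10)),resubmit(,49)
table=65, priority=100, reg15=0x2, metadata=0x8
    actions=output:14
table=0, priority=100, in_port=14
    actions=load:0x9->OXM_OF_METADATA[],load:0x1->NXM_NX_REG14[],resubmit(,16)
table=65, priority=100, reg15=0x2, metadata=0x9
    actions=output:1
```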
As we've just seen, OpenFlow mirrors the logical topology by interconnecting the logical datapaths of switches and routers with virtual point-to-point patch ports. This may seem like an unnecessary modelling element with the potential for a performance impact. However, when flows get installed in the kernel datapath these patch ports do not exist, which means there is no performance impact on packets in the fastpath.
Physical network - GENEVE overlay
Before we wrap up, let's have a quick look at the new overlay protocol, GENEVE. The goal of any overlay protocol is to transport all the necessary tunnel keys. With VXLAN, the only tunnel key that can be transported is the Virtual Network Identifier (VNI). In OVN's case these tunnel keys include not only the logical datapath ID (commonly known as the VNI) but also both the input and output port IDs. You could have carved up the 24 bits of the VXLAN tunnel ID to encode all this information, but that would only have given you 256 unique values per key. Some other overlay protocols, like STT, have an even bigger tunnel ID but they, too, have a strict upper limit.
GENEVE was designed with a variable-length header. The first few bytes are well-defined, fixed-size fields, followed by variable-length Options. This kind of structure allows software developers to innovate at their own pace while still getting the benefits of hardware offload for the fixed-size portion of the header. OVN developers decided to use an Option of type 0x80 to store the 15-bit logical ingress port ID and the 16-bit egress port ID (the extra bit is used for logical multicast groups).
The figure above shows an ICMP ping coming from VM1 (10.0.0.2) to Google's DNS. As I showed in the previous section, GENEVE is used between the ingress and egress pipelines of the transit switch (SWtr), whose datapath ID is encoded in the VNI field (0x7). Packets enter the transit switch on port 1 and leave it on port 2. These two values are encoded in the 00010002 value of the Options Data field.
So now that GENEVE has taken over as the inter-hypervisor overlay protocol, does that mean that VXLAN is dead? OVN still supports VXLAN but only for interconnects with 3rd party devices like VXLAN-VLAN gateways or VXLAN TOR switches. Rephrasing the official OVN documentation, VXLAN gateways will continue to be supported but they will have a reduced feature set due to lack of extensibility.
OpenStack networking has always been one of the first use cases of any new SDN controller. All the major SDN platforms like ACI, NSX, Contrail, VSP or ODL have some form of OpenStack integration. And it made sense, since native Neutron networking has always been one of the biggest pain points in OpenStack deployments. As I've just demonstrated, OVN can now do all of the common networking functionality natively, without having to rely on 3rd party agents. In addition to that, it has fantastic documentation, implements all forwarding inside a single OVS bridge, and it is an open-source project. As an OpenStack networking solution it is still, perhaps, a few months away from being production-ready: active/active HA is not supported with OVSDB, GW router scheduling options are limited, and there is no native support for DNS or metadata proxy. However, I anticipate that starting from the next OpenStack release (Ocata, Feb 2017) OVN will be ready for mass deployment, even by companies without an army of OVS/OpenStack developers. And when that happens, there will be even less need for proprietary OpenStack SDN platforms.