Introduction
What’s a provider network?
Provider Network [pruh-VAHY-der NET-wurk] – noun: In OpenStack parlance, a network that, unlike a tenant network, is generally not managed by OpenStack. Instead, the network exists “out there” and OpenStack simply interfaces with it.
Of course there is more to say about the provider/tenant distinction in OpenStack (see here for more), but from a NetVirt perspective, this definition will suffice. Provider networks are useful when you need either to access the external world (an external network) or to utilize an existing network when forwarding traffic between OpenStack nodes (east/west traffic).
Overview
The best way I know to explain provider networks follows (more or less) the steps used to configure them. As such, this post is structured in the same order.
- Step 1: Name the provider network and tell each OpenStack node how to connect to it.
- Step 2: Create the network in OpenStack Neutron.
- Step 3: Build your OpenStack networking and VMs to use that provider network (however you’ve intended).
I have included links to configuration files (local.conf) and scripts for running simple provider networks using DevStack and NetVirt Boron. Take a look at that section for basic explanations of the setup and how to use the configs and scripts. I will make reference to these throughout the post at relevant points.
Step 1: Name the provider network and tell each OpenStack node how to connect to it
Unlike tenant networks, provider networks must exist before you get to work with OpenStack. NetVirt Boron supports three ways of connecting to your existing provider network:
- Connect to a local interface, e.g., eth0, that is connected to the provider network (most of the time this is the option you’ll want)
- Connect to an Open vSwitch bridge that is connected to the provider network
- Manually configure NetVirt’s “integration bridge”, br-int, with a port connected to the provider network
Internally, all three options are configured per OpenStack node in the OVS Open_vSwitch table’s Other_Config column. You can inspect the value of this column as follows:
$ sudo ovs-vsctl get Open_vSwitch . Other_Config:provider_mappings
The Other_Config column will contain something of the format:
<physnet>:<connector>
In this format, <physnet> is the name you want to give this provider network and <connector> is either (option 1 above) the name of a local interface, (option 2) the name of a bridge on Open vSwitch, or (option 3) the name of a port already present on br-int.
Obviously, since <connector> may be the name of an interface on the local machine, it may need to be different per OpenStack node.
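If you want to configure the mapping by hand on a node, you can set it directly in the Open_vSwitch table. This is a sketch; “joshnet” and “eth0” are example values for your physnet name and local interface:

```shell
# Set the provider mapping on this node (example values: the physnet
# is named "joshnet" and the local interface is "eth0"):
sudo ovs-vsctl set Open_vSwitch . other_config:provider_mappings=joshnet:eth0

# Verify the value that was set:
sudo ovs-vsctl get Open_vSwitch . other_config:provider_mappings
```

Remember that this has to be done on every OpenStack node that should be connected to the provider network.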
If you are using DevStack, Other_Config:provider_mappings is configured via the ODL_PROVIDER_MAPPINGS setting in local.conf. The networking_odl module picks up that setting and configures Other_Config:provider_mappings after OVS is brought up.
You can check out local.conf files for my control and compute nodes at [1] and [2].
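For reference, the relevant local.conf fragment looks something like this (“joshnet” and “eth0” are example values; use your own physnet name and interface, which may differ per node):

```shell
# DevStack local.conf (localrc section) -- example values only:
ODL_PROVIDER_MAPPINGS=joshnet:eth0
```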
Step 1 (continued): NetVirt Does the Wiring
When an OVS instance connects to NetVirt, NetVirt reads its Other_Config:provider_mappings field. Remember the three options for connecting to external networks? Options 2 and 3 (via a pre-existing OVS bridge or an existing port on br-int) require pre-existing configuration, so NetVirt inspects the OVS configuration to determine whether <connector> is the name of an interface (option 1), a bridge (option 2), or a port (option 3). NetVirt then adds any additional OVS configuration needed:
- If <connector> is the name of a local interface, NetVirt adds it to br-int
- If <connector> is the name of a bridge on the local OVS, NetVirt creates patch ports to patch it to br-int
- If <connector> is the name of a port already on br-int, no additional OVS config is required
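Expressed as manual OVS commands, NetVirt’s wiring amounts to roughly the following. This is a sketch, not NetVirt’s actual implementation; “ens9”, “br-provider”, and the patch port names are example values:

```shell
# Option 1: <connector> is a local interface (e.g., ens9) -- add it to br-int:
sudo ovs-vsctl add-port br-int ens9

# Option 2: <connector> is a bridge (e.g., br-provider) -- create a pair of
# patch ports connecting it to br-int:
sudo ovs-vsctl add-port br-int patch-provider \
    -- set Interface patch-provider type=patch options:peer=patch-int
sudo ovs-vsctl add-port br-provider patch-int \
    -- set Interface patch-int type=patch options:peer=patch-provider

# Option 3: <connector> is a port already on br-int -- nothing to do.
```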
Here’s an example of how the provider network is added to br-int for options 1 and 2 (“ens9” is the <connector> value):
$ sudo ovs-vsctl show
5695f50c-1d90-4bfe-b90f-b27ed510704f
Manager "tcp:10.9.8.1:6640"
is_connected: true
Bridge br-int
Controller "tcp:10.9.8.1:6653"
is_connected: true
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port "ens9"
Interface "ens9"
ovs_version: "2.5.0"
Step 2: Create the network in OpenStack Neutron
The provider mapping tells NetVirt how to connect to a given provider network, but Neutron still needs a network defined that corresponds to that provider network.
Adding an OpenStack neutron provider network is simple from the command line:
$ neutron net-create --provider:network_type vlan --provider:physical_network joshnet --provider:segmentation_id 1010 N1
Note the “--provider:physical_network joshnet”. That’s the <physnet> defined above in Other_Config:provider_mappings. It’s how you tell NetVirt, “this network already exists. It’s a provider network. Want to know how to connect to it? Check out the provider_mappings entry for <physnet> (joshnet in this example).”
The --provider:network_type can be any of several types, but in this example we use vlan. Creating this network causes NetVirt to add the following ingress and egress flows for this network.
$ sudo ovs-ofctl -OOpenFlow13 dump-flows br-int
OFPST_FLOW reply (OF1.3) (xid=0x2):
... table=0, ... priority=10,in_port=1,dl_vlan=1010 actions=pop_vlan,write_metadata:0x20000000001/0xffffff0000000001,goto_table:17
...
... table=220, ... priority=7,reg6=0x200 actions=push_vlan:0x8100,set_field:5106->vlan_vid,output:1
The table:0 flow takes any traffic entering on port 1 tagged as vlan 1010, pops its VLAN header, adds some internal tagging to the metadata (lport=2), and sends the packet to the “dispatcher table” (table:17).
Table:220’s flow checks some internal tagging on reg6, in this case 0x200, which means “send this out the specific provider network.” So, we push a VLAN tag, set the VLAN ID to 1010, and send it out port 1. (The value 5106 is 4096 + 1010: OpenFlow 1.3 requires the OFPVID_PRESENT bit, 0x1000, to be set in the vlan_vid field to indicate that a VLAN tag is present.)
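You can check the vlan_vid arithmetic directly. OpenFlow 1.3 encodes “a VLAN tag is present” as the 0x1000 bit in the vlan_vid field, on top of the 12-bit VLAN ID:

```shell
# OpenFlow 1.3's OFPVID_PRESENT flag is bit 0x1000 (4096):
OFPVID_PRESENT=$(( 16#1000 ))
VLAN_ID=1010

# Value NetVirt writes into set_field:
echo $(( OFPVID_PRESENT | VLAN_ID ))   # 5106

# Recovering the raw VLAN ID from the field value:
echo $(( 5106 & 16#0FFF ))             # 1010
```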
As with any other neutron network, the provider network will need a subnet definition as well. Because the provider network “already exists,” it is crucial that the subnet IPs and ranges align with the actual IPs and ranges defined for the existing network.
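For example, if the existing network were 192.168.50.0/24 with a gateway at 192.168.50.1 (example values; yours will differ), the subnet might be created like this:

```shell
# Example values only -- the CIDR, gateway, and allocation pool MUST
# match the real, pre-existing provider network:
neutron subnet-create N1 192.168.50.0/24 --name N1-subnet \
    --gateway 192.168.50.1 \
    --allocation-pool start=192.168.50.100,end=192.168.50.200
```

Restricting the allocation pool is a good idea when other hosts on the existing network already use addresses in the range, so Neutron does not hand out IPs that are already taken.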
As you add subnets, routers, VMs, etc. to your deployment, these two flows (in tables 0 and 220) will serve as the provider network’s entrance and exit points to the OVS OpenFlow pipeline.
You can check out neutron commands for configuring provider networks in the first few lines of [3] and [4].
Step 3: Build your OpenStack networking and VMs to use that provider network – East/West Traffic
See [3] for the script I used to test this.
Let’s say you want to use your provider network for East/West traffic, meaning, traffic between your OpenStack nodes. All you need to do is add the VMs to the provider network you created.
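Attaching the VMs can be sketched like this (the image and flavor names are typical DevStack defaults and are examples, not requirements):

```shell
# Boot two VMs directly on the provider network N1:
NET_ID=$(neutron net-show N1 -f value -c id)
nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic net-id=$NET_ID vm1
nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic net-id=$NET_ID vm2
```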
OpenStack Config View of VMs and Provider Network
+-------------------------------+
+---------+ | | +---------+
| | | | | |
| | | | | |
| VM 1 +------+ Provider Network +------+ VM 2 |
| | | | | |
| | | | | |
+---------+ | | +---------+
| |
+-------------------------------+
Packets that need to travel from one node to another will always exit the source node via the rule we saw in table:220 and enter the destination node via the flow on table:0, like this:
Flow for a ping from VM 1 to VM 2:
+-----------------------------------------------------+ +-----------------------------------------------------+
| NODE 1 | | NODE 2 |
| | | |
| | | |
| +------------------+ | | +------------------+ |
| | | | | | | |
| | OVS br-int | | | | OVS br-int | |
| | flows | | | | flows | |
| +---------+ | +---+------+ +--------+ +--------+ +------+---+ | +---------+ |
| | <-------------------+ 0 flow <----+ | <----------+ | <----+ flow 220 <-------------------+ | |
| | | | +---+------+ | + | | | | +------+---+ | | | |
| | VM 1 | | | |Provider| |Provider| | | | VM 2 | |
| | | | | | NIC | | NIC | | | | | |
| | | | +---+------+ | + | | + | +------+---+ | | | |
| | +-------------------> 220 flow +----> | +----------> | +----> flow 0 +-------------------> | |
| +---------+ | +---+------+ +--------+ +--------+ +------+---+ | +---------+ |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| +------------------+ | | +------------------+ |
| | | |
| | | |
| | | |
+-----------------------------------------------------+ +-----------------------------------------------------+
Step 3: Build your OpenStack networking and VMs to use that provider network – External Provider Networks
See [4] for the script I used to test this.
Another use of a provider network is connectivity beyond your OpenStack deployment, for example, to the internet. To do this, you must create an OpenStack configuration that has a router to route your traffic OUT to your provider network (and beyond). Like this:
+----------------------+
| |
| Provider Net. |
| |
+----------+-----------+
|
+------+--------+
| |
| Router |
| |
+------+--------+
|
+----------+------------+
| |
| Internal Network |
| |
+----+-------------+----+
| |
+----+----+ +--+------+
| | | |
| VM 1 | | VM N |
| | | |
+---------+ +---------+
The router must be assigned a default route that is the IP of a router that really exists out on the provider network. You must also configure floating IPs or SNAT to translate the internal network’s IP addresses to something that will work beyond the internal OpenStack network.
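The setup can be sketched with Neutron commands like the following. This is illustrative, not the exact script from [4]; the names are examples, and note that router-gateway-set requires the provider network to have been created with --router:external True:

```shell
# Router with its gateway on the provider (external) network N1:
neutron router-create R1
neutron router-gateway-set R1 N1

# Internal network and subnet for the VMs (example CIDR):
neutron net-create internal
neutron subnet-create internal 10.0.0.0/24 --name internal-subnet
neutron router-interface-add R1 internal-subnet

# Optionally allocate a floating IP from the provider network
# (it can then be associated with a VM's port):
neutron floatingip-create N1
```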
Fundamentally, the flows we saw above will function exactly the same way:
+-----------------------------------------------------+
| NODE 1 |
| |
| |
| +------------------+ | +------------------------+
| | | | | |
| | OVS br-int | | | |
| | flows | | | Provider network |
| +---------+ | +---+------+ +--------+ | and beyond |
| | <-------------------+ 0 flow <----+ | <----------+ | |
| | | | +---+------+ | + | | |
| | VM 1 | | | |Provider| | |
| | | | | | NIC | | |
| | | | +---+------+ | + | | |
| | +-------------------> 220 flow +----> | +----------> | |
| +---------+ | +---+------+ +--------+ | |
| | | | | |
| | | | | |
| | | | | |
| +------------------+ | | |
| | +------------------------+
| |
| |
+-----------------------------------------------------+
The router (and floating IPs or NAT) and the internal networks required for the external access are all virtual, and implemented in the OVS OpenFlow flows that lie between our ingress (table:0) and egress (table:220) flows.
Scripts and Configuration Files
The files below come from my own development environment – two OpenStack nodes, control and compute, that connect to an OpenDaylight NetVirt controller running on my laptop. [1] and [2] are my DevStack local.conf files. I encourage reading these files – they will need to be tweaked to run in your environment.
[1] DevStack local.conf for my control node
[2] DevStack local.conf for my compute node
[3] script for creating east/west provider network
[4] script for creating external provider network
[5] patch to enable flat networks in networking_odl mitaka – as of this writing, networking_odl, the DevStack driver for OpenDaylight, does not support flat provider networks. If you want to use a flat provider network, apply this patch to wherever your DevStack places the networking_odl git clone and then rerun stack.sh.