Container Network Interface, aka CNI

vikram fugro
Sep 26, 2018 · 5 min read


CNI is mentioned a lot in conjunction with Kubernetes. It has become an integral part of Kubernetes, silently doing its job of connecting pods across different nodes and adapting well to different kinds of network solutions (overlays, pure L3, etc.). That said, it is not exclusive to Kubernetes. It is a separate framework with its own specification, and it can be used not only with Kubernetes but also with other container orchestrators such as Mesos, OpenShift, etc.

So what is CNI? To put it succinctly, it is an interface between a container runtime and a network plugin. The container runtime (e.g. Docker, rkt) creates a network namespace for the container; the network plugin is the implementation that follows the CNI specification to configure that namespace, i.e. attach it to (or detach it from) the network the plugin implements.

The plugin exists as an executable. When invoked, it reads a JSON config to get all the parameters required to configure the container's network. The plugin can also invoke other plugins for tasks such as allocating an IP address to the container; in other words, CNI keeps IPAM separate and orthogonal from the core network plugin, which is why dedicated CNI IPAM plugins exist. For example, on most occasions during attach, the JSON config will contain an entry for the "net" plugin, which creates the interface(s) through which the container gets attached to the network, and an "ipam" entry that tells the "net" plugin which IPAM plugin to invoke for IP allocation.
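
To make this concrete, here is roughly what a single plugin invocation looks like under the hood. The CNI_* environment variables come straight from the specification; the container ID, netns path and file paths below are illustrative placeholders.

# The runtime (or a tool such as cnitool) executes the plugin binary,
# passing parameters via CNI_* environment variables and feeding the JSON
# network config on stdin; the plugin prints its result (interfaces, IPs,
# routes, DNS) as JSON on stdout. CNI_COMMAND is ADD, DEL or VERSION.
CNI_COMMAND=ADD \
CNI_CONTAINERID=example-container \
CNI_NETNS=/var/run/netns/example-ns \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/bridge < /etc/cni/net.d/10-example.conf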

CNI Specification:

Following are the generic parameters:
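
Briefly, as defined by the spec (the example values in parentheses are illustrative):

cniVersion — the version of the CNI specification the config conforms to (e.g. "0.3.1")
name — the name of the network (e.g. "dbnet")
type — the name of the plugin executable to invoke (e.g. "bridge")
args — optional additional arguments supplied by the container runtime
ipam — the IPAM configuration; it has its own "type" naming the IPAM plugin executable (e.g. "host-local"), plus IPAM-specific fields such as "subnet"
dns — optional DNS settings (nameservers, domain, search domains, options)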

Apart from these params, the config can also contain fields specific to the plugin being used.

Sidenote: It is also possible to invoke multiple CNI plugins for a container. In that case, the JSON config needs a field called "plugins" containing an array of plugin configurations, in the order they should be run (we will see an example of this later).

Other than the net and ipam plugins, there is also another class called "meta" plugins. These act as wrappers over the core net and ipam plugins. A meta plugin generally either translates its own CNI config into core net and ipam config (e.g. flannel for overlay networks), or performs some extra configuration on top of the core plugins' output (e.g. port mapping, tuning interfaces, sysctl settings, etc.).
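
For illustration, a typical flannel meta-plugin config is tiny (the network name here is arbitrary); the flannel plugin reads the subnet information written out by the flannel daemon on the node and generates a full bridge + host-local configuration from it, delegating the actual work to those core plugins:

{
  "name": "mynet",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}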

Let’s get into the setup.

So, I have a vagrant machine running Ubuntu 16.04, and I have built the cnitool executable from github.com/containernetworking/cni and the plugin executables from github.com/containernetworking/plugins. Let's create a directory called cni in $HOME and copy cnitool into it. Next, let's place all the plugin executables inside a plugins directory within the cni directory. Also, for the CNI configs, we will create a directory called net.d in the cni directory. For now, it's empty.
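
For completeness, this is roughly how the two sets of binaries can be built, assuming a working Go toolchain (the exact steps vary a bit between releases of the two repos):

# cnitool is a plain Go program living inside the cni repo
go get github.com/containernetworking/cni/cnitool

# the reference plugins ship with a build script at the top of the repo
# (called build.sh in older checkouts, build_linux.sh in newer ones)
git clone https://github.com/containernetworking/plugins
cd plugins && ./build.sh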

vagrant@machine-01:~/cni$ tree
.
├── cnitool
├── net.d
└── plugins
    ├── bandwidth
    ├── bridge
    ├── dhcp
    ├── flannel
    ├── host-device
    ├── host-local
    ├── ipvlan
    ├── loopback
    ├── macvlan
    ├── portmap
    ├── ptp
    ├── sample
    ├── static
    ├── tuning
    └── vlan

Now, let's have a look at one of the most basic CNI plugins, the "bridge" plugin. The bridge plugin creates... well, a bridge (if not already present) and connects the container to it via a veth pair, just like the well-known docker0 interface.

Assuming we have launched a couple of docker containers with --net=none (i.e. not attached to any network), let's see an example CNI config to add them to a bridge network.
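
For reference, the containers could be started along these lines; the busybox image and the name mycnitest2 are just placeholders, while mycnitest1 is the name used in the commands that follow:

vagrant@machine-01:~$ docker run -d --name mycnitest1 --net=none busybox sleep 10000
vagrant@machine-01:~$ docker run -d --name mycnitest2 --net=none busybox sleep 10000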

30-mybridge.conf
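
A config along these lines does the job; the cniVersion, the bridge name (cni0) and the subnet are illustrative choices, while the network name (dbnet), the host-local IPAM and ipMasq are what the rest of the walkthrough relies on:

{
  "cniVersion": "0.3.1",
  "name": "dbnet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "isDefaultGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/16"
  }
}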

Most of the items here are self-explanatory. As specified, the bridge will act as the default gateway for the containers. The IPAM plugin used here is host-local, a very simple IPAM that allocates IP addresses from a given range (see the "subnet" field inside the "ipam" section). Let's copy this JSON into the net.d directory.

Now let's cd into the cni directory and look at the cnitool we are going to use to add the containers to the network.

vagrant@machine-01:~/cni$ ./cnitool -h
cnitool: Add or remove network interfaces from a network namespace
cnitool add <net> <netns>
cnitool del <net> <netns>

It takes the name of the network (as mentioned in the CNI config files inside the net.d directory) and the path of the container's network namespace.

Let’s run the cnitool now, to add the container.

vagrant@machine-01:~/cni$ sudo CNI_PATH=/home/vagrant/cni/plugins \
NETCONFPATH=/home/vagrant/cni/net.d \
./cnitool add dbnet \
$(docker inspect mycnitest1 | \
jq .[0].NetworkSettings.SandboxKey | tr -d '"')

The cnitool will scan the net.d directory, pick the config file whose network name is "dbnet", and add the container "mycnitest1" to that network. We'll repeat the same process for the other container as well.
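
Once both containers are added, a quick sanity check from inside one of them looks like this; it assumes the container image has ip and ping available (e.g. busybox), and the actual addresses depend on the subnet handed to host-local:

vagrant@machine-01:~/cni$ docker exec mycnitest1 ip -4 addr
vagrant@machine-01:~/cni$ docker exec mycnitest1 ping -c 2 <IP of mycnitest2>
vagrant@machine-01:~/cni$ docker exec mycnitest1 ping -c 2 google.com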

So now we have two containers connected to the bridge. The containers should be able to ping each other as well as the vagrant machine, and also reach external networks such as the host or google.com (thanks to ipMasq = true). To remove a container from the network, we run basically the same command with a very small but very important difference (note "add" becomes "del").

vagrant@machine-01:~/cni$ sudo CNI_PATH=/home/vagrant/cni/plugins \
NETCONFPATH=/home/vagrant/cni/net.d \
./cnitool del dbnet \
$(docker inspect mycnitest1 | \
jq .[0].NetworkSettings.SandboxKey | tr -d '"')

We can invoke more CNI plugins on a container for extra configuration. For example, we can use the portmap plugin to map container ports to the host. Here's an example CNI config with chaining (running one CNI plugin after another). Note the new field called "plugins" and the "portMappings": true capability.

30-mybridge-portmap.conflist
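
Something along these lines; again the bridge name and subnet are illustrative. The interesting parts are the "plugins" array (the plugins run in this order) and the portmap entry declaring the portMappings capability, which tells it to expect the mappings at runtime:

{
  "cniVersion": "0.3.1",
  "name": "dbnet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.10.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}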

The port mappings can be fed to portmap as part of its runtime configuration, via an environment variable called CAP_ARGS that cnitool reads. Let's add a container to the network with this CNI config and the runtime parameters set via CAP_ARGS.

vagrant@machine-01:~/cni$ sudo CNI_PATH=/home/vagrant/cni/plugins \
NETCONFPATH=/home/vagrant/cni/net.d \
CAP_ARGS='{"portMappings":[{"hostPort":6000,"containerPort":3000,"protocol":"tcp"}]}' \
./cnitool add dbnet \
$(docker inspect mycnitest1 | \
jq .[0].NetworkSettings.SandboxKey | tr -d '"')

Now we should be able to reach a process listening on port 3000 inside the container by connecting to port 6000 on the vagrant machine (machine-01).
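
For example, assuming an HTTP server is listening on port 3000 inside mycnitest1, from the vagrant machine itself (substitute its own non-loopback address):

vagrant@machine-01:~$ curl http://<machine-01-ip>:6000/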

For those intrigued by how this is achieved, have a look at the NAT rules (iptables -L -t nat) that get configured when these plugins are invoked.

That's it! This was a brief intro to CNI with a very basic example and just one VM (node). It should provide a good base to explore the other core net plugins that ship alongside the bridge and portmap plugins.

Things get really interesting when containers need to communicate with each other across nodes, enabling very interesting network topologies (overlays, pure L3 routes). That's where meta plugins like flannel, calico and weave come into play. I would love to cover these in another post.

Written by vikram fugro

Open Source Software Enthusiast, Polyglot & Systems Generalist.