Get started with VPP networking
Big picture​
Install Calico and enable the VPP dataplane.
Value​
The VPP dataplane mode has several advantages over the standard Linux networking pipeline:
- Scales to higher throughput, especially with WireGuard encryption enabled
- Further improves encryption performance with IPsec
- Native support for Kubernetes services without needing kube-proxy, which:
  - Reduces first-packet latency for packets to services
  - Preserves external client source IP addresses all the way to the pod
The VPP dataplane is entirely compatible with the other Calico dataplanes, meaning you can have a cluster with VPP-enabled nodes along with regular nodes. This makes it possible to migrate a cluster from Linux or eBPF networking to VPP networking.
In addition, the VPP dataplane offers some specific features for network-intensive applications, such as providing `memif` userspace packet interfaces to the pods (instead of regular Linux network devices), or exposing the VPP Host Stack to run optimized L4+ applications in the pods.
The VPP dataplane has some minor behavioural differences compared to the other Calico dataplanes, and some features are not supported. For details, please refer to Known issues & unsupported features. Please report bugs on the Calico Users Slack or GitHub.
Concepts​
VPP​
The Vector Packet Processor (VPP) is a high-performance, open-source userspace network dataplane written in C, developed under the fd.io umbrella. It supports many standard networking features (L2 switching, L3 routing, NAT, encapsulations), and is easily extensible using plugins. The VPP dataplane uses plugins to efficiently implement Kubernetes services load balancing and Calico policies.
Operator based installation​
This guide uses the Tigera operator to install Calico. The operator provides lifecycle management for Calico, exposed via the Kubernetes API as a custom resource definition. While it is also technically possible to install Calico and configure it for VPP using manifests directly, only operator-based installations are supported at this stage.
How to​
This guide details ways to install Calico with the VPP dataplane:
- On a managed EKS cluster. This is the option that requires the least configuration.
- On a managed EKS cluster with the DPDK interface driver. This option is more complex to set up but provides better performance.
- On any Kubernetes cluster.
In all cases, here are the details of what you will get:
Policy | IPAM | CNI | Overlay | Routing | Datastore |
---|---|---|---|---|---|
Install Calico with the VPP dataplane on an EKS cluster​
Requirements​
For these instructions, we will use `eksctl` to provision the cluster. However, you can use any of the methods in Getting Started with Amazon EKS. Before you get started, make sure you have downloaded and configured the necessary prerequisites.
Provision the cluster​
First, create an Amazon EKS cluster without any nodes.
eksctl create cluster --name my-calico-cluster --without-nodegroup
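Cluster creation can take several minutes. If you want to confirm the control plane is up and your kubeconfig points at it before continuing, a quick optional check (using the example cluster name from above) is:

# the cluster should be listed by eksctl
eksctl get cluster --name my-calico-cluster
# the Kubernetes API should answer (no worker nodes exist yet)
kubectl get svc kubernetes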
Since this cluster will use Calico for networking, you must delete the `aws-node` DaemonSet to disable the default AWS VPC networking for the pods.

kubectl delete daemonset -n kube-system aws-node
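If you want to verify that the default VPC networking is really disabled before installing Calico, one way (purely optional) is to confirm the DaemonSet is gone:

# aws-node should no longer appear in this list
kubectl get daemonset -n kube-system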
Install and configure Calico with the VPP dataplane​
Now that you have an empty cluster configured, you can install the Tigera operator.
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
Note: Due to the large size of the CRD bundle, `kubectl apply` might exceed request limits. Instead, use `kubectl create` or `kubectl replace`.

Then, you need to configure the Calico installation for the VPP dataplane. The yaml in the link below contains a minimal viable configuration for EKS. For more information on configuration options available in this manifest, see the installation reference.
Note: Before applying this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to specify the default IP pool CIDR to match your desired pod network CIDR.
kubectl create -f https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.28.0/yaml/calico/installation-eks.yaml
Now it is time to install the VPP dataplane components.
kubectl create -f https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.28.0/yaml/generated/calico-vpp-eks.yaml
Finally, add nodes to the cluster.
eksctl create nodegroup --cluster my-calico-cluster --node-type t3.medium --node-ami auto --max-pods-per-node 50
Tip: The --max-pods-per-node option above ensures that EKS does not limit the number of pods based on node type. For the full set of node group options, see `eksctl create nodegroup --help`.
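Once the nodegroup is up, you can check that the nodes registered and that the VPP dataplane pods started. This is just a sanity check; the namespace below assumes the default `calico-vpp-dataplane` namespace used by the manifests in this guide:

# all nodes should eventually report Ready
kubectl get nodes -o wide
# the VPP dataplane pods run in this namespace (one per node)
kubectl get pods -n calico-vpp-dataplane -o wide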
Install Calico with the VPP dataplane on an EKS cluster with the DPDK driver​
Requirements​
DPDK provides better performance compared to the standard install, but it requires some additional customizations (hugepages, for instance) on the EKS worker instances. We provide a bash script, init_eks.sh, which takes care of applying the required customizations, and we make use of the `preBootstrapCommands` property of the `eksctl` configuration file to execute the script during worker node creation. These instructions require the latest version of `eksctl`.
Provision the cluster​
First, create an Amazon EKS cluster without any nodes.
eksctl create cluster --name my-calico-cluster --without-nodegroup
Since this cluster will use Calico for networking, you must delete the `aws-node` DaemonSet to disable the default AWS VPC networking for the pods.

kubectl delete daemonset -n kube-system aws-node
Install and configure Calico with the VPP dataplane​
Now that you have an empty cluster configured, you can install the Tigera operator.
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
Note: Due to the large size of the CRD bundle, `kubectl apply` might exceed request limits. Instead, use `kubectl create` or `kubectl replace`.

Then, you need to configure the Calico installation for the VPP dataplane. The yaml in the link below contains a minimal viable configuration for EKS. For more information on configuration options available in this manifest, see the installation reference.
Note: Before applying this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to specify the default IP pool CIDR to match your desired pod network CIDR.
kubectl create -f https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.28.0/yaml/calico/installation-eks.yaml
Now it is time to install the VPP dataplane components.
kubectl create -f https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.28.0/yaml/generated/calico-vpp-eks-dpdk.yaml
Finally, it is time to add nodes to the cluster. Since we need to customize the nodes for DPDK, we will use an `eksctl` config file with the `preBootstrapCommands` property to create the worker nodes. The following command will create a managed nodegroup with 2 t3.large worker nodes in the cluster:

cat <<EOF | eksctl create nodegroup -f -
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-calico-cluster
  region: us-east-2
managedNodeGroups:
  - name: my-calico-cluster-ng
    desiredCapacity: 2
    instanceType: t3.large
    labels: {role: worker}
    preBootstrapCommands:
      - sudo curl -o /tmp/init_eks.sh "https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.28.0/scripts/init_eks.sh"
      - sudo chmod +x /tmp/init_eks.sh
      - sudo /tmp/init_eks.sh
EOF

Please edit the cluster name, region and other fields as appropriate for your cluster. If you want to enable SSH access to the EKS worker instances, add the following to the above config file:

ssh:
  publicKeyPath: <path to public key>

For details on SSH access, refer to Amazon EC2 key pairs and Linux instances.
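Since the whole point of init_eks.sh is to configure hugepages on the workers, a quick optional way to confirm it took effect is to look at what each node reports as allocatable 2MB hugepages (the column name below is just a label for the output):

# a non-zero value indicates the preBootstrapCommands configured hugepages successfully
kubectl get nodes -o custom-columns=NAME:.metadata.name,HUGEPAGES_2MI:.status.allocatable.hugepages-2Mi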
Install Calico with the VPP dataplane on any Kubernetes cluster​
Requirements​
The VPP dataplane has the following requirements:
Required​
- A blank Kubernetes cluster, where no CNI was ever configured.
- These base requirements, except those related to the management of `cali*`, `tunl*` and `vxlan.calico` interfaces.

Note: If you are using `kubeadm` to create the cluster, please make sure to specify the pod network CIDR using the `--pod-network-cidr` command-line argument, i.e., `sudo kubeadm init --pod-network-cidr=192.168.0.0/16`. If 192.168.0.0/16 is already in use within your network, you must select a different pod network CIDR.
Optional​
For some hardware, the following hugepages configuration may enable VPP to use more efficient drivers:

- At least 512 x 2MB hugepages are available (`grep HugePages_Free /proc/meminfo`)
- The `vfio-pci` (`vfio_pci` on CentOS) or `uio_pci_generic` kernel module is loaded. For example:

echo "vfio-pci" > /etc/modules-load.d/95-vpp.conf
modprobe vfio-pci
echo "vm.nr_hugepages = 512" >> /etc/sysctl.conf
sysctl -p
# restart kubelet to take the changes into account
# you may need to use a different command depending on how kubelet was installed
systemctl restart kubelet
Install Calico and configure it for VPP​
Start by installing the Tigera operator on your cluster.
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
Note: Due to the large size of the CRD bundle, `kubectl apply` might exceed request limits. Instead, use `kubectl create` or `kubectl replace`.

Then, you need to configure the Calico installation for the VPP dataplane. The yaml in the link below contains a minimal viable configuration for VPP. For more information on configuration options available in this manifest, see the installation reference.
Note: Before applying this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to specify the default IP pool CIDR to match your desired pod network CIDR.
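For illustration, if you used the `--pod-network-cidr` from the kubeadm note above, the relevant part of the Installation resource in the downloaded manifest might look like the sketch below. Only the CIDR is something you would adapt; leave the other fields in the real manifest as they are:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      # must match the pod network CIDR given to kubeadm (or your equivalent)
      - cidr: 192.168.0.0/16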
kubectl create -f https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.28.0/yaml/calico/installation-default.yaml
Install the VPP dataplane components​
Start by getting the appropriate yaml manifest file for the VPP dataplane resources:
# If you have configured hugepages on your machines
curl -o calico-vpp.yaml https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.28.0/yaml/generated/calico-vpp.yaml
# If not, or if you're unsure
curl -o calico-vpp.yaml https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.28.0/yaml/generated/calico-vpp-nohuge.yaml
Then locate the `calico-vpp-config` ConfigMap in this yaml manifest file and configure it as follows.
Required Configuration​
`SERVICE_PREFIX` is the Kubernetes service CIDR. You can retrieve it by running:

kubectl cluster-info dump | grep -m 1 service-cluster-ip-range

If this command doesn't return anything, you can leave the default value of 10.96.0.0/12.

`CALICOVPP_INTERFACES` contains a dictionary with parameters specific to interfaces in VPP. The field `uplinkInterfaces` contains a list of uplink interfaces and their configuration, with the first element of the list being the primary/main uplink interface, and the rest (if any) being the secondary host interfaces. You must specify at least the primary/main uplink interface name, for example:

CALICOVPP_INTERFACES: |-
  {
    "uplinkInterfaces": [ { "interfaceName": "eth0" } ]
  }

Please make sure that the interface name is that of a valid Linux interface and that it is up and configured with an address. Also, the address configured on this interface must be the node address in Kubernetes (`kubectl get nodes -o wide`).
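One way to double-check this before applying the manifest (using the example interface name eth0 from above) is:

# the interface must exist, be up, and carry an address
ip addr show eth0
# that address should match the node's INTERNAL-IP
kubectl get nodes -o wide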
CALICOVPP_INTERFACES spec
Field | Description | Type |
---|---|---|
maxPodIfSpec | spec containing max values for pod interfaces config | InterfaceSpec |
defaultPodIfSpec | spec containing default values for pod interfaces config | InterfaceSpec |
vppHostTapSpec | spec containing config for host tap interface in vpp | InterfaceSpec |
uplinkInterfaces | list of host interfaces in vpp | List of UplinkInterfaceSpec |

InterfaceSpec

Field | Description | Type | Default |
---|---|---|---|
rx | Number of RX queues | int | 1 |
tx | Number of TX queues | int | 1 |
rxqsz | RX queue size | int | 1024 |
txqsz | TX queue size | int | 1024 |
isl3 | Defines the interface mode (L2/L3) | boolean | true for tuntap; false for memif |
rxMode | RX mode | string among "interrupt", "adaptive", or "polling" | adaptive |

UplinkInterfaceSpec

Field | Description | Type | Default |
---|---|---|---|
rx | Number of RX queues | int | 1 |
tx | Number of TX queues | int | 1 |
rxqsz | RX queue size | int | 1024 |
txqsz | TX queue size | int | 1024 |
isl3 | Defines the interface mode (L2/L3) for drivers that support it | boolean | true |
rxMode | RX mode | string among "interrupt", "adaptive", or "polling" | adaptive |
interfaceName | interface name | string | unset |
vppDriver | driver to use in vpp | string | unset |
newDriver | linux driver to use before passing the interface to VPP | string | unset |
mtu | the interface's mtu | int | use the existing MTU in linux |
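Combining a few of the fields above, a more complete `CALICOVPP_INTERFACES` might look like the following sketch. The field names come from the tables above, while the interface name and queue sizes are placeholders you would adapt to your hardware:

CALICOVPP_INTERFACES: |-
  {
    "defaultPodIfSpec": { "rx": 1, "tx": 1, "isl3": true },
    "uplinkInterfaces": [
      {
        "interfaceName": "eth0",
        "rx": 2,
        "tx": 2,
        "rxqsz": 2048,
        "txqsz": 2048,
        "rxMode": "adaptive"
      }
    ]
  }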
Optional Configuration​
To configure how VPP drives the uplink interface, use the `vppDriver` field in `uplinkInterfaces` in `CALICOVPP_INTERFACES`.
The supported values will depend on the interface type. Available values are:
""
: will automatically select and try drivers based on interface type and available resources, starting with the fastestaf_xdp
: use an AF_XDP socket to drive the interface (requires kernel 5.4 or newer)af_packet
: use an AF_PACKET socket to drive the interface (not optimized but works everywhere)avf
: use the VPP native driver for Intel 700-Series and 800-Series interfaces (requires hugepages)vmxnet3
: use the VPP native driver for VMware virtual interfaces (requires hugepages)virtio
: use the VPP native driver for Virtio virtual interfaces (requires hugepages)rdma
: use the VPP native driver for Mellanox CX-4 and CX-5 interfaces (requires hugepages)dpdk
: use the DPDK interface drivers with VPP (requires hugepages, works with most interfaces)none
: do not configure connectivity automatically. This can be used when configuring the interface manually
For example:

CALICOVPP_INTERFACES: |-
  {
    ...
    "uplinkInterfaces": [
      {
        "interfaceName": "eth0",
        "vppDriver": "af_packet"
      }
    ]
    ...
  }
Legacy options​
We still maintain legacy support for the `CALICOVPP_INTERFACE` and `CALICOVPP_NATIVE_DRIVER` environment variables. If `CALICOVPP_INTERFACES` is unspecified, `CALICOVPP_INTERFACE` is the primary interface to be used and `CALICOVPP_NATIVE_DRIVER` the uplink driver to be used by VPP.

`CALICOVPP_INTERFACE` --> `uplinkInterfaces[0].interfaceName`
`CALICOVPP_NATIVE_DRIVER` --> `uplinkInterfaces[0].vppDriver`
Example
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-vpp-config
  namespace: calico-vpp-dataplane
data:
  ...
  CALICOVPP_INTERFACE: eth1
  CALICOVPP_NATIVE_DRIVER: "af_packet"
  ...
Please note that `CALICOVPP_INTERFACE` and `CALICOVPP_NATIVE_DRIVER` are deprecated and will no longer be supported in future releases. Please use `CALICOVPP_INTERFACES` instead.
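For reference, translating the legacy example above to the new-style configuration using the mapping given earlier would look roughly like this (the eth1 and af_packet values are simply carried over from that example):

kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-vpp-config
  namespace: calico-vpp-dataplane
data:
  ...
  CALICOVPP_INTERFACES: |-
    {
      "uplinkInterfaces": [
        { "interfaceName": "eth1", "vppDriver": "af_packet" }
      ]
    }
  ...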
Apply the configuration​
To apply the configuration, run:
kubectl create -f calico-vpp.yaml
This will install all the resources required by the VPP dataplane in your cluster.
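As a final sanity check (assuming the default namespaces used by the Tigera operator and by the calico-vpp manifests), you can watch the pods come up:

# VPP dataplane pods, one per node
kubectl get pods -n calico-vpp-dataplane
# Calico components managed by the Tigera operator
kubectl get pods -n calico-system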
Next steps​
After installing Calico with the VPP dataplane, you can take advantage of its features, such as fast IPsec or WireGuard encryption.
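For example, WireGuard encryption is controlled through the Felix configuration. The command below (which assumes calicoctl, covered under Tools) is the generic Calico way of enabling it and is not specific to VPP; check the encryption documentation for your version before using it:

# enable WireGuard for pod-to-pod traffic cluster-wide
calicoctl patch felixconfiguration default --type='merge' -p '{"spec":{"wireguardEnabled":true}}'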
Tools
- Install and configure calicoctl to configure and monitor your cluster.
Security