
Get started with VPP networking

Big picture

Install Calico and enable the VPP data plane.

Value

The VPP data plane mode has several advantages over the standard Linux networking pipeline mode:

  • Scales to higher throughput, especially with WireGuard encryption enabled
  • Further improves encryption performance with IPsec
  • Native support for Kubernetes services without needing kube-proxy, which:
    • Reduces first-packet latency for packets to services
    • Preserves external client source IP addresses all the way to the pod

The VPP data plane is entirely compatible with the other Calico data planes, meaning you can have a cluster with VPP-enabled nodes along with regular nodes. This makes it possible to migrate a cluster from Linux or eBPF networking to VPP networking.

In addition, the VPP data plane offers some specific features for network-intensive applications, such as providing memif userspace packet interfaces to the pods (instead of regular Linux network devices), or exposing the VPP Host Stack to run optimized L4+ applications in the pods.
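
As an illustration, the sketch below shows how a pod might request these features through annotations on its manifest. The annotation names and values are assumptions based on the VPP data plane feature documentation; check the memif and VPP Host Stack pages for the exact syntax supported by your release.

    apiVersion: v1
    kind: Pod
    metadata:
      name: vpp-optimized-app
      annotations:
        # Assumed annotation: expose extra ports as memif userspace interfaces
        "cni.projectcalico.org/vppExtraMemifPorts": "tcp:5000,udp:5000"
        # Assumed annotation: enable the VPP Host Stack (VCL) for this pod
        "cni.projectcalico.org/vppVcl": "enable"
    spec:
      containers:
        - name: app
          image: example.com/network-intensive-app:latest   # hypothetical image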

note

The VPP data plane has some minor behavioural differences compared to the other Calico data planes, and some features are not supported. For details, refer to Known issues & unsupported features. Please report bugs on the Calico Users Slack or GitHub.

Concepts

VPP

The Vector Packet Processor (VPP) is a high-performance, open-source userspace network data plane written in C, developed under the fd.io umbrella. It supports many standard networking features (L2 switching, L3 routing, NAT, encapsulations), and is easily extensible using plugins. The VPP data plane uses plugins to efficiently implement Kubernetes services load balancing and Calico policies.

Operator-based installation

This guide uses the Tigera operator to install Calico. The operator provides lifecycle management for Calico, exposed via the Kubernetes API as a custom resource definition. While it is technically possible to install Calico and configure it for VPP using manifests directly, only operator-based installations are supported at this stage.
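
For illustration, the operator watches an Installation custom resource that describes the desired Calico deployment. The minimal sketch below only shows the general shape of such a resource; the actual settings for the VPP data plane come from the manifests referenced in the steps further down.

    apiVersion: operator.tigera.io/v1
    kind: Installation
    metadata:
      name: default
    spec:
      calicoNetwork:
        ipPools:
          - cidr: 192.168.0.0/16   # example pod network CIDR; adjust for your cluster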

How to

This guide details ways to install Calico with the VPP data plane:

  • On a managed EKS cluster. This is the option that requires the least configuration.
  • On a managed EKS cluster with the DPDK interface driver. This option is more complex to set up but provides better performance.
  • On any Kubernetes cluster.

In all cases, the manifests determine the policy, IPAM, CNI, overlay, routing, and datastore components you will get.

Install Calico with the VPP data plane on an EKS cluster

Requirements

For these instructions, we will use eksctl to provision the cluster. However, you can use any of the methods in Getting Started with Amazon EKS.

Before you get started, make sure you have downloaded and configured the necessary prerequisites.
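
For example, you can quickly confirm that the required command-line tools are installed and reachable on your PATH before provisioning anything:

    aws --version               # AWS CLI, configured with credentials for your account
    eksctl version              # used below to create the cluster and node group
    kubectl version --client    # used to install Calico and the VPP data plane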

Provision the cluster

  1. First, create an Amazon EKS cluster without any nodes.

    eksctl create cluster --name my-calico-cluster --without-nodegroup
  2. Since this cluster will use Calico for networking, you must delete the aws-node DaemonSet to disable the default AWS VPC networking for the pods.

    kubectl delete daemonset -n kube-system aws-node
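
To confirm that the default AWS VPC networking is disabled before continuing, list the DaemonSets in kube-system; aws-node should no longer appear:

    kubectl get daemonset -n kube-system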

Install and configure Calico with the VPP data plane

  1. Now that you have an empty cluster configured, you can install the Tigera operator.

    kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/tigera-operator.yaml
    note

    Due to the large size of the CRD bundle, kubectl apply might exceed request limits. Instead, use kubectl create or kubectl replace.

  2. Then, you need to configure the Calico installation for the VPP data plane. The yaml in the link below contains a minimal viable configuration for EKS. For more information on configuration options available in this manifest, see the installation reference.

    note

    Before applying this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to specify the default IP pool CIDR to match your desired pod network CIDR.

    kubectl create -f https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.29.0/yaml/calico/installation-eks.yaml
  3. Now it is time to install the VPP data plane components. (A quick way to verify that the installation is healthy, once nodes have joined, is shown after these steps.)

    kubectl create -f https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.29.0/yaml/generated/calico-vpp-eks.yaml
  4. Finally, add nodes to the cluster.

    eksctl create nodegroup --cluster my-calico-cluster --node-type t3.medium --node-ami auto --max-pods-per-node 50
    tip

    The --max-pods-per-node option above ensures that EKS does not limit the number of pods based on node type. For the full set of node group options, see eksctl create nodegroup --help.
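
Once the nodes have joined, you can sanity-check the installation. The tigerastatus resource reports whether the operator considers each component available; the calico-vpp-dataplane namespace shown below is an assumption based on the VPP data plane manifests, so adjust it if your manifests place the VPP pods elsewhere.

    kubectl get tigerastatus                  # all components should report Available
    kubectl get pods -n calico-system         # Calico pods managed by the operator
    kubectl get pods -n calico-vpp-dataplane  # assumed namespace for the VPP agent pods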

Next steps

After installing Calico with the VPP data plane, you can take advantage of its features, such as fast IPsec or WireGuard encryption.
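
For example, WireGuard encryption is controlled through the default FelixConfiguration resource. A minimal sketch, assuming the cluster exposes FelixConfiguration via the Calico CRDs, is:

    kubectl patch felixconfiguration default --type='merge' \
      -p '{"spec":{"wireguardEnabled":true}}'

Refer to the encryption documentation for the VPP-specific details, including how to enable IPsec instead.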
