Version: 3.18 (latest)

Configuring the Calico Enterprise CNI plugins

The Calico Enterprise CNI plugins do not need to be configured directly when installed by the operator. For a complete operator configuration reference, see the installation API reference documentation.

The host-local IPAM plugin can be configured by setting the Spec.CNI.IPAM.Plugin field to HostLocal on the operator.tigera.io/Installation API.

Calico will use the host-local IPAM plugin to allocate IPv4 addresses from the node's IPv4 pod CIDR if there is an IPv4 pool configured in Spec.IPPools, and an IPv6 address from the node's IPv6 pod CIDR if there is an IPv6 pool configured in Spec.IPPools.

The following example configures Calico to assign dual-stack IPs to pods using the host-local IPAM plugin.

kind: Installation
apiVersion: operator.tigera.io/v1
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - cidr: 192.168.0.0/16
      - cidr: 2001:db8::/64
  cni:
    type: Calico
    ipam:
      type: HostLocal

Using Kubernetes annotations​

Specifying IP pools on a per-namespace or per-pod basis​

In addition to specifying IP pools in the CNI config as discussed above, Calico Enterprise IPAM supports specifying IP pools per-namespace or per-pod using the following Kubernetes annotations.

  • cni.projectcalico.org/ipv4pools: A list of configured IPv4 Pools from which to choose an address for the pod.

    Example:

    annotations:
      'cni.projectcalico.org/ipv4pools': '["default-ipv4-ippool"]'
  • cni.projectcalico.org/ipv6pools: A list of configured IPv6 Pools from which to choose an address for the pod.

    Example:

    annotations:
      'cni.projectcalico.org/ipv6pools': '["2001:db8::1/120"]'

If provided, these IP pools will override any IP pools specified in the CNI config.

note

The IP pools must already exist before the ipv4pools or ipv6pools annotations are used. Requesting a subset of an IP pool is not supported; IP pools requested in the annotations must exactly match a configured IPPool resource.

note

The Calico Enterprise CNI plugin also supports applying these annotations to a namespace. If both the namespace and the pod carry the annotation, the pod's value takes precedence. If only the namespace is annotated, the namespace's value is used for every pod in that namespace.
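For example, a namespace-level annotation might look like this (a sketch; the namespace name team-a and the pool name rack-0-ippool are illustrative and must match resources in your cluster):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a # illustrative name
  annotations:
    # Pods created in this namespace without their own ipv4pools
    # annotation are allocated addresses from this pool.
    'cni.projectcalico.org/ipv4pools': '["rack-0-ippool"]'
```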

Requesting a specific IP address​

You can also request a specific IP address through Kubernetes annotations with Calico Enterprise IPAM. There are two annotations to request a specific IP address:

  • cni.projectcalico.org/ipAddrs: A list of IPv4 and/or IPv6 addresses to assign to the Pod. The requested IP addresses will be assigned from Calico Enterprise IPAM and must exist within a configured IP pool.

    Example:

    annotations:
      'cni.projectcalico.org/ipAddrs': '["192.168.0.1"]'
  • cni.projectcalico.org/ipAddrsNoIpam: A list of IPv4 and/or IPv6 addresses to assign to the Pod, bypassing IPAM. Any IP conflicts and routing must be handled manually or by another system. Calico Enterprise distributes routes to a Pod only when its IP address falls within a Calico Enterprise IP pool and Calico Enterprise is operating in BGP mode; it will not distribute ipAddrsNoIpam routes when operating in VXLAN mode. If you assign an IP address that is not in a Calico Enterprise IP pool, or one that falls within a pool that uses VXLAN encapsulation, you must ensure that routing to that IP address is handled through another mechanism.

    Example:

    annotations:
      'cni.projectcalico.org/ipAddrsNoIpam': '["10.0.0.1"]'

    The ipAddrsNoIpam feature is disabled by default. It can be enabled in the feature_control section of the CNI network config:

    {
      "name": "any_name",
      "cniVersion": "0.1.0",
      "type": "calico",
      "ipam": {
        "type": "calico-ipam"
      },
      "feature_control": {
        "ip_addrs_no_ipam": true
      }
    }
    caution

    This feature allows for the bypassing of network policy via IP spoofing. Users should make sure the proper admission control is in place to prevent users from selecting arbitrary IP addresses.

note
  • The ipAddrs and ipAddrsNoIpam annotations can't be used together.
  • With these annotations you can specify at most one IPv4 address, one IPv6 address, or one of each.
  • When ipAddrs or ipAddrsNoIpam is used with ipv4pools or ipv6pools, ipAddrs / ipAddrsNoIpam take priority.
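For example, a dual-stack request combines one address of each family (the addresses are illustrative and must fall within configured IP pools):

```yaml
annotations:
  # one IPv4 and one IPv6 address, the maximum these annotations allow
  'cni.projectcalico.org/ipAddrs': '["192.168.0.10", "2001:db8::10"]'
```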

Requesting a floating IP​

You can request a floating IP address for a pod through Kubernetes annotations with Calico Enterprise.

note

The specified address must belong to an IP Pool for advertisement to work properly.

  • cni.projectcalico.org/floatingIPs: A list of floating IPs which will be assigned to the pod's workload endpoint.

    Example:

    annotations:
      'cni.projectcalico.org/floatingIPs': '["10.0.0.1"]'

    The floatingIPs feature is disabled by default. It can be enabled in the feature_control section of the CNI network config:

    {
      "name": "any_name",
      "cniVersion": "0.1.0",
      "type": "calico",
      "ipam": {
        "type": "calico-ipam"
      },
      "feature_control": {
        "floating_ips": true
      }
    }
    caution

    This feature can allow pods to receive traffic which may not have been intended for that pod. Users should make sure the proper admission control is in place to prevent users from selecting arbitrary floating IP addresses.

Using IP pools node selectors​

Nodes will only assign workload addresses from IP pools which select them. By default, IP pools select all nodes, but this can be configured using the nodeSelector field. Check out the IP pool resource document for more details.

Example:

  1. Create (or update) an IP pool that allocates IPs only for nodes carrying the label rack=0.

    kubectl create -f - <<EOF
    apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      name: rack-0-ippool
    spec:
      cidr: 192.168.0.0/24
      ipipMode: Always
      natOutgoing: true
      nodeSelector: rack == "0"
    EOF
  2. Label a node with rack=0.

    kubectl label nodes kube-node-0 rack=0

See the usage guide on assigning IP addresses based on topology for a full example.

CNI network configuration lists​

The CNI 0.3.0 spec supports "chaining" multiple CNI plugins together. Calico Enterprise enables the following Kubernetes CNI plugins by default. Although chaining other CNI plugins may work, only these tested plugins are supported.

Port mapping plugin

The port mapping plugin is required for Calico Enterprise to implement Kubernetes host port functionality, and it is enabled by default.

note

Be aware of a known portmap CNI plugin issue: draining nodes may take a long time in clusters with 100+ nodes and 4000+ services.

To disable it, remove the portmap section from the CNI network configuration in the Calico Enterprise manifests.

{
  "type": "portmap",
  "snat": true,
  "capabilities": { "portMappings": true }
}

Traffic shaping plugin

The traffic shaping Kubernetes CNI plugin supports pod ingress and egress traffic shaping. This bandwidth management technique delays the flow of certain types of network packets to ensure network performance for higher priority applications. It is enabled by default.

You can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to your pod. For example, the following sets a 1 megabit-per-second connection for ingress and egress traffic.

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/ingress-bandwidth: 1M
    kubernetes.io/egress-bandwidth: 1M
...

To disable it, remove the bandwidth section from the CNI network configuration in the Calico Enterprise manifests.

{
  "type": "bandwidth",
  "capabilities": { "bandwidth": true }
}

Order of precedence​

If more than one of these methods is used for IP address assignment, they take the following precedence, with 1 being the highest:

  1. Kubernetes annotations
  2. CNI configuration
  3. IP pool node selectors
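As a sketch of how the precedence applies (the pool name is illustrative): a pod carrying a pool annotation is assigned from that pool even if the CNI config lists a different pool and a third pool's node selector matches the node the pod lands on.

```yaml
# Precedence 1 (annotation) wins over precedence 2 (pools in the CNI
# config) and precedence 3 (pools whose nodeSelector matches this node).
annotations:
  'cni.projectcalico.org/ipv4pools': '["special-ippool"]'
```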
note

Calico Enterprise IPAM will not reassign IP addresses to workloads that are already running. To update running workloads with IP addresses from a newly configured IP pool, they must be recreated. We recommend doing this before going into production or during a maintenance window.

Specify num_queues for veth interfaces​

The num_queues option in the CNI configuration sets both num_rx_queues and num_tx_queues on pod veth interfaces. Default: 1

For example:

{
  "num_queues": 3
}
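In context, num_queues sits at the top level of the CNI network config alongside the other plugin settings (a sketch mirroring the config structure shown earlier; the name field is arbitrary):

```json
{
  "name": "any_name",
  "cniVersion": "0.1.0",
  "type": "calico",
  "num_queues": 3,
  "ipam": {
    "type": "calico-ipam"
  }
}
```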