
KubeVirt networking

Big picture

Calico provides networking for KubeVirt virtual machines (VMs) running on your Kubernetes cluster, including persistent IP addresses across VM lifecycle events and support for live migration.

Value

KubeVirt runs VMs inside Kubernetes pods. When a VM reboots, is evicted, or live-migrates to another host, the underlying pod is destroyed and recreated. Without IP persistence, each new pod would receive a fresh IP address, which would break existing connections and change the VM's network identity.

Calico's KubeVirt support ensures that:

  • A VM retains the same IP address across reboots, pod evictions, and live migrations.
  • Live migration completes without breaking TCP connections or changing the VM's network identity.
  • Network policy is correctly applied to the VM on the destination host before migration traffic is switched.

Concepts

Supported networking mode: bridge

Calico supports KubeVirt live migration using the bridge binding mode. In bridge mode, the VM is connected to the pod network through a Linux bridge, and the VM uses the same IP address that Calico assigns to the pod. This is required because:

  • IP address persistence depends on the VM IP matching the pod IP. Bridge mode ensures the VM sees and uses the pod IP directly.
  • Network policy in KubeVirt bridge mode is applied on the pod's veth interface, so policy enforcement works correctly for VM traffic.
  • Live migration in KubeVirt bridge mode relies on detecting gratuitous ARP (GARP) packets from the VM on the pod's veth interface to know when the VM has activated on the target host.

Other KubeVirt networking modes (such as masquerade) are not supported for live migration because the VM would use a different IP than the pod IP, breaking IP persistence and policy enforcement.

BGP networking required

Live migration currently requires BGP networking without overlay. Overlay networking (VXLAN, IP-in-IP) support is planned for a future release.

KubeVirt VM IP address persistence

Calico uses the VM's identity (rather than the pod's identity) as the IPAM allocation handle. When a VM's pod is recreated, the new pod is allocated the same IP address as the original. This is a cluster-wide setting controlled by the IPAMConfiguration resource.
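
For reference, a minimal view of the resource that controls this setting; the IPAMConfiguration resource is cluster-scoped and named default, and other spec fields (such as strictAffinity) are omitted here:

apiVersion: projectcalico.org/v3
kind: IPAMConfiguration
metadata:
  name: default
spec:
  kubeVirtVMAddressPersistence: Enabled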

Live migration

When a KubeVirt VM live-migrates from one host to another, Calico coordinates the network transition:

  1. The target pod is created on the destination host and assigned the same IP as the source pod.
  2. Network policy is programmed on the destination host before the VM becomes active.
  3. Once the VM activates on the target host, Calico adjusts route priorities so that traffic is steered to the new host.
  4. After a configurable convergence period (default 30 seconds), route priorities return to normal.
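
You can exercise this flow by triggering a migration with KubeVirt's standard VirtualMachineInstanceMigration resource. The example below assumes a running VMI named my-vm; virtctl migrate my-vm is equivalent:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: my-vm-migration
spec:
  vmiName: my-vm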

Policy setup timeout

During live migration, KubeVirt needs to know when the destination host is ready for the VM. The policy_setup_timeout_seconds CNI configuration parameter ties the progress of the live migration to policy programming: the CNI plugin delays reporting success for the target pod until network policy is in place on the destination host, or until the timeout expires.

Limitations

  • WireGuard is not supported with live migration.

Before you begin

  • A working Kubernetes cluster with KubeVirt installed.
  • Calico installed with BGP networking without overlay.
  • Access to projectcalico.org/v3 resources, either by installing the Calico API server or by using calicoctl.

How to

Enable live migration on VMs with bridge mode

By default, KubeVirt does not allow live migration for VMs that use bridge binding on the pod network. To enable it, annotate your VirtualMachine template with kubevirt.io/allow-pod-bridge-network-live-migration:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-vm
spec:
  template:
    metadata:
      annotations:
        kubevirt.io/allow-pod-bridge-network-live-migration: ""
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              bridge: {}
      networks:
        - name: default
          pod: {}

The annotation must be placed in spec.template.metadata.annotations (the VMI template, not the VM itself). Without this annotation, KubeVirt rejects live migration attempts for VMs using bridge binding with an error like: cannot migrate VMI which does not use masquerade to connect to the pod network.

Restart Typha after installing KubeVirt

If Calico was installed before the KubeVirt CRDs were available (the common bootstrap order on fresh clusters), you must restart Typha so it can discover KubeVirt CRDs (e.g. VirtualMachineInstanceMigration) needed for live migration support.

Adjust the namespace to match your installation (commonly calico-system or kube-system):

kubectl rollout restart deployment calico-typha -n <namespace>
kubectl rollout status deployment calico-typha -n <namespace> --timeout=60s
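
To confirm the KubeVirt CRDs are present before restarting, check for the migration CRD by name:

kubectl get crd virtualmachineinstancemigrations.kubevirt.io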

If KubeVirt was installed before Calico, this step is not needed.

Enable KubeVirt VM IP address persistence

IP address persistence is enabled by default. If it has been previously disabled, re-enable it by setting kubeVirtVMAddressPersistence to Enabled in the IPAMConfiguration resource:

kubectl patch ipamconfigurations default --type='merge' -p '{"spec": {"kubeVirtVMAddressPersistence": "Enabled"}}'

Or using calicoctl:

calicoctl ipam configure --kubevirt-ip-persistence=Enabled
note

IP address persistence must be enabled for live migration to work. If persistence is disabled, the CNI plugin rejects migration target pods.
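
To confirm the current setting, read it back with a standard kubectl jsonpath query; it prints Enabled when persistence is on:

kubectl get ipamconfigurations default -o jsonpath='{.spec.kubeVirtVMAddressPersistence}'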

Disable NAT outgoing for VM IP pools

If you want continuous connectivity from a migrating VM to any destination, the VM's IP address must not be SNATed anywhere on the path between the VM and that destination. You must disable natOutgoing on any IP pool used by KubeVirt VMs that will be live-migrated.

In a live migration, the VM by definition moves from one node to another. If natOutgoing is configured for the VM's IP pool, and there is an ongoing connection between the VM and a server outside the cluster, that server sees the source IP of the connection change from the old node's IP to the new node's IP, which breaks the connection.

If Calico was installed using the operator, disable natOutgoing through the Installation resource:

kubectl patch installation default --type=json \
  -p '[{"op":"replace","path":"/spec/calicoNetwork/ipPools/0/natOutgoing","value":"Disabled"}]'

The path /spec/calicoNetwork/ipPools/0 targets the first IP pool. If your cluster has multiple pools, identify the correct index for the pool used by your KubeVirt VMs and adjust the path accordingly.
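
To find the right index, list the pools defined in the Installation resource first, for example:

kubectl get installation default -o jsonpath='{range .spec.calicoNetwork.ipPools[*]}{.cidr}{"\n"}{end}'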

caution

Do not patch the IPPool resource directly when using the operator — the operator reconciles IPPool resources from the Installation resource and will silently revert direct changes.

For manifest-based installations, set natOutgoing: false on the IPPool resource directly.
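
For example, a sketch assuming the default IPv4 pool name and CIDR; in the projectcalico.org/v3 API, natOutgoing is a boolean:

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16
  natOutgoing: false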

Allow live migration ports through host endpoint policy

If you use host endpoint policies, you must allow the KubeVirt live migration ports (TCP 49152 and 49153) between hosts. These ports are used by libvirt/QEMU to transfer VM memory and block storage directly between the source and destination nodes during live migration.

Add them to the Felix FailsafeInboundHostPorts and FailsafeOutboundHostPorts configuration, or create appropriate host endpoint policies to allow this traffic. See Failsafe rules for more details.
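
As a sketch, the failsafe approach looks like the following FelixConfiguration. Setting these fields replaces Felix's built-in defaults rather than appending to them, so the default ports must be listed explicitly; the defaults shown here match recent Calico releases, but verify them against the Failsafe rules reference for your version:

apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  failsafeInboundHostPorts:
    # Calico defaults: keep these, because setting this field
    # replaces the default list rather than appending to it.
    - protocol: tcp
      port: 22
    - protocol: udp
      port: 68
    - protocol: tcp
      port: 179
    - protocol: tcp
      port: 2379
    - protocol: tcp
      port: 2380
    - protocol: tcp
      port: 5473
    - protocol: tcp
      port: 6443
    - protocol: tcp
      port: 6666
    - protocol: tcp
      port: 6667
    # KubeVirt live migration ports
    - protocol: tcp
      port: 49152
    - protocol: tcp
      port: 49153

Add TCP 49152 and 49153 to failsafeOutboundHostPorts in the same way, again preserving its defaults (which differ slightly from the inbound list).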

Configure policy setup timeout for live migration

To ensure network policy is programmed on the destination host before the VM starts receiving traffic, configure the policy setup timeout. This value specifies how long (in seconds) the CNI plugin waits for policy to be programmed before reporting success.

If you installed Calico using the operator, configure the linuxPolicySetupTimeoutSeconds field in the Installation resource's calicoNetwork settings:

kind: Installation
apiVersion: operator.tigera.io/v1
metadata:
  name: default
spec:
  calicoNetwork:
    linuxPolicySetupTimeoutSeconds: 10

For manifest-based installations, set policy_setup_timeout_seconds directly in the CNI network configuration (typically /etc/cni/net.d/10-calico.conflist):

{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "policy_setup_timeout_seconds": 10,
      ...
    }
  ]
}

Security considerations

VM identity verification

Calico's CNI plugin verifies the identity of VM pods using Kubernetes ownerReferences. When a pod claims to be a KubeVirt VM (to receive a persistent IP), the CNI plugin checks that the pod's ownerReferences point to a valid VirtualMachineInstance resource. This prevents arbitrary pods from claiming VM IP addresses.
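
For illustration, the ownerReferences on a KubeVirt virt-launcher pod look roughly like this (the VM name is a placeholder, and the uid must match the live VirtualMachineInstance):

metadata:
  ownerReferences:
    - apiVersion: kubevirt.io/v1
      kind: VirtualMachineInstance
      name: my-vm
      uid: <uid-of-the-vmi>
      controller: true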

RBAC recommendations

A user who can create pods and read VirtualMachineInstance resources could potentially forge ownerReferences to claim a VM's IP address. To mitigate this:

  • Restrict pod creation permissions in namespaces that run KubeVirt VMs.
  • Limit get/list access to VirtualMachineInstance resources to trusted users and service accounts.

Admission controllers

For production deployments, consider using admission controllers such as Kyverno or OPA/Gatekeeper to enforce that only KubeVirt controllers can set ownerReferences pointing to VirtualMachineInstance resources on pods.
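
One possible shape for such a rule, sketched with Kyverno: it denies any pod whose ownerReferences include a VirtualMachineInstance unless the request comes from the KubeVirt controller. The service account name below (system:serviceaccount:kubevirt:kubevirt-controller) is an assumption; adjust it to match your KubeVirt installation.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-vmi-owner-references
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: only-kubevirt-sets-vmi-owner
      match:
        any:
          - resources:
              kinds:
                - Pod
      preconditions:
        all:
          # Only evaluate pods that claim a VirtualMachineInstance owner.
          - key: "{{ length((request.object.metadata.ownerReferences || `[]`)[?kind=='VirtualMachineInstance']) }}"
            operator: GreaterThan
            value: 0
      validate:
        message: "Only the KubeVirt controller may set VirtualMachineInstance ownerReferences on pods."
        deny:
          conditions:
            all:
              # Assumed service account name; adjust for your installation.
              - key: "{{ request.userInfo.username }}"
                operator: NotEquals
                value: "system:serviceaccount:kubevirt:kubevirt-controller"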

Calico component permissions

Calico components (Felix, confd, the CNI plugin) require only read access to KubeVirt resources (VirtualMachineInstance, VirtualMachineInstanceMigration, etc.). They do not create, modify, or delete any KubeVirt resources.
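
If you manage these permissions yourself rather than relying on the Calico manifests or operator, a read-only grant suffices; a sketch (the role name is illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: calico-kubevirt-read
rules:
  - apiGroups: ["kubevirt.io"]
    resources:
      - virtualmachineinstances
      - virtualmachineinstancemigrations
    verbs: ["get", "list", "watch"]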
