
Upgrade Calico on Kubernetes

About upgrading Calico

This page describes how to upgrade to v3.30 from Calico v3.15 or later. The procedure varies by datastore type and install method.

If you are using Calico in etcd mode on a Kubernetes cluster, we recommend migrating to the Kubernetes API datastore before you upgrade; see the datastore migration guide.

If you installed Calico using the calico.yaml manifest, we recommend migrating to the operator-based installation; see the operator migration guide.

note

Do not use older versions of calicoctl after the upgrade. Doing so may result in unexpected behavior and data corruption.

Upgrade OwnerReferences

If you do not use OwnerReferences on resources in the projectcalico.org/v3 API group, you can skip this section.

Starting in Calico v3.28, UIDs for projectcalico.org/v3 resources are generated differently. After the upgrade, every projectcalico.org/v3 resource will have a new UID, so any OwnerReferences that still point at an old UID will cause Kubernetes to garbage-collect the owned resources. To avoid this, update those OwnerReferences as follows:

  1. Remove any OwnerReferences from resources in your cluster that have apiGroup: projectcalico.org/v3.
  2. Perform the upgrade normally.
  3. Add new OwnerReferences to your resources, referencing the new UIDs (see the sketch below).
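
For example, here is a minimal sketch of steps 1 and 3 using kubectl patch. The ConfigMap my-owned-config and the GlobalNetworkPolicy my-policy are hypothetical names; the sketch also assumes the Calico API server is installed so that kubectl can read projectcalico.org/v3 resources.

    # Before the upgrade: detach the owned resource from its Calico owner.
    kubectl patch configmap my-owned-config --type=json \
      -p='[{"op": "remove", "path": "/metadata/ownerReferences"}]'

    # After the upgrade: look up the new UID and re-create the OwnerReference.
    NEW_UID=$(kubectl get globalnetworkpolicy my-policy -o jsonpath='{.metadata.uid}')
    kubectl patch configmap my-owned-config --type=merge \
      -p="{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"projectcalico.org/v3\",\"kind\":\"GlobalNetworkPolicy\",\"name\":\"my-policy\",\"uid\":\"${NEW_UID}\"}]}}"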

Upgrading an installation that was installed using Helm

  1. Update the Helm repository so the latest chart version is available:

    helm repo update projectcalico
  2. Apply the v3.30 CRDs:

    kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/projectcalico/calico/v3.30.1/manifests/operator-crds.yaml
  3. Run the Helm upgrade:

    helm upgrade calico projectcalico/tigera-operator
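
To confirm that the upgrade rolled out cleanly, you can watch the operator's status conditions until all components report Available. This assumes a standard operator-based install, which exposes TigeraStatus resources:

    watch kubectl get tigerastatus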

Upgrading an installation that uses the operator

  1. Download the Tigera operator manifest and custom resource definitions.

    curl https://raw.githubusercontent.com/projectcalico/calico/v3.30.1/manifests/operator-crds.yaml -O
    curl https://raw.githubusercontent.com/projectcalico/calico/v3.30.1/manifests/tigera-operator.yaml -O
  2. Use the following commands to initiate the upgrade.

    kubectl apply --server-side --force-conflicts -f operator-crds.yaml
    kubectl apply --server-side --force-conflicts -f tigera-operator.yaml
  3. Optional: Enable the flow logs API and Calico Whisker (introduced in v3.30) by applying the Goldmane and Whisker custom resources.

    kubectl apply -f - <<EOF
    apiVersion: operator.tigera.io/v1
    kind: Goldmane
    metadata:
      name: default
    ---
    apiVersion: operator.tigera.io/v1
    kind: Whisker
    metadata:
      name: default
    EOF
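
After a minute or two, you can check that the new components are running. This assumes the operator deploys them into the calico-system namespace, which is the default:

    kubectl get pods -n calico-system | grep -E 'goldmane|whisker'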

Upgrading an installation that uses manifests and the Kubernetes API datastore

  1. Download the v3.30 manifest that corresponds to your original installation method.

    Calico for policy and networking

    curl https://raw.githubusercontent.com/projectcalico/calico/v3.30.1/manifests/calico.yaml -o upgrade.yaml

    Calico for policy and flannel for networking

    curl https://raw.githubusercontent.com/projectcalico/calico/v3.30.1/manifests/canal.yaml -o upgrade.yaml

    Calico for policy (advanced)

    curl https://raw.githubusercontent.com/projectcalico/calico/v3.30.1/manifests/calico-policy-only.yaml -o upgrade.yaml
    note

    If you manually modified the manifest, you must manually apply the same changes to the downloaded manifest.

  2. Use the following command to initiate a rolling update.

    kubectl apply --server-side --force-conflicts -f upgrade.yaml
  3. Watch the status of the upgrade as follows.

    watch kubectl get pods -n kube-system

    Verify that all Calico pods report a status of Running.

    calico-node-hvvg8   2/2   Running   0   3m
    calico-node-vm8kh   2/2   Running   0   3m
    calico-node-w92wk   2/2   Running   0   3m
  4. Remove any existing calicoctl instances, install the new calicoctl, and configure it to connect to your datastore.
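
    For example, on a Linux amd64 host you might fetch the matching release binary (adjust the platform suffix for your OS and architecture):

    curl -L https://github.com/projectcalico/calico/releases/download/v3.30.1/calicoctl-linux-amd64 -o calicoctl
    chmod +x calicoctl
    sudo mv calicoctl /usr/local/bin/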

  5. Use the following command to check the Calico version number.

    calicoctl version

    It should return a Cluster Version of v3.30.x.

  6. If you have enabled application layer policy, follow the instructions below to complete your upgrade. Skip this step if you are not using Istio with Calico.

  7. If you upgraded from a version of Calico prior to v3.14 and followed the pre-upgrade steps for host endpoints above, review traffic logs from the temporary policy, add any global network policies needed to allow traffic, and then delete the temporary allow-all-upgrade network policy.

  8. Congratulations! You have upgraded to Calico v3.30.

Upgrading an installation that uses an etcd datastore

  1. Download the v3.30 manifest that corresponds to your original installation method.

    Calico for policy and networking

    curl https://raw.githubusercontent.com/projectcalico/calico/v3.30.1/manifests/calico-etcd.yaml -o upgrade.yaml

    Calico for policy and flannel for networking

    curl https://raw.githubusercontent.com/projectcalico/calico/v3.30.1/manifests/canal-etcd.yaml -o upgrade.yaml
    note

    You must manually apply the changes you made to the manifest during installation to the downloaded v3.30 manifest. At a minimum, you must set the etcd_endpoints value.

  2. Use the following command to initiate a rolling update.

    kubectl apply --server-side --force-conflicts -f upgrade.yaml
  3. Watch the status of the upgrade as follows.

    watch kubectl get pods -n kube-system

    Verify that all Calico pods report a status of Running.

    calico-kube-controllers-6d4b9d6b5b-wlkfj   1/1   Running   0   3m
    calico-node-hvvg8                          1/2   Running   0   3m
    calico-node-vm8kh                          1/2   Running   0   3m
    calico-node-w92wk                          1/2   Running   0   3m
    tip

    The calico-node pods will report 1/2 in the READY column, as shown.

  4. Remove any existing calicoctl instances, install the new calicoctl, and configure it to connect to your datastore.
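
    For example, to point calicoctl at an etcd datastore, you can export the datastore settings before running it (the endpoint below is a placeholder; substitute your own):

    export DATASTORE_TYPE=etcdv3
    export ETCD_ENDPOINTS=http://127.0.0.1:2379
    calicoctl get nodes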

  5. Use the following command to check the Calico version number.

    calicoctl version

    It should return a Cluster Version of v3.30.x.

  6. If you have enabled application layer policy, follow the instructions below to complete your upgrade. Skip this if you are not using Istio with Calico.

  7. If you upgraded from a version of Calico prior to v3.14 and followed the pre-upgrade steps for host endpoints above, review traffic logs from the temporary policy, add any global network policies needed to allow traffic, and then delete the temporary allow-all-upgrade network policy.

  8. Congratulations! You have upgraded to Calico v3.30.

Upgrading if you have Application Layer Policy enabled

Dikastes is versioned the same as the rest of Calico, but an upgraded calico-node can still work with a down-level Dikastes, so you will not lose data plane connectivity during the upgrade. Once calico-node is upgraded, you can begin redeploying your service pods with the updated version of Dikastes.

If you have enabled application layer policy, take the following steps to upgrade the Dikastes sidecars running in your application pods. Skip these steps if you are not using Istio with Calico.

  1. Update the Istio sidecar injector template to use the new version of Dikastes. Replace <your Istio version> below with the full version string of your Istio install, for example 1.4.2.

    kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.30.1/manifests/alp/istio-inject-configmap-<your Istio version>.yaml
  2. Once the new template is in place, newly created pods use the upgraded version of Dikastes. Perform a rolling update of each of your service deployments to get them on the new version of Dikastes.
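
    For example, a rolling restart re-creates each pod so that the sidecar injector adds the new Dikastes image (my-service is a hypothetical deployment name):

    kubectl rollout restart deployment/my-service
    kubectl rollout status deployment/my-service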

Migrating to auto host endpoints

caution

Auto host endpoints have an allow-all profile attached, which allows all traffic in the absence of network policy. This may result in unexpected traffic being allowed until you apply the network policies you need.

To migrate existing all-interfaces host endpoints to Calico-managed auto host endpoints:

  1. Add any labels on existing all-interfaces host endpoints to their corresponding Kubernetes nodes. Calico manages labels on automatic host endpoints by syncing labels from their nodes, so every label on an existing all-interfaces host endpoint must also be present on its node. For example, if the existing all-interfaces host endpoint for node node1 has the label environment: dev, you must add that same label to the node:

    kubectl label node node1 environment=dev
  2. Enable auto host endpoints by following the enable automatic host endpoints how-to guide. Note that automatic host endpoints are created with a profile attached that allows all traffic in the absence of network policy.

    calicoctl patch kubecontrollersconfiguration default --patch '{"spec": {"controllers": {"node": {"hostEndpoint": {"autoCreate": "Enabled"}}}}}'
  3. Delete the old all-interfaces host endpoints. You can distinguish host endpoints managed by Calico from others in two ways: automatic host endpoints have the label projectcalico.org/created-by: calico-kube-controllers, and their names have the suffix -auto-hep.

    calicoctl delete hostendpoint <old_hostendpoint_name>
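
    To find the old endpoints, you can first list all host endpoints and look for names without the -auto-hep suffix, for example:

    calicoctl get hostendpoints -o wide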