Upgrade Calico Enterprise installed with the operator

note

All upgrades in Calico Enterprise are free with a valid license.

Upgrade paths

You can upgrade your cluster to a maximum of two releases from your existing version. For example, if you are on version 3.15, you can upgrade to 3.16, or you can upgrade directly to 3.17. However, you cannot upgrade beyond two releases; upgrading from 3.15 to 3.18 (three releases) is not supported.

If you are several versions behind where you want to be, you must upgrade in steps of at most two releases to get there. For example, if you are on version 3.16 and you want to get to 3.19, you can upgrade to 3.18, then upgrade from 3.18 directly to 3.19.

note

Always check the Release Notes for exceptions; limitations can override the above pattern.

Prerequisites

Verify that Calico Enterprise in your Kubernetes cluster was installed with the operator by running kubectl get tigerastatus. If the command returns a result, your installation is using the operator.
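For example, a successful result looks similar to the following (the components listed and their statuses vary by version and enabled features; the output here is illustrative only):

    kubectl get tigerastatus

    NAME        AVAILABLE   PROGRESSING   DEGRADED   SINCE
    apiserver   True        False         False      10m
    calico      True        False         False      10m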

If your cluster is on a version earlier than 2.6 or does not use the operator, contact Tigera support to upgrade.

If your cluster has a Calico installation, contact Tigera support to upgrade.

Prepare your cluster for the upgrade

During the upgrade, the controller that manages Elasticsearch is updated, and the Calico Enterprise LogStorage CR is temporarily removed. Features that depend on LogStorage, including the dashboards in the Manager UI, are temporarily unavailable. Data ingestion is paused and resumes once the LogStorage is up and running again.
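To confirm when log storage is available again after the upgrade, you can watch its component status (this assumes the component is reported by tigerastatus under the name log-storage, which may differ in your version):

    watch kubectl get tigerastatus log-storage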

To retain data from your current installation (optional), ensure that the reclaim policy of the currently mounted persistent volumes is set to Retain. Retaining data is recommended only for users with a valid Elastic license; trial licenses can be invalidated during the upgrade.
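For example, to review and set the reclaim policy on the persistent volumes that back Elasticsearch (the volume name tigera-elasticsearch-pv below is a placeholder; substitute the names reported by the first command):

    kubectl get pv
    kubectl patch pv tigera-elasticsearch-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'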

Upgrade OwnerReferences

If you do not use OwnerReferences on resources in the projectcalico.org/v3 API group, you can skip this section.

Starting in Calico Enterprise v3.19, a change in the way UIDs are generated for projectcalico.org/v3 resources requires that you update any OwnerReferences that refer to projectcalico.org/v3 resources as an owner. After the upgrade, the UID of every projectcalico.org/v3 resource changes, so any OwnerReference that still points to the old UID is treated as referring to a deleted owner and the owned resource is garbage collected by Kubernetes. To avoid this:

  1. Remove any OwnerReferences from resources in your cluster that have apiGroup: projectcalico.org/v3.
  2. Perform the upgrade normally.
  3. Add new OwnerReferences to your resources that reference the new UIDs (see the sketch after this list).
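A minimal sketch of steps 1 and 3 using kubectl patch, assuming a hypothetical ConfigMap my-config owned by a projectcalico.org/v3 Tier named my-tier (substitute your own kinds and names):

    # Step 1: remove the stale OwnerReference before upgrading.
    kubectl patch configmap my-config --type merge -p '{"metadata":{"ownerReferences":null}}'

    # Step 3: after upgrading, look up the owner's new UID and re-add the OwnerReference.
    kubectl get tier my-tier -o jsonpath='{.metadata.uid}'
    kubectl patch configmap my-config --type merge -p '{"metadata":{"ownerReferences":[{"apiVersion":"projectcalico.org/v3","kind":"Tier","name":"my-tier","uid":"<new-uid>"}]}}'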

Default Deny

Calico Enterprise creates a default-deny policy for the calico-system namespace. If you deploy workloads into the calico-system namespace, you must create a policy that allows the required traffic for your workloads before you upgrade, for example as sketched below.
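A minimal sketch of such an allow policy for a hypothetical workload labeled app == 'my-workload' that accepts TCP traffic on port 8080. It assumes the default-deny is enforced in the allow-tigera tier; verify which tier applies in your cluster and adjust the tier, the name prefix, the selector, and the ports accordingly:

    kubectl apply -f - <<EOF
    apiVersion: projectcalico.org/v3
    kind: NetworkPolicy
    metadata:
      name: allow-tigera.allow-my-workload
      namespace: calico-system
    spec:
      tier: allow-tigera
      order: 1
      selector: app == 'my-workload'
      types:
        - Ingress
      ingress:
        - action: Allow
          protocol: TCP
          destination:
            ports:
              - 8080
    EOF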

Windows

If your cluster has Windows nodes and uses custom TLS certificates for log storage, then before you upgrade, prepare and apply new log storage certificates that include the required service DNS names.

For AKS only, upgrading to a newer version automatically upgrades Calico Enterprise for Windows. During the upgrade, Windows nodes are tainted so that new pods are not scheduled until the node's upgrade has finished. You can monitor the Calico Enterprise for Windows upgrade status with: kubectl get tigerastatus calico -o yaml

For all other platforms, you can upgrade Calico Enterprise for Windows out of band from the cluster's Calico Enterprise upgrade. For each Windows node, uninstall the Calico Enterprise for Windows services, copy over the latest Calico Enterprise for Windows installation archive, and then proceed with the installation.

Multi-cluster management

For Calico Enterprise, when you upgrade a multi-cluster management setup, you must update the management cluster and all managed clusters.

note

These steps differ based on your cluster type. If you are unsure of your cluster type, look at the field clusterManagementType when you run kubectl get installation -o yaml before you proceed.
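For example, to inspect the field directly (it may be absent on standalone clusters):

    kubectl get installation -o yaml | grep clusterManagementType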

Upgrade Calico Enterprise

  1. Download the new manifests for Tigera operator.

    curl -L -O https://downloads.tigera.io/ee/v3.19.4/manifests/tigera-operator.yaml
  2. Download the new manifests for Prometheus operator.

    note
    If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.
    curl -L -O https://downloads.tigera.io/ee/v3.19.4/manifests/tigera-prometheus-operator.yaml
  3. If you previously installed using a private registry, you will need to push the new images and then update the manifest downloaded in the previous step.
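    For example, a minimal sketch of retargeting the manifests to a private registry, assuming your images are mirrored under a hypothetical my-registry.example.com/ and that the default image references in the downloaded manifests start with quay.io/ (verify the image paths before editing):

    sed -i 's|quay.io/|my-registry.example.com/|g' tigera-operator.yaml tigera-prometheus-operator.yaml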

  4. Apply the manifest for Tigera operator.

    kubectl apply --server-side --force-conflicts -f tigera-operator.yaml
    note
    If you intend to update any operator.tigera.io or projectcalico.org resources to use new fields available in this release, make those changes after applying tigera-operator.yaml.
  5. If you downloaded the manifests for Prometheus operator from the earlier step, then apply them now.

    kubectl apply --server-side --force-conflicts -f tigera-prometheus-operator.yaml
  6. If your cluster has OIDC login configured, follow these steps:

    a. Save a copy of your Manager for reference.

    kubectl get manager tigera-secure -o yaml > manager.yaml

    b. Remove the deprecated fields from your Manager resource.

    kubectl patch manager tigera-secure --type merge -p '{"spec": null}'

    c. If you are currently using v3.2 and are using OIDC with Kibana, verify that you have the following resources in your cluster:

    kubectl get authentication tigera-secure
    kubectl get secret tigera-oidc-credentials -n tigera-operator

    If both of these resources are present, continue with the next step. Otherwise, follow the instructions to configure an identity provider.

    d. Follow the steps in configure an identity provider.

  7. If your cluster is a management cluster using v3.1 or older, apply a ManagementCluster CR to your cluster.

    kubectl apply -f - <<EOF
    apiVersion: operator.tigera.io/v1
    kind: ManagementCluster
    metadata:
      name: tigera-secure
    EOF
  8. If your cluster is v3.7 or older, apply a new Monitor CR to your cluster.

    kubectl apply -f - <<EOF
    apiVersion: operator.tigera.io/v1
    kind: Monitor
    metadata:
      name: tigera-secure
    EOF
  9. If your cluster is v3.16 or older, apply a new PolicyRecommendation CR to your cluster.

    kubectl apply -f - <<EOF
    apiVersion: operator.tigera.io/v1
    kind: PolicyRecommendation
    metadata:
      name: tigera-secure
    EOF
  10. You can monitor progress with the following command:

    watch kubectl get tigerastatus
    note
    If there are any problems, you can run kubectl get tigerastatus -o yaml to get more details.
  11. If your cluster includes egress gateways, follow the egress gateway upgrade instructions.