
Upgrade Calico Enterprise installed with OpenShift

note

All upgrades in Calico Enterprise are free with a valid license.

Upgrade paths

You can upgrade your cluster to a maximum of two releases from your existing version. For example, if you are on version 3.6, you can upgrade to 3.7, or you can upgrade directly to 3.8. However, you cannot upgrade beyond two releases; upgrading from 3.6 to 3.9 (three releases) is not supported.

If you are several versions behind where you want to be, you must upgrade in steps of no more than two releases. For example, if you are on version 3.6 and you want to get to 3.10, you can upgrade to 3.8, and then upgrade from 3.8 directly to 3.10.

note

Always check the Release Notes for exceptions; limitations can override the above pattern.

Prerequisites

Ensure that your Calico Enterprise OpenShift cluster is running a supported version of OpenShift Container Platform, and the Calico Enterprise operator version is v1.2.4 or greater.

note

You can confirm that you are running the operator by checking for the operator namespace with oc get ns tigera-operator, or by issuing oc get tigerastatus; a successful return means your installation is using the operator.
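For example, either of the following commands succeeds only on an operator-based installation:

oc get ns tigera-operator
oc get tigerastatus

To confirm the operator version, one option (assuming the standard tigera-operator deployment name and namespace) is to inspect the operator image tag:

oc get deployment tigera-operator -n tigera-operator -o jsonpath='{.spec.template.spec.containers[0].image}'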

Prepare your cluster for the upgrade

During upgrade, the Calico Enterprise LogStorage CR is temporarily removed so Elasticsearch can be upgraded. Features that depend on LogStorage are temporarily unavailable, including dashboards in the Manager UI. Data ingestion is paused temporarily, but resumes when the LogStorage is up and running again.
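During and after the upgrade, you can watch for the log storage component to report Available again. A quick check (assuming the log-storage TigeraStatus name used by current versions):

watch oc get tigerastatus log-storage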

To retain data from your current installation (optional), ensure that the currently mounted persistent volumes have their reclaim policy set to Retain. Data retention is recommended only for users who have a valid Elasticsearch license. (Trial licenses can be invalidated during upgrade.)
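You can list your persistent volumes and their current reclaim policy with oc get pv. As a sketch, assuming a volume named tigera-elasticsearch-pv (a placeholder; substitute each volume that backs log storage), the reclaim policy can be set with a patch like this:

oc patch pv tigera-elasticsearch-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'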

Default Deny

Calico Enterprise creates a default-deny policy for the calico-system namespace. If you deploy workloads into the calico-system namespace, you must create a policy that allows the required traffic for those workloads before you upgrade.

Windows

If your cluster has Windows nodes and uses custom TLS certificates for log storage, then before you upgrade, prepare and apply new log storage certificates that include the required service DNS names.

Multi-cluster management

For Calico Enterprise v3.5 and v3.7, when you upgrade a multi-cluster management setup, you must update all managed clusters and the management cluster.

Download the new manifests

Make a manifests directory.

mkdir manifests

Download the Calico Enterprise manifests for OpenShift and add them to the manifests directory you created:

mkdir calico
wget -qO- https://downloads.tigera.io/ee/v3.18.2/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico --exclude=01-cr-* --exclude=02-pull-secret.yaml
cp calico/* manifests/

Upgrade from 3.0 or later

note

The steps differ based on your cluster type. If you are unsure of your cluster type, check the clusterManagementType field in the output of oc get installation -o yaml before you proceed.
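As a quick check, a command along these lines surfaces the field if it is present:

oc get installation -o yaml | grep clusterManagementType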

  1. Apply the updated manifests.

    oc apply --server-side --force-conflicts -f manifests/
  2. Apply the Calico Enterprise manifests for the Prometheus operator.

    note
    Complete this step only if you are using the Calico Enterprise Prometheus operator (including adding your own Prometheus operator). Skip this step if you are using BYO Prometheus that you manage yourself.
    oc apply -f https://downloads.tigera.io/ee/v3.18.2/manifests/ocp/tigera-prometheus-operator.yaml

    Create the pull secret in the tigera-prometheus namespace and then patch the Prometheus operator deployment. Use the image pull secret provided to you by your Tigera support representative.

    oc create secret generic tigera-pull-secret \
    --type=kubernetes.io/dockerconfigjson -n tigera-prometheus \
    --from-file=.dockerconfigjson=<path/to/pull/secret>
    oc patch deployment -n tigera-prometheus calico-prometheus-operator \
    -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name": "tigera-pull-secret"}]}}}}'
  3. If your cluster is a management cluster, apply a ManagementCluster CR to your cluster.

    oc apply -f - <<EOF
    apiVersion: operator.tigera.io/v1
    kind: ManagementCluster
    metadata:
      name: tigera-secure
    EOF
  4. If your cluster is v3.7 or older, apply a new Monitor CR to your cluster.

    oc apply -f - <<EOF
    apiVersion: operator.tigera.io/v1
    kind: Monitor
    metadata:
      name: tigera-secure
    EOF
  5. If your cluster is v3.16 or older, apply a new PolicyRecommendation CR to your cluster.

    oc apply -f - <<EOF
    apiVersion: operator.tigera.io/v1
    kind: PolicyRecommendation
    metadata:
      name: tigera-secure
    EOF
  6. You can now monitor the upgrade progress with the following command:

    watch oc get tigerastatus
    note

    If there are any problems, you can use kubectl get tigerastatus -o yaml to get more details.

  7. Remove unused policies in your cluster.

    If your cluster is a managed cluster, run this command:

    kubectl delete -f https://downloads.tigera.io/ee/v3.18.2/manifests/default-tier-policies-managed.yaml

    For other clusters, run this command:

    kubectl delete -f https://downloads.tigera.io/ee/v3.18.2/manifests/default-tier-policies.yaml