
Migrate Calico to an operator-managed installation

Big picture

Switch your Calico installation from manifest-based resources to an installation managed by the Calico operator.

Value

The Calico operator provides a number of advantages over traditional manifest-based installation of Calico resources, including but not limited to:

  • Automatic platform and configuration detection.
  • A simplified upgrade procedure.
  • Well-defined split between end-user configuration and product code.
  • Resource reconciliation and lifecycle management.

Concepts

Operator vs. manifest-based installations

Most Calico installations in the past have been manifest-based, meaning that Calico is installed directly as a set of Kubernetes resources in a .yaml file.

The Calico operator is a Kubernetes application that installs and manages the lifecycle of a Calico installation by creating and updating Kubernetes resources such as Deployments, DaemonSets, and Secrets, without the need for direct user intervention.

If you are familiar with manifest-based installs and are looking to use the operator, there are a few key differences to be aware of:

  • Calico resources will be migrated from the kube-system namespace used by the Calico manifests to a new calico-system namespace.
  • Calico resources will no longer be hand-editable, as the Calico operator will reconcile undesired changes to maintain an expected state.
  • Calico resources can instead be configured via the operator.tigera.io APIs (see the example sketched after this list).
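
For example, settings that would previously have been edited directly in the calico-node manifests can instead be declared on the Installation resource. The field names below come from the operator.tigera.io/v1 Installation API; the CIDR and encapsulation values are illustrative only and should match your cluster.

    apiVersion: operator.tigera.io/v1
    kind: Installation
    metadata:
      name: default
    spec:
      calicoNetwork:
        ipPools:
          - cidr: 192.168.0.0/16      # illustrative; use your cluster's pool CIDR
            encapsulation: VXLAN      # or IPIP, IPIPCrossSubnet, VXLANCrossSubnet, None
            natOutgoing: Enabled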

Operator migration

For new clusters, you can simply follow the steps in the quickstart guide to get started with the operator.

For existing clusters that were installed with the calico.yaml manifest, the operator will detect the existing Calico resources on the cluster when it is installed and calculate how to take ownership of them. The operator will maintain existing customizations where supported, and will warn about any unsupported configurations that it detects.
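
One way to see those warnings is to read the operator's own logs after it starts. This assumes the operator was installed from the standard tigera-operator.yaml manifest, which creates a Deployment named tigera-operator in the tigera-operator namespace.

    # Watch for messages about detected settings and unsupported configuration.
    kubectl logs -n tigera-operator deployment/tigera-operator --tail=50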

Before you begin

  • Ensure that your Calico installation is configured to use the Kubernetes datastore. If your cluster uses etcdv3 directly, you must follow the datastore migration procedure before following this document.
  • Migration to a Calico v3.29 operator-managed installation is supported only from a Calico v3.29 manifest-based installation. A quick pre-flight check for both points is sketched after this list.
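
As a rough pre-flight check, you can inspect the existing manifest-based install before starting. The calico-config ConfigMap and calico-node DaemonSet names below are the ones created by the standard calico.yaml manifest; adjust them if your install was customized.

    # An etcd-backed install typically lists etcd_endpoints in its ConfigMap;
    # a Kubernetes-datastore install does not.
    kubectl get configmap calico-config -n kube-system -o yaml | grep etcd_endpoints

    # Confirm the running calico/node image is a v3.29.x release.
    kubectl get daemonset calico-node -n kube-system \
      -o jsonpath='{.spec.template.spec.containers[0].image}'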

How To

Migrate a cluster to the operator

note

Do not edit or delete any resources in the kube-system Namespace during the following procedure, as doing so may interfere with the migration.

  1. Install the Tigera Calico operator and custom resource definitions.

    kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/tigera-operator.yaml
    note

    Due to the large size of the CRD bundle, kubectl apply might exceed request limits. Instead, use kubectl create or kubectl replace.
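
    If you hit that limit, the same manifest can be installed with kubectl create instead (kubectl replace can be used for subsequent updates); the URL is the same one used above.

    kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/tigera-operator.yaml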

  2. Trigger the operator to start a migration by creating an Installation resource. The operator will auto-detect your existing Calico settings and fill out the spec section.

    kubectl create -f - <<EOF
    apiVersion: operator.tigera.io/v1
    kind: Installation
    metadata:
      name: default
    spec: {}
    EOF
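
    Once the operator has populated the resource, you can inspect the settings it detected. The Installation resource is cluster-scoped, so no namespace is needed; the output depends on your cluster.

    # View the spec the operator filled in from your existing install.
    kubectl get installation default -o yaml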
  3. Monitor the migration status with the following command:

    kubectl describe tigerastatus calico
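
    A shorter summary is also available with kubectl get. The values below illustrate what a finished migration looks like; they are not literal output from your cluster.

    kubectl get tigerastatus calico

    NAME     AVAILABLE   PROGRESSING   DEGRADED   SINCE
    calico   True        False         False      2m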
  4. Once the migration is complete, you will see that the Calico resources have moved to the calico-system namespace.

    kubectl get pods -n calico-system

    You should see output like this:

    NAME                                       READY   STATUS    RESTARTS   AGE
    calico-kube-controllers-7688765788-9rqht   1/1     Running   0          17m
    calico-node-4ljs6                          1/1     Running   0          14m
    calico-node-bd8mc                          1/1     Running   0          14m
    calico-node-cpbd8                          1/1     Running   0          14m
    calico-node-jl97q                          1/1     Running   0          14m
    calico-node-xw2nj                          1/1     Running   0          14m
    calico-typha-57bf79f96f-6sk8x              1/1     Running   0          14m
    calico-typha-57bf79f96f-g99s9              1/1     Running   0          14m
    calico-typha-57bf79f96f-qtchs              1/1     Running   0          14m

    At this point, the operator will have automatically cleaned up any Calico resources in the kube-system namespace. No manual cleanup is required.
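
    To confirm the cleanup, you can check for leftover Calico pods in kube-system. An empty result (or "No resources found") is expected; the k8s-app labels used here are the ones applied by the standard calico.yaml manifest, so adjust them if your install was customized.

    kubectl get pods -n kube-system -l k8s-app=calico-node
    kubectl get pods -n kube-system -l k8s-app=calico-kube-controllers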