Calico Enterprise 3.19 (latest) documentation

Change a cluster type

Big picture

Change the configuration type for an existing Calico Enterprise cluster to management, managed, or standalone.

Value

As you build out a multi-cluster management deployment, it is critical to have flexibility to repurpose existing cluster types to meet your needs.

Before you begin…

To verify the type of an existing cluster, run the following command:

kubectl get managementcluster,managementclusterconnection

Your cluster is a management cluster if a ManagementCluster resource is returned, and a managed cluster if a ManagementClusterConnection resource is returned. A cluster that has neither resource is a standalone cluster.

We do not support having both ManagementCluster and ManagementClusterConnection in one cluster.
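The classification above can be sketched as a small shell helper. This is a hypothetical convenience, not part of the product: it classifies a cluster from the output of the `kubectl get` command above, assuming resource names are printed in the usual `kind.group/name` form (for example `managementcluster.operator.tigera.io/tigera-secure`).

```shell
# Hypothetical helper: classify a cluster from the output of
#   kubectl get managementcluster,managementclusterconnection --no-headers
# Pass that output as the first argument.
cluster_type() {
  out="$1"
  has_mgmt=false
  has_conn=false
  echo "$out" | grep -q '^managementcluster\.' && has_mgmt=true
  echo "$out" | grep -q '^managementclusterconnection\.' && has_conn=true
  if $has_mgmt && $has_conn; then
    echo "unsupported: both resources present"
  elif $has_mgmt; then
    echo "management"
  elif $has_conn; then
    echo "managed"
  else
    echo "standalone"
  fi
}

cluster_type "managementcluster.operator.tigera.io/tigera-secure"
```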

How to

Change a standalone cluster to a management cluster

  1. Create a service to expose the management cluster by applying the following manifest. The NodePort service shown here may not be suitable for production and high availability; for other options, see Fine-tune multi-cluster management for production.

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: tigera-manager-mcm
      namespace: tigera-manager
    spec:
      ports:
      - nodePort: 30449
        port: 9449
        protocol: TCP
        targetPort: 9449
      selector:
        k8s-app: tigera-manager
      type: NodePort
    EOF
  2. Find the address at which you can reach the above service and set it as a variable (for example, "example.com:1234" or "10.0.0.10:1234").

    export MANAGEMENT_CLUSTER_ADDR=<your-management-cluster-addr>
  3. Apply the ManagementCluster CR.

    kubectl apply -f - <<EOF
    apiVersion: operator.tigera.io/v1
    kind: ManagementCluster
    metadata:
      name: tigera-secure
    spec:
      address: $MANAGEMENT_CLUSTER_ADDR
    EOF
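As a production-oriented alternative to the NodePort service in step 1, a LoadBalancer service is one common option in cloud environments. The manifest below is an illustrative sketch under assumptions about your environment, not taken from the Fine-tune multi-cluster management guide; verify the recommended options there before use.

```yaml
# Hypothetical alternative: expose the management cluster through a
# cloud LoadBalancer instead of a NodePort. Ports mirror the NodePort
# example above; adjust for your environment.
apiVersion: v1
kind: Service
metadata:
  name: tigera-manager-mcm
  namespace: tigera-manager
spec:
  ports:
  - port: 9449
    protocol: TCP
    targetPort: 9449
  selector:
    k8s-app: tigera-manager
  type: LoadBalancer
```

With a service of this type, MANAGEMENT_CLUSTER_ADDR would typically be the load balancer's external IP or hostname followed by port 9449.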

Change a standalone cluster to a managed cluster

The steps in this section assume that a management cluster is up and running.

note

If you wish to retain LogStorage data for your managed cluster, verify that the reclaim policy within your storage class is configured to Retain data.
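A storage class with a Retain reclaim policy looks like the fragment below. This is illustrative only: the storage class name and provisioner are assumptions and depend on your environment; see Configure storage for logs and reports for the supported setup.

```yaml
# Illustrative fragment (name and provisioner are environment-specific
# assumptions): a Retain policy keeps the volume after its claim is deleted.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tigera-elasticsearch
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
```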

  1. Choose a name for your managed cluster.

    export MANAGED_CLUSTER=my-managed-cluster
  2. Get the namespace in which the Tigera operator is running in your managed cluster (in most cases this will be tigera-operator):

    export MANAGED_CLUSTER_OPERATOR_NS=tigera-operator
  3. Add the managed cluster to your management cluster. The following command will create a manifest with the name of your managed cluster in your current directory.

    kubectl create -f - -o jsonpath="{.spec.installationManifest}" > $MANAGED_CLUSTER.yaml <<EOF
    apiVersion: projectcalico.org/v3
    kind: ManagedCluster
    metadata:
      name: $MANAGED_CLUSTER
    spec:
      operatorNamespace: $MANAGED_CLUSTER_OPERATOR_NS
    EOF
  4. Verify that the managementClusterAddr in the manifest is correct. Apply the manifest to your managed cluster.

    kubectl apply -f $MANAGED_CLUSTER.yaml
  5. Remove resources that are no longer needed in the managed cluster.

    kubectl delete manager tigera-secure
    kubectl delete logstorage tigera-secure
  6. Monitor progress until all components show a status of Available.

    watch kubectl get tigerastatus
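The condition you check by eye with `watch kubectl get tigerastatus` can be sketched as a tiny helper for scripted waits. This is hypothetical, not a documented tool: it succeeds only when every AVAILABLE value passed to it is True.

```shell
# Hypothetical helper: succeed only when every argument is "True",
# e.g. the AVAILABLE column values from `kubectl get tigerastatus`.
all_available() {
  for s in "$@"; do
    [ "$s" = "True" ] || return 1
  done
  return 0
}

all_available True True && echo "all components Available"
```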

Change a management cluster to a standalone cluster

  1. Change your installation type to standalone by deleting the ManagementCluster resource.

    kubectl delete managementcluster tigera-secure
  2. Delete the service that you created to expose the management cluster.

    kubectl delete service <your-service-name> -n tigera-manager
  3. Delete all managed cluster resources in your cluster.

    kubectl delete managedcluster --all
    note

    Although installation automatically cleans up credentials for managed clusters, the management cluster Elasticsearch still retains managed cluster data based on the retention value that you specify in your LogStorage.
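That retention value lives in the LogStorage resource of the management cluster. The fragment below is illustrative only; the day counts are hypothetical and should be set to match your own retention requirements.

```yaml
# Illustrative fragment (hypothetical day counts): retention for managed
# cluster data is configured on the management cluster's LogStorage.
apiVersion: operator.tigera.io/v1
kind: LogStorage
metadata:
  name: tigera-secure
spec:
  retention:
    flows: 8
    auditReports: 91
    snapshots: 91
    complianceReports: 91
```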

Change a managed cluster to a standalone cluster

  1. Ensure a persistent volume is provisioned to store log data for the standalone cluster. See Configure storage for logs and reports.

  2. Remove the ManagementClusterConnection from your cluster and delete the managed cluster connection secret.

    kubectl delete managementclusterconnection tigera-secure
    kubectl delete secret tigera-managed-cluster-connection -n tigera-operator
    note

    If the operator in your managed cluster is running in a different namespace, use that namespace in the kubectl delete secret... command.

    Deleting the corresponding ManagedCluster resource in your management cluster (see the final step below) does not result in loss of data; if you later reconnect a cluster using the same name, the existing data is available again.
  3. Install the Tigera custom resources. For more information, see the installation reference.

    kubectl apply -f https://downloads.tigera.io/ee/v3.19.4/manifests/custom-resources.yaml
  4. Monitor the progress with the following command:

    watch kubectl get tigerastatus

    When all components show a status of Available, go to the next step.

  5. Remove your managed cluster from the management cluster.

    kubectl delete managedcluster <your-managed-cluster-name>