
Create a Calico Enterprise managed cluster

Big picture

Create a Calico Enterprise managed cluster that you can control from your management cluster.

Value

Managing standalone clusters and multiple instances of Elasticsearch is not onerous when you first install Calico Enterprise, but as you move to production with 300+ clusters, it does not scale; you need centralized cluster management and log storage. With Calico Enterprise multi-cluster management, you can securely connect multiple clusters from different cloud providers in a single management plane, and control user access using RBAC. This architecture also supports federation of network policy resources across clusters, and lays the foundation for a “single pane of glass.”

Before you begin...

Required

How to

Create a managed cluster

Follow these steps in the cluster you intend to use as the managed cluster.

Install Calico Enterprise

  1. Install the Tigera operator and custom resource definitions.

    kubectl create -f https://downloads.tigera.io/ee/v3.18.3/manifests/tigera-operator.yaml
  2. Install the Prometheus operator and related custom resource definitions. The Prometheus operator is used to deploy the Prometheus server and Alertmanager to monitor Calico Enterprise metrics.

    note
    If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher (a version-check sketch follows this list).

    kubectl create -f https://downloads.tigera.io/ee/v3.18.3/manifests/tigera-prometheus-operator.yaml
  3. Install your pull secret.

    If pulling images directly from quay.io/tigera, you will likely want to use the credentials provided to you by your Tigera support representative. If using a private registry, use your private registry credentials.

    kubectl create secret generic tigera-pull-secret \
    --type=kubernetes.io/dockerconfigjson -n tigera-operator \
    --from-file=.dockerconfigjson=<path/to/pull/secret>

    For the Prometheus operator, create the pull secret in the tigera-prometheus namespace and then patch the deployment.

    kubectl create secret generic tigera-pull-secret \
    --type=kubernetes.io/dockerconfigjson -n tigera-prometheus \
    --from-file=.dockerconfigjson=<path/to/pull/secret>
    kubectl patch deployment -n tigera-prometheus calico-prometheus-operator \
    -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name": "tigera-pull-secret"}]}}}}'
  4. (Optional) If your cluster architecture requires any custom Calico Enterprise resources to function at startup, install them now using calicoctl.
  5. Download the Tigera custom resources. For more information on configuration options available in this manifest, see the installation reference.

    curl -O -L https://downloads.tigera.io/ee/v3.18.3/manifests/custom-resources.yaml

Remove the Manager custom resource from the manifest file (to script this edit and the next one instead, see the sketch after this list).

    apiVersion: operator.tigera.io/v1
    kind: Manager
    metadata:
      name: tigera-secure
    spec:
      # Authentication configuration for accessing the Tigera manager.
      # Default is to use token-based authentication.
      auth:
        type: Token

    Remove the LogStorage custom resource from the manifest file.

    apiVersion: operator.tigera.io/v1
    kind: LogStorage
    metadata:
      name: tigera-secure
    spec:
      nodes:
        count: 1

    Now apply the modified manifest.

    kubectl create -f ./custom-resources.yaml
  6. You can now monitor progress with the following command:

    watch kubectl get tigerastatus

    Wait until the apiserver shows a status of Available, then proceed to the next section.
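
If you skipped step 2 because a Prometheus operator is already running in your cluster, you can confirm that it meets the v0.40.0 minimum by inspecting its image tag. A minimal sketch; the deployment name and namespace here are assumptions, so adjust both to your environment:

    # The image tag indicates the operator version; it must be v0.40.0 or higher.
    # The deployment name and namespace below are assumptions: adjust to your setup.
    kubectl get deployment prometheus-operator -n monitoring \
      -o jsonpath='{.spec.template.spec.containers[0].image}'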
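
The manual edit in step 5 can also be scripted. A sketch using yq v4 (an optional extra dependency, not part of the standard toolchain) that keeps every document in the multi-document manifest except the Manager and LogStorage custom resources; the output filename is arbitrary:

    # Drop the Manager and LogStorage documents; pass everything else through.
    yq eval 'select(.kind != "Manager" and .kind != "LogStorage")' \
      custom-resources.yaml > custom-resources-managed.yaml
    kubectl create -f ./custom-resources-managed.yaml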

Create the connection manifest for your managed cluster

To connect the managed cluster to your management cluster, you need to create and apply a connection manifest. You can create a connection manifest from the Manager UI in the management cluster or manually using kubectl.

Connect cluster - Manager UI
  1. In the Manager UI left navbar, click Managed Clusters.

  2. On the Managed Clusters page, click Add Cluster.

  3. Give your cluster a name that is easily recognized in a list of managed clusters, and click Create Cluster.

  4. Download the manifest.

Connect cluster - kubectl

Choose a name for your managed cluster and then add it to your management cluster. The following commands create a manifest named after your managed cluster in your current directory.

  1. First, decide on the name for your managed cluster. Because you will eventually have several managed clusters, choose a name that can be easily recognized in a list of managed clusters. The name is also used in steps that follow.

    export MANAGED_CLUSTER=my-managed-cluster
  2. Get the namespace in which the Tigera operator is running in your managed cluster (in most cases this will be tigera-operator):

    export MANAGED_CLUSTER_OPERATOR_NS=tigera-operator
  3. Add a managed cluster and save the manifest containing a ManagementClusterConnection and a Secret.

    kubectl -o jsonpath="{.spec.installationManifest}" > $MANAGED_CLUSTER.yaml create -f - <<EOF
    apiVersion: projectcalico.org/v3
    kind: ManagedCluster
    metadata:
      name: $MANAGED_CLUSTER
    spec:
      operatorNamespace: $MANAGED_CLUSTER_OPERATOR_NS
    EOF
  4. Verify that the managementClusterAddr in the manifest is correct (a quick check is sketched after this list).
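
One quick way to perform that check, sketched here with grep (the field lives on the ManagementClusterConnection resource in the generated manifest):

    # Print the address the managed cluster will use to reach the management
    # cluster; confirm it is reachable from the managed cluster's nodes.
    grep managementClusterAddr $MANAGED_CLUSTER.yaml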

Apply the connection manifest to your managed cluster

  1. Apply the manifest that you created in the section, Create the connection manifest for your managed cluster.

    kubectl apply -f $MANAGED_CLUSTER.yaml
  2. Monitor progress with the following command:

    watch kubectl get tigerastatus

    Wait until the management-cluster-connection and tigera-compliance show a status of Available.

You have now successfully installed a managed cluster!
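
To confirm the connection end to end, you can also inspect the ManagedCluster resource from the other side. A sketch; run this against the management cluster, not the managed one:

    # The resource's status conditions report whether the management cluster
    # currently has an active tunnel to this managed cluster.
    kubectl get managedcluster $MANAGED_CLUSTER -o yaml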

Provide permissions to view the managed cluster

To access resources belonging to a managed cluster from the Calico Enterprise Manager UI, the service or user account used to log in must have appropriate permissions defined in the managed cluster.

Let's define admin-level permissions for the service account (mcm-user) we created to log in to the Manager UI. Run the following command against your managed cluster.

kubectl create clusterrolebinding mcm-user-admin --serviceaccount=default:mcm-user --clusterrole=tigera-network-admin
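
If an account should be able to view the managed cluster but not modify it, the same pattern works with a more restrictive role. A sketch using the built-in tigera-ui-user cluster role; the binding name is arbitrary:

    # Read-only alternative: bind tigera-ui-user instead of tigera-network-admin.
    # Run against the managed cluster.
    kubectl create clusterrolebinding mcm-user-view \
      --serviceaccount=default:mcm-user \
      --clusterrole=tigera-ui-user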

Next steps