Create a Calico Enterprise managed cluster
Big picture
Create a Calico Enterprise managed cluster that you can control from your management cluster.
Value
Managing standalone clusters and multiple instances of Elasticsearch is not onerous when you first install Calico Enterprise. As you move to production with 300+ clusters, it is not scalable; you need centralized cluster management and log storage. With Calico Enterprise multi-cluster management, you can securely connect multiple clusters from different cloud providers in a single management plane, and control user access using RBAC. This architecture also supports federation of network policy resources across clusters, and lays the foundation for a “single pane of glass.”
Before you begin...
Required
How to
Create a managed cluster
Follow these steps in the cluster you intend to use as the managed cluster.
- Kubernetes
- GKE
- EKS
- AKS
- OpenShift
Install Calico Enterprise
Install the Tigera operator and custom resource definitions.
kubectl create -f https://downloads.tigera.io/ee/v3.19.4/manifests/tigera-operator.yaml
Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.
Note: If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.
kubectl create -f https://downloads.tigera.io/ee/v3.19.4/manifests/tigera-prometheus-operator.yaml
Install your pull secret.
If pulling images directly from quay.io/tigera, you will likely want to use the credentials provided to you by your Tigera support representative. If using a private registry, use your private registry credentials.
kubectl create secret generic tigera-pull-secret \
    --type=kubernetes.io/dockerconfigjson -n tigera-operator \
    --from-file=.dockerconfigjson=<path/to/pull/secret>
(Optional) If your cluster architecture requires any custom Calico Enterprise resources to function at startup, install them now using calicoctl.
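For example, if you keep such resources in a file, you could apply them with calicoctl before continuing; the file name below is hypothetical:
# Apply any site-specific Calico Enterprise resources (hypothetical file name)
calicoctl apply -f custom-startup-resources.yaml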
Download the Tigera custom resources. For more information on configuration options available in this manifest, see the installation reference.
curl -O -L https://downloads.tigera.io/ee/v3.19.4/manifests/custom-resources.yaml
Remove the Manager custom resource from the manifest file.
apiVersion: operator.tigera.io/v1
kind: Manager
metadata:
  name: tigera-secure
spec:
  # Authentication configuration for accessing the Tigera manager.
  # Default is to use token-based authentication.
  auth:
    type: Token
Remove the LogStorage custom resource from the manifest file.
apiVersion: operator.tigera.io/v1
kind: LogStorage
metadata:
  name: tigera-secure
spec:
  nodes:
    count: 1
Now apply the modified manifest.
kubectl create -f ./custom-resources.yaml
You can now monitor progress with the following command:
watch kubectl get tigerastatus
Wait until the apiserver shows a status of Available, then proceed to the next section.
Create the connection manifest for your managed cluster
To connect the managed cluster to your management cluster, you need to create and apply a connection manifest. You can create a connection manifest from the Manager UI in the management cluster or manually using kubectl.
Connect cluster - Manager UI
In the Manager UI left navbar, click Managed Clusters.
On the Managed Clusters page, click the Add Cluster button.
Give your cluster a name that is easily recognized in a list of managed clusters, and click Create Cluster.
Download the manifest.
Connect cluster - kubectl
Choose a name for your managed cluster and then add it to your management cluster. The following commands will create a manifest with the name of your managed cluster in your current directory.
First, decide on the name for your managed cluster. Because you will eventually have several managed clusters, choose a name that can be easily recognized in a list of managed clusters. The name is also used in steps that follow.
export MANAGED_CLUSTER=my-managed-cluster
Get the namespace in which the Tigera operator is running in your managed cluster (in most cases this will be tigera-operator):
export MANAGED_CLUSTER_OPERATOR_NS=tigera-operator
Add a managed cluster and save the manifest containing a ManagementClusterConnection and a Secret.
kubectl -o jsonpath="{.spec.installationManifest}" > $MANAGED_CLUSTER.yaml create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: ManagedCluster
metadata:
  name: $MANAGED_CLUSTER
spec:
  operatorNamespace: $MANAGED_CLUSTER_OPERATOR_NS
EOF
Verify that the managementClusterAddr in the manifest is correct.
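For a quick check of that field before applying the manifest (assuming it was saved under the name chosen above), you can grep for it:
# Print the address the managed cluster will use to reach the management cluster
grep managementClusterAddr $MANAGED_CLUSTER.yaml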
Apply the connection manifest to your managed cluster
Apply the manifest that you modified in the step, Add a managed cluster to the management cluster.
kubectl apply -f $MANAGED_CLUSTER.yaml
Monitor progress with the following command:
watch kubectl get tigerastatus
Wait until the management-cluster-connection and tigera-compliance show a status of Available.
You have now successfully installed a managed cluster!
Provide permissions to view the managed cluster
To access resources belonging to a managed cluster from the Calico Enterprise Manager UI, the service or user account used to log in must have appropriate permissions defined in the managed cluster.
Let's define admin-level permissions for the service account (mcm-user) we created to log in to the Manager UI. Run the following command against your managed cluster.
kubectl create clusterrolebinding mcm-user-admin --serviceaccount=default:mcm-user --clusterrole=tigera-network-admin
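Optionally, you can sanity-check the binding by impersonating the service account with kubectl auth can-i; the tier resource here is just one example of something the Manager UI reads:
# Should print "yes" once the cluster role binding is in place
kubectl auth can-i list tiers.projectcalico.org --as=system:serviceaccount:default:mcm-user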
Install Calico Enterprise
Install the Tigera operator and custom resource definitions.
kubectl create -f https://downloads.tigera.io/ee/v3.19.4/manifests/tigera-operator.yaml
Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.
Note: If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.
kubectl create -f https://downloads.tigera.io/ee/v3.19.4/manifests/tigera-prometheus-operator.yaml
Install your pull secret.
If pulling images directly from quay.io/tigera, you will likely want to use the credentials provided to you by your Tigera support representative. If using a private registry, use your private registry credentials instead.
kubectl create secret generic tigera-pull-secret \
    --type=kubernetes.io/dockerconfigjson -n tigera-operator \
    --from-file=.dockerconfigjson=<path/to/pull/secret>
Install any extra Calico Enterprise resources needed at cluster start using calicoctl.
Download the Tigera custom resources. For more information on configuration options available in this manifest, see the installation reference.
curl -O -L https://downloads.tigera.io/ee/v3.19.4/manifests/custom-resources.yaml
Remove the Manager custom resource from the manifest file.
apiVersion: operator.tigera.io/v1
kind: Manager
metadata:
  name: tigera-secure
spec:
  # Authentication configuration for accessing the Tigera manager.
  # Default is to use token-based authentication.
  auth:
    type: Token
Remove the LogStorage custom resource from the manifest file.
apiVersion: operator.tigera.io/v1
kind: LogStorage
metadata:
  name: tigera-secure
spec:
  nodes:
    count: 1
Now apply the modified manifest.
kubectl create -f ./custom-resources.yaml
You can now monitor progress with the following command:
watch kubectl get tigerastatus
Wait until the apiserver shows a status of Available, then proceed to the next section.
Create the connection manifest for your managed cluster
To connect the managed cluster to your management cluster, you need to create and apply a connection manifest. You can create a connection manifest from the Manager UI in the management cluster or manually using kubectl.
Connect cluster - Manager UI
In the Manager UI left navbar, click Managed Clusters.
On the Managed Clusters page, click the Add Cluster button.
Give your cluster a name that is easily recognized in a list of managed clusters, and click Create Cluster.
Download the manifest.
Connect cluster - kubectl
Choose a name for your managed cluster and then add it to your management cluster. The following commands will create a manifest with the name of your managed cluster in your current directory.
First, decide on the name for your managed cluster. Because you will eventually have several managed clusters, choose a name that can be easily recognized in a list of managed clusters. The name is also used in steps that follow.
export MANAGED_CLUSTER=my-managed-cluster
Get the namespace in which the Tigera operator is running in your managed cluster (in most cases this will be tigera-operator):
export MANAGED_CLUSTER_OPERATOR_NS=tigera-operator
Add a managed cluster and save the manifest containing a ManagementClusterConnection and a Secret.
kubectl -o jsonpath="{.spec.installationManifest}" > $MANAGED_CLUSTER.yaml create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: ManagedCluster
metadata:
  name: $MANAGED_CLUSTER
spec:
  operatorNamespace: $MANAGED_CLUSTER_OPERATOR_NS
EOF
Verify that the managementClusterAddr in the manifest is correct.
Apply the connection manifest to your managed cluster
Apply the manifest that you modified in the step, Add a managed cluster to the management cluster.
kubectl apply -f $MANAGED_CLUSTER.yaml
Monitor progress with the following command:
watch kubectl get tigerastatus
Wait until the management-cluster-connection and tigera-compliance show a status of Available.
You have now successfully installed a managed cluster!
Provide permissions to view the managed cluster
To access resources belonging to a managed cluster from the Calico Enterprise Manager UI, the service or user account used to log in must have appropriate permissions defined in the managed cluster.
Let's define admin-level permissions for the service account (mcm-user) we created to log in to the Manager UI. Run the following command against your managed cluster.
kubectl create clusterrolebinding mcm-user-admin --serviceaccount=default:mcm-user --clusterrole=tigera-network-admin
Install EKS with Amazon VPC networking
Install the Tigera operator and custom resource definitions.
kubectl create -f https://downloads.tigera.io/ee/v3.19.4/manifests/tigera-operator.yaml
Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.
Note: If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.
kubectl create -f https://downloads.tigera.io/ee/v3.19.4/manifests/tigera-prometheus-operator.yaml
Install your pull secret.
If pulling images directly from quay.io/tigera, you will likely want to use the credentials provided to you by your Tigera support representative. If using a private registry, use your private registry credentials instead.
kubectl create secret generic tigera-pull-secret \
    --type=kubernetes.io/dockerconfigjson -n tigera-operator \
    --from-file=.dockerconfigjson=<path/to/pull/secret>
Install any extra Calico Enterprise resources needed at cluster start using calicoctl.
Download the Tigera custom resources. For more information on configuration options available in this manifest, see the installation reference.
curl -O -L https://downloads.tigera.io/ee/v3.19.4/manifests/eks/custom-resources.yaml
Remove the Manager custom resource from the manifest file.
apiVersion: operator.tigera.io/v1
kind: Manager
metadata:
  name: tigera-secure
spec:
  # Authentication configuration for accessing the Tigera manager.
  # Default is to use token-based authentication.
  auth:
    type: Token
Remove the LogStorage custom resource from the manifest file.
apiVersion: operator.tigera.io/v1
kind: LogStorage
metadata:
  name: tigera-secure
spec:
  nodes:
    count: 1
Now apply the modified manifest.
kubectl create -f ./custom-resources.yaml
Monitor progress with the following command:
watch kubectl get tigerastatus
Wait until the apiserver shows a status of Available, then proceed to the next section.
Install EKS with Calico networking
Calico Enterprise networking cannot currently be installed on the EKS control plane nodes. As a result, the control plane nodes will not be able to initiate network connections to Calico Enterprise pods. (This is a general limitation of EKS's custom networking support, not specific to Calico Enterprise.) As a workaround, trusted pods that require control plane nodes to connect to them, such as those implementing admission controller webhooks, can include hostNetwork: true in their pod spec. See the Kubernetes API pod spec definition for more information on this setting.
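As a rough sketch of that workaround (the names and image below are placeholders, not part of this install), a trusted webhook pod might set hostNetwork like this:
apiVersion: v1
kind: Pod
metadata:
  name: example-webhook        # placeholder name
  namespace: example-webhooks  # placeholder namespace
spec:
  hostNetwork: true            # allows the EKS control plane to reach this pod
  containers:
    - name: webhook
      image: example.registry/webhook:latest  # placeholder image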
Create an EKS cluster
For these instructions, we will use eksctl to provision the cluster. However, you can use any of the methods in Getting Started with Amazon EKS.
Before you get started, make sure you have downloaded and configured the necessary prerequisites.
First, create an Amazon EKS cluster without any nodes.
eksctl create cluster --name my-calico-cluster --without-nodegroup
Since this cluster will use Calico Enterprise for networking, you must delete the aws-node daemon set to disable AWS VPC networking for pods.
kubectl delete daemonset -n kube-system aws-node
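If you want to confirm the daemon set is gone before continuing, the following command should report that it was not found:
# Expect a "not found" error once aws-node has been deleted
kubectl get daemonset aws-node -n kube-system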
Install Calico Enterprise
Install the Tigera operator and custom resource definitions.
kubectl create -f https://downloads.tigera.io/ee/v3.19.4/manifests/tigera-operator.yaml
Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.
Note: If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.
kubectl create -f https://downloads.tigera.io/ee/v3.19.4/manifests/tigera-prometheus-operator.yaml
Install your pull secret.
If pulling images directly from quay.io/tigera, you will likely want to use the credentials provided to you by your Tigera support representative. If using a private registry, use your private registry credentials instead.
kubectl create secret generic tigera-pull-secret \
    --type=kubernetes.io/dockerconfigjson -n tigera-operator \
    --from-file=.dockerconfigjson=<path/to/pull/secret>
Install any extra Calico Enterprise resources needed at cluster start using calicoctl.
To configure Calico Enterprise for use with the Calico CNI plugin, we must create an Installation resource that has spec.cni.type: Calico. Install the custom-resources-calico-cni.yaml manifest, which includes this configuration.
Download the Tigera custom resources. For more information on configuration options available in this manifest, see the installation reference.
curl -O -L https://downloads.tigera.io/ee/v3.19.4/manifests/eks/custom-resources-calico-cni.yaml
Remove the Manager custom resource from the manifest file.
apiVersion: operator.tigera.io/v1
kind: Manager
metadata:
  name: tigera-secure
spec:
  # Authentication configuration for accessing the Tigera manager.
  # Default is to use token-based authentication.
  auth:
    type: Token
Remove the LogStorage custom resource from the manifest file.
apiVersion: operator.tigera.io/v1
kind: LogStorage
metadata:
  name: tigera-secure
spec:
  nodes:
    count: 1
Now apply the modified manifest.
kubectl create -f ./custom-resources-calico-cni.yaml
Monitor progress with the following command:
watch kubectl get tigerastatus
Finally, add nodes to the cluster.
eksctl create nodegroup --cluster my-calico-cluster --node-type t3.xlarge --node-ami auto --max-pods-per-node 100
Tip: Without the --max-pods-per-node option above, EKS will limit the number of pods based on node type. See eksctl create nodegroup --help for the full set of node group options.
Monitor progress with the following command:
watch kubectl get tigerastatus
Wait until the apiserver shows a status of Available, then proceed to the next section.
Create the connection manifest for your managed cluster
To connect the managed cluster to your management cluster, you need to create and apply a connection manifest. You can create a connection manifest from the Manager UI in the management cluster or manually using kubectl.
Connect cluster - Manager UI
In the Manager UI left navbar, click Managed Clusters.
On the Managed Clusters page, click the Add Cluster button.
Give your cluster a name that is easily recognized in a list of managed clusters, and click Create Cluster.
Download the manifest.
Connect cluster - kubectl
Choose a name for your managed cluster and then add it to your management cluster. The following commands will create a manifest with the name of your managed cluster in your current directory.
First, decide on the name for your managed cluster. Because you will eventually have several managed clusters, choose a name that can be easily recognized in a list of managed clusters. The name is also used in steps that follow.
export MANAGED_CLUSTER=my-managed-cluster
Get the namespace in which the Tigera operator is running in your managed cluster (in most cases this will be tigera-operator):
export MANAGED_CLUSTER_OPERATOR_NS=tigera-operator
Add a managed cluster and save the manifest containing a ManagementClusterConnection and a Secret.
kubectl -o jsonpath="{.spec.installationManifest}" > $MANAGED_CLUSTER.yaml create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: ManagedCluster
metadata:
  name: $MANAGED_CLUSTER
spec:
  operatorNamespace: $MANAGED_CLUSTER_OPERATOR_NS
EOF
Verify that the managementClusterAddr in the manifest is correct.
Apply the connection manifest to your managed cluster
Apply the manifest that you modified in the step, Add a managed cluster to the management cluster.
kubectl apply -f $MANAGED_CLUSTER.yaml
Monitor progress with the following command:
watch kubectl get tigerastatus
Wait until the management-cluster-connection and tigera-compliance show a status of Available.
You have now successfully installed a managed cluster!
Provide permissions to view the managed cluster
To access resources belonging to a managed cluster from the Calico Enterprise Manager UI, the service or user account used to log in must have appropriate permissions defined in the managed cluster.
Let's define admin-level permissions for the service account (mcm-user) we created to log in to the Manager UI. Run the following command against your managed cluster.
kubectl create clusterrolebinding mcm-user-admin --serviceaccount=default:mcm-user --clusterrole=tigera-network-admin
Install with Azure CNI networking
Install the Tigera operator and custom resource definitions.
kubectl create -f https://downloads.tigera.io/ee/v3.19.4/manifests/tigera-operator.yaml
Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.
Note: If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.
kubectl create -f https://downloads.tigera.io/ee/v3.19.4/manifests/tigera-prometheus-operator.yaml
Install your pull secret.
If pulling images directly from quay.io/tigera, you will likely want to use the credentials provided to you by your Tigera support representative. If using a private registry, use your private registry credentials instead.
kubectl create secret generic tigera-pull-secret \
    --type=kubernetes.io/dockerconfigjson -n tigera-operator \
    --from-file=.dockerconfigjson=<path/to/pull/secret>
Install any extra Calico Enterprise resources needed at cluster start using calicoctl.
Download the Tigera custom resources. For more information on configuration options available in this manifest, see the installation reference.
curl -O -L https://downloads.tigera.io/ee/v3.19.4/manifests/aks/custom-resources.yaml
Remove the Manager custom resource from the manifest file.
apiVersion: operator.tigera.io/v1
kind: Manager
metadata:
  name: tigera-secure
spec:
  # Authentication configuration for accessing the Tigera manager.
  # Default is to use token-based authentication.
  auth:
    type: Token
Remove the LogStorage custom resource from the manifest file.
apiVersion: operator.tigera.io/v1
kind: LogStorage
metadata:
  name: tigera-secure
spec:
  nodes:
    count: 1
Now apply the modified manifest.
kubectl create -f ./custom-resources.yaml
You can now monitor progress with the following command:
watch kubectl get tigerastatus
Wait until the apiserver shows a status of Available, then proceed to the next section.
Install with Calico Enterprise networking
Install the Tigera operator and custom resource definitions.
kubectl create -f https://downloads.tigera.io/ee/v3.19.4/manifests/tigera-operator.yaml
Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.
Note: If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.
kubectl create -f https://downloads.tigera.io/ee/v3.19.4/manifests/tigera-prometheus-operator.yaml
Install your pull secret.
If pulling images directly from quay.io/tigera, you will likely want to use the credentials provided to you by your Tigera support representative. If using a private registry, use your private registry credentials instead.
kubectl create secret generic tigera-pull-secret \
    --type=kubernetes.io/dockerconfigjson -n tigera-operator \
    --from-file=.dockerconfigjson=<path/to/pull/secret>
Install any extra Calico Enterprise resources needed at cluster start using calicoctl.
Download the Tigera custom resources. For more information on configuration options available in this manifest, see the installation reference.
curl -O -L https://downloads.tigera.io/ee/v3.19.4/manifests/aks/custom-resources-calico-cni.yaml
Remove the Manager custom resource from the manifest file.
apiVersion: operator.tigera.io/v1
kind: Manager
metadata:
  name: tigera-secure
spec:
  # Authentication configuration for accessing the Tigera manager.
  # Default is to use token-based authentication.
  auth:
    type: Token
Remove the LogStorage custom resource from the manifest file.
apiVersion: operator.tigera.io/v1
kind: LogStorage
metadata:
  name: tigera-secure
spec:
  nodes:
    count: 1
Now apply the modified manifest.
kubectl create -f ./custom-resources-calico-cni.yaml
You can now monitor progress with the following command:
watch kubectl get tigerastatus
Wait until the apiserver shows a status of Available, then proceed to the next section.
Create the connection manifest for your managed cluster
To connect the managed cluster to your management cluster, you need to create and apply a connection manifest. You can create a connection manifest from the Manager UI in the management cluster or manually using kubectl.
Connect cluster - Manager UI
In the Manager UI left navbar, click Managed Clusters.
On the Managed Clusters page, click the Add Cluster button.
Give your cluster a name that is easily recognized in a list of managed clusters, and click Create Cluster.
Download the manifest.
Connect cluster - kubectl
Choose a name for your managed cluster and then add it to your management cluster. The following commands will create a manifest with the name of your managed cluster in your current directory.
First, decide on the name for your managed cluster. Because you will eventually have several managed clusters, choose a name that can be easily recognized in a list of managed clusters. The name is also used in steps that follow.
export MANAGED_CLUSTER=my-managed-cluster
Get the namespace in which the Tigera operator is running in your managed cluster (in most cases this will be tigera-operator):
export MANAGED_CLUSTER_OPERATOR_NS=tigera-operator
Add a managed cluster and save the manifest containing a ManagementClusterConnection and a Secret.
kubectl -o jsonpath="{.spec.installationManifest}" > $MANAGED_CLUSTER.yaml create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: ManagedCluster
metadata:
  name: $MANAGED_CLUSTER
spec:
  operatorNamespace: $MANAGED_CLUSTER_OPERATOR_NS
EOF
Verify that the managementClusterAddr in the manifest is correct.
Apply the connection manifest to your managed cluster
Apply the manifest that you modified in the step, Add a managed cluster to the management cluster.
kubectl apply -f $MANAGED_CLUSTER.yaml
Monitor progress with the following command:
watch kubectl get tigerastatus
Wait until the management-cluster-connection and tigera-compliance show a status of Available.
You have now successfully installed a managed cluster!
Provide permissions to view the managed cluster
To access resources belonging to a managed cluster from the Calico Enterprise Manager UI, the service or user account used to log in must have appropriate permissions defined in the managed cluster.
Let's define admin-level permissions for the service account (mcm-user) we created to log in to the Manager UI. Run the following command against your managed cluster.
kubectl create clusterrolebinding mcm-user-admin --serviceaccount=default:mcm-user --clusterrole=tigera-network-admin
Create a configuration file for the OpenShift installer
First, create a staging directory for the installation. This directory will contain the configuration file, along with cluster state files that the OpenShift installer will create:
mkdir openshift-tigera-install && cd openshift-tigera-install
Now run the OpenShift installer to create a default configuration file:
openshift-install create install-config
After the installer finishes, your staging directory will contain the configuration file install-config.yaml.
Update the configuration file to use Calico Enterprise
Override the OpenShift networking to use Calico Enterprise and update the AWS instance types to meet the system requirements:
sed -i 's/\(OpenShiftSDN\|OVNKubernetes\)/Calico/' install-config.yaml
By default, openshift-installer creates 3 replicas. You can change these settings by modifying the cloud-provider section of install-config.yaml.
The following example changes the default deployment instance type and replica quantity.
...
platform:
  aws:
    type: m5.xlarge
replicas: 2
...
Generate the install manifests
Now generate the Kubernetes manifests using your configuration file:
openshift-install create manifests
Download the Calico Enterprise manifests for OpenShift and add them to the generated manifests directory:
mkdir calico
wget -qO- https://downloads.tigera.io/ee/v3.19.4/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico
cp calico/* manifests/
Add an image pull secret
Update the contents of the secret with the image pull secret provided to you by your Tigera support representative.
For example, if the secret is located at ~/.docker/config.json, run the following commands.
SECRET=$(cat ~/.docker/config.json | tr -d '\n\r\t ' | base64 -w 0)
sed -i "s/SECRET/${SECRET}/" manifests/02-pull-secret.yaml
Provide additional configuration
To provide additional configuration during installation (for example, BGP configuration or peers), use a Kubernetes ConfigMap with your desired Calico Enterprise resources. If you do not need to provide additional configuration, skip this section.
To include Calico Enterprise resources during installation, edit manifests/02-configmap-calico-resources.yaml in order to add your own configuration.
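For example, one resource you might add is a default BGPConfiguration; this is only a sketch, and the AS number is illustrative:
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 64512   # illustrative AS number, replace with your own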
If you have a directory with the Calico Enterprise resources, you can create the file with the command:
kubectl create configmap -n tigera-operator calico-resources \
--from-file=<resource-directory> --dry-run -o yaml \
> manifests/02-configmap-calico-resources.yaml
With recent versions of kubectl, it is necessary to have a kubeconfig configured or to add --server='127.0.0.1:443' even though it is not used.
If you have provided a calico-resources configmap and the tigera-operator pod fails to come up with Init:CrashLoopBackOff, check the output of the init-container with:
kubectl logs -n tigera-operator -l k8s-app=tigera-operator -c create-initial-resources
Create the cluster
Start the cluster creation with the following command and wait for it to complete.
openshift-install create cluster
Create a storage class
Calico Enterprise requires storage for logs and reports. Before finishing the installation, you must create a StorageClass for Calico Enterprise.
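As one possible sketch for an AWS-backed OpenShift cluster, a StorageClass might look like the following; the name tigera-elasticsearch is what Calico Enterprise looks for by default, but verify the provisioner and parameters for your environment:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tigera-elasticsearch
provisioner: kubernetes.io/aws-ebs   # adjust for your cloud or CSI driver
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true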
Install Calico Enterprise resources
Download the Tigera custom resources. For more information on configuration options available in this manifest, see the installation reference.
curl -O -L https://downloads.tigera.io/ee/v3.19.4/manifests/ocp/tigera-enterprise-resources.yaml
Remove the Manager custom resource from the manifest file.
apiVersion: operator.tigera.io/v1
kind: Manager
metadata:
  name: tigera-secure
spec:
  # Authentication configuration for accessing the Tigera manager.
  # Default is to use token-based authentication.
  auth:
    type: Token
Remove the LogStorage custom resource from the manifest file.
apiVersion: operator.tigera.io/v1
kind: LogStorage
metadata:
  name: tigera-secure
spec:
  nodes:
    count: 1
Now apply the modified manifest.
oc create -f ./tigera-enterprise-resources.yaml
Apply the Calico Enterprise manifests for the Prometheus operator.
oc create -f https://downloads.tigera.io/ee/v3.19.4/manifests/ocp/tigera-prometheus-operator.yaml
You can now monitor progress with the following command:
watch oc get tigerastatus
When it shows all components with status Available, proceed to the next step.
(Optional) Apply the full CRDs including descriptions.
oc apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.19.4/manifests/operator-crds.yaml
Create the connection manifest for your managed cluster
To connect the managed cluster to your management cluster, you need to create and apply a connection manifest. You can create a connection manifest from the Manager UI in the management cluster or manually using oc.
Connect cluster - Manager UI
In the Manager UI left navbar, click Managed Clusters.
On the Managed Clusters page, click the Add Cluster button.
Give your cluster a name that is easily recognized in a list of managed clusters, and click Create Cluster.
Download the manifest.
Connect cluster - oc
Choose a name for your managed cluster and then add it to your management cluster. The following commands will create a manifest with the name of your managed cluster in your current directory.
First, decide on the name for your managed cluster. Because you will eventually have several managed clusters, choose a name that can be easily recognized in a list of managed clusters. The name is also used in steps that follow.
export MANAGED_CLUSTER=my-managed-cluster
Get the namespace in which the Tigera operator is running in your managed cluster (in most cases this will be tigera-operator):
export MANAGED_CLUSTER_OPERATOR_NS=tigera-operator
Add a managed cluster and save the manifest containing a ManagementClusterConnection and a Secret.
oc -o jsonpath="{.spec.installationManifest}" > $MANAGED_CLUSTER.yaml create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: ManagedCluster
metadata:
  name: $MANAGED_CLUSTER
spec:
  operatorNamespace: $MANAGED_CLUSTER_OPERATOR_NS
EOF
Verify that the managementClusterAddr in the manifest is correct.
Apply the connection manifest to your managed cluster
Apply the manifest that you modified in the step, Add a managed cluster to the management cluster.
oc apply -f $MANAGED_CLUSTER.yaml
Monitor progress with the following command:
watch oc get tigerastatus
Wait until the management-cluster-connection and tigera-compliance show a status of Available.
You have now successfully installed a managed cluster!
Provide permissions to view the managed cluster
To access resources belonging to a managed cluster from the Calico Enterprise Manager UI, the service or user account used to log in must have appropriate permissions defined in the managed cluster.
Let's define admin-level permissions for the service account (mcm-user) we created to log in to the Manager UI. Run the following command against your managed cluster.
oc create clusterrolebinding mcm-user-admin --serviceaccount=default:mcm-user --clusterrole=tigera-network-admin
Next steps
- When you are ready to fine-tune your multi-cluster management deployment for production, see Fine-tune multi-cluster management
- To change an existing Calico Enterprise standalone cluster to a management or managed cluster, see Change cluster types