Install Calico Enterprise on Mirantis Kubernetes Engine 4k
This installation guide explains how to install Calico Enterprise on MKE 4k with the Tigera Operator.
About installing Calico Enterprise on MKE 4k
Installing Calico Enterprise on MKE 4k differs from most Kubernetes deployments because the CNI must be integrated directly into the cluster provisioning workflow. In a standard environment, you might install a CNI after the cluster is fully operational; however, MKE 4k requires you to declare a "custom" networking provider at the start.
This approach creates a coordinated installation where the MKE 4k installer pauses mid-process to wait for you to deploy Calico Enterprise. The nodes will remain in a NotReady state and the MKE installer will remain "pending" until the Tigera Operator is installed and networking is established.
Prerequisites
- You have already prepared to install a Kubernetes cluster with MKE 4k:
  - You are familiar with the installation and configuration process.
  - You have set up infrastructure that meets the MKE 4k system requirements.
  - You have modified an installation configuration file with provisioning information about your infrastructure.
  - You have installed the `mkectl` command line tool on your workstation.
- Your infrastructure meets the Calico Enterprise system requirements.
- You understand your log storage requirements and have planned how you want to add storage in your environment.
Start the MKE 4k installation
- In your installation configuration file, make the following changes:

  - Specify that you want to use a custom networking provider:

    ```yaml
    spec:
      network:
        providers:
          - enabled: true
            provider: "custom"
    ```

  - Optional: If you don't want to use the default pod CIDR range 192.168.0.0/16, specify the range in the installation configuration file.

    The pod CIDR range cannot easily be changed later, and it must be configured before you install Calico Enterprise.

    Edit the configuration file as follows:

    ```yaml
    spec:
      network:
        providers:
          - enabled: true
            provider: "custom"
            extraConfig:
              cidrV4: <pod-cidr>
    ```

    Replace `<pod-cidr>` with a pod CIDR range that suits your deployment and does not conflict with other ranges in your network. If you use a non-default range, see the matching Calico Enterprise example at the end of this section.
- To start the MKE 4k installation process, run the following command:

  ```bash
  mkectl apply -f <mke-configuration-file>.yaml --cni-check-timeout <cni-timeout-duration>
  ```

  Replace the following:

  - `<mke-configuration-file>`: The path of your installation configuration file.
  - `<cni-timeout-duration>`: The amount of time, in minutes, that the MKE 4k installer will wait for you to complete the network installation.

  Wait until you see that the installer is waiting for cluster networking to be established. A filled-in example of this step appears at the end of this section.

  Example output:

  ```
  waiting 60 minutes for cluster networking to be established using custom CNI provider
  ```
- SSH into one of the controller nodes and run the following command:

  ```bash
  sudo /usr/local/bin/k0s kubeconfig admin
  ```

  Add the kubeconfig to your local kubectl.

  Important: Don't close this terminal session as you move to the next step. The MKE 4k installation process does not complete until after you install Calico Enterprise. For the next step, open a second terminal session. After you install Calico Enterprise, you can return to the first terminal session to observe and verify the installation process.
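For illustration, the following sketch shows these steps end to end with hypothetical values: an installation configuration file named `mke.yaml`, a 60-minute CNI timeout, a controller address placeholder, and a kubeconfig saved locally as `mke-admin.conf`. Adjust all of these for your environment.

```bash
# On your workstation: start the MKE 4k installation with the custom CNI provider.
# "mke.yaml" and the 60-minute timeout are example values.
mkectl apply -f mke.yaml --cni-check-timeout 60

# In another shell, after the installer reports that it is waiting for cluster
# networking: fetch the admin kubeconfig from a controller node.
# <controller-node> is a placeholder for your controller's address.
ssh <controller-node> "sudo /usr/local/bin/k0s kubeconfig admin" > mke-admin.conf

# Point your local kubectl at the new cluster for the next section.
export KUBECONFIG=$PWD/mke-admin.conf
```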
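If you specified a non-default pod CIDR, you will typically also need to give Calico Enterprise a matching IP pool, because the Tigera Operator otherwise defaults to 192.168.0.0/16. The excerpt below is a minimal sketch that assumes a hypothetical pod CIDR of 10.48.0.0/16; in practice you would add it to the Installation resource in `custom-resources.yaml` before applying that manifest in the next section.

```bash
# Illustration only: the Installation excerpt is shown as a heredoc here.
# In practice, edit the downloaded custom-resources.yaml directly.
cat <<'EOF'
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - cidr: 10.48.0.0/16   # hypothetical value; must match cidrV4 in the MKE config
EOF
```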
Install Calico Enterprise
In a new terminal, install the Calico Enterprise CNI.
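Before you start, you can optionally confirm that your local kubectl is pointed at the new cluster. A quick check, assuming the kubeconfig from the previous section is active; NotReady nodes and Pending pods are expected until Calico Enterprise networking is installed:

```bash
# Nodes stay NotReady until the Calico Enterprise CNI is installed.
kubectl get nodes

# Pods that need pod networking (for example, CoreDNS) remain Pending for now.
kubectl get pods --all-namespaces
```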
- Install the Tigera Operator and custom resource definitions.

  ```bash
  kubectl apply --server-side -f https://downloads.tigera.io/ee/v3.22.1/manifests/operator-crds.yaml
  kubectl apply -f https://downloads.tigera.io/ee/v3.22.1/manifests/tigera-operator.yaml
  ```
- Install your pull secret.

  If pulling images directly from quay.io/tigera, you will likely want to use the credentials provided to you by your Tigera support representative. If using a private registry, use your private registry credentials instead.

  ```bash
  kubectl create secret generic tigera-pull-secret \
      --type=kubernetes.io/dockerconfigjson -n tigera-operator \
      --from-file=.dockerconfigjson=<pull-secret>
  ```

  Replace `<pull-secret>` with the path to your pull secret.
- Optional: If your cluster architecture requires any custom Calico Enterprise resources to function at startup, install them now using calicoctl. For an illustrative example, see the sketch after these steps.
- Optional: To enable the compliance and packet capture features during installation, download and review the `custom-resources.yaml` file. Uncomment the necessary CRs and use this `custom-resources.yaml` file for installation.

  ```bash
  curl -O -L https://downloads.tigera.io/ee/v3.22.1/manifests/custom-resources.yaml
  ```
- Install the Tigera custom resources. For more information on configuration options available in this manifest, see the installation reference.

  ```bash
  kubectl create -f https://downloads.tigera.io/ee/v3.22.1/manifests/custom-resources.yaml
  ```

  If you downloaded and edited `custom-resources.yaml` in the previous step, create the resources from your local copy instead.
- Restrict privileged container access in the `tigera-elasticsearch` namespace to only the necessary Tigera and Elasticsearch service accounts using an MKE admission policy annotation:

  ```bash
  kubectl annotate namespace tigera-elasticsearch "mke.mirantis.com/allowed-accounts-privileged=system:serviceaccount:tigera-eck-operator:elastic-operator,system:serviceaccount:tigera-elasticsearch:tigera-elasticsearch"
  ```
- Wait until the API server shows a status of `Available`, and then proceed to the next step:

  ```bash
  watch kubectl get tigerastatus
  ```
- Install the Calico Enterprise license.

  ```bash
  kubectl create -f <license>.yaml
  ```

  Replace `<license>` with the path to your license file.
- Continue monitoring until all items in `tigerastatus` are `Available`:

  ```bash
  watch kubectl get tigerastatus
  ```

  When the Calico Enterprise installation is complete, you can return to your other terminal and wait for the MKE 4k installation to finish.
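For the optional calicoctl step above, the sketch below shows one way to apply a startup resource. The BGPConfiguration and AS number are purely hypothetical examples; substitute whatever resources your architecture actually requires.

```bash
# Illustration only: apply a custom BGP configuration with calicoctl.
# The AS number below is a made-up example value.
calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: true
  asNumber: 64512
EOF
```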
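As a final check, the commands below sketch a typical verification pass: once every component in `tigerastatus` reports `Available`, the nodes that were NotReady while the installer waited should become Ready, and the Calico Enterprise pods should be running.

```bash
# All Calico Enterprise components should report Available.
kubectl get tigerastatus

# Nodes should now be Ready.
kubectl get nodes

# Calico Enterprise pods run in the calico-system namespace.
kubectl get pods -n calico-system
```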
Next steps
Recommended
- Configure access to the Calico Enterprise web console
- Authentication quickstart
- Configure your own identity provider
Recommended - Networking
- The default networking uses IP in IP encapsulation with BGP routing. For all networking options, see Determine best networking option.
Recommended - Security