Install Calico Enterprise on OpenShift
Big picture
Install an OpenShift 4 cluster with Calico Enterprise.
This guide augments the applicable steps in the OpenShift documentation to install Calico Enterprise.
Before you begin
CNI support
Calico CNI for networking with Calico Enterprise network policy
Required
- A compatible OpenShift cluster
- An environment and cluster that meet the Calico Enterprise system requirements
- A Red Hat account, needed for the pull secret used to provision an OpenShift cluster
- If installing on AWS, an AWS account configured appropriately for OpenShift 4, with your AWS credentials set up. Note that the OpenShift installer supports a subset of AWS regions.
- OpenShift installer v4.8 or v4.9 and the OpenShift command line interface, both from cloud.redhat.com
- A local SSH private key, generated and added to your ssh-agent
How to
- Create a configuration file for the OpenShift installer
- Update the configuration file to use Calico Enterprise
- Generate the install manifests
- Add an image pull secret
- Provide additional configuration
- Create the cluster
- Create a storage class
- Install the Calico Enterprise license
Create a configuration file for the OpenShift installer
First, create a staging directory for the installation. This directory will contain the configuration file, along with the cluster state files that the OpenShift installer creates:
mkdir openshift-tigera-install && cd openshift-tigera-install
Now run OpenShift installer to create a default configuration file:
openshift-install create install-config
After the installer finishes, your staging directory will contain the configuration file install-config.yaml.
Update the configuration file to use Calico Enterprise
Override the OpenShift networking to use Calico Enterprise and update the AWS instance types to meet the system requirements:
sed -i 's/\(OpenShiftSDN\|OVNKubernetes\)/Calico/' install-config.yaml
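After this change, the networking section of install-config.yaml names Calico as the network type. The snippet below is only an illustration; the CIDRs shown are the installer defaults and may differ in your file.
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: Calico
  serviceNetwork:
  - 172.30.0.0/16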
By default, openshift-installer creates 3 replicas. You can change these settings by modifying the cloud-provider section of install-config.yaml.
The following example changes the default instance type and the number of replicas.
...
platform:
  aws:
    type: m5.xlarge
replicas: 2
...
Generate the install manifests
Now generate the Kubernetes manifests using your configuration file:
openshift-install create manifests
Download the Calico Enterprise manifests for OpenShift and add them to the generated manifests directory:
mkdir calico
wget -qO- https://downloads.tigera.io/ee/v3.19.4/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico
cp calico/* manifests/
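If you want a quick sanity check that the Calico Enterprise manifests now sit alongside the generated ones, a simple listing is enough (file names vary by release):
ls manifests/ | grep -Ei 'calico|tigera'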
Add an image pull secret
Update the contents of the secret with the image pull secret provided to you by your Tigera support representative.
For example, if the secret is located at ~/.docker/config.json, run the following commands.
SECRET=$(cat ~/.docker/config.json | tr -d '\n\r\t ' | base64 -w 0)
sed -i "s/SECRET/${SECRET}/" manifests/02-pull-secret.yaml
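To confirm the placeholder was replaced, you can check that the literal string SECRET no longer appears in the manifest; this check is a convenience, not part of the official procedure.
grep SECRET manifests/02-pull-secret.yaml || echo "pull secret injected"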
Provide additional configuration
To provide additional configuration during installation (for example, BGP configuration or peers), use a Kubernetes ConfigMap with your desired Calico Enterprise resources. If you do not need to provide additional configuration, skip this section.
To include Calico Enterprise resources during installation, edit manifests/02-configmap-calico-resources.yaml in order to add your own configuration.
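For example, a BGP peering you want applied at install time could be expressed as a resource like the one below; the peer address and AS number are placeholders only.
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: my-rack-tor
spec:
  peerIP: 192.0.2.1
  asNumber: 64512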
If you have a directory with the Calico Enterprise resources, you can create the file with the command:
kubectl create configmap -n tigera-operator calico-resources \
  --from-file=<resource-directory> --dry-run=client -o yaml \
  > manifests/02-configmap-calico-resources.yaml
With recent versions of kubectl it is necessary to have a kubeconfig configured or to add --server='127.0.0.1:443' even though it is not used.
If you have provided a calico-resources configmap and the tigera-operator pod fails to come up with Init:CrashLoopBackOff, check the output of the init-container with kubectl logs -n tigera-operator -l k8s-app=tigera-operator -c create-initial-resources.
Create the cluster
Start the cluster creation with the following command and wait for it to complete.
openshift-install create cluster
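When the installer finishes, it writes the cluster credentials under the staging directory. To run the oc commands in the remaining steps against the new cluster, point your kubeconfig at those credentials (the path below assumes you are still in the staging directory created earlier):
export KUBECONFIG=$(pwd)/auth/kubeconfig
oc get nodes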
Create a storage class
Calico Enterprise requires storage for logs and reports. Before finishing the installation, you must create a StorageClass for Calico Enterprise.
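As a sketch only, on an AWS cluster with the EBS CSI driver a StorageClass could look like the following; the name tigera-elasticsearch and the parameters are assumptions here, so check the Calico Enterprise storage documentation for the exact requirements of your version.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tigera-elasticsearch
provisioner: ebs.csi.aws.com
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer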
Install the Calico Enterprise license
To use Calico Enterprise, you must install the license provided to you by your Tigera support representative. Before applying the license, wait until the Tigera API server is ready. Check its status with the following command:
watch oc get tigerastatus
Wait until the apiserver shows a status of Available.
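The output is similar to the following while the components come up (values are illustrative only):
NAME        AVAILABLE   PROGRESSING   DEGRADED   SINCE
apiserver   True        False         False      2m
calico      True        False         False      5m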
After the Tigera API server is ready, apply the license:
oc create -f </path/to/license.yaml>
Install Calico Enterprise resources
Apply the custom resources for enterprise features.
oc create -f https://downloads.tigera.io/ee/v3.19.4/manifests/ocp/tigera-enterprise-resources.yaml
Apply the Calico Enterprise manifests for the Prometheus operator.
oc create -f https://downloads.tigera.io/ee/v3.19.4/manifests/ocp/tigera-prometheus-operator.yaml
You can now monitor progress with the following command:
watch oc get tigerastatus
When it shows all components with status Available, proceed to the next step.
(Optional) Apply the full CRDs including descriptions.
oc apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.19.4/manifests/operator-crds.yaml
Next steps
Recommended
- Configure access to Calico Enterprise Manager UI
- Authentication quickstart
- Configure your own identity provider
Recommended - Networking
- The default networking uses IP in IP encapsulation with BGP routing. For all networking options, see Determine best networking option.
Recommended - Security