
Calico Enterprise for Windows on an OpenShift 4 cluster (manual install)

note

Currently, Calico Enterprise for Windows supports OpenShift versions only up to v4.5, because it requires the Windows Machine Config Bootstrapper binary (wmcb.exe) for adding Windows nodes to clusters. OpenShift v4.6+ does not support the Windows Machine Config Bootstrapper and instead uses the Red Hat Windows Machine Config Operator (WMCO), which does not correctly recognize Calico Enterprise networking in the cluster.

note

The manual method for installing Calico Enterprise for Windows is deprecated in favor of using the Operator and Windows HostProcess containers (HPC). Support for this method will be dropped in a future Calico Enterprise version.

Big picture​

Install an OpenShift 4 cluster on AWS with Calico Enterprise on Windows nodes using the manual installation method.

This guide augments the applicable steps in the OpenShift documentation to install Calico Enterprise for Windows.

Before you begin​

CNI support

Calico CNI for networking with Calico Enterprise network policy


How to​

  1. Create a configuration file for the OpenShift installer
  2. Update the configuration file to use Calico Enterprise
  3. Generate the install manifests
  4. Add an image pull secret
  5. Provide additional configuration
  6. Create the cluster
  7. Create a storage class
  8. Install the Calico Enterprise license
  9. Install Calico Enterprise resources
  10. Configure strict affinity
  11. Add Windows nodes to the cluster
  12. Get the administrator password
  13. Install Calico Enterprise for Windows
  14. Configure kubelet

Create a configuration file for the OpenShift installer​

First, create a staging directory for the installation. This directory will contain the configuration file, along with the cluster state files that the OpenShift installer creates:

mkdir openshift-tigera-install && cd openshift-tigera-install

Now run OpenShift installer to create a default configuration file:

openshift-install create install-config
note
See the OpenShift installer documentation for more information about the installer and any configuration changes required for your platform.

After the installer finishes, your staging directory will contain the configuration file install-config.yaml.

Update the configuration file to use Calico Enterprise​

Override the OpenShift networking to use Calico Enterprise and update the AWS instance types to meet the system requirements:

sed -i 's/\(OpenShiftSDN\|OVNKubernetes\)/Calico/' install-config.yaml
sed -i 's/platform: {}/platform:\n    aws:\n      type: m4.xlarge/g' install-config.yaml
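
After these edits, the relevant parts of install-config.yaml should look roughly like the following sketch (other fields omitted; the exact layout depends on your installer version):

networking:
  networkType: Calico
compute:
- name: worker
  platform:
    aws:
      type: m4.xlarge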

Generate the install manifests​

Now generate the Kubernetes manifests using your configuration file:

openshift-install create manifests

Download the Calico Enterprise manifests for OpenShift and add them to the generated manifests directory:

mkdir calico
wget -qO- https://downloads.tigera.io/ee/v3.18.3/manifests/ocp.tgz | tar xvz --strip-components=1 -C calico
cp calico/* manifests/

Edit the Installation custom resource manifest manifests/01-cr-installation.yaml so that it enables VXLAN and disables BGP. This is required for Calico Enterprise for Windows:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  variant: Calico
  calicoNetwork:
    bgp: Disabled
    ipPools:
      - blockSize: 26
        cidr: 10.128.0.0/14
        encapsulation: VXLAN
        natOutgoing: Enabled
        nodeSelector: all()

Add an image pull secret​

Update the contents of the secret with the image pull secret provided to you by your Tigera support representative.

For example, if the secret is located at ~/.docker/config.json, run the following commands.

SECRET=$(cat ~/.docker/config.json | tr -d '\n\r\t ' | base64 -w 0)
sed -i "s/SECRET/${SECRET}/" manifests/02-pull-secret.yaml

Provide additional configuration​

To provide additional configuration during installation (for example, BGP configuration or peers), use a Kubernetes ConfigMap with your desired Calico Enterprise resources. If you do not need to provide additional configuration, skip this section.

To include Calico Enterprise resources during installation, edit manifests/02-configmap-calico-resources.yaml in order to add your own configuration.

note

If you have a directory with the Calico Enterprise resources, you can create the file with the command:

kubectl create configmap -n tigera-operator calico-resources \
--from-file=<resource-directory> --dry-run -o yaml \
> manifests/02-configmap-calico-resources.yaml

With recent versions of kubectl, you must have a kubeconfig configured or add --server='127.0.0.1:443', even though it is not used.

note

If you have provided a calico-resources configmap and the tigera-operator pod fails to come up with Init:CrashLoopBackOff, check the output of the init-container with kubectl logs -n tigera-operator -l k8s-app=tigera-operator -c create-initial-resources.

Create the cluster​

Start the cluster creation with the following command and wait for it to complete.

openshift-install create cluster

Create a storage class​

Calico Enterprise requires storage for logs and reports. Before finishing the installation, you must create a StorageClass for Calico Enterprise.
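
For example, on this AWS cluster you could back the storage with EBS volumes. The following is a minimal sketch assuming the in-tree AWS EBS provisioner and gp2 volumes; Calico Enterprise expects a StorageClass named tigera-elasticsearch:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tigera-elasticsearch
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer

Save the manifest to a file and apply it with oc create -f <file>.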

Install the Calico Enterprise license​

In order to use Calico Enterprise, you must install the license provided to you by your Tigera support representative. Before applying the license, wait until the Tigera API server is ready with the following command:

watch oc get tigerastatus

Wait until the apiserver shows a status of Available.
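
The output will look similar to this once the API server is up (illustrative output only; your timings will differ):

NAME        AVAILABLE   PROGRESSING   DEGRADED   SINCE
apiserver   True        False         False      2m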

After the Tigera API server is ready, apply the license:

oc create -f </path/to/license.yaml>
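
As a quick check, you can list the LicenseKey resources through the Tigera API server (the resource is cluster-scoped; the name in your license file may vary):

oc get licensekeys.projectcalico.org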

Install Calico Enterprise resources​

Apply the custom resources for enterprise features.

oc create -f https://downloads.tigera.io/ee/v3.18.3/manifests/ocp/tigera-enterprise-resources.yaml

Apply the Calico Enterprise manifests for the Prometheus operator.

note
Complete this step only if you are using the Calico Enterprise Prometheus operator (including adding your own Prometheus operator). Skip this step if you bring your own Prometheus that you manage yourself.
oc create -f https://downloads.tigera.io/ee/v3.18.3/manifests/ocp/tigera-prometheus-operator.yaml

Create the pull secret in the tigera-prometheus namespace and then patch the Prometheus operator deployment. Use the image pull secret provided to you by your Tigera support representative.

oc create secret generic tigera-pull-secret \
--type=kubernetes.io/dockerconfigjson -n tigera-prometheus \
--from-file=.dockerconfigjson=<path/to/pull/secret>
oc patch deployment -n tigera-prometheus calico-prometheus-operator \
-p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name": "tigera-pull-secret"}]}}}}'

You can now monitor progress with the following command:

watch oc get tigerastatus

When it shows all components with status Available, proceed to the next step.

(Optional) Apply the full CRDs including descriptions.

oc apply --server-side --force-conflicts -f https://downloads.tigera.io/ee/v3.18.3/manifests/operator-crds.yaml

Configure strict affinity​

Next, install calicoctl and ensure strict affinity is true:

calicoctl ipam configure --strictaffinity=true
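
To verify the change, you can print the IPAM configuration; StrictAffinity should be reported as true:

calicoctl ipam show --show-configuration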

Add Windows nodes to the cluster​

Download the latest Windows Node Installer (WNI) binary wni that matches your OpenShift minor version.

note
For OpenShift 4.6, use the latest wni for OpenShift 4.5. A wni binary for OpenShift 4.6 is not published yet.

Next, determine the AMI ID corresponding to Windows Server 1903 (build 18317) or greater. wni defaults to Windows Server 2019 (build 10.0.17763), which does not include WinDSR support. One way to find a suitable image is to search for AMIs matching the string Windows_Server-1903-English-Core-ContainersLatest in the Amazon EC2 console; the command-line sketch below performs the same search.
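
A sketch of that search with the AWS CLI, which returns the newest matching image (assumes the AWS CLI is configured for the region you are installing into):

aws ec2 describe-images --owners amazon \
  --filters 'Name=name,Values=Windows_Server-1903-English-Core-ContainersLatest-*' \
  --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
  --output text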

Next, run wni to add a Windows node to your cluster. Replace AMI_ID, AWS_CREDENTIALS_PATH, AWS_KEY_NAME and AWS_PRIVATE_KEY_PATH with your values:

chmod u+x wni
./wni aws create \
--image-id AMI_ID \
--kubeconfig openshift-tigera-install/auth/kubeconfig \
--credentials AWS_CREDENTIALS_PATH \
--credential-account default \
--instance-type m5a.large \
--ssh-key AWS_KEY_NAME \
--private-key AWS_PRIVATE_KEY_PATH

An example of running the above steps:

$ chmod u+x wni
$ ./wni aws create \
> --kubeconfig openshift-tigera-install/auth/kubeconfig \
> --credentials ~/.aws/credentials \
> --credential-account default \
> --instance-type m5a.large \
> --ssh-key test-key \
> --private-key /home/user/.ssh/test-key.pem
2020/10/05 12:52:51 kubeconfig source: /home/user/openshift-tigera-install/auth/kubeconfig
2020/10/05 12:52:59 Added rule with port 5986 to the security groups of your local IP
2020/10/05 12:52:59 Added rule with port 22 to the security groups of your local IP
2020/10/05 12:52:59 Added rule with port 3389 to the security groups of your local IP
2020/10/05 12:52:59 Using existing Security Group: sg-06d1de22807d5dc48
2020/10/05 12:57:30 External IP: 52.35.12.231
2020/10/05 12:57:30 Internal IP: 10.0.90.193

Get the administrator password​

The wni binary writes the instance details to the file windows-node-installer.json. An example of the file:

{"InstanceIDs":["i-02e13d4cc76c13c83"],"SecurityGroupIDs":["sg-0a777565d64e1d2ef"]}

Use the instance ID from the file and the path of the private key used to create the instance to get the Administrator user's password:

aws ec2 get-password-data --instance-id <instance id> --priv-launch-key <aws private key path>
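
For example, a small sketch that extracts the instance ID with jq (assumes jq is installed and uses the example key path from above):

INSTANCE_ID=$(jq -r '.InstanceIDs[0]' windows-node-installer.json)
aws ec2 get-password-data --instance-id "$INSTANCE_ID" --priv-launch-key /home/user/.ssh/test-key.pem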

Install Calico Enterprise for Windows​

  1. Remote into the Windows node, open a PowerShell window, and prepare the directory for Kubernetes files.

    mkdir c:\k
  2. Copy the Kubernetes kubeconfig file (default location: openshift-tigera-install/auth/kubeconfig) to the file c:\k\config. (See the copy sketch after this list.)

  3. Download the PowerShell script, install-calico-windows.ps1.

    Invoke-WebRequest https://downloads.tigera.io/ee/v3.18.3/scripts/install-calico-windows.ps1 -OutFile c:\install-calico-windows.ps1
  4. Run the installation script, replacing the Kubernetes version with the version corresponding to your version of OpenShift.

    c:\install-calico-windows.ps1 -KubeVersion <kube version> -ServiceCidr 172.30.0.0/16 -DNSServerIPs 172.30.0.10
    note

    Get the Kubernetes version with oc version and use only the major, minor, and patch version numbers. For example, from a cluster that returns:

    $ oc version
    Client Version: 4.5.3
    Server Version: 4.5.14
    Kubernetes Version: v1.18.3+5302882

    You would use 1.18.3.

  5. Install and start the kube-proxy service by running the following PowerShell commands.

    C:\TigeraCalico\kubernetes\install-kube-services.ps1 -service kube-proxy
    Start-Service -Name kube-proxy
  6. Verify that the kube-proxy service is running.

    Get-Service -Name kube-proxy
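
For step 2 above, one way to copy the kubeconfig is with the OpenSSH scp client included in Windows Server 1903. This is a hypothetical sketch; replace user, LINUX_HOST, and the remote path with your values:

# Hypothetical example: copy the kubeconfig from the Linux machine where you
# ran openshift-install. LINUX_HOST and the remote path are placeholders.
scp user@LINUX_HOST:openshift-tigera-install/auth/kubeconfig c:\k\config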

Configure kubelet​

From the Windows node, download the Windows Machine Config Bootstrapper wmcb.exe that matches your OpenShift minor version from Windows Machine Config Bootstrapper releases. For example, for OpenShift 4.5.x:

curl https://github.com/openshift/windows-machine-config-bootstrapper/releases/download/v4.5.2-alpha/wmcb.exe -o c:\wmcb.exe
note
For OpenShift 4.6, use the latest wmcb.exe for OpenShift 4.5. A wmcb.exe binary for OpenShift 4.6 is not published yet.

Next, we will download the worker.ign file from the API server:

# Find a running API server pod and fetch the worker ignition config from it.
$apiServer = c:\k\kubectl --kubeconfig c:\k\config get po -n openshift-kube-apiserver -l apiserver=true --no-headers -o custom-columns=":metadata.name" | select -first 1
c:\k\kubectl --kubeconfig c:\k\config -n openshift-kube-apiserver exec $apiserver -- curl -ks https://localhost:22623/config/worker > c:\worker.ign
# Rejoin the file with LF line endings so the ignition file parses correctly.
((Get-Content c:\worker.ign) -join "`n") + "`n" | Set-Content -NoNewline c:\worker.ign

Next, we run wmcb to configure the kubelet:

c:\wmcb.exe initialize-kubelet --ignition-file worker.ign --kubelet-path c:\k\kubelet.exe
note
The kubelet configuration installed by Windows Machine Config Bootstrapper includes --register-with-taints="os=Windows:NoSchedule" which will require Windows pods to tolerate that taint.
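
For reference, a pod that should schedule onto these Windows nodes needs a toleration like the following minimal sketch in its spec:

tolerations:
- key: os
  operator: Equal
  value: Windows
  effect: NoSchedule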

Next, we make a copy of the kubeconfig because wmcb.exe expects the kubeconfig to be the file c:\k\kubeconfig. Then we configure kubelet to use Calico CNI:

cp c:\k\config c:\k\kubeconfig
c:\wmcb.exe configure-cni --cni-dir c:\k\cni --cni-config c:\k\cni\config\10-calico.conf

Finally, clean up the additional files created on the Windows node:

rm c:\k\kubeconfig,c:\wmcb.exe,c:\worker.ign

Exit the remote session to the Windows node and return to a shell on a Linux node.

We need to approve the CSRs generated by the kubelet's bootstrapping process. First, view the pending CSRs:

oc get csr

For example:

$ oc get csr
NAME        AGE     SIGNERNAME                                    REQUESTOR                                                                   CONDITION
csr-55brx   4m32s   kubernetes.io/kube-apiserver-client-kubelet   system:admin                                                                Approved,Issued
csr-bmnfd   4m30s   kubernetes.io/kubelet-serving                 system:node:ip-10-0-45-102.us-west-2.compute.internal                       Pending
csr-hwl89   5m1s    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending

To approve the pending CSRs:

oc get csr -o name | xargs oc adm certificate approve

For example:

$ oc get csr -o name | xargs oc adm certificate approve
certificatesigningrequest.certificates.k8s.io/csr-55brx approved
certificatesigningrequest.certificates.k8s.io/csr-bmnfd approved
certificatesigningrequest.certificates.k8s.io/csr-hwl89 approved

Finally, wait a minute or so and get all nodes:

$ oc get node -owide

If the Windows node registered itself successfully, it should appear in the list with a Ready status, ready to run Windows pods!
