Calico Enterprise 3.21 (latest) documentation

Install Calico Enterprise on a Charmed Kubernetes cluster

This guide describes how to install Calico Enterprise on a Charmed Kubernetes cluster.

Before you begin

CNI support

  • Calico CNI for networking with Calico Enterprise network policy


Required

Prepare a compatible cluster for Calico Enterprise using a modified bundle file​

For the best results, you should create a new Charmed Kubernetes cluster without a CNI, and then install Calico Enterprise on that cluster. This ensures proper configuration and compatibility for a smooth installation process.

By default, Charmed Kubernetes clusters include a managed version of Calico Open Source. Migrating from this managed version of Calico Open Source to Calico Enterprise is not supported.

To create a Charmed Kubernetes cluster without a CNI, you can customize your deployment by using a bundle overlay file. See the Charmed Kubernetes documentation for more information on this installation method.
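If I read Juju's overlay semantics correctly, the calico application can also be dropped through such an overlay instead of editing a copy of the bundle by hand: an application mapped to an empty value in an overlay is removed from the base bundle, together with its relations. A sketch (the overlay file name is illustrative):

```yaml
# no-calico-overlay.yaml (hypothetical name)
# Mapping an application to an empty value removes it from the base
# bundle; relations that reference it are dropped as well.
applications:
  calico:
```

The overlay would then be passed at deploy time with juju deploy <bundle> --overlay ./no-calico-overlay.yaml. The steps below instead edit a copy of the bundle directly, which makes the final result easier to review.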

  1. Copy one of the default bundle files from the Charmed Kubernetes GitHub releases.

    For example, to get the default bundle for the Charmed Kubernetes 1.33 release:

    curl -o charmed-kubernetes-bundle.yaml -L https://raw.githubusercontent.com/charmed-kubernetes/bundle/refs/heads/main/releases/1.33/bundle.yaml
    Example of default Charmed Kubernetes bundle file

    An example of a default bundle file with the calico charm:

    description: A highly-available, production-grade Kubernetes cluster.
    docs: https://discourse.charmhub.io/t/charmed-kubernetes-bundle/14447
    issues: https://bugs.launchpad.net/charmed-kubernetes-bundles
    series: noble
    source: https://github.com/charmed-kubernetes/bundle
    website: https://ubuntu.com/kubernetes/charmed-k8s
    name: charmed-kubernetes
    applications:
      calico:
        annotations:
          gui-x: '475'
          gui-y: '605'
        channel: 1.33/stable
        charm: calico
        options:
          vxlan: Always
      containerd:
        annotations:
          gui-x: '475'
          gui-y: '800'
        channel: 1.33/stable
        charm: containerd
      easyrsa:
        annotations:
          gui-x: '90'
          gui-y: '420'
        channel: 1.33/stable
        charm: easyrsa
        constraints: cores=1 mem=4G root-disk=16G
        num_units: 1
      etcd:
        annotations:
          gui-x: '800'
          gui-y: '420'
        channel: 1.33/stable
        charm: etcd
        constraints: cores=2 mem=8G root-disk=16G
        num_units: 3
        options:
          channel: 3.4/stable
      kubeapi-load-balancer:
        annotations:
          gui-x: '450'
          gui-y: '250'
        channel: 1.33/stable
        charm: kubeapi-load-balancer
        constraints: cores=1 mem=4G root-disk=16G
        expose: true
        num_units: 1
      kubernetes-control-plane:
        annotations:
          gui-x: '800'
          gui-y: '850'
        channel: 1.33/stable
        charm: kubernetes-control-plane
        constraints: cores=2 mem=8G root-disk=16G
        num_units: 2
        options:
          channel: 1.33/stable
      kubernetes-worker:
        annotations:
          gui-x: '90'
          gui-y: '850'
        channel: 1.33/stable
        charm: kubernetes-worker
        constraints: cores=2 mem=8G root-disk=16G
        expose: true
        num_units: 3
        options:
          channel: 1.33/stable
    relations:
    - - kubernetes-control-plane:loadbalancer-external
      - kubeapi-load-balancer:lb-consumers
    - - kubernetes-control-plane:loadbalancer-internal
      - kubeapi-load-balancer:lb-consumers
    - - kubernetes-control-plane:kube-control
      - kubernetes-worker:kube-control
    - - kubernetes-control-plane:certificates
      - easyrsa:client
    - - etcd:certificates
      - easyrsa:client
    - - kubernetes-control-plane:etcd
      - etcd:db
    - - kubernetes-worker:certificates
      - easyrsa:client
    - - kubeapi-load-balancer:certificates
      - easyrsa:client
    - - calico:etcd
      - etcd:db
    - - calico:cni
      - kubernetes-control-plane:cni
    - - calico:cni
      - kubernetes-worker:cni
    - - containerd:containerd
      - kubernetes-worker:container-runtime
    - - containerd:containerd
      - kubernetes-control-plane:container-runtime
  2. Remove all references to the calico charm from the bundle file:

    1. Remove the calico application from the applications section.
      Default text to be removed
      calico:
        annotations:
          gui-x: '475'
          gui-y: '605'
        channel: 1.33/stable
        charm: calico
        options:
          vxlan: Always
    2. Remove all calico relations from the relations section.
      Default text to be removed
      - - calico:etcd
        - etcd:db
      - - calico:cni
        - kubernetes-control-plane:cni
      - - calico:cni
        - kubernetes-worker:cni

    Your default bundle file should now look like this:

    Example of modified Charmed Kubernetes bundle (no CNI)

    An example of the modified bundle file without the calico charm:

    description: A highly-available, production-grade Kubernetes cluster.
    docs: https://discourse.charmhub.io/t/charmed-kubernetes-bundle/14447
    issues: https://bugs.launchpad.net/charmed-kubernetes-bundles
    series: noble
    source: https://github.com/charmed-kubernetes/bundle
    website: https://ubuntu.com/kubernetes/charmed-k8s
    name: charmed-kubernetes
    applications:
      containerd:
        annotations:
          gui-x: '475'
          gui-y: '800'
        channel: 1.33/stable
        charm: containerd
      easyrsa:
        annotations:
          gui-x: '90'
          gui-y: '420'
        channel: 1.33/stable
        charm: easyrsa
        constraints: cores=1 mem=4G root-disk=16G
        num_units: 1
      etcd:
        annotations:
          gui-x: '800'
          gui-y: '420'
        channel: 1.33/stable
        charm: etcd
        constraints: cores=2 mem=8G root-disk=16G
        num_units: 3
        options:
          channel: 3.4/stable
      kubeapi-load-balancer:
        annotations:
          gui-x: '450'
          gui-y: '250'
        channel: 1.33/stable
        charm: kubeapi-load-balancer
        constraints: cores=1 mem=4G root-disk=16G
        expose: true
        num_units: 1
      kubernetes-control-plane:
        annotations:
          gui-x: '800'
          gui-y: '850'
        channel: 1.33/stable
        charm: kubernetes-control-plane
        constraints: cores=2 mem=8G root-disk=16G
        num_units: 2
        options:
          channel: 1.33/stable
      kubernetes-worker:
        annotations:
          gui-x: '90'
          gui-y: '850'
        channel: 1.33/stable
        charm: kubernetes-worker
        constraints: cores=2 mem=8G root-disk=16G
        expose: true
        num_units: 3
        options:
          channel: 1.33/stable
    relations:
    - - kubernetes-control-plane:loadbalancer-external
      - kubeapi-load-balancer:lb-consumers
    - - kubernetes-control-plane:loadbalancer-internal
      - kubeapi-load-balancer:lb-consumers
    - - kubernetes-control-plane:kube-control
      - kubernetes-worker:kube-control
    - - kubernetes-control-plane:certificates
      - easyrsa:client
    - - etcd:certificates
      - easyrsa:client
    - - kubernetes-control-plane:etcd
      - etcd:db
    - - kubernetes-worker:certificates
      - easyrsa:client
    - - kubeapi-load-balancer:certificates
      - easyrsa:client
    - - containerd:containerd
      - kubernetes-worker:container-runtime
    - - containerd:containerd
      - kubernetes-control-plane:container-runtime
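The manual edits in step 2 can also be scripted. The sketch below is stdlib-only Python that strips calico entries by indentation rather than with a real YAML parser, so treat it as illustrative and check its output against the expected no-CNI bundle:

```python
def strip_calico(bundle_text: str) -> str:
    """Remove the calico application block and every relation pair that
    mentions calico from a Charmed Kubernetes bundle, using indentation
    only (no YAML library required)."""
    lines = bundle_text.splitlines()
    out, i = [], 0
    while i < len(lines):
        line = lines[i]
        stripped = line.strip()
        indent = len(line) - len(line.lstrip())
        # Skip the "calico:" application block and everything nested under it.
        if stripped == "calico:":
            i += 1
            while i < len(lines) and (
                not lines[i].strip()
                or len(lines[i]) - len(lines[i].lstrip()) > indent
            ):
                i += 1
            continue
        # Skip a two-line relation pair if either endpoint mentions calico.
        if stripped.startswith("- - ") and (
            "calico:" in line
            or (i + 1 < len(lines) and "calico:" in lines[i + 1])
        ):
            i += 2
            continue
        out.append(line)
        i += 1
    return "\n".join(out) + "\n"


# Small embedded sample for demonstration; in practice you would read
# charmed-kubernetes-bundle.yaml and write the filtered result back.
SAMPLE = """\
applications:
  calico:
    charm: calico
  etcd:
    charm: etcd
relations:
- - calico:etcd
  - etcd:db
- - containerd:containerd
  - kubernetes-worker:container-runtime
"""

print(strip_calico(SAMPLE))
```

Because the filter is indentation-based, always diff the result against the original bundle before deploying it.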

Set up Juju and deploy a cluster without a CNI

  1. Configure juju with a default credential by adding a new credential or using an existing credential:

    juju add-credential <credential-name>
  2. Create the controller with a unique name and use the credential created in Step 1:

    juju bootstrap <controller-name> --credential <credential-name>
  3. Create the model with a unique name:

    juju add-model <model-name>
  4. Create the Charmed Kubernetes cluster by specifying the modified bundle file:

    juju deploy ./charmed-kubernetes-bundle.yaml
  5. If the kubernetes-control-plane and kubernetes-worker applications are in waiting status because no CNI is installed, set ignore-missing-cni=true on both charms:

    juju config kubernetes-control-plane ignore-missing-cni=true
    juju config kubernetes-worker ignore-missing-cni=true
    note

    The ignore-missing-cni=true configuration allows the kubernetes-control-plane and kubernetes-worker charms to become ready without waiting for a CNI plugin to be installed, since Calico Enterprise provides its own.

  6. To let Calico Enterprise take care of setting up the CNI, the kubernetes-control-plane charm must also allow privileged pods:

    juju config kubernetes-control-plane allow-privileged=true
    note

    The allow-privileged=true configuration enables privileged containers, which are required so that Calico Enterprise can set up the CNI by deploying its own calico-node daemonset.

  7. Ensure the applications and units are active in the model by running:

    juju status

    The Charmed Kubernetes cluster should be healthy within an hour.

    note

    It is expected that the kubernetes-control-plane application will be in waiting status because it is waiting for kube-system pods to start. All other applications should be active before you proceed to install Calico Enterprise. Example status:

    App                       Version  Status   Scale  Charm                     Channel      Rev  Exposed  Message
    containerd                1.6.38   active   5      containerd                1.33/stable  90   no       Container runtime available
    easyrsa                   v3.0.9   active   1      easyrsa                   1.33/stable  74   no       Certificate Authority connected.
    etcd                      3.4.37   active   3      etcd                      1.33/stable  788  no       Healthy with 3 known peers
    kubeapi-load-balancer     1.18.0   active   1      kubeapi-load-balancer     1.33/stable  196  yes      Ready
    kubernetes-control-plane  1.33.x   waiting  2      kubernetes-control-plane  1.33/stable  652  no       Waiting for 3 kube-system pods to start
    kubernetes-worker         1.33.x   active   3      kubernetes-worker         1.33/stable  369  yes      Ready
  8. Ensure the Charmed Kubernetes model and its applications are stable by running:

    juju wait-for model <model-name> --query='life=="alive" && status=="available"'
  9. Ensure the Charmed Kubernetes applications are stable by running:

    applications=("easyrsa" "containerd" "etcd" "kubernetes-worker" "kubeapi-load-balancer")
    for app in "${applications[@]}"; do
      juju wait-for application "$app"
    done
  10. Get the kubeconfig from the kubernetes-control-plane application by running:

    juju scp kubernetes-control-plane/0:config kubeconfig
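The status check in step 7 can also be automated. Below is a stdlib-only Python sketch that flags applications whose Status column is not active, assuming the default tabular output of juju status; the column splitting is naive and relies on Message being the last column:

```python
def inactive_apps(app_table: str) -> list[str]:
    """Return applications whose Status column is not 'active'.

    app_table is the App section of plain `juju status` output; only the
    Message column may contain spaces, so whitespace splitting is safe
    for the columns before it.
    """
    lines = app_table.strip().splitlines()
    header = lines[0].split()
    app_col, status_col = header.index("App"), header.index("Status")
    return [
        cols[app_col]
        for cols in (line.split() for line in lines[1:])
        if cols and cols[status_col] != "active"
    ]


# Embedded sample for illustration; with a live model you would feed in
# the App table produced by `juju status`.
TABLE = """\
App                       Version  Status   Scale  Charm                     Channel      Rev  Exposed  Message
etcd                      3.4.37   active   3      etcd                      1.33/stable  788  no       Healthy with 3 known peers
kubernetes-control-plane  1.33.x   waiting  2      kubernetes-control-plane  1.33/stable  652  no       Waiting for 3 kube-system pods to start
"""

print(inactive_apps(TABLE))  # → ['kubernetes-control-plane']
```

Remember that kubernetes-control-plane is expected to stay in waiting status at this point, so its appearance in the output is not an error.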

Install Calico Enterprise

caution

For Charmed Kubernetes clusters, you cannot use AWS EBS storage classes. You must configure an alternative storage solution such as local storage or another compatible storage provider.

  • Configure storage for Calico Enterprise.
    1. Install the Tigera Operator and custom resource definitions.

      kubectl create -f https://downloads.tigera.io/ee/v3.21.2/manifests/operator-crds.yaml
      kubectl create -f https://downloads.tigera.io/ee/v3.21.2/manifests/tigera-operator.yaml
    2. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.

      note
      If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.
      kubectl create -f https://downloads.tigera.io/ee/v3.21.2/manifests/tigera-prometheus-operator.yaml
    3. Install your pull secret.

      If pulling images directly from quay.io/tigera, you will likely want to use the credentials provided to you by your Tigera support representative. If using a private registry, use your private registry credentials.

      kubectl create secret generic tigera-pull-secret \
      --type=kubernetes.io/dockerconfigjson -n tigera-operator \
      --from-file=.dockerconfigjson=<path/to/pull/secret>
    4. (Optional) If your cluster architecture requires any custom Calico Enterprise resources to function at startup, install them now using calicoctl.
    5. (Optional) To enable the optional compliance and packet capture features during installation, download and review the custom-resources.yaml file, uncomment the necessary custom resources, and use the edited file for installation.

      curl -O -L https://downloads.tigera.io/ee/v3.21.2/manifests/custom-resources.yaml

    6. Install the Tigera custom resources. For more information on configuration options available, see the installation reference.

      kubectl create -f https://downloads.tigera.io/ee/v3.21.2/manifests/custom-resources.yaml
    7. You can now monitor progress with the following command:

      watch kubectl get tigerastatus

      Wait until the apiserver shows a status of Available, then proceed to the next section.
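As a reference for step 5, an uncommented compliance entry in custom-resources.yaml typically looks like the fragment below (a sketch; verify the exact contents against the file you downloaded):

```yaml
# Enables the compliance reporting feature when applied at install time.
apiVersion: operator.tigera.io/v1
kind: Compliance
metadata:
  name: tigera-secure
```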

Install Calico Enterprise license

Install the Calico Enterprise license provided to you by Tigera.

    kubectl create -f </path/to/license.yaml>
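For orientation, the license file provided by Tigera is a LicenseKey resource; its shape is roughly the following sketch, with the actual token and certificate values supplied by Tigera:

```yaml
apiVersion: projectcalico.org/v3
kind: LicenseKey
metadata:
  name: default
spec:
  token: <provided-by-Tigera>
  certificate: <provided-by-Tigera>
```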

You can now monitor progress with the following command:

    watch kubectl get tigerastatus

Next steps

Additional resources