
Google Kubernetes Engine (GKE)

Big picture

Install Calico Enterprise on a GKE managed Kubernetes cluster.

Before you begin

CNI support

GKE CNI with Calico Enterprise network policy:

The geeky details of what you get: Policy, IPAM, CNI, Overlay, Routing, Datastore

Required

  • A compatible GKE cluster

  • Cluster has these Networking settings (an example cluster creation command follows this list):

    • Intranode visibility is enabled
    • Network policy is disabled
    • Dataplane V2 is disabled
    • GKE control plane access to TCP ports 5443, 8080, and 9090

      The GKE control plane must be able to access the Calico Enterprise API server, which runs with pod networking on TCP ports 5443 and 8080, and the Calico Enterprise Prometheus server, which runs with pod networking on TCP port 9090. For multi-zone clusters and clusters with the "master IP range" configured, you will need to add a GCP firewall rule to allow access to those ports from the control plane nodes (see the example command after this list).
  • User account has IAM permissions

    Verify your user account has IAM permissions to create Kubernetes ClusterRoles, ClusterRoleBindings, Deployments, Service Accounts, and Custom Resource Definitions. The easiest way to grant permissions is to assign the "Kubernetes Service Cluster Admin Role" to your user account. For help, see GKE access control. (A quick permission check with kubectl is shown after this list.)

    note

    By default, GCP users often have permissions to create basic Kubernetes resources (such as Pods and Services) but lack the permissions to create ClusterRoles and other admin resources. Even if you can create basic resources, it's worth verifying that you can create admin resources before continuing.

  • Cluster meets system requirements

  • A Tigera license key and credentials

  • Install kubectl
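
The commands below are illustrative sketches rather than part of the official install flow; <cluster-name>, <compute-zone>, <vpc-network>, <master-ipv4-cidr>, and <node-network-tag> are placeholders for your own values.

To create a cluster that satisfies the Networking settings above, enable intranode visibility and leave the network policy add-on and Dataplane V2 at their defaults (disabled):

gcloud container clusters create <cluster-name> \
  --zone <compute-zone> \
  --enable-intra-node-visibility

To allow the GKE control plane to reach TCP ports 5443, 8080, and 9090 on clusters with a "master IP range", a firewall rule along these lines should work:

gcloud compute firewall-rules create allow-control-plane-to-calico \
  --network <vpc-network> \
  --source-ranges <master-ipv4-cidr> \
  --target-tags <node-network-tag> \
  --allow tcp:5443,tcp:8080,tcp:9090

To spot-check that your account can create cluster-wide admin resources before you begin:

kubectl auth can-i create clusterroles
kubectl auth can-i create customresourcedefinitions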

How to

  1. Install Calico Enterprise
  2. Install the Calico Enterprise license

Install Calico Enterprise

  1. Install the Tigera operator and custom resource definitions.

    kubectl create -f https://downloads.tigera.io/ee/v3.18.2/manifests/tigera-operator.yaml
  2. Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.

    note
    If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.
    kubectl create -f https://downloads.tigera.io/ee/v3.18.2/manifests/tigera-prometheus-operator.yaml
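    If you are reusing an existing Prometheus operator per the note above, one way to confirm its version is to inspect the operator image tag. This sketch assumes the operator deployment carries the conventional app.kubernetes.io/name label:
    kubectl get deployments --all-namespaces \
    -l app.kubernetes.io/name=prometheus-operator \
    -o jsonpath='{.items[*].spec.template.spec.containers[*].image}'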
  3. Install your pull secret.

    If pulling images directly from quay.io/tigera, you will likely want to use the credentials provided to you by your Tigera support representative. If using a private registry, use your private registry credentials instead.

    kubectl create secret generic tigera-pull-secret \
    --type=kubernetes.io/dockerconfigjson -n tigera-operator \
    --from-file=.dockerconfigjson=<path/to/pull/secret>

    For the Prometheus operator, create the pull secret in the tigera-prometheus namespace and then patch the deployment.

    kubectl create secret generic tigera-pull-secret \
    --type=kubernetes.io/dockerconfigjson -n tigera-prometheus \
    --from-file=.dockerconfigjson=<path/to/pull/secret>
    kubectl patch deployment -n tigera-prometheus calico-prometheus-operator \
    -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name": "tigera-pull-secret"}]}}}}'
  4. Install any extra Calico resources needed at cluster start using calicoctl.
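
    A minimal sketch, assuming calicoctl is installed and configured, and that the extra resources live in a manifest (the name extra-resources.yaml is a placeholder):

    calicoctl apply -f extra-resources.yaml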

  5. Install the Tigera custom resources. For more information on configuration options available in this manifest, see the installation reference.

    kubectl create -f https://downloads.tigera.io/ee/v3.18.2/manifests/custom-resources.yaml

    You can now monitor progress with the following command:

    watch kubectl get tigerastatus

    Wait until the apiserver shows a status of Available, then proceed to the next section.

Install the Calico Enterprise license

To use Calico Enterprise, you must install the license provided to you by Tigera.

kubectl create -f </path/to/license.yaml>

You can now monitor progress with the following command:

watch kubectl get tigerastatus

Next steps