
Install Calico on non-cluster hosts and VMs

Big picture

Secure hosts and virtual machines (VMs) running outside of Kubernetes by installing Calico Enterprise.

Value

Calico Enterprise can also be used to protect hosts and VMs running outside of a Kubernetes cluster. In many cases, these hosts run applications and workloads that cannot easily be containerized. Calico Enterprise lets you protect and gain visibility into these non-cluster hosts using the same robust Calico network policy that you use for pods.

Concepts

Non-cluster hosts and host endpoints

A non-cluster host or VM is a computer running an application that is not part of a Kubernetes cluster. Calico Enterprise enables you to protect these hosts and VMs using the same Calico network policy that you use for workloads running inside a Kubernetes cluster. It also generates flow logs that provide visibility into the communication between the host or VM and other endpoints in your environment.

In the following diagram, a Kubernetes cluster is running Calico Enterprise with networking (for pod-to-pod communication) and network policy; the non-cluster host uses Calico network policy for host protection and to generate flow logs for observability.

[Figure: a non-cluster host running Calico Enterprise for host protection and flow logs, alongside a Kubernetes cluster running Calico Enterprise networking and network policy]

For non-cluster hosts and VMs, you can secure host interfaces using host endpoints. Host endpoints can have labels that work the same as labels on pods/workload endpoints in Kubernetes. The advantage is that you can write network policy rules that apply to both workload endpoints and host endpoints using label selectors, where each selector can refer to either type (or a mix of the two). For example, you can easily write a global policy that applies to every host, VM, or pod that is running Calico.
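
As an illustration, here is a minimal sketch of such a policy: a GlobalNetworkPolicy that allows inbound SSH to every endpoint (host or workload) from endpoints labeled role: admin. The policy name, label, and port are assumptions for the example, not values from this guide.

    apiVersion: projectcalico.org/v3
    kind: GlobalNetworkPolicy
    metadata:
      name: allow-ssh-from-admins
    spec:
      # all() selects every endpoint Calico manages: workload endpoints and host endpoints alike
      selector: all()
      ingress:
      - action: Allow
        protocol: TCP
        source:
          # illustrative label; use whatever labels your endpoints carry
          selector: role == 'admin'
        destination:
          ports:
          - 22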

To learn how to restrict traffic to/from hosts and VMs using Calico network policy, see Protect hosts and VMs.

Before you begin

Required

  • Kubernetes API datastore is up and running and is accessible from the host

    If Calico Enterprise is installed on a cluster, you already have a datastore.

  • Non-cluster host or VM meets Calico Enterprise system requirements

    Ensure that your node OS includes the ipset and conntrack kernel dependencies.
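
    A quick way to check on RHEL (assuming the usual package names; conntrack is typically provided by conntrack-tools):

    rpm -q ipset conntrack-tools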

How to

Set up your Kubernetes cluster to work with a non-cluster host or VM

  1. Apply the NonClusterHost custom resource. The endpoint field is required. It defines the remote endpoint for the non-cluster host log forwarder.

    kubectl create -f - <<EOF
    apiVersion: operator.tigera.io/v1
    kind: NonClusterHost
    metadata:
      name: tigera-secure
    spec:
      endpoint: <endpoint for non-cluster host log forwarder>
    EOF

    Wait until the manager shows a status of Available, then proceed to the next step.
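
    One way to watch for this, assuming your cluster exposes the operator's tigerastatus resource:

    kubectl get tigerastatus manager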

  2. Obtain the token for the tigera-noncluster-host service account.

    kubectl get secret -n calico-system tigera-noncluster-host -o jsonpath='{.data.token}' | base64 --decode
  3. Use a text editor to create a kubeconfig file for the non-cluster host or VM.

    apiVersion: v1
    kind: Config
    current-context: noncluster-hosts
    preferences: {}
    clusters:
    - cluster:
        certificate-authority-data: <your cluster certificate>
        server: <your cluster server>
      name: noncluster-hosts
    contexts:
    - context:
        cluster: noncluster-hosts
        user: tigera-noncluster-host
      name: noncluster-hosts
    users:
    - name: tigera-noncluster-host
      user:
        token: <token from previous step>

    You can take the cluster information (the certificate-authority-data and server values) from an existing kubeconfig.
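
    For example, these kubectl commands print those values for your current context:

    kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.server}'
    kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'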

  4. Create a HostEndpoint resource for each non-cluster host or VM. The node and expectedIPs fields are required and must match the hostname and the expected interface IP addresses, as in the sketch below.
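
    A minimal sketch; the endpoint name, labels, node name, interface, and addresses are placeholders for illustration:

    apiVersion: projectcalico.org/v3
    kind: HostEndpoint
    metadata:
      name: my-host-eth0
      labels:
        environment: production
    spec:
      node: my-host        # must match the hostname of the non-cluster host
      interfaceName: eth0
      expectedIPs:
      - 192.168.1.100      # the IP addresses you expect on this interface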

  5. Follow the Configure access to the Calico Enterprise web console page to set up access to the web console.

Set up your non-cluster host or VM

  1. Configure the Calico RHEL yum/dnf repositories.

    • If your non-cluster host or VM has RHEL 8 installed, then run the following command:

      curl https://downloads.tigera.io/ee/rpms/v3.20/calico_rhel8.repo -o /etc/yum.repos.d/calico_rhel8.repo
    • If your non-cluster host or VM has RHEL 9 installed, then run the following command:

      curl https://downloads.tigera.io/ee/rpms/v3.20/calico_rhel9.repo -o /etc/yum.repos.d/calico_rhel9.repo

      Only Red Hat Enterprise Linux 8 and 9 on x86-64 are supported in this version of Calico Enterprise.

  2. Install Calico node and fluent-bit log forwarder packages.

    • Use dnf to install the calico-node and calico-fluent-bit packages:

      dnf install calico-node calico-fluent-bit
  3. Copy the kubeconfig file created in step 3 of the cluster setup to /etc/calico/kubeconfig on the host, and change its ownership to calico:calico, as shown below.
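
    For example, assuming the file was staged on the host as ./kubeconfig:

    cp ./kubeconfig /etc/calico/kubeconfig
    chown calico:calico /etc/calico/kubeconfig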

  4. Start Calico node and log forwarder.

    systemctl enable --now calico-node.service
    systemctl enable --now calico-fluent-bit.service

    You can configure the Calico node by tuning the environment variables defined in the /etc/calico/calico-node/calico-node.env file. For more information, see the Felix configuration reference.
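
    For example, calico-node.env might contain entries like the following; the values are illustrative, and KUBECONFIG is an assumption based on the path used in step 3 (see the Felix configuration reference for the full list of variables):

    # Kubeconfig used to reach the Kubernetes API datastore
    KUBECONFIG=/etc/calico/kubeconfig
    # Felix log verbosity on the console (Debug, Info, Warning, Error)
    FELIX_LOGSEVERITYSCREEN=Info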

Configure hosts to communicate with your Kubernetes cluster

When you run Calico Enterprise in network policy-only mode (that is, with another CNI), you must ensure that the non-cluster host or VM can communicate directly with your Kubernetes cluster. Here are some vendor-specific tips:

AWS

  • For hosts to communicate with your Kubernetes cluster, the node must be in the same VPC as nodes in your Kubernetes cluster, and must use the AWS VPC CNI plugin (used by default in EKS).
  • The Kubernetes cluster security group needs to allow traffic from your host endpoint. Make sure that an inbound rule is set so that traffic from your host endpoint node is allowed.
  • For a non-cluster host to communicate with an EKS cluster, the correct IAM roles must be configured.
  • You also need to provide authentication to your Kubernetes cluster using aws-iam-authenticator and the AWS CLI; see the sketch after this list.
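
For instance, assuming the AWS CLI is already configured with an IAM role that has access to the cluster (the cluster name my-cluster is a placeholder), you can generate kubeconfig credentials and verify connectivity:

    aws eks update-kubeconfig --name my-cluster
    kubectl get nodes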

GKE

For hosts to communicate with your Kubernetes cluster directly, you must make the host reachable/routable from the cluster; this is not set up by default with VPC-native network routing.

Additional resources