
Install network policy on non-cluster hosts

Big picture

Secure non-cluster hosts by installing Calico Enterprise network policy.

Value

Not all hosts in your environment run pods/workloads. You may have physical machines or legacy applications that you cannot move into a Kubernetes cluster, but still need to securely communicate with pods in your cluster. Calico Enterprise lets you enforce policy on these non-cluster hosts using the same robust Calico Enterprise network policy that you use for pods. This solution can also be used to protect bare metal/physical servers that run Kubernetes clusters instead of VMs.

Concepts

Non-cluster hosts and host endpoints

A non-cluster host is a computer running an application that is not part of a Kubernetes cluster. Even so, you can protect it using the same Calico Enterprise network policy that you use for your Kubernetes cluster. In the following diagram, the Kubernetes cluster is running full Calico Enterprise with networking (for pod-to-pod communications) and network policy; the non-cluster host uses Calico Enterprise network policy only for host protection.

[Diagram: a Kubernetes cluster running Calico Enterprise networking and network policy, alongside a non-cluster host that runs Calico Enterprise network policy for host protection only]

For non-cluster hosts, you can secure host interfaces using host endpoints. Host endpoints can have labels, which work the same as labels on pods/workload endpoints. The advantage is that you can write network policy rules that apply to both workload endpoints and host endpoints using label selectors, where each selector can refer to either type (or a mix of the two). For example, you can write a cluster-wide policy for non-cluster hosts that is immediately applied to every host.
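
As a concrete sketch (the endpoint name, node name, interface, labels, and addresses below are illustrative), a host endpoint for a non-cluster host and a GlobalNetworkPolicy that selects it by label might look like this; the same selector also matches any workload endpoints carrying the same label. The resources are applied through the cluster's API with kubectl (or calicoctl):

kubectl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: my-host-eth0
  labels:
    environment: production        # label used by the policy selector below
spec:
  node: my-host                    # must match the node name Felix reports for this host
  interfaceName: eth0
  expectedIPs:
  - 192.168.1.100
---
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: production-allow-ssh
spec:
  selector: environment == 'production'    # matches host endpoints and workload endpoints alike
  ingress:
  - action: Allow
    protocol: TCP
    destination:
      ports:
      - 22
EOF

Keep in mind that once an interface is covered by a host endpoint, traffic that no policy or profile allows is dropped (apart from Calico's failsafe ports), so review the Protect hosts guide before applying host endpoints to a host you manage remotely.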

To learn how to restrict traffic to/from hosts using Calico Enterprise network policy, see Protect hosts.

Before you begin​

CNI support

Calico CNI for networking with Calico Enterprise network policy


Required

  • Kubernetes API datastore is up and running and is accessible from the host

    If Calico Enterprise is installed on a cluster, you already have a datastore.

  • Non-cluster host meets Calico Enterprise system requirements

    • Ensure that your node OS includes the ipset and conntrack kernel dependencies (see the quick check after this list)
    • Install Docker if you are using the container install option (rather than the binary install)
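
As a quick sanity check, something like the following confirms the dependencies are present on the host (the package names in the comments are typical, but vary by distribution):

# Verify the ipset and conntrack tools (and their kernel support) are available
ipset version
conntrack --version

# If either is missing, install it from your distribution's packages, for example:
#   Debian/Ubuntu:  sudo apt-get install -y ipset conntrack
#   RHEL/CentOS:    sudo yum install -y ipset conntrack-tools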

How to

Step 1: (Optional) Configure access for the non-cluster host

Calico Node needs a kubeconfig to access the Kubernetes API server. You can skip this step if you already have a kubeconfig ready to use.

  1. Create a service account

    SA_NAME=my-host
    kubectl create serviceaccount $SA_NAME -n calico-system -o yaml
  2. Create a secret for the service account

    note

    This step is needed if your Kubernetes cluster is version v1.24 or above. Prior to Kubernetes v1.24, this secret is created automatically.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Secret
    type: kubernetes.io/service-account-token
    metadata:
      name: $SA_NAME
      namespace: calico-system
      annotations:
        kubernetes.io/service-account.name: $SA_NAME
    EOF
  3. For Kubernetes v1.24+, use the following command to obtain the token for the secret associated with your host

    kubectl describe secret $SA_NAME -n calico-system

    For Kubernetes clusters prior to version v1.24, use the following command to retrieve your token:

    kubectl describe secret -n calico-system $(kubectl get serviceaccount -n calico-system $SA_NAME -o=jsonpath="{.secrets[0].name}")
  4. Use a text editor to create a kubeconfig file

    apiVersion: v1
    kind: Config

    users:
    - name: my-host
      user:
        token: <token from previous step>

    clusters:
    - cluster:
        certificate-authority-data: <your cluster certificate>
        server: <your cluster server>
      name: <your cluster name>

    contexts:
    - context:
        cluster: <your cluster name>
        user: my-host
      name: my-host

    current-context: my-host

    Take the cluster information from an already existing kubeconfig.
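
If it helps, the following sketch pulls those fields from your existing (admin) kubeconfig with kubectl; it assumes the admin kubeconfig is the current one and that the first cluster entry is the one you want:

# Illustrative: read the cluster name, API server URL, and CA data from the current kubeconfig
kubectl config view --raw -o jsonpath='{.clusters[0].name}'
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.server}'
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'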

Run the following two commands to create a cluster role with read-only access and a corresponding cluster role binding.

kubectl apply -f https://downloads.tigera.io/ee/v3.18.2/manifests/non-cluster-host-clusterrole.yaml
kubectl create clusterrolebinding $SA_NAME --serviceaccount=calico-system:$SA_NAME --clusterrole=non-cluster-host-read-only
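
To confirm that the binding took effect, a check like the following lists what the new kubeconfig can do (the file name here is an example):

# Illustrative: list the permissions granted to the service account behind the new kubeconfig
kubectl --kubeconfig=./my-host-kubeconfig auth can-i --list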
note

We include examples for systemd, but the commands can be applied to other init daemons such as upstart.

Step 2: Download and extract the binary

This step requires Docker, but it can be run from any machine with Docker installed. It doesn't have to be the host where you will run Calico Node (for example, your laptop is fine).

  1. Use the following command to download the cnx-node image.

    docker pull quay.io/tigera/cnx-node:v3.18.2
  2. Confirm that the image has loaded by typing docker images.

    REPOSITORY                TAG       IMAGE ID       CREATED         SIZE
    quay.io/tigera/cnx-node   v3.18.2   e07d59b0eb8a   2 minutes ago   42MB
  3. Create a temporary cnx-node container.

    docker create --name container quay.io/tigera/cnx-node:v3.18.2
  4. Copy the calico-node binary from the container to the local file system.

    docker cp container:/bin/calico-node cnx-node
  5. Delete the temporary container.

    docker rm container
  6. Set the extracted binary to be executable and owned by root.

    chmod +x cnx-node
    chown root:root cnx-node
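
As a quick sanity check before copying it anywhere, the binary can print its version (the -v flag is the one the open source calico-node binary uses, and is assumed to behave the same here):

# Illustrative: confirm the extracted binary runs and reports the expected version
./cnx-node -v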

Step 3: Copy the calico-node binary

Copy the binary from Step 2 to the target machine, using any means (scp, ftp, USB stick, etc.).
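
For example, with scp (the user, host name, and the /usr/local/bin destination used by the init examples below are illustrative):

# Illustrative: copy the binary to the target host and install it where the init scripts expect it
scp ./cnx-node user@my-host:/tmp/cnx-node
ssh user@my-host 'sudo install -o root -g root -m 0755 /tmp/cnx-node /usr/local/bin/cnx-node'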

Step 4: Create environment file

Use the following guidelines and sample file to define the environment variables for starting Calico on the host. For more help, see the Felix configuration reference.

For the Kubernetes datastore, set the following:

Variable     Configuration guidance
KUBECONFIG   Path to the kubeconfig file used to access the Kubernetes API server

Sample EnvironmentFile - save to /etc/calico/calico.env

DATASTORE_TYPE=kubernetes
# Path to the kubeconfig file from Step 1 (adjust to where you saved it on this host)
KUBECONFIG=/etc/calico/kubeconfig
CALICO_NODENAME=""
NO_DEFAULT_POOLS="true"
CALICO_IP=""
CALICO_IP6=""
CALICO_AS=""
CALICO_NETWORKING_BACKEND=bird
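
If you are creating the file directly on the target host, a sketch like the following works (paste the sample contents above between the markers):

# Illustrative: create /etc/calico/calico.env on the non-cluster host
sudo mkdir -p /etc/calico
sudo tee /etc/calico/calico.env > /dev/null <<'EOF'
# ...sample EnvironmentFile contents from above...
EOF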

Step 5: Start Felix

There are a few ways to start Felix: create a startup script for your init system, or configure and start it manually.

Felix should be started at boot by your init system and the init system must be configured to restart Felix if it stops. Felix relies on that behavior for certain configuration changes.

If your distribution uses systemd, then you could use the following unit file:

[Unit]
Description=Calico Felix agent
After=syslog.target network.target

[Service]
User=root
EnvironmentFile=/etc/calico/calico.env
ExecStartPre=/usr/bin/mkdir -p /var/run/calico
ExecStart=/usr/local/bin/cnx-node -felix
KillMode=process
Restart=on-failure
LimitNOFILE=32000

[Install]
WantedBy=multi-user.target
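
To install and enable that unit with systemd, a sketch like the following would typically be used (saving the unit as calico-felix.service matches the service name used in the start command below):

# Illustrative: install the unit file and have systemd start Felix now and at boot
sudo cp calico-felix.service /etc/systemd/system/calico-felix.service
sudo systemctl daemon-reload
sudo systemctl enable --now calico-felix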

Or, for upstart:

description "Felix (Calico agent)"
author "Project Calico Maintainers <maintainers@projectcalico.org>"

start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]

limit nofile 32000 32000

respawn
respawn limit 5 10

chdir /var/run

pre-start script
mkdir -p /var/run/calico
chown root:root /var/run/calico
end script

exec /usr/local/bin/cnx-node -felix

Start Felix

After you've configured Felix, start it via your init system.

service calico-felix start
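
Once started, you can confirm that Felix is running and watch its logs with your init system's tooling; for systemd, for example:

# Illustrative: check the service state and follow the Felix logs
sudo systemctl status calico-felix
sudo journalctl -u calico-felix -f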

Configure hosts to communicate with your Kubernetes cluster

When using Calico Enterprise in network policy-only mode, you must ensure that the non-cluster host can communicate directly with your Kubernetes cluster. Here are some vendor tips:

AWS

  • For hosts to communicate with your Kubernetes cluster, the host must be in the same VPC as the nodes in your Kubernetes cluster, and the cluster must use the AWS VPC CNI plugin (the default in EKS) so that pod IPs are directly routable.
  • The Kubernetes cluster security group must allow traffic from your host endpoint. Make sure that an inbound rule permits traffic from the non-cluster host.
  • For a non-cluster host to communicate with an EKS cluster, the correct IAM roles must be configured.
  • You also need to provide authentication to your Kubernetes cluster using aws-iam-authenticator and the AWS CLI (see the sketch after this list).
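
For example, a sketch of setting up that authentication from the non-cluster host (the cluster name and region are illustrative):

# Illustrative: generate a kubeconfig entry for the EKS cluster and confirm a token can be issued
aws eks update-kubeconfig --name my-eks-cluster --region us-west-2
aws-iam-authenticator token -i my-eks-cluster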

GKE

For hosts to communicate with your Kubernetes cluster directly, you must make the host directly reachable/routable; this is not set up by default with VPC-native network routing.

Additional resources