# Install network policy on non-cluster hosts

## Big picture

Secure non-cluster hosts by installing Calico Enterprise network policy.
## Value

Not all hosts in your environment run pods/workloads. You may have physical machines or legacy applications that you cannot move into a Kubernetes cluster but that still need to communicate securely with pods in your cluster. Calico Enterprise lets you enforce policy on these non-cluster hosts using the same robust Calico Enterprise network policy that you use for pods. This solution can also be used to protect bare metal/physical servers that run Kubernetes clusters instead of VMs.
## Concepts

### Non-cluster hosts and host endpoints

A non-cluster host is a computer running an application that is not part of a Kubernetes cluster. You can protect these hosts using the same Calico Enterprise network policy that you use for your Kubernetes cluster. In the following diagram, the Kubernetes cluster runs full Calico Enterprise with networking (for pod-to-pod communication) and network policy; the non-cluster host uses Calico Enterprise network policy only for host protection.

For non-cluster hosts, you secure host interfaces using host endpoints. Host endpoints can have labels, which work the same as labels on pods/workload endpoints. The advantage is that you can write network policy rules that apply to both workload endpoints and host endpoints using label selectors, where each selector can refer to either type (or a mix of the two). For example, you can write a cluster-wide policy for non-cluster hosts that is immediately applied to every host.

To learn how to restrict traffic to/from hosts using Calico Enterprise network policy, see Protect hosts.
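For example, a host endpoint and a policy that selects it might look like the following sketch (the host name `my-host`, interface `eth0`, IP, and labels are illustrative, and it assumes kubectl can manage `projectcalico.org` resources via the Calico API server):

```bash
kubectl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: my-host-eth0
  labels:
    environment: production
spec:
  node: my-host          # must match the hostname Felix reports for this host
  interfaceName: eth0    # the host interface to protect
  expectedIPs:
    - 192.0.2.10         # the interface's IP(s), used for selector matching
---
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-ssh-to-production-hosts
spec:
  selector: environment == 'production'
  ingress:
    - action: Allow
      protocol: TCP
      destination:
        ports: [22]
EOF
```

Because the policy matches on labels, adding the same label to another host endpoint immediately brings that host under the same rules.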
## Before you begin

### CNI support

Calico CNI for networking with Calico Enterprise network policy.

The geeky details of what you get:

| Policy | IPAM | CNI | Overlay | Routing | Datastore |
|---|---|---|---|---|---|
**Required**

- Kubernetes API datastore is up and running and is accessible from the host.
  If Calico Enterprise is installed on a cluster, you already have a datastore.
- Non-cluster host meets Calico Enterprise system requirements:
  - Ensure that your node OS includes the `ipset` and `conntrack` kernel dependencies (a quick check is sketched after this list).
  - Install Docker if you are using the container install option (rather than the binary install).
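The check below is a sketch; the module and package names are the common ones but can vary by distro kernel build:

```bash
# Check that the ipset and conntrack kernel modules are present; try loading
# them if they are not
lsmod | grep -E 'ip_set|nf_conntrack' \
  || sudo modprobe -a ip_set nf_conntrack
# The userspace tools are also handy for debugging
command -v ipset conntrack || echo "install the ipset/conntrack-tools packages"
```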
## How to

There are two ways to install Calico Enterprise on a non-cluster host:

- Binary install
- Container install

### Binary install
#### Step 1: (Optional) Configure access for the non-cluster host

Calico node needs a kubeconfig to access the Kubernetes API server. You can skip this step if you already have a kubeconfig ready to use.
1. Create a service account.

   ```bash
   SA_NAME=my-host
   kubectl create serviceaccount $SA_NAME -n calico-system -o yaml
   ```
2. Create a secret for the service account.

   > **Note:** This step is needed only if your Kubernetes cluster is version v1.24 or above. Prior to Kubernetes v1.24, this secret is created automatically.

   ```bash
   kubectl apply -f - <<EOF
   apiVersion: v1
   kind: Secret
   type: kubernetes.io/service-account-token
   metadata:
     name: $SA_NAME
     namespace: calico-system
     annotations:
       kubernetes.io/service-account.name: $SA_NAME
   EOF
   ```
3. Obtain the token for the secret associated with your host.

   For Kubernetes v1.24+:

   ```bash
   kubectl describe secret $SA_NAME -n calico-system
   ```

   For Kubernetes clusters prior to v1.24:

   ```bash
   kubectl describe secret -n calico-system $(kubectl get serviceaccount -n calico-system $SA_NAME -o=jsonpath="{.secrets[0].name}")
   ```
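   If you just want the bare token value (for scripting the kubeconfig below), the following should work too; it reads the secret created above (v1.24+ naming) and base64-decodes it:

   ```bash
   TOKEN=$(kubectl get secret $SA_NAME -n calico-system -o jsonpath='{.data.token}' | base64 -d)
   echo "$TOKEN"
   ```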
4. Use a text editor to create a kubeconfig file.

   ```yaml
   apiVersion: v1
   kind: Config
   users:
     - name: my-host
       user:
         token: <token from previous step>
   clusters:
     - cluster:
         certificate-authority-data: <your cluster certificate>
         server: <your cluster server>
       name: <your cluster name>
   contexts:
     - context:
         cluster: <your cluster name>
         user: my-host
       name: my-host
   current-context: my-host
   ```

   Take the cluster information from an already existing kubeconfig. Note that the context's `cluster` field must match the cluster's `name`, and `current-context` must name the context defined above.
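   For example, you can read the server URL and CA data from your current kubeconfig with standard kubectl flags (`--flatten` inlines the certificate data instead of referencing a file):

   ```bash
   kubectl config view --minify --flatten -o jsonpath='{.clusters[0].cluster.server}'; echo
   kubectl config view --minify --flatten -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'; echo
   ```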
Run the following two commands to create a cluster role with read-only access and a corresponding cluster role binding:

```bash
kubectl apply -f https://downloads.tigera.io/ee/v3.19.4/manifests/non-cluster-host-clusterrole.yaml
kubectl create clusterrolebinding $SA_NAME --serviceaccount=calico-system:$SA_NAME --clusterrole=non-cluster-host-read-only
```
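To sanity-check the result before copying it to the host, you can point kubectl at the new kubeconfig (the file name here is illustrative):

```bash
# Smoke test: the /version endpoint only requires a valid credential
kubectl --kubeconfig=./my-host.kubeconfig version
# Optionally, list what the bound role actually allows
kubectl --kubeconfig=./my-host.kubeconfig auth can-i --list
```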
We include examples for systemd, but the commands can be applied to other init daemons such as upstart.
#### Step 2: Download and extract the binary

This step requires Docker, but it can be run from any machine with Docker installed. It doesn't have to be the host you will run it on (e.g., your laptop is fine).
1. Download the cnx-node image.

   ```bash
   docker pull quay.io/tigera/cnx-node:v3.19.4
   ```

2. Confirm that the image has loaded by typing `docker images`.

   ```
   REPOSITORY                TAG       IMAGE ID       CREATED         SIZE
   quay.io/tigera/cnx-node   v3.19.4   e07d59b0eb8a   2 minutes ago   42MB
   ```

3. Create a temporary cnx-node container.

   ```bash
   docker create --name container quay.io/tigera/cnx-node:v3.19.4
   ```

4. Copy the calico-node binary from the container to the local file system.

   ```bash
   docker cp container:/bin/calico-node cnx-node
   ```

5. Delete the temporary container.

   ```bash
   docker rm container
   ```

6. Set the extracted binary file to be executable.

   ```bash
   chmod +x cnx-node
   chown root:root cnx-node
   ```
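As a quick sanity check on a Linux machine, the extracted binary should print its version (assuming the Tigera build keeps the standard calico-node flags):

```bash
./cnx-node -v
```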
#### Step 3: Copy the `calico-node` binary

Copy the binary from Step 2 to the target machine, using any means (`scp`, `ftp`, USB stick, etc.).
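For example, with `scp` (host name and paths are illustrative; `/usr/local/bin/cnx-node` matches the `ExecStart` path used in Step 5):

```bash
scp cnx-node user@my-host:/tmp/cnx-node
ssh user@my-host 'sudo install -o root -g root -m 755 /tmp/cnx-node /usr/local/bin/cnx-node'
```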
#### Step 4: Create an environment file

Use the following guidelines and sample file to define the environment variables for starting Calico on the host. For more help, see the Felix configuration reference.

For the Kubernetes datastore, set the following:

| Variable | Configuration guidance |
|---|---|
| KUBECONFIG | Path to the kubeconfig file used to access the Kubernetes API server |

Sample `EnvironmentFile` (save to `/etc/calico/calico.env`):
```bash
DATASTORE_TYPE=kubernetes
# Path to the kubeconfig created in Step 1
KUBECONFIG=/path/to/kubeconfig
CALICO_NODENAME=""
NO_DEFAULT_POOLS="true"
CALICO_IP=""
CALICO_IP6=""
CALICO_AS=""
CALICO_NETWORKING_BACKEND=bird
```
#### Step 5: Start Felix

There are two ways to start Felix: with a startup script, or by configuring it manually.

##### Startup script

Felix should be started at boot by your init system, and the init system must be configured to restart Felix if it stops. Felix relies on that behavior for certain configuration changes.

If your distribution uses systemd, you could use the following unit file:
```ini
[Unit]
Description=Calico Felix agent
After=syslog.target network.target

[Service]
User=root
EnvironmentFile=/etc/calico/calico.env
ExecStartPre=/usr/bin/mkdir -p /var/run/calico
ExecStart=/usr/local/bin/cnx-node -felix
KillMode=process
Restart=on-failure
LimitNOFILE=32000

[Install]
WantedBy=multi-user.target
```
Or, for upstart:

```
description "Felix (Calico agent)"
author "Project Calico Maintainers <maintainers@projectcalico.org>"

start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]

limit nofile 32000 32000

respawn
respawn limit 5 10

chdir /var/run

pre-start script
  mkdir -p /var/run/calico
  chown root:root /var/run/calico
end script

exec /usr/local/bin/cnx-node -felix
```
**Start Felix**

After you've configured Felix, start it via your init system.

```bash
service calico-felix start
```
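With systemd, the equivalent is to install the unit file and enable it so Felix also starts at boot (the unit name here is illustrative and matches the sample above):

```bash
sudo cp calico-felix.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now calico-felix
```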
##### Manually configure Felix

Configure Felix by creating a file at `/etc/calico/felix.cfg`. See the Felix configuration reference for help with the available settings and their environment-variable equivalents.

Felix tries to detect whether IPv6 is available on your platform, but the detection can fail on older (or more unusual) systems. If Felix exits soon after startup with `ipset` or `iptables` errors, try setting the `Ipv6Support` setting to `false`.

Next, configure Felix to interact with a Kubernetes datastore. You must set the `DatastoreType` setting to `kubernetes`. You must also set the environment variable `CALICO_KUBECONFIG` to point to a valid kubeconfig for your Kubernetes cluster, and `CALICO_NETWORKING_BACKEND` to `none`.

With the Kubernetes datastore, Felix works in policy-only mode. Even though pod networking is disabled on the bare metal host where Felix is running, policy can still be used to secure the host.
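Putting that together, a minimal manual configuration might look like the following sketch (the kubeconfig path is illustrative; `felix.cfg` uses Felix's ini-style `[global]` section):

```bash
sudo mkdir -p /etc/calico
sudo tee /etc/calico/felix.cfg >/dev/null <<'EOF'
[global]
DatastoreType = kubernetes
# Uncomment if Felix exits with ipset/iptables errors on an IPv4-only system
# Ipv6Support = false
EOF

# Environment for the Felix process itself
export CALICO_KUBECONFIG=/etc/calico/kubeconfig
export CALICO_NETWORKING_BACKEND=none
```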
### Container install

This section describes how to run `cnx-node` as a Docker container.

**Additional requirements**

- Verify that Docker is installed.
- Configure the container to start at boot time. The `cnx-node` container should be started at boot by your init system, and the init system must be configured to restart it if it stops. Calico Enterprise relies on that behavior for certain configuration changes.
#### Step 1: (Optional) Configure access for the non-cluster host

To run Calico node as a container, it needs a kubeconfig. You can skip this step if you already have a kubeconfig ready to use.
1. Create a service account.

   ```bash
   SA_NAME=my-host
   kubectl create serviceaccount $SA_NAME -n calico-system -o yaml
   ```
2. Create a secret for the service account.

   > **Note:** This step is needed only if your Kubernetes cluster is version v1.24 or above. Prior to Kubernetes v1.24, this secret is created automatically.

   ```bash
   kubectl apply -f - <<EOF
   apiVersion: v1
   kind: Secret
   type: kubernetes.io/service-account-token
   metadata:
     name: $SA_NAME
     namespace: calico-system
     annotations:
       kubernetes.io/service-account.name: $SA_NAME
   EOF
   ```
3. Obtain the token for the secret associated with your host.

   For Kubernetes v1.24+:

   ```bash
   kubectl describe secret $SA_NAME -n calico-system
   ```

   For Kubernetes clusters prior to v1.24:

   ```bash
   kubectl describe secret -n calico-system $(kubectl get serviceaccount -n calico-system $SA_NAME -o=jsonpath="{.secrets[0].name}")
   ```
4. Use a text editor to create a kubeconfig file.

   ```yaml
   apiVersion: v1
   kind: Config
   users:
     - name: my-host
       user:
         token: <token from previous step>
   clusters:
     - cluster:
         certificate-authority-data: <your cluster certificate>
         server: <your cluster server>
       name: <your cluster name>
   contexts:
     - context:
         cluster: <your cluster name>
         user: my-host
       name: my-host
   current-context: my-host
   ```

   Take the cluster information from an already existing kubeconfig. Note that the context's `cluster` field must match the cluster's `name`, and `current-context` must name the context defined above.
Run the following two commands to create a cluster role with read-only access and a corresponding cluster role binding:

```bash
kubectl apply -f https://downloads.tigera.io/ee/v3.19.4/manifests/non-cluster-host-clusterrole.yaml
kubectl create clusterrolebinding $SA_NAME --serviceaccount=calico-system:$SA_NAME --clusterrole=non-cluster-host-read-only
```
We include examples for systemd, but the commands can be applied to other init daemons such as upstart.
#### Step 2: Create an environment file

Use the following guidelines and sample file to define the environment variables for starting Calico on the host. For more help, see the Felix configuration reference.

For the Kubernetes datastore, set the following:

| Variable | Configuration guidance |
|---|---|
| KUBECONFIG | Path to the kubeconfig file used to access the Kubernetes API server |

Sample `EnvironmentFile` (save to `/etc/calico/calico.env`):
```bash
DATASTORE_TYPE=kubernetes
# Path to the kubeconfig created in Step 1
KUBECONFIG=/path/to/kubeconfig
CALICO_NODENAME=""
NO_DEFAULT_POOLS="true"
CALICO_IP=""
CALICO_IP6=""
CALICO_AS=""
CALICO_NETWORKING_BACKEND=bird
```
#### Step 3: Configure the init system

Use an init daemon (like systemd or upstart) to start the `cnx-node` image as a service using the `EnvironmentFile` values.

Sample systemd service file (`calico-node.service`):
```ini
[Unit]
Description=calico-node
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/calico/calico.env
ExecStartPre=-/usr/bin/docker rm -f calico-node
ExecStart=/usr/bin/docker run --net=host --privileged \
  --name=calico-node \
  -e NODENAME=${CALICO_NODENAME} \
  -e IP=${CALICO_IP} \
  -e IP6=${CALICO_IP6} \
  -e CALICO_NETWORKING_BACKEND=${CALICO_NETWORKING_BACKEND} \
  -e AS=${CALICO_AS} \
  -e NO_DEFAULT_POOLS=${NO_DEFAULT_POOLS} \
  -e DATASTORE_TYPE=${DATASTORE_TYPE} \
  -e KUBECONFIG=${KUBECONFIG} \
  -v /var/log/calico:/var/log/calico \
  -v /var/lib/calico:/var/lib/calico \
  -v /var/run/calico:/var/run/calico \
  -v /run/docker/plugins:/run/docker/plugins \
  -v /lib/modules:/lib/modules \
  -v /etc/pki:/pki \
  quay.io/tigera/cnx-node:v3.19.4 /bin/calico-node -felix
ExecStop=-/usr/bin/docker stop calico-node
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
```
Upon start, the systemd service:

- Confirms Docker is installed under the `[Unit]` section
- Gets environment variables from the environment file above
- Removes the existing `cnx-node` container (if it exists)
- Starts the `cnx-node` container

The service also stops the `cnx-node` container when the service is stopped.
> **Note:** Depending on how you've installed Docker, the name of the Docker service under the `[Unit]` section may be different (such as `docker-engine.service`). Be sure to check this before starting the service.
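Once the unit file is in place, installing and starting it looks like this (unit and service names follow the sample above):

```bash
sudo cp calico-node.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now calico-node
systemctl status calico-node
```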
## Configure hosts to communicate with your Kubernetes cluster

When using Calico Enterprise in network policy-only mode, you must ensure that the non-cluster host can communicate directly with your Kubernetes cluster. Here are some vendor-specific tips:

**AWS**

- For hosts to communicate with your Kubernetes cluster, the host must be in the same VPC as the nodes in your Kubernetes cluster, and the cluster must use the AWS VPC CNI plugin (the default in EKS).
- The Kubernetes cluster security group needs to allow traffic from your host endpoint. Make sure an inbound rule is set so that traffic from your host endpoint node is allowed (see the example after this list).
- For a non-cluster host to communicate with an EKS cluster, the correct IAM roles must be configured.
- You also need to provide authentication to your Kubernetes cluster using aws-iam-authenticator and the AWS CLI.
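For example, adding an inbound rule with the AWS CLI might look like this (the security group ID and host IP are placeholders; scope the protocol and ports to what you actually need):

```bash
# Allow the non-cluster host (203.0.113.10, a placeholder) to reach the
# cluster security group (sg-0123456789abcdef0, a placeholder) on TCP 443
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 203.0.113.10/32
```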
**GKE**

For hosts to communicate with your Kubernetes cluster directly, you must make the host directly reachable/routable; this is not set up by default with VPC-native network routing.