Configure honeypods

Big picture

Configure honeypods in your clusters and get alerts that indicate resources may be compromised.


Based on the well-known cybersecurity method of “honeypots”, Calico Cloud honeypods are used to detect suspicious activity within a Kubernetes cluster. The feature enables you to deploy decoys disguised as sensitive assets (called honeypods) at different locations in your Kubernetes cluster. Because the decoys serve no legitimate purpose, any attempt by a resource to communicate with a honeypod can be considered suspicious and an indication that the cluster may be compromised.

Calico Cloud honeypods can be used to detect attacks such as:

  • Data exfiltration
  • Resource enumeration
  • Privilege escalation
  • Denial of service
  • Vulnerability exploitation attempts


This how-to guide uses the following Calico Cloud features:

  • GlobalAlerts with Honeypods


Honeypod implementation

Honeypods can be configured on a per-cluster basis using "template" honeypod manifests that are easily customizable. Any alerts triggered are displayed in the Alerts tab in Calico Cloud Manager UI. The Honeypod Dashboard in Kibana provides an easy way to monitor and analyze traffic reaching the honeypods.

How To

Configure namespace and RBAC for honeypods

Apply the following manifest to create a namespace and RBAC for the honeypods:

kubectl create -f

Copy tigera-pull-secret into the tigera-internal namespace:

kubectl get secret tigera-pull-secret --namespace=calico-system -o yaml | sed 's/namespace: .*/namespace: tigera-internal/' | kubectl apply -f -
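The command above copies the pull secret by rewriting its namespace field with sed before re-applying it. The rewrite itself can be sketched without a cluster; the YAML below is an illustrative stand-in, not the real pull secret:

```shell
# Illustrative only: show how the sed expression rewrites the secret's
# namespace field. The YAML here is a stand-in, not the real pull secret.
cat <<'EOF' | sed 's/namespace: .*/namespace: tigera-internal/'
apiVersion: v1
kind: Secret
metadata:
  name: tigera-pull-secret
  namespace: calico-system
type: kubernetes.io/dockerconfigjson
EOF
```

The output is identical except that `namespace: calico-system` becomes `namespace: tigera-internal`, which is what `kubectl apply -f -` then creates in the target namespace.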

Deploy honeypods in clusters

Use one of the following sample honeypod manifests, or customize them for your implementation. Each image includes a minimal container that runs or mimics a running application. The images provided have been hardened with built-in protections to reduce the risk of them being compromised.


When modifying the provided honeypod manifests, be sure to update the globalalert section in the manifest to match your changes. Ensure the alert name keeps the prefix honeypod, for example, honeypod.port.scan.
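For example, a renamed alert for a customized decoy might look like the following sketch. The exact GlobalAlert fields (query, severity, and so on) should be taken from the template manifest you are customizing; the alert name and query below are illustrative assumptions:

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalAlert
metadata:
  # Keep the "honeypod" prefix so the alert is recognized as a honeypod alert.
  name: honeypod.my.custom.alert
spec:
  description: "Traffic to a customized honeypod was detected"
  severity: 100
  # Update the query to match your changes, e.g. the namespace the decoy runs in.
  dataSet: flows
  query: dest_namespace="tigera-internal"
```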

  • IP Enumeration

    Expose an empty pod that can only be reached via PodIP; this allows you to see when the attacker is probing the pod network:

kubectl apply -f
  • Expose an nginx service

    Expose an nginx service that serves a generic page. The pod can be discovered via ClusterIP or DNS lookup. An unreachable service, tigera-dashboard-internal-service, is created to entice the attacker to find and reach tigera-dashboard-internal-debug:

kubectl apply -f
  • Vulnerable Service (MySQL)

    Expose a SQL service that contains an empty database with easy access. The pod can be discovered via ClusterIP or DNS lookup:

kubectl apply -f

Verify honeypods deployment

To verify the installation, ensure that honeypods are running within the tigera-internal namespace:

kubectl get pods -n tigera-internal
NAME                                         READY   STATUS    RESTARTS   AGE
tigera-internal-app-28c85                    1/1     Running   0          2m19s
tigera-internal-app-8c5bt                    1/1     Running   0          2m19s
tigera-internal-app-l64nz                    1/1     Running   0          2m19s
tigera-internal-app-qc7gv                    1/1     Running   0          2m19s
tigera-internal-dashboard-6df998578c-mtmqr   1/1     Running   0          2m15s
tigera-internal-db-5c57bd5987-k5ksj          1/1     Running   0          2m10s

And verify that global alerts are set for honeypods:

kubectl get globalalerts
NAME                 CREATED AT
honeypod.fake.svc    2020-10-22T03:44:36Z
honeypod.ip.enum     2020-10-22T03:44:31Z
honeypod.port.scan   2020-10-22T03:44:31Z
honeypod.vuln.svc    2020-10-22T03:44:40Z

As an example, to trigger an alert for honeypod.ip.enum, first get the pod IP for one of the honeypods:

kubectl get pod tigera-internal-app-28c85 -n tigera-internal -ojsonpath='{.status.podIP}'

Then, run a busybox container that pings the honeypod IP:

kubectl run --restart=Never --image busybox ping-runner -- ping -c1 <honeypod IP>

If the ICMP request reaches the honeypod, an alert will be generated within 5 minutes.

After you have verified that the honeypods are installed and working, a best practice is to remove the pull secret from the namespace:

kubectl delete secret tigera-pull-secret -n tigera-internal

Additional resources