The Calico datastore
Calico stores data about the operational and configuration state of your cluster in a central datastore. If the datastore is unavailable, your Calico network continues operating, but it cannot be updated (no new pods can be networked, no policy changes can be applied, etc.).
Calico has two datastore drivers you can choose from
- etcd - for direct connection to an etcd cluster
- Kubernetes - for connection to a Kubernetes API server
Using Kubernetes as the datastore
This guide uses the Kubernetes API datastore driver. The advantages of this driver when using Calico on Kubernetes are
- Doesn't require an extra datastore, so is simpler to manage
- You can use Kubernetes RBAC to control access to Calico resources (a sample role follows this list)
- You can use Kubernetes audit logging to generate audit logs of changes to Calico resources
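As an illustration of the RBAC point above, a cluster role like the one below could grant read-only access to Calico IP pools stored as custom resources. This is a hedged sketch: the role name is a placeholder, and nothing in this guide requires you to create it.

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # Hypothetical name, for illustration only
  name: calico-ippool-reader
rules:
  - apiGroups: ["crd.projectcalico.org"]
    resources: ["ippools"]
    verbs: ["get", "list", "watch"]
EOF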
For completeness, the advantages of the etcd driver are
- Allows you to run Calico on non-Kubernetes platforms (e.g. OpenStack)
- Allows separation of concerns between Kubernetes and Calico resources, for example allowing you to scale the datastores independently
- Allows you to run a Calico cluster that contains more than just a single Kubernetes cluster, for example, bare metal servers with Calico host protection interworking with a Kubernetes cluster; or multiple Kubernetes clusters.
Custom Resources
When using the Kubernetes API datastore driver, most Calico resources are stored as Kubernetes custom resources.
A few Calico resources are not stored as custom resources and instead are backed by corresponding native Kubernetes resources. For example, workload endpoints are Kubernetes pods.
To use Kubernetes as the Calico datastore, we need to define the custom resources Calico uses.
Download the list of Calico custom resource definitions and open it in a file editor to examine it.
wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/crds.yaml
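If you just want a quick overview of what the file defines before opening it, the grep commands below are one way to do that (assuming standard grep; the exact names and count depend on your Calico version).

# Count the CustomResourceDefinition documents in the manifest
grep -c "kind: CustomResourceDefinition" crds.yaml

# List the CRD names (for example, ippools.crd.projectcalico.org)
grep "crd.projectcalico.org" crds.yaml | grep "name:"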
Create the custom resource definitions in Kubernetes.
kubectl apply -f crds.yaml
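To confirm the definitions were registered, the following standard kubectl commands are one way to check (the exact list returned depends on your Calico version).

# List the Calico CRDs that were just created
kubectl get crds | grep projectcalico.org

# Show the API resources now served under the Calico CRD group
kubectl api-resources --api-group=crd.projectcalico.org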
calicoctl
To interact directly with the Calico datastore, use the calicoctl client tool.
Install
Download the calicoctl binary to a Linux host with access to Kubernetes.

wget -O calicoctl https://github.com/projectcalico/calico/releases/latest/download/calicoctl-linux-amd64
chmod +x calicoctl
sudo mv calicoctl /usr/local/bin/

Configure calicoctl to access Kubernetes.

export KUBECONFIG=/path/to/your/kubeconfig
export DATASTORE_TYPE=kubernetes

On most systems, kubeconfig is located at ~/.kube/config. You may wish to add the export lines to your ~/.bashrc so they will persist when you log in next time.
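If you would rather not rely on environment variables, calicoctl can also read its datastore settings from a configuration file, by default /etc/calico/calicoctl.cfg. A minimal sketch for the Kubernetes datastore is shown below; adjust the kubeconfig path for your system.

sudo mkdir -p /etc/calico
sudo tee /etc/calico/calicoctl.cfg > /dev/null <<EOF
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: kubernetes
  kubeconfig: /path/to/your/kubeconfig
EOF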
Test
Verify calicoctl can reach your datastore by running
calicoctl get nodes
You should see output similar to
NAME
ip-172-31-37-123
ip-172-31-40-217
ip-172-31-40-30
ip-172-31-42-47
ip-172-31-45-29
Nodes are backed by the Kubernetes node object, so you should see names that match the output of kubectl get nodes.
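To check the correspondence explicitly, this standard kubectl invocation lists just the node names on the Kubernetes side for comparison (the names above are simply this guide's example values).

kubectl get nodes --no-headers -o custom-columns=NAME:.metadata.name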
Try to get an object backed by a custom resource
calicoctl get ippools
You should see an empty result
NAME CIDR SELECTOR
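That is expected at this point: no IP pools have been created yet. For reference only, a resource backed by one of these custom resources is defined with a YAML manifest like any other Calico resource. The sketch below shows what creating a minimal, hypothetical IPPool would look like; the pool name and CIDR are placeholders, and nothing in this section asks you to apply it.

# Illustrative only -- do not run this if you are following the guide step by step
calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: example-pool
spec:
  cidr: 192.168.0.0/16
  natOutgoing: true
EOF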